[
  {
    "path": ".claude/skills/gitnexus/gitnexus-cli/SKILL.md",
    "content": "---\nname: gitnexus-cli\ndescription: \"Use when the user needs to run GitNexus CLI commands like analyze/index a repo, check status, clean the index, generate a wiki, or list indexed repos. Examples: \\\"Index this repo\\\", \\\"Reanalyze the codebase\\\", \\\"Generate a wiki\\\"\"\n---\n\n# GitNexus CLI Commands\n\nAll commands work via `npx` — no global install required.\n\n## Commands\n\n### analyze — Build or refresh the index\n\n```bash\nnpx gitnexus analyze\n```\n\nRun from the project root. This parses all source files, builds the knowledge graph, writes it to `.gitnexus/`, and generates CLAUDE.md / AGENTS.md context files.\n\n| Flag           | Effect                                                           |\n| -------------- | ---------------------------------------------------------------- |\n| `--force`      | Force full re-index even if up to date                           |\n| `--embeddings` | Enable embedding generation for semantic search (off by default) |\n\n**When to run:** First time in a project, after major code changes, or when `gitnexus://repo/{name}/context` reports the index is stale. In Claude Code, a PostToolUse hook runs `analyze` automatically after `git commit` and `git merge`, preserving embeddings if previously generated.\n\n### status — Check index freshness\n\n```bash\nnpx gitnexus status\n```\n\nShows whether the current repo has a GitNexus index, when it was last updated, and symbol/relationship counts. Use this to check if re-indexing is needed.\n\n### clean — Delete the index\n\n```bash\nnpx gitnexus clean\n```\n\nDeletes the `.gitnexus/` directory and unregisters the repo from the global registry. 
Use it before re-indexing when the index is corrupt, or after removing GitNexus from a project.\n\n| Flag      | Effect                                            |\n| --------- | ------------------------------------------------- |\n| `--force` | Skip confirmation prompt                          |\n| `--all`   | Clean all indexed repos, not just the current one |\n\n### wiki — Generate documentation from the graph\n\n```bash\nnpx gitnexus wiki\n```\n\nGenerates repository documentation from the knowledge graph using an LLM. Requires an API key (saved to `~/.gitnexus/config.json` on first use).\n\n| Flag                | Effect                                    |\n| ------------------- | ----------------------------------------- |\n| `--force`           | Force full regeneration                   |\n| `--model <model>`   | LLM model (default: minimax/minimax-m2.5) |\n| `--base-url <url>`  | LLM API base URL                          |\n| `--api-key <key>`   | LLM API key                               |\n| `--concurrency <n>` | Parallel LLM calls (default: 3)           |\n| `--gist`            | Publish wiki as a public GitHub Gist      |\n\n### list — Show all indexed repos\n\n```bash\nnpx gitnexus list\n```\n\nLists all repositories registered in `~/.gitnexus/registry.json`. The MCP `list_repos` tool provides the same information.\n\n## After Indexing\n\n1. **Read `gitnexus://repo/{name}/context`** to verify the index loaded\n2. Use the other GitNexus skills (`exploring`, `debugging`, `impact-analysis`, `refactoring`) for your task\n\n## Troubleshooting\n\n- **\"Not inside a git repository\"**: Run from a directory inside a git repo\n- **Index is stale after re-analyzing**: Restart Claude Code to reload the MCP server\n- **Embeddings slow**: Omit `--embeddings` (it's off by default) or set `OPENAI_API_KEY` for faster API-based embedding\n"
  },
  {
    "path": ".claude/skills/gitnexus/gitnexus-debugging/SKILL.md",
    "content": "---\nname: gitnexus-debugging\ndescription: \"Use when the user is debugging a bug, tracing an error, or asking why something fails. Examples: \\\"Why is X failing?\\\", \\\"Where does this error come from?\\\", \\\"Trace this bug\\\"\"\n---\n\n# Debugging with GitNexus\n\n## When to Use\n\n- \"Why is this function failing?\"\n- \"Trace where this error comes from\"\n- \"Who calls this method?\"\n- \"This endpoint returns 500\"\n- Investigating bugs, errors, or unexpected behavior\n\n## Workflow\n\n```\n1. gitnexus_query({query: \"<error or symptom>\"})            → Find related execution flows\n2. gitnexus_context({name: \"<suspect>\"})                    → See callers/callees/processes\n3. READ gitnexus://repo/{name}/process/{name}                → Trace execution flow\n4. gitnexus_cypher({query: \"MATCH path...\"})                 → Custom traces if needed\n```\n\n> If \"Index is stale\" → run `npx gitnexus analyze` in terminal.\n\n## Checklist\n\n```\n- [ ] Understand the symptom (error message, unexpected behavior)\n- [ ] gitnexus_query for error text or related code\n- [ ] Identify the suspect function from returned processes\n- [ ] gitnexus_context to see callers and callees\n- [ ] Trace execution flow via process resource if applicable\n- [ ] gitnexus_cypher for custom call chain traces if needed\n- [ ] Read source files to confirm root cause\n```\n\n## Debugging Patterns\n\n| Symptom              | GitNexus Approach                                          |\n| -------------------- | ---------------------------------------------------------- |\n| Error message        | `gitnexus_query` for error text → `context` on throw sites |\n| Wrong return value   | `context` on the function → trace callees for data flow    |\n| Intermittent failure | `context` → look for external calls, async deps            |\n| Performance issue    | `context` → find symbols with many callers (hot paths)     |\n| Recent regression    | `detect_changes` to see what 
your changes affect           |\n\n## Tools\n\n**gitnexus_query** — find code related to the error:\n\n```\ngitnexus_query({query: \"payment validation error\"})\n→ Processes: CheckoutFlow, ErrorHandling\n→ Symbols: validatePayment, handlePaymentError, PaymentException\n```\n\n**gitnexus_context** — full context for a suspect:\n\n```\ngitnexus_context({name: \"validatePayment\"})\n→ Incoming calls: processCheckout, webhookHandler\n→ Outgoing calls: verifyCard, fetchRates (external API!)\n→ Processes: CheckoutFlow (step 3/7)\n```\n\n**gitnexus_cypher** — custom call chain traces:\n\n```cypher\nMATCH path = (a)-[:CodeRelation*1..2 {type: 'CALLS'}]->(b:Function {name: \"validatePayment\"})\nRETURN [n IN nodes(path) | n.name] AS chain\n```\n\n## Example: \"Payment endpoint returns 500 intermittently\"\n\n```\n1. gitnexus_query({query: \"payment error handling\"})\n   → Processes: CheckoutFlow, ErrorHandling\n   → Symbols: validatePayment, handlePaymentError\n\n2. gitnexus_context({name: \"validatePayment\"})\n   → Outgoing calls: verifyCard, fetchRates (external API!)\n\n3. READ gitnexus://repo/my-app/process/CheckoutFlow\n   → Step 3: validatePayment → calls fetchRates (external)\n\n4. Root cause: fetchRates calls external API without proper timeout\n```\n"
  },
  {
    "path": ".claude/skills/gitnexus/gitnexus-exploring/SKILL.md",
    "content": "---\nname: gitnexus-exploring\ndescription: \"Use when the user asks how code works, wants to understand architecture, trace execution flows, or explore unfamiliar parts of the codebase. Examples: \\\"How does X work?\\\", \\\"What calls this function?\\\", \\\"Show me the auth flow\\\"\"\n---\n\n# Exploring Codebases with GitNexus\n\n## When to Use\n\n- \"How does authentication work?\"\n- \"What's the project structure?\"\n- \"Show me the main components\"\n- \"Where is the database logic?\"\n- Understanding code you haven't seen before\n\n## Workflow\n\n```\n1. READ gitnexus://repos                          → Discover indexed repos\n2. READ gitnexus://repo/{name}/context             → Codebase overview, check staleness\n3. gitnexus_query({query: \"<what you want to understand>\"})  → Find related execution flows\n4. gitnexus_context({name: \"<symbol>\"})            → Deep dive on specific symbol\n5. READ gitnexus://repo/{name}/process/{name}      → Trace full execution flow\n```\n\n> If step 2 says \"Index is stale\" → run `npx gitnexus analyze` in terminal.\n\n## Checklist\n\n```\n- [ ] READ gitnexus://repo/{name}/context\n- [ ] gitnexus_query for the concept you want to understand\n- [ ] Review returned processes (execution flows)\n- [ ] gitnexus_context on key symbols for callers/callees\n- [ ] READ process resource for full execution traces\n- [ ] Read source files for implementation details\n```\n\n## Resources\n\n| Resource                                | What you get                                            |\n| --------------------------------------- | ------------------------------------------------------- |\n| `gitnexus://repo/{name}/context`        | Stats, staleness warning (~150 tokens)                  |\n| `gitnexus://repo/{name}/clusters`       | All functional areas with cohesion scores (~300 tokens) |\n| `gitnexus://repo/{name}/cluster/{name}` | Area members with file paths (~500 tokens)              |\n| 
`gitnexus://repo/{name}/process/{name}` | Step-by-step execution trace (~200 tokens)              |\n\n## Tools\n\n**gitnexus_query** — find execution flows related to a concept:\n\n```\ngitnexus_query({query: \"payment processing\"})\n→ Processes: CheckoutFlow, RefundFlow, WebhookHandler\n→ Symbols grouped by flow with file locations\n```\n\n**gitnexus_context** — 360-degree view of a symbol:\n\n```\ngitnexus_context({name: \"validateUser\"})\n→ Incoming calls: loginHandler, apiMiddleware\n→ Outgoing calls: checkToken, getUserById\n→ Processes: LoginFlow (step 2/5), TokenRefresh (step 1/3)\n```\n\n## Example: \"How does payment processing work?\"\n\n```\n1. READ gitnexus://repo/my-app/context       → 918 symbols, 45 processes\n2. gitnexus_query({query: \"payment processing\"})\n   → CheckoutFlow: processPayment → validateCard → chargeStripe\n   → RefundFlow: initiateRefund → calculateRefund → processRefund\n3. gitnexus_context({name: \"processPayment\"})\n   → Incoming: checkoutHandler, webhookHandler\n   → Outgoing: validateCard, chargeStripe, saveTransaction\n4. Read src/payments/processor.ts for implementation details\n```\n"
  },
  {
    "path": ".claude/skills/gitnexus/gitnexus-guide/SKILL.md",
    "content": "---\nname: gitnexus-guide\ndescription: \"Use when the user asks about GitNexus itself — available tools, how to query the knowledge graph, MCP resources, graph schema, or workflow reference. Examples: \\\"What GitNexus tools are available?\\\", \\\"How do I use GitNexus?\\\"\"\n---\n\n# GitNexus Guide\n\nQuick reference for all GitNexus MCP tools, resources, and the knowledge graph schema.\n\n## Always Start Here\n\nFor any task involving code understanding, debugging, impact analysis, or refactoring:\n\n1. **Read `gitnexus://repo/{name}/context`** — codebase overview + check index freshness\n2. **Match your task to a skill below** and **read that skill file**\n3. **Follow the skill's workflow and checklist**\n\n> If step 1 warns the index is stale, run `npx gitnexus analyze` in the terminal first.\n\n## Skills\n\n| Task                                         | Skill to read       |\n| -------------------------------------------- | ------------------- |\n| Understand architecture / \"How does X work?\" | `gitnexus-exploring`         |\n| Blast radius / \"What breaks if I change X?\"  | `gitnexus-impact-analysis`   |\n| Trace bugs / \"Why is X failing?\"             | `gitnexus-debugging`         |\n| Rename / extract / split / refactor          | `gitnexus-refactoring`       |\n| Tools, resources, schema reference           | `gitnexus-guide` (this file) |\n| Index, status, clean, wiki CLI commands      | `gitnexus-cli`               |\n\n## Tools Reference\n\n| Tool             | What it gives you                                                        |\n| ---------------- | ------------------------------------------------------------------------ |\n| `query`          | Process-grouped code intelligence — execution flows related to a concept |\n| `context`        | 360-degree symbol view — categorized refs, processes it participates in  |\n| `impact`         | Symbol blast radius — what breaks at depth 1/2/3 with confidence         |\n| 
`detect_changes` | Git-diff impact — what your current changes affect                       |\n| `rename`         | Multi-file coordinated rename with confidence-tagged edits               |\n| `cypher`         | Raw graph queries (read `gitnexus://repo/{name}/schema` first)           |\n| `list_repos`     | Discover indexed repos                                                   |\n\n## Resources Reference\n\nLightweight reads (~100-500 tokens) for navigation:\n\n| Resource                                       | Content                                   |\n| ---------------------------------------------- | ----------------------------------------- |\n| `gitnexus://repo/{name}/context`               | Stats, staleness check                    |\n| `gitnexus://repo/{name}/clusters`              | All functional areas with cohesion scores |\n| `gitnexus://repo/{name}/cluster/{clusterName}` | Area members                              |\n| `gitnexus://repo/{name}/processes`             | All execution flows                       |\n| `gitnexus://repo/{name}/process/{processName}` | Step-by-step trace                        |\n| `gitnexus://repo/{name}/schema`                | Graph schema for Cypher                   |\n\n## Graph Schema\n\n**Nodes:** File, Function, Class, Interface, Method, Community, Process\n**Edges (via CodeRelation.type):** CALLS, IMPORTS, EXTENDS, IMPLEMENTS, DEFINES, MEMBER_OF, STEP_IN_PROCESS\n\n```cypher\nMATCH (caller)-[:CodeRelation {type: 'CALLS'}]->(f:Function {name: \"myFunc\"})\nRETURN caller.name, caller.filePath\n```\n"
  },
  {
    "path": ".claude/skills/gitnexus/gitnexus-impact-analysis/SKILL.md",
    "content": "---\nname: gitnexus-impact-analysis\ndescription: \"Use when the user wants to know what will break if they change something, or needs safety analysis before editing code. Examples: \\\"Is it safe to change X?\\\", \\\"What depends on this?\\\", \\\"What will break?\\\"\"\n---\n\n# Impact Analysis with GitNexus\n\n## When to Use\n\n- \"Is it safe to change this function?\"\n- \"What will break if I modify X?\"\n- \"Show me the blast radius\"\n- \"Who uses this code?\"\n- Before making non-trivial code changes\n- Before committing — to understand what your changes affect\n\n## Workflow\n\n```\n1. gitnexus_impact({target: \"X\", direction: \"upstream\"})  → What depends on this\n2. READ gitnexus://repo/{name}/processes                   → Check affected execution flows\n3. gitnexus_detect_changes()                               → Map current git changes to affected flows\n4. Assess risk and report to user\n```\n\n> If \"Index is stale\" → run `npx gitnexus analyze` in terminal.\n\n## Checklist\n\n```\n- [ ] gitnexus_impact({target, direction: \"upstream\"}) to find dependents\n- [ ] Review d=1 items first (these WILL BREAK)\n- [ ] Check high-confidence (>0.8) dependencies\n- [ ] READ processes to check affected execution flows\n- [ ] gitnexus_detect_changes() for pre-commit check\n- [ ] Assess risk level and report to user\n```\n\n## Understanding Output\n\n| Depth | Risk Level       | Meaning                  |\n| ----- | ---------------- | ------------------------ |\n| d=1   | **WILL BREAK**   | Direct callers/importers |\n| d=2   | LIKELY AFFECTED  | Indirect dependencies    |\n| d=3   | MAY NEED TESTING | Transitive effects       |\n\n## Risk Assessment\n\n| Affected                       | Risk     |\n| ------------------------------ | -------- |\n| <5 symbols, few processes      | LOW      |\n| 5-15 symbols, 2-5 processes    | MEDIUM   |\n| >15 symbols or many processes  | HIGH     |\n| Critical path (auth, payments) | CRITICAL |\n\n## 
Tools\n\n**gitnexus_impact** — the primary tool for symbol blast radius:\n\n```\ngitnexus_impact({\n  target: \"validateUser\",\n  direction: \"upstream\",\n  minConfidence: 0.8,\n  maxDepth: 3\n})\n\n→ d=1 (WILL BREAK):\n  - loginHandler (src/auth/login.ts:42) [CALLS, 100%]\n  - apiMiddleware (src/api/middleware.ts:15) [CALLS, 100%]\n\n→ d=2 (LIKELY AFFECTED):\n  - authRouter (src/routes/auth.ts:22) [CALLS, 95%]\n```\n\n**gitnexus_detect_changes** — git-diff based impact analysis:\n\n```\ngitnexus_detect_changes({scope: \"staged\"})\n\n→ Changed: 5 symbols in 3 files\n→ Affected: LoginFlow, TokenRefresh, APIMiddlewarePipeline\n→ Risk: MEDIUM\n```\n\n## Example: \"What breaks if I change validateUser?\"\n\n```\n1. gitnexus_impact({target: \"validateUser\", direction: \"upstream\"})\n   → d=1: loginHandler, apiMiddleware (WILL BREAK)\n   → d=2: authRouter, sessionManager (LIKELY AFFECTED)\n\n2. READ gitnexus://repo/my-app/processes\n   → LoginFlow and TokenRefresh touch validateUser\n\n3. Risk: 2 direct callers, 2 processes = MEDIUM\n```\n"
  },
  {
    "path": ".claude/skills/gitnexus/gitnexus-pr-review/SKILL.md",
    "content": "---\nname: gitnexus-pr-review\ndescription: \"Use when the user wants to review a pull request, understand what a PR changes, assess risk of merging, or check for missing test coverage. Examples: \\\"Review this PR\\\", \\\"What does PR #42 change?\\\", \\\"Is this PR safe to merge?\\\"\"\n---\n\n# PR Review with GitNexus\n\n## When to Use\n\n- \"Review this PR\"\n- \"What does PR #42 change?\"\n- \"Is this safe to merge?\"\n- \"What's the blast radius of this PR?\"\n- \"Are there missing tests for this PR?\"\n- Reviewing someone else's code changes before merge\n\n## Workflow\n\n```\n1. gh pr diff <number>                                    → Get the raw diff\n2. gitnexus_detect_changes({scope: \"compare\", base_ref: \"main\"})  → Map diff to affected flows\n3. For each changed symbol:\n   gitnexus_impact({target: \"<symbol>\", direction: \"upstream\"})    → Blast radius per change\n4. gitnexus_context({name: \"<key symbol>\"})               → Understand callers/callees\n5. READ gitnexus://repo/{name}/processes                   → Check affected execution flows\n6. Summarize findings with risk assessment\n```\n\n> If \"Index is stale\" → run `npx gitnexus analyze` in terminal before reviewing.\n\n## Checklist\n\n```\n- [ ] Fetch PR diff (gh pr diff or git diff base...head)\n- [ ] gitnexus_detect_changes to map changes to affected execution flows\n- [ ] gitnexus_impact on each non-trivial changed symbol\n- [ ] Review d=1 items (WILL BREAK) — are callers updated?\n- [ ] gitnexus_context on key changed symbols to understand full picture\n- [ ] Check if affected processes have test coverage\n- [ ] Assess overall risk level\n- [ ] Write review summary with findings\n```\n\n## Review Dimensions\n\n| Dimension | How GitNexus Helps |\n| --- | --- |\n| **Correctness** | `context` shows callers — are they all compatible with the change? |\n| **Blast radius** | `impact` shows d=1/d=2/d=3 dependents — anything missed? 
|\n| **Completeness** | `detect_changes` shows all affected flows — are they all handled? |\n| **Test coverage** | `impact({includeTests: true})` shows which tests touch changed code |\n| **Breaking changes** | d=1 upstream items that aren't updated in the PR = potential breakage |\n\n## Risk Assessment\n\n| Signal | Risk |\n| --- | --- |\n| Changes touch <3 symbols, 0-1 processes | LOW |\n| Changes touch 3-10 symbols, 2-5 processes | MEDIUM |\n| Changes touch >10 symbols or many processes | HIGH |\n| Changes touch auth, payments, or data integrity code | CRITICAL |\n| d=1 callers exist outside the PR diff | Potential breakage — flag it |\n\n## Tools\n\n**gitnexus_detect_changes** — map PR diff to affected execution flows:\n\n```\ngitnexus_detect_changes({scope: \"compare\", base_ref: \"main\"})\n\n→ Changed: 8 symbols in 4 files\n→ Affected processes: CheckoutFlow, RefundFlow, WebhookHandler\n→ Risk: MEDIUM\n```\n\n**gitnexus_impact** — blast radius per changed symbol:\n\n```\ngitnexus_impact({target: \"validatePayment\", direction: \"upstream\"})\n\n→ d=1 (WILL BREAK):\n  - processCheckout (src/checkout.ts:42) [CALLS, 100%]\n  - webhookHandler (src/webhooks.ts:15) [CALLS, 100%]\n\n→ d=2 (LIKELY AFFECTED):\n  - checkoutRouter (src/routes/checkout.ts:22) [CALLS, 95%]\n```\n\n**gitnexus_impact with tests** — check test coverage:\n\n```\ngitnexus_impact({target: \"validatePayment\", direction: \"upstream\", includeTests: true})\n\n→ Tests that cover this symbol:\n  - validatePayment.test.ts [direct]\n  - checkout.integration.test.ts [via processCheckout]\n```\n\n**gitnexus_context** — understand a changed symbol's role:\n\n```\ngitnexus_context({name: \"validatePayment\"})\n\n→ Incoming calls: processCheckout, webhookHandler\n→ Outgoing calls: verifyCard, fetchRates\n→ Processes: CheckoutFlow (step 3/7), RefundFlow (step 1/5)\n```\n\n## Example: \"Review PR #42\"\n\n```\n1. 
gh pr diff 42 > /tmp/pr42.diff\n   → 4 files changed: payments.ts, checkout.ts, types.ts, utils.ts\n\n2. gitnexus_detect_changes({scope: \"compare\", base_ref: \"main\"})\n   → Changed symbols: validatePayment, PaymentInput, formatAmount\n   → Affected processes: CheckoutFlow, RefundFlow\n   → Risk: MEDIUM\n\n3. gitnexus_impact({target: \"validatePayment\", direction: \"upstream\"})\n   → d=1: processCheckout, webhookHandler (WILL BREAK)\n   → webhookHandler is NOT in the PR diff — potential breakage!\n\n4. gitnexus_impact({target: \"PaymentInput\", direction: \"upstream\"})\n   → d=1: validatePayment (in PR), createPayment (NOT in PR)\n   → createPayment uses the old PaymentInput shape — breaking change!\n\n5. gitnexus_context({name: \"formatAmount\"})\n   → Called by 12 functions — but change is backwards-compatible (added optional param)\n\n6. Review summary:\n   - MEDIUM risk — 3 changed symbols affect 2 execution flows\n   - BUG: webhookHandler calls validatePayment but isn't updated for new signature\n   - BUG: createPayment depends on PaymentInput type which changed\n   - OK: formatAmount change is backwards-compatible\n   - Tests: checkout.test.ts covers processCheckout path, but no webhook test\n```\n\n## Review Output Format\n\nStructure your review as:\n\n```markdown\n## PR Review: <title>\n\n**Risk: LOW / MEDIUM / HIGH / CRITICAL**\n\n### Changes Summary\n- <N> symbols changed across <M> files\n- <P> execution flows affected\n\n### Findings\n1. **[severity]** Description of finding\n   - Evidence from GitNexus tools\n   - Affected callers/flows\n\n### Missing Coverage\n- Callers not updated in PR: ...\n- Untested flows: ...\n\n### Recommendation\nAPPROVE / REQUEST CHANGES / NEEDS DISCUSSION\n```\n"
  },
  {
    "path": ".claude/skills/gitnexus/gitnexus-refactoring/SKILL.md",
    "content": "---\nname: gitnexus-refactoring\ndescription: \"Use when the user wants to rename, extract, split, move, or restructure code safely. Examples: \\\"Rename this function\\\", \\\"Extract this into a module\\\", \\\"Refactor this class\\\", \\\"Move this to a separate file\\\"\"\n---\n\n# Refactoring with GitNexus\n\n## When to Use\n\n- \"Rename this function safely\"\n- \"Extract this into a module\"\n- \"Split this service\"\n- \"Move this to a new file\"\n- Any task involving renaming, extracting, splitting, or restructuring code\n\n## Workflow\n\n```\n1. gitnexus_impact({target: \"X\", direction: \"upstream\"})  → Map all dependents\n2. gitnexus_query({query: \"X\"})                            → Find execution flows involving X\n3. gitnexus_context({name: \"X\"})                           → See all incoming/outgoing refs\n4. Plan update order: interfaces → implementations → callers → tests\n```\n\n> If \"Index is stale\" → run `npx gitnexus analyze` in terminal.\n\n## Checklists\n\n### Rename Symbol\n\n```\n- [ ] gitnexus_rename({symbol_name: \"oldName\", new_name: \"newName\", dry_run: true}) — preview all edits\n- [ ] Review graph edits (high confidence) and ast_search edits (review carefully)\n- [ ] If satisfied: gitnexus_rename({..., dry_run: false}) — apply edits\n- [ ] gitnexus_detect_changes() — verify only expected files changed\n- [ ] Run tests for affected processes\n```\n\n### Extract Module\n\n```\n- [ ] gitnexus_context({name: target}) — see all incoming/outgoing refs\n- [ ] gitnexus_impact({target, direction: \"upstream\"}) — find all external callers\n- [ ] Define new module interface\n- [ ] Extract code, update imports\n- [ ] gitnexus_detect_changes() — verify affected scope\n- [ ] Run tests for affected processes\n```\n\n### Split Function/Service\n\n```\n- [ ] gitnexus_context({name: target}) — understand all callees\n- [ ] Group callees by responsibility\n- [ ] gitnexus_impact({target, direction: \"upstream\"}) — map callers to 
update\n- [ ] Create new functions/services\n- [ ] Update callers\n- [ ] gitnexus_detect_changes() — verify affected scope\n- [ ] Run tests for affected processes\n```\n\n## Tools\n\n**gitnexus_rename** — automated multi-file rename:\n\n```\ngitnexus_rename({symbol_name: \"validateUser\", new_name: \"authenticateUser\", dry_run: true})\n→ 12 edits across 8 files\n→ 10 graph edits (high confidence), 2 ast_search edits (review)\n→ Changes: [{file_path, edits: [{line, old_text, new_text, confidence}]}]\n```\n\n**gitnexus_impact** — map all dependents first:\n\n```\ngitnexus_impact({target: \"validateUser\", direction: \"upstream\"})\n→ d=1: loginHandler, apiMiddleware, testUtils\n→ Affected Processes: LoginFlow, TokenRefresh\n```\n\n**gitnexus_detect_changes** — verify your changes after refactoring:\n\n```\ngitnexus_detect_changes({scope: \"all\"})\n→ Changed: 8 files, 12 symbols\n→ Affected processes: LoginFlow, TokenRefresh\n→ Risk: MEDIUM\n```\n\n**gitnexus_cypher** — custom reference queries:\n\n```cypher\nMATCH (caller)-[:CodeRelation {type: 'CALLS'}]->(f:Function {name: \"validateUser\"})\nRETURN caller.name, caller.filePath ORDER BY caller.filePath\n```\n\n## Risk Rules\n\n| Risk Factor         | Mitigation                                |\n| ------------------- | ----------------------------------------- |\n| Many callers (>5)   | Use gitnexus_rename for automated updates |\n| Cross-area refs     | Use detect_changes after to verify scope  |\n| String/dynamic refs | gitnexus_query to find them               |\n| External/public API | Version and deprecate properly            |\n\n## Example: Rename `validateUser` to `authenticateUser`\n\n```\n1. gitnexus_rename({symbol_name: \"validateUser\", new_name: \"authenticateUser\", dry_run: true})\n   → 12 edits: 10 graph (safe), 2 ast_search (review)\n   → Files: validator.ts, login.ts, middleware.ts, config.json...\n\n2. Review ast_search edits (config.json: dynamic reference!)\n\n3. 
gitnexus_rename({symbol_name: \"validateUser\", new_name: \"authenticateUser\", dry_run: false})\n   → Applied 12 edits across 8 files\n\n4. gitnexus_detect_changes({scope: \"all\"})\n   → Affected: LoginFlow, TokenRefresh\n   → Risk: MEDIUM — run tests for these flows\n```\n"
  },
  {
    "path": ".claude-plugin/marketplace.json",
    "content": "{\n  \"name\": \"gitnexus-marketplace\",\n  \"owner\": {\n    \"name\": \"GitNexus\",\n    \"email\": \"nico@gitnexus.dev\"\n  },\n  \"metadata\": {\n    \"description\": \"Code intelligence powered by a knowledge graph — execution flows, blast radius, and semantic search\",\n    \"homepage\": \"https://github.com/nicosxt/gitnexus\"\n  },\n  \"plugins\": [\n    {\n      \"name\": \"gitnexus\",\n      \"version\": \"1.3.3\",\n      \"source\": \"./gitnexus-claude-plugin\",\n      \"description\": \"Code intelligence powered by a knowledge graph. Provides execution flow tracing, blast radius analysis, and augmented search across your codebase.\"\n    }\n  ]\n}\n"
  },
  {
    "path": ".cursorrules",
    "content": "# AI Agent Rules\n\nFollow .gitnexus/RULES.md for all project context and coding guidelines.\n\nThis project uses GitNexus MCP for code intelligence. See .gitnexus/RULES.md for available tools and best practices.\n"
  },
  {
    "path": ".github/FUNDING.yml",
    "content": "# These are supported funding model platforms\n\ngithub: abhigyanpatwari\n"
  },
  {
    "path": ".github/actions/setup-gitnexus/action.yml",
    "content": "name: Setup GitNexus\ndescription: Setup Node.js 20, install dependencies, and optionally build\n\ninputs:\n  build:\n    description: Whether to run npm run build after install\n    required: false\n    default: 'false'\n\nruns:\n  using: composite\n  steps:\n    - uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4\n      with:\n        node-version: 20\n        cache: npm\n        cache-dependency-path: gitnexus/package-lock.json\n\n    - name: Install dependencies\n      run: npm ci\n      shell: bash\n      working-directory: gitnexus\n\n    - name: Build\n      if: ${{ inputs.build == 'true' }}\n      run: npm run build\n      shell: bash\n      working-directory: gitnexus\n"
  },
  {
    "path": ".github/release.yml",
    "content": "changelog:\n  exclude:\n    labels:\n      - chore\n    authors:\n      - dependabot\n      - dependabot[bot]\n  categories:\n    - title: \"\\U0001F6A8 Security\"\n      labels:\n        - security\n    - title: \"\\U0001F4A5 Breaking Changes\"\n      labels:\n        - breaking\n    - title: \"\\U0001F680 Features\"\n      labels:\n        - enhancement\n    - title: \"\\U0001F41B Bug Fixes\"\n      labels:\n        - bug\n    - title: \"\\U0001F3CE\\uFE0F Performance\"\n      labels:\n        - performance\n    - title: \"\\U0001F9EA Tests\"\n      labels:\n        - test\n    - title: \"\\U0001F504 Refactoring\"\n      labels:\n        - refactor\n    - title: \"\\U0001F477 CI/CD\"\n      labels:\n        - ci\n    - title: \"\\U0001F4E6 Dependencies\"\n      labels:\n        - dependencies\n    - title: \"\\U0001F4DD Other Changes\"\n      labels:\n        - \"*\"\n      exclude:\n        labels:\n          - dependencies\n          - ci\n          - test\n          - refactor\n          - chore\n"
  },
  {
    "path": ".github/workflows/ci-quality.yml",
    "content": "name: Quality Checks\n\non:\n  workflow_call:\n\njobs:\n  typecheck:\n    runs-on: ubuntu-latest\n    timeout-minutes: 10\n    steps:\n      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4\n      - uses: ./.github/actions/setup-gitnexus\n      - run: npx tsc --noEmit\n        working-directory: gitnexus\n"
  },
  {
    "path": ".github/workflows/ci-report.yml",
    "content": "name: CI Report\n\non:\n  workflow_run:\n    workflows: ['CI']\n    types: [completed]\n\npermissions:\n  actions: read\n  contents: read\n  pull-requests: write\n\njobs:\n  pr-report:\n    name: PR Report\n    if: >-\n      github.event.workflow_run.event == 'pull_request' &&\n      github.event.workflow_run.conclusion != 'cancelled'\n    runs-on: ubuntu-latest\n    timeout-minutes: 5\n    steps:\n      - name: Download PR metadata\n        uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7\n        with:\n          script: |\n            const fs = require('fs');\n            const path = require('path');\n\n            const artifacts = await github.rest.actions.listWorkflowRunArtifacts({\n              owner: context.repo.owner,\n              repo: context.repo.repo,\n              run_id: ${{ github.event.workflow_run.id }},\n            });\n\n            const meta = artifacts.data.artifacts.find(a => a.name === 'pr-meta');\n            if (!meta) {\n              core.setFailed('pr-meta artifact not found — skipping report');\n              return;\n            }\n\n            const zip = await github.rest.actions.downloadArtifact({\n              owner: context.repo.owner,\n              repo: context.repo.repo,\n              artifact_id: meta.id,\n              archive_format: 'zip',\n            });\n\n            const dest = path.join(process.env.RUNNER_TEMP, 'pr-meta');\n            fs.mkdirSync(dest, { recursive: true });\n            fs.writeFileSync(path.join(dest, 'pr-meta.zip'), Buffer.from(zip.data));\n\n      - name: Extract PR metadata\n        id: meta\n        shell: bash\n        run: |\n          cd \"$RUNNER_TEMP/pr-meta\"\n          unzip -o pr-meta.zip\n\n          PR_NUMBER=$(cat pr-number | tr -d '[:space:]')\n          if ! 
[[ \"$PR_NUMBER\" =~ ^[0-9]+$ ]]; then\n            echo \"::error::Invalid PR number: '$PR_NUMBER'\"\n            exit 1\n          fi\n\n          echo \"pr-number=$PR_NUMBER\"         >> \"$GITHUB_OUTPUT\"\n          echo \"quality=$(cat quality-result | tr -d '[:space:]')\" >> \"$GITHUB_OUTPUT\"\n          echo \"tests=$(cat tests-result | tr -d '[:space:]')\"     >> \"$GITHUB_OUTPUT\"\n\n      - name: Download test reports\n        id: download-test-reports\n        uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7\n        with:\n          script: |\n            const fs = require('fs');\n            const path = require('path');\n\n            const artifacts = await github.rest.actions.listWorkflowRunArtifacts({\n              owner: context.repo.owner,\n              repo: context.repo.repo,\n              run_id: ${{ github.event.workflow_run.id }},\n            });\n\n            const reports = artifacts.data.artifacts.find(a => a.name === 'test-reports');\n            if (!reports) {\n              core.warning('test-reports artifact not found');\n              return;\n            }\n\n            const zip = await github.rest.actions.downloadArtifact({\n              owner: context.repo.owner,\n              repo: context.repo.repo,\n              artifact_id: reports.id,\n              archive_format: 'zip',\n            });\n\n            const dest = path.join(process.env.RUNNER_TEMP, 'test-reports');\n            fs.mkdirSync(dest, { recursive: true });\n            fs.writeFileSync(path.join(dest, 'test-reports.zip'), Buffer.from(zip.data));\n\n      - name: Extract test reports\n        if: steps.download-test-reports.outcome == 'success'\n        shell: bash\n        run: |\n          cd \"$RUNNER_TEMP/test-reports\"\n          unzip -o test-reports.zip || true\n\n      - name: Fetch cross-platform job results\n        id: jobs\n        uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7\n        
with:\n          script: |\n            const jobs = await github.rest.actions.listJobsForWorkflowRun({\n              owner: context.repo.owner,\n              repo: context.repo.repo,\n              run_id: ${{ github.event.workflow_run.id }},\n              per_page: 50,\n            });\n\n            const results = {};\n            for (const job of jobs.data.jobs) {\n              if (job.name.includes('ubuntu')) results.ubuntu = job.conclusion || 'pending';\n              else if (job.name.includes('windows')) results.windows = job.conclusion || 'pending';\n              else if (job.name.includes('macos')) results.macos = job.conclusion || 'pending';\n            }\n            core.setOutput('ubuntu', results.ubuntu || 'unknown');\n            core.setOutput('windows', results.windows || 'unknown');\n            core.setOutput('macos', results.macos || 'unknown');\n\n      - name: Fetch base branch coverage\n        id: base-coverage\n        uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7\n        with:\n          script: |\n            const fs = require('fs');\n            const path = require('path');\n\n            const runs = await github.rest.actions.listWorkflowRuns({\n              owner: context.repo.owner,\n              repo: context.repo.repo,\n              workflow_id: 'ci.yml',\n              branch: 'main',\n              status: 'success',\n              per_page: 1,\n            });\n\n            if (runs.data.workflow_runs.length === 0) {\n              core.setOutput('found', 'false');\n              return;\n            }\n\n            const mainRunId = runs.data.workflow_runs[0].id;\n            const artifacts = await github.rest.actions.listWorkflowRunArtifacts({\n              owner: context.repo.owner,\n              repo: context.repo.repo,\n              run_id: mainRunId,\n            });\n\n            const testReports = artifacts.data.artifacts.find(a => a.name === 'test-reports');\n            
if (!testReports) {\n              core.setOutput('found', 'false');\n              return;\n            }\n\n            const zip = await github.rest.actions.downloadArtifact({\n              owner: context.repo.owner,\n              repo: context.repo.repo,\n              artifact_id: testReports.id,\n              archive_format: 'zip',\n            });\n\n            const dest = path.join(process.env.RUNNER_TEMP, 'base-coverage');\n            fs.mkdirSync(dest, { recursive: true });\n            fs.writeFileSync(path.join(dest, 'base.zip'), Buffer.from(zip.data));\n            core.setOutput('found', 'true');\n            core.setOutput('dir', dest);\n\n      - name: Extract base coverage\n        if: steps.base-coverage.outputs.found == 'true'\n        shell: bash\n        run: |\n          cd \"${{ steps.base-coverage.outputs.dir }}\"\n          unzip -o base.zip -d base\n\n      - name: Build and post report\n        uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7\n        env:\n          PR_NUMBER: ${{ steps.meta.outputs.pr-number }}\n          QUALITY: ${{ steps.meta.outputs.quality }}\n          TESTS: ${{ steps.meta.outputs.tests }}\n          UBUNTU: ${{ steps.jobs.outputs.ubuntu }}\n          WINDOWS: ${{ steps.jobs.outputs.windows }}\n          MACOS: ${{ steps.jobs.outputs.macos }}\n          BASE_FOUND: ${{ steps.base-coverage.outputs.found }}\n          BASE_DIR: ${{ steps.base-coverage.outputs.dir }}\n          RUN_ID: ${{ github.event.workflow_run.id }}\n          HEAD_SHA: ${{ github.event.workflow_run.head_sha }}\n        with:\n          script: |\n            const fs = require('fs');\n            const path = require('path');\n\n            const icon = (s) => ({ success: '✅', failure: '❌', cancelled: '⏭️' }[s] || '❓');\n            const temp = process.env.RUNNER_TEMP;\n\n            // ── Read coverage ──\n            function readCov(dir) {\n              const out = { stmts: 'N/A', branch: 'N/A', funcs: 
'N/A', lines: 'N/A',\n                            stmtsCov: '', branchCov: '', funcsCov: '', linesCov: '' };\n              try {\n                const files = require('child_process')\n                  .execSync(`find \"${dir}\" -name coverage-summary.json -type f`, { encoding: 'utf8' })\n                  .trim().split('\\n').filter(Boolean);\n                if (!files.length) return out;\n                const d = JSON.parse(fs.readFileSync(files[0], 'utf8')).total;\n                out.stmts = d.statements.pct; out.branch = d.branches.pct;\n                out.funcs = d.functions.pct;  out.lines = d.lines.pct;\n                out.stmtsCov = `${d.statements.covered}/${d.statements.total}`;\n                out.branchCov = `${d.branches.covered}/${d.branches.total}`;\n                out.funcsCov = `${d.functions.covered}/${d.functions.total}`;\n                out.linesCov = `${d.lines.covered}/${d.lines.total}`;\n              } catch {}\n              return out;\n            }\n\n            const cov = readCov(path.join(temp, 'test-reports'));\n            const base = process.env.BASE_FOUND === 'true'\n              ? 
readCov(path.join(process.env.BASE_DIR, 'base'))\n              : { stmts: 'N/A', branch: 'N/A', funcs: 'N/A', lines: 'N/A' };\n\n            // ── Read test results ──\n            let total = 0, passed = 0, failed = 0, skipped = 0, suites = 0, duration = '0s';\n            let skippedTests = [];\n            try {\n              const files = require('child_process')\n                .execSync(`find \"${path.join(temp, 'test-reports')}\" -name test-results.json -type f`, { encoding: 'utf8' })\n                .trim().split('\\n').filter(Boolean);\n              if (files.length) {\n                const r = JSON.parse(fs.readFileSync(files[0], 'utf8'));\n                total = r.numTotalTests || 0;\n                passed = r.numPassedTests || 0;\n                failed = r.numFailedTests || 0;\n                skipped = r.numPendingTests || 0;\n                suites = r.numTotalTestSuites || 0;\n                const durS = Math.floor((Math.max(...r.testResults.map(t => t.endTime)) - r.startTime) / 1000);\n                duration = durS >= 60 ? 
`${Math.floor(durS / 60)}m ${durS % 60}s` : `${durS}s`;\n                // Collect skipped test names\n                for (const suite of r.testResults) {\n                  for (const t of (suite.assertionResults || [])) {\n                    if (t.status === 'pending' || t.status === 'skipped') {\n                      skippedTests.push(`- ${t.ancestorTitles.join(' > ')} > ${t.title}`);\n                    }\n                  }\n                }\n              }\n            } catch {}\n\n            // ── Coverage delta ──\n            function delta(pct, basePct) {\n              if (pct === 'N/A' || basePct === 'N/A') return '—';\n              const d = (pct - basePct).toFixed(1);\n              const dNum = parseFloat(d);\n              if (dNum > 0) return `📈 +${d}%`;\n              if (dNum < 0) return `📉 ${d}%`;\n              return '＝';\n            }\n\n            // ── Build markdown ──\n            const { PR_NUMBER, QUALITY, TESTS, UBUNTU, WINDOWS, MACOS, RUN_ID, HEAD_SHA } = process.env;\n            const prNumber = parseInt(PR_NUMBER, 10);\n            const overall = (QUALITY === 'success' && TESTS === 'success')\n              ? 
'✅ **All checks passed**' : '❌ **Some checks failed**';\n            const sha = HEAD_SHA.slice(0, 7);\n\n            let body = `## CI Report\\n\\n${overall} &ensp; \\`${sha}\\`\\n\\n`;\n\n            body += `### Pipeline\\n\\n`;\n            body += `| Stage | Status | Ubuntu | Windows | macOS |\\n`;\n            body += `|-------|--------|--------|---------|-------|\\n`;\n            body += `| Typecheck | ${icon(QUALITY)} \\`${QUALITY}\\` | — | — | — |\\n`;\n            body += `| Tests | ${icon(TESTS)} \\`${TESTS}\\` | ${icon(UBUNTU)} | ${icon(WINDOWS)} | ${icon(MACOS)} |\\n\\n`;\n\n            if (total > 0) {\n              body += `### Tests\\n\\n`;\n              body += `| Metric | Value |\\n|--------|-------|\\n`;\n              body += `| Total | **${total}** |\\n`;\n              body += `| Passed | **${passed}** |\\n`;\n              if (failed > 0) body += `| Failed | **${failed}** |\\n`;\n              if (skipped > 0) body += `| Skipped | ${skipped} |\\n`;\n              body += `| Files | ${suites} |\\n`;\n              body += `| Duration | ${duration} |\\n\\n`;\n\n              if (failed === 0) {\n                body += `✅ All **${passed}** tests passed across **${suites}** files\\n`;\n              } else {\n                body += `❌ **${failed}** failed / **${passed}** passed\\n`;\n              }\n\n              if (skippedTests.length > 0) {\n                body += `\\n<details>\\n<summary>${skipped} test(s) skipped</summary>\\n\\n`;\n                body += skippedTests.join('\\n') + '\\n\\n</details>\\n';\n              }\n              body += '\\n';\n            }\n\n            if (cov.stmts !== 'N/A') {\n              body += `### Coverage\\n\\n`;\n              body += `| Metric | Coverage | Covered | Base (main) | Delta |\\n`;\n              body += `|--------|----------|---------|-------------|-------|\\n`;\n              body += `| Statements | **${cov.stmts}%** | ${cov.stmtsCov} | ${base.stmts}% | ${delta(cov.stmts, 
base.stmts)} |\\n`;\n              body += `| Branches | **${cov.branch}%** | ${cov.branchCov} | ${base.branch}% | ${delta(cov.branch, base.branch)} |\\n`;\n              body += `| Functions | **${cov.funcs}%** | ${cov.funcsCov} | ${base.funcs}% | ${delta(cov.funcs, base.funcs)} |\\n`;\n              body += `| Lines | **${cov.lines}%** | ${cov.linesCov} | ${base.lines}% | ${delta(cov.lines, base.lines)} |\\n\\n`;\n            } else {\n              const runUrl = `${context.serverUrl}/${context.repo.owner}/${context.repo.repo}/actions/runs/${RUN_ID}`;\n              body += `### Coverage\\n\\n⚠️ Coverage data unavailable — check the [test job](${runUrl}) for details.\\n\\n`;\n            }\n\n            const runUrl = `${context.serverUrl}/${context.repo.owner}/${context.repo.repo}/actions/runs/${RUN_ID}`;\n            body += `---\\n<sub>📋 [Full run](${runUrl}) · Coverage from Ubuntu · Generated by CI</sub>`;\n\n            // ── Post sticky comment ──\n            const { data: comments } = await github.rest.issues.listComments({\n              owner: context.repo.owner,\n              repo: context.repo.repo,\n              issue_number: prNumber,\n              per_page: 100,\n              direction: 'desc',\n            });\n\n            const marker = '<!-- ci-report -->';\n            const existing = comments.find(c => c.body?.includes(marker));\n            const fullBody = marker + '\\n' + body;\n\n            if (existing) {\n              await github.rest.issues.updateComment({\n                owner: context.repo.owner,\n                repo: context.repo.repo,\n                comment_id: existing.id,\n                body: fullBody,\n              });\n            } else {\n              await github.rest.issues.createComment({\n                owner: context.repo.owner,\n                repo: context.repo.repo,\n                issue_number: prNumber,\n                body: fullBody,\n              });\n            }\n"
  },
  {
    "path": ".github/workflows/ci-tests.yml",
    "content": "name: Tests\n\non:\n  workflow_call:\n\njobs:\n  tests:\n    name: ubuntu / coverage\n    runs-on: ubuntu-latest\n    timeout-minutes: 25\n    steps:\n      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4\n      - uses: ./.github/actions/setup-gitnexus\n        with:\n          build: 'true'\n\n      - name: Run all tests with coverage\n        run: >-\n          npx vitest run\n          --reporter=default\n          --reporter=json\n          --outputFile=test-results.json\n          --coverage\n          --coverage.reporter=json-summary\n          --coverage.reporter=json\n          --coverage.reporter=text\n          --coverage.thresholdAutoUpdate=false\n          --coverage.reportOnFailure=true\n        working-directory: gitnexus\n\n      - name: Upload test reports\n        if: always()\n        uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4\n        with:\n          name: test-reports\n          path: |\n            gitnexus/coverage/coverage-summary.json\n            gitnexus/coverage/coverage-final.json\n            gitnexus/test-results.json\n          retention-days: 5\n\n  cross-platform:\n    name: ${{ matrix.os }}\n    strategy:\n      fail-fast: false\n      matrix:\n        # Ubuntu already covered by the coverage job above\n        os: [windows-latest, macos-latest]\n    runs-on: ${{ matrix.os }}\n    timeout-minutes: 25\n    steps:\n      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4\n      - uses: ./.github/actions/setup-gitnexus\n        with:\n          build: 'true'\n      - run: npx vitest run\n        working-directory: gitnexus\n"
  },
  {
    "path": ".github/workflows/ci.yml",
    "content": "name: CI\n\non:\n  push:\n    branches: [main]\n    paths-ignore: ['**.md', 'docs/**', 'LICENSE']\n  pull_request:\n    branches: [main]\n    paths-ignore: ['**.md', 'docs/**', 'LICENSE']\n  workflow_call:\n\nconcurrency:\n  group: ci-${{ github.ref }}\n  cancel-in-progress: true\n\n# ── Reusable workflow orchestration ─────────────────────────────────\n# Each concern lives in its own workflow file for maintainability:\n#   ci-quality.yml       — typecheck (tsc --noEmit)\n#   ci-tests.yml         — all tests with coverage (ubuntu) + cross-platform\n#   ci-report.yml        — PR comment (workflow_run trigger for fork write access)\n\njobs:\n  quality:\n    uses: ./.github/workflows/ci-quality.yml\n    permissions:\n      contents: read\n\n  tests:\n    uses: ./.github/workflows/ci-tests.yml\n    permissions:\n      contents: read\n\n  # ── Unified CI gate ──────────────────────────────────────────────\n  # Single required check for branch protection.\n  ci-status:\n    name: CI Gate\n    needs: [quality, tests]\n    if: always()\n    runs-on: ubuntu-latest\n    timeout-minutes: 5\n    steps:\n      - name: Check all jobs passed\n        shell: bash\n        env:\n          QUALITY: ${{ needs.quality.result }}\n          TESTS: ${{ needs.tests.result }}\n        run: |\n          echo \"Quality:  $QUALITY\"\n          echo \"Tests:    $TESTS\"\n          if [[ \"$QUALITY\" != \"success\" ]] ||\n             [[ \"$TESTS\" != \"success\" ]]; then\n            echo \"::error::One or more CI jobs failed\"\n            exit 1\n          fi\n\n  # ── PR metadata for ci-report.yml ────────────────────────────────\n  # Saves PR number and job results so the workflow_run-triggered\n  # report can post comments with a write token (works for forks).\n  save-pr-meta:\n    name: Save PR Metadata\n    if: always() && github.event_name == 'pull_request'\n    needs: [quality, tests]\n    runs-on: ubuntu-latest\n    timeout-minutes: 5\n    steps:\n      - name: Write 
PR metadata\n        shell: bash\n        env:\n          PR_NUMBER: ${{ github.event.pull_request.number }}\n          QUALITY: ${{ needs.quality.result }}\n          TESTS: ${{ needs.tests.result }}\n        run: |\n          mkdir -p pr-meta\n          echo \"$PR_NUMBER\" > pr-meta/pr-number\n          echo \"$QUALITY\"   > pr-meta/quality-result\n          echo \"$TESTS\"     > pr-meta/tests-result\n\n      - name: Upload PR metadata\n        uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4\n        with:\n          name: pr-meta\n          path: pr-meta/\n          retention-days: 1\n"
  },
  {
    "path": ".github/workflows/claude-code-review.yml",
    "content": "name: Claude Code Review\n\n# Uses pull_request_target so the workflow runs as defined on the default branch,\n# which allows access to secrets for posting review comments on fork PRs.\n# SECURITY: The checkout pins the fork's HEAD SHA (not the branch name) to\n# prevent TOCTOU races (force-push between trigger and checkout). The\n# claude-code-action sandboxes execution — it does NOT run arbitrary code\n# from the checked-out source.\n\non:\n  # Trigger only when explicitly requested:\n  #   - Add the \"claude-review\" label to a PR, OR\n  #   - Comment \"@claude\" or \"/review\" on a PR\n  pull_request_target:\n    types: [labeled]\n  issue_comment:\n    types: [created]\n\n# Serialize per-PR to avoid racing review comments.\nconcurrency:\n  group: claude-review-${{ github.event.issue.number || github.event.pull_request.number }}\n  cancel-in-progress: false\n\njobs:\n  claude-review:\n    # Run only when:\n    #   1. The \"claude-review\" label is added to a non-draft PR by a trusted contributor, OR\n    #   2. 
A trusted contributor comments \"@claude\" or \"/review\" on a PR\n    if: |\n      (\n        github.event_name == 'pull_request_target' &&\n        github.event.label.name == 'claude-review' &&\n        github.event.pull_request.draft == false &&\n        (github.event.pull_request.author_association == 'OWNER' ||\n         github.event.pull_request.author_association == 'MEMBER' ||\n         github.event.pull_request.author_association == 'COLLABORATOR')\n      ) ||\n      (\n        github.event_name == 'issue_comment' &&\n        github.event.issue.pull_request &&\n        (contains(github.event.comment.body, '@claude') ||\n         contains(github.event.comment.body, '/review')) &&\n        (github.event.comment.author_association == 'OWNER' ||\n         github.event.comment.author_association == 'MEMBER' ||\n         github.event.comment.author_association == 'COLLABORATOR')\n      )\n    runs-on: ubuntu-latest\n    timeout-minutes: 30\n    permissions:\n      contents: read\n      pull-requests: write\n      issues: read\n      id-token: write\n\n    steps:\n      # For issue_comment triggers, resolve the PR number, head SHA, and fork repo\n      - name: Resolve PR context\n        id: pr\n        uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7\n        with:\n          script: |\n            let pr;\n            if (context.eventName === 'issue_comment') {\n              const resp = await github.rest.pulls.get({\n                owner: context.repo.owner,\n                repo: context.repo.repo,\n                pull_number: context.payload.issue.number,\n              });\n              pr = resp.data;\n            } else {\n              pr = context.payload.pull_request;\n            }\n            core.setOutput('number', pr.number);\n            core.setOutput('sha', pr.head.sha);\n            core.setOutput('repo', pr.head.repo.full_name);\n            core.setOutput('branch', pr.head.ref);\n\n      - name: Checkout PR 
head\n        uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4\n        with:\n          repository: ${{ steps.pr.outputs.repo }}\n          ref: ${{ steps.pr.outputs.sha }}\n          fetch-depth: 1\n\n      - name: Run Claude Code Review\n        id: claude-review\n        uses: anthropics/claude-code-action@9469d113c6afd29550c402740f22d1a97dd1209b # v1\n        with:\n          claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}\n          github_token: ${{ secrets.GITHUB_TOKEN }}\n          allowed_non_write_users: '*'\n          show_full_output: true\n          plugin_marketplaces: 'https://github.com/anthropics/claude-code.git'\n          plugins: 'code-review@claude-code-plugins'\n          prompt: '/code-review:code-review ${{ github.repository }}/pull/${{ steps.pr.outputs.number }}'\n"
  },
  {
    "path": ".github/workflows/claude.yml",
    "content": "name: Claude Code\n\non:\n  issue_comment:\n    types: [created]\n  pull_request_review_comment:\n    types: [created]\n  issues:\n    types: [opened, assigned]\n  pull_request_review:\n    types: [submitted]\n\n# Serialize per-PR/issue to avoid racing comments.\nconcurrency:\n  group: claude-code-${{ github.event.issue.number || github.event.pull_request.number || github.event.issue.id }}\n  cancel-in-progress: false\n\njobs:\n  claude:\n    if: |\n      (\n        github.event_name == 'issue_comment' &&\n        contains(github.event.comment.body, '@claude') &&\n        (github.event.comment.author_association == 'OWNER' ||\n         github.event.comment.author_association == 'MEMBER' ||\n         github.event.comment.author_association == 'COLLABORATOR')\n      ) ||\n      (\n        github.event_name == 'pull_request_review_comment' &&\n        contains(github.event.comment.body, '@claude') &&\n        (github.event.comment.author_association == 'OWNER' ||\n         github.event.comment.author_association == 'MEMBER' ||\n         github.event.comment.author_association == 'COLLABORATOR')\n      ) ||\n      (\n        github.event_name == 'pull_request_review' &&\n        contains(github.event.review.body, '@claude') &&\n        (github.event.review.author_association == 'OWNER' ||\n         github.event.review.author_association == 'MEMBER' ||\n         github.event.review.author_association == 'COLLABORATOR')\n      ) ||\n      (\n        github.event_name == 'issues' &&\n        (contains(github.event.issue.body, '@claude') || contains(github.event.issue.title, '@claude')) &&\n        (github.event.issue.author_association == 'OWNER' ||\n         github.event.issue.author_association == 'MEMBER' ||\n         github.event.issue.author_association == 'COLLABORATOR')\n      )\n    runs-on: ubuntu-latest\n    timeout-minutes: 30\n    permissions:\n      contents: read\n      pull-requests: write\n      issues: write\n      id-token: write\n      
actions: read         # required for Claude to read CI results on PRs\n    steps:\n      # For PR-related triggers, resolve the fork repo so we can checkout correctly.\n      - name: Resolve PR context\n        id: pr\n        uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7\n        with:\n          script: |\n            // Determine if this event is PR-related\n            let prNumber = null;\n            if (context.eventName === 'issue_comment' && context.payload.issue.pull_request) {\n              prNumber = context.payload.issue.number;\n            } else if (context.eventName === 'pull_request_review_comment') {\n              prNumber = context.payload.pull_request.number;\n            } else if (context.eventName === 'pull_request_review') {\n              prNumber = context.payload.pull_request.number;\n            }\n\n            if (!prNumber) {\n              core.setOutput('is_pr', 'false');\n              return;\n            }\n\n            const resp = await github.rest.pulls.get({\n              owner: context.repo.owner,\n              repo: context.repo.repo,\n              pull_number: prNumber,\n            });\n            const pr = resp.data;\n\n            core.setOutput('is_pr', 'true');\n            core.setOutput('number', String(prNumber));\n            core.setOutput('sha', pr.head.sha);\n            core.setOutput('repo', pr.head.repo.full_name);\n            core.setOutput('branch', pr.head.ref);\n\n      - name: Checkout repository\n        uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4\n        with:\n          repository: ${{ steps.pr.outputs.is_pr == 'true' && steps.pr.outputs.repo || github.repository }}\n          ref: ${{ steps.pr.outputs.is_pr == 'true' && steps.pr.outputs.sha || '' }}\n          fetch-depth: 1\n\n      - name: Run Claude Code\n        id: claude\n        uses: anthropics/claude-code-action@9469d113c6afd29550c402740f22d1a97dd1209b # v1\n        with:\n     
     claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}\n          github_token: ${{ secrets.GITHUB_TOKEN }}\n          allowed_non_write_users: '*'\n          show_full_output: true\n\n          # This is an optional setting that allows Claude to read CI results on PRs\n          additional_permissions: |\n            actions: read\n"
  },
  {
    "path": ".github/workflows/publish.yml",
    "content": "name: Publish to npm\n\non:\n  push:\n    tags:\n      - 'v*'\n\n# No workflow-level permissions — scoped per job below.\n\njobs:\n  ci:\n    uses: ./.github/workflows/ci.yml\n    permissions:\n      contents: read\n      actions: read\n      pull-requests: write\n\n  publish:\n    needs: ci\n    runs-on: ubuntu-latest\n    timeout-minutes: 15\n    permissions:\n      contents: write\n      id-token: write\n    steps:\n      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4\n      - uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4\n        with:\n          node-version: 20\n          registry-url: https://registry.npmjs.org\n          cache: npm\n          cache-dependency-path: gitnexus/package-lock.json\n      - run: npm ci\n        working-directory: gitnexus\n\n      - name: Verify version consistency\n        shell: bash\n        run: |\n          TAG_VERSION=\"${GITHUB_REF#refs/tags/v}\"\n          if ! [[ \"$TAG_VERSION\" =~ ^[0-9]+\\.[0-9]+\\.[0-9]+(-[a-zA-Z0-9.]+)?$ ]]; then\n            echo \"::error::Tag does not follow semver: v$TAG_VERSION\"\n            exit 1\n          fi\n          PKG_VERSION=$(node -p \"require('./package.json').version\")\n          if [ \"$TAG_VERSION\" != \"$PKG_VERSION\" ]; then\n            echo \"::error::Tag version (v$TAG_VERSION) does not match package.json version ($PKG_VERSION)\"\n            exit 1\n          fi\n          echo \"Version verified: $PKG_VERSION\"\n        working-directory: gitnexus\n\n      - name: Build\n        run: npm run build\n        working-directory: gitnexus\n\n      - name: Dry-run publish\n        run: npm publish --dry-run\n        working-directory: gitnexus\n\n      - name: Publish to npm\n        run: npm publish --provenance --access public\n        working-directory: gitnexus\n        env:\n          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}\n\n      - name: Create GitHub Release\n        uses: 
softprops/action-gh-release@a06a81a03ee405af7f2048a818ed3f03bbf83c7b # v2\n        with:\n          generate_release_notes: true\n"
  },
  {
    "path": ".gitignore",
    "content": "# Dependencies\nnode_modules/\n\n# Build output\ndist/\n\n# TypeScript build info\n*.tsbuildinfo\n\n# IDE\n.vscode/\n.idea/\n*.swp\n*.swo\n\n# OS\n.DS_Store\nThumbs.db\n\n.claude/settings.local.json\n\n# Environment variables\n.env\n.env.local\n.env.*.local\n\n# Logs\n*.log\nnpm-debug.log*\n\n# Testing\ncoverage/\n\n# Misc\n*.local\n\n.vercel\n\n\n\n\n\n\n.env*.local\n.gitnexus\n.claude/settings.local.json\n\n# Claude Code worktrees\n.claude/worktrees/\n\n# Claude code skills\n.claude/skills/generated/\n\n# Assets (screenshots, images)\nassets/\n\n# Generated files (should not be indexed)\nrepomix-output*\n\n# Design docs (local only)\ndocs/plans/\n\ngitnexus/test/fixtures/mini-repo/*.md\ngitnexus/test/fixtures/mini-repo/.claude\ngitnexus/test/fixtures/mini-repo/.gitignore\n\n# Ignore csharp generated obj and bin folders\ngitnexus/test/fixtures/lang-resolution/**/obj\ngitnexus/test/fixtures/lang-resolution/**/bin\nGitNexus.sln\n# Git worktrees\n.worktrees/\n"
  },
  {
    "path": ".history/gitnexus/vitest.config_20260317171253.ts",
    "content": "import { defineConfig } from 'vitest/config';\n\nexport default defineConfig({\n  test: {\n    globalSetup: ['test/global-setup.ts'],\n    include: ['test/**/*.test.ts'],\n    testTimeout: 30000,\n    hookTimeout: 120000,\n    pool: 'forks',\n    globals: true,\n    setupFiles: ['test/setup.ts'],\n    teardownTimeout: 3000,\n    dangerouslyIgnoreUnhandledErrors: true, // LadybugDB N-API destructor segfaults on fork exit — not a test failure\n    coverage: {\n      provider: 'v8',\n      include: ['src/**/*.ts'],\n      exclude: [\n        'src/cli/index.ts',          // CLI entry point (commander wiring)\n        'src/server/**',              // HTTP server (requires network)\n        'src/core/wiki/**',           // Wiki generation (requires LLM)\n      ],\n      // Auto-ratchet: vitest bumps thresholds when coverage exceeds them.\n      // CI will fail if a PR drops below these floors.\n      thresholds: {\n        statements: 26,\n        branches: 23,\n        functions: 28,\n        lines: 27,\n        autoUpdate: true,\n      },\n    },\n  },\n});\n"
  },
  {
    "path": ".mcp.json",
    "content": "{\n  \"mcpServers\": {\n    \"gitnexus\": {\n      \"type\": \"stdio\",\n      \"command\": \"npx\",\n      \"args\": [\"-y\", \"gitnexus@latest\", \"mcp\"]\n    }\n  }\n}\n"
  },
  {
    "path": ".sisyphus/drafts/gitnexus-brainstorming.md",
    "content": "# Draft: Gitnexus Brainstorming - Clustering & Process Maps\n\n## Initial Context\n- Project: **GitnexusV2**\n- Structure: \n    - `gitnexus/` (Likely the core application)\n    - `gitnexus-mcp/` (Likely a Model Context Protocol server)\n- Goal: Make it accurate and usable for smaller/dumber models.\n- Current Focus: Implementing **Clustering** and **Process Maps**.\n\n## Findings\n- **Clustering**: Found `gitnexus/src/core/ingestion/cluster-enricher.ts`.\n- **Process Maps**: No files matched `*process*map*` yet. Searching content next.\n\n## Open Questions\n- How is \"process map\" defined in this context? (Graph, mermaid diagram, flowchart?)\n- What is the input for clustering? (Code chunks, files, commits?)\n- What is the intended output for \"smaller models\"? (Simplified context, summaries?)\n"
  },
  {
    "path": ".sisyphus/drafts/noodlbox-comparison.md",
    "content": "# Draft: Gitnexus vs Noodlbox Strategy\n\n## Objectives\n- Understand GitnexusV2 current state and goals.\n- Analyze Noodlbox capabilities from provided URL.\n- Compare features, architecture, and value proposition.\n- Provide strategic views and recommendations.\n\n## Research Findings\n- [GitnexusV2]: Zero-server, browser-native (WASM), KuzuDB based. Graph + Vector hybrid search.\n- [Noodlbox]: CLI-first, heavy install. Has \"Session Hooks\" and \"Search Hooks\" via plugins/CLI.\n\n## Comparison Points\n- **Core Philosophy**: Both bet on \"Knowledge Graph + MCP\" as the future. Noodlbox validates Gitnexus's direction.\n- **Architecture**:\n  - *Noodlbox*: CLI/Binary based. Likely local server management.\n  - *Gitnexus*: Zero-server, Browser-native (WASM). Lower friction, higher privacy.\n- **Features**:\n  - *Communities/Processes*: Both have them. Noodlbox uses them for \"context injection\". Gitnexus uses them for \"visual exploration + query\".\n  - *Impact Analysis*: Noodlbox has polished workflows (e.g., `detect_impact staged`). Gitnexus has the engine (`blastRadius`) but maybe not the specific workflow wrappers yet.\n- **UX/Integration**:\n  - *Noodlbox*: \"Hooks\" (Session/Search) are a killer feature. Proactively injecting context into the agent's session.\n  - *Gitnexus*: Powerful tools, but relies on agent *pulling* data?\n\n## Strategic Views\n1. **Validation**: The market direction is confirmed. You are building the right thing.\n2. **differentiation**: Lean into \"Zero-Setup / Browser-Native\". Noodlbox requires `noodl init` and CLI handling. Gitnexus could just *be*.\n3. **Opportunity**: Steal the \"Session/Search Hooks\" pattern. Make the agent smarter *automatically* without the user asking \"check impact\".\n4. **Workflow Polish**: Noodlbox's `/detect_impact staged` is a great specific use case. 
Gitnexus should wrap `blastRadius` into similar concrete workflows.\n\n## Technical Feasibility (Interception)\n- **Cursor**: Use `.cursorrules` to \"shadow\" default tools. Instruct agent to ALWAYS use `gitnexus_search` instead of `grep`.\n- **Claude Code**: Likely uses a private plugin API for `PreToolUse`. We can't match this exactly without an official plugin, but we can approximate it with strong prompt instructions in `AGENTS.md`.\n- **MCP Shadowing**: Define tools with names that conflict (e.g., `grep`)? No, unsafe. Better to use \"Virtual Hooks\" via system prompt instructions.\n"
  },
  {
    "path": ".windsurfrules",
    "content": "# AI Agent Rules\n\nFollow .gitnexus/RULES.md for all project context and coding guidelines.\n\nThis project uses GitNexus MCP for code intelligence. See .gitnexus/RULES.md for available tools and best practices.\n"
  },
  {
    "path": "AGENTS.md",
    "content": "<!-- gitnexus:start -->\n# GitNexus — Code Intelligence\n\nThis project is indexed by GitNexus as **GitNexus** (2184 symbols, 5245 relationships, 167 execution flows). Use the GitNexus MCP tools to understand code, assess impact, and navigate safely.\n\n> If any GitNexus tool warns the index is stale, run `npx gitnexus analyze` in terminal first.\n\n## Always Do\n\n- **MUST run impact analysis before editing any symbol.** Before modifying a function, class, or method, run `gitnexus_impact({target: \"symbolName\", direction: \"upstream\"})` and report the blast radius (direct callers, affected processes, risk level) to the user.\n- **MUST run `gitnexus_detect_changes()` before committing** to verify your changes only affect expected symbols and execution flows.\n- **MUST warn the user** if impact analysis returns HIGH or CRITICAL risk before proceeding with edits.\n- When exploring unfamiliar code, use `gitnexus_query({query: \"concept\"})` to find execution flows instead of grepping. It returns process-grouped results ranked by relevance.\n- When you need full context on a specific symbol — callers, callees, which execution flows it participates in — use `gitnexus_context({name: \"symbolName\"})`.\n\n## When Debugging\n\n1. `gitnexus_query({query: \"<error or symptom>\"})` — find execution flows related to the issue\n2. `gitnexus_context({name: \"<suspect function>\"})` — see all callers, callees, and process participation\n3. `READ gitnexus://repo/GitNexus/process/{processName}` — trace the full execution flow step by step\n4. For regressions: `gitnexus_detect_changes({scope: \"compare\", base_ref: \"main\"})` — see what your branch changed\n\n## When Refactoring\n\n- **Renaming**: MUST use `gitnexus_rename({symbol_name: \"old\", new_name: \"new\", dry_run: true})` first. Review the preview — graph edits are safe, text_search edits need manual review. 
Then run with `dry_run: false`.\n- **Extracting/Splitting**: MUST run `gitnexus_context({name: \"target\"})` to see all incoming/outgoing refs, then `gitnexus_impact({target: \"target\", direction: \"upstream\"})` to find all external callers before moving code.\n- After any refactor: run `gitnexus_detect_changes({scope: \"all\"})` to verify only expected files changed.\n\n## Never Do\n\n- NEVER edit a function, class, or method without first running `gitnexus_impact` on it.\n- NEVER ignore HIGH or CRITICAL risk warnings from impact analysis.\n- NEVER rename symbols with find-and-replace — use `gitnexus_rename` which understands the call graph.\n- NEVER commit changes without running `gitnexus_detect_changes()` to check affected scope.\n\n## Tools Quick Reference\n\n| Tool | When to use | Command |\n|------|-------------|---------|\n| `query` | Find code by concept | `gitnexus_query({query: \"auth validation\"})` |\n| `context` | 360-degree view of one symbol | `gitnexus_context({name: \"validateUser\"})` |\n| `impact` | Blast radius before editing | `gitnexus_impact({target: \"X\", direction: \"upstream\"})` |\n| `detect_changes` | Pre-commit scope check | `gitnexus_detect_changes({scope: \"staged\"})` |\n| `rename` | Safe multi-file rename | `gitnexus_rename({symbol_name: \"old\", new_name: \"new\", dry_run: true})` |\n| `cypher` | Custom graph queries | `gitnexus_cypher({query: \"MATCH ...\"})` |\n\n## Impact Risk Levels\n\n| Depth | Meaning | Action |\n|-------|---------|--------|\n| d=1 | WILL BREAK — direct callers/importers | MUST update these |\n| d=2 | LIKELY AFFECTED — indirect deps | Should test |\n| d=3 | MAY NEED TESTING — transitive | Test if critical path |\n\n## Resources\n\n| Resource | Use for |\n|----------|---------|\n| `gitnexus://repo/GitNexus/context` | Codebase overview, check index freshness |\n| `gitnexus://repo/GitNexus/clusters` | All functional areas |\n| `gitnexus://repo/GitNexus/processes` | All execution flows |\n| 
`gitnexus://repo/GitNexus/process/{name}` | Step-by-step execution trace |\n\n## Self-Check Before Finishing\n\nBefore completing any code modification task, verify:\n1. `gitnexus_impact` was run for all modified symbols\n2. No HIGH/CRITICAL risk warnings were ignored\n3. `gitnexus_detect_changes()` confirms changes match expected scope\n4. All d=1 (WILL BREAK) dependents were updated\n\n## Keeping the Index Fresh\n\nAfter committing code changes, the GitNexus index becomes stale. Re-run analyze to update it:\n\n```bash\nnpx gitnexus analyze\n```\n\nIf the index previously included embeddings, preserve them by adding `--embeddings`:\n\n```bash\nnpx gitnexus analyze --embeddings\n```\n\nTo check whether embeddings exist, inspect `.gitnexus/meta.json` — the `stats.embeddings` field shows the count (0 means no embeddings). **Running analyze without `--embeddings` will delete any previously generated embeddings.**\n\n> Claude Code users: A PostToolUse hook handles this automatically after `git commit` and `git merge`.\n\n## CLI\n\n| Task | Read this skill file |\n|------|---------------------|\n| Understand architecture / \"How does X work?\" | `.claude/skills/gitnexus/gitnexus-exploring/SKILL.md` |\n| Blast radius / \"What breaks if I change X?\" | `.claude/skills/gitnexus/gitnexus-impact-analysis/SKILL.md` |\n| Trace bugs / \"Why is X failing?\" | `.claude/skills/gitnexus/gitnexus-debugging/SKILL.md` |\n| Rename / extract / split / refactor | `.claude/skills/gitnexus/gitnexus-refactoring/SKILL.md` |\n| Tools, resources, schema reference | `.claude/skills/gitnexus/gitnexus-guide/SKILL.md` |\n| Index, status, clean, wiki CLI commands | `.claude/skills/gitnexus/gitnexus-cli/SKILL.md` |\n\n<!-- gitnexus:end -->\n"
  },
  {
    "path": "CHANGELOG.md",
    "content": "# Changelog\n\nAll notable changes to GitNexus will be documented in this file.\n\n## [Unreleased]\n\n### Changed\n- Migrated from KuzuDB to LadybugDB v0.15 (`@ladybugdb/core`, `@ladybugdb/wasm-core`)\n- Renamed all internal paths from `kuzu` to `lbug` (storage: `.gitnexus/kuzu` → `.gitnexus/lbug`)\n- Added automatic cleanup of stale KuzuDB index files\n- LadybugDB v0.15 requires explicit VECTOR extension loading for semantic search\n\n## [1.4.0] - 2026-03-13\n\n### Added\n\n- **Language-aware symbol resolution engine** with 3-tier resolver: exact FQN → scope-walk → guarded fuzzy fallback that refuses ambiguous matches (#238) — @magyargergo\n- **Method Resolution Order (MRO)** with 5 language-specific strategies: C++ leftmost-base, C#/Java class-over-interface, Python C3 linearization, Rust qualified syntax, default BFS (#238) — @magyargergo\n- **Constructor & struct literal resolution** across all languages — `new Foo()`, `User{...}`, C# primary constructors, target-typed new (#238) — @magyargergo\n- **Receiver-constrained resolution** using per-file TypeEnv — disambiguates `user.save()` vs `repo.save()` via `ownerId` matching (#238) — @magyargergo\n- **Heritage & ownership edges** — HAS_METHOD, OVERRIDES, Go struct embedding, Swift extension heritage, method signatures (`parameterCount`, `returnType`) (#238) — @magyargergo\n- **Language-specific resolver directory** (`resolvers/`) — extracted JVM, Go, C#, PHP, Rust resolvers from monolithic import-processor (#238) — @magyargergo\n- **Type extractor directory** (`type-extractors/`) — per-language type binding extraction with `Record<SupportedLanguages, Handler>` + `satisfies` dispatch (#238) — @magyargergo\n- **Export detection dispatch table** — compile-time exhaustive `Record` + `satisfies` pattern replacing switch/if chains (#238) — @magyargergo\n- **Language config module** (`language-config.ts`) — centralized tsconfig, go.mod, composer.json, .csproj, Swift package config loaders (#238) — 
@magyargergo\n- **Optional skill generation** via `npx gitnexus analyze --skills` — generates AI agent skills from KuzuDB knowledge graph (#171) — @zander-raycraft\n- **First-class C# support** — sibling-based modifier scanning, record/delegate/property/field/event declaration types (#163, #170, #178 via #237) — @Alice523, @benny-yamagata, @jnMetaCode\n- **C/C++ support fixes** — `.h` → C++ mapping, static-linkage export detection, qualified/parenthesized declarators, 48 entry point patterns (#163, #227 via #237) — @Alice523, @bitgineer\n- **Rust support fixes** — sibling-based `visibility_modifier` scanning for `pub` detection (#227 via #237) — @bitgineer\n- **Adaptive tree-sitter buffer sizing** — `Math.min(Math.max(contentLength * 2, 512KB), 32MB)` (#216 via #237) — @JasonOA888\n- **Call expression matching** in tree-sitter queries (#234 via #237) — @ex-nihilo-jg\n- **DeepSeek model configurations** (#217) — @JasonOA888\n- 282+ new unit tests, 178 integration resolver tests across 9 languages, 53 test files, 1146 total tests passing\n\n### Fixed\n\n- Skip unavailable native Swift parsers in sequential ingestion (#188) — @Gujiassh\n- Heritage heuristic language-gated — no longer applies class/interface rules to wrong languages (#238) — @magyargergo\n- C# `base_list` distinguishes EXTENDS vs IMPLEMENTS via symbol table + `I[A-Z]` heuristic (#238) — @magyargergo\n- Go `qualified_type` (`models.User`) correctly unwrapped in TypeEnv (#238) — @magyargergo\n- Global tier no longer blocks resolution when kind/arity filtering can narrow to 1 candidate (#238) — @magyargergo\n\n### Changed\n\n- `import-processor.ts` reduced from 1412 → 711 lines (50% reduction) via resolver and config extraction (#238) — @magyargergo\n- `type-env.ts` reduced from 635 → ~125 lines via type-extractor extraction (#238) — @magyargergo\n- CI/CD workflows hardened with security fixes and fork PR support (#222, #225) — @magyargergo\n\n## [1.3.11] - 2026-03-08\n\n### Security\n\n- Fix FTS Cypher 
injection by escaping backslashes in search queries (#209) — @magyargergo\n\n### Added\n\n- Auto-reindex hook that runs `gitnexus analyze` after commits and merges, with automatic embeddings preservation (#205) — @L1nusB\n- 968 integration tests (up from ~840) covering unhappy paths across search, enrichment, CLI, pipeline, worker pool, and KuzuDB (#209) — @magyargergo\n- Coverage auto-ratcheting so thresholds bump automatically on CI (#209) — @magyargergo\n- Rich CI PR report with coverage bars, test counts, and threshold tracking (#209) — @magyargergo\n- Modular CI workflow architecture with separate unit-test, integration-test, and orchestrator jobs (#209) — @magyargergo\n\n### Fixed\n\n- KuzuDB native addon crashes on Linux/macOS by running integration tests in isolated vitest processes with `--pool=forks` (#209) — @magyargergo\n- Worker pool `MODULE_NOT_FOUND` crash when script path is invalid (#209) — @magyargergo\n\n### Changed\n\n- Added macOS to the cross-platform CI test matrix (#208) — @magyargergo\n\n## [1.3.10] - 2026-03-07\n\n### Security\n\n- **MCP transport buffer cap**: Added 10 MB `MAX_BUFFER_SIZE` limit to prevent out-of-memory attacks via oversized `Content-Length` headers or unbounded newline-delimited input\n- **Content-Length validation**: Reject `Content-Length` values exceeding the buffer cap before allocating memory\n- **Stack overflow prevention**: Replaced recursive `readNewlineMessage` with iterative loop to prevent stack overflow from consecutive empty lines\n- **Ambiguous prefix hardening**: Tightened `looksLikeContentLength` to require 14+ bytes before matching, preventing false framing detection on short input\n- **Closed transport guard**: `send()` now rejects with a clear error when called after `close()`, with proper write-error propagation\n\n### Added\n\n- **Dual-framing MCP transport** (`CompatibleStdioServerTransport`): Auto-detects Content-Length (Codex/OpenCode) and newline-delimited JSON (Cursor/Claude Code) framing on the 
first message, responds in the same format (#207)\n- **Lazy CLI module loading**: All CLI subcommands now use `createLazyAction()` to defer heavy imports (tree-sitter, ONNX, KuzuDB) until invocation, significantly improving `gitnexus mcp` startup time (#207)\n- **Type-safe lazy actions**: `createLazyAction` uses constrained generics to validate export names against module types at compile time\n- **Regression test suite**: 13 unit tests covering transport framing, security hardening, buffer limits, and lazy action loading\n\n### Fixed\n\n- **CALLS edge sourceId alignment**: `findEnclosingFunctionId` now generates IDs with `:startLine` suffix matching node creation format, fixing process detector finding 0 entry points (#194)\n- **LRU cache zero maxSize crash**: Guard `createASTCache` against `maxSize=0` when repos have no parseable files (#144)\n\n### Changed\n\n- Transport constructor accepts `NodeJS.ReadableStream` / `NodeJS.WritableStream` (widened from concrete `ReadStream`/`WriteStream`)\n- `processReadBuffer` simplified to break on first error instead of stale-buffer retry loop\n\n## [1.3.9] - 2026-03-06\n\n### Fixed\n\n- Aligned CALLS edge sourceId with node ID format in parse worker (#194)\n\n## [1.3.8] - 2026-03-05\n\n### Fixed\n\n- Force-exit after analyze to prevent KuzuDB native cleanup hang (#192)\n"
  },
  {
    "path": "CLAUDE.md",
    "content": "<!-- gitnexus:start -->\n# GitNexus — Code Intelligence\n\nThis project is indexed by GitNexus as **GitNexus** (2184 symbols, 5245 relationships, 167 execution flows). Use the GitNexus MCP tools to understand code, assess impact, and navigate safely.\n\n> If any GitNexus tool warns the index is stale, run `npx gitnexus analyze` in terminal first.\n\n## Always Do\n\n- **MUST run impact analysis before editing any symbol.** Before modifying a function, class, or method, run `gitnexus_impact({target: \"symbolName\", direction: \"upstream\"})` and report the blast radius (direct callers, affected processes, risk level) to the user.\n- **MUST run `gitnexus_detect_changes()` before committing** to verify your changes only affect expected symbols and execution flows.\n- **MUST warn the user** if impact analysis returns HIGH or CRITICAL risk before proceeding with edits.\n- When exploring unfamiliar code, use `gitnexus_query({query: \"concept\"})` to find execution flows instead of grepping. It returns process-grouped results ranked by relevance.\n- When you need full context on a specific symbol — callers, callees, which execution flows it participates in — use `gitnexus_context({name: \"symbolName\"})`.\n\n## When Debugging\n\n1. `gitnexus_query({query: \"<error or symptom>\"})` — find execution flows related to the issue\n2. `gitnexus_context({name: \"<suspect function>\"})` — see all callers, callees, and process participation\n3. `READ gitnexus://repo/GitNexus/process/{processName}` — trace the full execution flow step by step\n4. For regressions: `gitnexus_detect_changes({scope: \"compare\", base_ref: \"main\"})` — see what your branch changed\n\n## When Refactoring\n\n- **Renaming**: MUST use `gitnexus_rename({symbol_name: \"old\", new_name: \"new\", dry_run: true})` first. Review the preview — graph edits are safe, text_search edits need manual review. 
Then run with `dry_run: false`.\n- **Extracting/Splitting**: MUST run `gitnexus_context({name: \"target\"})` to see all incoming/outgoing refs, then `gitnexus_impact({target: \"target\", direction: \"upstream\"})` to find all external callers before moving code.\n- After any refactor: run `gitnexus_detect_changes({scope: \"all\"})` to verify only expected files changed.\n\n## Never Do\n\n- NEVER edit a function, class, or method without first running `gitnexus_impact` on it.\n- NEVER ignore HIGH or CRITICAL risk warnings from impact analysis.\n- NEVER rename symbols with find-and-replace — use `gitnexus_rename` which understands the call graph.\n- NEVER commit changes without running `gitnexus_detect_changes()` to check affected scope.\n\n## Tools Quick Reference\n\n| Tool | When to use | Command |\n|------|-------------|---------|\n| `query` | Find code by concept | `gitnexus_query({query: \"auth validation\"})` |\n| `context` | 360-degree view of one symbol | `gitnexus_context({name: \"validateUser\"})` |\n| `impact` | Blast radius before editing | `gitnexus_impact({target: \"X\", direction: \"upstream\"})` |\n| `detect_changes` | Pre-commit scope check | `gitnexus_detect_changes({scope: \"staged\"})` |\n| `rename` | Safe multi-file rename | `gitnexus_rename({symbol_name: \"old\", new_name: \"new\", dry_run: true})` |\n| `cypher` | Custom graph queries | `gitnexus_cypher({query: \"MATCH ...\"})` |\n\n## Impact Risk Levels\n\n| Depth | Meaning | Action |\n|-------|---------|--------|\n| d=1 | WILL BREAK — direct callers/importers | MUST update these |\n| d=2 | LIKELY AFFECTED — indirect deps | Should test |\n| d=3 | MAY NEED TESTING — transitive | Test if critical path |\n\n## Resources\n\n| Resource | Use for |\n|----------|---------|\n| `gitnexus://repo/GitNexus/context` | Codebase overview, check index freshness |\n| `gitnexus://repo/GitNexus/clusters` | All functional areas |\n| `gitnexus://repo/GitNexus/processes` | All execution flows |\n| 
`gitnexus://repo/GitNexus/process/{name}` | Step-by-step execution trace |\n\n## Self-Check Before Finishing\n\nBefore completing any code modification task, verify:\n1. `gitnexus_impact` was run for all modified symbols\n2. No HIGH/CRITICAL risk warnings were ignored\n3. `gitnexus_detect_changes()` confirms changes match expected scope\n4. All d=1 (WILL BREAK) dependents were updated\n\n## Keeping the Index Fresh\n\nAfter committing code changes, the GitNexus index becomes stale. Re-run analyze to update it:\n\n```bash\nnpx gitnexus analyze\n```\n\nIf the index previously included embeddings, preserve them by adding `--embeddings`:\n\n```bash\nnpx gitnexus analyze --embeddings\n```\n\nTo check whether embeddings exist, inspect `.gitnexus/meta.json` — the `stats.embeddings` field shows the count (0 means no embeddings). **Running analyze without `--embeddings` will delete any previously generated embeddings.**\n\n> Claude Code users: A PostToolUse hook handles this automatically after `git commit` and `git merge`.\n\n## CLI\n\n| Task | Read this skill file |\n|------|---------------------|\n| Understand architecture / \"How does X work?\" | `.claude/skills/gitnexus/gitnexus-exploring/SKILL.md` |\n| Blast radius / \"What breaks if I change X?\" | `.claude/skills/gitnexus/gitnexus-impact-analysis/SKILL.md` |\n| Trace bugs / \"Why is X failing?\" | `.claude/skills/gitnexus/gitnexus-debugging/SKILL.md` |\n| Rename / extract / split / refactor | `.claude/skills/gitnexus/gitnexus-refactoring/SKILL.md` |\n| Tools, resources, schema reference | `.claude/skills/gitnexus/gitnexus-guide/SKILL.md` |\n| Index, status, clean, wiki CLI commands | `.claude/skills/gitnexus/gitnexus-cli/SKILL.md` |\n\n<!-- gitnexus:end -->\n"
  },
  {
    "path": "LICENSE",
    "content": "PolyForm Noncommercial License 1.0.0\n\n<https://polyformproject.org/licenses/noncommercial/1.0.0>\n\n## Acceptance\n\nIn order to get any license under these terms, you must agree to them as both strict obligations and conditions to all your licenses.\n\n## Copyright License\n\nThe licensor grants you a copyright license for the software to do everything you might do with the software that would otherwise infringe the licensor's copyright in it for any permitted purpose.  However, you may only distribute the software according to [Distribution License](#distribution-license) and make changes or new works based on the software according to [Changes and New Works License](#changes-and-new-works-license).\n\n## Distribution License\n\nThe licensor grants you an additional copyright license to distribute copies of the software.  Your license to distribute covers distributing the software with changes and new works permitted by [Changes and New Works License](#changes-and-new-works-license).\n\n## Notices\n\nYou must ensure that anyone who gets a copy of any part of the software from you also gets a copy of these terms or the URL for them above, as well as copies of any plain-text lines beginning with `Required Notice:` that the licensor provided with the software.  
For example:\n\n> Required Notice: Copyright Abhigyan Patwari (https://github.com/abhigyanpatwari/GitNexus)\n\n## Changes and New Works License\n\nThe licensor grants you an additional copyright license to make changes and new works based on the software for any permitted purpose.\n\n## Patent License\n\nThe licensor grants you a patent license for the software that covers patent claims the licensor can license, or becomes able to license, that you would infringe by using the software.\n\n## Noncommercial Purposes\n\nAny noncommercial purpose is a permitted purpose.\n\n## Personal Uses\n\nPersonal use for research, experiment, and testing for the benefit of public knowledge, personal study, private entertainment, hobby projects, amateur pursuits, or religious observance, without any anticipated commercial application, is use for a permitted purpose.\n\n## Noncommercial Organizations\n\nUse by any charitable organization, educational institution, public research organization, public safety or health organization, environmental protection organization, or government institution is use for a permitted purpose regardless of the source of funding or obligations resulting from the funding.\n\n## Fair Use\n\nYou may have \"fair use\" rights for the software under the law. These terms do not limit them.\n\n## No Other Rights\n\nThese terms do not allow you to sublicense or transfer any of your licenses to anyone else, or prevent the licensor from granting licenses to anyone else.  These terms do not imply any other licenses.\n\n## Patent Defense\n\nIf you make any written claim that the software infringes or contributes to infringement of any patent, your patent license for the software granted under these terms ends immediately. 
If your company makes such a claim, your patent license ends immediately for work on behalf of your company.\n\n## Violations\n\nThe first time you are notified in writing that you have violated any of these terms, or done anything with the software not covered by your licenses, your licenses can nonetheless continue if you come into full compliance with these terms, and take practical steps to correct past violations, within 32 days of receiving notice.  Otherwise, all your licenses end immediately.\n\n## No Liability\n\n***As far as the law allows, the software comes as is, without any warranty or condition, and the licensor will not be liable to you for any damages arising out of these terms or the use or nature of the software, under any kind of legal claim.***\n\n## Definitions\n\nThe **licensor** is the individual or entity offering these terms, and the **software** is the software the licensor makes available under these terms.\n\n**You** refers to the individual or entity agreeing to these terms.\n\n**Your company** is any legal entity, sole proprietorship, or other kind of organization that you work for, plus all organizations that have control over, are under the control of, or are under common control with that organization.  **Control** means ownership of substantially all the assets of an entity, or the power to direct its management and policies by vote, contract, or otherwise.  Control can be direct or indirect.\n\n**Your licenses** are all the licenses granted to you for the software under these terms.\n\n**Use** means anything you do with the software requiring one of your licenses.\n"
  },
  {
    "path": "README.md",
    "content": "# GitNexus\n⚠️ Important Notice:** GitNexus has NO official cryptocurrency, token, or coin. Any token/coin using the GitNexus name on Pump.fun or any other platform is **not affiliated with, endorsed by, or created by** this project or its maintainers. Do not purchase any cryptocurrency claiming association with GitNexus.\n\n<div align=\"center\">\n\n  <a href=\"https://trendshift.io/repositories/19809\" target=\"_blank\">\n    <img src=\"https://trendshift.io/api/badge/repositories/19809\" alt=\"abhigyanpatwari%2FGitNexus | Trendshift\" style=\"width: 250px; height: 55px;\" width=\"250\" height=\"55\"/>\n  </a>\n\n  <h2>Join the official Discord to discuss ideas, issues etc!</h2>\n\n  <a href=\"https://discord.gg/AAsRVT6fGb\">\n    <img src=\"https://img.shields.io/discord/1477255801545429032?color=5865F2&logo=discord&logoColor=white\" alt=\"Discord\"/>\n  </a>\n  <a href=\"https://www.npmjs.com/package/gitnexus\">\n    <img src=\"https://img.shields.io/npm/v/gitnexus.svg\" alt=\"npm version\"/>\n  </a>\n  <a href=\"https://polyformproject.org/licenses/noncommercial/1.0.0/\">\n    <img src=\"https://img.shields.io/badge/License-PolyForm%20Noncommercial-blue.svg\" alt=\"License: PolyForm Noncommercial\"/>\n  </a>\n\n</div>\n\n**Building nervous system for agent context.**\n\nIndexes any codebase into a knowledge graph — every dependency, call chain, cluster, and execution flow — then exposes it through smart tools so AI agents never miss code.\n\n\n\n\nhttps://github.com/user-attachments/assets/172685ba-8e54-4ea7-9ad1-e31a3398da72\n\n\n\n> *Like DeepWiki, but deeper.* DeepWiki helps you *understand* code. GitNexus lets you *analyze* it — because a knowledge graph tracks every relationship, not just descriptions.\n\n**TL;DR:** The **Web UI** is a quick way to chat with any repo. 
The **CLI + MCP** is how you make your AI agent actually reliable — it gives Cursor, Claude Code, and friends a deep architectural view of your codebase so they stop missing dependencies, breaking call chains, and shipping blind edits. Even smaller models get full architectural clarity, letting them compete with goliath models.\n\n---\n\n## Star History\n\n[![Star History Chart](https://api.star-history.com/svg?repos=abhigyanpatwari/GitNexus&type=date&legend=top-left)](https://www.star-history.com/#abhigyanpatwari/GitNexus&type=date&legend=top-left)\n\n## Two Ways to Use GitNexus\n\n|                   | **CLI + MCP**                                            | **Web UI**                                             |\n| ----------------- | -------------------------------------------------------------- | ------------------------------------------------------------ |\n| **What**    | Index repos locally, connect AI agents via MCP                 | Visual graph explorer + AI chat in browser                   |\n| **For**     | Daily development with Cursor, Claude Code, Windsurf, OpenCode, Codex | Quick exploration, demos, one-off analysis                   |\n| **Scale**   | Full repos, any size                                           | Limited by browser memory (~5k files), or unlimited via backend mode |\n| **Install** | `npm install -g gitnexus`                                    | No install — [gitnexus.vercel.app](https://gitnexus.vercel.app) |\n| **Storage** | LadybugDB native (fast, persistent)                               | LadybugDB WASM (in-memory, per session)                         |\n| **Parsing** | Tree-sitter native bindings                                    | Tree-sitter WASM                                             |\n| **Privacy** | Everything local, no network                                   | Everything in-browser, no server                             |\n\n> **Bridge mode:** `gitnexus serve` connects the two — the web UI auto-detects the 
local server and can browse all your CLI-indexed repos without re-uploading or re-indexing.\n\n---\n\n## CLI + MCP (recommended)\n\nThe CLI indexes your repository and runs an MCP server that gives AI agents deep codebase awareness.\n\n### Quick Start\n\n```bash\n# Index your repo (run from repo root)\nnpx gitnexus analyze\n```\n\nThat's it. This indexes the codebase, installs agent skills, registers Claude Code hooks, and creates `AGENTS.md` / `CLAUDE.md` context files — all in one command.\n\nTo configure MCP for your editor, run `npx gitnexus setup` once — or set it up manually below.\n\n### MCP Setup\n\n`gitnexus setup` auto-detects your editors and writes the correct global MCP config. You only need to run it once.\n\n### Editor Support\n\n| Editor                | MCP | Skills | Hooks (auto-augment) | Support        |\n| --------------------- | --- | ------ | -------------------- | -------------- |\n| **Claude Code** | Yes | Yes    | Yes (PreToolUse + PostToolUse) | **Full** |\n| **Cursor**      | Yes | Yes    | —                   | MCP + Skills   |\n| **Windsurf**    | Yes | —     | —                   | MCP            |\n| **OpenCode**    | Yes | Yes    | —                   | MCP + Skills   |\n| **Codex**       | Yes | —     | —                   | MCP            |\n\n> **Claude Code** gets the deepest integration: MCP tools + agent skills + PreToolUse hooks that enrich searches with graph context + PostToolUse hooks that auto-reindex after commits.\n\n### Community Integrations\n\n| Agent | Install | Source |\n|-------|---------|--------|\n| [pi](https://pi.dev) | `pi install npm:pi-gitnexus` | [pi-gitnexus](https://github.com/tintinweb/pi-gitnexus) |\n\nIf you prefer manual configuration:\n\n**Claude Code** (full support — MCP + skills + hooks):\n\n```bash\nclaude mcp add gitnexus -- npx -y gitnexus@latest mcp\n```\n\n**Cursor** (`~/.cursor/mcp.json` — global, works for all projects):\n\n```json\n{\n  \"mcpServers\": {\n    \"gitnexus\": {\n      
\"command\": \"npx\",\n      \"args\": [\"-y\", \"gitnexus@latest\", \"mcp\"]\n    }\n  }\n}\n```\n\n**OpenCode** (`~/.config/opencode/config.json`):\n\n```json\n{\n  \"mcp\": {\n    \"gitnexus\": {\n      \"command\": \"npx\",\n      \"args\": [\"-y\", \"gitnexus@latest\", \"mcp\"]\n    }\n  }\n}\n```\n\n**Codex** (`~/.codex/config.toml` for system scope, or `.codex/config.toml` for project scope):\n\n```toml\n[mcp_servers.gitnexus]\ncommand = \"npx\"\nargs = [\"-y\", \"gitnexus@latest\", \"mcp\"]\n```\n\n### CLI Commands\n\n```bash\ngitnexus setup                    # Configure MCP for your editors (one-time)\ngitnexus analyze [path]           # Index a repository (or update stale index)\ngitnexus analyze --force          # Force full re-index\ngitnexus analyze --skills         # Generate repo-specific skill files from detected communities\ngitnexus analyze --skip-embeddings  # Skip embedding generation (faster)\ngitnexus analyze --embeddings     # Enable embedding generation (slower, better search)\ngitnexus analyze --verbose        # Log skipped files when parsers are unavailable\ngitnexus mcp                     # Start MCP server (stdio) — serves all indexed repos\ngitnexus serve                   # Start local HTTP server (multi-repo) for web UI connection\ngitnexus list                    # List all indexed repositories\ngitnexus status                  # Show index status for current repo\ngitnexus clean                   # Delete index for current repo\ngitnexus clean --all --force     # Delete all indexes\ngitnexus wiki [path]             # Generate repository wiki from knowledge graph\ngitnexus wiki --model <model>    # Wiki with custom LLM model (default: gpt-4o-mini)\ngitnexus wiki --base-url <url>   # Wiki with custom LLM API base URL\n```\n\n### What Your AI Agent Gets\n\n**7 tools** exposed via MCP:\n\n| Tool               | What It Does                                                      | `repo` Param |\n| ------------------ | 
----------------------------------------------------------------- | -------------- |\n| `list_repos`     | Discover all indexed repositories                                 | —             |\n| `query`          | Process-grouped hybrid search (BM25 + semantic + RRF)             | Optional       |\n| `context`        | 360-degree symbol view — categorized refs, process participation | Optional       |\n| `impact`         | Blast radius analysis with depth grouping and confidence          | Optional       |\n| `detect_changes` | Git-diff impact — maps changed lines to affected processes       | Optional       |\n| `rename`         | Multi-file coordinated rename with graph + text search            | Optional       |\n| `cypher`         | Raw Cypher graph queries                                          | Optional       |\n\n> When only one repo is indexed, the `repo` parameter is optional. With multiple repos, specify which one: `query({query: \"auth\", repo: \"my-app\"})`.\n\n**Resources** for instant context:\n\n| Resource                                  | Purpose                                              |\n| ----------------------------------------- | ---------------------------------------------------- |\n| `gitnexus://repos`                      | List all indexed repositories (read this first)      |\n| `gitnexus://repo/{name}/context`        | Codebase stats, staleness check, and available tools |\n| `gitnexus://repo/{name}/clusters`       | All functional clusters with cohesion scores         |\n| `gitnexus://repo/{name}/cluster/{name}` | Cluster members and details                          |\n| `gitnexus://repo/{name}/processes`      | All execution flows                                  |\n| `gitnexus://repo/{name}/process/{name}` | Full process trace with steps                        |\n| `gitnexus://repo/{name}/schema`         | Graph schema for Cypher queries                      |\n\n**2 MCP prompts** for guided workflows:\n\n| Prompt            | 
What It Does                                                              |\n| ----------------- | ------------------------------------------------------------------------- |\n| `detect_impact` | Pre-commit change analysis — scope, affected processes, risk level       |\n| `generate_map`  | Architecture documentation from the knowledge graph with mermaid diagrams |\n\n**4 agent skills** installed to `.claude/skills/` automatically:\n\n- **Exploring** — Navigate unfamiliar code using the knowledge graph\n- **Debugging** — Trace bugs through call chains\n- **Impact Analysis** — Analyze blast radius before changes\n- **Refactoring** — Plan safe refactors using dependency mapping\n\n**Repo-specific skills** generated with `--skills`:\n\nWhen you run `gitnexus analyze --skills`, GitNexus detects the functional areas of your codebase (via Leiden community detection) and generates a `SKILL.md` file for each one under `.claude/skills/generated/`. Each skill describes a module's key files, entry points, execution flows, and cross-area connections — so your AI agent gets targeted context for the exact area of code you're working in. Skills are regenerated on each `--skills` run to stay current with the codebase.\n\n---\n\n## Multi-Repo MCP Architecture\n\nGitNexus uses a **global registry** so one MCP server can serve multiple indexed repos. 
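\n\nA sketch of what a registry entry might look like (the field names here are hypothetical — the actual schema may differ):\n\n```json\n{\n  \"repos\": {\n    \"my-app\": { \"path\": \"/home/me/my-app\", \"indexedAt\": \"2025-01-15T10:00:00Z\" }\n  }\n}\n```\n\n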
No per-project MCP config needed — set it up once and it works everywhere.\n\n```mermaid\nflowchart TD\n    subgraph CLI [CLI Commands]\n        Setup[\"gitnexus setup\"]\n        Analyze[\"gitnexus analyze\"]\n        Clean[\"gitnexus clean\"]\n        List[\"gitnexus list\"]\n    end\n\n    subgraph Registry [\"~/.gitnexus/\"]\n        RegFile[\"registry.json\"]\n    end\n\n    subgraph Repos [Project Repos]\n        RepoA[\".gitnexus/ in repo A\"]\n        RepoB[\".gitnexus/ in repo B\"]\n    end\n\n    subgraph MCP [MCP Server]\n        Server[\"server.ts\"]\n        Backend[\"LocalBackend\"]\n        Pool[\"Connection Pool\"]\n        ConnA[\"LadybugDB conn A\"]\n        ConnB[\"LadybugDB conn B\"]\n    end\n\n    Setup -->|\"writes global MCP config\"| CursorConfig[\"~/.cursor/mcp.json\"]\n    Analyze -->|\"registers repo\"| RegFile\n    Analyze -->|\"stores index\"| RepoA\n    Clean -->|\"unregisters repo\"| RegFile\n    List -->|\"reads\"| RegFile\n    Server -->|\"reads registry\"| RegFile\n    Server --> Backend\n    Backend --> Pool\n    Pool -->|\"lazy open\"| ConnA\n    Pool -->|\"lazy open\"| ConnB\n    ConnA -->|\"queries\"| RepoA\n    ConnB -->|\"queries\"| RepoB\n```\n\n**How it works:** Each `gitnexus analyze` stores the index in `.gitnexus/` inside the repo (portable, gitignored) and registers a pointer in `~/.gitnexus/registry.json`. When an AI agent starts, the MCP server reads the registry and can serve any indexed repo. LadybugDB connections are opened lazily on first query and evicted after 5 minutes of inactivity (max 5 concurrent). If only one repo is indexed, the `repo` parameter is optional on all tools — agents don't need to change anything.\n\n---\n\n## Web UI (browser-based)\n\nA fully client-side graph explorer and AI chat. 
No server, no install — your code never leaves the browser.\n\n**Try it now:** [gitnexus.vercel.app](https://gitnexus.vercel.app) — drag & drop a ZIP and start exploring.\n\n<img width=\"2550\" height=\"1343\" alt=\"gitnexus_img\" src=\"https://github.com/user-attachments/assets/cc5d637d-e0e5-48e6-93ff-5bcfdb929285\" />\n\nOr run locally:\n\n```bash\ngit clone https://github.com/abhigyanpatwari/gitnexus.git\ncd gitnexus/gitnexus-web\nnpm install\nnpm run dev\n```\n\nThe web UI uses the same indexing pipeline as the CLI but runs entirely in WebAssembly (Tree-sitter WASM, LadybugDB WASM, in-browser embeddings). It's great for quick exploration but limited by browser memory for larger repos.\n\n**Local Backend Mode:** Run `gitnexus serve` and open the web UI locally — it auto-detects the server and shows all your indexed repos, with full AI chat support. No need to re-upload or re-index. The agent's tools (Cypher queries, search, code navigation) route through the backend HTTP API automatically.\n\n---\n\n## The Problem GitNexus Solves\n\nTools like **Cursor**, **Claude Code**, **Cline**, **Roo Code**, and **Windsurf** are powerful — but they don't truly know your codebase structure.\n\n**What happens:**\n\n1. AI edits `UserService.validate()`\n2. Doesn't know 47 functions depend on its return type\n3. **Breaking changes ship**\n\n### Traditional Graph RAG vs GitNexus\n\nTraditional approaches give the LLM raw graph edges and hope it explores enough. 
GitNexus **precomputes structure at index time** — clustering, tracing, scoring — so tools return complete context in one call:\n\n```mermaid\nflowchart TB\n    subgraph Traditional[\"Traditional Graph RAG\"]\n        direction TB\n        U1[\"User: What depends on UserService?\"]\n        U1 --> LLM1[\"LLM receives raw graph\"]\n        LLM1 --> Q1[\"Query 1: Find callers\"]\n        Q1 --> Q2[\"Query 2: What files?\"]\n        Q2 --> Q3[\"Query 3: Filter tests?\"]\n        Q3 --> Q4[\"Query 4: High-risk?\"]\n        Q4 --> OUT1[\"Answer after 4+ queries\"]\n    end\n\n    subgraph GN[\"GitNexus Smart Tools\"]\n        direction TB\n        U2[\"User: What depends on UserService?\"]\n        U2 --> TOOL[\"impact UserService upstream\"]\n        TOOL --> PRECOMP[\"Pre-structured response:\n        8 callers, 3 clusters, all 90%+ confidence\"]\n        PRECOMP --> OUT2[\"Complete answer, 1 query\"]\n    end\n```\n\n**Core innovation: Precomputed Relational Intelligence**\n\n- **Reliability** — LLM can't miss context, it's already in the tool response\n- **Token efficiency** — No 10-query chains to understand one function\n- **Model democratization** — Smaller LLMs work because tools do the heavy lifting\n\n---\n\n## How It Works\n\nGitNexus builds a complete knowledge graph of your codebase through a multi-phase indexing pipeline:\n\n1. **Structure** — Walks the file tree and maps folder/file relationships\n2. **Parsing** — Extracts functions, classes, methods, and interfaces using Tree-sitter ASTs\n3. **Resolution** — Resolves imports, function calls, heritage, constructor inference, and `self`/`this` receiver types across files with language-aware logic\n4. **Clustering** — Groups related symbols into functional communities\n5. **Processes** — Traces execution flows from entry points through call chains\n6. 
**Search** — Builds hybrid search indexes for fast retrieval\n\n### Supported Languages\n\n| Language | Imports | Named Bindings | Exports | Heritage | Type Annotations | Constructor Inference | Config | Frameworks | Entry Points |\n|----------|---------|----------------|---------|----------|-----------------|---------------------|--------|------------|-------------|\n| TypeScript | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |\n| JavaScript | ✓ | ✓ | ✓ | ✓ | — | ✓ | ✓ | ✓ | ✓ |\n| Python | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |\n| Java | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | — | ✓ | ✓ |\n| Kotlin | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | — | ✓ | ✓ |\n| C# | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |\n| Go | ✓ | — | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |\n| Rust | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | — | ✓ | ✓ |\n| PHP | ✓ | ✓ | ✓ | — | ✓ | ✓ | ✓ | ✓ | ✓ |\n| Ruby | ✓ | — | ✓ | ✓ | — | ✓ | — | ✓ | ✓ |\n| Swift | — | — | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |\n| C | — | — | ✓ | — | ✓ | ✓ | — | ✓ | ✓ |\n| C++ | — | — | ✓ | ✓ | ✓ | ✓ | — | ✓ | ✓ |\n\n**Imports** — cross-file import resolution · **Named Bindings** — `import { X as Y }` / re-export tracking · **Exports** — public/exported symbol detection · **Heritage** — class inheritance, interfaces, mixins · **Type Annotations** — explicit type extraction for receiver resolution · **Constructor Inference** — infer receiver type from constructor calls (`self`/`this` resolution included for all languages) · **Config** — language toolchain config parsing (tsconfig, go.mod, etc.) 
· **Frameworks** — AST-based framework pattern detection · **Entry Points** — entry point scoring heuristics\n\n---\n\n## Tool Examples\n\n### Impact Analysis\n\n```\nimpact({target: \"UserService\", direction: \"upstream\", minConfidence: 0.8})\n\nTARGET: Class UserService (src/services/user.ts)\n\nUPSTREAM (what depends on this):\n  Depth 1 (WILL BREAK):\n    handleLogin [CALLS 90%] -> src/api/auth.ts:45\n    handleRegister [CALLS 90%] -> src/api/auth.ts:78\n    UserController [CALLS 85%] -> src/controllers/user.ts:12\n  Depth 2 (LIKELY AFFECTED):\n    authRouter [IMPORTS] -> src/routes/auth.ts\n```\n\nOptions: `maxDepth`, `minConfidence`, `relationTypes` (`CALLS`, `IMPORTS`, `EXTENDS`, `IMPLEMENTS`), `includeTests`\n\n### Process-Grouped Search\n\n```\nquery({query: \"authentication middleware\"})\n\nprocesses:\n  - summary: \"LoginFlow\"\n    priority: 0.042\n    symbol_count: 4\n    process_type: cross_community\n    step_count: 7\n\nprocess_symbols:\n  - name: validateUser\n    type: Function\n    filePath: src/auth/validate.ts\n    process_id: proc_login\n    step_index: 2\n\ndefinitions:\n  - name: AuthConfig\n    type: Interface\n    filePath: src/types/auth.ts\n```\n\n### Context (360-degree Symbol View)\n\n```\ncontext({name: \"validateUser\"})\n\nsymbol:\n  uid: \"Function:validateUser\"\n  kind: Function\n  filePath: src/auth/validate.ts\n  startLine: 15\n\nincoming:\n  calls: [handleLogin, handleRegister, UserController]\n  imports: [authRouter]\n\noutgoing:\n  calls: [checkPassword, createSession]\n\nprocesses:\n  - name: LoginFlow (step 2/7)\n  - name: RegistrationFlow (step 3/5)\n```\n\n### Detect Changes (Pre-Commit)\n\n```\ndetect_changes({scope: \"all\"})\n\nsummary:\n  changed_count: 12\n  affected_count: 3\n  changed_files: 4\n  risk_level: medium\n\nchanged_symbols: [validateUser, AuthService, ...]\naffected_processes: [LoginFlow, RegistrationFlow, ...]\n```\n\n### Rename (Multi-File)\n\n```\nrename({symbol_name: \"validateUser\", new_name: 
\"verifyUser\", dry_run: true})\n\nstatus: success\nfiles_affected: 5\ntotal_edits: 8\ngraph_edits: 6     (high confidence)\ntext_search_edits: 2  (review carefully)\nchanges: [...]\n```\n\n### Cypher Queries\n\n```cypher\n// Find what calls auth functions with high confidence\nMATCH (c:Community {heuristicLabel: 'Authentication'})<-[:CodeRelation {type: 'MEMBER_OF'}]-(fn)\nMATCH (caller)-[r:CodeRelation {type: 'CALLS'}]->(fn)\nWHERE r.confidence > 0.8\nRETURN caller.name, fn.name, r.confidence\nORDER BY r.confidence DESC\n```\n\n---\n\n## Wiki Generation\n\nGenerate LLM-powered documentation from your knowledge graph:\n\n```bash\n# Requires an LLM API key (OPENAI_API_KEY, etc.)\ngitnexus wiki\n\n# Use a custom model or provider\ngitnexus wiki --model gpt-4o\ngitnexus wiki --base-url https://api.anthropic.com/v1\n\n# Force full regeneration\ngitnexus wiki --force\n```\n\nThe wiki generator reads the indexed graph structure, groups files into modules via LLM, generates per-module documentation pages, and creates an overview page — all with cross-references to the knowledge graph.\n\n---\n\n## Tech Stack\n\n| Layer                     | CLI                                   | Web                                     |\n| ------------------------- | ------------------------------------- | --------------------------------------- |\n| **Runtime**         | Node.js (native)                      | Browser (WASM)                          |\n| **Parsing**         | Tree-sitter native bindings           | Tree-sitter WASM                        |\n| **Database**        | LadybugDB native                      | LadybugDB WASM                          |\n| **Embeddings**      | HuggingFace transformers.js (GPU/CPU) | transformers.js (WebGPU/WASM)           |\n| **Search**          | BM25 + semantic + RRF                 | BM25 + semantic + RRF                   |\n| **Agent Interface** | MCP (stdio)                           | LangChain ReAct agent                   |\n| 
**Visualization**   | —                                    | Sigma.js + Graphology (WebGL)           |\n| **Frontend**        | —                                    | React 18, TypeScript, Vite, Tailwind v4 |\n| **Clustering**      | Graphology                            | Graphology                              |\n| **Concurrency**     | Worker threads + async                | Web Workers + Comlink                   |\n\n---\n\n## Roadmap\n\n### Actively Building\n\n- [ ] **LLM Cluster Enrichment** — Semantic cluster names via LLM API\n- [ ] **AST Decorator Detection** — Parse @Controller, @Get, etc.\n- [ ] **Incremental Indexing** — Only re-index changed files\n\n### Recently Completed\n\n- [X] Constructor-Inferred Type Resolution, `self`/`this` Receiver Mapping\n- [X] Wiki Generation, Multi-File Rename, Git-Diff Impact Analysis\n- [X] Process-Grouped Search, 360-Degree Context, Claude Code Hooks\n- [X] Multi-Repo MCP, Zero-Config Setup, 13 Language Support\n- [X] Community Detection, Process Detection, Confidence Scoring\n- [X] Hybrid Search, Vector Index\n\n---\n\n## Security & Privacy\n\n- **CLI**: Everything runs locally on your machine. No network calls. Index stored in `.gitnexus/` (gitignored). Global registry at `~/.gitnexus/` stores only paths and metadata.\n- **Web**: Everything runs in your browser. No code uploaded to any server. API keys stored in localStorage only.\n- Open source — audit the code yourself.\n\n---\n\n## Acknowledgments\n\n- [Tree-sitter](https://tree-sitter.github.io/) — AST parsing\n- [LadybugDB](https://ladybugdb.com/) — Embedded graph database with vector support (formerly KuzuDB)\n- [Sigma.js](https://www.sigmajs.org/) — WebGL graph rendering\n- [transformers.js](https://huggingface.co/docs/transformers.js) — Browser ML\n- [Graphology](https://graphology.github.io/) — Graph data structures\n- [MCP](https://modelcontextprotocol.io/) — Model Context Protocol\n"
  },
  {
    "path": "compound-engineering.local.md",
    "content": "---\nreview_agents: [kieran-typescript-reviewer, pattern-recognition-specialist, architecture-strategist, data-integrity-guardian, security-sentinel, performance-oracle, code-simplicity-reviewer]\nplan_review_agents: [kieran-typescript-reviewer, architecture-strategist, code-simplicity-reviewer]\nvoltagent_agents: [voltagent-lang:typescript-pro, voltagent-qa-sec:security-auditor, voltagent-data-ai:database-optimizer]\n---\n\n# Review Context\n\n## Project Overview\nGitNexus is a code intelligence tool that builds a knowledge graph from source code using tree-sitter AST parsing across 12 languages and KuzuDB for graph storage. Two packages: `gitnexus/` (CLI/MCP, TypeScript) and `gitnexus-web/` (browser).\n\n## Cross-Language Pattern Consistency (pattern-recognition-specialist)\n- 12 language-specific type extractors in `gitnexus/src/core/ingestion/type-extractors/` must follow identical patterns for: async unwrapping, constructor binding, namespace handling, nullable type stripping, for-loop element typing.\n- Past bugs: C#/Rust missing `await_expression` unwrapping that TypeScript handled correctly; PHP backslash namespace splitting inconsistent with other languages' `::` / `.` splitting.\n- When reviewing type extractor changes, verify the same pattern exists in ALL applicable language files — asymmetry is the #1 source of bugs.\n\n## Data Integrity (data-integrity-guardian)\n- KuzuDB graph operations: schema in `gitnexus/src/core/kuzu/schema.ts`, adapter in `kuzu-adapter.ts`.\n- The ingestion pipeline writes symbols and relationships to the graph — changes to node/relation schemas or the ingestion pipeline can corrupt the index.\n- Known issue: KuzuDB `close()` hangs on Linux due to C++ destructor — use `detachKuzu()` pattern.\n- `lbug-adapter.ts` fallback path needs quote/newline escaping for Cypher injection prevention.\n\n## Security (security-sentinel)\n- Cypher query construction in `lbug-adapter.ts` and `kuzu-adapter.ts` — watch for 
injection via unescaped user-provided symbol names.\n- CLI accepts `--repo` parameter and file paths — validate against path traversal.\n- MCP server exposes tools to external AI agents — all tool inputs are untrusted.\n\n## Performance (performance-oracle)\n- Tree-sitter buffer size is adaptive (512KB–32MB) via `getTreeSitterBufferSize()` in `constants.ts`.\n- The ingestion pipeline processes entire repositories — O(n) per file with potential O(n²) in cross-file resolution.\n- KuzuDB batch inserts vs individual inserts matter for large repos.\n\n## Architecture (architecture-strategist)\n- Ingestion pipeline phases: structure → parsing → imports → calls → heritage → processes → type resolution.\n- Shared modules: `export-detection.ts`, `constants.ts`, `utils.ts` — changes here have wide blast radius.\n- `gitnexus-web` package drifts behind CLI — flag if a change should be mirrored.\n\n## Voltagent Supplementary Agents\n\nInvoke these via the Agent tool alongside `/ce:review` for deeper specialist analysis. These cover gaps that compound-engineering agents don't:\n\n### voltagent-lang:typescript-pro\n**When:** Changes touch type-resolution logic, generics, conditional types, or complex type-level programming in `type-env.ts`, `type-extractors/*.ts`, or `types.ts`.\n**Why:** The type resolution system uses advanced TypeScript patterns (discriminated unions, mapped types, recursive generics) that benefit from deep TS type-system review beyond what kieran-typescript-reviewer covers.\n\n### voltagent-qa-sec:security-auditor\n**When:** Changes touch MCP tool handlers, Cypher query construction, CLI argument parsing, or any code that processes external input.\n**Why:** GitNexus is an MCP server — all tool inputs come from untrusted AI agents. Systematic OWASP-level audit catches injection vectors that spot-checking misses. 
Past finding: `lbug-adapter.ts` fallback path had unescaped newlines in Cypher queries.\n\n### voltagent-data-ai:database-optimizer\n**When:** Changes touch `kuzu-adapter.ts`, `schema.ts`, `lbug-adapter.ts`, or any Cypher query construction/execution.\n**Why:** No CE agent specializes in graph database optimization. KuzuDB batch insert patterns, index usage, and query planning directly affect analysis speed on large repos.\n\n## Review Tooling\n- Use `gitnexus_impact()` before approving changes to any symbol — check d=1 (WILL BREAK) callers.\n- Use `gitnexus_detect_changes({scope: \"compare\", base_ref: \"main\"})` to map PR diffs to affected execution flows.\n- Use claude-mem to surface past architectural decisions relevant to the code under review.\n"
  },
  {
    "path": "eval/.gitignore",
    "content": "# Evaluation results (large, should not be committed)\nresults/\n*.traj.json\npreds.json\n\n# Python\n__pycache__/\n*.pyc\n*.egg-info/\n.eggs/\ndist/\nbuild/\n\n# Environment\n.env\n.venv/\n"
  },
  {
    "path": "eval/README.md",
    "content": "# GitNexus SWE-bench Evaluation Harness\n\nEvaluate whether GitNexus code intelligence improves AI agent performance on real software engineering tasks. Runs SWE-bench instances across multiple models and compares baseline (no graph) vs GitNexus-enhanced configurations.\n\n## What This Tests\n\n**Hypothesis**: Giving AI agents structural code intelligence (call graphs, execution flows, blast radius analysis) improves their ability to resolve real GitHub issues — measured by resolve rate, cost, and efficiency.\n\n**Evaluation modes:**\n\n| Mode | What the agent gets |\n|------|-------------------|\n| `baseline` | Standard bash tools (grep, find, cat, sed) — control group |\n| `native` | Baseline + explicit GitNexus tools via eval-server (~100ms) |\n| `native_augment` | Native tools + grep results automatically enriched with graph context (**recommended**) |\n\n> **Recommended**: Use `native_augment` mode. It mirrors the Claude Code model — the agent gets both explicit GitNexus tools (fast bash commands) AND automatic enrichment of grep results with callers, callees, and execution flows. The agent decides when to use explicit tools vs rely on enriched search output.\n\n**Models supported:**\n\n- Claude 3.5 Haiku, Claude Sonnet 4, Claude Opus 4\n- MiniMax M1 2.5\n- GLM 4.7, GLM 5\n- Any model supported by litellm (add a YAML config)\n\n## Prerequisites\n\n- Python 3.11+\n- Docker (for SWE-bench containers)\n- Node.js 18+ (for GitNexus)\n- API keys for your chosen models\n\n## Setup\n\n```bash\ncd eval\n\n# Install dependencies\npip install -e .\n\n# Set up API keys — copy the template and fill in your keys\ncp .env.example .env\n# Then edit .env and paste your key(s)\n```\n\nAll models are routed through **OpenRouter** by default, so a single `OPENROUTER_API_KEY` is all you need. 
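\n\nFor example, a minimal `.env` (the key value below is a placeholder):\n\n```bash\n# .env — a single OpenRouter key covers the default model configs\nOPENROUTER_API_KEY=sk-or-v1-your-key-here\n```\n\n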
To use provider APIs directly (Anthropic, ZhipuAI, etc.), edit the model YAML in `configs/models/` and set the corresponding key in `.env`.\n\n```bash\n# Pull SWE-bench Docker images (pulled on-demand, but you can pre-pull)\ndocker pull swebench/sweb.eval.x86_64.django_1776_django-16527:latest\n```\n\n## Quick Start\n\n### Debug a single instance\n\n```bash\n# Fastest way to verify everything works\npython run_eval.py debug -m claude-haiku -i django__django-16527 --subset lite\n```\n\n### Run a single configuration\n\n```bash\n# 5 instances, Claude Sonnet, native_augment mode (default)\npython run_eval.py single -m claude-sonnet --subset lite --slice 0:5\n\n# Baseline comparison (no GitNexus)\npython run_eval.py single -m claude-sonnet --mode baseline --subset lite --slice 0:5\n\n# Full Lite benchmark, 4 parallel workers\npython run_eval.py single -m claude-sonnet --subset lite -w 4\n```\n\n### Run the full matrix\n\n```bash\n# All models x all modes\npython run_eval.py matrix --subset lite -w 4\n\n# Key comparison: baseline vs native_augment\npython run_eval.py matrix -m claude-sonnet -m claude-haiku --modes baseline --modes native_augment --subset lite --slice 0:50\n```\n\n### Analyze results\n\n```bash\n# Summary table\npython -m analysis.analyze_results results/\n\n# Compare modes for a specific model\npython -m analysis.analyze_results compare-modes results/ -m claude-sonnet\n\n# GitNexus tool usage analysis\npython -m analysis.analyze_results gitnexus-usage results/\n\n# Export as CSV for further analysis\npython -m analysis.analyze_results summary results/ --format csv > results.csv\n\n# Run official SWE-bench test evaluation\npython -m analysis.analyze_results summary results/ --swebench-eval\n```\n\n### List available configurations\n\n```bash\npython run_eval.py list-configs\n```\n\n## Architecture\n\n```\neval/\n  run_eval.py              # Main entry point (single, matrix, debug commands)\n  agents/\n    gitnexus_agent.py      # GitNexusAgent: extends 
DefaultAgent with augmentation + metrics\n  environments/\n    gitnexus_docker.py     # Docker env with GitNexus + eval-server + standalone tool scripts\n  bridge/\n    gitnexus_tools.sh      # Bash wrappers (legacy — now standalone scripts are installed directly)\n    mcp_bridge.py          # Legacy MCP bridge (kept for reference)\n  prompts/\n    system_baseline.jinja          # System: persona + format rules\n    instance_baseline.jinja        # Instance: task + workflow\n    system_native.jinja            # System: + GitNexus tool reference\n    instance_native.jinja          # Instance: + GitNexus debugging workflow\n    system_native_augment.jinja    # System: + GitNexus tools + grep enrichment docs\n    instance_native_augment.jinja  # Instance: + GitNexus workflow + risk assessment\n  configs/\n    models/                # Per-model YAML configs\n    modes/                 # Per-mode YAML configs (baseline, native, native_augment)\n  analysis/\n    analyze_results.py     # Post-run comparative analysis\n  results/                 # Output directory (gitignored)\n```\n\n## How It Works\n\n### Template structure\n\nmini-swe-agent requires two Jinja templates:\n- **system_template** → system message: persona, format rules, tool reference (static)\n- **instance_template** → first user message: task, workflow, rules, examples (contains `{{task}}`)\n\nEach mode has a `system_{mode}.jinja` + `instance_{mode}.jinja` pair. The agent loads both automatically based on the configured mode.\n\n### Per-instance flow\n\n1. Docker container starts with SWE-bench instance (repo at specific commit)\n2. **GitNexus setup**: Node.js + gitnexus installed, `gitnexus analyze` runs (or restores from cache)\n3. **Eval-server starts**: `gitnexus eval-server` daemon (persistent HTTP server, keeps LadybugDB warm)\n4. **Standalone tool scripts installed** in `/usr/local/bin/` — works with `subprocess.run` (no `.bashrc` needed)\n5. 
Agent runs with the configured model + system prompt + GitNexus tools\n6. Agent's patch is extracted as a git diff\n7. Metrics collected: cost, tokens, tool calls, GitNexus usage, augmentation stats\n\n### Tool architecture\n\n```\nAgent → bash command → /usr/local/bin/gitnexus-query\n  → curl localhost:4848/tool/query     (fast path: eval-server, ~100ms)\n  → npx gitnexus query                 (fallback: cold CLI, ~5-10s)\n```\n\nEach tool script in `/usr/local/bin/` is standalone — no sourcing, no env inheritance needed. This is critical because mini-swe-agent runs every command via `subprocess.run` in a fresh subshell.\n\n### Eval-server\n\nThe eval-server is a lightweight HTTP daemon that:\n- Keeps LadybugDB warm in memory (no cold start per tool call)\n- Returns LLM-friendly text (not raw JSON — saves tokens)\n- Includes next-step hints to guide tool chaining (query → context → impact → fix)\n- Auto-shuts down after idle timeout\n\n### Index caching\n\nSWE-bench repos repeat (Django has 200+ instances at different commits). The harness caches GitNexus indexes per `(repo, commit)` hash in `~/.gitnexus-eval-cache/` to avoid redundant re-indexing.\n\n### Grep augmentation (native_augment mode)\n\nWhen the agent runs `grep` or `rg`, the observation is post-processed: the agent class calls `gitnexus-augment` on the search pattern and appends `[GitNexus]` annotations showing callers, callees, and execution flows for matched symbols. 
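\n\nIllustratively (the exact annotation format is hypothetical, and the symbol names are invented):\n\n```\n$ grep -rn \"validateUser\" src/\nsrc/auth/validate.ts:15:export function validateUser(\n\n[GitNexus] validateUser — callers: handleLogin, handleRegister · calls: checkPassword · process: LoginFlow (step 2/7)\n```\n\n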
This mirrors the Claude Code / Cursor hook integration.\n\n## Adding Models\n\nCreate a YAML file in `configs/models/`:\n\n```yaml\n# configs/models/my-model.yaml\nmodel:\n  model_name: \"openrouter/provider/model-name\"\n  cost_tracking: \"ignore_errors\"  # if not in litellm's cost DB\n  model_kwargs:\n    max_tokens: 8192\n    temperature: 0\n```\n\nThe model name follows [litellm conventions](https://docs.litellm.ai/docs/providers).\n\n## Metrics Collected\n\n| Metric | Description |\n|--------|-------------|\n| Patch Rate | % of instances where agent produced a patch |\n| Resolve Rate | % of instances where patch passes tests (requires --swebench-eval) |\n| Total Cost | API cost across all instances |\n| Avg Cost/Instance | Cost efficiency |\n| API Calls | Number of LLM calls |\n| GN Tool Calls | How many GitNexus tools the agent used |\n| Augment Hits | How many grep/find results got enriched |\n| Augment Hit Rate | % of search commands that got useful enrichment |\n"
  },
  {
    "path": "eval/__init__.py",
    "content": "# GitNexus SWE-bench Evaluation Harness\n"
  },
  {
    "path": "eval/agents/__init__.py",
    "content": ""
  },
  {
    "path": "eval/agents/gitnexus_agent.py",
    "content": "\"\"\"\nGitNexus-Enhanced Agent for SWE-bench Evaluation\n\nExtends mini-swe-agent's DefaultAgent with:\n1. Native augment mode: GitNexus tools via eval-server + grep enrichment (recommended)\n2. Native mode: GitNexus tools via eval-server only\n3. Baseline mode: Pure mini-swe-agent (no GitNexus — control group)\n\nThe agent class itself is minimal — the heavy lifting is in:\n- Prompt selection (system + instance templates per mode)\n- Observation post-processing (grep result augmentation)\n- Metrics tracking (which tools the agent actually uses)\n\nTemplate structure (matches mini-swe-agent's expectations):\n  system_template  → system message: persona + format rules + tool reference\n  instance_template → first user message: task + workflow + rules + examples\n\"\"\"\n\nimport logging\nimport re\nimport time\nfrom enum import Enum\nfrom pathlib import Path\n\nfrom minisweagent import Environment, Model\nfrom minisweagent.agents.default import AgentConfig, DefaultAgent\n\nlogger = logging.getLogger(\"gitnexus_agent\")\n\nPROMPTS_DIR = Path(__file__).parent.parent / \"prompts\"\n\n\nclass GitNexusMode(str, Enum):\n    \"\"\"Evaluation modes for GitNexus integration.\"\"\"\n    BASELINE = \"baseline\"               # No GitNexus — pure mini-swe-agent\n    NATIVE = \"native\"                   # GitNexus tools via eval-server\n    NATIVE_AUGMENT = \"native_augment\"   # Native tools + grep enrichment (recommended)\n\n\nclass GitNexusAgentConfig(AgentConfig):\n    \"\"\"Extended config for GitNexus evaluation agent.\"\"\"\n    gitnexus_mode: GitNexusMode = GitNexusMode.BASELINE\n    augment_timeout: float = 5.0\n    augment_min_pattern_length: int = 3\n    track_gitnexus_usage: bool = True\n\n\nclass GitNexusAgent(DefaultAgent):\n    \"\"\"\n    Agent that optionally enriches its capabilities with GitNexus code intelligence.\n\n    In BASELINE mode, behaves identically to DefaultAgent.\n    In NATIVE mode, GitNexus tools are available as bash commands 
via eval-server.\n    In NATIVE_AUGMENT mode, GitNexus tools + automatic grep result enrichment.\n    \"\"\"\n\n    def __init__(self, model: Model, env: Environment, *, config_class: type = GitNexusAgentConfig, **kwargs):\n        mode = kwargs.get(\"gitnexus_mode\", GitNexusMode.BASELINE)\n        if isinstance(mode, str):\n            mode = GitNexusMode(mode)\n\n        # Load system template\n        system_file = PROMPTS_DIR / f\"system_{mode.value}.jinja\"\n        if system_file.exists() and \"system_template\" not in kwargs:\n            kwargs[\"system_template\"] = system_file.read_text()\n\n        # Load instance template\n        instance_file = PROMPTS_DIR / f\"instance_{mode.value}.jinja\"\n        if instance_file.exists() and \"instance_template\" not in kwargs:\n            kwargs[\"instance_template\"] = instance_file.read_text()\n\n        super().__init__(model, env, config_class=config_class, **kwargs)\n        self.gitnexus_mode = mode\n        self.gitnexus_metrics = GitNexusMetrics()\n\n    def execute_actions(self, message: dict) -> list[dict]:\n        \"\"\"Execute actions with optional GitNexus augmentation and tracking.\"\"\"\n        if self.config.track_gitnexus_usage:\n            self._track_tool_usage(message)\n\n        outputs = [self.env.execute(action) for action in message.get(\"extra\", {}).get(\"actions\", [])]\n\n        # Augment grep/find observations in NATIVE_AUGMENT mode\n        if self.gitnexus_mode == GitNexusMode.NATIVE_AUGMENT:\n            actions = message.get(\"extra\", {}).get(\"actions\", [])\n            for i, (action, output) in enumerate(zip(actions, outputs)):\n                augmented = self._maybe_augment(action, output)\n                if augmented:\n                    outputs[i] = augmented\n\n        return self.add_messages(\n            *self.model.format_observation_messages(message, outputs, self.get_template_vars())\n        )\n\n    def _maybe_augment(self, action: dict, output: dict) -> 
dict | None:\n        \"\"\"\n        If the action is a search command (grep, find, rg, ag), augment the output\n        with GitNexus knowledge graph context.\n        \"\"\"\n        command = action.get(\"command\", \"\")\n        if not command:\n            return None\n\n        pattern = self._extract_search_pattern(command)\n        if not pattern or len(pattern) < self.config.augment_min_pattern_length:\n            return None\n\n        start = time.time()\n        try:\n            augment_result = self.env.execute({\n                \"command\": f'gitnexus-augment \"{pattern}\" 2>&1 || true',\n                \"timeout\": self.config.augment_timeout,\n            })\n            elapsed = time.time() - start\n            self.gitnexus_metrics.augmentation_calls += 1\n            self.gitnexus_metrics.augmentation_time += elapsed\n\n            augment_text = augment_result.get(\"output\", \"\").strip()\n            if augment_text and \"[GitNexus]\" in augment_text:\n                original_output = output.get(\"output\", \"\")\n                output = dict(output)\n                output[\"output\"] = f\"{original_output}\\n\\n{augment_text}\"\n                self.gitnexus_metrics.augmentation_hits += 1\n                return output\n        except Exception as e:\n            logger.debug(f\"Augmentation failed for pattern '{pattern}': {e}\")\n            self.gitnexus_metrics.augmentation_errors += 1\n\n        return None\n\n    @staticmethod\n    def _extract_search_pattern(command: str) -> str | None:\n        \"\"\"Extract the search pattern from a grep/find/rg command.\"\"\"\n        patterns = [\n            r'(?:grep|rg|ag)\\s+(?:-[a-zA-Z]*\\s+)*[\"\\']([^\"\\']+)[\"\\']',\n            r'(?:grep|rg|ag)\\s+(?:-[a-zA-Z]*\\s+)*(\\S+)',\n        ]\n\n        for pat in patterns:\n            match = re.search(pat, command)\n            if match:\n                result = match.group(1)\n                if result.startswith(\"/\") or 
result.startswith(\".\"):\n                    continue\n                if result.startswith(\"-\"):\n                    continue\n                return result\n\n        return None\n\n    def _track_tool_usage(self, message: dict):\n        \"\"\"Track which GitNexus tools the agent uses.\"\"\"\n        for action in message.get(\"extra\", {}).get(\"actions\", []):\n            command = action.get(\"command\", \"\")\n            if \"gitnexus-query\" in command:\n                self.gitnexus_metrics.tool_calls[\"query\"] += 1\n            elif \"gitnexus-context\" in command:\n                self.gitnexus_metrics.tool_calls[\"context\"] += 1\n            elif \"gitnexus-impact\" in command:\n                self.gitnexus_metrics.tool_calls[\"impact\"] += 1\n            elif \"gitnexus-cypher\" in command:\n                self.gitnexus_metrics.tool_calls[\"cypher\"] += 1\n            elif \"gitnexus-overview\" in command:\n                self.gitnexus_metrics.tool_calls[\"overview\"] += 1\n\n    def serialize(self, *extra_dicts) -> dict:\n        \"\"\"Serialize with GitNexus-specific metrics.\"\"\"\n        gitnexus_data = {\n            \"info\": {\n                \"gitnexus\": {\n                    \"mode\": self.gitnexus_mode.value,\n                    \"metrics\": self.gitnexus_metrics.to_dict(),\n                },\n            },\n        }\n        return super().serialize(gitnexus_data, *extra_dicts)\n\n\nclass GitNexusMetrics:\n    \"\"\"Tracks GitNexus-specific metrics during evaluation.\"\"\"\n\n    def __init__(self):\n        self.tool_calls: dict[str, int] = {\n            \"query\": 0,\n            \"context\": 0,\n            \"impact\": 0,\n            \"cypher\": 0,\n            \"overview\": 0,\n        }\n        self.augmentation_calls: int = 0\n        self.augmentation_hits: int = 0\n        self.augmentation_errors: int = 0\n        self.augmentation_time: float = 0.0\n        self.index_time: float = 0.0\n\n    @property\n    
def total_tool_calls(self) -> int:\n        return sum(self.tool_calls.values())\n\n    def to_dict(self) -> dict:\n        return {\n            \"tool_calls\": dict(self.tool_calls),\n            \"total_tool_calls\": self.total_tool_calls,\n            \"augmentation_calls\": self.augmentation_calls,\n            \"augmentation_hits\": self.augmentation_hits,\n            \"augmentation_errors\": self.augmentation_errors,\n            \"augmentation_time_seconds\": round(self.augmentation_time, 2),\n            \"index_time_seconds\": round(self.index_time, 2),\n        }\n"
  },
  {
    "path": "eval/analysis/__init__.py",
    "content": ""
  },
  {
    "path": "eval/analysis/analyze_results.py",
    "content": "#!/usr/bin/env python3\n\"\"\"\nResults Analyzer for GitNexus SWE-bench Evaluation\n\nReads evaluation results and generates comparative analysis:\n- Resolve rate by model x mode\n- Cost comparison (total, per-instance)\n- Token/API call efficiency\n- GitNexus tool usage patterns\n- Augmentation hit rates\n\nUsage:\n    python -m analysis.analyze_results /path/to/results\n    python -m analysis.analyze_results /path/to/results --format markdown\n    python -m analysis.analyze_results /path/to/results --swebench-eval  # run actual test verification\n\"\"\"\n\nimport json\nimport logging\nimport os\nimport subprocess\nimport sys\nfrom pathlib import Path\nfrom typing import Any\n\nimport typer\nfrom rich.console import Console\nfrom rich.table import Table\n\nlogger = logging.getLogger(\"analyze_results\")\nconsole = Console()\napp = typer.Typer(rich_markup_mode=\"rich\", add_completion=False)\n\n\ndef load_run_results(results_dir: Path) -> dict[str, dict]:\n    \"\"\"\n    Load all run results from the results directory.\n    \n    Returns: {run_id: {summary, preds, instances}}\n    \"\"\"\n    runs = {}\n\n    for run_dir in sorted(results_dir.iterdir()):\n        if not run_dir.is_dir():\n            continue\n\n        run_id = run_dir.name\n        run_data: dict[str, Any] = {\"run_id\": run_id, \"dir\": run_dir}\n\n        # Load summary\n        summary_path = run_dir / \"summary.json\"\n        if summary_path.exists():\n            run_data[\"summary\"] = json.loads(summary_path.read_text())\n\n        # Load predictions\n        preds_path = run_dir / \"preds.json\"\n        if preds_path.exists():\n            run_data[\"preds\"] = json.loads(preds_path.read_text())\n\n        # Load individual trajectories for detailed metrics\n        run_data[\"trajectories\"] = {}\n        for traj_dir in run_dir.iterdir():\n            if not traj_dir.is_dir():\n                continue\n            for traj_file in traj_dir.glob(\"*.traj.json\"):\n    
            try:\n                    traj = json.loads(traj_file.read_text())\n                    instance_id = traj.get(\"instance_id\", traj_dir.name)\n                    run_data[\"trajectories\"][instance_id] = traj\n                except Exception:\n                    pass\n\n        if run_data.get(\"preds\") or run_data.get(\"summary\"):\n            runs[run_id] = run_data\n\n    return runs\n\n\ndef parse_run_id(run_id: str) -> tuple[str, str]:\n    \"\"\"Parse 'model_mode' into (model, mode).\"\"\"\n    # Handle multi-word model names like 'minimax-2.5'\n    # Modes are: baseline, mcp, augment, full\n    known_modes = {\"baseline\", \"mcp\", \"augment\", \"full\"}\n    parts = run_id.rsplit(\"_\", 1)\n    if len(parts) == 2 and parts[1] in known_modes:\n        return parts[0], parts[1]\n    return run_id, \"unknown\"\n\n\ndef compute_metrics(run_data: dict) -> dict:\n    \"\"\"Compute evaluation metrics for a single run.\"\"\"\n    preds = run_data.get(\"preds\", {})\n    summary = run_data.get(\"summary\", {})\n    trajectories = run_data.get(\"trajectories\", {})\n\n    n_instances = len(preds)\n    n_with_patch = sum(1 for p in preds.values() if p.get(\"model_patch\", \"\").strip())\n\n    # Cost and API call metrics from trajectories\n    costs = []\n    api_calls = []\n    gn_tool_calls = []\n    gn_augment_hits = []\n    gn_augment_calls = []\n\n    for instance_id, traj in trajectories.items():\n        info = traj.get(\"info\", {})\n        model_stats = info.get(\"model_stats\", {})\n        costs.append(model_stats.get(\"instance_cost\", 0))\n        api_calls.append(model_stats.get(\"api_calls\", 0))\n\n        gn = info.get(\"gitnexus\", {}).get(\"metrics\", {})\n        if gn:\n            gn_tool_calls.append(gn.get(\"total_tool_calls\", 0))\n            gn_augment_hits.append(gn.get(\"augmentation_hits\", 0))\n            gn_augment_calls.append(gn.get(\"augmentation_calls\", 0))\n\n    # Also try summary-level metrics\n    if not 
costs and summary:\n        results = summary.get(\"results\", [])\n        for r in results:\n            costs.append(r.get(\"cost\", 0))\n            api_calls.append(r.get(\"n_calls\", 0))\n            gn = r.get(\"gitnexus_metrics\", {})\n            if gn:\n                gn_tool_calls.append(gn.get(\"total_tool_calls\", 0))\n                gn_augment_hits.append(gn.get(\"augmentation_hits\", 0))\n                gn_augment_calls.append(gn.get(\"augmentation_calls\", 0))\n\n    total_cost = sum(costs)\n    total_calls = sum(api_calls)\n\n    return {\n        \"n_instances\": n_instances,\n        \"n_with_patch\": n_with_patch,\n        \"patch_rate\": n_with_patch / max(n_instances, 1),\n        \"total_cost\": total_cost,\n        \"avg_cost\": total_cost / max(n_instances, 1),\n        \"total_api_calls\": total_calls,\n        \"avg_api_calls\": total_calls / max(n_instances, 1),\n        \"total_gn_tool_calls\": sum(gn_tool_calls),\n        \"avg_gn_tool_calls\": sum(gn_tool_calls) / max(len(gn_tool_calls), 1) if gn_tool_calls else 0,\n        \"total_augment_hits\": sum(gn_augment_hits),\n        \"total_augment_calls\": sum(gn_augment_calls),\n        \"augment_hit_rate\": sum(gn_augment_hits) / max(sum(gn_augment_calls), 1) if gn_augment_calls else 0,\n    }\n\n\ndef run_swebench_evaluation(results_dir: Path, run_id: str, subset: str = \"lite\") -> dict | None:\n    \"\"\"\n    Run the official SWE-bench evaluation on predictions.\n    \n    Requires: pip install swebench\n    \"\"\"\n    preds_path = results_dir / run_id / \"preds.json\"\n    if not preds_path.exists():\n        return None\n\n    dataset_mapping = {\n        \"lite\": \"princeton-nlp/SWE-Bench_Lite\",\n        \"verified\": \"princeton-nlp/SWE-Bench_Verified\",\n        \"full\": \"princeton-nlp/SWE-Bench\",\n    }\n\n    try:\n        eval_output = results_dir / run_id / \"swebench_eval\"\n        cmd = [\n            sys.executable, \"-m\", 
\"swebench.harness.run_evaluation\",\n            \"--dataset_name\", dataset_mapping.get(subset, subset),\n            \"--predictions_path\", str(preds_path),\n            \"--max_workers\", \"4\",\n            \"--run_id\", run_id,\n            \"--output_dir\", str(eval_output),\n        ]\n\n        logger.info(f\"Running SWE-bench evaluation for {run_id}...\")\n        result = subprocess.run(cmd, capture_output=True, text=True, timeout=600)\n\n        if result.returncode == 0:\n            # Parse evaluation results\n            report_path = eval_output / run_id / \"results.json\"\n            if report_path.exists():\n                return json.loads(report_path.read_text())\n\n        logger.error(f\"SWE-bench eval failed: {result.stderr[:500]}\")\n        return None\n\n    except Exception as e:\n        logger.error(f\"SWE-bench eval error: {e}\")\n        return None\n\n\n# ─── CLI Commands ───────────────────────────────────────────────────────────\n\n\n@app.command()\ndef summary(\n    results_dir: str = typer.Argument(..., help=\"Path to results directory\"),\n    format: str = typer.Option(\"table\", \"--format\", help=\"Output format: table, markdown, json, csv\"),\n    swebench_eval: bool = typer.Option(False, \"--swebench-eval\", help=\"Run official SWE-bench test evaluation\"),\n    subset: str = typer.Option(\"lite\", \"--subset\", help=\"SWE-bench subset (for --swebench-eval)\"),\n):\n    \"\"\"Generate comparative analysis of evaluation results.\"\"\"\n    results_path = Path(results_dir)\n    if not results_path.exists():\n        console.print(f\"[red]Results directory not found: {results_path}[/red]\")\n        raise typer.Exit(1)\n\n    runs = load_run_results(results_path)\n    if not runs:\n        console.print(\"[yellow]No evaluation results found[/yellow]\")\n        raise typer.Exit(0)\n\n    console.print(f\"\\n[bold]Found {len(runs)} evaluation runs[/bold]\\n\")\n\n    # Compute metrics per run\n    all_metrics = {}\n    for 
run_id, run_data in runs.items():\n        model, mode = parse_run_id(run_id)\n        metrics = compute_metrics(run_data)\n        metrics[\"model\"] = model\n        metrics[\"mode\"] = mode\n\n        # Optionally run SWE-bench evaluation\n        if swebench_eval:\n            eval_result = run_swebench_evaluation(results_path, run_id, subset)\n            if eval_result:\n                metrics[\"resolved\"] = eval_result.get(\"resolved\", 0)\n                metrics[\"resolve_rate\"] = eval_result.get(\"resolved\", 0) / max(metrics[\"n_instances\"], 1)\n\n        all_metrics[run_id] = metrics\n\n    if format == \"table\":\n        _print_table(all_metrics)\n    elif format == \"markdown\":\n        _print_markdown(all_metrics)\n    elif format == \"json\":\n        console.print(json.dumps(all_metrics, indent=2))\n    elif format == \"csv\":\n        _print_csv(all_metrics)\n\n\n@app.command()\ndef compare_modes(\n    results_dir: str = typer.Argument(..., help=\"Path to results directory\"),\n    model: str = typer.Option(..., \"-m\", \"--model\", help=\"Model to compare across modes\"),\n):\n    \"\"\"Compare modes for a specific model (baseline vs mcp vs augment vs full).\"\"\"\n    results_path = Path(results_dir)\n    runs = load_run_results(results_path)\n\n    # Filter to the specified model\n    model_runs = {\n        run_id: data for run_id, data in runs.items()\n        if parse_run_id(run_id)[0] == model\n    }\n\n    if not model_runs:\n        console.print(f\"[yellow]No results found for model: {model}[/yellow]\")\n        raise typer.Exit(1)\n\n    console.print(f\"\\n[bold]Mode comparison for {model}[/bold]\\n\")\n\n    metrics = {}\n    for run_id, run_data in model_runs.items():\n        _, mode = parse_run_id(run_id)\n        metrics[mode] = compute_metrics(run_data)\n\n    # Print comparison table\n    table = Table(title=f\"Mode Comparison: {model}\")\n    table.add_column(\"Metric\", style=\"bold\")\n    for mode in [\"baseline\", 
\"mcp\", \"augment\", \"full\"]:\n        if mode in metrics:\n            table.add_column(mode, justify=\"right\")\n\n    rows = [\n        (\"Instances\", \"n_instances\", \"d\"),\n        (\"With Patch\", \"n_with_patch\", \"d\"),\n        (\"Patch Rate\", \"patch_rate\", \".1%\"),\n        (\"Total Cost\", \"total_cost\", \"$.4f\"),\n        (\"Avg Cost\", \"avg_cost\", \"$.4f\"),\n        (\"Total API Calls\", \"total_api_calls\", \"d\"),\n        (\"Avg API Calls\", \"avg_api_calls\", \".1f\"),\n        (\"GN Tool Calls\", \"total_gn_tool_calls\", \"d\"),\n        (\"Augment Hits\", \"total_augment_hits\", \"d\"),\n        (\"Augment Hit Rate\", \"augment_hit_rate\", \".1%\"),\n    ]\n\n    for label, key, fmt in rows:\n        values = []\n        for mode in [\"baseline\", \"mcp\", \"augment\", \"full\"]:\n            if mode in metrics:\n                v = metrics[mode].get(key, 0)\n                if fmt == \".1%\":\n                    values.append(f\"{v:.1%}\")\n                elif fmt == \"$.4f\":\n                    values.append(f\"${v:.4f}\")\n                elif fmt == \".1f\":\n                    values.append(f\"{v:.1f}\")\n                else:\n                    values.append(str(v))\n        table.add_row(label, *values)\n\n    # Add delta rows (improvement over baseline)\n    if \"baseline\" in metrics:\n        baseline_cost = metrics[\"baseline\"][\"avg_cost\"]\n        baseline_calls = metrics[\"baseline\"][\"avg_api_calls\"]\n\n        table.add_section()\n        for mode in [\"mcp\", \"augment\", \"full\"]:\n            if mode not in metrics:\n                continue\n            mode_cost = metrics[mode][\"avg_cost\"]\n            mode_calls = metrics[mode][\"avg_api_calls\"]\n\n            cost_delta = ((mode_cost - baseline_cost) / max(baseline_cost, 0.001)) * 100\n            calls_delta = ((mode_calls - baseline_calls) / max(baseline_calls, 1)) * 100\n\n            cost_str = f\"{cost_delta:+.1f}%\"\n            
calls_str = f\"{calls_delta:+.1f}%\"\n\n            # Color-code: negative is good (cheaper/fewer calls)\n            cost_color = \"green\" if cost_delta < 0 else \"red\"\n            calls_color = \"green\" if calls_delta < 0 else \"red\"\n\n            console.print(f\"  {mode} vs baseline: cost [{cost_color}]{cost_str}[/{cost_color}], calls [{calls_color}]{calls_str}[/{calls_color}]\")\n\n    console.print(table)\n\n\n@app.command()\ndef gitnexus_usage(\n    results_dir: str = typer.Argument(..., help=\"Path to results directory\"),\n):\n    \"\"\"Analyze GitNexus tool usage patterns across all runs.\"\"\"\n    results_path = Path(results_dir)\n    runs = load_run_results(results_path)\n\n    console.print(\"\\n[bold]GitNexus Tool Usage Analysis[/bold]\\n\")\n\n    table = Table(title=\"Tool Usage by Run\")\n    table.add_column(\"Run\", style=\"bold\")\n    table.add_column(\"query\", justify=\"right\")\n    table.add_column(\"context\", justify=\"right\")\n    table.add_column(\"impact\", justify=\"right\")\n    table.add_column(\"cypher\", justify=\"right\")\n    table.add_column(\"Total\", justify=\"right\")\n    table.add_column(\"Augment Hits\", justify=\"right\")\n\n    for run_id, run_data in sorted(runs.items()):\n        _, mode = parse_run_id(run_id)\n        if mode == \"baseline\":\n            continue\n\n        # Aggregate tool calls across trajectories\n        tool_totals: dict[str, int] = {\"query\": 0, \"context\": 0, \"impact\": 0, \"cypher\": 0, \"overview\": 0}\n        augment_hits = 0\n\n        for traj in run_data.get(\"trajectories\", {}).values():\n            gn = traj.get(\"info\", {}).get(\"gitnexus\", {}).get(\"metrics\", {})\n            for tool, count in gn.get(\"tool_calls\", {}).items():\n                tool_totals[tool] = tool_totals.get(tool, 0) + count\n            augment_hits += gn.get(\"augmentation_hits\", 0)\n\n        # Also check summary\n        for r in run_data.get(\"summary\", {}).get(\"results\", []):\n      
      gn = r.get(\"gitnexus_metrics\", {})\n            for tool, count in gn.get(\"tool_calls\", {}).items():\n                tool_totals[tool] = tool_totals.get(tool, 0) + count\n            augment_hits += gn.get(\"augmentation_hits\", 0)\n\n        total = sum(tool_totals.values())\n        if total > 0 or augment_hits > 0:\n            table.add_row(\n                run_id,\n                str(tool_totals.get(\"query\", 0)),\n                str(tool_totals.get(\"context\", 0)),\n                str(tool_totals.get(\"impact\", 0)),\n                str(tool_totals.get(\"cypher\", 0)),\n                str(total),\n                str(augment_hits),\n            )\n\n    console.print(table)\n\n\n# ─── Output Formatters ─────────────────────────────────────────────────────\n\n\ndef _print_table(all_metrics: dict):\n    \"\"\"Print rich table summary.\"\"\"\n    table = Table(title=\"Evaluation Results\")\n    table.add_column(\"Run\", style=\"bold\")\n    table.add_column(\"Model\")\n    table.add_column(\"Mode\")\n    table.add_column(\"N\", justify=\"right\")\n    table.add_column(\"Patched\", justify=\"right\")\n    table.add_column(\"Rate\", justify=\"right\")\n    table.add_column(\"Cost\", justify=\"right\")\n    table.add_column(\"Calls\", justify=\"right\")\n    table.add_column(\"GN Tools\", justify=\"right\")\n\n    for run_id, m in sorted(all_metrics.items()):\n        resolved_str = \"\"\n        if \"resolve_rate\" in m:\n            resolved_str = f\" ({m['resolve_rate']:.0%})\"\n\n        table.add_row(\n            run_id,\n            m[\"model\"],\n            m[\"mode\"],\n            str(m[\"n_instances\"]),\n            str(m[\"n_with_patch\"]),\n            f\"{m['patch_rate']:.0%}{resolved_str}\",\n            f\"${m['total_cost']:.2f}\",\n            str(m[\"total_api_calls\"]),\n            str(m[\"total_gn_tool_calls\"]) if m[\"total_gn_tool_calls\"] > 0 else \"-\",\n        )\n\n    console.print(table)\n\n\ndef 
_print_markdown(all_metrics: dict):\n    \"\"\"Print markdown table.\"\"\"\n    print(\"| Run | Model | Mode | N | Patched | Rate | Cost | Calls | GN Tools |\")\n    print(\"|-----|-------|------|---|---------|------|------|-------|----------|\")\n    for run_id, m in sorted(all_metrics.items()):\n        gn = str(m[\"total_gn_tool_calls\"]) if m[\"total_gn_tool_calls\"] > 0 else \"-\"\n        print(f\"| {run_id} | {m['model']} | {m['mode']} | {m['n_instances']} | {m['n_with_patch']} | {m['patch_rate']:.0%} | ${m['total_cost']:.2f} | {m['total_api_calls']} | {gn} |\")\n\n\ndef _print_csv(all_metrics: dict):\n    \"\"\"Print CSV output.\"\"\"\n    print(\"run_id,model,mode,n_instances,n_with_patch,patch_rate,total_cost,avg_cost,total_api_calls,avg_api_calls,total_gn_tool_calls,total_augment_hits,augment_hit_rate\")\n    for run_id, m in sorted(all_metrics.items()):\n        print(\n            f\"{run_id},{m['model']},{m['mode']},{m['n_instances']},{m['n_with_patch']},\"\n            f\"{m['patch_rate']:.4f},{m['total_cost']:.4f},{m['avg_cost']:.4f},\"\n            f\"{m['total_api_calls']},{m['avg_api_calls']:.1f},{m['total_gn_tool_calls']},\"\n            f\"{m['total_augment_hits']},{m['augment_hit_rate']:.4f}\"\n        )\n\n\nif __name__ == \"__main__\":\n    logging.basicConfig(level=logging.INFO)\n    app()\n"
  },
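The run-naming convention that `analyze_results.py` leans on — `<model>_<mode>`, split on the *last* underscore so model names may themselves contain underscores — can be exercised on its own. This is a direct extraction of `parse_run_id` from the file above:

```python
def parse_run_id(run_id: str) -> tuple[str, str]:
    """Parse 'model_mode' into (model, mode); unknown suffixes mean the whole id is the model."""
    known_modes = {"baseline", "mcp", "augment", "full"}
    # rsplit on the last underscore so multi-word model names survive intact
    parts = run_id.rsplit("_", 1)
    if len(parts) == 2 and parts[1] in known_modes:
        return parts[0], parts[1]
    return run_id, "unknown"
```

Because the suffix is validated against the closed set of modes, a run id like `my_model` degrades gracefully to `("my_model", "unknown")` instead of being misparsed as model `my` with mode `model`.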
  {
    "path": "eval/bridge/__init__.py",
    "content": ""
  },
  {
    "path": "eval/bridge/gitnexus_tools.sh",
    "content": "#!/bin/bash\n# GitNexus CLI tool wrappers for SWE-bench evaluation\n#\n# These functions call the GitNexus eval-server (HTTP daemon) for near-instant\n# tool responses. The eval-server keeps KuzuDB warm in memory.\n#\n# If the eval-server is not running, falls back to direct CLI commands.\n#\n# Usage:\n#   gitnexus-query \"how does authentication work\"\n#   gitnexus-context \"validateUser\"\n#   gitnexus-impact \"AuthService\" upstream\n#   gitnexus-cypher \"MATCH (n:Function) RETURN n.name LIMIT 10\"\n#   gitnexus-overview\n\nGITNEXUS_EVAL_PORT=\"${GITNEXUS_EVAL_PORT:-4848}\"\nGITNEXUS_EVAL_URL=\"http://127.0.0.1:${GITNEXUS_EVAL_PORT}\"\n\n_gitnexus_call() {\n    local tool=\"$1\"\n    shift\n    local json_body=\"$1\"\n\n    # Try eval-server first (fastest path — KuzuDB stays warm)\n    local result\n    result=$(curl -sf -X POST \"${GITNEXUS_EVAL_URL}/tool/${tool}\" \\\n        -H \"Content-Type: application/json\" \\\n        -d \"${json_body}\" 2>/dev/null)\n\n    if [ $? 
-eq 0 ] && [ -n \"$result\" ]; then\n        echo \"$result\"\n        return 0\n    fi\n\n    # Fallback: direct CLI (cold start, slower but always works)\n    case \"$tool\" in\n        query)\n            local q=$(echo \"$json_body\" | python3 -c \"import sys,json; print(json.load(sys.stdin).get('query',''))\" 2>/dev/null)\n            npx gitnexus query \"$q\" 2>&1\n            ;;\n        context)\n            local n=$(echo \"$json_body\" | python3 -c \"import sys,json; print(json.load(sys.stdin).get('name',''))\" 2>/dev/null)\n            npx gitnexus context \"$n\" 2>&1\n            ;;\n        impact)\n            local t=$(echo \"$json_body\" | python3 -c \"import sys,json; d=json.load(sys.stdin); print(d.get('target',''))\" 2>/dev/null)\n            local d=$(echo \"$json_body\" | python3 -c \"import sys,json; d=json.load(sys.stdin); print(d.get('direction','upstream'))\" 2>/dev/null)\n            npx gitnexus impact \"$t\" --direction \"$d\" 2>&1\n            ;;\n        cypher)\n            local cq=$(echo \"$json_body\" | python3 -c \"import sys,json; print(json.load(sys.stdin).get('query',''))\" 2>/dev/null)\n            npx gitnexus cypher \"$cq\" 2>&1\n            ;;\n        *)\n            echo \"Unknown tool: $tool\" >&2\n            return 1\n            ;;\n    esac\n}\n\ngitnexus-query() {\n    local query=\"$1\"\n    local task_context=\"${2:-}\"\n    local goal=\"${3:-}\"\n\n    if [ -z \"$query\" ]; then\n        echo \"Usage: gitnexus-query <query> [task_context] [goal]\"\n        echo \"Search the code knowledge graph for execution flows related to a concept.\"\n        echo \"\"\n        echo \"Examples:\"\n        echo '  gitnexus-query \"authentication flow\"'\n        echo '  gitnexus-query \"database connection\" \"fixing connection pool leak\"'\n        return 1\n    fi\n\n    local args=\"{\\\"query\\\": \\\"$query\\\"\"\n    [ -n \"$task_context\" ] && args=\"$args, \\\"task_context\\\": \\\"$task_context\\\"\"\n    [ -n 
\"$goal\" ] && args=\"$args, \\\"goal\\\": \\\"$goal\\\"\"\n    args=\"$args}\"\n\n    _gitnexus_call query \"$args\"\n}\n\ngitnexus-context() {\n    local name=\"$1\"\n    local file_path=\"${2:-}\"\n\n    if [ -z \"$name\" ]; then\n        echo \"Usage: gitnexus-context <symbol_name> [file_path]\"\n        echo \"Get a 360-degree view of a code symbol: callers, callees, processes, file location.\"\n        echo \"\"\n        echo \"Examples:\"\n        echo '  gitnexus-context \"validateUser\"'\n        echo '  gitnexus-context \"AuthService\" \"src/auth/service.py\"'\n        return 1\n    fi\n\n    local args=\"{\\\"name\\\": \\\"$name\\\"\"\n    [ -n \"$file_path\" ] && args=\"$args, \\\"file_path\\\": \\\"$file_path\\\"\"\n    args=\"$args}\"\n\n    _gitnexus_call context \"$args\"\n}\n\ngitnexus-impact() {\n    local target=\"$1\"\n    local direction=\"${2:-upstream}\"\n\n    if [ -z \"$target\" ]; then\n        echo \"Usage: gitnexus-impact <symbol_name> [upstream|downstream]\"\n        echo \"Analyze the blast radius of changing a code symbol.\"\n        echo \"\"\n        echo \"  upstream  = what depends on this (what breaks if you change it)\"\n        echo \"  downstream = what this depends on (what it uses)\"\n        echo \"\"\n        echo \"Examples:\"\n        echo '  gitnexus-impact \"AuthService\" upstream'\n        echo '  gitnexus-impact \"validateUser\" downstream'\n        return 1\n    fi\n\n    _gitnexus_call impact \"{\\\"target\\\": \\\"$target\\\", \\\"direction\\\": \\\"$direction\\\"}\"\n}\n\ngitnexus-cypher() {\n    local query=\"$1\"\n\n    if [ -z \"$query\" ]; then\n        echo \"Usage: gitnexus-cypher <cypher_query>\"\n        echo \"Execute a raw Cypher query against the code knowledge graph.\"\n        echo \"\"\n        echo \"Schema: Nodes: File, Function, Class, Method, Interface, Community, Process\"\n        echo \"Edges via CodeRelation.type: CALLS, IMPORTS, EXTENDS, IMPLEMENTS, DEFINES, MEMBER_OF, STEP_IN_PROCESS\"\n   
     echo \"\"\n        echo \"Examples:\"\n        echo \"  gitnexus-cypher 'MATCH (a)-[:CodeRelation {type: \\\"CALLS\\\"}]->(b:Function {name: \\\"save\\\"}) RETURN a.name, a.filePath'\"\n        echo \"  gitnexus-cypher 'MATCH (n:Class) RETURN n.name, n.filePath LIMIT 20'\"\n        return 1\n    fi\n\n    _gitnexus_call cypher \"{\\\"query\\\": \\\"$query\\\"}\"\n}\n\ngitnexus-overview() {\n    echo \"=== Code Knowledge Graph Overview ===\"\n    _gitnexus_call list_repos '{}'\n}\n\n# Export functions so they're available in subshells\nexport -f _gitnexus_call 2>/dev/null\nexport -f gitnexus-query 2>/dev/null\nexport -f gitnexus-context 2>/dev/null\nexport -f gitnexus-impact 2>/dev/null\nexport -f gitnexus-cypher 2>/dev/null\nexport -f gitnexus-overview 2>/dev/null\n"
  },
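When the eval-server is unreachable, `_gitnexus_call` falls back to the CLI and pulls each field out of the JSON body with an inline `python3` one-liner, as seen in the wrapper above. A minimal standalone reproduction of that extraction pattern (the JSON body here is an illustrative literal, not output from a real call):

```shell
#!/bin/sh
# Mirror of the fallback path in _gitnexus_call: extract individual fields
# from a JSON tool-call body using python3's stdlib json module.
json_body='{"target": "AuthService", "direction": "upstream"}'

target=$(echo "$json_body" | python3 -c "import sys,json; print(json.load(sys.stdin).get('target',''))")
# Missing keys fall back to a default, matching the wrapper's d.get('direction','upstream')
direction=$(echo "$json_body" | python3 -c "import sys,json; d=json.load(sys.stdin); print(d.get('direction','upstream'))")

echo "$target $direction"
```

Using `python3 -c` instead of string slicing keeps the fallback correct for values containing spaces or escapes, at the cost of one interpreter start per field.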
  {
    "path": "eval/bridge/mcp_bridge.py",
    "content": "\"\"\"\nMCP Bridge for GitNexus\n\nStarts the GitNexus MCP server as a subprocess and provides a Python interface\nto call MCP tools. Used by the bash wrapper scripts and the augmentation layer..\n\nThe bridge communicates with the MCP server via stdio using the JSON-RPC protocol.\n\"\"\"\n\nimport json\nimport logging\nimport os\nimport subprocess\nimport sys\nimport threading\nimport time\nfrom pathlib import Path\nfrom typing import Any\n\nlogger = logging.getLogger(\"mcp_bridge\")\n\n\nclass MCPBridge:\n    \"\"\"\n    Manages a GitNexus MCP server subprocess and proxies tool calls to it.\n    \n    Usage:\n        bridge = MCPBridge(repo_path=\"/path/to/repo\")\n        bridge.start()\n        result = bridge.call_tool(\"query\", {\"query\": \"authentication\"})\n        bridge.stop()\n    \"\"\"\n\n    def __init__(self, repo_path: str | None = None):\n        self.repo_path = repo_path or os.getcwd()\n        self.process: subprocess.Popen | None = None\n        self._request_id = 0\n        self._lock = threading.Lock()\n        self._started = False\n\n    def start(self) -> bool:\n        \"\"\"Start the GitNexus MCP server subprocess.\"\"\"\n        if self._started:\n            return True\n\n        try:\n            # Find gitnexus binary\n            gitnexus_bin = self._find_gitnexus()\n            if not gitnexus_bin:\n                logger.error(\"GitNexus not found. 
Install with: npm install -g gitnexus\")\n                return False\n\n            self.process = subprocess.Popen(\n                [gitnexus_bin, \"mcp\"],\n                stdin=subprocess.PIPE,\n                stdout=subprocess.PIPE,\n                stderr=subprocess.PIPE,\n                cwd=self.repo_path,\n                text=False,\n            )\n\n            # Send initialize request\n            init_result = self._send_request(\"initialize\", {\n                \"protocolVersion\": \"2024-11-05\",\n                \"capabilities\": {},\n                \"clientInfo\": {\"name\": \"gitnexus-eval\", \"version\": \"0.1.0\"},\n            })\n\n            if init_result is None:\n                logger.error(\"MCP server failed to initialize\")\n                self.stop()\n                return False\n\n            # Send initialized notification\n            self._send_notification(\"notifications/initialized\", {})\n            self._started = True\n            logger.info(\"MCP bridge started successfully\")\n            return True\n\n        except Exception as e:\n            logger.error(f\"Failed to start MCP bridge: {e}\")\n            self.stop()\n            return False\n\n    def stop(self):\n        \"\"\"Stop the MCP server subprocess.\"\"\"\n        if self.process:\n            try:\n                self.process.stdin.close()\n                self.process.terminate()\n                self.process.wait(timeout=5)\n            except Exception:\n                try:\n                    self.process.kill()\n                except Exception:\n                    pass\n            self.process = None\n        self._started = False\n\n    def call_tool(self, tool_name: str, arguments: dict[str, Any] | None = None) -> dict[str, Any] | None:\n        \"\"\"\n        Call a GitNexus MCP tool and return the result.\n        \n        Returns the tool result content or None on error.\n        \"\"\"\n        if not self._started:\n         
   logger.error(\"MCP bridge not started\")\n            return None\n\n        result = self._send_request(\"tools/call\", {\n            \"name\": tool_name,\n            \"arguments\": arguments or {},\n        })\n\n        if result is None:\n            return None\n\n        # Extract text content from MCP response\n        content = result.get(\"content\", [])\n        if content and isinstance(content, list):\n            texts = [item.get(\"text\", \"\") for item in content if item.get(\"type\") == \"text\"]\n            return {\"text\": \"\\n\".join(texts), \"raw\": content}\n\n        return {\"text\": \"\", \"raw\": content}\n\n    def list_tools(self) -> list[dict]:\n        \"\"\"List available MCP tools.\"\"\"\n        result = self._send_request(\"tools/list\", {})\n        if result:\n            return result.get(\"tools\", [])\n        return []\n\n    def read_resource(self, uri: str) -> str | None:\n        \"\"\"Read an MCP resource by URI.\"\"\"\n        result = self._send_request(\"resources/read\", {\"uri\": uri})\n        if result:\n            contents = result.get(\"contents\", [])\n            if contents:\n                return contents[0].get(\"text\", \"\")\n        return None\n\n    def _find_gitnexus(self) -> str | None:\n        \"\"\"Find the gitnexus CLI binary.\"\"\"\n        # Check if npx is available (preferred - uses local install)\n        for cmd in [\"npx\"]:\n            try:\n                result = subprocess.run(\n                    [cmd, \"gitnexus\", \"--version\"],\n                    capture_output=True, text=True, timeout=15,\n                    cwd=self.repo_path,\n                )\n                if result.returncode == 0:\n                    return cmd  # Will use \"npx gitnexus mcp\"\n            except Exception:\n                continue\n\n        # Check for global install\n        try:\n            result = subprocess.run(\n                [\"gitnexus\", \"--version\"],\n                
capture_output=True, text=True, timeout=10,\n            )\n            if result.returncode == 0:\n                return \"gitnexus\"\n        except Exception:\n            pass\n\n        return None\n\n    def _next_id(self) -> int:\n        with self._lock:\n            self._request_id += 1\n            return self._request_id\n\n    def _send_request(self, method: str, params: dict) -> dict | None:\n        \"\"\"Send a JSON-RPC request and wait for response.\"\"\"\n        if not self.process or not self.process.stdin or not self.process.stdout:\n            return None\n\n        request_id = self._next_id()\n        request = {\n            \"jsonrpc\": \"2.0\",\n            \"id\": request_id,\n            \"method\": method,\n            \"params\": params,\n        }\n\n        try:\n            message = json.dumps(request)\n            # MCP uses Content-Length header framing\n            header = f\"Content-Length: {len(message.encode('utf-8'))}\\r\\n\\r\\n\"\n            self.process.stdin.write(header.encode(\"utf-8\"))\n            self.process.stdin.write(message.encode(\"utf-8\"))\n            self.process.stdin.flush()\n\n            # Read response\n            response = self._read_response(timeout=30)\n            if response and response.get(\"id\") == request_id:\n                if \"error\" in response:\n                    logger.error(f\"MCP error: {response['error']}\")\n                    return None\n                return response.get(\"result\")\n            return None\n\n        except Exception as e:\n            logger.error(f\"MCP request failed: {e}\")\n            return None\n\n    def _send_notification(self, method: str, params: dict):\n        \"\"\"Send a JSON-RPC notification (no response expected).\"\"\"\n        if not self.process or not self.process.stdin:\n            return\n\n        notification = {\n            \"jsonrpc\": \"2.0\",\n            \"method\": method,\n            \"params\": params,\n        
}\n\n        try:\n            message = json.dumps(notification)\n            header = f\"Content-Length: {len(message.encode('utf-8'))}\\r\\n\\r\\n\"\n            self.process.stdin.write(header.encode(\"utf-8\"))\n            self.process.stdin.write(message.encode(\"utf-8\"))\n            self.process.stdin.flush()\n        except Exception as e:\n            logger.error(f\"MCP notification failed: {e}\")\n\n    def _read_response(self, timeout: float = 30) -> dict | None:\n        \"\"\"Read a JSON-RPC response from the MCP server.\"\"\"\n        if not self.process or not self.process.stdout:\n            return None\n\n        start = time.time()\n\n        try:\n            while time.time() - start < timeout:\n                # Read Content-Length header\n                header_line = b\"\"\n                while True:\n                    byte = self.process.stdout.read(1)\n                    if not byte:\n                        return None\n                    header_line += byte\n                    if header_line.endswith(b\"\\r\\n\\r\\n\"):\n                        break\n                    if header_line.endswith(b\"\\n\\n\"):\n                        break\n\n                # Parse content length\n                header_str = header_line.decode(\"utf-8\").strip()\n                content_length = None\n                for line in header_str.split(\"\\r\\n\"):\n                    if line.lower().startswith(\"content-length:\"):\n                        content_length = int(line.split(\":\")[1].strip())\n                        break\n\n                if content_length is None:\n                    continue\n\n                # Read body\n                body = self.process.stdout.read(content_length)\n                if not body:\n                    return None\n\n                message = json.loads(body.decode(\"utf-8\"))\n\n                # Skip notifications (no id), return responses\n                if \"id\" in message:\n               
     return message\n\n            return None\n\n        except Exception as e:\n            logger.error(f\"Error reading MCP response: {e}\")\n            return None\n\n\nclass MCPToolCLI:\n    \"\"\"\n    CLI wrapper that exposes MCP tools as simple command-line calls.\n    Used by the bash wrapper scripts inside Docker containers.\n    \n    Usage from bash:\n        python -m bridge.mcp_bridge query '{\"query\": \"authentication\"}'\n        python -m bridge.mcp_bridge context '{\"name\": \"validateUser\"}'\n    \"\"\"\n\n    def __init__(self):\n        self.bridge = MCPBridge()\n\n    def run(self, tool_name: str, args_json: str = \"{}\") -> int:\n        \"\"\"Run a single tool call and print the result.\"\"\"\n        try:\n            args = json.loads(args_json)\n        except json.JSONDecodeError:\n            # Try to parse as simple key=value pairs\n            args = self._parse_simple_args(args_json)\n\n        if not self.bridge.start():\n            print(\"ERROR: Failed to start GitNexus MCP bridge\", file=sys.stderr)\n            return 1\n\n        try:\n            result = self.bridge.call_tool(tool_name, args)\n            if result:\n                print(result.get(\"text\", \"\"))\n                return 0\n            else:\n                print(\"No results\", file=sys.stderr)\n                return 1\n        finally:\n            self.bridge.stop()\n\n    @staticmethod\n    def _parse_simple_args(args_str: str) -> dict:\n        \"\"\"Parse 'key=value key2=value2' style arguments.\"\"\"\n        args = {}\n        for part in args_str.split():\n            if \"=\" in part:\n                key, value = part.split(\"=\", 1)\n                args[key] = value\n        return args\n\n\nif __name__ == \"__main__\":\n    if len(sys.argv) < 2:\n        print(\"Usage: python -m bridge.mcp_bridge <tool_name> [args_json]\", file=sys.stderr)\n        print(\"Tools: query, context, impact, cypher, list_repos, detect_changes, rename\", 
file=sys.stderr)\n        sys.exit(1)\n\n    tool = sys.argv[1]\n    args_json = sys.argv[2] if len(sys.argv) > 2 else \"{}\"\n\n    cli = MCPToolCLI()\n    sys.exit(cli.run(tool, args_json))\n"
  },
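The bridge in `mcp_bridge.py` frames each JSON-RPC message with a `Content-Length` header before writing it to the server's stdin, and parses the same framing when reading responses. A minimal standalone sketch of that framing (function names here are illustrative, not part of the bridge):

```python
import json

def frame_message(payload: dict) -> bytes:
    """Serialize a JSON-RPC payload with Content-Length framing,
    mirroring what _send_request writes to the server's stdin."""
    body = json.dumps(payload).encode("utf-8")
    header = f"Content-Length: {len(body)}\r\n\r\n".encode("utf-8")
    return header + body

def unframe_message(data: bytes) -> dict:
    """Inverse: split header from body and parse the JSON body."""
    header, _, body = data.partition(b"\r\n\r\n")
    length = int(header.decode("utf-8").split(":", 1)[1].strip())
    return json.loads(body[:length].decode("utf-8"))

request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}}
# Round-trip: framing then unframing recovers the original payload.
assert unframe_message(frame_message(request)) == request
```

Note the length is computed over the UTF-8 byte length of the body, not the character count — the bridge does the same with `len(message.encode('utf-8'))`.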
  {
    "path": "eval/configs/models/claude-haiku.yaml",
    "content": "# Claude Haiku 4.5 — fast, cheap, good baseline\n# Via OpenRouter (set OPENROUTER_API_KEY in .env)\nmodel:\n  model_name: \"openrouter/anthropic/claude-haiku-4.5\"\n  cost_tracking: \"ignore_errors\"\n  model_kwargs:\n    max_tokens: 8192\n    temperature: 0\n"
  },
  {
    "path": "eval/configs/models/claude-opus.yaml",
    "content": "# Claude Opus 4 — most capable, highest cost\n# Via OpenRouter (set OPENROUTER_API_KEY in .env)\n# To use Anthropic directly, change to: anthropic/claude-opus-4-20250514\nmodel:\n  model_name: \"openrouter/anthropic/claude-opus-4\"\n  cost_tracking: \"ignore_errors\"\n  model_kwargs:\n    max_tokens: 16384\n    temperature: 0\n"
  },
  {
    "path": "eval/configs/models/claude-sonnet.yaml",
    "content": "# Claude Sonnet 4 — strong all-around model\n# Via OpenRouter (set OPENROUTER_API_KEY in .env)\n# To use Anthropic directly, change to: anthropic/claude-sonnet-4-20250514\nmodel:\n  model_name: \"openrouter/anthropic/claude-sonnet-4\"\n  cost_tracking: \"ignore_errors\"\n  model_kwargs:\n    max_tokens: 16384\n    temperature: 0\n"
  },
  {
    "path": "eval/configs/models/deepseek-chat.yaml",
    "content": "model: deepseek-ai/deepseek-chat\nprovider: openrouter\ncost:\n  input: 0.14  # per 1M tokens\n  output: 0.28  # per 1M tokens\n  \n# Native DeepSeek API (direct)\napi_key: null\nbase_url: null\n\n# For OpenRouter, uncomment below and comment out direct config above\n# api_key: \\${OPENROUTER_API_KEY}\n# base_url: https://openrouter.ai/api/v1\n"
  },
  {
    "path": "eval/configs/models/deepseek-v3.yaml",
    "content": "model: deepseek-ai/DeepSeek-V3\nprovider: openrouter\ncost:\n  input: 0.27  # per 1M tokens\n  output: 1.10  # per 1M tokens\n  \n# Native DeepSeek API (direct)\n# Get your API key at: https://platform.deepseek.com/\n# Or use OpenRouter with: OPENROUTER_API_KEY\napi_key: null\nbase_url: null\n\n# For OpenRouter, uncomment below and comment out direct config above\n# api_key: \\${OPENROUTER_API_KEY}\n# base_url: https://openrouter.ai/api/v1\n"
  },
  {
    "path": "eval/configs/models/glm-4.7.yaml",
    "content": "# GLM 4.7 — via OpenRouter (set OPENROUTER_API_KEY in .env)\nmodel:\n  model_name: \"openrouter/zhipuai/glm-4.7\"\n  cost_tracking: \"ignore_errors\"\n  model_kwargs:\n    max_tokens: 8192\n    temperature: 0\n"
  },
  {
    "path": "eval/configs/models/glm-5.yaml",
    "content": "# GLM 5 — via OpenRouter (set OPENROUTER_API_KEY in .env)\nmodel:\n  model_name: \"openrouter/zhipuai/glm-5\"\n  cost_tracking: \"ignore_errors\"\n  model_kwargs:\n    max_tokens: 8192\n    temperature: 0\n"
  },
  {
    "path": "eval/configs/models/minimax-2.5.yaml",
    "content": "# MiniMax M1 2.5 — via OpenRouter (set OPENROUTER_API_KEY in .env)\nmodel:\n  model_name: \"openrouter/minimax/minimax-m1-2.5\"\n  cost_tracking: \"ignore_errors\"\n  model_kwargs:\n    max_tokens: 8192\n    temperature: 0\n"
  },
  {
    "path": "eval/configs/models/minimax-m2.1.yaml",
    "content": "# MiniMax M2.5 — via OpenRouter (set OPENROUTER_API_KEY in .env)\n# Uses text-based model class because MiniMax doesn't support tool_calls natively.\n# The action_regex tells mini-swe-agent to parse ```bash blocks from responses.\nmodel:\n  model_class: litellm_textbased\n  model_name: \"openrouter/minimax/minimax-m2.5\"\n  action_regex: \"```(?:bash|mswea_bash_command)\\\\s*\\\\n(.*?)\\\\n```\"\n  cost_tracking: \"ignore_errors\"\n  model_kwargs:\n    max_tokens: 8192\n    temperature: 0\n"
  },
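The `action_regex` in the MiniMax config tells mini-swe-agent how to pull the bash action out of a plain-text model response. A quick sketch of how that pattern behaves once unescaped from YAML/JSON (assuming it is applied with DOTALL semantics, which the non-greedy `(.*?)` across lines implies; the sample response is illustrative):

```python
import re

# The pattern from the config above, unescaped.
ACTION_REGEX = r"```(?:bash|mswea_bash_command)\s*\n(.*?)\n```"

response = (
    "THOUGHT: list files first.\n\n"
    "```bash\nls -la\n```\n"
)
# re.DOTALL lets (.*?) span multiple command lines inside the fence.
match = re.search(ACTION_REGEX, response, re.DOTALL)
assert match is not None
print(match.group(1))  # prints: ls -la
```

Both fence labels are accepted, so a model that emits `mswea_bash_command` blocks (as the example response in the prompts shows) parses the same way as one emitting plain `bash` blocks.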
  {
    "path": "eval/configs/modes/baseline.yaml",
    "content": "# Baseline mode — no GitNexus, pure mini-swe-agent (control group)\nagent:\n  agent_class: \"eval.agents.gitnexus_agent.GitNexusAgent\"\n  gitnexus_mode: \"baseline\"\n  step_limit: 30\n  cost_limit: 3.0\n\nenvironment:\n  environment_class: \"docker\"\n"
  },
  {
    "path": "eval/configs/modes/native.yaml",
    "content": "# Native mode — GitNexus tools only, no grep enrichment\n#\n# Explicit tools: gitnexus-query, gitnexus-context, gitnexus-impact, gitnexus-cypher\n# Available as fast bash commands (~100ms via eval-server)\n#\n# Use this mode to isolate the value of explicit tools without grep augmentation.\nagent:\n  agent_class: \"eval.agents.gitnexus_agent.GitNexusAgent\"\n  gitnexus_mode: \"native\"\n  step_limit: 30\n  cost_limit: 3.0\n  track_gitnexus_usage: true\n\nenvironment:\n  environment_class: \"eval.environments.gitnexus_docker.GitNexusDockerEnvironment\"\n  enable_gitnexus: true\n  skip_embeddings: true\n  gitnexus_timeout: 120\n  eval_server_port: 4848\n"
  },
  {
    "path": "eval/configs/modes/native_augment.yaml",
    "content": "# Native + Augment mode — the primary evaluation mode\n#\n# Combines two capabilities (mirroring the Claude Code model):\n# 1. Explicit GitNexus tools: gitnexus-query, gitnexus-context, gitnexus-impact, gitnexus-cypher\n#    Available as fast bash commands (~100ms via eval-server)\n# 2. Automatic grep enrichment: grep/rg results are transparently augmented with\n#    [GitNexus] annotations showing callers, callees, and execution flows\n#\n# The agent decides when to use explicit tools vs rely on enriched grep results.\nagent:\n  agent_class: \"eval.agents.gitnexus_agent.GitNexusAgent\"\n  gitnexus_mode: \"native_augment\"\n  step_limit: 30\n  cost_limit: 3.0\n  augment_timeout: 5.0\n  augment_min_pattern_length: 3\n  track_gitnexus_usage: true\n\nenvironment:\n  environment_class: \"eval.environments.gitnexus_docker.GitNexusDockerEnvironment\"\n  enable_gitnexus: true\n  skip_embeddings: true\n  gitnexus_timeout: 120\n  eval_server_port: 4848\n"
  },
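The "~100ms via eval-server" fast path mentioned in these mode configs is implemented by the wrapper scripts as curl-with-CLI-fallback. A minimal standalone sketch of that pattern (the port and the fallback echo are illustrative stand-ins for the real endpoint and `npx gitnexus` call):

```shell
#!/bin/bash
# Fast path: ask the warm local eval-server; fall back to the slower CLI.
fast_then_fallback() {
  local result
  result=$(curl -sf "http://127.0.0.1:4848/health" 2>/dev/null)
  if [ $? -eq 0 ] && [ -n "$result" ]; then
    echo "$result"      # warm-server answer (~100ms)
  else
    echo "fallback"     # cold path: the real scripts run npx gitnexus here
  fi
}
fast_then_fallback
```

The `-f` flag makes curl exit nonzero on HTTP errors, so a down or half-started server routes cleanly to the fallback branch rather than printing an error page.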
  {
    "path": "eval/environments/__init__.py",
    "content": ""
  },
  {
    "path": "eval/environments/gitnexus_docker.py",
    "content": "\"\"\"\nGitNexus Docker Environment for SWE-bench Evaluation\n\nExtends mini-swe-agent's Docker environment to:\n1. Install GitNexus (Node.js + npm + gitnexus package)\n2. Run `gitnexus analyze` on the repository\n3. Start the eval-server daemon (persistent HTTP server with warm KuzuDB)\n4. Install standalone tool scripts in /usr/local/bin/ (works with subprocess.run)\n5. Cache indexes per (repo, base_commit) to avoid re-indexing\n\nIMPORTANT: mini-swe-agent runs every command with subprocess.run in a fresh subshell.\nThis means .bashrc is NOT sourced, exported functions are NOT available, and env vars\ndon't persist. The tool scripts must be standalone executables in $PATH.\n\nArchitecture:\n  Agent bash cmd → /usr/local/bin/gitnexus-query → curl localhost:4848/tool/query → eval-server → KuzuDB\n  Fallback: → npx gitnexus query (cold start, slower)\n\nTool call latency: ~50-100ms via eval-server, ~5-10s via CLI fallback.\n\"\"\"\n\nimport hashlib\nimport json\nimport logging\nimport shutil\nimport time\nfrom pathlib import Path\n\nfrom minisweagent.environments.docker import DockerEnvironment\n\nlogger = logging.getLogger(\"gitnexus_docker\")\n\nDEFAULT_CACHE_DIR = Path.home() / \".gitnexus-eval-cache\"\nEVAL_SERVER_PORT = 4848\n\n# Standalone tool scripts installed into /usr/local/bin/ inside the container.\n# Each script calls the eval-server via curl, with a CLI fallback.\n# These are standalone — no sourcing, no env inheritance needed.\n\nTOOL_SCRIPT_QUERY = r'''#!/bin/bash\nPORT=\"${GITNEXUS_EVAL_PORT:-__PORT__}\"\nquery=\"$1\"; task_ctx=\"${2:-}\"; goal=\"${3:-}\"\n[ -z \"$query\" ] && echo \"Usage: gitnexus-query <query> [task_context] [goal]\" && exit 1\nargs=\"{\\\"query\\\": \\\"$query\\\"\"\n[ -n \"$task_ctx\" ] && args=\"$args, \\\"task_context\\\": \\\"$task_ctx\\\"\"\n[ -n \"$goal\" ] && args=\"$args, \\\"goal\\\": \\\"$goal\\\"\"\nargs=\"$args}\"\nresult=$(curl -sf -X POST \"http://127.0.0.1:${PORT}/tool/query\" -H \"Content-Type: 
application/json\" -d \"$args\" 2>/dev/null)\nif [ $? -eq 0 ] && [ -n \"$result\" ]; then echo \"$result\"; exit 0; fi\ncd /testbed && npx gitnexus query \"$query\" 2>&1\n'''\n\nTOOL_SCRIPT_CONTEXT = r'''#!/bin/bash\nPORT=\"${GITNEXUS_EVAL_PORT:-__PORT__}\"\nname=\"$1\"; file_path=\"${2:-}\"\n[ -z \"$name\" ] && echo \"Usage: gitnexus-context <symbol_name> [file_path]\" && exit 1\nargs=\"{\\\"name\\\": \\\"$name\\\"\"\n[ -n \"$file_path\" ] && args=\"$args, \\\"file_path\\\": \\\"$file_path\\\"\"\nargs=\"$args}\"\nresult=$(curl -sf -X POST \"http://127.0.0.1:${PORT}/tool/context\" -H \"Content-Type: application/json\" -d \"$args\" 2>/dev/null)\nif [ $? -eq 0 ] && [ -n \"$result\" ]; then echo \"$result\"; exit 0; fi\ncd /testbed && npx gitnexus context \"$name\" 2>&1\n'''\n\nTOOL_SCRIPT_IMPACT = r'''#!/bin/bash\nPORT=\"${GITNEXUS_EVAL_PORT:-__PORT__}\"\ntarget=\"$1\"; direction=\"${2:-upstream}\"\n[ -z \"$target\" ] && echo \"Usage: gitnexus-impact <symbol_name> [upstream|downstream]\" && exit 1\nresult=$(curl -sf -X POST \"http://127.0.0.1:${PORT}/tool/impact\" -H \"Content-Type: application/json\" -d \"{\\\"target\\\": \\\"$target\\\", \\\"direction\\\": \\\"$direction\\\"}\" 2>/dev/null)\nif [ $? -eq 0 ] && [ -n \"$result\" ]; then echo \"$result\"; exit 0; fi\ncd /testbed && npx gitnexus impact \"$target\" --direction \"$direction\" 2>&1\n'''\n\nTOOL_SCRIPT_CYPHER = r'''#!/bin/bash\nPORT=\"${GITNEXUS_EVAL_PORT:-__PORT__}\"\nquery=\"$1\"\n[ -z \"$query\" ] && echo \"Usage: gitnexus-cypher <cypher_query>\" && exit 1\nresult=$(curl -sf -X POST \"http://127.0.0.1:${PORT}/tool/cypher\" -H \"Content-Type: application/json\" -d \"{\\\"query\\\": \\\"$query\\\"}\" 2>/dev/null)\nif [ $? 
-eq 0 ] && [ -n \"$result\" ]; then echo \"$result\"; exit 0; fi\ncd /testbed && npx gitnexus cypher \"$query\" 2>&1\n'''\n\nTOOL_SCRIPT_AUGMENT = r'''#!/bin/bash\ncd /testbed && npx gitnexus augment \"$1\" 2>&1 || true\n'''\n\nTOOL_SCRIPT_OVERVIEW = r'''#!/bin/bash\nPORT=\"${GITNEXUS_EVAL_PORT:-__PORT__}\"\necho \"=== Code Knowledge Graph Overview ===\"\nresult=$(curl -sf -X POST \"http://127.0.0.1:${PORT}/tool/list_repos\" -H \"Content-Type: application/json\" -d \"{}\" 2>/dev/null)\nif [ $? -eq 0 ] && [ -n \"$result\" ]; then echo \"$result\"; exit 0; fi\ncd /testbed && npx gitnexus list 2>&1\n'''\n\n\nclass GitNexusDockerEnvironment(DockerEnvironment):\n    \"\"\"\n    Docker environment with GitNexus pre-installed, indexed, and eval-server running.\n\n    Setup flow:\n    1. Start Docker container (base SWE-bench image)\n    2. Install Node.js + gitnexus inside the container\n    3. Run `gitnexus analyze` (or restore from cache)\n    4. Start `gitnexus eval-server` daemon (keeps KuzuDB warm)\n    5. Install standalone tool scripts in /usr/local/bin/\n    6. 
Agent runs with near-instant GitNexus tool calls\n    \"\"\"\n\n    def __init__(\n        self,\n        *,\n        enable_gitnexus: bool = True,\n        cache_dir: str | Path | None = None,\n        skip_embeddings: bool = True,\n        gitnexus_timeout: int = 120,\n        eval_server_port: int = EVAL_SERVER_PORT,\n        **kwargs,\n    ):\n        super().__init__(**kwargs)\n        self.enable_gitnexus = enable_gitnexus\n        self.cache_dir = Path(cache_dir) if cache_dir else DEFAULT_CACHE_DIR\n        self.skip_embeddings = skip_embeddings\n        self.gitnexus_timeout = gitnexus_timeout\n        self.eval_server_port = eval_server_port\n        self.index_time: float = 0.0\n        self._gitnexus_ready = False\n\n    def start(self) -> dict:\n        \"\"\"Start the container and set up GitNexus.\"\"\"\n        result = super().start()\n\n        if self.enable_gitnexus:\n            try:\n                self._setup_gitnexus()\n            except Exception as e:\n                logger.warning(f\"GitNexus setup failed, continuing without it: {e}\")\n                self._gitnexus_ready = False\n\n        return result\n\n    def _setup_gitnexus(self):\n        \"\"\"Install and configure GitNexus in the container.\"\"\"\n        start = time.time()\n\n        self._ensure_nodejs()\n        self._install_gitnexus()\n        self._index_repository()\n        self._start_eval_server()\n        self._install_tools()\n\n        self.index_time = time.time() - start\n        self._gitnexus_ready = True\n        logger.info(f\"GitNexus setup completed in {self.index_time:.1f}s\")\n\n    def _ensure_nodejs(self):\n        \"\"\"Ensure Node.js >= 18 is available in the container.\"\"\"\n        check = self.execute({\"command\": \"node --version 2>/dev/null || echo 'NOT_FOUND'\"})\n        output = check.get(\"output\", \"\").strip()\n\n        if \"NOT_FOUND\" in output:\n            logger.info(\"Installing Node.js in container...\")\n            
install_cmds = [\n                \"apt-get update -qq\",\n                \"apt-get install -y -qq curl ca-certificates\",\n                \"curl -fsSL https://deb.nodesource.com/setup_20.x | bash -\",\n                \"apt-get install -y -qq nodejs\",\n            ]\n            for cmd in install_cmds:\n                result = self.execute({\"command\": cmd, \"timeout\": 60})\n                if result.get(\"returncode\", 1) != 0:\n                    raise RuntimeError(f\"Failed to install Node.js: {result.get('output', '')}\")\n        else:\n            logger.info(f\"Node.js already available: {output}\")\n\n    def _install_gitnexus(self):\n        \"\"\"Install the gitnexus npm package globally.\"\"\"\n        check = self.execute({\"command\": \"npx gitnexus --version 2>/dev/null || echo 'NOT_FOUND'\"})\n        if \"NOT_FOUND\" in check.get(\"output\", \"\"):\n            logger.info(\"Installing gitnexus...\")\n            result = self.execute({\n                \"command\": \"npm install -g gitnexus\",\n                \"timeout\": 60,\n            })\n            if result.get(\"returncode\", 1) != 0:\n                raise RuntimeError(f\"Failed to install gitnexus: {result.get('output', '')}\")\n\n    def _index_repository(self):\n        \"\"\"Run gitnexus analyze on the repo, using cache if available.\"\"\"\n        repo_info = self._get_repo_info()\n        cache_key = self._make_cache_key(repo_info)\n        cache_path = self.cache_dir / cache_key\n\n        if cache_path.exists():\n            logger.info(f\"Restoring GitNexus index from cache: {cache_key}\")\n            self._restore_cache(cache_path)\n            return\n\n        logger.info(\"Running gitnexus analyze...\")\n        skip_flag = \"--skip-embeddings\" if self.skip_embeddings else \"\"\n        result = self.execute({\n            \"command\": f\"cd /testbed && npx gitnexus analyze . 
{skip_flag} 2>&1\",\n            \"timeout\": self.gitnexus_timeout,\n        })\n\n        if result.get(\"returncode\", 1) != 0:\n            output = result.get(\"output\", \"\")\n            if \"error\" in output.lower() and \"indexed\" not in output.lower():\n                raise RuntimeError(f\"gitnexus analyze failed: {output[-500:]}\")\n\n        self._save_cache(cache_path, repo_info)\n\n    def _start_eval_server(self):\n        \"\"\"Start the GitNexus eval-server daemon in the background.\"\"\"\n        logger.info(f\"Starting eval-server on port {self.eval_server_port}...\")\n\n        self.execute({\n            \"command\": (\n                f\"nohup npx gitnexus eval-server --port {self.eval_server_port} \"\n                f\"--idle-timeout 600 \"\n                f\"> /tmp/gitnexus-eval-server.log 2>&1 &\"\n            ),\n            \"timeout\": 5,\n        })\n\n        # Wait for the server to be ready (up to 15s for KuzuDB init)\n        for i in range(30):\n            time.sleep(0.5)\n            health = self.execute({\n                \"command\": f\"curl -sf http://127.0.0.1:{self.eval_server_port}/health 2>/dev/null || echo 'NOT_READY'\",\n                \"timeout\": 3,\n            })\n            output = health.get(\"output\", \"\").strip()\n            if \"NOT_READY\" not in output and \"ok\" in output:\n                logger.info(f\"Eval-server ready after {(i + 1) * 0.5:.1f}s\")\n                return\n\n        log_output = self.execute({\n            \"command\": \"cat /tmp/gitnexus-eval-server.log 2>/dev/null | tail -20\",\n        })\n        logger.warning(\n            f\"Eval-server didn't become ready in 15s. 
\"\n            f\"Tools will fall back to direct CLI.\\n\"\n            f\"Server log: {log_output.get('output', 'N/A')}\"\n        )\n\n    def _install_tools(self):\n        \"\"\"\n        Install standalone GitNexus tool scripts in /usr/local/bin/.\n\n        Each script is a self-contained bash script that:\n        1. Calls the eval-server via curl (fast path, ~100ms)\n        2. Falls back to direct CLI if eval-server is unavailable\n\n        These are standalone executables — no sourcing, env inheritance, or .bashrc\n        needed. This is critical because mini-swe-agent runs every command via\n        subprocess.run in a fresh subshell.\n\n        Uses heredocs with quoted delimiter to avoid all quoting/escaping issues.\n        \"\"\"\n        port = str(self.eval_server_port)\n\n        tools = {\n            \"gitnexus-query\": TOOL_SCRIPT_QUERY,\n            \"gitnexus-context\": TOOL_SCRIPT_CONTEXT,\n            \"gitnexus-impact\": TOOL_SCRIPT_IMPACT,\n            \"gitnexus-cypher\": TOOL_SCRIPT_CYPHER,\n            \"gitnexus-augment\": TOOL_SCRIPT_AUGMENT,\n            \"gitnexus-overview\": TOOL_SCRIPT_OVERVIEW,\n        }\n\n        for name, script in tools.items():\n            script_content = script.replace(\"__PORT__\", port).strip()\n            # Use heredoc with quoted delimiter — prevents all variable expansion and quoting issues\n            self.execute({\n                \"command\": f\"cat << 'GITNEXUS_SCRIPT_EOF' > /usr/local/bin/{name}\\n{script_content}\\nGITNEXUS_SCRIPT_EOF\\nchmod +x /usr/local/bin/{name}\",\n                \"timeout\": 5,\n            })\n\n        logger.info(f\"Installed {len(tools)} GitNexus tool scripts in /usr/local/bin/\")\n\n    def _get_repo_info(self) -> dict:\n        \"\"\"Get repository identity info from the container.\"\"\"\n        repo_result = self.execute({\n            \"command\": \"cd /testbed && basename $(git remote get-url origin 2>/dev/null || basename $(pwd)) .git\"\n        })\n  
      commit_result = self.execute({\"command\": \"cd /testbed && git rev-parse HEAD 2>/dev/null || echo unknown\"})\n\n        return {\n            \"repo\": repo_result.get(\"output\", \"unknown\").strip(),\n            \"commit\": commit_result.get(\"output\", \"unknown\").strip(),\n        }\n\n    @staticmethod\n    def _make_cache_key(repo_info: dict) -> str:\n        \"\"\"Create a deterministic cache key from repo info.\"\"\"\n        content = f\"{repo_info['repo']}:{repo_info['commit']}\"\n        return hashlib.sha256(content.encode()).hexdigest()[:16]\n\n    def _save_cache(self, cache_path: Path, repo_info: dict):\n        \"\"\"Save the GitNexus index to the host cache directory.\"\"\"\n        try:\n            cache_path.mkdir(parents=True, exist_ok=True)\n\n            find_result = self.execute({\n                \"command\": \"find /root/.gitnexus -name 'kuzu' -type d 2>/dev/null | head -1\"\n            })\n            gitnexus_dir = find_result.get(\"output\", \"\").strip()\n\n            if gitnexus_dir:\n                parent = str(Path(gitnexus_dir).parent)\n                self.execute({\n                    \"command\": f\"cd {parent} && tar czf /tmp/gitnexus-cache.tar.gz .\",\n                    \"timeout\": 30,\n                })\n\n                container_id = getattr(self, \"_container_id\", None) or getattr(self, \"container_id\", None)\n                if container_id:\n                    import subprocess as sp\n                    sp.run(\n                        [\"docker\", \"cp\", f\"{container_id}:/tmp/gitnexus-cache.tar.gz\",\n                         str(cache_path / \"index.tar.gz\")],\n                        check=True, capture_output=True,\n                    )\n                    (cache_path / \"metadata.json\").write_text(json.dumps(repo_info, indent=2))\n                    logger.info(f\"Cached GitNexus index: {cache_path}\")\n\n        except Exception as e:\n            logger.warning(f\"Failed to cache 
GitNexus index: {e}\")\n            if cache_path.exists():\n                shutil.rmtree(cache_path, ignore_errors=True)\n\n    def _restore_cache(self, cache_path: Path):\n        \"\"\"Restore a cached GitNexus index into the container.\"\"\"\n        try:\n            cache_tarball = cache_path / \"index.tar.gz\"\n            if not cache_tarball.exists():\n                logger.warning(\"Cache tarball not found, re-indexing\")\n                self._index_repository()\n                return\n\n            container_id = getattr(self, \"_container_id\", None) or getattr(self, \"container_id\", None)\n            if container_id:\n                import subprocess as sp\n\n                self.execute({\"command\": \"mkdir -p /root/.gitnexus\"})\n\n                storage_result = self.execute({\n                    \"command\": \"npx gitnexus list 2>/dev/null | grep -o '/root/.gitnexus/[^ ]*' | head -1 || echo '/root/.gitnexus/repos/default'\"\n                })\n                storage_path = storage_result.get(\"output\", \"\").strip() or \"/root/.gitnexus/repos/default\"\n                self.execute({\"command\": f\"mkdir -p {storage_path}\"})\n\n                sp.run(\n                    [\"docker\", \"cp\", str(cache_tarball), f\"{container_id}:/tmp/gitnexus-cache.tar.gz\"],\n                    check=True, capture_output=True,\n                )\n                self.execute({\n                    \"command\": f\"cd {storage_path} && tar xzf /tmp/gitnexus-cache.tar.gz\",\n                    \"timeout\": 30,\n                })\n                logger.info(\"GitNexus index restored from cache\")\n\n        except Exception as e:\n            logger.warning(f\"Failed to restore cache, re-indexing: {e}\")\n            self._index_repository()\n\n    def stop(self) -> dict:\n        \"\"\"Stop the container, shutting down eval-server first.\"\"\"\n        if self._gitnexus_ready:\n            try:\n                self.execute({\n                    
\"command\": f\"curl -sf -X POST http://127.0.0.1:{self.eval_server_port}/shutdown 2>/dev/null || true\",\n                    \"timeout\": 3,\n                })\n            except Exception:\n                pass\n\n        return super().stop()\n\n    def get_template_vars(self) -> dict:\n        \"\"\"Add GitNexus-specific template variables.\"\"\"\n        base_vars = super().get_template_vars()\n        base_vars[\"gitnexus_ready\"] = self._gitnexus_ready\n        base_vars[\"gitnexus_index_time\"] = self.index_time\n        return base_vars\n\n    def serialize(self) -> dict:\n        \"\"\"Include GitNexus environment info in serialization.\"\"\"\n        base = super().serialize()\n        base.setdefault(\"info\", {})[\"gitnexus_env\"] = {\n            \"enabled\": self.enable_gitnexus,\n            \"ready\": self._gitnexus_ready,\n            \"index_time_seconds\": round(self.index_time, 2),\n            \"skip_embeddings\": self.skip_embeddings,\n            \"eval_server_port\": self.eval_server_port,\n        }\n        return base\n"
  },
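`_make_cache_key` in `gitnexus_docker.py` derives a deterministic key from `(repo, commit)` so an index built once can be restored on later runs of the same instance. A standalone sketch of that derivation (the sample repo/commit values are illustrative):

```python
import hashlib

def make_cache_key(repo: str, commit: str) -> str:
    """Deterministic 16-hex-char cache key, as in _make_cache_key."""
    content = f"{repo}:{commit}"
    return hashlib.sha256(content.encode()).hexdigest()[:16]

key = make_cache_key("django", "abc123")
assert len(key) == 16
# Same inputs always map to the same directory name under the cache dir;
# a different commit yields a different key, so stale indexes never collide.
assert make_cache_key("django", "abc123") == key
assert make_cache_key("django", "def456") != key
```

Truncating the SHA-256 digest to 16 hex characters (64 bits) keeps directory names short while leaving collisions vanishingly unlikely at eval scale.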
  {
    "path": "eval/prompts/instance_baseline.jinja",
    "content": "Please solve this issue: {{task}}\n\nYou can execute bash commands and edit files to implement the necessary changes.\n\n## Recommended Workflow\n\nThis workflows should be done step-by-step so that you can iterate on your changes and any possible problems.\n\n1. Analyze the codebase by finding and reading relevant files\n2. Create a script to reproduce the issue\n3. Edit the source code to resolve the issue\n4. Verify your fix works by running your script again\n5. Test edge cases to ensure your fix is robust\n6. Submit your changes and finish your work by issuing the following command: `echo COMPLETE_TASK_AND_SUBMIT_FINAL_OUTPUT`.\n   Do not combine it with any other command. After this command, you cannot continue working on this task.\n\n## Important Rules\n\n1. Every response must contain exactly one action\n2. The action must be enclosed in triple backticks\n3. Directory or environment variable changes are not persistent. Every action is executed in a new subshell.\n   However, you can prefix any action with `MY_ENV_VAR=MY_VALUE cd /path/to/working/dir && ...` or write/load environment variables from files\n\n<system_info>\n{{system}} {{release}} {{version}} {{machine}}\n</system_info>\n\n## Formatting your response\n\nHere is an example of a correct response:\n\n<example_response>\nTHOUGHT: I need to understand the structure of the repository first. Let me check what files are in the current directory to get a better understanding of the codebase.\n\n```mswea_bash_command\nls -la\n```\n</example_response>\n\n## Useful command examples\n\n### Create a new file:\n\n```bash\ncat <<'EOF' > newfile.py\nimport numpy as np\nhello = \"world\"\nprint(hello)\nEOF\n```\n\n### Edit files with sed:\n\n{%- if system == \"Darwin\" -%}\n<note>\nYou are on MacOS. 
For all the below examples, you need to use `sed -i ''` instead of `sed -i`.\n</note>\n{%- endif -%}\n\n```bash\n# Replace all occurrences\nsed -i 's/old_string/new_string/g' filename.py\n# Replace only first occurrence\nsed -i 's/old_string/new_string/' filename.py\n# Replace all occurrences in lines 1-10\nsed -i '1,10s/old_string/new_string/g' filename.py\n```\n\n### View file content:\n\n```bash\n# View specific lines with numbers\nnl -ba filename.py | sed -n '10,20p'\n```\n\n### Any other command you want to run\n\n```bash\nanything\n```\n"
  },
  {
    "path": "eval/prompts/instance_native.jinja",
    "content": "Please solve this issue: {{task}}\n\nYou can execute bash commands and edit files to implement the necessary changes.\n\n## Recommended Workflow\n\nWork step-by-step so you can iterate on your changes and catch problems early.\n\n1. **Understand the issue** — read the problem statement, identify the symptom and affected area\n2. **Find the relevant code** — use `gitnexus-query \"<feature area>\"` to find execution flows, or `grep` for specific strings\n3. **Understand the suspect** — use `gitnexus-context \"<symbol>\"` to see all callers and callees, then `cat` to read the source\n4. **Check blast radius** — before editing shared code, run `gitnexus-impact \"<symbol>\" upstream` to see what depends on it\n5. **Implement the fix** — make minimal, targeted changes\n6. **Verify** — run relevant tests, check edge cases\n7. **Submit** — issue: `echo COMPLETE_TASK_AND_SUBMIT_FINAL_OUTPUT`\n   Do not combine it with any other command. After this command, you cannot continue working on this task.\n\n## Debugging Patterns\n\n| Symptom | Approach |\n|---------|----------|\n| Error message / exception | `gitnexus-query` for error text → `gitnexus-context` on throw sites |\n| Wrong return value | `gitnexus-context` on the function → trace callees for data flow |\n| Missing feature / incomplete behavior | `gitnexus-query` for feature area → find the execution flow → locate the gap |\n| Need to understand callers | `gitnexus-context` — graph-complete, finds callers grep would miss |\n\n## Risk Assessment\n\nBefore editing shared code, check the blast radius:\n\n| Impact | Risk | Action |\n|--------|------|--------|\n| <5 symbols at d=1 | Low | Fix with confidence |\n| 5-15 symbols at d=1 | Medium | Fix carefully, run broader tests |\n| >15 symbols at d=1 | High | Minimal change, run full test suite |\n\n## Important Rules\n\n1. Every response must contain exactly one action\n2. The action must be enclosed in triple backticks\n3. 
Directory or environment variable changes are not persistent. Every action is executed in a new subshell.\n   However, you can prefix any action with `MY_ENV_VAR=MY_VALUE cd /path/to/working/dir && ...` or write/load environment variables from files\n4. Make minimal, targeted changes. Don't refactor unrelated code.\n5. GitNexus tools are ~100ms. Use them when they save you multiple grep iterations.\n\n<system_info>\n{{system}} {{release}} {{version}} {{machine}}\n</system_info>\n\n## Formatting your response\n\nHere is an example of a correct response:\n\n<example_response>\nTHOUGHT: The issue mentions a problem with form field validation. Let me search the code knowledge graph for the relevant execution flows to understand how validation works in this codebase.\n\n```mswea_bash_command\ngitnexus-query \"form field validation\"\n```\n</example_response>\n\n## Useful command examples\n\n### Create a new file:\n\n```bash\ncat <<'EOF' > newfile.py\nimport numpy as np\nhello = \"world\"\nprint(hello)\nEOF\n```\n\n### Edit files with sed:\n\n{% if system == \"Darwin\" %}\n<note>\nYou are on macOS. For all the below examples, you need to use `sed -i ''` instead of `sed -i`.\n</note>\n{% endif %}\n\n```bash\n# Replace all occurrences\nsed -i 's/old_string/new_string/g' filename.py\n# Replace only first occurrence\nsed -i 's/old_string/new_string/' filename.py\n# Replace all occurrences in lines 1-10\nsed -i '1,10s/old_string/new_string/g' filename.py\n```\n\n### View file content:\n\n```bash\n# View specific lines with numbers\nnl -ba filename.py | sed -n '10,20p'\n```\n\n### Any other command you want to run\n\n```bash\nanything\n```\n"
  },
  {
    "path": "eval/prompts/instance_native_augment.jinja",
    "content": "Please solve this issue: {{task}}\n\nYou can execute bash commands and edit files to implement the necessary changes.\n\n## Recommended Workflow\n\nWork step-by-step so you can iterate on your changes and catch problems early.\n\n1. **Understand the issue** — read the problem statement, identify the symptom and affected area\n2. **Find the relevant code** — use `gitnexus-query \"<feature area>\"` to find execution flows, or `grep` for specific strings\n3. **Understand the suspect** — use `gitnexus-context \"<symbol>\"` to see all callers and callees, then `cat` to read the source\n4. **Check blast radius** — before editing shared code, run `gitnexus-impact \"<symbol>\" upstream` to see what depends on it\n5. **Implement the fix** — make minimal, targeted changes\n6. **Verify** — run relevant tests, check edge cases\n7. **Submit** — issue: `echo COMPLETE_TASK_AND_SUBMIT_FINAL_OUTPUT`\n   Do not combine it with any other command. After this command, you cannot continue working on this task.\n\n## Debugging Patterns\n\n| Symptom | Approach |\n|---------|----------|\n| Error message / exception | `gitnexus-query` for error text → `gitnexus-context` on throw sites |\n| Wrong return value | `gitnexus-context` on the function → trace callees for data flow |\n| Missing feature / incomplete behavior | `gitnexus-query` for feature area → find the execution flow → locate the gap |\n| Need to understand callers | `gitnexus-context` — graph-complete, finds callers grep would miss |\n\n## Risk Assessment\n\nBefore editing shared code, check the blast radius:\n\n| Impact | Risk | Action |\n|--------|------|--------|\n| <5 symbols at d=1 | Low | Fix with confidence |\n| 5-15 symbols at d=1 | Medium | Fix carefully, run broader tests |\n| >15 symbols at d=1 | High | Minimal change, run full test suite |\n\n## Important Rules\n\n1. Every response must contain exactly one action\n2. The action must be enclosed in triple backticks\n3. 
Directory or environment variable changes are not persistent. Every action is executed in a new subshell.\n   However, you can prefix any action with `MY_ENV_VAR=MY_VALUE cd /path/to/working/dir && ...` or write/load environment variables from files\n4. Make minimal, targeted changes. Don't refactor unrelated code.\n5. GitNexus tools are ~100ms. Use them when they save you multiple grep iterations.\n6. When grep results show `[GitNexus]` enrichments, use those for navigation.\n\n<system_info>\n{{system}} {{release}} {{version}} {{machine}}\n</system_info>\n\n## Formatting your response\n\nHere is an example of a correct response:\n\n<example_response>\nTHOUGHT: The issue mentions a problem with form field validation. Let me search the code knowledge graph for the relevant execution flows to understand how validation works in this codebase.\n\n```mswea_bash_command\ngitnexus-query \"form field validation\"\n```\n</example_response>\n\n## Useful command examples\n\n### Create a new file:\n\n```bash\ncat <<'EOF' > newfile.py\nimport numpy as np\nhello = \"world\"\nprint(hello)\nEOF\n```\n\n### Edit files with sed:\n\n{% if system == \"Darwin\" %}\n<note>\nYou are on macOS. For all the below examples, you need to use `sed -i ''` instead of `sed -i`.\n</note>\n{% endif %}\n\n```bash\n# Replace all occurrences\nsed -i 's/old_string/new_string/g' filename.py\n# Replace only first occurrence\nsed -i 's/old_string/new_string/' filename.py\n# Replace all occurrences in lines 1-10\nsed -i '1,10s/old_string/new_string/g' filename.py\n```\n\n### View file content:\n\n```bash\n# View specific lines with numbers\nnl -ba filename.py | sed -n '10,20p'\n```\n\n### Any other command you want to run\n\n```bash\nanything\n```\n"
  },
  {
    "path": "eval/prompts/system_baseline.jinja",
"content": "You are a helpful assistant that can interact with a computer to solve software engineering tasks.\n\nYour response must contain exactly ONE bash code block with ONE command (or commands connected with && or ||).\nInclude a THOUGHT section before your command where you explain your reasoning process.\nFormat your response as shown in the example below.\n\n<example_response>\nYour reasoning and analysis here. Explain why you want to perform the action.\n\n```mswea_bash_command\nyour_command_here\n```\n</example_response>\n\nFailure to follow these rules will cause your response to be rejected.\n"
  },
  {
    "path": "eval/prompts/system_native.jinja",
"content": "You are a helpful assistant that can interact with a computer to solve software engineering tasks.\n\nYour response must contain exactly ONE bash code block with ONE command (or commands connected with && or ||).\nInclude a THOUGHT section before your command where you explain your reasoning process.\nFormat your response as shown in the example below.\n\n<example_response>\nYour reasoning and analysis here. Explain why you want to perform the action.\n\n```mswea_bash_command\nyour_command_here\n```\n</example_response>\n\nFailure to follow these rules will cause your response to be rejected.\n\n## Code Intelligence\n\nYou have **GitNexus** — a knowledge graph over this entire codebase. It knows every function call chain, class hierarchy, execution flow, and symbol relationship. These are fast bash commands (~100ms). Use them when useful, skip them when a simple grep suffices.\n\n### GitNexus Commands\n\n**gitnexus-query \"<concept>\"** — Find execution flows related to a concept.\nReturns ranked execution flow traces with participating symbols and file locations.\n```bash\ngitnexus-query \"form field validation\"\n```\n\n**gitnexus-context \"<symbol>\" [\"<file_path>\"]** — 360-degree view of a symbol.\nReturns ALL callers, ALL callees, and execution flows. Graph-complete — finds callers that grep misses.\n```bash\ngitnexus-context \"BoundField\" \"django/forms/boundfield.py\"\n```\n\n**gitnexus-impact \"<symbol>\" [upstream|downstream]** — Blast radius analysis.\nWhat breaks if you change this: d=1 WILL BREAK, d=2 LIKELY AFFECTED, d=3 MAY NEED TESTING.\n```bash\ngitnexus-impact \"BoundField\" upstream\n```\n\n**gitnexus-cypher \"<query>\"** — Raw Cypher query against the code graph.\n```bash\ngitnexus-cypher 'MATCH (a)-[:CodeRelation {type: \"CALLS\"}]->(b:Function {name: \"clean\"}) RETURN a.name, a.filePath'\n```\n\n### When to Use What\n\n| I need to... 
| Use |\n|---|---|\n| Understand how a feature works end-to-end | `gitnexus-query` |\n| Find ALL callers of a function | `gitnexus-context` |\n| Know what breaks if I change something | `gitnexus-impact` upstream |\n| Find a string literal or error message | `grep` |\n| Read source code | `cat` / `nl -ba` |\n"
  },
  {
    "path": "eval/prompts/system_native_augment.jinja",
"content": "You are a helpful assistant that can interact with a computer to solve software engineering tasks.\n\nYour response must contain exactly ONE bash code block with ONE command (or commands connected with && or ||).\nInclude a THOUGHT section before your command where you explain your reasoning process.\nFormat your response as shown in the example below.\n\n<example_response>\nYour reasoning and analysis here. Explain why you want to perform the action.\n\n```mswea_bash_command\nyour_command_here\n```\n</example_response>\n\nFailure to follow these rules will cause your response to be rejected.\n\n## Code Intelligence\n\nYou have **GitNexus** — a knowledge graph over this entire codebase. It knows every function call chain, class hierarchy, execution flow, and symbol relationship. These are fast bash commands (~100ms). Use them when useful, skip them when a simple grep suffices.\n\nYour `grep` results are also automatically enriched with `[GitNexus]` annotations showing callers, callees, and execution flows for matched symbols. Pay attention to these — they often point you to the right code without extra tool calls.\n\n### GitNexus Commands\n\n**gitnexus-query \"<concept>\"** — Find execution flows related to a concept.\nReturns ranked execution flow traces with participating symbols and file locations.\n```bash\ngitnexus-query \"form field validation\"\n```\n\n**gitnexus-context \"<symbol>\" [\"<file_path>\"]** — 360-degree view of a symbol.\nReturns ALL callers, ALL callees, and execution flows. 
Graph-complete — finds callers that grep misses.\n```bash\ngitnexus-context \"BoundField\" \"django/forms/boundfield.py\"\n```\n\n**gitnexus-impact \"<symbol>\" [upstream|downstream]** — Blast radius analysis.\nWhat breaks if you change this: d=1 WILL BREAK, d=2 LIKELY AFFECTED, d=3 MAY NEED TESTING.\n```bash\ngitnexus-impact \"BoundField\" upstream\n```\n\n**gitnexus-cypher \"<query>\"** — Raw Cypher query against the code graph.\n```bash\ngitnexus-cypher 'MATCH (a)-[:CodeRelation {type: \"CALLS\"}]->(b:Function {name: \"clean\"}) RETURN a.name, a.filePath'\n```\n\n### When to Use What\n\n| I need to... | Use |\n|---|---|\n| Understand how a feature works end-to-end | `gitnexus-query` |\n| Find ALL callers of a function | `gitnexus-context` |\n| Know what breaks if I change something | `gitnexus-impact` upstream |\n| Find a string literal or error message | `grep` |\n| Read source code | `cat` / `nl -ba` |\n"
  },
  {
    "path": "eval/pyproject.toml",
    "content": "[project]\nname = \"gitnexus-swebench-eval\"\nversion = \"0.1.0\"\ndescription = \"SWE-bench evaluation harness with GitNexus code intelligence integration\"\nreadme = \"README.md\"\nrequires-python = \">=3.11\"\ndependencies = [\n    \"mini-swe-agent>=2.0.0\",\n    \"litellm>=1.50.0\",\n    \"datasets>=3.0.0\",\n    \"typer>=0.12.0\",\n    \"rich>=13.0.0\",\n    \"pyyaml>=6.0\",\n    \"pandas>=2.0.0\",\n    \"tabulate>=0.9.0\",\n    \"python-dotenv>=1.0.0\",\n]\n\n[project.optional-dependencies]\ndev = [\n    \"pytest>=8.0.0\",\n    \"ruff>=0.5.0\",\n]\n\n[project.scripts]\ngitnexus-eval = \"run_eval:app\"\ngitnexus-eval-analyze = \"analysis.analyze_results:app\"\n\n[build-system]\nrequires = [\"hatchling\"]\nbuild-backend = \"hatchling.build\"\n\n[tool.hatch.build.targets.wheel]\npackages = [\"agents\", \"environments\", \"analysis\", \"bridge\"]\nextra-files = [\"run_eval.py\"]\n\n[tool.ruff]\nline-length = 120\ntarget-version = \"py311\"\n"
  },
  {
    "path": "eval/run_eval.py",
    "content": "#!/usr/bin/env python3\n\"\"\"\nGitNexus SWE-bench Evaluation Runner\n\nMain entry point for running SWE-bench evaluations with and without GitNexus.\nSupports running a single configuration or a full matrix of models x modes.\n\nUsage:\n    # Single run (default: native_augment mode — GitNexus tools + grep enrichment)\n    python run_eval.py single -m claude-sonnet --subset lite --slice 0:5\n\n    # Baseline comparison (no GitNexus)\n    python run_eval.py single -m claude-sonnet --mode baseline --subset lite --slice 0:5\n\n    # Matrix run (all models x all modes)\n    python run_eval.py matrix --subset lite --slice 0:50 --workers 4\n\n    # Single instance for debugging\n    python run_eval.py debug -m claude-haiku -i django__django-16527\n\"\"\"\n\nimport concurrent.futures\nimport json\nimport logging\nimport os\nimport threading\nimport time\nimport traceback\nfrom itertools import product\nfrom pathlib import Path\nfrom typing import Any\n\nimport typer\nimport yaml\nfrom rich.console import Console\nfrom rich.live import Live\nfrom rich.table import Table\n\n# Load .env file from eval/ directory\n_env_file = Path(__file__).parent / \".env\"\nif _env_file.exists():\n    for line in _env_file.read_text().splitlines():\n        line = line.strip()\n        if not line or line.startswith(\"#\"):\n            continue\n        if \"=\" in line:\n            key, _, value = line.partition(\"=\")\n            key, value = key.strip(), value.strip()\n            if value and key not in os.environ:  # Don't override existing env vars\n                os.environ[key] = value\n\nlogger = logging.getLogger(\"gitnexus_eval\")\nconsole = Console()\napp = typer.Typer(rich_markup_mode=\"rich\", add_completion=False)\n\n# Directory paths\nEVAL_DIR = Path(__file__).parent\nCONFIGS_DIR = EVAL_DIR / \"configs\"\nMODELS_DIR = CONFIGS_DIR / \"models\"\nMODES_DIR = CONFIGS_DIR / \"modes\"\nDEFAULT_OUTPUT_DIR = EVAL_DIR / \"results\"\n\n# Available models and modes 
(discovered from config files)\nAVAILABLE_MODELS = sorted([p.stem for p in MODELS_DIR.glob(\"*.yaml\")])\nAVAILABLE_MODES = sorted([p.stem for p in MODES_DIR.glob(\"*.yaml\")])\n\n# SWE-bench dataset mapping (same as mini-swe-agent)\nDATASET_MAPPING = {\n    \"full\": \"princeton-nlp/SWE-Bench\",\n    \"verified\": \"princeton-nlp/SWE-Bench_Verified\",\n    \"lite\": \"princeton-nlp/SWE-Bench_Lite\",\n}\n\n_output_lock = threading.Lock()\n\n\ndef load_yaml_config(path: Path) -> dict:\n    \"\"\"Load a YAML config file.\"\"\"\n    with open(path) as f:\n        return yaml.safe_load(f) or {}\n\n\ndef merge_configs(*configs: dict) -> dict:\n    \"\"\"Recursively merge multiple config dicts (later values win).\"\"\"\n    result = {}\n    for config in configs:\n        for key, value in config.items():\n            if key in result and isinstance(result[key], dict) and isinstance(value, dict):\n                result[key] = merge_configs(result[key], value)\n            else:\n                result[key] = value\n    return result\n\n\ndef build_config(model_name: str, mode_name: str) -> dict:\n    \"\"\"Build a complete config from model + mode YAML files.\"\"\"\n    model_file = MODELS_DIR / f\"{model_name}.yaml\"\n    mode_file = MODES_DIR / f\"{mode_name}.yaml\"\n\n    if not model_file.exists():\n        raise FileNotFoundError(f\"Model config not found: {model_file}\")\n    if not mode_file.exists():\n        raise FileNotFoundError(f\"Mode config not found: {mode_file}\")\n\n    model_config = load_yaml_config(model_file)\n    mode_config = load_yaml_config(mode_file)\n\n    return merge_configs(mode_config, model_config)\n\n\ndef load_instances(subset: str, split: str, slice_spec: str = \"\", filter_spec: str = \"\") -> list[dict]:\n    \"\"\"Load SWE-bench instances.\"\"\"\n    from datasets import load_dataset\n    import re\n\n    dataset_path = DATASET_MAPPING.get(subset, subset)\n    logger.info(f\"Loading dataset: {dataset_path}, split: {split}\")\n    
instances = list(load_dataset(dataset_path, split=split))\n\n    if filter_spec:\n        instances = [i for i in instances if re.match(filter_spec, i[\"instance_id\"])]\n\n    if slice_spec:\n        values = [int(x) if x else None for x in slice_spec.split(\":\")]\n        instances = instances[slice(*values)]\n\n    logger.info(f\"Loaded {len(instances)} instances\")\n    return instances\n\n\ndef get_swebench_docker_image(instance: dict) -> str:\n    \"\"\"Get Docker image name for a SWE-bench instance.\"\"\"\n    image_name = instance.get(\"image_name\")\n    if image_name is None:\n        iid = instance[\"instance_id\"]\n        id_docker = iid.replace(\"__\", \"_1776_\")\n        image_name = f\"docker.io/swebench/sweb.eval.x86_64.{id_docker}:latest\".lower()\n    return image_name\n\n\ndef process_instance(\n    instance: dict,\n    config: dict,\n    output_dir: Path,\n    model_name: str,\n    mode_name: str,\n) -> dict:\n    \"\"\"\n    Process a single SWE-bench instance with the given config.\n    Returns result dict with instance_id, exit_status, submission, metrics.\n    \"\"\"\n    from minisweagent.models import get_model\n\n    instance_id = instance[\"instance_id\"]\n    run_id = f\"{model_name}_{mode_name}\"\n    instance_dir = output_dir / run_id / instance_id\n    instance_dir.mkdir(parents=True, exist_ok=True)\n\n    result = {\n        \"instance_id\": instance_id,\n        \"model\": model_name,\n        \"mode\": mode_name,\n        \"exit_status\": None,\n        \"submission\": \"\",\n        \"cost\": 0.0,\n        \"n_calls\": 0,\n        \"gitnexus_metrics\": {},\n    }\n\n    agent = None\n\n    try:\n        # Build model\n        model = get_model(config=config.get(\"model\", {}))\n\n        # Build environment\n        env_config = dict(config.get(\"environment\", {}))\n        env_class_name = env_config.pop(\"environment_class\", \"docker\")\n\n        if env_class_name == 
\"eval.environments.gitnexus_docker.GitNexusDockerEnvironment\":\n            from environments.gitnexus_docker import GitNexusDockerEnvironment\n            env_config[\"image\"] = get_swebench_docker_image(instance)\n            env = GitNexusDockerEnvironment(**env_config)\n        else:\n            from minisweagent.environments.docker import DockerEnvironment\n            env = DockerEnvironment(image=get_swebench_docker_image(instance), **env_config)\n\n        # Build agent\n        agent_config = dict(config.get(\"agent\", {}))\n        agent_class_name = agent_config.pop(\"agent_class\", \"eval.agents.gitnexus_agent.GitNexusAgent\")\n\n        from agents.gitnexus_agent import GitNexusAgent\n        traj_path = instance_dir / f\"{instance_id}.traj.json\"\n        agent_config[\"output_path\"] = traj_path\n        agent = GitNexusAgent(model, env, **agent_config)\n\n        # Run\n        logger.info(f\"[{run_id}] Starting {instance_id}\")\n        info = agent.run(instance[\"problem_statement\"])\n\n        result[\"exit_status\"] = info.get(\"exit_status\")\n        result[\"cost\"] = agent.cost\n        result[\"n_calls\"] = agent.n_calls\n        result[\"gitnexus_metrics\"] = agent.gitnexus_metrics.to_dict()\n\n        # Extract git diff patch from the container (SWE-bench needs the model_patch)\n        try:\n            patch_output = env.execute({\"command\": \"cd /testbed && git diff\"})\n            result[\"submission\"] = patch_output.get(\"output\", \"\").strip()\n        except Exception as patch_err:\n            logger.warning(f\"[{run_id}] Failed to extract patch: {patch_err}\")\n            result[\"submission\"] = info.get(\"submission\", \"\")\n\n    except Exception as e:\n        logger.error(f\"[{run_id}] Error on {instance_id}: {e}\")\n        result[\"exit_status\"] = type(e).__name__\n        result[\"error\"] = str(e)\n        result[\"traceback\"] = traceback.format_exc()\n\n    finally:\n        if agent:\n            
agent.save(\n                instance_dir / f\"{instance_id}.traj.json\",\n                {\"instance_id\": instance_id, \"run_id\": run_id},\n            )\n\n        # Update predictions file\n        _update_preds(output_dir / run_id / \"preds.json\", instance_id, model_name, result)\n\n    return result\n\n\ndef _update_preds(preds_path: Path, instance_id: str, model_name: str, result: dict):\n    \"\"\"Thread-safe update of predictions file.\"\"\"\n    with _output_lock:\n        preds_path.parent.mkdir(parents=True, exist_ok=True)\n        data = {}\n        if preds_path.exists():\n            data = json.loads(preds_path.read_text())\n        data[instance_id] = {\n            \"model_name_or_path\": model_name,\n            \"instance_id\": instance_id,\n            \"model_patch\": result.get(\"submission\", \"\"),\n        }\n        preds_path.write_text(json.dumps(data, indent=2))\n\n\ndef run_configuration(\n    model_name: str,\n    mode_name: str,\n    instances: list[dict],\n    output_dir: Path,\n    workers: int = 1,\n    redo_existing: bool = False,\n) -> list[dict]:\n    \"\"\"Run a single (model, mode) configuration across all instances.\"\"\"\n    config = build_config(model_name, mode_name)\n    run_id = f\"{model_name}_{mode_name}\"\n    run_dir = output_dir / run_id\n\n    # Skip existing instances\n    if not redo_existing and (run_dir / \"preds.json\").exists():\n        existing = set(json.loads((run_dir / \"preds.json\").read_text()).keys())\n        instances = [i for i in instances if i[\"instance_id\"] not in existing]\n        if not instances:\n            logger.info(f\"[{run_id}] All instances already completed, skipping\")\n            return []\n\n    console.print(f\"  [bold]{run_id}[/bold]: {len(instances)} instances, {workers} workers\")\n\n    results = []\n\n    if workers <= 1:\n        for instance in instances:\n            result = process_instance(instance, config, output_dir, model_name, mode_name)\n            
results.append(result)\n    else:\n        with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as executor:\n            futures = {\n                executor.submit(\n                    process_instance, instance, config, output_dir, model_name, mode_name\n                ): instance[\"instance_id\"]\n                for instance in instances\n            }\n            for future in concurrent.futures.as_completed(futures):\n                try:\n                    results.append(future.result())\n                except Exception as e:\n                    iid = futures[future]\n                    logger.error(f\"[{run_id}] Uncaught error for {iid}: {e}\")\n\n    # Save run summary\n    summary = {\n        \"run_id\": run_id,\n        \"model\": model_name,\n        \"mode\": mode_name,\n        \"config\": config,\n        \"total_instances\": len(results),\n        \"completed\": sum(1 for r in results if r[\"exit_status\"] not in [None, \"error\"]),\n        \"total_cost\": sum(r.get(\"cost\", 0) for r in results),\n        \"total_api_calls\": sum(r.get(\"n_calls\", 0) for r in results),\n        \"results\": results,\n    }\n    run_dir.mkdir(parents=True, exist_ok=True)\n    (run_dir / \"summary.json\").write_text(json.dumps(summary, indent=2, default=str))\n\n    return results\n\n\n# ─── CLI Commands ───────────────────────────────────────────────────────────\n\n\n@app.command()\ndef single(\n    model: str = typer.Option(..., \"-m\", \"--model\", help=f\"Model config name. Available: {', '.join(AVAILABLE_MODELS)}\"),\n    mode: str = typer.Option(\"native_augment\", \"--mode\", help=f\"Evaluation mode. 
Available: {', '.join(AVAILABLE_MODES)}\"),\n    subset: str = typer.Option(\"lite\", \"--subset\", help=\"SWE-bench subset: lite, verified, full\"),\n    split: str = typer.Option(\"dev\", \"--split\", help=\"Dataset split\"),\n    slice_spec: str = typer.Option(\"\", \"--slice\", help=\"Slice spec (e.g., '0:5')\"),\n    filter_spec: str = typer.Option(\"\", \"--filter\", help=\"Filter instance IDs by regex\"),\n    workers: int = typer.Option(1, \"-w\", \"--workers\", help=\"Parallel workers\"),\n    output: str = typer.Option(str(DEFAULT_OUTPUT_DIR), \"-o\", \"--output\", help=\"Output directory\"),\n    redo: bool = typer.Option(False, \"--redo\", help=\"Redo existing instances\"),\n):\n    \"\"\"Run a single (model, mode) configuration on SWE-bench.\"\"\"\n    output_dir = Path(output)\n    instances = load_instances(subset, split, slice_spec, filter_spec)\n\n    console.print(f\"\\n[bold]Running evaluation:[/bold] {model} + {mode}\")\n    console.print(f\"  Instances: {len(instances)}\")\n    console.print(f\"  Output: {output_dir}\\n\")\n\n    results = run_configuration(model, mode, instances, output_dir, workers, redo)\n\n    # Print summary\n    _print_summary(results, model, mode)\n\n\n@app.command()\ndef matrix(\n    models: list[str] = typer.Option(AVAILABLE_MODELS, \"-m\", \"--models\", help=\"Models to evaluate (comma-separated or repeated)\"),\n    modes: list[str] = typer.Option(AVAILABLE_MODES, \"--modes\", help=\"Modes to evaluate\"),\n    subset: str = typer.Option(\"lite\", \"--subset\", help=\"SWE-bench subset\"),\n    split: str = typer.Option(\"dev\", \"--split\", help=\"Dataset split\"),\n    slice_spec: str = typer.Option(\"\", \"--slice\", help=\"Slice spec\"),\n    filter_spec: str = typer.Option(\"\", \"--filter\", help=\"Filter instances by regex\"),\n    workers: int = typer.Option(1, \"-w\", \"--workers\", help=\"Parallel workers per config\"),\n    output: str = typer.Option(str(DEFAULT_OUTPUT_DIR), \"-o\", \"--output\", 
help=\"Output directory\"),\n    redo: bool = typer.Option(False, \"--redo\", help=\"Redo existing instances\"),\n):\n    \"\"\"Run the full evaluation matrix: all models x all modes.\"\"\"\n    output_dir = Path(output)\n    instances = load_instances(subset, split, slice_spec, filter_spec)\n\n    combos = list(product(models, modes))\n    console.print(f\"\\n[bold]Matrix evaluation:[/bold] {len(models)} models x {len(modes)} modes = {len(combos)} configs\")\n    console.print(f\"  Models: {', '.join(models)}\")\n    console.print(f\"  Modes: {', '.join(modes)}\")\n    console.print(f\"  Instances per config: {len(instances)}\")\n    console.print(f\"  Total runs: {len(combos) * len(instances)}\")\n    console.print(f\"  Output: {output_dir}\\n\")\n\n    all_results = {}\n    for model_name, mode_name in combos:\n        run_id = f\"{model_name}_{mode_name}\"\n        console.print(f\"\\n[bold cyan]━━━ {run_id} ━━━[/bold cyan]\")\n        results = run_configuration(model_name, mode_name, instances, output_dir, workers, redo)\n        all_results[run_id] = results\n\n    # Print comparative summary\n    _print_matrix_summary(all_results)\n\n    # Save master summary\n    master = {\n        \"timestamp\": time.time(),\n        \"models\": models,\n        \"modes\": modes,\n        \"subset\": subset,\n        \"n_instances\": len(instances),\n        \"runs\": {\n            run_id: {\n                \"total\": len(results),\n                \"cost\": sum(r.get(\"cost\", 0) for r in results),\n                \"api_calls\": sum(r.get(\"n_calls\", 0) for r in results),\n            }\n            for run_id, results in all_results.items()\n        },\n    }\n    output_dir.mkdir(parents=True, exist_ok=True)\n    (output_dir / \"matrix_summary.json\").write_text(json.dumps(master, indent=2, default=str))\n    console.print(f\"\\n[green]Results saved to {output_dir}[/green]\")\n\n\n@app.command()\ndef debug(\n    model: str = typer.Option(\"claude-haiku\", \"-m\", 
\"--model\", help=\"Model config name\"),\n    mode: str = typer.Option(\"native_augment\", \"--mode\", help=\"Evaluation mode\"),\n    instance_id: str = typer.Option(..., \"-i\", \"--instance\", help=\"SWE-bench instance ID\"),\n    subset: str = typer.Option(\"lite\", \"--subset\", help=\"SWE-bench subset\"),\n    split: str = typer.Option(\"dev\", \"--split\"),\n    output: str = typer.Option(str(DEFAULT_OUTPUT_DIR / \"debug\"), \"-o\", \"--output\"),\n):\n    \"\"\"Debug a single SWE-bench instance.\"\"\"\n    from datasets import load_dataset\n\n    dataset_path = DATASET_MAPPING.get(subset, subset)\n    instances = {inst[\"instance_id\"]: inst for inst in load_dataset(dataset_path, split=split)}\n\n    if instance_id not in instances:\n        console.print(f\"[red]Instance '{instance_id}' not found in {subset}/{split}[/red]\")\n        raise typer.Exit(1)\n\n    instance = instances[instance_id]\n    config = build_config(model, mode)\n    output_dir = Path(output)\n\n    console.print(f\"\\n[bold]Debug run:[/bold] {model} + {mode}\")\n    console.print(f\"  Instance: {instance_id}\")\n    console.print(f\"  Problem: {instance['problem_statement'][:200]}...\\n\")\n\n    result = process_instance(instance, config, output_dir, model, mode)\n    _print_summary([result], model, mode)\n\n\n@app.command()\ndef list_configs():\n    \"\"\"List available model and mode configurations.\"\"\"\n    console.print(\"\\n[bold]Available Models:[/bold]\")\n    for name in AVAILABLE_MODELS:\n        config = load_yaml_config(MODELS_DIR / f\"{name}.yaml\")\n        model_name = config.get(\"model\", {}).get(\"model_name\", \"unknown\")\n        console.print(f\"  {name:<20} {model_name}\")\n\n    console.print(\"\\n[bold]Available Modes:[/bold]\")\n    for name in AVAILABLE_MODES:\n        config = load_yaml_config(MODES_DIR / f\"{name}.yaml\")\n        gn_mode = config.get(\"agent\", {}).get(\"gitnexus_mode\", \"baseline\")\n        console.print(f\"  {name:<20} 
gitnexus_mode={gn_mode}\")\n\n    console.print(f\"\\n[bold]Matrix:[/bold] {len(AVAILABLE_MODELS)} models x {len(AVAILABLE_MODES)} modes = {len(AVAILABLE_MODELS) * len(AVAILABLE_MODES)} configurations\")\n\n\n# ─── Summary Output ────────────────────────────────────────────────────────\n\n\ndef _print_summary(results: list[dict], model: str, mode: str):\n    \"\"\"Print a summary table for a single run.\"\"\"\n    if not results:\n        console.print(\"[yellow]No results to display[/yellow]\")\n        return\n\n    table = Table(title=f\"{model} + {mode}\")\n    table.add_column(\"Metric\", style=\"bold\")\n    table.add_column(\"Value\")\n\n    total = len(results)\n    completed = sum(1 for r in results if r.get(\"submission\"))\n    total_cost = sum(r.get(\"cost\", 0) for r in results)\n    total_calls = sum(r.get(\"n_calls\", 0) for r in results)\n\n    table.add_row(\"Instances\", str(total))\n    table.add_row(\"Completed\", f\"{completed}/{total}\")\n    table.add_row(\"Total Cost\", f\"${total_cost:.4f}\")\n    table.add_row(\"Total API Calls\", str(total_calls))\n    table.add_row(\"Avg Cost/Instance\", f\"${total_cost / max(total, 1):.4f}\")\n    table.add_row(\"Avg Calls/Instance\", f\"{total_calls / max(total, 1):.1f}\")\n\n    # GitNexus-specific metrics\n    gn_tool_calls = sum(\n        r.get(\"gitnexus_metrics\", {}).get(\"total_tool_calls\", 0) for r in results\n    )\n    gn_augment_hits = sum(\n        r.get(\"gitnexus_metrics\", {}).get(\"augmentation_hits\", 0) for r in results\n    )\n    if gn_tool_calls > 0:\n        table.add_row(\"GitNexus Tool Calls\", str(gn_tool_calls))\n    if gn_augment_hits > 0:\n        table.add_row(\"Augmentation Hits\", str(gn_augment_hits))\n\n    console.print(table)\n\n\ndef _print_matrix_summary(all_results: dict[str, list[dict]]):\n    \"\"\"Print a comparative matrix summary.\"\"\"\n    table = Table(title=\"Evaluation Matrix Summary\")\n    table.add_column(\"Configuration\", style=\"bold\")\n    
table.add_column(\"Instances\")\n    table.add_column(\"Completed\")\n    table.add_column(\"Cost\")\n    table.add_column(\"API Calls\")\n    table.add_column(\"GN Tools\")\n\n    for run_id, results in sorted(all_results.items()):\n        total = len(results)\n        completed = sum(1 for r in results if r.get(\"submission\"))\n        cost = sum(r.get(\"cost\", 0) for r in results)\n        calls = sum(r.get(\"n_calls\", 0) for r in results)\n        gn_calls = sum(r.get(\"gitnexus_metrics\", {}).get(\"total_tool_calls\", 0) for r in results)\n\n        table.add_row(\n            run_id,\n            str(total),\n            f\"{completed}/{total}\",\n            f\"${cost:.2f}\",\n            str(calls),\n            str(gn_calls) if gn_calls > 0 else \"-\",\n        )\n\n    console.print(table)\n\n\nif __name__ == \"__main__\":\n    logging.basicConfig(level=logging.INFO, format=\"%(asctime)s [%(name)s] %(message)s\")\n    app()\n"
  },
  {
    "path": "gitnexus/.claude/settings.local.json",
    "content": "{\n  \"permissions\": {\n    \"allow\": [\n      \"mcp__plugin_claude-mem_mcp-search__get_observations\"\n    ]\n  }\n}\n"
  },
  {
    "path": "gitnexus/.npmignore",
    "content": "# Source (dist/ is the compiled output)\nsrc/\ntsconfig.json\n\n# Dev files\n*.ts\n!dist/**/*.d.ts\n.git/\n.gitignore\nnode_modules/\n\n# Package lock (consumers use their own)\npackage-lock.json\n\n# IDE\n.vscode/\n.idea/\n"
  },
  {
    "path": "gitnexus/CHANGELOG.md",
    "content": "# Changelog\n\nAll notable changes to GitNexus will be documented in this file.\n\n## [1.4.7] - 2026-03-19\n\n### Added\n- **Phase 8 field/property type resolution** — ACCESSES edges with `declaredType` for field reads/writes (#354)\n- **Phase 9 return-type variable binding** — call-result variable binding across 11 languages (#379)\n  - `extractPendingAssignment` in per-language type extractors captures `let x = getUser()` patterns\n  - Unified fixpoint loop resolves variable types from function return types after initial walk\n  - Field access on call-result variables: `user.name` resolves `name` via return type's class definition\n  - Method-call-result chaining: `user.getProfile().bio` resolves through intermediate return types\n  - 22 new test fixtures covering call-result and method-chain binding across all supported languages\n  - Integration tests added for all 10 language resolver suites\n- **ACCESSES edge type** with read/write field access tracking (#372)\n- **Python `enumerate()` for-loop support** with nested tuple patterns (#356)\n- **MCP tool/resource descriptions** updated to reflect Phase 9 ACCESSES edge semantics and `declaredType` property\n\n### Fixed\n- **mcp**: server crashes under parallel tool calls (#326, #349)\n- **parsing**: undefined error on languages missing from call routers (#364)\n- **web**: add missing Kotlin entries to `Record<SupportedLanguages>` maps\n- **rust**: `await` expression unwrapping in `extractPendingAssignment` for async call-result binding\n- **tests**: update property edge and write access expectations across multiple language tests\n- **docs**: corrected stale \"single-pass\" claims in type-resolution-system.md to reflect walk+fixpoint architecture\n\n### Changed\n- **Upgrade `@ladybugdb/core` to 0.15.2** and remove segfault workarounds (#374)\n- **type-resolution-roadmap.md** overhauled — completed phases condensed to summaries, Phases 10–14 added with full engineering specs\n\n## [1.4.6] - 
2026-03-18\n\n### Added\n- **Phase 7 type resolution** — return-aware loop inference for call-expression iterables (#341)\n  - `ReturnTypeLookup` interface with `lookupReturnType` / `lookupRawReturnType` split\n  - `ForLoopExtractorContext` context object replacing positional `(node, env)` signature\n  - Call-expression iterable resolution across 8 languages (TS/JS, Java, Kotlin, C#, Go, Rust, Python, PHP)\n  - PHP `$this->property` foreach via `@var` class property scan (Strategy C)\n  - PHP `function_call_expression` and `member_call_expression` foreach paths\n  - `extractElementTypeFromString` as canonical raw-string container unwrapper in `shared.ts`\n  - `extractReturnTypeName` deduplicated from `call-processor.ts` into `shared.ts` (137 lines removed)\n  - `SKIP_SUBTREE_TYPES` performance optimization with documented `template_string` exclusion\n  - `pendingCallResults` infrastructure (dormant — Phase 9 work)\n\n### Fixed\n- **impact**: return structured error + partial results instead of crashing (#345)\n- **impact**: add `HAS_METHOD` and `OVERRIDES` to `VALID_RELATION_TYPES` (#350)\n- **cli**: write tool output to stdout via fd 1 instead of stderr (#346)\n- **postinstall**: add permission fix for CLI and hook scripts (#348)\n- **workflow**: use prefixed temporary branch name for fork PRs to prevent overwriting real branches\n- **test**: add `--repo` to CLI e2e tool tests for multi-repo environment\n- **php**: add `declaration_list` type guard on `findClassPropertyElementType` fallback\n- **docs**: correct `pendingCallResults` description in roadmap and system docs\n\n### Chore\n- Add `.worktrees/` to `.gitignore`\n\n## [1.4.5] - 2026-03-17\n\n### Added\n- **Ruby language support** for CLI and web (#111)\n- **TypeEnvironment API** with constructor inference, self/this/super resolution (#274)\n- **Return type inference** with doc-comment parsing (JSDoc, PHPDoc, YARD) and per-language type extractors (#284)\n- **Phase 4 type resolution** — nullable unwrapping, 
for-loop typing, assignment chain propagation (#310)\n- **Phase 5 type resolution** — chained calls, pattern matching, class-as-receiver (#315)\n- **Phase 6 type resolution** — for-loop Tier 1c, pattern matching, container descriptors, 10-language coverage (#318)\n  - Container descriptor table for generic type argument resolution (Map keys vs values)\n  - Method-aware for-loop extractors with integration tests for all languages\n  - Recursive pattern binding (C# `is` patterns, Kotlin `when/is` smart casts)\n  - Class field declaration unwrapping for C#/Java\n  - PHP `$this->property` foreach member access\n  - C++ pointer dereference range-for\n  - Java `this.data.values()` field access patterns\n  - Position-indexed when/is bindings for branch-local narrowing\n- **Type resolution system documentation** with architecture guide and roadmap\n- `.gitignore` and `.gitnexusignore` support during file discovery (#231)\n- Codex MCP configuration documentation in README (#236)\n- `skipGraphPhases` pipeline option to skip MRO/community/process phases for faster test runs\n- `hookTimeout: 120000` in vitest config for CI beforeAll hooks\n\n### Changed\n- **Migrated from KuzuDB to LadybugDB v0.15** (#275)\n- Dynamically discover and install agent skills in CLI (#270)\n\n### Performance\n- Worker pool threshold — skip worker creation for small repos (<15 files or <512KB total)\n- AST walk pruning via `SKIP_SUBTREE_TYPES` for leaf-only nodes (string, comment, number literals)\n- Pre-computed `interestingNodeTypes` set — single Set.has() replaces 3 checks per AST node\n- `fastStripNullable` — skip full nullable parsing for simple identifiers (90%+ case)\n- Replace `.children?.find()` with manual for loops in `extractFunctionName` to eliminate array allocations\n\n### Fixed\n- Same-directory Python import resolution (#328)\n- Ruby method-level call resolution, HAS_METHOD edges, and dispatch table (#278)\n- C++ fixture file casing for case-sensitive CI\n- Template string 
incorrectly included in AST pruning set (contains interpolated expressions)\n\n## [1.4.0] - Previous release\n"
  },
  {
    "path": "gitnexus/Dockerfile.test",
    "content": "FROM node:22-bookworm\nWORKDIR /app\nRUN apt-get update && apt-get install -y python3 make g++ && rm -rf /var/lib/apt/lists/*\nCOPY . .\nRUN npm ci --ignore-scripts \\\n    && node scripts/patch-tree-sitter-swift.cjs \\\n    && (npm rebuild 2>&1 || true) \\\n    && cd node_modules/tree-sitter-kotlin && npx --yes node-gyp rebuild 2>&1\nCMD [\"npx\", \"vitest\", \"run\", \"test/integration\", \"--reporter=verbose\"]\n"
  },
  {
    "path": "gitnexus/README.md",
    "content": "# GitNexus\n\n**Graph-powered code intelligence for AI agents.** Index any codebase into a knowledge graph, then query it via MCP or CLI.\n\nWorks with **Cursor**, **Claude Code**, **Windsurf**, **Cline**, **OpenCode**, and any MCP-compatible tool.\n\n[![npm version](https://img.shields.io/npm/v/gitnexus.svg)](https://www.npmjs.com/package/gitnexus)\n[![License: PolyForm Noncommercial](https://img.shields.io/badge/License-PolyForm%20Noncommercial-blue.svg)](https://polyformproject.org/licenses/noncommercial/1.0.0/)\n\n---\n\n## Why?\n\nAI coding tools don't understand your codebase structure. They edit a function without knowing 47 other functions depend on it. GitNexus fixes this by **precomputing every dependency, call chain, and relationship** into a queryable graph.\n\n**Three commands to give your AI agent full codebase awareness.**\n\n## Quick Start\n\n```bash\n# Index your repo (run from repo root)\nnpx gitnexus analyze\n```\n\nThat's it. This indexes the codebase, installs agent skills, registers Claude Code hooks, and creates `AGENTS.md` / `CLAUDE.md` context files — all in one command.\n\nTo configure MCP for your editor, run `npx gitnexus setup` once — or set it up manually below.\n\n`gitnexus setup` auto-detects your editors and writes the correct global MCP config. 
You only need to run it once.\n\n### Editor Support\n\n| Editor | MCP | Skills | Hooks (auto-augment) | Support |\n|--------|-----|--------|---------------------|---------|\n| **Claude Code** | Yes | Yes | Yes (PreToolUse) | **Full** |\n| **Cursor** | Yes | Yes | — | MCP + Skills |\n| **Windsurf** | Yes | — | — | MCP |\n| **OpenCode** | Yes | Yes | — | MCP + Skills |\n\n> **Claude Code** gets the deepest integration: MCP tools + agent skills + PreToolUse hooks that automatically enrich grep/glob/bash calls with knowledge graph context.\n\n### Community Integrations\n\n| Agent | Install | Source |\n|-------|---------|--------|\n| [pi](https://pi.dev) | `pi install npm:pi-gitnexus` | [pi-gitnexus](https://github.com/tintinweb/pi-gitnexus) |\n\n## MCP Setup (manual)\n\nIf you prefer to configure manually instead of using `gitnexus setup`:\n\n### Claude Code (full support — MCP + skills + hooks)\n\n```bash\nclaude mcp add gitnexus -- npx -y gitnexus@latest mcp\n```\n\n### Cursor / Windsurf\n\nAdd to `~/.cursor/mcp.json` (global — works for all projects):\n\n```json\n{\n  \"mcpServers\": {\n    \"gitnexus\": {\n      \"command\": \"npx\",\n      \"args\": [\"-y\", \"gitnexus@latest\", \"mcp\"]\n    }\n  }\n}\n```\n\n### OpenCode\n\nAdd to `~/.config/opencode/config.json`:\n\n```json\n{\n  \"mcp\": {\n    \"gitnexus\": {\n      \"command\": \"npx\",\n      \"args\": [\"-y\", \"gitnexus@latest\", \"mcp\"]\n    }\n  }\n}\n```\n\n## How It Works\n\nGitNexus builds a complete knowledge graph of your codebase through a multi-phase indexing pipeline:\n\n1. **Structure** — Walks the file tree and maps folder/file relationships\n2. **Parsing** — Extracts functions, classes, methods, and interfaces using Tree-sitter ASTs\n3. **Resolution** — Resolves imports and function calls across files with language-aware logic\n4. **Clustering** — Groups related symbols into functional communities\n5. **Processes** — Traces execution flows from entry points through call chains\n6. 
**Search** — Builds hybrid search indexes for fast retrieval\n\nThe result is a **LadybugDB graph database** stored locally in `.gitnexus/` with full-text search and semantic embeddings.\n\n## MCP Tools\n\nYour AI agent gets these tools automatically:\n\n| Tool | What It Does | `repo` Param |\n|------|-------------|--------------|\n| `list_repos` | Discover all indexed repositories | — |\n| `query` | Process-grouped hybrid search (BM25 + semantic + RRF) | Optional |\n| `context` | 360-degree symbol view — categorized refs, process participation | Optional |\n| `impact` | Blast radius analysis with depth grouping and confidence | Optional |\n| `detect_changes` | Git-diff impact — maps changed lines to affected processes | Optional |\n| `rename` | Multi-file coordinated rename with graph + text search | Optional |\n| `cypher` | Raw Cypher graph queries | Optional |\n\n> With one indexed repo, the `repo` param is optional. With multiple, specify which: `query({query: \"auth\", repo: \"my-app\"})`.\n\n## MCP Resources\n\n| Resource | Purpose |\n|----------|---------|\n| `gitnexus://repos` | List all indexed repositories (read first) |\n| `gitnexus://repo/{name}/context` | Codebase stats, staleness check, and available tools |\n| `gitnexus://repo/{name}/clusters` | All functional clusters with cohesion scores |\n| `gitnexus://repo/{name}/cluster/{name}` | Cluster members and details |\n| `gitnexus://repo/{name}/processes` | All execution flows |\n| `gitnexus://repo/{name}/process/{name}` | Full process trace with steps |\n| `gitnexus://repo/{name}/schema` | Graph schema for Cypher queries |\n\n## MCP Prompts\n\n| Prompt | What It Does |\n|--------|-------------|\n| `detect_impact` | Pre-commit change analysis — scope, affected processes, risk level |\n| `generate_map` | Architecture documentation from the knowledge graph with mermaid diagrams |\n\n## CLI Commands\n\n```bash\ngitnexus setup                    # Configure MCP for your editors (one-time)\ngitnexus analyze 
[path]           # Index a repository (or update stale index)\ngitnexus analyze --force          # Force full re-index\ngitnexus analyze --embeddings     # Enable embedding generation (slower, better search)\ngitnexus analyze --verbose        # Log skipped files when parsers are unavailable\ngitnexus mcp                      # Start MCP server (stdio) — serves all indexed repos\ngitnexus serve                    # Start local HTTP server (multi-repo) for web UI\ngitnexus list                     # List all indexed repositories\ngitnexus status                   # Show index status for current repo\ngitnexus clean                    # Delete index for current repo\ngitnexus clean --all --force      # Delete all indexes\ngitnexus wiki [path]              # Generate LLM-powered docs from knowledge graph\ngitnexus wiki --model <model>     # Wiki with custom LLM model (default: minimax/minimax-m2.5)\n```\n\n## Multi-Repo Support\n\nGitNexus supports indexing multiple repositories. Each `gitnexus analyze` registers the repo in a global registry (`~/.gitnexus/registry.json`). 
The MCP server serves all indexed repos automatically.\n\n## Supported Languages\n\nTypeScript, JavaScript, Python, Java, C, C++, C#, Go, Rust, PHP, Kotlin, Swift, Ruby\n\n### Language Feature Matrix\n\n| Language | Imports | Named Bindings | Exports | Heritage | Type Annotations | Constructor Inference | Config | Frameworks | Entry Points |\n|----------|---------|----------------|---------|----------|-----------------|---------------------|--------|------------|-------------|\n| TypeScript | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |\n| JavaScript | ✓ | ✓ | ✓ | ✓ | — | ✓ | ✓ | ✓ | ✓ |\n| Python | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |\n| Java | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | — | ✓ | ✓ |\n| Kotlin | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | — | ✓ | ✓ |\n| C# | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |\n| Go | ✓ | — | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |\n| Rust | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | — | ✓ | ✓ |\n| PHP | ✓ | ✓ | ✓ | — | ✓ | ✓ | ✓ | ✓ | ✓ |\n| Ruby | ✓ | — | ✓ | ✓ | — | ✓ | — | ✓ | ✓ |\n| Swift | — | — | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |\n| C | — | — | ✓ | — | ✓ | ✓ | — | ✓ | ✓ |\n| C++ | — | — | ✓ | ✓ | ✓ | ✓ | — | ✓ | ✓ |\n\n**Imports** — cross-file import resolution · **Named Bindings** — `import { X as Y }` / re-export tracking · **Exports** — public/exported symbol detection · **Heritage** — class inheritance, interfaces, mixins · **Type Annotations** — explicit type extraction for receiver resolution · **Constructor Inference** — infer receiver type from constructor calls (`self`/`this` resolution included for all languages) · **Config** — language toolchain config parsing (tsconfig, go.mod, etc.) 
· **Frameworks** — AST-based framework pattern detection · **Entry Points** — entry point scoring heuristics\n\n## Agent Skills\n\nGitNexus ships with skill files that teach AI agents how to use the tools effectively:\n\n- **Exploring** — Navigate unfamiliar code using the knowledge graph\n- **Debugging** — Trace bugs through call chains\n- **Impact Analysis** — Analyze blast radius before changes\n- **Refactoring** — Plan safe refactors using dependency mapping\n\nInstalled automatically by both `gitnexus analyze` (per-repo) and `gitnexus setup` (global).\n\n## Requirements\n\n- Node.js >= 18\n- Git repository (uses git for commit tracking)\n\n## Privacy\n\n- All processing happens locally on your machine\n- No code is sent to any server\n- Index stored in `.gitnexus/` inside your repo (gitignored)\n- Global registry at `~/.gitnexus/` stores only paths and metadata\n\n## Web UI\n\nGitNexus also has a browser-based UI at [gitnexus.vercel.app](https://gitnexus.vercel.app) — 100% client-side, your code never leaves the browser.\n\n**Local Backend Mode:** Run `gitnexus serve` and open the web UI locally — it auto-detects the server and shows all your indexed repos, with full AI chat support. No need to re-upload or re-index. The agent's tools (Cypher queries, search, code navigation) route through the backend HTTP API automatically.\n\n## License\n\n[PolyForm Noncommercial 1.0.0](https://polyformproject.org/licenses/noncommercial/1.0.0/)\n\nFree for non-commercial use. Contact for commercial licensing.\n"
  },
  {
    "path": "gitnexus/hooks/claude/gitnexus-hook.cjs",
    "content": "#!/usr/bin/env node\n/**\n * GitNexus Claude Code Hook\n *\n * PreToolUse  — intercepts Grep/Glob/Bash searches and augments\n *               with graph context from the GitNexus index.\n * PostToolUse — detects stale index after git mutations and notifies\n *               the agent to reindex.\n *\n * NOTE: SessionStart hooks are broken on Windows (Claude Code bug).\n * Session context is injected via CLAUDE.md / skills instead.\n */\n\nconst fs = require('fs');\nconst path = require('path');\nconst { spawnSync } = require('child_process');\n\n/**\n * Read JSON input from stdin synchronously.\n */\nfunction readInput() {\n  try {\n    const data = fs.readFileSync(0, 'utf-8');\n    return JSON.parse(data);\n  } catch {\n    return {};\n  }\n}\n\n/**\n * Find the .gitnexus directory by walking up from startDir.\n * Returns the path to .gitnexus/ or null if not found.\n */\nfunction findGitNexusDir(startDir) {\n  let dir = startDir || process.cwd();\n  for (let i = 0; i < 5; i++) {\n    const candidate = path.join(dir, '.gitnexus');\n    if (fs.existsSync(candidate)) return candidate;\n    const parent = path.dirname(dir);\n    if (parent === dir) break;\n    dir = parent;\n  }\n  return null;\n}\n\n/**\n * Extract search pattern from tool input.\n */\nfunction extractPattern(toolName, toolInput) {\n  if (toolName === 'Grep') {\n    return toolInput.pattern || null;\n  }\n\n  if (toolName === 'Glob') {\n    const raw = toolInput.pattern || '';\n    const match = raw.match(/[*\\/]([a-zA-Z][a-zA-Z0-9_-]{2,})/);\n    return match ? 
match[1] : null;\n  }\n\n  if (toolName === 'Bash') {\n    const cmd = toolInput.command || '';\n    if (!/\\brg\\b|\\bgrep\\b/.test(cmd)) return null;\n\n    const tokens = cmd.split(/\\s+/);\n    let foundCmd = false;\n    let skipNext = false;\n    const flagsWithValues = new Set(['-e', '-f', '-m', '-A', '-B', '-C', '-g', '--glob', '-t', '--type', '--include', '--exclude']);\n\n    for (const token of tokens) {\n      if (skipNext) { skipNext = false; continue; }\n      if (!foundCmd) {\n        if (/\\brg$|\\bgrep$/.test(token)) foundCmd = true;\n        continue;\n      }\n      if (token.startsWith('-')) {\n        if (flagsWithValues.has(token)) skipNext = true;\n        continue;\n      }\n      const cleaned = token.replace(/['\"]/g, '');\n      return cleaned.length >= 3 ? cleaned : null;\n    }\n    return null;\n  }\n\n  return null;\n}\n\n/**\n * Resolve the gitnexus CLI path.\n * 1. Relative path (works when script is inside npm package)\n * 2. require.resolve (works when gitnexus is globally installed)\n * 3. Fall back to npx (returns empty string)\n */\nfunction resolveCliPath() {\n  let cliPath = path.resolve(__dirname, '..', '..', 'dist', 'cli', 'index.js');\n  if (!fs.existsSync(cliPath)) {\n    try {\n      cliPath = require.resolve('gitnexus/dist/cli/index.js');\n    } catch {\n      cliPath = '';\n    }\n  }\n  return cliPath;\n}\n\n/**\n * Spawn a gitnexus CLI command synchronously.\n * Returns the stderr output (KuzuDB captures stdout at OS level).\n */\nfunction runGitNexusCli(cliPath, args, cwd, timeout) {\n  const isWin = process.platform === 'win32';\n  if (cliPath) {\n    return spawnSync(\n      process.execPath,\n      [cliPath, ...args],\n      { encoding: 'utf-8', timeout, cwd, stdio: ['pipe', 'pipe', 'pipe'] }\n    );\n  }\n  // On Windows, invoke npx.cmd directly (no shell needed)\n  return spawnSync(\n    isWin ? 
'npx.cmd' : 'npx',\n    ['-y', 'gitnexus', ...args],\n    { encoding: 'utf-8', timeout: timeout + 5000, cwd, stdio: ['pipe', 'pipe', 'pipe'] }\n  );\n}\n\n/**\n * PreToolUse handler — augment searches with graph context.\n */\nfunction handlePreToolUse(input) {\n  const cwd = input.cwd || process.cwd();\n  if (!path.isAbsolute(cwd)) return;\n  if (!findGitNexusDir(cwd)) return;\n\n  const toolName = input.tool_name || '';\n  const toolInput = input.tool_input || {};\n\n  if (toolName !== 'Grep' && toolName !== 'Glob' && toolName !== 'Bash') return;\n\n  const pattern = extractPattern(toolName, toolInput);\n  if (!pattern || pattern.length < 3) return;\n\n  const cliPath = resolveCliPath();\n  let result = '';\n  try {\n    const child = runGitNexusCli(cliPath, ['augment', '--', pattern], cwd, 7000);\n    if (!child.error && child.status === 0) {\n      result = child.stderr || '';\n    }\n  } catch { /* graceful failure */ }\n\n  if (result && result.trim()) {\n    sendHookResponse('PreToolUse', result.trim());\n  }\n}\n\n/**\n * Emit a hook response (PreToolUse or PostToolUse) with additional context\n * for the agent.\n */\nfunction sendHookResponse(hookEventName, message) {\n  console.log(JSON.stringify({\n    hookSpecificOutput: { hookEventName, additionalContext: message }\n  }));\n}\n\n/**\n * PostToolUse handler — detect index staleness after git mutations.\n *\n * Instead of spawning a full `gitnexus analyze` synchronously (which blocks\n * the agent for up to 120s and risks database corruption on timeout), we do a\n * lightweight staleness check: compare `git rev-parse HEAD` against the\n * lastCommit stored in `.gitnexus/meta.json`. 
If they differ, notify the\n * agent so it can decide when to reindex.\n */\nfunction handlePostToolUse(input) {\n  const toolName = input.tool_name || '';\n  if (toolName !== 'Bash') return;\n\n  const command = (input.tool_input || {}).command || '';\n  if (!/\\bgit\\s+(commit|merge|rebase|cherry-pick|pull)(\\s|$)/.test(command)) return;\n\n  // Only proceed if the command succeeded\n  const toolOutput = input.tool_output || {};\n  if (toolOutput.exit_code !== undefined && toolOutput.exit_code !== 0) return;\n\n  const cwd = input.cwd || process.cwd();\n  if (!path.isAbsolute(cwd)) return;\n  const gitNexusDir = findGitNexusDir(cwd);\n  if (!gitNexusDir) return;\n\n  // Compare HEAD against last indexed commit — skip if unchanged\n  let currentHead = '';\n  try {\n    const headResult = spawnSync('git', ['rev-parse', 'HEAD'], {\n      encoding: 'utf-8', timeout: 3000, cwd, stdio: ['pipe', 'pipe', 'pipe'],\n    });\n    currentHead = (headResult.stdout || '').trim();\n  } catch { return; }\n\n  if (!currentHead) return;\n\n  let lastCommit = '';\n  let hadEmbeddings = false;\n  try {\n    const meta = JSON.parse(fs.readFileSync(path.join(gitNexusDir, 'meta.json'), 'utf-8'));\n    lastCommit = meta.lastCommit || '';\n    hadEmbeddings = (meta.stats && meta.stats.embeddings > 0);\n  } catch { /* no meta — treat as stale */ }\n\n  // If HEAD matches last indexed commit, no reindex needed\n  if (currentHead && currentHead === lastCommit) return;\n\n  const analyzeCmd = `npx gitnexus analyze${hadEmbeddings ? ' --embeddings' : ''}`;\n  sendHookResponse('PostToolUse',\n    `GitNexus index is stale (last indexed: ${lastCommit ? lastCommit.slice(0, 7) : 'never'}). 
` +\n    `Run \\`${analyzeCmd}\\` to update the knowledge graph.`\n  );\n}\n\n// Dispatch map for hook events\nconst handlers = {\n  PreToolUse: handlePreToolUse,\n  PostToolUse: handlePostToolUse,\n};\n\nfunction main() {\n  try {\n    const input = readInput();\n    const handler = handlers[input.hook_event_name || ''];\n    if (handler) handler(input);\n  } catch (err) {\n    if (process.env.GITNEXUS_DEBUG) {\n      console.error('GitNexus hook error:', (err.message || '').slice(0, 200));\n    }\n  }\n}\n\nmain();\n"
  },
  {
    "path": "gitnexus/hooks/claude/pre-tool-use.sh",
    "content": "#!/bin/bash\n# GitNexus PreToolUse hook for Claude Code\n# Intercepts Grep/Glob/Bash searches and augments with graph context.\n# Receives JSON on stdin with { tool_name, tool_input, cwd, ... }\n# Returns JSON with additionalContext for graph-enriched results.\n\nINPUT=$(cat)\n\nTOOL_NAME=$(echo \"$INPUT\" | jq -r '.tool_name // empty' 2>/dev/null)\nCWD=$(echo \"$INPUT\" | jq -r '.cwd // empty' 2>/dev/null)\n\n# Extract search pattern based on tool type\nPATTERN=\"\"\n\ncase \"$TOOL_NAME\" in\n  Grep)\n    PATTERN=$(echo \"$INPUT\" | jq -r '.tool_input.pattern // empty' 2>/dev/null)\n    ;;\n  Glob)\n    # Glob patterns are file paths, not search terms — extract meaningful part\n    RAW=$(echo \"$INPUT\" | jq -r '.tool_input.pattern // empty' 2>/dev/null)\n    # Strip glob syntax to get the meaningful name (e.g., \"**/*.ts\" → skip, \"auth*.ts\" → \"auth\")\n    PATTERN=$(echo \"$RAW\" | sed -n 's/.*[*\\/]\\([a-zA-Z][a-zA-Z0-9_-]*\\).*/\\1/p')\n    ;;\n  Bash)\n    CMD=$(echo \"$INPUT\" | jq -r '.tool_input.command // empty' 2>/dev/null)\n    # Only augment grep/rg commands\n    if echo \"$CMD\" | grep -qE '\\brg\\b|\\bgrep\\b'; then\n      # Extract pattern from rg/grep\n      if echo \"$CMD\" | grep -qE '\\brg\\b'; then\n        PATTERN=$(echo \"$CMD\" | sed -n \"s/.*\\brg\\s\\+\\(--[^ ]*\\s\\+\\)*['\\\"]\\\\?\\([^'\\\";\\| >]*\\\\).*/\\2/p\")\n      elif echo \"$CMD\" | grep -qE '\\bgrep\\b'; then\n        PATTERN=$(echo \"$CMD\" | sed -n \"s/.*\\bgrep\\s\\+\\(-[^ ]*\\s\\+\\)*['\\\"]\\\\?\\([^'\\\";\\| >]*\\\\).*/\\2/p\")\n      fi\n    fi\n    ;;\n  *)\n    # Not a search tool — skip\n    exit 0\n    ;;\nesac\n\n# Skip if pattern too short or empty\nif [ -z \"$PATTERN\" ] || [ ${#PATTERN} -lt 3 ]; then\n  exit 0\nfi\n\n# Check if we're in a GitNexus-indexed repo\ndir=\"${CWD:-$PWD}\"\nfound=false\nfor i in 1 2 3 4 5; do\n  if [ -d \"$dir/.gitnexus\" ]; then\n    found=true\n    break\n  fi\n  parent=\"$(dirname \"$dir\")\"\n  [ \"$parent\" = 
\"$dir\" ] && break\n  dir=\"$parent\"\ndone\n\nif [ \"$found\" = false ]; then\n  exit 0\nfi\n\n# Run gitnexus augment — must be fast (<500ms target)\n# augment writes to stderr (KuzuDB captures stdout at OS level), so capture stderr and discard stdout\nRESULT=$(cd \"$CWD\" && npx -y gitnexus augment \"$PATTERN\" 2>&1 1>/dev/null)\n\nif [ -n \"$RESULT\" ]; then\n  ESCAPED=$(echo \"$RESULT\" | jq -Rs .)\n  jq -n --argjson ctx \"$ESCAPED\" '{\n    hookSpecificOutput: {\n      hookEventName: \"PreToolUse\",\n      additionalContext: $ctx\n    }\n  }'\nelse\n  exit 0\nfi\n"
  },
  {
    "path": "gitnexus/hooks/claude/session-start.sh",
    "content": "#!/bin/bash\n# GitNexus SessionStart hook for Claude Code\n# Fires on session startup. Stdout is injected into Claude's context.\n# Checks if the current directory has a GitNexus index.\n\ndir=\"$PWD\"\nfound=false\nfor i in 1 2 3 4 5; do\n  if [ -d \"$dir/.gitnexus\" ]; then\n    found=true\n    break\n  fi\n  parent=\"$(dirname \"$dir\")\"\n  [ \"$parent\" = \"$dir\" ] && break\n  dir=\"$parent\"\ndone\n\nif [ \"$found\" = false ]; then\n  exit 0\nfi\n\n# Inject GitNexus context — this stdout goes directly into Claude's context\ncat << 'EOF'\n## GitNexus Code Intelligence\n\nThis codebase is indexed by GitNexus, providing a knowledge graph with execution flows, relationships, and semantic search.\n\n**Available MCP Tools:**\n- `query` — Process-grouped code intelligence (execution flows related to a concept)\n- `context` — 360-degree symbol view (categorized refs, process participation)\n- `impact` — Blast radius analysis (what breaks if you change a symbol)\n- `detect_changes` — Git-diff impact analysis (what do your changes affect)\n- `rename` — Multi-file coordinated rename with confidence tags\n- `cypher` — Raw graph queries\n- `list_repos` — Discover indexed repos\n\n**Quick Start:** READ `gitnexus://repo/{name}/context` for codebase overview, then use `query` to find execution flows.\n\n**Resources:** `gitnexus://repo/{name}/context` (overview), `/processes` (execution flows), `/schema` (for Cypher)\nEOF\n\nexit 0\n"
  },
  {
    "path": "gitnexus/package.json",
    "content": "{\n  \"name\": \"gitnexus\",\n  \"version\": \"1.4.7\",\n  \"description\": \"Graph-powered code intelligence for AI agents. Index any codebase, query via MCP or CLI.\",\n  \"author\": \"Abhigyan Patwari\",\n  \"license\": \"PolyForm-Noncommercial-1.0.0\",\n  \"homepage\": \"https://github.com/abhigyanpatwari/GitNexus#readme\",\n  \"repository\": {\n    \"type\": \"git\",\n    \"url\": \"git+https://github.com/abhigyanpatwari/GitNexus.git\",\n    \"directory\": \"gitnexus\"\n  },\n  \"bugs\": {\n    \"url\": \"https://github.com/abhigyanpatwari/GitNexus/issues\"\n  },\n  \"keywords\": [\n    \"mcp\",\n    \"model-context-protocol\",\n    \"code-intelligence\",\n    \"knowledge-graph\",\n    \"cursor\",\n    \"claude\",\n    \"ai-agent\",\n    \"gitnexus\",\n    \"static-analysis\",\n    \"codebase-indexing\"\n  ],\n  \"type\": \"module\",\n  \"bin\": {\n    \"gitnexus\": \"dist/cli/index.js\"\n  },\n  \"files\": [\n    \"dist\",\n    \"hooks\",\n    \"scripts\",\n    \"skills\",\n    \"vendor\"\n  ],\n  \"scripts\": {\n    \"build\": \"tsc\",\n    \"dev\": \"tsx watch src/cli/index.ts\",\n    \"test\": \"vitest run\",\n    \"test:unit\": \"vitest run test/unit\",\n    \"test:integration\": \"vitest run test/integration\",\n    \"test:watch\": \"vitest\",\n    \"test:coverage\": \"vitest run --coverage\",\n    \"prepare\": \"npm run build\",\n    \"postinstall\": \"node scripts/patch-tree-sitter-swift.cjs\",\n    \"prepack\": \"npm run build && chmod +x dist/cli/index.js\"\n  },\n  \"dependencies\": {\n    \"@huggingface/transformers\": \"^3.0.0\",\n    \"@modelcontextprotocol/sdk\": \"^1.0.0\",\n    \"cli-progress\": \"^3.12.0\",\n    \"commander\": \"^12.0.0\",\n    \"cors\": \"^2.8.5\",\n    \"express\": \"^4.19.2\",\n    \"glob\": \"^11.0.0\",\n    \"graphology\": \"^0.25.4\",\n    \"graphology-indices\": \"^0.17.0\",\n    \"graphology-utils\": \"^2.3.0\",\n    \"@ladybugdb/core\": \"^0.15.2\",\n    \"ignore\": \"^7.0.5\",\n    \"lru-cache\": 
\"^11.0.0\",\n    \"mnemonist\": \"^0.39.0\",\n    \"pandemonium\": \"^2.4.0\",\n    \"tree-sitter\": \"^0.21.0\",\n    \"tree-sitter-c\": \"^0.21.0\",\n    \"tree-sitter-c-sharp\": \"^0.21.0\",\n    \"tree-sitter-cpp\": \"^0.22.0\",\n    \"tree-sitter-go\": \"^0.21.0\",\n    \"tree-sitter-java\": \"^0.21.0\",\n    \"tree-sitter-javascript\": \"^0.21.0\",\n    \"tree-sitter-php\": \"^0.23.12\",\n    \"tree-sitter-python\": \"^0.21.0\",\n    \"tree-sitter-ruby\": \"^0.23.1\",\n    \"tree-sitter-rust\": \"^0.21.0\",\n    \"tree-sitter-typescript\": \"^0.21.0\",\n    \"uuid\": \"^13.0.0\"\n  },\n  \"optionalDependencies\": {\n    \"tree-sitter-kotlin\": \"^0.3.8\",\n    \"tree-sitter-swift\": \"^0.6.0\"\n  },\n  \"devDependencies\": {\n    \"@types/cli-progress\": \"^3.11.6\",\n    \"@types/cors\": \"^2.8.17\",\n    \"@types/express\": \"^4.17.21\",\n    \"@types/node\": \"^20.0.0\",\n    \"@types/uuid\": \"^10.0.0\",\n    \"@vitest/coverage-v8\": \"^4.0.18\",\n    \"tsx\": \"^4.0.0\",\n    \"typescript\": \"^5.4.5\",\n    \"vitest\": \"^4.0.18\"\n  },\n  \"engines\": {\n    \"node\": \">=18.0.0\"\n  }\n}\n"
  },
  {
    "path": "gitnexus/scripts/patch-tree-sitter-swift.cjs",
    "content": "#!/usr/bin/env node\n/**\n * WORKAROUND: tree-sitter-swift@0.6.0 binding.gyp build failure\n *\n * Background:\n *   tree-sitter-swift@0.6.0's binding.gyp contains an \"actions\" array that\n *   invokes `tree-sitter generate` to regenerate parser.c from grammar.js.\n *   This is intended for grammar developers, but the published npm package\n *   already ships pre-generated parser files (parser.c, scanner.c), so the\n *   actions are unnecessary for consumers. Since consumers don't have\n *   tree-sitter-cli installed, the actions always fail during `npm install`.\n *\n * Why we can't just upgrade:\n *   tree-sitter-swift@0.7.1 fixes this (removes postinstall, ships prebuilds),\n *   but it requires tree-sitter@^0.22.1. The upstream project pins tree-sitter\n *   to ^0.21.0 and all other grammar packages depend on that version.\n *   Upgrading tree-sitter would be a separate breaking change.\n *\n * How this workaround works:\n *   1. tree-sitter-swift's own postinstall fails (npm warns but continues)\n *   2. This script runs as gitnexus's postinstall\n *   3. It removes the \"actions\" array from binding.gyp\n *   4. 
It rebuilds the native binding with the cleaned binding.gyp\n *\n * TODO: Remove this script when tree-sitter is upgraded to ^0.22.x,\n *       which allows using tree-sitter-swift@0.7.1+ directly.\n */\nconst fs = require('fs');\nconst path = require('path');\nconst { execSync } = require('child_process');\n\nconst swiftDir = path.join(__dirname, '..', 'node_modules', 'tree-sitter-swift');\nconst bindingPath = path.join(swiftDir, 'binding.gyp');\n\ntry {\n  if (!fs.existsSync(bindingPath)) {\n    process.exit(0);\n  }\n\n  const content = fs.readFileSync(bindingPath, 'utf8');\n  let needsRebuild = false;\n\n  if (content.includes('\"actions\"')) {\n    // Strip Python-style comments (#) before JSON parsing\n    const cleaned = content.replace(/#[^\\n]*/g, '');\n    const gyp = JSON.parse(cleaned);\n\n    if (gyp.targets && gyp.targets[0] && gyp.targets[0].actions) {\n      delete gyp.targets[0].actions;\n      fs.writeFileSync(bindingPath, JSON.stringify(gyp, null, 2) + '\\n');\n      console.log('[tree-sitter-swift] Patched binding.gyp (removed actions array)');\n      needsRebuild = true;\n    }\n  }\n\n  // Check if native binding exists\n  const bindingNode = path.join(swiftDir, 'build', 'Release', 'tree_sitter_swift_binding.node');\n  if (!fs.existsSync(bindingNode)) {\n    needsRebuild = true;\n  }\n\n  if (needsRebuild) {\n    console.log('[tree-sitter-swift] Rebuilding native binding...');\n    execSync('npx node-gyp rebuild', {\n      cwd: swiftDir,\n      stdio: 'pipe',\n      timeout: 120000,\n    });\n    console.log('[tree-sitter-swift] Native binding built successfully');\n  }\n} catch (err) {\n  console.warn('[tree-sitter-swift] Could not build native binding:', err.message);\n  console.warn('[tree-sitter-swift] You may need to manually run: cd node_modules/tree-sitter-swift && npx node-gyp rebuild');\n}\n"
  },
  {
    "path": "gitnexus/skills/gitnexus-cli.md",
    "content": "---\nname: gitnexus-cli\ndescription: \"Use when the user needs to run GitNexus CLI commands like analyze/index a repo, check status, clean the index, generate a wiki, or list indexed repos. Examples: \\\"Index this repo\\\", \\\"Reanalyze the codebase\\\", \\\"Generate a wiki\\\"\"\n---\n\n# GitNexus CLI Commands\n\nAll commands work via `npx` — no global install required.\n\n## Commands\n\n### analyze — Build or refresh the index\n\n```bash\nnpx gitnexus analyze\n```\n\nRun from the project root. This parses all source files, builds the knowledge graph, writes it to `.gitnexus/`, and generates CLAUDE.md / AGENTS.md context files.\n\n| Flag           | Effect                                                           |\n| -------------- | ---------------------------------------------------------------- |\n| `--force`      | Force full re-index even if up to date                           |\n| `--embeddings` | Enable embedding generation for semantic search (off by default) |\n\n**When to run:** First time in a project, after major code changes, or when `gitnexus://repo/{name}/context` reports the index is stale. In Claude Code, a PostToolUse hook runs `analyze` automatically after `git commit` and `git merge`, preserving embeddings if previously generated.\n\n### status — Check index freshness\n\n```bash\nnpx gitnexus status\n```\n\nShows whether the current repo has a GitNexus index, when it was last updated, and symbol/relationship counts. Use this to check if re-indexing is needed.\n\n### clean — Delete the index\n\n```bash\nnpx gitnexus clean\n```\n\nDeletes the `.gitnexus/` directory and unregisters the repo from the global registry. 
Use before re-indexing if the index is corrupt or after removing GitNexus from a project.\n\n| Flag      | Effect                                            |\n| --------- | ------------------------------------------------- |\n| `--force` | Skip confirmation prompt                          |\n| `--all`   | Clean all indexed repos, not just the current one |\n\n### wiki — Generate documentation from the graph\n\n```bash\nnpx gitnexus wiki\n```\n\nGenerates repository documentation from the knowledge graph using an LLM. Requires an API key (saved to `~/.gitnexus/config.json` on first use).\n\n| Flag                | Effect                                    |\n| ------------------- | ----------------------------------------- |\n| `--force`           | Force full regeneration                   |\n| `--model <model>`   | LLM model (default: minimax/minimax-m2.5) |\n| `--base-url <url>`  | LLM API base URL                          |\n| `--api-key <key>`   | LLM API key                               |\n| `--concurrency <n>` | Parallel LLM calls (default: 3)           |\n| `--gist`            | Publish wiki as a public GitHub Gist      |\n\n### list — Show all indexed repos\n\n```bash\nnpx gitnexus list\n```\n\nLists all repositories registered in `~/.gitnexus/registry.json`. The MCP `list_repos` tool provides the same information.\n\n## After Indexing\n\n1. **Read `gitnexus://repo/{name}/context`** to verify the index loaded\n2. Use the other GitNexus skills (`exploring`, `debugging`, `impact-analysis`, `refactoring`) for your task\n\n## Troubleshooting\n\n- **\"Not inside a git repository\"**: Run from a directory inside a git repo\n- **Index is stale after re-analyzing**: Restart Claude Code to reload the MCP server\n- **Embeddings slow**: Omit `--embeddings` (it's off by default) or set `OPENAI_API_KEY` for faster API-based embedding\n"
  },
  {
    "path": "gitnexus/skills/gitnexus-debugging.md",
    "content": "---\nname: gitnexus-debugging\ndescription: \"Use when the user is debugging a bug, tracing an error, or asking why something fails. Examples: \\\"Why is X failing?\\\", \\\"Where does this error come from?\\\", \\\"Trace this bug\\\"\"\n---\n\n# Debugging with GitNexus\n\n## When to Use\n\n- \"Why is this function failing?\"\n- \"Trace where this error comes from\"\n- \"Who calls this method?\"\n- \"This endpoint returns 500\"\n- Investigating bugs, errors, or unexpected behavior\n\n## Workflow\n\n```\n1. gitnexus_query({query: \"<error or symptom>\"})            → Find related execution flows\n2. gitnexus_context({name: \"<suspect>\"})                    → See callers/callees/processes\n3. READ gitnexus://repo/{name}/process/{name}                → Trace execution flow\n4. gitnexus_cypher({query: \"MATCH path...\"})                 → Custom traces if needed\n```\n\n> If \"Index is stale\" → run `npx gitnexus analyze` in terminal.\n\n## Checklist\n\n```\n- [ ] Understand the symptom (error message, unexpected behavior)\n- [ ] gitnexus_query for error text or related code\n- [ ] Identify the suspect function from returned processes\n- [ ] gitnexus_context to see callers and callees\n- [ ] Trace execution flow via process resource if applicable\n- [ ] gitnexus_cypher for custom call chain traces if needed\n- [ ] Read source files to confirm root cause\n```\n\n## Debugging Patterns\n\n| Symptom              | GitNexus Approach                                          |\n| -------------------- | ---------------------------------------------------------- |\n| Error message        | `gitnexus_query` for error text → `context` on throw sites |\n| Wrong return value   | `context` on the function → trace callees for data flow    |\n| Intermittent failure | `context` → look for external calls, async deps            |\n| Performance issue    | `context` → find symbols with many callers (hot paths)     |\n| Recent regression    | `detect_changes` to see what 
your changes affect           |\n\n## Tools\n\n**gitnexus_query** — find code related to error:\n\n```\ngitnexus_query({query: \"payment validation error\"})\n→ Processes: CheckoutFlow, ErrorHandling\n→ Symbols: validatePayment, handlePaymentError, PaymentException\n```\n\n**gitnexus_context** — full context for a suspect:\n\n```\ngitnexus_context({name: \"validatePayment\"})\n→ Incoming calls: processCheckout, webhookHandler\n→ Outgoing calls: verifyCard, fetchRates (external API!)\n→ Processes: CheckoutFlow (step 3/7)\n```\n\n**gitnexus_cypher** — custom call chain traces:\n\n```cypher\nMATCH path = (a)-[:CodeRelation*1..2 {type: 'CALLS'}]->(b:Function {name: \"validatePayment\"})\nRETURN [n IN nodes(path) | n.name] AS chain\n```\n\n## Example: \"Payment endpoint returns 500 intermittently\"\n\n```\n1. gitnexus_query({query: \"payment error handling\"})\n   → Processes: CheckoutFlow, ErrorHandling\n   → Symbols: validatePayment, handlePaymentError\n\n2. gitnexus_context({name: \"validatePayment\"})\n   → Outgoing calls: verifyCard, fetchRates (external API!)\n\n3. READ gitnexus://repo/my-app/process/CheckoutFlow\n   → Step 3: validatePayment → calls fetchRates (external)\n\n4. Root cause: fetchRates calls external API without proper timeout\n```\n"
  },
  {
    "path": "gitnexus/skills/gitnexus-exploring.md",
    "content": "---\nname: gitnexus-exploring\ndescription: \"Use when the user asks how code works, wants to understand architecture, trace execution flows, or explore unfamiliar parts of the codebase. Examples: \\\"How does X work?\\\", \\\"What calls this function?\\\", \\\"Show me the auth flow\\\"\"\n---\n\n# Exploring Codebases with GitNexus\n\n## When to Use\n\n- \"How does authentication work?\"\n- \"What's the project structure?\"\n- \"Show me the main components\"\n- \"Where is the database logic?\"\n- Understanding code you haven't seen before\n\n## Workflow\n\n```\n1. READ gitnexus://repos                          → Discover indexed repos\n2. READ gitnexus://repo/{name}/context             → Codebase overview, check staleness\n3. gitnexus_query({query: \"<what you want to understand>\"})  → Find related execution flows\n4. gitnexus_context({name: \"<symbol>\"})            → Deep dive on specific symbol\n5. READ gitnexus://repo/{name}/process/{name}      → Trace full execution flow\n```\n\n> If step 2 says \"Index is stale\" → run `npx gitnexus analyze` in terminal.\n\n## Checklist\n\n```\n- [ ] READ gitnexus://repo/{name}/context\n- [ ] gitnexus_query for the concept you want to understand\n- [ ] Review returned processes (execution flows)\n- [ ] gitnexus_context on key symbols for callers/callees\n- [ ] READ process resource for full execution traces\n- [ ] Read source files for implementation details\n```\n\n## Resources\n\n| Resource                                | What you get                                            |\n| --------------------------------------- | ------------------------------------------------------- |\n| `gitnexus://repo/{name}/context`        | Stats, staleness warning (~150 tokens)                  |\n| `gitnexus://repo/{name}/clusters`       | All functional areas with cohesion scores (~300 tokens) |\n| `gitnexus://repo/{name}/cluster/{name}` | Area members with file paths (~500 tokens)              |\n| 
`gitnexus://repo/{name}/process/{name}` | Step-by-step execution trace (~200 tokens)              |\n\n## Tools\n\n**gitnexus_query** — find execution flows related to a concept:\n\n```\ngitnexus_query({query: \"payment processing\"})\n→ Processes: CheckoutFlow, RefundFlow, WebhookHandler\n→ Symbols grouped by flow with file locations\n```\n\n**gitnexus_context** — 360-degree view of a symbol:\n\n```\ngitnexus_context({name: \"validateUser\"})\n→ Incoming calls: loginHandler, apiMiddleware\n→ Outgoing calls: checkToken, getUserById\n→ Processes: LoginFlow (step 2/5), TokenRefresh (step 1/3)\n```\n\n## Example: \"How does payment processing work?\"\n\n```\n1. READ gitnexus://repo/my-app/context       → 918 symbols, 45 processes\n2. gitnexus_query({query: \"payment processing\"})\n   → CheckoutFlow: processPayment → validateCard → chargeStripe\n   → RefundFlow: initiateRefund → calculateRefund → processRefund\n3. gitnexus_context({name: \"processPayment\"})\n   → Incoming: checkoutHandler, webhookHandler\n   → Outgoing: validateCard, chargeStripe, saveTransaction\n4. Read src/payments/processor.ts for implementation details\n```\n"
  },
  {
    "path": "gitnexus/skills/gitnexus-guide.md",
    "content": "---\nname: gitnexus-guide\ndescription: \"Use when the user asks about GitNexus itself — available tools, how to query the knowledge graph, MCP resources, graph schema, or workflow reference. Examples: \\\"What GitNexus tools are available?\\\", \\\"How do I use GitNexus?\\\"\"\n---\n\n# GitNexus Guide\n\nQuick reference for all GitNexus MCP tools, resources, and the knowledge graph schema.\n\n## Always Start Here\n\nFor any task involving code understanding, debugging, impact analysis, or refactoring:\n\n1. **Read `gitnexus://repo/{name}/context`** — codebase overview + check index freshness\n2. **Match your task to a skill below** and **read that skill file**\n3. **Follow the skill's workflow and checklist**\n\n> If step 1 warns the index is stale, run `npx gitnexus analyze` in the terminal first.\n\n## Skills\n\n| Task                                         | Skill to read       |\n| -------------------------------------------- | ------------------- |\n| Understand architecture / \"How does X work?\" | `gitnexus-exploring`         |\n| Blast radius / \"What breaks if I change X?\"  | `gitnexus-impact-analysis`   |\n| Trace bugs / \"Why is X failing?\"             | `gitnexus-debugging`         |\n| Rename / extract / split / refactor          | `gitnexus-refactoring`       |\n| Tools, resources, schema reference           | `gitnexus-guide` (this file) |\n| Index, status, clean, wiki CLI commands      | `gitnexus-cli`               |\n\n## Tools Reference\n\n| Tool             | What it gives you                                                        |\n| ---------------- | ------------------------------------------------------------------------ |\n| `query`          | Process-grouped code intelligence — execution flows related to a concept |\n| `context`        | 360-degree symbol view — categorized refs, processes it participates in  |\n| `impact`         | Symbol blast radius — what breaks at depth 1/2/3 with confidence         |\n| 
`detect_changes` | Git-diff impact — what your current changes affect                       |\n| `rename`         | Multi-file coordinated rename with confidence-tagged edits               |\n| `cypher`         | Raw graph queries (read `gitnexus://repo/{name}/schema` first)           |\n| `list_repos`     | Discover indexed repos                                                   |\n\n## Resources Reference\n\nLightweight reads (~100-500 tokens) for navigation:\n\n| Resource                                       | Content                                   |\n| ---------------------------------------------- | ----------------------------------------- |\n| `gitnexus://repo/{name}/context`               | Stats, staleness check                    |\n| `gitnexus://repo/{name}/clusters`              | All functional areas with cohesion scores |\n| `gitnexus://repo/{name}/cluster/{clusterName}` | Area members                              |\n| `gitnexus://repo/{name}/processes`             | All execution flows                       |\n| `gitnexus://repo/{name}/process/{processName}` | Step-by-step trace                        |\n| `gitnexus://repo/{name}/schema`                | Graph schema for Cypher                   |\n\n## Graph Schema\n\n**Nodes:** File, Function, Class, Interface, Method, Community, Process\n**Edges (via CodeRelation.type):** CALLS, IMPORTS, EXTENDS, IMPLEMENTS, DEFINES, MEMBER_OF, STEP_IN_PROCESS\n\n```cypher\nMATCH (caller)-[:CodeRelation {type: 'CALLS'}]->(f:Function {name: \"myFunc\"})\nRETURN caller.name, caller.filePath\n```\n"
  },
  {
    "path": "gitnexus/skills/gitnexus-impact-analysis.md",
    "content": "---\nname: gitnexus-impact-analysis\ndescription: \"Use when the user wants to know what will break if they change something, or needs safety analysis before editing code. Examples: \\\"Is it safe to change X?\\\", \\\"What depends on this?\\\", \\\"What will break?\\\"\"\n---\n\n# Impact Analysis with GitNexus\n\n## When to Use\n\n- \"Is it safe to change this function?\"\n- \"What will break if I modify X?\"\n- \"Show me the blast radius\"\n- \"Who uses this code?\"\n- Before making non-trivial code changes\n- Before committing — to understand what your changes affect\n\n## Workflow\n\n```\n1. gitnexus_impact({target: \"X\", direction: \"upstream\"})  → What depends on this\n2. READ gitnexus://repo/{name}/processes                   → Check affected execution flows\n3. gitnexus_detect_changes()                               → Map current git changes to affected flows\n4. Assess risk and report to user\n```\n\n> If \"Index is stale\" → run `npx gitnexus analyze` in terminal.\n\n## Checklist\n\n```\n- [ ] gitnexus_impact({target, direction: \"upstream\"}) to find dependents\n- [ ] Review d=1 items first (these WILL BREAK)\n- [ ] Check high-confidence (>0.8) dependencies\n- [ ] READ processes to check affected execution flows\n- [ ] gitnexus_detect_changes() for pre-commit check\n- [ ] Assess risk level and report to user\n```\n\n## Understanding Output\n\n| Depth | Risk Level       | Meaning                  |\n| ----- | ---------------- | ------------------------ |\n| d=1   | **WILL BREAK**   | Direct callers/importers |\n| d=2   | LIKELY AFFECTED  | Indirect dependencies    |\n| d=3   | MAY NEED TESTING | Transitive effects       |\n\n## Risk Assessment\n\n| Affected                       | Risk     |\n| ------------------------------ | -------- |\n| <5 symbols, few processes      | LOW      |\n| 5-15 symbols, 2-5 processes    | MEDIUM   |\n| >15 symbols or many processes  | HIGH     |\n| Critical path (auth, payments) | CRITICAL |\n\n## 
Tools\n\n**gitnexus_impact** — the primary tool for symbol blast radius:\n\n```\ngitnexus_impact({\n  target: \"validateUser\",\n  direction: \"upstream\",\n  minConfidence: 0.8,\n  maxDepth: 3\n})\n\n→ d=1 (WILL BREAK):\n  - loginHandler (src/auth/login.ts:42) [CALLS, 100%]\n  - apiMiddleware (src/api/middleware.ts:15) [CALLS, 100%]\n\n→ d=2 (LIKELY AFFECTED):\n  - authRouter (src/routes/auth.ts:22) [CALLS, 95%]\n```\n\n**gitnexus_detect_changes** — git-diff based impact analysis:\n\n```\ngitnexus_detect_changes({scope: \"staged\"})\n\n→ Changed: 5 symbols in 3 files\n→ Affected: LoginFlow, TokenRefresh, APIMiddlewarePipeline\n→ Risk: MEDIUM\n```\n\n## Example: \"What breaks if I change validateUser?\"\n\n```\n1. gitnexus_impact({target: \"validateUser\", direction: \"upstream\"})\n   → d=1: loginHandler, apiMiddleware (WILL BREAK)\n   → d=2: authRouter, sessionManager (LIKELY AFFECTED)\n\n2. READ gitnexus://repo/my-app/processes\n   → LoginFlow and TokenRefresh touch validateUser\n\n3. Risk: 2 direct callers, 2 processes = MEDIUM\n```\n"
  },
  {
    "path": "gitnexus/skills/gitnexus-pr-review.md",
    "content": "---\nname: gitnexus-pr-review\ndescription: \"Use when the user wants to review a pull request, understand what a PR changes, assess risk of merging, or check for missing test coverage. Examples: \\\"Review this PR\\\", \\\"What does PR #42 change?\\\", \\\"Is this PR safe to merge?\\\"\"\n---\n\n# PR Review with GitNexus\n\n## When to Use\n\n- \"Review this PR\"\n- \"What does PR #42 change?\"\n- \"Is this safe to merge?\"\n- \"What's the blast radius of this PR?\"\n- \"Are there missing tests for this PR?\"\n- Reviewing someone else's code changes before merge\n\n## Workflow\n\n```\n1. gh pr diff <number>                                    → Get the raw diff\n2. gitnexus_detect_changes({scope: \"compare\", base_ref: \"main\"})  → Map diff to affected flows\n3. For each changed symbol:\n   gitnexus_impact({target: \"<symbol>\", direction: \"upstream\"})    → Blast radius per change\n4. gitnexus_context({name: \"<key symbol>\"})               → Understand callers/callees\n5. READ gitnexus://repo/{name}/processes                   → Check affected execution flows\n6. Summarize findings with risk assessment\n```\n\n> If \"Index is stale\" → run `npx gitnexus analyze` in terminal before reviewing.\n\n## Checklist\n\n```\n- [ ] Fetch PR diff (gh pr diff or git diff base...head)\n- [ ] gitnexus_detect_changes to map changes to affected execution flows\n- [ ] gitnexus_impact on each non-trivial changed symbol\n- [ ] Review d=1 items (WILL BREAK) — are callers updated?\n- [ ] gitnexus_context on key changed symbols to understand full picture\n- [ ] Check if affected processes have test coverage\n- [ ] Assess overall risk level\n- [ ] Write review summary with findings\n```\n\n## Review Dimensions\n\n| Dimension | How GitNexus Helps |\n| --- | --- |\n| **Correctness** | `context` shows callers — are they all compatible with the change? |\n| **Blast radius** | `impact` shows d=1/d=2/d=3 dependents — anything missed? 
|\n| **Completeness** | `detect_changes` shows all affected flows — are they all handled? |\n| **Test coverage** | `impact({includeTests: true})` shows which tests touch changed code |\n| **Breaking changes** | d=1 upstream items that aren't updated in the PR = potential breakage |\n\n## Risk Assessment\n\n| Signal | Risk |\n| --- | --- |\n| Changes touch <3 symbols, 0-1 processes | LOW |\n| Changes touch 3-10 symbols, 2-5 processes | MEDIUM |\n| Changes touch >10 symbols or many processes | HIGH |\n| Changes touch auth, payments, or data integrity code | CRITICAL |\n| d=1 callers exist outside the PR diff | Potential breakage — flag it |\n\n## Tools\n\n**gitnexus_detect_changes** — map PR diff to affected execution flows:\n\n```\ngitnexus_detect_changes({scope: \"compare\", base_ref: \"main\"})\n\n→ Changed: 8 symbols in 4 files\n→ Affected processes: CheckoutFlow, RefundFlow, WebhookHandler\n→ Risk: MEDIUM\n```\n\n**gitnexus_impact** — blast radius per changed symbol:\n\n```\ngitnexus_impact({target: \"validatePayment\", direction: \"upstream\"})\n\n→ d=1 (WILL BREAK):\n  - processCheckout (src/checkout.ts:42) [CALLS, 100%]\n  - webhookHandler (src/webhooks.ts:15) [CALLS, 100%]\n\n→ d=2 (LIKELY AFFECTED):\n  - checkoutRouter (src/routes/checkout.ts:22) [CALLS, 95%]\n```\n\n**gitnexus_impact with tests** — check test coverage:\n\n```\ngitnexus_impact({target: \"validatePayment\", direction: \"upstream\", includeTests: true})\n\n→ Tests that cover this symbol:\n  - validatePayment.test.ts [direct]\n  - checkout.integration.test.ts [via processCheckout]\n```\n\n**gitnexus_context** — understand a changed symbol's role:\n\n```\ngitnexus_context({name: \"validatePayment\"})\n\n→ Incoming calls: processCheckout, webhookHandler\n→ Outgoing calls: verifyCard, fetchRates\n→ Processes: CheckoutFlow (step 3/7), RefundFlow (step 1/5)\n```\n\n## Example: \"Review PR #42\"\n\n```\n1. 
gh pr diff 42 > /tmp/pr42.diff\n   → 4 files changed: payments.ts, checkout.ts, types.ts, utils.ts\n\n2. gitnexus_detect_changes({scope: \"compare\", base_ref: \"main\"})\n   → Changed symbols: validatePayment, PaymentInput, formatAmount\n   → Affected processes: CheckoutFlow, RefundFlow\n   → Risk: MEDIUM\n\n3. gitnexus_impact({target: \"validatePayment\", direction: \"upstream\"})\n   → d=1: processCheckout, webhookHandler (WILL BREAK)\n   → webhookHandler is NOT in the PR diff — potential breakage!\n\n4. gitnexus_impact({target: \"PaymentInput\", direction: \"upstream\"})\n   → d=1: validatePayment (in PR), createPayment (NOT in PR)\n   → createPayment uses the old PaymentInput shape — breaking change!\n\n5. gitnexus_context({name: \"formatAmount\"})\n   → Called by 12 functions — but change is backwards-compatible (added optional param)\n\n6. Review summary:\n   - MEDIUM risk — 3 changed symbols affect 2 execution flows\n   - BUG: webhookHandler calls validatePayment but isn't updated for new signature\n   - BUG: createPayment depends on PaymentInput type which changed\n   - OK: formatAmount change is backwards-compatible\n   - Tests: checkout.test.ts covers processCheckout path, but no webhook test\n```\n\n## Review Output Format\n\nStructure your review as:\n\n```markdown\n## PR Review: <title>\n\n**Risk: LOW / MEDIUM / HIGH / CRITICAL**\n\n### Changes Summary\n- <N> symbols changed across <M> files\n- <P> execution flows affected\n\n### Findings\n1. **[severity]** Description of finding\n   - Evidence from GitNexus tools\n   - Affected callers/flows\n\n### Missing Coverage\n- Callers not updated in PR: ...\n- Untested flows: ...\n\n### Recommendation\nAPPROVE / REQUEST CHANGES / NEEDS DISCUSSION\n```\n"
  },
  {
    "path": "gitnexus/skills/gitnexus-refactoring.md",
    "content": "---\nname: gitnexus-refactoring\ndescription: \"Use when the user wants to rename, extract, split, move, or restructure code safely. Examples: \\\"Rename this function\\\", \\\"Extract this into a module\\\", \\\"Refactor this class\\\", \\\"Move this to a separate file\\\"\"\n---\n\n# Refactoring with GitNexus\n\n## When to Use\n\n- \"Rename this function safely\"\n- \"Extract this into a module\"\n- \"Split this service\"\n- \"Move this to a new file\"\n- Any task involving renaming, extracting, splitting, or restructuring code\n\n## Workflow\n\n```\n1. gitnexus_impact({target: \"X\", direction: \"upstream\"})  → Map all dependents\n2. gitnexus_query({query: \"X\"})                            → Find execution flows involving X\n3. gitnexus_context({name: \"X\"})                           → See all incoming/outgoing refs\n4. Plan update order: interfaces → implementations → callers → tests\n```\n\n> If \"Index is stale\" → run `npx gitnexus analyze` in terminal.\n\n## Checklists\n\n### Rename Symbol\n\n```\n- [ ] gitnexus_rename({symbol_name: \"oldName\", new_name: \"newName\", dry_run: true}) — preview all edits\n- [ ] Review graph edits (high confidence) and ast_search edits (review carefully)\n- [ ] If satisfied: gitnexus_rename({..., dry_run: false}) — apply edits\n- [ ] gitnexus_detect_changes() — verify only expected files changed\n- [ ] Run tests for affected processes\n```\n\n### Extract Module\n\n```\n- [ ] gitnexus_context({name: target}) — see all incoming/outgoing refs\n- [ ] gitnexus_impact({target, direction: \"upstream\"}) — find all external callers\n- [ ] Define new module interface\n- [ ] Extract code, update imports\n- [ ] gitnexus_detect_changes() — verify affected scope\n- [ ] Run tests for affected processes\n```\n\n### Split Function/Service\n\n```\n- [ ] gitnexus_context({name: target}) — understand all callees\n- [ ] Group callees by responsibility\n- [ ] gitnexus_impact({target, direction: \"upstream\"}) — map callers to 
update\n- [ ] Create new functions/services\n- [ ] Update callers\n- [ ] gitnexus_detect_changes() — verify affected scope\n- [ ] Run tests for affected processes\n```\n\n## Tools\n\n**gitnexus_rename** — automated multi-file rename:\n\n```\ngitnexus_rename({symbol_name: \"validateUser\", new_name: \"authenticateUser\", dry_run: true})\n→ 12 edits across 8 files\n→ 10 graph edits (high confidence), 2 ast_search edits (review)\n→ Changes: [{file_path, edits: [{line, old_text, new_text, confidence}]}]\n```\n\n**gitnexus_impact** — map all dependents first:\n\n```\ngitnexus_impact({target: \"validateUser\", direction: \"upstream\"})\n→ d=1: loginHandler, apiMiddleware, testUtils\n→ Affected Processes: LoginFlow, TokenRefresh\n```\n\n**gitnexus_detect_changes** — verify your changes after refactoring:\n\n```\ngitnexus_detect_changes({scope: \"all\"})\n→ Changed: 8 files, 12 symbols\n→ Affected processes: LoginFlow, TokenRefresh\n→ Risk: MEDIUM\n```\n\n**gitnexus_cypher** — custom reference queries:\n\n```cypher\nMATCH (caller)-[:CodeRelation {type: 'CALLS'}]->(f:Function {name: \"validateUser\"})\nRETURN caller.name, caller.filePath ORDER BY caller.filePath\n```\n\n## Risk Rules\n\n| Risk Factor         | Mitigation                                |\n| ------------------- | ----------------------------------------- |\n| Many callers (>5)   | Use gitnexus_rename for automated updates |\n| Cross-area refs     | Use detect_changes after to verify scope  |\n| String/dynamic refs | gitnexus_query to find them               |\n| External/public API | Version and deprecate properly            |\n\n## Example: Rename `validateUser` to `authenticateUser`\n\n```\n1. gitnexus_rename({symbol_name: \"validateUser\", new_name: \"authenticateUser\", dry_run: true})\n   → 12 edits: 10 graph (safe), 2 ast_search (review)\n   → Files: validator.ts, login.ts, middleware.ts, config.json...\n\n2. Review ast_search edits (config.json: dynamic reference!)\n\n3. 
gitnexus_rename({symbol_name: \"validateUser\", new_name: \"authenticateUser\", dry_run: false})\n   → Applied 12 edits across 8 files\n\n4. gitnexus_detect_changes({scope: \"all\"})\n   → Affected: LoginFlow, TokenRefresh\n   → Risk: MEDIUM — run tests for these flows\n```\n"
  },
  {
    "path": "gitnexus/src/cli/ai-context.ts",
    "content": "/**\n * AI Context Generator\n * \n * Creates AGENTS.md and CLAUDE.md with full inline GitNexus context.\n * AGENTS.md is the standard read by Cursor, Windsurf, OpenCode, Cline, etc.\n * CLAUDE.md is for Claude Code which only reads that file.\n */\n\nimport fs from 'fs/promises';\nimport path from 'path';\nimport { fileURLToPath } from 'url';\nimport { type GeneratedSkillInfo } from './skill-gen.js';\n\n// ESM equivalent of __dirname\nconst __filename = fileURLToPath(import.meta.url);\nconst __dirname = path.dirname(__filename);\n\ninterface RepoStats {\n  files?: number;\n  nodes?: number;\n  edges?: number;\n  communities?: number;\n  clusters?: number;       // Aggregated cluster count (what tools show)\n  processes?: number;\n}\n\nconst GITNEXUS_START_MARKER = '<!-- gitnexus:start -->';\nconst GITNEXUS_END_MARKER = '<!-- gitnexus:end -->';\n\n/**\n * Generate the full GitNexus context content.\n *\n * Design principles (learned from real agent behavior and industry research):\n * - Inline critical workflows — skills are skipped 56% of the time (Vercel eval data)\n * - Use RFC 2119 language (MUST, NEVER, ALWAYS) — models follow imperative rules\n * - Three-tier boundaries (Always/When/Never) — proven to change model behavior\n * - Keep under 120 lines — adherence degrades past 150 lines\n * - Exact tool commands with parameters — vague directives get ignored\n * - Self-review checklist — forces model to verify its own work\n */\nfunction generateGitNexusContent(projectName: string, stats: RepoStats, generatedSkills?: GeneratedSkillInfo[]): string {\n  const generatedRows = (generatedSkills && generatedSkills.length > 0)\n    ? 
generatedSkills.map(s =>\n        `| Work in the ${s.label} area (${s.symbolCount} symbols) | \\`.claude/skills/generated/${s.name}/SKILL.md\\` |`\n      ).join('\\n')\n    : '';\n\n  const skillsTable = `| Task | Read this skill file |\n|------|---------------------|\n| Understand architecture / \"How does X work?\" | \\`.claude/skills/gitnexus/gitnexus-exploring/SKILL.md\\` |\n| Blast radius / \"What breaks if I change X?\" | \\`.claude/skills/gitnexus/gitnexus-impact-analysis/SKILL.md\\` |\n| Trace bugs / \"Why is X failing?\" | \\`.claude/skills/gitnexus/gitnexus-debugging/SKILL.md\\` |\n| Rename / extract / split / refactor | \\`.claude/skills/gitnexus/gitnexus-refactoring/SKILL.md\\` |\n| Tools, resources, schema reference | \\`.claude/skills/gitnexus/gitnexus-guide/SKILL.md\\` |\n| Index, status, clean, wiki CLI commands | \\`.claude/skills/gitnexus/gitnexus-cli/SKILL.md\\` |${generatedRows ? '\\n' + generatedRows : ''}`;\n\n  return `${GITNEXUS_START_MARKER}\n# GitNexus — Code Intelligence\n\nThis project is indexed by GitNexus as **${projectName}** (${stats.nodes || 0} symbols, ${stats.edges || 0} relationships, ${stats.processes || 0} execution flows). 
Use the GitNexus MCP tools to understand code, assess impact, and navigate safely.\n\n> If any GitNexus tool warns the index is stale, run \\`npx gitnexus analyze\\` in terminal first.\n\n## Always Do\n\n- **MUST run impact analysis before editing any symbol.** Before modifying a function, class, or method, run \\`gitnexus_impact({target: \"symbolName\", direction: \"upstream\"})\\` and report the blast radius (direct callers, affected processes, risk level) to the user.\n- **MUST run \\`gitnexus_detect_changes()\\` before committing** to verify your changes only affect expected symbols and execution flows.\n- **MUST warn the user** if impact analysis returns HIGH or CRITICAL risk before proceeding with edits.\n- When exploring unfamiliar code, use \\`gitnexus_query({query: \"concept\"})\\` to find execution flows instead of grepping. It returns process-grouped results ranked by relevance.\n- When you need full context on a specific symbol — callers, callees, which execution flows it participates in — use \\`gitnexus_context({name: \"symbolName\"})\\`.\n\n## When Debugging\n\n1. \\`gitnexus_query({query: \"<error or symptom>\"})\\` — find execution flows related to the issue\n2. \\`gitnexus_context({name: \"<suspect function>\"})\\` — see all callers, callees, and process participation\n3. \\`READ gitnexus://repo/${projectName}/process/{processName}\\` — trace the full execution flow step by step\n4. For regressions: \\`gitnexus_detect_changes({scope: \"compare\", base_ref: \"main\"})\\` — see what your branch changed\n\n## When Refactoring\n\n- **Renaming**: MUST use \\`gitnexus_rename({symbol_name: \"old\", new_name: \"new\", dry_run: true})\\` first. Review the preview — graph edits are safe, text_search edits need manual review. 
Then run with \\`dry_run: false\\`.\n- **Extracting/Splitting**: MUST run \\`gitnexus_context({name: \"target\"})\\` to see all incoming/outgoing refs, then \\`gitnexus_impact({target: \"target\", direction: \"upstream\"})\\` to find all external callers before moving code.\n- After any refactor: run \\`gitnexus_detect_changes({scope: \"all\"})\\` to verify only expected files changed.\n\n## Never Do\n\n- NEVER edit a function, class, or method without first running \\`gitnexus_impact\\` on it.\n- NEVER ignore HIGH or CRITICAL risk warnings from impact analysis.\n- NEVER rename symbols with find-and-replace — use \\`gitnexus_rename\\` which understands the call graph.\n- NEVER commit changes without running \\`gitnexus_detect_changes()\\` to check affected scope.\n\n## Tools Quick Reference\n\n| Tool | When to use | Command |\n|------|-------------|---------|\n| \\`query\\` | Find code by concept | \\`gitnexus_query({query: \"auth validation\"})\\` |\n| \\`context\\` | 360-degree view of one symbol | \\`gitnexus_context({name: \"validateUser\"})\\` |\n| \\`impact\\` | Blast radius before editing | \\`gitnexus_impact({target: \"X\", direction: \"upstream\"})\\` |\n| \\`detect_changes\\` | Pre-commit scope check | \\`gitnexus_detect_changes({scope: \"staged\"})\\` |\n| \\`rename\\` | Safe multi-file rename | \\`gitnexus_rename({symbol_name: \"old\", new_name: \"new\", dry_run: true})\\` |\n| \\`cypher\\` | Custom graph queries | \\`gitnexus_cypher({query: \"MATCH ...\"})\\` |\n\n## Impact Risk Levels\n\n| Depth | Meaning | Action |\n|-------|---------|--------|\n| d=1 | WILL BREAK — direct callers/importers | MUST update these |\n| d=2 | LIKELY AFFECTED — indirect deps | Should test |\n| d=3 | MAY NEED TESTING — transitive | Test if critical path |\n\n## Resources\n\n| Resource | Use for |\n|----------|---------|\n| \\`gitnexus://repo/${projectName}/context\\` | Codebase overview, check index freshness |\n| \\`gitnexus://repo/${projectName}/clusters\\` | All 
functional areas |\n| \\`gitnexus://repo/${projectName}/processes\\` | All execution flows |\n| \\`gitnexus://repo/${projectName}/process/{name}\\` | Step-by-step execution trace |\n\n## Self-Check Before Finishing\n\nBefore completing any code modification task, verify:\n1. \\`gitnexus_impact\\` was run for all modified symbols\n2. No HIGH/CRITICAL risk warnings were ignored\n3. \\`gitnexus_detect_changes()\\` confirms changes match expected scope\n4. All d=1 (WILL BREAK) dependents were updated\n\n## Keeping the Index Fresh\n\nAfter committing code changes, the GitNexus index becomes stale. Re-run analyze to update it:\n\n\\`\\`\\`bash\nnpx gitnexus analyze\n\\`\\`\\`\n\nIf the index previously included embeddings, preserve them by adding \\`--embeddings\\`:\n\n\\`\\`\\`bash\nnpx gitnexus analyze --embeddings\n\\`\\`\\`\n\nTo check whether embeddings exist, inspect \\`.gitnexus/meta.json\\` — the \\`stats.embeddings\\` field shows the count (0 means no embeddings). **Running analyze without \\`--embeddings\\` will delete any previously generated embeddings.**\n\n> Claude Code users: A PostToolUse hook handles this automatically after \\`git commit\\` and \\`git merge\\`.\n\n## CLI\n\n${skillsTable}\n\n${GITNEXUS_END_MARKER}`;\n}\n\n\n/**\n * Check if a file exists\n */\nasync function fileExists(filePath: string): Promise<boolean> {\n  try {\n    await fs.access(filePath);\n    return true;\n  } catch {\n    return false;\n  }\n}\n\n/**\n * Create or update GitNexus section in a file\n * - If file doesn't exist: create with GitNexus content\n * - If file exists without GitNexus section: append\n * - If file exists with GitNexus section: replace that section\n */\nasync function upsertGitNexusSection(\n  filePath: string,\n  content: string\n): Promise<'created' | 'updated' | 'appended'> {\n  const exists = await fileExists(filePath);\n\n  if (!exists) {\n    await fs.writeFile(filePath, content, 'utf-8');\n    return 'created';\n  }\n\n  const existingContent = 
await fs.readFile(filePath, 'utf-8');\n\n  // Check if GitNexus section already exists\n  const startIdx = existingContent.indexOf(GITNEXUS_START_MARKER);\n  const endIdx = existingContent.indexOf(GITNEXUS_END_MARKER);\n\n  if (startIdx !== -1 && endIdx !== -1 && endIdx > startIdx) {\n    // Replace existing section\n    const before = existingContent.substring(0, startIdx);\n    const after = existingContent.substring(endIdx + GITNEXUS_END_MARKER.length);\n    const newContent = before + content + after;\n    await fs.writeFile(filePath, newContent.trim() + '\\n', 'utf-8');\n    return 'updated';\n  }\n\n  // Append new section\n  const newContent = existingContent.trim() + '\\n\\n' + content + '\\n';\n  await fs.writeFile(filePath, newContent, 'utf-8');\n  return 'appended';\n}\n\n/**\n * Install GitNexus skills to .claude/skills/gitnexus/\n * Works natively with Claude Code, Cursor, and GitHub Copilot\n */\nasync function installSkills(repoPath: string): Promise<string[]> {\n  const skillsDir = path.join(repoPath, '.claude', 'skills', 'gitnexus');\n  const installedSkills: string[] = [];\n\n  // Skill definitions bundled with the package\n  const skills = [\n    {\n      name: 'gitnexus-exploring',\n      description: 'Use when the user asks how code works, wants to understand architecture, trace execution flows, or explore unfamiliar parts of the codebase. Examples: \"How does X work?\", \"What calls this function?\", \"Show me the auth flow\"',\n    },\n    {\n      name: 'gitnexus-debugging',\n      description: 'Use when the user is debugging a bug, tracing an error, or asking why something fails. Examples: \"Why is X failing?\", \"Where does this error come from?\", \"Trace this bug\"',\n    },\n    {\n      name: 'gitnexus-impact-analysis',\n      description: 'Use when the user wants to know what will break if they change something, or needs safety analysis before editing code. 
Examples: \"Is it safe to change X?\", \"What depends on this?\", \"What will break?\"',\n    },\n    {\n      name: 'gitnexus-refactoring',\n      description: 'Use when the user wants to rename, extract, split, move, or restructure code safely. Examples: \"Rename this function\", \"Extract this into a module\", \"Refactor this class\", \"Move this to a separate file\"',\n    },\n    {\n      name: 'gitnexus-guide',\n      description: 'Use when the user asks about GitNexus itself — available tools, how to query the knowledge graph, MCP resources, graph schema, or workflow reference. Examples: \"What GitNexus tools are available?\", \"How do I use GitNexus?\"',\n    },\n    {\n      name: 'gitnexus-cli',\n      description: 'Use when the user needs to run GitNexus CLI commands like analyze/index a repo, check status, clean the index, generate a wiki, or list indexed repos. Examples: \"Index this repo\", \"Reanalyze the codebase\", \"Generate a wiki\"',\n    },\n  ];\n\n  for (const skill of skills) {\n    const skillDir = path.join(skillsDir, skill.name);\n    const skillPath = path.join(skillDir, 'SKILL.md');\n\n    try {\n      // Create skill directory\n      await fs.mkdir(skillDir, { recursive: true });\n\n      // Try to read from package skills directory\n      const packageSkillPath = path.join(__dirname, '..', '..', 'skills', `${skill.name}.md`);\n      let skillContent: string;\n\n      try {\n        skillContent = await fs.readFile(packageSkillPath, 'utf-8');\n      } catch {\n        // Fallback: generate minimal skill content. Quote the description with\n        // JSON.stringify so the YAML frontmatter stays valid — descriptions\n        // contain colons and double quotes that break unquoted YAML scalars.\n        skillContent = `---\nname: ${skill.name}\ndescription: ${JSON.stringify(skill.description)}\n---\n\n# ${skill.name.charAt(0).toUpperCase() + skill.name.slice(1)}\n\n${skill.description}\n\nUse GitNexus tools to accomplish this task.\n`;\n      }\n\n      await fs.writeFile(skillPath, skillContent, 'utf-8');\n      installedSkills.push(skill.name);\n    } catch (err) {\n      // Skip on error, don't fail the whole process\n      
console.warn(`Warning: Could not install skill ${skill.name}:`, err);\n    }\n  }\n\n  return installedSkills;\n}\n\n/**\n * Generate AI context files after indexing\n */\nexport async function generateAIContextFiles(\n  repoPath: string,\n  _storagePath: string,\n  projectName: string,\n  stats: RepoStats,\n  generatedSkills?: GeneratedSkillInfo[]\n): Promise<{ files: string[] }> {\n  const content = generateGitNexusContent(projectName, stats, generatedSkills);\n  const createdFiles: string[] = [];\n\n  // Create AGENTS.md (standard for Cursor, Windsurf, OpenCode, Cline, etc.)\n  const agentsPath = path.join(repoPath, 'AGENTS.md');\n  const agentsResult = await upsertGitNexusSection(agentsPath, content);\n  createdFiles.push(`AGENTS.md (${agentsResult})`);\n\n  // Create CLAUDE.md (for Claude Code)\n  const claudePath = path.join(repoPath, 'CLAUDE.md');\n  const claudeResult = await upsertGitNexusSection(claudePath, content);\n  createdFiles.push(`CLAUDE.md (${claudeResult})`);\n\n  // Install skills to .claude/skills/gitnexus/\n  const installedSkills = await installSkills(repoPath);\n  if (installedSkills.length > 0) {\n    createdFiles.push(`.claude/skills/gitnexus/ (${installedSkills.length} skills)`);\n  }\n\n  return { files: createdFiles };\n}\n\n"
  },
  {
    "path": "gitnexus/src/cli/analyze.ts",
    "content": "/**\n * Analyze Command\n *\n * Indexes a repository and stores the knowledge graph in .gitnexus/\n */\n\nimport path from 'path';\nimport { execFileSync } from 'child_process';\nimport v8 from 'v8';\nimport cliProgress from 'cli-progress';\nimport { runPipelineFromRepo } from '../core/ingestion/pipeline.js';\nimport { initLbug, loadGraphToLbug, getLbugStats, executeQuery, executeWithReusedStatement, closeLbug, createFTSIndex, loadCachedEmbeddings } from '../core/lbug/lbug-adapter.js';\n// Embedding imports are lazy (dynamic import) so onnxruntime-node is never\n// loaded when embeddings are not requested. This avoids crashes on Node\n// versions whose ABI is not yet supported by the native binary (#89).\n// disposeEmbedder intentionally not called — ONNX Runtime segfaults on cleanup (see #38)\nimport { getStoragePaths, saveMeta, loadMeta, addToGitignore, registerRepo, getGlobalRegistryPath, cleanupOldKuzuFiles } from '../storage/repo-manager.js';\nimport { getCurrentCommit, isGitRepo, getGitRoot } from '../storage/git.js';\nimport { generateAIContextFiles } from './ai-context.js';\nimport { generateSkillFiles, type GeneratedSkillInfo } from './skill-gen.js';\nimport fs from 'fs/promises';\n\n\nconst HEAP_MB = 8192;\nconst HEAP_FLAG = `--max-old-space-size=${HEAP_MB}`;\n\n/** Re-exec the process with an 8GB heap if we're currently below that. */\nfunction ensureHeap(): boolean {\n  const nodeOpts = process.env.NODE_OPTIONS || '';\n  if (nodeOpts.includes('--max-old-space-size')) return false;\n\n  const v8Heap = v8.getHeapStatistics().heap_size_limit;\n  if (v8Heap >= HEAP_MB * 1024 * 1024 * 0.9) return false;\n\n  try {\n    execFileSync(process.execPath, [HEAP_FLAG, ...process.argv.slice(1)], {\n      stdio: 'inherit',\n      env: { ...process.env, NODE_OPTIONS: `${nodeOpts} ${HEAP_FLAG}`.trim() },\n    });\n  } catch (e: any) {\n    process.exitCode = e.status ?? 
1;\n  }\n  return true;\n}\n\nexport interface AnalyzeOptions {\n  force?: boolean;\n  embeddings?: boolean;\n  skills?: boolean;\n  verbose?: boolean;\n}\n\n/** Threshold: auto-skip embeddings for repos with more nodes than this */\nconst EMBEDDING_NODE_LIMIT = 50_000;\n\nconst PHASE_LABELS: Record<string, string> = {\n  extracting: 'Scanning files',\n  structure: 'Building structure',\n  parsing: 'Parsing code',\n  imports: 'Resolving imports',\n  calls: 'Tracing calls',\n  heritage: 'Extracting inheritance',\n  communities: 'Detecting communities',\n  processes: 'Detecting processes',\n  complete: 'Pipeline complete',\n  lbug: 'Loading into LadybugDB',\n  fts: 'Creating search indexes',\n  embeddings: 'Generating embeddings',\n  done: 'Done',\n};\n\nexport const analyzeCommand = async (\n  inputPath?: string,\n  options?: AnalyzeOptions\n) => {\n  if (ensureHeap()) return;\n\n  if (options?.verbose) {\n    process.env.GITNEXUS_VERBOSE = '1';\n  }\n\n  console.log('\\n  GitNexus Analyzer\\n');\n\n  let repoPath: string;\n  if (inputPath) {\n    repoPath = path.resolve(inputPath);\n  } else {\n    const gitRoot = getGitRoot(process.cwd());\n    if (!gitRoot) {\n      console.log('  Not inside a git repository\\n');\n      process.exitCode = 1;\n      return;\n    }\n    repoPath = gitRoot;\n  }\n\n  if (!isGitRepo(repoPath)) {\n    console.log('  Not a git repository\\n');\n    process.exitCode = 1;\n    return;\n  }\n\n  const { storagePath, lbugPath } = getStoragePaths(repoPath);\n\n  // Clean up stale KuzuDB files from before the LadybugDB migration.\n  // If kuzu existed but lbug doesn't, we're doing a migration re-index — say so.\n  const kuzuResult = await cleanupOldKuzuFiles(storagePath);\n  if (kuzuResult.found && kuzuResult.needsReindex) {\n    console.log('  Migrating from KuzuDB to LadybugDB — rebuilding index...\\n');\n  }\n\n  const currentCommit = getCurrentCommit(repoPath);\n  const existingMeta = await loadMeta(storagePath);\n\n  if (existingMeta 
&& !options?.force && !options?.skills && existingMeta.lastCommit === currentCommit) {\n    console.log('  Already up to date\\n');\n    return;\n  }\n\n  if (process.env.GITNEXUS_NO_GITIGNORE) {\n    console.log('  GITNEXUS_NO_GITIGNORE is set — skipping .gitignore (still reading .gitnexusignore)\\n');\n  }\n\n  // Single progress bar for entire pipeline\n  const bar = new cliProgress.SingleBar({\n    format: '  {bar} {percentage}% | {phase}',\n    barCompleteChar: '\\u2588',\n    barIncompleteChar: '\\u2591',\n    hideCursor: true,\n    barGlue: '',\n    autopadding: true,\n    clearOnComplete: false,\n    stopOnComplete: false,\n  }, cliProgress.Presets.shades_grey);\n\n  bar.start(100, 0, { phase: 'Initializing...' });\n\n  // Graceful SIGINT handling — clean up resources and exit\n  let aborted = false;\n  const sigintHandler = () => {\n    if (aborted) process.exit(1); // Second Ctrl-C: force exit\n    aborted = true;\n    bar.stop();\n    console.log('\\n  Interrupted — cleaning up...');\n    closeLbug().catch(() => {}).finally(() => process.exit(130));\n  };\n  process.on('SIGINT', sigintHandler);\n\n  // Route all console output through bar.log() so the bar doesn't stamp itself\n  // multiple times when other code writes to stdout/stderr mid-render.\n  const origLog = console.log.bind(console);\n  const origWarn = console.warn.bind(console);\n  const origError = console.error.bind(console);\n  const barLog = (...args: any[]) => {\n    // Clear the bar line, print the message, then let the next bar.update redraw\n    process.stdout.write('\\x1b[2K\\r');\n    origLog(args.map(a => (typeof a === 'string' ? 
a : String(a))).join(' '));\n  };\n  console.log = barLog;\n  console.warn = barLog;\n  console.error = barLog;\n\n  // Track elapsed time per phase — both updateBar and the interval use the\n  // same format so they don't flicker against each other.\n  let lastPhaseLabel = 'Initializing...';\n  let phaseStart = Date.now();\n\n  /** Update bar with phase label + elapsed seconds (shown after 3s). */\n  const updateBar = (value: number, phaseLabel: string) => {\n    if (phaseLabel !== lastPhaseLabel) { lastPhaseLabel = phaseLabel; phaseStart = Date.now(); }\n    const elapsed = Math.round((Date.now() - phaseStart) / 1000);\n    const display = elapsed >= 3 ? `${phaseLabel} (${elapsed}s)` : phaseLabel;\n    bar.update(value, { phase: display });\n  };\n\n  // Tick elapsed seconds for phases with infrequent progress callbacks\n  // (e.g. CSV streaming, FTS indexing). Uses the same display format as\n  // updateBar so there's no flickering.\n  const elapsedTimer = setInterval(() => {\n    const elapsed = Math.round((Date.now() - phaseStart) / 1000);\n    if (elapsed >= 3) {\n      bar.update({ phase: `${lastPhaseLabel} (${elapsed}s)` });\n    }\n  }, 1000);\n\n  const t0Global = Date.now();\n\n  // ── Cache embeddings from existing index before rebuild ────────────\n  let cachedEmbeddingNodeIds = new Set<string>();\n  let cachedEmbeddings: Array<{ nodeId: string; embedding: number[] }> = [];\n\n  if (options?.embeddings && existingMeta && !options?.force) {\n    try {\n      updateBar(0, 'Caching embeddings...');\n      await initLbug(lbugPath);\n      const cached = await loadCachedEmbeddings();\n      cachedEmbeddingNodeIds = cached.embeddingNodeIds;\n      cachedEmbeddings = cached.embeddings;\n      await closeLbug();\n    } catch {\n      try { await closeLbug(); } catch {}\n    }\n  }\n\n  // ── Phase 1: Full Pipeline (0–60%) ─────────────────────────────────\n  const pipelineResult = await runPipelineFromRepo(repoPath, (progress) => {\n    const phaseLabel = 
PHASE_LABELS[progress.phase] || progress.phase;\n    const scaled = Math.round(progress.percent * 0.6);\n    updateBar(scaled, phaseLabel);\n  });\n\n  // ── Phase 2: LadybugDB (60–85%) ──────────────────────────────────────\n  updateBar(60, 'Loading into LadybugDB...');\n\n  await closeLbug();\n  const lbugFiles = [lbugPath, `${lbugPath}.wal`, `${lbugPath}.lock`];\n  for (const f of lbugFiles) {\n    try { await fs.rm(f, { recursive: true, force: true }); } catch {}\n  }\n\n  const t0Lbug = Date.now();\n  await initLbug(lbugPath);\n  let lbugMsgCount = 0;\n  const lbugResult = await loadGraphToLbug(pipelineResult.graph, pipelineResult.repoPath, storagePath, (msg) => {\n    lbugMsgCount++;\n    const progress = Math.min(84, 60 + Math.round((lbugMsgCount / (lbugMsgCount + 10)) * 24));\n    updateBar(progress, msg);\n  });\n  const lbugTime = ((Date.now() - t0Lbug) / 1000).toFixed(1);\n  const lbugWarnings = lbugResult.warnings;\n\n  // ── Phase 3: FTS (85–90%) ─────────────────────────────────────────\n  updateBar(85, 'Creating search indexes...');\n\n  const t0Fts = Date.now();\n  try {\n    await createFTSIndex('File', 'file_fts', ['name', 'content']);\n    await createFTSIndex('Function', 'function_fts', ['name', 'content']);\n    await createFTSIndex('Class', 'class_fts', ['name', 'content']);\n    await createFTSIndex('Method', 'method_fts', ['name', 'content']);\n    await createFTSIndex('Interface', 'interface_fts', ['name', 'content']);\n  } catch (e: any) {\n    // Non-fatal — FTS is best-effort\n  }\n  const ftsTime = ((Date.now() - t0Fts) / 1000).toFixed(1);\n\n  // ── Phase 3.5: Re-insert cached embeddings ────────────────────────\n  if (cachedEmbeddings.length > 0) {\n    updateBar(88, `Restoring ${cachedEmbeddings.length} cached embeddings...`);\n    const EMBED_BATCH = 200;\n    for (let i = 0; i < cachedEmbeddings.length; i += EMBED_BATCH) {\n      const batch = cachedEmbeddings.slice(i, i + EMBED_BATCH);\n      const paramsList = batch.map(e => ({ 
nodeId: e.nodeId, embedding: e.embedding }));\n      try {\n        await executeWithReusedStatement(\n          `CREATE (e:CodeEmbedding {nodeId: $nodeId, embedding: $embedding})`,\n          paramsList,\n        );\n      } catch { /* some may fail if node was removed, that's fine */ }\n    }\n  }\n\n  // ── Phase 4: Embeddings (90–98%) ──────────────────────────────────\n  const stats = await getLbugStats();\n  let embeddingTime = '0.0';\n  let embeddingSkipped = true;\n  let embeddingSkipReason = 'off (use --embeddings to enable)';\n\n  if (options?.embeddings) {\n    if (stats.nodes > EMBEDDING_NODE_LIMIT) {\n      embeddingSkipReason = `skipped (${stats.nodes.toLocaleString()} nodes > ${EMBEDDING_NODE_LIMIT.toLocaleString()} limit)`;\n    } else {\n      embeddingSkipped = false;\n    }\n  }\n\n  if (!embeddingSkipped) {\n    updateBar(90, 'Loading embedding model...');\n    const t0Emb = Date.now();\n    const { runEmbeddingPipeline } = await import('../core/embeddings/embedding-pipeline.js');\n    await runEmbeddingPipeline(\n      executeQuery,\n      executeWithReusedStatement,\n      (progress) => {\n        const scaled = 90 + Math.round((progress.percent / 100) * 8);\n        const label = progress.phase === 'loading-model' ? 'Loading embedding model...' : `Embedding ${progress.nodesProcessed || 0}/${progress.totalNodes || '?'}`;\n        updateBar(scaled, label);\n      },\n      {},\n      cachedEmbeddingNodeIds.size > 0 ? cachedEmbeddingNodeIds : undefined,\n    );\n    embeddingTime = ((Date.now() - t0Emb) / 1000).toFixed(1);\n  }\n\n  // ── Phase 5: Finalize (98–100%) ───────────────────────────────────\n  updateBar(98, 'Saving metadata...');\n\n  // Count embeddings in the index (cached + newly generated)\n  let embeddingCount = 0;\n  try {\n    const embResult = await executeQuery(`MATCH (e:CodeEmbedding) RETURN count(e) AS cnt`);\n    embeddingCount = embResult?.[0]?.cnt ?? 
0;\n  } catch { /* table may not exist if embeddings never ran */ }\n\n  const meta = {\n    repoPath,\n    lastCommit: currentCommit,\n    indexedAt: new Date().toISOString(),\n    stats: {\n      files: pipelineResult.totalFileCount,\n      nodes: stats.nodes,\n      edges: stats.edges,\n      communities: pipelineResult.communityResult?.stats.totalCommunities,\n      processes: pipelineResult.processResult?.stats.totalProcesses,\n      embeddings: embeddingCount,\n    },\n  };\n  await saveMeta(storagePath, meta);\n  await registerRepo(repoPath, meta);\n  await addToGitignore(repoPath);\n\n  const projectName = path.basename(repoPath);\n  let aggregatedClusterCount = 0;\n  if (pipelineResult.communityResult?.communities) {\n    const groups = new Map<string, number>();\n    for (const c of pipelineResult.communityResult.communities) {\n      const label = c.heuristicLabel || c.label || 'Unknown';\n      groups.set(label, (groups.get(label) || 0) + c.symbolCount);\n    }\n    aggregatedClusterCount = Array.from(groups.values()).filter(count => count >= 5).length;\n  }\n\n  let generatedSkills: GeneratedSkillInfo[] = [];\n  if (options?.skills && pipelineResult.communityResult) {\n    updateBar(99, 'Generating skill files...');\n    const skillResult = await generateSkillFiles(repoPath, projectName, pipelineResult);\n    generatedSkills = skillResult.skills;\n  }\n\n  const aiContext = await generateAIContextFiles(repoPath, storagePath, projectName, {\n    files: pipelineResult.totalFileCount,\n    nodes: stats.nodes,\n    edges: stats.edges,\n    communities: pipelineResult.communityResult?.stats.totalCommunities,\n    clusters: aggregatedClusterCount,\n    processes: pipelineResult.processResult?.stats.totalProcesses,\n  }, generatedSkills);\n\n  await closeLbug();\n  // Note: we intentionally do NOT call disposeEmbedder() here.\n  // ONNX Runtime's native cleanup segfaults on macOS and some Linux configs.\n  // Since the process exits immediately after, Node.js 
reclaims everything.\n\n  const totalTime = ((Date.now() - t0Global) / 1000).toFixed(1);\n\n  clearInterval(elapsedTimer);\n  process.removeListener('SIGINT', sigintHandler);\n\n  console.log = origLog;\n  console.warn = origWarn;\n  console.error = origError;\n\n  bar.update(100, { phase: 'Done' });\n  bar.stop();\n\n  // ── Summary ───────────────────────────────────────────────────────\n  const embeddingsCached = cachedEmbeddings.length > 0;\n  console.log(`\\n  Repository indexed successfully (${totalTime}s)${embeddingsCached ? ` [${cachedEmbeddings.length} embeddings cached]` : ''}\\n`);\n  console.log(`  ${stats.nodes.toLocaleString()} nodes | ${stats.edges.toLocaleString()} edges | ${pipelineResult.communityResult?.stats.totalCommunities || 0} clusters | ${pipelineResult.processResult?.stats.totalProcesses || 0} flows`);\n  console.log(`  LadybugDB ${lbugTime}s | FTS ${ftsTime}s | Embeddings ${embeddingSkipped ? embeddingSkipReason : embeddingTime + 's'}`);\n  console.log(`  ${repoPath}`);\n\n  if (aiContext.files.length > 0) {\n    console.log(`  Context: ${aiContext.files.join(', ')}`);\n  }\n\n  // Show a quiet summary if some edge types needed fallback insertion\n  if (lbugWarnings.length > 0) {\n    const totalFallback = lbugWarnings.reduce((sum, w) => {\n      const m = w.match(/\\((\\d+) edges\\)/);\n      return sum + (m ? parseInt(m[1]) : 0);\n    }, 0);\n    console.log(`  Note: ${totalFallback} edges across ${lbugWarnings.length} types inserted via fallback (schema will be updated in next release)`);\n  }\n\n  try {\n    await fs.access(getGlobalRegistryPath());\n  } catch {\n    console.log('\\n  Tip: Run `gitnexus setup` to configure MCP for your editor.');\n  }\n\n  console.log('');\n\n  // LadybugDB's native module holds open handles that prevent Node from exiting.\n  // ONNX Runtime also registers native atexit hooks that segfault on some\n  // platforms (#38, #40). Force-exit to ensure clean termination.\n  process.exit(0);\n};\n"
  },
  {
    "path": "gitnexus/src/cli/augment.ts",
    "content": "/**\n * Augment CLI Command\n * \n * Fast-path command for platform hooks.\n * Shells out from Claude Code PreToolUse / Cursor beforeShellExecution hooks.\n * \n * Usage: gitnexus augment <pattern>\n * Writes enriched text to stderr (stdout is unreliable here; see note below).\n * \n * Performance: Must cold-start fast (<500ms).\n * Skips unnecessary initialization (no web server, no full DB warmup).\n */\n\nimport { augment } from '../core/augmentation/engine.js';\n\nexport async function augmentCommand(pattern: string): Promise<void> {\n  if (!pattern || pattern.length < 3) {\n    process.exit(0);\n  }\n  \n  try {\n    const result = await augment(pattern, process.cwd());\n    \n    if (result) {\n      // IMPORTANT: Write to stderr, NOT stdout.\n      // LadybugDB's native module captures stdout fd at OS level during init,\n      // which makes stdout permanently broken in subprocess contexts.\n      // stderr is never captured, so it works reliably everywhere.\n      // The hook reads from the subprocess's stderr.\n      process.stderr.write(result + '\\n');\n    }\n  } catch {\n    // Graceful failure — never break the calling hook\n    process.exit(0);\n  }\n}\n"
  },
  {
    "path": "gitnexus/src/cli/clean.ts",
    "content": "/**\n * Clean Command\n * \n * Removes the .gitnexus index from the current repository.\n * Also unregisters it from the global registry.\n */\n\nimport fs from 'fs/promises';\nimport { findRepo, unregisterRepo, listRegisteredRepos } from '../storage/repo-manager.js';\n\nexport const cleanCommand = async (options?: { force?: boolean; all?: boolean }) => {\n  // --all flag: clean all indexed repos\n  if (options?.all) {\n    if (!options?.force) {\n      const entries = await listRegisteredRepos();\n      if (entries.length === 0) {\n        console.log('No indexed repositories found.');\n        return;\n      }\n      console.log(`This will delete GitNexus indexes for ${entries.length} repo(s):`);\n      for (const entry of entries) {\n        console.log(`  - ${entry.name} (${entry.path})`);\n      }\n      console.log('\\nRun with --force to confirm deletion.');\n      return;\n    }\n\n    const entries = await listRegisteredRepos();\n    for (const entry of entries) {\n      try {\n        await fs.rm(entry.storagePath, { recursive: true, force: true });\n        await unregisterRepo(entry.path);\n        console.log(`Deleted: ${entry.name} (${entry.storagePath})`);\n      } catch (err) {\n        console.error(`Failed to delete ${entry.name}:`, err);\n      }\n    }\n    return;\n  }\n\n  // Default: clean current repo\n  const cwd = process.cwd();\n  const repo = await findRepo(cwd);\n\n  if (!repo) {\n    console.log('No indexed repository found in this directory.');\n    return;\n  }\n\n  const repoName = repo.repoPath.split(/[/\\\\]/).pop() || repo.repoPath;\n\n  if (!options?.force) {\n    console.log(`This will delete the GitNexus index for: ${repoName}`);\n    console.log(`   Path: ${repo.storagePath}`);\n    console.log('\\nRun with --force to confirm deletion.');\n    return;\n  }\n\n  try {\n    await fs.rm(repo.storagePath, { recursive: true, force: true });\n    await unregisterRepo(repo.repoPath);\n    console.log(`Deleted: 
${repo.storagePath}`);\n  } catch (err) {\n    console.error('Failed to delete:', err);\n  }\n};\n"
  },
  {
    "path": "gitnexus/src/cli/eval-server.ts",
    "content": "/**\n * Eval Server — Lightweight HTTP server for SWE-bench evaluation\n * \n * Keeps LadybugDB warm in memory so tool calls from the agent are near-instant.\n * Designed to run inside Docker containers during SWE-bench evaluation.\n * \n * KEY DESIGN: Returns LLM-friendly text, not raw JSON.\n * Raw JSON wastes tokens and is hard for models to parse. The text formatter\n * converts structured results into compact, readable output that models\n * can immediately act on. Next-step hints guide the agent through a\n * productive tool-chaining workflow (query → context → impact → fix).\n * \n * Architecture:\n *   Agent bash cmd → curl localhost:PORT/tool/query → eval-server → LocalBackend → format → text\n * \n * Usage:\n *   gitnexus eval-server                    # default port 4848\n *   gitnexus eval-server --port 4848        # explicit port\n *   gitnexus eval-server --idle-timeout 300 # auto-shutdown after 300s idle\n * \n * API:\n *   POST /tool/:name   — Call a tool. Body is JSON arguments. Returns formatted text.\n *   GET  /health       — Health check. Returns {\"status\":\"ok\",\"repos\":[...]}\n *   POST /shutdown     — Graceful shutdown.\n */\n\nimport http from 'http';\nimport { writeSync } from 'node:fs';\nimport { LocalBackend } from '../mcp/local/local-backend.js';\n\nexport interface EvalServerOptions {\n  port?: string;\n  idleTimeout?: string;\n}\n\n// ─── Text Formatters ──────────────────────────────────────────────────\n// Convert structured JSON results into compact, LLM-friendly text.\n// Design: minimize tokens, maximize actionability.\n\nexport function formatQueryResult(result: any): string {\n  if (result.error) return `Error: ${result.error}`;\n\n  const lines: string[] = [];\n  const processes = result.processes || [];\n  const symbols = result.process_symbols || [];\n  const defs = result.definitions || [];\n\n  if (processes.length === 0 && defs.length === 0) {\n    return 'No matching execution flows found. 
Try a different search term or use grep.';\n  }\n\n  lines.push(`Found ${processes.length} execution flow(s):\\n`);\n\n  for (let i = 0; i < processes.length; i++) {\n    const p = processes[i];\n    lines.push(`${i + 1}. ${p.summary} (${p.step_count} steps, ${p.symbol_count} symbols)`);\n\n    // Show symbols belonging to this process\n    const procSymbols = symbols.filter((s: any) => s.process_id === p.id);\n    for (const s of procSymbols.slice(0, 6)) {\n      const loc = s.startLine ? `:${s.startLine}` : '';\n      lines.push(`   ${s.type} ${s.name} → ${s.filePath}${loc}`);\n    }\n    if (procSymbols.length > 6) {\n      lines.push(`   ... and ${procSymbols.length - 6} more`);\n    }\n    lines.push('');\n  }\n\n  if (defs.length > 0) {\n    lines.push(`Standalone definitions:`);\n    for (const d of defs.slice(0, 8)) {\n      lines.push(`  ${d.type || 'Symbol'} ${d.name} → ${d.filePath || '?'}`);\n    }\n    if (defs.length > 8) lines.push(`  ... and ${defs.length - 8} more`);\n  }\n\n  return lines.join('\\n').trim();\n}\n\nexport function formatContextResult(result: any): string {\n  if (result.error) return `Error: ${result.error}`;\n\n  if (result.status === 'ambiguous') {\n    const lines = [`Multiple symbols named '${result.candidates?.[0]?.name || '?'}'. Disambiguate with file path:\\n`];\n    for (const c of result.candidates || []) {\n      lines.push(`  ${c.kind} ${c.name} → ${c.filePath}:${c.line || '?'}  (uid: ${c.uid})`);\n    }\n    lines.push(`\\nRe-run: gitnexus-context \"${result.candidates?.[0]?.name}\" \"<file_path>\"`);\n    return lines.join('\\n');\n  }\n\n  const sym = result.symbol;\n  if (!sym) return 'Symbol not found.';\n\n  const lines: string[] = [];\n  const loc = sym.startLine ? 
`:${sym.startLine}-${sym.endLine}` : '';\n  lines.push(`${sym.kind} ${sym.name} → ${sym.filePath}${loc}`);\n  lines.push('');\n\n  // Incoming refs (who calls/imports/extends this)\n  const incoming = result.incoming || {};\n  const incomingCount = Object.values(incoming).reduce((sum: number, arr: any) => sum + arr.length, 0) as number;\n  if (incomingCount > 0) {\n    lines.push(`Called/imported by (${incomingCount}):`);\n    for (const [relType, refs] of Object.entries(incoming)) {\n      for (const ref of (refs as any[]).slice(0, 10)) {\n        lines.push(`  ← [${relType}] ${ref.kind} ${ref.name} → ${ref.filePath}`);\n      }\n    }\n    lines.push('');\n  }\n\n  // Outgoing refs (what this calls/imports)\n  const outgoing = result.outgoing || {};\n  const outgoingCount = Object.values(outgoing).reduce((sum: number, arr: any) => sum + arr.length, 0) as number;\n  if (outgoingCount > 0) {\n    lines.push(`Calls/imports (${outgoingCount}):`);\n    for (const [relType, refs] of Object.entries(outgoing)) {\n      for (const ref of (refs as any[]).slice(0, 10)) {\n        lines.push(`  → [${relType}] ${ref.kind} ${ref.name} → ${ref.filePath}`);\n      }\n    }\n    lines.push('');\n  }\n\n  // Processes\n  const procs = result.processes || [];\n  if (procs.length > 0) {\n    lines.push(`Participates in ${procs.length} execution flow(s):`);\n    for (const p of procs) {\n      lines.push(`  • ${p.name} (step ${p.step_index}/${p.step_count})`);\n    }\n  }\n\n  if (sym.content) {\n    lines.push('');\n    lines.push(`Source:`);\n    lines.push(sym.content);\n  }\n\n  return lines.join('\\n').trim();\n}\n\nexport function formatImpactResult(result: any): string {\n  if (result.error) {\n    const suggestion = result.suggestion ? 
`\\nSuggestion: ${result.suggestion}` : '';\n    return `Error: ${result.error}${suggestion}`;\n  }\n\n  const target = result.target;\n  const direction = result.direction;\n  const byDepth = result.byDepth || {};\n  const total = result.impactedCount || 0;\n\n  if (total === 0) {\n    return `${target?.name || '?'}: No ${direction} dependencies found. This symbol appears isolated.`;\n  }\n\n  const lines: string[] = [];\n  const dirLabel = direction === 'upstream' ? 'depends on this (will break if changed)' : 'this depends on';\n  lines.push(`Blast radius for ${target?.kind || ''} ${target?.name} (${direction}): ${total} symbol(s) ${dirLabel}`);\n  if (result.partial) {\n    lines.push('⚠️  Partial results — graph traversal was interrupted. Deeper impacts may exist.');\n  }\n  lines.push('');\n\n  const depthLabels: Record<number, string> = {\n    1: 'WILL BREAK (direct)',\n    2: 'LIKELY AFFECTED (indirect)',\n    3: 'MAY NEED TESTING (transitive)',\n  };\n\n  for (const depth of [1, 2, 3]) {\n    const items = byDepth[depth];\n    if (!items || items.length === 0) continue;\n\n    lines.push(`d=${depth}: ${depthLabels[depth] || ''} (${items.length})`);\n    for (const item of items.slice(0, 12)) {\n      const conf = item.confidence < 1 ? ` (conf: ${item.confidence})` : '';\n      lines.push(`  ${item.type} ${item.name} → ${item.filePath} [${item.relationType}]${conf}`);\n    }\n    if (items.length > 12) {\n      lines.push(`  ... 
and ${items.length - 12} more`);\n    }\n    lines.push('');\n  }\n\n  return lines.join('\\n').trim();\n}\n\nexport function formatCypherResult(result: any): string {\n  if (result.error) return `Error: ${result.error}`;\n\n  if (Array.isArray(result)) {\n    if (result.length === 0) return 'Query returned 0 rows.';\n    // Format as simple table\n    const keys = Object.keys(result[0]);\n    const lines: string[] = [`${result.length} row(s):\\n`];\n    for (const row of result.slice(0, 30)) {\n      const parts = keys.map(k => `${k}: ${row[k]}`);\n      lines.push(`  ${parts.join(' | ')}`);\n    }\n    if (result.length > 30) {\n      lines.push(`  ... ${result.length - 30} more rows`);\n    }\n    return lines.join('\\n');\n  }\n\n  return typeof result === 'string' ? result : JSON.stringify(result, null, 2);\n}\n\nexport function formatDetectChangesResult(result: any): string {\n  if (result.error) return `Error: ${result.error}`;\n\n  const summary = result.summary || {};\n  const lines: string[] = [];\n\n  if (summary.changed_count === 0) {\n    return 'No changes detected.';\n  }\n\n  lines.push(`Changes: ${summary.changed_files || 0} files, ${summary.changed_count || 0} symbols`);\n  lines.push(`Affected processes: ${summary.affected_count || 0}`);\n  lines.push(`Risk level: ${summary.risk_level || 'unknown'}\\n`);\n\n  const changed = result.changed_symbols || [];\n  if (changed.length > 0) {\n    lines.push(`Changed symbols:`);\n    for (const s of changed.slice(0, 15)) {\n      lines.push(`  ${s.type} ${s.name} → ${s.filePath}`);\n    }\n    if (changed.length > 15) lines.push(`  ... 
and ${changed.length - 15} more`);\n    lines.push('');\n  }\n\n  const affected = result.affected_processes || [];\n  if (affected.length > 0) {\n    lines.push(`Affected execution flows:`);\n    for (const p of affected.slice(0, 10)) {\n      const steps = (p.changed_steps || []).map((s: any) => s.symbol).join(', ');\n      lines.push(`  • ${p.name} (${p.step_count} steps) — changed: ${steps}`);\n    }\n  }\n\n  return lines.join('\\n').trim();\n}\n\nexport function formatListReposResult(result: any): string {\n  if (!Array.isArray(result) || result.length === 0) {\n    return 'No indexed repositories.';\n  }\n\n  const lines = ['Indexed repositories:\\n'];\n  for (const r of result) {\n    const stats = r.stats || {};\n    lines.push(`  ${r.name} — ${stats.nodes || '?'} symbols, ${stats.edges || '?'} relationships, ${stats.processes || '?'} flows`);\n    lines.push(`    Path: ${r.path}`);\n    lines.push(`    Indexed: ${r.indexedAt}`);\n  }\n  return lines.join('\\n');\n}\n\n/**\n * Format a tool result as compact, LLM-friendly text.\n */\nfunction formatToolResult(toolName: string, result: any): string {\n  switch (toolName) {\n    case 'query': return formatQueryResult(result);\n    case 'context': return formatContextResult(result);\n    case 'impact': return formatImpactResult(result);\n    case 'cypher': return formatCypherResult(result);\n    case 'detect_changes': return formatDetectChangesResult(result);\n    case 'list_repos': return formatListReposResult(result);\n    default: return typeof result === 'string' ? 
result : JSON.stringify(result, null, 2);\n  }\n}\n\n// ─── Next-Step Hints ──────────────────────────────────────────────────\n// Guide the agent to the logical next tool call.\n// Critical for tool chaining: query → context → impact → fix.\n\nfunction getNextStepHint(toolName: string): string {\n  switch (toolName) {\n    case 'query':\n      return '\\n---\\nNext: Pick a symbol above and run gitnexus-context \"<name>\" to see all its callers, callees, and execution flows.';\n\n    case 'context':\n      return '\\n---\\nNext: To check what breaks if you change this, run gitnexus-impact \"<name>\" upstream';\n\n    case 'impact':\n      return '\\n---\\nNext: Review d=1 items first (WILL BREAK). Read the source with cat to understand the code, then make your fix.';\n\n    case 'cypher':\n      return '\\n---\\nNext: To explore a result symbol in depth, run gitnexus-context \"<name>\"';\n\n    case 'detect_changes':\n      return '\\n---\\nNext: Run gitnexus-context \"<symbol>\" on high-risk changed symbols to check their callers.';\n\n    default:\n      return '';\n  }\n}\n\n// ─── Server ───────────────────────────────────────────────────────────\n\nexport async function evalServerCommand(options?: EvalServerOptions): Promise<void> {\n  const port = parseInt(options?.port || '4848');\n  const idleTimeoutSec = parseInt(options?.idleTimeout || '0');\n\n  const backend = new LocalBackend();\n  const ok = await backend.init();\n\n  if (!ok) {\n    console.error('GitNexus eval-server: No indexed repositories found. 
Run: gitnexus analyze');\n    process.exit(1);\n  }\n\n  const repos = await backend.listRepos();\n  console.error(`GitNexus eval-server: ${repos.length} repo(s) loaded: ${repos.map(r => r.name).join(', ')}`);\n\n  let idleTimer: ReturnType<typeof setTimeout> | null = null;\n\n  function resetIdleTimer() {\n    if (idleTimeoutSec <= 0) return;\n    if (idleTimer) clearTimeout(idleTimer);\n    idleTimer = setTimeout(async () => {\n      console.error('GitNexus eval-server: Idle timeout reached, shutting down');\n      await backend.disconnect();\n      process.exit(0);\n    }, idleTimeoutSec * 1000);\n  }\n\n  const server = http.createServer(async (req, res) => {\n    resetIdleTimer();\n\n    try {\n      // Health check\n      if (req.method === 'GET' && req.url === '/health') {\n        res.setHeader('Content-Type', 'application/json');\n        res.writeHead(200);\n        res.end(JSON.stringify({ status: 'ok', repos: repos.map(r => r.name) }));\n        return;\n      }\n\n      // Shutdown\n      if (req.method === 'POST' && req.url === '/shutdown') {\n        res.setHeader('Content-Type', 'application/json');\n        res.writeHead(200);\n        res.end(JSON.stringify({ status: 'shutting_down' }));\n        setTimeout(async () => {\n          await backend.disconnect();\n          server.close();\n          process.exit(0);\n        }, 100);\n        return;\n      }\n\n      // Tool calls: POST /tool/:name\n      const toolMatch = req.url?.match(/^\\/tool\\/(\\w+)$/);\n      if (req.method === 'POST' && toolMatch) {\n        const toolName = toolMatch[1];\n\n        const body = await readBody(req);\n        let args: Record<string, any> = {};\n        if (body.trim()) {\n          try {\n            args = JSON.parse(body);\n          } catch {\n            res.setHeader('Content-Type', 'text/plain');\n            res.writeHead(400);\n            res.end('Error: Invalid JSON body');\n            return;\n          }\n        }\n\n        // Call tool, 
format result as text, append next-step hint\n        const result = await backend.callTool(toolName, args);\n        const formatted = formatToolResult(toolName, result);\n        const hint = getNextStepHint(toolName);\n\n        res.setHeader('Content-Type', 'text/plain');\n        res.writeHead(200);\n        res.end(formatted + hint);\n        return;\n      }\n\n      // 404\n      res.setHeader('Content-Type', 'text/plain');\n      res.writeHead(404);\n      res.end('Not found. Use POST /tool/:name or GET /health');\n\n    } catch (err: any) {\n      res.setHeader('Content-Type', 'text/plain');\n      res.writeHead(500);\n      res.end(`Error: ${err.message || 'Internal error'}`);\n    }\n  });\n\n  server.listen(port, '127.0.0.1', () => {\n    console.error(`GitNexus eval-server: listening on http://127.0.0.1:${port}`);\n    console.error(`  POST /tool/query    — search execution flows`);\n    console.error(`  POST /tool/context  — 360-degree symbol view`);\n    console.error(`  POST /tool/impact   — blast radius analysis`);\n    console.error(`  POST /tool/cypher   — raw Cypher query`);\n    console.error(`  GET  /health        — health check`);\n    console.error(`  POST /shutdown      — graceful shutdown`);\n    if (idleTimeoutSec > 0) {\n      console.error(`  Auto-shutdown after ${idleTimeoutSec}s idle`);\n    }\n    try {\n      // Use fd 1 directly — LadybugDB captures process.stdout (#324)\n      writeSync(1, `GITNEXUS_EVAL_SERVER_READY:${port}\\n`);\n    } catch {\n      // stdout may not be available (e.g., broken pipe)\n    }\n  });\n\n  resetIdleTimer();\n\n  const shutdown = async () => {\n    console.error('GitNexus eval-server: shutting down...');\n    await backend.disconnect();\n    server.close();\n    process.exit(0);\n  };\n\n  process.on('SIGINT', shutdown);\n  process.on('SIGTERM', shutdown);\n}\n\nexport const MAX_BODY_SIZE = 1024 * 1024; // 1MB\n\nfunction readBody(req: http.IncomingMessage): Promise<string> {\n  return new 
Promise((resolve, reject) => {\n    const chunks: Buffer[] = [];\n    let totalSize = 0;\n    req.on('data', (chunk: Buffer) => {\n      totalSize += chunk.length;\n      if (totalSize > MAX_BODY_SIZE) {\n        req.destroy(new Error('Request body too large (max 1MB)'));\n        return;\n      }\n      chunks.push(chunk);\n    });\n    req.on('end', () => resolve(Buffer.concat(chunks).toString('utf-8')));\n    req.on('error', reject);\n  });\n}\n"
  },
  {
    "path": "gitnexus/src/cli/index.ts",
    "content": "#!/usr/bin/env node\n\n// Heap re-spawn removed — only analyze.ts needs the 8GB heap (via its own ensureHeap()).\n// Removing it from here improves MCP server startup time significantly.\n\nimport { Command } from 'commander';\nimport { createRequire } from 'node:module';\nimport { createLazyAction } from './lazy-action.js';\n\nconst _require = createRequire(import.meta.url);\nconst pkg = _require('../../package.json');\nconst program = new Command();\n\nprogram\n  .name('gitnexus')\n  .description('GitNexus local CLI and MCP server')\n  .version(pkg.version);\n\nprogram\n  .command('setup')\n  .description('One-time setup: configure MCP for Cursor, Claude Code, OpenCode')\n  .action(createLazyAction(() => import('./setup.js'), 'setupCommand'));\n\nprogram\n  .command('analyze [path]')\n  .description('Index a repository (full analysis)')\n  .option('-f, --force', 'Force full re-index even if up to date')\n  .option('--embeddings', 'Enable embedding generation for semantic search (off by default)')\n  .option('--skills', 'Generate repo-specific skill files from detected communities')\n  .option('-v, --verbose', 'Enable verbose ingestion warnings (default: false)')\n  .addHelpText('after', '\\nEnvironment variables:\\n  GITNEXUS_NO_GITIGNORE=1  Skip .gitignore parsing (still reads .gitnexusignore)')\n  .action(createLazyAction(() => import('./analyze.js'), 'analyzeCommand'));\n\nprogram\n  .command('serve')\n  .description('Start local HTTP server for web UI connection')\n  .option('-p, --port <port>', 'Port number', '4747')\n  .option('--host <host>', 'Bind address (default: 127.0.0.1, use 0.0.0.0 for remote access)')\n  .action(createLazyAction(() => import('./serve.js'), 'serveCommand'));\n\nprogram\n  .command('mcp')\n  .description('Start MCP server (stdio) — serves all indexed repos')\n  .action(createLazyAction(() => import('./mcp.js'), 'mcpCommand'));\n\nprogram\n  .command('list')\n  .description('List all indexed repositories')\n  
.action(createLazyAction(() => import('./list.js'), 'listCommand'));\n\nprogram\n  .command('status')\n  .description('Show index status for current repo')\n  .action(createLazyAction(() => import('./status.js'), 'statusCommand'));\n\nprogram\n  .command('clean')\n  .description('Delete GitNexus index for current repo')\n  .option('-f, --force', 'Skip confirmation prompt')\n  .option('--all', 'Clean all indexed repos')\n  .action(createLazyAction(() => import('./clean.js'), 'cleanCommand'));\n\nprogram\n  .command('wiki [path]')\n  .description('Generate repository wiki from knowledge graph')\n  .option('-f, --force', 'Force full regeneration even if up to date')\n  .option('--model <model>', 'LLM model name (default: minimax/minimax-m2.5)')\n  .option('--base-url <url>', 'LLM API base URL (default: OpenAI)')\n  .option('--api-key <key>', 'LLM API key (saved to ~/.gitnexus/config.json)')\n  .option('--concurrency <n>', 'Parallel LLM calls (default: 3)', '3')\n  .option('--gist', 'Publish wiki as a public GitHub Gist after generation')\n  .action(createLazyAction(() => import('./wiki.js'), 'wikiCommand'));\n\nprogram\n  .command('augment <pattern>')\n  .description('Augment a search pattern with knowledge graph context (used by hooks)')\n  .action(createLazyAction(() => import('./augment.js'), 'augmentCommand'));\n\n// ─── Direct Tool Commands (no MCP overhead) ────────────────────────\n// These invoke LocalBackend directly for use in eval, scripts, and CI.\n\nprogram\n  .command('query <search_query>')\n  .description('Search the knowledge graph for execution flows related to a concept')\n  .option('-r, --repo <name>', 'Target repository (omit if only one indexed)')\n  .option('-c, --context <text>', 'Task context to improve ranking')\n  .option('-g, --goal <text>', 'What you want to find')\n  .option('-l, --limit <n>', 'Max processes to return (default: 5)')\n  .option('--content', 'Include full symbol source code')\n  .action(createLazyAction(() => 
import('./tool.js'), 'queryCommand'));\n\nprogram\n  .command('context [name]')\n  .description('360-degree view of a code symbol: callers, callees, processes')\n  .option('-r, --repo <name>', 'Target repository')\n  .option('-u, --uid <uid>', 'Direct symbol UID (zero-ambiguity lookup)')\n  .option('-f, --file <path>', 'File path to disambiguate common names')\n  .option('--content', 'Include full symbol source code')\n  .action(createLazyAction(() => import('./tool.js'), 'contextCommand'));\n\nprogram\n  .command('impact <target>')\n  .description('Blast radius analysis: what breaks if you change a symbol')\n  .option('-d, --direction <dir>', 'upstream (dependants) or downstream (dependencies)', 'upstream')\n  .option('-r, --repo <name>', 'Target repository')\n  .option('--depth <n>', 'Max relationship depth (default: 3)')\n  .option('--include-tests', 'Include test files in results')\n  .action(createLazyAction(() => import('./tool.js'), 'impactCommand'));\n\nprogram\n  .command('cypher <query>')\n  .description('Execute raw Cypher query against the knowledge graph')\n  .option('-r, --repo <name>', 'Target repository')\n  .action(createLazyAction(() => import('./tool.js'), 'cypherCommand'));\n\n// ─── Eval Server (persistent daemon for SWE-bench) ─────────────────\n\nprogram\n  .command('eval-server')\n  .description('Start lightweight HTTP server for fast tool calls during evaluation')\n  .option('-p, --port <port>', 'Port number', '4848')\n  .option('--idle-timeout <seconds>', 'Auto-shutdown after N seconds idle (0 = disabled)', '0')\n  .action(createLazyAction(() => import('./eval-server.js'), 'evalServerCommand'));\n\nprogram.parse(process.argv);\n"
  },
  {
    "path": "gitnexus/src/cli/lazy-action.ts",
    "content": "/**\n * Creates a lazy-loaded CLI action that defers module import until invocation.\n * The generic constraints ensure the export name is a valid key of the module\n * at compile time — catching typos when used with concrete module imports.\n */\n\nfunction isCallable(value: unknown): value is (...args: unknown[]) => unknown {\n  return typeof value === 'function';\n}\n\nexport function createLazyAction<\n  TModule extends Record<string, unknown>,\n  TKey extends string & keyof TModule,\n>(\n  loader: () => Promise<TModule>,\n  exportName: TKey,\n): (...args: unknown[]) => Promise<void> {\n  return async (...args: unknown[]): Promise<void> => {\n    const module = await loader();\n    const action = module[exportName];\n    if (!isCallable(action)) {\n      throw new Error(`Lazy action export not found or not callable: ${exportName}`);\n    }\n    await action(...args);\n  };\n}\n"
  },
  {
    "path": "gitnexus/src/cli/list.ts",
    "content": "/**\n * List Command\n * \n * Shows all indexed repositories from the global registry.\n */\n\nimport { listRegisteredRepos } from '../storage/repo-manager.js';\n\nexport const listCommand = async () => {\n  const entries = await listRegisteredRepos({ validate: true });\n\n  if (entries.length === 0) {\n    console.log('No indexed repositories found.');\n    console.log('Run `gitnexus analyze` in a git repo to index it.');\n    return;\n  }\n\n  console.log(`\\n  Indexed Repositories (${entries.length})\\n`);\n\n  for (const entry of entries) {\n    const indexedDate = new Date(entry.indexedAt).toLocaleString();\n    const stats = entry.stats || {};\n    const commitShort = entry.lastCommit?.slice(0, 7) || 'unknown';\n\n    console.log(`  ${entry.name}`);\n    console.log(`    Path:    ${entry.path}`);\n    console.log(`    Indexed: ${indexedDate}`);\n    console.log(`    Commit:  ${commitShort}`);\n    console.log(`    Stats:   ${stats.files ?? 0} files, ${stats.nodes ?? 0} symbols, ${stats.edges ?? 0} edges`);\n    if (stats.communities) console.log(`    Clusters:   ${stats.communities}`);\n    if (stats.processes) console.log(`    Processes:  ${stats.processes}`);\n    console.log('');\n  }\n};\n"
  },
  {
    "path": "gitnexus/src/cli/mcp.ts",
    "content": "/**\n * MCP Command\n * \n * Starts the MCP server in standalone mode.\n * Loads all indexed repos from the global registry.\n * No longer depends on cwd — works from any directory.\n */\n\nimport { startMCPServer } from '../mcp/server.js';\nimport { LocalBackend } from '../mcp/local/local-backend.js';\n\nexport const mcpCommand = async () => {\n  // Prevent unhandled errors from crashing the MCP server process.\n  // LadybugDB lock conflicts and transient errors should degrade gracefully.\n  process.on('uncaughtException', (err) => {\n    console.error(`GitNexus MCP: uncaught exception — ${err.message}`);\n    // Process is in an undefined state after uncaughtException — exit after flushing\n    setTimeout(() => process.exit(1), 100);\n  });\n  process.on('unhandledRejection', (reason) => {\n    const msg = reason instanceof Error ? reason.message : String(reason);\n    console.error(`GitNexus MCP: unhandled rejection — ${msg}`);\n  });\n\n  // Initialize multi-repo backend from registry.\n  // The server starts even with 0 repos — tools call refreshRepos() lazily,\n  // so repos indexed after the server starts are discovered automatically.\n  const backend = new LocalBackend();\n  await backend.init();\n\n  const repos = await backend.listRepos();\n  if (repos.length === 0) {\n    console.error('GitNexus: No indexed repos yet. Run `gitnexus analyze` in a git repo — the server will pick it up automatically.');\n  } else {\n    console.error(`GitNexus: MCP server starting with ${repos.length} repo(s): ${repos.map(r => r.name).join(', ')}`);\n  }\n\n  // Start MCP server (serves all repos, discovers new ones lazily)\n  await startMCPServer(backend);\n};\n"
  },
  {
    "path": "gitnexus/src/cli/serve.ts",
    "content": "import { createServer } from '../server/api.js';\n\nexport const serveCommand = async (options?: { port?: string; host?: string }) => {\n  const port = Number(options?.port ?? 4747);\n  const host = options?.host ?? '127.0.0.1';\n  await createServer(port, host);\n};\n"
  },
  {
    "path": "gitnexus/src/cli/setup.ts",
    "content": "/**\n * Setup Command\n * \n * One-time global MCP configuration writer.\n * Detects installed AI editors and writes the appropriate MCP config\n * so the GitNexus MCP server is available in all projects.\n */\n\nimport fs from 'fs/promises';\nimport path from 'path';\nimport os from 'os';\nimport { fileURLToPath } from 'url';\nimport { glob } from 'glob';\nimport { getGlobalDir } from '../storage/repo-manager.js';\n\nconst __filename = fileURLToPath(import.meta.url);\nconst __dirname = path.dirname(__filename);\n\ninterface SetupResult {\n  configured: string[];\n  skipped: string[];\n  errors: string[];\n}\n\n/**\n * The MCP server entry for all editors.\n * On Windows, npx must be invoked via cmd /c since it's a .cmd script.\n */\nfunction getMcpEntry() {\n  if (process.platform === 'win32') {\n    return {\n      command: 'cmd',\n      args: ['/c', 'npx', '-y', 'gitnexus@latest', 'mcp'],\n    };\n  }\n  return {\n    command: 'npx',\n    args: ['-y', 'gitnexus@latest', 'mcp'],\n  };\n}\n\n/**\n * Merge gitnexus entry into an existing MCP config JSON object.\n * Returns the updated config.\n */\nfunction mergeMcpConfig(existing: any): any {\n  if (!existing || typeof existing !== 'object') {\n    existing = {};\n  }\n  if (!existing.mcpServers || typeof existing.mcpServers !== 'object') {\n    existing.mcpServers = {};\n  }\n  existing.mcpServers.gitnexus = getMcpEntry();\n  return existing;\n}\n\n/**\n * Try to read a JSON file, returning null if it doesn't exist or is invalid.\n */\nasync function readJsonFile(filePath: string): Promise<any | null> {\n  try {\n    const raw = await fs.readFile(filePath, 'utf-8');\n    return JSON.parse(raw);\n  } catch {\n    return null;\n  }\n}\n\n/**\n * Write JSON to a file, creating parent directories if needed.\n */\nasync function writeJsonFile(filePath: string, data: any): Promise<void> {\n  await fs.mkdir(path.dirname(filePath), { recursive: true });\n  await fs.writeFile(filePath, JSON.stringify(data, 
null, 2) + '\\n', 'utf-8');\n}\n\n/**\n * Check if a directory exists\n */\nasync function dirExists(dirPath: string): Promise<boolean> {\n  try {\n    const stat = await fs.stat(dirPath);\n    return stat.isDirectory();\n  } catch {\n    return false;\n  }\n}\n\n// ─── Editor-specific setup ─────────────────────────────────────────\n\nasync function setupCursor(result: SetupResult): Promise<void> {\n  const cursorDir = path.join(os.homedir(), '.cursor');\n  if (!(await dirExists(cursorDir))) {\n    result.skipped.push('Cursor (not installed)');\n    return;\n  }\n\n  const mcpPath = path.join(cursorDir, 'mcp.json');\n  try {\n    const existing = await readJsonFile(mcpPath);\n    const updated = mergeMcpConfig(existing);\n    await writeJsonFile(mcpPath, updated);\n    result.configured.push('Cursor');\n  } catch (err: any) {\n    result.errors.push(`Cursor: ${err.message}`);\n  }\n}\n\nasync function setupClaudeCode(result: SetupResult): Promise<void> {\n  const claudeDir = path.join(os.homedir(), '.claude');\n  const hasClaude = await dirExists(claudeDir);\n\n  if (!hasClaude) {\n    result.skipped.push('Claude Code (not installed)');\n    return;\n  }\n\n  // Claude Code uses a JSON settings file at ~/.claude.json or claude mcp add\n  console.log('');\n  console.log('  Claude Code detected. 
Run this command to add GitNexus MCP:');\n  console.log('');\n  console.log('    claude mcp add gitnexus -- npx -y gitnexus mcp');\n  console.log('');\n  result.configured.push('Claude Code (MCP manual step printed)');\n}\n\n/**\n * Install GitNexus skills to ~/.claude/skills/ for Claude Code.\n */\nasync function installClaudeCodeSkills(result: SetupResult): Promise<void> {\n  const claudeDir = path.join(os.homedir(), '.claude');\n  if (!(await dirExists(claudeDir))) return;\n\n  const skillsDir = path.join(claudeDir, 'skills');\n  try {\n    const installed = await installSkillsTo(skillsDir);\n    if (installed.length > 0) {\n      result.configured.push(`Claude Code skills (${installed.length} skills → ~/.claude/skills/)`);\n    }\n  } catch (err: any) {\n    result.errors.push(`Claude Code skills: ${err.message}`);\n  }\n}\n\n/**\n * Install GitNexus hooks to ~/.claude/settings.json for Claude Code.\n * Merges hook config without overwriting existing hooks.\n */\nasync function installClaudeCodeHooks(result: SetupResult): Promise<void> {\n  const claudeDir = path.join(os.homedir(), '.claude');\n  if (!(await dirExists(claudeDir))) return;\n\n  const settingsPath = path.join(claudeDir, 'settings.json');\n\n  // Source hooks bundled within the gitnexus package (hooks/claude/)\n  const pluginHooksPath = path.join(__dirname, '..', '..', 'hooks', 'claude');\n\n  // Copy unified hook script to ~/.claude/hooks/gitnexus/\n  const destHooksDir = path.join(claudeDir, 'hooks', 'gitnexus');\n\n  try {\n    await fs.mkdir(destHooksDir, { recursive: true });\n\n    const src = path.join(pluginHooksPath, 'gitnexus-hook.cjs');\n    const dest = path.join(destHooksDir, 'gitnexus-hook.cjs');\n    try {\n      let content = await fs.readFile(src, 'utf-8');\n      // Inject resolved CLI path so the copied hook can find the CLI\n      // even when it's no longer inside the npm package tree\n      const resolvedCli = path.join(__dirname, '..', 'cli', 'index.js');\n      const 
normalizedCli = path.resolve(resolvedCli).replace(/\\\\/g, '/');\n      const jsonCli = JSON.stringify(normalizedCli);\n      content = content.replace(\n        \"let cliPath = path.resolve(__dirname, '..', '..', 'dist', 'cli', 'index.js');\",\n        `let cliPath = ${jsonCli};`\n      );\n      await fs.writeFile(dest, content, 'utf-8');\n    } catch {\n      // Script not found in source — skip\n    }\n\n    const hookPath = path.join(destHooksDir, 'gitnexus-hook.cjs').replace(/\\\\/g, '/');\n    const hookCmd = `node \"${hookPath.replace(/\"/g, '\\\\\"')}\"`;\n\n    // Merge hook config into ~/.claude/settings.json\n    const existing = await readJsonFile(settingsPath) || {};\n    if (!existing.hooks) existing.hooks = {};\n\n    // NOTE: SessionStart hooks are broken on Windows (Claude Code bug #23576).\n    // Session context is delivered via CLAUDE.md / skills instead.\n\n    // Helper: add a hook entry if one with 'gitnexus-hook' isn't already registered\n    interface HookEntry { hooks?: Array<{ command?: string }> }\n    function ensureHookEntry(\n      eventName: string,\n      matcher: string,\n      timeout: number,\n      statusMessage: string,\n    ) {\n      if (!existing.hooks[eventName]) existing.hooks[eventName] = [];\n      const hasHook = existing.hooks[eventName].some(\n        (h: HookEntry) => h.hooks?.some(hh => hh.command?.includes('gitnexus-hook'))\n      );\n      if (!hasHook) {\n        existing.hooks[eventName].push({\n          matcher,\n          hooks: [{ type: 'command', command: hookCmd, timeout, statusMessage }],\n        });\n      }\n    }\n\n    ensureHookEntry('PreToolUse', 'Grep|Glob|Bash', 10, 'Enriching with GitNexus graph context...');\n    ensureHookEntry('PostToolUse', 'Bash', 10, 'Checking GitNexus index freshness...');\n\n    await writeJsonFile(settingsPath, existing);\n    result.configured.push('Claude Code hooks (PreToolUse, PostToolUse)');\n  } catch (err: any) {\n    result.errors.push(`Claude Code hooks: 
${err.message}`);\n  }\n}\n\nasync function setupOpenCode(result: SetupResult): Promise<void> {\n  const opencodeDir = path.join(os.homedir(), '.config', 'opencode');\n  if (!(await dirExists(opencodeDir))) {\n    result.skipped.push('OpenCode (not installed)');\n    return;\n  }\n\n  const configPath = path.join(opencodeDir, 'config.json');\n  try {\n    const existing = await readJsonFile(configPath);\n    const config = existing || {};\n    if (!config.mcp) config.mcp = {};\n    config.mcp.gitnexus = getMcpEntry();\n    await writeJsonFile(configPath, config);\n    result.configured.push('OpenCode');\n  } catch (err: any) {\n    result.errors.push(`OpenCode: ${err.message}`);\n  }\n}\n\n// ─── Skill Installation ───────────────────────────────────────────\n\n/**\n * Install GitNexus skills to a target directory.\n * Each skill is installed as {targetDir}/gitnexus-{skillName}/SKILL.md\n * following the Agent Skills standard (both Cursor and Claude Code).\n *\n * Supports two source layouts:\n *   - Flat file:  skills/{name}.md           → copied as SKILL.md\n *   - Directory:  skills/{name}/SKILL.md     → copied recursively (includes references/, etc.)\n */\nasync function installSkillsTo(targetDir: string): Promise<string[]> {\n  const installed: string[] = [];\n  const skillsRoot = path.join(__dirname, '..', '..', 'skills');\n\n  let flatFiles: string[] = [];\n  let dirSkillFiles: string[] = [];\n  try {\n    [flatFiles, dirSkillFiles] = await Promise.all([\n      glob('*.md', { cwd: skillsRoot }),\n      glob('*/SKILL.md', { cwd: skillsRoot }),\n    ]);\n  } catch {\n    return [];\n  }\n\n  const skillSources = new Map<string, { isDirectory: boolean }>();\n\n  for (const relPath of dirSkillFiles) {\n    skillSources.set(path.dirname(relPath), { isDirectory: true });\n  }\n  for (const relPath of flatFiles) {\n    const skillName = path.basename(relPath, '.md');\n    if (!skillSources.has(skillName)) {\n      skillSources.set(skillName, { isDirectory: false 
});\n    }\n  }\n\n  for (const [skillName, source] of skillSources) {\n    const skillDir = path.join(targetDir, skillName);\n\n    try {\n      if (source.isDirectory) {\n        const dirSource = path.join(skillsRoot, skillName);\n        await copyDirRecursive(dirSource, skillDir);\n        installed.push(skillName);\n      } else {\n        const flatSource = path.join(skillsRoot, `${skillName}.md`);\n        const content = await fs.readFile(flatSource, 'utf-8');\n        await fs.mkdir(skillDir, { recursive: true });\n        await fs.writeFile(path.join(skillDir, 'SKILL.md'), content, 'utf-8');\n        installed.push(skillName);\n      }\n    } catch {\n      // Source skill not found — skip\n    }\n  }\n\n  return installed;\n}\n\n/**\n * Recursively copy a directory tree.\n */\nasync function copyDirRecursive(src: string, dest: string): Promise<void> {\n  await fs.mkdir(dest, { recursive: true });\n  const entries = await fs.readdir(src, { withFileTypes: true });\n  for (const entry of entries) {\n    const srcPath = path.join(src, entry.name);\n    const destPath = path.join(dest, entry.name);\n    if (entry.isDirectory()) {\n      await copyDirRecursive(srcPath, destPath);\n    } else {\n      await fs.copyFile(srcPath, destPath);\n    }\n  }\n}\n\n/**\n * Install global Cursor skills to ~/.cursor/skills/gitnexus/\n */\nasync function installCursorSkills(result: SetupResult): Promise<void> {\n  const cursorDir = path.join(os.homedir(), '.cursor');\n  if (!(await dirExists(cursorDir))) return;\n  \n  const skillsDir = path.join(cursorDir, 'skills');\n  try {\n    const installed = await installSkillsTo(skillsDir);\n    if (installed.length > 0) {\n      result.configured.push(`Cursor skills (${installed.length} skills → ~/.cursor/skills/)`);\n    }\n  } catch (err: any) {\n    result.errors.push(`Cursor skills: ${err.message}`);\n  }\n}\n\n/**\n * Install global OpenCode skills to ~/.config/opencode/skill/gitnexus/\n */\nasync function 
installOpenCodeSkills(result: SetupResult): Promise<void> {\n  const opencodeDir = path.join(os.homedir(), '.config', 'opencode');\n  if (!(await dirExists(opencodeDir))) return;\n  \n  const skillsDir = path.join(opencodeDir, 'skill');\n  try {\n    const installed = await installSkillsTo(skillsDir);\n    if (installed.length > 0) {\n      result.configured.push(`OpenCode skills (${installed.length} skills → ~/.config/opencode/skill/)`);\n    }\n  } catch (err: any) {\n    result.errors.push(`OpenCode skills: ${err.message}`);\n  }\n}\n\n// ─── Main command ──────────────────────────────────────────────────\n\nexport const setupCommand = async () => {\n  console.log('');\n  console.log('  GitNexus Setup');\n  console.log('  ==============');\n  console.log('');\n\n  // Ensure global directory exists\n  const globalDir = getGlobalDir();\n  await fs.mkdir(globalDir, { recursive: true });\n\n  const result: SetupResult = {\n    configured: [],\n    skipped: [],\n    errors: [],\n  };\n\n  // Detect and configure each editor's MCP\n  await setupCursor(result);\n  await setupClaudeCode(result);\n  await setupOpenCode(result);\n  \n  // Install global skills for platforms that support them\n  await installClaudeCodeSkills(result);\n  await installClaudeCodeHooks(result);\n  await installCursorSkills(result);\n  await installOpenCodeSkills(result);\n\n  // Print results\n  if (result.configured.length > 0) {\n    console.log('  Configured:');\n    for (const name of result.configured) {\n      console.log(`    + ${name}`);\n    }\n  }\n\n  if (result.skipped.length > 0) {\n    console.log('');\n    console.log('  Skipped:');\n    for (const name of result.skipped) {\n      console.log(`    - ${name}`);\n    }\n  }\n\n  if (result.errors.length > 0) {\n    console.log('');\n    console.log('  Errors:');\n    for (const err of result.errors) {\n      console.log(`    ! 
${err}`);\n    }\n  }\n\n  console.log('');\n  console.log('  Summary:');\n  console.log(`    MCP configured for: ${result.configured.filter(c => !c.includes('skills')).join(', ') || 'none'}`);\n  console.log(`    Skills installed to: ${result.configured.filter(c => c.includes('skills')).join(', ') || 'none'}`);\n  console.log('');\n  console.log('  Next steps:');\n  console.log('    1. cd into any git repo');\n  console.log('    2. Run: gitnexus analyze');\n  console.log('    3. Open the repo in your editor — MCP is ready!');\n  console.log('');\n};\n"
  },
  {
    "path": "gitnexus/src/cli/skill-gen.ts",
    "content": "/**\n * Skill File Generator\n *\n * Generates repo-specific SKILL.md files from detected Leiden communities.\n * Each significant community becomes a skill that describes a functional area\n * of the codebase, including key files, entry points, execution flows, and\n * cross-community connections.\n */\n\nimport fs from 'fs/promises';\nimport path from 'path';\nimport { PipelineResult } from '../types/pipeline.js';\nimport { CommunityNode, CommunityMembership } from '../core/ingestion/community-processor.js';\nimport { ProcessNode } from '../core/ingestion/process-processor.js';\nimport { GraphNode, KnowledgeGraph } from '../core/graph/types.js';\n\n// ============================================================================\n// TYPES\n// ============================================================================\n\nexport interface GeneratedSkillInfo {\n  name: string;\n  label: string;\n  symbolCount: number;\n  fileCount: number;\n}\n\ninterface AggregatedCommunity {\n  label: string;\n  rawIds: string[];\n  symbolCount: number;\n  cohesion: number;\n}\n\ninterface MemberSymbol {\n  id: string;\n  name: string;\n  label: string;\n  filePath: string;\n  startLine: number;\n  isExported: boolean;\n}\n\ninterface FileInfo {\n  relativePath: string;\n  symbols: string[];\n}\n\ninterface CrossConnection {\n  targetLabel: string;\n  count: number;\n}\n\n// ============================================================================\n// MAIN EXPORT\n// ============================================================================\n\n/**\n * @brief Generate repo-specific skill files from detected communities\n * @param {string} repoPath - Absolute path to the repository root\n * @param {string} projectName - Human-readable project name\n * @param {PipelineResult} pipelineResult - In-memory pipeline data with communities, processes, graph\n * @returns {Promise<{ skills: GeneratedSkillInfo[], outputPath: string }>} Generated skill metadata\n */\nexport 
const generateSkillFiles = async (\n  repoPath: string,\n  projectName: string,\n  pipelineResult: PipelineResult\n): Promise<{ skills: GeneratedSkillInfo[]; outputPath: string }> => {\n  const { communityResult, processResult, graph } = pipelineResult;\n  const outputDir = path.join(repoPath, '.claude', 'skills', 'generated');\n\n  if (!communityResult || !communityResult.memberships.length) {\n    console.log('\\n  Skills: no communities detected, skipping skill generation');\n    return { skills: [], outputPath: outputDir };\n  }\n\n  console.log('\\n  Generating repo-specific skills...');\n\n  // Step 1: Resolve communities. Prefer the processor's filtered communities array;\n  // fall back to rebuilding from raw memberships when it is empty. The community\n  // processor skips singletons from its communities array but memberships include\n  // ALL assignments, so for repos with sparse CALLS edges the communities array\n  // can be empty while memberships still has useful groupings.\n  const communities = communityResult.communities.length > 0\n    ? communityResult.communities\n    : buildCommunitiesFromMemberships(communityResult.memberships, graph, repoPath);\n\n  const aggregated = aggregateCommunities(communities);\n\n  // Step 2: Filter to significant communities\n  // Keep communities with >= 3 symbols after aggregation.\n  const significant = aggregated\n    .filter(c => c.symbolCount >= 3)\n    .sort((a, b) => b.symbolCount - a.symbolCount)\n    .slice(0, 20);\n\n  if (significant.length === 0) {\n    console.log('\\n  Skills: no significant communities found (all below 3-symbol threshold)');\n    return { skills: [], outputPath: outputDir };\n  }\n\n  // Step 3: Build lookup maps\n  const membershipsByComm = buildMembershipMap(communityResult.memberships);\n  const nodeIdToCommunityLabel = buildNodeCommunityLabelMap(\n    communityResult.memberships,\n    communities\n  );\n\n  // Step 4: Clear and recreate output directory\n  try {\n    await fs.rm(outputDir, { recursive: true, force: true });\n  } catch { /* may not 
exist */ }\n  await fs.mkdir(outputDir, { recursive: true });\n\n  // Step 5: Generate skill files\n  const skills: GeneratedSkillInfo[] = [];\n  const usedNames = new Set<string>();\n\n  for (const community of significant) {\n    // Gather member symbols\n    const members = gatherMembers(community.rawIds, membershipsByComm, graph);\n    if (members.length === 0) continue;\n\n    // Gather file info\n    const files = gatherFiles(members, repoPath);\n\n    // Gather entry points\n    const entryPoints = gatherEntryPoints(members);\n\n    // Gather execution flows\n    const flows = gatherFlows(community.rawIds, processResult?.processes || []);\n\n    // Gather cross-community connections\n    const connections = gatherCrossConnections(\n      community.rawIds,\n      community.label,\n      membershipsByComm,\n      nodeIdToCommunityLabel,\n      graph\n    );\n\n    // Generate kebab name\n    const kebabName = toKebabName(community.label, usedNames);\n    usedNames.add(kebabName);\n\n    // Generate SKILL.md content\n    const content = renderSkillMarkdown(\n      community,\n      projectName,\n      members,\n      files,\n      entryPoints,\n      flows,\n      connections,\n      kebabName\n    );\n\n    // Write file\n    const skillDir = path.join(outputDir, kebabName);\n    await fs.mkdir(skillDir, { recursive: true });\n    await fs.writeFile(path.join(skillDir, 'SKILL.md'), content, 'utf-8');\n\n    const info: GeneratedSkillInfo = {\n      name: kebabName,\n      label: community.label,\n      symbolCount: community.symbolCount,\n      fileCount: files.length,\n    };\n    skills.push(info);\n\n    console.log(`    \\u2713 ${community.label} (${community.symbolCount} symbols, ${files.length} files)`);\n  }\n\n  console.log(`\\n  ${skills.length} skills generated \\u2192 .claude/skills/generated/`);\n\n  return { skills, outputPath: outputDir };\n};\n\n// ============================================================================\n// FALLBACK 
COMMUNITY BUILDER\n// ============================================================================\n\n/**\n * @brief Build CommunityNode-like objects from raw memberships when the community\n *        processor's communities array is empty (all singletons were filtered out)\n * @param {CommunityMembership[]} memberships - All node-to-community assignments\n * @param {KnowledgeGraph} graph - The knowledge graph for resolving node metadata\n * @param {string} _repoPath - Repository root (currently unused; reserved for path normalization)\n * @returns {CommunityNode[]} Synthetic community nodes built from membership data\n */\nconst buildCommunitiesFromMemberships = (\n  memberships: CommunityMembership[],\n  graph: KnowledgeGraph,\n  _repoPath: string\n): CommunityNode[] => {\n  // Group memberships by communityId\n  const groups = new Map<string, string[]>();\n  for (const m of memberships) {\n    const arr = groups.get(m.communityId);\n    if (arr) {\n      arr.push(m.nodeId);\n    } else {\n      groups.set(m.communityId, [m.nodeId]);\n    }\n  }\n\n  const communities: CommunityNode[] = [];\n\n  for (const [commId, nodeIds] of groups) {\n    // Derive a heuristic label from the most common parent directory\n    const folderCounts = new Map<string, number>();\n    for (const nodeId of nodeIds) {\n      const node = graph.getNode(nodeId);\n      if (!node?.properties.filePath) continue;\n      const normalized = node.properties.filePath.replace(/\\\\/g, '/');\n      const parts = normalized.split('/').filter(Boolean);\n      if (parts.length >= 2) {\n        const folder = parts[parts.length - 2];\n        if (!['src', 'lib', 'core', 'utils', 'common', 'shared', 'helpers'].includes(folder.toLowerCase())) {\n          folderCounts.set(folder, (folderCounts.get(folder) || 0) + 1);\n        }\n      }\n    }\n\n    let bestFolder = '';\n    let bestCount = 0;\n    for (const [folder, count] of folderCounts) {\n      if (count > bestCount) {\n        bestCount = count;\n        bestFolder = 
folder;\n      }\n    }\n\n    const label = bestFolder\n      ? bestFolder.charAt(0).toUpperCase() + bestFolder.slice(1)\n      : `Cluster_${commId.replace('comm_', '')}`;\n\n    // Compute cohesion as internal-edge ratio (matches backend calculateCohesion).\n    // For each member node, count edges that stay inside the community vs total.\n    const nodeSet = new Set(nodeIds);\n    let internalEdges = 0;\n    let totalEdges = 0;\n    graph.forEachRelationship(rel => {\n      if (nodeSet.has(rel.sourceId)) {\n        totalEdges++;\n        if (nodeSet.has(rel.targetId)) internalEdges++;\n      }\n    });\n    const cohesion = totalEdges > 0 ? Math.min(1.0, internalEdges / totalEdges) : 1.0;\n\n    communities.push({\n      id: commId,\n      label,\n      heuristicLabel: label,\n      cohesion,\n      symbolCount: nodeIds.length,\n    });\n  }\n\n  return communities.sort((a, b) => b.symbolCount - a.symbolCount);\n};\n\n// ============================================================================\n// AGGREGATION\n// ============================================================================\n\n/**\n * @brief Aggregate raw Leiden communities by heuristicLabel\n * @param {CommunityNode[]} communities - Raw community nodes from Leiden detection\n * @returns {AggregatedCommunity[]} Aggregated communities grouped by label\n */\nconst aggregateCommunities = (communities: CommunityNode[]): AggregatedCommunity[] => {\n  const groups = new Map<string, {\n    rawIds: string[];\n    totalSymbols: number;\n    weightedCohesion: number;\n  }>();\n\n  for (const c of communities) {\n    const label = c.heuristicLabel || c.label || 'Unknown';\n    const symbols = c.symbolCount || 0;\n    const cohesion = c.cohesion || 0;\n    const existing = groups.get(label);\n\n    if (!existing) {\n      groups.set(label, {\n        rawIds: [c.id],\n        totalSymbols: symbols,\n        weightedCohesion: cohesion * symbols,\n      });\n    } else {\n      existing.rawIds.push(c.id);\n   
   existing.totalSymbols += symbols;\n      existing.weightedCohesion += cohesion * symbols;\n    }\n  }\n\n  return Array.from(groups.entries()).map(([label, g]) => ({\n    label,\n    rawIds: g.rawIds,\n    symbolCount: g.totalSymbols,\n    cohesion: g.totalSymbols > 0 ? g.weightedCohesion / g.totalSymbols : 0,\n  }));\n};\n\n// ============================================================================\n// LOOKUP MAP BUILDERS\n// ============================================================================\n\n/**\n * @brief Build a map from communityId to member nodeIds\n * @param {CommunityMembership[]} memberships - All membership records\n * @returns {Map<string, string[]>} Map of communityId -> nodeId[]\n */\nconst buildMembershipMap = (memberships: CommunityMembership[]): Map<string, string[]> => {\n  const map = new Map<string, string[]>();\n  for (const m of memberships) {\n    const arr = map.get(m.communityId);\n    if (arr) {\n      arr.push(m.nodeId);\n    } else {\n      map.set(m.communityId, [m.nodeId]);\n    }\n  }\n  return map;\n};\n\n/**\n * @brief Build a map from nodeId to aggregated community label\n * @param {CommunityMembership[]} memberships - All membership records\n * @param {CommunityNode[]} communities - Community nodes with labels\n * @returns {Map<string, string>} Map of nodeId -> community label\n */\nconst buildNodeCommunityLabelMap = (\n  memberships: CommunityMembership[],\n  communities: CommunityNode[]\n): Map<string, string> => {\n  const commIdToLabel = new Map<string, string>();\n  for (const c of communities) {\n    commIdToLabel.set(c.id, c.heuristicLabel || c.label || 'Unknown');\n  }\n\n  const map = new Map<string, string>();\n  for (const m of memberships) {\n    const label = commIdToLabel.get(m.communityId);\n    if (label) {\n      map.set(m.nodeId, label);\n    }\n  }\n  return map;\n};\n\n// ============================================================================\n// DATA GATHERING\n// 
============================================================================\n\n/**\n * @brief Gather member symbols for an aggregated community\n * @param {string[]} rawIds - Raw community IDs belonging to this aggregated community\n * @param {Map<string, string[]>} membershipsByComm - communityId -> nodeIds\n * @param {KnowledgeGraph} graph - The knowledge graph\n * @returns {MemberSymbol[]} Array of member symbol information\n */\nconst gatherMembers = (\n  rawIds: string[],\n  membershipsByComm: Map<string, string[]>,\n  graph: KnowledgeGraph\n): MemberSymbol[] => {\n  const seen = new Set<string>();\n  const members: MemberSymbol[] = [];\n\n  for (const commId of rawIds) {\n    const nodeIds = membershipsByComm.get(commId) || [];\n    for (const nodeId of nodeIds) {\n      if (seen.has(nodeId)) continue;\n      seen.add(nodeId);\n\n      const node = graph.getNode(nodeId);\n      if (!node) continue;\n\n      members.push({\n        id: node.id,\n        name: node.properties.name,\n        label: node.label,\n        filePath: node.properties.filePath || '',\n        startLine: node.properties.startLine || 0,\n        isExported: node.properties.isExported === true,\n      });\n    }\n  }\n\n  return members;\n};\n\n/**\n * @brief Gather deduplicated file info with per-file symbol names\n * @param {MemberSymbol[]} members - Member symbols\n * @param {string} repoPath - Repository root for relative path computation\n * @returns {FileInfo[]} Sorted by symbol count descending\n */\nconst gatherFiles = (members: MemberSymbol[], repoPath: string): FileInfo[] => {\n  const fileMap = new Map<string, string[]>();\n\n  for (const m of members) {\n    if (!m.filePath) continue;\n    const rel = toRelativePath(m.filePath, repoPath);\n    const arr = fileMap.get(rel);\n    if (arr) {\n      arr.push(m.name);\n    } else {\n      fileMap.set(rel, [m.name]);\n    }\n  }\n\n  return Array.from(fileMap.entries())\n    .map(([relativePath, symbols]) => ({ relativePath, 
symbols }))\n    .sort((a, b) => b.symbols.length - a.symbols.length);\n};\n\n/**\n * @brief Gather exported entry points prioritized by type\n * @param {MemberSymbol[]} members - Member symbols\n * @returns {MemberSymbol[]} Exported symbols sorted by type priority\n */\nconst gatherEntryPoints = (members: MemberSymbol[]): MemberSymbol[] => {\n  const typePriority: Record<string, number> = {\n    Function: 0,\n    Class: 1,\n    Method: 2,\n    Interface: 3,\n  };\n\n  return members\n    .filter(m => m.isExported)\n    .sort((a, b) => {\n      const pa = typePriority[a.label] ?? 99;\n      const pb = typePriority[b.label] ?? 99;\n      return pa - pb;\n    });\n};\n\n/**\n * @brief Gather execution flows touching this community\n * @param {string[]} rawIds - Raw community IDs for this aggregated community\n * @param {ProcessNode[]} processes - All detected processes\n * @returns {ProcessNode[]} Processes whose communities intersect rawIds, sorted by stepCount\n */\nconst gatherFlows = (rawIds: string[], processes: ProcessNode[]): ProcessNode[] => {\n  const rawIdSet = new Set(rawIds);\n\n  return processes\n    .filter(proc => proc.communities.some(cid => rawIdSet.has(cid)))\n    .sort((a, b) => b.stepCount - a.stepCount);\n};\n\n/**\n * @brief Gather cross-community call connections\n * @param {string[]} rawIds - Raw community IDs for this aggregated community\n * @param {string} ownLabel - This community's aggregated label\n * @param {Map<string, string[]>} membershipsByComm - communityId -> nodeIds\n * @param {Map<string, string>} nodeIdToCommunityLabel - nodeId -> community label\n * @param {KnowledgeGraph} graph - The knowledge graph\n * @returns {CrossConnection[]} Aggregated cross-community connections sorted by count\n */\nconst gatherCrossConnections = (\n  rawIds: string[],\n  ownLabel: string,\n  membershipsByComm: Map<string, string[]>,\n  nodeIdToCommunityLabel: Map<string, string>,\n  graph: KnowledgeGraph\n): CrossConnection[] => {\n  // Collect all 
node IDs in this aggregated community\n  const ownNodeIds = new Set<string>();\n  for (const commId of rawIds) {\n    const nodeIds = membershipsByComm.get(commId) || [];\n    for (const nid of nodeIds) {\n      ownNodeIds.add(nid);\n    }\n  }\n\n  // Count outgoing CALLS to nodes in different communities\n  const targetCounts = new Map<string, number>();\n\n  graph.forEachRelationship(rel => {\n    if (rel.type !== 'CALLS') return;\n    if (!ownNodeIds.has(rel.sourceId)) return;\n    if (ownNodeIds.has(rel.targetId)) return; // same community\n\n    const targetLabel = nodeIdToCommunityLabel.get(rel.targetId);\n    if (!targetLabel || targetLabel === ownLabel) return;\n\n    targetCounts.set(targetLabel, (targetCounts.get(targetLabel) || 0) + 1);\n  });\n\n  return Array.from(targetCounts.entries())\n    .map(([targetLabel, count]) => ({ targetLabel, count }))\n    .sort((a, b) => b.count - a.count);\n};\n\n// ============================================================================\n// MARKDOWN RENDERING\n// ============================================================================\n\n/**\n * @brief Render SKILL.md content for a single community\n * @param {AggregatedCommunity} community - The aggregated community data\n * @param {string} projectName - Project name for the description\n * @param {MemberSymbol[]} members - All member symbols\n * @param {FileInfo[]} files - File info with symbol names\n * @param {MemberSymbol[]} entryPoints - Exported entry point symbols\n * @param {ProcessNode[]} flows - Execution flows touching this community\n * @param {CrossConnection[]} connections - Cross-community connections\n * @param {string} kebabName - Kebab-case name for the skill\n * @returns {string} Full SKILL.md content\n */\nconst renderSkillMarkdown = (\n  community: AggregatedCommunity,\n  projectName: string,\n  members: MemberSymbol[],\n  files: FileInfo[],\n  entryPoints: MemberSymbol[],\n  flows: ProcessNode[],\n  connections: CrossConnection[],\n  
kebabName: string\n): string => {\n  const cohesionPct = Math.round(community.cohesion * 100);\n\n  // Dominant directory: most common top-level directory\n  const dominantDir = getDominantDirectory(files);\n\n  // Top symbol names for \"When to Use\"\n  const topNames = entryPoints.slice(0, 3).map(e => e.name);\n  if (topNames.length === 0) {\n    // Fallback to any members\n    topNames.push(...members.slice(0, 3).map(m => m.name));\n  }\n\n  const lines: string[] = [];\n\n  // Frontmatter\n  lines.push('---');\n  lines.push(`name: ${kebabName}`);\n  lines.push(`description: \"Skill for the ${community.label} area of ${projectName}. ${community.symbolCount} symbols across ${files.length} files.\"`);\n  lines.push('---');\n  lines.push('');\n\n  // Title\n  lines.push(`# ${community.label}`);\n  lines.push('');\n  lines.push(`${community.symbolCount} symbols | ${files.length} files | Cohesion: ${cohesionPct}%`);\n  lines.push('');\n\n  // When to Use\n  lines.push('## When to Use');\n  lines.push('');\n  if (dominantDir) {\n    lines.push(`- Working with code in \\`${dominantDir}/\\``);\n  }\n  if (topNames.length > 0) {\n    lines.push(`- Understanding how ${topNames.join(', ')} work`);\n  }\n  lines.push(`- Modifying ${community.label.toLowerCase()}-related functionality`);\n  lines.push('');\n\n  // Key Files (top 10)\n  lines.push('## Key Files');\n  lines.push('');\n  lines.push('| File | Symbols |');\n  lines.push('|------|---------|');\n  for (const f of files.slice(0, 10)) {\n    const symbolList = f.symbols.slice(0, 5).join(', ');\n    const suffix = f.symbols.length > 5 ? 
` (+${f.symbols.length - 5})` : '';\n    lines.push(`| \\`${f.relativePath}\\` | ${symbolList}${suffix} |`);\n  }\n  lines.push('');\n\n  // Entry Points (top 5)\n  if (entryPoints.length > 0) {\n    lines.push('## Entry Points');\n    lines.push('');\n    lines.push('Start here when exploring this area:');\n    lines.push('');\n    for (const ep of entryPoints.slice(0, 5)) {\n      lines.push(`- **\\`${ep.name}\\`** (${ep.label}) \\u2014 \\`${ep.filePath}:${ep.startLine}\\``);\n    }\n    lines.push('');\n  }\n\n  // Key Symbols (top 20, exported first, then by type)\n  lines.push('## Key Symbols');\n  lines.push('');\n  lines.push('| Symbol | Type | File | Line |');\n  lines.push('|--------|------|------|------|');\n  const sortedMembers = [...members].sort((a, b) => {\n    if (a.isExported !== b.isExported) return a.isExported ? -1 : 1;\n    return a.label.localeCompare(b.label);\n  });\n  for (const m of sortedMembers.slice(0, 20)) {\n    lines.push(`| \\`${m.name}\\` | ${m.label} | \\`${m.filePath}\\` | ${m.startLine} |`);\n  }\n  lines.push('');\n\n  // Execution Flows\n  if (flows.length > 0) {\n    lines.push('## Execution Flows');\n    lines.push('');\n    lines.push('| Flow | Type | Steps |');\n    lines.push('|------|------|-------|');\n    for (const f of flows.slice(0, 10)) {\n      lines.push(`| \\`${f.heuristicLabel}\\` | ${f.processType} | ${f.stepCount} |`);\n    }\n    lines.push('');\n  }\n\n  // Connected Areas\n  if (connections.length > 0) {\n    lines.push('## Connected Areas');\n    lines.push('');\n    lines.push('| Area | Connections |');\n    lines.push('|------|-------------|');\n    for (const c of connections.slice(0, 8)) {\n      lines.push(`| ${c.targetLabel} | ${c.count} calls |`);\n    }\n    lines.push('');\n  }\n\n  // How to Explore\n  const firstEntry = entryPoints.length > 0 ? entryPoints[0].name : (members.length > 0 ? members[0].name : community.label);\n  lines.push('## How to Explore');\n  lines.push('');\n  lines.push(`1. 
\\`gitnexus_context({name: \"${firstEntry}\"})\\` \\u2014 see callers and callees`);\n  lines.push(`2. \\`gitnexus_query({query: \"${community.label.toLowerCase()}\"})\\` \\u2014 find related execution flows`);\n  lines.push('3. Read key files listed above for implementation details');\n  lines.push('');\n\n  return lines.join('\\n');\n};\n\n// ============================================================================\n// UTILITY HELPERS\n// ============================================================================\n\n/**\n * @brief Convert a community label to a kebab-case directory name\n * @param {string} label - The community label\n * @param {Set<string>} usedNames - Already-used names for collision detection\n * @returns {string} Unique kebab-case name capped at 50 characters\n */\nconst toKebabName = (label: string, usedNames: Set<string>): string => {\n  let name = label\n    .toLowerCase()\n    .replace(/[^a-z0-9]+/g, '-')\n    .replace(/^-+|-+$/g, '')\n    .slice(0, 50);\n\n  if (!name) name = 'skill';\n\n  let candidate = name;\n  let counter = 2;\n  while (usedNames.has(candidate)) {\n    candidate = `${name}-${counter}`;\n    counter++;\n  }\n\n  return candidate;\n};\n\n/**\n * @brief Convert an absolute or repo-relative file path to a clean relative path\n * @param {string} filePath - The file path from the graph node\n * @param {string} repoPath - Repository root path\n * @returns {string} Relative path using forward slashes\n */\nconst toRelativePath = (filePath: string, repoPath: string): string => {\n  // Normalize to forward slashes for cross-platform consistency\n  const normalizedFile = filePath.replace(/\\\\/g, '/');\n  const normalizedRepo = repoPath.replace(/\\\\/g, '/');\n\n  if (normalizedFile.startsWith(normalizedRepo)) {\n    return normalizedFile.slice(normalizedRepo.length).replace(/^\\//, '');\n  }\n  // Already relative or different root\n  return normalizedFile.replace(/^\\//, '');\n};\n\n/**\n * @brief Find the dominant (most 
common) top-level directory across files\n * @param {FileInfo[]} files - File info entries\n * @returns {string | null} Most common directory or null\n */\nconst getDominantDirectory = (files: FileInfo[]): string | null => {\n  const dirCounts = new Map<string, number>();\n\n  for (const f of files) {\n    const parts = f.relativePath.split('/');\n    if (parts.length >= 2) {\n      const dir = parts[0];\n      dirCounts.set(dir, (dirCounts.get(dir) || 0) + f.symbols.length);\n    }\n  }\n\n  let best: string | null = null;\n  let bestCount = 0;\n  for (const [dir, count] of dirCounts) {\n    if (count > bestCount) {\n      bestCount = count;\n      best = dir;\n    }\n  }\n\n  return best;\n};\n"
  },
  {
    "path": "gitnexus/src/cli/status.ts",
    "content": "/**\n * Status Command\n * \n * Shows the indexing status of the current repository.\n */\n\nimport { findRepo, getStoragePaths, hasKuzuIndex } from '../storage/repo-manager.js';\nimport { getCurrentCommit, isGitRepo, getGitRoot } from '../storage/git.js';\n\nexport const statusCommand = async () => {\n  const cwd = process.cwd();\n\n  if (!isGitRepo(cwd)) {\n    console.log('Not a git repository.');\n    return;\n  }\n\n  const repo = await findRepo(cwd);\n  if (!repo) {\n    // Check if there's a stale KuzuDB index that needs migration\n    const repoRoot = getGitRoot(cwd) ?? cwd;\n    const { storagePath } = getStoragePaths(repoRoot);\n    if (await hasKuzuIndex(storagePath)) {\n      console.log('Repository has a stale KuzuDB index from a previous version.');\n      console.log('Run: gitnexus analyze   (rebuilds the index with LadybugDB)');\n    } else {\n      console.log('Repository not indexed.');\n      console.log('Run: gitnexus analyze');\n    }\n    return;\n  }\n\n  const currentCommit = getCurrentCommit(repo.repoPath);\n  const isUpToDate = currentCommit === repo.meta.lastCommit;\n\n  console.log(`Repository: ${repo.repoPath}`);\n  console.log(`Indexed: ${new Date(repo.meta.indexedAt).toLocaleString()}`);\n  console.log(`Indexed commit: ${repo.meta.lastCommit?.slice(0, 7)}`);\n  console.log(`Current commit: ${currentCommit?.slice(0, 7)}`);\n  console.log(`Status: ${isUpToDate ? '✅ up-to-date' : '⚠️ stale (re-run gitnexus analyze)'}`);\n};\n"
  },
  {
    "path": "gitnexus/src/cli/tool.ts",
    "content": "/**\n * Direct CLI Tool Commands\n * \n * Exposes GitNexus tools (query, context, impact, cypher) as direct CLI commands.\n * Bypasses MCP entirely — invokes LocalBackend directly for minimal overhead.\n * \n * Usage:\n *   gitnexus query \"authentication flow\"\n *   gitnexus context --name \"validateUser\"\n *   gitnexus impact --target \"AuthService\" --direction upstream\n *   gitnexus cypher \"MATCH (n:Function) RETURN n.name LIMIT 10\"\n * \n * Note: Output goes to stdout via fs.writeSync(fd 1), bypassing LadybugDB's\n * native module which captures the Node.js process.stdout stream during init.\n * See the output() function for details (#324).\n */\n\nimport { writeSync } from 'node:fs';\nimport { LocalBackend } from '../mcp/local/local-backend.js';\n\nlet _backend: LocalBackend | null = null;\n\nasync function getBackend(): Promise<LocalBackend> {\n  if (_backend) return _backend;\n  _backend = new LocalBackend();\n  const ok = await _backend.init();\n  if (!ok) {\n    console.error('GitNexus: No indexed repositories found. Run: gitnexus analyze');\n    process.exit(1);\n  }\n  return _backend;\n}\n\n/**\n * Write tool output to stdout using low-level fd write.\n *\n * LadybugDB's native module captures Node.js process.stdout during init,\n * but the underlying OS file descriptor 1 (stdout) remains intact.\n * By using fs.writeSync(1, ...) we bypass the Node.js stream layer\n * and write directly to the real stdout fd (#324).\n *\n * Falls back to stderr if the fd write fails (e.g., broken pipe).\n */\nfunction output(data: any): void {\n  const text = typeof data === 'string' ? data : JSON.stringify(data, null, 2);\n  try {\n    writeSync(1, text + '\\n');\n  } catch (err: any) {\n    if (err?.code === 'EPIPE') {\n      // Consumer closed the pipe (e.g., `gitnexus cypher ... 
| head -1`)\n      // Exit cleanly per Unix convention\n      process.exit(0);\n    }\n    // Fallback: stderr (previous behavior, works on all platforms)\n    process.stderr.write(text + '\\n');\n  }\n}\n\nexport async function queryCommand(queryText: string, options?: {\n  repo?: string;\n  context?: string;\n  goal?: string;\n  limit?: string;\n  content?: boolean;\n}): Promise<void> {\n  if (!queryText?.trim()) {\n    console.error('Usage: gitnexus query <search_query>');\n    process.exit(1);\n  }\n\n  const backend = await getBackend();\n  const result = await backend.callTool('query', {\n    query: queryText,\n    task_context: options?.context,\n    goal: options?.goal,\n    limit: options?.limit ? parseInt(options.limit, 10) : undefined,\n    include_content: options?.content ?? false,\n    repo: options?.repo,\n  });\n  output(result);\n}\n\nexport async function contextCommand(name: string, options?: {\n  repo?: string;\n  file?: string;\n  uid?: string;\n  content?: boolean;\n}): Promise<void> {\n  if (!name?.trim() && !options?.uid) {\n    console.error('Usage: gitnexus context <symbol_name> [--uid <uid>] [--file <path>]');\n    process.exit(1);\n  }\n\n  const backend = await getBackend();\n  const result = await backend.callTool('context', {\n    name: name || undefined,\n    uid: options?.uid,\n    file_path: options?.file,\n    include_content: options?.content ?? false,\n    repo: options?.repo,\n  });\n  output(result);\n}\n\nexport async function impactCommand(target: string, options?: {\n  direction?: string;\n  repo?: string;\n  depth?: string;\n  includeTests?: boolean;\n}): Promise<void> {\n  if (!target?.trim()) {\n    console.error('Usage: gitnexus impact <symbol_name> [--direction upstream|downstream]');\n    process.exit(1);\n  }\n\n  try {\n    const backend = await getBackend();\n    const result = await backend.callTool('impact', {\n      target,\n      direction: options?.direction || 'upstream',\n      maxDepth: options?.depth ? 
parseInt(options.depth, 10) : undefined,\n      includeTests: options?.includeTests ?? false,\n      repo: options?.repo,\n    });\n    output(result);\n  } catch (err: unknown) {\n    // Belt-and-suspenders: catch infrastructure failures (getBackend, callTool transport)\n    // The backend's impact() already returns structured errors for graph query failures\n    output({\n      error: (err instanceof Error ? err.message : String(err)) || 'Impact analysis failed unexpectedly',\n      target: { name: target },\n      direction: options?.direction || 'upstream',\n      suggestion: 'Try reducing --depth or using gitnexus context <symbol> as a fallback',\n    });\n    process.exit(1);\n  }\n}\n\nexport async function cypherCommand(query: string, options?: {\n  repo?: string;\n}): Promise<void> {\n  if (!query?.trim()) {\n    console.error('Usage: gitnexus cypher <cypher_query>');\n    process.exit(1);\n  }\n\n  const backend = await getBackend();\n  const result = await backend.callTool('cypher', {\n    query,\n    repo: options?.repo,\n  });\n  output(result);\n}\n"
  },
  {
    "path": "gitnexus/src/cli/wiki.ts",
    "content": "/**\n * Wiki Command\n * \n * Generates repository documentation from the knowledge graph.\n * Usage: gitnexus wiki [path] [options]\n */\n\nimport path from 'path';\nimport readline from 'readline';\nimport { execSync, execFileSync } from 'child_process';\nimport cliProgress from 'cli-progress';\nimport { getGitRoot, isGitRepo } from '../storage/git.js';\nimport { getStoragePaths, loadMeta, loadCLIConfig, saveCLIConfig } from '../storage/repo-manager.js';\nimport { WikiGenerator, type WikiOptions } from '../core/wiki/generator.js';\nimport { resolveLLMConfig } from '../core/wiki/llm-client.js';\n\nexport interface WikiCommandOptions {\n  force?: boolean;\n  model?: string;\n  baseUrl?: string;\n  apiKey?: string;\n  concurrency?: string;\n  gist?: boolean;\n}\n\n/**\n * Prompt the user for input via stdin.\n */\nfunction prompt(question: string, hide = false): Promise<string> {\n  return new Promise((resolve) => {\n    const rl = readline.createInterface({\n      input: process.stdin,\n      output: process.stdout,\n    });\n\n    if (hide && process.stdin.isTTY) {\n      // Mask input for API keys\n      process.stdout.write(question);\n      let input = '';\n      process.stdin.setRawMode(true);\n      process.stdin.resume();\n      process.stdin.setEncoding('utf-8');\n\n      const onData = (char: string) => {\n        if (char === '\\n' || char === '\\r' || char === '\\u0004') {\n          process.stdin.setRawMode(false);\n          process.stdin.removeListener('data', onData);\n          process.stdout.write('\\n');\n          rl.close();\n          resolve(input);\n        } else if (char === '\\u0003') {\n          // Ctrl+C\n          process.stdin.setRawMode(false);\n          rl.close();\n          process.exit(1);\n        } else if (char === '\\u007F' || char === '\\b') {\n          // Backspace\n          if (input.length > 0) {\n            input = input.slice(0, -1);\n            process.stdout.write('\\b \\b');\n          }\n        
} else {\n          input += char;\n          process.stdout.write('*');\n        }\n      };\n      process.stdin.on('data', onData);\n    } else {\n      rl.question(question, (answer) => {\n        rl.close();\n        resolve(answer.trim());\n      });\n    }\n  });\n}\n\nexport const wikiCommand = async (\n  inputPath?: string,\n  options?: WikiCommandOptions,\n) => {\n  console.log('\\n  GitNexus Wiki Generator\\n');\n\n  // ── Resolve repo path ───────────────────────────────────────────────\n  let repoPath: string;\n  if (inputPath) {\n    repoPath = path.resolve(inputPath);\n  } else {\n    const gitRoot = getGitRoot(process.cwd());\n    if (!gitRoot) {\n      console.log('  Error: Not inside a git repository\\n');\n      process.exitCode = 1;\n      return;\n    }\n    repoPath = gitRoot;\n  }\n\n  if (!isGitRepo(repoPath)) {\n    console.log('  Error: Not a git repository\\n');\n    process.exitCode = 1;\n    return;\n  }\n\n  // ── Check for existing index ────────────────────────────────────────\n  const { storagePath, lbugPath } = getStoragePaths(repoPath);\n  const meta = await loadMeta(storagePath);\n\n  if (!meta) {\n    console.log('  Error: No GitNexus index found.');\n    console.log('  Run `gitnexus analyze` first to index this repository.\\n');\n    process.exitCode = 1;\n    return;\n  }\n\n  // ── Resolve LLM config (with interactive fallback) ─────────────────\n  // Save any CLI overrides immediately\n  if (options?.apiKey || options?.model || options?.baseUrl) {\n    const existing = await loadCLIConfig();\n    const updates: Record<string, string> = {};\n    if (options.apiKey) updates.apiKey = options.apiKey;\n    if (options.model) updates.model = options.model;\n    if (options.baseUrl) updates.baseUrl = options.baseUrl;\n    await saveCLIConfig({ ...existing, ...updates });\n    console.log('  Config saved to ~/.gitnexus/config.json\\n');\n  }\n\n  const savedConfig = await loadCLIConfig();\n  const hasSavedConfig = 
!!(savedConfig.apiKey && savedConfig.baseUrl);\n  const hasCLIOverrides = !!(options?.apiKey || options?.model || options?.baseUrl);\n\n  let llmConfig = await resolveLLMConfig({\n    model: options?.model,\n    baseUrl: options?.baseUrl,\n    apiKey: options?.apiKey,\n  });\n\n  // Run interactive setup if no saved config and no CLI flags provided\n  // (even if env vars exist — let user explicitly choose their provider)\n  if (!hasSavedConfig && !hasCLIOverrides) {\n    if (!process.stdin.isTTY) {\n      if (!llmConfig.apiKey) {\n        console.log('  Error: No LLM API key found.');\n        console.log('  Set OPENAI_API_KEY or GITNEXUS_API_KEY environment variable,');\n        console.log('  or pass --api-key <key>.\\n');\n        process.exitCode = 1;\n        return;\n      }\n      // Non-interactive with env var — just use it\n    } else {\n      console.log('  No LLM configured. Let\\'s set it up.\\n');\n      console.log('  Supports OpenAI, OpenRouter, or any OpenAI-compatible API.\\n');\n\n      // Provider selection\n      console.log('  [1] OpenAI (api.openai.com)');\n      console.log('  [2] OpenRouter (openrouter.ai)');\n      console.log('  [3] Custom endpoint\\n');\n\n      const choice = await prompt('  Select provider (1/2/3): ');\n\n      let baseUrl: string;\n      let defaultModel: string;\n\n      if (choice === '2') {\n        baseUrl = 'https://openrouter.ai/api/v1';\n        defaultModel = 'minimax/minimax-m2.5';\n      } else if (choice === '3') {\n        baseUrl = await prompt('  Base URL (e.g. http://localhost:11434/v1): ');\n        if (!baseUrl) {\n          console.log('\\n  No URL provided. 
Aborting.\\n');\n          process.exitCode = 1;\n          return;\n        }\n        defaultModel = 'gpt-4o-mini';\n      } else {\n        baseUrl = 'https://api.openai.com/v1';\n        defaultModel = 'gpt-4o-mini';\n      }\n\n      // Model\n      const modelInput = await prompt(`  Model (default: ${defaultModel}): `);\n      const model = modelInput || defaultModel;\n\n      // API key — pre-fill hint if env var exists\n      const envKey = process.env.GITNEXUS_API_KEY || process.env.OPENAI_API_KEY || '';\n      let key: string;\n      if (envKey) {\n        const masked = envKey.slice(0, 6) + '...' + envKey.slice(-4);\n        const useEnv = await prompt(`  Use existing env key (${masked})? (Y/n): `);\n        if (!useEnv || useEnv.toLowerCase() === 'y' || useEnv.toLowerCase() === 'yes') {\n          key = envKey;\n        } else {\n          key = await prompt('  API key: ', true);\n        }\n      } else {\n        key = await prompt('  API key: ', true);\n      }\n\n      if (!key) {\n        console.log('\\n  No key provided. Aborting.\\n');\n        process.exitCode = 1;\n        return;\n      }\n\n      // Save\n      await saveCLIConfig({ apiKey: key, baseUrl, model });\n      console.log('  Config saved to ~/.gitnexus/config.json\\n');\n\n      llmConfig = { ...llmConfig, apiKey: key, baseUrl, model };\n    }\n  }\n\n  // ── Setup progress bar with elapsed timer ──────────────────────────\n  const bar = new cliProgress.SingleBar({\n    format: '  {bar} {percentage}% | {phase}',\n    barCompleteChar: '\\u2588',\n    barIncompleteChar: '\\u2591',\n    hideCursor: true,\n    barGlue: '',\n    autopadding: true,\n    clearOnComplete: false,\n    stopOnComplete: false,\n  }, cliProgress.Presets.shades_grey);\n\n  bar.start(100, 0, { phase: 'Initializing...' 
});\n\n  const t0 = Date.now();\n  let lastPhase = '';\n  let phaseStart = t0;\n\n  // Tick elapsed time every second while stuck on the same phase\n  const elapsedTimer = setInterval(() => {\n    if (lastPhase) {\n      const elapsed = Math.round((Date.now() - phaseStart) / 1000);\n      if (elapsed >= 3) {\n        bar.update({ phase: `${lastPhase} (${elapsed}s)` });\n      }\n    }\n  }, 1000);\n\n  // ── Run generator ───────────────────────────────────────────────────\n  const wikiOptions: WikiOptions = {\n    force: options?.force,\n    model: options?.model,\n    baseUrl: options?.baseUrl,\n    concurrency: options?.concurrency ? parseInt(options.concurrency, 10) : undefined,\n  };\n\n  const generator = new WikiGenerator(\n    repoPath,\n    storagePath,\n    lbugPath,\n    llmConfig,\n    wikiOptions,\n    (phase, percent, detail) => {\n      const label = detail || phase;\n      if (label !== lastPhase) {\n        lastPhase = label;\n        phaseStart = Date.now();\n      }\n      bar.update(percent, { phase: label });\n    },\n  );\n\n  try {\n    const result = await generator.run();\n\n    clearInterval(elapsedTimer);\n    bar.update(100, { phase: 'Done' });\n    bar.stop();\n\n    const elapsed = ((Date.now() - t0) / 1000).toFixed(1);\n\n    const wikiDir = path.join(storagePath, 'wiki');\n    const viewerPath = path.join(wikiDir, 'index.html');\n\n    if (result.mode === 'up-to-date' && !options?.force) {\n      console.log('\\n  Wiki is already up to date.');\n      console.log(`  Viewer: ${viewerPath}\\n`);\n      await maybePublishGist(viewerPath, options?.gist);\n      return;\n    }\n\n    console.log(`\\n  Wiki generated successfully (${elapsed}s)\\n`);\n    console.log(`  Mode: ${result.mode}`);\n    console.log(`  Pages: ${result.pagesGenerated}`);\n    console.log(`  Output: ${wikiDir}`);\n    console.log(`  Viewer: ${viewerPath}`);\n\n    if (result.failedModules && result.failedModules.length > 0) {\n      console.log(`\\n  Failed modules 
(${result.failedModules.length}):`);\n      for (const mod of result.failedModules) {\n        console.log(`    - ${mod}`);\n      }\n      console.log('  Re-run to retry failed modules (pages will be regenerated).');\n    }\n\n    console.log('');\n\n    await maybePublishGist(viewerPath, options?.gist);\n  } catch (err: any) {\n    clearInterval(elapsedTimer);\n    bar.stop();\n\n    if (err.message?.includes('No source files')) {\n      console.log(`\\n  ${err.message}\\n`);\n    } else if (err.message?.includes('API key') || err.message?.includes('API error')) {\n      console.log(`\\n  LLM Error: ${err.message}\\n`);\n\n      // Offer to reconfigure on auth-related failures\n      const isAuthError = err.message?.includes('401') || err.message?.includes('403')\n        || err.message?.includes('502') || err.message?.includes('authenticate')\n        || err.message?.includes('Unauthorized');\n      if (isAuthError && process.stdin.isTTY) {\n        const answer = await new Promise<string>((resolve) => {\n          const rl = readline.createInterface({ input: process.stdin, output: process.stdout });\n          rl.question('  Reconfigure LLM settings? (Y/n): ', (ans) => { rl.close(); resolve(ans.trim().toLowerCase()); });\n        });\n        if (!answer || answer === 'y' || answer === 'yes') {\n          // Clear saved config so next run triggers interactive setup\n          await saveCLIConfig({});\n          console.log('  Config cleared. 
Run `gitnexus wiki` again to reconfigure.\\n');\n        }\n      }\n    } else {\n      console.log(`\\n  Error: ${err.message}\\n`);\n      if (process.env.DEBUG) {\n        console.error(err);\n      }\n    }\n    process.exitCode = 1;\n  }\n};\n\n// ─── Gist Publishing ───────────────────────────────────────────────────\n\nfunction hasGhCLI(): boolean {\n  try {\n    execSync('gh --version', { stdio: 'ignore' });\n    return true;\n  } catch {\n    return false;\n  }\n}\n\nfunction publishGist(htmlPath: string): { url: string; rawUrl: string } | null {\n  try {\n    const output = execFileSync('gh', [\n      'gist', 'create', htmlPath,\n      '--desc', 'Repository Wiki — generated by GitNexus',\n      '--public',\n    ], { encoding: 'utf-8', stdio: ['pipe', 'pipe', 'pipe'] }).trim();\n\n    // gh gist create prints the gist URL as the last line\n    const lines = output.split('\\n');\n    const gistUrl = lines.find(l => l.includes('gist.github.com')) || lines[lines.length - 1];\n\n    if (!gistUrl || !gistUrl.includes('gist.github.com')) return null;\n\n    // Build a raw viewer URL via gist.githack.com\n    // gist URL format: https://gist.github.com/{user}/{id}\n    const match = gistUrl.match(/gist\\.github\\.com\\/([^/]+)\\/([a-f0-9]+)/);\n    let rawUrl = gistUrl;\n    if (match) {\n      rawUrl = `https://gistcdn.githack.com/${match[1]}/${match[2]}/raw/index.html`;\n    }\n\n    return { url: gistUrl.trim(), rawUrl };\n  } catch {\n    return null;\n  }\n}\n\nasync function maybePublishGist(htmlPath: string, gistFlag?: boolean): Promise<void> {\n  if (gistFlag === false) return;\n\n  // Check that the HTML file exists\n  try {\n    const fs = await import('fs/promises');\n    await fs.access(htmlPath);\n  } catch {\n    return;\n  }\n\n  if (!hasGhCLI()) {\n    if (gistFlag) {\n      console.log('  GitHub CLI (gh) is not installed. 
Cannot publish gist.');\n      console.log('  Install it: https://cli.github.com\\n');\n    }\n    return;\n  }\n\n  let shouldPublish = !!gistFlag;\n\n  if (!shouldPublish && process.stdin.isTTY) {\n    const answer = await new Promise<string>((resolve) => {\n      const rl = readline.createInterface({ input: process.stdin, output: process.stdout });\n      rl.question('  Publish wiki as a GitHub Gist for easy viewing? (Y/n): ', (ans) => {\n        rl.close();\n        resolve(ans.trim().toLowerCase());\n      });\n    });\n    shouldPublish = !answer || answer === 'y' || answer === 'yes';\n  }\n\n  if (!shouldPublish) return;\n\n  console.log('\\n  Publishing to GitHub Gist...');\n  const result = publishGist(htmlPath);\n\n  if (result) {\n    console.log(`  Gist:   ${result.url}`);\n    console.log(`  Viewer: ${result.rawUrl}\\n`);\n  } else {\n    console.log('  Failed to publish gist. Make sure `gh auth login` is configured.\\n');\n  }\n}\n"
  },
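The `publishGist` helper in the wiki command above rewrites the `gist.github.com` URL into a `gistcdn.githack.com` raw-viewer URL so the generated HTML renders directly in a browser. A standalone sketch of just that rewrite (the `index.html` filename is an assumption matching the wiki viewer output name):

```typescript
// Sketch of the URL rewrite performed by publishGist: extract the gist
// owner and hex id, then point at githack's raw CDN. Falls back to the
// original URL when the input does not look like a gist link.
function toViewerUrl(gistUrl: string): string {
  const match = gistUrl.match(/gist\.github\.com\/([^/]+)\/([a-f0-9]+)/);
  if (!match) return gistUrl; // not a gist URL, return unchanged
  return `https://gistcdn.githack.com/${match[1]}/${match[2]}/raw/index.html`;
}
```

Keeping the fallback (rather than returning null) mirrors the real code's behavior of always having a usable link to print.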
  {
    "path": "gitnexus/src/config/ignore-service.ts",
    "content": "import ignore, { type Ignore } from 'ignore';\nimport fs from 'fs/promises';\nimport nodePath from 'path';\nimport type { Path } from 'path-scurry';\n\nconst DEFAULT_IGNORE_LIST = new Set([\n    // Version Control\n    '.git',\n    '.svn',\n    '.hg',\n    '.bzr',\n    \n    // IDEs & Editors\n    '.idea',\n    '.vscode',\n    '.vs',\n    '.eclipse',\n    '.settings',\n    '.DS_Store',\n    'Thumbs.db',\n  \n    // Dependencies\n    'node_modules',\n    'bower_components',\n    'jspm_packages',\n    'vendor',           // PHP/Go\n    // 'packages' removed - commonly used for monorepo source code (lerna, pnpm, yarn workspaces)\n    'venv',\n    '.venv',\n    'env',\n    '.env',\n    '__pycache__',\n    '.pytest_cache',\n    '.mypy_cache',\n    'site-packages',\n    '.tox',\n    'eggs',\n    '.eggs',\n    'lib64',\n    'parts',\n    'sdist',\n    'wheels',\n  \n    // Build Outputs\n    'dist',\n    'build',\n    'out',\n    'output',\n    'bin',\n    'obj',\n    'target',           // Java/Rust\n    '.next',\n    '.nuxt',\n    '.output',\n    '.vercel',\n    '.netlify',\n    '.serverless',\n    '_build',\n    'public/build',\n    '.parcel-cache',\n    '.turbo',\n    '.svelte-kit',\n  \n    // Test & Coverage\n    'coverage',\n    '.nyc_output',\n    'htmlcov',\n    '.coverage',\n    '__tests__',        // Often just test files\n    '__mocks__',\n    '.jest',\n    \n    // Logs & Temp\n    'logs',\n    'log',\n    'tmp',\n    'temp',\n    'cache',\n    '.cache',\n    '.tmp',\n    '.temp',\n    \n    // Generated/Compiled\n    '.generated',\n    'generated',\n    'auto-generated',\n    '.terraform',\n    '.serverless',\n    \n    // Documentation (optional - might want to keep)\n    // 'docs',\n    // 'documentation',\n    \n    // Misc\n    '.husky',\n    '.github',          // GitHub config, not code\n    '.circleci',\n    '.gitlab',\n    'fixtures',         // Test fixtures\n    'snapshots',        // Jest snapshots\n    
'__snapshots__',\n]);\n\nconst IGNORED_EXTENSIONS = new Set([\n    // Images\n    '.png', '.jpg', '.jpeg', '.gif', '.svg', '.ico', '.webp', '.bmp', '.tiff', '.tif',\n    '.psd', '.ai', '.sketch', '.fig', '.xd',\n    \n    // Archives\n    '.zip', '.tar', '.gz', '.rar', '.7z', '.bz2', '.xz', '.tgz',\n    \n    // Binary/Compiled\n    '.exe', '.dll', '.so', '.dylib', '.a', '.lib', '.o', '.obj',\n    '.class', '.jar', '.war', '.ear',\n    '.pyc', '.pyo', '.pyd',\n    '.beam',            // Erlang\n    '.wasm',            // WebAssembly - important!\n    '.node',            // Native Node addons\n    \n    // Documents\n    '.pdf', '.doc', '.docx', '.xls', '.xlsx', '.ppt', '.pptx',\n    '.odt', '.ods', '.odp',\n    \n    // Media\n    '.mp4', '.mp3', '.wav', '.mov', '.avi', '.mkv', '.flv', '.wmv',\n    '.ogg', '.webm', '.flac', '.aac', '.m4a',\n    \n    // Fonts\n    '.woff', '.woff2', '.ttf', '.eot', '.otf',\n    \n    // Databases\n    '.db', '.sqlite', '.sqlite3', '.mdb', '.accdb',\n    \n    // Minified/Bundled files\n    '.min.js', '.min.css', '.bundle.js', '.chunk.js',\n    \n    // Source maps (debug files, not source)\n    '.map',\n    \n    // Lock files (handled separately, but also here)\n    '.lock',\n    \n    // Certificates & Keys (security - don't index!)\n    '.pem', '.key', '.crt', '.cer', '.p12', '.pfx',\n    \n    // Data files (often large/binary)\n    '.csv', '.tsv', '.parquet', '.avro', '.feather',\n    '.npy', '.npz', '.pkl', '.pickle', '.h5', '.hdf5',\n    \n    // Misc binary\n    '.bin', '.dat', '.data', '.raw',\n    '.iso', '.img', '.dmg',\n]);\n\n// Files to ignore by exact name\nconst IGNORED_FILES = new Set([\n    'package-lock.json',\n    'yarn.lock',\n    'pnpm-lock.yaml',\n    'composer.lock',\n    'Gemfile.lock',\n    'poetry.lock',\n    'Cargo.lock',\n    'go.sum',\n    '.gitignore',\n    '.gitattributes',\n    '.npmrc',\n    '.yarnrc',\n    '.editorconfig',\n    '.prettierrc',\n    '.prettierignore',\n    '.eslintignore',\n    
'.dockerignore',\n    'Thumbs.db',\n    '.DS_Store',\n    'LICENSE',\n    'LICENSE.md',\n    'LICENSE.txt',\n    'CHANGELOG.md',\n    'CHANGELOG',\n    'CONTRIBUTING.md',\n    'CODE_OF_CONDUCT.md',\n    'SECURITY.md',\n    '.env',\n    '.env.local',\n    '.env.development',\n    '.env.production',\n    '.env.test',\n    '.env.example',\n]);\n\n// NOTE: Negation patterns in .gitnexusignore (e.g. `!vendor/`) cannot override\n// entries in DEFAULT_IGNORE_LIST — this is intentional. The hardcoded list protects\n// against indexing directories that are almost never source code (node_modules, .git, etc.).\n// Users who need to include such directories should remove them from the hardcoded list.\nexport const shouldIgnorePath = (filePath: string): boolean => {\n  const normalizedPath = filePath.replace(/\\\\/g, '/');\n  const parts = normalizedPath.split('/');\n  const fileName = parts[parts.length - 1];\n  const fileNameLower = fileName.toLowerCase();\n\n  // Check if any path segment is in ignore list\n  for (const part of parts) {\n    if (DEFAULT_IGNORE_LIST.has(part)) {\n      return true;\n    }\n  }\n\n  // Check exact filename matches\n  if (IGNORED_FILES.has(fileName) || IGNORED_FILES.has(fileNameLower)) {\n    return true;\n  }\n\n  // Check extension\n  const lastDotIndex = fileNameLower.lastIndexOf('.');\n  if (lastDotIndex !== -1) {\n    const ext = fileNameLower.substring(lastDotIndex);\n    if (IGNORED_EXTENSIONS.has(ext)) return true;\n\n    // Handle compound extensions like .min.js, .bundle.js\n    const secondLastDot = fileNameLower.lastIndexOf('.', lastDotIndex - 1);\n    if (secondLastDot !== -1) {\n      const compoundExt = fileNameLower.substring(secondLastDot);\n      if (IGNORED_EXTENSIONS.has(compoundExt)) return true;\n    }\n  }\n\n  // Dot files are NOT ignored wholesale: many (e.g. .babelrc, .prettierrc) are\n  // meaningful configs. Specific dot files are excluded via IGNORED_FILES above.
\n\n  // Ignore files that look like generated/bundled code\n  if (fileNameLower.includes('.bundle.') ||\n      fileNameLower.includes('.chunk.') ||\n      fileNameLower.includes('.generated.') ||\n      fileNameLower.endsWith('.d.ts')) { // TypeScript declaration files\n    return true;\n  }\n\n  return false;\n};\n\n/** Check if a directory name is in the hardcoded ignore list */\nexport const isHardcodedIgnoredDirectory = (name: string): boolean => {\n  return DEFAULT_IGNORE_LIST.has(name);\n};\n\n/**\n * Load .gitignore and .gitnexusignore rules from the repo root.\n * Returns an `ignore` instance with all patterns, or null if no files found.\n */\nexport interface IgnoreOptions {\n  /** Skip .gitignore parsing, only read .gitnexusignore. Defaults to GITNEXUS_NO_GITIGNORE env var. */\n  noGitignore?: boolean;\n}\n\nexport const loadIgnoreRules = async (\n  repoPath: string,\n  options?: IgnoreOptions\n): Promise<Ignore | null> => {\n  const ig = ignore();\n  let hasRules = false;\n\n  // Allow users to bypass .gitignore parsing (e.g. when .gitignore accidentally excludes source files)\n  const skipGitignore = options?.noGitignore ?? !!process.env.GITNEXUS_NO_GITIGNORE;\n  const filenames = skipGitignore\n    ? ['.gitnexusignore']\n    : ['.gitignore', '.gitnexusignore'];\n\n  for (const filename of filenames) {\n    try {\n      const content = await fs.readFile(nodePath.join(repoPath, filename), 'utf-8');\n      ig.add(content);\n      hasRules = true;\n    } catch (err: unknown) {\n      const code = (err as NodeJS.ErrnoException).code;\n      if (code !== 'ENOENT') {\n        console.warn(`  Warning: could not read ${filename}: ${(err as Error).message}`);\n      }\n    }\n  }\n\n  return hasRules ? 
ig : null;\n};\n\n/**\n * Create a glob-compatible ignore filter combining:\n * - .gitignore / .gitnexusignore patterns (via `ignore` package)\n * - Hardcoded DEFAULT_IGNORE_LIST, IGNORED_EXTENSIONS, IGNORED_FILES\n *\n * Returns an IgnoreLike object for glob's `ignore` option,\n * enabling directory-level pruning during traversal.\n */\nexport const createIgnoreFilter = async (repoPath: string, options?: IgnoreOptions) => {\n  const ig = await loadIgnoreRules(repoPath, options);\n\n  return {\n    ignored(p: Path): boolean {\n      // path-scurry's Path.relative() returns POSIX paths on all platforms,\n      // which is what the `ignore` package expects. No explicit normalization needed.\n      const rel = p.relative();\n      if (!rel) return false;\n      // Check .gitignore / .gitnexusignore patterns\n      if (ig && ig.ignores(rel)) return true;\n      // Fall back to hardcoded rules\n      return shouldIgnorePath(rel);\n    },\n    childrenIgnored(p: Path): boolean {\n      // Fast path: check directory name against hardcoded list.\n      // Note: dot-directories (.git, .vscode, etc.) are primarily excluded by\n      // glob's `dot: false` option in filesystem-walker.ts. This check is\n      // defense-in-depth — do not remove `dot: false` assuming this covers it.\n      if (DEFAULT_IGNORE_LIST.has(p.name)) return true;\n      // Check against .gitignore / .gitnexusignore patterns.\n      // Test both bare path and path with trailing slash to handle\n      // bare-name patterns (e.g. `local`) and dir-only patterns (e.g. `local/`).\n      if (ig) {\n        const rel = p.relative();\n        if (rel && (ig.ignores(rel) || ig.ignores(rel + '/'))) return true;\n      }\n      return false;\n    },\n  };\n};\n\n"
  },
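The extension check in `shouldIgnorePath` needs two `lastIndexOf('.')` probes because suffixes like `.min.js` span two dots and a single probe would only see `.js`. A minimal, self-contained sketch of that compound-extension logic (the small `IGNORED` set here is illustrative, not the full list):

```typescript
// Sketch of the two-probe extension match from shouldIgnorePath:
// first try the one-segment suffix ('.map'), then the two-segment
// suffix ('.min.js'), both case-insensitively.
const IGNORED = new Set(['.map', '.min.js', '.bundle.js']);

function matchesIgnoredExtension(fileName: string): boolean {
  const lower = fileName.toLowerCase();
  const lastDot = lower.lastIndexOf('.');
  if (lastDot === -1) return false;
  if (IGNORED.has(lower.substring(lastDot))) return true; // e.g. '.map'
  const secondLastDot = lower.lastIndexOf('.', lastDot - 1);
  return secondLastDot !== -1 && IGNORED.has(lower.substring(secondLastDot)); // e.g. '.min.js'
}
```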
  {
    "path": "gitnexus/src/config/supported-languages.ts",
    "content": "export enum SupportedLanguages {\n    JavaScript = 'javascript',\n    TypeScript = 'typescript',\n    Python = 'python',\n    Java = 'java',\n    C = 'c',\n    CPlusPlus = 'cpp',\n    CSharp = 'csharp',\n    Go = 'go',\n    Ruby = 'ruby',\n    Rust = 'rust',\n    PHP = 'php',\n    Kotlin = 'kotlin',\n    Swift = 'swift',\n}"
  },
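The `SupportedLanguages` enum above only names the languages. A hypothetical helper (not part of the codebase; the extension table and function name are assumptions) shows how an indexer might resolve a file path onto it:

```typescript
// Hypothetical sketch: map a file's extension to a supported language id.
// The enum subset and extension table below are illustrative assumptions.
enum SupportedLanguages {
  JavaScript = 'javascript',
  TypeScript = 'typescript',
  Python = 'python',
}

const EXT_TO_LANG: Record<string, SupportedLanguages> = {
  '.js': SupportedLanguages.JavaScript,
  '.ts': SupportedLanguages.TypeScript,
  '.py': SupportedLanguages.Python,
};

// Returns the language id string, or null for unknown/extensionless files.
function languageForFile(fileName: string): string | null {
  const dot = fileName.lastIndexOf('.');
  if (dot === -1) return null;
  return EXT_TO_LANG[fileName.slice(dot).toLowerCase()] ?? null;
}
```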
  {
    "path": "gitnexus/src/core/augmentation/engine.ts",
    "content": "/**\n * Augmentation Engine\n * \n * Lightweight, fast-path enrichment of search patterns with knowledge graph context.\n * Designed to be called from platform hooks (Claude Code PreToolUse, Cursor beforeShellExecution)\n * when an agent runs grep/glob/search.\n * \n * Performance target: <500ms cold start, <200ms warm.\n * \n * Design decisions:\n * - Uses only BM25 search (no semantic/embedding) for speed\n * - Clusters used internally for ranking, NEVER in output\n * - Output is pure relationships: callers, callees, process participation\n * - Graceful failure: any error → return empty string\n */\n\nimport path from 'path';\nimport { listRegisteredRepos } from '../../storage/repo-manager.js';\n\n/**\n * Find the best matching repo for a given working directory.\n * Matches by checking if cwd is within the repo's path.\n */\nasync function findRepoForCwd(cwd: string): Promise<{\n  name: string;\n  storagePath: string;\n  lbugPath: string;\n} | null> {\n  try {\n    const entries = await listRegisteredRepos({ validate: true });\n    const resolved = path.resolve(cwd);\n    \n    // Normalize to lowercase on Windows (drive letters can differ: D: vs d:)\n    const isWindows = process.platform === 'win32';\n    const normalizedCwd = isWindows ? resolved.toLowerCase() : resolved;\n    const sep = path.sep;\n    \n    // Find the LONGEST matching repo path (most specific match wins)\n    let bestMatch: typeof entries[0] | null = null;\n    let bestLen = 0;\n    \n    for (const entry of entries) {\n      const repoResolved = path.resolve(entry.path);\n      const normalizedRepo = isWindows ? repoResolved.toLowerCase() : repoResolved;\n      \n      // Check if cwd is inside repo OR repo is inside cwd\n      // Must match at a path separator boundary to avoid false positives\n      // (e.g. 
/projects/gitnexusv2 should NOT match /projects/gitnexus)\n      let matched = false;\n      if (normalizedCwd === normalizedRepo) {\n        matched = true;\n      } else if (normalizedCwd.startsWith(normalizedRepo + sep)) {\n        matched = true;\n      } else if (normalizedRepo.startsWith(normalizedCwd + sep)) {\n        matched = true;\n      }\n      \n      if (matched && normalizedRepo.length > bestLen) {\n        bestMatch = entry;\n        bestLen = normalizedRepo.length;\n      }\n    }\n    \n    if (!bestMatch) return null;\n    \n    return {\n      name: bestMatch.name,\n      storagePath: bestMatch.storagePath,\n      lbugPath: path.join(bestMatch.storagePath, 'lbug'),\n    };\n  } catch {\n    return null;\n  }\n}\n\n/**\n * Augment a search pattern with knowledge graph context.\n * \n * 1. BM25 search for the pattern\n * 2. For top matches, fetch callers/callees/processes\n * 3. Rank by internal cluster cohesion (not exposed)\n * 4. Format as structured text block\n * \n * Returns empty string on any error (graceful failure).\n */\nexport async function augment(pattern: string, cwd?: string): Promise<string> {\n  if (!pattern || pattern.length < 3) return '';\n  \n  const workDir = cwd || process.cwd();\n  \n  try {\n    const repo = await findRepoForCwd(workDir);\n    if (!repo) return '';\n    \n    // Lazy-load lbug adapter (skip unnecessary init)\n    const { initLbug, executeQuery, isLbugReady } = await import('../../mcp/core/lbug-adapter.js');\n    const { searchFTSFromLbug } = await import('../search/bm25-index.js');\n\n    const repoId = repo.name.toLowerCase();\n\n    // Init LadybugDB if not already\n    if (!isLbugReady(repoId)) {\n      await initLbug(repoId, repo.lbugPath);\n    }\n\n    // Step 1: BM25 search (fast, no embeddings)\n    const bm25Results = await searchFTSFromLbug(pattern, 10, repoId);\n    \n    if (bm25Results.length === 0) return '';\n    \n    // Step 2: Map BM25 file results to symbols\n    const symbolMatches: 
Array<{\n      nodeId: string;\n      name: string;\n      type: string;\n      filePath: string;\n      score: number;\n    }> = [];\n    \n    for (const result of bm25Results.slice(0, 5)) {\n      const escaped = result.filePath.replace(/'/g, \"''\");\n      try {\n        const symbols = await executeQuery(repoId, `\n          MATCH (n) WHERE n.filePath = '${escaped}'\n          AND n.name CONTAINS '${pattern.replace(/'/g, \"''\").split(/\\s+/)[0]}'\n          RETURN n.id AS id, n.name AS name, labels(n)[0] AS type, n.filePath AS filePath\n          LIMIT 3\n        `);\n        for (const sym of symbols) {\n          symbolMatches.push({\n            nodeId: sym.id || sym[0],\n            name: sym.name || sym[1],\n            type: sym.type || sym[2],\n            filePath: sym.filePath || sym[3],\n            score: result.score,\n          });\n        }\n      } catch { /* skip */ }\n    }\n    \n    if (symbolMatches.length === 0) return '';\n    \n    // Step 3: Batch-fetch callers/callees/processes/cohesion for top matches\n    // Uses batched WHERE n.id IN [...] 
queries instead of per-symbol queries\n    const uniqueSymbols = symbolMatches.slice(0, 5).filter((sym, i, arr) =>\n      arr.findIndex(s => s.nodeId === sym.nodeId) === i\n    );\n\n    if (uniqueSymbols.length === 0) return '';\n\n    const idList = uniqueSymbols.map(s => `'${s.nodeId.replace(/'/g, \"''\")}'`).join(', ');\n\n    // Batch fetch callers\n    const callersMap = new Map<string, string[]>();\n    try {\n      const rows = await executeQuery(repoId, `\n        MATCH (caller)-[:CodeRelation {type: 'CALLS'}]->(n)\n        WHERE n.id IN [${idList}]\n        RETURN n.id AS targetId, caller.name AS name\n        LIMIT 15\n      `);\n      for (const r of rows) {\n        const tid = r.targetId || r[0];\n        const name = r.name || r[1];\n        if (tid && name) {\n          if (!callersMap.has(tid)) callersMap.set(tid, []);\n          callersMap.get(tid)!.push(name);\n        }\n      }\n    } catch { /* skip */ }\n\n    // Batch fetch callees\n    const calleesMap = new Map<string, string[]>();\n    try {\n      const rows = await executeQuery(repoId, `\n        MATCH (n)-[:CodeRelation {type: 'CALLS'}]->(callee)\n        WHERE n.id IN [${idList}]\n        RETURN n.id AS sourceId, callee.name AS name\n        LIMIT 15\n      `);\n      for (const r of rows) {\n        const sid = r.sourceId || r[0];\n        const name = r.name || r[1];\n        if (sid && name) {\n          if (!calleesMap.has(sid)) calleesMap.set(sid, []);\n          calleesMap.get(sid)!.push(name);\n        }\n      }\n    } catch { /* skip */ }\n\n    // Batch fetch processes\n    const processesMap = new Map<string, string[]>();\n    try {\n      const rows = await executeQuery(repoId, `\n        MATCH (n)-[r:CodeRelation {type: 'STEP_IN_PROCESS'}]->(p:Process)\n        WHERE n.id IN [${idList}]\n        RETURN n.id AS nodeId, p.heuristicLabel AS label, r.step AS step, p.stepCount AS stepCount\n      `);\n      for (const r of rows) {\n        const nid = r.nodeId || r[0];\n       
 const label = r.label || r[1];\n        const step = r.step || r[2];\n        const stepCount = r.stepCount || r[3];\n        if (nid && label) {\n          if (!processesMap.has(nid)) processesMap.set(nid, []);\n          processesMap.get(nid)!.push(`${label} (step ${step}/${stepCount})`);\n        }\n      }\n    } catch { /* skip */ }\n\n    // Batch fetch cohesion\n    const cohesionMap = new Map<string, number>();\n    try {\n      const rows = await executeQuery(repoId, `\n        MATCH (n)-[:CodeRelation {type: 'MEMBER_OF'}]->(c:Community)\n        WHERE n.id IN [${idList}]\n        RETURN n.id AS nodeId, c.cohesion AS cohesion\n      `);\n      for (const r of rows) {\n        const nid = r.nodeId || r[0];\n        const coh = r.cohesion ?? r[1] ?? 0;\n        if (nid) cohesionMap.set(nid, coh);\n      }\n    } catch { /* skip */ }\n\n    // Assemble enriched results\n    const enriched: Array<{\n      name: string;\n      filePath: string;\n      callers: string[];\n      callees: string[];\n      processes: string[];\n      cohesion: number;\n    }> = [];\n\n    for (const sym of uniqueSymbols) {\n      enriched.push({\n        name: sym.name,\n        filePath: sym.filePath,\n        callers: (callersMap.get(sym.nodeId) || []).slice(0, 3),\n        callees: (calleesMap.get(sym.nodeId) || []).slice(0, 3),\n        processes: processesMap.get(sym.nodeId) || [],\n        cohesion: cohesionMap.get(sym.nodeId) || 0,\n      });\n    }\n    \n    if (enriched.length === 0) return '';\n    \n    // Step 4: Rank by cohesion (internal signal) and format\n    enriched.sort((a, b) => b.cohesion - a.cohesion);\n    \n    const lines: string[] = [`[GitNexus] ${enriched.length} related symbols found:`, ''];\n    \n    for (const item of enriched) {\n      lines.push(`${item.name} (${item.filePath})`);\n      if (item.callers.length > 0) {\n        lines.push(`  Called by: ${item.callers.join(', ')}`);\n      }\n      if (item.callees.length > 0) {\n        
lines.push(`  Calls: ${item.callees.join(', ')}`);\n      }\n      if (item.processes.length > 0) {\n        lines.push(`  Flows: ${item.processes.join(', ')}`);\n      }\n      lines.push('');\n    }\n    \n    return lines.join('\\n').trim();\n  } catch {\n    // Graceful failure — never break the original tool\n    return '';\n  }\n}\n"
  },
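`findRepoForCwd` above matches the working directory against registered repos only at path-separator boundaries (so `/projects/gitnexusv2` never matches `/projects/gitnexus`) and picks the longest, most specific repo path. A dependency-free sketch of just that matching rule (POSIX separators assumed for illustration; the real code also normalizes case on Windows):

```typescript
// Sketch of the boundary-safe longest-match rule from findRepoForCwd:
// a repo matches when cwd equals it, cwd is inside it, or it is inside
// cwd, always checked at a '/' boundary; the longest match wins.
function bestRepoMatch(cwd: string, repoPaths: string[]): string | null {
  let best: string | null = null;
  for (const repo of repoPaths) {
    const matched =
      cwd === repo ||
      cwd.startsWith(repo + '/') || // cwd inside repo
      repo.startsWith(cwd + '/');   // repo inside cwd
    if (matched && (best === null || repo.length > best.length)) best = repo;
  }
  return best;
}
```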
  {
    "path": "gitnexus/src/core/embeddings/embedder.ts",
    "content": "/**\n * Embedder Module\n *\n * Singleton factory for transformers.js embedding pipeline.\n * Handles model loading, caching, and both single and batch embedding operations.\n *\n * Uses snowflake-arctic-embed-xs by default (22M params, 384 dims, ~90MB)\n */\n\n// Suppress ONNX Runtime native warnings (e.g. VerifyEachNodeIsAssignedToAnEp)\n// Must be set BEFORE onnxruntime-node is imported by transformers.js\n// Level 3 = Error only (skips Warning/Info)\nif (!process.env.ORT_LOG_LEVEL) {\n  process.env.ORT_LOG_LEVEL = '3';\n}\n\nimport { pipeline, env, type FeatureExtractionPipeline } from '@huggingface/transformers';\nimport { existsSync } from 'fs';\nimport { execFileSync } from 'child_process';\nimport { join } from 'path';\nimport { DEFAULT_EMBEDDING_CONFIG, type EmbeddingConfig, type ModelProgress } from './types.js';\n\n/**\n * Check whether CUDA libraries are actually available on this system.\n * ONNX Runtime's native layer crashes (uncatchable) if we attempt CUDA\n * without the required shared libraries, so we probe first.\n *\n * Checks the dynamic linker cache (ldconfig) which covers all architectures\n * and install paths, then falls back to CUDA_PATH / LD_LIBRARY_PATH env vars.\n */\nfunction isCudaAvailable(): boolean {\n  // Primary: query the dynamic linker cache — covers all architectures,\n  // distro layouts, and custom install paths registered with ldconfig\n  try {\n    const out = execFileSync('ldconfig', ['-p'], { timeout: 3000, encoding: 'utf-8' });\n    if (out.includes('libcublasLt.so.12')) return true;\n  } catch {\n    // ldconfig not available (e.g. 
non-standard container)\n  }\n\n  // Fallback: check CUDA_PATH and LD_LIBRARY_PATH for environments where\n  // ldconfig doesn't know about the CUDA install (conda, manual /opt/cuda, etc.)\n  for (const envVar of ['CUDA_PATH', 'LD_LIBRARY_PATH']) {\n    const val = process.env[envVar];\n    if (!val) continue;\n    for (const dir of val.split(':').filter(Boolean)) {\n      if (existsSync(join(dir, 'lib64', 'libcublasLt.so.12')) ||\n          existsSync(join(dir, 'lib', 'libcublasLt.so.12')) ||\n          existsSync(join(dir, 'libcublasLt.so.12'))) return true;\n    }\n  }\n\n  return false;\n}\n\n// Module-level state for singleton pattern\nlet embedderInstance: FeatureExtractionPipeline | null = null;\nlet isInitializing = false;\nlet initPromise: Promise<FeatureExtractionPipeline> | null = null;\nlet currentDevice: 'dml' | 'cuda' | 'cpu' | 'wasm' | null = null;\n\n/**\n * Progress callback type for model loading\n */\nexport type ModelProgressCallback = (progress: ModelProgress) => void;\n\n/**\n * Get the current device being used for inference\n */\nexport const getCurrentDevice = (): 'dml' | 'cuda' | 'cpu' | 'wasm' | null => currentDevice;\n\n/**\n * Initialize the embedding model\n * Uses singleton pattern - only loads once, subsequent calls return cached instance\n * \n * @param onProgress - Optional callback for model download progress\n * @param config - Optional configuration override\n * @param forceDevice - Force a specific device\n * @returns Promise resolving to the embedder pipeline\n */\nexport const initEmbedder = async (\n  onProgress?: ModelProgressCallback,\n  config: Partial<EmbeddingConfig> = {},\n  forceDevice?: 'dml' | 'cuda' | 'cpu' | 'wasm'\n): Promise<FeatureExtractionPipeline> => {\n  // Return existing instance if available\n  if (embedderInstance) {\n    return embedderInstance;\n  }\n\n  // If already initializing, wait for that promise\n  if (isInitializing && initPromise) {\n    return initPromise;\n  }\n\n  isInitializing = true;\n 
 \n  const finalConfig = { ...DEFAULT_EMBEDDING_CONFIG, ...config };\n  // On Windows, use DirectML for GPU acceleration (via DirectX12)\n  // CUDA is only available on Linux x64 with onnxruntime-node\n  // Probe for CUDA first — ONNX Runtime crashes (uncatchable native error)\n  // if we attempt CUDA without the required shared libraries\n  const isWindows = process.platform === 'win32';\n  const gpuDevice = isWindows ? 'dml' : (isCudaAvailable() ? 'cuda' : 'cpu');\n  let requestedDevice = forceDevice || (finalConfig.device === 'auto' ? gpuDevice : finalConfig.device);\n\n  initPromise = (async () => {\n    try {\n      // Configure transformers.js environment\n      env.allowLocalModels = false;\n      \n      const isDev = process.env.NODE_ENV === 'development';\n      if (isDev) {\n        console.log(`🧠 Loading embedding model: ${finalConfig.modelId}`);\n      }\n\n      const progressCallback = onProgress ? (data: any) => {\n        const progress: ModelProgress = {\n          status: data.status || 'progress',\n          file: data.file,\n          progress: data.progress,\n          loaded: data.loaded,\n          total: data.total,\n        };\n        onProgress(progress);\n      } : undefined;\n\n      // Try GPU first if auto, fall back to CPU\n      // Windows: dml (DirectML/DirectX12), Linux: cuda\n      const devicesToTry: Array<'dml' | 'cuda' | 'cpu' | 'wasm'> = \n        (requestedDevice === 'dml' || requestedDevice === 'cuda') \n          ? 
[requestedDevice, 'cpu'] \n          : [requestedDevice as 'cpu' | 'wasm'];\n\n      for (const device of devicesToTry) {\n        try {\n          if (isDev && device === 'dml') {\n            console.log('🔧 Trying DirectML (DirectX12) GPU backend...');\n          } else if (isDev && device === 'cuda') {\n            console.log('🔧 Trying CUDA GPU backend...');\n          } else if (isDev && device === 'cpu') {\n            console.log('🔧 Using CPU backend...');\n          } else if (isDev && device === 'wasm') {\n            console.log('🔧 Using WASM backend (slower)...');\n          }\n\n          embedderInstance = await (pipeline as any)(\n            'feature-extraction',\n            finalConfig.modelId,\n            {\n              device: device,\n              dtype: 'fp32',\n              progress_callback: progressCallback,\n              session_options: { logSeverityLevel: 3 },\n            }\n          );\n          currentDevice = device;\n\n          if (isDev) {\n            const label = device === 'dml' ? 'GPU (DirectML/DirectX12)' \n                        : device === 'cuda' ? 'GPU (CUDA)' \n                        : device.toUpperCase();\n            console.log(`✅ Using ${label} backend`);\n            console.log('✅ Embedding model loaded successfully');\n          }\n\n          return embedderInstance!;\n        } catch (deviceError) {\n          if (isDev && (device === 'cuda' || device === 'dml')) {\n            const gpuType = device === 'dml' ? 
'DirectML' : 'CUDA';\n            console.log(`⚠️  ${gpuType} not available, falling back to CPU...`);\n          }\n          // Continue to next device in list\n          if (device === devicesToTry[devicesToTry.length - 1]) {\n            throw deviceError; // Last device failed, propagate error\n          }\n        }\n      }\n\n      throw new Error('No suitable device found for embedding model');\n    } catch (error) {\n      isInitializing = false;\n      initPromise = null;\n      embedderInstance = null;\n      throw error;\n    } finally {\n      isInitializing = false;\n    }\n  })();\n\n  return initPromise;\n};\n\n/**\n * Check if the embedder is initialized and ready\n */\nexport const isEmbedderReady = (): boolean => {\n  return embedderInstance !== null;\n};\n\n/**\n * Get the embedder instance (throws if not initialized)\n */\nexport const getEmbedder = (): FeatureExtractionPipeline => {\n  if (!embedderInstance) {\n    throw new Error('Embedder not initialized. Call initEmbedder() first.');\n  }\n  return embedderInstance;\n};\n\n/**\n * Embed a single text string\n * \n * @param text - Text to embed\n * @returns Float32Array of embedding vector (384 dimensions)\n */\nexport const embedText = async (text: string): Promise<Float32Array> => {\n  const embedder = getEmbedder();\n  \n  const result = await embedder(text, {\n    pooling: 'mean',\n    normalize: true,\n  });\n  \n  // Result is a Tensor, convert to Float32Array\n  return new Float32Array(result.data as ArrayLike<number>);\n};\n\n/**\n * Embed multiple texts in a single batch\n * More efficient than calling embedText multiple times\n * \n * @param texts - Array of texts to embed\n * @returns Array of Float32Array embedding vectors\n */\nexport const embedBatch = async (texts: string[]): Promise<Float32Array[]> => {\n  if (texts.length === 0) {\n    return [];\n  }\n\n  const embedder = getEmbedder();\n  \n  // Process batch\n  const result = await embedder(texts, {\n    pooling: 
'mean',\n    normalize: true,\n  });\n  \n  // Result shape is [batch_size, dimensions]\n  // Need to split into individual vectors\n  const data = result.data as ArrayLike<number>;\n  const dimensions = DEFAULT_EMBEDDING_CONFIG.dimensions;\n  const embeddings: Float32Array[] = [];\n  \n  for (let i = 0; i < texts.length; i++) {\n    const start = i * dimensions;\n    const end = start + dimensions;\n    embeddings.push(new Float32Array(Array.prototype.slice.call(data, start, end)));\n  }\n  \n  return embeddings;\n};\n\n/**\n * Convert Float32Array to regular number array (for LadybugDB storage)\n */\nexport const embeddingToArray = (embedding: Float32Array): number[] => {\n  return Array.from(embedding);\n};\n\n/**\n * Cleanup the embedder (free memory)\n * Call this when done with embeddings\n */\nexport const disposeEmbedder = async (): Promise<void> => {\n  if (embedderInstance) {\n    // transformers.js pipelines may have a dispose method\n    try {\n      if ('dispose' in embedderInstance && typeof embedderInstance.dispose === 'function') {\n        await embedderInstance.dispose();\n      }\n    } catch {\n      // Ignore disposal errors\n    }\n    embedderInstance = null;\n    initPromise = null;\n  }\n};\n\n"
  },
  {
    "path": "gitnexus/src/core/embeddings/embedding-pipeline.ts",
    "content": "/**\n * Embedding Pipeline Module\n * \n * Orchestrates the background embedding process:\n * 1. Query embeddable nodes from LadybugDB\n * 2. Generate text representations\n * 3. Batch embed using transformers.js\n * 4. Update LadybugDB with embeddings\n * 5. Create vector index for semantic search\n */\n\nimport { initEmbedder, embedBatch, embedText, embeddingToArray, isEmbedderReady } from './embedder.js';\nimport { generateBatchEmbeddingTexts, generateEmbeddingText } from './text-generator.js';\nimport {\n  type EmbeddingProgress,\n  type EmbeddingConfig,\n  type EmbeddableNode,\n  type SemanticSearchResult,\n  type ModelProgress,\n  DEFAULT_EMBEDDING_CONFIG,\n  EMBEDDABLE_LABELS,\n} from './types.js';\n\nconst isDev = process.env.NODE_ENV === 'development';\n\n/**\n * Progress callback type\n */\nexport type EmbeddingProgressCallback = (progress: EmbeddingProgress) => void;\n\n/**\n * Query all embeddable nodes from LadybugDB\n * Uses table-specific queries (File has different schema than code elements)\n */\nconst queryEmbeddableNodes = async (\n  executeQuery: (cypher: string) => Promise<any[]>\n): Promise<EmbeddableNode[]> => {\n  const allNodes: EmbeddableNode[] = [];\n  \n  // Query each embeddable table with table-specific columns\n  for (const label of EMBEDDABLE_LABELS) {\n    try {\n      let query: string;\n      \n      if (label === 'File') {\n        // File nodes don't have startLine/endLine\n        query = `\n          MATCH (n:File)\n          RETURN n.id AS id, n.name AS name, 'File' AS label, \n                 n.filePath AS filePath, n.content AS content\n        `;\n      } else {\n        // Code elements have startLine/endLine\n        query = `\n          MATCH (n:${label})\n          RETURN n.id AS id, n.name AS name, '${label}' AS label, \n                 n.filePath AS filePath, n.content AS content,\n                 n.startLine AS startLine, n.endLine AS endLine\n        `;\n      }\n      \n      const rows = await 
executeQuery(query);\n      for (const row of rows) {\n        allNodes.push({\n          id: row.id ?? row[0],\n          name: row.name ?? row[1],\n          label: row.label ?? row[2],\n          filePath: row.filePath ?? row[3],\n          content: row.content ?? row[4] ?? '',\n          startLine: row.startLine ?? row[5],\n          endLine: row.endLine ?? row[6],\n        });\n      }\n    } catch (error) {\n      // Table might not exist or be empty, continue\n      if (isDev) {\n        console.warn(`Query for ${label} nodes failed:`, error);\n      }\n    }\n  }\n\n  return allNodes;\n};\n\n/**\n * Batch INSERT embeddings into separate CodeEmbedding table\n * Using a separate lightweight table avoids copy-on-write overhead\n * that occurs when UPDATEing nodes with large content fields\n */\nconst batchInsertEmbeddings = async (\n  executeWithReusedStatement: (\n    cypher: string,\n    paramsList: Array<Record<string, any>>\n  ) => Promise<void>,\n  updates: Array<{ id: string; embedding: number[] }>\n): Promise<void> => {\n  // INSERT into separate embedding table - much more memory efficient!\n  const cypher = `CREATE (e:CodeEmbedding {nodeId: $nodeId, embedding: $embedding})`;\n  const paramsList = updates.map(u => ({ nodeId: u.id, embedding: u.embedding }));\n  await executeWithReusedStatement(cypher, paramsList);\n};\n\n/**\n * Create the vector index for semantic search\n * Now indexes the separate CodeEmbedding table\n */\nlet vectorExtensionLoaded = false;\n\nconst createVectorIndex = async (\n  executeQuery: (cypher: string) => Promise<any[]>\n): Promise<void> => {\n  // LadybugDB v0.15+ requires explicit VECTOR extension loading (once per session)\n  if (!vectorExtensionLoaded) {\n    try {\n      await executeQuery('INSTALL VECTOR');\n      await executeQuery('LOAD EXTENSION VECTOR');\n      vectorExtensionLoaded = true;\n    } catch {\n      // Extension may already be loaded — CREATE_VECTOR_INDEX will fail clearly if not\n      
vectorExtensionLoaded = true;\n    }\n  }\n\n  const cypher = `\n    CALL CREATE_VECTOR_INDEX('CodeEmbedding', 'code_embedding_idx', 'embedding', metric := 'cosine')\n  `;\n\n  try {\n    await executeQuery(cypher);\n  } catch (error) {\n    // Index might already exist\n    if (isDev) {\n      console.warn('Vector index creation warning:', error);\n    }\n  }\n};\n\n/**\n * Run the embedding pipeline\n * \n * @param executeQuery - Function to execute Cypher queries against LadybugDB\n * @param executeWithReusedStatement - Function to execute with reused prepared statement\n * @param onProgress - Callback for progress updates\n * @param config - Optional configuration override\n * @param skipNodeIds - Optional set of node IDs that already have embeddings (incremental mode)\n */\nexport const runEmbeddingPipeline = async (\n  executeQuery: (cypher: string) => Promise<any[]>,\n  executeWithReusedStatement: (cypher: string, paramsList: Array<Record<string, any>>) => Promise<void>,\n  onProgress: EmbeddingProgressCallback,\n  config: Partial<EmbeddingConfig> = {},\n  skipNodeIds?: Set<string>,\n): Promise<void> => {\n  const finalConfig = { ...DEFAULT_EMBEDDING_CONFIG, ...config };\n\n  try {\n    // Phase 1: Load embedding model\n    onProgress({\n      phase: 'loading-model',\n      percent: 0,\n      modelDownloadPercent: 0,\n    });\n\n    await initEmbedder((modelProgress: ModelProgress) => {\n      const downloadPercent = modelProgress.progress ?? 
0;\n      onProgress({\n        phase: 'loading-model',\n        percent: Math.round(downloadPercent * 0.2),\n        modelDownloadPercent: downloadPercent,\n      });\n    }, finalConfig);\n\n    onProgress({\n      phase: 'loading-model',\n      percent: 20,\n      modelDownloadPercent: 100,\n    });\n\n    if (isDev) {\n      console.log('🔍 Querying embeddable nodes...');\n    }\n\n    // Phase 2: Query embeddable nodes\n    let nodes = await queryEmbeddableNodes(executeQuery);\n\n    // Incremental mode: filter out nodes that already have embeddings\n    if (skipNodeIds && skipNodeIds.size > 0) {\n      const beforeCount = nodes.length;\n      nodes = nodes.filter(n => !skipNodeIds.has(n.id));\n      if (isDev) {\n        console.log(`📦 Incremental embeddings: ${beforeCount} total, ${skipNodeIds.size} cached, ${nodes.length} to embed`);\n      }\n    }\n\n    const totalNodes = nodes.length;\n\n    if (isDev) {\n      console.log(`📊 Found ${totalNodes} embeddable nodes`);\n    }\n\n    if (totalNodes === 0) {\n      onProgress({\n        phase: 'ready',\n        percent: 100,\n        nodesProcessed: 0,\n        totalNodes: 0,\n      });\n      return;\n    }\n\n    // Phase 3: Batch embed nodes\n    const batchSize = finalConfig.batchSize;\n    const totalBatches = Math.ceil(totalNodes / batchSize);\n    let processedNodes = 0;\n\n    onProgress({\n      phase: 'embedding',\n      percent: 20,\n      nodesProcessed: 0,\n      totalNodes,\n      currentBatch: 0,\n      totalBatches,\n    });\n\n    for (let batchIndex = 0; batchIndex < totalBatches; batchIndex++) {\n      const start = batchIndex * batchSize;\n      const end = Math.min(start + batchSize, totalNodes);\n      const batch = nodes.slice(start, end);\n\n      // Generate texts for this batch\n      const texts = generateBatchEmbeddingTexts(batch, finalConfig);\n\n      // Embed the batch\n      const embeddings = await embedBatch(texts);\n\n      // Update LadybugDB with embeddings\n      const 
updates = batch.map((node, i) => ({\n        id: node.id,\n        embedding: embeddingToArray(embeddings[i]),\n      }));\n\n      await batchInsertEmbeddings(executeWithReusedStatement, updates);\n\n      processedNodes += batch.length;\n\n      // Report progress (20-90% for embedding phase)\n      const embeddingProgress = 20 + ((processedNodes / totalNodes) * 70);\n      onProgress({\n        phase: 'embedding',\n        percent: Math.round(embeddingProgress),\n        nodesProcessed: processedNodes,\n        totalNodes,\n        currentBatch: batchIndex + 1,\n        totalBatches,\n      });\n    }\n\n    // Phase 4: Create vector index\n    onProgress({\n      phase: 'indexing',\n      percent: 90,\n      nodesProcessed: totalNodes,\n      totalNodes,\n    });\n\n    if (isDev) {\n      console.log('📇 Creating vector index...');\n    }\n\n    await createVectorIndex(executeQuery);\n\n    // Complete\n    onProgress({\n      phase: 'ready',\n      percent: 100,\n      nodesProcessed: totalNodes,\n      totalNodes,\n    });\n\n    if (isDev) {\n      console.log('✅ Embedding pipeline complete!');\n    }\n  } catch (error) {\n    const errorMessage = error instanceof Error ? 
error.message : 'Unknown error';\n    \n    if (isDev) {\n      console.error('❌ Embedding pipeline error:', error);\n    }\n\n    onProgress({\n      phase: 'error',\n      percent: 0,\n      error: errorMessage,\n    });\n\n    throw error;\n  }\n};\n\n/**\n * Perform semantic search using the vector index\n * \n * Uses CodeEmbedding table and queries each node table to get metadata\n * \n * @param executeQuery - Function to execute Cypher queries\n * @param query - Search query text\n * @param k - Number of results to return (default: 10)\n * @param maxDistance - Maximum distance threshold (default: 0.5)\n * @returns Array of search results ordered by relevance\n */\nexport const semanticSearch = async (\n  executeQuery: (cypher: string) => Promise<any[]>,\n  query: string,\n  k: number = 10,\n  maxDistance: number = 0.5\n): Promise<SemanticSearchResult[]> => {\n  if (!isEmbedderReady()) {\n    throw new Error('Embedding model not initialized. Run embedding pipeline first.');\n  }\n\n  // Embed the query\n  const queryEmbedding = await embedText(query);\n  const queryVec = embeddingToArray(queryEmbedding);\n  const queryVecStr = `[${queryVec.join(',')}]`;\n\n  // Query the vector index on CodeEmbedding to get nodeIds and distances\n  const vectorQuery = `\n    CALL QUERY_VECTOR_INDEX('CodeEmbedding', 'code_embedding_idx', \n      CAST(${queryVecStr} AS FLOAT[384]), ${k})\n    YIELD node AS emb, distance\n    WITH emb, distance\n    WHERE distance < ${maxDistance}\n    RETURN emb.nodeId AS nodeId, distance\n    ORDER BY distance\n  `;\n\n  const embResults = await executeQuery(vectorQuery);\n  \n  if (embResults.length === 0) {\n    return [];\n  }\n\n  // Group results by label for batched metadata queries\n  const byLabel = new Map<string, Array<{ nodeId: string; distance: number }>>();\n  for (const embRow of embResults) {\n    const nodeId = embRow.nodeId ?? embRow[0];\n    const distance = embRow.distance ?? 
embRow[1];\n    const labelEndIdx = nodeId.indexOf(':');\n    const label = labelEndIdx > 0 ? nodeId.substring(0, labelEndIdx) : 'Unknown';\n    if (!byLabel.has(label)) byLabel.set(label, []);\n    byLabel.get(label)!.push({ nodeId, distance });\n  }\n\n  // Batch-fetch metadata per label\n  const results: SemanticSearchResult[] = [];\n\n  for (const [label, items] of byLabel) {\n    const idList = items.map(i => `'${i.nodeId.replace(/'/g, \"''\")}'`).join(', ');\n    try {\n      let nodeQuery: string;\n      if (label === 'File') {\n        nodeQuery = `\n          MATCH (n:File) WHERE n.id IN [${idList}]\n          RETURN n.id AS id, n.name AS name, n.filePath AS filePath\n        `;\n      } else {\n        nodeQuery = `\n          MATCH (n:${label}) WHERE n.id IN [${idList}]\n          RETURN n.id AS id, n.name AS name, n.filePath AS filePath,\n                 n.startLine AS startLine, n.endLine AS endLine\n        `;\n      }\n      const nodeRows = await executeQuery(nodeQuery);\n      const rowMap = new Map<string, any>();\n      for (const row of nodeRows) {\n        const id = row.id ?? row[0];\n        rowMap.set(id, row);\n      }\n      for (const item of items) {\n        const nodeRow = rowMap.get(item.nodeId);\n        if (nodeRow) {\n          results.push({\n            nodeId: item.nodeId,\n            name: nodeRow.name ?? nodeRow[1] ?? '',\n            label,\n            filePath: nodeRow.filePath ?? nodeRow[2] ?? '',\n            distance: item.distance,\n            startLine: label !== 'File' ? (nodeRow.startLine ?? nodeRow[3]) : undefined,\n            endLine: label !== 'File' ? (nodeRow.endLine ?? 
nodeRow[4]) : undefined,\n          });\n        }\n      }\n    } catch {\n      // Table might not exist, skip\n    }\n  }\n\n  // Re-sort by distance since batch queries may have mixed order\n  results.sort((a, b) => a.distance - b.distance);\n\n  return results;\n};\n\n/**\n * Semantic search with graph expansion (flattened results)\n * \n * Note: With multi-table schema, graph traversal is simplified.\n * Returns semantic matches with their metadata.\n * For full graph traversal, use execute_vector_cypher tool directly.\n * \n * @param executeQuery - Function to execute Cypher queries\n * @param query - Search query text\n * @param k - Number of initial semantic matches (default: 5)\n * @param _hops - Unused (kept for API compatibility).\n * @returns Semantic matches with metadata\n */\nexport const semanticSearchWithContext = async (\n  executeQuery: (cypher: string) => Promise<any[]>,\n  query: string,\n  k: number = 5,\n  _hops: number = 1\n): Promise<any[]> => {\n  // For multi-table schema, just return semantic search results\n  // Graph traversal is complex with separate tables - use execute_vector_cypher instead\n  const results = await semanticSearch(executeQuery, query, k, 0.5);\n  \n  return results.map(r => ({\n    matchId: r.nodeId,\n    matchName: r.name,\n    matchLabel: r.label,\n    matchPath: r.filePath,\n    distance: r.distance,\n    connectedId: null,\n    connectedName: null,\n    connectedLabel: null,\n    relationType: null,\n  }));\n};\n\n"
  },
  {
    "path": "gitnexus/src/core/embeddings/index.ts",
    "content": "/**\n * Embeddings Module\n * \n * Re-exports for the embedding pipeline system.\n */\n\nexport * from './types.js';\nexport * from './embedder.js';\nexport * from './text-generator.js';\nexport * from './embedding-pipeline.js';\n\n"
  },
  {
    "path": "gitnexus/src/core/embeddings/text-generator.ts",
    "content": "/**\n * Text Generator Module\n * \n * Pure functions to generate embedding text from code nodes.\n * Combines node metadata with code snippets for semantic matching.\n */\n\nimport type { EmbeddableNode, EmbeddingConfig } from './types.js';\nimport { DEFAULT_EMBEDDING_CONFIG } from './types.js';\n\n/**\n * Extract the filename from a file path\n */\nconst getFileName = (filePath: string): string => {\n  const parts = filePath.split('/');\n  return parts[parts.length - 1] || filePath;\n};\n\n/**\n * Extract the directory path from a file path\n */\nconst getDirectory = (filePath: string): string => {\n  const parts = filePath.split('/');\n  parts.pop();\n  return parts.join('/') || '';\n};\n\n/**\n * Truncate content to max length, preserving word boundaries\n */\nconst truncateContent = (content: string, maxLength: number): string => {\n  if (content.length <= maxLength) {\n    return content;\n  }\n  \n  // Find last space before maxLength to avoid cutting words\n  const truncated = content.slice(0, maxLength);\n  const lastSpace = truncated.lastIndexOf(' ');\n  \n  if (lastSpace > maxLength * 0.8) {\n    return truncated.slice(0, lastSpace) + '...';\n  }\n  \n  return truncated + '...';\n};\n\n/**\n * Clean code content for embedding\n * Removes excessive whitespace while preserving structure\n */\nconst cleanContent = (content: string): string => {\n  return content\n    // Normalize line endings\n    .replace(/\\r\\n/g, '\\n')\n    // Remove excessive blank lines (more than 2)\n    .replace(/\\n{3,}/g, '\\n\\n')\n    // Trim each line\n    .split('\\n')\n    .map(line => line.trimEnd())\n    .join('\\n')\n    .trim();\n};\n\n/**\n * Generate embedding text for a Function node\n */\nconst generateFunctionText = (\n  node: EmbeddableNode,\n  maxSnippetLength: number\n): string => {\n  const parts: string[] = [\n    `Function: ${node.name}`,\n    `File: ${getFileName(node.filePath)}`,\n  ];\n\n  const dir = getDirectory(node.filePath);\n  if (dir) 
{\n    parts.push(`Directory: ${dir}`);\n  }\n\n  if (node.content) {\n    const cleanedContent = cleanContent(node.content);\n    const snippet = truncateContent(cleanedContent, maxSnippetLength);\n    parts.push('', snippet);\n  }\n\n  return parts.join('\\n');\n};\n\n/**\n * Generate embedding text for a Class node\n */\nconst generateClassText = (\n  node: EmbeddableNode,\n  maxSnippetLength: number\n): string => {\n  const parts: string[] = [\n    `Class: ${node.name}`,\n    `File: ${getFileName(node.filePath)}`,\n  ];\n\n  const dir = getDirectory(node.filePath);\n  if (dir) {\n    parts.push(`Directory: ${dir}`);\n  }\n\n  if (node.content) {\n    const cleanedContent = cleanContent(node.content);\n    const snippet = truncateContent(cleanedContent, maxSnippetLength);\n    parts.push('', snippet);\n  }\n\n  return parts.join('\\n');\n};\n\n/**\n * Generate embedding text for a Method node\n */\nconst generateMethodText = (\n  node: EmbeddableNode,\n  maxSnippetLength: number\n): string => {\n  const parts: string[] = [\n    `Method: ${node.name}`,\n    `File: ${getFileName(node.filePath)}`,\n  ];\n\n  const dir = getDirectory(node.filePath);\n  if (dir) {\n    parts.push(`Directory: ${dir}`);\n  }\n\n  if (node.content) {\n    const cleanedContent = cleanContent(node.content);\n    const snippet = truncateContent(cleanedContent, maxSnippetLength);\n    parts.push('', snippet);\n  }\n\n  return parts.join('\\n');\n};\n\n/**\n * Generate embedding text for an Interface node\n */\nconst generateInterfaceText = (\n  node: EmbeddableNode,\n  maxSnippetLength: number\n): string => {\n  const parts: string[] = [\n    `Interface: ${node.name}`,\n    `File: ${getFileName(node.filePath)}`,\n  ];\n\n  const dir = getDirectory(node.filePath);\n  if (dir) {\n    parts.push(`Directory: ${dir}`);\n  }\n\n  if (node.content) {\n    const cleanedContent = cleanContent(node.content);\n    const snippet = truncateContent(cleanedContent, maxSnippetLength);\n    parts.push('', 
snippet);\n  }\n\n  return parts.join('\\n');\n};\n\n/**\n * Generate embedding text for a File node\n * Uses file name and first N characters of content\n */\nconst generateFileText = (\n  node: EmbeddableNode,\n  maxSnippetLength: number\n): string => {\n  const parts: string[] = [\n    `File: ${node.name}`,\n    `Path: ${node.filePath}`,\n  ];\n\n  if (node.content) {\n    const cleanedContent = cleanContent(node.content);\n    // For files, use a shorter snippet since they can be very long\n    const snippet = truncateContent(cleanedContent, Math.min(maxSnippetLength, 300));\n    parts.push('', snippet);\n  }\n\n  return parts.join('\\n');\n};\n\n/**\n * Generate embedding text for any embeddable node\n * Dispatches to the appropriate generator based on node label\n * \n * @param node - The node to generate text for\n * @param config - Optional configuration for max snippet length\n * @returns Text suitable for embedding\n */\nexport const generateEmbeddingText = (\n  node: EmbeddableNode,\n  config: Partial<EmbeddingConfig> = {}\n): string => {\n  const maxSnippetLength = config.maxSnippetLength ?? 
DEFAULT_EMBEDDING_CONFIG.maxSnippetLength;\n\n  switch (node.label) {\n    case 'Function':\n      return generateFunctionText(node, maxSnippetLength);\n    case 'Class':\n      return generateClassText(node, maxSnippetLength);\n    case 'Method':\n      return generateMethodText(node, maxSnippetLength);\n    case 'Interface':\n      return generateInterfaceText(node, maxSnippetLength);\n    case 'File':\n      return generateFileText(node, maxSnippetLength);\n    default:\n      // Fallback for any other embeddable type\n      return `${node.label}: ${node.name}\\nPath: ${node.filePath}`;\n  }\n};\n\n/**\n * Generate embedding texts for a batch of nodes\n * \n * @param nodes - Array of nodes to generate text for\n * @param config - Optional configuration\n * @returns Array of texts in the same order as input nodes\n */\nexport const generateBatchEmbeddingTexts = (\n  nodes: EmbeddableNode[],\n  config: Partial<EmbeddingConfig> = {}\n): string[] => {\n  return nodes.map(node => generateEmbeddingText(node, config));\n};\n\n"
  },
  {
    "path": "gitnexus/src/core/embeddings/types.ts",
    "content": "/**\n * Embedding Pipeline Types\n * \n * Type definitions for the embedding generation and semantic search system.\n */\n\n/**\n * Node labels that should be embedded for semantic search\n * These are code elements that benefit from semantic matching\n */\nexport const EMBEDDABLE_LABELS = [\n  'Function',\n  'Class', \n  'Method',\n  'Interface',\n  'File',\n] as const;\n\nexport type EmbeddableLabel = typeof EMBEDDABLE_LABELS[number];\n\n/**\n * Check if a label should be embedded\n */\nexport const isEmbeddableLabel = (label: string): label is EmbeddableLabel =>\n  EMBEDDABLE_LABELS.includes(label as EmbeddableLabel);\n\n/**\n * Embedding pipeline phases\n */\nexport type EmbeddingPhase = \n  | 'idle'\n  | 'loading-model'\n  | 'embedding'\n  | 'indexing'\n  | 'ready'\n  | 'error';\n\n/**\n * Progress information for the embedding pipeline\n */\nexport interface EmbeddingProgress {\n  phase: EmbeddingPhase;\n  percent: number;\n  modelDownloadPercent?: number;\n  nodesProcessed?: number;\n  totalNodes?: number;\n  currentBatch?: number;\n  totalBatches?: number;\n  error?: string;\n}\n\n/**\n * Configuration for the embedding pipeline\n */\nexport interface EmbeddingConfig {\n  /** Model identifier for transformers.js */\n  modelId: string;\n  /** Number of nodes to embed in each batch */\n  batchSize: number;\n  /** Embedding vector dimensions */\n  dimensions: number;\n  /** Device to use for inference: 'auto' tries GPU first (DirectML on Windows, CUDA on Linux), falls back to CPU */\n  device: 'auto' | 'dml' | 'cuda' | 'cpu' | 'wasm';\n  /** Maximum characters of code snippet to include */\n  maxSnippetLength: number;\n}\n\n/**\n * Default embedding configuration\n * Uses snowflake-arctic-embed-xs for browser efficiency\n * Tries WebGPU first (fast), user can choose WASM fallback if unavailable\n */\nexport const DEFAULT_EMBEDDING_CONFIG: EmbeddingConfig = {\n  modelId: 'Snowflake/snowflake-arctic-embed-xs',\n  batchSize: 16,\n  dimensions: 
384,\n  device: 'auto',\n  maxSnippetLength: 500,\n};\n\n/**\n * Result from semantic search\n */\nexport interface SemanticSearchResult {\n  nodeId: string;\n  name: string;\n  label: string;\n  filePath: string;\n  distance: number;\n  startLine?: number;\n  endLine?: number;\n}\n\n/**\n * Node data for embedding (minimal structure from LadybugDB query)\n */\nexport interface EmbeddableNode {\n  id: string;\n  name: string;\n  label: string;\n  filePath: string;\n  content: string;\n  startLine?: number;\n  endLine?: number;\n}\n\n/**\n * Model download progress from transformers.js\n */\nexport interface ModelProgress {\n  status: 'initiate' | 'download' | 'progress' | 'done' | 'ready';\n  file?: string;\n  progress?: number;\n  loaded?: number;\n  total?: number;\n}\n\n"
  },
  {
    "path": "gitnexus/src/core/graph/graph.ts",
    "content": "import { GraphNode, GraphRelationship, KnowledgeGraph } from './types.js'\n\nexport const createKnowledgeGraph = (): KnowledgeGraph => {\n  const nodeMap = new Map<string, GraphNode>();\n  const relationshipMap = new Map<string, GraphRelationship>();\n\n  const addNode = (node: GraphNode) => {\n    if(!nodeMap.has(node.id)) {\n      nodeMap.set(node.id, node);\n    }\n  };\n\n  const addRelationship = (relationship: GraphRelationship) => {\n    if (!relationshipMap.has(relationship.id)) {\n      relationshipMap.set(relationship.id, relationship);\n    }\n  };\n\n  /**\n   * Remove a single node and all relationships involving it\n   */\n  const removeNode = (nodeId: string): boolean => {\n    if (!nodeMap.has(nodeId)) return false;\n    \n    nodeMap.delete(nodeId);\n    \n    // Remove all relationships involving this node\n    for (const [relId, rel] of relationshipMap) {\n      if (rel.sourceId === nodeId || rel.targetId === nodeId) {\n        relationshipMap.delete(relId);\n      }\n    }\n    return true;\n  };\n\n  /**\n   * Remove all nodes (and their relationships) belonging to a file\n   */\n  const removeNodesByFile = (filePath: string): number => {\n    let removed = 0;\n    for (const [nodeId, node] of nodeMap) {\n      if (node.properties?.filePath === filePath) {\n        removeNode(nodeId);\n        removed++;\n      }\n    }\n    return removed;\n  };\n\n  return{\n    get nodes(){\n      return Array.from(nodeMap.values())\n    },\n\n    get relationships(){\n      return Array.from(relationshipMap.values())\n    },\n\n    iterNodes: () => nodeMap.values(),\n    iterRelationships: () => relationshipMap.values(),\n    forEachNode(fn: (node: GraphNode) => void) { nodeMap.forEach(fn); },\n    forEachRelationship(fn: (rel: GraphRelationship) => void) { relationshipMap.forEach(fn); },\n    getNode: (id: string) => nodeMap.get(id),\n\n    // O(1) count getters - avoid creating arrays just for length\n    get nodeCount() {\n      return 
nodeMap.size;\n    },\n\n    get relationshipCount() {\n      return relationshipMap.size;\n    },\n\n    addNode,\n    addRelationship,\n    removeNode,\n    removeNodesByFile,\n\n  };\n};\n"
  },
  {
    "path": "gitnexus/src/core/graph/types.ts",
    "content": "export type NodeLabel =\n  | 'Project'\n  | 'Package'\n  | 'Module'\n  | 'Folder'\n  | 'File'\n  | 'Class'\n  | 'Function'\n  | 'Method'\n  | 'Variable'\n  | 'Interface'\n  | 'Enum'\n  | 'Decorator'\n  | 'Import'\n  | 'Type'\n  | 'CodeElement'\n  | 'Community'\n  | 'Process'\n  // Multi-language node types\n  | 'Struct'\n  | 'Macro'\n  | 'Typedef'\n  | 'Union'\n  | 'Namespace'\n  | 'Trait'\n  | 'Impl'\n  | 'TypeAlias'\n  | 'Const'\n  | 'Static'\n  | 'Property'\n  | 'Record'\n  | 'Delegate'\n  | 'Annotation'\n  | 'Constructor'\n  | 'Template';\n\n\nimport { SupportedLanguages } from '../../config/supported-languages.js';\n\nexport type NodeProperties = {\n  name: string,\n  filePath: string,\n  startLine?: number,\n  endLine?: number,\n  language?: SupportedLanguages,\n  isExported?: boolean,\n  // Optional AST-derived framework hint (e.g. @Controller, @GetMapping)\n  astFrameworkMultiplier?: number,\n  astFrameworkReason?: string,\n  // Community-specific properties\n  heuristicLabel?: string,\n  cohesion?: number,\n  symbolCount?: number,\n  keywords?: string[],\n  description?: string,\n  enrichedBy?: 'heuristic' | 'llm',\n  // Process-specific properties\n  processType?: 'intra_community' | 'cross_community',\n  stepCount?: number,\n  communities?: string[],\n  entryPointId?: string,\n  terminalId?: string,\n  // Entry point scoring (computed by process detection)\n  entryPointScore?: number,\n  entryPointReason?: string,\n  // Method signature (for MRO disambiguation)\n  parameterCount?: number,\n  returnType?: string,\n}\n\nexport type RelationshipType =\n  | 'CONTAINS'\n  | 'CALLS'\n  | 'INHERITS'\n  | 'OVERRIDES'\n  | 'IMPORTS'\n  | 'USES'\n  | 'DEFINES'\n  | 'DECORATES'\n  | 'IMPLEMENTS'\n  | 'EXTENDS'\n  | 'HAS_METHOD'\n  | 'HAS_PROPERTY'\n  | 'ACCESSES'\n  | 'MEMBER_OF'\n  | 'STEP_IN_PROCESS'\n\nexport interface GraphNode {\n  id:  string,\n  label: NodeLabel,\n  properties: NodeProperties,  \n}\n\nexport interface GraphRelationship {\n  
id: string,\n  sourceId: string,\n  targetId: string,\n  type: RelationshipType,\n  /** Confidence score 0-1 (1.0 = certain, lower = uncertain resolution) */\n  confidence: number,\n  /** Semantics are edge-type-dependent: CALLS uses resolution tier, ACCESSES uses 'read'/'write', OVERRIDES uses MRO reason */\n  reason: string,\n  /** Step number for STEP_IN_PROCESS relationships (1-indexed) */\n  step?: number,\n}\n\nexport interface KnowledgeGraph {\n  /** Returns a full array copy — prefer iterNodes() for iteration */\n  nodes: GraphNode[],\n  /** Returns a full array copy — prefer iterRelationships() for iteration */\n  relationships: GraphRelationship[],\n  /** Zero-copy iterator over nodes */\n  iterNodes: () => IterableIterator<GraphNode>,\n  /** Zero-copy iterator over relationships */\n  iterRelationships: () => IterableIterator<GraphRelationship>,\n  /** Zero-copy forEach — avoids iterator protocol overhead in hot loops */\n  forEachNode: (fn: (node: GraphNode) => void) => void,\n  forEachRelationship: (fn: (rel: GraphRelationship) => void) => void,\n  /** Lookup a single node by id — O(1) */\n  getNode: (id: string) => GraphNode | undefined,\n  nodeCount: number,\n  relationshipCount: number,\n  addNode: (node: GraphNode) => void,\n  addRelationship: (relationship: GraphRelationship) => void,\n  removeNode: (nodeId: string) => boolean,\n  removeNodesByFile: (filePath: string) => number,\n}\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/ast-cache.ts",
    "content": "import { LRUCache } from 'lru-cache';\nimport Parser from 'tree-sitter';\n\n// Define the interface for the Cache\nexport interface ASTCache {\n  get: (filePath: string) => Parser.Tree | undefined;\n  set: (filePath: string, tree: Parser.Tree) => void;\n  clear: () => void;\n  stats: () => { size: number; maxSize: number };\n}\n\nexport const createASTCache = (maxSize: number = 50): ASTCache => {\n  const effectiveMax = Math.max(maxSize, 1);\n  // Initialize the cache with a 'dispose' handler\n  // This is the magic: When an item is evicted (dropped), this runs automatically.\n  const cache = new LRUCache<string, Parser.Tree>({\n    max: effectiveMax,\n    dispose: (tree) => {\n      try {\n        // NOTE: web-tree-sitter has tree.delete(); native tree-sitter trees are GC-managed.\n        // Keep this try/catch so we don't crash on either runtime.\n        (tree as any).delete?.();\n      } catch (e) {\n        console.warn('Failed to delete tree from WASM memory', e);\n      }\n    }\n  });\n\n  return {\n    get: (filePath: string) => {\n      const tree = cache.get(filePath);\n      return tree; // Returns undefined if not found\n    },\n    \n    set: (filePath: string, tree: Parser.Tree) => {\n      cache.set(filePath, tree);\n    },\n    \n    clear: () => {\n      cache.clear();\n    },\n\n    stats: () => ({\n      size: cache.size,\n      maxSize: effectiveMax\n    })\n  };\n};\n\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/call-processor.ts",
    "content": "import { KnowledgeGraph } from '../graph/types.js';\nimport { ASTCache } from './ast-cache.js';\nimport type { SymbolDefinition } from './symbol-table.js';\nimport Parser from 'tree-sitter';\nimport type { ResolutionContext } from './resolution-context.js';\nimport { TIER_CONFIDENCE, type ResolutionTier } from './resolution-context.js';\nimport { isLanguageAvailable, loadParser, loadLanguage } from '../tree-sitter/parser-loader.js';\nimport { LANGUAGE_QUERIES } from './tree-sitter-queries.js';\nimport { generateId } from '../../lib/utils.js';\nimport {\n  getLanguageFromFilename,\n  isVerboseIngestionEnabled,\n  yieldToEventLoop,\n  FUNCTION_NODE_TYPES,\n  extractFunctionName,\n  isBuiltInOrNoise,\n  countCallArguments,\n  inferCallForm,\n  extractReceiverName,\n  extractReceiverNode,\n  findEnclosingClassId,\n  CALL_EXPRESSION_TYPES,\n  extractMixedChain,\n  type MixedChainStep,\n} from './utils.js';\nimport { buildTypeEnv, isSubclassOf } from './type-env.js';\nimport type { ConstructorBinding } from './type-env.js';\nimport { getTreeSitterBufferSize } from './constants.js';\nimport type { ExtractedCall, ExtractedAssignment, ExtractedHeritage, ExtractedRoute, FileConstructorBindings } from './workers/parse-worker.js';\nimport { callRouters } from './call-routing.js';\nimport { extractReturnTypeName, stripNullable } from './type-extractors/shared.js';\nimport { typeConfigs } from './type-extractors/index.js';\nimport type { LiteralTypeInferrer } from './type-extractors/types.js';\nimport type { SyntaxNode } from './utils.js';\n\n// Stdlib methods that preserve the receiver's type identity. 
When TypeEnv already\n// strips nullable wrappers (Option<User> → User), these chain steps are no-ops\n// for type resolution — the current type passes through unchanged.\nconst TYPE_PRESERVING_METHODS = new Set([\n  'unwrap', 'expect', 'unwrap_or', 'unwrap_or_default', 'unwrap_or_else',  // Rust Option/Result\n  'clone', 'to_owned', 'as_ref', 'as_mut', 'borrow', 'borrow_mut',        // Rust clone/borrow\n  'get',                                                                     // Kotlin/Java Optional.get()\n  'orElseThrow',                                                             // Java Optional\n]);\n\n/**\n * Walk up the AST from a node to find the enclosing function/method.\n * Returns null if the call is at module/file level (top-level code).\n */\nconst findEnclosingFunction = (\n  node: any,\n  filePath: string,\n  ctx: ResolutionContext\n): string | null => {\n  let current = node.parent;\n\n  while (current) {\n    if (FUNCTION_NODE_TYPES.has(current.type)) {\n      const { funcName, label } = extractFunctionName(current);\n\n      if (funcName) {\n        const resolved = ctx.resolve(funcName, filePath);\n        if (resolved?.tier === 'same-file' && resolved.candidates.length > 0) {\n          return resolved.candidates[0].nodeId;\n        }\n\n        return generateId(label, `${filePath}:${funcName}`);\n      }\n    }\n    current = current.parent;\n  }\n\n  return null;\n};\n\n/**\n * Verify constructor bindings against SymbolTable and infer receiver types.\n * Shared between sequential (processCalls) and worker (processCallsFromExtracted) paths.\n */\nconst verifyConstructorBindings = (\n  bindings: readonly ConstructorBinding[],\n  filePath: string,\n  ctx: ResolutionContext,\n  graph?: KnowledgeGraph,\n): Map<string, string> => {\n  const verified = new Map<string, string>();\n\n  for (const { scope, varName, calleeName, receiverClassName } of bindings) {\n    const tiered = ctx.resolve(calleeName, filePath);\n    const isClass = 
tiered?.candidates.some(def => def.type === 'Class') ?? false;\n\n    if (isClass) {\n      verified.set(receiverKey(scope, varName), calleeName);\n    } else {\n      let callableDefs = tiered?.candidates.filter(d =>\n        d.type === 'Function' || d.type === 'Method'\n      );\n\n      // When receiver class is known (e.g. $this->method() in PHP), narrow\n      // candidates to methods owned by that class to avoid false disambiguation failures.\n      if (callableDefs && callableDefs.length > 1 && receiverClassName) {\n        if (graph) {\n          // Worker path: use graph.getNode (fast, already in-memory)\n          const narrowed = callableDefs.filter(d => {\n            if (!d.ownerId) return false;\n            const owner = graph.getNode(d.ownerId);\n            return owner?.properties.name === receiverClassName;\n          });\n          if (narrowed.length > 0) callableDefs = narrowed;\n        } else {\n          // Sequential path: use ctx.resolve (no graph available)\n          const classResolved = ctx.resolve(receiverClassName, filePath);\n          if (classResolved && classResolved.candidates.length > 0) {\n            const classNodeIds = new Set(classResolved.candidates.map(c => c.nodeId));\n            const narrowed = callableDefs.filter(d =>\n              d.ownerId && classNodeIds.has(d.ownerId)\n            );\n            if (narrowed.length > 0) callableDefs = narrowed;\n          }\n        }\n      }\n\n      if (callableDefs && callableDefs.length === 1 && callableDefs[0].returnType) {\n        const typeName = extractReturnTypeName(callableDefs[0].returnType);\n        if (typeName) {\n          verified.set(receiverKey(scope, varName), typeName);\n        }\n      }\n    }\n  }\n\n  return verified;\n};\n\nexport const processCalls = async (\n  graph: KnowledgeGraph,\n  files: { path: string; content: string }[],\n  astCache: ASTCache,\n  ctx: ResolutionContext,\n  onProgress?: (current: number, total: number) => void,\n): 
Promise<ExtractedHeritage[]> => {\n  const parser = await loadParser();\n  const collectedHeritage: ExtractedHeritage[] = [];\n  const pendingWrites: { receiverTypeName: string; propertyName: string; filePath: string; srcId: string }[] = [];\n  // Phase P cross-file: accumulate heritage across files for cross-file isSubclassOf.\n  // Used as a secondary check when per-file parentMap lacks the relationship — helps\n  // when the heritage-declaring file is processed before the call site file.\n  // For remaining cases (reverse file order), the SymbolTable class-type fallback applies.\n  const globalParentMap = new Map<string, string[]>();\n  const globalParentSeen = new Map<string, Set<string>>();\n  const logSkipped = isVerboseIngestionEnabled();\n  const skippedByLang = logSkipped ? new Map<string, number>() : null;\n\n  for (let i = 0; i < files.length; i++) {\n    const file = files[i];\n    onProgress?.(i + 1, files.length);\n    if (i % 20 === 0) await yieldToEventLoop();\n\n    const language = getLanguageFromFilename(file.path);\n    if (!language) continue;\n    if (!isLanguageAvailable(language)) {\n      if (skippedByLang) {\n        skippedByLang.set(language, (skippedByLang.get(language) ?? 
0) + 1);\n      }\n      continue;\n    }\n\n    const queryStr = LANGUAGE_QUERIES[language];\n    if (!queryStr) continue;\n\n    await loadLanguage(language, file.path);\n\n    let tree = astCache.get(file.path);\n    if (!tree) {\n      try {\n        tree = parser.parse(file.content, undefined, { bufferSize: getTreeSitterBufferSize(file.content.length) });\n      } catch (parseError) {\n        continue;\n      }\n      astCache.set(file.path, tree);\n    }\n\n    let query;\n    let matches;\n    try {\n      const language = parser.getLanguage();\n      query = new Parser.Query(language, queryStr);\n      matches = query.matches(tree.rootNode);\n    } catch (queryError) {\n      console.warn(`Query error for ${file.path}:`, queryError);\n      continue;\n    }\n\n    const lang = getLanguageFromFilename(file.path);\n\n    // Pre-pass: extract heritage from query matches to build parentMap for buildTypeEnv.\n    // Heritage-processor runs in PARALLEL, so graph edges don't exist when buildTypeEnv runs.\n    const fileParentMap = new Map<string, string[]>();\n    for (const match of matches) {\n      const captureMap: Record<string, any> = {};\n      match.captures.forEach(c => captureMap[c.name] = c.node);\n      if (captureMap['heritage.class'] && captureMap['heritage.extends']) {\n        const className: string = captureMap['heritage.class'].text;\n        const parentName: string = captureMap['heritage.extends'].text;\n        const extendsNode = captureMap['heritage.extends'];\n        const fieldDecl = extendsNode.parent;\n        if (fieldDecl?.type === 'field_declaration' && fieldDecl.childForFieldName('name')) continue;\n        let parents = fileParentMap.get(className);\n        if (!parents) { parents = []; fileParentMap.set(className, parents); }\n        if (!parents.includes(parentName)) parents.push(parentName);\n      }\n    }\n    const parentMap: ReadonlyMap<string, readonly string[]> = fileParentMap;\n    // Merge per-file heritage into 
globalParentMap for cross-file isSubclassOf lookups.\n    // Uses a parallel Set (globalParentSeen) for O(1) deduplication instead of O(n) includes().\n    for (const [cls, parents] of fileParentMap) {\n      let global = globalParentMap.get(cls);\n      let seen = globalParentSeen.get(cls);\n      if (!global) { global = []; globalParentMap.set(cls, global); }\n      if (!seen) { seen = new Set(); globalParentSeen.set(cls, seen); }\n      for (const p of parents) {\n        if (!seen.has(p)) { seen.add(p); global.push(p); }\n      }\n    }\n\n    const typeEnv = lang ? buildTypeEnv(tree, lang, { symbolTable: ctx.symbols, parentMap }) : null;\n    const callRouter = callRouters[language];\n\n    const verifiedReceivers = typeEnv && typeEnv.constructorBindings.length > 0\n      ? verifyConstructorBindings(typeEnv.constructorBindings, file.path, ctx)\n      : new Map<string, string>();\n    const receiverIndex = buildReceiverTypeIndex(verifiedReceivers);\n\n    ctx.enableCache(file.path);\n\n    matches.forEach(match => {\n      const captureMap: Record<string, any> = {};\n      match.captures.forEach(c => captureMap[c.name] = c.node);\n      // ── Write access: emit ACCESSES {reason: 'write'} for assignments to member fields ──\n      if (captureMap['assignment'] && captureMap['assignment.receiver'] && captureMap['assignment.property']) {\n        const receiverNode = captureMap['assignment.receiver'];\n        const propertyName: string = captureMap['assignment.property'].text;\n        // Resolve receiver type: simple identifier → TypeEnv lookup or class resolution\n        let receiverTypeName: string | undefined;\n        const receiverText = receiverNode.text;\n        if (receiverText && typeEnv) {\n          receiverTypeName = typeEnv.lookup(receiverText, captureMap['assignment']);\n        }\n        // Fall back to verified constructor bindings (mirrors CALLS resolution tier 2)\n        if (!receiverTypeName && receiverText && receiverIndex.size > 0) {\n    
      const enclosing = findEnclosingFunction(captureMap['assignment'], file.path, ctx);\n          const funcName = enclosing ? extractFuncNameFromSourceId(enclosing) : '';\n          receiverTypeName = lookupReceiverType(receiverIndex, funcName, receiverText);\n        }\n        if (!receiverTypeName && receiverText) {\n          const resolved = ctx.resolve(receiverText, file.path);\n          if (resolved?.candidates.some(d =>\n            d.type === 'Class' || d.type === 'Struct' || d.type === 'Interface'\n              || d.type === 'Enum' || d.type === 'Record' || d.type === 'Impl',\n          )) {\n            receiverTypeName = receiverText;\n          }\n        }\n        if (receiverTypeName) {\n          const enclosing = findEnclosingFunction(captureMap['assignment'], file.path, ctx);\n          const srcId = enclosing || generateId('File', file.path);\n          // Defer resolution: Ruby attr_accessor properties are registered during\n          // this same loop, so cross-file lookups fail if the declaring file hasn't\n          // been processed yet. 
Collect now, resolve after all files are done.\n          pendingWrites.push({ receiverTypeName, propertyName, filePath: file.path, srcId });\n        }\n        // Assignment-only capture (no @call sibling): skip the rest of this\n        // forEach iteration — this acts as a `continue` in the match loop.\n        if (!captureMap['call']) return;\n      }\n\n      if (!captureMap['call']) return;\n\n      const nameNode = captureMap['call.name'];\n      if (!nameNode) return;\n\n      const calledName = nameNode.text;\n\n      const routed = callRouter(calledName, captureMap['call']);\n      if (routed) {\n        switch (routed.kind) {\n          case 'skip':\n          case 'import':\n            return;\n\n          case 'heritage':\n            for (const item of routed.items) {\n              collectedHeritage.push({\n                filePath: file.path,\n                className: item.enclosingClass,\n                parentName: item.mixinName,\n                kind: item.heritageKind,\n              });\n            }\n            return;\n\n          case 'properties': {\n            const fileId = generateId('File', file.path);\n            const propEnclosingClassId = findEnclosingClassId(captureMap['call'], file.path);\n            for (const item of routed.items) {\n              const nodeId = generateId('Property', `${file.path}:${item.propName}`);\n              graph.addNode({\n                id: nodeId,\n                label: 'Property',\n                properties: {\n                  name: item.propName, filePath: file.path,\n                  startLine: item.startLine, endLine: item.endLine,\n                  language, isExported: true,\n                  description: item.accessorType,\n                },\n              });\n              ctx.symbols.add(file.path, item.propName, nodeId, 'Property', {\n                ...(propEnclosingClassId ? { ownerId: propEnclosingClassId } : {}),\n                ...(item.declaredType ? 
{ declaredType: item.declaredType } : {}),\n              });\n              const relId = generateId('DEFINES', `${fileId}->${nodeId}`);\n              graph.addRelationship({\n                id: relId, sourceId: fileId, targetId: nodeId,\n                type: 'DEFINES', confidence: 1.0, reason: '',\n              });\n              if (propEnclosingClassId) {\n                graph.addRelationship({\n                  id: generateId('HAS_PROPERTY', `${propEnclosingClassId}->${nodeId}`),\n                  sourceId: propEnclosingClassId, targetId: nodeId,\n                  type: 'HAS_PROPERTY', confidence: 1.0, reason: '',\n                });\n              }\n            }\n            return;\n          }\n\n          case 'call':\n            break;\n        }\n      }\n\n      if (isBuiltInOrNoise(calledName)) return;\n\n      const callNode = captureMap['call'];\n      const callForm = inferCallForm(callNode, nameNode);\n      const receiverName = callForm === 'member' ? extractReceiverName(nameNode) : undefined;\n      let receiverTypeName = receiverName && typeEnv ? typeEnv.lookup(receiverName, callNode) : undefined;\n      // Phase P: virtual dispatch override — when the declared type is a base class but\n      // the constructor created a known subclass, prefer the more specific type.\n      // Checks per-file parentMap first, then falls back to globalParentMap for\n      // cross-file heritage (e.g. 
Dog extends Animal declared in a different file).\n      // Reconstructs the exact scope key (funcName@startIndex\\0varName) from the\n      // enclosing function AST node for a correct, O(1) map lookup.\n      if (receiverTypeName && receiverName && typeEnv && typeEnv.constructorTypeMap.size > 0) {\n        // Reconstruct scope key to match constructorTypeMap's scope\\0varName format\n        let scope = '';\n        let p = callNode.parent;\n        while (p) {\n          if (FUNCTION_NODE_TYPES.has(p.type)) {\n            const { funcName } = extractFunctionName(p);\n            if (funcName) { scope = `${funcName}@${p.startIndex}`; break; }\n          }\n          p = p.parent;\n        }\n        const ctorType = typeEnv.constructorTypeMap.get(`${scope}\\0${receiverName}`);\n        if (ctorType && ctorType !== receiverTypeName) {\n          // Verify subclass relationship: per-file parentMap first, then cross-file\n          // globalParentMap, then fall back to SymbolTable class verification.\n          // The SymbolTable fallback handles cross-file cases where heritage is declared\n          // in a file not yet processed (e.g. Dog extends Animal in models/Dog.kt when\n          // processing services/App.kt). 
Since constructorTypeMap only records entries\n          // when a type annotation AND constructor are both present (val x: Base = Sub()),\n          // confirming both are class-like types is sufficient — the original code would\n          // not compile if Sub didn't extend Base.\n          if (isSubclassOf(ctorType, receiverTypeName, parentMap)\n            || isSubclassOf(ctorType, receiverTypeName, globalParentMap)\n            || (ctx.symbols.lookupFuzzy(ctorType).some(d => d.type === 'Class' || d.type === 'Struct')\n              && ctx.symbols.lookupFuzzy(receiverTypeName).some(d => d.type === 'Class' || d.type === 'Struct' || d.type === 'Interface'))) {\n            receiverTypeName = ctorType;\n          }\n        }\n      }\n      // Fall back to verified constructor bindings for return type inference\n      if (!receiverTypeName && receiverName && receiverIndex.size > 0) {\n        const enclosingFunc = findEnclosingFunction(callNode, file.path, ctx);\n        const funcName = enclosingFunc ? extractFuncNameFromSourceId(enclosingFunc) : '';\n        receiverTypeName = lookupReceiverType(receiverIndex, funcName, receiverName);\n      }\n      // Fall back to class-as-receiver for static method calls (e.g. 
UserService.find_user()).\n      // When the receiver name is not a variable in TypeEnv but resolves to a Class/Struct/Interface\n      // through the standard tiered resolution, use it directly as the receiver type.\n      if (!receiverTypeName && receiverName && callForm === 'member') {\n        const typeResolved = ctx.resolve(receiverName, file.path);\n        if (typeResolved && typeResolved.candidates.some(\n          d => d.type === 'Class' || d.type === 'Interface' || d.type === 'Struct' || d.type === 'Enum',\n        )) {\n          receiverTypeName = receiverName;\n        }\n      }\n      // Hoist sourceId so it's available for ACCESSES edge emission during chain walk.\n      const enclosingFuncId = findEnclosingFunction(callNode, file.path, ctx);\n      const sourceId = enclosingFuncId || generateId('File', file.path);\n\n      // Fall back to mixed chain resolution when the receiver is a complex expression\n      // (field chain, call chain, or interleaved — e.g. user.address.city.save() or\n      // svc.getUser().address.save()). Handles all cases with a single unified walk.\n      if (callForm === 'member' && !receiverTypeName && !receiverName) {\n        const receiverNode = extractReceiverNode(nameNode);\n        if (receiverNode) {\n          const extracted = extractMixedChain(receiverNode);\n          if (extracted && extracted.chain.length > 0) {\n            let currentType = extracted.baseReceiverName && typeEnv\n              ? typeEnv.lookup(extracted.baseReceiverName, callNode)\n              : undefined;\n            if (!currentType && extracted.baseReceiverName && receiverIndex.size > 0) {\n              const funcName = enclosingFuncId ? 
extractFuncNameFromSourceId(enclosingFuncId) : '';\n              currentType = lookupReceiverType(receiverIndex, funcName, extracted.baseReceiverName);\n            }\n            if (!currentType && extracted.baseReceiverName) {\n              const cr = ctx.resolve(extracted.baseReceiverName, file.path);\n              if (cr?.candidates.some(d =>\n                d.type === 'Class' || d.type === 'Interface' || d.type === 'Struct' || d.type === 'Enum',\n              )) {\n                currentType = extracted.baseReceiverName;\n              }\n            }\n            if (currentType) {\n              receiverTypeName = walkMixedChain(\n                extracted.chain, currentType, file.path, ctx,\n                makeAccessEmitter(graph, sourceId),\n              );\n            }\n          }\n        }\n      }\n\n      // Build overload hints for languages with inferLiteralType (Java/Kotlin/C#/C++).\n      // Only used when multiple candidates survive arity filtering — ~1-3% of calls.\n      const langConfig = lang ? typeConfigs[lang as keyof typeof typeConfigs] : undefined;\n      const hints: OverloadHints | undefined = langConfig?.inferLiteralType\n        ? 
{ callNode, inferLiteralType: langConfig.inferLiteralType }\n        : undefined;\n\n      const resolved = resolveCallTarget({\n        calledName,\n        argCount: countCallArguments(callNode),\n        callForm,\n        receiverTypeName,\n      }, file.path, ctx, hints);\n\n      if (!resolved) return;\n      const relId = generateId('CALLS', `${sourceId}:${calledName}->${resolved.nodeId}`);\n\n      graph.addRelationship({\n        id: relId,\n        sourceId,\n        targetId: resolved.nodeId,\n        type: 'CALLS',\n        confidence: resolved.confidence,\n        reason: resolved.reason,\n      });\n    });\n\n    ctx.clearCache();\n  }\n\n  // ── Resolve deferred write-access edges ──\n  // All properties (including Ruby attr_accessor) are now registered.\n  for (const pw of pendingWrites) {\n    const fieldOwner = resolveFieldOwnership(pw.receiverTypeName, pw.propertyName, pw.filePath, ctx);\n    if (fieldOwner) {\n      graph.addRelationship({\n        id: generateId('ACCESSES', `${pw.srcId}:${fieldOwner.nodeId}:write`),\n        sourceId: pw.srcId,\n        targetId: fieldOwner.nodeId,\n        type: 'ACCESSES',\n        confidence: 1.0,\n        reason: 'write',\n      });\n    }\n  }\n\n  if (skippedByLang && skippedByLang.size > 0) {\n    for (const [lang, count] of skippedByLang.entries()) {\n      console.warn(\n        `[ingestion] Skipped ${count} ${lang} file(s) in call processing — ${lang} parser not available.`\n      );\n    }\n  }\n\n  return collectedHeritage;\n};\n\n/**\n * Resolution result with confidence scoring\n */\ninterface ResolveResult {\n  nodeId: string;\n  confidence: number;\n  reason: string;\n  returnType?: string;\n}\n\nconst CALLABLE_SYMBOL_TYPES = new Set([\n  'Function',\n  'Method',\n  'Constructor',\n  'Macro',\n  'Delegate',\n]);\n\nconst CONSTRUCTOR_TARGET_TYPES = new Set(['Constructor', 'Class', 'Struct', 'Record']);\n\nconst filterCallableCandidates = (\n  candidates: readonly SymbolDefinition[],\n  
argCount?: number,\n  callForm?: 'free' | 'member' | 'constructor',\n): SymbolDefinition[] => {\n  let kindFiltered: SymbolDefinition[];\n\n  if (callForm === 'constructor') {\n    const constructors = candidates.filter(c => c.type === 'Constructor');\n    if (constructors.length > 0) {\n      kindFiltered = constructors;\n    } else {\n      const types = candidates.filter(c => CONSTRUCTOR_TARGET_TYPES.has(c.type));\n      kindFiltered = types.length > 0 ? types : candidates.filter(c => CALLABLE_SYMBOL_TYPES.has(c.type));\n    }\n  } else {\n    kindFiltered = candidates.filter(c => CALLABLE_SYMBOL_TYPES.has(c.type));\n  }\n\n  if (kindFiltered.length === 0) return [];\n  if (argCount === undefined) return kindFiltered;\n\n  const hasParameterMetadata = kindFiltered.some(candidate => candidate.parameterCount !== undefined);\n  if (!hasParameterMetadata) return kindFiltered;\n\n  return kindFiltered.filter(candidate =>\n    candidate.parameterCount === undefined\n    || (argCount >= (candidate.requiredParameterCount ?? candidate.parameterCount)\n      && argCount <= candidate.parameterCount)\n  );\n};\n\nconst toResolveResult = (\n  definition: SymbolDefinition,\n  tier: ResolutionTier,\n): ResolveResult => ({\n  nodeId: definition.nodeId,\n  confidence: TIER_CONFIDENCE[tier],\n  reason: tier === 'same-file' ? 'same-file' : tier === 'import-scoped' ? 'import-resolved' : 'global',\n  returnType: definition.returnType,\n});\n\n\n/** Optional hints for overload disambiguation via argument literal types.\n *  Only available on the sequential path (has AST); worker path passes undefined. */\ninterface OverloadHints {\n  callNode: SyntaxNode;\n  inferLiteralType: LiteralTypeInferrer;\n}\n\n/**\n * Kotlin (and JVM in general) uses boxed type names in parameter declarations\n * (e.g. `Int`, `Long`, `Boolean`) while inferJvmLiteralType returns unboxed\n * primitives (`int`, `long`, `boolean`). 
Normalise both sides to lowercase so\n * that the comparison `'Int' === 'int'` does not fail.\n *\n * Only applied to single-word identifiers that look like a JVM primitive alias;\n * multi-word or qualified names are left untouched.\n */\nconst KOTLIN_BOXED_TO_PRIMITIVE: Readonly<Record<string, string>> = {\n  Int: 'int',\n  Long: 'long',\n  Short: 'short',\n  Byte: 'byte',\n  Float: 'float',\n  Double: 'double',\n  Boolean: 'boolean',\n  Char: 'char',\n};\n\nconst normalizeJvmTypeName = (name: string): string =>\n  KOTLIN_BOXED_TO_PRIMITIVE[name] ?? name;\n\n/**\n * Try to disambiguate overloaded candidates using argument literal types.\n * Only invoked when filteredCandidates.length > 1 and at least one has parameterTypes.\n * Returns the single matching candidate, or null if ambiguous/inconclusive.\n */\nconst tryOverloadDisambiguation = (\n  candidates: SymbolDefinition[],\n  hints: OverloadHints,\n): SymbolDefinition | null => {\n  if (!candidates.some(c => c.parameterTypes)) return null;\n\n  // Find the argument list node in the call expression.\n  // Kotlin wraps value_arguments inside a call_suffix child, so we must also\n  // search one level deeper when a direct match is not found.\n  let argList: any = hints.callNode.childForFieldName?.('arguments')\n    ?? 
hints.callNode.children.find((c: any) =>\n      c.type === 'arguments' || c.type === 'argument_list' || c.type === 'value_arguments'\n    );\n  if (!argList) {\n    // Kotlin: call_expression → call_suffix → value_arguments\n    const callSuffix = hints.callNode.children.find((c: any) => c.type === 'call_suffix');\n    if (callSuffix) {\n      argList = callSuffix.children.find((c: any) => c.type === 'value_arguments');\n    }\n  }\n  if (!argList) return null;\n\n  const argTypes: (string | undefined)[] = [];\n  for (const arg of argList.namedChildren) {\n    if (arg.type === 'comment') continue;\n    // Unwrap argument wrapper nodes before passing to inferLiteralType:\n    //   - Kotlin value_argument: has 'value' field containing the literal\n    //   - C# argument: has 'expression' field (handles named args like `name: \"alice\"`\n    //     where firstNamedChild would return name_colon instead of the value)\n    //   - Java/others: arg IS the literal directly (no unwrapping needed)\n    const valueNode = arg.childForFieldName?.('value')\n      ?? arg.childForFieldName?.('expression')\n      ?? (arg.type === 'argument' || arg.type === 'value_argument'\n        ? arg.firstNamedChild ?? arg\n        : arg);\n    argTypes.push(hints.inferLiteralType(valueNode));\n  }\n\n  // If no literal types could be inferred, can't disambiguate\n  if (argTypes.every(t => t === undefined)) return null;\n\n  const matched = candidates.filter(c => {\n    // Keep candidates without type info — conservative: partially-annotated codebases\n    // (e.g. C++ with some missing declarations) may have mixed typed/untyped overloads.\n    // If one typed and one untyped both survive, matched.length > 1 → returns null (no edge).\n    if (!c.parameterTypes) return true;\n    return c.parameterTypes.every((pType, i) => {\n      if (i >= argTypes.length || !argTypes[i]) return true;\n      // Normalise Kotlin boxed type names (Int→int, Boolean→boolean, etc.) 
so\n      // that the stored declaration type matches the inferred literal type.\n      return normalizeJvmTypeName(pType) === argTypes[i];\n    });\n  });\n\n  if (matched.length === 1) return matched[0];\n  // Multiple survivors may share the same nodeId (e.g. TypeScript overload signatures +\n  // implementation body all collide via generateId). Deduplicate by nodeId — if all\n  // matched candidates resolve to the same graph node, disambiguation succeeded.\n  if (matched.length > 1) {\n    const uniqueIds = new Set(matched.map(c => c.nodeId));\n    if (uniqueIds.size === 1) return matched[0];\n  }\n  return null;\n};\n\n/**\n * Resolve a function call to its target node ID using priority strategy:\n * A. Narrow candidates by scope tier via ctx.resolve()\n * B. Filter to callable symbol kinds (constructor-aware when callForm is set)\n * C. Apply arity filtering when parameter metadata is available\n * D. Apply receiver-type filtering for member calls with typed receivers\n * E. Apply overload disambiguation via argument literal types (when available)\n *\n * If filtering still leaves multiple candidates, refuse to emit a CALLS edge.\n */\nconst resolveCallTarget = (\n  call: Pick<ExtractedCall, 'calledName' | 'argCount' | 'callForm' | 'receiverTypeName'>,\n  currentFile: string,\n  ctx: ResolutionContext,\n  overloadHints?: OverloadHints,\n): ResolveResult | null => {\n  const tiered = ctx.resolve(call.calledName, currentFile);\n  if (!tiered) return null;\n\n  const filteredCandidates = filterCallableCandidates(tiered.candidates, call.argCount, call.callForm);\n\n  // D. Receiver-type filtering: for member calls with a known receiver type,\n  // resolve the type through the same tiered import infrastructure, then\n  // filter method candidates to the type's defining file. 
Fall back to\n  // fuzzy ownerId matching only when file-based narrowing is inconclusive.\n  //\n  // Applied regardless of candidate count — the sole same-file candidate may\n  // belong to the wrong class (e.g. super.save() should hit the parent's save,\n  // not the child's own save method in the same file).\n  if (call.callForm === 'member' && call.receiverTypeName) {\n    // D1. Resolve the receiver type\n    const typeResolved = ctx.resolve(call.receiverTypeName, currentFile);\n    if (typeResolved && typeResolved.candidates.length > 0) {\n      const typeNodeIds = new Set(typeResolved.candidates.map(d => d.nodeId));\n      const typeFiles = new Set(typeResolved.candidates.map(d => d.filePath));\n\n      // D2. Widen candidates: same-file tier may miss the parent's method when\n      //     it lives in another file. Query the symbol table directly for all\n      //     global methods with this name, then apply arity/kind filtering.\n      const methodPool = filteredCandidates.length <= 1\n        ? filterCallableCandidates(ctx.symbols.lookupFuzzy(call.calledName), call.argCount, call.callForm)\n        : filteredCandidates;\n\n      // D3. File-based: prefer candidates whose filePath matches the resolved type's file\n      const fileFiltered = methodPool.filter(c => typeFiles.has(c.filePath));\n      if (fileFiltered.length === 1) {\n        return toResolveResult(fileFiltered[0], tiered.tier);\n      }\n\n      // D4. ownerId fallback: narrow by ownerId matching the type's nodeId\n      const pool = fileFiltered.length > 0 ? fileFiltered : methodPool;\n      const ownerFiltered = pool.filter(c => c.ownerId && typeNodeIds.has(c.ownerId));\n      if (ownerFiltered.length === 1) {\n        return toResolveResult(ownerFiltered[0], tiered.tier);\n      }\n      // E. Try overload disambiguation on the narrowed pool\n      if ((fileFiltered.length > 1 || ownerFiltered.length > 1) && overloadHints) {\n        const overloadPool = ownerFiltered.length > 1 ? 
ownerFiltered : fileFiltered;\n        const disambiguated = tryOverloadDisambiguation(overloadPool, overloadHints);\n        if (disambiguated) return toResolveResult(disambiguated, tiered.tier);\n      }\n      if (fileFiltered.length > 1 || ownerFiltered.length > 1) return null;\n    }\n  }\n\n  // E. Overload disambiguation: when multiple candidates survive arity + receiver filtering,\n  // try matching argument literal types against parameter types (Phase P).\n  // Only available on sequential path (has AST); worker path falls through gracefully.\n  if (filteredCandidates.length > 1 && overloadHints) {\n    const disambiguated = tryOverloadDisambiguation(filteredCandidates, overloadHints);\n    if (disambiguated) return toResolveResult(disambiguated, tiered.tier);\n  }\n\n  if (filteredCandidates.length !== 1) return null;\n\n  return toResolveResult(filteredCandidates[0], tiered.tier);\n};\n\n// ── Scope key helpers ────────────────────────────────────────────────────\n// Scope keys use the format \"funcName@startIndex\" (produced by type-env.ts).\n// Source IDs use \"Label:filepath:funcName\" (produced by parse-worker.ts).\n// NUL (\\0) is used as a composite-key separator because it cannot appear\n// in source-code identifiers, preventing ambiguous concatenation.\n//\n// receiverKey stores the FULL scope (funcName@startIndex) to prevent\n// collisions between overloaded methods with the same name in different\n// classes (e.g. User.save@100 and Repo.save@200 are distinct keys).\n// Lookup uses a secondary funcName-only index built in lookupReceiverType.\n\n/** Extract the function name from a scope key (\"funcName@startIndex\" → \"funcName\"). */\nconst extractFuncNameFromScope = (scope: string): string =>\n  scope.slice(0, scope.indexOf('@'));\n\n/** Extract the trailing function name from a sourceId (\"Function:filepath:funcName\" → \"funcName\"). 
*/\nconst extractFuncNameFromSourceId = (sourceId: string): string => {\n  const lastColon = sourceId.lastIndexOf(':');\n  return lastColon >= 0 ? sourceId.slice(lastColon + 1) : '';\n};\n\n/**\n * Build a composite key for receiver type storage.\n * Uses the full scope string (e.g. \"save@100\") to distinguish overloaded\n * methods with the same name in different classes.\n */\nconst receiverKey = (scope: string, varName: string): string =>\n  `${scope}\\0${varName}`;\n\n/**\n * Pre-built secondary index for O(1) receiver type lookups.\n * Built once per file from the verified receiver map, keyed by funcName → varName.\n */\ntype ReceiverTypeEntry =\n  | { readonly kind: 'resolved'; readonly value: string }\n  | { readonly kind: 'ambiguous' };\ntype ReceiverTypeIndex = Map<string, Map<string, ReceiverTypeEntry>>;\n\n/**\n * Build a two-level secondary index from the verified receiver map.\n * The verified map is keyed by `scope\\0varName` where scope is either\n * \"funcName@startIndex\" (inside a function) or \"\" (file level).\n * Index structure: Map<funcName, Map<varName, ReceiverTypeEntry>>\n */\nconst buildReceiverTypeIndex = (map: Map<string, string>): ReceiverTypeIndex => {\n  const index: ReceiverTypeIndex = new Map();\n  for (const [key, typeName] of map) {\n    const nul = key.indexOf('\\0');\n    if (nul < 0) continue;\n    const scope = key.slice(0, nul);\n    const varName = key.slice(nul + 1);\n    if (!varName) continue;\n    if (scope !== '' && !scope.includes('@')) continue;\n    const funcName = scope === '' ? 
'' : scope.slice(0, scope.indexOf('@'));\n\n    let varMap = index.get(funcName);\n    if (!varMap) { varMap = new Map(); index.set(funcName, varMap); }\n\n    const existing = varMap.get(varName);\n    if (existing === undefined) {\n      varMap.set(varName, { kind: 'resolved', value: typeName });\n    } else if (existing.kind === 'resolved' && existing.value !== typeName) {\n      varMap.set(varName, { kind: 'ambiguous' });\n    }\n  }\n  return index;\n};\n\n/**\n * O(1) receiver type lookup using the pre-built secondary index.\n * Returns the unique type name if unambiguous. Falls back to file-level scope.\n */\nconst lookupReceiverType = (\n  index: ReceiverTypeIndex,\n  funcName: string,\n  varName: string,\n): string | undefined => {\n  const funcBucket = index.get(funcName);\n  if (funcBucket) {\n    const entry = funcBucket.get(varName);\n    if (entry?.kind === 'resolved') return entry.value;\n    if (entry?.kind === 'ambiguous') {\n      // Ambiguous in this function scope — try file-level fallback\n      const fileEntry = index.get('')?.get(varName);\n      return fileEntry?.kind === 'resolved' ? 
fileEntry.value : undefined;\n    }\n  }\n  // Fallback: file-level scope (funcName \"\")\n  if (funcName !== '') {\n    const fileEntry = index.get('')?.get(varName);\n    if (fileEntry?.kind === 'resolved') return fileEntry.value;\n  }\n  return undefined;\n};\n\ninterface FieldResolution {\n  typeName: string;      // resolved declared type (continues chain threading)\n  fieldNodeId: string;   // nodeId of the Property symbol (for ACCESSES edge target)\n}\n\n/**\n * Resolve the type that results from accessing `receiverName.fieldName`.\n * Requires declaredType on the Property node (needed for chain walking continuation).\n */\nconst resolveFieldAccessType = (\n  receiverName: string,\n  fieldName: string,\n  filePath: string,\n  ctx: ResolutionContext,\n): FieldResolution | undefined => {\n  const fieldDef = resolveFieldOwnership(receiverName, fieldName, filePath, ctx);\n  if (!fieldDef?.declaredType) return undefined;\n\n  // Use stripNullable (not extractReturnTypeName) — field types like List<User>\n  // should be preserved as-is, not unwrapped to User. 
Only strip nullable wrappers.\n  return {\n    typeName: stripNullable(fieldDef.declaredType),\n    fieldNodeId: fieldDef.nodeId,\n  };\n};\n\n/**\n * Resolve a field's Property node given a receiver type name and field name.\n * Does NOT require declaredType — used by write-access tracking where only the\n * fieldNodeId is needed (no chain continuation).\n */\nconst resolveFieldOwnership = (\n  receiverName: string,\n  fieldName: string,\n  filePath: string,\n  ctx: ResolutionContext,\n): { nodeId: string; declaredType?: string } | undefined => {\n  const typeResolved = ctx.resolve(receiverName, filePath);\n  if (!typeResolved) return undefined;\n  const classDef = typeResolved.candidates.find(\n    d => d.type === 'Class' || d.type === 'Struct' || d.type === 'Interface'\n      || d.type === 'Enum' || d.type === 'Record' || d.type === 'Impl',\n  );\n  if (!classDef) return undefined;\n\n  return ctx.symbols.lookupFieldByOwner(classDef.nodeId, fieldName) ?? undefined;\n};\n\n/**\n * Create a deduplicated ACCESSES edge emitter for a single source node.\n * Each (sourceId, fieldNodeId) pair is emitted at most once per source.\n */\nconst makeAccessEmitter = (\n  graph: KnowledgeGraph,\n  sourceId: string,\n): OnFieldResolved => {\n  const emitted = new Set<string>();\n  return (fieldNodeId: string): void => {\n    const key = `${sourceId}\\0${fieldNodeId}`;\n    if (emitted.has(key)) return;\n    emitted.add(key);\n\n    graph.addRelationship({\n      id: generateId('ACCESSES', `${sourceId}:${fieldNodeId}:read`),\n      sourceId,\n      targetId: fieldNodeId,\n      type: 'ACCESSES',\n      confidence: 1.0,\n      reason: 'read',\n    });\n  };\n};\n\n/**\n * Walk a pre-built mixed chain of field/call steps, threading the current type\n * through each step and returning the final resolved type.\n *\n * Returns `undefined` if any step cannot be resolved (chain is broken).\n * The caller is responsible for seeding `startType` from its own context\n * (TypeEnv, 
constructor bindings, or static-class fallback).\n */\ntype OnFieldResolved = (fieldNodeId: string) => void;\n\nconst walkMixedChain = (\n  chain: MixedChainStep[],\n  startType: string,\n  filePath: string,\n  ctx: ResolutionContext,\n  onFieldResolved?: OnFieldResolved,\n): string | undefined => {\n  let currentType: string | undefined = startType;\n  for (const step of chain) {\n    if (!currentType) break;\n    if (step.kind === 'field') {\n      const resolved = resolveFieldAccessType(currentType, step.name, filePath, ctx);\n      if (!resolved) { currentType = undefined; break; }\n      onFieldResolved?.(resolved.fieldNodeId);\n      currentType = resolved.typeName;\n    } else {\n      // Ruby/Python: property access is syntactically identical to method calls.\n      // Try field resolution first — if the name is a known property with declaredType,\n      // use that type directly. Otherwise fall back to method call resolution.\n      const fieldResolved = resolveFieldAccessType(currentType, step.name, filePath, ctx);\n      if (fieldResolved) {\n        onFieldResolved?.(fieldResolved.fieldNodeId);\n        currentType = fieldResolved.typeName;\n        continue;\n      }\n      const resolved = resolveCallTarget(\n        { calledName: step.name, callForm: 'member', receiverTypeName: currentType },\n        filePath,\n        ctx,\n      );\n      if (!resolved) {\n        // Stdlib passthrough: unwrap(), clone(), etc. 
preserve the receiver type\n        if (TYPE_PRESERVING_METHODS.has(step.name)) continue;\n        currentType = undefined; break;\n      }\n      if (!resolved.returnType) { currentType = undefined; break; }\n      const retType = extractReturnTypeName(resolved.returnType);\n      if (!retType) { currentType = undefined; break; }\n      currentType = retType;\n    }\n  }\n  return currentType;\n};\n\n/**\n * Fast path: resolve pre-extracted call sites from workers.\n * No AST parsing — workers already extracted calledName + sourceId.\n */\nexport const processCallsFromExtracted = async (\n  graph: KnowledgeGraph,\n  extractedCalls: ExtractedCall[],\n  ctx: ResolutionContext,\n  onProgress?: (current: number, total: number) => void,\n  constructorBindings?: FileConstructorBindings[],\n) => {\n  // Scope-aware receiver types: a per-file two-level index (filePath → funcName → varName → type entry)\n  // built from the verified `scope\\0varName` constructor-binding map.\n  // The scope dimension prevents collisions when two functions in the same file\n  // have same-named locals pointing to different constructor types.\n  const fileReceiverTypes = new Map<string, ReceiverTypeIndex>();\n  if (constructorBindings) {\n    for (const { filePath, bindings } of constructorBindings) {\n      const verified = verifyConstructorBindings(bindings, filePath, ctx, graph);\n      if (verified.size > 0) {\n        fileReceiverTypes.set(filePath, buildReceiverTypeIndex(verified));\n      }\n    }\n  }\n\n  const byFile = new Map<string, ExtractedCall[]>();\n  for (const call of extractedCalls) {\n    let list = byFile.get(call.filePath);\n    if (!list) { list = []; byFile.set(call.filePath, list); }\n    list.push(call);\n  }\n  const totalFiles = byFile.size;\n  let filesProcessed = 0;\n\n  for (const [filePath, calls] of byFile) {\n    filesProcessed++;\n    if (filesProcessed % 100 === 0) {\n      onProgress?.(filesProcessed, totalFiles);\n      await yieldToEventLoop();\n    }\n\n    ctx.enableCache(filePath);\n    const receiverMap = 
fileReceiverTypes.get(filePath);\n\n    for (const call of calls) {\n      let effectiveCall = call;\n\n      // Step 1: resolve receiver type from constructor bindings\n      if (!call.receiverTypeName && call.receiverName && receiverMap) {\n        const callFuncName = extractFuncNameFromSourceId(call.sourceId);\n        const resolvedType = lookupReceiverType(receiverMap, callFuncName, call.receiverName);\n        if (resolvedType) {\n          effectiveCall = { ...call, receiverTypeName: resolvedType };\n        }\n      }\n\n      // Step 1b: class-as-receiver for static method calls (e.g. UserService.find_user())\n      if (!effectiveCall.receiverTypeName && effectiveCall.receiverName && effectiveCall.callForm === 'member') {\n        const typeResolved = ctx.resolve(effectiveCall.receiverName, effectiveCall.filePath);\n        if (typeResolved && typeResolved.candidates.some(\n          d => d.type === 'Class' || d.type === 'Interface' || d.type === 'Struct' || d.type === 'Enum',\n        )) {\n          effectiveCall = { ...effectiveCall, receiverTypeName: effectiveCall.receiverName };\n        }\n      }\n\n      // Step 1c: mixed chain resolution (field, call, or interleaved — e.g. svc.getUser().address.save()).\n      // Runs whenever receiverMixedChain is present. 
Steps 1/1b may have resolved the base receiver\n      // type already; that type is used as the chain's starting point.\n      if (effectiveCall.receiverMixedChain?.length) {\n        // Use the already-resolved base type (from Steps 1/1b) or look it up now.\n        let currentType: string | undefined = effectiveCall.receiverTypeName;\n        if (!currentType && effectiveCall.receiverName && receiverMap) {\n          const callFuncName = extractFuncNameFromSourceId(effectiveCall.sourceId);\n          currentType = lookupReceiverType(receiverMap, callFuncName, effectiveCall.receiverName);\n        }\n        if (!currentType && effectiveCall.receiverName) {\n          const typeResolved = ctx.resolve(effectiveCall.receiverName, effectiveCall.filePath);\n          if (typeResolved?.candidates.some(d =>\n            d.type === 'Class' || d.type === 'Interface' || d.type === 'Struct' || d.type === 'Enum',\n          )) {\n            currentType = effectiveCall.receiverName;\n          }\n        }\n        if (currentType) {\n          const walkedType = walkMixedChain(\n            effectiveCall.receiverMixedChain, currentType, effectiveCall.filePath, ctx,\n            makeAccessEmitter(graph, effectiveCall.sourceId),\n          );\n          if (walkedType) {\n            effectiveCall = { ...effectiveCall, receiverTypeName: walkedType };\n          }\n        }\n      }\n\n      const resolved = resolveCallTarget(effectiveCall, effectiveCall.filePath, ctx);\n      if (!resolved) continue;\n\n      const relId = generateId('CALLS', `${effectiveCall.sourceId}:${effectiveCall.calledName}->${resolved.nodeId}`);\n      graph.addRelationship({\n        id: relId,\n        sourceId: effectiveCall.sourceId,\n        targetId: resolved.nodeId,\n        type: 'CALLS',\n        confidence: resolved.confidence,\n        reason: resolved.reason,\n      });\n    }\n\n    ctx.clearCache();\n  }\n\n  onProgress?.(totalFiles, totalFiles);\n};\n\n/**\n * Resolve pre-extracted 
field write assignments to ACCESSES {reason: 'write'} edges.\n * Accepts optional constructorBindings for return-type-aware receiver inference,\n * mirroring processCallsFromExtracted's verified binding lookup.\n */\nexport const processAssignmentsFromExtracted = (\n  graph: KnowledgeGraph,\n  assignments: ExtractedAssignment[],\n  ctx: ResolutionContext,\n  constructorBindings?: FileConstructorBindings[],\n): void => {\n  // Build per-file receiver type indexes from verified constructor bindings\n  const fileReceiverTypes = new Map<string, ReceiverTypeIndex>();\n  if (constructorBindings) {\n    for (const { filePath, bindings } of constructorBindings) {\n      const verified = verifyConstructorBindings(bindings, filePath, ctx, graph);\n      if (verified.size > 0) {\n        fileReceiverTypes.set(filePath, buildReceiverTypeIndex(verified));\n      }\n    }\n  }\n\n  for (const asn of assignments) {\n    // Resolve the receiver type\n    let receiverTypeName = asn.receiverTypeName;\n    // Tier 2: verified constructor bindings (return-type inference)\n    if (!receiverTypeName && fileReceiverTypes.size > 0) {\n      const receiverMap = fileReceiverTypes.get(asn.filePath);\n      if (receiverMap) {\n        const funcName = extractFuncNameFromSourceId(asn.sourceId);\n        receiverTypeName = lookupReceiverType(receiverMap, funcName, asn.receiverText);\n      }\n    }\n    // Tier 3: static class-as-receiver fallback\n    if (!receiverTypeName) {\n      const resolved = ctx.resolve(asn.receiverText, asn.filePath);\n      if (resolved?.candidates.some(d =>\n        d.type === 'Class' || d.type === 'Struct' || d.type === 'Interface'\n          || d.type === 'Enum' || d.type === 'Record' || d.type === 'Impl',\n      )) {\n        receiverTypeName = asn.receiverText;\n      }\n    }\n    if (!receiverTypeName) continue;\n    const fieldOwner = resolveFieldOwnership(receiverTypeName, asn.propertyName, asn.filePath, ctx);\n    if (!fieldOwner) continue;\n    
graph.addRelationship({\n      id: generateId('ACCESSES', `${asn.sourceId}:${fieldOwner.nodeId}:write`),\n      sourceId: asn.sourceId,\n      targetId: fieldOwner.nodeId,\n      type: 'ACCESSES',\n      confidence: 1.0,\n      reason: 'write',\n    });\n  }\n};\n\n/**\n * Resolve pre-extracted Laravel routes to CALLS edges from route files to controller methods.\n */\nexport const processRoutesFromExtracted = async (\n  graph: KnowledgeGraph,\n  extractedRoutes: ExtractedRoute[],\n  ctx: ResolutionContext,\n  onProgress?: (current: number, total: number) => void,\n) => {\n  for (let i = 0; i < extractedRoutes.length; i++) {\n    const route = extractedRoutes[i];\n    if (i % 50 === 0) {\n      onProgress?.(i, extractedRoutes.length);\n      await yieldToEventLoop();\n    }\n\n    if (!route.controllerName || !route.methodName) continue;\n\n    const controllerResolved = ctx.resolve(route.controllerName, route.filePath);\n    if (!controllerResolved || controllerResolved.candidates.length === 0) continue;\n    if (controllerResolved.tier === 'global' && controllerResolved.candidates.length > 1) continue;\n\n    const controllerDef = controllerResolved.candidates[0];\n    const confidence = TIER_CONFIDENCE[controllerResolved.tier];\n\n    const methodResolved = ctx.resolve(route.methodName, controllerDef.filePath);\n    const methodId = methodResolved?.tier === 'same-file' ? 
methodResolved.candidates[0]?.nodeId : undefined;\n    const sourceId = generateId('File', route.filePath);\n\n    if (!methodId) {\n      const guessedId = generateId('Method', `${controllerDef.filePath}:${route.methodName}`);\n      const relId = generateId('CALLS', `${sourceId}:route->${guessedId}`);\n      graph.addRelationship({\n        id: relId,\n        sourceId,\n        targetId: guessedId,\n        type: 'CALLS',\n        confidence: confidence * 0.8,\n        reason: 'laravel-route',\n      });\n      continue;\n    }\n\n    const relId = generateId('CALLS', `${sourceId}:route->${methodId}`);\n    graph.addRelationship({\n      id: relId,\n      sourceId,\n      targetId: methodId,\n      type: 'CALLS',\n      confidence,\n      reason: 'laravel-route',\n    });\n  }\n\n  onProgress?.(extractedRoutes.length, extractedRoutes.length);\n};\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/call-routing.ts",
    "content": "/**\n * Shared Ruby call routing logic.\n *\n * Ruby expresses imports, heritage (mixins), and property definitions as\n * method calls rather than syntax-level constructs. This module provides a\n * routing function used by the CLI call-processor, CLI parse-worker, and\n * the web call-processor so that the classification logic lives in one place.\n *\n * NOTE: This file is intentionally duplicated in gitnexus-web/ because the\n * two packages have separate build targets (Node native vs WASM/browser).\n * Keep both copies in sync until a shared package is introduced.\n */\n\nimport { SupportedLanguages } from '../../config/supported-languages.js';\n\n// ── Call routing dispatch table ─────────────────────────────────────────────\n\n/** null = this call was not routed; fall through to default call handling */\nexport type CallRoutingResult = RubyCallRouting | null;\n\nexport type CallRouter = (\n  calledName: string,\n  callNode: any,\n) => CallRoutingResult;\n\n/** No-op router: returns null for every call (passthrough to normal processing) */\nconst noRouting: CallRouter = () => null;\n\n/** Per-language call routing. 
noRouting = no special routing (normal call processing) */\nexport const callRouters = {\n  [SupportedLanguages.JavaScript]: noRouting,\n  [SupportedLanguages.TypeScript]: noRouting,\n  [SupportedLanguages.Python]: noRouting,\n  [SupportedLanguages.Java]: noRouting,\n  [SupportedLanguages.Kotlin]: noRouting,\n  [SupportedLanguages.Go]: noRouting,\n  [SupportedLanguages.Rust]: noRouting,\n  [SupportedLanguages.CSharp]: noRouting,\n  [SupportedLanguages.PHP]: noRouting,\n  [SupportedLanguages.Swift]: noRouting,\n  [SupportedLanguages.CPlusPlus]: noRouting,\n  [SupportedLanguages.C]: noRouting,\n  [SupportedLanguages.Ruby]: routeRubyCall,\n} satisfies Record<SupportedLanguages, CallRouter>;\n\n// ── Result types ────────────────────────────────────────────────────────────\n\nexport type RubyCallRouting =\n  | { kind: 'import'; importPath: string; isRelative: boolean }\n  | { kind: 'heritage'; items: RubyHeritageItem[] }\n  | { kind: 'properties'; items: RubyPropertyItem[] }\n  | { kind: 'call' }\n  | { kind: 'skip' };\n\nexport interface RubyHeritageItem {\n  enclosingClass: string;\n  mixinName: string;\n  heritageKind: 'include' | 'extend' | 'prepend';\n}\n\nexport type RubyAccessorType = 'attr_accessor' | 'attr_reader' | 'attr_writer';\n\nexport interface RubyPropertyItem {\n  propName: string;\n  accessorType: RubyAccessorType;\n  startLine: number;\n  endLine: number;\n  /** YARD @return [Type] annotation preceding the attr_accessor call */\n  declaredType?: string;\n}\n\n// ── Pre-allocated singletons for common return values ────────────────────────\nconst CALL_RESULT: RubyCallRouting = { kind: 'call' };\nconst SKIP_RESULT: RubyCallRouting = { kind: 'skip' };\n\n/** Max depth for parent-walking loops to prevent pathological AST traversals */\nconst MAX_PARENT_DEPTH = 50;\n\n// ── Routing function ────────────────────────────────────────────────────────\n\n/**\n * Classify a Ruby call node and extract its semantic payload.\n *\n * @param calledName - The method 
name (e.g. 'require', 'include', 'attr_accessor')\n * @param callNode   - The tree-sitter `call` AST node\n * @returns A discriminated union describing the call's semantic role\n */\nexport function routeRubyCall(calledName: string, callNode: any): RubyCallRouting {\n  // ── require / require_relative → import ─────────────────────────────────\n  if (calledName === 'require' || calledName === 'require_relative') {\n    const argList = callNode.childForFieldName?.('arguments');\n    const stringNode = argList?.children?.find((c: any) => c.type === 'string');\n    const contentNode = stringNode?.children?.find((c: any) => c.type === 'string_content');\n    if (!contentNode) return SKIP_RESULT;\n\n    let importPath: string = contentNode.text;\n    // Validate: reject null bytes, control chars, excessively long paths\n    if (!importPath || importPath.length > 1024 || /[\\x00-\\x1f]/.test(importPath)) {\n      return SKIP_RESULT;\n    }\n    const isRelative = calledName === 'require_relative';\n    if (isRelative && !importPath.startsWith('.')) {\n      importPath = './' + importPath;\n    }\n    return { kind: 'import', importPath, isRelative };\n  }\n\n  // ── include / extend / prepend → heritage (mixin) ──────────────────────\n  if (calledName === 'include' || calledName === 'extend' || calledName === 'prepend') {\n    let enclosingClass: string | null = null;\n    let current = callNode.parent;\n    let depth = 0;\n    while (current && ++depth <= MAX_PARENT_DEPTH) {\n      if (current.type === 'class' || current.type === 'module') {\n        const nameNode = current.childForFieldName?.('name');\n        if (nameNode) { enclosingClass = nameNode.text; break; }\n      }\n      current = current.parent;\n    }\n    if (!enclosingClass) return SKIP_RESULT;\n\n    const items: RubyHeritageItem[] = [];\n    const argList = callNode.childForFieldName?.('arguments');\n    for (const arg of (argList?.children ?? 
[])) {\n      if (arg.type === 'constant' || arg.type === 'scope_resolution') {\n        items.push({ enclosingClass, mixinName: arg.text, heritageKind: calledName as 'include' | 'extend' | 'prepend' });\n      }\n    }\n    return items.length > 0 ? { kind: 'heritage', items } : SKIP_RESULT;\n  }\n\n  // ── attr_accessor / attr_reader / attr_writer → property definitions ───\n  if (calledName === 'attr_accessor' || calledName === 'attr_reader' || calledName === 'attr_writer') {\n    // Extract YARD @return [Type] from preceding comment (e.g. `# @return [Address]`)\n    let yardType: string | undefined;\n    let sibling = callNode.previousSibling;\n    while (sibling) {\n      if (sibling.type === 'comment') {\n        const match = /@return\\s+\\[([^\\]]+)\\]/.exec(sibling.text);\n        if (match) {\n          const raw = match[1].trim();\n          // Keep only the leading simple type name: \"User\" → \"User\", \"Array<User>\" → \"Array\"\n          const simple = raw.match(/^([A-Z]\\w*)/);\n          if (simple) yardType = simple[1];\n          break;\n        }\n      } else if (sibling.isNamed) {\n        break; // stop at non-comment named sibling\n      }\n      sibling = sibling.previousSibling;\n    }\n\n    const items: RubyPropertyItem[] = [];\n    const argList = callNode.childForFieldName?.('arguments');\n    for (const arg of (argList?.children ?? [])) {\n      if (arg.type === 'simple_symbol') {\n        items.push({\n          propName: arg.text.startsWith(':') ? arg.text.slice(1) : arg.text,\n          accessorType: calledName as RubyAccessorType,\n          startLine: arg.startPosition.row,\n          endLine: arg.endPosition.row,\n          ...(yardType ? { declaredType: yardType } : {}),\n        });\n      }\n    }\n    return items.length > 0 ? { kind: 'properties', items } : SKIP_RESULT;\n  }\n\n  // ── Everything else → regular call ─────────────────────────────────────\n  return CALL_RESULT;\n}\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/cluster-enricher.ts",
    "content": "/**\n * Cluster Enricher\n * \n * LLM-based enrichment for community clusters.\n * Generates semantic names, keywords, and descriptions using an LLM.\n */\n\nimport { CommunityNode } from './community-processor.js';\n\n// ============================================================================\n// TYPES\n// ============================================================================\n\nexport interface ClusterEnrichment {\n  name: string;\n  keywords: string[];\n  description: string;\n}\n\nexport interface EnrichmentResult {\n  enrichments: Map<string, ClusterEnrichment>;\n  tokensUsed: number;\n}\n\nexport interface LLMClient {\n  generate: (prompt: string) => Promise<string>;\n}\n\nexport interface ClusterMemberInfo {\n  name: string;\n  filePath: string;\n  type: string; // 'Function' | 'Class' | 'Method' | 'Interface'\n}\n\n// ============================================================================\n// PROMPT TEMPLATE\n// ============================================================================\n\nconst buildEnrichmentPrompt = (\n  members: ClusterMemberInfo[],\n  heuristicLabel: string\n): string => {\n  // Limit to first 20 members to control token usage\n  const limitedMembers = members.slice(0, 20);\n  \n  const memberList = limitedMembers\n    .map(m => `${m.name} (${m.type})`)\n    .join(', ');\n  \n  return `Analyze this code cluster and provide a semantic name and short description.\n\nHeuristic: \"${heuristicLabel}\"\nMembers: ${memberList}${members.length > 20 ? 
` (+${members.length - 20} more)` : ''}\n\nReply with JSON only:\n{\"name\": \"2-4 word semantic name\", \"keywords\": [\"3-5 short keywords\"], \"description\": \"One sentence describing purpose\"}`\n};\n\n// ============================================================================\n// PARSE LLM RESPONSE\n// ============================================================================\n\nconst parseEnrichmentResponse = (\n  response: string,\n  fallbackLabel: string\n): ClusterEnrichment => {\n  try {\n    // Extract JSON from response (handles markdown code blocks)\n    const jsonMatch = response.match(/\\{[\\s\\S]*\\}/);\n    if (!jsonMatch) {\n      throw new Error('No JSON found in response');\n    }\n    \n    const parsed = JSON.parse(jsonMatch[0]);\n    \n    return {\n      name: parsed.name || fallbackLabel,\n      keywords: Array.isArray(parsed.keywords) ? parsed.keywords : [],\n      description: parsed.description || '',\n    };\n  } catch {\n    // Fallback if parsing fails\n    return {\n      name: fallbackLabel,\n      keywords: [],\n      description: '',\n    };\n  }\n};\n\n// ============================================================================\n// MAIN ENRICHMENT FUNCTION\n// ============================================================================\n\n/**\n * Enrich clusters with LLM-generated names, keywords, and descriptions\n * \n * @param communities - Community nodes to enrich\n * @param memberMap - Map of communityId -> member info\n * @param llmClient - LLM client for generation\n * @param onProgress - Progress callback\n */\nexport const enrichClusters = async (\n  communities: CommunityNode[],\n  memberMap: Map<string, ClusterMemberInfo[]>,\n  llmClient: LLMClient,\n  onProgress?: (current: number, total: number) => void\n): Promise<EnrichmentResult> => {\n  const enrichments = new Map<string, ClusterEnrichment>();\n  let tokensUsed = 0;\n  \n  for (let i = 0; i < communities.length; i++) {\n    const community = communities[i];\n    const members = 
memberMap.get(community.id) || [];\n    \n    onProgress?.(i + 1, communities.length);\n    \n    if (members.length === 0) {\n      // No members, use heuristic\n      enrichments.set(community.id, {\n        name: community.heuristicLabel,\n        keywords: [],\n        description: '',\n      });\n      continue;\n    }\n    \n    try {\n      const prompt = buildEnrichmentPrompt(members, community.heuristicLabel);\n      const response = await llmClient.generate(prompt);\n      \n      // Rough token estimate\n      tokensUsed += prompt.length / 4 + response.length / 4;\n      \n      const enrichment = parseEnrichmentResponse(response, community.heuristicLabel);\n      enrichments.set(community.id, enrichment);\n    } catch (error) {\n      // On error, fallback to heuristic\n      console.warn(`Failed to enrich cluster ${community.id}:`, error);\n      enrichments.set(community.id, {\n        name: community.heuristicLabel,\n        keywords: [],\n        description: '',\n      });\n    }\n  }\n  \n  return { enrichments, tokensUsed };\n};\n\n// ============================================================================\n// BATCH ENRICHMENT (more efficient)\n// ============================================================================\n\n/**\n * Enrich multiple clusters in a single LLM call (batch mode)\n * More efficient for token usage but requires larger context window\n */\nexport const enrichClustersBatch = async (\n  communities: CommunityNode[],\n  memberMap: Map<string, ClusterMemberInfo[]>,\n  llmClient: LLMClient,\n  batchSize: number = 5,\n  onProgress?: (current: number, total: number) => void\n): Promise<EnrichmentResult> => {\n  const enrichments = new Map<string, ClusterEnrichment>();\n  let tokensUsed = 0;\n  \n  // Process in batches\n  for (let i = 0; i < communities.length; i += batchSize) {\n    // Report progress\n    onProgress?.(Math.min(i + batchSize, communities.length), communities.length);\n\n    const batch = 
communities.slice(i, i + batchSize);\n    \n    const batchPrompt = batch.map((community, idx) => {\n      const members = memberMap.get(community.id) || [];\n      const limitedMembers = members.slice(0, 15);\n      const memberList = limitedMembers\n        .map(m => `${m.name} (${m.type})`)\n        .join(', ');\n      \n      return `Cluster ${idx + 1} (id: ${community.id}):\nHeuristic: \"${community.heuristicLabel}\"\nMembers: ${memberList}`;\n    }).join('\\n\\n');\n    \n    const prompt = `Analyze these code clusters and generate semantic names, keywords, and descriptions.\n\n${batchPrompt}\n\nOutput JSON array:\n[\n  {\"id\": \"comm_X\", \"name\": \"...\", \"keywords\": [...], \"description\": \"...\"},\n  ...\n]`;\n    \n    try {\n      const response = await llmClient.generate(prompt);\n      tokensUsed += prompt.length / 4 + response.length / 4;\n      \n      // Parse batch response\n      const jsonMatch = response.match(/\\[[\\s\\S]*\\]/);\n      if (jsonMatch) {\n        const parsed = JSON.parse(jsonMatch[0]) as Array<{\n          id: string;\n          name: string;\n          keywords: string[];\n          description: string;\n        }>;\n        \n        for (const item of parsed) {\n          enrichments.set(item.id, {\n            name: item.name,\n            keywords: item.keywords || [],\n            description: item.description || '',\n          });\n        }\n      }\n    } catch (error) {\n      console.warn('Batch enrichment failed, falling back to heuristics:', error);\n      // Fallback for this batch\n      for (const community of batch) {\n        enrichments.set(community.id, {\n          name: community.heuristicLabel,\n          keywords: [],\n          description: '',\n        });\n      }\n    }\n  }\n  \n  // Fill in any missing communities\n  for (const community of communities) {\n    if (!enrichments.has(community.id)) {\n      enrichments.set(community.id, {\n        name: community.heuristicLabel,\n        
keywords: [],\n        description: '',\n      });\n    }\n  }\n  \n  return { enrichments, tokensUsed };\n};\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/community-processor.ts",
    "content": "/**\n * Community Detection Processor\n * \n * Uses the Leiden algorithm (via graphology-communities-leiden) to detect\n * communities/clusters in the code graph based on CALLS relationships.\n * \n * Communities represent groups of code that work together frequently,\n * helping agents navigate the codebase by functional area rather than file structure.\n */\n\n// NOTE: The Leiden algorithm source is vendored from graphology's repo\n// (src/communities-leiden) because it was never published to npm.\n// We use createRequire to load the CommonJS vendored files in ESM context.\nimport Graph from 'graphology';\nimport { createRequire } from 'node:module';\nimport { fileURLToPath } from 'node:url';\nimport { dirname, resolve } from 'node:path';\nimport { KnowledgeGraph, NodeLabel } from '../graph/types.js';\n\nconst __filename = fileURLToPath(import.meta.url);\nconst __dirname = dirname(__filename);\n// Navigate to package root (works from both src/ and dist/)\nconst leidenPath = resolve(__dirname, '..', '..', '..', 'vendor', 'leiden', 'index.cjs');\nconst _require = createRequire(import.meta.url);\nconst leiden = _require(leidenPath);\n\n// ============================================================================\n// TYPES\n// ============================================================================\n\nexport interface CommunityNode {\n  id: string;\n  label: string;\n  heuristicLabel: string;\n  cohesion: number;\n  symbolCount: number;\n}\n\nexport interface CommunityMembership {\n  nodeId: string;\n  communityId: string;\n}\n\nexport interface CommunityDetectionResult {\n  communities: CommunityNode[];\n  memberships: CommunityMembership[];\n  stats: {\n    totalCommunities: number;\n    modularity: number;\n    nodesProcessed: number;\n  };\n}\n\n// ============================================================================\n// COMMUNITY COLORS (for visualization)\n// 
============================================================================\n\nexport const COMMUNITY_COLORS = [\n  '#ef4444', // red\n  '#f97316', // orange\n  '#eab308', // yellow\n  '#22c55e', // green\n  '#06b6d4', // cyan\n  '#3b82f6', // blue\n  '#8b5cf6', // violet\n  '#d946ef', // fuchsia\n  '#ec4899', // pink\n  '#f43f5e', // rose\n  '#14b8a6', // teal\n  '#84cc16', // lime\n];\n\nexport const getCommunityColor = (communityIndex: number): string => {\n  return COMMUNITY_COLORS[communityIndex % COMMUNITY_COLORS.length];\n};\n\n// ============================================================================\n// MAIN PROCESSOR\n// ============================================================================\n\n/**\n * Detect communities in the knowledge graph using Leiden algorithm\n * \n * This runs AFTER all relationships (CALLS, IMPORTS, etc.) have been built.\n * It uses primarily CALLS edges to cluster code that works together.\n */\nexport const processCommunities = async (\n  knowledgeGraph: KnowledgeGraph,\n  onProgress?: (message: string, progress: number) => void\n): Promise<CommunityDetectionResult> => {\n  onProgress?.('Building graph for community detection...', 0);\n\n  // Pre-check total symbol count to determine large-graph mode before building\n  let symbolCount = 0;\n  knowledgeGraph.forEachNode(node => {\n    if (node.label === 'Function' || node.label === 'Class' || node.label === 'Method' || node.label === 'Interface') {\n      symbolCount++;\n    }\n  });\n  const isLarge = symbolCount > 10_000;\n\n  const graph = buildGraphologyGraph(knowledgeGraph, isLarge);\n\n  if (graph.order === 0) {\n    return {\n      communities: [],\n      memberships: [],\n      stats: { totalCommunities: 0, modularity: 0, nodesProcessed: 0 }\n    };\n  }\n\n  const nodeCount = graph.order;\n  const edgeCount = graph.size;\n\n  onProgress?.(`Running Leiden on ${nodeCount} nodes, ${edgeCount} edges${isLarge ? 
` (filtered from ${symbolCount} symbols)` : ''}...`, 30);\n\n  // Large graphs: higher resolution + capped iterations (3, close to Python leidenalg's default of 2).\n  // The first 2 iterations capture ~95%+ of modularity; additional iterations have diminishing returns.\n  // Timeout: intended to abort after 60s on pathological graph structures. Caveat: detailed()\n  // runs synchronously, so the race cannot preempt an in-progress run; offloading Leiden to a\n  // worker thread would be required for a hard timeout.\n  const LEIDEN_TIMEOUT_MS = 60_000;\n  let details: any;\n  try {\n    details = await Promise.race([\n      Promise.resolve((leiden as any).detailed(graph, {\n        resolution: isLarge ? 2.0 : 1.0,\n        maxIterations: isLarge ? 3 : 0,\n      })),\n      new Promise((_, reject) =>\n        setTimeout(() => reject(new Error('Leiden timeout')), LEIDEN_TIMEOUT_MS)\n      ),\n    ]);\n  } catch (e: any) {\n    if (e.message === 'Leiden timeout') {\n      onProgress?.('Community detection timed out, using fallback...', 60);\n      // Fallback: assign all nodes to community 0\n      const communities: Record<string, number> = {};\n      graph.forEachNode((node: string) => { communities[node] = 0; });\n      details = { communities, count: 1, modularity: 0 };\n    } else {\n      throw e;\n    }\n  }\n\n  onProgress?.(`Found ${details.count} communities...`, 60);\n\n  // Step 3: Create community nodes with heuristic labels\n  const communityNodes = createCommunityNodes(\n    details.communities as Record<string, number>,\n    details.count,\n    graph,\n    knowledgeGraph\n  );\n\n  onProgress?.('Creating membership edges...', 80);\n\n  // Step 4: Create membership mappings\n  const memberships: CommunityMembership[] = [];\n  Object.entries(details.communities).forEach(([nodeId, communityNum]) => {\n    memberships.push({\n      nodeId,\n      communityId: `comm_${communityNum}`,\n    });\n  });\n\n  onProgress?.('Community detection complete!', 100);\n\n  return {\n    communities: communityNodes,\n    memberships,\n    stats: {\n      totalCommunities: details.count,\n      modularity: details.modularity,\n      nodesProcessed: 
graph.order,\n    }\n  };\n};\n\n// ============================================================================\n// HELPER: Build graphology graph from knowledge graph\n// ============================================================================\n\n/**\n * Build a graphology graph containing only symbol nodes and clustering edges.\n * For large graphs (>10K symbols), filter out low-confidence fuzzy-global edges\n * and degree-1 nodes that add noise and massively increase Leiden runtime.\n */\nconst MIN_CONFIDENCE_LARGE = 0.5;\n\nconst buildGraphologyGraph = (knowledgeGraph: KnowledgeGraph, isLarge: boolean): any => {\n  const graph = new (Graph as any)({ type: 'undirected', allowSelfLoops: false });\n\n  const symbolTypes = new Set<NodeLabel>(['Function', 'Class', 'Method', 'Interface']);\n  const clusteringRelTypes = new Set(['CALLS', 'EXTENDS', 'IMPLEMENTS']);\n  const connectedNodes = new Set<string>();\n  const nodeDegree = new Map<string, number>();\n\n  knowledgeGraph.forEachRelationship(rel => {\n    if (!clusteringRelTypes.has(rel.type) || rel.sourceId === rel.targetId) return;\n    if (isLarge && rel.confidence < MIN_CONFIDENCE_LARGE) return;\n\n    connectedNodes.add(rel.sourceId);\n    connectedNodes.add(rel.targetId);\n    nodeDegree.set(rel.sourceId, (nodeDegree.get(rel.sourceId) || 0) + 1);\n    nodeDegree.set(rel.targetId, (nodeDegree.get(rel.targetId) || 0) + 1);\n  });\n\n  knowledgeGraph.forEachNode(node => {\n    if (!symbolTypes.has(node.label) || !connectedNodes.has(node.id)) return;\n    // For large graphs, skip degree-1 nodes — they just become singletons or\n    // get absorbed into their single neighbor's community, but cost iteration time.\n    if (isLarge && (nodeDegree.get(node.id) || 0) < 2) return;\n\n    graph.addNode(node.id, {\n      name: node.properties.name,\n      filePath: node.properties.filePath,\n      type: node.label,\n    });\n  });\n\n  knowledgeGraph.forEachRelationship(rel => {\n    if 
(!clusteringRelTypes.has(rel.type)) return;\n    if (isLarge && rel.confidence < MIN_CONFIDENCE_LARGE) return;\n    if (graph.hasNode(rel.sourceId) && graph.hasNode(rel.targetId) && rel.sourceId !== rel.targetId) {\n      if (!graph.hasEdge(rel.sourceId, rel.targetId)) {\n        graph.addEdge(rel.sourceId, rel.targetId);\n      }\n    }\n  });\n\n  return graph;\n};\n\n// ============================================================================\n// HELPER: Create community nodes with heuristic labels\n// ============================================================================\n\n/**\n * Create Community nodes with auto-generated labels based on member file paths\n */\nconst createCommunityNodes = (\n  communities: Record<string, number>,\n  communityCount: number,\n  graph: any,\n  knowledgeGraph: KnowledgeGraph\n): CommunityNode[] => {\n  // Group node IDs by community\n  const communityMembers = new Map<number, string[]>();\n  \n  Object.entries(communities).forEach(([nodeId, commNum]) => {\n    if (!communityMembers.has(commNum)) {\n      communityMembers.set(commNum, []);\n    }\n    communityMembers.get(commNum)!.push(nodeId);\n  });\n\n  // Build node lookup for file paths\n  const nodePathMap = new Map<string, string>();\n  for (const node of knowledgeGraph.iterNodes()) {\n    if (node.properties.filePath) {\n      nodePathMap.set(node.id, node.properties.filePath);\n    }\n  }\n\n  // Create community nodes - SKIP SINGLETONS (isolated nodes)\n  const communityNodes: CommunityNode[] = [];\n  \n  communityMembers.forEach((memberIds, commNum) => {\n    // Skip singleton communities - they're just isolated nodes\n    if (memberIds.length < 2) return;\n    \n    const heuristicLabel = generateHeuristicLabel(memberIds, nodePathMap, graph, commNum);\n    \n    communityNodes.push({\n      id: `comm_${commNum}`,\n      label: heuristicLabel,\n      heuristicLabel,\n      cohesion: calculateCohesion(memberIds, graph),\n      symbolCount: memberIds.length,\n  
  });\n  });\n\n  // Sort by size descending\n  communityNodes.sort((a, b) => b.symbolCount - a.symbolCount);\n\n  return communityNodes;\n};\n\n// ============================================================================\n// HELPER: Generate heuristic label from folder patterns\n// ============================================================================\n\n/**\n * Generate a human-readable label from the most common folder name in the community\n */\nconst generateHeuristicLabel = (\n  memberIds: string[],\n  nodePathMap: Map<string, string>,\n  graph: any,\n  commNum: number\n): string => {\n  // Collect folder names from file paths\n  const folderCounts = new Map<string, number>();\n  \n  memberIds.forEach(nodeId => {\n    const filePath = nodePathMap.get(nodeId) || '';\n    const parts = filePath.split('/').filter(Boolean);\n    \n    // Get the most specific folder (parent directory)\n    if (parts.length >= 2) {\n      const folder = parts[parts.length - 2];\n      // Skip generic folder names\n      if (!['src', 'lib', 'core', 'utils', 'common', 'shared', 'helpers'].includes(folder.toLowerCase())) {\n        folderCounts.set(folder, (folderCounts.get(folder) || 0) + 1);\n      }\n    }\n  });\n\n  // Find most common folder\n  let maxCount = 0;\n  let bestFolder = '';\n  \n  folderCounts.forEach((count, folder) => {\n    if (count > maxCount) {\n      maxCount = count;\n      bestFolder = folder;\n    }\n  });\n\n  if (bestFolder) {\n    // Capitalize first letter\n    return bestFolder.charAt(0).toUpperCase() + bestFolder.slice(1);\n  }\n\n  // Fallback: use function names to detect patterns\n  const names: string[] = [];\n  memberIds.forEach(nodeId => {\n    const name = graph.getNodeAttribute(nodeId, 'name');\n    if (name) names.push(name);\n  });\n\n  // Look for common prefixes\n  if (names.length > 2) {\n    const commonPrefix = findCommonPrefix(names);\n    if (commonPrefix.length > 2) {\n      return commonPrefix.charAt(0).toUpperCase() + 
commonPrefix.slice(1);\n    }\n  }\n\n  // Last resort: generic name with community ID for uniqueness\n  return `Cluster_${commNum}`;\n};\n\n/**\n * Find common prefix among strings\n */\nconst findCommonPrefix = (strings: string[]): string => {\n  if (strings.length === 0) return '';\n  \n  const sorted = strings.slice().sort();\n  const first = sorted[0];\n  const last = sorted[sorted.length - 1];\n  \n  let i = 0;\n  while (i < first.length && first[i] === last[i]) {\n    i++;\n  }\n  \n  return first.substring(0, i);\n};\n\n// ============================================================================\n// HELPER: Calculate community cohesion\n// ============================================================================\n\n/**\n * Estimate cohesion score (0-1) based on internal edge density.\n * Uses sampling for large communities to avoid O(N^2) cost.\n */\nconst calculateCohesion = (memberIds: string[], graph: any): number => {\n  if (memberIds.length <= 1) return 1.0;\n\n  const memberSet = new Set(memberIds);\n\n  // Sample up to 50 members for large communities\n  const SAMPLE_SIZE = 50;\n  const sample = memberIds.length <= SAMPLE_SIZE\n    ? memberIds\n    : memberIds.slice(0, SAMPLE_SIZE);\n\n  let internalEdges = 0;\n  let totalEdges = 0;\n\n  for (const nodeId of sample) {\n    if (!graph.hasNode(nodeId)) continue;\n    graph.forEachNeighbor(nodeId, (neighbor: string) => {\n      totalEdges++;\n      if (memberSet.has(neighbor)) {\n        internalEdges++;\n      }\n    });\n  }\n\n  // Cohesion = fraction of edges that stay internal\n  if (totalEdges === 0) return 1.0;\n  return Math.min(1.0, internalEdges / totalEdges);\n};\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/constants.ts",
    "content": "/**\n * Default minimum buffer size for tree-sitter parsing (512 KB).\n * tree-sitter requires bufferSize >= file size in bytes.\n */\nexport const TREE_SITTER_BUFFER_SIZE = 512 * 1024;\n\n/**\n * Maximum buffer size cap (32 MB) to prevent OOM on huge files.\n * Also used as the file-size skip threshold — files larger than this are not parsed.\n */\nexport const TREE_SITTER_MAX_BUFFER = 32 * 1024 * 1024;\n\n/**\n * Compute adaptive buffer size for tree-sitter parsing.\n * Uses 2× file size, clamped between 512 KB and 32 MB.\n * Previous 256 KB fixed limit silently skipped files > ~200 KB (e.g., imgui.h at 411 KB).\n */\nexport const getTreeSitterBufferSize = (contentLength: number): number =>\n  Math.min(Math.max(contentLength * 2, TREE_SITTER_BUFFER_SIZE), TREE_SITTER_MAX_BUFFER);\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/entry-point-scoring.ts",
"content": "/**\n * Entry Point Scoring\n * \n * Calculates entry point scores for process detection based on:\n * 1. Call ratio (existing algorithm - callees / (callers + 1))\n * 2. Export status (exported functions get higher priority)\n * 3. Name patterns (functions matching entry point patterns like handle*, on*, *Controller)\n * 4. Framework detection (path-based detection for Next.js, Express, Django, etc.)\n * \n * This module is language-agnostic - language-specific patterns are defined per language.\n */\n\nimport { detectFrameworkFromPath } from './framework-detection.js';\nimport { SupportedLanguages } from '../../config/supported-languages.js';\n\n// ============================================================================\n// NAME PATTERNS - All supported languages\n// ============================================================================\n\n/**\n * Common entry point naming patterns by language\n * These patterns indicate functions that are likely feature entry points\n */\nconst ENTRY_POINT_PATTERNS: Record<string, RegExp[]> = {\n  // Universal patterns (apply to all languages)\n  '*': [\n    /^(main|init|bootstrap|start|run|setup|configure)$/i,\n    /^handle[A-Z]/,           // handleLogin, handleSubmit\n    /^on[A-Z]/,               // onClick, onSubmit\n    /Handler$/,               // RequestHandler\n    /Controller$/,            // UserController\n    /^process[A-Z]/,          // processPayment\n    /^execute[A-Z]/,          // executeQuery\n    /^perform[A-Z]/,          // performAction\n    /^dispatch[A-Z]/,         // dispatchEvent\n    /^trigger[A-Z]/,          // triggerAction\n    /^fire[A-Z]/,             // fireEvent\n    /^emit[A-Z]/,             // emitEvent\n  ],\n  \n  // JavaScript/TypeScript\n  [SupportedLanguages.JavaScript]: [\n    /^use[A-Z]/,              // React hooks (useEffect, etc.)\n  ],\n  [SupportedLanguages.TypeScript]: [\n    /^use[A-Z]/,              // React hooks\n  ],\n\n  // Python\n  
[SupportedLanguages.Python]: [\n    /^app$/,                  // Flask/FastAPI app\n    /^(get|post|put|delete|patch)_/i,  // REST conventions\n    /^api_/,                  // API functions\n    /^view_/,                 // Django views\n  ],\n\n  // Java\n  [SupportedLanguages.Java]: [\n    /^do[A-Z]/,               // doGet, doPost (Servlets)\n    /^create[A-Z]/,           // Factory patterns\n    /^build[A-Z]/,            // Builder patterns\n    /Service$/,               // UserService\n  ],\n\n  // C#\n  [SupportedLanguages.CSharp]: [\n    /^(Get|Post|Put|Delete|Patch)/,  // ASP.NET action methods\n    /Action$/,                        // MVC actions\n    /^On[A-Z]/,                      // Event handlers / Blazor lifecycle\n    /Async$/,                        // Async entry points\n    /^Configure$/,                   // Startup.Configure\n    /^ConfigureServices$/,           // Startup.ConfigureServices\n    /^Handle$/,                      // MediatR / generic handler\n    /^Execute$/,                     // Command pattern\n    /^Invoke$/,                      // Middleware Invoke\n    /^Map[A-Z]/,                     // Minimal API MapGet, MapPost\n    /Service$/,                      // Service classes\n    /^Seed/,                         // Database seeding\n  ],\n  \n  // Go\n  [SupportedLanguages.Go]: [\n    /Handler$/,               // http.Handler pattern\n    /^Serve/,                 // ServeHTTP\n    /^New[A-Z]/,              // Constructor pattern (returns new instance)\n    /^Make[A-Z]/,             // Make functions\n  ],\n  \n  // Rust\n  [SupportedLanguages.Rust]: [\n    /^(get|post|put|delete)_handler$/i,\n    /^handle_/,               // handle_request\n    /^new$/,                  // Constructor pattern\n    /^run$/,                  // run entry point\n    /^spawn/,                 // Async spawn\n  ],\n  \n  // C - explicit main() boost plus common C entry point conventions\n  [SupportedLanguages.C]: [\n    /^main$/,                 
// THE entry point\n    /^init_/,                 // init_server, init_client\n    /_init$/,                 // module_init, server_init\n    /^start_/,                // start_server\n    /_start$/,                // thread_start\n    /^run_/,                  // run_loop\n    /_run$/,                  // event_run\n    /^stop_/,                 // stop_server\n    /_stop$/,                 // service_stop\n    /^open_/,                 // open_connection\n    /_open$/,                 // file_open\n    /^close_/,                // close_connection\n    /_close$/,                // socket_close\n    /^create_/,               // create_session\n    /_create$/,               // object_create\n    /^destroy_/,              // destroy_session\n    /_destroy$/,              // object_destroy\n    /^handle_/,               // handle_request\n    /_handler$/,              // signal_handler\n    /_callback$/,             // event_callback\n    /^cmd_/,                  // tmux: cmd_new_window, cmd_attach_session\n    /^server_/,               // server_start, server_loop\n    /^client_/,               // client_connect\n    /^session_/,              // session_create\n    /^window_/,               // window_resize (tmux)\n    /^key_/,                  // key_press\n    /^input_/,                // input_parse\n    /^output_/,               // output_write\n    /^notify_/,               // notify_client\n    /^control_/,              // control_start\n  ],\n\n  // C++ - same as C plus OOP/template patterns\n  [SupportedLanguages.CPlusPlus]: [\n    /^main$/,                 // THE entry point\n    /^init_/,\n    /_init$/,\n    /^Create[A-Z]/,           // Factory patterns\n    /^create_/,\n    /^Run$/,                  // Run methods\n    /^run$/,\n    /^Start$/,                // Start methods\n    /^start$/,\n    /^handle_/,\n    /_handler$/,\n    /_callback$/,\n    /^OnEvent/,               // Event callbacks\n    /^on_/,\n    /::Run$/,                 // Class::Run\n    
/::Start$/,               // Class::Start\n    /::Init$/,                // Class::Init\n    /::Execute$/,             // Class::Execute\n  ],\n\n  // Swift / iOS\n  [SupportedLanguages.Swift]: [\n    /^viewDidLoad$/,                  // UIKit lifecycle\n    /^viewWillAppear$/,               // UIKit lifecycle\n    /^viewDidAppear$/,                // UIKit lifecycle\n    /^viewWillDisappear$/,            // UIKit lifecycle\n    /^viewDidDisappear$/,             // UIKit lifecycle\n    /^application\\(/,                 // AppDelegate methods\n    /^scene\\(/,                       // SceneDelegate methods\n    /^body$/,                         // SwiftUI View.body\n    /Coordinator$/,                   // Coordinator pattern\n    /^sceneDidBecomeActive$/,         // SceneDelegate lifecycle\n    /^sceneWillResignActive$/,        // SceneDelegate lifecycle\n    /^didFinishLaunchingWithOptions$/, // AppDelegate\n    /ViewController$/,                // ViewController classes\n    /^configure[A-Z]/,               // Configuration methods\n    /^setup[A-Z]/,                    // Setup methods\n    /^makeBody$/,                     // SwiftUI ViewModifier\n  ],\n\n  // PHP / Laravel\n  [SupportedLanguages.PHP]: [\n    /Controller$/,            // UserController (class name convention)\n    /^handle$/,               // Job::handle(), Listener::handle()\n    /^execute$/,              // Command::execute()\n    /^boot$/,                 // ServiceProvider::boot()\n    /^register$/,             // ServiceProvider::register()\n    /^__invoke$/,             // Invokable controllers/actions\n    /^(index|show|store|update|destroy|create|edit)$/,  // RESTful resource methods\n    /^(get|post|put|delete|patch)[A-Z]/,  // Explicit HTTP method actions\n    /^run$/,                  // Command/Job run()\n    /^fire$/,                 // Event fire()\n    /^dispatch$/,             // Dispatchable jobs\n    /Service$/,               // UserService (Service layer)\n    /Repository$/, 
           // UserRepository (Repository pattern)\n    /^find$/,                 // Repository::find()\n    /^findAll$/,              // Repository::findAll()\n    /^save$/,                 // Repository::save()\n    /^delete$/,               // Repository::delete()\n  ],\n\n  // Ruby\n  [SupportedLanguages.Ruby]: [\n    /^call$/,                 // Service objects (MyService.call)\n    /^perform$/,              // Background jobs (Sidekiq, ActiveJob)\n    /^execute$/,              // Command pattern\n  ],\n};\n\n/** Pre-computed merged patterns (universal + language-specific) to avoid per-call array allocation. */\nconst MERGED_ENTRY_POINT_PATTERNS: Record<string, RegExp[]> = {};\nconst UNIVERSAL_PATTERNS = ENTRY_POINT_PATTERNS['*'] || [];\nfor (const [lang, patterns] of Object.entries(ENTRY_POINT_PATTERNS)) {\n  if (lang === '*') continue;\n  MERGED_ENTRY_POINT_PATTERNS[lang] = [...UNIVERSAL_PATTERNS, ...patterns];\n}\n\n// ============================================================================\n// UTILITY PATTERNS - Functions that should be penalized\n// ============================================================================\n\n/**\n * Patterns that indicate utility/helper functions (NOT entry points)\n * These get penalized in scoring\n */\nconst UTILITY_PATTERNS: RegExp[] = [\n  /^(get|set|is|has|can|should|will|did)[A-Z]/,  // Accessors/predicates\n  /^_/,                                            // Private by convention\n  /^(format|parse|validate|convert|transform)/i,  // Transformation utilities\n  /^(log|debug|error|warn|info)$/i,               // Logging\n  /^(to|from)[A-Z]/,                              // Conversions\n  /^(encode|decode)/i,                            // Encoding utilities\n  /^(serialize|deserialize)/i,                    // Serialization\n  /^(clone|copy|deep)/i,                          // Cloning utilities\n  /^(merge|extend|assign)/i,                      // Object utilities\n  /^(filter|map|reduce|sort|find)/i,         
    // Collection utilities (standalone)\n  /Helper$/,\n  /Util$/,\n  /Utils$/,\n  /^utils?$/i,\n  /^helpers?$/i,\n];\n\n// ============================================================================\n// TYPES\n// ============================================================================\n\nexport interface EntryPointScoreResult {\n  score: number;\n  reasons: string[];\n}\n\n// ============================================================================\n// MAIN SCORING FUNCTION\n// ============================================================================\n\n/**\n * Calculate an entry point score for a function/method\n * \n * Higher scores indicate better entry point candidates.\n * Score = baseScore × exportMultiplier × nameMultiplier\n * \n * @param name - Function/method name\n * @param language - Programming language\n * @param isExported - Whether the function is exported/public\n * @param callerCount - Number of functions that call this function\n * @param calleeCount - Number of functions this function calls\n * @returns Score and array of reasons explaining the score\n */\nexport function calculateEntryPointScore(\n  name: string,\n  language: SupportedLanguages,\n  isExported: boolean,\n  callerCount: number,\n  calleeCount: number,\n  filePath: string = ''  // Optional for backwards compatibility\n): EntryPointScoreResult {\n  const reasons: string[] = [];\n  \n  // Must have outgoing calls to be an entry point (we need to trace forward)\n  if (calleeCount === 0) {\n    return { score: 0, reasons: ['no-outgoing-calls'] };\n  }\n  \n  // Base score: call ratio (existing algorithm)\n  // High ratio = calls many, called by few = likely entry point\n  const baseScore = calleeCount / (callerCount + 1);\n  reasons.push(`base:${baseScore.toFixed(2)}`);\n  \n  // Export bonus: exported/public functions are more likely entry points\n  const exportMultiplier = isExported ? 
2.0 : 1.0;\n  if (isExported) {\n    reasons.push('exported');\n  }\n  \n  // Name pattern scoring\n  let nameMultiplier = 1.0;\n  \n  // Check negative patterns first (utilities get penalized)\n  if (UTILITY_PATTERNS.some(p => p.test(name))) {\n    nameMultiplier = 0.3;  // Significant penalty\n    reasons.push('utility-pattern');\n  } else {\n    // Check positive patterns\n    const allPatterns = MERGED_ENTRY_POINT_PATTERNS[language] || UNIVERSAL_PATTERNS;\n    \n    if (allPatterns.some(p => p.test(name))) {\n      nameMultiplier = 1.5;  // Bonus for matching entry point pattern\n      reasons.push('entry-pattern');\n    }\n  }\n  \n  // Framework detection bonus (Phase 2)\n  let frameworkMultiplier = 1.0;\n  if (filePath) {\n    const frameworkHint = detectFrameworkFromPath(filePath);\n    if (frameworkHint) {\n      frameworkMultiplier = frameworkHint.entryPointMultiplier;\n      reasons.push(`framework:${frameworkHint.reason}`);\n    }\n  }\n  \n  // Calculate final score\n  const finalScore = baseScore * exportMultiplier * nameMultiplier * frameworkMultiplier;\n  \n  return {\n    score: finalScore,\n    reasons,\n  };\n}\n\n// ============================================================================\n// HELPER FUNCTIONS\n// ============================================================================\n\n/**\n * Check if a file path is a test file (should be excluded from entry points)\n * Covers common test file patterns across all supported languages\n */\nexport function isTestFile(filePath: string): boolean {\n  const p = filePath.toLowerCase().replace(/\\\\/g, '/');\n  \n  return (\n    // JavaScript/TypeScript test patterns\n    p.includes('.test.') || \n    p.includes('.spec.') || \n    p.includes('__tests__/') || \n    p.includes('__mocks__/') ||\n    // Generic test folders\n    p.includes('/test/') ||\n    p.includes('/tests/') ||\n    p.includes('/testing/') ||\n    // Python test patterns\n    p.endsWith('_test.py') ||\n    
p.includes('/test_') ||\n    // Go test patterns\n    p.endsWith('_test.go') ||\n    // Java test patterns\n    p.includes('/src/test/') ||\n    // Rust: integration tests live under /tests/, already covered by the generic folder check above\n    // Swift/iOS test patterns\n    p.endsWith('tests.swift') ||\n    p.endsWith('test.swift') ||\n    p.includes('uitests/') ||\n    // C# test patterns\n    p.endsWith('tests.cs') ||\n    p.endsWith('test.cs') ||\n    p.includes('.tests/') ||\n    p.includes('.test/') ||\n    p.includes('.integrationtests/') ||\n    p.includes('.unittests/') ||\n    p.includes('/testproject/') ||\n    // PHP/Laravel test patterns\n    p.endsWith('test.php') ||\n    p.endsWith('spec.php') ||\n    p.includes('/tests/feature/') ||\n    p.includes('/tests/unit/') ||\n    // Ruby test patterns\n    p.endsWith('_spec.rb') ||\n    p.endsWith('_test.rb') ||\n    p.includes('/spec/') ||\n    p.includes('/test/fixtures/')\n  );\n}\n\n/**\n * Check if a file path is likely a utility/helper file\n * These might still have entry points but should be lower priority\n */\nexport function isUtilityFile(filePath: string): boolean {\n  const p = filePath.toLowerCase().replace(/\\\\/g, '/');\n  \n  return (\n    p.includes('/utils/') ||\n    p.includes('/util/') ||\n    p.includes('/helpers/') ||\n    p.includes('/helper/') ||\n    p.includes('/common/') ||\n    p.includes('/shared/') ||\n    p.includes('/lib/') ||\n    p.endsWith('/utils.ts') ||\n    p.endsWith('/utils.js') ||\n    p.endsWith('/helpers.ts') ||\n    p.endsWith('/helpers.js') ||\n    p.endsWith('_utils.py') ||\n    p.endsWith('_helpers.py')\n  );\n}\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/export-detection.ts",
    "content": "/**\n * Export Detection\n *\n * Determines whether a symbol (function, class, etc.) is exported/public\n * in its language. This is a pure function — safe for use in worker threads.\n *\n * Shared between parse-worker.ts (worker pool) and parsing-processor.ts (sequential fallback).\n */\n\nimport { findSiblingChild, SyntaxNode } from './utils.js';\nimport { SupportedLanguages } from '../../config/supported-languages.js';\n\n/** Handler type: given a node and symbol name, return true if the symbol is exported/public. */\ntype ExportChecker = (node: SyntaxNode, name: string) => boolean;\n\n// ============================================================================\n// Per-language export checkers\n// ============================================================================\n\n/** JS/TS: walk ancestors looking for export_statement or export_specifier. */\nconst tsExportChecker: ExportChecker = (node, _name) => {\n  let current: SyntaxNode | null = node;\n  while (current) {\n    const type = current.type;\n    if (type === 'export_statement' ||\n        type === 'export_specifier' ||\n        (type === 'lexical_declaration' && current.parent?.type === 'export_statement')) {\n      return true;\n    }\n    // Fallback: check if node text starts with 'export ' for edge cases\n    if (current.text?.startsWith('export ')) {\n      return true;\n    }\n    current = current.parent;\n  }\n  return false;\n};\n\n/** Python: public if no leading underscore (convention). */\nconst pythonExportChecker: ExportChecker = (_node, name) => !name.startsWith('_');\n\n/** Java: check for 'public' modifier — modifiers are siblings of the name node, not parents. 
*/\nconst javaExportChecker: ExportChecker = (node, _name) => {\n  let current: SyntaxNode | null = node;\n  while (current) {\n    if (current.parent) {\n      const parent = current.parent;\n      for (let i = 0; i < parent.childCount; i++) {\n        const child = parent.child(i);\n        if (child?.type === 'modifiers' && child.text?.includes('public')) {\n          return true;\n        }\n      }\n      if (parent.type === 'method_declaration' || parent.type === 'constructor_declaration') {\n        if (parent.text?.trimStart().startsWith('public')) {\n          return true;\n        }\n      }\n    }\n    current = current.parent;\n  }\n  return false;\n};\n\n/** C# declaration node types for sibling modifier scanning. */\nconst CSHARP_DECL_TYPES = new Set([\n  'method_declaration', 'local_function_statement', 'constructor_declaration',\n  'class_declaration', 'interface_declaration', 'struct_declaration',\n  'enum_declaration', 'record_declaration', 'record_struct_declaration',\n  'record_class_declaration', 'delegate_declaration',\n  'property_declaration', 'field_declaration', 'event_declaration',\n  'namespace_declaration', 'file_scoped_namespace_declaration',\n]);\n\n/**\n * C#: modifier nodes are SIBLINGS of the name node inside the declaration.\n * Walk up to the declaration node, then scan its direct children.\n */\nconst csharpExportChecker: ExportChecker = (node, _name) => {\n  let current: SyntaxNode | null = node;\n  while (current) {\n    if (CSHARP_DECL_TYPES.has(current.type)) {\n      for (let i = 0; i < current.childCount; i++) {\n        const child = current.child(i);\n        if (child?.type === 'modifier' && child.text === 'public') return true;\n      }\n      return false;\n    }\n    current = current.parent;\n  }\n  return false;\n};\n\n/** Go: uppercase first letter = exported. 
*/\nconst goExportChecker: ExportChecker = (_node, name) => {\n  if (name.length === 0) return false;\n  const first = name[0];\n  return first === first.toUpperCase() && first !== first.toLowerCase();\n};\n\n/** Rust declaration node types for sibling visibility_modifier scanning. */\nconst RUST_DECL_TYPES = new Set([\n  'function_item', 'struct_item', 'enum_item', 'trait_item', 'impl_item',\n  'union_item', 'type_item', 'const_item', 'static_item', 'mod_item',\n  'use_declaration', 'associated_type', 'function_signature_item',\n]);\n\n/**\n * Rust: visibility_modifier is a SIBLING of the name node within the declaration node\n * (function_item, struct_item, etc.), not a parent. Walk up to the declaration node,\n * then scan its direct children.\n */\nconst rustExportChecker: ExportChecker = (node, _name) => {\n  let current: SyntaxNode | null = node;\n  while (current) {\n    if (RUST_DECL_TYPES.has(current.type)) {\n      for (let i = 0; i < current.childCount; i++) {\n        const child = current.child(i);\n        if (child?.type === 'visibility_modifier' && child.text?.startsWith('pub')) return true;\n      }\n      return false;\n    }\n    current = current.parent;\n  }\n  return false;\n};\n\n/**\n * Kotlin: default visibility is public (unlike Java).\n * visibility_modifier is inside modifiers, a sibling of the name node within the declaration.\n */\nconst kotlinExportChecker: ExportChecker = (node, _name) => {\n  let current: SyntaxNode | null = node;\n  while (current) {\n    if (current.parent) {\n      const visMod = findSiblingChild(current.parent, 'modifiers', 'visibility_modifier');\n      if (visMod) {\n        const text = visMod.text;\n        if (text === 'private' || text === 'internal' || text === 'protected') return false;\n        if (text === 'public') return true;\n      }\n    }\n    current = current.parent;\n  }\n  // No visibility modifier = public (Kotlin default)\n  return true;\n};\n\n/**\n * C/C++: functions without 'static' 
storage class have external linkage by default,\n * making them globally accessible (equivalent to exported). Only functions explicitly\n * marked 'static' are file-scoped (not exported). C++ anonymous namespaces\n * (namespace { ... }) also give internal linkage.\n */\nconst cCppExportChecker: ExportChecker = (node, _name) => {\n  let cur: SyntaxNode | null = node;\n  while (cur) {\n    if (cur.type === 'function_definition' || cur.type === 'declaration') {\n      // Check for 'static' storage class specifier as a direct child node.\n      // This avoids reading the full function text (which can be very large).\n      for (let i = 0; i < cur.childCount; i++) {\n        const child = cur.child(i);\n        if (child?.type === 'storage_class_specifier' && child.text === 'static') return false;\n      }\n    }\n    // C++ anonymous namespace: namespace_definition with no name child = internal linkage\n    if (cur.type === 'namespace_definition') {\n      const hasName = cur.childForFieldName?.('name');\n      if (!hasName) return false;\n    }\n    cur = cur.parent;\n  }\n  return true; // Top-level C/C++ functions default to external linkage\n};\n\n/** PHP: check for visibility modifier or top-level scope. */\nconst phpExportChecker: ExportChecker = (node, _name) => {\n  let current: SyntaxNode | null = node;\n  while (current) {\n    if (current.type === 'class_declaration' ||\n        current.type === 'interface_declaration' ||\n        current.type === 'trait_declaration' ||\n        current.type === 'enum_declaration') {\n      return true;\n    }\n    if (current.type === 'visibility_modifier') {\n      return current.text === 'public';\n    }\n    current = current.parent;\n  }\n  // Top-level functions are globally accessible\n  return true;\n};\n\n/** Swift: check for 'public' or 'open' access modifiers. 
*/\nconst swiftExportChecker: ExportChecker = (node, _name) => {\n  let current: SyntaxNode | null = node;\n  while (current) {\n    if (current.type === 'modifiers' || current.type === 'visibility_modifier') {\n      const text = current.text || '';\n      if (text.includes('public') || text.includes('open')) return true;\n    }\n    current = current.parent;\n  }\n  return false;\n};\n\n// ============================================================================\n// Exhaustive dispatch table: the 'satisfies' clause ensures every SupportedLanguages member is covered\n// ============================================================================\n\nconst exportCheckers = {\n  [SupportedLanguages.JavaScript]: tsExportChecker,\n  [SupportedLanguages.TypeScript]: tsExportChecker,\n  [SupportedLanguages.Python]: pythonExportChecker,\n  [SupportedLanguages.Java]: javaExportChecker,\n  [SupportedLanguages.CSharp]: csharpExportChecker,\n  [SupportedLanguages.Go]: goExportChecker,\n  [SupportedLanguages.Rust]: rustExportChecker,\n  [SupportedLanguages.Kotlin]: kotlinExportChecker,\n  [SupportedLanguages.C]: cCppExportChecker,\n  [SupportedLanguages.CPlusPlus]: cCppExportChecker,\n  [SupportedLanguages.PHP]: phpExportChecker,\n  [SupportedLanguages.Swift]: swiftExportChecker,\n  [SupportedLanguages.Ruby]: (_node, _name) => true,\n} satisfies Record<SupportedLanguages, ExportChecker>;\n\n// ============================================================================\n// Public API\n// ============================================================================\n\n/**\n * Check if a tree-sitter node is exported/public in its language.\n * @param node - The tree-sitter AST node\n * @param name - The symbol name\n * @param language - The programming language\n * @returns true if the symbol is exported/public\n */\nexport const isNodeExported = (node: SyntaxNode, name: string, language: SupportedLanguages): boolean => {\n  const checker = exportCheckers[language];\n  if (!checker) return 
false;\n  return checker(node, name);\n};\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/filesystem-walker.ts",
    "content": "import fs from 'fs/promises';\nimport path from 'path';\nimport { glob } from 'glob';\nimport { createIgnoreFilter } from '../../config/ignore-service.js';\n\nexport interface FileEntry {\n  path: string;\n  content: string;\n}\n\n/** Lightweight entry — path + size from stat, no content in memory */\nexport interface ScannedFile {\n  path: string;\n  size: number;\n}\n\n/** Path-only reference (for type signatures) */\nexport interface FilePath {\n  path: string;\n}\n\nconst READ_CONCURRENCY = 32;\n\n/** Skip files larger than 512KB — they're usually generated/vendored and crash tree-sitter */\nconst MAX_FILE_SIZE = 512 * 1024;\n\n/**\n * Phase 1: Scan repository — stat files to get paths + sizes, no content loaded.\n * Memory: ~10MB for 100K files vs ~1GB+ with content.\n */\nexport const walkRepositoryPaths = async (\n  repoPath: string,\n  onProgress?: (current: number, total: number, filePath: string) => void\n): Promise<ScannedFile[]> => {\n  const ignoreFilter = await createIgnoreFilter(repoPath);\n\n  const filtered = await glob('**/*', {\n    cwd: repoPath,\n    nodir: true,\n    dot: false,\n    ignore: ignoreFilter,\n  });\n  const entries: ScannedFile[] = [];\n  let processed = 0;\n  let skippedLarge = 0;\n\n  for (let start = 0; start < filtered.length; start += READ_CONCURRENCY) {\n    const batch = filtered.slice(start, start + READ_CONCURRENCY);\n    const results = await Promise.allSettled(\n      batch.map(async relativePath => {\n        const fullPath = path.join(repoPath, relativePath);\n        const stat = await fs.stat(fullPath);\n        if (stat.size > MAX_FILE_SIZE) {\n          skippedLarge++;\n          return null;\n        }\n        return { path: relativePath.replace(/\\\\/g, '/'), size: stat.size };\n      })\n    );\n\n    for (const result of results) {\n      processed++;\n      if (result.status === 'fulfilled' && result.value !== null) {\n        entries.push(result.value);\n        onProgress?.(processed, 
filtered.length, result.value.path);\n      } else {\n        onProgress?.(processed, filtered.length, batch[results.indexOf(result)]);\n      }\n    }\n  }\n\n  if (skippedLarge > 0) {\n    console.warn(`  Skipped ${skippedLarge} large files (>${MAX_FILE_SIZE / 1024}KB, likely generated/vendored)`);\n  }\n\n  return entries;\n};\n\n/**\n * Phase 2: Read file contents for a specific set of relative paths.\n * Returns a Map for O(1) lookup. Silently skips files that fail to read.\n */\nexport const readFileContents = async (\n  repoPath: string,\n  relativePaths: string[],\n): Promise<Map<string, string>> => {\n  const contents = new Map<string, string>();\n\n  for (let start = 0; start < relativePaths.length; start += READ_CONCURRENCY) {\n    const batch = relativePaths.slice(start, start + READ_CONCURRENCY);\n    const results = await Promise.allSettled(\n      batch.map(async relativePath => {\n        const fullPath = path.join(repoPath, relativePath);\n        const content = await fs.readFile(fullPath, 'utf-8');\n        return { path: relativePath, content };\n      })\n    );\n\n    for (const result of results) {\n      if (result.status === 'fulfilled') {\n        contents.set(result.value.path, result.value.content);\n      }\n    }\n  }\n\n  return contents;\n};\n\n/**\n * Legacy API — scans and reads everything into memory.\n * Used by sequential fallback path only.\n */\nexport const walkRepository = async (\n  repoPath: string,\n  onProgress?: (current: number, total: number, filePath: string) => void\n): Promise<FileEntry[]> => {\n  const scanned = await walkRepositoryPaths(repoPath, onProgress);\n  const contents = await readFileContents(repoPath, scanned.map(f => f.path));\n  return scanned\n    .filter(f => contents.has(f.path))\n    .map(f => ({ path: f.path, content: contents.get(f.path)! }));\n};\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/framework-detection.ts",
    "content": "/**\n * Framework Detection\n * \n * Detects frameworks from:\n * 1) file path patterns\n * 2) AST definition text (decorators/annotations/attributes)\n * and provides entry point multipliers for process scoring.\n * \n * DESIGN: Returns null for unknown frameworks, which causes a 1.0 multiplier\n * (no bonus, no penalty) - same behavior as before this feature.\n */\n\n// ============================================================================\n// TYPES\n// ============================================================================\n\nexport interface FrameworkHint {\n  framework: string;\n  entryPointMultiplier: number;\n  reason: string;\n}\n\n// ============================================================================\n// PATH-BASED FRAMEWORK DETECTION\n// ============================================================================\n\n/**\n * Detect framework from file path patterns\n * \n * This provides entry point multipliers based on well-known framework conventions.\n * Returns null if no framework pattern is detected (falls back to 1.0 multiplier).\n */\nexport function detectFrameworkFromPath(filePath: string): FrameworkHint | null {\n  // Normalize path separators and ensure leading slash for consistent matching\n  let p = filePath.toLowerCase().replace(/\\\\/g, '/');\n  if (!p.startsWith('/')) {\n    p = '/' + p;  // Add leading slash so patterns like '/app/' match 'app/...'\n  }\n  \n  // ========== JAVASCRIPT / TYPESCRIPT FRAMEWORKS ==========\n  \n  // Next.js - Pages Router (high confidence)\n  if (p.includes('/pages/') && !p.includes('/_') && !p.includes('/api/')) {\n    if (p.endsWith('.tsx') || p.endsWith('.ts') || p.endsWith('.jsx') || p.endsWith('.js')) {\n      return { framework: 'nextjs-pages', entryPointMultiplier: 3.0, reason: 'nextjs-page' };\n    }\n  }\n  \n  // Next.js - App Router (page.tsx files)\n  if (p.includes('/app/') && (\n    p.endsWith('page.tsx') || p.endsWith('page.ts') || \n    
p.endsWith('page.jsx') || p.endsWith('page.js')\n  )) {\n    return { framework: 'nextjs-app', entryPointMultiplier: 3.0, reason: 'nextjs-app-page' };\n  }\n  \n  // Next.js - API Routes\n  if (p.includes('/pages/api/') || (p.includes('/app/') && p.includes('/api/') && p.endsWith('route.ts'))) {\n    return { framework: 'nextjs-api', entryPointMultiplier: 3.0, reason: 'nextjs-api-route' };\n  }\n  \n  // Next.js - Layout files (moderate - they're entry-ish but not the main entry)\n  if (p.includes('/app/') && (p.endsWith('layout.tsx') || p.endsWith('layout.ts'))) {\n    return { framework: 'nextjs-app', entryPointMultiplier: 2.0, reason: 'nextjs-layout' };\n  }\n  \n  // Express / Node.js routes\n  if (p.includes('/routes/') && (p.endsWith('.ts') || p.endsWith('.js'))) {\n    return { framework: 'express', entryPointMultiplier: 2.5, reason: 'routes-folder' };\n  }\n  \n  // Generic controllers (MVC pattern)\n  if (p.includes('/controllers/') && (p.endsWith('.ts') || p.endsWith('.js'))) {\n    return { framework: 'mvc', entryPointMultiplier: 2.5, reason: 'controllers-folder' };\n  }\n  \n  // Generic handlers\n  if (p.includes('/handlers/') && (p.endsWith('.ts') || p.endsWith('.js'))) {\n    return { framework: 'handlers', entryPointMultiplier: 2.5, reason: 'handlers-folder' };\n  }\n  \n  // React components (lower priority - not all are entry points)\n  if ((p.includes('/components/') || p.includes('/views/')) && \n      (p.endsWith('.tsx') || p.endsWith('.jsx'))) {\n    // Only boost if PascalCase filename (likely a component, not util)\n    const fileName = p.split('/').pop() || '';\n    if (/^[A-Z]/.test(fileName)) {\n      return { framework: 'react', entryPointMultiplier: 1.5, reason: 'react-component' };\n    }\n  }\n  \n  // ========== PYTHON FRAMEWORKS ==========\n  \n  // Django views (high confidence)\n  if (p.endsWith('views.py')) {\n    return { framework: 'django', entryPointMultiplier: 3.0, reason: 'django-views' };\n  }\n  \n  // Django URL 
configs\n  if (p.endsWith('urls.py')) {\n    return { framework: 'django', entryPointMultiplier: 2.0, reason: 'django-urls' };\n  }\n  \n  // FastAPI / Flask routers\n  if ((p.includes('/routers/') || p.includes('/endpoints/') || p.includes('/routes/')) && \n      p.endsWith('.py')) {\n    return { framework: 'fastapi', entryPointMultiplier: 2.5, reason: 'api-routers' };\n  }\n  \n  // Python API folder\n  if (p.includes('/api/') && p.endsWith('.py') && !p.endsWith('__init__.py')) {\n    return { framework: 'python-api', entryPointMultiplier: 2.0, reason: 'api-folder' };\n  }\n  \n  // ========== JAVA FRAMEWORKS ==========\n  \n  // Spring Boot controllers\n  if ((p.includes('/controller/') || p.includes('/controllers/')) && p.endsWith('.java')) {\n    return { framework: 'spring', entryPointMultiplier: 3.0, reason: 'spring-controller' };\n  }\n  \n  // Spring Boot - files ending in Controller.java\n  if (p.endsWith('controller.java')) {\n    return { framework: 'spring', entryPointMultiplier: 3.0, reason: 'spring-controller-file' };\n  }\n  \n  // Java service layer (often entry points for business logic)\n  if ((p.includes('/service/') || p.includes('/services/')) && p.endsWith('.java')) {\n    return { framework: 'java-service', entryPointMultiplier: 1.8, reason: 'java-service' };\n  }\n  \n  // ========== KOTLIN FRAMEWORKS ==========\n\n  // Spring Boot Kotlin controllers\n  if ((p.includes('/controller/') || p.includes('/controllers/')) && p.endsWith('.kt')) {\n    return { framework: 'spring-kotlin', entryPointMultiplier: 3.0, reason: 'spring-kotlin-controller' };\n  }\n\n  // Spring Boot - files ending in Controller.kt\n  if (p.endsWith('controller.kt')) {\n    return { framework: 'spring-kotlin', entryPointMultiplier: 3.0, reason: 'spring-kotlin-controller-file' };\n  }\n\n  // Ktor routes\n  if (p.includes('/routes/') && p.endsWith('.kt')) {\n    return { framework: 'ktor', entryPointMultiplier: 2.5, reason: 'ktor-routes' };\n  }\n\n  // Ktor plugins 
folder or Routing.kt files\n  if (p.includes('/plugins/') && p.endsWith('.kt')) {\n    return { framework: 'ktor', entryPointMultiplier: 2.0, reason: 'ktor-plugin' };\n  }\n  if (p.endsWith('routing.kt') || p.endsWith('routes.kt')) {\n    return { framework: 'ktor', entryPointMultiplier: 2.5, reason: 'ktor-routing-file' };\n  }\n\n  // Android Activities, Fragments\n  if ((p.includes('/activity/') || p.includes('/ui/')) && p.endsWith('.kt')) {\n    return { framework: 'android-kotlin', entryPointMultiplier: 2.5, reason: 'android-ui' };\n  }\n  if (p.endsWith('activity.kt') || p.endsWith('fragment.kt')) {\n    return { framework: 'android-kotlin', entryPointMultiplier: 2.5, reason: 'android-component' };\n  }\n\n  // Kotlin main entry point\n  if (p.endsWith('/main.kt')) {\n    return { framework: 'kotlin', entryPointMultiplier: 3.0, reason: 'kotlin-main' };\n  }\n\n  // Kotlin Application entry point (common naming)\n  if (p.endsWith('/application.kt')) {\n    return { framework: 'kotlin', entryPointMultiplier: 2.5, reason: 'kotlin-application' };\n  }\n\n  // ========== C# / .NET FRAMEWORKS ==========\n  \n  // ASP.NET Controllers\n  if (p.includes('/controllers/') && p.endsWith('.cs')) {\n    return { framework: 'aspnet', entryPointMultiplier: 3.0, reason: 'aspnet-controller' };\n  }\n  \n  // ASP.NET - files ending in Controller.cs\n  if (p.endsWith('controller.cs')) {\n    return { framework: 'aspnet', entryPointMultiplier: 3.0, reason: 'aspnet-controller-file' };\n  }\n\n  // ASP.NET Services\n  if ((p.includes('/services/') || p.includes('/service/')) && p.endsWith('.cs')) {\n    return { framework: 'aspnet', entryPointMultiplier: 1.8, reason: 'aspnet-service' };\n  }\n\n  // ASP.NET Middleware\n  if (p.includes('/middleware/') && p.endsWith('.cs')) {\n    return { framework: 'aspnet', entryPointMultiplier: 2.5, reason: 'aspnet-middleware' };\n  }\n\n  // SignalR Hubs\n  if (p.includes('/hubs/') && p.endsWith('.cs')) {\n    return { framework: 'signalr', 
entryPointMultiplier: 2.5, reason: 'signalr-hub' };\n  }\n  if (p.endsWith('hub.cs')) {\n    return { framework: 'signalr', entryPointMultiplier: 2.5, reason: 'signalr-hub-file' };\n  }\n\n  // Minimal API / Program.cs / Startup.cs\n  if (p.endsWith('/program.cs') || p.endsWith('/startup.cs')) {\n    return { framework: 'aspnet', entryPointMultiplier: 3.0, reason: 'aspnet-entry' };\n  }\n\n  // Background services / Hosted services\n  if ((p.includes('/backgroundservices/') || p.includes('/hostedservices/')) && p.endsWith('.cs')) {\n    return { framework: 'aspnet', entryPointMultiplier: 2.0, reason: 'aspnet-background-service' };\n  }\n\n  // Blazor pages\n  if (p.includes('/pages/') && p.endsWith('.razor')) {\n    return { framework: 'blazor', entryPointMultiplier: 2.5, reason: 'blazor-page' };\n  }\n  \n  // ========== GO FRAMEWORKS ==========\n  \n  // Go handlers\n  if ((p.includes('/handlers/') || p.includes('/handler/')) && p.endsWith('.go')) {\n    return { framework: 'go-http', entryPointMultiplier: 2.5, reason: 'go-handlers' };\n  }\n  \n  // Go routes\n  if (p.includes('/routes/') && p.endsWith('.go')) {\n    return { framework: 'go-http', entryPointMultiplier: 2.5, reason: 'go-routes' };\n  }\n  \n  // Go controllers\n  if (p.includes('/controllers/') && p.endsWith('.go')) {\n    return { framework: 'go-mvc', entryPointMultiplier: 2.5, reason: 'go-controller' };\n  }\n  \n  // Go main.go files and cmd/ executables (THE entry point)\n  if (p.endsWith('/main.go') || (p.includes('/cmd/') && p.endsWith('.go'))) {\n    return { framework: 'go', entryPointMultiplier: 3.0, reason: 'go-main' };\n  }\n  \n  // ========== RUST FRAMEWORKS ==========\n  \n  // Rust handlers/routes\n  if ((p.includes('/handlers/') || p.includes('/routes/')) && p.endsWith('.rs')) {\n    return { framework: 'rust-web', entryPointMultiplier: 2.5, reason: 'rust-handlers' };\n  }\n  \n  // Rust main.rs (THE entry point)\n  if (p.endsWith('/main.rs')) {\n    return { framework: 'rust', entryPointMultiplier: 
3.0, reason: 'rust-main' };\n  }\n  \n  // Rust bin folder (executables)\n  if (p.includes('/bin/') && p.endsWith('.rs')) {\n    return { framework: 'rust', entryPointMultiplier: 2.5, reason: 'rust-bin' };\n  }\n  \n  // ========== C / C++ ==========\n  \n  // C/C++ main files\n  if (p.endsWith('/main.c') || p.endsWith('/main.cpp') || p.endsWith('/main.cc')) {\n    return { framework: 'c-cpp', entryPointMultiplier: 3.0, reason: 'c-main' };\n  }\n  \n  // C/C++ src folder entry points (if named specifically)\n  if ((p.includes('/src/') && (p.endsWith('/app.c') || p.endsWith('/app.cpp')))) {\n    return { framework: 'c-cpp', entryPointMultiplier: 2.5, reason: 'c-app' };\n  }\n  \n  // ========== PHP / LARAVEL FRAMEWORKS ==========\n\n  // Laravel routes (highest - these ARE the entry point definitions)\n  if (p.includes('/routes/') && p.endsWith('.php')) {\n    return { framework: 'laravel', entryPointMultiplier: 3.0, reason: 'laravel-routes' };\n  }\n\n  // Laravel controllers (very high - receive HTTP requests)\n  if ((p.includes('/http/controllers/') || p.includes('/controllers/')) && p.endsWith('.php')) {\n    return { framework: 'laravel', entryPointMultiplier: 3.0, reason: 'laravel-controller' };\n  }\n\n  // Laravel controller by file name convention\n  if (p.endsWith('controller.php')) {\n    return { framework: 'laravel', entryPointMultiplier: 3.0, reason: 'laravel-controller-file' };\n  }\n\n  // Laravel console commands\n  if ((p.includes('/console/commands/') || p.includes('/commands/')) && p.endsWith('.php')) {\n    return { framework: 'laravel', entryPointMultiplier: 2.5, reason: 'laravel-command' };\n  }\n\n  // Laravel jobs (queue entry points)\n  if (p.includes('/jobs/') && p.endsWith('.php')) {\n    return { framework: 'laravel', entryPointMultiplier: 2.5, reason: 'laravel-job' };\n  }\n\n  // Laravel listeners (event-driven entry points)\n  if (p.includes('/listeners/') && p.endsWith('.php')) {\n    return { framework: 'laravel', 
entryPointMultiplier: 2.5, reason: 'laravel-listener' };\n  }\n\n  // Laravel middleware\n  if (p.includes('/http/middleware/') && p.endsWith('.php')) {\n    return { framework: 'laravel', entryPointMultiplier: 2.5, reason: 'laravel-middleware' };\n  }\n\n  // Laravel service providers\n  if (p.includes('/providers/') && p.endsWith('.php')) {\n    return { framework: 'laravel', entryPointMultiplier: 1.8, reason: 'laravel-provider' };\n  }\n\n  // Laravel policies\n  if (p.includes('/policies/') && p.endsWith('.php')) {\n    return { framework: 'laravel', entryPointMultiplier: 2.0, reason: 'laravel-policy' };\n  }\n\n  // Laravel models (important but not entry points per se)\n  if (p.includes('/models/') && p.endsWith('.php')) {\n    return { framework: 'laravel', entryPointMultiplier: 1.5, reason: 'laravel-model' };\n  }\n\n  // Laravel services (Service Repository pattern)\n  if (p.includes('/services/') && p.endsWith('.php')) {\n    return { framework: 'laravel', entryPointMultiplier: 1.8, reason: 'laravel-service' };\n  }\n\n  // Laravel repositories (Service Repository pattern)\n  if (p.includes('/repositories/') && p.endsWith('.php')) {\n    return { framework: 'laravel', entryPointMultiplier: 1.5, reason: 'laravel-repository' };\n  }\n\n  // ========== RUBY ==========\n\n  // Ruby: bin/ or exe/ (CLI entry points)\n  if ((p.includes('/bin/') || p.includes('/exe/')) && p.endsWith('.rb')) {\n    return { framework: 'ruby', entryPointMultiplier: 2.5, reason: 'ruby-executable' };\n  }\n\n  // Ruby: Rakefile or *.rake (task definitions)\n  if (p.endsWith('/rakefile') || p.endsWith('.rake')) {\n    return { framework: 'ruby', entryPointMultiplier: 1.5, reason: 'ruby-rake' };\n  }\n  \n  // ========== SWIFT / iOS ==========\n\n  // iOS App entry points (highest priority)\n  if (p.endsWith('/appdelegate.swift') || p.endsWith('/scenedelegate.swift') || p.endsWith('/app.swift')) {\n    return { framework: 'ios', entryPointMultiplier: 3.0, reason: 'ios-app-entry' };\n  
}\n\n  // SwiftUI App entry (@main)\n  if (p.endsWith('app.swift') && p.includes('/sources/')) {\n    return { framework: 'swiftui', entryPointMultiplier: 3.0, reason: 'swiftui-app' };\n  }\n\n  // UIKit ViewControllers (high priority - screen entry points)\n  if ((p.includes('/viewcontrollers/') || p.includes('/controllers/') || p.includes('/screens/')) && p.endsWith('.swift')) {\n    return { framework: 'uikit', entryPointMultiplier: 2.5, reason: 'uikit-viewcontroller' };\n  }\n\n  // ViewController by filename convention\n  if (p.endsWith('viewcontroller.swift') || p.endsWith('vc.swift')) {\n    return { framework: 'uikit', entryPointMultiplier: 2.5, reason: 'uikit-viewcontroller-file' };\n  }\n\n  // Coordinator pattern (navigation entry points)\n  if (p.includes('/coordinators/') && p.endsWith('.swift')) {\n    return { framework: 'ios-coordinator', entryPointMultiplier: 2.5, reason: 'ios-coordinator' };\n  }\n\n  // Coordinator by filename\n  if (p.endsWith('coordinator.swift')) {\n    return { framework: 'ios-coordinator', entryPointMultiplier: 2.5, reason: 'ios-coordinator-file' };\n  }\n\n  // SwiftUI Views (moderate - reusable components)\n  if ((p.includes('/views/') || p.includes('/scenes/')) && p.endsWith('.swift')) {\n    return { framework: 'swiftui', entryPointMultiplier: 1.8, reason: 'swiftui-view' };\n  }\n\n  // Service layer\n  if (p.includes('/services/') && p.endsWith('.swift')) {\n    return { framework: 'ios-service', entryPointMultiplier: 1.8, reason: 'ios-service' };\n  }\n\n  // Router / navigation\n  if (p.includes('/router/') && p.endsWith('.swift')) {\n    return { framework: 'ios-router', entryPointMultiplier: 2.0, reason: 'ios-router' };\n  }\n\n  // ========== GENERIC PATTERNS ==========\n\n  // Any language: index files in API folders\n  if (p.includes('/api/') && (\n    p.endsWith('/index.ts') || p.endsWith('/index.js') || \n    p.endsWith('/__init__.py')\n  )) {\n    return { framework: 'api', entryPointMultiplier: 1.8, reason: 
'api-index' };\n  }\n  \n  // No framework detected - return null for graceful fallback (1.0 multiplier)\n  return null;\n}\n\n// ============================================================================\n// AST-BASED FRAMEWORK DETECTION\n// ============================================================================\n\n/**\n * Patterns that indicate framework entry points within code definitions.\n * These are matched against AST node text (class/method/function declaration text).\n */\nexport const FRAMEWORK_AST_PATTERNS = {\n  // JavaScript/TypeScript decorators\n  'nestjs': ['@Controller', '@Get', '@Post', '@Put', '@Delete', '@Patch'],\n  'express': ['app.get', 'app.post', 'app.put', 'app.delete', 'router.get', 'router.post'],\n  \n  // Python decorators\n  'fastapi': ['@app.get', '@app.post', '@app.put', '@app.delete', '@router.get'],\n  'flask': ['@app.route', '@blueprint.route'],\n  \n  // Java annotations\n  'spring': ['@RestController', '@Controller', '@GetMapping', '@PostMapping', '@RequestMapping'],\n  'jaxrs': ['@Path', '@GET', '@POST', '@PUT', '@DELETE'],\n  \n  // C# attributes\n  'aspnet': ['[ApiController]', '[HttpGet]', '[HttpPost]', '[HttpPut]', '[HttpDelete]',\n             '[Route]', '[Authorize]', '[AllowAnonymous]'],\n  'signalr': ['[HubMethodName]', ': Hub', ': Hub<'],\n  'blazor': ['@page', '[Parameter]', '@inject'],\n  'efcore': ['DbContext', 'DbSet<', 'OnModelCreating'],\n  \n  // Go patterns (function signatures)\n  'go-http': ['http.Handler', 'http.HandlerFunc', 'ServeHTTP'],\n\n  // PHP/Laravel\n  'laravel': ['Route::get', 'Route::post', 'Route::put', 'Route::delete',\n              'Route::resource', 'Route::apiResource', '#[Route('],\n\n  // Rust macros\n  'actix': ['#[get', '#[post', '#[put', '#[delete'],\n  'axum': ['Router::new'],\n  'rocket': ['#[get', '#[post'],\n\n  // Swift/iOS\n  'uikit': ['viewDidLoad', 'viewWillAppear', 'viewDidAppear', 'UIViewController'],\n  'swiftui': ['@main', 'WindowGroup', 'ContentView', 
'@StateObject', '@ObservedObject'],\n  'combine': ['sink', 'assign', 'Publisher', 'Subscriber'],\n};\n\nimport { SupportedLanguages } from '../../config/supported-languages.js';\n\ninterface AstFrameworkPatternConfig {\n  framework: string;\n  entryPointMultiplier: number;\n  reason: string;\n  patterns: string[];\n}\n\nconst AST_FRAMEWORK_PATTERNS_BY_LANGUAGE: Record<string, AstFrameworkPatternConfig[]> = {\n  [SupportedLanguages.JavaScript]: [\n    { framework: 'nestjs', entryPointMultiplier: 3.2, reason: 'nestjs-decorator', patterns: FRAMEWORK_AST_PATTERNS.nestjs },\n  ],\n  [SupportedLanguages.TypeScript]: [\n    { framework: 'nestjs', entryPointMultiplier: 3.2, reason: 'nestjs-decorator', patterns: FRAMEWORK_AST_PATTERNS.nestjs },\n  ],\n  [SupportedLanguages.Python]: [\n    { framework: 'fastapi', entryPointMultiplier: 3.0, reason: 'fastapi-decorator', patterns: FRAMEWORK_AST_PATTERNS.fastapi },\n    { framework: 'flask', entryPointMultiplier: 2.8, reason: 'flask-decorator', patterns: FRAMEWORK_AST_PATTERNS.flask },\n  ],\n  [SupportedLanguages.Java]: [\n    { framework: 'spring', entryPointMultiplier: 3.2, reason: 'spring-annotation', patterns: FRAMEWORK_AST_PATTERNS.spring },\n    { framework: 'jaxrs', entryPointMultiplier: 3.0, reason: 'jaxrs-annotation', patterns: FRAMEWORK_AST_PATTERNS.jaxrs },\n  ],\n  [SupportedLanguages.Kotlin]: [\n    { framework: 'spring-kotlin', entryPointMultiplier: 3.2, reason: 'spring-kotlin-annotation', patterns: FRAMEWORK_AST_PATTERNS.spring },\n    { framework: 'jaxrs', entryPointMultiplier: 3.0, reason: 'jaxrs-annotation', patterns: FRAMEWORK_AST_PATTERNS.jaxrs },\n    { framework: 'ktor', entryPointMultiplier: 2.8, reason: 'ktor-routing', patterns: ['routing', 'embeddedServer', 'Application.module'] },\n    { framework: 'android-kotlin', entryPointMultiplier: 2.5, reason: 'android-annotation', patterns: ['@AndroidEntryPoint', 'AppCompatActivity', 'Fragment('] },\n  ],\n  [SupportedLanguages.CSharp]: [\n    { framework: 
'aspnet', entryPointMultiplier: 3.2, reason: 'aspnet-attribute', patterns: FRAMEWORK_AST_PATTERNS.aspnet },\n    { framework: 'signalr', entryPointMultiplier: 2.8, reason: 'signalr-attribute', patterns: FRAMEWORK_AST_PATTERNS.signalr },\n    { framework: 'blazor', entryPointMultiplier: 2.5, reason: 'blazor-attribute', patterns: FRAMEWORK_AST_PATTERNS.blazor },\n    { framework: 'efcore', entryPointMultiplier: 2.0, reason: 'efcore-pattern', patterns: FRAMEWORK_AST_PATTERNS.efcore },\n  ],\n  [SupportedLanguages.PHP]: [\n    { framework: 'laravel', entryPointMultiplier: 3.0, reason: 'php-route-attribute', patterns: FRAMEWORK_AST_PATTERNS.laravel },\n  ],\n};\n\n/** Patterns pre-lowercased once at module load, so matching avoids repeated toLowerCase() calls at runtime */\nconst AST_PATTERNS_LOWERED: Record<string, AstFrameworkPatternConfig[]> =\n  Object.fromEntries(\n    Object.entries(AST_FRAMEWORK_PATTERNS_BY_LANGUAGE).map(([lang, cfgs]) => [\n      lang,\n      cfgs.map(cfg => ({ ...cfg, patterns: cfg.patterns.map(p => p.toLowerCase()) })),\n    ])\n  );\n\n/**\n * Detect framework entry points from AST definition text (decorators/annotations/attributes).\n * Returns null if no known pattern is found.\n * Note: callers should slice definitionText to ~300 chars since annotations appear at the start.\n */\nexport function detectFrameworkFromAST(\n  language: SupportedLanguages,\n  definitionText: string\n): FrameworkHint | null {\n  if (!language || !definitionText) return null;\n\n  const configs = AST_PATTERNS_LOWERED[language.toLowerCase()];\n  if (!configs || configs.length === 0) return null;\n\n  const normalized = definitionText.toLowerCase();\n\n  for (const cfg of configs) {\n    for (const pattern of cfg.patterns) {\n      if (normalized.includes(pattern)) {\n        return {\n          framework: cfg.framework,\n          entryPointMultiplier: cfg.entryPointMultiplier,\n          reason: cfg.reason,\n        };\n      }\n    }\n  
}\n\n  return null;\n}\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/heritage-processor.ts",
    "content": "/**\n * Heritage Processor\n *\n * Extracts class inheritance relationships:\n * - EXTENDS: Class extends another Class (TS, JS, Python, C#, C++)\n * - IMPLEMENTS: Class implements an Interface (TS, C#, Java, Kotlin, PHP)\n *\n * Languages like C# use a single `base_list` for both class and interface parents.\n * We resolve the correct edge type by checking the symbol table: if the parent is\n * registered as an Interface, we emit IMPLEMENTS; otherwise EXTENDS. For unresolved\n * external symbols, the fallback heuristic is language-gated:\n *   - C# / Java: apply the `I[A-Z]` naming convention (e.g. IDisposable → IMPLEMENTS)\n *   - Swift: default to IMPLEMENTS (protocol conformance is more common than class inheritance)\n *   - All other languages: default to EXTENDS\n */\n\nimport { KnowledgeGraph } from '../graph/types.js';\nimport { ASTCache } from './ast-cache.js';\nimport Parser from 'tree-sitter';\nimport { isLanguageAvailable, loadParser, loadLanguage } from '../tree-sitter/parser-loader.js';\nimport { LANGUAGE_QUERIES } from './tree-sitter-queries.js';\nimport { generateId } from '../../lib/utils.js';\nimport { getLanguageFromFilename, isVerboseIngestionEnabled, yieldToEventLoop } from './utils.js';\nimport { SupportedLanguages } from '../../config/supported-languages.js';\nimport { getTreeSitterBufferSize } from './constants.js';\nimport type { ExtractedHeritage } from './workers/parse-worker.js';\nimport type { ResolutionContext } from './resolution-context.js';\n\n/** C#/Java convention: interfaces start with I followed by an uppercase letter */\nconst INTERFACE_NAME_RE = /^I[A-Z]/;\n\n/**\n * Determine whether a heritage.extends capture is actually an IMPLEMENTS relationship.\n * Uses the symbol table first (authoritative — Tier 1); falls back to a language-gated\n * heuristic for external symbols not present in the graph:\n *   - C# / Java: `I[A-Z]` naming convention\n *   - Swift: default IMPLEMENTS (protocol conformance is the 
norm)\n *   - All others: default EXTENDS\n */\nconst resolveExtendsType = (\n  parentName: string,\n  currentFilePath: string,\n  ctx: ResolutionContext,\n  language: SupportedLanguages,\n): { type: 'EXTENDS' | 'IMPLEMENTS'; idPrefix: string } => {\n  const resolved = ctx.resolve(parentName, currentFilePath);\n  if (resolved && resolved.candidates.length > 0) {\n    const isInterface = resolved.candidates[0].type === 'Interface';\n    return isInterface\n      ? { type: 'IMPLEMENTS', idPrefix: 'Interface' }\n      : { type: 'EXTENDS', idPrefix: 'Class' };\n  }\n  // Unresolved symbol — fall back to language-specific heuristic\n  if (language === SupportedLanguages.CSharp || language === SupportedLanguages.Java) {\n    if (INTERFACE_NAME_RE.test(parentName)) {\n      return { type: 'IMPLEMENTS', idPrefix: 'Interface' };\n    }\n  } else if (language === SupportedLanguages.Swift) {\n    // Protocol conformance is far more common than class inheritance in Swift\n    return { type: 'IMPLEMENTS', idPrefix: 'Interface' };\n  }\n  return { type: 'EXTENDS', idPrefix: 'Class' };\n};\n\n/**\n * Resolve a symbol ID for heritage, with fallback to generated ID.\n * Uses ctx.resolve() → pick first candidate's nodeId → generate synthetic ID.\n */\nconst resolveHeritageId = (\n  name: string,\n  filePath: string,\n  ctx: ResolutionContext,\n  fallbackLabel: string,\n  fallbackKey?: string,\n): string => {\n  const resolved = ctx.resolve(name, filePath);\n  if (resolved && resolved.candidates.length > 0) {\n    // For global with multiple candidates, refuse (a wrong edge is worse than no edge)\n    if (resolved.tier === 'global' && resolved.candidates.length > 1) {\n      return generateId(fallbackLabel, fallbackKey ?? name);\n    }\n    return resolved.candidates[0].nodeId;\n  }\n  return generateId(fallbackLabel, fallbackKey ?? 
name);\n};\n\nexport const processHeritage = async (\n  graph: KnowledgeGraph,\n  files: { path: string; content: string }[],\n  astCache: ASTCache,\n  ctx: ResolutionContext,\n  onProgress?: (current: number, total: number) => void,\n) => {\n  const parser = await loadParser();\n  const logSkipped = isVerboseIngestionEnabled();\n  const skippedByLang = logSkipped ? new Map<string, number>() : null;\n\n  for (let i = 0; i < files.length; i++) {\n    const file = files[i];\n    onProgress?.(i + 1, files.length);\n    if (i % 20 === 0) await yieldToEventLoop();\n\n    // 1. Check language support\n    const language = getLanguageFromFilename(file.path);\n    if (!language) continue;\n    if (!isLanguageAvailable(language)) {\n      if (skippedByLang) {\n        skippedByLang.set(language, (skippedByLang.get(language) ?? 0) + 1);\n      }\n      continue;\n    }\n\n    const queryStr = LANGUAGE_QUERIES[language];\n    if (!queryStr) continue;\n\n    // 2. Load the language\n    await loadLanguage(language, file.path);\n\n    // 3. Get AST\n    let tree = astCache.get(file.path);\n    if (!tree) {\n      // Use larger bufferSize for files > 32KB\n      try {\n        tree = parser.parse(file.content, undefined, { bufferSize: getTreeSitterBufferSize(file.content.length) });\n      } catch (parseError) {\n        // Skip files that can't be parsed\n        continue;\n      }\n      // Cache re-parsed tree for potential future use\n      astCache.set(file.path, tree);\n    }\n\n    let query;\n    let matches;\n    try {\n      const language = parser.getLanguage();\n      query = new Parser.Query(language, queryStr);\n      matches = query.matches(tree.rootNode);\n    } catch (queryError) {\n      console.warn(`Heritage query error for ${file.path}:`, queryError);\n      continue;\n    }\n\n    // 4. 
Process heritage matches\n    matches.forEach(match => {\n      const captureMap: Record<string, any> = {};\n      match.captures.forEach(c => {\n        captureMap[c.name] = c.node;\n      });\n\n      // EXTENDS or IMPLEMENTS: resolve via symbol table for languages where\n      // the tree-sitter query can't distinguish classes from interfaces (C#, Java)\n      if (captureMap['heritage.class'] && captureMap['heritage.extends']) {\n        // Go struct embedding: skip named fields (only anonymous fields are embedded)\n        const extendsNode = captureMap['heritage.extends'];\n        const fieldDecl = extendsNode.parent;\n        if (fieldDecl?.type === 'field_declaration' && fieldDecl.childForFieldName('name')) {\n          return; // Named field, not struct embedding\n        }\n\n        const className = captureMap['heritage.class'].text;\n        const parentClassName = captureMap['heritage.extends'].text;\n\n        const { type: relType, idPrefix } = resolveExtendsType(parentClassName, file.path, ctx, language);\n\n        const childId = resolveHeritageId(className, file.path, ctx, 'Class', `${file.path}:${className}`);\n        const parentId = resolveHeritageId(parentClassName, file.path, ctx, idPrefix);\n\n        if (childId && parentId && childId !== parentId) {\n          graph.addRelationship({\n            id: generateId(relType, `${childId}->${parentId}`),\n            sourceId: childId,\n            targetId: parentId,\n            type: relType,\n            confidence: 1.0,\n            reason: '',\n          });\n        }\n      }\n\n      // IMPLEMENTS: Class implements Interface (TypeScript only)\n      if (captureMap['heritage.class'] && captureMap['heritage.implements']) {\n        const className = captureMap['heritage.class'].text;\n        const interfaceName = captureMap['heritage.implements'].text;\n\n        const classId = resolveHeritageId(className, file.path, ctx, 'Class', `${file.path}:${className}`);\n        const 
interfaceId = resolveHeritageId(interfaceName, file.path, ctx, 'Interface');\n\n        if (classId && interfaceId) {\n          graph.addRelationship({\n            id: generateId('IMPLEMENTS', `${classId}->${interfaceId}`),\n            sourceId: classId,\n            targetId: interfaceId,\n            type: 'IMPLEMENTS',\n            confidence: 1.0,\n            reason: '',\n          });\n        }\n      }\n\n      // IMPLEMENTS (Rust): impl Trait for Struct\n      if (captureMap['heritage.trait'] && captureMap['heritage.class']) {\n        const structName = captureMap['heritage.class'].text;\n        const traitName = captureMap['heritage.trait'].text;\n\n        const structId = resolveHeritageId(structName, file.path, ctx, 'Struct', `${file.path}:${structName}`);\n        const traitId = resolveHeritageId(traitName, file.path, ctx, 'Trait');\n\n        if (structId && traitId) {\n          graph.addRelationship({\n            id: generateId('IMPLEMENTS', `${structId}->${traitId}`),\n            sourceId: structId,\n            targetId: traitId,\n            type: 'IMPLEMENTS',\n            confidence: 1.0,\n            reason: 'trait-impl',\n          });\n        }\n      }\n    });\n\n    // Tree is now owned by the LRU cache — no manual delete needed\n  }\n\n  if (skippedByLang && skippedByLang.size > 0) {\n    for (const [lang, count] of skippedByLang.entries()) {\n      console.warn(\n        `[ingestion] Skipped ${count} ${lang} file(s) in heritage processing — ${lang} parser not available.`\n      );\n    }\n  }\n};\n\n/**\n * Fast path: resolve pre-extracted heritage from workers.\n * No AST parsing — workers already extracted className + parentName + kind.\n */\nexport const processHeritageFromExtracted = async (\n  graph: KnowledgeGraph,\n  extractedHeritage: ExtractedHeritage[],\n  ctx: ResolutionContext,\n  onProgress?: (current: number, total: number) => void,\n) => {\n  const total = extractedHeritage.length;\n\n  for (let i = 0; i < 
extractedHeritage.length; i++) {\n    if (i % 500 === 0) {\n      onProgress?.(i, total);\n      await yieldToEventLoop();\n    }\n\n    const h = extractedHeritage[i];\n\n    if (h.kind === 'extends') {\n      const fileLanguage = getLanguageFromFilename(h.filePath);\n      if (!fileLanguage) continue;\n      const { type: relType, idPrefix } = resolveExtendsType(h.parentName, h.filePath, ctx, fileLanguage);\n\n      const childId = resolveHeritageId(h.className, h.filePath, ctx, 'Class', `${h.filePath}:${h.className}`);\n      const parentId = resolveHeritageId(h.parentName, h.filePath, ctx, idPrefix);\n\n      if (childId && parentId && childId !== parentId) {\n        graph.addRelationship({\n          id: generateId(relType, `${childId}->${parentId}`),\n          sourceId: childId,\n          targetId: parentId,\n          type: relType,\n          confidence: 1.0,\n          reason: '',\n        });\n      }\n    } else if (h.kind === 'implements') {\n      const classId = resolveHeritageId(h.className, h.filePath, ctx, 'Class', `${h.filePath}:${h.className}`);\n      const interfaceId = resolveHeritageId(h.parentName, h.filePath, ctx, 'Interface');\n\n      if (classId && interfaceId) {\n        graph.addRelationship({\n          id: generateId('IMPLEMENTS', `${classId}->${interfaceId}`),\n          sourceId: classId,\n          targetId: interfaceId,\n          type: 'IMPLEMENTS',\n          confidence: 1.0,\n          reason: '',\n        });\n      }\n    } else if (h.kind === 'trait-impl' || h.kind === 'include' || h.kind === 'extend' || h.kind === 'prepend') {\n      const structId = resolveHeritageId(h.className, h.filePath, ctx, 'Struct', `${h.filePath}:${h.className}`);\n      const traitId = resolveHeritageId(h.parentName, h.filePath, ctx, 'Trait');\n\n      if (structId && traitId) {\n        graph.addRelationship({\n          id: generateId('IMPLEMENTS', `${structId}->${traitId}:${h.kind}`),\n          sourceId: structId,\n          targetId: 
traitId,\n          type: 'IMPLEMENTS',\n          confidence: 1.0,\n          reason: h.kind,\n        });\n      }\n    }\n  }\n\n  onProgress?.(total, total);\n};\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/import-processor.ts",
    "content": "import { KnowledgeGraph } from '../graph/types.js';\nimport { ASTCache } from './ast-cache.js';\nimport Parser from 'tree-sitter';\nimport { isLanguageAvailable, loadParser, loadLanguage } from '../tree-sitter/parser-loader.js';\nimport { LANGUAGE_QUERIES } from './tree-sitter-queries.js';\nimport { generateId } from '../../lib/utils.js';\nimport { getLanguageFromFilename, isVerboseIngestionEnabled, yieldToEventLoop } from './utils.js';\nimport { SupportedLanguages } from '../../config/supported-languages.js';\nimport { extractNamedBindings } from './named-binding-extraction.js';\nimport type { ExtractedImport } from './workers/parse-worker.js';\nimport { getTreeSitterBufferSize } from './constants.js';\nimport {\n  loadTsconfigPaths,\n  loadGoModulePath,\n  loadComposerConfig,\n  loadCSharpProjectConfig,\n  loadSwiftPackageConfig,\n  type SwiftPackageConfig,\n} from './language-config.js';\nimport {\n  buildSuffixIndex,\n  resolveImportPath,\n  appendKotlinWildcard,\n  KOTLIN_EXTENSIONS,\n  resolveJvmWildcard,\n  resolveJvmMemberImport,\n  resolveGoPackageDir,\n  resolveGoPackage,\n  resolveCSharpImport,\n  resolveCSharpNamespaceDir,\n  resolvePhpImport,\n  resolveRustImport,\n  resolveRubyImport,\n  resolvePythonImport,\n} from './resolvers/index.js';\nimport { callRouters } from './call-routing.js';\nimport type { ResolutionContext } from './resolution-context.js';\nimport type {\n  SuffixIndex,\n  TsconfigPaths,\n  GoModuleConfig,\n  CSharpProjectConfig,\n  ComposerConfig\n} from './resolvers/index.js';\n\n// Re-export resolver types for consumers\nexport type {\n  SuffixIndex,\n  TsconfigPaths,\n  GoModuleConfig,\n  CSharpProjectConfig,\n  ComposerConfig\n} from './resolvers/index.js';\n\nconst isDev = process.env.NODE_ENV === 'development';\n\n// Type: Map<FilePath, Set<ResolvedFilePath>>\n// Stores all files that a given file imports from\nexport type ImportMap = Map<string, Set<string>>;\n\n// Type: Map<FilePath, Set<PackageDirSuffix>>\n// 
Stores Go package directory suffixes imported by a file (e.g., \"/internal/auth/\").\n// Avoids expanding every Go package import into N individual ImportMap edges.\nexport type PackageMap = Map<string, Set<string>>;\n\n// Type: Map<ImportingFilePath, Map<LocalName, {sourcePath, exportedName}>>\n// Tracks which specific names a file imports from which sources (TS/Python only).\n// Used to tighten Tier 2a resolution: `import { User } from './models'`\n// means only `User` (not `Repo`) is visible from models.ts via this import.\n// Stores both the resolved source path and the original exported name so that\n// aliased imports (`import { User as U }`) can resolve U → User in the source file.\nexport interface NamedImportBinding { sourcePath: string; exportedName: string }\nexport type NamedImportMap = Map<string, Map<string, NamedImportBinding>>;\n\n/**\n * Check if a file path is directly inside a package directory identified by its suffix.\n * Used by the symbol resolver for Go and C# directory-level import matching.\n */\nexport function isFileInPackageDir(filePath: string, dirSuffix: string): boolean {\n  // Prepend '/' so paths like \"internal/auth/service.go\" match suffix \"/internal/auth/\"\n  const normalized = '/' + filePath.replace(/\\\\/g, '/');\n  if (!normalized.includes(dirSuffix)) return false;\n  const afterDir = normalized.substring(normalized.indexOf(dirSuffix) + dirSuffix.length);\n  return !afterDir.includes('/');\n}\n\n/** Pre-built lookup structures for import resolution. Build once, reuse across chunks. 
*/\nexport interface ImportResolutionContext {\n  allFilePaths: Set<string>;\n  allFileList: string[];\n  normalizedFileList: string[];\n  suffixIndex: SuffixIndex;\n  resolveCache: Map<string, string | null>;\n}\n\nexport function buildImportResolutionContext(allPaths: string[]): ImportResolutionContext {\n  const allFileList = allPaths;\n  const normalizedFileList = allFileList.map(p => p.replace(/\\\\/g, '/'));\n  const allFilePaths = new Set(allFileList);\n  const suffixIndex = buildSuffixIndex(normalizedFileList, allFileList);\n  return { allFilePaths, allFileList, normalizedFileList, suffixIndex, resolveCache: new Map() };\n}\n\n// Config loaders extracted to ./language-config.ts (Phase 2 refactor)\n// Resolver functions are in ./resolvers/ — imported above\n\n// ============================================================================\n// SHARED LANGUAGE DISPATCH\n// ============================================================================\n\n/** Bundled language-specific configs loaded once per ingestion run. */\ninterface LanguageConfigs {\n  tsconfigPaths: TsconfigPaths | null;\n  goModule: GoModuleConfig | null;\n  composerConfig: ComposerConfig | null;\n  swiftPackageConfig: SwiftPackageConfig | null;\n  csharpConfigs: CSharpProjectConfig[];\n}\n\n/** Context for import path resolution (file lists, indexes, cache). 
*/\ninterface ResolveCtx {\n  allFilePaths: Set<string>;\n  allFileList: string[];\n  normalizedFileList: string[];\n  index: SuffixIndex;\n  resolveCache: Map<string, string | null>;\n}\n\n/**\n * Result of resolving an import via language-specific dispatch.\n * - 'files': resolved to one or more files → add to ImportMap\n * - 'package': resolved to a directory → add graph edges + store dirSuffix in PackageMap\n * - null: no resolution (external dependency, etc.)\n */\ntype ImportResult =\n  | { kind: 'files'; files: string[] }\n  | { kind: 'package'; files: string[]; dirSuffix: string }\n  | null;\n\n/**\n * Shared language dispatch for import resolution.\n * Used by both processImports and processImportsFromExtracted.\n */\nfunction resolveLanguageImport(\n  filePath: string,\n  rawImportPath: string,\n  language: SupportedLanguages,\n  configs: LanguageConfigs,\n  ctx: ResolveCtx,\n): ImportResult {\n  const { allFilePaths, allFileList, normalizedFileList, index, resolveCache } = ctx;\n  const { tsconfigPaths, goModule, composerConfig, swiftPackageConfig, csharpConfigs } = configs;\n\n  // JVM languages (Java + Kotlin): handle wildcards and member imports\n  if (language === SupportedLanguages.Java || language === SupportedLanguages.Kotlin) {\n    const exts = language === SupportedLanguages.Java ? 
['.java'] : KOTLIN_EXTENSIONS;\n\n    if (rawImportPath.endsWith('.*')) {\n      const matchedFiles = resolveJvmWildcard(rawImportPath, normalizedFileList, allFileList, exts, index);\n      if (matchedFiles.length === 0 && language === SupportedLanguages.Kotlin) {\n        const javaMatches = resolveJvmWildcard(rawImportPath, normalizedFileList, allFileList, ['.java'], index);\n        if (javaMatches.length > 0) return { kind: 'files', files: javaMatches };\n      }\n      if (matchedFiles.length > 0) return { kind: 'files', files: matchedFiles };\n      // Fall through to standard resolution\n    } else {\n      let memberResolved = resolveJvmMemberImport(rawImportPath, normalizedFileList, allFileList, exts, index);\n      if (!memberResolved && language === SupportedLanguages.Kotlin) {\n        memberResolved = resolveJvmMemberImport(rawImportPath, normalizedFileList, allFileList, ['.java'], index);\n      }\n      if (memberResolved) return { kind: 'files', files: [memberResolved] };\n      // Fall through to standard resolution\n    }\n  }\n\n  // Go: handle package-level imports\n  if (language === SupportedLanguages.Go && goModule && rawImportPath.startsWith(goModule.modulePath)) {\n    const pkgSuffix = resolveGoPackageDir(rawImportPath, goModule);\n    if (pkgSuffix) {\n      const pkgFiles = resolveGoPackage(rawImportPath, goModule, normalizedFileList, allFileList);\n      if (pkgFiles.length > 0) {\n        return { kind: 'package', files: pkgFiles, dirSuffix: pkgSuffix };\n      }\n    }\n    // Fall through if no files found (package might be external)\n  }\n\n  // C#: handle namespace-based imports (using directives)\n  if (language === SupportedLanguages.CSharp && csharpConfigs.length > 0) {\n    const resolvedFiles = resolveCSharpImport(rawImportPath, csharpConfigs, normalizedFileList, allFileList, index);\n    if (resolvedFiles.length > 1) {\n      const dirSuffix = resolveCSharpNamespaceDir(rawImportPath, csharpConfigs);\n      if (dirSuffix) {\n  
      return { kind: 'package', files: resolvedFiles, dirSuffix };\n      }\n    }\n    if (resolvedFiles.length > 0) return { kind: 'files', files: resolvedFiles };\n    return null;\n  }\n\n  // PHP: handle namespace-based imports (use statements)\n  if (language === SupportedLanguages.PHP) {\n    const resolved = resolvePhpImport(rawImportPath, composerConfig, allFilePaths, normalizedFileList, allFileList, index);\n    return resolved ? { kind: 'files', files: [resolved] } : null;\n  }\n\n  // Swift: handle module imports\n  if (language === SupportedLanguages.Swift && swiftPackageConfig) {\n    const targetDir = swiftPackageConfig.targets.get(rawImportPath);\n    if (targetDir) {\n      const dirPrefix = targetDir + '/';\n      const files: string[] = [];\n      for (let i = 0; i < normalizedFileList.length; i++) {\n        if (normalizedFileList[i].startsWith(dirPrefix) && normalizedFileList[i].endsWith('.swift')) {\n          files.push(allFileList[i]);\n        }\n      }\n      if (files.length > 0) return { kind: 'files', files };\n    }\n    return null; // External framework (Foundation, UIKit, etc.)\n  }\n\n  // Python: relative imports (PEP 328) + proximity-based bare imports\n  // Falls through to standard suffix resolution when proximity finds no match.\n  if (language === SupportedLanguages.Python) {\n    const resolved = resolvePythonImport(filePath, rawImportPath, allFilePaths);\n    if (resolved) return { kind: 'files', files: [resolved] };\n    if (rawImportPath.startsWith('.')) return null; // relative but unresolved — don't suffix-match\n  }\n\n  // Ruby: require / require_relative\n  if (language === SupportedLanguages.Ruby) {\n    const resolved = resolveRubyImport(rawImportPath, normalizedFileList, allFileList, index);\n    return resolved ? 
{ kind: 'files', files: [resolved] } : null;\n  }\n\n  // Rust: expand top-level grouped imports: use {crate::a, crate::b}\n  if (language === SupportedLanguages.Rust && rawImportPath.startsWith('{') && rawImportPath.endsWith('}')) {\n    const inner = rawImportPath.slice(1, -1);\n    const parts = inner.split(',').map(p => p.trim()).filter(Boolean);\n    const resolved: string[] = [];\n    for (const part of parts) {\n      const r = resolveRustImport(filePath, part, allFilePaths);\n      if (r) resolved.push(r);\n    }\n    return resolved.length > 0 ? { kind: 'files', files: resolved } : null;\n  }\n\n  // Standard single-file resolution\n  const resolvedPath = resolveImportPath(\n    filePath,\n    rawImportPath,\n    allFilePaths,\n    allFileList,\n    normalizedFileList,\n    resolveCache,\n    language,\n    tsconfigPaths,\n    index,\n  );\n\n  return resolvedPath ? { kind: 'files', files: [resolvedPath] } : null;\n}\n\n/**\n * Apply an ImportResult: emit graph edges and update ImportMap/PackageMap.\n * If namedBindings are provided and the import resolves to a single file,\n * also populate the NamedImportMap for precise Tier 2a resolution.\n */\nfunction applyImportResult(\n  result: ImportResult,\n  filePath: string,\n  importMap: ImportMap,\n  packageMap: PackageMap | undefined,\n  addImportEdge: (from: string, to: string) => void,\n  addImportGraphEdge: (from: string, to: string) => void,\n  namedBindings?: { local: string; exported: string }[],\n  namedImportMap?: NamedImportMap,\n): void {\n  if (!result) return;\n\n  if (result.kind === 'package' && packageMap) {\n    // Store directory suffix in PackageMap (skip ImportMap expansion)\n    for (const resolvedFile of result.files) {\n      addImportGraphEdge(filePath, resolvedFile);\n    }\n    if (!packageMap.has(filePath)) packageMap.set(filePath, new Set());\n    packageMap.get(filePath)!.add(result.dirSuffix);\n  } else {\n    // 'files' kind, or 'package' without PackageMap — use ImportMap 
directly\n    const files = result.files;\n    for (const resolvedFile of files) {\n      addImportEdge(filePath, resolvedFile);\n    }\n\n    // Record named bindings for precise Tier 2a resolution\n    if (namedBindings && namedImportMap && files.length === 1) {\n      const resolvedFile = files[0];\n      if (!namedImportMap.has(filePath)) namedImportMap.set(filePath, new Map());\n      const fileBindings = namedImportMap.get(filePath)!;\n      for (const binding of namedBindings) {\n        fileBindings.set(binding.local, { sourcePath: resolvedFile, exportedName: binding.exported });\n      }\n    }\n  }\n}\n\n// ============================================================================\n// MAIN IMPORT PROCESSOR\n// ============================================================================\n\nexport const processImports = async (\n  graph: KnowledgeGraph,\n  files: { path: string; content: string }[],\n  astCache: ASTCache,\n  ctx: ResolutionContext,\n  onProgress?: (current: number, total: number) => void,\n  repoRoot?: string,\n  allPaths?: string[],\n) => {\n  const importMap = ctx.importMap;\n  const packageMap = ctx.packageMap;\n  const namedImportMap = ctx.namedImportMap;\n  // Use allPaths (full repo) when available for cross-chunk resolution, else fall back to chunk files\n  const allFileList = allPaths ?? files.map(f => f.path);\n  const allFilePaths = new Set(allFileList);\n  const parser = await loadParser();\n  const logSkipped = isVerboseIngestionEnabled();\n  const skippedByLang = logSkipped ? 
new Map<string, number>() : null;\n  const resolveCache = new Map<string, string | null>();\n  // Pre-compute normalized file list once (forward slashes)\n  const normalizedFileList = allFileList.map(p => p.replace(/\\\\/g, '/'));\n  // Build suffix index for O(1) lookups\n  const index = buildSuffixIndex(normalizedFileList, allFileList);\n\n  // Track import statistics\n  let totalImportsFound = 0;\n  let totalImportsResolved = 0;\n\n  // Load language-specific configs once before the file loop\n  const effectiveRoot = repoRoot || '';\n  const configs: LanguageConfigs = {\n    tsconfigPaths: await loadTsconfigPaths(effectiveRoot),\n    goModule: await loadGoModulePath(effectiveRoot),\n    composerConfig: await loadComposerConfig(effectiveRoot),\n    swiftPackageConfig: await loadSwiftPackageConfig(effectiveRoot),\n    csharpConfigs: await loadCSharpProjectConfig(effectiveRoot),\n  };\n  const resolveCtx: ResolveCtx = { allFilePaths, allFileList, normalizedFileList, index, resolveCache };\n\n  // Helper: add an IMPORTS edge to the graph only (no ImportMap update)\n  const addImportGraphEdge = (filePath: string, resolvedPath: string) => {\n    const sourceId = generateId('File', filePath);\n    const targetId = generateId('File', resolvedPath);\n    const relId = generateId('IMPORTS', `${filePath}->${resolvedPath}`);\n\n    totalImportsResolved++;\n\n    graph.addRelationship({\n      id: relId,\n      sourceId,\n      targetId,\n      type: 'IMPORTS',\n      confidence: 1.0,\n      reason: '',\n    });\n  };\n\n  // Helper: add an IMPORTS edge + update import map\n  const addImportEdge = (filePath: string, resolvedPath: string) => {\n    addImportGraphEdge(filePath, resolvedPath);\n\n    if (!importMap.has(filePath)) {\n      importMap.set(filePath, new Set());\n    }\n    importMap.get(filePath)!.add(resolvedPath);\n  };\n\n  for (let i = 0; i < files.length; i++) {\n    const file = files[i];\n    onProgress?.(i + 1, files.length);\n    if (i % 20 === 0) await 
yieldToEventLoop();\n\n    // 1. Check language support first\n    const language = getLanguageFromFilename(file.path);\n    if (!language) continue;\n    if (!isLanguageAvailable(language)) {\n      if (skippedByLang) {\n        skippedByLang.set(language, (skippedByLang.get(language) ?? 0) + 1);\n      }\n      continue;\n    }\n\n    const queryStr = LANGUAGE_QUERIES[language];\n    if (!queryStr) continue;\n\n    // 2. ALWAYS load the language before querying (parser is stateful)\n    await loadLanguage(language, file.path);\n\n    // 3. Get AST (Try Cache First)\n    let tree = astCache.get(file.path);\n    let wasReparsed = false;\n\n    if (!tree) {\n      try {\n        tree = parser.parse(file.content, undefined, { bufferSize: getTreeSitterBufferSize(file.content.length) });\n      } catch (parseError) {\n        continue;\n      }\n      wasReparsed = true;\n      // Cache re-parsed tree so call/heritage phases get hits\n      astCache.set(file.path, tree);\n    }\n\n    let query;\n    let matches;\n    try {\n      const lang = parser.getLanguage();\n      query = new Parser.Query(lang, queryStr);\n      matches = query.matches(tree.rootNode);\n    } catch (queryError: any) {\n      if (isDev) {\n        console.group(`🔴 Query Error: ${file.path}`);\n        console.log('Language:', language);\n        console.log('Query (first 200 chars):', queryStr.substring(0, 200) + '...');\n        console.log('Error:', queryError?.message || queryError);\n        console.log('File content (first 300 chars):', file.content.substring(0, 300));\n        console.log('AST root type:', tree.rootNode?.type);\n        console.log('AST has errors:', tree.rootNode?.hasError);\n        console.groupEnd();\n      }\n\n      if (wasReparsed) (tree as any).delete?.();\n      continue;\n    }\n\n    matches.forEach(match => {\n      const captureMap: Record<string, any> = {};\n      match.captures.forEach(c => captureMap[c.name] = c.node);\n\n      if (captureMap['import']) {\n  
      const sourceNode = captureMap['import.source'];\n        if (!sourceNode) {\n          if (isDev) {\n            console.log(`⚠️ Import captured but no source node in ${file.path}`);\n          }\n          return;\n        }\n\n        // Clean path (remove quotes and angle brackets for C/C++ includes)\n        const rawImportPath = language === SupportedLanguages.Kotlin\n          ? appendKotlinWildcard(sourceNode.text.replace(/['\"<>]/g, ''), captureMap['import'])\n          : sourceNode.text.replace(/['\"<>]/g, '');\n        totalImportsFound++;\n\n        const result = resolveLanguageImport(file.path, rawImportPath, language, configs, resolveCtx);\n        const bindings = namedImportMap ? extractNamedBindings(captureMap['import'], language) : undefined;\n        applyImportResult(result, file.path, importMap, packageMap, addImportEdge, addImportGraphEdge, bindings, namedImportMap);\n      }\n\n      // ---- Language-specific call-as-import routing (Ruby require, etc.) ----\n      if (captureMap['call']) {\n        const callNameNode = captureMap['call.name'];\n        if (callNameNode) {\n          const callRouter = callRouters[language];\n          const routed = callRouter(callNameNode.text, captureMap['call']);\n          if (routed && routed.kind === 'import') {\n            totalImportsFound++;\n            const result = resolveLanguageImport(file.path, routed.importPath, language, configs, resolveCtx);\n            applyImportResult(result, file.path, importMap, packageMap, addImportEdge, addImportGraphEdge);\n          }\n        }\n      }\n    });\n\n    // Tree is now owned by the LRU cache — no manual delete needed\n  }\n\n  if (skippedByLang && skippedByLang.size > 0) {\n    for (const [lang, count] of skippedByLang.entries()) {\n      console.warn(\n        `[ingestion] Skipped ${count} ${lang} file(s) in import processing — ${lang} parser not available.`\n      );\n    }\n  }\n\n  if (isDev) {\n    console.log(`📊 Import processing 
complete: ${totalImportsResolved}/${totalImportsFound} imports resolved to graph edges`);\n  }\n};\n\n// ============================================================================\n// FAST PATH: Resolve pre-extracted imports (no parsing needed)\n// ============================================================================\n\nexport const processImportsFromExtracted = async (\n  graph: KnowledgeGraph,\n  files: { path: string }[],\n  extractedImports: ExtractedImport[],\n  ctx: ResolutionContext,\n  onProgress?: (current: number, total: number) => void,\n  repoRoot?: string,\n  prebuiltCtx?: ImportResolutionContext,\n) => {\n  const importMap = ctx.importMap;\n  const packageMap = ctx.packageMap;\n  const namedImportMap = ctx.namedImportMap;\n  const importCtx = prebuiltCtx ?? buildImportResolutionContext(files.map(f => f.path));\n  const { allFilePaths, allFileList, normalizedFileList, suffixIndex: index, resolveCache } = importCtx;\n\n  let totalImportsFound = 0;\n  let totalImportsResolved = 0;\n\n  const effectiveRoot = repoRoot || '';\n  const configs: LanguageConfigs = {\n    tsconfigPaths: await loadTsconfigPaths(effectiveRoot),\n    goModule: await loadGoModulePath(effectiveRoot),\n    composerConfig: await loadComposerConfig(effectiveRoot),\n    swiftPackageConfig: await loadSwiftPackageConfig(effectiveRoot),\n    csharpConfigs: await loadCSharpProjectConfig(effectiveRoot),\n  };\n  const resolveCtx: ResolveCtx = { allFilePaths, allFileList, normalizedFileList, index, resolveCache };\n\n  // Helper: add an IMPORTS edge to the graph only (no ImportMap update)\n  const addImportGraphEdge = (filePath: string, resolvedPath: string) => {\n    const sourceId = generateId('File', filePath);\n    const targetId = generateId('File', resolvedPath);\n    const relId = generateId('IMPORTS', `${filePath}->${resolvedPath}`);\n\n    totalImportsResolved++;\n\n    graph.addRelationship({\n      id: relId,\n      sourceId,\n      targetId,\n      type: 'IMPORTS',\n      
confidence: 1.0,\n      reason: '',\n    });\n  };\n\n  const addImportEdge = (filePath: string, resolvedPath: string) => {\n    addImportGraphEdge(filePath, resolvedPath);\n\n    if (!importMap.has(filePath)) {\n      importMap.set(filePath, new Set());\n    }\n    importMap.get(filePath)!.add(resolvedPath);\n  };\n\n  // Group by file for progress reporting (users see file count, not import count)\n  const importsByFile = new Map<string, ExtractedImport[]>();\n  for (const imp of extractedImports) {\n    let list = importsByFile.get(imp.filePath);\n    if (!list) {\n      list = [];\n      importsByFile.set(imp.filePath, list);\n    }\n    list.push(imp);\n  }\n\n  const totalFiles = importsByFile.size;\n  let filesProcessed = 0;\n\n  for (const [filePath, fileImports] of importsByFile) {\n    filesProcessed++;\n    if (filesProcessed % 100 === 0) {\n      onProgress?.(filesProcessed, totalFiles);\n      await yieldToEventLoop();\n    }\n\n    for (const imp of fileImports) {\n      totalImportsFound++;\n\n      const result = resolveLanguageImport(filePath, imp.rawImportPath, imp.language, configs, resolveCtx);\n      applyImportResult(result, filePath, importMap, packageMap, addImportEdge, addImportGraphEdge, imp.namedBindings, namedImportMap);\n    }\n  }\n\n  onProgress?.(totalFiles, totalFiles);\n\n  if (isDev) {\n    console.log(`📊 Import processing (fast path): ${totalImportsResolved}/${totalImportsFound} imports resolved to graph edges`);\n  }\n};\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/language-config.ts",
    "content": "import fs from 'fs/promises';\nimport path from 'path';\n\nconst isDev = process.env.NODE_ENV === 'development';\n\n// ============================================================================\n// LANGUAGE-SPECIFIC CONFIG TYPES\n// ============================================================================\n\n/** TypeScript path alias config parsed from tsconfig.json */\nexport interface TsconfigPaths {\n  /** Map of alias prefix -> target prefix (e.g., \"@/\" -> \"src/\") */\n  aliases: Map<string, string>;\n  /** Base URL for path resolution (relative to repo root) */\n  baseUrl: string;\n}\n\n/** Go module config parsed from go.mod */\nexport interface GoModuleConfig {\n  /** Module path (e.g., \"github.com/user/repo\") */\n  modulePath: string;\n}\n\n/** PHP Composer PSR-4 autoload config */\nexport interface ComposerConfig {\n  /** Map of namespace prefix -> directory (e.g., \"App\\\\\" -> \"app/\") */\n  psr4: Map<string, string>;\n}\n\n/** C# project config parsed from .csproj files */\nexport interface CSharpProjectConfig {\n  /** Root namespace from <RootNamespace> or assembly name (default: project directory name) */\n  rootNamespace: string;\n  /** Directory containing the .csproj file */\n  projectDir: string;\n}\n\n/** Swift Package Manager module config */\nexport interface SwiftPackageConfig {\n  /** Map of target name -> source directory path (e.g., \"SiuperModel\" -> \"Package/Sources/SiuperModel\") */\n  targets: Map<string, string>;\n}\n\n// ============================================================================\n// LANGUAGE-SPECIFIC CONFIG LOADERS\n// ============================================================================\n\n/**\n * Parse tsconfig.json to extract path aliases.\n * Tries tsconfig.json, tsconfig.app.json, tsconfig.base.json in order.\n */\nexport async function loadTsconfigPaths(repoRoot: string): Promise<TsconfigPaths | null> {\n  const candidates = ['tsconfig.json', 'tsconfig.app.json', 
'tsconfig.base.json'];\n\n  for (const filename of candidates) {\n    try {\n      const tsconfigPath = path.join(repoRoot, filename);\n      const raw = await fs.readFile(tsconfigPath, 'utf-8');\n      // Strip JSON comments (// and /* */ style) for robustness\n      const stripped = raw.replace(/\\/\\/.*$/gm, '').replace(/\\/\\*[\\s\\S]*?\\*\\//g, '');\n      const tsconfig = JSON.parse(stripped);\n      const compilerOptions = tsconfig.compilerOptions;\n      if (!compilerOptions?.paths) continue;\n\n      const baseUrl = compilerOptions.baseUrl || '.';\n      const aliases = new Map<string, string>();\n\n      for (const [pattern, targets] of Object.entries(compilerOptions.paths)) {\n        if (!Array.isArray(targets) || targets.length === 0) continue;\n        const target = targets[0] as string;\n\n        // Convert glob patterns: \"@/*\" -> \"@/\", \"src/*\" -> \"src/\"\n        const aliasPrefix = pattern.endsWith('/*') ? pattern.slice(0, -1) : pattern;\n        const targetPrefix = target.endsWith('/*') ? 
target.slice(0, -1) : target;\n\n        aliases.set(aliasPrefix, targetPrefix);\n      }\n\n      if (aliases.size > 0) {\n        if (isDev) {\n          console.log(`📦 Loaded ${aliases.size} path aliases from ${filename}`);\n        }\n        return { aliases, baseUrl };\n      }\n    } catch {\n      // File doesn't exist or isn't valid JSON - try next\n    }\n  }\n\n  return null;\n}\n\n/**\n * Parse go.mod to extract module path.\n */\nexport async function loadGoModulePath(repoRoot: string): Promise<GoModuleConfig | null> {\n  try {\n    const goModPath = path.join(repoRoot, 'go.mod');\n    const content = await fs.readFile(goModPath, 'utf-8');\n    const match = content.match(/^module\\s+(\\S+)/m);\n    if (match) {\n      if (isDev) {\n        console.log(`📦 Loaded Go module path: ${match[1]}`);\n      }\n      return { modulePath: match[1] };\n    }\n  } catch {\n    // No go.mod\n  }\n  return null;\n}\n\n/** Parse composer.json to extract PSR-4 autoload mappings (including autoload-dev). */\nexport async function loadComposerConfig(repoRoot: string): Promise<ComposerConfig | null> {\n  try {\n    const composerPath = path.join(repoRoot, 'composer.json');\n    const raw = await fs.readFile(composerPath, 'utf-8');\n    const composer = JSON.parse(raw);\n    const psr4Raw = composer.autoload?.['psr-4'] ?? {};\n    const psr4Dev = composer['autoload-dev']?.['psr-4'] ?? 
{};\n    const merged = { ...psr4Raw, ...psr4Dev };\n\n    const psr4 = new Map<string, string>();\n    for (const [ns, dir] of Object.entries(merged)) {\n      const nsNorm = (ns as string).replace(/\\\\+$/, '');\n      const dirNorm = (dir as string).replace(/\\\\/g, '/').replace(/\\/+$/, '');\n      psr4.set(nsNorm, dirNorm);\n    }\n\n    if (isDev) {\n      console.log(`📦 Loaded ${psr4.size} PSR-4 mappings from composer.json`);\n    }\n    return { psr4 };\n  } catch {\n    return null;\n  }\n}\n\n/**\n * Parse .csproj files to extract RootNamespace.\n * Scans the repo root for .csproj files and returns configs for each.\n */\nexport async function loadCSharpProjectConfig(repoRoot: string): Promise<CSharpProjectConfig[]> {\n  const configs: CSharpProjectConfig[] = [];\n  // BFS scan for .csproj files up to 5 levels deep, cap at 100 dirs to avoid runaway scanning\n  const scanQueue: { dir: string; depth: number }[] = [{ dir: repoRoot, depth: 0 }];\n  const maxDepth = 5;\n  const maxDirs = 100;\n  let dirsScanned = 0;\n\n  while (scanQueue.length > 0 && dirsScanned < maxDirs) {\n    const { dir, depth } = scanQueue.shift()!;\n    dirsScanned++;\n    try {\n      const entries = await fs.readdir(dir, { withFileTypes: true });\n      for (const entry of entries) {\n        if (entry.isDirectory() && depth < maxDepth) {\n          // Skip common non-project directories\n          if (entry.name === 'node_modules' || entry.name === '.git' || entry.name === 'bin' || entry.name === 'obj') continue;\n          scanQueue.push({ dir: path.join(dir, entry.name), depth: depth + 1 });\n        }\n        if (entry.isFile() && entry.name.endsWith('.csproj')) {\n          try {\n            const csprojPath = path.join(dir, entry.name);\n            const content = await fs.readFile(csprojPath, 'utf-8');\n            const nsMatch = content.match(/<RootNamespace>\\s*([^<]+)\\s*<\\/RootNamespace>/);\n            const rootNamespace = nsMatch\n              ? 
nsMatch[1].trim()\n              : entry.name.replace(/\\.csproj$/, '');\n            const projectDir = path.relative(repoRoot, dir).replace(/\\\\/g, '/');\n            configs.push({ rootNamespace, projectDir });\n            if (isDev) {\n              console.log(`📦 Loaded C# project: ${entry.name} (namespace: ${rootNamespace}, dir: ${projectDir})`);\n            }\n          } catch {\n            // Can't read .csproj\n          }\n        }\n      }\n    } catch {\n      // Can't read directory\n    }\n  }\n  return configs;\n}\n\nexport async function loadSwiftPackageConfig(repoRoot: string): Promise<SwiftPackageConfig | null> {\n  // Swift imports are module-name based (e.g., `import SiuperModel`)\n  // SPM convention: Sources/<TargetName>/ or Package/Sources/<TargetName>/\n  // We scan for these directories to build a target map\n  const targets = new Map<string, string>();\n\n  const sourceDirs = ['Sources', 'Package/Sources', 'src'];\n  for (const sourceDir of sourceDirs) {\n    try {\n      const fullPath = path.join(repoRoot, sourceDir);\n      const entries = await fs.readdir(fullPath, { withFileTypes: true });\n      for (const entry of entries) {\n        if (entry.isDirectory()) {\n          targets.set(entry.name, sourceDir + '/' + entry.name);\n        }\n      }\n    } catch {\n      // Directory doesn't exist\n    }\n  }\n\n  if (targets.size > 0) {\n    if (isDev) {\n      console.log(`📦 Loaded ${targets.size} Swift package targets`);\n    }\n    return { targets };\n  }\n  return null;\n}\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/mro-processor.ts",
    "content": "/**\n * MRO (Method Resolution Order) Processor\n *\n * Walks the inheritance DAG (EXTENDS/IMPLEMENTS edges), collects methods from\n * each ancestor via HAS_METHOD edges, detects method-name collisions across\n * parents, and applies language-specific resolution rules to emit OVERRIDES edges.\n *\n * Language-specific rules:\n * - C++:       leftmost base class in declaration order wins\n * - C#/Java:   class method wins over interface default; multiple interface\n *              methods with same name are ambiguous (null resolution)\n * - Python:    C3 linearization determines MRO; first in linearized order wins\n * - Rust:      no auto-resolution — requires qualified syntax, resolvedTo = null\n * - Default:   single inheritance — first definition wins\n *\n * OVERRIDES edge direction: Class → Method (not Method → Method).\n * The source is the child class that inherits conflicting methods,\n * the target is the winning ancestor method node.\n * Cypher: MATCH (c:Class)-[r:CodeRelation {type: 'OVERRIDES'}]->(m:Method)\n */\n\nimport { KnowledgeGraph, GraphRelationship } from '../graph/types.js';\nimport { generateId } from '../../lib/utils.js';\nimport { SupportedLanguages } from '../../config/supported-languages.js';\n\n// ---------------------------------------------------------------------------\n// Public types\n// ---------------------------------------------------------------------------\n\nexport interface MROEntry {\n  classId: string;\n  className: string;\n  language: SupportedLanguages;\n  mro: string[];               // linearized parent names\n  ambiguities: MethodAmbiguity[];\n}\n\nexport interface MethodAmbiguity {\n  methodName: string;\n  definedIn: Array<{ classId: string; className: string; methodId: string }>;\n  resolvedTo: string | null;   // winning methodId or null if truly ambiguous\n  reason: string;\n}\n\nexport interface MROResult {\n  entries: MROEntry[];\n  overrideEdges: number;\n  ambiguityCount: number;\n}\n\n// 
---------------------------------------------------------------------------\n// Internal helpers\n// ---------------------------------------------------------------------------\n\n/** Collect EXTENDS, IMPLEMENTS, and HAS_METHOD adjacency from the graph. */\nfunction buildAdjacency(graph: KnowledgeGraph) {\n  // parentMap: childId → parentIds[] (in insertion / declaration order)\n  const parentMap = new Map<string, string[]>();\n  // methodMap: classId → methodIds[]\n  const methodMap = new Map<string, string[]>();\n  // Track which edge type each parent link came from\n  const parentEdgeType = new Map<string, Map<string, 'EXTENDS' | 'IMPLEMENTS'>>();\n\n  graph.forEachRelationship((rel) => {\n    if (rel.type === 'EXTENDS' || rel.type === 'IMPLEMENTS') {\n      let parents = parentMap.get(rel.sourceId);\n      if (!parents) {\n        parents = [];\n        parentMap.set(rel.sourceId, parents);\n      }\n      parents.push(rel.targetId);\n\n      let edgeTypes = parentEdgeType.get(rel.sourceId);\n      if (!edgeTypes) {\n        edgeTypes = new Map();\n        parentEdgeType.set(rel.sourceId, edgeTypes);\n      }\n      edgeTypes.set(rel.targetId, rel.type);\n    }\n\n    if (rel.type === 'HAS_METHOD') {\n      let methods = methodMap.get(rel.sourceId);\n      if (!methods) {\n        methods = [];\n        methodMap.set(rel.sourceId, methods);\n      }\n      methods.push(rel.targetId);\n    }\n  });\n\n  return { parentMap, methodMap, parentEdgeType };\n}\n\n/**\n * Gather all ancestor IDs in BFS / topological order.\n * Returns the linearized list of ancestor IDs (excluding the class itself).\n */\nfunction gatherAncestors(\n  classId: string,\n  parentMap: Map<string, string[]>,\n): string[] {\n  const visited = new Set<string>();\n  const order: string[] = [];\n  const queue: string[] = [...(parentMap.get(classId) ?? 
[])];\n\n  while (queue.length > 0) {\n    const id = queue.shift()!;\n    if (visited.has(id)) continue;\n    visited.add(id);\n    order.push(id);\n    const grandparents = parentMap.get(id);\n    if (grandparents) {\n      for (const gp of grandparents) {\n        if (!visited.has(gp)) queue.push(gp);\n      }\n    }\n  }\n\n  return order;\n}\n\n// ---------------------------------------------------------------------------\n// C3 linearization (Python MRO)\n// ---------------------------------------------------------------------------\n\n/**\n * Compute C3 linearization for a class given a parentMap.\n * Returns an array of ancestor IDs in C3 order (excluding the class itself),\n * or null if linearization fails (inconsistent or cyclic hierarchy).\n */\nfunction c3Linearize(\n  classId: string,\n  parentMap: Map<string, string[]>,\n  cache: Map<string, string[] | null>,\n  inProgress?: Set<string>,\n): string[] | null {\n  if (cache.has(classId)) return cache.get(classId)!;\n\n  // Cycle detection: if we're already computing this class, the hierarchy is cyclic\n  const visiting = inProgress ?? 
new Set<string>();\n  if (visiting.has(classId)) {\n    cache.set(classId, null);\n    return null;\n  }\n  visiting.add(classId);\n\n  const directParents = parentMap.get(classId);\n  if (!directParents || directParents.length === 0) {\n    visiting.delete(classId);\n    cache.set(classId, []);\n    return [];\n  }\n\n  // Compute linearization for each parent first\n  const parentLinearizations: string[][] = [];\n  for (const pid of directParents) {\n    const pLin = c3Linearize(pid, parentMap, cache, visiting);\n    if (pLin === null) {\n      visiting.delete(classId);\n      cache.set(classId, null);\n      return null;\n    }\n    parentLinearizations.push([pid, ...pLin]);\n  }\n\n  // Add the direct parents list as the final sequence\n  const sequences = [...parentLinearizations, [...directParents]];\n  const result: string[] = [];\n\n  while (sequences.some(s => s.length > 0)) {\n    // Find a good head: one that doesn't appear in the tail of any other sequence\n    let head: string | null = null;\n    for (const seq of sequences) {\n      if (seq.length === 0) continue;\n      const candidate = seq[0];\n      const inTail = sequences.some(\n        other => other.length > 1 && other.indexOf(candidate, 1) !== -1\n      );\n      if (!inTail) {\n        head = candidate;\n        break;\n      }\n    }\n\n    if (head === null) {\n      // Inconsistent hierarchy\n      visiting.delete(classId);\n      cache.set(classId, null);\n      return null;\n    }\n\n    result.push(head);\n\n    // Remove the chosen head from all sequences\n    for (const seq of sequences) {\n      if (seq.length > 0 && seq[0] === head) {\n        seq.shift();\n      }\n    }\n  }\n\n  visiting.delete(classId);\n  cache.set(classId, result);\n  return result;\n}\n\n// ---------------------------------------------------------------------------\n// Language-specific resolution\n// ---------------------------------------------------------------------------\n\ntype MethodDef = { classId: 
string; className: string; methodId: string };\ntype Resolution = { resolvedTo: string | null; reason: string };\n\n/** Resolve by MRO order — first ancestor in linearized order wins. */\nfunction resolveByMroOrder(\n  methodName: string,\n  defs: MethodDef[],\n  mroOrder: string[],\n  reasonPrefix: string,\n): Resolution {\n  for (const ancestorId of mroOrder) {\n    const match = defs.find(d => d.classId === ancestorId);\n    if (match) {\n      return {\n        resolvedTo: match.methodId,\n        reason: `${reasonPrefix}: ${match.className}::${methodName}`,\n      };\n    }\n  }\n  return { resolvedTo: defs[0].methodId, reason: `${reasonPrefix} fallback: first definition` };\n}\n\nfunction resolveCsharpJava(\n  methodName: string,\n  defs: MethodDef[],\n  parentEdgeTypes: Map<string, 'EXTENDS' | 'IMPLEMENTS'> | undefined,\n): Resolution {\n  const classDefs: MethodDef[] = [];\n  const interfaceDefs: MethodDef[] = [];\n\n  for (const def of defs) {\n    const edgeType = parentEdgeTypes?.get(def.classId);\n    if (edgeType === 'IMPLEMENTS') {\n      interfaceDefs.push(def);\n    } else {\n      classDefs.push(def);\n    }\n  }\n\n  if (classDefs.length > 0) {\n    return {\n      resolvedTo: classDefs[0].methodId,\n      reason: `class method wins: ${classDefs[0].className}::${methodName}`,\n    };\n  }\n\n  if (interfaceDefs.length > 1) {\n    return {\n      resolvedTo: null,\n      reason: `ambiguous: ${methodName} defined in multiple interfaces: ${interfaceDefs.map(d => d.className).join(', ')}`,\n    };\n  }\n\n  if (interfaceDefs.length === 1) {\n    return {\n      resolvedTo: interfaceDefs[0].methodId,\n      reason: `single interface default: ${interfaceDefs[0].className}::${methodName}`,\n    };\n  }\n\n  return { resolvedTo: null, reason: 'no resolution found' };\n}\n\n// ---------------------------------------------------------------------------\n// Main entry point\n// 
---------------------------------------------------------------------------\n\nexport function computeMRO(graph: KnowledgeGraph): MROResult {\n  const { parentMap, methodMap, parentEdgeType } = buildAdjacency(graph);\n  const c3Cache = new Map<string, string[] | null>();\n\n  const entries: MROEntry[] = [];\n  let overrideEdges = 0;\n  let ambiguityCount = 0;\n\n  // Process every class that has at least one parent\n  for (const [classId, directParents] of parentMap) {\n    if (directParents.length === 0) continue;\n\n    const classNode = graph.getNode(classId);\n    if (!classNode) continue;\n\n    const language = classNode.properties.language;\n    if (!language) continue;\n    const className = classNode.properties.name;\n\n    // Compute linearized MRO depending on language\n    let mroOrder: string[];\n    if (language === SupportedLanguages.Python) {\n      const c3Result = c3Linearize(classId, parentMap, c3Cache);\n      mroOrder = c3Result ?? gatherAncestors(classId, parentMap);\n    } else {\n      mroOrder = gatherAncestors(classId, parentMap);\n    }\n\n    // Get the parent names for the MRO entry\n    const mroNames: string[] = mroOrder\n      .map(id => graph.getNode(id)?.properties.name)\n      .filter((n): n is string => n !== undefined);\n\n    // Collect methods from all ancestors, grouped by method name\n    const methodsByName = new Map<string, MethodDef[]>();\n    for (const ancestorId of mroOrder) {\n      const ancestorNode = graph.getNode(ancestorId);\n      if (!ancestorNode) continue;\n\n      const methods = methodMap.get(ancestorId) ?? 
[];\n      for (const methodId of methods) {\n        const methodNode = graph.getNode(methodId);\n        if (!methodNode) continue;\n        // Properties don't participate in method resolution order\n        if (methodNode.label === 'Property') continue;\n\n        const methodName = methodNode.properties.name;\n        let defs = methodsByName.get(methodName);\n        if (!defs) {\n          defs = [];\n          methodsByName.set(methodName, defs);\n        }\n        // Avoid duplicates (same method seen via multiple paths)\n        if (!defs.some(d => d.methodId === methodId)) {\n          defs.push({\n            classId: ancestorId,\n            className: ancestorNode.properties.name,\n            methodId,\n          });\n        }\n      }\n    }\n\n    // Detect collisions: methods defined in 2+ different ancestors\n    const ambiguities: MethodAmbiguity[] = [];\n\n    // Compute transitive edge types once per class (only needed for C#/Java)\n    const needsEdgeTypes = language === SupportedLanguages.CSharp || language === SupportedLanguages.Java || language === SupportedLanguages.Kotlin;\n    const classEdgeTypes = needsEdgeTypes\n      ? buildTransitiveEdgeTypes(classId, parentMap, parentEdgeType)\n      : undefined;\n\n    for (const [methodName, defs] of methodsByName) {\n      if (defs.length < 2) continue;\n\n      // Own method shadows inherited — no ambiguity\n      const ownMethods = methodMap.get(classId) ?? 
[];\n      const ownDefinesIt = ownMethods.some(mid => {\n        const mn = graph.getNode(mid);\n        return mn?.properties.name === methodName;\n      });\n      if (ownDefinesIt) continue;\n\n      let resolution: Resolution;\n\n      switch (language) {\n        case SupportedLanguages.CPlusPlus:\n          resolution = resolveByMroOrder(methodName, defs, mroOrder, 'C++ leftmost base');\n          break;\n        case SupportedLanguages.CSharp:\n        case SupportedLanguages.Java:\n        case SupportedLanguages.Kotlin:\n          resolution = resolveCsharpJava(methodName, defs, classEdgeTypes);\n          break;\n        case SupportedLanguages.Python:\n          resolution = resolveByMroOrder(methodName, defs, mroOrder, 'Python C3 MRO');\n          break;\n        case SupportedLanguages.Rust:\n          resolution = {\n            resolvedTo: null,\n            reason: `Rust requires qualified syntax: <Type as Trait>::${methodName}()`,\n          };\n          break;\n        default:\n          resolution = resolveByMroOrder(methodName, defs, mroOrder, 'first definition');\n          break;\n      }\n\n      const ambiguity: MethodAmbiguity = {\n        methodName,\n        definedIn: defs,\n        resolvedTo: resolution.resolvedTo,\n        reason: resolution.reason,\n      };\n      ambiguities.push(ambiguity);\n\n      if (resolution.resolvedTo === null) {\n        ambiguityCount++;\n      }\n\n      // Emit OVERRIDES edge if resolution found\n      if (resolution.resolvedTo !== null) {\n        graph.addRelationship({\n          id: generateId('OVERRIDES', `${classId}->${resolution.resolvedTo}`),\n          sourceId: classId,\n          targetId: resolution.resolvedTo,\n          type: 'OVERRIDES',\n          confidence: 1.0,\n          reason: resolution.reason,\n        });\n        overrideEdges++;\n      }\n    }\n\n    entries.push({\n      classId,\n      className,\n      language,\n      mro: mroNames,\n      ambiguities,\n    });\n  
}\n\n  return { entries, overrideEdges, ambiguityCount };\n}\n\n/**\n * Build transitive edge types for a class using BFS from the class to all ancestors.\n *\n * Known limitation: BFS first-reach heuristic can misclassify an interface as\n * EXTENDS if it's reachable via a class chain before being seen via IMPLEMENTS.\n * E.g. if BaseClass also implements IFoo, IFoo may be classified as EXTENDS.\n * This affects C#/Java/Kotlin conflict resolution in rare diamond hierarchies.\n */\nfunction buildTransitiveEdgeTypes(\n  classId: string,\n  parentMap: Map<string, string[]>,\n  parentEdgeType: Map<string, Map<string, 'EXTENDS' | 'IMPLEMENTS'>>,\n): Map<string, 'EXTENDS' | 'IMPLEMENTS'> {\n  const result = new Map<string, 'EXTENDS' | 'IMPLEMENTS'>();\n  const directEdges = parentEdgeType.get(classId);\n  if (!directEdges) return result;\n\n  // BFS: propagate edge type from direct parents\n  const queue: Array<{ id: string; edgeType: 'EXTENDS' | 'IMPLEMENTS' }> = [];\n  const directParents = parentMap.get(classId) ?? [];\n\n  for (const pid of directParents) {\n    const et = directEdges.get(pid) ?? 'EXTENDS';\n    if (!result.has(pid)) {\n      result.set(pid, et);\n      queue.push({ id: pid, edgeType: et });\n    }\n  }\n\n  while (queue.length > 0) {\n    const { id, edgeType } = queue.shift()!;\n    const grandparents = parentMap.get(id) ?? [];\n    for (const gp of grandparents) {\n      if (!result.has(gp)) {\n        result.set(gp, edgeType);\n        queue.push({ id: gp, edgeType });\n      }\n    }\n  }\n\n  return result;\n}\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/named-binding-extraction.ts",
    "content": "import { SupportedLanguages } from '../../config/supported-languages.js';\nimport type { SymbolTable, SymbolDefinition } from './symbol-table.js';\nimport type { NamedImportMap } from './import-processor.js';\n\n/**\n * Walk a named-binding re-export chain through NamedImportMap.\n *\n * When file A imports { User } from B, and B re-exports { User } from C,\n * the NamedImportMap for A points to B, but B has no User definition.\n * This function follows the chain: A→B→C until a definition is found.\n *\n * Returns the definitions found at the end of the chain, or null if the\n * chain breaks (missing binding, circular reference, or depth exceeded).\n * Max depth 5 to prevent infinite loops.\n *\n * @param allDefs Pre-computed `symbolTable.lookupFuzzy(name)` result — must be the\n *               complete unfiltered result. Passing a file-filtered subset will cause\n *               silent misses at depth=0 for non-aliased bindings.\n */\nexport function walkBindingChain(\n  name: string,\n  currentFilePath: string,\n  symbolTable: SymbolTable,\n  namedImportMap: NamedImportMap,\n  allDefs: SymbolDefinition[],\n): SymbolDefinition[] | null {\n  let lookupFile = currentFilePath;\n  let lookupName = name;\n  const visited = new Set<string>();\n\n  for (let depth = 0; depth < 5; depth++) {\n    const bindings = namedImportMap.get(lookupFile);\n    if (!bindings) return null;\n\n    const binding = bindings.get(lookupName);\n    if (!binding) return null;\n\n    const key = `${binding.sourcePath}:${binding.exportedName}`;\n    if (visited.has(key)) return null; // circular\n    visited.add(key);\n\n    const targetName = binding.exportedName;\n    const resolvedDefs = targetName !== lookupName || depth > 0\n      ? 
symbolTable.lookupFuzzy(targetName).filter(def => def.filePath === binding.sourcePath)\n      : allDefs.filter(def => def.filePath === binding.sourcePath);\n\n    if (resolvedDefs.length > 0) return resolvedDefs;\n\n    // No definition in source file → follow re-export chain\n    lookupFile = binding.sourcePath;\n    lookupName = targetName;\n  }\n\n  return null;\n}\n\n/**\n * Extract named bindings from an import AST node.\n * Returns undefined if the import is not a named import (e.g., import * or default).\n *\n * TS: import { User, Repo as R } from './models'\n *   → [{local:'User', exported:'User'}, {local:'R', exported:'Repo'}]\n *\n * Python: from models import User, Repo as R\n *   → [{local:'User', exported:'User'}, {local:'R', exported:'Repo'}]\n */\nexport function extractNamedBindings(\n  importNode: any,\n  language: SupportedLanguages,\n): { local: string; exported: string }[] | undefined {\n  if (language === SupportedLanguages.TypeScript || language === SupportedLanguages.JavaScript) {\n    return extractTsNamedBindings(importNode);\n  }\n  if (language === SupportedLanguages.Python) {\n    return extractPythonNamedBindings(importNode);\n  }\n  if (language === SupportedLanguages.Kotlin) {\n    return extractKotlinNamedBindings(importNode);\n  }\n  if (language === SupportedLanguages.Rust) {\n    return extractRustNamedBindings(importNode);\n  }\n  if (language === SupportedLanguages.PHP) {\n    return extractPhpNamedBindings(importNode);\n  }\n  if (language === SupportedLanguages.CSharp) {\n    return extractCsharpNamedBindings(importNode);\n  }\n  if (language === SupportedLanguages.Java) {\n    return extractJavaNamedBindings(importNode);\n  }\n  return undefined;\n}\n\nexport function extractTsNamedBindings(importNode: any): { local: string; exported: string }[] | undefined {\n  // import_statement > import_clause > named_imports > import_specifier*\n  const importClause = findChild(importNode, 'import_clause');\n  if (importClause) {\n    
const namedImports = findChild(importClause, 'named_imports');\n    if (!namedImports) return undefined; // default import, namespace import, or side-effect\n\n    const bindings: { local: string; exported: string }[] = [];\n    for (let i = 0; i < namedImports.namedChildCount; i++) {\n      const specifier = namedImports.namedChild(i);\n      if (specifier?.type !== 'import_specifier') continue;\n\n      const identifiers: string[] = [];\n      for (let j = 0; j < specifier.namedChildCount; j++) {\n        const child = specifier.namedChild(j);\n        if (child?.type === 'identifier') identifiers.push(child.text);\n      }\n\n      if (identifiers.length === 1) {\n        bindings.push({ local: identifiers[0], exported: identifiers[0] });\n      } else if (identifiers.length === 2) {\n        // import { Foo as Bar } → exported='Foo', local='Bar'\n        bindings.push({ local: identifiers[1], exported: identifiers[0] });\n      }\n    }\n    return bindings.length > 0 ? bindings : undefined;\n  }\n\n  // Re-export: export { X } from './y' → export_statement > export_clause > export_specifier\n  const exportClause = findChild(importNode, 'export_clause');\n  if (exportClause) {\n    const bindings: { local: string; exported: string }[] = [];\n    for (let i = 0; i < exportClause.namedChildCount; i++) {\n      const specifier = exportClause.namedChild(i);\n      if (specifier?.type !== 'export_specifier') continue;\n\n      const identifiers: string[] = [];\n      for (let j = 0; j < specifier.namedChildCount; j++) {\n        const child = specifier.namedChild(j);\n        if (child?.type === 'identifier') identifiers.push(child.text);\n      }\n\n      if (identifiers.length === 1) {\n        // export { User } from './base' → re-exports User as User\n        bindings.push({ local: identifiers[0], exported: identifiers[0] });\n      } else if (identifiers.length === 2) {\n        // export { Repo as Repository } from './models' → name=Repo, alias=Repository\n    
    // For re-exports, the first id is the source name, second is what's exported\n        // When another file imports { Repository }, they get Repo from the source\n        bindings.push({ local: identifiers[1], exported: identifiers[0] });\n      }\n    }\n    return bindings.length > 0 ? bindings : undefined;\n  }\n\n  return undefined;\n}\n\nexport function extractPythonNamedBindings(importNode: any): { local: string; exported: string }[] | undefined {\n  // Only from import_from_statement, not plain import_statement\n  if (importNode.type !== 'import_from_statement') return undefined;\n\n  const bindings: { local: string; exported: string }[] = [];\n  for (let i = 0; i < importNode.namedChildCount; i++) {\n    const child = importNode.namedChild(i);\n    if (!child) continue;\n\n    if (child.type === 'dotted_name') {\n      // Skip the module_name (first dotted_name is the source module)\n      const fieldName = importNode.childForFieldName?.('module_name');\n      if (fieldName && child.startIndex === fieldName.startIndex) continue;\n\n      // This is an imported name: from x import User\n      const name = child.text;\n      if (name) bindings.push({ local: name, exported: name });\n    }\n\n    if (child.type === 'aliased_import') {\n      // from x import Repo as R\n      const dottedName = findChild(child, 'dotted_name');\n      const aliasIdent = findChild(child, 'identifier');\n      if (dottedName && aliasIdent) {\n        bindings.push({ local: aliasIdent.text, exported: dottedName.text });\n      }\n    }\n  }\n\n  return bindings.length > 0 ? 
bindings : undefined;\n}\n\nexport function extractKotlinNamedBindings(importNode: any): { local: string; exported: string }[] | undefined {\n  // import_header > identifier + import_alias > simple_identifier\n  if (importNode.type !== 'import_header') return undefined;\n\n  const fullIdent = findChild(importNode, 'identifier');\n  if (!fullIdent) return undefined;\n\n  const fullText = fullIdent.text;\n  const exportedName = fullText.includes('.') ? fullText.split('.').pop()! : fullText;\n\n  const importAlias = findChild(importNode, 'import_alias');\n  if (importAlias) {\n    // Aliased: import com.example.User as U\n    const aliasIdent = findChild(importAlias, 'simple_identifier');\n    if (!aliasIdent) return undefined;\n    return [{ local: aliasIdent.text, exported: exportedName }];\n  }\n\n  // Non-aliased: import com.example.User → local=\"User\", exported=\"User\"\n  // Skip wildcard imports (ending in *)\n  if (fullText.endsWith('.*') || fullText.endsWith('*')) return undefined;\n  // Skip lowercase last segments — those are member/function imports (e.g.,\n  // import util.OneArg.writeAudit), not class imports. Multiple member imports\n  // with the same function name would collide in NamedImportMap, breaking\n  // arity-based disambiguation.\n  if (exportedName[0] && exportedName[0] === exportedName[0].toLowerCase()) return undefined;\n  return [{ local: exportedName, exported: exportedName }];\n}\n\nexport function extractRustNamedBindings(importNode: any): { local: string; exported: string }[] | undefined {\n  // use_declaration may contain use_as_clause at any depth\n  if (importNode.type !== 'use_declaration') return undefined;\n\n  const bindings: { local: string; exported: string }[] = [];\n  collectRustBindings(importNode, bindings);\n  return bindings.length > 0 ? 
bindings : undefined;\n}\n\nfunction collectRustBindings(node: any, bindings: { local: string; exported: string }[]): void {\n  if (node.type === 'use_as_clause') {\n    // First identifier = exported name, second identifier = local alias\n    const idents: string[] = [];\n    for (let i = 0; i < node.namedChildCount; i++) {\n      const child = node.namedChild(i);\n      if (child?.type === 'identifier') idents.push(child.text);\n      // For scoped_identifier, extract the last segment\n      if (child?.type === 'scoped_identifier') {\n        const nameNode = child.childForFieldName?.('name');\n        if (nameNode) idents.push(nameNode.text);\n      }\n    }\n    if (idents.length === 2) {\n      bindings.push({ local: idents[1], exported: idents[0] });\n    }\n    return;\n  }\n\n  // Terminal identifier in a use_list: use crate::models::{User, Repo}\n  if (node.type === 'identifier' && node.parent?.type === 'use_list') {\n    bindings.push({ local: node.text, exported: node.text });\n    return;\n  }\n\n  // Skip scoped_identifier that serves as path prefix in scoped_use_list\n  // e.g. 
use crate::models::{User, Repo} — the path node \"crate::models\" is not an importable symbol\n  if (node.type === 'scoped_identifier' && node.parent?.type === 'scoped_use_list') {\n    return; // path prefix — the use_list sibling handles the actual symbols\n  }\n\n  // Terminal scoped_identifier: use crate::models::User;\n  // Only extract if this is a leaf (no deeper use_list/use_as_clause/scoped_use_list)\n  if (node.type === 'scoped_identifier') {\n    let hasDeeper = false;\n    for (let i = 0; i < node.namedChildCount; i++) {\n      const child = node.namedChild(i);\n      if (child?.type === 'use_list' || child?.type === 'use_as_clause' || child?.type === 'scoped_use_list') {\n        hasDeeper = true;\n        break;\n      }\n    }\n    if (!hasDeeper) {\n      const nameNode = node.childForFieldName?.('name');\n      if (nameNode) {\n        bindings.push({ local: nameNode.text, exported: nameNode.text });\n      }\n      return;\n    }\n  }\n\n  // Recurse into children\n  for (let i = 0; i < node.namedChildCount; i++) {\n    const child = node.namedChild(i);\n    if (child) collectRustBindings(child, bindings);\n  }\n}\n\nexport function extractPhpNamedBindings(importNode: any): { local: string; exported: string }[] | undefined {\n  // namespace_use_declaration > namespace_use_clause* (flat)\n  // namespace_use_declaration > namespace_use_group > namespace_use_clause* (grouped)\n  if (importNode.type !== 'namespace_use_declaration') return undefined;\n\n  const bindings: { local: string; exported: string }[] = [];\n\n  // Collect all clauses — from direct children AND from namespace_use_group\n  const clauses: any[] = [];\n  for (let i = 0; i < importNode.namedChildCount; i++) {\n    const child = importNode.namedChild(i);\n    if (child?.type === 'namespace_use_clause') {\n      clauses.push(child);\n    } else if (child?.type === 'namespace_use_group') {\n      for (let j = 0; j < child.namedChildCount; j++) {\n        const groupChild = 
child.namedChild(j);\n        if (groupChild?.type === 'namespace_use_clause') clauses.push(groupChild);\n      }\n    }\n  }\n\n  for (const clause of clauses) {\n    // Flat imports: qualified_name + name (alias)\n    let qualifiedName: any = null;\n    const names: any[] = [];\n    for (let j = 0; j < clause.namedChildCount; j++) {\n      const child = clause.namedChild(j);\n      if (child?.type === 'qualified_name') qualifiedName = child;\n      else if (child?.type === 'name') names.push(child);\n    }\n\n    if (qualifiedName && names.length > 0) {\n      // Flat aliased import: use App\\Models\\Repo as R;\n      const fullText = qualifiedName.text;\n      const exportedName = fullText.includes('\\\\') ? fullText.split('\\\\').pop()! : fullText;\n      bindings.push({ local: names[0].text, exported: exportedName });\n    } else if (qualifiedName && names.length === 0) {\n      // Flat non-aliased import: use App\\Models\\User;\n      const fullText = qualifiedName.text;\n      const lastSegment = fullText.includes('\\\\') ? fullText.split('\\\\').pop()! : fullText;\n      bindings.push({ local: lastSegment, exported: lastSegment });\n    } else if (!qualifiedName && names.length >= 2) {\n      // Grouped aliased import: {Repo as R} — first name = exported, second = alias\n      bindings.push({ local: names[1].text, exported: names[0].text });\n    } else if (!qualifiedName && names.length === 1) {\n      // Grouped non-aliased import: {User} in use App\\Models\\{User, Repo as R}\n      bindings.push({ local: names[0].text, exported: names[0].text });\n    }\n  }\n  return bindings.length > 0 ? 
bindings : undefined;\n}\n\nexport function extractCsharpNamedBindings(importNode: any): { local: string; exported: string }[] | undefined {\n  // using_directive with identifier (alias) + qualified_name (target)\n  if (importNode.type !== 'using_directive') return undefined;\n\n  let aliasIdent: any = null;\n  let qualifiedName: any = null;\n  for (let i = 0; i < importNode.namedChildCount; i++) {\n    const child = importNode.namedChild(i);\n    if (child?.type === 'identifier' && !aliasIdent) aliasIdent = child;\n    else if (child?.type === 'qualified_name') qualifiedName = child;\n  }\n\n  if (!aliasIdent || !qualifiedName) return undefined;\n\n  const fullText = qualifiedName.text;\n  const exportedName = fullText.includes('.') ? fullText.split('.').pop()! : fullText;\n\n  return [{ local: aliasIdent.text, exported: exportedName }];\n}\n\nexport function extractJavaNamedBindings(importNode: any): { local: string; exported: string }[] | undefined {\n  // import_declaration > scoped_identifier \"com.example.models.User\"\n  // Wildcard imports (.*) don't produce named bindings\n  if (importNode.type !== 'import_declaration') return undefined;\n\n  // Check for asterisk (wildcard import) — skip those\n  for (let i = 0; i < importNode.childCount; i++) {\n    const child = importNode.child(i);\n    if (child?.type === 'asterisk') return undefined;\n  }\n\n  const scopedId = findChild(importNode, 'scoped_identifier');\n  if (!scopedId) return undefined;\n\n  const fullText = scopedId.text;\n  const lastDot = fullText.lastIndexOf('.');\n  if (lastDot === -1) return undefined;\n\n  const className = fullText.slice(lastDot + 1);\n  // Skip lowercase names — those are package imports, not class imports\n  if (className[0] && className[0] === className[0].toLowerCase()) return undefined;\n\n  return [{ local: className, exported: className }];\n}\n\nfunction findChild(node: any, type: string): any {\n  for (let i = 0; i < node.namedChildCount; i++) {\n    const child = 
node.namedChild(i);\n    if (child?.type === type) return child;\n  }\n  return null;\n}\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/parsing-processor.ts",
    "content": "import { KnowledgeGraph, GraphNode, GraphRelationship, type NodeLabel } from '../graph/types.js';\nimport Parser from 'tree-sitter';\nimport { loadParser, loadLanguage, isLanguageAvailable } from '../tree-sitter/parser-loader.js';\nimport { LANGUAGE_QUERIES } from './tree-sitter-queries.js';\nimport { generateId } from '../../lib/utils.js';\nimport { SymbolTable } from './symbol-table.js';\nimport { ASTCache } from './ast-cache.js';\nimport { getLanguageFromFilename, yieldToEventLoop, getDefinitionNodeFromCaptures, findEnclosingClassId, extractMethodSignature } from './utils.js';\nimport { extractPropertyDeclaredType } from './type-extractors/shared.js';\nimport { isNodeExported } from './export-detection.js';\nimport { detectFrameworkFromAST } from './framework-detection.js';\nimport { typeConfigs } from './type-extractors/index.js';\nimport { SupportedLanguages } from '../../config/supported-languages.js';\nimport { WorkerPool } from './workers/worker-pool.js';\nimport type { ParseWorkerResult, ParseWorkerInput, ExtractedImport, ExtractedCall, ExtractedAssignment, ExtractedHeritage, ExtractedRoute, FileConstructorBindings } from './workers/parse-worker.js';\nimport { getTreeSitterBufferSize, TREE_SITTER_MAX_BUFFER } from './constants.js';\n\nexport type FileProgressCallback = (current: number, total: number, filePath: string) => void;\n\nexport interface WorkerExtractedData {\n  imports: ExtractedImport[];\n  calls: ExtractedCall[];\n  assignments: ExtractedAssignment[];\n  heritage: ExtractedHeritage[];\n  routes: ExtractedRoute[];\n  constructorBindings: FileConstructorBindings[];\n}\n\n// isNodeExported imported from ./export-detection.js (shared module)\n// Re-export for backward compatibility with any external consumers\nexport { isNodeExported } from './export-detection.js';\n\n// ============================================================================\n// Worker-based parallel parsing\n// 
============================================================================\n\nconst processParsingWithWorkers = async (\n  graph: KnowledgeGraph,\n  files: { path: string; content: string }[],\n  symbolTable: SymbolTable,\n  astCache: ASTCache,\n  workerPool: WorkerPool,\n  onFileProgress?: FileProgressCallback,\n): Promise<WorkerExtractedData> => {\n  // Filter to parseable files only\n  const parseableFiles: ParseWorkerInput[] = [];\n  for (const file of files) {\n    const lang = getLanguageFromFilename(file.path);\n    if (lang) parseableFiles.push({ path: file.path, content: file.content });\n  }\n\n  if (parseableFiles.length === 0) return { imports: [], calls: [], assignments: [], heritage: [], routes: [], constructorBindings: [] };\n\n  const total = files.length;\n\n  // Dispatch to worker pool — pool handles splitting into chunks and sub-batching\n  const chunkResults = await workerPool.dispatch<ParseWorkerInput, ParseWorkerResult>(\n    parseableFiles,\n    (filesProcessed) => {\n      onFileProgress?.(Math.min(filesProcessed, total), total, 'Parsing...');\n    },\n  );\n\n  // Merge results from all workers into graph and symbol table\n  const allImports: ExtractedImport[] = [];\n  const allCalls: ExtractedCall[] = [];\n  const allAssignments: ExtractedAssignment[] = [];\n  const allHeritage: ExtractedHeritage[] = [];\n  const allRoutes: ExtractedRoute[] = [];\n  const allConstructorBindings: FileConstructorBindings[] = [];\n  for (const result of chunkResults) {\n    for (const node of result.nodes) {\n      graph.addNode({\n        id: node.id,\n        label: node.label as any,\n        properties: node.properties,\n      });\n    }\n\n    for (const rel of result.relationships) {\n      graph.addRelationship(rel);\n    }\n\n    for (const sym of result.symbols) {\n      symbolTable.add(sym.filePath, sym.name, sym.nodeId, sym.type, {\n        parameterCount: sym.parameterCount,\n        requiredParameterCount: sym.requiredParameterCount,\n        
parameterTypes: sym.parameterTypes,\n        returnType: sym.returnType,\n        declaredType: sym.declaredType,\n        ownerId: sym.ownerId,\n      });\n    }\n\n    allImports.push(...result.imports);\n    allCalls.push(...result.calls);\n    allAssignments.push(...result.assignments);\n    allHeritage.push(...result.heritage);\n    allRoutes.push(...result.routes);\n    allConstructorBindings.push(...result.constructorBindings);\n  }\n\n  // Merge and log skipped languages from workers\n  const skippedLanguages = new Map<string, number>();\n  for (const result of chunkResults) {\n    for (const [lang, count] of Object.entries(result.skippedLanguages)) {\n      skippedLanguages.set(lang, (skippedLanguages.get(lang) || 0) + count);\n    }\n  }\n  if (skippedLanguages.size > 0) {\n    const summary = Array.from(skippedLanguages.entries())\n      .map(([lang, count]) => `${lang}: ${count}`)\n      .join(', ');\n    console.warn(`  Skipped unsupported languages: ${summary}`);\n  }\n\n  // Final progress\n  onFileProgress?.(total, total, 'done');\n  return { imports: allImports, calls: allCalls, assignments: allAssignments, heritage: allHeritage, routes: allRoutes, constructorBindings: allConstructorBindings };\n};\n\n// ============================================================================\n// Sequential fallback (original implementation)\n// ============================================================================\n\nconst processParsingSequential = async (\n  graph: KnowledgeGraph,\n  files: { path: string; content: string }[],\n  symbolTable: SymbolTable,\n  astCache: ASTCache,\n  onFileProgress?: FileProgressCallback\n) => {\n  const parser = await loadParser();\n  const total = files.length;\n  const skippedLanguages = new Map<string, number>();\n\n  for (let i = 0; i < files.length; i++) {\n    const file = files[i];\n\n    onFileProgress?.(i + 1, total, file.path);\n\n    if (i % 20 === 0) await yieldToEventLoop();\n\n    const language = 
getLanguageFromFilename(file.path);\n\n    if (!language) continue;\n\n    // Skip unsupported languages (e.g. Swift when tree-sitter-swift not installed)\n    if (!isLanguageAvailable(language)) {\n      skippedLanguages.set(language, (skippedLanguages.get(language) || 0) + 1);\n      continue;\n    }\n\n    // Skip files larger than the max tree-sitter buffer (32 MB)\n    if (file.content.length > TREE_SITTER_MAX_BUFFER) continue;\n\n    try {\n      await loadLanguage(language, file.path);\n    } catch {\n      continue;  // parser unavailable — safety net\n    }\n\n    let tree;\n    try {\n      tree = parser.parse(file.content, undefined, { bufferSize: getTreeSitterBufferSize(file.content.length) });\n    } catch {\n      console.warn(`Skipping unparseable file: ${file.path}`);\n      continue;\n    }\n\n    astCache.set(file.path, tree);\n\n    const queryString = LANGUAGE_QUERIES[language];\n    if (!queryString) {\n      continue;\n    }\n\n    let query;\n    let matches;\n    try {\n      // Use a distinct name — don't shadow the outer `language` string\n      const grammar = parser.getLanguage();\n      query = new Parser.Query(grammar, queryString);\n      matches = query.matches(tree.rootNode);\n    } catch (queryError) {\n      console.warn(`Query error for ${file.path}:`, queryError);\n      continue;\n    }\n\n    matches.forEach(match => {\n      const captureMap: Record<string, any> = {};\n\n      match.captures.forEach(c => {\n        captureMap[c.name] = c.node;\n      });\n\n      if (captureMap['import']) {\n        return;\n      }\n\n      if (captureMap['call']) {\n        return;\n      }\n\n      const nameNode = captureMap['name'];\n      // Synthesize name for constructors without explicit @name capture (e.g. Swift init)\n      if (!nameNode && !captureMap['definition.constructor']) return;\n      const nodeName = nameNode ? 
nameNode.text : 'init';\n\n      let nodeLabel: NodeLabel = 'CodeElement';\n\n      if (captureMap['definition.function']) {\n        // C/C++: @definition.function is broad and also matches inline class methods (inside\n        // a class/struct body). Those are already captured by @definition.method, so skip\n        // the duplicate Function entry to prevent double-indexing in globalIndex.\n        if (language === SupportedLanguages.CPlusPlus || language === SupportedLanguages.C) {\n          let ancestor = captureMap['definition.function']?.parent;\n          while (ancestor) {\n            if (ancestor.type === 'class_specifier' || ancestor.type === 'struct_specifier') {\n              break;\n            }\n            ancestor = ancestor.parent;\n          }\n          if (ancestor) return; // inside a class body — handled by @definition.method\n        }\n        nodeLabel = 'Function';\n      }\n      else if (captureMap['definition.class']) nodeLabel = 'Class';\n      else if (captureMap['definition.interface']) nodeLabel = 'Interface';\n      else if (captureMap['definition.method']) nodeLabel = 'Method';\n      else if (captureMap['definition.struct']) nodeLabel = 'Struct';\n      else if (captureMap['definition.enum']) nodeLabel = 'Enum';\n      else if (captureMap['definition.namespace']) nodeLabel = 'Namespace';\n      else if (captureMap['definition.module']) nodeLabel = 'Module';\n      else if (captureMap['definition.trait']) nodeLabel = 'Trait';\n      else if (captureMap['definition.impl']) nodeLabel = 'Impl';\n      else if (captureMap['definition.type']) nodeLabel = 'TypeAlias';\n      else if (captureMap['definition.const']) nodeLabel = 'Const';\n      else if (captureMap['definition.static']) nodeLabel = 'Static';\n      else if (captureMap['definition.typedef']) nodeLabel = 'Typedef';\n      else if (captureMap['definition.macro']) nodeLabel = 'Macro';\n      else if (captureMap['definition.union']) nodeLabel = 'Union';\n      else if 
(captureMap['definition.property']) nodeLabel = 'Property';\n      else if (captureMap['definition.record']) nodeLabel = 'Record';\n      else if (captureMap['definition.delegate']) nodeLabel = 'Delegate';\n      else if (captureMap['definition.annotation']) nodeLabel = 'Annotation';\n      else if (captureMap['definition.constructor']) nodeLabel = 'Constructor';\n      else if (captureMap['definition.template']) nodeLabel = 'Template';\n\n      const definitionNodeForRange = getDefinitionNodeFromCaptures(captureMap);\n      const startLine = definitionNodeForRange ? definitionNodeForRange.startPosition.row : (nameNode ? nameNode.startPosition.row : 0);\n      const nodeId = generateId(nodeLabel, `${file.path}:${nodeName}`);\n\n      const definitionNode = definitionNodeForRange; // same lookup as above — no need to recompute\n      const frameworkHint = definitionNode\n        ? detectFrameworkFromAST(language, (definitionNode.text || '').slice(0, 300))\n        : null;\n\n      // Extract method signature for Function/Method/Constructor nodes\n      const methodSig = (nodeLabel === 'Function' || nodeLabel === 'Method' || nodeLabel === 'Constructor')\n        ? extractMethodSignature(definitionNode)\n        : undefined;\n\n      // Language-specific return type fallback (e.g. 
Ruby YARD @return [Type])\n      // Also upgrades uninformative AST types like PHP `array` with PHPDoc `@return User[]`\n      if (methodSig && (!methodSig.returnType || methodSig.returnType === 'array' || methodSig.returnType === 'iterable') && definitionNode) {\n        const tc = typeConfigs[language as keyof typeof typeConfigs];\n        if (tc?.extractReturnType) {\n          const docReturn = tc.extractReturnType(definitionNode);\n          if (docReturn) methodSig.returnType = docReturn;\n        }\n      }\n\n      const node: GraphNode = {\n        id: nodeId,\n        label: nodeLabel as any,\n        properties: {\n          name: nodeName,\n          filePath: file.path,\n          startLine: definitionNodeForRange ? definitionNodeForRange.startPosition.row : startLine,\n          endLine: definitionNodeForRange ? definitionNodeForRange.endPosition.row : startLine,\n          language: language,\n          isExported: isNodeExported(nameNode || definitionNodeForRange, nodeName, language),\n          ...(frameworkHint ? {\n            astFrameworkMultiplier: frameworkHint.entryPointMultiplier,\n            astFrameworkReason: frameworkHint.reason,\n          } : {}),\n          ...(methodSig ? {\n            parameterCount: methodSig.parameterCount,\n            ...(methodSig.requiredParameterCount !== undefined ? { requiredParameterCount: methodSig.requiredParameterCount } : {}),\n            ...(methodSig.parameterTypes ? 
{ parameterTypes: methodSig.parameterTypes } : {}),\n            returnType: methodSig.returnType,\n          } : {}),\n        },\n      };\n\n      graph.addNode(node);\n\n      // Compute enclosing class for Method/Constructor/Property/Function — used for both ownerId and HAS_METHOD\n      // Function is included because Kotlin/Rust/Python capture class methods as Function nodes\n      const needsOwner = nodeLabel === 'Method' || nodeLabel === 'Constructor' || nodeLabel === 'Property' || nodeLabel === 'Function';\n      const enclosingClassId = needsOwner ? findEnclosingClassId(nameNode || definitionNodeForRange, file.path) : null;\n\n      // Extract declared type for Property nodes (field/property type annotations)\n      const declaredType = (nodeLabel === 'Property' && definitionNode)\n        ? extractPropertyDeclaredType(definitionNode)\n        : undefined;\n\n      symbolTable.add(file.path, nodeName, nodeId, nodeLabel, {\n        parameterCount: methodSig?.parameterCount,\n        requiredParameterCount: methodSig?.requiredParameterCount,\n        parameterTypes: methodSig?.parameterTypes,\n        returnType: methodSig?.returnType,\n        declaredType,\n        ownerId: enclosingClassId ?? undefined,\n      });\n\n      const fileId = generateId('File', file.path);\n\n      const relId = generateId('DEFINES', `${fileId}->${nodeId}`);\n\n      const relationship: GraphRelationship = {\n        id: relId,\n        sourceId: fileId,\n        targetId: nodeId,\n        type: 'DEFINES',\n        confidence: 1.0,\n        reason: '',\n      };\n\n      graph.addRelationship(relationship);\n\n      // ── HAS_METHOD / HAS_PROPERTY: link member to enclosing class ──\n      if (enclosingClassId) {\n        const memberEdgeType = nodeLabel === 'Property' ? 
'HAS_PROPERTY' : 'HAS_METHOD';\n        graph.addRelationship({\n          id: generateId(memberEdgeType, `${enclosingClassId}->${nodeId}`),\n          sourceId: enclosingClassId,\n          targetId: nodeId,\n          type: memberEdgeType,\n          confidence: 1.0,\n          reason: '',\n        });\n      }\n    });\n  }\n\n  if (skippedLanguages.size > 0) {\n    const summary = Array.from(skippedLanguages.entries())\n      .map(([lang, count]) => `${lang}: ${count}`)\n      .join(', ');\n    console.warn(`  Skipped unsupported languages: ${summary}`);\n  }\n};\n\n// ============================================================================\n// Public API\n// ============================================================================\n\nexport const processParsing = async (\n  graph: KnowledgeGraph,\n  files: { path: string; content: string }[],\n  symbolTable: SymbolTable,\n  astCache: ASTCache,\n  onFileProgress?: FileProgressCallback,\n  workerPool?: WorkerPool,\n): Promise<WorkerExtractedData | null> => {\n  if (workerPool) {\n    try {\n      return await processParsingWithWorkers(graph, files, symbolTable, astCache, workerPool, onFileProgress);\n    } catch (err) {\n      console.warn('Worker pool parsing failed, falling back to sequential:', err instanceof Error ? err.message : err);\n    }\n  }\n\n  // Fallback: sequential parsing (no pre-extracted data)\n  await processParsingSequential(graph, files, symbolTable, astCache, onFileProgress);\n  return null;\n};\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/pipeline.ts",
    "content": "import { createKnowledgeGraph } from '../graph/graph.js';\nimport { processStructure } from './structure-processor.js';\nimport { processParsing } from './parsing-processor.js';\nimport {\n  processImports,\n  processImportsFromExtracted,\n  buildImportResolutionContext\n} from './import-processor.js';\nimport { processCalls, processCallsFromExtracted, processAssignmentsFromExtracted, processRoutesFromExtracted } from './call-processor.js';\nimport { processHeritage, processHeritageFromExtracted } from './heritage-processor.js';\nimport { computeMRO } from './mro-processor.js';\nimport { processCommunities } from './community-processor.js';\nimport { processProcesses } from './process-processor.js';\nimport { createResolutionContext } from './resolution-context.js';\nimport { createASTCache } from './ast-cache.js';\nimport { PipelineProgress, PipelineResult } from '../../types/pipeline.js';\nimport { walkRepositoryPaths, readFileContents } from './filesystem-walker.js';\nimport { getLanguageFromFilename } from './utils.js';\nimport { isLanguageAvailable } from '../tree-sitter/parser-loader.js';\nimport { createWorkerPool, WorkerPool } from './workers/worker-pool.js';\nimport fs from 'node:fs';\nimport path from 'node:path';\nimport { fileURLToPath, pathToFileURL } from 'node:url';\n\nconst isDev = process.env.NODE_ENV === 'development';\n\n/** Max bytes of source content to load per parse chunk. Each chunk's source +\n *  parsed ASTs + extracted records + worker serialization overhead all live in\n *  memory simultaneously, so this must be conservative. 20MB source ≈ 200-400MB\n *  peak working memory per chunk after parse expansion. */\nconst CHUNK_BYTE_BUDGET = 20 * 1024 * 1024; // 20MB\n\n/** Max AST trees to keep in LRU cache */\nconst AST_CACHE_CAP = 50;\n\nexport interface PipelineOptions {\n  /** Skip MRO, community detection, and process extraction for faster test runs. 
*/\n  skipGraphPhases?: boolean;\n}\n\nexport const runPipelineFromRepo = async (\n  repoPath: string,\n  onProgress: (progress: PipelineProgress) => void,\n  options?: PipelineOptions,\n): Promise<PipelineResult> => {\n  const graph = createKnowledgeGraph();\n  const ctx = createResolutionContext();\n  const symbolTable = ctx.symbols;\n  let astCache = createASTCache(AST_CACHE_CAP);\n\n  const cleanup = () => {\n    astCache.clear();\n    ctx.clear();\n  };\n\n  try {\n    // ── Phase 1: Scan paths only (no content read) ─────────────────────\n    onProgress({\n      phase: 'extracting',\n      percent: 0,\n      message: 'Scanning repository...',\n    });\n\n    const scannedFiles = await walkRepositoryPaths(repoPath, (current, total, filePath) => {\n      const scanProgress = Math.round((current / total) * 15);\n      onProgress({\n        phase: 'extracting',\n        percent: scanProgress,\n        message: 'Scanning repository...',\n        detail: filePath,\n        stats: { filesProcessed: current, totalFiles: total, nodesCreated: graph.nodeCount },\n      });\n    });\n\n    const totalFiles = scannedFiles.length;\n\n    onProgress({\n      phase: 'extracting',\n      percent: 15,\n      message: 'Repository scanned successfully',\n      stats: { filesProcessed: totalFiles, totalFiles, nodesCreated: graph.nodeCount },\n    });\n\n    // ── Phase 2: Structure (paths only — no content needed) ────────────\n    onProgress({\n      phase: 'structure',\n      percent: 15,\n      message: 'Analyzing project structure...',\n      stats: { filesProcessed: 0, totalFiles, nodesCreated: graph.nodeCount },\n    });\n\n    const allPaths = scannedFiles.map(f => f.path);\n    processStructure(graph, allPaths);\n\n    onProgress({\n      phase: 'structure',\n      percent: 20,\n      message: 'Project structure analyzed',\n      stats: { filesProcessed: totalFiles, totalFiles, nodesCreated: graph.nodeCount },\n    });\n\n    // ── Phase 3+4: Chunked read + parse 
────────────────────────────────\n    // Group parseable files into byte-budget chunks so only ~20MB of source\n    // is in memory at a time. Each chunk is: read → parse → extract → free.\n\n    const parseableScanned = scannedFiles.filter(f => {\n      const lang = getLanguageFromFilename(f.path);\n      return lang && isLanguageAvailable(lang);\n    });\n\n    // Warn about files skipped due to unavailable parsers\n    const skippedByLang = new Map<string, number>();\n    for (const f of scannedFiles) {\n      const lang = getLanguageFromFilename(f.path);\n      if (lang && !isLanguageAvailable(lang)) {\n        skippedByLang.set(lang, (skippedByLang.get(lang) || 0) + 1);\n      }\n    }\n    for (const [lang, count] of skippedByLang) {\n      console.warn(`Skipping ${count} ${lang} file(s) — ${lang} parser not available (native binding may not have built). Try: npm rebuild tree-sitter-${lang}`);\n    }\n\n    const totalParseable = parseableScanned.length;\n\n    if (totalParseable === 0) {\n      onProgress({\n        phase: 'parsing',\n        percent: 82,\n        message: 'No parseable files found — skipping parsing phase',\n        stats: { filesProcessed: 0, totalFiles: 0, nodesCreated: graph.nodeCount },\n      });\n    }\n\n    // Build byte-budget chunks\n    const chunks: string[][] = [];\n    let currentChunk: string[] = [];\n    let currentBytes = 0;\n    for (const file of parseableScanned) {\n      if (currentChunk.length > 0 && currentBytes + file.size > CHUNK_BYTE_BUDGET) {\n        chunks.push(currentChunk);\n        currentChunk = [];\n        currentBytes = 0;\n      }\n      currentChunk.push(file.path);\n      currentBytes += file.size;\n    }\n    if (currentChunk.length > 0) chunks.push(currentChunk);\n\n    const numChunks = chunks.length;\n\n    if (isDev) {\n      const totalMB = parseableScanned.reduce((s, f) => s + f.size, 0) / (1024 * 1024);\n      console.log(`📂 Scan: ${totalFiles} paths, ${totalParseable} parseable 
(${totalMB.toFixed(0)}MB), ${numChunks} chunks @ ${CHUNK_BYTE_BUDGET / (1024 * 1024)}MB budget`);\n    }\n\n    onProgress({\n      phase: 'parsing',\n      percent: 20,\n      message: `Parsing ${totalParseable} files in ${numChunks} chunk${numChunks !== 1 ? 's' : ''}...`,\n      stats: { filesProcessed: 0, totalFiles: totalParseable, nodesCreated: graph.nodeCount },\n    });\n\n    // Don't spawn workers for tiny repos — overhead exceeds benefit\n    const MIN_FILES_FOR_WORKERS = 15;\n    const MIN_BYTES_FOR_WORKERS = 512 * 1024;\n    const totalBytes = parseableScanned.reduce((s, f) => s + f.size, 0);\n\n    // Create worker pool once, reuse across chunks\n    let workerPool: WorkerPool | undefined;\n    if (totalParseable >= MIN_FILES_FOR_WORKERS || totalBytes >= MIN_BYTES_FOR_WORKERS) {\n      try {\n        let workerUrl = new URL('./workers/parse-worker.js', import.meta.url);\n        // When running under vitest, import.meta.url points to src/ where no .js exists.\n        // Fall back to the compiled dist/ worker so the pool can spawn real worker threads.\n        const thisDir = fileURLToPath(new URL('.', import.meta.url));\n        if (!fs.existsSync(fileURLToPath(workerUrl))) {\n          const distWorker = path.resolve(thisDir, '..', '..', '..', 'dist', 'core', 'ingestion', 'workers', 'parse-worker.js');\n          if (fs.existsSync(distWorker)) {\n            workerUrl = pathToFileURL(distWorker) as URL;\n          }\n        }\n        workerPool = createWorkerPool(workerUrl);\n      } catch (err) {\n        if (isDev) console.warn('Worker pool creation failed, using sequential fallback:', (err as Error).message);\n      }\n    }\n\n    let filesParsedSoFar = 0;\n\n    // AST cache sized for one chunk (sequential fallback uses it for import/call/heritage)\n    const maxChunkFiles = chunks.reduce((max, c) => Math.max(max, c.length), 0);\n    astCache = createASTCache(maxChunkFiles);\n\n    // Build import resolution context once — suffix index, file 
lists, resolve cache.\n    // Reused across all chunks to avoid rebuilding O(files × path_depth) structures.\n    const importCtx = buildImportResolutionContext(allPaths);\n    const allPathObjects = allPaths.map(p => ({ path: p }));\n\n    // Single-pass: parse + resolve imports/calls/heritage per chunk.\n    // Calls/heritage use the symbol table built so far (symbols from earlier chunks\n    // are already registered). This trades ~5% cross-chunk resolution accuracy for\n    // 200-400MB less memory — critical for Linux-kernel-scale repos.\n    const sequentialChunkPaths: string[][] = [];\n\n    try {\n      for (let chunkIdx = 0; chunkIdx < numChunks; chunkIdx++) {\n        const chunkPaths = chunks[chunkIdx];\n\n        // Read content for this chunk only\n        const chunkContents = await readFileContents(repoPath, chunkPaths);\n        const chunkFiles = chunkPaths\n          .filter(p => chunkContents.has(p))\n          .map(p => ({ path: p, content: chunkContents.get(p)! }));\n\n        // Parse this chunk (workers or sequential fallback)\n        const chunkWorkerData = await processParsing(\n          graph, chunkFiles, symbolTable, astCache,\n          (current, _total, filePath) => {\n            const globalCurrent = filesParsedSoFar + current;\n            const parsingProgress = 20 + ((globalCurrent / totalParseable) * 62);\n            onProgress({\n              phase: 'parsing',\n              percent: Math.round(parsingProgress),\n              message: `Parsing chunk ${chunkIdx + 1}/${numChunks}...`,\n              detail: filePath,\n              stats: { filesProcessed: globalCurrent, totalFiles: totalParseable, nodesCreated: graph.nodeCount },\n            });\n          },\n          workerPool,\n        );\n\n        const chunkBasePercent = 20 + ((filesParsedSoFar / totalParseable) * 62);\n\n        if (chunkWorkerData) {\n          // Imports\n          await processImportsFromExtracted(graph, allPathObjects, chunkWorkerData.imports, 
ctx, (current, total) => {\n            onProgress({\n              phase: 'parsing',\n              percent: Math.round(chunkBasePercent),\n              message: `Resolving imports (chunk ${chunkIdx + 1}/${numChunks})...`,\n              detail: `${current}/${total} files`,\n              stats: { filesProcessed: filesParsedSoFar, totalFiles: totalParseable, nodesCreated: graph.nodeCount },\n            });\n          }, repoPath, importCtx);\n          // Calls + Heritage + Routes — resolve in parallel (no shared mutable state between them)\n          // This is safe because each writes disjoint relationship types into idempotent id-keyed Maps,\n          // and the single-threaded event loop prevents races between synchronous addRelationship calls.\n          await Promise.all([\n            processCallsFromExtracted(\n              graph,\n              chunkWorkerData.calls,\n              ctx,\n              (current, total) => {\n                onProgress({\n                  phase: 'parsing',\n                  percent: Math.round(chunkBasePercent),\n                  message: `Resolving calls (chunk ${chunkIdx + 1}/${numChunks})...`,\n                  detail: `${current}/${total} files`,\n                  stats: { filesProcessed: filesParsedSoFar, totalFiles: totalParseable, nodesCreated: graph.nodeCount },\n                });\n              },\n              chunkWorkerData.constructorBindings,\n            ),\n            processHeritageFromExtracted(\n              graph,\n              chunkWorkerData.heritage,\n              ctx,\n              (current, total) => {\n                onProgress({\n                  phase: 'parsing',\n                  percent: Math.round(chunkBasePercent),\n                  message: `Resolving heritage (chunk ${chunkIdx + 1}/${numChunks})...`,\n                  detail: `${current}/${total} records`,\n                  stats: { filesProcessed: filesParsedSoFar, totalFiles: totalParseable, nodesCreated: 
graph.nodeCount },\n                });\n              },\n            ),\n            processRoutesFromExtracted(\n              graph,\n              chunkWorkerData.routes ?? [],\n              ctx,\n              (current, total) => {\n                onProgress({\n                  phase: 'parsing',\n                  percent: Math.round(chunkBasePercent),\n                  message: `Resolving routes (chunk ${chunkIdx + 1}/${numChunks})...`,\n                  detail: `${current}/${total} routes`,\n                  stats: { filesProcessed: filesParsedSoFar, totalFiles: totalParseable, nodesCreated: graph.nodeCount },\n                });\n              },\n            ),\n          ]);\n          // Process field write assignments (synchronous, runs after calls resolve)\n          if (chunkWorkerData.assignments?.length) {\n            processAssignmentsFromExtracted(graph, chunkWorkerData.assignments, ctx, chunkWorkerData.constructorBindings);\n          }\n        } else {\n          await processImports(graph, chunkFiles, astCache, ctx, undefined, repoPath, allPaths);\n          sequentialChunkPaths.push(chunkPaths);\n        }\n\n        filesParsedSoFar += chunkFiles.length;\n\n        // Clear AST cache between chunks to free memory\n        astCache.clear();\n        // chunkContents + chunkFiles + chunkWorkerData go out of scope → GC reclaims\n      }\n    } finally {\n      await workerPool?.terminate();\n    }\n\n    // Sequential fallback chunks: re-read source for call/heritage resolution\n    for (const chunkPaths of sequentialChunkPaths) {\n      const chunkContents = await readFileContents(repoPath, chunkPaths);\n      const chunkFiles = chunkPaths\n        .filter(p => chunkContents.has(p))\n        .map(p => ({ path: p, content: chunkContents.get(p)! 
}));\n      astCache = createASTCache(chunkFiles.length);\n      const rubyHeritage = await processCalls(graph, chunkFiles, astCache, ctx);\n      await processHeritage(graph, chunkFiles, astCache, ctx);\n      if (rubyHeritage.length > 0) {\n        await processHeritageFromExtracted(graph, rubyHeritage, ctx);\n      }\n      astCache.clear();\n    }\n\n    // Log resolution cache stats\n    if (isDev) {\n      const rcStats = ctx.getStats();\n      const total = rcStats.cacheHits + rcStats.cacheMisses;\n      const hitRate = total > 0 ? ((rcStats.cacheHits / total) * 100).toFixed(1) : '0';\n      console.log(`🔍 Resolution cache: ${rcStats.cacheHits} hits, ${rcStats.cacheMisses} misses (${hitRate}% hit rate)`);\n    }\n\n    // Free import resolution context — suffix index + resolve cache no longer needed\n    // (allPathObjects and importCtx hold ~94MB+ for large repos)\n    allPathObjects.length = 0;\n    importCtx.resolveCache.clear();\n    (importCtx as any).suffixIndex = null;\n    (importCtx as any).normalizedFileList = null;\n\n    let communityResult: Awaited<ReturnType<typeof processCommunities>> | undefined;\n    let processResult: Awaited<ReturnType<typeof processProcesses>> | undefined;\n\n    if (!options?.skipGraphPhases) {\n      // ── Phase 4.5: Method Resolution Order ──────────────────────────────\n      onProgress({\n        phase: 'parsing',\n        percent: 81,\n        message: 'Computing method resolution order...',\n        stats: { filesProcessed: totalFiles, totalFiles, nodesCreated: graph.nodeCount },\n      });\n\n      const mroResult = computeMRO(graph);\n      if (isDev && mroResult.entries.length > 0) {\n        console.log(`🔀 MRO: ${mroResult.entries.length} classes analyzed, ${mroResult.ambiguityCount} ambiguities found, ${mroResult.overrideEdges} OVERRIDES edges`);\n      }\n\n      // ── Phase 5: Communities ───────────────────────────────────────────\n      onProgress({\n        phase: 'communities',\n        percent: 82,\n    
    message: 'Detecting code communities...',\n        stats: { filesProcessed: totalFiles, totalFiles, nodesCreated: graph.nodeCount },\n      });\n\n      communityResult = await processCommunities(graph, (message, progress) => {\n        const communityProgress = 82 + (progress * 0.10);\n        onProgress({\n          phase: 'communities',\n          percent: Math.round(communityProgress),\n          message,\n          stats: { filesProcessed: totalFiles, totalFiles, nodesCreated: graph.nodeCount },\n        });\n      });\n\n      if (isDev) {\n        console.log(`🏘️ Community detection: ${communityResult.stats.totalCommunities} communities found (modularity: ${communityResult.stats.modularity.toFixed(3)})`);\n      }\n\n      communityResult.communities.forEach(comm => {\n        graph.addNode({\n          id: comm.id,\n          label: 'Community' as const,\n          properties: {\n            name: comm.label,\n            filePath: '',\n            heuristicLabel: comm.heuristicLabel,\n            cohesion: comm.cohesion,\n            symbolCount: comm.symbolCount,\n          }\n        });\n      });\n\n      communityResult.memberships.forEach(membership => {\n        graph.addRelationship({\n          id: `${membership.nodeId}_member_of_${membership.communityId}`,\n          type: 'MEMBER_OF',\n          sourceId: membership.nodeId,\n          targetId: membership.communityId,\n          confidence: 1.0,\n          reason: 'leiden-algorithm',\n        });\n      });\n\n      // ── Phase 6: Processes ─────────────────────────────────────────────\n      onProgress({\n        phase: 'processes',\n        percent: 94,\n        message: 'Detecting execution flows...',\n        stats: { filesProcessed: totalFiles, totalFiles, nodesCreated: graph.nodeCount },\n      });\n\n      let symbolCount = 0;\n      graph.forEachNode(n => { if (n.label !== 'File') symbolCount++; });\n      const dynamicMaxProcesses = Math.max(20, Math.min(300, Math.round(symbolCount 
/ 10)));\n\n      processResult = await processProcesses(\n        graph,\n        communityResult.memberships,\n        (message, progress) => {\n          const processProgress = 94 + (progress * 0.05);\n          onProgress({\n            phase: 'processes',\n            percent: Math.round(processProgress),\n            message,\n            stats: { filesProcessed: totalFiles, totalFiles, nodesCreated: graph.nodeCount },\n          });\n        },\n        { maxProcesses: dynamicMaxProcesses, minSteps: 3 }\n      );\n\n      if (isDev) {\n        console.log(`🔄 Process detection: ${processResult.stats.totalProcesses} processes found (${processResult.stats.crossCommunityCount} cross-community)`);\n      }\n\n      processResult.processes.forEach(proc => {\n        graph.addNode({\n          id: proc.id,\n          label: 'Process' as const,\n          properties: {\n            name: proc.label,\n            filePath: '',\n            heuristicLabel: proc.heuristicLabel,\n            processType: proc.processType,\n            stepCount: proc.stepCount,\n            communities: proc.communities,\n            entryPointId: proc.entryPointId,\n            terminalId: proc.terminalId,\n          }\n        });\n      });\n\n      processResult.steps.forEach(step => {\n        graph.addRelationship({\n          id: `${step.nodeId}_step_${step.step}_${step.processId}`,\n          type: 'STEP_IN_PROCESS',\n          sourceId: step.nodeId,\n          targetId: step.processId,\n          confidence: 1.0,\n          reason: 'trace-detection',\n          step: step.step,\n        });\n      });\n    }\n\n    onProgress({\n      phase: 'complete',\n      percent: 100,\n      message: communityResult && processResult\n        ? `Graph complete! ${communityResult.stats.totalCommunities} communities, ${processResult.stats.totalProcesses} processes detected.`\n        : 'Graph complete! 
(graph phases skipped)',\n      stats: {\n        filesProcessed: totalFiles,\n        totalFiles,\n        nodesCreated: graph.nodeCount\n      },\n    });\n\n    astCache.clear();\n\n    return { graph, repoPath, totalFileCount: totalFiles, communityResult, processResult };\n  } catch (error) {\n    cleanup();\n    throw error;\n  }\n};\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/process-processor.ts",
    "content": "/**\n * Process Detection Processor\n * \n * Detects execution flows (Processes) in the code graph by:\n * 1. Finding entry points (functions with no internal callers)\n * 2. Tracing forward via CALLS edges (BFS)\n * 3. Grouping and deduplicating similar paths\n * 4. Labeling with heuristic names\n * \n * Processes help agents understand how features work through the codebase.\n */\n\nimport { KnowledgeGraph, GraphNode, GraphRelationship, NodeLabel } from '../graph/types.js';\nimport { CommunityMembership } from './community-processor.js';\nimport { calculateEntryPointScore, isTestFile } from './entry-point-scoring.js';\nimport { SupportedLanguages } from '../../config/supported-languages.js';\n\nconst isDev = process.env.NODE_ENV === 'development';\n\n// ============================================================================\n// CONFIGURATION\n// ============================================================================\n\nexport interface ProcessDetectionConfig {\n  maxTraceDepth: number;      // Maximum steps to trace (default: 10)\n  maxBranching: number;       // Max branches to follow per node (default: 3)\n  maxProcesses: number;       // Maximum processes to detect (default: 50)\n  minSteps: number;           // Minimum steps for a valid process (default: 2)\n}\n\nconst DEFAULT_CONFIG: ProcessDetectionConfig = {\n  maxTraceDepth: 10,\n  maxBranching: 4,\n  maxProcesses: 75,\n  minSteps: 3,       // 3+ steps = genuine multi-hop flow (2-step is just \"A calls B\")\n};\n\n// ============================================================================\n// TYPES\n// ============================================================================\n\nexport interface ProcessNode {\n  id: string;                    // \"proc_handleLogin_createSession\"\n  label: string;                 // \"HandleLogin → CreateSession\"\n  heuristicLabel: string;\n  processType: 'intra_community' | 'cross_community';\n  stepCount: number;\n  communities: 
string[];         // Community IDs touched\n  entryPointId: string;\n  terminalId: string;\n  trace: string[];               // Ordered array of node IDs\n}\n\nexport interface ProcessStep {\n  nodeId: string;\n  processId: string;\n  step: number;                  // 1-indexed position in trace\n}\n\nexport interface ProcessDetectionResult {\n  processes: ProcessNode[];\n  steps: ProcessStep[];\n  stats: {\n    totalProcesses: number;\n    crossCommunityCount: number;\n    avgStepCount: number;\n    entryPointsFound: number;\n  };\n}\n\n// ============================================================================\n// MAIN PROCESSOR\n// ============================================================================\n\n/**\n * Detect processes (execution flows) in the knowledge graph\n * \n * This runs AFTER community detection, using CALLS edges to trace flows.\n */\nexport const processProcesses = async (\n  knowledgeGraph: KnowledgeGraph,\n  memberships: CommunityMembership[],\n  onProgress?: (message: string, progress: number) => void,\n  config: Partial<ProcessDetectionConfig> = {}\n): Promise<ProcessDetectionResult> => {\n  const cfg = { ...DEFAULT_CONFIG, ...config };\n  \n  onProgress?.('Finding entry points...', 0);\n  \n  // Build lookup maps\n  const membershipMap = new Map<string, string>();\n  memberships.forEach(m => membershipMap.set(m.nodeId, m.communityId));\n  \n  const callsEdges = buildCallsGraph(knowledgeGraph);\n  const reverseCallsEdges = buildReverseCallsGraph(knowledgeGraph);\n  const nodeMap = new Map<string, GraphNode>();\n  for (const n of knowledgeGraph.iterNodes()) nodeMap.set(n.id, n);\n  \n  // Step 1: Find entry points (functions that call others but have few callers)\n  const entryPoints = findEntryPoints(knowledgeGraph, reverseCallsEdges, callsEdges);\n  \n  onProgress?.(`Found ${entryPoints.length} entry points, tracing flows...`, 20);\n  \n  // 
Step 2: Trace processes from each entry point\n  const allTraces: string[][] = [];\n  \n  for (let i = 0; i < entryPoints.length && allTraces.length < cfg.maxProcesses * 2; i++) {\n    const entryId = entryPoints[i];\n    const traces = traceFromEntryPoint(entryId, callsEdges, cfg);\n    \n    // Filter out traces that are too short\n    traces.filter(t => t.length >= cfg.minSteps).forEach(t => allTraces.push(t));\n    \n    if (i % 10 === 0) {\n      onProgress?.(`Tracing entry point ${i + 1}/${entryPoints.length}...`, 20 + (i / entryPoints.length) * 40);\n    }\n  }\n  \n  onProgress?.(`Found ${allTraces.length} traces, deduplicating...`, 60);\n  \n  // Step 3: Deduplicate similar traces (subset removal)\n  const uniqueTraces = deduplicateTraces(allTraces);\n  \n  // Step 3b: Deduplicate by entry+terminal pair (keep longest path per pair)\n  const endpointDeduped = deduplicateByEndpoints(uniqueTraces);\n  \n  onProgress?.(`Deduped ${uniqueTraces.length} → ${endpointDeduped.length} unique endpoint pairs`, 70);\n  \n  // Step 4: Limit to max processes (prioritize longer traces)\n  const limitedTraces = endpointDeduped\n    .sort((a, b) => b.length - a.length)\n    .slice(0, cfg.maxProcesses);\n  \n  onProgress?.(`Creating ${limitedTraces.length} process nodes...`, 80);\n  \n  // Step 5: Create process nodes\n  const processes: ProcessNode[] = [];\n  const steps: ProcessStep[] = [];\n  \n  limitedTraces.forEach((trace, idx) => {\n    const entryPointId = trace[0];\n    const terminalId = trace[trace.length - 1];\n    \n    // Get communities touched\n    const communitiesSet = new Set<string>();\n    trace.forEach(nodeId => {\n      const comm = membershipMap.get(nodeId);\n      if (comm) communitiesSet.add(comm);\n    });\n    const communities = Array.from(communitiesSet);\n    \n    // Determine process type\n    const processType: 'intra_community' | 'cross_community' = \n      communities.length > 1 ? 
'cross_community' : 'intra_community';\n    \n    // Generate label\n    const entryNode = nodeMap.get(entryPointId);\n    const terminalNode = nodeMap.get(terminalId);\n    const entryName = entryNode?.properties.name || 'Unknown';\n    const terminalName = terminalNode?.properties.name || 'Unknown';\n    const heuristicLabel = `${capitalize(entryName)} → ${capitalize(terminalName)}`;\n    \n    const processId = `proc_${idx}_${sanitizeId(entryName)}`;\n    \n    processes.push({\n      id: processId,\n      label: heuristicLabel,\n      heuristicLabel,\n      processType,\n      stepCount: trace.length,\n      communities,\n      entryPointId,\n      terminalId,\n      trace,\n    });\n    \n    // Create step relationships\n    trace.forEach((nodeId, stepIdx) => {\n      steps.push({\n        nodeId,\n        processId,\n        step: stepIdx + 1,  // 1-indexed\n      });\n    });\n  });\n  \n  onProgress?.('Process detection complete!', 100);\n  \n  // Calculate stats\n  const crossCommunityCount = processes.filter(p => p.processType === 'cross_community').length;\n  const avgStepCount = processes.length > 0 \n    ? 
processes.reduce((sum, p) => sum + p.stepCount, 0) / processes.length \n    : 0;\n  \n  return {\n    processes,\n    steps,\n    stats: {\n      totalProcesses: processes.length,\n      crossCommunityCount,\n      avgStepCount: Math.round(avgStepCount * 10) / 10,\n      entryPointsFound: entryPoints.length,\n    },\n  };\n};\n\n// ============================================================================\n// HELPER: Build CALLS adjacency list\n// ============================================================================\n\ntype AdjacencyList = Map<string, string[]>;\n\n/**\n * Minimum edge confidence for process tracing.\n * Filters out ambiguous fuzzy-global matches (0.3) that cause\n * traces to jump across unrelated code areas.\n */\nconst MIN_TRACE_CONFIDENCE = 0.5;\n\nconst buildCallsGraph = (graph: KnowledgeGraph): AdjacencyList => {\n  const adj = new Map<string, string[]>();\n  \n  for (const rel of graph.iterRelationships()) {\n    if (rel.type === 'CALLS' && rel.confidence >= MIN_TRACE_CONFIDENCE) {\n      if (!adj.has(rel.sourceId)) {\n        adj.set(rel.sourceId, []);\n      }\n      adj.get(rel.sourceId)!.push(rel.targetId);\n    }\n  }\n\n  return adj;\n};\n\nconst buildReverseCallsGraph = (graph: KnowledgeGraph): AdjacencyList => {\n  const adj = new Map<string, string[]>();\n\n  for (const rel of graph.iterRelationships()) {\n    if (rel.type === 'CALLS' && rel.confidence >= MIN_TRACE_CONFIDENCE) {\n      if (!adj.has(rel.targetId)) {\n        adj.set(rel.targetId, []);\n      }\n      adj.get(rel.targetId)!.push(rel.sourceId);\n    }\n  }\n  \n  return adj;\n};\n\n/**\n * Find functions/methods that are good entry points for tracing.\n * \n * Entry points are scored based on:\n * 1. Call ratio (calls many, called by few)\n * 2. Export status (exported/public functions rank higher)\n * 3. 
Name patterns (handle*, on*, *Controller, etc.)\n * \n * Test files are excluded entirely.\n */\nconst findEntryPoints = (\n  graph: KnowledgeGraph, \n  reverseCallsEdges: AdjacencyList,\n  callsEdges: AdjacencyList\n): string[] => {\n  const symbolTypes = new Set<NodeLabel>(['Function', 'Method']);\n  const entryPointCandidates: { \n    id: string; \n    score: number; \n    reasons: string[];\n  }[] = [];\n  \n  for (const node of graph.iterNodes()) {\n    if (!symbolTypes.has(node.label)) continue;\n    \n    const filePath = node.properties.filePath || '';\n    \n    // Skip test files entirely\n    if (isTestFile(filePath)) continue;\n\n    const callers = reverseCallsEdges.get(node.id) || [];\n    const callees = callsEdges.get(node.id) || [];\n\n    // Must have at least 1 outgoing call to trace forward\n    if (callees.length === 0) continue;\n\n    // Calculate entry point score using new scoring system\n    const { score: baseScore, reasons } = calculateEntryPointScore(\n      node.properties.name,\n      node.properties.language ?? SupportedLanguages.JavaScript,\n      node.properties.isExported ?? false,\n      callers.length,\n      callees.length,\n      filePath  // Pass filePath for framework detection\n    );\n\n    let score = baseScore;\n    const astFrameworkMultiplier = node.properties.astFrameworkMultiplier ?? 
1.0;\n    if (astFrameworkMultiplier > 1.0) {\n      score *= astFrameworkMultiplier;\n      reasons.push(`framework-ast:${node.properties.astFrameworkReason || 'decorator'}`);\n    }\n\n    if (score > 0) {\n      entryPointCandidates.push({ id: node.id, score, reasons });\n    }\n  }\n  \n  // Sort by score descending and return top candidates\n  const sorted = entryPointCandidates.sort((a, b) => b.score - a.score);\n  \n  // DEBUG: Log top candidates with new scoring details\n  if (sorted.length > 0 && isDev) {\n    console.log(`[Process] Top 10 entry point candidates (new scoring):`);\n    sorted.slice(0, 10).forEach((c, i) => {\n      const node = graph.getNode(c.id);\n      const exported = node?.properties.isExported ? '✓' : '✗';\n      const shortPath = node?.properties.filePath?.split('/').slice(-2).join('/') || '';\n      console.log(`  ${i+1}. ${node?.properties.name} [exported:${exported}] (${shortPath})`);\n      console.log(`     score: ${c.score.toFixed(2)} = [${c.reasons.join(' × ')}]`);\n    });\n  }\n  \n  return sorted\n    .slice(0, 200)  // Limit to prevent explosion\n    .map(c => c.id);\n};\n\n// ============================================================================\n// HELPER: Trace from entry point (BFS)\n// ============================================================================\n\n/**\n * Trace forward from an entry point using BFS.\n * Returns all distinct paths up to maxDepth.\n */\nconst traceFromEntryPoint = (\n  entryId: string,\n  callsEdges: AdjacencyList,\n  config: ProcessDetectionConfig\n): string[][] => {\n  const traces: string[][] = [];\n  \n  // BFS with path tracking\n  // Each queue item: [currentNodeId, pathSoFar]\n  const queue: [string, string[]][] = [[entryId, [entryId]]];\n\n  while (queue.length > 0 && traces.length < config.maxBranching * 3) {\n    const [currentId, path] = queue.shift()!;\n    \n    // Get outgoing calls\n    const callees = callsEdges.get(currentId) || [];\n    \n    if (callees.length 
=== 0) {\n      // Terminal node - this is a complete trace\n      if (path.length >= config.minSteps) {\n        traces.push([...path]);\n      }\n    } else if (path.length >= config.maxTraceDepth) {\n      // Max depth reached - save what we have\n      if (path.length >= config.minSteps) {\n        traces.push([...path]);\n      }\n    } else {\n      // Continue tracing - limit branching\n      const limitedCallees = callees.slice(0, config.maxBranching);\n      let addedBranch = false;\n      \n      for (const calleeId of limitedCallees) {\n        // Avoid cycles\n        if (!path.includes(calleeId)) {\n          queue.push([calleeId, [...path, calleeId]]);\n          addedBranch = true;\n        }\n      }\n      \n      // If all branches were cycles, save current path as terminal\n      if (!addedBranch && path.length >= config.minSteps) {\n        traces.push([...path]);\n      }\n    }\n  }\n  \n  return traces;\n};\n\n// ============================================================================\n// HELPER: Deduplicate traces\n// ============================================================================\n\n/**\n * Merge traces that are subsets of other traces.\n * Keep longer traces, remove redundant shorter ones.\n */\nconst deduplicateTraces = (traces: string[][]): string[][] => {\n  if (traces.length === 0) return [];\n  \n  // Sort by length descending\n  const sorted = [...traces].sort((a, b) => b.length - a.length);\n  const unique: string[][] = [];\n  \n  for (const trace of sorted) {\n    // Check if this trace is a contiguous subpath of any already-added trace.\n    // Pad both keys with the delimiter so a node ID that is a prefix/suffix of\n    // another node ID can't produce a false substring match.\n    const traceKey = `->${trace.join('->')}->`;\n    const isSubset = unique.some(existing => {\n      const existingKey = `->${existing.join('->')}->`;\n      return existingKey.includes(traceKey);\n    });\n    \n    if (!isSubset) {\n      unique.push(trace);\n    }\n  }\n  \n  return unique;\n};\n\n// ============================================================================\n// HELPER: Deduplicate by 
entry+terminal endpoints\n// ============================================================================\n\n/**\n * Keep only the longest trace per unique entry→terminal pair.\n * Multiple paths between the same two endpoints are redundant for agents.\n */\nconst deduplicateByEndpoints = (traces: string[][]): string[][] => {\n  if (traces.length === 0) return [];\n  \n  const byEndpoints = new Map<string, string[]>();\n  // Sort longest first so the first seen per key is the longest\n  const sorted = [...traces].sort((a, b) => b.length - a.length);\n  \n  for (const trace of sorted) {\n    const key = `${trace[0]}::${trace[trace.length - 1]}`;\n    if (!byEndpoints.has(key)) {\n      byEndpoints.set(key, trace);\n    }\n  }\n  \n  return Array.from(byEndpoints.values());\n};\n\n// ============================================================================\n// HELPER: String utilities\n// ============================================================================\n\nconst capitalize = (s: string): string => {\n  if (!s) return s;\n  return s.charAt(0).toUpperCase() + s.slice(1);\n};\n\nconst sanitizeId = (s: string): string => {\n  return s.replace(/[^a-zA-Z0-9]/g, '_').substring(0, 20).toLowerCase();\n};\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/resolution-context.ts",
    "content": "/**\n * Resolution Context\n *\n * Single implementation of tiered name resolution. Replaces the duplicated\n * tier-selection logic previously split between symbol-resolver.ts and\n * call-processor.ts.\n *\n * Resolution tiers (highest confidence first):\n * 1. Same file (lookupExactFull — authoritative)\n * 2a-named. Named binding chain (walkBindingChain via NamedImportMap)\n * 2a. Import-scoped (lookupFuzzy filtered by ImportMap)\n * 2b. Package-scoped (lookupFuzzy filtered by PackageMap)\n * 3. Global (all candidates — consumers must check candidate count)\n */\n\nimport type { SymbolTable, SymbolDefinition } from './symbol-table.js';\nimport { createSymbolTable } from './symbol-table.js';\nimport type { NamedImportBinding } from './import-processor.js';\nimport { isFileInPackageDir } from './import-processor.js';\nimport { walkBindingChain } from './named-binding-extraction.js';\n\n/** Resolution tier for tracking, logging, and test assertions. */\nexport type ResolutionTier = 'same-file' | 'import-scoped' | 'global';\n\n/** Tier-selected candidates with metadata. */\nexport interface TieredCandidates {\n  readonly candidates: readonly SymbolDefinition[];\n  readonly tier: ResolutionTier;\n}\n\n/** Confidence scores per resolution tier. */\nexport const TIER_CONFIDENCE: Record<ResolutionTier, number> = {\n  'same-file': 0.95,\n  'import-scoped': 0.9,\n  'global': 0.5,\n};\n\n// --- Map types ---\nexport type ImportMap = Map<string, Set<string>>;\nexport type PackageMap = Map<string, Set<string>>;\nexport type NamedImportMap = Map<string, Map<string, NamedImportBinding>>;\n\nexport interface ResolutionContext {\n  /**\n   * The only resolution API. 
Returns all candidates at the winning tier.\n   *\n   * Tier 3 ('global') returns ALL candidates regardless of count —\n   * consumers must check candidates.length and refuse ambiguous matches.\n   */\n  resolve(name: string, fromFile: string): TieredCandidates | null;\n\n  // --- Data access (for pipeline wiring, not resolution) ---\n  /** Symbol table — used by parsing-processor to populate symbols. */\n  readonly symbols: SymbolTable;\n  /** Raw maps — used by import-processor to populate import data. */\n  readonly importMap: ImportMap;\n  readonly packageMap: PackageMap;\n  readonly namedImportMap: NamedImportMap;\n\n  // --- Per-file cache lifecycle ---\n  enableCache(filePath: string): void;\n  clearCache(): void;\n\n  // --- Operational ---\n  getStats(): { fileCount: number; globalSymbolCount: number; cacheHits: number; cacheMisses: number };\n  clear(): void;\n}\n\nexport const createResolutionContext = (): ResolutionContext => {\n  const symbols = createSymbolTable();\n  const importMap: ImportMap = new Map();\n  const packageMap: PackageMap = new Map();\n  const namedImportMap: NamedImportMap = new Map();\n\n  // Per-file cache state\n  let cacheFile: string | null = null;\n  let cache: Map<string, TieredCandidates | null> | null = null;\n  let cacheHits = 0;\n  let cacheMisses = 0;\n\n  // --- Core resolution (single implementation of tier logic) ---\n\n  const resolveUncached = (name: string, fromFile: string): TieredCandidates | null => {\n    // Tier 1: Same file — authoritative match (returns all overloads)\n    const localDefs = symbols.lookupExactAll(fromFile, name);\n    if (localDefs.length > 0) {\n      return { candidates: localDefs, tier: 'same-file' };\n    }\n\n    // Get all global definitions for subsequent tiers\n    const allDefs = symbols.lookupFuzzy(name);\n\n    // Tier 2a-named: Check named bindings BEFORE empty-allDefs early return\n    // because aliased imports mean lookupFuzzy('U') returns empty but we\n    // can resolve via 
the exported name.\n    const chainResult = walkBindingChain(name, fromFile, symbols, namedImportMap, allDefs);\n    if (chainResult && chainResult.length > 0) {\n      return { candidates: chainResult, tier: 'import-scoped' };\n    }\n\n    if (allDefs.length === 0) return null;\n\n    // Tier 2a: Import-scoped — definition in a file imported by fromFile\n    const importedFiles = importMap.get(fromFile);\n    if (importedFiles) {\n      const importedDefs = allDefs.filter(def => importedFiles.has(def.filePath));\n      if (importedDefs.length > 0) {\n        return { candidates: importedDefs, tier: 'import-scoped' };\n      }\n    }\n\n    // Tier 2b: Package-scoped — definition in a package dir imported by fromFile\n    const importedPackages = packageMap.get(fromFile);\n    if (importedPackages) {\n      const packageDefs = allDefs.filter(def => {\n        for (const dirSuffix of importedPackages) {\n          if (isFileInPackageDir(def.filePath, dirSuffix)) return true;\n        }\n        return false;\n      });\n      if (packageDefs.length > 0) {\n        return { candidates: packageDefs, tier: 'import-scoped' };\n      }\n    }\n\n    // Tier 3: Global — pass all candidates through.\n    // Consumers must check candidate count and refuse ambiguous matches.\n    return { candidates: allDefs, tier: 'global' };\n  };\n\n  const resolve = (name: string, fromFile: string): TieredCandidates | null => {\n    // Check cache (only when enabled AND fromFile matches cached file)\n    if (cache && cacheFile === fromFile) {\n      if (cache.has(name)) {\n        cacheHits++;\n        return cache.get(name)!;\n      }\n      cacheMisses++;\n    }\n\n    const result = resolveUncached(name, fromFile);\n\n    // Store in cache if active and file matches\n    if (cache && cacheFile === fromFile) {\n      cache.set(name, result);\n    }\n\n    return result;\n  };\n\n  // --- Cache lifecycle ---\n\n  const enableCache = (filePath: string): void => {\n    cacheFile = 
filePath;\n    if (!cache) cache = new Map();\n    else cache.clear();\n  };\n\n  const clearCache = (): void => {\n    cacheFile = null;\n    // Reuse the Map instance — just clear entries to reduce GC pressure at scale.\n    cache?.clear();\n  };\n\n  const getStats = () => ({\n    ...symbols.getStats(),\n    cacheHits,\n    cacheMisses,\n  });\n\n  const clear = (): void => {\n    symbols.clear();\n    importMap.clear();\n    packageMap.clear();\n    namedImportMap.clear();\n    clearCache();\n    cacheHits = 0;\n    cacheMisses = 0;\n  };\n\n  return {\n    resolve,\n    symbols,\n    importMap,\n    packageMap,\n    namedImportMap,\n    enableCache,\n    clearCache,\n    getStats,\n    clear,\n  };\n};\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/resolvers/csharp.ts",
    "content": "/**\n * C# namespace import resolution.\n * Handles using-directive resolution via .csproj root namespace stripping.\n */\n\nimport type { SuffixIndex } from './utils.js';\nimport { suffixResolve } from './utils.js';\n\n/** C# project config parsed from .csproj files */\nexport interface CSharpProjectConfig {\n  /** Root namespace from <RootNamespace> or assembly name (default: project directory name) */\n  rootNamespace: string;\n  /** Directory containing the .csproj file */\n  projectDir: string;\n}\n\n/**\n * Resolve a C# using-directive import path to matching .cs files.\n * Tries single-file match first, then directory match for namespace imports.\n */\nexport function resolveCSharpImport(\n  importPath: string,\n  csharpConfigs: CSharpProjectConfig[],\n  normalizedFileList: string[],\n  allFileList: string[],\n  index?: SuffixIndex,\n): string[] {\n  const namespacePath = importPath.replace(/\\./g, '/');\n  const results: string[] = [];\n\n  for (const config of csharpConfigs) {\n    const nsPath = config.rootNamespace.replace(/\\./g, '/');\n    let relative: string;\n    if (namespacePath.startsWith(nsPath + '/')) {\n      relative = namespacePath.slice(nsPath.length + 1);\n    } else if (namespacePath === nsPath) {\n      // The import IS the root namespace — resolve to all .cs files in project root\n      relative = '';\n    } else {\n      continue;\n    }\n\n    const dirPrefix = config.projectDir\n      ? (relative ? config.projectDir + '/' + relative : config.projectDir)\n      : relative;\n\n    // 1. 
Try as single file: relative.cs (e.g., \"Models/DlqMessage.cs\")\n    if (relative) {\n      const candidate = dirPrefix + '.cs';\n      if (index) {\n        const result = index.get(candidate) || index.getInsensitive(candidate);\n        if (result) return [result];\n      }\n      // Also try suffix match\n      const suffixResult = index?.get(relative + '.cs') || index?.getInsensitive(relative + '.cs');\n      if (suffixResult) return [suffixResult];\n    }\n\n    // 2. Try as directory: all .cs files directly inside (namespace import)\n    if (index) {\n      const dirFiles = index.getFilesInDir(dirPrefix, '.cs');\n      for (const f of dirFiles) {\n        const normalized = f.replace(/\\\\/g, '/');\n        // Check it's a direct child by finding the dirPrefix and ensuring no deeper slashes\n        const prefixIdx = normalized.indexOf(dirPrefix + '/');\n        if (prefixIdx < 0) continue;\n        const afterDir = normalized.substring(prefixIdx + dirPrefix.length + 1);\n        if (!afterDir.includes('/')) {\n          results.push(f);\n        }\n      }\n      if (results.length > 0) return results;\n    }\n\n    // 3. 
Linear scan fallback for directory matching\n    if (results.length === 0) {\n      const dirTrail = dirPrefix + '/';\n      for (let i = 0; i < normalizedFileList.length; i++) {\n        const normalized = normalizedFileList[i];\n        if (!normalized.endsWith('.cs')) continue;\n        const prefixIdx = normalized.indexOf(dirTrail);\n        if (prefixIdx < 0) continue;\n        const afterDir = normalized.substring(prefixIdx + dirTrail.length);\n        if (!afterDir.includes('/')) {\n          results.push(allFileList[i]);\n        }\n      }\n      if (results.length > 0) return results;\n    }\n  }\n\n  // Fallback: suffix matching without namespace stripping (single file)\n  const pathParts = namespacePath.split('/').filter(Boolean);\n  const fallback = suffixResolve(pathParts, normalizedFileList, allFileList, index);\n  return fallback ? [fallback] : [];\n}\n\n/**\n * Compute the directory suffix for a C# namespace import (for PackageMap).\n * Returns a suffix like \"/ProjectDir/Models/\" or null if no config matches.\n */\nexport function resolveCSharpNamespaceDir(\n  importPath: string,\n  csharpConfigs: CSharpProjectConfig[],\n): string | null {\n  const namespacePath = importPath.replace(/\\./g, '/');\n\n  for (const config of csharpConfigs) {\n    const nsPath = config.rootNamespace.replace(/\\./g, '/');\n    let relative: string;\n    if (namespacePath.startsWith(nsPath + '/')) {\n      relative = namespacePath.slice(nsPath.length + 1);\n    } else if (namespacePath === nsPath) {\n      relative = '';\n    } else {\n      continue;\n    }\n\n    const dirPrefix = config.projectDir\n      ? (relative ? config.projectDir + '/' + relative : config.projectDir)\n      : relative;\n\n    if (!dirPrefix) continue;\n    return '/' + dirPrefix + '/';\n  }\n\n  return null;\n}\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/resolvers/go.ts",
    "content": "/**\n * Go package import resolution.\n * Handles Go module path-based package imports.\n */\n\n/** Go module config parsed from go.mod */\nexport interface GoModuleConfig {\n  /** Module path (e.g., \"github.com/user/repo\") */\n  modulePath: string;\n}\n\n/**\n * Extract the package directory suffix from a Go import path.\n * Returns the suffix string (e.g., \"/internal/auth/\") or null if invalid.\n */\nexport function resolveGoPackageDir(\n  importPath: string,\n  goModule: GoModuleConfig,\n): string | null {\n  if (!importPath.startsWith(goModule.modulePath)) return null;\n  const relativePkg = importPath.slice(goModule.modulePath.length + 1);\n  if (!relativePkg) return null;\n  return '/' + relativePkg + '/';\n}\n\n/**\n * Resolve a Go internal package import to all .go files in the package directory.\n * Returns an array of file paths.\n */\nexport function resolveGoPackage(\n  importPath: string,\n  goModule: GoModuleConfig,\n  normalizedFileList: string[],\n  allFileList: string[],\n): string[] {\n  if (!importPath.startsWith(goModule.modulePath)) return [];\n\n  // Strip module path to get relative package path\n  const relativePkg = importPath.slice(goModule.modulePath.length + 1); // e.g., \"internal/auth\"\n  if (!relativePkg) return [];\n\n  const pkgSuffix = '/' + relativePkg + '/';\n  const matches: string[] = [];\n\n  for (let i = 0; i < normalizedFileList.length; i++) {\n    // Prepend '/' so paths like \"internal/auth/service.go\" match suffix \"/internal/auth/\"\n    const normalized = '/' + normalizedFileList[i];\n    // File must be directly in the package directory (not a subdirectory)\n    if (normalized.includes(pkgSuffix) && normalized.endsWith('.go') && !normalized.endsWith('_test.go')) {\n      const afterPkg = normalized.substring(normalized.indexOf(pkgSuffix) + pkgSuffix.length);\n      if (!afterPkg.includes('/')) {\n        matches.push(allFileList[i]);\n      }\n    }\n  }\n\n  return matches;\n}\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/resolvers/index.ts",
    "content": "/**\n * Language-specific import resolvers.\n * Extracted from import-processor.ts for maintainability.\n */\n\nexport { EXTENSIONS, tryResolveWithExtensions, buildSuffixIndex, suffixResolve } from './utils.js';\nexport type { SuffixIndex } from './utils.js';\n\nexport { KOTLIN_EXTENSIONS, appendKotlinWildcard, resolveJvmWildcard, resolveJvmMemberImport } from './jvm.js';\n\nexport { resolveGoPackageDir, resolveGoPackage } from './go.js';\nexport type { GoModuleConfig } from './go.js';\n\nexport { resolveCSharpImport, resolveCSharpNamespaceDir } from './csharp.js';\nexport type { CSharpProjectConfig } from './csharp.js';\n\nexport { resolvePhpImport } from './php.js';\nexport type { ComposerConfig } from './php.js';\n\nexport { resolveRustImport, tryRustModulePath } from './rust.js';\n\nexport { resolveRubyImport } from './ruby.js';\n\nexport { resolvePythonImport } from './python.js';\n\nexport { resolveImportPath, RESOLVE_CACHE_CAP } from './standard.js';\nexport type { TsconfigPaths } from './standard.js';\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/resolvers/jvm.ts",
    "content": "/**\n * JVM import resolution (Java + Kotlin).\n * Handles wildcard imports, member/static imports, and Kotlin-specific patterns.\n */\n\nimport type { SuffixIndex } from './utils.js';\n\n/** Kotlin file extensions for JVM resolver reuse */\nexport const KOTLIN_EXTENSIONS: readonly string[] = ['.kt', '.kts'];\n\n/**\n * Append .* to a Kotlin import path if the AST has a wildcard_import sibling node.\n * Pure function — returns a new string without mutating the input.\n */\nexport const appendKotlinWildcard = (importPath: string, importNode: any): string => {\n  for (let i = 0; i < importNode.childCount; i++) {\n    if (importNode.child(i)?.type === 'wildcard_import') {\n      return importPath.endsWith('.*') ? importPath : `${importPath}.*`;\n    }\n  }\n  return importPath;\n};\n\n/**\n * Resolve a JVM wildcard import (com.example.*) to all matching files.\n * Works for both Java (.java) and Kotlin (.kt, .kts).\n */\nexport function resolveJvmWildcard(\n  importPath: string,\n  normalizedFileList: string[],\n  allFileList: string[],\n  extensions: readonly string[],\n  index?: SuffixIndex,\n): string[] {\n  // \"com.example.util.*\" -> \"com/example/util\"\n  const packagePath = importPath.slice(0, -2).replace(/\\./g, '/');\n\n  if (index) {\n    const candidates = extensions.flatMap(ext => index.getFilesInDir(packagePath, ext));\n    // Filter to only direct children (no subdirectories)\n    const packageSuffix = '/' + packagePath + '/';\n    return candidates.filter(f => {\n      const normalized = f.replace(/\\\\/g, '/');\n      const idx = normalized.indexOf(packageSuffix);\n      if (idx < 0) return false;\n      const afterPkg = normalized.substring(idx + packageSuffix.length);\n      return !afterPkg.includes('/');\n    });\n  }\n\n  // Fallback: linear scan\n  const packageSuffix = '/' + packagePath + '/';\n  const matches: string[] = [];\n  for (let i = 0; i < normalizedFileList.length; i++) {\n    const normalized = 
normalizedFileList[i];\n    if (normalized.includes(packageSuffix) &&\n        extensions.some(ext => normalized.endsWith(ext))) {\n      const afterPackage = normalized.substring(normalized.indexOf(packageSuffix) + packageSuffix.length);\n      if (!afterPackage.includes('/')) {\n        matches.push(allFileList[i]);\n      }\n    }\n  }\n  return matches;\n}\n\n/**\n * Try to resolve a JVM member/static import by stripping the member name.\n * Java: \"com.example.Constants.VALUE\" -> resolve \"com.example.Constants\"\n * Kotlin: \"com.example.Constants.VALUE\" -> resolve \"com.example.Constants\"\n */\nexport function resolveJvmMemberImport(\n  importPath: string,\n  normalizedFileList: string[],\n  allFileList: string[],\n  extensions: readonly string[],\n  index?: SuffixIndex,\n): string | null {\n  // Member imports: com.example.Constants.VALUE or com.example.Constants.*\n  // The last segment is a member name if it starts with lowercase, is ALL_CAPS, or is a wildcard\n  const segments = importPath.split('.');\n  if (segments.length < 3) return null;\n\n  const lastSeg = segments[segments.length - 1];\n  if (lastSeg === '*' || /^[a-z]/.test(lastSeg) || /^[A-Z_]+$/.test(lastSeg)) {\n    const classPath = segments.slice(0, -1).join('/');\n\n    for (const ext of extensions) {\n      const classSuffix = classPath + ext;\n      if (index) {\n        const result = index.get(classSuffix) || index.getInsensitive(classSuffix);\n        if (result) return result;\n      } else {\n        const fullSuffix = '/' + classSuffix;\n        for (let i = 0; i < normalizedFileList.length; i++) {\n          if (normalizedFileList[i].endsWith(fullSuffix) ||\n              normalizedFileList[i].toLowerCase().endsWith(fullSuffix.toLowerCase())) {\n            return allFileList[i];\n          }\n        }\n      }\n    }\n  }\n\n  return null;\n}\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/resolvers/php.ts",
    "content": "/**\n * PHP PSR-4 import resolution.\n * Handles use-statement resolution via composer.json autoload mappings.\n */\n\nimport type { SuffixIndex } from './utils.js';\nimport { suffixResolve } from './utils.js';\n\n/** PHP Composer PSR-4 autoload config */\nexport interface ComposerConfig {\n  /** Map of namespace prefix -> directory (e.g., \"App\\\\\" -> \"app/\") */\n  psr4: Map<string, string>;\n}\n\n/**\n * Resolve a PHP use-statement import path using PSR-4 mappings.\n * e.g. \"App\\Http\\Controllers\\UserController\" -> \"app/Http/Controllers/UserController.php\"\n */\nexport function resolvePhpImport(\n  importPath: string,\n  composerConfig: ComposerConfig | null,\n  allFiles: Set<string>,\n  normalizedFileList: string[],\n  allFileList: string[],\n  index?: SuffixIndex,\n): string | null {\n  // Normalize: replace backslashes with forward slashes\n  const normalized = importPath.replace(/\\\\/g, '/');\n\n  // Try PSR-4 resolution if composer.json was found\n  if (composerConfig) {\n    // Sort namespaces by length descending (longest match wins)\n    const sorted = [...composerConfig.psr4.entries()].sort((a, b) => b[0].length - a[0].length);\n    for (const [nsPrefix, dirPrefix] of sorted) {\n      const nsPrefixSlash = nsPrefix.replace(/\\\\/g, '/');\n      if (normalized.startsWith(nsPrefixSlash + '/') || normalized === nsPrefixSlash) {\n        const remainder = normalized.slice(nsPrefixSlash.length).replace(/^\\//, '');\n        const filePath = dirPrefix + (remainder ? '/' + remainder : '') + '.php';\n        if (allFiles.has(filePath)) return filePath;\n        if (index) {\n          const result = index.getInsensitive(filePath);\n          if (result) return result;\n        }\n      }\n    }\n  }\n\n  // Fallback: suffix matching (works without composer.json)\n  const pathParts = normalized.split('/').filter(Boolean);\n  return suffixResolve(pathParts, normalizedFileList, allFileList, index);\n}\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/resolvers/python.ts",
    "content": "/**\n * Python import resolution — PEP 328 relative imports and proximity-based bare imports.\n * Import system spec: PEP 302 (original), PEP 451 (current).\n */\n\nimport { tryResolveWithExtensions } from './utils.js';\n\n/**\n * Resolve a Python import to a file path.\n *\n * 1. Relative (PEP 328): `.module`, `..module` — 1 dot = current package, each extra dot goes up one level.\n * 2. Proximity bare import: static heuristic — checks the importer's own directory first.\n *    Approximates the common case where co-located files find each other without an installed package.\n *    Single-segment only — multi-segment (e.g. `os.path`) falls through to suffixResolve.\n *    Checks package (__init__.py) before module (.py), matching CPython's finder order (PEP 451 §4).\n *    Coexistence of both is physically impossible (same name = file vs directory), so the order\n *    only matters for spec compliance.\n *    Note: namespace packages (PEP 420, directory without __init__.py) are not handled.\n *\n * Returns null to let the caller fall through to suffixResolve.\n */\nexport function resolvePythonImport(\n  currentFile: string,\n  importPath: string,\n  allFiles: Set<string>,\n): string | null {\n  // Relative import — PEP 328 (https://peps.python.org/pep-0328/)\n  if (importPath.startsWith('.')) {\n    const dotMatch = importPath.match(/^(\\.+)(.*)/);\n    if (!dotMatch) return null;\n\n    const dotCount = dotMatch[1].length;\n    const modulePart = dotMatch[2];\n    const dirParts = currentFile.split('/').slice(0, -1);\n\n    // PEP 328: more dots than directory levels → beyond top-level package → invalid\n    if (dotCount - 1 > dirParts.length) return null;\n    for (let i = 1; i < dotCount; i++) dirParts.pop();\n\n    if (modulePart) {\n      dirParts.push(...modulePart.replace(/\\./g, '/').split('/'));\n    }\n\n    return tryResolveWithExtensions(dirParts.join('/'), allFiles);\n  }\n\n  // Proximity bare import — single-segment only; package 
before module (PEP 451 §4)\n  const pathLike = importPath.replace(/\\./g, '/');\n  if (pathLike.includes('/')) return null;\n\n  // Normalize for Windows backslashes\n  const importerDir = currentFile.replace(/\\\\/g, '/').split('/').slice(0, -1).join('/');\n  if (!importerDir) return null;\n\n  if (allFiles.has(`${importerDir}/${pathLike}/__init__.py`)) return `${importerDir}/${pathLike}/__init__.py`;\n  if (allFiles.has(`${importerDir}/${pathLike}.py`)) return `${importerDir}/${pathLike}.py`;\n\n  return null;\n}\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/resolvers/ruby.ts",
    "content": "/**\n * Ruby require/require_relative import resolution.\n * Handles path resolution for Ruby's require and require_relative calls.\n */\n\nimport type { SuffixIndex } from './utils.js';\nimport { suffixResolve } from './utils.js';\n\n/**\n * Resolve a Ruby require/require_relative path to a matching .rb file.\n *\n * require_relative paths are pre-normalized to './' prefix by the caller.\n * require paths use suffix matching (gem-style paths like 'json', 'net/http').\n */\nexport function resolveRubyImport(\n  importPath: string,\n  normalizedFileList: string[],\n  allFileList: string[],\n  index?: SuffixIndex,\n): string | null {\n  const pathParts = importPath.replace(/^\\.\\//, '').split('/').filter(Boolean);\n  return suffixResolve(pathParts, normalizedFileList, allFileList, index);\n}\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/resolvers/rust.ts",
    "content": "/**\n * Rust module import resolution.\n * Handles crate::, super::, self:: prefix paths and :: separators.\n */\n\n/**\n * Resolve Rust use-path to a file.\n * Handles crate::, super::, self:: prefixes and :: path separators.\n */\nexport function resolveRustImport(\n  currentFile: string,\n  importPath: string,\n  allFiles: Set<string>,\n): string | null {\n  let rustPath: string;\n\n  if (importPath.startsWith('crate::')) {\n    // crate:: resolves from src/ directory (standard Rust layout)\n    rustPath = importPath.slice(7).replace(/::/g, '/');\n\n    // Try from src/ (standard layout)\n    const fromSrc = tryRustModulePath('src/' + rustPath, allFiles);\n    if (fromSrc) return fromSrc;\n\n    // Try from repo root (non-standard)\n    const fromRoot = tryRustModulePath(rustPath, allFiles);\n    if (fromRoot) return fromRoot;\n\n    return null;\n  }\n\n  if (importPath.startsWith('super::')) {\n    // super:: = parent directory of current file's module\n    const currentDir = currentFile.split('/').slice(0, -1);\n    currentDir.pop(); // Go up one level for super::\n    rustPath = importPath.slice(7).replace(/::/g, '/');\n    const fullPath = [...currentDir, rustPath].join('/');\n    return tryRustModulePath(fullPath, allFiles);\n  }\n\n  if (importPath.startsWith('self::')) {\n    // self:: = current module's directory\n    const currentDir = currentFile.split('/').slice(0, -1);\n    rustPath = importPath.slice(6).replace(/::/g, '/');\n    const fullPath = [...currentDir, rustPath].join('/');\n    return tryRustModulePath(fullPath, allFiles);\n  }\n\n  // Bare path without prefix (e.g., from a use in a nested module)\n  // Convert :: to / and try suffix matching\n  if (importPath.includes('::')) {\n    rustPath = importPath.replace(/::/g, '/');\n    return tryRustModulePath(rustPath, allFiles);\n  }\n\n  return null;\n}\n\n/**\n * Try to resolve a Rust module path to a file.\n * Tries: path.rs, path/mod.rs, and with the last segment stripped\n 
* (last segment might be a symbol name, not a module).\n */\nexport function tryRustModulePath(modulePath: string, allFiles: Set<string>): string | null {\n  // Try direct: path.rs\n  if (allFiles.has(modulePath + '.rs')) return modulePath + '.rs';\n  // Try directory: path/mod.rs\n  if (allFiles.has(modulePath + '/mod.rs')) return modulePath + '/mod.rs';\n  // Try path/lib.rs (for crate root)\n  if (allFiles.has(modulePath + '/lib.rs')) return modulePath + '/lib.rs';\n\n  // The last segment might be a symbol (function, struct, etc.), not a module.\n  // Strip it and try again.\n  const lastSlash = modulePath.lastIndexOf('/');\n  if (lastSlash > 0) {\n    const parentPath = modulePath.substring(0, lastSlash);\n    if (allFiles.has(parentPath + '.rs')) return parentPath + '.rs';\n    if (allFiles.has(parentPath + '/mod.rs')) return parentPath + '/mod.rs';\n  }\n\n  return null;\n}\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/resolvers/standard.ts",
    "content": "/**\n * Standard import path resolution.\n * Handles relative imports, path alias rewriting, and generic suffix matching.\n * Used as the fallback when language-specific resolvers don't match.\n */\n\nimport type { SuffixIndex } from './utils.js';\nimport { tryResolveWithExtensions, suffixResolve } from './utils.js';\nimport { resolveRustImport } from './rust.js';\nimport { SupportedLanguages } from '../../../config/supported-languages.js';\n\n/** TypeScript path alias config parsed from tsconfig.json */\nexport interface TsconfigPaths {\n  /** Map of alias prefix -> target prefix (e.g., \"@/\" -> \"src/\") */\n  aliases: Map<string, string>;\n  /** Base URL for path resolution (relative to repo root) */\n  baseUrl: string;\n}\n\n/** Max entries in the resolve cache. Beyond this, entries are evicted.\n *  100K entries ≈ 15MB — covers the most common import patterns. */\nexport const RESOLVE_CACHE_CAP = 100_000;\n\n/**\n * Resolve an import path to a file path in the repository.\n *\n * Language-specific preprocessing is applied before the generic resolution:\n * - TypeScript/JavaScript: rewrites tsconfig path aliases\n * - Rust: converts crate::/super::/self:: to relative paths\n *\n * Java wildcards and Go package imports are handled separately in processImports\n * because they resolve to multiple files.\n */\nexport const resolveImportPath = (\n  currentFile: string,\n  importPath: string,\n  allFiles: Set<string>,\n  allFileList: string[],\n  normalizedFileList: string[],\n  resolveCache: Map<string, string | null>,\n  language: SupportedLanguages,\n  tsconfigPaths: TsconfigPaths | null,\n  index?: SuffixIndex,\n): string | null => {\n  const cacheKey = `${currentFile}::${importPath}`;\n  if (resolveCache.has(cacheKey)) return resolveCache.get(cacheKey) ?? 
null;\n\n  const cache = (result: string | null): string | null => {\n    // Evict oldest 20% when cap is reached instead of clearing all\n    if (resolveCache.size >= RESOLVE_CACHE_CAP) {\n      const evictCount = Math.floor(RESOLVE_CACHE_CAP * 0.2);\n      const iter = resolveCache.keys();\n      for (let i = 0; i < evictCount; i++) {\n        const key = iter.next().value;\n        if (key !== undefined) resolveCache.delete(key);\n      }\n    }\n    resolveCache.set(cacheKey, result);\n    return result;\n  };\n\n  // ---- TypeScript/JavaScript: rewrite path aliases ----\n  if (\n    (language === SupportedLanguages.TypeScript || language === SupportedLanguages.JavaScript) &&\n    tsconfigPaths &&\n    !importPath.startsWith('.')\n  ) {\n    for (const [aliasPrefix, targetPrefix] of tsconfigPaths.aliases) {\n      if (importPath.startsWith(aliasPrefix)) {\n        const remainder = importPath.slice(aliasPrefix.length);\n        // Build the rewritten path relative to baseUrl\n        const rewritten = tsconfigPaths.baseUrl === '.'\n          ? 
targetPrefix + remainder\n          : tsconfigPaths.baseUrl + '/' + targetPrefix + remainder;\n\n        // Try direct resolution from repo root\n        const resolved = tryResolveWithExtensions(rewritten, allFiles);\n        if (resolved) return cache(resolved);\n\n        // Try suffix matching as fallback\n        const parts = rewritten.split('/').filter(Boolean);\n        const suffixResult = suffixResolve(parts, normalizedFileList, allFileList, index);\n        if (suffixResult) return cache(suffixResult);\n      }\n    }\n  }\n\n  // ---- Rust: convert module path syntax to file paths ----\n  if (language === SupportedLanguages.Rust) {\n    // Handle grouped imports: use crate::module::{Foo, Bar, Baz}\n    // Extract the prefix path before ::{...} and resolve the module, not the symbols\n    let rustImportPath = importPath;\n    const braceIdx = importPath.indexOf('::{');\n    if (braceIdx !== -1) {\n      rustImportPath = importPath.substring(0, braceIdx);\n    } else if (importPath.startsWith('{') && importPath.endsWith('}')) {\n      // Top-level grouped imports: use {crate::a, crate::b}\n      // Iterate each part and return the first that resolves. This function returns a single\n      // string, so callers that need ALL edges must intercept before reaching here (see the\n      // Rust grouped-import blocks in processImports / processImportsBatch). 
This fallback\n      // handles any path that reaches resolveImportPath directly.\n      const inner = importPath.slice(1, -1);\n      const parts = inner.split(',').map(p => p.trim()).filter(Boolean);\n      for (const part of parts) {\n        const partResult = resolveRustImport(currentFile, part, allFiles);\n        if (partResult) return cache(partResult);\n      }\n      return cache(null);\n    }\n\n    const rustResult = resolveRustImport(currentFile, rustImportPath, allFiles);\n    if (rustResult) return cache(rustResult);\n    // Fall through to generic resolution if Rust-specific didn't match\n  }\n\n  // ---- Generic relative import resolution (./ and ../) ----\n  const currentDir = currentFile.split('/').slice(0, -1);\n  const parts = importPath.split('/');\n\n  for (const part of parts) {\n    if (part === '.') continue;\n    if (part === '..') {\n      currentDir.pop();\n    } else {\n      currentDir.push(part);\n    }\n  }\n\n  const basePath = currentDir.join('/');\n\n  if (importPath.startsWith('.')) {\n    const resolved = tryResolveWithExtensions(basePath, allFiles);\n    return cache(resolved);\n  }\n\n  // ---- Generic package/absolute import resolution (suffix matching) ----\n  // Java wildcards are handled in processImports, not here\n  if (importPath.endsWith('.*')) {\n    return cache(null);\n  }\n\n  // C/C++ includes use actual file paths (e.g. \"animal.h\") — don't convert dots to slashes\n  const isCpp = language === SupportedLanguages.C || language === SupportedLanguages.CPlusPlus;\n  const pathLike = importPath.includes('/') || isCpp\n    ? importPath\n    : importPath.replace(/\\./g, '/');\n  const pathParts = pathLike.split('/').filter(Boolean);\n\n  const resolved = suffixResolve(pathParts, normalizedFileList, allFileList, index);\n  return cache(resolved);\n};\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/resolvers/utils.ts",
    "content": "/**\n * Shared utilities for import resolution.\n * Extracted from import-processor.ts to reduce file size.\n */\n\n/** All file extensions to try during resolution */\nexport const EXTENSIONS = [\n  '',\n  // TypeScript/JavaScript\n  '.tsx', '.ts', '.jsx', '.js', '/index.tsx', '/index.ts', '/index.jsx', '/index.js',\n  // Python\n  '.py', '/__init__.py',\n  // Java\n  '.java',\n  // Kotlin\n  '.kt', '.kts',\n  // C/C++\n  '.c', '.h', '.cpp', '.hpp', '.cc', '.cxx', '.hxx', '.hh',\n  // C#\n  '.cs',\n  // Go\n  '.go',\n  // Rust\n  '.rs', '/mod.rs',\n  // PHP\n  '.php', '.phtml',\n  // Swift\n  '.swift',\n  // Ruby\n  '.rb',\n];\n\n/**\n * Try to match a path (with extensions) against the known file set.\n * Returns the matched file path or null.\n */\nexport function tryResolveWithExtensions(\n  basePath: string,\n  allFiles: Set<string>,\n): string | null {\n  for (const ext of EXTENSIONS) {\n    const candidate = basePath + ext;\n    if (allFiles.has(candidate)) return candidate;\n  }\n  return null;\n}\n\n/**\n * Build a suffix index for O(1) endsWith lookups.\n * Maps every possible path suffix to its original file path.\n * e.g. 
for \"src/com/example/Foo.java\":\n *   \"Foo.java\" -> \"src/com/example/Foo.java\"\n *   \"example/Foo.java\" -> \"src/com/example/Foo.java\"\n *   \"com/example/Foo.java\" -> \"src/com/example/Foo.java\"\n *   etc.\n */\nexport interface SuffixIndex {\n  /** Exact suffix lookup (case-sensitive) */\n  get(suffix: string): string | undefined;\n  /** Case-insensitive suffix lookup */\n  getInsensitive(suffix: string): string | undefined;\n  /** Get all files in a directory suffix */\n  getFilesInDir(dirSuffix: string, extension: string): string[];\n}\n\nexport function buildSuffixIndex(normalizedFileList: string[], allFileList: string[]): SuffixIndex {\n  // Map: normalized suffix -> original file path\n  const exactMap = new Map<string, string>();\n  // Map: lowercase suffix -> original file path\n  const lowerMap = new Map<string, string>();\n  // Map: directory suffix -> list of file paths in that directory\n  const dirMap = new Map<string, string[]>();\n\n  for (let i = 0; i < normalizedFileList.length; i++) {\n    const normalized = normalizedFileList[i];\n    const original = allFileList[i];\n    const parts = normalized.split('/');\n\n    // Index all suffixes: \"a/b/c.java\" -> [\"c.java\", \"b/c.java\", \"a/b/c.java\"]\n    for (let j = parts.length - 1; j >= 0; j--) {\n      const suffix = parts.slice(j).join('/');\n      // Only store first match (longest path wins for ambiguous suffixes)\n      if (!exactMap.has(suffix)) {\n        exactMap.set(suffix, original);\n      }\n      const lower = suffix.toLowerCase();\n      if (!lowerMap.has(lower)) {\n        lowerMap.set(lower, original);\n      }\n    }\n\n    // Index directory membership\n    const lastSlash = normalized.lastIndexOf('/');\n    if (lastSlash >= 0) {\n      // Build all directory suffixes\n      const dirParts = parts.slice(0, -1);\n      const fileName = parts[parts.length - 1];\n      const ext = fileName.substring(fileName.lastIndexOf('.'));\n\n      for (let j = dirParts.length - 1; 
j >= 0; j--) {\n        const dirSuffix = dirParts.slice(j).join('/');\n        const key = `${dirSuffix}:${ext}`;\n        let list = dirMap.get(key);\n        if (!list) {\n          list = [];\n          dirMap.set(key, list);\n        }\n        list.push(original);\n      }\n    }\n  }\n\n  return {\n    get: (suffix: string) => exactMap.get(suffix),\n    getInsensitive: (suffix: string) => lowerMap.get(suffix.toLowerCase()),\n    getFilesInDir: (dirSuffix: string, extension: string) => {\n      return dirMap.get(`${dirSuffix}:${extension}`) || [];\n    },\n  };\n}\n\n/**\n * Suffix-based resolution using index. O(1) per lookup instead of O(files).\n */\nexport function suffixResolve(\n  pathParts: string[],\n  normalizedFileList: string[],\n  allFileList: string[],\n  index?: SuffixIndex,\n): string | null {\n  if (index) {\n    for (let i = 0; i < pathParts.length; i++) {\n      const suffix = pathParts.slice(i).join('/');\n      for (const ext of EXTENSIONS) {\n        const suffixWithExt = suffix + ext;\n        const result = index.get(suffixWithExt) || index.getInsensitive(suffixWithExt);\n        if (result) return result;\n      }\n    }\n    return null;\n  }\n\n  // Fallback: linear scan (for backward compatibility)\n  for (let i = 0; i < pathParts.length; i++) {\n    const suffix = pathParts.slice(i).join('/');\n    for (const ext of EXTENSIONS) {\n      const suffixWithExt = suffix + ext;\n      const suffixPattern = '/' + suffixWithExt;\n      const matchIdx = normalizedFileList.findIndex(filePath =>\n        filePath.endsWith(suffixPattern) || filePath.toLowerCase().endsWith(suffixPattern.toLowerCase())\n      );\n      if (matchIdx !== -1) {\n        return allFileList[matchIdx];\n      }\n    }\n  }\n  return null;\n}\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/structure-processor.ts",
    "content": "import { generateId } from \"../../lib/utils.js\";\nimport { KnowledgeGraph, GraphNode, GraphRelationship } from \"../graph/types.js\";\n\nexport const processStructure = ( graph: KnowledgeGraph, paths: string[])=>{\n    paths.forEach( path => {\n        const parts = path.split('/')\n        let currentPath = ''\n        let parentId = ''\n\n        parts.forEach( (part, index ) => {\n            const isFile = index === parts.length - 1\n            const label = isFile ? 'File' : 'Folder' \n\n            currentPath = currentPath ? `${currentPath}/${part}` : part\n\n            const nodeId=generateId(label, currentPath)\n\n            const node: GraphNode = {\n                id: nodeId,\n                label: label,\n                properties: {\n                    name: part,\n                    filePath: currentPath\n                }\n            }\n            graph.addNode(node)\n\n            if(parentId){\n                const relId = generateId('CONTAINS', `${parentId}->${nodeId}`)\n\n                const relationship: GraphRelationship={\n                    id: relId,\n                    type: 'CONTAINS',\n                    sourceId: parentId,\n                    targetId: nodeId,\n                    confidence: 1.0,\n                    reason: '',\n                }\n\n                graph.addRelationship(relationship)\n            }\n\n            parentId = nodeId\n\n        })\n    })\n}\n\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/symbol-table.ts",
    "content": "import type { NodeLabel } from '../graph/types.js';\n\nexport interface SymbolDefinition {\n  nodeId: string;\n  filePath: string;\n  type: NodeLabel;\n  parameterCount?: number;\n  /** Number of required (non-optional, non-default) parameters.\n   *  Enables range-based arity filtering: argCount >= requiredParameterCount && argCount <= parameterCount. */\n  requiredParameterCount?: number;\n  /** Per-parameter type names for overload disambiguation (e.g. ['int', 'String']).\n   *  Populated when parameter types are resolvable from AST (any typed language).\n   *  Used for disambiguation in overloading languages (Java, Kotlin, C#, C++). */\n  parameterTypes?: string[];\n  /** Raw return type text extracted from AST (e.g. 'User', 'Promise<User>') */\n  returnType?: string;\n  /** Declared type for non-callable symbols — fields/properties (e.g. 'Address', 'List<User>') */\n  declaredType?: string;\n  /** Links Method/Constructor/Property to owning Class/Struct/Trait nodeId */\n  ownerId?: string;\n}\n\nexport interface SymbolTable {\n  /**\n   * Register a new symbol definition\n   */\n  add: (\n    filePath: string,\n    name: string,\n    nodeId: string,\n    type: NodeLabel,\n    metadata?: { parameterCount?: number; requiredParameterCount?: number; parameterTypes?: string[]; returnType?: string; declaredType?: string; ownerId?: string }\n  ) => void;\n\n  /**\n   * High Confidence: Look for a symbol specifically inside a file\n   * Returns the Node ID if found\n   */\n  lookupExact: (filePath: string, name: string) => string | undefined;\n  \n  /**\n   * High Confidence: Look for a symbol in a specific file, returning full definition.\n   * Includes type information needed for heritage resolution (Class vs Interface).\n   * Returns first matching definition — use lookupExactAll for overloaded methods.\n   */\n  lookupExactFull: (filePath: string, name: string) => SymbolDefinition | undefined;\n\n  /**\n   * High Confidence: Look for ALL symbols 
with this name in a specific file.\n   * Returns all definitions, including overloaded methods with the same name.\n   * Used by resolution-context to pass all same-file overloads to candidate filtering.\n   */\n  lookupExactAll: (filePath: string, name: string) => SymbolDefinition[];\n\n  /**\n   * Low Confidence: Look for a symbol anywhere in the project\n   * Used when imports are missing or for framework magic\n   */\n  lookupFuzzy: (name: string) => SymbolDefinition[];\n\n  /**\n   * Low Confidence: Look for callable symbols (Function/Method/Constructor) by name.\n   * Faster than `lookupFuzzy` + filter — backed by a lazy callable-only index.\n   * Used by ReturnTypeLookup to resolve callee → return type.\n   */\n  lookupFuzzyCallable: (name: string) => SymbolDefinition[];\n\n  /**\n   * Look up a field/property by its owning class nodeId and field name.\n   * O(1) via dedicated eagerly-populated index keyed by `ownerNodeId\\0fieldName`.\n   * Returns undefined when no matching property exists or the owner is ambiguous.\n   */\n  lookupFieldByOwner: (ownerNodeId: string, fieldName: string) => SymbolDefinition | undefined;\n\n  /**\n   * Debugging: See how many symbols are tracked\n   */\n  getStats: () => { fileCount: number; globalSymbolCount: number };\n  \n  /**\n   * Cleanup memory\n   */\n  clear: () => void;\n}\n\nexport const createSymbolTable = (): SymbolTable => {\n  // 1. File-Specific Index — stores full SymbolDefinition(s) for O(1) lookup.\n  // Structure: FilePath -> (SymbolName -> SymbolDefinition[])\n  // Array allows overloaded methods (same name, different signatures) to coexist.\n  const fileIndex = new Map<string, Map<string, SymbolDefinition[]>>();\n\n  // 2. Global Reverse Index (The \"Backup\")\n  // Structure: SymbolName -> [List of Definitions]\n  const globalIndex = new Map<string, SymbolDefinition[]>();\n\n  // 3. 
Lazy Callable Index — populated on first lookupFuzzyCallable call.\n  // Structure: SymbolName -> [Callable Definitions]\n  // Only Function, Method, Constructor symbols are indexed.\n  let callableIndex: Map<string, SymbolDefinition[]> | null = null;\n\n  // 4. Eagerly-populated Field/Property Index — keyed by \"ownerNodeId\\0fieldName\".\n  // Only Property symbols with ownerId and declaredType are indexed.\n  const fieldByOwner = new Map<string, SymbolDefinition>();\n\n  const CALLABLE_TYPES = new Set(['Function', 'Method', 'Constructor']);\n\n  const add = (\n    filePath: string,\n    name: string,\n    nodeId: string,\n    type: NodeLabel,\n    metadata?: { parameterCount?: number; requiredParameterCount?: number; parameterTypes?: string[]; returnType?: string; declaredType?: string; ownerId?: string }\n  ) => {\n    const def: SymbolDefinition = {\n      nodeId,\n      filePath,\n      type,\n      ...(metadata?.parameterCount !== undefined ? { parameterCount: metadata.parameterCount } : {}),\n      ...(metadata?.requiredParameterCount !== undefined ? { requiredParameterCount: metadata.requiredParameterCount } : {}),\n      ...(metadata?.parameterTypes !== undefined ? { parameterTypes: metadata.parameterTypes } : {}),\n      ...(metadata?.returnType !== undefined ? { returnType: metadata.returnType } : {}),\n      ...(metadata?.declaredType !== undefined ? { declaredType: metadata.declaredType } : {}),\n      ...(metadata?.ownerId !== undefined ? { ownerId: metadata.ownerId } : {}),\n    };\n\n    // A. Add to File Index (shared reference — zero additional memory)\n    if (!fileIndex.has(filePath)) {\n      fileIndex.set(filePath, new Map());\n    }\n    const fileMap = fileIndex.get(filePath)!;\n    if (!fileMap.has(name)) {\n      fileMap.set(name, [def]);\n    } else {\n      fileMap.get(name)!.push(def);\n    }\n\n    // B. 
Properties go to fieldByOwner index only — skip globalIndex to prevent\n    // namespace pollution for common names like 'id', 'name', 'type'.\n    // Index ALL properties (even without declaredType) so write-access tracking\n    // can resolve field ownership for dynamically-typed languages (Ruby, JS).\n    if (type === 'Property' && metadata?.ownerId) {\n      fieldByOwner.set(`${metadata.ownerId}\\0${name}`, def);\n      // Still add to fileIndex above (for lookupExact), but skip globalIndex\n      return;\n    }\n\n    // C. Add to Global Index (same object reference)\n    if (!globalIndex.has(name)) {\n      globalIndex.set(name, []);\n    }\n    globalIndex.get(name)!.push(def);\n\n    // D. Invalidate the lazy callable index only when adding callable types\n    if (CALLABLE_TYPES.has(type)) {\n      callableIndex = null;\n    }\n  };\n\n  const lookupExact = (filePath: string, name: string): string | undefined => {\n    const defs = fileIndex.get(filePath)?.get(name);\n    return defs?.[0]?.nodeId;\n  };\n\n  const lookupExactFull = (filePath: string, name: string): SymbolDefinition | undefined => {\n    const defs = fileIndex.get(filePath)?.get(name);\n    return defs?.[0];\n  };\n\n  const lookupExactAll = (filePath: string, name: string): SymbolDefinition[] => {\n    return fileIndex.get(filePath)?.get(name) ?? [];\n  };\n\n  const lookupFuzzy = (name: string): SymbolDefinition[] => {\n    return globalIndex.get(name) || [];\n  };\n\n  const lookupFuzzyCallable = (name: string): SymbolDefinition[] => {\n    if (!callableIndex) {\n      // Build the callable index lazily on first use\n      callableIndex = new Map();\n      for (const [symName, defs] of globalIndex) {\n        const callables = defs.filter(d => CALLABLE_TYPES.has(d.type));\n        if (callables.length > 0) callableIndex.set(symName, callables);\n      }\n    }\n    return callableIndex.get(name) ?? 
[];\n  };\n\n  const lookupFieldByOwner = (ownerNodeId: string, fieldName: string): SymbolDefinition | undefined => {\n    return fieldByOwner.get(`${ownerNodeId}\\0${fieldName}`);\n  };\n\n  const getStats = () => ({\n    fileCount: fileIndex.size,\n    globalSymbolCount: globalIndex.size\n  });\n\n  const clear = () => {\n    fileIndex.clear();\n    globalIndex.clear();\n    callableIndex = null;\n    fieldByOwner.clear();\n  };\n\n  return { add, lookupExact, lookupExactFull, lookupExactAll, lookupFuzzy, lookupFuzzyCallable, lookupFieldByOwner, getStats, clear };\n};\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/tree-sitter-queries.ts",
    "content": "import { SupportedLanguages } from '../../config/supported-languages.js';\n\n/* \n * Tree-sitter queries for extracting code definitions.\n * \n * Note: Different grammars (typescript vs tsx vs javascript) may have\n * slightly different node types. These queries are designed to be \n * compatible with the standard tree-sitter grammars.\n */\n\n// TypeScript queries - works with tree-sitter-typescript\nexport const TYPESCRIPT_QUERIES = `\n(class_declaration\n  name: (type_identifier) @name) @definition.class\n\n(interface_declaration\n  name: (type_identifier) @name) @definition.interface\n\n(function_declaration\n  name: (identifier) @name) @definition.function\n\n; TypeScript overload signatures (function_signature is a separate node type from function_declaration)\n(function_signature\n  name: (identifier) @name) @definition.function\n\n(method_definition\n  name: (property_identifier) @name) @definition.method\n\n(lexical_declaration\n  (variable_declarator\n    name: (identifier) @name\n    value: (arrow_function))) @definition.function\n\n(lexical_declaration\n  (variable_declarator\n    name: (identifier) @name\n    value: (function_expression))) @definition.function\n\n(export_statement\n  declaration: (lexical_declaration\n    (variable_declarator\n      name: (identifier) @name\n      value: (arrow_function)))) @definition.function\n\n(export_statement\n  declaration: (lexical_declaration\n    (variable_declarator\n      name: (identifier) @name\n      value: (function_expression)))) @definition.function\n\n(import_statement\n  source: (string) @import.source) @import\n\n; Re-export statements: export { X } from './y'\n(export_statement\n  source: (string) @import.source) @import\n\n(call_expression\n  function: (identifier) @call.name) @call\n\n(call_expression\n  function: (member_expression\n    property: (property_identifier) @call.name)) @call\n\n; Constructor calls: new Foo()\n(new_expression\n  constructor: (identifier) @call.name) 
@call\n\n; Class properties — public_field_definition covers most TS class fields\n(public_field_definition\n  name: (property_identifier) @name) @definition.property\n\n; Private class fields: #address: Address\n(public_field_definition\n  name: (private_property_identifier) @name) @definition.property\n\n; Constructor parameter properties: constructor(public address: Address)\n(required_parameter\n  (accessibility_modifier)\n  pattern: (identifier) @name) @definition.property\n\n; Heritage queries - class extends\n(class_declaration\n  name: (type_identifier) @heritage.class\n  (class_heritage\n    (extends_clause\n      value: (identifier) @heritage.extends))) @heritage\n\n; Heritage queries - class implements interface\n(class_declaration\n  name: (type_identifier) @heritage.class\n  (class_heritage\n    (implements_clause\n      (type_identifier) @heritage.implements))) @heritage.impl\n\n; Write access: obj.field = value\n(assignment_expression\n  left: (member_expression\n    object: (_) @assignment.receiver\n    property: (property_identifier) @assignment.property)\n  right: (_)) @assignment\n\n; Write access: obj.field += value (compound assignment)\n(augmented_assignment_expression\n  left: (member_expression\n    object: (_) @assignment.receiver\n    property: (property_identifier) @assignment.property)\n  right: (_)) @assignment\n`;\n\n// JavaScript queries - works with tree-sitter-javascript\nexport const JAVASCRIPT_QUERIES = `\n(class_declaration\n  name: (identifier) @name) @definition.class\n\n(function_declaration\n  name: (identifier) @name) @definition.function\n\n(method_definition\n  name: (property_identifier) @name) @definition.method\n\n(lexical_declaration\n  (variable_declarator\n    name: (identifier) @name\n    value: (arrow_function))) @definition.function\n\n(lexical_declaration\n  (variable_declarator\n    name: (identifier) @name\n    value: (function_expression))) @definition.function\n\n(export_statement\n  declaration: 
(lexical_declaration\n    (variable_declarator\n      name: (identifier) @name\n      value: (arrow_function)))) @definition.function\n\n(export_statement\n  declaration: (lexical_declaration\n    (variable_declarator\n      name: (identifier) @name\n      value: (function_expression)))) @definition.function\n\n(import_statement\n  source: (string) @import.source) @import\n\n; Re-export statements: export { X } from './y'\n(export_statement\n  source: (string) @import.source) @import\n\n(call_expression\n  function: (identifier) @call.name) @call\n\n(call_expression\n  function: (member_expression\n    property: (property_identifier) @call.name)) @call\n\n; Constructor calls: new Foo()\n(new_expression\n  constructor: (identifier) @call.name) @call\n\n; Class fields — field_definition captures JS class fields (class User { address = ... })\n(field_definition\n  property: (property_identifier) @name) @definition.property\n\n; Heritage queries - class extends (JavaScript uses different AST than TypeScript)\n; In tree-sitter-javascript, class_heritage directly contains the parent identifier\n(class_declaration\n  name: (identifier) @heritage.class\n  (class_heritage\n    (identifier) @heritage.extends)) @heritage\n\n; Write access: obj.field = value\n(assignment_expression\n  left: (member_expression\n    object: (_) @assignment.receiver\n    property: (property_identifier) @assignment.property)\n  right: (_)) @assignment\n\n; Write access: obj.field += value (compound assignment)\n(augmented_assignment_expression\n  left: (member_expression\n    object: (_) @assignment.receiver\n    property: (property_identifier) @assignment.property)\n  right: (_)) @assignment\n`;\n\n// Python queries - works with tree-sitter-python\nexport const PYTHON_QUERIES = `\n(class_definition\n  name: (identifier) @name) @definition.class\n\n(function_definition\n  name: (identifier) @name) @definition.function\n\n(import_statement\n  name: (dotted_name) @import.source) 
@import\n\n(import_from_statement\n  module_name: (dotted_name) @import.source) @import\n\n(import_from_statement\n  module_name: (relative_import) @import.source) @import\n\n(call\n  function: (identifier) @call.name) @call\n\n(call\n  function: (attribute\n    attribute: (identifier) @call.name)) @call\n\n; Class attribute type annotations — PEP 526: address: Address or address: Address = Address()\n; Both bare annotations (address: Address) and annotated assignments (name: str = \"test\")\n; are parsed as (assignment left: ... type: ...) in tree-sitter-python.\n(expression_statement\n  (assignment\n    left: (identifier) @name\n    type: (type)) @definition.property)\n\n; Heritage queries - Python class inheritance\n(class_definition\n  name: (identifier) @heritage.class\n  superclasses: (argument_list\n    (identifier) @heritage.extends)) @heritage\n\n; Write access: obj.field = value\n(assignment\n  left: (attribute\n    object: (_) @assignment.receiver\n    attribute: (identifier) @assignment.property)\n  right: (_)) @assignment\n\n; Write access: obj.field += value (compound assignment)\n(augmented_assignment\n  left: (attribute\n    object: (_) @assignment.receiver\n    attribute: (identifier) @assignment.property)\n  right: (_)) @assignment\n`;\n\n// Java queries - works with tree-sitter-java\nexport const JAVA_QUERIES = `\n; Classes, Interfaces, Enums, Annotations\n(class_declaration name: (identifier) @name) @definition.class\n(interface_declaration name: (identifier) @name) @definition.interface\n(enum_declaration name: (identifier) @name) @definition.enum\n(annotation_type_declaration name: (identifier) @name) @definition.annotation\n\n; Methods & Constructors\n(method_declaration name: (identifier) @name) @definition.method\n(constructor_declaration name: (identifier) @name) @definition.constructor\n\n; Fields — typed field declarations inside class bodies\n(field_declaration\n  declarator: (variable_declarator\n    name: (identifier) @name)) 
@definition.property\n\n; Imports - capture any import declaration child as source\n(import_declaration (_) @import.source) @import\n\n; Calls\n(method_invocation name: (identifier) @call.name) @call\n(method_invocation object: (_) name: (identifier) @call.name) @call\n\n; Constructor calls: new Foo()\n(object_creation_expression type: (type_identifier) @call.name) @call\n\n; Heritage - extends class\n(class_declaration name: (identifier) @heritage.class\n  (superclass (type_identifier) @heritage.extends)) @heritage\n\n; Heritage - implements interfaces\n(class_declaration name: (identifier) @heritage.class\n  (super_interfaces (type_list (type_identifier) @heritage.implements))) @heritage.impl\n\n; Write access: obj.field = value\n(assignment_expression\n  left: (field_access\n    object: (_) @assignment.receiver\n    field: (identifier) @assignment.property)\n  right: (_)) @assignment\n`;\n\n// C queries - works with tree-sitter-c\nexport const C_QUERIES = `\n; Functions (direct declarator)\n(function_definition declarator: (function_declarator declarator: (identifier) @name)) @definition.function\n(declaration declarator: (function_declarator declarator: (identifier) @name)) @definition.function\n\n; Functions returning pointers (pointer_declarator wraps function_declarator)\n(function_definition declarator: (pointer_declarator declarator: (function_declarator declarator: (identifier) @name))) @definition.function\n(declaration declarator: (pointer_declarator declarator: (function_declarator declarator: (identifier) @name))) @definition.function\n\n; Functions returning double pointers (nested pointer_declarator)\n(function_definition declarator: (pointer_declarator declarator: (pointer_declarator declarator: (function_declarator declarator: (identifier) @name)))) @definition.function\n\n; Structs, Unions, Enums, Typedefs\n(struct_specifier name: (type_identifier) @name) @definition.struct\n(union_specifier name: (type_identifier) @name) 
@definition.union\n(enum_specifier name: (type_identifier) @name) @definition.enum\n(type_definition declarator: (type_identifier) @name) @definition.typedef\n\n; Macros\n(preproc_function_def name: (identifier) @name) @definition.macro\n(preproc_def name: (identifier) @name) @definition.macro\n\n; Includes\n(preproc_include path: (_) @import.source) @import\n\n; Calls\n(call_expression function: (identifier) @call.name) @call\n(call_expression function: (field_expression field: (field_identifier) @call.name)) @call\n`;\n\n// Go queries - works with tree-sitter-go\nexport const GO_QUERIES = `\n; Functions & Methods\n(function_declaration name: (identifier) @name) @definition.function\n(method_declaration name: (field_identifier) @name) @definition.method\n\n; Types\n(type_declaration (type_spec name: (type_identifier) @name type: (struct_type))) @definition.struct\n(type_declaration (type_spec name: (type_identifier) @name type: (interface_type))) @definition.interface\n\n; Imports\n(import_declaration (import_spec path: (interpreted_string_literal) @import.source)) @import\n(import_declaration (import_spec_list (import_spec path: (interpreted_string_literal) @import.source))) @import\n\n; Struct fields — named field declarations inside struct types\n(field_declaration_list\n  (field_declaration\n    name: (field_identifier) @name) @definition.property)\n\n; Struct embedding (anonymous fields = inheritance)\n(type_declaration\n  (type_spec\n    name: (type_identifier) @heritage.class\n    type: (struct_type\n      (field_declaration_list\n        (field_declaration\n          type: (type_identifier) @heritage.extends))))) @definition.struct\n\n; Calls\n(call_expression function: (identifier) @call.name) @call\n(call_expression function: (selector_expression field: (field_identifier) @call.name)) @call\n\n; Struct literal construction: User{Name: \"Alice\"}\n(composite_literal type: (type_identifier) @call.name) @call\n\n; Write access: obj.field = 
value\n(assignment_statement\n  left: (expression_list\n    (selector_expression\n      operand: (_) @assignment.receiver\n      field: (field_identifier) @assignment.property))\n  right: (_)) @assignment\n\n; Write access: obj.field++ / obj.field--\n(inc_statement\n  (selector_expression\n    operand: (_) @assignment.receiver\n    field: (field_identifier) @assignment.property)) @assignment\n(dec_statement\n  (selector_expression\n    operand: (_) @assignment.receiver\n    field: (field_identifier) @assignment.property)) @assignment\n`;\n\n// C++ queries - works with tree-sitter-cpp\nexport const CPP_QUERIES = `\n; Classes, Structs, Namespaces\n(class_specifier name: (type_identifier) @name) @definition.class\n(struct_specifier name: (type_identifier) @name) @definition.struct\n(namespace_definition name: (namespace_identifier) @name) @definition.namespace\n(enum_specifier name: (type_identifier) @name) @definition.enum\n\n; Typedefs and unions (common in C-style headers and mixed C/C++ code)\n(type_definition declarator: (type_identifier) @name) @definition.typedef\n(union_specifier name: (type_identifier) @name) @definition.union\n\n; Macros\n(preproc_function_def name: (identifier) @name) @definition.macro\n(preproc_def name: (identifier) @name) @definition.macro\n\n; Functions & Methods (direct declarator)\n(function_definition declarator: (function_declarator declarator: (identifier) @name)) @definition.function\n(function_definition declarator: (function_declarator declarator: (qualified_identifier name: (identifier) @name))) @definition.method\n\n; Functions/methods returning pointers (pointer_declarator wraps function_declarator)\n(function_definition declarator: (pointer_declarator declarator: (function_declarator declarator: (identifier) @name))) @definition.function\n(function_definition declarator: (pointer_declarator declarator: (function_declarator declarator: (qualified_identifier name: (identifier) @name)))) @definition.method\n\n; 
Functions/methods returning double pointers (nested pointer_declarator)\n(function_definition declarator: (pointer_declarator declarator: (pointer_declarator declarator: (function_declarator declarator: (identifier) @name)))) @definition.function\n(function_definition declarator: (pointer_declarator declarator: (pointer_declarator declarator: (function_declarator declarator: (qualified_identifier name: (identifier) @name))))) @definition.method\n\n; Functions/methods returning references (reference_declarator wraps function_declarator)\n(function_definition declarator: (reference_declarator (function_declarator declarator: (identifier) @name))) @definition.function\n(function_definition declarator: (reference_declarator (function_declarator declarator: (qualified_identifier name: (identifier) @name)))) @definition.method\n\n; Destructors (destructor_name is distinct from identifier in tree-sitter-cpp)\n(function_definition declarator: (function_declarator declarator: (qualified_identifier name: (destructor_name) @name))) @definition.method\n\n; Function declarations / prototypes (common in headers)\n(declaration declarator: (function_declarator declarator: (identifier) @name)) @definition.function\n(declaration declarator: (pointer_declarator declarator: (function_declarator declarator: (identifier) @name))) @definition.function\n\n; Class/struct data member fields (Address address; int count;)\n; Uses field_identifier to exclude method declarations (which use function_declarator)\n(field_declaration\n  declarator: (field_identifier) @name) @definition.property\n\n; Pointer member fields (Address* address;)\n(field_declaration\n  declarator: (pointer_declarator\n    declarator: (field_identifier) @name)) @definition.property\n\n; Reference member fields (Address& address;)\n(field_declaration\n  declarator: (reference_declarator\n    (field_identifier) @name)) @definition.property\n\n; Inline class method declarations (inside class body, no body: void 
Foo();)\n(field_declaration declarator: (function_declarator declarator: (identifier) @name)) @definition.method\n\n; Inline class method definitions (inside class body, with body: void Foo() { ... })\n(field_declaration_list\n  (function_definition\n    declarator: (function_declarator\n      declarator: [(field_identifier) (identifier) (operator_name) (destructor_name)] @name)) @definition.method)\n\n; Inline class methods returning a pointer type (User* lookup(int id) { ... })\n(field_declaration_list\n  (function_definition\n    declarator: (pointer_declarator\n      declarator: (function_declarator\n        declarator: [(field_identifier) (identifier) (operator_name)] @name))) @definition.method)\n\n; Inline class methods returning a reference type (User& lookup(int id) { ... })\n(field_declaration_list\n  (function_definition\n    declarator: (reference_declarator\n      (function_declarator\n        declarator: [(field_identifier) (identifier) (operator_name)] @name))) @definition.method)\n\n; Templates\n(template_declaration (class_specifier name: (type_identifier) @name)) @definition.template\n(template_declaration (function_definition declarator: (function_declarator declarator: (identifier) @name))) @definition.template\n\n; Includes\n(preproc_include path: (_) @import.source) @import\n\n; Calls\n(call_expression function: (identifier) @call.name) @call\n(call_expression function: (field_expression field: (field_identifier) @call.name)) @call\n(call_expression function: (qualified_identifier name: (identifier) @call.name)) @call\n(call_expression function: (template_function name: (identifier) @call.name)) @call\n\n; Constructor calls: new User()\n(new_expression type: (type_identifier) @call.name) @call\n\n; Heritage\n(class_specifier name: (type_identifier) @heritage.class\n  (base_class_clause (type_identifier) @heritage.extends)) @heritage\n(class_specifier name: (type_identifier) @heritage.class\n  (base_class_clause (access_specifier) 
(type_identifier) @heritage.extends)) @heritage\n\n; Write access: obj.field = value\n(assignment_expression\n  left: (field_expression\n    argument: (_) @assignment.receiver\n    field: (field_identifier) @assignment.property)\n  right: (_)) @assignment\n\n`;\n\n// C# queries - works with tree-sitter-c-sharp\nexport const CSHARP_QUERIES = `\n; Types\n(class_declaration name: (identifier) @name) @definition.class\n(interface_declaration name: (identifier) @name) @definition.interface\n(struct_declaration name: (identifier) @name) @definition.struct\n(enum_declaration name: (identifier) @name) @definition.enum\n(record_declaration name: (identifier) @name) @definition.record\n(delegate_declaration name: (identifier) @name) @definition.delegate\n\n; Namespaces (block form and C# 10+ file-scoped form)\n(namespace_declaration name: (identifier) @name) @definition.namespace\n(namespace_declaration name: (qualified_name) @name) @definition.namespace\n(file_scoped_namespace_declaration name: (identifier) @name) @definition.namespace\n(file_scoped_namespace_declaration name: (qualified_name) @name) @definition.namespace\n\n; Methods & Properties\n(method_declaration name: (identifier) @name) @definition.method\n(local_function_statement name: (identifier) @name) @definition.function\n(constructor_declaration name: (identifier) @name) @definition.constructor\n(property_declaration name: (identifier) @name) @definition.property\n\n; Primary constructors (C# 12): class User(string name, int age) { }\n(class_declaration name: (identifier) @name (parameter_list) @definition.constructor)\n(record_declaration name: (identifier) @name (parameter_list) @definition.constructor)\n\n; Using\n(using_directive (qualified_name) @import.source) @import\n(using_directive (identifier) @import.source) @import\n\n; Calls\n(invocation_expression function: (identifier) @call.name) @call\n(invocation_expression function: (member_access_expression name: (identifier) @call.name)) @call\n\n; 
Null-conditional method calls: user?.Save()\n; Parses as: invocation_expression → conditional_access_expression → member_binding_expression → identifier\n(invocation_expression\n  function: (conditional_access_expression\n    (member_binding_expression\n      (identifier) @call.name))) @call\n\n; Constructor calls: new Foo() and new Foo { Props }\n(object_creation_expression type: (identifier) @call.name) @call\n\n; Target-typed new (C# 9): User u = new(\"x\", 5)\n(variable_declaration type: (identifier) @call.name (variable_declarator (implicit_object_creation_expression) @call))\n\n; Heritage\n(class_declaration name: (identifier) @heritage.class\n  (base_list (identifier) @heritage.extends)) @heritage\n(class_declaration name: (identifier) @heritage.class\n  (base_list (generic_name (identifier) @heritage.extends))) @heritage\n\n; Write access: obj.field = value\n(assignment_expression\n  left: (member_access_expression\n    expression: (_) @assignment.receiver\n    name: (identifier) @assignment.property)\n  right: (_)) @assignment\n`;\n\n// Rust queries - works with tree-sitter-rust\nexport const RUST_QUERIES = `\n; Functions & Items\n(function_item name: (identifier) @name) @definition.function\n(struct_item name: (type_identifier) @name) @definition.struct\n(enum_item name: (type_identifier) @name) @definition.enum\n(trait_item name: (type_identifier) @name) @definition.trait\n(impl_item type: (type_identifier) @name !trait) @definition.impl\n(impl_item type: (generic_type type: (type_identifier) @name) !trait) @definition.impl\n(mod_item name: (identifier) @name) @definition.module\n\n; Type aliases, const, static, macros\n(type_item name: (type_identifier) @name) @definition.type\n(const_item name: (identifier) @name) @definition.const\n(static_item name: (identifier) @name) @definition.static\n(macro_definition name: (identifier) @name) @definition.macro\n\n; Use statements\n(use_declaration argument: (_) @import.source) @import\n\n; 
Calls\n(call_expression function: (identifier) @call.name) @call\n(call_expression function: (field_expression field: (field_identifier) @call.name)) @call\n(call_expression function: (scoped_identifier name: (identifier) @call.name)) @call\n(call_expression function: (generic_function function: (identifier) @call.name)) @call\n\n; Struct literal construction: User { name: value }\n(struct_expression name: (type_identifier) @call.name) @call\n\n; Struct fields — named field declarations inside struct bodies\n(field_declaration_list\n  (field_declaration\n    name: (field_identifier) @name) @definition.property)\n\n; Heritage (trait implementation) — all combinations of concrete/generic trait × concrete/generic type\n(impl_item trait: (type_identifier) @heritage.trait type: (type_identifier) @heritage.class) @heritage\n(impl_item trait: (generic_type type: (type_identifier) @heritage.trait) type: (type_identifier) @heritage.class) @heritage\n(impl_item trait: (type_identifier) @heritage.trait type: (generic_type type: (type_identifier) @heritage.class)) @heritage\n(impl_item trait: (generic_type type: (type_identifier) @heritage.trait) type: (generic_type type: (type_identifier) @heritage.class)) @heritage\n\n; Write access: obj.field = value\n(assignment_expression\n  left: (field_expression\n    value: (_) @assignment.receiver\n    field: (field_identifier) @assignment.property)\n  right: (_)) @assignment\n\n; Write access: obj.field += value (compound assignment)\n(compound_assignment_expr\n  left: (field_expression\n    value: (_) @assignment.receiver\n    field: (field_identifier) @assignment.property)\n  right: (_)) @assignment\n`;\n\n// PHP queries - works with tree-sitter-php (php_only grammar)\nexport const PHP_QUERIES = `\n; ── Namespace ────────────────────────────────────────────────────────────────\n(namespace_definition\n  name: (namespace_name) @name) @definition.namespace\n\n; ── Classes 
──────────────────────────────────────────────────────────────────\n(class_declaration\n  name: (name) @name) @definition.class\n\n; ── Interfaces ───────────────────────────────────────────────────────────────\n(interface_declaration\n  name: (name) @name) @definition.interface\n\n; ── Traits ───────────────────────────────────────────────────────────────────\n(trait_declaration\n  name: (name) @name) @definition.trait\n\n; ── Enums (PHP 8.1) ──────────────────────────────────────────────────────────\n(enum_declaration\n  name: (name) @name) @definition.enum\n\n; ── Top-level functions ───────────────────────────────────────────────────────\n(function_definition\n  name: (name) @name) @definition.function\n\n; ── Methods (including constructors) ─────────────────────────────────────────\n(method_declaration\n  name: (name) @name) @definition.method\n\n; ── Class properties (including Eloquent $fillable, $casts, etc.) ────────────\n(property_declaration\n  (property_element\n    (variable_name\n      (name) @name))) @definition.property\n\n; Constructor property promotion (PHP 8.0+: public Address $address in __construct)\n(method_declaration\n  parameters: (formal_parameters\n    (property_promotion_parameter\n      name: (variable_name\n        (name) @name)))) @definition.property\n\n; ── Imports: use statements ──────────────────────────────────────────────────\n; Simple: use App\\\\Models\\\\User;\n(namespace_use_declaration\n  (namespace_use_clause\n    (qualified_name) @import.source)) @import\n\n; ── Function/method calls ────────────────────────────────────────────────────\n; Regular function call: foo()\n(function_call_expression\n  function: (name) @call.name) @call\n\n; Method call: $obj->method()\n(member_call_expression\n  name: (name) @call.name) @call\n\n; Nullsafe method call: $obj?->method()\n(nullsafe_member_call_expression\n  name: (name) @call.name) @call\n\n; Static call: Foo::bar() (php_only uses 
scoped_call_expression)\n(scoped_call_expression\n  name: (name) @call.name) @call\n\n; Constructor call: new User()\n(object_creation_expression (name) @call.name) @call\n\n; ── Heritage: extends ────────────────────────────────────────────────────────\n(class_declaration\n  name: (name) @heritage.class\n  (base_clause\n    [(name) (qualified_name)] @heritage.extends)) @heritage\n\n; ── Heritage: implements ─────────────────────────────────────────────────────\n(class_declaration\n  name: (name) @heritage.class\n  (class_interface_clause\n    [(name) (qualified_name)] @heritage.implements)) @heritage.impl\n\n; ── Heritage: use trait (must capture enclosing class name) ──────────────────\n(class_declaration\n  name: (name) @heritage.class\n  body: (declaration_list\n    (use_declaration\n      [(name) (qualified_name)] @heritage.trait))) @heritage\n\n; Write access: $obj->field = value\n(assignment_expression\n  left: (member_access_expression\n    object: (_) @assignment.receiver\n    name: (name) @assignment.property)\n  right: (_)) @assignment\n\n; Write access: ClassName::$field = value (static property)\n(assignment_expression\n  left: (scoped_property_access_expression\n    scope: (_) @assignment.receiver\n    name: (variable_name (name) @assignment.property))\n  right: (_)) @assignment\n`;\n\n// Ruby queries - works with tree-sitter-ruby\n// NOTE: Ruby uses `call` for require, include, extend, prepend, attr_* etc.\n// These are all captured as @call and routed in JS post-processing:\n//   - require/require_relative → import extraction\n//   - include/extend/prepend → heritage (mixin) extraction\n//   - attr_accessor/attr_reader/attr_writer → property definition extraction\n//   - everything else → regular call extraction\nexport const RUBY_QUERIES = `\n; ── Modules ──────────────────────────────────────────────────────────────────\n(module\n  name: (constant) @name) @definition.module\n\n; ── Classes 
──────────────────────────────────────────────────────────────────\n(class\n  name: (constant) @name) @definition.class\n\n; ── Instance methods ─────────────────────────────────────────────────────────\n(method\n  name: (identifier) @name) @definition.method\n\n; ── Singleton (class-level) methods ──────────────────────────────────────────\n(singleton_method\n  name: (identifier) @name) @definition.method\n\n; ── All calls (require, include, attr_*, and regular calls routed in JS) ─────\n(call\n  method: (identifier) @call.name) @call\n\n; ── Bare calls without parens (identifiers at statement level are method calls) ─\n; NOTE: This may over-capture variable reads as calls (e.g. 'result' at\n; statement level). Ruby's grammar makes bare identifiers ambiguous — they\n; could be local variables or zero-arity method calls. Post-processing via\n; isBuiltInOrNoise and symbol resolution filtering suppresses most false\n; positives, but a variable name that coincidentally matches a method name\n; elsewhere may produce a false CALLS edge.\n(body_statement\n  (identifier) @call.name @call)\n\n; ── Heritage: class < SuperClass ─────────────────────────────────────────────\n(class\n  name: (constant) @heritage.class\n  superclass: (superclass\n    (constant) @heritage.extends)) @heritage\n\n; Write access: obj.field = value (Ruby setter — syntactically a method call to field=)\n(assignment\n  left: (call\n    receiver: (_) @assignment.receiver\n    method: (identifier) @assignment.property)\n  right: (_)) @assignment\n\n; Write access: obj.field += value (compound assignment — operator_assignment node, not assignment)\n(operator_assignment\n  left: (call\n    receiver: (_) @assignment.receiver\n    method: (identifier) @assignment.property)\n  right: (_)) @assignment\n`;\n\n// Kotlin queries - works with tree-sitter-kotlin (fwcd/tree-sitter-kotlin)\n// Based on official tags.scm; functions use simple_identifier, classes use type_identifier\nexport const KOTLIN_QUERIES = `\n; 
── Interfaces ─────────────────────────────────────────────────────────────\n; tree-sitter-kotlin (fwcd) has no interface_declaration node type.\n; Interfaces are class_declaration nodes with an anonymous \"interface\" keyword child.\n(class_declaration\n  \"interface\"\n  (type_identifier) @name) @definition.interface\n\n; ── Classes (regular, data, sealed, enum) ────────────────────────────────\n; All have the anonymous \"class\" keyword child. enum class has both\n; \"enum\" and \"class\" children — the \"class\" child still matches.\n(class_declaration\n  \"class\"\n  (type_identifier) @name) @definition.class\n\n; ── Object declarations (Kotlin singletons) ──────────────────────────────\n(object_declaration\n  (type_identifier) @name) @definition.class\n\n; ── Companion objects (named only) ───────────────────────────────────────\n(companion_object\n  (type_identifier) @name) @definition.class\n\n; ── Functions (top-level, member, extension) ──────────────────────────────\n(function_declaration\n  (simple_identifier) @name) @definition.function\n\n; ── Properties ───────────────────────────────────────────────────────────\n(property_declaration\n  (variable_declaration\n    (simple_identifier) @name)) @definition.property\n\n; Primary constructor val/var parameters (data class, value class, regular class)\n; binding_pattern_kind contains \"val\" or \"var\" — without it, the param is not a property\n(class_parameter\n  (binding_pattern_kind)\n  (simple_identifier) @name) @definition.property\n\n; ── Enum entries ─────────────────────────────────────────────────────────\n(enum_entry\n  (simple_identifier) @name) @definition.enum\n\n; ── Type aliases ─────────────────────────────────────────────────────────\n(type_alias\n  (type_identifier) @name) @definition.type\n\n; ── Imports ──────────────────────────────────────────────────────────────\n(import_header\n  (identifier) @import.source) @import\n\n; ── Function calls (direct) 
──────────────────────────────────────────────\n(call_expression\n  (simple_identifier) @call.name) @call\n\n; ── Method calls (via navigation: obj.method()) ──────────────────────────\n(call_expression\n  (navigation_expression\n    (navigation_suffix\n      (simple_identifier) @call.name))) @call\n\n; ── Constructor invocations ──────────────────────────────────────────────\n(constructor_invocation\n  (user_type\n    (type_identifier) @call.name)) @call\n\n; ── Infix function calls (e.g., a to b, x until y) ──────────────────────\n(infix_expression\n  (simple_identifier) @call.name) @call\n\n; ── Heritage: extends / implements via delegation_specifier ──────────────\n; Interface implementation (bare user_type): class Foo : Bar\n(class_declaration\n  (type_identifier) @heritage.class\n  (delegation_specifier\n    (user_type (type_identifier) @heritage.extends))) @heritage\n\n; Class extension (constructor_invocation): class Foo : Bar()\n(class_declaration\n  (type_identifier) @heritage.class\n  (delegation_specifier\n    (constructor_invocation\n      (user_type (type_identifier) @heritage.extends)))) @heritage\n\n; Write access: obj.field = value\n(assignment\n  (directly_assignable_expression\n    (_) @assignment.receiver\n    (navigation_suffix\n      (simple_identifier) @assignment.property))\n  (_)) @assignment\n\n`;\n\n// Swift queries - works with tree-sitter-swift\nexport const SWIFT_QUERIES = `\n; Classes\n(class_declaration \"class\" name: (type_identifier) @name) @definition.class\n\n; Structs\n(class_declaration \"struct\" name: (type_identifier) @name) @definition.struct\n\n; Enums\n(class_declaration \"enum\" name: (type_identifier) @name) @definition.enum\n\n; Extensions (mapped to class — no dedicated label in schema)\n(class_declaration \"extension\" name: (user_type (type_identifier) @name)) @definition.class\n\n; Actors\n(class_declaration \"actor\" name: (type_identifier) @name) @definition.class\n\n; Protocols (mapped to 
interface)\n(protocol_declaration name: (type_identifier) @name) @definition.interface\n\n; Type aliases\n(typealias_declaration name: (type_identifier) @name) @definition.type\n\n; Functions (top-level and methods)\n(function_declaration name: (simple_identifier) @name) @definition.function\n\n; Protocol method declarations\n(protocol_function_declaration name: (simple_identifier) @name) @definition.method\n\n; Initializers\n(init_declaration) @definition.constructor\n\n; Properties (stored and computed)\n(property_declaration (pattern (simple_identifier) @name)) @definition.property\n\n; Imports\n(import_declaration (identifier (simple_identifier) @import.source)) @import\n\n; Calls - direct function calls\n(call_expression (simple_identifier) @call.name) @call\n\n; Calls - member/navigation calls (obj.method())\n(call_expression (navigation_expression (navigation_suffix (simple_identifier) @call.name))) @call\n\n; Heritage - class/struct/enum inheritance and protocol conformance\n(class_declaration name: (type_identifier) @heritage.class\n  (inheritance_specifier inherits_from: (user_type (type_identifier) @heritage.extends))) @heritage\n\n; Heritage - protocol inheritance\n(protocol_declaration name: (type_identifier) @heritage.class\n  (inheritance_specifier inherits_from: (user_type (type_identifier) @heritage.extends))) @heritage\n\n; Heritage - extension protocol conformance (e.g. 
extension Foo: SomeProtocol)\n; Extensions wrap the name in user_type unlike class/struct/enum declarations\n(class_declaration \"extension\" name: (user_type (type_identifier) @heritage.class)\n  (inheritance_specifier inherits_from: (user_type (type_identifier) @heritage.extends))) @heritage\n\n; Write access: obj.field = value\n(assignment\n  (directly_assignable_expression\n    (_) @assignment.receiver\n    (navigation_suffix\n      (simple_identifier) @assignment.property))\n  (_)) @assignment\n\n`;\n\nexport const LANGUAGE_QUERIES: Record<SupportedLanguages, string> = {\n  [SupportedLanguages.TypeScript]: TYPESCRIPT_QUERIES,\n  [SupportedLanguages.JavaScript]: JAVASCRIPT_QUERIES,\n  [SupportedLanguages.Python]: PYTHON_QUERIES,\n  [SupportedLanguages.Java]: JAVA_QUERIES,\n  [SupportedLanguages.C]: C_QUERIES,\n  [SupportedLanguages.Go]: GO_QUERIES,\n  [SupportedLanguages.CPlusPlus]: CPP_QUERIES,\n  [SupportedLanguages.CSharp]: CSHARP_QUERIES,\n  [SupportedLanguages.Ruby]: RUBY_QUERIES,\n  [SupportedLanguages.Rust]: RUST_QUERIES,\n  [SupportedLanguages.PHP]: PHP_QUERIES,\n  [SupportedLanguages.Kotlin]: KOTLIN_QUERIES,\n  [SupportedLanguages.Swift]: SWIFT_QUERIES,\n};\n "
  },
  {
    "path": "gitnexus/src/core/ingestion/type-env.ts",
    "content": "import type { SyntaxNode } from './utils.js';\nimport { FUNCTION_NODE_TYPES, extractFunctionName, CLASS_CONTAINER_TYPES, CALL_EXPRESSION_TYPES, isBuiltInOrNoise } from './utils.js';\nimport { SupportedLanguages } from '../../config/supported-languages.js';\nimport { typeConfigs, TYPED_PARAMETER_TYPES } from './type-extractors/index.js';\nimport type { ClassNameLookup, ReturnTypeLookup, ForLoopExtractorContext, PendingAssignment } from './type-extractors/types.js';\nimport { extractSimpleTypeName, extractVarName, stripNullable, extractReturnTypeName } from './type-extractors/shared.js';\nimport type { SymbolTable } from './symbol-table.js';\n\n/**\n * Per-file scoped type environment: maps (scope, variableName) → typeName.\n * Scope-aware: variables inside functions are keyed by function name,\n * file-level variables use the '' (empty string) scope.\n *\n * Design constraints:\n * - Explicit-only: Tier 0 uses type annotations; Tier 1 infers from constructors\n * - Tier 2: single-pass assignment chain propagation in source order — resolves\n *   `const b = a` when `a` already has a type from Tier 0/1\n * - Scope-aware: function-local variables don't collide across functions\n * - Conservative: complex/generic types extract the base name only\n * - Per-file: built once, used for receiver resolution, then discarded\n */\nexport type TypeEnv = Map<string, Map<string, string>>;\n\n/** File-level scope key */\nconst FILE_SCOPE = '';\n\n/** Fallback for languages where class names aren't in a 'name' field (e.g. Kotlin uses type_identifier). */\nconst findTypeIdentifierChild = (node: SyntaxNode): SyntaxNode | null => {\n  for (let i = 0; i < node.childCount; i++) {\n    const child = node.child(i);\n    if (child && child.type === 'type_identifier') return child;\n  }\n  return null;\n};\n\n/**\n * Per-file type environment with receiver resolution.\n * Built once per file via `buildTypeEnv`, used for receiver-type filtering,\n * then discarded. 
Encapsulates scope-aware type lookup and self/this/super\n * AST resolution behind a single `.lookup()` method.\n */\nexport interface TypeEnvironment {\n  /** Look up a variable's resolved type, with self/this/super AST resolution. */\n  lookup(varName: string, callNode: SyntaxNode): string | undefined;\n  /** Unverified cross-file constructor bindings for SymbolTable verification. */\n  readonly constructorBindings: readonly ConstructorBinding[];\n  /** Raw per-scope type bindings — for testing and debugging. */\n  readonly env: TypeEnv;\n  /** Maps `scope\\0varName` → constructor type for virtual dispatch override.\n   *  Populated when a variable has BOTH a declared base type AND a more specific\n   *  constructor type (e.g., `Animal a = new Dog()` → key maps to 'Dog'). */\n  readonly constructorTypeMap: ReadonlyMap<string, string>;\n}\n\n/**\n * Position-indexed pattern binding: active only within a specific AST range.\n * Used for smart-cast narrowing in mutually exclusive branches (e.g., Kotlin when arms).\n */\ninterface PatternOverride {\n  rangeStart: number;\n  rangeEnd: number;\n  typeName: string;\n}\n\n/** scope → varName → overrides (checked in order, first range match wins) */\ntype PatternOverrides = Map<string, Map<string, PatternOverride[]>>;\n\n/** AST node types that represent mutually exclusive branch containers for pattern bindings.\n *  Includes both multi-arm pattern-match branches AND if-statement bodies for null-check narrowing. */\nconst NARROWING_BRANCH_TYPES = new Set([\n  'when_entry',          // Kotlin when\n  'switch_block_label',  // Java switch (enhanced)\n  'if_statement',        // TS/JS, Java, C/C++\n  'if_expression',       // Kotlin (if is an expression)\n  'statement_block',     // TS/JS: { ... } body of if\n  'control_structure_body', // Kotlin: body of if\n]);\n\n/** Walk up the AST from a pattern node to find the enclosing branch container. 
*/\nconst findNarrowingBranchScope = (node: SyntaxNode): SyntaxNode | undefined => {\n  let current = node.parent;\n  while (current) {\n    if (NARROWING_BRANCH_TYPES.has(current.type)) return current;\n    if (FUNCTION_NODE_TYPES.has(current.type)) return undefined;\n    current = current.parent;\n  }\n  return undefined;\n};\n\n/** Bare nullable keywords that fastStripNullable must reject. */\nconst FAST_NULLABLE_KEYWORDS = new Set(['null', 'undefined', 'void', 'None', 'nil']);\n\n/**\n * Fast-path nullable check: 90%+ of type names are simple identifiers (e.g. \"User\")\n * that don't need the full stripNullable parse. Only call stripNullable when the\n * string contains nullable markers ('|' for union types, '?' for nullable suffix).\n */\nconst fastStripNullable = (typeName: string): string | undefined => {\n  if (FAST_NULLABLE_KEYWORDS.has(typeName)) return undefined;\n  return (typeName.indexOf('|') === -1 && typeName.indexOf('?') === -1)\n    ? typeName\n    : stripNullable(typeName);\n};\n\n/** Implementation of the lookup logic — shared between TypeEnvironment and the legacy export. 
*/\nconst lookupInEnv = (\n  env: TypeEnv,\n  varName: string,\n  callNode: SyntaxNode,\n  patternOverrides?: PatternOverrides,\n): string | undefined => {\n  // Self/this receiver: resolve to enclosing class name via AST walk\n  if (varName === 'self' || varName === 'this' || varName === '$this') {\n    return findEnclosingClassName(callNode);\n  }\n\n  // Super/base/parent receiver: resolve to the parent class name via AST walk.\n  // Walks up to the enclosing class, then extracts the superclass from its heritage node.\n  if (varName === 'super' || varName === 'base' || varName === 'parent') {\n    return findEnclosingParentClassName(callNode);\n  }\n\n  // Determine the enclosing function scope for the call\n  const scopeKey = findEnclosingScopeKey(callNode);\n\n  // Check position-indexed pattern overrides first (e.g., Kotlin when/is smart casts).\n  // These take priority over flat scopeEnv because they represent per-branch narrowing.\n  if (scopeKey && patternOverrides) {\n    const varOverrides = patternOverrides.get(scopeKey)?.get(varName);\n    if (varOverrides) {\n      const pos = callNode.startIndex;\n      for (const override of varOverrides) {\n        if (pos >= override.rangeStart && pos <= override.rangeEnd) {\n          return fastStripNullable(override.typeName);\n        }\n      }\n    }\n  }\n\n  // Try function-local scope first\n  if (scopeKey) {\n    const scopeEnv = env.get(scopeKey);\n    if (scopeEnv) {\n      const result = scopeEnv.get(varName);\n      if (result) return fastStripNullable(result);\n    }\n  }\n\n  // Fall back to file-level scope\n  const fileEnv = env.get(FILE_SCOPE);\n  const raw = fileEnv?.get(varName);\n  return raw ? 
fastStripNullable(raw) : undefined;\n};\n\n\n/**\n * Walk up the AST from a node to find the enclosing class/module name.\n * Used to resolve `self`/`this` receivers to their containing type.\n */\nconst findEnclosingClassName = (node: SyntaxNode): string | undefined => {\n  let current = node.parent;\n  while (current) {\n    if (CLASS_CONTAINER_TYPES.has(current.type)) {\n      const nameNode = current.childForFieldName('name')\n        ?? findTypeIdentifierChild(current);\n      if (nameNode) return nameNode.text;\n    }\n    current = current.parent;\n  }\n  return undefined;\n};\n\n/** Keywords that refer to the current instance across languages. */\nconst THIS_RECEIVERS = new Set(['this', 'self', '$this', 'Me']);\n\n/**\n * If a pending assignment's receiver is this/self/$this/Me, substitute the\n * enclosing class name. Returns the item unchanged for non-receiver kinds\n * or when the receiver is not a this-keyword. Properties are readonly in the\n * discriminated union, so a new object is returned when substitution occurs.\n */\nconst substituteThisReceiver = (item: PendingAssignment, node: SyntaxNode): PendingAssignment => {\n  if (item.kind !== 'fieldAccess' && item.kind !== 'methodCallResult') return item;\n  if (!THIS_RECEIVERS.has(item.receiver)) return item;\n  const className = findEnclosingClassName(node);\n  if (!className) return item;\n  return { ...item, receiver: className };\n};\n\n/**\n * Walk up the AST to find the enclosing class, then extract its parent class name\n * from the heritage/superclass AST node. 
Used to resolve `super`/`base`/`parent`.\n *\n * Supported patterns per tree-sitter grammar:\n * - Java/Ruby: `superclass` field → type_identifier/constant\n * - Python: `superclasses` field → argument_list → first identifier\n * - TypeScript/JS: unnamed `class_heritage` child → `extends_clause` → identifier\n * - C#: unnamed `base_list` child → first identifier\n * - PHP: unnamed `base_clause` child → name\n * - Kotlin: unnamed `delegation_specifier` child → constructor_invocation → user_type → type_identifier\n * - C++: unnamed `base_class_clause` child → type_identifier\n * - Swift: unnamed `inheritance_specifier` child → user_type → type_identifier\n */\nconst findEnclosingParentClassName = (node: SyntaxNode): string | undefined => {\n  let current = node.parent;\n  while (current) {\n    if (CLASS_CONTAINER_TYPES.has(current.type)) {\n      return extractParentClassFromNode(current);\n    }\n    current = current.parent;\n  }\n  return undefined;\n};\n\n/** Extract the parent/superclass name from a class declaration AST node. */\nconst extractParentClassFromNode = (classNode: SyntaxNode): string | undefined => {\n  // 1. Named fields: Java (superclass), Ruby (superclass), Python (superclasses)\n  const superclassNode = classNode.childForFieldName('superclass');\n  if (superclassNode) {\n    // Java: superclass > type_identifier or generic_type, Ruby: superclass > constant\n    const inner = superclassNode.childForFieldName('type')\n      ?? superclassNode.firstNamedChild\n      ?? superclassNode;\n    return extractSimpleTypeName(inner) ?? inner.text;\n  }\n\n  const superclassesNode = classNode.childForFieldName('superclasses');\n  if (superclassesNode) {\n    // Python: argument_list with identifiers or attribute nodes (e.g. models.Model)\n    const first = superclassesNode.firstNamedChild;\n    if (first) return extractSimpleTypeName(first) ?? first.text;\n  }\n\n  // 2. 
Unnamed children: walk class node's children looking for heritage nodes\n  for (let i = 0; i < classNode.childCount; i++) {\n    const child = classNode.child(i);\n    if (!child) continue;\n\n    switch (child.type) {\n      // TypeScript: class_heritage > extends_clause > type_identifier\n      // JavaScript: class_heritage > identifier (no extends_clause wrapper)\n      case 'class_heritage': {\n        for (let j = 0; j < child.childCount; j++) {\n          const clause = child.child(j);\n          if (clause?.type === 'extends_clause') {\n            const typeNode = clause.firstNamedChild;\n            if (typeNode) return extractSimpleTypeName(typeNode) ?? typeNode.text;\n          }\n          // JS: direct identifier child (no extends_clause wrapper)\n          if (clause?.type === 'identifier' || clause?.type === 'type_identifier') {\n            return clause.text;\n          }\n        }\n        break;\n      }\n\n      // C#: base_list > identifier or generic_name > identifier\n      case 'base_list': {\n        const first = child.firstNamedChild;\n        if (first) {\n          // generic_name wraps the identifier: BaseClass<T>\n          if (first.type === 'generic_name') {\n            const inner = first.childForFieldName('name') ?? 
first.firstNamedChild;\n            if (inner) return inner.text;\n          }\n          return first.text;\n        }\n        break;\n      }\n\n      // PHP: base_clause > name\n      case 'base_clause': {\n        const name = child.firstNamedChild;\n        if (name) return name.text;\n        break;\n      }\n\n      // C++: base_class_clause > type_identifier (with optional access_specifier before it)\n      case 'base_class_clause': {\n        for (let j = 0; j < child.childCount; j++) {\n          const inner = child.child(j);\n          if (inner?.type === 'type_identifier') return inner.text;\n        }\n        break;\n      }\n\n      // Kotlin: delegation_specifier > constructor_invocation > user_type > type_identifier\n      case 'delegation_specifier': {\n        const delegate = child.firstNamedChild;\n        if (delegate?.type === 'constructor_invocation') {\n          const userType = delegate.firstNamedChild;\n          if (userType?.type === 'user_type') {\n            const typeId = userType.firstNamedChild;\n            if (typeId) return typeId.text;\n          }\n        }\n        // Also handle plain user_type (interface conformance without parentheses)\n        if (delegate?.type === 'user_type') {\n          const typeId = delegate.firstNamedChild;\n          if (typeId) return typeId.text;\n        }\n        break;\n      }\n\n      // Swift: inheritance_specifier > user_type > type_identifier\n      case 'inheritance_specifier': {\n        const userType = child.childForFieldName('inherits_from') ?? child.firstNamedChild;\n        if (userType?.type === 'user_type') {\n          const typeId = userType.firstNamedChild;\n          if (typeId) return typeId.text;\n        }\n        break;\n      }\n    }\n  }\n\n  return undefined;\n};\n\n/** Find the enclosing function name for scope lookup. 
*/\nconst findEnclosingScopeKey = (node: SyntaxNode): string | undefined => {\n  let current = node.parent;\n  while (current) {\n    if (FUNCTION_NODE_TYPES.has(current.type)) {\n      const { funcName } = extractFunctionName(current);\n      if (funcName) return `${funcName}@${current.startIndex}`;\n    }\n    current = current.parent;\n  }\n  return undefined;\n};\n\n/**\n * Create a lookup that checks both local AST class names AND the SymbolTable's\n * global index. This allows extractInitializer functions to distinguish\n * constructor calls from function calls (e.g. Kotlin `User()` vs `getUser()`)\n * using cross-file type information when available.\n *\n * Only `.has()` is exposed — the SymbolTable doesn't support iteration.\n * Results are memoized to avoid redundant lookupFuzzy scans across declarations.\n */\nconst createClassNameLookup = (\n  localNames: Set<string>,\n  symbolTable?: SymbolTable,\n): ClassNameLookup => {\n  if (!symbolTable) return localNames;\n\n  const memo = new Map<string, boolean>();\n  return {\n    has(name: string): boolean {\n      if (localNames.has(name)) return true;\n      const cached = memo.get(name);\n      if (cached !== undefined) return cached;\n      const result = symbolTable.lookupFuzzy(name).some(def =>\n        def.type === 'Class' || def.type === 'Enum' || def.type === 'Struct',\n      );\n      memo.set(name, result);\n      return result;\n    },\n  };\n};\n\n/**\n * Build a TypeEnvironment from a tree-sitter AST for a given language.\n * Single-pass: collects class/struct names, type bindings, AND constructor\n * bindings that couldn't be resolved locally — all in one AST walk.\n *\n * When a symbolTable is provided (call-processor path), class names from across\n * the project are available for constructor inference in languages like Kotlin\n * where constructors are syntactically identical to function calls.\n */\n/**\n * Node types whose subtrees can NEVER contain type-relevant descendants\n * 
(declarations, parameters, for-loops, class definitions, pattern bindings).\n * Conservative leaf-only set — verified safe across all 12 supported language grammars.\n * IMPORTANT: Do NOT add expression containers (arguments, binary_expression, etc.) —\n * they can contain arrow functions with typed parameters.\n */\nconst SKIP_SUBTREE_TYPES = new Set([\n  // Plain string literals (NOT template_string — it contains interpolated expressions\n  // that can hold arrow functions with typed parameters, e.g. `${(x: T) => x}`)\n  'string',              'string_literal',\n  'string_content',      'string_fragment',      'heredoc_body',\n  // Comments\n  'comment',             'line_comment',         'block_comment',\n  // Numeric/boolean/null literals\n  'number',              'integer_literal',      'float_literal',\n  'true',                'false',                'null',\n  // Regex\n  'regex',               'regex_pattern',\n]);\n\nconst CLASS_LIKE_TYPES = new Set(['Class', 'Struct', 'Interface']);\n\n/** Memoize class definition lookups during fixpoint iteration.\n *  SymbolTable is immutable during type resolution, so results never change.\n *  Eliminates redundant array allocations + filter scans across iterations. */\nconst createClassDefCache = (symbolTable?: SymbolTable) => {\n  const cache = new Map<string, Array<{ nodeId: string; type: string }>>();\n  return (typeName: string) => {\n    let result = cache.get(typeName);\n    if (result === undefined) {\n      result = symbolTable\n        ? symbolTable.lookupFuzzy(typeName).filter(d => CLASS_LIKE_TYPES.has(d.type))\n        : [];\n      cache.set(typeName, result);\n    }\n    return result;\n  };\n};\n\n/** AST node types representing constructor expressions across languages.\n *  Note: C# also has `implicit_object_creation_expression` (`new()` with type\n *  inference) which is NOT captured — the type is inferred, not explicit.\n *  Kotlin constructors use `call_expression` (no `new` keyword) — not detected. 
*/\nconst CONSTRUCTOR_EXPR_TYPES = new Set([\n  'new_expression',               // TS/JS/C++: new Dog()\n  'object_creation_expression',   // Java/C#: new Dog()\n]);\n\n/** Extract the constructor class name from a declaration node's initializer.\n *  Searches for new_expression / object_creation_expression in the node's subtree.\n *  Returns the class name or undefined if no constructor is found.\n *  Depth-limited to 5 to avoid expensive traversals. */\nconst extractConstructorTypeName = (node: SyntaxNode, depth = 0): string | undefined => {\n  if (depth > 5) return undefined;\n  if (CONSTRUCTOR_EXPR_TYPES.has(node.type)) {\n    // Java/C#: object_creation_expression has 'type' field\n    const typeField = node.childForFieldName('type');\n    if (typeField) return extractSimpleTypeName(typeField);\n    // TS/JS: new_expression has 'constructor' field (but tree-sitter often just has identifier child)\n    const ctorField = node.childForFieldName('constructor');\n    if (ctorField) return extractSimpleTypeName(ctorField);\n    // Fallback: first named child is often the class identifier\n    if (node.firstNamedChild) return extractSimpleTypeName(node.firstNamedChild);\n  }\n  for (let i = 0; i < node.namedChildCount; i++) {\n    const child = node.namedChild(i);\n    if (!child) continue;\n    // Don't descend into nested functions/classes or call expressions (prevents\n    // finding constructor args inside method calls, e.g. processAll(new Dog()))\n    if (FUNCTION_NODE_TYPES.has(child.type) || CLASS_CONTAINER_TYPES.has(child.type)\n      || CALL_EXPRESSION_TYPES.has(child.type)) continue;\n    const result = extractConstructorTypeName(child, depth + 1);\n    if (result) return result;\n  }\n  return undefined;\n};\n\n/** Max depth for MRO parent chain walking. Real-world inheritance rarely exceeds 3-4 levels. */\nconst MAX_MRO_DEPTH = 5;\n\n/** Check if `child` is a subclass of `parent` using the parentMap.\n *  BFS up from child, depth-limited (5), cycle-safe. 
*/\nexport const isSubclassOf = (\n  child: string, parent: string,\n  parentMap: ReadonlyMap<string, readonly string[]> | undefined,\n): boolean => {\n  if (!parentMap || child === parent) return false;\n  const visited = new Set<string>([child]);\n  let current = [child];\n  for (let depth = 0; depth < MAX_MRO_DEPTH && current.length > 0; depth++) {\n    const next: string[] = [];\n    for (const cls of current) {\n      const parents = parentMap.get(cls);\n      if (!parents) continue;\n      for (const p of parents) {\n        if (p === parent) return true;\n        if (!visited.has(p)) { visited.add(p); next.push(p); }\n      }\n    }\n    current = next;\n  }\n  return false;\n};\n\n/** Walk up the parent class chain to find a field or method on an ancestor.\n *  BFS-like traversal with depth limit and cycle detection. First match wins.\n *  Used by resolveFieldType and resolveMethodReturnType when direct lookup fails. */\nconst walkParentChain = <T>(\n  typeName: string,\n  parentMap: ReadonlyMap<string, readonly string[]> | undefined,\n  getClassDefs: (name: string) => Array<{ nodeId: string; type: string }>,\n  lookupOnClass: (nodeId: string) => T | undefined,\n): T | undefined => {\n  if (!parentMap) return undefined;\n  const visited = new Set<string>([typeName]);\n  let current = [typeName];\n  for (let depth = 0; depth < MAX_MRO_DEPTH && current.length > 0; depth++) {\n    const next: string[] = [];\n    for (const cls of current) {\n      const parents = parentMap.get(cls);\n      if (!parents) continue;\n      for (const parent of parents) {\n        if (visited.has(parent)) continue;\n        visited.add(parent);\n        const parentDefs = getClassDefs(parent);\n        if (parentDefs.length === 1) {\n          const result = lookupOnClass(parentDefs[0].nodeId);\n          if (result !== undefined) return result;\n        }\n        next.push(parent);\n      }\n    }\n    current = next;\n  }\n  return undefined;\n};\n\n/** Resolve a field's 
declared type given a receiver variable and field name.\n *  Uses SymbolTable to find the class nodeId for the receiver's type, then\n *  looks up the field via the eagerly-populated fieldByOwner index.\n *  Falls back to MRO parent chain walking if direct lookup fails (Phase 11A). */\nconst resolveFieldType = (\n  receiver: string, field: string,\n  scopeEnv: ReadonlyMap<string, string>, symbolTable?: SymbolTable,\n  getClassDefs?: (typeName: string) => Array<{ nodeId: string; type: string }>,\n  parentMap?: ReadonlyMap<string, readonly string[]>,\n): string | undefined => {\n  if (!symbolTable) return undefined;\n  const receiverType = scopeEnv.get(receiver);\n  if (!receiverType) return undefined;\n  const lookup = getClassDefs\n    ?? ((name: string) => symbolTable.lookupFuzzy(name).filter(d => CLASS_LIKE_TYPES.has(d.type)));\n  const classDefs = lookup(receiverType);\n  if (classDefs.length !== 1) return undefined;\n  // Direct lookup first\n  const fieldDef = symbolTable.lookupFieldByOwner(classDefs[0].nodeId, field);\n  if (fieldDef?.declaredType) return extractReturnTypeName(fieldDef.declaredType);\n  // MRO parent chain walking on miss\n  const inherited = walkParentChain(receiverType, parentMap, lookup, (nodeId) => {\n    const f = symbolTable.lookupFieldByOwner(nodeId, field);\n    return f?.declaredType ? extractReturnTypeName(f.declaredType) : undefined;\n  });\n  return inherited;\n};\n\n/** Resolve a method's return type given a receiver variable and method name.\n *  Uses SymbolTable to find class nodeIds for the receiver's type, then\n *  looks up the method via lookupFuzzyCallable filtered by ownerId.\n *  Falls back to MRO parent chain walking if direct lookup fails (Phase 11A). 
*/\nconst resolveMethodReturnType = (\n  receiver: string, method: string,\n  scopeEnv: ReadonlyMap<string, string>, symbolTable?: SymbolTable,\n  getClassDefs?: (typeName: string) => Array<{ nodeId: string; type: string }>,\n  parentMap?: ReadonlyMap<string, readonly string[]>,\n): string | undefined => {\n  if (!symbolTable) return undefined;\n  const receiverType = scopeEnv.get(receiver);\n  if (!receiverType) return undefined;\n  const lookup = getClassDefs\n    ?? ((name: string) => symbolTable.lookupFuzzy(name).filter(d => CLASS_LIKE_TYPES.has(d.type)));\n  const classDefs = lookup(receiverType);\n  if (classDefs.length === 0) return undefined;\n  // Direct lookup first\n  const classNodeIds = new Set(classDefs.map(d => d.nodeId));\n  const methods = symbolTable.lookupFuzzyCallable(method)\n    .filter(d => d.ownerId && classNodeIds.has(d.ownerId));\n  if (methods.length === 1 && methods[0].returnType) {\n    return extractReturnTypeName(methods[0].returnType);\n  }\n  // MRO parent chain walking on miss\n  if (methods.length === 0) {\n    const inherited = walkParentChain(receiverType, parentMap, lookup, (nodeId) => {\n      const parentMethods = symbolTable.lookupFuzzyCallable(method)\n        .filter(d => d.ownerId === nodeId);\n      if (parentMethods.length !== 1 || !parentMethods[0].returnType) return undefined;\n      return extractReturnTypeName(parentMethods[0].returnType);\n    });\n    return inherited;\n  }\n  return undefined;\n};\n\n/**\n * Unified fixpoint propagation: iterate over ALL pending items (copy, callResult,\n * fieldAccess, methodCallResult) until no new bindings are produced.\n * Handles arbitrary-depth mixed chains:\n *   const user = getUser();      // callResult → User\n *   const addr = user.address;   // fieldAccess → Address (depends on user)\n *   const city = addr.getCity(); // methodCallResult → City (depends on addr)\n *   const alias = city;          // copy → City (depends on city)\n * Data flow: SymbolTable (immutable) 
+ scopeEnv → resolve → scopeEnv.\n * Termination: finite entries, each bound at most once (first-writer-wins), max 10 iterations.\n */\nconst MAX_FIXPOINT_ITERATIONS = 10;\n\nconst resolveFixpointBindings = (\n  pendingItems: Array<{ scope: string } & PendingAssignment>,\n  env: TypeEnv,\n  returnTypeLookup: ReturnTypeLookup,\n  symbolTable?: SymbolTable,\n  parentMap?: ReadonlyMap<string, readonly string[]>,\n): void => {\n  if (pendingItems.length === 0) return;\n  const getClassDefs = createClassDefCache(symbolTable);\n  const resolved = new Set<number>();\n  for (let iter = 0; iter < MAX_FIXPOINT_ITERATIONS; iter++) {\n    let changed = false;\n    for (let i = 0; i < pendingItems.length; i++) {\n      if (resolved.has(i)) continue;\n      const item = pendingItems[i];\n      const scopeEnv = env.get(item.scope);\n      if (!scopeEnv || scopeEnv.has(item.lhs)) { resolved.add(i); continue; }\n\n      let typeName: string | undefined;\n      switch (item.kind) {\n        case 'callResult':\n          typeName = returnTypeLookup.lookupReturnType(item.callee);\n          break;\n        case 'copy':\n          typeName = scopeEnv.get(item.rhs) ?? 
env.get(FILE_SCOPE)?.get(item.rhs);\n          break;\n        case 'fieldAccess':\n          typeName = resolveFieldType(item.receiver, item.field, scopeEnv, symbolTable, getClassDefs, parentMap);\n          break;\n        case 'methodCallResult':\n          typeName = resolveMethodReturnType(item.receiver, item.method, scopeEnv, symbolTable, getClassDefs, parentMap);\n          break;\n        default: {\n          // Exhaustive check: TypeScript will error here if a new PendingAssignment\n          // kind is added without handling it in the switch.\n          const _exhaustive: never = item;\n          break;\n        }\n      }\n      if (typeName) {\n        scopeEnv.set(item.lhs, typeName);\n        resolved.add(i);\n        changed = true;\n      }\n    }\n    if (!changed) break;\n    if (iter === MAX_FIXPOINT_ITERATIONS - 1 && process.env.GITNEXUS_DEBUG) {\n      const unresolved = pendingItems.length - resolved.size;\n      if (unresolved > 0) {\n        console.warn(`[type-env] fixpoint hit iteration cap (${MAX_FIXPOINT_ITERATIONS}), ${unresolved} items unresolved`);\n      }\n    }\n  }\n};\n\n/**\n * Options for buildTypeEnv.\n * Uses an options object to allow future extensions without positional parameter sprawl.\n */\nexport interface BuildTypeEnvOptions {\n  symbolTable?: SymbolTable;\n  parentMap?: ReadonlyMap<string, readonly string[]>;\n}\n\nexport const buildTypeEnv = (\n  tree: { rootNode: SyntaxNode },\n  language: SupportedLanguages,\n  options?: BuildTypeEnvOptions,\n): TypeEnvironment => {\n  const symbolTable = options?.symbolTable;\n  const parentMap = options?.parentMap;\n  const env: TypeEnv = new Map();\n  const patternOverrides: PatternOverrides = new Map();\n  // Phase P: maps `scope\\0varName` → constructor type when a declaration has BOTH\n  // a base type annotation AND a more specific constructor initializer.\n  // e.g., `Animal a = new Dog()` → constructorTypeMap.set('func@42\\0a', 'Dog')\n  const constructorTypeMap = new 
Map<string, string>();\n  const localClassNames = new Set<string>();\n  const classNames = createClassNameLookup(localClassNames, symbolTable);\n  const config = typeConfigs[language];\n  const bindings: ConstructorBinding[] = [];\n\n  // Build ReturnTypeLookup from optional SymbolTable.\n  // Conservative: returns undefined when callee is ambiguous (0 or 2+ matches).\n  const returnTypeLookup: ReturnTypeLookup = {\n    lookupReturnType(callee: string): string | undefined {\n      if (!symbolTable) return undefined;\n      if (isBuiltInOrNoise(callee)) return undefined;\n      const callables = symbolTable.lookupFuzzyCallable(callee);\n      if (callables.length !== 1) return undefined;\n      const rawReturn = callables[0].returnType;\n      if (!rawReturn) return undefined;\n      return extractReturnTypeName(rawReturn);\n    },\n    lookupRawReturnType(callee: string): string | undefined {\n      if (!symbolTable) return undefined;\n      if (isBuiltInOrNoise(callee)) return undefined;\n      const callables = symbolTable.lookupFuzzyCallable(callee);\n      if (callables.length !== 1) return undefined;\n      return callables[0].returnType;\n    }\n  };\n\n  // Pre-compute combined set of node types that need extractTypeBinding.\n  // Single Set.has() replaces 3 separate checks per node in walk().\n  const interestingNodeTypes = new Set<string>();\n  TYPED_PARAMETER_TYPES.forEach(t => interestingNodeTypes.add(t));\n  config.declarationNodeTypes.forEach(t => interestingNodeTypes.add(t));\n  config.forLoopNodeTypes?.forEach(t => interestingNodeTypes.add(t));\n  // Tier 2: unified fixpoint propagation — collects copy, callResult, fieldAccess, and\n  // methodCallResult items during walk(), then iterates until no new bindings are produced.\n  // Handles arbitrary-depth mixed chains: callResult → fieldAccess → methodCallResult → copy.\n  const pendingItems: Array<{ scope: string } & PendingAssignment> = [];\n  // For-loop nodes whose iterable was unresolved at 
walk-time. Replayed after the fixpoint\n  // resolves the iterable's type, bridging the walk-time/fixpoint gap (Phase 10 / ex-9B).\n  const pendingForLoops: Array<{ node: SyntaxNode; scope: string }> = [];\n  // Maps `scope\\0varName` → the type annotation AST node from the original declaration.\n  // Allows pattern extractors to navigate back to the declaration's generic type arguments\n  // (e.g., to extract T from Result<T, E> for `if let Ok(x) = res`).\n  // NOTE: This is a SUPERSET of scopeEnv — entries exist even when extractSimpleTypeName\n  // returns undefined for container types (User[], []User, List[User]). This is intentional:\n  // for-loop Strategy 1 needs the raw AST type node for exactly those container types.\n  const declarationTypeNodes = new Map<string, SyntaxNode>();\n\n  /**\n   * Try to extract a (variableName → typeName) binding from a single AST node.\n   *\n   * Resolution tiers (first match wins):\n   * - Tier 0: explicit type annotations via extractDeclaration / extractForLoopBinding\n   * - Tier 1: constructor-call inference via extractInitializer (fallback)\n   *\n   * Side effect: populates declarationTypeNodes for variables that have an explicit\n   * type annotation field on the declaration node. 
This allows pattern extractors to\n   * retrieve generic type arguments from the original declaration (e.g., extracting T\n   * from Result<T, E> for `if let Ok(x) = res`).\n   */\n  const extractTypeBinding = (node: SyntaxNode, scopeEnv: Map<string, string>, scope: string): void => {\n    // This guard eliminates 90%+ of calls before any language dispatch.\n    if (TYPED_PARAMETER_TYPES.has(node.type)) {\n      // Capture the raw type annotation BEFORE extractParameter.\n      // Most languages use 'name' field; Rust uses 'pattern'; TS uses 'pattern' for some param types.\n      // Kotlin `parameter` nodes use positional children instead of named fields,\n      // so we fall back to scanning children by type when childForFieldName returns null.\n      let typeNode = node.childForFieldName('type');\n      if (typeNode) {\n        const nameNode = node.childForFieldName('name')\n          ?? node.childForFieldName('pattern')\n          // Python typed_parameter: name is a positional child (identifier), not a named field\n          ?? (node.firstNamedChild?.type === 'identifier' ? 
node.firstNamedChild : null);\n        if (nameNode) {\n          const varName = extractVarName(nameNode);\n          if (varName && !declarationTypeNodes.has(`${scope}\\0${varName}`)) {\n            declarationTypeNodes.set(`${scope}\\0${varName}`, typeNode);\n          }\n        }\n      } else {\n        // Fallback: positional children (Kotlin `parameter` → simple_identifier + user_type)\n        let fallbackName: SyntaxNode | null = null;\n        let fallbackType: SyntaxNode | null = null;\n        for (let i = 0; i < node.namedChildCount; i++) {\n          const child = node.namedChild(i);\n          if (!child) continue;\n          if (!fallbackName && (child.type === 'simple_identifier' || child.type === 'identifier')) {\n            fallbackName = child;\n          }\n          if (!fallbackType && (child.type === 'user_type' || child.type === 'type_identifier'\n            || child.type === 'generic_type' || child.type === 'parameterized_type'\n            || child.type === 'nullable_type')) {\n            fallbackType = child;\n          }\n        }\n        if (fallbackName && fallbackType) {\n          const varName = extractVarName(fallbackName);\n          if (varName && !declarationTypeNodes.has(`${scope}\\0${varName}`)) {\n            declarationTypeNodes.set(`${scope}\\0${varName}`, fallbackType);\n          }\n        }\n      }\n      config.extractParameter(node, scopeEnv);\n      return;\n    }\n    // For-each loop variable bindings (Java/C#/Kotlin): explicit element types in the AST.\n    // Checked before declarationNodeTypes — loop variables are not declarations.\n    if (config.forLoopNodeTypes?.has(node.type)) {\n      if (config.extractForLoopBinding) {\n        const sizeBefore = scopeEnv.size;\n        const forLoopCtx: ForLoopExtractorContext = { scopeEnv, declarationTypeNodes, scope, returnTypeLookup };\n        config.extractForLoopBinding(node, forLoopCtx);\n        // If no new binding was produced, the iterable's type may 
not yet be resolved.\n        // Store for post-fixpoint replay (Phase 10 / ex-9B loop-fixpoint bridge).\n        if (scopeEnv.size === sizeBefore) {\n          pendingForLoops.push({ node, scope });\n        }\n      }\n      return;\n    }\n    if (config.declarationNodeTypes.has(node.type)) {\n      // Capture the raw type annotation AST node BEFORE extractDeclaration.\n      // This decouples type node capture from scopeEnv success — container types\n      // (User[], []User, List[User]) that fail extractSimpleTypeName still get\n      // their AST type node recorded for Strategy 1 for-loop resolution.\n      // Try direct extraction first (works for Go var_spec, Python assignment, Rust let_declaration).\n      // Try direct type field first, then unwrap wrapper nodes (C# field_declaration,\n      // local_declaration_statement wrap their type inside a variable_declaration child).\n      let typeNode = node.childForFieldName('type');\n      if (!typeNode) {\n        // C# field_declaration / local_declaration_statement wrap type inside variable_declaration.\n        // Use manual loop instead of namedChildren.find() to avoid array allocation on hot path.\n        let wrapped = node.childForFieldName('declaration');\n        if (!wrapped) {\n          for (let i = 0; i < node.namedChildCount; i++) {\n            const c = node.namedChild(i);\n            if (c?.type === 'variable_declaration') { wrapped = c; break; }\n          }\n        }\n        if (wrapped) {\n          typeNode = wrapped.childForFieldName('type');\n          // Kotlin: variable_declaration stores the type as user_type / nullable_type\n          // child rather than a named 'type' field.\n          if (!typeNode) {\n            for (let i = 0; i < wrapped.namedChildCount; i++) {\n              const c = wrapped.namedChild(i);\n              if (c && (c.type === 'user_type' || c.type === 'nullable_type')) {\n                typeNode = c;\n                break;\n              }\n            
}\n          }\n        }\n      }\n      if (typeNode) {\n        const nameNode = node.childForFieldName('name')\n          ?? node.childForFieldName('left')\n          ?? node.childForFieldName('pattern');\n        if (nameNode) {\n          const varName = extractVarName(nameNode);\n          if (varName && !declarationTypeNodes.has(`${scope}\\0${varName}`)) {\n            declarationTypeNodes.set(`${scope}\\0${varName}`, typeNode);\n          }\n        }\n      }\n      // Run the language-specific declaration extractor (may or may not add to scopeEnv).\n      const sizeBefore = typeNode ? scopeEnv.size : -1;\n      config.extractDeclaration(node, scopeEnv);\n      // Fallback: for multi-declarator languages (TS, C#, Java) where the type field\n      // is on variable_declarator children, capture newly-added keys.\n      // Map preserves insertion order, so new keys are always at the end —\n      // skip the first sizeBefore entries to find only newly-added variables.\n      if (sizeBefore >= 0 && scopeEnv.size > sizeBefore) {\n        let skip = sizeBefore;\n        for (const varName of scopeEnv.keys()) {\n          if (skip > 0) { skip--; continue; }\n          if (!declarationTypeNodes.has(`${scope}\\0${varName}`)) {\n            declarationTypeNodes.set(`${scope}\\0${varName}`, typeNode);\n          }\n        }\n      }\n      // Tier 1: constructor-call inference as fallback.\n      // Always called when available — each language's extractInitializer\n      // internally skips declarators that already have explicit annotations,\n      // so this handles mixed cases like `const a: A = x, b = new B()`.\n      if (config.extractInitializer) {\n        config.extractInitializer(node, scopeEnv, classNames);\n      }\n\n      // Phase P: detect constructor-visible virtual dispatch.\n      // When a declaration has BOTH a type annotation AND a constructor initializer,\n      // record the constructor type for receiver override at call resolution time.\n      
// e.g., `Animal a = new Dog()` → constructorTypeMap.set('scope\\0a', 'Dog')\n      if (sizeBefore >= 0 && scopeEnv.size > sizeBefore) {\n        // Constructor detection depends only on the declaration node, not the\n        // variable, so compute it once instead of per newly-added key.\n        const ctorType = extractConstructorTypeName(node)\n          ?? config.detectConstructorType?.(node, classNames);\n        if (ctorType) {\n          let ctorSkip = sizeBefore;\n          for (const varName of scopeEnv.keys()) {\n            if (ctorSkip > 0) { ctorSkip--; continue; }\n            const declaredType = scopeEnv.get(varName);\n            if (!declaredType || ctorType === declaredType) continue;\n            // Unwrap wrapper types (e.g., C++ shared_ptr<Animal> → Animal) for an\n            // accurate isSubclassOf comparison. Language-specific via config hook.\n            const declTypeNode = declarationTypeNodes.get(`${scope}\\0${varName}`);\n            const effectiveDeclaredType = (declTypeNode && config.unwrapDeclaredType)\n              ? (config.unwrapDeclaredType(declaredType, declTypeNode) ?? declaredType)\n              : declaredType;\n            if (ctorType !== effectiveDeclaredType) {\n              constructorTypeMap.set(`${scope}\\0${varName}`, ctorType);\n            }\n          }\n        }\n      }\n    }\n  };\n\n  const walk = (node: SyntaxNode, currentScope: string): void => {\n    // Fast skip: subtrees that can never contain type-relevant nodes (leaf-like literals).\n    if (SKIP_SUBTREE_TYPES.has(node.type)) return;\n\n    // Collect class/struct names as we encounter them (used by extractInitializer\n    // to distinguish constructor calls from function calls, e.g. C++ `User()` vs `getUser()`)\n    // Currently only C++ uses this locally; other languages rely on the SymbolTable path.\n    if (CLASS_CONTAINER_TYPES.has(node.type)) {\n      // Most languages use 'name' field; Kotlin uses a type_identifier child instead\n      const nameNode = node.childForFieldName('name')\n        ?? 
findTypeIdentifierChild(node);\n      if (nameNode) localClassNames.add(nameNode.text);\n    }\n\n    // Detect scope boundaries (function/method definitions)\n    let scope = currentScope;\n    if (FUNCTION_NODE_TYPES.has(node.type)) {\n      const { funcName } = extractFunctionName(node);\n      if (funcName) scope = `${funcName}@${node.startIndex}`;\n    }\n\n    // Only create scope map and call extractTypeBinding for interesting node types.\n    // Single Set.has() replaces 3 separate checks inside extractTypeBinding.\n    if (interestingNodeTypes.has(node.type)) {\n      if (!env.has(scope)) env.set(scope, new Map());\n      const scopeEnv = env.get(scope)!;\n      extractTypeBinding(node, scopeEnv, scope);\n    }\n\n    // Pattern binding extraction: handles constructs that introduce NEW typed variables\n    // via pattern matching (e.g. `if let Some(x) = opt`, `x instanceof T t`)\n    // or narrow existing variables within a branch (null-check narrowing).\n    // Runs after Tier 0/1 so scopeEnv already contains the source variable's type.\n    // Conservative: extractor returns undefined when source type is unknown.\n    if (config.extractPatternBinding && (!config.patternBindingNodeTypes || config.patternBindingNodeTypes.has(node.type))) {\n      // Ensure scopeEnv exists for pattern binding reads/writes\n      if (!env.has(scope)) env.set(scope, new Map());\n      const scopeEnv = env.get(scope)!;\n      const patternBinding = config.extractPatternBinding(node, scopeEnv, declarationTypeNodes, scope);\n      if (patternBinding) {\n        if (patternBinding.narrowingRange) {\n          // Explicit narrowing range (null-check narrowing): always store in patternOverrides\n          // using the extractor-provided range (typically the if-body block).\n          if (!patternOverrides.has(scope)) patternOverrides.set(scope, new Map());\n          const varMap = patternOverrides.get(scope)!;\n          if (!varMap.has(patternBinding.varName)) 
varMap.set(patternBinding.varName, []);\n          varMap.get(patternBinding.varName)!.push({\n            rangeStart: patternBinding.narrowingRange.startIndex,\n            rangeEnd: patternBinding.narrowingRange.endIndex,\n            typeName: patternBinding.typeName,\n          });\n        } else if (config.allowPatternBindingOverwrite) {\n          // Position-indexed: store per-branch binding for smart-cast narrowing.\n          // Each when arm / switch case gets its own type for the variable,\n          // preventing cross-arm contamination (e.g., Kotlin when/is).\n          const branchNode = findNarrowingBranchScope(node);\n          if (branchNode) {\n            if (!patternOverrides.has(scope)) patternOverrides.set(scope, new Map());\n            const varMap = patternOverrides.get(scope)!;\n            if (!varMap.has(patternBinding.varName)) varMap.set(patternBinding.varName, []);\n            varMap.get(patternBinding.varName)!.push({\n              rangeStart: branchNode.startIndex,\n              rangeEnd: branchNode.endIndex,\n              typeName: patternBinding.typeName,\n            });\n          }\n          // Also store in flat scopeEnv as fallback (last arm wins — same as before\n          // for code that doesn't use position-indexed lookup).\n          scopeEnv.set(patternBinding.varName, patternBinding.typeName);\n        } else if (!scopeEnv.has(patternBinding.varName)) {\n          // First-writer-wins for languages without smart-cast overwrite (Java instanceof, etc.)\n          scopeEnv.set(patternBinding.varName, patternBinding.typeName);\n        }\n      }\n    }\n\n    // Tier 2: collect plain-identifier RHS assignments for post-walk propagation.\n    // Delegates to per-language extractPendingAssignment — AST shapes differ widely\n    // (JS uses variable_declarator/name/value, Rust uses let_declaration/pattern/value,\n    // Python uses assignment/left/right, Go uses short_var_declaration/expression_list).\n    // May 
return a single item or an array (for destructuring: N fieldAccess items).\n    if (config.extractPendingAssignment && config.declarationNodeTypes.has(node.type)) {\n      // scopeEnv is guaranteed to exist here because declarationNodeTypes is a subset\n      // of interestingNodeTypes, so extractTypeBinding already created the scope map above.\n      const scopeEnv = env.get(scope);\n      if (scopeEnv) {\n        const pending = config.extractPendingAssignment(node, scopeEnv);\n        if (pending) {\n          const items = Array.isArray(pending) ? pending : [pending];\n          for (const item of items) {\n            // Substitute this/self/$this/Me receivers with enclosing class name\n            const resolved = substituteThisReceiver(item, node);\n            pendingItems.push({ scope, ...resolved });\n          }\n        }\n      }\n    }\n\n    // Scan for constructor bindings that couldn't be resolved locally.\n    // Only collect if TypeEnv didn't already resolve this binding.\n    if (config.scanConstructorBinding) {\n      const result = config.scanConstructorBinding(node);\n      if (result) {\n        const scopeEnv = env.get(scope);\n        if (!scopeEnv?.has(result.varName)) {\n          bindings.push({ scope, ...result });\n        }\n      }\n    }\n\n    // Recurse into children\n    for (let i = 0; i < node.childCount; i++) {\n      const child = node.child(i);\n      if (child) walk(child, scope);\n    }\n  };\n\n  walk(tree.rootNode, FILE_SCOPE);\n\n  resolveFixpointBindings(pendingItems, env, returnTypeLookup, symbolTable, parentMap);\n\n  // Post-fixpoint for-loop replay (Phase 10 / ex-9B loop-fixpoint bridge):\n  // For-loop nodes whose iterables were unresolved at walk-time may now be\n  // resolvable because the fixpoint bound the iterable's type.\n  // Example: `const users = getUsers(); for (const u of users) { u.save(); }`\n  //   - walk-time: users untyped → u unresolved\n  //   - fixpoint: users → User[]\n  //   - replay: users 
now typed → u → User\n  if (pendingForLoops.length > 0 && config.extractForLoopBinding) {\n    for (const { node, scope } of pendingForLoops) {\n      if (!env.has(scope)) env.set(scope, new Map());\n      const scopeEnv = env.get(scope)!;\n      config.extractForLoopBinding(node, { scopeEnv, declarationTypeNodes, scope, returnTypeLookup });\n    }\n    // Re-run the main fixpoint for items that are still unresolved: loop variables\n    // bound by the replay may now let their dependent chains resolve.\n    const unresolvedBefore = pendingItems.filter((item) => {\n      const scopeEnv = env.get(item.scope);\n      return scopeEnv && !scopeEnv.has(item.lhs);\n    });\n    if (unresolvedBefore.length > 0) {\n      // Pass parentMap so inherited fields/methods resolve during the replay too.\n      resolveFixpointBindings(unresolvedBefore, env, returnTypeLookup, symbolTable, parentMap);\n    }\n  }\n\n  return {\n    lookup: (varName, callNode) => lookupInEnv(env, varName, callNode, patternOverrides),\n    constructorBindings: bindings,\n    env,\n    constructorTypeMap,\n  };\n};\n\n/**\n * Unverified constructor binding: a `val x = Callee()` pattern where we\n * couldn't confirm the callee is a class (because it's defined in another file).\n * The caller must verify `calleeName` against the SymbolTable before trusting.\n */\nexport interface ConstructorBinding {\n  /** Function scope key (matches TypeEnv scope keys) */\n  scope: string;\n  /** Variable name that received the constructor result */\n  varName: string;\n  /** Name of the callee (potential class constructor) */\n  calleeName: string;\n  /** Enclosing class name when callee is a method on a known receiver (e.g. $this) */\n  receiverClassName?: string;\n}\n\n\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/type-extractors/c-cpp.ts",
    "content": "import type { SyntaxNode } from '../utils.js';\nimport type { LanguageTypeConfig, ParameterExtractor, TypeBindingExtractor, InitializerExtractor, ClassNameLookup, ConstructorBindingScanner, PendingAssignmentExtractor, ForLoopExtractor, LiteralTypeInferrer, ConstructorTypeDetector, DeclaredTypeUnwrapper } from './types.js';\nimport { extractSimpleTypeName, extractVarName, resolveIterableElementType, methodToTypeArgPosition, type TypeArgPosition } from './shared.js';\n\nconst DECLARATION_NODE_TYPES: ReadonlySet<string> = new Set([\n  'declaration',\n]);\n\n/** Smart pointer factory function names that create a typed object. */\nconst SMART_PTR_FACTORIES = new Set([\n  'make_shared', 'make_unique', 'make_shared_for_overwrite',\n]);\n\n/** Smart pointer wrapper type names. When the declared type is a smart pointer,\n *  the inner template type is extracted for virtual dispatch comparison. */\nconst SMART_PTR_WRAPPERS = new Set(['shared_ptr', 'unique_ptr', 'weak_ptr']);\n\n/** Extract the first type name from a template_argument_list child.\n *  Unwraps type_descriptor wrappers common in tree-sitter-cpp ASTs.\n *  Returns undefined if no template arguments or no type found. */\nexport const extractFirstTemplateTypeArg = (parentNode: SyntaxNode): string | undefined => {\n  const templateArgs = parentNode.children.find((c: any) => c.type === 'template_argument_list');\n  if (!templateArgs?.firstNamedChild) return undefined;\n  let argNode: any = templateArgs.firstNamedChild;\n  if (argNode.type === 'type_descriptor') {\n    const inner = argNode.childForFieldName('type');\n    if (inner) argNode = inner;\n  }\n  return extractSimpleTypeName(argNode) ?? 
undefined;\n};\n\n/** C++: Type x = ...; Type* x; Type& x; */\nconst extractDeclaration: TypeBindingExtractor = (node: SyntaxNode, env: Map<string, string>): void => {\n  const typeNode = node.childForFieldName('type');\n  if (!typeNode) return;\n  const typeName = extractSimpleTypeName(typeNode);\n  if (!typeName) return;\n\n  const declarator = node.childForFieldName('declarator');\n  if (!declarator) return;\n\n  // init_declarator: Type x = value\n  const nameNode = declarator.type === 'init_declarator'\n    ? declarator.childForFieldName('declarator')\n    : declarator;\n  if (!nameNode) return;\n\n  // Handle pointer/reference declarators\n  const finalName = nameNode.type === 'pointer_declarator' || nameNode.type === 'reference_declarator'\n    ? nameNode.firstNamedChild\n    : nameNode;\n  if (!finalName) return;\n\n  const varName = extractVarName(finalName);\n  if (varName) env.set(varName, typeName);\n};\n\n/** C++: auto x = new User(); auto x = User(); */\nconst extractInitializer: InitializerExtractor = (node: SyntaxNode, env: Map<string, string>, classNames: ClassNameLookup): void => {\n  const typeNode = node.childForFieldName('type');\n  if (!typeNode) return;\n\n  // Only handle auto/placeholder — typed declarations are handled by extractDeclaration\n  const typeText = typeNode.text;\n  if (\n    typeText !== 'auto' &&\n    typeText !== 'decltype(auto)' &&\n    typeNode.type !== 'placeholder_type_specifier'\n  ) return;\n\n  const declarator = node.childForFieldName('declarator');\n  if (!declarator) return;\n\n  // Must be an init_declarator (i.e., has an initializer value)\n  if (declarator.type !== 'init_declarator') return;\n\n  const value = declarator.childForFieldName('value');\n  if (!value) return;\n\n  // Resolve the variable name, unwrapping pointer/reference declarators\n  const nameNode = declarator.childForFieldName('declarator');\n  if (!nameNode) return;\n  const finalName =\n    nameNode.type === 'pointer_declarator' || 
nameNode.type === 'reference_declarator'\n      ? nameNode.firstNamedChild\n      : nameNode;\n  if (!finalName) return;\n  const varName = extractVarName(finalName);\n  if (!varName) return;\n\n  // auto x = new User() — new_expression\n  if (value.type === 'new_expression') {\n    const ctorType = value.childForFieldName('type');\n    if (ctorType) {\n      const typeName = extractSimpleTypeName(ctorType);\n      if (typeName) env.set(varName, typeName);\n    }\n    return;\n  }\n\n  // auto x = User() — call_expression where function is a type name\n  // tree-sitter-cpp may parse the constructor name as type_identifier or identifier.\n  // For plain identifiers, verify against known class names from the file's AST\n  // to distinguish constructor calls (User()) from function calls (getUser()).\n  if (value.type === 'call_expression') {\n    const func = value.childForFieldName('function');\n    if (!func) return;\n    if (func.type === 'type_identifier') {\n      const typeName = func.text;\n      if (typeName) env.set(varName, typeName);\n    } else if (func.type === 'identifier') {\n      const text = func.text;\n      if (text && classNames.has(text)) env.set(varName, text);\n    } else {\n      // auto x = std::make_shared<Dog>() — smart pointer factory via template_function.\n      // AST: call_expression > function: qualified_identifier > template_function\n      //   or: call_expression > function: template_function (unqualified)\n      const templateFunc = func.type === 'template_function'\n        ? func\n        : (func.type === 'qualified_identifier' || func.type === 'scoped_identifier')\n          ? func.namedChildren.find((c: any) => c.type === 'template_function') ?? null\n          : null;\n      if (templateFunc) {\n        const nameNode = templateFunc.firstNamedChild;\n        if (nameNode) {\n          const funcName = (nameNode.type === 'qualified_identifier' || nameNode.type === 'scoped_identifier')\n            ? 
nameNode.lastNamedChild?.text ?? ''\n            : nameNode.text;\n          if (SMART_PTR_FACTORIES.has(funcName)) {\n            const typeName = extractFirstTemplateTypeArg(templateFunc);\n            if (typeName) env.set(varName, typeName);\n          }\n        }\n      }\n    }\n    return;\n  }\n\n  // auto x = User{} — compound_literal_expression (brace initialization)\n  // AST: compound_literal_expression > type_identifier + initializer_list\n  if (value.type === 'compound_literal_expression') {\n    const typeId = value.firstNamedChild;\n    const typeName = typeId ? extractSimpleTypeName(typeId) : undefined;\n    if (typeName) env.set(varName, typeName);\n  }\n};\n\n/** C/C++: parameter_declaration → type declarator */\nconst extractParameter: ParameterExtractor = (node: SyntaxNode, env: Map<string, string>): void => {\n  let nameNode: SyntaxNode | null = null;\n  let typeNode: SyntaxNode | null = null;\n\n  if (node.type === 'parameter_declaration') {\n    typeNode = node.childForFieldName('type');\n    const declarator = node.childForFieldName('declarator');\n    if (declarator) {\n      nameNode = declarator.type === 'pointer_declarator' || declarator.type === 'reference_declarator'\n        ? declarator.firstNamedChild\n        : declarator;\n    }\n  } else {\n    nameNode = node.childForFieldName('name') ?? 
node.childForFieldName('pattern');\n    typeNode = node.childForFieldName('type');\n  }\n\n  if (!nameNode || !typeNode) return;\n  const varName = extractVarName(nameNode);\n  const typeName = extractSimpleTypeName(typeNode);\n  if (varName && typeName) env.set(varName, typeName);\n};\n\n/** C/C++: auto x = User() where function is an identifier (not type_identifier) */\nconst scanConstructorBinding: ConstructorBindingScanner = (node) => {\n  if (node.type !== 'declaration') return undefined;\n  const typeNode = node.childForFieldName('type');\n  if (!typeNode) return undefined;\n  const typeText = typeNode.text;\n  if (typeText !== 'auto' && typeText !== 'decltype(auto)' && typeNode.type !== 'placeholder_type_specifier') return undefined;\n  const declarator = node.childForFieldName('declarator');\n  if (!declarator || declarator.type !== 'init_declarator') return undefined;\n  const value = declarator.childForFieldName('value');\n  if (!value || value.type !== 'call_expression') return undefined;\n  const func = value.childForFieldName('function');\n  if (!func) return undefined;\n  if (func.type === 'qualified_identifier' || func.type === 'scoped_identifier') {\n    const last = func.lastNamedChild;\n    if (!last) return undefined;\n    const nameNode = declarator.childForFieldName('declarator');\n    if (!nameNode) return undefined;\n    const finalName = nameNode.type === 'pointer_declarator' || nameNode.type === 'reference_declarator'\n      ? nameNode.firstNamedChild : nameNode;\n    if (!finalName) return undefined;\n    return { varName: finalName.text, calleeName: last.text };\n  }\n  if (func.type !== 'identifier') return undefined;\n  const nameNode = declarator.childForFieldName('declarator');\n  if (!nameNode) return undefined;\n  const finalName = nameNode.type === 'pointer_declarator' || nameNode.type === 'reference_declarator'\n    ? 
nameNode.firstNamedChild : nameNode;\n  if (!finalName) return undefined;\n  const varName = finalName.text;\n  if (!varName) return undefined;\n  return { varName, calleeName: func.text };\n};\n\n/** C++: auto alias = user → declaration with auto type + init_declarator where value is identifier */\nconst extractPendingAssignment: PendingAssignmentExtractor = (node, scopeEnv) => {\n  if (node.type !== 'declaration') return undefined;\n  const typeNode = node.childForFieldName('type');\n  if (!typeNode) return undefined;\n  // Only handle auto — typed declarations already resolved by extractDeclaration\n  const typeText = typeNode.text;\n  if (typeText !== 'auto' && typeText !== 'decltype(auto)'\n    && typeNode.type !== 'placeholder_type_specifier') return undefined;\n  const declarator = node.childForFieldName('declarator');\n  if (!declarator || declarator.type !== 'init_declarator') return undefined;\n  const value = declarator.childForFieldName('value');\n  if (!value) return undefined;\n  const nameNode = declarator.childForFieldName('declarator');\n  if (!nameNode) return undefined;\n  const finalName = nameNode.type === 'pointer_declarator' || nameNode.type === 'reference_declarator'\n    ? 
nameNode.firstNamedChild : nameNode;\n  if (!finalName) return undefined;\n  const lhs = extractVarName(finalName);\n  if (!lhs || scopeEnv.has(lhs)) return undefined;\n  if (value.type === 'identifier') return { kind: 'copy', lhs, rhs: value.text };\n  // field_expression RHS → fieldAccess (a.field)\n  if (value.type === 'field_expression') {\n    const obj = value.firstNamedChild;\n    const field = value.lastNamedChild;\n    if (obj?.type === 'identifier' && field?.type === 'field_identifier') {\n      return { kind: 'fieldAccess', lhs, receiver: obj.text, field: field.text };\n    }\n  }\n  // call_expression RHS\n  if (value.type === 'call_expression') {\n    const funcNode = value.childForFieldName('function');\n    if (funcNode?.type === 'identifier') {\n      return { kind: 'callResult', lhs, callee: funcNode.text };\n    }\n    // method call with receiver: call_expression → function: field_expression\n    if (funcNode?.type === 'field_expression') {\n      const obj = funcNode.firstNamedChild;\n      const field = funcNode.lastNamedChild;\n      if (obj?.type === 'identifier' && field?.type === 'field_identifier') {\n        return { kind: 'methodCallResult', lhs, receiver: obj.text, method: field.text };\n      }\n    }\n  }\n  return undefined;\n};\n\n// --- For-loop Tier 1c ---\n\nconst FOR_LOOP_NODE_TYPES: ReadonlySet<string> = new Set(['for_range_loop']);\n\n/** Extract template type arguments from a C++ template_type node.\n *  C++ template_type uses template_argument_list (not type_arguments), and each\n *  argument is a type_descriptor with a 'type' field containing the type_specifier. 
*/\nconst extractCppTemplateTypeArgs = (templateTypeNode: SyntaxNode): string[] => {\n  const argsNode = templateTypeNode.childForFieldName('arguments');\n  if (!argsNode || argsNode.type !== 'template_argument_list') return [];\n  const result: string[] = [];\n  for (let i = 0; i < argsNode.namedChildCount; i++) {\n    let argNode = argsNode.namedChild(i);\n    if (!argNode) continue;\n    // type_descriptor wraps the actual type specifier in a 'type' field\n    if (argNode.type === 'type_descriptor') {\n      const inner = argNode.childForFieldName('type');\n      if (inner) argNode = inner;\n    }\n    const name = extractSimpleTypeName(argNode);\n    if (name) result.push(name);\n  }\n  return result;\n};\n\n/** Extract element type from a C++ type annotation AST node.\n *  Handles: template_type (vector<User>, map<string, User>),\n *  pointer/reference types (User*, User&). */\nconst extractCppElementTypeFromTypeNode = (typeNode: SyntaxNode, pos: TypeArgPosition = 'last', depth = 0): string | undefined => {\n  if (depth > 50) return undefined;\n  // template_type: vector<User>, map<string, User> — extract type arg based on position\n  if (typeNode.type === 'template_type') {\n    const args = extractCppTemplateTypeArgs(typeNode);\n    if (args.length >= 1) return pos === 'first' ? 
args[0] : args[args.length - 1];\n  }\n  // reference/pointer types: unwrap and recurse (vector<User>& → vector<User>)\n  if (typeNode.type === 'reference_type' || typeNode.type === 'pointer_type'\n    || typeNode.type === 'type_descriptor') {\n    const inner = typeNode.lastNamedChild;\n    if (inner) return extractCppElementTypeFromTypeNode(inner, pos, depth + 1);\n  }\n  // qualified/scoped types: std::vector<User> → unwrap to template_type child\n  if (typeNode.type === 'qualified_identifier' || typeNode.type === 'scoped_type_identifier') {\n    const inner = typeNode.lastNamedChild;\n    if (inner) return extractCppElementTypeFromTypeNode(inner, pos, depth + 1);\n  }\n  return undefined;\n};\n\n/** Walk up from a for-range-loop to the enclosing function_definition and search parameters\n *  for one named `iterableName`. Returns the element type from its annotation. */\nconst findCppParamElementType = (iterableName: string, startNode: SyntaxNode, pos: TypeArgPosition = 'last'): string | undefined => {\n  let current: SyntaxNode | null = startNode.parent;\n  while (current) {\n    if (current.type === 'function_definition') {\n      const declarator = current.childForFieldName('declarator');\n      // function_definition > declarator (function_declarator) > parameters (parameter_list)\n      const paramsNode = declarator?.childForFieldName('parameters');\n      if (paramsNode) {\n        for (let i = 0; i < paramsNode.namedChildCount; i++) {\n          const param = paramsNode.namedChild(i);\n          if (!param || param.type !== 'parameter_declaration') continue;\n          const paramDeclarator = param.childForFieldName('declarator');\n          if (!paramDeclarator) continue;\n          // Unwrap reference/pointer declarators: vector<User>& users → &users\n          let identNode = paramDeclarator;\n          if (identNode.type === 'reference_declarator' || identNode.type === 'pointer_declarator') {\n            identNode = identNode.firstNamedChild ?? 
identNode;\n          }\n          if (identNode.text !== iterableName) continue;\n          const typeNode = param.childForFieldName('type');\n          if (typeNode) return extractCppElementTypeFromTypeNode(typeNode, pos);\n        }\n      }\n      break;\n    }\n    current = current.parent;\n  }\n  return undefined;\n};\n\n/** C++: for (auto& user : users) — extract loop variable binding.\n *  Handles explicit types (for (User& user : users)) and auto (for (auto& user : users)).\n *  For auto, resolves element type from the iterable's container type. */\nconst extractForLoopBinding: ForLoopExtractor = (node, { scopeEnv, declarationTypeNodes, scope } ): void => {\n  if (node.type !== 'for_range_loop') return;\n\n  const typeNode = node.childForFieldName('type');\n  const declaratorNode = node.childForFieldName('declarator');\n  const rightNode = node.childForFieldName('right');\n  if (!typeNode || !declaratorNode || !rightNode) return;\n\n  // Unwrap reference/pointer declarator to get the loop variable name\n  let nameNode = declaratorNode;\n  if (nameNode.type === 'reference_declarator' || nameNode.type === 'pointer_declarator') {\n    nameNode = nameNode.firstNamedChild ?? nameNode;\n  }\n\n  // Handle structured bindings: auto& [key, value] or auto [key, value]\n  // Bind the last identifier (value heuristic for [key, value] patterns)\n  let loopVarName: string | undefined;\n  if (nameNode.type === 'structured_binding_declarator') {\n    const lastChild = nameNode.lastNamedChild;\n    if (lastChild?.type === 'identifier') {\n      loopVarName = lastChild.text;\n    }\n  } else if (declaratorNode.type === 'structured_binding_declarator') {\n    const lastChild = declaratorNode.lastNamedChild;\n    if (lastChild?.type === 'identifier') {\n      loopVarName = lastChild.text;\n    }\n  }\n\n  const varName = loopVarName ?? 
extractVarName(nameNode);\n  if (!varName) return;\n\n  // Check if the type is auto/placeholder — if not, use the explicit type directly\n  const isAuto = typeNode.type === 'placeholder_type_specifier'\n    || typeNode.text === 'auto'\n    || typeNode.text === 'const auto'\n    || typeNode.text === 'decltype(auto)';\n\n  if (!isAuto) {\n    // Explicit type: for (User& user : users) — extract directly\n    const typeName = extractSimpleTypeName(typeNode);\n    if (typeName) scopeEnv.set(varName, typeName);\n    return;\n  }\n\n  // auto/const auto/auto& — resolve from the iterable's container type\n  // Extract iterable name + optional method\n  let iterableName: string | undefined;\n  let methodName: string | undefined;\n  if (rightNode.type === 'identifier') {\n    iterableName = rightNode.text;\n  } else if (rightNode.type === 'field_expression') {\n    const prop = rightNode.lastNamedChild;\n    if (prop) iterableName = prop.text;\n  } else if (rightNode.type === 'call_expression') {\n    // users.begin() is NOT used in range-for, but container.items() etc. 
might be\n    const fieldExpr = rightNode.childForFieldName('function');\n    if (fieldExpr?.type === 'field_expression') {\n      const obj = fieldExpr.firstNamedChild;\n      if (obj?.type === 'identifier') iterableName = obj.text;\n      const field = fieldExpr.lastNamedChild;\n      if (field?.type === 'field_identifier') methodName = field.text;\n    }\n  } else if (rightNode.type === 'pointer_expression') {\n    // Dereference: for (auto& user : *ptr) → pointer_expression > identifier\n    // Only handles simple *identifier; *this->field and **ptr are not resolved.\n    const operand = rightNode.lastNamedChild;\n    if (operand?.type === 'identifier') iterableName = operand.text;\n  }\n  if (!iterableName) return;\n\n  const containerTypeName = scopeEnv.get(iterableName);\n  const typeArgPos = methodToTypeArgPosition(methodName, containerTypeName);\n  const elementType = resolveIterableElementType(\n    iterableName, node, scopeEnv, declarationTypeNodes, scope,\n    extractCppElementTypeFromTypeNode, findCppParamElementType,\n    typeArgPos,\n  );\n  if (elementType) scopeEnv.set(varName, elementType);\n};\n\n/** Infer the type of a literal AST node for C++ overload disambiguation. 
*/\nconst inferLiteralType: LiteralTypeInferrer = (node) => {\n  switch (node.type) {\n    case 'number_literal': {\n      const t = node.text;\n      // Hex literals first: 0x1F ends in a hex digit (and may contain e/E), not a float suffix\n      if (t.startsWith('0x') || t.startsWith('0X')) {\n        return t.endsWith('L') || t.endsWith('l') ? 'long' : 'int';\n      }\n      // Float suffixes\n      if (t.endsWith('f') || t.endsWith('F')) return 'float';\n      if (t.includes('.') || t.includes('e') || t.includes('E')) return 'double';\n      // Long suffix (endsWith('L') also covers 'LL')\n      if (t.endsWith('L') || t.endsWith('l')) return 'long';\n      return 'int';\n    }\n    case 'string_literal':\n    case 'raw_string_literal':\n    case 'concatenated_string':\n      return 'string';\n    case 'char_literal':\n      return 'char';\n    case 'true':\n    case 'false':\n      return 'bool';\n    case 'null':\n    case 'nullptr':\n      return 'null';\n    default:\n      return undefined;\n  }\n};\n\n/** C++: detect constructor type from smart pointer factory calls (make_shared<Dog>()).\n *  Extracts the template type argument as the constructor type for virtual dispatch. */\nconst detectCppConstructorType: ConstructorTypeDetector = (node, classNames) => {\n  // Navigate to the initializer value in the declaration\n  const declarator = node.childForFieldName('declarator');\n  const initDecl = declarator?.type === 'init_declarator' ? declarator : undefined;\n  if (!initDecl) return undefined;\n  const value = initDecl.childForFieldName('value');\n  if (!value || value.type !== 'call_expression') return undefined;\n\n  // Check for template_function pattern: make_shared<Dog>()\n  const func = value.childForFieldName('function');\n  if (!func || func.type !== 'template_function') return undefined;\n\n  // Extract function name (possibly qualified: std::make_shared)\n  const nameNode = func.firstNamedChild;\n  if (!nameNode) return undefined;\n  let funcName: string;\n  if (nameNode.type === 'qualified_identifier' || nameNode.type === 'scoped_identifier') {\n    funcName = nameNode.lastNamedChild?.text ?? 
'';\n  } else {\n    funcName = nameNode.text;\n  }\n  if (!SMART_PTR_FACTORIES.has(funcName)) return undefined;\n\n  // Extract template type argument\n  return extractFirstTemplateTypeArg(func);\n};\n\n/** Unwrap a C++ smart pointer declared type to its inner template type.\n *  E.g., shared_ptr<Animal> → Animal. Returns the original name if not a smart pointer. */\nconst unwrapCppDeclaredType: DeclaredTypeUnwrapper = (declaredType, typeNode) => {\n  if (!SMART_PTR_WRAPPERS.has(declaredType)) return declaredType;\n  if (typeNode.type !== 'template_type') return declaredType;\n  return extractFirstTemplateTypeArg(typeNode) ?? declaredType;\n};\n\nexport const typeConfig: LanguageTypeConfig = {\n  declarationNodeTypes: DECLARATION_NODE_TYPES,\n  forLoopNodeTypes: FOR_LOOP_NODE_TYPES,\n  extractDeclaration,\n  extractParameter,\n  extractInitializer,\n  scanConstructorBinding,\n  extractForLoopBinding,\n  extractPendingAssignment,\n  inferLiteralType,\n  detectConstructorType: detectCppConstructorType,\n  unwrapDeclaredType: unwrapCppDeclaredType,\n};\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/type-extractors/csharp.ts",
    "content": "import type { SyntaxNode } from '../utils.js';\nimport type { ConstructorBindingScanner, ForLoopExtractor, LanguageTypeConfig, ParameterExtractor, TypeBindingExtractor, PendingAssignmentExtractor, PatternBindingExtractor, LiteralTypeInferrer } from './types.js';\nimport { extractSimpleTypeName, extractVarName, findChildByType, unwrapAwait, extractGenericTypeArgs, resolveIterableElementType, methodToTypeArgPosition, extractElementTypeFromString, type TypeArgPosition } from './shared.js';\n\n/** Known container property accessors that operate on the container itself (e.g., dict.Keys, dict.Values) */\nconst KNOWN_CONTAINER_PROPS: ReadonlySet<string> = new Set(['Keys', 'Values']);\n\nconst DECLARATION_NODE_TYPES: ReadonlySet<string> = new Set([\n  'local_declaration_statement',\n  'variable_declaration',\n  'field_declaration',\n]);\n\n/** C#: Type x = ...; var x = new Type(); */\nconst extractDeclaration: TypeBindingExtractor = (node: SyntaxNode, env: Map<string, string>): void => {\n  // C# tree-sitter: local_declaration_statement > variable_declaration > ...\n  // Recursively descend through wrapper nodes\n  for (let i = 0; i < node.namedChildCount; i++) {\n    const child = node.namedChild(i);\n    if (!child) continue;\n    if (child.type === 'variable_declaration' || child.type === 'local_declaration_statement') {\n      extractDeclaration(child, env);\n      return;\n    }\n  }\n\n  // At variable_declaration level: first child is type, rest are variable_declarators\n  let typeNode: SyntaxNode | null = null;\n  const declarators: SyntaxNode[] = [];\n\n  for (let i = 0; i < node.namedChildCount; i++) {\n    const child = node.namedChild(i);\n    if (!child) continue;\n\n    if (!typeNode && child.type !== 'variable_declarator' && child.type !== 'equals_value_clause') {\n      // First non-declarator child is the type (identifier, implicit_type, generic_name, etc.)\n      typeNode = child;\n    }\n    if (child.type === 'variable_declarator') {\n   
   declarators.push(child);\n    }\n  }\n\n  if (!typeNode || declarators.length === 0) return;\n\n  // Handle 'var x = new Foo()' — infer from object_creation_expression\n  let typeName: string | undefined;\n  if (typeNode.type === 'implicit_type' && typeNode.text === 'var') {\n    // Try to infer from initializer: var x = new Foo()\n    // tree-sitter-c-sharp may put object_creation_expression as direct child\n    // or inside equals_value_clause depending on grammar version\n    if (declarators.length === 1) {\n      const initializer = findChildByType(declarators[0], 'object_creation_expression')\n        ?? findChildByType(declarators[0], 'equals_value_clause')?.firstNamedChild;\n      if (initializer?.type === 'object_creation_expression') {\n        const ctorType = initializer.childForFieldName('type');\n        if (ctorType) typeName = extractSimpleTypeName(ctorType);\n      }\n    }\n  } else {\n    typeName = extractSimpleTypeName(typeNode);\n  }\n\n  if (!typeName) return;\n  for (const decl of declarators) {\n    const nameNode = decl.childForFieldName('name') ?? decl.firstNamedChild;\n    if (nameNode) {\n      const varName = extractVarName(nameNode);\n      if (varName) env.set(varName, typeName);\n    }\n  }\n};\n\n/** C#: parameter → type name */\nconst extractParameter: ParameterExtractor = (node: SyntaxNode, env: Map<string, string>): void => {\n  let nameNode: SyntaxNode | null = null;\n  let typeNode: SyntaxNode | null = null;\n\n  if (node.type === 'parameter') {\n    typeNode = node.childForFieldName('type');\n    nameNode = node.childForFieldName('name');\n  } else {\n    nameNode = node.childForFieldName('name') ?? node.childForFieldName('pattern');\n    typeNode = node.childForFieldName('type');\n  }\n\n  if (!nameNode || !typeNode) return;\n  const varName = extractVarName(nameNode);\n  const typeName = extractSimpleTypeName(typeNode);\n  if (varName && typeName) env.set(varName, typeName);\n};\n\n/** C#: var x = SomeFactory(...) 
→ bind x to SomeFactory (constructor-like call) */\nconst scanConstructorBinding: ConstructorBindingScanner = (node) => {\n  if (node.type !== 'variable_declaration') return undefined;\n  // Find type and declarator children by iterating (C# grammar doesn't expose 'type' as a named field)\n  let typeNode: SyntaxNode | null = null;\n  let declarator: SyntaxNode | null = null;\n  for (let i = 0; i < node.namedChildCount; i++) {\n    const child = node.namedChild(i);\n    if (!child) continue;\n    if (child.type === 'variable_declarator') { if (!declarator) declarator = child; }\n    else if (!typeNode) { typeNode = child; }\n  }\n  // Only handle implicit_type (var) — explicit types handled by extractDeclaration\n  if (!typeNode || typeNode.type !== 'implicit_type') return undefined;\n  if (!declarator) return undefined;\n  const nameNode = declarator.childForFieldName('name') ?? declarator.firstNamedChild;\n  if (!nameNode || nameNode.type !== 'identifier') return undefined;\n  // Find the initializer value: either inside equals_value_clause or as a direct child\n  // (tree-sitter-c-sharp puts invocation_expression directly inside variable_declarator)\n  let value: SyntaxNode | null = null;\n  for (let i = 0; i < declarator.namedChildCount; i++) {\n    const child = declarator.namedChild(i);\n    if (!child) continue;\n    if (child.type === 'equals_value_clause') { value = child.firstNamedChild; break; }\n    if (child.type === 'invocation_expression' || child.type === 'object_creation_expression' || child.type === 'await_expression') { value = child; break; }\n  }\n  if (!value) return undefined;\n  // Unwrap await: `var user = await svc.GetUserAsync()` → await_expression wraps invocation_expression\n  value = unwrapAwait(value);\n  if (!value) return undefined;\n  // Skip object_creation_expression (new User()) — handled by extractInitializer\n  if (value.type === 'object_creation_expression') return undefined;\n  if (value.type !== 'invocation_expression') 
return undefined;\n  const func = value.firstNamedChild;\n  if (!func) return undefined;\n  const calleeName = extractSimpleTypeName(func);\n  if (!calleeName) return undefined;\n  return { varName: nameNode.text, calleeName };\n};\n\nconst FOR_LOOP_NODE_TYPES: ReadonlySet<string> = new Set([\n  'foreach_statement',\n]);\n\n/** Extract element type from a C# type annotation AST node.\n *  Handles generic_name (List<User>), array_type (User[]), nullable_type (?).\n *  `pos` selects which type arg: 'first' for keys, 'last' for values (default). */\nconst extractCSharpElementTypeFromTypeNode = (typeNode: SyntaxNode, pos: TypeArgPosition = 'last', depth = 0): string | undefined => {\n  if (depth > 50) return undefined;\n  // generic_name: List<User>, IEnumerable<User>, Dictionary<string, User>\n  // C# uses generic_name (not generic_type)\n  if (typeNode.type === 'generic_name') {\n    const argList = findChildByType(typeNode, 'type_argument_list');\n    if (argList && argList.namedChildCount >= 1) {\n      if (pos === 'first') {\n        const firstArg = argList.namedChild(0);\n        if (firstArg) return extractSimpleTypeName(firstArg);\n      } else {\n        const lastArg = argList.namedChild(argList.namedChildCount - 1);\n        if (lastArg) return extractSimpleTypeName(lastArg);\n      }\n    }\n  }\n  // array_type: User[]\n  if (typeNode.type === 'array_type') {\n    const elemNode = typeNode.firstNamedChild;\n    if (elemNode) return extractSimpleTypeName(elemNode);\n  }\n  // nullable_type: unwrap and recurse (List<User>? → List<User> → User)\n  if (typeNode.type === 'nullable_type') {\n    const inner = typeNode.firstNamedChild;\n    if (inner) return extractCSharpElementTypeFromTypeNode(inner, pos, depth + 1);\n  }\n  return undefined;\n};\n\n/** Walk up from a foreach to the enclosing method and search parameters. 
*/\nconst findCSharpParamElementType = (iterableName: string, startNode: SyntaxNode, pos: TypeArgPosition = 'last'): string | undefined => {\n  let current: SyntaxNode | null = startNode.parent;\n  while (current) {\n    if (current.type === 'method_declaration' || current.type === 'local_function_statement') {\n      const paramsNode = current.childForFieldName('parameters');\n      if (paramsNode) {\n        for (let i = 0; i < paramsNode.namedChildCount; i++) {\n          const param = paramsNode.namedChild(i);\n          if (!param || param.type !== 'parameter') continue;\n          const nameNode = param.childForFieldName('name');\n          if (nameNode?.text !== iterableName) continue;\n          const typeNode = param.childForFieldName('type');\n          if (typeNode) return extractCSharpElementTypeFromTypeNode(typeNode, pos);\n        }\n      }\n      break;\n    }\n    current = current.parent;\n  }\n  return undefined;\n};\n\n/** C#: foreach (User user in users) — extract loop variable binding.\n *  Tier 1c: for `foreach (var user in users)`, resolves element type from iterable. 
*/\nconst extractForLoopBinding: ForLoopExtractor = (node, { scopeEnv, declarationTypeNodes, scope, returnTypeLookup }): void => {\n  const typeNode = node.childForFieldName('type');\n  const nameNode = node.childForFieldName('left');\n  if (!typeNode || !nameNode) return;\n  const varName = extractVarName(nameNode);\n  if (!varName) return;\n\n  // Explicit type (existing behavior): foreach (User user in users)\n  if (!(typeNode.type === 'implicit_type' && typeNode.text === 'var')) {\n    const typeName = extractSimpleTypeName(typeNode);\n    if (typeName) scopeEnv.set(varName, typeName);\n    return;\n  }\n\n  // Tier 1c: implicit type (var) — resolve from iterable's container type\n  const rightNode = node.childForFieldName('right');\n  let iterableName: string | undefined;\n  let methodName: string | undefined;\n  let callExprElementType: string | undefined;\n\n  if (rightNode?.type === 'identifier') {\n    iterableName = rightNode.text;\n  } else if (rightNode?.type === 'member_access_expression') {\n    // C# property access: data.Keys, data.Values → member_access_expression\n    // Also handles bare member access: this.users, repo.users → use property as iterableName\n    const obj = rightNode.childForFieldName('expression');\n    const prop = rightNode.childForFieldName('name');\n    const propText = prop?.type === 'identifier' ? 
prop.text : undefined;\n    if (propText && KNOWN_CONTAINER_PROPS.has(propText)) {\n      if (obj?.type === 'identifier') {\n        iterableName = obj.text;\n      } else if (obj?.type === 'member_access_expression') {\n        // Nested member access: this.data.Values → obj is \"this.data\", extract \"data\"\n        const innerProp = obj.childForFieldName('name');\n        if (innerProp) iterableName = innerProp.text;\n      }\n      methodName = propText;\n    } else if (propText) {\n      // Bare member access: this.users → use property name for scopeEnv lookup\n      iterableName = propText;\n    }\n  } else if (rightNode?.type === 'invocation_expression') {\n    // C# method call: data.Select(...) → invocation_expression > member_access_expression\n    // Direct function call: GetUsers() → invocation_expression > identifier\n    const fn = rightNode.firstNamedChild;\n    if (fn?.type === 'member_access_expression') {\n      const obj = fn.childForFieldName('expression');\n      const prop = fn.childForFieldName('name');\n      if (obj?.type === 'identifier') iterableName = obj.text;\n      if (prop?.type === 'identifier') methodName = prop.text;\n    } else if (fn?.type === 'identifier') {\n      // Direct function call: foreach (var u in GetUsers())\n      const rawReturn = returnTypeLookup.lookupRawReturnType(fn.text);\n      if (rawReturn) callExprElementType = extractElementTypeFromString(rawReturn);\n    }\n  }\n  if (!iterableName && !callExprElementType) return;\n\n  let elementType: string | undefined;\n  if (callExprElementType) {\n    elementType = callExprElementType;\n  } else {\n    const containerTypeName = scopeEnv.get(iterableName!);\n    const typeArgPos = methodToTypeArgPosition(methodName, containerTypeName);\n    elementType = resolveIterableElementType(\n      iterableName!, node, scopeEnv, declarationTypeNodes, scope,\n      extractCSharpElementTypeFromTypeNode, findCSharpParamElementType,\n      typeArgPos,\n    );\n  }\n  if 
(elementType) scopeEnv.set(varName, elementType);\n};\n\n/**\n * Find the if-body (consequence) block for a C# null-check.\n * Walks up from the expression to find the enclosing if_statement,\n * then returns its first block child (the truthy branch body).\n */\nconst findCSharpIfConsequenceBlock = (expr: SyntaxNode): SyntaxNode | undefined => {\n  let current = expr.parent;\n  while (current) {\n    if (current.type === 'if_statement') {\n      // C# if_statement consequence is the 'consequence' field or first block child\n      const consequence = current.childForFieldName('consequence');\n      if (consequence) return consequence;\n      for (let i = 0; i < current.childCount; i++) {\n        const child = current.child(i);\n        if (child?.type === 'block') return child;\n      }\n      return undefined;\n    }\n    if (current.type === 'block' || current.type === 'method_declaration'\n      || current.type === 'constructor_declaration' || current.type === 'local_function_statement'\n      || current.type === 'lambda_expression') return undefined;\n    current = current.parent;\n  }\n  return undefined;\n};\n\n/** Check if a C# declaration type node represents a nullable type.\n *  Checks for nullable_type AST node or '?' in the type text (e.g., User?). */\nconst isCSharpNullableDecl = (declTypeNode: SyntaxNode): boolean => {\n  if (declTypeNode.type === 'nullable_type') return true;\n  return declTypeNode.text.includes('?');\n};\n\n/**\n * C# pattern binding extractor.\n *\n * Handles four forms: `obj is User user` (is_pattern_expression with a\n * declaration_pattern or recursive_pattern), `case User u:` / `User u =>`\n * (standalone declaration_pattern / recursive_pattern in switch statements and\n * switch expressions), and the null-check narrowings `x is not null` and\n * `x != null`.\n *\n * Type-pattern AST structure:\n *   is_pattern_expression\n *     expression: (the variable being tested)\n *     pattern: declaration_pattern\n *       type: (the declared type)\n *       name: single_variable_designation > identifier (the new variable name)\n *\n * Conservative: returns undefined whenever the pattern, type, or name cannot be\n * extracted. Type patterns declare the new variable's type explicitly; the\n * null-check forms instead look up the tested variable in scopeEnv and verify\n * that its original declaration was nullable before narrowing.\n */\nconst extractPatternBinding: PatternBindingExtractor = (node, scopeEnv, declarationTypeNodes, scope) => {\n  // is_pattern_expression: `obj is User user` — has a declaration_pattern child\n  // Also handles `x is not null` for null-check narrowing\n  if (node.type === 'is_pattern_expression') {\n    const pattern = node.childForFieldName('pattern');\n    if (!pattern) return undefined;\n\n    // Standard type pattern: `obj is User user`\n    if (pattern.type === 'declaration_pattern' || pattern.type === 'recursive_pattern') {\n      const typeNode = pattern.childForFieldName('type');\n      const nameNode = pattern.childForFieldName('name');\n      if (!typeNode || !nameNode) return undefined;\n      const typeName = extractSimpleTypeName(typeNode);\n      const varName = extractVarName(nameNode);\n      if (!typeName || !varName) return undefined;\n      return { varName, typeName };\n    }\n\n    // Null-check: `x is not null` — negated_pattern > constant_pattern > null_literal\n    if (pattern.type === 'negated_pattern') {\n      const inner = pattern.firstNamedChild;\n      if (inner?.type === 'constant_pattern') {\n        const literal = inner.firstNamedChild ?? 
inner.firstChild;\n        if (literal?.type === 'null_literal' || literal?.text === 'null') {\n          const expr = node.childForFieldName('expression');\n          if (!expr || expr.type !== 'identifier') return undefined;\n          const varName = expr.text;\n          const resolvedType = scopeEnv.get(varName);\n          if (!resolvedType) return undefined;\n          // Verify the original declaration was nullable\n          const declTypeNode = declarationTypeNodes.get(`${scope}\\0${varName}`);\n          if (!declTypeNode || !isCSharpNullableDecl(declTypeNode)) return undefined;\n          const ifBody = findCSharpIfConsequenceBlock(node);\n          if (!ifBody) return undefined;\n          return {\n            varName,\n            typeName: resolvedType,\n            narrowingRange: { startIndex: ifBody.startIndex, endIndex: ifBody.endIndex },\n          };\n        }\n      }\n    }\n    return undefined;\n  }\n  // declaration_pattern / recursive_pattern: standalone in switch statements and switch expressions\n  // `case User u:` or `User u =>` or `User { Name: \"Alice\" } u =>`\n  // Both use the same 'type' and 'name' fields.\n  if (node.type === 'declaration_pattern' || node.type === 'recursive_pattern') {\n    const typeNode = node.childForFieldName('type');\n    const nameNode = node.childForFieldName('name');\n    if (!typeNode || !nameNode) return undefined;\n    const typeName = extractSimpleTypeName(typeNode);\n    const varName = extractVarName(nameNode);\n    if (!typeName || !varName) return undefined;\n    return { varName, typeName };\n  }\n  // Null-check: `x != null` — binary_expression with != operator\n  if (node.type === 'binary_expression') {\n    const op = node.children.find(c => !c.isNamed && c.text === '!=');\n    if (!op) return undefined;\n    const left = node.namedChild(0);\n    const right = node.namedChild(1);\n    if (!left || !right) return undefined;\n    let varNode: SyntaxNode | undefined;\n    if (left.type === 
'identifier' && (right.type === 'null_literal' || right.text === 'null')) {\n      varNode = left;\n    } else if (right.type === 'identifier' && (left.type === 'null_literal' || left.text === 'null')) {\n      varNode = right;\n    }\n    if (!varNode) return undefined;\n    const varName = varNode.text;\n    const resolvedType = scopeEnv.get(varName);\n    if (!resolvedType) return undefined;\n    // Verify the original declaration was nullable\n    const declTypeNode = declarationTypeNodes.get(`${scope}\\0${varName}`);\n    if (!declTypeNode || !isCSharpNullableDecl(declTypeNode)) return undefined;\n    const ifBody = findCSharpIfConsequenceBlock(node);\n    if (!ifBody) return undefined;\n    return {\n      varName,\n      typeName: resolvedType,\n      narrowingRange: { startIndex: ifBody.startIndex, endIndex: ifBody.endIndex },\n    };\n  }\n  return undefined;\n};\n\n/** C#: var alias = u → variable_declarator with name + equals_value_clause.\n *  Only local_declaration_statement and variable_declaration contain variable_declarator children;\n *  is_pattern_expression and field_declaration never do — skip them early. */\nconst extractPendingAssignment: PendingAssignmentExtractor = (node, scopeEnv) => {\n  if (node.type === 'is_pattern_expression' || node.type === 'field_declaration') return undefined;\n  for (let i = 0; i < node.namedChildCount; i++) {\n    const child = node.namedChild(i);\n    if (!child || child.type !== 'variable_declarator') continue;\n    const nameNode = child.childForFieldName('name');\n    if (!nameNode) continue;\n    const lhs = nameNode.text;\n    if (scopeEnv.has(lhs)) continue;\n    // C# wraps value in equals_value_clause; fall back to last named child\n    let evc: SyntaxNode | null = null;\n    for (let j = 0; j < child.childCount; j++) {\n      if (child.child(j)?.type === 'equals_value_clause') { evc = child.child(j); break; }\n    }\n    const valueNode = evc?.firstNamedChild ?? 
child.namedChild(child.namedChildCount - 1);\n    if (valueNode && valueNode !== nameNode && (valueNode.type === 'identifier' || valueNode.type === 'simple_identifier')) {\n      return { kind: 'copy', lhs, rhs: valueNode.text };\n    }\n    // member_access_expression RHS → fieldAccess (a.Field)\n    if (valueNode?.type === 'member_access_expression') {\n      const expr = valueNode.childForFieldName('expression');\n      const name = valueNode.childForFieldName('name');\n      if (expr?.type === 'identifier' && name?.type === 'identifier') {\n        return { kind: 'fieldAccess', lhs, receiver: expr.text, field: name.text };\n      }\n    }\n    // invocation_expression RHS\n    if (valueNode?.type === 'invocation_expression') {\n      const funcNode = valueNode.firstNamedChild;\n      if (funcNode?.type === 'identifier_name' || funcNode?.type === 'identifier') {\n        return { kind: 'callResult', lhs, callee: funcNode.text };\n      }\n      // method call with receiver → methodCallResult: a.GetC()\n      if (funcNode?.type === 'member_access_expression') {\n        const expr = funcNode.childForFieldName('expression');\n        const name = funcNode.childForFieldName('name');\n        if (expr?.type === 'identifier' && name?.type === 'identifier') {\n          return { kind: 'methodCallResult', lhs, receiver: expr.text, method: name.text };\n        }\n      }\n    }\n    // await_expression → unwrap and check inner\n    if (valueNode?.type === 'await_expression') {\n      const inner = valueNode.firstNamedChild;\n      if (inner?.type === 'invocation_expression') {\n        const funcNode = inner.firstNamedChild;\n        if (funcNode?.type === 'identifier_name' || funcNode?.type === 'identifier') {\n          return { kind: 'callResult', lhs, callee: funcNode.text };\n        }\n        if (funcNode?.type === 'member_access_expression') {\n          const expr = funcNode.childForFieldName('expression');\n          const name = 
funcNode.childForFieldName('name');\n          if (expr?.type === 'identifier' && name?.type === 'identifier') {\n            return { kind: 'methodCallResult', lhs, receiver: expr.text, method: name.text };\n          }\n        }\n      }\n    }\n  }\n  return undefined;\n};\n\n/** Infer the type of a literal AST node for C# overload disambiguation. */\nconst inferLiteralType: LiteralTypeInferrer = (node) => {\n  switch (node.type) {\n    case 'integer_literal':\n      if (node.text.endsWith('L') || node.text.endsWith('l')) return 'long';\n      return 'int';\n    case 'real_literal':\n      if (node.text.endsWith('f') || node.text.endsWith('F')) return 'float';\n      if (node.text.endsWith('m') || node.text.endsWith('M')) return 'decimal';\n      return 'double';\n    case 'string_literal':\n    case 'verbatim_string_literal':\n    case 'raw_string_literal':\n    case 'interpolated_string_expression':\n      return 'string';\n    case 'character_literal':\n      return 'char';\n    case 'boolean_literal':\n      return 'bool';\n    case 'null_literal':\n      return 'null';\n    default:\n      return undefined;\n  }\n};\n\nexport const typeConfig: LanguageTypeConfig = {\n  declarationNodeTypes: DECLARATION_NODE_TYPES,\n  forLoopNodeTypes: FOR_LOOP_NODE_TYPES,\n  patternBindingNodeTypes: new Set(['is_pattern_expression', 'declaration_pattern', 'recursive_pattern', 'binary_expression']),\n  extractDeclaration,\n  extractParameter,\n  scanConstructorBinding,\n  extractForLoopBinding,\n  extractPendingAssignment,\n  extractPatternBinding,\n  inferLiteralType,\n};\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/type-extractors/go.ts",
    "content": "import type { SyntaxNode } from '../utils.js';\nimport type { ConstructorBindingScanner, ForLoopExtractor, LanguageTypeConfig, ParameterExtractor, TypeBindingExtractor, PendingAssignmentExtractor } from './types.js';\nimport { extractSimpleTypeName, extractVarName, extractElementTypeFromString, extractGenericTypeArgs, findChildByType, resolveIterableElementType, methodToTypeArgPosition, type TypeArgPosition } from './shared.js';\n\nconst DECLARATION_NODE_TYPES: ReadonlySet<string> = new Set([\n  'var_declaration',\n  'var_spec',\n  'short_var_declaration',\n]);\n\n/** Go: var x Foo */\nconst extractGoVarDeclaration = (node: SyntaxNode, env: Map<string, string>): void => {\n  // Go var_declaration contains var_spec children\n  if (node.type === 'var_declaration') {\n    for (let i = 0; i < node.namedChildCount; i++) {\n      const spec = node.namedChild(i);\n      if (spec?.type === 'var_spec') extractGoVarDeclaration(spec, env);\n    }\n    return;\n  }\n\n  // var_spec: name type [= value]\n  const nameNode = node.childForFieldName('name');\n  const typeNode = node.childForFieldName('type');\n  if (!nameNode || !typeNode) return;\n  const varName = extractVarName(nameNode);\n  const typeName = extractSimpleTypeName(typeNode);\n  if (varName && typeName) env.set(varName, typeName);\n};\n\n/** Go: x := Foo{...} — infer type from composite literal (handles multi-assignment) */\nconst extractGoShortVarDeclaration = (node: SyntaxNode, env: Map<string, string>): void => {\n  const left = node.childForFieldName('left');\n  const right = node.childForFieldName('right');\n  if (!left || !right) return;\n\n  // Collect LHS names and RHS values (may be expression_lists for multi-assignment)\n  const lhsNodes: SyntaxNode[] = [];\n  const rhsNodes: SyntaxNode[] = [];\n\n  if (left.type === 'expression_list') {\n    for (let i = 0; i < left.namedChildCount; i++) {\n      const c = left.namedChild(i);\n      if (c) lhsNodes.push(c);\n    }\n  } else {\n    
lhsNodes.push(left);\n  }\n\n  if (right.type === 'expression_list') {\n    for (let i = 0; i < right.namedChildCount; i++) {\n      const c = right.namedChild(i);\n      if (c) rhsNodes.push(c);\n    }\n  } else {\n    rhsNodes.push(right);\n  }\n\n  // Pair each LHS name with its corresponding RHS value\n  const count = Math.min(lhsNodes.length, rhsNodes.length);\n  for (let i = 0; i < count; i++) {\n    let valueNode = rhsNodes[i];\n    // Unwrap &User{} — unary_expression (address-of) wrapping composite_literal\n    if (valueNode.type === 'unary_expression' && valueNode.firstNamedChild?.type === 'composite_literal') {\n      valueNode = valueNode.firstNamedChild;\n    }\n    // Go built-in new(User) — call_expression with 'new' callee and type argument\n    // Go built-in make([]User, 0) / make(map[string]User) — extract element/value type\n    if (valueNode.type === 'call_expression') {\n      const funcNode = valueNode.childForFieldName('function');\n      if (funcNode?.text === 'new') {\n        const args = valueNode.childForFieldName('arguments');\n        if (args?.firstNamedChild) {\n          const typeName = extractSimpleTypeName(args.firstNamedChild);\n          const varName = extractVarName(lhsNodes[i]);\n          if (varName && typeName) env.set(varName, typeName);\n        }\n      } else if (funcNode?.text === 'make') {\n        const args = valueNode.childForFieldName('arguments');\n        const firstArg = args?.firstNamedChild;\n        if (firstArg) {\n          let innerType: SyntaxNode | null = null;\n          if (firstArg.type === 'slice_type') {\n            innerType = firstArg.childForFieldName('element');\n          } else if (firstArg.type === 'map_type') {\n            innerType = firstArg.childForFieldName('value');\n          }\n          if (innerType) {\n            const typeName = extractSimpleTypeName(innerType);\n            const varName = extractVarName(lhsNodes[i]);\n            if (varName && typeName) env.set(varName, 
typeName);\n          }\n        }\n      }\n      continue;\n    }\n    // Go type assertion: user := iface.(User) — type_assertion_expression with 'type' field\n    if (valueNode.type === 'type_assertion_expression') {\n      const typeNode = valueNode.childForFieldName('type');\n      if (typeNode) {\n        const typeName = extractSimpleTypeName(typeNode);\n        const varName = extractVarName(lhsNodes[i]);\n        if (varName && typeName) env.set(varName, typeName);\n      }\n      continue;\n    }\n    if (valueNode.type !== 'composite_literal') continue;\n    const typeNode = valueNode.childForFieldName('type');\n    if (!typeNode) continue;\n    const typeName = extractSimpleTypeName(typeNode);\n    if (!typeName) continue;\n    const varName = extractVarName(lhsNodes[i]);\n    if (varName) env.set(varName, typeName);\n  }\n};\n\nconst extractDeclaration: TypeBindingExtractor = (node: SyntaxNode, env: Map<string, string>): void => {\n  if (node.type === 'var_declaration' || node.type === 'var_spec') {\n    extractGoVarDeclaration(node, env);\n  } else if (node.type === 'short_var_declaration') {\n    extractGoShortVarDeclaration(node, env);\n  }\n};\n\n/** Go: parameter → name type */\nconst extractParameter: ParameterExtractor = (node: SyntaxNode, env: Map<string, string>): void => {\n  let nameNode: SyntaxNode | null = null;\n  let typeNode: SyntaxNode | null = null;\n\n  if (node.type === 'parameter') {\n    nameNode = node.childForFieldName('name');\n    typeNode = node.childForFieldName('type');\n  } else {\n    nameNode = node.childForFieldName('name') ?? node.childForFieldName('pattern');\n    typeNode = node.childForFieldName('type');\n  }\n\n  if (!nameNode || !typeNode) return;\n  const varName = extractVarName(nameNode);\n  const typeName = extractSimpleTypeName(typeNode);\n  if (varName && typeName) env.set(varName, typeName);\n};\n\n/** Go: user := NewUser(...) 
— infer type from single-assignment call expression */\nconst scanConstructorBinding: ConstructorBindingScanner = (node) => {\n  if (node.type !== 'short_var_declaration') return undefined;\n  const left = node.childForFieldName('left');\n  const right = node.childForFieldName('right');\n  if (!left || !right) return undefined;\n  const leftIds = left.type === 'expression_list' ? left.namedChildren : [left];\n  const rightExprs = right.type === 'expression_list' ? right.namedChildren : [right];\n\n  // Multi-return: user, err := NewUser() — bind first var when second is err/ok/_\n  if (leftIds.length === 2 && rightExprs.length === 1) {\n    const secondVar = leftIds[1];\n    const isErrorOrDiscard =\n      secondVar.text === '_' ||\n      secondVar.text === 'err' ||\n      secondVar.text === 'ok' ||\n      secondVar.text === 'error';\n    if (isErrorOrDiscard && leftIds[0].type === 'identifier') {\n      if (rightExprs[0].type !== 'call_expression') return undefined;\n      const func = rightExprs[0].childForFieldName('function');\n      if (!func) return undefined;\n      if (func.text === 'new' || func.text === 'make') return undefined;\n      const calleeName = extractSimpleTypeName(func);\n      if (!calleeName) return undefined;\n      return { varName: leftIds[0].text, calleeName };\n    }\n  }\n\n  // Single assignment only\n  if (leftIds.length !== 1 || leftIds[0].type !== 'identifier') return undefined;\n  if (rightExprs.length !== 1 || rightExprs[0].type !== 'call_expression') return undefined;\n  const func = rightExprs[0].childForFieldName('function');\n  if (!func) return undefined;\n  // Skip new() and make() — already handled by extractDeclaration\n  if (func.text === 'new' || func.text === 'make') return undefined;\n  const calleeName = extractSimpleTypeName(func);\n  if (!calleeName) return undefined;\n  return { varName: leftIds[0].text, calleeName };\n};\n\nconst FOR_LOOP_NODE_TYPES: ReadonlySet<string> = new Set([\n  'for_statement',\n]);\n\n/** 
Go function/method node types that carry a parameter list. */\nconst GO_FUNCTION_NODE_TYPES = new Set([\n  'function_declaration', 'method_declaration', 'func_literal',\n]);\n\n/**\n * Extract element type from a Go type annotation AST node.\n * Handles:\n *   slice_type \"[]User\"  →  element field → type_identifier \"User\"\n *   array_type \"[10]User\" →  element field → type_identifier \"User\"\n * Falls back to text-based extraction via extractElementTypeFromString.\n */\nconst extractGoElementTypeFromTypeNode = (typeNode: SyntaxNode, pos: TypeArgPosition = 'last'): string | undefined => {\n  // slice_type: []User — element field is the element type\n  if (typeNode.type === 'slice_type' || typeNode.type === 'array_type') {\n    const elemNode = typeNode.childForFieldName('element');\n    if (elemNode) return extractSimpleTypeName(elemNode);\n  }\n  // map_type: map[string]User — value field is the element type (for range, second var gets value)\n  if (typeNode.type === 'map_type') {\n    const valueNode = typeNode.childForFieldName('value');\n    if (valueNode) return extractSimpleTypeName(valueNode);\n  }\n  // channel_type: chan User — the type argument is the element type\n  if (typeNode.type === 'channel_type') {\n    const valueNode = typeNode.childForFieldName('value') ?? typeNode.lastNamedChild;\n    if (valueNode) return extractSimpleTypeName(valueNode);\n  }\n  // generic_type: Go 1.18+ generics (e.g., MySlice[User], Cache[string, User])\n  // Use position-aware arg selection: 'first' for keys, 'last' for values.\n  if (typeNode.type === 'generic_type') {\n    const args = extractGenericTypeArgs(typeNode);\n    if (args.length >= 1) return pos === 'first' ? args[0] : args[args.length - 1];\n  }\n  // Fallback: text-based extraction ([]User → User, User[] → User)\n  return extractElementTypeFromString(typeNode.text, pos);\n};\n\n/** Check if a Go type node represents a channel type. 
Used to determine\n *  whether single-var range yields the element (channels) vs index (slices/maps). */\nconst isChannelType = (\n  iterableName: string,\n  scopeEnv: ReadonlyMap<string, string>,\n  declarationTypeNodes?: ReadonlyMap<string, SyntaxNode>,\n  scope?: string,\n): boolean => {\n  if (declarationTypeNodes && scope) {\n    const typeNode = declarationTypeNodes.get(`${scope}\\0${iterableName}`);\n    if (typeNode) return typeNode.type === 'channel_type';\n  }\n  const t = scopeEnv.get(iterableName);\n  return !!t && t.startsWith('chan ');\n};\n\n/**\n * Walk up the AST from a for-statement to find the enclosing function declaration,\n * then search its parameters for one named `iterableName`.\n * Returns the element type extracted from its type annotation, or undefined.\n *\n * Go parameter_declaration has:\n *   name field: identifier (the parameter name)\n *   type field: the type node (slice_type for []User)\n */\nconst findGoParamElementType = (iterableName: string, startNode: SyntaxNode, pos: TypeArgPosition = 'last'): string | undefined => {\n  let current: SyntaxNode | null = startNode.parent;\n  while (current) {\n    if (GO_FUNCTION_NODE_TYPES.has(current.type)) {\n      const paramsNode = current.childForFieldName('parameters');\n      if (paramsNode) {\n        for (let i = 0; i < paramsNode.namedChildCount; i++) {\n          const paramDecl = paramsNode.namedChild(i);\n          if (!paramDecl || paramDecl.type !== 'parameter_declaration') continue;\n          // parameter_declaration: name type — name field is the identifier\n          const nameNode = paramDecl.childForFieldName('name');\n          if (nameNode?.text === iterableName) {\n            const typeNode = paramDecl.childForFieldName('type');\n            if (typeNode) return extractGoElementTypeFromTypeNode(typeNode, pos);\n          }\n        }\n      }\n      break;\n    }\n    current = current.parent;\n  }\n  return undefined;\n};\n\n/**\n * Go: for _, user := range users 
where users has a known slice type.\n *\n * Go uses a single `for_statement` node for all for-loop forms. We detect\n * range-based loops by looking for a `range_clause` child node. C-style for\n * loops (with `for_clause`) and infinite loops (no clause) are ignored.\n *\n * Tier 1c: resolves the element type via three strategies in priority order:\n *   1. declarationTypeNodes — raw type annotation AST node\n *   2. scopeEnv string — extractElementTypeFromString on the stored type\n *   3. AST walk — walks up to the enclosing function's parameters to read []User directly\n * For `_, user := range users`, the loop variable is the second identifier in\n * the `left` expression_list (index is discarded, value is the element).\n */\nconst extractForLoopBinding: ForLoopExtractor = (node, { scopeEnv, declarationTypeNodes, scope, returnTypeLookup }): void => {\n  if (node.type !== 'for_statement') return;\n\n  // Find the range_clause child — this distinguishes range loops from other for forms.\n  let rangeClause: SyntaxNode | null = null;\n  for (let i = 0; i < node.namedChildCount; i++) {\n    const child = node.namedChild(i);\n    if (child?.type === 'range_clause') {\n      rangeClause = child;\n      break;\n    }\n  }\n  if (!rangeClause) return;\n\n  // The iterable is the `right` field of the range_clause.\n  const rightNode = rangeClause.childForFieldName('right');\n  let iterableName: string | undefined;\n  let callExprElementType: string | undefined;\n  if (rightNode?.type === 'identifier') {\n    iterableName = rightNode.text;\n  } else if (rightNode?.type === 'selector_expression') {\n    const field = rightNode.childForFieldName('field');\n    if (field) iterableName = field.text;\n  } else if (rightNode?.type === 'call_expression') {\n    // Range over a call result: `for _, v := range getItems()` or `for _, v := range repo.All()`\n    const funcNode = rightNode.childForFieldName('function');\n    let callee: string | undefined;\n    if (funcNode?.type === 
'identifier') {\n      callee = funcNode.text;\n    } else if (funcNode?.type === 'selector_expression') {\n      const field = funcNode.childForFieldName('field');\n      if (field) callee = field.text;\n    }\n    if (callee) {\n      const rawReturn = returnTypeLookup.lookupRawReturnType(callee);\n      if (rawReturn) callExprElementType = extractElementTypeFromString(rawReturn);\n    }\n  }\n  if (!iterableName && !callExprElementType) return;\n\n  let elementType: string | undefined;\n  if (callExprElementType) {\n    elementType = callExprElementType;\n  } else {\n    const containerTypeName = scopeEnv.get(iterableName!);\n    const typeArgPos = methodToTypeArgPosition(undefined, containerTypeName);\n    elementType = resolveIterableElementType(\n      iterableName!, node, scopeEnv, declarationTypeNodes, scope,\n      extractGoElementTypeFromTypeNode, findGoParamElementType,\n      typeArgPos,\n    );\n  }\n  if (!elementType) return;\n\n  // The loop variable(s) are in the `left` field.\n  // Go range semantics:\n  //   Slice/Array/String: single-var → INDEX (int); two-var → (index, element)\n  //   Map:                single-var → KEY; two-var → (key, value)\n  //   Channel:            single-var → ELEMENT (channels have no index)\n  const leftNode = rangeClause.childForFieldName('left');\n  if (!leftNode) return;\n\n  let loopVarNode: SyntaxNode | null = null;\n  if (leftNode.type === 'expression_list') {\n    if (leftNode.namedChildCount >= 2) {\n      // Two-var form: `_, user` or `i, user` — second variable gets element/value type\n      loopVarNode = leftNode.namedChild(1);\n    } else {\n      // Single-var in expression_list — yields the index (slices/arrays) or key (maps); channels yield the element.\n      // For call-expression iterables (iterableName undefined), conservative: treat as non-channel.\n      // Channels are rarely returned from function calls, and even if they were, skipping here\n      // just means we miss a binding rather than create an incorrect one.\n  
    if (iterableName && isChannelType(iterableName, scopeEnv, declarationTypeNodes, scope)) {\n        loopVarNode = leftNode.namedChild(0);\n      } else {\n        return; // index-only range on slice/map — skip\n      }\n    }\n  } else {\n    // Plain identifier (single-var form without expression_list)\n    // For call-expression iterables (iterableName undefined), conservative: treat as non-channel.\n    // Channels are rarely returned from function calls, and even if they were, skipping here\n    // just means we miss a binding rather than create an incorrect one.\n    if (iterableName && isChannelType(iterableName, scopeEnv, declarationTypeNodes, scope)) {\n      loopVarNode = leftNode;\n    } else {\n      return; // index-only range on slice/map — skip\n    }\n  }\n  if (!loopVarNode) return;\n\n  // Skip the blank identifier `_`\n  if (loopVarNode.text === '_') return;\n\n  const loopVarName = extractVarName(loopVarNode);\n  if (loopVarName) scopeEnv.set(loopVarName, elementType);\n};\n\n/** Go: alias := u (short_var_declaration) or var b = u (var_spec) */\nconst extractPendingAssignment: PendingAssignmentExtractor = (node, scopeEnv) => {\n  if (node.type === 'short_var_declaration') {\n    const left = node.childForFieldName('left');\n    const right = node.childForFieldName('right');\n    if (!left || !right) return undefined;\n    const lhsNode = left.type === 'expression_list' ? left.firstNamedChild : left;\n    const rhsNode = right.type === 'expression_list' ? 
right.firstNamedChild : right;\n    if (!lhsNode || !rhsNode) return undefined;\n    if (lhsNode.type !== 'identifier') return undefined;\n    const lhs = lhsNode.text;\n    if (scopeEnv.has(lhs)) return undefined;\n    if (rhsNode.type === 'identifier') return { kind: 'copy', lhs, rhs: rhsNode.text };\n    // selector_expression RHS → fieldAccess (a.field)\n    if (rhsNode.type === 'selector_expression') {\n      const operand = rhsNode.childForFieldName('operand');\n      const field = rhsNode.childForFieldName('field');\n      if (operand?.type === 'identifier' && field) {\n        return { kind: 'fieldAccess', lhs, receiver: operand.text, field: field.text };\n      }\n    }\n    // call_expression RHS\n    if (rhsNode.type === 'call_expression') {\n      const funcNode = rhsNode.childForFieldName('function');\n      if (funcNode?.type === 'identifier') {\n        return { kind: 'callResult', lhs, callee: funcNode.text };\n      }\n      // method call with receiver: call_expression → function: selector_expression\n      if (funcNode?.type === 'selector_expression') {\n        const operand = funcNode.childForFieldName('operand');\n        const field = funcNode.childForFieldName('field');\n        if (operand?.type === 'identifier' && field) {\n          return { kind: 'methodCallResult', lhs, receiver: operand.text, method: field.text };\n        }\n      }\n    }\n    return undefined;\n  }\n  if (node.type === 'var_spec' || node.type === 'var_declaration') {\n    // var_declaration contains var_spec children; var_spec has name + expression_list value\n    const specs: SyntaxNode[] = [];\n    if (node.type === 'var_declaration') {\n      for (let i = 0; i < node.namedChildCount; i++) {\n        const c = node.namedChild(i);\n        if (c?.type === 'var_spec') specs.push(c);\n      }\n    } else {\n      specs.push(node);\n    }\n    for (const spec of specs) {\n      const nameNode = spec.childForFieldName('name');\n      if (!nameNode || nameNode.type !== 
'identifier') continue;\n      const lhs = nameNode.text;\n      if (scopeEnv.has(lhs)) continue;\n      // Check if the last named child is a bare identifier (no type annotation between name and value)\n      let exprList: SyntaxNode | null = null;\n      for (let i = 0; i < spec.childCount; i++) {\n        if (spec.child(i)?.type === 'expression_list') { exprList = spec.child(i); break; }\n      }\n      const rhsNode = exprList?.firstNamedChild;\n      if (rhsNode?.type === 'identifier') return { kind: 'copy', lhs, rhs: rhsNode.text };\n      // selector_expression RHS → fieldAccess\n      if (rhsNode?.type === 'selector_expression') {\n        const operand = rhsNode.childForFieldName('operand');\n        const field = rhsNode.childForFieldName('field');\n        if (operand?.type === 'identifier' && field) {\n          return { kind: 'fieldAccess', lhs, receiver: operand.text, field: field.text };\n        }\n      }\n      // call_expression RHS\n      if (rhsNode?.type === 'call_expression') {\n        const funcNode = rhsNode.childForFieldName('function');\n        if (funcNode?.type === 'identifier') {\n          return { kind: 'callResult', lhs, callee: funcNode.text };\n        }\n        if (funcNode?.type === 'selector_expression') {\n          const operand = funcNode.childForFieldName('operand');\n          const field = funcNode.childForFieldName('field');\n          if (operand?.type === 'identifier' && field) {\n            return { kind: 'methodCallResult', lhs, receiver: operand.text, method: field.text };\n          }\n        }\n      }\n    }\n  }\n  return undefined;\n};\n\nexport const typeConfig: LanguageTypeConfig = {\n  declarationNodeTypes: DECLARATION_NODE_TYPES,\n  forLoopNodeTypes: FOR_LOOP_NODE_TYPES,\n  extractDeclaration,\n  extractParameter,\n  scanConstructorBinding,\n  extractForLoopBinding,\n  extractPendingAssignment,\n};\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/type-extractors/index.ts",
    "content": "/**\n * Per-language type extraction configurations.\n * Assembled here into a dispatch map keyed by SupportedLanguages.\n */\n\nimport { SupportedLanguages } from '../../../config/supported-languages.js';\nimport type { LanguageTypeConfig } from './types.js';\n\nimport { typeConfig as typescriptConfig } from './typescript.js';\nimport { javaTypeConfig, kotlinTypeConfig } from './jvm.js';\nimport { typeConfig as csharpConfig } from './csharp.js';\nimport { typeConfig as goConfig } from './go.js';\nimport { typeConfig as rustConfig } from './rust.js';\nimport { typeConfig as pythonConfig } from './python.js';\nimport { typeConfig as swiftConfig } from './swift.js';\nimport { typeConfig as cCppConfig } from './c-cpp.js';\nimport { typeConfig as phpConfig } from './php.js';\nimport { typeConfig as rubyConfig } from './ruby.js';\n\nexport const typeConfigs = {\n  [SupportedLanguages.JavaScript]: typescriptConfig,\n  [SupportedLanguages.TypeScript]: typescriptConfig,\n  [SupportedLanguages.Java]: javaTypeConfig,\n  [SupportedLanguages.Kotlin]: kotlinTypeConfig,\n  [SupportedLanguages.CSharp]: csharpConfig,\n  [SupportedLanguages.Go]: goConfig,\n  [SupportedLanguages.Rust]: rustConfig,\n  [SupportedLanguages.Python]: pythonConfig,\n  [SupportedLanguages.Swift]: swiftConfig,\n  [SupportedLanguages.C]: cCppConfig,\n  [SupportedLanguages.CPlusPlus]: cCppConfig,\n  [SupportedLanguages.PHP]: phpConfig,\n  [SupportedLanguages.Ruby]: rubyConfig,\n} satisfies Record<SupportedLanguages, LanguageTypeConfig>;\n\nexport type {\n  LanguageTypeConfig,\n  TypeBindingExtractor,\n  ParameterExtractor,\n  ConstructorBindingScanner,\n  ForLoopExtractor,\n  PendingAssignmentExtractor,\n  PatternBindingExtractor,\n} from './types.js';\nexport { \n  TYPED_PARAMETER_TYPES,\n  extractSimpleTypeName,\n  extractGenericTypeArgs,\n  extractVarName,\n  findChildByType,\n  extractRubyConstructorAssignment\n} from './shared.js';\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/type-extractors/jvm.ts",
    "content": "import type { SyntaxNode } from '../utils.js';\nimport type { LanguageTypeConfig, ParameterExtractor, TypeBindingExtractor, InitializerExtractor, ClassNameLookup, ConstructorBindingScanner, ForLoopExtractor, PendingAssignmentExtractor, PatternBindingExtractor, LiteralTypeInferrer, ConstructorTypeDetector } from './types.js';\nimport { extractSimpleTypeName, extractVarName, findChildByType, extractGenericTypeArgs, resolveIterableElementType, methodToTypeArgPosition, extractElementTypeFromString, type TypeArgPosition } from './shared.js';\n\n// ── Java ──────────────────────────────────────────────────────────────────\n\nconst JAVA_DECLARATION_NODE_TYPES: ReadonlySet<string> = new Set([\n  'local_variable_declaration',\n  'field_declaration',\n]);\n\n/** Java: Type x = ...; Type x; */\nconst extractJavaDeclaration: TypeBindingExtractor = (node: SyntaxNode, env: Map<string, string>): void => {\n  const typeNode = node.childForFieldName('type');\n  if (!typeNode) return;\n  const typeName = extractSimpleTypeName(typeNode);\n  if (!typeName || typeName === 'var') return; // skip Java 10 var — handled by extractInitializer\n\n  // Find variable_declarator children\n  for (let i = 0; i < node.namedChildCount; i++) {\n    const child = node.namedChild(i);\n    if (child?.type !== 'variable_declarator') continue;\n    const nameNode = child.childForFieldName('name');\n    if (nameNode) {\n      const varName = extractVarName(nameNode);\n      if (varName) env.set(varName, typeName);\n    }\n  }\n};\n\n/** Java 10+: var x = new User() — infer type from object_creation_expression */\nconst extractJavaInitializer: InitializerExtractor = (node: SyntaxNode, env: Map<string, string>, _classNames: ClassNameLookup): void => {\n  for (let i = 0; i < node.namedChildCount; i++) {\n    const child = node.namedChild(i);\n    if (child?.type !== 'variable_declarator') continue;\n    const nameNode = child.childForFieldName('name');\n    const valueNode = 
child.childForFieldName('value');\n    if (!nameNode || !valueNode) continue;\n    // Skip declarators that already have a binding from extractDeclaration\n    const varName = extractVarName(nameNode);\n    if (!varName || env.has(varName)) continue;\n    if (valueNode.type !== 'object_creation_expression') continue;\n    const ctorType = valueNode.childForFieldName('type');\n    if (!ctorType) continue;\n    const typeName = extractSimpleTypeName(ctorType);\n    if (typeName) env.set(varName, typeName);\n  }\n};\n\n/** Java: formal_parameter → type name */\nconst extractJavaParameter: ParameterExtractor = (node: SyntaxNode, env: Map<string, string>): void => {\n  let nameNode: SyntaxNode | null = null;\n  let typeNode: SyntaxNode | null = null;\n\n  if (node.type === 'formal_parameter') {\n    typeNode = node.childForFieldName('type');\n    nameNode = node.childForFieldName('name');\n  } else {\n    // Generic fallback\n    nameNode = node.childForFieldName('name') ?? node.childForFieldName('pattern');\n    typeNode = node.childForFieldName('type');\n  }\n\n  if (!nameNode || !typeNode) return;\n  const varName = extractVarName(nameNode);\n  const typeName = extractSimpleTypeName(typeNode);\n  if (varName && typeName) env.set(varName, typeName);\n};\n\n/** Java: var x = SomeFactory.create() — constructor binding for `var` with method_invocation */\nconst scanJavaConstructorBinding: ConstructorBindingScanner = (node) => {\n  if (node.type !== 'local_variable_declaration') return undefined;\n  const typeNode = node.childForFieldName('type');\n  if (!typeNode) return undefined;\n  if (typeNode.text !== 'var') return undefined;\n  const declarator = findChildByType(node, 'variable_declarator');\n  if (!declarator) return undefined;\n  const nameNode = declarator.childForFieldName('name');\n  const value = declarator.childForFieldName('value');\n  if (!nameNode || !value) return undefined;\n  if (value.type === 'object_creation_expression') return undefined;\n  if 
(value.type !== 'method_invocation') return undefined;\n  const methodName = value.childForFieldName('name');\n  if (!methodName) return undefined;\n  return { varName: nameNode.text, calleeName: methodName.text };\n};\n\nconst JAVA_FOR_LOOP_NODE_TYPES: ReadonlySet<string> = new Set([\n  'enhanced_for_statement',\n]);\n\n/** Extract element type from a Java type annotation AST node.\n *  Handles generic_type (List<User>), array_type (User[]). */\nconst extractJavaElementTypeFromTypeNode = (typeNode: SyntaxNode, pos: TypeArgPosition = 'last'): string | undefined => {\n  if (typeNode.type === 'generic_type') {\n    const args = extractGenericTypeArgs(typeNode);\n    if (args.length >= 1) return pos === 'first' ? args[0] : args[args.length - 1];\n  }\n  if (typeNode.type === 'array_type') {\n    const elemNode = typeNode.firstNamedChild;\n    if (elemNode) return extractSimpleTypeName(elemNode);\n  }\n  return undefined;\n};\n\n/** Walk up from a for-each to the enclosing method_declaration and search parameters. 
*/\nconst findJavaParamElementType = (iterableName: string, startNode: SyntaxNode, pos: TypeArgPosition = 'last'): string | undefined => {\n  let current: SyntaxNode | null = startNode.parent;\n  while (current) {\n    if (current.type === 'method_declaration' || current.type === 'constructor_declaration') {\n      const paramsNode = current.childForFieldName('parameters');\n      if (paramsNode) {\n        for (let i = 0; i < paramsNode.namedChildCount; i++) {\n          const param = paramsNode.namedChild(i);\n          if (!param || param.type !== 'formal_parameter') continue;\n          const nameNode = param.childForFieldName('name');\n          if (nameNode?.text !== iterableName) continue;\n          const typeNode = param.childForFieldName('type');\n          if (typeNode) return extractJavaElementTypeFromTypeNode(typeNode, pos);\n        }\n      }\n      break;\n    }\n    current = current.parent;\n  }\n  return undefined;\n};\n\n/** Java: for (User user : users) — extract loop variable binding.\n *  Tier 1c: for `for (var user : users)`, resolves element type from iterable. 
*/\nconst extractJavaForLoopBinding: ForLoopExtractor = (node,  { scopeEnv, declarationTypeNodes, scope, returnTypeLookup }): void => {\n  const typeNode = node.childForFieldName('type');\n  const nameNode = node.childForFieldName('name');\n  if (!typeNode || !nameNode) return;\n  const varName = extractVarName(nameNode);\n  if (!varName) return;\n\n  // Explicit type (existing behavior): for (User user : users)\n  const typeName = extractSimpleTypeName(typeNode);\n  if (typeName && typeName !== 'var') {\n    scopeEnv.set(varName, typeName);\n    return;\n  }\n\n  // Tier 1c: var — resolve from iterable's container type\n  const iterableNode = node.childForFieldName('value');\n  if (!iterableNode) return;\n\n  let iterableName: string | undefined;\n  let methodName: string | undefined;\n  let callExprElementType: string | undefined;\n  if (iterableNode.type === 'identifier') {\n    iterableName = iterableNode.text;\n  } else if (iterableNode.type === 'field_access') {\n    const field = iterableNode.childForFieldName('field');\n    if (field) iterableName = field.text;\n  } else if (iterableNode.type === 'method_invocation') {\n    // data.keySet() → method_invocation > object: identifier + name: identifier\n    // Also handles this.data.values() → object is field_access, extract inner field name\n    const obj = iterableNode.childForFieldName('object');\n    const name = iterableNode.childForFieldName('name');\n    if (obj?.type === 'identifier') {\n      iterableName = obj.text;\n    } else if (obj?.type === 'field_access') {\n      const innerField = obj.childForFieldName('field');\n      if (innerField) iterableName = innerField.text;\n    } else if (!obj && name) {\n      // Direct function call: for (var u : getUsers()) — no receiver object\n      const rawReturn = returnTypeLookup.lookupRawReturnType(name.text);\n      if (rawReturn) callExprElementType = extractElementTypeFromString(rawReturn);\n    }\n    if (name) methodName = name.text;\n  }\n  if 
(!iterableName && !callExprElementType) return;\n\n  let elementType: string | undefined;\n  if (callExprElementType) {\n    elementType = callExprElementType;\n  } else {\n    const containerTypeName = scopeEnv.get(iterableName!);\n    const typeArgPos = methodToTypeArgPosition(methodName, containerTypeName);\n    elementType = resolveIterableElementType(\n      iterableName!, node, scopeEnv, declarationTypeNodes, scope,\n      extractJavaElementTypeFromTypeNode, findJavaParamElementType,\n      typeArgPos,\n    );\n  }\n  if (elementType) scopeEnv.set(varName, elementType);\n};\n\n/** Java: var alias = u → local_variable_declaration > variable_declarator with name/value */\nconst extractJavaPendingAssignment: PendingAssignmentExtractor = (node, scopeEnv) => {\n  for (let i = 0; i < node.namedChildCount; i++) {\n    const child = node.namedChild(i);\n    if (!child || child.type !== 'variable_declarator') continue;\n    const nameNode = child.childForFieldName('name');\n    const valueNode = child.childForFieldName('value');\n    if (!nameNode || !valueNode) continue;\n    const lhs = nameNode.text;\n    if (scopeEnv.has(lhs)) continue;\n    if (valueNode.type === 'identifier' || valueNode.type === 'simple_identifier') return { kind: 'copy', lhs, rhs: valueNode.text };\n    // field_access RHS → fieldAccess (a.field)\n    if (valueNode.type === 'field_access') {\n      const obj = valueNode.childForFieldName('object');\n      const field = valueNode.childForFieldName('field');\n      if (obj?.type === 'identifier' && field) {\n        return { kind: 'fieldAccess', lhs, receiver: obj.text, field: field.text };\n      }\n    }\n    // method_invocation RHS\n    if (valueNode.type === 'method_invocation') {\n      const objField = valueNode.childForFieldName('object');\n      if (!objField) {\n        // No receiver → callResult\n        const nameField = valueNode.childForFieldName('name');\n        if (nameField?.type === 'identifier') {\n          return { kind: 
'callResult', lhs, callee: nameField.text };\n        }\n      } else if (objField.type === 'identifier') {\n        // With receiver → methodCallResult\n        const nameField = valueNode.childForFieldName('name');\n        if (nameField?.type === 'identifier') {\n          return { kind: 'methodCallResult', lhs, receiver: objField.text, method: nameField.text };\n        }\n      }\n    }\n  }\n  return undefined;\n};\n\n/**\n * Java 16+ `instanceof` pattern variable: `x instanceof User user`\n *\n * AST structure:\n *   instanceof_expression\n *     left: expression (the variable being tested)\n *     instanceof keyword\n *     right: type (the type to test against)\n *     name: identifier (the pattern variable — optional, Java 16+)\n *\n * Conservative: returns undefined when the `name` field is absent (plain instanceof\n * without pattern variable, e.g. `x instanceof User`) or when the type cannot be\n * extracted. The source variable's existing type is NOT used — the pattern explicitly\n * declares the new type, so no scopeEnv lookup is needed.\n */\nconst extractJavaPatternBinding: PatternBindingExtractor = (node) => {\n  if (node.type === 'type_pattern') {\n    // Java 17+ switch pattern: case User u -> ...\n    // type_pattern has positional children (NO named fields):\n    //   namedChild(0) = type (type_identifier, e.g., User)\n    //   namedChild(1) = identifier (e.g., u)\n    const typeNode = node.namedChild(0);\n    const nameNode = node.namedChild(1);\n    if (!typeNode || !nameNode) return undefined;\n    const typeName = extractSimpleTypeName(typeNode);\n    const varName = extractVarName(nameNode);\n    if (!typeName || !varName) return undefined;\n    return { varName, typeName };\n  }\n  if (node.type !== 'instanceof_expression') return undefined;\n  const nameNode = node.childForFieldName('name');\n  if (!nameNode) return undefined;\n  const typeNode = node.childForFieldName('right');\n  if (!typeNode) return undefined;\n  const typeName = 
extractSimpleTypeName(typeNode);\n  const varName = extractVarName(nameNode);\n  if (!typeName || !varName) return undefined;\n  return { varName, typeName };\n};\n\n/** Infer the type of a literal AST node for Java/Kotlin overload disambiguation. */\nconst inferJvmLiteralType: LiteralTypeInferrer = (node) => {\n  switch (node.type) {\n    case 'decimal_integer_literal':\n    case 'integer_literal':\n    case 'hex_integer_literal':\n    case 'octal_integer_literal':\n    case 'binary_integer_literal':\n      // Check for long suffix\n      if (node.text.endsWith('L') || node.text.endsWith('l')) return 'long';\n      return 'int';\n    case 'decimal_floating_point_literal':\n    case 'real_literal':\n      if (node.text.endsWith('f') || node.text.endsWith('F')) return 'float';\n      return 'double';\n    case 'string_literal':\n    case 'line_string_literal':\n    case 'multi_line_string_literal':\n      return 'String';\n    case 'character_literal':\n      return 'char';\n    case 'true':\n    case 'false':\n    case 'boolean_literal':\n      return 'boolean';\n    case 'null_literal':\n      return 'null';\n    default:\n      return undefined;\n  }\n};\n\nexport const javaTypeConfig: LanguageTypeConfig = {\n  declarationNodeTypes: JAVA_DECLARATION_NODE_TYPES,\n  forLoopNodeTypes: JAVA_FOR_LOOP_NODE_TYPES,\n  patternBindingNodeTypes: new Set(['instanceof_expression', 'type_pattern']),\n  extractDeclaration: extractJavaDeclaration,\n  extractParameter: extractJavaParameter,\n  extractInitializer: extractJavaInitializer,\n  scanConstructorBinding: scanJavaConstructorBinding,\n  extractForLoopBinding: extractJavaForLoopBinding,\n  extractPendingAssignment: extractJavaPendingAssignment,\n  extractPatternBinding: extractJavaPatternBinding,\n  inferLiteralType: inferJvmLiteralType,\n};\n\n// ── Kotlin ────────────────────────────────────────────────────────────────\n\nconst KOTLIN_DECLARATION_NODE_TYPES: ReadonlySet<string> = new Set([\n  'property_declaration',\n  
'variable_declaration',\n]);\n\n/** Kotlin: val x: Foo = ... */\nconst extractKotlinDeclaration: TypeBindingExtractor = (node: SyntaxNode, env: Map<string, string>): void => {\n  if (node.type === 'property_declaration') {\n    // Kotlin property_declaration: name/type are inside a variable_declaration child\n    const varDecl = findChildByType(node, 'variable_declaration');\n    if (varDecl) {\n      const nameNode = findChildByType(varDecl, 'simple_identifier');\n      const typeNode = findChildByType(varDecl, 'user_type')\n        ?? findChildByType(varDecl, 'nullable_type');\n      if (!nameNode || !typeNode) return;\n      const varName = extractVarName(nameNode);\n      const typeName = extractSimpleTypeName(typeNode);\n      if (varName && typeName) env.set(varName, typeName);\n      return;\n    }\n    // Fallback: try direct fields\n    const nameNode = node.childForFieldName('name')\n      ?? findChildByType(node, 'simple_identifier');\n    const typeNode = node.childForFieldName('type')\n      ?? findChildByType(node, 'user_type');\n    if (!nameNode || !typeNode) return;\n    const varName = extractVarName(nameNode);\n    const typeName = extractSimpleTypeName(typeNode);\n    if (varName && typeName) env.set(varName, typeName);\n  } else if (node.type === 'variable_declaration') {\n    // variable_declaration directly inside functions\n    const nameNode = findChildByType(node, 'simple_identifier');\n    const typeNode = findChildByType(node, 'user_type');\n    if (nameNode && typeNode) {\n      const varName = extractVarName(nameNode);\n      const typeName = extractSimpleTypeName(typeNode);\n      if (varName && typeName) env.set(varName, typeName);\n    }\n  }\n};\n\n/** Kotlin: parameter / formal_parameter → type name.\n *  Kotlin's tree-sitter grammar uses positional children (simple_identifier, user_type)\n *  rather than named fields (name, type) on `parameter` nodes, so we fall back to\n *  findChildByType when childForFieldName returns null. 
*/\nconst extractKotlinParameter: ParameterExtractor = (node: SyntaxNode, env: Map<string, string>): void => {\n  let nameNode: SyntaxNode | null = null;\n  let typeNode: SyntaxNode | null = null;\n\n  if (node.type === 'formal_parameter') {\n    typeNode = node.childForFieldName('type');\n    nameNode = node.childForFieldName('name');\n  } else {\n    nameNode = node.childForFieldName('name') ?? node.childForFieldName('pattern');\n    typeNode = node.childForFieldName('type');\n  }\n\n  // Fallback: Kotlin `parameter` nodes use positional children, not named fields\n  if (!nameNode) nameNode = findChildByType(node, 'simple_identifier');\n  if (!typeNode) typeNode = findChildByType(node, 'user_type')\n    ?? findChildByType(node, 'nullable_type');\n\n  if (!nameNode || !typeNode) return;\n  const varName = extractVarName(nameNode);\n  const typeName = extractSimpleTypeName(typeNode);\n  if (varName && typeName) env.set(varName, typeName);\n};\n\n/** Find the constructor callee name in a Kotlin property_declaration's initializer.\n *  Returns the class name if the callee is a verified class constructor, undefined otherwise. */\nconst findKotlinConstructorCallee = (node: SyntaxNode, classNames: ClassNameLookup): string | undefined => {\n  if (node.type !== 'property_declaration') return undefined;\n  const value = node.childForFieldName('value')\n    ?? findChildByType(node, 'call_expression');\n  if (!value || value.type !== 'call_expression') return undefined;\n  const callee = value.firstNamedChild;\n  if (!callee || callee.type !== 'simple_identifier') return undefined;\n  const calleeName = callee.text;\n  if (!calleeName || !classNames.has(calleeName)) return undefined;\n  return calleeName;\n};\n\n/** Kotlin: val user = User() — infer type from call_expression when callee is a known class.\n *  Kotlin constructors are syntactically identical to function calls, so we verify\n *  against classNames (which may include cross-file SymbolTable lookups). 
*/\nconst extractKotlinInitializer: InitializerExtractor = (node: SyntaxNode, env: Map<string, string>, classNames: ClassNameLookup): void => {\n  // Skip if there's an explicit type annotation — Tier 0 already handled it\n  const varDecl = findChildByType(node, 'variable_declaration');\n  if (varDecl && findChildByType(varDecl, 'user_type')) return;\n\n  const calleeName = findKotlinConstructorCallee(node, classNames);\n  if (!calleeName) return;\n\n  // Extract the variable name from the variable_declaration inside property_declaration\n  const nameNode = varDecl\n    ? findChildByType(varDecl, 'simple_identifier')\n    : findChildByType(node, 'simple_identifier');\n  if (!nameNode) return;\n\n  const varName = extractVarName(nameNode);\n  if (varName) env.set(varName, calleeName);\n};\n\n/** Kotlin: detect constructor type from call_expression in typed declarations.\n *  Unlike extractKotlinInitializer (which SKIPS typed declarations), this detects\n *  the constructor type EVEN when a type annotation exists, enabling virtual dispatch\n *  for patterns like `val a: Animal = Dog()`. */\nconst detectKotlinConstructorType: ConstructorTypeDetector = (node, classNames) => {\n  return findKotlinConstructorCallee(node, classNames);\n};\n\n/** Kotlin: val x = User(...) 
— constructor binding for property_declaration with call_expression */\nconst scanKotlinConstructorBinding: ConstructorBindingScanner = (node) => {\n  if (node.type !== 'property_declaration') return undefined;\n  const varDecl = findChildByType(node, 'variable_declaration');\n  if (!varDecl) return undefined;\n  if (findChildByType(varDecl, 'user_type')) return undefined;\n  const callExpr = findChildByType(node, 'call_expression');\n  if (!callExpr) return undefined;\n  const callee = callExpr.firstNamedChild;\n  if (!callee) return undefined;\n\n  let calleeName: string | undefined;\n  if (callee.type === 'simple_identifier') {\n    calleeName = callee.text;\n  } else if (callee.type === 'navigation_expression') {\n    // Extract method name from qualified call: service.getUser() → getUser\n    const suffix = callee.lastNamedChild;\n    if (suffix?.type === 'navigation_suffix') {\n      const methodName = suffix.lastNamedChild;\n      if (methodName?.type === 'simple_identifier') {\n        calleeName = methodName.text;\n      }\n    }\n  }\n  if (!calleeName) return undefined;\n  const nameNode = findChildByType(varDecl, 'simple_identifier');\n  if (!nameNode) return undefined;\n  return { varName: nameNode.text, calleeName };\n};\n\nconst KOTLIN_FOR_LOOP_NODE_TYPES: ReadonlySet<string> = new Set([\n  'for_statement',\n]);\n\n/** Extract element type from a Kotlin type annotation AST node (user_type wrapping generic).\n *  Kotlin: user_type → [type_identifier, type_arguments → [type_projection → user_type]]\n *  Handles the type_projection wrapper that Kotlin uses for generic type arguments. */\nconst extractKotlinElementTypeFromTypeNode = (typeNode: SyntaxNode, pos: TypeArgPosition = 'last'): string | undefined => {\n  if (typeNode.type === 'user_type') {\n    const argsNode = findChildByType(typeNode, 'type_arguments');\n    if (argsNode && argsNode.namedChildCount >= 1) {\n      const targetArg = pos === 'first'\n        ? 
argsNode.namedChild(0)\n        : argsNode.namedChild(argsNode.namedChildCount - 1);\n      if (!targetArg) return undefined;\n      // Kotlin wraps type args in type_projection — unwrap to get the inner type\n      const inner = targetArg.type === 'type_projection'\n        ? targetArg.firstNamedChild\n        : targetArg;\n      if (inner) return extractSimpleTypeName(inner);\n    }\n  }\n  return undefined;\n};\n\n/** Walk up from a for-loop to the enclosing function_declaration and search parameters.\n *  Kotlin parameters use positional children (simple_identifier, user_type), not named fields. */\nconst findKotlinParamElementType = (iterableName: string, startNode: SyntaxNode, pos: TypeArgPosition = 'last'): string | undefined => {\n  let current: SyntaxNode | null = startNode.parent;\n  while (current) {\n    if (current.type === 'function_declaration') {\n      const paramsNode = findChildByType(current, 'function_value_parameters');\n      if (paramsNode) {\n        for (let i = 0; i < paramsNode.namedChildCount; i++) {\n          const param = paramsNode.namedChild(i);\n          if (!param || param.type !== 'parameter') continue;\n          const nameNode = findChildByType(param, 'simple_identifier');\n          if (nameNode?.text !== iterableName) continue;\n          const typeNode = findChildByType(param, 'user_type');\n          if (typeNode) return extractKotlinElementTypeFromTypeNode(typeNode, pos);\n        }\n      }\n      break;\n    }\n    current = current.parent;\n  }\n  return undefined;\n};\n\n/** Kotlin: for (user: User in users) — extract loop variable binding.\n *  Tier 1c: for `for (user in users)` without annotation, resolves from iterable. 
*/\nconst extractKotlinForLoopBinding: ForLoopExtractor = (node, ctx): void => {\n  const { scopeEnv, declarationTypeNodes, scope, returnTypeLookup } = ctx;\n  const varDecl = findChildByType(node, 'variable_declaration');\n  if (!varDecl) return;\n  const nameNode = findChildByType(varDecl, 'simple_identifier');\n  if (!nameNode) return;\n  const varName = extractVarName(nameNode);\n  if (!varName) return;\n\n  // Explicit type annotation (existing behavior): for (user: User in users)\n  const typeNode = findChildByType(varDecl, 'user_type');\n  if (typeNode) {\n    const typeName = extractSimpleTypeName(typeNode);\n    if (typeName) scopeEnv.set(varName, typeName);\n    return;\n  }\n\n  // Tier 1c: no annotation — resolve from iterable's container type\n  // Kotlin for-loop children: [variable_declaration, iterable_expr, control_structure_body]\n  // The iterable is the second named child of the for_statement (after variable_declaration)\n  let iterableName: string | undefined;\n  let methodName: string | undefined;\n  let fallbackIterableName: string | undefined;\n  let callExprElementType: string | undefined;\n  let foundVarDecl = false;\n  for (let i = 0; i < node.namedChildCount; i++) {\n    const child = node.namedChild(i);\n    if (child === varDecl) { foundVarDecl = true; continue; }\n    if (!foundVarDecl || !child) continue;\n    if (child.type === 'simple_identifier') {\n      iterableName = child.text;\n      break;\n    }\n    if (child.type === 'navigation_expression') {\n      // data.keys → navigation_expression > simple_identifier(data) + navigation_suffix > simple_identifier(keys)\n      const obj = child.firstNamedChild;\n      const suffix = findChildByType(child, 'navigation_suffix');\n      const prop = suffix ? findChildByType(suffix, 'simple_identifier') : null;\n      const hasCallSuffix = suffix ? 
findChildByType(suffix, 'call_suffix') !== null : false;\n      // Always try object as iterable + property as method first (handles data.values, data.keys).\n      // For bare property access without call_suffix, also save property as fallback\n      // (handles this.users, repo.items where the property IS the iterable).\n      if (obj?.type === 'simple_identifier') iterableName = obj.text;\n      if (prop) methodName = prop.text;\n      if (!hasCallSuffix && prop) {\n        fallbackIterableName = prop.text;\n      }\n      break;\n    }\n    if (child.type === 'call_expression') {\n      // data.values() → call_expression > navigation_expression > simple_identifier + navigation_suffix\n      const callee = child.firstNamedChild;\n      if (callee?.type === 'navigation_expression') {\n        const obj = callee.firstNamedChild;\n        if (obj?.type === 'simple_identifier') iterableName = obj.text;\n        const suffix = findChildByType(callee, 'navigation_suffix');\n        if (suffix) {\n          const prop = findChildByType(suffix, 'simple_identifier');\n          if (prop) methodName = prop.text;\n        }\n      } else if (callee?.type === 'simple_identifier') {\n        // Direct function call: for (u in getUsers())\n        const rawReturn = returnTypeLookup.lookupRawReturnType(callee.text);\n        if (rawReturn) callExprElementType = extractElementTypeFromString(rawReturn);\n      }\n      break;\n    }\n  }\n  if (!iterableName && !callExprElementType) return;\n\n  let elementType: string | undefined;\n  if (callExprElementType) {\n    elementType = callExprElementType;\n  } else {\n    let containerTypeName = scopeEnv.get(iterableName!);\n    // Fallback: if object has no type in scope, try the property as the iterable name.\n    // Handles patterns like this.users where the property itself is the iterable variable.\n    if (!containerTypeName && fallbackIterableName) {\n      iterableName = fallbackIterableName;\n      methodName = undefined;\n   
   containerTypeName = scopeEnv.get(iterableName);\n    }\n    const typeArgPos = methodToTypeArgPosition(methodName, containerTypeName);\n    elementType = resolveIterableElementType(\n      iterableName!, node, scopeEnv, declarationTypeNodes, scope,\n      extractKotlinElementTypeFromTypeNode, findKotlinParamElementType,\n      typeArgPos,\n    );\n  }\n  if (elementType) scopeEnv.set(varName, elementType);\n};\n\n/** Kotlin: val alias = u → property_declaration or variable_declaration.\n *  property_declaration has: binding_pattern_kind(\"val\"), variable_declaration(\"alias\"),\n *  \"=\", and the RHS value (simple_identifier \"u\").\n *  variable_declaration appears directly inside functions and has simple_identifier children. */\nconst extractKotlinPendingAssignment: PendingAssignmentExtractor = (node, scopeEnv) => {\n  if (node.type === 'property_declaration') {\n    // Find the variable name from variable_declaration child\n    const varDecl = findChildByType(node, 'variable_declaration');\n    if (!varDecl) return undefined;\n    const nameNode = varDecl.firstNamedChild;\n    if (!nameNode || nameNode.type !== 'simple_identifier') return undefined;\n    const lhs = nameNode.text;\n    if (scopeEnv.has(lhs)) return undefined;\n    // Find the RHS after the \"=\" token\n    let foundEq = false;\n    for (let i = 0; i < node.childCount; i++) {\n      const child = node.child(i);\n      if (!child) continue;\n      if (child.type === '=') { foundEq = true; continue; }\n      if (foundEq && child.type === 'simple_identifier') {\n        return { kind: 'copy', lhs, rhs: child.text };\n      }\n      // navigation_expression RHS → fieldAccess (a.field)\n      if (foundEq && child.type === 'navigation_expression') {\n        const recv = child.firstNamedChild;\n        const suffix = child.lastNamedChild;\n        const fieldNode = suffix?.type === 'navigation_suffix' ? 
suffix.lastNamedChild : suffix;\n        if (recv?.type === 'simple_identifier' && fieldNode?.type === 'simple_identifier') {\n          return { kind: 'fieldAccess', lhs, receiver: recv.text, field: fieldNode.text };\n        }\n      }\n      // call_expression RHS\n      if (foundEq && child.type === 'call_expression') {\n        const calleeNode = child.firstNamedChild;\n        if (calleeNode?.type === 'simple_identifier') {\n          return { kind: 'callResult', lhs, callee: calleeNode.text };\n        }\n        // navigation_expression callee → methodCallResult (a.method())\n        if (calleeNode?.type === 'navigation_expression') {\n          const recv = calleeNode.firstNamedChild;\n          const suffix = calleeNode.lastNamedChild;\n          const methodNode = suffix?.type === 'navigation_suffix' ? suffix.lastNamedChild : suffix;\n          if (recv?.type === 'simple_identifier' && methodNode?.type === 'simple_identifier') {\n            return { kind: 'methodCallResult', lhs, receiver: recv.text, method: methodNode.text };\n          }\n        }\n      }\n    }\n    return undefined;\n  }\n\n  if (node.type === 'variable_declaration') {\n    // variable_declaration directly inside functions: simple_identifier children\n    const nameNode = findChildByType(node, 'simple_identifier');\n    if (!nameNode) return undefined;\n    const lhs = nameNode.text;\n    if (scopeEnv.has(lhs)) return undefined;\n    // Look for RHS after \"=\" in the parent (property_declaration)\n    const parent = node.parent;\n    if (!parent) return undefined;\n    let foundEq = false;\n    for (let i = 0; i < parent.childCount; i++) {\n      const child = parent.child(i);\n      if (!child) continue;\n      if (child.type === '=') { foundEq = true; continue; }\n      if (foundEq && child.type === 'simple_identifier') {\n        return { kind: 'copy', lhs, rhs: child.text };\n      }\n      if (foundEq && child.type === 'navigation_expression') {\n        const recv = 
child.firstNamedChild;\n        const suffix = child.lastNamedChild;\n        const fieldNode = suffix?.type === 'navigation_suffix' ? suffix.lastNamedChild : suffix;\n        if (recv?.type === 'simple_identifier' && fieldNode?.type === 'simple_identifier') {\n          return { kind: 'fieldAccess', lhs, receiver: recv.text, field: fieldNode.text };\n        }\n      }\n      if (foundEq && child.type === 'call_expression') {\n        const calleeNode = child.firstNamedChild;\n        if (calleeNode?.type === 'simple_identifier') {\n          return { kind: 'callResult', lhs, callee: calleeNode.text };\n        }\n        if (calleeNode?.type === 'navigation_expression') {\n          const recv = calleeNode.firstNamedChild;\n          const suffix = calleeNode.lastNamedChild;\n          const methodNode = suffix?.type === 'navigation_suffix' ? suffix.lastNamedChild : suffix;\n          if (recv?.type === 'simple_identifier' && methodNode?.type === 'simple_identifier') {\n            return { kind: 'methodCallResult', lhs, receiver: recv.text, method: methodNode.text };\n          }\n        }\n      }\n    }\n    return undefined;\n  }\n\n  return undefined;\n};\n\n/** Walk up from a node to find an ancestor of a given type. 
*/\nconst findAncestorByType = (node: SyntaxNode, type: string): SyntaxNode | undefined => {\n  let current = node.parent;\n  while (current) {\n    if (current.type === type) return current;\n    current = current.parent;\n  }\n  return undefined;\n};\n\nconst extractKotlinPatternBinding: PatternBindingExtractor = (node, scopeEnv, declarationTypeNodes, scope) => {\n  // Kotlin when/is smart casts (existing behavior)\n  if (node.type === 'type_test') {\n    const typeNode = node.lastNamedChild;\n    if (!typeNode) return undefined;\n    const typeName = extractSimpleTypeName(typeNode);\n    if (!typeName) return undefined;\n    const whenExpr = findAncestorByType(node, 'when_expression');\n    if (!whenExpr) return undefined;\n    const whenSubject = whenExpr.namedChild(0);\n    const subject = whenSubject?.firstNamedChild ?? whenSubject;\n    if (!subject) return undefined;\n    const varName = extractVarName(subject);\n    if (!varName) return undefined;\n    return { varName, typeName };\n  }\n\n  // Null-check narrowing: if (x != null) { ... 
}\n  // Kotlin AST: equality_expression > simple_identifier, \"!=\" [anon], \"null\" [anon]\n  // Note: `null` is an anonymous node in tree-sitter-kotlin, not `null_literal`.\n  if (node.type === 'equality_expression') {\n    const op = node.children.find(c => !c.isNamed && c.text === '!=');\n    if (!op) return undefined;\n\n    // `null` is anonymous in Kotlin grammar — use positional child scan\n    let varNode: SyntaxNode | undefined;\n    let hasNull = false;\n    for (let i = 0; i < node.childCount; i++) {\n      const c = node.child(i);\n      if (!c) continue;\n      if (c.type === 'simple_identifier') varNode = c;\n      if (!c.isNamed && c.text === 'null') hasNull = true;\n    }\n    if (!varNode || !hasNull) return undefined;\n\n    const varName = varNode.text;\n    const resolvedType = scopeEnv.get(varName);\n    if (!resolvedType) return undefined;\n\n    // Check if the original declaration type was nullable (ends with ?)\n    const declTypeNode = declarationTypeNodes.get(`${scope}\\0${varName}`);\n    if (!declTypeNode) return undefined;\n    const declText = declTypeNode.text;\n    if (!declText.includes('?') && !declText.includes('null')) return undefined;\n\n    // Find the if-body: walk up to if_expression, then find control_structure_body\n    const ifExpr = findAncestorByType(node, 'if_expression');\n    if (!ifExpr) return undefined;\n    // The consequence is the first control_structure_body child\n    for (let i = 0; i < ifExpr.childCount; i++) {\n      const child = ifExpr.child(i);\n      if (child?.type === 'control_structure_body') {\n        return {\n          varName,\n          typeName: resolvedType,\n          narrowingRange: { startIndex: child.startIndex, endIndex: child.endIndex },\n        };\n      }\n    }\n    return undefined;\n  }\n\n  return undefined;\n};\n\nexport const kotlinTypeConfig: LanguageTypeConfig = {\n  allowPatternBindingOverwrite: true,\n  declarationNodeTypes: KOTLIN_DECLARATION_NODE_TYPES,\n  
forLoopNodeTypes: KOTLIN_FOR_LOOP_NODE_TYPES,\n  patternBindingNodeTypes: new Set(['type_test', 'equality_expression']),\n  extractDeclaration: extractKotlinDeclaration,\n  extractParameter: extractKotlinParameter,\n  extractInitializer: extractKotlinInitializer,\n  scanConstructorBinding: scanKotlinConstructorBinding,\n  extractForLoopBinding: extractKotlinForLoopBinding,\n  extractPendingAssignment: extractKotlinPendingAssignment,\n  extractPatternBinding: extractKotlinPatternBinding,\n  inferLiteralType: inferJvmLiteralType,\n  detectConstructorType: detectKotlinConstructorType,\n};\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/type-extractors/php.ts",
    "content": "import type { SyntaxNode } from '../utils.js';\nimport type { LanguageTypeConfig, ParameterExtractor, TypeBindingExtractor, InitializerExtractor, ClassNameLookup, ConstructorBindingScanner, ReturnTypeExtractor, PendingAssignmentExtractor, ForLoopExtractor } from './types.js';\nimport { extractSimpleTypeName, extractVarName, extractCalleeName, resolveIterableElementType, extractElementTypeFromString } from './shared.js';\n\nconst DECLARATION_NODE_TYPES: ReadonlySet<string> = new Set([\n  'assignment_expression',   // For constructor inference: $x = new User()\n  'property_declaration',    // PHP 7.4+ typed properties: private UserRepo $repo;\n  'method_declaration',      // PHPDoc @param on class methods\n  'function_definition',     // PHPDoc @param on top-level functions\n]);\n\n/** Walk up the AST to find the enclosing class declaration. */\nconst findEnclosingClass = (node: SyntaxNode): SyntaxNode | null => {\n  let current = node.parent;\n  while (current) {\n    if (current.type === 'class_declaration') return current;\n    current = current.parent;\n  }\n  return null;\n};\n\n/**\n * Resolve PHP self/static/parent to the actual class name.\n * - self/static → enclosing class name\n * - parent → superclass from base_clause\n */\nconst resolvePhpKeyword = (keyword: string, node: SyntaxNode): string | undefined => {\n  if (keyword === 'self' || keyword === 'static') {\n    const cls = findEnclosingClass(node);\n    if (!cls) return undefined;\n    const nameNode = cls.childForFieldName('name');\n    return nameNode?.text;\n  }\n  if (keyword === 'parent') {\n    const cls = findEnclosingClass(node);\n    if (!cls) return undefined;\n    // base_clause contains the parent class name\n    for (let i = 0; i < cls.namedChildCount; i++) {\n      const child = cls.namedChild(i);\n      if (child?.type === 'base_clause') {\n        const parentName = child.firstNamedChild;\n        if (parentName) return extractSimpleTypeName(parentName);\n      }\n    
}\n    return undefined;\n  }\n  return undefined;\n};\n\nconst normalizePhpType = (raw: string): string | undefined => {\n  // Strip nullable prefix: ?User → User\n  let type = raw.startsWith('?') ? raw.slice(1) : raw;\n  // Strip array suffix: User[] → User\n  type = type.replace(/\\[\\]$/, '');\n  // Strip union with null/false/void: User|null → User\n  const parts = type.split('|').filter(p => p !== 'null' && p !== 'false' && p !== 'void' && p !== 'mixed');\n  if (parts.length !== 1) return undefined;\n  type = parts[0];\n  // Strip namespace: \\App\\Models\\User → User\n  const segments = type.split('\\\\');\n  type = segments[segments.length - 1];\n  // Skip uninformative types\n  if (type === 'mixed' || type === 'void' || type === 'self' || type === 'static' || type === 'object') return undefined;\n  // Extract element type from generic: Collection<User> → User\n  // PHPDoc generics encode the element type in angle brackets. Since PHP's Strategy B\n  // uses the scopeEnv value directly as the element type, we must store the inner type,\n  // not the container name. This mirrors how User[] → User is handled by the [] strip above.\n  const genericMatch = type.match(/^(\\w+)\\s*</);\n  if (genericMatch) {\n    const elementType = extractElementTypeFromString(type);\n    return elementType ?? undefined;\n  }\n  if (/^\\w+$/.test(type)) return type;\n  return undefined;\n};\n\n/** Node types to skip when walking backwards to find doc-comments.\n *  PHP 8+ attributes (#[Route(...)]) appear as named siblings between PHPDoc and method. 
*/\nconst SKIP_NODE_TYPES: ReadonlySet<string> = new Set(['attribute_list', 'attribute']);\n\n/** Regex to extract PHPDoc @param annotations: `@param Type $name` (standard order) */\nconst PHPDOC_PARAM_RE = /@param\\s+(\\S+)\\s+\\$(\\w+)/g;\n/** Alternate PHPDoc order: `@param $name Type` (name first) */\nconst PHPDOC_PARAM_ALT_RE = /@param\\s+\\$(\\w+)\\s+(\\S+)/g;\n/** Regex to extract PHPDoc @var annotations: `@var Type` */\nconst PHPDOC_VAR_RE = /@var\\s+(\\S+)/;\n\n/**\n * Extract the element type for a class property from its PHPDoc @var annotation or\n * PHP 7.4+ native type. Walks backward from the property_declaration node to find\n * an immediately preceding comment containing @var.\n *\n * Returns the normalized element type (e.g. User[] → User, Collection<User> → User).\n * Returns undefined when no usable type annotation is found.\n */\nconst extractClassPropertyElementType = (propDecl: SyntaxNode): string | undefined => {\n  // Strategy 1: PHPDoc @var annotation on a preceding comment sibling\n  let sibling = propDecl.previousSibling;\n  while (sibling) {\n    if (sibling.type === 'comment') {\n      const match = PHPDOC_VAR_RE.exec(sibling.text);\n      if (match) return normalizePhpType(match[1]);\n    } else if (sibling.isNamed && !SKIP_NODE_TYPES.has(sibling.type)) {\n      break;\n    }\n    sibling = sibling.previousSibling;\n  }\n  // Strategy 2: PHP 7.4+ native type field — skip generic 'array' since element type is unknown\n  const typeNode = propDecl.childForFieldName('type');\n  if (!typeNode) return undefined;\n  const typeName = extractSimpleTypeName(typeNode);\n  if (!typeName || typeName === 'array') return undefined;\n  return typeName;\n};\n\n/**\n * Scan a class body for a property_declaration matching the given property name,\n * and extract its element type. 
The class body is the `declaration_list` child of\n * a `class_declaration` node.\n *\n * Used as Strategy C in extractForLoopBinding for `$this->property` iterables\n * where Strategy A (resolveIterableElementType) and Strategy B (scopeEnv lookup)\n * both fail to find the type.\n */\nconst findClassPropertyElementType = (propName: string, classNode: SyntaxNode): string | undefined => {\n  const declList = classNode.childForFieldName('body')\n    ?? (classNode.namedChild(classNode.namedChildCount - 1)?.type === 'declaration_list'\n        ? classNode.namedChild(classNode.namedChildCount - 1)\n        : null); // fallback: last named child, only if it's a declaration_list\n  if (!declList) return undefined;\n  for (let i = 0; i < declList.namedChildCount; i++) {\n    const child = declList.namedChild(i);\n    if (child?.type !== 'property_declaration') continue;\n    // Check if any property_element has a variable_name matching '$propName'\n    for (let j = 0; j < child.namedChildCount; j++) {\n      const elem = child.namedChild(j);\n      if (elem?.type !== 'property_element') continue;\n      const varNameNode = elem.firstNamedChild; // variable_name node\n      if (varNameNode?.text === '$' + propName) {\n        return extractClassPropertyElementType(child);\n      }\n    }\n  }\n  return undefined;\n};\n\n/**\n * Collect PHPDoc @param type bindings from comment nodes preceding a method/function.\n * Returns a map of paramName → typeName (without $ prefix).\n */\nconst collectPhpDocParams = (methodNode: SyntaxNode): Map<string, string> => {\n  const commentTexts: string[] = [];\n  let sibling = methodNode.previousSibling;\n  while (sibling) {\n    if (sibling.type === 'comment') {\n      commentTexts.unshift(sibling.text);\n    } else if (sibling.isNamed && !SKIP_NODE_TYPES.has(sibling.type)) {\n      break;\n    }\n    sibling = sibling.previousSibling;\n  }\n  if (commentTexts.length === 0) return new Map();\n\n  const params = new Map<string, string>();\n  
const commentBlock = commentTexts.join('\\n');\n  PHPDOC_PARAM_RE.lastIndex = 0;\n  let match: RegExpExecArray | null;\n  while ((match = PHPDOC_PARAM_RE.exec(commentBlock)) !== null) {\n    const typeName = normalizePhpType(match[1]);\n    const paramName = match[2]; // without $ prefix\n    if (typeName) {\n      // Store with $ prefix to match how PHP variables appear in the env\n      params.set('$' + paramName, typeName);\n    }\n  }\n\n  // Also check alternate PHPDoc order: @param $name Type\n  PHPDOC_PARAM_ALT_RE.lastIndex = 0;\n  while ((match = PHPDOC_PARAM_ALT_RE.exec(commentBlock)) !== null) {\n    const paramName = match[1];\n    if (params.has('$' + paramName)) continue; // standard format takes priority\n    const typeName = normalizePhpType(match[2]);\n    if (typeName) {\n      params.set('$' + paramName, typeName);\n    }\n  }\n  return params;\n};\n\n/**\n * PHP: typed class properties (PHP 7.4+): private UserRepo $repo;\n * Also: PHPDoc @param annotations on method/function definitions.\n */\nconst extractDeclaration: TypeBindingExtractor = (node: SyntaxNode, env: Map<string, string>): void => {\n  // PHPDoc @param on methods/functions — pre-populate env with param types\n  if (node.type === 'method_declaration' || node.type === 'function_definition') {\n    const phpDocParams = collectPhpDocParams(node);\n    for (const [paramName, typeName] of phpDocParams) {\n      if (!env.has(paramName)) env.set(paramName, typeName);\n    }\n    return;\n  }\n\n  if (node.type !== 'property_declaration') return;\n\n  const typeNode = node.childForFieldName('type');\n  if (!typeNode) return;\n\n  const typeName = extractSimpleTypeName(typeNode);\n  if (!typeName) return;\n\n  // The variable name is inside property_element > variable_name\n  for (let i = 0; i < node.namedChildCount; i++) {\n    const child = node.namedChild(i);\n    if (child?.type === 'property_element') {\n      const varNameNode = child.firstNamedChild; // variable_name\n      if 
(varNameNode) {\n        const varName = extractVarName(varNameNode);\n        if (varName) env.set(varName, typeName);\n      }\n      break;\n    }\n  }\n};\n\n/** PHP: $x = new User() — infer type from object_creation_expression */\nconst extractInitializer: InitializerExtractor = (node: SyntaxNode, env: Map<string, string>, _classNames: ClassNameLookup): void => {\n  if (node.type !== 'assignment_expression') return;\n  const left = node.childForFieldName('left');\n  const right = node.childForFieldName('right');\n  if (!left || !right) return;\n  if (right.type !== 'object_creation_expression') return;\n  // The class name is the first named child of object_creation_expression\n  // (tree-sitter-php uses 'name' or 'qualified_name' nodes here)\n  const ctorType = right.firstNamedChild;\n  if (!ctorType) return;\n  const typeName = extractSimpleTypeName(ctorType);\n  if (!typeName) return;\n  // Resolve PHP self/static/parent to actual class names\n  const resolvedType = (typeName === 'self' || typeName === 'static' || typeName === 'parent')\n    ? resolvePhpKeyword(typeName, node)\n    : typeName;\n  if (!resolvedType) return;\n  const varName = extractVarName(left);\n  if (varName) env.set(varName, resolvedType);\n};\n\n/** PHP: simple_parameter → type $name */\nconst extractParameter: ParameterExtractor = (node: SyntaxNode, env: Map<string, string>): void => {\n  let nameNode: SyntaxNode | null = null;\n  let typeNode: SyntaxNode | null = null;\n\n  if (node.type === 'simple_parameter') {\n    typeNode = node.childForFieldName('type');\n    nameNode = node.childForFieldName('name');\n  } else {\n    nameNode = node.childForFieldName('name') ?? node.childForFieldName('pattern');\n    typeNode = node.childForFieldName('type');\n  }\n\n  if (!nameNode || !typeNode) return;\n  const varName = extractVarName(nameNode);\n  if (!varName) return;\n  // Don't overwrite PHPDoc-derived types (e.g. 
@param User[] $users → User)\n  // with the less-specific AST type annotation (e.g. array).\n  if (env.has(varName)) return;\n  const typeName = extractSimpleTypeName(typeNode);\n  if (typeName) env.set(varName, typeName);\n};\n\n/** PHP: $x = SomeFactory() or $x = $this->getUser() — bind variable to call return type */\nconst scanConstructorBinding: ConstructorBindingScanner = (node) => {\n  if (node.type !== 'assignment_expression') return undefined;\n  const left = node.childForFieldName('left');\n  const right = node.childForFieldName('right');\n  if (!left || !right) return undefined;\n  if (left.type !== 'variable_name') return undefined;\n  // Skip object_creation_expression (new User()) — handled by extractInitializer\n  if (right.type === 'object_creation_expression') return undefined;\n  // Handle both standalone function calls and method calls ($this->getUser())\n  if (right.type === 'function_call_expression') {\n    const calleeName = extractCalleeName(right);\n    if (!calleeName) return undefined;\n    return { varName: left.text, calleeName };\n  }\n  if (right.type === 'member_call_expression') {\n    const methodName = right.childForFieldName('name');\n    if (!methodName) return undefined;\n    // When receiver is $this/self/static, qualify with enclosing class for disambiguation\n    const receiver = right.childForFieldName('object');\n    const receiverText = receiver?.text;\n    let receiverClassName: string | undefined;\n    if (receiverText === '$this' || receiverText === 'self' || receiverText === 'static') {\n      const cls = findEnclosingClass(node);\n      const clsName = cls?.childForFieldName('name');\n      if (clsName) receiverClassName = clsName.text;\n    }\n    return { varName: left.text, calleeName: methodName.text, receiverClassName };\n  }\n  return undefined;\n};\n\n/** Regex to extract PHPDoc @return annotations: `@return User` */\nconst PHPDOC_RETURN_RE = /@return\\s+(\\S+)/;\n\n/**\n * Normalize a PHPDoc return type for 
storage in the SymbolTable.\n * Unlike normalizePhpType (which strips User[] → User for scopeEnv), this preserves\n * array notation so lookupRawReturnType can extract element types for for-loop resolution.\n *   \\App\\Models\\User[] → User[]\n *   ?User → User\n *   Collection<User> → Collection<User>  (preserved for extractElementTypeFromString)\n */\nconst normalizePhpReturnType = (raw: string): string | undefined => {\n  // Strip nullable prefix: ?User[] → User[]\n  let type = raw.startsWith('?') ? raw.slice(1) : raw;\n  // Strip union with null/false/void: User[]|null → User[]\n  const parts = type.split('|').filter(p => p !== 'null' && p !== 'false' && p !== 'void' && p !== 'mixed');\n  if (parts.length !== 1) return undefined;\n  type = parts[0];\n  // Strip namespace: \\App\\Models\\User[] → User[]\n  const segments = type.split('\\\\');\n  type = segments[segments.length - 1];\n  // Skip uninformative types\n  if (type === 'mixed' || type === 'void' || type === 'self' || type === 'static' || type === 'object' || type === 'array') return undefined;\n  if (/^\\w+(\\[\\])?$/.test(type) || /^\\w+\\s*</.test(type)) return type;\n  return undefined;\n};\n\n/**\n * Extract return type from PHPDoc `@return Type` annotation preceding a method.\n * Walks backwards through preceding siblings looking for comment nodes.\n * Preserves array notation (e.g., User[]) for for-loop element type extraction.\n */\nconst extractReturnType: ReturnTypeExtractor = (node) => {\n  let sibling = node.previousSibling;\n  while (sibling) {\n    if (sibling.type === 'comment') {\n      const match = PHPDOC_RETURN_RE.exec(sibling.text);\n      if (match) return normalizePhpReturnType(match[1]);\n    } else if (sibling.isNamed && !SKIP_NODE_TYPES.has(sibling.type)) break;\n    sibling = sibling.previousSibling;\n  }\n  return undefined;\n};\n\n/** PHP: $alias = $user → assignment_expression with variable_name left/right.\n *  PHP TypeEnv stores variables WITH $ prefix ($user → User), so 
we keep $ in lhs/rhs. */\nconst extractPendingAssignment: PendingAssignmentExtractor = (node, scopeEnv) => {\n  if (node.type !== 'assignment_expression') return undefined;\n  const left = node.childForFieldName('left');\n  const right = node.childForFieldName('right');\n  if (!left || !right) return undefined;\n  if (left.type !== 'variable_name') return undefined;\n  const lhs = left.text;\n  if (!lhs || scopeEnv.has(lhs)) return undefined;\n  if (right.type === 'variable_name') {\n    const rhs = right.text;\n    if (rhs) return { kind: 'copy', lhs, rhs };\n  }\n  // member_access_expression RHS → fieldAccess ($a->field)\n  if (right.type === 'member_access_expression') {\n    const obj = right.childForFieldName('object');\n    const name = right.childForFieldName('name');\n    if (obj?.type === 'variable_name' && name) {\n      return { kind: 'fieldAccess', lhs, receiver: obj.text, field: name.text };\n    }\n  }\n  // function_call_expression RHS → callResult (bare function calls only)\n  if (right.type === 'function_call_expression') {\n    const funcNode = right.childForFieldName('function');\n    if (funcNode?.type === 'name') {\n      return { kind: 'callResult', lhs, callee: funcNode.text };\n    }\n  }\n  // member_call_expression RHS → methodCallResult ($a->method())\n  if (right.type === 'member_call_expression') {\n    const obj = right.childForFieldName('object');\n    const name = right.childForFieldName('name');\n    if (obj?.type === 'variable_name' && name) {\n      return { kind: 'methodCallResult', lhs, receiver: obj.text, method: name.text };\n    }\n  }\n  return undefined;\n};\n\nconst FOR_LOOP_NODE_TYPES: ReadonlySet<string> = new Set([\n  'foreach_statement',\n]);\n\n/** Extract element type from a PHP type annotation AST node.\n *  PHP has limited AST-level container types — `array` is a primitive_type with no generic args.\n *  Named types (e.g., `Collection`) are returned as-is (container descriptor lookup handles them). 
*/\nconst extractPhpElementTypeFromTypeNode = (_typeNode: SyntaxNode): string | undefined => {\n  // PHP AST type nodes don't carry generic parameters (array<User> is PHPDoc-only).\n  // primitive_type 'array' and named_type 'Collection' don't encode element types.\n  return undefined;\n};\n\n/** Walk up from a foreach to the enclosing function and search parameter type annotations.\n *  PHP parameter type hints are limited (array, ClassName) — this extracts element type when possible. */\nconst findPhpParamElementType = (iterableName: string, startNode: SyntaxNode): string | undefined => {\n  let current: SyntaxNode | null = startNode.parent;\n  while (current) {\n    if (current.type === 'method_declaration' || current.type === 'function_definition') {\n      const paramsNode = current.childForFieldName('parameters');\n      if (paramsNode) {\n        for (let i = 0; i < paramsNode.namedChildCount; i++) {\n          const param = paramsNode.namedChild(i);\n          if (!param || param.type !== 'simple_parameter') continue;\n          const nameNode = param.childForFieldName('name');\n          if (nameNode?.text !== iterableName) continue;\n          const typeNode = param.childForFieldName('type');\n          if (typeNode) return extractPhpElementTypeFromTypeNode(typeNode);\n        }\n      }\n      break;\n    }\n    current = current.parent;\n  }\n  return undefined;\n};\n\n/**\n * PHP: foreach ($users as $user) — extract loop variable binding.\n *\n * AST structure (from tree-sitter-php grammar):\n *   foreach_statement — no named fields for iterable/value (only 'body')\n *     children[0]: expression (iterable, e.g. $users)\n *     children[1]: expression (simple value) OR pair ($key => $value)\n *       pair children: expression (key), expression (value)\n *\n * PHP's PHPDoc @param normalizes `User[]` → `User` in the env, so the iterable's\n * stored type IS the element type. 
We first try resolveIterableElementType (for\n * constructor-binding cases that retain container types), then fall back to direct\n * scopeEnv lookup (for PHPDoc-normalized types).\n */\nconst extractForLoopBinding: ForLoopExtractor = (node,  { scopeEnv, declarationTypeNodes, scope, returnTypeLookup }): void => {\n  if (node.type !== 'foreach_statement') return;\n\n  // Collect non-body named children: first is the iterable, second is value or pair\n  const children: SyntaxNode[] = [];\n  for (let i = 0; i < node.namedChildCount; i++) {\n    const child = node.namedChild(i);\n    if (child && child !== node.childForFieldName('body')) {\n      children.push(child);\n    }\n  }\n  if (children.length < 2) return;\n\n  const iterableNode = children[0];\n  const valueOrPair = children[1];\n\n  // Determine the loop variable node\n  let loopVarNode: SyntaxNode;\n  if (valueOrPair.type === 'pair') {\n    // $key => $value — the value is the last named child of the pair\n    const lastChild = valueOrPair.namedChild(valueOrPair.namedChildCount - 1);\n    if (!lastChild) return;\n    // Handle by_ref: foreach ($arr as $k => &$v)\n    loopVarNode = lastChild.type === 'by_ref' ? (lastChild.firstNamedChild ?? lastChild) : lastChild;\n  } else {\n    // Simple: foreach ($users as $user) or foreach ($users as &$user)\n    loopVarNode = valueOrPair.type === 'by_ref' ? (valueOrPair.firstNamedChild ?? 
valueOrPair) : valueOrPair;\n  }\n\n  const varName = extractVarName(loopVarNode);\n  if (!varName) return;\n\n  // Get iterable variable name (PHP vars include $ prefix)\n  let iterableName: string | undefined;\n  let callExprElementType: string | undefined;\n  if (iterableNode.type === 'variable_name') {\n    iterableName = iterableNode.text;\n  } else if (iterableNode?.type === 'member_access_expression') {\n    const name = iterableNode.childForFieldName('name');\n    // PHP properties are stored in scopeEnv with $ prefix ($users), but\n    // member_access_expression.name returns without $ (users). Add $ to match.\n    if (name) iterableName = '$' + name.text;\n  } else if (iterableNode?.type === 'function_call_expression') {\n    // foreach (getUsers() as $user) — resolve via return type lookup\n    const calleeName = extractCalleeName(iterableNode);\n    if (calleeName) {\n      const rawReturn = returnTypeLookup.lookupRawReturnType(calleeName);\n      if (rawReturn) callExprElementType = extractElementTypeFromString(rawReturn);\n    }\n  } else if (iterableNode?.type === 'member_call_expression') {\n    // foreach ($this->getUsers() as $user) — resolve via return type lookup\n    const methodName = iterableNode.childForFieldName('name');\n    if (methodName) {\n      const rawReturn = returnTypeLookup.lookupRawReturnType(methodName.text);\n      if (rawReturn) callExprElementType = extractElementTypeFromString(rawReturn);\n    }\n  }\n  if (!iterableName && !callExprElementType) return;\n\n  // If we resolved the element type from a call expression, bind and return early\n  if (callExprElementType) {\n    scopeEnv.set(varName, callExprElementType);\n    return;\n  }\n\n  // Strategy A: try resolveIterableElementType (handles constructor-binding container types)\n  const elementType = resolveIterableElementType(\n    iterableName, node, scopeEnv, declarationTypeNodes, scope,\n    extractPhpElementTypeFromTypeNode, findPhpParamElementType,\n    undefined,\n  
);\n  if (elementType) {\n    scopeEnv.set(varName, elementType);\n    return;\n  }\n\n  // Strategy B: direct scopeEnv lookup — PHP normalizePhpType strips User[] → User,\n  // so the iterable's stored type is already the element type from PHPDoc annotations.\n  const iterableType = scopeEnv.get(iterableName);\n  if (iterableType) {\n    scopeEnv.set(varName, iterableType);\n    return;\n  }\n\n  // Strategy C: $this->property — scan the enclosing class body for the property\n  // declaration and extract its element type from @var PHPDoc or native type.\n  // This handles the common PHP pattern where the property type is declared on the\n  // class body (/** @var User[] */ private $users) but the foreach is in a method\n  // whose scopeEnv does not contain the property type.\n  if (iterableNode?.type === 'member_access_expression') {\n    const obj = iterableNode.childForFieldName('object');\n    if (obj?.text === '$this') {\n      const nameNode = iterableNode.childForFieldName('name');\n      const propName = nameNode?.text;\n      if (propName) {\n        const classNode = findEnclosingClass(iterableNode);\n        if (classNode) {\n          const elementType = findClassPropertyElementType(propName, classNode);\n          if (elementType) scopeEnv.set(varName, elementType);\n        }\n      }\n    }\n  }\n};\n\nexport const typeConfig: LanguageTypeConfig = {\n  declarationNodeTypes: DECLARATION_NODE_TYPES,\n  forLoopNodeTypes: FOR_LOOP_NODE_TYPES,\n  extractDeclaration,\n  extractParameter,\n  extractInitializer,\n  scanConstructorBinding,\n  extractReturnType,\n  extractForLoopBinding,\n  extractPendingAssignment,\n};\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/type-extractors/python.ts",
    "content": "import type { SyntaxNode } from '../utils.js';\nimport type { LanguageTypeConfig, ParameterExtractor, TypeBindingExtractor, InitializerExtractor, ClassNameLookup, ConstructorBindingScanner, PendingAssignmentExtractor, PatternBindingExtractor, ForLoopExtractor } from './types.js';\nimport { extractSimpleTypeName, extractVarName, extractElementTypeFromString, extractGenericTypeArgs, resolveIterableElementType, methodToTypeArgPosition, type TypeArgPosition } from './shared.js';\n\nconst DECLARATION_NODE_TYPES: ReadonlySet<string> = new Set([\n  'assignment',\n  'named_expression',\n  'expression_statement',\n]);\n\n/** Python: x: Foo = ... (PEP 484 annotated assignment) or x: Foo (standalone annotation).\n *\n * tree-sitter-python grammar produces two distinct shapes:\n *\n *   1. Annotated assignment with value:  `name: str = \"\"`\n *      Node type: `assignment`\n *      Fields: left=identifier, type=identifier/type, right=value\n *\n *   2. Standalone annotation (no value):  `name: str`\n *      Node type: `expression_statement`\n *      Child: `type` node with fields name=identifier, type=identifier/type\n *\n * Both appear at file scope and inside class bodies (PEP 526 class variable annotations).\n */\nconst extractDeclaration: TypeBindingExtractor = (node: SyntaxNode, env: Map<string, string>): void => {\n  if (node.type === 'expression_statement') {\n    // Standalone annotation: expression_statement > type { name: identifier, type: identifier }\n    const typeChild = node.firstNamedChild;\n    if (!typeChild || typeChild.type !== 'type') return;\n    const nameNode = typeChild.childForFieldName('name');\n    const typeNode = typeChild.childForFieldName('type');\n    if (!nameNode || !typeNode) return;\n    const varName = extractVarName(nameNode);\n    const inner = typeNode.type === 'type' ? (typeNode.firstNamedChild ?? typeNode) : typeNode;\n    const typeName = extractSimpleTypeName(inner) ?? 
inner.text;\n    if (varName && typeName) env.set(varName, typeName);\n    return;\n  }\n\n  // Annotated assignment: left : type = value\n  const left = node.childForFieldName('left');\n  const typeNode = node.childForFieldName('type');\n  if (!left || !typeNode) return;\n  const varName = extractVarName(left);\n  // extractSimpleTypeName handles identifiers and qualified names.\n  // Python 3.10+ union syntax `User | None` is parsed as binary_operator,\n  // which extractSimpleTypeName doesn't handle. Fall back to raw text so\n  // stripNullable can process it at lookup time (e.g., \"User | None\" → \"User\").\n  const inner = typeNode.type === 'type' ? (typeNode.firstNamedChild ?? typeNode) : typeNode;\n  const typeName = extractSimpleTypeName(inner) ?? inner.text;\n  if (varName && typeName) env.set(varName, typeName);\n};\n\n/** Python: parameter with type annotation */\nconst extractParameter: ParameterExtractor = (node: SyntaxNode, env: Map<string, string>): void => {\n  let nameNode: SyntaxNode | null = null;\n  let typeNode: SyntaxNode | null = null;\n\n  if (node.type === 'parameter') {\n    nameNode = node.childForFieldName('name');\n    typeNode = node.childForFieldName('type');\n  } else {\n    nameNode = node.childForFieldName('name') ?? node.childForFieldName('pattern');\n    typeNode = node.childForFieldName('type');\n    // Python typed_parameter: name is a positional child (identifier), not a named field\n    if (!nameNode && node.type === 'typed_parameter') {\n      nameNode = node.firstNamedChild?.type === 'identifier' ? 
node.firstNamedChild : null;\n    }\n  }\n\n  if (!nameNode || !typeNode) return;\n  const varName = extractVarName(nameNode);\n  const typeName = extractSimpleTypeName(typeNode);\n  if (varName && typeName) env.set(varName, typeName);\n};\n\n/** Python: user = User(\"alice\") — infer type from call when callee is a known class.\n *  Python constructors are syntactically identical to function calls, so we verify\n *  against classNames (which may include cross-file SymbolTable lookups).\n *  Also handles walrus operator: if (user := User(\"alice\")): */\nconst extractInitializer: InitializerExtractor = (node: SyntaxNode, env: Map<string, string>, classNames: ClassNameLookup): void => {\n  let left: SyntaxNode | null;\n  let right: SyntaxNode | null;\n\n  if (node.type === 'named_expression') {\n    // Walrus operator: (user := User(\"alice\"))\n    // tree-sitter-python: named_expression has 'name' and 'value' fields\n    left = node.childForFieldName('name');\n    right = node.childForFieldName('value');\n  } else if (node.type === 'assignment') {\n    left = node.childForFieldName('left');\n    right = node.childForFieldName('right');\n    // Skip if already has type annotation — extractDeclaration handled it\n    if (node.childForFieldName('type')) return;\n  } else {\n    return;\n  }\n\n  if (!left || !right) return;\n  const varName = extractVarName(left);\n  if (!varName || env.has(varName)) return;\n  if (right.type !== 'call') return;\n  const func = right.childForFieldName('function');\n  if (!func) return;\n  // Support both direct calls (User()) and qualified calls (models.User())\n  // tree-sitter-python: direct → identifier, qualified → attribute\n  const calleeName = extractSimpleTypeName(func);\n  if (!calleeName) return;\n  if (classNames.has(calleeName)) {\n    env.set(varName, calleeName);\n  }\n};\n\n/** Python: user = User(\"alice\") — scan assignment/walrus for constructor-like calls.\n *  Returns {varName, calleeName} without checking 
classNames (caller validates). */\nconst scanConstructorBinding: ConstructorBindingScanner = (node) => {\n  let left: SyntaxNode | null;\n  let right: SyntaxNode | null;\n\n  if (node.type === 'named_expression') {\n    left = node.childForFieldName('name');\n    right = node.childForFieldName('value');\n  } else if (node.type === 'assignment') {\n    left = node.childForFieldName('left');\n    right = node.childForFieldName('right');\n    if (node.childForFieldName('type')) return undefined;\n  } else {\n    return undefined;\n  }\n\n  if (!left || !right) return undefined;\n  if (left.type !== 'identifier') return undefined;\n  if (right.type !== 'call') return undefined;\n  const func = right.childForFieldName('function');\n  if (!func) return undefined;\n  const calleeName = extractSimpleTypeName(func);\n  if (!calleeName) return undefined;\n  return { varName: left.text, calleeName };\n};\n\nconst FOR_LOOP_NODE_TYPES: ReadonlySet<string> = new Set([\n  'for_statement',\n]);\n\n/** Python function/method node types that carry a parameters list. */\nconst PY_FUNCTION_NODE_TYPES = new Set([\n  'function_definition', 'decorated_definition',\n]);\n\n/**\n * Extract element type from a Python type annotation AST node.\n * Handles:\n *   subscript \"List[User]\"  →  extractElementTypeFromString(\"List[User]\") → \"User\"\n *   generic_type            →  extractGenericTypeArgs → first arg\n * Falls back to text-based extraction.\n */\nconst extractPyElementTypeFromAnnotation = (typeNode: SyntaxNode, pos: TypeArgPosition = 'last'): string | undefined => {\n  // Unwrap 'type' wrapper node to get to the actual type (e.g., type > generic_type)\n  const inner = typeNode.type === 'type' ? (typeNode.firstNamedChild ?? 
typeNode) : typeNode;\n\n  // Python subscript: List[User], Sequence[User] — use raw text\n  if (inner.type === 'subscript') {\n    return extractElementTypeFromString(inner.text, pos);\n  }\n  // generic_type: dict[str, User] — tree-sitter-python uses type_parameter child\n  if (inner.type === 'generic_type') {\n    // Try standard extractGenericTypeArgs first (handles type_arguments)\n    const args = extractGenericTypeArgs(inner);\n    if (args.length >= 1) return pos === 'first' ? args[0] : args[args.length - 1];\n    // Fallback: look for type_parameter child (tree-sitter-python specific)\n    for (let i = 0; i < inner.namedChildCount; i++) {\n      const child = inner.namedChild(i);\n      if (child?.type === 'type_parameter') {\n        if (pos === 'first') {\n          const firstArg = child.firstNamedChild;\n          if (firstArg) return extractSimpleTypeName(firstArg);\n        } else {\n          const lastArg = child.lastNamedChild;\n          if (lastArg) return extractSimpleTypeName(lastArg);\n        }\n      }\n    }\n  }\n  // Fallback: raw text extraction (handles User[], [User], etc.)\n  return extractElementTypeFromString(inner.text, pos);\n};\n\n/**\n * Walk up the AST from a for-statement to find the enclosing function definition,\n * then search its parameters for one named `iterableName`.\n * Returns the element type extracted from its type annotation, or undefined.\n *\n * Handles both `parameter` and `typed_parameter` node types in tree-sitter-python.\n * `typed_parameter` may not expose the name as a `name` field — falls back to\n * checking the first identifier-type named child.\n */\nconst findPyParamElementType = (iterableName: string, startNode: SyntaxNode, pos: TypeArgPosition = 'last'): string | undefined => {\n  let current: SyntaxNode | null = startNode.parent;\n  while (current) {\n    if (current.type === 'function_definition') {\n      const paramsNode = current.childForFieldName('parameters');\n      if (paramsNode) {\n       
 for (let i = 0; i < paramsNode.namedChildCount; i++) {\n          const param = paramsNode.namedChild(i);\n          if (!param) continue;\n          // Try named `name` field first (parameter node), then first identifier child\n          // (typed_parameter node may store name as first positional child)\n          const nameNode = param.childForFieldName('name')\n            ?? (param.firstNamedChild?.type === 'identifier' ? param.firstNamedChild : null);\n          if (nameNode?.text !== iterableName) continue;\n          // Try `type` field, then last named child (typed_parameter stores type last)\n          const typeAnnotation = param.childForFieldName('type')\n            ?? (param.namedChildCount >= 2 ? param.namedChild(param.namedChildCount - 1) : null);\n          if (typeAnnotation && typeAnnotation !== nameNode) {\n            return extractPyElementTypeFromAnnotation(typeAnnotation, pos);\n          }\n        }\n      }\n      break;\n    }\n    current = current.parent;\n  }\n  return undefined;\n};\n\n/**\n * Extracts iterableName and methodName from a call expression like `data.items()`.\n * Returns undefined if the call doesn't match the expected pattern.\n */\nconst extractMethodCall = (callNode: SyntaxNode): { iterableName: string; methodName?: string } | undefined => {\n  const fn = callNode.childForFieldName('function');\n  if (fn?.type !== 'attribute') return undefined;\n  const obj = fn.firstNamedChild;\n  if (obj?.type !== 'identifier') return undefined;\n  const method = fn.lastNamedChild;\n  const methodName = (method?.type === 'identifier' && method !== obj) ? method.text : undefined;\n  return { iterableName: obj.text, methodName };\n};\n\n/**\n * Collects all identifier nodes from a pattern, descending into nested tuple_patterns.\n * For `i, (k, v)` returns [i, k, v]. 
For `key, value` returns [key, value].\n */\nconst collectPatternIdentifiers = (pattern: SyntaxNode): SyntaxNode[] => {\n  const vars: SyntaxNode[] = [];\n  for (let i = 0; i < pattern.namedChildCount; i++) {\n    const child = pattern.namedChild(i);\n    if (child?.type === 'identifier') {\n      vars.push(child);\n    } else if (child?.type === 'tuple_pattern') {\n      vars.push(...collectPatternIdentifiers(child));\n    }\n  }\n  return vars;\n};\n\n/**\n * Python: for user in users: where users has a known container type annotation.\n *\n * AST node: `for_statement` with `left` (loop variable) and `right` (iterable).\n *\n * Tier 1c: resolves the element type via three strategies in priority order:\n *   1. declarationTypeNodes — raw type annotation AST node (covers stored container types)\n *   2. scopeEnv string — extractElementTypeFromString on the stored type\n *   3. AST walk — walks up to the enclosing function's parameters to read List[User] directly\n *\n * Also handles `enumerate(iterable)` — unwraps the outer call and skips the integer\n * index variable so the value variable still resolves to the element type.\n */\nconst extractForLoopBinding: ForLoopExtractor = (node, { scopeEnv, declarationTypeNodes, scope, returnTypeLookup }): void => {\n  if (node.type !== 'for_statement') return;\n\n  const rightNode = node.childForFieldName('right');\n  let iterableName: string | undefined;\n  let methodName: string | undefined;\n  let callExprElementType: string | undefined;\n  let isEnumerate = false;\n\n  // Extract iterable info from the `right` field — may be identifier, attribute, or call.\n  if (rightNode?.type === 'identifier') {\n    iterableName = rightNode.text;\n  } else if (rightNode?.type === 'attribute') {\n    const prop = rightNode.lastNamedChild;\n    if (prop) iterableName = prop.text;\n  } else if (rightNode?.type === 'call') {\n    const fn = rightNode.childForFieldName('function');\n    if (fn?.type === 'identifier' && fn.text === 
'enumerate') {\n      // enumerate(iterable) or enumerate(d.items()) — unwrap to inner iterable.\n      isEnumerate = true;\n      const innerArg = rightNode.childForFieldName('arguments')?.firstNamedChild;\n      if (innerArg?.type === 'identifier') {\n        iterableName = innerArg.text;\n      } else if (innerArg?.type === 'call') {\n        const extracted = extractMethodCall(innerArg);\n        if (extracted) ({ iterableName, methodName } = extracted);\n      }\n    } else if (fn?.type === 'attribute') {\n      // data.items() → call > function: attribute > identifier('data') + identifier('items')\n      const extracted = extractMethodCall(rightNode);\n      if (extracted) ({ iterableName, methodName } = extracted);\n    } else if (fn?.type === 'identifier') {\n      // Direct function call: for user in get_users() (Phase 7.3 — return-type path)\n      const rawReturn = returnTypeLookup.lookupRawReturnType(fn.text);\n      if (rawReturn) callExprElementType = extractElementTypeFromString(rawReturn);\n    }\n  }\n  if (!iterableName && !callExprElementType) return;\n\n  let elementType: string | undefined;\n  if (callExprElementType) {\n    elementType = callExprElementType;\n  } else {\n    const containerTypeName = scopeEnv.get(iterableName!);\n    const typeArgPos = methodToTypeArgPosition(methodName, containerTypeName);\n    elementType = resolveIterableElementType(\n      iterableName!, node, scopeEnv, declarationTypeNodes, scope,\n      extractPyElementTypeFromAnnotation, findPyParamElementType,\n      typeArgPos,\n    );\n  }\n  if (!elementType) return;\n\n  // The loop variable is the `left` field — identifier or pattern_list.\n  const leftNode = node.childForFieldName('left');\n  if (!leftNode) return;\n\n  if (leftNode.type === 'pattern_list' || leftNode.type === 'tuple_pattern') {\n    // Tuple unpacking: `key, value` or `i, (k, v)` or `(k, v)` — bind the last identifier to element type.\n    // With enumerate, skip binding if there's only one var 
(just the index, no value to bind).\n    const vars = collectPatternIdentifiers(leftNode);\n    if (vars.length > 0 && (!isEnumerate || vars.length > 1)) {\n      scopeEnv.set(vars[vars.length - 1].text, elementType);\n    }\n    return;\n  }\n\n  const loopVarName = extractVarName(leftNode);\n  if (loopVarName) scopeEnv.set(loopVarName, elementType);\n};\n\n/** Python: alias = u → assignment with left/right fields.\n *  Also handles walrus operator: alias := u → named_expression with name/value fields. */\nconst extractPendingAssignment: PendingAssignmentExtractor = (node, scopeEnv) => {\n  let left: SyntaxNode | null;\n  let right: SyntaxNode | null;\n\n  if (node.type === 'assignment') {\n    left = node.childForFieldName('left');\n    right = node.childForFieldName('right');\n  } else if (node.type === 'named_expression') {\n    left = node.childForFieldName('name');\n    right = node.childForFieldName('value');\n  } else {\n    return undefined;\n  }\n\n  if (!left || !right) return undefined;\n  const lhs = left.type === 'identifier' ? 
left.text : undefined;\n  if (!lhs || scopeEnv.has(lhs)) return undefined;\n  if (right.type === 'identifier') return { kind: 'copy', lhs, rhs: right.text };\n  // attribute RHS → fieldAccess (a.field)\n  if (right.type === 'attribute') {\n    const obj = right.firstNamedChild;\n    const field = right.lastNamedChild;\n    if (obj?.type === 'identifier' && field?.type === 'identifier' && obj !== field) {\n      return { kind: 'fieldAccess', lhs, receiver: obj.text, field: field.text };\n    }\n  }\n  // call RHS\n  if (right.type === 'call') {\n    const funcNode = right.childForFieldName('function');\n    if (funcNode?.type === 'identifier') {\n      return { kind: 'callResult', lhs, callee: funcNode.text };\n    }\n    // method call with receiver: call → function: attribute\n    if (funcNode?.type === 'attribute') {\n      const obj = funcNode.firstNamedChild;\n      const method = funcNode.lastNamedChild;\n      if (obj?.type === 'identifier' && method?.type === 'identifier' && obj !== method) {\n        return { kind: 'methodCallResult', lhs, receiver: obj.text, method: method.text };\n      }\n    }\n  }\n  return undefined;\n};\n\n/**\n * Python match/case `as` pattern binding: `case User() as u:`\n *\n * AST structure (tree-sitter-python):\n *   as_pattern\n *     alias: as_pattern_target   ← the bound variable name (e.g. \"u\")\n *     children[0]: case_pattern  ← wraps class_pattern (or is class_pattern directly)\n *       class_pattern\n *         dotted_name            ← the class name (e.g. \"User\")\n *\n * The `alias` field is an `as_pattern_target` node whose `.text` is the identifier.\n * The class name lives in the first non-alias named child: either a `case_pattern`\n * wrapping a `class_pattern`, or a direct `class_pattern`.\n *\n * Conservative: returns undefined when:\n * - The node is not an `as_pattern`\n * - The pattern side is not a class_pattern (e.g. 
guard or literal match)\n * - The variable was already bound in scopeEnv\n */\nconst extractPatternBinding: PatternBindingExtractor = (node, scopeEnv) => {\n  if (node.type !== 'as_pattern') return undefined;\n\n  // as_pattern: `case User() as u:` — binds matched value to a name.\n  // Try named field first (future grammar versions may expose it), fall back to positional.\n  if (node.namedChildCount < 2) return undefined;\n\n  const patternChild = node.namedChild(0);\n  const varNameNode = node.childForFieldName('alias')\n    ?? node.namedChild(node.namedChildCount - 1);\n  if (!patternChild || !varNameNode) return undefined;\n  if (varNameNode.type !== 'identifier') return undefined;\n\n  const varName = varNameNode.text;\n  if (!varName || scopeEnv.has(varName)) return undefined;\n\n  // Find the class_pattern — may be direct or wrapped in case_pattern.\n  let classPattern: SyntaxNode | null = null;\n  if (patternChild.type === 'class_pattern') {\n    classPattern = patternChild;\n  } else if (patternChild.type === 'case_pattern') {\n    // Unwrap one level: case_pattern wraps class_pattern\n    for (let j = 0; j < patternChild.namedChildCount; j++) {\n      const inner = patternChild.namedChild(j);\n      if (inner?.type === 'class_pattern') {\n        classPattern = inner;\n        break;\n      }\n    }\n  }\n  if (!classPattern) return undefined;\n\n  // class_pattern children: dotted_name (the class name) + optional keyword_pattern args.\n  const classNameNode = classPattern.firstNamedChild;\n  if (!classNameNode || (classNameNode.type !== 'dotted_name' && classNameNode.type !== 'identifier')) return undefined;\n  const typeName = classNameNode.text;\n  if (!typeName) return undefined;\n\n  return { varName, typeName };\n};\n\nconst PATTERN_BINDING_NODE_TYPES: ReadonlySet<string> = new Set(['as_pattern']);\n\nexport const typeConfig: LanguageTypeConfig = {\n  declarationNodeTypes: DECLARATION_NODE_TYPES,\n  forLoopNodeTypes: FOR_LOOP_NODE_TYPES,\n  
patternBindingNodeTypes: PATTERN_BINDING_NODE_TYPES,\n  extractDeclaration,\n  extractParameter,\n  extractInitializer,\n  scanConstructorBinding,\n  extractForLoopBinding,\n  extractPendingAssignment,\n  extractPatternBinding,\n};\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/type-extractors/ruby.ts",
    "content": "import type { LanguageTypeConfig, ParameterExtractor, TypeBindingExtractor, InitializerExtractor, ClassNameLookup, ConstructorBindingScanner, ReturnTypeExtractor, PendingAssignmentExtractor, ForLoopExtractor } from './types.js';\nimport { extractRubyConstructorAssignment, extractSimpleTypeName, extractElementTypeFromString, extractVarName, resolveIterableElementType } from './shared.js';\nimport type { SyntaxNode } from '../utils.js';\n\n/**\n * Ruby type extractor — YARD annotation parsing.\n *\n * Ruby has no static type system, but the YARD documentation convention\n * provides de facto type annotations via comments:\n *\n *   # @param name [String] the user's name\n *   # @param repo [UserRepo] the repository\n *   # @return [User]\n *   def create(name, repo)\n *     repo.save\n *   end\n *\n * This extractor parses `@param name [Type]` patterns from comment nodes\n * preceding method definitions and binds parameter names to their types.\n *\n * Resolution tiers:\n * - Tier 0: YARD @param annotations (extractDeclaration pre-populates env)\n * - Tier 1: Constructor inference via `user = User.new` (handled by scanConstructorBinding in typeConfig)\n */\n\n/** Regex to extract @param annotations: `@param name [Type]` */\nconst YARD_PARAM_RE = /@param\\s+(\\w+)\\s+\\[([^\\]]+)\\]/g;\n/** Alternate YARD order: `@param [Type] name` */\nconst YARD_PARAM_ALT_RE = /@param\\s+\\[([^\\]]+)\\]\\s+(\\w+)/g;\n\n/** Regex to extract @return annotations: `@return [Type]` */\nconst YARD_RETURN_RE = /@return\\s+\\[([^\\]]+)\\]/;\n\n/**\n * Extract the simple type name from a YARD type string.\n * Handles:\n * - Simple types: \"String\" → \"String\"\n * - Qualified types: \"Models::User\" → \"User\"\n * - Generic types: \"Array<User>\" → \"Array\"\n * - Nullable types: \"String, nil\" → \"String\"\n * - Union types: \"String, Integer\" → undefined (ambiguous)\n */\nconst extractYardTypeName = (yardType: string): string | undefined => {\n  const trimmed = 
yardType.trim();\n\n  // Handle nullable: \"Type, nil\" or \"nil, Type\"\n  // Use bracket-balanced split to avoid breaking on commas inside generics like Hash<Symbol, User>\n  const parts: string[] = [];\n  let depth = 0, start = 0;\n  for (let i = 0; i < trimmed.length; i++) {\n    if (trimmed[i] === '<') depth++;\n    else if (trimmed[i] === '>') depth--;\n    else if (trimmed[i] === ',' && depth === 0) {\n      parts.push(trimmed.slice(start, i).trim());\n      start = i + 1;\n    }\n  }\n  parts.push(trimmed.slice(start).trim());\n  const filtered = parts.filter(p => p !== '' && p !== 'nil');\n  if (filtered.length !== 1) return undefined; // ambiguous union\n\n  const typePart = filtered[0];\n\n  // Handle qualified: \"Models::User\" → \"User\"\n  const segments = typePart.split('::');\n  const last = segments[segments.length - 1];\n\n  // Handle generic: \"Array<User>\" → \"Array\"\n  const genericMatch = last.match(/^(\\w+)\\s*[<{(]/);\n  if (genericMatch) return genericMatch[1];\n\n  // Simple identifier check\n  if (/^\\w+$/.test(last)) return last;\n\n  return undefined;\n};\n\n/**\n * Collect YARD @param annotations from comment nodes preceding a method definition.\n * Returns a map of paramName → typeName.\n *\n * In tree-sitter-ruby, comments are sibling nodes that appear before the method node.\n * We walk backwards through preceding siblings collecting consecutive comment nodes.\n */\nconst collectYardParams = (methodNode: SyntaxNode): Map<string, string> => {\n  const params = new Map<string, string>();\n\n  // In tree-sitter-ruby, YARD comments preceding a method inside a class body\n  // are placed as children of the `class` node, NOT as siblings of the `method`\n  // inside `body_statement`. 
The AST structure is:\n  //\n  //   class\n  //     constant = \"ClassName\"\n  //     comment = \"# @param ...\"     ← sibling of body_statement\n  //     comment = \"# @param ...\"     ← sibling of body_statement\n  //     body_statement\n  //       method                     ← method is here, no preceding siblings\n  //\n  // For top-level methods (outside classes), comments ARE direct siblings.\n  // We handle both by checking: if method has no preceding comment siblings,\n  // look at parent (body_statement) siblings instead.\n  const commentTexts: string[] = [];\n\n  const collectComments = (startNode: SyntaxNode): void => {\n    let sibling = startNode.previousSibling;\n    while (sibling) {\n      if (sibling.type === 'comment') {\n        commentTexts.unshift(sibling.text);\n      } else if (sibling.isNamed) {\n        break;\n      }\n      sibling = sibling.previousSibling;\n    }\n  };\n\n  // Try method's own siblings first (top-level methods)\n  collectComments(methodNode);\n\n  // If no comments found and parent is body_statement, check parent's siblings\n  if (commentTexts.length === 0 && methodNode.parent?.type === 'body_statement') {\n    collectComments(methodNode.parent);\n  }\n\n  // Parse all comment lines for @param annotations\n  const commentBlock = commentTexts.join('\\n');\n  let match: RegExpExecArray | null;\n\n  // Reset regex state\n  YARD_PARAM_RE.lastIndex = 0;\n  while ((match = YARD_PARAM_RE.exec(commentBlock)) !== null) {\n    const paramName = match[1];\n    const rawType = match[2];\n    const typeName = extractYardTypeName(rawType);\n    if (typeName) {\n      params.set(paramName, typeName);\n    }\n  }\n\n  // Also check alternate YARD order: @param [Type] name\n  YARD_PARAM_ALT_RE.lastIndex = 0;\n  while ((match = YARD_PARAM_ALT_RE.exec(commentBlock)) !== null) {\n    const rawType = match[1];\n    const paramName = match[2];\n    if (params.has(paramName)) continue; // standard format takes priority\n    const typeName = 
extractYardTypeName(rawType);\n    if (typeName) {\n      params.set(paramName, typeName);\n    }\n  }\n\n  return params;\n};\n\n/**\n * Ruby node types that may carry type bindings.\n * - `method`/`singleton_method`: YARD @param annotations (via extractDeclaration)\n * - `assignment`: Constructor inference like `user = User.new` (via extractInitializer;\n *   extractDeclaration returns early for these nodes)\n */\nconst DECLARATION_NODE_TYPES: ReadonlySet<string> = new Set([\n  'method',\n  'singleton_method',\n  'assignment',\n]);\n\n/**\n * Extract YARD annotations from method definitions.\n * Pre-populates the scope env with parameter types before the\n * standard parameter walk (which won't find types since Ruby has none).\n */\nconst extractDeclaration: TypeBindingExtractor = (node: SyntaxNode, env: Map<string, string>): void => {\n  if (node.type !== 'method' && node.type !== 'singleton_method') return;\n\n  const yardParams = collectYardParams(node);\n  if (yardParams.size === 0) return;\n\n  // Pre-populate env with YARD type bindings for each parameter\n  for (const [paramName, typeName] of yardParams) {\n    env.set(paramName, typeName);\n  }\n};\n\n/**\n * Ruby parameter extraction.\n * Ruby parameters (identifiers inside method_parameters) have no inline\n * type annotations. 
YARD types are already populated by extractDeclaration,\n * so this is a no-op — the bindings are already in the env.\n *\n * We still register this to maintain the LanguageTypeConfig contract.\n */\nconst extractParameter: ParameterExtractor = (_node: SyntaxNode, _env: Map<string, string>): void => {\n  // Ruby parameters have no type annotations.\n  // YARD types are pre-populated by extractDeclaration.\n};\n\n/**\n * Ruby constructor inference: user = User.new or service = Models::User.new\n * Uses the shared extractRubyConstructorAssignment helper for AST matching,\n * then resolves against locally-known class names.\n */\nconst extractInitializer: InitializerExtractor = (node, env, classNames): void => {\n  const result = extractRubyConstructorAssignment(node);\n  if (!result) return;\n  if (env.has(result.varName)) return;\n  if (classNames.has(result.calleeName)) {\n    env.set(result.varName, result.calleeName);\n  }\n};\n\n/**\n * Extract return type from YARD `@return [Type]` annotation preceding a method.\n * Reuses the same comment-walking strategy as collectYardParams: try direct\n * siblings first, fall back to parent (body_statement) siblings for class methods.\n */\nconst extractReturnType: ReturnTypeExtractor = (node) => {\n  const search = (startNode: SyntaxNode): string | undefined => {\n    let sibling = startNode.previousSibling;\n    while (sibling) {\n      if (sibling.type === 'comment') {\n        const match = YARD_RETURN_RE.exec(sibling.text);\n        if (match) return extractYardTypeName(match[1]);\n      } else if (sibling.isNamed) {\n        break;\n      }\n      sibling = sibling.previousSibling;\n    }\n    return undefined;\n  };\n\n  const result = search(node);\n  if (result) return result;\n\n  if (node.parent?.type === 'body_statement') {\n    return search(node.parent);\n  }\n  return undefined;\n};\n\n/**\n * Ruby constructor binding scanner: captures both `user = User.new` and\n * plain call assignments like `user = 
get_user()`.\n * The `.new` pattern returns the class name directly; plain calls return the\n * callee name for return-type inference via SymbolTable lookup.\n */\nconst scanConstructorBinding: ConstructorBindingScanner = (node) => {\n  // Try the .new pattern first (returns class name directly)\n  const newResult = extractRubyConstructorAssignment(node);\n  if (newResult) return newResult;\n\n  // Plain call assignment: user = get_user() / user = Models.create()\n  if (node.type !== 'assignment') return undefined;\n  const left = node.childForFieldName('left');\n  const right = node.childForFieldName('right');\n  if (!left || !right) return undefined;\n  if (left.type !== 'identifier' && left.type !== 'constant') return undefined;\n  if (right.type !== 'call') return undefined;\n  const method = right.childForFieldName('method');\n  if (!method) return undefined;\n  const calleeName = extractSimpleTypeName(method);\n  if (!calleeName) return undefined;\n  return { varName: left.text, calleeName };\n};\n\n/** Ruby method node types that carry a parameter list. 
*/\nconst RUBY_METHOD_NODE_TYPES = new Set(['method', 'singleton_method']);\n\nconst FOR_LOOP_NODE_TYPES: ReadonlySet<string> = new Set(['for']);\n\n/**\n * Collect raw YARD @param type strings from comment nodes preceding a method.\n * Unlike collectYardParams which returns simplified type names, this returns the\n * raw bracket content (e.g., \"Array<User>\" not \"Array\") for element type extraction.\n */\nconst collectYardRawParams = (methodNode: SyntaxNode): Map<string, string> => {\n  const params = new Map<string, string>();\n  const commentTexts: string[] = [];\n\n  const collectComments = (startNode: SyntaxNode): void => {\n    let sibling = startNode.previousSibling;\n    while (sibling) {\n      if (sibling.type === 'comment') {\n        commentTexts.unshift(sibling.text);\n      } else if (sibling.isNamed) {\n        break;\n      }\n      sibling = sibling.previousSibling;\n    }\n  };\n\n  collectComments(methodNode);\n  if (commentTexts.length === 0 && methodNode.parent?.type === 'body_statement') {\n    collectComments(methodNode.parent);\n  }\n\n  const commentBlock = commentTexts.join('\\n');\n  let match: RegExpExecArray | null;\n\n  YARD_PARAM_RE.lastIndex = 0;\n  while ((match = YARD_PARAM_RE.exec(commentBlock)) !== null) {\n    params.set(match[1], match[2]);\n  }\n  YARD_PARAM_ALT_RE.lastIndex = 0;\n  while ((match = YARD_PARAM_ALT_RE.exec(commentBlock)) !== null) {\n    if (!params.has(match[2])) params.set(match[2], match[1]);\n  }\n\n  return params;\n};\n\n/**\n * Walk up the AST from a for-statement to find the enclosing method,\n * then search its YARD @param annotations for one named `iterableName`.\n * Returns the element type extracted from the raw YARD type string.\n *\n * Example: `@param users [Array<User>]` → extracts \"User\" from \"Array<User>\".\n */\nconst findRubyParamElementType = (iterableName: string, startNode: SyntaxNode): string | undefined => {\n  let current: SyntaxNode | null = startNode.parent;\n  while (current) 
{\n    if (RUBY_METHOD_NODE_TYPES.has(current.type)) {\n      const rawParams = collectYardRawParams(current);\n      const rawType = rawParams.get(iterableName);\n      if (rawType) return extractElementTypeFromString(rawType);\n      break;\n    }\n    current = current.parent;\n  }\n  return undefined;\n};\n\n/**\n * Ruby: for user in users ... end\n *\n * tree-sitter-ruby `for` node structure:\n *   pattern field: the loop variable (identifier)\n *   value field: `in` node whose child is the iterable expression\n *\n * Tier 1c: resolves the element type via:\n *   1. scopeEnv string — extractElementTypeFromString on the stored type\n *   2. AST walk — walks up to the enclosing method's YARD @param to read Array<User> directly\n *\n * Ruby has no static types on loop variables, so this mainly works when the\n * iterable has a YARD-annotated container type (e.g., `@param users [Array<User>]`).\n */\nconst extractForLoopBinding: ForLoopExtractor = (node, { scopeEnv, declarationTypeNodes, scope }): void => {\n  if (node.type !== 'for') return;\n\n  // The loop variable is the `pattern` field (identifier).\n  const patternNode = node.childForFieldName('pattern');\n  if (!patternNode) return;\n  const loopVarName = extractVarName(patternNode);\n  if (!loopVarName) return;\n\n  // The iterable is inside the `value` field which is an `in` node wrapping the expression.\n  const inNode = node.childForFieldName('value');\n  if (!inNode) return;\n  const iterableNode = inNode.firstNamedChild;\n  let iterableName: string | undefined;\n  if (iterableNode?.type === 'identifier') {\n    iterableName = iterableNode.text;\n  } else if (iterableNode?.type === 'call') {\n    const method = iterableNode.childForFieldName('method');\n    if (method) iterableName = method.text;\n  }\n  if (!iterableName) return;\n\n  // Ruby has no extractFromTypeNode (no AST type annotations), pass a no-op.\n  const noopExtractFromTypeNode = (): string | undefined => undefined;\n\n  const 
elementType = resolveIterableElementType(\n    iterableName, node, scopeEnv, declarationTypeNodes, scope,\n    noopExtractFromTypeNode, findRubyParamElementType,\n    undefined,\n  );\n  if (!elementType) return;\n\n  scopeEnv.set(loopVarName, elementType);\n};\n\n/**\n * Ruby: alias_user = user → assignment with left/right identifier fields.\n * Only handles plain identifier RHS (not calls, not literals).\n * Skips if LHS already has a resolved type in scopeEnv.\n */\nconst extractPendingAssignment: PendingAssignmentExtractor = (node, scopeEnv) => {\n  if (node.type !== 'assignment') return undefined;\n  const lhsNode = node.childForFieldName('left');\n  if (!lhsNode || lhsNode.type !== 'identifier') return undefined;\n  const varName = lhsNode.text;\n  if (scopeEnv.has(varName)) return undefined;\n  const rhsNode = node.childForFieldName('right');\n  if (!rhsNode) return undefined;\n  if (rhsNode.type === 'identifier') return { kind: 'copy', lhs: varName, rhs: rhsNode.text };\n  // call/method_call RHS — Ruby uses method calls for both field access and method calls\n  if (rhsNode.type === 'call' || rhsNode.type === 'method_call') {\n    const methodNode = rhsNode.childForFieldName('method');\n    const receiverNode = rhsNode.childForFieldName('receiver');\n    if (!receiverNode && methodNode?.type === 'identifier') {\n      // No receiver → callResult (bare function call)\n      return { kind: 'callResult', lhs: varName, callee: methodNode.text };\n    }\n    if (receiverNode?.type === 'identifier' && methodNode?.type === 'identifier') {\n      // With receiver → methodCallResult (a.method)\n      return { kind: 'methodCallResult', lhs: varName, receiver: receiverNode.text, method: methodNode.text };\n    }\n  }\n  return undefined;\n};\n\nexport const typeConfig: LanguageTypeConfig = {\n  declarationNodeTypes: DECLARATION_NODE_TYPES,\n  forLoopNodeTypes: FOR_LOOP_NODE_TYPES,\n  extractDeclaration,\n  extractParameter,\n  extractInitializer,\n  
scanConstructorBinding,\n  extractReturnType,\n  extractForLoopBinding,\n  extractPendingAssignment,\n};\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/type-extractors/rust.ts",
    "content": "import type { SyntaxNode } from '../utils.js';\nimport type { LanguageTypeConfig, ParameterExtractor, TypeBindingExtractor, InitializerExtractor, ClassNameLookup, ConstructorBindingScanner, PendingAssignmentExtractor, PendingAssignment, PatternBindingExtractor, ForLoopExtractor } from './types.js';\nimport { extractSimpleTypeName, extractVarName, hasTypeAnnotation, unwrapAwait, extractGenericTypeArgs, resolveIterableElementType, methodToTypeArgPosition, extractElementTypeFromString, type TypeArgPosition } from './shared.js';\n\nconst DECLARATION_NODE_TYPES: ReadonlySet<string> = new Set([\n  'let_declaration',\n  'let_condition',\n]);\n\n/** Walk up the AST to find the enclosing impl block and extract the implementing type name. */\nconst findEnclosingImplType = (node: SyntaxNode): string | undefined => {\n  let current = node.parent;\n  while (current) {\n    if (current.type === 'impl_item') {\n      // The 'type' field holds the implementing type (e.g., `impl User { ... }`)\n      const typeNode = current.childForFieldName('type');\n      if (typeNode) return extractSimpleTypeName(typeNode);\n    }\n    current = current.parent;\n  }\n  return undefined;\n};\n\n/**\n * Extract the type name from a struct_pattern's 'type' field.\n * Handles both simple `User { .. }` and scoped `Message::Data { .. }`.\n */\nconst extractStructPatternType = (structPattern: SyntaxNode): string | undefined => {\n  const typeNode = structPattern.childForFieldName('type');\n  if (!typeNode) return undefined;\n  return extractSimpleTypeName(typeNode);\n};\n\n/**\n * Recursively scan a pattern tree for captured_pattern nodes (x @ StructType { .. 
})\n * and extract variable → type bindings from them.\n */\nconst extractCapturedPatternBindings = (pattern: SyntaxNode, env: Map<string, string>, depth = 0): void => {\n  if (depth > 50) return;\n  if (pattern.type === 'captured_pattern') {\n    // captured_pattern: identifier @ inner_pattern\n    // The first named child is the identifier, followed by the inner pattern.\n    const nameNode = pattern.firstNamedChild;\n    if (!nameNode || nameNode.type !== 'identifier') return;\n    // Find the struct_pattern child — that gives us the type\n    for (let i = 0; i < pattern.namedChildCount; i++) {\n      const child = pattern.namedChild(i);\n      if (child?.type === 'struct_pattern') {\n        const typeName = extractStructPatternType(child);\n        if (typeName) env.set(nameNode.text, typeName);\n        return;\n      }\n    }\n    return;\n  }\n  // Recurse into tuple_struct_pattern children to find nested captured_patterns\n  // e.g., Some(user @ User { .. })\n  if (pattern.type === 'tuple_struct_pattern') {\n    for (let i = 0; i < pattern.namedChildCount; i++) {\n      const child = pattern.namedChild(i);\n      if (child) extractCapturedPatternBindings(child, env, depth + 1);\n    }\n  }\n};\n\n/** Rust: let x: Foo = ... | if let / while let pattern bindings */\nconst extractDeclaration: TypeBindingExtractor = (node: SyntaxNode, env: Map<string, string>): void => {\n  if (node.type === 'let_condition') {\n    // if let / while let: extract type bindings from pattern matching.\n    //\n    // Supported patterns:\n    // - captured_pattern: `if let user @ User { .. } = expr` → user: User\n    // - tuple_struct_pattern with nested captured_pattern:\n    //   `if let Some(user @ User { .. 
}) = expr` → user: User\n    //\n    // NOT supported (requires generic unwrapping — Phase 3):\n    // - `if let Some(x) = opt` where opt: Option<T> → x: T\n    //\n    // struct_pattern without capture (`if let User { name } = expr`)\n    // destructures fields — individual field types are unknown without\n    // field-type resolution, so no bindings are extracted.\n    const pattern = node.childForFieldName('pattern');\n    if (!pattern) return;\n    extractCapturedPatternBindings(pattern, env);\n    return;\n  }\n\n  // Standard let_declaration: let x: Foo = ...\n  const pattern = node.childForFieldName('pattern');\n  const typeNode = node.childForFieldName('type');\n  if (!pattern || !typeNode) return;\n  const varName = extractVarName(pattern);\n  const typeName = extractSimpleTypeName(typeNode);\n  if (varName && typeName) env.set(varName, typeName);\n};\n\n/** Rust: let x = User::new(), let x = User::default(), or let x = User { ... } */\nconst extractInitializer: InitializerExtractor = (node: SyntaxNode, env: Map<string, string>, classNames: ClassNameLookup): void => {\n  // Skip if there's an explicit type annotation — Tier 0 already handled it\n  if (node.childForFieldName('type') !== null) return;\n  const pattern = node.childForFieldName('pattern');\n  const value = node.childForFieldName('value');\n  if (!pattern || !value) return;\n\n  // Rust struct literal: let user = User { name: \"alice\", age: 30 }\n  // tree-sitter-rust: struct_expression with 'name' field holding the type\n  if (value.type === 'struct_expression') {\n    const typeNode = value.childForFieldName('name');\n    if (!typeNode) return;\n    const rawType = extractSimpleTypeName(typeNode);\n    if (!rawType) return;\n    // Resolve Self to the actual struct/enum name from the enclosing impl block\n    const typeName = rawType === 'Self' ? 
findEnclosingImplType(node) : rawType;\n    const varName = extractVarName(pattern);\n    if (varName && typeName) env.set(varName, typeName);\n    return;\n  }\n\n  // Unit struct instantiation: let svc = UserService; (bare identifier, no braces or call)\n  if (value.type === 'identifier' && classNames.has(value.text)) {\n    const varName = extractVarName(pattern);\n    if (varName) env.set(varName, value.text);\n    return;\n  }\n\n  if (value.type !== 'call_expression') return;\n  const func = value.childForFieldName('function');\n  if (!func || func.type !== 'scoped_identifier') return;\n  const nameField = func.childForFieldName('name');\n  // Only match ::new() and ::default() — the two idiomatic Rust constructors.\n  // Deliberately excludes ::from(), ::with_capacity(), etc. to avoid false positives\n  // (e.g. String::from(\"x\") is not necessarily the \"String\" type we want for method resolution).\n  if (!nameField || (nameField.text !== 'new' && nameField.text !== 'default')) return;\n  const pathField = func.childForFieldName('path');\n  if (!pathField) return;\n  const rawType = extractSimpleTypeName(pathField);\n  if (!rawType) return;\n  // Resolve Self to the actual struct/enum name from the enclosing impl block\n  const typeName = rawType === 'Self' ? findEnclosingImplType(node) : rawType;\n  const varName = extractVarName(pattern);\n  if (varName && typeName) env.set(varName, typeName);\n};\n\n/** Rust: parameter → pattern: type */\nconst extractParameter: ParameterExtractor = (node: SyntaxNode, env: Map<string, string>): void => {\n  let nameNode: SyntaxNode | null = null;\n  let typeNode: SyntaxNode | null = null;\n\n  if (node.type === 'parameter') {\n    nameNode = node.childForFieldName('pattern');\n    typeNode = node.childForFieldName('type');\n  } else {\n    nameNode = node.childForFieldName('name') ?? 
node.childForFieldName('pattern');\n    typeNode = node.childForFieldName('type');\n  }\n\n  if (!nameNode || !typeNode) return;\n  const varName = extractVarName(nameNode);\n  const typeName = extractSimpleTypeName(typeNode);\n  if (varName && typeName) env.set(varName, typeName);\n};\n\n/** Rust: let user = get_user(\"alice\") — let_declaration with call_expression value, no type annotation.\n * Skips `let user: User = ...` (explicit type annotation — handled by extractDeclaration).\n * Skips `let user = User::new()` (scoped_identifier callee named \"new\" — handled by extractInitializer).\n * Unwraps `let mut user = get_user()` by looking inside mut_pattern for the inner identifier.\n */\nconst scanConstructorBinding: ConstructorBindingScanner = (node) => {\n  if (node.type !== 'let_declaration') return undefined;\n  if (hasTypeAnnotation(node)) return undefined;\n  let patternNode = node.childForFieldName('pattern');\n  if (!patternNode) return undefined;\n  if (patternNode.type === 'mut_pattern') {\n    patternNode = patternNode.firstNamedChild;\n    if (!patternNode) return undefined;\n  }\n  if (patternNode.type !== 'identifier') return undefined;\n  // Unwrap `.await`: `let user = get_user().await` → await_expression wraps call_expression\n  const value = unwrapAwait(node.childForFieldName('value'));\n  if (!value || value.type !== 'call_expression') return undefined;\n  const func = value.childForFieldName('function');\n  if (!func) return undefined;\n  if (func.type === 'scoped_identifier') {\n    const methodName = func.lastNamedChild;\n    if (methodName?.text === 'new' || methodName?.text === 'default') return undefined;\n  }\n  const calleeName = extractSimpleTypeName(func);\n  if (!calleeName) return undefined;\n  return { varName: patternNode.text, calleeName };\n};\n\n/** Rust: let alias = u; → let_declaration with pattern + value fields.\n *  Also handles struct destructuring: `let Point { x, y } = p` → N fieldAccess items. 
*/\nconst extractPendingAssignment: PendingAssignmentExtractor = (node, scopeEnv) => {\n  if (node.type !== 'let_declaration') return undefined;\n  const pattern = node.childForFieldName('pattern');\n  const value = node.childForFieldName('value');\n  if (!pattern || !value) return undefined;\n\n  // Struct pattern destructuring: `let Point { x, y } = receiver`\n  // struct_pattern has a type child (struct name) and field_pattern children\n  if (pattern.type === 'struct_pattern' && value.type === 'identifier') {\n    const receiver = value.text;\n    const items: PendingAssignment[] = [];\n    for (let j = 0; j < pattern.namedChildCount; j++) {\n      const field = pattern.namedChild(j);\n      if (!field) continue;\n      if (field.type === 'field_pattern') {\n        // `Point { x: local_x }` → field_pattern with name + pattern children\n        const nameNode = field.childForFieldName('name');\n        const patNode = field.childForFieldName('pattern');\n        if (nameNode && patNode) {\n          const fieldName = nameNode.text;\n          const varName = extractVarName(patNode);\n          if (varName && !scopeEnv.has(varName)) {\n            items.push({ kind: 'fieldAccess', lhs: varName, receiver, field: fieldName });\n          }\n        } else if (nameNode) {\n          // Shorthand: `Point { x }` → field_pattern with only name (varName = fieldName)\n          const varName = nameNode.text;\n          if (!scopeEnv.has(varName)) {\n            items.push({ kind: 'fieldAccess', lhs: varName, receiver, field: varName });\n          }\n        }\n      }\n    }\n    if (items.length > 0) return items;\n    return undefined;\n  }\n\n  const lhs = extractVarName(pattern);\n  if (!lhs || scopeEnv.has(lhs)) return undefined;\n  // Unwrap Rust .await: `let user = get_user().await` → call_expression\n  const unwrapped = unwrapAwait(value) ?? 
value;\n  if (unwrapped.type === 'identifier') return { kind: 'copy', lhs, rhs: unwrapped.text };\n  // field_expression RHS → fieldAccess (a.field)\n  if (unwrapped.type === 'field_expression') {\n    const obj = unwrapped.firstNamedChild;\n    const field = unwrapped.lastNamedChild;\n    if (obj?.type === 'identifier' && field?.type === 'field_identifier') {\n      return { kind: 'fieldAccess', lhs, receiver: obj.text, field: field.text };\n    }\n  }\n  // call_expression RHS → callResult (simple calls only)\n  if (unwrapped.type === 'call_expression') {\n    const funcNode = unwrapped.childForFieldName('function');\n    if (funcNode?.type === 'identifier') {\n      return { kind: 'callResult', lhs, callee: funcNode.text };\n    }\n  }\n  // method_call_expression RHS → methodCallResult (receiver.method())\n  if (unwrapped.type === 'method_call_expression') {\n    const obj = unwrapped.firstNamedChild;\n    if (obj?.type === 'identifier') {\n      const methodNode = unwrapped.childForFieldName('name') ?? unwrapped.namedChild(1);\n      if (methodNode?.type === 'field_identifier') {\n        return { kind: 'methodCallResult', lhs, receiver: obj.text, method: methodNode.text };\n      }\n    }\n  }\n  return undefined;\n};\n\n/**\n * Rust pattern binding extractor for `if let` / `while let` constructs that unwrap\n * enum variants and introduce new typed variables.\n *\n * Supported patterns:\n * - `if let Some(x) = opt`  → x: T  (opt: Option<T>, T already in scopeEnv via NULLABLE_WRAPPER_TYPES)\n * - `if let Ok(x) = res`    → x: T  (res: Result<T, E>, T extracted from declarationTypeNodes)\n *\n * These complement the captured_pattern support in extractDeclaration (which handles\n * `if let x @ Struct { .. 
} = expr` but NOT tuple struct unwrapping like Some(x) / Ok(x)).\n *\n * Conservative: returns undefined when:\n * - The source variable's type is unknown (not in scopeEnv)\n * - The wrapper is not a known single-unwrap variant (Some / Ok)\n * - The value side is not a simple identifier\n */\nconst extractPatternBinding: PatternBindingExtractor = (\n  node,\n  scopeEnv,\n  declarationTypeNodes,\n  scope,\n) => {\n  let patternNode: SyntaxNode | null = null;\n  let valueNode: SyntaxNode | null = null;\n\n  if (node.type === 'let_condition') {\n    patternNode = node.childForFieldName('pattern');\n    valueNode = node.childForFieldName('value');\n  } else if (node.type === 'match_arm') {\n    // match_arm → pattern field is match_pattern wrapping the actual pattern\n    const matchPatternNode = node.childForFieldName('pattern');\n    // Unwrap match_pattern to get the tuple_struct_pattern inside\n    patternNode = matchPatternNode?.type === 'match_pattern'\n      ? matchPatternNode.firstNamedChild\n      : matchPatternNode;\n    // source variable is in the parent match_expression's 'value' field\n    const matchExpr = node.parent?.parent; // match_arm → match_block → match_expression\n    if (matchExpr?.type === 'match_expression') {\n      valueNode = matchExpr.childForFieldName('value');\n    }\n  }\n  if (!patternNode || !valueNode) return undefined;\n\n  // Only handle tuple_struct_pattern: Some(x) or Ok(x)\n  if (patternNode.type !== 'tuple_struct_pattern') return undefined;\n\n  // Extract the wrapper type name: Some | Ok\n  const wrapperTypeNode = patternNode.childForFieldName('type');\n  if (!wrapperTypeNode) return undefined;\n  const wrapperName = extractSimpleTypeName(wrapperTypeNode);\n  if (wrapperName !== 'Some' && wrapperName !== 'Ok' && wrapperName !== 'Err') return undefined;\n\n  // Extract the inner variable name from the single child of the tuple_struct_pattern.\n  // `Some(x)` → the first named child after the type field is the identifier.\n  // 
tree-sitter-rust: tuple_struct_pattern has 'type' field + unnamed children for args.\n  let innerVar: string | undefined;\n  for (let i = 0; i < patternNode.namedChildCount; i++) {\n    const child = patternNode.namedChild(i);\n    if (!child) continue;\n    // Skip the type node itself\n    if (child === wrapperTypeNode) continue;\n    if (child.type === 'identifier') {\n      innerVar = child.text;\n      break;\n    }\n  }\n  if (!innerVar) return undefined;\n\n  // The value must be a simple identifier so we can look it up in scopeEnv\n  const sourceVarName = valueNode.type === 'identifier' ? valueNode.text : undefined;\n  if (!sourceVarName) return undefined;\n\n  // For `Some(x)`: Option<T> is already unwrapped to T in scopeEnv (via NULLABLE_WRAPPER_TYPES).\n  // For `Ok(x)`: Result<T, E> stores \"Result\" in scopeEnv — must use declarationTypeNodes.\n  if (wrapperName === 'Some') {\n    const innerType = scopeEnv.get(sourceVarName);\n    if (!innerType) return undefined;\n    return { varName: innerVar, typeName: innerType };\n  }\n\n  // wrapperName === 'Ok' or 'Err': look up the Result<T, E> type AST node.\n  // Ok(x) → extract T (typeArgs[0]), Err(e) → extract E (typeArgs[1]).\n  const typeNodeKey = `${scope}\\0${sourceVarName}`;\n  const typeAstNode = declarationTypeNodes.get(typeNodeKey);\n  if (!typeAstNode) return undefined;\n  const typeArgs = extractGenericTypeArgs(typeAstNode);\n  const argIndex = wrapperName === 'Err' ? 1 : 0;\n  if (typeArgs.length < argIndex + 1) return undefined;\n  return { varName: innerVar, typeName: typeArgs[argIndex] };\n};\n\n// --- For-loop Tier 1c ---\n\nconst FOR_LOOP_NODE_TYPES: ReadonlySet<string> = new Set(['for_expression']);\n\n/** Extract element type from a Rust type annotation AST node.\n *  Handles: generic_type (Vec<User>), reference_type (&[User]), array_type ([User; N]),\n *  slice_type ([User]). For call-graph purposes, strips references (&User → User). 
*/\nconst extractRustElementTypeFromTypeNode = (typeNode: SyntaxNode, pos: TypeArgPosition = 'last', depth = 0): string | undefined => {\n  if (depth > 50) return undefined;\n  // generic_type: Vec<User>, HashMap<K, V> — extract type arg based on position\n  if (typeNode.type === 'generic_type') {\n    const args = extractGenericTypeArgs(typeNode);\n    if (args.length >= 1) return pos === 'first' ? args[0] : args[args.length - 1];\n  }\n  // reference_type: &[User] or &Vec<User> — unwrap the reference and recurse\n  if (typeNode.type === 'reference_type') {\n    const inner = typeNode.lastNamedChild;\n    if (inner) return extractRustElementTypeFromTypeNode(inner, pos, depth + 1);\n  }\n  // array_type: [User; N] — element is the first child\n  if (typeNode.type === 'array_type') {\n    const elemNode = typeNode.firstNamedChild;\n    if (elemNode) return extractSimpleTypeName(elemNode);\n  }\n  // slice_type: [User] — element is the first child\n  if (typeNode.type === 'slice_type') {\n    const elemNode = typeNode.firstNamedChild;\n    if (elemNode) return extractSimpleTypeName(elemNode);\n  }\n  return undefined;\n};\n\n/** Walk up from a for-loop to the enclosing function_item and search parameters\n *  for one named `iterableName`. Returns the element type from its annotation. 
*/\nconst findRustParamElementType = (iterableName: string, startNode: SyntaxNode, pos: TypeArgPosition = 'last'): string | undefined => {\n  let current: SyntaxNode | null = startNode.parent;\n  while (current) {\n    if (current.type === 'function_item') {\n      const paramsNode = current.childForFieldName('parameters');\n      if (paramsNode) {\n        for (let i = 0; i < paramsNode.namedChildCount; i++) {\n          const param = paramsNode.namedChild(i);\n          if (!param || param.type !== 'parameter') continue;\n          const nameNode = param.childForFieldName('pattern');\n          if (!nameNode) continue;\n          // Unwrap reference patterns: &users, &mut users\n          let identNode = nameNode;\n          if (identNode.type === 'reference_pattern') {\n            identNode = identNode.lastNamedChild ?? identNode;\n          }\n          if (identNode.type === 'mut_pattern') {\n            identNode = identNode.firstNamedChild ?? identNode;\n          }\n          if (identNode.text !== iterableName) continue;\n          const typeNode = param.childForFieldName('type');\n          if (typeNode) return extractRustElementTypeFromTypeNode(typeNode, pos);\n        }\n      }\n      break;\n    }\n    current = current.parent;\n  }\n  return undefined;\n};\n\n/** Rust: for user in &users where users has a known container type.\n *  Unwraps reference_expression (&users, &mut users) to get the iterable name. 
*/\nconst extractForLoopBinding: ForLoopExtractor = (node, { scopeEnv, declarationTypeNodes, scope, returnTypeLookup }): void => {\n  if (node.type !== 'for_expression') return;\n\n  const patternNode = node.childForFieldName('pattern');\n  const valueNode = node.childForFieldName('value');\n  if (!patternNode || !valueNode) return;\n\n  // Extract iterable name + method — may be &users, users, or users.iter()/keys()/values()\n  let iterableName: string | undefined;\n  let methodName: string | undefined;\n  let callExprElementType: string | undefined;\n  if (valueNode.type === 'reference_expression') {\n    const inner = valueNode.lastNamedChild;\n    if (inner?.type === 'identifier') iterableName = inner.text;\n  } else if (valueNode.type === 'identifier') {\n    iterableName = valueNode.text;\n  } else if (valueNode.type === 'field_expression') {\n    const prop = valueNode.lastNamedChild;\n    if (prop) iterableName = prop.text;\n  } else if (valueNode.type === 'call_expression') {\n    const funcExpr = valueNode.childForFieldName('function');\n    if (funcExpr?.type === 'field_expression') {\n      // users.iter() → field_expression > identifier + field_identifier\n      const obj = funcExpr.firstNamedChild;\n      if (obj?.type === 'identifier') iterableName = obj.text;\n      // Extract method name: iter, keys, values, into_iter, etc.\n      const field = funcExpr.lastNamedChild;\n      if (field?.type === 'field_identifier') methodName = field.text;\n    } else if (funcExpr?.type === 'identifier') {\n      // Direct function call: for user in get_users()\n      const rawReturn = returnTypeLookup.lookupRawReturnType(funcExpr.text);\n      if (rawReturn) callExprElementType = extractElementTypeFromString(rawReturn);\n    }\n  }\n  if (!iterableName && !callExprElementType) return;\n\n  let elementType: string | undefined;\n  if (callExprElementType) {\n    elementType = callExprElementType;\n  } else {\n    const containerTypeName = 
scopeEnv.get(iterableName!);\n    const typeArgPos = methodToTypeArgPosition(methodName, containerTypeName);\n    elementType = resolveIterableElementType(\n      iterableName!, node, scopeEnv, declarationTypeNodes, scope,\n      extractRustElementTypeFromTypeNode, findRustParamElementType,\n      typeArgPos,\n    );\n  }\n  if (!elementType) return;\n\n  const loopVarName = extractVarName(patternNode);\n  if (loopVarName) scopeEnv.set(loopVarName, elementType);\n};\n\nexport const typeConfig: LanguageTypeConfig = {\n  declarationNodeTypes: DECLARATION_NODE_TYPES,\n  forLoopNodeTypes: FOR_LOOP_NODE_TYPES,\n  patternBindingNodeTypes: new Set(['let_condition', 'match_arm']),\n  extractDeclaration,\n  extractInitializer,\n  extractParameter,\n  scanConstructorBinding,\n  extractForLoopBinding,\n  extractPendingAssignment,\n  extractPatternBinding,\n};\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/type-extractors/shared.ts",
    "content": "import type { SyntaxNode } from '../utils.js';\n\n/** Which type argument to extract from a multi-arg generic container.\n *  - 'first': key type (e.g., K from Map<K,V>) — used for .keys(), .keySet()\n *  - 'last':  value type (e.g., V from Map<K,V>) — used for .values(), .items(), .iter() */\nexport type TypeArgPosition = 'first' | 'last';\n\n// ---------------------------------------------------------------------------\n// Container type descriptors — maps container base names to type parameter\n// semantics per access method. Replaces the simple KEY_METHODS heuristic.\n//\n// For user-defined generics (MyCache<K,V> extends Map<K,V>), heritage-aware\n// fallback can walk the EXTENDS chain to find a matching descriptor.\n// ---------------------------------------------------------------------------\n\n/** Describes which type parameter position each access method yields. */\ninterface ContainerDescriptor {\n  /** Number of type parameters (1 = single-element, 2 = key-value) */\n  arity: number;\n  /** Methods that yield the first type parameter (key type for maps) */\n  keyMethods: ReadonlySet<string>;\n  /** Methods that yield the last type parameter (value type) */\n  valueMethods: ReadonlySet<string>;\n}\n\n/** Empty set for containers that have no key-yielding methods */\nconst NO_KEYS: ReadonlySet<string> = new Set();\n\n/** Standard key-yielding methods across languages */\nconst STD_KEY_METHODS: ReadonlySet<string> = new Set(['keys']);\nconst JAVA_KEY_METHODS: ReadonlySet<string> = new Set(['keySet']);\nconst CSHARP_KEY_METHODS: ReadonlySet<string> = new Set(['Keys']);\n\n/** Standard value-yielding methods across languages */\nconst STD_VALUE_METHODS: ReadonlySet<string> = new Set(['values', 'get', 'pop', 'remove']);\nconst CSHARP_VALUE_METHODS: ReadonlySet<string> = new Set(['Values', 'TryGetValue']);\nconst SINGLE_ELEMENT_METHODS: ReadonlySet<string> = new Set([\n  'iter', 'into_iter', 'iterator', 'get', 'first', 'last', 'pop',\n  'peek', 
'poll', 'find', 'filter', 'map',\n]);\n\nconst CONTAINER_DESCRIPTORS: ReadonlyMap<string, ContainerDescriptor> = new Map([\n  // --- Map / Dict types (arity 2: key + value) ---\n  ['Map',           { arity: 2, keyMethods: STD_KEY_METHODS,    valueMethods: STD_VALUE_METHODS }],\n  ['WeakMap',       { arity: 2, keyMethods: STD_KEY_METHODS,    valueMethods: STD_VALUE_METHODS }],\n  ['HashMap',       { arity: 2, keyMethods: STD_KEY_METHODS,    valueMethods: STD_VALUE_METHODS }],\n  ['BTreeMap',      { arity: 2, keyMethods: STD_KEY_METHODS,    valueMethods: STD_VALUE_METHODS }],\n  ['LinkedHashMap', { arity: 2, keyMethods: JAVA_KEY_METHODS,   valueMethods: STD_VALUE_METHODS }],\n  ['TreeMap',       { arity: 2, keyMethods: JAVA_KEY_METHODS,   valueMethods: STD_VALUE_METHODS }],\n  ['dict',          { arity: 2, keyMethods: STD_KEY_METHODS,    valueMethods: STD_VALUE_METHODS }],\n  ['Dict',          { arity: 2, keyMethods: STD_KEY_METHODS,    valueMethods: STD_VALUE_METHODS }],\n  ['Dictionary',    { arity: 2, keyMethods: CSHARP_KEY_METHODS, valueMethods: CSHARP_VALUE_METHODS }],\n  ['SortedDictionary', { arity: 2, keyMethods: CSHARP_KEY_METHODS, valueMethods: CSHARP_VALUE_METHODS }],\n  ['Record',        { arity: 2, keyMethods: STD_KEY_METHODS,    valueMethods: STD_VALUE_METHODS }],\n  ['OrderedDict',   { arity: 2, keyMethods: STD_KEY_METHODS,    valueMethods: STD_VALUE_METHODS }],\n  ['ConcurrentHashMap', { arity: 2, keyMethods: JAVA_KEY_METHODS, valueMethods: STD_VALUE_METHODS }],\n  ['ConcurrentDictionary', { arity: 2, keyMethods: CSHARP_KEY_METHODS, valueMethods: CSHARP_VALUE_METHODS }],\n\n  // --- Single-element containers (arity 1) ---\n  ['Array',     { arity: 1, keyMethods: NO_KEYS, valueMethods: SINGLE_ELEMENT_METHODS }],\n  ['List',      { arity: 1, keyMethods: NO_KEYS, valueMethods: SINGLE_ELEMENT_METHODS }],\n  ['ArrayList', { arity: 1, keyMethods: NO_KEYS, valueMethods: SINGLE_ELEMENT_METHODS }],\n  ['LinkedList',{ arity: 1, keyMethods: NO_KEYS, 
valueMethods: SINGLE_ELEMENT_METHODS }],\n  ['Vec',       { arity: 1, keyMethods: NO_KEYS, valueMethods: SINGLE_ELEMENT_METHODS }],\n  ['VecDeque',  { arity: 1, keyMethods: NO_KEYS, valueMethods: SINGLE_ELEMENT_METHODS }],\n  ['Set',       { arity: 1, keyMethods: NO_KEYS, valueMethods: SINGLE_ELEMENT_METHODS }],\n  ['HashSet',   { arity: 1, keyMethods: NO_KEYS, valueMethods: SINGLE_ELEMENT_METHODS }],\n  ['BTreeSet',  { arity: 1, keyMethods: NO_KEYS, valueMethods: SINGLE_ELEMENT_METHODS }],\n  ['TreeSet',   { arity: 1, keyMethods: NO_KEYS, valueMethods: SINGLE_ELEMENT_METHODS }],\n  ['Queue',     { arity: 1, keyMethods: NO_KEYS, valueMethods: SINGLE_ELEMENT_METHODS }],\n  ['Deque',     { arity: 1, keyMethods: NO_KEYS, valueMethods: SINGLE_ELEMENT_METHODS }],\n  ['Stack',     { arity: 1, keyMethods: NO_KEYS, valueMethods: SINGLE_ELEMENT_METHODS }],\n  ['Sequence',  { arity: 1, keyMethods: NO_KEYS, valueMethods: SINGLE_ELEMENT_METHODS }],\n  ['Iterable',  { arity: 1, keyMethods: NO_KEYS, valueMethods: SINGLE_ELEMENT_METHODS }],\n  ['Iterator',  { arity: 1, keyMethods: NO_KEYS, valueMethods: SINGLE_ELEMENT_METHODS }],\n  ['IEnumerable', { arity: 1, keyMethods: NO_KEYS, valueMethods: SINGLE_ELEMENT_METHODS }],\n  ['IList',     { arity: 1, keyMethods: NO_KEYS, valueMethods: SINGLE_ELEMENT_METHODS }],\n  ['ICollection', { arity: 1, keyMethods: NO_KEYS, valueMethods: SINGLE_ELEMENT_METHODS }],\n  ['Collection',  { arity: 1, keyMethods: NO_KEYS, valueMethods: SINGLE_ELEMENT_METHODS }],\n  ['ObservableCollection', { arity: 1, keyMethods: NO_KEYS, valueMethods: SINGLE_ELEMENT_METHODS }],\n  ['IEnumerator', { arity: 1, keyMethods: NO_KEYS, valueMethods: SINGLE_ELEMENT_METHODS }],\n  ['SortedSet', { arity: 1, keyMethods: NO_KEYS, valueMethods: SINGLE_ELEMENT_METHODS }],\n  ['Stream',    { arity: 1, keyMethods: NO_KEYS, valueMethods: SINGLE_ELEMENT_METHODS }],\n  ['MutableList', { arity: 1, keyMethods: NO_KEYS, valueMethods: SINGLE_ELEMENT_METHODS }],\n  ['MutableSet',  { 
arity: 1, keyMethods: NO_KEYS, valueMethods: SINGLE_ELEMENT_METHODS }],\n  ['LinkedHashSet', { arity: 1, keyMethods: NO_KEYS, valueMethods: SINGLE_ELEMENT_METHODS }],\n  ['ArrayDeque',  { arity: 1, keyMethods: NO_KEYS, valueMethods: SINGLE_ELEMENT_METHODS }],\n  ['PriorityQueue', { arity: 1, keyMethods: NO_KEYS, valueMethods: SINGLE_ELEMENT_METHODS }],\n  ['MutableMap', { arity: 2, keyMethods: STD_KEY_METHODS, valueMethods: STD_VALUE_METHODS }],\n  ['list',      { arity: 1, keyMethods: NO_KEYS, valueMethods: SINGLE_ELEMENT_METHODS }],\n  ['set',       { arity: 1, keyMethods: NO_KEYS, valueMethods: SINGLE_ELEMENT_METHODS }],\n  ['tuple',     { arity: 1, keyMethods: NO_KEYS, valueMethods: SINGLE_ELEMENT_METHODS }],\n  ['frozenset', { arity: 1, keyMethods: NO_KEYS, valueMethods: SINGLE_ELEMENT_METHODS }],\n]);\n\n/** Determine which type arg to extract based on container type name and access method.\n *\n *  Resolution order:\n *  1. If container is known and method is in keyMethods → 'first'\n *  2. If container is known with arity 1 → 'last' (same as 'first' for single-arg)\n *  3. If container is unknown → fall back to method name heuristic\n *  4. 
Default: 'last' (value type)\n */\nexport function methodToTypeArgPosition(methodName: string | undefined, containerTypeName?: string): TypeArgPosition {\n  if (containerTypeName) {\n    const desc = CONTAINER_DESCRIPTORS.get(containerTypeName);\n    if (desc) {\n      // Single-element container: always 'last' (= only arg)\n      if (desc.arity === 1) return 'last';\n      // Multi-element: check if method yields key type\n      if (methodName && desc.keyMethods.has(methodName)) return 'first';\n      // Default for multi-element: value type\n      return 'last';\n    }\n  }\n  // Fallback for unknown containers: simple method name heuristic\n  if (methodName && (methodName === 'keys' || methodName === 'keySet' || methodName === 'Keys')) {\n    return 'first';\n  }\n  return 'last';\n}\n\n/** Look up the container descriptor for a type name. Exported for heritage-chain lookups. */\nexport function getContainerDescriptor(typeName: string): ContainerDescriptor | undefined {\n  return CONTAINER_DESCRIPTORS.get(typeName);\n}\n\n/**\n * Shared 3-strategy fallback for resolving the element type of a container variable.\n * Used by all for-loop extractors to resolve the loop variable's type from the iterable.\n *\n * Strategy 1: declarationTypeNodes — raw AST type annotation node (handles container types\n *             where extractSimpleTypeName returned undefined, e.g., User[], List[User])\n * Strategy 2: scopeEnv string — extractElementTypeFromString on the stored type string\n * Strategy 3: AST walk — language-specific upward walk to enclosing function parameters\n *\n * @param extractFromTypeNode Language-specific function to extract element type from AST node\n * @param findParamElementType Optional language-specific AST walk to find parameter type\n * @param typeArgPos Which generic type arg to extract: 'first' for keys, 'last' for values (default)\n */\nexport function resolveIterableElementType(\n  iterableName: string,\n  node: SyntaxNode,\n  scopeEnv: 
ReadonlyMap<string, string>,\n  declarationTypeNodes: ReadonlyMap<string, SyntaxNode>,\n  scope: string,\n  extractFromTypeNode: (typeNode: SyntaxNode, pos?: TypeArgPosition) => string | undefined,\n  findParamElementType?: (name: string, startNode: SyntaxNode, pos?: TypeArgPosition) => string | undefined,\n  typeArgPos: TypeArgPosition = 'last',\n): string | undefined {\n  // Strategy 1: declarationTypeNodes AST node (check current scope, then file scope)\n  const typeNode = declarationTypeNodes.get(`${scope}\\0${iterableName}`)\n    ?? (scope !== '' ? declarationTypeNodes.get(`\\0${iterableName}`) : undefined);\n  if (typeNode) {\n    const t = extractFromTypeNode(typeNode, typeArgPos);\n    if (t) return t;\n  }\n  // Strategy 2: scopeEnv string → extractElementTypeFromString\n  const iterableType = scopeEnv.get(iterableName);\n  if (iterableType) {\n    const el = extractElementTypeFromString(iterableType, typeArgPos);\n    if (el) return el;\n  }\n  // Strategy 3: AST walk to function parameters\n  if (findParamElementType) return findParamElementType(iterableName, node, typeArgPos);\n  return undefined;\n}\n\n/** Known single-arg nullable wrapper types that unwrap to their inner type\n *  for receiver resolution. Optional<User> → \"User\", Option<User> → \"User\".\n *  Only nullable wrappers — NOT containers (List, Vec) or async wrappers (Promise, Future).\n *  See WRAPPER_GENERICS below for the full set used in return-type inference. */\nconst NULLABLE_WRAPPER_TYPES = new Set([\n  'Optional',    // Java\n  'Option',      // Rust, Scala\n  'Maybe',       // Haskell-style, Kotlin Arrow\n]);\n\n/**\n * Extract the simple type name from a type AST node.\n * Handles generic types (e.g., List<User> → List), qualified names\n * (e.g., models.User → User), and nullable types (e.g., User? 
→ User).\n * Returns undefined for complex types (unions, intersections, function types).\n */\nexport const extractSimpleTypeName = (typeNode: SyntaxNode, depth = 0): string | undefined => {\n  if (depth > 50 || typeNode.text.length > 2048) return undefined;\n  // Direct type identifier (includes Ruby 'constant' for class names)\n  if (typeNode.type === 'type_identifier' || typeNode.type === 'identifier'\n    || typeNode.type === 'simple_identifier' || typeNode.type === 'constant') {\n    return typeNode.text;\n  }\n\n  // Qualified/scoped names: take the last segment (e.g., models.User → User, Models::User → User)\n  if (typeNode.type === 'scoped_identifier' || typeNode.type === 'qualified_identifier'\n    || typeNode.type === 'scoped_type_identifier' || typeNode.type === 'qualified_name'\n    || typeNode.type === 'qualified_type'\n    || typeNode.type === 'member_expression' || typeNode.type === 'member_access_expression'\n    || typeNode.type === 'attribute'\n    || typeNode.type === 'scope_resolution'\n    || typeNode.type === 'selector_expression') {\n    const last = typeNode.lastNamedChild;\n    if (last && (last.type === 'type_identifier' || last.type === 'identifier'\n      || last.type === 'simple_identifier' || last.type === 'name'\n      || last.type === 'constant' || last.type === 'property_identifier'\n      || last.type === 'field_identifier')) {\n      return last.text;\n    }\n  }\n\n  // C++ template_type (e.g., vector<User>, map<string, User>): extract base name\n  if (typeNode.type === 'template_type') {\n    const base = typeNode.childForFieldName('name') ?? 
typeNode.firstNamedChild;\n    if (base) return extractSimpleTypeName(base, depth + 1);\n  }\n\n  // Generic types: extract the base type (e.g., List<User> → List)\n  // For nullable wrappers (Optional<User>, Option<User>), unwrap to inner type.\n  if (typeNode.type === 'generic_type' || typeNode.type === 'parameterized_type'\n    || typeNode.type === 'generic_name') {\n    const base = typeNode.childForFieldName('name')\n      ?? typeNode.childForFieldName('type')\n      ?? typeNode.firstNamedChild;\n    if (!base) return undefined;\n    const baseName = extractSimpleTypeName(base, depth + 1);\n    // Unwrap known nullable wrappers: Optional<User> → User, Option<User> → User\n    if (baseName && NULLABLE_WRAPPER_TYPES.has(baseName)) {\n      const args = extractGenericTypeArgs(typeNode);\n      if (args.length >= 1) return args[0];\n    }\n    return baseName;\n  }\n\n  // Nullable types (Kotlin User?, C# User?)\n  if (typeNode.type === 'nullable_type') {\n    const inner = typeNode.firstNamedChild;\n    if (inner) return extractSimpleTypeName(inner, depth + 1);\n  }\n\n  // Nullable union types (TS/JS: User | null, User | undefined, User | null | undefined)\n  // Extract the single non-null/undefined type from the union.\n  if (typeNode.type === 'union_type') {\n    const nonNullTypes: SyntaxNode[] = [];\n    for (let i = 0; i < typeNode.namedChildCount; i++) {\n      const child = typeNode.namedChild(i);\n      if (!child) continue;\n      // Skip null/undefined/void literal types\n      const text = child.text;\n      if (text === 'null' || text === 'undefined' || text === 'void') continue;\n      nonNullTypes.push(child);\n    }\n    // Only unwrap if exactly one meaningful type remains\n    if (nonNullTypes.length === 1) {\n      return extractSimpleTypeName(nonNullTypes[0], depth + 1);\n    }\n  }\n\n  // Type annotations that wrap the actual type (TS/Python: `: Foo`, Kotlin: user_type)\n  if (typeNode.type === 'type_annotation' || typeNode.type === 'type'\n 
   || typeNode.type === 'user_type') {\n    const inner = typeNode.firstNamedChild;\n    if (inner) return extractSimpleTypeName(inner, depth + 1);\n  }\n\n  // Pointer/reference types (C++, Rust): User*, &User, &mut User\n  if (typeNode.type === 'pointer_type' || typeNode.type === 'reference_type') {\n    // Skip mutable_specifier for Rust &mut references — firstNamedChild would be\n    // `mutable_specifier` not the actual type. Walk named children to find the type.\n    for (let i = 0; i < typeNode.namedChildCount; i++) {\n      const child = typeNode.namedChild(i);\n      if (child && child.type !== 'mutable_specifier') {\n        return extractSimpleTypeName(child, depth + 1);\n      }\n    }\n  }\n\n  // Primitive/predefined types: string, int, float, bool, number, unknown, any\n  // PHP: primitive_type; TS/JS: predefined_type\n  // Java: integral_type (int/long/short/byte), floating_point_type (float/double),\n  //       boolean_type (boolean), void_type (void)\n  if (typeNode.type === 'primitive_type' || typeNode.type === 'predefined_type'\n    || typeNode.type === 'integral_type' || typeNode.type === 'floating_point_type'\n    || typeNode.type === 'boolean_type' || typeNode.type === 'void_type') {\n    return typeNode.text;\n  }\n\n  // PHP named_type / optional_type\n  if (typeNode.type === 'named_type' || typeNode.type === 'optional_type') {\n    const inner = typeNode.childForFieldName('name') ?? 
typeNode.firstNamedChild;\n    if (inner) return extractSimpleTypeName(inner, depth + 1);\n  }\n\n  // Name node (PHP)\n  if (typeNode.type === 'name') {\n    return typeNode.text;\n  }\n\n  return undefined;\n};\n\n/**\n * Extract variable name from a declarator or pattern node.\n * Returns the simple identifier text, or undefined for destructuring/complex patterns.\n */\nexport const extractVarName = (node: SyntaxNode): string | undefined => {\n  if (node.type === 'identifier' || node.type === 'simple_identifier'\n    || node.type === 'variable_name' || node.type === 'name'\n    || node.type === 'constant' || node.type === 'property_identifier') {\n    return node.text;\n  }\n  // variable_declarator (Java/C#): has a 'name' field\n  if (node.type === 'variable_declarator') {\n    const nameChild = node.childForFieldName('name');\n    if (nameChild) return extractVarName(nameChild);\n  }\n  // Rust: let mut x = ... — mut_pattern wraps an identifier\n  if (node.type === 'mut_pattern') {\n    const inner = node.firstNamedChild;\n    if (inner) return extractVarName(inner);\n  }\n  return undefined;\n};\n\n/** Node types for function/method parameters with type annotations */\nexport const TYPED_PARAMETER_TYPES = new Set([\n  'required_parameter',      // TS: (x: Foo)\n  'optional_parameter',      // TS: (x?: Foo)\n  'formal_parameter',        // Java/Kotlin\n  'parameter',               // C#/Rust/Go/Python/Swift\n  'typed_parameter',         // Python: def f(x: Foo) — distinct from 'parameter' in tree-sitter-python\n  'parameter_declaration',   // C/C++ void f(Type name)\n  'simple_parameter',        // PHP function(Foo $x)\n  'property_promotion_parameter', // PHP 8.0+ constructor promotion: __construct(private Foo $x)\n  'closure_parameter',       // Rust: |user: User| — typed closure parameters\n]);\n\n/**\n * Extract type arguments from a generic type node.\n * e.g., List<User, String> → ['User', 'String'], Vec<User> → ['User']\n *\n * Used by 
extractSimpleTypeName to unwrap nullable wrappers (Optional<User> → User).\n *\n * Handles language-specific AST structures:\n * - TS/Java/Rust/Go: generic_type > type_arguments > type nodes\n * - C#:              generic_name > type_argument_list > type nodes\n * - Kotlin:          generic_type > type_arguments > type_projection > type nodes\n *\n * Note: Go slices/maps use slice_type/map_type, not generic_type — those are\n * NOT handled here. Use language-specific extractors for Go container types.\n *\n * @param typeNode A generic_type or parameterized_type AST node (or any node —\n *   returns [] for non-generic types).\n * @returns Array of resolved type argument names. Unresolvable arguments are omitted.\n */\nexport const extractGenericTypeArgs = (typeNode: SyntaxNode, depth = 0): string[] => {\n  if (depth > 50) return [];\n  // Unwrap wrapper nodes that may sit above the generic_type\n  if (typeNode.type === 'type_annotation' || typeNode.type === 'type'\n    || typeNode.type === 'user_type' || typeNode.type === 'nullable_type'\n    || typeNode.type === 'optional_type') {\n    const inner = typeNode.firstNamedChild;\n    if (inner) return extractGenericTypeArgs(inner, depth + 1);\n    return [];\n  }\n\n  // Only process generic/parameterized type nodes (includes C#'s generic_name)\n  if (typeNode.type !== 'generic_type' && typeNode.type !== 'parameterized_type'\n    && typeNode.type !== 'generic_name') {\n    return [];\n  }\n\n  // Find the type_arguments / type_argument_list child\n  let argsNode: SyntaxNode | null = null;\n  for (let i = 0; i < typeNode.namedChildCount; i++) {\n    const child = typeNode.namedChild(i);\n    if (child && (child.type === 'type_arguments' || child.type === 'type_argument_list')) {\n      argsNode = child;\n      break;\n    }\n  }\n  if (!argsNode) return [];\n\n  const result: string[] = [];\n  for (let i = 0; i < argsNode.namedChildCount; i++) {\n    let argNode = argsNode.namedChild(i);\n    if (!argNode) continue;\n\n 
   // Kotlin: type_arguments > type_projection > user_type > type_identifier\n    if (argNode.type === 'type_projection') {\n      argNode = argNode.firstNamedChild;\n      if (!argNode) continue;\n    }\n\n    const name = extractSimpleTypeName(argNode);\n    if (name) result.push(name);\n  }\n\n  return result;\n};\n\n/**\n * Match Ruby constructor assignment: `user = User.new` or `service = Models::User.new`.\n * Returns { varName, calleeName } or undefined if the node is not a Ruby constructor assignment.\n * Handles both simple constants and scope_resolution (namespaced) receivers.\n */\nexport const extractRubyConstructorAssignment = (\n  node: SyntaxNode,\n): { varName: string; calleeName: string } | undefined => {\n  if (node.type !== 'assignment') return undefined;\n  const left = node.childForFieldName('left');\n  const right = node.childForFieldName('right');\n  if (!left || !right) return undefined;\n  if (left.type !== 'identifier' && left.type !== 'constant') return undefined;\n  if (right.type !== 'call') return undefined;\n  const method = right.childForFieldName('method');\n  if (!method || method.text !== 'new') return undefined;\n  const receiver = right.childForFieldName('receiver');\n  if (!receiver) return undefined;\n  let calleeName: string;\n  if (receiver.type === 'constant') {\n    calleeName = receiver.text;\n  } else if (receiver.type === 'scope_resolution') {\n    // Models::User → extract last segment \"User\"\n    const last = receiver.lastNamedChild;\n    if (!last || last.type !== 'constant') return undefined;\n    calleeName = last.text;\n  } else {\n    return undefined;\n  }\n  return { varName: left.text, calleeName };\n};\n\n/**\n * Check if an AST node has an explicit type annotation.\n * Checks both named fields ('type') and child nodes ('type_annotation').\n * Used by constructor binding scanners to skip annotated declarations.\n */\nexport const hasTypeAnnotation = (node: SyntaxNode): boolean => {\n  if 
(node.childForFieldName('type')) return true;\n  for (let i = 0; i < node.childCount; i++) {\n    if (node.child(i)?.type === 'type_annotation') return true;\n  }\n  return false;\n};\n\n/** Bare nullable keywords that should not produce a receiver binding. */\nconst NULLABLE_KEYWORDS = new Set(['null', 'undefined', 'void', 'None', 'nil']);\n\n/**\n * Strip nullable wrappers from a type name string.\n * Used by both lookupInEnv (TypeEnv annotations) and extractReturnTypeName\n * (return-type text) to normalize types before receiver lookup.\n *\n *   \"User | null\"           → \"User\"\n *   \"User | undefined\"      → \"User\"\n *   \"User | null | undefined\" → \"User\"\n *   \"User?\"                 → \"User\"\n *   \"User | Repo\"           → undefined  (genuine union — refuse)\n *   \"null\"                  → undefined\n */\nexport const stripNullable = (typeName: string): string | undefined => {\n  let text = typeName.trim();\n  if (!text) return undefined;\n\n  if (NULLABLE_KEYWORDS.has(text)) return undefined;\n\n  // Strip nullable suffix: User? → User\n  if (text.endsWith('?')) text = text.slice(0, -1).trim();\n\n  // Strip union with null/undefined/None/nil/void\n  if (text.includes('|')) {\n    const parts = text.split('|').map(p => p.trim()).filter(p =>\n      p !== '' && !NULLABLE_KEYWORDS.has(p)\n    );\n    if (parts.length === 1) return parts[0];\n    return undefined; // genuine union or all-nullable — refuse\n  }\n\n  return text || undefined;\n};\n\n/**\n * Unwrap an await_expression to get the inner value.\n * Returns the node itself if not an await_expression, or null if input is null.\n */\nexport const unwrapAwait = (node: SyntaxNode | null): SyntaxNode | null => {\n  if (!node) return null;\n  return node.type === 'await_expression' ? 
node.firstNamedChild : node;\n};\n\n/**\n * Extract the callee name from a call_expression node.\n * Navigates to the 'function' field (or first named child) and extracts a simple type name.\n */\nexport const extractCalleeName = (callNode: SyntaxNode): string | undefined => {\n  const func = callNode.childForFieldName('function') ?? callNode.firstNamedChild;\n  if (!func) return undefined;\n  return extractSimpleTypeName(func);\n};\n\n/** Find the first named child with the given node type */\nexport const findChildByType = (node: SyntaxNode, type: string): SyntaxNode | null => {\n  for (let i = 0; i < node.namedChildCount; i++) {\n    const child = node.namedChild(i);\n    if (child?.type === type) return child;\n  }\n  return null;\n};\n\n// Internal helper: extract the first comma-separated argument from a string,\n// respecting nested angle-bracket and square-bracket depth.\nfunction extractFirstArg(args: string): string {\n  let depth = 0;\n  for (let i = 0; i < args.length; i++) {\n    const ch = args[i];\n    if (ch === '<' || ch === '[') depth++;\n    else if (ch === '>' || ch === ']') depth--;\n    else if (ch === ',' && depth === 0) return args.slice(0, i).trim();\n  }\n  return args.trim();\n}\n\n/**\n * Extract element type from a container type string.\n * Uses bracket-balanced parsing (no regex) for generic argument extraction.\n * Returns undefined for ambiguous or unparseable strings.\n *\n * Handles:\n * - Array<User>    → User  (generic angle brackets)\n * - User[]         → User  (array suffix)\n * - []User         → User  (Go slice prefix)\n * - List[User]     → User  (Python subscript)\n * - [User]         → User  (Swift array sugar)\n * - vector<User>   → User  (C++ container)\n * - Vec<User>      → User  (Rust container)\n *\n * For multi-argument generics (Map<K, V>), returns the first or last type arg\n * based on `pos` ('first' for keys, 'last' for values — default 'last').\n * Returns undefined when the extracted type is not a simple 
word.\n */\nexport function extractElementTypeFromString(typeStr: string, pos: TypeArgPosition = 'last'): string | undefined {\n  if (!typeStr || typeStr.length === 0 || typeStr.length > 2048) return undefined;\n\n  // 1. Array suffix: User[] → User\n  if (typeStr.endsWith('[]')) {\n    const base = typeStr.slice(0, -2).trim();\n    return base && /^\\w+$/.test(base) ? base : undefined;\n  }\n\n  // 2. Go slice prefix: []User → User\n  if (typeStr.startsWith('[]')) {\n    const element = typeStr.slice(2).trim();\n    return element && /^\\w+$/.test(element) ? element : undefined;\n  }\n\n  // 3. Swift array sugar: [User] → User\n  //    Must start with '[', end with ']', and contain no angle brackets\n  //    (to avoid confusing with List[User] handled below).\n  if (typeStr.startsWith('[') && typeStr.endsWith(']') && !typeStr.includes('<')) {\n    const element = typeStr.slice(1, -1).trim();\n    return element && /^\\w+$/.test(element) ? element : undefined;\n  }\n\n  // 4. Generic bracket-balanced extraction: Array<User> / List[User] / Vec<User>\n  //    Find the first opening bracket (< or [) and pick the one that appears first.\n  const openAngle = typeStr.indexOf('<');\n  const openSquare = typeStr.indexOf('[');\n\n  let openIdx = -1;\n  let openChar = '';\n  let closeChar = '';\n\n  if (openAngle >= 0 && (openSquare < 0 || openAngle < openSquare)) {\n    openIdx = openAngle;\n    openChar = '<';\n    closeChar = '>';\n  } else if (openSquare >= 0) {\n    openIdx = openSquare;\n    openChar = '[';\n    closeChar = ']';\n  }\n\n  if (openIdx < 0) return undefined;\n\n  // Walk bracket-balanced from the character after the opening bracket to find\n  // the matching close bracket, tracking depth for nested brackets.\n  // All bracket types (<, >, [, ]) contribute to depth uniformly, but only the\n  // selected closeChar can match at depth 0 (prevents cross-bracket miscounting).\n  let depth = 0;\n  const start = openIdx + 1;\n  let lastCommaIdx = -1; // Track 
last top-level comma for 'last' position\n  for (let i = start; i < typeStr.length; i++) {\n    const ch = typeStr[i];\n    if (ch === '<' || ch === '[') {\n      depth++;\n    } else if (ch === '>' || ch === ']') {\n      if (depth === 0) {\n        // At depth 0 — only match if it is our selected close bracket.\n        if (ch !== closeChar) return undefined; // mismatched bracket = malformed\n        if (pos === 'last' && lastCommaIdx >= 0) {\n          // Return last arg (text after last comma)\n          const lastArg = typeStr.slice(lastCommaIdx + 1, i).trim();\n          return lastArg && /^\\w+$/.test(lastArg) ? lastArg : undefined;\n        }\n        const inner = typeStr.slice(start, i).trim();\n        const firstArg = extractFirstArg(inner);\n        return firstArg && /^\\w+$/.test(firstArg) ? firstArg : undefined;\n      }\n      depth--;\n    } else if (ch === ',' && depth === 0) {\n      if (pos === 'first') {\n        // Return first arg (text before first comma)\n        const arg = typeStr.slice(start, i).trim();\n        return arg && /^\\w+$/.test(arg) ? arg : undefined;\n      }\n      lastCommaIdx = i;\n    }\n  }\n\n  return undefined;\n}\n\n// ── Return type text helpers ─────────────────────────────────────────────\n// extractReturnTypeName works on raw return-type text already stored in\n// SymbolDefinition (e.g. \"User\", \"Promise<User>\", \"User | null\", \"*User\").\n// Extracts the base user-defined type name.\n\n/** Primitive / built-in types that should NOT produce a receiver binding. 
*/\nconst PRIMITIVE_TYPES = new Set([\n  'string', 'number', 'boolean', 'void', 'int', 'float', 'double', 'long',\n  'short', 'byte', 'char', 'bool', 'str', 'i8', 'i16', 'i32', 'i64',\n  'u8', 'u16', 'u32', 'u64', 'f32', 'f64', 'usize', 'isize',\n  'undefined', 'null', 'None', 'nil',\n]);\n\n/**\n * Extract a simple type name from raw return-type text.\n * Handles common patterns:\n *   \"User\"                → \"User\"\n *   \"Promise<User>\"       → \"User\"   (unwrap wrapper generics)\n *   \"Option<User>\"        → \"User\"\n *   \"Result<User, Error>\" → \"User\"   (first type arg)\n *   \"User | null\"         → \"User\"   (strip nullable union)\n *   \"User?\"               → \"User\"   (strip nullable suffix)\n *   \"*User\"               → \"User\"   (Go pointer)\n *   \"&User\"               → \"User\"   (Rust reference)\n * Returns undefined for complex types or primitives.\n */\nconst WRAPPER_GENERICS = new Set([\n  'Promise', 'Observable', 'Future', 'CompletableFuture', 'Task', 'ValueTask',  // async wrappers\n  'Option', 'Some', 'Optional', 'Maybe',                                         // nullable wrappers\n  'Result', 'Either',                                                             // result wrappers\n  // Rust smart pointers (Deref to inner type)\n  'Rc', 'Arc', 'Weak',                                                          // pointer types\n  'MutexGuard', 'RwLockReadGuard', 'RwLockWriteGuard',                          // guard types\n  'Ref', 'RefMut',                                                               // RefCell guards\n  'Cow',                                                                         // copy-on-write\n  // Containers (List, Array, Vec, Set, etc.) 
are intentionally excluded —\n  // methods are called on the container, not the element type.\n  // Non-wrapper generics return the base type (e.g., List) via the else branch.\n]);\n\n/**\n * Extracts the first type argument from a comma-separated generic argument string,\n * respecting nested angle brackets. For example:\n *   \"Result<User, Error>\"  → \"Result<User, Error>\"  (no top-level comma)\n *   \"User, Error\"          → \"User\"\n *   \"Map<K, V>, string\"    → \"Map<K, V>\"\n */\nfunction extractFirstGenericArg(args: string): string {\n  let depth = 0;\n  for (let i = 0; i < args.length; i++) {\n    if (args[i] === '<') depth++;\n    else if (args[i] === '>') depth--;\n    else if (args[i] === ',' && depth === 0) return args.slice(0, i).trim();\n  }\n  return args.trim();\n}\n\n/**\n * Extract the first non-lifetime type argument from a generic argument string.\n * Skips Rust lifetime parameters (e.g., `'a`, `'_`) to find the actual type.\n *   \"'_, User\"       → \"User\"\n *   \"'a, User\"       → \"User\"\n *   \"User, Error\"    → \"User\"  (no lifetime — delegates to extractFirstGenericArg)\n */\nfunction extractFirstTypeArg(args: string): string {\n  let remaining = args;\n  while (remaining) {\n    const first = extractFirstGenericArg(remaining);\n    if (!first.startsWith(\"'\")) return first;\n    // Skip past this lifetime arg + the comma separator\n    const commaIdx = remaining.indexOf(',', first.length);\n    if (commaIdx < 0) return first; // only lifetimes — fall through\n    remaining = remaining.slice(commaIdx + 1).trim();\n  }\n  return args.trim();\n}\n\nconst MAX_RETURN_TYPE_INPUT_LENGTH = 2048;\nconst MAX_RETURN_TYPE_LENGTH = 512;\n\nexport const extractReturnTypeName = (raw: string, depth = 0): string | undefined => {\n  if (depth > 10) return undefined;\n  if (raw.length > MAX_RETURN_TYPE_INPUT_LENGTH) return undefined;\n  let text = raw.trim();\n  if (!text) return undefined;\n\n  // Strip pointer/reference prefixes: *User, 
&User, &mut User\n  text = text.replace(/^[&*]+\\s*(mut\\s+)?/, '');\n\n  // Strip nullable suffix: User?\n  text = text.replace(/\\?$/, '');\n\n  // Handle union types: \"User | null\" → \"User\"\n  if (text.includes('|')) {\n    const parts = text.split('|').map(p => p.trim()).filter(p =>\n      p !== 'null' && p !== 'undefined' && p !== 'void' && p !== 'None' && p !== 'nil'\n    );\n    if (parts.length === 1) text = parts[0];\n    else return undefined; // genuine union — too complex\n  }\n\n  // Handle generics: Promise<User> → unwrap if wrapper, else take base\n  const genericMatch = text.match(/^(\\w+)\\s*<(.+)>$/);\n  if (genericMatch) {\n    const [, base, args] = genericMatch;\n    if (WRAPPER_GENERICS.has(base)) {\n      // Take the first non-lifetime type argument, using bracket-balanced splitting\n      // so that nested generics like Result<User, Error> are not split at the inner\n      // comma. Lifetime parameters (Rust 'a, '_) are skipped.\n      const firstArg = extractFirstTypeArg(args);\n      return extractReturnTypeName(firstArg, depth + 1);\n    }\n    // Non-wrapper generic: return the base type (e.g., Map<K,V> → Map)\n    return PRIMITIVE_TYPES.has(base.toLowerCase()) ? undefined : base;\n  }\n\n  // Bare wrapper type without generic argument (e.g. 
Task, Promise, Option)\n  // should not produce a binding — these are meaningless without a type parameter\n  if (WRAPPER_GENERICS.has(text)) return undefined;\n\n  // Handle qualified names: models.User → User, Models::User → User, \\App\\Models\\User → User\n  if (text.includes('::') || text.includes('.') || text.includes('\\\\')) {\n    text = text.split(/::|[.\\\\]/).pop()!;\n  }\n\n  // Final check: skip primitives\n  if (PRIMITIVE_TYPES.has(text) || PRIMITIVE_TYPES.has(text.toLowerCase())) return undefined;\n\n  // Must start with an uppercase letter or underscore (class/type naming convention)\n  if (!/^[A-Z_]\\w*$/.test(text)) return undefined;\n\n  // If the final extracted type name is too long, reject it\n  if (text.length > MAX_RETURN_TYPE_LENGTH) return undefined;\n\n  return text;\n};\n\n// ── Property declared-type extraction ────────────────────────────────────\n// Shared between parse-worker (worker path) and parsing-processor (sequential path).\n\n/**\n * Extract the declared type of a property/field from its AST definition node.\n * Handles cross-language patterns:\n * - TypeScript: `name: Type` → type_annotation child\n * - Java: `Type name` → type child on field_declaration\n * - C#: `Type Name { get; set; }` → type child on property_declaration\n * - Go: `Name Type` → type child on field_declaration\n * - Kotlin: `var name: Type` → variable_declaration child with type field\n *\n * Returns the normalized type name, or undefined if no type can be extracted.\n */\nexport const extractPropertyDeclaredType = (definitionNode: SyntaxNode | null): string | undefined => {\n  if (!definitionNode) return undefined;\n\n  // Strategy 1: Look for a `type` or `type_annotation` named field\n  const typeNode = definitionNode.childForFieldName?.('type');\n  if (typeNode) {\n    const typeName = extractSimpleTypeName(typeNode);\n    if (typeName) return typeName;\n    // Fallback: use the raw text (for complex types like User[] or List<User>)\n    const text = 
typeNode.text?.trim();\n    if (text && text.length < 100) return text;\n  }\n\n  // Strategy 2: Walk children looking for type_annotation (TypeScript pattern)\n  for (let i = 0; i < definitionNode.childCount; i++) {\n    const child = definitionNode.child(i);\n    if (!child) continue;\n    if (child.type === 'type_annotation') {\n      // Type annotation has the actual type as a child\n      for (let j = 0; j < child.childCount; j++) {\n        const typeChild = child.child(j);\n        if (typeChild && typeChild.type !== ':') {\n          const typeName = extractSimpleTypeName(typeChild);\n          if (typeName) return typeName;\n          const text = typeChild.text?.trim();\n          if (text && text.length < 100) return text;\n        }\n      }\n    }\n  }\n\n  // Strategy 3: For Java field_declaration, the type is a sibling of variable_declarator\n  // AST: (field_declaration type: (type_identifier) declarator: (variable_declarator ...))\n  const parentDecl = definitionNode.parent;\n  if (parentDecl) {\n    const parentType = parentDecl.childForFieldName?.('type');\n    if (parentType) {\n      const typeName = extractSimpleTypeName(parentType);\n      if (typeName) return typeName;\n    }\n  }\n\n  // Strategy 4: Kotlin property_declaration — type is nested inside variable_declaration child\n  // AST: (property_declaration (variable_declaration (simple_identifier) \":\" (user_type (type_identifier))))\n  // Kotlin's variable_declaration has NO named 'type' field — children are all positional.\n  for (let i = 0; i < definitionNode.childCount; i++) {\n    const child = definitionNode.child(i);\n    if (child?.type === 'variable_declaration') {\n      // Try named field first (works for other languages sharing this strategy)\n      const varType = child.childForFieldName?.('type');\n      if (varType) {\n        const typeName = extractSimpleTypeName(varType);\n        if (typeName) return typeName;\n        const text = varType.text?.trim();\n        if 
(text && text.length < 100) return text;\n      }\n      // Fallback: walk unnamed children for user_type / type_identifier (Kotlin)\n      for (let j = 0; j < child.namedChildCount; j++) {\n        const varChild = child.namedChild(j);\n        if (varChild && (varChild.type === 'user_type' || varChild.type === 'type_identifier'\n          || varChild.type === 'nullable_type' || varChild.type === 'generic_type')) {\n          const typeName = extractSimpleTypeName(varChild);\n          if (typeName) return typeName;\n        }\n      }\n    }\n  }\n\n  // Strategy 5: PHP @var PHPDoc — look for preceding comment with @var Type\n  // Handles pre-PHP-7.4 code: /** @var Address */ public $address;\n  const prevSibling = definitionNode.previousNamedSibling ?? definitionNode.parent?.previousNamedSibling;\n  if (prevSibling?.type === 'comment') {\n    const commentText = prevSibling.text;\n    const varMatch = commentText?.match(/@var\\s+([A-Z][\\w\\\\]*)/);\n    if (varMatch) {\n      // Strip namespace prefix: \\App\\Models\\User → User\n      const raw = varMatch[1];\n      const base = raw.includes('\\\\') ? raw.split('\\\\').pop()! : raw;\n      if (base && /^[A-Z]\\w*$/.test(base)) return base;\n    }\n  }\n\n  return undefined;\n};\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/type-extractors/swift.ts",
    "content": "import type { SyntaxNode } from '../utils.js';\nimport type { LanguageTypeConfig, ParameterExtractor, TypeBindingExtractor, InitializerExtractor, ClassNameLookup, ConstructorBindingScanner } from './types.js';\nimport { extractSimpleTypeName, extractVarName, findChildByType, hasTypeAnnotation } from './shared.js';\n\nconst DECLARATION_NODE_TYPES: ReadonlySet<string> = new Set([\n  'property_declaration',\n]);\n\n/** Swift: let x: Foo = ... */\nconst extractDeclaration: TypeBindingExtractor = (node: SyntaxNode, env: Map<string, string>): void => {\n  // Swift property_declaration has pattern and type_annotation\n  const pattern = node.childForFieldName('pattern')\n    ?? findChildByType(node, 'pattern');\n  const typeAnnotation = node.childForFieldName('type')\n    ?? findChildByType(node, 'type_annotation');\n  if (!pattern || !typeAnnotation) return;\n  const varName = extractVarName(pattern) ?? pattern.text;\n  const typeName = extractSimpleTypeName(typeAnnotation);\n  if (varName && typeName) env.set(varName, typeName);\n};\n\n/** Swift: parameter → name: type */\nconst extractParameter: ParameterExtractor = (node: SyntaxNode, env: Map<string, string>): void => {\n  let nameNode: SyntaxNode | null = null;\n  let typeNode: SyntaxNode | null = null;\n\n  if (node.type === 'parameter') {\n    nameNode = node.childForFieldName('name')\n      ?? node.childForFieldName('internal_name');\n    typeNode = node.childForFieldName('type');\n  } else {\n    nameNode = node.childForFieldName('name') ?? 
node.childForFieldName('pattern');\n    typeNode = node.childForFieldName('type');\n  }\n\n  if (!nameNode || !typeNode) return;\n  const varName = extractVarName(nameNode);\n  const typeName = extractSimpleTypeName(typeNode);\n  if (varName && typeName) env.set(varName, typeName);\n};\n\n/** Swift: let user = User(name: \"alice\") — infer type from call when callee is a known class.\n *  Swift initializers are syntactically identical to function calls, so we verify\n *  against classNames (which may include cross-file SymbolTable lookups). */\nconst extractInitializer: InitializerExtractor = (node: SyntaxNode, env: Map<string, string>, classNames: ClassNameLookup): void => {\n  if (node.type !== 'property_declaration') return;\n  // Skip if has type annotation — extractDeclaration handled it\n  if (node.childForFieldName('type') || findChildByType(node, 'type_annotation')) return;\n  // Find pattern (variable name)\n  const pattern = node.childForFieldName('pattern') ?? findChildByType(node, 'pattern');\n  if (!pattern) return;\n  const varName = extractVarName(pattern) ?? 
pattern.text;\n  if (!varName || env.has(varName)) return;\n  // Find call_expression in the value\n  const callExpr = findChildByType(node, 'call_expression');\n  if (!callExpr) return;\n  const callee = callExpr.firstNamedChild;\n  if (!callee) return;\n  // Direct call: User(name: \"alice\")\n  if (callee.type === 'simple_identifier') {\n    const calleeName = callee.text;\n    if (calleeName && classNames.has(calleeName)) {\n      env.set(varName, calleeName);\n    }\n    return;\n  }\n  // Explicit init: User.init(name: \"alice\") — navigation_expression with .init suffix\n  if (callee.type === 'navigation_expression') {\n    const receiver = callee.firstNamedChild;\n    const suffix = callee.lastNamedChild;\n    if (receiver?.type === 'simple_identifier' && suffix?.text === 'init') {\n      const calleeName = receiver.text;\n      if (calleeName && classNames.has(calleeName)) {\n        env.set(varName, calleeName);\n      }\n    }\n  }\n};\n\n/** Swift: let user = User(name: \"alice\") — scan property_declaration for constructor binding */\nconst scanConstructorBinding: ConstructorBindingScanner = (node) => {\n  if (node.type !== 'property_declaration') return undefined;\n  if (hasTypeAnnotation(node)) return undefined;\n  const pattern = node.childForFieldName('pattern');\n  if (!pattern) return undefined;\n  const varName = pattern.text;\n  if (!varName) return undefined;\n  let callExpr: SyntaxNode | null = null;\n  for (let i = 0; i < node.namedChildCount; i++) {\n    const child = node.namedChild(i);\n    if (child?.type === 'call_expression') { callExpr = child; break; }\n  }\n  if (!callExpr) return undefined;\n  const callee = callExpr.firstNamedChild;\n  if (!callee) return undefined;\n  if (callee.type === 'simple_identifier') {\n    return { varName, calleeName: callee.text };\n  }\n  if (callee.type === 'navigation_expression') {\n    const receiver = callee.firstNamedChild;\n    const suffix = callee.lastNamedChild;\n    if (receiver?.type === 
'simple_identifier' && suffix?.text === 'init') {\n      return { varName, calleeName: receiver.text };\n    }\n    // General qualified call: service.getUser() → extract method name.\n    // tree-sitter-swift may wrap the identifier in navigation_suffix, so\n    // check both direct simple_identifier and navigation_suffix > simple_identifier.\n    if (suffix?.type === 'simple_identifier') {\n      return { varName, calleeName: suffix.text };\n    }\n    if (suffix?.type === 'navigation_suffix') {\n      const inner = suffix.lastNamedChild;\n      if (inner?.type === 'simple_identifier') {\n        return { varName, calleeName: inner.text };\n      }\n    }\n  }\n  return undefined;\n};\n\nexport const typeConfig: LanguageTypeConfig = {\n  declarationNodeTypes: DECLARATION_NODE_TYPES,\n  extractDeclaration,\n  extractParameter,\n  extractInitializer,\n  scanConstructorBinding,\n};\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/type-extractors/types.ts",
    "content": "import type { SyntaxNode } from '../utils.js';\n\n/** Extracts type bindings from a declaration node into the env map */\nexport type TypeBindingExtractor = (node: SyntaxNode, env: Map<string, string>) => void;\n\n/** Extracts type bindings from a parameter node into the env map */\nexport type ParameterExtractor = (node: SyntaxNode, env: Map<string, string>) => void;\n\n/** Minimal interface for checking whether a name is a known class/struct.\n *  Narrower than ReadonlySet — only `.has()` is used by extractors. */\nexport type ClassNameLookup = { has(name: string): boolean };\n\n/** Extracts type bindings from a constructor-call initializer, with access to known class names */\nexport type InitializerExtractor = (node: SyntaxNode, env: Map<string, string>, classNames: ClassNameLookup) => void;\n\n/** Scans an AST node for untyped `var = callee()` patterns for return-type inference.\n *  Returns { varName, calleeName } if the node matches, undefined otherwise.\n *  `receiverClassName` — optional hint for method calls on known receivers\n *  (e.g. $this->getUser() in PHP provides the enclosing class name). */\nexport type ConstructorBindingScanner = (node: SyntaxNode) => { varName: string; calleeName: string; receiverClassName?: string } | undefined;\n\n/** Extracts a return type string from a method/function definition node.\n *  Used for languages where return types are expressed in comments (e.g. YARD @return [Type])\n *  rather than in AST fields. Returns undefined if no return type can be determined. */\nexport type ReturnTypeExtractor = (node: SyntaxNode) => string | undefined;\n\n/** Infer the type name of a literal AST node for overload disambiguation.\n *  Returns the canonical type name (e.g. 'int', 'String', 'boolean') or undefined\n *  for non-literal nodes. Only used when resolveCallTarget has multiple candidates\n *  with parameterTypes — ~1-3% of call sites. 
*/\nexport type LiteralTypeInferrer = (node: SyntaxNode) => string | undefined;\n\n/** Detect constructor-style call expressions that don't use `new` keyword.\n *  Returns the constructor class name if the node's initializer is a constructor call,\n *  or undefined otherwise. Used for virtual dispatch in languages like Kotlin\n *  where constructors are syntactically identical to function calls, and C++\n *  where smart pointer factory functions (make_shared/make_unique) wrap constructors. */\nexport type ConstructorTypeDetector = (node: SyntaxNode, classNames: ClassNameLookup) => string | undefined;\n\n/** Unwrap a declared type name to its inner type for virtual dispatch comparison.\n *  E.g., C++ shared_ptr<Animal> → Animal. Returns undefined if no unwrapping applies. */\nexport type DeclaredTypeUnwrapper = (declaredType: string, typeNode: SyntaxNode) => string | undefined;\n\n/** Narrow lookup interface for resolving a callee name → return type name.\n *  Backed by SymbolTable.lookupFuzzyCallable; passed via ForLoopExtractorContext.\n *  Conservative: returns undefined when the callee is ambiguous (0 or 2+ matches). */\nexport interface ReturnTypeLookup {\n  /** Processed type name after stripping wrappers (e.g., 'User' from 'Promise<User>').\n   *  Use for call-result variable bindings (`const b = foo()`). */\n  lookupReturnType(callee: string): string | undefined;\n  /** Raw return type as declared in the symbol (e.g., '[]User', 'List<User>').\n   *  Use for iterable-element extraction (`for v := range foo()`). */\n  lookupRawReturnType(callee: string): string | undefined;\n}\n\n/** Context object passed to ForLoopExtractor.\n *  Groups the four parameters that were previously positional. 
*/\nexport interface ForLoopExtractorContext {\n  /** Mutable type-env for the current scope — extractor writes bindings here */\n  scopeEnv: Map<string, string>;\n  /** Maps `scope\\0varName` to the declaration's type annotation AST node */\n  declarationTypeNodes: ReadonlyMap<string, SyntaxNode>;\n  /** Current scope key, e.g. `\"process@42\"` */\n  scope: string;\n  /** Resolves a callee name to its declared return type (undefined = unknown/ambiguous) */\n  returnTypeLookup: ReturnTypeLookup;\n}\n\n/** Extracts loop variable type binding from a for-each statement. */\nexport type ForLoopExtractor = (node: SyntaxNode, ctx: ForLoopExtractorContext) => void;\n\n/** Discriminated union for pending Tier-2 propagation items.\n *  - `copy`             — `const b = a` (identifier alias, propagate a's type to b)\n *  - `callResult`       — `const b = foo()` (bind b to foo's declared return type)\n *  - `fieldAccess`      — `const b = a.field` (bind b to field's declaredType on a's type)\n *  - `methodCallResult` — `const b = a.method()` (bind b to method's returnType on a's type) */\nexport type PendingAssignment =\n  | { kind: 'copy'; lhs: string; rhs: string }\n  | { kind: 'callResult'; lhs: string; callee: string }\n  | { kind: 'fieldAccess'; lhs: string; receiver: string; field: string }\n  | { kind: 'methodCallResult'; lhs: string; receiver: string; method: string };\n\n/** Extracts a pending assignment for Tier 2 propagation.\n *  Returns a PendingAssignment when the RHS is a bare identifier (`copy`), a\n *  call expression (`callResult`), a field access (`fieldAccess`), or a\n *  method call with receiver (`methodCallResult`) and the LHS has no resolved type yet.\n *  May return an array of PendingAssignment items for destructuring patterns\n *  (e.g., `const { a, b } = obj` emits N fieldAccess items).\n *  Returns undefined if the node is not a matching assignment. 
*/\nexport type PendingAssignmentExtractor = (\n  node: SyntaxNode,\n  scopeEnv: ReadonlyMap<string, string>,\n) => PendingAssignment | PendingAssignment[] | undefined;\n\n/** Result of a pattern binding extraction. */\nexport interface PatternBindingResult {\n  varName: string;\n  typeName: string;\n  /** Optional: AST node whose position range should be used for the patternOverride.\n   *  When present, the override uses this node's range instead of the auto-detected\n   *  branch scope. Used by null-check narrowing to target the if-body specifically. */\n  narrowingRange?: { startIndex: number; endIndex: number };\n}\n\n/** Extracts a typed variable binding from a pattern-matching construct.\n *  Returns { varName, typeName } for patterns that introduce NEW variables\n *  or narrow existing variables (null-check narrowing).\n *  Examples: `if let Some(user) = opt` (Rust), `x instanceof User user` (Java),\n *  `if (x != null)` (null-check narrowing in TS/Kotlin/C#).\n *  Conservative: returns undefined when the source variable's type is unknown.\n *\n *  @param scopeEnv   Read-only view of already-resolved type bindings in the current scope.\n *  @param declarationTypeNodes  Maps `scope\\0varName` to the original declaration's type\n *    annotation AST node. Allows extracting generic type arguments (e.g., T from Result<T,E>)\n *    that are stripped during normal TypeEnv extraction.\n *  @param scope  Current scope key (e.g. `\"process@42\"`) for declarationTypeNodes lookups. */\nexport type PatternBindingExtractor = (\n  node: SyntaxNode,\n  scopeEnv: ReadonlyMap<string, string>,\n  declarationTypeNodes: ReadonlyMap<string, SyntaxNode>,\n  scope: string,\n) => PatternBindingResult | undefined;\n\n/** Per-language type extraction configuration */\nexport interface LanguageTypeConfig {\n  /** Allow pattern binding to overwrite existing scopeEnv entries.\n   *  WARNING: Enables function-scope type pollution. 
Only for languages with\n   *  smart-cast semantics (e.g., Kotlin `when/is`) where the subject variable\n   *  already exists in scopeEnv from its declaration. */\n  readonly allowPatternBindingOverwrite?: boolean;\n  /** Node types that represent typed declarations for this language */\n  declarationNodeTypes: ReadonlySet<string>;\n  /** AST node types for for-each/for-in statements with explicit element types. */\n  forLoopNodeTypes?: ReadonlySet<string>;\n  /** Optional allowlist of AST node types on which extractPatternBinding should run.\n   *  When present, extractPatternBinding is only invoked for nodes whose type is in this set,\n   *  short-circuiting the call for all other node types. When absent, every node is passed to\n   *  extractPatternBinding (legacy behaviour). */\n  patternBindingNodeTypes?: ReadonlySet<string>;\n  /** Extract a (varName → typeName) binding from a declaration node */\n  extractDeclaration: TypeBindingExtractor;\n  /** Extract a (varName → typeName) binding from a parameter node */\n  extractParameter: ParameterExtractor;\n  /** Extract a (varName → typeName) binding from a constructor-call initializer.\n   *  Called as fallback when extractDeclaration produces no binding for a declaration node.\n   *  Only for languages with syntactic constructor markers (new, composite_literal, ::new).\n   *  Receives classNames — the set of class/struct names visible in the current file's AST. */\n  extractInitializer?: InitializerExtractor;\n  /** Scan for untyped `var = callee()` assignments for return-type inference.\n   *  Called on every AST node during buildTypeEnv walk; returns undefined for non-matches.\n   *  The callee binding is unverified — the caller must confirm against the SymbolTable. */\n  scanConstructorBinding?: ConstructorBindingScanner;\n  /** Extract return type from comment-based annotations (e.g. YARD @return [Type]).\n   *  Called as fallback when extractMethodSignature finds no AST-based return type. 
*/\n  extractReturnType?: ReturnTypeExtractor;\n  /** Extract loop variable → type binding from a for-each AST node. */\n  extractForLoopBinding?: ForLoopExtractor;\n  /** Extract pending assignment for Tier 2 propagation.\n   *  Called on declaration/assignment nodes; returns a PendingAssignment when the RHS\n   *  is a bare identifier (copy) or call expression (callResult) and the LHS has no\n   *  resolved type yet. Language-specific because AST shapes differ widely. */\n  extractPendingAssignment?: PendingAssignmentExtractor;\n  /** Extract a typed variable binding from a pattern-matching construct.\n   *  Called on every AST node; returns { varName, typeName } when the node introduces a new\n   *  typed variable via pattern matching (e.g. `if let Some(x) = opt`, `x instanceof T t`).\n   *  The extractor receives the current scope's resolved bindings (read-only) to look up the\n   *  source variable's type. Returns undefined for non-matching nodes or unknown source types. */\n  extractPatternBinding?: PatternBindingExtractor;\n  inferLiteralType?: LiteralTypeInferrer;\n  detectConstructorType?: ConstructorTypeDetector;\n  unwrapDeclaredType?: DeclaredTypeUnwrapper;\n}\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/type-extractors/typescript.ts",
    "content": "import type { SyntaxNode } from '../utils.js';\nimport type { LanguageTypeConfig, ParameterExtractor, TypeBindingExtractor, InitializerExtractor, ClassNameLookup, ConstructorBindingScanner, ReturnTypeExtractor, PendingAssignmentExtractor, PendingAssignment, ForLoopExtractor, PatternBindingExtractor, LiteralTypeInferrer } from './types.js';\nimport { extractSimpleTypeName, extractVarName, hasTypeAnnotation, unwrapAwait, extractCalleeName, extractElementTypeFromString, extractGenericTypeArgs, resolveIterableElementType, methodToTypeArgPosition, type TypeArgPosition } from './shared.js';\n\nconst DECLARATION_NODE_TYPES: ReadonlySet<string> = new Set([\n  'lexical_declaration',\n  'variable_declaration',\n  'function_declaration',   // JSDoc @param on function declarations\n  'method_definition',      // JSDoc @param on class methods\n  'public_field_definition', // class field: private users: User[]\n]);\n\nconst normalizeJsDocType = (raw: string): string | undefined => {\n  let type = raw.trim();\n  // Strip JSDoc nullable/non-nullable prefixes: ?User → User, !User → User\n  if (type.startsWith('?') || type.startsWith('!')) type = type.slice(1);\n  // Strip union with null/undefined/void: User|null → User\n  const parts = type.split('|').map(p => p.trim()).filter(p =>\n    p !== 'null' && p !== 'undefined' && p !== 'void'\n  );\n  if (parts.length !== 1) return undefined; // ambiguous union\n  type = parts[0];\n  // Strip module: prefix — module:models.User → models.User\n  if (type.startsWith('module:')) type = type.slice(7);\n  // Take last segment of dotted path: models.User → User\n  const segments = type.split('.');\n  type = segments[segments.length - 1];\n  // Strip generic wrapper: Promise<User> → Promise (base type, not inner)\n  const genericMatch = type.match(/^(\\w+)\\s*</);\n  if (genericMatch) type = genericMatch[1];\n  // Simple identifier check\n  if (/^\\w+$/.test(type)) return type;\n  return undefined;\n};\n\n/** Regex to extract 
JSDoc @param annotations: `@param {Type} name` */\nconst JSDOC_PARAM_RE = /@param\\s*\\{([^}]+)\\}\\s+\\[?(\\w+)[\\]=]?[^\\s]*/g;\n\n/**\n * Collect JSDoc @param type bindings from comment nodes preceding a function/method.\n * Returns a map of paramName → typeName.\n */\nconst collectJsDocParams = (funcNode: SyntaxNode): Map<string, string> => {\n  const commentTexts: string[] = [];\n  let sibling = funcNode.previousSibling;\n  while (sibling) {\n    if (sibling.type === 'comment') {\n      commentTexts.unshift(sibling.text);\n    } else if (sibling.isNamed && sibling.type !== 'decorator') {\n      break;\n    }\n    sibling = sibling.previousSibling;\n  }\n  if (commentTexts.length === 0) return new Map();\n\n  const params = new Map<string, string>();\n  const commentBlock = commentTexts.join('\\n');\n  JSDOC_PARAM_RE.lastIndex = 0;\n  let match: RegExpExecArray | null;\n  while ((match = JSDOC_PARAM_RE.exec(commentBlock)) !== null) {\n    const typeName = normalizeJsDocType(match[1]);\n    const paramName = match[2];\n    if (typeName) {\n      params.set(paramName, typeName);\n    }\n  }\n  return params;\n};\n\n/**\n * TypeScript: const x: Foo = ..., let x: Foo\n * Also: JSDoc @param annotations on function/method definitions (for .js files).\n */\nconst extractDeclaration: TypeBindingExtractor = (node: SyntaxNode, env: Map<string, string>): void => {\n  // JSDoc @param on functions/methods — pre-populate env with param types\n  if (node.type === 'function_declaration' || node.type === 'method_definition') {\n    const jsDocParams = collectJsDocParams(node);\n    for (const [paramName, typeName] of jsDocParams) {\n      if (!env.has(paramName)) env.set(paramName, typeName);\n    }\n    return;\n  }\n\n  // Class field: `private users: User[]` — public_field_definition has name + type fields directly.\n  if (node.type === 'public_field_definition') {\n    const nameNode = node.childForFieldName('name');\n    const typeAnnotation = 
node.childForFieldName('type');\n    if (!nameNode || !typeAnnotation) return;\n    const varName = nameNode.text;\n    if (!varName) return;\n    const typeName = extractSimpleTypeName(typeAnnotation);\n    if (typeName) env.set(varName, typeName);\n    return;\n  }\n\n  for (let i = 0; i < node.namedChildCount; i++) {\n    const declarator = node.namedChild(i);\n    if (declarator?.type !== 'variable_declarator') continue;\n    const nameNode = declarator.childForFieldName('name');\n    const typeAnnotation = declarator.childForFieldName('type');\n    if (!nameNode || !typeAnnotation) continue;\n    const varName = extractVarName(nameNode);\n    const typeName = extractSimpleTypeName(typeAnnotation);\n    if (varName && typeName) env.set(varName, typeName);\n  }\n};\n\n/** TypeScript: required_parameter / optional_parameter → name: type */\nconst extractParameter: ParameterExtractor = (node: SyntaxNode, env: Map<string, string>): void => {\n  let nameNode: SyntaxNode | null = null;\n  let typeNode: SyntaxNode | null = null;\n\n  if (node.type === 'required_parameter' || node.type === 'optional_parameter') {\n    nameNode = node.childForFieldName('pattern') ?? node.childForFieldName('name');\n    typeNode = node.childForFieldName('type');\n  } else {\n    // Generic fallback\n    nameNode = node.childForFieldName('name') ?? 
node.childForFieldName('pattern');\n    typeNode = node.childForFieldName('type');\n  }\n\n  if (!nameNode || !typeNode) return;\n  const varName = extractVarName(nameNode);\n  const typeName = extractSimpleTypeName(typeNode);\n  if (varName && typeName) env.set(varName, typeName);\n};\n\n/** TypeScript: const x = new User() — infer type from new_expression */\nconst extractInitializer: InitializerExtractor = (node: SyntaxNode, env: Map<string, string>, _classNames: ClassNameLookup): void => {\n  for (let i = 0; i < node.namedChildCount; i++) {\n    const declarator = node.namedChild(i);\n    if (declarator?.type !== 'variable_declarator') continue;\n    // Only activate when there is no explicit type annotation — extractDeclaration already\n    // handles the annotated case and this function is called as a fallback.\n    if (declarator.childForFieldName('type') !== null) continue;\n    let valueNode = declarator.childForFieldName('value');\n    // Unwrap `new User() as T`, `new User()!`, and double-cast `new User() as unknown as T`\n    while (valueNode?.type === 'as_expression' || valueNode?.type === 'non_null_expression') {\n      valueNode = valueNode.firstNamedChild;\n    }\n    if (valueNode?.type !== 'new_expression') continue;\n    const constructorNode = valueNode.childForFieldName('constructor');\n    if (!constructorNode) continue;\n    const nameNode = declarator.childForFieldName('name');\n    if (!nameNode) continue;\n    const varName = extractVarName(nameNode);\n    const typeName = extractSimpleTypeName(constructorNode);\n    if (varName && typeName) env.set(varName, typeName);\n  }\n};\n\n/**\n * TypeScript/JavaScript: const user = getUser() — variable_declarator with call_expression value.\n * Only matches unannotated declarators; annotated ones are handled by extractDeclaration.\n * await is unwrapped: const user = await fetchUser() → callee = 'fetchUser'.\n */\nconst scanConstructorBinding: ConstructorBindingScanner = (node) => {\n  if 
(node.type !== 'variable_declarator') return undefined;\n  if (hasTypeAnnotation(node)) return undefined;\n  const nameNode = node.childForFieldName('name');\n  if (!nameNode || nameNode.type !== 'identifier') return undefined;\n  const value = unwrapAwait(node.childForFieldName('value'));\n  if (!value || value.type !== 'call_expression') return undefined;\n  const calleeName = extractCalleeName(value);\n  if (!calleeName) return undefined;\n  return { varName: nameNode.text, calleeName };\n};\n\n/** Regex to extract @returns or @return from JSDoc comments: `@returns {Type}` */\nconst JSDOC_RETURN_RE = /@returns?\\s*\\{([^}]+)\\}/;\n\n/**\n * Minimal sanitization for JSDoc return types — preserves generic wrappers\n * (e.g. `Promise<User>`) so that extractReturnTypeName in call-processor\n * can apply WRAPPER_GENERICS unwrapping. Unlike normalizeJsDocType (which\n * strips generics), this only strips JSDoc-specific syntax markers.\n */\nconst sanitizeReturnType = (raw: string): string | undefined => {\n  let type = raw.trim();\n  // Strip JSDoc nullable/non-nullable prefixes: ?User → User, !User → User\n  if (type.startsWith('?') || type.startsWith('!')) type = type.slice(1);\n  // Strip module: prefix — module:models.User → models.User\n  if (type.startsWith('module:')) type = type.slice(7);\n  // Reject unions (ambiguous)\n  if (type.includes('|')) return undefined;\n  if (!type) return undefined;\n  return type;\n};\n\n/**\n * Extract return type from JSDoc `@returns {Type}` or `@return {Type}` annotation\n * preceding a function/method definition. 
Walks backwards through preceding siblings\n * looking for comment nodes containing the annotation.\n */\nconst extractReturnType: ReturnTypeExtractor = (node) => {\n  let sibling = node.previousSibling;\n  while (sibling) {\n    if (sibling.type === 'comment') {\n      const match = JSDOC_RETURN_RE.exec(sibling.text);\n      if (match) return sanitizeReturnType(match[1]);\n    } else if (sibling.isNamed && sibling.type !== 'decorator') break;\n    sibling = sibling.previousSibling;\n  }\n  return undefined;\n};\n\nconst FOR_LOOP_NODE_TYPES: ReadonlySet<string> = new Set([\n  'for_in_statement',\n]);\n\n/** TS function/method node types that carry a parameters list. */\nconst TS_FUNCTION_NODE_TYPES = new Set([\n  'function_declaration', 'function_expression', 'arrow_function',\n  'method_definition', 'generator_function', 'generator_function_declaration',\n]);\n\n/**\n * Extract element type from a TypeScript type annotation AST node.\n * Handles:\n *   type_annotation \": User[]\"  →  array_type → type_identifier \"User\"\n *   type_annotation \": Array<User>\"  →  generic_type → extractGenericTypeArgs → \"User\"\n * Falls back to text-based extraction via extractElementTypeFromString.\n */\nconst extractTsElementTypeFromAnnotation = (typeAnnotation: SyntaxNode, pos: TypeArgPosition = 'last', depth = 0): string | undefined => {\n  if (depth > 50) return undefined;\n  // Unwrap type_annotation (the node text includes ': ' prefix)\n  const inner = typeAnnotation.type === 'type_annotation'\n    ? (typeAnnotation.firstNamedChild ?? 
typeAnnotation)\n    : typeAnnotation;\n\n  // readonly User[] — readonly_type wraps array_type: unwrap and recurse\n  if (inner.type === 'readonly_type') {\n    const wrapped = inner.firstNamedChild;\n    if (wrapped) return extractTsElementTypeFromAnnotation(wrapped, pos, depth + 1);\n  }\n\n  // User[] — array_type: first named child is the element type\n  if (inner.type === 'array_type') {\n    const elem = inner.firstNamedChild;\n    if (elem) return extractSimpleTypeName(elem);\n  }\n\n  // Array<User>, Map<string, User> — generic_type\n  // pos determines which type arg: 'first' for keys, 'last' for values\n  if (inner.type === 'generic_type') {\n    const args = extractGenericTypeArgs(inner);\n    if (args.length >= 1) return pos === 'first' ? args[0] : args[args.length - 1];\n  }\n\n  // Fallback: strip ': ' prefix from type_annotation text and use string extraction\n  const rawText = inner.text;\n  return extractElementTypeFromString(rawText, pos);\n};\n\n/**\n * Search a statement_block (function body) for a variable_declarator named `iterableName`\n * that has a type annotation, preceding the given `beforeNode`.\n * Returns the element type from the type annotation, or undefined.\n */\nconst findTsLocalDeclElementType = (\n  iterableName: string,\n  blockNode: SyntaxNode,\n  beforeNode: SyntaxNode,\n  pos: TypeArgPosition = 'last',\n): string | undefined => {\n  for (let i = 0; i < blockNode.namedChildCount; i++) {\n    const stmt = blockNode.namedChild(i);\n    if (!stmt) continue;\n    // Stop when we reach the for-loop itself\n    if (stmt === beforeNode || stmt.startIndex >= beforeNode.startIndex) break;\n    // Look for lexical_declaration or variable_declaration\n    if (stmt.type !== 'lexical_declaration' && stmt.type !== 'variable_declaration') continue;\n    for (let j = 0; j < stmt.namedChildCount; j++) {\n      const decl = stmt.namedChild(j);\n      if (decl?.type !== 'variable_declarator') continue;\n      const nameNode = 
decl.childForFieldName('name');\n      if (nameNode?.text !== iterableName) continue;\n      const typeAnnotation = decl.childForFieldName('type');\n      if (typeAnnotation) return extractTsElementTypeFromAnnotation(typeAnnotation, pos);\n    }\n  }\n  return undefined;\n};\n\n/**\n * Walk up the AST from a for-loop node to find the enclosing function scope,\n * then search (1) its parameter list and (2) local declarations in the body\n * for a variable named `iterableName` with a container type annotation.\n * Returns the element type extracted from the annotation, or undefined.\n */\nconst findTsIterableElementType = (iterableName: string, startNode: SyntaxNode, pos: TypeArgPosition = 'last'): string | undefined => {\n  let current: SyntaxNode | null = startNode.parent;\n  // Capture the immediate statement_block parent to search local declarations\n  const blockNode = current?.type === 'statement_block' ? current : null;\n\n  while (current) {\n    if (TS_FUNCTION_NODE_TYPES.has(current.type)) {\n      // Search function parameters\n      const paramsNode = current.childForFieldName('parameters')\n        ?? current.childForFieldName('formal_parameters');\n      if (paramsNode) {\n        for (let i = 0; i < paramsNode.namedChildCount; i++) {\n          const param = paramsNode.namedChild(i);\n          if (!param) continue;\n          const patternNode = param.childForFieldName('pattern') ?? 
param.childForFieldName('name');\n          if (patternNode?.text === iterableName) {\n            const typeAnnotation = param.childForFieldName('type');\n            if (typeAnnotation) return extractTsElementTypeFromAnnotation(typeAnnotation, pos);\n          }\n        }\n      }\n      // Search local declarations in the function body (statement_block)\n      if (blockNode) {\n        const result = findTsLocalDeclElementType(iterableName, blockNode, startNode, pos);\n        if (result) return result;\n      }\n      break; // stop at the nearest function boundary\n    }\n    current = current.parent;\n  }\n  return undefined;\n};\n\n/**\n * TypeScript/JavaScript: for (const user of users) where users has a known array type.\n *\n * Both `for...of` and `for...in` use the same `for_in_statement` AST node in tree-sitter.\n * We differentiate by checking for the `of` keyword among the unnamed children.\n *\n * Tier 1c: resolves the element type via three strategies in priority order:\n *   1. declarationTypeNodes — raw type annotation AST node (covers Array<User> from declarations)\n *   2. scopeEnv string — extractElementTypeFromString on the stored type (covers locally annotated vars)\n *   3. 
AST walk — walks up to the enclosing function's parameters to read User[] annotations directly\n * Only handles `for...of`; `for...in` produces string keys, not element types.\n */\nconst extractForLoopBinding: ForLoopExtractor = (node, { scopeEnv, declarationTypeNodes, scope, returnTypeLookup }): void => {\n  if (node.type !== 'for_in_statement') return;\n\n  // Confirm this is `for...of`, not `for...in`, by scanning unnamed children for the keyword text.\n  let isForOf = false;\n  for (let i = 0; i < node.childCount; i++) {\n    const child = node.child(i);\n    if (child && !child.isNamed && child.text === 'of') {\n      isForOf = true;\n      break;\n    }\n  }\n  if (!isForOf) return;\n\n  // The iterable is the `right` field — may be identifier, member_expression, or call_expression.\n  const rightNode = node.childForFieldName('right');\n  let iterableName: string | undefined;\n  let methodName: string | undefined;\n  let callExprElementType: string | undefined;\n  if (rightNode?.type === 'identifier') {\n    iterableName = rightNode.text;\n  } else if (rightNode?.type === 'member_expression') {\n    const prop = rightNode.childForFieldName('property');\n    if (prop) iterableName = prop.text;\n  } else if (rightNode?.type === 'call_expression') {\n    // entries.values() → call_expression > function: member_expression > object + property\n    // this.repos.values() → nested member_expression: extract property from inner member\n    // getUsers() → call_expression > function: identifier (Phase 7.3 — return-type path)\n    const fn = rightNode.childForFieldName('function');\n    if (fn?.type === 'member_expression') {\n      const obj = fn.childForFieldName('object');\n      const prop = fn.childForFieldName('property');\n      if (obj?.type === 'identifier') {\n        iterableName = obj.text;\n      } else if (obj?.type === 'member_expression') {\n        // this.repos.values() → obj = this.repos → extract 'repos'\n        const innerProp = 
obj.childForFieldName('property');\n        if (innerProp) iterableName = innerProp.text;\n      }\n      if (prop?.type === 'property_identifier') methodName = prop.text;\n    } else if (fn?.type === 'identifier') {\n      // Direct function call: for (const user of getUsers())\n      const rawReturn = returnTypeLookup.lookupRawReturnType(fn.text);\n      if (rawReturn) callExprElementType = extractElementTypeFromString(rawReturn);\n    }\n  }\n  if (!iterableName && !callExprElementType) return;\n\n  let elementType: string | undefined;\n  if (callExprElementType) {\n    elementType = callExprElementType;\n  } else {\n    // Look up the container's base type name for descriptor-aware resolution\n    const containerTypeName = scopeEnv.get(iterableName!);\n    const typeArgPos = methodToTypeArgPosition(methodName, containerTypeName);\n    elementType = resolveIterableElementType(\n      iterableName!, node, scopeEnv, declarationTypeNodes, scope,\n      extractTsElementTypeFromAnnotation, findTsIterableElementType,\n      typeArgPos,\n    );\n  }\n  if (!elementType) return;\n\n  // The loop variable is the `left` field.\n  const leftNode = node.childForFieldName('left');\n  if (!leftNode) return;\n\n  // Handle destructured for-of: for (const [k, v] of entries)\n  // AST: left = array_pattern directly (no variable_declarator wrapper)\n  // Bind the LAST identifier to the element type (value in [key, value] patterns)\n  if (leftNode.type === 'array_pattern') {\n    const lastChild = leftNode.lastNamedChild;\n    if (lastChild?.type === 'identifier') {\n      scopeEnv.set(lastChild.text, elementType);\n    }\n    return;\n  }\n\n  if (leftNode.type === 'object_pattern') {\n    // Object destructuring (e.g., `for (const { id } of users)`) destructures\n    // into fields of the element type. Without field-level resolution, we cannot\n    // bind individual properties to their correct types. 
Skip to avoid false bindings.\n    return;\n  }\n\n  let loopVarNode: SyntaxNode | null = leftNode;\n  // `const user` parses as: left → variable_declarator containing an identifier named `user`\n  if (loopVarNode.type === 'variable_declarator') {\n    loopVarNode = loopVarNode.childForFieldName('name') ?? loopVarNode.firstNamedChild;\n  }\n  if (!loopVarNode) return;\n\n  const loopVarName = extractVarName(loopVarNode);\n  if (loopVarName) scopeEnv.set(loopVarName, elementType);\n};\n\n/** TS/JS: const alias = u → variable_declarator with name/value fields.\n *  Also handles destructuring: `const { a, b } = obj` → N fieldAccess items. */\nconst extractPendingAssignment: PendingAssignmentExtractor = (node, scopeEnv) => {\n  for (let i = 0; i < node.namedChildCount; i++) {\n    const child = node.namedChild(i);\n    if (!child || child.type !== 'variable_declarator') continue;\n    const nameNode = child.childForFieldName('name');\n    const valueNode = child.childForFieldName('value');\n    if (!nameNode || !valueNode) continue;\n\n    // Object destructuring: `const { address, name } = user`\n    // Emits N fieldAccess items — one per destructured binding.\n    if (nameNode.type === 'object_pattern' && valueNode.type === 'identifier') {\n      const receiver = valueNode.text;\n      const items: PendingAssignment[] = [];\n      for (let j = 0; j < nameNode.namedChildCount; j++) {\n        const prop = nameNode.namedChild(j);\n        if (!prop) continue;\n        if (prop.type === 'shorthand_property_identifier_pattern') {\n          // `const { name } = user` → shorthand: varName = fieldName\n          const varName = prop.text;\n          if (!scopeEnv.has(varName)) {\n            items.push({ kind: 'fieldAccess', lhs: varName, receiver, field: varName });\n          }\n        } else if (prop.type === 'pair_pattern') {\n          // `const { address: addr } = user` → pair_pattern: key=field, value=varName\n          const keyNode = 
prop.childForFieldName('key');\n          const valNode = prop.childForFieldName('value');\n          if (keyNode && valNode) {\n            const fieldName = keyNode.text;\n            const varName = valNode.text;\n            if (!scopeEnv.has(varName)) {\n              items.push({ kind: 'fieldAccess', lhs: varName, receiver, field: fieldName });\n            }\n          }\n        }\n      }\n      if (items.length > 0) return items;\n      continue;\n    }\n\n    const lhs = nameNode.text;\n    if (scopeEnv.has(lhs)) continue;\n    if (valueNode.type === 'identifier') return { kind: 'copy', lhs, rhs: valueNode.text };\n    // member_expression RHS → fieldAccess (a.field, this.field)\n    if (valueNode.type === 'member_expression') {\n      const obj = valueNode.childForFieldName('object');\n      const prop = valueNode.childForFieldName('property');\n      if (obj && prop?.type === 'property_identifier' &&\n          (obj.type === 'identifier' || obj.type === 'this')) {\n        return { kind: 'fieldAccess', lhs, receiver: obj.text, field: prop.text };\n      }\n      continue;\n    }\n    // Unwrap await: `const user = await fetchUser()` or `await a.getC()`\n    const callNode = unwrapAwait(valueNode);\n    if (!callNode || callNode.type !== 'call_expression') continue;\n    const funcNode = callNode.childForFieldName('function');\n    if (!funcNode) continue;\n    // Simple call → callResult: getUser()\n    if (funcNode.type === 'identifier') {\n      return { kind: 'callResult', lhs, callee: funcNode.text };\n    }\n    // Method call with receiver → methodCallResult: a.getC()\n    if (funcNode.type === 'member_expression') {\n      const obj = funcNode.childForFieldName('object');\n      const prop = funcNode.childForFieldName('property');\n      if (obj && prop?.type === 'property_identifier' &&\n          (obj.type === 'identifier' || obj.type === 'this')) {\n        return { kind: 'methodCallResult', lhs, receiver: obj.text, method: prop.text };\n     
 }\n    }\n  }\n  return undefined;\n};\n\n/** Null-check keywords that indicate a null-comparison in binary expressions. */\nconst NULL_CHECK_KEYWORDS = new Set(['null', 'undefined']);\n\n/**\n * Find the if-body (consequence) block for a null-check binary_expression.\n * Walks up from the binary_expression through parenthesized_expression to if_statement,\n * then returns the consequence block (statement_block).\n *\n * AST structure: if_statement > parenthesized_expression > binary_expression\n *                if_statement > statement_block (consequence)\n */\nconst findIfConsequenceBlock = (binaryExpr: SyntaxNode): SyntaxNode | undefined => {\n  // Walk up to find the if_statement (typically: binary_expression > parenthesized_expression > if_statement)\n  let current = binaryExpr.parent;\n  while (current) {\n    if (current.type === 'if_statement') {\n      // The consequence is the first statement_block child of if_statement\n      for (let i = 0; i < current.childCount; i++) {\n        const child = current.child(i);\n        if (child?.type === 'statement_block') return child;\n      }\n      return undefined;\n    }\n    // Stop climbing at function/block boundaries — don't cross scope\n    if (current.type === 'function_declaration' || current.type === 'function_expression'\n      || current.type === 'arrow_function' || current.type === 'method_definition') return undefined;\n    current = current.parent;\n  }\n  return undefined;\n};\n\n/** TS instanceof narrowing: `x instanceof User` → bind x to User.\n *  Also handles null-check narrowing: `x !== null`, `x != undefined` etc.\n *  instanceof: first-writer-wins (no prior type binding).\n *  null-check: position-indexed narrowing via narrowingRange. 
*/\nconst extractPatternBinding: PatternBindingExtractor = (node, scopeEnv, declarationTypeNodes, scope) => {\n  if (node.type !== 'binary_expression') return undefined;\n\n  // Check for instanceof first (existing behavior)\n  const instanceofOp = node.children.find(c => !c.isNamed && c.text === 'instanceof');\n  if (instanceofOp) {\n    const left = node.namedChild(0);\n    const right = node.namedChild(1);\n    if (left?.type !== 'identifier' || right?.type !== 'identifier') return undefined;\n    return { varName: left.text, typeName: right.text };\n  }\n\n  // Null-check narrowing: x !== null, x != null, x !== undefined, x != undefined\n  const op = node.children.find(c => !c.isNamed && (c.text === '!==' || c.text === '!='));\n  if (!op) return undefined;\n\n  const left = node.namedChild(0);\n  const right = node.namedChild(1);\n  if (!left || !right) return undefined;\n\n  // Determine which side is the variable and which is null/undefined\n  let varNode: SyntaxNode | undefined;\n  let isNullCheck = false;\n  if (left.type === 'identifier' && NULL_CHECK_KEYWORDS.has(right.text)) {\n    varNode = left;\n    isNullCheck = true;\n  } else if (right.type === 'identifier' && NULL_CHECK_KEYWORDS.has(left.text)) {\n    varNode = right;\n    isNullCheck = true;\n  }\n  if (!isNullCheck || !varNode) return undefined;\n\n  const varName = varNode.text;\n  // Look up the variable's resolved type (already stripped of nullable by extractSimpleTypeName)\n  const resolvedType = scopeEnv.get(varName);\n  if (!resolvedType) return undefined;\n\n  // Check if the original declaration type was nullable by looking at the raw AST type node.\n  // extractSimpleTypeName already strips nullable markers, so we need the original to know\n  // if narrowing is meaningful (i.e., the variable was declared as nullable).\n  const declTypeNode = declarationTypeNodes.get(`${scope}\\0${varName}`);\n  if (!declTypeNode) return undefined;\n  const declText = declTypeNode.text;\n  // Only narrow 
if the original declaration was nullable\n  if (!declText.includes('null') && !declText.includes('undefined')) return undefined;\n\n  // Find the if-body block to scope the narrowing\n  const ifBody = findIfConsequenceBlock(node);\n  if (!ifBody) return undefined;\n\n  return {\n    varName,\n    typeName: resolvedType,\n    narrowingRange: { startIndex: ifBody.startIndex, endIndex: ifBody.endIndex },\n  };\n};\n\n/** Infer the type of a literal AST node for TypeScript overload disambiguation. */\nconst inferTsLiteralType: LiteralTypeInferrer = (node) => {\n  switch (node.type) {\n    case 'number':\n      return 'number';\n    case 'string':\n    case 'template_string':\n      return 'string';\n    case 'true':\n    case 'false':\n      return 'boolean';\n    case 'null':\n      return 'null';\n    case 'undefined':\n      return 'undefined';\n    case 'regex':\n      return 'RegExp';\n    default:\n      return undefined;\n  }\n};\n\nexport const typeConfig: LanguageTypeConfig = {\n  declarationNodeTypes: DECLARATION_NODE_TYPES,\n  forLoopNodeTypes: FOR_LOOP_NODE_TYPES,\n  patternBindingNodeTypes: new Set(['binary_expression']),\n  extractDeclaration,\n  extractParameter,\n  extractInitializer,\n  scanConstructorBinding,\n  extractReturnType,\n  extractForLoopBinding,\n  extractPendingAssignment,\n  extractPatternBinding,\n  inferLiteralType: inferTsLiteralType,\n};\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/utils.ts",
    "content": "import type Parser from 'tree-sitter';\nimport { SupportedLanguages } from '../../config/supported-languages.js';\nimport { generateId } from '../../lib/utils.js';\nimport { extractSimpleTypeName } from './type-extractors/shared.js';\n\n/** Tree-sitter AST node. Re-exported for use across ingestion modules. */\nexport type SyntaxNode = Parser.SyntaxNode;\n\n/**\n * Ordered list of definition capture keys for tree-sitter query matches.\n * Used to extract the definition node from a capture map.\n */\nexport const DEFINITION_CAPTURE_KEYS = [\n  'definition.function',\n  'definition.class',\n  'definition.interface',\n  'definition.method',\n  'definition.struct',\n  'definition.enum',\n  'definition.namespace',\n  'definition.module',\n  'definition.trait',\n  'definition.impl',\n  'definition.type',\n  'definition.const',\n  'definition.static',\n  'definition.typedef',\n  'definition.macro',\n  'definition.union',\n  'definition.property',\n  'definition.record',\n  'definition.delegate',\n  'definition.annotation',\n  'definition.constructor',\n  'definition.template',\n] as const;\n\n/** Extract the definition node from a tree-sitter query capture map. 
*/\nexport const getDefinitionNodeFromCaptures = (captureMap: Record<string, any>): SyntaxNode | null => {\n  for (const key of DEFINITION_CAPTURE_KEYS) {\n    if (captureMap[key]) return captureMap[key];\n  }\n  return null;\n};\n\n/**\n * Node types that represent function/method definitions across languages.\n * Used to find the enclosing function for a call site.\n */\nexport const FUNCTION_NODE_TYPES = new Set([\n  // TypeScript/JavaScript\n  'function_declaration',\n  'arrow_function',\n  'function_expression',\n  'method_definition',\n  'generator_function_declaration',\n  // Python\n  'function_definition',\n  // Common async variants\n  'async_function_declaration',\n  'async_arrow_function',\n  // Java\n  'method_declaration',\n  'constructor_declaration',\n  // C/C++\n  // 'function_definition' already included above\n  // Go\n  // 'method_declaration' already included from Java\n  // C#\n  'local_function_statement',\n  // Rust\n  'function_item',\n  'impl_item', // Methods inside impl blocks\n  // PHP\n  'anonymous_function',\n  // Kotlin\n  'lambda_literal',\n  // Swift\n  'init_declaration',\n  'deinit_declaration',\n  // Ruby\n  'method',           // def foo\n  'singleton_method', // def self.foo\n]);\n\n/**\n * Node types for standard function declarations that need C/C++ declarator handling.\n * Used by extractFunctionName to determine how to extract the function name.\n */\nexport const FUNCTION_DECLARATION_TYPES = new Set([\n  'function_declaration',\n  'function_definition',\n  'async_function_declaration',\n  'generator_function_declaration',\n  'function_item',\n]);\n\n/**\n * Built-in function/method names that should not be tracked as call targets.\n * Covers JS/TS, Python, Kotlin, C/C++, PHP, Swift standard library functions.\n */\nexport const BUILT_IN_NAMES = new Set([\n  // JavaScript/TypeScript\n  'console', 'log', 'warn', 'error', 'info', 'debug',\n  'setTimeout', 'setInterval', 'clearTimeout', 'clearInterval',\n  'parseInt', 
'parseFloat', 'isNaN', 'isFinite',\n  'encodeURI', 'decodeURI', 'encodeURIComponent', 'decodeURIComponent',\n  'JSON', 'parse', 'stringify',\n  'Object', 'Array', 'String', 'Number', 'Boolean', 'Symbol', 'BigInt',\n  'Map', 'Set', 'WeakMap', 'WeakSet',\n  'Promise', 'resolve', 'reject', 'then', 'catch', 'finally',\n  'Math', 'Date', 'RegExp', 'Error',\n  'require', 'import', 'export', 'fetch', 'Response', 'Request',\n  'useState', 'useEffect', 'useCallback', 'useMemo', 'useRef', 'useContext',\n  'useReducer', 'useLayoutEffect', 'useImperativeHandle', 'useDebugValue',\n  'createElement', 'createContext', 'createRef', 'forwardRef', 'memo', 'lazy',\n  'map', 'filter', 'reduce', 'forEach', 'find', 'findIndex', 'some', 'every',\n  'includes', 'indexOf', 'slice', 'splice', 'concat', 'join', 'split',\n  'push', 'pop', 'shift', 'unshift', 'sort', 'reverse',\n  'keys', 'values', 'entries', 'assign', 'freeze', 'seal',\n  'hasOwnProperty', 'toString', 'valueOf',\n  // Python\n  'print', 'len', 'range', 'str', 'int', 'float', 'list', 'dict', 'set', 'tuple',\n  'append', 'extend', 'update',\n  // NOTE: 'open', 'read', 'write', 'close' removed — these are real C POSIX syscalls\n  'type', 'isinstance', 'issubclass', 'getattr', 'setattr', 'hasattr',\n  'enumerate', 'zip', 'sorted', 'reversed', 'min', 'max', 'sum', 'abs',\n  // Kotlin stdlib\n  'println', 'print', 'readLine', 'require', 'requireNotNull', 'check', 'assert', 'lazy', 'error',\n  'listOf', 'mapOf', 'setOf', 'mutableListOf', 'mutableMapOf', 'mutableSetOf',\n  'arrayOf', 'sequenceOf', 'also', 'apply', 'run', 'with', 'takeIf', 'takeUnless',\n  'TODO', 'buildString', 'buildList', 'buildMap', 'buildSet',\n  'repeat', 'synchronized',\n  // Kotlin coroutine builders & scope functions\n  'launch', 'async', 'runBlocking', 'withContext', 'coroutineScope',\n  'supervisorScope', 'delay',\n  // Kotlin Flow operators\n  'flow', 'flowOf', 'collect', 'emit', 'onEach', 'catch',\n  'buffer', 'conflate', 'distinctUntilChanged',\n  
'flatMapLatest', 'flatMapMerge', 'combine',\n  'stateIn', 'shareIn', 'launchIn',\n  // Kotlin infix stdlib functions\n  'to', 'until', 'downTo', 'step',\n  // C/C++ standard library\n  'printf', 'fprintf', 'sprintf', 'snprintf', 'vprintf', 'vfprintf', 'vsprintf', 'vsnprintf',\n  'scanf', 'fscanf', 'sscanf',\n  'malloc', 'calloc', 'realloc', 'free', 'memcpy', 'memmove', 'memset', 'memcmp',\n  'strlen', 'strcpy', 'strncpy', 'strcat', 'strncat', 'strcmp', 'strncmp', 'strstr', 'strchr', 'strrchr',\n  'atoi', 'atol', 'atof', 'strtol', 'strtoul', 'strtoll', 'strtoull', 'strtod',\n  'sizeof', 'offsetof', 'typeof',\n  'assert', 'abort', 'exit', '_exit',\n  'fopen', 'fclose', 'fread', 'fwrite', 'fseek', 'ftell', 'rewind', 'fflush', 'fgets', 'fputs',\n  // Linux kernel common macros/helpers (not real call targets)\n  'likely', 'unlikely', 'BUG', 'BUG_ON', 'WARN', 'WARN_ON', 'WARN_ONCE',\n  'IS_ERR', 'PTR_ERR', 'ERR_PTR', 'IS_ERR_OR_NULL',\n  'ARRAY_SIZE', 'container_of', 'list_for_each_entry', 'list_for_each_entry_safe',\n  'min', 'max', 'clamp', 'abs', 'swap',\n  'pr_info', 'pr_warn', 'pr_err', 'pr_debug', 'pr_notice', 'pr_crit', 'pr_emerg',\n  'printk', 'dev_info', 'dev_warn', 'dev_err', 'dev_dbg',\n  'GFP_KERNEL', 'GFP_ATOMIC',\n  'spin_lock', 'spin_unlock', 'spin_lock_irqsave', 'spin_unlock_irqrestore',\n  'mutex_lock', 'mutex_unlock', 'mutex_init',\n  'kfree', 'kmalloc', 'kzalloc', 'kcalloc', 'krealloc', 'kvmalloc', 'kvfree',\n  'get', 'put',\n  // C# / .NET built-ins\n  'Console', 'WriteLine', 'ReadLine', 'Write',\n  'Task', 'Run', 'Wait', 'WhenAll', 'WhenAny', 'FromResult', 'Delay', 'ContinueWith',\n  'ConfigureAwait', 'GetAwaiter', 'GetResult',\n  'ToString', 'GetType', 'Equals', 'GetHashCode', 'ReferenceEquals',\n  'Add', 'Remove', 'Contains', 'Clear', 'Count', 'Any', 'All',\n  'Where', 'Select', 'SelectMany', 'OrderBy', 'OrderByDescending', 'GroupBy',\n  'First', 'FirstOrDefault', 'Single', 'SingleOrDefault', 'Last', 'LastOrDefault',\n  'ToList', 'ToArray', 
'ToDictionary', 'AsEnumerable', 'AsQueryable',\n  'Aggregate', 'Sum', 'Average', 'Min', 'Max', 'Distinct', 'Skip', 'Take',\n  'String', 'Format', 'IsNullOrEmpty', 'IsNullOrWhiteSpace', 'Concat', 'Join',\n  'Trim', 'TrimStart', 'TrimEnd', 'Split', 'Replace', 'StartsWith', 'EndsWith',\n  'Convert', 'ToInt32', 'ToDouble', 'ToBoolean', 'ToByte',\n  'Math', 'Abs', 'Ceiling', 'Floor', 'Round', 'Pow', 'Sqrt',\n  'Dispose', 'Close',\n  'TryParse', 'Parse',\n  'AddRange', 'RemoveAt', 'RemoveAll', 'FindAll', 'Exists', 'TrueForAll',\n  'ContainsKey', 'TryGetValue', 'AddOrUpdate',\n  'Throw', 'ThrowIfNull',\n  // PHP built-ins\n  'echo', 'isset', 'empty', 'unset', 'list', 'array', 'compact', 'extract',\n  'count', 'strlen', 'strpos', 'strrpos', 'substr', 'strtolower', 'strtoupper', 'trim',\n  'ltrim', 'rtrim', 'str_replace', 'str_contains', 'str_starts_with', 'str_ends_with',\n  'sprintf', 'vsprintf', 'printf', 'number_format',\n  'array_map', 'array_filter', 'array_reduce', 'array_push', 'array_pop', 'array_shift',\n  'array_unshift', 'array_slice', 'array_splice', 'array_merge', 'array_keys', 'array_values',\n  'array_key_exists', 'in_array', 'array_search', 'array_unique', 'usort', 'rsort',\n  'json_encode', 'json_decode', 'serialize', 'unserialize',\n  'intval', 'floatval', 'strval', 'boolval', 'is_null', 'is_string', 'is_int', 'is_array',\n  'is_object', 'is_numeric', 'is_bool', 'is_float',\n  'var_dump', 'print_r', 'var_export',\n  'date', 'time', 'strtotime', 'mktime', 'microtime',\n  'file_exists', 'file_get_contents', 'file_put_contents', 'is_file', 'is_dir',\n  'preg_match', 'preg_match_all', 'preg_replace', 'preg_split',\n  'header', 'session_start', 'session_destroy', 'ob_start', 'ob_end_clean', 'ob_get_clean',\n  'dd', 'dump',\n  // Swift/iOS built-ins and standard library\n  'print', 'debugPrint', 'dump', 'fatalError', 'precondition', 'preconditionFailure',\n  'assert', 'assertionFailure', 'NSLog',\n  'abs', 'min', 'max', 'zip', 'stride', 'sequence', 
'repeatElement',\n  'swap', 'withUnsafePointer', 'withUnsafeMutablePointer', 'withUnsafeBytes',\n  'autoreleasepool', 'unsafeBitCast', 'unsafeDowncast', 'numericCast',\n  'type', 'MemoryLayout',\n  // Swift collection/string methods (common noise)\n  'map', 'flatMap', 'compactMap', 'filter', 'reduce', 'forEach', 'contains',\n  'first', 'last', 'prefix', 'suffix', 'dropFirst', 'dropLast',\n  'sorted', 'reversed', 'enumerated', 'joined', 'split',\n  'append', 'insert', 'remove', 'removeAll', 'removeFirst', 'removeLast',\n  'isEmpty', 'count', 'index', 'startIndex', 'endIndex',\n  // UIKit/Foundation common methods (noise in call graph)\n  'addSubview', 'removeFromSuperview', 'layoutSubviews', 'setNeedsLayout',\n  'layoutIfNeeded', 'setNeedsDisplay', 'invalidateIntrinsicContentSize',\n  'addTarget', 'removeTarget', 'addGestureRecognizer',\n  'addConstraint', 'addConstraints', 'removeConstraint', 'removeConstraints',\n  'NSLocalizedString', 'Bundle',\n  'reloadData', 'reloadSections', 'reloadRows', 'performBatchUpdates',\n  'register', 'dequeueReusableCell', 'dequeueReusableSupplementaryView',\n  'beginUpdates', 'endUpdates', 'insertRows', 'deleteRows', 'insertSections', 'deleteSections',\n  'present', 'dismiss', 'pushViewController', 'popViewController', 'popToRootViewController',\n  'performSegue', 'prepare',\n  // GCD / async\n  'DispatchQueue', 'async', 'sync', 'asyncAfter',\n  'Task', 'withCheckedContinuation', 'withCheckedThrowingContinuation',\n  // Combine\n  'sink', 'store', 'assign', 'receive', 'subscribe',\n  // Notification / KVO\n  'addObserver', 'removeObserver', 'post', 'NotificationCenter',\n  // Rust standard library (common noise in call graphs)\n  'unwrap', 'expect', 'unwrap_or', 'unwrap_or_else', 'unwrap_or_default',\n  'ok', 'err', 'is_ok', 'is_err', 'map', 'map_err', 'and_then', 'or_else',\n  'clone', 'to_string', 'to_owned', 'into', 'from', 'as_ref', 'as_mut',\n  'iter', 'into_iter', 'collect', 'map', 'filter', 'fold', 'for_each',\n  'len', 
'is_empty', 'push', 'pop', 'insert', 'remove', 'contains',\n  'format', 'write', 'writeln', 'panic', 'unreachable', 'todo', 'unimplemented',\n  'vec', 'println', 'eprintln', 'dbg',\n  'lock', 'read', 'write', 'try_lock',\n  'spawn', 'join', 'sleep',\n  'Some', 'None', 'Ok', 'Err',\n  // Ruby built-ins and Kernel methods\n  'puts', 'p', 'pp', 'raise', 'fail',\n  'require', 'require_relative', 'load', 'autoload',\n  'include', 'extend', 'prepend',\n  'attr_accessor', 'attr_reader', 'attr_writer',\n  'public', 'private', 'protected', 'module_function',\n  'lambda', 'proc', 'block_given?',\n  'nil?', 'is_a?', 'kind_of?', 'instance_of?', 'respond_to?',\n  'freeze', 'frozen?', 'dup', 'tap', 'yield_self',\n  // Ruby enumerables\n  'each', 'select', 'reject', 'detect', 'collect',\n  'inject', 'flat_map', 'each_with_object', 'each_with_index',\n  'any?', 'all?', 'none?', 'count', 'first', 'last',\n  'sort_by', 'min_by', 'max_by',\n  'group_by', 'partition', 'compact', 'flatten', 'uniq',\n]);\n\n/** Check if a name is a built-in function or common noise that should be filtered out */\nexport const isBuiltInOrNoise = (name: string): boolean => BUILT_IN_NAMES.has(name);\n\n/** AST node types that represent a class-like container (for HAS_METHOD edge extraction) */\nexport const CLASS_CONTAINER_TYPES = new Set([\n  'class_declaration', 'abstract_class_declaration',\n  'interface_declaration', 'struct_declaration', 'record_declaration',\n  'class_specifier', 'struct_specifier',\n  'impl_item', 'trait_item', 'struct_item', 'enum_item',\n  'class_definition',\n  'trait_declaration',\n  'protocol_declaration',\n  // Ruby\n  'class',\n  'module',\n  // Kotlin\n  'object_declaration',\n  'companion_object',\n]);\n\nexport const CONTAINER_TYPE_TO_LABEL: Record<string, string> = {\n  class_declaration: 'Class',\n  abstract_class_declaration: 'Class',\n  interface_declaration: 'Interface',\n  struct_declaration: 'Struct',\n  struct_specifier: 'Struct',\n  class_specifier: 'Class',\n  
class_definition: 'Class',\n  impl_item: 'Impl',\n  trait_item: 'Trait',\n  struct_item: 'Struct',\n  enum_item: 'Enum',\n  trait_declaration: 'Trait',\n  record_declaration: 'Record',\n  protocol_declaration: 'Interface',\n  class: 'Class',\n  module: 'Module',\n  object_declaration: 'Class',\n  companion_object: 'Class',\n};\n\n/** Walk up AST to find enclosing class/struct/interface/impl, return its generateId or null.\n *  For Go method_declaration nodes, extracts receiver type (e.g. `func (u *User) Save()` → User struct). */\nexport const findEnclosingClassId = (node: any, filePath: string): string | null => {\n  let current = node.parent;\n  while (current) {\n    // Go: method_declaration has a receiver parameter with the struct type\n    if (current.type === 'method_declaration') {\n      const receiver = current.childForFieldName?.('receiver');\n      if (receiver) {\n        // receiver is a parameter_list: (u *User) or (u User)\n        const paramDecl = receiver.namedChildren?.find?.((c: any) => c.type === 'parameter_declaration');\n        if (paramDecl) {\n          const typeNode = paramDecl.childForFieldName?.('type');\n          if (typeNode) {\n            // Unwrap pointer_type (*User → User)\n            const inner = typeNode.type === 'pointer_type' ? typeNode.firstNamedChild : typeNode;\n            if (inner && (inner.type === 'type_identifier' || inner.type === 'identifier')) {\n              return generateId('Struct', `${filePath}:${inner.text}`);\n            }\n          }\n        }\n      }\n    }\n    // Go: type_declaration wrapping a struct_type (type User struct { ... 
})\n    // field_declaration → field_declaration_list → struct_type → type_spec → type_declaration\n    if (current.type === 'type_declaration') {\n      const typeSpec = current.children?.find((c: any) => c.type === 'type_spec');\n      if (typeSpec) {\n        const typeBody = typeSpec.childForFieldName?.('type');\n        if (typeBody?.type === 'struct_type' || typeBody?.type === 'interface_type') {\n          const nameNode = typeSpec.childForFieldName?.('name');\n          if (nameNode) {\n            const label = typeBody.type === 'struct_type' ? 'Struct' : 'Interface';\n            return generateId(label, `${filePath}:${nameNode.text}`);\n          }\n        }\n      }\n    }\n    if (CLASS_CONTAINER_TYPES.has(current.type)) {\n      // Rust impl_item: for `impl Trait for Struct {}`, pick the type after `for`\n      if (current.type === 'impl_item') {\n        const children = current.children ?? [];\n        const forIdx = children.findIndex((c: any) => c.text === 'for');\n        if (forIdx !== -1) {\n          const nameNode = children.slice(forIdx + 1).find((c: any) =>\n            c.type === 'type_identifier' || c.type === 'identifier'\n          );\n          if (nameNode) {\n            return generateId('Impl', `${filePath}:${nameNode.text}`);\n          }\n        }\n        // Fall through: plain `impl Struct {}` — use first type_identifier below\n      }\n      const nameNode = current.childForFieldName?.('name')\n        ?? 
current.children?.find((c: any) =>\n          c.type === 'type_identifier' || c.type === 'identifier' || c.type === 'name' || c.type === 'constant'\n        );\n      if (nameNode) {\n        const label = CONTAINER_TYPE_TO_LABEL[current.type] || 'Class';\n        return generateId(label, `${filePath}:${nameNode.text}`);\n      }\n    }\n    current = current.parent;\n  }\n  return null;\n};\n\n/**\n * Extract function name and label from a function_definition or similar AST node.\n * Handles C/C++ qualified_identifier (ClassName::MethodName) and other language patterns.\n */\nexport const extractFunctionName = (node: SyntaxNode): { funcName: string | null; label: string } => {\n  let funcName: string | null = null;\n  let label = 'Function';\n\n  // Swift init/deinit\n  if (node.type === 'init_declaration' || node.type === 'deinit_declaration') {\n    return {\n      funcName: node.type === 'init_declaration' ? 'init' : 'deinit',\n      label: 'Constructor',\n    };\n  }\n\n  if (FUNCTION_DECLARATION_TYPES.has(node.type)) {\n    // C/C++: function_definition -> [pointer_declarator ->] function_declarator -> qualified_identifier/identifier\n    // Unwrap pointer_declarator / reference_declarator wrappers to reach function_declarator\n    let declarator = node.childForFieldName?.('declarator');\n    if (!declarator) {\n      for (let i = 0; i < node.childCount; i++) {\n        const c = node.child(i);\n        if (c?.type === 'function_declarator') { declarator = c; break; }\n      }\n    }\n    while (declarator && (declarator.type === 'pointer_declarator' || declarator.type === 'reference_declarator')) {\n      let nextDeclarator = declarator.childForFieldName?.('declarator');\n      if (!nextDeclarator) {\n        for (let i = 0; i < declarator.childCount; i++) {\n          const c = declarator.child(i);\n          if (c?.type === 'function_declarator' || c?.type === 'pointer_declarator' || c?.type === 'reference_declarator') { nextDeclarator = c; break; }\n      
  }\n      }\n      declarator = nextDeclarator;\n    }\n    if (declarator) {\n      let innerDeclarator = declarator.childForFieldName?.('declarator');\n      if (!innerDeclarator) {\n        for (let i = 0; i < declarator.childCount; i++) {\n          const c = declarator.child(i);\n          if (c?.type === 'qualified_identifier' || c?.type === 'identifier'\n            || c?.type === 'field_identifier' || c?.type === 'parenthesized_declarator') { innerDeclarator = c; break; }\n        }\n      }\n\n      if (innerDeclarator?.type === 'qualified_identifier') {\n        let nameNode = innerDeclarator.childForFieldName?.('name');\n        if (!nameNode) {\n          for (let i = 0; i < innerDeclarator.childCount; i++) {\n            const c = innerDeclarator.child(i);\n            if (c?.type === 'identifier') { nameNode = c; break; }\n          }\n        }\n        if (nameNode?.text) {\n          funcName = nameNode.text;\n          label = 'Method';\n        }\n      } else if (innerDeclarator?.type === 'identifier' || innerDeclarator?.type === 'field_identifier') {\n        // field_identifier is used for method names inside C++ class bodies\n        funcName = innerDeclarator.text;\n        if (innerDeclarator.type === 'field_identifier') label = 'Method';\n      } else if (innerDeclarator?.type === 'parenthesized_declarator') {\n        let nestedId: SyntaxNode | null = null;\n        for (let i = 0; i < innerDeclarator.childCount; i++) {\n          const c = innerDeclarator.child(i);\n          if (c?.type === 'qualified_identifier' || c?.type === 'identifier') { nestedId = c; break; }\n        }\n        if (nestedId?.type === 'qualified_identifier') {\n          let nameNode = nestedId.childForFieldName?.('name');\n          if (!nameNode) {\n            for (let i = 0; i < nestedId.childCount; i++) {\n              const c = nestedId.child(i);\n              if (c?.type === 'identifier') { nameNode = c; break; }\n            }\n          }\n          
if (nameNode?.text) {\n            funcName = nameNode.text;\n            label = 'Method';\n          }\n        } else if (nestedId?.type === 'identifier') {\n          funcName = nestedId.text;\n        }\n      }\n    }\n\n    // Fallback for other languages (Kotlin and Swift both use simple_identifier)\n    if (!funcName) {\n      let nameNode = node.childForFieldName?.('name');\n      if (!nameNode) {\n        for (let i = 0; i < node.childCount; i++) {\n          const c = node.child(i);\n          if (c?.type === 'identifier' || c?.type === 'property_identifier' || c?.type === 'simple_identifier') { nameNode = c; break; }\n        }\n      }\n      funcName = nameNode?.text;\n    }\n  } else if (node.type === 'impl_item') {\n    let funcItem: SyntaxNode | null = null;\n    for (let i = 0; i < node.childCount; i++) {\n      const c = node.child(i);\n      if (c?.type === 'function_item') { funcItem = c; break; }\n    }\n    if (funcItem) {\n      let nameNode = funcItem.childForFieldName?.('name');\n      if (!nameNode) {\n        for (let i = 0; i < funcItem.childCount; i++) {\n          const c = funcItem.child(i);\n          if (c?.type === 'identifier') { nameNode = c; break; }\n        }\n      }\n      funcName = nameNode?.text;\n      label = 'Method';\n    }\n  } else if (node.type === 'method_definition') {\n    let nameNode = node.childForFieldName?.('name');\n    if (!nameNode) {\n      for (let i = 0; i < node.childCount; i++) {\n        const c = node.child(i);\n        if (c?.type === 'property_identifier') { nameNode = c; break; }\n      }\n    }\n    funcName = nameNode?.text;\n    label = 'Method';\n  } else if (node.type === 'method_declaration' || node.type === 'constructor_declaration') {\n    let nameNode = node.childForFieldName?.('name');\n    if (!nameNode) {\n      for (let i = 0; i < node.childCount; i++) {\n        const c = node.child(i);\n        if (c?.type === 'identifier') { nameNode = c; break; }\n      }\n    
}\n    funcName = nameNode?.text;\n    label = 'Method';\n  } else if (node.type === 'arrow_function' || node.type === 'function_expression') {\n    const parent = node.parent;\n    if (parent?.type === 'variable_declarator') {\n      let nameNode = parent.childForFieldName?.('name');\n      if (!nameNode) {\n        for (let i = 0; i < parent.childCount; i++) {\n          const c = parent.child(i);\n          if (c?.type === 'identifier') { nameNode = c; break; }\n        }\n      }\n      funcName = nameNode?.text;\n    }\n  } else if (node.type === 'method' || node.type === 'singleton_method') {\n    let nameNode = node.childForFieldName?.('name');\n    if (!nameNode) {\n      for (let i = 0; i < node.childCount; i++) {\n        const c = node.child(i);\n        if (c?.type === 'identifier') { nameNode = c; break; }\n      }\n    }\n    funcName = nameNode?.text;\n    label = 'Method';\n  }\n\n  return { funcName, label };\n};\n\n/**\n * Yield control to the event loop so spinners/progress can render.\n * Call periodically in hot loops to prevent UI freezes.\n */\nexport const yieldToEventLoop = (): Promise<void> => new Promise(resolve => setImmediate(resolve));\n\n/** Ruby extensionless filenames recognised as Ruby source */\nconst RUBY_EXTENSIONLESS_FILES = new Set(['Rakefile', 'Gemfile', 'Guardfile', 'Vagrantfile', 'Brewfile']);\n\n/**\n * Find a child of `childType` within a sibling node of `siblingType`.\n * Used for Kotlin AST traversal where visibility_modifier lives inside a modifiers sibling.\n */\nexport const findSiblingChild = (parent: any, siblingType: string, childType: string): any | null => {\n  for (let i = 0; i < parent.childCount; i++) {\n    const sibling = parent.child(i);\n    if (sibling?.type === siblingType) {\n      for (let j = 0; j < sibling.childCount; j++) {\n        const child = sibling.child(j);\n        if (child?.type === childType) return child;\n      }\n    }\n  }\n  return null;\n};\n\n/**\n * Map file extension to 
SupportedLanguage enum\n */\nexport const getLanguageFromFilename = (filename: string): SupportedLanguages | null => {\n  // TypeScript (including TSX)\n  if (filename.endsWith('.tsx')) return SupportedLanguages.TypeScript;\n  if (filename.endsWith('.ts')) return SupportedLanguages.TypeScript;\n  // JavaScript (including JSX)\n  if (filename.endsWith('.jsx')) return SupportedLanguages.JavaScript;\n  if (filename.endsWith('.js')) return SupportedLanguages.JavaScript;\n  // Python\n  if (filename.endsWith('.py')) return SupportedLanguages.Python;\n  // Java\n  if (filename.endsWith('.java')) return SupportedLanguages.Java;\n  // C source files\n  if (filename.endsWith('.c')) return SupportedLanguages.C;\n  // C++ (all common extensions, including .h)\n  // .h is parsed as C++ because tree-sitter-cpp is a strict superset of C, so pure-C\n  // headers parse correctly, and C++ headers (classes, templates) are handled properly.\n  if (filename.endsWith('.cpp') || filename.endsWith('.cc') || filename.endsWith('.cxx') ||\n      filename.endsWith('.h') || filename.endsWith('.hpp') || filename.endsWith('.hxx') || filename.endsWith('.hh')) return SupportedLanguages.CPlusPlus;\n  // C#\n  if (filename.endsWith('.cs')) return SupportedLanguages.CSharp;\n  // Go\n  if (filename.endsWith('.go')) return SupportedLanguages.Go;\n  // Rust\n  if (filename.endsWith('.rs')) return SupportedLanguages.Rust;\n  // Kotlin\n  if (filename.endsWith('.kt') || filename.endsWith('.kts')) return SupportedLanguages.Kotlin;\n  // PHP (all common extensions)\n  if (filename.endsWith('.php') || filename.endsWith('.phtml') ||\n      filename.endsWith('.php3') || filename.endsWith('.php4') ||\n      filename.endsWith('.php5') || filename.endsWith('.php8')) {\n    return SupportedLanguages.PHP;\n  }\n  // Ruby (extensions)\n  if (filename.endsWith('.rb') || filename.endsWith('.rake') || filename.endsWith('.gemspec')) {\n    return SupportedLanguages.Ruby;\n  }\n  // Ruby (extensionless files)\n  const 
basename = filename.split('/').pop() || filename;\n  if (RUBY_EXTENSIONLESS_FILES.has(basename)) {\n    return SupportedLanguages.Ruby;\n  }\n  // Swift (extensions)\n  if (filename.endsWith('.swift')) return SupportedLanguages.Swift;\n  return null;\n};\n\nexport interface MethodSignature {\n  parameterCount: number | undefined;\n  /** Number of required (non-optional, non-default) parameters.\n   *  Only set when fewer than parameterCount — enables range-based arity filtering.\n   *  undefined means all parameters are required (or metadata unavailable). */\n  requiredParameterCount: number | undefined;\n  /** Per-parameter type names extracted via extractSimpleTypeName.\n   *  Only populated for languages with method overloading (Java, Kotlin, C#, C++).\n   *  undefined (not []) when no types are extractable — avoids empty array allocations. */\n  parameterTypes: string[] | undefined;\n  returnType: string | undefined;\n}\n\nconst CALL_ARGUMENT_LIST_TYPES = new Set([\n  'arguments',\n  'argument_list',\n  'value_arguments',\n]);\n\n/**\n * Extract parameter count and return type text from an AST method/function node.\n * Works across languages by looking for common AST patterns.\n */\nexport const extractMethodSignature = (node: SyntaxNode | null | undefined): MethodSignature => {\n  let parameterCount: number | undefined = 0;\n  let requiredCount = 0;\n  let returnType: string | undefined;\n  let isVariadic = false;\n  const paramTypes: string[] = [];\n\n  if (!node) return { parameterCount, requiredParameterCount: undefined, parameterTypes: undefined, returnType };\n\n  const paramListTypes = new Set([\n    'formal_parameters', 'parameters', 'parameter_list',\n    'function_parameters', 'method_parameters', 'function_value_parameters',\n  ]);\n\n  // Node types that indicate variadic/rest parameters\n  const VARIADIC_PARAM_TYPES = new Set([\n    'variadic_parameter_declaration',  // Go: ...string\n    'variadic_parameter',              // Rust: extern \"C\" 
fn(...)\n    'spread_parameter',                // Java: Object... args\n    'list_splat_pattern',              // Python: *args\n    'dictionary_splat_pattern',        // Python: **kwargs\n  ]);\n\n  /** AST node types that represent parameters with default values. */\n  const OPTIONAL_PARAM_TYPES = new Set([\n    'optional_parameter',                // TypeScript, Ruby: (x?: number), (x: number = 5), def f(x = 5)\n    'default_parameter',                 // Python: def f(x=5)\n    'typed_default_parameter',           // Python: def f(x: int = 5)\n    'optional_parameter_declaration',    // C++: void f(int x = 5)\n  ]);\n\n  /** Check if a parameter node has a default value (handles Kotlin, C#, Swift, PHP\n   *  where defaults are expressed as child nodes rather than distinct node types). */\n  const hasDefaultValue = (paramNode: SyntaxNode): boolean => {\n    if (OPTIONAL_PARAM_TYPES.has(paramNode.type)) return true;\n    // C#, Swift, PHP: check for '=' token or equals_value_clause child\n    for (let i = 0; i < paramNode.childCount; i++) {\n      const c = paramNode.child(i);\n      if (!c) continue;\n      if (c.type === '=' || c.type === 'equals_value_clause') return true;\n    }\n    // Kotlin: default values are siblings of the parameter node, not children.\n    // The AST is: parameter, =, <literal>  — all at function_value_parameters level.\n    // Check if the immediately following sibling is '=' (default value separator).\n    const sib = paramNode.nextSibling;\n    if (sib && sib.type === '=') return true;\n    return false;\n  };\n\n  const findParameterList = (current: SyntaxNode): SyntaxNode | null => {\n    for (const child of current.children) {\n      if (paramListTypes.has(child.type)) return child;\n    }\n    for (const child of current.children) {\n      const nested = findParameterList(child);\n      if (nested) return nested;\n    }\n    return null;\n  };\n\n  const parameterList = (\n    paramListTypes.has(node.type) ? 
node                // node itself IS the parameter list (e.g. C# primary constructors)\n      : node.childForFieldName?.('parameters')\n        ?? findParameterList(node)\n  );\n\n  if (parameterList && paramListTypes.has(parameterList.type)) {\n    for (const param of parameterList.namedChildren) {\n      if (param.type === 'comment') continue;\n      if (param.text === 'self' || param.text === '&self' || param.text === '&mut self' ||\n          param.type === 'self_parameter') {\n        continue;\n      }\n      // Kotlin: default values are siblings of the parameter node inside\n      // function_value_parameters, so they appear as named children (e.g.\n      // string_literal, integer_literal, boolean_literal, call_expression).\n      // Skip any named child that isn't a parameter-like or modifier node.\n      if (param.type.endsWith('_literal') || param.type === 'call_expression'\n        || param.type === 'navigation_expression' || param.type === 'prefix_expression'\n        || param.type === 'parenthesized_expression') {\n        continue;\n      }\n      // Check for variadic parameter types\n      if (VARIADIC_PARAM_TYPES.has(param.type)) {\n        isVariadic = true;\n        continue;\n      }\n      // TypeScript/JavaScript: rest parameter — required_parameter containing rest_pattern\n      if (param.type === 'required_parameter' || param.type === 'optional_parameter') {\n        for (const child of param.children) {\n          if (child.type === 'rest_pattern') {\n            isVariadic = true;\n            break;\n          }\n        }\n        if (isVariadic) continue;\n      }\n      // Kotlin: vararg modifier on a regular parameter\n      if (param.type === 'parameter' || param.type === 'formal_parameter') {\n        const prev = param.previousSibling;\n        if (prev?.type === 'parameter_modifiers' && prev.text.includes('vararg')) {\n          isVariadic = true;\n        }\n      }\n      // Extract parameter type name for overload 
disambiguation.\n      // Works for Java (formal_parameter), Kotlin (parameter), C# (parameter),\n      // C++ (parameter_declaration). Uses childForFieldName('type') which is the\n      // standard tree-sitter field for typed parameters across these languages.\n      // Kotlin uses positional children instead of 'type' field — fall back to\n      // searching for user_type/nullable_type/predefined_type children.\n      const paramTypeNode = param.childForFieldName('type');\n      if (paramTypeNode) {\n        const typeName = extractSimpleTypeName(paramTypeNode);\n        paramTypes.push(typeName ?? 'unknown');\n      } else {\n        // Kotlin: parameter → [simple_identifier, user_type|nullable_type]\n        let found = false;\n        for (const child of param.namedChildren) {\n          if (child.type === 'user_type' || child.type === 'nullable_type'\n            || child.type === 'type_identifier' || child.type === 'predefined_type') {\n            const typeName = extractSimpleTypeName(child);\n            paramTypes.push(typeName ?? 'unknown');\n            found = true;\n            break;\n          }\n        }\n        if (!found) paramTypes.push('unknown');\n      }\n      if (!hasDefaultValue(param)) requiredCount++;\n      parameterCount++;\n    }\n    // C/C++: bare `...` token in parameter list (not a named child — check all children)\n    if (!isVariadic) {\n      for (const child of parameterList.children) {\n        if (!child.isNamed && child.text === '...') {\n          isVariadic = true;\n          break;\n        }\n      }\n    }\n  }\n\n  // Return type extraction — language-specific field names\n  // Go: 'result' field is either a type_identifier or parameter_list (multi-return)\n  const goResult = node.childForFieldName?.('result');\n  if (goResult) {\n    if (goResult.type === 'parameter_list') {\n      // Multi-return: extract first parameter's type only (e.g. 
(*User, error) → *User)\n      const firstParam = goResult.firstNamedChild;\n      if (firstParam?.type === 'parameter_declaration') {\n        const typeNode = firstParam.childForFieldName('type');\n        if (typeNode) returnType = typeNode.text;\n      } else if (firstParam) {\n        // Unnamed return types: (string, error) — first child is a bare type node\n        returnType = firstParam.text;\n      }\n    } else {\n      returnType = goResult.text;\n    }\n  }\n\n  // Rust: 'return_type' field — the value IS the type node (e.g. primitive_type, type_identifier).\n  // Skip if the node is a type_annotation (TS/Python), which is handled by the generic loop below.\n  if (!returnType) {\n    const rustReturn = node.childForFieldName?.('return_type');\n    if (rustReturn && rustReturn.type !== 'type_annotation') {\n      returnType = rustReturn.text;\n    }\n  }\n\n  // C/C++: 'type' field on function_definition\n  if (!returnType) {\n    const cppType = node.childForFieldName?.('type');\n    if (cppType && cppType.text !== 'void') {\n      returnType = cppType.text;\n    }\n  }\n\n  // C#: 'returns' field on method_declaration\n  if (!returnType) {\n    const csReturn = node.childForFieldName?.('returns');\n    if (csReturn && csReturn.text !== 'void') {\n      returnType = csReturn.text;\n    }\n  }\n\n  // TS/Rust/Python/C#/Kotlin: type_annotation or return_type child\n  if (!returnType) {\n    for (const child of node.children) {\n      if (child.type === 'type_annotation' || child.type === 'return_type') {\n        const typeNode = child.children.find((c) => c.isNamed);\n        if (typeNode) returnType = typeNode.text;\n      }\n    }\n  }\n\n  // Kotlin: fun getUser(): User — return type is a bare user_type child of\n  // function_declaration. 
The Kotlin grammar does NOT wrap it in type_annotation\n  // or return_type; it appears as a direct child after function_value_parameters.\n  // Note: Kotlin uses function_value_parameters (not a field), so we find it by type.\n  if (!returnType) {\n    let paramsEnd = -1;\n    for (let i = 0; i < node.childCount; i++) {\n      const child = node.child(i);\n      if (!child) continue;\n      if (child.type === 'function_value_parameters' || child.type === 'value_parameters') {\n        paramsEnd = child.endIndex;\n      }\n      if (paramsEnd >= 0 && child.type === 'user_type' && child.startIndex > paramsEnd) {\n        returnType = child.text;\n        break;\n      }\n    }\n  }\n\n  if (isVariadic) parameterCount = undefined;\n\n  // Only include parameterTypes when at least one type was successfully extracted.\n  // Use undefined (not []) to avoid empty array allocations for untyped parameters.\n  const hasTypes = paramTypes.length > 0 && paramTypes.some(t => t !== 'unknown');\n  // Only set requiredParameterCount when it differs from total — saves memory on the common case.\n  const requiredParameterCount = (!isVariadic && requiredCount < (parameterCount ?? 0))\n    ? requiredCount : undefined;\n  return { parameterCount, requiredParameterCount, parameterTypes: hasTypes ? paramTypes : undefined, returnType };\n};\n\n/**\n * Count direct arguments for a call expression across common tree-sitter grammars.\n * Returns undefined when the argument container cannot be located cheaply.\n */\nexport const countCallArguments = (callNode: SyntaxNode | null | undefined): number | undefined => {\n  if (!callNode) return undefined;\n\n  // Direct field or direct child (most languages)\n  let argsNode: SyntaxNode | null | undefined = callNode.childForFieldName('arguments')\n    ?? 
callNode.children.find((child) => CALL_ARGUMENT_LIST_TYPES.has(child.type));\n\n  // Kotlin/Swift: call_expression → call_suffix → value_arguments\n  // Search one level deeper for languages that wrap arguments in a suffix node\n  if (!argsNode) {\n    for (const child of callNode.children) {\n      if (!child.isNamed) continue;\n      const nested = child.children.find((gc) => CALL_ARGUMENT_LIST_TYPES.has(gc.type));\n      if (nested) { argsNode = nested; break; }\n    }\n  }\n\n  if (!argsNode) return undefined;\n\n  let count = 0;\n  for (const child of argsNode.children) {\n    if (!child.isNamed) continue;\n    if (child.type === 'comment') continue;\n    count++;\n  }\n\n  return count;\n};\n\n// ── Call-form discrimination (Phase 1, Step D) ─────────────────────────\n\n/**\n * AST node types that indicate a member-access wrapper around the callee name.\n * When nameNode.parent.type is one of these, the call is a member call.\n */\nconst MEMBER_ACCESS_NODE_TYPES = new Set([\n  'member_expression',           // TS/JS: obj.method()\n  'attribute',                   // Python: obj.method()\n  'member_access_expression',    // C#: obj.Method()\n  'field_expression',            // Rust/C++: obj.method() / ptr->method()\n  'selector_expression',         // Go: obj.Method()\n  'navigation_suffix',           // Kotlin/Swift: obj.method() — nameNode sits inside navigation_suffix\n  'member_binding_expression',   // C#: user?.Method() — null-conditional access\n]);\n\n/**\n * Call node types that are inherently constructor invocations.\n * Only includes patterns that the tree-sitter queries already capture as @call.\n */\nconst CONSTRUCTOR_CALL_NODE_TYPES = new Set([\n  'constructor_invocation',              // Kotlin: Foo()\n  'new_expression',                      // TS/JS/C++: new Foo()\n  'object_creation_expression',          // Java/C#/PHP: new Foo()\n  'implicit_object_creation_expression', // C# 9: User u = new(...)\n  'composite_literal',                   // 
Go: User{...}\n  'struct_expression',                   // Rust: User { ... }\n]);\n\n/**\n * AST node types for scoped/qualified calls (e.g., Foo::new() in Rust, Foo::bar() in C++).\n */\nconst SCOPED_CALL_NODE_TYPES = new Set([\n  'scoped_identifier',           // Rust: Foo::new()\n  'qualified_identifier',        // C++: ns::func()\n]);\n\ntype CallForm = 'free' | 'member' | 'constructor';\n\n/**\n * Infer whether a captured call site is a free call, member call, or constructor.\n * Returns undefined if the form cannot be determined.\n *\n * Works by inspecting the AST structure between callNode (@call) and nameNode (@call.name).\n * No tree-sitter query changes needed — the distinction is in the node types.\n */\nexport const inferCallForm = (\n  callNode: SyntaxNode,\n  nameNode: SyntaxNode,\n): CallForm | undefined => {\n  // 1. Constructor: callNode itself is a constructor invocation (Kotlin)\n  if (CONSTRUCTOR_CALL_NODE_TYPES.has(callNode.type)) {\n    return 'constructor';\n  }\n\n  // 2. Member call: nameNode's parent is a member-access wrapper\n  const nameParent = nameNode.parent;\n  if (nameParent && MEMBER_ACCESS_NODE_TYPES.has(nameParent.type)) {\n    return 'member';\n  }\n\n  // 3. PHP: the callNode itself distinguishes member vs free calls\n  if (callNode.type === 'member_call_expression' || callNode.type === 'nullsafe_member_call_expression') {\n    return 'member';\n  }\n  if (callNode.type === 'scoped_call_expression') {\n    return 'member'; // static call Foo::bar()\n  }\n\n  // 4. Java method_invocation: member if it has an 'object' field\n  if (callNode.type === 'method_invocation' && callNode.childForFieldName('object')) {\n    return 'member';\n  }\n\n  // 4b. Ruby call with receiver: obj.method\n  if (callNode.type === 'call' && callNode.childForFieldName('receiver')) {\n    return 'member';\n  }\n\n  // 5. 
Scoped calls (Rust Foo::new(), C++ ns::func()): treat as free\n  //    The receiver is a type, not an instance — handled differently in Phase 3\n  if (nameParent && SCOPED_CALL_NODE_TYPES.has(nameParent.type)) {\n    return 'free';\n  }\n\n  // 6. Default: if nameNode is a direct child of callNode, it's a free call\n  if (nameNode.parent === callNode || nameParent?.parent === callNode) {\n    return 'free';\n  }\n\n  return undefined;\n};\n\n/**\n * Extract the receiver identifier for member calls.\n * Only captures simple identifiers — returns undefined for complex expressions\n * like getUser().save() or arr[0].method().\n */\nconst SIMPLE_RECEIVER_TYPES = new Set([\n  'identifier',\n  'simple_identifier',\n  'variable_name',     // PHP $variable (tree-sitter-php)\n  'name',              // PHP name node\n  'this',              // TS/JS/Java/C# this.method()\n  'self',              // Rust/Python self.method()\n  'super',             // TS/JS/Java/Kotlin/Ruby super.method()\n  'super_expression',  // Kotlin wraps super in super_expression\n  'base',              // C# base.Method()\n  'parent',            // PHP parent::method()\n  'constant',          // Ruby CONSTANT.method() (uppercase identifiers)\n]);\n\nexport const extractReceiverName = (\n  nameNode: SyntaxNode,\n): string | undefined => {\n  const parent = nameNode.parent;\n  if (!parent) return undefined;\n\n  // PHP: member_call_expression / nullsafe_member_call_expression — receiver is on the callNode\n  // Java: method_invocation — receiver is the 'object' field on callNode\n  // For these, parent of nameNode is the call itself, so check the call's object field\n  const callNode = parent.parent ?? parent;\n\n  let receiver: SyntaxNode | null = null;\n\n  // Try standard field names used across grammars\n  receiver = parent.childForFieldName('object')       // TS/JS member_expression, Python attribute, PHP, Java\n    ?? parent.childForFieldName('value')               // Rust field_expression\n    ?? 
parent.childForFieldName('operand')             // Go selector_expression\n    ?? parent.childForFieldName('expression')          // C# member_access_expression\n    ?? parent.childForFieldName('argument');            // C++ field_expression\n\n  // Java method_invocation: 'object' field is on the callNode, not on nameNode's parent\n  if (!receiver && callNode.type === 'method_invocation') {\n    receiver = callNode.childForFieldName('object');\n  }\n\n  // PHP: member_call_expression has 'object' on the call node\n  if (!receiver && (callNode.type === 'member_call_expression' || callNode.type === 'nullsafe_member_call_expression')) {\n    receiver = callNode.childForFieldName('object');\n  }\n\n  // Ruby: call node has 'receiver' field\n  if (!receiver && parent.type === 'call') {\n    receiver = parent.childForFieldName('receiver');\n  }\n\n  // PHP scoped_call_expression (parent::method(), self::method()):\n  // nameNode's direct parent IS the scoped_call_expression (name is a direct child)\n  if (!receiver && (parent.type === 'scoped_call_expression' || callNode.type === 'scoped_call_expression')) {\n    const scopedCall = parent.type === 'scoped_call_expression' ? 
parent : callNode;\n    receiver = scopedCall.childForFieldName('scope');\n    // relative_scope wraps 'parent'/'self'/'static' — unwrap to get the keyword\n    if (receiver?.type === 'relative_scope') {\n      receiver = receiver.firstChild;\n    }\n  }\n\n  // C# null-conditional: user?.Save() → conditional_access_expression wraps member_binding_expression\n  if (!receiver && parent.type === 'member_binding_expression') {\n    const condAccess = parent.parent;\n    if (condAccess?.type === 'conditional_access_expression') {\n      receiver = condAccess.firstNamedChild;\n    }\n  }\n\n  // Kotlin/Swift: navigation_expression target is the first child\n  if (!receiver && parent.type === 'navigation_suffix') {\n    const navExpr = parent.parent;\n    if (navExpr?.type === 'navigation_expression') {\n      // First named child is the target (receiver)\n      for (const child of navExpr.children) {\n        if (child.isNamed && child !== parent) {\n          receiver = child;\n          break;\n        }\n      }\n    }\n  }\n\n  if (!receiver) return undefined;\n\n  // Only capture simple identifiers — refuse complex expressions\n  if (SIMPLE_RECEIVER_TYPES.has(receiver.type)) {\n    return receiver.text;\n  }\n\n  // Python super().method(): receiver is a call node `super()` — extract the function name\n  if (receiver.type === 'call') {\n    const func = receiver.childForFieldName('function');\n    if (func?.text === 'super') return 'super';\n  }\n\n  return undefined;\n};\n\n/**\n * Extract the raw receiver AST node for a member call.\n * Unlike extractReceiverName, this returns the receiver node regardless of its type —\n * including call_expression / method_invocation nodes that appear in chained calls\n * like `svc.getUser().save()`.\n *\n * Returns undefined when the call is not a member call or when no receiver node\n * can be found (e.g. 
top-level free calls).\n */\nexport const extractReceiverNode = (\n  nameNode: SyntaxNode,\n): SyntaxNode | undefined => {\n  const parent = nameNode.parent;\n  if (!parent) return undefined;\n\n  const callNode = parent.parent ?? parent;\n\n  let receiver: SyntaxNode | null = null;\n\n  receiver = parent.childForFieldName('object')\n    ?? parent.childForFieldName('value')\n    ?? parent.childForFieldName('operand')\n    ?? parent.childForFieldName('expression')\n    ?? parent.childForFieldName('argument');\n\n  if (!receiver && callNode.type === 'method_invocation') {\n    receiver = callNode.childForFieldName('object');\n  }\n\n  if (!receiver && (callNode.type === 'member_call_expression' || callNode.type === 'nullsafe_member_call_expression')) {\n    receiver = callNode.childForFieldName('object');\n  }\n\n  if (!receiver && parent.type === 'call') {\n    receiver = parent.childForFieldName('receiver');\n  }\n\n  if (!receiver && (parent.type === 'scoped_call_expression' || callNode.type === 'scoped_call_expression')) {\n    const scopedCall = parent.type === 'scoped_call_expression' ? parent : callNode;\n    receiver = scopedCall.childForFieldName('scope');\n    if (receiver?.type === 'relative_scope') {\n      receiver = receiver.firstChild;\n    }\n  }\n\n  if (!receiver && parent.type === 'member_binding_expression') {\n    const condAccess = parent.parent;\n    if (condAccess?.type === 'conditional_access_expression') {\n      receiver = condAccess.firstNamedChild;\n    }\n  }\n\n  if (!receiver && parent.type === 'navigation_suffix') {\n    const navExpr = parent.parent;\n    if (navExpr?.type === 'navigation_expression') {\n      for (const child of navExpr.children) {\n        if (child.isNamed && child !== parent) {\n          receiver = child;\n          break;\n        }\n      }\n    }\n  }\n\n  return receiver ?? 
undefined;\n};\n\nexport const isVerboseIngestionEnabled = (): boolean => {\n  const raw = process.env.GITNEXUS_VERBOSE;\n  if (!raw) return false;\n  const value = raw.toLowerCase();\n  return value === '1' || value === 'true' || value === 'yes';\n};\n\n// ── Chained-call extraction ───────────────────────────────────────────────\n\n/** Node types representing call expressions across supported languages. */\nexport const CALL_EXPRESSION_TYPES = new Set([\n  'call_expression',                   // TS/JS/C/C++/Go/Rust\n  'method_invocation',                 // Java\n  'member_call_expression',            // PHP\n  'nullsafe_member_call_expression',   // PHP ?.\n  'call',                              // Python/Ruby\n  'invocation_expression',             // C#\n]);\n\n/**\n * Hard limit on chain depth to prevent runaway recursion.\n * For `a.b().c().d()`, the chain has depth 2 (b and c before d).\n */\nexport const MAX_CHAIN_DEPTH = 3;\n\n/**\n * Walk a receiver AST node that is itself a call expression, accumulating the\n * chain of intermediate method names up to MAX_CHAIN_DEPTH.\n *\n * For `svc.getUser().save()`, called with the receiver of `save` (getUser() call):\n *   returns { chain: ['getUser'], baseReceiverName: 'svc' }\n *\n * For `a.b().c().d()`, called with the receiver of `d` (c() call):\n *   returns { chain: ['b', 'c'], baseReceiverName: 'a' }\n */\nexport function extractCallChain(\n  receiverCallNode: SyntaxNode,\n): { chain: string[]; baseReceiverName: string | undefined } | undefined {\n  const chain: string[] = [];\n  let current: SyntaxNode = receiverCallNode;\n\n  while (CALL_EXPRESSION_TYPES.has(current.type) && chain.length < MAX_CHAIN_DEPTH) {\n    // Extract the method name from this call node.\n    const funcNode = current.childForFieldName?.('function')\n      ?? current.childForFieldName?.('name')\n      ?? 
current.childForFieldName?.('method');  // Ruby `call` node\n    let methodName: string | undefined;\n    let innerReceiver: SyntaxNode | null = null;\n    if (funcNode) {\n      // member_expression / attribute: last named child is the method identifier\n      methodName = funcNode.lastNamedChild?.text ?? funcNode.text;\n    }\n    // Kotlin/Swift: call_expression exposes callee as firstNamedChild, not a field.\n    // navigation_expression: method name is in navigation_suffix → simple_identifier.\n    if (!funcNode && current.type === 'call_expression') {\n      const callee = current.firstNamedChild;\n      if (callee?.type === 'navigation_expression') {\n        const suffix = callee.lastNamedChild;\n        if (suffix?.type === 'navigation_suffix') {\n          methodName = suffix.lastNamedChild?.text;\n          // The receiver is the part of navigation_expression before the suffix\n          for (let i = 0; i < callee.namedChildCount; i++) {\n            const child = callee.namedChild(i);\n            if (child && child.type !== 'navigation_suffix') {\n              innerReceiver = child;\n              break;\n            }\n          }\n        }\n      }\n    }\n    if (!methodName) break;\n    chain.unshift(methodName); // build chain outermost-last\n\n    // Walk into the receiver of this call to continue the chain\n    if (!innerReceiver && funcNode) {\n      innerReceiver = funcNode.childForFieldName?.('object')\n        ?? funcNode.childForFieldName?.('value')\n        ?? funcNode.childForFieldName?.('operand')\n        ?? 
funcNode.childForFieldName?.('expression');\n    }\n    // Java method_invocation: object field is on the call node\n    if (!innerReceiver && current.type === 'method_invocation') {\n      innerReceiver = current.childForFieldName?.('object');\n    }\n    // PHP member_call_expression\n    if (!innerReceiver && (current.type === 'member_call_expression' || current.type === 'nullsafe_member_call_expression')) {\n      innerReceiver = current.childForFieldName?.('object');\n    }\n    // Ruby `call` node: receiver field is on the call node itself\n    if (!innerReceiver && current.type === 'call') {\n      innerReceiver = current.childForFieldName?.('receiver');\n    }\n\n    if (!innerReceiver) break;\n\n    if (CALL_EXPRESSION_TYPES.has(innerReceiver.type)) {\n      current = innerReceiver; // continue walking\n    } else {\n      // Reached a simple identifier — the base receiver\n      return { chain, baseReceiverName: innerReceiver.text || undefined };\n    }\n  }\n\n  return chain.length > 0 ? { chain, baseReceiverName: undefined } : undefined;\n}\n\n/** Node types representing member/field access across languages. */\nconst FIELD_ACCESS_NODE_TYPES = new Set([\n  'member_expression',           // TS/JS\n  'member_access_expression',    // C#\n  'selector_expression',         // Go\n  'field_expression',            // Rust/C++\n  'field_access',                // Java\n  'attribute',                   // Python\n  'navigation_expression',       // Kotlin/Swift\n  'member_binding_expression',   // C# null-conditional (user?.Address)\n]);\n\n/** One step in a mixed receiver chain. 
*/\nexport type MixedChainStep = { kind: 'field' | 'call'; name: string };\n\n/**\n * Walk a receiver AST node that may interleave field accesses and method calls,\n * building a unified chain of steps up to MAX_CHAIN_DEPTH.\n *\n * For `svc.getUser().address.save()`, called with the receiver of `save`\n * (`svc.getUser().address`, a field access node):\n *   returns { chain: [{ kind:'call', name:'getUser' }, { kind:'field', name:'address' }],\n *             baseReceiverName: 'svc' }\n *\n * For `user.getAddress().city.getName()`, called with receiver of `getName`\n * (`user.getAddress().city`):\n *   returns { chain: [{ kind:'call', name:'getAddress' }, { kind:'field', name:'city' }],\n *             baseReceiverName: 'user' }\n *\n * Pure field chains and pure call chains are special cases (all steps same kind).\n */\nexport function extractMixedChain(\n  receiverNode: SyntaxNode,\n): { chain: MixedChainStep[]; baseReceiverName: string | undefined } | undefined {\n  const chain: MixedChainStep[] = [];\n  let current: SyntaxNode = receiverNode;\n\n  while (chain.length < MAX_CHAIN_DEPTH) {\n    if (CALL_EXPRESSION_TYPES.has(current.type)) {\n      // ── Call expression: extract method name + inner receiver ────────────\n      const funcNode = current.childForFieldName?.('function')\n        ?? current.childForFieldName?.('name')\n        ?? current.childForFieldName?.('method');\n      let methodName: string | undefined;\n      let innerReceiver: SyntaxNode | null = null;\n\n      if (funcNode) {\n        methodName = funcNode.lastNamedChild?.text ?? 
funcNode.text;\n      }\n      // Kotlin/Swift: call_expression → navigation_expression\n      if (!funcNode && current.type === 'call_expression') {\n        const callee = current.firstNamedChild;\n        if (callee?.type === 'navigation_expression') {\n          const suffix = callee.lastNamedChild;\n          if (suffix?.type === 'navigation_suffix') {\n            methodName = suffix.lastNamedChild?.text;\n            for (let i = 0; i < callee.namedChildCount; i++) {\n              const child = callee.namedChild(i);\n              if (child && child.type !== 'navigation_suffix') { innerReceiver = child; break; }\n            }\n          }\n        }\n      }\n      if (!methodName) break;\n      chain.unshift({ kind: 'call', name: methodName });\n\n      if (!innerReceiver && funcNode) {\n        innerReceiver = funcNode.childForFieldName?.('object')\n          ?? funcNode.childForFieldName?.('value')\n          ?? funcNode.childForFieldName?.('operand')\n          ?? funcNode.childForFieldName?.('argument')    // C/C++ field_expression\n          ?? funcNode.childForFieldName?.('expression')\n          ?? null;\n      }\n      if (!innerReceiver && current.type === 'method_invocation') {\n        innerReceiver = current.childForFieldName?.('object') ?? null;\n      }\n      if (!innerReceiver && (current.type === 'member_call_expression' || current.type === 'nullsafe_member_call_expression')) {\n        innerReceiver = current.childForFieldName?.('object') ?? null;\n      }\n      if (!innerReceiver && current.type === 'call') {\n        innerReceiver = current.childForFieldName?.('receiver') ?? 
null;\n      }\n      if (!innerReceiver) break;\n\n      if (CALL_EXPRESSION_TYPES.has(innerReceiver.type) || FIELD_ACCESS_NODE_TYPES.has(innerReceiver.type)) {\n        current = innerReceiver;\n      } else {\n        return { chain, baseReceiverName: innerReceiver.text || undefined };\n      }\n    } else if (FIELD_ACCESS_NODE_TYPES.has(current.type)) {\n      // ── Field/member access: extract property name + inner object ─────────\n      let propertyName: string | undefined;\n      let innerObject: SyntaxNode | null = null;\n\n      if (current.type === 'navigation_expression') {\n        for (const child of current.children ?? []) {\n          if (child.type === 'navigation_suffix') {\n            for (const sc of child.children ?? []) {\n              if (sc.isNamed && sc.type !== '.') { propertyName = sc.text; break; }\n            }\n          } else if (child.isNamed && !innerObject) {\n            innerObject = child;\n          }\n        }\n      } else if (current.type === 'attribute') {\n        innerObject = current.childForFieldName?.('object') ?? null;\n        propertyName = current.childForFieldName?.('attribute')?.text;\n      } else {\n        innerObject = current.childForFieldName?.('object')\n          ?? current.childForFieldName?.('value')\n          ?? current.childForFieldName?.('operand')\n          ?? current.childForFieldName?.('argument')    // C/C++ field_expression\n          ?? current.childForFieldName?.('expression')\n          ?? null;\n        propertyName = (current.childForFieldName?.('property')\n          ?? current.childForFieldName?.('field')\n          ?? 
current.childForFieldName?.('name'))?.text;\n      }\n\n      if (!propertyName) break;\n      chain.unshift({ kind: 'field', name: propertyName });\n\n      if (!innerObject) break;\n\n      if (CALL_EXPRESSION_TYPES.has(innerObject.type) || FIELD_ACCESS_NODE_TYPES.has(innerObject.type)) {\n        current = innerObject;\n      } else {\n        return { chain, baseReceiverName: innerObject.text || undefined };\n      }\n    } else {\n      // Simple identifier — this is the base receiver\n      return chain.length > 0\n        ? { chain, baseReceiverName: current.text || undefined }\n        : undefined;\n    }\n  }\n\n  return chain.length > 0 ? { chain, baseReceiverName: undefined } : undefined;\n}\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/workers/parse-worker.ts",
    "content": "import { parentPort } from 'node:worker_threads';\nimport Parser from 'tree-sitter';\nimport JavaScript from 'tree-sitter-javascript';\nimport TypeScript from 'tree-sitter-typescript';\nimport Python from 'tree-sitter-python';\nimport Java from 'tree-sitter-java';\nimport C from 'tree-sitter-c';\nimport CPP from 'tree-sitter-cpp';\nimport CSharp from 'tree-sitter-c-sharp';\nimport Go from 'tree-sitter-go';\nimport Rust from 'tree-sitter-rust';\nimport PHP from 'tree-sitter-php';\nimport Ruby from 'tree-sitter-ruby';\nimport { createRequire } from 'node:module';\nimport { SupportedLanguages } from '../../../config/supported-languages.js';\nimport { LANGUAGE_QUERIES } from '../tree-sitter-queries.js';\nimport { getTreeSitterBufferSize, TREE_SITTER_MAX_BUFFER } from '../constants.js';\n\n// tree-sitter-swift is an optionalDependency — may not be installed\nconst _require = createRequire(import.meta.url);\nlet Swift: any = null;\ntry { Swift = _require('tree-sitter-swift'); } catch {}\n\n// tree-sitter-kotlin is an optionalDependency — may not be installed\nlet Kotlin: any = null;\ntry { Kotlin = _require('tree-sitter-kotlin'); } catch {}\nimport {\n  getLanguageFromFilename,\n  FUNCTION_NODE_TYPES,\n  extractFunctionName,\n  isBuiltInOrNoise,\n  getDefinitionNodeFromCaptures,\n  findEnclosingClassId,\n  extractMethodSignature,\n  countCallArguments,\n  inferCallForm,\n  extractReceiverName,\n  extractReceiverNode,\n  extractMixedChain,\n  type MixedChainStep,\n} from '../utils.js';\nimport { buildTypeEnv } from '../type-env.js';\nimport type { ConstructorBinding } from '../type-env.js';\nimport { isNodeExported } from '../export-detection.js';\nimport { detectFrameworkFromAST } from '../framework-detection.js';\nimport { typeConfigs } from '../type-extractors/index.js';\nimport { generateId } from '../../../lib/utils.js';\nimport { extractNamedBindings } from '../named-binding-extraction.js';\nimport { appendKotlinWildcard } from 
'../resolvers/index.js';\nimport { callRouters } from '../call-routing.js';\nimport { extractPropertyDeclaredType } from '../type-extractors/shared.js';\nimport type { NodeLabel } from '../../graph/types.js';\n\n// ============================================================================\n// Types for serializable results\n// ============================================================================\n\ninterface ParsedNode {\n  id: string;\n  label: string;\n  properties: {\n    name: string;\n    filePath: string;\n    startLine: number;\n    endLine: number;\n    language: SupportedLanguages;\n    isExported: boolean;\n    astFrameworkMultiplier?: number;\n    astFrameworkReason?: string;\n    description?: string;\n    parameterCount?: number;\n    requiredParameterCount?: number;\n    returnType?: string;\n  };\n}\n\ninterface ParsedRelationship {\n  id: string;\n  sourceId: string;\n  targetId: string;\n  type: 'DEFINES' | 'HAS_METHOD' | 'HAS_PROPERTY';\n  confidence: number;\n  reason: string;\n}\n\ninterface ParsedSymbol {\n  filePath: string;\n  name: string;\n  nodeId: string;\n  type: NodeLabel;\n  parameterCount?: number;\n  requiredParameterCount?: number;\n  parameterTypes?: string[];\n  returnType?: string;\n  declaredType?: string;\n  ownerId?: string;\n}\n\nexport interface ExtractedImport {\n  filePath: string;\n  rawImportPath: string;\n  language: SupportedLanguages;\n  /** Named bindings from the import (e.g., import {User as U} → [{local:'U', exported:'User'}]) */\n  namedBindings?: { local: string; exported: string }[];\n}\n\nexport interface ExtractedCall {\n  filePath: string;\n  calledName: string;\n  /** generateId of enclosing function, or generateId('File', filePath) for top-level */\n  sourceId: string;\n  argCount?: number;\n  /** Discriminates free function calls from member/constructor calls */\n  callForm?: 'free' | 'member' | 'constructor';\n  /** Simple identifier of the receiver for member calls (e.g., 'user' in user.save()) 
*/\n  receiverName?: string;\n  /** Resolved type name of the receiver (e.g., 'User' for user.save() when user: User) */\n  receiverTypeName?: string;\n  /**\n   * Unified mixed chain when the receiver is a chain of field accesses and/or method calls.\n   * Steps are ordered base-first (innermost to outermost). Examples:\n   *   `svc.getUser().save()`        → chain=[{kind:'call',name:'getUser'}], receiverName='svc'\n   *   `user.address.save()`         → chain=[{kind:'field',name:'address'}], receiverName='user'\n   *   `svc.getUser().address.save()` → chain=[{kind:'call',name:'getUser'},{kind:'field',name:'address'}]\n   * Length is capped at MAX_CHAIN_DEPTH (3).\n   */\n  receiverMixedChain?: MixedChainStep[];\n}\n\nexport interface ExtractedAssignment {\n  filePath: string;\n  /** generateId of enclosing function, or generateId('File', filePath) for top-level */\n  sourceId: string;\n  /** Receiver text (e.g., 'user' from user.address = value) */\n  receiverText: string;\n  /** Property name being written (e.g., 'address') */\n  propertyName: string;\n  /** Resolved type name of the receiver if available from TypeEnv */\n  receiverTypeName?: string;\n}\n\nexport interface ExtractedHeritage {\n  filePath: string;\n  className: string;\n  parentName: string;\n  /** 'extends' | 'implements' | 'trait-impl' | 'include' | 'extend' | 'prepend' */\n  kind: string;\n}\n\nexport interface ExtractedRoute {\n  filePath: string;\n  httpMethod: string;\n  routePath: string | null;\n  controllerName: string | null;\n  methodName: string | null;\n  middleware: string[];\n  prefix: string | null;\n  lineNumber: number;\n}\n\n/** Constructor bindings keyed by filePath for cross-file type resolution */\nexport interface FileConstructorBindings {\n  filePath: string;\n  bindings: ConstructorBinding[];\n}\n\nexport interface ParseWorkerResult {\n  nodes: ParsedNode[];\n  relationships: ParsedRelationship[];\n  symbols: ParsedSymbol[];\n  imports: ExtractedImport[];\n  calls: 
ExtractedCall[];\n  assignments: ExtractedAssignment[];\n  heritage: ExtractedHeritage[];\n  routes: ExtractedRoute[];\n  constructorBindings: FileConstructorBindings[];\n  skippedLanguages: Record<string, number>;\n  fileCount: number;\n}\n\nexport interface ParseWorkerInput {\n  path: string;\n  content: string;\n}\n\n// ============================================================================\n// Worker-local parser + language map\n// ============================================================================\n\nconst parser = new Parser();\n\nconst languageMap: Record<string, any> = {\n  [SupportedLanguages.JavaScript]: JavaScript,\n  [SupportedLanguages.TypeScript]: TypeScript.typescript,\n  [`${SupportedLanguages.TypeScript}:tsx`]: TypeScript.tsx,\n  [SupportedLanguages.Python]: Python,\n  [SupportedLanguages.Java]: Java,\n  [SupportedLanguages.C]: C,\n  [SupportedLanguages.CPlusPlus]: CPP,\n  [SupportedLanguages.CSharp]: CSharp,\n  [SupportedLanguages.Go]: Go,\n  [SupportedLanguages.Rust]: Rust,\n  ...(Kotlin ? { [SupportedLanguages.Kotlin]: Kotlin } : {}),\n  [SupportedLanguages.PHP]: PHP.php_only,\n  [SupportedLanguages.Ruby]: Ruby,\n  ...(Swift ? { [SupportedLanguages.Swift]: Swift } : {}),\n};\n\n/**\n * Check if a language grammar is available in this worker.\n * Duplicated from parser-loader.ts because workers can't import from the main thread.\n * Extra filePath parameter needed to distinguish .tsx from .ts (different grammars\n * under the same SupportedLanguages.TypeScript key).\n */\nconst isLanguageAvailable = (language: SupportedLanguages, filePath: string): boolean => {\n  const key = language === SupportedLanguages.TypeScript && filePath.endsWith('.tsx')\n    ? `${language}:tsx`\n    : language;\n  return key in languageMap && languageMap[key] != null;\n};\n\nconst setLanguage = (language: SupportedLanguages, filePath: string): void => {\n  const key = language === SupportedLanguages.TypeScript && filePath.endsWith('.tsx')\n    ? 
`${language}:tsx`\n    : language;\n  const lang = languageMap[key];\n  if (!lang) throw new Error(`Unsupported language: ${language}`);\n  parser.setLanguage(lang);\n};\n\n// isNodeExported imported from ../export-detection.js (shared module)\n\n// ============================================================================\n// Enclosing function detection (for call extraction)\n// ============================================================================\n\n/** Walk up AST to find enclosing function, return its generateId or null for top-level */\nconst findEnclosingFunctionId = (node: any, filePath: string): string | null => {\n  let current = node.parent;\n  while (current) {\n    if (FUNCTION_NODE_TYPES.has(current.type)) {\n      const { funcName, label } = extractFunctionName(current);\n      if (funcName) {\n        return generateId(label, `${filePath}:${funcName}`);\n      }\n    }\n    current = current.parent;\n  }\n  return null;\n};\n\n// ============================================================================\n// Label detection from capture map\n// ============================================================================\n\nconst getLabelFromCaptures = (captureMap: Record<string, any>): NodeLabel | null => {\n  // Skip imports (handled separately) and calls\n  if (captureMap['import'] || captureMap['call']) return null;\n  if (!captureMap['name']) return null;\n\n  if (captureMap['definition.function']) return 'Function';\n  if (captureMap['definition.class']) return 'Class';\n  if (captureMap['definition.interface']) return 'Interface';\n  if (captureMap['definition.method']) return 'Method';\n  if (captureMap['definition.struct']) return 'Struct';\n  if (captureMap['definition.enum']) return 'Enum';\n  if (captureMap['definition.namespace']) return 'Namespace';\n  if (captureMap['definition.module']) return 'Module';\n  if (captureMap['definition.trait']) return 'Trait';\n  if (captureMap['definition.impl']) return 'Impl';\n  if 
(captureMap['definition.type']) return 'TypeAlias';\n  if (captureMap['definition.const']) return 'Const';\n  if (captureMap['definition.static']) return 'Static';\n  if (captureMap['definition.typedef']) return 'Typedef';\n  if (captureMap['definition.macro']) return 'Macro';\n  if (captureMap['definition.union']) return 'Union';\n  if (captureMap['definition.property']) return 'Property';\n  if (captureMap['definition.record']) return 'Record';\n  if (captureMap['definition.delegate']) return 'Delegate';\n  if (captureMap['definition.annotation']) return 'Annotation';\n  if (captureMap['definition.constructor']) return 'Constructor';\n  if (captureMap['definition.template']) return 'Template';\n  return 'CodeElement';\n};\n\n// DEFINITION_CAPTURE_KEYS and getDefinitionNodeFromCaptures imported from ../utils.js\n\n\n// ============================================================================\n// Process a batch of files\n// ============================================================================\n\nconst processBatch = (files: ParseWorkerInput[], onProgress?: (filesProcessed: number) => void): ParseWorkerResult => {\n  const result: ParseWorkerResult = {\n    nodes: [],\n    relationships: [],\n    symbols: [],\n    imports: [],\n    calls: [],\n    assignments: [],\n    heritage: [],\n    routes: [],\n    constructorBindings: [],\n    skippedLanguages: {},\n    fileCount: 0,\n  };\n\n  // Group by language to minimize setLanguage calls\n  const byLanguage = new Map<SupportedLanguages, ParseWorkerInput[]>();\n  for (const file of files) {\n    const lang = getLanguageFromFilename(file.path);\n    if (!lang) continue;\n    let list = byLanguage.get(lang);\n    if (!list) {\n      list = [];\n      byLanguage.set(lang, list);\n    }\n    list.push(file);\n  }\n\n  let totalProcessed = 0;\n  let lastReported = 0;\n  const PROGRESS_INTERVAL = 100; // report every 100 files\n\n  const onFileProcessed = onProgress ? 
() => {\n    totalProcessed++;\n    if (totalProcessed - lastReported >= PROGRESS_INTERVAL) {\n      lastReported = totalProcessed;\n      onProgress(totalProcessed);\n    }\n  } : undefined;\n\n  for (const [language, langFiles] of byLanguage) {\n    const queryString = LANGUAGE_QUERIES[language];\n    if (!queryString) continue;\n\n    // Track if we need to handle tsx separately\n    const tsxFiles: ParseWorkerInput[] = [];\n    const regularFiles: ParseWorkerInput[] = [];\n\n    if (language === SupportedLanguages.TypeScript) {\n      for (const f of langFiles) {\n        if (f.path.endsWith('.tsx')) {\n          tsxFiles.push(f);\n        } else {\n          regularFiles.push(f);\n        }\n      }\n    } else {\n      regularFiles.push(...langFiles);\n    }\n\n    // Process regular files for this language\n    if (regularFiles.length > 0) {\n      if (isLanguageAvailable(language, regularFiles[0].path)) {\n        try {\n          setLanguage(language, regularFiles[0].path);\n          processFileGroup(regularFiles, language, queryString, result, onFileProcessed);\n        } catch {\n          // parser unavailable — skip this language group\n        }\n      } else {\n        result.skippedLanguages[language] = (result.skippedLanguages[language] || 0) + regularFiles.length;\n      }\n    }\n\n    // Process tsx files separately (different grammar)\n    if (tsxFiles.length > 0) {\n      if (isLanguageAvailable(language, tsxFiles[0].path)) {\n        try {\n          setLanguage(language, tsxFiles[0].path);\n          processFileGroup(tsxFiles, language, queryString, result, onFileProcessed);\n        } catch {\n          // parser unavailable — skip this language group\n        }\n      } else {\n        result.skippedLanguages[language] = (result.skippedLanguages[language] || 0) + tsxFiles.length;\n      }\n    }\n  }\n\n  return result;\n};\n\n// ============================================================================\n// PHP Eloquent metadata 
extraction\n// ============================================================================\n\n/** Eloquent model properties whose array values are worth indexing */\nconst ELOQUENT_ARRAY_PROPS = new Set(['fillable', 'casts', 'hidden', 'guarded', 'with', 'appends']);\n\n/** Eloquent relationship method names */\nconst ELOQUENT_RELATIONS = new Set([\n  'hasMany', 'hasOne', 'belongsTo', 'belongsToMany',\n  'morphTo', 'morphMany', 'morphOne', 'morphToMany', 'morphedByMany',\n  'hasManyThrough', 'hasOneThrough',\n]);\n\nfunction findDescendant(node: any, type: string): any {\n  if (node.type === type) return node;\n  for (const child of (node.children ?? [])) {\n    const found = findDescendant(child, type);\n    if (found) return found;\n  }\n  return null;\n}\n\nfunction extractStringContent(node: any): string | null {\n  if (!node) return null;\n  const content = node.children?.find((c: any) => c.type === 'string_content');\n  if (content) return content.text;\n  if (node.type === 'string_content') return node.text;\n  return null;\n}\n\n/**\n * For a PHP property_declaration node, extract array values as a description string.\n * Returns null if not an Eloquent model property or no array values found.\n */\nfunction extractPhpPropertyDescription(propName: string, propDeclNode: any): string | null {\n  if (!ELOQUENT_ARRAY_PROPS.has(propName)) return null;\n\n  const arrayNode = findDescendant(propDeclNode, 'array_creation_expression');\n  if (!arrayNode) return null;\n\n  const items: string[] = [];\n  for (const child of (arrayNode.children ?? [])) {\n    if (child.type !== 'array_element_initializer') continue;\n    const children = child.children ?? 
[];\n    const arrowIdx = children.findIndex((c: any) => c.type === '=>');\n    if (arrowIdx !== -1) {\n      // key => value pair (used in $casts)\n      const key = extractStringContent(children[arrowIdx - 1]);\n      const val = extractStringContent(children[arrowIdx + 1]);\n      if (key && val) items.push(`${key}:${val}`);\n    } else {\n      // Simple value (used in $fillable, $hidden, etc.)\n      const val = extractStringContent(children[0]);\n      if (val) items.push(val);\n    }\n  }\n\n  return items.length > 0 ? items.join(', ') : null;\n}\n\n/**\n * For a PHP method_declaration node, detect if it defines an Eloquent relationship.\n * Returns description like \"hasMany(Post)\" or null.\n */\nfunction extractEloquentRelationDescription(methodNode: any): string | null {\n  function findRelationCall(node: any): any {\n    if (node.type === 'member_call_expression') {\n      const children = node.children ?? [];\n      const objectNode = children.find((c: any) => c.type === 'variable_name' && c.text === '$this');\n      const nameNode = children.find((c: any) => c.type === 'name');\n      if (objectNode && nameNode && ELOQUENT_RELATIONS.has(nameNode.text)) return node;\n    }\n    for (const child of (node.children ?? 
[])) {\n      const found = findRelationCall(child);\n      if (found) return found;\n    }\n    return null;\n  }\n\n  const callNode = findRelationCall(methodNode);\n  if (!callNode) return null;\n\n  const relType = callNode.children?.find((c: any) => c.type === 'name')?.text;\n  const argsNode = callNode.children?.find((c: any) => c.type === 'arguments');\n  let targetModel: string | null = null;\n  if (argsNode) {\n    const firstArg = argsNode.children?.find((c: any) => c.type === 'argument');\n    if (firstArg) {\n      const classConstant = firstArg.children?.find((c: any) =>\n        c.type === 'class_constant_access_expression'\n      );\n      if (classConstant) {\n        targetModel = classConstant.children?.find((c: any) => c.type === 'name')?.text ?? null;\n      }\n    }\n  }\n\n  if (relType && targetModel) return `${relType}(${targetModel})`;\n  if (relType) return relType;\n  return null;\n}\n\n// ============================================================================\n// Laravel Route Extraction (procedural AST walk)\n// ============================================================================\n\ninterface RouteGroupContext {\n  middleware: string[];\n  prefix: string | null;\n  controller: string | null;\n}\n\nconst ROUTE_HTTP_METHODS = new Set([\n  'get', 'post', 'put', 'patch', 'delete', 'options', 'any', 'match',\n]);\n\nconst ROUTE_RESOURCE_METHODS = new Set(['resource', 'apiResource']);\n\nconst RESOURCE_ACTIONS = ['index', 'create', 'store', 'show', 'edit', 'update', 'destroy'];\nconst API_RESOURCE_ACTIONS = ['index', 'store', 'show', 'update', 'destroy'];\n\n/** Check if node is a scoped_call_expression with object 'Route' */\nfunction isRouteStaticCall(node: any): boolean {\n  if (node.type !== 'scoped_call_expression') return false;\n  const obj = node.childForFieldName?.('object') ?? 
node.children?.[0];\n  return obj?.text === 'Route';\n}\n\n/** Get the method name from a scoped_call_expression or member_call_expression */\nfunction getCallMethodName(node: any): string | null {\n  const nameNode = node.childForFieldName?.('name') ??\n    node.children?.find((c: any) => c.type === 'name');\n  return nameNode?.text ?? null;\n}\n\n/** Get the arguments node from a call expression */\nfunction getArguments(node: any): any {\n  return node.children?.find((c: any) => c.type === 'arguments') ?? null;\n}\n\n/** Find the closure body inside arguments */\nfunction findClosureBody(argsNode: any): any | null {\n  if (!argsNode) return null;\n  for (const child of argsNode.children ?? []) {\n    if (child.type === 'argument') {\n      for (const inner of child.children ?? []) {\n        if (inner.type === 'anonymous_function' ||\n            inner.type === 'arrow_function') {\n          return inner.childForFieldName?.('body') ??\n            inner.children?.find((c: any) => c.type === 'compound_statement');\n        }\n      }\n    }\n    if (child.type === 'anonymous_function' ||\n        child.type === 'arrow_function') {\n      return child.childForFieldName?.('body') ??\n        child.children?.find((c: any) => c.type === 'compound_statement');\n    }\n  }\n  return null;\n}\n\n/** Extract first string argument from arguments node */\nfunction extractFirstStringArg(argsNode: any): string | null {\n  if (!argsNode) return null;\n  for (const child of argsNode.children ?? []) {\n    const target = child.type === 'argument' ? child.children?.[0] : child;\n    if (!target) continue;\n    if (target.type === 'string' || target.type === 'encapsed_string') {\n      return extractStringContent(target);\n    }\n  }\n  return null;\n}\n\n/** Extract middleware from arguments — handles string or array */\nfunction extractMiddlewareArg(argsNode: any): string[] {\n  if (!argsNode) return [];\n  for (const child of argsNode.children ?? 
[]) {\n    const target = child.type === 'argument' ? child.children?.[0] : child;\n    if (!target) continue;\n    if (target.type === 'string' || target.type === 'encapsed_string') {\n      const val = extractStringContent(target);\n      return val ? [val] : [];\n    }\n    if (target.type === 'array_creation_expression') {\n      const items: string[] = [];\n      for (const el of target.children ?? []) {\n        if (el.type === 'array_element_initializer') {\n          const str = el.children?.find((c: any) => c.type === 'string' || c.type === 'encapsed_string');\n          const val = str ? extractStringContent(str) : null;\n          if (val) items.push(val);\n        }\n      }\n      return items;\n    }\n  }\n  return [];\n}\n\n/** Extract Controller::class from arguments */\nfunction extractClassArg(argsNode: any): string | null {\n  if (!argsNode) return null;\n  for (const child of argsNode.children ?? []) {\n    const target = child.type === 'argument' ? child.children?.[0] : child;\n    if (target?.type === 'class_constant_access_expression') {\n      return target.children?.find((c: any) => c.type === 'name')?.text ?? null;\n    }\n  }\n  return null;\n}\n\n/** Extract controller class name from arguments: [Controller::class, 'method'] or 'Controller@method' */\nfunction extractControllerTarget(argsNode: any): { controller: string | null; method: string | null } {\n  if (!argsNode) return { controller: null, method: null };\n\n  const args: any[] = [];\n  for (const child of argsNode.children ?? 
[]) {\n    if (child.type === 'argument') args.push(child.children?.[0]);\n    else if (child.type !== '(' && child.type !== ')' && child.type !== ',') args.push(child);\n  }\n\n  // Second arg is the handler\n  const handlerNode = args[1];\n  if (!handlerNode) return { controller: null, method: null };\n\n  // Array syntax: [UserController::class, 'index']\n  if (handlerNode.type === 'array_creation_expression') {\n    let controller: string | null = null;\n    let method: string | null = null;\n    const elements: any[] = [];\n    for (const el of handlerNode.children ?? []) {\n      if (el.type === 'array_element_initializer') elements.push(el);\n    }\n    if (elements[0]) {\n      const classAccess = findDescendant(elements[0], 'class_constant_access_expression');\n      if (classAccess) {\n        controller = classAccess.children?.find((c: any) => c.type === 'name')?.text ?? null;\n      }\n    }\n    if (elements[1]) {\n      const str = findDescendant(elements[1], 'string');\n      method = str ? extractStringContent(str) : null;\n    }\n    return { controller, method };\n  }\n\n  // String syntax: 'UserController@index'\n  if (handlerNode.type === 'string' || handlerNode.type === 'encapsed_string') {\n    const text = extractStringContent(handlerNode);\n    if (text?.includes('@')) {\n      const [controller, method] = text.split('@');\n      return { controller, method };\n    }\n  }\n\n  // Class reference: UserController::class (invokable controller)\n  if (handlerNode.type === 'class_constant_access_expression') {\n    const controller = handlerNode.children?.find((c: any) => c.type === 'name')?.text ?? 
null;\n    return { controller, method: '__invoke' };\n  }\n\n  return { controller: null, method: null };\n}\n\ninterface ChainedRouteCall {\n  isRouteFacade: boolean;\n  terminalMethod: string;\n  attributes: { method: string; argsNode: any }[];\n  terminalArgs: any;\n  node: any;\n}\n\n/**\n * Unwrap a chained call like Route::middleware('auth')->prefix('api')->group(fn)\n */\nfunction unwrapRouteChain(node: any): ChainedRouteCall | null {\n  if (node.type !== 'member_call_expression') return null;\n\n  const terminalMethod = getCallMethodName(node);\n  if (!terminalMethod) return null;\n\n  const terminalArgs = getArguments(node);\n  const attributes: { method: string; argsNode: any }[] = [];\n\n  let current = node.children?.[0];\n\n  while (current) {\n    if (current.type === 'member_call_expression') {\n      const method = getCallMethodName(current);\n      const args = getArguments(current);\n      if (method) attributes.unshift({ method, argsNode: args });\n      current = current.children?.[0];\n    } else if (current.type === 'scoped_call_expression') {\n      const obj = current.childForFieldName?.('object') ?? current.children?.[0];\n      if (obj?.text !== 'Route') return null;\n\n      const method = getCallMethodName(current);\n      const args = getArguments(current);\n      if (method) attributes.unshift({ method, argsNode: args });\n\n      return { isRouteFacade: true, terminalMethod, attributes, terminalArgs, node };\n    } else {\n      break;\n    }\n  }\n\n  return null;\n}\n\n/** Parse Route::group(['middleware' => ..., 'prefix' => ...], fn) array syntax */\nfunction parseArrayGroupArgs(argsNode: any): RouteGroupContext {\n  const ctx: RouteGroupContext = { middleware: [], prefix: null, controller: null };\n  if (!argsNode) return ctx;\n\n  for (const child of argsNode.children ?? []) {\n    const target = child.type === 'argument' ? 
child.children?.[0] : child;\n    if (target?.type === 'array_creation_expression') {\n      for (const el of target.children ?? []) {\n        if (el.type !== 'array_element_initializer') continue;\n        const children = el.children ?? [];\n        const arrowIdx = children.findIndex((c: any) => c.type === '=>');\n        if (arrowIdx === -1) continue;\n        const key = extractStringContent(children[arrowIdx - 1]);\n        const val = children[arrowIdx + 1];\n        if (key === 'middleware') {\n          if (val?.type === 'string') {\n            const s = extractStringContent(val);\n            if (s) ctx.middleware.push(s);\n          } else if (val?.type === 'array_creation_expression') {\n            for (const item of val.children ?? []) {\n              if (item.type === 'array_element_initializer') {\n                const str = item.children?.find((c: any) => c.type === 'string');\n                const s = str ? extractStringContent(str) : null;\n                if (s) ctx.middleware.push(s);\n              }\n            }\n          }\n        } else if (key === 'prefix') {\n          ctx.prefix = extractStringContent(val) ?? null;\n        } else if (key === 'controller') {\n          if (val?.type === 'class_constant_access_expression') {\n            ctx.controller = val.children?.find((c: any) => c.type === 'name')?.text ?? null;\n          }\n        }\n      }\n    }\n  }\n  return ctx;\n}\n\nfunction extractLaravelRoutes(tree: any, filePath: string): ExtractedRoute[] {\n  const routes: ExtractedRoute[] = [];\n\n  function resolveStack(stack: RouteGroupContext[]): { middleware: string[]; prefix: string | null; controller: string | null } {\n    const middleware: string[] = [];\n    let prefix: string | null = null;\n    let controller: string | null = null;\n    for (const ctx of stack) {\n      middleware.push(...ctx.middleware);\n      if (ctx.prefix) prefix = prefix ? 
`${prefix}/${ctx.prefix}`.replace(/\\/+/g, '/') : ctx.prefix;\n      if (ctx.controller) controller = ctx.controller;\n    }\n    return { middleware, prefix, controller };\n  }\n\n  function emitRoute(\n    httpMethod: string,\n    argsNode: any,\n    lineNumber: number,\n    groupStack: RouteGroupContext[],\n    chainAttrs: { method: string; argsNode: any }[],\n  ) {\n    const effective = resolveStack(groupStack);\n\n    for (const attr of chainAttrs) {\n      if (attr.method === 'middleware') effective.middleware.push(...extractMiddlewareArg(attr.argsNode));\n      if (attr.method === 'prefix') {\n        const p = extractFirstStringArg(attr.argsNode);\n        if (p) effective.prefix = effective.prefix ? `${effective.prefix}/${p}` : p;\n      }\n      if (attr.method === 'controller') {\n        const cls = extractClassArg(attr.argsNode);\n        if (cls) effective.controller = cls;\n      }\n    }\n\n    const routePath = extractFirstStringArg(argsNode);\n\n    if (ROUTE_RESOURCE_METHODS.has(httpMethod)) {\n      const target = extractControllerTarget(argsNode);\n      const actions = httpMethod === 'apiResource' ? API_RESOURCE_ACTIONS : RESOURCE_ACTIONS;\n      for (const action of actions) {\n        routes.push({\n          filePath, httpMethod, routePath,\n          controllerName: target.controller ?? effective.controller,\n          methodName: action,\n          middleware: [...effective.middleware],\n          prefix: effective.prefix,\n          lineNumber,\n        });\n      }\n    } else {\n      const target = extractControllerTarget(argsNode);\n      routes.push({\n        filePath, httpMethod, routePath,\n        controllerName: target.controller ?? 
effective.controller,\n        methodName: target.method,\n        middleware: [...effective.middleware],\n        prefix: effective.prefix,\n        lineNumber,\n      });\n    }\n  }\n\n  function walk(node: any, groupStack: RouteGroupContext[]) {\n    // Case 1: Simple Route::get(...), Route::post(...), etc.\n    if (isRouteStaticCall(node)) {\n      const method = getCallMethodName(node);\n      if (method && (ROUTE_HTTP_METHODS.has(method) || ROUTE_RESOURCE_METHODS.has(method))) {\n        emitRoute(method, getArguments(node), node.startPosition.row, groupStack, []);\n        return;\n      }\n      if (method === 'group') {\n        const argsNode = getArguments(node);\n        const groupCtx = parseArrayGroupArgs(argsNode);\n        const body = findClosureBody(argsNode);\n        if (body) {\n          groupStack.push(groupCtx);\n          walkChildren(body, groupStack);\n          groupStack.pop();\n        }\n        return;\n      }\n    }\n\n    // Case 2: Fluent chain — Route::middleware(...)->group(...) or Route::middleware(...)->get(...)\n    const chain = unwrapRouteChain(node);\n    if (chain) {\n      if (chain.terminalMethod === 'group') {\n        const groupCtx: RouteGroupContext = { middleware: [], prefix: null, controller: null };\n        for (const attr of chain.attributes) {\n          if (attr.method === 'middleware') groupCtx.middleware.push(...extractMiddlewareArg(attr.argsNode));\n          if (attr.method === 'prefix') groupCtx.prefix = extractFirstStringArg(attr.argsNode);\n          if (attr.method === 'controller') groupCtx.controller = extractClassArg(attr.argsNode);\n        }\n        const body = findClosureBody(chain.terminalArgs);\n        if (body) {\n          groupStack.push(groupCtx);\n          walkChildren(body, groupStack);\n          groupStack.pop();\n        }\n        return;\n      }\n      if (ROUTE_HTTP_METHODS.has(chain.terminalMethod) || ROUTE_RESOURCE_METHODS.has(chain.terminalMethod)) {\n        emitRoute(chain.terminalMethod, chain.terminalArgs, node.startPosition.row, groupStack, chain.attributes);\n        return;\n      }\n    }\n\n    // Default: recurse into children\n    walkChildren(node, groupStack);\n  }\n\n  function walkChildren(node: any, groupStack: RouteGroupContext[]) {\n    for (const child of node.children ?? []) {\n      walk(child, groupStack);\n    }\n  }\n\n  walk(tree.rootNode, []);\n  return routes;\n}\n\nconst processFileGroup = (\n  files: ParseWorkerInput[],\n  language: SupportedLanguages,\n  queryString: string,\n  result: ParseWorkerResult,\n  onFileProcessed?: () => void,\n): void => {\n  let query: any;\n  try {\n    const lang = parser.getLanguage();\n    query = new Parser.Query(lang, queryString);\n  } catch (err) {\n    const message = `Query compilation failed for ${language}: ${err instanceof Error ? err.message : String(err)}`;\n    if (parentPort) {\n      parentPort.postMessage({ type: 'warning', message });\n    } else {\n      console.warn(message);\n    }\n    return;\n  }\n\n  for (const file of files) {\n    // Skip files larger than the max tree-sitter buffer (32 MB)\n    if (file.content.length > TREE_SITTER_MAX_BUFFER) continue;\n\n    let tree;\n    try {\n      tree = parser.parse(file.content, undefined, { bufferSize: getTreeSitterBufferSize(file.content.length) });\n    } catch (err) {\n      console.warn(`Failed to parse file ${file.path}: ${err instanceof Error ? err.message : String(err)}`);\n      continue;\n    }\n\n    result.fileCount++;\n    onFileProcessed?.();\n\n    let matches;\n    try {\n      matches = query.matches(tree.rootNode);\n    } catch (err) {\n      console.warn(`Query execution failed for ${file.path}: ${err instanceof Error ? err.message : String(err)}`);\n      continue;\n    }\n\n    // Pre-pass: extract heritage from query matches to build parentMap for buildTypeEnv.\n    // Heritage edges (EXTENDS/IMPLEMENTS) are created by heritage-processor which runs\n    // in PARALLEL with call-processor, so the graph edges don't exist when buildTypeEnv\n    // runs. This pre-pass makes parent class information available for type resolution.\n    const fileParentMap = new Map<string, string[]>();\n    for (const match of matches) {\n      const captureMap: Record<string, any> = {};\n      for (const c of match.captures) {\n        captureMap[c.name] = c.node;\n      }\n      if (captureMap['heritage.class'] && captureMap['heritage.extends']) {\n        const className: string = captureMap['heritage.class'].text;\n        const parentName: string = captureMap['heritage.extends'].text;\n        // Skip Go named fields (only anonymous fields are struct embedding)\n        const extendsNode = captureMap['heritage.extends'];\n        const fieldDecl = extendsNode.parent;\n        if (fieldDecl?.type === 'field_declaration' && fieldDecl.childForFieldName('name')) continue;\n        let parents = fileParentMap.get(className);\n        if (!parents) { parents = []; fileParentMap.set(className, parents); }\n        if (!parents.includes(parentName)) parents.push(parentName);\n      }\n    }\n\n    // Build per-file type environment + constructor bindings in a single AST walk.\n    // Constructor bindings are verified against the SymbolTable in processCallsFromExtracted.\n    const parentMap: ReadonlyMap<string, readonly string[]> = fileParentMap;\n    const typeEnv = buildTypeEnv(tree, language, { parentMap });\n    const callRouter = callRouters[language];\n\n    if (typeEnv.constructorBindings.length > 0) {\n      result.constructorBindings.push({ filePath: file.path, bindings: [...typeEnv.constructorBindings] });\n    }\n\n    for (const match of matches) {\n      const captureMap: Record<string, any> = {};\n      for (const c of match.captures) {\n        captureMap[c.name] = c.node;\n      }\n\n      // Extract import paths before skipping\n      if (captureMap['import'] && captureMap['import.source']) {\n        const rawImportPath = language === SupportedLanguages.Kotlin\n          ? appendKotlinWildcard(captureMap['import.source'].text.replace(/['\"<>]/g, ''), captureMap['import'])\n          : captureMap['import.source'].text.replace(/['\"<>]/g, '');\n        const namedBindings = extractNamedBindings(captureMap['import'], language);\n        result.imports.push({\n          filePath: file.path,\n          rawImportPath,\n          language: language,\n          ...(namedBindings ? { namedBindings } : {}),\n        });\n        continue;\n      }\n\n      // Extract assignment sites (field write access)\n      if (captureMap['assignment'] && captureMap['assignment.receiver'] && captureMap['assignment.property']) {\n        const receiverText = captureMap['assignment.receiver'].text;\n        const propertyName = captureMap['assignment.property'].text;\n        if (receiverText && propertyName) {\n          const srcId = findEnclosingFunctionId(captureMap['assignment'], file.path)\n            || generateId('File', file.path);\n          let receiverTypeName: string | undefined;\n          if (typeEnv) {\n            receiverTypeName = typeEnv.lookup(receiverText, captureMap['assignment']) ?? undefined;\n          }\n          result.assignments.push({\n            filePath: file.path,\n            sourceId: srcId,\n            receiverText,\n            propertyName,\n            ...(receiverTypeName ? { receiverTypeName } : {}),\n          });\n        }\n        if (!captureMap['call']) continue;\n      }\n\n      // Extract call sites\n      if (captureMap['call']) {\n        const callNameNode = captureMap['call.name'];\n        if (callNameNode) {\n          const calledName = callNameNode.text;\n\n          // Dispatch: route language-specific calls (heritage, properties, imports)\n          const routed = callRouter(calledName, captureMap['call']);\n          if (routed) {\n            if (routed.kind === 'skip') continue;\n\n            if (routed.kind === 'import') {\n              result.imports.push({\n                filePath: file.path,\n                rawImportPath: routed.importPath,\n                language,\n              });\n              continue;\n            }\n\n            if (routed.kind === 'heritage') {\n              for (const item of routed.items) {\n                result.heritage.push({\n                  filePath: file.path,\n                  className: item.enclosingClass,\n                  parentName: item.mixinName,\n                  kind: item.heritageKind,\n                });\n              }\n              continue;\n            }\n\n            if (routed.kind === 'properties') {\n              const propEnclosingClassId = findEnclosingClassId(captureMap['call'], file.path);\n              for (const item of routed.items) {\n                const nodeId = generateId('Property', `${file.path}:${item.propName}`);\n                result.nodes.push({\n                  id: nodeId,\n                  label: 'Property',\n                  properties: {\n                    name: item.propName,\n                    filePath: file.path,\n                    startLine: item.startLine,\n                    endLine: item.endLine,\n                    language,\n                    isExported: true,\n                    description: item.accessorType,\n                  },\n                });\n                result.symbols.push({\n                  filePath: file.path,\n                  name: item.propName,\n                  nodeId,\n                  type: 'Property',\n                  ...(propEnclosingClassId ? { ownerId: propEnclosingClassId } : {}),\n                  ...(item.declaredType ? { declaredType: item.declaredType } : {}),\n                });\n                const fileId = generateId('File', file.path);\n                const relId = generateId('DEFINES', `${fileId}->${nodeId}`);\n                result.relationships.push({\n                  id: relId,\n                  sourceId: fileId,\n                  targetId: nodeId,\n                  type: 'DEFINES',\n                  confidence: 1.0,\n                  reason: '',\n                });\n                if (propEnclosingClassId) {\n                  result.relationships.push({\n                    id: generateId('HAS_PROPERTY', `${propEnclosingClassId}->${nodeId}`),\n                    sourceId: propEnclosingClassId,\n                    targetId: nodeId,\n                    type: 'HAS_PROPERTY',\n                    confidence: 1.0,\n                    reason: '',\n                  });\n                }\n              }\n              continue;\n            }\n\n            // kind === 'call' — fall through to normal call processing below\n          }\n\n          if (!isBuiltInOrNoise(calledName)) {\n            const callNode = captureMap['call'];\n            const sourceId = findEnclosingFunctionId(callNode, file.path)\n              || generateId('File', file.path);\n            const callForm = inferCallForm(callNode, callNameNode);\n            let receiverName = callForm === 'member' ? extractReceiverName(callNameNode) : undefined;\n            let receiverTypeName = receiverName ? typeEnv.lookup(receiverName, callNode) : undefined;\n            let receiverMixedChain: MixedChainStep[] | undefined;\n\n            // When the receiver is a complex expression (call chain, field chain, or mixed),\n            // extractReceiverName returns undefined. Walk the receiver node to build a unified\n            // mixed chain for deferred resolution in processCallsFromExtracted.\n            if (callForm === 'member' && receiverName === undefined && !receiverTypeName) {\n              const receiverNode = extractReceiverNode(callNameNode);\n              if (receiverNode) {\n                const extracted = extractMixedChain(receiverNode);\n                if (extracted && extracted.chain.length > 0) {\n                  receiverMixedChain = extracted.chain;\n                  receiverName = extracted.baseReceiverName;\n                  // Try the type environment immediately for the base receiver\n                  // (covers explicitly-typed locals and annotated parameters).\n                  if (receiverName) {\n                    receiverTypeName = typeEnv.lookup(receiverName, callNode);\n                  }\n                }\n              }\n            }\n\n            result.calls.push({\n              filePath: file.path,\n              calledName,\n              sourceId,\n              argCount: countCallArguments(callNode),\n              ...(callForm !== undefined ? { callForm } : {}),\n              ...(receiverName !== undefined ? { receiverName } : {}),\n              ...(receiverTypeName !== undefined ? { receiverTypeName } : {}),\n              ...(receiverMixedChain !== undefined ? { receiverMixedChain } : {}),\n            });\n          }\n        }\n        continue;\n      }\n\n      // Extract heritage (extends/implements)\n      if (captureMap['heritage.class']) {\n        if (captureMap['heritage.extends']) {\n          // Go struct embedding: the query matches ALL field_declarations with\n          // type_identifier, but only anonymous fields (no name) are embedded.\n          // Named fields like `Breed string` also match — skip them.\n          const extendsNode = captureMap['heritage.extends'];\n          const fieldDecl = extendsNode.parent;\n          const isNamedField = fieldDecl?.type === 'field_declaration'\n            && fieldDecl.childForFieldName('name');\n          if (!isNamedField) {\n            result.heritage.push({\n              filePath: file.path,\n              className: captureMap['heritage.class'].text,\n              parentName: captureMap['heritage.extends'].text,\n              kind: 'extends',\n            });\n          }\n        }\n        if (captureMap['heritage.implements']) {\n          result.heritage.push({\n            filePath: file.path,\n            className: captureMap['heritage.class'].text,\n            parentName: captureMap['heritage.implements'].text,\n            kind: 'implements',\n          });\n        }\n        if (captureMap['heritage.trait']) {\n          result.heritage.push({\n            filePath: file.path,\n            className: captureMap['heritage.class'].text,\n            parentName: captureMap['heritage.trait'].text,\n            kind: 'trait-impl',\n          });\n        }\n        if (captureMap['heritage.extends'] || captureMap['heritage.implements'] || captureMap['heritage.trait']) {\n          continue;\n        }\n      }\n\n      const nodeLabel = getLabelFromCaptures(captureMap);\n      if (!nodeLabel) continue;\n\n      // C/C++: @definition.function is broad and also matches inline class methods (inside\n      // a class/struct body). Those are already captured by @definition.method, so skip\n      // the duplicate Function entry to prevent double-indexing in globalIndex.\n      if (\n        (language === SupportedLanguages.CPlusPlus || language === SupportedLanguages.C) &&\n        nodeLabel === 'Function'\n      ) {\n        let ancestor = captureMap['definition.function']?.parent;\n        while (ancestor) {\n          if (ancestor.type === 'class_specifier' || ancestor.type === 'struct_specifier') {\n            break; // inside a class body — duplicate of @definition.method\n          }\n          ancestor = ancestor.parent;\n        }\n        if (ancestor) continue; // found a class/struct ancestor → skip\n      }\n\n      const nameNode = captureMap['name'];\n      // Synthesize name for constructors without explicit @name capture (e.g. Swift init)\n      if (!nameNode && nodeLabel !== 'Constructor') continue;\n      const nodeName = nameNode ? nameNode.text : 'init';\n      const definitionNode = getDefinitionNodeFromCaptures(captureMap);\n      const startLine = definitionNode ? definitionNode.startPosition.row : (nameNode ? nameNode.startPosition.row : 0);\n      const nodeId = generateId(nodeLabel, `${file.path}:${nodeName}`);\n\n      let description: string | undefined;\n      if (language === SupportedLanguages.PHP) {\n        if (nodeLabel === 'Property' && captureMap['definition.property']) {\n          description = extractPhpPropertyDescription(nodeName, captureMap['definition.property']) ?? undefined;\n        } else if (nodeLabel === 'Method' && captureMap['definition.method']) {\n          description = extractEloquentRelationDescription(captureMap['definition.method']) ?? undefined;\n        }\n      }\n\n      const frameworkHint = definitionNode\n        ? detectFrameworkFromAST(language, (definitionNode.text || '').slice(0, 300))\n        : null;\n\n      let parameterCount: number | undefined;\n      let requiredParameterCount: number | undefined;\n      let parameterTypes: string[] | undefined;\n      let returnType: string | undefined;\n      let declaredType: string | undefined;\n      if (nodeLabel === 'Function' || nodeLabel === 'Method' || nodeLabel === 'Constructor') {\n        const sig = extractMethodSignature(definitionNode);\n        parameterCount = sig.parameterCount;\n        requiredParameterCount = sig.requiredParameterCount;\n        parameterTypes = sig.parameterTypes;\n        returnType = sig.returnType;\n\n        // Language-specific return type fallback (e.g. Ruby YARD @return [Type])\n        // Also upgrades uninformative AST types like PHP `array` with PHPDoc `@return User[]`\n        if ((!returnType || returnType === 'array' || returnType === 'iterable') && definitionNode) {\n          const tc = typeConfigs[language as keyof typeof typeConfigs];\n          if (tc?.extractReturnType) {\n            const docReturn = tc.extractReturnType(definitionNode);\n            if (docReturn) returnType = docReturn;\n          }\n        }\n      } else if (nodeLabel === 'Property' && definitionNode) {\n        // Extract the declared type for property/field nodes.\n        // Walk the definition node for type annotation children.\n        declaredType = extractPropertyDeclaredType(definitionNode);\n      }\n\n      result.nodes.push({\n        id: nodeId,\n        label: nodeLabel,\n        properties: {\n          name: nodeName,\n          filePath: file.path,\n          startLine: definitionNode ? definitionNode.startPosition.row : startLine,\n          endLine: definitionNode ? definitionNode.endPosition.row : startLine,\n          language: language,\n          isExported: isNodeExported(nameNode || definitionNode, nodeName, language),\n          ...(frameworkHint ? {\n            astFrameworkMultiplier: frameworkHint.entryPointMultiplier,\n            astFrameworkReason: frameworkHint.reason,\n          } : {}),\n          ...(description !== undefined ? { description } : {}),\n          ...(parameterCount !== undefined ? { parameterCount } : {}),\n          ...(requiredParameterCount !== undefined ? { requiredParameterCount } : {}),\n          ...(parameterTypes !== undefined ? { parameterTypes } : {}),\n          ...(returnType !== undefined ? { returnType } : {}),\n        },\n      });\n\n      // Compute enclosing class for Method/Constructor/Property/Function — used for both ownerId and HAS_METHOD\n      // Function is included because Kotlin/Rust/Python capture class methods as Function nodes\n      const needsOwner = nodeLabel === 'Method' || nodeLabel === 'Constructor' || nodeLabel === 'Property' || nodeLabel === 'Function';\n      const enclosingClassId = needsOwner ? findEnclosingClassId(nameNode || definitionNode, file.path) : null;\n\n      result.symbols.push({\n        filePath: file.path,\n        name: nodeName,\n        nodeId,\n        type: nodeLabel,\n        ...(parameterCount !== undefined ? { parameterCount } : {}),\n        ...(requiredParameterCount !== undefined ? { requiredParameterCount } : {}),\n        ...(parameterTypes !== undefined ? { parameterTypes } : {}),\n        ...(returnType !== undefined ? { returnType } : {}),\n        ...(declaredType !== undefined ? { declaredType } : {}),\n        ...(enclosingClassId ? { ownerId: enclosingClassId } : {}),\n      });\n\n      const fileId = generateId('File', file.path);\n      const relId = generateId('DEFINES', `${fileId}->${nodeId}`);\n      result.relationships.push({\n        id: relId,\n        sourceId: fileId,\n        targetId: nodeId,\n        type: 'DEFINES',\n        confidence: 1.0,\n        reason: '',\n      });\n\n      // ── HAS_METHOD / HAS_PROPERTY: link member to enclosing class ──\n      if (enclosingClassId) {\n        const memberEdgeType = nodeLabel === 'Property' ? 'HAS_PROPERTY' : 'HAS_METHOD';\n        result.relationships.push({\n          id: generateId(memberEdgeType, `${enclosingClassId}->${nodeId}`),\n          sourceId: enclosingClassId,\n          targetId: nodeId,\n          type: memberEdgeType,\n          confidence: 1.0,\n          reason: '',\n        });\n      }\n    }\n\n    // Extract Laravel routes from route files via procedural AST walk\n    if (language === SupportedLanguages.PHP && (file.path.includes('/routes/') || file.path.startsWith('routes/')) && file.path.endsWith('.php')) {\n      const extractedRoutes = extractLaravelRoutes(tree, file.path);\n      result.routes.push(...extractedRoutes);\n    }\n  }\n};\n\n// ============================================================================\n// Worker message handler — supports sub-batch streaming\n// ============================================================================\n\n/** Accumulated result across sub-batches */\nlet accumulated: ParseWorkerResult = {\n  nodes: [], relationships: [], symbols: [],\n  imports: [], calls: [], assignments: [], heritage: [], routes: [], constructorBindings: [], skippedLanguages: {}, fileCount: 0,\n};\nlet cumulativeProcessed = 0;\n\nconst mergeResult = (target: ParseWorkerResult, src: ParseWorkerResult) => {\n  target.nodes.push(...src.nodes);\n  target.relationships.push(...src.relationships);\n  target.symbols.push(...src.symbols);\n  target.imports.push(...src.imports);\n  target.calls.push(...src.calls);\n  target.assignments.push(...src.assignments);\n  target.heritage.push(...src.heritage);\n  target.routes.push(...src.routes);\n  target.constructorBindings.push(...src.constructorBindings);\n  for (const [lang, count] of Object.entries(src.skippedLanguages)) {\n    target.skippedLanguages[lang] = (target.skippedLanguages[lang] || 0) + count;\n  }\n  target.fileCount += src.fileCount;\n};\n\nparentPort!.on('message', (msg: any) => {\n  try {\n    // Sub-batch mode: { type: 'sub-batch', files: [...] }\n    if (msg && msg.type === 'sub-batch') {\n      const result = processBatch(msg.files, (filesProcessed) => {\n        parentPort!.postMessage({ type: 'progress', filesProcessed: cumulativeProcessed + filesProcessed });\n      });\n      cumulativeProcessed += result.fileCount;\n      mergeResult(accumulated, result);\n      // Signal ready for next sub-batch\n      parentPort!.postMessage({ type: 'sub-batch-done' });\n      return;\n    }\n\n    // Flush: send accumulated results\n    if (msg && msg.type === 'flush') {\n      parentPort!.postMessage({ type: 'result', data: accumulated });\n      // Reset for potential reuse\n      accumulated = { nodes: [], relationships: [], symbols: [], imports: [], calls: [], assignments: [], heritage: [], routes: [], constructorBindings: [], skippedLanguages: {}, fileCount: 0 };\n      cumulativeProcessed = 0;\n      return;\n    }\n\n    // Legacy single-message mode (backward compat): array of files\n    if (Array.isArray(msg)) {\n      const result = processBatch(msg, (filesProcessed) => {\n        parentPort!.postMessage({ type: 'progress', filesProcessed });\n      });\n      parentPort!.postMessage({ type: 'result', data: result });\n      return;\n    }\n  } catch (err) {\n    const message = err instanceof Error ? err.message : String(err);\n    parentPort!.postMessage({ type: 'error', error: message });\n  }\n});\n"
  },
  {
    "path": "gitnexus/src/core/ingestion/workers/worker-pool.ts",
    "content": "import { Worker } from 'node:worker_threads';\nimport os from 'node:os';\nimport fs from 'node:fs';\nimport { fileURLToPath } from 'node:url';\n\nexport interface WorkerPool {\n  /**\n   * Dispatch items across workers. Items are split into chunks (one per worker),\n   * each worker processes its chunk via sub-batches to limit peak memory,\n   * and results are concatenated back in order.\n   */\n  dispatch<TInput, TResult>(items: TInput[], onProgress?: (filesProcessed: number) => void): Promise<TResult[]>;\n\n  /** Terminate all workers. Must be called when done. */\n  terminate(): Promise<void>;\n\n  /** Number of workers in the pool */\n  readonly size: number;\n}\n\n/**\n * Max files to send to a worker in a single postMessage.\n * Keeps structured-clone memory bounded per sub-batch.\n */\nconst SUB_BATCH_SIZE = 1500;\n\n/** Per sub-batch timeout. If a single sub-batch takes longer than this,\n *  likely a pathological file (e.g. minified 50MB JS). Fail fast. */\nconst SUB_BATCH_TIMEOUT_MS = 30_000;\n\n/**\n * Create a pool of worker threads.\n */\nexport const createWorkerPool = (workerUrl: URL, poolSize?: number): WorkerPool => {\n  // Validate worker script exists before spawning to prevent uncaught\n  // MODULE_NOT_FOUND crashes in worker threads (e.g. when running from src/ via vitest)\n  const workerPath = fileURLToPath(workerUrl);\n  if (!fs.existsSync(workerPath)) {\n    throw new Error(`Worker script not found: ${workerPath}`);\n  }\n\n  const size = poolSize ?? Math.min(8, Math.max(1, os.cpus().length - 1));\n  const workers: Worker[] = [];\n\n  for (let i = 0; i < size; i++) {\n    workers.push(new Worker(workerUrl));\n  }\n\n  const dispatch = <TInput, TResult>(items: TInput[], onProgress?: (filesProcessed: number) => void): Promise<TResult[]> => {\n    if (items.length === 0) return Promise.resolve([]);\n\n    const chunkSize = Math.ceil(items.length / size);\n    const chunks: TInput[][] = [];\n    for (let i = 0; i < items.length; i += chunkSize) {\n      chunks.push(items.slice(i, i + chunkSize));\n    }\n\n    const workerProgress = new Array(chunks.length).fill(0);\n\n    const promises = chunks.map((chunk, i) => {\n      const worker = workers[i];\n      return new Promise<TResult>((resolve, reject) => {\n        let settled = false;\n        let subBatchTimer: ReturnType<typeof setTimeout> | null = null;\n\n        const cleanup = () => {\n          if (subBatchTimer) clearTimeout(subBatchTimer);\n          worker.removeListener('message', handler);\n          worker.removeListener('error', errorHandler);\n          worker.removeListener('exit', exitHandler);\n        };\n\n        const resetSubBatchTimer = () => {\n          if (subBatchTimer) clearTimeout(subBatchTimer);\n          subBatchTimer = setTimeout(() => {\n            if (!settled) {\n              settled = true;\n              cleanup();\n              reject(new Error(`Worker ${i} sub-batch timed out after ${SUB_BATCH_TIMEOUT_MS / 1000}s (chunk: ${chunk.length} items).`));\n            }\n          }, SUB_BATCH_TIMEOUT_MS);\n        };\n\n        let subBatchIdx = 0;\n\n        const sendNextSubBatch = () => {\n          const start = subBatchIdx * SUB_BATCH_SIZE;\n          if (start >= chunk.length) {\n            worker.postMessage({ type: 'flush' });\n            return;\n          }\n          const subBatch = chunk.slice(start, start + SUB_BATCH_SIZE);\n          subBatchIdx++;\n          resetSubBatchTimer();\n          worker.postMessage({ type: 'sub-batch', files: subBatch });\n        };\n\n        const handler = (msg: any) => {\n          if (settled) return;\n          if (msg && msg.type === 'progress') {\n            workerProgress[i] = msg.filesProcessed;\n            if (onProgress) {\n              const total = workerProgress.reduce((a, b) => a + b, 0);\n              onProgress(total);\n            }\n          } else if (msg && msg.type === 'sub-batch-done') {\n            sendNextSubBatch();\n          } else if (msg && msg.type === 'error') {\n            settled = true;\n            cleanup();\n            reject(new Error(`Worker ${i} error: ${msg.error}`));\n          } else if (msg && msg.type === 'result') {\n            settled = true;\n            cleanup();\n            resolve(msg.data);\n          } else {\n            settled = true;\n            cleanup();\n            resolve(msg);\n          }\n        };\n\n        const errorHandler = (err: any) => {\n          if (!settled) { settled = true; cleanup(); reject(err); }\n        };\n\n        const exitHandler = (code: number) => {\n          if (!settled) {\n            settled = true;\n            cleanup();\n            reject(new Error(`Worker ${i} exited with code ${code}. Likely OOM or native addon failure.`));\n          }\n        };\n\n        worker.on('message', handler);\n        worker.once('error', errorHandler);\n        worker.once('exit', exitHandler);\n        sendNextSubBatch();\n      });\n    });\n\n    return Promise.all(promises);\n  };\n\n  const terminate = async (): Promise<void> => {\n    await Promise.all(workers.map(w => w.terminate()));\n    workers.length = 0;\n  };\n\n  return { dispatch, terminate, size };\n};\n"
  },
  {
    "path": "gitnexus/src/core/lbug/csv-generator.ts",
    "content": "/**\n * CSV Generator for LadybugDB Hybrid Schema\n *\n * Streams CSV rows directly to disk files in a single pass over graph nodes.\n * File contents are lazy-read from disk per-node to avoid holding the entire\n * repo in RAM. Rows are buffered (FLUSH_EVERY) before writing to minimize\n * per-row Promise overhead.\n *\n * RFC 4180 Compliant:\n * - Fields containing commas, double quotes, or newlines are enclosed in double quotes\n * - Double quotes within fields are escaped by doubling them (\"\")\n * - All fields are consistently quoted for safety with code content\n */\n\nimport fs from 'fs/promises';\nimport { createWriteStream, WriteStream } from 'fs';\nimport path from 'path';\nimport { KnowledgeGraph, GraphNode, NodeLabel } from '../graph/types.js';\nimport { NodeTableName } from './schema.js';\n\n/** Flush buffered rows to disk every N rows */\nconst FLUSH_EVERY = 500;\n\n// ============================================================================\n// CSV ESCAPE UTILITIES\n// ============================================================================\n\nexport const sanitizeUTF8 = (str: string): string => {\n  return str\n    .replace(/\\r\\n/g, '\\n')\n    .replace(/\\r/g, '\\n')\n    .replace(/[\\x00-\\x08\\x0B\\x0C\\x0E-\\x1F\\x7F]/g, '')\n    .replace(/[\\uD800-\\uDFFF]/g, '')\n    .replace(/[\\uFFFE\\uFFFF]/g, '');\n};\n\nexport const escapeCSVField = (value: string | number | undefined | null): string => {\n  if (value === undefined || value === null) return '\"\"';\n  let str = String(value);\n  str = sanitizeUTF8(str);\n  return `\"${str.replace(/\"/g, '\"\"')}\"`;\n};\n\nexport const escapeCSVNumber = (value: number | undefined | null, defaultValue: number = -1): string => {\n  if (value === undefined || value === null) return String(defaultValue);\n  return String(value);\n};\n\n// ============================================================================\n// CONTENT EXTRACTION (lazy — reads from disk on demand)\n// ============================================================================\n\nexport const isBinaryContent = (content: string): boolean => {\n  if (!content || content.length === 0) return false;\n  const sample = content.slice(0, 1000);\n  let nonPrintable = 0;\n  for (let i = 0; i < sample.length; i++) {\n    const code = sample.charCodeAt(i);\n    if ((code < 9) || (code > 13 && code < 32) || code === 127) nonPrintable++;\n  }\n  return (nonPrintable / sample.length) > 0.1;\n};\n\n/**\n * LRU content cache — avoids re-reading the same source file for every\n * symbol defined in it. Sized generously so most files stay cached during\n * the single-pass node iteration.\n */\nclass FileContentCache {\n  private cache = new Map<string, string>();\n  private accessOrder: string[] = [];\n  private maxSize: number;\n  private repoPath: string;\n\n  constructor(repoPath: string, maxSize: number = 3000) {\n    this.repoPath = repoPath;\n    this.maxSize = maxSize;\n  }\n\n  async get(relativePath: string): Promise<string> {\n    if (!relativePath) return '';\n    const cached = this.cache.get(relativePath);\n    if (cached !== undefined) {\n      // Move to end of accessOrder (LRU promotion)\n      const idx = this.accessOrder.indexOf(relativePath);\n      if (idx !== -1) {\n        this.accessOrder.splice(idx, 1);\n        this.accessOrder.push(relativePath);\n      }\n      return cached;\n    }\n    try {\n      const fullPath = path.join(this.repoPath, relativePath);\n      const content = await fs.readFile(fullPath, 'utf-8');\n      this.set(relativePath, content);\n      return content;\n    } catch {\n      this.set(relativePath, '');\n      return '';\n    }\n  }\n\n  private set(key: string, value: string) {\n    if (this.cache.size >= this.maxSize) {\n      const oldest = this.accessOrder.shift();\n      if (oldest) this.cache.delete(oldest);\n    }\n    this.cache.set(key, value);\n    this.accessOrder.push(key);\n  }\n}\n\nconst extractContent = async (\n  node: GraphNode,\n  contentCache: FileContentCache\n): Promise<string> => {\n  const filePath = node.properties.filePath;\n  const content = await contentCache.get(filePath);\n  if (!content) return '';\n  if (node.label === 'Folder') return '';\n  if (isBinaryContent(content)) return '[Binary file - content not stored]';\n\n  if (node.label === 'File') {\n    const MAX_FILE_CONTENT = 10000;\n    return content.length > MAX_FILE_CONTENT\n      ? content.slice(0, MAX_FILE_CONTENT) + '\\n... [truncated]'\n      : content;\n  }\n\n  const startLine = node.properties.startLine;\n  const endLine = node.properties.endLine;\n  if (startLine === undefined || endLine === undefined) return '';\n\n  const lines = content.split('\\n');\n  const start = Math.max(0, startLine - 2);\n  const end = Math.min(lines.length - 1, endLine + 2);\n  const snippet = lines.slice(start, end + 1).join('\\n');\n  const MAX_SNIPPET = 5000;\n  return snippet.length > MAX_SNIPPET\n    ? snippet.slice(0, MAX_SNIPPET) + '\\n... [truncated]'\n    : snippet;\n};\n\n// ============================================================================\n// BUFFERED CSV WRITER\n// ============================================================================\n\nclass BufferedCSVWriter {\n  private ws: WriteStream;\n  private buffer: string[] = [];\n  rows = 0;\n\n  constructor(filePath: string, header: string) {\n    this.ws = createWriteStream(filePath, 'utf-8');\n    // Large repos flush many times — raise listener cap to avoid MaxListenersExceededWarning\n    this.ws.setMaxListeners(50);\n    this.buffer.push(header);\n  }\n\n  addRow(row: string) {\n    this.buffer.push(row);\n    this.rows++;\n    if (this.buffer.length >= FLUSH_EVERY) {\n      return this.flush();\n    }\n    return Promise.resolve();\n  }\n\n  flush(): Promise<void> {\n    if (this.buffer.length === 0) return Promise.resolve();\n    const chunk = this.buffer.join('\\n') + '\\n';\n    this.buffer.length = 0;\n    return new Promise((resolve, reject) => {\n      this.ws.once('error', reject);\n      const ok = this.ws.write(chunk);\n      if (ok) {\n        this.ws.removeListener('error', reject);\n        resolve();\n      } else {\n        this.ws.once('drain', () => {\n          this.ws.removeListener('error', reject);\n          resolve();\n        });\n      }\n    });\n  }\n\n  async finish(): Promise<void> {\n    await this.flush();\n    return new Promise((resolve, reject) => {\n      this.ws.end(() => resolve());\n      this.ws.on('error', reject);\n    });\n  }\n}\n\n// ============================================================================\n// STREAMING CSV GENERATION — SINGLE PASS\n// ============================================================================\n\nexport interface StreamedCSVResult {\n  nodeFiles: Map<NodeTableName, { csvPath: string; rows: number }>;\n  relCsvPath: string;\n  relRows: number;\n}\n\n/**\n * Stream all CSV data directly to disk files.\n * Iterates graph nodes exactly ONCE — routes each node to the right writer.\n * File contents are lazy-read from disk with a generous LRU cache.\n */\nexport const streamAllCSVsToDisk = async (\n  graph: KnowledgeGraph,\n  repoPath: string,\n  csvDir: string,\n): Promise<StreamedCSVResult> => {\n  // Remove stale CSVs from previous crashed runs, then recreate\n  try { await fs.rm(csvDir, { recursive: true, force: true }); } catch {}\n  await fs.mkdir(csvDir, { recursive: true });\n\n  // We open ~30 concurrent write-streams; raise process limit to suppress\n  // MaxListenersExceededWarning (restored after all streams finish).\n  const prevMax = process.getMaxListeners();\n  process.setMaxListeners(prevMax + 40);\n\n  const contentCache = new FileContentCache(repoPath);\n\n  // Create writers for every node type up-front\n  const fileWriter = new BufferedCSVWriter(path.join(csvDir, 'file.csv'), 'id,name,filePath,content');\n  const folderWriter = new BufferedCSVWriter(path.join(csvDir, 'folder.csv'), 'id,name,filePath');\n  const codeElementHeader = 'id,name,filePath,startLine,endLine,isExported,content,description';\n  const functionWriter = new BufferedCSVWriter(path.join(csvDir, 'function.csv'), codeElementHeader);\n  const classWriter = new BufferedCSVWriter(path.join(csvDir, 'class.csv'), codeElementHeader);\n  const interfaceWriter = new BufferedCSVWriter(path.join(csvDir, 'interface.csv'), codeElementHeader);\n  const methodHeader = 'id,name,filePath,startLine,endLine,isExported,content,description,parameterCount,returnType';\n  const methodWriter = new BufferedCSVWriter(path.join(csvDir, 'method.csv'), methodHeader);\n  const codeElemWriter = new BufferedCSVWriter(path.join(csvDir, 'codeelement.csv'), codeElementHeader);\n  const communityWriter = new BufferedCSVWriter(path.join(csvDir, 'community.csv'), 'id,label,heuristicLabel,keywords,description,enrichedBy,cohesion,symbolCount');\n  const processWriter = new BufferedCSVWriter(path.join(csvDir, 'process.csv'), 'id,label,heuristicLabel,processType,stepCount,communities,entryPointId,terminalId');\n\n  // Multi-language node types share the same CSV shape (no isExported column)\n  const multiLangHeader = 'id,name,filePath,startLine,endLine,content,description';\n  const MULTI_LANG_TYPES = ['Struct', 'Enum', 'Macro', 'Typedef', 'Union', 'Namespace', 'Trait', 'Impl',\n    'TypeAlias', 'Const', 'Static', 'Property', 'Record', 'Delegate', 'Annotation', 'Constructor', 'Template', 'Module'] as const;\n  const multiLangWriters = new Map<string, BufferedCSVWriter>();\n  for (const t of MULTI_LANG_TYPES) {\n    multiLangWriters.set(t, new BufferedCSVWriter(path.join(csvDir, `${t.toLowerCase()}.csv`), multiLangHeader));\n  }\n\n  const codeWriterMap: Record<string, BufferedCSVWriter> = {\n    'Function': functionWriter,\n    'Class': classWriter,\n    'Interface': interfaceWriter,\n    'CodeElement': codeElemWriter,\n  };\n\n  const seenFileIds = new Set<string>();\n\n  // --- SINGLE PASS over all nodes ---\n  for (const node of graph.iterNodes()) {\n    switch (node.label) {\n      case 'File': {\n        if (seenFileIds.has(node.id)) break;\n        seenFileIds.add(node.id);\n        const content = await extractContent(node, contentCache);\n        await fileWriter.addRow([\n          escapeCSVField(node.id),\n          escapeCSVField(node.properties.name || ''),\n          escapeCSVField(node.properties.filePath || ''),\n          escapeCSVField(content),\n        ].join(','));\n        break;\n      }\n      case 'Folder':\n        await folderWriter.addRow([\n          escapeCSVField(node.id),\n          escapeCSVField(node.properties.name || ''),\n          escapeCSVField(node.properties.filePath || ''),\n        ].join(','));\n        break;\n      case 'Community': {\n        const keywords = (node.properties as any).keywords || [];\n        const keywordsStr = `[${keywords.map((k: string) => `'${k.replace(/\\\\/g, '\\\\\\\\').replace(/'/g, \"''\").replace(/,/g, 
'\\\\,')}'`).join(',')}]`;\n        await communityWriter.addRow([\n          escapeCSVField(node.id),\n          escapeCSVField(node.properties.name || ''),\n          escapeCSVField(node.properties.heuristicLabel || ''),\n          keywordsStr,\n          escapeCSVField((node.properties as any).description || ''),\n          escapeCSVField((node.properties as any).enrichedBy || 'heuristic'),\n          escapeCSVNumber(node.properties.cohesion, 0),\n          escapeCSVNumber(node.properties.symbolCount, 0),\n        ].join(','));\n        break;\n      }\n      case 'Process': {\n        const communities = (node.properties as any).communities || [];\n        const communitiesStr = `[${communities.map((c: string) => `'${c.replace(/'/g, \"''\")}'`).join(',')}]`;\n        await processWriter.addRow([\n          escapeCSVField(node.id),\n          escapeCSVField(node.properties.name || ''),\n          escapeCSVField((node.properties as any).heuristicLabel || ''),\n          escapeCSVField((node.properties as any).processType || ''),\n          escapeCSVNumber((node.properties as any).stepCount, 0),\n          escapeCSVField(communitiesStr),\n          escapeCSVField((node.properties as any).entryPointId || ''),\n          escapeCSVField((node.properties as any).terminalId || ''),\n        ].join(','));\n        break;\n      }\n      case 'Method': {\n        const content = await extractContent(node, contentCache);\n        await methodWriter.addRow([\n          escapeCSVField(node.id),\n          escapeCSVField(node.properties.name || ''),\n          escapeCSVField(node.properties.filePath || ''),\n          escapeCSVNumber(node.properties.startLine, -1),\n          escapeCSVNumber(node.properties.endLine, -1),\n          node.properties.isExported ? 
'true' : 'false',\n          escapeCSVField(content),\n          escapeCSVField((node.properties as any).description || ''),\n          escapeCSVNumber(node.properties.parameterCount, 0),\n          escapeCSVField(node.properties.returnType || ''),\n        ].join(','));\n        break;\n      }\n      default: {\n        // Code element nodes (Function, Class, Interface, CodeElement)\n        const writer = codeWriterMap[node.label];\n        if (writer) {\n          const content = await extractContent(node, contentCache);\n          await writer.addRow([\n            escapeCSVField(node.id),\n            escapeCSVField(node.properties.name || ''),\n            escapeCSVField(node.properties.filePath || ''),\n            escapeCSVNumber(node.properties.startLine, -1),\n            escapeCSVNumber(node.properties.endLine, -1),\n            node.properties.isExported ? 'true' : 'false',\n            escapeCSVField(content),\n            escapeCSVField((node.properties as any).description || ''),\n          ].join(','));\n        } else {\n          // Multi-language node types (Struct, Impl, Trait, Macro, etc.)\n          const mlWriter = multiLangWriters.get(node.label);\n          if (mlWriter) {\n            const content = await extractContent(node, contentCache);\n            await mlWriter.addRow([\n              escapeCSVField(node.id),\n              escapeCSVField(node.properties.name || ''),\n              escapeCSVField(node.properties.filePath || ''),\n              escapeCSVNumber(node.properties.startLine, -1),\n              escapeCSVNumber(node.properties.endLine, -1),\n              escapeCSVField(content),\n              escapeCSVField((node.properties as any).description || ''),\n            ].join(','));\n          }\n        }\n        break;\n      }\n    }\n  }\n\n  // Finish all node writers\n  const allWriters = [fileWriter, folderWriter, functionWriter, classWriter, interfaceWriter, methodWriter, codeElemWriter, communityWriter, 
processWriter, ...multiLangWriters.values()];\n  await Promise.all(allWriters.map(w => w.finish()));\n\n  // --- Stream relationship CSV ---\n  const relCsvPath = path.join(csvDir, 'relations.csv');\n  const relWriter = new BufferedCSVWriter(relCsvPath, 'from,to,type,confidence,reason,step');\n  for (const rel of graph.iterRelationships()) {\n    await relWriter.addRow([\n      escapeCSVField(rel.sourceId),\n      escapeCSVField(rel.targetId),\n      escapeCSVField(rel.type),\n      escapeCSVNumber(rel.confidence, 1.0),\n      escapeCSVField(rel.reason),\n      escapeCSVNumber((rel as any).step, 0),\n    ].join(','));\n  }\n  await relWriter.finish();\n\n  // Build result map — only include tables that have rows\n  const nodeFiles = new Map<NodeTableName, { csvPath: string; rows: number }>();\n  const tableMap: [NodeTableName, BufferedCSVWriter][] = [\n    ['File', fileWriter], ['Folder', folderWriter],\n    ['Function', functionWriter], ['Class', classWriter],\n    ['Interface', interfaceWriter], ['Method', methodWriter],\n    ['CodeElement', codeElemWriter],\n    ['Community', communityWriter], ['Process', processWriter],\n    ...Array.from(multiLangWriters.entries()).map(([name, w]) => [name as NodeTableName, w] as [NodeTableName, BufferedCSVWriter]),\n  ];\n  for (const [name, writer] of tableMap) {\n    if (writer.rows > 0) {\n      nodeFiles.set(name, { csvPath: path.join(csvDir, `${name.toLowerCase()}.csv`), rows: writer.rows });\n    }\n  }\n\n  // Restore original process listener limit\n  process.setMaxListeners(prevMax);\n\n  return { nodeFiles, relCsvPath, relRows: relWriter.rows };\n};\n"
  },
  {
    "path": "gitnexus/src/core/lbug/lbug-adapter.ts",
    "content": "import fs from 'fs/promises';\nimport { createReadStream } from 'fs';\nimport { createInterface } from 'readline';\nimport path from 'path';\nimport lbug from '@ladybugdb/core';\nimport { KnowledgeGraph } from '../graph/types.js';\nimport {\n  NODE_TABLES,\n  REL_TABLE_NAME,\n  SCHEMA_QUERIES,\n  EMBEDDING_TABLE_NAME,\n  NodeTableName,\n} from './schema.js';\nimport { streamAllCSVsToDisk } from './csv-generator.js';\n\nlet db: lbug.Database | null = null;\nlet conn: lbug.Connection | null = null;\nlet currentDbPath: string | null = null;\nlet ftsLoaded = false;\n\n/** Expose the current Database for pool adapter reuse in tests. */\nexport const getDatabase = (): lbug.Database | null => db;\n\n// Global session lock for operations that touch module-level lbug globals.\n// This guarantees no DB switch can happen while an operation is running.\nlet sessionLock: Promise<void> = Promise.resolve();\n\nconst runWithSessionLock = async <T>(operation: () => Promise<T>): Promise<T> => {\n  const previous = sessionLock;\n  let release: (() => void) | null = null;\n  sessionLock = new Promise<void>(resolve => {\n    release = resolve;\n  });\n\n  await previous;\n  try {\n    return await operation();\n  } finally {\n    release?.();\n  }\n};\n\nconst normalizeCopyPath = (filePath: string): string => filePath.replace(/\\\\/g, '/');\n\nexport const initLbug = async (dbPath: string) => {\n  return runWithSessionLock(() => ensureLbugInitialized(dbPath));\n};\n\n/**\n * Execute multiple queries against one repo DB atomically.\n * While the callback runs, no other request can switch the active DB.\n */\nexport const withLbugDb = async <T>(dbPath: string, operation: () => Promise<T>): Promise<T> => {\n  return runWithSessionLock(async () => {\n    await ensureLbugInitialized(dbPath);\n    return operation();\n  });\n};\n\nconst ensureLbugInitialized = async (dbPath: string) => {\n  if (conn && currentDbPath === dbPath) {\n    return { db, conn };\n  }\n  await 
doInitLbug(dbPath);\n  return { db, conn };\n};\n\nconst doInitLbug = async (dbPath: string) => {\n  // Different database requested — close the old one first\n  if (conn || db) {\n    try { if (conn) await conn.close(); } catch {}\n    try { if (db) await db.close(); } catch {}\n    conn = null;\n    db = null;\n    currentDbPath = null;\n    ftsLoaded = false;\n  }\n\n  // LadybugDB stores the database as a single file (not a directory).\n  // If the path already exists, it must be a valid LadybugDB database file.\n  // Remove stale empty directories or files from older versions.\n  try {\n    const stat = await fs.lstat(dbPath);\n    if (stat.isSymbolicLink()) {\n      // Never follow symlinks — just remove the link itself\n      await fs.unlink(dbPath);\n    } else if (stat.isDirectory()) {\n      // Verify path is within expected storage directory before deleting\n      const realPath = await fs.realpath(dbPath);\n      const parentDir = path.dirname(dbPath);\n      const realParent = await fs.realpath(parentDir);\n      if (!realPath.startsWith(realParent + path.sep) && realPath !== realParent) {\n        throw new Error(`Refusing to delete ${dbPath}: resolved path ${realPath} is outside storage directory`);\n      }\n      // Old-style directory database or empty leftover - remove it\n      await fs.rm(dbPath, { recursive: true, force: true });\n    }\n    // If it's a file, assume it's an existing LadybugDB database - LadybugDB will open it\n  } catch {\n    // Path doesn't exist, which is what LadybugDB wants for a new database\n  }\n\n  // Ensure parent directory exists\n  const parentDir = path.dirname(dbPath);\n  await fs.mkdir(parentDir, { recursive: true });\n\n  db = new lbug.Database(dbPath);\n  conn = new lbug.Connection(db);\n\n  for (const schemaQuery of SCHEMA_QUERIES) {\n    try {\n      await conn.query(schemaQuery);\n    } catch (err) {\n      // Only ignore \"already exists\" errors - log everything else\n      const msg = err instanceof 
Error ? err.message : String(err);\n      if (!msg.includes('already exists')) {\n        console.warn(`⚠️ Schema creation warning: ${msg.slice(0, 120)}`);\n      }\n    }\n  }\n\n  currentDbPath = dbPath;\n  return { db, conn };\n};\n\nexport type LbugProgressCallback = (message: string) => void;\n\nexport const loadGraphToLbug = async (\n  graph: KnowledgeGraph,\n  repoPath: string,\n  storagePath: string,\n  onProgress?: LbugProgressCallback\n) => {\n  if (!conn) {\n    throw new Error('LadybugDB not initialized. Call initLbug first.');\n  }\n\n  const log = onProgress || (() => {});\n\n  const csvDir = path.join(storagePath, 'csv');\n\n  log('Streaming CSVs to disk...');\n  const csvResult = await streamAllCSVsToDisk(graph, repoPath, csvDir);\n\n  const validTables = new Set<string>(NODE_TABLES as readonly string[]);\n  const getNodeLabel = (nodeId: string): string => {\n    if (nodeId.startsWith('comm_')) return 'Community';\n    if (nodeId.startsWith('proc_')) return 'Process';\n    return nodeId.split(':')[0];\n  };\n\n  // Bulk COPY all node CSVs (sequential — LadybugDB allows only one write txn at a time)\n  const nodeFiles = [...csvResult.nodeFiles.entries()];\n  const totalSteps = nodeFiles.length + 1; // +1 for relationships\n  let stepsDone = 0;\n\n  for (const [table, { csvPath, rows }] of nodeFiles) {\n    stepsDone++;\n    log(`Loading nodes ${stepsDone}/${totalSteps}: ${table} (${rows.toLocaleString()} rows)`);\n\n    const normalizedPath = normalizeCopyPath(csvPath);\n    const copyQuery = getCopyQuery(table, normalizedPath);\n\n    try {\n      await conn.query(copyQuery);\n    } catch (err) {\n      try {\n        const retryQuery = copyQuery.replace('auto_detect=false)', 'auto_detect=false, IGNORE_ERRORS=true)');\n        await conn.query(retryQuery);\n      } catch (retryErr) {\n        const retryMsg = retryErr instanceof Error ? 
retryErr.message : String(retryErr);\n        throw new Error(`COPY failed for ${table}: ${retryMsg.slice(0, 200)}`);\n      }\n    }\n  }\n\n  // Bulk COPY relationships — split by FROM→TO label pair (LadybugDB requires it)\n  // Stream-read the relation CSV line by line to avoid exceeding V8 max string length\n  let relHeader = '';\n  const relsByPair = new Map<string, string[]>();\n  let skippedRels = 0;\n  let totalValidRels = 0;\n\n  await new Promise<void>((resolve, reject) => {\n    const rl = createInterface({ input: createReadStream(csvResult.relCsvPath, 'utf-8'), crlfDelay: Infinity });\n    let isFirst = true;\n    rl.on('line', (line) => {\n      if (isFirst) { relHeader = line; isFirst = false; return; }\n      if (!line.trim()) return;\n      const match = line.match(/\"([^\"]*)\",\"([^\"]*)\"/);\n      if (!match) { skippedRels++; return; }\n      const fromLabel = getNodeLabel(match[1]);\n      const toLabel = getNodeLabel(match[2]);\n      if (!validTables.has(fromLabel) || !validTables.has(toLabel)) {\n        skippedRels++;\n        return;\n      }\n      const pairKey = `${fromLabel}|${toLabel}`;\n      let list = relsByPair.get(pairKey);\n      if (!list) { list = []; relsByPair.set(pairKey, list); }\n      list.push(line);\n      totalValidRels++;\n    });\n    rl.on('close', resolve);\n    rl.on('error', reject);\n  });\n\n  const insertedRels = totalValidRels;\n  const warnings: string[] = [];\n  if (insertedRels > 0) {\n\n    log(`Loading edges: ${insertedRels.toLocaleString()} across ${relsByPair.size} types`);\n\n    let pairIdx = 0;\n    let failedPairEdges = 0;\n    const failedPairLines: string[] = [];\n\n    for (const [pairKey, lines] of relsByPair) {\n      pairIdx++;\n      const [fromLabel, toLabel] = pairKey.split('|');\n      const pairCsvPath = path.join(csvDir, `rel_${fromLabel}_${toLabel}.csv`);\n      await fs.writeFile(pairCsvPath, relHeader + '\\n' + lines.join('\\n'), 'utf-8');\n      const normalizedPath = 
normalizeCopyPath(pairCsvPath);\n      const copyQuery = `COPY ${REL_TABLE_NAME} FROM \"${normalizedPath}\" (from=\"${fromLabel}\", to=\"${toLabel}\", HEADER=true, ESCAPE='\"', DELIM=',', QUOTE='\"', PARALLEL=false, auto_detect=false)`;\n\n      if (pairIdx % 5 === 0 || lines.length > 1000) {\n        log(`Loading edges: ${pairIdx}/${relsByPair.size} types (${fromLabel} -> ${toLabel})`);\n      }\n\n      try {\n        await conn.query(copyQuery);\n      } catch (err) {\n        try {\n          const retryQuery = copyQuery.replace('auto_detect=false)', 'auto_detect=false, IGNORE_ERRORS=true)');\n          await conn.query(retryQuery);\n        } catch (retryErr) {\n          const retryMsg = retryErr instanceof Error ? retryErr.message : String(retryErr);\n          warnings.push(`${fromLabel}->${toLabel} (${lines.length} edges): ${retryMsg.slice(0, 80)}`);\n          failedPairEdges += lines.length;\n          failedPairLines.push(...lines);\n        }\n      }\n      try { await fs.unlink(pairCsvPath); } catch {}\n    }\n\n    if (failedPairLines.length > 0) {\n      log(`Inserting ${failedPairEdges} edges individually (missing schema pairs)`);\n      await fallbackRelationshipInserts([relHeader, ...failedPairLines], validTables, getNodeLabel);\n    }\n  }\n\n  // Cleanup all CSVs\n  try { await fs.unlink(csvResult.relCsvPath); } catch {}\n  for (const [, { csvPath }] of csvResult.nodeFiles) {\n    try { await fs.unlink(csvPath); } catch {}\n  }\n  try {\n    const remaining = await fs.readdir(csvDir);\n    for (const f of remaining) {\n      try { await fs.unlink(path.join(csvDir, f)); } catch {}\n    }\n  } catch {}\n  try { await fs.rmdir(csvDir); } catch {}\n\n  return { success: true, insertedRels, skippedRels, warnings };\n};\n\n// LadybugDB default ESCAPE is '\\' (backslash), but our CSV uses RFC 4180 escaping (\"\" for literal quotes).\n// Source code content is full of backslashes which confuse the auto-detection.\n// We MUST explicitly set ESCAPE='\"' 
to use RFC 4180 escaping, and disable auto_detect to prevent\n// LadybugDB from overriding our settings based on sample rows.\nconst COPY_CSV_OPTS = `(HEADER=true, ESCAPE='\"', DELIM=',', QUOTE='\"', PARALLEL=false, auto_detect=false)`;\n\n// Multi-language table names that were created with backticks in CODE_ELEMENT_BASE\n// and must always be referenced with backticks in queries\nconst BACKTICK_TABLES = new Set([\n  'Struct', 'Enum', 'Macro', 'Typedef', 'Union', 'Namespace', 'Trait', 'Impl',\n  'TypeAlias', 'Const', 'Static', 'Property', 'Record', 'Delegate', 'Annotation',\n  'Constructor', 'Template', 'Module',\n]);\n\nconst escapeTableName = (table: string): string => {\n  return BACKTICK_TABLES.has(table) ? `\\`${table}\\`` : table;\n};\n\n/** Fallback: insert relationships one-by-one if COPY fails */\nconst fallbackRelationshipInserts = async (\n  validRelLines: string[],\n  validTables: Set<string>,\n  getNodeLabel: (id: string) => string\n) => {\n  if (!conn) return;\n  const escapeLabel = (label: string): string => {\n    return BACKTICK_TABLES.has(label) ? 
`\\`${label}\\`` : label;\n  };\n\n  for (let i = 1; i < validRelLines.length; i++) {\n    const line = validRelLines[i];\n    try {\n      const match = line.match(/\"([^\"]*)\",\"([^\"]*)\",\"([^\"]*)\",([0-9.]+),\"([^\"]*)\",([0-9-]+)/);\n      if (!match) continue;\n      const [, fromId, toId, relType, confidenceStr, reason, stepStr] = match;\n      const fromLabel = getNodeLabel(fromId);\n      const toLabel = getNodeLabel(toId);\n      if (!validTables.has(fromLabel) || !validTables.has(toLabel)) continue;\n\n      const confidence = parseFloat(confidenceStr) || 1.0;\n      const step = parseInt(stepStr) || 0;\n\n      const esc = (s: string) => s.replace(/'/g, \"''\").replace(/\\\\/g, '\\\\\\\\').replace(/\\n/g, '\\\\n').replace(/\\r/g, '\\\\r');\n      await conn.query(`\n        MATCH (a:${escapeLabel(fromLabel)} {id: '${esc(fromId)}' }),\n              (b:${escapeLabel(toLabel)} {id: '${esc(toId)}' })\n        CREATE (a)-[:${REL_TABLE_NAME} {type: '${esc(relType)}', confidence: ${confidence}, reason: '${esc(reason)}', step: ${step}}]->(b)\n      `);\n    } catch {\n      // skip\n    }\n  }\n};\n\n/** Tables with isExported column (TypeScript/JS-native types) */\nconst TABLES_WITH_EXPORTED = new Set<string>(['Function', 'Class', 'Interface', 'Method', 'CodeElement']);\n\nconst getCopyQuery = (table: NodeTableName, filePath: string): string => {\n  const t = escapeTableName(table);\n  if (table === 'File') {\n    return `COPY ${t}(id, name, filePath, content) FROM \"${filePath}\" ${COPY_CSV_OPTS}`;\n  }\n  if (table === 'Folder') {\n    return `COPY ${t}(id, name, filePath) FROM \"${filePath}\" ${COPY_CSV_OPTS}`;\n  }\n  if (table === 'Community') {\n    return `COPY ${t}(id, label, heuristicLabel, keywords, description, enrichedBy, cohesion, symbolCount) FROM \"${filePath}\" ${COPY_CSV_OPTS}`;\n  }\n  if (table === 'Process') {\n    return `COPY ${t}(id, label, heuristicLabel, processType, stepCount, communities, entryPointId, terminalId) FROM 
\"${filePath}\" ${COPY_CSV_OPTS}`;\n  }\n  if (table === 'Method') {\n    return `COPY ${t}(id, name, filePath, startLine, endLine, isExported, content, description, parameterCount, returnType) FROM \"${filePath}\" ${COPY_CSV_OPTS}`;\n  }\n  // TypeScript/JS code element tables have isExported; multi-language tables do not\n  if (TABLES_WITH_EXPORTED.has(table)) {\n    return `COPY ${t}(id, name, filePath, startLine, endLine, isExported, content, description) FROM \"${filePath}\" ${COPY_CSV_OPTS}`;\n  }\n  // Multi-language tables (Struct, Impl, Trait, Macro, etc.)\n  return `COPY ${t}(id, name, filePath, startLine, endLine, content, description) FROM \"${filePath}\" ${COPY_CSV_OPTS}`;\n};\n\n/**\n * Insert a single node to LadybugDB\n * @param label - Node type (File, Function, Class, etc.)\n * @param properties - Node properties\n * @param dbPath - Path to LadybugDB database (optional if already initialized)\n */\nexport const insertNodeToLbug = async (\n  label: string,\n  properties: Record<string, any>,\n  dbPath?: string\n): Promise<boolean> => {\n  // Use provided dbPath or fall back to module-level db\n  const targetDbPath = dbPath || (db ? undefined : null);\n  if (!targetDbPath && !db) {\n    throw new Error('LadybugDB not initialized. 
Provide dbPath or call initLbug first.');\n  }\n\n  try {\n    const escapeValue = (v: any): string => {\n      if (v === null || v === undefined) return 'NULL';\n      if (typeof v === 'number') return String(v);\n      // Escape backslashes first (for Windows paths), then single quotes\n      return `'${String(v).replace(/\\\\/g, '\\\\\\\\').replace(/'/g, \"''\").replace(/\\n/g, '\\\\n').replace(/\\r/g, '\\\\r')}'`;\n    };\n\n    // Build INSERT query based on node type\n    const t = escapeTableName(label);\n    let query: string;\n\n    if (label === 'File') {\n      query = `CREATE (n:File {id: ${escapeValue(properties.id)}, name: ${escapeValue(properties.name)}, filePath: ${escapeValue(properties.filePath)}, content: ${escapeValue(properties.content || '')}})`;\n    } else if (label === 'Folder') {\n      query = `CREATE (n:Folder {id: ${escapeValue(properties.id)}, name: ${escapeValue(properties.name)}, filePath: ${escapeValue(properties.filePath)}})`;\n    } else if (TABLES_WITH_EXPORTED.has(label)) {\n      const descPart = properties.description ? `, description: ${escapeValue(properties.description)}` : '';\n      query = `CREATE (n:${t} {id: ${escapeValue(properties.id)}, name: ${escapeValue(properties.name)}, filePath: ${escapeValue(properties.filePath)}, startLine: ${properties.startLine || 0}, endLine: ${properties.endLine || 0}, isExported: ${!!properties.isExported}, content: ${escapeValue(properties.content || '')}${descPart}})`;\n    } else {\n      // Multi-language tables (Struct, Impl, Trait, Macro, etc.) — no isExported\n      const descPart = properties.description ? 
`, description: ${escapeValue(properties.description)}` : '';\n      query = `CREATE (n:${t} {id: ${escapeValue(properties.id)}, name: ${escapeValue(properties.name)}, filePath: ${escapeValue(properties.filePath)}, startLine: ${properties.startLine || 0}, endLine: ${properties.endLine || 0}, content: ${escapeValue(properties.content || '')}${descPart}})`;\n    }\n\n    // Use per-query connection if dbPath provided (avoids lock conflicts)\n    if (targetDbPath) {\n      const tempDb = new lbug.Database(targetDbPath);\n      const tempConn = new lbug.Connection(tempDb);\n      try {\n        await tempConn.query(query);\n        return true;\n      } finally {\n        try { await tempConn.close(); } catch {}\n        try { await tempDb.close(); } catch {}\n      }\n    } else if (conn) {\n      // Use existing persistent connection (when called from analyze)\n      await conn.query(query);\n      return true;\n    }\n\n    return false;\n  } catch (e: any) {\n    // Node may already exist or other error\n    console.error(`Failed to insert ${label} node:`, e.message);\n    return false;\n  }\n};\n\n/**\n * Batch insert multiple nodes to LadybugDB using a single connection\n * @param nodes - Array of {label, properties} to insert\n * @param dbPath - Path to LadybugDB database\n * @returns Object with success count and error count\n */\nexport const batchInsertNodesToLbug = async (\n  nodes: Array<{ label: string; properties: Record<string, any> }>,\n  dbPath: string\n): Promise<{ inserted: number; failed: number }> => {\n  if (nodes.length === 0) return { inserted: 0, failed: 0 };\n\n  const escapeValue = (v: any): string => {\n    if (v === null || v === undefined) return 'NULL';\n    if (typeof v === 'number') return String(v);\n    // Escape backslashes first (for Windows paths), then single quotes, then newlines\n    return `'${String(v).replace(/\\\\/g, '\\\\\\\\').replace(/'/g, \"''\").replace(/\\n/g, '\\\\n').replace(/\\r/g, '\\\\r')}'`;\n  };\n\n  // Open a 
single connection for all inserts\n  const tempDb = new lbug.Database(dbPath);\n  const tempConn = new lbug.Connection(tempDb);\n\n  let inserted = 0;\n  let failed = 0;\n\n  try {\n    for (const { label, properties } of nodes) {\n      try {\n        let query: string;\n\n        // Use MERGE instead of CREATE for upsert behavior (handles duplicates gracefully)\n        const t = escapeTableName(label);\n        if (label === 'File') {\n          query = `MERGE (n:File {id: ${escapeValue(properties.id)}}) SET n.name = ${escapeValue(properties.name)}, n.filePath = ${escapeValue(properties.filePath)}, n.content = ${escapeValue(properties.content || '')}`;\n        } else if (label === 'Folder') {\n          query = `MERGE (n:Folder {id: ${escapeValue(properties.id)}}) SET n.name = ${escapeValue(properties.name)}, n.filePath = ${escapeValue(properties.filePath)}`;\n        } else if (TABLES_WITH_EXPORTED.has(label)) {\n          const descPart = properties.description ? `, n.description = ${escapeValue(properties.description)}` : '';\n          query = `MERGE (n:${t} {id: ${escapeValue(properties.id)}}) SET n.name = ${escapeValue(properties.name)}, n.filePath = ${escapeValue(properties.filePath)}, n.startLine = ${properties.startLine || 0}, n.endLine = ${properties.endLine || 0}, n.isExported = ${!!properties.isExported}, n.content = ${escapeValue(properties.content || '')}${descPart}`;\n        } else {\n          const descPart = properties.description ? 
`, n.description = ${escapeValue(properties.description)}` : '';\n          query = `MERGE (n:${t} {id: ${escapeValue(properties.id)}}) SET n.name = ${escapeValue(properties.name)}, n.filePath = ${escapeValue(properties.filePath)}, n.startLine = ${properties.startLine || 0}, n.endLine = ${properties.endLine || 0}, n.content = ${escapeValue(properties.content || '')}${descPart}`;\n        }\n\n        await tempConn.query(query);\n        inserted++;\n      } catch (e: any) {\n        // Don't console.error here - it corrupts MCP JSON-RPC on stderr\n        failed++;\n      }\n    }\n  } finally {\n    try { await tempConn.close(); } catch {}\n    try { await tempDb.close(); } catch {}\n  }\n\n  return { inserted, failed };\n};\n\nexport const executeQuery = async (cypher: string): Promise<any[]> => {\n  if (!conn) {\n    throw new Error('LadybugDB not initialized. Call initLbug first.');\n  }\n\n  const queryResult = await conn.query(cypher);\n  // LadybugDB uses getAll() instead of hasNext()/getNext()\n  // Query returns QueryResult for single queries, QueryResult[] for multi-statement\n  const result = Array.isArray(queryResult) ? queryResult[0] : queryResult;\n  const rows = await result.getAll();\n  return rows;\n};\n\nexport const executeWithReusedStatement = async (\n  cypher: string,\n  paramsList: Array<Record<string, any>>\n): Promise<void> => {\n  if (!conn) {\n    throw new Error('LadybugDB not initialized. 
Call initLbug first.');\n  }\n  if (paramsList.length === 0) return;\n\n  const SUB_BATCH_SIZE = 4;\n  for (let i = 0; i < paramsList.length; i += SUB_BATCH_SIZE) {\n    const subBatch = paramsList.slice(i, i + SUB_BATCH_SIZE);\n    const stmt = await conn.prepare(cypher);\n    if (!stmt.isSuccess()) {\n      const errMsg = await stmt.getErrorMessage();\n      throw new Error(`Prepare failed: ${errMsg}`);\n    }\n    try {\n      for (const params of subBatch) {\n        await conn.execute(stmt, params);\n      }\n    } catch (e) {\n      // Log the error and continue with next batch\n      console.warn('Batch execution error:', e);\n    }\n    // Note: LadybugDB PreparedStatement doesn't require explicit close()\n  }\n};\n\nexport const getLbugStats = async (): Promise<{ nodes: number; edges: number }> => {\n  if (!conn) return { nodes: 0, edges: 0 };\n\n  let totalNodes = 0;\n  for (const tableName of NODE_TABLES) {\n    try {\n      const queryResult = await conn.query(`MATCH (n:${escapeTableName(tableName)}) RETURN count(n) AS cnt`);\n      const nodeResult = Array.isArray(queryResult) ? queryResult[0] : queryResult;\n      const nodeRows = await nodeResult.getAll();\n      if (nodeRows.length > 0) {\n        totalNodes += Number(nodeRows[0]?.cnt ?? nodeRows[0]?.[0] ?? 0);\n      }\n    } catch {\n      // ignore\n    }\n  }\n\n  let totalEdges = 0;\n  try {\n    const queryResult = await conn.query(`MATCH ()-[r:${REL_TABLE_NAME}]->() RETURN count(r) AS cnt`);\n    const edgeResult = Array.isArray(queryResult) ? queryResult[0] : queryResult;\n    const edgeRows = await edgeResult.getAll();\n    if (edgeRows.length > 0) {\n      totalEdges = Number(edgeRows[0]?.cnt ?? edgeRows[0]?.[0] ?? 
0);\n    }\n  } catch {\n    // ignore\n  }\n\n  return { nodes: totalNodes, edges: totalEdges };\n};\n\n/**\n * Load cached embeddings from LadybugDB before a rebuild.\n * Returns all embedding vectors so they can be re-inserted after the graph is reloaded,\n * avoiding expensive re-embedding of unchanged nodes.\n */\nexport const loadCachedEmbeddings = async (): Promise<{\n  embeddingNodeIds: Set<string>;\n  embeddings: Array<{ nodeId: string; embedding: number[] }>;\n}> => {\n  if (!conn) {\n    return { embeddingNodeIds: new Set(), embeddings: [] };\n  }\n\n  const embeddingNodeIds = new Set<string>();\n  const embeddings: Array<{ nodeId: string; embedding: number[] }> = [];\n  try {\n    const rows = await conn.query(`MATCH (e:${EMBEDDING_TABLE_NAME}) RETURN e.nodeId AS nodeId, e.embedding AS embedding`);\n    const result = Array.isArray(rows) ? rows[0] : rows;\n    for (const row of await result.getAll()) {\n      const nodeId = String(row.nodeId ?? row[0] ?? '');\n      if (!nodeId) continue;\n      embeddingNodeIds.add(nodeId);\n      const embedding = row.embedding ?? row[1];\n      if (embedding) {\n        embeddings.push({\n          nodeId,\n          embedding: Array.isArray(embedding) ? 
embedding.map(Number) : Array.from(embedding as any).map(Number),\n        });\n      }\n    }\n  } catch { /* embedding table may not exist */ }\n\n  return { embeddingNodeIds, embeddings };\n};\n\nexport const closeLbug = async (): Promise<void> => {\n  if (conn) {\n    try {\n      await conn.close();\n    } catch {}\n    conn = null;\n  }\n  if (db) {\n    try {\n      await db.close();\n    } catch {}\n    db = null;\n  }\n  currentDbPath = null;\n  ftsLoaded = false;\n};\n\nexport const isLbugReady = (): boolean => conn !== null && db !== null;\n\n\n/**\n * Delete all nodes (and their relationships) for a specific file from LadybugDB\n * @param filePath - The file path to delete nodes for\n * @param dbPath - Optional path to LadybugDB for per-query connection\n * @returns Object with counts of deleted nodes\n */\nexport const deleteNodesForFile = async (filePath: string, dbPath?: string): Promise<{ deletedNodes: number }> => {\n  const usePerQuery = !!dbPath;\n\n  // Set up connection (either use existing or create per-query)\n  let tempDb: lbug.Database | null = null;\n  let tempConn: lbug.Connection | null = null;\n  let targetConn: lbug.Connection | null = conn;\n\n  if (usePerQuery) {\n    tempDb = new lbug.Database(dbPath);\n    tempConn = new lbug.Connection(tempDb);\n    targetConn = tempConn;\n  } else if (!conn) {\n    throw new Error('LadybugDB not initialized. 
Provide dbPath or call initLbug first.');\n  }\n\n  try {\n    let deletedNodes = 0;\n    const escapedPath = filePath.replace(/'/g, \"''\");\n\n    // Delete nodes from each table that has filePath\n    // DETACH DELETE removes the node and all its relationships\n    for (const tableName of NODE_TABLES) {\n      // Skip tables that don't have filePath (Community, Process)\n      if (tableName === 'Community' || tableName === 'Process') continue;\n\n      try {\n        // First count how many we'll delete\n        const tn = escapeTableName(tableName);\n        const countResult = await targetConn!.query(\n          `MATCH (n:${tn}) WHERE n.filePath = '${escapedPath}' RETURN count(n) AS cnt`\n        );\n        const result = Array.isArray(countResult) ? countResult[0] : countResult;\n        const rows = await result.getAll();\n        const count = Number(rows[0]?.cnt ?? rows[0]?.[0] ?? 0);\n\n        if (count > 0) {\n          // Delete nodes (and implicitly their relationships via DETACH)\n          await targetConn!.query(\n            `MATCH (n:${tn}) WHERE n.filePath = '${escapedPath}' DETACH DELETE n`\n          );\n          deletedNodes += count;\n        }\n      } catch (e) {\n        // Some tables may not support this query, skip\n      }\n    }\n\n    // Also delete any embeddings for nodes in this file\n    try {\n      await targetConn!.query(\n        `MATCH (e:${EMBEDDING_TABLE_NAME}) WHERE e.nodeId STARTS WITH '${escapedPath}' DELETE e`\n      );\n    } catch {\n      // Embedding table may not exist or nodeId format may differ\n    }\n\n    return { deletedNodes };\n  } finally {\n    // Close per-query connection if used\n    if (tempConn) {\n      try { await tempConn.close(); } catch {}\n    }\n    if (tempDb) {\n      try { await tempDb.close(); } catch {}\n    }\n  }\n};\n\nexport const getEmbeddingTableName = (): string => EMBEDDING_TABLE_NAME;\n\n// ============================================================================\n// 
Full-Text Search (FTS) Functions\n// ============================================================================\n\n/**\n * Load the FTS extension (required before using FTS functions).\n * Safe to call multiple times — tracks loaded state via module-level ftsLoaded.\n */\nexport const loadFTSExtension = async (): Promise<void> => {\n  if (ftsLoaded) return;\n  if (!conn) {\n    throw new Error('LadybugDB not initialized. Call initLbug first.');\n  }\n  try {\n    await conn.query('INSTALL fts');\n    await conn.query('LOAD EXTENSION fts');\n    ftsLoaded = true;\n  } catch (err: any) {\n    const msg = err?.message || '';\n    if (msg.includes('already loaded') || msg.includes('already installed') || msg.includes('already exists')) {\n      ftsLoaded = true;\n    } else {\n      console.error('GitNexus: FTS extension load failed:', msg);\n    }\n  }\n};\n\n/**\n * Create a full-text search index on a table\n * @param tableName - The node table name (e.g., 'File', 'CodeSymbol')\n * @param indexName - Name for the FTS index\n * @param properties - List of properties to index (e.g., ['name', 'code'])\n * @param stemmer - Stemming algorithm (default: 'porter')\n */\nexport const createFTSIndex = async (\n  tableName: string,\n  indexName: string,\n  properties: string[],\n  stemmer: string = 'porter'\n): Promise<void> => {\n  if (!conn) {\n    throw new Error('LadybugDB not initialized. 
Call initLbug first.');\n  }\n\n  await loadFTSExtension();\n\n  const propList = properties.map(p => `'${p}'`).join(', ');\n  const query = `CALL CREATE_FTS_INDEX('${tableName}', '${indexName}', [${propList}], stemmer := '${stemmer}')`;\n\n  try {\n    await conn.query(query);\n  } catch (e: any) {\n    if (!e.message?.includes('already exists')) {\n      throw e;\n    }\n  }\n};\n\n/**\n * Query a full-text search index\n * @param tableName - The node table name\n * @param indexName - FTS index name\n * @param query - Search query string\n * @param limit - Maximum results\n * @param conjunctive - If true, all terms must match (AND); if false, any term matches (OR)\n * @returns Array of { node properties, score }\n */\nexport const queryFTS = async (\n  tableName: string,\n  indexName: string,\n  query: string,\n  limit: number = 20,\n  conjunctive: boolean = false\n): Promise<Array<{ nodeId: string; name: string; filePath: string; score: number; [key: string]: any }>> => {\n  if (!conn) {\n    throw new Error('LadybugDB not initialized. Call initLbug first.');\n  }\n\n  // Escape backslashes and single quotes to prevent Cypher injection\n  const escapedQuery = query.replace(/\\\\/g, '\\\\\\\\').replace(/'/g, \"''\");\n\n  const cypher = `\n    CALL QUERY_FTS_INDEX('${tableName}', '${indexName}', '${escapedQuery}', conjunctive := ${conjunctive})\n    RETURN node, score\n    ORDER BY score DESC\n    LIMIT ${limit}\n  `;\n\n  try {\n    const queryResult = await conn.query(cypher);\n    const result = Array.isArray(queryResult) ? queryResult[0] : queryResult;\n    const rows = await result.getAll();\n\n    return rows.map((row: any) => {\n      const node = row.node || row[0] || {};\n      const score = row.score ?? row[1] ?? 0;\n      return {\n        nodeId: node.nodeId || node.id || '',\n        name: node.name || '',\n        filePath: node.filePath || '',\n        score: typeof score === 'number' ? 
score : parseFloat(score) || 0,\n        ...node,\n      };\n    });\n  } catch (e: any) {\n    // Return empty if index doesn't exist yet\n    if (e.message?.includes('does not exist')) {\n      return [];\n    }\n    throw e;\n  }\n};\n\n/**\n * Drop an FTS index\n */\nexport const dropFTSIndex = async (tableName: string, indexName: string): Promise<void> => {\n  if (!conn) {\n    throw new Error('LadybugDB not initialized. Call initLbug first.');\n  }\n\n  try {\n    await conn.query(`CALL DROP_FTS_INDEX('${tableName}', '${indexName}')`);\n  } catch {\n    // Index may not exist\n  }\n};\n"
  },
  {
    "path": "gitnexus/src/core/lbug/schema.ts",
    "content": "/**\n * LadybugDB Schema Definitions\n * \n * Hybrid Schema:\n * - Separate node tables for each code element type (File, Function, Class, etc.)\n * - Single CodeRelation table with 'type' property for all relationships\n * \n * This allows LLMs to write natural Cypher queries like:\n *   MATCH (f:Function)-[r:CodeRelation {type: 'CALLS'}]->(g:Function) RETURN f, g\n */\n\n// ============================================================================\n// NODE TABLE NAMES\n// ============================================================================\nexport const NODE_TABLES = [\n  'File', 'Folder', 'Function', 'Class', 'Interface', 'Method', 'CodeElement', 'Community', 'Process',\n  // Multi-language support\n  'Struct', 'Enum', 'Macro', 'Typedef', 'Union', 'Namespace', 'Trait', 'Impl',\n  'TypeAlias', 'Const', 'Static', 'Property', 'Record', 'Delegate', 'Annotation', 'Constructor', 'Template', 'Module'\n] as const;\nexport type NodeTableName = typeof NODE_TABLES[number];\n\n// ============================================================================\n// RELATION TABLE\n// ============================================================================\nexport const REL_TABLE_NAME = 'CodeRelation';\n\n// Valid relation types\nexport const REL_TYPES = ['CONTAINS', 'DEFINES', 'IMPORTS', 'CALLS', 'EXTENDS', 'IMPLEMENTS', 'HAS_METHOD', 'HAS_PROPERTY', 'ACCESSES', 'OVERRIDES', 'MEMBER_OF', 'STEP_IN_PROCESS'] as const;\nexport type RelType = typeof REL_TYPES[number];\n\n// ============================================================================\n// EMBEDDING TABLE\n// ============================================================================\nexport const EMBEDDING_TABLE_NAME = 'CodeEmbedding';\n\n// ============================================================================\n// NODE TABLE SCHEMAS\n// ============================================================================\n\nexport const FILE_SCHEMA = `\nCREATE NODE TABLE File (\n  id 
STRING,\n  name STRING,\n  filePath STRING,\n  content STRING,\n  PRIMARY KEY (id)\n)`;\n\nexport const FOLDER_SCHEMA = `\nCREATE NODE TABLE Folder (\n  id STRING,\n  name STRING,\n  filePath STRING,\n  PRIMARY KEY (id)\n)`;\n\nexport const FUNCTION_SCHEMA = `\nCREATE NODE TABLE Function (\n  id STRING,\n  name STRING,\n  filePath STRING,\n  startLine INT64,\n  endLine INT64,\n  isExported BOOLEAN,\n  content STRING,\n  description STRING,\n  PRIMARY KEY (id)\n)`;\n\nexport const CLASS_SCHEMA = `\nCREATE NODE TABLE Class (\n  id STRING,\n  name STRING,\n  filePath STRING,\n  startLine INT64,\n  endLine INT64,\n  isExported BOOLEAN,\n  content STRING,\n  description STRING,\n  PRIMARY KEY (id)\n)`;\n\nexport const INTERFACE_SCHEMA = `\nCREATE NODE TABLE Interface (\n  id STRING,\n  name STRING,\n  filePath STRING,\n  startLine INT64,\n  endLine INT64,\n  isExported BOOLEAN,\n  content STRING,\n  description STRING,\n  PRIMARY KEY (id)\n)`;\n\nexport const METHOD_SCHEMA = `\nCREATE NODE TABLE Method (\n  id STRING,\n  name STRING,\n  filePath STRING,\n  startLine INT64,\n  endLine INT64,\n  isExported BOOLEAN,\n  content STRING,\n  description STRING,\n  parameterCount INT32,\n  returnType STRING,\n  PRIMARY KEY (id)\n)`;\n\nexport const CODE_ELEMENT_SCHEMA = `\nCREATE NODE TABLE CodeElement (\n  id STRING,\n  name STRING,\n  filePath STRING,\n  startLine INT64,\n  endLine INT64,\n  isExported BOOLEAN,\n  content STRING,\n  description STRING,\n  PRIMARY KEY (id)\n)`;\n\n// ============================================================================\n// COMMUNITY NODE TABLE (for Leiden algorithm clusters)\n// ============================================================================\n\nexport const COMMUNITY_SCHEMA = `\nCREATE NODE TABLE Community (\n  id STRING,\n  label STRING,\n  heuristicLabel STRING,\n  keywords STRING[],\n  description STRING,\n  enrichedBy STRING,\n  cohesion DOUBLE,\n  symbolCount INT32,\n  PRIMARY KEY (id)\n)`;\n\n// 
============================================================================\n// PROCESS NODE TABLE (for execution flow detection)\n// ============================================================================\n\nexport const PROCESS_SCHEMA = `\nCREATE NODE TABLE Process (\n  id STRING,\n  label STRING,\n  heuristicLabel STRING,\n  processType STRING,\n  stepCount INT32,\n  communities STRING[],\n  entryPointId STRING,\n  terminalId STRING,\n  PRIMARY KEY (id)\n)`;\n\n// ============================================================================\n// MULTI-LANGUAGE NODE TABLE SCHEMAS\n// ============================================================================\n\n// Generic code element with startLine/endLine for C, C++, Rust, Go, Java, C#\n// description: optional metadata (e.g. Eloquent $fillable fields, relationship targets)\nconst CODE_ELEMENT_BASE = (name: string) => `\nCREATE NODE TABLE \\`${name}\\` (\n  id STRING,\n  name STRING,\n  filePath STRING,\n  startLine INT64,\n  endLine INT64,\n  content STRING,\n  description STRING,\n  PRIMARY KEY (id)\n)`;\n\nexport const STRUCT_SCHEMA = CODE_ELEMENT_BASE('Struct');\nexport const ENUM_SCHEMA = CODE_ELEMENT_BASE('Enum');\nexport const MACRO_SCHEMA = CODE_ELEMENT_BASE('Macro');\nexport const TYPEDEF_SCHEMA = CODE_ELEMENT_BASE('Typedef');\nexport const UNION_SCHEMA = CODE_ELEMENT_BASE('Union');\nexport const NAMESPACE_SCHEMA = CODE_ELEMENT_BASE('Namespace');\nexport const TRAIT_SCHEMA = CODE_ELEMENT_BASE('Trait');\nexport const IMPL_SCHEMA = CODE_ELEMENT_BASE('Impl');\nexport const TYPE_ALIAS_SCHEMA = CODE_ELEMENT_BASE('TypeAlias');\nexport const CONST_SCHEMA = CODE_ELEMENT_BASE('Const');\nexport const STATIC_SCHEMA = CODE_ELEMENT_BASE('Static');\nexport const PROPERTY_SCHEMA = CODE_ELEMENT_BASE('Property');\nexport const RECORD_SCHEMA = CODE_ELEMENT_BASE('Record');\nexport const DELEGATE_SCHEMA = CODE_ELEMENT_BASE('Delegate');\nexport const ANNOTATION_SCHEMA = CODE_ELEMENT_BASE('Annotation');\nexport const 
CONSTRUCTOR_SCHEMA = CODE_ELEMENT_BASE('Constructor');\nexport const TEMPLATE_SCHEMA = CODE_ELEMENT_BASE('Template');\nexport const MODULE_SCHEMA = CODE_ELEMENT_BASE('Module');\n\n// ============================================================================\n// RELATION TABLE SCHEMA\n// Single table with 'type' property - connects all node tables\n// ============================================================================\n\nexport const RELATION_SCHEMA = `\nCREATE REL TABLE ${REL_TABLE_NAME} (\n  FROM File TO File,\n  FROM File TO Folder,\n  FROM File TO Function,\n  FROM File TO Class,\n  FROM File TO Interface,\n  FROM File TO Method,\n  FROM File TO CodeElement,\n  FROM File TO \\`Struct\\`,\n  FROM File TO \\`Enum\\`,\n  FROM File TO \\`Macro\\`,\n  FROM File TO \\`Typedef\\`,\n  FROM File TO \\`Union\\`,\n  FROM File TO \\`Namespace\\`,\n  FROM File TO \\`Trait\\`,\n  FROM File TO \\`Impl\\`,\n  FROM File TO \\`TypeAlias\\`,\n  FROM File TO \\`Const\\`,\n  FROM File TO \\`Static\\`,\n  FROM File TO \\`Property\\`,\n  FROM File TO \\`Record\\`,\n  FROM File TO \\`Delegate\\`,\n  FROM File TO \\`Annotation\\`,\n  FROM File TO \\`Constructor\\`,\n  FROM File TO \\`Template\\`,\n  FROM File TO \\`Module\\`,\n  FROM Folder TO Folder,\n  FROM Folder TO File,\n  FROM Function TO Function,\n  FROM Function TO Method,\n  FROM Function TO Class,\n  FROM Function TO Community,\n  FROM Function TO \\`Macro\\`,\n  FROM Function TO \\`Struct\\`,\n  FROM Function TO \\`Template\\`,\n  FROM Function TO \\`Enum\\`,\n  FROM Function TO \\`Namespace\\`,\n  FROM Function TO \\`TypeAlias\\`,\n  FROM Function TO \\`Module\\`,\n  FROM Function TO \\`Impl\\`,\n  FROM Function TO Interface,\n  FROM Function TO \\`Constructor\\`,\n  FROM Function TO \\`Const\\`,\n  FROM Function TO \\`Typedef\\`,\n  FROM Function TO \\`Union\\`,\n  FROM Function TO \\`Property\\`,\n  FROM Class TO Method,\n  FROM Class TO Function,\n  FROM Class TO Class,\n  FROM Class TO Interface,\n  FROM 
Class TO Community,\n  FROM Class TO \\`Template\\`,\n  FROM Class TO \\`TypeAlias\\`,\n  FROM Class TO \\`Struct\\`,\n  FROM Class TO \\`Enum\\`,\n  FROM Class TO \\`Annotation\\`,\n  FROM Class TO \\`Constructor\\`,\n  FROM Class TO \\`Trait\\`,\n  FROM Class TO \\`Macro\\`,\n  FROM Class TO \\`Impl\\`,\n  FROM Class TO \\`Union\\`,\n  FROM Class TO \\`Namespace\\`,\n  FROM Class TO \\`Typedef\\`,\n  FROM Class TO \\`Property\\`,\n  FROM Method TO Function,\n  FROM Method TO Method,\n  FROM Method TO Class,\n  FROM Method TO Community,\n  FROM Method TO \\`Template\\`,\n  FROM Method TO \\`Struct\\`,\n  FROM Method TO \\`TypeAlias\\`,\n  FROM Method TO \\`Enum\\`,\n  FROM Method TO \\`Macro\\`,\n  FROM Method TO \\`Namespace\\`,\n  FROM Method TO \\`Module\\`,\n  FROM Method TO \\`Impl\\`,\n  FROM Method TO Interface,\n  FROM Method TO \\`Constructor\\`,\n  FROM Method TO \\`Property\\`,\n  FROM \\`Template\\` TO \\`Template\\`,\n  FROM \\`Template\\` TO Function,\n  FROM \\`Template\\` TO Method,\n  FROM \\`Template\\` TO Class,\n  FROM \\`Template\\` TO \\`Struct\\`,\n  FROM \\`Template\\` TO \\`TypeAlias\\`,\n  FROM \\`Template\\` TO \\`Enum\\`,\n  FROM \\`Template\\` TO \\`Macro\\`,\n  FROM \\`Template\\` TO Interface,\n  FROM \\`Template\\` TO \\`Constructor\\`,\n  FROM \\`Module\\` TO \\`Module\\`,\n  FROM CodeElement TO Community,\n  FROM Interface TO Community,\n  FROM Interface TO Function,\n  FROM Interface TO Method,\n  FROM Interface TO Class,\n  FROM Interface TO Interface,\n  FROM Interface TO \\`TypeAlias\\`,\n  FROM Interface TO \\`Struct\\`,\n  FROM Interface TO \\`Constructor\\`,\n  FROM Interface TO \\`Property\\`,\n  FROM \\`Struct\\` TO Community,\n  FROM \\`Struct\\` TO \\`Trait\\`,\n  FROM \\`Struct\\` TO \\`Struct\\`,\n  FROM \\`Struct\\` TO Class,\n  FROM \\`Struct\\` TO \\`Enum\\`,\n  FROM \\`Struct\\` TO Function,\n  FROM \\`Struct\\` TO Method,\n  FROM \\`Struct\\` TO Interface,\n  FROM \\`Struct\\` TO \\`Constructor\\`,\n  FROM 
\\`Struct\\` TO \\`Property\\`,\n  FROM \\`Enum\\` TO \\`Enum\\`,\n  FROM \\`Enum\\` TO Community,\n  FROM \\`Enum\\` TO Class,\n  FROM \\`Enum\\` TO Interface,\n  FROM \\`Macro\\` TO Community,\n  FROM \\`Macro\\` TO Function,\n  FROM \\`Macro\\` TO Method,\n  FROM \\`Module\\` TO Function,\n  FROM \\`Module\\` TO Method,\n  FROM \\`Typedef\\` TO Community,\n  FROM \\`Union\\` TO Community,\n  FROM \\`Namespace\\` TO Community,\n  FROM \\`Namespace\\` TO \\`Struct\\`,\n  FROM \\`Trait\\` TO Method,\n  FROM \\`Trait\\` TO \\`Constructor\\`,\n  FROM \\`Trait\\` TO \\`Property\\`,\n  FROM \\`Trait\\` TO Community,\n  FROM \\`Impl\\` TO Method,\n  FROM \\`Impl\\` TO \\`Constructor\\`,\n  FROM \\`Impl\\` TO \\`Property\\`,\n  FROM \\`Impl\\` TO Community,\n  FROM \\`Impl\\` TO \\`Trait\\`,\n  FROM \\`Impl\\` TO \\`Struct\\`,\n  FROM \\`Impl\\` TO \\`Impl\\`,\n  FROM \\`TypeAlias\\` TO Community,\n  FROM \\`TypeAlias\\` TO \\`Trait\\`,\n  FROM \\`TypeAlias\\` TO Class,\n  FROM \\`Const\\` TO Community,\n  FROM \\`Static\\` TO Community,\n  FROM \\`Property\\` TO Community,\n  FROM \\`Record\\` TO Method,\n  FROM \\`Record\\` TO \\`Constructor\\`,\n  FROM \\`Record\\` TO \\`Property\\`,\n  FROM \\`Record\\` TO Community,\n  FROM \\`Delegate\\` TO Community,\n  FROM \\`Annotation\\` TO Community,\n  FROM \\`Constructor\\` TO Community,\n  FROM \\`Constructor\\` TO Interface,\n  FROM \\`Constructor\\` TO Class,\n  FROM \\`Constructor\\` TO Method,\n  FROM \\`Constructor\\` TO Function,\n  FROM \\`Constructor\\` TO \\`Constructor\\`,\n  FROM \\`Constructor\\` TO \\`Struct\\`,\n  FROM \\`Constructor\\` TO \\`Macro\\`,\n  FROM \\`Constructor\\` TO \\`Template\\`,\n  FROM \\`Constructor\\` TO \\`TypeAlias\\`,\n  FROM \\`Constructor\\` TO \\`Enum\\`,\n  FROM \\`Constructor\\` TO \\`Annotation\\`,\n  FROM \\`Constructor\\` TO \\`Impl\\`,\n  FROM \\`Constructor\\` TO \\`Namespace\\`,\n  FROM \\`Constructor\\` TO \\`Module\\`,\n  FROM \\`Constructor\\` TO \\`Property\\`,\n  FROM 
\\`Constructor\\` TO \\`Typedef\\`,\n  FROM \\`Template\\` TO Community,\n  FROM \\`Module\\` TO Community,\n  FROM Function TO Process,\n  FROM Method TO Process,\n  FROM Class TO Process,\n  FROM Interface TO Process,\n  FROM \\`Struct\\` TO Process,\n  FROM \\`Constructor\\` TO Process,\n  FROM \\`Module\\` TO Process,\n  FROM \\`Macro\\` TO Process,\n  FROM \\`Impl\\` TO Process,\n  FROM \\`Typedef\\` TO Process,\n  FROM \\`TypeAlias\\` TO Process,\n  FROM \\`Enum\\` TO Process,\n  FROM \\`Union\\` TO Process,\n  FROM \\`Namespace\\` TO Process,\n  FROM \\`Trait\\` TO Process,\n  FROM \\`Const\\` TO Process,\n  FROM \\`Static\\` TO Process,\n  FROM \\`Property\\` TO Process,\n  FROM \\`Record\\` TO Process,\n  FROM \\`Delegate\\` TO Process,\n  FROM \\`Annotation\\` TO Process,\n  FROM \\`Template\\` TO Process,\n  FROM CodeElement TO Process,\n  type STRING,\n  confidence DOUBLE,\n  reason STRING,\n  step INT32\n)`;\n\n// ============================================================================\n// EMBEDDING TABLE SCHEMA\n// Separate table for vector storage to avoid copy-on-write overhead\n// ============================================================================\n\nexport const EMBEDDING_SCHEMA = `\nCREATE NODE TABLE ${EMBEDDING_TABLE_NAME} (\n  nodeId STRING,\n  embedding FLOAT[384],\n  PRIMARY KEY (nodeId)\n)`;\n\n/**\n * Create vector index for semantic search\n * Uses HNSW (Hierarchical Navigable Small World) algorithm with cosine similarity\n */\nexport const CREATE_VECTOR_INDEX_QUERY = `\nCALL CREATE_VECTOR_INDEX('${EMBEDDING_TABLE_NAME}', 'code_embedding_idx', 'embedding', metric := 'cosine')\n`;\n\n// ============================================================================\n// ALL SCHEMA QUERIES IN ORDER\n// Node tables must be created before relationship tables that reference them\n// ============================================================================\n\nexport const NODE_SCHEMA_QUERIES = [\n  FILE_SCHEMA,\n  FOLDER_SCHEMA,\n  
FUNCTION_SCHEMA,\n  CLASS_SCHEMA,\n  INTERFACE_SCHEMA,\n  METHOD_SCHEMA,\n  CODE_ELEMENT_SCHEMA,\n  COMMUNITY_SCHEMA,\n  PROCESS_SCHEMA,\n  // Multi-language support\n  STRUCT_SCHEMA,\n  ENUM_SCHEMA,\n  MACRO_SCHEMA,\n  TYPEDEF_SCHEMA,\n  UNION_SCHEMA,\n  NAMESPACE_SCHEMA,\n  TRAIT_SCHEMA,\n  IMPL_SCHEMA,\n  TYPE_ALIAS_SCHEMA,\n  CONST_SCHEMA,\n  STATIC_SCHEMA,\n  PROPERTY_SCHEMA,\n  RECORD_SCHEMA,\n  DELEGATE_SCHEMA,\n  ANNOTATION_SCHEMA,\n  CONSTRUCTOR_SCHEMA,\n  TEMPLATE_SCHEMA,\n  MODULE_SCHEMA,\n];\n\nexport const REL_SCHEMA_QUERIES = [\n  RELATION_SCHEMA,\n];\n\nexport const SCHEMA_QUERIES = [\n  ...NODE_SCHEMA_QUERIES,\n  ...REL_SCHEMA_QUERIES,\n  EMBEDDING_SCHEMA,\n];\n"
  },
  {
    "path": "gitnexus/src/core/search/bm25-index.ts",
    "content": "/**\n * Full-Text Search via LadybugDB FTS\n *\n * Uses LadybugDB's built-in full-text search indexes for keyword-based search.\n * Always reads from the database (no cached state to drift).\n */\n\nimport { queryFTS } from '../lbug/lbug-adapter.js';\n\nexport interface BM25SearchResult {\n  filePath: string;\n  score: number;\n  rank: number;\n}\n\n/**\n * Execute a single FTS query via a custom executor (for MCP connection pool).\n * Returns the same shape as core queryFTS (from LadybugDB adapter).\n */\nasync function queryFTSViaExecutor(\n  executor: (cypher: string) => Promise<any[]>,\n  tableName: string,\n  indexName: string,\n  query: string,\n  limit: number,\n): Promise<Array<{ filePath: string; score: number }>> {\n  // Escape single quotes and backslashes to prevent Cypher injection\n  const escapedQuery = query.replace(/\\\\/g, '\\\\\\\\').replace(/'/g, \"''\");\n  const cypher = `\n    CALL QUERY_FTS_INDEX('${tableName}', '${indexName}', '${escapedQuery}', conjunctive := false)\n    RETURN node, score\n    ORDER BY score DESC\n    LIMIT ${limit}\n  `;\n  try {\n    const rows = await executor(cypher);\n    return rows.map((row: any) => {\n      const node = row.node || row[0] || {};\n      const score = row.score ?? row[1] ?? 0;\n      return {\n        filePath: node.filePath || '',\n        score: typeof score === 'number' ? 
score : parseFloat(score) || 0,\n      };\n    });\n  } catch {\n    return [];\n  }\n}\n\n/**\n * Search using LadybugDB's built-in FTS (always fresh, reads from disk)\n *\n * Queries multiple node tables (File, Function, Class, Method, Interface) sequentially\n * and merges results by filePath, summing scores for the same file.\n *\n * @param query - Search query string\n * @param limit - Maximum results\n * @param repoId - If provided, queries will be routed via the MCP connection pool\n * @returns Ranked search results from FTS indexes\n */\nexport const searchFTSFromLbug = async (query: string, limit: number = 20, repoId?: string): Promise<BM25SearchResult[]> => {\n  let fileResults: any[], functionResults: any[], classResults: any[], methodResults: any[], interfaceResults: any[];\n\n  if (repoId) {\n    // Use MCP connection pool via dynamic import\n    // IMPORTANT: FTS queries run sequentially to avoid connection contention.\n    // The MCP pool supports multiple connections, but FTS is best run serially.\n    const { executeQuery } = await import('../../mcp/core/lbug-adapter.js');\n    const executor = (cypher: string) => executeQuery(repoId, cypher);\n    fileResults = await queryFTSViaExecutor(executor, 'File', 'file_fts', query, limit);\n    functionResults = await queryFTSViaExecutor(executor, 'Function', 'function_fts', query, limit);\n    classResults = await queryFTSViaExecutor(executor, 'Class', 'class_fts', query, limit);\n    methodResults = await queryFTSViaExecutor(executor, 'Method', 'method_fts', query, limit);\n    interfaceResults = await queryFTSViaExecutor(executor, 'Interface', 'interface_fts', query, limit);\n  } else {\n    // Use core lbug adapter (CLI / pipeline context) — also sequential for safety\n    fileResults = await queryFTS('File', 'file_fts', query, limit, false).catch(() => []);\n    functionResults = await queryFTS('Function', 'function_fts', query, limit, false).catch(() => []);\n    classResults = await queryFTS('Class', 
'class_fts', query, limit, false).catch(() => []);\n    methodResults = await queryFTS('Method', 'method_fts', query, limit, false).catch(() => []);\n    interfaceResults = await queryFTS('Interface', 'interface_fts', query, limit, false).catch(() => []);\n  }\n  \n  // Merge results by filePath, summing scores for same file\n  const merged = new Map<string, { filePath: string; score: number }>();\n  \n  const addResults = (results: any[]) => {\n    for (const r of results) {\n      const existing = merged.get(r.filePath);\n      if (existing) {\n        existing.score += r.score;\n      } else {\n        merged.set(r.filePath, { filePath: r.filePath, score: r.score });\n      }\n    }\n  };\n  \n  addResults(fileResults);\n  addResults(functionResults);\n  addResults(classResults);\n  addResults(methodResults);\n  addResults(interfaceResults);\n  \n  // Sort by score descending and add rank\n  const sorted = Array.from(merged.values())\n    .sort((a, b) => b.score - a.score)\n    .slice(0, limit);\n  \n  return sorted.map((r, index) => ({\n    filePath: r.filePath,\n    score: r.score,\n    rank: index + 1,\n  }));\n};\n"
  },
  {
    "path": "gitnexus/src/core/search/hybrid-search.ts",
    "content": "/**\n * Hybrid Search with Reciprocal Rank Fusion (RRF)\n * \n * Combines BM25 (keyword) and semantic (embedding) search results.\n * Uses RRF to merge rankings without needing score normalization.\n * \n * This is the same approach used by Elasticsearch, Pinecone, and other\n * production search systems.\n */\n\nimport { searchFTSFromLbug, type BM25SearchResult } from './bm25-index.js';\nimport type { SemanticSearchResult } from '../embeddings/types.js';\n\n/**\n * RRF constant - standard value used in the literature\n * Higher values give more weight to lower-ranked results\n */\nconst RRF_K = 60;\n\nexport interface HybridSearchResult {\n  filePath: string;\n  score: number;           // RRF score\n  rank: number;            // Final rank\n  sources: ('bm25' | 'semantic')[];  // Which methods found this\n  \n  // Metadata from semantic search (if available)\n  nodeId?: string;\n  name?: string;\n  label?: string;\n  startLine?: number;\n  endLine?: number;\n  \n  // Original scores for debugging\n  bm25Score?: number;\n  semanticScore?: number;\n}\n\n/**\n * Perform hybrid search combining BM25 and semantic results\n * \n * @param bm25Results - Results from BM25 keyword search\n * @param semanticResults - Results from semantic/embedding search\n * @param limit - Maximum results to return\n * @returns Merged and re-ranked results\n */\nexport const mergeWithRRF = (\n  bm25Results: BM25SearchResult[],\n  semanticResults: SemanticSearchResult[],\n  limit: number = 10\n): HybridSearchResult[] => {\n  const merged = new Map<string, HybridSearchResult>();\n  \n  // Process BM25 results\n  for (let i = 0; i < bm25Results.length; i++) {\n    const r = bm25Results[i];\n    const rrfScore = 1 / (RRF_K + i + 1);  // i+1 because rank starts at 1\n    \n    merged.set(r.filePath, {\n      filePath: r.filePath,\n      score: rrfScore,\n      rank: 0,  // Will be set after sorting\n      sources: ['bm25'],\n      bm25Score: r.score,\n    });\n  }\n  \n  // 
Process semantic results and merge\n  for (let i = 0; i < semanticResults.length; i++) {\n    const r = semanticResults[i];\n    const rrfScore = 1 / (RRF_K + i + 1);\n    \n    const existing = merged.get(r.filePath);\n    if (existing) {\n      // Found by both methods - add scores\n      existing.score += rrfScore;\n      existing.sources.push('semantic');\n      existing.semanticScore = 1 - r.distance;\n      \n      // Add semantic metadata\n      existing.nodeId = r.nodeId;\n      existing.name = r.name;\n      existing.label = r.label;\n      existing.startLine = r.startLine;\n      existing.endLine = r.endLine;\n    } else {\n      // Only found by semantic\n      merged.set(r.filePath, {\n        filePath: r.filePath,\n        score: rrfScore,\n        rank: 0,\n        sources: ['semantic'],\n        semanticScore: 1 - r.distance,\n        nodeId: r.nodeId,\n        name: r.name,\n        label: r.label,\n        startLine: r.startLine,\n        endLine: r.endLine,\n      });\n    }\n  }\n  \n  // Sort by RRF score descending\n  const sorted = Array.from(merged.values())\n    .sort((a, b) => b.score - a.score)\n    .slice(0, limit);\n  \n  // Assign final ranks\n  sorted.forEach((r, i) => {\n    r.rank = i + 1;\n  });\n  \n  return sorted;\n};\n\n/**\n * Check if hybrid search is available\n * LadybugDB FTS is always available once the database is initialized.\n * Semantic search is optional - hybrid works with just FTS if embeddings aren't ready.\n */\nexport const isHybridSearchReady = (): boolean => {\n  return true; // FTS is always available via LadybugDB when DB is open\n};\n\n/**\n * Format hybrid results for LLM consumption\n */\nexport const formatHybridResults = (results: HybridSearchResult[]): string => {\n  if (results.length === 0) {\n    return 'No results found.';\n  }\n  \n  const formatted = results.map((r, i) => {\n    const sources = r.sources.join(' + ');\n    const location = r.startLine ? 
` (lines ${r.startLine}-${r.endLine})` : '';\n    const label = r.label ? `${r.label}: ` : 'File: ';\n    const name = r.name || r.filePath.split('/').pop() || r.filePath;\n    \n    return `[${i + 1}] ${label}${name}\n    File: ${r.filePath}${location}\n    Found by: ${sources}\n    Relevance: ${r.score.toFixed(4)}`;\n  });\n  \n  return `Found ${results.length} results:\\n\\n${formatted.join('\\n\\n')}`;\n};\n\n/**\n * Execute BM25 + semantic search and merge with RRF.\n * Uses LadybugDB FTS for always-fresh BM25 results (no cached data).\n * The semanticSearch function is injected to keep this module environment-agnostic.\n */\nexport const hybridSearch = async (\n  query: string,\n  limit: number,\n  executeQuery: (cypher: string) => Promise<any[]>,\n  semanticSearch: (executeQuery: (cypher: string) => Promise<any[]>, query: string, k?: number) => Promise<SemanticSearchResult[]>\n): Promise<HybridSearchResult[]> => {\n  // Use LadybugDB FTS for always-fresh BM25 results\n  const bm25Results = await searchFTSFromLbug(query, limit);\n  const semanticResults = await semanticSearch(executeQuery, query, limit);\n  return mergeWithRRF(bm25Results, semanticResults, limit);\n};\n"
  },
  {
    "path": "gitnexus/src/core/tree-sitter/parser-loader.ts",
    "content": "import Parser from 'tree-sitter';\nimport JavaScript from 'tree-sitter-javascript';\nimport TypeScript from 'tree-sitter-typescript';\nimport Python from 'tree-sitter-python';\nimport Java from 'tree-sitter-java';\nimport C from 'tree-sitter-c';\nimport CPP from 'tree-sitter-cpp';\nimport CSharp from 'tree-sitter-c-sharp';\nimport Go from 'tree-sitter-go';\nimport Rust from 'tree-sitter-rust';\nimport PHP from 'tree-sitter-php';\nimport Ruby from 'tree-sitter-ruby';\nimport { createRequire } from 'node:module';\nimport { SupportedLanguages } from '../../config/supported-languages.js';\n\n// tree-sitter-swift is an optionalDependency — may not be installed\nconst _require = createRequire(import.meta.url);\nlet Swift: any = null;\ntry { Swift = _require('tree-sitter-swift'); } catch {}\n\n// tree-sitter-kotlin is an optionalDependency — may not be installed\nlet Kotlin: any = null;\ntry { Kotlin = _require('tree-sitter-kotlin'); } catch {}\n\nlet parser: Parser | null = null;\n\nconst languageMap: Record<string, any> = {\n  [SupportedLanguages.JavaScript]: JavaScript,\n  [SupportedLanguages.TypeScript]: TypeScript.typescript,\n  [`${SupportedLanguages.TypeScript}:tsx`]: TypeScript.tsx,\n  [SupportedLanguages.Python]: Python,\n  [SupportedLanguages.Java]: Java,\n  [SupportedLanguages.C]: C,\n  [SupportedLanguages.CPlusPlus]: CPP,\n  [SupportedLanguages.CSharp]: CSharp,\n  [SupportedLanguages.Go]: Go,\n  [SupportedLanguages.Rust]: Rust,\n  ...(Kotlin ? { [SupportedLanguages.Kotlin]: Kotlin } : {}),\n  [SupportedLanguages.PHP]: PHP.php_only,\n  [SupportedLanguages.Ruby]: Ruby,\n  ...(Swift ? 
{ [SupportedLanguages.Swift]: Swift } : {}),\n};\n\nexport const isLanguageAvailable = (language: SupportedLanguages): boolean =>\n  language in languageMap;\n\nexport const loadParser = async (): Promise<Parser> => {\n  if (parser) return parser;\n  parser = new Parser();\n  return parser;\n};\n\nexport const loadLanguage = async (language: SupportedLanguages, filePath?: string): Promise<void> => {\n  if (!parser) await loadParser();\n  const key = language === SupportedLanguages.TypeScript && filePath?.endsWith('.tsx')\n    ? `${language}:tsx`\n    : language;\n\n  const lang = languageMap[key];\n  if (!lang) {\n    throw new Error(`Unsupported language: ${language}`);\n  }\n  parser!.setLanguage(lang);\n};\n"
  },
  {
    "path": "gitnexus/src/core/wiki/generator.ts",
    "content": "/**\n * Wiki Generator\n * \n * Orchestrates the full wiki generation pipeline:\n *   Phase 0: Validate prerequisites + gather graph structure\n *   Phase 1: Build module tree (one LLM call)\n *   Phase 2: Generate module pages (one LLM call per module, bottom-up)\n *   Phase 3: Generate overview page\n * \n * Supports incremental updates via git diff + module-file mapping.\n */\n\nimport fs from 'fs/promises';\nimport path from 'path';\nimport { execSync, execFileSync } from 'child_process';\n\nimport {\n  initWikiDb,\n  closeWikiDb,\n  getFilesWithExports,\n  getAllFiles,\n  getInterFileCallEdges,\n  getIntraModuleCallEdges,\n  getInterModuleCallEdges,\n  getProcessesForFiles,\n  getAllProcesses,\n  getInterModuleEdgesForOverview,\n  type FileWithExports,\n} from './graph-queries.js';\nimport { generateHTMLViewer } from './html-viewer.js';\n\nimport {\n  callLLM,\n  estimateTokens,\n  type LLMConfig,\n  type CallLLMOptions,\n} from './llm-client.js';\n\nimport {\n  GROUPING_SYSTEM_PROMPT,\n  GROUPING_USER_PROMPT,\n  MODULE_SYSTEM_PROMPT,\n  MODULE_USER_PROMPT,\n  PARENT_SYSTEM_PROMPT,\n  PARENT_USER_PROMPT,\n  OVERVIEW_SYSTEM_PROMPT,\n  OVERVIEW_USER_PROMPT,\n  fillTemplate,\n  formatFileListForGrouping,\n  formatDirectoryTree,\n  formatCallEdges,\n  formatProcesses,\n} from './prompts.js';\n\nimport { shouldIgnorePath } from '../../config/ignore-service.js';\n\n// ─── Types ────────────────────────────────────────────────────────────\n\nexport interface WikiOptions {\n  force?: boolean;\n  model?: string;\n  baseUrl?: string;\n  apiKey?: string;\n  maxTokensPerModule?: number;\n  concurrency?: number;\n}\n\nexport interface WikiMeta {\n  fromCommit: string;\n  generatedAt: string;\n  model: string;\n  moduleFiles: Record<string, string[]>;\n  moduleTree: ModuleTreeNode[];\n}\n\nexport interface ModuleTreeNode {\n  name: string;\n  slug: string;\n  files: string[];\n  children?: ModuleTreeNode[];\n}\n\nexport type ProgressCallback = (phase: 
string, percent: number, detail?: string) => void;\n\n// ─── Constants ────────────────────────────────────────────────────────\n\nconst DEFAULT_MAX_TOKENS_PER_MODULE = 30_000;\nconst WIKI_DIR = 'wiki';\n\n// ─── Generator Class ──────────────────────────────────────────────────\n\nexport class WikiGenerator {\n  private repoPath: string;\n  private storagePath: string;\n  private wikiDir: string;\n  private lbugPath: string;\n  private llmConfig: LLMConfig;\n  private maxTokensPerModule: number;\n  private concurrency: number;\n  private options: WikiOptions;\n  private onProgress: ProgressCallback;\n  private failedModules: string[] = [];\n\n  constructor(\n    repoPath: string,\n    storagePath: string,\n    lbugPath: string,\n    llmConfig: LLMConfig,\n    options: WikiOptions = {},\n    onProgress?: ProgressCallback,\n  ) {\n    this.repoPath = repoPath;\n    this.storagePath = storagePath;\n    this.wikiDir = path.join(storagePath, WIKI_DIR);\n    this.lbugPath = lbugPath;\n    this.options = options;\n    this.llmConfig = llmConfig;\n    this.maxTokensPerModule = options.maxTokensPerModule ?? DEFAULT_MAX_TOKENS_PER_MODULE;\n    this.concurrency = options.concurrency ?? 3;\n    const progressFn = onProgress || (() => {});\n    this.onProgress = (phase, percent, detail) => {\n      if (percent > 0) this.lastPercent = percent;\n      progressFn(phase, percent, detail);\n    };\n  }\n\n  private lastPercent = 0;\n\n  /**\n   * Create streaming options that report LLM progress to the progress bar.\n   * Uses the last known percent so streaming doesn't reset the bar backwards.\n   */\n  private streamOpts(label: string, fixedPercent?: number): CallLLMOptions {\n    return {\n      onChunk: (chars: number) => {\n        const tokens = Math.round(chars / 4);\n        const pct = fixedPercent ?? this.lastPercent;\n        this.onProgress('stream', pct, `${label} (${tokens} tok)`);\n      },\n    };\n  }\n\n  /**\n   * Main entry point. 
Runs the full pipeline or incremental update.\n   */\n  async run(): Promise<{ pagesGenerated: number; mode: 'full' | 'incremental' | 'up-to-date'; failedModules: string[] }> {\n    await fs.mkdir(this.wikiDir, { recursive: true });\n\n    const existingMeta = await this.loadWikiMeta();\n    const currentCommit = this.getCurrentCommit();\n    const forceMode = this.options.force;\n\n    // Up-to-date check (skip if --force)\n    if (!forceMode && existingMeta && existingMeta.fromCommit === currentCommit) {\n      // Still regenerate the HTML viewer in case it's missing\n      await this.ensureHTMLViewer();\n      return { pagesGenerated: 0, mode: 'up-to-date', failedModules: [] };\n    }\n\n    // Force mode: delete snapshot to force full re-grouping\n    if (forceMode) {\n      try { await fs.unlink(path.join(this.wikiDir, 'first_module_tree.json')); } catch {}\n      // Delete existing module pages so they get regenerated\n      const existingFiles = await fs.readdir(this.wikiDir).catch(() => [] as string[]);\n      for (const f of existingFiles) {\n        if (f.endsWith('.md')) {\n          try { await fs.unlink(path.join(this.wikiDir, f)); } catch {}\n        }\n      }\n    }\n\n    // Init graph\n    this.onProgress('init', 2, 'Connecting to knowledge graph...');\n    await initWikiDb(this.lbugPath);\n\n    let result: { pagesGenerated: number; mode: 'full' | 'incremental' | 'up-to-date'; failedModules: string[] };\n    try {\n      if (!forceMode && existingMeta && existingMeta.fromCommit) {\n        result = await this.incrementalUpdate(existingMeta, currentCommit);\n      } else {\n        result = await this.fullGeneration(currentCommit);\n      }\n    } finally {\n      await closeWikiDb();\n    }\n\n    // Always generate the HTML viewer after wiki content changes\n    await this.ensureHTMLViewer();\n\n    return result;\n  }\n\n  // ─── HTML Viewer ─────────────────────────────────────────────────────\n\n  private async ensureHTMLViewer(): 
Promise<void> {\n    // Only generate if there are markdown pages to bundle\n    const dirEntries = await fs.readdir(this.wikiDir).catch(() => [] as string[]);\n    const hasMd = dirEntries.some(f => f.endsWith('.md'));\n    if (!hasMd) return;\n\n    this.onProgress('html', 98, 'Building HTML viewer...');\n    const repoName = path.basename(this.repoPath);\n    await generateHTMLViewer(this.wikiDir, repoName);\n  }\n\n  // ─── Full Generation ────────────────────────────────────────────────\n\n  private async fullGeneration(currentCommit: string): Promise<{ pagesGenerated: number; mode: 'full'; failedModules: string[] }> {\n    let pagesGenerated = 0;\n\n    // Phase 0: Gather structure\n    this.onProgress('gather', 5, 'Querying graph for file structure...');\n    const filesWithExports = await getFilesWithExports();\n    const allFiles = await getAllFiles();\n\n    // Filter to source files only\n    const sourceFiles = allFiles.filter(f => !shouldIgnorePath(f));\n    if (sourceFiles.length === 0) {\n      throw new Error('No source files found in the knowledge graph. Nothing to document.');\n    }\n\n    // Build enriched file list (merge exports into all source files)\n    const exportMap = new Map(filesWithExports.map(f => [f.filePath, f]));\n    const enrichedFiles: FileWithExports[] = sourceFiles.map(fp => {\n      return exportMap.get(fp) || { filePath: fp, symbols: [] };\n    });\n\n    this.onProgress('gather', 10, `Found ${sourceFiles.length} source files`);\n\n    // Phase 1: Build module tree\n    const moduleTree = await this.buildModuleTree(enrichedFiles);\n    pagesGenerated = 0;\n\n    // Phase 2: Generate module pages (parallel with concurrency limit)\n    const totalModules = this.countModules(moduleTree);\n    let modulesProcessed = 0;\n\n    const reportProgress = (moduleName?: string) => {\n      modulesProcessed++;\n      const percent = 30 + Math.round((modulesProcessed / totalModules) * 55);\n      const detail = moduleName\n        ? 
`${modulesProcessed}/${totalModules} — ${moduleName}`\n        : `${modulesProcessed}/${totalModules} modules`;\n      this.onProgress('modules', percent, detail);\n    };\n\n    // Flatten tree into layers: leaves first, then parents\n    // Leaves can run in parallel; parents must wait for their children\n    const { leaves, parents } = this.flattenModuleTree(moduleTree);\n\n    // Process all leaf modules in parallel\n    pagesGenerated += await this.runParallel(leaves, async (node) => {\n      const pagePath = path.join(this.wikiDir, `${node.slug}.md`);\n      if (await this.fileExists(pagePath)) {\n        reportProgress(node.name);\n        return 0;\n      }\n      try {\n        await this.generateLeafPage(node);\n        reportProgress(node.name);\n        return 1;\n      } catch (err: any) {\n        this.failedModules.push(node.name);\n        reportProgress(`Failed: ${node.name}`);\n        return 0;\n      }\n    });\n\n    // Process parent modules sequentially (they depend on child docs)\n    for (const node of parents) {\n      const pagePath = path.join(this.wikiDir, `${node.slug}.md`);\n      if (await this.fileExists(pagePath)) {\n        reportProgress(node.name);\n        continue;\n      }\n      try {\n        await this.generateParentPage(node);\n        pagesGenerated++;\n        reportProgress(node.name);\n      } catch (err: any) {\n        this.failedModules.push(node.name);\n        reportProgress(`Failed: ${node.name}`);\n      }\n    }\n\n    // Phase 3: Generate overview\n    this.onProgress('overview', 88, 'Generating overview page...');\n    await this.generateOverview(moduleTree);\n    pagesGenerated++;\n\n    // Save metadata\n    this.onProgress('finalize', 95, 'Saving metadata...');\n    const moduleFiles = this.extractModuleFiles(moduleTree);\n    await this.saveModuleTree(moduleTree);\n    await this.saveWikiMeta({\n      fromCommit: currentCommit,\n      generatedAt: new Date().toISOString(),\n      model: 
this.llmConfig.model,\n      moduleFiles,\n      moduleTree,\n    });\n\n    this.onProgress('done', 100, 'Wiki generation complete');\n    return { pagesGenerated, mode: 'full', failedModules: [...this.failedModules] };\n  }\n\n  // ─── Phase 1: Build Module Tree ────────────────────────────────────\n\n  private async buildModuleTree(files: FileWithExports[]): Promise<ModuleTreeNode[]> {\n    // Check for existing immutable snapshot (resumability)\n    const snapshotPath = path.join(this.wikiDir, 'first_module_tree.json');\n    try {\n      const existing = await fs.readFile(snapshotPath, 'utf-8');\n      const parsed = JSON.parse(existing);\n      if (Array.isArray(parsed) && parsed.length > 0) {\n        this.onProgress('grouping', 25, 'Using existing module tree (resuming)');\n        return parsed;\n      }\n    } catch {\n      // No snapshot, generate new\n    }\n\n    this.onProgress('grouping', 15, 'Grouping files into modules (LLM)...');\n\n    const fileList = formatFileListForGrouping(files);\n    const dirTree = formatDirectoryTree(files.map(f => f.filePath));\n\n    const prompt = fillTemplate(GROUPING_USER_PROMPT, {\n      FILE_LIST: fileList,\n      DIRECTORY_TREE: dirTree,\n    });\n\n    const response = await callLLM(\n      prompt, this.llmConfig, GROUPING_SYSTEM_PROMPT,\n      this.streamOpts('Grouping files', 15),\n    );\n    const grouping = this.parseGroupingResponse(response.content, files);\n\n    // Convert to tree nodes\n    const tree: ModuleTreeNode[] = [];\n    for (const [moduleName, modulePaths] of Object.entries(grouping)) {\n      const slug = this.slugify(moduleName);\n      const node: ModuleTreeNode = { name: moduleName, slug, files: modulePaths };\n\n      // Token budget check — split if too large\n      const totalTokens = await this.estimateModuleTokens(modulePaths);\n      if (totalTokens > this.maxTokensPerModule && modulePaths.length > 3) {\n        node.children = this.splitBySubdirectory(moduleName, modulePaths);\n    
     node.files = []; // Parent doesn't own files directly when split\n      }\n\n      tree.push(node);\n    }\n\n    // Save immutable snapshot for resumability\n    await fs.writeFile(snapshotPath, JSON.stringify(tree, null, 2), 'utf-8');\n    this.onProgress('grouping', 28, `Created ${tree.length} modules`);\n\n    return tree;\n  }\n\n  /**\n   * Parse LLM grouping response. Validates all files are assigned.\n   */\n  private parseGroupingResponse(\n    content: string,\n    files: FileWithExports[],\n  ): Record<string, string[]> {\n    // Extract JSON from response (handle markdown fences)\n    let jsonStr = content.trim();\n    const fenceMatch = jsonStr.match(/```(?:json)?\\s*\\n?([\\s\\S]*?)\\n?```/);\n    if (fenceMatch) {\n      jsonStr = fenceMatch[1].trim();\n    }\n\n    let parsed: Record<string, string[]>;\n    try {\n      parsed = JSON.parse(jsonStr);\n    } catch {\n      // Fallback: group by top-level directory\n      return this.fallbackGrouping(files);\n    }\n\n    if (typeof parsed !== 'object' || Array.isArray(parsed)) {\n      return this.fallbackGrouping(files);\n    }\n\n    // Validate — ensure all files are assigned\n    const allFilePaths = new Set(files.map(f => f.filePath));\n    const assignedFiles = new Set<string>();\n    const validGrouping: Record<string, string[]> = {};\n\n    for (const [mod, paths] of Object.entries(parsed)) {\n      if (!Array.isArray(paths)) continue;\n      const validPaths = paths.filter(p => {\n        if (allFilePaths.has(p) && !assignedFiles.has(p)) {\n          assignedFiles.add(p);\n          return true;\n        }\n        return false;\n      });\n      if (validPaths.length > 0) {\n        validGrouping[mod] = validPaths;\n      }\n    }\n\n    // Assign unassigned files to an \"Other\" catch-all module\n    const unassigned = files\n      .map(f => f.filePath)\n      .filter(fp => !assignedFiles.has(fp));\n    if (unassigned.length > 0) {\n      validGrouping['Other'] = unassigned;\n    }\n\n    
return Object.keys(validGrouping).length > 0\n      ? validGrouping\n      : this.fallbackGrouping(files);\n  }\n\n  /**\n   * Fallback grouping by top-level directory when LLM parsing fails.\n   */\n  private fallbackGrouping(files: FileWithExports[]): Record<string, string[]> {\n    const groups = new Map<string, string[]>();\n    for (const f of files) {\n      const parts = f.filePath.replace(/\\\\/g, '/').split('/');\n      const topDir = parts.length > 1 ? parts[0] : 'Root';\n      let group = groups.get(topDir);\n      if (!group) { group = []; groups.set(topDir, group); }\n      group.push(f.filePath);\n    }\n    return Object.fromEntries(groups);\n  }\n\n  /**\n   * Split a large module into sub-modules by subdirectory.\n   */\n  private splitBySubdirectory(moduleName: string, files: string[]): ModuleTreeNode[] {\n    const subGroups = new Map<string, string[]>();\n    for (const fp of files) {\n      const parts = fp.replace(/\\\\/g, '/').split('/');\n      // Use the deepest common-ish directory\n      const subDir = parts.length > 2 ? 
parts.slice(0, 2).join('/') : parts[0];\n      let group = subGroups.get(subDir);\n      if (!group) { group = []; subGroups.set(subDir, group); }\n      group.push(fp);\n    }\n\n    return Array.from(subGroups.entries()).map(([subDir, subFiles]) => ({\n      name: `${moduleName} — ${path.basename(subDir)}`,\n      slug: this.slugify(`${moduleName}-${path.basename(subDir)}`),\n      files: subFiles,\n    }));\n  }\n\n  // ─── Phase 2: Generate Module Pages ─────────────────────────────────\n\n  /**\n   * Generate a leaf module page from source code + graph data.\n   */\n  private async generateLeafPage(node: ModuleTreeNode): Promise<void> {\n    const filePaths = node.files;\n\n    // Read source files from disk\n    const sourceCode = await this.readSourceFiles(filePaths);\n\n    // Token budget check — if too large, summarize in batches\n    const totalTokens = estimateTokens(sourceCode);\n    let finalSourceCode = sourceCode;\n    if (totalTokens > this.maxTokensPerModule) {\n      finalSourceCode = this.truncateSource(sourceCode, this.maxTokensPerModule);\n    }\n\n    // Get graph data\n    const [intraCalls, interCalls, processes] = await Promise.all([\n      getIntraModuleCallEdges(filePaths),\n      getInterModuleCallEdges(filePaths),\n      getProcessesForFiles(filePaths, 5),\n    ]);\n\n    const prompt = fillTemplate(MODULE_USER_PROMPT, {\n      MODULE_NAME: node.name,\n      SOURCE_CODE: finalSourceCode,\n      INTRA_CALLS: formatCallEdges(intraCalls),\n      OUTGOING_CALLS: formatCallEdges(interCalls.outgoing),\n      INCOMING_CALLS: formatCallEdges(interCalls.incoming),\n      PROCESSES: formatProcesses(processes),\n    });\n\n    const response = await callLLM(\n      prompt, this.llmConfig, MODULE_SYSTEM_PROMPT,\n      this.streamOpts(node.name),\n    );\n\n    // Write page with front matter\n    const pageContent = `# ${node.name}\\n\\n${response.content}`;\n    await fs.writeFile(path.join(this.wikiDir, `${node.slug}.md`), pageContent, 
'utf-8');\n  }\n\n  /**\n   * Generate a parent module page from children's documentation.\n   */\n  private async generateParentPage(node: ModuleTreeNode): Promise<void> {\n    if (!node.children || node.children.length === 0) return;\n\n    // Read children's overview sections\n    const childDocs: string[] = [];\n    for (const child of node.children) {\n      const childPage = path.join(this.wikiDir, `${child.slug}.md`);\n      try {\n        const content = await fs.readFile(childPage, 'utf-8');\n        // Extract overview section (up to \"### Architecture\", or the first ~800 chars)\n        const overviewEnd = content.indexOf('### Architecture');\n        const overview = overviewEnd > 0 ? content.slice(0, overviewEnd).trim() : content.slice(0, 800).trim();\n        childDocs.push(`#### ${child.name}\\n${overview}`);\n      } catch {\n        childDocs.push(`#### ${child.name}\\n(Documentation not yet generated)`);\n      }\n    }\n\n    // Get cross-child call edges\n    const allChildFiles = node.children.flatMap(c => c.files);\n    const crossCalls = await getIntraModuleCallEdges(allChildFiles);\n    const processes = await getProcessesForFiles(allChildFiles, 3);\n\n    const prompt = fillTemplate(PARENT_USER_PROMPT, {\n      MODULE_NAME: node.name,\n      CHILDREN_DOCS: childDocs.join('\\n\\n'),\n      CROSS_MODULE_CALLS: formatCallEdges(crossCalls),\n      CROSS_PROCESSES: formatProcesses(processes),\n    });\n\n    const response = await callLLM(\n      prompt, this.llmConfig, PARENT_SYSTEM_PROMPT,\n      this.streamOpts(node.name),\n    );\n\n    const pageContent = `# ${node.name}\\n\\n${response.content}`;\n    await fs.writeFile(path.join(this.wikiDir, `${node.slug}.md`), pageContent, 'utf-8');\n  }\n\n  // ─── Phase 3: Generate Overview ─────────────────────────────────────\n\n  private async generateOverview(moduleTree: ModuleTreeNode[]): Promise<void> {\n    // Read module overview sections\n    const moduleSummaries: string[] = [];\n    for (const 
node of moduleTree) {\n      const pagePath = path.join(this.wikiDir, `${node.slug}.md`);\n      try {\n        const content = await fs.readFile(pagePath, 'utf-8');\n        const overviewEnd = content.indexOf('### Architecture');\n        const overview = overviewEnd > 0 ? content.slice(0, overviewEnd).trim() : content.slice(0, 600).trim();\n        moduleSummaries.push(`#### ${node.name}\\n${overview}`);\n      } catch {\n        moduleSummaries.push(`#### ${node.name}\\n(Documentation pending)`);\n      }\n    }\n\n    // Get inter-module edges for architecture diagram\n    const moduleFiles = this.extractModuleFiles(moduleTree);\n    const moduleEdges = await getInterModuleEdgesForOverview(moduleFiles);\n\n    // Get top processes for key workflows\n    const topProcesses = await getAllProcesses(5);\n\n    // Read project config\n    const projectInfo = await this.readProjectInfo();\n\n    const edgesText = moduleEdges.length > 0\n      ? moduleEdges.map(e => `${e.from} → ${e.to} (${e.count} calls)`).join('\\n')\n      : 'No inter-module call edges detected';\n\n    const prompt = fillTemplate(OVERVIEW_USER_PROMPT, {\n      PROJECT_INFO: projectInfo,\n      MODULE_SUMMARIES: moduleSummaries.join('\\n\\n'),\n      MODULE_EDGES: edgesText,\n      TOP_PROCESSES: formatProcesses(topProcesses),\n    });\n\n    const response = await callLLM(\n      prompt, this.llmConfig, OVERVIEW_SYSTEM_PROMPT,\n      this.streamOpts('Generating overview', 88),\n    );\n\n    const pageContent = `# ${path.basename(this.repoPath)} — Wiki\\n\\n${response.content}`;\n    await fs.writeFile(path.join(this.wikiDir, 'overview.md'), pageContent, 'utf-8');\n  }\n\n  // ─── Incremental Updates ────────────────────────────────────────────\n\n  private async incrementalUpdate(\n    existingMeta: WikiMeta,\n    currentCommit: string,\n  ): Promise<{ pagesGenerated: number; mode: 'incremental'; failedModules: string[] }> {\n    this.onProgress('incremental', 5, 'Detecting changes...');\n\n    
// Get changed files since last generation\n    const changedFiles = this.getChangedFiles(existingMeta.fromCommit, currentCommit);\n    if (changedFiles.length === 0) {\n      // No file changes but commit differs (e.g. merge commit)\n      await this.saveWikiMeta({\n        ...existingMeta,\n        fromCommit: currentCommit,\n        generatedAt: new Date().toISOString(),\n      });\n      return { pagesGenerated: 0, mode: 'incremental', failedModules: [] };\n    }\n\n    this.onProgress('incremental', 10, `${changedFiles.length} files changed`);\n\n    // Determine affected modules\n    const affectedModules = new Set<string>();\n    const newFiles: string[] = [];\n\n    for (const fp of changedFiles) {\n      let found = false;\n      for (const [mod, files] of Object.entries(existingMeta.moduleFiles)) {\n        if (files.includes(fp)) {\n          affectedModules.add(mod);\n          found = true;\n          break;\n        }\n      }\n      if (!found && !shouldIgnorePath(fp)) {\n        newFiles.push(fp);\n      }\n    }\n\n    // If significant new files exist, re-run full grouping\n    if (newFiles.length > 5) {\n      this.onProgress('incremental', 15, 'Significant new files detected, running full generation...');\n      // Delete old snapshot to force re-grouping\n      try { await fs.unlink(path.join(this.wikiDir, 'first_module_tree.json')); } catch {}\n      const fullResult = await this.fullGeneration(currentCommit);\n      return { ...fullResult, mode: 'incremental' };\n    }\n\n    // Add new files to the \"Other\" catch-all module\n    if (newFiles.length > 0) {\n      if (!existingMeta.moduleFiles['Other']) {\n        existingMeta.moduleFiles['Other'] = [];\n      }\n      existingMeta.moduleFiles['Other'].push(...newFiles);\n      affectedModules.add('Other');\n    }\n\n    // Regenerate affected module pages (parallel)\n    let pagesGenerated = 0;\n    const moduleTree = existingMeta.moduleTree;\n    const affectedArray = 
Array.from(affectedModules);\n\n    this.onProgress('incremental', 20, `Regenerating ${affectedArray.length} module(s)...`);\n\n    const affectedNodes: ModuleTreeNode[] = [];\n    for (const mod of affectedArray) {\n      const modSlug = this.slugify(mod);\n      const node = this.findNodeBySlug(moduleTree, modSlug);\n      if (node) {\n        try { await fs.unlink(path.join(this.wikiDir, `${node.slug}.md`)); } catch {}\n        affectedNodes.push(node);\n      }\n    }\n\n    let incProcessed = 0;\n    pagesGenerated += await this.runParallel(affectedNodes, async (node) => {\n      try {\n        if (node.children && node.children.length > 0) {\n          await this.generateParentPage(node);\n        } else {\n          await this.generateLeafPage(node);\n        }\n        incProcessed++;\n        const percent = 20 + Math.round((incProcessed / affectedNodes.length) * 60);\n        this.onProgress('incremental', percent, `${incProcessed}/${affectedNodes.length} — ${node.name}`);\n        return 1;\n      } catch (err: any) {\n        this.failedModules.push(node.name);\n        incProcessed++;\n        return 0;\n      }\n    });\n\n    // Regenerate overview if any pages changed\n    if (pagesGenerated > 0) {\n      this.onProgress('incremental', 85, 'Updating overview...');\n      await this.generateOverview(moduleTree);\n      pagesGenerated++;\n    }\n\n    // Save updated metadata\n    this.onProgress('incremental', 95, 'Saving metadata...');\n    await this.saveWikiMeta({\n      ...existingMeta,\n      fromCommit: currentCommit,\n      generatedAt: new Date().toISOString(),\n      model: this.llmConfig.model,\n    });\n\n    this.onProgress('done', 100, 'Incremental update complete');\n    return { pagesGenerated, mode: 'incremental', failedModules: [...this.failedModules] };\n  }\n\n  // ─── Helpers ────────────────────────────────────────────────────────\n\n  private getCurrentCommit(): string {\n    try {\n      return execSync('git rev-parse HEAD', { 
cwd: this.repoPath }).toString().trim();\n    } catch {\n      return '';\n    }\n  }\n\n  private getChangedFiles(fromCommit: string, toCommit: string): string[] {\n    try {\n      const output = execFileSync(\n        'git', ['diff', `${fromCommit}..${toCommit}`, '--name-only'],\n        { cwd: this.repoPath },\n      ).toString().trim();\n      return output ? output.split('\\n').filter(Boolean) : [];\n    } catch {\n      return [];\n    }\n  }\n\n  private async readSourceFiles(filePaths: string[]): Promise<string> {\n    const parts: string[] = [];\n    for (const fp of filePaths) {\n      const fullPath = path.join(this.repoPath, fp);\n      try {\n        const content = await fs.readFile(fullPath, 'utf-8');\n        parts.push(`\\n--- ${fp} ---\\n${content}`);\n      } catch {\n        parts.push(`\\n--- ${fp} ---\\n(file not readable)`);\n      }\n    }\n    return parts.join('\\n');\n  }\n\n  private truncateSource(source: string, maxTokens: number): string {\n    // Rough truncation: keep first maxTokens*4 chars and add notice\n    const maxChars = maxTokens * 4;\n    if (source.length <= maxChars) return source;\n    return source.slice(0, maxChars) + '\\n\\n... 
(source truncated for context window limits)';\n  }\n\n  private async estimateModuleTokens(filePaths: string[]): Promise<number> {\n    let total = 0;\n    for (const fp of filePaths) {\n      try {\n        const content = await fs.readFile(path.join(this.repoPath, fp), 'utf-8');\n        total += estimateTokens(content);\n      } catch {\n        // File not readable, skip\n      }\n    }\n    return total;\n  }\n\n  private async readProjectInfo(): Promise<string> {\n    const candidates = ['package.json', 'Cargo.toml', 'pyproject.toml', 'go.mod', 'pom.xml', 'build.gradle'];\n    const lines: string[] = [`Project: ${path.basename(this.repoPath)}`];\n\n    for (const file of candidates) {\n      const fullPath = path.join(this.repoPath, file);\n      try {\n        const content = await fs.readFile(fullPath, 'utf-8');\n        if (file === 'package.json') {\n          const pkg = JSON.parse(content);\n          if (pkg.name) lines.push(`Name: ${pkg.name}`);\n          if (pkg.description) lines.push(`Description: ${pkg.description}`);\n          if (pkg.scripts) lines.push(`Scripts: ${Object.keys(pkg.scripts).join(', ')}`);\n        } else {\n          // Include first 500 chars of other config files\n          lines.push(`\\n${file}:\\n${content.slice(0, 500)}`);\n        }\n        break; // Use first config found\n      } catch {\n        continue;\n      }\n    }\n\n    // Read README excerpt\n    for (const readme of ['README.md', 'readme.md', 'README.txt']) {\n      try {\n        const content = await fs.readFile(path.join(this.repoPath, readme), 'utf-8');\n        lines.push(`\\nREADME excerpt:\\n${content.slice(0, 1000)}`);\n        break;\n      } catch {\n        continue;\n      }\n    }\n\n    return lines.join('\\n');\n  }\n\n  private extractModuleFiles(tree: ModuleTreeNode[]): Record<string, string[]> {\n    const result: Record<string, string[]> = {};\n    for (const node of tree) {\n      if (node.children && node.children.length > 0) {\n       
 result[node.name] = node.children.flatMap(c => c.files);\n        for (const child of node.children) {\n          result[child.name] = child.files;\n        }\n      } else {\n        result[node.name] = node.files;\n      }\n    }\n    return result;\n  }\n\n  private countModules(tree: ModuleTreeNode[]): number {\n    let count = 0;\n    for (const node of tree) {\n      count++;\n      if (node.children) {\n        count += node.children.length;\n      }\n    }\n    return count;\n  }\n\n  /**\n   * Flatten the module tree into leaf nodes and parent nodes.\n   * Leaves can be processed in parallel; parents must wait for children.\n   */\n  private flattenModuleTree(tree: ModuleTreeNode[]): { leaves: ModuleTreeNode[]; parents: ModuleTreeNode[] } {\n    const leaves: ModuleTreeNode[] = [];\n    const parents: ModuleTreeNode[] = [];\n\n    for (const node of tree) {\n      if (node.children && node.children.length > 0) {\n        for (const child of node.children) {\n          leaves.push(child);\n        }\n        parents.push(node);\n      } else {\n        leaves.push(node);\n      }\n    }\n\n    return { leaves, parents };\n  }\n\n  /**\n   * Run async tasks in parallel with a concurrency limit and adaptive rate limiting.\n   * If a 429 rate limit is hit, concurrency is temporarily reduced.\n   */\n  private async runParallel<T>(\n    items: T[],\n    fn: (item: T) => Promise<number>,\n  ): Promise<number> {\n    let total = 0;\n    let activeConcurrency = this.concurrency;\n    let running = 0;\n    let idx = 0;\n\n    return new Promise((resolve, reject) => {\n      const next = () => {\n        while (running < activeConcurrency && idx < items.length) {\n          const item = items[idx++];\n          running++;\n\n          fn(item)\n            .then((count) => {\n              total += count;\n              running--;\n              if (idx >= items.length && running === 0) {\n                resolve(total);\n              } else {\n                
next();\n              }\n            })\n            .catch((err) => {\n              running--;\n              // On rate limit, reduce concurrency temporarily\n              if (err.message?.includes('429')) {\n                activeConcurrency = Math.max(1, activeConcurrency - 1);\n                this.onProgress('modules', this.lastPercent, `Rate limited — concurrency → ${activeConcurrency}`);\n                // Re-queue the failed item at the tail. (Decrementing idx would retry\n                // the wrong item, since idx may have advanced past other dispatched\n                // items by the time this rejection fires.)\n                items.push(item);\n                setTimeout(next, 5000);\n              } else {\n                if (idx >= items.length && running === 0) {\n                  resolve(total);\n                } else {\n                  next();\n                }\n              }\n            });\n        }\n      };\n\n      if (items.length === 0) {\n        resolve(0);\n      } else {\n        next();\n      }\n    });\n  }\n\n  private findNodeBySlug(tree: ModuleTreeNode[], slug: string): ModuleTreeNode | null {\n    for (const node of tree) {\n      if (node.slug === slug) return node;\n      if (node.children) {\n        const found = this.findNodeBySlug(node.children, slug);\n        if (found) return found;\n      }\n    }\n    return null;\n  }\n\n  private slugify(name: string): string {\n    return name\n      .toLowerCase()\n      .replace(/[^a-z0-9]+/g, '-')\n      .replace(/^-+|-+$/g, '')\n      .slice(0, 60);\n  }\n\n  private async fileExists(fp: string): Promise<boolean> {\n    try {\n      await fs.access(fp);\n      return true;\n    } catch {\n      return false;\n    }\n  }\n\n  private async loadWikiMeta(): Promise<WikiMeta | null> {\n    try {\n      const raw = await fs.readFile(path.join(this.wikiDir, 'meta.json'), 'utf-8');\n      return JSON.parse(raw) as WikiMeta;\n    } catch {\n      return null;\n    }\n  }\n\n  private async saveWikiMeta(meta: WikiMeta): Promise<void> {\n    await fs.writeFile(\n      path.join(this.wikiDir, 'meta.json'),\n      JSON.stringify(meta, null, 2),\n      'utf-8',\n    );\n  }\n\n  
private async saveModuleTree(tree: ModuleTreeNode[]): Promise<void> {\n    await fs.writeFile(\n      path.join(this.wikiDir, 'module_tree.json'),\n      JSON.stringify(tree, null, 2),\n      'utf-8',\n    );\n  }\n}\n"
  },
  {
    "path": "gitnexus/src/core/wiki/graph-queries.ts",
    "content": "/**\n * Graph Queries for Wiki Generation\n *\n * Encapsulated Cypher queries against the GitNexus knowledge graph.\n * Uses the MCP-style pooled lbug-adapter for connection management.\n */\n\nimport { initLbug, executeQuery, closeLbug } from '../../mcp/core/lbug-adapter.js';\n\nconst REPO_ID = '__wiki__';\n\nexport interface FileWithExports {\n  filePath: string;\n  symbols: Array<{ name: string; type: string }>;\n}\n\nexport interface CallEdge {\n  fromFile: string;\n  fromName: string;\n  toFile: string;\n  toName: string;\n}\n\nexport interface ProcessInfo {\n  id: string;\n  label: string;\n  type: string;\n  stepCount: number;\n  steps: Array<{\n    step: number;\n    name: string;\n    filePath: string;\n    type: string;\n  }>;\n}\n\n/**\n * Initialize the LadybugDB connection for wiki generation.\n */\nexport async function initWikiDb(lbugPath: string): Promise<void> {\n  await initLbug(REPO_ID, lbugPath);\n}\n\n/**\n * Close the LadybugDB connection.\n */\nexport async function closeWikiDb(): Promise<void> {\n  await closeLbug(REPO_ID);\n}\n\n/**\n * Get all source files with their exported symbol names and types.\n */\nexport async function getFilesWithExports(): Promise<FileWithExports[]> {\n  const rows = await executeQuery(REPO_ID, `\n    MATCH (f:File)-[:CodeRelation {type: 'DEFINES'}]->(n)\n    WHERE n.isExported = true\n    RETURN f.filePath AS filePath, n.name AS name, labels(n)[0] AS type\n    ORDER BY f.filePath\n  `);\n\n  const fileMap = new Map<string, FileWithExports>();\n  for (const row of rows) {\n    const fp = row.filePath || row[0];\n    const name = row.name || row[1];\n    const type = row.type || row[2];\n\n    let entry = fileMap.get(fp);\n    if (!entry) {\n      entry = { filePath: fp, symbols: [] };\n      fileMap.set(fp, entry);\n    }\n    entry.symbols.push({ name, type });\n  }\n\n  return Array.from(fileMap.values());\n}\n\n/**\n * Get all files tracked in the graph (including those with no exports).\n 
*/\nexport async function getAllFiles(): Promise<string[]> {\n  const rows = await executeQuery(REPO_ID, `\n    MATCH (f:File)\n    RETURN f.filePath AS filePath\n    ORDER BY f.filePath\n  `);\n  return rows.map(r => r.filePath || r[0]);\n}\n\n/**\n * Get inter-file call edges (calls between different files).\n */\nexport async function getInterFileCallEdges(): Promise<CallEdge[]> {\n  const rows = await executeQuery(REPO_ID, `\n    MATCH (a)-[:CodeRelation {type: 'CALLS'}]->(b)\n    WHERE a.filePath <> b.filePath\n    RETURN DISTINCT a.filePath AS fromFile, a.name AS fromName,\n           b.filePath AS toFile, b.name AS toName\n  `);\n\n  return rows.map(r => ({\n    fromFile: r.fromFile || r[0],\n    fromName: r.fromName || r[1],\n    toFile: r.toFile || r[2],\n    toName: r.toName || r[3],\n  }));\n}\n\n/**\n * Get call edges between files within a specific set (intra-module).\n */\nexport async function getIntraModuleCallEdges(filePaths: string[]): Promise<CallEdge[]> {\n  if (filePaths.length === 0) return [];\n\n  const fileList = filePaths.map(f => `'${f.replace(/'/g, \"''\")}'`).join(', ');\n  const rows = await executeQuery(REPO_ID, `\n    MATCH (a)-[:CodeRelation {type: 'CALLS'}]->(b)\n    WHERE a.filePath IN [${fileList}] AND b.filePath IN [${fileList}]\n    RETURN DISTINCT a.filePath AS fromFile, a.name AS fromName,\n           b.filePath AS toFile, b.name AS toName\n  `);\n\n  return rows.map(r => ({\n    fromFile: r.fromFile || r[0],\n    fromName: r.fromName || r[1],\n    toFile: r.toFile || r[2],\n    toName: r.toName || r[3],\n  }));\n}\n\n/**\n * Get call edges crossing module boundaries (external calls from/to module files).\n */\nexport async function getInterModuleCallEdges(filePaths: string[]): Promise<{\n  outgoing: CallEdge[];\n  incoming: CallEdge[];\n}> {\n  if (filePaths.length === 0) return { outgoing: [], incoming: [] };\n\n  const fileList = filePaths.map(f => `'${f.replace(/'/g, \"''\")}'`).join(', ');\n\n  const outRows = await 
executeQuery(REPO_ID, `\n    MATCH (a)-[:CodeRelation {type: 'CALLS'}]->(b)\n    WHERE a.filePath IN [${fileList}] AND NOT b.filePath IN [${fileList}]\n    RETURN DISTINCT a.filePath AS fromFile, a.name AS fromName,\n           b.filePath AS toFile, b.name AS toName\n    LIMIT 30\n  `);\n\n  const inRows = await executeQuery(REPO_ID, `\n    MATCH (a)-[:CodeRelation {type: 'CALLS'}]->(b)\n    WHERE NOT a.filePath IN [${fileList}] AND b.filePath IN [${fileList}]\n    RETURN DISTINCT a.filePath AS fromFile, a.name AS fromName,\n           b.filePath AS toFile, b.name AS toName\n    LIMIT 30\n  `);\n\n  return {\n    outgoing: outRows.map(r => ({\n      fromFile: r.fromFile || r[0],\n      fromName: r.fromName || r[1],\n      toFile: r.toFile || r[2],\n      toName: r.toName || r[3],\n    })),\n    incoming: inRows.map(r => ({\n      fromFile: r.fromFile || r[0],\n      fromName: r.fromName || r[1],\n      toFile: r.toFile || r[2],\n      toName: r.toName || r[3],\n    })),\n  };\n}\n\n/**\n * Get processes (execution flows) that pass through a set of files.\n * Returns top N by step count.\n */\nexport async function getProcessesForFiles(filePaths: string[], limit = 5): Promise<ProcessInfo[]> {\n  if (filePaths.length === 0) return [];\n\n  const fileList = filePaths.map(f => `'${f.replace(/'/g, \"''\")}'`).join(', ');\n\n  // Find processes that have steps in the given files\n  const procRows = await executeQuery(REPO_ID, `\n    MATCH (s)-[r:CodeRelation {type: 'STEP_IN_PROCESS'}]->(p:Process)\n    WHERE s.filePath IN [${fileList}]\n    RETURN DISTINCT p.id AS id, p.heuristicLabel AS label,\n           p.processType AS type, p.stepCount AS stepCount\n    ORDER BY stepCount DESC\n    LIMIT ${limit}\n  `);\n\n  const processes: ProcessInfo[] = [];\n  for (const row of procRows) {\n    const procId = row.id || row[0];\n    const label = row.label || row[1] || procId;\n    const type = row.type || row[2] || 'unknown';\n    const stepCount = row.stepCount || row[3] || 
0;\n\n    // Get the full step trace for this process\n    const stepRows = await executeQuery(REPO_ID, `\n      MATCH (s)-[r:CodeRelation {type: 'STEP_IN_PROCESS'}]->(p:Process {id: '${procId.replace(/'/g, \"''\")}'})\n      RETURN s.name AS name, s.filePath AS filePath, labels(s)[0] AS type, r.step AS step\n      ORDER BY r.step\n    `);\n\n    processes.push({\n      id: procId,\n      label,\n      type,\n      stepCount,\n      steps: stepRows.map(s => ({\n        step: s.step || s[3] || 0,\n        name: s.name || s[0],\n        filePath: s.filePath || s[1],\n        type: s.type || s[2],\n      })),\n    });\n  }\n\n  return processes;\n}\n\n/**\n * Get all processes in the graph (for overview page).\n */\nexport async function getAllProcesses(limit = 20): Promise<ProcessInfo[]> {\n  const procRows = await executeQuery(REPO_ID, `\n    MATCH (p:Process)\n    RETURN p.id AS id, p.heuristicLabel AS label,\n           p.processType AS type, p.stepCount AS stepCount\n    ORDER BY stepCount DESC\n    LIMIT ${limit}\n  `);\n\n  const processes: ProcessInfo[] = [];\n  for (const row of procRows) {\n    const procId = row.id || row[0];\n    const label = row.label || row[1] || procId;\n    const type = row.type || row[2] || 'unknown';\n    const stepCount = row.stepCount || row[3] || 0;\n\n    const stepRows = await executeQuery(REPO_ID, `\n      MATCH (s)-[r:CodeRelation {type: 'STEP_IN_PROCESS'}]->(p:Process {id: '${procId.replace(/'/g, \"''\")}'})\n      RETURN s.name AS name, s.filePath AS filePath, labels(s)[0] AS type, r.step AS step\n      ORDER BY r.step\n    `);\n\n    processes.push({\n      id: procId,\n      label,\n      type,\n      stepCount,\n      steps: stepRows.map(s => ({\n        step: s.step || s[3] || 0,\n        name: s.name || s[0],\n        filePath: s.filePath || s[1],\n        type: s.type || s[2],\n      })),\n    });\n  }\n\n  return processes;\n}\n\n/**\n * Get inter-module edges for overview architecture diagram.\n * Groups call edges 
by source/target module.\n */\nexport async function getInterModuleEdgesForOverview(\n  moduleFiles: Record<string, string[]>\n): Promise<Array<{ from: string; to: string; count: number }>> {\n  // Build file-to-module lookup\n  const fileToModule = new Map<string, string>();\n  for (const [mod, files] of Object.entries(moduleFiles)) {\n    for (const f of files) {\n      fileToModule.set(f, mod);\n    }\n  }\n\n  const allEdges = await getInterFileCallEdges();\n  const moduleEdgeCounts = new Map<string, number>();\n\n  for (const edge of allEdges) {\n    const fromMod = fileToModule.get(edge.fromFile);\n    const toMod = fileToModule.get(edge.toFile);\n    if (fromMod && toMod && fromMod !== toMod) {\n      const key = `${fromMod}|||${toMod}`;\n      moduleEdgeCounts.set(key, (moduleEdgeCounts.get(key) || 0) + 1);\n    }\n  }\n\n  return Array.from(moduleEdgeCounts.entries())\n    .map(([key, count]) => {\n      const [from, to] = key.split('|||');\n      return { from, to, count };\n    })\n    .sort((a, b) => b.count - a.count);\n}\n"
  },
  {
    "path": "gitnexus/src/core/wiki/html-viewer.ts",
    "content": "/**\n * HTML Viewer Generator for Wiki\n *\n * Produces a self-contained index.html that embeds all markdown pages,\n * module tree, and metadata — viewable offline in any browser.\n */\n\nimport fs from 'fs/promises';\nimport path from 'path';\n\ninterface ModuleTreeNode {\n  name: string;\n  slug: string;\n  files: string[];\n  children?: ModuleTreeNode[];\n}\n\n/**\n * Generate the wiki HTML viewer (index.html) from existing markdown pages.\n */\nexport async function generateHTMLViewer(\n  wikiDir: string,\n  projectName: string,\n): Promise<string> {\n  // Load module tree\n  let moduleTree: ModuleTreeNode[] = [];\n  try {\n    const raw = await fs.readFile(path.join(wikiDir, 'module_tree.json'), 'utf-8');\n    moduleTree = JSON.parse(raw);\n  } catch { /* will show empty nav */ }\n\n  // Load meta\n  let meta: Record<string, unknown> | null = null;\n  try {\n    const raw = await fs.readFile(path.join(wikiDir, 'meta.json'), 'utf-8');\n    meta = JSON.parse(raw);\n  } catch { /* no meta */ }\n\n  // Read all markdown files into a { slug: content } map\n  const pages: Record<string, string> = {};\n  const dirEntries = await fs.readdir(wikiDir);\n  for (const f of dirEntries.filter(f => f.endsWith('.md'))) {\n    const content = await fs.readFile(path.join(wikiDir, f), 'utf-8');\n    pages[f.replace(/\\.md$/, '')] = content;\n  }\n\n  const html = buildHTML(projectName, moduleTree, pages, meta);\n  const outputPath = path.join(wikiDir, 'index.html');\n  await fs.writeFile(outputPath, html, 'utf-8');\n  return outputPath;\n}\n\n// ─── HTML Builder ───────────────────────────────────────────────────────\n\nfunction esc(text: string): string {\n  return text\n    .replace(/&/g, '&amp;')\n    .replace(/</g, '&lt;')\n    .replace(/>/g, '&gt;')\n    .replace(/\"/g, '&quot;');\n}\n\nfunction buildHTML(\n  projectName: string,\n  moduleTree: ModuleTreeNode[],\n  pages: Record<string, string>,\n  meta: Record<string, unknown> | null,\n): string {\n  // 
Embed data as JSON inside the HTML\n  const pagesJSON = JSON.stringify(pages);\n  const treeJSON = JSON.stringify(moduleTree);\n  const metaJSON = JSON.stringify(meta);\n\n  const parts: string[] = [];\n\n  // ── Head ──\n  parts.push('<!DOCTYPE html>');\n  parts.push('<html lang=\"en\">');\n  parts.push('<head>');\n  parts.push('<meta charset=\"UTF-8\">');\n  parts.push('<meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\">');\n  parts.push('<title>' + esc(projectName) + ' — Wiki</title>');\n  parts.push('<script src=\"https://cdn.jsdelivr.net/npm/marked@11.0.0/marked.min.js\"><\\/script>');\n  parts.push('<script src=\"https://cdn.jsdelivr.net/npm/mermaid@11/dist/mermaid.min.js\"><\\/script>');\n  parts.push('<style>');\n  parts.push(CSS);\n  parts.push('</style>');\n  parts.push('</head>');\n\n  // ── Body ──\n  parts.push('<body>');\n  parts.push('<button class=\"menu-toggle\" id=\"menu-toggle\" aria-label=\"Toggle menu\">&#9776;</button>');\n  parts.push('<div class=\"layout\">');\n\n  // Sidebar\n  parts.push('<nav class=\"sidebar\" id=\"sidebar\">');\n  parts.push('<div class=\"sidebar-header\">');\n  parts.push('<div class=\"sidebar-title\">');\n  parts.push(BOOK_SVG);\n  parts.push(esc(projectName));\n  parts.push('</div>');\n  parts.push('<div class=\"sidebar-meta\" id=\"meta-info\"></div>');\n  parts.push('</div>');\n  parts.push('<div id=\"nav-tree\"></div>');\n  parts.push('<div class=\"sidebar-footer\">Generated by GitNexus</div>');\n  parts.push('</nav>');\n\n  // Content\n  parts.push('<main class=\"content\" id=\"content\">');\n  parts.push('<div class=\"empty-state\"><h2>Loading…</h2></div>');\n  parts.push('</main>');\n  parts.push('</div>');\n\n  // ── Script ──\n  parts.push('<script>');\n  parts.push('var PAGES = ' + pagesJSON + ';');\n  parts.push('var TREE = ' + treeJSON + ';');\n  parts.push('var META = ' + metaJSON + ';');\n  parts.push(JS_APP);\n  parts.push('<\\/script>');\n\n  parts.push('</body>');\n  
parts.push('</html>');\n\n  return parts.join('\\n');\n}\n\n// ─── Static Assets ────────────────────────────────────────────────────\n\nconst BOOK_SVG =\n  '<svg width=\"18\" height=\"18\" viewBox=\"0 0 24 24\" fill=\"none\" stroke=\"currentColor\" stroke-width=\"2\">' +\n  '<path d=\"M2 3h6a4 4 0 014 4v14a3 3 0 00-3-3H2z\"/>' +\n  '<path d=\"M22 3h-6a4 4 0 00-4 4v14a3 3 0 013-3h7z\"/>' +\n  '</svg>';\n\nconst CSS = `\n*{margin:0;padding:0;box-sizing:border-box}\n:root{\n  --bg:#ffffff;--sidebar-bg:#f8f9fb;--border:#e5e7eb;\n  --text:#1e293b;--text-muted:#64748b;--primary:#2563eb;\n  --primary-soft:#eff6ff;--hover:#f1f5f9;--code-bg:#f1f5f9;\n  --radius:8px;--shadow:0 1px 3px rgba(0,0,0,.08);\n}\nbody{font-family:-apple-system,BlinkMacSystemFont,'Segoe UI',Roboto,sans-serif;\n  line-height:1.65;color:var(--text);background:var(--bg)}\n\n.layout{display:flex;min-height:100vh}\n.sidebar{width:280px;background:var(--sidebar-bg);border-right:1px solid var(--border);\n  position:fixed;top:0;left:0;bottom:0;overflow-y:auto;padding:24px 16px;\n  display:flex;flex-direction:column;z-index:10}\n.content{margin-left:280px;flex:1;padding:48px 64px;max-width:960px}\n\n.sidebar-header{margin-bottom:20px;padding-bottom:16px;border-bottom:1px solid var(--border)}\n.sidebar-title{font-size:16px;font-weight:700;color:var(--text);display:flex;align-items:center;gap:8px}\n.sidebar-title svg{flex-shrink:0}\n.sidebar-meta{font-size:11px;color:var(--text-muted);margin-top:6px}\n.nav-section{margin-bottom:2px}\n.nav-item{display:block;padding:7px 12px;border-radius:var(--radius);cursor:pointer;\n  font-size:13px;color:var(--text);text-decoration:none;transition:all .15s;\n  white-space:nowrap;overflow:hidden;text-overflow:ellipsis}\n.nav-item:hover{background:var(--hover)}\n.nav-item.active{background:var(--primary-soft);color:var(--primary);font-weight:600}\n.nav-item.overview{font-weight:600;margin-bottom:4px}\n.nav-children{padding-left:14px;border-left:1px solid 
var(--border);margin-left:12px}\n.nav-group-label{font-size:11px;font-weight:600;color:var(--text-muted);\n  text-transform:uppercase;letter-spacing:.5px;padding:12px 12px 4px;user-select:none}\n.sidebar-footer{margin-top:auto;padding-top:16px;border-top:1px solid var(--border);\n  font-size:11px;color:var(--text-muted);text-align:center}\n\n.content h1{font-size:28px;font-weight:700;margin-bottom:8px;line-height:1.3}\n.content h2{font-size:22px;font-weight:600;margin:32px 0 12px;padding-bottom:6px;border-bottom:1px solid var(--border)}\n.content h3{font-size:17px;font-weight:600;margin:24px 0 8px}\n.content h4{font-size:15px;font-weight:600;margin:20px 0 6px}\n.content p{margin:12px 0}\n.content ul,.content ol{margin:12px 0 12px 24px}\n.content li{margin:4px 0}\n.content a{color:var(--primary);text-decoration:none}\n.content a:hover{text-decoration:underline}\n.content blockquote{border-left:3px solid var(--primary);padding:8px 16px;margin:16px 0;\n  background:var(--primary-soft);border-radius:0 var(--radius) var(--radius) 0;\n  color:var(--text-muted);font-size:14px}\n.content code{font-family:'SF Mono',Consolas,'Courier New',monospace;font-size:13px;\n  background:var(--code-bg);padding:2px 6px;border-radius:4px}\n.content pre{background:#1e293b;color:#e2e8f0;border-radius:var(--radius);padding:16px;\n  overflow-x:auto;margin:16px 0}\n.content pre code{background:none;padding:0;font-size:13px;line-height:1.6;color:inherit}\n.content table{border-collapse:collapse;width:100%;margin:16px 0}\n.content th,.content td{border:1px solid var(--border);padding:8px 12px;text-align:left;font-size:14px}\n.content th{background:var(--sidebar-bg);font-weight:600}\n.content img{max-width:100%;border-radius:var(--radius)}\n.content hr{border:none;border-top:1px solid var(--border);margin:32px 0}\n.content .mermaid{margin:20px 0;text-align:center}\n\n.menu-toggle{display:none;position:fixed;top:12px;left:12px;z-index:20;\n  background:var(--bg);border:1px solid 
var(--border);border-radius:var(--radius);\n  padding:8px 12px;cursor:pointer;font-size:18px;box-shadow:var(--shadow)}\n@media(max-width:768px){\n  .sidebar{transform:translateX(-100%);transition:transform .2s}\n  .sidebar.open{transform:translateX(0);box-shadow:2px 0 12px rgba(0,0,0,.1)}\n  .content{margin-left:0;padding:24px 20px;padding-top:56px}\n  .menu-toggle{display:block}\n}\n.empty-state{text-align:center;padding:80px 20px;color:var(--text-muted)}\n.empty-state h2{font-size:20px;margin-bottom:8px;border:none}\n`;\n\n// The client-side JS is kept as a plain string to avoid template literal conflicts\nconst JS_APP = `\n(function() {\n  var activePage = 'overview';\n\n  document.addEventListener('DOMContentLoaded', function() {\n    mermaid.initialize({ startOnLoad: false, theme: 'neutral', securityLevel: 'loose' });\n    renderMeta();\n    renderNav();\n    document.getElementById('menu-toggle').addEventListener('click', function() {\n      document.getElementById('sidebar').classList.toggle('open');\n    });\n    if (location.hash && location.hash.length > 1) {\n      activePage = decodeURIComponent(location.hash.slice(1));\n    }\n    navigateTo(activePage);\n  });\n\n  function renderMeta() {\n    if (!META) return;\n    var el = document.getElementById('meta-info');\n    var parts = [];\n    if (META.generatedAt) {\n      parts.push(new Date(META.generatedAt).toLocaleDateString());\n    }\n    if (META.model) parts.push(META.model);\n    if (META.fromCommit) parts.push(META.fromCommit.slice(0, 8));\n    el.textContent = parts.join(' \\\\u00b7 ');\n  }\n\n  function renderNav() {\n    var container = document.getElementById('nav-tree');\n    var html = '<div class=\"nav-section\">';\n    html += '<a class=\"nav-item overview\" data-page=\"overview\" href=\"#overview\">Overview</a>';\n    html += '</div>';\n    if (TREE.length > 0) {\n      html += '<div class=\"nav-group-label\">Modules</div>';\n      html += buildNavTree(TREE);\n    }\n    
container.innerHTML = html;\n    container.addEventListener('click', function(e) {\n      var target = e.target;\n      while (target && !target.dataset.page) { target = target.parentElement; }\n      if (target && target.dataset.page) {\n        e.preventDefault();\n        navigateTo(target.dataset.page);\n      }\n    });\n  }\n\n  function buildNavTree(nodes) {\n    var html = '';\n    for (var i = 0; i < nodes.length; i++) {\n      var node = nodes[i];\n      html += '<div class=\"nav-section\">';\n      html += '<a class=\"nav-item\" data-page=\"' + escH(node.slug) + '\" href=\"#' + encodeURIComponent(node.slug) + '\">' + escH(node.name) + '</a>';\n      if (node.children && node.children.length > 0) {\n        html += '<div class=\"nav-children\">' + buildNavTree(node.children) + '</div>';\n      }\n      html += '</div>';\n    }\n    return html;\n  }\n\n  function escH(s) {\n    var d = document.createElement('div');\n    d.textContent = s;\n    return d.innerHTML;\n  }\n\n  function navigateTo(page) {\n    activePage = page;\n    location.hash = encodeURIComponent(page);\n\n    var items = document.querySelectorAll('.nav-item');\n    for (var i = 0; i < items.length; i++) {\n      if (items[i].dataset.page === page) {\n        items[i].classList.add('active');\n      } else {\n        items[i].classList.remove('active');\n      }\n    }\n\n    var contentEl = document.getElementById('content');\n    var md = PAGES[page];\n\n    if (!md) {\n      contentEl.innerHTML = '<div class=\"empty-state\"><h2>Page not found</h2><p>' + escH(page) + '.md does not exist.</p></div>';\n      return;\n    }\n\n    contentEl.innerHTML = marked.parse(md);\n\n    // Rewrite .md links to hash navigation\n    var links = contentEl.querySelectorAll('a[href]');\n    for (var i = 0; i < links.length; i++) {\n      var href = links[i].getAttribute('href');\n      if (href && href.endsWith('.md') && href.indexOf('://') === -1) {\n        var slug = href.replace(/\\\\.md$/, '');\n   
     links[i].setAttribute('href', '#' + encodeURIComponent(slug));\n        (function(s) {\n          links[i].addEventListener('click', function(e) {\n            e.preventDefault();\n            navigateTo(s);\n          });\n        })(slug);\n      }\n    }\n\n    // Convert mermaid code blocks into mermaid divs\n    var mermaidBlocks = contentEl.querySelectorAll('pre code.language-mermaid');\n    for (var i = 0; i < mermaidBlocks.length; i++) {\n      var pre = mermaidBlocks[i].parentElement;\n      var div = document.createElement('div');\n      div.className = 'mermaid';\n      div.textContent = mermaidBlocks[i].textContent;\n      pre.parentNode.replaceChild(div, pre);\n    }\n    try { mermaid.run({ querySelector: '.mermaid' }); } catch(e) {}\n\n    window.scrollTo(0, 0);\n    document.getElementById('sidebar').classList.remove('open');\n  }\n})();\n`;\n"
  },
  {
    "path": "gitnexus/src/core/wiki/llm-client.ts",
    "content": "/**\n * LLM Client for Wiki Generation\n * \n * OpenAI-compatible API client using native fetch.\n * Supports OpenAI, Azure, LiteLLM, Ollama, and any OpenAI-compatible endpoint.\n * \n * Config priority: CLI flags > env vars > defaults\n */\n\nexport interface LLMConfig {\n  apiKey: string;\n  baseUrl: string;\n  model: string;\n  maxTokens: number;\n  temperature: number;\n}\n\nexport interface LLMResponse {\n  content: string;\n  promptTokens?: number;\n  completionTokens?: number;\n}\n\n/**\n * Resolve LLM configuration from env vars, saved config, and optional overrides.\n * Priority: overrides (CLI flags) > env vars > ~/.gitnexus/config.json > error\n * \n * If no API key is found, returns config with empty apiKey (caller should handle).\n */\nexport async function resolveLLMConfig(overrides?: Partial<LLMConfig>): Promise<LLMConfig> {\n  const { loadCLIConfig } = await import('../../storage/repo-manager.js');\n  const savedConfig = await loadCLIConfig();\n\n  const apiKey = overrides?.apiKey\n    || process.env.GITNEXUS_API_KEY\n    || process.env.OPENAI_API_KEY\n    || savedConfig.apiKey\n    || '';\n\n  return {\n    apiKey,\n    baseUrl: overrides?.baseUrl\n      || process.env.GITNEXUS_LLM_BASE_URL\n      || savedConfig.baseUrl\n      || 'https://openrouter.ai/api/v1',\n    model: overrides?.model\n      || process.env.GITNEXUS_MODEL\n      || savedConfig.model\n      || 'minimax/minimax-m2.5',\n    maxTokens: overrides?.maxTokens ?? 16_384,\n    temperature: overrides?.temperature ?? 
0,\n  };\n}\n\n/**\n * Estimate token count from text (rough heuristic: ~4 chars per token).\n */\nexport function estimateTokens(text: string): number {\n  return Math.ceil(text.length / 4);\n}\n\nexport interface CallLLMOptions {\n  onChunk?: (charsReceived: number) => void;\n}\n\n/**\n * Call an OpenAI-compatible LLM API.\n * Uses streaming when onChunk callback is provided for real-time progress.\n * Retries up to 3 times on transient failures (429, 5xx, network errors).\n */\nexport async function callLLM(\n  prompt: string,\n  config: LLMConfig,\n  systemPrompt?: string,\n  options?: CallLLMOptions,\n): Promise<LLMResponse> {\n  const messages: Array<{ role: string; content: string }> = [];\n  if (systemPrompt) {\n    messages.push({ role: 'system', content: systemPrompt });\n  }\n  messages.push({ role: 'user', content: prompt });\n\n  const url = `${config.baseUrl.replace(/\\/+$/, '')}/chat/completions`;\n  const useStream = !!options?.onChunk;\n\n  const body: Record<string, unknown> = {\n    model: config.model,\n    messages,\n    max_tokens: config.maxTokens,\n    temperature: config.temperature,\n  };\n  if (useStream) body.stream = true;\n\n  const MAX_RETRIES = 3;\n  let lastError: Error | null = null;\n\n  for (let attempt = 0; attempt < MAX_RETRIES; attempt++) {\n    try {\n      const response = await fetch(url, {\n        method: 'POST',\n        headers: {\n          'Content-Type': 'application/json',\n          'Authorization': `Bearer ${config.apiKey}`,\n        },\n        body: JSON.stringify(body),\n      });\n\n      if (!response.ok) {\n        const errorText = await response.text().catch(() => 'unknown error');\n\n        // Rate limit — wait with exponential backoff and retry\n        if (response.status === 429 && attempt < MAX_RETRIES - 1) {\n          const retryAfter = parseInt(response.headers.get('retry-after') || '0', 10);\n          const delay = retryAfter > 0 ? 
retryAfter * 1000 : (2 ** attempt) * 3000;\n          await sleep(delay);\n          continue;\n        }\n\n        // Server error — retry with backoff\n        if (response.status >= 500 && attempt < MAX_RETRIES - 1) {\n          await sleep((attempt + 1) * 2000);\n          continue;\n        }\n\n        throw new Error(`LLM API error (${response.status}): ${errorText.slice(0, 500)}`);\n      }\n\n      // Streaming path\n      if (useStream && response.body) {\n        return await readSSEStream(response.body, options!.onChunk!);\n      }\n\n      // Non-streaming path\n      const json = await response.json() as any;\n      const choice = json.choices?.[0];\n      if (!choice?.message?.content) {\n        throw new Error('LLM returned empty response');\n      }\n\n      return {\n        content: choice.message.content,\n        promptTokens: json.usage?.prompt_tokens,\n        completionTokens: json.usage?.completion_tokens,\n      };\n    } catch (err: any) {\n      lastError = err;\n\n      // Network error — retry with backoff. Node's fetch wraps system errors\n      // like ECONNREFUSED in err.cause, so check both err.code and err.cause.code.\n      const sysCode = err.code || err.cause?.code;\n      if (attempt < MAX_RETRIES - 1 && (sysCode === 'ECONNREFUSED' || sysCode === 'ETIMEDOUT' || err.message?.includes('fetch'))) {\n        await sleep((attempt + 1) * 3000);\n        continue;\n      }\n\n      throw err;\n    }\n  }\n\n  throw lastError || new Error('LLM call failed after retries');\n}\n\n/**\n * Read an SSE stream from an OpenAI-compatible streaming response.\n */\nasync function readSSEStream(\n  body: ReadableStream<Uint8Array>,\n  onChunk: (charsReceived: number) => void,\n): Promise<LLMResponse> {\n  const decoder = new TextDecoder();\n  const reader = body.getReader();\n  let content = '';\n  let buffer = '';\n\n  while (true) {\n    const { done, value } = await reader.read();\n    if (done) break;\n\n    buffer += decoder.decode(value, { stream: true });\n    const lines = buffer.split('\\n');\n    buffer = lines.pop() || '';\n\n    for (const line of lines) {\n      const trimmed = line.trim();\n     
 if (!trimmed || !trimmed.startsWith('data: ')) continue;\n      const data = trimmed.slice(6);\n      if (data === '[DONE]') continue;\n\n      try {\n        const parsed = JSON.parse(data);\n        const delta = parsed.choices?.[0]?.delta?.content;\n        if (delta) {\n          content += delta;\n          onChunk(content.length);\n        }\n      } catch {\n        // Skip malformed SSE chunks\n      }\n    }\n  }\n\n  if (!content) {\n    throw new Error('LLM returned empty streaming response');\n  }\n\n  return { content };\n}\n\nfunction sleep(ms: number): Promise<void> {\n  return new Promise(resolve => setTimeout(resolve, ms));\n}\n"
  },
  {
    "path": "gitnexus/src/core/wiki/prompts.ts",
    "content": "/**\n * LLM Prompt Templates for Wiki Generation\n * \n * All prompts produce deterministic, source-grounded documentation.\n * Templates use {{PLACEHOLDER}} substitution.\n */\n\n// ─── Grouping Prompt ──────────────────────────────────────────────────\n\nexport const GROUPING_SYSTEM_PROMPT = `You are a documentation architect. Given a list of source files with their exported symbols, group them into logical documentation modules.\n\nRules:\n- Each module should represent a cohesive feature, layer, or domain\n- Every file must appear in exactly one module\n- Module names should be human-readable (e.g. \"Authentication\", \"Database Layer\", \"API Routes\")\n- Aim for 5-15 modules for a typical project. Fewer for small projects, more for large ones\n- Group by functionality, not by file type or directory structure alone\n- Do NOT create modules for tests, configs, or non-source files`;\n\nexport const GROUPING_USER_PROMPT = `Group these source files into documentation modules.\n\n**Files and their exports:**\n{{FILE_LIST}}\n\n**Directory structure:**\n{{DIRECTORY_TREE}}\n\nRespond with ONLY a JSON object mapping module names to file path arrays. No markdown, no explanation.\nExample format:\n{\n  \"Authentication\": [\"src/auth/login.ts\", \"src/auth/session.ts\"],\n  \"Database\": [\"src/db/connection.ts\", \"src/db/models.ts\"]\n}`;\n\n// ─── Leaf Module Prompt ───────────────────────────────────────────────\n\nexport const MODULE_SYSTEM_PROMPT = `You are a technical documentation writer. Write clear, developer-focused documentation for a code module.\n\nRules:\n- Reference actual function names, class names, and code patterns — do NOT invent APIs\n- Use the call graph and execution flow data for accuracy, but do NOT mechanically list every edge\n- Include Mermaid diagrams only when they genuinely help understanding. 
Keep them small (5-10 nodes max)\n- Structure the document however makes sense for this module — there is no mandatory format\n- Write for a developer who needs to understand and contribute to this code`;\n\nexport const MODULE_USER_PROMPT = `Write documentation for the **{{MODULE_NAME}}** module.\n\n## Source Code\n\n{{SOURCE_CODE}}\n\n## Call Graph & Execution Flows (reference for accuracy)\n\nInternal calls: {{INTRA_CALLS}}\nOutgoing calls: {{OUTGOING_CALLS}}\nIncoming calls: {{INCOMING_CALLS}}\nExecution flows: {{PROCESSES}}\n\n---\n\nWrite comprehensive documentation for this module. Cover its purpose, how it works, its key components, and how it connects to the rest of the codebase. Use whatever structure best fits this module — you decide the sections and headings. Include a Mermaid diagram only if it genuinely clarifies the architecture.`;\n\n// ─── Parent Module Prompt ─────────────────────────────────────────────\n\nexport const PARENT_SYSTEM_PROMPT = `You are a technical documentation writer. Write a summary page for a module that contains sub-modules. Synthesize the children's documentation — do not re-read source code.\n\nRules:\n- Reference actual components from the child modules\n- Focus on how the sub-modules work together, not repeating their individual docs\n- Keep it concise — the reader can click through to child pages for detail\n- Include a Mermaid diagram only if it genuinely clarifies how the sub-modules relate`;\n\nexport const PARENT_USER_PROMPT = `Write documentation for the **{{MODULE_NAME}}** module, which contains these sub-modules:\n\n{{CHILDREN_DOCS}}\n\nCross-module calls: {{CROSS_MODULE_CALLS}}\nShared execution flows: {{CROSS_PROCESSES}}\n\n---\n\nWrite a concise overview of this module group. Explain its purpose, how the sub-modules fit together, and the key workflows that span them. Link to sub-module pages (e.g. \\`[Sub-module Name](sub-module-slug.md)\\`) rather than repeating their content. 
Use whatever structure fits best.`;\n\n// ─── Overview Prompt ──────────────────────────────────────────────────\n\nexport const OVERVIEW_SYSTEM_PROMPT = `You are a technical documentation writer. Write the top-level overview page for a repository wiki. This is the first page a new developer sees.\n\nRules:\n- Be clear and welcoming — this is the entry point to the entire codebase\n- Reference actual module names so readers can navigate to their docs\n- Include a high-level Mermaid architecture diagram showing only the most important modules and their relationships (max 10 nodes). A new dev should grasp it in 10 seconds\n- Do NOT create module index tables or list every module with descriptions — just link to module pages naturally within the text\n- Use the inter-module edges and execution flow data for accuracy, but do NOT dump them raw`;\n\nexport const OVERVIEW_USER_PROMPT = `Write the overview page for this repository's wiki.\n\n## Project Info\n\n{{PROJECT_INFO}}\n\n## Module Summaries\n\n{{MODULE_SUMMARIES}}\n\n## Reference Data (for accuracy — do not reproduce verbatim)\n\nInter-module call edges: {{MODULE_EDGES}}\nKey system flows: {{TOP_PROCESSES}}\n\n---\n\nWrite a clear overview of this project: what it does, how it's architected, and the key end-to-end flows. Include a simple Mermaid architecture diagram (max 10 nodes, big-picture only). Link to module pages (e.g. \\`[Module Name](module-slug.md)\\`) naturally in the text rather than listing them in a table. If project config was provided, include brief setup instructions. 
Structure the page however reads best.`;\n\n// ─── Template Substitution Helper ─────────────────────────────────────\n\n/**\n * Replace {{PLACEHOLDER}} tokens in a template string.\n */\nexport function fillTemplate(\n  template: string,\n  vars: Record<string, string>,\n): string {\n  let result = template;\n  for (const [key, value] of Object.entries(vars)) {\n    result = result.replaceAll(`{{${key}}}`, value);\n  }\n  return result;\n}\n\n// ─── Formatting Helpers ───────────────────────────────────────────────\n\n/**\n * Format file list with exports for the grouping prompt.\n */\nexport function formatFileListForGrouping(\n  files: Array<{ filePath: string; symbols: Array<{ name: string; type: string }> }>,\n): string {\n  return files\n    .map(f => {\n      const exports = f.symbols.length > 0\n        ? f.symbols.map(s => `${s.name} (${s.type})`).join(', ')\n        : 'no exports';\n      return `- ${f.filePath}: ${exports}`;\n    })\n    .join('\\n');\n}\n\n/**\n * Build a directory tree string from file paths.\n */\nexport function formatDirectoryTree(filePaths: string[]): string {\n  const dirs = new Set<string>();\n  for (const fp of filePaths) {\n    const parts = fp.replace(/\\\\/g, '/').split('/');\n    for (let i = 1; i < parts.length; i++) {\n      dirs.add(parts.slice(0, i).join('/'));\n    }\n  }\n\n  const sorted = Array.from(dirs).sort();\n  if (sorted.length === 0) return '(flat structure)';\n\n  return sorted.slice(0, 50).join('\\n') + (sorted.length > 50 ? `\\n... 
and ${sorted.length - 50} more directories` : '');\n}\n\n/**\n * Format call edges as readable text (truncated to the first 30 edges).\n */\nexport function formatCallEdges(\n  edges: Array<{ fromFile: string; fromName: string; toFile: string; toName: string }>,\n): string {\n  if (edges.length === 0) return 'None';\n  const shown = edges\n    .slice(0, 30)\n    .map(e => `${e.fromName} (${shortPath(e.fromFile)}) → ${e.toName} (${shortPath(e.toFile)})`)\n    .join('\\n');\n  return edges.length > 30 ? `${shown}\\n... and ${edges.length - 30} more edges` : shown;\n}\n\n/**\n * Format process traces as readable text.\n */\nexport function formatProcesses(\n  processes: Array<{\n    label: string;\n    type: string;\n    steps: Array<{ step: number; name: string; filePath: string }>;\n  }>,\n): string {\n  if (processes.length === 0) return 'No execution flows detected for this module.';\n\n  return processes\n    .map(p => {\n      const stepsText = p.steps\n        .map(s => `  ${s.step}. ${s.name} (${shortPath(s.filePath)})`)\n        .join('\\n');\n      return `**${p.label}** (${p.type}):\\n${stepsText}`;\n    })\n    .join('\\n\\n');\n}\n\n/**\n * Shorten a file path for readability.\n */\nfunction shortPath(fp: string): string {\n  const parts = fp.replace(/\\\\/g, '/').split('/');\n  return parts.length > 3 ? parts.slice(-3).join('/') : fp;\n}\n"
  },
  {
    "path": "gitnexus/src/lib/utils.ts",
    "content": "export const generateId = (label: string, name: string): string => {\n  return `${label}:${name}`;\n}\n"
  },
  {
    "path": "gitnexus/src/mcp/compatible-stdio-transport.ts",
    "content": "import process from 'node:process';\nimport type { Transport, TransportSendOptions } from '@modelcontextprotocol/sdk/shared/transport.js';\nimport { JSONRPCMessageSchema, type JSONRPCMessage } from '@modelcontextprotocol/sdk/types.js';\n\nexport type StdioFraming = 'content-length' | 'newline';\n\nfunction deserializeMessage(raw: string): JSONRPCMessage {\n  return JSONRPCMessageSchema.parse(JSON.parse(raw));\n}\n\nfunction serializeNewlineMessage(message: JSONRPCMessage): string {\n  return `${JSON.stringify(message)}\\n`;\n}\n\nfunction serializeContentLengthMessage(message: JSONRPCMessage): string {\n  const body = JSON.stringify(message);\n  return `Content-Length: ${Buffer.byteLength(body, 'utf8')}\\r\\n\\r\\n${body}`;\n}\n\nfunction findHeaderEnd(buffer: Buffer): { index: number; separatorLength: number } | null {\n  const crlfEnd = buffer.indexOf('\\r\\n\\r\\n');\n  if (crlfEnd !== -1) {\n    return { index: crlfEnd, separatorLength: 4 };\n  }\n\n  const lfEnd = buffer.indexOf('\\n\\n');\n  if (lfEnd !== -1) {\n    return { index: lfEnd, separatorLength: 2 };\n  }\n\n  return null;\n}\n\nfunction looksLikeContentLength(buffer: Buffer): boolean {\n  if (buffer.length < 14) {\n    return false;\n  }\n  const probe = buffer.toString('utf8', 0, Math.min(buffer.length, 32));\n  return /^content-length\\s*:/i.test(probe);\n}\n\nconst MAX_BUFFER_SIZE = 10 * 1024 * 1024; // 10 MB — generous for JSON-RPC\n\nexport class CompatibleStdioServerTransport implements Transport {\n  private _readBuffer: Buffer | undefined;\n  private _started = false;\n  private _framing: StdioFraming | null = null;\n\n  onmessage?: (message: JSONRPCMessage) => void;\n  onerror?: (error: Error) => void;\n  onclose?: () => void;\n\n  constructor(\n    private readonly _stdin: NodeJS.ReadableStream = process.stdin,\n    private readonly _stdout: NodeJS.WritableStream = process.stdout,\n  ) {}\n\n  private readonly _ondata = (chunk: Buffer) => {\n    this._readBuffer = 
this._readBuffer ? Buffer.concat([this._readBuffer, chunk]) : chunk;\n    if (this._readBuffer.length > MAX_BUFFER_SIZE) {\n      this.onerror?.(new Error(`Read buffer exceeded maximum size (${MAX_BUFFER_SIZE} bytes)`));\n      this.discardBufferedInput();\n      return;\n    }\n    this.processReadBuffer();\n  };\n\n  private readonly _onerror = (error: Error) => {\n    this.onerror?.(error);\n  };\n\n  async start() {\n    if (this._started) {\n      throw new Error('CompatibleStdioServerTransport already started!');\n    }\n\n    this._started = true;\n    this._stdin.on('data', this._ondata);\n    this._stdin.on('error', this._onerror);\n  }\n\n  private detectFraming(): StdioFraming | null {\n    if (!this._readBuffer || this._readBuffer.length === 0) {\n      return null;\n    }\n\n    const firstByte = this._readBuffer[0];\n    if (firstByte === 0x7b || firstByte === 0x5b) {\n      return 'newline';\n    }\n\n    if (looksLikeContentLength(this._readBuffer)) {\n      return 'content-length';\n    }\n\n    return null;\n  }\n\n  private discardBufferedInput() {\n    this._readBuffer = undefined;\n    this._framing = null;\n  }\n\n  private readContentLengthMessage(): JSONRPCMessage | null {\n    if (!this._readBuffer) {\n      return null;\n    }\n\n    const header = findHeaderEnd(this._readBuffer);\n    if (header === null) {\n      return null;\n    }\n\n    const headerText = this._readBuffer\n      .toString('utf8', 0, header.index)\n      .replace(/\\r\\n/g, '\\n')\n      .replace(/\\r/g, '\\n');\n    const match = headerText.match(/(?:^|\\n)content-length\\s*:\\s*(\\d+)/i);\n    if (!match) {\n      this.discardBufferedInput();\n      throw new Error('Missing Content-Length header from MCP client');\n    }\n\n    const contentLength = Number.parseInt(match[1], 10);\n    if (!Number.isFinite(contentLength) || contentLength < 0) {\n      this.discardBufferedInput();\n      throw new Error('Invalid Content-Length header from MCP client');\n    }\n    if 
(contentLength > MAX_BUFFER_SIZE) {\n      this.discardBufferedInput();\n      throw new Error(`Content-Length ${contentLength} exceeds maximum allowed size (${MAX_BUFFER_SIZE} bytes)`);\n    }\n    const bodyStart = header.index + header.separatorLength;\n    const bodyEnd = bodyStart + contentLength;\n    if (this._readBuffer.length < bodyEnd) {\n      return null;\n    }\n\n    const body = this._readBuffer.toString('utf8', bodyStart, bodyEnd);\n    this._readBuffer = this._readBuffer.subarray(bodyEnd);\n    return deserializeMessage(body);\n  }\n\n  private readNewlineMessage(): JSONRPCMessage | null {\n    if (!this._readBuffer) {\n      return null;\n    }\n\n    while (true) {\n      const newlineIndex = this._readBuffer.indexOf('\\n');\n      if (newlineIndex === -1) {\n        return null;\n      }\n\n      const line = this._readBuffer.toString('utf8', 0, newlineIndex).replace(/\\r$/, '');\n      this._readBuffer = this._readBuffer.subarray(newlineIndex + 1);\n      if (line.trim().length === 0) {\n        continue;\n      }\n\n      return deserializeMessage(line);\n    }\n  }\n\n  private readMessage(): JSONRPCMessage | null {\n    if (!this._readBuffer || this._readBuffer.length === 0) {\n      return null;\n    }\n\n    if (this._framing === null) {\n      this._framing = this.detectFraming();\n      if (this._framing === null) {\n        return null;\n      }\n    }\n\n    return this._framing === 'content-length'\n      ? 
this.readContentLengthMessage()\n      : this.readNewlineMessage();\n  }\n\n  private processReadBuffer() {\n    while (true) {\n      try {\n        const message = this.readMessage();\n        if (message === null) {\n          break;\n        }\n        this.onmessage?.(message);\n      } catch (error) {\n        this.onerror?.(error as Error);\n        break;\n      }\n    }\n  }\n\n  async close() {\n    this._stdin.off('data', this._ondata);\n    this._stdin.off('error', this._onerror);\n\n    const remainingDataListeners = this._stdin.listenerCount('data');\n    if (remainingDataListeners === 0) {\n      this._stdin.pause();\n    }\n\n    this._started = false;\n    this._readBuffer = undefined;\n    this.onclose?.();\n  }\n\n  send(message: JSONRPCMessage, _options?: TransportSendOptions) {\n    return new Promise<void>((resolve, reject) => {\n      if (!this._started) {\n        reject(new Error('Transport is closed'));\n        return;\n      }\n\n      const payload = this._framing === 'newline'\n        ? serializeNewlineMessage(message)\n        : serializeContentLengthMessage(message);\n\n      const onError = (error: Error) => {\n        this._stdout.removeListener('error', onError);\n        reject(error);\n      };\n\n      this._stdout.on('error', onError);\n\n      if (this._stdout.write(payload)) {\n        this._stdout.removeListener('error', onError);\n        resolve();\n      } else {\n        this._stdout.once('drain', () => {\n          this._stdout.removeListener('error', onError);\n          resolve();\n        });\n      }\n    });\n  }\n}\n"
  },
  {
    "path": "gitnexus/src/mcp/core/embedder.ts",
    "content": "/**\n * Embedder Module (Read-Only)\n * \n * Singleton factory for transformers.js embedding pipeline.\n * For MCP, we only need to compute query embeddings, not batch embed.\n */\n\nimport { pipeline, env, type FeatureExtractionPipeline } from '@huggingface/transformers';\n\n// Model config\nconst MODEL_ID = 'Snowflake/snowflake-arctic-embed-xs';\nconst EMBEDDING_DIMS = 384;\n\n// Module-level state for singleton pattern\nlet embedderInstance: FeatureExtractionPipeline | null = null;\nlet isInitializing = false;\nlet initPromise: Promise<FeatureExtractionPipeline> | null = null;\n\n/**\n * Initialize the embedding model (lazy, on first search)\n */\nexport const initEmbedder = async (): Promise<FeatureExtractionPipeline> => {\n  if (embedderInstance) {\n    return embedderInstance;\n  }\n\n  if (isInitializing && initPromise) {\n    return initPromise;\n  }\n\n  isInitializing = true;\n\n  initPromise = (async () => {\n    try {\n      env.allowLocalModels = false;\n      \n      console.error('GitNexus: Loading embedding model (first search may take a moment)...');\n\n      // Try GPU first (DirectML on Windows, CUDA on Linux), fall back to CPU\n      const isWindows = process.platform === 'win32';\n      const gpuDevice = isWindows ? 'dml' : 'cuda';\n      const devicesToTry: Array<'dml' | 'cuda' | 'cpu'> = [gpuDevice, 'cpu'];\n      \n      for (const device of devicesToTry) {\n        try {\n          // Silence stdout and stderr during model load — ONNX Runtime and transformers.js\n          // may write progress/init messages that corrupt MCP stdio protocol or produce\n          // noisy warnings (e.g. 
node assignment to execution providers).\n          const origStdout = process.stdout.write;\n          const origStderr = process.stderr.write;\n          process.stdout.write = (() => true) as any;\n          process.stderr.write = (() => true) as any;\n          try {\n            embedderInstance = await (pipeline as any)(\n              'feature-extraction',\n              MODEL_ID,\n              {\n                device: device,\n                dtype: 'fp32',\n              }\n            );\n          } finally {\n            process.stdout.write = origStdout;\n            process.stderr.write = origStderr;\n          }\n          console.error(`GitNexus: Embedding model loaded (${device})`);\n          return embedderInstance!;\n        } catch {\n          if (device === 'cpu') throw new Error('Failed to load embedding model');\n        }\n      }\n\n      throw new Error('No suitable device found');\n    } catch (error) {\n      isInitializing = false;\n      initPromise = null;\n      embedderInstance = null;\n      throw error;\n    } finally {\n      isInitializing = false;\n    }\n  })();\n\n  return initPromise;\n};\n\n/**\n * Check if embedder is ready\n */\nexport const isEmbedderReady = (): boolean => embedderInstance !== null;\n\n/**\n * Embed a query text for semantic search\n */\nexport const embedQuery = async (query: string): Promise<number[]> => {\n  const embedder = await initEmbedder();\n  \n  const result = await embedder(query, {\n    pooling: 'mean',\n    normalize: true,\n  });\n  \n  return Array.from(result.data as ArrayLike<number>);\n};\n\n/**\n * Get embedding dimensions\n */\nexport const getEmbeddingDims = (): number => EMBEDDING_DIMS;\n\n/**\n * Cleanup embedder\n */\nexport const disposeEmbedder = async (): Promise<void> => {\n  if (embedderInstance) {\n    try {\n      if ('dispose' in embedderInstance && typeof embedderInstance.dispose === 'function') {\n        await embedderInstance.dispose();\n      }\n    } catch {}\n 
   embedderInstance = null;\n    initPromise = null;\n  }\n};\n"
  },
  {
    "path": "gitnexus/src/mcp/core/lbug-adapter.ts",
    "content": "/**\n * LadybugDB Adapter (Connection Pool)\n *\n * Manages a pool of LadybugDB databases keyed by repoId, each with\n * multiple Connection objects for safe concurrent query execution.\n *\n * LadybugDB Connections are NOT thread-safe — a single Connection\n * segfaults if concurrent .query() calls hit it simultaneously.\n * This adapter provides a checkout/return connection pool so each\n * concurrent query gets its own Connection from the same Database.\n *\n * @see https://docs.ladybugdb.com/concurrency — multiple Connections\n * from the same Database is the officially supported concurrency pattern.\n */\n\nimport fs from 'fs/promises';\nimport lbug from '@ladybugdb/core';\n\n/** Per-repo pool: one Database, many Connections */\ninterface PoolEntry {\n  db: lbug.Database;\n  /** Available connections ready for checkout */\n  available: lbug.Connection[];\n  /** Number of connections currently checked out */\n  checkedOut: number;\n  /** Queued waiters for when all connections are busy */\n  waiters: Array<(conn: lbug.Connection) => void>;\n  lastUsed: number;\n  dbPath: string;\n  /** Set to true when the pool entry is closed — checkin will close orphaned connections */\n  closed: boolean;\n}\n\nconst pool = new Map<string, PoolEntry>();\n\n/**\n * Shared Database cache keyed by resolved dbPath.\n * Multiple repoIds pointing to the same path share one native Database\n * object to avoid exhausting the buffer manager's mmap budget.\n */\ninterface SharedDB {\n  db: lbug.Database;\n  refCount: number;\n  ftsLoaded: boolean;\n  /** When true, closeOne skips db.close() — the Database is owned externally. 
*/\n  external?: boolean;\n}\nconst dbCache = new Map<string, SharedDB>();\n\n/** Max repos in the pool (LRU eviction) */\nconst MAX_POOL_SIZE = 5;\n/** Idle timeout before closing a repo's connections */\nconst IDLE_TIMEOUT_MS = 5 * 60 * 1000; // 5 minutes\n/** Max connections per repo (caps concurrent queries per repo) */\nconst MAX_CONNS_PER_REPO = 8;\n\nlet idleTimer: ReturnType<typeof setInterval> | null = null;\n\n/** Saved real stdout.write — used to silence LadybugDB native output without race conditions */\nexport const realStdoutWrite = process.stdout.write.bind(process.stdout);\nlet stdoutSilenceCount = 0;\n/** True while pre-warming connections — prevents watchdog from prematurely restoring stdout */\nlet preWarmActive = false;\n\n/**\n * Start the idle cleanup timer (runs every 60s)\n */\nfunction ensureIdleTimer(): void {\n  if (idleTimer) return;\n  idleTimer = setInterval(() => {\n    const now = Date.now();\n    for (const [repoId, entry] of pool) {\n      if (now - entry.lastUsed > IDLE_TIMEOUT_MS && entry.checkedOut === 0) {\n        closeOne(repoId);\n      }\n    }\n  }, 60_000);\n  if (idleTimer && typeof idleTimer === 'object' && 'unref' in idleTimer) {\n    (idleTimer as NodeJS.Timeout).unref();\n  }\n}\n\n/**\n * Evict the least-recently-used repo if pool is at capacity\n */\nfunction evictLRU(): void {\n  if (pool.size < MAX_POOL_SIZE) return;\n\n  let oldestId: string | null = null;\n  let oldestTime = Infinity;\n  for (const [id, entry] of pool) {\n    if (entry.checkedOut === 0 && entry.lastUsed < oldestTime) {\n      oldestTime = entry.lastUsed;\n      oldestId = id;\n    }\n  }\n  if (oldestId) {\n    closeOne(oldestId);\n  }\n}\n\n/**\n * Remove a repo from the pool, close its connections, and release its\n * shared Database ref.  
Only closes the Database when no other repoIds\n * reference it (refCount === 0).\n */\nfunction closeOne(repoId: string): void {\n  const entry = pool.get(repoId);\n  if (!entry) return;\n\n  entry.closed = true;\n\n  // Close available connections — fire-and-forget with .catch() to prevent\n  // unhandled rejections.  Native close() returns Promise<void> but can crash\n  // the N-API destructor on macOS/Windows; deferring to process exit lets\n  // dangerouslyIgnoreUnhandledErrors absorb the crash.\n  for (const conn of entry.available) {\n    conn.close().catch(() => {});\n  }\n  entry.available.length = 0;\n\n  // Checked-out connections can't be closed here — they're in-flight.\n  // The checkin() function detects entry.closed and closes them on return.\n\n  // Only close the Database when no other repoIds reference it.\n  // External databases (injected via initLbugWithDb) are never closed here —\n  // the core adapter owns them and handles their lifecycle.\n  const shared = dbCache.get(entry.dbPath);\n  if (shared) {\n    shared.refCount--;\n    if (shared.refCount === 0) {\n      if (shared.external) {\n        // External databases are owned by the core adapter — don't close\n        // or remove from cache.  
Keep the entry so future initLbug() calls\n        // for the same dbPath reuse it instead of hitting a file lock.\n        shared.refCount = 0;\n      } else {\n        shared.db.close().catch(() => {});\n        dbCache.delete(entry.dbPath);\n      }\n    }\n  }\n\n  pool.delete(repoId);\n}\n\n/**\n * Create a new Connection from a repo's Database.\n * Silences stdout to prevent native module output from corrupting MCP stdio.\n */\nfunction silenceStdout(): void {\n  if (stdoutSilenceCount++ === 0) {\n    process.stdout.write = (() => true) as any;\n  }\n}\n\nfunction restoreStdout(): void {\n  if (--stdoutSilenceCount <= 0) {\n    stdoutSilenceCount = 0;\n    process.stdout.write = realStdoutWrite;\n  }\n}\n\n// Safety watchdog: restore stdout if it gets stuck silenced (e.g. native crash\n// inside createConnection before restoreStdout runs).\nsetInterval(() => {\n  if (stdoutSilenceCount > 0 && !preWarmActive) {\n    stdoutSilenceCount = 0;\n    process.stdout.write = realStdoutWrite;\n  }\n}, 1000).unref();\n\nfunction createConnection(db: lbug.Database): lbug.Connection {\n  silenceStdout();\n  try {\n    return new lbug.Connection(db);\n  } finally {\n    restoreStdout();\n  }\n}\n\n/** Query timeout in milliseconds */\nconst QUERY_TIMEOUT_MS = 30_000;\n/** Waiter queue timeout in milliseconds */\nconst WAITER_TIMEOUT_MS = 15_000;\n\nconst LOCK_RETRY_ATTEMPTS = 3;\nconst LOCK_RETRY_DELAY_MS = 2000;\n\n/** Deduplicates concurrent initLbug calls for the same repoId */\nconst initPromises = new Map<string, Promise<void>>();\n\n/**\n * Initialize (or reuse) a Database + connection pool for a specific repo.\n * Retries on lock errors (e.g., when `gitnexus analyze` is running).\n *\n * Concurrent calls for the same repoId are deduplicated — the second caller\n * awaits the first's in-progress init rather than starting a redundant one.\n */\nexport const initLbug = async (repoId: string, dbPath: string): Promise<void> => {\n  const existing = pool.get(repoId);\n  
if (existing) {\n    existing.lastUsed = Date.now();\n    return;\n  }\n\n  // Deduplicate concurrent init calls for the same repoId —\n  // prevents double-init race when multiple parallel tool calls\n  // trigger initialization for the same repo simultaneously.\n  const pending = initPromises.get(repoId);\n  if (pending) return pending;\n\n  const promise = doInitLbug(repoId, dbPath);\n  initPromises.set(repoId, promise);\n  try {\n    await promise;\n  } finally {\n    initPromises.delete(repoId);\n  }\n};\n\n/**\n * Internal init — creates DB, pre-warms connections, loads FTS, then registers pool.\n * Pool entry is registered LAST so concurrent executeQuery calls see either\n * \"not initialized\" (and throw) or a fully ready pool — never a half-built one.\n */\nasync function doInitLbug(repoId: string, dbPath: string): Promise<void> {\n  // Check if database exists\n  try {\n    await fs.stat(dbPath);\n  } catch {\n    throw new Error(`LadybugDB not found at ${dbPath}. Run: gitnexus analyze`);\n  }\n\n  evictLRU();\n\n  // Reuse an existing native Database if another repoId already opened this path.\n  // This prevents buffer manager exhaustion from multiple mmap regions on the same file.\n  let shared = dbCache.get(dbPath);\n  if (!shared) {\n    // Open in read-only mode — MCP server never writes to the database.\n    // This allows multiple MCP server instances to read concurrently, and\n    // avoids lock conflicts when `gitnexus analyze` is writing.\n    let lastError: Error | null = null;\n    for (let attempt = 1; attempt <= LOCK_RETRY_ATTEMPTS; attempt++) {\n      silenceStdout();\n      try {\n        const db = new lbug.Database(\n          dbPath,\n          0,     // bufferManagerSize (default)\n          false, // enableCompression (default)\n          true,  // readOnly\n        );\n        restoreStdout();\n        shared = { db, refCount: 0, ftsLoaded: false };\n        dbCache.set(dbPath, shared);\n        break;\n      } catch (err: any) {\n  
      restoreStdout();\n        lastError = err instanceof Error ? err : new Error(String(err));\n        const isLockError = lastError.message.includes('Could not set lock')\n          || lastError.message.includes('lock');\n        if (!isLockError || attempt === LOCK_RETRY_ATTEMPTS) break;\n        await new Promise(resolve => setTimeout(resolve, LOCK_RETRY_DELAY_MS * attempt));\n      }\n    }\n\n    if (!shared) {\n      throw new Error(\n        `LadybugDB unavailable for ${repoId}. Another process may be rebuilding the index. ` +\n        `Retry later. (${lastError?.message || 'unknown error'})`\n      );\n    }\n  }\n\n  shared.refCount++;\n  const db = shared.db;\n\n  // Pre-create the full pool upfront so createConnection() (which silences\n  // stdout) is never called lazily during active query execution.\n  // Mark preWarmActive so the watchdog timer doesn't interfere.\n  preWarmActive = true;\n  const available: lbug.Connection[] = [];\n  try {\n    for (let i = 0; i < MAX_CONNS_PER_REPO; i++) {\n      available.push(createConnection(db));\n    }\n  } finally {\n    preWarmActive = false;\n  }\n\n  // Load FTS extension once per shared Database.\n  // Done BEFORE pool registration so no concurrent checkout can grab\n  // the connection while the async FTS load is in progress.\n  if (!shared.ftsLoaded) {\n    try {\n      await available[0].query('LOAD EXTENSION fts');\n      shared.ftsLoaded = true;\n    } catch {\n      // Extension may not be installed — FTS queries will fail gracefully\n    }\n  }\n\n  // Register pool entry only after all connections are pre-warmed and FTS is\n  // loaded.  
Concurrent executeQuery calls see either \"not initialized\"\n  // (and throw cleanly) or a fully ready pool — never a half-built one.\n  pool.set(repoId, { db, available, checkedOut: 0, waiters: [], lastUsed: Date.now(), dbPath, closed: false });\n  ensureIdleTimer();\n}\n\n/**\n * Initialize a pool entry from a pre-existing Database object.\n *\n * Used in tests to avoid the writable→close→read-only cycle that crashes\n * on macOS due to N-API destructor segfaults.  The pool adapter reuses\n * the core adapter's writable Database instead of opening a new read-only one.\n *\n * The Database is registered in the shared dbCache so closeOne() decrements\n * the refCount correctly.  If the Database is already cached (e.g. another\n * repoId already injected it), the existing entry is reused.\n */\nexport async function initLbugWithDb(\n  repoId: string,\n  existingDb: lbug.Database,\n  dbPath: string,\n): Promise<void> {\n  const existing = pool.get(repoId);\n  if (existing) {\n    existing.lastUsed = Date.now();\n    return;\n  }\n\n  // Register in dbCache with external: true so other initLbug() calls\n  // for the same dbPath reuse this Database instead of trying to open\n  // a new one (which would fail with a file lock error).\n  // closeOne() respects the external flag and skips db.close().\n  let shared = dbCache.get(dbPath);\n  if (!shared) {\n    shared = { db: existingDb, refCount: 0, ftsLoaded: false, external: true };\n    dbCache.set(dbPath, shared);\n  }\n  shared.refCount++;\n\n  const available: lbug.Connection[] = [];\n  preWarmActive = true;\n  try {\n    for (let i = 0; i < MAX_CONNS_PER_REPO; i++) {\n      available.push(createConnection(existingDb));\n    }\n  } finally {\n    preWarmActive = false;\n  }\n\n  // Load FTS extension if not already loaded on this Database\n  if (!shared.ftsLoaded) {\n    try {\n      await available[0].query('LOAD EXTENSION fts');\n      shared.ftsLoaded = true;\n    } catch {\n      // Extension may not be installed — FTS queries will fail gracefully\n    }\n  }\n\n  pool.set(repoId, { \n    db: 
existingDb,\n    available,\n    checkedOut: 0,\n    waiters: [],\n    lastUsed: Date.now(),\n    dbPath,\n    closed: false \n  });\n  ensureIdleTimer();\n}\n\n/**\n * Checkout a connection from the pool.\n * Returns an available connection, or creates a new one if under the cap.\n * If all connections are busy and at cap, queues the caller until one is returned.\n */\nfunction checkout(entry: PoolEntry): Promise<lbug.Connection> {\n  // Fast path: grab an available connection\n  if (entry.available.length > 0) {\n    entry.checkedOut++;\n    return Promise.resolve(entry.available.pop()!);\n  }\n\n  // Pool was pre-warmed to MAX_CONNS_PER_REPO during init.  If we're here\n  // with fewer total connections, something leaked — surface the bug rather\n  // than silently creating a connection (which would silence stdout mid-query).\n  const totalConns = entry.available.length + entry.checkedOut;\n  if (totalConns < MAX_CONNS_PER_REPO) {\n    throw new Error(\n      `Connection pool integrity error: expected ${MAX_CONNS_PER_REPO} ` +\n      `connections but found ${totalConns} (${entry.available.length} available, ` +\n      `${entry.checkedOut} checked out)`\n    );\n  }\n\n  // At capacity — queue the caller with a timeout.\n  return new Promise<lbug.Connection>((resolve, reject) => {\n    const waiter = (conn: lbug.Connection) => {\n      clearTimeout(timer);\n      resolve(conn);\n    };\n    const timer = setTimeout(() => {\n      const idx = entry.waiters.indexOf(waiter);\n      if (idx !== -1) entry.waiters.splice(idx, 1);\n      reject(new Error(`Connection pool exhausted: timed out after ${WAITER_TIMEOUT_MS}ms waiting for a free connection`));\n    }, WAITER_TIMEOUT_MS);\n    entry.waiters.push(waiter);\n  });\n}\n\n/**\n * Return a connection to the pool after use.\n * If the pool entry was closed while the connection was checked out (e.g.\n * LRU eviction), close the orphaned connection instead of returning it.\n * If there are queued waiters, hand the 
connection directly to the next one\n * instead of putting it back in the available array (avoids race conditions).\n */\nfunction checkin(entry: PoolEntry, conn: lbug.Connection): void {\n  if (entry.closed) {\n    // Pool entry was deleted during checkout — close the orphaned connection\n    conn.close().catch(() => {});\n    return;\n  }\n  if (entry.waiters.length > 0) {\n    // Hand directly to the next waiter — no intermediate available state\n    const waiter = entry.waiters.shift()!;\n    waiter(conn);\n  } else {\n    entry.checkedOut--;\n    entry.available.push(conn);\n  }\n}\n\n/** Race a promise against a timeout */\nfunction withTimeout<T>(promise: Promise<T>, ms: number, label: string): Promise<T> {\n  let timer: ReturnType<typeof setTimeout>;\n  const timeout = new Promise<never>((_, reject) => {\n    timer = setTimeout(() => reject(new Error(`${label} timed out after ${ms}ms`)), ms);\n  });\n  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));\n}\n\n/**\n * Execute a query on a specific repo's connection pool.\n * Automatically checks out a connection, runs the query, and returns it.\n */\nexport const executeQuery = async (repoId: string, cypher: string): Promise<any[]> => {\n  const entry = pool.get(repoId);\n  if (!entry) {\n    throw new Error(`LadybugDB not initialized for repo \"${repoId}\". Call initLbug first.`);\n  }\n\n  if (isWriteQuery(cypher)) {\n    throw new Error('Write operations are not allowed. The pool adapter is read-only.');\n  }\n\n  entry.lastUsed = Date.now();\n\n  const conn = await checkout(entry);\n  try {\n    const queryResult = await withTimeout(conn.query(cypher), QUERY_TIMEOUT_MS, 'Query');\n    const result = Array.isArray(queryResult) ? 
queryResult[0] : queryResult;\n    const rows = await result.getAll();\n    return rows;\n  } finally {\n    checkin(entry, conn);\n  }\n};\n\n/**\n * Execute a parameterized query on a specific repo's connection pool.\n * Uses prepare/execute pattern to prevent Cypher injection.\n */\nexport const executeParameterized = async (\n  repoId: string,\n  cypher: string,\n  params: Record<string, any>,\n): Promise<any[]> => {\n  const entry = pool.get(repoId);\n  if (!entry) {\n    throw new Error(`LadybugDB not initialized for repo \"${repoId}\". Call initLbug first.`);\n  }\n\n  if (isWriteQuery(cypher)) {\n    throw new Error('Write operations are not allowed. The pool adapter is read-only.');\n  }\n\n  entry.lastUsed = Date.now();\n\n  const conn = await checkout(entry);\n  try {\n    const stmt = await withTimeout(conn.prepare(cypher), QUERY_TIMEOUT_MS, 'Prepare');\n    if (!stmt.isSuccess()) {\n      const errMsg = await stmt.getErrorMessage();\n      throw new Error(`Prepare failed: ${errMsg}`);\n    }\n    const queryResult = await withTimeout(conn.execute(stmt, params), QUERY_TIMEOUT_MS, 'Execute');\n    const result = Array.isArray(queryResult) ? 
queryResult[0] : queryResult;\n    const rows = await result.getAll();\n    return rows;\n  } finally {\n    checkin(entry, conn);\n  }\n};\n\n/**\n * Close one or all repo pools.\n * If repoId is provided, close only that repo's connections.\n * If omitted, close all repos.\n */\nexport const closeLbug = async (repoId?: string): Promise<void> => {\n  if (repoId) {\n    closeOne(repoId);\n    return;\n  }\n\n  for (const id of [...pool.keys()]) {\n    closeOne(id);\n  }\n\n  if (idleTimer) {\n    clearInterval(idleTimer);\n    idleTimer = null;\n  }\n};\n\n\n/**\n * Check if a specific repo's pool is active\n */\nexport const isLbugReady = (repoId: string): boolean => pool.has(repoId);\n\n/** Regex to detect write operations in user-supplied Cypher queries */\nexport const CYPHER_WRITE_RE = /\\b(CREATE|DELETE|SET|MERGE|REMOVE|DROP|ALTER|COPY|DETACH)\\b/i;\n\n/** Check if a Cypher query contains write operations */\nexport function isWriteQuery(query: string): boolean {\n  return CYPHER_WRITE_RE.test(query);\n}\n"
  },
  {
    "path": "gitnexus/src/mcp/local/local-backend.ts",
    "content": "/**\n * Local Backend (Multi-Repo)\n * \n * Provides tool implementations using local .gitnexus/ indexes.\n * Supports multiple indexed repositories via a global registry.\n * LadybugDB connections are opened lazily per repo on first query.\n */\n\nimport fs from 'fs/promises';\nimport path from 'path';\nimport { initLbug, executeQuery, executeParameterized, closeLbug, isLbugReady } from '../core/lbug-adapter.js';\n// Embedding imports are lazy (dynamic import) to avoid loading onnxruntime-node\n// at MCP server startup — crashes on unsupported Node ABI versions (#89)\n// git utilities available if needed\n// import { isGitRepo, getCurrentCommit, getGitRoot } from '../../storage/git.js';\nimport {\n  listRegisteredRepos,\n  cleanupOldKuzuFiles,\n  type RegistryEntry,\n} from '../../storage/repo-manager.js';\n// AI context generation is CLI-only (gitnexus analyze)\n// import { generateAIContextFiles } from '../../cli/ai-context.js';\n\n/**\n * Quick test-file detection for filtering impact results.\n * Matches common test file patterns across all supported languages.\n */\nexport function isTestFilePath(filePath: string): boolean {\n  const p = filePath.toLowerCase().replace(/\\\\/g, '/');\n  return (\n    p.includes('.test.') || p.includes('.spec.') ||\n    p.includes('__tests__/') || p.includes('__mocks__/') ||\n    p.includes('/test/') || p.includes('/tests/') ||\n    p.includes('/testing/') || p.includes('/fixtures/') ||\n    p.endsWith('_test.go') || p.endsWith('_test.py') ||\n    p.endsWith('_spec.rb') || p.endsWith('_test.rb') || p.includes('/spec/') ||\n    p.includes('/test_') || p.includes('/conftest.')\n  );\n}\n\n/** Valid LadybugDB node labels for safe Cypher query construction */\nexport const VALID_NODE_LABELS = new Set([\n  'File', 'Folder', 'Function', 'Class', 'Interface', 'Method', 'CodeElement',\n  'Community', 'Process', 'Struct', 'Enum', 'Macro', 'Typedef', 'Union',\n  'Namespace', 'Trait', 'Impl', 'TypeAlias', 'Const', 
'Static', 'Property',\n  'Record', 'Delegate', 'Annotation', 'Constructor', 'Template', 'Module',\n]);\n\n/** Valid relation types for impact analysis filtering */\nexport const VALID_RELATION_TYPES = new Set(['CALLS', 'IMPORTS', 'EXTENDS', 'IMPLEMENTS', 'HAS_METHOD', 'HAS_PROPERTY', 'OVERRIDES', 'ACCESSES']);\n\n/** Regex to detect write operations in user-supplied Cypher queries */\nexport const CYPHER_WRITE_RE = /\\b(CREATE|DELETE|SET|MERGE|REMOVE|DROP|ALTER|COPY|DETACH)\\b/i;\n\n/** Check if a Cypher query contains write operations */\nexport function isWriteQuery(query: string): boolean {\n  return CYPHER_WRITE_RE.test(query);\n}\n\n/** Structured error logging for query failures — replaces empty catch blocks */\nfunction logQueryError(context: string, err: unknown): void {\n  const msg = err instanceof Error ? err.message : String(err);\n  console.error(`GitNexus [${context}]: ${msg}`);\n}\n\nexport interface CodebaseContext {\n  projectName: string;\n  stats: {\n    fileCount: number;\n    functionCount: number;\n    communityCount: number;\n    processCount: number;\n  };\n}\n\ninterface RepoHandle {\n  id: string;          // unique key = repo name (basename)\n  name: string;\n  repoPath: string;\n  storagePath: string;\n  lbugPath: string;\n  indexedAt: string;\n  lastCommit: string;\n  stats?: RegistryEntry['stats'];\n}\n\nexport class LocalBackend {\n  private repos: Map<string, RepoHandle> = new Map();\n  private contextCache: Map<string, CodebaseContext> = new Map();\n  private initializedRepos: Set<string> = new Set();\n\n  // ─── Initialization ──────────────────────────────────────────────\n\n  /**\n   * Initialize from the global registry.\n   * Returns true if at least one repo is available.\n   */\n  async init(): Promise<boolean> {\n    await this.refreshRepos();\n    return this.repos.size > 0;\n  }\n\n  /**\n   * Re-read the global registry and update the in-memory repo map.\n   * New repos are added, existing repos are updated, removed repos 
are pruned.\n   * LadybugDB connections for removed repos are NOT closed (they idle-timeout naturally).\n   */\n  private async refreshRepos(): Promise<void> {\n    const entries = await listRegisteredRepos({ validate: true });\n    const freshIds = new Set<string>();\n\n    for (const entry of entries) {\n      const id = this.repoId(entry.name, entry.path);\n      freshIds.add(id);\n\n      const storagePath = entry.storagePath;\n      const lbugPath = path.join(storagePath, 'lbug');\n\n      // Clean up any leftover KuzuDB files from before the LadybugDB migration.\n      // If kuzu exists but lbug doesn't, warn so the user knows to re-analyze.\n      const kuzu = await cleanupOldKuzuFiles(storagePath);\n      if (kuzu.found && kuzu.needsReindex) {\n        console.error(`GitNexus: \"${entry.name}\" has a stale KuzuDB index. Run: gitnexus analyze ${entry.path}`);\n      }\n\n      const handle: RepoHandle = {\n        id,\n        name: entry.name,\n        repoPath: entry.path,\n        storagePath,\n        lbugPath,\n        indexedAt: entry.indexedAt,\n        lastCommit: entry.lastCommit,\n        stats: entry.stats,\n      };\n\n      this.repos.set(id, handle);\n\n      // Build lightweight context (no LadybugDB needed)\n      const s = entry.stats || {};\n      this.contextCache.set(id, {\n        projectName: entry.name,\n        stats: {\n          fileCount: s.files || 0,\n          functionCount: s.nodes || 0,\n          communityCount: s.communities || 0,\n          processCount: s.processes || 0,\n        },\n      });\n    }\n\n    // Prune repos that no longer exist in the registry\n    for (const id of this.repos.keys()) {\n      if (!freshIds.has(id)) {\n        this.repos.delete(id);\n        this.contextCache.delete(id);\n        this.initializedRepos.delete(id);\n      }\n    }\n  }\n\n  /**\n   * Generate a stable repo ID from name + path.\n   * If names collide, append a hash of the path.\n   */\n  private repoId(name: string, repoPath: 
string): string {\n    const base = name.toLowerCase();\n    // Check for name collision with a different path\n    for (const [id, handle] of this.repos) {\n      if (id === base && handle.repoPath !== path.resolve(repoPath)) {\n        // Collision — use path hash\n        const hash = Buffer.from(repoPath).toString('base64url').slice(0, 6);\n        return `${base}-${hash}`;\n      }\n    }\n    return base;\n  }\n\n  // ─── Repo Resolution ─────────────────────────────────────────────\n\n  /**\n   * Resolve which repo to use.\n   * - If repoParam is given, match by name or path\n   * - If only 1 repo, use it\n   * - If 0 or multiple without param, throw with helpful message\n   *\n   * On a miss, re-reads the registry once in case a new repo was indexed\n   * while the MCP server was running.\n   */\n  async resolveRepo(repoParam?: string): Promise<RepoHandle> {\n    const result = this.resolveRepoFromCache(repoParam);\n    if (result) return result;\n\n    // Miss — refresh registry and try once more\n    await this.refreshRepos();\n    const retried = this.resolveRepoFromCache(repoParam);\n    if (retried) return retried;\n\n    // Still no match — throw with helpful message\n    if (this.repos.size === 0) {\n      throw new Error('No indexed repositories. Run: gitnexus analyze');\n    }\n    if (repoParam) {\n      const names = [...this.repos.values()].map(h => h.name);\n      throw new Error(`Repository \"${repoParam}\" not found. Available: ${names.join(', ')}`);\n    }\n    const names = [...this.repos.values()].map(h => h.name);\n    throw new Error(\n      `Multiple repositories indexed. Specify which one with the \"repo\" parameter. Available: ${names.join(', ')}`\n    );\n  }\n\n  /**\n   * Try to resolve a repo from the in-memory cache. 
Returns null on miss.\n   */\n  private resolveRepoFromCache(repoParam?: string): RepoHandle | null {\n    if (this.repos.size === 0) return null;\n\n    if (repoParam) {\n      const paramLower = repoParam.toLowerCase();\n      // Match by id\n      if (this.repos.has(paramLower)) return this.repos.get(paramLower)!;\n      // Match by name (case-insensitive)\n      for (const handle of this.repos.values()) {\n        if (handle.name.toLowerCase() === paramLower) return handle;\n      }\n      // Match by path (substring)\n      const resolved = path.resolve(repoParam);\n      for (const handle of this.repos.values()) {\n        if (handle.repoPath === resolved) return handle;\n      }\n      // Match by partial name\n      for (const handle of this.repos.values()) {\n        if (handle.name.toLowerCase().includes(paramLower)) return handle;\n      }\n      return null;\n    }\n\n    if (this.repos.size === 1) {\n      return this.repos.values().next().value!;\n    }\n\n    return null; // Multiple repos, no param — ambiguous\n  }\n\n  // ─── Lazy LadybugDB Init ────────────────────────────────────────────\n\n  private async ensureInitialized(repoId: string): Promise<void> {\n    // Always check the actual pool — the idle timer may have evicted the connection\n    if (this.initializedRepos.has(repoId) && isLbugReady(repoId)) return;\n\n    const handle = this.repos.get(repoId);\n    if (!handle) throw new Error(`Unknown repo: ${repoId}`);\n\n    try {\n      await initLbug(repoId, handle.lbugPath);\n      this.initializedRepos.add(repoId);\n    } catch (err: any) {\n      // If lock error, mark as not initialized so next call retries\n      this.initializedRepos.delete(repoId);\n      throw err;\n    }\n  }\n\n  // ─── Public Getters ──────────────────────────────────────────────\n\n  /**\n   * Get context for a specific repo (or the single repo if only one).\n   */\n  getContext(repoId?: string): CodebaseContext | null {\n    if (repoId && 
this.contextCache.has(repoId)) {\n      return this.contextCache.get(repoId)!;\n    }\n    if (this.repos.size === 1) {\n      return this.contextCache.values().next().value ?? null;\n    }\n    return null;\n  }\n\n  /**\n   * List all registered repos with their metadata.\n   * Re-reads the global registry so newly indexed repos are discovered\n   * without restarting the MCP server.\n   */\n  async listRepos(): Promise<Array<{ name: string; path: string; indexedAt: string; lastCommit: string; stats?: any }>> {\n    await this.refreshRepos();\n    return [...this.repos.values()].map(h => ({\n      name: h.name,\n      path: h.repoPath,\n      indexedAt: h.indexedAt,\n      lastCommit: h.lastCommit,\n      stats: h.stats,\n    }));\n  }\n\n  // ─── Tool Dispatch ───────────────────────────────────────────────\n\n  async callTool(method: string, params: any): Promise<any> {\n    if (method === 'list_repos') {\n      return this.listRepos();\n    }\n\n    // Resolve repo from optional param (re-reads registry on miss)\n    const repo = await this.resolveRepo(params?.repo);\n\n    switch (method) {\n      case 'query':\n        return this.query(repo, params);\n      case 'cypher': {\n        const raw = await this.cypher(repo, params);\n        return this.formatCypherAsMarkdown(raw);\n      }\n      case 'context':\n        return this.context(repo, params);\n      case 'impact':\n        return this.impact(repo, params);\n      case 'detect_changes':\n        return this.detectChanges(repo, params);\n      case 'rename':\n        return this.rename(repo, params);\n      // Legacy aliases for backwards compatibility\n      case 'search':\n        return this.query(repo, params);\n      case 'explore':\n        return this.context(repo, { name: params?.name, ...params });\n      case 'overview':\n        return this.overview(repo, params);\n      default:\n        throw new Error(`Unknown tool: ${method}`);\n    }\n  }\n\n  // ─── Tool Implementations 
────────────────────────────────────────\n\n  /**\n   * Query tool — process-grouped search.\n   * \n   * 1. Hybrid search (BM25 + semantic) to find matching symbols\n   * 2. Trace each match to its process(es) via STEP_IN_PROCESS\n   * 3. Group by process, rank by aggregate relevance + internal cluster cohesion\n   * 4. Return: { processes, process_symbols, definitions }\n   */\n  private async query(repo: RepoHandle, params: {\n    query: string;\n    task_context?: string;\n    goal?: string;\n    limit?: number;\n    max_symbols?: number;\n    include_content?: boolean;\n  }): Promise<any> {\n    if (!params.query?.trim()) {\n      return { error: 'query parameter is required and cannot be empty.' };\n    }\n    \n    await this.ensureInitialized(repo.id);\n    \n    const processLimit = params.limit || 5;\n    const maxSymbolsPerProcess = params.max_symbols || 10;\n    const includeContent = params.include_content ?? false;\n    const searchQuery = params.query.trim();\n    \n    // Step 1: Run hybrid search to get matching symbols\n    const searchLimit = processLimit * maxSymbolsPerProcess; // fetch enough raw results\n    const [bm25Results, semanticResults] = await Promise.all([\n      this.bm25Search(repo, searchQuery, searchLimit),\n      this.semanticSearch(repo, searchQuery, searchLimit),\n    ]);\n    \n    // Merge via reciprocal rank fusion\n    const scoreMap = new Map<string, { score: number; data: any }>();\n    \n    for (let i = 0; i < bm25Results.length; i++) {\n      const result = bm25Results[i];\n      const key = result.nodeId || result.filePath;\n      const rrfScore = 1 / (60 + i);\n      const existing = scoreMap.get(key);\n      if (existing) {\n        existing.score += rrfScore;\n      } else {\n        scoreMap.set(key, { score: rrfScore, data: result });\n      }\n    }\n    \n    for (let i = 0; i < semanticResults.length; i++) {\n      const result = semanticResults[i];\n      const key = result.nodeId || result.filePath;\n      
const rrfScore = 1 / (60 + i);\n      const existing = scoreMap.get(key);\n      if (existing) {\n        existing.score += rrfScore;\n      } else {\n        scoreMap.set(key, { score: rrfScore, data: result });\n      }\n    }\n    \n    const merged = Array.from(scoreMap.entries())\n      .sort((a, b) => b[1].score - a[1].score)\n      .slice(0, searchLimit);\n    \n    // Step 2: For each match with a nodeId, trace to process(es)\n    const processMap = new Map<string, { id: string; label: string; heuristicLabel: string; processType: string; stepCount: number; totalScore: number; cohesionBoost: number; symbols: any[] }>();\n    const definitions: any[] = []; // standalone symbols not in any process\n    \n    for (const [_, item] of merged) {\n      const sym = item.data;\n      if (!sym.nodeId) {\n        // File-level results go to definitions\n        definitions.push({\n          name: sym.name,\n          type: sym.type || 'File',\n          filePath: sym.filePath,\n        });\n        continue;\n      }\n      \n      // Find processes this symbol participates in\n      let processRows: any[] = [];\n      try {\n        processRows = await executeParameterized(repo.id, `\n          MATCH (n {id: $nodeId})-[r:CodeRelation {type: 'STEP_IN_PROCESS'}]->(p:Process)\n          RETURN p.id AS pid, p.label AS label, p.heuristicLabel AS heuristicLabel, p.processType AS processType, p.stepCount AS stepCount, r.step AS step\n        `, { nodeId: sym.nodeId });\n      } catch (e) { logQueryError('query:process-lookup', e); }\n\n      // Get cluster membership + cohesion (cohesion used as internal ranking signal)\n      let cohesion = 0;\n      let module: string | undefined;\n      try {\n        const cohesionRows = await executeParameterized(repo.id, `\n          MATCH (n {id: $nodeId})-[:CodeRelation {type: 'MEMBER_OF'}]->(c:Community)\n          RETURN c.cohesion AS cohesion, c.heuristicLabel AS module\n          LIMIT 1\n        `, { nodeId: sym.nodeId });\n    
    if (cohesionRows.length > 0) {\n          cohesion = (cohesionRows[0].cohesion ?? cohesionRows[0][0]) || 0;\n          module = cohesionRows[0].module ?? cohesionRows[0][1];\n        }\n      } catch (e) { logQueryError('query:cluster-info', e); }\n\n      // Optionally fetch content\n      let content: string | undefined;\n      if (includeContent) {\n        try {\n          const contentRows = await executeParameterized(repo.id, `\n            MATCH (n {id: $nodeId})\n            RETURN n.content AS content\n          `, { nodeId: sym.nodeId });\n          if (contentRows.length > 0) {\n            content = contentRows[0].content ?? contentRows[0][0];\n          }\n        } catch (e) { logQueryError('query:content-fetch', e); }\n      }\n\n      const symbolEntry = {\n        id: sym.nodeId,\n        name: sym.name,\n        type: sym.type,\n        filePath: sym.filePath,\n        startLine: sym.startLine,\n        endLine: sym.endLine,\n        ...(module ? { module } : {}),\n        ...(includeContent && content ? { content } : {}),\n      };\n      \n      if (processRows.length === 0) {\n        // Symbol not in any process — goes to definitions\n        definitions.push(symbolEntry);\n      } else {\n        // Add to each process it belongs to\n        for (const row of processRows) {\n          const pid = row.pid ?? row[0];\n          const label = row.label ?? row[1];\n          const hLabel = row.heuristicLabel ?? row[2];\n          const pType = row.processType ?? row[3];\n          const stepCount = row.stepCount ?? row[4];\n          const step = row.step ?? 
row[5];\n          \n          if (!processMap.has(pid)) {\n            processMap.set(pid, {\n              id: pid,\n              label,\n              heuristicLabel: hLabel,\n              processType: pType,\n              stepCount,\n              totalScore: 0,\n              cohesionBoost: 0,\n              symbols: [],\n            });\n          }\n          \n          const proc = processMap.get(pid)!;\n          proc.totalScore += item.score;\n          proc.cohesionBoost = Math.max(proc.cohesionBoost, cohesion);\n          proc.symbols.push({\n            ...symbolEntry,\n            process_id: pid,\n            step_index: step,\n          });\n        }\n      }\n    }\n    \n    // Step 3: Rank processes by aggregate score + internal cohesion boost\n    const rankedProcesses = Array.from(processMap.values())\n      .map(p => ({\n        ...p,\n        priority: p.totalScore + (p.cohesionBoost * 0.1), // cohesion as subtle ranking signal\n      }))\n      .sort((a, b) => b.priority - a.priority)\n      .slice(0, processLimit);\n    \n    // Step 4: Build response\n    const processes = rankedProcesses.map(p => ({\n      id: p.id,\n      summary: p.heuristicLabel || p.label,\n      priority: Math.round(p.priority * 1000) / 1000,\n      symbol_count: p.symbols.length,\n      process_type: p.processType,\n      step_count: p.stepCount,\n    }));\n    \n    const processSymbols = rankedProcesses.flatMap(p =>\n      p.symbols.slice(0, maxSymbolsPerProcess)\n    );\n    \n    // Deduplicate process_symbols by id\n    const seen = new Set<string>();\n    const dedupedSymbols = processSymbols.filter(s => {\n      if (seen.has(s.id)) return false;\n      seen.add(s.id);\n      return true;\n    });\n    \n    return {\n      processes,\n      process_symbols: dedupedSymbols,\n      definitions: definitions.slice(0, 20), // cap standalone definitions\n    };\n  }\n\n  /**\n   * BM25 
keyword search helper - uses LadybugDB FTS for always-fresh results\n   */\n  private async bm25Search(repo: RepoHandle, query: string, limit: number): Promise<any[]> {\n    const { searchFTSFromLbug } = await import('../../core/search/bm25-index.js');\n    let bm25Results;\n    try {\n      bm25Results = await searchFTSFromLbug(query, limit, repo.id);\n    } catch (err: any) {\n      console.error('GitNexus: BM25/FTS search failed (FTS indexes may not exist) -', err.message);\n      return [];\n    }\n    \n    const results: any[] = [];\n    \n    for (const bm25Result of bm25Results) {\n      const fullPath = bm25Result.filePath;\n      try {\n        const symbols = await executeParameterized(repo.id, `\n          MATCH (n)\n          WHERE n.filePath = $filePath\n          RETURN n.id AS id, n.name AS name, labels(n)[0] AS type, n.filePath AS filePath, n.startLine AS startLine, n.endLine AS endLine\n          LIMIT 3\n        `, { filePath: fullPath });\n        \n        if (symbols.length > 0) {\n          for (const sym of symbols) {\n            results.push({\n              nodeId: sym.id || sym[0],\n              name: sym.name || sym[1],\n              type: sym.type || sym[2],\n              filePath: sym.filePath || sym[3],\n              startLine: sym.startLine || sym[4],\n              endLine: sym.endLine || sym[5],\n              bm25Score: bm25Result.score,\n            });\n          }\n        } else {\n          const fileName = fullPath.split('/').pop() || fullPath;\n          results.push({\n            name: fileName,\n            type: 'File',\n            filePath: bm25Result.filePath,\n            bm25Score: bm25Result.score,\n          });\n        }\n      } catch {\n        const fileName = fullPath.split('/').pop() || fullPath;\n        results.push({\n          name: fileName,\n          type: 'File',\n          filePath: bm25Result.filePath,\n          bm25Score: bm25Result.score,\n        });\n      }\n    }\n    \n    return 
results;\n  }\n\n  /**\n   * Semantic vector search helper\n   */\n  private async semanticSearch(repo: RepoHandle, query: string, limit: number): Promise<any[]> {\n    try {\n      // Check if embedding table exists before loading the model (avoids heavy model init when embeddings are off)\n      const tableCheck = await executeQuery(repo.id, `MATCH (e:CodeEmbedding) RETURN COUNT(*) AS cnt LIMIT 1`);\n      if (!tableCheck.length || (tableCheck[0].cnt ?? tableCheck[0][0]) === 0) return [];\n\n      const { embedQuery, getEmbeddingDims } = await import('../core/embedder.js');\n      const queryVec = await embedQuery(query);\n      const dims = getEmbeddingDims();\n      const queryVecStr = `[${queryVec.join(',')}]`;\n      \n      const vectorQuery = `\n        CALL QUERY_VECTOR_INDEX('CodeEmbedding', 'code_embedding_idx', \n          CAST(${queryVecStr} AS FLOAT[${dims}]), ${limit})\n        YIELD node AS emb, distance\n        WITH emb, distance\n        WHERE distance < 0.6\n        RETURN emb.nodeId AS nodeId, distance\n        ORDER BY distance\n      `;\n      \n      const embResults = await executeQuery(repo.id, vectorQuery);\n      \n      if (embResults.length === 0) return [];\n      \n      const results: any[] = [];\n      \n      for (const embRow of embResults) {\n        const nodeId = embRow.nodeId ?? embRow[0];\n        const distance = embRow.distance ?? embRow[1];\n        \n        const labelEndIdx = nodeId.indexOf(':');\n        const label = labelEndIdx > 0 ? nodeId.substring(0, labelEndIdx) : 'Unknown';\n        \n        // Validate label against known node types to prevent Cypher injection\n        if (!VALID_NODE_LABELS.has(label)) continue;\n        \n        try {\n          const nodeQuery = label === 'File'\n            ? 
`MATCH (n:File {id: $nodeId}) RETURN n.name AS name, n.filePath AS filePath`\n            : `MATCH (n:\\`${label}\\` {id: $nodeId}) RETURN n.name AS name, n.filePath AS filePath, n.startLine AS startLine, n.endLine AS endLine`;\n\n          const nodeRows = await executeParameterized(repo.id, nodeQuery, { nodeId });\n          if (nodeRows.length > 0) {\n            const nodeRow = nodeRows[0];\n            results.push({\n              nodeId,\n              name: nodeRow.name ?? nodeRow[0] ?? '',\n              type: label,\n              filePath: nodeRow.filePath ?? nodeRow[1] ?? '',\n              distance,\n              startLine: label !== 'File' ? (nodeRow.startLine ?? nodeRow[2]) : undefined,\n              endLine: label !== 'File' ? (nodeRow.endLine ?? nodeRow[3]) : undefined,\n            });\n          }\n        } catch (e) { logQueryError('semantic:node-lookup', e); }\n      }\n      \n      return results;\n    } catch {\n      // Expected when embeddings are disabled — silently fall back to BM25-only\n      return [];\n    }\n  }\n\n  async executeCypher(repoName: string, query: string): Promise<any> {\n    const repo = await this.resolveRepo(repoName);\n    return this.cypher(repo, { query });\n  }\n\n  private async cypher(repo: RepoHandle, params: { query: string }): Promise<any> {\n    await this.ensureInitialized(repo.id);\n\n    if (!isLbugReady(repo.id)) {\n      return { error: 'LadybugDB not ready. Index may be corrupted.' };\n    }\n\n    // Block write operations (defense-in-depth — DB is already read-only)\n    if (CYPHER_WRITE_RE.test(params.query)) {\n      return { error: 'Write operations (CREATE, DELETE, SET, MERGE, REMOVE, DROP, ALTER, COPY, DETACH) are not allowed. The knowledge graph is read-only.' 
};\n    }\n\n    try {\n      const result = await executeQuery(repo.id, params.query);\n      return result;\n    } catch (err: any) {\n      return { error: err.message || 'Query failed' };\n    }\n  }\n\n  /**\n   * Format raw Cypher result rows as a markdown table for LLM readability.\n   * Falls back to raw result if rows aren't tabular objects.\n   */\n  private formatCypherAsMarkdown(result: any): any {\n    if (!Array.isArray(result) || result.length === 0) return result;\n\n    const firstRow = result[0];\n    if (typeof firstRow !== 'object' || firstRow === null) return result;\n\n    const keys = Object.keys(firstRow);\n    if (keys.length === 0) return result;\n\n    const header = '| ' + keys.join(' | ') + ' |';\n    const separator = '| ' + keys.map(() => '---').join(' | ') + ' |';\n    const dataRows = result.map((row: any) =>\n      '| ' + keys.map(k => {\n        const v = row[k];\n        if (v === null || v === undefined) return '';\n        if (typeof v === 'object') return JSON.stringify(v);\n        return String(v);\n      }).join(' | ') + ' |'\n    );\n\n    return {\n      markdown: [header, separator, ...dataRows].join('\\n'),\n      row_count: result.length,\n    };\n  }\n\n  /**\n   * Aggregate same-named clusters: group by heuristicLabel, sum symbols,\n   * weighted-average cohesion, filter out tiny clusters (<5 symbols).\n   * Raw communities stay intact in LadybugDB for Cypher queries.\n   */\n  private aggregateClusters(clusters: any[]): any[] {\n    const groups = new Map<string, { ids: string[]; totalSymbols: number; weightedCohesion: number; largest: any }>();\n\n    for (const c of clusters) {\n      const label = c.heuristicLabel || c.label || 'Unknown';\n      const symbols = c.symbolCount || 0;\n      const cohesion = c.cohesion || 0;\n      const existing = groups.get(label);\n\n      if (!existing) {\n        groups.set(label, { ids: [c.id], totalSymbols: symbols, weightedCohesion: cohesion * symbols, largest: c });\n      } 
else {\n        existing.ids.push(c.id);\n        existing.totalSymbols += symbols;\n        existing.weightedCohesion += cohesion * symbols;\n        if (symbols > (existing.largest.symbolCount || 0)) {\n          existing.largest = c;\n        }\n      }\n    }\n\n    return Array.from(groups.entries())\n      .map(([label, g]) => ({\n        id: g.largest.id,\n        label,\n        heuristicLabel: label,\n        symbolCount: g.totalSymbols,\n        cohesion: g.totalSymbols > 0 ? g.weightedCohesion / g.totalSymbols : 0,\n        subCommunities: g.ids.length,\n      }))\n      .filter(c => c.symbolCount >= 5)\n      .sort((a, b) => b.symbolCount - a.symbolCount);\n  }\n\n  private async overview(repo: RepoHandle, params: { showClusters?: boolean; showProcesses?: boolean; limit?: number }): Promise<any> {\n    await this.ensureInitialized(repo.id);\n    \n    const limit = params.limit || 20;\n    const result: any = {\n      repo: repo.name,\n      repoPath: repo.repoPath,\n      stats: repo.stats,\n      indexedAt: repo.indexedAt,\n      lastCommit: repo.lastCommit,\n    };\n    \n    if (params.showClusters !== false) {\n      try {\n        // Fetch more raw communities than the display limit so aggregation has enough data\n        const rawLimit = Math.max(limit * 5, 200);\n        const clusters = await executeQuery(repo.id, `\n          MATCH (c:Community)\n          RETURN c.id AS id, c.label AS label, c.heuristicLabel AS heuristicLabel, c.cohesion AS cohesion, c.symbolCount AS symbolCount\n          ORDER BY c.symbolCount DESC\n          LIMIT ${rawLimit}\n        `);\n        const rawClusters = clusters.map((c: any) => ({\n          id: c.id || c[0],\n          label: c.label || c[1],\n          heuristicLabel: c.heuristicLabel || c[2],\n          cohesion: c.cohesion || c[3],\n          symbolCount: c.symbolCount || c[4],\n        }));\n        result.clusters = this.aggregateClusters(rawClusters).slice(0, limit);\n      } catch {\n        
result.clusters = [];\n      }\n    }\n    \n    if (params.showProcesses !== false) {\n      try {\n        const processes = await executeQuery(repo.id, `\n          MATCH (p:Process)\n          RETURN p.id AS id, p.label AS label, p.heuristicLabel AS heuristicLabel, p.processType AS processType, p.stepCount AS stepCount\n          ORDER BY p.stepCount DESC\n          LIMIT ${limit}\n        `);\n        result.processes = processes.map((p: any) => ({\n          id: p.id || p[0],\n          label: p.label || p[1],\n          heuristicLabel: p.heuristicLabel || p[2],\n          processType: p.processType || p[3],\n          stepCount: p.stepCount || p[4],\n        }));\n      } catch {\n        result.processes = [];\n      }\n    }\n    \n    return result;\n  }\n\n  /**\n   * Context tool — 360-degree symbol view with categorized refs.\n   * Disambiguation when multiple symbols share a name.\n   * UID-based direct lookup. No cluster in output.\n   */\n  private async context(repo: RepoHandle, params: {\n    name?: string;\n    uid?: string;\n    file_path?: string;\n    include_content?: boolean;\n  }): Promise<any> {\n    await this.ensureInitialized(repo.id);\n    \n    const { name, uid, file_path, include_content } = params;\n    \n    if (!name && !uid) {\n      return { error: 'Either \"name\" or \"uid\" parameter is required.' };\n    }\n    \n    // Step 1: Find the symbol\n    let symbols: any[];\n    \n    if (uid) {\n      symbols = await executeParameterized(repo.id, `\n        MATCH (n {id: $uid})\n        RETURN n.id AS id, n.name AS name, labels(n)[0] AS type, n.filePath AS filePath, n.startLine AS startLine, n.endLine AS endLine${include_content ? 
', n.content AS content' : ''}\n        LIMIT 1\n      `, { uid });\n    } else {\n      const isQualified = name!.includes('/') || name!.includes(':');\n\n      let whereClause: string;\n      let queryParams: Record<string, any>;\n      if (file_path) {\n        whereClause = `WHERE n.name = $symName AND n.filePath CONTAINS $filePath`;\n        queryParams = { symName: name!, filePath: file_path };\n      } else if (isQualified) {\n        whereClause = `WHERE n.id = $symName OR n.name = $symName`;\n        queryParams = { symName: name! };\n      } else {\n        whereClause = `WHERE n.name = $symName`;\n        queryParams = { symName: name! };\n      }\n\n      symbols = await executeParameterized(repo.id, `\n        MATCH (n) ${whereClause}\n        RETURN n.id AS id, n.name AS name, labels(n)[0] AS type, n.filePath AS filePath, n.startLine AS startLine, n.endLine AS endLine${include_content ? ', n.content AS content' : ''}\n        LIMIT 10\n      `, queryParams);\n    }\n    \n    if (symbols.length === 0) {\n      return { error: `Symbol '${name || uid}' not found` };\n    }\n    \n    // Step 2: Disambiguation\n    if (symbols.length > 1 && !uid) {\n      return {\n        status: 'ambiguous',\n        message: `Found ${symbols.length} symbols matching '${name}'. 
Use uid or file_path to disambiguate.`,\n        candidates: symbols.map((s: any) => ({\n          uid: s.id || s[0],\n          name: s.name || s[1],\n          kind: s.type || s[2],\n          filePath: s.filePath || s[3],\n          line: s.startLine || s[4],\n        })),\n      };\n    }\n    \n    // Step 3: Build full context\n    const sym = symbols[0];\n    const symId = sym.id || sym[0];\n\n    // Categorized incoming refs\n    const incomingRows = await executeParameterized(repo.id, `\n      MATCH (caller)-[r:CodeRelation]->(n {id: $symId})\n      WHERE r.type IN ['CALLS', 'IMPORTS', 'EXTENDS', 'IMPLEMENTS', 'HAS_METHOD', 'HAS_PROPERTY', 'OVERRIDES', 'ACCESSES']\n      RETURN r.type AS relType, caller.id AS uid, caller.name AS name, caller.filePath AS filePath, labels(caller)[0] AS kind\n      LIMIT 30\n    `, { symId });\n\n    // Categorized outgoing refs\n    const outgoingRows = await executeParameterized(repo.id, `\n      MATCH (n {id: $symId})-[r:CodeRelation]->(target)\n      WHERE r.type IN ['CALLS', 'IMPORTS', 'EXTENDS', 'IMPLEMENTS', 'HAS_METHOD', 'HAS_PROPERTY', 'OVERRIDES', 'ACCESSES']\n      RETURN r.type AS relType, target.id AS uid, target.name AS name, target.filePath AS filePath, labels(target)[0] AS kind\n      LIMIT 30\n    `, { symId });\n\n    // Process participation\n    let processRows: any[] = [];\n    try {\n      processRows = await executeParameterized(repo.id, `\n        MATCH (n {id: $symId})-[r:CodeRelation {type: 'STEP_IN_PROCESS'}]->(p:Process)\n        RETURN p.id AS pid, p.heuristicLabel AS label, r.step AS step, p.stepCount AS stepCount\n      `, { symId });\n    } catch (e) { logQueryError('context:process-participation', e); }\n    \n    // Helper to categorize refs\n    const categorize = (rows: any[]) => {\n      const cats: Record<string, any[]> = {};\n      for (const row of rows) {\n        const relType = (row.relType || row[0] || '').toLowerCase();\n        const entry = {\n          uid: row.uid || row[1],\n  
        name: row.name || row[2],\n          filePath: row.filePath || row[3],\n          kind: row.kind || row[4],\n        };\n        if (!cats[relType]) cats[relType] = [];\n        cats[relType].push(entry);\n      }\n      return cats;\n    };\n    \n    return {\n      status: 'found',\n      symbol: {\n        uid: sym.id || sym[0],\n        name: sym.name || sym[1],\n        kind: sym.type || sym[2],\n        filePath: sym.filePath || sym[3],\n        startLine: sym.startLine || sym[4],\n        endLine: sym.endLine || sym[5],\n        ...(include_content && (sym.content || sym[6]) ? { content: sym.content || sym[6] } : {}),\n      },\n      incoming: categorize(incomingRows),\n      outgoing: categorize(outgoingRows),\n      processes: processRows.map((r: any) => ({\n        id: r.pid || r[0],\n        name: r.label || r[1],\n        step_index: r.step ?? r[2], // ?? so a step index of 0 is preserved\n        step_count: r.stepCount ?? r[3],\n      })),\n    };\n  }\n\n  /**\n   * Legacy explore — kept for backwards compatibility with resources.ts.\n   * Routes cluster/process types to direct graph queries.\n   */\n  private async explore(repo: RepoHandle, params: { name: string; type: 'symbol' | 'cluster' | 'process' }): Promise<any> {\n    await this.ensureInitialized(repo.id);\n    const { name, type } = params;\n    \n    if (type === 'symbol') {\n      return this.context(repo, { name });\n    }\n    \n    if (type === 'cluster') {\n      const clusters = await executeParameterized(repo.id, `\n        MATCH (c:Community)\n        WHERE c.label = $clusterName OR c.heuristicLabel = $clusterName\n        RETURN c.id AS id, c.label AS label, c.heuristicLabel AS heuristicLabel, c.cohesion AS cohesion, c.symbolCount AS symbolCount\n      `, { clusterName: name });\n      if (clusters.length === 0) return { error: `Cluster '${name}' not found` };\n\n      const rawClusters = clusters.map((c: any) => ({\n        id: c.id || c[0], label: c.label || c[1], heuristicLabel: c.heuristicLabel || c[2],\n  
      cohesion: c.cohesion || c[3], symbolCount: c.symbolCount || c[4],\n      }));\n\n      let totalSymbols = 0, weightedCohesion = 0;\n      for (const c of rawClusters) {\n        const s = c.symbolCount || 0;\n        totalSymbols += s;\n        weightedCohesion += (c.cohesion || 0) * s;\n      }\n\n      const members = await executeParameterized(repo.id, `\n        MATCH (n)-[:CodeRelation {type: 'MEMBER_OF'}]->(c:Community)\n        WHERE c.label = $clusterName OR c.heuristicLabel = $clusterName\n        RETURN DISTINCT n.name AS name, labels(n)[0] AS type, n.filePath AS filePath\n        LIMIT 30\n      `, { clusterName: name });\n      \n      return {\n        cluster: {\n          id: rawClusters[0].id,\n          label: rawClusters[0].heuristicLabel || rawClusters[0].label,\n          heuristicLabel: rawClusters[0].heuristicLabel || rawClusters[0].label,\n          cohesion: totalSymbols > 0 ? weightedCohesion / totalSymbols : 0,\n          symbolCount: totalSymbols,\n          subCommunities: rawClusters.length,\n        },\n        members: members.map((m: any) => ({\n          name: m.name || m[0], type: m.type || m[1], filePath: m.filePath || m[2],\n        })),\n      };\n    }\n    \n    if (type === 'process') {\n      const processes = await executeParameterized(repo.id, `\n        MATCH (p:Process)\n        WHERE p.label = $processName OR p.heuristicLabel = $processName\n        RETURN p.id AS id, p.label AS label, p.heuristicLabel AS heuristicLabel, p.processType AS processType, p.stepCount AS stepCount\n        LIMIT 1\n      `, { processName: name });\n      if (processes.length === 0) return { error: `Process '${name}' not found` };\n\n      const proc = processes[0];\n      const procId = proc.id || proc[0];\n      const steps = await executeParameterized(repo.id, `\n        MATCH (n)-[r:CodeRelation {type: 'STEP_IN_PROCESS'}]->(p {id: $procId})\n        RETURN n.name AS name, labels(n)[0] AS type, n.filePath AS filePath, r.step AS step\n 
       ORDER BY r.step\n      `, { procId });\n      \n      return {\n        process: {\n          id: procId, label: proc.label || proc[1], heuristicLabel: proc.heuristicLabel || proc[2],\n          processType: proc.processType || proc[3], stepCount: proc.stepCount || proc[4],\n        },\n        steps: steps.map((s: any) => ({\n          step: s.step ?? s[3], name: s.name || s[0], type: s.type || s[1], filePath: s.filePath || s[2],\n        })),\n      };\n    }\n    \n    return { error: 'Invalid type. Use: symbol, cluster, or process' };\n  }\n\n  /**\n   * Detect changes — git-diff based impact analysis.\n   * Maps changed lines to indexed symbols, then finds affected processes.\n   */\n  private async detectChanges(repo: RepoHandle, params: {\n    scope?: string;\n    base_ref?: string;\n  }): Promise<any> {\n    await this.ensureInitialized(repo.id);\n    \n    const scope = params.scope || 'unstaged';\n    const { execFileSync } = await import('child_process');\n\n    // Build git diff args based on scope (using execFileSync to avoid shell injection)\n    let diffArgs: string[];\n    switch (scope) {\n      case 'staged':\n        diffArgs = ['diff', '--staged', '--name-only'];\n        break;\n      case 'all':\n        diffArgs = ['diff', 'HEAD', '--name-only'];\n        break;\n      case 'compare':\n        if (!params.base_ref) return { error: 'base_ref is required for \"compare\" scope' };\n        diffArgs = ['diff', params.base_ref, '--name-only'];\n        break;\n      case 'unstaged':\n      default:\n        diffArgs = ['diff', '--name-only'];\n        break;\n    }\n\n    let changedFiles: string[];\n    try {\n      const output = execFileSync('git', diffArgs, { cwd: repo.repoPath, encoding: 'utf-8' });\n      changedFiles = output.trim().split('\\n').filter(f => f.length > 0);\n    } catch (err: any) {\n      return { error: `Git diff failed: ${err.message}` };\n    }\n    \n    if (changedFiles.length === 0) {\n      return {\n        
summary: { changed_count: 0, affected_count: 0, risk_level: 'none', message: 'No changes detected.' },\n        changed_symbols: [],\n        affected_processes: [],\n      };\n    }\n    \n    // Map changed files to indexed symbols\n    const changedSymbols: any[] = [];\n    for (const file of changedFiles) {\n      const normalizedFile = file.replace(/\\\\/g, '/');\n      try {\n        const symbols = await executeParameterized(repo.id, `\n          MATCH (n) WHERE n.filePath CONTAINS $filePath\n          RETURN n.id AS id, n.name AS name, labels(n)[0] AS type, n.filePath AS filePath\n          LIMIT 20\n        `, { filePath: normalizedFile });\n        for (const sym of symbols) {\n          changedSymbols.push({\n            id: sym.id || sym[0],\n            name: sym.name || sym[1],\n            type: sym.type || sym[2],\n            filePath: sym.filePath || sym[3],\n            change_type: 'Modified',\n          });\n        }\n      } catch (e) { logQueryError('detect-changes:file-symbols', e); }\n    }\n\n    // Find affected processes\n    const affectedProcesses = new Map<string, any>();\n    for (const sym of changedSymbols) {\n      try {\n        const procs = await executeParameterized(repo.id, `\n          MATCH (n {id: $nodeId})-[r:CodeRelation {type: 'STEP_IN_PROCESS'}]->(p:Process)\n          RETURN p.id AS pid, p.heuristicLabel AS label, p.processType AS processType, p.stepCount AS stepCount, r.step AS step\n        `, { nodeId: sym.id });\n        for (const proc of procs) {\n          const pid = proc.pid || proc[0];\n          if (!affectedProcesses.has(pid)) {\n            affectedProcesses.set(pid, {\n              id: pid,\n              name: proc.label || proc[1],\n              process_type: proc.processType || proc[2],\n              step_count: proc.stepCount || proc[3],\n              changed_steps: [],\n            });\n          }\n          affectedProcesses.get(pid)!.changed_steps.push({\n            symbol: sym.name,\n      
      step: proc.step ?? proc[4], // ?? so step 0 is preserved\n          });\n        }\n      } catch (e) { logQueryError('detect-changes:process-lookup', e); }\n    }\n\n    const processCount = affectedProcesses.size;\n    const risk = processCount === 0 ? 'low' : processCount <= 5 ? 'medium' : processCount <= 15 ? 'high' : 'critical';\n    \n    return {\n      summary: {\n        changed_count: changedSymbols.length,\n        affected_count: processCount,\n        changed_files: changedFiles.length,\n        risk_level: risk,\n      },\n      changed_symbols: changedSymbols,\n      affected_processes: Array.from(affectedProcesses.values()),\n    };\n  }\n\n  /**\n   * Rename tool — multi-file coordinated rename using graph + text search.\n   * Graph refs are tagged \"graph\" (high confidence).\n   * Additional refs found via text search are tagged \"text_search\" (lower confidence).\n   */\n  private async rename(repo: RepoHandle, params: {\n    symbol_name?: string;\n    symbol_uid?: string;\n    new_name: string;\n    file_path?: string;\n    dry_run?: boolean;\n  }): Promise<any> {\n    await this.ensureInitialized(repo.id);\n    \n    const { new_name, file_path } = params;\n    const dry_run = params.dry_run ?? true;\n\n    if (!params.symbol_name && !params.symbol_uid) {\n      return { error: 'Either symbol_name or symbol_uid is required.' 
};\n    }\n\n    /** Guard: ensure a file path resolves within the repo root (prevents path traversal) */\n    const assertSafePath = (filePath: string): string => {\n      const full = path.resolve(repo.repoPath, filePath);\n      if (!full.startsWith(repo.repoPath + path.sep) && full !== repo.repoPath) {\n        throw new Error(`Path traversal blocked: ${filePath}`);\n      }\n      return full;\n    };\n    \n    // Step 1: Find the target symbol (reuse context's lookup)\n    const lookupResult = await this.context(repo, {\n      name: params.symbol_name,\n      uid: params.symbol_uid,\n      file_path,\n    });\n    \n    if (lookupResult.status === 'ambiguous') {\n      return lookupResult; // pass disambiguation through\n    }\n    if (lookupResult.error) {\n      return lookupResult;\n    }\n    \n    const sym = lookupResult.symbol;\n    const oldName = sym.name;\n    \n    if (oldName === new_name) {\n      return { error: 'New name is the same as the current name.' };\n    }\n    \n    // Step 2: Collect edits from graph (high confidence)\n    const changes = new Map<string, { file_path: string; edits: any[] }>();\n    \n    const addEdit = (filePath: string, line: number, oldText: string, newText: string, confidence: string) => {\n      if (!changes.has(filePath)) {\n        changes.set(filePath, { file_path: filePath, edits: [] });\n      }\n      changes.get(filePath)!.edits.push({ line, old_text: oldText, new_text: newText, confidence });\n    };\n    \n    // The definition itself\n    if (sym.filePath && sym.startLine) {\n      try {\n        const content = await fs.readFile(assertSafePath(sym.filePath), 'utf-8');\n        const lines = content.split('\\n');\n        const lineIdx = sym.startLine - 1;\n        if (lineIdx >= 0 && lineIdx < lines.length && lines[lineIdx].includes(oldName)) {\n          const defRegex = new RegExp(`\\\\b${oldName.replace(/[.*+?^${}()|[\\]\\\\]/g, '\\\\$&')}\\\\b`, 'g');\n          addEdit(sym.filePath, 
 sym.startLine, lines[lineIdx].trim(), lines[lineIdx].replace(defRegex, new_name).trim(), 'graph');\n        }\n      } catch (e) { logQueryError('rename:read-definition', e); }\n    }\n\n    // All incoming refs from graph (callers, importers, etc.)\n    const allIncoming = [\n      ...(lookupResult.incoming.calls || []),\n      ...(lookupResult.incoming.imports || []),\n      ...(lookupResult.incoming.extends || []),\n      ...(lookupResult.incoming.implements || []),\n    ];\n    \n    let graphEdits = changes.size > 0 ? 1 : 0; // count definition edit\n    \n    for (const ref of allIncoming) {\n      if (!ref.filePath) continue;\n      try {\n        const content = await fs.readFile(assertSafePath(ref.filePath), 'utf-8');\n        const lines = content.split('\\n');\n        for (let i = 0; i < lines.length; i++) {\n          if (lines[i].includes(oldName)) {\n            addEdit(ref.filePath, i + 1, lines[i].trim(), lines[i].replace(new RegExp(`\\\\b${oldName.replace(/[.*+?^${}()|[\\]\\\\]/g, '\\\\$&')}\\\\b`, 'g'), new_name).trim(), 'graph');\n            graphEdits++;\n            break; // one edit per file from graph refs\n          }\n        }\n      } catch (e) { logQueryError('rename:read-ref', e); }\n    }\n\n    // Step 3: Text search for refs the graph might have missed\n    let astSearchEdits = 0;\n    const graphFiles = new Set([sym.filePath, ...allIncoming.map(r => r.filePath)].filter(Boolean));\n    \n    // Simple text search across the repo for the old name (in files not already covered by graph)\n    try {\n      const { execFileSync } = await import('child_process');\n      const rgArgs = [\n        '-l',\n        '--type-add', 'code:*.{ts,tsx,js,jsx,py,go,rs,java,c,h,cpp,cc,cxx,hpp,hxx,hh,cs,php,swift}',\n        '-t', 'code',\n        `\\\\b${oldName.replace(/[.*+?^${}()|[\\]\\\\]/g, '\\\\$&')}\\\\b`, // regex-escape the name; rg treats this argument as a regex\n        '.',\n      ];\n      const output = execFileSync('rg', rgArgs, { cwd: repo.repoPath, encoding: 'utf-8', timeout: 5000 });\n      const files = 
output.trim().split('\\n').filter(f => f.length > 0);\n      \n      for (const file of files) {\n        const normalizedFile = file.replace(/\\\\/g, '/').replace(/^\\.\\//, '');\n        if (graphFiles.has(normalizedFile)) continue; // already covered by graph\n        \n        try {\n          const content = await fs.readFile(assertSafePath(normalizedFile), 'utf-8');\n          const lines = content.split('\\n');\n          const regex = new RegExp(`\\\\b${oldName.replace(/[.*+?^${}()|[\\]\\\\]/g, '\\\\$&')}\\\\b`, 'g');\n          for (let i = 0; i < lines.length; i++) {\n            regex.lastIndex = 0;\n            if (regex.test(lines[i])) {\n              regex.lastIndex = 0;\n              addEdit(normalizedFile, i + 1, lines[i].trim(), lines[i].replace(regex, new_name).trim(), 'text_search');\n              astSearchEdits++;\n            }\n          }\n        } catch (e) { logQueryError('rename:text-search-read', e); }\n      }\n    } catch (e) { logQueryError('rename:ripgrep', e); }\n    \n    // Step 4: Apply or preview\n    const allChanges = Array.from(changes.values());\n    const totalEdits = allChanges.reduce((sum, c) => sum + c.edits.length, 0);\n    \n    if (!dry_run) {\n      // Apply edits to files\n      for (const change of allChanges) {\n        try {\n          const fullPath = assertSafePath(change.file_path);\n          let content = await fs.readFile(fullPath, 'utf-8');\n          const regex = new RegExp(`\\\\b${oldName.replace(/[.*+?^${}()|[\\]\\\\]/g, '\\\\$&')}\\\\b`, 'g');\n          content = content.replace(regex, new_name);\n          await fs.writeFile(fullPath, content, 'utf-8');\n        } catch (e) { logQueryError('rename:apply-edit', e); }\n      }\n    }\n    \n    return {\n      status: 'success',\n      old_name: oldName,\n      new_name,\n      files_affected: allChanges.length,\n      total_edits: totalEdits,\n      graph_edits: graphEdits,\n      text_search_edits: astSearchEdits,\n      changes: allChanges,\n    
  applied: !dry_run,\n    };\n  }\n\n  private async impact(repo: RepoHandle, params: {\n    target: string;\n    direction: 'upstream' | 'downstream';\n    maxDepth?: number;\n    relationTypes?: string[];\n    includeTests?: boolean;\n    minConfidence?: number;\n  }): Promise<any> {\n    try {\n      return await this._impactImpl(repo, params);\n    } catch (err: any) {\n      // Return structured error instead of crashing (#321)\n      return {\n        error: (err instanceof Error ? err.message : String(err)) || 'Impact analysis failed',\n        target: { name: params.target },\n        direction: params.direction,\n        impactedCount: 0,\n        risk: 'UNKNOWN',\n        suggestion: 'The graph query failed — try gitnexus context <symbol> as a fallback',\n      };\n    }\n  }\n\n  private async _impactImpl(repo: RepoHandle, params: {\n    target: string;\n    direction: 'upstream' | 'downstream';\n    maxDepth?: number;\n    relationTypes?: string[];\n    includeTests?: boolean;\n    minConfidence?: number;\n  }): Promise<any> {\n    await this.ensureInitialized(repo.id);\n    \n    const { target, direction } = params;\n    const maxDepth = params.maxDepth || 3;\n    const rawRelTypes = params.relationTypes && params.relationTypes.length > 0\n      ? params.relationTypes.filter(t => VALID_RELATION_TYPES.has(t))\n      : ['CALLS', 'IMPORTS', 'EXTENDS', 'IMPLEMENTS'];\n    const relationTypes = rawRelTypes.length > 0 ? rawRelTypes : ['CALLS', 'IMPORTS', 'EXTENDS', 'IMPLEMENTS'];\n    const includeTests = params.includeTests ?? false;\n    const minConfidence = params.minConfidence ?? 0;\n\n    const relTypeFilter = relationTypes.map(t => `'${t}'`).join(', ');\n    const confidenceFilter = minConfidence > 0 ? 
` AND r.confidence >= ${minConfidence}` : '';\n\n    const targets = await executeParameterized(repo.id, `\n      MATCH (n)\n      WHERE n.name = $targetName\n      RETURN n.id AS id, n.name AS name, labels(n)[0] AS type, n.filePath AS filePath\n      LIMIT 1\n    `, { targetName: target });\n    if (targets.length === 0) return { error: `Target '${target}' not found` };\n    \n    const sym = targets[0];\n    const symId = sym.id || sym[0];\n    \n    const impacted: any[] = [];\n    const visited = new Set<string>([symId]);\n    let frontier = [symId];\n    let traversalComplete = true;\n    \n    for (let depth = 1; depth <= maxDepth && frontier.length > 0; depth++) {\n      const nextFrontier: string[] = [];\n      \n      // Batch frontier nodes into a single Cypher query per depth level\n      const idList = frontier.map(id => `'${id.replace(/'/g, \"''\")}'`).join(', ');\n      const query = direction === 'upstream'\n        ? `MATCH (caller)-[r:CodeRelation]->(n) WHERE n.id IN [${idList}] AND r.type IN [${relTypeFilter}]${confidenceFilter} RETURN n.id AS sourceId, caller.id AS id, caller.name AS name, labels(caller)[0] AS type, caller.filePath AS filePath, r.type AS relType, r.confidence AS confidence`\n        : `MATCH (n)-[r:CodeRelation]->(callee) WHERE n.id IN [${idList}] AND r.type IN [${relTypeFilter}]${confidenceFilter} RETURN n.id AS sourceId, callee.id AS id, callee.name AS name, labels(callee)[0] AS type, callee.filePath AS filePath, r.type AS relType, r.confidence AS confidence`;\n      \n      try {\n        const related = await executeQuery(repo.id, query);\n        \n        for (const rel of related) {\n          const relId = rel.id || rel[1];\n          const filePath = rel.filePath || rel[4] || '';\n          \n          if (!includeTests && isTestFilePath(filePath)) continue;\n          \n          if (!visited.has(relId)) {\n            visited.add(relId);\n            nextFrontier.push(relId);\n            impacted.push({\n              
 depth,\n              id: relId,\n              name: rel.name || rel[2],\n              type: rel.type || rel[3],\n              filePath,\n              relationType: rel.relType || rel[5],\n              confidence: rel.confidence ?? rel[6] ?? 1.0, // ?? so an explicit confidence of 0 is kept\n            });\n          }\n        }\n      } catch (e) {\n        logQueryError('impact:depth-traversal', e);\n        // Break out of depth loop on query failure but return partial results\n        // collected so far, rather than silently swallowing the error (#321)\n        traversalComplete = false;\n        break;\n      }\n      \n      frontier = nextFrontier;\n    }\n    \n    const grouped: Record<number, any[]> = {};\n    for (const item of impacted) {\n      if (!grouped[item.depth]) grouped[item.depth] = [];\n      grouped[item.depth].push(item);\n    }\n\n    // ── Enrichment: affected processes, modules, risk ──────────────\n    const directCount = (grouped[1] || []).length;\n    let affectedProcesses: any[] = [];\n    let affectedModules: any[] = [];\n\n    if (impacted.length > 0) {\n      const allIds = impacted.map(i => `'${i.id.replace(/'/g, \"''\")}'`).join(', ');\n      const d1Ids = (grouped[1] || []).map((i: any) => `'${i.id.replace(/'/g, \"''\")}'`).join(', ');\n\n      // Affected processes: which execution flows are broken and at which step\n      const [processRows, moduleRows, directModuleRows] = await Promise.all([\n        executeQuery(repo.id, `\n          MATCH (s)-[r:CodeRelation {type: 'STEP_IN_PROCESS'}]->(p:Process)\n          WHERE s.id IN [${allIds}]\n          RETURN p.heuristicLabel AS name, COUNT(DISTINCT s.id) AS hits, MIN(r.step) AS minStep, p.stepCount AS stepCount\n          ORDER BY hits DESC\n          LIMIT 20\n        `).catch(() => []),\n        executeQuery(repo.id, `\n          MATCH (s)-[:CodeRelation {type: 'MEMBER_OF'}]->(c:Community)\n          WHERE s.id IN [${allIds}]\n          RETURN c.heuristicLabel AS name, COUNT(DISTINCT s.id) AS hits\n          ORDER 
BY hits DESC\n          LIMIT 20\n        `).catch(() => []),\n        d1Ids ? executeQuery(repo.id, `\n          MATCH (s)-[:CodeRelation {type: 'MEMBER_OF'}]->(c:Community)\n          WHERE s.id IN [${d1Ids}]\n          RETURN DISTINCT c.heuristicLabel AS name\n        `).catch(() => []) : Promise.resolve([]),\n      ]);\n\n      affectedProcesses = processRows.map((r: any) => ({\n        name: r.name || r[0],\n        hits: r.hits || r[1],\n        broken_at_step: r.minStep ?? r[2],\n        step_count: r.stepCount ?? r[3],\n      }));\n\n      const directModuleSet = new Set(directModuleRows.map((r: any) => r.name || r[0]));\n      affectedModules = moduleRows.map((r: any) => {\n        const name = r.name || r[0];\n        return {\n          name,\n          hits: r.hits || r[1],\n          impact: directModuleSet.has(name) ? 'direct' : 'indirect',\n        };\n      });\n    }\n\n    // Risk scoring\n    const processCount = affectedProcesses.length;\n    const moduleCount = affectedModules.length;\n    let risk = 'LOW';\n    if (directCount >= 30 || processCount >= 5 || moduleCount >= 5 || impacted.length >= 200) {\n      risk = 'CRITICAL';\n    } else if (directCount >= 15 || processCount >= 3 || moduleCount >= 3 || impacted.length >= 100) {\n      risk = 'HIGH';\n    } else if (directCount >= 5 || impacted.length >= 30) {\n      risk = 'MEDIUM';\n    }\n\n    return {\n      target: {\n        id: symId,\n        name: sym.name || sym[1],\n        type: sym.type || sym[2],\n        filePath: sym.filePath || sym[3],\n      },\n      direction,\n      impactedCount: impacted.length,\n      risk,\n      ...(!traversalComplete && { partial: true }),\n      summary: {\n        direct: directCount,\n        processes_affected: processCount,\n        modules_affected: moduleCount,\n      },\n      affected_processes: affectedProcesses,\n      affected_modules: affectedModules,\n      byDepth: grouped,\n    };\n  }\n\n  // ─── Direct Graph Queries (for 
resources.ts) ────────────────────\n\n  /**\n   * Query clusters (communities) directly from graph.\n   * Used by getClustersResource — avoids legacy overview() dispatch.\n   */\n  async queryClusters(repoName?: string, limit = 100): Promise<{ clusters: any[] }> {\n    const repo = await this.resolveRepo(repoName);\n    await this.ensureInitialized(repo.id);\n\n    try {\n      const rawLimit = Math.max(limit * 5, 200);\n      const clusters = await executeQuery(repo.id, `\n        MATCH (c:Community)\n        RETURN c.id AS id, c.label AS label, c.heuristicLabel AS heuristicLabel, c.cohesion AS cohesion, c.symbolCount AS symbolCount\n        ORDER BY c.symbolCount DESC\n        LIMIT ${rawLimit}\n      `);\n      const rawClusters = clusters.map((c: any) => ({\n        id: c.id || c[0],\n        label: c.label || c[1],\n        heuristicLabel: c.heuristicLabel || c[2],\n        cohesion: c.cohesion || c[3],\n        symbolCount: c.symbolCount || c[4],\n      }));\n      return { clusters: this.aggregateClusters(rawClusters).slice(0, limit) };\n    } catch {\n      return { clusters: [] };\n    }\n  }\n\n  /**\n   * Query processes directly from graph.\n   * Used by getProcessesResource — avoids legacy overview() dispatch.\n   */\n  async queryProcesses(repoName?: string, limit = 50): Promise<{ processes: any[] }> {\n    const repo = await this.resolveRepo(repoName);\n    await this.ensureInitialized(repo.id);\n\n    try {\n      const processes = await executeQuery(repo.id, `\n        MATCH (p:Process)\n        RETURN p.id AS id, p.label AS label, p.heuristicLabel AS heuristicLabel, p.processType AS processType, p.stepCount AS stepCount\n        ORDER BY p.stepCount DESC\n        LIMIT ${limit}\n      `);\n      return {\n        processes: processes.map((p: any) => ({\n          id: p.id || p[0],\n          label: p.label || p[1],\n          heuristicLabel: p.heuristicLabel || p[2],\n          processType: p.processType || p[3],\n          stepCount: p.stepCount 
|| p[4],\n        })),\n      };\n    } catch {\n      return { processes: [] };\n    }\n  }\n\n  /**\n   * Query cluster detail (members) directly from graph.\n   * Used by getClusterDetailResource.\n   */\n  async queryClusterDetail(name: string, repoName?: string): Promise<any> {\n    const repo = await this.resolveRepo(repoName);\n    await this.ensureInitialized(repo.id);\n\n    const clusters = await executeParameterized(repo.id, `\n      MATCH (c:Community)\n      WHERE c.label = $clusterName OR c.heuristicLabel = $clusterName\n      RETURN c.id AS id, c.label AS label, c.heuristicLabel AS heuristicLabel, c.cohesion AS cohesion, c.symbolCount AS symbolCount\n    `, { clusterName: name });\n    if (clusters.length === 0) return { error: `Cluster '${name}' not found` };\n\n    const rawClusters = clusters.map((c: any) => ({\n      id: c.id || c[0], label: c.label || c[1], heuristicLabel: c.heuristicLabel || c[2],\n      cohesion: c.cohesion || c[3], symbolCount: c.symbolCount || c[4],\n    }));\n\n    let totalSymbols = 0, weightedCohesion = 0;\n    for (const c of rawClusters) {\n      const s = c.symbolCount || 0;\n      totalSymbols += s;\n      weightedCohesion += (c.cohesion || 0) * s;\n    }\n\n    const members = await executeParameterized(repo.id, `\n      MATCH (n)-[:CodeRelation {type: 'MEMBER_OF'}]->(c:Community)\n      WHERE c.label = $clusterName OR c.heuristicLabel = $clusterName\n      RETURN DISTINCT n.name AS name, labels(n)[0] AS type, n.filePath AS filePath\n      LIMIT 30\n    `, { clusterName: name });\n\n    return {\n      cluster: {\n        id: rawClusters[0].id,\n        label: rawClusters[0].heuristicLabel || rawClusters[0].label,\n        heuristicLabel: rawClusters[0].heuristicLabel || rawClusters[0].label,\n        cohesion: totalSymbols > 0 ? 
 weightedCohesion / totalSymbols : 0,\n        symbolCount: totalSymbols,\n        subCommunities: rawClusters.length,\n      },\n      members: members.map((m: any) => ({\n        name: m.name || m[0], type: m.type || m[1], filePath: m.filePath || m[2],\n      })),\n    };\n  }\n\n  /**\n   * Query process detail (steps) directly from graph.\n   * Used by getProcessDetailResource.\n   */\n  async queryProcessDetail(name: string, repoName?: string): Promise<any> {\n    const repo = await this.resolveRepo(repoName);\n    await this.ensureInitialized(repo.id);\n\n    const processes = await executeParameterized(repo.id, `\n      MATCH (p:Process)\n      WHERE p.label = $processName OR p.heuristicLabel = $processName\n      RETURN p.id AS id, p.label AS label, p.heuristicLabel AS heuristicLabel, p.processType AS processType, p.stepCount AS stepCount\n      LIMIT 1\n    `, { processName: name });\n    if (processes.length === 0) return { error: `Process '${name}' not found` };\n\n    const proc = processes[0];\n    const procId = proc.id || proc[0];\n    const steps = await executeParameterized(repo.id, `\n      MATCH (n)-[r:CodeRelation {type: 'STEP_IN_PROCESS'}]->(p {id: $procId})\n      RETURN n.name AS name, labels(n)[0] AS type, n.filePath AS filePath, r.step AS step\n      ORDER BY r.step\n    `, { procId });\n\n    return {\n      process: {\n        id: procId, label: proc.label || proc[1], heuristicLabel: proc.heuristicLabel || proc[2],\n        processType: proc.processType || proc[3], stepCount: proc.stepCount || proc[4],\n      },\n      steps: steps.map((s: any) => ({\n        step: s.step ?? s[3], name: s.name || s[0], type: s.type || s[1], filePath: s.filePath || s[2],\n      })),\n    };\n  }\n\n  async disconnect(): Promise<void> {\n    await closeLbug(); // close all connections\n    // Note: we intentionally do NOT call disposeEmbedder() here.\n    // ONNX Runtime's native cleanup segfaults on macOS and some Linux configs,\n    // and importing the 
embedder module on Node v24+ crashes if onnxruntime\n    // was never loaded during the session. Since process.exit(0) follows\n    // immediately after disconnect(), the OS reclaims everything. See #38, #89.\n    this.repos.clear();\n    this.contextCache.clear();\n    this.initializedRepos.clear();\n  }\n}\n"
  },
  {
    "path": "gitnexus/src/mcp/resources.ts",
    "content": "/**\n * MCP Resources (Multi-Repo)\n * \n * Provides structured on-demand data to AI agents.\n * All resources use repo-scoped URIs: gitnexus://repo/{name}/context\n */\n\nimport type { LocalBackend } from './local/local-backend.js';\nimport { checkStaleness } from './staleness.js';\n\nexport interface ResourceDefinition {\n  uri: string;\n  name: string;\n  description: string;\n  mimeType: string;\n}\n\nexport interface ResourceTemplate {\n  uriTemplate: string;\n  name: string;\n  description: string;\n  mimeType: string;\n}\n\n/**\n * Static resources — includes per-repo resources and the global repos list\n */\nexport function getResourceDefinitions(): ResourceDefinition[] {\n  return [\n    {\n      uri: 'gitnexus://repos',\n      name: 'All Indexed Repositories',\n      description: 'List of all indexed repos with stats. Read this first to discover available repos.',\n      mimeType: 'text/yaml',\n    },\n    {\n      uri: 'gitnexus://setup',\n      name: 'GitNexus Setup Content',\n      description: 'Returns AGENTS.md content for all indexed repos. 
Useful for setup/onboarding.',\n      mimeType: 'text/markdown',\n    },\n  ];\n}\n\n/**\n * Dynamic resource templates\n */\nexport function getResourceTemplates(): ResourceTemplate[] {\n  return [\n    {\n      uriTemplate: 'gitnexus://repo/{name}/context',\n      name: 'Repo Overview',\n      description: 'Codebase stats, staleness check, and available tools',\n      mimeType: 'text/yaml',\n    },\n    {\n      uriTemplate: 'gitnexus://repo/{name}/clusters',\n      name: 'Repo Modules',\n      description: 'All functional areas (Leiden clusters)',\n      mimeType: 'text/yaml',\n    },\n    {\n      uriTemplate: 'gitnexus://repo/{name}/processes',\n      name: 'Repo Processes',\n      description: 'All execution flows',\n      mimeType: 'text/yaml',\n    },\n    {\n      uriTemplate: 'gitnexus://repo/{name}/schema',\n      name: 'Graph Schema',\n      description: 'Node/edge schema for Cypher queries',\n      mimeType: 'text/yaml',\n    },\n    {\n      uriTemplate: 'gitnexus://repo/{name}/cluster/{clusterName}',\n      name: 'Module Detail',\n      description: 'Deep dive into a specific functional area',\n      mimeType: 'text/yaml',\n    },\n    {\n      uriTemplate: 'gitnexus://repo/{name}/process/{processName}',\n      name: 'Process Trace',\n      description: 'Step-by-step execution trace',\n      mimeType: 'text/yaml',\n    },\n  ];\n}\n\n/**\n * Parse a resource URI to extract the repo name and resource type.\n */\nfunction parseUri(uri: string): { repoName?: string; resourceType: string; param?: string } {\n  if (uri === 'gitnexus://repos') return { resourceType: 'repos' };\n  if (uri === 'gitnexus://setup') return { resourceType: 'setup' };\n\n  // Repo-scoped: gitnexus://repo/{name}/context\n  const repoMatch = uri.match(/^gitnexus:\\/\\/repo\\/([^/]+)\\/(.+)$/);\n  if (repoMatch) {\n    const repoName = decodeURIComponent(repoMatch[1]);\n    const rest = repoMatch[2];\n\n    if (rest.startsWith('cluster/')) {\n      return { repoName, resourceType: 
'cluster', param: decodeURIComponent(rest.replace('cluster/', '')) };\n    }\n    if (rest.startsWith('process/')) {\n      return { repoName, resourceType: 'process', param: decodeURIComponent(rest.replace('process/', '')) };\n    }\n\n    return { repoName, resourceType: rest };\n  }\n\n  throw new Error(`Unknown resource URI: ${uri}`);\n}\n\n/**\n * Read a resource and return its content\n */\nexport async function readResource(uri: string, backend: LocalBackend): Promise<string> {\n  const parsed = parseUri(uri);\n\n  // Global repos list — no repo context needed\n  if (parsed.resourceType === 'repos') {\n    return getReposResource(backend);\n  }\n  \n  // Setup resource — returns AGENTS.md content for all repos\n  if (parsed.resourceType === 'setup') {\n    return getSetupResource(backend);\n  }\n\n  const repoName = parsed.repoName;\n\n  switch (parsed.resourceType) {\n    case 'context':\n      return getContextResource(backend, repoName);\n    case 'clusters':\n      return getClustersResource(backend, repoName);\n    case 'processes':\n      return getProcessesResource(backend, repoName);\n    case 'schema':\n      return getSchemaResource();\n    case 'cluster':\n      return getClusterDetailResource(parsed.param!, backend, repoName);\n    case 'process':\n      return getProcessDetailResource(parsed.param!, backend, repoName);\n    default:\n      throw new Error(`Unknown resource: ${uri}`);\n  }\n}\n\n// ─── Resource Implementations ─────────────────────────────────────────\n\n/**\n * Repos resource — list all indexed repositories\n */\nasync function getReposResource(backend: LocalBackend): Promise<string> {\n  const repos = await backend.listRepos();\n\n  if (repos.length === 0) {\n    return 'repos: []\\n# No repositories indexed. 
Run: gitnexus analyze';\n  }\n\n  const lines: string[] = ['repos:'];\n  for (const repo of repos) {\n    lines.push(`  - name: \"${repo.name}\"`);\n    lines.push(`    path: \"${repo.path}\"`);\n    lines.push(`    indexed: \"${repo.indexedAt}\"`);\n    lines.push(`    commit: \"${repo.lastCommit?.slice(0, 7) || 'unknown'}\"`);\n    if (repo.stats) {\n      lines.push(`    files: ${repo.stats.files || 0}`);\n      lines.push(`    symbols: ${repo.stats.nodes || 0}`);\n      lines.push(`    processes: ${repo.stats.processes || 0}`);\n    }\n  }\n\n  if (repos.length > 1) {\n    lines.push('');\n    lines.push('# Multiple repos indexed. Use repo parameter in tool calls:');\n    lines.push(`# gitnexus_query({query: \"auth\", repo: \"${repos[0].name}\"})`);\n  }\n\n  return lines.join('\\n');\n}\n\n/**\n * Context resource — codebase overview for a specific repo\n */\nasync function getContextResource(backend: LocalBackend, repoName?: string): Promise<string> {\n  // Resolve repo\n  const repo = await backend.resolveRepo(repoName);\n  const repoId = repo.name.toLowerCase();\n  const context = backend.getContext(repoId) || backend.getContext();\n\n  if (!context) {\n    return 'error: No codebase loaded. Run: gitnexus analyze';\n  }\n  \n  // Check staleness\n  const repoPath = repo.repoPath;\n  const lastCommit = repo.lastCommit || 'HEAD';\n  const staleness = repoPath ? 
checkStaleness(repoPath, lastCommit) : { isStale: false, commitsBehind: 0 };\n  \n  const lines: string[] = [\n    `project: ${context.projectName}`,\n  ];\n  \n  if (staleness.isStale && staleness.hint) {\n    lines.push('');\n    lines.push(`staleness: \"${staleness.hint}\"`);\n  }\n  \n  lines.push('');\n  lines.push('stats:');\n  lines.push(`  files: ${context.stats.fileCount}`);\n  lines.push(`  symbols: ${context.stats.functionCount}`);\n  lines.push(`  processes: ${context.stats.processCount}`);\n  lines.push('');\n  lines.push('tools_available:');\n  lines.push('  - query: Process-grouped code intelligence (execution flows related to a concept)');\n  lines.push('  - context: 360-degree symbol view (categorized refs, process participation)');\n  lines.push('  - impact: Blast radius analysis (what breaks if you change a symbol)');\n  lines.push('  - detect_changes: Git-diff impact analysis (what do your changes affect)');\n  lines.push('  - rename: Multi-file coordinated rename with confidence tags');\n  lines.push('  - cypher: Raw graph queries');\n  lines.push('  - list_repos: Discover all indexed repositories');\n  lines.push('');\n  lines.push('re_index: Run `npx gitnexus analyze` in terminal if data is stale');\n  lines.push('');\n  lines.push('resources_available:');\n  lines.push('  - gitnexus://repos: All indexed repositories');\n  lines.push(`  - gitnexus://repo/${context.projectName}/clusters: All functional areas`);\n  lines.push(`  - gitnexus://repo/${context.projectName}/processes: All execution flows`);\n  lines.push(`  - gitnexus://repo/${context.projectName}/cluster/{name}: Module details`);\n  lines.push(`  - gitnexus://repo/${context.projectName}/process/{name}: Process trace`);\n  \n  return lines.join('\\n');\n}\n\n/**\n * Clusters resource — queries graph directly via backend.queryClusters()\n */\nasync function getClustersResource(backend: LocalBackend, repoName?: string): Promise<string> {\n  try {\n    const result = await 
backend.queryClusters(repoName, 100);\n\n    if (!result.clusters || result.clusters.length === 0) {\n      return 'modules: []\\n# No functional areas detected. Run: gitnexus analyze';\n    }\n\n    const displayLimit = 20;\n    const lines: string[] = ['modules:'];\n    const toShow = result.clusters.slice(0, displayLimit);\n\n    for (const cluster of toShow) {\n      const label = cluster.heuristicLabel || cluster.label || cluster.id;\n      lines.push(`  - name: \"${label}\"`);\n      lines.push(`    symbols: ${cluster.symbolCount || 0}`);\n      if (cluster.cohesion) {\n        lines.push(`    cohesion: ${(cluster.cohesion * 100).toFixed(0)}%`);\n      }\n    }\n\n    if (result.clusters.length > displayLimit) {\n      lines.push(`\\n# Showing top ${displayLimit} of ${result.clusters.length} modules. Use gitnexus_query for deeper search.`);\n    }\n\n    return lines.join('\\n');\n  } catch (err: any) {\n    return `error: ${err.message}`;\n  }\n}\n\n/**\n * Processes resource — queries graph directly via backend.queryProcesses()\n */\nasync function getProcessesResource(backend: LocalBackend, repoName?: string): Promise<string> {\n  try {\n    const result = await backend.queryProcesses(repoName, 50);\n\n    if (!result.processes || result.processes.length === 0) {\n      return 'processes: []\\n# No processes detected. Run: gitnexus analyze';\n    }\n\n    const displayLimit = 20;\n    const lines: string[] = ['processes:'];\n    const toShow = result.processes.slice(0, displayLimit);\n\n    for (const proc of toShow) {\n      const label = proc.heuristicLabel || proc.label || proc.id;\n      lines.push(`  - name: \"${label}\"`);\n      lines.push(`    type: ${proc.processType || 'unknown'}`);\n      lines.push(`    steps: ${proc.stepCount || 0}`);\n    }\n\n    if (result.processes.length > displayLimit) {\n      lines.push(`\\n# Showing top ${displayLimit} of ${result.processes.length} processes. 
Use gitnexus_query for deeper search.`);\n    }\n\n    return lines.join('\\n');\n  } catch (err: any) {\n    return `error: ${err.message}`;\n  }\n}\n\n/**\n * Schema resource — graph structure for Cypher queries\n */\nfunction getSchemaResource(): string {\n  return `# GitNexus Graph Schema\n\nnodes:\n  - File: Source code files\n  - Folder: Directory containers\n  - Function: Functions and arrow functions\n  - Class: Class definitions\n  - Interface: Interface/type definitions\n  - Method: Class methods\n  - CodeElement: Catch-all for other code elements\n  - Community: Auto-detected functional area (Leiden algorithm)\n  - Process: Execution flow trace\n\nadditional_node_types: \"Multi-language: Struct, Enum, Macro, Typedef, Union, Namespace, Trait, Impl, TypeAlias, Const, Static, Property, Record, Delegate, Annotation, Constructor, Template, Module (use backticks in queries: \\`Struct\\`, \\`Enum\\`, etc.)\"\n\nnode_properties:\n  common: \"name (STRING), filePath (STRING), startLine (INT32), endLine (INT32)\"\n  Method: \"parameterCount (INT32), returnType (STRING), isVariadic (BOOL)\"\n  Function: \"parameterCount (INT32), returnType (STRING), isVariadic (BOOL)\"\n  Property: \"declaredType (STRING) — the field's type annotation (e.g., 'Address', 'City'). 
Used for field-access chain resolution.\"\n  Constructor: \"parameterCount (INT32)\"\n\nrelationships:\n  - CONTAINS: File/Folder contains child\n  - DEFINES: File defines a symbol\n  - CALLS: Function/method invocation\n  - IMPORTS: Module imports\n  - EXTENDS: Class inheritance\n  - IMPLEMENTS: Interface implementation\n  - HAS_METHOD: Class/Struct/Interface owns a Method\n  - HAS_PROPERTY: Class/Struct/Interface owns a Property (field)\n  - ACCESSES: Function/Method reads or writes a Property (reason: 'read' or 'write')\n  - OVERRIDES: Method overrides another Method (MRO)\n  - MEMBER_OF: Symbol belongs to community\n  - STEP_IN_PROCESS: Symbol is step N in process\n\nrelationship_table: \"All relationships use a single CodeRelation table with a 'type' property. Properties: type (STRING), confidence (DOUBLE), reason (STRING), step (INT32)\"\n\nexample_queries:\n  find_callers: |\n    MATCH (caller)-[:CodeRelation {type: 'CALLS'}]->(f:Function {name: \"myFunc\"})\n    RETURN caller.name, caller.filePath\n  \n  find_community_members: |\n    MATCH (s)-[:CodeRelation {type: 'MEMBER_OF'}]->(c:Community)\n    WHERE c.heuristicLabel = \"Auth\"\n    RETURN s.name, labels(s)[0] AS type\n  \n  trace_process: |\n    MATCH (s)-[r:CodeRelation {type: 'STEP_IN_PROCESS'}]->(p:Process)\n    WHERE p.heuristicLabel = \"LoginFlow\"\n    RETURN s.name, r.step\n    ORDER BY r.step\n`;\n}\n\n/**\n * Cluster detail resource — queries graph directly via backend.queryClusterDetail()\n */\nasync function getClusterDetailResource(name: string, backend: LocalBackend, repoName?: string): Promise<string> {\n  try {\n    const result = await backend.queryClusterDetail(name, repoName);\n\n    if (result.error) {\n      return `error: ${result.error}`;\n    }\n\n    const cluster = result.cluster;\n    const members = result.members || [];\n\n    const lines: string[] = [\n      `module: \"${cluster.heuristicLabel || cluster.label || cluster.id}\"`,\n      `symbols: ${cluster.symbolCount || 
members.length}`,\n    ];\n\n    if (cluster.cohesion) {\n      lines.push(`cohesion: ${(cluster.cohesion * 100).toFixed(0)}%`);\n    }\n\n    if (members.length > 0) {\n      lines.push('');\n      lines.push('members:');\n      for (const member of members.slice(0, 20)) {\n        lines.push(`  - name: ${member.name}`);\n        lines.push(`    type: ${member.type}`);\n        lines.push(`    file: ${member.filePath}`);\n      }\n      if (members.length > 20) {\n        lines.push(`  # ... and ${members.length - 20} more`);\n      }\n    }\n\n    return lines.join('\\n');\n  } catch (err: any) {\n    return `error: ${err.message}`;\n  }\n}\n\n/**\n * Process detail resource — queries graph directly via backend.queryProcessDetail()\n */\nasync function getProcessDetailResource(name: string, backend: LocalBackend, repoName?: string): Promise<string> {\n  try {\n    const result = await backend.queryProcessDetail(name, repoName);\n\n    if (result.error) {\n      return `error: ${result.error}`;\n    }\n\n    const proc = result.process;\n    const steps = result.steps || [];\n\n    const lines: string[] = [\n      `name: \"${proc.heuristicLabel || proc.label || proc.id}\"`,\n      `type: ${proc.processType || 'unknown'}`,\n      `step_count: ${proc.stepCount || steps.length}`,\n    ];\n\n    if (steps.length > 0) {\n      lines.push('');\n      lines.push('trace:');\n      for (const step of steps) {\n        lines.push(`  ${step.step}: ${step.name} (${step.filePath})`);\n      }\n    }\n\n    return lines.join('\\n');\n  } catch (err: any) {\n    return `error: ${err.message}`;\n  }\n}\n\n/**\n * Setup resource — generates AGENTS.md content for all indexed repos.\n * Useful for `gitnexus setup` onboarding or dynamic content injection.\n */\nasync function getSetupResource(backend: LocalBackend): Promise<string> {\n  const repos = await backend.listRepos();\n\n  if (repos.length === 0) {\n    return '# GitNexus\\n\\nNo repositories indexed. 
Run: `npx gitnexus analyze` in a repository.';\n  }\n  \n  const sections: string[] = [];\n  \n  for (const repo of repos) {\n    const stats = repo.stats || {};\n    const lines = [\n      `# GitNexus MCP — ${repo.name}`,\n      '',\n      `This project is indexed by GitNexus as **${repo.name}** (${stats.nodes || 0} symbols, ${stats.edges || 0} relationships, ${stats.processes || 0} execution flows).`,\n      '',\n      '## Tools',\n      '',\n      '| Tool | What it gives you |',\n      '|------|-------------------|',\n      '| `query` | Process-grouped code intelligence — execution flows related to a concept |',\n      '| `context` | 360-degree symbol view — categorized refs, processes it participates in |',\n      '| `impact` | Symbol blast radius — what breaks at depth 1/2/3 with confidence |',\n      '| `detect_changes` | Git-diff impact — what do your current changes affect |',\n      '| `rename` | Multi-file coordinated rename with confidence-tagged edits |',\n      '| `cypher` | Raw graph queries |',\n      '| `list_repos` | Discover indexed repos |',\n      '',\n      '## Resources',\n      '',\n      `- \\`gitnexus://repo/${repo.name}/context\\` — Stats, staleness check`,\n      `- \\`gitnexus://repo/${repo.name}/clusters\\` — All functional areas`,\n      `- \\`gitnexus://repo/${repo.name}/processes\\` — All execution flows`,\n      `- \\`gitnexus://repo/${repo.name}/schema\\` — Graph schema for Cypher`,\n    ];\n    sections.push(lines.join('\\n'));\n  }\n  \n  return sections.join('\\n\\n---\\n\\n');\n}\n"
  },
  {
    "path": "gitnexus/src/mcp/server.ts",
    "content": "/**\n * MCP Server (Multi-Repo)\n *\n * Model Context Protocol server that runs on stdio.\n * External AI tools (Cursor, Claude) spawn this process and\n * communicate via stdin/stdout using the MCP protocol.\n *\n * Supports multiple indexed repositories via the global registry.\n *\n * Tools: list_repos, query, cypher, context, impact, detect_changes, rename\n * Resources: repos, repo/{name}/context, repo/{name}/clusters, ...\n */\n\nimport { createRequire } from 'module';\nimport { Server } from '@modelcontextprotocol/sdk/server/index.js';\nimport { CompatibleStdioServerTransport } from './compatible-stdio-transport.js';\nimport {\n  CallToolRequestSchema,\n  ListToolsRequestSchema,\n  ListResourcesRequestSchema,\n  ReadResourceRequestSchema,\n  ListResourceTemplatesRequestSchema,\n  ListPromptsRequestSchema,\n  GetPromptRequestSchema,\n} from '@modelcontextprotocol/sdk/types.js';\nimport { GITNEXUS_TOOLS } from './tools.js';\nimport { realStdoutWrite } from './core/lbug-adapter.js';\nimport type { LocalBackend } from './local/local-backend.js';\nimport { getResourceDefinitions, getResourceTemplates, readResource } from './resources.js';\n\n/**\n * Next-step hints appended to tool responses.\n *\n * Agents often stop after one tool call. These hints guide them to the\n * logical next action, creating a self-guiding workflow without hooks.\n *\n * Design: Each hint is a short, actionable instruction (not a suggestion).\n * The hint references the specific tool/resource to use next.\n */\nfunction getNextStepHint(toolName: string, args: Record<string, any> | undefined): string {\n  const repo = args?.repo;\n  const repoParam = repo ? 
`, repo: \"${repo}\"` : '';\n  const repoPath = repo || '{name}';\n\n  switch (toolName) {\n    case 'list_repos':\n      return `\\n\\n---\\n**Next:** READ gitnexus://repo/{name}/context for any repo above to get its overview and check staleness.`;\n\n    case 'query':\n      return `\\n\\n---\\n**Next:** To understand a specific symbol in depth, use context({name: \"<symbol_name>\"${repoParam}}) to see categorized refs and process participation.`;\n\n    case 'context':\n      return `\\n\\n---\\n**Next:** If planning changes, use impact({target: \"${args?.name || '<name>'}\", direction: \"upstream\"${repoParam}}) to check blast radius. To see execution flows, READ gitnexus://repo/${repoPath}/processes.`;\n\n    case 'impact':\n      return `\\n\\n---\\n**Next:** Review d=1 items first (WILL BREAK). To check affected execution flows, READ gitnexus://repo/${repoPath}/processes.`;\n\n    case 'detect_changes':\n      return `\\n\\n---\\n**Next:** Review affected processes. Use context() on high-risk changed symbols. READ gitnexus://repo/${repoPath}/process/{name} for full execution traces.`;\n\n    case 'rename':\n      return `\\n\\n---\\n**Next:** Run detect_changes(${repoParam ? `{repo: \"${repo}\"}` : ''}) to verify no unexpected side effects from the rename.`;\n\n    case 'cypher':\n      return `\\n\\n---\\n**Next:** To explore a result symbol, use context({name: \"<name>\"${repoParam}}). For schema reference, READ gitnexus://repo/${repoPath}/schema.`;\n\n    // Legacy tool names — still return useful hints\n    case 'search':\n      return `\\n\\n---\\n**Next:** To understand a result in context, use context({name: \"<symbol_name>\"${repoParam}}).`;\n    case 'explore':\n      return `\\n\\n---\\n**Next:** If planning changes, use impact({target: \"<name>\", direction: \"upstream\"${repoParam}}).`;\n    case 'overview':\n      return `\\n\\n---\\n**Next:** To drill into an area, READ gitnexus://repo/${repoPath}/cluster/{name}. 
To see execution flows, READ gitnexus://repo/${repoPath}/processes.`;\n\n    default:\n      return '';\n  }\n}\n\n/**\n * Create a configured MCP Server with all handlers registered.\n * Transport-agnostic — caller connects the desired transport.\n */\nexport function createMCPServer(backend: LocalBackend): Server {\n  const require = createRequire(import.meta.url);\n  const pkgVersion: string = require('../../package.json').version;\n  const server = new Server(\n    {\n      name: 'gitnexus',\n      version: pkgVersion,\n    },\n    {\n      capabilities: {\n        tools: {},\n        resources: {},\n        prompts: {},\n      },\n    }\n  );\n\n  // Handle list resources request\n  server.setRequestHandler(ListResourcesRequestSchema, async () => {\n    const resources = getResourceDefinitions();\n    return {\n      resources: resources.map(r => ({\n        uri: r.uri,\n        name: r.name,\n        description: r.description,\n        mimeType: r.mimeType,\n      })),\n    };\n  });\n\n  // Handle list resource templates request (for dynamic resources)\n  server.setRequestHandler(ListResourceTemplatesRequestSchema, async () => {\n    const templates = getResourceTemplates();\n    return {\n      resourceTemplates: templates.map(t => ({\n        uriTemplate: t.uriTemplate,\n        name: t.name,\n        description: t.description,\n        mimeType: t.mimeType,\n      })),\n    };\n  });\n\n  // Handle read resource request\n  server.setRequestHandler(ReadResourceRequestSchema, async (request) => {\n    const { uri } = request.params;\n\n    try {\n      const content = await readResource(uri, backend);\n      return {\n        contents: [\n          {\n            uri,\n            mimeType: 'text/yaml',\n            text: content,\n          },\n        ],\n      };\n    } catch (err: any) {\n      return {\n        contents: [\n          {\n            uri,\n            mimeType: 'text/plain',\n            text: `Error: ${err.message}`,\n          },\n   
     ],\n      };\n    }\n  });\n\n\n  // Handle list tools request\n  server.setRequestHandler(ListToolsRequestSchema, async () => ({\n    tools: GITNEXUS_TOOLS.map((tool) => ({\n      name: tool.name,\n      description: tool.description,\n      inputSchema: tool.inputSchema,\n    })),\n  }));\n\n  // Handle tool calls — append next-step hints to guide agent workflow\n  server.setRequestHandler(CallToolRequestSchema, async (request) => {\n    const { name, arguments: args } = request.params;\n\n    try {\n      const result = await backend.callTool(name, args);\n      const resultText = typeof result === 'string' ? result : JSON.stringify(result, null, 2);\n      const hint = getNextStepHint(name, args as Record<string, any> | undefined);\n\n      return {\n        content: [\n          {\n            type: 'text',\n            text: resultText + hint,\n          },\n        ],\n      };\n    } catch (error) {\n      const message = error instanceof Error ? error.message : 'Unknown error';\n      return {\n        content: [\n          {\n            type: 'text',\n            text: `Error: ${message}`,\n          },\n        ],\n        isError: true,\n      };\n    }\n  });\n\n  // Handle list prompts request\n  server.setRequestHandler(ListPromptsRequestSchema, async () => ({\n    prompts: [\n      {\n        name: 'detect_impact',\n        description: 'Analyze the impact of your current changes before committing. Guides through scope selection, change detection, process analysis, and risk assessment.',\n        arguments: [\n          { name: 'scope', description: 'What to analyze: unstaged, staged, all, or compare', required: false },\n          { name: 'base_ref', description: 'Branch/commit for compare scope', required: false },\n        ],\n      },\n      {\n        name: 'generate_map',\n        description: 'Generate architecture documentation from the knowledge graph. 
Creates a codebase overview with execution flows and mermaid diagrams.',\n        arguments: [\n          { name: 'repo', description: 'Repository name (omit if only one indexed)', required: false },\n        ],\n      },\n    ],\n  }));\n\n  // Handle get prompt request\n  server.setRequestHandler(GetPromptRequestSchema, async (request) => {\n    const { name, arguments: args } = request.params;\n\n    if (name === 'detect_impact') {\n      const scope = args?.scope || 'all';\n      const baseRef = args?.base_ref || '';\n      return {\n        messages: [\n          {\n            role: 'user' as const,\n            content: {\n              type: 'text' as const,\n              text: `Analyze the impact of my current code changes before committing.\n\nFollow these steps:\n1. Run \\`detect_changes(${JSON.stringify({ scope, ...(baseRef ? { base_ref: baseRef } : {}) })})\\` to find what changed and affected processes\n2. For each changed symbol in critical processes, run \\`context({name: \"<symbol>\"})\\` to see its full reference graph\n3. For any high-risk items (many callers or cross-process), run \\`impact({target: \"<symbol>\", direction: \"upstream\"})\\` for blast radius\n4. Summarize: changes, affected processes, risk level, and recommended actions\n\nPresent the analysis as a clear risk report.`,\n            },\n          },\n        ],\n      };\n    }\n\n    if (name === 'generate_map') {\n      const repo = args?.repo || '';\n      return {\n        messages: [\n          {\n            role: 'user' as const,\n            content: {\n              type: 'text' as const,\n              text: `Generate architecture documentation for this codebase using the knowledge graph.\n\nFollow these steps:\n1. READ \\`gitnexus://repo/${repo || '{name}'}/context\\` for codebase stats\n2. READ \\`gitnexus://repo/${repo || '{name}'}/clusters\\` to see all functional areas\n3. READ \\`gitnexus://repo/${repo || '{name}'}/processes\\` to see all execution flows\n4. 
For the top 5 most important processes, READ \\`gitnexus://repo/${repo || '{name}'}/process/{name}\\` for step-by-step traces\n5. Generate a mermaid architecture diagram showing the major areas and their connections\n6. Write an ARCHITECTURE.md file with: overview, functional areas, key execution flows, and the mermaid diagram`,\n            },\n          },\n        ],\n      };\n    }\n\n    throw new Error(`Unknown prompt: ${name}`);\n  });\n\n  return server;\n}\n\n/**\n * Start the MCP server on stdio transport (for CLI use).\n */\nexport async function startMCPServer(backend: LocalBackend): Promise<void> {\n  const server = createMCPServer(backend);\n\n  // Use the shared stdout reference captured at module-load time by the\n  // lbug-adapter.  Avoids divergence if anything patches stdout between\n  // module load and server start.\n  const _safeStdout = new Proxy(process.stdout, {\n    get(target, prop, receiver) {\n      if (prop === 'write') return realStdoutWrite;\n      const val = Reflect.get(target, prop, receiver);\n      return typeof val === 'function' ? 
val.bind(target) : val;\n    }\n  });\n  const transport = new CompatibleStdioServerTransport(process.stdin, _safeStdout);\n  await server.connect(transport);\n\n  // Graceful shutdown helper\n  let shuttingDown = false;\n  const shutdown = async (exitCode = 0) => {\n    if (shuttingDown) return;\n    shuttingDown = true;\n    try { await backend.disconnect(); } catch {}\n    try { await server.close(); } catch {}\n    process.exit(exitCode);\n  };\n\n  // Handle graceful shutdown. Signal handlers receive the signal name as their\n  // first argument, so wrap shutdown() — otherwise 'SIGINT'/'SIGTERM' would be\n  // passed to process.exit() as the exit code, which only accepts integers.\n  process.on('SIGINT', () => shutdown(0));\n  process.on('SIGTERM', () => shutdown(0));\n\n  // Log crashes to stderr so they aren't silently lost.\n  // uncaughtException is fatal — shut down.\n  // unhandledRejection is logged but kept non-fatal (availability-first):\n  // killing the server for one missed catch would be worse than logging it.\n  process.on('uncaughtException', (err) => {\n    process.stderr.write(`GitNexus MCP uncaughtException: ${err?.stack || err}\\n`);\n    shutdown(1);\n  });\n  process.on('unhandledRejection', (reason: any) => {\n    process.stderr.write(`GitNexus MCP unhandledRejection: ${reason?.stack || reason}\\n`);\n  });\n\n  // Handle stdio errors — stdin close means the parent process is gone\n  process.stdin.on('end', () => shutdown());\n  process.stdin.on('error', () => shutdown());\n  process.stdout.on('error', () => shutdown());\n}\n"
  },
  {
    "path": "gitnexus/src/mcp/staleness.ts",
    "content": "/**\n * Staleness Check\n * \n * Checks if the GitNexus index is behind the current git HEAD.\n * Returns a hint for the LLM to call analyze if stale.\n */\n\nimport { execFileSync } from 'child_process';\nimport path from 'path';\n\nexport interface StalenessInfo {\n  isStale: boolean;\n  commitsBehind: number;\n  hint?: string;\n}\n\n/**\n * Check how many commits the index is behind HEAD\n */\nexport function checkStaleness(repoPath: string, lastCommit: string): StalenessInfo {\n  try {\n    // Get count of commits between lastCommit and HEAD\n    const result = execFileSync(\n      'git', ['rev-list', '--count', `${lastCommit}..HEAD`],\n      { cwd: repoPath, encoding: 'utf-8', stdio: ['pipe', 'pipe', 'pipe'] }\n    ).trim();\n    \n    const commitsBehind = parseInt(result, 10) || 0;\n    \n    if (commitsBehind > 0) {\n      return {\n        isStale: true,\n        commitsBehind,\n        hint: `⚠️ Index is ${commitsBehind} commit${commitsBehind > 1 ? 's' : ''} behind HEAD. Run analyze tool to update.`,\n      };\n    }\n    \n    return { isStale: false, commitsBehind: 0 };\n  } catch {\n    // If git command fails, assume not stale (fail open)\n    return { isStale: false, commitsBehind: 0 };\n  }\n}\n"
  },
  {
    "path": "gitnexus/src/mcp/tools.ts",
    "content": "/**\n * MCP Tool Definitions\n * \n * Defines the tools that GitNexus exposes to external AI agents.\n * All tools support an optional `repo` parameter for multi-repo setups.\n */\n\nexport interface ToolDefinition {\n  name: string;\n  description: string;\n  inputSchema: {\n    type: 'object';\n    properties: Record<string, {\n      type: string;\n      description?: string;\n      default?: any;\n      items?: { type: string };\n      enum?: string[];\n    }>;\n    required: string[];\n  };\n}\n\nexport const GITNEXUS_TOOLS: ToolDefinition[] = [\n  {\n    name: 'list_repos',\n    description: `List all indexed repositories available to GitNexus.\n\nReturns each repo's name, path, indexed date, last commit, and stats.\n\nWHEN TO USE: First step when multiple repos are indexed, or to discover available repos.\nAFTER THIS: READ gitnexus://repo/{name}/context for the repo you want to work with.\n\nWhen multiple repos are indexed, you MUST specify the \"repo\" parameter\non other tools (query, context, impact, etc.) to target the correct one.`,\n    inputSchema: {\n      type: 'object',\n      properties: {},\n      required: [],\n    },\n  },\n  {\n    name: 'query',\n    description: `Query the code knowledge graph for execution flows related to a concept.\nReturns processes (call chains) ranked by relevance, each with its symbols and file locations.\n\nWHEN TO USE: Understanding how code works together. Use this when you need execution flows and relationships, not just file matches. 
Complements grep/IDE search.\nAFTER THIS: Use context() on a specific symbol for 360-degree view (callers, callees, categorized refs).\n\nReturns results grouped by process (execution flow):\n- processes: ranked execution flows with relevance priority\n- process_symbols: all symbols in those flows with file locations and module (functional area)\n- definitions: standalone types/interfaces not in any process\n\nHybrid ranking: BM25 keyword + semantic vector search, ranked by Reciprocal Rank Fusion.`,\n    inputSchema: {\n      type: 'object',\n      properties: {\n        query: { type: 'string', description: 'Natural language or keyword search query' },\n        task_context: { type: 'string', description: 'What you are working on (e.g., \"adding OAuth support\"). Helps ranking.' },\n        goal: { type: 'string', description: 'What you want to find (e.g., \"existing auth validation logic\"). Helps ranking.' },\n        limit: { type: 'number', description: 'Max processes to return (default: 5)', default: 5 },\n        max_symbols: { type: 'number', description: 'Max symbols per process (default: 10)', default: 10 },\n        include_content: { type: 'boolean', description: 'Include full symbol source code (default: false)', default: false },\n        repo: { type: 'string', description: 'Repository name or path. Omit if only one repo is indexed.' },\n      },\n      required: ['query'],\n    },\n  },\n  {\n    name: 'cypher',\n    description: `Execute Cypher query against the code knowledge graph.\n\nWHEN TO USE: Complex structural queries that search/explore can't answer. 
READ gitnexus://repo/{name}/schema first for the full schema.\nAFTER THIS: Use context() on result symbols for deeper context.\n\nSCHEMA:\n- Nodes: File, Folder, Function, Class, Interface, Method, CodeElement, Community, Process\n- Multi-language nodes (use backticks): \\`Struct\\`, \\`Enum\\`, \\`Trait\\`, \\`Impl\\`, etc.\n- All edges via single CodeRelation table with 'type' property\n- Edge types: CONTAINS, DEFINES, CALLS, IMPORTS, EXTENDS, IMPLEMENTS, HAS_METHOD, HAS_PROPERTY, ACCESSES, OVERRIDES, MEMBER_OF, STEP_IN_PROCESS\n- Edge properties: type (STRING), confidence (DOUBLE), reason (STRING), step (INT32)\n\nEXAMPLES:\n• Find callers of a function:\n  MATCH (a)-[:CodeRelation {type: 'CALLS'}]->(b:Function {name: \"validateUser\"}) RETURN a.name, a.filePath\n\n• Find community members:\n  MATCH (f)-[:CodeRelation {type: 'MEMBER_OF'}]->(c:Community) WHERE c.heuristicLabel = \"Auth\" RETURN f.name\n\n• Trace a process:\n  MATCH (s)-[r:CodeRelation {type: 'STEP_IN_PROCESS'}]->(p:Process) WHERE p.heuristicLabel = \"UserLogin\" RETURN s.name, r.step ORDER BY r.step\n\n• Find all methods of a class:\n  MATCH (c:Class {name: \"UserService\"})-[r:CodeRelation {type: 'HAS_METHOD'}]->(m:Method) RETURN m.name, m.parameterCount, m.returnType\n\n• Find all properties of a class:\n  MATCH (c:Class {name: \"User\"})-[r:CodeRelation {type: 'HAS_PROPERTY'}]->(p:Property) RETURN p.name, p.declaredType\n\n• Find all writers of a field:\n  MATCH (f:Function)-[r:CodeRelation {type: 'ACCESSES', reason: 'write'}]->(p:Property) WHERE p.name = \"address\" RETURN f.name, f.filePath\n\n• Find method overrides (MRO resolution):\n  MATCH (winner:Method)-[r:CodeRelation {type: 'OVERRIDES'}]->(loser:Method) RETURN winner.name, winner.filePath, loser.filePath, r.reason\n\n• Detect diamond inheritance:\n  MATCH (d:Class)-[:CodeRelation {type: 'EXTENDS'}]->(b1), (d)-[:CodeRelation {type: 'EXTENDS'}]->(b2), (b1)-[:CodeRelation {type: 'EXTENDS'}]->(a), (b2)-[:CodeRelation {type: 
'EXTENDS'}]->(a) WHERE b1 <> b2 RETURN d.name, b1.name, b2.name, a.name\n\nOUTPUT: Returns { markdown, row_count } — results formatted as a Markdown table for easy reading.\n\nTIPS:\n- All relationships use single CodeRelation table — filter with {type: 'CALLS'} etc.\n- Community = auto-detected functional area (Leiden algorithm)\n- Process = execution flow trace from entry point to terminal\n- Use heuristicLabel (not label) for human-readable community/process names`,\n    inputSchema: {\n      type: 'object',\n      properties: {\n        query: { type: 'string', description: 'Cypher query to execute' },\n        repo: { type: 'string', description: 'Repository name or path. Omit if only one repo is indexed.' },\n      },\n      required: ['query'],\n    },\n  },\n  {\n    name: 'context',\n    description: `360-degree view of a single code symbol.\nShows categorized incoming/outgoing references (calls, imports, extends, implements, methods, properties, overrides), process participation, and file location.\n\nWHEN TO USE: After query() to understand a specific symbol in depth. When you need to know all callers, callees, and what execution flows a symbol participates in.\nAFTER THIS: Use impact() if planning changes, or READ gitnexus://repo/{name}/process/{processName} for full execution trace.\n\nHandles disambiguation: if multiple symbols share the same name, returns candidates for you to pick from. Use uid param for zero-ambiguity lookup from prior results.\n\nNOTE: ACCESSES edges (field read/write tracking) are included in context results with reason 'read' or 'write'. 
CALLS edges resolve through field access chains and method-call chains (e.g., user.address.getCity().save() produces CALLS edges at each step).`,\n    inputSchema: {\n      type: 'object',\n      properties: {\n        name: { type: 'string', description: 'Symbol name (e.g., \"validateUser\", \"AuthService\")' },\n        uid: { type: 'string', description: 'Direct symbol UID from prior tool results (zero-ambiguity lookup)' },\n        file_path: { type: 'string', description: 'File path to disambiguate common names' },\n        include_content: { type: 'boolean', description: 'Include full symbol source code (default: false)', default: false },\n        repo: { type: 'string', description: 'Repository name or path. Omit if only one repo is indexed.' },\n      },\n      required: [],\n    },\n  },\n  {\n    name: 'detect_changes',\n    description: `Analyze uncommitted git changes and find affected execution flows.\nMaps git diff hunks to indexed symbols, then traces which processes are impacted.\n\nWHEN TO USE: Before committing — to understand what your changes affect. Pre-commit review, PR preparation.\nAFTER THIS: Review affected processes. Use context() on high-risk symbols. READ gitnexus://repo/{name}/process/{name} for full traces.\n\nReturns: changed symbols, affected processes, and a risk summary.`,\n    inputSchema: {\n      type: 'object',\n      properties: {\n        scope: { type: 'string', description: 'What to analyze: \"unstaged\" (default), \"staged\", \"all\", or \"compare\"', enum: ['unstaged', 'staged', 'all', 'compare'], default: 'unstaged' },\n        base_ref: { type: 'string', description: 'Branch/commit for \"compare\" scope (e.g., \"main\")' },\n        repo: { type: 'string', description: 'Repository name or path. Omit if only one repo is indexed.' 
},\n      },\n      required: [],\n    },\n  },\n  {\n    name: 'rename',\n    description: `Multi-file coordinated rename using the knowledge graph + text search.\nFinds all references via graph (high confidence) and regex text search (lower confidence). Preview by default.\n\nWHEN TO USE: Renaming a function, class, method, or variable across the codebase. Safer than find-and-replace.\nAFTER THIS: Run detect_changes() to verify no unexpected side effects.\n\nEach edit is tagged with confidence:\n- \"graph\": found via knowledge graph relationships (high confidence, safe to accept)\n- \"text_search\": found via regex text search (lower confidence, review carefully)`,\n    inputSchema: {\n      type: 'object',\n      properties: {\n        symbol_name: { type: 'string', description: 'Current symbol name to rename' },\n        symbol_uid: { type: 'string', description: 'Direct symbol UID from prior tool results (zero-ambiguity)' },\n        new_name: { type: 'string', description: 'The new name for the symbol' },\n        file_path: { type: 'string', description: 'File path to disambiguate common names' },\n        dry_run: { type: 'boolean', description: 'Preview edits without modifying files (default: true)', default: true },\n        repo: { type: 'string', description: 'Repository name or path. Omit if only one repo is indexed.' },\n      },\n      required: ['new_name'],\n    },\n  },\n  {\n    name: 'impact',\n    description: `Analyze the blast radius of changing a code symbol.\nReturns affected symbols grouped by depth, plus risk assessment, affected execution flows, and affected modules.\n\nWHEN TO USE: Before making code changes — especially refactoring, renaming, or modifying shared code. Shows what would break.\nAFTER THIS: Review d=1 items (WILL BREAK). 
Use context() on high-risk symbols.\n\nOutput includes:\n- risk: LOW / MEDIUM / HIGH / CRITICAL\n- summary: direct callers, processes affected, modules affected\n- affected_processes: which execution flows break and at which step\n- affected_modules: which functional areas are hit (direct vs indirect)\n- byDepth: all affected symbols grouped by traversal depth\n\nDepth groups:\n- d=1: WILL BREAK (direct callers/importers)\n- d=2: LIKELY AFFECTED (indirect)\n- d=3: MAY NEED TESTING (transitive)\n\nTIP: Default traversal uses CALLS/IMPORTS/EXTENDS/IMPLEMENTS. For class members, include HAS_METHOD and HAS_PROPERTY in relationTypes. For field access analysis, include ACCESSES in relationTypes.\n\nEdgeType: CALLS, IMPORTS, EXTENDS, IMPLEMENTS, HAS_METHOD, HAS_PROPERTY, OVERRIDES, ACCESSES\nConfidence: 1.0 = certain, <0.8 = fuzzy match`,\n    inputSchema: {\n      type: 'object',\n      properties: {\n        target: { type: 'string', description: 'Name of function, class, or file to analyze' },\n        direction: { type: 'string', description: 'upstream (what depends on this) or downstream (what this depends on)' },\n        maxDepth: { type: 'number', description: 'Max relationship depth (default: 3)', default: 3 },\n        relationTypes: { type: 'array', items: { type: 'string' }, description: 'Filter: CALLS, IMPORTS, EXTENDS, IMPLEMENTS, HAS_METHOD, HAS_PROPERTY, OVERRIDES, ACCESSES (default: usage-based, ACCESSES excluded by default)' },\n        includeTests: { type: 'boolean', description: 'Include test files (default: false)' },\n        minConfidence: { type: 'number', description: 'Minimum confidence 0-1 (default: 0.7)' },\n        repo: { type: 'string', description: 'Repository name or path. Omit if only one repo is indexed.' },\n      },\n      required: ['target', 'direction'],\n    },\n  },\n];\n"
  },
  {
    "path": "gitnexus/src/server/api.ts",
    "content": "/**\n * HTTP API Server\n *\n * REST API for browser-based clients to query the local .gitnexus/ index.\n * Also hosts the MCP server over StreamableHTTP for remote AI tool access.\n *\n * Security: binds to 127.0.0.1 by default (use --host to override).\n * CORS is restricted to localhost and the deployed site.\n */\n\nimport express from 'express';\nimport cors from 'cors';\nimport path from 'path';\nimport fs from 'fs/promises';\nimport { loadMeta, listRegisteredRepos } from '../storage/repo-manager.js';\nimport { executeQuery, closeLbug, withLbugDb } from '../core/lbug/lbug-adapter.js';\nimport { NODE_TABLES } from '../core/lbug/schema.js';\nimport { GraphNode, GraphRelationship } from '../core/graph/types.js';\nimport { searchFTSFromLbug } from '../core/search/bm25-index.js';\nimport { hybridSearch } from '../core/search/hybrid-search.js';\n// Embedding imports are lazy (dynamic import) to avoid loading onnxruntime-node\n// at server startup — crashes on unsupported Node ABI versions (#89)\nimport { LocalBackend } from '../mcp/local/local-backend.js';\nimport { mountMCPEndpoints } from './mcp-http.js';\n\nconst buildGraph = async (): Promise<{ nodes: GraphNode[]; relationships: GraphRelationship[] }> => {\n  const nodes: GraphNode[] = [];\n  for (const table of NODE_TABLES) {\n    try {\n      let query = '';\n      if (table === 'File') {\n        query = `MATCH (n:File) RETURN n.id AS id, n.name AS name, n.filePath AS filePath, n.content AS content`;\n      } else if (table === 'Folder') {\n        query = `MATCH (n:Folder) RETURN n.id AS id, n.name AS name, n.filePath AS filePath`;\n      } else if (table === 'Community') {\n        query = `MATCH (n:Community) RETURN n.id AS id, n.label AS label, n.heuristicLabel AS heuristicLabel, n.cohesion AS cohesion, n.symbolCount AS symbolCount`;\n      } else if (table === 'Process') {\n        query = `MATCH (n:Process) RETURN n.id AS id, n.label AS label, n.heuristicLabel AS heuristicLabel, 
n.processType AS processType, n.stepCount AS stepCount, n.communities AS communities, n.entryPointId AS entryPointId, n.terminalId AS terminalId`;\n      } else {\n        query = `MATCH (n:${table}) RETURN n.id AS id, n.name AS name, n.filePath AS filePath, n.startLine AS startLine, n.endLine AS endLine, n.content AS content`;\n      }\n\n      const rows = await executeQuery(query);\n      for (const row of rows) {\n        nodes.push({\n          id: row.id ?? row[0],\n          label: table as GraphNode['label'],\n          properties: {\n            name: row.name ?? row.label ?? row[1],\n            filePath: row.filePath ?? row[2],\n            startLine: row.startLine,\n            endLine: row.endLine,\n            content: row.content,\n            heuristicLabel: row.heuristicLabel,\n            cohesion: row.cohesion,\n            symbolCount: row.symbolCount,\n            processType: row.processType,\n            stepCount: row.stepCount,\n            communities: row.communities,\n            entryPointId: row.entryPointId,\n            terminalId: row.terminalId,\n          } as GraphNode['properties'],\n        });\n      }\n    } catch {\n      // ignore empty tables\n    }\n  }\n\n  const relationships: GraphRelationship[] = [];\n  const relRows = await executeQuery(\n    `MATCH (a)-[r:CodeRelation]->(b) RETURN a.id AS sourceId, b.id AS targetId, r.type AS type, r.confidence AS confidence, r.reason AS reason, r.step AS step`\n  );\n  for (const row of relRows) {\n    relationships.push({\n      id: `${row.sourceId}_${row.type}_${row.targetId}`,\n      type: row.type,\n      sourceId: row.sourceId,\n      targetId: row.targetId,\n      confidence: row.confidence,\n      reason: row.reason,\n      step: row.step,\n    });\n  }\n\n  return { nodes, relationships };\n};\n\nconst statusFromError = (err: any): number => {\n  const msg = String(err?.message ?? 
'');\n  if (msg.includes('No indexed repositories') || msg.includes('not found')) return 404;\n  if (msg.includes('Multiple repositories')) return 400;\n  return 500;\n};\n\nconst requestedRepo = (req: express.Request): string | undefined => {\n  const fromQuery = typeof req.query.repo === 'string' ? req.query.repo : undefined;\n  if (fromQuery) return fromQuery;\n\n  if (req.body && typeof req.body === 'object' && typeof req.body.repo === 'string') {\n    return req.body.repo;\n  }\n\n  return undefined;\n};\n\nexport const createServer = async (port: number, host: string = '127.0.0.1') => {\n  const app = express();\n\n  // CORS: only allow localhost origins and the deployed site.\n  // Non-browser requests (curl, server-to-server) have no origin and are allowed.\n  app.use(cors({\n    origin: (origin, callback) => {\n      if (\n        !origin\n        || origin.startsWith('http://localhost:')\n        || origin.startsWith('http://127.0.0.1:')\n        || origin === 'https://gitnexus.vercel.app'\n      ) {\n        callback(null, true);\n      } else {\n        callback(new Error('Not allowed by CORS'));\n      }\n    }\n  }));\n  app.use(express.json({ limit: '10mb' }));\n\n  // Initialize MCP backend (multi-repo, shared across all MCP sessions)\n  const backend = new LocalBackend();\n  await backend.init();\n  const cleanupMcp = mountMCPEndpoints(app, backend);\n\n  // Helper: resolve a repo by name from the global registry, or default to first\n  const resolveRepo = async (repoName?: string) => {\n    const repos = await listRegisteredRepos();\n    if (repos.length === 0) return null;\n    if (repoName) return repos.find(r => r.name === repoName) || null;\n    return repos[0]; // default to first\n  };\n\n  // List all registered repos\n  app.get('/api/repos', async (_req, res) => {\n    try {\n      const repos = await listRegisteredRepos();\n      res.json(repos.map(r => ({\n        name: r.name, path: r.path, indexedAt: r.indexedAt,\n        lastCommit: 
r.lastCommit, stats: r.stats,\n      })));\n    } catch (err: any) {\n      res.status(500).json({ error: err.message || 'Failed to list repos' });\n    }\n  });\n\n  // Get repo info\n  app.get('/api/repo', async (req, res) => {\n    try {\n      const entry = await resolveRepo(requestedRepo(req));\n      if (!entry) {\n        res.status(404).json({ error: 'Repository not found. Run: gitnexus analyze' });\n        return;\n      }\n      const meta = await loadMeta(entry.storagePath);\n      res.json({\n        name: entry.name,\n        repoPath: entry.path,\n        indexedAt: meta?.indexedAt ?? entry.indexedAt,\n        stats: meta?.stats ?? entry.stats ?? {},\n      });\n    } catch (err: any) {\n      res.status(500).json({ error: err.message || 'Failed to get repo info' });\n    }\n  });\n\n  // Get full graph\n  app.get('/api/graph', async (req, res) => {\n    try {\n      const entry = await resolveRepo(requestedRepo(req));\n      if (!entry) {\n        res.status(404).json({ error: 'Repository not found' });\n        return;\n      }\n      const lbugPath = path.join(entry.storagePath, 'lbug');\n      const graph = await withLbugDb(lbugPath, async () => buildGraph());\n      res.json(graph);\n    } catch (err: any) {\n      res.status(500).json({ error: err.message || 'Failed to build graph' });\n    }\n  });\n\n  // Execute Cypher query\n  app.post('/api/query', async (req, res) => {\n    try {\n      const cypher = req.body.cypher as string;\n      if (!cypher) {\n        res.status(400).json({ error: 'Missing \"cypher\" in request body' });\n        return;\n      }\n\n      const entry = await resolveRepo(requestedRepo(req));\n      if (!entry) {\n        res.status(404).json({ error: 'Repository not found' });\n        return;\n      }\n      const lbugPath = path.join(entry.storagePath, 'lbug');\n      const result = await withLbugDb(lbugPath, () => executeQuery(cypher));\n      res.json({ result });\n    } catch (err: any) {\n      
res.status(500).json({ error: err.message || 'Query failed' });\n    }\n  });\n\n  // Search\n  app.post('/api/search', async (req, res) => {\n    try {\n      const query = (req.body.query ?? '').trim();\n      if (!query) {\n        res.status(400).json({ error: 'Missing \"query\" in request body' });\n        return;\n      }\n\n      const entry = await resolveRepo(requestedRepo(req));\n      if (!entry) {\n        res.status(404).json({ error: 'Repository not found' });\n        return;\n      }\n      const lbugPath = path.join(entry.storagePath, 'lbug');\n      const parsedLimit = Number(req.body.limit ?? 10);\n      const limit = Number.isFinite(parsedLimit)\n        ? Math.max(1, Math.min(100, Math.trunc(parsedLimit)))\n        : 10;\n\n      const results = await withLbugDb(lbugPath, async () => {\n        const { isEmbedderReady } = await import('../core/embeddings/embedder.js');\n        if (isEmbedderReady()) {\n          const { semanticSearch } = await import('../core/embeddings/embedding-pipeline.js');\n          return hybridSearch(query, limit, executeQuery, semanticSearch);\n        }\n        // FTS-only fallback when embeddings aren't loaded\n        return searchFTSFromLbug(query, limit);\n      });\n      res.json({ results });\n    } catch (err: any) {\n      res.status(500).json({ error: err.message || 'Search failed' });\n    }\n  });\n\n  // Read file — with path traversal guard\n  app.get('/api/file', async (req, res) => {\n    try {\n      const entry = await resolveRepo(requestedRepo(req));\n      if (!entry) {\n        res.status(404).json({ error: 'Repository not found' });\n        return;\n      }\n      const filePath = req.query.path as string;\n      if (!filePath) {\n        res.status(400).json({ error: 'Missing path' });\n        return;\n      }\n\n      // Prevent path traversal — resolve and verify the path stays within the repo root\n      const repoRoot = path.resolve(entry.path);\n      const fullPath = 
path.resolve(repoRoot, filePath);\n      if (!fullPath.startsWith(repoRoot + path.sep) && fullPath !== repoRoot) {\n        res.status(403).json({ error: 'Path traversal denied' });\n        return;\n      }\n\n      const content = await fs.readFile(fullPath, 'utf-8');\n      res.json({ content });\n    } catch (err: any) {\n      if (err.code === 'ENOENT') {\n        res.status(404).json({ error: 'File not found' });\n      } else {\n        res.status(500).json({ error: err.message || 'Failed to read file' });\n      }\n    }\n  });\n\n  // List all processes\n  app.get('/api/processes', async (req, res) => {\n    try {\n      const result = await backend.queryProcesses(requestedRepo(req));\n      res.json(result);\n    } catch (err: any) {\n      res.status(statusFromError(err)).json({ error: err.message || 'Failed to query processes' });\n    }\n  });\n\n  // Process detail\n  app.get('/api/process', async (req, res) => {\n    try {\n      const name = String(req.query.name ?? '').trim();\n      if (!name) {\n        res.status(400).json({ error: 'Missing \"name\" query parameter' });\n        return;\n      }\n\n      const result = await backend.queryProcessDetail(name, requestedRepo(req));\n      if (result?.error) {\n        res.status(404).json({ error: result.error });\n        return;\n      }\n      res.json(result);\n    } catch (err: any) {\n      res.status(statusFromError(err)).json({ error: err.message || 'Failed to query process detail' });\n    }\n  });\n\n  // List all clusters\n  app.get('/api/clusters', async (req, res) => {\n    try {\n      const result = await backend.queryClusters(requestedRepo(req));\n      res.json(result);\n    } catch (err: any) {\n      res.status(statusFromError(err)).json({ error: err.message || 'Failed to query clusters' });\n    }\n  });\n\n  // Cluster detail\n  app.get('/api/cluster', async (req, res) => {\n    try {\n      const name = String(req.query.name ?? 
'').trim();\n      if (!name) {\n        res.status(400).json({ error: 'Missing \"name\" query parameter' });\n        return;\n      }\n\n      const result = await backend.queryClusterDetail(name, requestedRepo(req));\n      if (result?.error) {\n        res.status(404).json({ error: result.error });\n        return;\n      }\n      res.json(result);\n    } catch (err: any) {\n      res.status(statusFromError(err)).json({ error: err.message || 'Failed to query cluster detail' });\n    }\n  });\n\n  // Global error handler — catch anything the route handlers miss\n  app.use((err: any, _req: express.Request, res: express.Response, _next: express.NextFunction) => {\n    console.error('Unhandled error:', err);\n    res.status(500).json({ error: 'Internal server error' });\n  });\n\n  const server = app.listen(port, host, () => {\n    console.log(`GitNexus server running on http://${host}:${port}`);\n  });\n\n  // Graceful shutdown — close Express + LadybugDB cleanly\n  const shutdown = async () => {\n    // Stop accepting new connections and wait for in-flight requests before exiting\n    await new Promise<void>((resolve) => server.close(() => resolve()));\n    await cleanupMcp();\n    await closeLbug();\n    await backend.disconnect();\n    process.exit(0);\n  };\n  process.once('SIGINT', shutdown);\n  process.once('SIGTERM', shutdown);\n};\n"
  },
  {
    "path": "gitnexus/src/server/mcp-http.ts",
    "content": "/**\n * MCP over HTTP\n *\n * Mounts the GitNexus MCP server on Express using StreamableHTTP transport.\n * Each connecting client gets its own stateful session; the LocalBackend\n * is shared across all sessions (thread-safe — lazy LadybugDB per repo).\n *\n * Sessions are cleaned up on explicit close or after SESSION_TTL_MS of inactivity\n * (guards against network drops that never trigger onclose).\n */\n\nimport type { Express, Request, Response } from 'express';\nimport { StreamableHTTPServerTransport } from '@modelcontextprotocol/sdk/server/streamableHttp.js';\nimport { Server } from '@modelcontextprotocol/sdk/server/index.js';\nimport { createMCPServer } from '../mcp/server.js';\nimport type { LocalBackend } from '../mcp/local/local-backend.js';\nimport { randomUUID } from 'crypto';\n\ninterface MCPSession {\n  server: Server;\n  transport: StreamableHTTPServerTransport;\n  lastActivity: number;\n}\n\n/** Idle sessions are evicted after 30 minutes */\nconst SESSION_TTL_MS = 30 * 60 * 1000;\n/** Cleanup sweep runs every 5 minutes */\nconst CLEANUP_INTERVAL_MS = 5 * 60 * 1000;\n\nexport function mountMCPEndpoints(app: Express, backend: LocalBackend): () => Promise<void> {\n  const sessions = new Map<string, MCPSession>();\n\n  // Periodic cleanup of idle sessions (guards against network drops)\n  const cleanupTimer = setInterval(() => {\n    const now = Date.now();\n    for (const [id, session] of sessions) {\n      if (now - session.lastActivity > SESSION_TTL_MS) {\n        try { session.server.close(); } catch {}\n        sessions.delete(id);\n      }\n    }\n  }, CLEANUP_INTERVAL_MS);\n  if (cleanupTimer && typeof cleanupTimer === 'object' && 'unref' in cleanupTimer) {\n    (cleanupTimer as NodeJS.Timeout).unref();\n  }\n\n  const handleMcpRequest = async (req: Request, res: Response) => {\n    const sessionId = req.headers['mcp-session-id'] as string | undefined;\n\n    if (sessionId && sessions.has(sessionId)) {\n      // Existing session 
— delegate to its transport\n      const session = sessions.get(sessionId)!;\n      session.lastActivity = Date.now();\n      await session.transport.handleRequest(req, res, req.body);\n    } else if (sessionId) {\n      // Unknown/expired session ID — tell client to re-initialize (per MCP spec)\n      res.status(404).json({\n        jsonrpc: '2.0',\n        error: { code: -32001, message: 'Session not found. Re-initialize.' },\n        id: null,\n      });\n    } else if (req.method === 'POST') {\n      // No session ID — new client initializing\n      const transport = new StreamableHTTPServerTransport({\n        sessionIdGenerator: () => randomUUID(),\n      });\n      const server = createMCPServer(backend);\n      await server.connect(transport);\n      await transport.handleRequest(req, res, req.body);\n\n      if (transport.sessionId) {\n        sessions.set(transport.sessionId, { server, transport, lastActivity: Date.now() });\n        transport.onclose = () => {\n          sessions.delete(transport.sessionId!);\n        };\n      }\n    } else {\n      res.status(400).json({\n        jsonrpc: '2.0',\n        error: { code: -32000, message: 'No valid session. Send a POST to initialize.' 
},\n        id: null,\n      });\n    }\n  };\n\n  app.all('/api/mcp', (req: Request, res: Response) => {\n    void handleMcpRequest(req, res).catch((err: any) => {\n      console.error('MCP HTTP request failed:', err);\n      if (res.headersSent) return;\n      res.status(500).json({\n        jsonrpc: '2.0',\n        error: { code: -32000, message: 'Internal MCP server error' },\n        id: null,\n      });\n    });\n  });\n\n  const cleanup = async () => {\n    clearInterval(cleanupTimer);\n    const closers = [...sessions.values()].map(async session => {\n      try {\n        await Promise.resolve(session.server.close());\n      } catch {}\n    });\n    sessions.clear();\n    await Promise.allSettled(closers);\n  };\n\n  console.log('MCP HTTP endpoints mounted at /api/mcp');\n  return cleanup;\n}\n"
  },
  {
    "path": "gitnexus/src/storage/git.ts",
    "content": "import { execSync } from 'child_process';\nimport path from 'path';\n\n// Git utilities for repository detection, commit tracking, and diff analysis\n\nexport const isGitRepo = (repoPath: string): boolean => {\n  try {\n    execSync('git rev-parse --is-inside-work-tree', { cwd: repoPath, stdio: 'ignore' });\n    return true;\n  } catch {\n    return false;\n  }\n};\n\nexport const getCurrentCommit = (repoPath: string): string => {\n  try {\n    return execSync('git rev-parse HEAD', { cwd: repoPath }).toString().trim();\n  } catch {\n    return '';\n  }\n};\n\n/**\n * Find the git repository root from any path inside the repo\n */\nexport const getGitRoot = (fromPath: string): string | null => {\n  try {\n    const raw = execSync('git rev-parse --show-toplevel', { cwd: fromPath })\n      .toString()\n      .trim();\n    // On Windows, git returns /d/Projects/Foo — path.resolve normalizes to D:\\Projects\\Foo\n    return path.resolve(raw);\n  } catch {\n    return null;\n  }\n};\n"
  },
  {
    "path": "gitnexus/src/storage/repo-manager.ts",
    "content": "/**\n * Repository Manager\n * \n * Manages GitNexus index storage in .gitnexus/ at repo root.\n * Also maintains a global registry at ~/.gitnexus/registry.json\n * so the MCP server can discover indexed repos from any cwd.\n */\n\nimport fs from 'fs/promises';\nimport path from 'path';\nimport os from 'os';\n\nexport interface RepoMeta {\n  repoPath: string;\n  lastCommit: string;\n  indexedAt: string;\n  stats?: {\n    files?: number;\n    nodes?: number;\n    edges?: number;\n    communities?: number;\n    processes?: number;\n    embeddings?: number;\n  };\n}\n\nexport interface IndexedRepo {\n  repoPath: string;\n  storagePath: string;\n  lbugPath: string;\n  metaPath: string;\n  meta: RepoMeta;\n}\n\n/**\n * Shape of an entry in the global registry (~/.gitnexus/registry.json)\n */\nexport interface RegistryEntry {\n  name: string;\n  path: string;\n  storagePath: string;\n  indexedAt: string;\n  lastCommit: string;\n  stats?: RepoMeta['stats'];\n}\n\nconst GITNEXUS_DIR = '.gitnexus';\n\n// ─── Local Storage Helpers ─────────────────────────────────────────────\n\n/**\n * Get the .gitnexus storage path for a repository\n */\nexport const getStoragePath = (repoPath: string): string => {\n  return path.join(path.resolve(repoPath), GITNEXUS_DIR);\n};\n\n/**\n * Get paths to key storage files\n */\nexport const getStoragePaths = (repoPath: string) => {\n  const storagePath = getStoragePath(repoPath);\n  return {\n    storagePath,\n    lbugPath: path.join(storagePath, 'lbug'),\n    metaPath: path.join(storagePath, 'meta.json'),\n  };\n};\n\n/**\n * Check whether a KuzuDB index exists in the given storage path.\n * Non-destructive — safe to call from status commands.\n */\nexport const hasKuzuIndex = async (storagePath: string): Promise<boolean> => {\n  try {\n    await fs.stat(path.join(storagePath, 'kuzu'));\n    return true;\n  } catch {\n    return false;\n  }\n};\n\n/**\n * Clean up stale KuzuDB files after migration to LadybugDB.\n *\n * 
Returns:\n *   found        — true if .gitnexus/kuzu existed and was deleted\n *   needsReindex — true if kuzu existed but lbug does not (re-analyze required)\n *\n * Callers own the user-facing messaging; this function only deletes files.\n */\nexport const cleanupOldKuzuFiles = async (\n  storagePath: string,\n): Promise<{ found: boolean; needsReindex: boolean }> => {\n  const oldPath = path.join(storagePath, 'kuzu');\n  const newPath = path.join(storagePath, 'lbug');\n  try {\n    await fs.stat(oldPath);\n    // Old kuzu file/dir exists — determine if lbug is already present\n    let needsReindex = false;\n    try {\n      await fs.stat(newPath);\n    } catch {\n      needsReindex = true;\n    }\n    // Delete kuzu database file and its sidecars (.wal, .lock)\n    for (const suffix of ['', '.wal', '.lock']) {\n      try { await fs.unlink(oldPath + suffix); } catch {}\n    }\n    // Also handle the case where kuzu was stored as a directory\n    try { await fs.rm(oldPath, { recursive: true, force: true }); } catch {}\n    return { found: true, needsReindex };\n  } catch {\n    // Old path doesn't exist — nothing to do\n    return { found: false, needsReindex: false };\n  }\n};\n\n/**\n * Load metadata from an indexed repo\n */\nexport const loadMeta = async (storagePath: string): Promise<RepoMeta | null> => {\n  try {\n    const metaPath = path.join(storagePath, 'meta.json');\n    const raw = await fs.readFile(metaPath, 'utf-8');\n    return JSON.parse(raw) as RepoMeta;\n  } catch {\n    return null;\n  }\n};\n\n/**\n * Save metadata to storage\n */\nexport const saveMeta = async (storagePath: string, meta: RepoMeta): Promise<void> => {\n  await fs.mkdir(storagePath, { recursive: true });\n  const metaPath = path.join(storagePath, 'meta.json');\n  await fs.writeFile(metaPath, JSON.stringify(meta, null, 2), 'utf-8');\n};\n\n/**\n * Check if a path has a GitNexus index\n */\nexport const hasIndex = async (repoPath: string): Promise<boolean> => {\n  const { metaPath } = getStoragePaths(repoPath);\n  try {\n    await fs.access(metaPath);\n    return true;\n  } catch {\n    return false;\n  }\n};\n\n/**\n * Load an indexed repo from a path\n */\nexport const loadRepo = async (repoPath: string): Promise<IndexedRepo | null> => {\n  const paths = getStoragePaths(repoPath);\n  const meta = await loadMeta(paths.storagePath);\n  if (!meta) return null;\n  \n  return {\n    repoPath: path.resolve(repoPath),\n    ...paths,\n    meta,\n  };\n};\n\n/**\n * Find .gitnexus by walking up from a starting path\n */\nexport const findRepo = async (startPath: string): Promise<IndexedRepo | null> => {\n  let current = path.resolve(startPath);\n  const root = path.parse(current).root;\n  \n  while (current !== root) {\n    const repo = await loadRepo(current);\n    if (repo) return repo;\n    current = path.dirname(current);\n  }\n  \n  return null;\n};\n\n/**\n * Add .gitnexus to .gitignore if not already present\n */\nexport const addToGitignore = async (repoPath: string): Promise<void> => {\n  const gitignorePath = path.join(repoPath, '.gitignore');\n  \n  try {\n    const content = await fs.readFile(gitignorePath, 'utf-8');\n    if (content.includes(GITNEXUS_DIR)) return;\n    \n    const newContent = content.endsWith('\\n') \n      ? `${content}${GITNEXUS_DIR}\\n`\n      : `${content}\\n${GITNEXUS_DIR}\\n`;\n    await fs.writeFile(gitignorePath, newContent, 'utf-8');\n  } catch {\n    // .gitignore doesn't exist, create it\n    await fs.writeFile(gitignorePath, `${GITNEXUS_DIR}\\n`, 'utf-8');\n  }\n};\n\n// ─── Global Registry (~/.gitnexus/registry.json) ───────────────────────\n\n/**\n * Get the path to the global GitNexus directory\n */\nexport const getGlobalDir = (): string => {\n  return path.join(os.homedir(), '.gitnexus');\n};\n\n/**\n * Get the path to the global registry file\n */\nexport const getGlobalRegistryPath = (): string => {\n  return path.join(getGlobalDir(), 'registry.json');\n};\n\n/**\n * Read the global registry. Returns empty array if not found.\n */\nexport const readRegistry = async (): Promise<RegistryEntry[]> => {\n  try {\n    const raw = await fs.readFile(getGlobalRegistryPath(), 'utf-8');\n    const data = JSON.parse(raw);\n    return Array.isArray(data) ? data : [];\n  } catch {\n    return [];\n  }\n};\n\n/**\n * Write the global registry to disk\n */\nconst writeRegistry = async (entries: RegistryEntry[]): Promise<void> => {\n  const dir = getGlobalDir();\n  await fs.mkdir(dir, { recursive: true });\n  await fs.writeFile(getGlobalRegistryPath(), JSON.stringify(entries, null, 2), 'utf-8');\n};\n\n/**\n * Register (add or update) a repo in the global registry.\n * Called after `gitnexus analyze` completes.\n */\nexport const registerRepo = async (repoPath: string, meta: RepoMeta): Promise<void> => {\n  const resolved = path.resolve(repoPath);\n  const name = path.basename(resolved);\n  const { storagePath } = getStoragePaths(resolved);\n\n  const entries = await readRegistry();\n  const existing = entries.findIndex((e) => {\n    const a = path.resolve(e.path);\n    const b = resolved;\n    return process.platform === 'win32'\n      ? a.toLowerCase() === b.toLowerCase()\n      : a === b;\n  });\n\n  const entry: RegistryEntry = {\n    name,\n    path: resolved,\n    storagePath,\n    indexedAt: meta.indexedAt,\n    lastCommit: meta.lastCommit,\n    stats: meta.stats,\n  };\n\n  if (existing >= 0) {\n    entries[existing] = entry;\n  } else {\n    entries.push(entry);\n  }\n\n  await writeRegistry(entries);\n};\n\n/**\n * Remove a repo from the global registry.\n * Called after `gitnexus clean`.\n */\nexport const unregisterRepo = async (repoPath: string): Promise<void> => {\n  const resolved = path.resolve(repoPath);\n  const entries = await readRegistry();\n  const filtered = entries.filter(\n    (e) => path.resolve(e.path) !== resolved\n  );\n  await writeRegistry(filtered);\n};\n\n/**\n * List all registered repos from the global registry.\n * Optionally validates that each entry's .gitnexus/ still exists.\n */\nexport const listRegisteredRepos = async (opts?: { validate?: boolean }): Promise<RegistryEntry[]> => {\n  const entries = await readRegistry();\n  if (!opts?.validate) return entries;\n\n  // Validate each entry still has a .gitnexus/ directory\n  const valid: RegistryEntry[] = [];\n  for (const entry of entries) {\n    try {\n      await fs.access(path.join(entry.storagePath, 'meta.json'));\n      valid.push(entry);\n    } catch {\n      // Index no longer exists — skip\n    }\n  }\n\n  // If we pruned any entries, save the cleaned registry\n  if (valid.length !== entries.length) {\n    await writeRegistry(valid);\n  }\n\n  return valid;\n};\n\n// ─── Global CLI Config (~/.gitnexus/config.json) ─────────────────────────\n\nexport interface CLIConfig {\n  apiKey?: string;\n  model?: string;\n  baseUrl?: string;\n}\n\n/**\n * Get the path to the global CLI config file\n */\nexport const getGlobalConfigPath = (): string => {\n  return path.join(getGlobalDir(), 'config.json');\n};\n\n/**\n * Load CLI config from ~/.gitnexus/config.json\n */\nexport const loadCLIConfig = async (): Promise<CLIConfig> => {\n  try {\n    const raw = await fs.readFile(getGlobalConfigPath(), 'utf-8');\n    return JSON.parse(raw) as CLIConfig;\n  } catch {\n    return {};\n  }\n};\n\n/**\n * Save CLI config to ~/.gitnexus/config.json\n */\nexport const saveCLIConfig = async (config: CLIConfig): Promise<void> => {\n  const dir = getGlobalDir();\n  await fs.mkdir(dir, { recursive: true });\n  const configPath = getGlobalConfigPath();\n  await fs.writeFile(configPath, JSON.stringify(config, null, 2), 'utf-8');\n  // Restrict file permissions on Unix (config may contain API keys)\n  if (process.platform !== 'win32') {\n    try { await fs.chmod(configPath, 0o600); } catch { /* best-effort */ }\n  }\n};\n"
  },
  {
    "path": "gitnexus/src/types/pipeline.ts",
    "content": "import { GraphNode, GraphRelationship, KnowledgeGraph } from '../core/graph/types.js';\nimport { CommunityDetectionResult } from '../core/ingestion/community-processor.js';\nimport { ProcessDetectionResult } from '../core/ingestion/process-processor.js';\n\nexport type PipelinePhase = 'idle' | 'extracting' | 'structure' | 'parsing' | 'imports' | 'calls' | 'heritage' | 'communities' | 'processes' | 'enriching' | 'complete' | 'error';\n\nexport interface PipelineProgress {\n  phase: PipelinePhase;\n  percent: number;\n  message: string;\n  detail?: string;\n  stats?: {\n    filesProcessed: number;\n    totalFiles: number;\n    nodesCreated: number;\n  };\n}\n\n// Original result type (used internally in pipeline)\nexport interface PipelineResult {\n  graph: KnowledgeGraph;\n  /** Absolute path to the repo root — used for lazy file reads during LadybugDB loading */\n  repoPath: string;\n  /** Total files scanned (for stats) */\n  totalFileCount: number;\n  communityResult?: CommunityDetectionResult;\n  processResult?: ProcessDetectionResult;\n}\n\n// Serializable version for Web Worker communication\n// Maps and functions cannot be transferred via postMessage\nexport interface SerializablePipelineResult {\n  nodes: GraphNode[];\n  relationships: GraphRelationship[];\n  repoPath: string;\n  totalFileCount: number;\n}\n\n// Helper to convert PipelineResult to serializable format\nexport const serializePipelineResult = (result: PipelineResult): SerializablePipelineResult => ({\n  nodes: [...result.graph.iterNodes()],\n  relationships: [...result.graph.iterRelationships()],\n  repoPath: result.repoPath,\n  totalFileCount: result.totalFileCount,\n});\n\n// Helper to reconstruct from serializable format (used in main thread)\nexport const deserializePipelineResult = (\n  serialized: SerializablePipelineResult,\n  createGraph: () => KnowledgeGraph\n): PipelineResult => {\n  const graph = createGraph();\n  serialized.nodes.forEach(node => 
graph.addNode(node));\n  serialized.relationships.forEach(rel => graph.addRelationship(rel));\n\n  return {\n    graph,\n    repoPath: serialized.repoPath,\n    totalFileCount: serialized.totalFileCount,\n  };\n};\n\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-ambiguous/handler_a.h",
    "content": "#pragma once\n\nclass Handler {\npublic:\n    void handle();\n};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-ambiguous/handler_b.h",
    "content": "#pragma once\n\nclass Handler {\npublic:\n    void process();\n};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-ambiguous/processor.h",
    "content": "#pragma once\n\n#include \"handler_a.h\"\n\nclass Processor : public Handler {\npublic:\n    void run();\n};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-assignment-chain/models/Repo.h",
    "content": "#pragma once\n#include <string>\n\nclass Repo {\npublic:\n    Repo(const std::string& name) : name_(name) {}\n    bool save() { return false; }\nprivate:\n    std::string name_;\n};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-assignment-chain/models/User.h",
    "content": "#pragma once\n#include <string>\n\nclass User {\npublic:\n    User(const std::string& name) : name_(name) {}\n    bool save() { return true; }\nprivate:\n    std::string name_;\n};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-assignment-chain/services/App.cpp",
    "content": "#include \"models/User.h\"\n#include \"models/Repo.h\"\n\n// Tests C++ auto alias = u assignment chain propagation.\nvoid processEntities() {\n    User u(\"alice\");\n    auto alias = u;\n    alias.save();\n\n    Repo r(\"maindb\");\n    auto rAlias = r;\n    rAlias.save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-brace-init-inference/models/Repo.h",
    "content": "class Repo {\npublic:\n  void save() {}\n};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-brace-init-inference/models/User.h",
    "content": "class User {\npublic:\n  void save() {}\n};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-brace-init-inference/services/App.cpp",
    "content": "#include \"../models/User.h\"\n#include \"../models/Repo.h\"\n\nvoid process() {\n  auto user = User{};\n  user.save();\n\n  auto repo = Repo{};\n  repo.save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-call-result-binding/app.cpp",
    "content": "#include \"user.h\"\n\nvoid processUser() {\n    auto user = getUser(\"alice\");\n    user.save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-call-result-binding/user.h",
    "content": "#pragma once\n#include <string>\n\nclass User {\npublic:\n    User(const std::string& n) : name_(n) {}\n    bool save() { return true; }\nprivate:\n    std::string name_;\n};\n\nUser getUser(const std::string& name) {\n    return User(name);\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-calls/main.cpp",
    "content": "#include \"one.h\"\n#include \"zero.h\"\n\nvoid run() {\n    write_audit(\"hello\");\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-calls/one.h",
    "content": "inline const char* write_audit(const char* message) {\n    return message;\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-calls/zero.h",
    "content": "inline const char* write_audit() {\n    return \"zero\";\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-chain-call/app.cpp",
    "content": "#include \"service.h\"\n#include \"user.h\"\n#include \"repo.h\"\n\nvoid processUser() {\n    UserService svc;\n    svc.getUser().save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-chain-call/repo.h",
    "content": "#pragma once\n\nclass Repo {\npublic:\n    bool save() { return true; }\n};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-chain-call/service.h",
    "content": "#pragma once\n#include \"user.h\"\n\nclass UserService {\npublic:\n    User getUser() { return User(); }\n};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-chain-call/user.h",
    "content": "#pragma once\n\nclass User {\npublic:\n    bool save() { return true; }\n};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-constructor-calls/app.cpp",
    "content": "#include \"user.h\"\n\nvoid processUser(const std::string& name) {\n    auto user = new User(name);\n    user->save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-constructor-calls/user.h",
    "content": "#pragma once\n#include <string>\n\nclass User {\npublic:\n    User(const std::string& name) : name_(name) {}\n    bool save() { return true; }\nprivate:\n    std::string name_;\n};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-constructor-type-inference/models/Repo.h",
    "content": "#pragma once\n#include <string>\n\nclass Repo {\npublic:\n    Repo(const std::string& dbName) : dbName_(dbName) {}\n    bool save() { return false; }\nprivate:\n    std::string dbName_;\n};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-constructor-type-inference/models/User.h",
    "content": "#pragma once\n#include <string>\n\nclass User {\npublic:\n    User(const std::string& name) : name_(name) {}\n    bool save() { return true; }\nprivate:\n    std::string name_;\n};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-constructor-type-inference/services/App.cpp",
    "content": "#include \"models/User.h\"\n#include \"models/Repo.h\"\n\nvoid processEntities() {\n    auto user = User(\"alice\");\n    auto repo = Repo(\"maindb\");\n    user.save();\n    repo.save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-deep-field-chain/models.h",
    "content": "#pragma once\n\nclass City {\npublic:\n    std::string zipCode;\n\n    std::string getName() {\n        return \"city\";\n    }\n};\n\nclass Address {\npublic:\n    City city;\n    std::string street;\n\n    void save() {\n        // persist address\n    }\n};\n\nclass User {\npublic:\n    std::string name;\n    Address address;\n\n    std::string greet() {\n        return name;\n    }\n};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-deep-field-chain/service.cpp",
    "content": "#include \"models.h\"\n\nvoid processUser(User user) {\n    // 2-level chain: user.address → Address, then .save() → Address#save\n    user.address.save();\n\n    // 3-level chain: user.address → Address, .city → City, .getName() → City#getName\n    user.address.city.getName();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-default-params/src/app.cpp",
    "content": "#include <string>\n\nstd::string greet(std::string name, std::string greeting = \"Hello\") {\n    return greeting + \", \" + name;\n}\n\nvoid process() {\n    greet(\"Alice\");\n    greet(\"Bob\", \"Hi\");\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-deref-range-for/App.cpp",
    "content": "#include \"User.h\"\n#include \"Repo.h\"\n#include <vector>\n\nvoid processUsers(std::vector<User>* usersPtr) {\n    for (auto& user : *usersPtr) {\n        user.save();\n    }\n}\n\nvoid processRepos(std::vector<Repo>* reposPtr) {\n    for (const auto& repo : *reposPtr) {\n        repo.save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-deref-range-for/Repo.h",
    "content": "#pragma once\n#include <string>\n\nclass Repo {\npublic:\n    Repo(const std::string& name) : name_(name) {}\n    void save() {}\nprivate:\n    std::string name_;\n};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-deref-range-for/User.h",
    "content": "#pragma once\n#include <string>\n\nclass User {\npublic:\n    User(const std::string& name) : name_(name) {}\n    void save() {}\nprivate:\n    std::string name_;\n};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-diamond/animal.h",
    "content": "#pragma once\n\nclass Animal {\npublic:\n    virtual void speak();\n    virtual void move();\n};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-diamond/duck.cpp",
    "content": "#include \"duck.h\"\n\nvoid Duck::speak() {\n    // quack\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-diamond/duck.h",
    "content": "#pragma once\n#include \"flyer.h\"\n#include \"swimmer.h\"\n\nclass Duck : public Flyer, public Swimmer {\npublic:\n    void speak() override;\n};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-diamond/flyer.h",
    "content": "#pragma once\n#include \"animal.h\"\n\nclass Flyer : public Animal {\npublic:\n    void move() override;\n    void fly();\n};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-diamond/swimmer.h",
    "content": "#pragma once\n#include \"animal.h\"\n\nclass Swimmer : public Animal {\npublic:\n    void move() override;\n    void swim();\n};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-field-types/models.h",
    "content": "#pragma once\n\nclass Address {\npublic:\n    std::string city;\n\n    void save() {\n        // persist address\n    }\n};\n\nclass User {\npublic:\n    std::string name;\n    Address address;\n\n    std::string greet() {\n        return name;\n    }\n};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-field-types/service.cpp",
    "content": "#include \"models.h\"\n\nvoid processUser(User user) {\n    // Field-access chain: user.address → Address, then .save() → Address#save\n    user.address.save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-grandparent-resolution/src/A.h",
    "content": "#pragma once\n#include \"Greeting.h\"\n\nclass A {\npublic:\n    Greeting greet() { return Greeting(); }\n};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-grandparent-resolution/src/B.h",
    "content": "#pragma once\n#include \"A.h\"\n\nclass B : public A {};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-grandparent-resolution/src/C.h",
    "content": "#pragma once\n#include \"B.h\"\n\nclass C : public B {};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-grandparent-resolution/src/Greeting.h",
    "content": "#pragma once\n\nclass Greeting {\npublic:\n    void save() {}\n};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-grandparent-resolution/src/app.cpp",
    "content": "#include \"C.h\"\n\nvoid process() {\n    C c;\n    c.greet().save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-local-shadow/CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 3.10)\nproject(cpp-local-shadow)\nadd_executable(main src/main.cpp src/utils.cpp)\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-local-shadow/src/main.cpp",
    "content": "#include \"utils.h\"\n\n// Local function shadows included save\nvoid save(const char* data) {\n    printf(\"local save: %s\\n\", data);\n}\n\nvoid run() {\n    save(\"test\");\n}\n\nint main() {\n    run();\n    return 0;\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-local-shadow/src/utils.cpp",
    "content": "#include \"utils.h\"\n#include <cstdio>\n\nvoid save(const char* data) {\n    printf(\"utils save: %s\\n\", data);\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-local-shadow/src/utils.h",
    "content": "#pragma once\n\nvoid save(const char* data);\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-member-calls/app.cpp",
    "content": "#include \"user.h\"\n\nvoid processUser() {\n    User user;\n    user.save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-member-calls/user.h",
    "content": "#pragma once\n\nclass User {\npublic:\n    bool save() { return true; }\n};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-method-chain-binding/app.cpp",
    "content": "#include \"models.h\"\n\nvoid processChain() {\n    auto user = getUser();\n    auto addr = user.address;\n    auto city = addr.getCity();\n    city.save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-method-chain-binding/models.h",
    "content": "#pragma once\n#include <string>\n\nclass City {\npublic:\n    std::string name;\n    City(const std::string& n) : name(n) {}\n    bool save() { return true; }\n};\n\nclass Address {\npublic:\n    City city;\n    Address(const City& c) : city(c) {}\n    City getCity() { return city; }\n};\n\nclass User {\npublic:\n    Address address;\n    User(const Address& a) : address(a) {}\n};\n\nUser getUser() {\n    return User(Address(City(\"NYC\")));\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-nullable-receiver/models/Repo.h",
    "content": "#pragma once\n#include <string>\n\nclass Repo {\npublic:\n    Repo(const std::string& dbName) : dbName_(dbName) {}\n    bool save() { return false; }\nprivate:\n    std::string dbName_;\n};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-nullable-receiver/models/User.h",
    "content": "#pragma once\n#include <string>\n\nclass User {\npublic:\n    User(const std::string& name) : name_(name) {}\n    bool save() { return true; }\nprivate:\n    std::string name_;\n};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-nullable-receiver/services/App.cpp",
    "content": "#include \"models/User.h\"\n#include \"models/Repo.h\"\n\nUser* findUser() {\n    return new User(\"alice\");\n}\n\nRepo* findRepo() {\n    return new Repo(\"maindb\");\n}\n\nvoid processEntities() {\n    User* user = findUser();\n    Repo* repo = findRepo();\n\n    // Pointer-based nullable receivers — should disambiguate via unwrapped type\n    user->save();\n    repo->save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-overload-param-types/service.cpp",
    "content": "#include <string>\n\nclass User {};\n\nclass UserService {\npublic:\n    User* lookup(int id) {\n        return nullptr;\n    }\n\n    User* lookup(std::string name) {\n        return nullptr;\n    }\n\n    void run() {\n        lookup(42);        // literal int → should disambiguate to lookup(int)\n        lookup(\"alice\");   // literal string → should disambiguate to lookup(string)\n    }\n};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-parent-resolution/src/BaseModel.h",
    "content": "#pragma once\n\nclass BaseModel {\npublic:\n    bool save() { return true; }\n};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-parent-resolution/src/User.h",
    "content": "#pragma once\n#include \"BaseModel.h\"\n\nclass User : public BaseModel {\npublic:\n    const char* serialize() { return \"\"; }\n};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-pointer-ref-fields/models.h",
    "content": "#pragma once\n\nclass Address {\npublic:\n    std::string city;\n\n    void save() {\n        // persist address\n    }\n};\n\nclass User {\npublic:\n    Address* address;       // raw pointer member field\n    Address& ref_address;   // reference member field\n    std::string name;\n\n    std::string greet() {\n        return name;\n    }\n};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-pointer-ref-fields/service.cpp",
    "content": "#include \"models.h\"\n\nvoid processUser(User user) {\n    // Pointer member field access: user.address->save()\n    user.address->save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-range-for/App.cpp",
    "content": "#include \"User.h\"\n#include \"Repo.h\"\n#include <vector>\n\nvoid processUsers(const std::vector<User>& users) {\n    for (auto& user : users) {\n        user.save();\n    }\n}\n\nvoid processRepos(const std::vector<Repo>& repos) {\n    for (const auto& repo : repos) {\n        repo.save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-range-for/Repo.h",
    "content": "#pragma once\n#include <string>\n\nclass Repo {\npublic:\n    Repo(const std::string& name) : name_(name) {}\n    void save() {}\nprivate:\n    std::string name_;\n};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-range-for/User.h",
    "content": "#pragma once\n#include <string>\n\nclass User {\npublic:\n    User(const std::string& name) : name_(name) {}\n    void save() {}\nprivate:\n    std::string name_;\n};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-receiver-resolution/app.cpp",
    "content": "#include \"user.h\"\n#include \"repo.h\"\n\nvoid processEntities() {\n    User user;\n    Repo repo;\n    user.save();\n    repo.save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-receiver-resolution/repo.h",
    "content": "#pragma once\n\nclass Repo {\npublic:\n    bool save() { return false; }\n};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-receiver-resolution/user.h",
    "content": "#pragma once\n\nclass User {\npublic:\n    bool save() { return true; }\n};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-return-type/app.cpp",
    "content": "#include \"user.h\"\n\nUser getUser(const char* name) {\n    return User(name);\n}\n\nvoid processUser() {\n    auto user = getUser(\"alice\");\n    user.save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-return-type/user.h",
    "content": "#pragma once\n\nclass User {\npublic:\n    User(const char* name) : name_(name) {}\n    void save() {}\nprivate:\n    const char* name_;\n};\n\nUser getUser(const char* name);\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-return-type-inference/app.cpp",
    "content": "#include \"user.h\"\n#include \"repo.h\"\n\nUser getUser(const char* name) {\n    return User(name);\n}\n\nRepo getRepo(const char* name) {\n    return Repo(name);\n}\n\nvoid processUser() {\n    auto user = getUser(\"alice\");\n    user.save();\n}\n\nvoid processRepo() {\n    auto repo = getRepo(\"main\");\n    repo.save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-return-type-inference/repo.h",
    "content": "#pragma once\n\nclass Repo {\npublic:\n    Repo(const char* name) : name_(name) {}\n    bool save() { return true; }\nprivate:\n    const char* name_;\n};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-return-type-inference/user.h",
    "content": "#pragma once\n\nclass User {\npublic:\n    User(const char* name) : name_(name) {}\n    bool save() { return true; }\nprivate:\n    const char* name_;\n};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-scoped-brace-init/main.cpp",
    "content": "#include \"models.h\"\n\nvoid run() {\n    auto client = ns::HttpClient{};\n    client.connect();\n    client.send();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-scoped-brace-init/models.h",
    "content": "namespace ns {\n    class HttpClient {\n    public:\n        void connect() {}\n        void send() {}\n    };\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-self-this-resolution/src/Repo.cpp",
    "content": "class Repo {\npublic:\n    bool save() { return true; }\n};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-self-this-resolution/src/User.cpp",
    "content": "class User {\npublic:\n    bool save() { return true; }\n    void process() {\n        this->save();\n    }\n};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-smart-ptr-dispatch/src/app.cpp",
    "content": "#include <memory>\n#include <string>\n\nclass Animal {\npublic:\n    virtual std::string speak() { return \"...\"; }\n};\n\nclass Dog : public Animal {\npublic:\n    std::string speak() override { return \"woof\"; }\n};\n\nvoid process() {\n    auto dog = std::make_shared<Dog>();\n    dog->speak();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-structured-binding/App.cpp",
    "content": "#include \"User.h\"\n#include \"Repo.h\"\n#include <map>\n#include <string>\n#include <vector>\n\nvoid processUserMap(std::map<std::string, User> userMap) {\n    for (auto& [key, user] : userMap) {\n        user.save();\n    }\n}\n\nvoid processRepoMap(std::map<std::string, Repo> repoMap) {\n    for (const auto& [key, repo] : repoMap) {\n        repo.save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-structured-binding/Repo.h",
    "content": "#pragma once\n#include <string>\n\nclass Repo {\npublic:\n    Repo(const std::string& name) : name_(name) {}\n    void save() {}\nprivate:\n    std::string name_;\n};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-structured-binding/User.h",
    "content": "#pragma once\n#include <string>\n\nclass User {\npublic:\n    User(const std::string& name) : name_(name) {}\n    void save() {}\nprivate:\n    std::string name_;\n};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-variadic-resolution/logger.h",
    "content": "#ifndef LOGGER_H\n#define LOGGER_H\n\n#include <cstdarg>\n#include <cstdio>\n\nvoid log_entry(const char* fmt, ...) {\n    va_list args;\n    va_start(args, fmt);\n    vprintf(fmt, args);\n    va_end(args);\n}\n\n#endif\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-variadic-resolution/main.cpp",
    "content": "#include \"logger.h\"\n\nint main() {\n    log_entry(\"hello %s %s\", \"world\", \"test\");\n    return 0;\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-write-access/models.h",
    "content": "#pragma once\n\nclass Address {\npublic:\n    std::string city;\n};\n\nclass User {\npublic:\n    std::string name;\n    Address address;\n};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/cpp-write-access/service.cpp",
    "content": "#include \"models.h\"\n\nvoid updateUser(User& user) {\n    user.name = \"Alice\";\n    user.address = Address();\n    user.name += \" Smith\";\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-alias-imports/CsharpAlias.csproj",
    "content": "<Project Sdk=\"Microsoft.NET.Sdk\">\n  <PropertyGroup>\n    <OutputType>Library</OutputType>\n    <TargetFramework>net8.0</TargetFramework>\n  </PropertyGroup>\n</Project>\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-alias-imports/Models/Repo.cs",
    "content": "namespace Models\n{\n    public class Repo\n    {\n        public string Url { get; }\n        public Repo(string url) { Url = url; }\n        public bool Persist() => true;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-alias-imports/Models/User.cs",
    "content": "namespace Models\n{\n    public class User\n    {\n        public string Name { get; }\n        public User(string name) { Name = name; }\n        public bool Save() => true;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-alias-imports/Services/Main.cs",
    "content": "using U = Models.User;\nusing R = Models.Repo;\n\nnamespace Services\n{\n    public class Main\n    {\n        public void Run()\n        {\n            var u = new U(\"alice\");\n            var r = new R(\"https://example.com\");\n            u.Save();\n            r.Persist();\n        }\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-ambiguous/Models/Handler.cs",
    "content": "namespace MyApp.Models\n{\n    public class Handler\n    {\n        public void Handle() {}\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-ambiguous/Models/IProcessor.cs",
    "content": "namespace MyApp.Models\n{\n    public interface IProcessor\n    {\n        void Run();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-ambiguous/Other/Handler.cs",
    "content": "namespace MyApp.Other\n{\n    public class Handler\n    {\n        public void Process() {}\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-ambiguous/Other/IProcessor.cs",
    "content": "namespace MyApp.Other\n{\n    public interface IProcessor\n    {\n        void Execute();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-ambiguous/Services/UserHandler.cs",
    "content": "using MyApp.Models;\n\nnamespace MyApp.Services\n{\n    public class UserHandler : Handler, IProcessor\n    {\n        public void Run() {}\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-assignment-chain/AssignmentChain.csproj",
    "content": "<Project Sdk=\"Microsoft.NET.Sdk\">\n  <PropertyGroup>\n    <TargetFramework>net8.0</TargetFramework>\n  </PropertyGroup>\n</Project>\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-assignment-chain/Models/Repo.cs",
    "content": "namespace Models;\n\npublic class Repo\n{\n    public bool Save()\n    {\n        return false;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-assignment-chain/Models/User.cs",
    "content": "namespace Models;\n\npublic class User\n{\n    public bool Save()\n    {\n        return true;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-assignment-chain/Program.cs",
    "content": "using Models;\n\nnamespace App;\n\npublic class Program\n{\n    static User GetUser() => new User();\n    static Repo GetRepo() => new Repo();\n\n    public static void ProcessEntities()\n    {\n        User u = GetUser();\n        var alias = u;\n        alias.Save();\n\n        Repo r = GetRepo();\n        var rAlias = r;\n        rAlias.Save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-async-binding/Order.cs",
    "content": "namespace CSharpAsyncBinding;\n\npublic class Order\n{\n    public string Name { get; set; }\n\n    public void Save()\n    {\n        // persist order\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-async-binding/OrderService.cs",
    "content": "namespace CSharpAsyncBinding;\n\npublic class OrderService\n{\n    public async Task<Order> GetOrderAsync(string name)\n    {\n        return new Order { Name = name };\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-async-binding/Program.cs",
    "content": "namespace CSharpAsyncBinding;\n\npublic class Program\n{\n    public static async Task Main(string[] args)\n    {\n        var userSvc = new UserService();\n        var orderSvc = new OrderService();\n        await ProcessUser(userSvc);\n        await ProcessOrder(orderSvc);\n    }\n\n    public static async Task ProcessUser(UserService userSvc)\n    {\n        var user = await userSvc.GetUserAsync(\"alice\");\n        user.Save();\n    }\n\n    public static async Task ProcessOrder(OrderService orderSvc)\n    {\n        var order = await orderSvc.GetOrderAsync(\"bob\");\n        order.Save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-async-binding/User.cs",
    "content": "namespace CSharpAsyncBinding;\n\npublic class User\n{\n    public string Name { get; set; }\n\n    public void Save()\n    {\n        // persist user\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-async-binding/UserService.cs",
    "content": "namespace CSharpAsyncBinding;\n\npublic class UserService\n{\n    public async Task<User> GetUserAsync(string name)\n    {\n        return new User { Name = name };\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-call-result-binding/App.cs",
    "content": "class User {\n    public string Name { get; set; }\n\n    public User(string name) {\n        Name = name;\n    }\n\n    public bool Save() {\n        return true;\n    }\n}\n\nclass App {\n    static User GetUser(string name) {\n        return new User(name);\n    }\n\n    void ProcessUser() {\n        var user = GetUser(\"alice\");\n        user.Save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-calls/CallProj.csproj",
    "content": "<Project Sdk=\"Microsoft.NET.Sdk\">\n  <PropertyGroup>\n    <TargetFramework>net8.0</TargetFramework>\n    <RootNamespace>CallProj</RootNamespace>\n  </PropertyGroup>\n</Project>\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-calls/Services/UserService.cs",
    "content": "using static CallProj.Utils.OneArg;\nusing static CallProj.Utils.ZeroArg;\n\nnamespace CallProj.Services\n{\n    public class UserService\n    {\n        public void CreateUser()\n        {\n            WriteAudit(\"hello\");\n        }\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-calls/Utils/OneArg.cs",
    "content": "namespace CallProj.Utils\n{\n    public static class OneArg\n    {\n        public static string WriteAudit(string message)\n        {\n            return message;\n        }\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-calls/Utils/ZeroArg.cs",
    "content": "namespace CallProj.Utils\n{\n    public static class ZeroArg\n    {\n        public static string WriteAudit()\n        {\n            return \"zero\";\n        }\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-chain-call/Models/Repo.cs",
    "content": "namespace ChainCall.Models;\n\npublic class Repo\n{\n    public bool Save()\n    {\n        return true;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-chain-call/Models/User.cs",
    "content": "namespace ChainCall.Models;\n\npublic class User\n{\n    public bool Save()\n    {\n        return true;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-chain-call/Program.cs",
    "content": "using ChainCall.Services;\n\npublic class App\n{\n    public void ProcessUser()\n    {\n        var svc = new UserService();\n        svc.GetUser().Save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-chain-call/Services/UserService.cs",
    "content": "using ChainCall.Models;\n\nnamespace ChainCall.Services;\n\npublic class UserService\n{\n    public User GetUser()\n    {\n        return new User();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-deep-field-chain/Models.cs",
    "content": "namespace DeepFieldChain;\n\npublic class City\n{\n    public string ZipCode { get; set; }\n\n    public string GetName()\n    {\n        return \"city\";\n    }\n}\n\npublic class Address\n{\n    public City City { get; set; }\n    public string Street { get; set; }\n\n    public void Save()\n    {\n        // persist address\n    }\n}\n\npublic class User\n{\n    public string Name { get; set; }\n    public Address Address { get; set; }\n\n    public string Greet()\n    {\n        return Name;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-deep-field-chain/Service.cs",
    "content": "namespace DeepFieldChain;\n\npublic class Service\n{\n    public static void ProcessUser(User user)\n    {\n        // 2-level chain: user.Address → Address, then .Save() → Address#Save\n        user.Address.Save();\n\n        // 3-level chain: user.Address → Address, .City → City, .GetName() → City#GetName\n        user.Address.City.GetName();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-dictionary-keys-values/App.cs",
    "content": "using System.Collections.Generic;\n\npublic class App {\n    public void ProcessValues(Dictionary<string, User> data) {\n        foreach (var user in data.Values) {\n            user.Save();\n        }\n    }\n\n    public void ProcessList(List<User> users) {\n        foreach (var user in users) {\n            user.Save();\n        }\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-dictionary-keys-values/Repo.cs",
    "content": "public class Repo {\n    public string Name { get; set; }\n    public void Save() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-dictionary-keys-values/User.cs",
    "content": "public class User {\n    public string Name { get; set; }\n    public void Save() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-field-types/Models.cs",
    "content": "namespace FieldTypes;\n\npublic class Address\n{\n    public string City { get; set; }\n\n    public void Save()\n    {\n        // persist address\n    }\n}\n\npublic class User\n{\n    public string Name { get; set; }\n    public Address Address { get; set; }\n\n    public string Greet()\n    {\n        return Name;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-field-types/Service.cs",
    "content": "namespace FieldTypes;\n\npublic class Service\n{\n    public static void ProcessUser(User user)\n    {\n        // Field-access chain: user.Address → Address, then .Save() → Address#Save\n        user.Address.Save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-foreach/ForeachProj.csproj",
    "content": "<Project Sdk=\"Microsoft.NET.Sdk\">\n  <PropertyGroup>\n    <TargetFramework>net8.0</TargetFramework>\n  </PropertyGroup>\n</Project>\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-foreach/Models/Repo.cs",
    "content": "namespace Models;\n\npublic class Repo\n{\n    public bool Save()\n    {\n        return false;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-foreach/Models/User.cs",
    "content": "namespace Models;\n\npublic class User\n{\n    public bool Save()\n    {\n        return true;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-foreach/Program.cs",
    "content": "using Models;\nusing System.Collections.Generic;\n\nnamespace App;\n\npublic class AppService\n{\n    public void ProcessEntities(List<User> users, List<Repo> repos)\n    {\n        foreach (User user in users)\n        {\n            user.Save();\n        }\n        foreach (Repo repo in repos)\n        {\n            repo.Save();\n        }\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-generic-parent-resolution/src/Models/BaseModel.cs",
    "content": "namespace Models;\n\npublic class BaseModel<T> {\n    public virtual bool Save() { return true; }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-generic-parent-resolution/src/Models/Repo.cs",
    "content": "namespace Models;\n\npublic class Repo {\n    public bool Save() { return true; }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-generic-parent-resolution/src/Models/User.cs",
    "content": "namespace Models;\n\npublic class User : BaseModel<string> {\n    public override bool Save() {\n        base.Save();\n        return true;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-grandparent-resolution/Models/A.cs",
    "content": "namespace Grandparent.Models\n{\n    public class A\n    {\n        public Greeting Greet() => new Greeting();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-grandparent-resolution/Models/B.cs",
    "content": "namespace Grandparent.Models\n{\n    public class B : A {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-grandparent-resolution/Models/C.cs",
    "content": "namespace Grandparent.Models\n{\n    public class C : B {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-grandparent-resolution/Models/Greeting.cs",
    "content": "namespace Grandparent.Models\n{\n    public class Greeting\n    {\n        public void Save() {}\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-grandparent-resolution/Services/App.cs",
    "content": "using Grandparent.Models;\n\nnamespace Grandparent.Services\n{\n    public class App\n    {\n        public void Process()\n        {\n            var c = new C();\n            c.Greet().Save();\n        }\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-is-pattern/IsPatternProj.csproj",
    "content": "<Project Sdk=\"Microsoft.NET.Sdk\">\n  <PropertyGroup>\n    <TargetFramework>net8.0</TargetFramework>\n  </PropertyGroup>\n</Project>\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-is-pattern/models/Repo.cs",
    "content": "namespace IsPattern.Models;\n\npublic class Repo\n{\n    public void Save() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-is-pattern/models/User.cs",
    "content": "namespace IsPattern.Models;\n\npublic class User\n{\n    public void Save() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-is-pattern/services/App.cs",
    "content": "using IsPattern.Models;\n\nnamespace IsPattern.Services;\n\npublic class App\n{\n    public void Process(object obj)\n    {\n        if (obj is User user)\n        {\n            user.Save();\n        }\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-local-shadow/App/Main.cs",
    "content": "using Utils;\n\nnamespace App {\n    public class Main {\n        // Local method shadows imported Logger.Save\n        public static void Save(string data) {\n            System.Console.WriteLine(\"local save: \" + data);\n        }\n\n        public static void Run() {\n            Save(\"test\");\n        }\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-local-shadow/Utils/Logger.cs",
    "content": "namespace Utils {\n    public class Logger {\n        public static void Save(string data) {\n            System.Console.WriteLine(\"utils save: \" + data);\n        }\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-member-calls/MemberCallProj.csproj",
    "content": "<Project Sdk=\"Microsoft.NET.Sdk\">\n  <PropertyGroup>\n    <TargetFramework>net8.0</TargetFramework>\n  </PropertyGroup>\n</Project>\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-member-calls/Models/User.cs",
    "content": "namespace Models;\n\npublic class User\n{\n    public bool Save()\n    {\n        return true;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-member-calls/Services/UserService.cs",
    "content": "using Models;\n\nnamespace Services;\n\npublic class UserService\n{\n    public bool ProcessUser()\n    {\n        var user = new User();\n        return user.Save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-method-chain-binding/App.cs",
    "content": "class City {\n    public string Name { get; set; }\n    public City(string name) { Name = name; }\n    public bool Save() { return true; }\n}\n\nclass Address {\n    public City City { get; set; }\n    public Address(City city) { City = city; }\n    public City GetCity() { return City; }\n}\n\nclass User {\n    public Address Address { get; set; }\n    public User(Address address) { Address = address; }\n}\n\nclass App {\n    static User GetUser() {\n        return new User(new Address(new City(\"NYC\")));\n    }\n\n    void ProcessChain() {\n        var user = GetUser();\n        var addr = user.Address;\n        var city = addr.GetCity();\n        city.Save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-mixed-decl-chain/MixedDeclChain.csproj",
    "content": "<Project Sdk=\"Microsoft.NET.Sdk\">\n  <PropertyGroup>\n    <TargetFramework>net8.0</TargetFramework>\n  </PropertyGroup>\n</Project>\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-mixed-decl-chain/Models/Repo.cs",
    "content": "public class Repo\n{\n    public bool Save() => false;\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-mixed-decl-chain/Models/User.cs",
    "content": "public class User\n{\n    public bool Save() => true;\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-mixed-decl-chain/Program.cs",
    "content": "// Tests assignment chain + is-pattern in the same file.\n// The is-pattern (obj is User u) creates a Tier 0 binding;\n// the assignment chain (var alias = u) propagates it via Tier 2.\n// Also verifies that the type guard in extractPendingAssignment\n// correctly skips is_pattern_expression nodes without breaking.\npublic class App\n{\n    public static void ProcessWithChain()\n    {\n        User u = new User();\n        var alias = u;\n        alias.Save();\n    }\n\n    public static void ProcessWithPattern(object obj)\n    {\n        if (obj is User u)\n        {\n            u.Save();\n        }\n    }\n\n    public static void ProcessRepoChain()\n    {\n        Repo r = new Repo();\n        var alias = r;\n        alias.Save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-nested-member-foreach/App.cs",
    "content": "using System.Collections.Generic;\n\npublic class App {\n    private Dictionary<string, User> data;\n\n    public void ProcessValues() {\n        foreach (var user in this.data.Values) {\n            user.Save();\n        }\n    }\n\n    public void ProcessKeys() {\n        foreach (var key in this.data.Keys) {\n            key.ToString();\n        }\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-nested-member-foreach/Repo.cs",
    "content": "public class Repo {\n    public string Name { get; set; }\n    public void Save() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-nested-member-foreach/User.cs",
    "content": "public class User {\n    public string Name { get; set; }\n    public void Save() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-null-check-narrowing/Models/Repo.cs",
    "content": "namespace NullCheck.Models\n{\n    public class Repo\n    {\n        public void Save() {}\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-null-check-narrowing/Models/User.cs",
    "content": "namespace NullCheck.Models\n{\n    public class User\n    {\n        public void Save() {}\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-null-check-narrowing/Services/App.cs",
    "content": "using NullCheck.Models;\nusing System;\n\nnamespace NullCheck.Services\n{\n    public class App\n    {\n        public App(User? x)\n        {\n            if (x != null)\n            {\n                x.Save();\n            }\n        }\n\n        public void ProcessInequality(User x)\n        {\n            if (x != null)\n            {\n                x.Save();\n            }\n        }\n\n        public void ProcessIsNotNull(User x)\n        {\n            if (x is not null)\n            {\n                x.Save();\n            }\n        }\n\n        public void ProcessInLambda(User? x)\n        {\n            Action act = () =>\n            {\n                if (x != null)\n                {\n                    x.Save();\n                }\n            };\n            act();\n        }\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-null-conditional/App.cs",
    "content": "using Models;\n\nnamespace App;\n\npublic class AppService\n{\n    public void Process()\n    {\n        User? user = new User();\n        Repo? repo = new Repo();\n\n        // Null-conditional calls — nullable receiver should be unwrapped\n        user?.Save();\n        repo?.Save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-null-conditional/Models/Repo.cs",
    "content": "namespace Models;\n\npublic class Repo\n{\n    public bool Save()\n    {\n        return true;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-null-conditional/Models/User.cs",
    "content": "namespace Models;\n\npublic class User\n{\n    public bool Save()\n    {\n        return true;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-null-conditional/NullConditional.csproj",
    "content": "<Project Sdk=\"Microsoft.NET.Sdk\">\n  <PropertyGroup>\n    <TargetFramework>net8.0</TargetFramework>\n  </PropertyGroup>\n</Project>\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-optional-params/Services/App.cs",
    "content": "public class Greeter {\n    public string Greet(string name, string greeting = \"Hello\") {\n        return greeting + \", \" + name;\n    }\n}\n\npublic class Program {\n    public static void Main() {\n        var g = new Greeter();\n        g.Greet(\"Alice\");\n        g.Greet(\"Bob\", \"Hi\");\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-overload-param-types/Models/UserService.cs",
    "content": "namespace Models;\n\npublic class User\n{\n    public string GetName() => \"user\";\n}\n\npublic class UserService\n{\n    public User Lookup(int id)\n    {\n        return new User();\n    }\n\n    public User Lookup(string name)\n    {\n        return new User();\n    }\n\n    public void Run()\n    {\n        Lookup(42);        // literal int → should disambiguate to Lookup(int)\n        Lookup(\"alice\");   // literal string → should disambiguate to Lookup(string)\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-parent-resolution/src/Models/BaseModel.cs",
    "content": "namespace Models;\n\npublic class BaseModel {\n    public bool Save() { return true; }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-parent-resolution/src/Models/ISerializable.cs",
    "content": "namespace Models;\n\npublic interface ISerializable {\n    string Serialize();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-parent-resolution/src/Models/User.cs",
    "content": "namespace Models;\n\npublic class User : BaseModel, ISerializable {\n    public string Serialize() { return \"\"; }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-pattern-matching/Models/Animal.cs",
    "content": "namespace Models;\n\npublic class Animal\n{\n    public string Name { get; set; }\n}\n\npublic class Dog : Animal\n{\n    public void Bark()\n    {\n    }\n}\n\npublic class Cat : Animal\n{\n    public void Meow()\n    {\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-pattern-matching/PatternMatchProj.csproj",
    "content": "<Project Sdk=\"Microsoft.NET.Sdk\">\n  <PropertyGroup>\n    <TargetFramework>net8.0</TargetFramework>\n  </PropertyGroup>\n</Project>\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-pattern-matching/Services/AnimalService.cs",
    "content": "using Models;\n\nnamespace Services;\n\npublic class AnimalService\n{\n    public void HandleAnimal(Animal animal)\n    {\n        if (animal is Dog dog)\n        {\n            dog.Bark();\n        }\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-primary-ctors/App.cs",
    "content": "using Models;\n\npublic class App\n{\n    public void Run()\n    {\n        // Explicit new\n        var user = new User(\"Alice\", 30);\n        user.Save();\n\n        // Target-typed new (C# 9)\n        User user2 = new(\"Bob\", 25);\n        user2.Save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-primary-ctors/Models/Person.cs",
    "content": "namespace Models;\n\n// C# 12 record with primary constructor\npublic record Person(string FirstName, string LastName);\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-primary-ctors/Models/User.cs",
    "content": "namespace Models;\n\n// C# 12 primary constructor\npublic class User(string name, int age)\n{\n    public string Name => name;\n    public int Age => age;\n\n    public void Save() { }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-proj/Interfaces/IRepository.cs",
    "content": "namespace MyApp.Interfaces\n{\n    public interface IRepository\n    {\n        void Save();\n        void Delete();\n    }\n\n    public interface ILogger\n    {\n        void Log(string message);\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-proj/Models/BaseEntity.cs",
    "content": "namespace MyApp.Models\n{\n    public class BaseEntity\n    {\n        public int Id { get; set; }\n\n        public virtual void Validate()\n        {\n        }\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-proj/Models/User.cs",
    "content": "using MyApp.Interfaces;\n\nnamespace MyApp.Models\n{\n    public class User : BaseEntity, IRepository\n    {\n        public string Name { get; set; }\n\n        public void Save()\n        {\n        }\n\n        public void Delete()\n        {\n        }\n\n        public override void Validate()\n        {\n        }\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-proj/Services/UserService.cs",
    "content": "using MyApp.Models;\nusing MyApp.Interfaces;\n\nnamespace MyApp.Services\n{\n    public class UserService\n    {\n        private readonly IRepository _repo;\n        private readonly ILogger _logger;\n\n        public void CreateUser(string name)\n        {\n            var user = new User();\n            user.Validate();\n            _repo.Save();\n            _logger.Log(\"User created\");\n        }\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-receiver-resolution/App.cs",
    "content": "using Models;\n\nnamespace App;\n\npublic class AppService\n{\n    public void ProcessEntities()\n    {\n        User user = new User();\n        Repo repo = new Repo();\n        user.Save();\n        repo.Save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-receiver-resolution/Models/Repo.cs",
    "content": "namespace Models;\n\npublic class Repo\n{\n    public bool Save()\n    {\n        return false;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-receiver-resolution/Models/User.cs",
    "content": "namespace Models;\n\npublic class User\n{\n    public bool Save()\n    {\n        return true;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-receiver-resolution/ReceiverProj.csproj",
    "content": "<Project Sdk=\"Microsoft.NET.Sdk\">\n  <PropertyGroup>\n    <TargetFramework>net8.0</TargetFramework>\n  </PropertyGroup>\n</Project>\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-recursive-pattern/Models/Repo.cs",
    "content": "namespace Models;\n\npublic class Repo\n{\n    public string Name { get; set; } = \"\";\n    public bool Save() { return true; }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-recursive-pattern/Models/User.cs",
    "content": "namespace Models;\n\npublic class User\n{\n    public string Name { get; set; } = \"\";\n    public bool Save() { return true; }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-recursive-pattern/Program.cs",
    "content": "using Models;\n\nnamespace App;\n\npublic class AppService\n{\n    public void ProcessWithRecursivePattern(object obj)\n    {\n        if (obj is User { Name: \"Alice\" } u)\n        {\n            u.Save();\n        }\n\n        var result = obj switch\n        {\n            Repo { Name: \"main\" } r => r.Save(),\n            _ => false\n        };\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-recursive-pattern/RecursivePatternProj.csproj",
    "content": "<Project Sdk=\"Microsoft.NET.Sdk\">\n  <PropertyGroup>\n    <TargetFramework>net8.0</TargetFramework>\n  </PropertyGroup>\n</Project>\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-return-type/Models/Repo.cs",
    "content": "namespace ReturnType.Models;\n\npublic class Repo\n{\n    public bool Save()\n    {\n        return true;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-return-type/Models/User.cs",
    "content": "namespace ReturnType.Models;\n\npublic class User\n{\n    private string _name;\n\n    public User(string name)\n    {\n        _name = name;\n    }\n\n    public bool Save()\n    {\n        return true;\n    }\n}\n\npublic class UserService\n{\n    public User GetUser(string name)\n    {\n        return new User(name);\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-return-type/ReturnType.csproj",
    "content": "<Project Sdk=\"Microsoft.NET.Sdk\">\n  <PropertyGroup>\n    <TargetFramework>net8.0</TargetFramework>\n  </PropertyGroup>\n</Project>\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-return-type/Services/App.cs",
    "content": "using ReturnType.Models;\n\nnamespace ReturnType.Services;\n\npublic class App\n{\n    public void Run()\n    {\n        var svc = new UserService();\n        var user = svc.GetUser(\"alice\");\n        user.Save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-self-this-resolution/src/Models/Repo.cs",
    "content": "namespace Models;\n\npublic class Repo {\n    public bool Save() { return true; }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-self-this-resolution/src/Models/User.cs",
    "content": "namespace Models;\n\npublic class User {\n    public bool Save() { return true; }\n    public void Process() {\n        this.Save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-super-resolution/src/Models/BaseModel.cs",
    "content": "namespace Models;\n\npublic class BaseModel {\n    public virtual bool Save() { return true; }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-super-resolution/src/Models/Repo.cs",
    "content": "namespace Models;\n\npublic class Repo {\n    public bool Save() { return true; }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-super-resolution/src/Models/User.cs",
    "content": "namespace Models;\n\npublic class User : BaseModel {\n    public override bool Save() {\n        base.Save();\n        return true;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-switch-pattern/Models/Repo.cs",
    "content": "namespace Models;\n\npublic class Repo\n{\n    public bool Save() { return false; }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-switch-pattern/Models/User.cs",
    "content": "namespace Models;\n\npublic class User\n{\n    public bool Save() { return true; }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-switch-pattern/Program.cs",
    "content": "using Models;\n\nnamespace App;\n\npublic class AppService\n{\n    public void Process(object obj)\n    {\n        if (obj is User user)\n        {\n            user.Save();\n        }\n\n        switch (obj)\n        {\n            case Repo repo:\n                repo.Save();\n                break;\n        }\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-switch-pattern/SwitchPattern.csproj",
    "content": "<Project Sdk=\"Microsoft.NET.Sdk\">\n  <PropertyGroup>\n    <TargetFramework>net8.0</TargetFramework>\n  </PropertyGroup>\n</Project>\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-var-foreach/Models/Repo.cs",
    "content": "namespace Models;\n\npublic class Repo\n{\n    public bool Save() { return false; }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-var-foreach/Models/User.cs",
    "content": "namespace Models;\n\npublic class User\n{\n    public bool Save() { return true; }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-var-foreach/Program.cs",
    "content": "using Models;\nusing System.Collections.Generic;\n\nnamespace App;\n\npublic class AppService\n{\n    public void ProcessUsers(List<User> users)\n    {\n        foreach (var user in users)\n        {\n            user.Save();\n        }\n    }\n\n    public void ProcessRepos(List<Repo> repos)\n    {\n        foreach (var repo in repos)\n        {\n            repo.Save();\n        }\n    }\n\n    public void Direct(User u, Repo r)\n    {\n        u.Save();\n        r.Save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-var-foreach/VarForeach.csproj",
    "content": "<Project Sdk=\"Microsoft.NET.Sdk\">\n  <PropertyGroup>\n    <TargetFramework>net8.0</TargetFramework>\n  </PropertyGroup>\n</Project>\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-variadic-resolution/Services/App.cs",
    "content": "using static VariadicProj.Utils.Logger;\n\nnamespace VariadicProj.Services\n{\n    public class App\n    {\n        public void Execute()\n        {\n            Record(\"hello\", \"world\");\n        }\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-variadic-resolution/Utils/Logger.cs",
    "content": "namespace VariadicProj.Utils\n{\n    public static class Logger\n    {\n        public static string Record(params string[] args)\n        {\n            return string.Join(\", \", args);\n        }\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-variadic-resolution/VariadicProj.csproj",
    "content": "<Project Sdk=\"Microsoft.NET.Sdk\">\n  <PropertyGroup>\n    <TargetFramework>net8.0</TargetFramework>\n  </PropertyGroup>\n</Project>\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-write-access/Models.cs",
    "content": "public class Address {\n    public string City { get; set; }\n}\n\npublic class User {\n    public string Name { get; set; }\n    public Address Address { get; set; }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/csharp-write-access/Service.cs",
    "content": "public class UserService {\n    public void UpdateUser(User user) {\n        user.Name = \"Alice\";\n        user.Address = new Address();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/field-types/models.ts",
    "content": "export class Address {\n  city: string;\n\n  save(): void {\n    // persist address\n  }\n}\n\nexport class User {\n  name: string;\n  address: Address;\n\n  greet(): string {\n    return this.name;\n  }\n}\n\nexport class Config {\n  static DEFAULT: Config = new Config();\n\n  validate(): boolean {\n    return true;\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/field-types/service.ts",
    "content": "import { User, Config } from './models';\n\nfunction processUser(user: User) {\n  // Field-access chain: user.address resolves to Address, then .save() resolves to Address#save\n  user.address.save();\n}\n\nfunction validateConfig() {\n  // Static field access: Config.DEFAULT resolves to Config, then .validate() resolves to Config#validate\n  Config.DEFAULT.validate();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-ambiguous/go.mod",
    "content": "module github.com/example/ambiguous\n\ngo 1.21\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-ambiguous/internal/models/handler.go",
    "content": "package models\n\ntype Handler struct {\n\tName string\n}\n\nfunc (h *Handler) Handle() {}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-ambiguous/internal/other/handler.go",
    "content": "package other\n\ntype Handler struct {\n\tID int\n}\n\nfunc (h *Handler) Process() {}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-ambiguous/internal/services/user.go",
    "content": "package services\n\nimport \"github.com/example/ambiguous/internal/models\"\n\ntype UserHandler struct {\n\tmodels.Handler\n}\n\nfunc (u *UserHandler) Run() {}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-assignment-chain/cmd/main.go",
    "content": "package main\n\nimport \"example.com/go-assignment-chain/models\"\n\nfunc getUser() models.User {\n\treturn models.User{}\n}\n\nfunc getRepo() models.Repo {\n\treturn models.Repo{}\n}\n\nfunc processEntities() {\n\tvar u models.User = getUser()\n\talias := u\n\talias.Save()\n\n\tvar r models.Repo = getRepo()\n\trAlias := r\n\trAlias.Save()\n}\n\nfunc processWithVar() {\n\tvar u models.User = getUser()\n\tvar alias = u\n\talias.Save()\n\n\tvar r models.Repo = getRepo()\n\tvar rAlias = r\n\trAlias.Save()\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-assignment-chain/go.mod",
    "content": "module example.com/go-assignment-chain\n\ngo 1.21\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-assignment-chain/models/repo.go",
    "content": "package models\n\ntype Repo struct{}\n\nfunc (r *Repo) Save() bool {\n\treturn false\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-assignment-chain/models/user.go",
    "content": "package models\n\ntype User struct{}\n\nfunc (u *User) Save() bool {\n\treturn true\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-call-result-binding/cmd/main.go",
    "content": "package main\n\nimport \"example.com/callresult/models\"\n\nfunc GetUser(name string) *models.User {\n\treturn &models.User{Name: name}\n}\n\nfunc processUser() {\n\tuser := GetUser(\"alice\")\n\tuser.Save()\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-call-result-binding/go.mod",
    "content": "module example.com/callresult\n\ngo 1.21\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-call-result-binding/models/user.go",
    "content": "package models\n\ntype User struct {\n\tName string\n}\n\nfunc (u *User) Save() bool {\n\treturn true\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-calls/cmd/main.go",
    "content": "package main\n\nimport (\n\t. \"example.com/go-calls/internal/onearg\"\n\t_ \"example.com/go-calls/internal/zeroarg\"\n)\n\nfunc main() {\n\t_ = WriteAudit(\"hello\")\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-calls/go.mod",
    "content": "module example.com/go-calls\n\ngo 1.22\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-calls/internal/onearg/log.go",
    "content": "package onearg\n\nfunc WriteAudit(message string) string {\n\treturn message\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-calls/internal/zeroarg/log.go",
    "content": "package zeroarg\n\nfunc WriteAudit() string {\n\treturn \"zero\"\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-chain-call/cmd/main.go",
    "content": "package main\n\nimport \"example.com/chaincall/models\"\n\ntype UserService struct{}\n\nfunc (s *UserService) GetUser() *models.User {\n\treturn &models.User{Name: \"alice\"}\n}\n\nfunc processUser() {\n\tsvc := &UserService{}\n\tsvc.GetUser().Save()\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-chain-call/go.mod",
    "content": "module example.com/chaincall\n\ngo 1.21\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-chain-call/models/repo.go",
    "content": "package models\n\ntype Repo struct {\n\tName string\n}\n\nfunc (r *Repo) Save() bool {\n\treturn true\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-chain-call/models/user.go",
    "content": "package models\n\ntype User struct {\n\tName string\n}\n\nfunc (u *User) Save() bool {\n\treturn true\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-constructor-type-inference/cmd/main.go",
    "content": "package main\n\nimport \"example.com/go-constructor-type-inference/models\"\n\nfunc processEntities() {\n\tuser := models.User{}\n\trepo := models.Repo{}\n\tuser.Save()\n\trepo.Save()\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-constructor-type-inference/go.mod",
    "content": "module example.com/go-constructor-type-inference\n\ngo 1.21\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-constructor-type-inference/models/repo.go",
    "content": "package models\n\ntype Repo struct{}\n\nfunc (r *Repo) Save() bool {\n\treturn false\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-constructor-type-inference/models/user.go",
    "content": "package models\n\ntype User struct{}\n\nfunc (u *User) Save() bool {\n\treturn true\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-deep-field-chain/cmd/main.go",
    "content": "package main\n\nimport \"example.com/go-deep-field-chain/models\"\n\nfunc processUser(user models.User) {\n\t// 2-level chain: user.Address → Address, then .Save() → Address#Save\n\tuser.Address.Save()\n\n\t// 3-level chain: user.Address → Address, .City → City, .GetName() → City#GetName\n\tuser.Address.City.GetName()\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-deep-field-chain/go.mod",
    "content": "module example.com/go-deep-field-chain\n\ngo 1.21\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-deep-field-chain/models/models.go",
    "content": "package models\n\ntype City struct {\n\tZipCode string\n}\n\nfunc (c *City) GetName() string {\n\treturn \"city\"\n}\n\ntype Address struct {\n\tCity   City\n\tStreet string\n}\n\nfunc (a *Address) Save() bool {\n\treturn true\n}\n\ntype User struct {\n\tName    string\n\tAddress Address\n}\n\nfunc (u *User) Greet() string {\n\treturn u.Name\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-field-types/cmd/main.go",
    "content": "package main\n\nimport \"example.com/go-field-types/models\"\n\nfunc processUser(user models.User) {\n\t// Field-access chain: user.Address → Address, then .Save() → Address#Save\n\tuser.Address.Save()\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-field-types/go.mod",
    "content": "module example.com/go-field-types\n\ngo 1.21\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-field-types/models/models.go",
    "content": "package models\n\ntype Address struct {\n\tCity string\n}\n\nfunc (a *Address) Save() bool {\n\treturn true\n}\n\ntype User struct {\n\tName    string\n\tAddress Address\n}\n\nfunc (u *User) Greet() string {\n\treturn u.Name\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-for-call-expr/cmd/main.go",
    "content": "package main\n\nimport \"example.com/for-call-expr/models\"\n\nfunc processUsers() {\n\tfor _, user := range models.GetUsers() {\n\t\tuser.Save()\n\t}\n}\n\nfunc processRepos() {\n\tfor _, repo := range models.GetRepos() {\n\t\trepo.Save()\n\t}\n}\n\nfunc main() {}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-for-call-expr/go.mod",
    "content": "module example.com/for-call-expr\n\ngo 1.21\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-for-call-expr/models/repo.go",
    "content": "package models\n\ntype Repo struct {\n\tName string\n}\n\nfunc (r *Repo) Save() error {\n\treturn nil\n}\n\nfunc GetRepos() []Repo {\n\treturn []Repo{{Name: \"main\"}}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-for-call-expr/models/user.go",
    "content": "package models\n\ntype User struct {\n\tName string\n}\n\nfunc (u *User) Save() error {\n\treturn nil\n}\n\nfunc GetUsers() []User {\n\treturn []User{{Name: \"alice\"}}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-inc-dec-write-access/main.go",
    "content": "package main\n\ntype Counter struct {\n\tCount int\n\tTotal int\n}\n\nfunc increment(c *Counter) {\n\tc.Count++\n\tc.Total++\n}\n\nfunc decrement(c *Counter) {\n\tc.Count--\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-local-shadow/cmd/main.go",
    "content": "package main\n\nimport \"go-local-shadow/internal/utils\"\n\nfunc Save(data string) {\n\tprintln(\"local save\")\n}\n\nfunc main() {\n\tSave(\"test\")\n\t_ = utils.Save\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-local-shadow/go.mod",
    "content": "module go-local-shadow\n\ngo 1.21\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-local-shadow/internal/utils/utils.go",
    "content": "package utils\n\nfunc Save(data string) {\n\tprintln(\"saving from utils\")\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-make-builtin/main.go",
    "content": "package main\n\nfunc main() {\n\tsl := make([]User, 0)\n\tsl[0].Save()\n\n\tm := make(map[string]User)\n\tm[\"key\"].Greet()\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-make-builtin/models.go",
    "content": "package main\n\ntype User struct {\n\tName string\n}\n\nfunc (u *User) Save() {}\nfunc (u *User) Greet() string {\n\treturn \"Hello, \" + u.Name\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-map-range/main.go",
    "content": "package main\n\nimport \"models\"\n\nfunc processMap(userMap map[string]models.User) {\n\tfor _, user := range userMap {\n\t\tuser.Save()\n\t}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-map-range/models/repo.go",
    "content": "package models\n\ntype Repo struct {\n\tPath string\n}\n\nfunc (r Repo) Save() {}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-map-range/models/user.go",
    "content": "package models\n\ntype User struct {\n\tName string\n}\n\nfunc (u User) Save() {}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-member-calls/cmd/main.go",
    "content": "package main\n\nimport \"example.com/go-member-calls/models\"\n\nfunc processUser() bool {\n\tuser := models.User{}\n\treturn user.Save()\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-member-calls/go.mod",
    "content": "module example.com/go-member-calls\n\ngo 1.21\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-member-calls/models/user.go",
    "content": "package models\n\ntype User struct{}\n\nfunc (u *User) Save() bool {\n\treturn true\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-method-chain-binding/cmd/main.go",
    "content": "package main\n\nimport \"example.com/methodchain/models\"\n\nfunc GetUser() *models.User {\n\treturn &models.User{}\n}\n\nfunc processChain() {\n\tuser := GetUser()\n\taddr := user.Address\n\tcity := addr.GetCity()\n\tcity.Save()\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-method-chain-binding/go.mod",
    "content": "module example.com/methodchain\n\ngo 1.21\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-method-chain-binding/models/user.go",
    "content": "package models\n\ntype City struct {\n\tName string\n}\n\nfunc (c *City) Save() bool {\n\treturn true\n}\n\ntype Address struct {\n\tCity City\n}\n\nfunc (a *Address) GetCity() *City {\n\treturn &a.City\n}\n\ntype User struct {\n\tAddress Address\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-mixed-chain/cmd/main.go",
    "content": "package main\n\nimport \"example.com/go-mixed-chain/models\"\n\nfunc processWithService(svc *models.UserService) {\n\tsvc.GetUser().Address.Save()\n}\n\nfunc processWithUser(user *models.User) {\n\tuser.GetAddress().City.GetName()\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-mixed-chain/go.mod",
    "content": "module example.com/go-mixed-chain\n\ngo 1.21\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-mixed-chain/models/models.go",
    "content": "package models\n\ntype City struct {\n\tName string\n}\n\nfunc (c *City) GetName() string {\n\treturn c.Name\n}\n\ntype Address struct {\n\tCity   City\n\tStreet string\n}\n\nfunc (a *Address) Save() {\n}\n\ntype User struct {\n\tName    string\n\tAddress Address\n}\n\nfunc (u *User) GetAddress() *Address {\n\treturn &u.Address\n}\n\ntype UserService struct{}\n\nfunc (s *UserService) GetUser() *User {\n\treturn &User{}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-multi-assign/app.go",
    "content": "package main\n\nfunc process(name string, url string) {\n\tuser, repo := User{Name: name}, Repo{URL: url}\n\tuser.Save()\n\trepo.Persist()\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-multi-assign/models.go",
    "content": "package main\n\ntype User struct {\n\tName string\n}\n\nfunc (u *User) Save() bool {\n\treturn true\n}\n\ntype Repo struct {\n\tURL string\n}\n\nfunc (r *Repo) Persist() bool {\n\treturn true\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-multi-return-inference/cmd/main.go",
    "content": "package main\n\nimport \"example.com/multireturn/models\"\n\nfunc NewUser(name string) (*models.User, error) {\n\treturn &models.User{Name: name}, nil\n}\n\nfunc NewRepo(name string) (*models.Repo, error) {\n\treturn &models.Repo{Name: name}, nil\n}\n\nfunc processUser() {\n\tuser, err := NewUser(\"alice\")\n\tif err != nil {\n\t\treturn\n\t}\n\tuser.Save()\n}\n\nfunc processRepo() {\n\trepo, _ := NewRepo(\"main\")\n\trepo.Save()\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-multi-return-inference/go.mod",
    "content": "module example.com/multireturn\n\ngo 1.21\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-multi-return-inference/models/repo.go",
    "content": "package models\n\ntype Repo struct {\n\tName string\n}\n\nfunc (r *Repo) Save() bool {\n\treturn true\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-multi-return-inference/models/user.go",
    "content": "package models\n\ntype User struct {\n\tName string\n}\n\nfunc (u *User) Save() bool {\n\treturn true\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-new-builtin/main.go",
    "content": "package main\n\nfunc main() {\n\tuser := new(User)\n\tuser.Save()\n\tuser.Greet()\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-new-builtin/models.go",
    "content": "package main\n\ntype User struct {\n\tName string\n\tAge  int\n}\n\nfunc (u *User) Save() {}\nfunc (u *User) Greet() string {\n\treturn \"Hello, \" + u.Name\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-nullable-receiver/cmd/main.go",
    "content": "package main\n\nimport \"example.com/go-nullable-receiver/models\"\n\nfunc findUser() *models.User {\n\treturn &models.User{}\n}\n\nfunc findRepo() *models.Repo {\n\treturn &models.Repo{}\n}\n\nfunc processEntities() {\n\tvar user *models.User = findUser()\n\tvar repo *models.Repo = findRepo()\n\tuser.Save()\n\trepo.Save()\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-nullable-receiver/go.mod",
    "content": "module example.com/go-nullable-receiver\n\ngo 1.21\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-nullable-receiver/models/repo.go",
    "content": "package models\n\ntype Repo struct{}\n\nfunc (r *Repo) Save() bool {\n\treturn false\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-nullable-receiver/models/user.go",
    "content": "package models\n\ntype User struct{}\n\nfunc (u *User) Save() bool {\n\treturn true\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-parent-resolution/go.mod",
    "content": "module example.com/app\n\ngo 1.21\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-parent-resolution/models/base.go",
    "content": "package models\n\ntype BaseModel struct{}\n\nfunc (b *BaseModel) Save() bool {\n    return true\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-parent-resolution/models/user.go",
    "content": "package models\n\ntype User struct {\n    BaseModel\n}\n\nfunc (u *User) Serialize() string {\n    return \"\"\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-pkg/cmd/main.go",
    "content": "package main\n\nimport (\n\t\"github.com/example/gopkg/internal/auth\"\n\t\"github.com/example/gopkg/internal/models\"\n)\n\nfunc main() {\n\tuser := auth.Authenticate(\"alice\")\n\t_ = models.NewUser(\"bob\")\n\t_ = models.NewAdmin(\"charlie\", \"superadmin\")\n\t_ = user\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-pkg/go.mod",
    "content": "module github.com/example/gopkg\n\ngo 1.21\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-pkg/internal/auth/service.go",
    "content": "package auth\n\nimport \"github.com/example/gopkg/internal/models\"\n\nfunc Authenticate(name string) *models.User {\n\tuser := models.NewUser(name)\n\treturn user\n}\n\nfunc ValidateToken(token string) bool {\n\treturn len(token) > 0\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-pkg/internal/models/admin.go",
    "content": "package models\n\ntype Admin struct {\n\tUser\n\tRole string\n}\n\nfunc NewAdmin(name string, role string) *Admin {\n\treturn &Admin{User: *NewUser(name), Role: role}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-pkg/internal/models/repository.go",
    "content": "package models\n\ntype Repository interface {\n\tSave(user *User) error\n\tFindByID(id int) (*User, error)\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-pkg/internal/models/user.go",
    "content": "package models\n\ntype User struct {\n\tID   int\n\tName string\n}\n\nfunc NewUser(name string) *User {\n\treturn &User{Name: name}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-pointer-constructor-inference/cmd/main.go",
    "content": "package main\n\nimport \"example.com/pointer-test/models\"\n\nfunc process() {\n\tuser := &models.User{Name: \"alice\"}\n\tuser.Save()\n\n\trepo := &models.Repo{Name: \"test\"}\n\trepo.Save()\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-pointer-constructor-inference/go.mod",
    "content": "module example.com/pointer-test\n\ngo 1.21\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-pointer-constructor-inference/models/repo.go",
    "content": "package models\n\ntype Repo struct {\n\tName string\n}\n\nfunc (r *Repo) Save() bool {\n\treturn true\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-pointer-constructor-inference/models/user.go",
    "content": "package models\n\ntype User struct {\n\tName string\n}\n\nfunc (u *User) Save() bool {\n\treturn true\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-receiver-resolution/cmd/main.go",
    "content": "package main\n\nimport \"example.com/go-receiver-resolution/models\"\n\nfunc processEntities() {\n\tvar user models.User\n\tvar repo models.Repo\n\tuser.Save()\n\trepo.Save()\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-receiver-resolution/go.mod",
    "content": "module example.com/go-receiver-resolution\n\ngo 1.21\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-receiver-resolution/models/repo.go",
    "content": "package models\n\ntype Repo struct{}\n\nfunc (r *Repo) Save() bool {\n\treturn false\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-receiver-resolution/models/user.go",
    "content": "package models\n\ntype User struct{}\n\nfunc (u *User) Save() bool {\n\treturn true\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-return-type-inference/cmd/main.go",
    "content": "package main\n\nimport \"example.com/returntype/models\"\n\nfunc GetUser(name string) *models.User {\n\treturn &models.User{Name: name}\n}\n\nfunc processUser() {\n\tuser := GetUser(\"alice\")\n\tuser.Save()\n}\n\n// Cross-package factory call: models.NewUser() uses selector_expression in the AST\nfunc processUserCrossPackage() {\n\tuser := models.NewUser(\"bob\")\n\tuser.Save()\n}\n\nfunc GetRepo(name string) *models.Repo {\n\treturn &models.Repo{Name: name}\n}\n\nfunc processRepo() {\n\trepo := GetRepo(\"main\")\n\trepo.Save()\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-return-type-inference/go.mod",
    "content": "module example.com/returntype\n\ngo 1.21\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-return-type-inference/models/repo.go",
    "content": "package models\n\ntype Repo struct {\n\tName string\n}\n\nfunc (r *Repo) Save() bool {\n\treturn true\n}\n\nfunc GetRepo(name string) *Repo {\n\treturn &Repo{Name: name}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-return-type-inference/models/user.go",
    "content": "package models\n\ntype User struct {\n\tName string\n}\n\nfunc (u *User) Save() bool {\n\treturn true\n}\n\nfunc NewUser(name string) *User {\n\treturn &User{Name: name}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-struct-literals/app.go",
    "content": "package main\n\nfunc processUser(name string) {\n\tuser := User{Name: name}\n\tuser.Save()\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-struct-literals/user.go",
    "content": "package main\n\ntype User struct {\n\tName string\n}\n\nfunc (u *User) Save() bool {\n\treturn true\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-type-assertion/main.go",
    "content": "package main\n\nfunc process(s Saver) {\n\tuser := s.(User)\n\tuser.Save()\n\tuser.Greet()\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-type-assertion/models.go",
    "content": "package main\n\ntype Saver interface {\n\tSave()\n}\n\ntype User struct {\n\tName string\n}\n\nfunc (u *User) Save() {}\nfunc (u *User) Greet() string {\n\treturn \"Hello, \" + u.Name\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-variadic-resolution/cmd/main.go",
    "content": "package main\n\nimport . \"example.com/go-variadic-resolution/internal/logger\"\n\nfunc main() {\n\tEntry(\"hello\", \"world\", \"test\")\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-variadic-resolution/go.mod",
    "content": "module example.com/go-variadic-resolution\n\ngo 1.22\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-variadic-resolution/internal/logger/logger.go",
    "content": "package logger\n\nfunc Entry(args ...interface{}) {\n\t_ = args\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/go-write-access/main.go",
    "content": "package main\n\ntype Address struct {\n\tCity string\n}\n\ntype User struct {\n\tName    string\n\tAddress Address\n}\n\nfunc updateUser(user *User) {\n\tuser.Name = \"Alice\"\n\tuser.Address = Address{City: \"NYC\"}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-ambiguous/models/Handler.java",
    "content": "package models;\n\npublic class Handler {\n    public void handle() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-ambiguous/models/Processor.java",
    "content": "package models;\n\npublic interface Processor {\n    void run();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-ambiguous/other/Handler.java",
    "content": "package other;\n\npublic class Handler {\n    public void process() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-ambiguous/other/Processor.java",
    "content": "package other;\n\npublic interface Processor {\n    void execute();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-ambiguous/services/UserHandler.java",
    "content": "package services;\n\nimport models.Handler;\nimport models.Processor;\n\npublic class UserHandler extends Handler implements Processor {\n    public void run() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-assignment-chain/App.java",
    "content": "import models.User;\nimport models.Repo;\n\npublic class App {\n    static User getUser() { return new User(); }\n    static Repo getRepo() { return new Repo(); }\n\n    public static void processEntities() {\n        User u = getUser();\n        var alias = u;\n        alias.save();\n\n        Repo r = getRepo();\n        var rAlias = r;\n        rAlias.save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-assignment-chain/models/Repo.java",
    "content": "package models;\n\npublic class Repo {\n    public boolean save() {\n        return false;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-assignment-chain/models/User.java",
    "content": "package models;\n\npublic class User {\n    public boolean save() {\n        return true;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-call-result-binding/App.java",
    "content": "public class App {\n    static User getUser(String name) {\n        return new User(name);\n    }\n\n    void processUser() {\n        var user = getUser(\"alice\");\n        user.save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-call-result-binding/User.java",
    "content": "public class User {\n    private String name;\n\n    public User(String name) {\n        this.name = name;\n    }\n\n    public boolean save() {\n        return true;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-calls/services/UserService.java",
    "content": "package services;\n\nimport static util.OneArg.writeAudit;\nimport static util.ZeroArg.writeAudit;\n\npublic class UserService {\n    public void processUser() {\n        writeAudit(\"hello\");\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-calls/util/OneArg.java",
    "content": "package util;\n\npublic class OneArg {\n    public static String writeAudit(String message) {\n        return message;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-calls/util/ZeroArg.java",
    "content": "package util;\n\npublic class ZeroArg {\n    public static String writeAudit() {\n        return \"zero\";\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-chain-call/App.java",
    "content": "import services.UserService;\n\npublic class App {\n    public static void processUser() {\n        UserService svc = new UserService();\n        svc.getUser().save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-chain-call/models/Repo.java",
    "content": "package models;\n\npublic class Repo {\n    public boolean save() {\n        return true;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-chain-call/models/User.java",
    "content": "package models;\n\npublic class User {\n    public boolean save() {\n        return true;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-chain-call/services/UserService.java",
    "content": "package services;\n\nimport models.User;\n\npublic class UserService {\n    public User getUser() {\n        return new User();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-constructor-calls/App.java",
    "content": "import models.User;\n\npublic class App {\n    public static void processUser(String name) {\n        User user = new User(name);\n        user.save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-constructor-calls/models/User.java",
    "content": "package models;\n\npublic class User {\n    private String name;\n\n    public User(String name) {\n        this.name = name;\n    }\n\n    public boolean save() {\n        return true;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-constructor-type-inference/App.java",
    "content": "import models.User;\nimport models.Repo;\n\npublic class App {\n    public static void processEntities() {\n        var user = new User();\n        var repo = new Repo();\n        user.save();\n        repo.save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-constructor-type-inference/models/Repo.java",
    "content": "package models;\n\npublic class Repo {\n    public boolean save() {\n        return false;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-constructor-type-inference/models/User.java",
    "content": "package models;\n\npublic class User {\n    public boolean save() {\n        return true;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-deep-field-chain/App.java",
    "content": "import models.User;\n\npublic class App {\n    public static void processUser(User user) {\n        // 2-level chain: user.address → Address, then .save() → Address#save\n        user.address.save();\n\n        // 3-level chain: user.address → Address, .city → City, .getName() → City#getName\n        user.address.city.getName();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-deep-field-chain/models/Address.java",
    "content": "package models;\n\npublic class Address {\n    public City city;\n    public String street;\n\n    public void save() {\n        // persist address\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-deep-field-chain/models/City.java",
    "content": "package models;\n\npublic class City {\n    public String zipCode;\n\n    public String getName() {\n        return \"city\";\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-deep-field-chain/models/User.java",
    "content": "package models;\n\npublic class User {\n    public String name;\n    public Address address;\n\n    public String greet() {\n        return this.name;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-enum-static-call/src/App.java",
    "content": "public class App {\n    public void process() {\n        Status s = Status.fromCode(200);\n        s.label();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-enum-static-call/src/Status.java",
    "content": "public enum Status {\n    OK,\n    ERROR;\n\n    public static Status fromCode(int code) {\n        return code == 200 ? OK : ERROR;\n    }\n\n    public String label() {\n        return this.name().toLowerCase();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-field-types/App.java",
    "content": "import models.User;\n\npublic class App {\n    public static void processUser(User user) {\n        // Field-access chain: user.address → Address, then .save() → Address#save\n        user.address.save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-field-types/models/Address.java",
    "content": "package models;\n\npublic class Address {\n    public String city;\n\n    public void save() {\n        // persist address\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-field-types/models/User.java",
    "content": "package models;\n\npublic class User {\n    public String name;\n    public Address address;\n\n    public String greet() {\n        return this.name;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-foreach/App.java",
    "content": "import models.User;\nimport models.Repo;\n\npublic class App {\n    public static void processEntities(User[] users, Repo[] repos) {\n        for (User user : users) {\n            user.save();\n        }\n        for (Repo repo : repos) {\n            repo.save();\n        }\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-foreach/models/Repo.java",
    "content": "package models;\n\npublic class Repo {\n    public boolean save() {\n        return false;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-foreach/models/User.java",
    "content": "package models;\n\npublic class User {\n    public boolean save() {\n        return true;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-foreach-call-expr/Main.java",
    "content": "import models.User;\nimport models.Repo;\n\npublic class Main {\n    void processUsers() {\n        for (User user : User.getUsers()) {\n            user.save();\n        }\n    }\n\n    void processRepos() {\n        for (Repo repo : Repo.getRepos()) {\n            repo.save();\n        }\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-foreach-call-expr/models/Repo.java",
    "content": "package models;\n\nimport java.util.List;\n\npublic class Repo {\n    private String name;\n\n    public Repo(String name) {\n        this.name = name;\n    }\n\n    public void save() {}\n\n    public static List<Repo> getRepos() {\n        return List.of(new Repo(\"main\"));\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-foreach-call-expr/models/User.java",
    "content": "package models;\n\nimport java.util.List;\n\npublic class User {\n    private String name;\n\n    public User(String name) {\n        this.name = name;\n    }\n\n    public void save() {}\n\n    public static List<User> getUsers() {\n        return List.of(new User(\"alice\"));\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-generic-parent-resolution/src/models/BaseModel.java",
    "content": "package models;\n\npublic class BaseModel<T> {\n    public boolean save() { return true; }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-generic-parent-resolution/src/models/Repo.java",
    "content": "package models;\n\npublic class Repo {\n    public boolean save() { return true; }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-generic-parent-resolution/src/models/User.java",
    "content": "package models;\n\npublic class User extends BaseModel<String> {\n    public boolean save() {\n        super.save();\n        return true;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-grandparent-resolution/src/models/A.java",
    "content": "package models;\n\npublic class A {\n    public Greeting greet() { return new Greeting(); }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-grandparent-resolution/src/models/B.java",
    "content": "package models;\n\npublic class B extends A {}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-grandparent-resolution/src/models/C.java",
    "content": "package models;\n\npublic class C extends B {}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-grandparent-resolution/src/models/Greeting.java",
    "content": "package models;\n\npublic class Greeting {\n    public void save() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-grandparent-resolution/src/services/App.java",
    "content": "package services;\n\nimport models.C;\n\npublic class App {\n    public void process() {\n        C c = new C();\n        c.greet().save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-heritage/interfaces/Serializable.java",
    "content": "package interfaces;\n\npublic interface Serializable {\n    String serialize();\n    void deserialize(String data);\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-heritage/interfaces/Validatable.java",
    "content": "package interfaces;\n\npublic interface Validatable {\n    boolean validate();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-heritage/models/BaseModel.java",
    "content": "package models;\n\npublic class BaseModel {\n    protected int id;\n\n    public int getId() {\n        return id;\n    }\n\n    public void save() {\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-heritage/models/User.java",
    "content": "package models;\n\nimport interfaces.Serializable;\nimport interfaces.Validatable;\n\npublic class User extends BaseModel implements Serializable, Validatable {\n    private String name;\n\n    public String serialize() {\n        return name;\n    }\n\n    public void deserialize(String data) {\n        this.name = data;\n    }\n\n    public boolean validate() {\n        return name != null;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-heritage/services/UserService.java",
    "content": "package services;\n\nimport models.User;\nimport interfaces.Serializable;\n\npublic class UserService {\n    public void processUser(User user) {\n        user.validate();\n        user.save();\n        String data = user.serialize();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-instanceof-pattern/models/Repo.java",
    "content": "package models;\n\npublic class Repo {\n    public void save() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-instanceof-pattern/models/User.java",
    "content": "package models;\n\npublic class User {\n    public void save() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-instanceof-pattern/services/App.java",
    "content": "package services;\n\nimport models.User;\nimport models.Repo;\n\npublic class App {\n    public void process(Object obj) {\n        if (obj instanceof User user) {\n            user.save();\n        }\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-local-shadow/src/main/java/com/example/app/Main.java",
    "content": "package com.example.app;\n\nimport com.example.utils.Logger;\n\npublic class Main {\n    // Local method shadows imported Logger.save\n    public static void save(String data) {\n        System.out.println(\"local save: \" + data);\n    }\n\n    public static void run() {\n        save(\"test\");\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-local-shadow/src/main/java/com/example/utils/Logger.java",
    "content": "package com.example.utils;\n\npublic class Logger {\n    public static void save(String data) {\n        System.out.println(\"utils save: \" + data);\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-map-keys-values/src/App.java",
    "content": "package src;\n\nimport java.util.Map;\nimport java.util.List;\n\npublic class App {\n    public void processValues(Map<String, User> data) {\n        for (var user : data.values()) {\n            user.save();\n        }\n    }\n\n    public void processList(List<User> users) {\n        for (var user : users) {\n            user.save();\n        }\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-map-keys-values/src/Repo.java",
    "content": "package src;\n\npublic class Repo {\n    private String name;\n    public Repo(String name) { this.name = name; }\n    public void save() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-map-keys-values/src/User.java",
    "content": "package src;\n\npublic class User {\n    private String name;\n    public User(String name) { this.name = name; }\n    public void save() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-member-calls/models/User.java",
    "content": "package models;\n\npublic class User {\n    public boolean save() {\n        return true;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-member-calls/services/UserService.java",
    "content": "package services;\n\nimport models.User;\n\npublic class UserService {\n    public boolean processUser() {\n        User user = new User();\n        return user.save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-method-chain-binding/App.java",
    "content": "class App {\n    static User getUser() {\n        return new User(new Address(new City(\"NYC\")));\n    }\n\n    void processChain() {\n        var user = getUser();\n        var addr = user.address;\n        var city = addr.getCity();\n        city.save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-method-chain-binding/Models.java",
    "content": "class City {\n    String name;\n    City(String name) { this.name = name; }\n    boolean save() { return true; }\n}\n\nclass Address {\n    City city;\n    Address(City city) { this.city = city; }\n    City getCity() { return city; }\n}\n\nclass User {\n    Address address;\n    User(Address address) { this.address = address; }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-mixed-chain/App.java",
    "content": "import services.UserService;\nimport models.User;\n\npublic class App {\n    public static void processWithService(UserService svc) {\n        svc.getUser().address.save();\n    }\n\n    public static void processWithUser(User user) {\n        user.getAddress().city.getName();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-mixed-chain/models/Address.java",
    "content": "package models;\n\npublic class Address {\n    public City city;\n\n    public void save() {\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-mixed-chain/models/City.java",
    "content": "package models;\n\npublic class City {\n    public String getName() {\n        return \"city\";\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-mixed-chain/models/User.java",
    "content": "package models;\n\npublic class User {\n    public Address address;\n\n    public Address getAddress() {\n        return this.address;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-mixed-chain/services/UserService.java",
    "content": "package services;\n\nimport models.User;\n\npublic class UserService {\n    public User getUser() {\n        return new User();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-named-imports/com/example/app/Main.java",
    "content": "package com.example.app;\n\nimport com.example.models.User;\n\npublic class Main {\n    public void run() {\n        User user = new User();\n        user.save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-named-imports/com/example/models/User.java",
    "content": "package com.example.models;\n\npublic class User {\n    public void save() {\n        System.out.println(\"models User save\");\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-named-imports/com/example/other/User.java",
    "content": "package com.example.other;\n\npublic class User {\n    public void save() {\n        System.out.println(\"other User save\");\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-nullable-receiver/App.java",
    "content": "import models.User;\nimport models.Repo;\n\npublic class App {\n    public static void processEntities() {\n        User user = findUser();\n        Repo repo = findRepo();\n        user.save();\n        repo.save();\n    }\n\n    private static User findUser() {\n        return new User();\n    }\n\n    private static Repo findRepo() {\n        return new Repo();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-nullable-receiver/models/Repo.java",
    "content": "package models;\n\npublic class Repo {\n    public boolean save() {\n        return false;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-nullable-receiver/models/User.java",
    "content": "package models;\n\npublic class User {\n    public boolean save() {\n        return true;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-optional-receiver/App.java",
    "content": "import models.User;\nimport models.Repo;\n\n// Tests that Optional<User> unwraps to User in TypeEnv,\n// so assignment chains from Optional-typed sources resolve correctly.\npublic class App {\n    static User findUser() { return new User(); }\n    static Repo findRepo() { return new Repo(); }\n\n    static void processEntities() {\n        // Optional<User> declared — TypeEnv stores \"User\" (not \"Optional\")\n        // The alias then propagates User through the chain\n        java.util.Optional<User> opt = java.util.Optional.of(findUser());\n        User user = opt.get();\n        user.save();\n\n        Repo repo = findRepo();\n        repo.save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-optional-receiver/models/Repo.java",
    "content": "package models;\n\npublic class Repo {\n    public void save() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-optional-receiver/models/User.java",
    "content": "package models;\n\npublic class User {\n    public void save() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-overload-param-types/models/User.java",
    "content": "package models;\n\npublic class User {\n    public String getName() {\n        return \"user\";\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-overload-param-types/models/UserService.java",
    "content": "package models;\n\npublic class UserService {\n    public User lookup(int id) {\n        return new User();\n    }\n\n    public User lookup(String name) {\n        return new User();\n    }\n\n    public void run() {\n        lookup(42);        // literal int → should disambiguate to lookup(int)\n        lookup(\"alice\");   // literal String → should disambiguate to lookup(String)\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-parent-resolution/src/interfaces/Serializable.java",
    "content": "package interfaces;\n\npublic interface Serializable {\n    String serialize();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-parent-resolution/src/models/BaseModel.java",
    "content": "package models;\n\npublic class BaseModel {\n    public boolean save() { return true; }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-parent-resolution/src/models/User.java",
    "content": "package models;\n\nimport interfaces.Serializable;\n\npublic class User extends BaseModel implements Serializable {\n    public String serialize() { return \"\"; }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-receiver-resolution/App.java",
    "content": "import models.User;\nimport models.Repo;\n\npublic class App {\n    public static void processEntities() {\n        User user = new User();\n        Repo repo = new Repo();\n        user.save();\n        repo.save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-receiver-resolution/models/Repo.java",
    "content": "package models;\n\npublic class Repo {\n    public boolean save() {\n        return false;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-receiver-resolution/models/User.java",
    "content": "package models;\n\npublic class User {\n    public boolean save() {\n        return true;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-return-type-inference/App.java",
    "content": "import services.UserService;\n\npublic class App {\n    public static void processUser() {\n        UserService svc = new UserService();\n        var user = svc.getUser(\"alice\");\n        user.save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-return-type-inference/models/User.java",
    "content": "package models;\n\npublic class User {\n    private String name;\n\n    public User(String name) {\n        this.name = name;\n    }\n\n    public boolean save() {\n        return true;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-return-type-inference/services/UserService.java",
    "content": "package services;\n\nimport models.User;\n\npublic class UserService {\n    public User getUser(String name) {\n        return new User(name);\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-self-this-resolution/src/models/Repo.java",
    "content": "package models;\n\npublic class Repo {\n    public boolean save() { return true; }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-self-this-resolution/src/models/User.java",
    "content": "package models;\n\npublic class User {\n    public boolean save() { return true; }\n    public void process() {\n        this.save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-super-resolution/src/models/BaseModel.java",
    "content": "package models;\n\npublic class BaseModel {\n    public boolean save() { return true; }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-super-resolution/src/models/Repo.java",
    "content": "package models;\n\npublic class Repo {\n    public boolean save() { return true; }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-super-resolution/src/models/User.java",
    "content": "package models;\n\npublic class User extends BaseModel {\n    public boolean save() {\n        super.save();\n        return true;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-switch-pattern/App.java",
    "content": "import models.User;\nimport models.Repo;\n\npublic class App {\n    public static void processAny(Object obj) {\n        switch (obj) {\n            case User user -> user.save();\n            case Repo repo -> repo.save();\n            default -> {}\n        }\n    }\n\n    public static void handleUser(Object obj) {\n        switch (obj) {\n            case User user -> user.save();\n            default -> {}\n        }\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-switch-pattern/models/Repo.java",
    "content": "package models;\n\npublic class Repo {\n    public void save() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-switch-pattern/models/User.java",
    "content": "package models;\n\npublic class User {\n    public void save() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-variadic-resolution/com/example/app/Main.java",
    "content": "package com.example.app;\n\nimport com.example.util.Logger;\n\npublic class Main {\n    public void run() {\n        Logger logger = new Logger();\n        logger.record(\"hello\", \"world\", \"test\");\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-variadic-resolution/com/example/util/Logger.java",
    "content": "package com.example.util;\n\npublic class Logger {\n    public void record(String... args) {\n        for (String a : args) System.out.println(a);\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-virtual-dispatch/models/App.java",
    "content": "package models;\n\n// All classes in same file so parentMap captures the extends relationship\n\nclass Animal {\n    public String speak() {\n        return \"...\";\n    }\n}\n\nclass Dog extends Animal {\n    public String speak() {\n        return \"woof\";\n    }\n\n    public String fetchBall() {\n        return \"ball\";\n    }\n}\n\npublic class App {\n    public void run() {\n        // Virtual dispatch: declared as Animal, constructed as Dog\n        Animal animal = new Dog();\n        animal.fetchBall();  // Only Dog has fetchBall — proves virtual dispatch override\n\n        // Direct type: no override needed\n        Dog dog = new Dog();\n        dog.fetchBall();     // Direct resolution to Dog#fetchBall\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-write-access/App.java",
    "content": "import models.User;\nimport models.Address;\n\npublic class App {\n    public static void updateUser(User user) {\n        user.name = \"Alice\";\n        user.address = new Address();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-write-access/models/Address.java",
    "content": "package models;\n\npublic class Address {\n    public String city;\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/java-write-access/models/User.java",
    "content": "package models;\n\npublic class User {\n    public String name;\n    public Address address;\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/javascript-chain-call/src/app.js",
    "content": "const { UserService } = require('./service');\n\nfunction processUser() {\n  const svc = new UserService();\n  svc.getUser().save();\n}\n\nmodule.exports = { processUser };\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/javascript-chain-call/src/repo.js",
    "content": "class Repo {\n  save() {\n    return true;\n  }\n}\n\nmodule.exports = { Repo };\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/javascript-chain-call/src/service.js",
    "content": "const { User } = require('./user');\n\nclass UserService {\n  /**\n   * @returns {User}\n   */\n  getUser() {\n    return new User();\n  }\n}\n\nmodule.exports = { UserService };\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/javascript-chain-call/src/user.js",
    "content": "class User {\n  save() {\n    return true;\n  }\n}\n\nmodule.exports = { User };\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/javascript-constructor-type-inference/src/app.js",
    "content": "const { User } = require('./user');\nconst { Repo } = require('./repo');\n\nfunction processEntities() {\n  const user = new User();\n  const repo = new Repo();\n  user.save();\n  repo.save();\n}\n\nmodule.exports = { processEntities };\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/javascript-constructor-type-inference/src/repo.js",
    "content": "class Repo {\n  save() {\n    return false;\n  }\n}\n\nmodule.exports = { Repo };\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/javascript-constructor-type-inference/src/user.js",
    "content": "class User {\n  save() {\n    return true;\n  }\n}\n\nmodule.exports = { User };\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/javascript-parent-resolution/src/models/Base.js",
    "content": "class BaseModel {\n  save() { return true; }\n}\nmodule.exports = { BaseModel };\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/javascript-parent-resolution/src/models/User.js",
    "content": "const { BaseModel } = require('./Base');\n\nclass User extends BaseModel {\n  serialize() { return ''; }\n}\nmodule.exports = { User };\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/javascript-self-this-resolution/src/models/Repo.js",
    "content": "class Repo {\n  save() { return true; }\n}\nmodule.exports = { Repo };\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/javascript-self-this-resolution/src/models/User.js",
    "content": "class User {\n  save() { return true; }\n  process() {\n    this.save();\n  }\n}\nmodule.exports = { User };\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/javascript-super-resolution/src/models/Base.js",
    "content": "class BaseModel {\n  save() { return true; }\n}\nmodule.exports = { BaseModel };\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/javascript-super-resolution/src/models/Repo.js",
    "content": "class Repo {\n  save() { return true; }\n}\nmodule.exports = { Repo };\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/javascript-super-resolution/src/models/User.js",
    "content": "const { BaseModel } = require('./Base');\n\nclass User extends BaseModel {\n  save() {\n    super.save();\n    return true;\n  }\n}\nmodule.exports = { User };\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/js-call-result-binding/app.js",
    "content": "const { getUser } = require('./service');\n\nfunction processUser() {\n  const user = getUser('alice');\n  user.save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/js-call-result-binding/models.js",
    "content": "class User {\n  constructor(name) {\n    this.name = name;\n  }\n\n  save() {\n    return true;\n  }\n}\n\nmodule.exports = { User };\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/js-call-result-binding/service.js",
    "content": "const { User } = require('./models');\n\n/**\n * @param {string} name\n * @returns {User}\n */\nfunction getUser(name) {\n  return new User(name);\n}\n\nmodule.exports = { getUser };\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/js-field-types/models.js",
    "content": "class Address {\n  city = '';\n\n  save() {\n    // persist address\n  }\n}\n\nclass User {\n  name = '';\n  address = new Address();\n\n  greet() {\n    return this.name;\n  }\n}\n\nclass Config {\n  static DEFAULT = new Config();\n\n  validate() {\n    return true;\n  }\n}\n\nmodule.exports = { Address, User, Config };\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/js-field-types/service.js",
    "content": "const { User, Config } = require('./models');\n\nfunction processUser(user) {\n  user.address.save();\n}\n\nfunction validateConfig() {\n  Config.DEFAULT.validate();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/js-fixpoint-for-loop/app.js",
    "content": "const { getUsers } = require('./models');\n\nfunction process() {\n  const users = getUsers();\n  for (const u of users) {\n    u.save();\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/js-fixpoint-for-loop/models.js",
    "content": "/**\n * @class\n */\nclass User {\n  save() {}\n}\n\n/**\n * @returns {User[]}\n */\nfunction getUsers() { return []; }\n\nmodule.exports = { User, getUsers };\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/js-jsdoc-async-return-type/app.js",
    "content": "const { User } = require('./user');\nconst { Repo } = require('./repo');\n\n/**\n * @returns {Promise<User>}\n */\nasync function fetchUser(name) {\n  return new User(name);\n}\n\n/**\n * @returns {Promise<Repo>}\n */\nasync function fetchRepo(path) {\n  return new Repo(path);\n}\n\nasync function processUser() {\n  const user = await fetchUser('alice');\n  user.save();\n}\n\nasync function processRepo() {\n  const repo = await fetchRepo('/data');\n  repo.save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/js-jsdoc-async-return-type/repo.js",
    "content": "class Repo {\n  constructor(path) {\n    this.path = path;\n  }\n\n  save() {\n    return true;\n  }\n}\n\nmodule.exports = { Repo };\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/js-jsdoc-async-return-type/user.js",
    "content": "class User {\n  constructor(name) {\n    this.name = name;\n  }\n\n  save() {\n    return true;\n  }\n}\n\nmodule.exports = { User };\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/js-jsdoc-qualified-return-type/app.js",
    "content": "const { User } = require('./user');\n\n/**\n * @returns {Promise<models.User>}\n */\nasync function fetchUser(name) {\n  return new User(name);\n}\n\nasync function processUser() {\n  const user = await fetchUser('alice');\n  user.save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/js-jsdoc-qualified-return-type/user.js",
    "content": "class User {\n  constructor(name) {\n    this.name = name;\n  }\n\n  save() {\n    return true;\n  }\n}\n\nmodule.exports = { User };\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/js-jsdoc-return-type/app.js",
    "content": "const { User } = require('./user');\nconst { Repo } = require('./repo');\n\n/**\n * @returns {User}\n */\nfunction getUser(name) {\n  return new User(name);\n}\n\n/**\n * @returns {Repo}\n */\nfunction getRepo(path) {\n  return new Repo(path);\n}\n\nfunction processUser() {\n  const user = getUser('alice');\n  user.save();\n}\n\nfunction processRepo() {\n  const repo = getRepo('/data');\n  repo.save();\n}\n\n/**\n * @param {User} user the user to handle\n */\nfunction handleUser(user) {\n  user.save();\n}\n\n/**\n * @param {Repo} repo the repo to handle\n */\nfunction handleRepo(repo) {\n  repo.save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/js-jsdoc-return-type/repo.js",
    "content": "class Repo {\n  constructor(path) {\n    this.path = path;\n  }\n\n  save() {\n    return true;\n  }\n}\n\nmodule.exports = { Repo };\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/js-jsdoc-return-type/user.js",
    "content": "class User {\n  constructor(name) {\n    this.name = name;\n  }\n\n  save() {\n    return true;\n  }\n}\n\nmodule.exports = { User };\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/js-nullable-receiver/src/app.js",
    "content": "const { User } = require('./user');\nconst { Repo } = require('./repo');\n\n/**\n * @param {User | null} user\n * @param {Repo | null} repo\n */\nfunction processEntities(user, repo) {\n  if (user) user.save();\n  if (repo) repo.save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/js-nullable-receiver/src/repo.js",
    "content": "class Repo {\n  save() { return true; }\n}\nmodule.exports = { Repo };\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/js-nullable-receiver/src/user.js",
    "content": "class User {\n  save() { return true; }\n}\nmodule.exports = { User };\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/js-object-destructuring/app.js",
    "content": "const { getUser } = require('./service');\n\nfunction processDestructured() {\n  const user = getUser();\n  const { address } = user;\n  address.save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/js-object-destructuring/models.js",
    "content": "/**\n * @class\n */\nclass Address {\n  /** @returns {boolean} */\n  save() { return true; }\n}\n\n/**\n * @class\n */\nclass User {\n  constructor() {\n    /** @type {Address} */\n    this.address = new Address();\n  }\n}\n\nmodule.exports = { Address, User };\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/js-object-destructuring/service.js",
    "content": "const { User } = require('./models');\n\n/**\n * @returns {User}\n */\nfunction getUser() { return new User(); }\n\nmodule.exports = { getUser };\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/js-write-access/models.js",
    "content": "class Address {\n    city = '';\n}\n\nclass User {\n    name = '';\n    address = null;\n}\n\nmodule.exports = { Address, User };\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/js-write-access/service.js",
    "content": "const { User, Address } = require('./models');\n\nfunction updateUser() {\n    const user = new User();\n    user.name = \"Alice\";\n    user.address = new Address();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-alias-imports/app/App.kt",
    "content": "package app\n\nimport models.User as U\nimport models.Repo as R\n\nfun main() {\n    val u = U(\"alice\")\n    val r = R(\"https://example.com\")\n    u.save()\n    r.persist()\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-alias-imports/models/Models.kt",
    "content": "package models\n\nclass User(val name: String) {\n    fun save(): Boolean = true\n}\n\nclass Repo(val url: String) {\n    fun persist(): Boolean = true\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-ambiguous/models/Handler.kt",
    "content": "package models\n\nopen class Handler {\n    open fun handle() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-ambiguous/models/Runnable.kt",
    "content": "package models\n\ninterface Runnable {\n    fun run()\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-ambiguous/other/Handler.kt",
    "content": "package other\n\nopen class Handler {\n    open fun process() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-ambiguous/other/Runnable.kt",
    "content": "package other\n\ninterface Runnable {\n    fun execute()\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-ambiguous/services/UserHandler.kt",
    "content": "package services\n\nimport models.Handler\nimport models.Runnable\n\nclass UserHandler : Handler(), Runnable {\n    override fun run() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-assignment-chain/App.kt",
    "content": "fun getUser(): User = User()\nfun getRepo(): Repo = Repo()\n\nfun processEntities() {\n    val u: User = getUser()\n    val alias = u\n    alias.save()\n\n    val r: Repo = getRepo()\n    val rAlias = r\n    rAlias.save()\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-assignment-chain/models/Repo.kt",
    "content": "class Repo {\n    fun save() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-assignment-chain/models/User.kt",
    "content": "class User {\n    fun save() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-call-result-binding/User.kt",
    "content": "class User(val name: String) {\n    fun save(): Boolean {\n        return true\n    }\n}\n\nfun getUser(name: String): User {\n    return User(name)\n}\n\nfun processUser() {\n    val user = getUser(\"alice\")\n    user.save()\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-calls/services/UserService.kt",
    "content": "package services\n\nimport util.OneArg.writeAudit\nimport util.ZeroArg.writeAudit\n\nclass UserService {\n    fun processUser() {\n        writeAudit(\"hello\")\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-calls/util/OneArg.kt",
    "content": "package util\n\nobject OneArg {\n    fun writeAudit(message: String): String {\n        return message\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-calls/util/ZeroArg.kt",
    "content": "package util\n\nobject ZeroArg {\n    fun writeAudit(): String {\n        return \"zero\"\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-chain-call/src/App.kt",
    "content": "import models.User\nimport services.UserService\n\nfun processUser() {\n    val svc = UserService()\n    svc.getUser().save()\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-chain-call/src/Repo.kt",
    "content": "package models\n\nclass Repo {\n    fun save() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-chain-call/src/User.kt",
    "content": "package models\n\nclass User {\n    fun save() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-chain-call/src/UserService.kt",
    "content": "package services\n\nimport models.User\n\nclass UserService {\n    fun getUser(): User {\n        return User()\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-class-method-chain/App.kt",
    "content": "// Assignment chain with typed parameter propagation.\n// Tests that extractKotlinPendingAssignment handles val alias = u\n// where u comes from an explicit typed declaration.\nfun processUser() {\n    val u: User = User()\n    val alias = u\n    alias.save()\n}\n\nfun processRepo() {\n    val r: Repo = Repo()\n    val alias = r\n    alias.save()\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-class-method-chain/models/Repo.kt",
    "content": "class Repo {\n    fun save() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-class-method-chain/models/User.kt",
    "content": "class User {\n    fun save() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-constructor-calls/app/App.kt",
    "content": "package app\n\nimport models.User\n\nfun main() {\n    val user = User(\"alice\")\n    user.save()\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-constructor-calls/models/User.kt",
    "content": "package models\n\nclass User(val name: String) {\n    fun save(): Boolean = true\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-constructor-type-inference/models/Repo.kt",
    "content": "package models\n\nclass Repo(val dbName: String) {\n    fun save(): Boolean = false\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-constructor-type-inference/models/User.kt",
    "content": "package models\n\nclass User(val name: String) {\n    fun save(): Boolean = true\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-constructor-type-inference/services/App.kt",
    "content": "package services\n\nimport models.User\nimport models.Repo\n\nfun processEntities() {\n    val user = User(\"alice\")\n    val repo = Repo(\"maindb\")\n    user.save()\n    repo.save()\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-data-class-fields/Models.kt",
    "content": "class Address {\n    var city: String = \"\"\n\n    fun save() {\n        // persist address\n    }\n}\n\ndata class User(\n    val name: String,\n    val address: Address,\n    val age: Int\n)\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-data-class-fields/Service.kt",
    "content": "fun processUser(user: User) {\n    // Field-access chain: user.address → Address, then .save() → Address#save\n    user.address.save()\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-deep-field-chain/Models.kt",
    "content": "class City {\n    var zipCode: String = \"\"\n\n    fun getName(): String {\n        return \"city\"\n    }\n}\n\nclass Address {\n    var city: City = City()\n    var street: String = \"\"\n\n    fun save() {\n        // persist address\n    }\n}\n\nclass User {\n    var name: String = \"\"\n    var address: Address = Address()\n\n    fun greet(): String {\n        return name\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-deep-field-chain/Service.kt",
    "content": "fun processUser(user: User) {\n    // 2-level chain: user.address → Address, then .save() → Address#save\n    user.address.save()\n\n    // 3-level chain: user.address → Address, .city → City, .getName() → City#getName\n    user.address.city.getName()\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-default-params/App.kt",
    "content": "fun greet(name: String, greeting: String = \"Hello\"): String = \"$greeting, $name!\"\n\nfun process() {\n    greet(\"Alice\")\n    greet(\"Bob\", \"Hi\")\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-field-types/Models.kt",
    "content": "class Address {\n    var city: String = \"\"\n\n    fun save() {\n        // persist address\n    }\n}\n\nclass User {\n    var name: String = \"\"\n    var address: Address = Address()\n\n    fun greet(): String {\n        return name\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-field-types/Service.kt",
    "content": "fun processUser(user: User) {\n    // Field-access chain: user.address → Address, then .save() → Address#save\n    user.address.save()\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-foreach/App.kt",
    "content": "package app\n\nimport models.User\nimport models.Repo\n\nfun processUsers(users: List<User>) {\n    for (user: User in users) {\n        user.save()\n    }\n}\n\nfun processRepos(repos: List<Repo>) {\n    for (repo: Repo in repos) {\n        repo.save()\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-foreach/models/Repo.kt",
    "content": "package models\n\nclass Repo {\n    fun save() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-foreach/models/User.kt",
    "content": "package models\n\nclass User {\n    fun save() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-foreach-call-expr/Main.kt",
    "content": "import models.getUsers\nimport models.getRepos\n\nfun processUsers() {\n    for (user in getUsers()) {\n        user.save()\n    }\n}\n\nfun processRepos() {\n    for (repo in getRepos()) {\n        repo.save()\n    }\n}\n\nfun main() {}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-foreach-call-expr/models/Repo.kt",
    "content": "package models\n\nclass Repo(val name: String) {\n    fun save() {}\n}\n\nfun getRepos(): List<Repo> {\n    return listOf(Repo(\"main\"))\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-foreach-call-expr/models/User.kt",
    "content": "package models\n\nclass User(val name: String) {\n    fun save() {}\n}\n\nfun getUsers(): List<User> {\n    return listOf(User(\"alice\"))\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-generic-parent-resolution/models/BaseModel.kt",
    "content": "package models\n\nopen class BaseModel<T> {\n    open fun save(): Boolean = true\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-generic-parent-resolution/models/Repo.kt",
    "content": "package models\n\nclass Repo {\n    fun save(): Boolean = true\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-generic-parent-resolution/models/User.kt",
    "content": "package models\n\nclass User : BaseModel<String>() {\n    override fun save(): Boolean {\n        super.save()\n        return true\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-grandparent-resolution/models/A.kt",
    "content": "package models\n\nopen class A {\n    fun greet(): Greeting = Greeting()\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-grandparent-resolution/models/B.kt",
    "content": "package models\n\nopen class B : A()\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-grandparent-resolution/models/C.kt",
    "content": "package models\n\nclass C : B()\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-grandparent-resolution/models/Greeting.kt",
    "content": "package models\n\nclass Greeting {\n    fun save() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-grandparent-resolution/services/App.kt",
    "content": "package services\n\nimport models.C\n\nfun process() {\n    val c = C()\n    c.greet().save()\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-heritage/interfaces/Serializable.kt",
    "content": "package interfaces\n\ninterface Serializable {\n    fun serialize(): String\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-heritage/interfaces/Validatable.kt",
    "content": "package interfaces\n\ninterface Validatable {\n    fun validate(): Boolean\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-heritage/models/BaseModel.kt",
    "content": "package models\n\nabstract class BaseModel {\n    fun save() {\n        // persist to storage\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-heritage/models/User.kt",
    "content": "package models\n\nimport interfaces.Serializable\nimport interfaces.Validatable\n\ndata class User(val name: String) : BaseModel(), Serializable, Validatable {\n    override fun serialize(): String = name\n\n    override fun validate(): Boolean = name.isNotEmpty()\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-heritage/services/UserService.kt",
    "content": "package services\n\nimport models.User\nimport interfaces.Serializable\n\nclass UserService {\n    fun processUser(user: User) {\n        user.validate()\n        user.save()\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-local-shadow/src/main/kotlin/app/Main.kt",
    "content": "package app\n\nimport utils.save\n\n// Local function shadows imported save\nfun save(data: String) {\n    println(\"local save: $data\")\n}\n\nfun run() {\n    save(\"test\")\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-local-shadow/src/main/kotlin/utils/Logger.kt",
    "content": "package utils\n\nfun save(data: String) {\n    println(\"utils save: $data\")\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-map-keys-values/src/App.kt",
    "content": "fun processValues(data: HashMap<String, User>) {\n    for (user in data.values) {\n        user.save()\n    }\n}\n\nfun processKeys(data: HashMap<User, Repo>) {\n    for (user in data.keys) {\n        user.save()\n    }\n}\n\nfun processMutableMapValues(data: MutableMap<String, Repo>) {\n    for (repo in data.values) {\n        repo.save()\n    }\n}\n\nfun processList(users: List<User>) {\n    for (user in users) {\n        user.save()\n    }\n}\n\nfun processSet(repos: Set<Repo>) {\n    for (repo in repos) {\n        repo.save()\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-map-keys-values/src/Repo.kt",
    "content": "class Repo(val name: String) {\n    fun save() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-map-keys-values/src/User.kt",
    "content": "class User(val name: String) {\n    fun save() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-member-calls/models/User.kt",
    "content": "package models\n\nclass User {\n    fun save(): Boolean {\n        return true\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-member-calls/services/UserService.kt",
    "content": "package services\n\nimport models.User\n\nclass UserService {\n    fun processUser(): Boolean {\n        val user = User()\n        return user.save()\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-method-chain-binding/Models.kt",
    "content": "class City(val name: String) {\n    fun save(): Boolean = true\n}\n\nclass Address(val city: City) {\n    fun getCity(): City = city\n}\n\nclass User(val address: Address)\n\nfun getUser(): User = User(Address(City(\"NYC\")))\n\nfun processChain() {\n    val user = getUser()\n    val addr = user.address\n    val city = addr.getCity()\n    city.save()\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-null-check-narrowing/models/Repo.kt",
    "content": "package models\n\nclass Repo {\n    fun save() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-null-check-narrowing/models/User.kt",
    "content": "package models\n\nclass User {\n    fun save() {}\n}\n\nfun findUser(): User? {\n    return null\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-null-check-narrowing/services/App.kt",
    "content": "package services\n\nimport models.User\nimport models.findUser\n\nfun processNullable(x: User?) {\n    if (x != null) {\n        x.save()\n    }\n}\n\nfun processLocalNullable() {\n    val x: User? = findUser()\n    if (x != null) {\n        x.save()\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-nullable-receiver/models/Repo.kt",
    "content": "package models\n\nclass Repo(val dbName: String) {\n    fun save(): Boolean = false\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-nullable-receiver/models/User.kt",
    "content": "package models\n\nclass User(val name: String) {\n    fun save(): Boolean = true\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-nullable-receiver/services/App.kt",
    "content": "package services\n\nimport models.User\nimport models.Repo\n\nfun processEntities() {\n    val user: User? = User(\"alice\")\n    val repo: Repo? = Repo(\"maindb\")\n\n    // Safe calls on nullable receivers — should disambiguate via unwrapped type\n    user?.save()\n    repo?.save()\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-overload-param-types/services/UserService.kt",
    "content": "package services\n\nclass User\n\nclass UserService {\n    fun lookup(id: Int): User? {\n        return null\n    }\n\n    fun lookup(name: String): User? {\n        return null\n    }\n\n    fun run() {\n        lookup(42)        // literal Int → should disambiguate to lookup(Int)\n        lookup(\"alice\")   // literal String → should disambiguate to lookup(String)\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-parent-resolution/models/BaseModel.kt",
    "content": "package models\n\nopen class BaseModel {\n    fun save(): Boolean = true\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-parent-resolution/models/Serializable.kt",
    "content": "package models\n\ninterface Serializable {\n    fun serialize(): String\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-parent-resolution/models/User.kt",
    "content": "package models\n\nclass User : BaseModel(), Serializable {\n    override fun serialize(): String = \"\"\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-receiver-resolution/models/Repo.kt",
    "content": "package models\n\nclass Repo {\n    fun save(): Boolean {\n        return false\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-receiver-resolution/models/User.kt",
    "content": "package models\n\nclass User {\n    fun save(): Boolean {\n        return true\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-receiver-resolution/services/App.kt",
    "content": "package services\n\nimport models.User\nimport models.Repo\n\nclass AppService {\n    fun processEntities() {\n        val user: User = User()\n        val repo: Repo = Repo()\n        user.save()\n        repo.save()\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-return-type/models/Repo.kt",
    "content": "package models\n\nclass Repo(val name: String) {\n    fun save() {}\n}\n\nfun getRepo(name: String): Repo {\n    return Repo(name)\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-return-type/models/User.kt",
    "content": "package models\n\nclass User(val name: String) {\n    fun save() {}\n}\n\nfun getUser(name: String): User {\n    return User(name)\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-return-type/services/App.kt",
    "content": "package services\n\nimport models.getUser\nimport models.getRepo\n\nfun processUser() {\n    val user = getUser(\"alice\")\n    user.save()\n}\n\nfun processRepo() {\n    val repo = getRepo(\"main\")\n    repo.save()\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-self-this-resolution/models/AppConfig.kt",
    "content": "package models\n\nobject AppConfig {\n    fun init(): Boolean = true\n    fun setup() {\n        this.init()\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-self-this-resolution/models/Repo.kt",
    "content": "package models\n\nclass Repo {\n    fun save(): Boolean = true\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-self-this-resolution/models/User.kt",
    "content": "package models\n\nclass User {\n    fun save(): Boolean = true\n    fun process() {\n        this.save()\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-super-resolution/models/BaseModel.kt",
    "content": "package models\n\nopen class BaseModel {\n    open fun save(): Boolean = true\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-super-resolution/models/Repo.kt",
    "content": "package models\n\nclass Repo {\n    fun save(): Boolean = true\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-super-resolution/models/User.kt",
    "content": "package models\n\nclass User : BaseModel() {\n    override fun save(): Boolean {\n        super.save()\n        return true\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-var-foreach/app.kt",
    "content": "package app\n\nimport models.User\nimport models.Repo\n\nfun processUsers(users: List<User>) {\n    for (user in users) {\n        user.save()\n    }\n}\n\nfun processRepos(repos: List<Repo>) {\n    for (repo in repos) {\n        repo.save()\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-var-foreach/models/Repo.kt",
    "content": "package models\n\nclass Repo {\n    fun save() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-var-foreach/models/User.kt",
    "content": "package models\n\nclass User {\n    fun save() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-variadic-resolution/app/App.kt",
    "content": "package app\n\nimport util.logEntry\n\nfun main() {\n    logEntry(\"hello\", \"world\", \"test\")\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-variadic-resolution/util/Logger.kt",
    "content": "package util\n\nfun logEntry(vararg messages: String) {\n    messages.forEach { println(it) }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-virtual-dispatch/models/Animal.kt",
    "content": "package models\n\nopen class Animal {\n    open fun speak(): String = \"...\"\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-virtual-dispatch/models/Dog.kt",
    "content": "package models\n\nclass Dog : Animal() {\n    override fun speak(): String = \"woof\"\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-virtual-dispatch/services/App.kt",
    "content": "package services\n\nimport models.Animal\nimport models.Dog\n\nfun process() {\n    val animal: Animal = Dog()\n    animal.speak()\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-when-complex/App.kt",
    "content": "import models.User\nimport models.Repo\nimport models.Admin\n\n// Three-arm when: each arm should resolve obj to its narrowed type\nfun processThreeArms(obj: Any) {\n    when (obj) {\n        is User -> obj.save()\n        is Repo -> obj.save()\n        is Admin -> obj.save()\n    }\n}\n\n// Multiple method calls within a single when arm\nfun processMultiCall(obj: Any) {\n    when (obj) {\n        is User -> {\n            obj.validate()\n            obj.save()\n        }\n        is Repo -> {\n            obj.validate()\n            obj.save()\n        }\n    }\n}\n\n// when with else branch — else should NOT narrow the type\nfun processWithElse(obj: Any) {\n    when (obj) {\n        is User -> obj.save()\n        else -> println(obj)\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-when-complex/models/Admin.kt",
    "content": "package models\n\nclass Admin {\n    fun save() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-when-complex/models/Repo.kt",
    "content": "package models\n\nclass Repo {\n    fun save() {}\n    fun validate() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-when-complex/models/User.kt",
    "content": "package models\n\nclass User {\n    fun save() {}\n    fun validate() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-when-pattern/App.kt",
    "content": "import models.User\nimport models.Repo\n\nfun processAny(obj: Any) {\n    when (obj) {\n        is User -> obj.save()\n        is Repo -> obj.save()\n    }\n}\n\nfun handleUser(obj: Any) {\n    when (obj) {\n        is User -> obj.save()\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-when-pattern/models/Repo.kt",
    "content": "package models\n\nclass Repo {\n    fun save() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-when-pattern/models/User.kt",
    "content": "package models\n\nclass User {\n    fun save() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-write-access/Models.kt",
    "content": "class Address {\n    var city: String = \"\"\n}\n\nclass User {\n    var name: String = \"\"\n    var address: Address = Address()\n    var score: Int = 0\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/kotlin-write-access/Service.kt",
    "content": "fun updateUser(user: User) {\n    user.name = \"Alice\"\n    user.address = Address()\n    // Compound assignment — tree-sitter-kotlin uses `assignment` node for both\n    user.score += 10\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-alias-imports/app/Models/Repo.php",
    "content": "<?php\nnamespace App\\Models;\n\nclass Repo {\n    public string $url;\n\n    public function __construct(string $url) {\n        $this->url = $url;\n    }\n\n    public function persist(): bool {\n        return true;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-alias-imports/app/Models/User.php",
    "content": "<?php\nnamespace App\\Models;\n\nclass User {\n    public string $name;\n\n    public function __construct(string $name) {\n        $this->name = $name;\n    }\n\n    public function save(): bool {\n        return true;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-alias-imports/app/Services/Main.php",
    "content": "<?php\nnamespace App\\Services;\n\nuse App\\Models\\User as U;\nuse App\\Models\\Repo as R;\n\nclass Main {\n    public function run(): void {\n        $u = new U(\"alice\");\n        $r = new R(\"https://example.com\");\n        $u->save();\n        $r->persist();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-alias-imports/composer.json",
    "content": "{\n  \"autoload\": {\n    \"psr-4\": {\n      \"App\\\\\": \"app/\"\n    }\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-ambiguous/app/Models/Dispatchable.php",
    "content": "<?php\n\nnamespace App\\Models;\n\ninterface Dispatchable\n{\n    public function dispatch(): void;\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-ambiguous/app/Models/Handler.php",
    "content": "<?php\n\nnamespace App\\Models;\n\nclass Handler\n{\n    public function handle(): void {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-ambiguous/app/Other/Dispatchable.php",
    "content": "<?php\n\nnamespace App\\Other;\n\ninterface Dispatchable\n{\n    public function queue(): void;\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-ambiguous/app/Other/Handler.php",
    "content": "<?php\n\nnamespace App\\Other;\n\nclass Handler\n{\n    public function process(): void {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-ambiguous/app/Services/UserHandler.php",
    "content": "<?php\n\nnamespace App\\Services;\n\nuse App\\Models\\Handler;\nuse App\\Models\\Dispatchable;\n\nclass UserHandler extends Handler implements Dispatchable\n{\n    public function dispatch(): void {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-ambiguous/composer.json",
    "content": "{\n  \"autoload\": {\n    \"psr-4\": {\n      \"App\\\\\": \"app/\"\n    }\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-app/app/Contracts/Loggable.php",
    "content": "<?php\n\nnamespace App\\Contracts;\n\ninterface Loggable\n{\n    public function log(string $message): void;\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-app/app/Contracts/Repository.php",
    "content": "<?php\n\nnamespace App\\Contracts;\n\ninterface Repository\n{\n    public function find(int $id): mixed;\n    public function save(mixed $entity): void;\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-app/app/Enums/UserRole.php",
    "content": "<?php\n\nnamespace App\\Enums;\n\nenum UserRole: string\n{\n    case Admin = 'admin';\n    case Editor = 'editor';\n    case Viewer = 'viewer';\n\n    public function label(): string\n    {\n        return match($this) {\n            self::Admin => 'Administrator',\n            self::Editor => 'Editor',\n            self::Viewer => 'Viewer',\n        };\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-app/app/Models/BaseModel.php",
    "content": "<?php\n\nnamespace App\\Models;\n\nuse App\\Contracts\\Loggable;\nuse App\\Traits\\HasTimestamps;\n\nabstract class BaseModel implements Loggable\n{\n    use HasTimestamps;\n\n    protected int $id;\n\n    public function getId(): int\n    {\n        return $this->id;\n    }\n\n    public function log(string $message): void\n    {\n        error_log($message);\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-app/app/Models/User.php",
    "content": "<?php\n\nnamespace App\\Models;\n\nuse App\\Traits\\SoftDeletes;\n\nclass User extends BaseModel\n{\n    use SoftDeletes;\n\n    private string $name;\n    private string $email;\n\n    public function __construct(string $name, string $email)\n    {\n        $this->name = $name;\n        $this->email = $email;\n    }\n\n    public function getName(): string\n    {\n        return $this->name;\n    }\n\n    public function getEmail(): string\n    {\n        return $this->email;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-app/app/Services/UserService.php",
    "content": "<?php\n\nnamespace App\\Services;\n\nuse App\\Contracts\\Repository;\nuse App\\Models\\User;\nuse App\\Enums\\UserRole;\n\nclass UserService implements Repository\n{\n    private array $users = [];\n\n    public function find(int $id): ?User\n    {\n        return $this->users[$id] ?? null;\n    }\n\n    public function save(mixed $entity): void\n    {\n        $this->users[$entity->getId()] = $entity;\n    }\n\n    public function createUser(string $name, string $email): User\n    {\n        $user = new User($name, $email);\n        $this->save($user);\n        $user->log('User created: ' . $name);\n        $user?->touch();\n        $defaultRole = UserRole::Viewer;\n        $label = $defaultRole->label();\n        return $user;\n    }\n\n    public static function instance(): self\n    {\n        return new self();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-app/app/Traits/HasTimestamps.php",
    "content": "<?php\n\nnamespace App\\Traits;\n\ntrait HasTimestamps\n{\n    protected string $status = 'active';\n\n    public function touch(): void\n    {\n        $this->updatedAt = new \\DateTimeImmutable();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-app/app/Traits/SoftDeletes.php",
    "content": "<?php\n\nnamespace App\\Traits;\n\ntrait SoftDeletes\n{\n    protected string $status = 'active';\n\n    public function softDelete(): void\n    {\n        $this->deletedAt = new \\DateTimeImmutable();\n    }\n\n    public function restore(): void\n    {\n        $this->deletedAt = null;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-app/composer.json",
    "content": "{\n  \"autoload\": {\n    \"psr-4\": {\n      \"App\\\\\": \"app/\"\n    }\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-assignment-chain/app/Models/Repo.php",
    "content": "<?php\nnamespace App\\Models;\n\nclass Repo {\n    public function save(): bool {\n        return true;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-assignment-chain/app/Models/User.php",
    "content": "<?php\nnamespace App\\Models;\n\nclass User {\n    public function save(): bool {\n        return true;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-assignment-chain/app/Services/AppService.php",
    "content": "<?php\nnamespace App\\Services;\n\nuse App\\Models\\User;\nuse App\\Models\\Repo;\n\nclass AppService {\n    public function process(User $user, Repo $repo): void {\n        $alias = $user;\n        $alias->save();\n\n        $rAlias = $repo;\n        $rAlias->save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-assignment-chain/composer.json",
    "content": "{\n    \"autoload\": {\n        \"psr-4\": {\n            \"App\\\\\": \"app/\"\n        }\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-call-result-binding/App.php",
    "content": "<?php\n\nclass User {\n    public string $name;\n\n    public function __construct(string $name) {\n        $this->name = $name;\n    }\n\n    public function save(): bool {\n        return true;\n    }\n}\n\nfunction getUser(string $name): User {\n    return new User($name);\n}\n\nfunction processUser(): void {\n    $user = getUser(\"alice\");\n    $user->save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-calls/app/Services/UserService.php",
    "content": "<?php\n\nnamespace App\\Services;\n\nuse function App\\Utils\\OneArg\\log;\nuse function App\\Utils\\ZeroArg\\log as zero_log;\n\nfunction create_user(): string\n{\n    return write_audit('hello');\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-calls/app/Utils/OneArg/log.php",
    "content": "<?php\n\nnamespace App\\Utils\\OneArg;\n\nfunction write_audit(string $message): string\n{\n    return $message;\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-calls/app/Utils/ZeroArg/log.php",
    "content": "<?php\n\nnamespace App\\Utils\\ZeroArg;\n\nfunction write_audit(): string\n{\n    return 'zero';\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-calls/composer.json",
    "content": "{\n  \"autoload\": {\n    \"psr-4\": {\n      \"App\\\\\": \"app/\"\n    }\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-constructor-calls/Models/User.php",
    "content": "<?php\n\nnamespace Models;\n\nclass User {\n    private string $name;\n\n    public function __construct(string $name) {\n        $this->name = $name;\n    }\n\n    public function save(): bool {\n        return true;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-constructor-calls/app.php",
    "content": "<?php\n\nuse Models\\User;\n\nfunction processUser(string $name): void {\n    $user = new User($name);\n    $user->save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-constructor-promotion-fields/Models.php",
    "content": "<?php\n\nclass Address {\n    public string $city;\n\n    public function save(): void {\n        // persist address\n    }\n}\n\nclass User {\n    public function __construct(\n        public string $name,\n        public Address $address,\n    ) {}\n\n    public function greet(): string {\n        return $this->name;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-constructor-promotion-fields/Service.php",
    "content": "<?php\n\nclass Service {\n    public function processUser(User $user): void {\n        // Field-access chain: $user->address → Address, then ->save() → Address#save\n        $user->address->save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-constructor-type-inference/app/Models/Repo.php",
    "content": "<?php\n\nnamespace App\\Models;\n\nclass Repo\n{\n    public function save(): bool\n    {\n        return false;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-constructor-type-inference/app/Models/User.php",
    "content": "<?php\n\nnamespace App\\Models;\n\nclass User\n{\n    public function save(): bool\n    {\n        return true;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-constructor-type-inference/app/Services/AppService.php",
    "content": "<?php\n\nnamespace App\\Services;\n\nuse App\\Models\\User;\nuse App\\Models\\Repo;\n\nclass AppService\n{\n    public function processEntities(): void\n    {\n        $user = new User();\n        $repo = new Repo();\n        $user->save();\n        $repo->save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-constructor-type-inference/composer.json",
    "content": "{\n  \"autoload\": {\n    \"psr-4\": {\n      \"App\\\\\": \"app/\"\n    }\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-deep-field-chain/Models.php",
    "content": "<?php\n\nclass City {\n    /** @var string */\n    public string $zipCode;\n\n    public function getName(): string {\n        return \"city\";\n    }\n}\n\nclass Address {\n    /** @var City */\n    public City $city;\n\n    /** @var string */\n    public string $street;\n\n    public function save(): void {\n        // persist address\n    }\n}\n\nclass User {\n    /** @var string */\n    public string $name;\n\n    /** @var Address */\n    public Address $address;\n\n    public function greet(): string {\n        return $this->name;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-deep-field-chain/Service.php",
    "content": "<?php\n\nclass Service {\n    public function processUser(User $user): void {\n        // 2-level chain: $user->address → Address, then ->save() → Address#save\n        $user->address->save();\n\n        // 3-level chain: $user->address → Address, ->city → City, ->getName() → City#getName\n        $user->address->city->getName();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-default-params/app.php",
    "content": "<?php\n\nfunction greet(string $name, string $greeting = \"Hello\"): string {\n    return \"$greeting, $name\";\n}\n\nfunction process(): void {\n    greet(\"Alice\");\n    greet(\"Bob\", \"Hi\");\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-field-types/Models.php",
    "content": "<?php\n\nclass Address {\n    /** @var string */\n    public string $city;\n\n    public function save(): void {\n        // persist address\n    }\n}\n\nclass User {\n    /** @var string */\n    public string $name;\n\n    /** @var Address */\n    public Address $address;\n\n    public function greet(): string {\n        return $this->name;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-field-types/Service.php",
    "content": "<?php\n\nclass Service {\n    public function processUser(User $user): void {\n        // Field-access chain: $user->address → Address, then ->save() → Address#save\n        $user->address->save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-foreach-call-expr/Repo.php",
    "content": "<?php\n\nclass Repo {\n    public string $name;\n\n    public function __construct(string $name) {\n        $this->name = $name;\n    }\n\n    public function save(): void {}\n}\n\n/**\n * @return Repo[]\n */\nfunction getRepos(): array {\n    return [new Repo(\"main\")];\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-foreach-call-expr/User.php",
    "content": "<?php\n\nclass User {\n    public string $name;\n\n    public function __construct(string $name) {\n        $this->name = $name;\n    }\n\n    public function save(): void {}\n}\n\n/**\n * @return User[]\n */\nfunction getUsers(): array {\n    return [new User(\"alice\")];\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-foreach-call-expr/main.php",
    "content": "<?php\n\nrequire_once 'User.php';\nrequire_once 'Repo.php';\n\nfunction processUsers(): void {\n    foreach (getUsers() as $user) {\n        $user->save();\n    }\n}\n\nfunction processRepos(): void {\n    foreach (getRepos() as $repo) {\n        $repo->save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-foreach-generic/App.php",
    "content": "<?php\n\nrequire_once 'User.php';\nrequire_once 'Repo.php';\n\nclass App {\n    /**\n     * PHPDoc generic Collection<User> — element type should resolve to User, not Collection.\n     * @param Collection<User> $users\n     */\n    public function processCollection($users): void {\n        foreach ($users as $user) {\n            $user->save();\n        }\n    }\n\n    /**\n     * PHPDoc array-style User[] — existing behavior, should still work.\n     * @param User[] $repos\n     */\n    public function processArray(array $repos): void {\n        foreach ($repos as $repo) {\n            $repo->save();\n        }\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-foreach-generic/Repo.php",
    "content": "<?php\n\nclass Repo {\n    public string $name;\n\n    public function __construct(string $name) {\n        $this->name = $name;\n    }\n\n    public function save(): void {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-foreach-generic/User.php",
    "content": "<?php\n\nclass User {\n    public string $name;\n\n    public function __construct(string $name) {\n        $this->name = $name;\n    }\n\n    public function save(): void {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-foreach-loop/App.php",
    "content": "<?php\n\nrequire_once 'User.php';\n\nclass App {\n    /** @param User[] $users */\n    public function processUsers(array $users): void {\n        foreach ($users as $user) {\n            $user->save();\n        }\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-foreach-loop/Repo.php",
    "content": "<?php\n\nclass Repo {\n    public string $name;\n\n    public function __construct(string $name) {\n        $this->name = $name;\n    }\n\n    public function save(): void {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-foreach-loop/User.php",
    "content": "<?php\n\nclass User {\n    public string $name;\n\n    public function __construct(string $name) {\n        $this->name = $name;\n    }\n\n    public function save(): void {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-foreach-member-access/App.php",
    "content": "<?php\n\nrequire_once 'User.php';\nrequire_once 'Repo.php';\n\nclass App {\n    /** @var User[] */\n    private array $users;\n\n    public function __construct() {\n        $this->users = [];\n    }\n\n    /**\n     * $this->users member access in foreach — resolved via Phase 7.4 Strategy C:\n     * scans the class body for the property_declaration and extracts the element\n     * type from the @var PHPDoc annotation without requiring a @param workaround.\n     */\n    public function processMembers(): void {\n        foreach ($this->users as $user) {\n            $user->save();\n        }\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-foreach-member-access/Repo.php",
    "content": "<?php\n\nclass Repo {\n    public string $name;\n\n    public function __construct(string $name) {\n        $this->name = $name;\n    }\n\n    public function save(): void {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-foreach-member-access/User.php",
    "content": "<?php\n\nclass User {\n    public string $name;\n\n    public function __construct(string $name) {\n        $this->name = $name;\n    }\n\n    public function save(): void {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-grandparent-resolution/app/Models/A.php",
    "content": "<?php\n\nnamespace App\\Models;\n\nclass A\n{\n    public function greet(): Greeting { return new Greeting(); }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-grandparent-resolution/app/Models/B.php",
    "content": "<?php\n\nnamespace App\\Models;\n\nclass B extends A {}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-grandparent-resolution/app/Models/C.php",
    "content": "<?php\n\nnamespace App\\Models;\n\nclass C extends B {}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-grandparent-resolution/app/Models/Greeting.php",
    "content": "<?php\n\nnamespace App\\Models;\n\nclass Greeting\n{\n    public function save(): void {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-grandparent-resolution/app/Services/App.php",
    "content": "<?php\n\nnamespace App\\Services;\n\nuse App\\Models\\C;\n\nclass App\n{\n    public function process(): void\n    {\n        $c = new C();\n        $c->greet()->save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-grandparent-resolution/composer.json",
    "content": "{\n  \"autoload\": {\n    \"psr-4\": {\n      \"App\\\\\": \"app/\"\n    }\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-grouped-imports/app/Models/Repo.php",
    "content": "<?php\nnamespace App\\Models;\n\nclass Repo {\n    public function persist(): void {\n        echo \"persisting repo\";\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-grouped-imports/app/Models/User.php",
    "content": "<?php\nnamespace App\\Models;\n\nclass User {\n    public function save(): void {\n        echo \"saving user\";\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-grouped-imports/app/Services/Main.php",
    "content": "<?php\nnamespace App\\Services;\n\nuse App\\Models\\{User, Repo as R};\n\nclass Main {\n    public function run(): void {\n        $u = new User();\n        $u->save();\n\n        $r = new R();\n        $r->persist();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-grouped-imports/composer.json",
    "content": "{\n  \"autoload\": {\n    \"psr-4\": {\n      \"App\\\\\": \"app/\"\n    }\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-local-shadow/app/Services/Main.php",
    "content": "<?php\nnamespace App\\Services;\n\nuse function App\\Utils\\save;\n\n// Local function shadows imported save\nfunction save(string $data): void {\n    echo \"local save: $data\\n\";\n}\n\nfunction run(): void {\n    save(\"test\");\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-local-shadow/app/Utils/Logger.php",
    "content": "<?php\nnamespace App\\Utils;\n\nfunction save(string $data): void {\n    echo \"utils save: $data\\n\";\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-local-shadow/composer.json",
    "content": "{\n  \"autoload\": {\n    \"psr-4\": {\n      \"App\\\\\": \"app/\"\n    }\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-member-calls/app/Models/User.php",
    "content": "<?php\n\nnamespace App\\Models;\n\nclass User\n{\n    public function save(): bool\n    {\n        return true;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-member-calls/app/Services/UserService.php",
    "content": "<?php\n\nnamespace App\\Services;\n\nuse App\\Models\\User;\n\nclass UserService\n{\n    public function processUser(): bool\n    {\n        $user = new User();\n        return $user->save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-member-calls/composer.json",
    "content": "{\n  \"autoload\": {\n    \"psr-4\": {\n      \"App\\\\\": \"app/\"\n    }\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-method-chain-binding/App.php",
    "content": "<?php\n\nclass City {\n    public string $name;\n    public function __construct(string $name) { $this->name = $name; }\n    public function save(): bool { return true; }\n}\n\nclass Address {\n    public City $city;\n    public function __construct(City $city) { $this->city = $city; }\n    public function getCity(): City { return $this->city; }\n}\n\nclass User {\n    public Address $address;\n    public function __construct(Address $address) { $this->address = $address; }\n}\n\nfunction getUser(): User {\n    return new User(new Address(new City(\"NYC\")));\n}\n\nfunction processChain(): void {\n    $user = getUser();\n    $city = $user->getCity();\n    $city->save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-nullable-receiver/app/Models/Repo.php",
    "content": "<?php\n\nnamespace App\\Models;\n\nclass Repo\n{\n    public function save(): bool\n    {\n        return false;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-nullable-receiver/app/Models/User.php",
    "content": "<?php\n\nnamespace App\\Models;\n\nclass User\n{\n    public function save(): bool\n    {\n        return true;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-nullable-receiver/app/Services/AppService.php",
    "content": "<?php\n\nnamespace App\\Services;\n\nuse App\\Models\\User;\nuse App\\Models\\Repo;\n\nclass AppService\n{\n    public function process(?User $user, ?Repo $repo): void\n    {\n        // Nullable type-hinted params — should disambiguate via unwrapped type\n        $user->save();\n        $repo->save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-nullable-receiver/composer.json",
    "content": "{\n  \"autoload\": {\n    \"psr-4\": {\n      \"App\\\\\": \"app/\"\n    }\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-parent-resolution/app/Models/BaseModel.php",
    "content": "<?php\n\nnamespace App\\Models;\n\nclass BaseModel\n{\n    public function save(): bool { return true; }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-parent-resolution/app/Models/Serializable.php",
    "content": "<?php\n\nnamespace App\\Models;\n\ninterface Serializable\n{\n    public function serialize(): string;\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-parent-resolution/app/Models/User.php",
    "content": "<?php\n\nnamespace App\\Models;\n\nclass User extends BaseModel implements Serializable\n{\n    public function serialize(): string { return ''; }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-parent-resolution/composer.json",
    "content": "{\n  \"autoload\": {\n    \"psr-4\": { \"App\\\\\": \"app/\" }\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-phpdoc-attribute-return-type/Models.php",
    "content": "<?php\nclass User {\n    public function save() { return true; }\n}\n\nclass Repo {\n    public function save() { return true; }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-phpdoc-attribute-return-type/Services.php",
    "content": "<?php\nrequire_once 'Models.php';\n\nclass UserService {\n    /**\n     * @return User\n     */\n    #[Route('/user')]\n    public function getUser(string $name) {\n        return new User();\n    }\n\n    /**\n     * @return Repo\n     */\n    #[Route('/repo')]\n    public function getRepo(string $path) {\n        return new Repo();\n    }\n\n    public function processUser() {\n        $user = $this->getUser(\"alice\");\n        $user->save();\n    }\n\n    public function processRepo() {\n        $repo = $this->getRepo(\"/data\");\n        $repo->save();\n    }\n\n    /**\n     * @param User $user the user to handle\n     */\n    #[Validate]\n    public function handleUser($user) {\n        $user->save();\n    }\n\n    /**\n     * @param Repo $repo the repo to handle\n     */\n    #[Validate]\n    public function handleRepo($repo) {\n        $repo->save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-phpdoc-return-type/Models.php",
    "content": "<?php\nclass User {\n    public function save() { return true; }\n}\n\nclass Repo {\n    public function save() { return true; }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-phpdoc-return-type/Services.php",
    "content": "<?php\nrequire_once 'Models.php';\n\nclass UserService {\n    /**\n     * @return User\n     */\n    public function getUser(string $name) {\n        return new User();\n    }\n\n    /**\n     * @return Repo\n     */\n    public function getRepo(string $path) {\n        return new Repo();\n    }\n\n    public function processUser() {\n        $user = $this->getUser(\"alice\");\n        $user->save();\n    }\n\n    public function processRepo() {\n        $repo = $this->getRepo(\"/data\");\n        $repo->save();\n    }\n\n    /**\n     * @param User $user the user to handle\n     */\n    public function handleUser($user) {\n        $user->save();\n    }\n\n    /**\n     * @param Repo $repo the repo to handle\n     */\n    public function handleRepo($repo) {\n        $repo->save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-property-promotion/UserService.php",
    "content": "<?php\n\nclass UserRepo {\n    public function find(int $id): void {}\n    public function save(): void {}\n}\n\nclass UserService {\n    public function __construct(\n        private UserRepo $repo\n    ) {\n        // Promoted parameter $repo is available as a local variable in the constructor\n        $repo->save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-receiver-resolution/app/Models/Repo.php",
    "content": "<?php\n\nnamespace App\\Models;\n\nclass Repo\n{\n    public function save(): bool\n    {\n        return false;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-receiver-resolution/app/Models/User.php",
    "content": "<?php\n\nnamespace App\\Models;\n\nclass User\n{\n    public function save(): bool\n    {\n        return true;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-receiver-resolution/app/Services/AppService.php",
    "content": "<?php\n\nnamespace App\\Services;\n\nuse App\\Models\\User;\nuse App\\Models\\Repo;\n\nclass AppService\n{\n    public function processEntities(User $user, Repo $repo): void\n    {\n        $user->save();\n        $repo->save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-receiver-resolution/composer.json",
    "content": "{\n  \"autoload\": {\n    \"psr-4\": {\n      \"App\\\\\": \"app/\"\n    }\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-return-type/app/Models/Repo.php",
    "content": "<?php\n\nnamespace App\\Models;\n\nclass Repo {\n    public function save(): bool {\n        return true;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-return-type/app/Models/User.php",
    "content": "<?php\n\nnamespace App\\Models;\n\nclass User {\n    private string $name;\n\n    public function __construct(string $name) {\n        $this->name = $name;\n    }\n\n    public function save(): bool {\n        return true;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-return-type/app/Services/UserService.php",
    "content": "<?php\n\nnamespace App\\Services;\n\nuse App\\Models\\User;\n\nclass UserService {\n    public function getUser(string $name): User {\n        return new User($name);\n    }\n\n    public function processUser(): void {\n        $user = $this->getUser(\"alice\");\n        $user->save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-self-this-resolution/app/Models/Repo.php",
    "content": "<?php\n\nnamespace App\\Models;\n\nclass Repo\n{\n    public function save(): bool { return true; }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-self-this-resolution/app/Models/User.php",
    "content": "<?php\n\nnamespace App\\Models;\n\nclass User\n{\n    public function save(): bool { return true; }\n    public function process(): void\n    {\n        $this->save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-self-this-resolution/composer.json",
    "content": "{\n  \"autoload\": {\n    \"psr-4\": { \"App\\\\\": \"app/\" }\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-super-resolution/app/Models/BaseModel.php",
    "content": "<?php\n\nnamespace App\\Models;\n\nclass BaseModel\n{\n    public function save(): bool { return true; }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-super-resolution/app/Models/Repo.php",
    "content": "<?php\n\nnamespace App\\Models;\n\nclass Repo\n{\n    public function save(): bool { return true; }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-super-resolution/app/Models/User.php",
    "content": "<?php\n\nnamespace App\\Models;\n\nclass User extends BaseModel\n{\n    public function save(): bool\n    {\n        parent::save();\n        return true;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-super-resolution/composer.json",
    "content": "{\n  \"autoload\": {\n    \"psr-4\": { \"App\\\\\": \"app/\" }\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-this-receiver-disambiguation/AdminService.php",
    "content": "<?php\nrequire_once 'Models.php';\n\nclass AdminService {\n    /** @return Repo */\n    public function getUser(string $name) {\n        return new Repo();\n    }\n\n    public function processAdmin() {\n        $repo = $this->getUser(\"admin\");\n        $repo->save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-this-receiver-disambiguation/Models.php",
    "content": "<?php\nclass User {\n    public function save() { return true; }\n}\n\nclass Repo {\n    public function save() { return true; }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-this-receiver-disambiguation/UserService.php",
    "content": "<?php\nrequire_once 'Models.php';\n\nclass UserService {\n    /** @return User */\n    public function getUser(string $name) {\n        return new User();\n    }\n\n    public function processUser() {\n        $user = $this->getUser(\"alice\");\n        $user->save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-typed-properties/app/Models/UserRepo.php",
    "content": "<?php\n\nnamespace App\\Models;\n\nclass UserRepo\n{\n    public function find(int $id): void {}\n    public function save(): void {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-typed-properties/app/Services/UserService.php",
    "content": "<?php\n\nnamespace App\\Services;\n\nuse App\\Models\\UserRepo;\n\nclass UserService\n{\n    private UserRepo $repo;\n\n    public function process(UserRepo $repo): void\n    {\n        $repo->save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-typed-properties/composer.json",
    "content": "{\n  \"autoload\": {\n    \"psr-4\": {\n      \"App\\\\\": \"app/\"\n    }\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-variadic-resolution/app/Services/AppService.php",
    "content": "<?php\nnamespace App\\Services;\n\nuse App\\Utils\\Logger;\n\nclass AppService {\n    public function run(): void {\n        Logger::record(\"info\", \"started\", \"processing\", \"done\");\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-variadic-resolution/app/Utils/Logger.php",
    "content": "<?php\nnamespace App\\Utils;\n\nclass Logger {\n    public static function record(string $level, string ...$messages): void {\n        foreach ($messages as $msg) {\n            echo \"[$level] $msg\\n\";\n        }\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-variadic-resolution/composer.json",
    "content": "{\n  \"autoload\": {\n    \"psr-4\": {\n      \"App\\\\\": \"app/\"\n    }\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-write-access/models.php",
    "content": "<?php\n\nclass Address {\n    public string $city;\n}\n\nclass User {\n    public string $name;\n    public Address $address;\n    public static int $count = 0;\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/php-write-access/service.php",
    "content": "<?php\n\nrequire_once 'models.php';\n\nfunction updateUser(User $user) {\n    // Simple write\n    $user->name = \"Alice\";\n\n    // Object write\n    $user->address = new Address();\n\n    // Static property write\n    User::$count = 42;\n\n    // Compound assignment write\n    $user->name .= \" Smith\";\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-alias-imports/app.py",
    "content": "from models import User as U, Repo as R\n\ndef main():\n    u = U(\"alice\")\n    r = R(\"https://example.com\")\n    u.save()\n    r.persist()\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-alias-imports/models.py",
    "content": "class User:\n    def __init__(self, name):\n        self.name = name\n\n    def save(self):\n        return True\n\nclass Repo:\n    def __init__(self, url):\n        self.url = url\n\n    def persist(self):\n        return True\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-ambiguous/models/__init__.py",
    "content": ""
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-ambiguous/models/handler.py",
    "content": "class Handler:\n    def handle(self):\n        pass\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-ambiguous/other/__init__.py",
    "content": ""
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-ambiguous/other/handler.py",
    "content": "class Handler:\n    def process(self):\n        pass\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-ambiguous/services/__init__.py",
    "content": ""
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-ambiguous/services/user_handler.py",
    "content": "from ..models.handler import Handler\n\nclass UserHandler(Handler):\n    def run(self):\n        pass\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-assignment-chain/app.py",
    "content": "from user import User\nfrom repo import Repo\n\ndef get_user() -> User:\n    return User()\n\ndef get_repo() -> Repo:\n    return Repo()\n\ndef process():\n    u: User = get_user()\n    alias = u\n    alias.save()\n\n    r: Repo = get_repo()\n    r_alias = r\n    r_alias.save()\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-assignment-chain/repo.py",
    "content": "class Repo:\n    def save(self):\n        return False\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-assignment-chain/user.py",
    "content": "class User:\n    def save(self):\n        return True\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-bare-import/models/user.py",
    "content": "class User:\n    def save(self):\n        pass\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-bare-import/services/auth.py",
    "content": "import user\n\ndef authenticate():\n    svc = user.UserService()\n    svc.execute()\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-bare-import/services/user.py",
    "content": "class UserService:\n    def execute(self):\n        pass\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-call-result-binding/app.py",
    "content": "from service import get_user\n\ndef process_user():\n    user = get_user(\"alice\")\n    user.save()\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-call-result-binding/models.py",
    "content": "class User:\n    def __init__(self, name: str):\n        self.name = name\n\n    def save(self) -> bool:\n        return True\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-call-result-binding/service.py",
    "content": "from models import User\n\ndef get_user(name: str) -> User:\n    return User(name)\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-calls/one.py",
    "content": "def write_audit(message):\n    return message\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-calls/service.py",
    "content": "from one import write_audit\nfrom zero import write_audit as zero_write_audit\n\n\ndef run():\n    return write_audit(\"hello\")\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-calls/zero.py",
    "content": "def write_audit():\n    return \"zero\"\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-chain-call/app.py",
    "content": "from service import UserService\n\n\ndef process_user():\n    svc = UserService()\n    svc.get_user().save()\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-chain-call/models/repo.py",
    "content": "class Repo:\n    def save(self):\n        pass\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-chain-call/models/user.py",
    "content": "class User:\n    def save(self):\n        pass\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-chain-call/service.py",
    "content": "from models.user import User\n\n\nclass UserService:\n    def get_user(self) -> User:\n        return User()\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-class-annotations/repo.py",
    "content": "class Repo:\n    name: str = \"\"\n\n    def save(self):\n        return False\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-class-annotations/service.py",
    "content": "from user import User\nfrom repo import Repo\n\n# File-level class annotations (no default)\nactive_user: User\nactive_repo: Repo\n\ndef process():\n    active_user.save()\n    active_repo.save()\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-class-annotations/user.py",
    "content": "class User:\n    name: str = \"\"\n\n    def save(self):\n        return True\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-constructor-calls/app.py",
    "content": "from models import User\n\ndef process():\n    user = User(\"alice\")\n    user.save()\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-constructor-calls/models.py",
    "content": "class User:\n    def __init__(self, name):\n        self.name = name\n\n    def save(self):\n        print('saving user')\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-constructor-type-inference/models/repo.py",
    "content": "class Repo:\n    def __init__(self, db_name: str):\n        self.db_name = db_name\n\n    def save(self) -> bool:\n        return False\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-constructor-type-inference/models/user.py",
    "content": "class User:\n    def __init__(self, name: str):\n        self.name = name\n\n    def save(self) -> bool:\n        return True\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-constructor-type-inference/services/app.py",
    "content": "from models.user import User\nfrom models.repo import Repo\n\n\ndef process_entities():\n    user = User(\"alice\")\n    repo = Repo(\"maindb\")\n    user.save()\n    repo.save()\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-default-params/app.py",
    "content": "def greet(name: str, greeting: str = \"Hello\") -> str:\n    return greeting + \", \" + name\n\ndef search(query: str, limit: int = 10) -> list:\n    return []\n\ndef process():\n    greet(\"alice\")\n    greet(\"bob\", \"Hi\")\n    search(\"test\")\n    search(\"test\", 5)\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-dict-items-loop/app.py",
    "content": "from user import User\n\ndef process(data: dict[str, User]):\n    for key, user in data.items():\n        user.save()\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-dict-items-loop/repo.py",
    "content": "class Repo:\n    def __init__(self, name: str):\n        self.name = name\n\n    def save(self):\n        pass\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-dict-items-loop/user.py",
    "content": "class User:\n    def __init__(self, name: str):\n        self.name = name\n\n    def save(self):\n        pass\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-enumerate-loop/app.py",
    "content": "from user import User\nfrom typing import List\n\n\ndef process_users(users: dict[str, User]):\n    # 3-variable enumerate: i=index, k=key, v=value (User)\n    for i, k, v in enumerate(users.items()):\n        v.save()\n\n\ndef process_nested_tuple(users: dict[str, User]):\n    # Nested tuple pattern: i=index, (k,v) tuple unpacked\n    for i, (k, v) in enumerate(users.items()):\n        v.save()\n\n\ndef process_parenthesized_tuple(users: List[User]):\n    # Parenthesized tuple as top-level pattern\n    for (i, u) in enumerate(users):\n        u.save()\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-enumerate-loop/repo.py",
    "content": "class Repo:\n    def save(self) -> bool:\n        return True\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-enumerate-loop/user.py",
    "content": "class User:\n    def __init__(self, name: str):\n        self.name = name\n\n    def save(self) -> bool:\n        return True\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-field-type-disambig/address.py",
    "content": "class Address:\n    city: str\n\n    def save(self):\n        pass\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-field-type-disambig/service.py",
    "content": "from user import User\n\ndef process_user(user: User):\n    # Field-access chain: user.address → Address, then .save() must resolve\n    # to Address#save (NOT User#save) — only lookupFieldByOwner can disambiguate.\n    user.address.save()\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-field-type-disambig/user.py",
    "content": "from address import Address\n\nclass User:\n    name: str\n    address: Address\n\n    def save(self):\n        pass\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-field-types/models.py",
    "content": "class Address:\n    city: str\n\n    def save(self):\n        pass\n\nclass User:\n    name: str\n    address: Address\n\n    def greet(self) -> str:\n        return self.name\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-field-types/service.py",
    "content": "from models import User\n\ndef process_user(user: User):\n    user.address.save()\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-for-call-expr/main.py",
    "content": "from models import get_users, get_repos\n\ndef process_users():\n    for user in get_users():\n        user.save()\n\ndef process_repos():\n    for repo in get_repos():\n        repo.save()\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-for-call-expr/models.py",
    "content": "class User:\n    def __init__(self, name: str):\n        self.name = name\n\n    def save(self) -> None:\n        pass\n\nclass Repo:\n    def __init__(self, name: str):\n        self.name = name\n\n    def save(self) -> None:\n        pass\n\ndef get_users() -> list[User]:\n    return [User(\"alice\")]\n\ndef get_repos() -> list[Repo]:\n    return [Repo(\"main\")]\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-grandparent-resolution/app.py",
    "content": "from models.c import C\n\ndef process():\n    c = C()\n    c.greet().save()\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-grandparent-resolution/models/__init__.py",
    "content": ""
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-grandparent-resolution/models/a.py",
    "content": "from .greeting import Greeting\n\nclass A:\n    def greet(self) -> Greeting:\n        return Greeting()\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-grandparent-resolution/models/b.py",
    "content": "from .a import A\n\nclass B(A):\n    pass\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-grandparent-resolution/models/c.py",
    "content": "from .b import B\n\nclass C(B):\n    pass\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-grandparent-resolution/models/greeting.py",
    "content": "class Greeting:\n    def save(self) -> None:\n        pass\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-local-shadow/app.py",
    "content": "from utils import save\n\ndef save(x):\n    print('local save')\n\ndef main():\n    save(\"test\")\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-local-shadow/utils.py",
    "content": "def save(data):\n    print('saving from utils')\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-match-case/app.py",
    "content": "from models.user import User\nfrom models.repo import Repo\n\n\ndef process(x):\n    match x:\n        case User() as u:\n            u.save()  # should resolve to User#save, not Repo#save\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-match-case/models/repo.py",
    "content": "class Repo:\n    def save(self):\n        pass\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-match-case/models/user.py",
    "content": "class User:\n    def save(self):\n        pass\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-member-access-for-loop/app.py",
    "content": "from models.user import User\nfrom models.repo import Repo\nfrom typing import List\n\nclass UserService:\n    def process_users(self, users: List[User]):\n        for user in self.users:\n            user.save()\n\nclass RepoService:\n    def process_repos(self, repos: List[Repo]):\n        for repo in self.repos:\n            repo.save()\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-member-access-for-loop/models/repo.py",
    "content": "class Repo:\n    def save(self):\n        pass\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-member-access-for-loop/models/user.py",
    "content": "class User:\n    def save(self):\n        pass\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-member-calls/app.py",
    "content": "from user import User\n\ndef process_user():\n    user = User()\n    return user.save()\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-member-calls/user.py",
    "content": "class User:\n    def save(self):\n        return True\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-method-chain-binding/app.py",
    "content": "from models import User, Address, City\n\ndef get_user() -> User:\n    return User(Address(City(\"NYC\")))\n\ndef process_chain():\n    user = get_user()\n    city = user.get_city()\n    city.save()\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-method-chain-binding/models.py",
    "content": "class City:\n    def __init__(self, name: str):\n        self.name = name\n\n    def save(self) -> bool:\n        return True\n\nclass Address:\n    city: City\n\n    def __init__(self, city: City):\n        self.city = city\n\n    def get_city(self) -> City:\n        return self.city\n\nclass User:\n    address: Address\n\n    def __init__(self, address: Address):\n        self.address = address\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-named-imports/app.py",
    "content": "from format_upper import format_data\n\ndef process_input():\n    return format_data(\"hello\")\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-named-imports/format_prefix.py",
    "content": "def format_data(data, prefix):\n    return prefix + data\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-named-imports/format_upper.py",
    "content": "def format_data(data):\n    return data.upper()\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-nullable-chain/app.py",
    "content": "from user import User\nfrom repo import Repo\n\n\ndef get_user() -> User:\n    return User()\n\n\ndef get_repo() -> Repo:\n    return Repo()\n\n\n# Python 3.10+ union: User | None is parsed as binary_operator,\n# stored as raw text \"User | None\" in TypeEnv, then stripNullable resolves it.\ndef nullable_chain_user() -> None:\n    u: User | None = get_user()\n    alias = u\n    alias.save()\n\n\ndef nullable_chain_repo() -> None:\n    r: Repo | None = get_repo()\n    alias = r\n    alias.save()\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-nullable-chain/repo.py",
    "content": "class Repo:\n    def save(self) -> bool:\n        return False\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-nullable-chain/user.py",
    "content": "class User:\n    def save(self) -> bool:\n        return True\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-nullable-receiver/app.py",
    "content": "from user import User\nfrom repo import Repo\n\ndef find_user() -> User | None:\n    return User()\n\ndef find_repo() -> Repo | None:\n    return Repo()\n\ndef process_entities():\n    user: User | None = find_user()\n    user.save()\n    repo: Repo | None = find_repo()\n    repo.save()\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-nullable-receiver/repo.py",
    "content": "class Repo:\n    def save(self):\n        return False\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-nullable-receiver/user.py",
    "content": "class User:\n    def save(self):\n        return True\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-parent-resolution/models/__init__.py",
    "content": ""
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-parent-resolution/models/base.py",
    "content": "class BaseModel:\n    def save(self) -> bool:\n        return True\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-parent-resolution/models/user.py",
    "content": "from .base import BaseModel\n\nclass User(BaseModel):\n    def serialize(self) -> str:\n        return ''\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-pkg/models/base.py",
    "content": "class BaseModel:\n    def save(self):\n        pass\n\n    def validate(self):\n        pass\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-pkg/models/user.py",
    "content": "from .base import BaseModel\n\n\nclass User(BaseModel):\n    def get_name(self):\n        return self.name\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-pkg/services/auth.py",
    "content": "from ..models.user import User\n\n\nclass AuthService:\n    def authenticate(self, user: User):\n        user.validate()\n        return True\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-pkg/utils/helpers.py",
    "content": "from ..models.base import BaseModel\n\n\ndef process_model(model: BaseModel):\n    model.validate()\n    model.save()\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-qualified-constructor/main.py",
    "content": "import models\n\ndef main():\n    user = models.User(\"alice\")\n    user.save()\n    user.greet()\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-qualified-constructor/models.py",
    "content": "class User:\n    def __init__(self, name):\n        self.name = name\n\n    def save(self):\n        pass\n\n    def greet(self):\n        return f\"Hello, {self.name}\"\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-receiver-resolution/app.py",
    "content": "from user import User\nfrom repo import Repo\n\ndef process_entities():\n    user: User = User()\n    repo: Repo = Repo()\n    user.save()\n    repo.save()\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-receiver-resolution/repo.py",
    "content": "class Repo:\n    def save(self):\n        return False\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-receiver-resolution/user.py",
    "content": "class User:\n    def save(self):\n        return True\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-reexport-chain/app.py",
    "content": "from models import User, Repo\n\ndef main():\n    user = User()\n    user.save()\n\n    repo = Repo()\n    repo.persist()\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-reexport-chain/models/__init__.py",
    "content": "from .base import User, Repo\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-reexport-chain/models/base.py",
    "content": "class User:\n    def save(self):\n        print('saving user')\n\nclass Repo:\n    def persist(self):\n        print('persisting repo')\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-return-type-inference/app.py",
    "content": "from service import get_user\n\ndef process_user():\n    user = get_user('alice')\n    user.save()\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-return-type-inference/models.py",
    "content": "class User:\n    def __init__(self, name: str):\n        self.name = name\n\n    def save(self) -> bool:\n        return True\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-return-type-inference/service.py",
    "content": "from models import User\n\ndef get_user(name: str) -> User:\n    return User(name)\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-self-this-resolution/models/repo.py",
    "content": "class Repo:\n    def save(self) -> bool:\n        return True\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-self-this-resolution/models/user.py",
    "content": "class User:\n    def save(self) -> bool:\n        return True\n    def process(self) -> None:\n        self.save()\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-static-class-methods/app.py",
    "content": "from service import UserService, AdminService\n\n\ndef process():\n    user = UserService.find_user(\"alice\")\n    UserService.create_user(\"bob\")\n    svc = UserService.from_config({})\n\n    AdminService.find_user(\"charlie\")\n    AdminService.delete_user(\"charlie\")\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-static-class-methods/service.py",
    "content": "class UserService:\n    @staticmethod\n    def find_user(name: str) -> str:\n        return name\n\n    @staticmethod\n    def create_user(name: str) -> str:\n        return name\n\n    @classmethod\n    def from_config(cls, config: dict) -> \"UserService\":\n        return cls()\n\n\nclass AdminService:\n    @staticmethod\n    def find_user(name: str) -> str:\n        return name\n\n    @staticmethod\n    def delete_user(name: str) -> bool:\n        return True\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-super-resolution/models/__init__.py",
    "content": ""
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-super-resolution/models/base.py",
    "content": "class BaseModel:\n    def save(self) -> bool:\n        return True\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-super-resolution/models/repo.py",
    "content": "class Repo:\n    def save(self) -> bool:\n        return True\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-super-resolution/models/user.py",
    "content": "from .base import BaseModel\n\nclass User(BaseModel):\n    def save(self) -> bool:\n        super().save()\n        return True\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-variadic-resolution/app.py",
    "content": "from logger import log_entry\n\ndef process_input():\n    log_entry(\"hello\", \"world\", \"test\")\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-variadic-resolution/logger.py",
    "content": "def log_entry(*messages):\n    print(' '.join(messages))\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-walrus-chain/app.py",
    "content": "from user import User\nfrom repo import Repo\n\n\ndef get_user() -> User:\n    return User()\n\n\ndef get_repo() -> Repo:\n    return Repo()\n\n\n# Walrus operator (:=) creates a named_expression binding.\n# Tests that extractPendingAssignment propagates through walrus assignments.\ndef walrus_chain_user() -> None:\n    u: User = get_user()\n    # Regular assignment where alias gets type from u (regular chain)\n    alias = u\n    # Walrus inside condition: w gets type from u via named_expression chain\n    if (w := u):\n        w.save()\n    alias.save()\n\n\ndef walrus_chain_repo() -> None:\n    r: Repo = get_repo()\n    alias = r\n    if (w := r):\n        w.save()\n    alias.save()\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-walrus-chain/repo.py",
    "content": "class Repo:\n    def save(self) -> bool:\n        return False\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-walrus-chain/user.py",
    "content": "class User:\n    def save(self) -> bool:\n        return True\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-walrus-operator/main.py",
    "content": "from models import User\n\n\ndef process():\n    if (user := User(\"alice\")):\n        user.save()\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-walrus-operator/models.py",
    "content": "class User:\n    def __init__(self, name: str):\n        self.name = name\n\n    def save(self) -> bool:\n        return True\n\n    def greet(self) -> str:\n        return f\"Hello, {self.name}\"\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-write-access/models.py",
    "content": "class Address:\n    city: str\n\nclass User:\n    name: str\n    address: Address\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/python-write-access/service.py",
    "content": "from models import User, Address\n\ndef update_user(user: User):\n    # Write access\n    user.name = \"Alice\"\n    user.address = Address()\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-ambiguous/lib/user_handler.rb",
    "content": "require_relative '../models/handler'\n\nclass UserHandler < Handler\n  def handle_event\n    process_request\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-ambiguous/models/handler.rb",
    "content": "class Handler\n  def process_request\n    true\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-ambiguous/other/handler.rb",
    "content": "class Handler\n  def process_request\n    false\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-app/lib/base_model.rb",
    "content": "require 'lib/concerns/serializable'\n\nclass BaseModel\n  attr_accessor :id, :created_at\n\n  def persist\n    run_validations\n  end\n\n  def run_validations\n    true\n  end\n\n  def self.factory\n    run_validations\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-app/lib/concerns/cacheable.rb",
    "content": "module Cacheable\n  def cache_key\n    \"key\"\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-app/lib/concerns/loggable.rb",
    "content": "module Loggable\n  def log_action\n    true\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-app/lib/concerns/serializable.rb",
    "content": "module Serializable\n  def serialize_data\n    \"{}\"\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-app/lib/service.rb",
    "content": "require_relative './user'\n\nclass UserService\n  def create_user\n    user = User.new\n    user.persist\n    user.greet_user\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-app/lib/user.rb",
    "content": "require_relative './base_model'\nrequire_relative './concerns/serializable'\nrequire_relative './concerns/loggable'\nrequire_relative './concerns/cacheable'\n\nclass User < BaseModel\n  include Serializable\n  extend Loggable\n  prepend Cacheable\n\n  attr_reader :name\n  attr_writer :email\n\n  def greet_user\n    persist\n    serialize_data\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-call-result-binding/app.rb",
    "content": "class User\n  def save\n    true\n  end\nend\n\n# @return [User]\ndef get_user(name)\n  User.new\nend\n\ndef process_user\n  user = get_user(\"alice\")\n  user.save\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-calls/lib/one_arg.rb",
    "content": "class OneArg\n  def write_audit(message)\n    message\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-calls/lib/service.rb",
    "content": "require_relative './one_arg'\nrequire_relative './two_args'\n\nclass Service\n  def run_task\n    write_audit(\"done\")\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-calls/lib/two_args.rb",
    "content": "class TwoArgs\n  def write_audit(message, level)\n    message\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-chain-call/lib/app.rb",
    "content": "require_relative './user_service'\n\nclass App\n  def process\n    svc = UserService.new\n    svc.get_user.save\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-chain-call/lib/repo.rb",
    "content": "class Repo\n  def save\n    true\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-chain-call/lib/user.rb",
    "content": "class User\n  def save\n    true\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-chain-call/lib/user_service.rb",
    "content": "require_relative './user'\n\nclass UserService\n  # @return [User]\n  def get_user\n    User.new\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-constant-constructor/app.rb",
    "content": "require_relative 'models'\n\nSERVICE = UserService.new\nSERVICE.process\nSERVICE.validate\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-constant-constructor/models.rb",
    "content": "class UserService\n  def process\n  end\n\n  def validate\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-constant-factory-call/admin_service.rb",
    "content": "class AdminService\n  def process\n    true\n  end\n\n  def validate\n    true\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-constant-factory-call/app.rb",
    "content": "require_relative 'user_service'\nrequire_relative 'admin_service'\n\n# @return [UserService]\ndef build_service\n  UserService.new\nend\n\nSERVICE = build_service()\nSERVICE.process\nSERVICE.validate\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-constant-factory-call/user_service.rb",
    "content": "class UserService\n  def process\n    true\n  end\n\n  def validate\n    true\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-constructor-type-inference/models/repo.rb",
    "content": "class Repo\n  def save\n    true\n  end\n\n  def cleanup\n    true\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-constructor-type-inference/models/user.rb",
    "content": "class User\n  def save\n    true\n  end\n\n  def cleanup\n    true\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-constructor-type-inference/services/app.rb",
    "content": "require_relative '../models/user'\nrequire_relative '../models/repo'\n\nclass AppService\n  def process_entities\n    user = User.new\n    repo = Repo.new\n    user.save\n    repo.save\n  end\n\n  def cleanup\n    true\n  end\n\n  def greet\n    self.process_entities\n    self.cleanup\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-default-params/app.rb",
    "content": "def greet(name, greeting = \"Hello\")\n  \"#{greeting}, #{name}\"\nend\n\ndef process\n  greet(\"Alice\")\n  greet(\"Bob\", \"Hi\")\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-field-type-disambig/address.rb",
    "content": "class Address\n  # @return [String]\n  attr_accessor :city\n\n  def save\n    true\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-field-type-disambig/service.rb",
    "content": "require_relative 'user'\n\n# @param user [User]\ndef process_user(user)\n  # Field-access chain: user.address → Address, then .save → Address#save\n  # Both User and Address have save — only lookupFieldByOwner can disambiguate.\n  user.address.save\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-field-type-disambig/user.rb",
    "content": "require_relative 'address'\n\nclass User\n  # @return [String]\n  attr_accessor :name\n\n  # @return [Address]\n  attr_accessor :address\n\n  def save\n    true\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-field-types/models.rb",
    "content": "class Address\n  # @return [String]\n  attr_accessor :city\n\n  def save\n    true\n  end\nend\n\nclass User\n  # @return [String]\n  attr_accessor :name\n\n  # @return [Address]\n  attr_accessor :address\n\n  def greet\n    name\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-field-types/service.rb",
    "content": "require_relative 'models'\n\n# @param user [User]\ndef process_user(user)\n  # Field-access chain: user.address → Address, then .save → Address#save\n  user.address.save\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-for-in-loop/app.rb",
    "content": "require_relative 'user'\n\n# @param users [Array<User>]\ndef process_users(users)\n  for user in users\n    user.save\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-for-in-loop/repo.rb",
    "content": "class Repo\n  def initialize(name)\n    @name = name\n  end\n\n  def save\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-for-in-loop/user.rb",
    "content": "class User\n  def initialize(name)\n    @name = name\n  end\n\n  def save\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-grandparent-resolution/lib/app.rb",
    "content": "require_relative 'models/c'\n\ndef process\n  c = C.new\n  c.greet.save\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-grandparent-resolution/lib/models/a.rb",
    "content": "require_relative 'greeting'\n\nclass A\n  def greet\n    Greeting.new\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-grandparent-resolution/lib/models/b.rb",
    "content": "require_relative 'a'\n\nclass B < A\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-grandparent-resolution/lib/models/c.rb",
    "content": "require_relative 'b'\n\nclass C < B\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-grandparent-resolution/lib/models/greeting.rb",
    "content": "class Greeting\n  def save\n    true\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-local-shadow/lib/app.rb",
    "content": "require_relative './utils'\n\ndef do_work\n  false\nend\n\ndef run_app\n  do_work\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-local-shadow/lib/utils.rb",
    "content": "class Utils\n  def do_work\n    true\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-member-calls/lib/app.rb",
    "content": "require_relative './user'\n\nclass App\n  def process_user\n    user = User.new\n    user.persist_record\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-member-calls/lib/user.rb",
    "content": "class User\n  def persist_record\n    true\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-method-chain-binding/app.rb",
    "content": "class City\n  def save\n    true\n  end\nend\n\nclass Address\n  # @return [City]\n  def get_city\n    City.new\n  end\nend\n\nclass User\n  # @return [Address]\n  def get_address\n    Address.new\n  end\nend\n\n# @return [User]\ndef get_user\n  User.new\nend\n\ndef process_chain\n  user = get_user()\n  addr = user.get_address()\n  city = addr.get_city()\n  city.save\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-namespaced-constructor/app.rb",
    "content": "require_relative 'models/user_service'\n\nsvc = Models::UserService.new\nsvc.process('alice')\nsvc.validate\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-namespaced-constructor/models/user_service.rb",
    "content": "module Models\n  class UserService\n    def process(name)\n      name.upcase\n    end\n\n    def validate\n      true\n    end\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-parent-resolution/lib/models/base_model.rb",
    "content": "class BaseModel\n  def save\n    true\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-parent-resolution/lib/models/serializable.rb",
    "content": "module Serializable\n  def serialize\n    ''\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-parent-resolution/lib/models/user.rb",
    "content": "require_relative 'base_model'\nrequire_relative 'serializable'\n\nclass User < BaseModel\n  include Serializable\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-return-type/app.rb",
    "content": "require_relative 'models'\nrequire_relative 'repo'\n\ndef process_user\n  user = get_user('alice')\n  user.save\nend\n\ndef process_repo\n  repo = get_repo('/data')\n  repo.save\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-return-type/models.rb",
    "content": "class User\n  def initialize(name)\n    @name = name\n  end\n\n  def save\n    true\n  end\nend\n\n# @return [User]\ndef get_user(name)\n  User.new(name)\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-return-type/repo.rb",
    "content": "class Repo\n  def initialize(path)\n    @path = path\n  end\n\n  def save\n    true\n  end\nend\n\n# @return [Repo]\ndef get_repo(path)\n  Repo.new(path)\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-self-this-resolution/lib/models/repo.rb",
    "content": "class Repo\n  def save\n    true\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-self-this-resolution/lib/models/user.rb",
    "content": "class User\n  def save\n    true\n  end\n  def process\n    self.save\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-super-resolution/lib/models/base_model.rb",
    "content": "class BaseModel\n  def save\n    true\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-super-resolution/lib/models/repo.rb",
    "content": "class Repo\n  def save\n    true\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-super-resolution/lib/models/user.rb",
    "content": "require_relative 'base_model'\n\nclass User < BaseModel\n  def save\n    super\n    true\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-write-access/models.rb",
    "content": "class Address\n  attr_accessor :city\nend\n\nclass User\n  attr_accessor :name, :address, :score\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-write-access/service.rb",
    "content": "require_relative 'models'\n\nclass UserService\n  def update_user\n    user = User.new\n\n    # Simple write (setter method call)\n    user.name = \"Alice\"\n\n    # Object write\n    user.address = Address.new\n\n    # Compound assignment (operator_assignment node)\n    user.score += 10\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-yard-annotations/models.rb",
    "content": "class UserRepo\n  def save\n    true\n  end\n\n  def find_by_name(name)\n    true\n  end\nend\n\nclass User\n  def greet\n    \"hello\"\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-yard-annotations/service.rb",
    "content": "require_relative './models'\n\nclass UserService\n  # @param repo [UserRepo] the repository\n  # @param user [User] the user to create\n  # @return [Boolean]\n  def create(repo, user)\n    repo.save\n    user.greet\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-yard-generics/models.rb",
    "content": "class UserRepo\n  def save\n    true\n  end\n\n  def find_all\n    []\n  end\nend\n\nclass AdminRepo\n  def save\n    true\n  end\n\n  def find_all\n    []\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ruby-yard-generics/service.rb",
    "content": "require_relative './models'\n\nclass DataService\n  # @param repo [UserRepo] the user repository\n  # @param cache [Hash<Symbol, UserRepo>] cache of repos by symbol key\n  def sync(repo, cache)\n    repo.save\n    repo.find_all\n  end\n\n  # @param [AdminRepo] admin_repo the admin repository (alternate YARD order)\n  def audit(admin_repo)\n    admin_repo.save\n    admin_repo.find_all\n  end\nend\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-alias-imports/src/main.rs",
    "content": "mod models;\n\nuse crate::models::User as U;\nuse crate::models::Repo as R;\n\nfn main() {\n    let u = U { name: String::from(\"alice\") };\n    let r = R { url: String::from(\"https://example.com\") };\n    u.save();\n    r.persist();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-alias-imports/src/models.rs",
    "content": "pub struct User {\n    pub name: String,\n}\n\nimpl User {\n    pub fn save(&self) -> bool {\n        true\n    }\n}\n\npub struct Repo {\n    pub url: String,\n}\n\nimpl Repo {\n    pub fn persist(&self) -> bool {\n        true\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-ambiguous/src/main.rs",
    "content": "mod models;\nmod other;\nmod services;\n\nfn main() {\n    let h = services::create_handler();\n    h.handle();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-ambiguous/src/models/handler.rs",
    "content": "pub struct Handler {\n    pub name: String,\n}\n\nimpl Handler {\n    pub fn handle(&self) {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-ambiguous/src/models/mod.rs",
    "content": "pub mod handler;\npub use handler::Handler;\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-ambiguous/src/other/handler.rs",
    "content": "pub struct Handler {\n    pub id: u32,\n}\n\nimpl Handler {\n    pub fn process(&self) {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-ambiguous/src/other/mod.rs",
    "content": "pub mod handler;\npub use handler::Handler;\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-ambiguous/src/services/mod.rs",
    "content": "use crate::models::Handler;\n\npub fn create_handler() -> Handler {\n    Handler { name: String::new() }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-assignment-chain/src/main.rs",
    "content": "mod user;\nmod repo;\nuse crate::user::User;\nuse crate::repo::Repo;\n\nfn get_user() -> User { User }\nfn get_repo() -> Repo { Repo }\n\nfn process_entities() {\n    let u: User = get_user();\n    let alias = u;\n    alias.save();\n\n    let r: Repo = get_repo();\n    let r_alias = r;\n    r_alias.save();\n}\n\nfn main() {}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-assignment-chain/src/repo.rs",
    "content": "pub struct Repo;\n\nimpl Repo {\n    pub fn save(&self) -> bool {\n        false\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-assignment-chain/src/user.rs",
    "content": "pub struct User;\n\nimpl User {\n    pub fn save(&self) -> bool {\n        true\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-async-binding/src/main.rs",
    "content": "mod user;\nmod repo;\n\nuse user::User;\nuse repo::Repo;\n\nasync fn get_user() -> User {\n    User { name: String::from(\"alice\") }\n}\n\nasync fn get_repo() -> Repo {\n    Repo { name: String::from(\"main\") }\n}\n\nasync fn process_user() {\n    let user = get_user().await;\n    user.save();\n}\n\nasync fn process_repo() {\n    let repo = get_repo().await;\n    repo.save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-async-binding/src/repo.rs",
    "content": "pub struct Repo {\n    pub name: String,\n}\n\nimpl Repo {\n    pub fn save(&self) -> bool {\n        true\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-async-binding/src/user.rs",
    "content": "pub struct User {\n    pub name: String,\n}\n\nimpl User {\n    pub fn save(&self) -> bool {\n        true\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-call-result-binding/src/main.rs",
    "content": "mod models;\nuse models::get_user;\n\nfn process_user() {\n    let user = get_user(\"alice\");\n    user.save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-call-result-binding/src/models.rs",
    "content": "pub struct User {\n    pub name: String,\n}\n\nimpl User {\n    pub fn save(&self) -> bool {\n        true\n    }\n}\n\npub fn get_user(name: &str) -> User {\n    User { name: name.to_string() }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-calls/src/main.rs",
    "content": "mod onearg;\nmod zeroarg;\n\nuse crate::onearg::write_audit;\nuse crate::zeroarg::write_audit as zero_write_audit;\n\nfn main() {\n    let _ = write_audit(\"hello\");\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-calls/src/onearg/mod.rs",
    "content": "pub fn write_audit(message: &str) -> &str {\n    message\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-calls/src/zeroarg/mod.rs",
    "content": "pub fn write_audit() -> &'static str {\n    \"zero\"\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-chain-call/src/main.rs",
    "content": "mod models;\n\nuse models::user::User;\n\nstruct UserService;\n\nimpl UserService {\n    fn get_user(&self) -> User {\n        User { name: String::from(\"alice\") }\n    }\n}\n\nfn process_user() {\n    let svc = UserService;\n    svc.get_user().save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-chain-call/src/models/mod.rs",
    "content": "pub mod user;\npub mod repo;\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-chain-call/src/models/repo.rs",
    "content": "pub struct Repo {\n    pub name: String,\n}\n\nimpl Repo {\n    pub fn save(&self) -> bool {\n        true\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-chain-call/src/models/user.rs",
    "content": "pub struct User {\n    pub name: String,\n}\n\nimpl User {\n    pub fn save(&self) -> bool {\n        true\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-constructor-type-inference/src/main.rs",
    "content": "mod user;\nmod repo;\nuse crate::user::User;\nuse crate::repo::Repo;\n\nfn process_entities() {\n    let user = User::new();\n    let repo = Repo::new();\n    user.save();\n    repo.save();\n}\n\nfn main() {}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-constructor-type-inference/src/repo.rs",
    "content": "pub struct Repo;\n\nimpl Repo {\n    pub fn new() -> Self {\n        Repo\n    }\n\n    pub fn save(&self) -> bool {\n        false\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-constructor-type-inference/src/user.rs",
    "content": "pub struct User;\n\nimpl User {\n    pub fn new() -> Self {\n        User\n    }\n\n    pub fn save(&self) -> bool {\n        true\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-deep-field-chain/models.rs",
    "content": "pub struct City {\n    pub zip_code: String,\n}\n\nimpl City {\n    pub fn get_name(&self) -> &str {\n        \"city\"\n    }\n}\n\npub struct Address {\n    pub city: City,\n    pub street: String,\n}\n\nimpl Address {\n    pub fn save(&self) {\n        // persist address\n    }\n}\n\npub struct User {\n    pub name: String,\n    pub address: Address,\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-deep-field-chain/service.rs",
    "content": "use crate::models::{User, Address, City};\n\nfn process_user(user: &User) {\n    user.address.save();\n    user.address.city.get_name();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-default-constructor/src/main.rs",
    "content": "mod user;\nmod repo;\nuse crate::user::User;\nuse crate::repo::Repo;\n\nfn process_with_new() {\n    let user = User::new();\n    let repo = Repo::new();\n    user.save();\n    repo.save();\n}\n\nfn process_with_default() {\n    let user = User::default();\n    let repo = Repo::default();\n    user.save();\n    repo.save();\n}\n\nfn main() {}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-default-constructor/src/repo.rs",
    "content": "pub struct Repo;\n\nimpl Repo {\n    pub fn new() -> Self {\n        Repo\n    }\n\n    pub fn save(&self) -> bool {\n        true\n    }\n}\n\nimpl Default for Repo {\n    fn default() -> Self {\n        Repo\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-default-constructor/src/user.rs",
    "content": "pub struct User;\n\nimpl User {\n    pub fn new() -> Self {\n        User\n    }\n\n    pub fn save(&self) -> bool {\n        true\n    }\n}\n\nimpl Default for User {\n    fn default() -> Self {\n        User\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-err-unwrap/src/error.rs",
    "content": "pub struct AppError {\n    pub code: i32,\n}\n\nimpl AppError {\n    pub fn report(&self) {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-err-unwrap/src/main.rs",
    "content": "mod user;\nmod error;\nuse crate::user::User;\nuse crate::error::AppError;\n\nfn handle_err(res: Result<User, AppError>) {\n    if let Err(e) = res {\n        e.report();\n    }\n}\n\nfn handle_ok(res: Result<User, AppError>) {\n    if let Ok(user) = res {\n        user.save();\n    }\n}\n\nfn main() {}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-err-unwrap/src/repo.rs",
    "content": "pub struct Repo {\n    pub name: String,\n}\n\nimpl Repo {\n    pub fn save(&self) {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-err-unwrap/src/user.rs",
    "content": "pub struct User {\n    pub name: String,\n}\n\nimpl User {\n    pub fn save(&self) {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-field-types/models.rs",
    "content": "pub struct Address {\n    pub city: String,\n}\n\nimpl Address {\n    pub fn save(&self) {\n        // persist address\n    }\n}\n\npub struct User {\n    pub name: String,\n    pub address: Address,\n}\n\nimpl User {\n    pub fn greet(&self) -> &str {\n        &self.name\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-field-types/service.rs",
    "content": "use crate::models::{User, Address};\n\nfn process_user(user: &User) {\n    user.address.save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-for-call-expr/src/main.rs",
    "content": "mod user;\nmod repo;\nuse crate::user::get_users;\nuse crate::repo::get_repos;\n\nfn process_users() {\n    for user in get_users() {\n        user.save();\n    }\n}\n\nfn process_repos() {\n    for repo in get_repos() {\n        repo.save();\n    }\n}\n\nfn main() {}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-for-call-expr/src/repo.rs",
    "content": "pub struct Repo {\n    pub name: String,\n}\n\nimpl Repo {\n    pub fn save(&self) {}\n}\n\npub fn get_repos() -> Vec<Repo> {\n    vec![Repo { name: \"main\".into() }]\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-for-call-expr/src/user.rs",
    "content": "pub struct User {\n    pub name: String,\n}\n\nimpl User {\n    pub fn save(&self) {}\n}\n\npub fn get_users() -> Vec<User> {\n    vec![User { name: \"alice\".into() }]\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-for-loop/src/main.rs",
    "content": "mod user;\nmod repo;\nuse crate::user::User;\nuse crate::repo::Repo;\n\nfn process_users(users: Vec<User>) {\n    for user in &users {\n        user.save();\n    }\n}\n\nfn process_repos(repos: Vec<Repo>) {\n    for repo in &repos {\n        repo.save();\n    }\n}\n\nfn main() {}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-for-loop/src/repo.rs",
    "content": "pub struct Repo {\n    pub name: String,\n}\n\nimpl Repo {\n    pub fn save(&self) {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-for-loop/src/user.rs",
    "content": "pub struct User {\n    pub name: String,\n}\n\nimpl User {\n    pub fn save(&self) {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-grouped-imports/src/helpers/mod.rs",
    "content": "pub fn format_name(name: &str) -> String {\n    format!(\"Hello, {}\", name)\n}\n\npub fn validate_email(email: &str) -> bool {\n    email.contains('@')\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-grouped-imports/src/main.rs",
    "content": "mod helpers;\n\nuse crate::helpers::{format_name, validate_email};\n\nfn main() {\n    let name = format_name(\"world\");\n    let valid = validate_email(\"test@example.com\");\n    println!(\"{} {}\", name, valid);\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-if-let/main.rs",
    "content": "mod models;\nuse models::{User, Config};\n\nfn get_user() -> User {\n    User { name: \"alice\".to_string(), age: 30 }\n}\n\nfn process_if_let() {\n    // captured_pattern: user @ User { .. } — binds 'user' with type 'User'\n    if let user @ User { .. } = get_user() {\n        user.save();\n    }\n}\n\nfn process_while_let() {\n    // captured_pattern inside while-let\n    while let cfg @ Config { .. } = get_config() {\n        cfg.validate();\n    }\n}\n\nfn get_config() -> Config {\n    Config { debug: true }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-if-let/models.rs",
    "content": "pub struct User {\n    pub name: String,\n    pub age: u32,\n}\n\nimpl User {\n    pub fn save(&self) {}\n}\n\npub struct Config {\n    pub debug: bool,\n}\n\nimpl Config {\n    pub fn validate(&self) -> bool {\n        true\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-if-let-unwrap/models/mod.rs",
    "content": "// Placeholder — actual module structure is in src/\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-if-let-unwrap/models/repo.rs",
    "content": "// Placeholder — actual Repo definition is in src/repo.rs\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-if-let-unwrap/models/user.rs",
    "content": "// Placeholder — actual User definition is in src/user.rs\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-if-let-unwrap/src/main.rs",
    "content": "mod user;\nmod repo;\nuse crate::user::User;\n\nfn process(opt: Option<User>) {\n    if let Some(user) = opt {\n        user.save();\n    }\n}\n\nfn main() {}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-if-let-unwrap/src/repo.rs",
    "content": "pub struct Repo {\n    pub name: String,\n}\n\nimpl Repo {\n    pub fn save(&self) {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-if-let-unwrap/src/user.rs",
    "content": "pub struct User {\n    pub name: String,\n}\n\nimpl User {\n    pub fn save(&self) {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-iter-for-loop/src/main.rs",
    "content": "mod user;\nmod repo;\nuse crate::user::User;\nuse crate::repo::Repo;\n\nfn process_users(users: Vec<User>) {\n    for user in users.iter() {\n        user.save();\n    }\n}\n\nfn process_repos(repos: Vec<Repo>) {\n    for repo in repos.into_iter() {\n        repo.save();\n    }\n}\n\nfn main() {}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-iter-for-loop/src/repo.rs",
    "content": "pub struct Repo {\n    pub name: String,\n}\n\nimpl Repo {\n    pub fn save(&self) {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-iter-for-loop/src/user.rs",
    "content": "pub struct User {\n    pub name: String,\n}\n\nimpl User {\n    pub fn save(&self) {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-local-shadow/Cargo.toml",
    "content": "[package]\nname = \"rust-local-shadow\"\nversion = \"0.1.0\"\nedition = \"2021\"\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-local-shadow/src/main.rs",
    "content": "mod utils;\nuse utils::save;\n\n// Local function shadows imported save\nfn save(data: &str) {\n    println!(\"local save: {}\", data);\n}\n\nfn run() {\n    save(\"test\");\n}\n\nfn main() {\n    run();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-local-shadow/src/utils.rs",
    "content": "pub fn save(data: &str) {\n    println!(\"utils save: {}\", data);\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-match-unwrap/src/main.rs",
    "content": "mod user;\nmod repo;\nuse crate::user::User;\nuse crate::repo::Repo;\n\nfn process(opt: Option<User>) {\n    match opt {\n        Some(user) => user.save(),\n        None => {},\n    }\n}\n\nfn check(res: Result<Repo, String>) {\n    if let Ok(repo) = res {\n        repo.save();\n    }\n}\n\nfn main() {}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-match-unwrap/src/repo.rs",
    "content": "pub struct Repo {\n    pub name: String,\n}\n\nimpl Repo {\n    pub fn save(&self) {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-match-unwrap/src/user.rs",
    "content": "pub struct User {\n    pub name: String,\n}\n\nimpl User {\n    pub fn save(&self) {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-member-calls/src/main.rs",
    "content": "mod user;\nuse crate::user::User;\n\nfn process_user() -> bool {\n    let u = User;\n    u.save()\n}\n\nfn main() {}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-member-calls/src/user.rs",
    "content": "pub struct User;\n\nimpl User {\n    pub fn save(&self) -> bool {\n        true\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-method-chain-binding/src/main.rs",
    "content": "mod models;\nuse models::get_user;\n\nfn process_chain() {\n    let user = get_user();\n    let addr = user.address;\n    let city = addr.get_city();\n    city.save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-method-chain-binding/src/models.rs",
    "content": "pub struct City {\n    pub name: String,\n}\n\nimpl City {\n    pub fn save(&self) -> bool { true }\n}\n\npub struct Address {\n    pub city: City,\n}\n\nimpl Address {\n    pub fn get_city(&self) -> &City { &self.city }\n}\n\npub struct User {\n    pub address: Address,\n}\n\npub fn get_user() -> User {\n    User { address: Address { city: City { name: \"NYC\".to_string() } } }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-nullable-receiver/src/main.rs",
    "content": "mod user;\nmod repo;\nuse crate::user::User;\nuse crate::repo::Repo;\n\nfn find_user() -> Option<User> {\n    Some(User)\n}\n\nfn find_repo() -> Option<Repo> {\n    Some(Repo)\n}\n\nfn process_entities() {\n    let user: Option<User> = find_user();\n    user.unwrap().save();\n    let repo: Option<Repo> = find_repo();\n    repo.unwrap().save();\n}\n\nfn main() {}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-nullable-receiver/src/repo.rs",
    "content": "pub struct Repo;\n\nimpl Repo {\n    pub fn save(&self) -> bool {\n        false\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-nullable-receiver/src/user.rs",
    "content": "pub struct User;\n\nimpl User {\n    pub fn save(&self) -> bool {\n        true\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-option-receiver/src/main.rs",
    "content": "mod user;\nmod repo;\nuse crate::user::User;\nuse crate::repo::Repo;\n\n// Tests that Option<User> unwraps to User in TypeEnv,\n// and assignment chain from Option-typed source resolves correctly.\nfn process_entities() {\n    let opt: Option<User> = Some(User);\n    let alias = opt;\n    alias.save();\n\n    let repo: Repo = Repo;\n    repo.save();\n}\n\nfn main() {}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-option-receiver/src/repo.rs",
    "content": "pub struct Repo;\n\nimpl Repo {\n    pub fn save(&self) {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-option-receiver/src/user.rs",
    "content": "pub struct User;\n\nimpl User {\n    pub fn save(&self) {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-parent-resolution/src/lib.rs",
    "content": "pub mod serializable;\npub mod user;\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-parent-resolution/src/serializable.rs",
    "content": "pub trait Serializable {\n    fn serialize(&self) -> String;\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-parent-resolution/src/user.rs",
    "content": "use crate::serializable::Serializable;\n\npub struct User;\n\nimpl Serializable for User {\n    fn serialize(&self) -> String {\n        String::new()\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-receiver-resolution/src/main.rs",
    "content": "mod user;\nmod repo;\nuse crate::user::User;\nuse crate::repo::Repo;\n\nfn process_entities() {\n    let user: User = User;\n    let repo: Repo = Repo;\n    user.save();\n    repo.save();\n}\n\nfn main() {}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-receiver-resolution/src/repo.rs",
    "content": "pub struct Repo;\n\nimpl Repo {\n    pub fn save(&self) -> bool {\n        false\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-receiver-resolution/src/user.rs",
    "content": "pub struct User;\n\nimpl User {\n    pub fn save(&self) -> bool {\n        true\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-reexport-chain/src/main.rs",
    "content": "mod models;\nuse crate::models::Handler;\n\nfn main() {\n    let h = Handler { name: String::from(\"test\") };\n    h.process();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-reexport-chain/src/models/handler.rs",
    "content": "pub struct Handler {\n    pub name: String,\n}\n\nimpl Handler {\n    pub fn process(&self) -> bool {\n        true\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-reexport-chain/src/models/mod.rs",
    "content": "pub mod handler;\npub use handler::Handler;\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-return-type/src/main.rs",
    "content": "mod models;\nuse crate::models::get_user;\n\nfn main() {\n    let user = get_user(\"alice\");\n    user.save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-return-type/src/models.rs",
    "content": "pub struct User {\n    pub name: String,\n}\n\nimpl User {\n    pub fn save(&self) {}\n}\n\npub fn get_user(name: &str) -> User {\n    User { name: name.to_string() }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-return-type-inference/src/main.rs",
    "content": "mod models;\n\nuse models::{User, Repo};\n\nfn get_user() -> User {\n    User { name: String::from(\"alice\") }\n}\n\nfn get_repo() -> Repo {\n    Repo { name: String::from(\"main\") }\n}\n\nfn process_user() {\n    let user = get_user();\n    user.save();\n}\n\nfn process_repo() {\n    let repo = get_repo();\n    repo.save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-return-type-inference/src/models.rs",
    "content": "pub struct User {\n    pub name: String,\n}\n\nimpl User {\n    pub fn save(&self) -> bool {\n        true\n    }\n}\n\npub struct Repo {\n    pub name: String,\n}\n\nimpl Repo {\n    pub fn save(&self) -> bool {\n        true\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-self-struct-literal/main.rs",
    "content": "mod models;\nuse models::User;\n\nfn main() {\n    let user = User::blank();\n    user.greet();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-self-struct-literal/models.rs",
    "content": "pub struct User {\n    pub name: String,\n}\n\nimpl User {\n    pub fn blank() -> Self {\n        let fresh = Self { name: String::new() };\n        fresh.validate();\n        fresh\n    }\n\n    pub fn validate(&self) -> bool {\n        !self.name.is_empty()\n    }\n\n    pub fn greet(&self) -> String {\n        format!(\"Hello, {}\", self.name)\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-self-this-resolution/src/repo.rs",
    "content": "pub struct Repo;\n\nimpl Repo {\n    pub fn save(&self) -> bool { true }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-self-this-resolution/src/user.rs",
    "content": "pub struct User;\n\nimpl User {\n    pub fn save(&self) -> bool { true }\n    pub fn process(&self) {\n        self.save();\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-struct-destructuring/main.rs",
    "content": "mod point;\nmod vec2;\n\nuse crate::point::Point;\n\nfn process(p: Point) {\n    let Point { x, y } = p;\n    x.save();\n    y.save();\n}\n\nfn main() {}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-struct-destructuring/point.rs",
    "content": "use crate::vec2::Vec2;\n\npub struct Point {\n    pub x: Vec2,\n    pub y: Vec2,\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-struct-destructuring/vec2.rs",
    "content": "pub struct Vec2 {\n    pub value: f32,\n}\n\nimpl Vec2 {\n    pub fn save(&self) {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-struct-literal-inference/main.rs",
    "content": "mod models;\nuse models::{User, Config};\n\nfn main() {\n    let user = User { name: \"alice\".to_string(), age: 30 };\n    user.save();\n\n    let config = Config { debug: true };\n    config.validate();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-struct-literal-inference/models.rs",
    "content": "pub struct User {\n    pub name: String,\n    pub age: u32,\n}\n\nimpl User {\n    pub fn save(&self) {}\n    pub fn greet(&self) -> String {\n        format!(\"Hello, {}\", self.name)\n    }\n}\n\npub struct Config {\n    pub debug: bool,\n}\n\nimpl Config {\n    pub fn validate(&self) -> bool {\n        true\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-struct-literals/app.rs",
    "content": "mod user;\nuse user::User;\n\nfn process_user(name: String) {\n    let user = User { name };\n    user.save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-struct-literals/user.rs",
    "content": "pub struct User {\n    pub name: String,\n}\n\nimpl User {\n    pub fn save(&self) -> bool {\n        true\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-traits/src/impls/button.rs",
    "content": "use crate::traits::{Drawable, Clickable};\n\npub struct Button {\n    label: String,\n    enabled: bool,\n}\n\nimpl Drawable for Button {\n    fn draw(&self) {\n        println!(\"{}\", self.label);\n    }\n\n    fn resize(&self, width: u32, height: u32) {\n    }\n}\n\nimpl Clickable for Button {\n    fn on_click(&self) {\n        println!(\"clicked\");\n    }\n\n    fn is_enabled(&self) -> bool {\n        self.enabled\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-traits/src/main.rs",
    "content": "mod traits;\nmod impls;\n\nuse crate::impls::button::Button;\n\nfn main() {\n    let btn = Button { label: String::from(\"OK\"), enabled: true };\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-traits/src/traits/clickable.rs",
    "content": "pub trait Clickable {\n    fn on_click(&self);\n    fn is_enabled(&self) -> bool;\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-traits/src/traits/drawable.rs",
    "content": "pub trait Drawable {\n    fn draw(&self);\n    fn resize(&self, width: u32, height: u32);\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-write-access/models.rs",
    "content": "pub struct Address {\n    pub city: String,\n}\n\npub struct User {\n    pub name: String,\n    pub address: Address,\n    pub score: i32,\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/rust-write-access/service.rs",
    "content": "use crate::models::{User, Address};\n\nfn update_user(user: &mut User) {\n    user.name = String::from(\"Alice\");\n    user.address = Address { city: String::from(\"NYC\") };\n    user.score += 10;\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/swift-constructor-type-inference/Models/Repo.swift",
    "content": "class Repo {\n    let dbName: String\n\n    init(dbName: String) {\n        self.dbName = dbName\n    }\n\n    func save() -> Bool {\n        return false\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/swift-constructor-type-inference/Models/User.swift",
    "content": "class User {\n    let name: String\n\n    init(name: String) {\n        self.name = name\n    }\n\n    func save() -> Bool {\n        return true\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/swift-constructor-type-inference/Services/App.swift",
    "content": "import Models\n\nfunc processEntities() {\n    let user = User(name: \"alice\")\n    let repo = Repo(dbName: \"maindb\")\n    user.save()\n    repo.save()\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/swift-init-cross-file/User.swift",
    "content": "class User {\n    var name: String\n\n    init(name: String) {\n        self.name = name\n    }\n\n    func save() {}\n    func greet() -> String {\n        return \"Hello, \\(name)\"\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/swift-init-cross-file/main.swift",
    "content": "func main() {\n    let user = User.init(name: \"alice\")\n    user.save()\n    user.greet()\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/swift-parent-resolution/Sources/Models/BaseModel.swift",
    "content": "class BaseModel {\n    func save() -> Bool { return true }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/swift-parent-resolution/Sources/Models/Serializable.swift",
    "content": "protocol Serializable {\n    func serialize() -> String\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/swift-parent-resolution/Sources/Models/User.swift",
    "content": "class User: BaseModel, Serializable {\n    func serialize() -> String { return \"\" }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/swift-return-type/App.swift",
    "content": "func processUser() {\n    let user = getUser(name: \"alice\")\n    user.save()\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/swift-return-type/Models.swift",
    "content": "class User {\n    let name: String\n\n    init(name: String) {\n        self.name = name\n    }\n\n    func save() {}\n}\n\nfunc getUser(name: String) -> User {\n    return User(name: name)\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/swift-return-type-inference/App.swift",
    "content": "func getUser() -> User {\n    return User(name: \"alice\")\n}\n\nfunc getRepo() -> Repo {\n    return Repo(name: \"main\")\n}\n\nfunc processUser() {\n    let user = getUser()\n    user.save()\n}\n\nfunc processRepo() {\n    let repo = getRepo()\n    repo.save()\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/swift-return-type-inference/Models.swift",
    "content": "class User {\n    var name: String\n    init(name: String) { self.name = name }\n    func save() -> Bool { return true }\n}\n\nclass Repo {\n    var name: String\n    init(name: String) { self.name = name }\n    func save() -> Bool { return true }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/swift-self-this-resolution/Sources/Models/Repo.swift",
    "content": "class Repo {\n    func save() -> Bool { return true }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/swift-self-this-resolution/Sources/Models/User.swift",
    "content": "class User {\n    func save() -> Bool { return true }\n    func process() {\n        self.save()\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-assignment-chain/src/app.ts",
    "content": "import { User } from './user';\nimport { Repo } from './repo';\n\nfunction getUser(): User { return new User(); }\nfunction getRepo(): Repo { return new Repo(); }\n\nexport function processEntities(): void {\n  const u: User = getUser();\n  const alias = u;\n  alias.save();\n\n  const r: Repo = getRepo();\n  const rAlias = r;\n  rAlias.save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-assignment-chain/src/repo.ts",
    "content": "export class Repo {\n  save(): boolean {\n    return false;\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-assignment-chain/src/user.ts",
    "content": "export class User {\n  save(): boolean {\n    return true;\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-call-result-binding/app.ts",
    "content": "import { getUser } from './service';\n\nfunction processUser() {\n  const user = getUser('alice');\n  user.save();\n}\n\nfunction processAlias() {\n  const user = getUser('bob');\n  const alias = user;\n  alias.save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-call-result-binding/models.ts",
    "content": "export class User {\n  name: string;\n\n  constructor(name: string) {\n    this.name = name;\n  }\n\n  save(): boolean {\n    return true;\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-call-result-binding/service.ts",
    "content": "import { User } from './models';\n\nexport function getUser(name: string): User {\n  return new User(name);\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-class-field-foreach/app.ts",
    "content": "import { User } from './models/user';\nimport { Repo } from './models/repo';\n\nclass UserService {\n  private users: User[] = [];\n  private repos: Map<string, Repo> = new Map();\n\n  processUsers() {\n    for (const user of this.users) {\n      user.save();\n    }\n  }\n\n  processRepos() {\n    for (const repo of this.repos.values()) {\n      repo.save();\n    }\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-class-field-foreach/models/repo.ts",
    "content": "export class Repo {\n  save() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-class-field-foreach/models/user.ts",
    "content": "export class User {\n  save() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-deep-field-chain/models.ts",
    "content": "export class City {\n  zipCode: string;\n\n  getName(): string {\n    return 'city';\n  }\n}\n\nexport class Address {\n  city: City;\n  street: string;\n\n  save(): void {\n    // persist address\n  }\n}\n\nexport class User {\n  name: string;\n  address: Address;\n\n  greet(): string {\n    return this.name;\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-deep-field-chain/service.ts",
    "content": "import { User } from './models';\n\nfunction processUser(user: User) {\n  // 2-level chain: user.address → Address, then .save() → Address#save\n  user.address.save();\n\n  // 3-level chain: user.address → Address, .city → City, .getName() → City#getName\n  user.address.city.getName();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-field-type-disambig/address.ts",
    "content": "export class Address {\n  city: string;\n\n  save(): void {\n    // persist address\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-field-type-disambig/service.ts",
    "content": "import { User } from './user';\n\nfunction processUser(user: User) {\n  // Field-access chain: user.address resolves to Address, then .save() must resolve\n  // to Address#save (NOT User#save) — only lookupFieldByOwner can disambiguate.\n  user.address.save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-field-type-disambig/user.ts",
    "content": "import { Address } from './address';\n\nexport class User {\n  name: string;\n  address: Address;\n\n  save(): void {\n    // persist user\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-fixpoint-for-loop/src/app.ts",
    "content": "import { getUsers } from './models';\n\nfunction process() {\n  const users = getUsers();\n  for (const u of users) {\n    u.save();\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-fixpoint-for-loop/src/models.ts",
    "content": "export class User {\n  save(): void {}\n}\nexport function getUsers(): User[] {\n  return [];\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-grandparent-resolution/src/app.ts",
    "content": "import { C } from './derived';\n\nfunction process() {\n  const c = new C();\n  c.greet().save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-grandparent-resolution/src/base.ts",
    "content": "import { Greeting } from './greeting';\n\nexport class A {\n  greet(): Greeting { return new Greeting(); }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-grandparent-resolution/src/derived.ts",
    "content": "import { B } from './middle';\nexport class C extends B {}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-grandparent-resolution/src/greeting.ts",
    "content": "export class Greeting {\n  save(): void {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-grandparent-resolution/src/middle.ts",
    "content": "import { A } from './base';\nexport class B extends A {}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-method-chain-binding/app.ts",
    "content": "import { getUser } from './service';\n\nfunction processChain() {\n  const user = getUser();\n  const addr = user.address;\n  const city = addr.getCity();\n  city.save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-method-chain-binding/models.ts",
    "content": "export class City {\n  name: string;\n\n  constructor(name: string) {\n    this.name = name;\n  }\n\n  save(): boolean {\n    return true;\n  }\n}\n\nexport class Address {\n  city: City;\n\n  constructor(city: City) {\n    this.city = city;\n  }\n\n  getCity(): City {\n    return this.city;\n  }\n}\n\nexport class User {\n  address: Address;\n\n  constructor(address: Address) {\n    this.address = address;\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-method-chain-binding/service.ts",
    "content": "import { User, Address, City } from './models';\n\nexport function getUser(): User {\n  return new User(new Address(new City('NYC')));\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-mixed-chain/models.ts",
    "content": "export class City {\n  getName(): string {\n    return 'city';\n  }\n}\n\nexport class Address {\n  city: City;\n\n  save(): void {\n    // persist address\n  }\n}\n\nexport class User {\n  address: Address;\n\n  getAddress(): Address {\n    return this.address;\n  }\n}\n\nexport class UserService {\n  getUser(): User {\n    return new User();\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-mixed-chain/service.ts",
    "content": "import { User, UserService } from './models';\n\nfunction processWithService(svc: UserService) {\n  // call → field → call: svc.getUser().address.save()\n  svc.getUser().address.save();\n}\n\nfunction processWithUser(user: User) {\n  // field → call → call: user.getAddress().city.getName()\n  user.getAddress().city.getName();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-multi-hop-chain/src/app.ts",
    "content": "import { User } from './user';\nimport { Repo } from './repo';\n\nfunction getUser(): User { return new User(); }\nfunction getRepo(): Repo { return new Repo(); }\n\n// Multi-hop forward-declared chain: a → b → c (source order)\n// All three should resolve because the post-walk pass processes in order.\nexport function multiHopForward(): void {\n  const a: User = getUser();\n  const b = a;\n  const c = b;\n  c.save();\n}\n\n// Multi-hop with Repo to prove disambiguation\nexport function multiHopRepo(): void {\n  const a: Repo = getRepo();\n  const b = a;\n  const c = b;\n  c.save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-multi-hop-chain/src/repo.ts",
    "content": "export class Repo {\n  save(): boolean {\n    return false;\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-multi-hop-chain/src/user.ts",
    "content": "export class User {\n  save(): boolean {\n    return true;\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-null-check-narrowing/src/app.ts",
    "content": "import { User } from './models';\n\nfunction processStrict(x: User | null) {\n  if (x !== null) {\n    x.save();\n  }\n}\n\nfunction processLoose(x: User | null) {\n  if (x != null) {\n    x.save();\n  }\n}\n\nfunction processUndefined(x: User | undefined) {\n  if (x !== undefined) {\n    x.save();\n  }\n}\n\nconst processFuncExpr = function(x: User | null) {\n  if (x !== null) {\n    x.save();\n  }\n};\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-null-check-narrowing/src/models.ts",
    "content": "export class User {\n  save(): void {}\n}\n\nexport class Repo {\n  save(): void {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-nullable-chain/src/app.ts",
    "content": "import { User } from './user';\nimport { Repo } from './repo';\n\nfunction findUser(): User | null { return new User(); }\nfunction findRepo(): Repo | undefined { return new Repo(); }\n\n// Nullable type + assignment chain: the nullable union must be stripped\n// before the alias can resolve to User.\nexport function nullableChainUser(): void {\n  const u: User | null = findUser();\n  const alias = u;\n  alias.save();\n}\n\n// Same pattern with Repo | undefined\nexport function nullableChainRepo(): void {\n  const r: Repo | undefined = findRepo();\n  const alias = r;\n  alias.save();\n}\n\n// Triple nullable: User | null | undefined → still User\nexport function tripleNullable(): void {\n  const u: User | null | undefined = findUser();\n  const alias = u;\n  alias.save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-nullable-chain/src/repo.ts",
    "content": "export class Repo {\n  save(): boolean {\n    return false;\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-nullable-chain/src/user.ts",
    "content": "export class User {\n  save(): boolean {\n    return true;\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-nullable-receiver/src/app.ts",
    "content": "import { User } from './user';\nimport { Repo } from './repo';\n\nexport function processEntities(): void {\n  const user: User | null = new User();\n  const repo: Repo | undefined = new Repo();\n\n  // Optional chain calls — receiver should still resolve\n  user?.save();\n  user?.greet('hello');\n  repo?.save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-nullable-receiver/src/repo.ts",
    "content": "export class Repo {\n  save(): boolean {\n    return false;\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-nullable-receiver/src/user.ts",
    "content": "export class User {\n  save(): boolean {\n    return true;\n  }\n\n  greet(msg: string): void {\n    console.log(msg);\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-object-destructuring/src/app.ts",
    "content": "import { getUser } from './service';\n\nfunction processDestructured() {\n  const user = getUser();\n  const { address } = user;\n  address.save();\n}\n\nfunction processMultiField() {\n  const user = getUser();\n  const { name, address } = user;\n  address.save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-object-destructuring/src/models.ts",
    "content": "export class Address {\n  city: string = '';\n  save(): boolean { return true; }\n}\n\nexport class User {\n  name: string = '';\n  address: Address = new Address();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-object-destructuring/src/service.ts",
    "content": "import { User } from './models';\nexport function getUser(): User { return new User(); }\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-optional-params/src/app.ts",
    "content": "function greet(name: string, greeting: string = \"Hello\"): string {\n  return greeting + \", \" + name;\n}\n\nfunction search(query: string, limit?: number): string[] {\n  return [];\n}\n\nfunction process() {\n  greet(\"Alice\");\n  greet(\"Bob\", \"Hi\");\n  search(\"test\");\n  search(\"test\", 10);\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-overload-disambiguation/src/app.ts",
    "content": "function lookup(id: number): string;\nfunction lookup(name: string): string;\nfunction lookup(key: number | string): string {\n    return String(key);\n}\n\nfunction process() {\n    lookup(42);\n    lookup(\"alice\");\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-param-property-fields/models.ts",
    "content": "export class Address {\n  city: string;\n\n  save(): void {\n    // persist address\n  }\n}\n\nexport class User {\n  #secret: string;\n\n  constructor(\n    public name: string,\n    public address: Address,\n  ) {\n    this.#secret = 'hidden';\n  }\n\n  greet(): string {\n    return this.name;\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-param-property-fields/service.ts",
    "content": "import { User } from './models';\n\nfunction processUser(user: User) {\n  user.address.save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-readonly-foreach/app.ts",
    "content": "import { User } from './models/user';\nimport { Repo } from './models/repo';\n\nfunction processUsers(users: readonly User[]) {\n  for (const user of users) {\n    user.save();\n  }\n}\n\nfunction processRepos(repos: readonly Repo[]) {\n  for (const repo of repos) {\n    repo.save();\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-readonly-foreach/models/repo.ts",
    "content": "export class Repo {\n  save() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-readonly-foreach/models/user.ts",
    "content": "export class User {\n  save() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-return-type-inference/app.ts",
    "content": "import { getUser, fetchUserAsync } from './service';\n\nfunction processUser() {\n  const user = getUser('alice');\n  user.save();\n}\n\nasync function processUserAsync() {\n  const user = await fetchUserAsync('bob');\n  user.save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-return-type-inference/models.ts",
    "content": "export class User {\n  name: string;\n\n  constructor(name: string) {\n    this.name = name;\n  }\n\n  save(): boolean {\n    return true;\n  }\n\n  getName(): string {\n    return this.name;\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-return-type-inference/service.ts",
    "content": "import { User } from './models';\n\nexport function getUser(name: string): User {\n  return new User(name);\n}\n\nexport function fetchUserAsync(name: string): Promise<User> {\n  return Promise.resolve(new User(name));\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-virtual-dispatch/src/app.ts",
    "content": "// All classes in same file so parentMap captures the extends relationship\n\nclass Animal {\n  speak(): string {\n    return '...';\n  }\n}\n\nclass Dog extends Animal {\n  speak(): string {\n    return 'woof';\n  }\n\n  fetchBall(): string {\n    return 'ball';\n  }\n}\n\nexport function run(): void {\n  // Virtual dispatch: declared as Animal, constructed as Dog\n  const animal: Animal = new Dog();\n  animal.fetchBall();  // Only Dog has fetchBall — proves virtual dispatch override\n\n  // Direct type: no override needed\n  const dog: Dog = new Dog();\n  dog.fetchBall();     // Direct resolution to Dog#fetchBall\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-write-access/models.ts",
    "content": "export class Address {\n  city: string;\n}\n\nexport class User {\n  name: string;\n  address: Address;\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/ts-write-access/service.ts",
    "content": "import { User, Address } from './models';\n\nfunction updateUser(user: User) {\n  // Write access: user.name = \"Alice\"\n  user.name = \"Alice\";\n\n  // Write access: user.address = new Address()\n  user.address = new Address();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-alias-imports/src/app.ts",
    "content": "import { User as U, Repo as R } from './models';\n\nexport function main() {\n  const u = new U('alice');\n  const r = new R('https://example.com');\n  u.save();\n  r.persist();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-alias-imports/src/models.ts",
    "content": "export class User {\n  name: string;\n  constructor(name: string) {\n    this.name = name;\n  }\n  save(): boolean {\n    return true;\n  }\n}\n\nexport class Repo {\n  url: string;\n  constructor(url: string) {\n    this.url = url;\n  }\n  persist(): boolean {\n    return true;\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-ambiguous/src/logger.ts",
    "content": "import { ILogger } from './models';\n\nexport class ConsoleLogger implements ILogger {\n    log(message: string): void {\n        console.log(message);\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-ambiguous/src/models.ts",
    "content": "export interface ILogger {\n    log(message: string): void;\n}\n\nexport class BaseService {\n    protected name: string = '';\n\n    getName(): string {\n        return this.name;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-ambiguous/src/service.ts",
    "content": "import { BaseService, ILogger } from './models';\nimport { ConsoleLogger } from './logger';\n\nexport class UserService extends BaseService implements ILogger {\n    log(message: string): void {\n        const logger = new ConsoleLogger();\n        logger.log(message);\n    }\n\n    getUsers(): string[] {\n        return [];\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-calls/src/one.ts",
    "content": "export function writeAudit(message: string): string {\n  return message;\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-calls/src/service.ts",
    "content": "import { writeAudit } from './one';\nimport { writeAudit as zeroWriteAudit } from './zero';\n\nexport function run(): string {\n  return writeAudit('hello');\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-calls/src/zero.ts",
    "content": "export function writeAudit(): string {\n  return 'zero';\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-cast-constructor-inference/src/app.ts",
    "content": "import { User } from './user';\nimport { Repo } from './repo';\n\nfunction process() {\n  const user = new User() as any;\n  user.save();\n\n  const repo = new Repo()!;\n  repo.save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-cast-constructor-inference/src/repo.ts",
    "content": "export class Repo {\n  save() { return true; }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-cast-constructor-inference/src/user.ts",
    "content": "export class User {\n  save() { return true; }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-chain-call/app.ts",
    "content": "import { UserService } from './services/UserService';\n\nexport function processUser(): void {\n  const svc = new UserService();\n  svc.getUser().save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-chain-call/models/Repo.ts",
    "content": "export class Repo {\n  save(): void {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-chain-call/models/User.ts",
    "content": "export class User {\n  save(): void {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-chain-call/services/UserService.ts",
    "content": "import { User } from '../models/User';\n\nexport class UserService {\n  getUser(): User {\n    return new User();\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-constructor-calls/src/app.ts",
    "content": "import { User } from './user';\n\nexport function processUser(name: string): void {\n  const user = new User(name);\n  user.save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-constructor-calls/src/user.ts",
    "content": "export class User {\n  name: string;\n  constructor(name: string) {\n    this.name = name;\n  }\n  save(): boolean {\n    return true;\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-constructor-type-inference/src/app.ts",
    "content": "import { User } from './user';\nimport { Repo } from './repo';\n\nexport function processEntities(): void {\n  const user = new User();\n  const repo = new Repo();\n  user.save();\n  repo.save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-constructor-type-inference/src/repo.ts",
    "content": "export class Repo {\n  save(): boolean {\n    return false;\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-constructor-type-inference/src/user.ts",
    "content": "export class User {\n  save(): boolean {\n    return true;\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-destructured-for-of/src/app.ts",
    "content": "import { User } from './user';\n\nfunction processEntries(entries: Map<string, User>) {\n  for (const [key, user] of entries) {\n    user.save();\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-destructured-for-of/src/repo.ts",
    "content": "export class Repo {\n  constructor(public name: string) {}\n  save() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-destructured-for-of/src/user.ts",
    "content": "export class User {\n  constructor(public name: string) {}\n  save() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-double-cast-inference/src/app.ts",
    "content": "import { User } from './user';\nimport { Repo } from './repo';\n\nfunction process() {\n  const user = new User() as unknown as any;\n  user.save();\n\n  const repo = new Repo() as unknown as object;\n  repo.save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-double-cast-inference/src/repo.ts",
    "content": "export class Repo {\n  save() { return true; }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-double-cast-inference/src/user.ts",
    "content": "export class User {\n  save() { return true; }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-for-of-call-expr/main.ts",
    "content": "import { getUsers } from './models/user';\nimport { getRepos } from './models/repo';\n\nfunction processUsers(): void {\n  for (const user of getUsers()) {\n    user.save();\n  }\n}\n\nfunction processRepos(): void {\n  for (const repo of getRepos()) {\n    repo.save();\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-for-of-call-expr/models/repo.ts",
    "content": "export class Repo {\n  name: string;\n  constructor(name: string) { this.name = name; }\n  save(): void {}\n}\n\nexport function getRepos(): Repo[] {\n  return [new Repo(\"main\")];\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-for-of-call-expr/models/user.ts",
    "content": "export class User {\n  name: string;\n  constructor(name: string) { this.name = name; }\n  save(): void {}\n}\n\nexport function getUsers(): User[] {\n  return [new User(\"alice\")];\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-generic-parent-resolution/src/models/Base.ts",
    "content": "export class BaseModel<T> {\n  save(): boolean { return true; }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-generic-parent-resolution/src/models/Repo.ts",
    "content": "export class Repo {\n  save(): boolean { return true; }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-generic-parent-resolution/src/models/User.ts",
    "content": "import { BaseModel } from './Base';\n\nexport class User extends BaseModel<string> {\n  save(): boolean {\n    super.save();\n    return true;\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-instanceof-narrowing/src/app.ts",
    "content": "import { User } from './user';\n\nfunction process(x) {\n  if (x instanceof User) {\n    x.save();\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-instanceof-narrowing/src/repo.ts",
    "content": "export class Repo {\n  constructor(public name: string) {}\n  save() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-instanceof-narrowing/src/user.ts",
    "content": "export class User {\n  constructor(public name: string) {}\n  save() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-local-shadow/src/app.ts",
    "content": "import { save } from './utils';\n\n// Local definition shadows the imported one\nfunction save(x: string): void {\n  console.log('local save:', x);\n}\n\nfunction run(): void {\n  save('test');\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-local-shadow/src/utils.ts",
    "content": "export function save(data: string): void {\n  console.log('utils save:', data);\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-member-access-for-loop/src/app.ts",
    "content": "import { User } from './models/User';\nimport { Repo } from './models/Repo';\n\nclass UserService {\n    processUsers(users: User[]) {\n        for (const user of this.users) {\n            user.save();\n        }\n    }\n}\n\nclass RepoService {\n    processRepos(repos: Repo[]) {\n        for (const repo of this.repos) {\n            repo.save();\n        }\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-member-access-for-loop/src/models/Repo.ts",
    "content": "export class Repo {\n    save() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-member-access-for-loop/src/models/User.ts",
    "content": "export class User {\n    save() {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-member-calls/src/app.ts",
    "content": "import { User } from './user';\n\nexport function processUser(): boolean {\n  const user = new User();\n  return user.save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-member-calls/src/user.ts",
    "content": "export class User {\n  save(): boolean {\n    return true;\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-named-imports/src/app.ts",
    "content": "import { formatData } from './format-upper';\n\nexport function processInput(): string {\n  return formatData('hello');\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-named-imports/src/format-prefix.ts",
    "content": "export function formatData(data: string, prefix: string): string {\n  return prefix + data;\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-named-imports/src/format-upper.ts",
    "content": "export function formatData(data: string): string {\n  return data.toUpperCase();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-overloaded-receiver/app.ts",
    "content": "import { User } from './models/User';\nimport { Repo } from './models/Repo';\n\n// Both 'user' and 'repo' are created via constructor inference (no type annotation).\n// The enclosing scope is 'run@0', with varNames 'user' and 'repo'.\n// user.save() must resolve to User#save and repo.save() must resolve to Repo#save.\nexport function run(): void {\n  const user = new User();\n  const repo = new Repo();\n  user.save();\n  repo.save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-overloaded-receiver/db/Cache.ts",
    "content": "export class Cache {\n  store(): void {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-overloaded-receiver/db/Database.ts",
    "content": "export class Database {\n  persist(): void {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-overloaded-receiver/models/Repo.ts",
    "content": "export class Repo {\n  save(): boolean {\n    return false;\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-overloaded-receiver/models/User.ts",
    "content": "export class User {\n  save(): boolean {\n    return true;\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-parent-resolution/src/models/Base.ts",
    "content": "export class BaseModel {\n  save(): boolean { return true; }\n}\n\nexport interface Serializable {\n  serialize(): string;\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-parent-resolution/src/models/User.ts",
    "content": "import { BaseModel, Serializable } from './Base';\n\nexport class User extends BaseModel implements Serializable {\n  serialize(): string { return ''; }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-receiver-resolution/src/app.ts",
    "content": "import { User } from './user';\nimport { Repo } from './repo';\n\nexport function processEntities(): void {\n  const user: User = new User();\n  const repo: Repo = new Repo();\n  user.save();\n  repo.save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-receiver-resolution/src/repo.ts",
    "content": "export class Repo {\n  save(): boolean {\n    return false;\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-receiver-resolution/src/user.ts",
    "content": "export class User {\n  save(): boolean {\n    return true;\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-reexport-chain/src/app.ts",
    "content": "import { User, Repo } from './models';\n\nfunction main(): void {\n  const user = new User();\n  user.save();\n\n  const repo = new Repo();\n  repo.persist();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-reexport-chain/src/base.ts",
    "content": "export class User {\n  save(): void {\n    console.log('saving user');\n  }\n}\n\nexport class Repo {\n  persist(): void {\n    console.log('persisting repo');\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-reexport-chain/src/models.ts",
    "content": "// Barrel re-export — no local definitions\nexport { User } from './base';\nexport { Repo } from './base';\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-reexport-type/src/app.ts",
    "content": "import type { User, Repo } from './models';\n\nfunction main(): void {\n  const user = new User();\n  user.save();\n\n  const repo = new Repo();\n  repo.persist();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-reexport-type/src/base.ts",
    "content": "export class User {\n  save(): void {\n    console.log('saving user');\n  }\n}\n\nexport class Repo {\n  persist(): void {\n    console.log('persisting repo');\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-reexport-type/src/models.ts",
    "content": "// Type-only re-exports\nexport type { User } from './base';\nexport type { Repo } from './base';\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-scoped-receiver/src/app.ts",
    "content": "import { User } from './user';\nimport { Repo } from './repo';\n\nexport function handleUser(entity: User): void {\n  entity.save();\n}\n\nexport function handleRepo(entity: Repo): void {\n  entity.save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-scoped-receiver/src/repo.ts",
    "content": "export class Repo {\n  save(): void {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-scoped-receiver/src/user.ts",
    "content": "export class User {\n  save(): void {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-self-this-resolution/src/models/Repo.ts",
    "content": "export class Repo {\n  save(): boolean { return true; }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-self-this-resolution/src/models/User.ts",
    "content": "export class User {\n  save(): boolean { return true; }\n  process(): void {\n    this.save();\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-static-chain/app.ts",
    "content": "import { UserService } from './services/UserService';\n\n// Chain base is a class name, not a variable.\n// Requires class-as-receiver fallback on the chain base resolution.\nexport function processUser(): void {\n  UserService.findUser().save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-static-chain/models/Repo.ts",
    "content": "export class Repo {\n  save(): void {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-static-chain/models/User.ts",
    "content": "export class User {\n  save(): void {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-static-chain/services/UserService.ts",
    "content": "import { User } from '../models/User';\n\nexport class UserService {\n  static findUser(): User {\n    return new User();\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-super-resolution/src/models/Base.ts",
    "content": "export class BaseModel {\n  save(): boolean { return true; }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-super-resolution/src/models/Repo.ts",
    "content": "export class Repo {\n  save(): boolean { return true; }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-super-resolution/src/models/User.ts",
    "content": "import { BaseModel } from './Base';\n\nexport class User extends BaseModel {\n  save(): boolean {\n    super.save();\n    return true;\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-typed-param-chain/app.ts",
    "content": "import { UserService } from './services/UserService';\n\n// svc is typed via parameter annotation, NOT constructor binding.\n// The chain base type must come from typeEnv, not receiverMap.\nexport function processUser(svc: UserService): void {\n  svc.getUser().save();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-typed-param-chain/models/Repo.ts",
    "content": "export class Repo {\n  save(): void {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-typed-param-chain/models/User.ts",
    "content": "export class User {\n  save(): void {}\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-typed-param-chain/services/UserService.ts",
    "content": "import { User } from '../models/User';\n\nexport class UserService {\n  getUser(): User {\n    return new User();\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-variadic-resolution/src/app.ts",
    "content": "import { logEntry } from './logger';\n\nexport function processInput(): void {\n  logEntry('hello', 'world', 'test');\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/lang-resolution/typescript-variadic-resolution/src/logger.ts",
    "content": "export function logEntry(...messages: string[]): void {\n  console.log(messages.join(' '));\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/local-backend-seed.ts",
    "content": "import type { FTSIndexDef } from '../helpers/test-indexed-db.js';\n\nexport const LOCAL_BACKEND_SEED_DATA = [\n  // Files\n  `CREATE (f:File {id: 'file:auth.ts', name: 'auth.ts', filePath: 'src/auth.ts', content: 'auth module'})`,\n  `CREATE (f:File {id: 'file:utils.ts', name: 'utils.ts', filePath: 'src/utils.ts', content: 'utils module'})`,\n  // Functions\n  `CREATE (fn:Function {id: 'func:login', name: 'login', filePath: 'src/auth.ts', startLine: 1, endLine: 15, isExported: true, content: 'function login() {}', description: 'User login'})`,\n  `CREATE (fn:Function {id: 'func:validate', name: 'validate', filePath: 'src/auth.ts', startLine: 17, endLine: 25, isExported: true, content: 'function validate() {}', description: 'Validate input'})`,\n  `CREATE (fn:Function {id: 'func:hash', name: 'hash', filePath: 'src/utils.ts', startLine: 1, endLine: 8, isExported: true, content: 'function hash() {}', description: 'Hash utility'})`,\n  // Class\n  `CREATE (c:Class {id: 'class:AuthService', name: 'AuthService', filePath: 'src/auth.ts', startLine: 30, endLine: 60, isExported: true, content: 'class AuthService {}', description: 'Authentication service'})`,\n  `CREATE (c:Class {id: 'class:BaseService', name: 'BaseService', filePath: 'src/base.ts', startLine: 1, endLine: 20, isExported: true, content: 'class BaseService {}', description: 'Base service class'})`,\n  // Methods\n  `CREATE (m:Method {id: 'method:AuthService.authenticate', name: 'authenticate', filePath: 'src/auth.ts', startLine: 35, endLine: 45, isExported: false, content: 'authenticate() {}', description: 'Authenticate user'})`,\n  `CREATE (m:Method {id: 'method:BaseService.authenticate', name: 'authenticate', filePath: 'src/base.ts', startLine: 5, endLine: 10, isExported: false, content: 'authenticate() {}', description: 'Base authenticate'})`,\n  // Community\n  `CREATE (c:Community {id: 'comm:auth', label: 'Auth', heuristicLabel: 'Authentication', keywords: ['auth', 'login'], description: 'Auth module', enrichedBy: 'heuristic', cohesion: 0.8, symbolCount: 3})`,\n  // Process\n  `CREATE (p:Process {id: 'proc:login-flow', label: 'LoginFlow', heuristicLabel: 'User Login', processType: 'intra_community', stepCount: 2, communities: ['auth'], entryPointId: 'func:login', terminalId: 'func:validate'})`,\n  // Relationships\n  `MATCH (a:Function), (b:Function) WHERE a.id = 'func:login' AND b.id = 'func:validate'\n   CREATE (a)-[:CodeRelation {type: 'CALLS', confidence: 1.0, reason: 'direct', step: 0}]->(b)`,\n  `MATCH (a:Function), (b:Function) WHERE a.id = 'func:login' AND b.id = 'func:hash'\n   CREATE (a)-[:CodeRelation {type: 'CALLS', confidence: 0.9, reason: 'import-resolved', step: 0}]->(b)`,\n  `MATCH (a:Function), (c:Community) WHERE a.id = 'func:login' AND c.id = 'comm:auth'\n   CREATE (a)-[:CodeRelation {type: 'MEMBER_OF', confidence: 1.0, reason: '', step: 0}]->(c)`,\n  `MATCH (a:Function), (p:Process) WHERE a.id = 'func:login' AND p.id = 'proc:login-flow'\n   CREATE (a)-[:CodeRelation {type: 'STEP_IN_PROCESS', confidence: 1.0, reason: '', step: 1}]->(p)`,\n  `MATCH (a:Function), (p:Process) WHERE a.id = 'func:validate' AND p.id = 'proc:login-flow'\n   CREATE (a)-[:CodeRelation {type: 'STEP_IN_PROCESS', confidence: 1.0, reason: '', step: 2}]->(p)`,\n  // HAS_METHOD: AuthService -> authenticate\n  `MATCH (c:Class), (m:Method) WHERE c.id = 'class:AuthService' AND m.id = 'method:AuthService.authenticate'\n   CREATE (c)-[:CodeRelation {type: 'HAS_METHOD', confidence: 1.0, reason: 'class-method', step: 0}]->(m)`,\n  // OVERRIDES: AuthService.authenticate -> BaseService.authenticate\n  `MATCH (a:Method), (b:Method) WHERE a.id = 'method:AuthService.authenticate' AND b.id = 'method:BaseService.authenticate'\n   CREATE (a)-[:CodeRelation {type: 'OVERRIDES', confidence: 1.0, reason: 'mro-resolution', step: 0}]->(b)`,\n  // HAS_METHOD: BaseService -> authenticate\n  `MATCH (c:Class), (m:Method) WHERE c.id = 'class:BaseService' AND m.id = 'method:BaseService.authenticate'\n   CREATE (c)-[:CodeRelation {type: 'HAS_METHOD', confidence: 1.0, reason: 'class-method', step: 0}]->(m)`,\n];\n\nexport const LOCAL_BACKEND_FTS_INDEXES: FTSIndexDef[] = [\n  { table: 'Function', indexName: 'function_fts', columns: ['name', 'content', 'description'] },\n  { table: 'Class', indexName: 'class_fts', columns: ['name', 'content', 'description'] },\n  { table: 'Method', indexName: 'method_fts', columns: ['name', 'content', 'description'] },\n  { table: 'File', indexName: 'file_fts', columns: ['name', 'content'] },\n];\n"
  },
  {
    "path": "gitnexus/test/fixtures/mini-repo/src/db.ts",
    "content": "import type { ValidationResult } from './validator';\n\nexport interface DbRecord {\n  id: string;\n  value: string;\n  timestamp: number;\n}\n\nexport async function saveToDb(input: ValidationResult): Promise<DbRecord> {\n  return {\n    id: Math.random().toString(36),\n    value: input.value,\n    timestamp: Date.now(),\n  };\n}\n\nexport async function findById(id: string): Promise<DbRecord | null> {\n  return null;\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/mini-repo/src/formatter.ts",
    "content": "import type { DbRecord } from './db';\n\nexport function formatResponse(record: DbRecord): string {\n  return JSON.stringify({\n    success: true,\n    data: record,\n  });\n}\n\nexport function formatError(message: string): string {\n  return JSON.stringify({\n    success: false,\n    error: message,\n  });\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/mini-repo/src/handler.ts",
    "content": "import { validateInput } from './validator';\nimport { saveToDb } from './db';\nimport { formatResponse } from './formatter';\n\nexport class RequestHandler {\n  async handleRequest(input: string): Promise<string> {\n    const validated = validateInput(input);\n    const saved = await saveToDb(validated);\n    return formatResponse(saved);\n  }\n}\n\nexport function createHandler(): RequestHandler {\n  return new RequestHandler();\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/mini-repo/src/index.ts",
    "content": "export { RequestHandler, createHandler } from './handler';\nexport { validateInput, sanitize } from './validator';\nexport { formatResponse, formatError } from './formatter';\nexport { processRequest, errorMiddleware } from './middleware';\nexport { createLogEntry, formatLogEntry, logMessage } from './logger';\n"
  },
  {
    "path": "gitnexus/test/fixtures/mini-repo/src/logger.ts",
    "content": "export interface LogEntry {\n  level: string;\n  message: string;\n  timestamp: number;\n}\n\nexport function createLogEntry(level: string, message: string): LogEntry {\n  return { level, message, timestamp: Date.now() };\n}\n\nexport function formatLogEntry(entry: LogEntry): string {\n  return `[${entry.level}] ${entry.message}`;\n}\n\nexport function logMessage(level: string, message: string): string {\n  const entry = createLogEntry(level, message);\n  return formatLogEntry(entry);\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/mini-repo/src/middleware.ts",
    "content": "import { sanitize } from './validator';\nimport { logMessage } from './logger';\n\nexport function processRequest(input: string): string {\n  const clean = sanitize(input);\n  return logMessage('info', `Processing: ${clean}`);\n}\n\nexport function errorMiddleware(error: string): string {\n  return logMessage('error', error);\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/mini-repo/src/validator.ts",
    "content": "export interface ValidationResult {\n  valid: boolean;\n  value: string;\n}\n\nexport function validateInput(input: string): ValidationResult {\n  if (!input || input.trim().length === 0) {\n    return { valid: false, value: '' };\n  }\n  return { valid: true, value: input.trim() };\n}\n\nexport function sanitize(input: string): string {\n  return input.replace(/[<>]/g, '');\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/sample-code/simple.c",
    "content": "#include <stdio.h>\n\nint add(int a, int b) {\n    return a + b;\n}\n\nstatic int internal_helper(void) {\n    return 0;\n}\n\nvoid print_message(const char* msg) {\n    printf(\"%s\\n\", msg);\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/sample-code/simple.cpp",
    "content": "#include <string>\n#include <vector>\n\nclass UserManager {\npublic:\n    void addUser(const std::string& name) {\n        users_.push_back(name);\n    }\n\n    int getCount() const {\n        return static_cast<int>(users_.size());\n    }\n\nprivate:\n    std::vector<std::string> users_;\n};\n\nint helperFunction(int x) {\n    return x * 2;\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/sample-code/simple.cs",
    "content": "using System;\nusing System.Collections.Generic;\n\nnamespace SampleApp\n{\n    public interface ICalculator\n    {\n        int Add(int a, int b);\n    }\n\n    public class Calculator : ICalculator\n    {\n        public int Result { get; private set; }\n\n        public Calculator() { Result = 0; }\n\n        public int Add(int a, int b)\n        {\n            Result = a + b;\n            LogResult(Result);\n            return Result;\n        }\n\n        private void LogResult(int value)\n        {\n            Console.WriteLine(value);\n        }\n\n        private int Multiply(int a, int b) { return a * b; }\n    }\n\n    internal class Helper\n    {\n        public void DoWork()\n        {\n            var calc = new Calculator();\n            calc.Add(1, 2);\n        }\n    }\n\n    public enum Operation\n    {\n        Add,\n        Subtract,\n        Multiply\n    }\n\n    public record CalculationResult(int Value, Operation Op);\n\n    public struct Point\n    {\n        public int X { get; set; }\n        public int Y { get; set; }\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/sample-code/simple.go",
    "content": "package main\n\nimport \"fmt\"\n\n// ExportedFunction is a public function\nfunc ExportedFunction(name string) string {\n\treturn fmt.Sprintf(\"Hello, %s\", name)\n}\n\n// unexportedFunction is a private function\nfunc unexportedFunction() int {\n\treturn 42\n}\n\ntype UserService struct {\n\tName string\n}\n\nfunc (s *UserService) GetName() string {\n\treturn s.Name\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/sample-code/simple.java",
    "content": "public class UserService {\n    private String name;\n\n    public UserService(String name) {\n        this.name = name;\n    }\n\n    public String getName() {\n        return this.name;\n    }\n\n    private void reset() {\n        this.name = \"\";\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/sample-code/simple.js",
    "content": "const path = require('path');\n\nclass EventEmitter {\n  constructor() {\n    this.listeners = {};\n  }\n\n  on(event, callback) {\n    if (!this.listeners[event]) {\n      this.listeners[event] = [];\n    }\n    this.listeners[event].push(callback);\n  }\n\n  emit(event, ...args) {\n    const handlers = this.listeners[event] || [];\n    handlers.forEach(handler => handler(...args));\n  }\n}\n\nfunction createLogger(prefix) {\n  return {\n    log: (msg) => console.log(`[${prefix}] ${msg}`),\n    error: (msg) => console.error(`[${prefix}] ${msg}`),\n  };\n}\n\nconst formatDate = (date) => {\n  return date.toISOString().split('T')[0];\n};\n\nmodule.exports = { EventEmitter, createLogger, formatDate };\n"
  },
  {
    "path": "gitnexus/test/fixtures/sample-code/simple.php",
    "content": "<?php\n\nfunction topLevelFunction(string $name): string {\n    return \"Hello, \" . $name;\n}\n\nclass UserRepository {\n    private array $users = [];\n\n    public function addUser(string $name): void {\n        $this->users[] = $name;\n    }\n\n    private function validateName(string $name): bool {\n        return strlen($name) > 0;\n    }\n\n    public function getUsers(): array {\n        return $this->users;\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/sample-code/simple.py",
    "content": "def public_function(x: int, y: int) -> int:\n    \"\"\"A public function.\"\"\"\n    return x + y\n\ndef _private_helper(data: str) -> str:\n    \"\"\"A private helper function.\"\"\"\n    return data.strip()\n\nclass Calculator:\n    def add(self, a: int, b: int) -> int:\n        return a + b\n\n    def _reset(self) -> None:\n        pass\n"
  },
  {
    "path": "gitnexus/test/fixtures/sample-code/simple.rs",
    "content": "pub fn public_function(x: i32) -> i32 {\n    x + 1\n}\n\nfn private_function() -> &'static str {\n    \"private\"\n}\n\npub struct Config {\n    pub name: String,\n}\n\nimpl Config {\n    pub fn new(name: &str) -> Self {\n        Config { name: name.to_string() }\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/sample-code/simple.swift",
    "content": "class UserManager {\n    var users: [String] = []\n\n    init() {\n        users = []\n    }\n\n    func addUser(_ name: String) {\n        users.append(name)\n    }\n\n    public func getCount() -> Int {\n        return users.count\n    }\n}\n\nfunc helperFunction() -> String {\n    return \"swift helper\"\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/sample-code/simple.ts",
    "content": "export interface UserConfig {\n  name: string;\n  email: string;\n  active: boolean;\n}\n\nexport function validateUser(config: UserConfig): boolean {\n  return config.name.length > 0 && config.email.includes('@');\n}\n\nexport class UserService {\n  private users: UserConfig[] = [];\n\n  addUser(user: UserConfig): void {\n    if (validateUser(user)) {\n      this.users.push(user);\n    }\n  }\n\n  getUser(name: string): UserConfig | undefined {\n    return this.users.find(u => u.name === name);\n  }\n}\n\nfunction internalHelper(): string {\n  return 'helper';\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/sample-code/simple.tsx",
    "content": "import React, { useState } from 'react';\n\ninterface ButtonProps {\n  label: string;\n  onClick: () => void;\n}\n\nexport class Counter extends React.Component<{}, { count: number }> {\n  state = { count: 0 };\n\n  increment() {\n    this.setState({ count: this.state.count + 1 });\n  }\n\n  render() {\n    return <button onClick={() => this.increment()}>{this.state.count}</button>;\n  }\n}\n\nexport const Button: React.FC<ButtonProps> = ({ label, onClick }) => {\n  return <button onClick={onClick}>{label}</button>;\n};\n\nexport function useCounter(initial: number = 0) {\n  const [count, setCount] = useState(initial);\n  const increment = () => setCount(c => c + 1);\n  const decrement = () => setCount(c => c - 1);\n  return { count, increment, decrement };\n}\n\nconst App = () => {\n  const { count, increment } = useCounter();\n  return (\n    <div>\n      <h1>Count: {count}</h1>\n      <Button label=\"+\" onClick={increment} />\n    </div>\n  );\n};\n\nexport default App;\n"
  },
  {
    "path": "gitnexus/test/fixtures/sample-code/swift-extension.swift",
    "content": "protocol Greetable {\n    func greet() -> String\n}\n\nclass Person {\n    var name: String\n    init(name: String) {\n        self.name = name\n    }\n}\n\nextension Person: Greetable {\n    func greet() -> String {\n        return \"Hello, \\(name)\"\n    }\n}\n"
  },
  {
    "path": "gitnexus/test/fixtures/search-seed.ts",
    "content": "import type { FTSIndexDef } from '../helpers/test-indexed-db.js';\n\nexport const SEARCH_SEED_DATA = [\n  // File nodes — content is the searchable field\n  `CREATE (n:File {id: 'file:auth.ts', name: 'auth.ts', filePath: 'src/auth.ts', content: 'authentication module for user login and session management'})`,\n  `CREATE (n:File {id: 'file:router.ts', name: 'router.ts', filePath: 'src/router.ts', content: 'HTTP request routing and middleware pipeline'})`,\n  `CREATE (n:File {id: 'file:utils.ts', name: 'utils.ts', filePath: 'src/utils.ts', content: 'general utility functions for string manipulation'})`,\n\n  // Function nodes\n  `CREATE (n:Function {id: 'func:validateUser', name: 'validateUser', filePath: 'src/auth.ts', startLine: 10, endLine: 30, isExported: true, content: 'validates user credentials and authentication tokens', description: 'user auth validator'})`,\n  `CREATE (n:Function {id: 'func:hashPassword', name: 'hashPassword', filePath: 'src/auth.ts', startLine: 35, endLine: 50, isExported: true, content: 'hashes user password with bcrypt for secure authentication', description: 'password hashing'})`,\n  `CREATE (n:Function {id: 'func:handleRoute', name: 'handleRoute', filePath: 'src/router.ts', startLine: 1, endLine: 20, isExported: true, content: 'handles HTTP request routing to controllers', description: 'route handler'})`,\n  `CREATE (n:Function {id: 'func:formatString', name: 'formatString', filePath: 'src/utils.ts', startLine: 1, endLine: 10, isExported: true, content: 'formats a string with template placeholders', description: 'string formatter'})`,\n\n  // Class nodes\n  `CREATE (n:Class {id: 'class:AuthService', name: 'AuthService', filePath: 'src/auth.ts', startLine: 55, endLine: 120, isExported: true, content: 'authentication service handling user login logout and token refresh', description: 'auth service class'})`,\n\n  // Method nodes\n  `CREATE (n:Method {id: 'method:AuthService.login', name: 'login', filePath: 'src/auth.ts', startLine: 60, endLine: 80, isExported: false, content: 'authenticates user with username and password returning JWT token', description: 'login method'})`,\n\n  // Interface nodes\n  `CREATE (n:Interface {id: 'iface:UserCredentials', name: 'UserCredentials', filePath: 'src/auth.ts', startLine: 1, endLine: 8, isExported: true, content: 'interface for user authentication credentials username password', description: 'credentials interface'})`,\n];\n\nexport const SEARCH_FTS_INDEXES: FTSIndexDef[] = [\n  { table: 'File', indexName: 'file_fts', columns: ['name', 'content'] },\n  { table: 'Function', indexName: 'function_fts', columns: ['name', 'content', 'description'] },\n  { table: 'Class', indexName: 'class_fts', columns: ['name', 'content', 'description'] },\n  { table: 'Method', indexName: 'method_fts', columns: ['name', 'content', 'description'] },\n  { table: 'Interface', indexName: 'interface_fts', columns: ['name', 'content', 'description'] },\n];\n"
  },
  {
    "path": "gitnexus/test/global-setup.ts",
    "content": "/**\n * Vitest globalSetup — runs once in the MAIN process before any forks.\n *\n * Creates a single shared LadybugDB with full schema so that forked test\n * files only need to clear + reseed data instead of recreating the\n * entire schema each time (~29 DDL queries per file eliminated).\n *\n * The dbPath is shared with test files via vitest's provide/inject API.\n */\nimport path from 'path';\nimport lbug from '@ladybugdb/core';\nimport type { GlobalSetupContext } from 'vitest/node';\nimport { createTempDir } from './helpers/test-db.js';\nimport {\n  NODE_SCHEMA_QUERIES,\n  REL_SCHEMA_QUERIES,\n  EMBEDDING_SCHEMA,\n} from '../src/core/lbug/schema.js';\n\nexport default async function setup({ provide }: GlobalSetupContext) {\n  const tmpHandle = await createTempDir('gitnexus-shared-');\n  const dbPath = path.join(tmpHandle.dbPath, 'lbug');\n\n  // Create DB with full schema\n  const db = new lbug.Database(dbPath);\n  const conn = new lbug.Connection(db);\n\n  for (const q of NODE_SCHEMA_QUERIES) {\n    await conn.query(q);\n  }\n  for (const q of REL_SCHEMA_QUERIES) {\n    await conn.query(q);\n  }\n  await conn.query(EMBEDDING_SCHEMA);\n\n  // Pre-install FTS extension so forks don't need to download it\n  try {\n    await conn.query('INSTALL fts');\n    await conn.query('LOAD EXTENSION fts');\n  } catch {\n    // FTS may already be installed system-wide — not fatal\n  }\n\n  await conn.close();\n  await db.close();\n\n  // Share the dbPath with all test files via inject('lbugDbPath')\n  provide('lbugDbPath', dbPath);\n\n  // Teardown: remove temp directory after all tests complete\n  return async () => {\n    await tmpHandle.cleanup();\n  };\n}\n"
  },
  {
    "path": "gitnexus/test/helpers/test-db.ts",
    "content": "/**\n * Test helper: Temporary LadybugDB factory\n *\n * Creates a temp directory, initializes LadybugDB with schema, and\n * optionally loads minimal test data. Returns a cleanup function.\n */\nimport fs from 'fs/promises';\nimport os from 'os';\nimport path from 'path';\n\nexport interface TestDBHandle {\n  dbPath: string;\n  cleanup: () => Promise<void>;\n}\n\n/**\n * Create a temporary directory for LadybugDB tests.\n * Returns the path and a cleanup function.\n */\nexport async function createTempDir(prefix: string = 'gitnexus-test-'): Promise<TestDBHandle> {\n  const tmpDir = await fs.mkdtemp(path.join(os.tmpdir(), prefix));\n  return {\n    dbPath: tmpDir,\n    cleanup: async () => {\n      try {\n        await fs.rm(tmpDir, { recursive: true, force: true });\n      } catch {\n        // best-effort cleanup\n      }\n    },\n  };\n}\n"
  },
  {
    "path": "gitnexus/test/helpers/test-graph.ts",
    "content": "/**\n * Test helper: In-memory knowledge graph builder\n *\n * Provides a convenient API for constructing test graphs\n * without touching the filesystem or LadybugDB.\n */\nimport { createKnowledgeGraph } from '../../src/core/graph/graph.js';\nimport type { KnowledgeGraph, GraphNode, NodeLabel, RelationshipType } from '../../src/core/graph/types.js';\n\nexport interface TestNodeInput {\n  id: string;\n  label: NodeLabel;\n  name: string;\n  filePath: string;\n  startLine?: number;\n  endLine?: number;\n  isExported?: boolean;\n  extra?: Record<string, any>;\n}\n\nexport interface TestRelInput {\n  sourceId: string;\n  targetId: string;\n  type: RelationshipType;\n  confidence?: number;\n  reason?: string;\n  step?: number;\n}\n\n/**\n * Build a test graph from simple input arrays.\n */\nexport function buildTestGraph(\n  nodes: TestNodeInput[],\n  relationships: TestRelInput[] = [],\n): KnowledgeGraph {\n  const graph = createKnowledgeGraph();\n\n  for (const n of nodes) {\n    graph.addNode({\n      id: n.id,\n      label: n.label,\n      properties: {\n        name: n.name,\n        filePath: n.filePath,\n        startLine: n.startLine,\n        endLine: n.endLine,\n        isExported: n.isExported,\n        ...n.extra,\n      },\n    });\n  }\n\n  for (const r of relationships) {\n    graph.addRelationship({\n      id: `${r.sourceId}-${r.type}-${r.targetId}`,\n      sourceId: r.sourceId,\n      targetId: r.targetId,\n      type: r.type,\n      confidence: r.confidence ?? 1.0,\n      reason: r.reason ?? '',\n      step: r.step,\n    });\n  }\n\n  return graph;\n}\n\n/**\n * Create a minimal graph with a few files, functions, and relationships.\n * Useful as a baseline for integration tests.\n */\nexport function createMinimalTestGraph(): KnowledgeGraph {\n  return buildTestGraph(\n    [\n      { id: 'File:src/index.ts', label: 'File', name: 'index.ts', filePath: 'src/index.ts' },\n      { id: 'File:src/utils.ts', label: 'File', name: 'utils.ts', filePath: 'src/utils.ts' },\n      { id: 'Function:src/index.ts:main:1', label: 'Function', name: 'main', filePath: 'src/index.ts', startLine: 1, endLine: 10, isExported: true },\n      { id: 'Function:src/utils.ts:helper:1', label: 'Function', name: 'helper', filePath: 'src/utils.ts', startLine: 1, endLine: 5, isExported: true },\n      { id: 'Class:src/index.ts:App:12', label: 'Class', name: 'App', filePath: 'src/index.ts', startLine: 12, endLine: 30, isExported: true },\n      { id: 'Folder:src', label: 'Folder', name: 'src', filePath: 'src' },\n    ],\n    [\n      { sourceId: 'Function:src/index.ts:main:1', targetId: 'Function:src/utils.ts:helper:1', type: 'CALLS' },\n      { sourceId: 'Function:src/index.ts:main:1', targetId: 'Class:src/index.ts:App:12', type: 'CALLS' },\n      { sourceId: 'File:src/index.ts', targetId: 'Function:src/index.ts:main:1', type: 'CONTAINS' },\n      { sourceId: 'File:src/utils.ts', targetId: 'Function:src/utils.ts:helper:1', type: 'CONTAINS' },\n    ],\n  );\n}\n"
  },
  {
    "path": "gitnexus/test/helpers/test-indexed-db.ts",
    "content": "/**\n * Test helper: Indexed LadybugDB lifecycle manager\n *\n * Uses a shared LadybugDB created by globalSetup (test/global-setup.ts).\n * Each test file clears all data, reseeds, and initializes adapters —\n * avoiding per-file schema creation overhead.\n *\n * Cleanup properly closes adapters and releases native resources.\n *\n * Each test file gets a unique repoId to prevent MCP pool map collisions.\n * Seed data is NOT included — each test provides its own via options.seed.\n */\n/// <reference path=\"../vitest.d.ts\" />\nimport path from 'path';\nimport { describe, beforeAll, afterAll, inject } from 'vitest';\nimport type { TestDBHandle } from './test-db.js';\nimport {\n  NODE_TABLES,\n  EMBEDDING_TABLE_NAME,\n} from '../../src/core/lbug/schema.js';\n\nexport interface IndexedDBHandle {\n  /** Path to the LadybugDB database file */\n  dbPath: string;\n  /** Unique repoId for MCP pool adapter — prevents cross-file collisions */\n  repoId: string;\n  /** Temp directory handle for filesystem cleanup */\n  tmpHandle: TestDBHandle;\n  /** Cleanup: closes adapters and releases native resources */\n  cleanup: () => Promise<void>;\n}\n\nlet repoCounter = 0;\n\n/** FTS index definition for withTestLbugDB */\nexport interface FTSIndexDef {\n  table: string;\n  indexName: string;\n  columns: string[];\n}\n\n/**\n * Options for withTestLbugDB lifecycle.\n *\n * Lifecycle: initLbug → loadFTS → dropFTS → clearData → seed\n *            → createFTS → [closeCoreLbug + poolInitLbug] → afterSetup\n */\nexport interface WithTestLbugDBOptions {\n  /** Cypher CREATE queries to insert seed data (runs before core adapter opens). */\n  seed?: string[];\n  /** FTS indexes to create after seeding. */\n  ftsIndexes?: FTSIndexDef[];\n  /** Close core adapter and open pool adapter (read-only) after FTS setup. */\n  poolAdapter?: boolean;\n  /** Run after all lifecycle phases complete (mocks, dynamic imports, etc). 
*/\n  afterSetup?: (handle: IndexedDBHandle) => Promise<void>;\n  /** Timeout for beforeAll in ms (default: 30000). */\n  timeout?: number;\n}\n\n/**\n * Manages the full LadybugDB test lifecycle using the shared global DB:\n * data clearing, reseeding, FTS indexes, adapter init/teardown.\n *\n * All data operations go through the core adapter's writable connection —\n * no raw lbug.Database() connections are opened.  This avoids file-lock\n * conflicts with orphaned native objects from previous test files.\n *\n * Each call is wrapped in its own `describe` block to isolate lifecycle\n * hooks — safe to call multiple times in the same file.\n */\nexport function withTestLbugDB(\n  prefix: string,\n  fn: (handle: IndexedDBHandle) => void,\n  options?: WithTestLbugDBOptions,\n): void {\n  const ref: { handle: IndexedDBHandle | undefined } = { handle: undefined };\n  const timeout = options?.timeout ?? 30000;\n\n  const setup = async () => {\n    // Get shared DB path from globalSetup (created once with full schema)\n    const dbPath = inject<'lbugDbPath'>('lbugDbPath');\n    const repoId = `test-${prefix}-${Date.now()}-${repoCounter++}`;\n\n    const adapter = await import('../../src/core/lbug/lbug-adapter.js');\n\n    // 1. Init core adapter (writable) — reuses existing connection if\n    //    already open for this dbPath (no new native objects created).\n    await adapter.initLbug(dbPath);\n\n    // 2. Load FTS extension (idempotent — skips if already loaded)\n    await adapter.loadFTSExtension();\n\n    // 3. Drop stale FTS indexes from previous test file\n    if (options?.ftsIndexes?.length) {\n      for (const idx of options.ftsIndexes) {\n        try { await adapter.dropFTSIndex(idx.table, idx.indexName); } catch { /* may not exist */ }\n      }\n    }\n\n    // 4. 
Clear all data via adapter (DETACH DELETE cascades to relationships)\n    for (const table of NODE_TABLES) {\n      await adapter.executeQuery(`MATCH (n:\\`${table}\\`) DETACH DELETE n`);\n    }\n    await adapter.executeQuery(`MATCH (n:${EMBEDDING_TABLE_NAME}) DELETE n`);\n\n    // 5. Seed new data via adapter\n    if (options?.seed?.length) {\n      for (const q of options.seed) {\n        await adapter.executeQuery(q);\n      }\n    }\n\n    // 6. Create FTS indexes on fresh data\n    if (options?.ftsIndexes?.length) {\n      for (const idx of options.ftsIndexes) {\n        await adapter.createFTSIndex(idx.table, idx.indexName, idx.columns);\n      }\n    }\n\n    // 7. Open pool adapter by injecting the core adapter's writable Database.\n    //    LadybugDB enforces file locks — writable + read-only can't coexist\n    //    on the same path, and db.close() segfaults on macOS due to N-API\n    //    destructor issues.  Reusing the writable Database avoids both problems.\n    //    Write protection is enforced at the query validation layer (isWriteQuery)\n    //    rather than at the native DB level.\n    if (options?.poolAdapter) {\n      const coreDb = adapter.getDatabase();\n      if (!coreDb) throw new Error('withTestLbugDB: core adapter has no open Database');\n      const { initLbugWithDb } = await import('../../src/mcp/core/lbug-adapter.js');\n      await initLbugWithDb(repoId, coreDb, dbPath);\n    }\n\n    const cleanup = async () => {\n      if (options?.poolAdapter) {\n        const poolAdapter = await import('../../src/mcp/core/lbug-adapter.js');\n        await poolAdapter.closeLbug(repoId);\n      }\n      await adapter.closeLbug();\n    };\n\n    // tmpHandle.dbPath → parent temp dir (not the lbug file) so tests\n    // that create sibling directories (e.g. 
'storage') still work.\n    const tmpDir = path.dirname(dbPath);\n    const tmpHandle: TestDBHandle = { dbPath: tmpDir, cleanup: async () => {} };\n    ref.handle = { dbPath, repoId, tmpHandle, cleanup };\n\n    // 8. User's final setup (mocks, dynamic imports, etc.)\n    if (options?.afterSetup) {\n      await options.afterSetup(ref.handle);\n    }\n  };\n\n  const lazyHandle = new Proxy({} as IndexedDBHandle, {\n    get(_target, prop) {\n      if (!ref.handle) throw new Error('withTestLbugDB: handle not initialized — beforeAll has not run yet');\n      return (ref.handle as any)[prop];\n    },\n  });\n\n  // Wrap in describe to scope beforeAll/afterAll — prevents lifecycle\n  // collisions when multiple withTestLbugDB calls share the same file.\n  describe(`withTestLbugDB(${prefix})`, () => {\n    beforeAll(setup, timeout);\n    afterAll(async () => { if (ref.handle) await ref.handle.cleanup(); });\n    fn(lazyHandle);\n  });\n}\n"
  },
  {
    "path": "gitnexus/test/integration/augmentation.test.ts",
    "content": "/**\n * Integration Tests: Augmentation Engine\n *\n * augment() against a real indexed LadybugDB\n *   - Matching pattern returns non-empty string with callers/callees\n *   - Non-matching pattern returns empty string\n *   - Pattern shorter than 3 chars returns empty string\n */\nimport { describe, it, expect, vi } from 'vitest';\nimport { withTestLbugDB } from '../helpers/test-indexed-db.js';\n\n// ─── Seed data & FTS indexes for augmentation ────────\n\nconst AUGMENT_SEED_DATA = [\n  // File nodes\n  `CREATE (n:File {id: 'file:auth.ts', name: 'auth.ts', filePath: 'src/auth.ts', content: 'authentication module for user login'})`,\n  `CREATE (n:File {id: 'file:utils.ts', name: 'utils.ts', filePath: 'src/utils.ts', content: 'utility functions for hashing'})`,\n\n  // Function nodes\n  `CREATE (n:Function {id: 'func:login', name: 'login', filePath: 'src/auth.ts', startLine: 1, endLine: 15, isExported: true, content: 'function login authenticates user credentials', description: 'user login'})`,\n  `CREATE (n:Function {id: 'func:validate', name: 'validate', filePath: 'src/auth.ts', startLine: 17, endLine: 25, isExported: true, content: 'function validate checks user input', description: 'input validation'})`,\n  `CREATE (n:Function {id: 'func:hash', name: 'hash', filePath: 'src/utils.ts', startLine: 1, endLine: 8, isExported: true, content: 'function hash computes bcrypt hash', description: 'password hashing'})`,\n\n  // Class / Method / Interface nodes\n  `CREATE (n:Class {id: 'class:AuthService', name: 'AuthService', filePath: 'src/auth.ts', startLine: 30, endLine: 60, isExported: true, content: 'class AuthService handles authentication', description: 'auth service'})`,\n  `CREATE (n:Method {id: 'method:AuthService.login', name: 'loginMethod', filePath: 'src/auth.ts', startLine: 35, endLine: 50, isExported: false, content: 'method login in AuthService', description: 'login method'})`,\n  `CREATE (n:Interface {id: 'iface:Creds', name: 'Credentials', 
filePath: 'src/auth.ts', startLine: 1, endLine: 5, isExported: true, content: 'interface Credentials for login authentication', description: 'credentials type'})`,\n\n  // Community & Process nodes\n  `CREATE (n:Community {id: 'comm:auth', label: 'Auth', heuristicLabel: 'Authentication', keywords: ['auth'], description: 'Auth cluster', enrichedBy: 'heuristic', cohesion: 0.8, symbolCount: 3})`,\n  `CREATE (n:Process {id: 'proc:login-flow', label: 'LoginFlow', heuristicLabel: 'User Login', processType: 'intra_community', stepCount: 2, communities: ['auth'], entryPointId: 'func:login', terminalId: 'func:validate'})`,\n\n  // Relationships\n  `MATCH (a:Function), (b:Function) WHERE a.id = 'func:login' AND b.id = 'func:validate'\n   CREATE (a)-[:CodeRelation {type: 'CALLS', confidence: 1.0, reason: 'direct', step: 0}]->(b)`,\n  `MATCH (a:Function), (b:Function) WHERE a.id = 'func:login' AND b.id = 'func:hash'\n   CREATE (a)-[:CodeRelation {type: 'CALLS', confidence: 0.9, reason: 'import-resolved', step: 0}]->(b)`,\n  `MATCH (a:Function), (c:Community) WHERE a.id = 'func:login' AND c.id = 'comm:auth'\n   CREATE (a)-[:CodeRelation {type: 'MEMBER_OF', confidence: 1.0, reason: '', step: 0}]->(c)`,\n  `MATCH (a:Function), (p:Process) WHERE a.id = 'func:login' AND p.id = 'proc:login-flow'\n   CREATE (a)-[:CodeRelation {type: 'STEP_IN_PROCESS', confidence: 1.0, reason: '', step: 1}]->(p)`,\n  `MATCH (a:Function), (p:Process) WHERE a.id = 'func:validate' AND p.id = 'proc:login-flow'\n   CREATE (a)-[:CodeRelation {type: 'STEP_IN_PROCESS', confidence: 1.0, reason: '', step: 2}]->(p)`,\n];\n\nconst AUGMENT_FTS_INDEXES = [\n  { table: 'File', indexName: 'file_fts', columns: ['name', 'content'] },\n  { table: 'Function', indexName: 'function_fts', columns: ['name', 'content', 'description'] },\n  { table: 'Class', indexName: 'class_fts', columns: ['name', 'content', 'description'] },\n  { table: 'Method', indexName: 'method_fts', columns: ['name', 'content', 'description'] },\n  { 
table: 'Interface', indexName: 'interface_fts', columns: ['name', 'content', 'description'] },\n];\n\n// Mock repo-manager so augment() finds our test DB\nvi.mock('../../src/storage/repo-manager.js', () => ({\n  listRegisteredRepos: vi.fn(),\n}));\n\nlet augment: (pattern: string, cwd?: string) => Promise<string>;\n\nwithTestLbugDB('augment', (handle) => {\n  describe('augment()', () => {\n    it('returns non-empty string with relationship info for a matching pattern', async () => {\n      const result = await augment('login', handle.dbPath);\n\n      expect(result.length).toBeGreaterThan(0);\n      expect(result).toContain('[GitNexus]');\n      expect(result).toContain('login');\n    });\n\n    it('returns empty string for a non-matching pattern', async () => {\n      const result = await augment('nonexistent_xyz', handle.dbPath);\n      expect(result).toBe('');\n    });\n\n    it('returns empty string for patterns shorter than 3 characters', async () => {\n      const result = await augment('ab', handle.dbPath);\n      expect(result).toBe('');\n    });\n\n    it('returns empty string for empty pattern', async () => {\n      const result = await augment('', handle.dbPath);\n      expect(result).toBe('');\n    });\n\n    // ─── Unhappy paths ────────────────────────────────────────────────\n\n    it('returns empty string for whitespace-only pattern', async () => {\n      const result = await augment('   ', handle.dbPath);\n      expect(result).toBe('');\n    });\n\n    it('handles special regex characters in pattern without throwing', async () => {\n      const result = await augment('func()', handle.dbPath);\n      expect(typeof result).toBe('string');\n    });\n\n    it('handles very long pattern without throwing', async () => {\n      const result = await augment('a'.repeat(500), handle.dbPath);\n      expect(typeof result).toBe('string');\n    });\n\n    it('handles unicode pattern without throwing', async () => {\n      const result = await augment('日本語テスト', 
handle.dbPath);\n      expect(typeof result).toBe('string');\n    });\n  });\n}, {\n  seed: AUGMENT_SEED_DATA,\n  ftsIndexes: AUGMENT_FTS_INDEXES,\n  poolAdapter: true,\n  afterSetup: async (handle) => {\n    // Configure mock to return our test DB so augment() can find it\n    const { listRegisteredRepos } = await import('../../src/storage/repo-manager.js');\n    (listRegisteredRepos as ReturnType<typeof vi.fn>).mockResolvedValue([\n      {\n        name: handle.repoId,\n        path: handle.dbPath,\n        storagePath: handle.tmpHandle.dbPath,\n        indexedAt: new Date().toISOString(),\n        lastCommit: 'abc123',\n      },\n    ]);\n\n    // Dynamically import augment after mocks are in place\n    const engine = await import('../../src/core/augmentation/engine.js');\n    augment = engine.augment;\n  },\n});\n"
  },
  {
    "path": "gitnexus/test/integration/cli-e2e.test.ts",
    "content": "/**\n * P1 Integration Tests: CLI End-to-End\n *\n * Tests CLI commands via child process spawn:\n * - statusCommand: verify stdout for unindexed repo\n * - analyzeCommand: verify pipeline runs and creates .gitnexus/ output\n *\n * Uses process.execPath (never 'node' string), no shell: true.\n * Accepts status === null (timeout) as valid on slow CI runners.\n */\nimport { describe, it, expect, beforeAll, afterAll } from 'vitest';\nimport { spawnSync, spawn } from 'child_process';\nimport path from 'path';\nimport fs from 'fs';\nimport os from 'os';\nimport { fileURLToPath, pathToFileURL } from 'url';\n\nimport { createRequire } from 'module';\n\nconst testDir = path.dirname(fileURLToPath(import.meta.url));\nconst repoRoot = path.resolve(testDir, '../..');\nconst cliEntry = path.join(repoRoot, 'src/cli/index.ts');\nconst MINI_REPO = path.resolve(testDir, '..', 'fixtures', 'mini-repo');\n\n// Absolute file:// URL to tsx loader — needed when spawning CLI with cwd\n// outside the project tree (bare 'tsx' specifier won't resolve there).\n// Cannot use require.resolve('tsx/dist/loader.mjs') because the subpath is\n// not in tsx's package.json exports; resolve the package root then join.\nconst _require = createRequire(import.meta.url);\nconst tsxPkgDir = path.dirname(_require.resolve('tsx/package.json'));\nconst tsxImportUrl = pathToFileURL(path.join(tsxPkgDir, 'dist', 'loader.mjs')).href;\n\nbeforeAll(() => {\n  // Initialize mini-repo as a git repo so the CLI analyze command\n  // can run the full pipeline (it requires a .git directory).\n  const gitDir = path.join(MINI_REPO, '.git');\n  if (!fs.existsSync(gitDir)) {\n    spawnSync('git', ['init'], { cwd: MINI_REPO, stdio: 'pipe' });\n    spawnSync('git', ['add', '-A'], { cwd: MINI_REPO, stdio: 'pipe' });\n    spawnSync('git', ['commit', '-m', 'initial commit'], {\n      cwd: MINI_REPO,\n      stdio: 'pipe',\n      env: { ...process.env, GIT_AUTHOR_NAME: 'test', GIT_AUTHOR_EMAIL: 'test@test', 
GIT_COMMITTER_NAME: 'test', GIT_COMMITTER_EMAIL: 'test@test' },\n    });\n  }\n});\n\nafterAll(() => {\n  // Clean up .git/ and .gitnexus/ directories created during the test\n  for (const dir of ['.git', '.gitnexus']) {\n    const fullPath = path.join(MINI_REPO, dir);\n    if (fs.existsSync(fullPath)) {\n      fs.rmSync(fullPath, { recursive: true, force: true });\n    }\n  }\n});\n\nfunction runCli(command: string, cwd: string, timeoutMs = 15000) {\n  return spawnSync(process.execPath, ['--import', 'tsx', cliEntry, command], {\n    cwd,\n    encoding: 'utf8',\n    timeout: timeoutMs,\n    stdio: ['pipe', 'pipe', 'pipe'],\n    env: {\n      ...process.env,\n      // Pre-set --max-old-space-size so analyzeCommand's ensureHeap() sees it\n      // and skips the re-exec. The re-exec drops the tsx loader (--import tsx\n      // is not in process.argv), causing ERR_UNKNOWN_FILE_EXTENSION on .ts files.\n      NODE_OPTIONS: `${process.env.NODE_OPTIONS || ''} --max-old-space-size=8192`.trim(),\n    },\n  });\n}\n\n/**\n * Like runCli but accepts an arbitrary extra-args array so unhappy-path tests\n * can pass flags (e.g. 
--help) or omit a command entirely.\n */\nfunction runCliRaw(extraArgs: string[], cwd: string, timeoutMs = 15000) {\n  return spawnSync(process.execPath, ['--import', 'tsx', cliEntry, ...extraArgs], {\n    cwd,\n    encoding: 'utf8',\n    timeout: timeoutMs,\n    stdio: ['pipe', 'pipe', 'pipe'],\n    env: {\n      ...process.env,\n      NODE_OPTIONS: `${process.env.NODE_OPTIONS || ''} --max-old-space-size=8192`.trim(),\n    },\n  });\n}\n\ndescribe('CLI end-to-end', () => {\n  it('status command exits cleanly', () => {\n    const result = runCli('status', MINI_REPO);\n\n    // Accept timeout as valid on slow CI\n    if (result.status === null) return;\n\n    expect(result.status).toBe(0);\n    const combined = result.stdout + result.stderr;\n    // mini-repo may or may not be indexed depending on prior test runs\n    expect(combined).toMatch(/Repository|not indexed/i);\n  });\n\n  it('analyze command runs pipeline on mini-repo', () => {\n    const result = runCli('analyze', MINI_REPO, 30000);\n\n    // Accept timeout as valid on slow CI\n    if (result.status === null) return;\n\n    expect(result.status, [\n      `analyze exited with code ${result.status}`,\n      `stdout: ${result.stdout}`,\n      `stderr: ${result.stderr}`,\n    ].join('\\n')).toBe(0);\n\n    // Successful analyze should create .gitnexus/ output directory\n    const gitnexusDir = path.join(MINI_REPO, '.gitnexus');\n    expect(fs.existsSync(gitnexusDir)).toBe(true);\n    expect(fs.statSync(gitnexusDir).isDirectory()).toBe(true);\n  });\n\n  describe('unhappy path', () => {\n    it('exits with error when no command is given', () => {\n      const result = runCliRaw([], MINI_REPO);\n\n      // Accept timeout as valid on slow CI\n      if (result.status === null) return;\n\n      // Commander exits with code 1 when no subcommand is given and\n      // prints a usage/error message to stderr.\n      expect(result.status).toBe(1);\n      const combined = result.stdout + result.stderr;\n      
expect(combined.length).toBeGreaterThan(0);\n    });\n\n    it('shows help with --help flag', () => {\n      const result = runCliRaw(['--help'], MINI_REPO);\n\n      // Accept timeout as valid on slow CI\n      if (result.status === null) return;\n\n      expect(result.status).toBe(0);\n      // Commander writes --help output to stdout.\n      expect(result.stdout).toMatch(/Usage:/i);\n      // The program name and at least one known subcommand should appear.\n      expect(result.stdout).toMatch(/gitnexus/i);\n      expect(result.stdout).toMatch(/analyze|status|serve/i);\n    });\n\n    it('fails with unknown command', () => {\n      const result = runCliRaw(['nonexistent'], MINI_REPO);\n\n      // Accept timeout as valid on slow CI\n      if (result.status === null) return;\n\n      // Commander exits with code 1 and prints an error to stderr for unknown commands.\n      expect(result.status).toBe(1);\n      expect(result.stderr).toMatch(/unknown command/i);\n    });\n  });\n\n  describe('CLI error handling', () => {\n    /**\n     * Helper to spawn CLI from a cwd outside the project tree.\n     * Uses the absolute file:// URL to tsx loader so the --import hook\n     * resolves even when cwd has no node_modules.\n     */\n    function runCliOutsideProject(args: string[], cwd: string, timeoutMs = 15000) {\n      return spawnSync(process.execPath, ['--import', tsxImportUrl, cliEntry, ...args], {\n        cwd,\n        encoding: 'utf8',\n        timeout: timeoutMs,\n        stdio: ['pipe', 'pipe', 'pipe'],\n        env: {\n          ...process.env,\n          NODE_OPTIONS: `${process.env.NODE_OPTIONS || ''} --max-old-space-size=8192`.trim(),\n        },\n      });\n    }\n\n    it('status on non-indexed repo reports not indexed', () => {\n      // MINI_REPO is inside the project tree so findRepo() walks up and\n      // finds the parent project's .gitnexus. 
Use an isolated temp git\n      // repo to guarantee no .gitnexus exists anywhere in the path.\n      const tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'cli-noindex-'));\n      try {\n        spawnSync('git', ['init'], { cwd: tmpDir, stdio: 'pipe' });\n        spawnSync('git', ['commit', '--allow-empty', '-m', 'init'], {\n          cwd: tmpDir, stdio: 'pipe',\n          env: { ...process.env, GIT_AUTHOR_NAME: 'test', GIT_AUTHOR_EMAIL: 'test@test', GIT_COMMITTER_NAME: 'test', GIT_COMMITTER_EMAIL: 'test@test' },\n        });\n\n        const result = runCliOutsideProject(['status'], tmpDir);\n        if (result.status === null) return;\n\n        expect(result.status).toBe(0);\n        expect(result.stdout).toMatch(/Repository not indexed/);\n      } finally {\n        fs.rmSync(tmpDir, { recursive: true, force: true });\n      }\n    });\n\n    it('status on non-git directory reports not a git repo', () => {\n      const tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'cli-nogit-'));\n      try {\n        const result = runCliOutsideProject(['status'], tmpDir);\n        if (result.status === null) return;\n\n        // status.ts doesn't set process.exitCode — just prints and returns\n        expect(result.status).toBe(0);\n        expect(result.stdout).toMatch(/Not a git repository/);\n      } finally {\n        fs.rmSync(tmpDir, { recursive: true, force: true });\n      }\n    });\n\n    it('analyze on non-git directory fails with exit code 1', () => {\n      const tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'cli-nogit-'));\n      try {\n        // Pass the non-git path as a separate argument via runCliRaw\n        // (runCli passes the whole string as one arg which breaks path parsing)\n        const result = runCliRaw(['analyze', tmpDir], repoRoot);\n        if (result.status === null) return;\n\n        // analyze.ts sets process.exitCode = 1 for non-git paths\n        expect(result.status).toBe(1);\n        expect(result.stdout).toMatch(/not.*git 
repository/i);\n      } finally {\n        fs.rmSync(tmpDir, { recursive: true, force: true });\n      }\n    });\n\n  });\n\n  // ─── stdout fd 1 tests (#324) ───────────────────────────────────────\n  // These tests verify that tool output goes to stdout (fd 1), not stderr.\n  // Requires analyze to have run first (the analyze test above populates .gitnexus/).\n\n  // All tool commands pass --repo to disambiguate when the global registry\n  // has multiple indexed repos (e.g. the parent project is also indexed).\n  describe('tool output goes to stdout via fd 1 (#324)', () => {\n    it('cypher: JSON appears on stdout, not stderr', () => {\n      const result = runCliRaw(['cypher', 'MATCH (n) RETURN n.name LIMIT 3', '--repo', 'mini-repo'], MINI_REPO);\n      if (result.status === null) return; // CI timeout tolerance\n\n      expect(result.status).toBe(0);\n\n      // stdout must contain valid JSON (array or object)\n      expect(() => JSON.parse(result.stdout.trim())).not.toThrow();\n\n      // stderr must NOT contain JSON — only human-readable diagnostics allowed\n      const stderrTrimmed = result.stderr.trim();\n      if (stderrTrimmed.length > 0) {\n        expect(() => JSON.parse(stderrTrimmed)).toThrow();\n      }\n    });\n\n    it('query: JSON appears on stdout, not stderr', () => {\n      // \"handler\" is a generic term likely to match something in mini-repo\n      const result = runCliRaw(['query', 'handler', '--repo', 'mini-repo'], MINI_REPO);\n      if (result.status === null) return;\n\n      expect(result.status).toBe(0);\n      expect(() => JSON.parse(result.stdout.trim())).not.toThrow();\n    });\n\n    it('impact: JSON appears on stdout, not stderr', () => {\n      const result = runCliRaw(\n        ['impact', 'handleRequest', '--direction', 'upstream', '--repo', 'mini-repo'],\n        MINI_REPO,\n      );\n      if (result.status === null) return;\n\n      expect(result.status).toBe(0);\n      // impact may return an error object (symbol not 
found) or a real result —\n      // either way it must be valid JSON on stdout\n      expect(() => JSON.parse(result.stdout.trim())).not.toThrow();\n    });\n\n    it('stdout is pipeable: cypher output parses as valid JSON', () => {\n      const result = runCliRaw(\n        ['cypher', 'MATCH (n:Function) RETURN n.name LIMIT 5', '--repo', 'mini-repo'],\n        MINI_REPO,\n      );\n      if (result.status === null) return;\n\n      expect(result.status).toBe(0);\n\n      // Simulate what jq does: parse stdout as JSON\n      const parsed = JSON.parse(result.stdout.trim());\n      expect(Array.isArray(parsed) || typeof parsed === 'object').toBe(true);\n    });\n  });\n\n  // ─── EPIPE clean exit test (#324) ───────────────────────────────────\n\n  describe('EPIPE handling (#324)', () => {\n    it('cypher: EPIPE exits with code 0, not stderr dump', () => {\n      return new Promise<void>((resolve, reject) => {\n        const child = spawn(\n          process.execPath,\n          ['--import', 'tsx', cliEntry, 'cypher', 'MATCH (n) RETURN n LIMIT 500', '--repo', 'mini-repo'],\n          {\n            cwd: MINI_REPO,\n            stdio: ['ignore', 'pipe', 'pipe'],\n            env: {\n              ...process.env,\n              NODE_OPTIONS: `${process.env.NODE_OPTIONS || ''} --max-old-space-size=8192`.trim(),\n            },\n          },\n        );\n\n        let stderrOutput = '';\n        child.stderr.on('data', (chunk: Buffer) => { stderrOutput += chunk.toString(); });\n\n        // Destroy stdout immediately — simulates `| head -0` (consumer closes early)\n        child.stdout.once('data', () => {\n          child.stdout.destroy(); // triggers EPIPE on next write\n        });\n\n        const timer = setTimeout(() => {\n          child.kill('SIGTERM');\n          // Timeout is acceptable on CI — not a failure\n          resolve();\n        }, 20000);\n\n        child.on('close', (code) => {\n          clearTimeout(timer);\n          try {\n            // Clean 
EPIPE exit: code 0\n            expect(code).toBe(0);\n            // No JSON payload should appear on stderr\n            const trimmed = stderrOutput.trim();\n            if (trimmed.length > 0) {\n              expect(() => JSON.parse(trimmed)).toThrow();\n            }\n            resolve();\n          } catch (err) {\n            reject(err);\n          }\n        });\n      });\n    }, 25000);\n  });\n\n  // ─── eval-server READY signal test (#324) ───────────────────────────\n\n  describe('eval-server READY signal (#324)', () => {\n    it('READY signal appears on stdout, not stderr', () => {\n      return new Promise<void>((resolve, reject) => {\n        const child = spawn(\n          process.execPath,\n          ['--import', 'tsx', cliEntry, 'eval-server', '--port', '0', '--idle-timeout', '3'],\n          {\n            cwd: MINI_REPO,\n            stdio: ['ignore', 'pipe', 'pipe'],\n            env: {\n              ...process.env,\n              NODE_OPTIONS: `${process.env.NODE_OPTIONS || ''} --max-old-space-size=8192`.trim(),\n            },\n          },\n        );\n\n        let stdoutBuffer = '';\n        let foundOnStdout = false;\n        let foundOnStderr = false;\n\n        child.stdout.on('data', (chunk: Buffer) => {\n          stdoutBuffer += chunk.toString();\n          if (stdoutBuffer.includes('GITNEXUS_EVAL_SERVER_READY:')) {\n            foundOnStdout = true;\n            child.kill('SIGTERM');\n          }\n        });\n\n        child.stderr.on('data', (chunk: Buffer) => {\n          const text = chunk.toString();\n          if (text.includes('GITNEXUS_EVAL_SERVER_READY:')) {\n            foundOnStderr = true;\n            child.kill('SIGTERM');\n          }\n        });\n\n        const timer = setTimeout(() => {\n          child.kill('SIGTERM');\n          // Timeout is acceptable on CI — not a failure\n          resolve();\n        }, 30000);\n\n        child.on('close', () => {\n          clearTimeout(timer);\n          try {\n    
        if (foundOnStderr) {\n              reject(new Error('READY signal appeared on stderr instead of stdout'));\n            } else if (foundOnStdout) {\n              resolve();\n            } else {\n              // eval-server may not start on all CI environments — don't fail\n              resolve();\n            }\n          } catch (err) {\n            reject(err);\n          }\n        });\n      });\n    }, 35000);\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/integration/csv-pipeline.test.ts",
    "content": "/**\n * P1 Integration Tests: CSV Pipeline\n *\n * Tests: streamAllCSVsToDisk with real graph data.\n * Covers hardening fixes: LRU cache (#24), BufferedCSVWriter flush\n */\nimport { describe, it, expect, beforeAll, afterAll } from 'vitest';\nimport fs from 'fs/promises';\nimport path from 'path';\nimport { createTempDir, type TestDBHandle } from '../helpers/test-db.js';\nimport { buildTestGraph } from '../helpers/test-graph.js';\nimport { streamAllCSVsToDisk } from '../../src/core/lbug/csv-generator.js';\n\nlet tmpHandle: TestDBHandle;\nlet csvDir: string;\nlet repoDir: string;\n\nbeforeAll(async () => {\n  tmpHandle = await createTempDir('csv-pipeline-test-');\n  csvDir = path.join(tmpHandle.dbPath, 'csv');\n  repoDir = path.join(tmpHandle.dbPath, 'repo');\n\n  // Create a fake repo directory with source files\n  await fs.mkdir(path.join(repoDir, 'src'), { recursive: true });\n  await fs.writeFile(\n    path.join(repoDir, 'src', 'index.ts'),\n    'export function main() {\\n  console.log(\"hello\");\\n  helper();\\n}\\n\\nexport class App {\\n  run() {}\\n}\\n',\n  );\n  await fs.writeFile(\n    path.join(repoDir, 'src', 'utils.ts'),\n    'export function helper() {\\n  return 42;\\n}\\n',\n  );\n});\n\nafterAll(async () => {\n  try { await tmpHandle.cleanup(); } catch { /* best-effort */ }\n});\n\ndescribe('streamAllCSVsToDisk', () => {\n  it('generates CSV files for all node types in the graph', async () => {\n    const graph = buildTestGraph(\n      [\n        { id: 'file:src/index.ts', label: 'File', name: 'index.ts', filePath: 'src/index.ts' },\n        { id: 'file:src/utils.ts', label: 'File', name: 'utils.ts', filePath: 'src/utils.ts' },\n        { id: 'func:main', label: 'Function', name: 'main', filePath: 'src/index.ts', startLine: 1, endLine: 4, isExported: true },\n        { id: 'func:helper', label: 'Function', name: 'helper', filePath: 'src/utils.ts', startLine: 1, endLine: 3, isExported: true },\n        { id: 'class:App', label: 
'Class', name: 'App', filePath: 'src/index.ts', startLine: 6, endLine: 8, isExported: true },\n        { id: 'folder:src', label: 'Folder', name: 'src', filePath: 'src' },\n      ],\n      [\n        { sourceId: 'func:main', targetId: 'func:helper', type: 'CALLS' },\n        { sourceId: 'file:src/index.ts', targetId: 'func:main', type: 'CONTAINS' },\n        { sourceId: 'file:src/utils.ts', targetId: 'func:helper', type: 'CONTAINS' },\n      ],\n    );\n\n    const result = await streamAllCSVsToDisk(graph, repoDir, csvDir);\n\n    // Check that CSV files were created\n    expect(result.nodeFiles.size).toBeGreaterThan(0);\n    expect(result.relRows).toBe(3);\n\n    // Verify File CSV\n    const fileCsv = result.nodeFiles.get('File');\n    expect(fileCsv).toBeDefined();\n    expect(fileCsv!.rows).toBe(2);\n\n    // Verify Function CSV\n    const funcCsv = result.nodeFiles.get('Function');\n    expect(funcCsv).toBeDefined();\n    expect(funcCsv!.rows).toBe(2);\n\n    // Verify Class CSV\n    const classCsv = result.nodeFiles.get('Class');\n    expect(classCsv).toBeDefined();\n    expect(classCsv!.rows).toBe(1);\n\n    // Verify Folder CSV\n    const folderCsv = result.nodeFiles.get('Folder');\n    expect(folderCsv).toBeDefined();\n    expect(folderCsv!.rows).toBe(1);\n\n    // Verify relations CSV exists\n    const relContent = await fs.readFile(result.relCsvPath, 'utf-8');\n    const relLines = relContent.trim().split('\\n');\n    expect(relLines.length).toBe(4); // header + 3 relationships\n  });\n\n  it('CSV content is properly escaped', async () => {\n    const graph = buildTestGraph([\n      {\n        id: 'file:src/index.ts',\n        label: 'File',\n        name: 'index.ts',\n        filePath: 'src/index.ts',\n      },\n    ]);\n\n    const result = await streamAllCSVsToDisk(graph, repoDir, csvDir);\n    const fileCsv = result.nodeFiles.get('File');\n    expect(fileCsv).toBeDefined();\n\n    const content = await fs.readFile(fileCsv!.csvPath, 'utf-8');\n    // 
Content should be properly quoted\n    expect(content).toContain('\"file:src/index.ts\"');\n    expect(content).toContain('\"index.ts\"');\n  });\n\n  it('handles community nodes with keywords', async () => {\n    const graph = buildTestGraph([\n      {\n        id: 'comm:auth',\n        label: 'Community' as any,\n        name: 'Auth',\n        filePath: '',\n        extra: {\n          heuristicLabel: 'Authentication',\n          keywords: ['auth', 'login', 'pass,word'],\n          description: 'Auth module',\n          enrichedBy: 'heuristic',\n          cohesion: 0.85,\n          symbolCount: 5,\n        },\n      },\n    ]);\n\n    const result = await streamAllCSVsToDisk(graph, repoDir, csvDir);\n    const commCsv = result.nodeFiles.get('Community');\n    expect(commCsv).toBeDefined();\n    expect(commCsv!.rows).toBe(1);\n\n    const content = await fs.readFile(commCsv!.csvPath, 'utf-8');\n    // Keywords with commas should be escaped with \\,\n    expect(content).toContain('pass\\\\,word');\n  });\n\n  it('handles process nodes', async () => {\n    const graph = buildTestGraph([\n      {\n        id: 'proc:flow',\n        label: 'Process' as any,\n        name: 'LoginFlow',\n        filePath: '',\n        extra: {\n          heuristicLabel: 'User Login',\n          processType: 'intra_community',\n          stepCount: 3,\n          communities: ['auth'],\n          entryPointId: 'func:login',\n          terminalId: 'func:validate',\n        },\n      },\n    ]);\n\n    const result = await streamAllCSVsToDisk(graph, repoDir, csvDir);\n    const procCsv = result.nodeFiles.get('Process');\n    expect(procCsv).toBeDefined();\n    expect(procCsv!.rows).toBe(1);\n  });\n\n  it('deduplicates File nodes', async () => {\n    const graph = buildTestGraph([\n      { id: 'file:src/index.ts', label: 'File', name: 'index.ts', filePath: 'src/index.ts' },\n      // Duplicate (same id) — should not appear twice\n    ]);\n    // Add the same node again manually\n    
graph.addNode({\n      id: 'file:src/index.ts',\n      label: 'File',\n      properties: { name: 'index.ts', filePath: 'src/index.ts' },\n    });\n\n    const result = await streamAllCSVsToDisk(graph, repoDir, csvDir);\n    const fileCsv = result.nodeFiles.get('File');\n    expect(fileCsv).toBeDefined();\n    expect(fileCsv!.rows).toBe(1);\n  });\n\n  // ─── Unhappy paths ──────────────────────────────────────────────────\n\n  it('handles empty graph (zero nodes)', async () => {\n    const graph = buildTestGraph([], []);\n    const result = await streamAllCSVsToDisk(graph, repoDir, csvDir);\n    expect(result.nodeFiles.size).toBe(0);\n    expect(result.relRows).toBe(0);\n  });\n\n  it('handles node with empty string properties', async () => {\n    const graph = buildTestGraph([\n      { id: 'file:empty', label: 'File', name: '', filePath: '' },\n    ]);\n\n    const result = await streamAllCSVsToDisk(graph, repoDir, csvDir);\n    const fileCsv = result.nodeFiles.get('File');\n    expect(fileCsv).toBeDefined();\n    expect(fileCsv!.rows).toBe(1);\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/integration/enrichment.test.ts",
    "content": "/**\n * Integration Tests: Cluster Enricher\n *\n * enrichClusters / enrichClustersBatch with mock LLM\n *   - Valid JSON response populates enrichments\n *   - Invalid JSON response falls back to heuristic label\n *   - Batch processing with enrichClustersBatch\n *   - Empty members use heuristicLabel fallback\n */\nimport { describe, it, expect, vi } from 'vitest';\nimport {\n  enrichClusters,\n  enrichClustersBatch,\n  type LLMClient,\n  type ClusterMemberInfo,\n} from '../../src/core/ingestion/cluster-enricher.js';\nimport type { CommunityNode } from '../../src/core/ingestion/community-processor.js';\n\ndescribe('enrichment', () => {\n  describe('enrichClusters', () => {\n    const communities: CommunityNode[] = [\n      {\n        id: 'comm_0',\n        label: 'Auth',\n        heuristicLabel: 'Authentication',\n        cohesion: 0.8,\n        symbolCount: 3,\n      },\n      {\n        id: 'comm_1',\n        label: 'Utils',\n        heuristicLabel: 'Utilities',\n        cohesion: 0.5,\n        symbolCount: 2,\n      },\n    ];\n\n    const memberMap = new Map<string, ClusterMemberInfo[]>([\n      [\n        'comm_0',\n        [\n          { name: 'login', filePath: 'src/auth.ts', type: 'Function' },\n          { name: 'validate', filePath: 'src/auth.ts', type: 'Function' },\n          { name: 'AuthService', filePath: 'src/auth.ts', type: 'Class' },\n        ],\n      ],\n      [\n        'comm_1',\n        [\n          { name: 'hash', filePath: 'src/utils.ts', type: 'Function' },\n          { name: 'format', filePath: 'src/utils.ts', type: 'Function' },\n        ],\n      ],\n    ]);\n\n    it('populates enrichments when LLM returns valid JSON', async () => {\n      const mockLLM: LLMClient = {\n        generate: vi.fn()\n          .mockResolvedValueOnce('{\"name\": \"Auth Module\", \"description\": \"Handles authentication\"}')\n          .mockResolvedValueOnce('{\"name\": \"Utility Helpers\", \"description\": \"Common utilities\"}'),\n      
};\n\n      const result = await enrichClusters(communities, memberMap, mockLLM);\n\n      expect(result.enrichments.size).toBe(2);\n\n      const auth = result.enrichments.get('comm_0')!;\n      expect(auth.name).toBe('Auth Module');\n      expect(auth.description).toBe('Handles authentication');\n\n      const utils = result.enrichments.get('comm_1')!;\n      expect(utils.name).toBe('Utility Helpers');\n      expect(utils.description).toBe('Common utilities');\n\n      expect(result.tokensUsed).toBeGreaterThan(0);\n      expect(mockLLM.generate).toHaveBeenCalledTimes(2);\n    });\n\n    it('falls back to heuristic label when LLM returns invalid JSON', async () => {\n      const badLLM: LLMClient = {\n        generate: vi.fn().mockResolvedValue('this is not json at all'),\n      };\n\n      const result = await enrichClusters(communities, memberMap, badLLM);\n\n      expect(result.enrichments.size).toBe(2);\n\n      // Invalid JSON -> parseEnrichmentResponse falls back to heuristicLabel\n      const auth = result.enrichments.get('comm_0')!;\n      expect(auth.name).toBe('Authentication');\n      expect(auth.keywords).toEqual([]);\n      expect(auth.description).toBe('');\n\n      const utils = result.enrichments.get('comm_1')!;\n      expect(utils.name).toBe('Utilities');\n    });\n\n    it('uses heuristicLabel fallback for clusters with empty members', async () => {\n      const emptyMemberMap = new Map<string, ClusterMemberInfo[]>([\n        ['comm_0', []],\n        ['comm_1', []],\n      ]);\n\n      const mockLLM: LLMClient = {\n        generate: vi.fn().mockResolvedValue('{\"name\": \"Should Not Appear\", \"description\": \"nope\"}'),\n      };\n\n      const result = await enrichClusters(communities, emptyMemberMap, mockLLM);\n\n      expect(result.enrichments.size).toBe(2);\n\n      // Empty members -> skip LLM, use heuristic directly\n      const auth = result.enrichments.get('comm_0')!;\n      expect(auth.name).toBe('Authentication');\n      
expect(auth.keywords).toEqual([]);\n      expect(auth.description).toBe('');\n\n      // LLM should never be called for empty members\n      expect(mockLLM.generate).not.toHaveBeenCalled();\n    });\n\n    it('calls onProgress callback with correct current/total', async () => {\n      const mockLLM: LLMClient = {\n        generate: vi.fn().mockResolvedValue('{\"name\": \"X\", \"description\": \"Y\"}'),\n      };\n      const progress: Array<[number, number]> = [];\n\n      await enrichClusters(communities, memberMap, mockLLM, (current, total) => {\n        progress.push([current, total]);\n      });\n\n      expect(progress).toEqual([\n        [1, 2],\n        [2, 2],\n      ]);\n    });\n\n    // ─── Unhappy paths ────────────────────────────────────────────────\n\n    it('falls back to heuristic when LLM returns empty string', async () => {\n      const emptyLLM: LLMClient = {\n        generate: vi.fn().mockResolvedValue(''),\n      };\n\n      const result = await enrichClusters(communities, memberMap, emptyLLM);\n      expect(result.enrichments.size).toBe(2);\n      expect(result.enrichments.get('comm_0')!.name).toBe('Authentication');\n      expect(result.enrichments.get('comm_1')!.name).toBe('Utilities');\n    });\n\n    it('handles zero communities gracefully', async () => {\n      const mockLLM: LLMClient = {\n        generate: vi.fn(),\n      };\n\n      const result = await enrichClusters([], new Map(), mockLLM);\n      expect(result.enrichments.size).toBe(0);\n      expect(mockLLM.generate).not.toHaveBeenCalled();\n    });\n\n    it('handles LLM returning JSON with missing description field', async () => {\n      const partialLLM: LLMClient = {\n        generate: vi.fn().mockResolvedValue('{\"name\": \"Auth Only\"}'),\n      };\n\n      const result = await enrichClusters(communities, memberMap, partialLLM);\n      expect(result.enrichments.size).toBe(2);\n      const auth = result.enrichments.get('comm_0')!;\n      expect(auth.name).toBe('Auth Only');\n 
   });\n  });\n\n  describe('enrichClustersBatch', () => {\n    const communities: CommunityNode[] = [\n      { id: 'comm_0', label: 'Auth', heuristicLabel: 'Authentication', cohesion: 0.8, symbolCount: 3 },\n      { id: 'comm_1', label: 'Utils', heuristicLabel: 'Utilities', cohesion: 0.5, symbolCount: 2 },\n      { id: 'comm_2', label: 'Router', heuristicLabel: 'Routing', cohesion: 0.6, symbolCount: 2 },\n    ];\n\n    const memberMap = new Map<string, ClusterMemberInfo[]>([\n      ['comm_0', [{ name: 'login', filePath: 'src/auth.ts', type: 'Function' }]],\n      ['comm_1', [{ name: 'hash', filePath: 'src/utils.ts', type: 'Function' }]],\n      ['comm_2', [{ name: 'route', filePath: 'src/router.ts', type: 'Function' }]],\n    ]);\n\n    it('processes all clusters in batches and returns enrichments', async () => {\n      const batchResponse = JSON.stringify([\n        { id: 'comm_0', name: 'Auth Module', keywords: ['auth', 'login'], description: 'Authentication logic' },\n        { id: 'comm_1', name: 'Utility Helpers', keywords: ['utils'], description: 'Common utilities' },\n      ]);\n      const batchResponse2 = JSON.stringify([\n        { id: 'comm_2', name: 'HTTP Router', keywords: ['routing'], description: 'Request routing' },\n      ]);\n\n      const mockLLM: LLMClient = {\n        generate: vi.fn()\n          .mockResolvedValueOnce(batchResponse)\n          .mockResolvedValueOnce(batchResponse2),\n      };\n\n      const result = await enrichClustersBatch(communities, memberMap, mockLLM, 2);\n\n      expect(result.enrichments.size).toBe(3);\n\n      const auth = result.enrichments.get('comm_0')!;\n      expect(auth.name).toBe('Auth Module');\n      expect(auth.keywords).toEqual(['auth', 'login']);\n      expect(auth.description).toBe('Authentication logic');\n\n      const utils = result.enrichments.get('comm_1')!;\n      expect(utils.name).toBe('Utility Helpers');\n\n      const router = result.enrichments.get('comm_2')!;\n      
expect(router.name).toBe('HTTP Router');\n\n      expect(result.tokensUsed).toBeGreaterThan(0);\n      // 3 communities with batchSize=2 -> 2 LLM calls\n      expect(mockLLM.generate).toHaveBeenCalledTimes(2);\n    });\n\n    it('falls back to heuristic labels on batch parse failure', async () => {\n      const mockLLM: LLMClient = {\n        generate: vi.fn().mockRejectedValue(new Error('LLM unavailable')),\n      };\n\n      const result = await enrichClustersBatch(communities, memberMap, mockLLM, 5);\n\n      // All communities should get heuristic fallback\n      expect(result.enrichments.size).toBe(3);\n      expect(result.enrichments.get('comm_0')!.name).toBe('Authentication');\n      expect(result.enrichments.get('comm_1')!.name).toBe('Utilities');\n      expect(result.enrichments.get('comm_2')!.name).toBe('Routing');\n    });\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/integration/filesystem-walker.test.ts",
    "content": "import { describe, it, expect, beforeAll, afterAll, vi } from 'vitest';\nimport fs from 'fs/promises';\nimport path from 'path';\nimport os from 'os';\nimport { walkRepositoryPaths, readFileContents } from '../../src/core/ingestion/filesystem-walker.js';\n\ndescribe('filesystem-walker', () => {\n  let tmpDir: string;\n\n  beforeAll(async () => {\n    tmpDir = await fs.mkdtemp(path.join(os.tmpdir(), 'gn-walker-test-'));\n\n    // Create test directory structure\n    await fs.mkdir(path.join(tmpDir, 'src'), { recursive: true });\n    await fs.mkdir(path.join(tmpDir, 'src', 'components'), { recursive: true });\n    await fs.mkdir(path.join(tmpDir, 'node_modules', 'lodash'), { recursive: true });\n    await fs.mkdir(path.join(tmpDir, '.git'), { recursive: true });\n\n    await fs.writeFile(path.join(tmpDir, 'src', 'index.ts'), 'export const main = () => {}');\n    await fs.writeFile(path.join(tmpDir, 'src', 'utils.ts'), 'export const helper = () => {}');\n    await fs.writeFile(path.join(tmpDir, 'src', 'components', 'Button.tsx'), 'export const Button = () => <div/>');\n    await fs.writeFile(path.join(tmpDir, 'node_modules', 'lodash', 'index.js'), 'module.exports = {}');\n    await fs.writeFile(path.join(tmpDir, '.git', 'HEAD'), 'ref: refs/heads/main');\n    await fs.writeFile(path.join(tmpDir, 'package.json'), '{}');\n    await fs.writeFile(path.join(tmpDir, 'src', 'image.png'), Buffer.from([0x89, 0x50, 0x4E, 0x47]));\n  });\n\n  afterAll(async () => {\n    try {\n      await fs.rm(tmpDir, { recursive: true, force: true });\n    } catch { /* best-effort */ }\n  });\n\n  describe('walkRepositoryPaths', () => {\n    it('discovers source files', async () => {\n      const files = await walkRepositoryPaths(tmpDir);\n      const paths = files.map(f => f.path.replace(/\\\\/g, '/'));\n      expect(paths.some(p => p.includes('src/index.ts'))).toBe(true);\n      expect(paths.some(p => p.includes('src/utils.ts'))).toBe(true);\n    });\n\n    it('discovers 
nested files', async () => {\n      const files = await walkRepositoryPaths(tmpDir);\n      const paths = files.map(f => f.path.replace(/\\\\/g, '/'));\n      expect(paths.some(p => p.includes('components/Button.tsx'))).toBe(true);\n    });\n\n    it('skips node_modules', async () => {\n      const files = await walkRepositoryPaths(tmpDir);\n      const paths = files.map(f => f.path.replace(/\\\\/g, '/'));\n      expect(paths.every(p => !p.includes('node_modules'))).toBe(true);\n    });\n\n    it('skips .git directory', async () => {\n      const files = await walkRepositoryPaths(tmpDir);\n      const paths = files.map(f => f.path.replace(/\\\\/g, '/'));\n      expect(paths.every(p => !p.includes('.git/'))).toBe(true);\n    });\n\n    it('returns file sizes', async () => {\n      const files = await walkRepositoryPaths(tmpDir);\n      for (const file of files) {\n        expect(typeof file.size).toBe('number');\n        expect(file.size).toBeGreaterThan(0);\n      }\n    });\n\n    it('calls progress callback', async () => {\n      const onProgress = vi.fn();\n      await walkRepositoryPaths(tmpDir, onProgress);\n      expect(onProgress).toHaveBeenCalled();\n    });\n\n    // ─── Unhappy paths ────────────────────────────────────────────────\n\n    it('throws or returns empty for non-existent directory', async () => {\n      try {\n        const files = await walkRepositoryPaths('/nonexistent/path/xyz123');\n        // If it doesn't throw, it should return empty\n        expect(files).toEqual([]);\n      } catch (err: any) {\n        expect(err).toBeDefined();\n      }\n    });\n\n    it('returns empty for directory with only ignored files', async () => {\n      const emptyDir = await fs.mkdtemp(path.join(os.tmpdir(), 'gn-walker-empty-'));\n      await fs.mkdir(path.join(emptyDir, '.git'), { recursive: true });\n      await fs.writeFile(path.join(emptyDir, '.git', 'HEAD'), 'ref: refs/heads/main');\n\n      try {\n        const files = await 
walkRepositoryPaths(emptyDir);\n        expect(files).toEqual([]);\n      } finally {\n        await fs.rm(emptyDir, { recursive: true, force: true });\n      }\n    });\n\n    it('returns empty for truly empty directory', async () => {\n      const emptyDir = await fs.mkdtemp(path.join(os.tmpdir(), 'gn-walker-truly-empty-'));\n      try {\n        const files = await walkRepositoryPaths(emptyDir);\n        expect(files).toEqual([]);\n      } finally {\n        await fs.rm(emptyDir, { recursive: true, force: true });\n      }\n    });\n  });\n\n  describe('.gitignore support', () => {\n    let gitignoreDir: string;\n\n    beforeAll(async () => {\n      gitignoreDir = await fs.mkdtemp(path.join(os.tmpdir(), 'gn-walker-gitignore-'));\n\n      // Create directory structure\n      await fs.mkdir(path.join(gitignoreDir, 'src'), { recursive: true });\n      await fs.mkdir(path.join(gitignoreDir, 'data', 'cache'), { recursive: true });\n      await fs.mkdir(path.join(gitignoreDir, 'logs'), { recursive: true });\n\n      // Source files (should be indexed)\n      await fs.writeFile(path.join(gitignoreDir, 'src', 'index.ts'), 'export const main = () => {}');\n      await fs.writeFile(path.join(gitignoreDir, 'src', 'utils.ts'), 'export const helper = () => {}');\n\n      // Data files (should be ignored via .gitignore)\n      await fs.writeFile(path.join(gitignoreDir, 'data', 'cache', 'file.json'), '{}');\n      await fs.writeFile(path.join(gitignoreDir, 'logs', 'app.log'), 'log entry');\n\n      // .gitignore\n      await fs.writeFile(path.join(gitignoreDir, '.gitignore'), 'data/\\nlogs/\\n');\n    });\n\n    afterAll(async () => {\n      await fs.rm(gitignoreDir, { recursive: true, force: true });\n    });\n\n    it('excludes directories listed in .gitignore', async () => {\n      const files = await walkRepositoryPaths(gitignoreDir);\n      const paths = files.map(f => f.path.replace(/\\\\/g, '/'));\n\n      // Source files should be present\n      expect(paths.some(p => 
p.includes('src/index.ts'))).toBe(true);\n      expect(paths.some(p => p.includes('src/utils.ts'))).toBe(true);\n\n      // Ignored directories should not be present\n      expect(paths.every(p => !p.includes('data/'))).toBe(true);\n      expect(paths.every(p => !p.includes('logs/'))).toBe(true);\n    });\n\n    it('still applies hardcoded ignore list alongside .gitignore', async () => {\n      // Add node_modules (hardcoded ignore) to verify both work\n      await fs.mkdir(path.join(gitignoreDir, 'node_modules', 'pkg'), { recursive: true });\n      await fs.writeFile(path.join(gitignoreDir, 'node_modules', 'pkg', 'index.js'), 'module.exports = {}');\n\n      const files = await walkRepositoryPaths(gitignoreDir);\n      const paths = files.map(f => f.path.replace(/\\\\/g, '/'));\n\n      expect(paths.every(p => !p.includes('node_modules'))).toBe(true);\n      expect(paths.every(p => !p.includes('data/'))).toBe(true);\n\n      await fs.rm(path.join(gitignoreDir, 'node_modules'), { recursive: true, force: true });\n    });\n  });\n\n  describe('.gitnexusignore support', () => {\n    let nexusignoreDir: string;\n\n    beforeAll(async () => {\n      nexusignoreDir = await fs.mkdtemp(path.join(os.tmpdir(), 'gn-walker-nexusignore-'));\n\n      await fs.mkdir(path.join(nexusignoreDir, 'src'), { recursive: true });\n      await fs.mkdir(path.join(nexusignoreDir, 'local', 'grafana'), { recursive: true });\n\n      await fs.writeFile(path.join(nexusignoreDir, 'src', 'index.ts'), 'export const main = () => {}');\n      await fs.writeFile(path.join(nexusignoreDir, 'local', 'grafana', 'module.js'), 'var x = 1;');\n\n      // Only .gitnexusignore, no .gitignore\n      await fs.writeFile(path.join(nexusignoreDir, '.gitnexusignore'), 'local/\\n');\n    });\n\n    afterAll(async () => {\n      await fs.rm(nexusignoreDir, { recursive: true, force: true });\n    });\n\n    it('excludes directories listed in .gitnexusignore', async () => {\n      const files = await 
walkRepositoryPaths(nexusignoreDir);\n      const paths = files.map(f => f.path.replace(/\\\\/g, '/'));\n\n      expect(paths.some(p => p.includes('src/index.ts'))).toBe(true);\n      expect(paths.every(p => !p.includes('local/'))).toBe(true);\n    });\n  });\n\n  describe('combined .gitignore + .gitnexusignore', () => {\n    let combinedDir: string;\n\n    beforeAll(async () => {\n      combinedDir = await fs.mkdtemp(path.join(os.tmpdir(), 'gn-walker-combined-'));\n\n      await fs.mkdir(path.join(combinedDir, 'src'), { recursive: true });\n      await fs.mkdir(path.join(combinedDir, 'data'), { recursive: true });\n      await fs.mkdir(path.join(combinedDir, 'local', 'plugins'), { recursive: true });\n\n      await fs.writeFile(path.join(combinedDir, 'src', 'index.ts'), 'export const main = () => {}');\n      await fs.writeFile(path.join(combinedDir, 'data', 'dump.json'), '{}');\n      await fs.writeFile(path.join(combinedDir, 'local', 'plugins', 'module.js'), 'var x = 1;');\n\n      await fs.writeFile(path.join(combinedDir, '.gitignore'), 'data/\\n');\n      await fs.writeFile(path.join(combinedDir, '.gitnexusignore'), 'local/\\n');\n    });\n\n    afterAll(async () => {\n      await fs.rm(combinedDir, { recursive: true, force: true });\n    });\n\n    it('excludes directories from both files', async () => {\n      const files = await walkRepositoryPaths(combinedDir);\n      const paths = files.map(f => f.path.replace(/\\\\/g, '/'));\n\n      expect(paths.some(p => p.includes('src/index.ts'))).toBe(true);\n      expect(paths.every(p => !p.includes('data/'))).toBe(true);\n      expect(paths.every(p => !p.includes('local/'))).toBe(true);\n    });\n  });\n\n  describe('GITNEXUS_NO_GITIGNORE env var', () => {\n    let envDir: string;\n\n    beforeAll(async () => {\n      envDir = await fs.mkdtemp(path.join(os.tmpdir(), 'gn-walker-noignore-'));\n\n      await fs.mkdir(path.join(envDir, 'src'), { recursive: true });\n      await fs.mkdir(path.join(envDir, 'data'), { 
recursive: true });\n\n      await fs.writeFile(path.join(envDir, 'src', 'index.ts'), 'export const main = () => {}');\n      await fs.writeFile(path.join(envDir, 'data', 'dump.json'), '{}');\n\n      await fs.writeFile(path.join(envDir, '.gitignore'), 'data/\\n');\n    });\n\n    afterAll(async () => {\n      await fs.rm(envDir, { recursive: true, force: true });\n    });\n\n    it('excludes gitignored directory by default', async () => {\n      const files = await walkRepositoryPaths(envDir);\n      const paths = files.map(f => f.path.replace(/\\\\/g, '/'));\n      expect(paths.every(p => !p.includes('data/'))).toBe(true);\n    });\n\n    it('includes gitignored directory when GITNEXUS_NO_GITIGNORE is set', async () => {\n      const original = process.env.GITNEXUS_NO_GITIGNORE;\n      process.env.GITNEXUS_NO_GITIGNORE = '1';\n      try {\n        const files = await walkRepositoryPaths(envDir);\n        const paths = files.map(f => f.path.replace(/\\\\/g, '/'));\n        expect(paths.some(p => p.includes('data/dump.json'))).toBe(true);\n      } finally {\n        if (original === undefined) {\n          delete process.env.GITNEXUS_NO_GITIGNORE;\n        } else {\n          process.env.GITNEXUS_NO_GITIGNORE = original;\n        }\n      }\n    });\n  });\n\n  describe('readFileContents', () => {\n    it('reads file contents by relative paths', async () => {\n      const contents = await readFileContents(tmpDir, ['src/index.ts', 'src/utils.ts']);\n      expect(contents.get('src/index.ts')).toContain('main');\n      expect(contents.get('src/utils.ts')).toContain('helper');\n    });\n\n    it('handles empty path list', async () => {\n      const contents = await readFileContents(tmpDir, []);\n      expect(contents.size).toBe(0);\n    });\n\n    it('skips non-existent files gracefully', async () => {\n      const contents = await readFileContents(tmpDir, ['nonexistent.ts']);\n      expect(contents.size).toBe(0);\n    });\n\n    // ─── Unhappy paths 
────────────────────────────────────────────────\n\n    it('skips multiple non-existent files gracefully', async () => {\n      const contents = await readFileContents(tmpDir, ['a.ts', 'b.ts', 'c.ts']);\n      expect(contents.size).toBe(0);\n    });\n\n    it('handles binary file content without crashing', async () => {\n      const contents = await readFileContents(tmpDir, ['src/image.png']);\n      // May return content or skip — should not throw\n      expect(contents.size).toBeLessThanOrEqual(1);\n    });\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/integration/has-method.test.ts",
    "content": "/**\n * Integration tests for HAS_METHOD edge extraction.\n *\n * These tests exercise findEnclosingClassId against real tree-sitter ASTs\n * produced by the actual parser pipeline (loadParser + loadLanguage + queries).\n * Unlike the unit tests that test findEnclosingClassId in isolation with simple\n * snippets, these focus on multi-class files, interface vs class disambiguation,\n * and cross-language pipeline correctness.\n */\nimport { describe, it, expect, beforeAll } from 'vitest';\nimport Parser from 'tree-sitter';\nimport { loadParser, loadLanguage } from '../../src/core/tree-sitter/parser-loader.js';\nimport { LANGUAGE_QUERIES } from '../../src/core/ingestion/tree-sitter-queries.js';\nimport { SupportedLanguages } from '../../src/config/supported-languages.js';\nimport {\n  findEnclosingClassId,\n  DEFINITION_CAPTURE_KEYS,\n  getDefinitionNodeFromCaptures,\n} from '../../src/core/ingestion/utils.js';\n\nlet parser: Parser;\n\nbeforeAll(async () => {\n  parser = await loadParser();\n});\n\n/** Parse code with given language, run definition queries, return matched definitions with their enclosing class IDs. 
*/\nfunction parseAndExtractMethods(\n  code: string,\n  lang: SupportedLanguages,\n  filePath: string,\n): { name: string; defType: string; enclosingClassId: string | null }[] {\n  const tree = parser.parse(code);\n  const query = new Parser.Query(parser.getLanguage(), LANGUAGE_QUERIES[lang]);\n  const matches = query.matches(tree.rootNode);\n\n  const results: { name: string; defType: string; enclosingClassId: string | null }[] = [];\n\n  for (const match of matches) {\n    const captureMap: Record<string, any> = {};\n    let nameNode: any = null;\n\n    for (const capture of match.captures) {\n      captureMap[capture.name] = capture.node;\n      if (capture.name === 'name') {\n        nameNode = capture.node;\n      }\n    }\n\n    const defNode = getDefinitionNodeFromCaptures(captureMap);\n    if (!defNode || !nameNode) continue;\n\n    const defType = Object.keys(captureMap).find(k => k.startsWith('definition.')) || 'unknown';\n    const enclosingClassId = findEnclosingClassId(nameNode, filePath);\n\n    results.push({\n      name: nameNode.text,\n      defType,\n      enclosingClassId,\n    });\n  }\n\n  return results;\n}\n\ndescribe('HAS_METHOD integration — C#: class with interface', () => {\n  beforeAll(async () => {\n    await loadLanguage(SupportedLanguages.CSharp);\n  });\n\n  it('methods link to correct owner (interface vs class)', () => {\n    const code = `\ninterface IRepository {\n  void FindById(int id);\n  void Save(object entity);\n}\n\nclass SqlRepository {\n  public void FindById(int id) {}\n  public void Save(object entity) {}\n  private void Connect() {}\n}\n`;\n    const results = parseAndExtractMethods(code, SupportedLanguages.CSharp, 'src/Repo.cs');\n\n    // Interface methods should be enclosed by the interface\n    const ifaceFindById = results.find(r => r.name === 'FindById' && r.enclosingClassId?.startsWith('Interface:'));\n    expect(ifaceFindById).toBeDefined();\n    
expect(ifaceFindById!.enclosingClassId).toBe('Interface:src/Repo.cs:IRepository');\n\n    const ifaceSave = results.find(r => r.name === 'Save' && r.enclosingClassId?.startsWith('Interface:'));\n    expect(ifaceSave).toBeDefined();\n    expect(ifaceSave!.enclosingClassId).toBe('Interface:src/Repo.cs:IRepository');\n\n    // Class methods should be enclosed by the class\n    const classFindById = results.find(r => r.name === 'FindById' && r.enclosingClassId?.startsWith('Class:'));\n    expect(classFindById).toBeDefined();\n    expect(classFindById!.enclosingClassId).toBe('Class:src/Repo.cs:SqlRepository');\n\n    const classConnect = results.find(r => r.name === 'Connect');\n    expect(classConnect).toBeDefined();\n    expect(classConnect!.enclosingClassId).toBe('Class:src/Repo.cs:SqlRepository');\n  });\n\n  it('class/interface name captures point to their own container (self-referential)', () => {\n    const code = `\ninterface IService {\n  void Execute();\n}\n\nclass ServiceImpl {\n  public void Execute() {}\n}\n`;\n    const results = parseAndExtractMethods(code, SupportedLanguages.CSharp, 'src/Service.cs');\n\n    // The name node for IService sits inside the interface_declaration, so\n    // findEnclosingClassId returns the interface itself. 
This is expected —\n    // the pipeline uses defType (definition.interface vs definition.method) to\n    // distinguish container declarations from methods, not enclosingClassId.\n    const ifaceDecl = results.find(r => r.name === 'IService');\n    expect(ifaceDecl).toBeDefined();\n    expect(ifaceDecl!.defType).toBe('definition.interface');\n\n    const classDecl = results.find(r => r.name === 'ServiceImpl');\n    expect(classDecl).toBeDefined();\n    expect(classDecl!.defType).toBe('definition.class');\n\n    // Methods should still correctly reference their container\n    const execMethods = results.filter(r => r.name === 'Execute');\n    expect(execMethods.length).toBe(2);\n    expect(execMethods.some(r => r.enclosingClassId === 'Interface:src/Service.cs:IService')).toBe(true);\n    expect(execMethods.some(r => r.enclosingClassId === 'Class:src/Service.cs:ServiceImpl')).toBe(true);\n  });\n});\n\ndescribe('HAS_METHOD integration — Rust: impl + trait', () => {\n  beforeAll(async () => {\n    await loadLanguage(SupportedLanguages.Rust);\n  });\n\n  it('methods link to impl vs trait nodes', () => {\n    const code = `\ntrait Drawable {\n    fn draw(&self);\n    fn resize(&self, w: u32, h: u32);\n}\n\nstruct Circle {\n    radius: f64,\n}\n\nimpl Circle {\n    fn new(radius: f64) -> Circle {\n        Circle { radius }\n    }\n\n    fn area(&self) -> f64 {\n        3.14 * self.radius * self.radius\n    }\n}\n`;\n    const results = parseAndExtractMethods(code, SupportedLanguages.Rust, 'src/shapes.rs');\n\n    // Trait methods should be enclosed by the trait\n    const traitDraw = results.find(r => r.name === 'draw');\n    if (traitDraw) {\n      expect(traitDraw.enclosingClassId).toBe('Trait:src/shapes.rs:Drawable');\n    }\n\n    const traitResize = results.find(r => r.name === 'resize');\n    if (traitResize) {\n      expect(traitResize.enclosingClassId).toBe('Trait:src/shapes.rs:Drawable');\n    }\n\n    // Impl methods should be enclosed by the impl block\n    
const implNew = results.find(r => r.name === 'new');\n    if (implNew) {\n      expect(implNew.enclosingClassId).toBe('Impl:src/shapes.rs:Circle');\n    }\n\n    const implArea = results.find(r => r.name === 'area');\n    if (implArea) {\n      expect(implArea.enclosingClassId).toBe('Impl:src/shapes.rs:Circle');\n    }\n  });\n\n  it('standalone functions do not get HAS_METHOD', () => {\n    const code = `\nfn helper() -> bool {\n    true\n}\n\nstruct Foo;\n\nimpl Foo {\n    fn bar(&self) {}\n}\n`;\n    const results = parseAndExtractMethods(code, SupportedLanguages.Rust, 'src/lib.rs');\n\n    const helper = results.find(r => r.name === 'helper');\n    expect(helper).toBeDefined();\n    expect(helper!.enclosingClassId).toBeNull();\n\n    const bar = results.find(r => r.name === 'bar');\n    if (bar) {\n      expect(bar.enclosingClassId).toBe('Impl:src/lib.rs:Foo');\n    }\n  });\n});\n\ndescribe('HAS_METHOD integration — Python: class methods vs standalone functions', () => {\n  beforeAll(async () => {\n    await loadLanguage(SupportedLanguages.Python);\n  });\n\n  it('methods link to class, standalone functions get null', () => {\n    const code = `\ndef standalone_helper():\n    return 42\n\nclass Calculator:\n    def __init__(self):\n        self.value = 0\n\n    def add(self, x):\n        self.value += x\n        return self\n\n    def result(self):\n        return self.value\n\ndef another_standalone():\n    pass\n`;\n    const results = parseAndExtractMethods(code, SupportedLanguages.Python, 'src/calc.py');\n\n    // Standalone functions should not be enclosed\n    const standaloneHelper = results.find(r => r.name === 'standalone_helper');\n    expect(standaloneHelper).toBeDefined();\n    expect(standaloneHelper!.enclosingClassId).toBeNull();\n\n    const anotherStandalone = results.find(r => r.name === 'another_standalone');\n    expect(anotherStandalone).toBeDefined();\n    expect(anotherStandalone!.enclosingClassId).toBeNull();\n\n    // Class methods 
should be enclosed\n    const init = results.find(r => r.name === '__init__');\n    expect(init).toBeDefined();\n    expect(init!.enclosingClassId).toBe('Class:src/calc.py:Calculator');\n\n    const add = results.find(r => r.name === 'add');\n    expect(add).toBeDefined();\n    expect(add!.enclosingClassId).toBe('Class:src/calc.py:Calculator');\n\n    const resultMethod = results.find(r => r.name === 'result');\n    expect(resultMethod).toBeDefined();\n    expect(resultMethod!.enclosingClassId).toBe('Class:src/calc.py:Calculator');\n  });\n});\n\ndescribe('HAS_METHOD integration — Multiple classes in one file', () => {\n  describe('TypeScript', () => {\n    beforeAll(async () => {\n      await loadLanguage(SupportedLanguages.TypeScript, 'multi.ts');\n    });\n\n    it('methods associate with their owning class', () => {\n      const code = `\nclass UserService {\n  findUser(id: number) {\n    return null;\n  }\n  deleteUser(id: number) {}\n}\n\nclass OrderService {\n  createOrder(data: any) {\n    return data;\n  }\n  cancelOrder(id: number) {}\n}\n\nfunction topLevelUtil() {\n  return true;\n}\n`;\n      const results = parseAndExtractMethods(code, SupportedLanguages.TypeScript, 'src/services.ts');\n\n      // UserService methods\n      const findUser = results.find(r => r.name === 'findUser');\n      expect(findUser).toBeDefined();\n      expect(findUser!.enclosingClassId).toBe('Class:src/services.ts:UserService');\n\n      const deleteUser = results.find(r => r.name === 'deleteUser');\n      expect(deleteUser).toBeDefined();\n      expect(deleteUser!.enclosingClassId).toBe('Class:src/services.ts:UserService');\n\n      // OrderService methods\n      const createOrder = results.find(r => r.name === 'createOrder');\n      expect(createOrder).toBeDefined();\n      expect(createOrder!.enclosingClassId).toBe('Class:src/services.ts:OrderService');\n\n      const cancelOrder = results.find(r => r.name === 'cancelOrder');\n      expect(cancelOrder).toBeDefined();\n      
expect(cancelOrder!.enclosingClassId).toBe('Class:src/services.ts:OrderService');\n\n      // Top-level function\n      const topLevelUtil = results.find(r => r.name === 'topLevelUtil');\n      expect(topLevelUtil).toBeDefined();\n      expect(topLevelUtil!.enclosingClassId).toBeNull();\n    });\n  });\n\n  describe('Java', () => {\n    beforeAll(async () => {\n      await loadLanguage(SupportedLanguages.Java);\n    });\n\n    it('methods associate with their owning class', () => {\n      const code = `\nclass Logger {\n  public void info(String msg) {}\n  public void error(String msg) {}\n}\n\nclass Formatter {\n  public String format(String template) { return template; }\n  private String escape(String input) { return input; }\n}\n`;\n      const results = parseAndExtractMethods(code, SupportedLanguages.Java, 'src/util/Logging.java');\n\n      const info = results.find(r => r.name === 'info');\n      expect(info).toBeDefined();\n      expect(info!.enclosingClassId).toBe('Class:src/util/Logging.java:Logger');\n\n      const error = results.find(r => r.name === 'error');\n      expect(error).toBeDefined();\n      expect(error!.enclosingClassId).toBe('Class:src/util/Logging.java:Logger');\n\n      const format = results.find(r => r.name === 'format');\n      expect(format).toBeDefined();\n      expect(format!.enclosingClassId).toBe('Class:src/util/Logging.java:Formatter');\n\n      const escape = results.find(r => r.name === 'escape');\n      expect(escape).toBeDefined();\n      expect(escape!.enclosingClassId).toBe('Class:src/util/Logging.java:Formatter');\n    });\n  });\n});\n\ndescribe('HAS_METHOD integration — Java: class with interface', () => {\n  beforeAll(async () => {\n    await loadLanguage(SupportedLanguages.Java);\n  });\n\n  it('methods link to correct owner (interface vs class)', () => {\n    const code = `\ninterface Validator {\n  boolean validate(Object input);\n  String getMessage();\n}\n\nclass EmailValidator {\n  public boolean validate(Object 
input) { return true; }\n  public String getMessage() { return \"invalid email\"; }\n  private boolean checkFormat(String email) { return true; }\n}\n`;\n    const results = parseAndExtractMethods(code, SupportedLanguages.Java, 'src/validation/Validator.java');\n\n    // Interface methods\n    const ifaceValidate = results.find(r => r.name === 'validate' && r.enclosingClassId?.startsWith('Interface:'));\n    expect(ifaceValidate).toBeDefined();\n    expect(ifaceValidate!.enclosingClassId).toBe('Interface:src/validation/Validator.java:Validator');\n\n    const ifaceGetMessage = results.find(r => r.name === 'getMessage' && r.enclosingClassId?.startsWith('Interface:'));\n    expect(ifaceGetMessage).toBeDefined();\n    expect(ifaceGetMessage!.enclosingClassId).toBe('Interface:src/validation/Validator.java:Validator');\n\n    // Class methods\n    const classValidate = results.find(r => r.name === 'validate' && r.enclosingClassId?.startsWith('Class:'));\n    expect(classValidate).toBeDefined();\n    expect(classValidate!.enclosingClassId).toBe('Class:src/validation/Validator.java:EmailValidator');\n\n    const classCheckFormat = results.find(r => r.name === 'checkFormat');\n    expect(classCheckFormat).toBeDefined();\n    expect(classCheckFormat!.enclosingClassId).toBe('Class:src/validation/Validator.java:EmailValidator');\n  });\n\n  it('class/interface declarations are captured with correct defType', () => {\n    const code = `\ninterface Repository {\n  void save(Object entity);\n}\n\nclass UserRepository {\n  public void save(Object entity) {}\n}\n`;\n    const results = parseAndExtractMethods(code, SupportedLanguages.Java, 'src/repo/Repo.java');\n\n    // The pipeline distinguishes containers from methods via defType, not enclosingClassId\n    const repoDecl = results.find(r => r.name === 'Repository');\n    expect(repoDecl).toBeDefined();\n    expect(repoDecl!.defType).toBe('definition.interface');\n\n    const userRepoDecl = results.find(r => r.name === 
'UserRepository');\n    expect(userRepoDecl).toBeDefined();\n    expect(userRepoDecl!.defType).toBe('definition.class');\n\n    // Methods associate correctly\n    const saveMethods = results.filter(r => r.name === 'save');\n    expect(saveMethods.length).toBe(2);\n    expect(saveMethods.some(r => r.enclosingClassId === 'Interface:src/repo/Repo.java:Repository')).toBe(true);\n    expect(saveMethods.some(r => r.enclosingClassId === 'Class:src/repo/Repo.java:UserRepository')).toBe(true);\n  });\n});\n\ndescribe('HAS_METHOD integration — C++ class methods', () => {\n  beforeAll(async () => {\n    await loadLanguage(SupportedLanguages.CPlusPlus);\n  });\n\n  it('inline methods link to their owning class_specifier', () => {\n    const code = `\nclass Stack {\npublic:\n  void push(int val) { data[top++] = val; }\n  int pop() { return data[--top]; }\n  int size() { return top; }\nprivate:\n  int data[100];\n  int top;\n};\n\nclass Queue {\npublic:\n  void enqueue(int val) {}\n  int dequeue() { return 0; }\n};\n`;\n    const results = parseAndExtractMethods(code, SupportedLanguages.CPlusPlus, 'src/containers.h');\n\n    // Stack methods\n    const push = results.find(r => r.name === 'push');\n    if (push) {\n      expect(push.enclosingClassId).toBe('Class:src/containers.h:Stack');\n    }\n\n    const pop = results.find(r => r.name === 'pop');\n    if (pop) {\n      expect(pop.enclosingClassId).toBe('Class:src/containers.h:Stack');\n    }\n\n    const size = results.find(r => r.name === 'size');\n    if (size) {\n      expect(size.enclosingClassId).toBe('Class:src/containers.h:Stack');\n    }\n\n    // Queue methods\n    const enqueue = results.find(r => r.name === 'enqueue');\n    if (enqueue) {\n      expect(enqueue.enclosingClassId).toBe('Class:src/containers.h:Queue');\n    }\n\n    const dequeue = results.find(r => r.name === 'dequeue');\n    if (dequeue) {\n      expect(dequeue.enclosingClassId).toBe('Class:src/containers.h:Queue');\n    }\n  });\n\n  it('free 
functions have null enclosingClassId', () => {\n    const code = `\nvoid freeFunction() {}\n\nclass Foo {\npublic:\n  void method() {}\n};\n`;\n    const results = parseAndExtractMethods(code, SupportedLanguages.CPlusPlus, 'src/mixed.cpp');\n\n    const freeFn = results.find(r => r.name === 'freeFunction');\n    if (freeFn) {\n      expect(freeFn.enclosingClassId).toBeNull();\n    }\n\n    const method = results.find(r => r.name === 'method');\n    if (method) {\n      expect(method.enclosingClassId).toBe('Class:src/mixed.cpp:Foo');\n    }\n  });\n});\n\ndescribe('HAS_METHOD integration — C# struct and record', () => {\n  beforeAll(async () => {\n    await loadLanguage(SupportedLanguages.CSharp);\n  });\n\n  it('struct methods link to struct, record methods link to record', () => {\n    const code = `\nstruct Vector2 {\n  public float Length() { return 0; }\n  public Vector2 Normalize() { return this; }\n}\n\nrecord Person {\n  public string GetFullName() { return \"\"; }\n}\n`;\n    const results = parseAndExtractMethods(code, SupportedLanguages.CSharp, 'src/Types.cs');\n\n    const length = results.find(r => r.name === 'Length');\n    if (length) {\n      expect(length.enclosingClassId).toBe('Struct:src/Types.cs:Vector2');\n    }\n\n    const normalize = results.find(r => r.name === 'Normalize');\n    if (normalize) {\n      expect(normalize.enclosingClassId).toBe('Struct:src/Types.cs:Vector2');\n    }\n\n    const getFullName = results.find(r => r.name === 'GetFullName');\n    if (getFullName) {\n      expect(getFullName.enclosingClassId).toBe('Record:src/Types.cs:Person');\n    }\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/integration/hooks-e2e.test.ts",
    "content": "/**\n * Integration Tests: Claude Code Hooks End-to-End\n *\n * Tests the hook scripts with real git repos and .gitnexus directories.\n * Unlike unit/hooks.test.ts which tests source code patterns and simple\n * stdin/stdout, these tests verify actual behavior with filesystem state.\n */\nimport { describe, it, expect, beforeAll, afterAll } from 'vitest';\nimport { spawnSync } from 'child_process';\nimport fs from 'fs';\nimport path from 'path';\nimport os from 'os';\nimport { runHook, parseHookOutput } from '../utils/hook-test-helpers.js';\n\n// ─── Paths to both hook variants ────────────────────────────────────\n\nconst CJS_HOOK = path.resolve(__dirname, '..', '..', 'hooks', 'claude', 'gitnexus-hook.cjs');\nconst PLUGIN_HOOK = path.resolve(__dirname, '..', '..', '..', 'gitnexus-claude-plugin', 'hooks', 'gitnexus-hook.js');\n\nconst HOOKS = [\n  { name: 'CJS', path: CJS_HOOK },\n  ...(fs.existsSync(PLUGIN_HOOK) ? [{ name: 'Plugin', path: PLUGIN_HOOK }] : []),\n];\n\n// ─── Temp git repo with .gitnexus ───────────────────────────────────\n\nlet tmpDir: string;\nlet gitNexusDir: string;\n\nbeforeAll(() => {\n  tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'hooks-e2e-'));\n  gitNexusDir = path.join(tmpDir, '.gitnexus');\n  fs.mkdirSync(gitNexusDir, { recursive: true });\n\n  // Initialize a real git repo\n  spawnSync('git', ['init'], { cwd: tmpDir, stdio: 'pipe' });\n  spawnSync('git', ['config', 'user.email', 'test@test.com'], { cwd: tmpDir, stdio: 'pipe' });\n  spawnSync('git', ['config', 'user.name', 'Test'], { cwd: tmpDir, stdio: 'pipe' });\n\n  // Create a file and commit so HEAD exists\n  fs.writeFileSync(path.join(tmpDir, 'hello.txt'), 'hello');\n  spawnSync('git', ['add', '.'], { cwd: tmpDir, stdio: 'pipe' });\n  spawnSync('git', ['commit', '-m', 'init'], { cwd: tmpDir, stdio: 'pipe' });\n});\n\nafterAll(() => {\n  fs.rmSync(tmpDir, { recursive: true, force: true });\n});\n\n// ─── Tests 
──────────────────────────────────────────────────────────\n\ndescribe.each(HOOKS)('hooks e2e ($name)', ({ name, path: hookPath }) => {\n  describe('PostToolUse staleness detection', () => {\n    it('detects stale index when meta.json lastCommit differs from HEAD', () => {\n      // Write meta.json with an old commit hash\n      fs.writeFileSync(\n        path.join(gitNexusDir, 'meta.json'),\n        JSON.stringify({ lastCommit: 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa', stats: {} }),\n      );\n\n      const result = runHook(hookPath, {\n        hook_event_name: 'PostToolUse',\n        tool_name: 'Bash',\n        tool_input: { command: 'git commit -m \"test\"' },\n        tool_output: { exit_code: 0 },\n        cwd: tmpDir,\n      });\n\n      const output = parseHookOutput(result.stdout);\n      expect(output).not.toBeNull();\n      expect(output!.additionalContext).toContain('stale');\n      expect(output!.additionalContext).toContain('npx gitnexus analyze');\n    });\n\n    it('stays silent when meta.json lastCommit matches HEAD', () => {\n      // Get current HEAD\n      const headResult = spawnSync('git', ['rev-parse', 'HEAD'], {\n        cwd: tmpDir, encoding: 'utf-8', stdio: ['pipe', 'pipe', 'pipe'],\n      });\n      const head = headResult.stdout.trim();\n\n      // Write meta.json with matching commit\n      fs.writeFileSync(\n        path.join(gitNexusDir, 'meta.json'),\n        JSON.stringify({ lastCommit: head, stats: {} }),\n      );\n\n      const result = runHook(hookPath, {\n        hook_event_name: 'PostToolUse',\n        tool_name: 'Bash',\n        tool_input: { command: 'git commit -m \"test\"' },\n        tool_output: { exit_code: 0 },\n        cwd: tmpDir,\n      });\n\n      const output = parseHookOutput(result.stdout);\n      expect(output).toBeNull();\n    });\n\n    it('includes --embeddings flag when previous index had embeddings', () => {\n      fs.writeFileSync(\n        path.join(gitNexusDir, 'meta.json'),\n        JSON.stringify({ 
lastCommit: 'bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb', stats: { embeddings: 42 } }),\n      );\n\n      const result = runHook(hookPath, {\n        hook_event_name: 'PostToolUse',\n        tool_name: 'Bash',\n        tool_input: { command: 'git commit -m \"test\"' },\n        tool_output: { exit_code: 0 },\n        cwd: tmpDir,\n      });\n\n      const output = parseHookOutput(result.stdout);\n      expect(output).not.toBeNull();\n      expect(output!.additionalContext).toContain('--embeddings');\n    });\n\n    it('treats missing meta.json as stale', () => {\n      // Remove meta.json\n      const metaPath = path.join(gitNexusDir, 'meta.json');\n      if (fs.existsSync(metaPath)) fs.unlinkSync(metaPath);\n\n      const result = runHook(hookPath, {\n        hook_event_name: 'PostToolUse',\n        tool_name: 'Bash',\n        tool_input: { command: 'git commit -m \"test\"' },\n        tool_output: { exit_code: 0 },\n        cwd: tmpDir,\n      });\n\n      const output = parseHookOutput(result.stdout);\n      expect(output).not.toBeNull();\n      expect(output!.additionalContext).toContain('stale');\n    });\n\n    it('ignores failed git commands (exit_code !== 0)', () => {\n      fs.writeFileSync(\n        path.join(gitNexusDir, 'meta.json'),\n        JSON.stringify({ lastCommit: 'cccccccccccccccccccccccccccccccccccccccc', stats: {} }),\n      );\n\n      const result = runHook(hookPath, {\n        hook_event_name: 'PostToolUse',\n        tool_name: 'Bash',\n        tool_input: { command: 'git commit -m \"test\"' },\n        tool_output: { exit_code: 1 },\n        cwd: tmpDir,\n      });\n\n      const output = parseHookOutput(result.stdout);\n      expect(output).toBeNull();\n    });\n\n    it('ignores non-mutation git commands', () => {\n      fs.writeFileSync(\n        path.join(gitNexusDir, 'meta.json'),\n        JSON.stringify({ lastCommit: 'dddddddddddddddddddddddddddddddddddddddd', stats: {} }),\n      );\n\n      const nonMutations = ['git status', 'git 
log', 'git diff', 'git branch', 'git stash'];\n      for (const cmd of nonMutations) {\n        const result = runHook(hookPath, {\n          hook_event_name: 'PostToolUse',\n          tool_name: 'Bash',\n          tool_input: { command: cmd },\n          tool_output: { exit_code: 0 },\n          cwd: tmpDir,\n        });\n        const output = parseHookOutput(result.stdout);\n        expect(output).toBeNull();\n      }\n    });\n\n    it('detects all 5 git mutation types', () => {\n      fs.writeFileSync(\n        path.join(gitNexusDir, 'meta.json'),\n        JSON.stringify({ lastCommit: 'eeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee', stats: {} }),\n      );\n\n      const mutations = ['git commit -m \"x\"', 'git merge feature', 'git rebase main', 'git cherry-pick abc', 'git pull origin main'];\n      for (const cmd of mutations) {\n        const result = runHook(hookPath, {\n          hook_event_name: 'PostToolUse',\n          tool_name: 'Bash',\n          tool_input: { command: cmd },\n          tool_output: { exit_code: 0 },\n          cwd: tmpDir,\n        });\n        const output = parseHookOutput(result.stdout);\n        expect(output).not.toBeNull();\n        expect(output!.additionalContext).toContain('stale');\n      }\n    });\n  });\n\n  describe('PreToolUse — silent without gitnexus CLI', () => {\n    // PreToolUse tries to spawn `gitnexus augment` which won't be available in CI.\n    // Verify it fails gracefully (no output, no crash).\n\n    it('handles Grep pattern gracefully when CLI is unavailable', () => {\n      const result = runHook(hookPath, {\n        hook_event_name: 'PreToolUse',\n        tool_name: 'Grep',\n        tool_input: { pattern: 'handleRequest' },\n        cwd: tmpDir,\n      });\n\n      // Should not crash — status is 0 if it exits cleanly, or null if the\n      // spawned `gitnexus augment` hangs and the 10s timeout kills the process.\n      expect(result.status === 0 || result.status === null).toBe(true);\n    });\n\n    
it('ignores patterns shorter than 3 chars', () => {\n      const result = runHook(hookPath, {\n        hook_event_name: 'PreToolUse',\n        tool_name: 'Grep',\n        tool_input: { pattern: 'ab' },\n        cwd: tmpDir,\n      });\n\n      expect(result.status).toBe(0);\n      const output = parseHookOutput(result.stdout);\n      expect(output).toBeNull();\n    });\n\n    it('ignores non-search tools', () => {\n      const result = runHook(hookPath, {\n        hook_event_name: 'PreToolUse',\n        tool_name: 'Read',\n        tool_input: { file_path: '/some/file.ts' },\n        cwd: tmpDir,\n      });\n\n      expect(result.status).toBe(0);\n      const output = parseHookOutput(result.stdout);\n      expect(output).toBeNull();\n    });\n  });\n\n  describe('cwd validation', () => {\n    it('rejects relative cwd silently for PostToolUse', () => {\n      const result = runHook(hookPath, {\n        hook_event_name: 'PostToolUse',\n        tool_name: 'Bash',\n        tool_input: { command: 'git commit -m \"x\"' },\n        tool_output: { exit_code: 0 },\n        cwd: 'relative/path',\n      });\n\n      const output = parseHookOutput(result.stdout);\n      expect(output).toBeNull();\n    });\n\n    it('rejects relative cwd silently for PreToolUse', () => {\n      const result = runHook(hookPath, {\n        hook_event_name: 'PreToolUse',\n        tool_name: 'Grep',\n        tool_input: { pattern: 'testPattern' },\n        cwd: 'relative/path',\n      });\n\n      const output = parseHookOutput(result.stdout);\n      expect(output).toBeNull();\n    });\n  });\n\n  describe('unhappy paths', () => {\n    it('handles corrupted meta.json (invalid JSON) without crashing', () => {\n      fs.writeFileSync(\n        path.join(gitNexusDir, 'meta.json'),\n        'THIS IS NOT JSON {{{',\n      );\n\n      const result = runHook(hookPath, {\n        hook_event_name: 'PostToolUse',\n        tool_name: 'Bash',\n        tool_input: { command: 'git commit -m \"test\"' },\n        
tool_output: { exit_code: 0 },\n        cwd: tmpDir,\n      });\n\n      // Should not crash — either treats as stale or ignores\n      expect(result.status === 0 || result.status === null).toBe(true);\n    });\n\n    it('handles meta.json with missing lastCommit field', () => {\n      fs.writeFileSync(\n        path.join(gitNexusDir, 'meta.json'),\n        JSON.stringify({ stats: {} }),\n      );\n\n      const result = runHook(hookPath, {\n        hook_event_name: 'PostToolUse',\n        tool_name: 'Bash',\n        tool_input: { command: 'git commit -m \"test\"' },\n        tool_output: { exit_code: 0 },\n        cwd: tmpDir,\n      });\n\n      expect(result.status === 0 || result.status === null).toBe(true);\n      const output = parseHookOutput(result.stdout);\n      // Missing lastCommit should be treated as stale\n      if (output) {\n        expect(output.additionalContext).toContain('stale');\n      }\n    });\n\n    it('ignores unknown hook event name', () => {\n      const result = runHook(hookPath, {\n        hook_event_name: 'UnknownEvent',\n        tool_name: 'Bash',\n        tool_input: { command: 'git commit -m \"test\"' },\n        tool_output: { exit_code: 0 },\n        cwd: tmpDir,\n      });\n\n      expect(result.status).toBe(0);\n      const output = parseHookOutput(result.stdout);\n      expect(output).toBeNull();\n    });\n\n    it('handles empty tool_input for PostToolUse without crashing', () => {\n      fs.writeFileSync(\n        path.join(gitNexusDir, 'meta.json'),\n        JSON.stringify({ lastCommit: 'aaaa', stats: {} }),\n      );\n\n      const result = runHook(hookPath, {\n        hook_event_name: 'PostToolUse',\n        tool_name: 'Bash',\n        tool_input: {},\n        tool_output: { exit_code: 0 },\n        cwd: tmpDir,\n      });\n\n      expect(result.status === 0 || result.status === null).toBe(true);\n      const output = parseHookOutput(result.stdout);\n      // No command means no git mutation detection — should be 
silent\n      expect(output).toBeNull();\n    });\n\n    it('ignores non-Bash tool for PostToolUse', () => {\n      fs.writeFileSync(\n        path.join(gitNexusDir, 'meta.json'),\n        JSON.stringify({ lastCommit: 'aaaa', stats: {} }),\n      );\n\n      const result = runHook(hookPath, {\n        hook_event_name: 'PostToolUse',\n        tool_name: 'Read',\n        tool_input: { file_path: '/some/file.ts' },\n        tool_output: {},\n        cwd: tmpDir,\n      });\n\n      expect(result.status).toBe(0);\n      const output = parseHookOutput(result.stdout);\n      expect(output).toBeNull();\n    });\n  });\n\n  describe('directory without .gitnexus', () => {\n    // The hook walks up 5 parent directories looking for .gitnexus.\n    // To guarantee none is found, create a deeply nested temp dir at the\n    // filesystem root where no .gitnexus could exist in any ancestor.\n    let noGitNexusBase: string;\n    let noGitNexusDir: string;\n\n    beforeAll(() => {\n      // Use a root-level temp path so parent traversal can't find .gitnexus\n      const root = os.platform() === 'win32' ? 'C:\\\\' : '/tmp';\n      noGitNexusBase = path.join(root, `no-gitnexus-${Date.now()}`);\n      // Nest 6 levels deep (hook walks up 5) to ensure isolation\n      noGitNexusDir = path.join(noGitNexusBase, 'a', 'b', 'c', 'd', 'e', 'f');\n      fs.mkdirSync(noGitNexusDir, { recursive: true });\n      spawnSync('git', ['init'], { cwd: noGitNexusDir, stdio: 'pipe' });\n    });\n\n    afterAll(() => {\n      // Remove the whole nested tree from its remembered base\n      fs.rmSync(noGitNexusBase, { recursive: true, force: true });\n    });\n
\n    it('ignores PostToolUse when no .gitnexus directory exists', () => {\n      const result = runHook(hookPath, {\n        hook_event_name: 'PostToolUse',\n        tool_name: 'Bash',\n        tool_input: { command: 'git commit -m \"x\"' },\n        tool_output: { exit_code: 0 },\n        cwd: noGitNexusDir,\n      });\n\n      const output = parseHookOutput(result.stdout);\n      expect(output).toBeNull();\n    });\n\n    it('ignores PreToolUse when no .gitnexus directory exists', () => {\n      const result = runHook(hookPath, {\n        hook_event_name: 'PreToolUse',\n        tool_name: 'Grep',\n        tool_input: { pattern: 'somePattern' },\n        cwd: noGitNexusDir,\n      });\n\n      const output = parseHookOutput(result.stdout);\n      expect(output).toBeNull();\n    });\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/integration/ignore-and-skip-e2e.test.ts",
    "content": "import { describe, it, expect, beforeAll, afterAll } from 'vitest';\nimport fs from 'fs/promises';\nimport path from 'path';\nimport os from 'os';\nimport { walkRepositoryPaths, readFileContents } from '../../src/core/ingestion/filesystem-walker.js';\nimport { processParsing } from '../../src/core/ingestion/parsing-processor.js';\nimport { createKnowledgeGraph } from '../../src/core/graph/graph.js';\nimport { createSymbolTable } from '../../src/core/ingestion/symbol-table.js';\nimport { createASTCache } from '../../src/core/ingestion/ast-cache.js';\nimport { isLanguageAvailable } from '../../src/core/tree-sitter/parser-loader.js';\nimport { SupportedLanguages } from '../../src/config/supported-languages.js';\n\n// ============================================================================\n// E2E: .gitignore + .gitnexusignore + unsupported language skip\n// ============================================================================\n\ndescribe('ignore + language-skip E2E', () => {\n  let tmpDir: string;\n\n  beforeAll(async () => {\n    tmpDir = await fs.mkdtemp(path.join(os.tmpdir(), 'gn-e2e-ignore-skip-'));\n\n    // Create directory structure\n    await fs.mkdir(path.join(tmpDir, 'src'), { recursive: true });\n    await fs.mkdir(path.join(tmpDir, 'data'), { recursive: true });\n    await fs.mkdir(path.join(tmpDir, 'vendor'), { recursive: true });\n\n    // .gitignore — excludes data/ and *.log\n    await fs.writeFile(path.join(tmpDir, '.gitignore'), 'data/\\n*.log\\n');\n\n    // .gitnexusignore — excludes vendor/\n    await fs.writeFile(path.join(tmpDir, '.gitnexusignore'), 'vendor/\\n');\n\n    // Source files (should be indexed)\n    await fs.writeFile(\n      path.join(tmpDir, 'src', 'index.ts'),\n      \"import { greet } from './greet';\\n\\nexport function main(): string {\\n  return greet();\\n}\\n\",\n    );\n    await fs.writeFile(\n      path.join(tmpDir, 'src', 'greet.ts'),\n      \"export function greet(): string {\\n  return 
'hello';\\n}\\n\",\n    );\n\n    // Swift file — triggers language skip when grammar unavailable\n    await fs.writeFile(\n      path.join(tmpDir, 'src', 'App.swift'),\n      'class App {\\n    func run() {\\n        print(\"running\")\\n    }\\n}\\n',\n    );\n\n    // Files that should be excluded\n    await fs.writeFile(path.join(tmpDir, 'data', 'seed.json'), '{}');\n    await fs.writeFile(path.join(tmpDir, 'vendor', 'lib.js'), 'var x = 1;\\n');\n    await fs.writeFile(path.join(tmpDir, 'debug.log'), 'debug log entry\\n');\n  });\n\n  afterAll(async () => {\n    try {\n      await fs.rm(tmpDir, { recursive: true, force: true });\n    } catch { /* best-effort */ }\n  });\n\n  // ── File Discovery ──────────────────────────────────────────────────\n\n  describe('file discovery (walkRepositoryPaths)', () => {\n    it('includes source files from src/', async () => {\n      const files = await walkRepositoryPaths(tmpDir);\n      const paths = files.map(f => f.path.replace(/\\\\/g, '/'));\n\n      expect(paths).toContain('src/index.ts');\n      expect(paths).toContain('src/greet.ts');\n    });\n\n    it('includes .swift files (discovery does not filter by language)', async () => {\n      const files = await walkRepositoryPaths(tmpDir);\n      const paths = files.map(f => f.path.replace(/\\\\/g, '/'));\n\n      // Swift file should be discovered — language skip happens at parse time\n      expect(paths).toContain('src/App.swift');\n    });\n\n    it('excludes gitignored directories (data/)', async () => {\n      const files = await walkRepositoryPaths(tmpDir);\n      const paths = files.map(f => f.path.replace(/\\\\/g, '/'));\n\n      expect(paths.every(p => !p.includes('data/'))).toBe(true);\n    });\n\n    it('excludes gitignored file patterns (*.log)', async () => {\n      const files = await walkRepositoryPaths(tmpDir);\n      const paths = files.map(f => f.path.replace(/\\\\/g, '/'));\n\n      expect(paths.every(p => !p.endsWith('.log'))).toBe(true);\n    });\n\n 
   it('excludes gitnexusignored directories (vendor/)', async () => {\n      const files = await walkRepositoryPaths(tmpDir);\n      const paths = files.map(f => f.path.replace(/\\\\/g, '/'));\n\n      expect(paths.every(p => !p.includes('vendor/'))).toBe(true);\n    });\n  });\n\n  // ── Parsing ─────────────────────────────────────────────────────────\n\n  describe('parsing (processParsing)', () => {\n    it('parses TypeScript files into graph nodes and skips Swift gracefully', async () => {\n      // Phase 1: discover files\n      const scannedFiles = await walkRepositoryPaths(tmpDir);\n      const relativePaths = scannedFiles.map(f => f.path);\n\n      // Phase 2: read contents\n      const contentMap = await readFileContents(tmpDir, relativePaths);\n      const files = Array.from(contentMap.entries()).map(([p, content]) => ({\n        path: p,\n        content,\n      }));\n\n      // Phase 3: parse (sequential — no worker pool)\n      const graph = createKnowledgeGraph();\n      const symbolTable = createSymbolTable();\n      const astCache = createASTCache();\n\n      // Should NOT throw even if Swift grammar is unavailable\n      await processParsing(graph, files, symbolTable, astCache);\n\n      // TypeScript files should produce Function nodes\n      const nodes = graph.nodes;\n      const functionNodes = nodes.filter(n => n.label === 'Function');\n      const functionNames = functionNodes.map(n => n.properties.name);\n\n      expect(functionNames).toContain('main');\n      expect(functionNames).toContain('greet');\n\n      // Function nodes should reference the correct source files\n      const fnFilePaths = functionNodes.map(n =>\n        (n.properties.filePath as string).replace(/\\\\/g, '/'),\n      );\n      expect(fnFilePaths.some(p => p.includes('index.ts'))).toBe(true);\n      expect(fnFilePaths.some(p => p.includes('greet.ts'))).toBe(true);\n\n      // Swift behavior depends on grammar availability\n      if 
(!isLanguageAvailable(SupportedLanguages.Swift)) {\n        // No Swift-sourced nodes should appear in the graph\n        const swiftNodes = nodes.filter(n =>\n          (n.properties.filePath as string | undefined)?.endsWith('.swift'),\n        );\n        expect(swiftNodes).toHaveLength(0);\n      }\n      // If Swift IS available, Swift nodes may appear — that's fine\n    });\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/integration/lbug-core-adapter.test.ts",
    "content": "/**\n * P0 Integration Tests: Core LadybugDB Adapter\n *\n * Tests: loadGraphToLbug CSV round-trip, createFTSIndex, getLbugStats.\n *\n * IMPORTANT: All core adapter tests share ONE coreHandle and ONE coreInitLbug\n * call because the core adapter is a module-level singleton. Calling\n * coreInitLbug with a different path closes the previous native DB handle\n * and opens a new one — sharing a single handle avoids unnecessary churn.\n */\nimport { describe, it, expect } from 'vitest';\nimport fs from 'fs/promises';\nimport path from 'path';\nimport { withTestLbugDB } from '../helpers/test-indexed-db.js';\n\n// ─── Core LadybugDB Adapter ─────────────────────────────────────────────\n\nwithTestLbugDB('core-adapter', (handle) => {\n  describe('core adapter', () => {\n    it('loadGraphToLbug: loads a minimal graph and node counts match', async () => {\n      const { executeQuery: coreExecuteQuery } = await import('../../src/core/lbug/lbug-adapter.js');\n\n      // createMinimalTestGraph has 2 File, 2 Function, 1 Class, 1 Folder = 6 nodes\n      const fileRows = await coreExecuteQuery('MATCH (n:File) RETURN n.id AS id');\n      expect(fileRows).toHaveLength(2);\n\n      const funcRows = await coreExecuteQuery('MATCH (n:Function) RETURN n.id AS id');\n      expect(funcRows).toHaveLength(2);\n\n      const classRows = await coreExecuteQuery('MATCH (n:Class) RETURN n.id AS id');\n      expect(classRows).toHaveLength(1);\n\n      const folderRows = await coreExecuteQuery('MATCH (n:Folder) RETURN n.id AS id');\n      expect(folderRows).toHaveLength(1);\n    });\n\n    it('createFTSIndex: creates FTS index on Function table without error', async () => {\n      const { createFTSIndex } = await import('../../src/core/lbug/lbug-adapter.js');\n\n      await expect(\n        createFTSIndex('Function', 'function_fts', ['name', 'content']),\n      ).resolves.toBeUndefined();\n    });\n\n    it('getLbugStats: returns correct node and edge counts for seeded data', 
async () => {\n      const { getLbugStats } = await import('../../src/core/lbug/lbug-adapter.js');\n\n      const stats = await getLbugStats();\n\n      // createMinimalTestGraph: 6 nodes (2 File, 2 Function, 1 Class, 1 Folder)\n      expect(stats.nodes).toBe(6);\n\n      // 4 relationships (2 CALLS, 2 CONTAINS)\n      expect(stats.edges).toBe(4);\n    });\n\n    describe('unhappy path', () => {\n      it('throws on malformed Cypher query', async () => {\n        const { executeQuery } = await import('../../src/core/lbug/lbug-adapter.js');\n\n        // Deliberately broken syntax: MATCH without a pattern clause\n        await expect(executeQuery('MATCH RETURN 1')).rejects.toThrow();\n      });\n\n      it('returns empty results for query matching no nodes', async () => {\n        const { executeQuery } = await import('../../src/core/lbug/lbug-adapter.js');\n\n        // Valid Cypher, but the id will never exist in the seeded graph\n        const rows = await executeQuery(\n          \"MATCH (n:Function) WHERE n.id = '__nonexistent_id__' RETURN n.id AS id\",\n        );\n        expect(rows).toHaveLength(0);\n      });\n\n      it('handles query with non-existent table/node label', async () => {\n        const { executeQuery } = await import('../../src/core/lbug/lbug-adapter.js');\n\n        // LadybugDB throws when the node table does not exist in the schema\n        await expect(\n          executeQuery('MATCH (n:GhostTable) RETURN n'),\n        ).rejects.toThrow();\n      });\n    });\n\n    describe('error handling', () => {\n      it('createFTSIndex handles already-existing index gracefully', async () => {\n        const { createFTSIndex } = await import('../../src/core/lbug/lbug-adapter.js');\n\n        // First call creates the index (may already exist from earlier test)\n        await createFTSIndex('Function', 'function_fts_dup', ['name', 'content']);\n\n        // Second call with same params should NOT throw — createFTSIndex catches \"already exists\"\n   
     await expect(\n          createFTSIndex('Function', 'function_fts_dup', ['name', 'content']),\n        ).resolves.toBeUndefined();\n      });\n\n      it('getLbugStats returns valid counts', async () => {\n        const { getLbugStats } = await import('../../src/core/lbug/lbug-adapter.js');\n\n        // getLbugStats NEVER throws — it has silent catch blocks per table\n        const stats = await getLbugStats();\n        expect(typeof stats.nodes).toBe('number');\n        expect(typeof stats.edges).toBe('number');\n        expect(stats.nodes).toBeGreaterThanOrEqual(0);\n        expect(stats.edges).toBeGreaterThanOrEqual(0);\n      });\n\n      it('executeQuery with empty string rejects', async () => {\n        const { executeQuery } = await import('../../src/core/lbug/lbug-adapter.js');\n\n        // LadybugDB throws on empty query string\n        await expect(executeQuery('')).rejects.toThrow();\n      });\n\n      it('deleteNodesForFile with non-existent path returns zero deleted', async () => {\n        const { deleteNodesForFile } = await import('../../src/core/lbug/lbug-adapter.js');\n\n        // deleteNodesForFile has per-query try/catch, returns {deletedNodes: 0} for missing paths\n        const result = await deleteNodesForFile('/absolutely/nonexistent/path/file.ts');\n        expect(result).toEqual({ deletedNodes: 0 });\n      });\n    });\n  });\n}, {\n  afterSetup: async (handle) => {\n    // Load a minimal graph via CSV round-trip (core adapter is already initialized by wrapper)\n    const { loadGraphToLbug } = await import('../../src/core/lbug/lbug-adapter.js');\n    const { createMinimalTestGraph } = await import('../helpers/test-graph.js');\n\n    const graph = createMinimalTestGraph();\n    const storagePath = path.join(handle.tmpHandle.dbPath, 'storage');\n    await fs.mkdir(storagePath, { recursive: true });\n\n    await loadGraphToLbug(graph, '/test/repo', storagePath);\n  },\n});\n"
  },
  {
    "path": "gitnexus/test/integration/lbug-pool-stability.test.ts",
    "content": "/**\n * Integration Tests: LadybugDB Connection Pool — Parallel Stability\n *\n * Tests concurrency fixes from PR #349:\n *   - Pre-warmed pool handles max parallel queries\n *   - Waiter queue drains under overload\n *   - Concurrent initLbug deduplication\n *   - stdout.write restoration after parallel operations\n *   - No connection leaks over sequential workloads\n *   - Atomic pool entry visibility (pool.set last)\n *   - Mixed query types at full concurrency\n *\n * Connection budget: LadybugDB's native mmap budget caps at ~56\n * simultaneous Connection objects per process.  Tests that only need\n * a ready pool share a single 'shared-repo' init (8 connections) to\n * stay well under the limit.\n */\nimport { it, expect, afterAll } from 'vitest';\nimport {\n  initLbug,\n  executeQuery,\n  executeParameterized,\n  closeLbug,\n  isLbugReady,\n} from '../../src/mcp/core/lbug-adapter.js';\nimport { withTestLbugDB } from '../helpers/test-indexed-db.js';\n\nconst SEED_DATA = [\n  `CREATE (f:File {id: 'file:index.ts', name: 'index.ts', filePath: 'src/index.ts', content: ''})`,\n  `CREATE (fn:Function {id: 'func:main', name: 'main', filePath: 'src/index.ts', startLine: 1, endLine: 10, isExported: true, content: '', description: ''})`,\n  `CREATE (fn2:Function {id: 'func:helper', name: 'helper', filePath: 'src/utils.ts', startLine: 1, endLine: 5, isExported: true, content: '', description: ''})`,\n  `MATCH (a:Function), (b:Function)\n    WHERE a.id = 'func:main' AND b.id = 'func:helper'\n    CREATE (a)-[:CodeRelation {type: 'CALLS', confidence: 1.0, reason: 'direct', step: 0}]->(b)`,\n];\n\n// ─── Shared-pool tests: reuse one init across 5 tests (8 connections total) ──\n\nwithTestLbugDB('pool-stability', (handle) => {\n  const REPO = 'shared-par';\n  let inited = false;\n\n  const ensurePool = async () => {\n    if (!inited) {\n      await initLbug(REPO, handle.dbPath);\n      inited = true;\n    }\n  };\n\n  afterAll(async () => {\n    try { await 
closeLbug(REPO); } catch { /* best-effort */ }\n  });\n\n  it('8 simultaneous queries complete without crashes', async () => {\n    await ensurePool();\n    const queries = Array.from({ length: 8 }, () =>\n      executeQuery(REPO, 'MATCH (n:Function) RETURN n.name AS name')\n    );\n    const results = await Promise.all(queries);\n    expect(results).toHaveLength(8);\n    for (const rows of results) {\n      expect(rows.length).toBeGreaterThanOrEqual(2);\n    }\n  });\n\n  it('12 parallel queries overflow into waiter queue and all complete', async () => {\n    await ensurePool();\n    const queries = Array.from({ length: 12 }, () =>\n      executeQuery(REPO, 'MATCH (n:Function) RETURN n.name AS name')\n    );\n    const results = await Promise.all(queries);\n    expect(results).toHaveLength(12);\n    for (const rows of results) {\n      expect(rows.length).toBeGreaterThanOrEqual(2);\n    }\n  });\n\n  it('process.stdout.write is functional after init and parallel queries', async () => {\n    await ensurePool();\n    const queries = Array.from({ length: 8 }, () =>\n      executeQuery(REPO, 'MATCH (n:Function) RETURN n.name AS name')\n    );\n    await Promise.all(queries);\n    // stdout.write should be the real function, not the silenced no-op\n    expect(typeof process.stdout.write).toBe('function');\n  });\n\n  it('50 sequential queries do not leak connections', async () => {\n    await ensurePool();\n    for (let i = 0; i < 50; i++) {\n      await executeQuery(REPO, 'MATCH (n:Function) RETURN n.name AS name');\n    }\n    // If connections leaked, these 8 parallel queries would exhaust the pool\n    const queries = Array.from({ length: 8 }, () =>\n      executeQuery(REPO, 'MATCH (n:Function) RETURN n.name AS name')\n    );\n    const results = await Promise.all(queries);\n    expect(results).toHaveLength(8);\n  });\n\n  it('mixed executeQuery + executeParameterized at full concurrency', async () => {\n    await ensurePool();\n    const queries = [\n      
executeQuery(REPO, 'MATCH (n:Function) RETURN n.name AS name'),\n      executeParameterized(REPO, 'MATCH (n:Function) WHERE n.name = $name RETURN n.name AS name', { name: 'main' }),\n      executeQuery(REPO, 'MATCH (n:Function) RETURN n.name AS name'),\n      executeParameterized(REPO, 'MATCH (n:Function) WHERE n.name = $name RETURN n.name AS name', { name: 'helper' }),\n      executeQuery(REPO, 'MATCH (n:Function) RETURN n.name AS name'),\n      executeParameterized(REPO, 'MATCH (n:Function) WHERE n.name = $name RETURN n.name AS name', { name: 'main' }),\n      executeQuery(REPO, 'MATCH (n:Function) RETURN n.name AS name'),\n      executeQuery(REPO, 'MATCH (n:Function) RETURN n.name AS name'),\n    ];\n    const results = await Promise.all(queries);\n    expect(results).toHaveLength(8);\n  });\n\n  // ─── Regression tests for #308 / #314 / #347 ──────────────────────────\n  // The impact command's enrichment phase runs 3 concurrent queries via\n  // Promise.all (local-backend.ts:1415). 
Before PR #349's pool pre-warming,\n  // this triggered lazy createConnection → silenceStdout → SIGSEGV.\n\n  it('3 concurrent queries via Promise.all (impact enrichment pattern, #308/#314/#347)', async () => {\n    await ensurePool();\n    // Mirrors the exact Promise.all pattern from local-backend.ts:1415\n    // that caused SIGSEGV in issues #308, #314, #347 before pool pre-warming.\n    const [r1, r2, r3] = await Promise.all([\n      executeQuery(REPO, 'MATCH (n:Function) RETURN n.name AS name LIMIT 5').catch(() => []),\n      executeQuery(REPO, 'MATCH (n:Function) RETURN n.id AS id LIMIT 5').catch(() => []),\n      executeQuery(REPO, 'MATCH ()-[r:CodeRelation]->() RETURN r.type AS type LIMIT 5').catch(() => []),\n    ]);\n    expect(r1.length).toBeGreaterThanOrEqual(1);\n    expect(r2.length).toBeGreaterThanOrEqual(1);\n    expect(r3.length).toBeGreaterThanOrEqual(1);\n  });\n\n  // ─── Fresh-state tests: need their own init ──────────────────────────\n\n  it('concurrent initLbug calls for the same repoId deduplicate', async () => {\n    const promises = Array.from({ length: 6 }, () =>\n      initLbug('dedup-repo', handle.dbPath)\n    );\n    await Promise.all(promises);\n    expect(isLbugReady('dedup-repo')).toBe(true);\n    const rows = await executeQuery('dedup-repo', 'MATCH (n:Function) RETURN n.name AS name');\n    expect(rows.length).toBeGreaterThanOrEqual(2);\n    await closeLbug('dedup-repo');\n  });\n\n  it('pool entry is not visible until initLbug fully resolves', async () => {\n    const initPromise = initLbug('atomic-repo', handle.dbPath);\n    // Synchronously, pool should NOT be ready (pool.set is the last operation)\n    expect(isLbugReady('atomic-repo')).toBe(false);\n    await initPromise;\n    expect(isLbugReady('atomic-repo')).toBe(true);\n    await closeLbug('atomic-repo');\n  });\n}, {\n  seed: SEED_DATA,\n  poolAdapter: true,\n});\n"
  },
  {
    "path": "gitnexus/test/integration/lbug-pool.test.ts",
    "content": "/**\n * P0 Integration Tests: LadybugDB Connection Pool\n *\n * Tests: initLbug, executeQuery, executeParameterized, closeLbug lifecycle\n * Covers hardening fixes: parameterized queries, query timeout,\n * waiter queue timeout, idle eviction guards, stdout silencing race\n */\nimport { describe, it, expect, afterEach } from 'vitest';\nimport {\n  initLbug,\n  executeQuery,\n  executeParameterized,\n  closeLbug,\n  isLbugReady,\n} from '../../src/mcp/core/lbug-adapter.js';\nimport { withTestLbugDB } from '../helpers/test-indexed-db.js';\n\nconst POOL_SEED_DATA = [\n  `CREATE (f:File {id: 'file:index.ts', name: 'index.ts', filePath: 'src/index.ts', content: ''})`,\n  `CREATE (fn:Function {id: 'func:main', name: 'main', filePath: 'src/index.ts', startLine: 1, endLine: 10, isExported: true, content: '', description: ''})`,\n  `CREATE (fn2:Function {id: 'func:helper', name: 'helper', filePath: 'src/utils.ts', startLine: 1, endLine: 5, isExported: true, content: '', description: ''})`,\n  `MATCH (a:Function), (b:Function)\n    WHERE a.id = 'func:main' AND b.id = 'func:helper'\n    CREATE (a)-[:CodeRelation {type: 'CALLS', confidence: 1.0, reason: 'direct', step: 0}]->(b)`,\n];\n\n// ─── Pool lifecycle tests — test the pool adapter API directly ───────\n\nwithTestLbugDB('lbug-pool', (handle) => {\n  afterEach(async () => {\n    try { await closeLbug('test-repo'); } catch { /* best-effort */ }\n    try { await closeLbug('repo1'); } catch { /* best-effort */ }\n    try { await closeLbug('repo2'); } catch { /* best-effort */ }\n    try { await closeLbug(''); } catch { /* best-effort */ }\n  });\n\n  // ─── Lifecycle: init → query → close ─────────────────────────────────\n\n  describe('pool lifecycle', () => {\n    it('initLbug + executeQuery + closeLbug', async () => {\n      await initLbug('test-repo', handle.dbPath);\n      expect(isLbugReady('test-repo')).toBe(true);\n\n      const rows = await executeQuery('test-repo', 'MATCH (n:Function) RETURN n.name 
AS name');\n      expect(rows.length).toBeGreaterThanOrEqual(2);\n      const names = rows.map((r: any) => r.name);\n      expect(names).toContain('main');\n      expect(names).toContain('helper');\n\n      await closeLbug('test-repo');\n      expect(isLbugReady('test-repo')).toBe(false);\n    });\n\n    it('initLbug reuses existing pool entry', async () => {\n      await initLbug('test-repo', handle.dbPath);\n      await initLbug('test-repo', handle.dbPath); // second call should be no-op\n      expect(isLbugReady('test-repo')).toBe(true);\n    });\n\n    it('closeLbug is idempotent', async () => {\n      await initLbug('test-repo', handle.dbPath);\n      await closeLbug('test-repo');\n      await closeLbug('test-repo'); // second close should not throw\n      expect(isLbugReady('test-repo')).toBe(false);\n    });\n\n    it('closeLbug with no args closes all repos', async () => {\n      await initLbug('repo1', handle.dbPath);\n      await initLbug('repo2', handle.dbPath);\n      expect(isLbugReady('repo1')).toBe(true);\n      expect(isLbugReady('repo2')).toBe(true);\n\n      await closeLbug();\n      expect(isLbugReady('repo1')).toBe(false);\n      expect(isLbugReady('repo2')).toBe(false);\n    });\n  });\n\n  // ─── Parameterized queries ───────────────────────────────────────────\n\n  describe('executeParameterized', () => {\n    it('works with parameterized query', async () => {\n      await initLbug('test-repo', handle.dbPath);\n      const rows = await executeParameterized(\n        'test-repo',\n        'MATCH (n:Function) WHERE n.name = $name RETURN n.name AS name',\n        { name: 'main' },\n      );\n      expect(rows).toHaveLength(1);\n      expect(rows[0].name).toBe('main');\n    });\n\n    it('injection attempt is harmless with parameterized query', async () => {\n      await initLbug('test-repo', handle.dbPath);\n      const rows = await executeParameterized(\n        'test-repo',\n        'MATCH (n:Function) WHERE n.name = $name RETURN n.name AS 
name',\n        { name: \"' OR 1=1 --\" }, // SQL/Cypher injection attempt\n      );\n      // Should return 0 rows, not all rows\n      expect(rows).toHaveLength(0);\n    });\n  });\n\n  // ─── Error handling ──────────────────────────────────────────────────\n\n  describe('error handling', () => {\n    it('throws when querying uninitialized repo', async () => {\n      await expect(executeQuery('nonexistent-repo', 'MATCH (n) RETURN n'))\n        .rejects.toThrow(/not initialized/);\n    });\n\n    it('throws when db path does not exist', async () => {\n      await expect(initLbug('bad-repo', '/nonexistent/path/lbug'))\n        .rejects.toThrow();\n    });\n\n    it('read-only mode: write query throws', async () => {\n      await initLbug('test-repo', handle.dbPath);\n      await expect(executeQuery('test-repo', \"CREATE (n:Function {id: 'new', name: 'new', filePath: '', startLine: 0, endLine: 0, isExported: false, content: '', description: ''})\"))\n        .rejects.toThrow();\n    });\n  });\n\n  // ─── Relationship queries ────────────────────────────────────────────\n\n  describe('relationship queries', () => {\n    it('can query relationships', async () => {\n      await initLbug('test-repo', handle.dbPath);\n      const rows = await executeQuery(\n        'test-repo',\n        `MATCH (a:Function)-[r:CodeRelation {type: 'CALLS'}]->(b:Function) RETURN a.name AS caller, b.name AS callee`,\n      );\n      expect(rows.length).toBeGreaterThanOrEqual(1);\n      const row = rows.find((r: any) => r.caller === 'main');\n      expect(row).toBeDefined();\n      expect(row.callee).toBe('helper');\n    });\n  });\n\n  // ─── Unhappy paths ──────────────────────────────────────────────────\n\n  describe('unhappy paths', () => {\n    it('executeParameterized throws when repo is not initialized', async () => {\n      await expect(executeParameterized('ghost-repo', 'MATCH (n) RETURN n', {}))\n        .rejects.toThrow(/not initialized/);\n    });\n\n    it('executeQuery 
rejects invalid Cypher syntax', async () => {\n      await initLbug('test-repo', handle.dbPath);\n      await expect(executeQuery('test-repo', 'THIS IS NOT CYPHER'))\n        .rejects.toThrow();\n    });\n\n    it('executeParameterized rejects when referenced parameter is missing', async () => {\n      await initLbug('test-repo', handle.dbPath);\n      await expect(executeParameterized(\n        'test-repo',\n        'MATCH (n:Function) WHERE n.name = $name RETURN n',\n        { wrong_param: 'main' },\n      )).rejects.toThrow();\n    });\n\n    it('closeLbug with unknown repoId does not throw', async () => {\n      await expect(closeLbug('never-existed-repo')).resolves.toBeUndefined();\n    });\n\n    it('isLbugReady returns false for unknown repoId', () => {\n      expect(isLbugReady('never-existed-repo')).toBe(false);\n    });\n\n    it('initLbug with empty string repoId stores entry under empty key', async () => {\n      await initLbug('', handle.dbPath);\n      expect(isLbugReady('')).toBe(true);\n      await closeLbug('');\n      expect(isLbugReady('')).toBe(false);\n    });\n\n    it('executeQuery with empty query string rejects', async () => {\n      await initLbug('test-repo', handle.dbPath);\n      await expect(executeQuery('test-repo', '')).rejects.toThrow();\n    });\n  });\n}, {\n  seed: POOL_SEED_DATA,\n  poolAdapter: true,\n});\n"
  },
  {
    "path": "gitnexus/test/integration/local-backend-calltool.test.ts",
    "content": "/**\n * P0 Integration Tests: Local Backend — callTool dispatch\n *\n * Tests the full LocalBackend.callTool() dispatch with a real LadybugDB\n * instance, verifying cypher, context, impact, and query tools work\n * end-to-end against seeded graph data with FTS indexes.\n */\nimport { describe, it, expect, beforeAll, vi } from 'vitest';\nimport { LocalBackend } from '../../src/mcp/local/local-backend.js';\nimport { listRegisteredRepos } from '../../src/storage/repo-manager.js';\nimport { withTestLbugDB } from '../helpers/test-indexed-db.js';\nimport { LOCAL_BACKEND_SEED_DATA, LOCAL_BACKEND_FTS_INDEXES } from '../fixtures/local-backend-seed.js';\n\nvi.mock('../../src/storage/repo-manager.js', () => ({\n  listRegisteredRepos: vi.fn().mockResolvedValue([]),\n  cleanupOldKuzuFiles: vi.fn().mockResolvedValue({ found: false, needsReindex: false }),\n}));\n\n// ─── Block 2: callTool dispatch tests ────────────────────────────────\n\nwithTestLbugDB('local-backend-calltool', (handle) => {\n\n  describe('callTool dispatch with real DB', () => {\n    let backend: LocalBackend;\n\n    beforeAll(async () => {\n      // backend is created in afterSetup and attached to the handle\n      const ext = handle as typeof handle & { _backend?: LocalBackend };\n      if (!ext._backend) {\n        throw new Error('LocalBackend not initialized — afterSetup did not attach _backend to handle');\n      }\n      backend = ext._backend;\n    });\n\n    it('cypher tool returns function names', async () => {\n      const result = await backend.callTool('cypher', {\n        query: 'MATCH (n:Function) RETURN n.name AS name ORDER BY n.name',\n      });\n      // cypher tool wraps results as markdown\n      expect(result).toHaveProperty('markdown');\n      expect(result).toHaveProperty('row_count');\n      expect(result.row_count).toBeGreaterThanOrEqual(3);\n      expect(result.markdown).toContain('login');\n      expect(result.markdown).toContain('validate');\n      
expect(result.markdown).toContain('hash');\n    });\n\n    it('cypher tool blocks write queries', async () => {\n      const result = await backend.callTool('cypher', {\n        query: \"CREATE (n:Function {id: 'x', name: 'x', filePath: '', startLine: 0, endLine: 0, isExported: false, content: '', description: ''})\",\n      });\n      expect(result).toHaveProperty('error');\n      expect(result.error).toMatch(/write operations/i);\n    });\n\n    it('context tool returns symbol info with callers and callees', async () => {\n      const result = await backend.callTool('context', { name: 'login' });\n      expect(result).not.toHaveProperty('error');\n      expect(result.status).toBe('found');\n      // Should have the symbol identity\n      expect(result.symbol).toBeDefined();\n      expect(result.symbol.name).toBe('login');\n      expect(result.symbol.filePath).toBe('src/auth.ts');\n      // login calls validate and hash — should appear in outgoing.calls\n      expect(result.outgoing).toBeDefined();\n      expect(result.outgoing.calls).toBeDefined();\n      expect(result.outgoing.calls.length).toBeGreaterThanOrEqual(2);\n      const calleeNames = result.outgoing.calls.map((c: any) => c.name);\n      expect(calleeNames).toContain('validate');\n      expect(calleeNames).toContain('hash');\n    });\n\n    it('impact tool returns upstream dependents', async () => {\n      const result = await backend.callTool('impact', {\n        target: 'validate',\n        direction: 'upstream',\n      });\n      expect(result).not.toHaveProperty('error');\n      // validate is called by login, so login should appear at depth 1\n      expect(result.impactedCount).toBeGreaterThanOrEqual(1);\n      expect(result.byDepth).toBeDefined();\n      const directDeps = result.byDepth[1] || result.byDepth['1'] || [];\n      expect(directDeps.length).toBeGreaterThanOrEqual(1);\n      const depNames = directDeps.map((d: any) => d.name);\n      expect(depNames).toContain('login');\n    });\n\n    
it('query tool returns results for keyword search', async () => {\n      const result = await backend.callTool('query', { query: 'login' });\n      expect(result).not.toHaveProperty('error');\n      // Should have some combination of processes, process_symbols, or definitions\n      expect(result).toHaveProperty('processes');\n      expect(result).toHaveProperty('definitions');\n      // The search should find something (FTS or graph-based)\n      const totalResults =\n        (result.processes?.length || 0) +\n        (result.process_symbols?.length || 0) +\n        (result.definitions?.length || 0);\n      expect(totalResults).toBeGreaterThanOrEqual(1);\n    });\n\n    it('unknown tool throws', async () => {\n      await expect(\n        backend.callTool('nonexistent_tool', {}),\n      ).rejects.toThrow(/unknown tool/i);\n    });\n  });\n\n  describe('impact tool relationTypes filtering', () => {\n    let backend: LocalBackend;\n\n    beforeAll(async () => {\n      const ext = handle as typeof handle & { _backend?: LocalBackend };\n      if (!ext._backend) {\n        throw new Error('LocalBackend not initialized — afterSetup did not attach _backend to handle');\n      }\n      backend = ext._backend;\n    });\n\n    it('filters by HAS_METHOD only', async () => {\n      const result = await backend.callTool('impact', {\n        target: 'AuthService',\n        direction: 'downstream',\n        relationTypes: ['HAS_METHOD'],\n      });\n      expect(result).not.toHaveProperty('error');\n      expect(result.impactedCount).toBeGreaterThanOrEqual(1);\n      const d1 = result.byDepth[1] || result.byDepth['1'] || [];\n      const names = d1.map((d: any) => d.name);\n      expect(names).toContain('authenticate');\n      // Should NOT include CALLS-reachable symbols like validate/hash\n      expect(names).not.toContain('validate');\n      expect(names).not.toContain('hash');\n    });\n\n    it('filters by OVERRIDES only', async () => {\n      const result = await 
backend.callTool('impact', {\n        target: 'authenticate',\n        direction: 'downstream',\n        relationTypes: ['OVERRIDES'],\n      });\n      expect(result).not.toHaveProperty('error');\n      // AuthService.authenticate overrides BaseService.authenticate\n      expect(result.impactedCount).toBeGreaterThanOrEqual(1);\n      const d1 = result.byDepth[1] || result.byDepth['1'] || [];\n      const names = d1.map((d: any) => d.name);\n      expect(names).toContain('authenticate');\n    });\n\n    it('does not return HAS_METHOD results when filtering by CALLS only', async () => {\n      const result = await backend.callTool('impact', {\n        target: 'AuthService',\n        direction: 'downstream',\n        relationTypes: ['CALLS'],\n      });\n      expect(result).not.toHaveProperty('error');\n      // AuthService has no outgoing CALLS edges, only HAS_METHOD\n      expect(result.impactedCount).toBe(0);\n    });\n  });\n\n  describe('tool parameter edge cases', () => {\n    let backend: LocalBackend;\n\n    beforeAll(async () => {\n      const ext = handle as typeof handle & { _backend?: LocalBackend };\n      if (!ext._backend) {\n        throw new Error('LocalBackend not initialized — afterSetup did not attach _backend to handle');\n      }\n      backend = ext._backend;\n    });\n\n    it('context tool returns error for nonexistent symbol', async () => {\n      const result = await backend.callTool('context', { name: 'nonexistent_xyz_symbol_999' });\n      expect(result).toHaveProperty('error');\n      expect(result.error).toMatch(/not found/i);\n    });\n\n    it('query tool returns error for empty query', async () => {\n      const result = await backend.callTool('query', { query: '' });\n      expect(result).toHaveProperty('error');\n      expect(result.error).toMatch(/required/i);\n    });\n\n    it('query tool returns error for missing query param', async () => {\n      const result = await backend.callTool('query', {});\n      
expect(result).toHaveProperty('error');\n    });\n\n    it('cypher tool returns error for invalid Cypher syntax', async () => {\n      const result = await backend.callTool('cypher', { query: 'THIS IS NOT VALID CYPHER AT ALL' });\n      expect(result).toHaveProperty('error');\n    });\n\n    it('context tool returns error when no name or uid provided', async () => {\n      const result = await backend.callTool('context', {});\n      expect(result).toHaveProperty('error');\n      expect(result.error).toMatch(/required/i);\n    });\n\n    // ─── impact error handling tests (#321) ───────────────────────────\n    // Verify that impact() returns structured JSON instead of crashing\n\n    it('impact tool returns structured error for unknown symbol', async () => {\n      const result = await backend.callTool('impact', {\n        target: 'nonexistent_symbol_xyz_999',\n        direction: 'upstream',\n      });\n      // Must return structured JSON, not throw\n      expect(result).toBeDefined();\n      // Should have either an error field (not found) or impactedCount 0\n      // Either outcome is valid — the key is it doesn't crash\n      if (result.error) {\n        expect(typeof result.error).toBe('string');\n      } else {\n        expect(result.impactedCount).toBe(0);\n      }\n    });\n\n    it('impact error response has consistent target shape', async () => {\n      const result = await backend.callTool('impact', {\n        target: 'nonexistent_symbol_xyz_999',\n        direction: 'downstream',\n      });\n      // When an error is returned, target must be an object (not raw string)\n      // so downstream API consumers can safely access result.target.name\n      if (result.error && result.target !== undefined) {\n        expect(typeof result.target).toBe('object');\n        expect(result.target).not.toBeNull();\n      }\n    });\n\n    it('impact partial results: traversalComplete flag when depth fails', async () => {\n      // Even if traversal fails at some depth, 
partial results should be returned\n      // and partial:true should only be set when some results were collected\n      const result = await backend.callTool('impact', {\n        target: 'validate',\n        direction: 'upstream',\n        maxDepth: 10, // Large depth to trigger multi-level traversal\n      });\n      // Should succeed (validate exists in seed data)\n      expect(result).not.toHaveProperty('error');\n      if (result.partial) {\n        // If partial, must still have some results\n        expect(result.impactedCount).toBeGreaterThan(0);\n      }\n    });\n  });\n\n}, {\n  seed: LOCAL_BACKEND_SEED_DATA,\n  ftsIndexes: LOCAL_BACKEND_FTS_INDEXES,\n  poolAdapter: true,\n  afterSetup: async (handle) => {\n    // Configure listRegisteredRepos mock with handle values\n    vi.mocked(listRegisteredRepos).mockResolvedValue([\n      {\n        name: 'test-repo',\n        path: '/test/repo',\n        storagePath: handle.tmpHandle.dbPath,\n        indexedAt: new Date().toISOString(),\n        lastCommit: 'abc123',\n        stats: { files: 2, nodes: 3, communities: 1, processes: 1 },\n      },\n    ]);\n\n    const backend = new LocalBackend();\n    await backend.init();\n    // Stash backend on handle so tests can access it\n    (handle as any)._backend = backend;\n  },\n});\n"
  },
  {
    "path": "gitnexus/test/integration/local-backend.test.ts",
    "content": "/**\n * P0 Integration Tests: Local Backend\n *\n * Tests tool implementations via direct LadybugDB queries.\n * The full LocalBackend.callTool() requires a global registry,\n * so here we test the security-critical behaviors directly:\n * - Write-operation blocking in cypher\n * - Query execution via the pool\n * - Parameterized queries preventing injection\n * - Read-only enforcement\n *\n * Covers hardening fixes: #1 (parameterized queries), #2 (write blocking),\n * #3 (path traversal), #4 (relation allowlist), #25 (regex lastIndex),\n * #26 (rename first-occurrence-only)\n */\nimport { describe, it, expect } from 'vitest';\nimport {\n  CYPHER_WRITE_RE,\n  executeQuery,\n  executeParameterized,\n  isWriteQuery,\n} from '../../src/mcp/core/lbug-adapter.js';\nimport { VALID_RELATION_TYPES } from '../../src/mcp/local/local-backend.js';\nimport { withTestLbugDB } from '../helpers/test-indexed-db.js';\nimport { LOCAL_BACKEND_SEED_DATA } from '../fixtures/local-backend-seed.js';\n\n// ─── Block 1: Pool adapter tests ─────────────────────────────────────\n\nwithTestLbugDB('local-backend', (handle) => {\n\n  // ─── Cypher write blocking ───────────────────────────────────────────\n\n  describe('cypher write blocking', () => {\n    const allWriteKeywords = ['CREATE', 'DELETE', 'SET', 'MERGE', 'REMOVE', 'DROP', 'ALTER', 'COPY', 'DETACH'];\n\n    for (const keyword of allWriteKeywords) {\n      it(`blocks ${keyword} query`, () => {\n        const blocked = isWriteQuery(`MATCH (n) ${keyword} n.name = \"x\"`);\n        expect(blocked).toBe(true);\n      });\n    }\n\n    it('allows valid read queries through the pool', async () => {\n      const rows = await executeQuery(handle.repoId, 'MATCH (n:Function) RETURN n.name AS name ORDER BY n.name');\n      expect(rows.length).toBeGreaterThanOrEqual(3);\n    });\n  });\n\n  // ─── Parameterized queries ───────────────────────────────────────────\n\n  describe('parameterized queries', () => {\n    it('finds exact 
match with parameter', async () => {\n      const rows = await executeParameterized(\n        handle.repoId,\n        'MATCH (n:Function) WHERE n.name = $name RETURN n.name AS name, n.filePath AS filePath',\n        { name: 'login' },\n      );\n      expect(rows).toHaveLength(1);\n      expect(rows[0].name).toBe('login');\n      expect(rows[0].filePath).toBe('src/auth.ts');\n    });\n\n    it('injection is harmless', async () => {\n      const rows = await executeParameterized(\n        handle.repoId,\n        'MATCH (n:Function) WHERE n.name = $name RETURN n.name AS name',\n        { name: \"login' OR '1'='1\" },\n      );\n      expect(rows).toHaveLength(0);\n    });\n  });\n\n  // ─── Relation type filtering ─────────────────────────────────────────\n\n  describe('relation type filtering', () => {\n    it('only allows valid relation types in queries', () => {\n      const validTypes = ['CALLS', 'IMPORTS', 'EXTENDS', 'IMPLEMENTS', 'HAS_METHOD', 'OVERRIDES', 'ACCESSES'];\n      const invalidTypes = ['CONTAINS', 'STEP_IN_PROCESS', 'MEMBER_OF', 'DROP_TABLE'];\n\n      for (const t of validTypes) {\n        expect(VALID_RELATION_TYPES.has(t)).toBe(true);\n      }\n      for (const t of invalidTypes) {\n        expect(VALID_RELATION_TYPES.has(t)).toBe(false);\n      }\n    });\n\n    it('can query relationships with valid types', async () => {\n      const rows = await executeQuery(\n        handle.repoId,\n        `MATCH (a:Function)-[r:CodeRelation {type: 'CALLS'}]->(b:Function) RETURN a.name AS caller, b.name AS callee ORDER BY b.name`,\n      );\n      expect(rows.length).toBeGreaterThanOrEqual(2);\n    });\n  });\n\n  // ─── Process queries ─────────────────────────────────────────────────\n\n  describe('process queries', () => {\n    it('can find processes', async () => {\n      const rows = await executeQuery(handle.repoId, 'MATCH (p:Process) RETURN p.heuristicLabel AS label, p.stepCount AS steps');\n      expect(rows.length).toBeGreaterThanOrEqual(1);\n      
expect(rows[0].label).toBe('User Login');\n    });\n\n    it('can trace process steps', async () => {\n      const rows = await executeQuery(\n        handle.repoId,\n        `MATCH (s)-[r:CodeRelation {type: 'STEP_IN_PROCESS'}]->(p:Process)\n         WHERE p.id = 'proc:login-flow'\n         RETURN s.name AS symbol, r.step AS step\n         ORDER BY r.step`,\n      );\n      expect(rows).toHaveLength(2);\n      expect(rows[0].symbol).toBe('login');\n      expect(rows[0].step).toBe(1);\n      expect(rows[1].symbol).toBe('validate');\n      expect(rows[1].step).toBe(2);\n    });\n  });\n\n  // ─── Community queries ───────────────────────────────────────────────\n\n  describe('community queries', () => {\n    it('can find communities', async () => {\n      const rows = await executeQuery(handle.repoId, 'MATCH (c:Community) RETURN c.heuristicLabel AS label');\n      expect(rows.length).toBeGreaterThanOrEqual(1);\n      expect(rows[0].label).toBe('Authentication');\n    });\n\n    it('can find community members', async () => {\n      const rows = await executeQuery(\n        handle.repoId,\n        `MATCH (f)-[:CodeRelation {type: 'MEMBER_OF'}]->(c:Community)\n         WHERE c.heuristicLabel = 'Authentication'\n         RETURN f.name AS name`,\n      );\n      expect(rows.length).toBeGreaterThanOrEqual(1);\n      expect(rows[0].name).toBe('login');\n    });\n  });\n\n  // ─── Read-only enforcement ───────────────────────────────────────────\n\n  describe('read-only database', () => {\n    it('rejects write operations at DB level', async () => {\n      await expect(\n        executeQuery(handle.repoId, `CREATE (n:Function {id: 'new', name: 'new', filePath: '', startLine: 0, endLine: 0, isExported: false, content: '', description: ''})`)\n      ).rejects.toThrow();\n    });\n  });\n\n  // ─── Regex lastIndex hardening (#25) ─────────────────────────────────\n\n  describe('regex lastIndex (hardening #25)', () => {\n    it('CYPHER_WRITE_RE is non-global (no sticky 
lastIndex)', () => {\n      expect(CYPHER_WRITE_RE.global).toBe(false);\n      expect(CYPHER_WRITE_RE.sticky).toBe(false);\n    });\n\n    it('works correctly across multiple consecutive calls', () => {\n      // If the regex were global, lastIndex could cause false results\n      const results = [\n        isWriteQuery('CREATE (n)'),     // true\n        isWriteQuery('MATCH (n) RETURN n'), // false\n        isWriteQuery('DELETE n'),       // true\n        isWriteQuery('MATCH (n) RETURN n'), // false\n        isWriteQuery('SET n.x = 1'),    // true\n      ];\n      expect(results).toEqual([true, false, true, false, true]);\n    });\n  });\n\n  // ─── Content queries (include_content equivalent) ────────────────────\n\n  describe('content queries', () => {\n    it('can retrieve symbol content', async () => {\n      const rows = await executeQuery(\n        handle.repoId,\n        `MATCH (n:Function) WHERE n.name = 'login' RETURN n.content AS content`,\n      );\n      expect(rows).toHaveLength(1);\n      expect(rows[0].content).toContain('function login');\n    });\n  });\n\n  // ─── Write blocking edge cases ──────────────────────────────────────\n\n  describe('write blocking edge cases', () => {\n    it('blocks lowercase write keywords (case-insensitive)', () => {\n      expect(isWriteQuery('create (n:Function {id: \"x\"})')).toBe(true);\n      expect(isWriteQuery('delete n')).toBe(true);\n      expect(isWriteQuery('set n.name = \"x\"')).toBe(true);\n    });\n\n    it('does not block write keywords embedded in longer words (regex is word-boundary aware)', () => {\n      // CYPHER_WRITE_RE uses \\b word boundaries — \"CREATED\" does NOT match \"CREATE\"\n      const result = isWriteQuery(\"MATCH (n) WHERE n.name = 'CREATED' RETURN n\");\n      expect(result).toBe(false);\n    });\n\n    it('blocks multi-line queries with write keywords', () => {\n      
expect(isWriteQuery('MATCH (n)\\nDELETE n')).toBe(true);\n    });\n\n    it('returns false for empty string', () => {\n      expect(isWriteQuery('')).toBe(false);\n    });\n\n    it('returns false for whitespace-only query', () => {\n      expect(isWriteQuery('   ')).toBe(false);\n    });\n  });\n\n  // ─── Query error handling via pool ──────────────────────────────────\n\n  describe('query error handling via pool', () => {\n    it('rejects queries with an unknown node label', async () => {\n      // LadybugDB throws a Binder exception for unknown node labels\n      await expect(\n        executeQuery(handle.repoId, 'MATCH (n:NonExistentTable) RETURN n.name AS name')\n      ).rejects.toThrow();\n    });\n\n    it('rejects syntactically invalid Cypher', async () => {\n      await expect(executeQuery(handle.repoId, 'NOT VALID CYPHER AT ALL'))\n        .rejects.toThrow();\n    });\n  });\n\n  // ─── Parameterized query edge cases ─────────────────────────────────\n\n  describe('parameterized query edge cases', () => {\n    it('succeeds with empty params when query has no parameters', async () => {\n      const rows = await executeParameterized(\n        handle.repoId,\n        'MATCH (n:Function) RETURN n.name AS name LIMIT 1',\n        {},\n      );\n      // A vacuous length >= 0 check proves nothing; assert the call resolved to an array\n      expect(Array.isArray(rows)).toBe(true);\n    });\n\n    it('returns empty rows when param value is null', async () => {\n      const rows = await executeParameterized(\n        handle.repoId,\n        'MATCH (n:Function) WHERE n.name = $name RETURN n.name AS name',\n        { name: null as any },\n      );\n      expect(rows).toHaveLength(0);\n    });\n  });\n\n}, {\n  seed: LOCAL_BACKEND_SEED_DATA,\n  poolAdapter: true,\n});\n"
  },
  {
    "path": "gitnexus/test/integration/parsing.test.ts",
    "content": "/**\n * P1 Integration Tests: Tree-sitter Parsing\n *\n * Tests parsing of sample files via tree-sitter.\n * Covers hardening fixes: Swift init constructor (#18),\n * PHP export detection (#20), symbol ID with startLine (#19),\n * definition node range (#22).\n */\nimport { describe, it, expect, beforeAll } from 'vitest';\nimport fs from 'fs/promises';\nimport path from 'path';\nimport { createKnowledgeGraph } from '../../src/core/graph/graph.js';\nimport { isNodeExported } from '../../src/core/ingestion/parsing-processor.js';\nimport { loadParser, loadLanguage } from '../../src/core/tree-sitter/parser-loader.js';\nimport { getLanguageFromFilename } from '../../src/core/ingestion/utils.js';\nimport { SupportedLanguages } from '../../src/config/supported-languages.js';\n\nconst FIXTURES_DIR = path.join(process.cwd(), 'test', 'fixtures', 'sample-code');\n\n// We test isNodeExported directly since it's a pure function\n// that only needs a mock AST node, name, and language string.\n\n/**\n * Minimal mock of a tree-sitter AST node.\n */\nfunction mockNode(type: string, text: string = '', parent?: any, children?: any[], fields?: Record<string, any>): any {\n  const node: any = {\n    type,\n    text,\n    parent: parent || null,\n    childCount: children?.length ?? 0,\n    child: (i: number) => children?.[i] ?? null,\n    childForFieldName: (name: string) => fields?.[name] ?? 
null,\n  };\n  // Set parent references on children\n  if (children) {\n    for (const child of children) {\n      child.parent = node;\n    }\n  }\n  return node;\n}\n\n// ─── isNodeExported per-language ─────────────────────────────────────\n\ndescribe('parsing', () => {\n  describe('isNodeExported', () => {\n    // TypeScript/JavaScript\n    describe('typescript', () => {\n      it('returns true when ancestor is export_statement', () => {\n        const exportStmt = mockNode('export_statement', 'export function foo() {}');\n        const fnDecl = mockNode('function_declaration', 'function foo() {}', exportStmt);\n        const nameNode = mockNode('identifier', 'foo', fnDecl);\n        expect(isNodeExported(nameNode, 'foo', 'typescript')).toBe(true);\n      });\n\n      it('returns false for non-exported function', () => {\n        const fnDecl = mockNode('function_declaration', 'function foo() {}');\n        const nameNode = mockNode('identifier', 'foo', fnDecl);\n        expect(isNodeExported(nameNode, 'foo', 'typescript')).toBe(false);\n      });\n\n      it('returns true when text starts with \"export \"', () => {\n        const parent = mockNode('lexical_declaration', 'export const foo = 1');\n        const nameNode = mockNode('identifier', 'foo', parent);\n        expect(isNodeExported(nameNode, 'foo', 'typescript')).toBe(true);\n      });\n    });\n\n    // Python\n    describe('python', () => {\n      it('public function (no underscore prefix)', () => {\n        const node = mockNode('identifier', 'public_function');\n        expect(isNodeExported(node, 'public_function', 'python')).toBe(true);\n      });\n\n      it('private function (underscore prefix)', () => {\n        const node = mockNode('identifier', '_private_helper');\n        expect(isNodeExported(node, '_private_helper', 'python')).toBe(false);\n      });\n\n      it('dunder method is private', () => {\n        const node = mockNode('identifier', '__init__');\n        
expect(isNodeExported(node, '__init__', 'python')).toBe(false);\n      });\n    });\n\n    // Go\n    describe('go', () => {\n      it('uppercase first letter is exported', () => {\n        const node = mockNode('identifier', 'ExportedFunction');\n        expect(isNodeExported(node, 'ExportedFunction', 'go')).toBe(true);\n      });\n\n      it('lowercase first letter is unexported', () => {\n        const node = mockNode('identifier', 'unexportedFunction');\n        expect(isNodeExported(node, 'unexportedFunction', 'go')).toBe(false);\n      });\n\n      it('empty name is not exported', () => {\n        const node = mockNode('identifier', '');\n        expect(isNodeExported(node, '', 'go')).toBe(false);\n      });\n    });\n\n    // Rust\n    describe('rust', () => {\n      it('pub function is exported', () => {\n        const visMod = mockNode('visibility_modifier', 'pub');\n        const nameNode = mockNode('identifier', 'foo');\n        // visibility_modifier is a sibling of the name inside function_item\n        const fnDecl = mockNode('function_item', 'pub fn foo() {}', undefined, [visMod, nameNode]);\n        expect(isNodeExported(nameNode, 'foo', 'rust')).toBe(true);\n      });\n\n      it('non-pub function is not exported', () => {\n        const nameNode = mockNode('identifier', 'foo');\n        const fnDecl = mockNode('function_item', 'fn foo() {}', undefined, [nameNode]);\n        expect(isNodeExported(nameNode, 'foo', 'rust')).toBe(false);\n      });\n    });\n\n    // PHP (hardening fix #20)\n    describe('php', () => {\n      it('top-level function is exported (globally accessible)', () => {\n        // PHP: top-level functions fall through all checks and return true\n        const program = mockNode('program', '<?php function topLevel() {}');\n        const fnDecl = mockNode('function_definition', 'function topLevel() {}', program);\n        const nameNode = mockNode('name', 'topLevel', fnDecl);\n        expect(isNodeExported(nameNode, 'topLevel', 
'php')).toBe(true);\n      });\n\n      it('class declaration is exported', () => {\n        const classDecl = mockNode('class_declaration', 'class Foo {}');\n        const nameNode = mockNode('name', 'Foo', classDecl);\n        expect(isNodeExported(nameNode, 'Foo', 'php')).toBe(true);\n      });\n\n      it('public method has visibility_modifier = public', () => {\n        const visMod = mockNode('visibility_modifier', 'public');\n        const nameNode = mockNode('name', 'addUser', visMod);\n        expect(isNodeExported(nameNode, 'addUser', 'php')).toBe(true);\n      });\n\n      it('private method has visibility_modifier = private', () => {\n        const visMod = mockNode('visibility_modifier', 'private');\n        const nameNode = mockNode('name', 'validate', visMod);\n        expect(isNodeExported(nameNode, 'validate', 'php')).toBe(false);\n      });\n    });\n\n    // Swift\n    describe('swift', () => {\n      it('public function is exported', () => {\n        const visMod = mockNode('modifiers', 'public');\n        const nameNode = mockNode('identifier', 'getCount', visMod);\n        expect(isNodeExported(nameNode, 'getCount', 'swift')).toBe(true);\n      });\n\n      it('open function is exported', () => {\n        const visMod = mockNode('modifiers', 'open');\n        const nameNode = mockNode('identifier', 'doStuff', visMod);\n        expect(isNodeExported(nameNode, 'doStuff', 'swift')).toBe(true);\n      });\n\n      it('non-public function is not exported', () => {\n        const fnDecl = mockNode('function_declaration', 'func helper() {}');\n        const nameNode = mockNode('identifier', 'helper', fnDecl);\n        expect(isNodeExported(nameNode, 'helper', 'swift')).toBe(false);\n      });\n    });\n\n    // C/C++\n    describe('c/cpp', () => {\n      it('C functions without static are exported (external linkage)', () => {\n        const nameNode = mockNode('identifier', 'add');\n        const fnDef = mockNode('function_definition', 'int add(int 
a, int b) {}', undefined, [nameNode]);\n        expect(isNodeExported(nameNode, 'add', 'c')).toBe(true);\n      });\n\n      it('C++ functions without static are exported', () => {\n        const nameNode = mockNode('identifier', 'helperFunction');\n        const fnDef = mockNode('function_definition', 'void helperFunction() {}', undefined, [nameNode]);\n        expect(isNodeExported(nameNode, 'helperFunction', 'cpp')).toBe(true);\n      });\n\n      it('static C function is not exported', () => {\n        const nameNode = mockNode('identifier', 'internalHelper');\n        const staticSpec = mockNode('storage_class_specifier', 'static');\n        const fnDef = mockNode('function_definition', 'static void internalHelper() {}', undefined, [staticSpec, nameNode]);\n        expect(isNodeExported(nameNode, 'internalHelper', 'c')).toBe(false);\n      });\n    });\n\n    // C#\n    describe('csharp', () => {\n      it('public modifier means exported', () => {\n        const modifier = mockNode('modifier', 'public');\n        const nameNode = mockNode('identifier', 'Add');\n        // modifier is a sibling of nameNode inside method_declaration\n        const methodDecl = mockNode('method_declaration', 'public int Add() {}', undefined, [modifier, nameNode]);\n        expect(isNodeExported(nameNode, 'Add', 'csharp')).toBe(true);\n      });\n\n      it('no public modifier means not exported', () => {\n        const nameNode = mockNode('identifier', 'Helper');\n        const classDecl = mockNode('class_declaration', 'class Helper {}', undefined, [nameNode]);\n        expect(isNodeExported(nameNode, 'Helper', 'csharp')).toBe(false);\n      });\n    });\n\n    // Java\n    describe('java', () => {\n      it('public method is exported', () => {\n        const modifiers = mockNode('modifiers', 'public');\n        const nameNode = mockNode('identifier', 'getUser');\n        const methodDecl = mockNode('method_declaration', 'public User getUser() {}', undefined, [modifiers, 
nameNode]);\n        expect(isNodeExported(nameNode, 'getUser', 'java')).toBe(true);\n      });\n\n      it('public class method via text check is exported', () => {\n        const nameNode = mockNode('identifier', 'doGet');\n        const methodDecl = mockNode('method_declaration', 'public void doGet() {}', undefined, [nameNode]);\n        // text starts with 'public' so it should be detected\n        expect(isNodeExported(nameNode, 'doGet', 'java')).toBe(true);\n      });\n\n      it('private method is not exported', () => {\n        const modifiers = mockNode('modifiers', 'private');\n        const nameNode = mockNode('identifier', 'helper');\n        const methodDecl = mockNode('method_declaration', 'private void helper() {}', undefined, [modifiers, nameNode]);\n        expect(isNodeExported(nameNode, 'helper', 'java')).toBe(false);\n      });\n\n      it('package-private (no modifier) is not exported', () => {\n        const nameNode = mockNode('identifier', 'internal');\n        const methodDecl = mockNode('method_declaration', 'void internal() {}', undefined, [nameNode]);\n        expect(isNodeExported(nameNode, 'internal', 'java')).toBe(false);\n      });\n    });\n\n    // Kotlin\n    describe('kotlin', () => {\n      it('function without visibility modifier is public by default', () => {\n        const nameNode = mockNode('identifier', 'greet');\n        const fnDecl = mockNode('function_declaration', 'fun greet() {}', undefined, [nameNode]);\n        expect(isNodeExported(nameNode, 'greet', 'kotlin')).toBe(true);\n      });\n\n      it('public function is exported', () => {\n        const visMod = mockNode('visibility_modifier', 'public');\n        const modifiers = mockNode('modifiers', 'public', undefined, [visMod]);\n        const nameNode = mockNode('identifier', 'greet');\n        const fnDecl = mockNode('function_declaration', 'public fun greet() {}', undefined, [modifiers, nameNode]);\n        expect(isNodeExported(nameNode, 'greet', 
'kotlin')).toBe(true);\n      });\n\n      it('private function is not exported', () => {\n        const visMod = mockNode('visibility_modifier', 'private');\n        const modifiers = mockNode('modifiers', 'private', undefined, [visMod]);\n        const nameNode = mockNode('identifier', 'secret');\n        const fnDecl = mockNode('function_declaration', 'private fun secret() {}', undefined, [modifiers, nameNode]);\n        expect(isNodeExported(nameNode, 'secret', 'kotlin')).toBe(false);\n      });\n\n      it('internal function is not exported', () => {\n        const visMod = mockNode('visibility_modifier', 'internal');\n        const modifiers = mockNode('modifiers', 'internal', undefined, [visMod]);\n        const nameNode = mockNode('identifier', 'moduleOnly');\n        const fnDecl = mockNode('function_declaration', 'internal fun moduleOnly() {}', undefined, [modifiers, nameNode]);\n        expect(isNodeExported(nameNode, 'moduleOnly', 'kotlin')).toBe(false);\n      });\n    });\n\n    // C# additional cases\n    describe('csharp additional', () => {\n      it('internal modifier is not exported', () => {\n        const modifier = mockNode('modifier', 'internal');\n        const nameNode = mockNode('identifier', 'InternalService');\n        const classDecl = mockNode('class_declaration', 'internal class InternalService {}', undefined, [modifier, nameNode]);\n        expect(isNodeExported(nameNode, 'InternalService', 'csharp')).toBe(false);\n      });\n\n      it('private modifier is not exported', () => {\n        const modifier = mockNode('modifier', 'private');\n        const nameNode = mockNode('identifier', 'helper');\n        const methodDecl = mockNode('method_declaration', 'private void helper() {}', undefined, [modifier, nameNode]);\n        expect(isNodeExported(nameNode, 'helper', 'csharp')).toBe(false);\n      });\n\n      it('struct with public modifier is exported', () => {\n        const modifier = mockNode('modifier', 'public');\n        const 
nameNode = mockNode('identifier', 'Point');\n        const structDecl = mockNode('struct_declaration', 'public struct Point {}', undefined, [modifier, nameNode]);\n        expect(isNodeExported(nameNode, 'Point', 'csharp')).toBe(true);\n      });\n\n      it('enum with public modifier is exported', () => {\n        const modifier = mockNode('modifier', 'public');\n        const nameNode = mockNode('identifier', 'Status');\n        const enumDecl = mockNode('enum_declaration', 'public enum Status {}', undefined, [modifier, nameNode]);\n        expect(isNodeExported(nameNode, 'Status', 'csharp')).toBe(true);\n      });\n\n      it('record with public modifier is exported', () => {\n        const modifier = mockNode('modifier', 'public');\n        const nameNode = mockNode('identifier', 'UserDto');\n        const recordDecl = mockNode('record_declaration', 'public record UserDto {}', undefined, [modifier, nameNode]);\n        expect(isNodeExported(nameNode, 'UserDto', 'csharp')).toBe(true);\n      });\n\n      it('interface with public modifier is exported', () => {\n        const modifier = mockNode('modifier', 'public');\n        const nameNode = mockNode('identifier', 'IService');\n        const ifaceDecl = mockNode('interface_declaration', 'public interface IService {}', undefined, [modifier, nameNode]);\n        expect(isNodeExported(nameNode, 'IService', 'csharp')).toBe(true);\n      });\n    });\n\n    // Rust additional cases\n    describe('rust additional', () => {\n      it('pub(crate) is exported', () => {\n        const visMod = mockNode('visibility_modifier', 'pub(crate)');\n        const nameNode = mockNode('identifier', 'internal_fn');\n        const fnDecl = mockNode('function_item', 'pub(crate) fn internal_fn() {}', undefined, [visMod, nameNode]);\n        expect(isNodeExported(nameNode, 'internal_fn', 'rust')).toBe(true);\n      });\n\n      it('pub struct is exported', () => {\n        const visMod = mockNode('visibility_modifier', 'pub');\n        
const nameNode = mockNode('type_identifier', 'Config');\n        const structDecl = mockNode('struct_item', 'pub struct Config {}', undefined, [visMod, nameNode]);\n        expect(isNodeExported(nameNode, 'Config', 'rust')).toBe(true);\n      });\n\n      it('private struct is not exported', () => {\n        const nameNode = mockNode('type_identifier', 'Inner');\n        const structDecl = mockNode('struct_item', 'struct Inner {}', undefined, [nameNode]);\n        expect(isNodeExported(nameNode, 'Inner', 'rust')).toBe(false);\n      });\n\n      it('pub enum is exported', () => {\n        const visMod = mockNode('visibility_modifier', 'pub');\n        const nameNode = mockNode('type_identifier', 'ErrorKind');\n        const enumDecl = mockNode('enum_item', 'pub enum ErrorKind {}', undefined, [visMod, nameNode]);\n        expect(isNodeExported(nameNode, 'ErrorKind', 'rust')).toBe(true);\n      });\n\n      it('pub trait is exported', () => {\n        const visMod = mockNode('visibility_modifier', 'pub');\n        const nameNode = mockNode('type_identifier', 'Handler');\n        const traitDecl = mockNode('trait_item', 'pub trait Handler {}', undefined, [visMod, nameNode]);\n        expect(isNodeExported(nameNode, 'Handler', 'rust')).toBe(true);\n      });\n    });\n\n    // C/C++ additional cases\n    describe('c/cpp additional', () => {\n      it('static C++ function is not exported', () => {\n        const nameNode = mockNode('identifier', 'localHelper');\n        const staticSpec = mockNode('storage_class_specifier', 'static');\n        const fnDef = mockNode('function_definition', 'static int localHelper() {}', undefined, [staticSpec, nameNode]);\n        expect(isNodeExported(nameNode, 'localHelper', 'cpp')).toBe(false);\n      });\n\n      it('declaration (not definition) without static is exported', () => {\n        const nameNode = mockNode('identifier', 'compute');\n        const decl = mockNode('declaration', 'int compute(int x);', undefined, 
[nameNode]);\n        expect(isNodeExported(nameNode, 'compute', 'c')).toBe(true);\n      });\n\n      it('static declaration is not exported', () => {\n        const nameNode = mockNode('identifier', 'internalFn');\n        const staticSpec = mockNode('storage_class_specifier', 'static');\n        const decl = mockNode('declaration', 'static int internalFn(void);', undefined, [staticSpec, nameNode]);\n        expect(isNodeExported(nameNode, 'internalFn', 'c')).toBe(false);\n      });\n\n      it('detached node defaults to exported (external linkage)', () => {\n        const nameNode = mockNode('identifier', 'orphan');\n        expect(isNodeExported(nameNode, 'orphan', 'c')).toBe(true);\n      });\n\n      it('C++ anonymous namespace function is not exported (internal linkage)', () => {\n        const nameNode = mockNode('identifier', 'anonHelper');\n        const fnDef = mockNode('function_definition', 'void anonHelper() {}', undefined, [nameNode]);\n        // Anonymous namespace: namespace_definition with no name field\n        const anonNs = mockNode('namespace_definition', 'namespace { void anonHelper() {} }', undefined, [fnDef]);\n        expect(isNodeExported(nameNode, 'anonHelper', 'cpp')).toBe(false);\n      });\n\n      it('C++ named namespace function is still exported', () => {\n        const nameNode = mockNode('identifier', 'namedHelper');\n        const fnDef = mockNode('function_definition', 'void namedHelper() {}', undefined, [nameNode]);\n        const nsName = mockNode('namespace_identifier', 'utils');\n        const namedNs = mockNode('namespace_definition', 'namespace utils { void namedHelper() {} }', undefined, [fnDef], { name: nsName });\n        expect(isNodeExported(nameNode, 'namedHelper', 'cpp')).toBe(true);\n      });\n    });\n\n    // C/C++ with real tree-sitter (validates structural storage_class_specifier detection)\n    describe('c/cpp real tree-sitter', () => {\n      it('non-static function is exported using real AST', async () => 
{\n        const parser = await loadParser();\n        await loadLanguage(SupportedLanguages.C);\n        const tree = parser.parse('int add(int a, int b) { return a + b; }');\n        const funcDef = tree.rootNode.child(0)!;\n        // Find the identifier name node inside the function_definition\n        const declNode = funcDef.childForFieldName('declarator');\n        const nameNode = declNode?.childForFieldName?.('declarator') || declNode;\n        expect(isNodeExported(nameNode, 'add', 'c')).toBe(true);\n      });\n\n      it('static function is not exported using real AST', async () => {\n        const parser = await loadParser();\n        await loadLanguage(SupportedLanguages.C);\n        const tree = parser.parse('static int internal_helper(void) { return 0; }');\n        const funcDef = tree.rootNode.child(0)!;\n        const declNode = funcDef.childForFieldName('declarator');\n        const nameNode = declNode?.childForFieldName?.('declarator') || declNode;\n        expect(isNodeExported(nameNode, 'internal_helper', 'c')).toBe(false);\n      });\n\n      it('extern function is exported using real AST', async () => {\n        const parser = await loadParser();\n        await loadLanguage(SupportedLanguages.C);\n        const tree = parser.parse('extern int shared_func(void);');\n        const decl = tree.rootNode.child(0)!;\n        // Declaration nodes should not have storage_class_specifier 'static'\n        const nameNode = decl.descendantsOfType?.('identifier')?.[0] || decl;\n        expect(isNodeExported(nameNode, 'shared_func', 'c')).toBe(true);\n      });\n\n      it('C++ anonymous namespace detected via real AST', async () => {\n        const parser = await loadParser();\n        await loadLanguage(SupportedLanguages.CPlusPlus);\n        const code = 'namespace { void hidden() {} }';\n        const tree = parser.parse(code);\n        const nsDef = tree.rootNode.child(0)!;\n        // Find function_definition inside the namespace body\n        
const body = nsDef.childForFieldName('body');\n        const funcDef = body?.namedChild(0);\n        const declNode = funcDef?.childForFieldName?.('declarator');\n        const nameNode = declNode?.childForFieldName?.('declarator') || declNode;\n        expect(isNodeExported(nameNode, 'hidden', 'cpp')).toBe(false);\n      });\n\n      it('C++ named namespace is still exported via real AST', async () => {\n        const parser = await loadParser();\n        await loadLanguage(SupportedLanguages.CPlusPlus);\n        const code = 'namespace utils { void helper() {} }';\n        const tree = parser.parse(code);\n        const nsDef = tree.rootNode.child(0)!;\n        const body = nsDef.childForFieldName('body');\n        const funcDef = body?.namedChild(0);\n        const declNode = funcDef?.childForFieldName?.('declarator');\n        const nameNode = declNode?.childForFieldName?.('declarator') || declNode;\n        expect(isNodeExported(nameNode, 'helper', 'cpp')).toBe(true);\n      });\n    });\n\n    // C/C++ edge cases with mocks\n    describe('c/cpp edge cases', () => {\n      it('nested anonymous namespace (double nesting) is not exported', () => {\n        const nameNode = mockNode('identifier', 'deepHidden');\n        const fnDef = mockNode('function_definition', 'void deepHidden() {}', undefined, [nameNode]);\n        const innerNs = mockNode('namespace_definition', 'namespace { }', undefined, [fnDef]);\n        const outerNs = mockNode('namespace_definition', 'namespace outer { }', undefined, [innerNs], { name: mockNode('namespace_identifier', 'outer') });\n        expect(isNodeExported(nameNode, 'deepHidden', 'cpp')).toBe(false);\n      });\n\n      it('static function inside named namespace is not exported', () => {\n        const nameNode = mockNode('identifier', 'staticInNs');\n        const staticSpec = mockNode('storage_class_specifier', 'static');\n        const fnDef = mockNode('function_definition', 'static void staticInNs() {}', undefined, 
[staticSpec, nameNode]);\n        const ns = mockNode('namespace_definition', 'namespace foo { }', undefined, [fnDef], { name: mockNode('namespace_identifier', 'foo') });\n        expect(isNodeExported(nameNode, 'staticInNs', 'cpp')).toBe(false);\n      });\n\n      it('extern storage class is not confused with static', () => {\n        const nameNode = mockNode('identifier', 'externFn');\n        const externSpec = mockNode('storage_class_specifier', 'extern');\n        const fnDef = mockNode('function_definition', 'extern void externFn() {}', undefined, [externSpec, nameNode]);\n        expect(isNodeExported(nameNode, 'externFn', 'c')).toBe(true);\n      });\n    });\n\n    // Rust additional edge cases\n    describe('rust edge cases', () => {\n      it('pub(super) is treated as exported', () => {\n        const visMod = mockNode('visibility_modifier', 'pub(super)');\n        const nameNode = mockNode('identifier', 'parent_fn');\n        const fnDecl = mockNode('function_item', 'pub(super) fn parent_fn() {}', undefined, [visMod, nameNode]);\n        expect(isNodeExported(nameNode, 'parent_fn', 'rust')).toBe(true);\n      });\n\n      it('pub union is exported', () => {\n        const visMod = mockNode('visibility_modifier', 'pub');\n        const nameNode = mockNode('type_identifier', 'MyUnion');\n        const unionDecl = mockNode('union_item', 'pub union MyUnion {}', undefined, [visMod, nameNode]);\n        expect(isNodeExported(nameNode, 'MyUnion', 'rust')).toBe(true);\n      });\n\n      it('private union is not exported', () => {\n        const nameNode = mockNode('type_identifier', 'InternalUnion');\n        const unionDecl = mockNode('union_item', 'union InternalUnion {}', undefined, [nameNode]);\n        expect(isNodeExported(nameNode, 'InternalUnion', 'rust')).toBe(false);\n      });\n\n      it('pub type alias is exported', () => {\n        const visMod = mockNode('visibility_modifier', 'pub');\n        const nameNode = mockNode('type_identifier', 
'Result');\n        const typeDecl = mockNode('type_item', 'pub type Result = ...', undefined, [visMod, nameNode]);\n        expect(isNodeExported(nameNode, 'Result', 'rust')).toBe(true);\n      });\n\n      it('pub const is exported', () => {\n        const visMod = mockNode('visibility_modifier', 'pub');\n        const nameNode = mockNode('identifier', 'MAX_SIZE');\n        const constDecl = mockNode('const_item', 'pub const MAX_SIZE: usize = 100;', undefined, [visMod, nameNode]);\n        expect(isNodeExported(nameNode, 'MAX_SIZE', 'rust')).toBe(true);\n      });\n\n      it('private const is not exported', () => {\n        const nameNode = mockNode('identifier', 'INTERNAL_LIMIT');\n        const constDecl = mockNode('const_item', 'const INTERNAL_LIMIT: usize = 50;', undefined, [nameNode]);\n        expect(isNodeExported(nameNode, 'INTERNAL_LIMIT', 'rust')).toBe(false);\n      });\n\n      it('pub static is exported', () => {\n        const visMod = mockNode('visibility_modifier', 'pub');\n        const nameNode = mockNode('identifier', 'INSTANCE');\n        const staticDecl = mockNode('static_item', 'pub static INSTANCE: ...', undefined, [visMod, nameNode]);\n        expect(isNodeExported(nameNode, 'INSTANCE', 'rust')).toBe(true);\n      });\n\n      it('associated_type without pub is not exported', () => {\n        const nameNode = mockNode('type_identifier', 'Item');\n        const assocType = mockNode('associated_type', 'type Item;', undefined, [nameNode]);\n        expect(isNodeExported(nameNode, 'Item', 'rust')).toBe(false);\n      });\n    });\n\n    // C# edge cases\n    describe('csharp edge cases', () => {\n      it('protected modifier is not exported', () => {\n        const modifier = mockNode('modifier', 'protected');\n        const nameNode = mockNode('identifier', 'OnInit');\n        const methodDecl = mockNode('method_declaration', 'protected void OnInit() {}', undefined, [modifier, nameNode]);\n        expect(isNodeExported(nameNode, 'OnInit', 
'csharp')).toBe(false);\n      });\n\n      it('protected internal is not exported (first modifier wins)', () => {\n        const mod1 = mockNode('modifier', 'protected');\n        const mod2 = mockNode('modifier', 'internal');\n        const nameNode = mockNode('identifier', 'Setup');\n        const methodDecl = mockNode('method_declaration', 'protected internal void Setup() {}', undefined, [mod1, mod2, nameNode]);\n        // Neither modifier is 'public', so not exported\n        expect(isNodeExported(nameNode, 'Setup', 'csharp')).toBe(false);\n      });\n\n      it('record_struct with public modifier is exported', () => {\n        const modifier = mockNode('modifier', 'public');\n        const nameNode = mockNode('identifier', 'Coord');\n        const recStruct = mockNode('record_struct_declaration', 'public record struct Coord {}', undefined, [modifier, nameNode]);\n        expect(isNodeExported(nameNode, 'Coord', 'csharp')).toBe(true);\n      });\n\n      it('record_class with public modifier is exported', () => {\n        const modifier = mockNode('modifier', 'public');\n        const nameNode = mockNode('identifier', 'UserRecord');\n        const recClass = mockNode('record_class_declaration', 'public record class UserRecord {}', undefined, [modifier, nameNode]);\n        expect(isNodeExported(nameNode, 'UserRecord', 'csharp')).toBe(true);\n      });\n\n      it('file_scoped_namespace_declaration is a valid context', () => {\n        const modifier = mockNode('modifier', 'public');\n        const nameNode = mockNode('identifier', 'MyClass');\n        const classDecl = mockNode('class_declaration', 'public class MyClass {}', undefined, [modifier, nameNode]);\n        // class_declaration is found before namespace, so public is detected\n        expect(isNodeExported(nameNode, 'MyClass', 'csharp')).toBe(true);\n      });\n\n      it('delegate with public modifier is exported', () => {\n        const modifier = mockNode('modifier', 'public');\n        const 
nameNode = mockNode('identifier', 'OnChange');\n        const delegateDecl = mockNode('delegate_declaration', 'public delegate void OnChange();', undefined, [modifier, nameNode]);\n        expect(isNodeExported(nameNode, 'OnChange', 'csharp')).toBe(true);\n      });\n\n      it('event with public modifier is exported', () => {\n        const modifier = mockNode('modifier', 'public');\n        const nameNode = mockNode('identifier', 'Changed');\n        const eventDecl = mockNode('event_declaration', 'public event EventHandler Changed;', undefined, [modifier, nameNode]);\n        expect(isNodeExported(nameNode, 'Changed', 'csharp')).toBe(true);\n      });\n\n      it('property with public modifier is exported', () => {\n        const modifier = mockNode('modifier', 'public');\n        const nameNode = mockNode('identifier', 'Name');\n        const propDecl = mockNode('property_declaration', 'public string Name { get; set; }', undefined, [modifier, nameNode]);\n        expect(isNodeExported(nameNode, 'Name', 'csharp')).toBe(true);\n      });\n    });\n\n    // Kotlin edge cases\n    describe('kotlin edge cases', () => {\n      it('protected function is not exported', () => {\n        const visMod = mockNode('visibility_modifier', 'protected');\n        const modifiers = mockNode('modifiers', 'protected', undefined, [visMod]);\n        const nameNode = mockNode('identifier', 'onInit');\n        const fnDecl = mockNode('function_declaration', 'protected fun onInit() {}', undefined, [modifiers, nameNode]);\n        expect(isNodeExported(nameNode, 'onInit', 'kotlin')).toBe(false);\n      });\n    });\n\n    // Java edge cases\n    describe('java edge cases', () => {\n      it('protected method is not exported', () => {\n        const modifiers = mockNode('modifiers', 'protected');\n        const nameNode = mockNode('identifier', 'onInit');\n        const methodDecl = mockNode('method_declaration', 'protected void onInit() {}', undefined, [modifiers, nameNode]);\n        
expect(isNodeExported(nameNode, 'onInit', 'java')).toBe(false);\n      });\n\n      it('static public method is exported', () => {\n        const modifiers = mockNode('modifiers', 'public static');\n        const nameNode = mockNode('identifier', 'main');\n        const methodDecl = mockNode('method_declaration', 'public static void main(String[] args) {}', undefined, [modifiers, nameNode]);\n        expect(isNodeExported(nameNode, 'main', 'java')).toBe(true);\n      });\n    });\n\n    // PHP edge cases\n    describe('php edge cases', () => {\n      it('protected method is not exported', () => {\n        const visMod = mockNode('visibility_modifier', 'protected');\n        const nameNode = mockNode('name', 'init', visMod);\n        expect(isNodeExported(nameNode, 'init', 'php')).toBe(false);\n      });\n\n      it('interface declaration is exported', () => {\n        const ifaceDecl = mockNode('interface_declaration', 'interface Loggable {}');\n        const nameNode = mockNode('name', 'Loggable', ifaceDecl);\n        expect(isNodeExported(nameNode, 'Loggable', 'php')).toBe(true);\n      });\n\n      it('trait declaration is exported', () => {\n        const traitDecl = mockNode('trait_declaration', 'trait Cacheable {}');\n        const nameNode = mockNode('name', 'Cacheable', traitDecl);\n        expect(isNodeExported(nameNode, 'Cacheable', 'php')).toBe(true);\n      });\n\n      it('enum declaration is exported', () => {\n        const enumDecl = mockNode('enum_declaration', 'enum Status {}');\n        const nameNode = mockNode('name', 'Status', enumDecl);\n        expect(isNodeExported(nameNode, 'Status', 'php')).toBe(true);\n      });\n    });\n\n    // Swift edge cases\n    describe('swift edge cases', () => {\n      it('internal function is not exported (Swift default)', () => {\n        const visMod = mockNode('visibility_modifier', 'internal');\n        const nameNode = mockNode('identifier', 'setup', visMod);\n        expect(isNodeExported(nameNode, 
'setup', 'swift')).toBe(false);\n      });\n\n      it('private function is not exported', () => {\n        const visMod = mockNode('visibility_modifier', 'private');\n        const nameNode = mockNode('identifier', 'helper', visMod);\n        expect(isNodeExported(nameNode, 'helper', 'swift')).toBe(false);\n      });\n\n      it('fileprivate function is not exported', () => {\n        const visMod = mockNode('visibility_modifier', 'fileprivate');\n        const nameNode = mockNode('identifier', 'localHelper', visMod);\n        expect(isNodeExported(nameNode, 'localHelper', 'swift')).toBe(false);\n      });\n    });\n\n    // Unknown language\n    describe('unknown language', () => {\n      it('returns false for unknown language', () => {\n        const node = mockNode('identifier', 'foo');\n        expect(isNodeExported(node, 'foo', 'unknown')).toBe(false);\n      });\n    });\n  });\n\n  // ─── Fixture files exist ─────────────────────────────────────────────\n\n  describe('fixture files', () => {\n    const fixtures = ['simple.ts', 'simple.py', 'simple.go', 'simple.swift',\n      'simple.php', 'simple.rs', 'simple.java', 'simple.c', 'simple.cpp', 'simple.cs'];\n\n    for (const fixture of fixtures) {\n      it(`${fixture} exists and is non-empty`, async () => {\n        const content = await fs.readFile(path.join(FIXTURES_DIR, fixture), 'utf-8');\n        expect(content.length).toBeGreaterThan(0);\n      });\n    }\n  });\n\n  // ─── Unhappy path ─────────────────────────────────────────────────────\n\n  describe('unhappy path', () => {\n    it('returns empty AST or handles empty file content', async () => {\n      const parser = await loadParser();\n      await loadLanguage(SupportedLanguages.TypeScript, 'empty.ts');\n\n      // Parsing a zero-length string must not throw and must return a valid tree.\n      const tree = parser.parse('');\n      expect(tree).toBeDefined();\n      expect(tree.rootNode).toBeDefined();\n\n      // An empty file produces a root 
node with no named children — no symbols.\n      // isNodeExported on a bare node with no ancestors returns false regardless of language.\n      const detachedNode = mockNode('identifier', 'foo');\n      expect(isNodeExported(detachedNode, 'foo', 'typescript')).toBe(false);\n    });\n\n    it('handles binary/non-UTF8 content gracefully', async () => {\n      const parser = await loadParser();\n      await loadLanguage(SupportedLanguages.TypeScript, 'binary.ts');\n\n      // Construct a string that contains the Unicode replacement character (U+FFFD)\n      // and a mix of high-byte sequences that are not valid UTF-8 when treated as Latin-1.\n      // JavaScript strings are UTF-16 internally, so this is always a valid string —\n      // but it exercises tree-sitter's ability to handle unusual byte patterns.\n      const binaryLikeContent = '\\uFFFD\\u0000\\u0001\\u001F' + '\\xFF\\xFE'.repeat(10) + '\\uFFFD';\n\n      // Must not throw — tree-sitter should return an error-recovery tree.\n      let tree: any;\n      expect(() => {\n        tree = parser.parse(binaryLikeContent);\n      }).not.toThrow();\n\n      expect(tree).toBeDefined();\n      expect(tree.rootNode).toBeDefined();\n    });\n\n    it('falls back gracefully for unsupported language', async () => {\n      // getLanguageFromFilename returns null for extensions with no grammar mapping.\n      const scalaLang = getLanguageFromFilename('Main.scala');\n      expect(scalaLang).toBeNull();\n\n      const luaLang = getLanguageFromFilename('module.lua');\n      expect(luaLang).toBeNull();\n\n      // loadLanguage throws an explicit error for a language not in the grammar map.\n      // Cast through unknown to simulate a caller passing an unrecognised language key.\n      await expect(\n        loadLanguage('erlang' as unknown as SupportedLanguages)\n      ).rejects.toThrow('Unsupported language');\n    });\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/integration/pipeline.test.ts",
"content": "/**\n * P1 Integration Tests: Pipeline End-to-End\n *\n * Runs the full ingestion pipeline once on a mini-repo fixture and\n * validates the resulting knowledge graph: file/symbol nodes, CALLS\n * edges, IMPORTS edges, community detection, and process detection.\n *\n * Pipeline runs once in beforeAll; each it() asserts against the cached result.\n */\nimport { describe, it, expect, beforeAll } from 'vitest';\nimport path from 'path';\nimport os from 'os';\nimport fs from 'fs/promises';\nimport { runPipelineFromRepo } from '../../src/core/ingestion/pipeline.js';\nimport type { PipelineProgress, PipelineResult } from '../../src/types/pipeline.js';\n\nconst MINI_REPO = path.resolve(__dirname, '..', 'fixtures', 'mini-repo');\n\ndescribe('pipeline end-to-end', () => {\n  let result: PipelineResult;\n  const phases = new Set<string>();\n\n  // Run pipeline ONCE in beforeAll — each it() asserts against the cached result\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(MINI_REPO, (p: PipelineProgress) => phases.add(p.phase));\n  }, 60000);\n\n  it('indexes a mini repo and produces a valid graph', () => {\n    // --- Graph should have nodes ---\n    expect(result.graph.nodeCount).toBeGreaterThan(0);\n    expect(result.graph.relationshipCount).toBeGreaterThan(0);\n\n    // --- Should find at least 7 files (the 7 TypeScript sources, plus generated files such as AGENTS.md and CLAUDE.md) 
---\n    expect(result.totalFileCount).toBeGreaterThanOrEqual(7);\n\n    // --- Verify File nodes exist for each source file ---\n    const fileNodes: string[] = [];\n    result.graph.forEachNode(n => {\n      if (n.label === 'File') fileNodes.push(n.properties.filePath || n.properties.name);\n    });\n    expect(fileNodes).toContain('src/handler.ts');\n    expect(fileNodes).toContain('src/validator.ts');\n    expect(fileNodes).toContain('src/db.ts');\n    expect(fileNodes).toContain('src/formatter.ts');\n    expect(fileNodes).toContain('src/index.ts');\n    expect(fileNodes).toContain('src/logger.ts');\n    expect(fileNodes).toContain('src/middleware.ts');\n\n    // --- Verify symbol nodes were created (functions, classes) ---\n    const symbolNames: string[] = [];\n    result.graph.forEachNode(n => {\n      if (['Function', 'Method', 'Class', 'Interface'].includes(n.label)) {\n        symbolNames.push(n.properties.name);\n      }\n    });\n    expect(symbolNames).toContain('handleRequest');\n    expect(symbolNames).toContain('validateInput');\n    expect(symbolNames).toContain('saveToDb');\n    expect(symbolNames).toContain('formatResponse');\n    expect(symbolNames).toContain('RequestHandler');\n    expect(symbolNames).toContain('processRequest');\n    expect(symbolNames).toContain('createLogEntry');\n\n    // --- Verify relationships exist ---\n    const relTypes = new Set<string>();\n    for (const rel of result.graph.iterRelationships()) {\n      relTypes.add(rel.type);\n    }\n    // Should have at least CONTAINS (structure) and CALLS (call graph)\n    expect(relTypes).toContain('CONTAINS');\n\n    // --- Verify CALLS edges were detected ---\n    const callEdges: { source: string; target: string }[] = [];\n    for (const rel of result.graph.iterRelationships()) {\n      if (rel.type === 'CALLS') {\n        const sourceNode = result.graph.getNode(rel.sourceId);\n        const targetNode = result.graph.getNode(rel.targetId);\n        if (sourceNode && 
targetNode) {\n          callEdges.push({\n            source: sourceNode.properties.name,\n            target: targetNode.properties.name,\n          });\n        }\n      }\n    }\n    expect(callEdges.length).toBeGreaterThan(0);\n\n    // handleRequest should call validateInput, saveToDb, formatResponse\n    const handleRequestCalls = callEdges.filter(e => e.source === 'handleRequest');\n    const calledByHandler = handleRequestCalls.map(e => e.target);\n    expect(calledByHandler).toContain('validateInput');\n    expect(calledByHandler).toContain('saveToDb');\n    expect(calledByHandler).toContain('formatResponse');\n\n    // --- Verify IMPORTS edges ---\n    let importsCount = 0;\n    for (const rel of result.graph.iterRelationships()) {\n      if (rel.type === 'IMPORTS') importsCount++;\n    }\n    expect(importsCount).toBeGreaterThan(0);\n  });\n\n  it('detects communities', () => {\n    expect(result.communityResult).toBeDefined();\n    expect(result.communityResult?.stats.totalCommunities).toBeGreaterThan(0);\n\n    // Community nodes should be in the graph\n    const communityNodes: string[] = [];\n    result.graph.forEachNode(n => {\n      if (n.label === 'Community') communityNodes.push(n.properties.name);\n    });\n    expect(communityNodes.length).toBeGreaterThan(0);\n\n    // MEMBER_OF relationships should exist\n    let memberOfCount = 0;\n    for (const rel of result.graph.iterRelationships()) {\n      if (rel.type === 'MEMBER_OF') memberOfCount++;\n    }\n    expect(memberOfCount).toBeGreaterThan(0);\n  });\n\n  it('detects execution flows (processes)', () => {\n    expect(result.processResult).toBeDefined();\n    expect(result.processResult?.stats.totalProcesses).toBeGreaterThan(0);\n\n    const proc = result.processResult?.processes[0] ?? 
{ id: '', stepCount: 0, trace: [], entryPointId: '', terminalId: '', processType: '' };\n\n    // Each process should have valid structure\n    expect(proc.id).toBeTruthy();\n    expect(proc.stepCount).toBeGreaterThanOrEqual(3); // minSteps default\n    expect(proc.trace.length).toBe(proc.stepCount);\n    expect(proc.entryPointId).toBeTruthy();\n    expect(proc.terminalId).toBeTruthy();\n    expect(proc.processType).toMatch(/^(intra_community|cross_community)$/);\n\n    // Process nodes should be in the graph\n    const processNode = result.graph.getNode(proc.id);\n    expect(processNode).toBeDefined();\n    expect(processNode!.label).toBe('Process');\n\n    // STEP_IN_PROCESS relationships should exist with sequential ordering\n    const steps: number[] = [];\n    for (const rel of result.graph.iterRelationships()) {\n      if (rel.type === 'STEP_IN_PROCESS' && rel.targetId === proc.id) {\n        steps.push(rel.step);\n      }\n    }\n    expect(steps.length).toBe(proc.stepCount);\n    // Steps should be sequential 1, 2, 3, ...\n    const sorted = [...steps].sort((a, b) => a - b);\n    sorted.forEach((s, i) => expect(s).toBe(i + 1));\n  });\n\n  it('reports progress through all 6 phases', () => {\n    expect(phases).toContain('extracting');\n    expect(phases).toContain('structure');\n    expect(phases).toContain('parsing');\n    expect(phases).toContain('communities');\n    expect(phases).toContain('processes');\n    expect(phases).toContain('complete');\n  });\n\n  it('returns correct repoPath in result', () => {\n    expect(result.repoPath).toBe(MINI_REPO);\n  });\n});\n\n// ─── Pipeline error handling ──────────────────────────────────────────\n\ndescribe('pipeline error handling', () => {\n  it('returns empty result for non-existent repo path', async () => {\n    const result = await runPipelineFromRepo(\n      '/nonexistent/path/xyz123',\n      () => {},\n    );\n    expect(result.totalFileCount).toBe(0);\n  }, 30000);\n\n  it('handles empty directory 
gracefully', async () => {\n    const tmpDir = path.join(os.tmpdir(), `gn-pipeline-empty-${Date.now()}`);\n    await fs.mkdir(tmpDir, { recursive: true });\n    try {\n      const result = await runPipelineFromRepo(tmpDir, () => {});\n      // Empty repo should produce empty or minimal graph\n      expect(result.totalFileCount).toBe(0);\n    } finally {\n      await fs.rm(tmpDir, { recursive: true, force: true });\n    }\n  }, 30000);\n});\n"
  },
  {
    "path": "gitnexus/test/integration/query-compilation.test.ts",
    "content": "import { describe, it, expect, beforeAll } from 'vitest';\nimport { loadParser, loadLanguage, isLanguageAvailable } from '../../src/core/tree-sitter/parser-loader.js';\nimport { LANGUAGE_QUERIES } from '../../src/core/ingestion/tree-sitter-queries.js';\nimport { SupportedLanguages } from '../../src/config/supported-languages.js';\nimport Parser from 'tree-sitter';\n\n/**\n * Smoke test: verify that every LANGUAGE_QUERIES entry compiles against\n * its tree-sitter grammar without throwing.  A silent Query compilation\n * failure is the #1 cause of \"0 nodes extracted for language X\" bugs.\n */\ndescribe('Query compilation smoke tests', () => {\n  let parser: Parser;\n\n  beforeAll(async () => {\n    parser = await loadParser();\n  });\n\n  const languageFiles: Record<string, string> = {\n    [SupportedLanguages.TypeScript]: 'test.ts',\n    [SupportedLanguages.JavaScript]: 'test.js',\n    [SupportedLanguages.Python]: 'test.py',\n    [SupportedLanguages.Java]: 'Test.java',\n    [SupportedLanguages.C]: 'test.c',\n    [SupportedLanguages.CPlusPlus]: 'test.cpp',\n    [SupportedLanguages.CSharp]: 'Test.cs',\n    [SupportedLanguages.Go]: 'test.go',\n    [SupportedLanguages.Rust]: 'test.rs',\n    [SupportedLanguages.PHP]: 'test.php',\n    [SupportedLanguages.Kotlin]: 'Test.kt',\n    [SupportedLanguages.Swift]: 'test.swift',\n  };\n\n  // Known query compilation failures — remove from this set as PRs fix them\n  const knownFailures = new Set<string>([]);\n\n  for (const [lang, filename] of Object.entries(languageFiles)) {\n    const testFn = knownFailures.has(lang) ? 
it.fails : it;\n\n    testFn(`compiles query for ${lang}`, async () => {\n      if (!isLanguageAvailable(lang as SupportedLanguages)) {\n        return; // parser binary not available in this environment\n      }\n\n      await loadLanguage(lang as SupportedLanguages, filename);\n      const queryStr = LANGUAGE_QUERIES[lang as SupportedLanguages];\n      expect(queryStr).toBeTruthy();\n\n      const grammar = parser.getLanguage();\n      // This is the line that silently fails in production when queries\n      // use node types that don't exist in the grammar.\n      const query = new Parser.Query(grammar, queryStr);\n      expect(query).toBeDefined();\n\n      // Verify it can actually run against a minimal tree\n      const tree = parser.parse('');\n      const matches = query.matches(tree.rootNode);\n      expect(Array.isArray(matches)).toBe(true);\n    });\n  }\n});\n"
  },
  {
    "path": "gitnexus/test/integration/resolvers/cpp.test.ts",
    "content": "/**\n * C++: diamond inheritance + include-based imports + ambiguous #include disambiguation\n */\nimport { describe, it, expect, beforeAll } from 'vitest';\nimport path from 'path';\nimport {\n  FIXTURES, getRelationships, getNodesByLabel, getNodesByLabelFull, edgeSet,\n  runPipelineFromRepo, type PipelineResult,\n} from './helpers.js';\n\n// ---------------------------------------------------------------------------\n// Heritage: diamond inheritance + include-based imports\n// ---------------------------------------------------------------------------\n\ndescribe('C++ diamond inheritance', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'cpp-diamond'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects exactly 4 classes in diamond hierarchy', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['Animal', 'Duck', 'Flyer', 'Swimmer']);\n  });\n\n  it('emits exactly 4 EXTENDS edges for full diamond', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    expect(extends_.length).toBe(4);\n    expect(edgeSet(extends_)).toEqual([\n      'Duck → Flyer',\n      'Duck → Swimmer',\n      'Flyer → Animal',\n      'Swimmer → Animal',\n    ]);\n  });\n\n  it('resolves all 5 #include imports between header/source files', () => {\n    const imports = getRelationships(result, 'IMPORTS');\n    expect(imports.length).toBe(5);\n    expect(edgeSet(imports)).toEqual([\n      'duck.cpp → duck.h',\n      'duck.h → flyer.h',\n      'duck.h → swimmer.h',\n      'flyer.h → animal.h',\n      'swimmer.h → animal.h',\n    ]);\n  });\n\n  it('captures 1 Method node from duck.cpp (speak)', () => {\n    const methods = getNodesByLabel(result, 'Method');\n    expect(methods).toEqual(['speak']);\n  });\n\n  it('no OVERRIDES edges target Property nodes', () => {\n    const overrides = getRelationships(result, 'OVERRIDES');\n    for (const edge of overrides) {\n    
  const target = result.graph.getNode(edge.rel.targetId);\n      expect(target).toBeDefined();\n      expect(target!.label).not.toBe('Property');\n    }\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Ambiguous: two headers with same class name, #include disambiguates\n// ---------------------------------------------------------------------------\n\ndescribe('C++ ambiguous symbol resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'cpp-ambiguous'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects 2 Handler classes', () => {\n    const classes = getNodesByLabel(result, 'Class');\n    expect(classes.filter(n => n === 'Handler').length).toBe(2);\n    expect(classes).toContain('Processor');\n  });\n\n  it('resolves EXTENDS to handler_a.h (not handler_b.h)', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    expect(extends_.length).toBe(1);\n    expect(extends_[0].source).toBe('Processor');\n    expect(extends_[0].target).toBe('Handler');\n    expect(extends_[0].targetFilePath).toBe('handler_a.h');\n  });\n\n  it('#include resolves to handler_a.h', () => {\n    const imports = getRelationships(result, 'IMPORTS');\n    expect(imports.length).toBe(1);\n    expect(imports[0].targetFilePath).toBe('handler_a.h');\n  });\n\n  it('all heritage edges point to real graph nodes', () => {\n    for (const edge of getRelationships(result, 'EXTENDS')) {\n      const target = result.graph.getNode(edge.rel.targetId);\n      expect(target).toBeDefined();\n      expect(target!.properties.name).toBe(edge.target);\n    }\n  });\n});\n\ndescribe('C++ call resolution with arity filtering', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'cpp-calls'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves run → write_audit to one.h 
via arity narrowing', () => {\n    const calls = getRelationships(result, 'CALLS');\n    expect(calls.length).toBe(1);\n    expect(calls[0].source).toBe('run');\n    expect(calls[0].target).toBe('write_audit');\n    expect(calls[0].targetFilePath).toBe('one.h');\n    expect(calls[0].rel.reason).toBe('import-resolved');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Member-call resolution: obj.method() resolves through pipeline\n// ---------------------------------------------------------------------------\n\ndescribe('C++ member-call resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'cpp-member-calls'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves processUser → save as a member call on User', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.source).toBe('processUser');\n    expect(saveCall!.targetFilePath).toBe('user.h');\n  });\n\n  it('detects User class and save method', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Method')).toContain('save');\n  });\n\n  it('emits HAS_METHOD edge from User to save', () => {\n    const hasMethod = getRelationships(result, 'HAS_METHOD');\n    const edge = hasMethod.find(e => e.source === 'User' && e.target === 'save');\n    expect(edge).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Constructor resolution: new Foo() resolves to Class\n// ---------------------------------------------------------------------------\n\ndescribe('C++ constructor-call resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 
'cpp-constructor-calls'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves new User() as a CALLS edge to the User class', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const ctorCall = calls.find(c => c.target === 'User');\n    expect(ctorCall).toBeDefined();\n    expect(ctorCall!.source).toBe('processUser');\n    expect(ctorCall!.targetLabel).toBe('Class');\n    expect(ctorCall!.targetFilePath).toBe('user.h');\n    expect(ctorCall!.rel.reason).toBe('import-resolved');\n  });\n\n  it('detects User class and save method', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Method')).toContain('save');\n  });\n\n  it('resolves #include import', () => {\n    const imports = getRelationships(result, 'IMPORTS');\n    expect(imports.length).toBe(1);\n    expect(imports[0].targetFilePath).toBe('user.h');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Receiver-constrained resolution: typed variables disambiguate same-named methods\n// ---------------------------------------------------------------------------\n\ndescribe('C++ receiver-constrained resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'cpp-receiver-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes, both with save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves user.save() to User.save and repo.save() to Repo.save via receiver typing', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'save');\n    
expect(saveCalls.length).toBe(2);\n\n    const userSave = saveCalls.find(c => c.targetFilePath === 'user.h');\n    const repoSave = saveCalls.find(c => c.targetFilePath === 'repo.h');\n\n    expect(userSave).toBeDefined();\n    expect(repoSave).toBeDefined();\n    expect(userSave!.source).toBe('processEntities');\n    expect(repoSave!.source).toBe('processEntities');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Constructor-inferred type resolution: auto user = User(); user.save() → User.save\n// Cross-file SymbolTable verification (no explicit type annotations)\n// ---------------------------------------------------------------------------\n\ndescribe('C++ constructor-inferred type resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'cpp-constructor-type-inference'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes, both with save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves user.save() to models/User.h via constructor-inferred type', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c => c.target === 'save' && c.targetFilePath === 'models/User.h');\n    expect(userSave).toBeDefined();\n    expect(userSave!.source).toBe('processEntities');\n  });\n\n  it('resolves repo.save() to models/Repo.h via constructor-inferred type', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c => c.target === 'save' && c.targetFilePath === 'models/Repo.h');\n    expect(repoSave).toBeDefined();\n    expect(repoSave!.source).toBe('processEntities');\n  });\n\n  it('emits 
exactly 2 save() CALLS edges (one per receiver type)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'save');\n    expect(saveCalls.length).toBe(2);\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Variadic resolution: C-style variadic (...) doesn't get filtered by arity\n// ---------------------------------------------------------------------------\n\ndescribe('C++ variadic call resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'cpp-variadic-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves 3-arg call to variadic function log_entry(const char*, ...) in logger.h', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const logCall = calls.find(c => c.target === 'log_entry');\n    expect(logCall).toBeDefined();\n    expect(logCall!.source).toBe('main');\n    expect(logCall!.targetFilePath).toBe('logger.h');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Local shadow: same-file definition takes priority over imported name\n// ---------------------------------------------------------------------------\n\ndescribe('C++ local definition shadows import', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'cpp-local-shadow'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves run → save to same-file definition, not the imported one', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save' && c.source === 'run');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.targetFilePath).toBe('src/main.cpp');\n  });\n\n  it('does NOT resolve save to utils.h', () => {\n    const calls = getRelationships(result, 'CALLS');\n    
const saveToUtils = calls.find(c => c.target === 'save' && c.targetFilePath === 'src/utils.h');\n    expect(saveToUtils).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// this->save() resolves to enclosing class's own save method\n// ---------------------------------------------------------------------------\n\ndescribe('C++ this resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'cpp-self-this-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes, each with a save method', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves this->save() to User::save in the same file (not Repo::save)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.targetFilePath).toBe('src/User.cpp');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Parent class resolution: EXTENDS via base_class_clause\n// ---------------------------------------------------------------------------\n\ndescribe('C++ parent resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'cpp-parent-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects BaseModel and User classes', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('BaseModel');\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n  });\n\n  it('emits EXTENDS edge: User → BaseModel (base_class_clause)', () => {\n    
const extends_ = getRelationships(result, 'EXTENDS');\n    expect(extends_.length).toBe(1);\n    expect(extends_[0].source).toBe('User');\n    expect(extends_[0].target).toBe('BaseModel');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Brace-init constructor inference: auto x = User{}; x.save() → User.save\n// ---------------------------------------------------------------------------\n\ndescribe('C++ brace-init constructor inference', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'cpp-brace-init-inference'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes, both with save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['Repo', 'User']);\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves user.save() to User.save via brace-init', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c => c.target === 'save' && c.targetFilePath === 'models/User.h');\n    expect(userSave).toBeDefined();\n  });\n\n  it('resolves repo.save() to Repo.save via brace-init', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c => c.target === 'save' && c.targetFilePath === 'models/Repo.h');\n    expect(repoSave).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// C++ scoped brace-init: auto x = ns::HttpClient{}\n// ---------------------------------------------------------------------------\n\ndescribe('C++ scoped brace-init resolution (ns::Type{})', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'cpp-scoped-brace-init'),\n      () => {},\n    );\n  }, 60000);\n\n  
it('resolves client.connect() via ns::HttpClient{} scoped brace-init', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const connectCall = calls.find(c => c.target === 'connect' && c.targetFilePath === 'models.h');\n    expect(connectCall).toBeDefined();\n    expect(connectCall!.source).toBe('run');\n  });\n\n  it('resolves client.send() via ns::HttpClient{} scoped brace-init', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const sendCall = calls.find(c => c.target === 'send' && c.targetFilePath === 'models.h');\n    expect(sendCall).toBeDefined();\n    expect(sendCall!.source).toBe('run');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// C++ range-based for: for (auto& user : users) — Tier 1c\n// ---------------------------------------------------------------------------\n\ndescribe('C++ range-based for loop resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'cpp-range-for'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes with save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n  });\n\n  it('resolves user.save() in range-for to User#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processUsers' && c.targetFilePath?.includes('User'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('resolves repo.save() in const auto& range-for to Repo#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processRepos' && c.targetFilePath?.includes('Repo'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('does NOT cross-resolve user.save() to 
Repo#save (negative)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processUsers' && c.targetFilePath?.includes('Repo'),\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Return type inference: auto user = getUser(\"alice\"); user.save()\n// C++'s CONSTRUCTOR_BINDING_SCANNER captures auto declarations with\n// call_expression values, enabling return type inference from function results.\n// ---------------------------------------------------------------------------\n\ndescribe('C++ return type inference via auto + function call', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'cpp-return-type'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User class and getUser function', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Function')).toContain('getUser');\n  });\n\n  it('detects save method on User', () => {\n    const methods = getNodesByLabel(result, 'Method');\n    expect(methods).toContain('save');\n  });\n\n  it('resolves user.save() to User#save via return type of getUser(): User', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processUser' && c.targetFilePath.includes('user.h'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Return-type inference with competing methods:\n// Two classes both have save(), factory functions disambiguate via return type\n// ---------------------------------------------------------------------------\n\ndescribe('C++ return-type inference via function return type', () => {\n  let result: 
PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'cpp-return-type-inference'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves user.save() to User#save via return type of getUser()', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processUser' && c.targetFilePath.includes('user.h')\n    );\n    expect(saveCall).toBeDefined();\n  });\n\n  it('user.save() does NOT resolve to Repo#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processUser'\n    );\n    // Should resolve to exactly one target — if it resolves at all, check it's the right one\n    if (wrongSave) {\n      expect(wrongSave.targetFilePath).toContain('user.h');\n    }\n  });\n\n  it('resolves repo.save() to Repo#save via return type of getRepo()', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processRepo' && c.targetFilePath.includes('repo.h')\n    );\n    expect(saveCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Nullable receiver unwrapping: User* pointer type stripped for resolution\n// ---------------------------------------------------------------------------\n\ndescribe('C++ nullable receiver resolution (pointer types)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'cpp-nullable-receiver'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes with competing save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveMethods = 
getNodesByLabel(result, 'Method').filter((m: string) => m === 'save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves user->save() to User#save via pointer receiver typing', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processEntities' && c.targetFilePath.includes('User.h'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('resolves repo->save() to Repo#save via pointer receiver typing', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processEntities' && c.targetFilePath.includes('Repo.h'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('does NOT cross-contaminate (exactly 1 save per receiver file)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'save' && c.source === 'processEntities');\n    const userTargeted = saveCalls.filter(c => c.targetFilePath.includes('User.h'));\n    const repoTargeted = saveCalls.filter(c => c.targetFilePath.includes('Repo.h'));\n    expect(userTargeted.length).toBe(1);\n    expect(repoTargeted.length).toBe(1);\n  });\n});\n\n// ---------------------------------------------------------------------------\n// C++ assignment chain propagation: auto alias = u; alias.save()\n// Tests extractPendingAssignment for C++ auto declarations.\n// ---------------------------------------------------------------------------\n\ndescribe('C++ assignment chain propagation (auto alias)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'cpp-assignment-chain'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes each with a save method', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 
'Class')).toContain('Repo');\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves alias.save() to User#save via auto assignment chain', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processEntities' && c.targetFilePath?.includes('User.h'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('resolves rAlias.save() to Repo#save via auto assignment chain', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processEntities' && c.targetFilePath?.includes('Repo.h'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('each alias resolves to its own class, not the other', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'save' && c.source === 'processEntities');\n    const userTargeted = saveCalls.filter(c => c.targetFilePath?.includes('User.h'));\n    const repoTargeted = saveCalls.filter(c => c.targetFilePath?.includes('Repo.h'));\n    expect(userTargeted.length).toBe(1);\n    expect(repoTargeted.length).toBe(1);\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Chained method calls: svc.getUser().save()\n// Tests that C++ chain call resolution correctly infers the intermediate\n// receiver type from getUser()'s return type and resolves save() to User.\n// ---------------------------------------------------------------------------\n\ndescribe('C++ chained method call resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'cpp-chain-call'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User, Repo, and UserService classes', () => {\n    const classes = 
getNodesByLabel(result, 'Class');\n    expect(classes).toContain('User');\n    expect(classes).toContain('Repo');\n    expect(classes).toContain('UserService');\n  });\n\n  it('detects getUser and save symbols', () => {\n    const allSymbols = [\n      ...getNodesByLabel(result, 'Function'),\n      ...getNodesByLabel(result, 'Method'),\n    ];\n    expect(allSymbols).toContain('getUser');\n    expect(allSymbols).toContain('save');\n  });\n\n  it('resolves svc.getUser().save() to User#save via chain resolution', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' &&\n      c.source === 'processUser' &&\n      c.targetFilePath?.includes('user.h'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('does NOT resolve svc.getUser().save() to Repo#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' &&\n      c.source === 'processUser' &&\n      c.targetFilePath?.includes('repo.h'),\n    );\n    expect(repoSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// C++ structured binding in range-for: for (auto& [key, user] : userMap)\n// ---------------------------------------------------------------------------\n\ndescribe('C++ structured binding in range-for', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'cpp-structured-binding'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes with save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves user.save() in structured binding for-loop to 
User#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processUserMap' && c.targetFilePath?.includes('User.h'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('resolves repo.save() in structured binding for-loop to Repo#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processRepoMap' && c.targetFilePath?.includes('Repo.h'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('does NOT cross-resolve user.save() to Repo#save (negative)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processUserMap' && c.targetFilePath?.includes('Repo.h'),\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n\n  it('does NOT cross-resolve repo.save() to User#save (negative)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processRepoMap' && c.targetFilePath?.includes('User.h'),\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// C++ pointer dereference in range-for: for (auto& user : *ptr)\n// ---------------------------------------------------------------------------\n\ndescribe('C++ pointer dereference in range-for', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'cpp-deref-range-for'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes with save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n  });\n\n  it('resolves user.save() in *usersPtr range-for to 
User#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processUsers' && c.targetFilePath?.includes('User'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('resolves repo.save() in *reposPtr range-for to Repo#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processRepos' && c.targetFilePath?.includes('Repo'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('does NOT cross-resolve user.save() to Repo#save (negative)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processUsers' && c.targetFilePath?.includes('Repo'),\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Phase 8: Field/property type resolution (1-level)\n// ---------------------------------------------------------------------------\n\ndescribe('Field type resolution (C++)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'cpp-field-types'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects classes: Address, User', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['Address', 'User']);\n  });\n\n  it('detects Property nodes for C++ data member fields', () => {\n    const properties = getNodesByLabel(result, 'Property');\n    expect(properties).toContain('address');\n    expect(properties).toContain('name');\n    expect(properties).toContain('city');\n  });\n\n  it('emits HAS_PROPERTY edges linking fields to classes', () => {\n    const propEdges = getRelationships(result, 'HAS_PROPERTY');\n    expect(propEdges.length).toBe(3);\n    expect(edgeSet(propEdges)).toContain('User → address');\n  
  expect(edgeSet(propEdges)).toContain('User → name');\n    expect(edgeSet(propEdges)).toContain('Address → city');\n  });\n\n  it('resolves user.address.save() → Address#save via field type', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(e => e.target === 'save');\n    const addressSave = saveCalls.find(\n      e => e.source === 'processUser' && e.targetFilePath.includes('models'),\n    );\n    expect(addressSave).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Phase 8A: Deep field chain resolution (3-level)\n// ---------------------------------------------------------------------------\n\ndescribe('Deep field chain resolution (C++)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'cpp-deep-field-chain'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects classes: Address, City, User', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['Address', 'City', 'User']);\n  });\n\n  it('detects Property nodes for all typed fields', () => {\n    const properties = getNodesByLabel(result, 'Property');\n    expect(properties).toContain('address');\n    expect(properties).toContain('city');\n    expect(properties).toContain('zipCode');\n  });\n\n  it('emits HAS_PROPERTY edges for nested type chain', () => {\n    const propEdges = getRelationships(result, 'HAS_PROPERTY');\n    expect(edgeSet(propEdges)).toContain('User → address');\n    expect(edgeSet(propEdges)).toContain('Address → city');\n    expect(edgeSet(propEdges)).toContain('City → zipCode');\n  });\n\n  it('resolves 2-level chain: user.address.save() → Address#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(e => e.target === 'save' && e.source === 'processUser');\n    const addressSave = saveCalls.find(e => 
e.targetFilePath.includes('models'));\n    expect(addressSave).toBeDefined();\n  });\n\n  it('resolves 3-level chain: user.address.city.getName() → City#getName', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const getNameCalls = calls.filter(e => e.target === 'getName' && e.source === 'processUser');\n    const cityGetName = getNameCalls.find(e => e.targetFilePath.includes('models'));\n    expect(cityGetName).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Pointer and reference member fields (Address* address; Address& ref_address;)\n// ---------------------------------------------------------------------------\n\ndescribe('C++ pointer/reference member field capture', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'cpp-pointer-ref-fields'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects classes: Address, User', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['Address', 'User']);\n  });\n\n  it('detects Property nodes for pointer and reference member fields', () => {\n    const properties = getNodesByLabel(result, 'Property');\n    expect(properties).toContain('address');\n    expect(properties).toContain('ref_address');\n    expect(properties).toContain('name');\n    expect(properties).toContain('city');\n  });\n\n  it('emits HAS_PROPERTY edges for pointer/reference fields', () => {\n    const propEdges = getRelationships(result, 'HAS_PROPERTY');\n    expect(edgeSet(propEdges)).toContain('User → address');\n    expect(edgeSet(propEdges)).toContain('User → ref_address');\n    expect(edgeSet(propEdges)).toContain('User → name');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// ACCESSES write edges from assignment expressions\n// ---------------------------------------------------------------------------\n\ndescribe('Write access tracking (C++)', () => {\n  let result: PipelineResult;\n\n  
beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'cpp-write-access'),\n      () => {},\n    );\n  }, 60000);\n\n  it('emits ACCESSES write edges for field assignments', () => {\n    const accesses = getRelationships(result, 'ACCESSES');\n    const writes = accesses.filter(e => e.rel.reason === 'write');\n    expect(writes.length).toBe(2);\n    const fieldNames = writes.map(e => e.target);\n    expect(fieldNames).toContain('name');\n    expect(fieldNames).toContain('address');\n    const sources = writes.map(e => e.source);\n    expect(sources).toContain('updateUser');\n  });\n\n  it('write ACCESSES edges have confidence 1.0', () => {\n    const accesses = getRelationships(result, 'ACCESSES');\n    const writes = accesses.filter(e => e.rel.reason === 'write');\n    for (const edge of writes) {\n      expect(edge.rel.confidence).toBe(1.0);\n    }\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Call-result variable binding (Phase 9): auto user = getUser(); user.save()\n// ---------------------------------------------------------------------------\n\ndescribe('C++ call-result variable binding (Tier 2b)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'cpp-call-result-binding'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves user.save() to User#save via call-result binding with auto', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processUser'\n    );\n    expect(saveCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Method chain binding (Phase 9C): getUser() → .address → .getCity() → .save()\n// ---------------------------------------------------------------------------\n\ndescribe('C++ method chain binding via unified 
fixpoint (Phase 9C)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'cpp-method-chain-binding'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves city.save() to City#save via method chain with auto', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processChain'\n    );\n    expect(saveCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Phase B: Deep MRO — walkParentChain() at depth 2 (C→B→A)\n// greet() is defined on A, accessed via C. Tests BFS depth-2 parent traversal.\n// ---------------------------------------------------------------------------\n\ndescribe('C++ grandparent method resolution via MRO (Phase B)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'cpp-grandparent-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects A, B, C, Greeting classes', () => {\n    const classes = getNodesByLabel(result, 'Class');\n    expect(classes).toContain('A');\n    expect(classes).toContain('B');\n    expect(classes).toContain('C');\n    expect(classes).toContain('Greeting');\n  });\n\n  it('emits EXTENDS edges: B→A, C→B', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    expect(edgeSet(extends_)).toContain('B → A');\n    expect(edgeSet(extends_)).toContain('C → B');\n  });\n\n  it('resolves c.greet().save() to Greeting#save via depth-2 MRO lookup', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.targetFilePath.includes('Greeting'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n\n  it('resolves c.greet() to A#greet (method found via MRO walk)', () => {\n    const calls = getRelationships(result, 
'CALLS');\n    const greetCall = calls.find(c =>\n      c.target === 'greet' && c.targetFilePath.includes('A.h'),\n    );\n    expect(greetCall).toBeDefined();\n  });\n});\n\n// ── Phase P: Overload Disambiguation via Parameter Types ─────────────────\n\ndescribe('C++ overload disambiguation by parameter types', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'cpp-overload-param-types'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects lookup method with parameterTypes on graph node', () => {\n    const methods = getNodesByLabelFull(result, 'Method');\n    const lookupNodes = methods.filter(m => m.name === 'lookup');\n    expect(lookupNodes.length).toBe(1);\n    expect(lookupNodes[0].properties.parameterTypes).toEqual(['int']);\n  });\n\n  it('emits CALLS edge from run() → lookup() via overload disambiguation', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const lookupCalls = calls.filter(c => c.source === 'run' && c.target === 'lookup');\n    // Both lookup(42) and lookup(\"alice\") resolve to same nodeId → 1 CALLS edge\n    expect(lookupCalls.length).toBe(1);\n  });\n});\n\n// ---------------------------------------------------------------------------\n// C++ smart pointer virtual dispatch via std::make_shared<T>()\n// ---------------------------------------------------------------------------\n\ndescribe('C++ smart pointer virtual dispatch via make_shared', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'cpp-smart-ptr-dispatch'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects Dog and Animal classes', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('Animal');\n    expect(getNodesByLabel(result, 'Class')).toContain('Dog');\n  });\n\n  it('emits CALLS edge from process → speak', () => {\n    const calls = getRelationships(result, 
'CALLS');\n    const speakCall = calls.find(c => c.source === 'process' && c.target === 'speak');\n    expect(speakCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// C++ default parameter arity resolution\n// ---------------------------------------------------------------------------\n\ndescribe('C++ default parameter arity resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'cpp-default-params'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves greet(\"Alice\") with 1 arg to greet with 2 params (1 default)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const greetCalls = calls.filter(c => c.source === 'process' && c.target === 'greet');\n    expect(greetCalls.length).toBe(1);\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/integration/resolvers/csharp.test.ts",
    "content": "/**\n * C#: heritage resolution via base_list + ambiguous namespace-import refusal\n */\nimport { describe, it, expect, beforeAll } from 'vitest';\nimport path from 'path';\nimport {\n  FIXTURES, getRelationships, getNodesByLabel, getNodesByLabelFull, edgeSet,\n  runPipelineFromRepo, type PipelineResult,\n} from './helpers.js';\n\n// ---------------------------------------------------------------------------\n// Heritage: class + interface resolution via base_list\n// ---------------------------------------------------------------------------\n\ndescribe('C# heritage resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'csharp-proj'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects exactly 3 classes and 2 interfaces', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['BaseEntity', 'User', 'UserService']);\n    expect(getNodesByLabel(result, 'Interface')).toEqual(['ILogger', 'IRepository']);\n  });\n\n  it('emits exactly 1 EXTENDS edge: User → BaseEntity', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    expect(extends_.length).toBe(1);\n    expect(extends_[0].source).toBe('User');\n    expect(extends_[0].target).toBe('BaseEntity');\n  });\n\n  it('emits exactly 1 IMPLEMENTS edge: User → IRepository', () => {\n    const implements_ = getRelationships(result, 'IMPLEMENTS');\n    expect(implements_.length).toBe(1);\n    expect(implements_[0].source).toBe('User');\n    expect(implements_[0].target).toBe('IRepository');\n  });\n\n  it('emits CALLS edges from CreateUser (constructor + member calls)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    expect(calls.length).toBe(4);\n    const targets = edgeSet(calls);\n    expect(targets).toContain('CreateUser → User');      // new User() constructor\n    expect(targets).toContain('CreateUser → Validate');   // user.Validate() — receiver-typed\n    
expect(targets).toContain('CreateUser → Save');       // _repo.Save() — receiver-typed\n    expect(targets).toContain('CreateUser → Log');        // _logger.Log() — receiver-typed\n  });\n\n  it('resolves all CALLS from CreateUser via import-resolved or unique-global', () => {\n    const calls = getRelationships(result, 'CALLS');\n    // C# non-aliased `using Namespace;` imports don't populate NamedImportMap\n    // (namespace-scoped imports can't bind to individual symbols).\n    // Calls resolve via directory-based PackageMap (import-resolved) when ambiguous,\n    // or via unique-global when the symbol name is globally unique.\n    for (const call of calls) {\n      expect(['import-resolved', 'global']).toContain(call.rel.reason);\n    }\n  });\n\n  it('resolves new User() to the User class via constructor discrimination', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const ctorCall = calls.find(c => c.target === 'User');\n    expect(ctorCall).toBeDefined();\n    expect(ctorCall!.targetLabel).toBe('Class');\n  });\n\n  it('detects 4 namespaces', () => {\n    const ns = getNodesByLabel(result, 'Namespace');\n    expect(ns.length).toBe(4);\n  });\n\n  it('detects properties on classes', () => {\n    const props = getNodesByLabel(result, 'Property');\n    expect(props).toContain('Id');\n    expect(props).toContain('Name');\n  });\n\n  it('no OVERRIDES edges target Property nodes', () => {\n    const overrides = getRelationships(result, 'OVERRIDES');\n    for (const edge of overrides) {\n      const target = result.graph.getNode(edge.rel.targetId);\n      expect(target).toBeDefined();\n      expect(target!.label).not.toBe('Property');\n    }\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Ambiguous: using-namespace can't disambiguate same-named types\n// ---------------------------------------------------------------------------\n\ndescribe('C# ambiguous symbol resolution', () => {\n  let result: 
PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'csharp-ambiguous'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects 2 Handler classes and 2 IProcessor interfaces', () => {\n    const classes = getNodesByLabel(result, 'Class');\n    expect(classes.filter(n => n === 'Handler').length).toBe(2);\n    const ifaces = getNodesByLabel(result, 'Interface');\n    expect(ifaces.filter(n => n === 'IProcessor').length).toBe(2);\n  });\n\n  it('heritage targets are synthetic (correct refusal for ambiguous namespace import)', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    const implements_ = getRelationships(result, 'IMPLEMENTS');\n\n    expect(extends_.length).toBe(1);\n    expect(extends_[0].source).toBe('UserHandler');\n    expect(implements_.length).toBe(1);\n    expect(implements_[0].source).toBe('UserHandler');\n\n    // The key invariant: no edge points to Other/\n    if (extends_[0].targetFilePath) {\n      expect(extends_[0].targetFilePath).not.toMatch(/Other\\//);\n    }\n    if (implements_[0].targetFilePath) {\n      expect(implements_[0].targetFilePath).not.toMatch(/Other\\//);\n    }\n  });\n});\n\ndescribe('C# call resolution with arity filtering', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'csharp-calls'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves CreateUser → WriteAudit to Utils/OneArg.cs via arity narrowing', () => {\n    const calls = getRelationships(result, 'CALLS');\n    expect(calls.length).toBe(1);\n    expect(calls[0].source).toBe('CreateUser');\n    expect(calls[0].target).toBe('WriteAudit');\n    expect(calls[0].targetFilePath).toBe('Utils/OneArg.cs');\n    expect(calls[0].rel.reason).toBe('import-resolved');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Member-call resolution: obj.Method() resolves 
through pipeline\n// ---------------------------------------------------------------------------\n\ndescribe('C# member-call resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'csharp-member-calls'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves ProcessUser → Save as a member call on User', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'Save');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.source).toBe('ProcessUser');\n    expect(saveCall!.targetFilePath).toBe('Models/User.cs');\n  });\n\n  it('detects User class and Save method', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Method')).toContain('Save');\n  });\n\n  it('emits HAS_METHOD edge from User to Save', () => {\n    const hasMethod = getRelationships(result, 'HAS_METHOD');\n    const edge = hasMethod.find(e => e.source === 'User' && e.target === 'Save');\n    expect(edge).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Primary constructor resolution: class User(string name, int age) { }\n// ---------------------------------------------------------------------------\n\ndescribe('C# primary constructor resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'csharp-primary-ctors'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects Constructor nodes for primary constructors on class and record', () => {\n    const ctors = getNodesByLabel(result, 'Constructor');\n    expect(ctors).toContain('User');\n    expect(ctors).toContain('Person');\n  });\n\n  it('primary constructor has correct parameter count', () => {\n    let userCtorParams: number | undefined;\n    let personCtorParams: number | 
undefined;\n    result.graph.forEachNode(n => {\n      if (n.label === 'Constructor' && n.properties.name === 'User') {\n        userCtorParams = n.properties.parameterCount as number;\n      }\n      if (n.label === 'Constructor' && n.properties.name === 'Person') {\n        personCtorParams = n.properties.parameterCount as number;\n      }\n    });\n    expect(userCtorParams).toBe(2);\n    expect(personCtorParams).toBe(2);\n  });\n\n  it('resolves new User(...) as a CALLS edge to the Constructor node', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const ctorCall = calls.find(c => c.target === 'User');\n    expect(ctorCall).toBeDefined();\n    expect(ctorCall!.source).toBe('Run');\n    expect(ctorCall!.targetLabel).toBe('Constructor');\n    expect(ctorCall!.targetFilePath).toBe('Models/User.cs');\n  });\n\n  it('also resolves user.Save() as a method call', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'Save');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.source).toBe('Run');\n  });\n\n  it('emits HAS_METHOD edge from User class to User constructor', () => {\n    const hasMethod = getRelationships(result, 'HAS_METHOD');\n    const edge = hasMethod.find(e => e.source === 'User' && e.target === 'User');\n    expect(edge).toBeDefined();\n  });\n\n  it('emits HAS_METHOD edge from Person record to Person constructor', () => {\n    const hasMethod = getRelationships(result, 'HAS_METHOD');\n    const edge = hasMethod.find(e => e.source === 'Person' && e.target === 'Person');\n    expect(edge).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Receiver-constrained resolution: typed variables disambiguate same-named methods\n// ---------------------------------------------------------------------------\n\ndescribe('C# receiver-constrained resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () 
=> {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'csharp-receiver-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes, both with Save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'Save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves user.Save() to User.Save and repo.Save() to Repo.Save via receiver typing', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'Save');\n    expect(saveCalls.length).toBe(2);\n\n    const userSave = saveCalls.find(c => c.targetFilePath === 'Models/User.cs');\n    const repoSave = saveCalls.find(c => c.targetFilePath === 'Models/Repo.cs');\n\n    expect(userSave).toBeDefined();\n    expect(repoSave).toBeDefined();\n    expect(userSave!.source).toBe('ProcessEntities');\n    expect(repoSave!.source).toBe('ProcessEntities');\n  });\n\n  it('resolves constructor calls for both User and Repo', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userCtor = calls.find(c => c.target === 'User');\n    const repoCtor = calls.find(c => c.target === 'Repo');\n    expect(userCtor).toBeDefined();\n    expect(repoCtor).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Alias import resolution: using U = Models.User resolves U → User\n// ---------------------------------------------------------------------------\n\ndescribe('C# alias import resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'csharp-alias-imports'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects Main, Repo, and User classes', () => {\n    expect(getNodesByLabel(result, 
'Class')).toEqual(['Main', 'Repo', 'User']);\n  });\n\n  it('resolves u.Save() to User.cs and r.Persist() to Repo.cs via alias', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'Save');\n    const persistCall = calls.find(c => c.target === 'Persist');\n\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.source).toBe('Run');\n    expect(saveCall!.targetLabel).toBe('Method');\n    expect(saveCall!.targetFilePath).toBe('Models/User.cs');\n\n    expect(persistCall).toBeDefined();\n    expect(persistCall!.source).toBe('Run');\n    expect(persistCall!.targetLabel).toBe('Method');\n    expect(persistCall!.targetFilePath).toBe('Models/Repo.cs');\n  });\n\n  it('emits exactly 2 IMPORTS edges via alias resolution', () => {\n    const imports = getRelationships(result, 'IMPORTS');\n    expect(imports.length).toBe(2);\n    expect(edgeSet(imports)).toEqual([\n      'Main.cs → Repo.cs',\n      'Main.cs → User.cs',\n    ]);\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Variadic resolution: params string[] doesn't get filtered by arity\n// ---------------------------------------------------------------------------\n\ndescribe('C# variadic call resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'csharp-variadic-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves call to params method Record(params string[]) in Logger.cs', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const logCall = calls.find(c => c.target === 'Record');\n    expect(logCall).toBeDefined();\n    expect(logCall!.source).toBe('Execute');\n    expect(logCall!.targetFilePath).toBe('Utils/Logger.cs');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Local shadow: same-file definition takes priority over imported 
name\n// ---------------------------------------------------------------------------\n\ndescribe('C# local definition shadows import', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'csharp-local-shadow'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves Run → Save to same-file definition, not the imported one', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'Save' && c.source === 'Run');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.targetFilePath).toBe('App/Main.cs');\n  });\n\n  it('does NOT resolve Save to Logger.cs', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveToUtils = calls.find(c => c.target === 'Save' && c.targetFilePath === 'Utils/Logger.cs');\n    expect(saveToUtils).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// For-each loop element typing: foreach (User user in users) user.Save()\n// C#: explicit type in foreach_statement binds loop variable\n// ---------------------------------------------------------------------------\n\ndescribe('C# foreach loop element type resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'csharp-foreach'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes, both with Save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'Save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves user.Save() in foreach to User#Save (not Repo#Save)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c => c.target === 
'Save' && c.targetFilePath === 'Models/User.cs');\n    expect(userSave).toBeDefined();\n    expect(userSave!.source).toBe('ProcessEntities');\n  });\n\n  it('resolves repo.Save() in foreach to Repo#Save (not User#Save)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c => c.target === 'Save' && c.targetFilePath === 'Models/Repo.cs');\n    expect(repoSave).toBeDefined();\n    expect(repoSave!.source).toBe('ProcessEntities');\n  });\n\n  it('emits exactly 2 Save() CALLS edges (one per receiver type)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'Save');\n    expect(saveCalls.length).toBe(2);\n  });\n});\n\n// ---------------------------------------------------------------------------\n// this.Save() resolves to enclosing class's own Save method\n// ---------------------------------------------------------------------------\n\ndescribe('C# this resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'csharp-self-this-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes, each with a Save method', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['Repo', 'User']);\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'Save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves this.Save() inside User.Process to User.Save, not Repo.Save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'Save' && c.source === 'Process');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.targetFilePath).toBe('src/Models/User.cs');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Parent class resolution: EXTENDS + IMPLEMENTS via base_list\n// 
---------------------------------------------------------------------------\n\ndescribe('C# parent resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'csharp-parent-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects BaseModel and User classes plus ISerializable interface', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['BaseModel', 'User']);\n    expect(getNodesByLabel(result, 'Interface')).toEqual(['ISerializable']);\n  });\n\n  it('emits EXTENDS edge: User → BaseModel (from base_list)', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    expect(extends_.length).toBe(1);\n    expect(extends_[0].source).toBe('User');\n    expect(extends_[0].target).toBe('BaseModel');\n  });\n\n  it('emits IMPLEMENTS edge: User → ISerializable', () => {\n    const implements_ = getRelationships(result, 'IMPLEMENTS');\n    expect(implements_.length).toBe(1);\n    expect(implements_[0].source).toBe('User');\n    expect(implements_[0].target).toBe('ISerializable');\n  });\n\n  it('all heritage edges point to real graph nodes', () => {\n    for (const edge of [...getRelationships(result, 'EXTENDS'), ...getRelationships(result, 'IMPLEMENTS')]) {\n      const target = result.graph.getNode(edge.rel.targetId);\n      expect(target).toBeDefined();\n      expect(target!.properties.name).toBe(edge.target);\n    }\n  });\n});\n\n// ---------------------------------------------------------------------------\n// base.Save() resolves to parent class's Save method\n// ---------------------------------------------------------------------------\n\ndescribe('C# base resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'csharp-super-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects BaseModel, User, and Repo classes', () => {\n    
expect(getNodesByLabel(result, 'Class')).toEqual(['BaseModel', 'Repo', 'User']);\n  });\n\n  it('resolves base.Save() inside User to BaseModel.Save, not Repo.Save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const baseSave = calls.find(c => c.source === 'Save' && c.target === 'Save'\n      && c.targetFilePath === 'src/Models/BaseModel.cs');\n    expect(baseSave).toBeDefined();\n    const repoSave = calls.find(c => c.target === 'Save' && c.targetFilePath === 'src/Models/Repo.cs');\n    expect(repoSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// base.Save() resolves to generic parent class's Save method\n// ---------------------------------------------------------------------------\n\ndescribe('C# generic parent base resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'csharp-generic-parent-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects BaseModel, User, and Repo classes', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['BaseModel', 'Repo', 'User']);\n  });\n\n  it('resolves base.Save() inside User to BaseModel.Save, not Repo.Save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const baseSave = calls.find(c => c.source === 'Save' && c.target === 'Save'\n      && c.targetFilePath === 'src/Models/BaseModel.cs');\n    expect(baseSave).toBeDefined();\n    const repoSave = calls.find(c => c.target === 'Save' && c.targetFilePath === 'src/Models/Repo.cs');\n    expect(repoSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Pattern matching: `if (animal is Dog dog)` binds `dog` as type `Dog`\n// ---------------------------------------------------------------------------\n\ndescribe('C# is pattern matching resolution', () => {\n  let result: PipelineResult;\n\n  
beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'csharp-pattern-matching'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects Animal, Dog, and Cat classes', () => {\n    const classes = getNodesByLabel(result, 'Class');\n    expect(classes).toContain('Animal');\n    expect(classes).toContain('Dog');\n    expect(classes).toContain('Cat');\n  });\n\n  it('detects Bark and Meow methods', () => {\n    const methods = getNodesByLabel(result, 'Method');\n    expect(methods).toContain('Bark');\n    expect(methods).toContain('Meow');\n  });\n\n  it('resolves dog.Bark() to Dog.Bark via is-pattern type binding', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const barkCall = calls.find(c => c.target === 'Bark');\n    expect(barkCall).toBeDefined();\n    expect(barkCall!.source).toBe('HandleAnimal');\n    expect(barkCall!.targetFilePath).toBe('Models/Animal.cs');\n  });\n\n  it('emits EXTENDS edges for Dog and Cat', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    const dogExtends = extends_.find(e => e.source === 'Dog');\n    const catExtends = extends_.find(e => e.source === 'Cat');\n    expect(dogExtends).toBeDefined();\n    expect(dogExtends!.target).toBe('Animal');\n    expect(catExtends).toBeDefined();\n    expect(catExtends!.target).toBe('Animal');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Return type inference: var user = svc.GetUser(\"alice\"); user.Save()\n// C#'s CONSTRUCTOR_BINDING_SCANNER handles `var` declarations with\n// invocation_expression values, enabling end-to-end return type inference.\n// ---------------------------------------------------------------------------\n\ndescribe('C# return type inference via var + invocation', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'csharp-return-type'),\n      () => {},\n    );\n  
}, 60000);\n\n  it('detects User, UserService, and Repo classes', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('UserService');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n  });\n\n  it('detects Save on both User and Repo, plus GetUser', () => {\n    const methods = getNodesByLabel(result, 'Method');\n    expect(methods).toContain('Save');\n    expect(methods).toContain('GetUser');\n    // Repo.Save is also detected, proving the disambiguation test is meaningful\n    expect(methods.filter((m: string) => m === 'Save').length).toBe(2);\n  });\n\n  it('resolves user.Save() to User#Save (not Repo#Save) via return type of GetUser(): User', () => {\n    // scanConstructorBinding binds `var user = svc.GetUser()` → calleeName \"GetUser\".\n    // processCallsFromExtracted verifies GetUser's returnType is \"User\" via\n    // PackageMap resolution of `using ReturnType.Models;`, then receiver filtering\n    // resolves user.Save() to User#Save (not Repo#Save).\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'Save' && c.source === 'Run' && c.targetFilePath.includes('User.cs'),\n    );\n    expect(saveCall).toBeDefined();\n    // Must NOT resolve to Repo.Save — that would mean disambiguation failed\n    const repoSave = calls.find(c =>\n      c.target === 'Save' && c.source === 'Run' && c.targetFilePath.includes('Repo.cs'),\n    );\n    expect(repoSave).toBeUndefined();\n  });\n});\n\ndescribe('C# null-conditional call resolution (user?.Save())', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'csharp-null-conditional'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes with competing Save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    
expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveMethods = getNodesByLabel(result, 'Method').filter((m: string) => m === 'Save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('captures null-conditional user?.Save() call', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'Save' && c.source === 'Process');\n    expect(saveCalls.length).toBeGreaterThan(0);\n  });\n\n  it('resolves user?.Save() to User#Save via receiver typing', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'Save' && c.source === 'Process' && c.targetFilePath.includes('User.cs'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('resolves repo?.Save() to Repo#Save via receiver typing', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'Save' && c.source === 'Process' && c.targetFilePath.includes('Repo.cs'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('does NOT cross-contaminate (exactly 1 Save per receiver file)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'Save' && c.source === 'Process');\n    const userTargeted = saveCalls.filter(c => c.targetFilePath.includes('User.cs'));\n    const repoTargeted = saveCalls.filter(c => c.targetFilePath.includes('Repo.cs'));\n    expect(userTargeted.length).toBe(1);\n    expect(repoTargeted.length).toBe(1);\n  });\n});\n\n// ---------------------------------------------------------------------------\n// C# async/await constructor binding resolution\n// Verifies that `var user = await svc.GetUserAsync()` correctly unwraps the\n// await_expression to find the invocation_expression underneath, producing a\n// constructor binding that enables receiver-based disambiguation of user.Save().\n// 
---------------------------------------------------------------------------\n\ndescribe('C# async await constructor binding resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'csharp-async-binding'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User, UserService, and OrderService classes', () => {\n    const classes = getNodesByLabel(result, 'Class');\n    expect(classes).toContain('User');\n    expect(classes).toContain('UserService');\n    expect(classes).toContain('OrderService');\n  });\n\n  it('detects competing Save methods on User and Order', () => {\n    const methods = getNodesByLabel(result, 'Method');\n    expect(methods).toContain('Save');\n    expect(methods).toContain('GetUserAsync');\n    expect(methods).toContain('GetOrderAsync');\n  });\n\n  it('resolves user.Save() after await to User#Save via return type inference', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'Save' && c.source === 'ProcessUser' && c.targetFilePath.includes('User.cs'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('user.Save() does NOT resolve to Order#Save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'Save' && c.source === 'ProcessUser' && c.targetFilePath.includes('Order.cs'),\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n\n  it('resolves order.Save() after await to Order#Save via return type inference', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const orderSave = calls.find(c =>\n      c.target === 'Save' && c.source === 'ProcessOrder' && c.targetFilePath.includes('Order.cs'),\n    );\n    expect(orderSave).toBeDefined();\n  });\n\n  it('order.Save() does NOT resolve to User#Save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = 
calls.find(c =>\n      c.target === 'Save' && c.source === 'ProcessOrder' && c.targetFilePath.includes('User.cs'),\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Assignment chain propagation (Phase 4.3)\n// ---------------------------------------------------------------------------\n\ndescribe('C# assignment chain propagation', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'csharp-assignment-chain'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes each with a Save method', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'Save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves alias.Save() to User#Save via assignment chain', () => {\n    const calls = getRelationships(result, 'CALLS');\n    // Positive: alias.Save() must resolve to User#Save\n    const userSave = calls.find(c =>\n      c.target === 'Save' && c.source === 'ProcessEntities' && c.targetFilePath.includes('User.cs'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('alias.Save() does NOT resolve to Repo#Save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    // Negative: alias comes from User, so only one edge to User.cs\n    const wrongCall = calls.filter(c =>\n      c.target === 'Save' && c.source === 'ProcessEntities' && c.targetFilePath.includes('User.cs'),\n    );\n    expect(wrongCall.length).toBe(1);\n  });\n\n  it('resolves rAlias.Save() to Repo#Save via assignment chain', () => {\n    const calls = getRelationships(result, 'CALLS');\n    // Positive: rAlias.Save() must resolve to Repo#Save\n    const repoSave = calls.find(c =>\n      c.target === 'Save' && c.source === 
'ProcessEntities' && c.targetFilePath.includes('Repo.cs'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('each alias resolves to its own class, not the other', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'Save' && c.source === 'ProcessEntities' && c.targetFilePath.includes('User.cs'),\n    );\n    const repoSave = calls.find(c =>\n      c.target === 'Save' && c.source === 'ProcessEntities' && c.targetFilePath.includes('Repo.cs'),\n    );\n    expect(userSave).toBeDefined();\n    expect(repoSave).toBeDefined();\n    expect(userSave!.targetFilePath).not.toBe(repoSave!.targetFilePath);\n  });\n});\n\n// ---------------------------------------------------------------------------\n// C# mixed declarations: assignment chain + is-pattern in the same file.\n// Tests that the type guard in extractPendingAssignment correctly skips\n// is_pattern_expression nodes while still handling local_declaration_statement.\n// ---------------------------------------------------------------------------\n\ndescribe('C# assignment chain + is-pattern coexistence', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'csharp-mixed-decl-chain'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes each with a Save method', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'Save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves alias.Save() to User#Save via assignment chain', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'Save' && c.source === 'ProcessWithChain' && c.targetFilePath?.includes('User.cs'),\n    );\n    
expect(userSave).toBeDefined();\n  });\n\n  it('assignment chain alias does NOT resolve to Repo#Save (negative)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongCall = calls.find(c =>\n      c.target === 'Save' && c.source === 'ProcessWithChain' && c.targetFilePath?.includes('Repo.cs'),\n    );\n    expect(wrongCall).toBeUndefined();\n  });\n\n  it('resolves u.Save() to User#Save via is-pattern binding', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const patternSave = calls.find(c =>\n      c.target === 'Save' && c.source === 'ProcessWithPattern' && c.targetFilePath?.includes('User.cs'),\n    );\n    expect(patternSave).toBeDefined();\n  });\n\n  it('resolves alias.Save() to Repo#Save via Repo assignment chain', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'Save' && c.source === 'ProcessRepoChain' && c.targetFilePath?.includes('Repo.cs'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('Repo chain alias does NOT resolve to User#Save (negative)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongCall = calls.find(c =>\n      c.target === 'Save' && c.source === 'ProcessRepoChain' && c.targetFilePath?.includes('User.cs'),\n    );\n    expect(wrongCall).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// C# is-pattern disambiguation: `if (obj is User user)` should bind user → User\n// and resolve user.Save() to User#Save, NOT Repo#Save.\n// Validates the Phase 5.2 is_pattern_expression extraction in extractDeclaration.\n// ---------------------------------------------------------------------------\n\ndescribe('C# is-pattern type binding disambiguation (Phase 5.2)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'csharp-is-pattern'),\n      () => {},\n    
);\n  }, 60000);\n\n  it('detects User and Repo classes each with a Save method', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'Save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves user.Save() inside if (obj is User user) to User#Save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'Save' &&\n      c.source === 'Process' &&\n      c.targetFilePath?.includes('User.cs'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('does NOT resolve user.Save() to Repo#Save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'Save' &&\n      c.source === 'Process' &&\n      c.targetFilePath?.includes('Repo.cs'),\n    );\n    expect(repoSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Chained method calls: svc.GetUser().Save()\n// Tests that C# chain call resolution correctly infers the intermediate\n// receiver type from GetUser()'s return type and resolves Save() to User.\n// ---------------------------------------------------------------------------\n\ndescribe('C# chained method call resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'csharp-chain-call'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User, Repo, and UserService classes', () => {\n    const classes = getNodesByLabel(result, 'Class');\n    expect(classes).toContain('User');\n    expect(classes).toContain('Repo');\n    expect(classes).toContain('UserService');\n  });\n\n  it('detects GetUser and Save methods', () => {\n    const methods = getNodesByLabel(result, 'Method');\n    
expect(methods).toContain('GetUser');\n    expect(methods).toContain('Save');\n  });\n\n  it('resolves svc.GetUser().Save() to User#Save via chain resolution', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'Save' &&\n      c.source === 'ProcessUser' &&\n      c.targetFilePath?.includes('User.cs'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('does NOT resolve svc.GetUser().Save() to Repo#Save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'Save' &&\n      c.source === 'ProcessUser' &&\n      c.targetFilePath?.includes('Repo.cs'),\n    );\n    expect(repoSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// C# var foreach Tier 1c: foreach (var user in users) with List<User> param\n// ---------------------------------------------------------------------------\n\ndescribe('C# var foreach type resolution (Tier 1c)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'csharp-var-foreach'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes, both with Save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n  });\n\n  it('detects methods on both classes', () => {\n    const methods = getNodesByLabel(result, 'Method');\n    expect(methods.filter(m => m === 'Save').length).toBe(2);\n    expect(methods).toContain('ProcessUsers');\n    expect(methods).toContain('ProcessRepos');\n  });\n\n  it('resolves direct calls with explicit parameter types (u.Save, r.Save)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const directUserSave = calls.find(c =>\n      c.target === 'Save' && c.source === 'Direct' && 
c.targetFilePath?.includes('User.cs'),\n    );\n    const directRepoSave = calls.find(c =>\n      c.target === 'Save' && c.source === 'Direct' && c.targetFilePath?.includes('Repo.cs'),\n    );\n    expect(directUserSave).toBeDefined();\n    expect(directRepoSave).toBeDefined();\n  });\n\n  it('resolves user.Save() in var foreach to User#Save via Tier 1c', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'Save' && c.source === 'ProcessUsers' && c.targetFilePath?.includes('User.cs'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('resolves repo.Save() in var foreach to Repo#Save via Tier 1c', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'Save' && c.source === 'ProcessRepos' && c.targetFilePath?.includes('Repo.cs'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('does NOT cross-resolve user.Save() to Repo#Save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrong = calls.find(c =>\n      c.target === 'Save' && c.source === 'ProcessUsers' && c.targetFilePath?.includes('Repo.cs'),\n    );\n    expect(wrong).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// C# switch pattern: switch (obj) { case User user: user.Save(); }\n// ---------------------------------------------------------------------------\n\ndescribe('C# switch pattern type resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'csharp-switch-pattern'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n  });\n\n  it('resolves user.Save() via is-pattern to User#Save', () => {\n    const calls = 
getRelationships(result, 'CALLS');\n    const userSave = calls.find(c => c.target === 'Save' && c.targetFilePath === 'Models/User.cs');\n    expect(userSave).toBeDefined();\n  });\n\n  it('resolves repo.Save() via switch case pattern to Repo#Save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c => c.target === 'Save' && c.targetFilePath === 'Models/Repo.cs');\n    expect(repoSave).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// C# Dictionary .Values foreach — member_access_expression resolution\n// ---------------------------------------------------------------------------\n\ndescribe('C# Dictionary .Values foreach resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'csharp-dictionary-keys-values'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User class with Save method', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n  });\n\n  it('resolves user.Save() via Dictionary.Values to User#Save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'Save' && c.source === 'ProcessValues' && c.targetFilePath?.includes('User'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('does NOT resolve user.Save() to Repo#Save (negative)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'Save' && c.source === 'ProcessValues' && c.targetFilePath?.includes('Repo'),\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// C# recursive_pattern: obj is User { Name: \"Alice\" } u — Phase 6.1\n// ---------------------------------------------------------------------------\n\ndescribe('C# recursive_pattern type 
resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'csharp-recursive-pattern'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes with Save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n  });\n\n  it('resolves u.Save() via recursive_pattern is-expression to User#Save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'Save' && c.targetFilePath?.includes('User'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('resolves r.Save() via recursive_pattern switch expression to Repo#Save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'Save' && c.targetFilePath?.includes('Repo'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('resolves exactly one Save call per target class (no cross-resolution)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'Save' && c.source === 'ProcessWithRecursivePattern');\n    const toUser = saveCalls.filter(c => c.targetFilePath?.includes('User'));\n    const toRepo = saveCalls.filter(c => c.targetFilePath?.includes('Repo'));\n    // u.Save() → User#Save only, r.Save() → Repo#Save only\n    expect(toUser.length).toBe(1);\n    expect(toRepo.length).toBe(1);\n  });\n});\n\n// ---------------------------------------------------------------------------\n// C# nested member access with container property: this.data.Values\n// ---------------------------------------------------------------------------\n\ndescribe('C# nested member access foreach (this.data.Values)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      
path.join(FIXTURES, 'csharp-nested-member-foreach'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User class with Save method', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n  });\n\n  it('resolves user.Save() via this.data.Values to User#Save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'Save' && c.source === 'ProcessValues' && c.targetFilePath?.includes('User'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('does NOT resolve user.Save() to Repo#Save (negative)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'Save' && c.source === 'ProcessValues' && c.targetFilePath?.includes('Repo'),\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Phase 8: Field/property type resolution (1-level)\n// ---------------------------------------------------------------------------\n\ndescribe('Field type resolution (C#)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'csharp-field-types'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects classes: Address, Service, User', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['Address', 'Service', 'User']);\n  });\n\n  it('detects Property nodes for C# properties', () => {\n    const properties = getNodesByLabel(result, 'Property');\n    expect(properties).toContain('Address');\n    expect(properties).toContain('Name');\n    expect(properties).toContain('City');\n  });\n\n  it('emits HAS_PROPERTY edges linking properties to classes', () => {\n    const propEdges = getRelationships(result, 'HAS_PROPERTY');\n    expect(propEdges.length).toBe(3);\n    expect(edgeSet(propEdges)).toContain('User → Address');\n    
expect(edgeSet(propEdges)).toContain('User → Name');\n    expect(edgeSet(propEdges)).toContain('Address → City');\n  });\n\n  it('resolves user.Address.Save() → Address#Save via field type', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(e => e.target === 'Save');\n    const addressSave = saveCalls.find(\n      e => e.source === 'ProcessUser' && e.targetFilePath.includes('Models'),\n    );\n    expect(addressSave).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Phase 8A: Deep field chain resolution (3-level)\n// ---------------------------------------------------------------------------\n\ndescribe('Deep field chain resolution (C#)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'csharp-deep-field-chain'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects classes: Address, City, Service, User', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['Address', 'City', 'Service', 'User']);\n  });\n\n  it('detects Property nodes for C# properties', () => {\n    const properties = getNodesByLabel(result, 'Property');\n    expect(properties).toContain('Address');\n    expect(properties).toContain('City');\n    expect(properties).toContain('ZipCode');\n  });\n\n  it('emits HAS_PROPERTY edges for nested type chain', () => {\n    const propEdges = getRelationships(result, 'HAS_PROPERTY');\n    expect(edgeSet(propEdges)).toContain('User → Address');\n    expect(edgeSet(propEdges)).toContain('Address → City');\n    expect(edgeSet(propEdges)).toContain('City → ZipCode');\n  });\n\n  it('resolves 2-level chain: user.Address.Save() → Address#Save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(e => e.target === 'Save' && e.source === 'ProcessUser');\n    const addressSave = saveCalls.find(e => 
e.targetFilePath.includes('Models'));\n    expect(addressSave).toBeDefined();\n  });\n\n  it('resolves 3-level chain: user.Address.City.GetName() → City#GetName', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const getNameCalls = calls.filter(e => e.target === 'GetName' && e.source === 'ProcessUser');\n    const cityGetName = getNameCalls.find(e => e.targetFilePath.includes('Models'));\n    expect(cityGetName).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// ACCESSES write edges from assignment expressions\n// ---------------------------------------------------------------------------\n\ndescribe('Write access tracking (C#)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'csharp-write-access'),\n      () => {},\n    );\n  }, 60000);\n\n  it('emits ACCESSES write edges for field assignments', () => {\n    const accesses = getRelationships(result, 'ACCESSES');\n    const writes = accesses.filter(e => e.rel.reason === 'write');\n    expect(writes.length).toBe(2);\n    const fieldNames = writes.map(e => e.target);\n    expect(fieldNames).toContain('Name');\n    expect(fieldNames).toContain('Address');\n    const sources = writes.map(e => e.source);\n    expect(sources).toContain('UpdateUser');\n  });\n\n  it('write ACCESSES edges have confidence 1.0', () => {\n    const accesses = getRelationships(result, 'ACCESSES');\n    const writes = accesses.filter(e => e.rel.reason === 'write');\n    for (const edge of writes) {\n      expect(edge.rel.confidence).toBe(1.0);\n    }\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Call-result variable binding (Phase 9): var user = GetUser(); user.Save()\n// ---------------------------------------------------------------------------\n\ndescribe('C# call-result variable binding (Tier 2b)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await 
runPipelineFromRepo(\n      path.join(FIXTURES, 'csharp-call-result-binding'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves user.Save() to User#Save via call-result binding', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'Save' && c.source === 'ProcessUser' && c.targetFilePath.includes('App')\n    );\n    expect(saveCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Method chain binding (Phase 9C): GetUser() → .Address → .GetCity() → .Save()\n// ---------------------------------------------------------------------------\n\ndescribe('C# method chain binding via unified fixpoint (Phase 9C)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'csharp-method-chain-binding'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves city.Save() to City#Save via method chain', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'Save' && c.source === 'ProcessChain' && c.targetFilePath.includes('App')\n    );\n    expect(saveCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Phase B: Deep MRO — walkParentChain() at depth 2 (C→B→A)\n// Greet() is defined on A, accessed via C. 
Tests BFS depth-2 parent traversal.\n// ---------------------------------------------------------------------------\n\ndescribe('C# grandparent method resolution via MRO (Phase B)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'csharp-grandparent-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects A, B, C, Greeting classes', () => {\n    const classes = getNodesByLabel(result, 'Class');\n    expect(classes).toContain('A');\n    expect(classes).toContain('B');\n    expect(classes).toContain('C');\n    expect(classes).toContain('Greeting');\n  });\n\n  it('emits EXTENDS edges: B→A, C→B', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    expect(edgeSet(extends_)).toContain('B → A');\n    expect(edgeSet(extends_)).toContain('C → B');\n  });\n\n  it('resolves c.Greet().Save() to Greeting#Save via depth-2 MRO lookup', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'Save' && c.targetFilePath.includes('Greeting'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n\n  it('resolves c.Greet() to A#Greet (method found via MRO walk)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const greetCall = calls.find(c =>\n      c.target === 'Greet' && c.targetFilePath.includes('A.cs'),\n    );\n    expect(greetCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Phase C: C# null-check narrowing — if (x != null) and if (x is not null)\n// Both patterns emit patternOverrides for the if-body position range\n// ---------------------------------------------------------------------------\n\ndescribe('C# null-check narrowing resolution (Phase C)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 
'csharp-null-check-narrowing'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n  });\n\n  it('resolves x.Save() inside != null guard (ProcessInequality) to User#Save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'Save' && c.source === 'ProcessInequality' && c.targetFilePath.includes('User'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n\n  it('resolves x.Save() inside is not null guard (ProcessIsNotNull) to User#Save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'Save' && c.source === 'ProcessIsNotNull' && c.targetFilePath.includes('User'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n\n  it('does NOT cross-resolve to Repo#Save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongCall = calls.find(c =>\n      c.target === 'Save' && c.targetFilePath.includes('Repo'),\n    );\n    expect(wrongCall).toBeUndefined();\n  });\n\n  it('resolves x.Save() inside constructor via null-check narrowing', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'Save' && c.source === 'App' && c.targetFilePath.includes('User'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n\n  it('resolves x.Save() inside lambda via null-check narrowing', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'Save' && c.source === 'ProcessInLambda' && c.targetFilePath.includes('User'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Phase P: Overload disambiguation via parameter types\n// ---------------------------------------------------------------------------\n\ndescribe('C# overload disambiguation by parameter types', () => {\n  let result: 
PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'csharp-overload-param-types'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects Lookup method with parameterTypes on graph node', () => {\n    const methods = getNodesByLabelFull(result, 'Method');\n    const lookupNodes = methods.filter(m => m.name === 'Lookup');\n    expect(lookupNodes.length).toBe(1);\n    expect(lookupNodes[0].properties.parameterTypes).toEqual(['int']);\n  });\n\n  it('emits CALLS edge from Run() → Lookup() via overload disambiguation', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const lookupCalls = calls.filter(c => c.source === 'Run' && c.target === 'Lookup');\n    // Both Lookup(42) and Lookup(\"alice\") resolve to same nodeId → 1 CALLS edge\n    expect(lookupCalls.length).toBe(1);\n  });\n});\n\n// ---------------------------------------------------------------------------\n// C# optional parameter arity resolution\n// ---------------------------------------------------------------------------\n\ndescribe('C# optional parameter arity resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'csharp-optional-params'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves g.Greet(\"Alice\") with 1 arg to Greet with 2 params (1 optional)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const greetCalls = calls.filter(c => c.source === 'Main' && c.target === 'Greet');\n    expect(greetCalls.length).toBe(1);\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/integration/resolvers/go.test.ts",
    "content": "/**\n * Go: package imports + cross-package calls + ambiguous struct disambiguation\n */\nimport { describe, it, expect, beforeAll } from 'vitest';\nimport path from 'path';\nimport {\n  FIXTURES, getRelationships, getNodesByLabel, edgeSet,\n  runPipelineFromRepo, type PipelineResult,\n} from './helpers.js';\n\n// ---------------------------------------------------------------------------\n// Heritage: package imports + cross-package calls (exercises PackageMap)\n// ---------------------------------------------------------------------------\n\ndescribe('Go package import & call resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'go-pkg'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects exactly 2 structs and 1 interface', () => {\n    expect(getNodesByLabel(result, 'Struct')).toEqual(['Admin', 'User']);\n    expect(getNodesByLabel(result, 'Interface')).toEqual(['Repository']);\n  });\n\n  it('detects exactly 5 functions', () => {\n    expect(getNodesByLabel(result, 'Function')).toEqual([\n      'Authenticate', 'NewAdmin', 'NewUser', 'ValidateToken', 'main',\n    ]);\n  });\n\n  it('emits exactly 7 CALLS edges (5 function + 2 struct literal)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    expect(calls.length).toBe(7);\n    expect(edgeSet(calls)).toEqual([\n      'Authenticate → NewUser',\n      'NewAdmin → Admin',\n      'NewAdmin → NewUser',\n      'NewUser → User',\n      'main → Authenticate',\n      'main → NewAdmin',\n      'main → NewUser',\n    ]);\n  });\n\n  it('resolves exactly 7 IMPORTS edges across Go packages', () => {\n    const imports = getRelationships(result, 'IMPORTS');\n    expect(imports.length).toBe(7);\n    expect(edgeSet(imports)).toEqual([\n      'main.go → admin.go',\n      'main.go → repository.go',\n      'main.go → service.go',\n      'main.go → user.go',\n      'service.go → admin.go',\n      
'service.go → repository.go',\n      'service.go → user.go',\n    ]);\n  });\n\n  it('emits exactly 1 EXTENDS edge for struct embedding: Admin → User', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    expect(extends_.length).toBe(1);\n    expect(extends_[0].source).toBe('Admin');\n    expect(extends_[0].target).toBe('User');\n  });\n\n  it('does not emit IMPLEMENTS edges (Go uses structural typing)', () => {\n    expect(getRelationships(result, 'IMPLEMENTS').length).toBe(0);\n  });\n\n  it('no OVERRIDES edges target Property nodes', () => {\n    const overrides = getRelationships(result, 'OVERRIDES');\n    for (const edge of overrides) {\n      const target = result.graph.getNode(edge.rel.targetId);\n      expect(target).toBeDefined();\n      expect(target!.label).not.toBe('Property');\n    }\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Ambiguous: Handler struct in two packages, package import disambiguates\n// ---------------------------------------------------------------------------\n\ndescribe('Go ambiguous symbol resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'go-ambiguous'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects 2 Handler structs in separate packages', () => {\n    const structs: string[] = [];\n    result.graph.forEachNode(n => {\n      if (n.label === 'Struct') structs.push(`${n.properties.name}@${n.properties.filePath}`);\n    });\n    const handlers = structs.filter(s => s.startsWith('Handler@'));\n    expect(handlers.length).toBe(2);\n    expect(handlers.some(h => h.includes('internal/models/'))).toBe(true);\n    expect(handlers.some(h => h.includes('internal/other/'))).toBe(true);\n  });\n\n  it('import resolves to internal/models/handler.go (not internal/other/)', () => {\n    const imports = getRelationships(result, 'IMPORTS');\n    const modelsImport = 
imports.find(e => e.targetFilePath.includes('models'));\n    expect(modelsImport).toBeDefined();\n    expect(modelsImport!.targetFilePath).toBe('internal/models/handler.go');\n  });\n\n  it('no import edge to internal/other/', () => {\n    const imports = getRelationships(result, 'IMPORTS');\n    for (const imp of imports) {\n      expect(imp.targetFilePath).not.toMatch(/internal\\/other\\//);\n    }\n  });\n});\n\ndescribe('Go call resolution with arity filtering', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'go-calls'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves main → WriteAudit to internal/onearg/log.go via arity narrowing', () => {\n    const calls = getRelationships(result, 'CALLS');\n    expect(calls.length).toBe(1);\n    expect(calls[0].source).toBe('main');\n    expect(calls[0].target).toBe('WriteAudit');\n    expect(calls[0].targetFilePath).toBe('internal/onearg/log.go');\n    expect(calls[0].rel.reason).toBe('import-resolved');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Member-call resolution: obj.Method() resolves through pipeline\n// ---------------------------------------------------------------------------\n\ndescribe('Go member-call resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'go-member-calls'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves processUser → Save as a member call on User', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'Save');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.source).toBe('processUser');\n    expect(saveCall!.targetFilePath).toBe('models/user.go');\n  });\n\n  it('detects User struct and Save method', () => {\n    const structs: string[] = [];\n    
result.graph.forEachNode(n => {\n      if (n.label === 'Struct') structs.push(n.properties.name);\n    });\n    expect(structs).toContain('User');\n    expect(getNodesByLabel(result, 'Method')).toContain('Save');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Struct literal resolution: User{...} resolves to Struct node\n// ---------------------------------------------------------------------------\n\ndescribe('Go struct literal resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'go-struct-literals'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves User{...} as a CALLS edge to the User struct', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const ctorCall = calls.find(c => c.target === 'User');\n    expect(ctorCall).toBeDefined();\n    expect(ctorCall!.source).toBe('processUser');\n    expect(ctorCall!.targetLabel).toBe('Struct');\n    expect(ctorCall!.targetFilePath).toBe('user.go');\n  });\n\n  it('also resolves user.Save() as a member call', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'Save');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.source).toBe('processUser');\n  });\n\n  it('detects User struct, Save method, and processUser function', () => {\n    const structs: string[] = [];\n    result.graph.forEachNode(n => {\n      if (n.label === 'Struct') structs.push(n.properties.name);\n    });\n    expect(structs).toContain('User');\n    expect(getNodesByLabel(result, 'Method')).toContain('Save');\n    expect(getNodesByLabel(result, 'Function')).toContain('processUser');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Multi-assignment: user, repo := User{}, Repo{} — both sides captured in TypeEnv\n// ---------------------------------------------------------------------------\n\ndescribe('Go multi-assignment short var declaration', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'go-multi-assign'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo structs with their methods', () => {\n    expect(getNodesByLabel(result, 'Struct')).toEqual(['Repo', 'User']);\n    expect(getNodesByLabel(result, 'Method')).toEqual(['Persist', 'Save']);\n  });\n\n  it('resolves both struct literals in multi-assignment: User{} and Repo{}', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const structCalls = calls.filter(c => c.targetLabel === 'Struct');\n    expect(edgeSet(structCalls)).toEqual([\n      'process → Repo',\n      'process → User',\n    ]);\n  });\n\n  it('resolves user.Save() to User.Save and repo.Persist() to Repo.Persist via receiver typing', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'Save');\n    const persistCall = calls.find(c => c.target === 'Persist');\n\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.source).toBe('process');\n    expect(saveCall!.targetFilePath).toBe('models.go');\n\n    expect(persistCall).toBeDefined();\n    expect(persistCall!.source).toBe('process');\n    expect(persistCall!.targetFilePath).toBe('models.go');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Receiver-constrained resolution: typed variables disambiguate same-named methods\n// (fixture shape, illustrative: e.g. var user models.User; user.Save() vs var repo models.Repo; repo.Save())\n// ---------------------------------------------------------------------------\n\ndescribe('Go receiver-constrained resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'go-receiver-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo 
structs, both with Save methods', () => {\n    const structs: string[] = [];\n    result.graph.forEachNode(n => {\n      if (n.label === 'Struct') structs.push(n.properties.name);\n    });\n    expect(structs).toContain('User');\n    expect(structs).toContain('Repo');\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'Save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves user.Save() to User.Save and repo.Save() to Repo.Save via receiver typing', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'Save');\n    expect(saveCalls.length).toBe(2);\n\n    const userSave = saveCalls.find(c => c.targetFilePath === 'models/user.go');\n    const repoSave = saveCalls.find(c => c.targetFilePath === 'models/repo.go');\n\n    expect(userSave).toBeDefined();\n    expect(repoSave).toBeDefined();\n    expect(userSave!.source).toBe('processEntities');\n    expect(repoSave!.source).toBe('processEntities');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Variadic resolution: ...interface{} doesn't get filtered by arity\n// ---------------------------------------------------------------------------\n\ndescribe('Go variadic call resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'go-variadic-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves 3-arg call to variadic func Entry(...interface{}) in logger.go', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const logCall = calls.find(c => c.target === 'Entry');\n    expect(logCall).toBeDefined();\n    expect(logCall!.source).toBe('main');\n    expect(logCall!.targetFilePath).toBe('internal/logger/logger.go');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Local shadow: unqualified call resolves to 
local function, not imported package\n// ---------------------------------------------------------------------------\n\ndescribe('Go local definition shadows import', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'go-local-shadow'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves Save(\"test\") to local Save in main.go, not utils.go', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'Save' && c.source === 'main');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.targetFilePath).toBe('cmd/main.go');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Constructor-inferred type resolution: user := models.User{}; user.Save()\n// Go composite literal constructor pattern (no explicit type annotations)\n// ---------------------------------------------------------------------------\n\ndescribe('Go constructor-inferred type resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'go-constructor-type-inference'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo structs, both with Save methods', () => {\n    expect(getNodesByLabel(result, 'Struct')).toContain('User');\n    expect(getNodesByLabel(result, 'Struct')).toContain('Repo');\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'Save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves user.Save() to models/user.go via constructor-inferred type', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c => c.target === 'Save' && c.targetFilePath === 'models/user.go');\n    expect(userSave).toBeDefined();\n    expect(userSave!.source).toBe('processEntities');\n  });\n\n  it('resolves repo.Save() to models/repo.go 
via constructor-inferred type', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c => c.target === 'Save' && c.targetFilePath === 'models/repo.go');\n    expect(repoSave).toBeDefined();\n    expect(repoSave!.source).toBe('processEntities');\n  });\n\n  it('emits exactly 2 Save() CALLS edges (one per receiver type)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'Save');\n    expect(saveCalls.length).toBe(2);\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Pointer-constructor-inferred type resolution: user := &models.User{...}; user.Save()\n// Go address-of composite literal constructor pattern (no explicit type annotations)\n// ---------------------------------------------------------------------------\n\ndescribe('Go pointer-constructor-inferred type resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'go-pointer-constructor-inference'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo structs, both with Save methods', () => {\n    expect(getNodesByLabel(result, 'Struct')).toContain('User');\n    expect(getNodesByLabel(result, 'Struct')).toContain('Repo');\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'Save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves user.Save() to models/user.go via &User{} pointer-constructor-inferred type', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c => c.target === 'Save' && c.targetFilePath === 'models/user.go');\n    expect(userSave).toBeDefined();\n    expect(userSave!.source).toBe('process');\n  });\n\n  it('resolves repo.Save() to models/repo.go via &Repo{} pointer-constructor-inferred type', () => {\n    const calls = getRelationships(result, 
'CALLS');\n    const repoSave = calls.find(c => c.target === 'Save' && c.targetFilePath === 'models/repo.go');\n    expect(repoSave).toBeDefined();\n    expect(repoSave!.source).toBe('process');\n  });\n\n  it('emits exactly 2 Save() CALLS edges (one per receiver type)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'Save');\n    expect(saveCalls.length).toBe(2);\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Parent resolution: struct embedding emits EXTENDS\n// ---------------------------------------------------------------------------\n\ndescribe('Go parent resolution (struct embedding)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'go-parent-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects BaseModel and User structs', () => {\n    expect(getNodesByLabel(result, 'Struct')).toEqual(['BaseModel', 'User']);\n  });\n\n  it('emits EXTENDS edge: User → BaseModel (struct embedding)', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    expect(extends_.length).toBe(1);\n    expect(extends_[0].source).toBe('User');\n    expect(extends_[0].target).toBe('BaseModel');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Go new() builtin type inference: user := new(User); user.Save()\n// ---------------------------------------------------------------------------\n\ndescribe('Go new() builtin type inference', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'go-new-builtin'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves user.Save() via new(User) inference', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'Save' && 
c.targetFilePath === 'models.go');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.source).toBe('main');\n  });\n\n  it('resolves user.Greet() via new(User) inference', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const greetCall = calls.find(c => c.target === 'Greet' && c.targetFilePath === 'models.go');\n    expect(greetCall).toBeDefined();\n    expect(greetCall!.source).toBe('main');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Go make() builtin type inference: sl := make([]User, 0); sl[0].Save()\n// ---------------------------------------------------------------------------\n\ndescribe('Go make() builtin type inference', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'go-make-builtin'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves sl[0].Save() via make([]User, 0) slice inference', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'Save' && c.targetFilePath === 'models.go');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.source).toBe('main');\n  });\n\n  it('resolves m[\"key\"].Greet() via make(map[string]User) map inference', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const greetCall = calls.find(c => c.target === 'Greet' && c.targetFilePath === 'models.go');\n    expect(greetCall).toBeDefined();\n    expect(greetCall!.source).toBe('main');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Go type assertion inference: user := s.(User); user.Save()\n// ---------------------------------------------------------------------------\n\ndescribe('Go type assertion type inference', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'go-type-assertion'),\n     
 () => {},\n    );\n  }, 60000);\n\n  it('resolves user.Save() via type assertion s.(User)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'Save' && c.targetFilePath === 'models.go');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.source).toBe('process');\n  });\n\n  it('resolves user.Greet() via type assertion s.(User)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const greetCall = calls.find(c => c.target === 'Greet' && c.targetFilePath === 'models.go');\n    expect(greetCall).toBeDefined();\n    expect(greetCall!.source).toBe('process');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Return type inference: user := GetUser(\"alice\"); user.Save()\n// Go now has a CONSTRUCTOR_BINDING_SCANNER for short_var_declaration, so\n// return type inference works end-to-end for `user := GetUser()`.\n// ---------------------------------------------------------------------------\n\ndescribe('Go return type inference via explicit function return type', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'go-return-type-inference'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects GetUser, GetRepo, and competing Save methods', () => {\n    const allSymbols = [...getNodesByLabel(result, 'Function'), ...getNodesByLabel(result, 'Method')];\n    expect(allSymbols).toContain('GetUser');\n    expect(allSymbols).toContain('GetRepo');\n    const saveMethods = allSymbols.filter(s => s === 'Save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves user.Save() to models/user.go via return type of GetUser()', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'Save' && c.source === 'processUser' && c.targetFilePath.includes('user.go')\n    );\n    
expect(saveCall).toBeDefined();\n  });\n\n  it('user.Save() does NOT resolve to models/repo.go (negative disambiguation)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'Save' && c.source === 'processUser' && c.targetFilePath.includes('repo.go')\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n\n  it('resolves repo.Save() to models/repo.go via return type of GetRepo()', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'Save' && c.source === 'processRepo' && c.targetFilePath.includes('repo.go')\n    );\n    expect(saveCall).toBeDefined();\n  });\n\n  it('repo.Save() does NOT resolve to models/user.go (negative disambiguation)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'Save' && c.source === 'processRepo' && c.targetFilePath.includes('user.go')\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n\n  it('resolves user.Save() via cross-package factory call models.NewUser()', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'Save' && c.source === 'processUserCrossPackage' && c.targetFilePath.includes('user.go')\n    );\n    expect(saveCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Go multi-return factory inference: user, err := NewUser(\"alice\"); user.Save()\n// ---------------------------------------------------------------------------\n\ndescribe('Go multi-return factory type inference', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'go-multi-return-inference'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo structs with competing Save methods', () => {\n    const saveMethods = 
getNodesByLabel(result, 'Method').filter(m => m === 'Save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves user.Save() to models/user.go via multi-return inference (user, err := NewUser())', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'Save' && c.source === 'processUser' && c.targetFilePath.includes('user.go')\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('user.Save() does NOT resolve to models/repo.go', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'Save' && c.source === 'processUser' && c.targetFilePath.includes('repo.go')\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n\n  it('resolves repo.Save() to models/repo.go via blank discard (repo, _ := NewRepo())', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'Save' && c.source === 'processRepo' && c.targetFilePath.includes('repo.go')\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('repo.Save() does NOT resolve to models/user.go', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'Save' && c.source === 'processRepo' && c.targetFilePath.includes('user.go')\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Nullable receiver: var user *models.User = findUser(); user.Save()\n// Go pointer types (*User) — extractSimpleTypeName strips pointer prefix.\n// ---------------------------------------------------------------------------\n\ndescribe('Go nullable receiver resolution (pointer types)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'go-nullable-receiver'),\n      () => {},\n    );\n  }, 
60000);\n\n  it('detects User and Repo structs, both with Save methods', () => {\n    expect(getNodesByLabel(result, 'Struct')).toContain('User');\n    expect(getNodesByLabel(result, 'Struct')).toContain('Repo');\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'Save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves user.Save() to User.Save via pointer receiver typing', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c => c.target === 'Save' && c.targetFilePath === 'models/user.go');\n    expect(userSave).toBeDefined();\n    expect(userSave!.source).toBe('processEntities');\n  });\n\n  it('resolves repo.Save() to Repo.Save via pointer receiver typing', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c => c.target === 'Save' && c.targetFilePath === 'models/repo.go');\n    expect(repoSave).toBeDefined();\n    expect(repoSave!.source).toBe('processEntities');\n  });\n\n  it('user.Save() does NOT resolve to Repo.Save (negative disambiguation)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'Save' && c.source === 'processEntities');\n    expect(saveCalls.filter(c => c.targetFilePath === 'models/user.go').length).toBe(1);\n    expect(saveCalls.filter(c => c.targetFilePath === 'models/repo.go').length).toBe(1);\n  });\n\n  it('emits exactly 2 Save() CALLS edges (one per receiver type)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'Save');\n    expect(saveCalls.length).toBe(2);\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Assignment chain propagation (Phase 4.3)\n// ---------------------------------------------------------------------------\n\ndescribe('Go assignment chain propagation', () => {\n  let result: PipelineResult;\n\n  
beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'go-assignment-chain'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo structs each with a Save method', () => {\n    expect(getNodesByLabel(result, 'Struct')).toContain('User');\n    expect(getNodesByLabel(result, 'Struct')).toContain('Repo');\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'Save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves alias.Save() to User#Save via assignment chain', () => {\n    const calls = getRelationships(result, 'CALLS');\n    // Positive: alias.Save() must resolve to User#Save\n    const userSave = calls.find(c =>\n      c.target === 'Save' && c.source === 'processEntities' && c.targetFilePath.includes('user.go'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('alias.Save() does NOT resolve to Repo#Save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    // Negative: alias comes from User, so exactly one Save edge lands on user.go (no duplicate or cross-wired edge)\n    const userSaves = calls.filter(c =>\n      c.target === 'Save' && c.source === 'processEntities' && c.targetFilePath.includes('user.go'),\n    );\n    expect(userSaves.length).toBe(1);\n  });\n\n  it('resolves rAlias.Save() to Repo#Save via assignment chain', () => {\n    const calls = getRelationships(result, 'CALLS');\n    // Positive: rAlias.Save() must resolve to Repo#Save\n    const repoSave = calls.find(c =>\n      c.target === 'Save' && c.source === 'processEntities' && c.targetFilePath.includes('repo.go'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('each alias resolves to its own struct, not the other', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'Save' && c.source === 'processEntities' && c.targetFilePath.includes('user.go'),\n    );\n    const repoSave = calls.find(c =>\n      c.target === 'Save' && c.source === 
'processEntities' && c.targetFilePath.includes('repo.go'),\n    );\n    expect(userSave).toBeDefined();\n    expect(repoSave).toBeDefined();\n    expect(userSave!.targetFilePath).not.toBe(repoSave!.targetFilePath);\n  });\n\n  // --- var form assignment chain ---\n\n  it('resolves var alias.Save() to User via var assignment chain', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'Save' && c.source === 'processWithVar' && c.targetFilePath.includes('user.go'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('resolves var rAlias.Save() to Repo via var assignment chain', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'Save' && c.source === 'processWithVar' && c.targetFilePath.includes('repo.go'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('var alias.Save() does NOT resolve to Repo (negative)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSaves = calls.filter(c =>\n      c.target === 'Save' && c.source === 'processWithVar' && c.targetFilePath.includes('user.go'),\n    );\n    expect(userSaves.length).toBe(1);\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Chained method calls: svc.GetUser().Save()\n// Tests that Go chain call resolution correctly infers the intermediate\n// receiver type from GetUser()'s return type and resolves Save() to User.\n// ---------------------------------------------------------------------------\n\ndescribe('Go chained method call resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'go-chain-call'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User, Repo structs and UserService', () => {\n    expect(getNodesByLabel(result, 'Struct')).toContain('User');\n    
expect(getNodesByLabel(result, 'Struct')).toContain('Repo');\n    expect(getNodesByLabel(result, 'Struct')).toContain('UserService');\n  });\n\n  it('detects GetUser and Save symbols', () => {\n    const allSymbols = [...getNodesByLabel(result, 'Function'), ...getNodesByLabel(result, 'Method')];\n    expect(allSymbols).toContain('GetUser');\n    expect(allSymbols).toContain('Save');\n  });\n\n  it('resolves svc.GetUser().Save() to User#Save via chain resolution', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'Save' &&\n      c.source === 'processUser' &&\n      c.targetFilePath?.includes('user.go'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('does NOT resolve svc.GetUser().Save() to Repo#Save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'Save' &&\n      c.source === 'processUser' &&\n      c.targetFilePath?.includes('repo.go'),\n    );\n    expect(repoSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Go map range: for _, user := range userMap where map[string]User\n// ---------------------------------------------------------------------------\n\ndescribe('Go map range type resolution (Tier 1c)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'go-map-range'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo structs with Save methods in separate files', () => {\n    const structs = getNodesByLabel(result, 'Struct');\n    expect(structs).toContain('User');\n    expect(structs).toContain('Repo');\n    const methods = getNodesByLabel(result, 'Method');\n    expect(methods.filter(m => m === 'Save').length).toBe(2);\n  });\n\n  it('resolves user.Save() in map range to User#Save via map_type value', () => {\n    const calls = 
getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'Save' && c.source === 'processMap' && c.targetFilePath?.includes('user.go'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('does NOT resolve user.Save() to Repo#Save (negative disambiguation)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'Save' && c.source === 'processMap' && c.targetFilePath?.includes('repo.go'),\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Go for-loop with call_expression iterable: for _, user := range GetUsers()\n// Phase 7.3: call_expression iterable resolution via ReturnTypeLookup\n// ---------------------------------------------------------------------------\n\ndescribe('Go for-loop call_expression iterable resolution (Phase 7.3)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'go-for-call-expr'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo structs with competing Save methods', () => {\n    const structs = getNodesByLabel(result, 'Struct');\n    expect(structs).toContain('User');\n    expect(structs).toContain('Repo');\n    const methods = getNodesByLabel(result, 'Method');\n    expect(methods.filter(m => m === 'Save').length).toBe(2);\n  });\n\n  it('resolves user.Save() in range GetUsers() to User#Save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'Save' && c.source === 'processUsers' && c.targetFilePath?.includes('user.go'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('resolves repo.Save() in range GetRepos() to Repo#Save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'Save' && 
c.source === 'processRepos' && c.targetFilePath?.includes('repo.go'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('does NOT resolve user.Save() to Repo#Save (negative disambiguation)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'Save' && c.source === 'processUsers' && c.targetFilePath?.includes('repo.go'),\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n\n  it('does NOT resolve repo.Save() to User#Save (negative disambiguation)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'Save' && c.source === 'processRepos' && c.targetFilePath?.includes('user.go'),\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Phase 8: Field/property type resolution (1-level)\n// ---------------------------------------------------------------------------\n\ndescribe('Field type resolution (Go)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'go-field-types'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects structs: Address, User', () => {\n    expect(getNodesByLabel(result, 'Struct')).toEqual(['Address', 'User']);\n  });\n\n  it('detects Property nodes for Go struct fields', () => {\n    const properties = getNodesByLabel(result, 'Property');\n    expect(properties).toContain('Address');\n    expect(properties).toContain('Name');\n    expect(properties).toContain('City');\n  });\n\n  it('emits HAS_PROPERTY edges linking struct fields to structs', () => {\n    const propEdges = getRelationships(result, 'HAS_PROPERTY');\n    expect(propEdges.length).toBe(3);\n    expect(edgeSet(propEdges)).toContain('User → Name');\n    expect(edgeSet(propEdges)).toContain('User → Address');\n    expect(edgeSet(propEdges)).toContain('Address → 
City');\n  });\n\n  it('resolves user.Address.Save() → Address#Save via field type', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(e => e.target === 'Save');\n    const addressSave = saveCalls.find(\n      e => e.source === 'processUser' && e.targetFilePath.includes('models'),\n    );\n    expect(addressSave).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Phase 8A: Deep field chain resolution (3-level)\n// ---------------------------------------------------------------------------\n\ndescribe('Deep field chain resolution (Go)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'go-deep-field-chain'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects structs: Address, City, User', () => {\n    expect(getNodesByLabel(result, 'Struct')).toEqual(['Address', 'City', 'User']);\n  });\n\n  it('detects Property nodes for Go struct fields', () => {\n    const properties = getNodesByLabel(result, 'Property');\n    expect(properties).toContain('Address');\n    expect(properties).toContain('City');\n    expect(properties).toContain('ZipCode');\n  });\n\n  it('emits HAS_PROPERTY edges for nested type chain', () => {\n    const propEdges = getRelationships(result, 'HAS_PROPERTY');\n    expect(propEdges.length).toBe(5);\n    expect(edgeSet(propEdges)).toContain('User → Name');\n    expect(edgeSet(propEdges)).toContain('User → Address');\n    expect(edgeSet(propEdges)).toContain('Address → City');\n    expect(edgeSet(propEdges)).toContain('Address → Street');\n    expect(edgeSet(propEdges)).toContain('City → ZipCode');\n  });\n\n  it('resolves 2-level chain: user.Address.Save() → Address#Save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(e => e.target === 'Save' && e.source === 'processUser');\n    const addressSave = 
saveCalls.find(e => e.targetFilePath.includes('models'));\n    expect(addressSave).toBeDefined();\n  });\n\n  it('resolves 3-level chain: user.Address.City.GetName() → City#GetName', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const getNameCalls = calls.filter(e => e.target === 'GetName' && e.source === 'processUser');\n    const cityGetName = getNameCalls.find(e => e.targetFilePath.includes('models'));\n    expect(cityGetName).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Mixed field+call chain resolution (Go)\n// ---------------------------------------------------------------------------\n\ndescribe('Mixed field+call chain resolution (Go)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'go-mixed-chain'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects structs: Address, City, User, UserService', () => {\n    expect(getNodesByLabel(result, 'Struct')).toEqual(['Address', 'City', 'User', 'UserService']);\n  });\n\n  it('detects Property nodes for mixed-chain fields', () => {\n    const properties = getNodesByLabel(result, 'Property');\n    expect(properties).toContain('City');\n    expect(properties).toContain('Address');\n  });\n\n  it('resolves call→field chain: svc.GetUser().Address.Save() → Address#Save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(e => e.target === 'Save' && e.source === 'processWithService');\n    expect(saveCalls.length).toBe(1);\n    expect(saveCalls[0].targetFilePath).toContain('models');\n  });\n\n  it('resolves field→call chain: user.GetAddress().City.GetName() → City#GetName', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const getNameCalls = calls.filter(e => e.target === 'GetName' && e.source === 'processWithUser');\n    expect(getNameCalls.length).toBe(1);\n    
expect(getNameCalls[0].targetFilePath).toContain('models');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// ACCESSES write edges from assignment statements\n// ---------------------------------------------------------------------------\n\ndescribe('Write access tracking (Go)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'go-write-access'),\n      () => {},\n    );\n  }, 60000);\n\n  it('emits ACCESSES write edges for field assignments', () => {\n    const accesses = getRelationships(result, 'ACCESSES');\n    const writes = accesses.filter(e => e.rel.reason === 'write');\n    expect(writes.length).toBe(2);\n    const nameWrite = writes.find(e => e.target === 'Name');\n    const addressWrite = writes.find(e => e.target === 'Address');\n    expect(nameWrite).toBeDefined();\n    expect(nameWrite!.source).toBe('updateUser');\n    expect(addressWrite).toBeDefined();\n    expect(addressWrite!.source).toBe('updateUser');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Call-result variable binding (Phase 9): user := GetUser(); user.Save()\n// ---------------------------------------------------------------------------\n\ndescribe('Go call-result variable binding (Tier 2b)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'go-call-result-binding'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves user.Save() to User#Save via call-result binding', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'Save' && c.source === 'processUser' && c.targetFilePath.includes('models')\n    );\n    expect(saveCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Method chain 
binding (Phase 9C): GetUser() → .Address → .GetCity() → .Save()\n// ---------------------------------------------------------------------------\n\ndescribe('Go method chain binding via unified fixpoint (Phase 9C)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'go-method-chain-binding'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves city.Save() to City#Save via 3-step chain', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'Save' && c.source === 'processChain' && c.targetFilePath.includes('models')\n    );\n    expect(saveCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Phase B: Go inc_statement / dec_statement write access\n// obj.Field++ and obj.Field-- emit ACCESSES write edges\n// ---------------------------------------------------------------------------\n\ndescribe('Go inc/dec write access tracking (Phase B)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'go-inc-dec-write-access'),\n      () => {},\n    );\n  }, 60000);\n\n  it('emits ACCESSES write edge for Count++ in increment', () => {\n    const accesses = getRelationships(result, 'ACCESSES');\n    const writes = accesses.filter(e => e.rel.reason === 'write');\n    const countInc = writes.find(e => e.target === 'Count' && e.source === 'increment');\n    expect(countInc).toBeDefined();\n  });\n\n  it('emits ACCESSES write edge for Total++ in increment', () => {\n    const accesses = getRelationships(result, 'ACCESSES');\n    const writes = accesses.filter(e => e.rel.reason === 'write');\n    const totalInc = writes.find(e => e.target === 'Total' && e.source === 'increment');\n    expect(totalInc).toBeDefined();\n  });\n\n  it('emits ACCESSES write edge for Count-- in decrement', 
() => {\n    const accesses = getRelationships(result, 'ACCESSES');\n    const writes = accesses.filter(e => e.rel.reason === 'write');\n    const countDec = writes.find(e => e.target === 'Count' && e.source === 'decrement');\n    expect(countDec).toBeDefined();\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/integration/resolvers/helpers.ts",
    "content": "/**\n * Shared test helpers for language resolution integration tests.\n */\nimport path from 'path';\nimport { runPipelineFromRepo } from '../../../src/core/ingestion/pipeline.js';\nimport type { PipelineOptions } from '../../../src/core/ingestion/pipeline.js';\nimport type { PipelineResult } from '../../../src/types/pipeline.js';\nimport type { GraphRelationship } from '../../../src/core/graph/types.js';\n\nexport const FIXTURES = path.resolve(__dirname, '..', '..', 'fixtures', 'lang-resolution');\n\nexport type RelEdge = {\n  source: string;\n  target: string;\n  sourceLabel: string;\n  targetLabel: string;\n  sourceFilePath: string;\n  targetFilePath: string;\n  rel: GraphRelationship;\n};\n\nexport function getRelationships(result: PipelineResult, type: string): RelEdge[] {\n  const edges: RelEdge[] = [];\n  for (const rel of result.graph.iterRelationships()) {\n    if (rel.type === type) {\n      const sourceNode = result.graph.getNode(rel.sourceId);\n      const targetNode = result.graph.getNode(rel.targetId);\n      edges.push({\n        source: sourceNode?.properties.name ?? rel.sourceId,\n        target: targetNode?.properties.name ?? rel.targetId,\n        sourceLabel: sourceNode?.label ?? 'unknown',\n        targetLabel: targetNode?.label ?? 'unknown',\n        sourceFilePath: sourceNode?.properties.filePath ?? '',\n        targetFilePath: targetNode?.properties.filePath ?? '',\n        rel,\n      });\n    }\n  }\n  return edges;\n}\n\nexport function getNodesByLabel(result: PipelineResult, label: string): string[] {\n  const names: string[] = [];\n  result.graph.forEachNode(n => {\n    if (n.label === label) names.push(n.properties.name);\n  });\n  return names.sort();\n}\n\nexport function edgeSet(edges: Array<{ source: string; target: string }>): string[] {\n  return edges.map(e => `${e.source} → ${e.target}`).sort();\n}\n\n/** Get graph nodes by label with full properties (for parameterTypes assertions). 
*/\nexport function getNodesByLabelFull(result: PipelineResult, label: string): Array<{ name: string; properties: Record<string, any> }> {\n  const nodes: Array<{ name: string; properties: Record<string, any> }> = [];\n  result.graph.forEachNode(n => {\n    if (n.label === label) nodes.push({ name: n.properties.name, properties: n.properties });\n  });\n  return nodes.sort((a, b) => a.name.localeCompare(b.name));\n}\n\n// Tests can pass { skipGraphPhases: true } as third arg for faster runs\n// (skips MRO, community detection, and process extraction).\nexport { runPipelineFromRepo };\nexport type { PipelineOptions, PipelineResult };\n"
  },
  {
    "path": "gitnexus/test/integration/resolvers/java.test.ts",
    "content": "/**\n * Java: class extends + implements multiple interfaces + ambiguous package disambiguation\n */\nimport { describe, it, expect, beforeAll } from 'vitest';\nimport path from 'path';\nimport {\n  FIXTURES, getRelationships, getNodesByLabel, getNodesByLabelFull, edgeSet,\n  runPipelineFromRepo, type PipelineResult,\n} from './helpers.js';\n\n// ---------------------------------------------------------------------------\n// Heritage: class extends + implements multiple interfaces\n// ---------------------------------------------------------------------------\n\ndescribe('Java heritage resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'java-heritage'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects exactly 3 classes and 2 interfaces', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['BaseModel', 'User', 'UserService']);\n    expect(getNodesByLabel(result, 'Interface')).toEqual(['Serializable', 'Validatable']);\n  });\n\n  it('emits exactly 1 EXTENDS edge: User → BaseModel', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    expect(extends_.length).toBe(1);\n    expect(extends_[0].source).toBe('User');\n    expect(extends_[0].target).toBe('BaseModel');\n  });\n\n  it('emits exactly 2 IMPLEMENTS edges: User → Serializable, User → Validatable', () => {\n    const implements_ = getRelationships(result, 'IMPLEMENTS');\n    expect(implements_.length).toBe(2);\n    expect(edgeSet(implements_)).toEqual([\n      'User → Serializable',\n      'User → Validatable',\n    ]);\n  });\n\n  it('resolves exactly 4 IMPORTS edges', () => {\n    const imports = getRelationships(result, 'IMPORTS');\n    expect(imports.length).toBe(4);\n    expect(edgeSet(imports)).toEqual([\n      'User.java → Serializable.java',\n      'User.java → Validatable.java',\n      'UserService.java → Serializable.java',\n      'UserService.java → 
User.java',\n    ]);\n  });\n\n  it('does not emit EXTENDS edges to interfaces', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    expect(extends_.some(e => e.target === 'Serializable')).toBe(false);\n    expect(extends_.some(e => e.target === 'Validatable')).toBe(false);\n  });\n\n  it('emits exactly 2 CALLS edges', () => {\n    const calls = getRelationships(result, 'CALLS');\n    expect(calls.length).toBe(2);\n    expect(edgeSet(calls)).toEqual([\n      'processUser → save',\n      'processUser → validate',\n    ]);\n  });\n\n  it('no OVERRIDES edges target Property nodes', () => {\n    const overrides = getRelationships(result, 'OVERRIDES');\n    for (const edge of overrides) {\n      const target = result.graph.getNode(edge.rel.targetId);\n      expect(target).toBeDefined();\n      expect(target!.label).not.toBe('Property');\n    }\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Ambiguous: Handler + Processor in two packages, imports disambiguate\n// ---------------------------------------------------------------------------\n\ndescribe('Java ambiguous symbol resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'java-ambiguous'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects 2 Handler classes and 2 Processor interfaces', () => {\n    const classes = getNodesByLabel(result, 'Class');\n    expect(classes.filter(n => n === 'Handler').length).toBe(2);\n    expect(classes).toContain('UserHandler');\n    const ifaces = getNodesByLabel(result, 'Interface');\n    expect(ifaces.filter(n => n === 'Processor').length).toBe(2);\n  });\n\n  it('resolves EXTENDS to models/Handler (not other/Handler)', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    expect(extends_.length).toBe(1);\n    expect(extends_[0].source).toBe('UserHandler');\n    
expect(extends_[0].target).toBe('Handler');\n    expect(extends_[0].targetFilePath).toBe('models/Handler.java');\n  });\n\n  it('resolves IMPLEMENTS to models/Processor (not other/Processor)', () => {\n    const implements_ = getRelationships(result, 'IMPLEMENTS');\n    expect(implements_.length).toBe(1);\n    expect(implements_[0].source).toBe('UserHandler');\n    expect(implements_[0].target).toBe('Processor');\n    expect(implements_[0].targetFilePath).toBe('models/Processor.java');\n  });\n\n  it('import edges point to models/ not other/', () => {\n    const imports = getRelationships(result, 'IMPORTS');\n    const targets = imports.map(e => e.target).sort();\n    expect(targets).toContain('Handler.java');\n    expect(targets).toContain('Processor.java');\n    for (const imp of imports) {\n      expect(imp.targetFilePath).toMatch(/^models\\//);\n    }\n  });\n\n  it('all heritage edges point to real graph nodes', () => {\n    for (const edge of [...getRelationships(result, 'EXTENDS'), ...getRelationships(result, 'IMPLEMENTS')]) {\n      const target = result.graph.getNode(edge.rel.targetId);\n      expect(target).toBeDefined();\n      expect(target!.properties.name).toBe(edge.target);\n    }\n  });\n});\n\ndescribe('Java call resolution with arity filtering', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'java-calls'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves processUser → writeAudit to util/OneArg.java via arity narrowing', () => {\n    const calls = getRelationships(result, 'CALLS');\n    expect(calls.length).toBe(1);\n    expect(calls[0].source).toBe('processUser');\n    expect(calls[0].target).toBe('writeAudit');\n    expect(calls[0].targetFilePath).toBe('util/OneArg.java');\n    expect(calls[0].rel.reason).toBe('import-resolved');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Member-call resolution: 
obj.method() resolves through pipeline\n// ---------------------------------------------------------------------------\n\ndescribe('Java member-call resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'java-member-calls'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves processUser → save as a member call on User', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.source).toBe('processUser');\n    expect(saveCall!.targetFilePath).toBe('models/User.java');\n  });\n\n  it('detects User class and save method', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Method')).toContain('save');\n  });\n\n  it('emits HAS_METHOD edge from User to save', () => {\n    const hasMethod = getRelationships(result, 'HAS_METHOD');\n    const edge = hasMethod.find(e => e.source === 'User' && e.target === 'save');\n    expect(edge).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Constructor resolution: new Foo() resolves to Constructor/Class\n// ---------------------------------------------------------------------------\n\ndescribe('Java constructor-call resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'java-constructor-calls'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves new User() as a CALLS edge to the User constructor', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const ctorCall = calls.find(c => c.target === 'User');\n    expect(ctorCall).toBeDefined();\n    expect(ctorCall!.source).toBe('processUser');\n    // Java has explicit constructor_declaration → Constructor node\n    
expect(ctorCall!.targetLabel).toBe('Constructor');\n    expect(ctorCall!.targetFilePath).toBe('models/User.java');\n  });\n\n  it('also resolves user.save() as a member call', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.source).toBe('processUser');\n  });\n\n  it('detects User class, User constructor, save method', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Constructor')).toContain('User');\n    expect(getNodesByLabel(result, 'Method')).toContain('save');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Receiver-constrained resolution: typed variables disambiguate same-named methods\n// ---------------------------------------------------------------------------\n\ndescribe('Java receiver-constrained resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'java-receiver-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes, both with save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves user.save() to User.save and repo.save() to Repo.save via receiver typing', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'save');\n    expect(saveCalls.length).toBe(2);\n\n    const userSave = saveCalls.find(c => c.targetFilePath === 'models/User.java');\n    const repoSave = saveCalls.find(c => c.targetFilePath === 'models/Repo.java');\n\n    expect(userSave).toBeDefined();\n    
expect(repoSave).toBeDefined();\n    expect(userSave!.source).toBe('processEntities');\n    expect(repoSave!.source).toBe('processEntities');\n  });\n\n  it('resolves constructor calls for both User and Repo', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userCtor = calls.find(c => c.target === 'User');\n    const repoCtor = calls.find(c => c.target === 'Repo');\n    expect(userCtor).toBeDefined();\n    expect(repoCtor).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Named import disambiguation: two User classes, import resolves to correct one\n// ---------------------------------------------------------------------------\n\ndescribe('Java named import disambiguation', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'java-named-imports'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects two User classes in different packages', () => {\n    const users = getNodesByLabel(result, 'Class').filter(n => n === 'User');\n    expect(users.length).toBe(2);\n  });\n\n  it('resolves user.save() to com/example/models/User.java via named import', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.source).toBe('run');\n    expect(saveCall!.targetFilePath).toBe('com/example/models/User.java');\n  });\n\n  it('resolves new User() to com/example/models/User.java, not other/', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const ctorCall = calls.find(c => c.target === 'User' && c.source === 'run');\n    expect(ctorCall).toBeDefined();\n    expect(ctorCall!.targetFilePath).toBe('com/example/models/User.java');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Variadic resolution: String... 
doesn't get filtered by arity\n// ---------------------------------------------------------------------------\n\ndescribe('Java variadic call resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'java-variadic-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves 3-arg call to varargs method record(String...) in Logger.java', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const logCall = calls.find(c => c.target === 'record');\n    expect(logCall).toBeDefined();\n    expect(logCall!.source).toBe('run');\n    expect(logCall!.targetFilePath).toBe('com/example/util/Logger.java');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Local shadow: same-file definition takes priority over imported name\n// ---------------------------------------------------------------------------\n\ndescribe('Java local definition shadows import', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'java-local-shadow'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves run → save to same-file definition, not the imported one', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save' && c.source === 'run');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.targetFilePath).toBe('src/main/java/com/example/app/Main.java');\n  });\n\n  it('does NOT resolve save to Logger.java', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveToUtils = calls.find(c => c.target === 'save' && c.targetFilePath === 'src/main/java/com/example/utils/Logger.java');\n    expect(saveToUtils).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Constructor-inferred type resolution: var user = 
new User(); user.save()\n// Java 10+ local variable type inference (no explicit type annotations)\n// ---------------------------------------------------------------------------\n\ndescribe('Java constructor-inferred type resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'java-constructor-type-inference'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes, both with save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves user.save() to models/User.java via constructor-inferred type', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c => c.target === 'save' && c.targetFilePath === 'models/User.java');\n    expect(userSave).toBeDefined();\n    expect(userSave!.source).toBe('processEntities');\n  });\n\n  it('resolves repo.save() to models/Repo.java via constructor-inferred type', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c => c.target === 'save' && c.targetFilePath === 'models/Repo.java');\n    expect(repoSave).toBeDefined();\n    expect(repoSave!.source).toBe('processEntities');\n  });\n\n  it('emits exactly 2 save() CALLS edges (one per receiver type)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'save');\n    expect(saveCalls.length).toBe(2);\n  });\n});\n\n// ---------------------------------------------------------------------------\n// For-each loop element typing: for (User user : users) user.save()\n// Java: explicit type in enhanced_for_statement binds loop variable\n// 
---------------------------------------------------------------------------\n\ndescribe('Java for-each loop element type resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'java-foreach'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes, both with save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves user.save() in for-each to User#save (not Repo#save)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c => c.target === 'save' && c.targetFilePath === 'models/User.java');\n    expect(userSave).toBeDefined();\n    expect(userSave!.source).toBe('processEntities');\n  });\n\n  it('resolves repo.save() in for-each to Repo#save (not User#save)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c => c.target === 'save' && c.targetFilePath === 'models/Repo.java');\n    expect(repoSave).toBeDefined();\n    expect(repoSave!.source).toBe('processEntities');\n  });\n\n  it('emits exactly 2 save() CALLS edges (one per receiver type)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'save');\n    expect(saveCalls.length).toBe(2);\n  });\n});\n\n// ---------------------------------------------------------------------------\n// this.save() resolves to enclosing class's own save method\n// ---------------------------------------------------------------------------\n\ndescribe('Java this resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 
'java-self-this-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes, each with a save method', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['Repo', 'User']);\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves this.save() inside User.process to User.save, not Repo.save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save' && c.source === 'process');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.targetFilePath).toBe('src/models/User.java');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Parent class resolution: EXTENDS + IMPLEMENTS edges\n// ---------------------------------------------------------------------------\n\ndescribe('Java parent resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'java-parent-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects BaseModel and User classes plus Serializable interface', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['BaseModel', 'User']);\n    expect(getNodesByLabel(result, 'Interface')).toEqual(['Serializable']);\n  });\n\n  it('emits EXTENDS edge: User → BaseModel', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    expect(extends_.length).toBe(1);\n    expect(extends_[0].source).toBe('User');\n    expect(extends_[0].target).toBe('BaseModel');\n  });\n\n  it('emits IMPLEMENTS edge: User → Serializable', () => {\n    const implements_ = getRelationships(result, 'IMPLEMENTS');\n    expect(implements_.length).toBe(1);\n    expect(implements_[0].source).toBe('User');\n    expect(implements_[0].target).toBe('Serializable');\n  });\n\n  it('all heritage edges point to real graph nodes', () => 
{\n    for (const edge of [...getRelationships(result, 'EXTENDS'), ...getRelationships(result, 'IMPLEMENTS')]) {\n      const target = result.graph.getNode(edge.rel.targetId);\n      expect(target).toBeDefined();\n      expect(target!.properties.name).toBe(edge.target);\n    }\n  });\n});\n\n// ---------------------------------------------------------------------------\n// super.save() resolves to parent class's save method\n// ---------------------------------------------------------------------------\n\ndescribe('Java super resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'java-super-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects BaseModel, User, and Repo classes', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['BaseModel', 'Repo', 'User']);\n  });\n\n  it('resolves super.save() inside User to BaseModel.save, not Repo.save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const superSave = calls.find(c => c.source === 'save' && c.target === 'save'\n      && c.targetFilePath === 'src/models/BaseModel.java');\n    expect(superSave).toBeDefined();\n    const repoSave = calls.find(c => c.target === 'save' && c.targetFilePath === 'src/models/Repo.java');\n    expect(repoSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// super.save() resolves to generic parent class's save method\n// ---------------------------------------------------------------------------\n\ndescribe('Java generic parent super resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'java-generic-parent-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects BaseModel, User, and Repo classes', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['BaseModel', 
'Repo', 'User']);\n  });\n\n  it('resolves super.save() inside User to BaseModel.save, not Repo.save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const superSave = calls.find(c => c.source === 'save' && c.target === 'save'\n      && c.targetFilePath === 'src/models/BaseModel.java');\n    expect(superSave).toBeDefined();\n    const repoSave = calls.find(c => c.target === 'save' && c.targetFilePath === 'src/models/Repo.java');\n    expect(repoSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Return type inference: var user = svc.getUser(\"alice\"); user.save()\n// Java's CONSTRUCTOR_BINDING_SCANNER handles `var` declarations with\n// method_invocation values, enabling end-to-end return type inference.\n// ---------------------------------------------------------------------------\n\ndescribe('Java return type inference via explicit method return type', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'java-return-type-inference'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and UserService classes', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('UserService');\n  });\n\n  it('detects save and getUser methods', () => {\n    const methods = getNodesByLabel(result, 'Method');\n    expect(methods).toContain('save');\n    expect(methods).toContain('getUser');\n  });\n\n  it('resolves user.save() to User#save via return type of getUser(): User', () => {\n    // Java's CONSTRUCTOR_BINDING_SCANNER binds `var user = svc.getUser()` to the\n    // return type of getUser (User), so the subsequent user.save() call resolves\n    // to User#save rather than an unresolved target.\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && 
c.source === 'processUser' && c.targetFilePath.includes('models')\n    );\n    expect(saveCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Nullable receiver: Java uses explicit type annotations (User user = findUser())\n// Tests that regular typed receiver resolution works with competing save() methods\n// when the variable is assigned from a factory method returning the same type.\n// Note: Java Optional<User> stores just \"Optional\" in TypeEnv (generics stripped),\n// so this test uses plain typed variables to validate receiver disambiguation.\n// ---------------------------------------------------------------------------\n\ndescribe('Java nullable receiver resolution (typed factory return)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'java-nullable-receiver'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes, both with save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves user.save() to User.save via receiver typing', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c => c.target === 'save' && c.targetFilePath === 'models/User.java');\n    expect(userSave).toBeDefined();\n    expect(userSave!.source).toBe('processEntities');\n  });\n\n  it('resolves repo.save() to Repo.save via receiver typing', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c => c.target === 'save' && c.targetFilePath === 'models/Repo.java');\n    expect(repoSave).toBeDefined();\n    expect(repoSave!.source).toBe('processEntities');\n  });\n\n  it('user.save() does 
NOT resolve to Repo.save (negative disambiguation)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'save' && c.source === 'processEntities');\n    // Each save() call should resolve to exactly one target file\n    expect(saveCalls.filter(c => c.targetFilePath === 'models/User.java').length).toBe(1);\n    expect(saveCalls.filter(c => c.targetFilePath === 'models/Repo.java').length).toBe(1);\n  });\n\n  it('emits exactly 2 save() CALLS edges (one per receiver type)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'save');\n    expect(saveCalls.length).toBe(2);\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Assignment chain propagation (Phase 4.3)\n// ---------------------------------------------------------------------------\n\ndescribe('Java assignment chain propagation', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'java-assignment-chain'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes each with a save method', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves alias.save() to User#save via assignment chain', () => {\n    const calls = getRelationships(result, 'CALLS');\n    // Positive: alias.save() must resolve to User#save\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processEntities' && c.targetFilePath.includes('User.java'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('alias.save() does NOT resolve to Repo#save', () => {\n    const calls = getRelationships(result, 
'CALLS');\n    // Negative: alias comes from User, so only one edge to User.java\n    const wrongCall = calls.filter(c =>\n      c.target === 'save' && c.source === 'processEntities' && c.targetFilePath.includes('User.java'),\n    );\n    expect(wrongCall.length).toBe(1);\n  });\n\n  it('resolves rAlias.save() to Repo#save via assignment chain', () => {\n    const calls = getRelationships(result, 'CALLS');\n    // Positive: rAlias.save() must resolve to Repo#save\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processEntities' && c.targetFilePath.includes('Repo.java'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('each alias resolves to its own class, not the other', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processEntities' && c.targetFilePath.includes('User.java'),\n    );\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processEntities' && c.targetFilePath.includes('Repo.java'),\n    );\n    expect(userSave).toBeDefined();\n    expect(repoSave).toBeDefined();\n    expect(userSave!.targetFilePath).not.toBe(repoSave!.targetFilePath);\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Java Optional<User> receiver resolution — extractSimpleTypeName unwraps\n// Optional<User> to \"User\" via NULLABLE_WRAPPER_TYPES, enabling receiver\n// disambiguation when the declaration type is Optional<T>.\n// ---------------------------------------------------------------------------\n\ndescribe('Java Optional<User> receiver resolution via wrapper unwrapping', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'java-optional-receiver'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes each with a save method', () => {\n    
expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves user.save() to User#save with Optional<User> in scope', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processEntities' && c.targetFilePath?.includes('User.java'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('resolves repo.save() to Repo#save alongside Optional usage', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processEntities' && c.targetFilePath?.includes('Repo.java'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('disambiguates user.save() and repo.save() to different files', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processEntities' && c.targetFilePath?.includes('User.java'),\n    );\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processEntities' && c.targetFilePath?.includes('Repo.java'),\n    );\n    expect(userSave).toBeDefined();\n    expect(repoSave).toBeDefined();\n    expect(userSave!.targetFilePath).not.toBe(repoSave!.targetFilePath);\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Chained method call resolution: svc.getUser().save()\n// The receiver of save() is a method_invocation (getUser()), not a simple identifier.\n// Resolution must walk the chain: getUser() returns User, so save() → User#save.\n// ---------------------------------------------------------------------------\n\ndescribe('Java chained method call resolution', () => {\n  let result: PipelineResult;\n\n  
beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'java-chain-call'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User, Repo and UserService classes', () => {\n    const classes = getNodesByLabel(result, 'Class');\n    expect(classes).toContain('User');\n    expect(classes).toContain('Repo');\n    expect(classes).toContain('UserService');\n  });\n\n  it('detects save methods on both User and Repo', () => {\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('detects getUser method on UserService', () => {\n    const methods = getNodesByLabel(result, 'Method');\n    expect(methods).toContain('getUser');\n  });\n\n  it('resolves svc.getUser().save() to User#save, NOT Repo#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' &&\n      c.source === 'processUser' &&\n      c.targetFilePath.includes('User.java'),\n    );\n    const repoSave = calls.find(c =>\n      c.target === 'save' &&\n      c.source === 'processUser' &&\n      c.targetFilePath.includes('Repo.java'),\n    );\n    expect(userSave).toBeDefined();\n    expect(repoSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Java 16+ instanceof pattern variable: `if (obj instanceof User user)`\n// Phase 5.2: extractPatternBinding on instanceof_expression binds user → User.\n// Disambiguation: User.save vs Repo.save — only User.save should be called.\n// ---------------------------------------------------------------------------\n\ndescribe('Java instanceof pattern variable resolution (Phase 5.2)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'java-instanceof-pattern'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo 
classes each with a save method', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves user.save() inside if (obj instanceof User user) to User#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' &&\n      c.source === 'process' &&\n      c.targetFilePath.includes('User.java'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('does NOT resolve user.save() to Repo#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' &&\n      c.source === 'process' &&\n      c.targetFilePath.includes('Repo.java'),\n    );\n    expect(repoSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Enum static method calls: Status.fromCode(200) should resolve via\n// class-as-receiver with Enum type included in the filter.\n// ---------------------------------------------------------------------------\n\ndescribe('Java enum static method call resolution (Phase 5 review fix)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'java-enum-static-call'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects Status as an Enum and App as a Class', () => {\n    expect(getNodesByLabel(result, 'Enum')).toContain('Status');\n    expect(getNodesByLabel(result, 'Class')).toContain('App');\n  });\n\n  it('detects fromCode and label methods on Status', () => {\n    const methods = getNodesByLabel(result, 'Method');\n    expect(methods).toContain('fromCode');\n    expect(methods).toContain('label');\n  });\n\n  it('resolves Status.fromCode(200) to 
Status#fromCode via class-as-receiver', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const fromCodeCall = calls.find(c =>\n      c.target === 'fromCode' &&\n      c.source === 'process' &&\n      c.targetFilePath?.includes('Status.java'),\n    );\n    expect(fromCodeCall).toBeDefined();\n  });\n\n  it('resolves s.label() to Status#label', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const labelCall = calls.find(c =>\n      c.target === 'label' &&\n      c.source === 'process' &&\n      c.targetFilePath?.includes('Status.java'),\n    );\n    expect(labelCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Java 21+ switch pattern matching: switch (obj) { case User user -> user.save(); }\n// ---------------------------------------------------------------------------\n\ndescribe('Java switch pattern binding', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'java-switch-pattern'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes, both with save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves user.save() in switch case User to models/User.java', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processAny' && c.targetFilePath === 'models/User.java',\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('resolves repo.save() in switch case Repo to models/Repo.java', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source 
=== 'processAny' && c.targetFilePath === 'models/Repo.java',\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('resolves user.save() in handleUser switch case User to models/User.java', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'handleUser' && c.targetFilePath === 'models/User.java',\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('does NOT cross-resolve handleUser switch case User to Repo.save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'save' && c.source === 'handleUser' && c.targetFilePath === 'models/Repo.java',\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Java Map .values() for-loop — method-aware type arg resolution\n// ---------------------------------------------------------------------------\n\ndescribe('Java Map .values() for-loop resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'java-map-keys-values'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User class with save method', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n  });\n\n  it('resolves user.save() via Map.values() to User#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processValues' && c.targetFilePath?.includes('User'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('does NOT resolve user.save() to Repo#save (negative)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processValues' && c.targetFilePath?.includes('Repo'),\n    );\n    
expect(wrongSave).toBeUndefined();\n  });\n\n  it('resolves user.save() via List iteration to User#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processList' && c.targetFilePath?.includes('User'),\n    );\n    expect(userSave).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Java enhanced for-loop with call_expression iterable: for (User user : getUsers())\n// Phase 7.3: call_expression iterable resolution via ReturnTypeLookup\n// ---------------------------------------------------------------------------\n\ndescribe('Java foreach call_expression iterable resolution (Phase 7.3)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'java-foreach-call-expr'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes with competing save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n  });\n\n  it('resolves user.save() in foreach over User.getUsers() to User#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processUsers' && c.targetFilePath?.includes('User.java'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('resolves repo.save() in foreach over Repo.getRepos() to Repo#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processRepos' && c.targetFilePath?.includes('Repo.java'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('does NOT resolve user.save() to Repo#save (negative)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c 
=>\n      c.target === 'save' && c.source === 'processUsers' && c.targetFilePath?.includes('Repo.java'),\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n\n  it('does NOT resolve repo.save() to User#save (negative)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processRepos' && c.targetFilePath?.includes('User.java'),\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Phase 8: Field/property type resolution (1-level)\n// ---------------------------------------------------------------------------\n\ndescribe('Field type resolution (Java)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'java-field-types'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects classes: Address, App, User', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['Address', 'App', 'User']);\n  });\n\n  it('detects Property nodes for Java fields', () => {\n    const properties = getNodesByLabel(result, 'Property');\n    expect(properties).toContain('address');\n    expect(properties).toContain('name');\n    expect(properties).toContain('city');\n  });\n\n  it('emits HAS_PROPERTY edges linking properties to classes', () => {\n    const propEdges = getRelationships(result, 'HAS_PROPERTY');\n    expect(propEdges.length).toBe(3);\n    expect(edgeSet(propEdges)).toContain('User → address');\n    expect(edgeSet(propEdges)).toContain('User → name');\n    expect(edgeSet(propEdges)).toContain('Address → city');\n  });\n\n  it('resolves user.address.save() → Address#save via field type', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(e => e.target === 'save');\n    const addressSave = saveCalls.find(\n      e => e.source === 'processUser' && 
e.targetFilePath.includes('Address'),\n    );\n    expect(addressSave).toBeDefined();\n  });\n\n  it('emits ACCESSES read edge for user.address field access in chain', () => {\n    const accesses = getRelationships(result, 'ACCESSES');\n    const addressReads = accesses.filter(e => e.target === 'address' && e.rel.reason === 'read');\n    expect(addressReads.length).toBe(1);\n    expect(addressReads[0].source).toBe('processUser');\n    expect(addressReads[0].targetLabel).toBe('Property');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Phase 8A: Deep field chain resolution (3-level)\n// ---------------------------------------------------------------------------\n\ndescribe('Deep field chain resolution (Java)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'java-deep-field-chain'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects classes: Address, App, City, User', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['Address', 'App', 'City', 'User']);\n  });\n\n  it('detects Property nodes for Java fields', () => {\n    const properties = getNodesByLabel(result, 'Property');\n    expect(properties).toContain('address');\n    expect(properties).toContain('city');\n    expect(properties).toContain('zipCode');\n  });\n\n  it('emits HAS_PROPERTY edges for nested type chain', () => {\n    const propEdges = getRelationships(result, 'HAS_PROPERTY');\n    expect(edgeSet(propEdges)).toContain('User → address');\n    expect(edgeSet(propEdges)).toContain('Address → city');\n    expect(edgeSet(propEdges)).toContain('City → zipCode');\n  });\n\n  it('resolves 2-level chain: user.address.save() → Address#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(e => e.target === 'save' && e.source === 'processUser');\n    const addressSave = saveCalls.find(e => 
e.targetFilePath.includes('Address'));\n    expect(addressSave).toBeDefined();\n  });\n\n  it('resolves 3-level chain: user.address.city.getName() → City#getName', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const getNameCalls = calls.filter(e => e.target === 'getName' && e.source === 'processUser');\n    const cityGetName = getNameCalls.find(e => e.targetFilePath.includes('City'));\n    expect(cityGetName).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Mixed field+call chain resolution (Java)\n// ---------------------------------------------------------------------------\n\ndescribe('Mixed field+call chain resolution (Java)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'java-mixed-chain'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects classes: Address, App, City, User, UserService', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['Address', 'App', 'City', 'User', 'UserService']);\n  });\n\n  it('detects Property nodes for mixed-chain fields', () => {\n    const properties = getNodesByLabel(result, 'Property');\n    expect(properties).toContain('city');\n    expect(properties).toContain('address');\n  });\n\n  it('resolves call→field chain: svc.getUser().address.save() → Address#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(e => e.target === 'save' && e.source === 'processWithService');\n    expect(saveCalls.length).toBe(1);\n    expect(saveCalls[0].targetFilePath).toContain('Address');\n  });\n\n  it('resolves field→call chain: user.getAddress().city.getName() → City#getName', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const getNameCalls = calls.filter(e => e.target === 'getName' && e.source === 'processWithUser');\n    expect(getNameCalls.length).toBe(1);\n    
expect(getNameCalls[0].targetFilePath).toContain('City');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// ACCESSES write edges from assignment expressions\n// ---------------------------------------------------------------------------\n\ndescribe('Write access tracking (Java)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'java-write-access'),\n      () => {},\n    );\n  }, 60000);\n\n  it('emits ACCESSES write edges for field assignments', () => {\n    const accesses = getRelationships(result, 'ACCESSES');\n    const writes = accesses.filter(e => e.rel.reason === 'write');\n    expect(writes.length).toBe(2);\n    const nameWrite = writes.find(e => e.target === 'name');\n    const addressWrite = writes.find(e => e.target === 'address');\n    expect(nameWrite).toBeDefined();\n    expect(nameWrite!.source).toBe('updateUser');\n    expect(addressWrite).toBeDefined();\n    expect(addressWrite!.source).toBe('updateUser');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Call-result variable binding (Phase 9): var user = getUser(); user.save()\n// ---------------------------------------------------------------------------\n\ndescribe('Java call-result variable binding (Tier 2b)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'java-call-result-binding'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves user.save() to User#save via call-result binding', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processUser' && c.targetFilePath.includes('User')\n    );\n    expect(saveCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Method 
chain binding (Phase 9C): getUser() → .address → .getCity() → .save()\n// ---------------------------------------------------------------------------\n\ndescribe('Java method chain binding via unified fixpoint (Phase 9C)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'java-method-chain-binding'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves city.save() to City#save via method chain', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processChain' && c.targetFilePath.includes('Models')\n    );\n    expect(saveCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Phase B: Deep MRO — walkParentChain() at depth 2 (C→B→A)\n// greet() is defined on A, accessed via C. Tests BFS depth-2 parent traversal.\n// ---------------------------------------------------------------------------\n\ndescribe('Java grandparent method resolution via MRO (Phase B)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'java-grandparent-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects A, B, C, Greeting classes', () => {\n    const classes = getNodesByLabel(result, 'Class');\n    expect(classes).toContain('A');\n    expect(classes).toContain('B');\n    expect(classes).toContain('C');\n    expect(classes).toContain('Greeting');\n  });\n\n  it('emits EXTENDS edges: B→A, C→B', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    expect(edgeSet(extends_)).toContain('B → A');\n    expect(edgeSet(extends_)).toContain('C → B');\n  });\n\n  it('resolves c.greet().save() to Greeting#save via depth-2 MRO lookup', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      
c.target === 'save' && c.targetFilePath.includes('Greeting'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n\n  it('resolves c.greet() to A#greet (method found via MRO walk)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const greetCall = calls.find(c =>\n      c.target === 'greet' && c.targetFilePath.includes('A.java'),\n    );\n    expect(greetCall).toBeDefined();\n  });\n});\n\n// ── Phase P: Overload Disambiguation via Parameter Types ─────────────────\n\ndescribe('Java overload disambiguation by parameter types', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'java-overload-param-types'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects lookup method with parameterTypes on graph node', () => {\n    const methods = getNodesByLabelFull(result, 'Method');\n    const lookupNodes = methods.filter(m => m.name === 'lookup');\n    // generateId collision → 1 graph node, first overload's parameterTypes wins\n    expect(lookupNodes.length).toBe(1);\n    // The node has parameterTypes from whichever overload was registered first\n    expect(lookupNodes[0].properties.parameterTypes).toEqual(['int']);\n  });\n\n  it('emits CALLS edge from run() → lookup() via overload disambiguation', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const lookupCalls = calls.filter(c => c.source === 'run' && c.target === 'lookup');\n    // Phase 0 (fileIndex stores both overloads) + Phase 2 (literal type matching)\n    // enables resolution where previously 2 same-arity candidates → null.\n    // Both calls resolve to same nodeId (ID collision) → 1 CALLS edge after dedup.\n    expect(lookupCalls.length).toBe(1);\n  });\n});\n\n// ── Phase P: Virtual Dispatch via Constructor Type ───────────────────────\n\ndescribe('Java virtual dispatch via constructor type (same-file)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result 
= await runPipelineFromRepo(\n      path.join(FIXTURES, 'java-virtual-dispatch'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects Animal, Dog, and App classes in same file', () => {\n    const classes = getNodesByLabel(result, 'Class');\n    expect(classes).toContain('Animal');\n    expect(classes).toContain('Dog');\n    expect(classes).toContain('App');\n  });\n\n  it('detects Dog extends Animal heritage', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    const dogExtends = extends_.find(e => e.source === 'Dog' && e.target === 'Animal');\n    expect(dogExtends).toBeDefined();\n  });\n\n  it('detects fetchBall() as Dog-only method', () => {\n    const methods = getNodesByLabel(result, 'Method');\n    expect(methods).toContain('fetchBall');\n  });\n\n  it('resolves fetchBall() calls from run() — proves virtual dispatch override', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const fetchCalls = calls.filter(c => c.source === 'run' && c.target === 'fetchBall');\n    // animal.fetchBall() only resolves if constructorTypeMap overrides\n    // receiver from Animal → Dog (since only Dog has fetchBall).\n    // dog.fetchBall() resolves directly via Dog type.\n    // Both target same nodeId → 1 CALLS edge after dedup.\n    expect(fetchCalls.length).toBe(1);\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/integration/resolvers/javascript.test.ts",
    "content": "/**\n * JavaScript: self/this resolution, parent resolution, super resolution\n */\nimport { describe, it, expect, beforeAll } from 'vitest';\nimport path from 'path';\nimport {\n  FIXTURES, getRelationships, getNodesByLabel, edgeSet,\n  runPipelineFromRepo, type PipelineResult,\n} from './helpers.js';\n\n// ---------------------------------------------------------------------------\n// skipGraphPhases: verify pipeline works correctly when graph phases are skipped\n// ---------------------------------------------------------------------------\n\ndescribe('Pipeline skipGraphPhases option', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'javascript-self-this-resolution'),\n      () => {},\n      { skipGraphPhases: true },\n    );\n  }, 60000);\n\n  it('produces graph nodes without community/process phases', () => {\n    expect(getNodesByLabel(result, 'Class').length).toBeGreaterThan(0);\n  });\n\n  it('still resolves CALLS edges correctly', () => {\n    const calls = getRelationships(result, 'CALLS');\n    expect(calls.length).toBeGreaterThan(0);\n  });\n\n  it('omits communityResult when skipGraphPhases is true', () => {\n    expect(result.communityResult).toBeUndefined();\n  });\n\n  it('omits processResult when skipGraphPhases is true', () => {\n    expect(result.processResult).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// this.save() resolves to enclosing class's own save method\n// ---------------------------------------------------------------------------\n\ndescribe('JavaScript this resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'javascript-self-this-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes, each with a save method', () => {\n    
expect(getNodesByLabel(result, 'Class')).toEqual(['Repo', 'User']);\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves this.save() inside User.process to User.save, not Repo.save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save' && c.source === 'process');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.targetFilePath).toBe('src/models/User.js');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Parent class resolution: EXTENDS edge\n// ---------------------------------------------------------------------------\n\ndescribe('JavaScript parent resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'javascript-parent-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects BaseModel and User classes', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['BaseModel', 'User']);\n  });\n\n  it('emits EXTENDS edge: User → BaseModel', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    expect(extends_.length).toBe(1);\n    expect(extends_[0].source).toBe('User');\n    expect(extends_[0].target).toBe('BaseModel');\n  });\n\n  it('EXTENDS edge points to real graph node', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    const target = result.graph.getNode(extends_[0].rel.targetId);\n    expect(target).toBeDefined();\n    expect(target!.properties.name).toBe('BaseModel');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Nullable receiver: JSDoc @param {User | null} strips nullable via TypeEnv\n// ---------------------------------------------------------------------------\n\ndescribe('JavaScript nullable receiver resolution', () => {\n  let 
result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'js-nullable-receiver'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes, both with save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['Repo', 'User']);\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves user.save() to src/user.js via nullable-stripped JSDoc type', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c => c.target === 'save' && c.source === 'processEntities' && c.targetFilePath === 'src/user.js');\n    expect(userSave).toBeDefined();\n  });\n\n  it('resolves repo.save() to src/repo.js via nullable-stripped JSDoc type', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c => c.target === 'save' && c.source === 'processEntities' && c.targetFilePath === 'src/repo.js');\n    expect(repoSave).toBeDefined();\n  });\n\n  it('emits exactly 2 save() CALLS edges (one per receiver type)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'save');\n    expect(saveCalls.length).toBe(2);\n  });\n\n  it('each save() call resolves to a distinct file (no duplicates)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'save' && c.source === 'processEntities');\n    const files = saveCalls.map(c => c.targetFilePath).sort();\n    expect(files).toEqual(['src/repo.js', 'src/user.js']);\n  });\n});\n\n// ---------------------------------------------------------------------------\n// super.save() resolves to parent class's save method\n// ---------------------------------------------------------------------------\n\ndescribe('JavaScript super resolution', () => {\n  let result: 
PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'javascript-super-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects BaseModel, User, and Repo classes, each with a save method', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['BaseModel', 'Repo', 'User']);\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'save');\n    expect(saveMethods.length).toBe(3);\n  });\n\n  it('emits EXTENDS edge: User → BaseModel', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    expect(extends_.length).toBe(1);\n    expect(extends_[0].source).toBe('User');\n    expect(extends_[0].target).toBe('BaseModel');\n  });\n\n  it('resolves super.save() inside User to BaseModel.save, not Repo.save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const superSave = calls.find(c => c.source === 'save' && c.target === 'save'\n      && c.targetFilePath === 'src/models/Base.js');\n    expect(superSave).toBeDefined();\n    const repoSave = calls.find(c => c.target === 'save' && c.targetFilePath === 'src/models/Repo.js');\n    expect(repoSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Chained method calls: svc.getUser().save()\n// Tests that JavaScript chain call resolution correctly infers the intermediate\n// receiver type from getUser()'s JSDoc @returns {User} and resolves save().\n// ---------------------------------------------------------------------------\n\ndescribe('JavaScript chained method call resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'javascript-chain-call'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes, and UserService', () => {\n    const classes = getNodesByLabel(result, 'Class');\n    
expect(classes).toContain('User');\n    expect(classes).toContain('Repo');\n    expect(classes).toContain('UserService');\n  });\n\n  it('detects getUser and save methods', () => {\n    const methods = getNodesByLabel(result, 'Method');\n    expect(methods).toContain('getUser');\n    expect(methods).toContain('save');\n  });\n\n  it('resolves svc.getUser().save() to User#save via chain resolution', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' &&\n      c.source === 'processUser' &&\n      c.targetFilePath?.includes('user.js'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('does NOT resolve svc.getUser().save() to Repo#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' &&\n      c.source === 'processUser' &&\n      c.targetFilePath?.includes('repo.js'),\n    );\n    expect(repoSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Phase 8: Field/property type resolution — class field_definition capture\n// ---------------------------------------------------------------------------\n\ndescribe('Field type resolution (JavaScript)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'js-field-types'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects classes: Address, Config, User', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['Address', 'Config', 'User']);\n  });\n\n  it('detects Property nodes for JS class fields', () => {\n    const properties = getNodesByLabel(result, 'Property');\n    expect(properties).toContain('address');\n    expect(properties).toContain('name');\n    expect(properties).toContain('city');\n  });\n\n  it('emits HAS_PROPERTY edges linking fields to classes', () => {\n    const propEdges = 
getRelationships(result, 'HAS_PROPERTY');\n    expect(propEdges.length).toBe(4);\n    expect(edgeSet(propEdges)).toContain('User → address');\n    expect(edgeSet(propEdges)).toContain('User → name');\n    expect(edgeSet(propEdges)).toContain('Address → city');\n    expect(edgeSet(propEdges)).toContain('Config → DEFAULT');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// ACCESSES write edges from assignment expressions\n// ---------------------------------------------------------------------------\n\ndescribe('Write access tracking (JavaScript)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'js-write-access'),\n      () => {},\n    );\n  }, 60000);\n\n  it('emits ACCESSES write edges for field assignments', () => {\n    const accesses = getRelationships(result, 'ACCESSES');\n    const writes = accesses.filter(e => e.rel.reason === 'write');\n    expect(writes.length).toBe(2);\n    const fieldNames = writes.map(e => e.target);\n    expect(fieldNames).toContain('name');\n    expect(fieldNames).toContain('address');\n    const sources = writes.map(e => e.source);\n    expect(sources).toContain('updateUser');\n  });\n\n  it('write ACCESSES edges have confidence 1.0', () => {\n    const accesses = getRelationships(result, 'ACCESSES');\n    const writes = accesses.filter(e => e.rel.reason === 'write');\n    for (const edge of writes) {\n      expect(edge.rel.confidence).toBe(1.0);\n    }\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Phase A: JS object destructuring — const { field } = receiver → fieldAccess PendingAssignment\n// ---------------------------------------------------------------------------\n\ndescribe('JavaScript object destructuring resolution (Phase A)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'js-object-destructuring'),\n      () => {},\n    );\n  }, 
60000);\n\n  it('detects User, Address classes', () => {\n    const classes = getNodesByLabel(result, 'Class');\n    expect(classes).toContain('User');\n    expect(classes).toContain('Address');\n  });\n\n  it('resolves address.save() to Address#save via object destructuring', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.targetFilePath?.includes('models'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Phase A: Post-fixpoint for-loop replay — iterable resolved via callResult fixpoint\n// ---------------------------------------------------------------------------\n\ndescribe('JavaScript post-fixpoint for-loop replay (Phase A ex-9B)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'js-fixpoint-for-loop'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves u.save() to User#save via post-fixpoint for-loop replay', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'process' && c.targetFilePath?.includes('models'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/integration/resolvers/kotlin.test.ts",
    "content": "/**\n * Kotlin: data class extends + implements interfaces + ambiguous import disambiguation\n */\nimport { describe, it, expect, beforeAll } from 'vitest';\nimport path from 'path';\nimport {\n  FIXTURES, getRelationships, getNodesByLabel, getNodesByLabelFull, edgeSet,\n  runPipelineFromRepo, type PipelineResult,\n} from './helpers.js';\n\n// ---------------------------------------------------------------------------\n// Heritage: data class extends + implements interfaces (delegation specifiers)\n// ---------------------------------------------------------------------------\n\ndescribe('Kotlin heritage resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'kotlin-heritage'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects exactly 3 classes and 2 interfaces', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['BaseModel', 'User', 'UserService']);\n    expect(getNodesByLabel(result, 'Interface')).toEqual(['Serializable', 'Validatable']);\n  });\n\n  it('detects 6 functions (interface declarations + implementations + service)', () => {\n    expect(getNodesByLabel(result, 'Function')).toEqual([\n      'processUser', 'save', 'serialize', 'serialize', 'validate', 'validate',\n    ]);\n  });\n\n  it('emits exactly 1 EXTENDS edge: User → BaseModel', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    expect(extends_.length).toBe(1);\n    expect(extends_[0].source).toBe('User');\n    expect(extends_[0].target).toBe('BaseModel');\n  });\n\n  it('emits exactly 2 IMPLEMENTS edges via symbol table resolution', () => {\n    const implements_ = getRelationships(result, 'IMPLEMENTS');\n    expect(implements_.length).toBe(2);\n    expect(edgeSet(implements_)).toEqual([\n      'User → Serializable',\n      'User → Validatable',\n    ]);\n  });\n\n  it('resolves exactly 4 IMPORTS edges (JVM-style package imports)', () => {\n    const 
imports = getRelationships(result, 'IMPORTS');\n    expect(imports.length).toBe(4);\n    expect(edgeSet(imports)).toEqual([\n      'User.kt → Serializable.kt',\n      'User.kt → Validatable.kt',\n      'UserService.kt → Serializable.kt',\n      'UserService.kt → User.kt',\n    ]);\n  });\n\n  it('does not emit EXTENDS edges to interfaces', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    expect(extends_.some(e => e.target === 'Serializable')).toBe(false);\n    expect(extends_.some(e => e.target === 'Validatable')).toBe(false);\n  });\n\n  it('resolves ambiguous validate() call through non-aliased import with import-resolved reason', () => {\n    const calls = getRelationships(result, 'CALLS');\n    // validate is defined in both Validatable (interface) and User (override) → needs import scoping\n    const validateCall = calls.find(c => c.target === 'validate');\n    expect(validateCall).toBeDefined();\n    expect(validateCall!.source).toBe('processUser');\n    expect(validateCall!.rel.reason).toBe('import-resolved');\n  });\n\n  it('resolves unique save() call through non-aliased import', () => {\n    const calls = getRelationships(result, 'CALLS');\n    // save is unique globally (only in BaseModel) → resolves as unique-global\n    const saveCall = calls.find(c => c.target === 'save');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.source).toBe('processUser');\n  });\n\n  it('no OVERRIDES edges target Property nodes', () => {\n    const overrides = getRelationships(result, 'OVERRIDES');\n    for (const edge of overrides) {\n      const target = result.graph.getNode(edge.rel.targetId);\n      expect(target).toBeDefined();\n      expect(target!.label).not.toBe('Property');\n    }\n  });\n\n  it('all heritage edges point to real graph nodes', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    const implements_ = getRelationships(result, 'IMPLEMENTS');\n\n    for (const edge of [...extends_, ...implements_]) {\n  
    const target = result.graph.getNode(edge.rel.targetId);\n      expect(target).toBeDefined();\n      expect(target!.properties.name).toBe(edge.target);\n    }\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Ambiguous: Handler + Runnable in two packages, explicit imports disambiguate\n// ---------------------------------------------------------------------------\n\ndescribe('Kotlin ambiguous symbol resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'kotlin-ambiguous'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects 2 Handler classes and 2 Runnable interfaces', () => {\n    const classes = getNodesByLabel(result, 'Class');\n    expect(classes.filter(n => n === 'Handler').length).toBe(2);\n    const ifaces = getNodesByLabel(result, 'Interface');\n    expect(ifaces.filter(n => n === 'Runnable').length).toBe(2);\n  });\n\n  it('resolves EXTENDS to models/Handler.kt (not other/Handler.kt)', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    expect(extends_.length).toBe(1);\n    expect(extends_[0].source).toBe('UserHandler');\n    expect(extends_[0].target).toBe('Handler');\n    expect(extends_[0].targetFilePath).toBe('models/Handler.kt');\n  });\n\n  it('resolves IMPLEMENTS to models/Runnable.kt (not other/Runnable.kt)', () => {\n    const implements_ = getRelationships(result, 'IMPLEMENTS');\n    expect(implements_.length).toBe(1);\n    expect(implements_[0].source).toBe('UserHandler');\n    expect(implements_[0].target).toBe('Runnable');\n    expect(implements_[0].targetFilePath).toBe('models/Runnable.kt');\n  });\n\n  it('import edges point to models/ not other/', () => {\n    const imports = getRelationships(result, 'IMPORTS');\n    for (const imp of imports) {\n      expect(imp.targetFilePath).toMatch(/^models\\//);\n    }\n  });\n\n  it('all heritage edges point to real graph nodes', () 
=> {\n    for (const edge of [...getRelationships(result, 'EXTENDS'), ...getRelationships(result, 'IMPLEMENTS')]) {\n      const target = result.graph.getNode(edge.rel.targetId);\n      expect(target).toBeDefined();\n      expect(target!.properties.name).toBe(edge.target);\n    }\n  });\n});\n\ndescribe('Kotlin call resolution with arity filtering', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'kotlin-calls'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves processUser → writeAudit to util/OneArg.kt via arity narrowing', () => {\n    const calls = getRelationships(result, 'CALLS');\n    expect(calls.length).toBe(1);\n    expect(calls[0].source).toBe('processUser');\n    expect(calls[0].target).toBe('writeAudit');\n    expect(calls[0].targetFilePath).toBe('util/OneArg.kt');\n    expect(calls[0].rel.reason).toBe('import-resolved');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Member-call resolution: obj.method() resolves through pipeline\n// ---------------------------------------------------------------------------\n\ndescribe('Kotlin member-call resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'kotlin-member-calls'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves processUser → save as a member call on User', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.source).toBe('processUser');\n    expect(saveCall!.targetFilePath).toBe('models/User.kt');\n  });\n\n  it('detects User class and save function (Kotlin fns are Function nodes)', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    // Kotlin tree-sitter captures all function_declaration as Function, 
including class methods\n    expect(getNodesByLabel(result, 'Function')).toContain('save');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Receiver-constrained resolution: typed variables disambiguate same-named methods\n// ---------------------------------------------------------------------------\n\ndescribe('Kotlin receiver-constrained resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'kotlin-receiver-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes, both with save functions', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    // Kotlin tree-sitter captures all function_declaration as Function\n    const saveFns = getNodesByLabel(result, 'Function').filter(m => m === 'save');\n    expect(saveFns.length).toBe(2);\n  });\n\n  it('resolves user.save() to User.save and repo.save() to Repo.save via receiver typing', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'save');\n    expect(saveCalls.length).toBe(2);\n\n    const userSave = saveCalls.find(c => c.targetFilePath === 'models/User.kt');\n    const repoSave = saveCalls.find(c => c.targetFilePath === 'models/Repo.kt');\n\n    expect(userSave).toBeDefined();\n    expect(repoSave).toBeDefined();\n    expect(userSave!.source).toBe('processEntities');\n    expect(repoSave!.source).toBe('processEntities');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Alias import resolution: import com.example.User as U resolves U → User\n// ---------------------------------------------------------------------------\n\ndescribe('Kotlin alias import resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    
result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'kotlin-alias-imports'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes with their methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['Repo', 'User']);\n    // Kotlin tree-sitter captures all function_declaration as Function, including class methods\n    expect(getNodesByLabel(result, 'Function')).toContain('save');\n    expect(getNodesByLabel(result, 'Function')).toContain('persist');\n  });\n\n  it('resolves u.save() to models/Models.kt and r.persist() to models/Models.kt via alias', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save');\n    const persistCall = calls.find(c => c.target === 'persist');\n\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.source).toBe('main');\n    expect(saveCall!.targetFilePath).toBe('models/Models.kt');\n\n    expect(persistCall).toBeDefined();\n    expect(persistCall!.source).toBe('main');\n    expect(persistCall!.targetFilePath).toBe('models/Models.kt');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Constructor-call resolution: User(\"alice\") resolves to User constructor\n// ---------------------------------------------------------------------------\n\ndescribe('Kotlin constructor-call resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'kotlin-constructor-calls'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User class with save method and main function', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Function')).toContain('save');\n    expect(getNodesByLabel(result, 'Function')).toContain('main');\n  });\n\n  it('resolves import from app/App.kt to models/User.kt', () => {\n    const imports = getRelationships(result, 
'IMPORTS');\n    const imp = imports.find(e => e.source === 'App.kt' && e.targetFilePath === 'models/User.kt');\n    expect(imp).toBeDefined();\n  });\n\n  it('emits HAS_METHOD from User class to save function', () => {\n    const hasMethod = getRelationships(result, 'HAS_METHOD');\n    const edge = hasMethod.find(e => e.source === 'User' && e.target === 'save');\n    expect(edge).toBeDefined();\n    expect(edge!.targetFilePath).toBe('models/User.kt');\n  });\n\n  it('resolves user.save() as a method call to models/User.kt', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.source).toBe('main');\n    expect(saveCall!.targetFilePath).toBe('models/User.kt');\n  });\n\n  it('resolves calls via non-aliased import with import-resolved reason', () => {\n    const calls = getRelationships(result, 'CALLS');\n    // Both User(\"alice\") constructor and user.save() go through `import models.User`\n    for (const call of calls) {\n      expect(call.rel.reason).toBe('import-resolved');\n    }\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Variadic resolution: vararg doesn't get filtered by arity\n// ---------------------------------------------------------------------------\n\ndescribe('Kotlin variadic call resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'kotlin-variadic-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves 3-arg call to vararg function logEntry(vararg String) in Logger.kt', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const logCall = calls.find(c => c.target === 'logEntry');\n    expect(logCall).toBeDefined();\n    expect(logCall!.source).toBe('main');\n    expect(logCall!.targetFilePath).toBe('util/Logger.kt');\n  });\n});\n\n// 
---------------------------------------------------------------------------\n// Local shadow: same-file definition takes priority over imported name\n// ---------------------------------------------------------------------------\n\ndescribe('Kotlin local definition shadows import', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'kotlin-local-shadow'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves run → save to same-file definition, not the imported one', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save' && c.source === 'run');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.targetFilePath).toBe('src/main/kotlin/app/Main.kt');\n  });\n\n  it('does NOT resolve save to Logger.kt', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveToUtils = calls.find(c => c.target === 'save' && c.targetFilePath === 'src/main/kotlin/utils/Logger.kt');\n    expect(saveToUtils).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Constructor-inferred type resolution: val user = User() without annotation\n// disambiguates user.save() vs repo.save() via TypeEnv constructor inference\n// ---------------------------------------------------------------------------\n\ndescribe('Kotlin constructor-inferred type resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'kotlin-constructor-type-inference'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes, both with save functions', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveFns = getNodesByLabel(result, 'Function').filter(m => m === 'save');\n    
expect(saveFns.length).toBe(2);\n  });\n\n  it('resolves user.save() to models/User.kt via constructor-inferred type', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c => c.target === 'save' && c.targetFilePath === 'models/User.kt');\n    expect(userSave).toBeDefined();\n    expect(userSave!.source).toBe('processEntities');\n  });\n\n  it('resolves repo.save() to models/Repo.kt via constructor-inferred type', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c => c.target === 'save' && c.targetFilePath === 'models/Repo.kt');\n    expect(repoSave).toBeDefined();\n    expect(repoSave!.source).toBe('processEntities');\n  });\n\n  it('emits exactly 2 save() CALLS edges (one per receiver type)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'save');\n    expect(saveCalls.length).toBe(2);\n  });\n});\n\n// ---------------------------------------------------------------------------\n// this.save() resolves to enclosing class's / object's own method\n// ---------------------------------------------------------------------------\n\ndescribe('Kotlin this resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'kotlin-self-this-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User, Repo classes and AppConfig object', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    expect(getNodesByLabel(result, 'Class')).toContain('AppConfig');\n  });\n\n  it('resolves this.save() inside User.process to User.save, not Repo.save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save' && c.source === 'process');\n    expect(saveCall).toBeDefined();\n    
expect(saveCall!.targetFilePath).toBe('models/User.kt');\n  });\n\n  it('resolves this.init() inside AppConfig.setup to AppConfig.init (object_declaration)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const initCall = calls.find(c => c.target === 'init' && c.source === 'setup');\n    expect(initCall).toBeDefined();\n    expect(initCall!.targetFilePath).toBe('models/AppConfig.kt');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Return type inference: val user = getUser(\"alice\"); user.save()\n// Kotlin's CONSTRUCTOR_BINDING_SCANNER captures property_declaration with\n// call_expression values, enabling return type inference from function results.\n// ---------------------------------------------------------------------------\n\ndescribe('Kotlin return type inference', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'kotlin-return-type'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes with competing save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveFns = getNodesByLabel(result, 'Function').filter(f => f === 'save');\n    expect(saveFns.length).toBe(2);\n  });\n\n  it('resolves user.save() to User#save via return type inference', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processUser' && c.targetFilePath?.includes('User.kt'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('resolves repo.save() to Repo#save via return type inference', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processRepo' && c.targetFilePath?.includes('Repo.kt'),\n    );\n    
expect(repoSave).toBeDefined();\n  });\n\n  it('user.save() does NOT resolve to Repo#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processUser' && c.targetFilePath?.includes('Repo.kt'),\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Parent class resolution: EXTENDS + IMPLEMENTS edges\n// ---------------------------------------------------------------------------\n\ndescribe('Kotlin parent resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'kotlin-parent-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects BaseModel and User classes plus Serializable interface', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['BaseModel', 'User']);\n    expect(getNodesByLabel(result, 'Interface')).toEqual(['Serializable']);\n  });\n\n  it('emits EXTENDS edge: User → BaseModel', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    expect(extends_.length).toBe(1);\n    expect(extends_[0].source).toBe('User');\n    expect(extends_[0].target).toBe('BaseModel');\n  });\n\n  it('emits IMPLEMENTS edge: User → Serializable', () => {\n    const implements_ = getRelationships(result, 'IMPLEMENTS');\n    expect(implements_.length).toBe(1);\n    expect(implements_[0].source).toBe('User');\n    expect(implements_[0].target).toBe('Serializable');\n  });\n\n  it('all heritage edges point to real graph nodes', () => {\n    for (const edge of [...getRelationships(result, 'EXTENDS'), ...getRelationships(result, 'IMPLEMENTS')]) {\n      const target = result.graph.getNode(edge.rel.targetId);\n      expect(target).toBeDefined();\n      expect(target!.properties.name).toBe(edge.target);\n    }\n  });\n});\n\n// 
---------------------------------------------------------------------------\n// super.save() resolves to parent class's save method\n// ---------------------------------------------------------------------------\n\ndescribe('Kotlin super resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'kotlin-super-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects BaseModel, User, and Repo classes', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['BaseModel', 'Repo', 'User']);\n  });\n\n  it('resolves super.save() inside User to BaseModel.save, not Repo.save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const superSave = calls.find(c => c.source === 'save' && c.target === 'save'\n      && c.targetFilePath === 'models/BaseModel.kt');\n    expect(superSave).toBeDefined();\n    const repoSave = calls.find(c => c.target === 'save' && c.targetFilePath === 'models/Repo.kt');\n    expect(repoSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// For-each loop variable type resolution: for (user: User in users) { user.save() }\n// ---------------------------------------------------------------------------\n\ndescribe('Kotlin for-each loop type resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'kotlin-foreach'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes, both with save functions', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveFns = getNodesByLabel(result, 'Function').filter(f => f === 'save');\n    expect(saveFns.length).toBe(2);\n  });\n\n  it('resolves user.save() inside for-each to models/User.kt', () => {\n    const calls = 
getRelationships(result, 'CALLS');\n    const userSave = calls.find(c => c.target === 'save' && c.source === 'processUsers' && c.targetFilePath === 'models/User.kt');\n    expect(userSave).toBeDefined();\n  });\n\n  it('resolves repo.save() inside for-each to models/Repo.kt', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c => c.target === 'save' && c.source === 'processRepos' && c.targetFilePath === 'models/Repo.kt');\n    expect(repoSave).toBeDefined();\n  });\n\n  it('emits exactly 2 save() CALLS edges (one per receiver type)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'save');\n    expect(saveCalls.length).toBe(2);\n  });\n\n  it('user.save() does NOT resolve to Repo.save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c => c.target === 'save' && c.source === 'processUsers' && c.targetFilePath === 'models/Repo.kt');\n    expect(wrongSave).toBeUndefined();\n  });\n\n  it('repo.save() does NOT resolve to User.save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c => c.target === 'save' && c.source === 'processRepos' && c.targetFilePath === 'models/User.kt');\n    expect(wrongSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// super.save() resolves to generic parent class's save method\n// ---------------------------------------------------------------------------\n\ndescribe('Kotlin generic parent super resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'kotlin-generic-parent-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects BaseModel, User, and Repo classes', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['BaseModel', 'Repo', 'User']);\n  
});\n\n  it('resolves super.save() inside User to BaseModel.save, not Repo.save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const superSave = calls.find(c => c.source === 'save' && c.target === 'save'\n      && c.targetFilePath === 'models/BaseModel.kt');\n    expect(superSave).toBeDefined();\n    const repoSave = calls.find(c => c.target === 'save' && c.targetFilePath === 'models/Repo.kt');\n    expect(repoSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Nullable receiver unwrapping: user?.save() with User? type resolves through ?.\n// ---------------------------------------------------------------------------\n\ndescribe('Kotlin nullable receiver resolution (safe calls)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'kotlin-nullable-receiver'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes with competing save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveMethods = getNodesByLabel(result, 'Function').filter((m: string) => m === 'save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves user?.save() to User#save via receiver typing', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processEntities' && c.targetFilePath.includes('User.kt'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('resolves repo?.save() to Repo#save via receiver typing', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processEntities' && c.targetFilePath.includes('Repo.kt'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('does NOT 
cross-contaminate (exactly 1 save per receiver file)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'save' && c.source === 'processEntities');\n    const userTargeted = saveCalls.filter(c => c.targetFilePath.includes('User.kt'));\n    const repoTargeted = saveCalls.filter(c => c.targetFilePath.includes('Repo.kt'));\n    expect(userTargeted.length).toBe(1);\n    expect(repoTargeted.length).toBe(1);\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Assignment chain propagation\n// ---------------------------------------------------------------------------\n\ndescribe('Kotlin assignment chain propagation', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'kotlin-assignment-chain'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes each with a save function', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveFns = getNodesByLabel(result, 'Function').filter(f => f === 'save');\n    expect(saveFns.length).toBe(2);\n  });\n\n  it('resolves alias.save() to User#save via assignment chain', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processEntities' && c.targetFilePath.includes('User.kt'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('resolves rAlias.save() to Repo#save via assignment chain', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processEntities' && c.targetFilePath.includes('Repo.kt'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('alias.save() does NOT resolve to Repo#save', () => {\n    const calls = 
getRelationships(result, 'CALLS');\n    // There should be exactly one save() call targeting User.kt from processEntities\n    const userSaves = calls.filter(c =>\n      c.target === 'save' && c.source === 'processEntities' && c.targetFilePath.includes('User.kt'),\n    );\n    expect(userSaves.length).toBe(1);\n  });\n\n  it('each alias resolves to its own class, not the other', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processEntities' && c.targetFilePath.includes('User.kt'),\n    );\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processEntities' && c.targetFilePath.includes('Repo.kt'),\n    );\n    expect(userSave).toBeDefined();\n    expect(repoSave).toBeDefined();\n    expect(userSave!.targetFilePath).not.toBe(repoSave!.targetFilePath);\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Kotlin assignment chain inside class method body.\n// Tests that extractKotlinPendingAssignment handles variable_declaration\n// nodes (not just property_declaration) that tree-sitter-kotlin may emit\n// for function-local val/var inside class methods.\n// ---------------------------------------------------------------------------\n\ndescribe('Kotlin assignment chain inside class method', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'kotlin-class-method-chain'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes each with a save function', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveFns = getNodesByLabel(result, 'Function').filter(m => m === 'save');\n    expect(saveFns.length).toBe(2);\n  });\n\n  it('resolves alias.save() to User#save via chain inside function', () => {\n   
 const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processUser' && c.targetFilePath?.includes('User.kt'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('alias.save() in processUser does NOT resolve to Repo (negative)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processUser' && c.targetFilePath?.includes('Repo.kt'),\n    );\n    expect(wrongCall).toBeUndefined();\n  });\n\n  it('resolves alias.save() to Repo#save via chain inside function', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processRepo' && c.targetFilePath?.includes('Repo.kt'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('alias.save() in processRepo does NOT resolve to User (negative)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processRepo' && c.targetFilePath?.includes('User.kt'),\n    );\n    expect(wrongCall).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Chained method calls: svc.getUser().save()\n// Tests that Kotlin's navigation_expression → navigation_suffix AST structure\n// is correctly handled by extractCallChain (Phase 5 review Finding 1, Round 3).\n// ---------------------------------------------------------------------------\n\ndescribe('Kotlin chained method call resolution (Phase 5 review fix)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'kotlin-chain-call'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User, Repo, and UserService classes', () => {\n    const classes = getNodesByLabel(result, 'Class');\n    
expect(classes).toContain('User');\n    expect(classes).toContain('Repo');\n    expect(classes).toContain('UserService');\n  });\n\n  it('detects getUser and save functions', () => {\n    const fns = getNodesByLabel(result, 'Function');\n    expect(fns).toContain('getUser');\n    expect(fns).toContain('save');\n  });\n\n  it('resolves svc.getUser().save() to User#save via chain resolution', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' &&\n      c.source === 'processUser' &&\n      c.targetFilePath?.includes('User.kt'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('does NOT resolve svc.getUser().save() to Repo#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' &&\n      c.source === 'processUser' &&\n      c.targetFilePath?.includes('Repo.kt'),\n    );\n    expect(repoSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Kotlin unannotated for-loop Tier 1c: for (user in users) with List<User>\n// ---------------------------------------------------------------------------\n\ndescribe('Kotlin unannotated for-loop type resolution (Tier 1c)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'kotlin-var-foreach'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes with save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n  });\n\n  it('resolves user.save() in unannotated for to User#save via Tier 1c', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processUsers' && c.targetFilePath?.includes('User.kt'),\n    );\n    
expect(userSave).toBeDefined();\n  });\n\n  it('resolves repo.save() in unannotated for to Repo#save via Tier 1c', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processRepos' && c.targetFilePath?.includes('Repo.kt'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('does NOT cross-resolve user.save() to Repo#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrong = calls.find(c =>\n      c.target === 'save' && c.source === 'processUsers' && c.targetFilePath?.includes('Repo.kt'),\n    );\n    expect(wrong).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Kotlin when/is pattern binding: when (obj) { is User -> obj.save() }\n// ---------------------------------------------------------------------------\n\ndescribe('Kotlin when/is pattern binding', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'kotlin-when-pattern'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes, both with save functions', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveFns = getNodesByLabel(result, 'Function').filter(f => f === 'save');\n    expect(saveFns.length).toBe(2);\n  });\n\n  it('resolves obj.save() in when/is User arm to models/User.kt', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processAny' && c.targetFilePath === 'models/User.kt',\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('resolves obj.save() in when/is Repo arm to models/Repo.kt', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 
'save' && c.source === 'processAny' && c.targetFilePath === 'models/Repo.kt',\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('resolves obj.save() in handleUser when/is User arm to models/User.kt', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'handleUser' && c.targetFilePath === 'models/User.kt',\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('does NOT cross-resolve handleUser when/is User to Repo.save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'save' && c.source === 'handleUser' && c.targetFilePath === 'models/Repo.kt',\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Kotlin HashMap .values navigation_expression resolution\n// ---------------------------------------------------------------------------\n\ndescribe('Kotlin HashMap .values for-loop resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'kotlin-map-keys-values'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User class with save function', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n  });\n\n  it('resolves user.save() via HashMap.values to User#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processValues' && c.targetFilePath?.includes('User'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('does NOT resolve user.save() to Repo#save (negative)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processValues' && c.targetFilePath?.includes('Repo'),\n    );\n    
expect(wrongSave).toBeUndefined();\n  });\n\n  it('resolves user.save() via List iteration to User#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processList' && c.targetFilePath?.includes('User'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('resolves user.save() via HashMap.keys to User#save (first type arg)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processKeys' && c.targetFilePath?.includes('User'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('does NOT resolve HashMap.keys iteration to Repo#save (negative)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrong = calls.find(c =>\n      c.target === 'save' && c.source === 'processKeys' && c.targetFilePath?.includes('Repo'),\n    );\n    expect(wrong).toBeUndefined();\n  });\n\n  it('resolves repo.save() via MutableMap.values to Repo#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processMutableMapValues' && c.targetFilePath?.includes('Repo'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('resolves repo.save() via Set iteration to Repo#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processSet' && c.targetFilePath?.includes('Repo'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Kotlin when/is complex patterns: 3+ arms, multi-call, else branch\n// ---------------------------------------------------------------------------\n\ndescribe('Kotlin when/is complex pattern binding', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n   
 result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'kotlin-when-complex'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User, Repo, and Admin classes', () => {\n    const classes = getNodesByLabel(result, 'Class');\n    expect(classes).toContain('User');\n    expect(classes).toContain('Repo');\n    expect(classes).toContain('Admin');\n  });\n\n  // --- Three-arm when: each arm resolves obj to the correct narrowed type ---\n\n  it('resolves obj.save() in 3-arm when/is User to User#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processThreeArms' && c.targetFilePath === 'models/User.kt',\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('resolves obj.save() in 3-arm when/is Repo to Repo#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processThreeArms' && c.targetFilePath === 'models/Repo.kt',\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('resolves obj.save() in 3-arm when/is Admin to Admin#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const adminSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processThreeArms' && c.targetFilePath === 'models/Admin.kt',\n    );\n    expect(adminSave).toBeDefined();\n  });\n\n  // --- Multiple method calls within a single when arm ---\n\n  it('resolves obj.validate() in when/is User arm to User#validate', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userValidate = calls.find(c =>\n      c.target === 'validate' && c.source === 'processMultiCall' && c.targetFilePath === 'models/User.kt',\n    );\n    expect(userValidate).toBeDefined();\n  });\n\n  it('resolves obj.save() in when/is User arm to User#save (multi-call)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c 
=>\n      c.target === 'save' && c.source === 'processMultiCall' && c.targetFilePath === 'models/User.kt',\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('resolves obj.validate() in when/is Repo arm to Repo#validate', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoValidate = calls.find(c =>\n      c.target === 'validate' && c.source === 'processMultiCall' && c.targetFilePath === 'models/Repo.kt',\n    );\n    expect(repoValidate).toBeDefined();\n  });\n\n  it('resolves obj.save() in when/is Repo arm to Repo#save (multi-call)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processMultiCall' && c.targetFilePath === 'models/Repo.kt',\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  // --- Cross-arm leak check: save() edges are not duplicated across arms ---\n\n  it('processMultiCall emits exactly 2 save() CALLS edges (no cross-arm leak)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    // Both User and Repo define validate(), so the Repo arm legitimately\n    // resolves validate() to Repo; its absence cannot be asserted here.\n    // Instead, verify there is no cross-arm leak by counting save() edges:\n    // exactly 2 (one per arm, not duplicated).\n    const saveCalls = calls.filter(c =>\n      c.target === 'save' && c.source === 'processMultiCall',\n    );\n    expect(saveCalls.length).toBe(2);\n  });\n\n  // --- when with else: is User arm narrows, else does not ---\n\n  it('resolves obj.save() in when/is User + else to User#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processWithElse' && c.targetFilePath === 'models/User.kt',\n    
);\n    expect(userSave).toBeDefined();\n  });\n\n  it('does NOT resolve processWithElse to Repo#save or Admin#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongRepo = calls.find(c =>\n      c.target === 'save' && c.source === 'processWithElse' && c.targetFilePath === 'models/Repo.kt',\n    );\n    const wrongAdmin = calls.find(c =>\n      c.target === 'save' && c.source === 'processWithElse' && c.targetFilePath === 'models/Admin.kt',\n    );\n    expect(wrongRepo).toBeUndefined();\n    expect(wrongAdmin).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Kotlin for-loop with call_expression iterable: for (user in getUsers())\n// Phase 7.3: call_expression iterable resolution via ReturnTypeLookup\n// ---------------------------------------------------------------------------\n\ndescribe('Kotlin for-loop call_expression iterable resolution (Phase 7.3)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'kotlin-foreach-call-expr'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes with competing save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveFns = getNodesByLabel(result, 'Function').filter(f => f === 'save');\n    expect(saveFns.length).toBe(2);\n  });\n\n  it('resolves user.save() in for-loop over getUsers() to User#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processUsers' && c.targetFilePath?.includes('User.kt'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('resolves repo.save() in for-loop over getRepos() to Repo#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c 
=>\n      c.target === 'save' && c.source === 'processRepos' && c.targetFilePath?.includes('Repo.kt'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('does NOT resolve user.save() to Repo#save (negative)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processUsers' && c.targetFilePath?.includes('Repo.kt'),\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n\n  it('does NOT resolve repo.save() to User#save (negative)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processRepos' && c.targetFilePath?.includes('User.kt'),\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Phase 8: Field/property type resolution (1-level)\n// ---------------------------------------------------------------------------\n\ndescribe('Field type resolution (Kotlin)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'kotlin-field-types'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects classes: Address, User', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['Address', 'User']);\n  });\n\n  it('detects Property nodes for Kotlin properties', () => {\n    const properties = getNodesByLabel(result, 'Property');\n    expect(properties).toContain('address');\n    expect(properties).toContain('name');\n    expect(properties).toContain('city');\n  });\n\n  it('emits HAS_PROPERTY edges linking properties to classes', () => {\n    const propEdges = getRelationships(result, 'HAS_PROPERTY');\n    expect(propEdges.length).toBe(3);\n    expect(edgeSet(propEdges)).toContain('User → address');\n    expect(edgeSet(propEdges)).toContain('User → name');\n    
expect(edgeSet(propEdges)).toContain('Address → city');\n  });\n\n  it('resolves user.address.save() → Address#save via field type', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(e => e.target === 'save');\n    const addressSave = saveCalls.find(\n      e => e.source === 'processUser' && e.targetFilePath.includes('Models'),\n    );\n    expect(addressSave).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Phase 8A: Deep field chain resolution (3-level)\n// ---------------------------------------------------------------------------\n\ndescribe('Deep field chain resolution (Kotlin)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'kotlin-deep-field-chain'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects classes: Address, City, User', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['Address', 'City', 'User']);\n  });\n\n  it('detects Property nodes for Kotlin properties', () => {\n    const properties = getNodesByLabel(result, 'Property');\n    expect(properties).toContain('address');\n    expect(properties).toContain('city');\n    expect(properties).toContain('zipCode');\n  });\n\n  it('emits HAS_PROPERTY edges for nested type chain', () => {\n    const propEdges = getRelationships(result, 'HAS_PROPERTY');\n    expect(edgeSet(propEdges)).toContain('User → address');\n    expect(edgeSet(propEdges)).toContain('Address → city');\n    expect(edgeSet(propEdges)).toContain('City → zipCode');\n  });\n\n  it('resolves 2-level chain: user.address.save() → Address#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(e => e.target === 'save' && e.source === 'processUser');\n    const addressSave = saveCalls.find(e => e.targetFilePath.includes('Models'));\n    expect(addressSave).toBeDefined();\n  
});\n\n  it('resolves 3-level chain: user.address.city.getName() → City#getName', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const getNameCalls = calls.filter(e => e.target === 'getName' && e.source === 'processUser');\n    const cityGetName = getNameCalls.find(e => e.targetFilePath.includes('Models'));\n    expect(cityGetName).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Kotlin data class primary constructor val/var properties\n// ---------------------------------------------------------------------------\n\ndescribe('Kotlin data class primary constructor property capture', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'kotlin-data-class-fields'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects classes: Address, User', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['Address', 'User']);\n  });\n\n  it('detects Property nodes for data class val parameters', () => {\n    const properties = getNodesByLabel(result, 'Property');\n    expect(properties).toContain('name');\n    expect(properties).toContain('address');\n    expect(properties).toContain('age');\n  });\n\n  it('emits HAS_PROPERTY edges for primary constructor properties', () => {\n    const propEdges = getRelationships(result, 'HAS_PROPERTY');\n    expect(edgeSet(propEdges)).toContain('User → name');\n    expect(edgeSet(propEdges)).toContain('User → address');\n    expect(edgeSet(propEdges)).toContain('User → age');\n  });\n\n  it('resolves user.address.save() → Address#save via data class field type', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(e => e.target === 'save');\n    const addressSave = saveCalls.find(\n      e => e.source === 'processUser' && e.targetFilePath.includes('Models'),\n    );\n    expect(addressSave).toBeDefined();\n  
});\n});\n\n// ---------------------------------------------------------------------------\n// ACCESSES write edges from assignment expressions\n// ---------------------------------------------------------------------------\n\ndescribe('Write access tracking (Kotlin)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'kotlin-write-access'),\n      () => {},\n    );\n  }, 60000);\n\n  it('emits ACCESSES write edges for property assignments', () => {\n    const accesses = getRelationships(result, 'ACCESSES');\n    const writes = accesses.filter(e => e.rel.reason === 'write');\n    expect(writes.length).toBe(3);\n    const nameWrite = writes.find(e => e.target === 'name');\n    const addressWrite = writes.find(e => e.target === 'address');\n    const scoreWrite = writes.find(e => e.target === 'score');\n    expect(nameWrite).toBeDefined();\n    expect(nameWrite!.source).toBe('updateUser');\n    expect(addressWrite).toBeDefined();\n    expect(addressWrite!.source).toBe('updateUser');\n    expect(scoreWrite).toBeDefined();\n    expect(scoreWrite!.source).toBe('updateUser');\n  });\n\n  it('emits ACCESSES write edge for compound assignment (+=)', () => {\n    const accesses = getRelationships(result, 'ACCESSES');\n    const writes = accesses.filter(e => e.rel.reason === 'write');\n    const scoreWrite = writes.find(e => e.target === 'score');\n    expect(scoreWrite).toBeDefined();\n    expect(scoreWrite!.source).toBe('updateUser');\n  });\n\n  it('write ACCESSES edges have confidence 1.0', () => {\n    const accesses = getRelationships(result, 'ACCESSES');\n    const writes = accesses.filter(e => e.rel.reason === 'write');\n    for (const edge of writes) {\n      expect(edge.rel.confidence).toBe(1.0);\n    }\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Call-result variable binding (Phase 9): val user = getUser(); user.save()\n// 
---------------------------------------------------------------------------\n\ndescribe('Kotlin call-result variable binding (Tier 2b)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'kotlin-call-result-binding'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves user.save() to User#save via call-result binding', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processUser' && c.targetFilePath.includes('User')\n    );\n    expect(saveCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Method chain binding (Phase 9C): getUser() → .address → .getCity() → .save()\n// ---------------------------------------------------------------------------\n\ndescribe('Kotlin method chain binding via unified fixpoint (Phase 9C)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'kotlin-method-chain-binding'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves city.save() to City#save via method chain', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processChain' && c.targetFilePath.includes('Models')\n    );\n    expect(saveCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Phase B: Deep MRO — walkParentChain() at depth 2 (C→B→A)\n// greet() is defined on A, accessed via C. 
Tests BFS depth-2 parent traversal.\n// ---------------------------------------------------------------------------\n\ndescribe('Kotlin grandparent method resolution via MRO (Phase B)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'kotlin-grandparent-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects A, B, C, Greeting classes', () => {\n    const classes = getNodesByLabel(result, 'Class');\n    expect(classes).toContain('A');\n    expect(classes).toContain('B');\n    expect(classes).toContain('C');\n    expect(classes).toContain('Greeting');\n  });\n\n  it('emits EXTENDS edges: B→A, C→B', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    expect(edgeSet(extends_)).toContain('B → A');\n    expect(edgeSet(extends_)).toContain('C → B');\n  });\n\n  it('resolves c.greet().save() to Greeting#save via depth-2 MRO lookup', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.targetFilePath.includes('Greeting'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n\n  it('resolves c.greet() to A#greet (method found via MRO walk)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const greetCall = calls.find(c =>\n      c.target === 'greet' && c.targetFilePath.includes('A.kt'),\n    );\n    expect(greetCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Phase C: Kotlin null-check narrowing — if (x != null) { x.save() }\n// NOTE: depends on nullable_type capture being fixed in jvm.ts\n// ---------------------------------------------------------------------------\n\ndescribe('Kotlin null-check narrowing resolution (Phase C)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 
'kotlin-null-check-narrowing'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n  });\n\n  it('resolves x.save() inside != null guard to User#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processNullable' && c.targetFilePath.includes('User'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n\n  it('does NOT resolve to Repo#save (no cross-contamination)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongCall = calls.find(c =>\n      c.target === 'save' && c.targetFilePath.includes('Repo'),\n    );\n    expect(wrongCall).toBeUndefined();\n  });\n\n  it('resolves x.save() from local variable val x: User? via null-check narrowing', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processLocalNullable' && c.targetFilePath.includes('User'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n});\n\n// ── Phase P: Overload Disambiguation via Parameter Types ─────────────────\n\ndescribe('Kotlin overload disambiguation by parameter types', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'kotlin-overload-param-types'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects lookup function with parameterTypes on graph node', () => {\n    const nodes = getNodesByLabelFull(result, 'Function');\n    const lookupNodes = nodes.filter(m => m.name === 'lookup');\n    expect(lookupNodes.length).toBe(1);\n    expect(lookupNodes[0].properties.parameterTypes).toEqual(['Int']);\n  });\n\n  it('emits CALLS edge from run() → lookup() via overload disambiguation', () => {\n    const calls = 
getRelationships(result, 'CALLS');\n    const lookupCalls = calls.filter(c => c.source === 'run' && c.target === 'lookup');\n    // Both lookup(42) and lookup(\"alice\") resolve to same nodeId → 1 CALLS edge\n    expect(lookupCalls.length).toBe(1);\n  });\n});\n\n// ── Phase P: Virtual Dispatch via Constructor Type (cross-file) ──────────\n\ndescribe('Kotlin virtual dispatch via constructor type (cross-file)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'kotlin-virtual-dispatch'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects Dog class', () => {\n    const classes = getNodesByLabel(result, 'Class');\n    expect(classes).toContain('Dog');\n  });\n\n  it('resolves animal.speak() to models/Dog.kt via constructor type override', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const speakCall = calls.find(c =>\n      c.source === 'process' && c.target === 'speak' && c.targetFilePath === 'models/Dog.kt',\n    );\n    expect(speakCall).toBeDefined();\n  });\n});\n\n// ── Phase P: Default Parameter Arity Resolution ──────────────────────────\n\ndescribe('Kotlin default parameter arity resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'kotlin-default-params'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves greet(\"Alice\") with 1 arg to greet with 2 params (1 default)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const greetCalls = calls.filter(c => c.source === 'process' && c.target === 'greet');\n    expect(greetCalls.length).toBe(1);\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/integration/resolvers/php.test.ts",
    "content": "/**\n * PHP: PSR-4 imports, extends, implements, trait use, enums, calls + ambiguous disambiguation\n */\nimport { describe, it, expect, beforeAll } from 'vitest';\nimport path from 'path';\nimport {\n  FIXTURES, getRelationships, getNodesByLabel, edgeSet,\n  runPipelineFromRepo, type PipelineResult,\n} from './helpers.js';\n\n// ---------------------------------------------------------------------------\n// Heritage: PSR-4 imports, extends, implements, trait use, enums, calls\n// ---------------------------------------------------------------------------\n\ndescribe('PHP heritage & import resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'php-app'),\n      () => {},\n    );\n  }, 60000);\n\n  // --- Node detection ---\n\n  it('detects 3 classes', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['BaseModel', 'User', 'UserService']);\n  });\n\n  it('detects 2 interfaces', () => {\n    expect(getNodesByLabel(result, 'Interface')).toEqual(['Loggable', 'Repository']);\n  });\n\n  it('detects 2 traits', () => {\n    expect(getNodesByLabel(result, 'Trait')).toEqual(['HasTimestamps', 'SoftDeletes']);\n  });\n\n  it('detects 1 enum (PHP 8.1)', () => {\n    expect(getNodesByLabel(result, 'Enum')).toEqual(['UserRole']);\n  });\n\n  it('detects 8 namespaces across all files', () => {\n    const ns = getNodesByLabel(result, 'Namespace');\n    expect(ns.length).toBe(8);\n  });\n\n  // --- Heritage edges ---\n\n  it('emits exactly 1 EXTENDS edge: User → BaseModel', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    expect(extends_.length).toBe(1);\n    expect(extends_[0].source).toBe('User');\n    expect(extends_[0].target).toBe('BaseModel');\n  });\n\n  it('emits 4 IMPLEMENTS edges: class→interface + class→trait', () => {\n    const implements_ = getRelationships(result, 'IMPLEMENTS');\n    expect(edgeSet(implements_)).toEqual([\n 
     'BaseModel → HasTimestamps',\n      'BaseModel → Loggable',\n      'User → SoftDeletes',\n      'UserService → Repository',\n    ]);\n  });\n\n  // --- Import (use-statement) resolution via PSR-4 ---\n\n  it('resolves 6 IMPORTS edges via PSR-4 composer.json', () => {\n    const imports = getRelationships(result, 'IMPORTS');\n    expect(edgeSet(imports)).toEqual([\n      'BaseModel.php → HasTimestamps.php',\n      'BaseModel.php → Loggable.php',\n      'User.php → SoftDeletes.php',\n      'UserService.php → Repository.php',\n      'UserService.php → User.php',\n      'UserService.php → UserRole.php',\n    ]);\n  });\n\n  // --- Method/function call edges ---\n\n  it('emits CALLS edges from createUser', () => {\n    const calls = getRelationships(result, 'CALLS')\n      .filter(e => e.source === 'createUser');\n    const targets = calls.map(c => c.target).sort();\n    expect(targets).toContain('save');\n    expect(targets).toContain('touch');\n    expect(targets).toContain('label');\n  });\n\n  it('emits CALLS edge: save → getId', () => {\n    const calls = getRelationships(result, 'CALLS')\n      .filter(e => e.source === 'save' && e.target === 'getId');\n    expect(calls.length).toBe(1);\n  });\n\n  // --- Methods and properties ---\n\n  it('detects methods on classes, interfaces, traits, and enums', () => {\n    const methods = getNodesByLabel(result, 'Method');\n    expect(methods).toContain('getId');\n    expect(methods).toContain('log');\n    expect(methods).toContain('touch');\n    expect(methods).toContain('softDelete');\n    expect(methods).toContain('restore');\n    expect(methods).toContain('find');\n    expect(methods).toContain('save');\n    expect(methods).toContain('createUser');\n    expect(methods).toContain('instance');\n    expect(methods).toContain('label');\n    expect(methods).toContain('__construct');\n  });\n\n  it('detects properties on classes and traits', () => {\n    const props = getNodesByLabel(result, 'Property');\n    
expect(props).toContain('id');\n    expect(props).toContain('name');\n    expect(props).toContain('email');\n    expect(props).toContain('users');\n    // $status defined in both HasTimestamps and SoftDeletes traits\n    expect(props.filter(p => p === 'status').length).toBe(2);\n  });\n\n  // --- Property OVERRIDES exclusion ---\n\n  it('does not emit OVERRIDES for property name collisions ($status in both traits)', () => {\n    const overrides = getRelationships(result, 'OVERRIDES');\n    // OVERRIDES should only target Method nodes, never Property nodes\n    for (const edge of overrides) {\n      const target = result.graph.getNode(edge.rel.targetId);\n      expect(target).toBeDefined();\n      expect(target!.label).not.toBe('Property');\n    }\n  });\n\n  // --- MRO: OVERRIDES edge ---\n\n  it('emits OVERRIDES edge for User overriding log (inherited from BaseModel)', () => {\n    const overrides = getRelationships(result, 'OVERRIDES');\n    expect(overrides.length).toBe(1);\n    const logOverride = overrides.find(e => e.source === 'User' && e.target === 'log');\n    expect(logOverride).toBeDefined();\n  });\n\n  // --- All heritage edges point to real graph nodes ---\n\n  it('all heritage edges point to real graph nodes (no synthetic)', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    const implements_ = getRelationships(result, 'IMPLEMENTS');\n\n    for (const edge of [...extends_, ...implements_]) {\n      const target = result.graph.getNode(edge.rel.targetId);\n      expect(target).toBeDefined();\n      expect(target!.properties.name).toBe(edge.target);\n    }\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Ambiguous: Handler + Dispatchable, PSR-4 use-imports disambiguate\n// ---------------------------------------------------------------------------\n\ndescribe('PHP ambiguous symbol resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await 
runPipelineFromRepo(\n      path.join(FIXTURES, 'php-ambiguous'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects 2 Handler classes and 2 Dispatchable interfaces', () => {\n    const classes = getNodesByLabel(result, 'Class');\n    expect(classes.filter(n => n === 'Handler').length).toBe(2);\n    const ifaces = getNodesByLabel(result, 'Interface');\n    expect(ifaces.filter(n => n === 'Dispatchable').length).toBe(2);\n  });\n\n  it('resolves EXTENDS to app/Models/Handler.php (not app/Other/)', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    expect(extends_.length).toBe(1);\n    expect(extends_[0].source).toBe('UserHandler');\n    expect(extends_[0].target).toBe('Handler');\n    expect(extends_[0].targetFilePath).toBe('app/Models/Handler.php');\n  });\n\n  it('resolves IMPLEMENTS to app/Models/Dispatchable.php (not app/Other/)', () => {\n    const implements_ = getRelationships(result, 'IMPLEMENTS');\n    expect(implements_.length).toBe(1);\n    expect(implements_[0].source).toBe('UserHandler');\n    expect(implements_[0].target).toBe('Dispatchable');\n    expect(implements_[0].targetFilePath).toBe('app/Models/Dispatchable.php');\n  });\n\n  it('import edges point to app/Models/ not app/Other/', () => {\n    const imports = getRelationships(result, 'IMPORTS');\n    for (const imp of imports) {\n      expect(imp.targetFilePath).toMatch(/^app\\/Models\\//);\n    }\n  });\n\n  it('all heritage edges point to real graph nodes', () => {\n    for (const edge of [...getRelationships(result, 'EXTENDS'), ...getRelationships(result, 'IMPLEMENTS')]) {\n      const target = result.graph.getNode(edge.rel.targetId);\n      expect(target).toBeDefined();\n      expect(target!.properties.name).toBe(edge.target);\n    }\n  });\n});\n\ndescribe('PHP call resolution with arity filtering', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'php-calls'),\n      () => {},\n 
   );\n  }, 60000);\n\n  it('resolves create_user → write_audit to app/Utils/OneArg/log.php via arity narrowing', () => {\n    const calls = getRelationships(result, 'CALLS');\n    expect(calls.length).toBe(1);\n    expect(calls[0].source).toBe('create_user');\n    expect(calls[0].target).toBe('write_audit');\n    expect(calls[0].targetFilePath).toBe('app/Utils/OneArg/log.php');\n    expect(calls[0].rel.reason).toBe('import-resolved');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Member-call resolution: $obj->method() resolves through pipeline\n// ---------------------------------------------------------------------------\n\ndescribe('PHP member-call resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'php-member-calls'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves processUser → save as a member call on User', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.source).toBe('processUser');\n    expect(saveCall!.targetFilePath).toBe('app/Models/User.php');\n  });\n\n  it('detects User class and save method', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Method')).toContain('save');\n  });\n\n  it('emits HAS_METHOD edge from User to save', () => {\n    const hasMethod = getRelationships(result, 'HAS_METHOD');\n    const edge = hasMethod.find(e => e.source === 'User' && e.target === 'save');\n    expect(edge).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Constructor resolution: new User() resolves to Class node\n// ---------------------------------------------------------------------------\n\ndescribe('PHP constructor-call resolution', () => {\n  let 
result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'php-constructor-calls'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves new User() as a CALLS edge to the User class', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const ctorCall = calls.find(c => c.target === 'User');\n    expect(ctorCall).toBeDefined();\n    expect(ctorCall!.source).toBe('processUser');\n    expect(ctorCall!.targetLabel).toBe('Class');\n    expect(ctorCall!.targetFilePath).toBe('Models/User.php');\n    expect(ctorCall!.rel.reason).toBe('import-resolved');\n  });\n\n  it('also resolves $user->save() as a member call', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.source).toBe('processUser');\n  });\n\n  it('detects User class, __construct method, and save method', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Method')).toContain('__construct');\n    expect(getNodesByLabel(result, 'Method')).toContain('save');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Receiver-constrained resolution: typed parameters disambiguate same-named methods\n// ---------------------------------------------------------------------------\n\ndescribe('PHP receiver-constrained resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'php-receiver-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes, both with save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'save');\n    
expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves $user->save() to User.save and $repo->save() to Repo.save via receiver typing', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'save');\n    expect(saveCalls.length).toBe(2);\n\n    const userSave = saveCalls.find(c => c.targetFilePath === 'app/Models/User.php');\n    const repoSave = saveCalls.find(c => c.targetFilePath === 'app/Models/Repo.php');\n\n    expect(userSave).toBeDefined();\n    expect(repoSave).toBeDefined();\n    expect(userSave!.source).toBe('processEntities');\n    expect(repoSave!.source).toBe('processEntities');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Alias import resolution: use App\\Models\\User as U resolves U → User\n// ---------------------------------------------------------------------------\n\ndescribe('PHP alias import resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'php-alias-imports'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects Main, Repo, and User classes with save and persist methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['Main', 'Repo', 'User']);\n    expect(getNodesByLabel(result, 'Method')).toContain('save');\n    expect(getNodesByLabel(result, 'Method')).toContain('persist');\n  });\n\n  it('resolves $u->save() to User.php and $r->persist() to Repo.php via alias', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save');\n    const persistCall = calls.find(c => c.target === 'persist');\n\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.source).toBe('run');\n    expect(saveCall!.targetLabel).toBe('Method');\n    expect(saveCall!.targetFilePath).toBe('app/Models/User.php');\n\n    expect(persistCall).toBeDefined();\n    
expect(persistCall!.source).toBe('run');\n    expect(persistCall!.targetLabel).toBe('Method');\n    expect(persistCall!.targetFilePath).toBe('app/Models/Repo.php');\n  });\n\n  it('emits exactly 2 IMPORTS edges via alias resolution', () => {\n    const imports = getRelationships(result, 'IMPORTS');\n    expect(imports.length).toBe(2);\n    expect(edgeSet(imports)).toEqual([\n      'Main.php → Repo.php',\n      'Main.php → User.php',\n    ]);\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Grouped import with alias: use App\\Models\\{User, Repo as R}\n// ---------------------------------------------------------------------------\n\ndescribe('PHP grouped import with alias', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'php-grouped-imports'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects Main, Repo, and User classes', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['Main', 'Repo', 'User']);\n  });\n\n  it('resolves $r->persist() to Repo.php via grouped alias', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const persistCall = calls.find(c => c.target === 'persist');\n\n    expect(persistCall).toBeDefined();\n    expect(persistCall!.source).toBe('run');\n    expect(persistCall!.targetFilePath).toBe('app/Models/Repo.php');\n  });\n\n  it('resolves $u->save() to User.php via grouped import', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save');\n\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.source).toBe('run');\n    expect(saveCall!.targetFilePath).toBe('app/Models/User.php');\n  });\n\n  it('resolves non-aliased User via NamedImportMap (not just the aliased Repo)', () => {\n    // Both User (non-aliased) and R→Repo (aliased) should resolve through grouped import\n    const calls = getRelationships(result, 
'CALLS');\n    const saveCall = calls.find(c => c.target === 'save' && c.source === 'run');\n    const persistCall = calls.find(c => c.target === 'persist' && c.source === 'run');\n    expect(saveCall).toBeDefined();\n    expect(persistCall).toBeDefined();\n    expect(saveCall!.targetFilePath).toBe('app/Models/User.php');\n    expect(persistCall!.targetFilePath).toBe('app/Models/Repo.php');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Variadic resolution: ...$args don't get filtered by arity\n// ---------------------------------------------------------------------------\n\ndescribe('PHP variadic call resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'php-variadic-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves run → Logger.record despite extra args (variadic)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const recordCall = calls.find(c => c.target === 'record');\n    expect(recordCall).toBeDefined();\n    expect(recordCall!.source).toBe('run');\n    expect(recordCall!.targetFilePath).toBe('app/Utils/Logger.php');\n  });\n\n  it('detects Logger class and record method', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('Logger');\n    expect(getNodesByLabel(result, 'Method')).toContain('record');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Local shadow: same-file definition takes priority over imported name\n// ---------------------------------------------------------------------------\n\ndescribe('PHP local definition shadows import', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'php-local-shadow'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves run → save to same-file definition, not the imported one', 
() => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save' && c.source === 'run');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.targetFilePath).toBe('app/Services/Main.php');\n  });\n\n  it('does NOT resolve save to Logger.php', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveToUtils = calls.find(c => c.target === 'save' && c.targetFilePath === 'app/Utils/Logger.php');\n    expect(saveToUtils).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Constructor-inferred type resolution: $user = new User(); $user->save()\n// PHP object_creation_expression (no typed local variable annotations)\n// ---------------------------------------------------------------------------\n\ndescribe('PHP constructor-inferred type resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'php-constructor-type-inference'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes, both with save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves $user->save() to app/Models/User.php via constructor-inferred type', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c => c.target === 'save' && c.targetFilePath === 'app/Models/User.php');\n    expect(userSave).toBeDefined();\n    expect(userSave!.source).toBe('processEntities');\n  });\n\n  it('resolves $repo->save() to app/Models/Repo.php via constructor-inferred type', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c => c.target === 
'save' && c.targetFilePath === 'app/Models/Repo.php');\n    expect(repoSave).toBeDefined();\n    expect(repoSave!.source).toBe('processEntities');\n  });\n\n  it('emits exactly 2 save() CALLS edges (one per receiver type)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'save');\n    expect(saveCalls.length).toBe(2);\n  });\n});\n\n// ---------------------------------------------------------------------------\n// $this->save() resolves to enclosing class's own save method\n// ---------------------------------------------------------------------------\n\ndescribe('PHP $this resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'php-self-this-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes, each with a save method', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['Repo', 'User']);\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves $this->save() inside User.process to User.save, not Repo.save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save' && c.source === 'process');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.targetFilePath).toBe('app/Models/User.php');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Parent class resolution: EXTENDS + IMPLEMENTS edges\n// ---------------------------------------------------------------------------\n\ndescribe('PHP parent class resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'php-parent-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects BaseModel and User classes 
plus Serializable interface', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['BaseModel', 'User']);\n    expect(getNodesByLabel(result, 'Interface')).toEqual(['Serializable']);\n  });\n\n  it('emits EXTENDS edge: User → BaseModel', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    expect(extends_.length).toBe(1);\n    expect(extends_[0].source).toBe('User');\n    expect(extends_[0].target).toBe('BaseModel');\n  });\n\n  it('emits IMPLEMENTS edge: User → Serializable', () => {\n    const implements_ = getRelationships(result, 'IMPLEMENTS');\n    expect(implements_.length).toBe(1);\n    expect(implements_[0].source).toBe('User');\n    expect(implements_[0].target).toBe('Serializable');\n  });\n\n  it('all heritage edges point to real graph nodes', () => {\n    for (const edge of [...getRelationships(result, 'EXTENDS'), ...getRelationships(result, 'IMPLEMENTS')]) {\n      const target = result.graph.getNode(edge.rel.targetId);\n      expect(target).toBeDefined();\n      expect(target!.properties.name).toBe(edge.target);\n    }\n  });\n});\n\n// ---------------------------------------------------------------------------\n// parent::save() resolves to parent class's save method\n// ---------------------------------------------------------------------------\n\ndescribe('PHP parent:: resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'php-super-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects BaseModel, User, and Repo classes', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['BaseModel', 'Repo', 'User']);\n  });\n\n  it('resolves parent::save() inside User to BaseModel.save, not Repo.save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const parentSave = calls.find(c => c.source === 'save' && c.target === 'save'\n      && c.targetFilePath === 'app/Models/BaseModel.php');\n    
expect(parentSave).toBeDefined();\n    const repoSave = calls.find(c => c.target === 'save' && c.targetFilePath === 'app/Models/Repo.php');\n    expect(repoSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// PHP 8.0+ constructor property promotion: __construct(private UserRepo $repo)\n// ---------------------------------------------------------------------------\n\ndescribe('PHP constructor property promotion resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'php-property-promotion'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects UserRepo and UserService classes', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('UserRepo');\n    expect(getNodesByLabel(result, 'Class')).toContain('UserService');\n  });\n\n  it('resolves $repo->save() inside constructor via promoted parameter type', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save' && c.source === '__construct');\n    expect(saveCall).toBeDefined();\n  });\n\n  // NOTE: $this->repo->save() in other methods requires multi-step receiver resolution\n  // (chained property access), which is a cross-language architectural feature not yet\n  // implemented. 
The promoted parameter type IS extracted into the TypeEnv — it just\n  // can't be accessed via $this->property chains yet.\n});\n\n// ---------------------------------------------------------------------------\n// PHP 7.4+ typed class property resolution: private UserRepo $repo;\n// ---------------------------------------------------------------------------\n\ndescribe('PHP typed class property resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'php-typed-properties'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects UserRepo and UserService classes', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('UserRepo');\n    expect(getNodesByLabel(result, 'Class')).toContain('UserService');\n  });\n\n  it('detects typed property $repo on UserService', () => {\n    expect(getNodesByLabel(result, 'Property')).toContain('repo');\n  });\n\n  it('detects find and save methods on UserRepo', () => {\n    expect(getNodesByLabel(result, 'Method')).toContain('find');\n    expect(getNodesByLabel(result, 'Method')).toContain('save');\n  });\n\n  it('resolves $repo->save() to UserRepo.php via parameter type', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save' && c.source === 'process');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.targetFilePath).toBe('app/Models/UserRepo.php');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Return type inference: $user = $this->getUser(\"alice\"); $user->save()\n// PHP's scanConstructorBinding captures assignment_expression with both\n// function_call_expression and member_call_expression values, enabling\n// return type inference for method calls on objects.\n// ---------------------------------------------------------------------------\n\ndescribe('PHP return type inference via member call', () => 
{\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'php-return-type'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User, UserService, and Repo classes', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('UserService');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n  });\n\n  it('detects save on both User and Repo, and getUser method', () => {\n    const methods = getNodesByLabel(result, 'Method');\n    expect(methods).toContain('save');\n    expect(methods).toContain('getUser');\n    // save exists on both User and Repo — disambiguation required\n    expect(methods.filter((m: string) => m === 'save').length).toBe(2);\n  });\n\n  it('resolves $user->save() to User#save (not Repo#save) via return type of getUser(): User', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processUser' && c.targetFilePath.includes('User.php'),\n    );\n    expect(saveCall).toBeDefined();\n    // Must NOT resolve to Repo.save — that would mean disambiguation failed\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processUser' && c.targetFilePath.includes('Repo.php'),\n    );\n    expect(repoSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// PHPDoc @return annotation: return type inference without native type hints\n// ---------------------------------------------------------------------------\n\ndescribe('PHP return type inference via PHPDoc @return annotation', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'php-phpdoc-return-type'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes with 
save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n  });\n\n  it('resolves $user->save() to User#save via PHPDoc @return User', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processUser' && c.targetFilePath.includes('Models.php'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n\n  it('resolves $repo->save() to Repo#save via PHPDoc @return Repo', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processRepo' && c.targetFilePath.includes('Models.php'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n\n  it('resolves $user->save() via PHPDoc @param User $user in handleUser()', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'handleUser' && c.targetFilePath.includes('Models.php'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n\n  it('resolves $repo->save() via PHPDoc @param Repo $repo in handleRepo()', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'handleRepo' && c.targetFilePath.includes('Models.php'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// PHPDoc @return with PHP 8+ attributes (#[Route]) between doc-comment and method\n// ---------------------------------------------------------------------------\n\ndescribe('PHP PHPDoc @return with attributes between comment and method', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'php-phpdoc-attribute-return-type'),\n      () => {},\n    );\n  
}, 60000);\n\n  it('detects User and Repo classes with save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n  });\n\n  it('resolves $user->save() to User#save despite #[Route] attribute between PHPDoc and method', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processUser' && c.targetFilePath.includes('Models.php'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n\n  it('resolves $repo->save() to Repo#save despite #[Route] attribute between PHPDoc and method', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processRepo' && c.targetFilePath.includes('Models.php'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n\n  it('resolves $user->save() via PHPDoc @param despite #[Validate] attribute', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'handleUser' && c.targetFilePath.includes('Models.php'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n\n  it('resolves $repo->save() via PHPDoc @param despite #[Validate] attribute', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'handleRepo' && c.targetFilePath.includes('Models.php'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// $this->method() receiver disambiguation: two classes with same method name\n// ---------------------------------------------------------------------------\n\ndescribe('PHP $this->method() receiver disambiguation', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await 
runPipelineFromRepo(\n      path.join(FIXTURES, 'php-this-receiver-disambiguation'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects UserService and AdminService classes, both with getUser methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('UserService');\n    expect(getNodesByLabel(result, 'Class')).toContain('AdminService');\n    const getUserMethods = getNodesByLabel(result, 'Method').filter(m => m === 'getUser');\n    expect(getUserMethods.length).toBe(2);\n  });\n\n  it('resolves $user->save() in UserService to User#save via $this->getUser() disambiguation', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processUser' && c.targetFilePath.includes('Models.php'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n\n  it('resolves $repo->save() in AdminService to Repo#save via $this->getUser() disambiguation', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processAdmin' && c.targetFilePath.includes('Models.php'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Nullable receiver unwrapping: ?User type hint stripped to User for resolution\n// ---------------------------------------------------------------------------\n\ndescribe('PHP nullable receiver resolution (?Type hint)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'php-nullable-receiver'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes with competing save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveMethods = getNodesByLabel(result, 'Method').filter((m: 
string) => m === 'save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves $user->save() to User#save via nullable param type', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process' && c.targetFilePath.includes('User.php'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('resolves $repo->save() to Repo#save via nullable param type', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process' && c.targetFilePath.includes('Repo.php'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('does NOT cross-contaminate (exactly 1 save per receiver file)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'save' && c.source === 'process');\n    const userTargeted = saveCalls.filter(c => c.targetFilePath.includes('User.php'));\n    const repoTargeted = saveCalls.filter(c => c.targetFilePath.includes('Repo.php'));\n    expect(userTargeted.length).toBe(1);\n    expect(repoTargeted.length).toBe(1);\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Assignment chain propagation\n// ---------------------------------------------------------------------------\n\ndescribe('PHP assignment chain propagation', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'php-assignment-chain'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes each with a save method', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  
it('resolves alias->save() to User#save via assignment chain', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process' && c.targetFilePath.includes('User.php'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('resolves rAlias->save() to Repo#save via assignment chain', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process' && c.targetFilePath.includes('Repo.php'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('alias->save() does NOT resolve to Repo#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    // There should be exactly one save() call targeting User.php from process\n    const userSaves = calls.filter(c =>\n      c.target === 'save' && c.source === 'process' && c.targetFilePath.includes('User.php'),\n    );\n    expect(userSaves.length).toBe(1);\n  });\n\n  it('each alias resolves to its own class, not the other', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process' && c.targetFilePath.includes('User.php'),\n    );\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process' && c.targetFilePath.includes('Repo.php'),\n    );\n    expect(userSave).toBeDefined();\n    expect(repoSave).toBeDefined();\n    expect(userSave!.targetFilePath).not.toBe(repoSave!.targetFilePath);\n  });\n});\n\n// ---------------------------------------------------------------------------\n// PHP foreach ($users as $user) — Tier 1c\n// ---------------------------------------------------------------------------\n\ndescribe('PHP foreach loop resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'php-foreach-loop'),\n      () => 
{},\n    );\n  }, 60000);\n\n  it('detects User class with save method', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n  });\n\n  it('resolves $user->save() in foreach to User#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processUsers' && c.targetFilePath?.includes('User'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('does NOT resolve $user->save() to Repo#save (negative)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processUsers' && c.targetFilePath?.includes('Repo'),\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// PHP foreach with PHPDoc generic Collection<User> — element type extraction\n// Bug fix: normalizePhpType('Collection<User>') must yield 'User', not 'Collection'\n// ---------------------------------------------------------------------------\n\ndescribe('PHP foreach with PHPDoc generic Collection<User>', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'php-foreach-generic'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes with save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves $user->save() in foreach with Collection<User> PHPDoc to User#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processCollection' && c.targetFilePath?.includes('User'),\n    
);\n    expect(userSave).toBeDefined();\n  });\n\n  it('does NOT resolve Collection<User> foreach to Repo#save (false binding regression)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processCollection' && c.targetFilePath?.includes('Repo'),\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n\n  it('User[] array-style PHPDoc still resolves correctly (regression check)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const arraySave = calls.find(c =>\n      c.target === 'save' && c.source === 'processArray',\n    );\n    expect(arraySave).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// PHP foreach ($this->users as $user) — member access key mismatch fix\n// Bug fix: member_access_expression.name returns 'users' but scopeEnv stores '$users'\n// ---------------------------------------------------------------------------\n\ndescribe('PHP foreach with $this->property member access', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'php-foreach-member-access'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User class with save method', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Method')).toContain('save');\n  });\n\n  it('resolves $user->save() in foreach($this->users) to User#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processMembers' && c.targetFilePath?.includes('User'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('does NOT resolve $this->users foreach to Repo#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'save' && 
c.source === 'processMembers' && c.targetFilePath?.includes('Repo'),\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// PHP foreach with call_expression iterable: foreach (getUsers() as $user)\n// Phase 7.3: function_call_expression iterable resolution via ReturnTypeLookup\n// ---------------------------------------------------------------------------\n\ndescribe('PHP foreach call_expression iterable resolution (Phase 7.3)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'php-foreach-call-expr'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes with competing save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n  });\n\n  it('resolves $user->save() in foreach over getUsers() to User#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processUsers' && c.targetFilePath?.includes('User'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('resolves $repo->save() in foreach over getRepos() to Repo#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processRepos' && c.targetFilePath?.includes('Repo'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('does NOT resolve $user->save() to Repo#save (negative)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processUsers' && c.targetFilePath?.includes('Repo'),\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n\n  it('does NOT resolve $repo->save() to User#save (negative)', () => {\n    const calls = 
getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processRepos' && c.targetFilePath?.includes('User'),\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Phase 8: Field/property type resolution (1-level)\n// ---------------------------------------------------------------------------\n\ndescribe('Field type resolution (PHP)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'php-field-types'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects classes: Address, Service, User', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['Address', 'Service', 'User']);\n  });\n\n  it('detects Property nodes for PHP properties', () => {\n    const properties = getNodesByLabel(result, 'Property');\n    expect(properties).toContain('address');\n    expect(properties).toContain('name');\n    expect(properties).toContain('city');\n  });\n\n  it('emits HAS_PROPERTY edges linking properties to classes', () => {\n    const propEdges = getRelationships(result, 'HAS_PROPERTY');\n    expect(propEdges.length).toBe(3);\n  });\n\n  it('resolves $user->address->save() → Address#save via field type', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(e => e.target === 'save');\n    const addressSave = saveCalls.find(\n      e => e.source === 'processUser' && e.targetFilePath.includes('Models'),\n    );\n    expect(addressSave).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Phase 8A: Deep field chain resolution (3-level)\n// ---------------------------------------------------------------------------\n\ndescribe('Deep field chain resolution (PHP)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    
result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'php-deep-field-chain'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects classes: Address, City, Service, User', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['Address', 'City', 'Service', 'User']);\n  });\n\n  it('detects Property nodes for PHP properties', () => {\n    const properties = getNodesByLabel(result, 'Property');\n    expect(properties).toContain('address');\n    expect(properties).toContain('city');\n    expect(properties).toContain('zipCode');\n  });\n\n  it('emits HAS_PROPERTY edges for nested type chain', () => {\n    const propEdges = getRelationships(result, 'HAS_PROPERTY');\n    expect(propEdges.length).toBe(5);\n    expect(edgeSet(propEdges)).toContain('User → name');\n    expect(edgeSet(propEdges)).toContain('User → address');\n    expect(edgeSet(propEdges)).toContain('Address → city');\n    expect(edgeSet(propEdges)).toContain('Address → street');\n    expect(edgeSet(propEdges)).toContain('City → zipCode');\n  });\n\n  it('resolves 2-level chain: $user->address->save() → Address#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(e => e.target === 'save' && e.source === 'processUser');\n    const addressSave = saveCalls.find(e => e.targetFilePath.includes('Models'));\n    expect(addressSave).toBeDefined();\n  });\n\n  it('resolves 3-level chain: $user->address->city->getName() → City#getName', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const getNameCalls = calls.filter(e => e.target === 'getName' && e.source === 'processUser');\n    const cityGetName = getNameCalls.find(e => e.targetFilePath.includes('Models'));\n    expect(cityGetName).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// PHP 8.0+ constructor promotion as property declarations\n// 
---------------------------------------------------------------------------\n\ndescribe('PHP constructor promotion property capture', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'php-constructor-promotion-fields'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects classes: Address, Service, User', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['Address', 'Service', 'User']);\n  });\n\n  it('detects Property nodes for promoted constructor parameters', () => {\n    const properties = getNodesByLabel(result, 'Property');\n    expect(properties).toContain('name');\n    expect(properties).toContain('address');\n  });\n\n  it('emits HAS_PROPERTY edges for promoted parameters', () => {\n    const propEdges = getRelationships(result, 'HAS_PROPERTY');\n    expect(edgeSet(propEdges)).toContain('User → name');\n    expect(edgeSet(propEdges)).toContain('User → address');\n  });\n\n  it('resolves $user->address->save() → Address#save via promoted field type', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(e => e.target === 'save');\n    const addressSave = saveCalls.find(\n      e => e.source === 'processUser' && e.targetFilePath.includes('Models'),\n    );\n    expect(addressSave).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// PHP default parameter arity resolution\n// ---------------------------------------------------------------------------\n\ndescribe('PHP default parameter arity resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'php-default-params'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves greet(\"Alice\") with 1 arg to greet with 2 params (1 default)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const 
greetCalls = calls.filter(c => c.source === 'process' && c.target === 'greet');\n    expect(greetCalls.length).toBe(1);\n  });\n});\n\n// ---------------------------------------------------------------------------\n// ACCESSES write edges from assignment expressions\n// ---------------------------------------------------------------------------\n\ndescribe('Write access tracking (PHP)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'php-write-access'),\n      () => {},\n    );\n  }, 60000);\n\n  it('emits ACCESSES write edges for field assignments', () => {\n    const accesses = getRelationships(result, 'ACCESSES');\n    const writes = accesses.filter(e => e.rel.reason === 'write');\n    expect(writes.length).toBe(3);\n    const nameWrite = writes.find(e => e.target === 'name');\n    const addressWrite = writes.find(e => e.target === 'address');\n    const countWrite = writes.find(e => e.target === 'count');\n    expect(nameWrite).toBeDefined();\n    expect(nameWrite!.source).toBe('updateUser');\n    expect(addressWrite).toBeDefined();\n    expect(addressWrite!.source).toBe('updateUser');\n    expect(countWrite).toBeDefined();\n    expect(countWrite!.source).toBe('updateUser');\n  });\n\n  it('emits ACCESSES write edge for static property assignment', () => {\n    const accesses = getRelationships(result, 'ACCESSES');\n    const writes = accesses.filter(e => e.rel.reason === 'write');\n    const countWrite = writes.find(e => e.target === 'count');\n    expect(countWrite).toBeDefined();\n    expect(countWrite!.source).toBe('updateUser');\n  });\n\n  it('write ACCESSES edges have confidence 1.0', () => {\n    const accesses = getRelationships(result, 'ACCESSES');\n    const writes = accesses.filter(e => e.rel.reason === 'write');\n    for (const edge of writes) {\n      expect(edge.rel.confidence).toBe(1.0);\n    }\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Call-result variable binding (Phase 9): $user 
= getUser(); $user->save()\n// ---------------------------------------------------------------------------\n\ndescribe('PHP call-result variable binding (Tier 2b)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'php-call-result-binding'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves $user->save() to User#save via call-result binding', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processUser' && c.targetFilePath.includes('App')\n    );\n    expect(saveCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Method chain binding (Phase 9C): getUser() → ->getCity() → ->save()\n// ---------------------------------------------------------------------------\n\ndescribe('PHP method chain binding via unified fixpoint (Phase 9C)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'php-method-chain-binding'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves $city->save() to City#save via method chain', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processChain' && c.targetFilePath.includes('App')\n    );\n    expect(saveCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Phase B: Deep MRO — walkParentChain() at depth 2 (C→B→A)\n// greet() is defined on A, accessed via C. 
Tests BFS depth-2 parent traversal.\n// ---------------------------------------------------------------------------\n\ndescribe('PHP grandparent method resolution via MRO (Phase B)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'php-grandparent-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects A, B, C, Greeting classes', () => {\n    const classes = getNodesByLabel(result, 'Class');\n    expect(classes).toContain('A');\n    expect(classes).toContain('B');\n    expect(classes).toContain('C');\n    expect(classes).toContain('Greeting');\n  });\n\n  it('emits EXTENDS edges: B→A, C→B', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    expect(edgeSet(extends_)).toContain('B → A');\n    expect(edgeSet(extends_)).toContain('C → B');\n  });\n\n  it('resolves $c->greet()->save() to Greeting#save via depth-2 MRO lookup', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.targetFilePath.includes('Greeting'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n\n  it('resolves $c->greet() to A#greet (method found via MRO walk)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const greetCall = calls.find(c =>\n      c.target === 'greet' && c.targetFilePath.includes('A.php'),\n    );\n    expect(greetCall).toBeDefined();\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/integration/resolvers/python.test.ts",
    "content": "/**\n * Python: relative imports + class inheritance + ambiguous module disambiguation\n */\nimport { describe, it, expect, beforeAll } from 'vitest';\nimport path from 'path';\nimport {\n  FIXTURES, getRelationships, getNodesByLabel, edgeSet,\n  runPipelineFromRepo, type PipelineResult,\n} from './helpers.js';\n\n// ---------------------------------------------------------------------------\n// Heritage: relative imports + class inheritance\n// ---------------------------------------------------------------------------\n\ndescribe('Python relative import & heritage resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'python-pkg'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects exactly 3 classes and 5 functions', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['AuthService', 'BaseModel', 'User']);\n    expect(getNodesByLabel(result, 'Function')).toEqual(['authenticate', 'get_name', 'process_model', 'save', 'validate']);\n  });\n\n  it('emits exactly 1 EXTENDS edge: User → BaseModel', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    expect(extends_.length).toBe(1);\n    expect(extends_[0].source).toBe('User');\n    expect(extends_[0].target).toBe('BaseModel');\n  });\n\n  it('resolves all 3 relative imports', () => {\n    const imports = getRelationships(result, 'IMPORTS');\n    expect(imports.length).toBe(3);\n    expect(edgeSet(imports)).toEqual([\n      'auth.py → user.py',\n      'helpers.py → base.py',\n      'user.py → base.py',\n    ]);\n  });\n\n  it('emits exactly 3 CALLS edges', () => {\n    const calls = getRelationships(result, 'CALLS');\n    expect(calls.length).toBe(3);\n    expect(edgeSet(calls)).toEqual([\n      'authenticate → validate',\n      'process_model → save',\n      'process_model → validate',\n    ]);\n  });\n\n  it('no OVERRIDES edges target Property nodes', () => {\n    const 
overrides = getRelationships(result, 'OVERRIDES');\n    for (const edge of overrides) {\n      const target = result.graph.getNode(edge.rel.targetId);\n      expect(target).toBeDefined();\n      expect(target!.label).not.toBe('Property');\n    }\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Ambiguous: Handler in two packages, relative import disambiguates\n// ---------------------------------------------------------------------------\n\ndescribe('Python ambiguous symbol resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'python-ambiguous'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects 2 Handler classes', () => {\n    const classes = getNodesByLabel(result, 'Class');\n    expect(classes.filter(n => n === 'Handler').length).toBe(2);\n    expect(classes).toContain('UserHandler');\n  });\n\n  it('resolves EXTENDS to models/handler.py (not other/handler.py)', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    expect(extends_.length).toBe(1);\n    expect(extends_[0].source).toBe('UserHandler');\n    expect(extends_[0].target).toBe('Handler');\n    expect(extends_[0].targetFilePath).toBe('models/handler.py');\n  });\n\n  it('import edge points to models/ not other/', () => {\n    const imports = getRelationships(result, 'IMPORTS');\n    expect(imports.length).toBe(1);\n    expect(imports[0].targetFilePath).toBe('models/handler.py');\n  });\n\n  it('all heritage edges point to real graph nodes', () => {\n    for (const edge of getRelationships(result, 'EXTENDS')) {\n      const target = result.graph.getNode(edge.rel.targetId);\n      expect(target).toBeDefined();\n    }\n  });\n});\n\ndescribe('Python call resolution with arity filtering', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'python-calls'),\n    
  () => {},\n    );\n  }, 60000);\n\n  it('resolves run → write_audit to one.py via arity narrowing', () => {\n    const calls = getRelationships(result, 'CALLS');\n    expect(calls.length).toBe(1);\n    expect(calls[0].source).toBe('run');\n    expect(calls[0].target).toBe('write_audit');\n    expect(calls[0].targetFilePath).toBe('one.py');\n    expect(calls[0].rel.reason).toBe('import-resolved');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Member-call resolution: obj.method() resolves through pipeline\n// ---------------------------------------------------------------------------\n\ndescribe('Python member-call resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'python-member-calls'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves process_user → save as a member call on User', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.source).toBe('process_user');\n    expect(saveCall!.targetFilePath).toBe('user.py');\n  });\n\n  it('detects User class and save function (Python methods are Function nodes)', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    // Python tree-sitter captures all function_definitions as Function, including methods\n    expect(getNodesByLabel(result, 'Function')).toContain('save');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Receiver-constrained resolution: typed variables disambiguate same-named methods\n// ---------------------------------------------------------------------------\n\ndescribe('Python receiver-constrained resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 
'python-receiver-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes, both with save functions', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    // Python tree-sitter captures all function_definitions as Function\n    const saveFns = getNodesByLabel(result, 'Function').filter(m => m === 'save');\n    expect(saveFns.length).toBe(2);\n  });\n\n  it('resolves user.save() to User.save and repo.save() to Repo.save via receiver typing', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'save');\n    expect(saveCalls.length).toBe(2);\n\n    const userSave = saveCalls.find(c => c.targetFilePath === 'user.py');\n    const repoSave = saveCalls.find(c => c.targetFilePath === 'repo.py');\n\n    expect(userSave).toBeDefined();\n    expect(repoSave).toBeDefined();\n    expect(userSave!.source).toBe('process_entities');\n    expect(repoSave!.source).toBe('process_entities');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Named import disambiguation: two modules export same name, from-import resolves\n// ---------------------------------------------------------------------------\n\ndescribe('Python named import disambiguation', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'python-named-imports'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves process_input → format_data to format_upper.py via from-import', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const formatCall = calls.find(c => c.target === 'format_data');\n    expect(formatCall).toBeDefined();\n    expect(formatCall!.source).toBe('process_input');\n    expect(formatCall!.targetFilePath).toBe('format_upper.py');\n  });\n\n  it('emits IMPORTS edge to 
format_upper.py', () => {\n    const imports = getRelationships(result, 'IMPORTS');\n    const appImport = imports.find(e => e.source === 'app.py');\n    expect(appImport).toBeDefined();\n    expect(appImport!.targetFilePath).toBe('format_upper.py');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Variadic resolution: *args don't get filtered by arity\n// ---------------------------------------------------------------------------\n\ndescribe('Python variadic call resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'python-variadic-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves process_input → log_entry to logger.py despite 3 args vs *args', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const logCall = calls.find(c => c.target === 'log_entry');\n    expect(logCall).toBeDefined();\n    expect(logCall!.source).toBe('process_input');\n    expect(logCall!.targetFilePath).toBe('logger.py');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Alias import resolution: from x import User as U resolves U → User\n// ---------------------------------------------------------------------------\n\ndescribe('Python alias import resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'python-alias-imports'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['Repo', 'User']);\n  });\n\n  it('resolves u.save() to models.py and r.persist() to models.py via alias', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save');\n    const persistCall = calls.find(c => c.target === 'persist');\n\n    
expect(saveCall).toBeDefined();\n    expect(saveCall!.source).toBe('main');\n    expect(saveCall!.targetFilePath).toBe('models.py');\n\n    expect(persistCall).toBeDefined();\n    expect(persistCall!.source).toBe('main');\n    expect(persistCall!.targetFilePath).toBe('models.py');\n  });\n\n  it('emits exactly 1 IMPORTS edge: app.py → models.py', () => {\n    const imports = getRelationships(result, 'IMPORTS');\n    expect(imports.length).toBe(1);\n    expect(imports[0].sourceFilePath).toBe('app.py');\n    expect(imports[0].targetFilePath).toBe('models.py');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Re-export chain: from .base import X barrel pattern via __init__.py\n// ---------------------------------------------------------------------------\n\ndescribe('Python re-export chain resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'python-reexport-chain'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves user.save() through __init__.py barrel to models/base.py', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.source).toBe('main');\n    expect(saveCall!.targetFilePath).toBe('models/base.py');\n  });\n\n  it('resolves repo.persist() through __init__.py barrel to models/base.py', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const persistCall = calls.find(c => c.target === 'persist');\n    expect(persistCall).toBeDefined();\n    expect(persistCall!.source).toBe('main');\n    expect(persistCall!.targetFilePath).toBe('models/base.py');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Local shadow: same-file definition takes priority over imported name\n// 
---------------------------------------------------------------------------\n\ndescribe('Python local definition shadows import', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'python-local-shadow'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves save(\"test\") to local save in app.py, not utils.py', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save' && c.source === 'main');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.targetFilePath).toBe('app.py');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Bare import: `import user` from services/auth.py resolves to services/user.py\n// not models/user.py, even though models/ is indexed first (proximity wins)\n// ---------------------------------------------------------------------------\n\ndescribe('Python bare import resolution (proximity over index order)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'python-bare-import'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User in models/ and UserService in services/', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('UserService');\n  });\n\n  it('resolves `import user` from services/auth.py to services/user.py, not models/user.py', () => {\n    const imports = getRelationships(result, 'IMPORTS');\n    const imp = imports.find(e => e.sourceFilePath === 'services/auth.py');\n    expect(imp).toBeDefined();\n    expect(imp!.targetFilePath).toBe('services/user.py');\n    expect(imp!.targetFilePath).not.toBe('models/user.py');\n  });\n\n  it('resolves svc.execute() CALLS edge to UserService#execute in services/user.py', () => {\n    // End-to-end: correct IMPORTS 
resolution must propagate through type inference\n    // so that user.UserService() binds svc → UserService, and svc.execute() resolves\n    const calls = getRelationships(result, 'CALLS');\n    const executeCall = calls.find(c => c.target === 'execute' && c.targetFilePath === 'services/user.py');\n    expect(executeCall).toBeDefined();\n    expect(executeCall!.source).toBe('authenticate');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Constructor-inferred type resolution: user = User(); user.save() → User.save\n// Cross-file SymbolTable verification (no explicit type annotations)\n// ---------------------------------------------------------------------------\n\ndescribe('Python constructor-inferred type resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'python-constructor-type-inference'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes, both with save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveFns = getNodesByLabel(result, 'Function').filter(m => m === 'save');\n    expect(saveFns.length).toBe(2);\n  });\n\n  it('resolves user.save() to models/user.py via constructor-inferred type', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c => c.target === 'save' && c.targetFilePath === 'models/user.py');\n    expect(userSave).toBeDefined();\n    expect(userSave!.source).toBe('process_entities');\n  });\n\n  it('resolves repo.save() to models/repo.py via constructor-inferred type', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c => c.target === 'save' && c.targetFilePath === 'models/repo.py');\n    expect(repoSave).toBeDefined();\n    
expect(repoSave!.source).toBe('process_entities');\n  });\n\n  it('emits exactly 2 save() CALLS edges (one per receiver type)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'save');\n    expect(saveCalls.length).toBe(2);\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Constructor-call resolution: User(\"alice\") resolves to User class\n// ---------------------------------------------------------------------------\n\ndescribe('Python constructor-call resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'python-constructor-calls'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User class with __init__ and save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Function')).toContain('__init__');\n    expect(getNodesByLabel(result, 'Function')).toContain('save');\n    expect(getNodesByLabel(result, 'Function')).toContain('process');\n  });\n\n  it('resolves import from app.py to models.py', () => {\n    const imports = getRelationships(result, 'IMPORTS');\n    const imp = imports.find(e => e.source === 'app.py' && e.targetFilePath === 'models.py');\n    expect(imp).toBeDefined();\n  });\n\n  it('emits HAS_METHOD from User class to __init__ and save', () => {\n    const hasMethod = getRelationships(result, 'HAS_METHOD');\n    const initEdge = hasMethod.find(e => e.source === 'User' && e.target === '__init__');\n    const saveEdge = hasMethod.find(e => e.source === 'User' && e.target === 'save');\n    expect(initEdge).toBeDefined();\n    expect(saveEdge).toBeDefined();\n  });\n\n  it('resolves user.save() as a method call to models.py', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save');\n    
expect(saveCall).toBeDefined();\n    expect(saveCall!.source).toBe('process');\n    expect(saveCall!.targetFilePath).toBe('models.py');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// self.save() resolves to enclosing class's own save method\n// ---------------------------------------------------------------------------\n\ndescribe('Python self resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'python-self-this-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes, each with a save function', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['Repo', 'User']);\n    const saveFns = getNodesByLabel(result, 'Function').filter(m => m === 'save');\n    expect(saveFns.length).toBe(2);\n  });\n\n  it('resolves self.save() inside User.process to User.save, not Repo.save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save' && c.source === 'process');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.targetFilePath).toBe('models/user.py');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Parent class resolution: EXTENDS edge\n// ---------------------------------------------------------------------------\n\ndescribe('Python parent resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'python-parent-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects BaseModel and User classes', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['BaseModel', 'User']);\n  });\n\n  it('emits EXTENDS edge: User → BaseModel', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    expect(extends_.length).toBe(1);\n    
expect(extends_[0].source).toBe('User');\n    expect(extends_[0].target).toBe('BaseModel');\n  });\n\n  it('EXTENDS edge points to real graph node in base.py', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    const target = result.graph.getNode(extends_[0].rel.targetId);\n    expect(target).toBeDefined();\n    expect(target!.properties.filePath).toBe('models/base.py');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// super().save() resolves to parent class's save method\n// ---------------------------------------------------------------------------\n\ndescribe('Python super resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'python-super-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects BaseModel, User, and Repo classes', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['BaseModel', 'Repo', 'User']);\n  });\n\n  it('resolves super().save() inside User to BaseModel.save, not Repo.save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const superSave = calls.find(c => c.source === 'save' && c.target === 'save'\n      && c.targetFilePath === 'models/base.py');\n    expect(superSave).toBeDefined();\n    const repoSave = calls.find(c => c.target === 'save' && c.targetFilePath === 'models/repo.py');\n    expect(repoSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Python qualified constructor: user = models.User(\"alice\"); user.save()\n// ---------------------------------------------------------------------------\n\ndescribe('Python qualified constructor inference', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'python-qualified-constructor'),\n      () => {},\n    );\n  }, 60000);\n\n  
it('resolves user.save() via qualified constructor (models.User)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save' && c.targetFilePath === 'models.py');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.source).toBe('main');\n  });\n\n  it('resolves user.greet() via qualified constructor (models.User)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const greetCall = calls.find(c => c.target === 'greet' && c.targetFilePath === 'models.py');\n    expect(greetCall).toBeDefined();\n    expect(greetCall!.source).toBe('main');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Walrus operator: if (user := User(\"alice\")): user.save()\n// ---------------------------------------------------------------------------\n\ndescribe('Python walrus operator type inference', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'python-walrus-operator'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User class with save and greet methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Function')).toContain('save');\n    expect(getNodesByLabel(result, 'Function')).toContain('greet');\n  });\n\n  it('resolves user.save() via walrus operator constructor inference', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save' && c.targetFilePath === 'models.py');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.source).toBe('process');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Class-level annotations: file-scope `user: User` disambiguates method calls\n// ---------------------------------------------------------------------------\n\ndescribe('Python class-level 
annotation resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'python-class-annotations'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes, both with save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveFns = getNodesByLabel(result, 'Function').filter(m => m === 'save');\n    expect(saveFns.length).toBe(2);\n  });\n\n  it('resolves active_user.save() to User.save via file-level annotation', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c => c.target === 'save' && c.targetFilePath === 'user.py');\n    expect(userSave).toBeDefined();\n    expect(userSave!.source).toBe('process');\n  });\n\n  it('resolves active_repo.save() to Repo.save via file-level annotation', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c => c.target === 'save' && c.targetFilePath === 'repo.py');\n    expect(repoSave).toBeDefined();\n    expect(repoSave!.source).toBe('process');\n  });\n\n  it('emits exactly 2 save() CALLS edges (one per receiver type)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'save');\n    expect(saveCalls.length).toBe(2);\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Return type inference: user = get_user('alice'); user.save()\n// Python's scanner captures ALL call assignments, enabling return type inference.\n// ---------------------------------------------------------------------------\n\ndescribe('Python return type inference', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'python-return-type-inference'),\n      () => {},\n   
 );\n  }, 60000);\n\n  it('detects User class', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n  });\n\n  it('detects get_user and save symbols', () => {\n    // Python methods inside classes may be labeled Method or Function depending on nesting\n    const allSymbols = [...getNodesByLabel(result, 'Function'), ...getNodesByLabel(result, 'Method')];\n    expect(allSymbols).toContain('get_user');\n    expect(allSymbols).toContain('save');\n  });\n\n  it('resolves user.save() to User#save via return type inference from get_user() -> User', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'process_user'\n    );\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.targetFilePath).toContain('models.py');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Issue #289: static/classmethod classes must have HAS_METHOD edges\n// ---------------------------------------------------------------------------\n\ndescribe('Python static/classmethod class resolution (issue #289)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'python-static-class-methods'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects UserService and AdminService classes', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('UserService');\n    expect(getNodesByLabel(result, 'Class')).toContain('AdminService');\n  });\n\n  it('detects all static/class methods as symbols', () => {\n    const allSymbols = [...getNodesByLabel(result, 'Function'), ...getNodesByLabel(result, 'Method')];\n    expect(allSymbols).toContain('find_user');\n    expect(allSymbols).toContain('create_user');\n    expect(allSymbols).toContain('from_config');\n    expect(allSymbols).toContain('delete_user');\n  });\n\n  it('emits HAS_METHOD edges linking static methods 
to their enclosing class', () => {\n    // This is the core of issue #289: without HAS_METHOD, context() and impact()\n    // return empty for classes whose methods are all @staticmethod/@classmethod\n    const hasMethod = getRelationships(result, 'HAS_METHOD');\n\n    const userServiceMethods = hasMethod.filter(e => e.source === 'UserService');\n    expect(userServiceMethods.length).toBe(3); // find_user, create_user, from_config\n\n    const adminServiceMethods = hasMethod.filter(e => e.source === 'AdminService');\n    expect(adminServiceMethods.length).toBe(2); // find_user, delete_user\n  });\n\n  it('resolves unique static method calls (create_user, delete_user, from_config)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    // delete_user is unique to AdminService — should resolve\n    const deleteCall = calls.find(c =>\n      c.target === 'delete_user' && c.source === 'process' && c.targetFilePath.includes('service.py'),\n    );\n    expect(deleteCall).toBeDefined();\n\n    // create_user is unique to UserService — should resolve\n    const createCall = calls.find(c =>\n      c.target === 'create_user' && c.source === 'process' && c.targetFilePath.includes('service.py'),\n    );\n    expect(createCall).toBeDefined();\n  });\n\n  it('resolves find_user() via class-as-receiver for static method calls', () => {\n    // UserService.find_user() and AdminService.find_user() are both resolved because\n    // the class name (UserService / AdminService) is used as the receiver type for\n    // disambiguation. 
Both find_user methods share the same nodeId (same file, same name)\n    // so exactly 1 CALLS edge is emitted — which is correct (not ambiguous, not missing).\n    const calls = getRelationships(result, 'CALLS');\n    const findCalls = calls.filter(c =>\n      c.target === 'find_user' && c.source === 'process',\n    );\n    expect(findCalls.length).toBe(1);\n    expect(findCalls[0].targetFilePath).toContain('service.py');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Nullable receiver: user: User | None = find_user(); user.save()\n// Python 3.10+ union syntax — stripNullable unwraps `User | None` → `User`\n// ---------------------------------------------------------------------------\n\ndescribe('Python nullable receiver resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'python-nullable-receiver'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes, both with save functions', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveFns = getNodesByLabel(result, 'Function').filter(m => m === 'save');\n    expect(saveFns.length).toBe(2);\n  });\n\n  it('resolves user.save() to User.save via nullable receiver typing', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c => c.target === 'save' && c.targetFilePath === 'user.py');\n    expect(userSave).toBeDefined();\n    expect(userSave!.source).toBe('process_entities');\n  });\n\n  it('resolves repo.save() to Repo.save via nullable receiver typing', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c => c.target === 'save' && c.targetFilePath === 'repo.py');\n    expect(repoSave).toBeDefined();\n    expect(repoSave!.source).toBe('process_entities');\n  });\n\n  
it('user.save() does NOT resolve to Repo.save (negative disambiguation)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'save' && c.source === 'process_entities');\n    // Each save() call should resolve to exactly one target file\n    const savesToRepo = saveCalls.filter(c => c.targetFilePath === 'repo.py');\n    const savesToUser = saveCalls.filter(c => c.targetFilePath === 'user.py');\n    // Exactly 1 edge to each file (not 2 to either)\n    expect(savesToRepo.length).toBe(1);\n    expect(savesToUser.length).toBe(1);\n  });\n\n  it('emits exactly 2 save() CALLS edges (one per receiver type)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'save');\n    expect(saveCalls.length).toBe(2);\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Assignment chain propagation (Phase 4.3)\n// ---------------------------------------------------------------------------\n\ndescribe('Python assignment chain propagation', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'python-assignment-chain'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes each with a save method', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveFns = getNodesByLabel(result, 'Function').filter(m => m === 'save');\n    expect(saveFns.length).toBe(2);\n  });\n\n  it('resolves alias.save() to User#save via assignment chain', () => {\n    const calls = getRelationships(result, 'CALLS');\n    // Positive: alias.save() must resolve to User#save\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process' && c.targetFilePath.includes('user.py'),\n    );\n    
expect(userSave).toBeDefined();\n  });\n\n  it('alias.save() resolves exactly once to User#save (no duplicate edges)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    // Negative: exactly one save edge from process lands in user.py;\n    // a second edge would mean alias.save() also matched another target\n    const userSaveCalls = calls.filter(c =>\n      c.target === 'save' && c.source === 'process' && c.targetFilePath.includes('user.py'),\n    );\n    expect(userSaveCalls.length).toBe(1);\n  });\n\n  it('resolves r_alias.save() to Repo#save via assignment chain', () => {\n    const calls = getRelationships(result, 'CALLS');\n    // Positive: r_alias.save() must resolve to Repo#save\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process' && c.targetFilePath.includes('repo.py'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('each alias resolves to its own class, not the other', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process' && c.targetFilePath.includes('user.py'),\n    );\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process' && c.targetFilePath.includes('repo.py'),\n    );\n    expect(userSave).toBeDefined();\n    expect(repoSave).toBeDefined();\n    expect(userSave!.targetFilePath).not.toBe(repoSave!.targetFilePath);\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Python nullable (User | None) + assignment chain combined.\n// Python 3.10+ union syntax is parsed as binary_operator by tree-sitter,\n// stored as raw text \"User | None\" in TypeEnv. 
stripNullable's\n// NULLABLE_KEYWORDS.has() path must resolve it at lookup time.\n// ---------------------------------------------------------------------------\n\ndescribe('Python nullable (User | None) + assignment chain combined', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'python-nullable-chain'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes each with a save method', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveFns = getNodesByLabel(result, 'Function').filter(m => m === 'save');\n    expect(saveFns.length).toBe(2);\n  });\n\n  it('resolves alias.save() to User#save when source is User | None', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'nullable_chain_user' && c.targetFilePath?.includes('user.py'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('alias.save() from User | None does NOT resolve to Repo#save (negative)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongCall = calls.find(c =>\n      c.target === 'save' && c.source === 'nullable_chain_user' && c.targetFilePath?.includes('repo.py'),\n    );\n    expect(wrongCall).toBeUndefined();\n  });\n\n  it('resolves alias.save() to Repo#save when source is Repo | None', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'nullable_chain_repo' && c.targetFilePath?.includes('repo.py'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('alias.save() from Repo | None does NOT resolve to User#save (negative)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongCall = calls.find(c =>\n      c.target === 'save' && c.source 
=== 'nullable_chain_repo' && c.targetFilePath?.includes('user.py'),\n    );\n    expect(wrongCall).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Python walrus operator (:=) assignment chain.\n// Tests that extractPendingAssignment handles named_expression nodes\n// in addition to regular assignment nodes.\n// ---------------------------------------------------------------------------\n\ndescribe('Python walrus operator (:=) assignment chain', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'python-walrus-chain'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes each with a save function', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveFns = getNodesByLabel(result, 'Function').filter(m => m === 'save');\n    expect(saveFns.length).toBe(2);\n  });\n\n  it('resolves alias.save() to User#save via regular + walrus chains', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'walrus_chain_user' && c.targetFilePath?.includes('user.py'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('save() in walrus_chain_user does NOT resolve to Repo#save (negative)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongCall = calls.find(c =>\n      c.target === 'save' && c.source === 'walrus_chain_user' && c.targetFilePath?.includes('repo.py'),\n    );\n    expect(wrongCall).toBeUndefined();\n  });\n\n  it('resolves alias.save() to Repo#save via regular + walrus chains', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'walrus_chain_repo' && 
c.targetFilePath?.includes('repo.py'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('save() in walrus_chain_repo does NOT resolve to User#save (negative)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongCall = calls.find(c =>\n      c.target === 'save' && c.source === 'walrus_chain_repo' && c.targetFilePath?.includes('user.py'),\n    );\n    expect(wrongCall).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Python match/case as-pattern binding: `case User() as u: u.save()`\n// Tests Phase 6 extractPatternBinding for Python's match statement.\n// ---------------------------------------------------------------------------\n\ndescribe('Python match/case as-pattern type binding', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'python-match-case'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes each with a save method', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveFns = getNodesByLabel(result, 'Function').filter(m => m === 'save');\n    expect(saveFns.length).toBe(2);\n  });\n\n  // Skip: call extraction issue, NOT a type-env limitation.\n  // Type-env binding works correctly (unit test passes). The root cause is likely\n  // in call-processor's findEnclosingFunction scope resolution within match_statement\n  // blocks, not the tree-sitter query patterns (which descend recursively by default).\n  it.skip('resolves u.save() to User#save via match/case as-pattern binding', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process' && c.targetFilePath?.includes('user.py'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it.skip('does NOT resolve u.save() to Repo#save (negative disambiguation)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process' && c.targetFilePath?.includes('repo.py'),\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Chained method calls: svc.get_user().save()\n// Tests that Python's scanner correctly handles method-call chains where\n// the intermediate receiver type is inferred from the return type annotation.\n// ---------------------------------------------------------------------------\n\ndescribe('Python chained method call resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'python-chain-call'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User, Repo, and UserService classes', () => {\n    const classes = getNodesByLabel(result, 'Class');\n    expect(classes).toContain('User');\n    expect(classes).toContain('Repo');\n    expect(classes).toContain('UserService');\n  });\n\n  it('detects get_user and save functions', () => {\n    const allSymbols = 
[...getNodesByLabel(result, 'Function'), ...getNodesByLabel(result, 'Method')];\n    expect(allSymbols).toContain('get_user');\n    expect(allSymbols).toContain('save');\n  });\n\n  it('resolves svc.get_user().save() to User#save via chain resolution', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' &&\n      c.source === 'process_user' &&\n      c.targetFilePath?.includes('user.py'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('does NOT resolve svc.get_user().save() to Repo#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' &&\n      c.source === 'process_user' &&\n      c.targetFilePath?.includes('repo.py'),\n    );\n    expect(repoSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// for key, user in data.items() — dict.items() call iterable + tuple unpacking\n// ---------------------------------------------------------------------------\n\ndescribe('Python dict.items() for-loop resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'python-dict-items-loop'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User class with save method', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n  });\n\n  it('resolves user.save() via dict.items() loop to User#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process' && c.targetFilePath?.includes('user.py'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('does NOT resolve user.save() to Repo#save (negative)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'save' && c.source 
=== 'process' && c.targetFilePath?.includes('repo.py'),\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// self.users member access iterable: for user in self.users\n// ---------------------------------------------------------------------------\n\ndescribe('Python member access iterable for-loop', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'python-member-access-for-loop'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes with save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    // Python tree-sitter captures all function_definitions as Function, including methods\n    expect(getNodesByLabel(result, 'Function')).toContain('save');\n  });\n\n  it('resolves user.save() via self.users to User#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process_users' && c.targetFilePath?.includes('user.py'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('does NOT cross-resolve user.save() to Repo#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrong = calls.find(c =>\n      c.target === 'save' && c.source === 'process_users' && c.targetFilePath?.includes('repo.py'),\n    );\n    expect(wrong).toBeUndefined();\n  });\n\n  it('resolves repo.save() via self.repos to Repo#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process_repos' && c.targetFilePath?.includes('repo.py'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Python 
for-loop with call_expression iterable: for user in get_users()\n// Phase 7.3: call_expression iterable resolution via ReturnTypeLookup\n// ---------------------------------------------------------------------------\n\ndescribe('Python for-loop call_expression iterable resolution (Phase 7.3)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'python-for-call-expr'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes with competing save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n  });\n\n  it('resolves user.save() in for-loop over get_users() to User#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process_users' && c.targetFilePath?.includes('models.py'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('resolves repo.save() in for-loop over get_repos() to Repo#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process_repos' && c.targetFilePath?.includes('models.py'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('process_users resolves exactly one save call (no cross-binding)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c =>\n      c.target === 'save' && c.source === 'process_users',\n    );\n    expect(saveCalls.length).toBe(1);\n  });\n\n  it('process_repos resolves exactly one save call (no cross-binding)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c =>\n      c.target === 'save' && c.source === 'process_repos',\n    );\n    expect(saveCalls.length).toBe(1);\n  });\n});\n\n// 
---------------------------------------------------------------------------\n// enumerate() for-loop: for i, k, v in enumerate(d.items())\n// ---------------------------------------------------------------------------\n\ndescribe('Python enumerate() for-loop resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'python-enumerate-loop'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User class with save method', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n  });\n\n  it('resolves v.save() in enumerate(users.items()) loop to User#save', () => {\n    // for i, k, v in enumerate(users.items()): v.save()\n    // v must bind to User (value type of dict[str, User]).\n    // Without enumerate() support, v is unbound → resolver emits 0 CALLS.\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process_users' && c.targetFilePath?.includes('user.py'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('does NOT resolve v.save() to a non-User target', () => {\n    // i is the int index from enumerate — must not produce a spurious CALLS edge\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process_users' && !c.targetFilePath?.includes('user.py'),\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n\n  it('resolves nested tuple pattern: for i, (k, v) in enumerate(d.items())', () => {\n    // Nested tuple_pattern inside pattern_list — must descend to find v\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process_nested_tuple' && c.targetFilePath?.includes('user.py'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('resolves parenthesized tuple: for (i, u) in 
enumerate(users)', () => {\n    // tuple_pattern as top-level left node (not pattern_list)\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process_parenthesized_tuple' && c.targetFilePath?.includes('user.py'),\n    );\n    expect(userSave).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Phase 8: Field/property type resolution — annotated attribute capture\n// ---------------------------------------------------------------------------\n\ndescribe('Field type resolution (Python)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'python-field-types'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects classes: Address, User', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['Address', 'User']);\n  });\n\n  it('detects Property nodes for Python annotated attributes', () => {\n    const properties = getNodesByLabel(result, 'Property');\n    expect(properties).toContain('address');\n    expect(properties).toContain('name');\n    expect(properties).toContain('city');\n  });\n\n  it('emits HAS_PROPERTY edges linking attributes to classes', () => {\n    const propEdges = getRelationships(result, 'HAS_PROPERTY');\n    expect(propEdges.length).toBe(3);\n    expect(edgeSet(propEdges)).toContain('User → address');\n    expect(edgeSet(propEdges)).toContain('User → name');\n    expect(edgeSet(propEdges)).toContain('Address → city');\n  });\n\n  it('resolves user.address.save() → Address#save via field type', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(e => e.target === 'save');\n    const addressSave = saveCalls.find(\n      e => e.source === 'process_user' && e.targetFilePath.includes('models'),\n    );\n    expect(addressSave).toBeDefined();\n  });\n});\n\n// 
---------------------------------------------------------------------------\n// Phase 8: Field type disambiguation — both User and Address have save()\n// ---------------------------------------------------------------------------\n\ndescribe('Field type disambiguation (Python)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'python-field-type-disambig'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects both User#save and Address#save', () => {\n    const methods = getNodesByLabel(result, 'Function');\n    const saveMethods = methods.filter(m => m === 'save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves user.address.save() → Address#save (not User#save)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(\n      e => e.target === 'save' && e.source === 'process_user',\n    );\n    expect(saveCalls.length).toBe(1);\n    expect(saveCalls[0].targetFilePath).toContain('address');\n    expect(saveCalls[0].targetFilePath).not.toContain('user');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// ACCESSES write edges from assignment expressions\n// ---------------------------------------------------------------------------\n\ndescribe('Write access tracking (Python)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'python-write-access'),\n      () => {},\n    );\n  }, 60000);\n\n  it('emits ACCESSES write edges for attribute assignments', () => {\n    const accesses = getRelationships(result, 'ACCESSES');\n    const writes = accesses.filter(e => e.rel.reason === 'write');\n    expect(writes.length).toBe(2);\n    const nameWrite = writes.find(e => e.target === 'name');\n    const addressWrite = writes.find(e => e.target === 'address');\n    expect(nameWrite).toBeDefined();\n   
 expect(nameWrite!.source).toBe('update_user');\n    expect(addressWrite).toBeDefined();\n    expect(addressWrite!.source).toBe('update_user');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Call-result variable binding (Phase 9): user = get_user(); user.save()\n// ---------------------------------------------------------------------------\n\ndescribe('Python call-result variable binding (Tier 2b)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'python-call-result-binding'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves user.save() to User#save via call-result binding', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'process_user' && c.targetFilePath.includes('models')\n    );\n    expect(saveCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Method chain binding (Phase 9C): get_user() → .get_city() → .save()\n// ---------------------------------------------------------------------------\n\ndescribe('Python method chain binding via unified fixpoint (Phase 9C)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'python-method-chain-binding'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves city.save() to City#save via method chain', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'process_chain' && c.targetFilePath.includes('models')\n    );\n    expect(saveCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Phase B: Deep MRO — walkParentChain() at depth 2 (C→B→A)\n// greet() is defined on A, accessed 
via C. Tests BFS depth-2 parent traversal.\n// ---------------------------------------------------------------------------\n\ndescribe('Python grandparent method resolution via MRO (Phase B)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'python-grandparent-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects A, B, C, Greeting classes', () => {\n    const classes = getNodesByLabel(result, 'Class');\n    expect(classes).toContain('A');\n    expect(classes).toContain('B');\n    expect(classes).toContain('C');\n    expect(classes).toContain('Greeting');\n  });\n\n  it('emits EXTENDS edges: B→A, C→B', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    expect(edgeSet(extends_)).toContain('B → A');\n    expect(edgeSet(extends_)).toContain('C → B');\n  });\n\n  it('resolves c.greet().save() to Greeting#save via depth-2 MRO lookup', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.targetFilePath.includes('greeting'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n\n  it('resolves c.greet() to A#greet (method found via MRO walk)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const greetCall = calls.find(c =>\n      c.target === 'greet' && c.targetFilePath.includes('a.py'),\n    );\n    expect(greetCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Phase P: Default parameter arity resolution\n// ---------------------------------------------------------------------------\n\ndescribe('Python default parameter arity resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'python-default-params'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves greet(\"alice\") with 1 arg to greet with 2 params (1 default)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const greetCalls = 
calls.filter(c => c.source === 'process' && c.target === 'greet');\n    expect(greetCalls.length).toBe(1);\n  });\n\n  it('resolves search(\"test\") with 1 arg to search with 2 params (1 default)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const searchCalls = calls.filter(c => c.source === 'process' && c.target === 'search');\n    expect(searchCalls.length).toBe(1);\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/integration/resolvers/ruby.test.ts",
    "content": "/**\n * Ruby: require_relative imports, include heritage (mixins), attr_* properties,\n *       calls, member calls, ambiguous disambiguation, local shadow,\n *       constructor-inferred type resolution\n */\nimport { describe, it, expect, beforeAll } from 'vitest';\nimport path from 'path';\nimport {\n  FIXTURES, getRelationships, getNodesByLabel, edgeSet,\n  runPipelineFromRepo, type PipelineResult,\n} from './helpers.js';\n\n// ---------------------------------------------------------------------------\n// Heritage: require_relative imports + include heritage + attr_* properties + calls\n// ---------------------------------------------------------------------------\n\ndescribe('Ruby require_relative, heritage & property resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ruby-app'),\n      () => {},\n    );\n  }, 60000);\n\n  // --- Node detection ---\n\n  it('detects 3 classes', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual([\n      'BaseModel', 'User', 'UserService',\n    ]);\n  });\n\n  it('detects 3 modules', () => {\n    expect(getNodesByLabel(result, 'Module')).toEqual(['Cacheable', 'Loggable', 'Serializable']);\n  });\n\n  it('detects methods on classes and modules', () => {\n    const methods = getNodesByLabel(result, 'Method');\n    expect(methods).toContain('persist');\n    expect(methods).toContain('run_validations');\n    expect(methods).toContain('greet_user');\n    expect(methods).toContain('serialize_data');\n    expect(methods).toContain('create_user');\n  });\n\n  it('detects singleton method (def self.factory) as Method', () => {\n    const methods = getNodesByLabel(result, 'Method');\n    expect(methods).toContain('factory');\n  });\n\n  it('emits CALLS from singleton method: factory → run_validations', () => {\n    const calls = getRelationships(result, 'CALLS')\n      .filter(e => e.source === 'factory' && 
e.target === 'run_validations');\n    expect(calls.length).toBe(1);\n    expect(calls[0].sourceLabel).toBe('Method');\n  });\n\n  // --- Import resolution via require_relative ---\n\n  it('resolves 5 require_relative imports to IMPORTS edges', () => {\n    const imports = getRelationships(result, 'IMPORTS');\n    const importEdges = edgeSet(imports);\n    expect(importEdges).toContain('user.rb → base_model.rb');\n    expect(importEdges).toContain('user.rb → serializable.rb');\n    expect(importEdges).toContain('user.rb → loggable.rb');\n    expect(importEdges).toContain('user.rb → cacheable.rb');\n    expect(importEdges).toContain('service.rb → user.rb');\n  });\n\n  it('resolves bare require to IMPORTS edge', () => {\n    const imports = getRelationships(result, 'IMPORTS');\n    const bareRequire = imports.find(e =>\n      e.sourceFilePath.includes('base_model.rb') &&\n      e.targetFilePath.includes('serializable.rb')\n    );\n    expect(bareRequire).toBeDefined();\n  });\n\n  // --- Heritage: include → IMPLEMENTS ---\n\n  it('emits IMPLEMENTS edge for include Serializable with reason \"include\"', () => {\n    const implements_ = getRelationships(result, 'IMPLEMENTS');\n    const edge = implements_.find(e => e.source === 'User' && e.target === 'Serializable');\n    expect(edge).toBeDefined();\n    expect(edge!.rel.reason).toBe('include');\n  });\n\n  it('emits IMPLEMENTS edge for extend Loggable with reason \"extend\"', () => {\n    const implements_ = getRelationships(result, 'IMPLEMENTS');\n    const edge = implements_.find(e => e.source === 'User' && e.target === 'Loggable');\n    expect(edge).toBeDefined();\n    expect(edge!.rel.reason).toBe('extend');\n  });\n\n  it('emits IMPLEMENTS edge for prepend Cacheable with reason \"prepend\"', () => {\n    const implements_ = getRelationships(result, 'IMPLEMENTS');\n    const edge = implements_.find(e => e.source === 'User' && e.target === 'Cacheable');\n    expect(edge).toBeDefined();\n    
expect(edge!.rel.reason).toBe('prepend');\n  });\n\n  // --- Extends: class inheritance ---\n\n  it('emits EXTENDS edge: User → BaseModel', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    expect(extends_.length).toBe(1);\n    const edges = edgeSet(extends_);\n    expect(edges).toContain('User → BaseModel');\n  });\n\n  // --- Property nodes: attr_accessor, attr_reader, attr_writer ---\n\n  it('creates Property nodes for attr_accessor :id and :created_at', () => {\n    const props = getNodesByLabel(result, 'Property');\n    expect(props).toContain('id');\n    expect(props).toContain('created_at');\n  });\n\n  it('creates Property nodes for attr_reader :name and attr_writer :email', () => {\n    const props = getNodesByLabel(result, 'Property');\n    expect(props).toContain('name');\n    expect(props).toContain('email');\n  });\n\n  it('emits HAS_PROPERTY from User to attr_reader :name', () => {\n    const hasProperty = getRelationships(result, 'HAS_PROPERTY');\n    const edge = hasProperty.find(e => e.source === 'User' && e.target === 'name');\n    expect(edge).toBeDefined();\n  });\n\n  it('emits HAS_PROPERTY from BaseModel to attr_accessor :id', () => {\n    const hasProperty = getRelationships(result, 'HAS_PROPERTY');\n    const edge = hasProperty.find(e => e.source === 'BaseModel' && e.target === 'id');\n    expect(edge).toBeDefined();\n  });\n\n  // --- Call resolution: method-level attribution ---\n\n  it('emits method-level CALLS: create_user → persist (member call)', () => {\n    const calls = getRelationships(result, 'CALLS')\n      .filter(e => e.source === 'create_user' && e.target === 'persist');\n    expect(calls.length).toBe(1);\n    expect(calls[0].sourceLabel).toBe('Method');\n    expect(calls[0].targetLabel).toBe('Method');\n  });\n\n  it('emits method-level CALLS: create_user → greet_user (member call)', () => {\n    const calls = getRelationships(result, 'CALLS')\n      .filter(e => e.source === 'create_user' && e.target 
=== 'greet_user');\n    expect(calls.length).toBe(1);\n    expect(calls[0].sourceLabel).toBe('Method');\n    expect(calls[0].targetLabel).toBe('Method');\n  });\n\n  it('emits method-level CALLS: greet_user → persist (bare call)', () => {\n    const calls = getRelationships(result, 'CALLS')\n      .filter(e => e.source === 'greet_user' && e.target === 'persist');\n    expect(calls.length).toBe(1);\n  });\n\n  it('emits method-level CALLS: greet_user → serialize_data (bare call)', () => {\n    const calls = getRelationships(result, 'CALLS')\n      .filter(e => e.source === 'greet_user' && e.target === 'serialize_data');\n    expect(calls.length).toBe(1);\n  });\n\n  it('emits method-level CALLS: persist → run_validations (bare call)', () => {\n    const calls = getRelationships(result, 'CALLS')\n      .filter(e => e.source === 'persist' && e.target === 'run_validations');\n    expect(calls.length).toBe(1);\n  });\n\n  // --- Heritage edges point to real graph nodes ---\n\n  it('all heritage edges point to real graph nodes', () => {\n    for (const edge of [...getRelationships(result, 'EXTENDS'), ...getRelationships(result, 'IMPLEMENTS')]) {\n      const target = result.graph.getNode(edge.rel.targetId);\n      expect(target).toBeDefined();\n    }\n  });\n\n  // --- No OVERRIDES edges target Property nodes ---\n\n  it('no OVERRIDES edges target Property nodes', () => {\n    const overrides = getRelationships(result, 'OVERRIDES');\n    for (const edge of overrides) {\n      const target = result.graph.getNode(edge.rel.targetId);\n      expect(target).toBeDefined();\n      expect(target!.label).not.toBe('Property');\n    }\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Calls: arity-based disambiguation\n// ---------------------------------------------------------------------------\n\ndescribe('Ruby call resolution with arity filtering', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = 
await runPipelineFromRepo(\n      path.join(FIXTURES, 'ruby-calls'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves run_task → write_audit to one_arg.rb via arity narrowing', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const auditCall = calls.find(c => c.target === 'write_audit');\n    expect(auditCall).toBeDefined();\n    expect(auditCall!.source).toBe('run_task');\n    expect(auditCall!.targetFilePath).toContain('one_arg.rb');\n    expect(auditCall!.rel.reason).toBe('import-resolved');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Member-call resolution: obj.method() resolves through pipeline\n// ---------------------------------------------------------------------------\n\ndescribe('Ruby member-call resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ruby-member-calls'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves process_user → persist_record as a member call on User', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'persist_record');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.source).toBe('process_user');\n    expect(saveCall!.targetFilePath).toContain('user.rb');\n  });\n\n  it('detects User class and persist_record method', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Method')).toContain('persist_record');\n  });\n\n  it('emits HAS_METHOD edge from User to persist_record', () => {\n    const hasMethod = getRelationships(result, 'HAS_METHOD');\n    const edge = hasMethod.find(e => e.source === 'User' && e.target === 'persist_record');\n    expect(edge).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Ambiguous: Handler in two dirs, require_relative 
disambiguates\n// ---------------------------------------------------------------------------\n\ndescribe('Ruby ambiguous symbol resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ruby-ambiguous'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects 2 Handler classes', () => {\n    const classes = getNodesByLabel(result, 'Class');\n    expect(classes.filter(n => n === 'Handler').length).toBe(2);\n    expect(classes).toContain('UserHandler');\n  });\n\n  it('resolves EXTENDS to models/handler.rb (not other/handler.rb)', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    expect(extends_.length).toBe(1);\n    expect(extends_[0].source).toBe('UserHandler');\n    expect(extends_[0].target).toBe('Handler');\n    expect(extends_[0].targetFilePath).toBe('models/handler.rb');\n  });\n\n  it('import edge points to models/ not other/', () => {\n    const imports = getRelationships(result, 'IMPORTS');\n    expect(imports.length).toBe(1);\n    expect(imports[0].targetFilePath).toBe('models/handler.rb');\n  });\n\n  it('all heritage edges point to real graph nodes', () => {\n    for (const edge of getRelationships(result, 'EXTENDS')) {\n      const target = result.graph.getNode(edge.rel.targetId);\n      expect(target).toBeDefined();\n    }\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Local shadow: same-file definition takes priority over imported name\n// ---------------------------------------------------------------------------\n\ndescribe('Ruby local definition shadows import', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ruby-local-shadow'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves run_app → do_work to same-file definition, not the imported one', () => {\n    const calls = 
getRelationships(result, 'CALLS');\n    const doWorkCall = calls.find(c => c.target === 'do_work' && c.source === 'run_app');\n    expect(doWorkCall).toBeDefined();\n    expect(doWorkCall!.targetFilePath).toContain('app.rb');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Constructor-inferred type resolution: user = User.new; user.save → User.save\n// ---------------------------------------------------------------------------\n\ndescribe('Ruby constructor-inferred type resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ruby-constructor-type-inference'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User, Repo, and AppService classes', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    expect(getNodesByLabel(result, 'Class')).toContain('AppService');\n  });\n\n  it('detects save on User and Repo, cleanup on all three', () => {\n    const methods = getNodesByLabel(result, 'Method');\n    expect(methods.filter(m => m === 'save').length).toBe(2);\n    expect(methods.filter(m => m === 'cleanup').length).toBe(3);\n  });\n\n  it('resolves user.save to models/user.rb via constructor-inferred type', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c => c.target === 'save' && c.targetFilePath === 'models/user.rb');\n    expect(userSave).toBeDefined();\n    expect(userSave!.source).toBe('process_entities');\n  });\n\n  it('resolves repo.save to models/repo.rb via constructor-inferred type', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c => c.target === 'save' && c.targetFilePath === 'models/repo.rb');\n    expect(repoSave).toBeDefined();\n    expect(repoSave!.source).toBe('process_entities');\n  });\n\n  it('emits exactly 2 save CALLS 
edges (one per receiver type)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'save');\n    expect(saveCalls.length).toBe(2);\n  });\n\n  it('resolves self.process_entities to services/app.rb (unique method)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const selfCall = calls.find(c =>\n      c.source === 'greet' && c.target === 'process_entities'\n    );\n    expect(selfCall).toBeDefined();\n    expect(selfCall!.targetFilePath).toContain('app.rb');\n  });\n\n  it('resolves self.cleanup to services/app.rb, not models/user.rb or models/repo.rb', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const selfCleanup = calls.find(c =>\n      c.source === 'greet' && c.target === 'cleanup'\n    );\n    expect(selfCleanup).toBeDefined();\n    expect(selfCleanup!.targetFilePath).toContain('app.rb');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// self.save resolves to enclosing class's own save method\n// ---------------------------------------------------------------------------\n\ndescribe('Ruby self resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ruby-self-this-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes, each with a save method', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['Repo', 'User']);\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves self.save inside User#process to User#save, not Repo#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save' && c.source === 'process');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.targetFilePath).toBe('lib/models/user.rb');\n  
});\n});\n\n// ---------------------------------------------------------------------------\n// Parent class resolution: < BaseModel + include Module\n// ---------------------------------------------------------------------------\n\ndescribe('Ruby parent resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ruby-parent-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects BaseModel and User classes plus Serializable module', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['BaseModel', 'User']);\n    expect(getNodesByLabel(result, 'Module')).toEqual(['Serializable']);\n  });\n\n  it('emits EXTENDS edge: User < BaseModel', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    expect(extends_.length).toBe(1);\n    expect(extends_[0].source).toBe('User');\n    expect(extends_[0].target).toBe('BaseModel');\n  });\n\n  it('emits IMPLEMENTS edge: User includes Serializable', () => {\n    const implements_ = getRelationships(result, 'IMPLEMENTS');\n    const includeEdge = implements_.find(e => e.source === 'User' && e.target === 'Serializable');\n    expect(includeEdge).toBeDefined();\n    expect(includeEdge!.rel.reason).toBe('include');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Ruby super: standalone keyword calls same-named method on parent\n// ---------------------------------------------------------------------------\n\ndescribe('Ruby super resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ruby-super-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects BaseModel, User, and Repo classes', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['BaseModel', 'Repo', 'User']);\n  });\n\n  it('emits EXTENDS edge: User < BaseModel', () => {\n    const 
extends_ = getRelationships(result, 'EXTENDS');\n    expect(extends_.length).toBe(1);\n    expect(extends_[0].source).toBe('User');\n    expect(extends_[0].target).toBe('BaseModel');\n  });\n\n  it('detects save methods on all three classes', () => {\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'save');\n    expect(saveMethods.length).toBe(3);\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Ruby constant constructor: SERVICE = UserService.new; SERVICE.process\n// ---------------------------------------------------------------------------\n\ndescribe('Ruby constant constructor binding resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ruby-constant-constructor'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects UserService class with process and validate methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('UserService');\n    expect(getNodesByLabel(result, 'Method')).toContain('process');\n    expect(getNodesByLabel(result, 'Method')).toContain('validate');\n  });\n\n  it('resolves SERVICE.process() via constant constructor binding', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const processCall = calls.find(c => c.target === 'process' && c.targetFilePath === 'models.rb');\n    expect(processCall).toBeDefined();\n  });\n\n  it('resolves SERVICE.validate() via constant constructor binding', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const validateCall = calls.find(c => c.target === 'validate' && c.targetFilePath === 'models.rb');\n    expect(validateCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// YARD annotation type resolution: @param repo [UserRepo] → repo.save resolves\n// 
---------------------------------------------------------------------------\n\ndescribe('Ruby YARD annotation type resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ruby-yard-annotations'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects UserRepo, User, and UserService classes', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('UserRepo');\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('UserService');\n  });\n\n  it('detects save, find_by_name, greet, and create methods', () => {\n    const methods = getNodesByLabel(result, 'Method');\n    expect(methods).toContain('save');\n    expect(methods).toContain('find_by_name');\n    expect(methods).toContain('greet');\n    expect(methods).toContain('create');\n  });\n\n  it('resolves repo.save to UserRepo#save via YARD @param annotation', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save' && c.source === 'create');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.targetFilePath).toContain('models.rb');\n  });\n\n  it('resolves user.greet to User#greet via YARD @param annotation', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const greetCall = calls.find(c => c.target === 'greet' && c.source === 'create');\n    expect(greetCall).toBeDefined();\n    expect(greetCall!.targetFilePath).toContain('models.rb');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Namespaced constructor: svc = Models::UserService.new; svc.process()\n// Tests scope_resolution receiver handling for Ruby namespaced classes.\n// ---------------------------------------------------------------------------\n\ndescribe('Ruby namespaced constructor resolution (Models::UserService.new)', () => {\n  let result: 
PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ruby-namespaced-constructor'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects UserService class with process and validate methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('UserService');\n    const methods = getNodesByLabel(result, 'Method');\n    expect(methods).toContain('process');\n    expect(methods).toContain('validate');\n  });\n\n  it('resolves svc.process() via namespaced constructor Models::UserService.new', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const processCall = calls.find(c =>\n      c.target === 'process' && c.targetFilePath.includes('user_service.rb')\n    );\n    expect(processCall).toBeDefined();\n  });\n\n  it('resolves svc.validate() via namespaced constructor Models::UserService.new', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const validateCall = calls.find(c =>\n      c.target === 'validate' && c.targetFilePath.includes('user_service.rb')\n    );\n    expect(validateCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Return type inference: user = get_user('alice'); user.save\n// Ruby's scanConstructorBinding captures assignment nodes with call RHS.\n// Combined with YARD @return annotation parsing, the pipeline resolves\n// `user.save` to User#save (not Repo#save) via return type disambiguation.\n// The fixture has BOTH User#save and Repo#save — fuzzy matching alone\n// cannot disambiguate, so return type inference must be working.\n// ---------------------------------------------------------------------------\n\ndescribe('Ruby return type inference via function call', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ruby-return-type'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects 
User and Repo classes', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n  });\n\n  it('detects get_user and get_repo methods', () => {\n    expect(getNodesByLabel(result, 'Method')).toContain('get_user');\n    expect(getNodesByLabel(result, 'Method')).toContain('get_repo');\n  });\n\n  it('detects save method on both User and Repo (disambiguation required)', () => {\n    const methods = getNodesByLabel(result, 'Method');\n    // Both classes have save — fuzzy match alone cannot resolve this\n    expect(methods.filter(m => m === 'save').length).toBe(2);\n  });\n\n  it('resolves user.save to User#save via YARD @return [User] on get_user()', () => {\n    // With both User#save and Repo#save in scope, resolving user.save\n    // requires return type inference: get_user() → @return [User] → user is User\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'process_user' && c.targetFilePath.includes('models.rb'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n\n  it('resolves repo.save to Repo#save via YARD @return [Repo] on get_repo()', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'process_repo' && c.targetFilePath.includes('repo.rb'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Ruby constant LHS factory call: SERVICE = build_service() with YARD @return\n// Verifies that constant assignments (uppercase LHS) from plain function calls\n// are captured by scanConstructorBinding, not just identifier assignments.\n// ---------------------------------------------------------------------------\n\ndescribe('Ruby constant factory call resolution (SERVICE = build_service())', () => {\n  let result: 
PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ruby-constant-factory-call'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects UserService and AdminService classes with process and validate methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('UserService');\n    expect(getNodesByLabel(result, 'Class')).toContain('AdminService');\n    expect(getNodesByLabel(result, 'Method')).toContain('process');\n    expect(getNodesByLabel(result, 'Method')).toContain('validate');\n  });\n\n  it('resolves SERVICE.process() to UserService#process via constant factory call', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const processCall = calls.find(c =>\n      c.target === 'process' && c.targetFilePath.includes('user_service.rb'),\n    );\n    expect(processCall).toBeDefined();\n    const wrongCall = calls.find(c =>\n      c.target === 'process' &&\n      c.sourceFilePath?.includes('app.rb') &&\n      c.targetFilePath.includes('admin_service.rb'),\n    );\n    expect(wrongCall).toBeUndefined();\n  });\n\n  it('resolves SERVICE.validate() to UserService#validate via constant factory call', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const validateCall = calls.find(c =>\n      c.target === 'validate' && c.targetFilePath.includes('user_service.rb'),\n    );\n    expect(validateCall).toBeDefined();\n    const wrongCall = calls.find(c =>\n      c.target === 'validate' &&\n      c.sourceFilePath?.includes('app.rb') &&\n      c.targetFilePath.includes('admin_service.rb'),\n    );\n    expect(wrongCall).toBeUndefined();\n  });\n});\n\ndescribe('Ruby YARD generic type annotations (Hash<Symbol, UserRepo>)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ruby-yard-generics'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects UserRepo, AdminRepo, and DataService 
classes', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('UserRepo');\n    expect(getNodesByLabel(result, 'Class')).toContain('AdminRepo');\n    expect(getNodesByLabel(result, 'Class')).toContain('DataService');\n  });\n\n  it('detects save and find_all on both repos, plus sync and audit methods', () => {\n    const methods = getNodesByLabel(result, 'Method');\n    expect(methods).toContain('save');\n    expect(methods).toContain('find_all');\n    expect(methods).toContain('sync');\n    expect(methods).toContain('audit');\n  });\n\n  it('resolves repo.save in sync() to UserRepo#save via @param repo [UserRepo]', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'sync' && c.targetFilePath.includes('models.rb'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n\n  it('does NOT resolve cache param to a class (Hash<Symbol, UserRepo> is a generic container)', () => {\n    // The @param cache [Hash<Symbol, UserRepo>] should extract type \"Hash\" — not \"UserRepo\".\n    // Since Hash is not a class in the fixture, no type binding is created for cache.\n    // This verifies the bracket-balanced split doesn't break on the inner comma.\n    const calls = getRelationships(result, 'CALLS');\n    // No calls should originate from cache.* since cache has no resolved type\n    const cacheCall = calls.find(c =>\n      c.source === 'sync' && c.target === 'save' && c.targetFilePath.includes('admin'),\n    );\n    expect(cacheCall).toBeUndefined();\n  });\n\n  it('resolves admin_repo.save in audit() to AdminRepo#save via alternate @param [AdminRepo] order', () => {\n    const calls = getRelationships(result, 'CALLS');\n    // audit() calls admin_repo.save — should resolve via the alternate YARD format\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'audit',\n    );\n    expect(saveCall).toBeDefined();\n  });\n\n  it('resolves 
admin_repo.find_all in audit() to AdminRepo#find_all', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const findCall = calls.find(c =>\n      c.target === 'find_all' && c.source === 'audit',\n    );\n    expect(findCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Chained method calls: svc.get_user.save\n// Tests that Ruby's `call` node uses `method` and `receiver` fields correctly\n// for chain extraction — the tree-sitter-ruby grammar differs from other languages.\n// ---------------------------------------------------------------------------\n\ndescribe('Ruby chained method call resolution (Phase 5 review fix)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ruby-chain-call'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User, Repo, UserService and App classes', () => {\n    const classes = getNodesByLabel(result, 'Class');\n    expect(classes).toContain('User');\n    expect(classes).toContain('Repo');\n    expect(classes).toContain('UserService');\n    expect(classes).toContain('App');\n  });\n\n  it('detects save methods on both User and Repo', () => {\n    const methods = getNodesByLabel(result, 'Method');\n    const saveMethods = methods.filter(m => m === 'save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('detects get_user method on UserService', () => {\n    const methods = getNodesByLabel(result, 'Method');\n    expect(methods).toContain('get_user');\n  });\n\n  it('resolves svc.get_user.save to User#save via chain resolution', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' &&\n      c.source === 'process' &&\n      c.targetFilePath?.includes('user.rb'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('does NOT resolve svc.get_user.save to Repo#save', () => {\n    
const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' &&\n      c.source === 'process' &&\n      c.targetFilePath?.includes('repo.rb'),\n    );\n    expect(repoSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Ruby for-in loop: for user in users — YARD @param resolution\n// ---------------------------------------------------------------------------\n\ndescribe('Ruby for-in loop resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ruby-for-in-loop'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User class with save method', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n  });\n\n  it('resolves user.save in for-in to User#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process_users' && c.targetFilePath?.includes('user'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('does NOT resolve user.save to Repo#save (negative)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process_users' && c.targetFilePath?.includes('repo'),\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Phase 8: Field/property type resolution via YARD @return annotations\n// ---------------------------------------------------------------------------\n\ndescribe('Field type resolution (Ruby)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ruby-field-types'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects classes: Address, User', () => {\n  
  expect(getNodesByLabel(result, 'Class')).toEqual(['Address', 'User']);\n  });\n\n  it('detects Property nodes for attr_accessor fields', () => {\n    const properties = getNodesByLabel(result, 'Property');\n    expect(properties).toContain('address');\n    expect(properties).toContain('name');\n    expect(properties).toContain('city');\n  });\n\n  it('emits HAS_PROPERTY edges linking properties to classes', () => {\n    const propEdges = getRelationships(result, 'HAS_PROPERTY');\n    expect(propEdges.length).toBe(3);\n    expect(edgeSet(propEdges)).toContain('User → address');\n    expect(edgeSet(propEdges)).toContain('User → name');\n    expect(edgeSet(propEdges)).toContain('Address → city');\n  });\n\n  it('resolves user.address.save → Address#save via YARD @return [Address]', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(e => e.target === 'save');\n    const addressSave = saveCalls.find(\n      e => e.source === 'process_user' && e.targetFilePath.includes('models'),\n    );\n    expect(addressSave).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Phase 8: Field type disambiguation — both User and Address have save()\n// ---------------------------------------------------------------------------\n\ndescribe('Field type disambiguation (Ruby)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ruby-field-type-disambig'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects both User#save and Address#save', () => {\n    const methods = getNodesByLabel(result, 'Method');\n    const saveMethods = methods.filter(m => m === 'save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves user.address.save → Address#save (not User#save)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(\n      e => e.target 
=== 'save' && e.source === 'process_user',\n    );\n    expect(saveCalls.length).toBe(1);\n    expect(saveCalls[0].targetFilePath).toContain('address');\n    expect(saveCalls[0].targetFilePath).not.toContain('user');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// ACCESSES write edges from assignment expressions\n// ---------------------------------------------------------------------------\n\ndescribe('Write access tracking (Ruby)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ruby-write-access'),\n      () => {},\n    );\n  }, 60000);\n\n  it('emits ACCESSES write edges for setter assignments', () => {\n    const accesses = getRelationships(result, 'ACCESSES');\n    const writes = accesses.filter(e => e.rel.reason === 'write');\n    expect(writes.length).toBe(3);\n    const nameWrite = writes.find(e => e.target === 'name');\n    const addressWrite = writes.find(e => e.target === 'address');\n    const scoreWrite = writes.find(e => e.target === 'score');\n    expect(nameWrite).toBeDefined();\n    expect(nameWrite!.source).toBe('update_user');\n    expect(addressWrite).toBeDefined();\n    expect(addressWrite!.source).toBe('update_user');\n    expect(scoreWrite).toBeDefined();\n    expect(scoreWrite!.source).toBe('update_user');\n  });\n\n  it('emits ACCESSES write edge for compound assignment (operator_assignment)', () => {\n    const accesses = getRelationships(result, 'ACCESSES');\n    const writes = accesses.filter(e => e.rel.reason === 'write');\n    const scoreWrite = writes.find(e => e.target === 'score');\n    expect(scoreWrite).toBeDefined();\n    expect(scoreWrite!.source).toBe('update_user');\n  });\n\n  it('write ACCESSES edges have confidence 1.0', () => {\n    const accesses = getRelationships(result, 'ACCESSES');\n    const writes = accesses.filter(e => e.rel.reason === 'write');\n    for (const edge of writes) 
{\n      expect(edge.rel.confidence).toBe(1.0);\n    }\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Call-result variable binding (Phase 9): user = get_user(); user.save\n// ---------------------------------------------------------------------------\n\ndescribe('Ruby call-result variable binding (Tier 2b)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ruby-call-result-binding'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves user.save to User#save via call-result binding', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'process_user' && c.targetFilePath.includes('app')\n    );\n    expect(saveCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Method chain binding (Phase 9C): get_user() → .get_address() → .get_city() → .save\n// ---------------------------------------------------------------------------\n\ndescribe('Ruby method chain binding via unified fixpoint (Phase 9C)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ruby-method-chain-binding'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves city.save to City#save via method chain', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'process_chain' && c.targetFilePath.includes('app')\n    );\n    expect(saveCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Phase B: Deep MRO — walkParentChain() at depth 2 (C→B→A)\n// greet is defined on A, accessed via C. 
Tests BFS depth-2 parent traversal.\n// ---------------------------------------------------------------------------\n\ndescribe('Ruby grandparent method resolution via MRO (Phase B)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ruby-grandparent-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects A, B, C, Greeting classes', () => {\n    const classes = getNodesByLabel(result, 'Class');\n    expect(classes).toContain('A');\n    expect(classes).toContain('B');\n    expect(classes).toContain('C');\n    expect(classes).toContain('Greeting');\n  });\n\n  it('emits EXTENDS edges: B→A, C→B', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    expect(edgeSet(extends_)).toContain('B → A');\n    expect(edgeSet(extends_)).toContain('C → B');\n  });\n\n  it('resolves c.greet.save to Greeting#save via depth-2 MRO lookup', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.targetFilePath.includes('greeting'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n\n  it('resolves c.greet to A#greet (method found via MRO walk)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const greetCall = calls.find(c =>\n      c.target === 'greet' && c.targetFilePath.includes('a.rb'),\n    );\n    expect(greetCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Ruby default parameter arity resolution\n// ---------------------------------------------------------------------------\n\ndescribe('Ruby default parameter arity resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ruby-default-params'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves greet(\"Alice\") with 1 arg to greet with 2 params (1 
default)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const greetCalls = calls.filter(c => c.source === 'process' && c.target === 'greet');\n    expect(greetCalls.length).toBe(1);\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/integration/resolvers/rust.test.ts",
    "content": "/**\n * Rust: trait implementations + ambiguous module import disambiguation\n */\nimport { describe, it, expect, beforeAll } from 'vitest';\nimport path from 'path';\nimport {\n  FIXTURES, getRelationships, getNodesByLabel, edgeSet,\n  runPipelineFromRepo, type PipelineResult,\n} from './helpers.js';\n\n// ---------------------------------------------------------------------------\n// Heritage: trait implementations\n// ---------------------------------------------------------------------------\n\ndescribe('Rust trait implementation resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'rust-traits'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects exactly 1 struct and 2 traits', () => {\n    expect(getNodesByLabel(result, 'Struct')).toEqual(['Button']);\n    expect(getNodesByLabel(result, 'Trait')).toEqual(['Clickable', 'Drawable']);\n  });\n\n  it('emits exactly 2 IMPLEMENTS edges with reason trait-impl', () => {\n    const implements_ = getRelationships(result, 'IMPLEMENTS');\n    expect(implements_.length).toBe(2);\n    expect(edgeSet(implements_)).toEqual([\n      'Button → Clickable',\n      'Button → Drawable',\n    ]);\n    for (const edge of implements_) {\n      expect(edge.rel.reason).toBe('trait-impl');\n    }\n  });\n\n  it('does not emit any EXTENDS edges for trait impls', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    expect(extends_.length).toBe(0);\n  });\n\n  it('resolves exactly 1 IMPORTS edge: main.rs → button.rs', () => {\n    const imports = getRelationships(result, 'IMPORTS');\n    expect(imports.length).toBe(1);\n    expect(imports[0].source).toBe('main.rs');\n    expect(imports[0].target).toBe('button.rs');\n  });\n\n  it('detects 2 modules and 4 functions', () => {\n    expect(getNodesByLabel(result, 'Module')).toEqual(['impls', 'traits']);\n    expect(getNodesByLabel(result, 
'Function')).toEqual(['draw', 'is_enabled', 'main', 'on_click', 'resize']);\n  });\n\n  it('no OVERRIDES edges target Property nodes', () => {\n    const overrides = getRelationships(result, 'OVERRIDES');\n    for (const edge of overrides) {\n      const target = result.graph.getNode(edge.rel.targetId);\n      expect(target).toBeDefined();\n      expect(target!.label).not.toBe('Property');\n    }\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Ambiguous: Handler struct in two modules, crate:: import disambiguates\n// ---------------------------------------------------------------------------\n\ndescribe('Rust ambiguous symbol resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'rust-ambiguous'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects 2 Handler structs in separate modules', () => {\n    const structs: string[] = [];\n    result.graph.forEachNode(n => {\n      if (n.label === 'Struct') structs.push(`${n.properties.name}@${n.properties.filePath}`);\n    });\n    const handlers = structs.filter(s => s.startsWith('Handler@'));\n    expect(handlers.length).toBe(2);\n    expect(handlers.some(h => h.includes('src/models/'))).toBe(true);\n    expect(handlers.some(h => h.includes('src/other/'))).toBe(true);\n  });\n\n  it('import resolves to src/models/mod.rs (not src/other/mod.rs)', () => {\n    const imports = getRelationships(result, 'IMPORTS');\n    const modelsImport = imports.find(e => e.targetFilePath.includes('models'));\n    expect(modelsImport).toBeDefined();\n    expect(modelsImport!.targetFilePath).toBe('src/models/mod.rs');\n  });\n\n  it('no import edge to src/other/', () => {\n    const imports = getRelationships(result, 'IMPORTS');\n    for (const imp of imports) {\n      expect(imp.targetFilePath).not.toMatch(/src\\/other\\//);\n    }\n  });\n});\n\ndescribe('Rust call resolution with arity 
filtering', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'rust-calls'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves main → write_audit to src/onearg/mod.rs via arity narrowing', () => {\n    const calls = getRelationships(result, 'CALLS');\n    expect(calls.length).toBe(1);\n    expect(calls[0].source).toBe('main');\n    expect(calls[0].target).toBe('write_audit');\n    expect(calls[0].targetFilePath).toBe('src/onearg/mod.rs');\n    expect(calls[0].rel.reason).toBe('import-resolved');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Member-call resolution: obj.method() resolves through pipeline\n// ---------------------------------------------------------------------------\n\ndescribe('Rust member-call resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'rust-member-calls'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves process_user → save as a member call on User', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.source).toBe('process_user');\n    expect(saveCall!.targetFilePath).toBe('src/user.rs');\n  });\n\n  it('detects User struct and save function (Rust impl fns are Function nodes)', () => {\n    const structs: string[] = [];\n    result.graph.forEachNode(n => {\n      if (n.label === 'Struct') structs.push(n.properties.name);\n    });\n    expect(structs).toContain('User');\n    // Rust tree-sitter captures all function_item as Function, including impl methods\n    expect(getNodesByLabel(result, 'Function')).toContain('save');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Struct literal resolution: User { ... 
} resolves to Struct node\n// ---------------------------------------------------------------------------\n\ndescribe('Rust struct literal resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'rust-struct-literals'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves User { ... } as a CALLS edge to the User struct', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const ctorCall = calls.find(c => c.target === 'User');\n    expect(ctorCall).toBeDefined();\n    expect(ctorCall!.source).toBe('process_user');\n    expect(ctorCall!.targetLabel).toBe('Struct');\n    expect(ctorCall!.targetFilePath).toBe('user.rs');\n    expect(ctorCall!.rel.reason).toBe('import-resolved');\n  });\n\n  it('also resolves user.save() as a member call', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.source).toBe('process_user');\n  });\n\n  it('detects User struct and process_user function', () => {\n    const structs: string[] = [];\n    result.graph.forEachNode(n => {\n      if (n.label === 'Struct') structs.push(n.properties.name);\n    });\n    expect(structs).toContain('User');\n    expect(getNodesByLabel(result, 'Function')).toContain('process_user');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Receiver-constrained resolution: typed variables disambiguate same-named methods\n// ---------------------------------------------------------------------------\n\ndescribe('Rust receiver-constrained resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'rust-receiver-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo structs, both with save functions', () => {\n    
const structs: string[] = [];\n    result.graph.forEachNode(n => {\n      if (n.label === 'Struct') structs.push(n.properties.name);\n    });\n    expect(structs).toContain('User');\n    expect(structs).toContain('Repo');\n    // Rust tree-sitter captures impl fns as Function nodes\n    const saveFns = getNodesByLabel(result, 'Function').filter(m => m === 'save');\n    expect(saveFns.length).toBe(2);\n  });\n\n  it('resolves user.save() to User.save and repo.save() to Repo.save via receiver typing', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'save');\n    expect(saveCalls.length).toBe(2);\n\n    const userSave = saveCalls.find(c => c.targetFilePath === 'src/user.rs');\n    const repoSave = saveCalls.find(c => c.targetFilePath === 'src/repo.rs');\n\n    expect(userSave).toBeDefined();\n    expect(repoSave).toBeDefined();\n    expect(userSave!.source).toBe('process_entities');\n    expect(repoSave!.source).toBe('process_entities');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Alias import resolution: use crate::models::User as U resolves U → User\n// ---------------------------------------------------------------------------\n\ndescribe('Rust alias import resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'rust-alias-imports'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo structs with their methods', () => {\n    const structs: string[] = [];\n    result.graph.forEachNode(n => {\n      if (n.label === 'Struct') structs.push(n.properties.name);\n    });\n    expect(structs).toContain('User');\n    expect(structs).toContain('Repo');\n    expect(getNodesByLabel(result, 'Function')).toContain('save');\n    expect(getNodesByLabel(result, 'Function')).toContain('persist');\n  });\n\n  it('resolves u.save() to src/models.rs and 
r.persist() to src/models.rs via alias', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save');\n    const persistCall = calls.find(c => c.target === 'persist');\n\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.source).toBe('main');\n    expect(saveCall!.targetFilePath).toBe('src/models.rs');\n\n    expect(persistCall).toBeDefined();\n    expect(persistCall!.source).toBe('main');\n    expect(persistCall!.targetFilePath).toBe('src/models.rs');\n  });\n\n  it('emits exactly 1 IMPORTS edge: src/main.rs → src/models.rs', () => {\n    const imports = getRelationships(result, 'IMPORTS');\n    expect(imports.length).toBe(1);\n    expect(imports[0].sourceFilePath).toBe('src/main.rs');\n    expect(imports[0].targetFilePath).toBe('src/models.rs');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Re-export chain: pub use in mod.rs followed through to definition file\n// ---------------------------------------------------------------------------\n\ndescribe('Rust re-export chain resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'rust-reexport-chain'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects Handler struct in handler.rs', () => {\n    const structs: string[] = [];\n    result.graph.forEachNode(n => {\n      if (n.label === 'Struct') structs.push(`${n.properties.name}@${n.properties.filePath}`);\n    });\n    expect(structs).toContain('Handler@src/models/handler.rs');\n  });\n\n  it('resolves Handler { ... 
} to src/models/handler.rs via re-export chain, not mod.rs', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const ctorCall = calls.find(c => c.target === 'Handler');\n    expect(ctorCall).toBeDefined();\n    expect(ctorCall!.source).toBe('main');\n    expect(ctorCall!.targetLabel).toBe('Struct');\n    expect(ctorCall!.targetFilePath).toBe('src/models/handler.rs');\n  });\n\n  it('resolves h.process() to src/models/handler.rs', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const processCall = calls.find(c => c.target === 'process');\n    expect(processCall).toBeDefined();\n    expect(processCall!.source).toBe('main');\n    expect(processCall!.targetFilePath).toBe('src/models/handler.rs');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Local shadow: same-file definition takes priority over imported name\n// ---------------------------------------------------------------------------\n\ndescribe('Rust local definition shadows import', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'rust-local-shadow'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves run → save to same-file definition, not the imported one', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save' && c.source === 'run');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.targetFilePath).toBe('src/main.rs');\n  });\n\n  it('does NOT resolve save to utils.rs', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveToUtils = calls.find(c => c.target === 'save' && c.targetFilePath === 'src/utils.rs');\n    expect(saveToUtils).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Grouped imports: use crate::helpers::{func_a, func_b}\n// Verifies no spurious binding for 
the path prefix (e.g. \"helpers\")\n// ---------------------------------------------------------------------------\n\ndescribe('Rust grouped import resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'rust-grouped-imports'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves main → format_name to src/helpers/mod.rs', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const call = calls.find(c => c.target === 'format_name');\n    expect(call).toBeDefined();\n    expect(call!.source).toBe('main');\n    expect(call!.targetFilePath).toBe('src/helpers/mod.rs');\n    expect(call!.rel.reason).toBe('import-resolved');\n  });\n\n  it('resolves main → validate_email to src/helpers/mod.rs', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const call = calls.find(c => c.target === 'validate_email');\n    expect(call).toBeDefined();\n    expect(call!.source).toBe('main');\n    expect(call!.targetFilePath).toBe('src/helpers/mod.rs');\n    expect(call!.rel.reason).toBe('import-resolved');\n  });\n\n  it('does not create a spurious CALLS edge for the path prefix \"helpers\"', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const spurious = calls.find(c => c.target === 'helpers' || c.source === 'helpers');\n    expect(spurious).toBeUndefined();\n  });\n\n  it('emits exactly 1 IMPORTS edge: main.rs → helpers/mod.rs', () => {\n    const imports = getRelationships(result, 'IMPORTS');\n    expect(imports.length).toBe(1);\n    expect(imports[0].source).toBe('main.rs');\n    expect(imports[0].target).toBe('mod.rs');\n    expect(imports[0].targetFilePath).toBe('src/helpers/mod.rs');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Constructor-inferred type resolution: let user = User::new(); user.save()\n// Rust scoped_identifier constructor pattern (no explicit type annotations)\n// 
---------------------------------------------------------------------------\n\ndescribe('Rust constructor-inferred type resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'rust-constructor-type-inference'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo structs, both with save methods', () => {\n    expect(getNodesByLabel(result, 'Struct')).toContain('User');\n    expect(getNodesByLabel(result, 'Struct')).toContain('Repo');\n    const saveFns = getNodesByLabel(result, 'Function').filter(m => m === 'save');\n    expect(saveFns.length).toBe(2);\n  });\n\n  it('resolves user.save() to src/user.rs via constructor-inferred type', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c => c.target === 'save' && c.targetFilePath === 'src/user.rs');\n    expect(userSave).toBeDefined();\n    expect(userSave!.source).toBe('process_entities');\n  });\n\n  it('resolves repo.save() to src/repo.rs via constructor-inferred type', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c => c.target === 'save' && c.targetFilePath === 'src/repo.rs');\n    expect(repoSave).toBeDefined();\n    expect(repoSave!.source).toBe('process_entities');\n  });\n\n  it('emits exactly 2 save() CALLS edges (one per receiver type)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'save');\n    expect(saveCalls.length).toBe(2);\n  });\n});\n\n// ---------------------------------------------------------------------------\n// self.save() resolves to enclosing impl's own save method\n// ---------------------------------------------------------------------------\n\ndescribe('Rust self resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 
'rust-self-this-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo structs, each with a save function', () => {\n    expect(getNodesByLabel(result, 'Struct')).toContain('User');\n    expect(getNodesByLabel(result, 'Struct')).toContain('Repo');\n    const saveFns = getNodesByLabel(result, 'Function').filter(m => m === 'save');\n    expect(saveFns.length).toBe(2);\n  });\n\n  it('resolves self.save() inside User::process to User::save, not Repo::save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save' && c.source === 'process');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.targetFilePath).toBe('src/user.rs');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Trait impl emits IMPLEMENTS edge\n// ---------------------------------------------------------------------------\n\ndescribe('Rust parent resolution (trait impl)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'rust-parent-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User struct and Serializable trait', () => {\n    expect(getNodesByLabel(result, 'Struct')).toContain('User');\n    expect(getNodesByLabel(result, 'Trait')).toContain('Serializable');\n  });\n\n  it('emits IMPLEMENTS edge: User → Serializable (trait impl)', () => {\n    const implements_ = getRelationships(result, 'IMPLEMENTS');\n    expect(implements_.length).toBe(1);\n    expect(implements_[0].source).toBe('User');\n    expect(implements_[0].target).toBe('Serializable');\n    expect(implements_[0].rel.reason).toBe('trait-impl');\n  });\n\n  it('no EXTENDS edges (Rust has no class inheritance)', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    expect(extends_.length).toBe(0);\n  });\n});\n\n// 
---------------------------------------------------------------------------\n// Struct literal inference: let user = User { ... }; user.save()\n// ---------------------------------------------------------------------------\n\ndescribe('Rust struct literal type inference', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'rust-struct-literal-inference'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves user.save() via struct literal inference (User { ... })', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save' && c.targetFilePath === 'models.rs');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.source).toBe('main');\n  });\n\n  it('resolves config.validate() via struct literal inference (Config { ... })', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const validateCall = calls.find(c => c.target === 'validate' && c.targetFilePath === 'models.rs');\n    expect(validateCall).toBeDefined();\n    expect(validateCall!.source).toBe('main');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Rust Self {} struct literal: Self resolves to enclosing impl type\n// ---------------------------------------------------------------------------\n\ndescribe('Rust Self {} struct literal resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'rust-self-struct-literal'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves fresh.validate() inside impl User via Self {} inference', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const validateCall = calls.find(c => c.target === 'validate' && c.source === 'blank');\n    expect(validateCall).toBeDefined();\n    expect(validateCall!.targetFilePath).toBe('models.rs');\n  });\n});\n\n// 
---------------------------------------------------------------------------\n// if let / while let: captured_pattern type extraction\n// Extracts type from `user @ User { .. }` patterns in if-let/while-let\n// ---------------------------------------------------------------------------\n\ndescribe('Rust if-let captured_pattern type resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'rust-if-let'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Config structs with their methods', () => {\n    expect(getNodesByLabel(result, 'Struct')).toContain('User');\n    expect(getNodesByLabel(result, 'Struct')).toContain('Config');\n    expect(getNodesByLabel(result, 'Function')).toContain('save');\n    expect(getNodesByLabel(result, 'Function')).toContain('validate');\n  });\n\n  it('resolves user.save() inside if-let via captured_pattern binding', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save' && c.source === 'process_if_let');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.targetFilePath).toBe('models.rs');\n  });\n\n  it('resolves cfg.validate() inside while-let via captured_pattern binding', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const validateCall = calls.find(c => c.target === 'validate' && c.source === 'process_while_let');\n    expect(validateCall).toBeDefined();\n    expect(validateCall!.targetFilePath).toBe('models.rs');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Return type inference: let user = get_user(\"alice\"); user.save()\n// Plain function call (no ::new) with no type annotation\n// ---------------------------------------------------------------------------\n\ndescribe('Rust return type inference', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = 
await runPipelineFromRepo(\n      path.join(FIXTURES, 'rust-return-type'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User struct and get_user + save functions', () => {\n    expect(getNodesByLabel(result, 'Struct')).toContain('User');\n    expect(getNodesByLabel(result, 'Function')).toContain('get_user');\n    expect(getNodesByLabel(result, 'Function')).toContain('save');\n  });\n\n  it('resolves main → get_user as a CALLS edge to src/models.rs', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const getUserCall = calls.find(c => c.target === 'get_user' && c.source === 'main');\n    expect(getUserCall).toBeDefined();\n    expect(getUserCall!.targetFilePath).toBe('src/models.rs');\n  });\n\n  it('resolves user.save() to src/models.rs via return-type-inferred binding', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save' && c.source === 'main');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.targetFilePath).toBe('src/models.rs');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Return-type inference with competing methods:\n// Two structs both have save(), factory functions disambiguate via return type\n// ---------------------------------------------------------------------------\n\ndescribe('Rust return-type inference via function return type', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'rust-return-type-inference'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves user.save() to models.rs User#save via return type of get_user()', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'process_user' && c.targetFilePath.includes('models')\n    );\n    expect(saveCall).toBeDefined();\n  });\n\n  it('user.save() does NOT 
resolve to Repo#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process_user'\n    );\n    // Should resolve to exactly one target — if it resolves at all, check it's the right one\n    if (wrongSave) {\n      expect(wrongSave.targetFilePath).toContain('models');\n    }\n  });\n\n  it('resolves repo.save() to models.rs Repo#save via return type of get_repo()', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'process_repo' && c.targetFilePath.includes('models')\n    );\n    expect(saveCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Rust ::default() constructor resolution — scanner exclusion\n// ---------------------------------------------------------------------------\n\ndescribe('Rust ::default() constructor resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'rust-default-constructor'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo structs', () => {\n    const structs = getNodesByLabel(result, 'Struct');\n    expect(structs).toContain('User');\n    expect(structs).toContain('Repo');\n  });\n\n  it('detects save methods on both structs', () => {\n    const methods = [...getNodesByLabel(result, 'Function'), ...getNodesByLabel(result, 'Method')];\n    expect(methods.filter((m: string) => m === 'save').length).toBe(2);\n  });\n\n  it('resolves user.save() in process_with_new() via User::new() constructor', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'process_with_new' && c.targetFilePath.includes('user.rs'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n\n  it('resolves 
user.save() in process_with_default() via User::default() constructor', () => {\n    // User::default() should be resolved by extractInitializer (Tier 1),\n    // NOT by the scanner — the scanner excludes ::default() to avoid\n    // wasted cross-file lookups on the broadly-implemented Default trait\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'process_with_default' && c.targetFilePath.includes('user.rs'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n\n  it('disambiguates repo.save() in process_with_default() to Repo#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process_with_default' && c.targetFilePath.includes('repo.rs'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('does NOT cross-contaminate (user.save() does not resolve to Repo#save)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    // In process_with_new: user.save() should go to user.rs, not repo.rs\n    const wrongCall = calls.find(c =>\n      c.target === 'save' && c.source === 'process_with_new' && c.targetFilePath.includes('repo.rs'),\n    );\n    // Either undefined (correctly disambiguated) or present (both resolved) — no single wrong one\n    if (wrongCall) {\n      // If both are present, there should also be a correct one\n      const correctCall = calls.find(c =>\n        c.target === 'save' && c.source === 'process_with_new' && c.targetFilePath.includes('user.rs'),\n      );\n      expect(correctCall).toBeDefined();\n    }\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Rust async .await constructor binding resolution\n// Verifies that `let user = create_user().await` correctly unwraps the\n// await_expression to find the call_expression underneath, producing a\n// constructor binding that enables receiver-based 
disambiguation of user.save().\n// ---------------------------------------------------------------------------\n\ndescribe('Rust async .await constructor binding resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'rust-async-binding'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo structs', () => {\n    const structs = getNodesByLabel(result, 'Struct');\n    expect(structs).toContain('User');\n    expect(structs).toContain('Repo');\n  });\n\n  it('detects save methods in separate files', () => {\n    const methods = [...getNodesByLabel(result, 'Function'), ...getNodesByLabel(result, 'Method')];\n    expect(methods.filter((m: string) => m === 'save').length).toBe(2);\n  });\n\n  it('resolves user.save() after .await to user.rs via return type of get_user()', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'process_user' && c.targetFilePath.includes('user'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n\n  it('user.save() does NOT resolve to Repo#save in repo.rs', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process_user' && c.targetFilePath.includes('repo'),\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n\n  it('resolves repo.save() after .await to repo.rs via return type of get_repo()', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'process_repo' && c.targetFilePath.includes('repo'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n\n  it('repo.save() does NOT resolve to User#save in user.rs', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'save' && c.source === 
'process_repo' && c.targetFilePath.includes('user'),\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Nullable receiver: let user: Option<User> = find_user(); user.unwrap().save()\n// Rust Option<User> — stripNullable unwraps Option wrapper to inner type.\n// ---------------------------------------------------------------------------\n\ndescribe('Rust nullable receiver resolution (Option<T>)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'rust-nullable-receiver'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo structs, both with save functions', () => {\n    expect(getNodesByLabel(result, 'Struct')).toContain('User');\n    expect(getNodesByLabel(result, 'Struct')).toContain('Repo');\n    const saveFns = getNodesByLabel(result, 'Function').filter(m => m === 'save');\n    expect(saveFns.length).toBe(2);\n  });\n\n  it('resolves user.unwrap().save() to User#save via Option<User> unwrapping', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' &&\n      c.source === 'process_entities' &&\n      c.targetFilePath?.includes('user'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('resolves repo.unwrap().save() to Repo#save via Option<Repo> unwrapping', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' &&\n      c.source === 'process_entities' &&\n      c.targetFilePath?.includes('repo'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Assignment chain propagation (Phase 4.3)\n// ---------------------------------------------------------------------------\n\ndescribe('Rust assignment chain propagation', () => {\n 
 let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'rust-assignment-chain'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo structs each with a save function', () => {\n    expect(getNodesByLabel(result, 'Struct')).toContain('User');\n    expect(getNodesByLabel(result, 'Struct')).toContain('Repo');\n    const saveFns = getNodesByLabel(result, 'Function').filter(m => m === 'save');\n    expect(saveFns.length).toBe(2);\n  });\n\n  it('resolves alias.save() to User#save via assignment chain', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process_entities' && c.targetFilePath?.includes('user.rs'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('resolves r_alias.save() to Repo#save via assignment chain', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process_entities' && c.targetFilePath?.includes('repo.rs'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('alias.save() does NOT resolve to Repo#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'save' && c.source === 'process_entities');\n    expect(saveCalls.filter(c => c.targetFilePath?.includes('user.rs')).length).toBe(1);\n    expect(saveCalls.filter(c => c.targetFilePath?.includes('repo.rs')).length).toBe(1);\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Rust Option<User> receiver resolution — extractSimpleTypeName unwraps\n// Option<User> to \"User\" via NULLABLE_WRAPPER_TYPES. 
The variable declared\n// as Option<User> now stores \"User\" in TypeEnv, enabling direct receiver\n// disambiguation without chained .unwrap() inference.\n// ---------------------------------------------------------------------------\n\ndescribe('Rust Option<User> receiver resolution via wrapper unwrapping', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'rust-option-receiver'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo structs each with a save function', () => {\n    expect(getNodesByLabel(result, 'Struct')).toContain('User');\n    expect(getNodesByLabel(result, 'Struct')).toContain('Repo');\n    const saveFns = getNodesByLabel(result, 'Function').filter(m => m === 'save');\n    expect(saveFns.length).toBe(2);\n  });\n\n  it('resolves alias.save() to User#save via Option<User> → assignment chain', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process_entities' && c.targetFilePath?.includes('user.rs'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('resolves repo.save() to Repo#save alongside Option usage', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process_entities' && c.targetFilePath?.includes('repo.rs'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// if let Some(user) = opt — Phase 5.2 pattern binding: unwrap Option<T>\n// `opt: Option<User>` → Option<User> is stored as \"User\" in TypeEnv via\n// NULLABLE_WRAPPER_TYPES. 
extractPatternBinding maps `user` → \"User\".\n// Disambiguation: User.save vs Repo.save — only User.save should be called.\n// ---------------------------------------------------------------------------\n\ndescribe('Rust if-let Some(x) = opt pattern binding (Phase 5.2)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'rust-if-let-unwrap'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo structs each with a save function', () => {\n    expect(getNodesByLabel(result, 'Struct')).toContain('User');\n    expect(getNodesByLabel(result, 'Struct')).toContain('Repo');\n    const saveFns = getNodesByLabel(result, 'Function').filter(f => f === 'save');\n    expect(saveFns.length).toBe(2);\n  });\n\n  it('resolves user.save() inside if-let Some(user) = opt to User#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' &&\n      c.source === 'process' &&\n      c.targetFilePath?.includes('user.rs'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('does NOT resolve user.save() to Repo#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' &&\n      c.source === 'process' &&\n      c.targetFilePath?.includes('repo.rs'),\n    );\n    expect(repoSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Rust if-let Err(e) = res pattern binding (Phase 5 review fix)\n// Result<User, AppError> → Err(e) should type e as AppError (typeArgs[1]).\n// Also tests Ok(user) in the same fixture to verify both arms work.\n// ---------------------------------------------------------------------------\n\ndescribe('Rust if-let Err(e) pattern binding (Phase 5 review fix)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await 
runPipelineFromRepo(\n      path.join(FIXTURES, 'rust-err-unwrap'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and AppError structs', () => {\n    const structs = getNodesByLabel(result, 'Struct');\n    expect(structs).toContain('User');\n    expect(structs).toContain('AppError');\n  });\n\n  it('resolves e.report() inside if-let Err(e) to AppError#report', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const reportCall = calls.find(c =>\n      c.target === 'report' &&\n      c.source === 'handle_err' &&\n      c.targetFilePath?.includes('error.rs'),\n    );\n    expect(reportCall).toBeDefined();\n  });\n\n  it('resolves user.save() inside if-let Ok(user) to User#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' &&\n      c.source === 'handle_ok' &&\n      c.targetFilePath?.includes('user.rs'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n\n  it('does NOT resolve e.report() to User#save (no cross-contamination)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongCall = calls.find(c =>\n      c.target === 'save' &&\n      c.source === 'handle_err',\n    );\n    expect(wrongCall).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Chained method calls: svc.get_user().save()\n// Tests that Rust chain call resolution correctly infers the intermediate\n// receiver type from get_user()'s return type and resolves save() to User.\n// ---------------------------------------------------------------------------\n\ndescribe('Rust chained method call resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'rust-chain-call'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo structs, and UserService', () => {\n    expect(getNodesByLabel(result, 
'Struct')).toContain('User');\n    expect(getNodesByLabel(result, 'Struct')).toContain('Repo');\n    expect(getNodesByLabel(result, 'Struct')).toContain('UserService');\n  });\n\n  it('detects get_user and save functions', () => {\n    const fns = getNodesByLabel(result, 'Function');\n    expect(fns).toContain('get_user');\n    expect(fns).toContain('save');\n  });\n\n  it('resolves svc.get_user().save() to User#save via chain resolution', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' &&\n      c.source === 'process_user' &&\n      c.targetFilePath?.includes('user.rs'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('does NOT resolve svc.get_user().save() to Repo#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' &&\n      c.source === 'process_user' &&\n      c.targetFilePath?.includes('repo.rs'),\n    );\n    expect(repoSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Rust for-loop Tier 1c: for user in &users with Vec<User> parameter\n// ---------------------------------------------------------------------------\n\ndescribe('Rust for-loop type resolution (Tier 1c)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'rust-for-loop'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo structs with save functions', () => {\n    expect(getNodesByLabel(result, 'Struct')).toContain('User');\n    expect(getNodesByLabel(result, 'Struct')).toContain('Repo');\n    const saveFns = getNodesByLabel(result, 'Function').filter(f => f === 'save');\n    expect(saveFns.length).toBe(2);\n  });\n\n  it('resolves user.save() in for-loop to User#save via Tier 1c', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const 
userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process_users' && c.targetFilePath?.includes('user.rs'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('does NOT resolve user.save() to Repo#save (negative)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process_users' && c.targetFilePath?.includes('repo.rs'),\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n\n  it('resolves repo.save() in for-loop to Repo#save via Tier 1c', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process_repos' && c.targetFilePath?.includes('repo.rs'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('does NOT resolve repo.save() to User#save (negative)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process_repos' && c.targetFilePath?.includes('user.rs'),\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Rust match arm: match opt { Some(user) => user.save() }\n// ---------------------------------------------------------------------------\n\ndescribe('Rust match arm type resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'rust-match-unwrap'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo structs with save functions', () => {\n    expect(getNodesByLabel(result, 'Struct')).toContain('User');\n    expect(getNodesByLabel(result, 'Struct')).toContain('Repo');\n    const saveFns = getNodesByLabel(result, 'Function').filter(f => f === 'save');\n    expect(saveFns.length).toBe(2);\n  });\n\n  it('resolves user.save() inside match Some(user) to 
User#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process' && c.targetFilePath?.includes('user.rs'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('does NOT resolve user.save() in match to Repo#save (negative)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process' && c.targetFilePath?.includes('repo.rs'),\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n\n  it('resolves repo.save() inside if-let Ok(repo) to Repo#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'check' && c.targetFilePath?.includes('repo.rs'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('does NOT resolve repo.save() in if-let to User#save (negative)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'save' && c.source === 'check' && c.targetFilePath?.includes('user.rs'),\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// for user in users.iter() — call_expression iterable resolution\n// ---------------------------------------------------------------------------\n\ndescribe('Rust .iter() for-loop call_expression resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'rust-iter-for-loop'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo structs with save functions', () => {\n    expect(getNodesByLabel(result, 'Struct')).toContain('User');\n    expect(getNodesByLabel(result, 'Struct')).toContain('Repo');\n    const saveFns = getNodesByLabel(result, 'Function').filter(f => f === 
'save');\n    expect(saveFns.length).toBe(2);\n  });\n\n  it('resolves user.save() via users.iter() to User#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process_users' && c.targetFilePath?.includes('user.rs'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('resolves repo.save() via repos.into_iter() to Repo#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process_repos' && c.targetFilePath?.includes('repo.rs'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('does NOT cross-resolve user.save() to Repo#save (negative)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process_users' && c.targetFilePath?.includes('repo.rs'),\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// for user in get_users() — direct call_expression iterable resolution\n// Phase 7.3: unlike rust-iter-for-loop (typed variable .iter()), this tests\n// iterating over a function call's return value directly.\n// ---------------------------------------------------------------------------\n\ndescribe('Rust for-loop direct call_expression iterable resolution (Phase 7.3)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'rust-for-call-expr'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo structs with competing save functions', () => {\n    expect(getNodesByLabel(result, 'Struct')).toContain('User');\n    expect(getNodesByLabel(result, 'Struct')).toContain('Repo');\n    const saveFns = getNodesByLabel(result, 'Function').filter(f => f === 'save');\n    
expect(saveFns.length).toBe(2);\n  });\n\n  it('resolves user.save() in for-loop over get_users() to User#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process_users' && c.targetFilePath?.includes('user.rs'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('resolves repo.save() in for-loop over get_repos() to Repo#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process_repos' && c.targetFilePath?.includes('repo.rs'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('does NOT resolve user.save() to Repo#save (negative)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process_users' && c.targetFilePath?.includes('repo.rs'),\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n\n  it('does NOT resolve repo.save() to User#save (negative)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process_repos' && c.targetFilePath?.includes('user.rs'),\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Phase 8: Field/property type resolution — struct field capture\n// ---------------------------------------------------------------------------\n\ndescribe('Field type resolution (Rust)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'rust-field-types'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects structs: Address, User', () => {\n    expect(getNodesByLabel(result, 'Struct')).toEqual(['Address', 'User']);\n  });\n\n  it('detects Property nodes for Rust struct fields', 
() => {\n    const properties = getNodesByLabel(result, 'Property');\n    expect(properties).toContain('address');\n    expect(properties).toContain('name');\n    expect(properties).toContain('city');\n  });\n\n  it('emits HAS_PROPERTY edges linking fields to structs', () => {\n    const propEdges = getRelationships(result, 'HAS_PROPERTY');\n    expect(propEdges.length).toBe(3);\n    expect(edgeSet(propEdges)).toContain('User → name');\n    expect(edgeSet(propEdges)).toContain('User → address');\n    expect(edgeSet(propEdges)).toContain('Address → city');\n  });\n\n  it('resolves user.address.save() → Address#save via field type', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(\n      e => e.target === 'save' && e.source === 'process_user',\n    );\n    expect(saveCalls.length).toBe(1);\n    expect(saveCalls[0].targetFilePath).toContain('models');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Phase 8B: Deep field chain resolution (3-level)\n// ---------------------------------------------------------------------------\n\ndescribe('Deep field chain resolution (Rust)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'rust-deep-field-chain'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects structs: Address, City, User', () => {\n    expect(getNodesByLabel(result, 'Struct')).toEqual(['Address', 'City', 'User']);\n  });\n\n  it('detects Property nodes for Rust struct fields', () => {\n    const properties = getNodesByLabel(result, 'Property');\n    expect(properties).toContain('address');\n    expect(properties).toContain('city');\n    expect(properties).toContain('zip_code');\n  });\n\n  it('emits HAS_PROPERTY edges for nested type chain', () => {\n    const propEdges = getRelationships(result, 'HAS_PROPERTY');\n    expect(propEdges.length).toBe(5);\n    
expect(edgeSet(propEdges)).toContain('User → name');\n    expect(edgeSet(propEdges)).toContain('User → address');\n    expect(edgeSet(propEdges)).toContain('Address → city');\n    expect(edgeSet(propEdges)).toContain('Address → street');\n    expect(edgeSet(propEdges)).toContain('City → zip_code');\n  });\n\n  it('resolves 2-level chain: user.address.save() → Address#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(e => e.target === 'save' && e.source === 'process_user');\n    const addressSave = saveCalls.find(e => e.targetFilePath.includes('models'));\n    expect(addressSave).toBeDefined();\n  });\n\n  it('resolves 3-level chain: user.address.city.get_name() → City#get_name', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const getNameCalls = calls.filter(e => e.target === 'get_name' && e.source === 'process_user');\n    const cityGetName = getNameCalls.find(e => e.targetFilePath.includes('models'));\n    expect(cityGetName).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// ACCESSES write edges from assignment expressions\n// ---------------------------------------------------------------------------\n\ndescribe('Write access tracking (Rust)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'rust-write-access'),\n      () => {},\n    );\n  }, 60000);\n\n  it('emits ACCESSES write edges for field assignments', () => {\n    const accesses = getRelationships(result, 'ACCESSES');\n    const writes = accesses.filter(e => e.rel.reason === 'write');\n    expect(writes.length).toBe(3);\n    const fieldNames = writes.map(e => e.target);\n    expect(fieldNames).toContain('name');\n    expect(fieldNames).toContain('address');\n    expect(fieldNames).toContain('score');\n    const sources = writes.map(e => e.source);\n    expect(sources).toContain('update_user');\n  });\n\n  it('write ACCESSES edges have confidence 1.0', () => {\n 
   const accesses = getRelationships(result, 'ACCESSES');\n    const writes = accesses.filter(e => e.rel.reason === 'write');\n    for (const edge of writes) {\n      expect(edge.rel.confidence).toBe(1.0);\n    }\n  });\n\n  it('emits ACCESSES write edge for compound assignment', () => {\n    const accesses = getRelationships(result, 'ACCESSES');\n    const writes = accesses.filter(e => e.rel.reason === 'write');\n    const scoreWrite = writes.find(e => e.target === 'score');\n    expect(scoreWrite).toBeDefined();\n    expect(scoreWrite!.source).toBe('update_user');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Call-result variable binding (Phase 9): let user = get_user(); user.save()\n// ---------------------------------------------------------------------------\n\ndescribe('Rust call-result variable binding (Tier 2b)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'rust-call-result-binding'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves user.save() to User#save via call-result binding', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'process_user' && c.targetFilePath.includes('models')\n    );\n    expect(saveCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Method chain binding (Phase 9C): get_user() → .address → .get_city() → .save()\n// ---------------------------------------------------------------------------\n\ndescribe('Rust method chain binding via unified fixpoint (Phase 9C)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'rust-method-chain-binding'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves city.save() to City#save via method chain', () 
=> {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'process_chain' && c.targetFilePath.includes('models')\n    );\n    expect(saveCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Phase A: Rust struct_pattern destructuring — let Point { x, y } = p\n// Each field emits a fieldAccess PendingAssignment; fixpoint resolves x/y → Vec2\n// ---------------------------------------------------------------------------\n\ndescribe('Rust struct_pattern destructuring resolution (Phase A)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'rust-struct-destructuring'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects Point and Vec2 structs', () => {\n    const classes = getNodesByLabel(result, 'Struct');\n    expect(classes).toContain('Point');\n    expect(classes).toContain('Vec2');\n  });\n\n  it('resolves x.save() to Vec2#save via struct destructuring', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'process' && c.targetFilePath.includes('vec2'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n\n  it('resolves both x.save() and y.save() — emits at least 1 CALLS to Vec2#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'save' && c.targetFilePath.includes('vec2'));\n    // Both x and y are Vec2 — the same function, so calls may deduplicate to 1\n    expect(saveCalls.length).toBeGreaterThanOrEqual(1);\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/integration/resolvers/swift.test.ts",
    "content": "/**\n * Swift: constructor-inferred type resolution for member calls.\n * Verifies that `let user = User(name: \"alice\"); user.save()` resolves to User.save\n * without explicit type annotations, using SymbolTable verification.\n *\n * NOTE: tree-sitter-swift has build issues on Node 22 — these tests skip gracefully\n * when the Swift parser is not available.\n */\nimport { describe, it, expect, beforeAll } from 'vitest';\nimport path from 'path';\nimport {\n  FIXTURES, getRelationships, getNodesByLabel,\n  runPipelineFromRepo, type PipelineResult,\n} from './helpers.js';\nimport { isLanguageAvailable } from '../../../src/core/tree-sitter/parser-loader.js';\nimport { SupportedLanguages } from '../../../src/config/supported-languages.js';\n\nconst swiftAvailable = isLanguageAvailable(SupportedLanguages.Swift);\n\ndescribe.skipIf(!swiftAvailable)('Swift constructor-inferred type resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'swift-constructor-type-inference'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes, both with save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveFns = getNodesByLabel(result, 'Function').filter(m => m === 'save');\n    expect(saveFns.length).toBe(2);\n  });\n\n  it('resolves user.save() to Models/User.swift via constructor-inferred type', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c => c.target === 'save' && c.targetFilePath === 'Models/User.swift');\n    expect(userSave).toBeDefined();\n    expect(userSave!.source).toBe('processEntities');\n  });\n\n  it('resolves repo.save() to Models/Repo.swift via constructor-inferred type', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c => 
c.target === 'save' && c.targetFilePath === 'Models/Repo.swift');\n    expect(repoSave).toBeDefined();\n    expect(repoSave!.source).toBe('processEntities');\n  });\n\n  it('emits exactly 2 save() CALLS edges (one per receiver type)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'save');\n    expect(saveCalls.length).toBe(2);\n  });\n});\n\n// ---------------------------------------------------------------------------\n// self.save() resolves to enclosing class's own save method\n// Build-dep issue (NOT a feature gap): tree-sitter-swift has build issues on Node 22.\n// The self/super resolution code already exists in type-env.ts lookupInEnv (lines 56-66).\n// ---------------------------------------------------------------------------\n\ndescribe.skip('Swift self resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'swift-self-this-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes, each with a save function', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['Repo', 'User']);\n    const saveFns = getNodesByLabel(result, 'Function').filter(m => m === 'save');\n    expect(saveFns.length).toBe(2);\n  });\n\n  it('resolves self.save() inside User.process to User.save, not Repo.save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save' && c.source === 'process');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.targetFilePath).toBe('Sources/Models/User.swift');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Parent class resolution: EXTENDS + protocol conformance\n// Build-dep issue (NOT a feature gap): tree-sitter-swift has build issues on Node 22.\n// findEnclosingParentClassName in type-env.ts already has Swift 
inheritance_specifier handler.\n// ---------------------------------------------------------------------------\n\ndescribe.skip('Swift parent resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'swift-parent-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects BaseModel and User classes plus Serializable protocol', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['BaseModel', 'User']);\n    expect(getNodesByLabel(result, 'Interface')).toEqual(['Serializable']);\n  });\n\n  it('emits EXTENDS edge: User → BaseModel', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    const extendsEdge = extends_.find(e => e.source === 'User' && e.target === 'BaseModel');\n    expect(extendsEdge).toBeDefined();\n  });\n\n  it('emits IMPLEMENTS edge: User → Serializable (protocol conformance)', () => {\n    const implements_ = getRelationships(result, 'IMPLEMENTS');\n    const implEdge = implements_.find(e => e.source === 'User' && e.target === 'Serializable');\n    expect(implEdge).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Swift cross-file User.init() type inference\n// ---------------------------------------------------------------------------\n\ndescribe.skipIf(!swiftAvailable)('Swift cross-file User.init() inference', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'swift-init-cross-file'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves user.save() via User.init(name:) inference', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save' && c.targetFilePath === 'User.swift');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.source).toBe('main');\n  });\n\n  it('resolves user.greet() via 
User.init(name:) inference', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const greetCall = calls.find(c => c.target === 'greet' && c.targetFilePath === 'User.swift');\n    expect(greetCall).toBeDefined();\n    expect(greetCall!.source).toBe('main');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Return type inference: let user = getUser(name: \"alice\"); user.save()\n// Swift's CONSTRUCTOR_BINDING_SCANNER captures property_declaration with\n// call_expression values, enabling return type inference from function results.\n// ---------------------------------------------------------------------------\n\ndescribe.skipIf(!swiftAvailable)('Swift return type inference', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'swift-return-type'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User class and getUser function', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Function')).toContain('getUser');\n  });\n\n  it('detects save function on User (Swift class methods are Function nodes)', () => {\n    expect(getNodesByLabel(result, 'Function')).toContain('save');\n  });\n\n  it('resolves user.save() to User#save via return type of getUser() -> User', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processUser' && c.targetFilePath.includes('Models.swift'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Return-type inference with competing methods:\n// Two classes both have save(), factory functions disambiguate via return type\n// ---------------------------------------------------------------------------\n\ndescribe.skipIf(!swiftAvailable)('Swift 
return-type inference via function return type', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'swift-return-type-inference'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves user.save() to User#save via return type of getUser()', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processUser' && c.targetFilePath.includes('Models.swift')\n    );\n    expect(saveCall).toBeDefined();\n  });\n\n  it('user.save() does NOT fan out to multiple save() targets', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c =>\n      c.target === 'save' && c.source === 'processUser'\n    );\n    // Should resolve to at most one target, and any resolved edge must point into\n    // Models.swift. (User and Repo both live in Models.swift, so filepath alone cannot\n    // tell the two save() methods apart; the length check guards against ambiguous fan-out.)\n    expect(saveCalls.length).toBeLessThanOrEqual(1);\n    for (const saveCall of saveCalls) {\n      expect(saveCall.targetFilePath).toContain('Models.swift');\n    }\n  });\n\n  it('resolves repo.save() to Repo#save via return type of getRepo()', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processRepo' && c.targetFilePath.includes('Models.swift')\n    );\n    expect(saveCall).toBeDefined();\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/integration/resolvers/typescript.test.ts",
    "content": "/**\n * TypeScript: heritage resolution + ambiguous symbol disambiguation\n */\nimport { describe, it, expect, beforeAll } from 'vitest';\nimport path from 'path';\nimport {\n  FIXTURES, getRelationships, getNodesByLabel, getNodesByLabelFull, edgeSet,\n  runPipelineFromRepo, type PipelineResult,\n} from './helpers.js';\n\n// ---------------------------------------------------------------------------\n// Heritage: class extends + implements interface\n// ---------------------------------------------------------------------------\n\ndescribe('TypeScript heritage resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'typescript-ambiguous'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects exactly 3 classes and 1 interface', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['BaseService', 'ConsoleLogger', 'UserService']);\n    expect(getNodesByLabel(result, 'Interface')).toEqual(['ILogger']);\n  });\n\n  it('emits exactly 3 IMPORTS edges', () => {\n    const imports = getRelationships(result, 'IMPORTS');\n    expect(imports.length).toBe(3);\n    expect(edgeSet(imports)).toEqual([\n      'logger.ts → models.ts',\n      'service.ts → logger.ts',\n      'service.ts → models.ts',\n    ]);\n  });\n\n  it('emits exactly 1 EXTENDS edge: UserService → BaseService', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    expect(extends_.length).toBe(1);\n    expect(extends_[0].source).toBe('UserService');\n    expect(extends_[0].target).toBe('BaseService');\n  });\n\n  it('emits exactly 2 IMPLEMENTS edges', () => {\n    const implements_ = getRelationships(result, 'IMPLEMENTS');\n    expect(implements_.length).toBe(2);\n    expect(edgeSet(implements_)).toEqual([\n      'ConsoleLogger → ILogger',\n      'UserService → ILogger',\n    ]);\n  });\n\n  it('emits HAS_METHOD edges linking methods to classes', () => {\n    const hasMethod = 
getRelationships(result, 'HAS_METHOD');\n    expect(hasMethod.length).toBe(4);\n    expect(edgeSet(hasMethod)).toEqual([\n      'BaseService → getName',\n      'ConsoleLogger → log',\n      'UserService → getUsers',\n      'UserService → log',\n    ]);\n  });\n\n  it('emits HAS_PROPERTY edge for class fields', () => {\n    const hasProperty = getRelationships(result, 'HAS_PROPERTY');\n    expect(hasProperty.length).toBe(1);\n    expect(edgeSet(hasProperty)).toEqual(['BaseService → name']);\n  });\n\n  it('no OVERRIDES edges target Property nodes', () => {\n    const overrides = getRelationships(result, 'OVERRIDES');\n    for (const edge of overrides) {\n      const target = result.graph.getNode(edge.rel.targetId);\n      expect(target).toBeDefined();\n      expect(target!.label).not.toBe('Property');\n    }\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Ambiguous: multiple definitions, imports disambiguate\n// ---------------------------------------------------------------------------\n\ndescribe('TypeScript ambiguous symbol resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'typescript-ambiguous'),\n      () => {},\n    );\n  }, 60000);\n\n  it('UserService has exactly 1 EXTENDS + 1 IMPLEMENTS', () => {\n    const extends_ = getRelationships(result, 'EXTENDS').filter(e => e.source === 'UserService');\n    const implements_ = getRelationships(result, 'IMPLEMENTS').filter(e => e.source === 'UserService');\n    expect(extends_.length).toBe(1);\n    expect(implements_.length).toBe(1);\n  });\n\n  it('ConsoleLogger has exactly 1 IMPLEMENTS and 0 EXTENDS', () => {\n    const extends_ = getRelationships(result, 'EXTENDS').filter(e => e.source === 'ConsoleLogger');\n    const implements_ = getRelationships(result, 'IMPLEMENTS').filter(e => e.source === 'ConsoleLogger');\n    expect(extends_.length).toBe(0);\n    
expect(implements_.length).toBe(1);\n    expect(implements_[0].target).toBe('ILogger');\n  });\n\n  it('all heritage edges point to real graph nodes', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    const implements_ = getRelationships(result, 'IMPLEMENTS');\n\n    for (const edge of [...extends_, ...implements_]) {\n      const target = result.graph.getNode(edge.rel.targetId);\n      expect(target).toBeDefined();\n      expect(target!.properties.name).toBe(edge.target);\n    }\n  });\n});\n\ndescribe('TypeScript call resolution with arity filtering', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'typescript-calls'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves run → writeAudit to src/one.ts via arity narrowing', () => {\n    const calls = getRelationships(result, 'CALLS');\n    expect(calls.length).toBe(1);\n    expect(calls[0].source).toBe('run');\n    expect(calls[0].target).toBe('writeAudit');\n    expect(calls[0].targetFilePath).toBe('src/one.ts');\n    expect(calls[0].rel.reason).toBe('import-resolved');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Member-call resolution: obj.method() resolves through pipeline\n// ---------------------------------------------------------------------------\n\ndescribe('TypeScript member-call resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'typescript-member-calls'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves processUser → save as a member call on User', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.source).toBe('processUser');\n    expect(saveCall!.targetFilePath).toBe('src/user.ts');\n  });\n\n  
it('detects User class and save method', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Method')).toContain('save');\n  });\n\n  it('emits HAS_METHOD edge from User to save', () => {\n    const hasMethod = getRelationships(result, 'HAS_METHOD');\n    const edge = hasMethod.find(e => e.source === 'User' && e.target === 'save');\n    expect(edge).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Constructor resolution: new Foo() resolves to Class/Constructor\n// ---------------------------------------------------------------------------\n\ndescribe('TypeScript constructor-call resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'typescript-constructor-calls'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves new User() as a CALLS edge to the User class', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const ctorCall = calls.find(c => c.target === 'User');\n    expect(ctorCall).toBeDefined();\n    expect(ctorCall!.source).toBe('processUser');\n    expect(ctorCall!.targetLabel).toBe('Class');\n    expect(ctorCall!.targetFilePath).toBe('src/user.ts');\n  });\n\n  it('also resolves user.save() as a member call', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.source).toBe('processUser');\n  });\n\n  it('detects User class, save method, and processUser function', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Method')).toContain('save');\n    expect(getNodesByLabel(result, 'Function')).toContain('processUser');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Receiver-constrained 
resolution: typed variables disambiguate same-named methods\n// ---------------------------------------------------------------------------\n\ndescribe('TypeScript receiver-constrained resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'typescript-receiver-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes, both with save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves user.save() to User.save and repo.save() to Repo.save via receiver typing', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'save');\n    expect(saveCalls.length).toBe(2);\n\n    const userSave = saveCalls.find(c => c.targetFilePath === 'src/user.ts');\n    const repoSave = saveCalls.find(c => c.targetFilePath === 'src/repo.ts');\n\n    expect(userSave).toBeDefined();\n    expect(repoSave).toBeDefined();\n    expect(userSave!.source).toBe('processEntities');\n    expect(repoSave!.source).toBe('processEntities');\n  });\n\n  it('resolves constructor calls for both User and Repo', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userCtor = calls.find(c => c.target === 'User' && c.targetLabel === 'Class');\n    const repoCtor = calls.find(c => c.target === 'Repo' && c.targetLabel === 'Class');\n    expect(userCtor).toBeDefined();\n    expect(repoCtor).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Scoped receiver resolution: same variable name in different functions\n// resolves to different types via scope-aware TypeEnv\n// 
---------------------------------------------------------------------------\n\ndescribe('TypeScript scoped receiver resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'typescript-scoped-receiver'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes, both with save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves entity.save() in handleUser to User.save and in handleRepo to Repo.save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'save');\n    expect(saveCalls.length).toBe(2);\n\n    const userSave = saveCalls.find(c => c.targetFilePath === 'src/user.ts');\n    const repoSave = saveCalls.find(c => c.targetFilePath === 'src/repo.ts');\n\n    expect(userSave).toBeDefined();\n    expect(repoSave).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Named import disambiguation: two files export same name, import resolves\n// ---------------------------------------------------------------------------\n\ndescribe('TypeScript named import disambiguation', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'typescript-named-imports'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves processInput → formatData to src/format-upper.ts via named import', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const formatCall = calls.find(c => c.target === 'formatData');\n    expect(formatCall).toBeDefined();\n    expect(formatCall!.source).toBe('processInput');\n    
expect(formatCall!.targetFilePath).toBe('src/format-upper.ts');\n  });\n\n  it('emits IMPORTS edge to format-upper.ts', () => {\n    const imports = getRelationships(result, 'IMPORTS');\n    const appImport = imports.find(e => e.source === 'app.ts');\n    expect(appImport).toBeDefined();\n    expect(appImport!.targetFilePath).toBe('src/format-upper.ts');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Alias import resolution: import { User as U } resolves U → User\n// ---------------------------------------------------------------------------\n\ndescribe('TypeScript alias import resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'typescript-alias-imports'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes with their methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['Repo', 'User']);\n    expect(getNodesByLabel(result, 'Method')).toContain('save');\n    expect(getNodesByLabel(result, 'Method')).toContain('persist');\n  });\n\n  it('resolves new U() to User class and new R() to Repo class via alias', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userCtor = calls.find(c => c.target === 'User' && c.targetLabel === 'Class');\n    const repoCtor = calls.find(c => c.target === 'Repo' && c.targetLabel === 'Class');\n\n    expect(userCtor).toBeDefined();\n    expect(userCtor!.source).toBe('main');\n    expect(userCtor!.targetFilePath).toBe('src/models.ts');\n\n    expect(repoCtor).toBeDefined();\n    expect(repoCtor!.source).toBe('main');\n    expect(repoCtor!.targetFilePath).toBe('src/models.ts');\n  });\n\n  it('resolves u.save() and r.persist() as member calls', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save');\n    const persistCall = calls.find(c => c.target === 
'persist');\n\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.source).toBe('main');\n\n    expect(persistCall).toBeDefined();\n    expect(persistCall!.source).toBe('main');\n  });\n\n  it('emits IMPORTS edge from app.ts to models.ts', () => {\n    const imports = getRelationships(result, 'IMPORTS');\n    const appImport = imports.find(e => e.sourceFilePath === 'src/app.ts');\n    expect(appImport).toBeDefined();\n    expect(appImport!.targetFilePath).toBe('src/models.ts');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Re-export chain: export { X } from './base' barrel pattern\n// ---------------------------------------------------------------------------\n\ndescribe('TypeScript re-export chain resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'typescript-reexport-chain'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes in base.ts', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['Repo', 'User']);\n  });\n\n  it('resolves new User() through re-export chain to base.ts', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userCtor = calls.find(c => c.target === 'User' && c.targetLabel === 'Class');\n    expect(userCtor).toBeDefined();\n    expect(userCtor!.source).toBe('main');\n    expect(userCtor!.targetFilePath).toBe('src/base.ts');\n  });\n\n  it('resolves user.save() through re-export chain to base.ts', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.source).toBe('main');\n    expect(saveCall!.targetFilePath).toBe('src/base.ts');\n  });\n\n  it('resolves new Repo() through re-export chain to base.ts', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoCtor = calls.find(c => c.target 
=== 'Repo' && c.targetLabel === 'Class');\n    expect(repoCtor).toBeDefined();\n    expect(repoCtor!.source).toBe('main');\n    expect(repoCtor!.targetFilePath).toBe('src/base.ts');\n  });\n\n  it('resolves repo.persist() through re-export chain to base.ts', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const persistCall = calls.find(c => c.target === 'persist');\n    expect(persistCall).toBeDefined();\n    expect(persistCall!.source).toBe('main');\n    expect(persistCall!.targetFilePath).toBe('src/base.ts');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Re-export type chain: export type { X } from './base' barrel pattern\n// ---------------------------------------------------------------------------\n\ndescribe('TypeScript export type re-export chain resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'typescript-reexport-type'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes in base.ts', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['Repo', 'User']);\n  });\n\n  it('resolves new User() through export type re-export chain to base.ts', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userCtor = calls.find(c => c.target === 'User' && c.targetLabel === 'Class');\n    expect(userCtor).toBeDefined();\n    expect(userCtor!.source).toBe('main');\n    expect(userCtor!.targetFilePath).toBe('src/base.ts');\n  });\n\n  it('resolves user.save() through export type re-export chain to base.ts', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.source).toBe('main');\n    expect(saveCall!.targetFilePath).toBe('src/base.ts');\n  });\n});\n\n// 
---------------------------------------------------------------------------\n// Local shadow: same-file definition takes priority over imported name\n// ---------------------------------------------------------------------------\n\ndescribe('TypeScript local definition shadows import', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'typescript-local-shadow'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves run → save to same-file definition, not the imported one', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save' && c.source === 'run');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.targetFilePath).toBe('src/app.ts');\n  });\n\n  it('does NOT resolve save to utils.ts', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveToUtils = calls.find(c => c.target === 'save' && c.targetFilePath === 'src/utils.ts');\n    expect(saveToUtils).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Variadic resolution: rest params don't get filtered by arity\n// ---------------------------------------------------------------------------\n\ndescribe('TypeScript variadic call resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'typescript-variadic-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves processInput → logEntry to src/logger.ts despite 3 args vs rest param', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const logCall = calls.find(c => c.target === 'logEntry');\n    expect(logCall).toBeDefined();\n    expect(logCall!.source).toBe('processInput');\n    expect(logCall!.targetFilePath).toBe('src/logger.ts');\n  });\n});\n\n// 
---------------------------------------------------------------------------\n// Constructor-inferred type resolution: const user = new User(); user.save()\n// Cross-file SymbolTable verification (no explicit type annotations)\n// ---------------------------------------------------------------------------\n\ndescribe('TypeScript constructor-inferred type resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'typescript-constructor-type-inference'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes, both with save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves user.save() to src/user.ts via constructor-inferred type', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c => c.target === 'save' && c.targetFilePath === 'src/user.ts');\n    expect(userSave).toBeDefined();\n    expect(userSave!.source).toBe('processEntities');\n  });\n\n  it('resolves repo.save() to src/repo.ts via constructor-inferred type', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c => c.target === 'save' && c.targetFilePath === 'src/repo.ts');\n    expect(repoSave).toBeDefined();\n    expect(repoSave!.source).toBe('processEntities');\n  });\n\n  it('emits exactly 2 save() CALLS edges (one per receiver type)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'save');\n    expect(saveCalls.length).toBe(2);\n  });\n});\n\n// ---------------------------------------------------------------------------\n// JavaScript constructor-inferred type resolution: const user = new 
User()\n// ---------------------------------------------------------------------------\n\ndescribe('JavaScript constructor-inferred type resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'javascript-constructor-type-inference'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes, both with save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves user.save() to src/user.js via constructor-inferred type', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c => c.target === 'save' && c.targetFilePath === 'src/user.js');\n    expect(userSave).toBeDefined();\n    expect(userSave!.source).toBe('processEntities');\n  });\n\n  it('resolves repo.save() to src/repo.js via constructor-inferred type', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c => c.target === 'save' && c.targetFilePath === 'src/repo.js');\n    expect(repoSave).toBeDefined();\n    expect(repoSave!.source).toBe('processEntities');\n  });\n\n  it('emits exactly 2 save() CALLS edges (one per receiver type)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'save');\n    expect(saveCalls.length).toBe(2);\n  });\n});\n\n// ---------------------------------------------------------------------------\n// this.save() resolves to enclosing class's own save method\n// ---------------------------------------------------------------------------\n\ndescribe('TypeScript this resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n  
    path.join(FIXTURES, 'typescript-self-this-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes, each with a save method', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['Repo', 'User']);\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves this.save() inside User.process to User.save, not Repo.save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c => c.target === 'save' && c.source === 'process');\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.targetFilePath).toBe('src/models/User.ts');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Parent class resolution: EXTENDS + IMPLEMENTS edges\n// ---------------------------------------------------------------------------\n\ndescribe('TypeScript parent resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'typescript-parent-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects BaseModel and User classes plus Serializable interface', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['BaseModel', 'User']);\n    expect(getNodesByLabel(result, 'Interface')).toEqual(['Serializable']);\n  });\n\n  it('emits EXTENDS edge: User → BaseModel', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    expect(extends_.length).toBe(1);\n    expect(extends_[0].source).toBe('User');\n    expect(extends_[0].target).toBe('BaseModel');\n  });\n\n  it('emits IMPLEMENTS edge: User → Serializable', () => {\n    const implements_ = getRelationships(result, 'IMPLEMENTS');\n    expect(implements_.length).toBe(1);\n    expect(implements_[0].source).toBe('User');\n    expect(implements_[0].target).toBe('Serializable');\n  });\n\n  it('all heritage 
edges point to real graph nodes', () => {\n    for (const edge of [...getRelationships(result, 'EXTENDS'), ...getRelationships(result, 'IMPLEMENTS')]) {\n      const target = result.graph.getNode(edge.rel.targetId);\n      expect(target).toBeDefined();\n      expect(target!.properties.name).toBe(edge.target);\n    }\n  });\n});\n\n// ---------------------------------------------------------------------------\n// super.save() resolves to parent class's save method\n// ---------------------------------------------------------------------------\n\ndescribe('TypeScript super resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'typescript-super-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects BaseModel, User, and Repo classes, each with a save method', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['BaseModel', 'Repo', 'User']);\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'save');\n    expect(saveMethods.length).toBe(3);\n  });\n\n  it('emits EXTENDS edge: User → BaseModel', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    expect(extends_.length).toBe(1);\n    expect(extends_[0].source).toBe('User');\n    expect(extends_[0].target).toBe('BaseModel');\n  });\n\n  it('resolves super.save() inside User to BaseModel.save, not Repo.save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const superSave = calls.find(c => c.source === 'save' && c.target === 'save'\n      && c.targetFilePath === 'src/models/Base.ts');\n    expect(superSave).toBeDefined();\n    const repoSave = calls.find(c => c.target === 'save' && c.targetFilePath === 'src/models/Repo.ts');\n    expect(repoSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// super.save() resolves to generic parent class's save method\n// 
---------------------------------------------------------------------------\n\ndescribe('TypeScript generic parent super resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'typescript-generic-parent-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects BaseModel, User, and Repo classes, each with a save method', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['BaseModel', 'Repo', 'User']);\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'save');\n    expect(saveMethods.length).toBe(3);\n  });\n\n  it('emits EXTENDS edge: User → BaseModel (not BaseModel<string>)', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    expect(extends_.length).toBe(1);\n    expect(extends_[0].source).toBe('User');\n    expect(extends_[0].target).toBe('BaseModel');\n  });\n\n  it('resolves super.save() inside User to BaseModel.save, not Repo.save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const superSave = calls.find(c => c.source === 'save' && c.target === 'save'\n      && c.targetFilePath === 'src/models/Base.ts');\n    expect(superSave).toBeDefined();\n    const repoSave = calls.find(c => c.target === 'save' && c.targetFilePath === 'src/models/Repo.ts');\n    expect(repoSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Cast/non-null constructor inference: new X() as T, new X()!\n// ---------------------------------------------------------------------------\n\ndescribe('TypeScript cast/non-null constructor inference', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'typescript-cast-constructor-inference'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes, both with save methods', () => {\n    
expect(getNodesByLabel(result, 'Class')).toEqual(['Repo', 'User']);\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves user.save() to User.save via new User() as any', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c => c.target === 'save' && c.targetFilePath === 'src/user.ts');\n    expect(userSave).toBeDefined();\n  });\n\n  it('resolves repo.save() to Repo.save via new Repo()!', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c => c.target === 'save' && c.targetFilePath === 'src/repo.ts');\n    expect(repoSave).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Double-cast constructor inference: new X() as unknown as T\n// ---------------------------------------------------------------------------\n\ndescribe('TypeScript double-cast constructor inference', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'typescript-double-cast-inference'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes, both with save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['Repo', 'User']);\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves user.save() to User.save via new User() as unknown as any', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c => c.target === 'save' && c.targetFilePath === 'src/user.ts');\n    expect(userSave).toBeDefined();\n  });\n\n  it('resolves repo.save() to Repo.save via new Repo() as unknown as object', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c => c.target === 
'save' && c.targetFilePath === 'src/repo.ts');\n    expect(repoSave).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Nullable/optional receiver unwrapping: user?.save() resolves through ?.\n// ---------------------------------------------------------------------------\n\ndescribe('TypeScript nullable receiver resolution (optional chaining)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ts-nullable-receiver'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes with their methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    expect(getNodesByLabel(result, 'Method')).toContain('save');\n    expect(getNodesByLabel(result, 'Method')).toContain('greet');\n  });\n\n  it('resolves user?.save() to User.save via receiver typing', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c => c.target === 'save' && c.targetFilePath === 'src/user.ts');\n    expect(userSave).toBeDefined();\n    expect(userSave!.source).toBe('processEntities');\n  });\n\n  it('resolves user?.greet() to User.greet via receiver typing', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const greetCall = calls.find(c => c.target === 'greet' && c.targetFilePath === 'src/user.ts');\n    expect(greetCall).toBeDefined();\n    expect(greetCall!.source).toBe('processEntities');\n  });\n\n  it('resolves repo?.save() to Repo.save via receiver typing', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c => c.target === 'save' && c.targetFilePath === 'src/repo.ts');\n    expect(repoSave).toBeDefined();\n    expect(repoSave!.source).toBe('processEntities');\n  });\n\n  it('emits constructor CALLS edges for both User and Repo', () => 
{\n    const calls = getRelationships(result, 'CALLS');\n    const userCtor = calls.find(c => c.target === 'User' && c.targetLabel === 'Class');\n    const repoCtor = calls.find(c => c.target === 'Repo' && c.targetLabel === 'Class');\n    expect(userCtor).toBeDefined();\n    expect(repoCtor).toBeDefined();\n  });\n\n  it('emits exactly 2 save() CALLS edges (one per receiver type)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'save');\n    // user?.save() → User.save + repo?.save() → Repo.save = 2 edges\n    // If nullable unwrapping fails, the resolver refuses ambiguous matches and emits 0\n    expect(saveCalls.length).toBe(2);\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Return type inference: const user = getUser('alice'); user.save()\n// The TS/JS CONSTRUCTOR_BINDING_SCANNER captures variable_declarator nodes\n// with plain call_expression values, enabling end-to-end return type inference.\n// ---------------------------------------------------------------------------\n\ndescribe('TypeScript return type inference via explicit function return type', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ts-return-type-inference'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User class with save and getName methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    const methods = getNodesByLabel(result, 'Method');\n    expect(methods).toContain('save');\n    expect(methods).toContain('getName');\n  });\n\n  it('detects getUser and fetchUserAsync functions', () => {\n    const functions = getNodesByLabel(result, 'Function');\n    expect(functions).toContain('getUser');\n    expect(functions).toContain('fetchUserAsync');\n  });\n\n  it('resolves user.save() to User#save via return type of getUser(): User', () => {\n    // 
TS has explicit return types in the source, so extractMethodSignature captures\n    // the return type. The TS extractInitializer handles `const user = getUser()`\n    // via the variable_declarator path, enabling save() to resolve to User#save.\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processUser' && c.targetFilePath.includes('models')\n    );\n    expect(saveCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// JavaScript return type inference via JSDoc @returns annotation\n// ---------------------------------------------------------------------------\n\ndescribe('JavaScript return type inference via JSDoc @returns annotation', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'js-jsdoc-return-type'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes with save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n  });\n\n  it('resolves user.save() to User#save via JSDoc @returns {User}', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processUser' && c.targetFilePath.includes('user.js'),\n    );\n    expect(saveCall).toBeDefined();\n    // Negative: must NOT resolve to Repo#save\n    const wrongCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processUser' && c.targetFilePath.includes('repo.js'),\n    );\n    expect(wrongCall).toBeUndefined();\n  });\n\n  it('resolves repo.save() to Repo#save via JSDoc @returns {Repo}', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processRepo' && 
c.targetFilePath.includes('repo.js'),\n    );\n    expect(saveCall).toBeDefined();\n    // Negative: must NOT resolve to User#save\n    const wrongCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processRepo' && c.targetFilePath.includes('user.js'),\n    );\n    expect(wrongCall).toBeUndefined();\n  });\n\n  it('resolves user.save() via JSDoc @param {User} in handleUser()', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'handleUser' && c.targetFilePath.includes('user.js'),\n    );\n    expect(saveCall).toBeDefined();\n    // Negative: must NOT resolve to Repo#save\n    const wrongCall = calls.find(c =>\n      c.target === 'save' && c.source === 'handleUser' && c.targetFilePath.includes('repo.js'),\n    );\n    expect(wrongCall).toBeUndefined();\n  });\n\n  it('resolves repo.save() via JSDoc @param {Repo} in handleRepo()', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'handleRepo' && c.targetFilePath.includes('repo.js'),\n    );\n    expect(saveCall).toBeDefined();\n    // Negative: must NOT resolve to User#save\n    const wrongCall = calls.find(c =>\n      c.target === 'save' && c.source === 'handleRepo' && c.targetFilePath.includes('user.js'),\n    );\n    expect(wrongCall).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// JavaScript async return type inference via JSDoc @returns {Promise<User>}\n// Verifies that wrapper generics (Promise) are unwrapped to the inner type.\n// ---------------------------------------------------------------------------\n\ndescribe('JavaScript async return type inference via JSDoc @returns {Promise<User>}', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 
'js-jsdoc-async-return-type'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes with save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n  });\n\n  it('resolves user.save() to User#save via @returns {Promise<User>} unwrapping', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processUser' && c.targetFilePath.includes('user.js'),\n    );\n    expect(saveCall).toBeDefined();\n    // Negative: must NOT resolve to Repo#save\n    const wrongCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processUser' && c.targetFilePath.includes('repo.js'),\n    );\n    expect(wrongCall).toBeUndefined();\n  });\n\n  it('resolves repo.save() to Repo#save via @returns {Promise<Repo>} unwrapping', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processRepo' && c.targetFilePath.includes('repo.js'),\n    );\n    expect(saveCall).toBeDefined();\n    // Negative: must NOT resolve to User#save\n    const wrongCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processRepo' && c.targetFilePath.includes('user.js'),\n    );\n    expect(wrongCall).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// JavaScript qualified return type: @returns {Promise<models.User>}\n// Verifies that dot-qualified names inside generics are not corrupted.\n// ---------------------------------------------------------------------------\n\ndescribe('JavaScript qualified return type via JSDoc @returns {Promise<models.User>}', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'js-jsdoc-qualified-return-type'),\n      () => {},\n 
   );\n  }, 60000);\n\n  it('resolves user.save() to User#save despite qualified return type', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processUser' && c.targetFilePath.includes('user.js'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Assignment chain propagation (Tier 2, depth-1):\n// `const alias = u` where `u: User` → alias.save() resolves to User#save\n// ---------------------------------------------------------------------------\n\ndescribe('TypeScript assignment chain propagation (Tier 2)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ts-assignment-chain'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes each with a save method', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves alias.save() to User#save via assignment chain', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.targetFilePath.includes('user.ts'),\n    );\n    // Positive: alias.save() must resolve to User#save\n    expect(saveCall).toBeDefined();\n    expect(saveCall!.source).toBe('processEntities');\n    // A bare "no repo.ts edge" negative is impossible here: the fixture also\n    // contains rAlias.save(), which legitimately resolves to Repo#save. We\n    // instead verify per-receiver resolution (alias → User, rAlias → Repo).\n    const repoSaveFromSameScope = calls.find(c =>\n      c.target === 'save' && c.source === 'processEntities' && c.targetFilePath.includes('repo.ts'),\n    );\n    expect(repoSaveFromSameScope).toBeDefined(); // rAlias.save() resolves to Repo#save\n  });\n\n  it('resolves rAlias.save() to Repo#save via assignment chain', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.targetFilePath.includes('repo.ts'),\n    );\n    expect(repoSave).toBeDefined();\n    expect(repoSave!.source).toBe('processEntities');\n    // Each alias resolves to its own class: the user.ts and repo.ts save\n    // edges must both exist and point at distinct files.\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.targetFilePath.includes('user.ts'),\n    );\n    expect(userSave).toBeDefined();\n    expect(userSave!.targetFilePath).not.toBe(repoSave!.targetFilePath);\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Multi-hop forward-declared chain (a → b → c) — validates that single-pass\n// resolution in source order handles chains deeper than depth-1.\n// ---------------------------------------------------------------------------\n\ndescribe('TypeScript multi-hop assignment chain (a → b → c)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ts-multi-hop-chain'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes each with a save method', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves c.save() to User#save through a → b → c chain', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'multiHopForward' && c.targetFilePath?.includes('user.ts'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n 
 it('c.save() in multiHopForward does NOT resolve to Repo#save (negative)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongCall = calls.find(c =>\n      c.target === 'save' && c.source === 'multiHopForward' && c.targetFilePath?.includes('repo.ts'),\n    );\n    expect(wrongCall).toBeUndefined();\n  });\n\n  it('resolves c.save() to Repo#save through a → b → c chain (Repo variant)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'multiHopRepo' && c.targetFilePath?.includes('repo.ts'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('c.save() in multiHopRepo does NOT resolve to User#save (negative)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongCall = calls.find(c =>\n      c.target === 'save' && c.source === 'multiHopRepo' && c.targetFilePath?.includes('user.ts'),\n    );\n    expect(wrongCall).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Nullable type + assignment chain: stripNullable must resolve the nullable\n// union (User | null → User) before the chain propagation can work.\n// Exercises the refactored NULLABLE_KEYWORDS.has() code path.\n// ---------------------------------------------------------------------------\n\ndescribe('TypeScript nullable + assignment chain combined', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ts-nullable-chain'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes each with a save method', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves 
alias.save() to User#save when source is User | null', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'nullableChainUser' && c.targetFilePath?.includes('user.ts'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('alias.save() from User | null does NOT resolve to Repo#save (negative)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongCall = calls.find(c =>\n      c.target === 'save' && c.source === 'nullableChainUser' && c.targetFilePath?.includes('repo.ts'),\n    );\n    expect(wrongCall).toBeUndefined();\n  });\n\n  it('resolves alias.save() to Repo#save when source is Repo | undefined', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'nullableChainRepo' && c.targetFilePath?.includes('repo.ts'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('resolves alias.save() to User#save when source is User | null | undefined (triple)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'tripleNullable' && c.targetFilePath?.includes('user.ts'),\n    );\n    expect(userSave).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Chained method call resolution: svc.getUser().save()\n// The receiver of save() is a call_expression (getUser()), not a simple identifier.\n// Resolution must walk the chain: getUser() returns User, so save() → User#save.\n// ---------------------------------------------------------------------------\n\ndescribe('TypeScript chained method call resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'typescript-chain-call'),\n      () => {},\n    );\n  }, 60000);\n\n  
it('detects User, Repo and UserService classes', () => {\n    const classes = getNodesByLabel(result, 'Class');\n    expect(classes).toContain('User');\n    expect(classes).toContain('Repo');\n    expect(classes).toContain('UserService');\n  });\n\n  it('detects save methods on both User and Repo', () => {\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('detects getUser method on UserService', () => {\n    const methods = getNodesByLabel(result, 'Method');\n    expect(methods).toContain('getUser');\n  });\n\n  it('resolves svc.getUser().save() to User#save, NOT Repo#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' &&\n      c.source === 'processUser' &&\n      c.targetFilePath.includes('User'),\n    );\n    const repoSave = calls.find(c =>\n      c.target === 'save' &&\n      c.source === 'processUser' &&\n      c.targetFilePath.includes('Repo'),\n    );\n    expect(userSave).toBeDefined();\n    expect(repoSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Overloaded receiver: two classes with the same method name (save) must not\n// collide in the receiverKey map. 
The fix preserves @startIndex in the key so\n// User.save@idx1 and Repo.save@idx2 remain distinct even when the enclosing\n// scope funcName is the same.\n// ---------------------------------------------------------------------------\n\ndescribe('TypeScript overloaded-receiver resolution (receiverKey collision fix)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'typescript-overloaded-receiver'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes, both with a save method', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    const saveMethods = getNodesByLabel(result, 'Method').filter(m => m === 'save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves user.save() to User#save (models/User.ts), not Repo#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.targetFilePath.includes('User'),\n    );\n    expect(userSave).toBeDefined();\n    expect(userSave!.source).toBe('run');\n    // A bare "no Repo edge" negative is impossible here: repo.save() in the\n    // same scope legitimately resolves to Repo#save. The exact-count test\n    // below guarantees user.save() did not also fan out to Repo#save.\n  });\n\n  it('resolves repo.save() to Repo#save (models/Repo.ts), not User#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.targetFilePath.includes('Repo'),\n    );\n    expect(repoSave).toBeDefined();\n    expect(repoSave!.source).toBe('run');\n    expect(repoSave!.targetFilePath).toContain('Repo');\n  });\n\n  it('emits exactly 2 save() CALLS edges — one per class', () => {\n    
const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'save');\n    expect(saveCalls.length).toBe(2);\n    const targets = saveCalls.map(c => c.targetFilePath).sort();\n    expect(targets[0]).toContain('Repo');\n    expect(targets[1]).toContain('User');\n  });\n\n  it('resolves constructor calls for both User and Repo', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userCtor = calls.find(c => c.target === 'User' && c.targetLabel === 'Class');\n    const repoCtor = calls.find(c => c.target === 'Repo' && c.targetLabel === 'Class');\n    expect(userCtor).toBeDefined();\n    expect(repoCtor).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Typed parameter chain: svc.getUser().save() where svc is a parameter with\n// a type annotation (not a constructor binding). Tests that the worker path\n// consults typeEnv for chain base receivers (Phase 5 review Finding 1).\n// ---------------------------------------------------------------------------\n\ndescribe('TypeScript typed-parameter chain call resolution (Phase 5 review fix)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'typescript-typed-param-chain'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User, Repo, and UserService classes', () => {\n    const classes = getNodesByLabel(result, 'Class');\n    expect(classes).toContain('User');\n    expect(classes).toContain('Repo');\n    expect(classes).toContain('UserService');\n  });\n\n  it('detects getUser and save methods', () => {\n    const methods = getNodesByLabel(result, 'Method');\n    expect(methods).toContain('getUser');\n    expect(methods).toContain('save');\n  });\n\n  it('resolves svc.getUser().save() to User#save via parameter type annotation', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = 
calls.find(c =>\n      c.target === 'save' &&\n      c.source === 'processUser' &&\n      c.targetFilePath.includes('User'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('does NOT resolve svc.getUser().save() to Repo#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' &&\n      c.source === 'processUser' &&\n      c.targetFilePath.includes('Repo'),\n    );\n    expect(repoSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Static chain: UserService.findUser().save() where the chain base is a class\n// name (not a variable). Tests that the serial path applies class-as-receiver\n// to chain base resolution (Phase 5 review Finding 2).\n// ---------------------------------------------------------------------------\n\ndescribe('TypeScript static class-name chain call resolution (Phase 5 review fix)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'typescript-static-chain'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User, Repo, and UserService classes', () => {\n    const classes = getNodesByLabel(result, 'Class');\n    expect(classes).toContain('User');\n    expect(classes).toContain('Repo');\n    expect(classes).toContain('UserService');\n  });\n\n  it('detects static findUser and instance save methods', () => {\n    const methods = getNodesByLabel(result, 'Method');\n    expect(methods).toContain('findUser');\n    expect(methods).toContain('save');\n  });\n\n  it('resolves UserService.findUser().save() to User#save via class-name chain base', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' &&\n      c.source === 'processUser' &&\n      c.targetFilePath.includes('User'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  
it('does NOT resolve UserService.findUser().save() to Repo#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' &&\n      c.source === 'processUser' &&\n      c.targetFilePath.includes('Repo'),\n    );\n    expect(repoSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// TS readonly User[] for-loop: for (const user of users) with readonly User[]\n// ---------------------------------------------------------------------------\n\ndescribe('TypeScript readonly array for-loop resolution (Tier 1c)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ts-readonly-foreach'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes with save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n  });\n\n  it('resolves user.save() in readonly array for-of to User#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processUsers' && c.targetFilePath?.includes('user'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('resolves repo.save() in readonly array for-of to Repo#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processRepos' && c.targetFilePath?.includes('repo'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('does NOT cross-resolve user.save() to Repo#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrong = calls.find(c =>\n      c.target === 'save' && c.source === 'processUsers' && c.targetFilePath?.includes('repo'),\n    );\n    expect(wrong).toBeUndefined();\n  
});\n});\n\n// ---------------------------------------------------------------------------\n// for (const [key, user] of entries) — destructured for-of resolution\n// ---------------------------------------------------------------------------\n\ndescribe('TS destructured for-of Map resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'typescript-destructured-for-of'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User class with save method', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n  });\n\n  it('resolves user.save() in destructured for-of to User#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processEntries' && c.targetFilePath?.includes('user'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('does NOT resolve user.save() to Repo#save (negative)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processEntries' && c.targetFilePath?.includes('repo'),\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// if (x instanceof User) { x.save() } — instanceof narrowing resolution\n// ---------------------------------------------------------------------------\n\ndescribe('TS instanceof narrowing resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'typescript-instanceof-narrowing'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User class with save method', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n  });\n\n  it('resolves x.save() after instanceof to User#save', () => {\n    const calls = 
getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process' && c.targetFilePath?.includes('user'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('does NOT resolve x.save() to Repo#save (negative)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'save' && c.source === 'process' && c.targetFilePath?.includes('repo'),\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// this.users member access iterable: for (const user of this.users)\n// ---------------------------------------------------------------------------\n\ndescribe('TypeScript member access iterable for-loop', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'typescript-member-access-for-loop'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes with save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    expect(getNodesByLabel(result, 'Method')).toContain('save');\n  });\n\n  it('resolves user.save() via this.users to User#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processUsers' && c.targetFilePath?.includes('User'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('does NOT cross-resolve user.save() to Repo#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrong = calls.find(c =>\n      c.target === 'save' && c.source === 'processUsers' && c.targetFilePath?.includes('Repo'),\n    );\n    expect(wrong).toBeUndefined();\n  });\n\n  it('resolves repo.save() via this.repos to Repo#save', () => {\n    const 
calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processRepos' && c.targetFilePath?.includes('Repo'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// TypeScript class field foreach: for (const user of this.users) with class field User[]\n// ---------------------------------------------------------------------------\n\ndescribe('TypeScript class field foreach resolution (Phase 6.1)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ts-class-field-foreach'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes with save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n    expect(getNodesByLabel(result, 'Method')).toContain('save');\n  });\n\n  it('resolves user.save() via class field User[] to User#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processUsers' && c.targetFilePath?.includes('user'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('does NOT cross-resolve user.save() to Repo#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrong = calls.find(c =>\n      c.target === 'save' && c.source === 'processUsers' && c.targetFilePath?.includes('repo'),\n    );\n    expect(wrong).toBeUndefined();\n  });\n\n  it('resolves repo.save() via class field Map<string, Repo>.values() to Repo#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processRepos' && c.targetFilePath?.includes('repo'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  
it('does NOT cross-resolve repo.save() to User#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrong = calls.find(c =>\n      c.target === 'save' && c.source === 'processRepos' && c.targetFilePath?.includes('user'),\n    );\n    expect(wrong).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// TypeScript for-of with call_expression iterable: for (const user of getUsers())\n// Phase 7.3: call_expression iterable resolution via ReturnTypeLookup\n// ---------------------------------------------------------------------------\n\ndescribe('TypeScript for-of call_expression iterable resolution (Phase 7.3)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'typescript-for-of-call-expr'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes with competing save methods', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n  });\n\n  it('resolves user.save() in for-of getUsers() to User#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const userSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processUsers' && c.targetFilePath?.includes('user.ts'),\n    );\n    expect(userSave).toBeDefined();\n  });\n\n  it('resolves repo.save() in for-of getRepos() to Repo#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const repoSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processRepos' && c.targetFilePath?.includes('repo.ts'),\n    );\n    expect(repoSave).toBeDefined();\n  });\n\n  it('does NOT resolve user.save() to Repo#save (negative)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processUsers' && 
c.targetFilePath?.includes('repo.ts'),\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n\n  it('does NOT resolve repo.save() to User#save (negative)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongSave = calls.find(c =>\n      c.target === 'save' && c.source === 'processRepos' && c.targetFilePath?.includes('user.ts'),\n    );\n    expect(wrongSave).toBeUndefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Phase 8: Field/property type resolution (1-level)\n// ---------------------------------------------------------------------------\n\ndescribe('Field type resolution (TypeScript)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'field-types'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects classes: Address, Config, User', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['Address', 'Config', 'User']);\n  });\n\n  it('detects Property nodes for typed fields', () => {\n    const properties = getNodesByLabel(result, 'Property');\n    expect(properties).toContain('address');\n    expect(properties).toContain('name');\n    expect(properties).toContain('city');\n  });\n\n  it('emits HAS_PROPERTY edges linking properties to classes', () => {\n    const propEdges = getRelationships(result, 'HAS_PROPERTY');\n    expect(propEdges.length).toBe(4);\n    expect(edgeSet(propEdges)).toContain('User → address');\n    expect(edgeSet(propEdges)).toContain('User → name');\n    expect(edgeSet(propEdges)).toContain('Address → city');\n    expect(edgeSet(propEdges)).toContain('Config → DEFAULT');\n  });\n\n  it('resolves user.address.save() → Address#save via field type', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(e => e.target === 'save');\n    const addressSave = saveCalls.find(e => e.targetFilePath.includes('models'));\n    
expect(addressSave).toBeDefined();\n    expect(addressSave!.source).toBe('processUser');\n  });\n\n  it('emits ACCESSES read edge for user.address field access in chain', () => {\n    const accesses = getRelationships(result, 'ACCESSES');\n    const addressReads = accesses.filter(e => e.target === 'address' && e.rel.reason === 'read');\n    expect(addressReads.length).toBe(1);\n    expect(addressReads[0].source).toBe('processUser');\n    expect(addressReads[0].targetLabel).toBe('Property');\n  });\n\n  it('emits ACCESSES read edge for Config.DEFAULT field access in chain', () => {\n    const accesses = getRelationships(result, 'ACCESSES');\n    const defaultReads = accesses.filter(e => e.target === 'DEFAULT' && e.rel.reason === 'read');\n    expect(defaultReads.length).toBe(1);\n    expect(defaultReads[0].source).toBe('validateConfig');\n  });\n\n  it('all ACCESSES edges have confidence 1.0 and reason read', () => {\n    const accesses = getRelationships(result, 'ACCESSES');\n    for (const edge of accesses) {\n      expect(edge.rel.confidence).toBe(1.0);\n      expect(edge.rel.reason).toBe('read');\n    }\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Phase 8: Field type disambiguation — both User and Address have save()\n// ---------------------------------------------------------------------------\n\ndescribe('Field type disambiguation (TypeScript)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ts-field-type-disambig'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects both User#save and Address#save', () => {\n    const methods = getNodesByLabel(result, 'Method');\n    const saveMethods = methods.filter(m => m === 'save');\n    expect(saveMethods.length).toBe(2);\n  });\n\n  it('resolves user.address.save() → Address#save (not User#save)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const 
saveCalls = calls.filter(\n      e => e.target === 'save' && e.source === 'processUser',\n    );\n    expect(saveCalls.length).toBe(1);\n    expect(saveCalls[0].targetFilePath).toContain('address');\n    expect(saveCalls[0].targetFilePath).not.toContain('user');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Phase 8: Parameter properties and #private fields\n// ---------------------------------------------------------------------------\n\ndescribe('Field type resolution (TS parameter properties)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ts-param-property-fields'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects classes: Address, User', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['Address', 'User']);\n  });\n\n  it('captures constructor parameter properties as Property nodes', () => {\n    const properties = getNodesByLabel(result, 'Property');\n    expect(properties).toContain('name');\n    expect(properties).toContain('address');\n  });\n\n  it('captures #private fields as Property nodes', () => {\n    const properties = getNodesByLabel(result, 'Property');\n    expect(properties).toContain('#secret');\n  });\n\n  it('emits HAS_PROPERTY edges for parameter properties', () => {\n    const propEdges = getRelationships(result, 'HAS_PROPERTY');\n    expect(edgeSet(propEdges)).toContain('User → name');\n    expect(edgeSet(propEdges)).toContain('User → address');\n  });\n\n  it('resolves user.address.save() via parameter property type', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(e => e.target === 'save' && e.source === 'processUser');\n    expect(saveCalls.length).toBe(1);\n    expect(saveCalls[0].targetFilePath).toContain('models');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Phase 8A: Deep 
field chain resolution (3-level: user.address.city.getName())\n// ---------------------------------------------------------------------------\n\ndescribe('Deep field chain resolution (TypeScript)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ts-deep-field-chain'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects classes: Address, City, User', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['Address', 'City', 'User']);\n  });\n\n  it('detects Property nodes for all typed fields', () => {\n    const properties = getNodesByLabel(result, 'Property');\n    expect(properties).toContain('address');\n    expect(properties).toContain('city');\n    expect(properties).toContain('zipCode');\n  });\n\n  it('emits HAS_PROPERTY edges for nested type chain', () => {\n    const propEdges = getRelationships(result, 'HAS_PROPERTY');\n    expect(edgeSet(propEdges)).toContain('User → address');\n    expect(edgeSet(propEdges)).toContain('Address → city');\n    expect(edgeSet(propEdges)).toContain('City → zipCode');\n  });\n\n  it('resolves 2-level chain: user.address.save() → Address#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(e => e.target === 'save' && e.source === 'processUser');\n    expect(saveCalls.length).toBe(1);\n    expect(saveCalls[0].targetFilePath).toContain('models');\n  });\n\n  it('resolves 3-level chain: user.address.city.getName() → City#getName', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const getNameCalls = calls.filter(e => e.target === 'getName' && e.source === 'processUser');\n    expect(getNameCalls.length).toBe(1);\n    expect(getNameCalls[0].targetFilePath).toContain('models');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Mixed chain resolution (field ↔ call interleaved)\n// 
---------------------------------------------------------------------------\n\ndescribe('Mixed field+call chain resolution (TypeScript)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ts-mixed-chain'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects classes: Address, City, User, UserService', () => {\n    expect(getNodesByLabel(result, 'Class')).toEqual(['Address', 'City', 'User', 'UserService']);\n  });\n\n  it('detects Property node for Address.city field', () => {\n    const properties = getNodesByLabel(result, 'Property');\n    expect(properties).toContain('city');\n    expect(properties).toContain('address');\n  });\n\n  it('resolves call→field chain: svc.getUser().address.save() → Address#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(e => e.target === 'save' && e.source === 'processWithService');\n    expect(saveCalls.length).toBe(1);\n    expect(saveCalls[0].targetFilePath).toContain('models');\n  });\n\n  it('resolves field→call chain: user.getAddress().city.getName() → City#getName', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const getNameCalls = calls.filter(e => e.target === 'getName' && e.source === 'processWithUser');\n    expect(getNameCalls.length).toBe(1);\n    expect(getNameCalls[0].targetFilePath).toContain('models');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// ACCESSES write edges from assignment expressions\n// ---------------------------------------------------------------------------\n\ndescribe('Write access tracking (TypeScript)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ts-write-access'),\n      () => {},\n    );\n  }, 60000);\n\n  it('emits ACCESSES write edges for field assignments', () => {\n    const accesses = 
getRelationships(result, 'ACCESSES');\n    const writes = accesses.filter(e => e.rel.reason === 'write');\n    expect(writes.length).toBe(2);\n    const nameWrite = writes.find(e => e.target === 'name');\n    const addressWrite = writes.find(e => e.target === 'address');\n    expect(nameWrite).toBeDefined();\n    expect(nameWrite!.source).toBe('updateUser');\n    expect(addressWrite).toBeDefined();\n    expect(addressWrite!.source).toBe('updateUser');\n  });\n\n  it('write ACCESSES edges have confidence 1.0', () => {\n    const accesses = getRelationships(result, 'ACCESSES');\n    const writes = accesses.filter(e => e.rel.reason === 'write');\n    for (const edge of writes) {\n      expect(edge.rel.confidence).toBe(1.0);\n    }\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Call-result variable binding (Phase 9): const user = getUser(); user.save()\n// Activates Tier 2b pendingCallResults — binds return type at TypeEnv build time.\n// ---------------------------------------------------------------------------\n\ndescribe('TypeScript call-result variable binding (Tier 2b)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ts-call-result-binding'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User class with save method', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Method')).toContain('save');\n  });\n\n  it('detects getUser function', () => {\n    expect(getNodesByLabel(result, 'Function')).toContain('getUser');\n  });\n\n  it('resolves user.save() to User#save via call-result binding', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processUser' && c.targetFilePath.includes('models')\n    );\n    expect(saveCall).toBeDefined();\n  });\n\n  it('resolves 
alias.save() to User#save via call-result + copy chain', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processAlias' && c.targetFilePath.includes('models')\n    );\n    expect(saveCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// JavaScript call-result variable binding (Phase 9) via JSDoc @returns\n// ---------------------------------------------------------------------------\n\ndescribe('JavaScript call-result variable binding (Tier 2b)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'js-call-result-binding'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves user.save() to User#save via call-result binding with JSDoc', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processUser' && c.targetFilePath.includes('models')\n    );\n    expect(saveCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Method chain binding (Phase 9C): getUser() → .address → .getCity() → .save()\n// Unified fixpoint resolves field access + method-call-with-receiver at TypeEnv build time.\n// ---------------------------------------------------------------------------\n\ndescribe('TypeScript method chain binding via unified fixpoint (Phase 9C)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ts-method-chain-binding'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User, Address, City classes', () => {\n    const classes = getNodesByLabel(result, 'Class');\n    expect(classes).toContain('User');\n    expect(classes).toContain('Address');\n    expect(classes).toContain('City');\n 
 });\n\n  it('resolves city.save() to City#save via 3-step chain (callResult → fieldAccess → methodCallResult)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processChain' && c.targetFilePath.includes('models')\n    );\n    expect(saveCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Phase A: Object destructuring — const { field } = receiver → fieldAccess PendingAssignment\n// ---------------------------------------------------------------------------\n\ndescribe('TypeScript object destructuring resolution (Phase A)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ts-object-destructuring'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User, Address classes', () => {\n    const classes = getNodesByLabel(result, 'Class');\n    expect(classes).toContain('User');\n    expect(classes).toContain('Address');\n  });\n\n  it('resolves address.save() to Address#save via object destructuring', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.targetFilePath.includes('models'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n\n  it('does NOT resolve save() to a wrong target', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCalls = calls.filter(c => c.target === 'save');\n    for (const call of saveCalls) {\n      expect(call.targetFilePath).toContain('models');\n    }\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Phase A: Post-fixpoint for-loop replay — iterable resolved via callResult fixpoint\n// Differs from ts-for-of-call-expression: iterable is an identifier, not inline call\n// 
---------------------------------------------------------------------------\n\ndescribe('TypeScript post-fixpoint for-loop replay (Phase A ex-9B)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ts-fixpoint-for-loop'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves u.save() to User#save via post-fixpoint for-loop replay', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'process' && c.targetFilePath.includes('models'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Phase B: Deep MRO — walkParentChain() at depth 2 (C→B→A)\n// greet() is defined on A, accessed via C. Tests BFS depth-2 parent traversal.\n// ---------------------------------------------------------------------------\n\ndescribe('TypeScript grandparent method resolution via MRO (Phase B)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ts-grandparent-resolution'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects 3 classes in inheritance chain (A, B, C) plus Greeting', () => {\n    const classes = getNodesByLabel(result, 'Class');\n    expect(classes).toContain('A');\n    expect(classes).toContain('B');\n    expect(classes).toContain('C');\n    expect(classes).toContain('Greeting');\n  });\n\n  it('emits EXTENDS edges: B→A, C→B', () => {\n    const extends_ = getRelationships(result, 'EXTENDS');\n    expect(edgeSet(extends_)).toContain('B → A');\n    expect(edgeSet(extends_)).toContain('C → B');\n  });\n\n  it('resolves c.greet().save() to Greeting#save via depth-2 MRO lookup', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && 
c.targetFilePath.includes('greeting'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n\n  it('resolves c.greet() to A#greet (method found via MRO walk)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const greetCall = calls.find(c =>\n      c.target === 'greet' && c.targetFilePath.includes('base'),\n    );\n    expect(greetCall).toBeDefined();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Phase C: TS null-check narrowing — if (x !== null) { x.save() }\n// patternOverrides stores narrowed type for the if-body position range\n// ---------------------------------------------------------------------------\n\ndescribe('TypeScript null-check narrowing resolution (Phase C)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ts-null-check-narrowing'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects User and Repo classes', () => {\n    expect(getNodesByLabel(result, 'Class')).toContain('User');\n    expect(getNodesByLabel(result, 'Class')).toContain('Repo');\n  });\n\n  it('resolves x.save() inside !== null guard to User#save', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processStrict' && c.targetFilePath.includes('models'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n\n  it('does NOT resolve to Repo#save (no cross-contamination)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const wrongCall = calls.find(c =>\n      c.target === 'save' && c.targetLabel === 'Repo',\n    );\n    expect(wrongCall).toBeUndefined();\n  });\n\n  it('resolves x.save() in loose != null check (processLoose)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processLoose' && 
c.targetFilePath.includes('models'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n\n  it('resolves x.save() in !== undefined check (processUndefined)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processUndefined' && c.targetFilePath.includes('models'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n\n  it('resolves x.save() inside function expression null-check (processFuncExpr)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const saveCall = calls.find(c =>\n      c.target === 'save' && c.source === 'processFuncExpr' && c.targetFilePath.includes('models'),\n    );\n    expect(saveCall).toBeDefined();\n  });\n});\n\n// ── Phase P: Virtual Dispatch via Constructor Type ───────────────────────\n\ndescribe('TypeScript virtual dispatch via constructor type (same-file)', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ts-virtual-dispatch'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects Animal and Dog classes with same-file heritage', () => {\n    const classes = getNodesByLabel(result, 'Class');\n    expect(classes).toContain('Animal');\n    expect(classes).toContain('Dog');\n    const extends_ = getRelationships(result, 'EXTENDS');\n    const dogExtends = extends_.find(e => e.source === 'Dog' && e.target === 'Animal');\n    expect(dogExtends).toBeDefined();\n  });\n\n  it('detects fetchBall() as Dog-only method', () => {\n    const methods = getNodesByLabel(result, 'Method');\n    expect(methods).toContain('fetchBall');\n  });\n\n  it('resolves fetchBall() calls from run() — proves virtual dispatch override', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const fetchCalls = calls.filter(c => c.source === 'run' && c.target === 'fetchBall');\n    // animal.fetchBall() only resolves if constructorTypeMap overrides\n    // 
receiver from Animal → Dog. dog.fetchBall() resolves directly.\n    // Both target same nodeId → 1 CALLS edge after dedup.\n    expect(fetchCalls.length).toBe(1);\n  });\n});\n\n// ── Phase P: Overload Disambiguation via inferLiteralType ────────────────\n\ndescribe('TypeScript overload disambiguation via inferLiteralType', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ts-overload-disambiguation'),\n      () => {},\n    );\n  }, 60000);\n\n  it('detects lookup function with parameterTypes on graph node', () => {\n    const functions = getNodesByLabelFull(result, 'Function');\n    const lookupNodes = functions.filter(f => f.name === 'lookup');\n    // generateId collision → 1 graph node, first overload's parameterTypes wins\n    expect(lookupNodes.length).toBeGreaterThanOrEqual(1);\n    // At least one lookup node has parameterTypes set\n    const withParamTypes = lookupNodes.filter(n => n.properties.parameterTypes);\n    expect(withParamTypes.length).toBeGreaterThanOrEqual(1);\n  });\n\n  it('emits CALLS edges from process() → lookup() via overload disambiguation', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const lookupCalls = calls.filter(c => c.source === 'process' && c.target === 'lookup');\n    // Phase 0 (fileIndex stores both overloads) + Phase 2 (literal type matching)\n    // enables resolution where previously 2 same-arity candidates → null.\n    // Both calls resolve to same nodeId (ID collision) → 1 CALLS edge after dedup.\n    expect(lookupCalls.length).toBe(1);\n  });\n});\n\n// ── Phase P: Optional / Default Parameter Arity Resolution ───────────────\n\ndescribe('TypeScript optional parameter arity resolution', () => {\n  let result: PipelineResult;\n\n  beforeAll(async () => {\n    result = await runPipelineFromRepo(\n      path.join(FIXTURES, 'ts-optional-params'),\n      () => {},\n    );\n  }, 60000);\n\n  it('resolves 
greet(\"Alice\") with 1 arg to greet with 2 params (1 optional)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const greetCalls = calls.filter(c => c.source === 'process' && c.target === 'greet');\n    expect(greetCalls.length).toBe(1);\n  });\n\n  it('resolves search(\"test\") with 1 arg to search with 2 params (1 optional)', () => {\n    const calls = getRelationships(result, 'CALLS');\n    const searchCalls = calls.filter(c => c.source === 'process' && c.target === 'search');\n    expect(searchCalls.length).toBe(1);\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/integration/search-core.test.ts",
    "content": "/**\n * P0 Integration Tests: BM25/FTS Search against real LadybugDB\n *\n * Tests: searchFTSFromLbug via core adapter (no repoId) path against\n * indexed test data. Verifies ranked result ordering, score merging,\n * and empty-match behavior.\n *\n * Uses withTestLbugDB wrapper for full lifecycle management.\n */\nimport { describe, it, expect } from 'vitest';\nimport { withTestLbugDB } from '../helpers/test-indexed-db.js';\nimport { searchFTSFromLbug } from '../../src/core/search/bm25-index.js';\nimport { SEARCH_SEED_DATA, SEARCH_FTS_INDEXES } from '../fixtures/search-seed.js';\n\n// ─── Core adapter path (no repoId) ──────────────────────────────────\n\nwithTestLbugDB('search-core', (_handle) => {\n  describe('searchFTSFromLbug — core adapter (no repoId)', () => {\n    it('returns ranked results for a matching query', async () => {\n      const results = await searchFTSFromLbug('user authentication', 10);\n\n      expect(results.length).toBeGreaterThan(0);\n\n      for (const r of results) {\n        expect(r).toHaveProperty('filePath');\n        expect(r).toHaveProperty('score');\n        expect(r).toHaveProperty('rank');\n        expect(typeof r.filePath).toBe('string');\n        expect(typeof r.score).toBe('number');\n        expect(typeof r.rank).toBe('number');\n        expect(r.score).toBeGreaterThan(0);\n      }\n\n      // Ranks should be sequential starting from 1\n      results.forEach((r, i) => {\n        expect(r.rank).toBe(i + 1);\n      });\n    });\n\n    it('results are ordered by descending score', async () => {\n      const results = await searchFTSFromLbug('user authentication', 10);\n\n      for (let i = 1; i < results.length; i++) {\n        expect(results[i - 1].score).toBeGreaterThanOrEqual(results[i].score);\n      }\n    });\n\n    it('auth-related files rank higher than unrelated files', async () => {\n      const results = await searchFTSFromLbug('user authentication', 10);\n      const filePaths = results.map((r) => 
r.filePath);\n\n      expect(filePaths).toContain('src/auth.ts');\n\n      const authIdx = filePaths.indexOf('src/auth.ts');\n      const utilsIdx = filePaths.indexOf('src/utils.ts');\n      if (utilsIdx !== -1) {\n        expect(authIdx).toBeLessThan(utilsIdx);\n      }\n    });\n\n    it('merges scores from multiple node types for the same filePath', async () => {\n      const results = await searchFTSFromLbug('user authentication', 20);\n\n      const authResult = results.find((r) => r.filePath === 'src/auth.ts');\n      expect(authResult).toBeDefined();\n\n      const routerResult = results.find((r) => r.filePath === 'src/router.ts');\n      if (routerResult) {\n        expect(authResult!.score).toBeGreaterThan(routerResult.score);\n      }\n    });\n\n    it('respects limit parameter', async () => {\n      const results = await searchFTSFromLbug('user authentication', 2);\n      expect(results.length).toBeLessThanOrEqual(2);\n    });\n\n    it('returns empty array for a non-matching query', async () => {\n      const results = await searchFTSFromLbug('xyzzyplughtwisty', 10);\n      expect(results).toEqual([]);\n    });\n  });\n\n  // ─── Unhappy paths ──────────────────────────────────────────────────\n\n  describe('unhappy paths', () => {\n    it('returns empty array for empty query string', async () => {\n      const results = await searchFTSFromLbug('', 10);\n      expect(results).toEqual([]);\n    });\n\n    it('returns empty array for whitespace-only query', async () => {\n      const results = await searchFTSFromLbug('   ', 10);\n      expect(results).toEqual([]);\n    });\n\n    it('handles special characters in query gracefully', async () => {\n      const results = await searchFTSFromLbug('user* OR auth+', 10);\n      expect(Array.isArray(results)).toBe(true);\n    });\n\n    it('handles limit of 0', async () => {\n      const results = await searchFTSFromLbug('user authentication', 0);\n      expect(results).toEqual([]);\n    });\n\n    it('handles 
negative limit gracefully', async () => {\n      const results = await searchFTSFromLbug('user authentication', -1);\n      expect(Array.isArray(results)).toBe(true);\n    });\n\n    it('handles very large limit', async () => {\n      const results = await searchFTSFromLbug('user authentication', 100000);\n      expect(results.length).toBeLessThanOrEqual(100000);\n      expect(results.length).toBeGreaterThan(0);\n    });\n  });\n}, {\n  seed: SEARCH_SEED_DATA,\n  ftsIndexes: SEARCH_FTS_INDEXES,\n});\n"
  },
  {
    "path": "gitnexus/test/integration/search-pool.test.ts",
    "content": "/**\n * P0 Integration Tests: BM25/FTS Search against real LadybugDB\n *\n * Tests: searchFTSFromLbug via MCP pool adapter (with repoId) path\n * against indexed test data. Verifies ranked result ordering and\n * empty-match behavior through the pool adapter.\n *\n * Uses withTestLbugDB wrapper for full lifecycle management.\n */\nimport { describe, it, expect } from 'vitest';\nimport { withTestLbugDB } from '../helpers/test-indexed-db.js';\nimport { searchFTSFromLbug } from '../../src/core/search/bm25-index.js';\nimport { SEARCH_SEED_DATA, SEARCH_FTS_INDEXES } from '../fixtures/search-seed.js';\n\n// ─── MCP pool adapter path (with repoId) ────────────────────────────\n\nwithTestLbugDB('search-pool', (handle) => {\n  describe('searchFTSFromLbug — MCP pool adapter (with repoId)', () => {\n    it('returns ranked results via pool adapter', async () => {\n      const results = await searchFTSFromLbug('user authentication', 10, handle.repoId);\n\n      expect(results.length).toBeGreaterThan(0);\n\n      for (const r of results) {\n        expect(r).toHaveProperty('filePath');\n        expect(r).toHaveProperty('score');\n        expect(r).toHaveProperty('rank');\n        expect(r.score).toBeGreaterThan(0);\n      }\n\n      const filePaths = results.map((r) => r.filePath);\n      expect(filePaths).toContain('src/auth.ts');\n    });\n\n    it('results are ordered by descending score via pool adapter', async () => {\n      const results = await searchFTSFromLbug('user authentication', 10, handle.repoId);\n\n      for (let i = 1; i < results.length; i++) {\n        expect(results[i - 1].score).toBeGreaterThanOrEqual(results[i].score);\n      }\n    });\n\n    it('returns empty array for non-matching query via pool adapter', async () => {\n      const results = await searchFTSFromLbug('xyzzyplughtwisty', 10, handle.repoId);\n      expect(results).toEqual([]);\n    });\n\n    it('respects limit parameter via pool adapter', async () => {\n      const results = 
await searchFTSFromLbug('user authentication', 1, handle.repoId);\n      expect(results.length).toBeLessThanOrEqual(1);\n    });\n  });\n\n  // ─── Unhappy paths ──────────────────────────────────────────────────\n\n  describe('unhappy paths', () => {\n    it('returns empty array for empty query via pool', async () => {\n      const results = await searchFTSFromLbug('', 10, handle.repoId);\n      expect(results).toEqual([]);\n    });\n\n    it('returns empty array for whitespace-only query via pool', async () => {\n      const results = await searchFTSFromLbug('   ', 10, handle.repoId);\n      expect(results).toEqual([]);\n    });\n\n    it('handles special characters in query via pool', async () => {\n      const results = await searchFTSFromLbug('user* OR auth+', 10, handle.repoId);\n      expect(Array.isArray(results)).toBe(true);\n    });\n\n    it('handles limit of 0 via pool', async () => {\n      const results = await searchFTSFromLbug('user authentication', 0, handle.repoId);\n      expect(results).toEqual([]);\n    });\n  });\n}, {\n  seed: SEARCH_SEED_DATA,\n  ftsIndexes: SEARCH_FTS_INDEXES,\n  poolAdapter: true,\n});\n"
  },
  {
    "path": "gitnexus/test/integration/setup-skills.test.ts",
    "content": "import { describe, it, expect, beforeAll, afterAll } from 'vitest';\nimport fs from 'fs/promises';\nimport path from 'path';\nimport os from 'os';\nimport { fileURLToPath } from 'url';\nimport { setupCommand } from '../../src/cli/setup.js';\n\ndescribe('setupCommand skills integration', () => {\n  let tempHome: string;\n  const originalHome = process.env.HOME;\n  const originalUserProfile = process.env.USERPROFILE;\n  const testId = `${Date.now()}-${process.pid}`;\n  const flatSkillName = `test-flat-skill-${testId}`;\n  const dirSkillName = `test-dir-skill-${testId}`;\n  const testDir = path.dirname(fileURLToPath(import.meta.url));\n  const packageSkillsRoot = path.resolve(testDir, '..', '..', 'skills');\n\n  beforeAll(async () => {\n    tempHome = await fs.mkdtemp(path.join(os.tmpdir(), 'gn-setup-home-'));\n    process.env.HOME = tempHome;\n    process.env.USERPROFILE = tempHome;  // os.homedir() checks USERPROFILE on Windows\n    await fs.mkdir(path.join(tempHome, '.cursor'), { recursive: true });\n\n    // Create temporary source skills to verify both supported source layouts:\n    // - flat file: skills/{name}.md\n    // - directory: skills/{name}/SKILL.md (+ nested files copied recursively)\n    await fs.writeFile(\n      path.join(packageSkillsRoot, `${flatSkillName}.md`),\n      `---\\nname: ${flatSkillName}\\ndescription: temp flat skill\\n---\\n\\n# Flat Test Skill`,\n      'utf-8',\n    );\n    await fs.mkdir(path.join(packageSkillsRoot, dirSkillName, 'references'), { recursive: true });\n    await fs.writeFile(\n      path.join(packageSkillsRoot, dirSkillName, 'SKILL.md'),\n      `---\\nname: ${dirSkillName}\\ndescription: temp directory skill\\n---\\n\\n# Directory Test Skill`,\n      'utf-8',\n    );\n    await fs.writeFile(\n      path.join(packageSkillsRoot, dirSkillName, 'references', 'note.md'),\n      '# Directory Nested File',\n      'utf-8',\n    );\n  });\n\n  afterAll(async () => {\n    await fs.rm(path.join(packageSkillsRoot, 
`${flatSkillName}.md`), { force: true });\n    await fs.rm(path.join(packageSkillsRoot, dirSkillName), { recursive: true, force: true });\n    process.env.HOME = originalHome;\n    process.env.USERPROFILE = originalUserProfile;\n    await fs.rm(tempHome, { recursive: true, force: true });\n  });\n\n  it('installs packaged, flat-file, and directory skills into cursor skills directory', async () => {\n    await setupCommand();\n\n    const cursorSkillsRoot = path.join(tempHome, '.cursor', 'skills');\n    const entries = await fs.readdir(cursorSkillsRoot, { withFileTypes: true });\n    const skillDirs = entries.filter(e => e.isDirectory()).map(e => e.name);\n\n    expect(skillDirs.length).toBeGreaterThan(0);\n    expect(skillDirs).toContain('gitnexus-cli');\n\n    const skillContent = await fs.readFile(\n      path.join(cursorSkillsRoot, 'gitnexus-cli', 'SKILL.md'),\n      'utf-8',\n    );\n    expect(skillContent).toContain('GitNexus CLI Commands');\n\n    // Flat file source should be installed as {name}/SKILL.md.\n    const flatInstalled = await fs.readFile(\n      path.join(cursorSkillsRoot, flatSkillName, 'SKILL.md'),\n      'utf-8',\n    );\n    expect(flatInstalled).toContain('# Flat Test Skill');\n\n    // Directory source should be copied recursively with nested files preserved.\n    const dirInstalled = await fs.readFile(\n      path.join(cursorSkillsRoot, dirSkillName, 'SKILL.md'),\n      'utf-8',\n    );\n    expect(dirInstalled).toContain('# Directory Test Skill');\n    const nestedInstalled = await fs.readFile(\n      path.join(cursorSkillsRoot, dirSkillName, 'references', 'note.md'),\n      'utf-8',\n    );\n    expect(nestedInstalled).toContain('Directory Nested File');\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/integration/skills-e2e.test.ts",
    "content": "/**\n * E2E Integration Tests: --skills Flag\n *\n * Tests `gitnexus analyze --skills` across 11 supported languages plus\n * mixed-language and idempotency scenarios. Each language fixture creates\n * a self-contained git repo with 2 clusters of files containing cross-file\n * function calls, then runs the full CLI pipeline and verifies SKILL.md\n * generation and context file updates.\n *\n * Uses process.execPath (never 'node' string), no shell: true.\n * Accepts status === null (timeout) as valid on slow CI runners.\n */\nimport { describe, it, expect, beforeAll, afterAll } from 'vitest';\nimport { spawnSync } from 'child_process';\nimport path from 'path';\nimport fs from 'fs';\nimport os from 'os';\nimport { fileURLToPath, pathToFileURL } from 'url';\nimport { createRequire } from 'module';\n\nconst testDir = path.dirname(fileURLToPath(import.meta.url));\nconst repoRoot = path.resolve(testDir, '../..');\nconst cliEntry = path.join(repoRoot, 'src/cli/index.ts');\n\n// Absolute file:// URL to tsx loader — needed when spawning CLI with cwd\n// outside the project tree (bare 'tsx' specifier won't resolve there).\nconst _require = createRequire(import.meta.url);\nconst tsxPkgDir = path.dirname(_require.resolve('tsx/package.json'));\nconst tsxImportUrl = pathToFileURL(path.join(tsxPkgDir, 'dist', 'loader.mjs')).href;\n\n// ============================================================================\n// FILE-LOCAL HELPERS\n// ============================================================================\n\n/**\n * Spawn the CLI with `analyze --skills` in the given cwd.\n * Uses the absolute tsx loader URL so it works outside the project tree.\n */\nfunction runSkillsCli(cwd: string, timeoutMs = 45000) {\n  return spawnSync(process.execPath, ['--import', tsxImportUrl, cliEntry, 'analyze', '--skills'], {\n    cwd,\n    encoding: 'utf8',\n    timeout: timeoutMs,\n    stdio: ['pipe', 'pipe', 'pipe'],\n    env: {\n      ...process.env,\n      NODE_OPTIONS: 
`${process.env.NODE_OPTIONS || ''} --max-old-space-size=8192`.trim(),\n    },\n  });\n}\n\n/**\n * Create a fixture repo: write files, git init, git add, git commit.\n * Returns the tmp directory path.\n */\nfunction createFixtureRepo(\n  prefix: string,\n  files: Record<string, string>,\n): string {\n  const tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), `skills-e2e-${prefix}-`));\n  for (const [relPath, content] of Object.entries(files)) {\n    const fullPath = path.join(tmpDir, relPath);\n    fs.mkdirSync(path.dirname(fullPath), { recursive: true });\n    fs.writeFileSync(fullPath, content, 'utf-8');\n  }\n  spawnSync('git', ['init'], { cwd: tmpDir, stdio: 'pipe' });\n  spawnSync('git', ['add', '-A'], { cwd: tmpDir, stdio: 'pipe' });\n  spawnSync('git', ['commit', '-m', 'initial commit'], {\n    cwd: tmpDir,\n    stdio: 'pipe',\n    env: {\n      ...process.env,\n      GIT_AUTHOR_NAME: 'test',\n      GIT_AUTHOR_EMAIL: 'test@test',\n      GIT_COMMITTER_NAME: 'test',\n      GIT_COMMITTER_EMAIL: 'test@test',\n    },\n  });\n  return tmpDir;\n}\n\n/**\n * Assert standard skill file properties:\n * 1. CLI exits 0\n * 2. .gitnexus/ exists\n * 3. >= minSkills SKILL.md files under .claude/skills/generated/\n * 4. YAML frontmatter valid\n * 5. ## Key Files section present\n * 6. ## How to Explore section present\n * 7. Content > 200 chars\n *\n * Returns false if skill generation was skipped (native parser crash\n * or Leiden non-determinism producing 0 communities). 
Callers can\n * use this to skip dependent assertions.\n */\nfunction assertSkillFiles(\n  result: ReturnType<typeof runSkillsCli>,\n  tmpDir: string,\n  minSkills = 1,\n): boolean {\n  /* CI timeout tolerance */\n  if (result.status === null) return false;\n\n  expect(result.status, [\n    `analyze --skills exited with code ${result.status}`,\n    `stdout: ${result.stdout?.slice(0, 500)}`,\n    `stderr: ${result.stderr?.slice(0, 500)}`,\n  ].join('\\n')).toBe(0);\n\n  expect(fs.existsSync(path.join(tmpDir, '.gitnexus'))).toBe(true);\n\n  const generatedDir = path.join(tmpDir, '.claude', 'skills', 'generated');\n  if (!fs.existsSync(generatedDir)) {\n    // Native parser may have crashed in worker or Leiden produced 0 communities.\n    // The pipeline still succeeds (exit 0) but no skills are generated.\n    // Skip skill assertions gracefully — this is platform-dependent.\n    return false;\n  }\n\n  const skillDirs = fs.readdirSync(generatedDir).filter(d =>\n    fs.statSync(path.join(generatedDir, d)).isDirectory(),\n  );\n  const skillFiles: string[] = [];\n  for (const dir of skillDirs) {\n    const skillPath = path.join(generatedDir, dir, 'SKILL.md');\n    if (fs.existsSync(skillPath)) {\n      skillFiles.push(skillPath);\n    }\n  }\n\n  expect(skillFiles.length).toBeGreaterThanOrEqual(minSkills);\n\n  for (const skillPath of skillFiles) {\n    const content = fs.readFileSync(skillPath, 'utf-8');\n    expect(content.startsWith('---')).toBe(true);\n    expect(content).toContain('name:');\n    expect(content).toContain('description:');\n    expect(content).toContain('## Key Files');\n    expect(content).toContain('## How to Explore');\n    expect(content.length).toBeGreaterThan(200);\n  }\n\n  return true;\n}\n\n/**\n * Assert CLAUDE.md and AGENTS.md contain generated skill references.\n * Automatically detects whether skills were generated by checking for\n * the generated/ directory.\n */\nfunction assertContextFiles(\n  result: ReturnType<typeof 
runSkillsCli>,\n  tmpDir: string,\n) {\n  if (result.status === null) return;\n\n  const generatedDir = path.join(tmpDir, '.claude', 'skills', 'generated');\n  const skillsGenerated = fs.existsSync(generatedDir);\n\n  const claudePath = path.join(tmpDir, 'CLAUDE.md');\n  expect(fs.existsSync(claudePath)).toBe(true);\n  if (skillsGenerated) {\n    const claudeContent = fs.readFileSync(claudePath, 'utf-8');\n    expect(claudeContent).toContain('.claude/skills/generated/');\n  }\n\n  const agentsPath = path.join(tmpDir, 'AGENTS.md');\n  expect(fs.existsSync(agentsPath)).toBe(true);\n  if (skillsGenerated) {\n    const agentsContent = fs.readFileSync(agentsPath, 'utf-8');\n    expect(agentsContent).toContain('.claude/skills/generated/');\n  }\n}\n\n// ============================================================================\n// DESCRIBE 1: TypeScript\n// ============================================================================\n\ndescribe('TypeScript', () => {\n  let tmpDir: string;\n  let result: ReturnType<typeof runSkillsCli>;\n\n  beforeAll(() => {\n    tmpDir = createFixtureRepo('typescript', {\n      'src/api/router.ts': `\nimport { validateRequest } from '../utils/validator';\nimport { logRequest } from '../utils/logger';\n\nexport function createRouter() {\n  validateRequest('route');\n  logRequest('router init');\n  return { routes: [] };\n}\n\nexport function registerRoute(path: string) {\n  validateRequest(path);\n  logRequest('register ' + path);\n  return true;\n}\n`,\n      'src/api/controller.ts': `\nimport { runQuery } from '../data/query';\nimport { formatResponse } from '../data/format';\n\nexport function handleGet(id: string) {\n  const data = runQuery('SELECT * FROM items WHERE id = ' + id);\n  return formatResponse(data);\n}\n\nexport function handlePost(body: any) {\n  const result = runQuery('INSERT INTO items VALUES ' + JSON.stringify(body));\n  return formatResponse(result);\n}\n`,\n      'src/api/middleware.ts': `\nimport { 
validateToken } from '../utils/validator';\nimport { logRequest } from '../utils/logger';\n\nexport function authMiddleware(req: any) {\n  validateToken(req.headers.auth);\n  logRequest('auth check');\n  return true;\n}\n\nexport function corsMiddleware(req: any) {\n  logRequest('cors check');\n  return { allowed: true };\n}\n`,\n      'src/data/query.ts': `\nimport { formatResult } from './format';\nimport { getCached } from './cache';\n\nexport function runQuery(sql: string) {\n  const cached = getCached(sql);\n  if (cached) return cached;\n  return formatResult({ sql, rows: [] });\n}\n\nexport function buildQuery(table: string, conditions: any) {\n  return 'SELECT * FROM ' + table;\n}\n`,\n      'src/data/format.ts': `\nexport function formatResult(data: any) {\n  return { ...data, formatted: true };\n}\n\nexport function formatResponse(data: any) {\n  return { status: 200, body: formatResult(data) };\n}\n\nexport function serializeResult(data: any) {\n  return JSON.stringify(data);\n}\n`,\n      'src/data/cache.ts': `\nimport { runQuery } from './query';\n\nconst cache = new Map<string, any>();\n\nexport function getCached(key: string) {\n  return cache.get(key) || null;\n}\n\nexport function warmCache(keys: string[]) {\n  for (const key of keys) {\n    cache.set(key, runQuery(key));\n  }\n}\n`,\n      'src/utils/logger.ts': `\nexport function logRequest(msg: string) {\n  console.log('[REQ]', msg);\n}\n\nexport function logError(msg: string) {\n  console.error('[ERR]', msg);\n}\n\nexport function createLogEntry(level: string, msg: string) {\n  return { level, msg, ts: Date.now() };\n}\n`,\n      'src/utils/validator.ts': `\nexport function validateRequest(input: string) {\n  if (!input || input.length === 0) throw new Error('Invalid');\n  return true;\n}\n\nexport function validateToken(token: string) {\n  if (!token || token.length < 10) throw new Error('Invalid token');\n  return true;\n}\n\nexport function sanitize(input: string) {\n  return 
input.replace(/[<>]/g, '');\n}\n`,\n      'src/utils/config.ts': `\nexport function getConfig(key: string) {\n  return process.env[key] || '';\n}\n\nexport function loadEnv() {\n  return { ...process.env };\n}\n\nexport function parseArgs(args: string[]) {\n  return args.reduce((acc: any, arg) => {\n    const [k, v] = arg.split('=');\n    acc[k] = v;\n    return acc;\n  }, {});\n}\n`,\n    });\n    result = runSkillsCli(tmpDir);\n  }, 50000);\n\n  afterAll(() => {\n    fs.rmSync(tmpDir, { recursive: true, force: true });\n  });\n\n  /**\n   * Verify analyze --skills generates valid SKILL.md files for a\n   * TypeScript repo with 3 clusters of cross-calling functions.\n   */\n  it('generates skill files', () => {\n    assertSkillFiles(result, tmpDir);\n  }, 50000);\n\n  /**\n   * Verify CLAUDE.md and AGENTS.md are created and reference generated skills.\n   */\n  it('context files updated', () => {\n    assertContextFiles(result, tmpDir);\n  }, 50000);\n});\n\n// ============================================================================\n// DESCRIBE 2: JavaScript\n// ============================================================================\n\ndescribe('JavaScript', () => {\n  let tmpDir: string;\n  let result: ReturnType<typeof runSkillsCli>;\n\n  beforeAll(() => {\n    tmpDir = createFixtureRepo('javascript', {\n      'src/handlers/userHandler.js': `\nconst { findById } = require('../services/userService');\nconst { validateInput } = require('../helpers/validator');\n\nfunction getUser(id) {\n  validateInput(id);\n  return findById(id);\n}\n\nfunction createUser(data) {\n  validateInput(data.name);\n  return { id: Date.now(), ...data };\n}\n\nmodule.exports = { getUser, createUser };\n`,\n      'src/handlers/authHandler.js': `\nconst { hashPassword, createToken } = require('../services/authService');\n\nfunction login(username, password) {\n  const hashed = hashPassword(password);\n  return createToken(username);\n}\n\nfunction logout(token) {\n  return { 
success: true };\n}\n\nmodule.exports = { login, logout };\n`,\n      'src/handlers/errorHandler.js': `\nconst { logError } = require('../helpers/logger');\n\nfunction handleError(err) {\n  logError(err.message);\n  return { error: err.message };\n}\n\nfunction formatError(err) {\n  logError('format: ' + err.message);\n  return { code: err.code || 500, message: err.message };\n}\n\nmodule.exports = { handleError, formatError };\n`,\n      'src/services/userService.js': `\nconst { formatUser } = require('./formatService');\n\nfunction findById(id) {\n  const user = { id, name: 'Test' };\n  return formatUser(user);\n}\n\nfunction saveUser(user) {\n  return { ...user, saved: true };\n}\n\nmodule.exports = { findById, saveUser };\n`,\n      'src/services/authService.js': `\nfunction hashPassword(password) {\n  return 'hashed_' + password;\n}\n\nfunction createToken(username) {\n  return 'token_' + username + '_' + Date.now();\n}\n\nfunction verifyToken(token) {\n  return token.startsWith('token_');\n}\n\nmodule.exports = { hashPassword, createToken, verifyToken };\n`,\n      'src/services/formatService.js': `\nfunction formatUser(user) {\n  return { ...user, displayName: user.name.toUpperCase() };\n}\n\nfunction formatDate(date) {\n  return new Date(date).toISOString();\n}\n\nfunction formatError(err) {\n  return { error: true, message: String(err) };\n}\n\nmodule.exports = { formatUser, formatDate, formatError };\n`,\n      'src/helpers/validator.js': `\nfunction validateInput(input) {\n  if (!input) throw new Error('Required');\n  return true;\n}\n\nfunction validateEmail(email) {\n  return /^[^@]+@[^@]+$/.test(email);\n}\n\nfunction sanitize(str) {\n  return String(str).replace(/[<>]/g, '');\n}\n\nmodule.exports = { validateInput, validateEmail, sanitize };\n`,\n      'src/helpers/logger.js': `\nfunction logError(msg) {\n  console.error('[ERROR]', msg);\n}\n\nfunction logInfo(msg) {\n  console.log('[INFO]', msg);\n}\n\nfunction createEntry(level, msg) {\n  return { 
level, msg, ts: Date.now() };\n}\n\nmodule.exports = { logError, logInfo, createEntry };\n`,\n    });\n    result = runSkillsCli(tmpDir);\n  }, 50000);\n\n  afterAll(() => {\n    fs.rmSync(tmpDir, { recursive: true, force: true });\n  });\n\n  /**\n   * Verify analyze --skills generates valid SKILL.md files for a\n   * JavaScript repo with handler/service/helper clusters.\n   */\n  it('generates skill files', () => {\n    assertSkillFiles(result, tmpDir);\n  }, 50000);\n\n  /**\n   * Verify CLAUDE.md and AGENTS.md are created and reference generated skills.\n   */\n  it('context files updated', () => {\n    assertContextFiles(result, tmpDir);\n  }, 50000);\n});\n\n// ============================================================================\n// DESCRIBE 3: Python\n// ============================================================================\n\ndescribe('Python', () => {\n  let tmpDir: string;\n  let result: ReturnType<typeof runSkillsCli>;\n\n  beforeAll(() => {\n    tmpDir = createFixtureRepo('python', {\n      'src/auth/__init__.py': '',\n      'src/auth/login.py': `\nfrom src.auth.hash import hash_password\nfrom src.auth.session import create_session\n\ndef login(username, password):\n    hashed = hash_password(password)\n    session = create_session(username)\n    return session\n\ndef validate_credentials(username, password):\n    if not username or not password:\n        raise ValueError(\"Invalid credentials\")\n    return True\n`,\n      'src/auth/hash.py': `\ndef hash_password(password):\n    return \"hashed_\" + password\n\ndef compare_hash(plain, hashed):\n    return hash_password(plain) == hashed\n\ndef generate_salt():\n    return \"salt_\" + str(id(object()))\n`,\n      'src/auth/session.py': `\nfrom src.auth.login import login\n\ndef create_session(username):\n    return {\"user\": username, \"token\": \"sess_\" + username}\n\ndef validate_session(session):\n    return session and \"token\" in session\n\ndef refresh_session(session):\n    return 
create_session(session[\"user\"])\n`,\n      'src/database/__init__.py': '',\n      'src/database/query.py': `\nfrom src.database.format import format_result\nfrom src.database.cache import get_cached\n\ndef run_query(sql):\n    cached = get_cached(sql)\n    if cached:\n        return cached\n    return format_result({\"sql\": sql, \"rows\": []})\n\ndef build_query(table, conditions):\n    return f\"SELECT * FROM {table}\"\n`,\n      'src/database/format.py': `\ndef format_result(data):\n    return {**data, \"formatted\": True}\n\ndef serialize_result(data):\n    import json\n    return json.dumps(data)\n\ndef format_error(err):\n    return {\"error\": str(err)}\n`,\n      'src/database/cache.py': `\nfrom src.database.query import run_query\n\n_cache = {}\n\ndef get_cached(key):\n    return _cache.get(key)\n\ndef warm_cache(keys):\n    for key in keys:\n        _cache[key] = run_query(key)\n`,\n      'src/utils/__init__.py': '',\n      'src/utils/logger.py': `\ndef log_info(msg):\n    print(f\"[INFO] {msg}\")\n\ndef log_error(msg):\n    print(f\"[ERROR] {msg}\")\n\ndef create_entry(level, msg):\n    return {\"level\": level, \"msg\": msg}\n`,\n      'src/utils/validator.py': `\ndef validate_input(data):\n    if not data:\n        raise ValueError(\"Input required\")\n    return True\n\ndef sanitize(text):\n    return text.replace(\"<\", \"\").replace(\">\", \"\")\n\ndef check_length(text, max_len=255):\n    return len(text) <= max_len\n`,\n    });\n    result = runSkillsCli(tmpDir);\n  }, 50000);\n\n  afterAll(() => {\n    fs.rmSync(tmpDir, { recursive: true, force: true });\n  });\n\n  /**\n   * Verify analyze --skills generates valid SKILL.md files for a\n   * Python repo with auth/database/utils clusters.\n   */\n  it('generates skill files', () => {\n    assertSkillFiles(result, tmpDir);\n  }, 50000);\n\n  /**\n   * Verify CLAUDE.md and AGENTS.md are created and reference generated skills.\n   */\n  it('context files updated', () => {\n    
assertContextFiles(result, tmpDir);\n  }, 50000);\n});\n\n// ============================================================================\n// DESCRIBE 4: Go\n// ============================================================================\n\ndescribe('Go', () => {\n  let tmpDir: string;\n  let result: ReturnType<typeof runSkillsCli>;\n\n  beforeAll(() => {\n    tmpDir = createFixtureRepo('go', {\n      'go.mod': `module example.com/testapp\n\ngo 1.21\n`,\n      'cmd/main.go': `package main\n\nimport (\n\t\"example.com/testapp/pkg/handler\"\n)\n\nfunc main() {\n\thandler.HandleGet(\"1\")\n\thandler.HandlePost(map[string]string{\"name\": \"test\"})\n}\n`,\n      'pkg/handler/get.go': `package handler\n\nimport (\n\t\"example.com/testapp/pkg/service\"\n)\n\nfunc HandleGet(id string) map[string]interface{} {\n\tuser := service.FindUser(id)\n\treturn service.FormatResponse(user)\n}\n`,\n      'pkg/handler/post.go': `package handler\n\nimport (\n\t\"example.com/testapp/pkg/service\"\n)\n\nfunc HandlePost(data map[string]string) map[string]interface{} {\n\tservice.ValidateInput(data)\n\treturn service.CreateUser(data)\n}\n`,\n      'pkg/service/user.go': `package service\n\nimport (\n\t\"example.com/testapp/pkg/repository\"\n)\n\nfunc FindUser(id string) map[string]interface{} {\n\treturn repository.GetByID(id)\n}\n\nfunc CreateUser(data map[string]string) map[string]interface{} {\n\trepository.Save(data)\n\treturn map[string]interface{}{\"created\": true}\n}\n`,\n      'pkg/service/format.go': `package service\n\nfunc FormatResponse(data map[string]interface{}) map[string]interface{} {\n\tdata[\"formatted\"] = true\n\treturn data\n}\n\nfunc ValidateInput(data map[string]string) bool {\n\treturn len(data) > 0\n}\n\nfunc Sanitize(input string) string {\n\treturn input\n}\n`,\n      'pkg/repository/user_repo.go': `package repository\n\nfunc GetByID(id string) map[string]interface{} {\n\treturn map[string]interface{}{\"id\": id, \"name\": \"Test\"}\n}\n\nfunc Save(data 
map[string]string) bool {\n\treturn true\n}\n\nfunc Delete(id string) bool {\n\treturn true\n}\n`,\n      'pkg/models/user.go': `package models\n\ntype User struct {\n\tID   string\n\tName string\n}\n\nfunc NewUser(id, name string) *User {\n\treturn &User{ID: id, Name: name}\n}\n\nfunc (u *User) Validate() bool {\n\treturn u.ID != \"\" && u.Name != \"\"\n}\n`,\n    });\n    result = runSkillsCli(tmpDir);\n  }, 50000);\n\n  afterAll(() => {\n    fs.rmSync(tmpDir, { recursive: true, force: true });\n  });\n\n  /**\n   * Verify analyze --skills generates valid SKILL.md files for a\n   * Go repo with handler/service/repository clusters.\n   */\n  it('generates skill files', () => {\n    assertSkillFiles(result, tmpDir);\n  }, 50000);\n\n  /**\n   * Verify CLAUDE.md and AGENTS.md are created and reference generated skills.\n   */\n  it('context files updated', () => {\n    assertContextFiles(result, tmpDir);\n  }, 50000);\n});\n\n// ============================================================================\n// DESCRIBE 5: Java\n// ============================================================================\n\ndescribe('Java', () => {\n  let tmpDir: string;\n  let result: ReturnType<typeof runSkillsCli>;\n\n  beforeAll(() => {\n    tmpDir = createFixtureRepo('java', {\n      'src/service/UserService.java': `package service;\n\nimport repository.UserRepository;\nimport service.Validator;\n\npublic class UserService {\n    private UserRepository repository = new UserRepository();\n    private Validator validator = new Validator();\n\n    public Object findUser(String id) {\n        validator.validate(id);\n        return repository.getById(id);\n    }\n\n    public Object createUser(String name) {\n        validator.validate(name);\n        return repository.save(name);\n    }\n}\n`,\n      'src/service/AuthService.java': `package service;\n\npublic class AuthService {\n    private UserService userService = new UserService();\n\n    public Object authenticate(String 
username, String password) {\n        Object user = userService.findUser(username);\n        return hashPassword(password);\n    }\n\n    public String hashPassword(String password) {\n        return \"hashed_\" + password;\n    }\n}\n`,\n      'src/service/Validator.java': `package service;\n\npublic class Validator {\n    public boolean validate(String input) {\n        if (input == null || input.isEmpty()) {\n            throw new IllegalArgumentException(\"Invalid input\");\n        }\n        return true;\n    }\n\n    public String sanitize(String input) {\n        return input.replaceAll(\"[<>]\", \"\");\n    }\n\n    public boolean checkLength(String input, int max) {\n        return input.length() <= max;\n    }\n}\n`,\n      'src/repository/UserRepository.java': `package repository;\n\npublic class UserRepository extends BaseRepository {\n    public Object getById(String id) {\n        return new Object();\n    }\n\n    public Object save(String name) {\n        return new Object();\n    }\n\n    public boolean delete(String id) {\n        return true;\n    }\n}\n`,\n      'src/repository/BaseRepository.java': `package repository;\n\npublic abstract class BaseRepository {\n    public Object[] findAll() {\n        return new Object[0];\n    }\n\n    public int count() {\n        return 0;\n    }\n}\n`,\n      'src/model/User.java': `package model;\n\npublic class User {\n    private String name;\n\n    public User(String name) {\n        this.name = name;\n    }\n\n    public String getName() {\n        return name;\n    }\n\n    public void setName(String name) {\n        this.name = name;\n    }\n}\n`,\n    });\n    result = runSkillsCli(tmpDir);\n  }, 50000);\n\n  afterAll(() => {\n    fs.rmSync(tmpDir, { recursive: true, force: true });\n  });\n\n  /**\n   * Verify analyze --skills generates valid SKILL.md files for a\n   * Java repo with service/repository/model clusters.\n   */\n  it('generates skill files', () => {\n    assertSkillFiles(result, 
tmpDir);\n  }, 50000);\n\n  /**\n   * Verify CLAUDE.md and AGENTS.md are created and reference generated skills.\n   */\n  it('context files updated', () => {\n    assertContextFiles(result, tmpDir);\n  }, 50000);\n});\n\n// ============================================================================\n// DESCRIBE 6: Rust\n// ============================================================================\n\ndescribe('Rust', () => {\n  let tmpDir: string;\n  let result: ReturnType<typeof runSkillsCli>;\n\n  beforeAll(() => {\n    tmpDir = createFixtureRepo('rust', {\n      'Cargo.toml': `[package]\nname = \"testapp\"\nversion = \"0.1.0\"\nedition = \"2021\"\n`,\n      'src/main.rs': `mod auth;\nmod data;\n\nfn main() {\n    let session = auth::login::login(\"user\", \"pass\");\n    let result = data::query::run_query(\"SELECT 1\");\n    println!(\"{:?} {:?}\", session, result);\n}\n`,\n      'src/auth/mod.rs': `pub mod login;\npub mod hash;\n`,\n      'src/auth/login.rs': `use crate::auth::hash::hash_password;\n\npub fn login(username: &str, password: &str) -> String {\n    let hashed = hash_password(password);\n    format!(\"session_{}_{}\", username, hashed)\n}\n\npub fn validate(token: &str) -> bool {\n    token.starts_with(\"session_\")\n}\n`,\n      'src/auth/hash.rs': `pub fn hash_password(password: &str) -> String {\n    format!(\"hashed_{}\", password)\n}\n\npub fn compare_hash(plain: &str, hashed: &str) -> bool {\n    hash_password(plain) == hashed\n}\n\npub fn generate_salt() -> String {\n    String::from(\"random_salt\")\n}\n`,\n      'src/data/mod.rs': `pub mod query;\npub mod format;\n`,\n      'src/data/query.rs': `use crate::data::format::format_result;\n\npub fn run_query(sql: &str) -> String {\n    let raw = format!(\"result_{}\", sql);\n    format_result(&raw)\n}\n\npub fn build_query(table: &str) -> String {\n    format!(\"SELECT * FROM {}\", table)\n}\n`,\n      'src/data/format.rs': `pub fn format_result(data: &str) -> String {\n    
format!(\"[formatted] {}\", data)\n}\n\npub fn serialize(data: &str) -> String {\n    format!(\"{{\\\"data\\\": \\\"{}\\\"}}\", data)\n}\n\npub fn format_error(err: &str) -> String {\n    format!(\"[ERROR] {}\", err)\n}\n`,\n    });\n    result = runSkillsCli(tmpDir);\n  }, 50000);\n\n  afterAll(() => {\n    fs.rmSync(tmpDir, { recursive: true, force: true });\n  });\n\n  /**\n   * Verify analyze --skills generates valid SKILL.md files for a\n   * Rust repo with auth/data module clusters.\n   */\n  it('generates skill files', () => {\n    assertSkillFiles(result, tmpDir);\n  }, 50000);\n\n  /**\n   * Verify CLAUDE.md and AGENTS.md are created and reference generated skills.\n   */\n  it('context files updated', () => {\n    assertContextFiles(result, tmpDir);\n  }, 50000);\n});\n\n// ============================================================================\n// DESCRIBE 7: C#\n// ============================================================================\n\ndescribe('CSharp', () => {\n  let tmpDir: string;\n  let result: ReturnType<typeof runSkillsCli>;\n\n  beforeAll(() => {\n    tmpDir = createFixtureRepo('csharp', {\n      'Services/UserService.cs': `using System;\n\nnamespace Services\n{\n    public class UserService\n    {\n        public object FindUser(string id)\n        {\n            return id;\n        }\n\n        public object CreateUser(string name)\n        {\n            return name;\n        }\n\n        public object UpdateUser(string id, string name)\n        {\n            return name;\n        }\n\n        public bool RemoveUser(string id)\n        {\n            return true;\n        }\n    }\n\n    public class UserValidator\n    {\n        public bool ValidateUser(string input)\n        {\n            return true;\n        }\n\n        public string SanitizeUser(string input)\n        {\n            return input;\n        }\n\n        public bool CheckUserLength(string input)\n        {\n            return true;\n        }\n    }\n}\n`,\n 
     'Services/AuthService.cs': `using System;\n\nnamespace Services\n{\n    public class AuthService\n    {\n        public object Authenticate(string username, string password)\n        {\n            return username;\n        }\n\n        public string HashPassword(string password)\n        {\n            return password;\n        }\n\n        public bool VerifyPassword(string hashed)\n        {\n            return true;\n        }\n\n        public string CreateToken(string username)\n        {\n            return username;\n        }\n    }\n\n    public class TokenManager\n    {\n        public string GenerateToken(string user)\n        {\n            return user;\n        }\n\n        public bool ValidateToken(string token)\n        {\n            return true;\n        }\n\n        public string RefreshToken(string token)\n        {\n            return token;\n        }\n    }\n}\n`,\n      'Services/OrderService.cs': `using System;\n\nnamespace Services\n{\n    public class OrderService\n    {\n        public object CreateOrder(string item)\n        {\n            return item;\n        }\n\n        public object GetOrder(string id)\n        {\n            return id;\n        }\n\n        public bool CancelOrder(string id)\n        {\n            return true;\n        }\n\n        public object UpdateOrder(string id, string item)\n        {\n            return item;\n        }\n    }\n\n    public class OrderValidator\n    {\n        public bool ValidateOrder(string input)\n        {\n            return true;\n        }\n\n        public string SanitizeOrder(string input)\n        {\n            return input;\n        }\n    }\n}\n`,\n      'Services/EmailService.cs': `using System;\n\nnamespace Services\n{\n    public class EmailService\n    {\n        public void SendMail(string to, string body)\n        {\n        }\n\n        public void SendBulk(string to, string body)\n        {\n        }\n\n        public string FormatBody(string body)\n        {\n   
         return body;\n        }\n\n        public bool ValidateAddress(string addr)\n        {\n            return true;\n        }\n    }\n}\n`,\n      'Data/UserRepo.cs': `using System;\n\nnamespace Data\n{\n    public class UserRepo\n    {\n        public object GetById(string id)\n        {\n            return id;\n        }\n\n        public object Save(string name)\n        {\n            return name;\n        }\n\n        public object Update(string id, string name)\n        {\n            return name;\n        }\n\n        public bool Delete(string id)\n        {\n            return true;\n        }\n\n        public object[] ListAll()\n        {\n            return new object[0];\n        }\n    }\n}\n`,\n      'Data/OrderRepo.cs': `using System;\n\nnamespace Data\n{\n    public class OrderRepo\n    {\n        public object FindOrder(string id)\n        {\n            return id;\n        }\n\n        public object InsertOrder(string item)\n        {\n            return item;\n        }\n\n        public bool RemoveOrder(string id)\n        {\n            return true;\n        }\n\n        public object UpdateOrder(string id, string data)\n        {\n            return data;\n        }\n\n        public int CountOrders()\n        {\n            return 0;\n        }\n    }\n}\n`,\n      'Data/CacheManager.cs': `using System;\n\nnamespace Data\n{\n    public class CacheManager\n    {\n        public object GetCached(string key)\n        {\n            return key;\n        }\n\n        public void SetCached(string key, object val)\n        {\n        }\n\n        public void Invalidate(string key)\n        {\n        }\n\n        public void Clear()\n        {\n        }\n    }\n\n    public class CacheStats\n    {\n        public int GetHitCount()\n        {\n            return 0;\n        }\n\n        public int GetMissCount()\n        {\n            return 0;\n        }\n\n        public double GetHitRate()\n        {\n            return 0.0;\n        }\n  
  }\n}\n`,\n      'Data/Logger.cs': `using System;\n\nnamespace Data\n{\n    public class Logger\n    {\n        public void Info(string msg)\n        {\n        }\n\n        public void Error(string msg)\n        {\n        }\n\n        public void Warn(string msg)\n        {\n        }\n\n        public void Debug(string msg)\n        {\n        }\n    }\n\n    public class LogFormatter\n    {\n        public string FormatEntry(string level, string msg)\n        {\n            return level + msg;\n        }\n\n        public string FormatTimestamp()\n        {\n            return \"\";\n        }\n\n        public string FormatStackTrace(string trace)\n        {\n            return trace;\n        }\n    }\n}\n`,\n    });\n    result = runSkillsCli(tmpDir);\n  }, 50000);\n\n  afterAll(() => {\n    fs.rmSync(tmpDir, { recursive: true, force: true });\n  });\n\n  /**\n   * Verify analyze --skills generates valid SKILL.md files for a\n   * C# repo with Services/Data clusters.\n   *\n   * Note: tree-sitter-c-sharp's native N-API addon can crash in forked\n   * workers on some platforms (libc++abi exception). When this happens,\n   * the pipeline falls through with 0 communities and no skills are\n   * generated. 
assertSkillFiles handles this gracefully.\n   */\n  it('generates skill files', () => {\n    assertSkillFiles(result, tmpDir);\n  }, 50000);\n\n  /**\n   * Verify CLAUDE.md and AGENTS.md are created and reference generated skills.\n   */\n  it('context files updated', () => {\n    assertContextFiles(result, tmpDir);\n  }, 50000);\n});\n\n// ============================================================================\n// DESCRIBE 8: C++\n// ============================================================================\n\ndescribe('CPlusPlus', () => {\n  let tmpDir: string;\n  let result: ReturnType<typeof runSkillsCli>;\n\n  beforeAll(() => {\n    tmpDir = createFixtureRepo('cpp', {\n      'src/engine/engine.h': `#ifndef ENGINE_H\n#define ENGINE_H\n\nclass Engine {\npublic:\n    void start();\n    void stop();\n};\n\n#endif\n`,\n      'src/engine/engine.cpp': `#include \"engine.h\"\n#include \"../utils/logger.h\"\n#include \"../utils/config.h\"\n\nvoid Engine::start() {\n    Logger logger;\n    logger.log(\"Engine starting\");\n    Config config;\n    config.get(\"engine.mode\");\n}\n\nvoid Engine::stop() {\n    Logger logger;\n    logger.log(\"Engine stopping\");\n}\n`,\n      'src/engine/renderer.h': `#ifndef RENDERER_H\n#define RENDERER_H\n\nclass Renderer {\npublic:\n    void render();\n    void clear();\n};\n\n#endif\n`,\n      'src/engine/renderer.cpp': `#include \"renderer.h\"\n#include \"engine.h\"\n\nvoid Renderer::render() {\n    Engine engine;\n    engine.start();\n}\n\nvoid Renderer::clear() {\n}\n`,\n      'src/engine/physics.h': `#ifndef PHYSICS_H\n#define PHYSICS_H\n\nvoid simulate();\nvoid collide();\n\n#endif\n`,\n      'src/engine/physics.cpp': `#include \"physics.h\"\n#include \"engine.h\"\n#include \"../utils/logger.h\"\n\nvoid simulate() {\n    Engine engine;\n    engine.stop();\n    Logger logger;\n    logger.log(\"simulating\");\n}\n\nvoid collide() {\n    Logger logger;\n    logger.log(\"collision detected\");\n}\n`,\n      
'src/utils/logger.h': `#ifndef LOGGER_H\n#define LOGGER_H\n\n#include <string>\n\nclass Logger {\npublic:\n    void log(const std::string& msg);\n    void error(const std::string& msg);\n    void flush();\n};\n\n#endif\n`,\n      'src/utils/logger.cpp': `#include \"logger.h\"\n#include <iostream>\n\nvoid Logger::log(const std::string& msg) {\n    std::cout << \"[LOG] \" << msg << std::endl;\n}\n\nvoid Logger::error(const std::string& msg) {\n    std::cerr << \"[ERR] \" << msg << std::endl;\n}\n\nvoid Logger::flush() {\n    std::cout.flush();\n}\n`,\n      'src/utils/config.h': `#ifndef CONFIG_H\n#define CONFIG_H\n\n#include <string>\n\nclass Config {\npublic:\n    std::string get(const std::string& key);\n    void set(const std::string& key, const std::string& value);\n    void load(const std::string& path);\n};\n\n#endif\n`,\n      'src/utils/config.cpp': `#include \"config.h\"\n\nstd::string Config::get(const std::string& key) {\n    return \"\";\n}\n\nvoid Config::set(const std::string& key, const std::string& value) {\n}\n\nvoid Config::load(const std::string& path) {\n}\n`,\n      'src/utils/math.h': `#ifndef MATH_H\n#define MATH_H\n\nint clamp(int value, int min, int max);\nfloat lerp(float a, float b, float t);\ndouble distance(double x1, double y1, double x2, double y2);\n\n#endif\n`,\n      'src/utils/math.cpp': `#include \"math.h\"\n#include <cmath>\n\nint clamp(int value, int min, int max) {\n    if (value < min) return min;\n    if (value > max) return max;\n    return value;\n}\n\nfloat lerp(float a, float b, float t) {\n    return a + (b - a) * t;\n}\n\ndouble distance(double x1, double y1, double x2, double y2) {\n    return std::sqrt((x2-x1)*(x2-x1) + (y2-y1)*(y2-y1));\n}\n`,\n    });\n    result = runSkillsCli(tmpDir);\n  }, 50000);\n\n  afterAll(() => {\n    fs.rmSync(tmpDir, { recursive: true, force: true });\n  });\n\n  /**\n   * Verify analyze --skills generates valid SKILL.md files for a\n   * C++ repo with engine/utils clusters including 
headers.\n   */\n  it('generates skill files', () => {\n    assertSkillFiles(result, tmpDir);\n  }, 50000);\n\n  /**\n   * Verify CLAUDE.md and AGENTS.md are created and reference generated skills.\n   */\n  it('context files updated', () => {\n    assertContextFiles(result, tmpDir);\n  }, 50000);\n});\n\n// ============================================================================\n// DESCRIBE 9: C\n// ============================================================================\n\ndescribe('C', () => {\n  let tmpDir: string;\n  let result: ReturnType<typeof runSkillsCli>;\n\n  beforeAll(() => {\n    tmpDir = createFixtureRepo('c', {\n      'src/core/parser.h': `#ifndef PARSER_H\n#define PARSER_H\n\nvoid parse(const char* input);\nvoid tokenize(const char* input);\n\n#endif\n`,\n      'src/core/parser.c': `#include \"parser.h\"\n#include \"../io/reader.h\"\n#include \"../io/logger.h\"\n\nvoid parse(const char* input) {\n    char* data = read_file(input);\n    log_msg(\"parsing\");\n    tokenize(data);\n}\n\nvoid tokenize(const char* input) {\n    log_msg(\"tokenizing\");\n}\n`,\n      'src/core/lexer.h': `#ifndef LEXER_H\n#define LEXER_H\n\ntypedef struct {\n    int type;\n    const char* value;\n} Token;\n\nvoid lex(const char* input);\nToken next_token(const char* input);\nint is_keyword(const char* word);\n\n#endif\n`,\n      'src/core/lexer.c': `#include \"lexer.h\"\n#include \"parser.h\"\n#include <string.h>\n\nvoid lex(const char* input) {\n    parse(input);\n}\n\nToken next_token(const char* input) {\n    Token t;\n    t.type = 0;\n    t.value = input;\n    return t;\n}\n\nint is_keyword(const char* word) {\n    return strcmp(word, \"if\") == 0 || strcmp(word, \"else\") == 0;\n}\n`,\n      'src/core/ast.h': `#ifndef AST_H\n#define AST_H\n\ntypedef struct ASTNode {\n    int type;\n    struct ASTNode* left;\n    struct ASTNode* right;\n} ASTNode;\n\nASTNode* create_node(int type);\nvoid free_node(ASTNode* node);\n\n#endif\n`,\n      'src/core/ast.c': 
`#include \"ast.h\"\n#include \"lexer.h\"\n#include <stdlib.h>\n\nASTNode* create_node(int type) {\n    ASTNode* node = (ASTNode*)malloc(sizeof(ASTNode));\n    node->type = type;\n    node->left = NULL;\n    node->right = NULL;\n    tokenize(\"ast\");\n    return node;\n}\n\nvoid free_node(ASTNode* node) {\n    if (node) {\n        free_node(node->left);\n        free_node(node->right);\n        free(node);\n    }\n}\n`,\n      'src/io/reader.h': `#ifndef READER_H\n#define READER_H\n\nchar* read_file(const char* path);\nvoid close_file(const char* path);\nint file_exists(const char* path);\n\n#endif\n`,\n      'src/io/reader.c': `#include \"reader.h\"\n#include <stdio.h>\n#include <stdlib.h>\n\nchar* read_file(const char* path) {\n    return \"file contents\";\n}\n\nvoid close_file(const char* path) {\n}\n\nint file_exists(const char* path) {\n    FILE* f = fopen(path, \"r\");\n    if (f) { fclose(f); return 1; }\n    return 0;\n}\n`,\n      'src/io/writer.h': `#ifndef WRITER_H\n#define WRITER_H\n\nvoid write_file(const char* path, const char* data);\nvoid flush_writer(void);\n\n#endif\n`,\n      'src/io/writer.c': `#include \"writer.h\"\n#include \"logger.h\"\n\nvoid write_file(const char* path, const char* data) {\n    log_msg(\"writing file\");\n}\n\nvoid flush_writer(void) {\n    log_msg(\"flushing\");\n}\n`,\n      'src/io/logger.h': `#ifndef LOGGER_H\n#define LOGGER_H\n\nvoid log_msg(const char* msg);\nvoid log_error(const char* msg);\nvoid log_init(void);\n\n#endif\n`,\n      'src/io/logger.c': `#include \"logger.h\"\n#include <stdio.h>\n\nvoid log_msg(const char* msg) {\n    printf(\"[LOG] %s\\\\n\", msg);\n}\n\nvoid log_error(const char* msg) {\n    fprintf(stderr, \"[ERR] %s\\\\n\", msg);\n}\n\nvoid log_init(void) {\n    log_msg(\"logger initialized\");\n}\n`,\n    });\n    result = runSkillsCli(tmpDir);\n  }, 50000);\n\n  afterAll(() => {\n    fs.rmSync(tmpDir, { recursive: true, force: true });\n  });\n\n  /**\n   * Verify analyze --skills generates 
valid SKILL.md files for a\n   * C repo with core/io clusters including headers.\n   */\n  it('generates skill files', () => {\n    assertSkillFiles(result, tmpDir);\n  }, 50000);\n\n  /**\n   * Verify CLAUDE.md and AGENTS.md are created and reference generated skills.\n   */\n  it('context files updated', () => {\n    assertContextFiles(result, tmpDir);\n  }, 50000);\n});\n\n// ============================================================================\n// DESCRIBE 10: PHP\n// ============================================================================\n\ndescribe('PHP', () => {\n  let tmpDir: string;\n  let result: ReturnType<typeof runSkillsCli>;\n\n  beforeAll(() => {\n    tmpDir = createFixtureRepo('php', {\n      'src/Controllers/UserController.php': `<?php\n\nfunction controller_index() {\n    validate_input('list');\n    $users = service_find_all();\n    return format_response($users);\n}\n\nfunction controller_store($data) {\n    validate_input($data);\n    sanitize_input($data);\n    $user = service_find_by_id($data);\n    return format_response($user);\n}\n\nfunction controller_update($id, $data) {\n    validate_input($id);\n    validate_input($data);\n    $result = service_update($id, $data);\n    return format_response($result);\n}\n\nfunction controller_delete($id) {\n    validate_input($id);\n    return service_delete($id);\n}\n`,\n      'src/Controllers/AuthController.php': `<?php\n\nfunction auth_login($username, $password) {\n    validate_input($username);\n    validate_input($password);\n    $hash = auth_hash_password($password);\n    return auth_create_token($username);\n}\n\nfunction auth_logout($token) {\n    validate_input($token);\n    return true;\n}\n\nfunction auth_register($username, $password) {\n    validate_input($username);\n    sanitize_input($username);\n    $hash = auth_hash_password($password);\n    return service_create($username, $hash);\n}\n`,\n      'src/Controllers/ApiController.php': `<?php\n\nfunction 
api_handle_request($method, $path) {\n    validate_input($method);\n    validate_input($path);\n    log_request($method . ' ' . $path);\n    return format_response(['method' => $method, 'path' => $path]);\n}\n\nfunction api_handle_error($error) {\n    log_error($error);\n    return format_error($error);\n}\n\nfunction api_middleware($request) {\n    validate_input($request);\n    log_request('middleware');\n    return true;\n}\n`,\n      'src/Services/UserService.php': `<?php\n\nfunction service_find_all() {\n    $result = db_query('SELECT * FROM users');\n    return format_response($result);\n}\n\nfunction service_find_by_id($id) {\n    $result = db_query('SELECT * FROM users WHERE id = ' . $id);\n    return format_response($result);\n}\n\nfunction service_create($name, $hash) {\n    db_execute('INSERT INTO users VALUES (' . $name . ')');\n    log_request('user created');\n    return true;\n}\n\nfunction service_update($id, $data) {\n    db_execute('UPDATE users SET data = ' . $data);\n    log_request('user updated');\n    return true;\n}\n\nfunction service_delete($id) {\n    db_execute('DELETE FROM users WHERE id = ' . $id);\n    log_request('user deleted');\n    return true;\n}\n`,\n      'src/Services/AuthServiceImpl.php': `<?php\n\nfunction auth_hash_password($password) {\n    validate_input($password);\n    return 'hashed_' . $password;\n}\n\nfunction auth_create_token($username) {\n    validate_input($username);\n    log_request('token created for ' . $username);\n    return 'token_' . 
$username;\n}\n\nfunction auth_verify_token($token) {\n    validate_input($token);\n    return strpos($token, 'token_') === 0;\n}\n\nfunction auth_refresh_token($token) {\n    auth_verify_token($token);\n    return auth_create_token('refreshed');\n}\n`,\n      'src/Helpers/validator.php': `<?php\n\nfunction validate_input($input) {\n    if (empty($input)) {\n        throw new InvalidArgumentException('Invalid');\n    }\n    return true;\n}\n\nfunction sanitize_input($input) {\n    return htmlspecialchars($input);\n}\n\nfunction check_required($data, $fields) {\n    foreach ($fields as $field) {\n        if (!isset($data[$field])) return false;\n    }\n    return true;\n}\n\nfunction check_length($input, $max = 255) {\n    return strlen($input) <= $max;\n}\n`,\n      'src/Helpers/logger.php': `<?php\n\nfunction log_request($msg) {\n    echo '[REQ] ' . $msg . \"\\\\n\";\n}\n\nfunction log_error($msg) {\n    echo '[ERR] ' . $msg . \"\\\\n\";\n}\n\nfunction log_info($msg) {\n    echo '[INFO] ' . $msg . \"\\\\n\";\n}\n\nfunction create_log_entry($level, $msg) {\n    return ['level' => $level, 'msg' => $msg, 'ts' => time()];\n}\n`,\n      'src/Helpers/formatter.php': `<?php\n\nfunction format_response($data) {\n    return ['status' => 200, 'body' => $data, 'formatted' => true];\n}\n\nfunction format_error($err) {\n    return ['status' => 500, 'error' => $err];\n}\n\nfunction format_date($timestamp) {\n    return date('Y-m-d', $timestamp);\n}\n\nfunction format_json($data) {\n    return json_encode($data);\n}\n`,\n      'src/Data/database.php': `<?php\n\nfunction db_query($sql) {\n    log_request('query: ' . $sql);\n    return [];\n}\n\nfunction db_execute($sql) {\n    log_request('execute: ' . 
$sql);\n    return true;\n}\n\nfunction db_connect($host) {\n    return true;\n}\n\nfunction db_close() {\n    return true;\n}\n`,\n    });\n    result = runSkillsCli(tmpDir);\n  }, 50000);\n\n  afterAll(() => {\n    fs.rmSync(tmpDir, { recursive: true, force: true });\n  });\n\n  /**\n   * Verify analyze --skills generates valid SKILL.md files for a\n   * PHP repo with Controllers/Services/Helpers/Data clusters.\n   */\n  it('generates skill files', () => {\n    assertSkillFiles(result, tmpDir);\n  }, 50000);\n\n  /**\n   * Verify CLAUDE.md and AGENTS.md are created and reference generated skills.\n   */\n  it('context files updated', () => {\n    assertContextFiles(result, tmpDir);\n  }, 50000);\n});\n\n// ============================================================================\n// DESCRIBE 11: Kotlin\n// ============================================================================\n\ndescribe('Kotlin', () => {\n  let tmpDir: string;\n  let result: ReturnType<typeof runSkillsCli>;\n\n  beforeAll(() => {\n    tmpDir = createFixtureRepo('kotlin', {\n      'src/main/kotlin/service/UserService.kt': `package service\n\nfun findUser(id: String): Map<String, Any> {\n    validateInput(id)\n    val result = dbQuery(\"SELECT * FROM users WHERE id = $id\")\n    return formatResponse(result)\n}\n\nfun createUser(name: String): Map<String, Any> {\n    validateInput(name)\n    sanitizeInput(name)\n    dbExecute(\"INSERT INTO users VALUES ('$name')\")\n    logRequest(\"user created\")\n    return formatResponse(mapOf(\"name\" to name))\n}\n\nfun updateUser(id: String, name: String): Map<String, Any> {\n    validateInput(id)\n    validateInput(name)\n    dbExecute(\"UPDATE users SET name = '$name' WHERE id = $id\")\n    logRequest(\"user updated\")\n    return formatResponse(mapOf(\"id\" to id))\n}\n\nfun deleteUser(id: String): Boolean {\n    validateInput(id)\n    dbExecute(\"DELETE FROM users WHERE id = $id\")\n    logRequest(\"user deleted\")\n    return true\n}\n`,\n      
'src/main/kotlin/service/AuthService.kt': `package service\n\nfun authenticate(username: String, password: String): Map<String, Any> {\n    validateInput(username)\n    validateInput(password)\n    val user = findUser(username)\n    val hash = hashPassword(password)\n    return formatResponse(mapOf(\"user\" to user, \"token\" to createToken(username)))\n}\n\nfun hashPassword(password: String): String {\n    validateInput(password)\n    return \"hashed_$password\"\n}\n\nfun createToken(username: String): String {\n    validateInput(username)\n    logRequest(\"token created for $username\")\n    return \"token_$username\"\n}\n\nfun verifyToken(token: String): Boolean {\n    validateInput(token)\n    return token.startsWith(\"token_\")\n}\n\nfun refreshToken(token: String): String {\n    verifyToken(token)\n    return createToken(\"refreshed\")\n}\n`,\n      'src/main/kotlin/service/NotificationService.kt': `package service\n\nfun notify(userId: String, message: String) {\n    validateInput(userId)\n    validateInput(message)\n    sendEmail(userId, message)\n}\n\nfun sendEmail(to: String, body: String) {\n    sanitizeInput(body)\n    logRequest(\"email sent to $to\")\n    formatMessage(body)\n}\n\nfun sendAlert(message: String) {\n    logRequest(\"alert: $message\")\n    formatError(message)\n}\n`,\n      'src/main/kotlin/helpers/Validator.kt': `package helpers\n\nfun validateInput(input: String): Boolean {\n    if (input.isEmpty()) throw IllegalArgumentException(\"Invalid\")\n    return true\n}\n\nfun sanitizeInput(input: String): String {\n    return input.replace(\"<\", \"\").replace(\">\", \"\")\n}\n\nfun checkLength(input: String, max: Int = 255): Boolean {\n    return input.length <= max\n}\n\nfun normalizeInput(input: String): String {\n    return input.trim().lowercase()\n}\n`,\n      'src/main/kotlin/helpers/Logger.kt': `package helpers\n\nfun logRequest(msg: String) {\n    println(\"[REQ] $msg\")\n}\n\nfun logError(msg: String) {\n    
System.err.println(\"[ERR] $msg\")\n}\n\nfun logInfo(msg: String) {\n    println(\"[INFO] $msg\")\n}\n\nfun createLogEntry(level: String, msg: String): Map<String, Any> {\n    return mapOf(\"level\" to level, \"msg\" to msg, \"ts\" to System.currentTimeMillis())\n}\n`,\n      'src/main/kotlin/helpers/Formatter.kt': `package helpers\n\nfun formatResponse(data: Map<String, Any>): Map<String, Any> {\n    return data + mapOf(\"formatted\" to true, \"status\" to 200)\n}\n\nfun formatError(err: String): Map<String, Any> {\n    return mapOf(\"status\" to 500, \"error\" to err)\n}\n\nfun formatMessage(msg: String): String {\n    return \"[MSG] $msg\"\n}\n\nfun formatDate(timestamp: Long): String {\n    return timestamp.toString()\n}\n`,\n      'src/main/kotlin/data/Database.kt': `package data\n\nfun dbQuery(sql: String): Map<String, Any> {\n    logRequest(\"query: $sql\")\n    return mapOf(\"rows\" to emptyList<Any>())\n}\n\nfun dbExecute(sql: String): Boolean {\n    logRequest(\"execute: $sql\")\n    return true\n}\n\nfun dbConnect(url: String): Boolean {\n    return true\n}\n\nfun dbClose() {\n}\n`,\n    });\n    result = runSkillsCli(tmpDir);\n  }, 50000);\n\n  afterAll(() => {\n    fs.rmSync(tmpDir, { recursive: true, force: true });\n  });\n\n  /**\n   * Verify analyze --skills generates valid SKILL.md files for a\n   * Kotlin repo with service/helpers/data clusters.\n   */\n  it('generates skill files', () => {\n    assertSkillFiles(result, tmpDir);\n  }, 50000);\n\n  /**\n   * Verify CLAUDE.md and AGENTS.md are created and reference generated skills.\n   */\n  it('context files updated', () => {\n    assertContextFiles(result, tmpDir);\n  }, 50000);\n});\n\n// ============================================================================\n// DESCRIBE 12: Mixed TypeScript + Python\n// ============================================================================\n\ndescribe('Mixed TypeScript + Python', () => {\n  let tmpDir: string;\n  let result: ReturnType<typeof 
runSkillsCli>;\n\n  beforeAll(() => {\n    tmpDir = createFixtureRepo('mixed', {\n      'packages/backend/src/api/router.ts': `\nimport { validateRequest } from '../utils/validator';\nimport { logRequest } from '../utils/logger';\n\nexport function createRouter() {\n  validateRequest('route');\n  logRequest('router init');\n  return { routes: [] };\n}\n\nexport function registerRoute(path: string) {\n  validateRequest(path);\n  logRequest('register ' + path);\n  return true;\n}\n`,\n      'packages/backend/src/api/controller.ts': `\nimport { runQuery } from '../data/query';\n\nexport function handleGet(id: string) {\n  return runQuery('SELECT * FROM items WHERE id = ' + id);\n}\n\nexport function handlePost(body: any) {\n  return runQuery('INSERT INTO items VALUES ' + JSON.stringify(body));\n}\n`,\n      'packages/backend/src/data/query.ts': `\nexport function runQuery(sql: string) {\n  return { sql, rows: [] };\n}\n\nexport function buildQuery(table: string) {\n  return 'SELECT * FROM ' + table;\n}\n`,\n      'packages/backend/src/utils/validator.ts': `\nexport function validateRequest(input: string) {\n  if (!input) throw new Error('Invalid');\n  return true;\n}\n\nexport function sanitize(input: string) {\n  return input.replace(/[<>]/g, '');\n}\n`,\n      'packages/backend/src/utils/logger.ts': `\nexport function logRequest(msg: string) {\n  console.log('[REQ]', msg);\n}\n\nexport function logError(msg: string) {\n  console.error('[ERR]', msg);\n}\n`,\n      'packages/ml/src/pipeline/__init__.py': '',\n      'packages/ml/src/pipeline/train.py': `\nfrom packages.ml.src.data.loader import load_data, preprocess\n\ndef train(config):\n    data = load_data(\"train.csv\")\n    processed = preprocess(data)\n    return {\"model\": \"trained\", \"data\": processed}\n\ndef evaluate(model, test_data):\n    data = load_data(\"test.csv\")\n    return {\"accuracy\": 0.95}\n`,\n      'packages/ml/src/pipeline/predict.py': `\nfrom packages.ml.src.models.model import 
load_model\n\ndef predict(input_data):\n    model = load_model(\"latest\")\n    return {\"prediction\": \"result\"}\n\ndef batch_predict(inputs):\n    model = load_model(\"latest\")\n    return [{\"prediction\": \"result\"} for _ in inputs]\n`,\n      'packages/ml/src/data/__init__.py': '',\n      'packages/ml/src/data/loader.py': `\ndef load_data(path):\n    return {\"path\": path, \"rows\": []}\n\ndef preprocess(data):\n    return {**data, \"preprocessed\": True}\n\ndef split_data(data, ratio=0.8):\n    return data, data\n`,\n      'packages/ml/src/models/__init__.py': '',\n      'packages/ml/src/models/model.py': `\ndef load_model(name):\n    return {\"name\": name, \"loaded\": True}\n\ndef save_model(model, path):\n    return True\n\ndef compile_model(config):\n    return {\"compiled\": True}\n`,\n    });\n    result = runSkillsCli(tmpDir);\n  }, 50000);\n\n  afterAll(() => {\n    fs.rmSync(tmpDir, { recursive: true, force: true });\n  });\n\n  /**\n   * Verify analyze --skills generates at least 1 SKILL.md for a\n   * mixed TypeScript + Python monorepo. 
Relaxed assertion since Leiden\n   * may or may not form communities spanning both languages.\n   */\n  it('generates skill files', () => {\n    assertSkillFiles(result, tmpDir, 1);\n  }, 50000);\n\n  /**\n   * Verify CLAUDE.md and AGENTS.md are created and reference generated skills.\n   */\n  it('context files updated', () => {\n    assertContextFiles(result, tmpDir);\n  }, 50000);\n});\n\n// ============================================================================\n// DESCRIBE 13: Idempotency\n// ============================================================================\n\ndescribe('Idempotency', () => {\n  let tmpDir: string;\n  let result1: ReturnType<typeof runSkillsCli>;\n  let result2: ReturnType<typeof runSkillsCli>;\n\n  beforeAll(() => {\n    tmpDir = createFixtureRepo('idempotency', {\n      'src/core/parser.ts': `\nimport { readFile } from '../io/reader';\nimport { log } from '../io/logger';\n\nexport function parse(input: string) {\n  const data = readFile(input);\n  log('parsing');\n  return tokenize(data);\n}\n\nexport function tokenize(data: string) {\n  log('tokenizing');\n  return data.split(' ');\n}\n`,\n      'src/core/transformer.ts': `\nimport { parse } from './parser';\nimport { validate } from './validator';\n\nexport function transform(input: string) {\n  validate(input);\n  const tokens = parse(input);\n  return tokens.map(t => t.toUpperCase());\n}\n\nexport function optimize(input: string) {\n  const tokens = parse(input);\n  return tokens.filter(t => t.length > 0);\n}\n`,\n      'src/core/validator.ts': `\nexport function validate(input: string) {\n  if (!input) throw new Error('Invalid');\n  return true;\n}\n\nexport function checkSchema(schema: any) {\n  return schema && typeof schema === 'object';\n}\n\nexport function sanitize(input: string) {\n  return input.replace(/[<>]/g, '');\n}\n`,\n      'src/io/reader.ts': `\nexport function readFile(path: string) {\n  return 'file contents from ' + path;\n}\n\nexport function 
readStream(path: string) {\n  return { path, stream: true };\n}\n\nexport function close(handle: any) {\n  return true;\n}\n`,\n      'src/io/writer.ts': `\nimport { log } from './logger';\n\nexport function writeFile(path: string, data: string) {\n  log('writing ' + path);\n  return true;\n}\n\nexport function flush() {\n  log('flushing');\n  return true;\n}\n`,\n      'src/io/logger.ts': `\nexport function log(msg: string) {\n  console.log('[LOG]', msg);\n}\n\nexport function logError(msg: string) {\n  console.error('[ERR]', msg);\n}\n\nexport function createEntry(level: string, msg: string) {\n  return { level, msg, ts: Date.now() };\n}\n`,\n    });\n    result1 = runSkillsCli(tmpDir);\n    result2 = runSkillsCli(tmpDir);\n  }, 90000);\n\n  afterAll(() => {\n    fs.rmSync(tmpDir, { recursive: true, force: true });\n  });\n\n  /**\n   * Running analyze --skills twice should produce stable output:\n   * same number of skill directories, all SKILL.md files valid,\n   * and CLAUDE.md still references generated skills.\n   */\n  it('second analyze --skills produces stable output', () => {\n    /* CI timeout tolerance */\n    if (result1.status === null || result2.status === null) return;\n\n    expect(result1.status).toBe(0);\n    expect(result2.status).toBe(0);\n\n    const generatedDir = path.join(tmpDir, '.claude', 'skills', 'generated');\n    expect(fs.existsSync(generatedDir)).toBe(true);\n\n    const skillDirs = fs.readdirSync(generatedDir).filter(d =>\n      fs.statSync(path.join(generatedDir, d)).isDirectory(),\n    );\n    expect(skillDirs.length).toBeGreaterThanOrEqual(1);\n\n    /* All SKILL.md files should still have valid frontmatter */\n    for (const dir of skillDirs) {\n      const skillPath = path.join(generatedDir, dir, 'SKILL.md');\n      expect(fs.existsSync(skillPath)).toBe(true);\n      const content = fs.readFileSync(skillPath, 'utf-8');\n      expect(content.startsWith('---')).toBe(true);\n      expect(content).toContain('name:');\n      
expect(content).toContain('description:');\n      expect(content.length).toBeGreaterThan(200);\n    }\n\n    /* CLAUDE.md should still reference generated skills */\n    const claudePath = path.join(tmpDir, 'CLAUDE.md');\n    expect(fs.existsSync(claudePath)).toBe(true);\n    const claudeContent = fs.readFileSync(claudePath, 'utf-8');\n    expect(claudeContent).toContain('.claude/skills/generated/');\n  }, 90000);\n});\n"
  },
  {
    "path": "gitnexus/test/integration/tree-sitter-languages.test.ts",
    "content": "import { describe, it, expect, beforeAll } from 'vitest';\nimport fs from 'fs';\nimport path from 'path';\nimport { loadParser, loadLanguage } from '../../src/core/tree-sitter/parser-loader.js';\nimport { LANGUAGE_QUERIES } from '../../src/core/ingestion/tree-sitter-queries.js';\nimport { SupportedLanguages } from '../../src/config/supported-languages.js';\nimport { getLanguageFromFilename } from '../../src/core/ingestion/utils.js';\nimport Parser from 'tree-sitter';\n\nconst fixturesDir = path.resolve(__dirname, '..', 'fixtures', 'sample-code');\n\nfunction readFixture(filename: string): string {\n  return fs.readFileSync(path.join(fixturesDir, filename), 'utf-8');\n}\n\nfunction parseAndQuery(parser: Parser, content: string, queryStr: string) {\n  const tree = parser.parse(content);\n  const lang = parser.getLanguage();\n  const query = new Parser.Query(lang, queryStr);\n  const matches = query.matches(tree.rootNode);\n  return { tree, matches };\n}\n\nfunction extractDefinitions(matches: any[]) {\n  const defs: { type: string; name: string }[] = [];\n  for (const match of matches) {\n    for (const capture of match.captures) {\n      if (capture.name === 'name' && match.captures.some((c: any) =>\n        c.name.startsWith('definition.'))) {\n        const defType = match.captures.find((c: any) => c.name.startsWith('definition.'))!.name;\n        defs.push({ type: defType, name: capture.node.text });\n      }\n    }\n  }\n  return defs;\n}\n\ndescribe('Tree-sitter multi-language parsing', () => {\n  let parser: Parser;\n\n  beforeAll(async () => {\n    parser = await loadParser();\n  });\n\n  describe('TypeScript', () => {\n    it('parses functions, classes, interfaces, methods, and arrow functions', async () => {\n      await loadLanguage(SupportedLanguages.TypeScript, 'simple.ts');\n      const content = readFixture('simple.ts');\n      const { matches } = parseAndQuery(parser, content, LANGUAGE_QUERIES[SupportedLanguages.TypeScript]);\n      
const defs = extractDefinitions(matches);\n\n      const defTypes = defs.map(d => d.type);\n      expect(defTypes).toContain('definition.class');\n      expect(defTypes).toContain('definition.function');\n    });\n  });\n\n  describe('TSX', () => {\n    it('parses JSX components with tsx grammar', async () => {\n      await loadLanguage(SupportedLanguages.TypeScript, 'simple.tsx');\n      const content = readFixture('simple.tsx');\n      const { matches } = parseAndQuery(parser, content, LANGUAGE_QUERIES[SupportedLanguages.TypeScript]);\n      const defs = extractDefinitions(matches);\n\n      expect(defs.length).toBeGreaterThan(0);\n      // Should detect Counter class and Button/useCounter functions\n      const names = defs.map(d => d.name);\n      expect(names).toContain('Counter');\n    });\n  });\n\n  describe('JavaScript', () => {\n    it('parses class and function declarations', async () => {\n      await loadLanguage(SupportedLanguages.JavaScript);\n      const content = readFixture('simple.js');\n      const { matches } = parseAndQuery(parser, content, LANGUAGE_QUERIES[SupportedLanguages.JavaScript]);\n      const defs = extractDefinitions(matches);\n\n      expect(defs.length).toBeGreaterThan(0);\n      const names = defs.map(d => d.name);\n      expect(names).toContain('EventEmitter');\n      expect(names).toContain('createLogger');\n    });\n  });\n\n  describe('Python', () => {\n    it('parses class and function definitions', async () => {\n      await loadLanguage(SupportedLanguages.Python);\n      const content = readFixture('simple.py');\n      const { matches } = parseAndQuery(parser, content, LANGUAGE_QUERIES[SupportedLanguages.Python]);\n      const defs = extractDefinitions(matches);\n\n      const defTypes = defs.map(d => d.type);\n      expect(defTypes).toContain('definition.class');\n      expect(defTypes).toContain('definition.function');\n    });\n  });\n\n  describe('Java', () => {\n    it('parses class, method, and constructor 
declarations', async () => {\n      await loadLanguage(SupportedLanguages.Java);\n      const content = readFixture('simple.java');\n      const { matches } = parseAndQuery(parser, content, LANGUAGE_QUERIES[SupportedLanguages.Java]);\n      const defs = extractDefinitions(matches);\n\n      expect(defs.length).toBeGreaterThan(0);\n      const defTypes = defs.map(d => d.type);\n      expect(defTypes).toContain('definition.class');\n      expect(defTypes).toContain('definition.method');\n    });\n  });\n\n  describe('Go', () => {\n    it('parses function and type declarations', async () => {\n      await loadLanguage(SupportedLanguages.Go);\n      const content = readFixture('simple.go');\n      const { matches } = parseAndQuery(parser, content, LANGUAGE_QUERIES[SupportedLanguages.Go]);\n      const defs = extractDefinitions(matches);\n\n      expect(defs.length).toBeGreaterThan(0);\n      const defTypes = defs.map(d => d.type);\n      expect(defTypes).toContain('definition.function');\n    });\n  });\n\n  describe('C', () => {\n    it('parses function definitions and structs', async () => {\n      await loadLanguage(SupportedLanguages.C);\n      const content = readFixture('simple.c');\n      const { matches } = parseAndQuery(parser, content, LANGUAGE_QUERIES[SupportedLanguages.C]);\n      const defs = extractDefinitions(matches);\n\n      expect(defs.length).toBeGreaterThan(0);\n      const defTypes = defs.map(d => d.type);\n      expect(defTypes).toContain('definition.function');\n      const names = defs.map(d => d.name);\n      expect(names).toContain('add');\n      expect(names).toContain('internal_helper');\n      expect(names).toContain('print_message');\n    });\n\n    it('captures pointer-returning function definitions', async () => {\n      await loadLanguage(SupportedLanguages.C);\n      const code = `int* get_ptr() { return 0; }\\nchar** get_strs() { return 0; }`;\n      const { matches } = parseAndQuery(parser, code, 
LANGUAGE_QUERIES[SupportedLanguages.C]);\n      const defs = extractDefinitions(matches);\n      const names = defs.map(d => d.name);\n      expect(names).toContain('get_ptr');\n      expect(names).toContain('get_strs');\n    });\n\n    it('captures macros and typedefs', async () => {\n      await loadLanguage(SupportedLanguages.C);\n      const code = `#define MAX_SIZE 100\\ntypedef unsigned int uint;\\nstruct Point { int x; int y; };`;\n      const { matches } = parseAndQuery(parser, code, LANGUAGE_QUERIES[SupportedLanguages.C]);\n      const defs = extractDefinitions(matches);\n      const names = defs.map(d => d.name);\n      expect(names).toContain('MAX_SIZE');\n      expect(names).toContain('uint');\n      expect(names).toContain('Point');\n    });\n  });\n\n  describe('C++', () => {\n    it('parses class, function, and namespace declarations', async () => {\n      await loadLanguage(SupportedLanguages.CPlusPlus);\n      const content = readFixture('simple.cpp');\n      const { matches } = parseAndQuery(parser, content, LANGUAGE_QUERIES[SupportedLanguages.CPlusPlus]);\n      const defs = extractDefinitions(matches);\n\n      expect(defs.length).toBeGreaterThan(0);\n      const defTypes = defs.map(d => d.type);\n      expect(defTypes).toContain('definition.class');\n      const names = defs.map(d => d.name);\n      expect(names).toContain('UserManager');\n      expect(names).toContain('helperFunction');\n    });\n\n    it('captures pointer-returning methods and functions', async () => {\n      await loadLanguage(SupportedLanguages.CPlusPlus);\n      const code = `int* Factory::create() { return nullptr; }\\nchar** getNames() { return 0; }`;\n      const { matches } = parseAndQuery(parser, code, LANGUAGE_QUERIES[SupportedLanguages.CPlusPlus]);\n      const defs = extractDefinitions(matches);\n      const names = defs.map(d => d.name);\n      expect(names).toContain('create');\n      expect(names).toContain('getNames');\n    });\n\n    it('captures 
reference-returning functions', async () => {\n      await loadLanguage(SupportedLanguages.CPlusPlus);\n      const code = `int& Container::at(int i) { static int x; return x; }`;\n      const { matches } = parseAndQuery(parser, code, LANGUAGE_QUERIES[SupportedLanguages.CPlusPlus]);\n      const defs = extractDefinitions(matches);\n      const names = defs.map(d => d.name);\n      expect(names).toContain('at');\n    });\n\n    it('captures destructor definitions', async () => {\n      await loadLanguage(SupportedLanguages.CPlusPlus);\n      const code = `MyClass::~MyClass() { cleanup(); }`;\n      const { matches } = parseAndQuery(parser, code, LANGUAGE_QUERIES[SupportedLanguages.CPlusPlus]);\n      const defs = extractDefinitions(matches);\n      const names = defs.map(d => d.name);\n      expect(names).toContain('~MyClass');\n    });\n\n    it('captures template declarations', async () => {\n      await loadLanguage(SupportedLanguages.CPlusPlus);\n      const code = `template<typename T> class Container { T value; };`;\n      const { matches } = parseAndQuery(parser, code, LANGUAGE_QUERIES[SupportedLanguages.CPlusPlus]);\n      const defs = extractDefinitions(matches);\n      const names = defs.map(d => d.name);\n      expect(names).toContain('Container');\n    });\n\n    it('captures namespace definitions', async () => {\n      await loadLanguage(SupportedLanguages.CPlusPlus);\n      const code = `namespace utils { void helper() {} }`;\n      const { matches } = parseAndQuery(parser, code, LANGUAGE_QUERIES[SupportedLanguages.CPlusPlus]);\n      const defs = extractDefinitions(matches);\n      const names = defs.map(d => d.name);\n      expect(names).toContain('utils');\n      expect(names).toContain('helper');\n    });\n  });\n\n  describe('C#', () => {\n    it('parses class, method, and namespace declarations', async () => {\n      await loadLanguage(SupportedLanguages.CSharp);\n      const content = readFixture('simple.cs');\n      const { matches } = 
parseAndQuery(parser, content, LANGUAGE_QUERIES[SupportedLanguages.CSharp]);\n      const defs = extractDefinitions(matches);\n\n      expect(defs.length).toBeGreaterThan(0);\n      const defTypes = defs.map(d => d.type);\n      expect(defTypes).toContain('definition.class');\n      expect(defTypes).toContain('definition.method');\n      expect(defTypes).toContain('definition.namespace');\n      const names = defs.map(d => d.name);\n      expect(names).toContain('Calculator');\n      expect(names).toContain('Add');\n    });\n\n    it('captures interfaces, enums, records, structs', async () => {\n      await loadLanguage(SupportedLanguages.CSharp);\n      const content = readFixture('simple.cs');\n      const { matches } = parseAndQuery(parser, content, LANGUAGE_QUERIES[SupportedLanguages.CSharp]);\n      const defs = extractDefinitions(matches);\n      const names = defs.map(d => d.name);\n      expect(names).toContain('ICalculator');\n      expect(names).toContain('Operation');\n      expect(names).toContain('CalculationResult');\n      expect(names).toContain('Point');\n    });\n\n    it('captures file-scoped namespace declarations', async () => {\n      await loadLanguage(SupportedLanguages.CSharp);\n      const code = `namespace MyApp;\\npublic class Program { }`;\n      const { matches } = parseAndQuery(parser, code, LANGUAGE_QUERIES[SupportedLanguages.CSharp]);\n      const defs = extractDefinitions(matches);\n      const names = defs.map(d => d.name);\n      expect(names).toContain('MyApp');\n      expect(names).toContain('Program');\n    });\n\n    it('captures constructors and properties', async () => {\n      await loadLanguage(SupportedLanguages.CSharp);\n      const content = readFixture('simple.cs');\n      const { matches } = parseAndQuery(parser, content, LANGUAGE_QUERIES[SupportedLanguages.CSharp]);\n      const defs = extractDefinitions(matches);\n      const defTypes = defs.map(d => d.type);\n      
expect(defTypes).toContain('definition.constructor');\n      expect(defTypes).toContain('definition.property');\n    });\n  });\n\n  describe('Rust', () => {\n    it('parses fn, struct, impl, trait, and enum', async () => {\n      await loadLanguage(SupportedLanguages.Rust);\n      const content = readFixture('simple.rs');\n      const { matches } = parseAndQuery(parser, content, LANGUAGE_QUERIES[SupportedLanguages.Rust]);\n      const defs = extractDefinitions(matches);\n\n      expect(defs.length).toBeGreaterThan(0);\n      const defTypes = defs.map(d => d.type);\n      expect(defTypes).toContain('definition.function');\n      const names = defs.map(d => d.name);\n      expect(names).toContain('public_function');\n      expect(names).toContain('private_function');\n      expect(names).toContain('Config');\n    });\n\n    it('captures impl blocks and methods', async () => {\n      await loadLanguage(SupportedLanguages.Rust);\n      const content = readFixture('simple.rs');\n      const { matches } = parseAndQuery(parser, content, LANGUAGE_QUERIES[SupportedLanguages.Rust]);\n      const defs = extractDefinitions(matches);\n      const defTypes = defs.map(d => d.type);\n      expect(defTypes).toContain('definition.impl');\n      const names = defs.map(d => d.name);\n      expect(names).toContain('new');\n    });\n\n    it('captures generic impl blocks', async () => {\n      await loadLanguage(SupportedLanguages.Rust);\n      const code = `struct Vec<T> { data: Vec<T> }\\nimpl<T> Vec<T> { fn len(&self) -> usize { 0 } }`;\n      const { matches } = parseAndQuery(parser, code, LANGUAGE_QUERIES[SupportedLanguages.Rust]);\n      const defs = extractDefinitions(matches);\n      const names = defs.map(d => d.name);\n      expect(names).toContain('Vec');\n    });\n\n    it('captures trait impl heritage', async () => {\n      await loadLanguage(SupportedLanguages.Rust);\n      const code = `trait Display { fn fmt(&self); }\\nstruct Foo;\\nimpl Display for Foo { fn fmt(&self) 
{} }`;\n      const { matches } = parseAndQuery(parser, code, LANGUAGE_QUERIES[SupportedLanguages.Rust]);\n      // Look for heritage captures\n      const heritageCaptures: string[] = [];\n      for (const match of matches) {\n        for (const capture of match.captures) {\n          if (capture.name.startsWith('heritage.')) {\n            heritageCaptures.push(`${capture.name}:${capture.node.text}`);\n          }\n        }\n      }\n      expect(heritageCaptures).toContain('heritage.trait:Display');\n      expect(heritageCaptures).toContain('heritage.class:Foo');\n    });\n\n    it('captures modules, consts, and statics', async () => {\n      await loadLanguage(SupportedLanguages.Rust);\n      const code = `mod utils { pub fn helper() {} }\\npub const MAX: usize = 100;\\nstatic INSTANCE: i32 = 0;`;\n      const { matches } = parseAndQuery(parser, code, LANGUAGE_QUERIES[SupportedLanguages.Rust]);\n      const defs = extractDefinitions(matches);\n      const names = defs.map(d => d.name);\n      expect(names).toContain('utils');\n      expect(names).toContain('MAX');\n      expect(names).toContain('INSTANCE');\n    });\n  });\n\n  describe('PHP', () => {\n    it('parses class, function, and method declarations', async () => {\n      await loadLanguage(SupportedLanguages.PHP);\n      const content = readFixture('simple.php');\n      const { matches } = parseAndQuery(parser, content, LANGUAGE_QUERIES[SupportedLanguages.PHP]);\n      const defs = extractDefinitions(matches);\n\n      expect(defs.length).toBeGreaterThan(0);\n      const defTypes = defs.map(d => d.type);\n      expect(defTypes).toContain('definition.class');\n    });\n  });\n\n  describe('Swift', () => {\n    it('parses class, struct, protocol, and function if tree-sitter-swift is available', async () => {\n      try {\n        await loadLanguage(SupportedLanguages.Swift);\n      } catch {\n        // tree-sitter-swift not installed — skip\n        return;\n      }\n\n      const content = 
readFixture('simple.swift');\n      const { matches } = parseAndQuery(parser, content, LANGUAGE_QUERIES[SupportedLanguages.Swift]);\n      const defs = extractDefinitions(matches);\n\n      expect(defs.length).toBeGreaterThan(0);\n    });\n\n    it('gracefully handles missing tree-sitter-swift', async () => {\n      // If Swift is NOT available, loadLanguage should throw\n      // If it IS available, this test just passes\n      try {\n        await loadLanguage(SupportedLanguages.Swift);\n      } catch (e: any) {\n        expect(e.message).toContain('Unsupported language');\n      }\n    });\n  });\n\n  describe('unhappy path', () => {\n    it('returns null/undefined for unsupported file extensions', () => {\n      expect(getLanguageFromFilename('archive.xyz')).toBeNull();\n      expect(getLanguageFromFilename('data.unknown')).toBeNull();\n    });\n\n    it('handles empty string file path', () => {\n      expect(getLanguageFromFilename('')).toBeNull();\n    });\n\n    it('returns null/undefined for binary file extensions', () => {\n      expect(getLanguageFromFilename('program.exe')).toBeNull();\n      expect(getLanguageFromFilename('library.dll')).toBeNull();\n      expect(getLanguageFromFilename('object.so')).toBeNull();\n    });\n  });\n\n  describe('cross-language assertions', () => {\n    it('all supported languages produce at least one definition from fixtures', async () => {\n      const langFixtures: [SupportedLanguages, string, string?][] = [\n        [SupportedLanguages.TypeScript, 'simple.ts'],\n        [SupportedLanguages.JavaScript, 'simple.js'],\n        [SupportedLanguages.Python, 'simple.py'],\n        [SupportedLanguages.Java, 'simple.java'],\n        [SupportedLanguages.Go, 'simple.go'],\n        [SupportedLanguages.C, 'simple.c'],\n        [SupportedLanguages.CPlusPlus, 'simple.cpp'],\n        [SupportedLanguages.CSharp, 'simple.cs'],\n        [SupportedLanguages.Rust, 'simple.rs'],\n        [SupportedLanguages.PHP, 'simple.php'],\n      ];\n\n  
    for (const [lang, fixture, filePath] of langFixtures) {\n        await loadLanguage(lang, filePath || fixture);\n        const content = readFixture(fixture);\n        const { matches } = parseAndQuery(parser, content, LANGUAGE_QUERIES[lang]);\n        const defs = extractDefinitions(matches);\n        expect(defs.length, `${lang} (${fixture}) should have definitions`).toBeGreaterThan(0);\n      }\n    });\n  });\n\n  describe('parser edge cases', () => {\n    it('loadLanguage throws for unsupported language', async () => {\n      await expect(loadLanguage('brainfuck' as any)).rejects.toThrow(/unsupported language/i);\n    });\n\n    it('parsing empty file content produces empty matches', async () => {\n      await loadLanguage(SupportedLanguages.TypeScript, 'empty.ts');\n      const tree = parser.parse('');\n      expect(tree.rootNode).toBeDefined();\n\n      const lang = parser.getLanguage();\n      const query = new Parser.Query(lang, LANGUAGE_QUERIES[SupportedLanguages.TypeScript]);\n      const matches = query.matches(tree.rootNode);\n      expect(matches).toEqual([]);\n    });\n\n    it('parsing malformed code does not crash', async () => {\n      await loadLanguage(SupportedLanguages.TypeScript, 'malformed.ts');\n      const tree = parser.parse('function {{{ class >>><< if(( end');\n      expect(tree.rootNode).toBeDefined();\n      expect(tree.rootNode.hasError).toBe(true);\n    });\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/integration/worker-pool.test.ts",
    "content": "/**\n * Integration Tests: Worker Pool & Parse Worker\n *\n * Verifies that the worker pool can spawn real worker threads using the\n * compiled dist/ parse-worker.js and process files correctly.\n * This is critical for cross-platform CI where vitest runs from src/\n * but workers need compiled .js files.\n */\nimport { describe, it, expect, afterEach } from 'vitest';\nimport { createWorkerPool, WorkerPool } from '../../src/core/ingestion/workers/worker-pool.js';\nimport { pathToFileURL } from 'node:url';\nimport path from 'node:path';\nimport fs from 'node:fs';\n\nconst DIST_WORKER = path.resolve(__dirname, '..', '..', 'dist', 'core', 'ingestion', 'workers', 'parse-worker.js');\nconst hasDistWorker = fs.existsSync(DIST_WORKER);\n\ndescribe('worker pool integration', () => {\n  let pool: WorkerPool | undefined;\n\n  afterEach(async () => {\n    if (pool) {\n      await pool.terminate();\n      pool = undefined;\n    }\n  });\n\n  it.skipIf(!hasDistWorker)('creates a worker pool from dist/ worker', () => {\n    const workerUrl = pathToFileURL(DIST_WORKER) as URL;\n    pool = createWorkerPool(workerUrl, 1);\n    expect(pool.size).toBe(1);\n  });\n\n  it.skipIf(!hasDistWorker)('dispatches an empty batch without error', async () => {\n    const workerUrl = pathToFileURL(DIST_WORKER) as URL;\n    pool = createWorkerPool(workerUrl, 1);\n    const results = await pool.dispatch([]);\n    expect(results).toEqual([]);\n  });\n\n  it.skipIf(!hasDistWorker)('parses a single TypeScript file through worker', async () => {\n    const workerUrl = pathToFileURL(DIST_WORKER) as URL;\n    pool = createWorkerPool(workerUrl, 1);\n\n    const fixtureFile = path.resolve(__dirname, '..', 'fixtures', 'mini-repo', 'src', 'validator.ts');\n    const content = fs.readFileSync(fixtureFile, 'utf-8');\n\n    const results = await pool.dispatch<any, any>([\n      { path: 'src/validator.ts', content },\n    ]);\n\n    // Worker returns an array of results (one per worker chunk)\n  
  expect(results).toHaveLength(1);\n    const result = results[0];\n    expect(result.fileCount).toBe(1);\n    expect(result.nodes.length).toBeGreaterThan(0);\n\n    // Should find the validateInput function\n    const names = result.nodes.map((n: any) => n.properties.name);\n    expect(names).toContain('validateInput');\n  });\n\n  it.skipIf(!hasDistWorker)('parses multiple files across workers', async () => {\n    const workerUrl = pathToFileURL(DIST_WORKER) as URL;\n    pool = createWorkerPool(workerUrl, 2);\n\n    const fixturesDir = path.resolve(__dirname, '..', 'fixtures', 'mini-repo', 'src');\n    const files = fs.readdirSync(fixturesDir)\n      .filter(f => f.endsWith('.ts'))\n      .map(f => ({\n        path: `src/${f}`,\n        content: fs.readFileSync(path.join(fixturesDir, f), 'utf-8'),\n      }));\n\n    expect(files.length).toBeGreaterThanOrEqual(4);\n\n    const results = await pool.dispatch<any, any>(files);\n\n    // Each worker chunk returns a result\n    expect(results.length).toBeGreaterThan(0);\n\n    // Total files parsed should match input\n    const totalParsed = results.reduce((sum: number, r: any) => sum + r.fileCount, 0);\n    expect(totalParsed).toBe(files.length);\n\n    // Should find symbols from multiple files\n    const allNames = results.flatMap((r: any) => r.nodes.map((n: any) => n.properties.name));\n    expect(allNames).toContain('handleRequest');\n    expect(allNames).toContain('validateInput');\n    expect(allNames).toContain('saveToDb');\n    expect(allNames).toContain('formatResponse');\n  });\n\n  it.skipIf(!hasDistWorker)('reports progress during parsing', async () => {\n    const workerUrl = pathToFileURL(DIST_WORKER) as URL;\n    pool = createWorkerPool(workerUrl, 1);\n\n    const fixturesDir = path.resolve(__dirname, '..', 'fixtures', 'mini-repo', 'src');\n    const files = fs.readdirSync(fixturesDir)\n      .filter(f => f.endsWith('.ts'))\n      .map(f => ({\n        path: `src/${f}`,\n        content: 
fs.readFileSync(path.join(fixturesDir, f), 'utf-8'),\n      }));\n\n    const progressCalls: number[] = [];\n    await pool.dispatch<any, any>(files, (filesProcessed) => {\n      progressCalls.push(filesProcessed);\n    });\n\n    // Progress callbacks are best-effort — with a small batch the worker may\n    // process all files before the progress message is delivered. Just verify\n    // that if progress was reported, the values are sensible.\n    if (progressCalls.length > 0) {\n      expect(progressCalls[progressCalls.length - 1]).toBe(files.length);\n    }\n  });\n\n  it.skipIf(!hasDistWorker)('terminates cleanly', async () => {\n    const workerUrl = pathToFileURL(DIST_WORKER) as URL;\n    pool = createWorkerPool(workerUrl, 2);\n    await pool.terminate();\n    pool = undefined; // already terminated\n  });\n\n  it('fails gracefully with invalid worker path', () => {\n    const badUrl = pathToFileURL('/nonexistent/worker.js') as URL;\n    // createWorkerPool validates the worker script exists before spawning\n    expect(() => {\n      pool = createWorkerPool(badUrl, 1);\n    }).toThrow(/Worker script not found/);\n  });\n\n  // ─── Unhappy paths ──────────────────────────────────────────────────\n\n  it.skipIf(!hasDistWorker)('dispatch after terminate rejects', async () => {\n    const workerUrl = pathToFileURL(DIST_WORKER) as URL;\n    pool = createWorkerPool(workerUrl, 1);\n    const terminatedPool = pool;\n    await terminatedPool.terminate();\n    pool = undefined; // already terminated — prevent afterEach double-terminate\n\n    await expect(terminatedPool.dispatch([{ path: 'x.ts', content: 'const x = 1;' }]))\n      .rejects.toThrow();\n  });\n\n  it.skipIf(!hasDistWorker)('double terminate does not throw', async () => {\n    const workerUrl = pathToFileURL(DIST_WORKER) as URL;\n    pool = createWorkerPool(workerUrl, 1);\n    await pool.terminate();\n    await expect(pool.terminate()).resolves.toBeUndefined();\n    pool = undefined;\n  });\n\n  
it.skipIf(!hasDistWorker)('dispatches entries with empty content string without crashing', async () => {\n    const workerUrl = pathToFileURL(DIST_WORKER) as URL;\n    pool = createWorkerPool(workerUrl, 1);\n\n    const results = await pool.dispatch<any, any>([\n      { path: 'empty.ts', content: '' },\n    ]);\n\n    expect(results).toHaveLength(1);\n    const result = results[0];\n    expect(typeof result.fileCount).toBe('number');\n    expect(result.fileCount).toBeGreaterThanOrEqual(0);\n    expect(Array.isArray(result.nodes)).toBe(true);\n  });\n\n  it.skipIf(!hasDistWorker)('createWorkerPool with size 0 creates pool with zero workers', () => {\n    const workerUrl = pathToFileURL(DIST_WORKER) as URL;\n    const zeroPool = createWorkerPool(workerUrl, 0);\n    expect(zeroPool.size).toBe(0);\n    return zeroPool.terminate();\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/ai-context.test.ts",
    "content": "import { describe, it, expect, vi, beforeAll, afterAll } from 'vitest';\nimport fs from 'fs/promises';\nimport path from 'path';\nimport os from 'os';\nimport { generateAIContextFiles } from '../../src/cli/ai-context.js';\n\ndescribe('generateAIContextFiles', () => {\n  let tmpDir: string;\n  let storagePath: string;\n\n  beforeAll(async () => {\n    tmpDir = await fs.mkdtemp(path.join(os.tmpdir(), 'gn-ai-ctx-test-'));\n    storagePath = path.join(tmpDir, '.gitnexus');\n    await fs.mkdir(storagePath, { recursive: true });\n  });\n\n  afterAll(async () => {\n    try {\n      await fs.rm(tmpDir, { recursive: true, force: true });\n    } catch { /* best-effort */ }\n  });\n\n  it('generates context files', async () => {\n    const stats = {\n      nodes: 100,\n      edges: 200,\n      processes: 10,\n    };\n\n    const result = await generateAIContextFiles(tmpDir, storagePath, 'TestProject', stats);\n    expect(result.files).toBeDefined();\n    expect(result.files.length).toBeGreaterThan(0);\n  });\n\n  it('creates or updates CLAUDE.md with GitNexus section', async () => {\n    const stats = { nodes: 50, edges: 100, processes: 5 };\n    await generateAIContextFiles(tmpDir, storagePath, 'TestProject', stats);\n\n    const claudeMdPath = path.join(tmpDir, 'CLAUDE.md');\n    const content = await fs.readFile(claudeMdPath, 'utf-8');\n    expect(content).toContain('gitnexus:start');\n    expect(content).toContain('gitnexus:end');\n    expect(content).toContain('TestProject');\n  });\n\n  it('handles empty stats', async () => {\n    const stats = {};\n    const result = await generateAIContextFiles(tmpDir, storagePath, 'EmptyProject', stats);\n    expect(result.files).toBeDefined();\n  });\n\n  it('updates existing CLAUDE.md without duplicating', async () => {\n    const stats = { nodes: 10 };\n\n    // Run twice\n    await generateAIContextFiles(tmpDir, storagePath, 'TestProject', stats);\n    await generateAIContextFiles(tmpDir, storagePath, 
'TestProject', stats);\n\n    const claudeMdPath = path.join(tmpDir, 'CLAUDE.md');\n    const content = await fs.readFile(claudeMdPath, 'utf-8');\n\n    // Should only have one gitnexus section\n    const starts = (content.match(/gitnexus:start/g) || []).length;\n    expect(starts).toBe(1);\n  });\n\n  it('installs skills files', async () => {\n    const stats = { nodes: 10 };\n    const result = await generateAIContextFiles(tmpDir, storagePath, 'TestProject', stats);\n\n    // Should have installed skill files\n    const skillsDir = path.join(tmpDir, '.claude', 'skills', 'gitnexus');\n    try {\n      const entries = await fs.readdir(skillsDir, { recursive: true });\n      expect(entries.length).toBeGreaterThan(0);\n    } catch {\n      // Skills dir may not be created if skills source doesn't exist in test context\n    }\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/ast-cache.test.ts",
    "content": "import { describe, it, expect, vi, beforeEach } from 'vitest';\nimport { createASTCache, type ASTCache } from '../../src/core/ingestion/ast-cache.js';\n\n// Create a minimal mock tree object (mimics Parser.Tree interface)\nfunction mockTree(id: string): any {\n  return { rootNode: { type: 'program', text: id }, delete: vi.fn() };\n}\n\ndescribe('ASTCache', () => {\n  let cache: ASTCache;\n\n  beforeEach(() => {\n    cache = createASTCache(3);\n  });\n\n  describe('get / set', () => {\n    it('returns undefined for cache miss', () => {\n      expect(cache.get('nonexistent.ts')).toBeUndefined();\n    });\n\n    it('returns cached tree on hit', () => {\n      const tree = mockTree('test');\n      cache.set('src/index.ts', tree);\n      expect(cache.get('src/index.ts')).toBe(tree);\n    });\n\n    it('overwrites existing entry for same key', () => {\n      const tree1 = mockTree('v1');\n      const tree2 = mockTree('v2');\n      cache.set('src/index.ts', tree1);\n      cache.set('src/index.ts', tree2);\n      expect(cache.get('src/index.ts')).toBe(tree2);\n    });\n  });\n\n  describe('LRU eviction', () => {\n    it('evicts least recently used when capacity exceeded', () => {\n      cache.set('a.ts', mockTree('a'));\n      cache.set('b.ts', mockTree('b'));\n      cache.set('c.ts', mockTree('c'));\n      // Cache is full (maxSize=3). 
Adding one more evicts 'a'\n      cache.set('d.ts', mockTree('d'));\n      expect(cache.get('a.ts')).toBeUndefined();\n      expect(cache.get('b.ts')).toBeDefined();\n      expect(cache.get('d.ts')).toBeDefined();\n    });\n\n    it('accessing an entry makes it recently used', () => {\n      cache.set('a.ts', mockTree('a'));\n      cache.set('b.ts', mockTree('b'));\n      cache.set('c.ts', mockTree('c'));\n      // Touch 'a' to make it recently used\n      cache.get('a.ts');\n      // Now 'b' is LRU\n      cache.set('d.ts', mockTree('d'));\n      expect(cache.get('a.ts')).toBeDefined();\n      expect(cache.get('b.ts')).toBeUndefined();\n    });\n  });\n\n  describe('clear', () => {\n    it('removes all entries', () => {\n      cache.set('a.ts', mockTree('a'));\n      cache.set('b.ts', mockTree('b'));\n      cache.clear();\n      expect(cache.get('a.ts')).toBeUndefined();\n      expect(cache.get('b.ts')).toBeUndefined();\n      expect(cache.stats().size).toBe(0);\n    });\n  });\n\n  describe('stats', () => {\n    it('reports size and maxSize', () => {\n      expect(cache.stats()).toEqual({ size: 0, maxSize: 3 });\n      cache.set('a.ts', mockTree('a'));\n      expect(cache.stats()).toEqual({ size: 1, maxSize: 3 });\n      cache.set('b.ts', mockTree('b'));\n      expect(cache.stats()).toEqual({ size: 2, maxSize: 3 });\n    });\n\n    it('uses default maxSize of 50', () => {\n      const defaultCache = createASTCache();\n      expect(defaultCache.stats().maxSize).toBe(50);\n    });\n\n    it('clamps maxSize of 0 to 1 to prevent LRU cache error', () => {\n      const zeroCache = createASTCache(0);\n      expect(zeroCache.stats().maxSize).toBe(1);\n      // Should still function correctly\n      const tree = mockTree('test');\n      zeroCache.set('a.ts', tree);\n      expect(zeroCache.get('a.ts')).toBe(tree);\n    });\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/bm25-search.test.ts",
    "content": "import { describe, it, expect } from 'vitest';\nimport { searchFTSFromLbug, type BM25SearchResult } from '../../src/core/search/bm25-index.js';\n\ndescribe('BM25 search', () => {\n  describe('searchFTSFromLbug', () => {\n    it('returns empty array when LadybugDB is not initialized', async () => {\n      // Without LadybugDB init, search should return empty (not crash)\n      const results = await searchFTSFromLbug('test query');\n      expect(Array.isArray(results)).toBe(true);\n      expect(results).toHaveLength(0);\n    });\n\n    it('handles empty query', async () => {\n      const results = await searchFTSFromLbug('');\n      expect(Array.isArray(results)).toBe(true);\n    });\n\n    it('accepts custom limit parameter', async () => {\n      const results = await searchFTSFromLbug('test', 5);\n      expect(Array.isArray(results)).toBe(true);\n    });\n  });\n\n  describe('BM25SearchResult type', () => {\n    it('has correct shape', () => {\n      const result: BM25SearchResult = {\n        filePath: 'src/index.ts',\n        score: 1.5,\n        rank: 1,\n      };\n      expect(result.filePath).toBe('src/index.ts');\n      expect(result.score).toBe(1.5);\n      expect(result.rank).toBe(1);\n    });\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/call-form.test.ts",
    "content": "import { describe, it, expect } from 'vitest';\nimport { inferCallForm, extractReceiverName, type SyntaxNode } from '../../src/core/ingestion/utils.js';\nimport { createSymbolTable } from '../../src/core/ingestion/symbol-table.js';\nimport Parser from 'tree-sitter';\nimport TypeScript from 'tree-sitter-typescript';\nimport Python from 'tree-sitter-python';\nimport Java from 'tree-sitter-java';\nimport CSharp from 'tree-sitter-c-sharp';\nimport Kotlin from 'tree-sitter-kotlin';\nimport Go from 'tree-sitter-go';\nimport Rust from 'tree-sitter-rust';\nimport CPP from 'tree-sitter-cpp';\nimport PHP from 'tree-sitter-php';\nimport { LANGUAGE_QUERIES } from '../../src/core/ingestion/tree-sitter-queries.js';\nimport { SupportedLanguages } from '../../src/config/supported-languages.js';\n\n/**\n * Helper: parse code, run the language query, and return all @call captures\n * as { callNode, nameNode } pairs.\n */\nfunction extractCallCaptures(\n  parser: Parser,\n  code: string,\n  language: string,\n): Array<{ callNode: SyntaxNode; nameNode: SyntaxNode; calledName: string }> {\n  const queryStr = LANGUAGE_QUERIES[language];\n  if (!queryStr) throw new Error(`No query for ${language}`);\n\n  const tree = parser.parse(code);\n  const lang = parser.getLanguage();\n  const query = new Parser.Query(lang, queryStr);\n  const matches = query.matches(tree.rootNode);\n\n  const results: Array<{ callNode: SyntaxNode; nameNode: SyntaxNode; calledName: string }> = [];\n\n  for (const match of matches) {\n    const captureMap: Record<string, SyntaxNode> = {};\n    for (const c of match.captures) {\n      captureMap[c.name] = c.node;\n    }\n    if (captureMap['call'] && captureMap['call.name']) {\n      results.push({\n        callNode: captureMap['call'],\n        nameNode: captureMap['call.name'],\n        calledName: captureMap['call.name'].text,\n      });\n    }\n  }\n\n  return results;\n}\n\ndescribe('inferCallForm', () => {\n  const parser = new Parser();\n\n  
describe('TypeScript', () => {\n    it('detects free call', () => {\n      parser.setLanguage(TypeScript.typescript);\n      const captures = extractCallCaptures(parser, 'doStuff()', SupportedLanguages.TypeScript);\n      const match = captures.find(c => c.calledName === 'doStuff');\n      expect(match).toBeDefined();\n      expect(inferCallForm(match!.callNode, match!.nameNode)).toBe('free');\n    });\n\n    it('detects member call', () => {\n      parser.setLanguage(TypeScript.typescript);\n      const captures = extractCallCaptures(parser, 'user.save()', SupportedLanguages.TypeScript);\n      const match = captures.find(c => c.calledName === 'save');\n      expect(match).toBeDefined();\n      expect(inferCallForm(match!.callNode, match!.nameNode)).toBe('member');\n    });\n  });\n\n  describe('Python', () => {\n    it('detects free call', () => {\n      parser.setLanguage(Python);\n      const captures = extractCallCaptures(parser, 'print_result()', SupportedLanguages.Python);\n      const match = captures.find(c => c.calledName === 'print_result');\n      expect(match).toBeDefined();\n      expect(inferCallForm(match!.callNode, match!.nameNode)).toBe('free');\n    });\n\n    it('detects member call', () => {\n      parser.setLanguage(Python);\n      const captures = extractCallCaptures(parser, 'self.save()', SupportedLanguages.Python);\n      const match = captures.find(c => c.calledName === 'save');\n      expect(match).toBeDefined();\n      expect(inferCallForm(match!.callNode, match!.nameNode)).toBe('member');\n    });\n  });\n\n  describe('Java', () => {\n    it('detects free call (no object)', () => {\n      parser.setLanguage(Java);\n      const code = `class Foo { void run() { doStuff(); } }`;\n      const captures = extractCallCaptures(parser, code, SupportedLanguages.Java);\n      const match = captures.find(c => c.calledName === 'doStuff');\n      expect(match).toBeDefined();\n      expect(inferCallForm(match!.callNode, 
match!.nameNode)).toBe('free');\n    });\n\n    it('detects member call (with object)', () => {\n      parser.setLanguage(Java);\n      const code = `class Foo { void run() { user.save(); } }`;\n      const captures = extractCallCaptures(parser, code, SupportedLanguages.Java);\n      const match = captures.find(c => c.calledName === 'save');\n      expect(match).toBeDefined();\n      expect(inferCallForm(match!.callNode, match!.nameNode)).toBe('member');\n    });\n  });\n\n  describe('C#', () => {\n    it('detects free call', () => {\n      parser.setLanguage(CSharp);\n      const code = `class Foo { void Run() { DoStuff(); } }`;\n      const captures = extractCallCaptures(parser, code, SupportedLanguages.CSharp);\n      const match = captures.find(c => c.calledName === 'DoStuff');\n      expect(match).toBeDefined();\n      expect(inferCallForm(match!.callNode, match!.nameNode)).toBe('free');\n    });\n\n    it('detects member call', () => {\n      parser.setLanguage(CSharp);\n      const code = `class Foo { void Run() { user.Save(); } }`;\n      const captures = extractCallCaptures(parser, code, SupportedLanguages.CSharp);\n      const match = captures.find(c => c.calledName === 'Save');\n      expect(match).toBeDefined();\n      expect(inferCallForm(match!.callNode, match!.nameNode)).toBe('member');\n    });\n  });\n\n  describe('Go', () => {\n    it('detects free call', () => {\n      parser.setLanguage(Go);\n      const code = `package main\\nfunc main() { doStuff() }`;\n      const captures = extractCallCaptures(parser, code, SupportedLanguages.Go);\n      const match = captures.find(c => c.calledName === 'doStuff');\n      expect(match).toBeDefined();\n      expect(inferCallForm(match!.callNode, match!.nameNode)).toBe('free');\n    });\n\n    it('detects member call via selector', () => {\n      parser.setLanguage(Go);\n      const code = `package main\\nfunc main() { user.Save() }`;\n      const captures = extractCallCaptures(parser, code, 
SupportedLanguages.Go);\n      const match = captures.find(c => c.calledName === 'Save');\n      expect(match).toBeDefined();\n      expect(inferCallForm(match!.callNode, match!.nameNode)).toBe('member');\n    });\n  });\n\n  describe('Rust', () => {\n    it('detects free call', () => {\n      parser.setLanguage(Rust);\n      const code = `fn main() { do_stuff(); }`;\n      const captures = extractCallCaptures(parser, code, SupportedLanguages.Rust);\n      const match = captures.find(c => c.calledName === 'do_stuff');\n      expect(match).toBeDefined();\n      expect(inferCallForm(match!.callNode, match!.nameNode)).toBe('free');\n    });\n\n    it('detects member call via field_expression', () => {\n      parser.setLanguage(Rust);\n      const code = `fn main() { user.save(); }`;\n      const captures = extractCallCaptures(parser, code, SupportedLanguages.Rust);\n      const match = captures.find(c => c.calledName === 'save');\n      expect(match).toBeDefined();\n      expect(inferCallForm(match!.callNode, match!.nameNode)).toBe('member');\n    });\n\n    it('detects scoped call as free (Foo::new)', () => {\n      parser.setLanguage(Rust);\n      const code = `fn main() { Foo::new(); }`;\n      const captures = extractCallCaptures(parser, code, SupportedLanguages.Rust);\n      const match = captures.find(c => c.calledName === 'new');\n      expect(match).toBeDefined();\n      expect(inferCallForm(match!.callNode, match!.nameNode)).toBe('free');\n    });\n  });\n\n  describe('C++', () => {\n    it('detects free call', () => {\n      parser.setLanguage(CPP);\n      const code = `void main() { doStuff(); }`;\n      const captures = extractCallCaptures(parser, code, SupportedLanguages.CPlusPlus);\n      const match = captures.find(c => c.calledName === 'doStuff');\n      expect(match).toBeDefined();\n      expect(inferCallForm(match!.callNode, match!.nameNode)).toBe('free');\n    });\n\n    it('detects member call via field_expression', () => {\n      
parser.setLanguage(CPP);\n      const code = `void main() { obj.run(); }`;\n      const captures = extractCallCaptures(parser, code, SupportedLanguages.CPlusPlus);\n      const match = captures.find(c => c.calledName === 'run');\n      expect(match).toBeDefined();\n      expect(inferCallForm(match!.callNode, match!.nameNode)).toBe('member');\n    });\n  });\n\n  describe('PHP', () => {\n    it('detects free function call', () => {\n      parser.setLanguage(PHP.php);\n      const code = `<?php doStuff(); ?>`;\n      const captures = extractCallCaptures(parser, code, SupportedLanguages.PHP);\n      const match = captures.find(c => c.calledName === 'doStuff');\n      expect(match).toBeDefined();\n      expect(inferCallForm(match!.callNode, match!.nameNode)).toBe('free');\n    });\n\n    it('detects member call', () => {\n      parser.setLanguage(PHP.php);\n      const code = `<?php $user->save(); ?>`;\n      const captures = extractCallCaptures(parser, code, SupportedLanguages.PHP);\n      const match = captures.find(c => c.calledName === 'save');\n      expect(match).toBeDefined();\n      expect(inferCallForm(match!.callNode, match!.nameNode)).toBe('member');\n    });\n\n    it('detects static call as member', () => {\n      parser.setLanguage(PHP.php);\n      const code = `<?php Foo::bar(); ?>`;\n      const captures = extractCallCaptures(parser, code, SupportedLanguages.PHP);\n      const match = captures.find(c => c.calledName === 'bar');\n      expect(match).toBeDefined();\n      expect(inferCallForm(match!.callNode, match!.nameNode)).toBe('member');\n    });\n  });\n\n  describe('Kotlin', () => {\n    it('detects free call', () => {\n      parser.setLanguage(Kotlin);\n      const code = `fun main() { doStuff() }`;\n      const captures = extractCallCaptures(parser, code, SupportedLanguages.Kotlin);\n      const match = captures.find(c => c.calledName === 'doStuff');\n      expect(match).toBeDefined();\n      expect(inferCallForm(match!.callNode, 
match!.nameNode)).toBe('free');\n    });\n\n    it('detects member call via navigation_expression', () => {\n      parser.setLanguage(Kotlin);\n      const code = `fun main() { user.save() }`;\n      const captures = extractCallCaptures(parser, code, SupportedLanguages.Kotlin);\n      const match = captures.find(c => c.calledName === 'save');\n      expect(match).toBeDefined();\n      expect(inferCallForm(match!.callNode, match!.nameNode)).toBe('member');\n    });\n\n    it('Foo() is a free call (constructor_invocation only in heritage context)', () => {\n      parser.setLanguage(Kotlin);\n      const code = `fun main() { val x = Foo() }`;\n      const captures = extractCallCaptures(parser, code, SupportedLanguages.Kotlin);\n      const match = captures.find(c => c.calledName === 'Foo');\n      expect(match).toBeDefined();\n      // Kotlin Foo() is syntactically a call_expression, not constructor_invocation\n      // Constructor discrimination happens in Phase 2 via symbol kind matching\n      expect(inferCallForm(match!.callNode, match!.nameNode)).toBe('free');\n    });\n\n    it('detects constructor_invocation in heritage delegation as constructor', () => {\n      parser.setLanguage(Kotlin);\n      const code = `open class Base\\nclass Derived : Base()`;\n      const captures = extractCallCaptures(parser, code, SupportedLanguages.Kotlin);\n      const match = captures.find(c => c.calledName === 'Base');\n      // constructor_invocation is captured by heritage queries, not call queries\n      // If it happens to be captured, it should be 'constructor'\n      if (match) {\n        expect(inferCallForm(match.callNode, match.nameNode)).toBe('constructor');\n      }\n    });\n  });\n});\n\ndescribe('extractReceiverName', () => {\n  const parser = new Parser();\n\n  describe('TypeScript', () => {\n    it('extracts simple identifier receiver', () => {\n      parser.setLanguage(TypeScript.typescript);\n      const captures = extractCallCaptures(parser, 'user.save()', 
SupportedLanguages.TypeScript);\n      const match = captures.find(c => c.calledName === 'save');\n      expect(match).toBeDefined();\n      expect(extractReceiverName(match!.nameNode)).toBe('user');\n    });\n\n    it('extracts \"this\" as receiver', () => {\n      parser.setLanguage(TypeScript.typescript);\n      const code = `class Foo { run() { this.save(); } }`;\n      const captures = extractCallCaptures(parser, code, SupportedLanguages.TypeScript);\n      const match = captures.find(c => c.calledName === 'save');\n      expect(match).toBeDefined();\n      expect(extractReceiverName(match!.nameNode)).toBe('this');\n    });\n\n    it('returns undefined for chained call receiver', () => {\n      parser.setLanguage(TypeScript.typescript);\n      const captures = extractCallCaptures(parser, 'getUser().save()', SupportedLanguages.TypeScript);\n      const match = captures.find(c => c.calledName === 'save');\n      expect(match).toBeDefined();\n      expect(extractReceiverName(match!.nameNode)).toBeUndefined();\n    });\n\n    it('returns undefined for free call', () => {\n      parser.setLanguage(TypeScript.typescript);\n      const captures = extractCallCaptures(parser, 'doStuff()', SupportedLanguages.TypeScript);\n      const match = captures.find(c => c.calledName === 'doStuff');\n      expect(match).toBeDefined();\n      expect(extractReceiverName(match!.nameNode)).toBeUndefined();\n    });\n\n    it('extracts receiver from optional chain call user?.save()', () => {\n      parser.setLanguage(TypeScript.typescript);\n      const captures = extractCallCaptures(parser, 'user?.save()', SupportedLanguages.TypeScript);\n      const match = captures.find(c => c.calledName === 'save');\n      expect(match).toBeDefined();\n      expect(extractReceiverName(match!.nameNode)).toBe('user');\n    });\n\n    it('extracts \"this\" from optional chain call this?.save()', () => {\n      parser.setLanguage(TypeScript.typescript);\n      const code = `class Foo { run() { 
this?.save(); } }`;\n      const captures = extractCallCaptures(parser, code, SupportedLanguages.TypeScript);\n      const match = captures.find(c => c.calledName === 'save');\n      expect(match).toBeDefined();\n      expect(extractReceiverName(match!.nameNode)).toBe('this');\n    });\n  });\n\n  describe('Python', () => {\n    it('extracts simple identifier receiver', () => {\n      parser.setLanguage(Python);\n      const captures = extractCallCaptures(parser, 'user.save()', SupportedLanguages.Python);\n      const match = captures.find(c => c.calledName === 'save');\n      expect(match).toBeDefined();\n      expect(extractReceiverName(match!.nameNode)).toBe('user');\n    });\n\n    it('extracts \"self\" as receiver', () => {\n      parser.setLanguage(Python);\n      const captures = extractCallCaptures(parser, 'self.save()', SupportedLanguages.Python);\n      const match = captures.find(c => c.calledName === 'save');\n      expect(match).toBeDefined();\n      expect(extractReceiverName(match!.nameNode)).toBe('self');\n    });\n  });\n\n  describe('Java', () => {\n    it('extracts receiver from method_invocation', () => {\n      parser.setLanguage(Java);\n      const code = `class Foo { void run() { user.save(); } }`;\n      const captures = extractCallCaptures(parser, code, SupportedLanguages.Java);\n      const match = captures.find(c => c.calledName === 'save');\n      expect(match).toBeDefined();\n      expect(extractReceiverName(match!.nameNode)).toBe('user');\n    });\n  });\n\n  describe('Go', () => {\n    it('extracts receiver from selector_expression', () => {\n      parser.setLanguage(Go);\n      const code = `package main\\nfunc main() { user.Save() }`;\n      const captures = extractCallCaptures(parser, code, SupportedLanguages.Go);\n      const match = captures.find(c => c.calledName === 'Save');\n      expect(match).toBeDefined();\n      expect(extractReceiverName(match!.nameNode)).toBe('user');\n    });\n  });\n\n  describe('Rust', () => {\n    
it('extracts receiver from field_expression', () => {\n      parser.setLanguage(Rust);\n      const code = `fn main() { user.save(); }`;\n      const captures = extractCallCaptures(parser, code, SupportedLanguages.Rust);\n      const match = captures.find(c => c.calledName === 'save');\n      expect(match).toBeDefined();\n      expect(extractReceiverName(match!.nameNode)).toBe('user');\n    });\n  });\n\n  describe('C#', () => {\n    it('extracts receiver from member_access_expression', () => {\n      parser.setLanguage(CSharp);\n      const code = `class Foo { void Run() { user.Save(); } }`;\n      const captures = extractCallCaptures(parser, code, SupportedLanguages.CSharp);\n      const match = captures.find(c => c.calledName === 'Save');\n      expect(match).toBeDefined();\n      expect(extractReceiverName(match!.nameNode)).toBe('user');\n    });\n\n    it('captures null-conditional user?.Save() and extracts receiver', () => {\n      parser.setLanguage(CSharp);\n      const code = `class Foo { void Run() { user?.Save(); } }`;\n      const captures = extractCallCaptures(parser, code, SupportedLanguages.CSharp);\n      const match = captures.find(c => c.calledName === 'Save');\n      // C# conditional_access_expression (user?.Save()) is now captured via member_binding_expression\n      expect(match).toBeDefined();\n      expect(inferCallForm(match!.callNode, match!.nameNode)).toBe('member');\n      expect(extractReceiverName(match!.nameNode)).toBe('user');\n    });\n  });\n\n  describe('Kotlin', () => {\n    it('extracts receiver from navigation_expression', () => {\n      parser.setLanguage(Kotlin);\n      const code = `fun main() { user.save() }`;\n      const captures = extractCallCaptures(parser, code, SupportedLanguages.Kotlin);\n      const match = captures.find(c => c.calledName === 'save');\n      expect(match).toBeDefined();\n      expect(extractReceiverName(match!.nameNode)).toBe('user');\n    });\n\n    it('extracts receiver from safe navigation 
user?.save()', () => {\n      parser.setLanguage(Kotlin);\n      const code = `fun main() { user?.save() }`;\n      const captures = extractCallCaptures(parser, code, SupportedLanguages.Kotlin);\n      const match = captures.find(c => c.calledName === 'save');\n      expect(match).toBeDefined();\n      expect(extractReceiverName(match!.nameNode)).toBe('user');\n    });\n  });\n});\n\ndescribe('ownerId on SymbolDefinition', () => {\n  it('is set for Method symbols via symbolTable.add()', () => {\n    const st = createSymbolTable();\n    st.add('src/foo.ts', 'save', 'Method:src/foo.ts:save', 'Method', {\n      parameterCount: 1,\n      ownerId: 'Class:src/foo.ts:User',\n    });\n\n    const def = st.lookupExactFull('src/foo.ts', 'save');\n    expect(def).toBeDefined();\n    expect(def!.ownerId).toBe('Class:src/foo.ts:User');\n    expect(def!.parameterCount).toBe(1);\n  });\n\n  it('is undefined for Function symbols (no owner)', () => {\n    const st = createSymbolTable();\n    st.add('src/foo.ts', 'helper', 'Function:src/foo.ts:helper', 'Function');\n\n    const def = st.lookupExactFull('src/foo.ts', 'helper');\n    expect(def).toBeDefined();\n    expect(def!.ownerId).toBeUndefined();\n  });\n\n  it('propagates ownerId through lookupFuzzy', () => {\n    const st = createSymbolTable();\n    st.add('src/foo.ts', 'save', 'Method:src/foo.ts:save', 'Method', {\n      ownerId: 'Class:src/foo.ts:User',\n    });\n\n    const defs = st.lookupFuzzy('save');\n    expect(defs).toHaveLength(1);\n    expect(defs[0].ownerId).toBe('Class:src/foo.ts:User');\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/call-processor.test.ts",
    "content": "import { describe, it, expect, vi, beforeEach } from 'vitest';\nimport { processCallsFromExtracted } from '../../src/core/ingestion/call-processor.js';\nimport { extractReturnTypeName } from '../../src/core/ingestion/type-extractors/shared.js';\nimport { createResolutionContext, type ResolutionContext } from '../../src/core/ingestion/resolution-context.js';\nimport { createKnowledgeGraph } from '../../src/core/graph/graph.js';\nimport type { ExtractedCall, FileConstructorBindings } from '../../src/core/ingestion/workers/parse-worker.js';\n\ndescribe('processCallsFromExtracted', () => {\n  let graph: ReturnType<typeof createKnowledgeGraph>;\n  let ctx: ResolutionContext;\n\n  beforeEach(() => {\n    graph = createKnowledgeGraph();\n    ctx = createResolutionContext();\n  });\n\n  it('creates CALLS relationship for same-file resolution', async () => {\n    ctx.symbols.add('src/index.ts', 'helper', 'Function:src/index.ts:helper', 'Function');\n\n    const calls: ExtractedCall[] = [{\n      filePath: 'src/index.ts',\n      calledName: 'helper',\n      sourceId: 'Function:src/index.ts:main',\n    }];\n\n    await processCallsFromExtracted(graph, calls, ctx);\n\n    const rels = graph.relationships.filter(r => r.type === 'CALLS');\n    expect(rels).toHaveLength(1);\n    expect(rels[0].sourceId).toBe('Function:src/index.ts:main');\n    expect(rels[0].targetId).toBe('Function:src/index.ts:helper');\n    expect(rels[0].confidence).toBe(0.95);\n    expect(rels[0].reason).toBe('same-file');\n  });\n\n  it('creates CALLS relationship for import-resolved resolution', async () => {\n    ctx.symbols.add('src/utils.ts', 'format', 'Function:src/utils.ts:format', 'Function');\n    ctx.importMap.set('src/index.ts', new Set(['src/utils.ts']));\n\n    const calls: ExtractedCall[] = [{\n      filePath: 'src/index.ts',\n      calledName: 'format',\n      sourceId: 'Function:src/index.ts:main',\n    }];\n\n    await processCallsFromExtracted(graph, calls, ctx);\n\n    
const rels = graph.relationships.filter(r => r.type === 'CALLS');\n    expect(rels).toHaveLength(1);\n    expect(rels[0].confidence).toBe(0.9);\n    expect(rels[0].reason).toBe('import-resolved');\n  });\n\n  it('resolves unique global symbol with moderate confidence', async () => {\n    ctx.symbols.add('src/other.ts', 'uniqueFunc', 'Function:src/other.ts:uniqueFunc', 'Function');\n\n    const calls: ExtractedCall[] = [{\n      filePath: 'src/index.ts',\n      calledName: 'uniqueFunc',\n      sourceId: 'Function:src/index.ts:main',\n    }];\n\n    await processCallsFromExtracted(graph, calls, ctx);\n\n    const rels = graph.relationships.filter(r => r.type === 'CALLS');\n    expect(rels).toHaveLength(1);\n    expect(rels[0].confidence).toBe(0.5);\n    expect(rels[0].reason).toBe('global');\n  });\n\n  it('refuses ambiguous global symbols — no CALLS edge created', async () => {\n    ctx.symbols.add('src/a.ts', 'render', 'Function:src/a.ts:render', 'Function');\n    ctx.symbols.add('src/b.ts', 'render', 'Function:src/b.ts:render', 'Function');\n\n    const calls: ExtractedCall[] = [{\n      filePath: 'src/index.ts',\n      calledName: 'render',\n      sourceId: 'Function:src/index.ts:main',\n    }];\n\n    await processCallsFromExtracted(graph, calls, ctx);\n\n    const rels = graph.relationships.filter(r => r.type === 'CALLS');\n    expect(rels).toHaveLength(0);\n  });\n\n  it('skips unresolvable calls', async () => {\n    const calls: ExtractedCall[] = [{\n      filePath: 'src/index.ts',\n      calledName: 'nonExistent',\n      sourceId: 'Function:src/index.ts:main',\n    }];\n\n    await processCallsFromExtracted(graph, calls, ctx);\n    expect(graph.relationshipCount).toBe(0);\n  });\n\n  it('refuses non-callable symbols even when the name resolves', async () => {\n    ctx.symbols.add('src/index.ts', 'Widget', 'Class:src/index.ts:Widget', 'Class');\n\n    const calls: ExtractedCall[] = [{\n      filePath: 'src/index.ts',\n      calledName: 'Widget',\n      
sourceId: 'Function:src/index.ts:main',\n    }];\n\n    await processCallsFromExtracted(graph, calls, ctx);\n    expect(graph.relationshipCount).toBe(0);\n  });\n\n  it('refuses CALLS edges to Interface symbols', async () => {\n    ctx.symbols.add('src/types.ts', 'Serializable', 'Interface:src/types.ts:Serializable', 'Interface');\n    ctx.importMap.set('src/index.ts', new Set(['src/types.ts']));\n\n    const calls: ExtractedCall[] = [{\n      filePath: 'src/index.ts',\n      calledName: 'Serializable',\n      sourceId: 'Function:src/index.ts:main',\n    }];\n\n    await processCallsFromExtracted(graph, calls, ctx);\n    expect(graph.relationships.filter(r => r.type === 'CALLS')).toHaveLength(0);\n  });\n\n  it('refuses CALLS edges to Enum symbols', async () => {\n    ctx.symbols.add('src/status.ts', 'Status', 'Enum:src/status.ts:Status', 'Enum');\n    ctx.importMap.set('src/index.ts', new Set(['src/status.ts']));\n\n    const calls: ExtractedCall[] = [{\n      filePath: 'src/index.ts',\n      calledName: 'Status',\n      sourceId: 'Function:src/index.ts:main',\n    }];\n\n    await processCallsFromExtracted(graph, calls, ctx);\n    expect(graph.relationships.filter(r => r.type === 'CALLS')).toHaveLength(0);\n  });\n\n  it('prefers same-file over import-resolved', async () => {\n    ctx.symbols.add('src/index.ts', 'render', 'Function:src/index.ts:render', 'Function');\n    ctx.symbols.add('src/utils.ts', 'render', 'Function:src/utils.ts:render', 'Function');\n    ctx.importMap.set('src/index.ts', new Set(['src/utils.ts']));\n\n    const calls: ExtractedCall[] = [{\n      filePath: 'src/index.ts',\n      calledName: 'render',\n      sourceId: 'Function:src/index.ts:main',\n    }];\n\n    await processCallsFromExtracted(graph, calls, ctx);\n\n    const rels = graph.relationships.filter(r => r.type === 'CALLS');\n    expect(rels).toHaveLength(1);\n    expect(rels[0].targetId).toBe('Function:src/index.ts:render');\n    expect(rels[0].reason).toBe('same-file');\n  
});\n\n  it('handles multiple calls from the same file', async () => {\n    ctx.symbols.add('src/index.ts', 'foo', 'Function:src/index.ts:foo', 'Function');\n    ctx.symbols.add('src/index.ts', 'bar', 'Function:src/index.ts:bar', 'Function');\n\n    const calls: ExtractedCall[] = [\n      { filePath: 'src/index.ts', calledName: 'foo', sourceId: 'Function:src/index.ts:main' },\n      { filePath: 'src/index.ts', calledName: 'bar', sourceId: 'Function:src/index.ts:main' },\n    ];\n\n    await processCallsFromExtracted(graph, calls, ctx);\n    expect(graph.relationships.filter(r => r.type === 'CALLS')).toHaveLength(2);\n  });\n\n  it('uses arity to disambiguate import-scoped callable candidates', async () => {\n    ctx.symbols.add('src/logger.ts', 'log', 'Function:src/logger.ts:log', 'Function', { parameterCount: 0 });\n    ctx.symbols.add('src/formatter.ts', 'log', 'Function:src/formatter.ts:log', 'Function', { parameterCount: 1 });\n    ctx.importMap.set('src/index.ts', new Set(['src/logger.ts', 'src/formatter.ts']));\n\n    const calls: ExtractedCall[] = [{\n      filePath: 'src/index.ts',\n      calledName: 'log',\n      sourceId: 'Function:src/index.ts:main',\n      argCount: 1,\n    }];\n\n    await processCallsFromExtracted(graph, calls, ctx);\n\n    const rels = graph.relationships.filter(r => r.type === 'CALLS');\n    expect(rels).toHaveLength(1);\n    expect(rels[0].targetId).toBe('Function:src/formatter.ts:log');\n    expect(rels[0].reason).toBe('import-resolved');\n  });\n\n  it('refuses ambiguous call targets when arity does not produce a unique match', async () => {\n    ctx.symbols.add('src/logger.ts', 'log', 'Function:src/logger.ts:log', 'Function', { parameterCount: 1 });\n    ctx.symbols.add('src/formatter.ts', 'log', 'Function:src/formatter.ts:log', 'Function', { parameterCount: 1 });\n    ctx.importMap.set('src/index.ts', new Set(['src/logger.ts', 'src/formatter.ts']));\n\n    const calls: ExtractedCall[] = [{\n      filePath: 'src/index.ts',\n     
 calledName: 'log',\n      sourceId: 'Function:src/index.ts:main',\n      argCount: 1,\n    }];\n\n    await processCallsFromExtracted(graph, calls, ctx);\n    expect(graph.relationships.filter(r => r.type === 'CALLS')).toHaveLength(0);\n  });\n\n  it('calls progress callback', async () => {\n    ctx.symbols.add('src/index.ts', 'foo', 'Function:src/index.ts:foo', 'Function');\n\n    const calls: ExtractedCall[] = [\n      { filePath: 'src/index.ts', calledName: 'foo', sourceId: 'Function:src/index.ts:main' },\n    ];\n\n    const onProgress = vi.fn();\n    await processCallsFromExtracted(graph, calls, ctx, onProgress);\n\n    expect(onProgress).toHaveBeenCalledWith(1, 1);\n  });\n\n  it('handles empty calls array', async () => {\n    await processCallsFromExtracted(graph, [], ctx);\n    expect(graph.relationshipCount).toBe(0);\n  });\n\n  // ---- Constructor-aware resolution (Phase 2) ----\n\n  it('resolves constructor call to Class when no Constructor node exists', async () => {\n    ctx.symbols.add('src/models.ts', 'User', 'Class:src/models.ts:User', 'Class');\n    ctx.importMap.set('src/index.ts', new Set(['src/models.ts']));\n\n    const calls: ExtractedCall[] = [{\n      filePath: 'src/index.ts',\n      calledName: 'User',\n      sourceId: 'Function:src/index.ts:main',\n      callForm: 'constructor',\n    }];\n\n    await processCallsFromExtracted(graph, calls, ctx);\n\n    const rels = graph.relationships.filter(r => r.type === 'CALLS');\n    expect(rels).toHaveLength(1);\n    expect(rels[0].targetId).toBe('Class:src/models.ts:User');\n    expect(rels[0].reason).toBe('import-resolved');\n  });\n\n  it('resolves constructor call to Constructor node over Class node', async () => {\n    ctx.symbols.add('src/models.ts', 'User', 'Class:src/models.ts:User', 'Class');\n    ctx.symbols.add('src/models.ts', 'User', 'Constructor:src/models.ts:User', 'Constructor', { parameterCount: 1 });\n    ctx.importMap.set('src/index.ts', new Set(['src/models.ts']));\n\n    const 
calls: ExtractedCall[] = [{\n      filePath: 'src/index.ts',\n      calledName: 'User',\n      sourceId: 'Function:src/index.ts:main',\n      argCount: 1,\n      callForm: 'constructor',\n    }];\n\n    await processCallsFromExtracted(graph, calls, ctx);\n\n    const rels = graph.relationships.filter(r => r.type === 'CALLS');\n    expect(rels).toHaveLength(1);\n    expect(rels[0].targetId).toBe('Constructor:src/models.ts:User');\n  });\n\n  it('refuses Class target without callForm=constructor (existing behavior)', async () => {\n    ctx.symbols.add('src/models.ts', 'User', 'Class:src/models.ts:User', 'Class');\n    ctx.importMap.set('src/index.ts', new Set(['src/models.ts']));\n\n    const calls: ExtractedCall[] = [{\n      filePath: 'src/index.ts',\n      calledName: 'User',\n      sourceId: 'Function:src/index.ts:main',\n    }];\n\n    await processCallsFromExtracted(graph, calls, ctx);\n\n    const rels = graph.relationships.filter(r => r.type === 'CALLS');\n    expect(rels).toHaveLength(0);\n  });\n\n  it('constructor call falls back to callable types when no Constructor/Class found', async () => {\n    ctx.symbols.add('src/utils.ts', 'Widget', 'Function:src/utils.ts:Widget', 'Function');\n    ctx.importMap.set('src/index.ts', new Set(['src/utils.ts']));\n\n    const calls: ExtractedCall[] = [{\n      filePath: 'src/index.ts',\n      calledName: 'Widget',\n      sourceId: 'Function:src/index.ts:main',\n      callForm: 'constructor',\n    }];\n\n    await processCallsFromExtracted(graph, calls, ctx);\n\n    const rels = graph.relationships.filter(r => r.type === 'CALLS');\n    expect(rels).toHaveLength(1);\n    expect(rels[0].targetId).toBe('Function:src/utils.ts:Widget');\n  });\n\n  it('constructor arity filtering narrows overloaded constructors', async () => {\n    ctx.symbols.add('src/models.ts', 'User', 'Constructor:src/models.ts:User(0)', 'Constructor', { parameterCount: 0 });\n    ctx.symbols.add('src/models.ts', 'User', 
'Constructor:src/models.ts:User(2)', 'Constructor', { parameterCount: 2 });\n    ctx.importMap.set('src/index.ts', new Set(['src/models.ts']));\n\n    const calls: ExtractedCall[] = [{\n      filePath: 'src/index.ts',\n      calledName: 'User',\n      sourceId: 'Function:src/index.ts:main',\n      argCount: 2,\n      callForm: 'constructor',\n    }];\n\n    await processCallsFromExtracted(graph, calls, ctx);\n\n    const rels = graph.relationships.filter(r => r.type === 'CALLS');\n    expect(rels).toHaveLength(1);\n    expect(rels[0].targetId).toBe('Constructor:src/models.ts:User(2)');\n  });\n\n  it('cannot discriminate same-arity overloads by parameter type (known limitation)', async () => {\n    ctx.symbols.add('src/UserDao.ts', 'save', 'Function:src/UserDao.ts:save', 'Function', { parameterCount: 1 });\n    ctx.symbols.add('src/RepoDao.ts', 'save', 'Function:src/RepoDao.ts:save', 'Function', { parameterCount: 1 });\n    ctx.importMap.set('src/index.ts', new Set(['src/UserDao.ts', 'src/RepoDao.ts']));\n\n    const calls: ExtractedCall[] = [{\n      filePath: 'src/index.ts',\n      calledName: 'save',\n      sourceId: 'Function:src/index.ts:main',\n      argCount: 1,\n    }];\n\n    await processCallsFromExtracted(graph, calls, ctx);\n    const rels = graph.relationships.filter(r => r.type === 'CALLS');\n    expect(rels).toHaveLength(0);\n  });\n\n  // ---- Return type inference (Phase 4) ----\n\n  it('return type inference: binds variable to return type of callee', async () => {\n    // getUser() returns User, and User has a save() method\n    ctx.symbols.add('src/utils.ts', 'getUser', 'Function:src/utils.ts:getUser', 'Function', { returnType: 'User' });\n    ctx.symbols.add('src/models.ts', 'User', 'Class:src/models.ts:User', 'Class');\n    ctx.symbols.add('src/models.ts', 'save', 'Method:src/models.ts:save', 'Method', { ownerId: 'Class:src/models.ts:User' });\n    ctx.importMap.set('src/index.ts', new Set(['src/utils.ts', 'src/models.ts']));\n\n    // Binding: 
user = getUser() — getUser is not a class, so constructor path fails,\n    // but return type inference should kick in\n    const constructorBindings: FileConstructorBindings[] = [{\n      filePath: 'src/index.ts',\n      bindings: [\n        { scope: 'main@0', varName: 'user', calleeName: 'getUser' },\n      ],\n    }];\n\n    const calls: ExtractedCall[] = [{\n      filePath: 'src/index.ts',\n      calledName: 'save',\n      sourceId: 'Function:src/index.ts:main',\n      receiverName: 'user',\n      callForm: 'member',\n    }];\n\n    await processCallsFromExtracted(graph, calls, ctx, undefined, constructorBindings);\n\n    const rels = graph.relationships.filter(r => r.type === 'CALLS');\n    expect(rels).toHaveLength(1);\n    expect(rels[0].targetId).toBe('Method:src/models.ts:save');\n  });\n\n  it('return type inference: unwraps Promise<User> to User', async () => {\n    ctx.symbols.add('src/api.ts', 'fetchUser', 'Function:src/api.ts:fetchUser', 'Function', { returnType: 'Promise<User>' });\n    ctx.symbols.add('src/models.ts', 'User', 'Class:src/models.ts:User', 'Class');\n    ctx.symbols.add('src/models.ts', 'save', 'Method:src/models.ts:save', 'Method', { ownerId: 'Class:src/models.ts:User' });\n    ctx.importMap.set('src/index.ts', new Set(['src/api.ts', 'src/models.ts']));\n\n    const constructorBindings: FileConstructorBindings[] = [{\n      filePath: 'src/index.ts',\n      bindings: [\n        { scope: 'main@0', varName: 'user', calleeName: 'fetchUser' },\n      ],\n    }];\n\n    const calls: ExtractedCall[] = [{\n      filePath: 'src/index.ts',\n      calledName: 'save',\n      sourceId: 'Function:src/index.ts:main',\n      receiverName: 'user',\n      callForm: 'member',\n    }];\n\n    await processCallsFromExtracted(graph, calls, ctx, undefined, constructorBindings);\n\n    const rels = graph.relationships.filter(r => r.type === 'CALLS');\n    expect(rels).toHaveLength(1);\n    expect(rels[0].targetId).toBe('Method:src/models.ts:save');\n  
});\n\n  it('return type inference: skips when return type is primitive', async () => {\n    ctx.symbols.add('src/utils.ts', 'getCount', 'Function:src/utils.ts:getCount', 'Function', { returnType: 'number' });\n    ctx.importMap.set('src/index.ts', new Set(['src/utils.ts']));\n\n    const constructorBindings: FileConstructorBindings[] = [{\n      filePath: 'src/index.ts',\n      bindings: [\n        { scope: 'main@0', varName: 'count', calleeName: 'getCount' },\n      ],\n    }];\n\n    const calls: ExtractedCall[] = [{\n      filePath: 'src/index.ts',\n      calledName: 'toString',\n      sourceId: 'Function:src/index.ts:main',\n      receiverName: 'count',\n      callForm: 'member',\n    }];\n\n    await processCallsFromExtracted(graph, calls, ctx, undefined, constructorBindings);\n\n    // No binding should be created for primitive return types\n    const rels = graph.relationships.filter(r => r.type === 'CALLS');\n    expect(rels).toHaveLength(0);\n  });\n\n  it('return type inference: skips ambiguous callees (multiple definitions)', async () => {\n    ctx.symbols.add('src/a.ts', 'getData', 'Function:src/a.ts:getData', 'Function', { returnType: 'User' });\n    ctx.symbols.add('src/b.ts', 'getData', 'Function:src/b.ts:getData', 'Function', { returnType: 'Repo' });\n\n    const constructorBindings: FileConstructorBindings[] = [{\n      filePath: 'src/index.ts',\n      bindings: [\n        { scope: 'main@0', varName: 'data', calleeName: 'getData' },\n      ],\n    }];\n\n    const calls: ExtractedCall[] = [{\n      filePath: 'src/index.ts',\n      calledName: 'save',\n      sourceId: 'Function:src/index.ts:main',\n      receiverName: 'data',\n      callForm: 'member',\n    }];\n\n    await processCallsFromExtracted(graph, calls, ctx, undefined, constructorBindings);\n\n    // Ambiguous callee — don't guess\n    const rels = graph.relationships.filter(r => r.type === 'CALLS');\n    expect(rels).toHaveLength(0);\n  });\n\n  it('return type inference: prefers 
constructor binding over return type', async () => {\n    // If the callee IS a class, constructor binding wins (existing behavior)\n    ctx.symbols.add('src/models.ts', 'User', 'Class:src/models.ts:User', 'Class');\n    ctx.symbols.add('src/models.ts', 'save', 'Method:src/models.ts:save', 'Method', { ownerId: 'Class:src/models.ts:User' });\n    ctx.importMap.set('src/index.ts', new Set(['src/models.ts']));\n\n    const constructorBindings: FileConstructorBindings[] = [{\n      filePath: 'src/index.ts',\n      bindings: [\n        { scope: 'main@0', varName: 'user', calleeName: 'User' },\n      ],\n    }];\n\n    const calls: ExtractedCall[] = [{\n      filePath: 'src/index.ts',\n      calledName: 'save',\n      sourceId: 'Function:src/index.ts:main',\n      receiverName: 'user',\n      callForm: 'member',\n    }];\n\n    await processCallsFromExtracted(graph, calls, ctx, undefined, constructorBindings);\n\n    const rels = graph.relationships.filter(r => r.type === 'CALLS');\n    expect(rels).toHaveLength(1);\n    expect(rels[0].targetId).toBe('Method:src/models.ts:save');\n  });\n\n  // ---- Scope-aware constructor bindings (Phase 3) ----\n\n  it('receiverKey collision: same method name in different classes does not collide', async () => {\n    // User.save@100 and Repo.save@200 are two methods named \"save\" in different classes.\n    // Each has a local variable \"db\" pointing to a different type.\n    // Without @startIndex in the key, the second binding would overwrite the first.\n    ctx.symbols.add('src/db/Database.ts', 'Database', 'Class:src/db/Database.ts:Database', 'Class');\n    ctx.symbols.add('src/db/Cache.ts', 'Cache', 'Class:src/db/Cache.ts:Cache', 'Class');\n    ctx.symbols.add('src/db/Database.ts', 'query', 'Method:src/db/Database.ts:query', 'Method', { ownerId: 'Class:src/db/Database.ts:Database' });\n    ctx.symbols.add('src/db/Cache.ts', 'query', 'Method:src/db/Cache.ts:query', 'Method', { ownerId: 'Class:src/db/Cache.ts:Cache' });\n    
ctx.importMap.set('src/models/User.ts', new Set(['src/db/Database.ts']));\n    ctx.importMap.set('src/models/Repo.ts', new Set(['src/db/Cache.ts']));\n\n    // Two bindings: both enclosing scope is named \"save\" but at different startIndexes\n    const constructorBindings: FileConstructorBindings[] = [\n      {\n        filePath: 'src/models/User.ts',\n        bindings: [\n          // save@100: inside User.save(), db = new Database()\n          { scope: 'save@100', varName: 'db', calleeName: 'Database' },\n        ],\n      },\n      {\n        filePath: 'src/models/Repo.ts',\n        bindings: [\n          // save@200: inside Repo.save(), db = new Cache()\n          { scope: 'save@200', varName: 'db', calleeName: 'Cache' },\n        ],\n      },\n    ];\n\n    const calls: ExtractedCall[] = [\n      {\n        filePath: 'src/models/User.ts',\n        calledName: 'query',\n        sourceId: 'Method:src/models/User.ts:save',\n        receiverName: 'db',\n        callForm: 'member',\n      },\n      {\n        filePath: 'src/models/Repo.ts',\n        calledName: 'query',\n        sourceId: 'Method:src/models/Repo.ts:save',\n        receiverName: 'db',\n        callForm: 'member',\n      },\n    ];\n\n    await processCallsFromExtracted(graph, calls, ctx, undefined, constructorBindings);\n\n    const rels = graph.relationships.filter(r => r.type === 'CALLS');\n    expect(rels).toHaveLength(2);\n    const userQueryRel = rels.find(r => r.sourceId === 'Method:src/models/User.ts:save');\n    const repoQueryRel = rels.find(r => r.sourceId === 'Method:src/models/Repo.ts:save');\n    expect(userQueryRel?.targetId).toBe('Method:src/db/Database.ts:query');\n    expect(repoQueryRel?.targetId).toBe('Method:src/db/Cache.ts:query');\n  });\n\n  it('receiverKey collision: same scope funcName + same varName + same type resolves (non-ambiguous)', async () => {\n    // Two save@* scopes both bind \"db\" to the same type — not ambiguous, should resolve.\n    
ctx.symbols.add('src/db/Database.ts', 'Database', 'Class:src/db/Database.ts:Database', 'Class');\n    ctx.symbols.add('src/db/Database.ts', 'query', 'Method:src/db/Database.ts:query', 'Method', { ownerId: 'Class:src/db/Database.ts:Database' });\n    ctx.importMap.set('src/service.ts', new Set(['src/db/Database.ts']));\n\n    const constructorBindings: FileConstructorBindings[] = [{\n      filePath: 'src/service.ts',\n      bindings: [\n        { scope: 'save@10', varName: 'db', calleeName: 'Database' },\n        { scope: 'save@50', varName: 'db', calleeName: 'Database' },\n      ],\n    }];\n\n    const calls: ExtractedCall[] = [{\n      filePath: 'src/service.ts',\n      calledName: 'query',\n      sourceId: 'Method:src/service.ts:save',\n      receiverName: 'db',\n      callForm: 'member',\n    }];\n\n    await processCallsFromExtracted(graph, calls, ctx, undefined, constructorBindings);\n\n    const rels = graph.relationships.filter(r => r.type === 'CALLS');\n    expect(rels).toHaveLength(1);\n    expect(rels[0].targetId).toBe('Method:src/db/Database.ts:query');\n  });\n\n  it('receiverKey collision: same scope funcName + same varName + different types → ambiguous, no CALLS edge', async () => {\n    // Two save@* scopes in the same file bind \"db\" to different types — truly ambiguous.\n    ctx.symbols.add('src/db/Database.ts', 'Database', 'Class:src/db/Database.ts:Database', 'Class');\n    ctx.symbols.add('src/db/Cache.ts', 'Cache', 'Class:src/db/Cache.ts:Cache', 'Class');\n    ctx.symbols.add('src/db/Database.ts', 'query', 'Method:src/db/Database.ts:query', 'Method', { ownerId: 'Class:src/db/Database.ts:Database' });\n    ctx.symbols.add('src/db/Cache.ts', 'query', 'Method:src/db/Cache.ts:query', 'Method', { ownerId: 'Class:src/db/Cache.ts:Cache' });\n    ctx.importMap.set('src/service.ts', new Set(['src/db/Database.ts', 'src/db/Cache.ts']));\n\n    const constructorBindings: FileConstructorBindings[] = [{\n      filePath: 'src/service.ts',\n      bindings: 
[\n        { scope: 'save@10', varName: 'db', calleeName: 'Database' },\n        { scope: 'save@50', varName: 'db', calleeName: 'Cache' },\n      ],\n    }];\n\n    const calls: ExtractedCall[] = [{\n      filePath: 'src/service.ts',\n      calledName: 'query',\n      sourceId: 'Method:src/service.ts:save',\n      receiverName: 'db',\n      callForm: 'member',\n    }];\n\n    await processCallsFromExtracted(graph, calls, ctx, undefined, constructorBindings);\n\n    // Ambiguous — different types for same funcName+varName, should not emit a CALLS edge\n    const rels = graph.relationships.filter(r => r.type === 'CALLS');\n    expect(rels).toHaveLength(0);\n  });\n\n  it('scope-aware bindings: same varName in different functions resolves to correct type', async () => {\n    ctx.symbols.add('src/models.ts', 'User', 'Class:src/models.ts:User', 'Class');\n    ctx.symbols.add('src/models.ts', 'Repo', 'Class:src/models.ts:Repo', 'Class');\n    ctx.symbols.add('src/models.ts', 'save', 'Function:src/models.ts:save', 'Function');\n    ctx.importMap.set('src/index.ts', new Set(['src/models.ts']));\n\n    const constructorBindings: FileConstructorBindings[] = [{\n      filePath: 'src/index.ts',\n      bindings: [\n        { scope: 'processUser@12', varName: 'obj', calleeName: 'User' },\n        { scope: 'processRepo@89', varName: 'obj', calleeName: 'Repo' },\n      ],\n    }];\n\n    const calls: ExtractedCall[] = [\n      {\n        filePath: 'src/index.ts',\n        calledName: 'save',\n        sourceId: 'Function:src/index.ts:processUser',\n        receiverName: 'obj',\n        callForm: 'member',\n      },\n      {\n        filePath: 'src/index.ts',\n        calledName: 'save',\n        sourceId: 'Function:src/index.ts:processRepo',\n        receiverName: 'obj',\n        callForm: 'member',\n      },\n    ];\n\n    await processCallsFromExtracted(graph, calls, ctx, undefined, constructorBindings);\n\n    const rels = graph.relationships.filter(r => r.type === 'CALLS');\n   
 expect(rels).toHaveLength(2);\n    // Both calls should resolve, each with the correct receiver type from their scope\n    // (the important thing is they don't collide — without scope awareness,\n    // last-write-wins would give both calls the same receiver type)\n    expect(rels[0].sourceId).toBe('Function:src/index.ts:processUser');\n    expect(rels[1].sourceId).toBe('Function:src/index.ts:processRepo');\n  });\n});\n\ndescribe('extractReturnTypeName', () => {\n  it('extracts simple type name', () => {\n    expect(extractReturnTypeName('User')).toBe('User');\n  });\n\n  it('unwraps Promise<User>', () => {\n    expect(extractReturnTypeName('Promise<User>')).toBe('User');\n  });\n\n  it('unwraps Option<User>', () => {\n    expect(extractReturnTypeName('Option<User>')).toBe('User');\n  });\n\n  it('unwraps Result<User, Error> to first type arg', () => {\n    expect(extractReturnTypeName('Result<User, Error>')).toBe('User');\n  });\n\n  it('strips nullable union: User | null', () => {\n    expect(extractReturnTypeName('User | null')).toBe('User');\n  });\n\n  it('strips nullable union: User | undefined', () => {\n    expect(extractReturnTypeName('User | undefined')).toBe('User');\n  });\n\n  it('strips nullable suffix: User?', () => {\n    expect(extractReturnTypeName('User?')).toBe('User');\n  });\n\n  it('strips Go pointer: *User', () => {\n    expect(extractReturnTypeName('*User')).toBe('User');\n  });\n\n  it('strips Rust reference: &User', () => {\n    expect(extractReturnTypeName('&User')).toBe('User');\n  });\n\n  it('strips Rust mutable reference: &mut User', () => {\n    expect(extractReturnTypeName('&mut User')).toBe('User');\n  });\n\n  it('returns undefined for primitives', () => {\n    expect(extractReturnTypeName('string')).toBeUndefined();\n    expect(extractReturnTypeName('number')).toBeUndefined();\n    expect(extractReturnTypeName('boolean')).toBeUndefined();\n    expect(extractReturnTypeName('void')).toBeUndefined();\n    
expect(extractReturnTypeName('int')).toBeUndefined();\n  });\n\n  it('returns undefined for genuine union types', () => {\n    expect(extractReturnTypeName('User | Repo')).toBeUndefined();\n  });\n\n  it('returns undefined for empty string', () => {\n    expect(extractReturnTypeName('')).toBeUndefined();\n  });\n\n  it('extracts qualified type: models.User → User', () => {\n    expect(extractReturnTypeName('models.User')).toBe('User');\n  });\n\n  it('handles non-wrapper generics: Map<K, V> → Map', () => {\n    expect(extractReturnTypeName('Map<string, User>')).toBe('Map');\n  });\n\n  it('handles nested wrapper: Promise<Option<User>>', () => {\n    // Promise<Option<User>> → unwrap Promise → Option<User> → unwrap Option → User\n    expect(extractReturnTypeName('Promise<Option<User>>')).toBe('User');\n  });\n\n  it('returns base type for collection generics (not unwrapped)', () => {\n    expect(extractReturnTypeName('Vec<User>')).toBe('Vec');\n    expect(extractReturnTypeName('List<User>')).toBe('List');\n    expect(extractReturnTypeName('Array<User>')).toBe('Array');\n    expect(extractReturnTypeName('Set<User>')).toBe('Set');\n    expect(extractReturnTypeName('ArrayList<User>')).toBe('ArrayList');\n  });\n\n  it('unwraps Optional<User>', () => {\n    expect(extractReturnTypeName('Optional<User>')).toBe('User');\n  });\n\n  it('extracts Ruby :: qualified type: Models::User → User', () => {\n    expect(extractReturnTypeName('Models::User')).toBe('User');\n  });\n\n  it('extracts C++ :: qualified type: ns::HttpClient → HttpClient', () => {\n    expect(extractReturnTypeName('ns::HttpClient')).toBe('HttpClient');\n  });\n\n  it('extracts deep :: qualified type: crate::models::User → User', () => {\n    expect(extractReturnTypeName('crate::models::User')).toBe('User');\n  });\n\n  it('extracts mixed qualifier: ns.module::User → User', () => {\n    expect(extractReturnTypeName('ns.module::User')).toBe('User');\n  });\n\n  it('returns undefined for lowercase :: 
qualified: std::vector', () => {\n    expect(extractReturnTypeName('std::vector')).toBeUndefined();\n  });\n\n  it('extracts deep dot-qualified: com.example.models.User → User', () => {\n    expect(extractReturnTypeName('com.example.models.User')).toBe('User');\n  });\n\n  it('unwraps wrapper over non-wrapper generic: Promise<Map<string, User>> → Map', () => {\n    // Promise is a wrapper — unwrap it to get Map<string, User>.\n    // Map is not a wrapper, so return its base type: Map.\n    expect(extractReturnTypeName('Promise<Map<string, User>>')).toBe('Map');\n  });\n\n  it('unwraps doubly-nested wrapper: Future<Result<User, Error>> → User', () => {\n    // Future → unwrap → Result<User, Error>; Result → unwrap first arg → User\n    expect(extractReturnTypeName('Future<Result<User, Error>>')).toBe('User');\n  });\n\n  it('unwraps CompletableFuture<Optional<User>> → User', () => {\n    // CompletableFuture → unwrap → Optional<User>; Optional → unwrap → User\n    expect(extractReturnTypeName('CompletableFuture<Optional<User>>')).toBe('User');\n  });\n\n  // Rust smart pointer unwrapping\n  it('unwraps Rc<User> → User', () => {\n    expect(extractReturnTypeName('Rc<User>')).toBe('User');\n  });\n  it('unwraps Arc<User> → User', () => {\n    expect(extractReturnTypeName('Arc<User>')).toBe('User');\n  });\n  it('unwraps Weak<User> → User', () => {\n    expect(extractReturnTypeName('Weak<User>')).toBe('User');\n  });\n  it('unwraps MutexGuard<User> → User', () => {\n    expect(extractReturnTypeName('MutexGuard<User>')).toBe('User');\n  });\n  it('unwraps RwLockReadGuard<User> → User', () => {\n    expect(extractReturnTypeName('RwLockReadGuard<User>')).toBe('User');\n  });\n  it('unwraps Cow<User> → User', () => {\n    expect(extractReturnTypeName('Cow<User>')).toBe('User');\n  });\n  // Nested: Arc<Option<User>> → User (double unwrap)\n  it('unwraps Arc<Option<User>> → User', () => {\n    expect(extractReturnTypeName('Arc<Option<User>>')).toBe('User');\n  });\n  // NOT 
unwrapped (containers/wrappers not in set)\n  it('does not unwrap Mutex<User> (not a Deref wrapper)', () => {\n    expect(extractReturnTypeName('Mutex<User>')).toBe('Mutex');\n  });\n\n  // Rust lifetime parameters in wrapper generics\n  it(\"skips lifetime in Ref<'_, User> → User\", () => {\n    expect(extractReturnTypeName(\"Ref<'_, User>\")).toBe('User');\n  });\n  it(\"skips lifetime in RefMut<'a, User> → User\", () => {\n    expect(extractReturnTypeName(\"RefMut<'a, User>\")).toBe('User');\n  });\n  it(\"skips lifetime in MutexGuard<'_, User> → User\", () => {\n    expect(extractReturnTypeName(\"MutexGuard<'_, User>\")).toBe('User');\n  });\n\n  it('returns undefined for lowercase non-class types', () => {\n    expect(extractReturnTypeName('error')).toBeUndefined();\n  });\n\n  it('extracts PHP backslash-namespaced type: \\\\App\\\\Models\\\\User → User', () => {\n    expect(extractReturnTypeName('\\\\App\\\\Models\\\\User')).toBe('User');\n  });\n\n  it('extracts PHP single-segment namespace: \\\\User → User', () => {\n    expect(extractReturnTypeName('\\\\User')).toBe('User');\n  });\n\n  it('extracts PHP deep namespace: \\\\Vendor\\\\Package\\\\Sub\\\\Client → Client', () => {\n    expect(extractReturnTypeName('\\\\Vendor\\\\Package\\\\Sub\\\\Client')).toBe('Client');\n  });\n\n  it('returns undefined for bare wrapper type names without generic arguments', () => {\n    expect(extractReturnTypeName('Task')).toBeUndefined();\n    expect(extractReturnTypeName('Promise')).toBeUndefined();\n    expect(extractReturnTypeName('Future')).toBeUndefined();\n    expect(extractReturnTypeName('Option')).toBeUndefined();\n    expect(extractReturnTypeName('Result')).toBeUndefined();\n    expect(extractReturnTypeName('Observable')).toBeUndefined();\n    expect(extractReturnTypeName('ValueTask')).toBeUndefined();\n    expect(extractReturnTypeName('CompletableFuture')).toBeUndefined();\n    expect(extractReturnTypeName('Optional')).toBeUndefined();\n  });\n\n  // ---- Length 
 caps (Phase 6) ----\n\n  it('pre-cap: returns undefined when raw input exceeds 2048 characters', () => {\n    const longInput = 'A'.repeat(2049);\n    expect(extractReturnTypeName(longInput)).toBeUndefined();\n  });\n\n  it('pre-cap: accepts raw input at exactly 2048 characters (boundary)', () => {\n    // A 2048-char identifier-shaped string (uppercase first letter) passes the pre-cap gate.\n    // It won't match as a valid identifier (too long for post-cap), so the\n    // result is undefined — but the pre-cap itself does NOT reject it.\n    // We test this by verifying a 2048-char type that WOULD be valid in all\n    // other respects is still returned as undefined (post-cap rejects it).\n    const atLimit = 'U' + 'x'.repeat(2047); // 2048 chars, starts with uppercase\n    // Post-cap (512) will reject this, but the pre-cap should not fire.\n    // The important assertion: no throw and the result is undefined from post-cap.\n    expect(extractReturnTypeName(atLimit)).toBeUndefined();\n  });\n\n  it('pre-cap: accepts inputs shorter than 2048 characters without rejection', () => {\n    // 'User' is well under 2048 — should resolve normally.\n    expect(extractReturnTypeName('User')).toBe('User');\n  });\n\n  it('post-cap: returns undefined when extracted type name exceeds 512 characters', () => {\n    // Construct a raw string that is under the 2048-char pre-cap but produces\n    // a final identifier longer than 512 characters after extraction.\n    // A bare 513-char identifier (uppercase first letter) satisfies all rules except post-cap.\n    const longTypeName = 'U' + 'x'.repeat(512); // 513 chars, starts with uppercase\n    expect(extractReturnTypeName(longTypeName)).toBeUndefined();\n  });\n\n  it('post-cap: accepts extracted type name at exactly 512 characters (boundary)', () => {\n    // 512-char identifier should pass the post-cap check (> 512 rejects, not >=).\n    const atLimit = 'U' + 'x'.repeat(511); // exactly 512 chars\n    expect(extractReturnTypeName(atLimit)).toBe(atLimit);\n  
});\n\n  it('post-cap: accepts normal short type names well under 512 characters', () => {\n    expect(extractReturnTypeName('HttpClient')).toBe('HttpClient');\n    expect(extractReturnTypeName('UserService')).toBe('UserService');\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/call-routing.test.ts",
    "content": "import { describe, it, expect } from 'vitest';\nimport { routeRubyCall, callRouters } from '../../src/core/ingestion/call-routing.js';\nimport { SupportedLanguages } from '../../src/config/supported-languages.js';\n\n// ── Mock AST node helpers ────────────────────────────────────────────────────\n\n/**\n * Build a minimal mock tree-sitter node. Only the fields actually accessed\n * by routeRubyCall are populated; everything else is left undefined so that\n * accidental property reads surface as undefined (the same as a real node\n * that lacks the field) rather than silently returning a wrong value.\n */\n\ninterface MockNode {\n  type: string;\n  text: string;\n  isNamed?: boolean;\n  startPosition?: { row: number; col?: number };\n  endPosition?: { row: number; col?: number };\n  children?: MockNode[];\n  parent?: MockNode | null;\n  previousSibling?: MockNode | null;\n  childForFieldName?: (name: string) => MockNode | undefined;\n}\n\n/** Build a string node as tree-sitter-ruby emits it: string → string_content */\nfunction makeStringNode(content: string): MockNode {\n  const contentNode: MockNode = { type: 'string_content', text: content };\n  return {\n    type: 'string',\n    text: `\"${content}\"`,\n    children: [contentNode],\n  };\n}\n\n/**\n * Build a mock `call` node for require/require_relative with a single string\n * argument.\n */\nfunction makeRequireCallNode(path: string | null): MockNode {\n  const argChildren: MockNode[] = path !== null ? [makeStringNode(path)] : [];\n  const argList: MockNode = { type: 'argument_list', text: '', children: argChildren };\n  const node: MockNode = {\n    type: 'call',\n    text: '',\n    childForFieldName: (name: string) => (name === 'arguments' ? 
argList : undefined),\n  };\n  return node;\n}\n\n/**\n * Build a call node where the argument list contains a string node whose\n * string_content child is absent (simulates a non-literal string argument).\n */\nfunction makeRequireCallNodeNoContent(): MockNode {\n  const stringNodeNoContent: MockNode = { type: 'string', text: '\"\"', children: [] };\n  const argList: MockNode = { type: 'argument_list', text: '', children: [stringNodeNoContent] };\n  return {\n    type: 'call',\n    text: '',\n    childForFieldName: (name: string) => (name === 'arguments' ? argList : undefined),\n  };\n}\n\n/** Build a mock call node for include/extend/prepend with a chain of parents. */\nfunction makeHeritageCallNode(\n  argNodes: MockNode[],\n  enclosingType: 'class' | 'module' | null,\n  enclosingName: string | null,\n  extraDepth = 0,\n): MockNode {\n  const argList: MockNode = { type: 'argument_list', text: '', children: argNodes };\n  const callNode: MockNode = {\n    type: 'call',\n    text: '',\n    parent: null,\n    childForFieldName: (name: string) => (name === 'arguments' ? argList : undefined),\n  };\n\n  if (enclosingType === null) {\n    // No enclosing class at all — parent chain ends without a class/module node\n    const intermediate: MockNode = { type: 'body', text: '', parent: null };\n    callNode.parent = intermediate;\n    return callNode;\n  }\n\n  // Build extra intermediate nodes to test deep parent walking\n  let leaf: MockNode = callNode;\n  for (let i = 0; i < extraDepth; i++) {\n    const wrapper: MockNode = { type: 'body_statement', text: '', parent: null };\n    leaf.parent = wrapper;\n    leaf = wrapper;\n  }\n\n  const nameNode: MockNode | undefined =\n    enclosingName ? { type: 'constant', text: enclosingName } : undefined;\n\n  const classNode: MockNode = {\n    type: enclosingType,\n    text: '',\n    parent: null,\n    childForFieldName: (name: string) => (name === 'name' ? 
nameNode : undefined),\n  };\n  leaf.parent = classNode;\n\n  return callNode;\n}\n\n/** Build a constant arg node (used as mixin name) */\nfunction makeConstantArg(text: string): MockNode {\n  return { type: 'constant', text };\n}\n\n/** Build a scope_resolution arg node (e.g. Foo::Bar) */\nfunction makeScopeResolutionArg(text: string): MockNode {\n  return { type: 'scope_resolution', text };\n}\n\n/** Build an identifier arg that is neither constant nor scope_resolution */\nfunction makeIdentifierArg(text: string): MockNode {\n  return { type: 'identifier', text };\n}\n\n/** Build a simple_symbol arg (used in attr_accessor etc.) */\nfunction makeSimpleSymbol(name: string, row = 0): MockNode {\n  return {\n    type: 'simple_symbol',\n    text: `:${name}`,\n    startPosition: { row },\n    endPosition: { row },\n  };\n}\n\n/**\n * Build a call node for attr_accessor/attr_reader/attr_writer with optional\n * preceding comment siblings.\n */\nfunction makeAccessorCallNode(\n  symbolArgs: MockNode[],\n  previousSiblings: MockNode[] = [],\n): MockNode {\n  const argList: MockNode = { type: 'argument_list', text: '', children: symbolArgs };\n\n  // Link previousSiblings as a chain (last element is the direct previousSibling)\n  let prevSibling: MockNode | null = null;\n  for (const s of previousSiblings) {\n    s.previousSibling = prevSibling;\n    prevSibling = s;\n  }\n\n  const callNode: MockNode = {\n    type: 'call',\n    text: '',\n    previousSibling: prevSibling,\n    childForFieldName: (name: string) => (name === 'arguments' ? 
argList : undefined),\n  };\n  return callNode;\n}\n\n/** Build a comment node (isNamed = false by default, matching real tree-sitter Ruby) */\nfunction makeCommentNode(text: string, named = false): MockNode {\n  return { type: 'comment', text, isNamed: named };\n}\n\n/** Build a named non-comment sibling (causes yard-scan loop to stop) */\nfunction makeNamedSibling(type = 'expression_statement'): MockNode {\n  return { type, text: '', isNamed: true };\n}\n\n// ── require / require_relative ───────────────────────────────────────────────\n\ndescribe('routeRubyCall — require / require_relative', () => {\n  it('require with a valid string path returns import with isRelative=false', () => {\n    const node = makeRequireCallNode('net/http');\n    const result = routeRubyCall('require', node);\n\n    expect(result).toEqual({ kind: 'import', importPath: 'net/http', isRelative: false });\n  });\n\n  it('require_relative without leading dot prepends \"./\"', () => {\n    const node = makeRequireCallNode('models/user');\n    const result = routeRubyCall('require_relative', node);\n\n    expect(result).toEqual({ kind: 'import', importPath: './models/user', isRelative: true });\n  });\n\n  it('require_relative with path already starting with \".\" does not double-prepend', () => {\n    const node = makeRequireCallNode('./helpers/formatter');\n    const result = routeRubyCall('require_relative', node);\n\n    expect(result).toEqual({ kind: 'import', importPath: './helpers/formatter', isRelative: true });\n  });\n\n  it('require_relative with \"../\" prefix is left unchanged', () => {\n    const node = makeRequireCallNode('../shared/utils');\n    const result = routeRubyCall('require_relative', node);\n\n    expect(result).toEqual({ kind: 'import', importPath: '../shared/utils', isRelative: true });\n  });\n\n  it('returns skip when there is no string_content node (non-literal argument)', () => {\n    const node = makeRequireCallNodeNoContent();\n    
expect(routeRubyCall('require', node)).toEqual({ kind: 'skip' });\n  });\n\n  it('returns skip when import path is an empty string', () => {\n    const node = makeRequireCallNode('');\n    expect(routeRubyCall('require', node)).toEqual({ kind: 'skip' });\n  });\n\n  it('returns skip when import path contains a control character (\\\\x00)', () => {\n    const node = makeRequireCallNode('some\\x00path');\n    expect(routeRubyCall('require', node)).toEqual({ kind: 'skip' });\n  });\n\n  it('returns skip when import path contains a newline control character (\\\\n)', () => {\n    const node = makeRequireCallNode('path\\ninjection');\n    expect(routeRubyCall('require', node)).toEqual({ kind: 'skip' });\n  });\n\n  it('returns skip when import path exceeds 1024 characters', () => {\n    const longPath = 'a'.repeat(1025);\n    const node = makeRequireCallNode(longPath);\n    expect(routeRubyCall('require', node)).toEqual({ kind: 'skip' });\n  });\n\n  it('accepts import path of exactly 1024 characters', () => {\n    const maxPath = 'a'.repeat(1024);\n    const node = makeRequireCallNode(maxPath);\n    const result = routeRubyCall('require', node);\n    expect(result).toEqual({ kind: 'import', importPath: maxPath, isRelative: false });\n  });\n\n  it('returns skip when argument list has no string child at all', () => {\n    // argList has only a non-string child\n    const argList: MockNode = {\n      type: 'argument_list',\n      text: '',\n      children: [{ type: 'identifier', text: 'MY_CONST' }],\n    };\n    const node: MockNode = {\n      type: 'call',\n      text: '',\n      childForFieldName: (name: string) => (name === 'arguments' ? 
argList : undefined),\n    };\n    expect(routeRubyCall('require', node)).toEqual({ kind: 'skip' });\n  });\n\n  it('returns skip when childForFieldName is absent (undefined callNode fields)', () => {\n    // callNode has no childForFieldName method at all\n    const node: MockNode = { type: 'call', text: '' };\n    expect(routeRubyCall('require', node)).toEqual({ kind: 'skip' });\n  });\n});\n\n// ── include / extend / prepend ───────────────────────────────────────────────\n\ndescribe('routeRubyCall — include / extend / prepend', () => {\n  it('include with a single constant arg inside a class returns heritage', () => {\n    const node = makeHeritageCallNode([makeConstantArg('Serializable')], 'class', 'User');\n    const result = routeRubyCall('include', node);\n\n    expect(result).toEqual({\n      kind: 'heritage',\n      items: [{ enclosingClass: 'User', mixinName: 'Serializable', heritageKind: 'include' }],\n    });\n  });\n\n  it('extend with a scope_resolution arg (Foo::Bar) returns heritage', () => {\n    const node = makeHeritageCallNode([makeScopeResolutionArg('ActiveSupport::Concern')], 'class', 'Post');\n    const result = routeRubyCall('extend', node);\n\n    expect(result).toEqual({\n      kind: 'heritage',\n      items: [{ enclosingClass: 'Post', mixinName: 'ActiveSupport::Concern', heritageKind: 'extend' }],\n    });\n  });\n\n  it('prepend records heritageKind as \"prepend\"', () => {\n    const node = makeHeritageCallNode([makeConstantArg('Instrumented')], 'class', 'Service');\n    const result = routeRubyCall('prepend', node);\n\n    expect(result).toEqual({\n      kind: 'heritage',\n      items: [{ enclosingClass: 'Service', mixinName: 'Instrumented', heritageKind: 'prepend' }],\n    });\n  });\n\n  it('include inside a module (not a class) still resolves enclosing name', () => {\n    const node = makeHeritageCallNode([makeConstantArg('Helpers')], 'module', 'ApplicationHelper');\n    const result = routeRubyCall('include', node);\n\n    
expect(result).toEqual({\n      kind: 'heritage',\n      items: [{ enclosingClass: 'ApplicationHelper', mixinName: 'Helpers', heritageKind: 'include' }],\n    });\n  });\n\n  it('include with multiple constant args produces one item per arg', () => {\n    const args = [makeConstantArg('Mod1'), makeConstantArg('Mod2'), makeConstantArg('Mod3')];\n    const node = makeHeritageCallNode(args, 'class', 'MyClass');\n    const result = routeRubyCall('include', node);\n\n    expect(result).toEqual({\n      kind: 'heritage',\n      items: [\n        { enclosingClass: 'MyClass', mixinName: 'Mod1', heritageKind: 'include' },\n        { enclosingClass: 'MyClass', mixinName: 'Mod2', heritageKind: 'include' },\n        { enclosingClass: 'MyClass', mixinName: 'Mod3', heritageKind: 'include' },\n      ],\n    });\n  });\n\n  it('returns skip when no enclosing class or module is found in parent chain', () => {\n    const node = makeHeritageCallNode([makeConstantArg('Mod')], null, null);\n    expect(routeRubyCall('include', node)).toEqual({ kind: 'skip' });\n  });\n\n  it('returns skip when enclosing class node has no name child', () => {\n    // nameNode is undefined — childForFieldName('name') returns undefined\n    const argList: MockNode = { type: 'argument_list', text: '', children: [makeConstantArg('Mod')] };\n    const classNode: MockNode = {\n      type: 'class',\n      text: '',\n      parent: null,\n      childForFieldName: (_name: string) => undefined,\n    };\n    const bodyNode: MockNode = { type: 'body', text: '', parent: classNode };\n    const callNode: MockNode = {\n      type: 'call',\n      text: '',\n      parent: bodyNode,\n      childForFieldName: (name: string) => (name === 'arguments' ? 
argList : undefined),\n    };\n    expect(routeRubyCall('include', callNode)).toEqual({ kind: 'skip' });\n  });\n\n  it('returns skip when arg list contains only non-constant/non-scope_resolution args', () => {\n    const node = makeHeritageCallNode([makeIdentifierArg('some_var')], 'class', 'Foo');\n    expect(routeRubyCall('include', node)).toEqual({ kind: 'skip' });\n  });\n\n  it('returns skip when arg list is empty', () => {\n    const node = makeHeritageCallNode([], 'class', 'Foo');\n    expect(routeRubyCall('include', node)).toEqual({ kind: 'skip' });\n  });\n\n  it('walks nested scopes to find the nearest enclosing class', () => {\n    // callNode is 5 levels deep inside a class body\n    const node = makeHeritageCallNode([makeConstantArg('DeepMixin')], 'class', 'DeepClass', 5);\n    const result = routeRubyCall('include', node);\n\n    expect(result).toEqual({\n      kind: 'heritage',\n      items: [{ enclosingClass: 'DeepClass', mixinName: 'DeepMixin', heritageKind: 'include' }],\n    });\n  });\n\n  it('returns skip when parent depth exceeds MAX_PARENT_DEPTH (50)', () => {\n    // Build a chain of 51 intermediate nodes with no class/module in it\n    const argList: MockNode = {\n      type: 'argument_list',\n      text: '',\n      children: [makeConstantArg('Mod')],\n    };\n    const callNode: MockNode = {\n      type: 'call',\n      text: '',\n      parent: null,\n      childForFieldName: (name: string) => (name === 'arguments' ? 
argList : undefined),\n    };\n\n    let current: MockNode = callNode;\n    // Create 51 parents — all plain body nodes, never a class/module\n    for (let i = 0; i < 51; i++) {\n      const parent: MockNode = { type: 'body_statement', text: '', parent: null };\n      current.parent = parent;\n      current = parent;\n    }\n\n    expect(routeRubyCall('include', callNode)).toEqual({ kind: 'skip' });\n  });\n\n  it('finds class at exactly depth 50 (boundary — must succeed)', () => {\n    // 49 plain wrappers, then the class at depth 50\n    const argList: MockNode = {\n      type: 'argument_list',\n      text: '',\n      children: [makeConstantArg('BoundaryMixin')],\n    };\n    const callNode: MockNode = {\n      type: 'call',\n      text: '',\n      parent: null,\n      childForFieldName: (name: string) => (name === 'arguments' ? argList : undefined),\n    };\n\n    let leaf: MockNode = callNode;\n    for (let i = 0; i < 49; i++) {\n      const wrapper: MockNode = { type: 'body_statement', text: '', parent: null };\n      leaf.parent = wrapper;\n      leaf = wrapper;\n    }\n\n    const nameNode: MockNode = { type: 'constant', text: 'BoundaryClass' };\n    const classNode: MockNode = {\n      type: 'class',\n      text: '',\n      parent: null,\n      childForFieldName: (name: string) => (name === 'name' ? 
nameNode : undefined),\n    };\n    leaf.parent = classNode;\n\n    const result = routeRubyCall('include', callNode);\n    expect(result).toEqual({\n      kind: 'heritage',\n      items: [{ enclosingClass: 'BoundaryClass', mixinName: 'BoundaryMixin', heritageKind: 'include' }],\n    });\n  });\n\n  it('skips non-constant args mixed with constant args, collecting only constants', () => {\n    const args = [\n      makeIdentifierArg('local_var'),\n      makeConstantArg('ValidMixin'),\n      makeIdentifierArg('another_var'),\n    ];\n    const node = makeHeritageCallNode(args, 'class', 'Foo');\n    const result = routeRubyCall('include', node);\n\n    expect(result).toEqual({\n      kind: 'heritage',\n      items: [{ enclosingClass: 'Foo', mixinName: 'ValidMixin', heritageKind: 'include' }],\n    });\n  });\n});\n\n// ── attr_accessor / attr_reader / attr_writer ────────────────────────────────\n\ndescribe('routeRubyCall — attr_accessor / attr_reader / attr_writer', () => {\n  it('attr_accessor with a single symbol returns a property item', () => {\n    const node = makeAccessorCallNode([makeSimpleSymbol('name', 5)]);\n    const result = routeRubyCall('attr_accessor', node);\n\n    expect(result).toEqual({\n      kind: 'properties',\n      items: [{ propName: 'name', accessorType: 'attr_accessor', startLine: 5, endLine: 5 }],\n    });\n  });\n\n  it('attr_reader sets accessorType to \"attr_reader\"', () => {\n    const node = makeAccessorCallNode([makeSimpleSymbol('age', 3)]);\n    const result = routeRubyCall('attr_reader', node);\n\n    expect(result).toEqual({\n      kind: 'properties',\n      items: [{ propName: 'age', accessorType: 'attr_reader', startLine: 3, endLine: 3 }],\n    });\n  });\n\n  it('attr_writer sets accessorType to \"attr_writer\"', () => {\n    const node = makeAccessorCallNode([makeSimpleSymbol('email', 7)]);\n    const result = routeRubyCall('attr_writer', node);\n\n    expect(result).toEqual({\n      kind: 'properties',\n      items: [{ 
propName: 'email', accessorType: 'attr_writer', startLine: 7, endLine: 7 }],\n    });\n  });\n\n  it('strips leading colon from symbol text', () => {\n    // makeSimpleSymbol already prefixes with ':', this validates the slice(1) branch\n    const symNode: MockNode = {\n      type: 'simple_symbol',\n      text: ':title',\n      startPosition: { row: 2 },\n      endPosition: { row: 2 },\n    };\n    const node = makeAccessorCallNode([symNode]);\n    const result = routeRubyCall('attr_accessor', node);\n\n    expect(result).toMatchObject({ kind: 'properties', items: [{ propName: 'title' }] });\n  });\n\n  it('handles symbol text without colon prefix (no double-strip)', () => {\n    // Simulate a symbol whose text does NOT start with ':'\n    const symNode: MockNode = {\n      type: 'simple_symbol',\n      text: 'status',\n      startPosition: { row: 1 },\n      endPosition: { row: 1 },\n    };\n    const node = makeAccessorCallNode([symNode]);\n    const result = routeRubyCall('attr_accessor', node);\n\n    expect(result).toMatchObject({ kind: 'properties', items: [{ propName: 'status' }] });\n  });\n\n  it('multiple symbols produce one item each', () => {\n    const args = [makeSimpleSymbol('first_name', 10), makeSimpleSymbol('last_name', 10), makeSimpleSymbol('dob', 10)];\n    const node = makeAccessorCallNode(args);\n    const result = routeRubyCall('attr_accessor', node);\n\n    expect(result).toEqual({\n      kind: 'properties',\n      items: [\n        { propName: 'first_name', accessorType: 'attr_accessor', startLine: 10, endLine: 10 },\n        { propName: 'last_name', accessorType: 'attr_accessor', startLine: 10, endLine: 10 },\n        { propName: 'dob',        accessorType: 'attr_accessor', startLine: 10, endLine: 10 },\n      ],\n    });\n  });\n\n  it('extracts simple YARD @return [Type] from preceding comment', () => {\n    const comment = makeCommentNode('# @return [Address]');\n    const node = makeAccessorCallNode([makeSimpleSymbol('address', 20)], 
[comment]);\n    const result = routeRubyCall('attr_accessor', node);\n\n    expect(result).toMatchObject({\n      kind: 'properties',\n      items: [{ propName: 'address', declaredType: 'Address' }],\n    });\n  });\n\n  it('extracts only the leading type name from compound YARD type (Array<User> → \"Array\")', () => {\n    // The regex captures \"Array<User>\"; the simple match grabs the first uppercase word \"Array\"\n    const comment = makeCommentNode('# @return [Array<User>]');\n    const node = makeAccessorCallNode([makeSimpleSymbol('users', 15)], [comment]);\n    const result = routeRubyCall('attr_accessor', node);\n\n    expect(result).toMatchObject({\n      kind: 'properties',\n      items: [{ declaredType: 'Array' }],\n    });\n  });\n\n  it('extracts type from YARD comment with extra whitespace inside brackets', () => {\n    const comment = makeCommentNode('#  @return [  Integer  ]');\n    const node = makeAccessorCallNode([makeSimpleSymbol('count', 8)], [comment]);\n    const result = routeRubyCall('attr_accessor', node);\n\n    expect(result).toMatchObject({\n      kind: 'properties',\n      items: [{ declaredType: 'Integer' }],\n    });\n  });\n\n  it('does not set declaredType when no YARD comment precedes the call', () => {\n    const node = makeAccessorCallNode([makeSimpleSymbol('score', 12)]);\n    const result = routeRubyCall('attr_accessor', node);\n\n    expect(result).toMatchObject({ kind: 'properties', items: [{ propName: 'score' }] });\n    const item = (result as any).items[0];\n    expect(item.declaredType).toBeUndefined();\n  });\n\n  it('does not set declaredType when comment has no @return annotation', () => {\n    const comment = makeCommentNode('# This accessor stores the user name');\n    const node = makeAccessorCallNode([makeSimpleSymbol('user_name', 9)], [comment]);\n    const result = routeRubyCall('attr_accessor', node);\n\n    const item = (result as any).items[0];\n    expect(item.declaredType).toBeUndefined();\n  });\n\n  
it('does not set declaredType when YARD type starts with lowercase (not ^[A-Z])', () => {\n    // e.g. \"@return [string]\" — the lowercase first char fails the simple match, raw.match(/^([A-Z]\\w*)/)\n    const comment = makeCommentNode('# @return [string]');\n    const node = makeAccessorCallNode([makeSimpleSymbol('label', 4)], [comment]);\n    const result = routeRubyCall('attr_accessor', node);\n\n    const item = (result as any).items[0];\n    expect(item.declaredType).toBeUndefined();\n  });\n\n  it('stops sibling scan at a non-comment named sibling before reaching a comment', () => {\n    // Ordered as [comment, namedSibling] — namedSibling is the direct previousSibling,\n    // so the scan hits it first and stops before reading the comment\n    const yardComment = makeCommentNode('# @return [User]');\n    const named = makeNamedSibling();\n    const node = makeAccessorCallNode([makeSimpleSymbol('owner', 6)], [yardComment, named]);\n    // named is last in the array → becomes direct previousSibling\n    const result = routeRubyCall('attr_accessor', node);\n\n    const item = (result as any).items[0];\n    expect(item.declaredType).toBeUndefined();\n  });\n\n  it('continues past unnamed (non-named) siblings to find a YARD comment', () => {\n    // An unnamed whitespace/punctuation node between the comment and the call\n    const unnamedNode: MockNode = { type: 'newline', text: '\\n', isNamed: false };\n    const comment = makeCommentNode('# @return [Order]');\n    // siblings in order oldest→newest; the last becomes the direct previousSibling\n    const node = makeAccessorCallNode([makeSimpleSymbol('order', 30)], [comment, unnamedNode]);\n    const result = routeRubyCall('attr_accessor', node);\n\n    expect(result).toMatchObject({\n      kind: 'properties',\n      items: [{ declaredType: 'Order' }],\n    });\n  });\n\n  it('returns skip when arg list contains no simple_symbol nodes', () => {\n    // Only an identifier node — not a symbol\n    const identArg: MockNode 
= { type: 'identifier', text: 'some_var' };\n    const node = makeAccessorCallNode([identArg]);\n    expect(routeRubyCall('attr_accessor', node)).toEqual({ kind: 'skip' });\n  });\n\n  it('returns skip when arg list is empty', () => {\n    const node = makeAccessorCallNode([]);\n    expect(routeRubyCall('attr_accessor', node)).toEqual({ kind: 'skip' });\n  });\n\n  it('records correct startLine and endLine from symbol node positions', () => {\n    const sym: MockNode = {\n      type: 'simple_symbol',\n      text: ':created_at',\n      startPosition: { row: 42 },\n      endPosition: { row: 42 },\n    };\n    const node = makeAccessorCallNode([sym]);\n    const result = routeRubyCall('attr_accessor', node);\n\n    expect(result).toMatchObject({\n      kind: 'properties',\n      items: [{ startLine: 42, endLine: 42 }],\n    });\n  });\n});\n\n// ── default case ─────────────────────────────────────────────────────────────\n\ndescribe('routeRubyCall — default (unknown method name)', () => {\n  it('returns {kind: \"call\"} for an arbitrary method name', () => {\n    const node: MockNode = { type: 'call', text: '' };\n    expect(routeRubyCall('some_method', node)).toEqual({ kind: 'call' });\n  });\n\n  it('returns {kind: \"call\"} for an empty method name string', () => {\n    const node: MockNode = { type: 'call', text: '' };\n    expect(routeRubyCall('', node)).toEqual({ kind: 'call' });\n  });\n\n  it('returns {kind: \"call\"} for a realistic method name (save, render, etc.)', () => {\n    const node: MockNode = { type: 'call', text: '' };\n    expect(routeRubyCall('render', node)).toEqual({ kind: 'call' });\n    expect(routeRubyCall('save', node)).toEqual({ kind: 'call' });\n    expect(routeRubyCall('destroy', node)).toEqual({ kind: 'call' });\n  });\n});\n\n// ── callRouters dispatch table ────────────────────────────────────────────────\n\ndescribe('callRouters dispatch table', () => {\n  it('non-Ruby languages return null (noRouting passthrough)', () => {\n    
const dummyNode = { type: 'call', text: '' };\n    const nonRubyLanguages: SupportedLanguages[] = [\n      SupportedLanguages.JavaScript,\n      SupportedLanguages.TypeScript,\n      SupportedLanguages.Python,\n      SupportedLanguages.Java,\n      SupportedLanguages.Kotlin,\n      SupportedLanguages.Go,\n      SupportedLanguages.Rust,\n      SupportedLanguages.CSharp,\n      SupportedLanguages.PHP,\n      SupportedLanguages.Swift,\n      SupportedLanguages.CPlusPlus,\n      SupportedLanguages.C,\n    ];\n    for (const lang of nonRubyLanguages) {\n      expect(callRouters[lang]('require', dummyNode)).toBeNull();\n    }\n  });\n\n  it('Ruby router is routeRubyCall (delegates correctly for require)', () => {\n    const node = makeRequireCallNode('json');\n    const result = callRouters[SupportedLanguages.Ruby]('require', node);\n    expect(result).toEqual({ kind: 'import', importPath: 'json', isRelative: false });\n  });\n\n  it('Ruby router returns {kind: \"call\"} for an unknown method name', () => {\n    const node: MockNode = { type: 'call', text: '' };\n    const result = callRouters[SupportedLanguages.Ruby]('render', node);\n    expect(result).toEqual({ kind: 'call' });\n  });\n\n  it('callRouters covers every SupportedLanguages value (no missing key)', () => {\n    const allLanguages = Object.values(SupportedLanguages) as SupportedLanguages[];\n    for (const lang of allLanguages) {\n      expect(typeof callRouters[lang]).toBe('function');\n    }\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/calltool-dispatch.test.ts",
    "content": "/**\n * Unit Tests: LocalBackend callTool dispatch & lifecycle\n *\n * Tests the callTool dispatch logic, resolveRepo, init/disconnect,\n * error cases, and silent failure patterns — all with mocked LadybugDB.\n *\n * These are pure unit tests that mock the LadybugDB layer to test\n * the dispatch and error handling logic in isolation.\n */\nimport { describe, it, expect, vi, beforeEach } from 'vitest';\n\n// We need to mock the LadybugDB adapter and repo-manager BEFORE importing LocalBackend\nvi.mock('../../src/mcp/core/lbug-adapter.js', () => ({\n  initLbug: vi.fn().mockResolvedValue(undefined),\n  executeQuery: vi.fn().mockResolvedValue([]),\n  executeParameterized: vi.fn().mockResolvedValue([]),\n  closeLbug: vi.fn().mockResolvedValue(undefined),\n  isLbugReady: vi.fn().mockReturnValue(true),\n}));\n\nvi.mock('../../src/storage/repo-manager.js', () => ({\n  listRegisteredRepos: vi.fn().mockResolvedValue([]),\n  cleanupOldKuzuFiles: vi.fn().mockResolvedValue({ found: false, needsReindex: false }),\n}));\n\n// Also mock the search modules to avoid loading onnxruntime\nvi.mock('../../src/core/search/bm25-index.js', () => ({\n  searchFTSFromLbug: vi.fn().mockResolvedValue([]),\n}));\n\nvi.mock('../../src/mcp/core/embedder.js', () => ({\n  embedQuery: vi.fn().mockResolvedValue([]),\n  getEmbeddingDims: vi.fn().mockReturnValue(384),\n}));\n\nimport { LocalBackend } from '../../src/mcp/local/local-backend.js';\nimport { listRegisteredRepos, cleanupOldKuzuFiles } from '../../src/storage/repo-manager.js';\nimport { initLbug, executeQuery, executeParameterized, isLbugReady, closeLbug } from '../../src/mcp/core/lbug-adapter.js';\n\n// ─── Helpers ─────────────────────────────────────────────────────────\n\nconst MOCK_REPO_ENTRY = {\n  name: 'test-project',\n  path: '/tmp/test-project',\n  storagePath: '/tmp/.gitnexus/test-project',\n  indexedAt: '2024-06-01T12:00:00Z',\n  lastCommit: 'abc1234567890',\n  stats: { files: 10, nodes: 50, edges: 100, 
communities: 3, processes: 5 },\n};\n\nfunction setupSingleRepo() {\n  (listRegisteredRepos as any).mockResolvedValue([MOCK_REPO_ENTRY]);\n}\n\nfunction setupMultipleRepos() {\n  (listRegisteredRepos as any).mockResolvedValue([\n    MOCK_REPO_ENTRY,\n    {\n      ...MOCK_REPO_ENTRY,\n      name: 'other-project',\n      path: '/tmp/other-project',\n      storagePath: '/tmp/.gitnexus/other-project',\n    },\n  ]);\n}\n\nfunction setupNoRepos() {\n  (listRegisteredRepos as any).mockResolvedValue([]);\n}\n\n// ─── LocalBackend lifecycle ──────────────────────────────────────────\n\ndescribe('LocalBackend.init', () => {\n  let backend: LocalBackend;\n\n  beforeEach(() => {\n    backend = new LocalBackend();\n    vi.clearAllMocks();\n  });\n\n  it('returns true when repos are available', async () => {\n    setupSingleRepo();\n    const result = await backend.init();\n    expect(result).toBe(true);\n  });\n\n  it('returns false when no repos are registered', async () => {\n    setupNoRepos();\n    const result = await backend.init();\n    expect(result).toBe(false);\n  });\n\n  it('calls listRegisteredRepos with validate: true', async () => {\n    setupSingleRepo();\n    await backend.init();\n    expect(listRegisteredRepos).toHaveBeenCalledWith({ validate: true });\n  });\n});\n\ndescribe('LocalBackend.disconnect', () => {\n  let backend: LocalBackend;\n\n  beforeEach(() => {\n    backend = new LocalBackend();\n    vi.clearAllMocks();\n  });\n\n  it('does not throw when no repos are initialized', async () => {\n    setupNoRepos();\n    await backend.init();\n    await expect(backend.disconnect()).resolves.not.toThrow();\n  });\n\n  it('calls closeLbug on disconnect', async () => {\n    setupSingleRepo();\n    await backend.init();\n    await backend.disconnect();\n    expect(closeLbug).toHaveBeenCalled();\n  });\n});\n\n// ─── callTool dispatch ───────────────────────────────────────────────\n\ndescribe('LocalBackend.callTool', () => {\n  let backend: LocalBackend;\n\n  
beforeEach(async () => {\n    vi.clearAllMocks();\n    backend = new LocalBackend();\n    setupSingleRepo();\n    await backend.init();\n  });\n\n  it('routes list_repos without needing repo param', async () => {\n    const result = await backend.callTool('list_repos', {});\n    expect(Array.isArray(result)).toBe(true);\n    expect(result[0].name).toBe('test-project');\n  });\n\n  it('throws for unknown tool name', async () => {\n    await expect(backend.callTool('nonexistent_tool', {}))\n      .rejects.toThrow('Unknown tool: nonexistent_tool');\n  });\n\n  it('dispatches query tool', async () => {\n    (executeParameterized as any).mockResolvedValue([]);\n    const result = await backend.callTool('query', { query: 'auth' });\n    expect(result).toHaveProperty('processes');\n    expect(result).toHaveProperty('definitions');\n  });\n\n  it('query tool returns error for empty query', async () => {\n    const result = await backend.callTool('query', { query: '' });\n    expect(result.error).toContain('query parameter is required');\n  });\n\n  it('query tool returns error for whitespace-only query', async () => {\n    const result = await backend.callTool('query', { query: '   ' });\n    expect(result.error).toContain('query parameter is required');\n  });\n\n  it('dispatches cypher tool and blocks write queries', async () => {\n    const result = await backend.callTool('cypher', { query: 'CREATE (n:Test)' });\n    expect(result).toHaveProperty('error');\n    expect(result.error).toContain('Write operations');\n  });\n\n  it('dispatches cypher tool with valid read query', async () => {\n    (executeQuery as any).mockResolvedValue([\n      { name: 'test', filePath: 'src/test.ts' },\n    ]);\n    const result = await backend.callTool('cypher', {\n      query: 'MATCH (n:Function) RETURN n.name AS name, n.filePath AS filePath LIMIT 5',\n    });\n    // formatCypherAsMarkdown returns { markdown, row_count } for tabular results\n    
expect(result).toHaveProperty('markdown');\n    expect(result).toHaveProperty('row_count');\n    expect(result.row_count).toBe(1);\n  });\n\n  it('dispatches context tool', async () => {\n    (executeParameterized as any).mockResolvedValue([\n      { id: 'func:main', name: 'main', type: 'Function', filePath: 'src/index.ts', startLine: 1, endLine: 10 },\n    ]);\n    const result = await backend.callTool('context', { name: 'main' });\n    expect(result.status).toBe('found');\n    expect(result.symbol.name).toBe('main');\n  });\n\n  it('context tool returns error when name and uid are both missing', async () => {\n    const result = await backend.callTool('context', {});\n    expect(result.error).toContain('Either \"name\" or \"uid\"');\n  });\n\n  it('context tool returns not-found for missing symbol', async () => {\n    (executeParameterized as any).mockResolvedValue([]);\n    const result = await backend.callTool('context', { name: 'doesNotExist' });\n    expect(result.error).toContain('not found');\n  });\n\n  it('context tool returns disambiguation for multiple matches', async () => {\n    (executeParameterized as any).mockResolvedValue([\n      { id: 'func:main:1', name: 'main', type: 'Function', filePath: 'src/a.ts', startLine: 1, endLine: 5 },\n      { id: 'func:main:2', name: 'main', type: 'Function', filePath: 'src/b.ts', startLine: 1, endLine: 5 },\n    ]);\n    const result = await backend.callTool('context', { name: 'main' });\n    expect(result.status).toBe('ambiguous');\n    expect(result.candidates).toHaveLength(2);\n  });\n\n  it('dispatches impact tool', async () => {\n    // impact() calls executeParameterized to find target, then executeQuery for traversal\n    (executeParameterized as any).mockResolvedValue([\n      { id: 'func:main', name: 'main', type: 'Function', filePath: 'src/index.ts' },\n    ]);\n    (executeQuery as any).mockResolvedValue([]);\n\n    const result = await backend.callTool('impact', { target: 'main', direction: 'upstream' 
});\n    expect(result).toBeDefined();\n    expect(result.target).toBeDefined();\n  });\n\n  it('dispatches detect_changes tool', async () => {\n    // detect_changes calls execFileSync which we haven't mocked at module level,\n    // so it will throw a git error — that's fine, we test the error path\n    const result = await backend.callTool('detect_changes', { scope: 'unstaged' });\n    // Should either return changes or a git error\n    expect(result).toBeDefined();\n    expect(result.error || result.summary).toBeDefined();\n  });\n\n  it('dispatches rename tool', async () => {\n    (executeParameterized as any)\n      .mockResolvedValueOnce([\n        { id: 'func:oldName', name: 'oldName', type: 'Function', filePath: 'src/test.ts', startLine: 1, endLine: 5 },\n      ])\n      .mockResolvedValue([]);\n\n    const result = await backend.callTool('rename', {\n      symbol_name: 'oldName',\n      new_name: 'newName',\n      dry_run: true,\n    });\n    expect(result).toBeDefined();\n  });\n\n  it('rename returns error when both symbol_name and symbol_uid are missing', async () => {\n    const result = await backend.callTool('rename', { new_name: 'newName' });\n    expect(result.error).toContain('Either symbol_name or symbol_uid');\n  });\n\n  // Legacy tool aliases\n  it('dispatches \"search\" as alias for query', async () => {\n    (executeParameterized as any).mockResolvedValue([]);\n    const result = await backend.callTool('search', { query: 'auth' });\n    expect(result).toHaveProperty('processes');\n  });\n\n  it('dispatches \"explore\" as alias for context', async () => {\n    (executeParameterized as any).mockResolvedValue([\n      { id: 'func:main', name: 'main', type: 'Function', filePath: 'src/index.ts', startLine: 1, endLine: 10 },\n    ]);\n    const result = await backend.callTool('explore', { name: 'main' });\n    // explore calls context — which may return found or ambiguous depending on mock\n    expect(result).toBeDefined();\n    
expect(result.status === 'found' || result.symbol || result.error === undefined).toBeTruthy();\n  });\n});\n\n// ─── Repo resolution ────────────────────────────────────────────────\n\ndescribe('LocalBackend.resolveRepo', () => {\n  let backend: LocalBackend;\n\n  beforeEach(async () => {\n    vi.clearAllMocks();\n    backend = new LocalBackend();\n  });\n\n  it('resolves single repo without param', async () => {\n    setupSingleRepo();\n    await backend.init();\n    const result = await backend.callTool('list_repos', {});\n    expect(result).toHaveLength(1);\n  });\n\n  it('throws when no repos are registered', async () => {\n    setupNoRepos();\n    await backend.init();\n    await expect(backend.callTool('query', { query: 'test' }))\n      .rejects.toThrow('No indexed repositories');\n  });\n\n  it('throws for ambiguous repos without param', async () => {\n    setupMultipleRepos();\n    await backend.init();\n    await expect(backend.callTool('query', { query: 'test' }))\n      .rejects.toThrow('Multiple repositories indexed');\n  });\n\n  it('resolves repo by name parameter', async () => {\n    setupMultipleRepos();\n    await backend.init();\n    // With repo param, it should resolve correctly\n    (executeParameterized as any).mockResolvedValue([]);\n    const result = await backend.callTool('query', {\n      query: 'auth',\n      repo: 'test-project',\n    });\n    expect(result).toHaveProperty('processes');\n  });\n\n  it('throws for unknown repo name', async () => {\n    setupSingleRepo();\n    await backend.init();\n    await expect(backend.callTool('query', { query: 'test', repo: 'nonexistent' }))\n      .rejects.toThrow('not found');\n  });\n\n  it('resolves repo case-insensitively', async () => {\n    setupSingleRepo();\n    await backend.init();\n    (executeParameterized as any).mockResolvedValue([]);\n    // Should match even with different case\n    const result = await backend.callTool('query', {\n      query: 'test',\n      repo: 
'Test-Project',\n    });\n    expect(result).toHaveProperty('processes');\n  });\n\n  it('refreshes registry on repo miss', async () => {\n    setupNoRepos();\n    await backend.init();\n\n    // Now make a repo appear\n    (listRegisteredRepos as any).mockResolvedValue([MOCK_REPO_ENTRY]);\n\n    // The resolve should re-read the registry and find the new repo\n    (executeParameterized as any).mockResolvedValue([]);\n    const result = await backend.callTool('query', {\n      query: 'test',\n      repo: 'test-project',\n    });\n    expect(result).toHaveProperty('processes');\n    // listRegisteredRepos should have been called again\n    expect(listRegisteredRepos).toHaveBeenCalledTimes(2); // once in init, once in refreshRepos\n  });\n});\n\n// ─── getContext ──────────────────────────────────────────────────────\n\ndescribe('LocalBackend.getContext', () => {\n  let backend: LocalBackend;\n\n  beforeEach(async () => {\n    vi.clearAllMocks();\n    backend = new LocalBackend();\n    setupSingleRepo();\n    await backend.init();\n  });\n\n  it('returns context for single repo without specifying id', () => {\n    const ctx = backend.getContext();\n    expect(ctx).not.toBeNull();\n    expect(ctx!.projectName).toBe('test-project');\n    expect(ctx!.stats.fileCount).toBe(10);\n    expect(ctx!.stats.functionCount).toBe(50);\n  });\n\n  it('returns context by repo id', () => {\n    const ctx = backend.getContext('test-project');\n    expect(ctx).not.toBeNull();\n    expect(ctx!.projectName).toBe('test-project');\n  });\n\n  it('returns single repo context even with unknown id (single-repo fallback)', () => {\n    // When only 1 repo is registered, getContext falls through the id check\n    // and returns the single repo's context. 
This is intentional behavior.\n    const ctx = backend.getContext('nonexistent');\n    // The id doesn't match, but since repos.size === 1, it returns that single context\n    // This is the actual behavior — test documents it\n    expect(ctx).not.toBeNull();\n    expect(ctx!.projectName).toBe('test-project');\n  });\n});\n\n// ─── LadybugDB lazy initialization ──────────────────────────────────────\n\ndescribe('ensureInitialized', () => {\n  let backend: LocalBackend;\n\n  beforeEach(async () => {\n    vi.clearAllMocks();\n    backend = new LocalBackend();\n    setupSingleRepo();\n    await backend.init();\n  });\n\n  it('calls initLbug on first tool call', async () => {\n    (executeParameterized as any).mockResolvedValue([]);\n    await backend.callTool('query', { query: 'test' });\n    expect(initLbug).toHaveBeenCalled();\n  });\n\n  it('retries initLbug if connection was evicted', async () => {\n    (executeParameterized as any).mockResolvedValue([]);\n    // First call initializes\n    await backend.callTool('query', { query: 'test' });\n    expect(initLbug).toHaveBeenCalledTimes(1);\n\n    // Simulate idle eviction\n    (isLbugReady as any).mockReturnValueOnce(false);\n    await backend.callTool('query', { query: 'test' });\n    expect(initLbug).toHaveBeenCalledTimes(2);\n  });\n\n  it('handles initLbug failure gracefully', async () => {\n    (initLbug as any).mockRejectedValueOnce(new Error('DB locked'));\n    await expect(backend.callTool('query', { query: 'test' }))\n      .rejects.toThrow('DB locked');\n  });\n});\n\n// ─── Cypher write blocking through callTool ──────────────────────────\n\ndescribe('callTool cypher write blocking', () => {\n  let backend: LocalBackend;\n\n  beforeEach(async () => {\n    vi.clearAllMocks();\n    backend = new LocalBackend();\n    setupSingleRepo();\n    await backend.init();\n  });\n\n  const writeQueries = [\n    'CREATE (n:Function {name: \"test\"})',\n    'MATCH (n) DELETE n',\n    'MATCH (n) SET n.name = 
\"hacked\"',\n    'MERGE (n:Function {name: \"test\"})',\n    'MATCH (n) REMOVE n.name',\n    'DROP TABLE Function',\n    'ALTER TABLE Function ADD COLUMN foo STRING',\n    'COPY Function FROM \"file.csv\"',\n    'MATCH (n) DETACH DELETE n',\n  ];\n\n  for (const query of writeQueries) {\n    it(`blocks write query: ${query.slice(0, 30)}...`, async () => {\n      const result = await backend.callTool('cypher', { query });\n      expect(result).toHaveProperty('error');\n      expect(result.error).toContain('Write operations');\n    });\n  }\n\n  it('allows read query through callTool', async () => {\n    (executeQuery as any).mockResolvedValue([]);\n    const result = await backend.callTool('cypher', {\n      query: 'MATCH (n:Function) RETURN n.name LIMIT 5',\n    });\n    // Should not have error property with write-block message\n    expect(result.error).toBeUndefined();\n  });\n});\n\n// ─── listRepos ──────────────────────────────────────────────────────\n\ndescribe('LocalBackend.listRepos', () => {\n  let backend: LocalBackend;\n\n  beforeEach(async () => {\n    vi.clearAllMocks();\n    backend = new LocalBackend();\n  });\n\n  it('returns empty array when no repos', async () => {\n    setupNoRepos();\n    await backend.init();\n    const repos = await backend.callTool('list_repos', {});\n    expect(repos).toEqual([]);\n  });\n\n  it('returns repo metadata', async () => {\n    setupSingleRepo();\n    await backend.init();\n    const repos = await backend.callTool('list_repos', {});\n    expect(repos).toHaveLength(1);\n    expect(repos[0]).toEqual(expect.objectContaining({\n      name: 'test-project',\n      path: '/tmp/test-project',\n      indexedAt: expect.any(String),\n      lastCommit: expect.any(String),\n    }));\n  });\n\n  it('re-reads registry on each listRepos call', async () => {\n    setupSingleRepo();\n    await backend.init();\n    await backend.callTool('list_repos', {});\n    await backend.callTool('list_repos', {});\n    // listRegisteredRepos 
called: once in init, once per listRepos\n    expect(listRegisteredRepos).toHaveBeenCalledTimes(3);\n  });\n});\n\n// ─── Cypher LadybugDB not ready ────────────────────────────────────────\n\ndescribe('cypher tool LadybugDB not ready', () => {\n  let backend: LocalBackend;\n\n  beforeEach(async () => {\n    vi.clearAllMocks();\n    backend = new LocalBackend();\n    setupSingleRepo();\n    await backend.init();\n  });\n\n  it('returns error when LadybugDB is not ready', async () => {\n    (isLbugReady as any).mockReturnValue(false);\n    // ensureInitialized sees isLbugReady() === false and re-runs initLbug (which\n    // succeeds); cypher's own readiness check then also returns false and errors.\n    (isLbugReady as any)\n      .mockReturnValueOnce(false)  // ensureInitialized check\n      .mockReturnValueOnce(false); // cypher's own check\n\n    const result = await backend.callTool('cypher', {\n      query: 'MATCH (n) RETURN n LIMIT 1',\n    });\n    expect(result.error).toContain('LadybugDB not ready');\n  });\n});\n\n// ─── formatCypherAsMarkdown ──────────────────────────────────────────\n\ndescribe('cypher result formatting', () => {\n  let backend: LocalBackend;\n\n  beforeEach(async () => {\n    // Full reset of all mocks to prevent state leaking from other tests\n    vi.resetAllMocks();\n    (listRegisteredRepos as any).mockResolvedValue([MOCK_REPO_ENTRY]);\n    (cleanupOldKuzuFiles as any).mockResolvedValue({ found: false, needsReindex: false });\n    (initLbug as any).mockResolvedValue(undefined);\n    (isLbugReady as any).mockReturnValue(true);\n    (closeLbug as any).mockResolvedValue(undefined);\n    (executeParameterized as any).mockResolvedValue([]);\n\n    backend = new LocalBackend();\n    await backend.init();\n  });\n\n  it('formats tabular results as markdown table', async () => {\n    (executeQuery as any).mockResolvedValue([\n      { name: 'main', filePath: 'src/index.ts' },\n   
   { name: 'helper', filePath: 'src/utils.ts' },\n    ]);\n    const result = await backend.callTool('cypher', {\n      query: 'MATCH (n:Function) RETURN n.name AS name, n.filePath AS filePath',\n    });\n    expect(result).toHaveProperty('markdown');\n    expect(result.markdown).toContain('name');\n    expect(result.markdown).toContain('main');\n    expect(result.row_count).toBe(2);\n  });\n\n  it('returns empty array as-is', async () => {\n    (executeQuery as any).mockResolvedValue([]);\n    const result = await backend.callTool('cypher', {\n      query: 'MATCH (n:Function) RETURN n.name LIMIT 0',\n    });\n    expect(result).toEqual([]);\n  });\n\n  it('returns error object when cypher fails', async () => {\n    (executeQuery as any).mockRejectedValue(new Error('Syntax error'));\n    const result = await backend.callTool('cypher', {\n      query: 'INVALID CYPHER SYNTAX',\n    });\n    expect(result).toHaveProperty('error');\n    expect(result.error).toContain('Syntax error');\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/cli-commands.test.ts",
    "content": "import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';\n\n// Mock all the heavy imports before importing index\nvi.mock('../../src/cli/analyze.js', () => ({\n  analyzeCommand: vi.fn(),\n}));\nvi.mock('../../src/cli/mcp.js', () => ({\n  mcpCommand: vi.fn(),\n}));\nvi.mock('../../src/cli/setup.js', () => ({\n  setupCommand: vi.fn(),\n}));\n\ndescribe('CLI commands', () => {\n  describe('version', () => {\n    it('package.json has a valid version string', async () => {\n      const pkg = await import('../../package.json', { with: { type: 'json' } });\n      expect(pkg.default.version).toMatch(/^\\d+\\.\\d+\\.\\d+/);\n    });\n  });\n\n  describe('package.json scripts', () => {\n    it('has test scripts configured', async () => {\n      const pkg = await import('../../package.json', { with: { type: 'json' } });\n      expect(pkg.default.scripts.test).toBeDefined();\n      expect(pkg.default.scripts['test:integration']).toBeDefined();\n      expect(pkg.default.scripts['test:unit']).toBeDefined();\n    });\n\n    it('has build script', async () => {\n      const pkg = await import('../../package.json', { with: { type: 'json' } });\n      expect(pkg.default.scripts.build).toBeDefined();\n    });\n  });\n\n  describe('package.json bin entry', () => {\n    it('exposes gitnexus binary', async () => {\n      const pkg = await import('../../package.json', { with: { type: 'json' } });\n      expect(pkg.default.bin).toBeDefined();\n      // bin is either a single path string or an object keyed by binary name\n      expect(typeof (pkg.default.bin.gitnexus ?? pkg.default.bin)).toBe('string');\n    });\n  });\n\n  describe('analyzeCommand', () => {\n    it('is a function', async () => {\n      const { analyzeCommand } = await import('../../src/cli/analyze.js');\n      expect(typeof analyzeCommand).toBe('function');\n    });\n  });\n\n  describe('mcpCommand', () => {\n    it('is a function', async () => {\n      const { mcpCommand } = await import('../../src/cli/mcp.js');\n      expect(typeof mcpCommand).toBe('function');\n    });\n  
});\n\n  describe('setupCommand', () => {\n    it('is a function', async () => {\n      const { setupCommand } = await import('../../src/cli/setup.js');\n      expect(typeof setupCommand).toBe('function');\n    });\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/cli-index-help.test.ts",
    "content": "import { spawnSync } from 'node:child_process';\nimport path from 'node:path';\nimport { fileURLToPath } from 'node:url';\nimport { describe, expect, it } from 'vitest';\n\nconst testDir = path.dirname(fileURLToPath(import.meta.url));\nconst repoRoot = path.resolve(testDir, '../..');\nconst cliEntry = path.join(repoRoot, 'src/cli/index.ts');\n\nfunction runHelp(command: string) {\n  return spawnSync(process.execPath, ['--import', 'tsx', cliEntry, command, '--help'], {\n    cwd: repoRoot,\n    encoding: 'utf8',\n  });\n}\n\ndescribe('CLI help surface', () => {\n  it('query help keeps advanced search options without importing analyze deps', () => {\n    const result = runHelp('query');\n\n    expect(result.status).toBe(0);\n    expect(result.stdout).toContain('--context <text>');\n    expect(result.stdout).toContain('--goal <text>');\n    expect(result.stdout).toContain('--content');\n    expect(result.stderr).not.toContain('tree-sitter-kotlin');\n  });\n\n  it('context help keeps optional name and disambiguation flags', () => {\n    const result = runHelp('context');\n\n    expect(result.status).toBe(0);\n    expect(result.stdout).toContain('context [options] [name]');\n    expect(result.stdout).toContain('--uid <uid>');\n    expect(result.stdout).toContain('--file <path>');\n  });\n\n  it('impact help keeps repo and include-tests flags', () => {\n    const result = runHelp('impact');\n\n    expect(result.status).toBe(0);\n    expect(result.stdout).toContain('--depth <n>');\n    expect(result.stdout).toContain('--include-tests');\n    expect(result.stdout).toContain('--repo <name>');\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/cohesion-consistency.test.ts",
    "content": "/**\n * Unit tests for cohesion formula consistency.\n *\n * Verifies that calculateCohesion (module-private) uses the internal edge ratio\n * formula: internalEdges / totalEdges, NOT graph density (internalEdges / maxPossibleEdges).\n *\n * Since calculateCohesion is not exported, all tests exercise it indirectly through\n * processCommunities — the public export. Graphs are built so that Leiden's community\n * assignment is deterministic (disconnected cliques with strong internal connectivity).\n */\nimport { describe, it, expect } from 'vitest';\nimport { createKnowledgeGraph } from '../../src/core/graph/graph.js';\nimport type { GraphNode, GraphRelationship } from '../../src/core/graph/types.js';\nimport { processCommunities } from '../../src/core/ingestion/community-processor.js';\n\n// ============================================================================\n// FIXTURE HELPERS\n// ============================================================================\n\n/** Create a GraphNode with commonly-needed properties */\nfunction makeNode(\n  id: string,\n  name: string,\n  label: GraphNode['label'],\n  filePath: string,\n): GraphNode {\n  return {\n    id,\n    label,\n    properties: { name, filePath, startLine: 1, endLine: 10, isExported: false },\n  };\n}\n\n/** Create a CALLS relationship between two nodes */\nfunction makeRel(\n  id: string,\n  sourceId: string,\n  targetId: string,\n): GraphRelationship {\n  return { id, sourceId, targetId, type: 'CALLS', confidence: 1.0, reason: '' };\n}\n\n/** Add a fully-connected clique of Function nodes to the graph */\nfunction addClique(\n  graph: ReturnType<typeof createKnowledgeGraph>,\n  prefix: string,\n  folder: string,\n  size: number,\n): string[] {\n  const ids: string[] = [];\n  for (let i = 0; i < size; i++) {\n    const id = `fn:${prefix}${i}`;\n    ids.push(id);\n    graph.addNode(makeNode(id, `${prefix}Fn${i}`, 'Function', `/src/${folder}/f${i}.ts`));\n  }\n  // Fully connect all 
pairs\n  let relIdx = 0;\n  for (let i = 0; i < size; i++) {\n    for (let j = i + 1; j < size; j++) {\n      graph.addRelationship(makeRel(`rel:${prefix}_${relIdx++}`, ids[i], ids[j]));\n    }\n  }\n  return ids;\n}\n\n// ============================================================================\n// TESTS\n// ============================================================================\n\ndescribe('calculateCohesion — internal edge ratio', () => {\n  /**\n   * Build a 4-node fully connected clique with 2 external boundary edges.\n   * For the clique community:\n   *   - 4 nodes, 6 internal edges (undirected)\n   *   - 2 external edges (one from node0, one from node1 to outside nodes)\n   *   - Each undirected edge is traversed twice in forEachNeighbor\n   *   - Internal traversals: 6 edges * 2 = 12  (each internal edge counted from both endpoints)\n   *     BUT only edges where BOTH endpoints are in the clique count. node0 has 3 internal + 1 external neighbor,\n   *     node1 has 3 internal + 1 external neighbor, node2 has 3 internal, node3 has 3 internal.\n   *   - Total neighbor traversals from clique members: (3+1) + (3+1) + 3 + 3 = 14\n   *   - Internal traversals: 3 + 3 + 3 + 3 = 12\n   *   - Edge ratio: 12 / 14 = 0.857...\n   *   - Graph density would be: 6 / (4*3/2) = 6/6 = 1.0\n   *   - This discriminates: if cohesion < 1.0, it's edge ratio; if 1.0, it could be density.\n   */\n  it('produces internal edge ratio, not graph density, for a tight cluster with external edges', async () => {\n    const graph = createKnowledgeGraph();\n\n    // Clique of 4 nodes\n    const clique = addClique(graph, 'c', 'cluster', 4);\n\n    // Two external nodes, each connected to one clique member\n    graph.addNode(makeNode('fn:ext0', 'extFn0', 'Function', '/src/other/ext0.ts'));\n    graph.addNode(makeNode('fn:ext1', 'extFn1', 'Function', '/src/other/ext1.ts'));\n    // Connect ext nodes to each other so they form their own community (size >= 2)\n    
graph.addRelationship(makeRel('rel:ext_link', 'fn:ext0', 'fn:ext1'));\n    // Boundary edges from clique to external\n    graph.addRelationship(makeRel('rel:boundary0', clique[0], 'fn:ext0'));\n    graph.addRelationship(makeRel('rel:boundary1', clique[1], 'fn:ext1'));\n\n    const result = await processCommunities(graph);\n\n    // Find the community containing the clique nodes\n    const cliqueMemberSet = new Set(clique);\n    const membershipMap = new Map<string, string>();\n    for (const m of result.memberships) {\n      membershipMap.set(m.nodeId, m.communityId);\n    }\n\n    // Determine which community the clique nodes belong to\n    const cliqueCommunityId = membershipMap.get(clique[0]);\n    expect(cliqueCommunityId).toBeDefined();\n\n    // All clique nodes should be in the same community\n    for (const nodeId of clique) {\n      expect(membershipMap.get(nodeId)).toBe(cliqueCommunityId);\n    }\n\n    // Find the community node\n    const cliqueCommunity = result.communities.find(c => c.id === cliqueCommunityId);\n    expect(cliqueCommunity).toBeDefined();\n\n    // Key assertion: cohesion should be < 1.0 (edge ratio with boundary edges)\n    // Graph density would be 1.0 since 4 nodes are fully connected internally.\n    // Edge ratio: 12 internal traversals / 14 total traversals = ~0.857\n    expect(cliqueCommunity!.cohesion).toBeLessThan(1.0);\n    expect(cliqueCommunity!.cohesion).toBeCloseTo(12 / 14, 2);\n  });\n\n  /**\n   * A fully isolated clique with no external edges.\n   * Both formulas agree: cohesion should be 1.0 because all edges are internal.\n   */\n  it('cohesion is 1.0 when community has no external edges', async () => {\n    const graph = createKnowledgeGraph();\n\n    // Single isolated clique of 4 — no boundary edges at all\n    addClique(graph, 'iso', 'isolated', 4);\n\n    const result = await processCommunities(graph);\n\n    // Should produce exactly one community (singletons are filtered)\n    
expect(result.communities.length).toBeGreaterThanOrEqual(1);\n\n    // The community containing our clique should have cohesion 1.0\n    const community = result.communities.find(c => c.symbolCount >= 4);\n    // If Leiden puts them all in one community (expected for a fully connected graph)\n    if (community) {\n      expect(community.cohesion).toBe(1.0);\n    }\n  });\n\n  /**\n   * Two variants of the same base clique: one with few external edges,\n   * one with many. The variant with more external edges should have lower cohesion.\n   */\n  it('cohesion decreases as external edge proportion increases', async () => {\n    // --- Variant A: clique with 1 external edge ---\n    const graphA = createKnowledgeGraph();\n    const cliqueA = addClique(graphA, 'a', 'groupA', 4);\n    // One external node pair (to form a valid community)\n    graphA.addNode(makeNode('fn:extA0', 'extA0', 'Function', '/src/extA/e0.ts'));\n    graphA.addNode(makeNode('fn:extA1', 'extA1', 'Function', '/src/extA/e1.ts'));\n    graphA.addRelationship(makeRel('rel:extA_link', 'fn:extA0', 'fn:extA1'));\n    // 1 boundary edge\n    graphA.addRelationship(makeRel('rel:bndA0', cliqueA[0], 'fn:extA0'));\n\n    const resultA = await processCommunities(graphA);\n    const commIdA = resultA.memberships.find(m => m.nodeId === cliqueA[0])?.communityId;\n    const communityA = resultA.communities.find(c => c.id === commIdA);\n\n    // --- Variant B: clique with 4 external edges ---\n    const graphB = createKnowledgeGraph();\n    const cliqueB = addClique(graphB, 'b', 'groupB', 4);\n    // Four external nodes (two pairs)\n    for (let i = 0; i < 4; i++) {\n      graphB.addNode(makeNode(`fn:extB${i}`, `extB${i}`, 'Function', `/src/extB/e${i}.ts`));\n    }\n    graphB.addRelationship(makeRel('rel:extB_link0', 'fn:extB0', 'fn:extB1'));\n    graphB.addRelationship(makeRel('rel:extB_link1', 'fn:extB2', 'fn:extB3'));\n    // 4 boundary edges (one per clique node)\n    for (let i = 0; i < 4; i++) {\n      
graphB.addRelationship(makeRel(`rel:bndB${i}`, cliqueB[i], `fn:extB${i}`));\n    }\n\n    const resultB = await processCommunities(graphB);\n    const commIdB = resultB.memberships.find(m => m.nodeId === cliqueB[0])?.communityId;\n    const communityB = resultB.communities.find(c => c.id === commIdB);\n\n    expect(communityA).toBeDefined();\n    expect(communityB).toBeDefined();\n\n    // More external edges => lower cohesion\n    expect(communityB!.cohesion).toBeLessThan(communityA!.cohesion);\n  });\n\n  /**\n   * Edge case: a community with a single node should return cohesion 1.0.\n   * The code returns early for memberIds.length <= 1.\n   * Leiden skips singletons (communities with < 2 members), so we test this\n   * by building a graph where one node has no edges — it won't appear in a\n   * community at all. Instead, test with 2 connected nodes and verify the\n   * community gets cohesion 1.0 (2 nodes, 1 internal edge, 0 external = 1.0).\n   */\n  it('two-node community with no external edges returns 1.0', async () => {\n    const graph = createKnowledgeGraph();\n    graph.addNode(makeNode('fn:pair0', 'pairFn0', 'Function', '/src/pair/f0.ts'));\n    graph.addNode(makeNode('fn:pair1', 'pairFn1', 'Function', '/src/pair/f1.ts'));\n    graph.addRelationship(makeRel('rel:pair', 'fn:pair0', 'fn:pair1'));\n\n    const result = await processCommunities(graph);\n\n    // Should have exactly 1 community with 2 members\n    expect(result.communities).toHaveLength(1);\n    expect(result.communities[0].symbolCount).toBe(2);\n    expect(result.communities[0].cohesion).toBe(1.0);\n  });\n\n  /**\n   * Sanity check: an empty graph should yield no communities.\n   */\n  it('empty graph returns empty communities', async () => {\n    const graph = createKnowledgeGraph();\n    const result = await processCommunities(graph);\n\n    expect(result.communities).toEqual([]);\n    expect(result.memberships).toEqual([]);\n    expect(result.stats.totalCommunities).toBe(0);\n    
expect(result.stats.nodesProcessed).toBe(0);\n  });\n\n  /**\n   * Verify that the web and backend formulas produce equivalent results\n   * by checking the backend value against a hand-calculated edge-ratio result.\n   *\n   * Topology: 3-node triangle (clique) plus an external pair: vertex0 connects\n   * to ext0, and ext0 connects to ext1. The pair is needed because a lone\n   * external node would be a singleton, which Leiden filters out instead of\n   * forming a second community.\n   *   - Triangle: 3 internal edges, plus 1 boundary edge (vertex0 -> ext0)\n   *   - Traversals from triangle members:\n   *     vertex0: 2 internal neighbors + 1 external = 3 traversals\n   *     vertex1: 2 internal neighbors = 2 traversals\n   *     vertex2: 2 internal neighbors = 2 traversals\n   *   - Total traversals: 7, internal traversals: 6\n   *   - Edge ratio: 6/7 ≈ 0.857\n   */\n  it('web and backend formulas produce equivalent edge-ratio results', async () => {\n    const graph = createKnowledgeGraph();\n\n    // Triangle clique\n    const tri = ['fn:t0', 'fn:t1', 'fn:t2'];\n    graph.addNode(makeNode('fn:t0', 'triFn0', 'Function', '/src/tri/f0.ts'));\n    graph.addNode(makeNode('fn:t1', 'triFn1', 'Function', '/src/tri/f1.ts'));\n    graph.addNode(makeNode('fn:t2', 'triFn2', 'Function', '/src/tri/f2.ts'));\n    graph.addRelationship(makeRel('rel:t01', 'fn:t0', 'fn:t1'));\n    graph.addRelationship(makeRel('rel:t02', 'fn:t0', 'fn:t2'));\n    graph.addRelationship(makeRel('rel:t12', 'fn:t1', 'fn:t2'));\n\n    // External pair\n    graph.addNode(makeNode('fn:ext0', 'extFn0', 'Function', '/src/ext/e0.ts'));\n    graph.addNode(makeNode('fn:ext1', 'extFn1', 
'Function', '/src/ext/e1.ts'));\n    graph.addRelationship(makeRel('rel:ext', 'fn:ext0', 'fn:ext1'));\n\n    // Boundary edge: triangle vertex0 -> ext0\n    graph.addRelationship(makeRel('rel:bnd', 'fn:t0', 'fn:ext0'));\n\n    const result = await processCommunities(graph);\n\n    // Find triangle community\n    const triCommId = result.memberships.find(m => m.nodeId === 'fn:t0')?.communityId;\n    expect(triCommId).toBeDefined();\n\n    const triComm = result.communities.find(c => c.id === triCommId);\n    expect(triComm).toBeDefined();\n\n    // Hand-calculated edge ratio: 6 internal traversals / 7 total = 0.8571...\n    const expectedEdgeRatio = 6 / 7;\n    expect(triComm!.cohesion).toBeCloseTo(expectedEdgeRatio, 2);\n\n    // Verify it's NOT graph density (which would be 3 / (3*2/2) = 1.0)\n    expect(triComm!.cohesion).not.toBeCloseTo(1.0, 2);\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/community-processor.test.ts",
    "content": "import { describe, it, expect } from 'vitest';\nimport { getCommunityColor, COMMUNITY_COLORS } from '../../src/core/ingestion/community-processor.js';\n\ndescribe('community-processor', () => {\n  describe('COMMUNITY_COLORS', () => {\n    it('has 12 colors', () => {\n      expect(COMMUNITY_COLORS).toHaveLength(12);\n    });\n\n    it('contains valid hex color strings', () => {\n      for (const color of COMMUNITY_COLORS) {\n        expect(color).toMatch(/^#[0-9a-fA-F]{6}$/);\n      }\n    });\n\n    it('has no duplicate colors', () => {\n      const unique = new Set(COMMUNITY_COLORS);\n      expect(unique.size).toBe(COMMUNITY_COLORS.length);\n    });\n  });\n\n  describe('getCommunityColor', () => {\n    it('returns first color for index 0', () => {\n      expect(getCommunityColor(0)).toBe(COMMUNITY_COLORS[0]);\n    });\n\n    it('wraps around when index exceeds color count', () => {\n      expect(getCommunityColor(12)).toBe(COMMUNITY_COLORS[0]);\n      expect(getCommunityColor(13)).toBe(COMMUNITY_COLORS[1]);\n    });\n\n    it('returns different colors for different indices', () => {\n      const c0 = getCommunityColor(0);\n      const c1 = getCommunityColor(1);\n      expect(c0).not.toBe(c1);\n    });\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/compatible-stdio-transport.test.ts",
    "content": "import { PassThrough } from 'node:stream';\nimport { beforeEach, describe, expect, it, vi } from 'vitest';\nimport { CompatibleStdioServerTransport } from '../../src/mcp/compatible-stdio-transport.js';\n\nfunction onceMessage(transport: CompatibleStdioServerTransport): Promise<any> {\n  return new Promise((resolve, reject) => {\n    transport.onmessage = (message) => resolve(message);\n    transport.onerror = (error) => reject(error);\n  });\n}\n\ndescribe('CompatibleStdioServerTransport', () => {\n  let stdin: PassThrough;\n  let stdout: PassThrough;\n  let transport: CompatibleStdioServerTransport;\n\n  beforeEach(() => {\n    stdin = new PassThrough();\n    stdout = new PassThrough();\n    transport = new CompatibleStdioServerTransport(stdin, stdout);\n  });\n\n  it('parses Content-Length framed initialize requests', async () => {\n    await transport.start();\n    const messagePromise = onceMessage(transport);\n    const body = JSON.stringify({\n      jsonrpc: '2.0',\n      id: 1,\n      method: 'initialize',\n      params: {\n        protocolVersion: '2024-11-05',\n        capabilities: {},\n        clientInfo: { name: 'codex', version: '0.1' },\n      },\n    });\n\n    stdin.write(`Content-Length: ${Buffer.byteLength(body, 'utf8')}\\r\\n\\r\\n${body}`);\n\n    await expect(messagePromise).resolves.toMatchObject({\n      method: 'initialize',\n      params: { clientInfo: { name: 'codex' } },\n    });\n  });\n\n  it('parses newline-delimited initialize requests', async () => {\n    await transport.start();\n    const messagePromise = onceMessage(transport);\n    stdin.write(`${JSON.stringify({\n      jsonrpc: '2.0',\n      id: 1,\n      method: 'initialize',\n      params: {\n        protocolVersion: '2024-11-05',\n        capabilities: {},\n        clientInfo: { name: 'cursor', version: '0.1' },\n      },\n    })}\\n`);\n\n    await expect(messagePromise).resolves.toMatchObject({\n      method: 'initialize',\n      params: { clientInfo: { 
name: 'cursor' } },\n    });\n  });\n\n  it('responds with Content-Length framing after Content-Length input', async () => {\n    await transport.start();\n    const body = JSON.stringify({\n      jsonrpc: '2.0',\n      id: 1,\n      method: 'initialize',\n      params: {\n        protocolVersion: '2024-11-05',\n        capabilities: {},\n        clientInfo: { name: 'codex', version: '0.1' },\n      },\n    });\n\n    const messagePromise = onceMessage(transport);\n    stdin.write(`Content-Length: ${Buffer.byteLength(body, 'utf8')}\\n\\n${body}`);\n    await messagePromise;\n\n    const chunks: Buffer[] = [];\n    stdout.on('data', (chunk) => chunks.push(Buffer.from(chunk)));\n\n    await transport.send({ jsonrpc: '2.0', id: 1, result: { ok: true } });\n    const raw = Buffer.concat(chunks).toString('utf8');\n\n    expect(raw).toMatch(/^Content-Length: \\d+\\r\\n\\r\\n/);\n    expect(raw).toContain('\"ok\":true');\n  });\n\n\n\n  it('reports malformed Content-Length headers once without looping forever', async () => {\n    await transport.start();\n    const onError = vi.fn();\n    transport.onerror = onError;\n\n    stdin.write('Content-Length:\\r\\n\\r\\n{}');\n    await new Promise((resolve) => setTimeout(resolve, 25));\n\n    expect(onError).toHaveBeenCalledTimes(1);\n    expect(onError.mock.calls[0]?.[0]).toBeInstanceOf(Error);\n  });\n\n  it('recovers after discarding a malformed Content-Length frame', async () => {\n    await transport.start();\n    const onError = vi.fn();\n    transport.onerror = onError;\n\n    stdin.write('Content-Length:\\r\\n\\r\\n{}');\n    await new Promise((resolve) => setTimeout(resolve, 25));\n\n    const body = JSON.stringify({\n      jsonrpc: '2.0',\n      id: 2,\n      method: 'initialize',\n      params: {\n        protocolVersion: '2024-11-05',\n        capabilities: {},\n        clientInfo: { name: 'recovery-client', version: '0.1' },\n      },\n    });\n    const messagePromise = onceMessage(transport);\n    
stdin.write(`Content-Length: ${Buffer.byteLength(body, 'utf8')}\\r\\n\\r\\n${body}`);\n\n    await expect(messagePromise).resolves.toMatchObject({\n      method: 'initialize',\n      params: { clientInfo: { name: 'recovery-client' } },\n    });\n    expect(onError).toHaveBeenCalledTimes(1);\n  });\n\n  // ─── Security hardening regressions ──────────────────────────────\n\n  it('rejects Content-Length values exceeding the buffer cap', async () => {\n    await transport.start();\n    const onError = vi.fn();\n    transport.onerror = onError;\n\n    // 20 MB — exceeds the 10 MB MAX_BUFFER_SIZE\n    stdin.write('Content-Length: 20971520\\r\\n\\r\\n{}');\n    await new Promise((resolve) => setTimeout(resolve, 25));\n\n    expect(onError).toHaveBeenCalledTimes(1);\n    expect(onError.mock.calls[0]?.[0]?.message).toMatch(/exceeds maximum/i);\n  });\n\n  it('errors when read buffer exceeds maximum size in newline mode', async () => {\n    await transport.start();\n    const onError = vi.fn();\n    transport.onerror = onError;\n\n    // Send a JSON-starting chunk (triggers newline mode) with no newline,\n    // then keep appending until we exceed the 10 MB cap\n    const chunkSize = 1024 * 1024; // 1 MB\n    const chunk = Buffer.alloc(chunkSize, 0x61); // 'a' repeated\n    // First byte must be '{' to trigger newline framing detection\n    const first = Buffer.from('{' + 'a'.repeat(chunkSize - 1));\n    stdin.write(first);\n\n    for (let i = 0; i < 10; i++) {\n      stdin.write(chunk);\n    }\n\n    await new Promise((resolve) => setTimeout(resolve, 25));\n\n    expect(onError).toHaveBeenCalled();\n    const hasMaxSizeError = onError.mock.calls.some(\n      (call) => call[0] instanceof Error && /maximum size/i.test(call[0].message),\n    );\n    expect(hasMaxSizeError).toBe(true);\n  });\n\n  it('handles many consecutive empty lines without stack overflow', async () => {\n    await transport.start();\n    const onError = vi.fn();\n    transport.onerror = onError;\n\n    
// First, seed the framing mode with a valid newline-delimited message\n    const seed = JSON.stringify({\n      jsonrpc: '2.0',\n      id: 1,\n      method: 'initialize',\n      params: {\n        protocolVersion: '2024-11-05',\n        capabilities: {},\n        clientInfo: { name: 'seed', version: '0.1' },\n      },\n    });\n    const seedPromise = onceMessage(transport);\n    stdin.write(seed + '\\n');\n    await seedPromise;\n\n    // Now send 15K empty lines followed by a real message — this would\n    // stack-overflow with the old recursive readNewlineMessage\n    const followup = JSON.stringify({\n      jsonrpc: '2.0',\n      id: 2,\n      method: 'notifications/initialized',\n      params: {},\n    });\n\n    const messagePromise = onceMessage(transport);\n    stdin.write('\\n'.repeat(15_000) + followup + '\\n');\n\n    await expect(messagePromise).resolves.toMatchObject({\n      method: 'notifications/initialized',\n    });\n    expect(onError).not.toHaveBeenCalled();\n  });\n\n  it('rejects send() after transport is closed', async () => {\n    await transport.start();\n    await transport.close();\n\n    await expect(\n      transport.send({ jsonrpc: '2.0', id: 1, result: { ok: true } }),\n    ).rejects.toThrow(/closed/i);\n  });\n\n  it('does not detect content-length framing from short ambiguous prefix', async () => {\n    await transport.start();\n    const onError = vi.fn();\n    transport.onerror = onError;\n\n    // Write only \"cont\" — fewer than 14 bytes, should NOT trigger\n    // content-length detection. 
Transport should wait for more data.\n    stdin.write(Buffer.from('cont'));\n    await new Promise((resolve) => setTimeout(resolve, 25));\n\n    // No message and no error — transport is waiting for more data\n    expect(onError).not.toHaveBeenCalled();\n  });\n\n  it('responds with newline framing after newline input', async () => {\n    await transport.start();\n    const messagePromise = onceMessage(transport);\n    stdin.write(`${JSON.stringify({\n      jsonrpc: '2.0',\n      id: 1,\n      method: 'initialize',\n      params: {\n        protocolVersion: '2024-11-05',\n        capabilities: {},\n        clientInfo: { name: 'cursor', version: '0.1' },\n      },\n    })}\\n`);\n    await messagePromise;\n\n    const chunks: Buffer[] = [];\n    stdout.on('data', (chunk) => chunks.push(Buffer.from(chunk)));\n\n    await transport.send({ jsonrpc: '2.0', id: 1, result: { ok: true } });\n    const raw = Buffer.concat(chunks).toString('utf8');\n\n    expect(raw).toBe('{\"jsonrpc\":\"2.0\",\"id\":1,\"result\":{\"ok\":true}}\\n');\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/csv-escaping.test.ts",
    "content": "/**\n * P0 Unit Tests: CSV Escaping Functions\n *\n * Tests: escapeCSVField, escapeCSVNumber, sanitizeUTF8, isBinaryContent\n * Covers hardening fix #23 (keyword arrays with backslashes and commas)\n */\nimport { describe, it, expect } from 'vitest';\nimport {\n  escapeCSVField,\n  escapeCSVNumber,\n  sanitizeUTF8,\n  isBinaryContent,\n} from '../../src/core/lbug/csv-generator.js';\n\n// ─── escapeCSVField ──────────────────────────────────────────────────\n\ndescribe('escapeCSVField', () => {\n  it('returns empty quoted string for null', () => {\n    expect(escapeCSVField(null)).toBe('\"\"');\n  });\n\n  it('returns empty quoted string for undefined', () => {\n    expect(escapeCSVField(undefined)).toBe('\"\"');\n  });\n\n  it('returns quoted empty string for empty input', () => {\n    expect(escapeCSVField('')).toBe('\"\"');\n  });\n\n  it('wraps simple string in quotes', () => {\n    expect(escapeCSVField('hello')).toBe('\"hello\"');\n  });\n\n  it('doubles internal double quotes', () => {\n    expect(escapeCSVField('say \"hello\"')).toBe('\"say \"\"hello\"\"\"');\n  });\n\n  it('handles strings with commas', () => {\n    expect(escapeCSVField('a,b,c')).toBe('\"a,b,c\"');\n  });\n\n  it('handles strings with newlines', () => {\n    expect(escapeCSVField('line1\\nline2')).toBe('\"line1\\nline2\"');\n  });\n\n  it('converts numbers to quoted strings', () => {\n    expect(escapeCSVField(42)).toBe('\"42\"');\n  });\n\n  it('handles strings with both quotes and commas', () => {\n    expect(escapeCSVField('\"hello\",world')).toBe('\"\"\"hello\"\",world\"');\n  });\n\n  // Hardening fix #23: keyword arrays with backslashes\n  it('handles strings with backslashes', () => {\n    const result = escapeCSVField('path\\\\to\\\\file');\n    expect(result).toBe('\"path\\\\to\\\\file\"');\n  });\n\n  it('handles code content with special characters', () => {\n    const code = 'function foo() {\\n  return \"bar\";\\n}';\n    const result = escapeCSVField(code);\n  
  expect(result).toContain('function foo()');\n    expect(result).toContain('\"\"bar\"\"');\n  });\n});\n\n// ─── escapeCSVNumber ─────────────────────────────────────────────────\n\ndescribe('escapeCSVNumber', () => {\n  it('returns default value for null', () => {\n    expect(escapeCSVNumber(null)).toBe('-1');\n  });\n\n  it('returns default value for undefined', () => {\n    expect(escapeCSVNumber(undefined)).toBe('-1');\n  });\n\n  it('returns custom default value', () => {\n    expect(escapeCSVNumber(null, 0)).toBe('0');\n  });\n\n  it('returns string representation of number', () => {\n    expect(escapeCSVNumber(42)).toBe('42');\n  });\n\n  it('handles zero', () => {\n    expect(escapeCSVNumber(0)).toBe('0');\n  });\n\n  it('handles negative numbers', () => {\n    expect(escapeCSVNumber(-5)).toBe('-5');\n  });\n\n  it('handles floating point', () => {\n    expect(escapeCSVNumber(3.14)).toBe('3.14');\n  });\n});\n\n// ─── sanitizeUTF8 ────────────────────────────────────────────────────\n\ndescribe('sanitizeUTF8', () => {\n  it('passes through clean strings unchanged', () => {\n    expect(sanitizeUTF8('hello world')).toBe('hello world');\n  });\n\n  it('normalizes CRLF to LF', () => {\n    expect(sanitizeUTF8('line1\\r\\nline2')).toBe('line1\\nline2');\n  });\n\n  it('normalizes lone CR to LF', () => {\n    expect(sanitizeUTF8('line1\\rline2')).toBe('line1\\nline2');\n  });\n\n  it('strips null bytes', () => {\n    expect(sanitizeUTF8('hello\\x00world')).toBe('helloworld');\n  });\n\n  it('strips control characters', () => {\n    expect(sanitizeUTF8('hello\\x01\\x02\\x03world')).toBe('helloworld');\n  });\n\n  it('preserves tabs', () => {\n    expect(sanitizeUTF8('hello\\tworld')).toBe('hello\\tworld');\n  });\n\n  it('preserves newlines', () => {\n    expect(sanitizeUTF8('hello\\nworld')).toBe('hello\\nworld');\n  });\n\n  it('strips lone surrogates', () => {\n    expect(sanitizeUTF8('hello\\uD800world')).toBe('helloworld');\n  });\n\n  it('strips BOM-like 
characters (FFFE/FFFF)', () => {\n    expect(sanitizeUTF8('hello\\uFFFEworld')).toBe('helloworld');\n    expect(sanitizeUTF8('hello\\uFFFFworld')).toBe('helloworld');\n  });\n});\n\n// ─── isBinaryContent ─────────────────────────────────────────────────\n\ndescribe('isBinaryContent', () => {\n  it('returns false for empty string', () => {\n    expect(isBinaryContent('')).toBe(false);\n  });\n\n  it('returns false for normal text', () => {\n    expect(isBinaryContent('hello world\\nline two')).toBe(false);\n  });\n\n  it('returns false for code content', () => {\n    const code = 'function foo() {\\n  return 42;\\n}\\n';\n    expect(isBinaryContent(code)).toBe(false);\n  });\n\n  it('returns true when >10% non-printable characters', () => {\n    // Create a string that's exactly 20% null bytes\n    const binary = 'a'.repeat(80) + '\\x00'.repeat(20);\n    expect(isBinaryContent(binary)).toBe(true);\n  });\n\n  it('returns false when just under 10% threshold', () => {\n    // 9% non-printable should not be binary\n    const borderline = 'a'.repeat(91) + '\\x01'.repeat(9);\n    expect(isBinaryContent(borderline)).toBe(false);\n  });\n\n  it('only samples first 1000 characters', () => {\n    // Binary content past 1000 chars should be ignored\n    const text = 'a'.repeat(1000) + '\\x00'.repeat(500);\n    expect(isBinaryContent(text)).toBe(false);\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/embedder.test.ts",
    "content": "import { describe, it, expect } from 'vitest';\nimport { getEmbeddingDims, isEmbedderReady } from '../../src/mcp/core/embedder.js';\n\ndescribe('embedder', () => {\n  describe('getEmbeddingDims', () => {\n    it('returns 384 (MiniLM default)', () => {\n      expect(getEmbeddingDims()).toBe(384);\n    });\n  });\n\n  describe('isEmbedderReady', () => {\n    it('returns false before initialization', () => {\n      expect(isEmbedderReady()).toBe(false);\n    });\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/entry-point-scoring.test.ts",
    "content": "import { describe, it, expect } from 'vitest';\nimport { calculateEntryPointScore, isTestFile, isUtilityFile } from '../../src/core/ingestion/entry-point-scoring.js';\n\ndescribe('calculateEntryPointScore', () => {\n  describe('base scoring', () => {\n    it('returns 0 for functions with no outgoing calls', () => {\n      const result = calculateEntryPointScore('handler', 'typescript', true, 0, 0);\n      expect(result.score).toBe(0);\n      expect(result.reasons).toContain('no-outgoing-calls');\n    });\n\n    it('calculates base score as calleeCount / (callerCount + 1)', () => {\n      const result = calculateEntryPointScore('doStuff', 'typescript', false, 0, 5);\n      // base = 5 / (0 + 1) = 5, no export bonus, no name bonus\n      expect(result.score).toBe(5);\n    });\n\n    it('reduces score for functions with many callers', () => {\n      const few = calculateEntryPointScore('doStuff', 'typescript', false, 1, 5);\n      const many = calculateEntryPointScore('doStuff', 'typescript', false, 10, 5);\n      expect(few.score).toBeGreaterThan(many.score);\n    });\n  });\n\n  describe('export multiplier', () => {\n    it('applies 2.0 multiplier for exported functions', () => {\n      const exported = calculateEntryPointScore('doStuff', 'typescript', true, 0, 4);\n      const notExported = calculateEntryPointScore('doStuff', 'typescript', false, 0, 4);\n      expect(exported.score).toBe(notExported.score * 2);\n      expect(exported.reasons).toContain('exported');\n    });\n\n    it('does not add exported reason when not exported', () => {\n      const result = calculateEntryPointScore('doStuff', 'typescript', false, 0, 4);\n      expect(result.reasons).not.toContain('exported');\n    });\n  });\n\n  describe('universal name patterns', () => {\n    it.each([\n      'main', 'init', 'bootstrap', 'start', 'run', 'setup', 'configure',\n    ])('recognizes \"%s\" as entry point pattern', (name) => {\n      const result = calculateEntryPointScore(name, 
'typescript', false, 0, 3);\n      expect(result.reasons).toContain('entry-pattern');\n    });\n\n    it.each([\n      'handleLogin', 'handleSubmit', 'onClick', 'onSubmit',\n      'RequestHandler', 'UserController',\n      'processPayment', 'executeQuery', 'performAction',\n      'dispatchEvent', 'triggerAction', 'fireEvent', 'emitEvent',\n    ])('recognizes \"%s\" as entry point pattern', (name) => {\n      const result = calculateEntryPointScore(name, 'typescript', false, 0, 3);\n      expect(result.reasons).toContain('entry-pattern');\n    });\n\n    it('applies 1.5x name multiplier for entry patterns', () => {\n      const matching = calculateEntryPointScore('handleLogin', 'typescript', false, 0, 4);\n      const plain = calculateEntryPointScore('doStuff', 'typescript', false, 0, 4);\n      // matching gets 1.5x, plain gets 1.0x\n      expect(matching.score).toBe(plain.score * 1.5);\n    });\n  });\n\n  describe('language-specific patterns', () => {\n    it('recognizes React hooks for TypeScript', () => {\n      const result = calculateEntryPointScore('useEffect', 'typescript', false, 0, 2);\n      expect(result.reasons).toContain('entry-pattern');\n    });\n\n    it('recognizes React hooks for JavaScript', () => {\n      const result = calculateEntryPointScore('useState', 'javascript', false, 0, 2);\n      expect(result.reasons).toContain('entry-pattern');\n    });\n\n    it('recognizes Python REST patterns', () => {\n      const result = calculateEntryPointScore('get_users', 'python', false, 0, 2);\n      expect(result.reasons).toContain('entry-pattern');\n    });\n\n    it('recognizes Java servlet patterns', () => {\n      const result = calculateEntryPointScore('doGet', 'java', false, 0, 2);\n      expect(result.reasons).toContain('entry-pattern');\n    });\n\n    it('recognizes Go handler patterns', () => {\n      const result = calculateEntryPointScore('NewServer', 'go', false, 0, 2);\n      expect(result.reasons).toContain('entry-pattern');\n    });\n\n  
  it('recognizes Rust entry patterns', () => {\n      const result = calculateEntryPointScore('handle_request', 'rust', false, 0, 2);\n      expect(result.reasons).toContain('entry-pattern');\n    });\n\n    it('recognizes Swift UIKit lifecycle', () => {\n      const result = calculateEntryPointScore('viewDidLoad', 'swift', false, 0, 2);\n      expect(result.reasons).toContain('entry-pattern');\n    });\n\n    it('recognizes Swift SwiftUI body', () => {\n      const result = calculateEntryPointScore('body', 'swift', false, 0, 2);\n      expect(result.reasons).toContain('entry-pattern');\n    });\n\n    it('recognizes PHP Laravel patterns', () => {\n      // __invoke starts with '_' which matches utility pattern first\n      const result = calculateEntryPointScore('handle', 'php', false, 0, 2);\n      expect(result.reasons).toContain('entry-pattern');\n    });\n\n    it('recognizes PHP RESTful resource methods', () => {\n      const result = calculateEntryPointScore('index', 'php', false, 0, 2);\n      expect(result.reasons).toContain('entry-pattern');\n    });\n\n    it('recognizes C# ASP.NET patterns', () => {\n      const result = calculateEntryPointScore('GetUsers', 'csharp', false, 0, 2);\n      expect(result.reasons).toContain('entry-pattern');\n    });\n\n    it('recognizes C main entry point', () => {\n      const result = calculateEntryPointScore('main', 'c', false, 0, 2);\n      expect(result.reasons).toContain('entry-pattern');\n    });\n\n    // C-specific patterns\n    it.each([\n      'init_server', 'server_init', 'start_server', 'handle_request',\n      'signal_handler', 'event_callback', 'cmd_new_window', 'server_start',\n      'client_connect', 'session_create', 'window_resize',\n    ])('recognizes C pattern \"%s\"', (name) => {\n      const result = calculateEntryPointScore(name, 'c', false, 0, 2);\n      expect(result.reasons).toContain('entry-pattern');\n    });\n\n    // C++-specific patterns\n    it.each([\n      'CreateInstance', 
'create_session', 'Run', 'run', 'Start', 'start',\n      'OnEventReceived', 'on_click',\n    ])('recognizes C++ pattern \"%s\"', (name) => {\n      const result = calculateEntryPointScore(name, 'cpp', false, 0, 2);\n      expect(result.reasons).toContain('entry-pattern');\n    });\n  });\n\n  describe('utility pattern penalty', () => {\n    it.each([\n      'getUser', 'setName', 'isValid', 'hasPermission', 'canEdit',\n      'formatDate', 'parseJSON', 'validateInput',\n      'toString', 'fromJSON', 'encodeBase64', 'serializeData',\n      'cloneDeep', 'mergeObjects',\n    ])('penalizes utility function \"%s\"', (name) => {\n      const result = calculateEntryPointScore(name, 'typescript', false, 0, 3);\n      expect(result.reasons).toContain('utility-pattern');\n      // 0.3 multiplier\n      const plain = calculateEntryPointScore('doStuff', 'typescript', false, 0, 3);\n      expect(result.score).toBeLessThan(plain.score);\n    });\n\n    it('penalizes private-by-convention functions', () => {\n      const result = calculateEntryPointScore('_internal', 'typescript', false, 0, 3);\n      expect(result.reasons).toContain('utility-pattern');\n    });\n  });\n\n  describe('framework detection from path', () => {\n    it('boosts Next.js page entry points', () => {\n      const result = calculateEntryPointScore('render', 'typescript', true, 0, 3, 'pages/users.tsx');\n      expect(result.reasons.some(r => r.includes('framework:'))).toBe(true);\n      expect(result.score).toBeGreaterThan(0);\n    });\n\n    it('does not apply framework bonus for non-framework paths', () => {\n      const result = calculateEntryPointScore('render', 'typescript', true, 0, 3, 'src/lib/utils.ts');\n      expect(result.reasons.every(r => !r.includes('framework:'))).toBe(true);\n    });\n  });\n\n  describe('combined scoring', () => {\n    it('multiplies all factors together', () => {\n      // handleLogin: entry pattern (1.5x) + exported (2.0x) + base\n      const result = 
calculateEntryPointScore('handleLogin', 'typescript', true, 0, 4, 'routes/auth.ts');\n      expect(result.score).toBeGreaterThan(0);\n      expect(result.reasons).toContain('exported');\n      expect(result.reasons).toContain('entry-pattern');\n    });\n  });\n});\n\ndescribe('isTestFile', () => {\n  it.each([\n    'src/utils.test.ts',\n    'src/utils.spec.ts',\n    '__tests__/utils.ts',\n    '__mocks__/api.ts',\n    'src/test/integration/db.ts',\n    'src/tests/unit/helper.ts',\n    'src/testing/setup.ts',\n    'lib/test_utils.py',\n    'pkg/handler_test.go',\n    'src/test/java/com/example/Test.java',\n    'MyViewTests.swift',\n    'MyViewTest.swift',\n    'UITests/LoginTest.swift',\n    'App.Tests/MyTest.cs',\n    'tests/Feature/UserTest.php',\n    'tests/Unit/AuthSpec.php',\n  ])('returns true for test file \"%s\"', (filePath) => {\n    expect(isTestFile(filePath)).toBe(true);\n  });\n\n  it.each([\n    'src/utils.ts',\n    'src/controllers/auth.ts',\n    'src/main.py',\n    'cmd/server.go',\n    'src/main/java/App.java',\n  ])('returns false for non-test file \"%s\"', (filePath) => {\n    expect(isTestFile(filePath)).toBe(false);\n  });\n\n  it('normalizes Windows backslashes', () => {\n    expect(isTestFile('src\\\\__tests__\\\\utils.ts')).toBe(true);\n  });\n});\n\ndescribe('isUtilityFile', () => {\n  it.each([\n    'src/utils/format.ts',\n    'src/util/helpers.ts',\n    'src/helpers/date.ts',\n    'src/helper/string.ts',\n    'src/common/types.ts',\n    'src/shared/constants.ts',\n    'src/lib/crypto.ts',\n    'src/utils.ts',\n    'src/utils.js',\n    'src/helpers.ts',\n    'lib/date_utils.py',\n    'lib/date_helpers.py',\n  ])('returns true for utility file \"%s\"', (filePath) => {\n    expect(isUtilityFile(filePath)).toBe(true);\n  });\n\n  it.each([\n    'src/controllers/auth.ts',\n    'src/routes/api.ts',\n    'src/main.ts',\n    'src/app.ts',\n  ])('returns false for non-utility file \"%s\"', (filePath) => {\n    
expect(isUtilityFile(filePath)).toBe(false);\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/eval-formatters.test.ts",
    "content": "/**\n * P1 Unit Tests: Eval Server Formatters\n *\n * Tests: formatQueryResult, formatContextResult, formatImpactResult,\n * formatCypherResult, formatDetectChangesResult, formatListReposResult, MAX_BODY_SIZE\n */\nimport { describe, it, expect } from 'vitest';\nimport {\n  formatQueryResult,\n  formatContextResult,\n  formatImpactResult,\n  formatCypherResult,\n  formatDetectChangesResult,\n  formatListReposResult,\n  MAX_BODY_SIZE,\n} from '../../src/cli/eval-server.js';\n\n// ─── MAX_BODY_SIZE ───────────────────────────────────────────────────\n\ndescribe('MAX_BODY_SIZE', () => {\n  it('is 1MB', () => {\n    expect(MAX_BODY_SIZE).toBe(1024 * 1024);\n  });\n});\n\n// ─── formatQueryResult ───────────────────────────────────────────────\n\ndescribe('formatQueryResult', () => {\n  it('returns error message for error input', () => {\n    expect(formatQueryResult({ error: 'something failed' })).toBe('Error: something failed');\n  });\n\n  it('returns no-match message for empty results', () => {\n    const result = formatQueryResult({ processes: [], definitions: [] });\n    expect(result).toContain('No matching execution flows');\n  });\n\n  it('formats processes with symbols', () => {\n    const result = formatQueryResult({\n      processes: [\n        { id: 'p1', summary: 'User Login Flow', step_count: 3, symbol_count: 2 },\n      ],\n      process_symbols: [\n        { process_id: 'p1', type: 'Function', name: 'login', filePath: 'src/auth.ts', startLine: 10 },\n        { process_id: 'p1', type: 'Function', name: 'validate', filePath: 'src/auth.ts', startLine: 20 },\n      ],\n      definitions: [],\n    });\n    expect(result).toContain('1 execution flow');\n    expect(result).toContain('User Login Flow');\n    expect(result).toContain('login');\n    expect(result).toContain(':10');\n  });\n\n  it('truncates symbols per process at 6', () => {\n    const symbols = Array.from({ length: 10 }, (_, i) => ({\n      process_id: 'p1',\n      type: 
'Function',\n      name: `fn${i}`,\n      filePath: 'src/test.ts',\n    }));\n    const result = formatQueryResult({\n      processes: [{ id: 'p1', summary: 'Flow', step_count: 10, symbol_count: 10 }],\n      process_symbols: symbols,\n      definitions: [],\n    });\n    expect(result).toContain('and 4 more');\n  });\n\n  it('formats standalone definitions', () => {\n    const result = formatQueryResult({\n      processes: [],\n      definitions: [\n        { type: 'Interface', name: 'Config', filePath: 'src/types.ts' },\n      ],\n    });\n    expect(result).toContain('Standalone definitions');\n    expect(result).toContain('Config');\n  });\n\n  it('truncates definitions at 8', () => {\n    const defs = Array.from({ length: 12 }, (_, i) => ({\n      type: 'Interface',\n      name: `Type${i}`,\n      filePath: 'src/types.ts',\n    }));\n    const result = formatQueryResult({ processes: [], definitions: defs });\n    expect(result).toContain('and 4 more');\n  });\n});\n\n// ─── formatContextResult ─────────────────────────────────────────────\n\ndescribe('formatContextResult', () => {\n  it('returns error message for error input', () => {\n    expect(formatContextResult({ error: 'not found' })).toBe('Error: not found');\n  });\n\n  it('handles ambiguous results', () => {\n    const result = formatContextResult({\n      status: 'ambiguous',\n      candidates: [\n        { name: 'foo', kind: 'Function', filePath: 'src/a.ts', line: 10, uid: 'uid1' },\n        { name: 'foo', kind: 'Function', filePath: 'src/b.ts', line: 5, uid: 'uid2' },\n      ],\n    });\n    expect(result).toContain('Multiple symbols');\n    expect(result).toContain('uid1');\n    expect(result).toContain('uid2');\n  });\n\n  it('returns \"Symbol not found\" when no symbol', () => {\n    expect(formatContextResult({})).toBe('Symbol not found.');\n  });\n\n  it('formats symbol with incoming/outgoing refs', () => {\n    const result = formatContextResult({\n      symbol: { kind: 'Function', name: 
'foo', filePath: 'src/a.ts', startLine: 1, endLine: 10 },\n      incoming: {\n        CALLS: [{ kind: 'Function', name: 'bar', filePath: 'src/b.ts' }],\n      },\n      outgoing: {\n        IMPORTS: [{ kind: 'Module', name: 'utils', filePath: 'src/utils.ts' }],\n      },\n      processes: [],\n    });\n    expect(result).toContain('Function foo');\n    expect(result).toContain('Called/imported by (1)');\n    expect(result).toContain('Calls/imports (1)');\n  });\n\n  it('formats process participation', () => {\n    const result = formatContextResult({\n      symbol: { kind: 'Function', name: 'foo', filePath: 'src/a.ts' },\n      incoming: {},\n      outgoing: {},\n      processes: [\n        { name: 'Auth Flow', step_index: 2, step_count: 5 },\n      ],\n    });\n    expect(result).toContain('1 execution flow');\n    expect(result).toContain('Auth Flow');\n  });\n});\n\n// ─── formatImpactResult ──────────────────────────────────────────────\n\ndescribe('formatImpactResult', () => {\n  it('returns error message for error input', () => {\n    expect(formatImpactResult({ error: 'bad request' })).toContain('Error: bad request');\n  });\n\n  it('returns error with suggestion when provided', () => {\n    const result = formatImpactResult({\n      error: 'Impact analysis failed',\n      suggestion: 'Try gitnexus context <symbol> as a fallback',\n    });\n    expect(result).toContain('Error: Impact analysis failed');\n    expect(result).toContain('Suggestion: Try gitnexus context');\n  });\n\n  it('shows partial warning when traversal was interrupted', () => {\n    const result = formatImpactResult({\n      target: { kind: 'Function', name: 'foo' },\n      direction: 'upstream',\n      impactedCount: 2,\n      partial: true,\n      byDepth: {\n        1: [\n          { type: 'Function', name: 'caller1', filePath: 'src/a.ts', relationType: 'CALLS', confidence: 1 },\n          { type: 'Function', name: 'caller2', filePath: 'src/b.ts', relationType: 'CALLS', confidence: 1 
},\n        ],\n      },\n    });\n    expect(result).toContain('Partial results');\n    expect(result).toContain('caller1');\n    expect(result).toContain('caller2');\n  });\n\n  it('handles zero impact', () => {\n    const result = formatImpactResult({\n      target: { name: 'foo' },\n      direction: 'upstream',\n      impactedCount: 0,\n      byDepth: {},\n    });\n    expect(result).toContain('No upstream dependencies');\n  });\n\n  it('formats impact by depth', () => {\n    const result = formatImpactResult({\n      target: { kind: 'Function', name: 'foo' },\n      direction: 'upstream',\n      impactedCount: 3,\n      byDepth: {\n        1: [\n          { type: 'Function', name: 'caller1', filePath: 'src/a.ts', relationType: 'CALLS', confidence: 1 },\n          { type: 'Function', name: 'caller2', filePath: 'src/b.ts', relationType: 'CALLS', confidence: 0.8 },\n        ],\n        2: [\n          { type: 'Class', name: 'App', filePath: 'src/app.ts', relationType: 'IMPORTS', confidence: 1 },\n        ],\n      },\n    });\n    expect(result).toContain('Blast radius');\n    expect(result).toContain('WILL BREAK');\n    expect(result).toContain('caller1');\n    expect(result).toContain('conf: 0.8');\n    expect(result).toContain('LIKELY AFFECTED');\n  });\n\n  it('truncates items per depth at 12', () => {\n    const items = Array.from({ length: 15 }, (_, i) => ({\n      type: 'Function',\n      name: `fn${i}`,\n      filePath: 'src/test.ts',\n      relationType: 'CALLS',\n      confidence: 1,\n    }));\n    const result = formatImpactResult({\n      target: { kind: 'Function', name: 'foo' },\n      direction: 'upstream',\n      impactedCount: 15,\n      byDepth: { 1: items },\n    });\n    expect(result).toContain('and 3 more');\n  });\n});\n\n// ─── formatCypherResult ──────────────────────────────────────────────\n\ndescribe('formatCypherResult', () => {\n  it('returns error message for error input', () => {\n    expect(formatCypherResult({ error: 'syntax 
error' })).toBe('Error: syntax error');\n  });\n\n  it('handles empty array', () => {\n    expect(formatCypherResult([])).toBe('Query returned 0 rows.');\n  });\n\n  it('formats array of objects as table', () => {\n    const result = formatCypherResult([\n      { name: 'foo', filePath: 'src/a.ts' },\n      { name: 'bar', filePath: 'src/b.ts' },\n    ]);\n    expect(result).toContain('2 row(s)');\n    expect(result).toContain('name: foo');\n    expect(result).toContain('name: bar');\n  });\n\n  it('truncates at 30 rows', () => {\n    const rows = Array.from({ length: 35 }, (_, i) => ({ id: i }));\n    const result = formatCypherResult(rows);\n    expect(result).toContain('5 more rows');\n  });\n\n  it('handles string result', () => {\n    expect(formatCypherResult('some text')).toBe('some text');\n  });\n});\n\n// ─── formatDetectChangesResult ───────────────────────────────────────\n\ndescribe('formatDetectChangesResult', () => {\n  it('returns error message for error input', () => {\n    expect(formatDetectChangesResult({ error: 'git error' })).toBe('Error: git error');\n  });\n\n  it('handles no changes', () => {\n    const result = formatDetectChangesResult({ summary: { changed_count: 0 } });\n    expect(result).toBe('No changes detected.');\n  });\n\n  it('formats changes with affected processes', () => {\n    const result = formatDetectChangesResult({\n      summary: { changed_files: 2, changed_count: 3, affected_count: 1, risk_level: 'MEDIUM' },\n      changed_symbols: [\n        { type: 'Function', name: 'foo', filePath: 'src/a.ts' },\n      ],\n      affected_processes: [\n        { name: 'Auth Flow', step_count: 5, changed_steps: [{ symbol: 'foo' }] },\n      ],\n    });\n    expect(result).toContain('2 files');\n    expect(result).toContain('MEDIUM');\n    expect(result).toContain('Auth Flow');\n  });\n\n  it('truncates changed symbols at 15', () => {\n    const symbols = Array.from({ length: 20 }, (_, i) => ({\n      type: 'Function',\n      name: 
`fn${i}`,\n      filePath: 'src/test.ts',\n    }));\n    const result = formatDetectChangesResult({\n      summary: { changed_files: 1, changed_count: 20, affected_count: 0, risk_level: 'HIGH' },\n      changed_symbols: symbols,\n      affected_processes: [],\n    });\n    expect(result).toContain('and 5 more');\n  });\n});\n\n// ─── formatListReposResult ───────────────────────────────────────────\n\ndescribe('formatListReposResult', () => {\n  it('handles empty/null input', () => {\n    expect(formatListReposResult([])).toBe('No indexed repositories.');\n    expect(formatListReposResult(null)).toBe('No indexed repositories.');\n  });\n\n  it('formats repo list', () => {\n    const result = formatListReposResult([\n      {\n        name: 'my-project',\n        path: '/home/user/my-project',\n        indexedAt: '2024-01-01',\n        stats: { nodes: 100, edges: 200, processes: 10 },\n      },\n    ]);\n    expect(result).toContain('Indexed repositories');\n    expect(result).toContain('my-project');\n    expect(result).toContain('100 symbols');\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/extract-element-type-from-string.test.ts",
    "content": "import { describe, it, expect } from 'vitest';\nimport { extractElementTypeFromString } from '../../src/core/ingestion/type-extractors/shared.js';\n\ndescribe('extractElementTypeFromString', () => {\n  describe('array suffix (TypeScript / Java / C#)', () => {\n    it('User[] → User', () => {\n      expect(extractElementTypeFromString('User[]')).toBe('User');\n    });\n\n    it('string[] → string', () => {\n      expect(extractElementTypeFromString('string[]')).toBe('string');\n    });\n\n    it('int[] → int', () => {\n      expect(extractElementTypeFromString('int[]')).toBe('int');\n    });\n  });\n\n  describe('Go slice prefix', () => {\n    it('[]User → User', () => {\n      expect(extractElementTypeFromString('[]User')).toBe('User');\n    });\n\n    it('[]string → string', () => {\n      expect(extractElementTypeFromString('[]string')).toBe('string');\n    });\n  });\n\n  describe('Swift array sugar', () => {\n    it('[User] → User', () => {\n      expect(extractElementTypeFromString('[User]')).toBe('User');\n    });\n\n    it('[String] → String', () => {\n      expect(extractElementTypeFromString('[String]')).toBe('String');\n    });\n  });\n\n  describe('generic angle-bracket containers', () => {\n    it('Array<User> → User', () => {\n      expect(extractElementTypeFromString('Array<User>')).toBe('User');\n    });\n\n    it('Vec<User> → User (Rust)', () => {\n      expect(extractElementTypeFromString('Vec<User>')).toBe('User');\n    });\n\n    it('vector<User> → User (C++)', () => {\n      expect(extractElementTypeFromString('vector<User>')).toBe('User');\n    });\n\n    it('Set<User> → User', () => {\n      expect(extractElementTypeFromString('Set<User>')).toBe('User');\n    });\n\n    it('List<User> → User', () => {\n      expect(extractElementTypeFromString('List<User>')).toBe('User');\n    });\n\n    it('IEnumerable<User> → User (C#)', () => {\n      expect(extractElementTypeFromString('IEnumerable<User>')).toBe('User');\n    });\n  });\n\n 
 describe('Python subscript-style generics', () => {\n    it('List[User] → User', () => {\n      expect(extractElementTypeFromString('List[User]')).toBe('User');\n    });\n\n    it('Set[User] → User', () => {\n      expect(extractElementTypeFromString('Set[User]')).toBe('User');\n    });\n  });\n\n  describe('multi-argument generics — default returns last (value) arg', () => {\n    it('Map<String, User> → User (default: last/value arg)', () => {\n      expect(extractElementTypeFromString('Map<String, User>')).toBe('User');\n    });\n\n    it('Map<String, User> → String (pos=first: key arg)', () => {\n      expect(extractElementTypeFromString('Map<String, User>', 'first')).toBe('String');\n    });\n\n    it('Map<String, List<User>> → undefined (last arg is nested generic)', () => {\n      expect(extractElementTypeFromString('Map<String, List<User>>')).toBeUndefined();\n    });\n\n    it('Map<String, List<User>> → String (pos=first: key arg)', () => {\n      expect(extractElementTypeFromString('Map<String, List<User>>', 'first')).toBe('String');\n    });\n\n    it('Dict[str, User] → User (default: last/value arg, Python)', () => {\n      expect(extractElementTypeFromString('Dict[str, User]')).toBe('User');\n    });\n\n    it('Dict[str, User] → str (pos=first: key arg, Python)', () => {\n      expect(extractElementTypeFromString('Dict[str, User]', 'first')).toBe('str');\n    });\n  });\n\n  describe('nested generics as element type — returns undefined', () => {\n    it('Array<List<User>> → undefined (element is itself generic)', () => {\n      // The element \"List<User>\" is not a plain word, so return undefined.\n      expect(extractElementTypeFromString('Array<List<User>>')).toBeUndefined();\n    });\n\n    it('Vec<Option<User>> → undefined (element is itself generic)', () => {\n      expect(extractElementTypeFromString('Vec<Option<User>>')).toBeUndefined();\n    });\n  });\n\n  describe('cross-bracket nesting (bracket depth fix)', () => {\n    it('Dict[str, 
List[int]] → undefined (default: last arg is nested generic)', () => {\n      expect(extractElementTypeFromString('Dict[str, List[int]]')).toBeUndefined();\n    });\n\n    it('Dict[str, List[int]] → str (pos=first: key arg)', () => {\n      expect(extractElementTypeFromString('Dict[str, List[int]]', 'first')).toBe('str');\n    });\n\n    it('Map<String, List<User>> → undefined (default: last arg is nested generic)', () => {\n      expect(extractElementTypeFromString('Map<String, List<User>>')).toBeUndefined();\n    });\n\n    it('Map<String, List<User>> → String (pos=first: key arg)', () => {\n      expect(extractElementTypeFromString('Map<String, List<User>>', 'first')).toBe('String');\n    });\n\n    it('mismatched close bracket at depth 0 → undefined', () => {\n      // openChar is '<' but first close at depth 0 is ']' — malformed\n      expect(extractElementTypeFromString('Array<int]')).toBeUndefined();\n    });\n  });\n\n  describe('edge cases — return undefined', () => {\n    it('empty string → undefined', () => {\n      expect(extractElementTypeFromString('')).toBeUndefined();\n    });\n\n    it('plain type name (no container) → undefined', () => {\n      expect(extractElementTypeFromString('User')).toBeUndefined();\n    });\n\n    it('bare angle bracket with no close → undefined (malformed)', () => {\n      expect(extractElementTypeFromString('Array<User')).toBeUndefined();\n    });\n\n    it('empty [] with no base type → undefined', () => {\n      expect(extractElementTypeFromString('[]')).toBeUndefined();\n    });\n\n    it('whitespace-only Swift sugar [ ] → undefined', () => {\n      // starts with '[' and ends with ']' but the inner text is whitespace only\n      expect(extractElementTypeFromString('[ ]')).toBeUndefined();\n    });\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/extract-generic-type-args.test.ts",
    "content": "import { describe, it, expect } from 'vitest';\nimport { extractGenericTypeArgs } from '../../src/core/ingestion/type-extractors/shared.js';\nimport type { SyntaxNode } from '../../src/core/ingestion/utils.js';\n\n/**\n * Create a minimal mock SyntaxNode for testing type extraction.\n * Only the properties used by extractSimpleTypeName / extractGenericTypeArgs\n * are populated — everything else is left as stubs.\n */\nfunction mockNode(\n  type: string,\n  opts: {\n    text?: string;\n    namedChildren?: SyntaxNode[];\n    fields?: Record<string, SyntaxNode>;\n  } = {},\n): SyntaxNode {\n  const children = opts.namedChildren ?? [];\n  const fields = opts.fields ?? {};\n  const text = opts.text ?? children.map((c) => c.text).join(', ');\n\n  return {\n    type,\n    text,\n    namedChildCount: children.length,\n    namedChild: (i: number) => children[i] ?? null,\n    firstNamedChild: children[0] ?? null,\n    lastNamedChild: children[children.length - 1] ?? null,\n    childForFieldName: (name: string) => fields[name] ?? null,\n  } as unknown as SyntaxNode;\n}\n\n// Helper: build a generic_type node with type_arguments\nfunction genericType(\n  baseName: string,\n  typeArgNames: string[],\n  opts?: { argsNodeType?: string; wrapInProjection?: boolean },\n): SyntaxNode {\n  const argsNodeType = opts?.argsNodeType ?? 
'type_arguments';\n\n  const baseNode = mockNode('type_identifier', { text: baseName });\n\n  let argChildren = typeArgNames.map((name) =>\n    mockNode('type_identifier', { text: name }),\n  );\n\n  // Kotlin wraps each arg in type_projection > user_type > type_identifier\n  if (opts?.wrapInProjection) {\n    argChildren = typeArgNames.map((name) => {\n      const typeId = mockNode('type_identifier', { text: name });\n      const userType = mockNode('user_type', { namedChildren: [typeId] });\n      return mockNode('type_projection', { namedChildren: [userType] });\n    }) as unknown as SyntaxNode[];\n  }\n\n  const typeArgsNode = mockNode(argsNodeType, {\n    namedChildren: argChildren,\n  });\n\n  return mockNode('generic_type', {\n    namedChildren: [baseNode, typeArgsNode],\n    fields: { name: baseNode },\n  });\n}\n\ndescribe('extractGenericTypeArgs', () => {\n  describe('single type argument', () => {\n    it('extracts from TypeScript Array<User>', () => {\n      const node = genericType('Array', ['User']);\n      expect(extractGenericTypeArgs(node)).toEqual(['User']);\n    });\n\n    it('extracts from Java List<User>', () => {\n      const node = genericType('List', ['User']);\n      expect(extractGenericTypeArgs(node)).toEqual(['User']);\n    });\n\n    it('extracts from Rust Vec<User>', () => {\n      const node = genericType('Vec', ['User']);\n      expect(extractGenericTypeArgs(node)).toEqual(['User']);\n    });\n\n    it('extracts from C# List<User> (type_argument_list)', () => {\n      const node = genericType('List', ['User'], {\n        argsNodeType: 'type_argument_list',\n      });\n      expect(extractGenericTypeArgs(node)).toEqual(['User']);\n    });\n  });\n\n  describe('multiple type arguments', () => {\n    it('extracts from Java Map<String, User>', () => {\n      const node = genericType('Map', ['String', 'User']);\n      expect(extractGenericTypeArgs(node)).toEqual(['String', 'User']);\n    });\n\n    it('extracts from TS Map<string, 
number>', () => {\n      const node = genericType('Map', ['string', 'number']);\n      expect(extractGenericTypeArgs(node)).toEqual(['string', 'number']);\n    });\n  });\n\n  describe('Kotlin type_projection wrapping', () => {\n    it('extracts from Kotlin List<User> through type_projection', () => {\n      const node = genericType('List', ['User'], { wrapInProjection: true });\n      expect(extractGenericTypeArgs(node)).toEqual(['User']);\n    });\n\n    it('extracts from Kotlin Map<String, User> through type_projection', () => {\n      const node = genericType('Map', ['String', 'User'], {\n        wrapInProjection: true,\n      });\n      expect(extractGenericTypeArgs(node)).toEqual(['String', 'User']);\n    });\n  });\n\n  describe('parameterized_type (Java/Kotlin alternate node type)', () => {\n    it('extracts type arguments from parameterized_type', () => {\n      const baseNode = mockNode('type_identifier', { text: 'List' });\n      const argNode = mockNode('type_identifier', { text: 'User' });\n      const typeArgsNode = mockNode('type_arguments', {\n        namedChildren: [argNode],\n      });\n      const node = mockNode('parameterized_type', {\n        namedChildren: [baseNode, typeArgsNode],\n        fields: { name: baseNode },\n      });\n      expect(extractGenericTypeArgs(node)).toEqual(['User']);\n    });\n  });\n\n  describe('wrapper node unwrapping', () => {\n    it('unwraps type_annotation before extracting', () => {\n      const inner = genericType('Array', ['User']);\n      const wrapper = mockNode('type_annotation', { namedChildren: [inner] });\n      expect(extractGenericTypeArgs(wrapper)).toEqual(['User']);\n    });\n\n    it('unwraps nullable_type before extracting', () => {\n      const inner = genericType('List', ['User']);\n      const wrapper = mockNode('nullable_type', { namedChildren: [inner] });\n      expect(extractGenericTypeArgs(wrapper)).toEqual(['User']);\n    });\n\n    it('unwraps user_type before extracting (Kotlin)', () => 
{\n      const inner = genericType('MutableList', ['String']);\n      const wrapper = mockNode('user_type', { namedChildren: [inner] });\n      expect(extractGenericTypeArgs(wrapper)).toEqual(['String']);\n    });\n  });\n\n  describe('non-generic types return empty array', () => {\n    it('returns [] for plain type_identifier', () => {\n      const node = mockNode('type_identifier', { text: 'User' });\n      expect(extractGenericTypeArgs(node)).toEqual([]);\n    });\n\n    it('returns [] for identifier', () => {\n      const node = mockNode('identifier', { text: 'foo' });\n      expect(extractGenericTypeArgs(node)).toEqual([]);\n    });\n\n    it('returns [] for union_type', () => {\n      const node = mockNode('union_type', {\n        namedChildren: [\n          mockNode('type_identifier', { text: 'string' }),\n          mockNode('type_identifier', { text: 'number' }),\n        ],\n      });\n      expect(extractGenericTypeArgs(node)).toEqual([]);\n    });\n  });\n\n  describe('nested generic types as arguments', () => {\n    it('extracts outer type arg names for nested generics', () => {\n      // Map<String, List<User>> — the second arg is itself a generic_type\n      // extractGenericTypeArgs should extract 'List' (via extractSimpleTypeName)\n      const innerGeneric = genericType('List', ['User']);\n      const stringNode = mockNode('type_identifier', { text: 'String' });\n      const typeArgsNode = mockNode('type_arguments', {\n        namedChildren: [stringNode, innerGeneric],\n      });\n      const baseNode = mockNode('type_identifier', { text: 'Map' });\n      const node = mockNode('generic_type', {\n        namedChildren: [baseNode, typeArgsNode],\n        fields: { name: baseNode },\n      });\n\n      // extractSimpleTypeName on a generic_type returns the base name\n      expect(extractGenericTypeArgs(node)).toEqual(['String', 'List']);\n    });\n  });\n\n  describe('edge cases', () => {\n    it('returns [] for generic_type with no type_arguments 
child', () => {\n      const baseNode = mockNode('type_identifier', { text: 'List' });\n      const node = mockNode('generic_type', {\n        namedChildren: [baseNode],\n        fields: { name: baseNode },\n      });\n      expect(extractGenericTypeArgs(node)).toEqual([]);\n    });\n\n    it('skips unresolvable type arguments', () => {\n      // If a child can't be resolved by extractSimpleTypeName, it is omitted\n      const baseNode = mockNode('type_identifier', { text: 'Fn' });\n      const unresolvedArg = mockNode('function_type', { text: '() => void' });\n      const resolvedArg = mockNode('type_identifier', { text: 'User' });\n      const typeArgsNode = mockNode('type_arguments', {\n        namedChildren: [unresolvedArg, resolvedArg],\n      });\n      const node = mockNode('generic_type', {\n        namedChildren: [baseNode, typeArgsNode],\n        fields: { name: baseNode },\n      });\n      expect(extractGenericTypeArgs(node)).toEqual(['User']);\n    });\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/framework-detection.test.ts",
    "content": "import { describe, it, expect } from 'vitest';\nimport { detectFrameworkFromPath, detectFrameworkFromAST, FRAMEWORK_AST_PATTERNS } from '../../src/core/ingestion/framework-detection.js';\n\ndescribe('detectFrameworkFromPath', () => {\n  describe('Next.js', () => {\n    it('detects Pages Router pages', () => {\n      const result = detectFrameworkFromPath('pages/users.tsx');\n      expect(result).not.toBeNull();\n      expect(result!.framework).toBe('nextjs-pages');\n      expect(result!.entryPointMultiplier).toBe(3.0);\n    });\n\n    it('ignores _app and _document pages', () => {\n      expect(detectFrameworkFromPath('pages/_app.tsx')).toBeNull();\n    });\n\n    it('detects App Router page.tsx', () => {\n      const result = detectFrameworkFromPath('app/dashboard/page.tsx');\n      expect(result).not.toBeNull();\n      expect(result!.framework).toBe('nextjs-app');\n    });\n\n    it('detects API routes in pages', () => {\n      const result = detectFrameworkFromPath('pages/api/users.ts');\n      expect(result).not.toBeNull();\n      expect(result!.framework).toBe('nextjs-api');\n    });\n\n    it('detects App Router API route.ts', () => {\n      const result = detectFrameworkFromPath('app/api/users/route.ts');\n      expect(result).not.toBeNull();\n      expect(result!.framework).toBe('nextjs-api');\n    });\n\n    it('detects layout files', () => {\n      const result = detectFrameworkFromPath('app/layout.tsx');\n      expect(result).not.toBeNull();\n      expect(result!.entryPointMultiplier).toBe(2.0);\n    });\n  });\n\n  describe('Express / Node.js', () => {\n    it('detects route files', () => {\n      const result = detectFrameworkFromPath('routes/auth.ts');\n      expect(result).not.toBeNull();\n      expect(result!.framework).toBe('express');\n      expect(result!.entryPointMultiplier).toBe(2.5);\n    });\n  });\n\n  describe('MVC controllers', () => {\n    it('detects controller folder', () => {\n      const result = 
detectFrameworkFromPath('controllers/UserController.ts');\n      expect(result).not.toBeNull();\n      expect(result!.framework).toBe('mvc');\n    });\n\n    it('detects handlers folder', () => {\n      const result = detectFrameworkFromPath('handlers/auth.ts');\n      expect(result).not.toBeNull();\n      expect(result!.framework).toBe('handlers');\n    });\n  });\n\n  describe('React', () => {\n    it('has React component detection rule for views/components folders', () => {\n      // Note: The current implementation lowercases the path before checking\n      // PascalCase, so PascalCase detection currently can't match.\n      // This test documents the current behavior.\n      const result = detectFrameworkFromPath('views/Button.tsx');\n      // Returns null because path is lowercased before PascalCase regex check\n      expect(result).toBeNull();\n    });\n  });\n\n  describe('Python frameworks', () => {\n    it('detects Django views', () => {\n      const result = detectFrameworkFromPath('myapp/views.py');\n      expect(result).not.toBeNull();\n      expect(result!.framework).toBe('django');\n      expect(result!.entryPointMultiplier).toBe(3.0);\n    });\n\n    it('detects Django URLs', () => {\n      const result = detectFrameworkFromPath('myapp/urls.py');\n      expect(result).not.toBeNull();\n      expect(result!.framework).toBe('django');\n    });\n\n    it('detects FastAPI routers', () => {\n      const result = detectFrameworkFromPath('routers/users.py');\n      expect(result).not.toBeNull();\n      expect(result!.framework).toBe('fastapi');\n    });\n  });\n\n  describe('Java frameworks', () => {\n    it('detects Spring controllers folder', () => {\n      const result = detectFrameworkFromPath('controller/UserController.java');\n      expect(result).not.toBeNull();\n      expect(result!.framework).toBe('spring');\n    });\n\n    it('detects Spring controller by filename', () => {\n      const result = 
detectFrameworkFromPath('src/UserController.java');\n      expect(result).not.toBeNull();\n      expect(result!.framework).toBe('spring');\n    });\n\n    it('detects Java service layer', () => {\n      const result = detectFrameworkFromPath('service/UserService.java');\n      expect(result).not.toBeNull();\n      expect(result!.framework).toBe('java-service');\n    });\n  });\n\n  describe('C# / .NET', () => {\n    it('detects ASP.NET controllers', () => {\n      const result = detectFrameworkFromPath('controllers/UsersController.cs');\n      expect(result).not.toBeNull();\n      expect(result!.framework).toBe('aspnet');\n    });\n\n    it('detects Blazor pages', () => {\n      const result = detectFrameworkFromPath('pages/Index.razor');\n      expect(result).not.toBeNull();\n      expect(result!.framework).toBe('blazor');\n    });\n  });\n\n  describe('Go frameworks', () => {\n    it('detects Go handlers', () => {\n      const result = detectFrameworkFromPath('handlers/user.go');\n      expect(result).not.toBeNull();\n      expect(result!.framework).toBe('go-http');\n    });\n\n    it('detects Go main.go', () => {\n      const result = detectFrameworkFromPath('cmd/server/main.go');\n      expect(result).not.toBeNull();\n      expect(result!.entryPointMultiplier).toBe(3.0);\n    });\n  });\n\n  describe('Rust frameworks', () => {\n    it('detects Rust handlers', () => {\n      const result = detectFrameworkFromPath('handlers/auth.rs');\n      expect(result).not.toBeNull();\n      expect(result!.framework).toBe('rust-web');\n    });\n\n    it('detects main.rs', () => {\n      const result = detectFrameworkFromPath('src/main.rs');\n      expect(result).not.toBeNull();\n      expect(result!.framework).toBe('rust');\n      expect(result!.entryPointMultiplier).toBe(3.0);\n    });\n\n    it('detects bin folder', () => {\n      const result = detectFrameworkFromPath('src/bin/cli.rs');\n      expect(result).not.toBeNull();\n      expect(result!.framework).toBe('rust');\n  
  });\n  });\n\n  describe('C / C++', () => {\n    it('detects main.c', () => {\n      const result = detectFrameworkFromPath('src/main.c');\n      expect(result).not.toBeNull();\n      expect(result!.framework).toBe('c-cpp');\n    });\n\n    it('detects main.cpp', () => {\n      const result = detectFrameworkFromPath('src/main.cpp');\n      expect(result).not.toBeNull();\n      expect(result!.framework).toBe('c-cpp');\n    });\n  });\n\n  describe('PHP / Laravel', () => {\n    it('detects Laravel routes', () => {\n      const result = detectFrameworkFromPath('routes/web.php');\n      expect(result).not.toBeNull();\n      expect(result!.framework).toBe('laravel');\n      expect(result!.entryPointMultiplier).toBe(3.0);\n    });\n\n    it('detects Laravel controllers', () => {\n      const result = detectFrameworkFromPath('http/controllers/UserController.php');\n      expect(result).not.toBeNull();\n      expect(result!.framework).toBe('laravel');\n    });\n\n    it('detects Laravel jobs', () => {\n      const result = detectFrameworkFromPath('jobs/SendEmail.php');\n      expect(result).not.toBeNull();\n      expect(result!.reason).toBe('laravel-job');\n    });\n\n    it('detects Laravel middleware', () => {\n      const result = detectFrameworkFromPath('http/middleware/Auth.php');\n      expect(result).not.toBeNull();\n      expect(result!.reason).toBe('laravel-middleware');\n    });\n\n    it('detects Laravel models', () => {\n      const result = detectFrameworkFromPath('models/User.php');\n      expect(result).not.toBeNull();\n      expect(result!.entryPointMultiplier).toBe(1.5);\n    });\n  });\n\n  describe('Swift / iOS', () => {\n    it('detects AppDelegate', () => {\n      const result = detectFrameworkFromPath('Sources/AppDelegate.swift');\n      expect(result).not.toBeNull();\n      expect(result!.framework).toBe('ios');\n    });\n\n    it('detects ViewControllers folder', () => {\n      const result = 
detectFrameworkFromPath('ViewControllers/LoginVC.swift');\n      expect(result).not.toBeNull();\n      expect(result!.framework).toBe('uikit');\n    });\n\n    it('detects Coordinator pattern', () => {\n      const result = detectFrameworkFromPath('Coordinators/AppCoordinator.swift');\n      expect(result).not.toBeNull();\n      expect(result!.framework).toBe('ios-coordinator');\n    });\n\n    it('detects SwiftUI views folder', () => {\n      const result = detectFrameworkFromPath('views/ContentView.swift');\n      expect(result).not.toBeNull();\n      expect(result!.framework).toBe('swiftui');\n    });\n  });\n\n  describe('generic patterns', () => {\n    it('returns null for unknown paths', () => {\n      expect(detectFrameworkFromPath('src/internal/crypto.ts')).toBeNull();\n    });\n\n    it('normalizes Windows backslashes', () => {\n      const result = detectFrameworkFromPath('routes\\\\auth.ts');\n      expect(result).not.toBeNull();\n      expect(result!.framework).toBe('express');\n    });\n  });\n});\n\ndescribe('detectFrameworkFromAST', () => {\n  it('returns null for empty inputs', () => {\n    expect(detectFrameworkFromAST('', '')).toBeNull();\n    expect(detectFrameworkFromAST('typescript', '')).toBeNull();\n    expect(detectFrameworkFromAST('', 'some code')).toBeNull();\n  });\n\n  it('detects NestJS decorators in TypeScript', () => {\n    const result = detectFrameworkFromAST('typescript', '@Controller(\"/users\")');\n    expect(result).not.toBeNull();\n    expect(result!.framework).toBe('nestjs');\n    expect(result!.entryPointMultiplier).toBe(3.2);\n  });\n\n  it('detects NestJS decorators in JavaScript', () => {\n    const result = detectFrameworkFromAST('javascript', '@Get(\"/\")');\n    expect(result).not.toBeNull();\n    expect(result!.framework).toBe('nestjs');\n  });\n\n  it('detects FastAPI decorators in Python', () => {\n    const result = detectFrameworkFromAST('python', '@app.get(\"/users\")');\n    expect(result).not.toBeNull();\n    
expect(result!.framework).toBe('fastapi');\n  });\n\n  it('detects Flask decorators in Python', () => {\n    const result = detectFrameworkFromAST('python', '@app.route(\"/users\")');\n    expect(result).not.toBeNull();\n    expect(result!.framework).toBe('flask');\n  });\n\n  it('detects Spring annotations in Java', () => {\n    const result = detectFrameworkFromAST('java', '@RestController');\n    expect(result).not.toBeNull();\n    expect(result!.framework).toBe('spring');\n  });\n\n  it('detects ASP.NET attributes in C#', () => {\n    const result = detectFrameworkFromAST('csharp', '[ApiController]');\n    expect(result).not.toBeNull();\n    expect(result!.framework).toBe('aspnet');\n  });\n\n  it('detects Laravel route definitions in PHP', () => {\n    const result = detectFrameworkFromAST('php', \"Route::get('/users', [UserController::class, 'index'])\");\n    expect(result).not.toBeNull();\n    expect(result!.framework).toBe('laravel');\n  });\n\n  it('returns null for unsupported language', () => {\n    expect(detectFrameworkFromAST('rust', '#[get(\"/\")]')).toBeNull();\n  });\n\n  it('is case-insensitive', () => {\n    const result = detectFrameworkFromAST('TypeScript', '@controller(\"/\")');\n    expect(result).not.toBeNull();\n  });\n});\n\ndescribe('FRAMEWORK_AST_PATTERNS', () => {\n  it('has patterns for all expected frameworks', () => {\n    const expectedFrameworks = [\n      'nestjs', 'express', 'fastapi', 'flask', 'spring', 'jaxrs',\n      'aspnet', 'go-http', 'laravel', 'actix', 'axum', 'rocket',\n      'uikit', 'swiftui', 'combine',\n    ];\n    for (const fw of expectedFrameworks) {\n      expect(FRAMEWORK_AST_PATTERNS).toHaveProperty(fw);\n      expect(FRAMEWORK_AST_PATTERNS[fw as keyof typeof FRAMEWORK_AST_PATTERNS].length).toBeGreaterThan(0);\n    }\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/git.test.ts",
    "content": "import { describe, it, expect, vi, beforeEach } from 'vitest';\nimport { execSync } from 'child_process';\nimport { isGitRepo, getCurrentCommit, getGitRoot } from '../../src/storage/git.js';\n\n// Mock child_process.execSync\nvi.mock('child_process', () => ({\n  execSync: vi.fn(),\n}));\n\nconst mockExecSync = vi.mocked(execSync);\n\ndescribe('git utilities', () => {\n  beforeEach(() => {\n    vi.clearAllMocks();\n  });\n\n  describe('isGitRepo', () => {\n    it('returns true when inside a git work tree', () => {\n      mockExecSync.mockReturnValueOnce(Buffer.from(''));\n      expect(isGitRepo('/project')).toBe(true);\n      expect(mockExecSync).toHaveBeenCalledWith(\n        'git rev-parse --is-inside-work-tree',\n        { cwd: '/project', stdio: 'ignore' }\n      );\n    });\n\n    it('returns false when not a git repo', () => {\n      mockExecSync.mockImplementationOnce(() => { throw new Error('not a git repo'); });\n      expect(isGitRepo('/not-a-repo')).toBe(false);\n    });\n\n    it('passes the correct cwd', () => {\n      mockExecSync.mockReturnValueOnce(Buffer.from(''));\n      isGitRepo('/some/path');\n      expect(mockExecSync).toHaveBeenCalledWith(\n        expect.any(String),\n        expect.objectContaining({ cwd: '/some/path' })\n      );\n    });\n  });\n\n  describe('getCurrentCommit', () => {\n    it('returns trimmed commit hash', () => {\n      mockExecSync.mockReturnValueOnce(Buffer.from('abc123def\\n'));\n      expect(getCurrentCommit('/project')).toBe('abc123def');\n    });\n\n    it('returns empty string on error', () => {\n      mockExecSync.mockImplementationOnce(() => { throw new Error('not a git repo'); });\n      expect(getCurrentCommit('/not-a-repo')).toBe('');\n    });\n\n    it('trims whitespace from output', () => {\n      mockExecSync.mockReturnValueOnce(Buffer.from('  sha256hash  \\n'));\n      expect(getCurrentCommit('/project')).toBe('sha256hash');\n    });\n  });\n\n  describe('getGitRoot', () => {\n    
it('returns resolved path on success', () => {\n      mockExecSync.mockReturnValueOnce(Buffer.from('/d/Projects/MyRepo\\n'));\n      const result = getGitRoot('/d/Projects/MyRepo/src');\n      expect(result).toBeTruthy();\n      // path.resolve normalizes the git output\n      expect(typeof result).toBe('string');\n    });\n\n    it('returns null when not in a git repo', () => {\n      mockExecSync.mockImplementationOnce(() => { throw new Error('not a git repo'); });\n      expect(getGitRoot('/not-a-repo')).toBeNull();\n    });\n\n    it('calls git rev-parse --show-toplevel', () => {\n      mockExecSync.mockReturnValueOnce(Buffer.from('/repo\\n'));\n      getGitRoot('/repo/src');\n      expect(mockExecSync).toHaveBeenCalledWith(\n        'git rev-parse --show-toplevel',\n        expect.objectContaining({ cwd: '/repo/src' })\n      );\n    });\n\n    it('trims output before resolving path', () => {\n      mockExecSync.mockReturnValueOnce(Buffer.from('  /repo  \\n'));\n      const result = getGitRoot('/repo/src');\n      expect(result).not.toBeNull();\n      expect(result!.trim()).toBe(result);\n    });\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/graph.test.ts",
    "content": "/**\n * P0 Unit Tests: Knowledge Graph\n *\n * Tests: createKnowledgeGraph() — addNode, getNode, removeNode,\n * iterNodes, addRelationship, removeNodesByFile, counts.\n */\nimport { describe, it, expect } from 'vitest';\nimport { createKnowledgeGraph } from '../../src/core/graph/graph.js';\nimport type { GraphNode, GraphRelationship } from '../../src/core/graph/types.js';\n\nfunction makeNode(id: string, name: string, filePath: string = 'src/test.ts'): GraphNode {\n  return {\n    id,\n    label: 'Function',\n    properties: { name, filePath, startLine: 1, endLine: 10 },\n  };\n}\n\nfunction makeRel(src: string, tgt: string, type: GraphRelationship['type'] = 'CALLS'): GraphRelationship {\n  return {\n    id: `${src}-${type}-${tgt}`,\n    sourceId: src,\n    targetId: tgt,\n    type,\n    confidence: 1.0,\n    reason: '',\n  };\n}\n\ndescribe('createKnowledgeGraph', () => {\n  // ─── addNode / getNode ─────────────────────────────────────────────\n\n  it('adds and retrieves a node', () => {\n    const g = createKnowledgeGraph();\n    const node = makeNode('fn:foo', 'foo');\n    g.addNode(node);\n    expect(g.getNode('fn:foo')).toBe(node);\n  });\n\n  it('returns undefined for unknown node', () => {\n    const g = createKnowledgeGraph();\n    expect(g.getNode('nonexistent')).toBeUndefined();\n  });\n\n  it('duplicate addNode is a no-op', () => {\n    const g = createKnowledgeGraph();\n    const node1 = makeNode('fn:foo', 'foo');\n    const node2 = makeNode('fn:foo', 'bar'); // same ID, different name\n    g.addNode(node1);\n    g.addNode(node2);\n    expect(g.nodeCount).toBe(1);\n    expect(g.getNode('fn:foo')!.properties.name).toBe('foo'); // first one wins\n  });\n\n  // ─── removeNode ─────────────────────────────────────────────────────\n\n  it('removes a node and its relationships', () => {\n    const g = createKnowledgeGraph();\n    g.addNode(makeNode('fn:a', 'a'));\n    g.addNode(makeNode('fn:b', 'b'));\n    g.addRelationship(makeRel('fn:a', 
'fn:b'));\n    expect(g.relationshipCount).toBe(1);\n\n    const removed = g.removeNode('fn:a');\n    expect(removed).toBe(true);\n    expect(g.getNode('fn:a')).toBeUndefined();\n    expect(g.nodeCount).toBe(1);\n    expect(g.relationshipCount).toBe(0); // relationship involving fn:a removed\n  });\n\n  it('removeNode returns false for unknown node', () => {\n    const g = createKnowledgeGraph();\n    expect(g.removeNode('nope')).toBe(false);\n  });\n\n  // ─── removeNodesByFile ──────────────────────────────────────────────\n\n  it('removes all nodes belonging to a file', () => {\n    const g = createKnowledgeGraph();\n    g.addNode(makeNode('fn:a', 'a', 'src/foo.ts'));\n    g.addNode(makeNode('fn:b', 'b', 'src/foo.ts'));\n    g.addNode(makeNode('fn:c', 'c', 'src/bar.ts'));\n\n    const removed = g.removeNodesByFile('src/foo.ts');\n    expect(removed).toBe(2);\n    expect(g.nodeCount).toBe(1);\n    expect(g.getNode('fn:c')).toBeDefined();\n  });\n\n  // ─── iterNodes / iterRelationships ─────────────────────────────────\n\n  it('iterNodes yields all nodes', () => {\n    const g = createKnowledgeGraph();\n    g.addNode(makeNode('fn:a', 'a'));\n    g.addNode(makeNode('fn:b', 'b'));\n\n    const ids = [...g.iterNodes()].map(n => n.id);\n    expect(ids).toHaveLength(2);\n    expect(ids).toContain('fn:a');\n    expect(ids).toContain('fn:b');\n  });\n\n  it('iterRelationships yields all relationships', () => {\n    const g = createKnowledgeGraph();\n    g.addNode(makeNode('fn:a', 'a'));\n    g.addNode(makeNode('fn:b', 'b'));\n    g.addRelationship(makeRel('fn:a', 'fn:b'));\n\n    const rels = [...g.iterRelationships()];\n    expect(rels).toHaveLength(1);\n    expect(rels[0].sourceId).toBe('fn:a');\n  });\n\n  // ─── nodeCount / relationshipCount ─────────────────────────────────\n\n  it('nodeCount reflects current node count', () => {\n    const g = createKnowledgeGraph();\n    expect(g.nodeCount).toBe(0);\n    g.addNode(makeNode('fn:a', 'a'));\n    
expect(g.nodeCount).toBe(1);\n    g.addNode(makeNode('fn:b', 'b'));\n    expect(g.nodeCount).toBe(2);\n  });\n\n  it('relationshipCount reflects current relationship count', () => {\n    const g = createKnowledgeGraph();\n    g.addNode(makeNode('fn:a', 'a'));\n    g.addNode(makeNode('fn:b', 'b'));\n    expect(g.relationshipCount).toBe(0);\n    g.addRelationship(makeRel('fn:a', 'fn:b'));\n    expect(g.relationshipCount).toBe(1);\n  });\n\n  // ─── addRelationship ───────────────────────────────────────────────\n\n  it('duplicate addRelationship is a no-op', () => {\n    const g = createKnowledgeGraph();\n    g.addNode(makeNode('fn:a', 'a'));\n    g.addNode(makeNode('fn:b', 'b'));\n    g.addRelationship(makeRel('fn:a', 'fn:b'));\n    g.addRelationship(makeRel('fn:a', 'fn:b')); // same ID\n    expect(g.relationshipCount).toBe(1);\n  });\n\n  // ─── nodes / relationships arrays ──────────────────────────────────\n\n  it('.nodes returns an array copy', () => {\n    const g = createKnowledgeGraph();\n    g.addNode(makeNode('fn:a', 'a'));\n    const arr1 = g.nodes;\n    const arr2 = g.nodes;\n    expect(arr1).not.toBe(arr2); // different array instances\n    expect(arr1).toHaveLength(1);\n  });\n\n  it('.relationships returns an array copy', () => {\n    const g = createKnowledgeGraph();\n    g.addNode(makeNode('fn:a', 'a'));\n    g.addNode(makeNode('fn:b', 'b'));\n    g.addRelationship(makeRel('fn:a', 'fn:b'));\n    const arr1 = g.relationships;\n    const arr2 = g.relationships;\n    expect(arr1).not.toBe(arr2);\n    expect(arr1).toHaveLength(1);\n  });\n\n  // ─── forEachNode / forEachRelationship ──────────────────────────────\n\n  it('forEachNode calls fn for every node', () => {\n    const g = createKnowledgeGraph();\n    g.addNode(makeNode('fn:a', 'a'));\n    g.addNode(makeNode('fn:b', 'b'));\n\n    const ids: string[] = [];\n    g.forEachNode(n => ids.push(n.id));\n    expect(ids).toHaveLength(2);\n  });\n\n  it('forEachRelationship calls fn for every 
relationship', () => {\n    const g = createKnowledgeGraph();\n    g.addNode(makeNode('fn:a', 'a'));\n    g.addNode(makeNode('fn:b', 'b'));\n    g.addRelationship(makeRel('fn:a', 'fn:b'));\n\n    const types: string[] = [];\n    g.forEachRelationship(r => types.push(r.type));\n    expect(types).toEqual(['CALLS']);\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/has-method.test.ts",
    "content": "import { describe, it, expect } from 'vitest';\nimport Parser from 'tree-sitter';\nimport TypeScript from 'tree-sitter-typescript';\nimport Python from 'tree-sitter-python';\nimport Java from 'tree-sitter-java';\nimport CPP from 'tree-sitter-cpp';\nimport Rust from 'tree-sitter-rust';\nimport CSharp from 'tree-sitter-c-sharp';\nimport Go from 'tree-sitter-go';\nimport { findEnclosingClassId, CLASS_CONTAINER_TYPES, CONTAINER_TYPE_TO_LABEL } from '../../src/core/ingestion/utils.js';\n\nfunction parseCode(language: any, code: string): Parser.Tree {\n  const parser = new Parser();\n  parser.setLanguage(language);\n  return parser.parse(code);\n}\n\n/** Find the first descendant node matching a predicate (BFS). */\nfunction findNode(root: Parser.SyntaxNode, predicate: (n: Parser.SyntaxNode) => boolean): Parser.SyntaxNode | null {\n  const queue: Parser.SyntaxNode[] = [root];\n  while (queue.length > 0) {\n    const node = queue.shift()!;\n    if (predicate(node)) return node;\n    for (let i = 0; i < node.childCount; i++) {\n      queue.push(node.child(i)!);\n    }\n  }\n  return null;\n}\n\ndescribe('CLASS_CONTAINER_TYPES', () => {\n  it('contains expected class-like AST node types', () => {\n    expect(CLASS_CONTAINER_TYPES.has('class_declaration')).toBe(true);\n    expect(CLASS_CONTAINER_TYPES.has('interface_declaration')).toBe(true);\n    expect(CLASS_CONTAINER_TYPES.has('struct_declaration')).toBe(true);\n    expect(CLASS_CONTAINER_TYPES.has('impl_item')).toBe(true);\n    expect(CLASS_CONTAINER_TYPES.has('class_specifier')).toBe(true);\n    expect(CLASS_CONTAINER_TYPES.has('class_definition')).toBe(true);\n  });\n\n  it('does not contain function types', () => {\n    expect(CLASS_CONTAINER_TYPES.has('function_declaration')).toBe(false);\n    expect(CLASS_CONTAINER_TYPES.has('function_definition')).toBe(false);\n  });\n});\n\ndescribe('CONTAINER_TYPE_TO_LABEL', () => {\n  it('maps class-like types to correct labels', () => {\n    
expect(CONTAINER_TYPE_TO_LABEL['class_declaration']).toBe('Class');\n    expect(CONTAINER_TYPE_TO_LABEL['interface_declaration']).toBe('Interface');\n    expect(CONTAINER_TYPE_TO_LABEL['struct_declaration']).toBe('Struct');\n    expect(CONTAINER_TYPE_TO_LABEL['impl_item']).toBe('Impl');\n    expect(CONTAINER_TYPE_TO_LABEL['trait_item']).toBe('Trait');\n    expect(CONTAINER_TYPE_TO_LABEL['record_declaration']).toBe('Record');\n    expect(CONTAINER_TYPE_TO_LABEL['protocol_declaration']).toBe('Interface');\n  });\n});\n\ndescribe('findEnclosingClassId', () => {\n  const filePath = 'test/example.ts';\n\n  describe('TypeScript', () => {\n    it('finds enclosing class for a method', () => {\n      const tree = parseCode(TypeScript.typescript, `\nclass MyService {\n  getData() {\n    return 42;\n  }\n}\n`);\n      // Find the method_definition node for getData\n      const methodNode = findNode(tree.rootNode, n => n.type === 'method_definition');\n      expect(methodNode).not.toBeNull();\n\n      // Find the identifier 'getData' inside the method\n      const nameNode = findNode(methodNode!, n => n.type === 'property_identifier' && n.text === 'getData');\n      expect(nameNode).not.toBeNull();\n\n      const result = findEnclosingClassId(nameNode!, filePath);\n      expect(result).toBe('Class:test/example.ts:MyService');\n    });\n\n    it('finds enclosing interface for a method signature', () => {\n      const tree = parseCode(TypeScript.typescript, `\ninterface MyInterface {\n  doSomething(): void;\n}\n`);\n      // In TS, interface methods are method_signature nodes — find method name\n      const nameNode = findNode(tree.rootNode, n => n.type === 'property_identifier' && n.text === 'doSomething');\n      if (nameNode) {\n        const result = findEnclosingClassId(nameNode, filePath);\n        expect(result).toBe('Interface:test/example.ts:MyInterface');\n      }\n    });\n\n    it('returns null for a top-level function', () => {\n      const tree = 
parseCode(TypeScript.typescript, `\nfunction topLevel() {\n  return 1;\n}\n`);\n      const nameNode = findNode(tree.rootNode, n => n.type === 'identifier' && n.text === 'topLevel');\n      expect(nameNode).not.toBeNull();\n\n      const result = findEnclosingClassId(nameNode!, filePath);\n      expect(result).toBeNull();\n    });\n\n    it('returns null when node has no parent', () => {\n      // Root node's parent is null\n      const tree = parseCode(TypeScript.typescript, 'const x = 1;');\n      const result = findEnclosingClassId(tree.rootNode, filePath);\n      expect(result).toBeNull();\n    });\n  });\n\n  describe('Python', () => {\n    it('finds enclosing class for a method', () => {\n      const tree = parseCode(Python, `\nclass Calculator:\n    def add(self, a, b):\n        return a + b\n`);\n      const methodNode = findNode(tree.rootNode, n => n.type === 'function_definition');\n      expect(methodNode).not.toBeNull();\n\n      const nameNode = findNode(methodNode!, n => n.type === 'identifier' && n.text === 'add');\n      expect(nameNode).not.toBeNull();\n\n      const result = findEnclosingClassId(nameNode!, 'test/calc.py');\n      expect(result).toBe('Class:test/calc.py:Calculator');\n    });\n  });\n\n  describe('Java', () => {\n    it('finds enclosing class for a method', () => {\n      const tree = parseCode(Java, `\nclass UserService {\n  public void save(User user) {}\n}\n`);\n      const methodNode = findNode(tree.rootNode, n => n.type === 'method_declaration');\n      expect(methodNode).not.toBeNull();\n\n      const nameNode = findNode(methodNode!, n => n.type === 'identifier' && n.text === 'save');\n      expect(nameNode).not.toBeNull();\n\n      const result = findEnclosingClassId(nameNode!, 'test/UserService.java');\n      expect(result).toBe('Class:test/UserService.java:UserService');\n    });\n\n    it('finds enclosing interface for a method', () => {\n      const tree = parseCode(Java, `\ninterface Repository {\n  void findById(int 
id);\n}\n`);\n      const methodNode = findNode(tree.rootNode, n => n.type === 'method_declaration');\n      expect(methodNode).not.toBeNull();\n\n      const nameNode = findNode(methodNode!, n => n.type === 'identifier' && n.text === 'findById');\n      expect(nameNode).not.toBeNull();\n\n      const result = findEnclosingClassId(nameNode!, 'test/Repository.java');\n      expect(result).toBe('Interface:test/Repository.java:Repository');\n    });\n  });\n\n  describe('C++', () => {\n    it('finds enclosing class_specifier for a method', () => {\n      const tree = parseCode(CPP, `\nclass Vector {\npublic:\n  int size() { return _size; }\nprivate:\n  int _size;\n};\n`);\n      // In C++ tree-sitter, the class is a class_specifier\n      const classNode = findNode(tree.rootNode, n => n.type === 'class_specifier');\n      expect(classNode).not.toBeNull();\n\n      // Find a function_definition inside the class\n      const funcDef = findNode(classNode!, n => n.type === 'function_definition');\n      expect(funcDef).not.toBeNull();\n\n      const nameNode = findNode(funcDef!, n => n.type === 'identifier' && n.text === 'size');\n      if (nameNode) {\n        const result = findEnclosingClassId(nameNode, 'test/vector.h');\n        expect(result).toBe('Class:test/vector.h:Vector');\n      }\n    });\n\n    it('finds enclosing struct_specifier for a method', () => {\n      const tree = parseCode(CPP, `\nstruct Point {\n  double distance() { return 0; }\n};\n`);\n      const funcDef = findNode(tree.rootNode, n => n.type === 'function_definition');\n      if (funcDef) {\n        const nameNode = findNode(funcDef, n => n.type === 'identifier' && n.text === 'distance');\n        if (nameNode) {\n          const result = findEnclosingClassId(nameNode, 'test/point.h');\n          expect(result).toBe('Struct:test/point.h:Point');\n        }\n      }\n    });\n  });\n\n  describe('Rust', () => {\n    it('finds enclosing impl_item for a method', () => {\n      const tree = 
parseCode(Rust, `\nstruct Counter {\n  count: u32,\n}\nimpl Counter {\n  fn increment(&mut self) {\n    self.count += 1;\n  }\n}\n`);\n      const implNode = findNode(tree.rootNode, n => n.type === 'impl_item');\n      expect(implNode).not.toBeNull();\n\n      const funcItem = findNode(implNode!, n => n.type === 'function_item');\n      expect(funcItem).not.toBeNull();\n\n      const nameNode = findNode(funcItem!, n => n.type === 'identifier' && n.text === 'increment');\n      expect(nameNode).not.toBeNull();\n\n      const result = findEnclosingClassId(nameNode!, 'test/counter.rs');\n      expect(result).toBe('Impl:test/counter.rs:Counter');\n    });\n\n    it('picks struct name (not trait name) for impl Trait for Struct', () => {\n      const tree = parseCode(Rust, `\nstruct MyStruct {\n  value: i32,\n}\n\ntrait MyTrait {\n  fn do_something(&self);\n}\n\nimpl MyTrait for MyStruct {\n  fn do_something(&self) {\n    println!(\"{}\", self.value);\n  }\n}\n`);\n      // Find the impl_item that has a `for` keyword (impl Trait for Struct)\n      const implNode = findNode(tree.rootNode, n =>\n        n.type === 'impl_item' && n.children?.some((c: any) => c.text === 'for')\n      );\n      expect(implNode).not.toBeNull();\n\n      const funcItem = findNode(implNode!, n => n.type === 'function_item');\n      expect(funcItem).not.toBeNull();\n\n      const nameNode = findNode(funcItem!, n => n.type === 'identifier' && n.text === 'do_something');\n      expect(nameNode).not.toBeNull();\n\n      const result = findEnclosingClassId(nameNode!, 'test/my_struct.rs');\n      // Should resolve to MyStruct (the implementing type), NOT MyTrait\n      expect(result).toBe('Impl:test/my_struct.rs:MyStruct');\n    });\n\n    it('still picks struct name for plain impl Struct (no trait)', () => {\n      const tree = parseCode(Rust, `\nstruct Counter {\n  count: u32,\n}\nimpl Counter {\n  fn increment(&mut self) {\n    self.count += 1;\n  }\n}\n`);\n      const implNode = 
findNode(tree.rootNode, n => n.type === 'impl_item');\n      expect(implNode).not.toBeNull();\n\n      const funcItem = findNode(implNode!, n => n.type === 'function_item');\n      expect(funcItem).not.toBeNull();\n\n      const nameNode = findNode(funcItem!, n => n.type === 'identifier' && n.text === 'increment');\n      expect(nameNode).not.toBeNull();\n\n      const result = findEnclosingClassId(nameNode!, 'test/counter.rs');\n      expect(result).toBe('Impl:test/counter.rs:Counter');\n    });\n\n    it('finds enclosing trait_item for a method', () => {\n      const tree = parseCode(Rust, `\ntrait Drawable {\n  fn draw(&self);\n}\n`);\n      const traitNode = findNode(tree.rootNode, n => n.type === 'trait_item');\n      expect(traitNode).not.toBeNull();\n\n      const funcItem = findNode(traitNode!, n => n.type === 'function_signature_item' || n.type === 'function_item');\n      if (funcItem) {\n        const nameNode = findNode(funcItem, n => n.type === 'identifier' && n.text === 'draw');\n        if (nameNode) {\n          const result = findEnclosingClassId(nameNode, 'test/draw.rs');\n          expect(result).toBe('Trait:test/draw.rs:Drawable');\n        }\n      }\n    });\n  });\n\n  describe('C#', () => {\n    it('finds enclosing class for a method', () => {\n      const tree = parseCode(CSharp, `\nclass OrderService {\n  public void Process() {}\n}\n`);\n      const methodNode = findNode(tree.rootNode, n => n.type === 'method_declaration');\n      expect(methodNode).not.toBeNull();\n\n      const nameNode = findNode(methodNode!, n => n.type === 'identifier' && n.text === 'Process');\n      expect(nameNode).not.toBeNull();\n\n      const result = findEnclosingClassId(nameNode!, 'test/OrderService.cs');\n      expect(result).toBe('Class:test/OrderService.cs:OrderService');\n    });\n\n    it('finds enclosing record for a method', () => {\n      const tree = parseCode(CSharp, `\nrecord Person {\n  public string GetName() { return \"\"; }\n}\n`);\n      const 
methodNode = findNode(tree.rootNode, n => n.type === 'method_declaration');\n      if (methodNode) {\n        const nameNode = findNode(methodNode, n => n.type === 'identifier' && n.text === 'GetName');\n        if (nameNode) {\n          const result = findEnclosingClassId(nameNode, 'test/Person.cs');\n          expect(result).toBe('Record:test/Person.cs:Person');\n        }\n      }\n    });\n\n    it('finds enclosing struct for a method', () => {\n      const tree = parseCode(CSharp, `\nstruct Vector2 {\n  public float Length() { return 0; }\n}\n`);\n      const methodNode = findNode(tree.rootNode, n => n.type === 'method_declaration');\n      if (methodNode) {\n        const nameNode = findNode(methodNode, n => n.type === 'identifier' && n.text === 'Length');\n        if (nameNode) {\n          const result = findEnclosingClassId(nameNode, 'test/Vector2.cs');\n          expect(result).toBe('Struct:test/Vector2.cs:Vector2');\n        }\n      }\n    });\n  });\n\n  describe('Go', () => {\n    it('returns receiver struct ID for Go methods', () => {\n      // Go methods have receiver parameter: func (s *Server) Start() {}\n      // findEnclosingClassId extracts the receiver type to link method → struct\n      const tree = parseCode(Go, `\npackage main\n\ntype Server struct {\n  port int\n}\n\nfunc (s *Server) Start() {}\n`);\n      const methodNode = findNode(tree.rootNode, n => n.type === 'method_declaration');\n      expect(methodNode).not.toBeNull();\n\n      const nameNode = findNode(methodNode!, n => n.type === 'field_identifier' && n.text === 'Start');\n      if (nameNode) {\n        const result = findEnclosingClassId(nameNode, 'test/server.go');\n        expect(result).not.toBeNull();\n        // Should generate a Struct ID for \"Server\"\n        expect(result).toContain('Server');\n      }\n    });\n  });\n\n  describe('edge cases', () => {\n    it('handles nested classes — returns innermost enclosing class', () => {\n      const tree = 
parseCode(TypeScript.typescript, `\nclass Outer {\n  inner = class Inner {\n    doWork() {}\n  }\n}\n`);\n      const methodNode = findNode(tree.rootNode, n => n.type === 'method_definition');\n      if (methodNode) {\n        const nameNode = findNode(methodNode, n => n.type === 'property_identifier' && n.text === 'doWork');\n        if (nameNode) {\n          const result = findEnclosingClassId(nameNode, filePath);\n          // Should find the innermost class (Inner, which is a class node)\n          expect(result).not.toBeNull();\n          // The result should reference the inner class, not the outer\n        }\n      }\n    });\n\n    it('returns null for a node without parent', () => {\n      // Simulate a node with null parent (cast: a plain object stands in for a SyntaxNode)\n      const fakeNode = { parent: null };\n      const result = findEnclosingClassId(fakeNode as any, filePath);\n      expect(result).toBeNull();\n    });\n\n    it('skips containers without a name node', () => {\n      // Simulate AST nodes where the class container has no name\n      const fakeClassNode = {\n        type: 'class_declaration',\n        childForFieldName: () => null,\n        children: [],\n        parent: null,\n      };\n      const fakeChild = {\n        parent: fakeClassNode,\n      };\n      const result = findEnclosingClassId(fakeChild as any, filePath);\n      // The class has no name, so should return null\n      expect(result).toBeNull();\n    });\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/heritage-processor.test.ts",
    "content": "import { describe, it, expect, vi, beforeEach } from 'vitest';\nimport { processHeritageFromExtracted } from '../../src/core/ingestion/heritage-processor.js';\nimport { createKnowledgeGraph } from '../../src/core/graph/graph.js';\nimport { createResolutionContext, type ResolutionContext } from '../../src/core/ingestion/resolution-context.js';\nimport type { ExtractedHeritage } from '../../src/core/ingestion/workers/parse-worker.js';\n\ndescribe('processHeritageFromExtracted', () => {\n  let graph: ReturnType<typeof createKnowledgeGraph>;\n  let ctx: ResolutionContext;\n\n  beforeEach(() => {\n    graph = createKnowledgeGraph();\n    ctx = createResolutionContext();\n  });\n\n  describe('extends', () => {\n    it('creates EXTENDS relationship between classes', async () => {\n      ctx.symbols.add('src/admin.ts', 'AdminUser', 'Class:src/admin.ts:AdminUser', 'Class');\n      ctx.symbols.add('src/user.ts', 'User', 'Class:src/user.ts:User', 'Class');\n\n      const heritage: ExtractedHeritage[] = [{\n        filePath: 'src/admin.ts',\n        className: 'AdminUser',\n        parentName: 'User',\n        kind: 'extends',\n      }];\n\n      await processHeritageFromExtracted(graph, heritage, ctx);\n\n      const rels = graph.relationships.filter(r => r.type === 'EXTENDS');\n      expect(rels).toHaveLength(1);\n      expect(rels[0].sourceId).toBe('Class:src/admin.ts:AdminUser');\n      expect(rels[0].targetId).toBe('Class:src/user.ts:User');\n      expect(rels[0].confidence).toBe(1.0);\n    });\n\n    it('uses generated ID when class not in symbol table', async () => {\n      const heritage: ExtractedHeritage[] = [{\n        filePath: 'src/admin.ts',\n        className: 'AdminUser',\n        parentName: 'BaseUser',\n        kind: 'extends',\n      }];\n\n      await processHeritageFromExtracted(graph, heritage, ctx);\n\n      const rels = graph.relationships.filter(r => r.type === 'EXTENDS');\n      expect(rels).toHaveLength(1);\n      
expect(rels[0].sourceId).toContain('AdminUser');\n      expect(rels[0].targetId).toContain('BaseUser');\n    });\n\n    it('skips self-inheritance', async () => {\n      ctx.symbols.add('src/a.ts', 'Foo', 'Class:src/a.ts:Foo', 'Class');\n\n      const heritage: ExtractedHeritage[] = [{\n        filePath: 'src/a.ts',\n        className: 'Foo',\n        parentName: 'Foo',\n        kind: 'extends',\n      }];\n\n      await processHeritageFromExtracted(graph, heritage, ctx);\n      expect(graph.relationshipCount).toBe(0);\n    });\n  });\n\n  describe('implements', () => {\n    it('creates IMPLEMENTS relationship', async () => {\n      ctx.symbols.add('src/service.ts', 'UserService', 'Class:src/service.ts:UserService', 'Class');\n      ctx.symbols.add('src/interfaces.ts', 'IService', 'Interface:src/interfaces.ts:IService', 'Interface');\n\n      const heritage: ExtractedHeritage[] = [{\n        filePath: 'src/service.ts',\n        className: 'UserService',\n        parentName: 'IService',\n        kind: 'implements',\n      }];\n\n      await processHeritageFromExtracted(graph, heritage, ctx);\n\n      const rels = graph.relationships.filter(r => r.type === 'IMPLEMENTS');\n      expect(rels).toHaveLength(1);\n      expect(rels[0].sourceId).toBe('Class:src/service.ts:UserService');\n    });\n  });\n\n  describe('trait-impl (Rust)', () => {\n    it('creates IMPLEMENTS relationship for trait impl', async () => {\n      ctx.symbols.add('src/point.rs', 'Point', 'Struct:src/point.rs:Point', 'Struct');\n      ctx.symbols.add('src/display.rs', 'Display', 'Trait:src/display.rs:Display', 'Trait');\n\n      const heritage: ExtractedHeritage[] = [{\n        filePath: 'src/point.rs',\n        className: 'Point',\n        parentName: 'Display',\n        kind: 'trait-impl',\n      }];\n\n      await processHeritageFromExtracted(graph, heritage, ctx);\n\n      const rels = graph.relationships.filter(r => r.type === 'IMPLEMENTS');\n      expect(rels).toHaveLength(1);\n      
expect(rels[0].reason).toBe('trait-impl');\n    });\n  });\n\n  describe('C# interface resolution from extends captures', () => {\n    it('emits IMPLEMENTS when parent is an Interface in symbol table', async () => {\n      ctx.symbols.add('src/Service.cs', 'UserService', 'Class:src/Service.cs:UserService', 'Class');\n      ctx.symbols.add('src/IService.cs', 'IService', 'Interface:src/IService.cs:IService', 'Interface');\n\n      const heritage: ExtractedHeritage[] = [{\n        filePath: 'src/Service.cs',\n        className: 'UserService',\n        parentName: 'IService',\n        kind: 'extends', // C# base_list always sends extends\n      }];\n\n      await processHeritageFromExtracted(graph, heritage, ctx);\n\n      const impls = graph.relationships.filter(r => r.type === 'IMPLEMENTS');\n      const exts = graph.relationships.filter(r => r.type === 'EXTENDS');\n      expect(impls).toHaveLength(1);\n      expect(exts).toHaveLength(0);\n      expect(impls[0].sourceId).toBe('Class:src/Service.cs:UserService');\n      expect(impls[0].targetId).toBe('Interface:src/IService.cs:IService');\n    });\n\n    it('emits EXTENDS when parent is a Class in symbol table', async () => {\n      ctx.symbols.add('src/Admin.cs', 'AdminUser', 'Class:src/Admin.cs:AdminUser', 'Class');\n      ctx.symbols.add('src/User.cs', 'User', 'Class:src/User.cs:User', 'Class');\n\n      const heritage: ExtractedHeritage[] = [{\n        filePath: 'src/Admin.cs',\n        className: 'AdminUser',\n        parentName: 'User',\n        kind: 'extends',\n      }];\n\n      await processHeritageFromExtracted(graph, heritage, ctx);\n\n      const exts = graph.relationships.filter(r => r.type === 'EXTENDS');\n      const impls = graph.relationships.filter(r => r.type === 'IMPLEMENTS');\n      expect(exts).toHaveLength(1);\n      expect(impls).toHaveLength(0);\n    });\n\n    it('uses I[A-Z] heuristic for unresolved interface names in C#', async () => {\n      // IDisposable is not in symbol table (external 
.NET type)\n      const heritage: ExtractedHeritage[] = [{\n        filePath: 'src/Resource.cs',\n        className: 'Resource',\n        parentName: 'IDisposable',\n        kind: 'extends',\n      }];\n\n      await processHeritageFromExtracted(graph, heritage, ctx);\n\n      const impls = graph.relationships.filter(r => r.type === 'IMPLEMENTS');\n      expect(impls).toHaveLength(1);\n      expect(impls[0].targetId).toContain('IDisposable');\n    });\n\n    it('does not apply I[A-Z] heuristic for TypeScript — unresolved IFoo should be EXTENDS', async () => {\n      // The I[A-Z] convention is C#/Java-specific; TypeScript files should not be affected\n      const heritage: ExtractedHeritage[] = [{\n        filePath: 'src/service.ts',\n        className: 'MyService',\n        parentName: 'IFoo',\n        kind: 'extends',\n      }];\n\n      await processHeritageFromExtracted(graph, heritage, ctx);\n\n      const exts = graph.relationships.filter(r => r.type === 'EXTENDS');\n      const impls = graph.relationships.filter(r => r.type === 'IMPLEMENTS');\n      expect(exts).toHaveLength(1);\n      expect(impls).toHaveLength(0);\n    });\n\n    it('does not misclassify non-I-prefixed unresolved names as interfaces', async () => {\n      const heritage: ExtractedHeritage[] = [{\n        filePath: 'src/Derived.cs',\n        className: 'Derived',\n        parentName: 'BaseClass',\n        kind: 'extends',\n      }];\n\n      await processHeritageFromExtracted(graph, heritage, ctx);\n\n      const exts = graph.relationships.filter(r => r.type === 'EXTENDS');\n      const impls = graph.relationships.filter(r => r.type === 'IMPLEMENTS');\n      expect(exts).toHaveLength(1);\n      expect(impls).toHaveLength(0);\n    });\n\n    it('does not match single-letter I names like \"I\" or \"Id\"', async () => {\n      const heritage: ExtractedHeritage[] = [{\n        filePath: 'src/Thing.cs',\n        className: 'Thing',\n        parentName: 'Id',\n        kind: 'extends',\n      
}];\n\n      await processHeritageFromExtracted(graph, heritage, ctx);\n\n      // \"Id\" starts with I but second char is lowercase — should be EXTENDS\n      const exts = graph.relationships.filter(r => r.type === 'EXTENDS');\n      expect(exts).toHaveLength(1);\n    });\n\n    it('handles mixed class + interface base_list from C#', async () => {\n      ctx.symbols.add('src/Repo.cs', 'UserRepo', 'Class:src/Repo.cs:UserRepo', 'Class');\n      ctx.symbols.add('src/Base.cs', 'BaseRepository', 'Class:src/Base.cs:BaseRepository', 'Class');\n      ctx.symbols.add('src/IRepo.cs', 'IRepository', 'Interface:src/IRepo.cs:IRepository', 'Interface');\n      ctx.symbols.add('src/IDisp.cs', 'IDisposable', 'Interface:src/IDisp.cs:IDisposable', 'Interface');\n\n      const heritage: ExtractedHeritage[] = [\n        { filePath: 'src/Repo.cs', className: 'UserRepo', parentName: 'BaseRepository', kind: 'extends' },\n        { filePath: 'src/Repo.cs', className: 'UserRepo', parentName: 'IRepository', kind: 'extends' },\n        { filePath: 'src/Repo.cs', className: 'UserRepo', parentName: 'IDisposable', kind: 'extends' },\n      ];\n\n      await processHeritageFromExtracted(graph, heritage, ctx);\n\n      const exts = graph.relationships.filter(r => r.type === 'EXTENDS');\n      const impls = graph.relationships.filter(r => r.type === 'IMPLEMENTS');\n      expect(exts).toHaveLength(1); // BaseRepository\n      expect(impls).toHaveLength(2); // IRepository + IDisposable\n    });\n  });\n\n  describe('Swift protocol conformance from extends captures', () => {\n    it('defaults unresolved PascalCase protocol names to IMPLEMENTS for Swift', async () => {\n      // Codable, Hashable, Equatable etc. 
are protocols — no I-prefix convention in Swift\n      const heritage: ExtractedHeritage[] = [{\n        filePath: 'src/Model.swift',\n        className: 'User',\n        parentName: 'Codable',\n        kind: 'extends',\n      }];\n\n      await processHeritageFromExtracted(graph, heritage, ctx);\n\n      const impls = graph.relationships.filter(r => r.type === 'IMPLEMENTS');\n      const exts = graph.relationships.filter(r => r.type === 'EXTENDS');\n      expect(impls).toHaveLength(1);\n      expect(exts).toHaveLength(0);\n      expect(impls[0].targetId).toContain('Codable');\n    });\n\n    it('still uses symbol table authoritatively for Swift (Tier 1 takes precedence)', async () => {\n      // When the parent is in the symbol table as a Class, EXTENDS wins even in Swift\n      ctx.symbols.add('src/Animal.swift', 'Animal', 'Class:src/Animal.swift:Animal', 'Class');\n\n      const heritage: ExtractedHeritage[] = [{\n        filePath: 'src/Dog.swift',\n        className: 'Dog',\n        parentName: 'Animal',\n        kind: 'extends',\n      }];\n\n      await processHeritageFromExtracted(graph, heritage, ctx);\n\n      const exts = graph.relationships.filter(r => r.type === 'EXTENDS');\n      const impls = graph.relationships.filter(r => r.type === 'IMPLEMENTS');\n      expect(exts).toHaveLength(1);\n      expect(impls).toHaveLength(0);\n    });\n  });\n\n  it('handles multiple heritage entries', async () => {\n    const heritage: ExtractedHeritage[] = [\n      { filePath: 'src/a.ts', className: 'A', parentName: 'B', kind: 'extends' },\n      { filePath: 'src/c.ts', className: 'C', parentName: 'D', kind: 'implements' },\n      { filePath: 'src/e.rs', className: 'E', parentName: 'F', kind: 'trait-impl' },\n    ];\n\n    await processHeritageFromExtracted(graph, heritage, ctx);\n    expect(graph.relationships.filter(r => r.type === 'EXTENDS')).toHaveLength(1);\n    expect(graph.relationships.filter(r => r.type === 'IMPLEMENTS')).toHaveLength(2);\n  });\n\n  it('calls 
progress callback', async () => {\n    const heritage: ExtractedHeritage[] = [\n      { filePath: 'src/a.ts', className: 'A', parentName: 'B', kind: 'extends' },\n    ];\n\n    const onProgress = vi.fn();\n    await processHeritageFromExtracted(graph, heritage, ctx, onProgress);\n    expect(onProgress).toHaveBeenCalledWith(1, 1);\n  });\n\n  it('handles empty heritage array', async () => {\n    await processHeritageFromExtracted(graph, [], ctx);\n    expect(graph.relationshipCount).toBe(0);\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/hooks.test.ts",
    "content": "/**\n * Regression Tests: Claude Code Hooks\n *\n * Tests the hook scripts (gitnexus-hook.cjs and gitnexus-hook.js) that run\n * as PreToolUse and PostToolUse hooks in Claude Code.\n *\n * Covers:\n * - extractPattern: pattern extraction from Grep/Glob/Bash tool inputs\n * - findGitNexusDir: .gitnexus directory discovery\n * - handlePostToolUse: staleness detection after git mutations\n * - cwd validation: rejects relative paths (defense-in-depth)\n * - shell injection: verifies no shell: true in spawnSync calls\n * - dispatch map: correct handler routing\n * - cross-platform: Windows .cmd extension handling\n *\n * Since the hooks are CJS scripts that call main() on load, we test them\n * by spawning them as child processes with controlled stdin JSON.\n */\nimport { describe, it, expect, beforeAll, afterAll } from 'vitest';\nimport { spawnSync } from 'child_process';\nimport fs from 'fs';\nimport path from 'path';\nimport os from 'os';\nimport { runHook, parseHookOutput } from '../utils/hook-test-helpers.js';\n\n// ─── Paths to both hook variants ────────────────────────────────────\n\nconst CJS_HOOK = path.resolve(__dirname, '..', '..', 'hooks', 'claude', 'gitnexus-hook.cjs');\nconst PLUGIN_HOOK = path.resolve(__dirname, '..', '..', '..', 'gitnexus-claude-plugin', 'hooks', 'gitnexus-hook.js');\n\n// ─── Test fixtures: temporary .gitnexus directory ───────────────────\n\nlet tmpDir: string;\nlet gitNexusDir: string;\n\nbeforeAll(() => {\n  tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'gitnexus-hook-test-'));\n  gitNexusDir = path.join(tmpDir, '.gitnexus');\n  fs.mkdirSync(gitNexusDir, { recursive: true });\n\n  // Initialize a bare git repo so git rev-parse HEAD works\n  spawnSync('git', ['init'], { cwd: tmpDir, stdio: 'pipe' });\n  spawnSync('git', ['config', 'user.email', 'test@test.com'], { cwd: tmpDir, stdio: 'pipe' });\n  spawnSync('git', ['config', 'user.name', 'Test'], { cwd: tmpDir, stdio: 'pipe' });\n  fs.writeFileSync(path.join(tmpDir, 
'dummy.txt'), 'hello');\n  spawnSync('git', ['add', '.'], { cwd: tmpDir, stdio: 'pipe' });\n  spawnSync('git', ['commit', '-m', 'init'], { cwd: tmpDir, stdio: 'pipe' });\n});\n\nafterAll(() => {\n  fs.rmSync(tmpDir, { recursive: true, force: true });\n});\n\n// ─── Helper to get HEAD commit hash ─────────────────────────────────\n\nfunction getHeadCommit(): string {\n  const result = spawnSync('git', ['rev-parse', 'HEAD'], {\n    cwd: tmpDir, encoding: 'utf-8', stdio: ['pipe', 'pipe', 'pipe'],\n  });\n  return (result.stdout || '').trim();\n}\n\n// ─── Both hook files should exist ───────────────────────────────────\n\ndescribe('Hook files exist', () => {\n  it('CJS hook exists', () => {\n    expect(fs.existsSync(CJS_HOOK)).toBe(true);\n  });\n\n  it('Plugin hook exists', () => {\n    expect(fs.existsSync(PLUGIN_HOOK)).toBe(true);\n  });\n});\n\n// ─── Source code regression: no shell: true ──────────────────────────\n\ndescribe('Shell injection regression', () => {\n  for (const [label, hookPath] of [['CJS', CJS_HOOK], ['Plugin', PLUGIN_HOOK]] as const) {\n    it(`${label} hook has no shell: true in spawnSync calls`, () => {\n      const source = fs.readFileSync(hookPath, 'utf-8');\n      // Match spawnSync calls with shell option set to true or a variable\n      // Allowed: comments mentioning shell: true, string literals\n      const lines = source.split('\\n');\n      for (let i = 0; i < lines.length; i++) {\n        const line = lines[i];\n        // Skip comments and string literals\n        if (line.trim().startsWith('//') || line.trim().startsWith('*')) continue;\n        // Check for shell: true or shell: isWin in actual code\n        if (/shell:\\s*(true|isWin)/.test(line)) {\n          throw new Error(`${label} hook line ${i + 1} has shell injection risk: ${line.trim()}`);\n        }\n      }\n    });\n  }\n});\n\n// ─── Source code regression: .cmd extensions for Windows ─────────────\n\ndescribe('Windows .cmd extension handling', () => {\n  for (const 
[label, hookPath] of [['CJS', CJS_HOOK], ['Plugin', PLUGIN_HOOK]] as const) {\n    it(`${label} hook uses .cmd extensions for Windows npx`, () => {\n      const source = fs.readFileSync(hookPath, 'utf-8');\n      expect(source).toContain(\"npx.cmd\");\n    });\n  }\n\n  it('Plugin hook uses .cmd extension for Windows gitnexus binary', () => {\n    const source = fs.readFileSync(PLUGIN_HOOK, 'utf-8');\n    expect(source).toContain(\"gitnexus.cmd\");\n  });\n});\n\n// ─── Source code regression: cwd validation ─────────────────────────\n\ndescribe('cwd validation guards', () => {\n  for (const [label, hookPath] of [['CJS', CJS_HOOK], ['Plugin', PLUGIN_HOOK]] as const) {\n    it(`${label} hook validates cwd is absolute path`, () => {\n      const source = fs.readFileSync(hookPath, 'utf-8');\n      const cwdChecks = (source.match(/path\\.isAbsolute\\(cwd\\)/g) || []).length;\n      // Should have at least 2 checks (one in PreToolUse, one in PostToolUse)\n      expect(cwdChecks).toBeGreaterThanOrEqual(2);\n    });\n  }\n});\n\n// ─── Source code regression: sendHookResponse used consistently ──────\n\ndescribe('sendHookResponse consistency', () => {\n  for (const [label, hookPath] of [['CJS', CJS_HOOK], ['Plugin', PLUGIN_HOOK]] as const) {\n    it(`${label} hook uses sendHookResponse in both handlers`, () => {\n      const source = fs.readFileSync(hookPath, 'utf-8');\n      const calls = (source.match(/sendHookResponse\\(/g) || []).length;\n      // At least 3: definition + PreToolUse call + PostToolUse call\n      expect(calls).toBeGreaterThanOrEqual(3);\n    });\n\n    it(`${label} hook does not inline hookSpecificOutput JSON in handlers`, () => {\n      const source = fs.readFileSync(hookPath, 'utf-8');\n      // Count inline hookSpecificOutput usage (should only be in sendHookResponse definition)\n      const inlineCount = (source.match(/hookSpecificOutput/g) || []).length;\n      // Exactly 1 occurrence: inside the sendHookResponse function body\n      
expect(inlineCount).toBe(1);\n    });\n  }\n});\n\n// ─── Source code regression: dispatch map pattern ────────────────────\n\ndescribe('Dispatch map pattern', () => {\n  for (const [label, hookPath] of [['CJS', CJS_HOOK], ['Plugin', PLUGIN_HOOK]] as const) {\n    it(`${label} hook uses dispatch map instead of if/else`, () => {\n      const source = fs.readFileSync(hookPath, 'utf-8');\n      expect(source).toContain('const handlers = {');\n      expect(source).toContain('PreToolUse: handlePreToolUse');\n      expect(source).toContain('PostToolUse: handlePostToolUse');\n      // Should NOT have if/else dispatch in main()\n      expect(source).not.toMatch(/if\\s*\\(hookEvent\\s*===\\s*'PreToolUse'\\)/);\n    });\n  }\n});\n\n// ─── Source code regression: debug error truncation ──────────────────\n\ndescribe('Debug error message truncation', () => {\n  for (const [label, hookPath] of [['CJS', CJS_HOOK], ['Plugin', PLUGIN_HOOK]] as const) {\n    it(`${label} hook truncates error messages to 200 chars`, () => {\n      const source = fs.readFileSync(hookPath, 'utf-8');\n      expect(source).toContain('.slice(0, 200)');\n    });\n  }\n});\n\n// ─── extractPattern regression (via source analysis) ────────────────\n\ndescribe('extractPattern coverage', () => {\n  for (const [label, hookPath] of [['CJS', CJS_HOOK], ['Plugin', PLUGIN_HOOK]] as const) {\n    it(`${label} hook extracts pattern from Grep tool input`, () => {\n      const source = fs.readFileSync(hookPath, 'utf-8');\n      expect(source).toContain(\"toolName === 'Grep'\");\n      expect(source).toContain('toolInput.pattern');\n    });\n\n    it(`${label} hook extracts pattern from Glob tool input`, () => {\n      const source = fs.readFileSync(hookPath, 'utf-8');\n      expect(source).toContain(\"toolName === 'Glob'\");\n    });\n\n    it(`${label} hook extracts pattern from Bash grep/rg commands`, () => {\n      const source = fs.readFileSync(hookPath, 'utf-8');\n      
expect(source).toMatch(/\\\\brg\\\\b.*\\\\bgrep\\\\b/);\n    });\n\n    it(`${label} hook rejects patterns shorter than 3 chars`, () => {\n      const source = fs.readFileSync(hookPath, 'utf-8');\n      expect(source).toContain('cleaned.length >= 3');\n    });\n  }\n});\n\n// ─── PostToolUse: git mutation regex coverage ───────────────────────\n\ndescribe('Git mutation regex', () => {\n  const GIT_REGEX = /\\\\bgit\\\\s\\+\\(commit\\|merge\\|rebase\\|cherry-pick\\|pull\\)/;\n\n  for (const [label, hookPath] of [['CJS', CJS_HOOK], ['Plugin', PLUGIN_HOOK]] as const) {\n    it(`${label} hook detects git commit`, () => {\n      const source = fs.readFileSync(hookPath, 'utf-8');\n      expect(source).toContain('commit');\n    });\n\n    it(`${label} hook detects git merge`, () => {\n      const source = fs.readFileSync(hookPath, 'utf-8');\n      expect(source).toContain('merge');\n    });\n\n    it(`${label} hook detects git rebase`, () => {\n      const source = fs.readFileSync(hookPath, 'utf-8');\n      expect(source).toContain('rebase');\n    });\n\n    it(`${label} hook detects git cherry-pick`, () => {\n      const source = fs.readFileSync(hookPath, 'utf-8');\n      expect(source).toContain('cherry-pick');\n    });\n\n    it(`${label} hook detects git pull`, () => {\n      const source = fs.readFileSync(hookPath, 'utf-8');\n      // 'pull' in the regex alternation\n      expect(source).toMatch(/commit\\|merge\\|rebase\\|cherry-pick\\|pull/);\n    });\n  }\n});\n\n// ─── Integration: PostToolUse staleness detection ───────────────────\n\ndescribe('PostToolUse staleness detection (integration)', () => {\n  for (const [label, hookPath] of [['CJS', CJS_HOOK], ['Plugin', PLUGIN_HOOK]] as const) {\n    it(`${label}: emits stale notification when HEAD differs from meta`, () => {\n      // Write meta.json with a different commit\n      fs.writeFileSync(\n        path.join(gitNexusDir, 'meta.json'),\n        JSON.stringify({ lastCommit: 
'aaaaaaa0000000000000000000000000deadbeef', stats: {} }),\n      );\n\n      const result = runHook(hookPath, {\n        hook_event_name: 'PostToolUse',\n        tool_name: 'Bash',\n        tool_input: { command: 'git commit -m \"test\"' },\n        tool_output: { exit_code: 0 },\n        cwd: tmpDir,\n      });\n\n      const output = parseHookOutput(result.stdout);\n      expect(output).not.toBeNull();\n      expect(output!.hookEventName).toBe('PostToolUse');\n      expect(output!.additionalContext).toContain('stale');\n      expect(output!.additionalContext).toContain('aaaaaaa');\n    });\n\n    it(`${label}: silent when HEAD matches meta lastCommit`, () => {\n      const head = getHeadCommit();\n      fs.writeFileSync(\n        path.join(gitNexusDir, 'meta.json'),\n        JSON.stringify({ lastCommit: head, stats: {} }),\n      );\n\n      const result = runHook(hookPath, {\n        hook_event_name: 'PostToolUse',\n        tool_name: 'Bash',\n        tool_input: { command: 'git commit -m \"test\"' },\n        tool_output: { exit_code: 0 },\n        cwd: tmpDir,\n      });\n\n      expect(result.stdout.trim()).toBe('');\n    });\n\n    it(`${label}: silent when tool is not Bash`, () => {\n      const result = runHook(hookPath, {\n        hook_event_name: 'PostToolUse',\n        tool_name: 'Grep',\n        tool_input: { command: 'git commit -m \"test\"' },\n        cwd: tmpDir,\n      });\n      expect(result.stdout.trim()).toBe('');\n    });\n\n    it(`${label}: silent when command is not a git mutation`, () => {\n      const result = runHook(hookPath, {\n        hook_event_name: 'PostToolUse',\n        tool_name: 'Bash',\n        tool_input: { command: 'git status' },\n        tool_output: { exit_code: 0 },\n        cwd: tmpDir,\n      });\n      expect(result.stdout.trim()).toBe('');\n    });\n\n    it(`${label}: silent when exit code is non-zero`, () => {\n      const result = runHook(hookPath, {\n        hook_event_name: 'PostToolUse',\n        tool_name: 
'Bash',\n        tool_input: { command: 'git commit -m \"fail\"' },\n        tool_output: { exit_code: 1 },\n        cwd: tmpDir,\n      });\n      expect(result.stdout.trim()).toBe('');\n    });\n\n    it(`${label}: includes --embeddings in suggestion when meta had embeddings`, () => {\n      fs.writeFileSync(\n        path.join(gitNexusDir, 'meta.json'),\n        JSON.stringify({ lastCommit: 'deadbeef', stats: { embeddings: 42 } }),\n      );\n\n      const result = runHook(hookPath, {\n        hook_event_name: 'PostToolUse',\n        tool_name: 'Bash',\n        tool_input: { command: 'git merge feature' },\n        tool_output: { exit_code: 0 },\n        cwd: tmpDir,\n      });\n\n      const output = parseHookOutput(result.stdout);\n      expect(output).not.toBeNull();\n      expect(output!.additionalContext).toContain('--embeddings');\n    });\n\n    it(`${label}: omits --embeddings when meta had no embeddings`, () => {\n      fs.writeFileSync(\n        path.join(gitNexusDir, 'meta.json'),\n        JSON.stringify({ lastCommit: 'deadbeef', stats: { embeddings: 0 } }),\n      );\n\n      const result = runHook(hookPath, {\n        hook_event_name: 'PostToolUse',\n        tool_name: 'Bash',\n        tool_input: { command: 'git commit -m \"test\"' },\n        tool_output: { exit_code: 0 },\n        cwd: tmpDir,\n      });\n\n      const output = parseHookOutput(result.stdout);\n      expect(output).not.toBeNull();\n      expect(output!.additionalContext).not.toContain('--embeddings');\n    });\n\n    it(`${label}: detects git rebase as a mutation`, () => {\n      fs.writeFileSync(\n        path.join(gitNexusDir, 'meta.json'),\n        JSON.stringify({ lastCommit: 'oldcommit', stats: {} }),\n      );\n\n      const result = runHook(hookPath, {\n        hook_event_name: 'PostToolUse',\n        tool_name: 'Bash',\n        tool_input: { command: 'git rebase main' },\n        tool_output: { exit_code: 0 },\n        cwd: tmpDir,\n      });\n\n      const output = 
parseHookOutput(result.stdout);\n      expect(output).not.toBeNull();\n      expect(output!.additionalContext).toContain('stale');\n    });\n\n    it(`${label}: detects git cherry-pick as a mutation`, () => {\n      fs.writeFileSync(\n        path.join(gitNexusDir, 'meta.json'),\n        JSON.stringify({ lastCommit: 'oldcommit', stats: {} }),\n      );\n\n      const result = runHook(hookPath, {\n        hook_event_name: 'PostToolUse',\n        tool_name: 'Bash',\n        tool_input: { command: 'git cherry-pick abc123' },\n        tool_output: { exit_code: 0 },\n        cwd: tmpDir,\n      });\n\n      const output = parseHookOutput(result.stdout);\n      expect(output).not.toBeNull();\n    });\n\n    it(`${label}: detects git pull as a mutation`, () => {\n      fs.writeFileSync(\n        path.join(gitNexusDir, 'meta.json'),\n        JSON.stringify({ lastCommit: 'oldcommit', stats: {} }),\n      );\n\n      const result = runHook(hookPath, {\n        hook_event_name: 'PostToolUse',\n        tool_name: 'Bash',\n        tool_input: { command: 'git pull origin main' },\n        tool_output: { exit_code: 0 },\n        cwd: tmpDir,\n      });\n\n      const output = parseHookOutput(result.stdout);\n      expect(output).not.toBeNull();\n    });\n  }\n});\n\n// ─── Integration: cwd validation rejects relative paths ─────────────\n\ndescribe('cwd validation (integration)', () => {\n  for (const [label, hookPath] of [['CJS', CJS_HOOK], ['Plugin', PLUGIN_HOOK]] as const) {\n    it(`${label}: PostToolUse silent when cwd is relative`, () => {\n      const result = runHook(hookPath, {\n        hook_event_name: 'PostToolUse',\n        tool_name: 'Bash',\n        tool_input: { command: 'git commit -m \"test\"' },\n        tool_output: { exit_code: 0 },\n        cwd: 'relative/path',\n      });\n      expect(result.stdout.trim()).toBe('');\n    });\n\n    it(`${label}: PreToolUse silent when cwd is relative`, () => {\n      const result = runHook(hookPath, {\n        
hook_event_name: 'PreToolUse',\n        tool_name: 'Grep',\n        tool_input: { pattern: 'validateUser' },\n        cwd: 'relative/path',\n      });\n      expect(result.stdout.trim()).toBe('');\n    });\n  }\n});\n\n// ─── Integration: dispatch map routes correctly ─────────────────────\n\ndescribe('Dispatch map routing (integration)', () => {\n  for (const [label, hookPath] of [['CJS', CJS_HOOK], ['Plugin', PLUGIN_HOOK]] as const) {\n    it(`${label}: unknown hook_event_name produces no output`, () => {\n      const result = runHook(hookPath, {\n        hook_event_name: 'UnknownEvent',\n        tool_name: 'Bash',\n        tool_input: { command: 'echo hello' },\n        cwd: tmpDir,\n      });\n      expect(result.stdout.trim()).toBe('');\n      expect(result.status).toBe(0);\n    });\n\n    it(`${label}: empty hook_event_name produces no output`, () => {\n      const result = runHook(hookPath, {\n        hook_event_name: '',\n        tool_name: 'Bash',\n        cwd: tmpDir,\n      });\n      expect(result.stdout.trim()).toBe('');\n      expect(result.status).toBe(0);\n    });\n\n    it(`${label}: missing hook_event_name produces no output`, () => {\n      const result = runHook(hookPath, {\n        tool_name: 'Bash',\n        cwd: tmpDir,\n      });\n      expect(result.stdout.trim()).toBe('');\n      expect(result.status).toBe(0);\n    });\n\n    it(`${label}: invalid JSON input exits cleanly`, () => {\n      const result = spawnSync(process.execPath, [hookPath], {\n        input: 'not json at all',\n        encoding: 'utf-8',\n        timeout: 10000,\n        stdio: ['pipe', 'pipe', 'pipe'],\n      });\n      expect(result.status).toBe(0);\n      expect(result.stdout.trim()).toBe('');\n    });\n\n    it(`${label}: empty stdin exits cleanly`, () => {\n      const result = spawnSync(process.execPath, [hookPath], {\n        input: '',\n        encoding: 'utf-8',\n        timeout: 10000,\n        stdio: ['pipe', 'pipe', 'pipe'],\n      });\n      
expect(result.status).toBe(0);\n    });\n  }\n});\n\n// ─── Integration: PostToolUse with missing meta.json ────────────────\n\ndescribe('PostToolUse with missing/corrupt meta.json', () => {\n  for (const [label, hookPath] of [['CJS', CJS_HOOK], ['Plugin', PLUGIN_HOOK]] as const) {\n    it(`${label}: emits stale when meta.json does not exist`, () => {\n      const metaPath = path.join(gitNexusDir, 'meta.json');\n      const hadMeta = fs.existsSync(metaPath);\n      if (hadMeta) fs.unlinkSync(metaPath);\n\n      try {\n        const result = runHook(hookPath, {\n          hook_event_name: 'PostToolUse',\n          tool_name: 'Bash',\n          tool_input: { command: 'git commit -m \"test\"' },\n          tool_output: { exit_code: 0 },\n          cwd: tmpDir,\n        });\n\n        const output = parseHookOutput(result.stdout);\n        expect(output).not.toBeNull();\n        expect(output!.additionalContext).toContain('never');\n      } finally {\n        // Restore meta.json for subsequent tests\n        fs.writeFileSync(metaPath, JSON.stringify({ lastCommit: 'old', stats: {} }));\n      }\n    });\n\n    it(`${label}: emits stale when meta.json is corrupt`, () => {\n      const metaPath = path.join(gitNexusDir, 'meta.json');\n      fs.writeFileSync(metaPath, 'not valid json!!!');\n\n      const result = runHook(hookPath, {\n        hook_event_name: 'PostToolUse',\n        tool_name: 'Bash',\n        tool_input: { command: 'git commit -m \"test\"' },\n        tool_output: { exit_code: 0 },\n        cwd: tmpDir,\n      });\n\n      const output = parseHookOutput(result.stdout);\n      expect(output).not.toBeNull();\n      expect(output!.additionalContext).toContain('never');\n\n      // Restore\n      fs.writeFileSync(metaPath, JSON.stringify({ lastCommit: 'old', stats: {} }));\n    });\n  }\n});\n"
  },
  {
    "path": "gitnexus/test/unit/hybrid-search.test.ts",
    "content": "/**\n * P1 Unit Tests: Hybrid Search (mergeWithRRF)\n *\n * Tests: mergeWithRRF from hybrid-search.ts\n * - BM25-only merge\n * - Semantic-only merge\n * - Combined ranking\n * - Limit parameter\n * - Empty inputs\n */\nimport { describe, it, expect } from 'vitest';\nimport { mergeWithRRF } from '../../src/core/search/hybrid-search.js';\nimport type { BM25SearchResult } from '../../src/core/search/bm25-index.js';\nimport type { SemanticSearchResult } from '../../src/core/embeddings/types.js';\n\nlet bm25Rank = 0;\nfunction makeBM25(filePath: string, score: number): BM25SearchResult {\n  return { filePath, score, rank: ++bm25Rank };\n}\n\nfunction makeSemantic(filePath: string, distance: number): SemanticSearchResult {\n  return {\n    filePath,\n    distance,\n    nodeId: `node:${filePath}`,\n    name: filePath.split('/').pop()!.replace(/\\.\\w+$/, ''),\n    label: 'Function',\n    startLine: 1,\n    endLine: 10,\n  };\n}\n\ndescribe('mergeWithRRF', () => {\n  it('handles empty inputs', () => {\n    const result = mergeWithRRF([], []);\n    expect(result).toHaveLength(0);\n  });\n\n  it('handles BM25-only results', () => {\n    const bm25: BM25SearchResult[] = [\n      makeBM25('src/a.ts', 10),\n      makeBM25('src/b.ts', 5),\n    ];\n    const result = mergeWithRRF(bm25, []);\n    expect(result).toHaveLength(2);\n    expect(result[0].filePath).toBe('src/a.ts');\n    expect(result[0].sources).toEqual(['bm25']);\n    expect(result[0].rank).toBe(1);\n    expect(result[1].rank).toBe(2);\n  });\n\n  it('handles semantic-only results', () => {\n    const semantic: SemanticSearchResult[] = [\n      makeSemantic('src/a.ts', 0.1),\n      makeSemantic('src/b.ts', 0.2),\n    ];\n    const result = mergeWithRRF([], semantic);\n    expect(result).toHaveLength(2);\n    expect(result[0].filePath).toBe('src/a.ts');\n    expect(result[0].sources).toEqual(['semantic']);\n  });\n\n  it('combined: shared results get higher score', () => {\n    const bm25: 
BM25SearchResult[] = [\n      makeBM25('src/shared.ts', 10),\n      makeBM25('src/bm25-only.ts', 5),\n    ];\n    const semantic: SemanticSearchResult[] = [\n      makeSemantic('src/shared.ts', 0.1),\n      makeSemantic('src/semantic-only.ts', 0.2),\n    ];\n\n    const result = mergeWithRRF(bm25, semantic);\n    // Shared result should be ranked first (higher combined RRF score)\n    expect(result[0].filePath).toBe('src/shared.ts');\n    expect(result[0].sources).toContain('bm25');\n    expect(result[0].sources).toContain('semantic');\n    // Its score should be higher than any single-source result\n    expect(result[0].score).toBeGreaterThan(result[1].score);\n  });\n\n  it('respects limit parameter', () => {\n    const bm25: BM25SearchResult[] = Array.from({ length: 20 }, (_, i) =>\n      makeBM25(`src/${i}.ts`, 100 - i),\n    );\n    const result = mergeWithRRF(bm25, [], 5);\n    expect(result).toHaveLength(5);\n  });\n\n  it('default limit is 10', () => {\n    const bm25: BM25SearchResult[] = Array.from({ length: 20 }, (_, i) =>\n      makeBM25(`src/${i}.ts`, 100 - i),\n    );\n    const result = mergeWithRRF(bm25, []);\n    expect(result).toHaveLength(10);\n  });\n\n  it('assigns ranks starting from 1', () => {\n    const bm25: BM25SearchResult[] = [\n      makeBM25('src/a.ts', 10),\n      makeBM25('src/b.ts', 5),\n      makeBM25('src/c.ts', 1),\n    ];\n    const result = mergeWithRRF(bm25, []);\n    expect(result.map(r => r.rank)).toEqual([1, 2, 3]);\n  });\n\n  it('preserves semantic metadata on shared results', () => {\n    const bm25: BM25SearchResult[] = [makeBM25('src/a.ts', 10)];\n    const semantic: SemanticSearchResult[] = [makeSemantic('src/a.ts', 0.1)];\n\n    const result = mergeWithRRF(bm25, semantic);\n    expect(result[0].nodeId).toBe('node:src/a.ts');\n    expect(result[0].name).toBe('a');\n    expect(result[0].label).toBe('Function');\n  });\n\n  it('stores original scores for debugging', () => {\n    const bm25: BM25SearchResult[] = 
[makeBM25('src/a.ts', 15)];\n    const semantic: SemanticSearchResult[] = [makeSemantic('src/a.ts', 0.3)];\n\n    const result = mergeWithRRF(bm25, semantic);\n    expect(result[0].bm25Score).toBe(15);\n    expect(result[0].semanticScore).toBeCloseTo(0.7); // 1 - distance\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/ignore-service.test.ts",
    "content": "import { describe, it, expect, beforeAll, afterAll, vi } from 'vitest';\nimport fs from 'fs/promises';\nimport path from 'path';\nimport os from 'os';\nimport { shouldIgnorePath, isHardcodedIgnoredDirectory, loadIgnoreRules, createIgnoreFilter } from '../../src/config/ignore-service.js';\n\ndescribe('shouldIgnorePath', () => {\n  describe('version control directories', () => {\n    it.each(['.git', '.svn', '.hg', '.bzr'])('ignores %s directory', (dir) => {\n      expect(shouldIgnorePath(`${dir}/config`)).toBe(true);\n      expect(shouldIgnorePath(`project/${dir}/HEAD`)).toBe(true);\n    });\n  });\n\n  describe('IDE/editor directories', () => {\n    it.each(['.idea', '.vscode', '.vs'])('ignores %s directory', (dir) => {\n      expect(shouldIgnorePath(`${dir}/settings.json`)).toBe(true);\n    });\n  });\n\n  describe('dependency directories', () => {\n    it.each([\n      'node_modules', 'vendor', 'venv', '.venv', '__pycache__',\n      'site-packages', '.mypy_cache', '.pytest_cache',\n    ])('ignores %s directory', (dir) => {\n      expect(shouldIgnorePath(`project/${dir}/some-file.js`)).toBe(true);\n    });\n  });\n\n  describe('build output directories', () => {\n    it.each([\n      'dist', 'build', 'out', 'output', 'bin', 'obj', 'target',\n      '.next', '.nuxt', '.vercel', '.parcel-cache', '.turbo',\n    ])('ignores %s directory', (dir) => {\n      expect(shouldIgnorePath(`${dir}/bundle.js`)).toBe(true);\n    });\n  });\n\n  describe('test/coverage directories', () => {\n    it.each(['coverage', '__tests__', '__mocks__', '.nyc_output'])('ignores %s directory', (dir) => {\n      expect(shouldIgnorePath(`${dir}/results.json`)).toBe(true);\n    });\n  });\n\n  describe('ignored file extensions', () => {\n    it.each([\n      // Images\n      '.png', '.jpg', '.jpeg', '.gif', '.svg', '.ico', '.webp',\n      // Archives\n      '.zip', '.tar', '.gz', '.rar',\n      // Binary/Compiled\n      '.exe', '.dll', '.so', '.dylib', '.class', '.jar', '.pyc', 
'.wasm',\n      // Documents\n      '.pdf', '.doc', '.docx',\n      // Media\n      '.mp4', '.mp3', '.wav',\n      // Fonts\n      '.woff', '.woff2', '.ttf',\n      // Databases\n      '.db', '.sqlite',\n      // Source maps\n      '.map',\n      // Lock files\n      '.lock',\n      // Certificates\n      '.pem', '.key', '.crt',\n      // Data files\n      '.csv', '.parquet', '.pkl',\n    ])('ignores files with %s extension', (ext) => {\n      expect(shouldIgnorePath(`assets/file${ext}`)).toBe(true);\n    });\n  });\n\n  describe('ignored files by exact name', () => {\n    it.each([\n      'package-lock.json', 'yarn.lock', 'pnpm-lock.yaml',\n      'composer.lock', 'Cargo.lock', 'go.sum',\n      '.gitignore', '.gitattributes', '.npmrc', '.editorconfig',\n      '.prettierrc', '.eslintignore', '.dockerignore',\n      'LICENSE', 'LICENSE.md', 'CHANGELOG.md',\n      '.env', '.env.local', '.env.production',\n    ])('ignores %s', (fileName) => {\n      expect(shouldIgnorePath(fileName)).toBe(true);\n      expect(shouldIgnorePath(`project/${fileName}`)).toBe(true);\n    });\n  });\n\n  describe('compound extensions', () => {\n    it('ignores .min.js files', () => {\n      expect(shouldIgnorePath('dist/bundle.min.js')).toBe(true);\n    });\n\n    it('ignores .bundle.js files', () => {\n      expect(shouldIgnorePath('dist/app.bundle.js')).toBe(true);\n    });\n\n    it('ignores .chunk.js files', () => {\n      expect(shouldIgnorePath('dist/vendor.chunk.js')).toBe(true);\n    });\n\n    it('ignores .min.css files', () => {\n      expect(shouldIgnorePath('dist/styles.min.css')).toBe(true);\n    });\n  });\n\n  describe('generated files', () => {\n    it('ignores .generated. 
files', () => {\n      expect(shouldIgnorePath('src/api.generated.ts')).toBe(true);\n    });\n\n    it('ignores TypeScript declaration files', () => {\n      expect(shouldIgnorePath('types/index.d.ts')).toBe(true);\n    });\n  });\n\n  describe('Windows path normalization', () => {\n    it('normalizes backslashes to forward slashes', () => {\n      expect(shouldIgnorePath('node_modules\\\\express\\\\index.js')).toBe(true);\n      expect(shouldIgnorePath('project\\\\.git\\\\HEAD')).toBe(true);\n    });\n  });\n\n  describe('files that should NOT be ignored', () => {\n    it.each([\n      'src/index.ts',\n      'src/components/Button.tsx',\n      'lib/utils.py',\n      'cmd/server/main.go',\n      'src/main.rs',\n      'app/Models/User.php',\n      'Sources/App.swift',\n      'src/App.java',\n      'src/main.c',\n      'src/main.cpp',\n      'src/Program.cs',\n    ])('does not ignore source file %s', (filePath) => {\n      expect(shouldIgnorePath(filePath)).toBe(false);\n    });\n  });\n});\n\ndescribe('isHardcodedIgnoredDirectory', () => {\n  it('returns true for known ignored directories', () => {\n    expect(isHardcodedIgnoredDirectory('node_modules')).toBe(true);\n    expect(isHardcodedIgnoredDirectory('.git')).toBe(true);\n    expect(isHardcodedIgnoredDirectory('dist')).toBe(true);\n    expect(isHardcodedIgnoredDirectory('__pycache__')).toBe(true);\n  });\n\n  it('returns false for source directories', () => {\n    expect(isHardcodedIgnoredDirectory('src')).toBe(false);\n    expect(isHardcodedIgnoredDirectory('lib')).toBe(false);\n    expect(isHardcodedIgnoredDirectory('app')).toBe(false);\n    expect(isHardcodedIgnoredDirectory('local')).toBe(false);\n  });\n});\n\ndescribe('loadIgnoreRules', () => {\n  let tmpDir: string;\n\n  beforeAll(async () => {\n    tmpDir = await fs.mkdtemp(path.join(os.tmpdir(), 'gn-ignore-test-'));\n  });\n\n  afterAll(async () => {\n    await fs.rm(tmpDir, { recursive: true, force: true });\n  });\n\n  it('returns null when no ignore 
files exist', async () => {\n    const result = await loadIgnoreRules(tmpDir);\n    expect(result).toBeNull();\n  });\n\n  it('parses .gitignore file', async () => {\n    await fs.writeFile(path.join(tmpDir, '.gitignore'), 'data/\\nlogs/\\n');\n    const ig = await loadIgnoreRules(tmpDir);\n    expect(ig).not.toBeNull();\n    expect(ig!.ignores('data/file.txt')).toBe(true);\n    expect(ig!.ignores('logs/app.log')).toBe(true);\n    expect(ig!.ignores('src/index.ts')).toBe(false);\n    await fs.unlink(path.join(tmpDir, '.gitignore'));\n  });\n\n  it('parses .gitnexusignore file', async () => {\n    await fs.writeFile(path.join(tmpDir, '.gitnexusignore'), 'vendor/\\n*.test.ts\\n');\n    const ig = await loadIgnoreRules(tmpDir);\n    expect(ig).not.toBeNull();\n    expect(ig!.ignores('vendor/lib.js')).toBe(true);\n    expect(ig!.ignores('src/app.test.ts')).toBe(true);\n    expect(ig!.ignores('src/app.ts')).toBe(false);\n    await fs.unlink(path.join(tmpDir, '.gitnexusignore'));\n  });\n\n  it('combines both files', async () => {\n    await fs.writeFile(path.join(tmpDir, '.gitignore'), 'data/\\n');\n    await fs.writeFile(path.join(tmpDir, '.gitnexusignore'), 'vendor/\\n');\n    const ig = await loadIgnoreRules(tmpDir);\n    expect(ig).not.toBeNull();\n    expect(ig!.ignores('data/file.txt')).toBe(true);\n    expect(ig!.ignores('vendor/lib.js')).toBe(true);\n    expect(ig!.ignores('src/index.ts')).toBe(false);\n    await fs.unlink(path.join(tmpDir, '.gitignore'));\n    await fs.unlink(path.join(tmpDir, '.gitnexusignore'));\n  });\n\n  it('handles comments and blank lines', async () => {\n    await fs.writeFile(path.join(tmpDir, '.gitignore'), '# comment\\n\\ndata/\\n\\n# another comment\\n');\n    const ig = await loadIgnoreRules(tmpDir);\n    expect(ig).not.toBeNull();\n    expect(ig!.ignores('data/file.txt')).toBe(true);\n    expect(ig!.ignores('src/index.ts')).toBe(false);\n    await fs.unlink(path.join(tmpDir, '.gitignore'));\n  
});\n});\n\ndescribe('createIgnoreFilter', () => {\n  let tmpDir: string;\n\n  beforeAll(async () => {\n    tmpDir = await fs.mkdtemp(path.join(os.tmpdir(), 'gn-filter-test-'));\n  });\n\n  afterAll(async () => {\n    await fs.rm(tmpDir, { recursive: true, force: true });\n  });\n\n  it('creates a filter with ignored and childrenIgnored methods', async () => {\n    const filter = await createIgnoreFilter(tmpDir);\n    expect(typeof filter.ignored).toBe('function');\n    expect(typeof filter.childrenIgnored).toBe('function');\n  });\n\n  it('childrenIgnored returns true for hardcoded directories', async () => {\n    const filter = await createIgnoreFilter(tmpDir);\n    // Simulate a Path-like object\n    const mockPath = { name: 'node_modules', relative: () => 'node_modules' } as any;\n    expect(filter.childrenIgnored(mockPath)).toBe(true);\n\n    const srcPath = { name: 'src', relative: () => 'src' } as any;\n    expect(filter.childrenIgnored(srcPath)).toBe(false);\n  });\n\n  it('childrenIgnored returns true for gitignored directories', async () => {\n    await fs.writeFile(path.join(tmpDir, '.gitignore'), 'local/\\n');\n    const filter = await createIgnoreFilter(tmpDir);\n\n    const localPath = { name: 'local', relative: () => 'local' } as any;\n    expect(filter.childrenIgnored(localPath)).toBe(true);\n\n    const srcPath = { name: 'src', relative: () => 'src' } as any;\n    expect(filter.childrenIgnored(srcPath)).toBe(false);\n\n    await fs.unlink(path.join(tmpDir, '.gitignore'));\n  });\n\n  it('childrenIgnored returns true for bare-name directory patterns (no trailing slash)', async () => {\n    await fs.writeFile(path.join(tmpDir, '.gitignore'), 'local\\n');\n    const filter = await createIgnoreFilter(tmpDir);\n\n    const localPath = { name: 'local', relative: () => 'local' } as any;\n    expect(filter.childrenIgnored(localPath)).toBe(true);\n\n    const srcPath = { name: 'src', relative: () => 'src' } as any;\n    
expect(filter.childrenIgnored(srcPath)).toBe(false);\n\n    await fs.unlink(path.join(tmpDir, '.gitignore'));\n  });\n\n  it('ignored returns true for file-glob patterns like *.log', async () => {\n    await fs.writeFile(path.join(tmpDir, '.gitignore'), '*.log\\n');\n    const filter = await createIgnoreFilter(tmpDir);\n\n    const logPath = { name: 'app.log', relative: () => 'app.log' } as any;\n    expect(filter.ignored(logPath)).toBe(true);\n\n    const tsPath = { name: 'index.ts', relative: () => 'src/index.ts' } as any;\n    expect(filter.ignored(tsPath)).toBe(false);\n\n    await fs.unlink(path.join(tmpDir, '.gitignore'));\n  });\n});\n\ndescribe('loadIgnoreRules — error handling', () => {\n  let tmpDir: string;\n\n  beforeAll(async () => {\n    tmpDir = await fs.mkdtemp(path.join(os.tmpdir(), 'gn-err-test-'));\n  });\n\n  afterAll(async () => {\n    await fs.rm(tmpDir, { recursive: true, force: true });\n  });\n\n  it.skipIf(process.platform === 'win32')('warns on EACCES but does not throw', async () => {\n    const gitignorePath = path.join(tmpDir, '.gitignore');\n    await fs.writeFile(gitignorePath, 'data/\\n');\n    await fs.chmod(gitignorePath, 0o000);\n\n    const warnSpy = vi.spyOn(console, 'warn').mockImplementation(() => {});\n    const result = await loadIgnoreRules(tmpDir);\n    // Should still return (null or partial), not throw\n    expect(result).toBeNull();\n    expect(warnSpy).toHaveBeenCalledWith(expect.stringContaining('.gitignore'));\n\n    warnSpy.mockRestore();\n    await fs.chmod(gitignorePath, 0o644);\n    await fs.unlink(gitignorePath);\n  });\n});\n\ndescribe('loadIgnoreRules — GITNEXUS_NO_GITIGNORE env var', () => {\n  let tmpDir: string;\n\n  beforeAll(async () => {\n    tmpDir = await fs.mkdtemp(path.join(os.tmpdir(), 'gn-noignore-test-'));\n  });\n\n  afterAll(async () => {\n    await fs.rm(tmpDir, { recursive: true, force: true });\n  });\n\n  it('skips .gitignore when GITNEXUS_NO_GITIGNORE is set', async () => {\n    await 
fs.writeFile(path.join(tmpDir, '.gitignore'), 'data/\\n');\n\n    const original = process.env.GITNEXUS_NO_GITIGNORE;\n    process.env.GITNEXUS_NO_GITIGNORE = '1';\n    try {\n      const ig = await loadIgnoreRules(tmpDir);\n      // .gitignore should be skipped — no rules loaded\n      expect(ig).toBeNull();\n    } finally {\n      if (original === undefined) {\n        delete process.env.GITNEXUS_NO_GITIGNORE;\n      } else {\n        process.env.GITNEXUS_NO_GITIGNORE = original;\n      }\n      await fs.unlink(path.join(tmpDir, '.gitignore'));\n    }\n  });\n\n  it('still reads .gitnexusignore when GITNEXUS_NO_GITIGNORE is set', async () => {\n    await fs.writeFile(path.join(tmpDir, '.gitnexusignore'), 'vendor/\\n');\n\n    const original = process.env.GITNEXUS_NO_GITIGNORE;\n    process.env.GITNEXUS_NO_GITIGNORE = '1';\n    try {\n      const ig = await loadIgnoreRules(tmpDir);\n      expect(ig).not.toBeNull();\n      expect(ig!.ignores('vendor/lib.js')).toBe(true);\n    } finally {\n      if (original === undefined) {\n        delete process.env.GITNEXUS_NO_GITIGNORE;\n      } else {\n        process.env.GITNEXUS_NO_GITIGNORE = original;\n      }\n      await fs.unlink(path.join(tmpDir, '.gitnexusignore'));\n    }\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/import-processor.test.ts",
    "content": "import { describe, it, expect, beforeEach } from 'vitest';\nimport { buildImportResolutionContext, type ImportResolutionContext } from '../../src/core/ingestion/import-processor.js';\nimport { createResolutionContext } from '../../src/core/ingestion/resolution-context.js';\n\ndescribe('ResolutionContext.importMap', () => {\n  it('creates an empty Map', () => {\n    const map = createResolutionContext().importMap;\n    expect(map).toBeInstanceOf(Map);\n    expect(map.size).toBe(0);\n  });\n\n  it('can be used to store import relationships', () => {\n    const map = createResolutionContext().importMap;\n    map.set('src/index.ts', new Set(['src/utils.ts', 'src/types.ts']));\n    expect(map.get('src/index.ts')!.size).toBe(2);\n    expect(map.get('src/index.ts')!.has('src/utils.ts')).toBe(true);\n  });\n});\n\ndescribe('buildImportResolutionContext', () => {\n  let ctx: ImportResolutionContext;\n  const testPaths = [\n    'src/index.ts',\n    'src/utils.ts',\n    'src/components/Button.tsx',\n    'src/lib/helpers.ts',\n  ];\n\n  beforeEach(() => {\n    ctx = buildImportResolutionContext(testPaths);\n  });\n\n  it('creates a Set of all file paths', () => {\n    expect(ctx.allFilePaths).toBeInstanceOf(Set);\n    expect(ctx.allFilePaths.size).toBe(4);\n    expect(ctx.allFilePaths.has('src/index.ts')).toBe(true);\n  });\n\n  it('stores the original file list', () => {\n    expect(ctx.allFileList).toBe(testPaths);\n  });\n\n  it('creates normalized file list with forward slashes', () => {\n    const winPaths = ['src\\\\index.ts', 'src\\\\utils.ts'];\n    const winCtx = buildImportResolutionContext(winPaths);\n    expect(winCtx.normalizedFileList[0]).toBe('src/index.ts');\n    expect(winCtx.normalizedFileList[1]).toBe('src/utils.ts');\n  });\n\n  it('creates a suffix index for O(1) lookups', () => {\n    expect(ctx.suffixIndex).toBeDefined();\n    expect(typeof ctx.suffixIndex.get).toBe('function');\n  });\n\n  it('initializes empty resolve cache', () => {\n  
  expect(ctx.resolveCache).toBeInstanceOf(Map);\n    expect(ctx.resolveCache.size).toBe(0);\n  });\n\n  it('handles empty paths array', () => {\n    const emptyCtx = buildImportResolutionContext([]);\n    expect(emptyCtx.allFilePaths.size).toBe(0);\n    expect(emptyCtx.allFileList).toHaveLength(0);\n  });\n\n  describe('suffix index', () => {\n    it('resolves file by suffix', () => {\n      const result = ctx.suffixIndex.get('utils.ts');\n      expect(result).toBeDefined();\n    });\n\n    it('resolves file by full path', () => {\n      const result = ctx.suffixIndex.get('src/index.ts');\n      expect(result).toBeDefined();\n    });\n\n    it('resolves nested component path', () => {\n      const result = ctx.suffixIndex.get('components/Button.tsx');\n      expect(result).toBeDefined();\n    });\n\n    it('returns undefined for non-existent suffix', () => {\n      const result = ctx.suffixIndex.get('nonexistent.ts');\n      expect(result).toBeUndefined();\n    });\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/ingestion-utils.test.ts",
    "content": "import { describe, it, expect } from 'vitest';\nimport { getLanguageFromFilename, isBuiltInOrNoise, extractFunctionName } from '../../src/core/ingestion/utils.js';\nimport { getTreeSitterBufferSize, TREE_SITTER_BUFFER_SIZE, TREE_SITTER_MAX_BUFFER } from '../../src/core/ingestion/constants.js';\nimport { SupportedLanguages } from '../../src/config/supported-languages.js';\nimport Parser from 'tree-sitter';\nimport C from 'tree-sitter-c';\nimport CPP from 'tree-sitter-cpp';\nimport Python from 'tree-sitter-python';\nimport TypeScript from 'tree-sitter-typescript';\n\ndescribe('getLanguageFromFilename', () => {\n  describe('TypeScript', () => {\n    it('detects .ts files', () => {\n      expect(getLanguageFromFilename('index.ts')).toBe(SupportedLanguages.TypeScript);\n    });\n\n    it('detects .tsx files', () => {\n      expect(getLanguageFromFilename('Component.tsx')).toBe(SupportedLanguages.TypeScript);\n    });\n\n    it('detects .ts files in paths', () => {\n      expect(getLanguageFromFilename('src/core/utils.ts')).toBe(SupportedLanguages.TypeScript);\n    });\n  });\n\n  describe('JavaScript', () => {\n    it('detects .js files', () => {\n      expect(getLanguageFromFilename('index.js')).toBe(SupportedLanguages.JavaScript);\n    });\n\n    it('detects .jsx files', () => {\n      expect(getLanguageFromFilename('App.jsx')).toBe(SupportedLanguages.JavaScript);\n    });\n  });\n\n  describe('Python', () => {\n    it('detects .py files', () => {\n      expect(getLanguageFromFilename('main.py')).toBe(SupportedLanguages.Python);\n    });\n  });\n\n  describe('Java', () => {\n    it('detects .java files', () => {\n      expect(getLanguageFromFilename('Main.java')).toBe(SupportedLanguages.Java);\n    });\n  });\n\n  describe('C', () => {\n    it('detects .c files', () => {\n      expect(getLanguageFromFilename('main.c')).toBe(SupportedLanguages.C);\n    });\n  });\n\n  describe('C++', () => {\n    it.each(['.cpp', '.cc', '.cxx', '.h', '.hpp', '.hxx', 
'.hh'])(\n      'detects %s files',\n      (ext) => {\n        expect(getLanguageFromFilename(`file${ext}`)).toBe(SupportedLanguages.CPlusPlus);\n      }\n    );\n  });\n\n  describe('C#', () => {\n    it('detects .cs files', () => {\n      expect(getLanguageFromFilename('Program.cs')).toBe(SupportedLanguages.CSharp);\n    });\n  });\n\n  describe('Go', () => {\n    it('detects .go files', () => {\n      expect(getLanguageFromFilename('main.go')).toBe(SupportedLanguages.Go);\n    });\n  });\n\n  describe('Rust', () => {\n    it('detects .rs files', () => {\n      expect(getLanguageFromFilename('main.rs')).toBe(SupportedLanguages.Rust);\n    });\n  });\n\n  describe('PHP', () => {\n    it.each(['.php', '.phtml', '.php3', '.php4', '.php5', '.php8'])(\n      'detects %s files',\n      (ext) => {\n        expect(getLanguageFromFilename(`file${ext}`)).toBe(SupportedLanguages.PHP);\n      }\n    );\n  });\n\n  describe('Swift', () => {\n    it('detects .swift files', () => {\n      expect(getLanguageFromFilename('App.swift')).toBe(SupportedLanguages.Swift);\n    });\n  });\n\n  describe('Ruby', () => {\n    it.each(['.rb', '.rake', '.gemspec'])(\n      'detects %s files',\n      (ext) => {\n        expect(getLanguageFromFilename(`file${ext}`)).toBe(SupportedLanguages.Ruby);\n      }\n    );\n\n    it('detects extensionless Rakefile', () => {\n      expect(getLanguageFromFilename('Rakefile')).toBe(SupportedLanguages.Ruby);\n    });\n\n    it('detects extensionless Gemfile', () => {\n      expect(getLanguageFromFilename('Gemfile')).toBe(SupportedLanguages.Ruby);\n    });\n  });\n\n  describe('Kotlin', () => {\n    it.each(['.kt', '.kts'])(\n      'detects %s files',\n      (ext) => {\n        expect(getLanguageFromFilename(`file${ext}`)).toBe(SupportedLanguages.Kotlin);\n      }\n    );\n  });\n\n  describe('unsupported', () => {\n    it.each(['.scala', '.r', '.lua', '.zig', '.txt', '.md', '.json', '.yaml'])(\n      'returns null for %s files',\n      (ext) => {\n        
expect(getLanguageFromFilename(`file${ext}`)).toBeNull();\n      }\n    );\n\n    it('returns null for files without extension', () => {\n      expect(getLanguageFromFilename('Makefile')).toBeNull();\n    });\n\n    it('returns null for empty string', () => {\n      expect(getLanguageFromFilename('')).toBeNull();\n    });\n  });\n});\n\ndescribe('isBuiltInOrNoise', () => {\n  describe('JavaScript/TypeScript', () => {\n    it('filters console methods', () => {\n      expect(isBuiltInOrNoise('console')).toBe(true);\n      expect(isBuiltInOrNoise('log')).toBe(true);\n      expect(isBuiltInOrNoise('warn')).toBe(true);\n    });\n\n    it('filters React hooks', () => {\n      expect(isBuiltInOrNoise('useState')).toBe(true);\n      expect(isBuiltInOrNoise('useEffect')).toBe(true);\n      expect(isBuiltInOrNoise('useCallback')).toBe(true);\n    });\n\n    it('filters array methods', () => {\n      expect(isBuiltInOrNoise('map')).toBe(true);\n      expect(isBuiltInOrNoise('filter')).toBe(true);\n      expect(isBuiltInOrNoise('reduce')).toBe(true);\n    });\n  });\n\n  describe('Python', () => {\n    it('filters built-in functions', () => {\n      expect(isBuiltInOrNoise('print')).toBe(true);\n      expect(isBuiltInOrNoise('len')).toBe(true);\n      expect(isBuiltInOrNoise('range')).toBe(true);\n    });\n  });\n\n  describe('PHP', () => {\n    it('filters PHP built-in functions', () => {\n      expect(isBuiltInOrNoise('echo')).toBe(true);\n      expect(isBuiltInOrNoise('isset')).toBe(true);\n      expect(isBuiltInOrNoise('date')).toBe(true);\n      expect(isBuiltInOrNoise('json_encode')).toBe(true);\n      expect(isBuiltInOrNoise('array_map')).toBe(true);\n    });\n\n    it('filters PHP string functions', () => {\n      expect(isBuiltInOrNoise('strlen')).toBe(true);\n      expect(isBuiltInOrNoise('substr')).toBe(true);\n      expect(isBuiltInOrNoise('str_replace')).toBe(true);\n    });\n  });\n\n  describe('C/C++', () => {\n    it('filters standard library functions', () => 
{\n      expect(isBuiltInOrNoise('printf')).toBe(true);\n      expect(isBuiltInOrNoise('malloc')).toBe(true);\n      expect(isBuiltInOrNoise('free')).toBe(true);\n    });\n\n    it('filters Linux kernel macros', () => {\n      expect(isBuiltInOrNoise('container_of')).toBe(true);\n      expect(isBuiltInOrNoise('ARRAY_SIZE')).toBe(true);\n      expect(isBuiltInOrNoise('pr_info')).toBe(true);\n    });\n  });\n\n  describe('Kotlin', () => {\n    it('filters stdlib functions', () => {\n      expect(isBuiltInOrNoise('println')).toBe(true);\n      expect(isBuiltInOrNoise('listOf')).toBe(true);\n      expect(isBuiltInOrNoise('TODO')).toBe(true);\n    });\n\n    it('filters coroutine functions', () => {\n      expect(isBuiltInOrNoise('launch')).toBe(true);\n      expect(isBuiltInOrNoise('async')).toBe(true);\n    });\n  });\n\n  describe('Swift', () => {\n    it('filters built-in functions', () => {\n      expect(isBuiltInOrNoise('print')).toBe(true);\n      expect(isBuiltInOrNoise('fatalError')).toBe(true);\n    });\n\n    it('filters UIKit methods', () => {\n      expect(isBuiltInOrNoise('addSubview')).toBe(true);\n      expect(isBuiltInOrNoise('reloadData')).toBe(true);\n    });\n  });\n\n  describe('Rust', () => {\n    it('filters Result/Option methods', () => {\n      expect(isBuiltInOrNoise('unwrap')).toBe(true);\n      expect(isBuiltInOrNoise('expect')).toBe(true);\n      expect(isBuiltInOrNoise('unwrap_or')).toBe(true);\n      expect(isBuiltInOrNoise('unwrap_or_else')).toBe(true);\n      expect(isBuiltInOrNoise('unwrap_or_default')).toBe(true);\n      expect(isBuiltInOrNoise('ok')).toBe(true);\n      expect(isBuiltInOrNoise('err')).toBe(true);\n      expect(isBuiltInOrNoise('is_ok')).toBe(true);\n      expect(isBuiltInOrNoise('is_err')).toBe(true);\n      expect(isBuiltInOrNoise('map_err')).toBe(true);\n      expect(isBuiltInOrNoise('and_then')).toBe(true);\n      expect(isBuiltInOrNoise('or_else')).toBe(true);\n    });\n\n    it('filters trait conversion methods', 
() => {\n      expect(isBuiltInOrNoise('clone')).toBe(true);\n      expect(isBuiltInOrNoise('to_string')).toBe(true);\n      expect(isBuiltInOrNoise('to_owned')).toBe(true);\n      expect(isBuiltInOrNoise('into')).toBe(true);\n      expect(isBuiltInOrNoise('from')).toBe(true);\n      expect(isBuiltInOrNoise('as_ref')).toBe(true);\n      expect(isBuiltInOrNoise('as_mut')).toBe(true);\n    });\n\n    it('filters iterator methods', () => {\n      expect(isBuiltInOrNoise('iter')).toBe(true);\n      expect(isBuiltInOrNoise('into_iter')).toBe(true);\n      expect(isBuiltInOrNoise('collect')).toBe(true);\n      expect(isBuiltInOrNoise('fold')).toBe(true);\n      expect(isBuiltInOrNoise('for_each')).toBe(true);\n    });\n\n    it('filters collection methods', () => {\n      expect(isBuiltInOrNoise('len')).toBe(true);\n      expect(isBuiltInOrNoise('is_empty')).toBe(true);\n      expect(isBuiltInOrNoise('push')).toBe(true);\n      expect(isBuiltInOrNoise('pop')).toBe(true);\n      expect(isBuiltInOrNoise('insert')).toBe(true);\n      expect(isBuiltInOrNoise('remove')).toBe(true);\n      expect(isBuiltInOrNoise('contains')).toBe(true);\n    });\n\n    it('filters macro-like and panic functions', () => {\n      expect(isBuiltInOrNoise('format')).toBe(true);\n      expect(isBuiltInOrNoise('panic')).toBe(true);\n      expect(isBuiltInOrNoise('unreachable')).toBe(true);\n      expect(isBuiltInOrNoise('todo')).toBe(true);\n      expect(isBuiltInOrNoise('unimplemented')).toBe(true);\n      expect(isBuiltInOrNoise('vec')).toBe(true);\n      expect(isBuiltInOrNoise('println')).toBe(true);\n      expect(isBuiltInOrNoise('eprintln')).toBe(true);\n      expect(isBuiltInOrNoise('dbg')).toBe(true);\n    });\n\n    it('filters sync primitives', () => {\n      expect(isBuiltInOrNoise('lock')).toBe(true);\n      expect(isBuiltInOrNoise('try_lock')).toBe(true);\n      expect(isBuiltInOrNoise('spawn')).toBe(true);\n      expect(isBuiltInOrNoise('join')).toBe(true);\n      
expect(isBuiltInOrNoise('sleep')).toBe(true);\n    });\n\n    it('filters enum constructors', () => {\n      expect(isBuiltInOrNoise('Some')).toBe(true);\n      expect(isBuiltInOrNoise('None')).toBe(true);\n      expect(isBuiltInOrNoise('Ok')).toBe(true);\n      expect(isBuiltInOrNoise('Err')).toBe(true);\n    });\n\n    it('does not filter user-defined Rust functions', () => {\n      expect(isBuiltInOrNoise('process_request')).toBe(false);\n      expect(isBuiltInOrNoise('handle_connection')).toBe(false);\n      expect(isBuiltInOrNoise('build_response')).toBe(false);\n    });\n  });\n\n  describe('C#/.NET', () => {\n    it('filters Console I/O', () => {\n      expect(isBuiltInOrNoise('Console')).toBe(true);\n      expect(isBuiltInOrNoise('WriteLine')).toBe(true);\n      expect(isBuiltInOrNoise('ReadLine')).toBe(true);\n    });\n\n    it('filters LINQ methods', () => {\n      expect(isBuiltInOrNoise('Where')).toBe(true);\n      expect(isBuiltInOrNoise('Select')).toBe(true);\n      expect(isBuiltInOrNoise('GroupBy')).toBe(true);\n      expect(isBuiltInOrNoise('OrderBy')).toBe(true);\n      expect(isBuiltInOrNoise('FirstOrDefault')).toBe(true);\n      expect(isBuiltInOrNoise('ToList')).toBe(true);\n    });\n\n    it('filters Task async methods', () => {\n      expect(isBuiltInOrNoise('Task')).toBe(true);\n      expect(isBuiltInOrNoise('Run')).toBe(true);\n      expect(isBuiltInOrNoise('WhenAll')).toBe(true);\n      expect(isBuiltInOrNoise('ConfigureAwait')).toBe(true);\n    });\n\n    it('filters Object base methods', () => {\n      expect(isBuiltInOrNoise('ToString')).toBe(true);\n      expect(isBuiltInOrNoise('GetType')).toBe(true);\n      expect(isBuiltInOrNoise('Equals')).toBe(true);\n      expect(isBuiltInOrNoise('GetHashCode')).toBe(true);\n    });\n  });\n\n  describe('user-defined functions', () => {\n    it('does not filter custom function names', () => {\n      expect(isBuiltInOrNoise('myCustomFunction')).toBe(false);\n      
expect(isBuiltInOrNoise('processData')).toBe(false);\n      expect(isBuiltInOrNoise('handleUserRequest')).toBe(false);\n    });\n  });\n});\n\ndescribe('extractFunctionName', () => {\n  const parser = new Parser();\n\n  describe('C', () => {\n    it('extracts function name from C function definition', () => {\n      parser.setLanguage(C);\n      const code = `int main() { return 0; }`;\n      const tree = parser.parse(code);\n      const funcNode = tree.rootNode.child(0);\n\n      const result = extractFunctionName(funcNode);\n\n      expect(result.funcName).toBe('main');\n      expect(result.label).toBe('Function');\n    });\n\n    it('extracts function name with parameters', () => {\n      parser.setLanguage(C);\n      const code = `void helper(int a, char* b) {}`;\n      const tree = parser.parse(code);\n      const funcNode = tree.rootNode.child(0);\n\n      const result = extractFunctionName(funcNode);\n\n      expect(result.funcName).toBe('helper');\n      expect(result.label).toBe('Function');\n    });\n  });\n\n  describe('C++', () => {\n    it('extracts method name from C++ class method definition', () => {\n      parser.setLanguage(CPP);\n      const code = `int MyClass::OnEncryptData() { return 0; }`;\n      const tree = parser.parse(code);\n      const funcNode = tree.rootNode.child(0);\n\n      const result = extractFunctionName(funcNode);\n\n      expect(result.funcName).toBe('OnEncryptData');\n      expect(result.label).toBe('Method');\n    });\n\n    it('extracts method name with namespace', () => {\n      parser.setLanguage(CPP);\n      const code = `void HuksListener::OnDataOprEvent(int type, DataInfo& info) {}`;\n      const tree = parser.parse(code);\n      const funcNode = tree.rootNode.child(0);\n\n      const result = extractFunctionName(funcNode);\n\n      expect(result.funcName).toBe('OnDataOprEvent');\n      expect(result.label).toBe('Method');\n    });\n\n    it('extracts C function (not method)', () => {\n      parser.setLanguage(CPP);\n 
     const code = `void standalone_function() {}`;\n      const tree = parser.parse(code);\n      const funcNode = tree.rootNode.child(0);\n\n      const result = extractFunctionName(funcNode);\n\n      expect(result.funcName).toBe('standalone_function');\n      expect(result.label).toBe('Function');\n    });\n\n    it('extracts method with parenthesized declarator', () => {\n      parser.setLanguage(CPP);\n      const code = `void (MyClass::handler)() {}`;\n      const tree = parser.parse(code);\n      const funcNode = tree.rootNode.child(0);\n\n      const result = extractFunctionName(funcNode);\n\n      expect(result.funcName).toBe('handler');\n      expect(result.label).toBe('Method');\n    });\n  });\n\n  describe('C pointer returns', () => {\n    it('extracts name from function returning pointer', () => {\n      parser.setLanguage(C);\n      const code = `int* get_data() { return 0; }`;\n      const tree = parser.parse(code);\n      const funcNode = tree.rootNode.child(0);\n\n      const result = extractFunctionName(funcNode);\n\n      expect(result.funcName).toBe('get_data');\n      expect(result.label).toBe('Function');\n    });\n\n    it('extracts name from function returning double pointer', () => {\n      parser.setLanguage(C);\n      const code = `char** get_strings() { return 0; }`;\n      const tree = parser.parse(code);\n      const funcNode = tree.rootNode.child(0);\n\n      const result = extractFunctionName(funcNode);\n\n      expect(result.funcName).toBe('get_strings');\n      expect(result.label).toBe('Function');\n    });\n\n    it('extracts name from struct pointer return', () => {\n      parser.setLanguage(C);\n      const code = `struct Node* create_node(int val) { return 0; }`;\n      const tree = parser.parse(code);\n      const funcNode = tree.rootNode.child(0);\n\n      const result = extractFunctionName(funcNode);\n\n      expect(result.funcName).toBe('create_node');\n      expect(result.label).toBe('Function');\n    });\n  });\n\n  
describe('C++ pointer/reference returns', () => {\n    it('extracts name from method returning pointer', () => {\n      parser.setLanguage(CPP);\n      const code = `int* MyClass::getData() { return nullptr; }`;\n      const tree = parser.parse(code);\n      const funcNode = tree.rootNode.child(0);\n\n      const result = extractFunctionName(funcNode);\n\n      expect(result.funcName).toBe('getData');\n      expect(result.label).toBe('Method');\n    });\n\n    it('extracts name from function returning reference', () => {\n      parser.setLanguage(CPP);\n      const code = `std::string& get_name() { static std::string s; return s; }`;\n      const tree = parser.parse(code);\n      const funcNode = tree.rootNode.child(0);\n\n      const result = extractFunctionName(funcNode);\n\n      expect(result.funcName).toBe('get_name');\n      expect(result.label).toBe('Function');\n    });\n\n    it('extracts name from method returning reference', () => {\n      parser.setLanguage(CPP);\n      const code = `int& Container::at(int i) { return data[i]; }`;\n      const tree = parser.parse(code);\n      const funcNode = tree.rootNode.child(0);\n\n      const result = extractFunctionName(funcNode);\n\n      expect(result.funcName).toBe('at');\n      expect(result.label).toBe('Method');\n    });\n\n    it('extracts name from method returning const reference', () => {\n      parser.setLanguage(CPP);\n      const code = `const std::string& Config::getName() const { return name_; }`;\n      const tree = parser.parse(code);\n      const funcNode = tree.rootNode.child(0);\n\n      const result = extractFunctionName(funcNode);\n\n      expect(result.funcName).toBe('getName');\n      expect(result.label).toBe('Method');\n    });\n  });\n\n  describe('C++ destructors', () => {\n    it('extracts destructor name from out-of-line definition', () => {\n      parser.setLanguage(CPP);\n      const code = `MyClass::~MyClass() { cleanup(); }`;\n      const tree = parser.parse(code);\n      const 
funcNode = tree.rootNode.child(0);\n\n      const result = extractFunctionName(funcNode);\n\n      // destructor_name includes the ~ prefix\n      expect(result.funcName).toBe('~MyClass');\n      expect(result.label).toBe('Method');\n    });\n  });\n\n  describe('TypeScript', () => {\n    it('extracts arrow function name from variable declarator', () => {\n      parser.setLanguage(TypeScript.typescript);\n      const code = `const myHandler = () => { return 1; }`;\n      const tree = parser.parse(code);\n      const program = tree.rootNode;\n      const varDecl = program.child(0);\n      const declarator = varDecl!.namedChild(0);\n      const arrowFunc = declarator!.namedChild(1);\n\n      const result = extractFunctionName(arrowFunc);\n\n      expect(result.funcName).toBe('myHandler');\n      expect(result.label).toBe('Function');\n    });\n\n    it('extracts function expression name from variable declarator', () => {\n      parser.setLanguage(TypeScript.typescript);\n      const code = `const processItem = function() { }`;\n      const tree = parser.parse(code);\n      const program = tree.rootNode;\n      const varDecl = program.child(0);\n      const declarator = varDecl!.namedChild(0);\n      const funcExpr = declarator!.namedChild(1);\n\n      const result = extractFunctionName(funcExpr);\n\n      expect(result.funcName).toBe('processItem');\n      expect(result.label).toBe('Function');\n    });\n  });\n\n  describe('Python', () => {\n    it('extracts function name from Python function definition', () => {\n      parser.setLanguage(Python);\n      const code = `def hello_world():\\n    pass`;\n      const tree = parser.parse(code);\n      const funcNode = tree.rootNode.child(0);\n\n      const result = extractFunctionName(funcNode);\n\n      expect(result.funcName).toBe('hello_world');\n      expect(result.label).toBe('Function');\n    });\n\n    it('extracts function name with parameters', () => {\n      parser.setLanguage(Python);\n      const code = `def 
calculate_sum(a, b):\\n    return a + b`;\n      const tree = parser.parse(code);\n      const funcNode = tree.rootNode.child(0);\n\n      const result = extractFunctionName(funcNode);\n\n      expect(result.funcName).toBe('calculate_sum');\n      expect(result.label).toBe('Function');\n    });\n\n    it('extracts async function name', () => {\n      parser.setLanguage(Python);\n      const code = `async def fetch_data():\\n    pass`;\n      const tree = parser.parse(code);\n      const funcNode = tree.rootNode.child(0);\n\n      const result = extractFunctionName(funcNode);\n\n      expect(result.funcName).toBe('fetch_data');\n      expect(result.label).toBe('Function');\n    });\n\n    it('extracts function name with type hints', () => {\n      parser.setLanguage(Python);\n      const code = `def process_data(items: list[int]) -> bool:\\n    return True`;\n      const tree = parser.parse(code);\n      const funcNode = tree.rootNode.child(0);\n\n      const result = extractFunctionName(funcNode);\n\n      expect(result.funcName).toBe('process_data');\n      expect(result.label).toBe('Function');\n    });\n\n    it('extracts nested function name', () => {\n      parser.setLanguage(Python);\n      const code = `def outer():\\n    def inner():\\n        pass`;\n      const tree = parser.parse(code);\n      const outerFunc = tree.rootNode.child(0);\n      const block = outerFunc!.child(4);\n      const innerFunc = block!.namedChild(0);\n\n      const result = extractFunctionName(innerFunc);\n\n      expect(result.funcName).toBe('inner');\n      expect(result.label).toBe('Function');\n    });\n  });\n});\n\ndescribe('getTreeSitterBufferSize', () => {\n  it('returns minimum 512KB for small files', () => {\n    expect(getTreeSitterBufferSize(100)).toBe(TREE_SITTER_BUFFER_SIZE);\n    expect(getTreeSitterBufferSize(0)).toBe(TREE_SITTER_BUFFER_SIZE);\n    expect(getTreeSitterBufferSize(1000)).toBe(TREE_SITTER_BUFFER_SIZE);\n  });\n\n  it('returns 2x content length when 
larger than minimum', () => {\n    const size = 400 * 1024; // 400 KB — 2x = 800 KB > 512 KB min\n    expect(getTreeSitterBufferSize(size)).toBe(size * 2);\n  });\n\n  it('caps at 32MB for very large files', () => {\n    const huge = 20 * 1024 * 1024; // 20 MB — 2x = 40 MB > 32 MB cap\n    expect(getTreeSitterBufferSize(huge)).toBe(32 * 1024 * 1024);\n  });\n\n  it('returns exactly 512KB at the boundary', () => {\n    // 256KB * 2 = 512KB = minimum, so should return minimum\n    expect(getTreeSitterBufferSize(256 * 1024)).toBe(TREE_SITTER_BUFFER_SIZE);\n  });\n\n  it('scales linearly between min and max', () => {\n    const small = getTreeSitterBufferSize(300 * 1024);\n    const medium = getTreeSitterBufferSize(1 * 1024 * 1024);\n    const large = getTreeSitterBufferSize(5 * 1024 * 1024);\n    expect(small).toBeLessThan(medium);\n    expect(medium).toBeLessThan(large);\n  });\n\n  it('TREE_SITTER_MAX_BUFFER is 32MB', () => {\n    expect(TREE_SITTER_MAX_BUFFER).toBe(32 * 1024 * 1024);\n  });\n\n  it('returns max buffer at exact boundary (16MB input)', () => {\n    // 16MB * 2 = 32MB = max\n    expect(getTreeSitterBufferSize(16 * 1024 * 1024)).toBe(TREE_SITTER_MAX_BUFFER);\n  });\n\n  it('file just over max returns max buffer', () => {\n    // 17MB * 2 = 34MB > 32MB cap\n    expect(getTreeSitterBufferSize(17 * 1024 * 1024)).toBe(TREE_SITTER_MAX_BUFFER);\n  });\n\n  it('handles files between old 512KB limit and new 32MB limit', () => {\n    // This is the range that was previously silently skipped\n    const sizes = [600 * 1024, 1024 * 1024, 5 * 1024 * 1024, 10 * 1024 * 1024];\n    for (const size of sizes) {\n      const bufSize = getTreeSitterBufferSize(size);\n      expect(bufSize).toBeGreaterThanOrEqual(TREE_SITTER_BUFFER_SIZE);\n      expect(bufSize).toBeLessThanOrEqual(TREE_SITTER_MAX_BUFFER);\n    }\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/language-skip.test.ts",
    "content": "import { describe, it, expect } from 'vitest';\nimport { isLanguageAvailable, loadLanguage } from '../../src/core/tree-sitter/parser-loader.js';\nimport { SupportedLanguages } from '../../src/config/supported-languages.js';\n\ndescribe('isLanguageAvailable', () => {\n  it('returns true for installed languages', () => {\n    expect(isLanguageAvailable(SupportedLanguages.TypeScript)).toBe(true);\n    expect(isLanguageAvailable(SupportedLanguages.JavaScript)).toBe(true);\n    expect(isLanguageAvailable(SupportedLanguages.Python)).toBe(true);\n    expect(isLanguageAvailable(SupportedLanguages.Java)).toBe(true);\n    expect(isLanguageAvailable(SupportedLanguages.Go)).toBe(true);\n    expect(isLanguageAvailable(SupportedLanguages.Rust)).toBe(true);\n    expect(isLanguageAvailable(SupportedLanguages.PHP)).toBe(true);\n    expect(isLanguageAvailable(SupportedLanguages.Ruby)).toBe(true);\n  });\n\n  it('returns false for fabricated language values', () => {\n    expect(isLanguageAvailable('erlang' as SupportedLanguages)).toBe(false);\n    expect(isLanguageAvailable('haskell' as SupportedLanguages)).toBe(false);\n  });\n\n  it('handles Swift based on optional dependency availability', () => {\n    // Swift is optional — result depends on whether tree-sitter-swift is installed\n    const result = isLanguageAvailable(SupportedLanguages.Swift);\n    expect(typeof result).toBe('boolean');\n    // Either way, it should not throw\n  });\n\n  it('handles Kotlin based on optional dependency availability', () => {\n    // Kotlin is now optional — result depends on whether tree-sitter-kotlin is installed\n    const result = isLanguageAvailable(SupportedLanguages.Kotlin);\n    expect(typeof result).toBe('boolean');\n    // Either way, it should not throw\n  });\n});\n\ndescribe('Kotlin optional dependency', () => {\n  it('handles Kotlin loading gracefully', async () => {\n    // Kotlin is optional — it either loads successfully or throws an error\n    try {\n      await 
loadLanguage(SupportedLanguages.Kotlin);\n      // If it succeeds, tree-sitter-kotlin is installed\n    } catch (e: any) {\n      // If it fails, it should be because tree-sitter-kotlin is not installed\n      expect(e.message).toContain('Unsupported language');\n    }\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/lazy-action.test.ts",
    "content": "import { describe, expect, it, vi } from 'vitest';\nimport { createLazyAction } from '../../src/cli/lazy-action.js';\n\ndescribe('createLazyAction', () => {\n  it('does not import target module until invoked', async () => {\n    const loader = vi.fn(async () => ({\n      run: vi.fn(async () => 'ok'),\n    }));\n\n    const action = createLazyAction(loader, 'run');\n\n    expect(loader).not.toHaveBeenCalled();\n    await expect(action('arg-1')).resolves.toBeUndefined();\n    expect(loader).toHaveBeenCalledTimes(1);\n  });\n\n  it('throws a clear error when export is not a function', async () => {\n    const action = createLazyAction(async () => ({ notAFunction: 'string-value' }), 'notAFunction');\n    await expect(action()).rejects.toThrow('notAFunction');\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/method-signature.test.ts",
    "content": "import { describe, it, expect } from 'vitest';\nimport { extractMethodSignature } from '../../src/core/ingestion/utils.js';\nimport Parser from 'tree-sitter';\nimport TypeScript from 'tree-sitter-typescript';\nimport Python from 'tree-sitter-python';\nimport Java from 'tree-sitter-java';\nimport CSharp from 'tree-sitter-c-sharp';\nimport Kotlin from 'tree-sitter-kotlin';\nimport CPP from 'tree-sitter-cpp';\nimport Go from 'tree-sitter-go';\nimport Rust from 'tree-sitter-rust';\n\ndescribe('extractMethodSignature', () => {\n  const parser = new Parser();\n\n  it('returns zero params and no return type for null node', () => {\n    const sig = extractMethodSignature(null);\n    expect(sig.parameterCount).toBe(0);\n    expect(sig.returnType).toBeUndefined();\n  });\n\n  describe('TypeScript', () => {\n    it('extracts params and return type from a typed method', () => {\n      parser.setLanguage(TypeScript.typescript);\n      const code = `class Foo {\n  greet(name: string, age: number): boolean { return true; }\n}`;\n      const tree = parser.parse(code);\n      const classNode = tree.rootNode.child(0)!;\n      const classBody = classNode.childForFieldName('body')!;\n      const methodNode = classBody.namedChild(0)!;\n\n      const sig = extractMethodSignature(methodNode);\n      expect(sig.parameterCount).toBe(2);\n      expect(sig.returnType).toBe('boolean');\n    });\n\n    it('extracts zero params from a method with no parameters', () => {\n      parser.setLanguage(TypeScript.typescript);\n      const code = `class Foo {\n  run(): void {}\n}`;\n      const tree = parser.parse(code);\n      const classNode = tree.rootNode.child(0)!;\n      const classBody = classNode.childForFieldName('body')!;\n      const methodNode = classBody.namedChild(0)!;\n\n      const sig = extractMethodSignature(methodNode);\n      expect(sig.parameterCount).toBe(0);\n      expect(sig.returnType).toBe('void');\n    });\n\n    it('extracts params without return type 
annotation', () => {\n      parser.setLanguage(TypeScript.typescript);\n      const code = `class Foo {\n  process(x: number) { return x + 1; }\n}`;\n      const tree = parser.parse(code);\n      const classNode = tree.rootNode.child(0)!;\n      const classBody = classNode.childForFieldName('body')!;\n      const methodNode = classBody.namedChild(0)!;\n\n      const sig = extractMethodSignature(methodNode);\n      expect(sig.parameterCount).toBe(1);\n      expect(sig.returnType).toBeUndefined();\n    });\n  });\n\n  describe('Python', () => {\n    it('skips self parameter', () => {\n      parser.setLanguage(Python);\n      const code = `class Foo:\n    def bar(self, x, y):\n        pass`;\n      const tree = parser.parse(code);\n      const classNode = tree.rootNode.child(0)!;\n      const classBody = classNode.childForFieldName('body')!;\n      const methodNode = classBody.namedChild(0)!;\n\n      const sig = extractMethodSignature(methodNode);\n      expect(sig.parameterCount).toBe(2);\n      expect(sig.returnType).toBeUndefined();\n    });\n\n    it('handles method with only self', () => {\n      parser.setLanguage(Python);\n      const code = `class Foo:\n    def noop(self):\n        pass`;\n      const tree = parser.parse(code);\n      const classNode = tree.rootNode.child(0)!;\n      const classBody = classNode.childForFieldName('body')!;\n      const methodNode = classBody.namedChild(0)!;\n\n      const sig = extractMethodSignature(methodNode);\n      expect(sig.parameterCount).toBe(0);\n    });\n\n    it('handles Python return type annotation', () => {\n      parser.setLanguage(Python);\n      const code = `class Foo:\n    def bar(self, x: int) -> bool:\n        return True`;\n      const tree = parser.parse(code);\n      const classNode = tree.rootNode.child(0)!;\n      const classBody = classNode.childForFieldName('body')!;\n      const methodNode = classBody.namedChild(0)!;\n\n      const sig = extractMethodSignature(methodNode);\n      
expect(sig.parameterCount).toBe(1);\n      // The important thing is parameterCount is correct; returnType may vary.\n    });\n  });\n\n  describe('Java', () => {\n    it('extracts params from a Java method', () => {\n      parser.setLanguage(Java);\n      const code = `class Foo {\n  public int add(int a, int b) { return a + b; }\n}`;\n      const tree = parser.parse(code);\n      const classNode = tree.rootNode.child(0)!;\n      const classBody = classNode.childForFieldName('body')!;\n      const methodNode = classBody.namedChild(0)!;\n\n      const sig = extractMethodSignature(methodNode);\n      expect(sig.parameterCount).toBe(2);\n    });\n\n    it('extracts zero params from no-arg Java method', () => {\n      parser.setLanguage(Java);\n      const code = `class Foo {\n  public void run() {}\n}`;\n      const tree = parser.parse(code);\n      const classNode = tree.rootNode.child(0)!;\n      const classBody = classNode.childForFieldName('body')!;\n      const methodNode = classBody.namedChild(0)!;\n\n      const sig = extractMethodSignature(methodNode);\n      expect(sig.parameterCount).toBe(0);\n    });\n\n    it('extracts parameterTypes for Java overloaded methods', () => {\n      parser.setLanguage(Java);\n      const code = `class Svc {\n  public User lookup(int id) { return null; }\n  public User lookup(String name) { return null; }\n  public void process(int code, String msg) {}\n}`;\n      const tree = parser.parse(code);\n      const classBody = tree.rootNode.child(0)!.childForFieldName('body')!;\n\n      const sig0 = extractMethodSignature(classBody.namedChild(0)!);\n      expect(sig0.parameterCount).toBe(1);\n      expect(sig0.parameterTypes).toEqual(['int']);\n\n      const sig1 = extractMethodSignature(classBody.namedChild(1)!);\n      expect(sig1.parameterCount).toBe(1);\n      expect(sig1.parameterTypes).toEqual(['String']);\n\n      const sig2 = extractMethodSignature(classBody.namedChild(2)!);\n      expect(sig2.parameterCount).toBe(2);\n      
expect(sig2.parameterTypes).toEqual(['int', 'String']);\n    });\n  });\n\n  describe('Kotlin', () => {\n    it('extracts params from a Kotlin function declaration', () => {\n      parser.setLanguage(Kotlin);\n      const code = `object OneArg {\n  fun writeAudit(message: String): String {\n    return message\n  }\n}`;\n      const tree = parser.parse(code);\n      const objectNode = tree.rootNode.child(0)!;\n      const classBody = objectNode.namedChild(1)!;\n      const functionNode = classBody.namedChild(0)!;\n\n      const sig = extractMethodSignature(functionNode);\n      expect(sig.parameterCount).toBe(1);\n    });\n\n    it('extracts zero params from a no-arg Kotlin function', () => {\n      parser.setLanguage(Kotlin);\n      const code = `object ZeroArg {\n  fun writeAudit(): String {\n    return \"zero\"\n  }\n}`;\n      const tree = parser.parse(code);\n      const objectNode = tree.rootNode.child(0)!;\n      const classBody = objectNode.namedChild(1)!;\n      const functionNode = classBody.namedChild(0)!;\n\n      const sig = extractMethodSignature(functionNode);\n      expect(sig.parameterCount).toBe(0);\n    });\n\n    it('extracts parameterTypes for Kotlin overloaded functions', () => {\n      parser.setLanguage(Kotlin);\n      const code = `class Svc {\n  fun lookup(id: Int): User? { return null }\n  fun lookup(name: String): User? 
{ return null }\n}`;\n      const tree = parser.parse(code);\n      const classBody = tree.rootNode.child(0)!.namedChild(1)!;\n\n      const sig0 = extractMethodSignature(classBody.namedChild(0)!);\n      expect(sig0.parameterCount).toBe(1);\n      expect(sig0.parameterTypes).toEqual(['Int']);\n\n      const sig1 = extractMethodSignature(classBody.namedChild(1)!);\n      expect(sig1.parameterCount).toBe(1);\n      expect(sig1.parameterTypes).toEqual(['String']);\n    });\n  });\n\n  describe('C++', () => {\n    it('extracts params from a nested C++ declarator', () => {\n      parser.setLanguage(CPP);\n      const code = `inline const char* write_audit(const char* message) {\n  return message;\n}`;\n      const tree = parser.parse(code);\n      const functionNode = tree.rootNode.namedChild(0)!;\n\n      const sig = extractMethodSignature(functionNode);\n      expect(sig.parameterCount).toBe(1);\n    });\n\n    it('extracts zero params from a no-arg C++ function', () => {\n      parser.setLanguage(CPP);\n      const code = `inline const char* write_audit() {\n  return \"zero\";\n}`;\n      const tree = parser.parse(code);\n      const functionNode = tree.rootNode.namedChild(0)!;\n\n      const sig = extractMethodSignature(functionNode);\n      expect(sig.parameterCount).toBe(0);\n    });\n\n    it('extracts parameterTypes for C++ overloaded functions', () => {\n      parser.setLanguage(CPP);\n      const code = `User* lookup(int id) { return nullptr; }\nUser* lookup(string name) { return nullptr; }`;\n      const tree = parser.parse(code);\n\n      const sig0 = extractMethodSignature(tree.rootNode.namedChild(0)!);\n      expect(sig0.parameterCount).toBe(1);\n      expect(sig0.parameterTypes).toEqual(['int']);\n\n      const sig1 = extractMethodSignature(tree.rootNode.namedChild(1)!);\n      expect(sig1.parameterCount).toBe(1);\n      expect(sig1.parameterTypes).toEqual(['string']);\n    });\n  });\n\n  describe('C#', () => {\n    it('extracts params from a C# 
method', () => {\n      parser.setLanguage(CSharp);\n      const code = `class Foo {\n  public bool Check(string name, int count) { return true; }\n}`;\n      const tree = parser.parse(code);\n      const classNode = tree.rootNode.child(0)!;\n      const classBody = classNode.childForFieldName('body')!;\n      const methodNode = classBody.namedChild(0)!;\n\n      const sig = extractMethodSignature(methodNode);\n      expect(sig.parameterCount).toBe(2);\n    });\n\n    it('extracts parameterTypes for C# overloaded methods', () => {\n      parser.setLanguage(CSharp);\n      const code = `class Svc {\n  public User Lookup(int id) { return null; }\n  public User Lookup(string name) { return null; }\n}`;\n      const tree = parser.parse(code);\n      const classBody = tree.rootNode.child(0)!.childForFieldName('body')!;\n\n      const sig0 = extractMethodSignature(classBody.namedChild(0)!);\n      expect(sig0.parameterCount).toBe(1);\n      expect(sig0.parameterTypes).toEqual(['int']);\n\n      const sig1 = extractMethodSignature(classBody.namedChild(1)!);\n      expect(sig1.parameterCount).toBe(1);\n      expect(sig1.parameterTypes).toEqual(['string']);\n    });\n\n    it('handles C# method with no params', () => {\n      parser.setLanguage(CSharp);\n      const code = `class Foo {\n  public void Execute() {}\n}`;\n      const tree = parser.parse(code);\n      const classNode = tree.rootNode.child(0)!;\n      const classBody = classNode.childForFieldName('body')!;\n      const methodNode = classBody.namedChild(0)!;\n\n      const sig = extractMethodSignature(methodNode);\n      expect(sig.parameterCount).toBe(0);\n    });\n\n    it('extracts return type from C# method', () => {\n      parser.setLanguage(CSharp);\n      const code = `class Svc {\n  public User GetUser(string name) { return null; }\n}`;\n      const tree = parser.parse(code);\n      const classNode = tree.rootNode.child(0)!;\n      const classBody = classNode.childForFieldName('body')!;\n      const 
methodNode = classBody.namedChild(0)!;\n\n      const sig = extractMethodSignature(methodNode);\n      expect(sig.returnType).toBe('User');\n    });\n  });\n\n  describe('Go', () => {\n    it('extracts params and single return type', () => {\n      parser.setLanguage(Go);\n      const code = `package main\nfunc add(a int, b int) int { return a + b }`;\n      const tree = parser.parse(code);\n      const funcNode = tree.rootNode.namedChildren.find(c => c.type === 'function_declaration')!;\n\n      const sig = extractMethodSignature(funcNode);\n      expect(sig.parameterCount).toBe(2);\n      expect(sig.returnType).toBe('int');\n    });\n\n    it('extracts multi-return type', () => {\n      parser.setLanguage(Go);\n      const code = `package main\nfunc parse(s string) (string, error) { return s, nil }`;\n      const tree = parser.parse(code);\n      const funcNode = tree.rootNode.namedChildren.find(c => c.type === 'function_declaration')!;\n\n      const sig = extractMethodSignature(funcNode);\n      expect(sig.parameterCount).toBe(1);\n      expect(sig.returnType).toBe('string');\n    });\n\n    it('handles no return type', () => {\n      parser.setLanguage(Go);\n      const code = `package main\nfunc doSomething(x int) { }`;\n      const tree = parser.parse(code);\n      const funcNode = tree.rootNode.namedChildren.find(c => c.type === 'function_declaration')!;\n\n      const sig = extractMethodSignature(funcNode);\n      expect(sig.parameterCount).toBe(1);\n      expect(sig.returnType).toBeUndefined();\n    });\n\n    it('marks variadic function with undefined parameterCount', () => {\n      parser.setLanguage(Go);\n      const code = `package main\nfunc log(args ...string) int { return 0 }`;\n      const tree = parser.parse(code);\n      const funcNode = tree.rootNode.namedChildren.find(c => c.type === 'function_declaration')!;\n\n      const sig = extractMethodSignature(funcNode);\n      expect(sig.parameterCount).toBeUndefined();\n      
expect(sig.returnType).toBe('int');\n    });\n  });\n\n  describe('Rust', () => {\n    it('extracts return type from function', () => {\n      parser.setLanguage(Rust);\n      const code = `fn add(a: i32, b: i32) -> i32 { a + b }`;\n      const tree = parser.parse(code);\n      const funcNode = tree.rootNode.namedChild(0)!;\n\n      const sig = extractMethodSignature(funcNode);\n      expect(sig.parameterCount).toBe(2);\n      expect(sig.returnType).toBe('i32');\n    });\n  });\n\n  describe('C++ return types', () => {\n    it('extracts primitive return type', () => {\n      parser.setLanguage(CPP);\n      const code = `int add(int a, int b) { return a + b; }`;\n      const tree = parser.parse(code);\n      const funcNode = tree.rootNode.namedChild(0)!;\n\n      const sig = extractMethodSignature(funcNode);\n      expect(sig.parameterCount).toBe(2);\n      expect(sig.returnType).toBe('int');\n    });\n\n    it('extracts qualified return type', () => {\n      parser.setLanguage(CPP);\n      const code = `std::string getName() { return \"\"; }`;\n      const tree = parser.parse(code);\n      const funcNode = tree.rootNode.namedChild(0)!;\n\n      const sig = extractMethodSignature(funcNode);\n      expect(sig.parameterCount).toBe(0);\n      expect(sig.returnType).toBe('std::string');\n    });\n\n    it('returns undefined returnType for void', () => {\n      parser.setLanguage(CPP);\n      const code = `void doNothing() { }`;\n      const tree = parser.parse(code);\n      const funcNode = tree.rootNode.namedChild(0)!;\n\n      const sig = extractMethodSignature(funcNode);\n      expect(sig.returnType).toBeUndefined();\n    });\n\n    it('marks variadic function with undefined parameterCount', () => {\n      parser.setLanguage(CPP);\n      const code = `int printf(const char* fmt, ...) 
{ return 0; }`;\n      const tree = parser.parse(code);\n      const funcNode = tree.rootNode.namedChild(0)!;\n\n      const sig = extractMethodSignature(funcNode);\n      expect(sig.parameterCount).toBeUndefined();\n      expect(sig.returnType).toBe('int');\n    });\n  });\n\n  describe('variadic params', () => {\n    it('Java: marks varargs with undefined parameterCount', () => {\n      parser.setLanguage(Java);\n      const code = `class Foo {\n  public void log(String fmt, Object... args) {}\n}`;\n      const tree = parser.parse(code);\n      const classNode = tree.rootNode.child(0)!;\n      const classBody = classNode.childForFieldName('body')!;\n      const methodNode = classBody.namedChild(0)!;\n\n      const sig = extractMethodSignature(methodNode);\n      expect(sig.parameterCount).toBeUndefined();\n    });\n\n    it('Python: marks *args with undefined parameterCount', () => {\n      parser.setLanguage(Python);\n      const code = `class Foo:\n    def log(self, fmt, *args):\n        pass`;\n      const tree = parser.parse(code);\n      const classNode = tree.rootNode.child(0)!;\n      const classBody = classNode.childForFieldName('body')!;\n      const methodNode = classBody.namedChild(0)!;\n\n      const sig = extractMethodSignature(methodNode);\n      expect(sig.parameterCount).toBeUndefined();\n    });\n\n    it('Python: marks **kwargs with undefined parameterCount', () => {\n      parser.setLanguage(Python);\n      const code = `class Foo:\n    def config(self, **kwargs):\n        pass`;\n      const tree = parser.parse(code);\n      const classNode = tree.rootNode.child(0)!;\n      const classBody = classNode.childForFieldName('body')!;\n      const methodNode = classBody.namedChild(0)!;\n\n      const sig = extractMethodSignature(methodNode);\n      expect(sig.parameterCount).toBeUndefined();\n    });\n\n    it('TypeScript: marks rest params with undefined parameterCount', () => {\n      parser.setLanguage(TypeScript.typescript);\n      const code = 
`function logEntry(...messages: string[]): void {}`;\n      const tree = parser.parse(code);\n      const funcNode = tree.rootNode.namedChild(0)!;\n\n      const sig = extractMethodSignature(funcNode);\n      expect(sig.parameterCount).toBeUndefined();\n    });\n\n    it('Kotlin: marks vararg with undefined parameterCount', () => {\n      parser.setLanguage(Kotlin);\n      const code = `object Foo {\n  fun log(vararg args: String) {}\n}`;\n      const tree = parser.parse(code);\n      const objectNode = tree.rootNode.child(0)!;\n      const classBody = objectNode.namedChild(1)!;\n      const functionNode = classBody.namedChild(0)!;\n\n      const sig = extractMethodSignature(functionNode);\n      expect(sig.parameterCount).toBeUndefined();\n    });\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/mro-processor.test.ts",
    "content": "import { describe, it, expect } from 'vitest';\nimport { computeMRO } from '../../src/core/ingestion/mro-processor.js';\nimport { createKnowledgeGraph } from '../../src/core/graph/graph.js';\nimport type { KnowledgeGraph } from '../../src/core/graph/types.js';\nimport { generateId } from '../../src/lib/utils.js';\n\n// ---------------------------------------------------------------------------\n// Helpers\n// ---------------------------------------------------------------------------\n\nfunction addClass(graph: KnowledgeGraph, name: string, language: string, label: 'Class' | 'Interface' | 'Struct' | 'Trait' = 'Class') {\n  const id = generateId(label, name);\n  graph.addNode({\n    id,\n    label,\n    properties: { name, filePath: `src/${name}.ts`, language },\n  });\n  return id;\n}\n\nfunction addMethod(graph: KnowledgeGraph, className: string, methodName: string, classLabel: 'Class' | 'Interface' | 'Struct' | 'Trait' = 'Class') {\n  const classId = generateId(classLabel, className);\n  const methodId = generateId('Method', `${className}.${methodName}`);\n  graph.addNode({\n    id: methodId,\n    label: 'Method',\n    properties: { name: methodName, filePath: `src/${className}.ts` },\n  });\n  graph.addRelationship({\n    id: generateId('HAS_METHOD', `${classId}->${methodId}`),\n    sourceId: classId,\n    targetId: methodId,\n    type: 'HAS_METHOD',\n    confidence: 1.0,\n    reason: '',\n  });\n  return methodId;\n}\n\nfunction addExtends(graph: KnowledgeGraph, childName: string, parentName: string, childLabel: 'Class' | 'Struct' = 'Class', parentLabel: 'Class' | 'Interface' | 'Trait' = 'Class') {\n  const childId = generateId(childLabel, childName);\n  const parentId = generateId(parentLabel, parentName);\n  graph.addRelationship({\n    id: generateId('EXTENDS', `${childId}->${parentId}`),\n    sourceId: childId,\n    targetId: parentId,\n    type: 'EXTENDS',\n    confidence: 1.0,\n    reason: '',\n  });\n}\n\nfunction addImplements(graph: 
KnowledgeGraph, childName: string, parentName: string, childLabel: 'Class' | 'Struct' = 'Class', parentLabel: 'Interface' | 'Trait' = 'Interface') {\n  const childId = generateId(childLabel, childName);\n  const parentId = generateId(parentLabel, parentName);\n  graph.addRelationship({\n    id: generateId('IMPLEMENTS', `${childId}->${parentId}`),\n    sourceId: childId,\n    targetId: parentId,\n    type: 'IMPLEMENTS',\n    confidence: 1.0,\n    reason: '',\n  });\n}\n\n// ---------------------------------------------------------------------------\n// Tests\n// ---------------------------------------------------------------------------\n\ndescribe('computeMRO', () => {\n\n  // ---- C++ diamond --------------------------------------------------------\n  describe('C++ diamond inheritance', () => {\n    it('leftmost base wins when both B and C override foo', () => {\n      // Diamond: A <- B, A <- C, B <- D, C <- D\n      const graph = createKnowledgeGraph();\n      const aId = addClass(graph, 'A', 'cpp');\n      const bId = addClass(graph, 'B', 'cpp');\n      const cId = addClass(graph, 'C', 'cpp');\n      const dId = addClass(graph, 'D', 'cpp');\n\n      addExtends(graph, 'B', 'A');\n      addExtends(graph, 'C', 'A');\n      addExtends(graph, 'D', 'B'); // B is leftmost\n      addExtends(graph, 'D', 'C');\n\n      // A has foo, B overrides foo, C overrides foo\n      addMethod(graph, 'A', 'foo');\n      const bFoo = addMethod(graph, 'B', 'foo');\n      const cFoo = addMethod(graph, 'C', 'foo');\n\n      const result = computeMRO(graph);\n\n      // D should have an entry with ambiguity on foo\n      const dEntry = result.entries.find(e => e.className === 'D');\n      expect(dEntry).toBeDefined();\n      expect(dEntry!.language).toBe('cpp');\n\n      const fooAmbiguity = dEntry!.ambiguities.find(a => a.methodName === 'foo');\n      expect(fooAmbiguity).toBeDefined();\n      expect(fooAmbiguity!.definedIn.length).toBeGreaterThanOrEqual(2);\n\n      // Leftmost base 
(B) wins\n      expect(fooAmbiguity!.resolvedTo).toBe(bFoo);\n      expect(fooAmbiguity!.reason).toContain('C++ leftmost');\n      expect(fooAmbiguity!.reason).toContain('B');\n\n      // OVERRIDES edge emitted\n      expect(result.overrideEdges).toBeGreaterThanOrEqual(1);\n      const overrides = graph.relationships.filter(r => r.type === 'OVERRIDES');\n      expect(overrides.some(r => r.sourceId === dId && r.targetId === bFoo)).toBe(true);\n    });\n\n    it('no ambiguity when foo only in A (diamond no override)', () => {\n      // Diamond: A <- B, A <- C, B <- D, C <- D, but only A has foo\n      const graph = createKnowledgeGraph();\n      addClass(graph, 'A', 'cpp');\n      addClass(graph, 'B', 'cpp');\n      addClass(graph, 'C', 'cpp');\n      addClass(graph, 'D', 'cpp');\n\n      addExtends(graph, 'B', 'A');\n      addExtends(graph, 'C', 'A');\n      addExtends(graph, 'D', 'B');\n      addExtends(graph, 'D', 'C');\n\n      // Only A has foo\n      addMethod(graph, 'A', 'foo');\n\n      const result = computeMRO(graph);\n\n      const dEntry = result.entries.find(e => e.className === 'D');\n      expect(dEntry).toBeDefined();\n      // A::foo appears only once across ancestors — no collision\n      // (B and C don't have their own foo, the duplicate is A::foo seen through both paths)\n      const fooAmbiguity = dEntry!.ambiguities.find(a => a.methodName === 'foo');\n      expect(fooAmbiguity).toBeUndefined();\n    });\n  });\n\n  // ---- C# class + interface -----------------------------------------------\n  describe('C# class + interface', () => {\n    it('class method beats interface default', () => {\n      const graph = createKnowledgeGraph();\n      const classId = addClass(graph, 'MyClass', 'csharp');\n      const baseId = addClass(graph, 'BaseClass', 'csharp');\n      const ifaceId = addClass(graph, 'IDoSomething', 'csharp', 'Interface');\n\n      addExtends(graph, 'MyClass', 'BaseClass');\n      addImplements(graph, 'MyClass', 'IDoSomething');\n\n     
 const baseDoIt = addMethod(graph, 'BaseClass', 'doIt');\n      const ifaceDoIt = addMethod(graph, 'IDoSomething', 'doIt', 'Interface');\n\n      const result = computeMRO(graph);\n\n      const entry = result.entries.find(e => e.className === 'MyClass');\n      expect(entry).toBeDefined();\n\n      const doItAmbiguity = entry!.ambiguities.find(a => a.methodName === 'doIt');\n      expect(doItAmbiguity).toBeDefined();\n      // Class method wins\n      expect(doItAmbiguity!.resolvedTo).toBe(baseDoIt);\n      expect(doItAmbiguity!.reason).toContain('class method wins');\n    });\n\n    it('multiple interface methods with same name are ambiguous', () => {\n      const graph = createKnowledgeGraph();\n      addClass(graph, 'MyClass', 'csharp');\n      addClass(graph, 'IFoo', 'csharp', 'Interface');\n      addClass(graph, 'IBar', 'csharp', 'Interface');\n\n      addImplements(graph, 'MyClass', 'IFoo');\n      addImplements(graph, 'MyClass', 'IBar');\n\n      addMethod(graph, 'IFoo', 'process', 'Interface');\n      addMethod(graph, 'IBar', 'process', 'Interface');\n\n      const result = computeMRO(graph);\n\n      const entry = result.entries.find(e => e.className === 'MyClass');\n      expect(entry).toBeDefined();\n\n      const processAmbiguity = entry!.ambiguities.find(a => a.methodName === 'process');\n      expect(processAmbiguity).toBeDefined();\n      expect(processAmbiguity!.resolvedTo).toBeNull();\n      expect(processAmbiguity!.reason).toContain('ambiguous');\n      expect(result.ambiguityCount).toBeGreaterThanOrEqual(1);\n    });\n  });\n\n  // ---- Python C3 ----------------------------------------------------------\n  describe('Python C3 linearization', () => {\n    it('C3 order determines winner in diamond with overrides', () => {\n      // Diamond: A <- B, A <- C, B <- D, C <- D\n      // class D(B, C) → C3 MRO: B, C, A\n      const graph = createKnowledgeGraph();\n      addClass(graph, 'A', 'python');\n      addClass(graph, 'B', 'python');\n      
addClass(graph, 'C', 'python');\n      const dId = addClass(graph, 'D', 'python');\n\n      addExtends(graph, 'B', 'A');\n      addExtends(graph, 'C', 'A');\n      addExtends(graph, 'D', 'B'); // B first → leftmost in C3\n      addExtends(graph, 'D', 'C');\n\n      addMethod(graph, 'A', 'foo');\n      const bFoo = addMethod(graph, 'B', 'foo');\n      addMethod(graph, 'C', 'foo');\n\n      const result = computeMRO(graph);\n\n      const dEntry = result.entries.find(e => e.className === 'D');\n      expect(dEntry).toBeDefined();\n\n      const fooAmbiguity = dEntry!.ambiguities.find(a => a.methodName === 'foo');\n      expect(fooAmbiguity).toBeDefined();\n      // C3 linearization for D(B, C): B comes first\n      expect(fooAmbiguity!.resolvedTo).toBe(bFoo);\n      expect(fooAmbiguity!.reason).toContain('Python C3');\n    });\n  });\n\n  // ---- Java class + interface ---------------------------------------------\n  describe('Java class + interface', () => {\n    it('class method beats interface default', () => {\n      const graph = createKnowledgeGraph();\n      addClass(graph, 'Service', 'java');\n      addClass(graph, 'BaseService', 'java');\n      addClass(graph, 'Runnable', 'java', 'Interface');\n\n      addExtends(graph, 'Service', 'BaseService');\n      addImplements(graph, 'Service', 'Runnable');\n\n      const baseRun = addMethod(graph, 'BaseService', 'run');\n      addMethod(graph, 'Runnable', 'run', 'Interface');\n\n      const result = computeMRO(graph);\n\n      const entry = result.entries.find(e => e.className === 'Service');\n      expect(entry).toBeDefined();\n\n      const runAmbiguity = entry!.ambiguities.find(a => a.methodName === 'run');\n      expect(runAmbiguity).toBeDefined();\n      expect(runAmbiguity!.resolvedTo).toBe(baseRun);\n      expect(runAmbiguity!.reason).toContain('class method wins');\n    });\n  });\n\n  // ---- Rust trait conflicts -----------------------------------------------\n  describe('Rust trait conflicts', () => {\n    
it('trait conflicts result in null resolution with qualified syntax reason', () => {\n      const graph = createKnowledgeGraph();\n      addClass(graph, 'MyStruct', 'rust', 'Struct');\n      addClass(graph, 'TraitA', 'rust', 'Trait');\n      addClass(graph, 'TraitB', 'rust', 'Trait');\n\n      addImplements(graph, 'MyStruct', 'TraitA', 'Struct', 'Trait');\n      addImplements(graph, 'MyStruct', 'TraitB', 'Struct', 'Trait');\n\n      addMethod(graph, 'TraitA', 'execute', 'Trait');\n      addMethod(graph, 'TraitB', 'execute', 'Trait');\n\n      const result = computeMRO(graph);\n\n      const entry = result.entries.find(e => e.className === 'MyStruct');\n      expect(entry).toBeDefined();\n\n      const execAmbiguity = entry!.ambiguities.find(a => a.methodName === 'execute');\n      expect(execAmbiguity).toBeDefined();\n      expect(execAmbiguity!.resolvedTo).toBeNull();\n      expect(execAmbiguity!.reason).toContain('qualified syntax');\n      expect(result.ambiguityCount).toBeGreaterThanOrEqual(1);\n\n      // No OVERRIDES edge emitted for Rust ambiguity\n      const overrides = graph.relationships.filter(\n        r => r.type === 'OVERRIDES' && r.sourceId === generateId('Struct', 'MyStruct')\n      );\n      expect(overrides).toHaveLength(0);\n    });\n  });\n\n  // ---- Property collisions don't trigger OVERRIDES ------------------------\n  describe('Property nodes excluded from OVERRIDES', () => {\n    it('property name collision across parents does not emit OVERRIDES edge', () => {\n      const graph = createKnowledgeGraph();\n      const parentA = addClass(graph, 'ParentA', 'typescript');\n      const parentB = addClass(graph, 'ParentB', 'typescript');\n      const child = addClass(graph, 'Child', 'typescript');\n\n      addExtends(graph, 'Child', 'ParentA');\n      addExtends(graph, 'Child', 'ParentB');\n\n      // Add Property nodes (same name 'name') to both parents via HAS_PROPERTY\n      const propA = generateId('Property', 'ParentA.name');\n      
graph.addNode({ id: propA, label: 'Property', properties: { name: 'name', filePath: 'src/ParentA.ts' } });\n      graph.addRelationship({\n        id: generateId('HAS_PROPERTY', `${parentA}->${propA}`),\n        sourceId: parentA, targetId: propA, type: 'HAS_PROPERTY', confidence: 1.0, reason: '',\n      });\n\n      const propB = generateId('Property', 'ParentB.name');\n      graph.addNode({ id: propB, label: 'Property', properties: { name: 'name', filePath: 'src/ParentB.ts' } });\n      graph.addRelationship({\n        id: generateId('HAS_PROPERTY', `${parentB}->${propB}`),\n        sourceId: parentB, targetId: propB, type: 'HAS_PROPERTY', confidence: 1.0, reason: '',\n      });\n\n      const result = computeMRO(graph);\n\n      // No OVERRIDES edge should be emitted for properties\n      const overrides = graph.relationships.filter(r => r.type === 'OVERRIDES');\n      expect(overrides).toHaveLength(0);\n      expect(result.overrideEdges).toBe(0);\n    });\n\n    it('method collision still triggers OVERRIDES even when properties also collide', () => {\n      const graph = createKnowledgeGraph();\n      const parentA = addClass(graph, 'PA', 'cpp');\n      const parentB = addClass(graph, 'PB', 'cpp');\n      addClass(graph, 'Ch', 'cpp');\n\n      addExtends(graph, 'Ch', 'PA');\n      addExtends(graph, 'Ch', 'PB');\n\n      // Method collision (should trigger OVERRIDES)\n      const methodA = addMethod(graph, 'PA', 'doWork');\n      addMethod(graph, 'PB', 'doWork');\n\n      // Property collision (should NOT trigger OVERRIDES — properties use HAS_PROPERTY, not HAS_METHOD)\n      const propA = generateId('Property', 'PA.id');\n      graph.addNode({ id: propA, label: 'Property', properties: { name: 'id', filePath: 'src/PA.ts' } });\n      graph.addRelationship({\n        id: generateId('HAS_PROPERTY', `${parentA}->${propA}`),\n        sourceId: parentA, targetId: propA, type: 'HAS_PROPERTY', confidence: 1.0, reason: '',\n      });\n\n      const propB = 
generateId('Property', 'PB.id');\n      graph.addNode({ id: propB, label: 'Property', properties: { name: 'id', filePath: 'src/PB.ts' } });\n      graph.addRelationship({\n        id: generateId('HAS_PROPERTY', `${parentB}->${propB}`),\n        sourceId: parentB, targetId: propB, type: 'HAS_PROPERTY', confidence: 1.0, reason: '',\n      });\n\n      const result = computeMRO(graph);\n\n      // Only 1 OVERRIDES edge (for the method, not the property)\n      const overrides = graph.relationships.filter(r => r.type === 'OVERRIDES');\n      expect(overrides).toHaveLength(1);\n      expect(overrides[0].targetId).toBe(methodA); // leftmost base wins for C++\n      expect(result.overrideEdges).toBe(1);\n    });\n  });\n\n  // ---- No ambiguity: single parent ----------------------------------------\n  describe('single parent, no ambiguity', () => {\n    it('single parent with unique methods produces no ambiguities', () => {\n      const graph = createKnowledgeGraph();\n      addClass(graph, 'Parent', 'typescript');\n      addClass(graph, 'Child', 'typescript');\n\n      addExtends(graph, 'Child', 'Parent');\n\n      addMethod(graph, 'Parent', 'foo');\n      addMethod(graph, 'Parent', 'bar');\n\n      const result = computeMRO(graph);\n\n      const entry = result.entries.find(e => e.className === 'Child');\n      expect(entry).toBeDefined();\n      expect(entry!.ambiguities).toHaveLength(0);\n    });\n  });\n\n  // ---- No parents: standalone class not in entries ------------------------\n  describe('standalone class', () => {\n    it('class with no parents is not included in entries', () => {\n      const graph = createKnowledgeGraph();\n      addClass(graph, 'Standalone', 'typescript');\n      addMethod(graph, 'Standalone', 'doStuff');\n\n      const result = computeMRO(graph);\n\n      const entry = result.entries.find(e => e.className === 'Standalone');\n      expect(entry).toBeUndefined();\n      expect(result.overrideEdges).toBe(0);\n      
expect(result.ambiguityCount).toBe(0);\n    });\n  });\n\n  // ---- Own method shadows ancestor ----------------------------------------\n  describe('own method shadows ancestor', () => {\n    it('class defining its own method suppresses ambiguity', () => {\n      const graph = createKnowledgeGraph();\n      addClass(graph, 'Base1', 'cpp');\n      addClass(graph, 'Base2', 'cpp');\n      addClass(graph, 'Child', 'cpp');\n\n      addExtends(graph, 'Child', 'Base1');\n      addExtends(graph, 'Child', 'Base2');\n\n      addMethod(graph, 'Base1', 'foo');\n      addMethod(graph, 'Base2', 'foo');\n      addMethod(graph, 'Child', 'foo'); // own method\n\n      const result = computeMRO(graph);\n\n      const entry = result.entries.find(e => e.className === 'Child');\n      expect(entry).toBeDefined();\n      // No ambiguity because Child defines its own foo\n      const fooAmbiguity = entry!.ambiguities.find(a => a.methodName === 'foo');\n      expect(fooAmbiguity).toBeUndefined();\n    });\n  });\n\n  // ---- Empty graph --------------------------------------------------------\n  describe('empty graph', () => {\n    it('returns empty result for graph with no classes', () => {\n      const graph = createKnowledgeGraph();\n      const result = computeMRO(graph);\n      expect(result.entries).toHaveLength(0);\n      expect(result.overrideEdges).toBe(0);\n      expect(result.ambiguityCount).toBe(0);\n    });\n  });\n\n  // ---- Cyclic inheritance (P1 fix) ----------------------------------------\n  describe('cyclic inheritance', () => {\n    it('does not stack overflow on cyclic Python hierarchy', () => {\n      // A extends B, B extends A — cyclic\n      const graph = createKnowledgeGraph();\n      addClass(graph, 'A', 'python');\n      addClass(graph, 'B', 'python');\n      addExtends(graph, 'A', 'B');\n      addExtends(graph, 'B', 'A');\n      addMethod(graph, 'A', 'foo');\n      addMethod(graph, 'B', 'foo');\n\n      // Should NOT throw — c3Linearize returns null, falls 
back to BFS\n      const result = computeMRO(graph);\n      expect(result).toBeDefined();\n      // Both A and B have parents, so both get entries\n      expect(result.entries.length).toBeGreaterThanOrEqual(1);\n    });\n\n    it('handles 3-node cycle gracefully', () => {\n      // X → Y → Z → X\n      const graph = createKnowledgeGraph();\n      addClass(graph, 'X', 'python');\n      addClass(graph, 'Y', 'python');\n      addClass(graph, 'Z', 'python');\n      addExtends(graph, 'X', 'Y');\n      addExtends(graph, 'Y', 'Z');\n      addExtends(graph, 'Z', 'X');\n\n      const result = computeMRO(graph);\n      expect(result).toBeDefined();\n    });\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/named-binding-extraction.test.ts",
    "content": "import { describe, it, expect } from 'vitest';\nimport { extractCsharpNamedBindings } from '../../src/core/ingestion/named-binding-extraction.js';\nimport Parser from 'tree-sitter';\nimport CSharp from 'tree-sitter-c-sharp';\n\nconst parser = new Parser();\n\n/** Walk a tree depth-first and return the first node matching the given type. */\nfunction findFirst(node: any, type: string): any | undefined {\n  if (node.type === type) return node;\n  for (let i = 0; i < node.childCount; i++) {\n    const found = findFirst(node.child(i), type);\n    if (found) return found;\n  }\n  return undefined;\n}\n\nconst parse = (code: string) => {\n  parser.setLanguage(CSharp);\n  return parser.parse(code);\n};\n\ndescribe('extractCsharpNamedBindings', () => {\n  describe('non-aliased namespace imports (known limitation)', () => {\n    it('returns undefined for non-aliased namespace imports (known limitation)', () => {\n      // C# using Namespace imports can't be reduced to per-symbol bindings without type\n      // inference — resolution falls back to PackageMap directory matching.\n      const tree = parse('using MyApp.Models;');\n      const usingNode = findFirst(tree.rootNode, 'using_directive');\n      expect(usingNode).toBeDefined();\n\n      const result = extractCsharpNamedBindings(usingNode);\n\n      expect(result).toBeUndefined();\n    });\n\n    it('returns undefined for a single-segment non-aliased import', () => {\n      // C# using Namespace imports can't be reduced to per-symbol bindings without type\n      // inference — resolution falls back to PackageMap directory matching.\n      const tree = parse('using System;');\n      const usingNode = findFirst(tree.rootNode, 'using_directive');\n      expect(usingNode).toBeDefined();\n\n      const result = extractCsharpNamedBindings(usingNode);\n\n      expect(result).toBeUndefined();\n    });\n  });\n\n  describe('aliased imports', () => {\n    it('returns a binding for a simple aliased import', () => 
{\n      const tree = parse('using Mod = MyApp.Models;');\n      const usingNode = findFirst(tree.rootNode, 'using_directive');\n      expect(usingNode).toBeDefined();\n\n      const result = extractCsharpNamedBindings(usingNode);\n\n      expect(result).toEqual([{ local: 'Mod', exported: 'Models' }]);\n    });\n\n    it('uses the last segment of the qualified name as the exported binding', () => {\n      const tree = parse('using Svc = MyApp.Services.UserService;');\n      const usingNode = findFirst(tree.rootNode, 'using_directive');\n      expect(usingNode).toBeDefined();\n\n      const result = extractCsharpNamedBindings(usingNode);\n\n      expect(result).toEqual([{ local: 'Svc', exported: 'UserService' }]);\n    });\n  });\n\n  describe('edge cases', () => {\n    it('returns undefined when the node type is not using_directive', () => {\n      // Passing a synthetic object that is not a using_directive node.\n      const fakeNode = { type: 'import_declaration', namedChildCount: 0 };\n\n      const result = extractCsharpNamedBindings(fakeNode);\n\n      expect(result).toBeUndefined();\n    });\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/parser-loader.test.ts",
    "content": "import { describe, it, expect } from 'vitest';\nimport { loadParser, loadLanguage } from '../../src/core/tree-sitter/parser-loader.js';\nimport { SupportedLanguages } from '../../src/config/supported-languages.js';\n\ndescribe('parser-loader', () => {\n  describe('loadParser', () => {\n    it('returns a Parser instance', async () => {\n      const parser = await loadParser();\n      expect(parser).toBeDefined();\n      expect(typeof parser.parse).toBe('function');\n    });\n\n    it('returns the same singleton instance', async () => {\n      const parser1 = await loadParser();\n      const parser2 = await loadParser();\n      expect(parser1).toBe(parser2);\n    });\n  });\n\n  describe('loadLanguage', () => {\n    it('loads TypeScript language', async () => {\n      await expect(loadLanguage(SupportedLanguages.TypeScript)).resolves.not.toThrow();\n    });\n\n    it('loads JavaScript language', async () => {\n      await expect(loadLanguage(SupportedLanguages.JavaScript)).resolves.not.toThrow();\n    });\n\n    it('loads Python language', async () => {\n      await expect(loadLanguage(SupportedLanguages.Python)).resolves.not.toThrow();\n    });\n\n    it('loads Java language', async () => {\n      await expect(loadLanguage(SupportedLanguages.Java)).resolves.not.toThrow();\n    });\n\n    it('loads C language', async () => {\n      await expect(loadLanguage(SupportedLanguages.C)).resolves.not.toThrow();\n    });\n\n    it('loads C++ language', async () => {\n      await expect(loadLanguage(SupportedLanguages.CPlusPlus)).resolves.not.toThrow();\n    });\n\n    it('loads C# language', async () => {\n      await expect(loadLanguage(SupportedLanguages.CSharp)).resolves.not.toThrow();\n    });\n\n    it('loads Go language', async () => {\n      await expect(loadLanguage(SupportedLanguages.Go)).resolves.not.toThrow();\n    });\n\n    it('loads Rust language', async () => {\n      await expect(loadLanguage(SupportedLanguages.Rust)).resolves.not.toThrow();\n  
  });\n\n    it('loads PHP language', async () => {\n      await expect(loadLanguage(SupportedLanguages.PHP)).resolves.not.toThrow();\n    });\n\n    it('loads TSX grammar for .tsx files', async () => {\n      // TSX uses a different grammar (TypeScript.tsx vs TypeScript.typescript)\n      await expect(loadLanguage(SupportedLanguages.TypeScript, 'Component.tsx')).resolves.not.toThrow();\n    });\n\n    it('loads TS grammar for .ts files', async () => {\n      await expect(loadLanguage(SupportedLanguages.TypeScript, 'utils.ts')).resolves.not.toThrow();\n    });\n\n    it('loads Ruby language', async () => {\n      await expect(loadLanguage(SupportedLanguages.Ruby)).resolves.not.toThrow();\n    });\n\n    it('throws for unsupported language', async () => {\n      await expect(loadLanguage('erlang' as SupportedLanguages)).rejects.toThrow('Unsupported language');\n    });\n  });\n\n  describe('Swift optional dependency', () => {\n    it('handles Swift loading gracefully', async () => {\n      // Swift is optional — it either loads successfully or throws an error about unsupported language\n      try {\n        await loadLanguage(SupportedLanguages.Swift);\n        // If it succeeds, tree-sitter-swift is installed\n      } catch (e: any) {\n        // If it fails, it should be because tree-sitter-swift is not installed\n        expect(e.message).toContain('Unsupported language');\n      }\n    });\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/pipeline-exports.test.ts",
    "content": "import { describe, it, expect } from 'vitest';\nimport { runPipelineFromRepo } from '../../src/core/ingestion/pipeline.js';\n\ndescribe('pipeline', () => {\n  it('exports runPipelineFromRepo function', () => {\n    expect(typeof runPipelineFromRepo).toBe('function');\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/process-processor.test.ts",
    "content": "import { describe, it, expect, vi } from 'vitest';\nimport { processProcesses, type ProcessDetectionConfig } from '../../src/core/ingestion/process-processor.js';\nimport { createKnowledgeGraph } from '../../src/core/graph/graph.js';\nimport type { CommunityMembership } from '../../src/core/ingestion/community-processor.js';\n\ndescribe('processProcesses', () => {\n  it('detects no processes in empty graph', async () => {\n    const graph = createKnowledgeGraph();\n    const result = await processProcesses(graph, []);\n    expect(result.processes).toHaveLength(0);\n    expect(result.steps).toHaveLength(0);\n    expect(result.stats.totalProcesses).toBe(0);\n    expect(result.stats.entryPointsFound).toBe(0);\n    expect(result.stats.avgStepCount).toBe(0);\n  });\n\n  it('detects no processes when there are no CALLS relationships', async () => {\n    const graph = createKnowledgeGraph();\n    graph.addNode({\n      id: 'func:main', label: 'Function',\n      properties: { name: 'main', filePath: 'src/index.ts', startLine: 1, endLine: 10, isExported: true }\n    });\n\n    const result = await processProcesses(graph, []);\n    expect(result.processes).toHaveLength(0);\n  });\n\n  it('detects a simple 3-step process with correct structure', async () => {\n    const graph = createKnowledgeGraph();\n\n    // Create 3 functions in a chain\n    graph.addNode({\n      id: 'func:handleRequest', label: 'Function',\n      properties: { name: 'handleRequest', filePath: 'src/handler.ts', startLine: 1, endLine: 10, isExported: true }\n    });\n    graph.addNode({\n      id: 'func:validateInput', label: 'Function',\n      properties: { name: 'validateInput', filePath: 'src/validator.ts', startLine: 1, endLine: 5, isExported: true }\n    });\n    graph.addNode({\n      id: 'func:saveToDb', label: 'Function',\n      properties: { name: 'saveToDb', filePath: 'src/db.ts', startLine: 1, endLine: 8, isExported: true }\n    });\n\n    // handleRequest -> validateInput -> 
saveToDb\n    graph.addRelationship({\n      id: 'call:1', sourceId: 'func:handleRequest', targetId: 'func:validateInput',\n      type: 'CALLS', confidence: 0.9, reason: 'import-resolved'\n    });\n    graph.addRelationship({\n      id: 'call:2', sourceId: 'func:validateInput', targetId: 'func:saveToDb',\n      type: 'CALLS', confidence: 0.9, reason: 'import-resolved'\n    });\n\n    const memberships: CommunityMembership[] = [\n      { nodeId: 'func:handleRequest', communityId: 'community:0' },\n      { nodeId: 'func:validateInput', communityId: 'community:0' },\n      { nodeId: 'func:saveToDb', communityId: 'community:0' },\n    ];\n\n    const result = await processProcesses(graph, memberships);\n\n    // Must detect at least one process\n    expect(result.processes.length).toBeGreaterThan(0);\n\n    // Find the process starting from handleRequest\n    const process = result.processes.find(p => p.entryPointId === 'func:handleRequest');\n    expect(process).toBeDefined();\n    expect(process!.stepCount).toBe(3);\n    expect(process!.entryPointId).toBe('func:handleRequest');\n    expect(process!.terminalId).toBe('func:saveToDb');\n    expect(process!.processType).toBe('intra_community');\n    expect(process!.communities).toEqual(['community:0']);\n\n    // Verify trace order: entry -> middle -> terminal\n    expect(process!.trace).toEqual([\n      'func:handleRequest',\n      'func:validateInput',\n      'func:saveToDb',\n    ]);\n\n    // Verify steps are 1-indexed and in correct order\n    const processSteps = result.steps.filter(s => s.processId === process!.id);\n    expect(processSteps).toHaveLength(3);\n    expect(processSteps[0]).toEqual(expect.objectContaining({ nodeId: 'func:handleRequest', step: 1 }));\n    expect(processSteps[1]).toEqual(expect.objectContaining({ nodeId: 'func:validateInput', step: 2 }));\n    expect(processSteps[2]).toEqual(expect.objectContaining({ nodeId: 'func:saveToDb', step: 3 }));\n\n    // Verify label is generated from entry 
and terminal names\n    expect(process!.heuristicLabel).toContain('HandleRequest');\n    expect(process!.heuristicLabel).toContain('SaveToDb');\n\n    // Stats should reflect the detected processes\n    expect(result.stats.totalProcesses).toBe(result.processes.length);\n    expect(result.stats.entryPointsFound).toBeGreaterThan(0);\n  });\n\n  it('respects maxTraceDepth config', async () => {\n    const graph = createKnowledgeGraph();\n\n    // Create a long chain: f0 -> f1 -> f2 -> f3 -> f4\n    for (let i = 0; i < 5; i++) {\n      graph.addNode({\n        id: `func:f${i}`, label: 'Function',\n        properties: { name: `f${i}`, filePath: `src/f${i}.ts`, startLine: 1, endLine: 5, isExported: true }\n      });\n    }\n    for (let i = 0; i < 4; i++) {\n      graph.addRelationship({\n        id: `call:${i}`, sourceId: `func:f${i}`, targetId: `func:f${i+1}`,\n        type: 'CALLS', confidence: 0.9, reason: ''\n      });\n    }\n\n    const memberships: CommunityMembership[] = Array.from({ length: 5 }, (_, i) => ({\n      nodeId: `func:f${i}`, communityId: 'community:0'\n    }));\n\n    // Limit to 3 steps max depth\n    const config: Partial<ProcessDetectionConfig> = { maxTraceDepth: 3 };\n    const result = await processProcesses(graph, memberships, undefined, config);\n\n    // Should still find processes, but each trace should be at most maxTraceDepth steps\n    expect(result.processes.length).toBeGreaterThan(0);\n    for (const process of result.processes) {\n      expect(process.stepCount).toBeLessThanOrEqual(3);\n    }\n  });\n\n  it('detects cross_community processes', async () => {\n    const graph = createKnowledgeGraph();\n\n    graph.addNode({\n      id: 'func:apiHandler', label: 'Function',\n      properties: { name: 'apiHandler', filePath: 'src/api/handler.ts', startLine: 1, endLine: 10, isExported: true }\n    });\n    graph.addNode({\n      id: 'func:dbQuery', label: 'Function',\n      properties: { name: 'dbQuery', filePath: 'src/db/query.ts', 
startLine: 1, endLine: 5, isExported: true }\n    });\n    graph.addNode({\n      id: 'func:formatResponse', label: 'Function',\n      properties: { name: 'formatResponse', filePath: 'src/api/format.ts', startLine: 1, endLine: 5, isExported: true }\n    });\n\n    // apiHandler -> dbQuery (cross community), apiHandler -> formatResponse (same community)\n    graph.addRelationship({\n      id: 'call:1', sourceId: 'func:apiHandler', targetId: 'func:dbQuery',\n      type: 'CALLS', confidence: 0.9, reason: ''\n    });\n    graph.addRelationship({\n      id: 'call:2', sourceId: 'func:dbQuery', targetId: 'func:formatResponse',\n      type: 'CALLS', confidence: 0.9, reason: ''\n    });\n\n    // Put them in different communities\n    const memberships: CommunityMembership[] = [\n      { nodeId: 'func:apiHandler', communityId: 'community:api' },\n      { nodeId: 'func:dbQuery', communityId: 'community:db' },\n      { nodeId: 'func:formatResponse', communityId: 'community:api' },\n    ];\n\n    const result = await processProcesses(graph, memberships);\n\n    // Must find at least one process\n    expect(result.processes.length).toBeGreaterThan(0);\n\n    // The process from apiHandler should be cross_community (touches api + db communities)\n    const crossProcess = result.processes.find(p => p.entryPointId === 'func:apiHandler');\n    expect(crossProcess).toBeDefined();\n    expect(crossProcess!.processType).toBe('cross_community');\n    expect(crossProcess!.communities.length).toBeGreaterThan(1);\n    expect(crossProcess!.communities).toContain('community:api');\n    expect(crossProcess!.communities).toContain('community:db');\n\n    // Stats should count cross-community\n    expect(result.stats.crossCommunityCount).toBeGreaterThan(0);\n  });\n\n  it('excludes test files from entry points', async () => {\n    const graph = createKnowledgeGraph();\n\n    // Test file function\n    graph.addNode({\n      id: 'func:testMain', label: 'Function',\n      properties: { name: 
'testMain', filePath: 'test/unit/main.test.ts', startLine: 1, endLine: 10, isExported: true }\n    });\n    graph.addNode({\n      id: 'func:helper', label: 'Function',\n      properties: { name: 'helper', filePath: 'src/helper.ts', startLine: 1, endLine: 5, isExported: true }\n    });\n\n    graph.addRelationship({\n      id: 'call:1', sourceId: 'func:testMain', targetId: 'func:helper',\n      type: 'CALLS', confidence: 0.9, reason: ''\n    });\n\n    const result = await processProcesses(graph, []);\n\n    // Test files should not be used as entry points\n    const testProcess = result.processes.find(p => p.entryPointId === 'func:testMain');\n    expect(testProcess).toBeUndefined();\n  });\n\n  it('filters out low-confidence calls (below 0.5)', async () => {\n    const graph = createKnowledgeGraph();\n\n    graph.addNode({\n      id: 'func:a', label: 'Function',\n      properties: { name: 'a', filePath: 'src/a.ts', startLine: 1, endLine: 5, isExported: true }\n    });\n    graph.addNode({\n      id: 'func:b', label: 'Function',\n      properties: { name: 'b', filePath: 'src/b.ts', startLine: 1, endLine: 5, isExported: true }\n    });\n    graph.addNode({\n      id: 'func:c', label: 'Function',\n      properties: { name: 'c', filePath: 'src/c.ts', startLine: 1, endLine: 5, isExported: true }\n    });\n\n    // a -> b with low confidence (fuzzy-global ambiguous), a -> c with high confidence\n    graph.addRelationship({\n      id: 'call:1', sourceId: 'func:a', targetId: 'func:b',\n      type: 'CALLS', confidence: 0.3, reason: 'fuzzy-global'\n    });\n    graph.addRelationship({\n      id: 'call:2', sourceId: 'func:a', targetId: 'func:c',\n      type: 'CALLS', confidence: 0.9, reason: 'import-resolved'\n    });\n\n    const result = await processProcesses(graph, []);\n\n    // No process should include func:b since the edge has confidence < 0.5 (MIN_TRACE_CONFIDENCE)\n    for (const process of result.processes) {\n      
expect(process.trace).not.toContain('func:b');\n    }\n  });\n\n  it('handles cycles without infinite loops', async () => {\n    const graph = createKnowledgeGraph();\n\n    graph.addNode({\n      id: 'func:a', label: 'Function',\n      properties: { name: 'processItem', filePath: 'src/a.ts', startLine: 1, endLine: 5, isExported: true }\n    });\n    graph.addNode({\n      id: 'func:b', label: 'Function',\n      properties: { name: 'validate', filePath: 'src/b.ts', startLine: 1, endLine: 5, isExported: true }\n    });\n    graph.addNode({\n      id: 'func:c', label: 'Function',\n      properties: { name: 'retry', filePath: 'src/c.ts', startLine: 1, endLine: 5, isExported: true }\n    });\n\n    // a -> b -> c -> a (cycle)\n    graph.addRelationship({\n      id: 'call:1', sourceId: 'func:a', targetId: 'func:b',\n      type: 'CALLS', confidence: 0.9, reason: ''\n    });\n    graph.addRelationship({\n      id: 'call:2', sourceId: 'func:b', targetId: 'func:c',\n      type: 'CALLS', confidence: 0.9, reason: ''\n    });\n    graph.addRelationship({\n      id: 'call:3', sourceId: 'func:c', targetId: 'func:a',\n      type: 'CALLS', confidence: 0.9, reason: ''\n    });\n\n    const memberships: CommunityMembership[] = [\n      { nodeId: 'func:a', communityId: 'community:0' },\n      { nodeId: 'func:b', communityId: 'community:0' },\n      { nodeId: 'func:c', communityId: 'community:0' },\n    ];\n\n    // Should complete without hanging, and traces should not repeat nodes\n    const result = await processProcesses(graph, memberships);\n    for (const process of result.processes) {\n      const uniqueNodes = new Set(process.trace);\n      expect(uniqueNodes.size).toBe(process.trace.length);\n    }\n  });\n\n  it('respects minSteps default (3) — rejects 2-step traces', async () => {\n    const graph = createKnowledgeGraph();\n\n    // Only 2 functions: a -> b (2 steps, below default minSteps of 3)\n    graph.addNode({\n      id: 'func:caller', label: 'Function',\n      
properties: { name: 'caller', filePath: 'src/caller.ts', startLine: 1, endLine: 5, isExported: true }\n    });\n    graph.addNode({\n      id: 'func:callee', label: 'Function',\n      properties: { name: 'callee', filePath: 'src/callee.ts', startLine: 1, endLine: 5, isExported: true }\n    });\n\n    graph.addRelationship({\n      id: 'call:1', sourceId: 'func:caller', targetId: 'func:callee',\n      type: 'CALLS', confidence: 0.9, reason: ''\n    });\n\n    const result = await processProcesses(graph, []);\n\n    // Default minSteps is 3, so a 2-step trace (caller -> callee) should be rejected\n    expect(result.processes).toHaveLength(0);\n  });\n\n  it('calls progress callback with messages', async () => {\n    const graph = createKnowledgeGraph();\n    const onProgress = vi.fn();\n\n    await processProcesses(graph, [], onProgress);\n\n    expect(onProgress).toHaveBeenCalled();\n    // Verify callback receives (message: string, progress: number)\n    const [message, progress] = onProgress.mock.calls[0];\n    expect(typeof message).toBe('string');\n    expect(typeof progress).toBe('number');\n    expect(progress).toBeGreaterThanOrEqual(0);\n    expect(progress).toBeLessThanOrEqual(100);\n  });\n\n  it('limits output to maxProcesses', async () => {\n    const graph = createKnowledgeGraph();\n\n    // Create many independent 3-step chains to generate many processes\n    for (let chain = 0; chain < 10; chain++) {\n      for (let step = 0; step < 3; step++) {\n        graph.addNode({\n          id: `func:chain${chain}_f${step}`, label: 'Function',\n          properties: {\n            name: `chain${chain}_f${step}`,\n            filePath: `src/chain${chain}/f${step}.ts`,\n            startLine: 1, endLine: 5,\n            isExported: true\n          }\n        });\n      }\n      for (let step = 0; step < 2; step++) {\n        graph.addRelationship({\n          id: `call:chain${chain}_${step}`,\n          sourceId: `func:chain${chain}_f${step}`,\n          targetId: 
`func:chain${chain}_f${step+1}`,\n          type: 'CALLS', confidence: 0.9, reason: ''\n        });\n      }\n    }\n\n    const memberships: CommunityMembership[] = [];\n    for (let chain = 0; chain < 10; chain++) {\n      for (let step = 0; step < 3; step++) {\n        memberships.push({ nodeId: `func:chain${chain}_f${step}`, communityId: 'community:0' });\n      }\n    }\n\n    const config: Partial<ProcessDetectionConfig> = { maxProcesses: 3 };\n    const result = await processProcesses(graph, memberships, undefined, config);\n\n    expect(result.processes.length).toBeLessThanOrEqual(3);\n    expect(result.stats.totalProcesses).toBeLessThanOrEqual(3);\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/repo-manager.test.ts",
    "content": "/**\n * P1 Unit Tests: Repository Manager\n *\n * Tests: getStoragePath, getStoragePaths, readRegistry, registerRepo, unregisterRepo\n * Covers hardening fixes #29 (API key file permissions) and #30 (case-insensitive paths on Windows)\n */\nimport { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';\nimport path from 'path';\nimport os from 'os';\nimport fs from 'fs/promises';\nimport {\n  getStoragePath,\n  getStoragePaths,\n  readRegistry,\n  saveCLIConfig,\n  loadCLIConfig,\n} from '../../src/storage/repo-manager.js';\nimport { createTempDir } from '../helpers/test-db.js';\n\n// ─── getStoragePath ──────────────────────────────────────────────────\n\ndescribe('getStoragePath', () => {\n  it('appends .gitnexus to resolved repo path', () => {\n    const result = getStoragePath('/home/user/project');\n    expect(result).toContain('.gitnexus');\n    expect(path.basename(result)).toBe('.gitnexus');\n  });\n\n  it('resolves relative paths', () => {\n    const result = getStoragePath('.');\n    // Should be an absolute path\n    expect(path.isAbsolute(result)).toBe(true);\n  });\n});\n\n// ─── getStoragePaths ─────────────────────────────────────────────────\n\ndescribe('getStoragePaths', () => {\n  it('returns storagePath, lbugPath, metaPath', () => {\n    const paths = getStoragePaths('/home/user/project');\n    expect(paths.storagePath).toContain('.gitnexus');\n    expect(paths.lbugPath).toContain('lbug');\n    expect(paths.metaPath).toContain('meta.json');\n  });\n\n  it('all paths are under storagePath', () => {\n    const paths = getStoragePaths('/home/user/project');\n    expect(paths.lbugPath.startsWith(paths.storagePath)).toBe(true);\n    expect(paths.metaPath.startsWith(paths.storagePath)).toBe(true);\n  });\n});\n\n// ─── readRegistry ────────────────────────────────────────────────────\n\ndescribe('readRegistry', () => {\n  it('returns empty array when registry does not exist', async () => {\n    // readRegistry reads from 
~/.gitnexus/registry.json\n    // If the file doesn't exist, it should return []\n    // This test exercises the catch path\n    const result = await readRegistry();\n    // Result is an array (may or may not be empty depending on user's system)\n    expect(Array.isArray(result)).toBe(true);\n  });\n});\n\n// ─── CLI Config (file permissions) ───────────────────────────────────\n\ndescribe('saveCLIConfig / loadCLIConfig', () => {\n  let tmpHandle: Awaited<ReturnType<typeof createTempDir>>;\n  let originalHomedir: typeof os.homedir;\n\n  beforeEach(async () => {\n    tmpHandle = await createTempDir('gitnexus-config-test-');\n    originalHomedir = os.homedir;\n    // Mock os.homedir to point to our temp dir\n    // Note: This won't fully work because repo-manager uses its own import of os\n    // We'll test what we can.\n  });\n\n  afterEach(async () => {\n    os.homedir = originalHomedir;\n    await tmpHandle.cleanup();\n  });\n\n  it('loadCLIConfig returns empty object when config does not exist', async () => {\n    const config = await loadCLIConfig();\n    // Returns {} or existing config\n    expect(typeof config).toBe('object');\n  });\n});\n\n// ─── Case-insensitive path comparison (Windows hardening #30) ────────\n\ndescribe('case-insensitive path comparison', () => {\n  it('registerRepo uses case-insensitive compare on Windows', () => {\n    // The fix is in registerRepo: process.platform === 'win32' ? 
a.toLowerCase() === b.toLowerCase()\n    // We verify the logic inline since we can't easily mock process.platform\n\n    const compareWindows = (a: string, b: string): boolean => {\n      return a.toLowerCase() === b.toLowerCase();\n    };\n\n    // On Windows, these should match\n    expect(compareWindows('D:\\\\Projects\\\\MyApp', 'd:\\\\projects\\\\myapp')).toBe(true);\n    expect(compareWindows('C:\\\\Users\\\\USER\\\\project', 'c:\\\\users\\\\user\\\\project')).toBe(true);\n\n    // Different paths should not match\n    expect(compareWindows('D:\\\\Projects\\\\App1', 'D:\\\\Projects\\\\App2')).toBe(false);\n  });\n\n  it('case-sensitive compare for non-Windows', () => {\n    const compareUnix = (a: string, b: string): boolean => {\n      return a === b;\n    };\n\n    // On Unix, case matters\n    expect(compareUnix('/home/user/Project', '/home/user/project')).toBe(false);\n    expect(compareUnix('/home/user/project', '/home/user/project')).toBe(true);\n  });\n});\n\n// ─── API key file permissions (hardening #29) ────────────────────────\n\ndescribe('API key file permissions', () => {\n  it('saveCLIConfig calls chmod 0o600 on non-Windows', async () => {\n    // We verify that the saveCLIConfig code has the chmod call\n    // by reading the source and checking statically.\n    // The actual chmod behavior is platform-dependent.\n    const source = await fs.readFile(\n      path.join(process.cwd(), 'src', 'storage', 'repo-manager.ts'),\n      'utf-8',\n    );\n    expect(source).toContain('chmod(configPath, 0o600)');\n    expect(source).toContain(\"process.platform !== 'win32'\");\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/resources.test.ts",
    "content": "/**\n * Unit Tests: MCP Resources\n *\n * Tests: getResourceDefinitions, getResourceTemplates, readResource\n * - Static resource definitions\n * - Dynamic resource templates\n * - URI parsing and dispatch\n * - Error handling for invalid URIs\n * - Resource handlers with mocked backend\n */\nimport { describe, it, expect, vi } from 'vitest';\nimport {\n  getResourceDefinitions,\n  getResourceTemplates,\n  readResource,\n} from '../../src/mcp/resources.js';\n\n// ─── Minimal mock backend ──────────────────────────────────────────\n\nfunction createMockBackend(overrides: Partial<Record<string, any>> = {}): any {\n  return {\n    listRepos: vi.fn().mockResolvedValue(overrides.repos ?? []),\n    resolveRepo: vi.fn().mockResolvedValue(overrides.resolvedRepo ?? {\n      name: 'test-repo',\n      repoPath: '/tmp/test-repo',\n      lastCommit: 'abc1234',\n    }),\n    getContext: vi.fn().mockReturnValue(overrides.context ?? null),\n    queryClusters: vi.fn().mockResolvedValue(overrides.clusters ?? { clusters: [] }),\n    queryProcesses: vi.fn().mockResolvedValue(overrides.processes ?? { processes: [] }),\n    queryClusterDetail: vi.fn().mockResolvedValue(overrides.clusterDetail ?? { error: 'Not found' }),\n    queryProcessDetail: vi.fn().mockResolvedValue(overrides.processDetail ?? 
{ error: 'Not found' }),\n    ...overrides,\n  };\n}\n\n// ─── Static definitions ─────────────────────────────────────────────\n\ndescribe('getResourceDefinitions', () => {\n  it('returns 2 static resources', () => {\n    const defs = getResourceDefinitions();\n    expect(defs).toHaveLength(2);\n  });\n\n  it('includes repos resource', () => {\n    const defs = getResourceDefinitions();\n    const repos = defs.find(d => d.uri === 'gitnexus://repos');\n    expect(repos).toBeDefined();\n    expect(repos!.mimeType).toBe('text/yaml');\n  });\n\n  it('includes setup resource', () => {\n    const defs = getResourceDefinitions();\n    const setup = defs.find(d => d.uri === 'gitnexus://setup');\n    expect(setup).toBeDefined();\n    expect(setup!.mimeType).toBe('text/markdown');\n  });\n\n  it('each definition has uri, name, description, mimeType', () => {\n    for (const def of getResourceDefinitions()) {\n      expect(def.uri).toBeTruthy();\n      expect(def.name).toBeTruthy();\n      expect(def.description).toBeTruthy();\n      expect(def.mimeType).toBeTruthy();\n    }\n  });\n});\n\ndescribe('getResourceTemplates', () => {\n  it('returns 6 dynamic templates', () => {\n    const templates = getResourceTemplates();\n    expect(templates).toHaveLength(6);\n  });\n\n  it('includes context, clusters, processes, schema, cluster detail, process detail', () => {\n    const templates = getResourceTemplates();\n    const uris = templates.map(t => t.uriTemplate);\n    expect(uris).toContain('gitnexus://repo/{name}/context');\n    expect(uris).toContain('gitnexus://repo/{name}/clusters');\n    expect(uris).toContain('gitnexus://repo/{name}/processes');\n    expect(uris).toContain('gitnexus://repo/{name}/schema');\n    expect(uris).toContain('gitnexus://repo/{name}/cluster/{clusterName}');\n    expect(uris).toContain('gitnexus://repo/{name}/process/{processName}');\n  });\n\n  it('each template has uriTemplate, name, description, mimeType', () => {\n    for (const tmpl of 
getResourceTemplates()) {\n      expect(tmpl.uriTemplate).toBeTruthy();\n      expect(tmpl.name).toBeTruthy();\n      expect(tmpl.description).toBeTruthy();\n      expect(tmpl.mimeType).toBeTruthy();\n    }\n  });\n});\n\n// ─── readResource URI parsing ────────────────────────────────────────\n\ndescribe('readResource', () => {\n  it('routes gitnexus://repos to listRepos', async () => {\n    const backend = createMockBackend({\n      repos: [\n        { name: 'my-project', path: '/home/me/my-project', indexedAt: '2024-01-01', lastCommit: 'abc1234', stats: { files: 10, nodes: 50, processes: 5 } },\n      ],\n    });\n\n    const result = await readResource('gitnexus://repos', backend);\n    expect(backend.listRepos).toHaveBeenCalled();\n    expect(result).toContain('my-project');\n  });\n\n  it('returns empty message when no repos', async () => {\n    const backend = createMockBackend({ repos: [] });\n    const result = await readResource('gitnexus://repos', backend);\n    expect(result).toContain('No repositories indexed');\n  });\n\n  it('routes gitnexus://setup to setup resource', async () => {\n    const backend = createMockBackend({\n      repos: [\n        { name: 'proj', path: '/tmp/proj', indexedAt: '2024-01-01', lastCommit: 'abc', stats: { nodes: 10, edges: 20, processes: 3 } },\n      ],\n    });\n    const result = await readResource('gitnexus://setup', backend);\n    expect(result).toContain('GitNexus MCP');\n    expect(result).toContain('proj');\n  });\n\n  it('returns fallback when setup has no repos', async () => {\n    const backend = createMockBackend({ repos: [] });\n    const result = await readResource('gitnexus://setup', backend);\n    expect(result).toContain('No repositories indexed');\n  });\n\n  it('routes gitnexus://repo/{name}/context correctly', async () => {\n    const backend = createMockBackend({\n      context: {\n        projectName: 'test-project',\n        stats: { fileCount: 10, functionCount: 50, communityCount: 3, processCount: 
5 },\n      },\n    });\n\n    const result = await readResource('gitnexus://repo/test-project/context', backend);\n    expect(backend.resolveRepo).toHaveBeenCalledWith('test-project');\n    expect(result).toContain('test-project');\n    expect(result).toContain('files: 10');\n  });\n\n  it('returns error when context has no codebase loaded', async () => {\n    const backend = createMockBackend({ context: null });\n    const result = await readResource('gitnexus://repo/test-project/context', backend);\n    expect(result).toContain('error');\n  });\n\n  it('routes gitnexus://repo/{name}/schema to static schema', async () => {\n    const backend = createMockBackend();\n    const result = await readResource('gitnexus://repo/any/schema', backend);\n    expect(result).toContain('GitNexus Graph Schema');\n    expect(result).toContain('CALLS');\n    expect(result).toContain('IMPORTS');\n  });\n\n  it('routes gitnexus://repo/{name}/clusters correctly', async () => {\n    const backend = createMockBackend({\n      clusters: {\n        clusters: [\n          { heuristicLabel: 'Auth', symbolCount: 10, cohesion: 0.9 },\n        ],\n      },\n    });\n    const result = await readResource('gitnexus://repo/test/clusters', backend);\n    expect(backend.queryClusters).toHaveBeenCalledWith('test', 100);\n    expect(result).toContain('Auth');\n  });\n\n  it('returns empty modules when no clusters', async () => {\n    const backend = createMockBackend({ clusters: { clusters: [] } });\n    const result = await readResource('gitnexus://repo/test/clusters', backend);\n    expect(result).toContain('modules: []');\n  });\n\n  it('handles cluster query error gracefully', async () => {\n    const backend = createMockBackend();\n    backend.queryClusters = vi.fn().mockRejectedValue(new Error('DB locked'));\n    const result = await readResource('gitnexus://repo/test/clusters', backend);\n    expect(result).toContain('DB locked');\n  });\n\n  it('routes gitnexus://repo/{name}/processes 
correctly', async () => {\n    const backend = createMockBackend({\n      processes: {\n        processes: [\n          { heuristicLabel: 'LoginFlow', processType: 'intra_community', stepCount: 3 },\n        ],\n      },\n    });\n    const result = await readResource('gitnexus://repo/test/processes', backend);\n    expect(backend.queryProcesses).toHaveBeenCalledWith('test', 50);\n    expect(result).toContain('LoginFlow');\n  });\n\n  it('handles process query error gracefully', async () => {\n    const backend = createMockBackend();\n    backend.queryProcesses = vi.fn().mockRejectedValue(new Error('timeout'));\n    const result = await readResource('gitnexus://repo/test/processes', backend);\n    expect(result).toContain('timeout');\n  });\n\n  it('routes gitnexus://repo/{name}/cluster/{clusterName} correctly', async () => {\n    const backend = createMockBackend({\n      clusterDetail: {\n        cluster: { heuristicLabel: 'Auth', symbolCount: 5, cohesion: 0.85 },\n        members: [\n          { name: 'login', type: 'Function', filePath: 'src/auth.ts' },\n        ],\n      },\n    });\n    const result = await readResource('gitnexus://repo/test/cluster/Auth', backend);\n    expect(backend.queryClusterDetail).toHaveBeenCalledWith('Auth', 'test');\n    expect(result).toContain('Auth');\n    expect(result).toContain('login');\n  });\n\n  it('handles cluster detail error', async () => {\n    const backend = createMockBackend({\n      clusterDetail: { error: 'Cluster not found' },\n    });\n    const result = await readResource('gitnexus://repo/test/cluster/Missing', backend);\n    expect(result).toContain('Cluster not found');\n  });\n\n  it('routes gitnexus://repo/{name}/process/{processName} correctly', async () => {\n    const backend = createMockBackend({\n      processDetail: {\n        process: { heuristicLabel: 'LoginFlow', processType: 'intra_community', stepCount: 3 },\n        steps: [\n          { step: 1, name: 'login', filePath: 'src/auth.ts' },\n       
   { step: 2, name: 'validate', filePath: 'src/validate.ts' },\n        ],\n      },\n    });\n    const result = await readResource('gitnexus://repo/test/process/LoginFlow', backend);\n    expect(backend.queryProcessDetail).toHaveBeenCalledWith('LoginFlow', 'test');\n    expect(result).toContain('LoginFlow');\n    expect(result).toContain('login');\n    expect(result).toContain('validate');\n  });\n\n  it('handles process detail error', async () => {\n    const backend = createMockBackend({\n      processDetail: { error: 'Process not found' },\n    });\n    const result = await readResource('gitnexus://repo/test/process/Missing', backend);\n    expect(result).toContain('Process not found');\n  });\n\n  it('throws for unknown resource URI', async () => {\n    const backend = createMockBackend();\n    await expect(readResource('gitnexus://unknown', backend))\n      .rejects.toThrow('Unknown resource URI');\n  });\n\n  it('throws for unknown repo-scoped resource type', async () => {\n    const backend = createMockBackend();\n    await expect(readResource('gitnexus://repo/test/nonexistent', backend))\n      .rejects.toThrow('Unknown resource');\n  });\n\n  it('decodes URI-encoded repo names', async () => {\n    const backend = createMockBackend();\n    await readResource('gitnexus://repo/my%20project/schema', backend);\n    // Should not throw — the schema resource is static\n  });\n\n  it('decodes URI-encoded cluster names', async () => {\n    const backend = createMockBackend({\n      clusterDetail: {\n        cluster: { heuristicLabel: 'Auth Module', symbolCount: 5 },\n        members: [],\n      },\n    });\n    await readResource('gitnexus://repo/test/cluster/Auth%20Module', backend);\n    expect(backend.queryClusterDetail).toHaveBeenCalledWith('Auth Module', 'test');\n  });\n\n  it('repos resource shows multi-repo hint for multiple repos', async () => {\n    const backend = createMockBackend({\n      repos: [\n        { name: 'proj-a', path: '/a', indexedAt: 
'2024-01-01', lastCommit: 'abc' },\n        { name: 'proj-b', path: '/b', indexedAt: '2024-01-02', lastCommit: 'def' },\n      ],\n    });\n    const result = await readResource('gitnexus://repos', backend);\n    expect(result).toContain('Multiple repos indexed');\n    expect(result).toContain('repo parameter');\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/schema.test.ts",
    "content": "import { describe, it, expect } from 'vitest';\nimport {\n  NODE_TABLES,\n  REL_TABLE_NAME,\n  REL_TYPES,\n  EMBEDDING_TABLE_NAME,\n  NODE_SCHEMA_QUERIES,\n  REL_SCHEMA_QUERIES,\n  SCHEMA_QUERIES,\n  FILE_SCHEMA,\n  FOLDER_SCHEMA,\n  FUNCTION_SCHEMA,\n  CLASS_SCHEMA,\n  INTERFACE_SCHEMA,\n  METHOD_SCHEMA,\n  CODE_ELEMENT_SCHEMA,\n  COMMUNITY_SCHEMA,\n  PROCESS_SCHEMA,\n  RELATION_SCHEMA,\n  EMBEDDING_SCHEMA,\n  CREATE_VECTOR_INDEX_QUERY,\n} from '../../src/core/lbug/schema.js';\n\ndescribe('LadybugDB Schema', () => {\n  describe('NODE_TABLES', () => {\n    it('includes all core node types', () => {\n      const core = ['File', 'Folder', 'Function', 'Class', 'Interface', 'Method', 'CodeElement', 'Community', 'Process'];\n      for (const t of core) {\n        expect(NODE_TABLES).toContain(t);\n      }\n    });\n\n    it('includes multi-language node types', () => {\n      const multiLang = ['Struct', 'Enum', 'Macro', 'Typedef', 'Union', 'Namespace', 'Trait', 'Impl',\n        'TypeAlias', 'Const', 'Static', 'Property', 'Record', 'Delegate', 'Annotation', 'Constructor', 'Template', 'Module'];\n      for (const t of multiLang) {\n        expect(NODE_TABLES).toContain(t);\n      }\n    });\n\n    it('has expected total count', () => {\n      // 9 core + 18 multi-language = 27\n      expect(NODE_TABLES).toHaveLength(27);\n    });\n  });\n\n  describe('REL_TYPES', () => {\n    it('includes all expected relationship types', () => {\n      const expected = ['CONTAINS', 'DEFINES', 'IMPORTS', 'CALLS', 'EXTENDS', 'IMPLEMENTS', 'MEMBER_OF', 'STEP_IN_PROCESS'];\n      for (const t of expected) {\n        expect(REL_TYPES).toContain(t);\n      }\n    });\n  });\n\n  describe('node schema DDL', () => {\n    it.each([\n      ['FILE_SCHEMA', FILE_SCHEMA, 'File'],\n      ['FOLDER_SCHEMA', FOLDER_SCHEMA, 'Folder'],\n      ['FUNCTION_SCHEMA', FUNCTION_SCHEMA, 'Function'],\n      ['CLASS_SCHEMA', CLASS_SCHEMA, 'Class'],\n      ['INTERFACE_SCHEMA', INTERFACE_SCHEMA, 
'Interface'],\n      ['METHOD_SCHEMA', METHOD_SCHEMA, 'Method'],\n      ['CODE_ELEMENT_SCHEMA', CODE_ELEMENT_SCHEMA, 'CodeElement'],\n      ['COMMUNITY_SCHEMA', COMMUNITY_SCHEMA, 'Community'],\n      ['PROCESS_SCHEMA', PROCESS_SCHEMA, 'Process'],\n    ])('%s contains CREATE NODE TABLE for %s', (_, schema, tableName) => {\n      expect(schema).toContain('CREATE NODE TABLE');\n      expect(schema).toContain(tableName);\n      expect(schema).toContain('PRIMARY KEY');\n    });\n\n    it('Function schema has startLine and endLine', () => {\n      expect(FUNCTION_SCHEMA).toContain('startLine INT64');\n      expect(FUNCTION_SCHEMA).toContain('endLine INT64');\n    });\n\n    it('Function schema has isExported', () => {\n      expect(FUNCTION_SCHEMA).toContain('isExported BOOLEAN');\n    });\n\n    it('Community schema has heuristicLabel and cohesion', () => {\n      expect(COMMUNITY_SCHEMA).toContain('heuristicLabel STRING');\n      expect(COMMUNITY_SCHEMA).toContain('cohesion DOUBLE');\n    });\n\n    it('Process schema has processType and stepCount', () => {\n      expect(PROCESS_SCHEMA).toContain('processType STRING');\n      expect(PROCESS_SCHEMA).toContain('stepCount INT32');\n    });\n  });\n\n  describe('relation schema', () => {\n    it('creates a single REL TABLE named CodeRelation', () => {\n      expect(RELATION_SCHEMA).toContain(`CREATE REL TABLE ${REL_TABLE_NAME}`);\n    });\n\n    it('has type, confidence, reason, step properties', () => {\n      expect(RELATION_SCHEMA).toContain('type STRING');\n      expect(RELATION_SCHEMA).toContain('confidence DOUBLE');\n      expect(RELATION_SCHEMA).toContain('reason STRING');\n      expect(RELATION_SCHEMA).toContain('step INT32');\n    });\n\n    it('connects Function to Function (CALLS)', () => {\n      expect(RELATION_SCHEMA).toContain('FROM Function TO Function');\n    });\n\n    it('connects File to Function (CONTAINS/DEFINES)', () => {\n      expect(RELATION_SCHEMA).toContain('FROM File TO Function');\n    });\n\n 
   it('connects symbols to Community (MEMBER_OF)', () => {\n      expect(RELATION_SCHEMA).toContain('FROM Function TO Community');\n      expect(RELATION_SCHEMA).toContain('FROM Class TO Community');\n    });\n\n    it('connects symbols to Process (STEP_IN_PROCESS)', () => {\n      expect(RELATION_SCHEMA).toContain('FROM Function TO Process');\n      expect(RELATION_SCHEMA).toContain('FROM Method TO Process');\n    });\n\n    it('has all FROM/TO pairs needed for HAS_METHOD edges', () => {\n      // HAS_METHOD sources: Class, Interface, Struct, Trait, Impl, Record\n      // HAS_METHOD targets: Method, Constructor (Property is now HAS_PROPERTY)\n      const sources = ['Class', 'Interface'];\n      const backtickSources = ['Struct', 'Trait', 'Impl', 'Record'];\n      const targets = ['Method'];\n      const backtickTargets = ['Constructor'];\n\n      // Non-backtick source → non-backtick target\n      for (const src of sources) {\n        for (const tgt of targets) {\n          expect(RELATION_SCHEMA).toContain(`FROM ${src} TO ${tgt}`);\n        }\n        for (const tgt of backtickTargets) {\n          expect(RELATION_SCHEMA).toContain(`FROM ${src} TO \\`${tgt}\\``);\n        }\n      }\n\n      // Backtick source → all targets\n      for (const src of backtickSources) {\n        for (const tgt of targets) {\n          expect(RELATION_SCHEMA).toContain(`FROM \\`${src}\\` TO ${tgt}`);\n        }\n        for (const tgt of backtickTargets) {\n          expect(RELATION_SCHEMA).toContain(`FROM \\`${src}\\` TO \\`${tgt}\\``);\n        }\n      }\n    });\n  });\n\n  describe('embedding schema', () => {\n    it('creates CodeEmbedding table', () => {\n      expect(EMBEDDING_SCHEMA).toContain(`CREATE NODE TABLE ${EMBEDDING_TABLE_NAME}`);\n      expect(EMBEDDING_SCHEMA).toContain('embedding FLOAT[384]');\n    });\n\n    it('has vector index query', () => {\n      expect(CREATE_VECTOR_INDEX_QUERY).toContain('CREATE_VECTOR_INDEX');\n      
expect(CREATE_VECTOR_INDEX_QUERY).toContain('cosine');\n    });\n  });\n\n  describe('schema query ordering', () => {\n    it('NODE_SCHEMA_QUERIES has correct count', () => {\n      expect(NODE_SCHEMA_QUERIES).toHaveLength(27);\n    });\n\n    it('REL_SCHEMA_QUERIES has one relation table', () => {\n      expect(REL_SCHEMA_QUERIES).toHaveLength(1);\n    });\n\n    it('SCHEMA_QUERIES includes all node + rel + embedding schemas', () => {\n      // 27 node + 1 rel + 1 embedding = 29\n      expect(SCHEMA_QUERIES).toHaveLength(29);\n    });\n\n    it('node schemas come before relation schemas in SCHEMA_QUERIES', () => {\n      const relIndex = SCHEMA_QUERIES.indexOf(RELATION_SCHEMA);\n      const lastNodeIndex = SCHEMA_QUERIES.indexOf(NODE_SCHEMA_QUERIES[NODE_SCHEMA_QUERIES.length - 1]);\n      expect(relIndex).toBeGreaterThan(lastNodeIndex);\n    });\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/security.test.ts",
    "content": "/**\n * P0 Unit Tests: Security Hardening\n *\n * Tests all security hardening in isolation:\n * - Write blocking (CYPHER_WRITE_RE)\n * - Relation type allowlist\n * - Path traversal detection\n * - isWriteQuery wrapper\n * - isTestFilePath patterns\n */\nimport { describe, it, expect } from 'vitest';\nimport {\n  VALID_RELATION_TYPES,\n  VALID_NODE_LABELS,\n  isTestFilePath,\n} from '../../src/mcp/local/local-backend.js';\nimport { CYPHER_WRITE_RE, isWriteQuery } from '../../src/mcp/core/lbug-adapter.js';\n\n// ─── Write-operation blocking (CYPHER_WRITE_RE) ──────────────────────\n\ndescribe('CYPHER_WRITE_RE', () => {\n  const writeKeywords = ['CREATE', 'DELETE', 'SET', 'MERGE', 'REMOVE', 'DROP', 'ALTER', 'COPY', 'DETACH'];\n\n  for (const keyword of writeKeywords) {\n    it(`matches \"${keyword}\" (uppercase)`, () => {\n      expect(CYPHER_WRITE_RE.test(`${keyword} (n:Node)`)).toBe(true);\n    });\n\n    it(`matches \"${keyword.toLowerCase()}\" (lowercase)`, () => {\n      expect(CYPHER_WRITE_RE.test(`${keyword.toLowerCase()} (n:Node)`)).toBe(true);\n    });\n\n    it(`matches \"${keyword[0] + keyword.slice(1).toLowerCase()}\" (mixed case)`, () => {\n      const mixed = keyword[0] + keyword.slice(1).toLowerCase();\n      expect(CYPHER_WRITE_RE.test(`${mixed} (n:Node)`)).toBe(true);\n    });\n  }\n\n  // Safe read queries should NOT be blocked\n  const safeQueries = [\n    'MATCH (n) RETURN n',\n    'MATCH (n:Function) WHERE n.name = \"foo\" RETURN n',\n    'MATCH (a)-[r]->(b) RETURN a, r, b',\n    'OPTIONAL MATCH (n)-[r]->(m) RETURN n, r, m',\n    'MATCH (n) WITH n RETURN n.name',\n    'UNWIND [1,2,3] AS x RETURN x',\n    'MATCH (n) RETURN count(n)',\n    'MATCH (n:Function) WHERE n.filePath CONTAINS \"test\" RETURN n',\n  ];\n\n  for (const query of safeQueries) {\n    it(`does NOT block safe query: \"${query.slice(0, 50)}...\"`, () => {\n      expect(CYPHER_WRITE_RE.test(query)).toBe(false);\n    });\n  }\n\n  it('blocks write keyword within a 
longer query', () => {\n    expect(CYPHER_WRITE_RE.test('MATCH (n) DELETE n')).toBe(true);\n    expect(CYPHER_WRITE_RE.test('MATCH (n:Node) SET n.name = \"x\"')).toBe(true);\n  });\n\n  it('does not match partial word (e.g., \"CREATED\" should not match)', () => {\n    // The trailing \\b requires a word boundary immediately after CREATE.\n    // In \"CREATED_AT\", CREATE is followed by D (a word character), so there\n    // is no boundary and the regex does not match.\n    expect(CYPHER_WRITE_RE.test('CREATED_AT')).toBe(false);\n  });\n});\n\n// ─── isWriteQuery wrapper ─────────────────────────────────────────────\n\ndescribe('isWriteQuery', () => {\n  it('returns true for write queries', () => {\n    expect(isWriteQuery('CREATE (n:Node)')).toBe(true);\n    expect(isWriteQuery('match (n) delete n')).toBe(true);\n  });\n\n  it('returns false for read queries', () => {\n    expect(isWriteQuery('MATCH (n) RETURN n')).toBe(false);\n  });\n\n  it('handles empty string', () => {\n    expect(isWriteQuery('')).toBe(false);\n  });\n\n  // Hardening: regex lastIndex not stuck (non-global regex, but verify)\n  it('works correctly on consecutive calls', () => {\n    expect(isWriteQuery('CREATE (n)')).toBe(true);\n    expect(isWriteQuery('MATCH (n) RETURN n')).toBe(false);\n    expect(isWriteQuery('DROP TABLE foo')).toBe(true);\n    expect(isWriteQuery('MATCH (n) RETURN n')).toBe(false);\n  });\n});\n\n// ─── Relation type allowlist ──────────────────────────────────────────\n\ndescribe('VALID_RELATION_TYPES', () => {\n  it('contains exactly the expected 8 types', () => {\n    expect(VALID_RELATION_TYPES.size).toBe(8);\n    expect(VALID_RELATION_TYPES.has('CALLS')).toBe(true);\n    expect(VALID_RELATION_TYPES.has('IMPORTS')).toBe(true);\n    expect(VALID_RELATION_TYPES.has('EXTENDS')).toBe(true);\n    expect(VALID_RELATION_TYPES.has('IMPLEMENTS')).toBe(true);\n    expect(VALID_RELATION_TYPES.has('HAS_METHOD')).toBe(true);\n    
expect(VALID_RELATION_TYPES.has('HAS_PROPERTY')).toBe(true);\n    expect(VALID_RELATION_TYPES.has('OVERRIDES')).toBe(true);\n    expect(VALID_RELATION_TYPES.has('ACCESSES')).toBe(true);\n  });\n\n  it('rejects invalid relation types', () => {\n    expect(VALID_RELATION_TYPES.has('CONTAINS')).toBe(false);\n    expect(VALID_RELATION_TYPES.has('USES')).toBe(false);\n    expect(VALID_RELATION_TYPES.has('calls')).toBe(false); // case-sensitive\n    expect(VALID_RELATION_TYPES.has('DROP_TABLE')).toBe(false);\n  });\n});\n\n// ─── Valid node labels ───────────────────────────────────────────────\n\ndescribe('VALID_NODE_LABELS', () => {\n  it('contains core node types', () => {\n    for (const label of ['File', 'Folder', 'Function', 'Class', 'Interface', 'Method', 'CodeElement']) {\n      expect(VALID_NODE_LABELS.has(label)).toBe(true);\n    }\n  });\n\n  it('contains meta node types', () => {\n    for (const label of ['Community', 'Process']) {\n      expect(VALID_NODE_LABELS.has(label)).toBe(true);\n    }\n  });\n\n  it('contains multi-language node types', () => {\n    for (const label of ['Struct', 'Enum', 'Macro', 'Trait', 'Impl', 'Namespace']) {\n      expect(VALID_NODE_LABELS.has(label)).toBe(true);\n    }\n  });\n\n  it('rejects invalid labels', () => {\n    expect(VALID_NODE_LABELS.has('InvalidType')).toBe(false);\n    expect(VALID_NODE_LABELS.has('function')).toBe(false); // case-sensitive\n  });\n});\n\n// ─── Path traversal detection ────────────────────────────────────────\n\ndescribe('path traversal (isTestFilePath as proxy for path handling)', () => {\n  it('isTestFilePath matches .test. 
files', () => {\n    expect(isTestFilePath('src/foo.test.ts')).toBe(true);\n    expect(isTestFilePath('src/foo.spec.ts')).toBe(true);\n  });\n\n  it('isTestFilePath matches __tests__ directory', () => {\n    expect(isTestFilePath('src/__tests__/foo.ts')).toBe(true);\n  });\n\n  it('isTestFilePath matches /test/ directory', () => {\n    expect(isTestFilePath('src/test/foo.ts')).toBe(true);\n  });\n\n  it('isTestFilePath handles Windows backslash paths', () => {\n    expect(isTestFilePath('src\\\\test\\\\foo.ts')).toBe(true);\n    expect(isTestFilePath('src\\\\__tests__\\\\bar.ts')).toBe(true);\n  });\n\n  it('isTestFilePath is case-insensitive', () => {\n    expect(isTestFilePath('SRC/TEST/Foo.ts')).toBe(true);\n    expect(isTestFilePath('SRC/Foo.Test.ts')).toBe(true);\n  });\n\n  it('isTestFilePath matches Go test files', () => {\n    expect(isTestFilePath('pkg/handler_test.go')).toBe(true);\n  });\n\n  it('isTestFilePath matches Python test files', () => {\n    expect(isTestFilePath('tests/test_handler.py')).toBe(true);\n    expect(isTestFilePath('pkg/handler_test.py')).toBe(true);\n  });\n\n  it('isTestFilePath returns false for non-test files', () => {\n    expect(isTestFilePath('src/main.ts')).toBe(false);\n    expect(isTestFilePath('src/utils/helper.ts')).toBe(false);\n  });\n});\n\n// ─── Static analysis: parameterized query patterns ────────────────────\n\ndescribe('parameterized query patterns (static analysis)', () => {\n  it('CYPHER_WRITE_RE is not a global regex (no lastIndex issue)', () => {\n    // A global regex would have sticky lastIndex state\n    expect(CYPHER_WRITE_RE.global).toBe(false);\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/sequential-language-availability.test.ts",
    "content": "import { describe, expect, it, vi, beforeEach } from 'vitest';\n\nvi.mock('../../src/core/tree-sitter/parser-loader.js', () => ({\n  loadParser: vi.fn(async () => ({\n    parse: vi.fn(),\n    getLanguage: vi.fn(),\n  })),\n  loadLanguage: vi.fn(async () => undefined),\n  isLanguageAvailable: vi.fn(() => true),\n}));\n\nimport { createKnowledgeGraph } from '../../src/core/graph/graph.js';\nimport { createASTCache } from '../../src/core/ingestion/ast-cache.js';\nimport { processImports } from '../../src/core/ingestion/import-processor.js';\nimport { processCalls } from '../../src/core/ingestion/call-processor.js';\nimport { processHeritage } from '../../src/core/ingestion/heritage-processor.js';\nimport { createResolutionContext } from '../../src/core/ingestion/resolution-context.js';\nimport * as parserLoader from '../../src/core/tree-sitter/parser-loader.js';\n\n\ndescribe('sequential native parser availability', () => {\n  beforeEach(() => {\n    vi.clearAllMocks();\n  });\n\n  it('skips Swift files in processImports when the native parser is unavailable', async () => {\n    vi.mocked(parserLoader.isLanguageAvailable).mockReturnValue(false);\n\n    await expect(processImports(\n      createKnowledgeGraph(),\n      [{ path: 'App.swift', content: 'import Foundation' }],\n      createASTCache(),\n      createResolutionContext(),\n      undefined,\n      '/tmp/repo',\n      ['App.swift'],\n    )).resolves.toBeUndefined();\n\n    expect(parserLoader.loadLanguage).not.toHaveBeenCalled();\n  });\n\n  it('warns when processImports skips files in verbose mode', async () => {\n    const warnSpy = vi.spyOn(console, 'warn').mockImplementation(() => undefined);\n    const previous = process.env.GITNEXUS_VERBOSE;\n    process.env.GITNEXUS_VERBOSE = '1';\n    vi.mocked(parserLoader.isLanguageAvailable).mockReturnValue(false);\n\n    await processImports(\n      createKnowledgeGraph(),\n      [{ path: 'App.swift', content: 'import Foundation' }],\n      
createASTCache(),\n      createResolutionContext(),\n      undefined,\n      '/tmp/repo',\n      ['App.swift'],\n    );\n\n    expect(warnSpy).toHaveBeenCalledWith(\n      '[ingestion] Skipped 1 swift file(s) in import processing — swift parser not available.'\n    );\n\n    warnSpy.mockRestore();\n    if (previous === undefined) {\n      delete process.env.GITNEXUS_VERBOSE;\n    } else {\n      process.env.GITNEXUS_VERBOSE = previous;\n    }\n  });\n\n  it('skips Swift files in processCalls when the native parser is unavailable', async () => {\n    vi.mocked(parserLoader.isLanguageAvailable).mockReturnValue(false);\n\n    await expect(processCalls(\n      createKnowledgeGraph(),\n      [{ path: 'App.swift', content: 'func demo() {}' }],\n      createASTCache(),\n      createResolutionContext(),\n    )).resolves.toEqual([]);\n\n    expect(parserLoader.loadLanguage).not.toHaveBeenCalled();\n  });\n\n  it('warns when processCalls skips files in verbose mode', async () => {\n    const warnSpy = vi.spyOn(console, 'warn').mockImplementation(() => undefined);\n    const previous = process.env.GITNEXUS_VERBOSE;\n    process.env.GITNEXUS_VERBOSE = '1';\n    vi.mocked(parserLoader.isLanguageAvailable).mockReturnValue(false);\n\n    await processCalls(\n      createKnowledgeGraph(),\n      [{ path: 'App.swift', content: 'func demo() {}' }],\n      createASTCache(),\n      createResolutionContext(),\n    );\n\n    expect(warnSpy).toHaveBeenCalledWith(\n      '[ingestion] Skipped 1 swift file(s) in call processing — swift parser not available.'\n    );\n\n    warnSpy.mockRestore();\n    if (previous === undefined) {\n      delete process.env.GITNEXUS_VERBOSE;\n    } else {\n      process.env.GITNEXUS_VERBOSE = previous;\n    }\n  });\n\n  it('skips Swift files in processHeritage when the native parser is unavailable', async () => {\n    vi.mocked(parserLoader.isLanguageAvailable).mockReturnValue(false);\n\n    await expect(processHeritage(\n      createKnowledgeGraph(),\n      
[{ path: 'App.swift', content: 'class AppViewController: UIViewController {}' }],\n      createASTCache(),\n      createResolutionContext(),\n    )).resolves.toBeUndefined();\n\n    expect(parserLoader.loadLanguage).not.toHaveBeenCalled();\n  });\n\n  it('warns when processHeritage skips files in verbose mode', async () => {\n    const warnSpy = vi.spyOn(console, 'warn').mockImplementation(() => undefined);\n    const previous = process.env.GITNEXUS_VERBOSE;\n    process.env.GITNEXUS_VERBOSE = '1';\n    vi.mocked(parserLoader.isLanguageAvailable).mockReturnValue(false);\n\n    await processHeritage(\n      createKnowledgeGraph(),\n      [{ path: 'App.swift', content: 'class AppViewController: UIViewController {}' }],\n      createASTCache(),\n      createResolutionContext(),\n    );\n\n    expect(warnSpy).toHaveBeenCalledWith(\n      '[ingestion] Skipped 1 swift file(s) in heritage processing — swift parser not available.'\n    );\n\n    warnSpy.mockRestore();\n    if (previous === undefined) {\n      delete process.env.GITNEXUS_VERBOSE;\n    } else {\n      process.env.GITNEXUS_VERBOSE = previous;\n    }\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/server.test.ts",
    "content": "/**\n * Unit Tests: MCP Server\n *\n * Tests: createMCPServer from server.ts\n * - Server creation returns a Server instance\n * - Tool handler wraps backend.callTool and appends hints\n * - Tool handler catches errors and returns isError: true\n * - Resource handlers delegate to resources.ts functions\n * - Prompt handlers return expected prompts\n * - Next-step hints cover all tool names\n *\n * NOTE: We test the server handler logic by calling the request handlers\n * directly through the MCP Server's handler dispatch.\n */\nimport { describe, it, expect, vi, beforeAll } from 'vitest';\nimport { createMCPServer } from '../../src/mcp/server.js';\n\n// ─── Mock backend ──────────────────────────────────────────────────\n\nfunction createMockBackend(overrides: Record<string, any> = {}): any {\n  return {\n    callTool: vi.fn().mockResolvedValue({ result: 'ok' }),\n    listRepos: vi.fn().mockResolvedValue([]),\n    resolveRepo: vi.fn().mockResolvedValue({ name: 'test', repoPath: '/tmp/test', lastCommit: 'abc' }),\n    getContext: vi.fn().mockReturnValue(null),\n    queryClusters: vi.fn().mockResolvedValue({ clusters: [] }),\n    queryProcesses: vi.fn().mockResolvedValue({ processes: [] }),\n    queryClusterDetail: vi.fn().mockResolvedValue({ error: 'not found' }),\n    queryProcessDetail: vi.fn().mockResolvedValue({ error: 'not found' }),\n    disconnect: vi.fn().mockResolvedValue(undefined),\n    ...overrides,\n  };\n}\n\n// ─── createMCPServer ─────────────────────────────────────────────────\n\ndescribe('createMCPServer', () => {\n  it('returns a Server instance with expected shape', () => {\n    const backend = createMockBackend();\n    const server = createMCPServer(backend);\n    expect(server).toBeDefined();\n    // Server should have connect/close methods\n    expect(typeof server.connect).toBe('function');\n    expect(typeof server.close).toBe('function');\n  });\n\n  it('server has setRequestHandler method', () => {\n    const backend = 
createMockBackend();\n    const server = createMCPServer(backend);\n    // The server has registered handlers — verify it was created without errors\n    expect(server).toBeTruthy();\n  });\n});\n\n// ─── getNextStepHint (tested indirectly via server tool handler) ──────\n\ndescribe('getNextStepHint (via tool call response)', () => {\n  // We test hints by calling the server's tool handler indirectly.\n  // Since createMCPServer registers handlers on the Server, we verify\n  // hints are appended by checking the tool response format.\n\n  it('does not invoke the backend at registration time', async () => {\n    const backend = createMockBackend({\n      callTool: vi.fn().mockResolvedValue({ processes: [], definitions: [] }),\n    });\n    const server = createMCPServer(backend);\n\n    // We can't easily call handlers directly on the MCP Server,\n    // so we verify the handler was registered by creating the server without error.\n    // The actual hint logic is tested via the integration path.\n    expect(server).toBeDefined();\n    expect(backend.callTool).not.toHaveBeenCalled(); // not called until request\n  });\n});\n\n// ─── Tool handler error handling ──────────────────────────────────────\n\ndescribe('server error handling', () => {\n  it('createMCPServer does not throw for valid backend', () => {\n    const backend = createMockBackend();\n    expect(() => createMCPServer(backend)).not.toThrow();\n  });\n\n  it('createMCPServer reads version from package.json', () => {\n    const backend = createMockBackend();\n    const server = createMCPServer(backend);\n    // Server was created with version from package.json — no crash\n    expect(server).toBeDefined();\n  });\n});\n\n// ─── Prompt definitions ───────────────────────────────────────────────\n\ndescribe('prompt registration', () => {\n  it('server registers detect_impact and generate_map prompts', () => {\n    const backend = createMockBackend();\n    // Creating the server registers all handlers including prompts\n    const server = 
createMCPServer(backend);\n    expect(server).toBeDefined();\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/shared-type-extractors.test.ts",
    "content": "import { describe, it, expect } from 'vitest';\nimport {\n  extractElementTypeFromString,\n  stripNullable,\n  extractReturnTypeName,\n  methodToTypeArgPosition,\n  getContainerDescriptor,\n} from '../../src/core/ingestion/type-extractors/shared.js';\n\n// ---------------------------------------------------------------------------\n// extractElementTypeFromString\n// ---------------------------------------------------------------------------\n\ndescribe('extractElementTypeFromString', () => {\n  // --- Array suffix ---\n  it('strips array suffix: User[] → User', () => {\n    expect(extractElementTypeFromString('User[]')).toBe('User');\n  });\n\n  it('strips array suffix for primitive: string[] → string', () => {\n    expect(extractElementTypeFromString('string[]')).toBe('string');\n  });\n\n  it('returns undefined for just brackets: []', () => {\n    // Empty base after stripping suffix\n    expect(extractElementTypeFromString('[]')).toBeUndefined();\n  });\n\n  it('returns undefined when base is not a simple word: user model[]', () => {\n    expect(extractElementTypeFromString('user model[]')).toBeUndefined();\n  });\n\n  // --- Go slice prefix ---\n  it('strips Go slice prefix: []User → User', () => {\n    expect(extractElementTypeFromString('[]User')).toBe('User');\n  });\n\n  it('strips Go slice prefix for primitive: []string → string', () => {\n    expect(extractElementTypeFromString('[]string')).toBe('string');\n  });\n\n  // --- Swift array sugar ---\n  it('unwraps Swift array sugar: [User] → User', () => {\n    expect(extractElementTypeFromString('[User]')).toBe('User');\n  });\n\n  // --- Generic angle brackets ---\n  it('unwraps single-arg generic angle: Array<User> → User', () => {\n    expect(extractElementTypeFromString('Array<User>')).toBe('User');\n  });\n\n  it('unwraps single-arg generic angle: C++ vector<User> → User', () => {\n    expect(extractElementTypeFromString('vector<User>')).toBe('User');\n  });\n\n  it('unwraps single-arg 
generic angle: Rust Vec<User> → User', () => {\n    expect(extractElementTypeFromString('Vec<User>')).toBe('User');\n  });\n\n  // --- Generic square brackets ---\n  it('unwraps Python subscript: List[User] → User', () => {\n    expect(extractElementTypeFromString('List[User]')).toBe('User');\n  });\n\n  // --- Multi-argument generics ---\n  it('returns last arg by default for Map<String, User>: → User', () => {\n    expect(extractElementTypeFromString('Map<String, User>')).toBe('User');\n  });\n\n  it('returns first arg with pos=first for Map<String, User>: → String', () => {\n    expect(extractElementTypeFromString('Map<String, User>', 'first')).toBe('String');\n  });\n\n  it('returns last arg explicitly for Map<String, User>: → User', () => {\n    expect(extractElementTypeFromString('Map<String, User>', 'last')).toBe('User');\n  });\n\n  // --- Nested generics ---\n  it('returns undefined for nested generics: List<Map<String, User>>', () => {\n    // The inner arg \"Map<String, User>\" is not a simple word (/^\\w+$/ fails)\n    expect(extractElementTypeFromString('List<Map<String, User>>')).toBeUndefined();\n  });\n\n  // --- Guard conditions ---\n  it('returns undefined for empty string', () => {\n    expect(extractElementTypeFromString('')).toBeUndefined();\n  });\n\n  it('returns undefined for string longer than 2048 chars', () => {\n    expect(extractElementTypeFromString('A'.repeat(2049))).toBeUndefined();\n  });\n\n  it('returns undefined at exactly the 2048-char limit', () => {\n    // Length > 2048 is rejected; length === 2048 may or may not parse — test boundary\n    const exactly2048 = 'A'.repeat(2048);\n    // This won't contain valid container syntax, so it won't resolve, but it won't be\n    // rejected by the length guard alone. 
We assert it still returns undefined (no container syntax to resolve).\n    expect(extractElementTypeFromString(exactly2048)).toBeUndefined();\n  });\n\n  it('returns undefined for malformed brackets: Map<String, User]', () => {\n    // Mismatched bracket at depth 0 → malformed\n    expect(extractElementTypeFromString('Map<String, User]')).toBeUndefined();\n  });\n\n  it('returns undefined for just closing angle brackets: >', () => {\n    expect(extractElementTypeFromString('>')).toBeUndefined();\n  });\n\n  it('returns undefined when no brackets are present: User', () => {\n    // No opening bracket → falls through to undefined\n    expect(extractElementTypeFromString('User')).toBeUndefined();\n  });\n\n  it('returns undefined for complex non-word base before []: user model[]', () => {\n    expect(extractElementTypeFromString('user model[]')).toBeUndefined();\n  });\n\n  // --- Edge cases for Swift vs List-subscript disambiguation ---\n  it('does not treat [User] as Swift if it contains angle bracket (impossible case sanity-check)', () => {\n    // [User<X>] — starts with [ ends with ] but contains <, so Swift path is skipped.\n    // Falls to generic extraction: openSquare=0, extracts \"User<X>\" which is not /^\\w+$/.\n    expect(extractElementTypeFromString('[User<X>]')).toBeUndefined();\n  });\n\n  // --- pos='first' on single-arg container falls through to the same result ---\n  it('returns same result for Vec<User> with pos=first', () => {\n    expect(extractElementTypeFromString('Vec<User>', 'first')).toBe('User');\n  });\n\n  // --- Whitespace inside generics ---\n  it('handles whitespace in generic args: Array< User > → User', () => {\n    expect(extractElementTypeFromString('Array< User >')).toBe('User');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// stripNullable\n// ---------------------------------------------------------------------------\n\ndescribe('stripNullable', () => {\n  // --- Nullable union stripping ---\n  it('strips 
\"| null\": User | null → User', () => {\n    expect(stripNullable('User | null')).toBe('User');\n  });\n\n  it('strips \"| undefined\": User | undefined → User', () => {\n    expect(stripNullable('User | undefined')).toBe('User');\n  });\n\n  it('strips both \"| null | undefined\": User | null | undefined → User', () => {\n    expect(stripNullable('User | null | undefined')).toBe('User');\n  });\n\n  // --- Nullable suffix ---\n  it('strips nullable suffix: User? → User', () => {\n    expect(stripNullable('User?')).toBe('User');\n  });\n\n  // --- Genuine union is rejected ---\n  it('returns undefined for genuine union: User | Repo', () => {\n    expect(stripNullable('User | Repo')).toBeUndefined();\n  });\n\n  // --- Bare nullable keywords ---\n  it('returns undefined for bare \"null\"', () => {\n    expect(stripNullable('null')).toBeUndefined();\n  });\n\n  it('returns undefined for bare \"undefined\"', () => {\n    expect(stripNullable('undefined')).toBeUndefined();\n  });\n\n  it('returns undefined for bare \"None\"', () => {\n    expect(stripNullable('None')).toBeUndefined();\n  });\n\n  it('returns undefined for bare \"nil\"', () => {\n    expect(stripNullable('nil')).toBeUndefined();\n  });\n\n  it('returns undefined for bare \"void\"', () => {\n    expect(stripNullable('void')).toBeUndefined();\n  });\n\n  // --- Empty / whitespace ---\n  it('returns undefined for empty string', () => {\n    expect(stripNullable('')).toBeUndefined();\n  });\n\n  it('returns undefined for whitespace-only string', () => {\n    expect(stripNullable('   ')).toBeUndefined();\n  });\n\n  // --- Trimming ---\n  it('trims surrounding whitespace: \"  User  \" → User', () => {\n    expect(stripNullable('  User  ')).toBe('User');\n  });\n\n  // --- No-op for clean type names ---\n  it('returns the type name unchanged when no nullable markers: User → User', () => {\n    expect(stripNullable('User')).toBe('User');\n  });\n\n  // --- All-nullable union is rejected ---\n  it('returns 
undefined when the union is all nullable keywords: null | undefined', () => {\n    expect(stripNullable('null | undefined')).toBeUndefined();\n  });\n\n  // --- Nullable suffix does NOT interact with pipe (order matters) ---\n  it('returns \"User?\" when ? is embedded before pipe: User? | null', () => {\n    // The ? strip only fires when the whole string ends with '?'.\n    // \"User? | null\" ends with 'null', so ? is NOT stripped first.\n    // The pipe branch splits → parts: ['User?', 'null'] → filters 'null' → one part\n    // remaining is 'User?' (not in NULLABLE_KEYWORDS), so it is returned as-is.\n    expect(stripNullable('User? | null')).toBe('User?');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// extractReturnTypeName\n// ---------------------------------------------------------------------------\n\ndescribe('extractReturnTypeName', () => {\n  // --- Simple user-defined type ---\n  it('returns simple type: User → User', () => {\n    expect(extractReturnTypeName('User')).toBe('User');\n  });\n\n  // --- Wrapper generics unwrapping ---\n  it('unwraps Promise: Promise<User> → User', () => {\n    expect(extractReturnTypeName('Promise<User>')).toBe('User');\n  });\n\n  it('unwraps Option: Option<User> → User', () => {\n    expect(extractReturnTypeName('Option<User>')).toBe('User');\n  });\n\n  it('unwraps Result (first arg): Result<User, Error> → User', () => {\n    expect(extractReturnTypeName('Result<User, Error>')).toBe('User');\n  });\n\n  // --- Nullable union / suffix ---\n  it('strips nullable union: User | null → User', () => {\n    expect(extractReturnTypeName('User | null')).toBe('User');\n  });\n\n  it('strips nullable suffix: User? 
→ User', () => {\n    expect(extractReturnTypeName('User?')).toBe('User');\n  });\n\n  // --- Pointer / reference prefix stripping ---\n  it('strips Go pointer prefix: *User → User', () => {\n    expect(extractReturnTypeName('*User')).toBe('User');\n  });\n\n  it('strips Rust reference prefix: &User → User', () => {\n    expect(extractReturnTypeName('&User')).toBe('User');\n  });\n\n  it('strips Rust mutable reference prefix: &mut User → User', () => {\n    expect(extractReturnTypeName('&mut User')).toBe('User');\n  });\n\n  // --- Non-wrapper generic returns base type ---\n  it('returns base type for non-wrapper generic: List<User> → List', () => {\n    expect(extractReturnTypeName('List<User>')).toBe('List');\n  });\n\n  // --- Qualified names ---\n  it('extracts last segment of dot-qualified name: models.User → User', () => {\n    expect(extractReturnTypeName('models.User')).toBe('User');\n  });\n\n  it('extracts last segment of PHP namespace: \\\\App\\\\Models\\\\User → User', () => {\n    expect(extractReturnTypeName('\\\\App\\\\Models\\\\User')).toBe('User');\n  });\n\n  it('extracts last segment of Rust path: models::User → User', () => {\n    expect(extractReturnTypeName('models::User')).toBe('User');\n  });\n\n  // --- Primitives rejected ---\n  it('returns undefined for primitive \"string\"', () => {\n    expect(extractReturnTypeName('string')).toBeUndefined();\n  });\n\n  it('returns undefined for primitive \"int\"', () => {\n    expect(extractReturnTypeName('int')).toBeUndefined();\n  });\n\n  it('returns undefined for primitive \"boolean\"', () => {\n    expect(extractReturnTypeName('boolean')).toBeUndefined();\n  });\n\n  it('returns undefined for primitive \"void\"', () => {\n    expect(extractReturnTypeName('void')).toBeUndefined();\n  });\n\n  // --- Lowercase (non-uppercase start) rejected ---\n  it('returns undefined for lowercase identifier: user', () => {\n    expect(extractReturnTypeName('user')).toBeUndefined();\n  });\n\n  // --- Empty ---\n 
 it('returns undefined for empty string', () => {\n    expect(extractReturnTypeName('')).toBeUndefined();\n  });\n\n  // --- Bare wrapper without type arg ---\n  it('returns undefined for bare wrapper without type arg: Promise', () => {\n    expect(extractReturnTypeName('Promise')).toBeUndefined();\n  });\n\n  it('returns undefined for bare wrapper: Option', () => {\n    expect(extractReturnTypeName('Option')).toBeUndefined();\n  });\n\n  // --- Recursive unwrap (nested wrappers) ---\n  it('recursively unwraps nested wrappers: Future<Option<User>> → User', () => {\n    expect(extractReturnTypeName('Future<Option<User>>')).toBe('User');\n  });\n\n  // --- Rust smart pointer ---\n  it('unwraps Rc: Rc<User> → User', () => {\n    expect(extractReturnTypeName('Rc<User>')).toBe('User');\n  });\n\n  it('unwraps Arc: Arc<User> → User', () => {\n    expect(extractReturnTypeName('Arc<User>')).toBe('User');\n  });\n\n  // --- Guard conditions ---\n  it('returns undefined for input longer than 2048 chars', () => {\n    expect(extractReturnTypeName('A'.repeat(2049))).toBeUndefined();\n  });\n\n  it('returns undefined when depth > 10', () => {\n    expect(extractReturnTypeName('User', 11)).toBeUndefined();\n  });\n\n  // --- Rust lifetimes in Result ---\n  it('skips Rust lifetime as first type arg: Result<\'a, User> → User', () => {\n    // Lifetime as FIRST arg: Result<'a, User> — extractFirstTypeArg skips lifetimes\n    expect(extractReturnTypeName(\"Result<'a, User>\")).toBe('User');\n  });\n\n  it('skips Rust anonymous lifetime: Result<\'_, User>', () => {\n    expect(extractReturnTypeName(\"Result<'_, User>\")).toBe('User');\n  });\n\n  // --- Genuine union is rejected ---\n  it('returns undefined for genuine union: User | Repo', () => {\n    expect(extractReturnTypeName('User | Repo')).toBeUndefined();\n  });\n\n  // --- Uppercase-start constraint ---\n  it('accepts underscore-start identifier: _User', () => {\n    // 
Pattern is /^[A-Z_]\\w*$/ — underscore start is accepted\n    expect(extractReturnTypeName('_User')).toBe('_User');\n  });\n\n  // --- Type with multiple pointer sigils ---\n  it('strips multiple pointer sigils: **User → User', () => {\n    // Regex: /^[&*]+\\s*(mut\\s+)?/ strips all leading & and *\n    expect(extractReturnTypeName('**User')).toBe('User');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// methodToTypeArgPosition\n// ---------------------------------------------------------------------------\n\ndescribe('methodToTypeArgPosition', () => {\n  // --- Known single-element container → always 'last' ---\n  it('returns last for known single-element container method: get on Array', () => {\n    expect(methodToTypeArgPosition('get', 'Array')).toBe('last');\n  });\n\n  it('returns last for iter on Vec (single-element)', () => {\n    expect(methodToTypeArgPosition('iter', 'Vec')).toBe('last');\n  });\n\n  // --- Known map container with key method ---\n  it('returns first for keys on Map', () => {\n    expect(methodToTypeArgPosition('keys', 'Map')).toBe('first');\n  });\n\n  it('returns last for values on Map', () => {\n    expect(methodToTypeArgPosition('values', 'Map')).toBe('last');\n  });\n\n  it('returns last for get on Map (value method)', () => {\n    expect(methodToTypeArgPosition('get', 'Map')).toBe('last');\n  });\n\n  // --- Java-specific keySet ---\n  it('returns first for keySet on LinkedHashMap (Java keyMethods)', () => {\n    expect(methodToTypeArgPosition('keySet', 'LinkedHashMap')).toBe('first');\n  });\n\n  it('returns last for keySet on HashMap (STD_KEY_METHODS has keys, not keySet)', () => {\n    // HashMap uses STD_KEY_METHODS which has 'keys', not 'keySet'\n    // So keySet on HashMap is not a key method → falls to 'last'\n    expect(methodToTypeArgPosition('keySet', 'HashMap')).toBe('last');\n  });\n\n  it('returns first for keySet 
on TreeMap (JAVA_KEY_METHODS)', () => {\n    expect(methodToTypeArgPosition('keySet', 'TreeMap')).toBe('first');\n  });\n\n  // --- C# Keys property ---\n  it('returns first for Keys on Dictionary (C# key method)', () => {\n    expect(methodToTypeArgPosition('Keys', 'Dictionary')).toBe('first');\n  });\n\n  it('returns last for Values on Dictionary (C# value method)', () => {\n    expect(methodToTypeArgPosition('Values', 'Dictionary')).toBe('last');\n  });\n\n  // --- Unknown container falls back to method name heuristic ---\n  it('returns first for keys on unknown container: keys on MyMap', () => {\n    expect(methodToTypeArgPosition('keys', 'MyMap')).toBe('first');\n  });\n\n  it('returns last for get on unknown container: get on MyMap', () => {\n    expect(methodToTypeArgPosition('get', 'MyMap')).toBe('last');\n  });\n\n  it('returns first for keySet on unknown container (fallback heuristic)', () => {\n    expect(methodToTypeArgPosition('keySet', 'MyMap')).toBe('first');\n  });\n\n  it('returns first for Keys on unknown container (fallback heuristic)', () => {\n    expect(methodToTypeArgPosition('Keys', 'SomeUnknownMap')).toBe('first');\n  });\n\n  // --- Undefined method / no container ---\n  it('returns last when method and container are both undefined', () => {\n    expect(methodToTypeArgPosition(undefined)).toBe('last');\n  });\n\n  it('returns last when method is undefined and container is known map', () => {\n    // containerTypeName is Map (arity 2), method is undefined → not in keyMethods → 'last'\n    expect(methodToTypeArgPosition(undefined, 'Map')).toBe('last');\n  });\n\n  it('returns last for unknown method on unknown container', () => {\n    expect(methodToTypeArgPosition('somethingElse', 'FooBar')).toBe('last');\n  });\n\n  // --- No container provided, non-key method ---\n  it('returns last for non-key method with no container', () => {\n    expect(methodToTypeArgPosition('get')).toBe('last');\n  });\n\n  it('returns first for keys with no 
container', () => {\n    expect(methodToTypeArgPosition('keys')).toBe('first');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// getContainerDescriptor\n// ---------------------------------------------------------------------------\n\ndescribe('getContainerDescriptor', () => {\n  // --- Known map-type (arity 2) ---\n  it('returns descriptor with arity 2 for Map', () => {\n    const desc = getContainerDescriptor('Map');\n    expect(desc).toBeDefined();\n    expect(desc!.arity).toBe(2);\n  });\n\n  it('descriptor for Map has keys in keyMethods', () => {\n    const desc = getContainerDescriptor('Map');\n    expect(desc!.keyMethods.has('keys')).toBe(true);\n  });\n\n  it('descriptor for Map has get in valueMethods', () => {\n    const desc = getContainerDescriptor('Map');\n    expect(desc!.valueMethods.has('get')).toBe(true);\n  });\n\n  it('returns descriptor with arity 2 for HashMap', () => {\n    const desc = getContainerDescriptor('HashMap');\n    expect(desc).toBeDefined();\n    expect(desc!.arity).toBe(2);\n  });\n\n  it('returns descriptor with arity 2 for Dictionary (C#)', () => {\n    const desc = getContainerDescriptor('Dictionary');\n    expect(desc).toBeDefined();\n    expect(desc!.arity).toBe(2);\n  });\n\n  // --- Known single-element container (arity 1) ---\n  it('returns descriptor with arity 1 for Vec', () => {\n    const desc = getContainerDescriptor('Vec');\n    expect(desc).toBeDefined();\n    expect(desc!.arity).toBe(1);\n  });\n\n  it('descriptor for Vec has empty keyMethods', () => {\n    const desc = getContainerDescriptor('Vec');\n    expect(desc!.keyMethods.size).toBe(0);\n  });\n\n  it('descriptor for Vec has get in valueMethods', () => {\n    const desc = getContainerDescriptor('Vec');\n    expect(desc!.valueMethods.has('get')).toBe(true);\n  });\n\n  it('returns descriptor with arity 1 for Array', () => {\n    const desc = getContainerDescriptor('Array');\n    expect(desc).toBeDefined();\n    
expect(desc!.arity).toBe(1);\n  });\n\n  it('returns descriptor with arity 1 for List', () => {\n    const desc = getContainerDescriptor('List');\n    expect(desc).toBeDefined();\n    expect(desc!.arity).toBe(1);\n  });\n\n  // --- Unknown type ---\n  it('returns undefined for unknown type: FooBar', () => {\n    expect(getContainerDescriptor('FooBar')).toBeUndefined();\n  });\n\n  it('returns undefined for empty string', () => {\n    expect(getContainerDescriptor('')).toBeUndefined();\n  });\n\n  it('returns undefined for lowercase unknown: mycontainer', () => {\n    expect(getContainerDescriptor('mycontainer')).toBeUndefined();\n  });\n\n  // --- Lowercase built-in containers ---\n  it('returns descriptor for lowercase list (Python)', () => {\n    const desc = getContainerDescriptor('list');\n    expect(desc).toBeDefined();\n    expect(desc!.arity).toBe(1);\n  });\n\n  it('returns descriptor for lowercase dict (Python)', () => {\n    const desc = getContainerDescriptor('dict');\n    expect(desc).toBeDefined();\n    expect(desc!.arity).toBe(2);\n  });\n\n  // --- Java-specific map with keySet ---\n  it('descriptor for TreeMap has keySet in keyMethods', () => {\n    const desc = getContainerDescriptor('TreeMap');\n    expect(desc).toBeDefined();\n    expect(desc!.keyMethods.has('keySet')).toBe(true);\n  });\n\n  it('descriptor for ConcurrentHashMap has keySet in keyMethods', () => {\n    const desc = getContainerDescriptor('ConcurrentHashMap');\n    expect(desc).toBeDefined();\n    expect(desc!.keyMethods.has('keySet')).toBe(true);\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/skill-gen.test.ts",
    "content": "/**\n * Unit & integration tests for the skill file generator.\n *\n * Tests generateSkillFiles() — the only public export from cli/skill-gen.ts.\n * Validates return values (skill metadata), aggregation logic, edge cases,\n * and the on-disk SKILL.md files produced.\n */\nimport { describe, it, expect, beforeEach, afterEach, vi } from 'vitest';\nimport fs from 'fs/promises';\nimport path from 'path';\nimport os from 'os';\nimport { generateSkillFiles } from '../../src/cli/skill-gen.js';\nimport { createKnowledgeGraph } from '../../src/core/graph/graph.js';\nimport type { GraphNode, GraphRelationship, KnowledgeGraph } from '../../src/core/graph/types.js';\nimport type { CommunityNode, CommunityMembership, CommunityDetectionResult } from '../../src/core/ingestion/community-processor.js';\nimport type { ProcessNode, ProcessDetectionResult } from '../../src/core/ingestion/process-processor.js';\nimport type { PipelineResult } from '../../src/types/pipeline.js';\n\n// ============================================================================\n// FIXTURE HELPERS\n// ============================================================================\n\n/** Create a GraphNode with commonly-needed properties */\nfunction makeNode(\n  id: string,\n  name: string,\n  label: GraphNode['label'],\n  filePath: string,\n  startLine: number,\n  isExported: boolean,\n): GraphNode {\n  return {\n    id,\n    label,\n    properties: { name, filePath, startLine, endLine: startLine + 10, isExported },\n  };\n}\n\n/** Create a GraphRelationship between two nodes */\nfunction makeRel(\n  id: string,\n  sourceId: string,\n  targetId: string,\n  type: GraphRelationship['type'],\n): GraphRelationship {\n  return { id, sourceId, targetId, type, confidence: 1.0, reason: '' };\n}\n\n/** Create a CommunityNode with default cohesion */\nfunction makeCommunity(\n  id: string,\n  label: string,\n  symbolCount: number,\n  cohesion: number = 0.75,\n): CommunityNode {\n  return { id, 
label, heuristicLabel: label, cohesion, symbolCount };\n}\n\n/** Create a membership record linking a node to a community */\nfunction makeMembership(nodeId: string, communityId: string): CommunityMembership {\n  return { nodeId, communityId };\n}\n\n/** Create a ProcessNode for testing execution flows */\nfunction makeProcess(\n  id: string,\n  label: string,\n  communities: string[],\n  stepCount: number,\n): ProcessNode {\n  return {\n    id,\n    label,\n    heuristicLabel: label,\n    processType: communities.length > 1 ? 'cross_community' : 'intra_community',\n    stepCount,\n    communities,\n    entryPointId: '',\n    terminalId: '',\n    trace: [],\n  };\n}\n\n/**\n * Assemble a full PipelineResult from individual pieces.\n * Only graph is required; community and process data default to empty.\n */\nfunction buildPipelineResult(opts: {\n  graph: KnowledgeGraph;\n  repoPath: string;\n  communities?: CommunityNode[];\n  memberships?: CommunityMembership[];\n  processes?: ProcessNode[];\n}): PipelineResult {\n  const communityResult: CommunityDetectionResult = {\n    communities: opts.communities ?? [],\n    memberships: opts.memberships ?? [],\n    stats: {\n      totalCommunities: (opts.communities ?? []).length,\n      modularity: 0.5,\n      nodesProcessed: (opts.memberships ?? []).length,\n    },\n  };\n\n  const processResult: ProcessDetectionResult | undefined =\n    opts.processes\n      ? {\n          processes: opts.processes,\n          steps: [],\n          stats: {\n            totalProcesses: opts.processes.length,\n            crossCommunityCount: opts.processes.filter(p => p.processType === 'cross_community').length,\n            avgStepCount: opts.processes.length > 0\n              ? 
opts.processes.reduce((s, p) => s + p.stepCount, 0) / opts.processes.length\n              : 0,\n            entryPointsFound: 0,\n          },\n        }\n      : undefined;\n\n  return {\n    graph: opts.graph,\n    repoPath: opts.repoPath,\n    totalFileCount: 0,\n    communityResult,\n    processResult,\n  };\n}\n\n// ============================================================================\n// TESTS — RETURN VALUES\n// ============================================================================\n\ndescribe('generateSkillFiles — return values', () => {\n  let tmpDir: string;\n\n  beforeEach(async () => {\n    tmpDir = await fs.mkdtemp(path.join(os.tmpdir(), 'gn-skill-test-'));\n    vi.spyOn(console, 'log').mockImplementation(() => {});\n  });\n\n  afterEach(async () => {\n    vi.restoreAllMocks();\n    try {\n      await fs.rm(tmpDir, { recursive: true, force: true });\n    } catch { /* best-effort */ }\n  });\n\n  /**\n   * When memberships array is empty, there is nothing to group into skills.\n   * Should return an empty skills array and the expected output path.\n   */\n  it('returns empty skills when memberships is empty', async () => {\n    const graph = createKnowledgeGraph();\n    const result = await generateSkillFiles(tmpDir, 'TestProject', buildPipelineResult({\n      graph,\n      repoPath: tmpDir,\n      communities: [],\n      memberships: [],\n    }));\n\n    expect(result.skills).toEqual([]);\n    expect(result.outputPath).toBe(path.join(tmpDir, '.claude', 'skills', 'generated'));\n  });\n\n  /**\n   * Communities with fewer than 3 symbols are filtered out.\n   * Three communities each with 2 symbols should all be excluded.\n   */\n  it('returns empty skills when all communities are below threshold', async () => {\n    const graph = createKnowledgeGraph();\n    // Add 6 nodes — 2 per community\n    for (let i = 0; i < 6; i++) {\n      graph.addNode(makeNode(`fn:n${i}`, `n${i}`, 'Function', `${tmpDir}/src/f${i}.ts`, 1, false));\n    }\n\n    
const communities = [\n      makeCommunity('c1', 'Small1', 2),\n      makeCommunity('c2', 'Small2', 2),\n      makeCommunity('c3', 'Small3', 2),\n    ];\n    const memberships = [\n      makeMembership('fn:n0', 'c1'), makeMembership('fn:n1', 'c1'),\n      makeMembership('fn:n2', 'c2'), makeMembership('fn:n3', 'c2'),\n      makeMembership('fn:n4', 'c3'), makeMembership('fn:n5', 'c3'),\n    ];\n\n    const result = await generateSkillFiles(tmpDir, 'TestProject', buildPipelineResult({\n      graph, repoPath: tmpDir, communities, memberships,\n    }));\n\n    expect(result.skills).toEqual([]);\n  });\n\n  /**\n   * A single valid community with 5 nodes across 2 files, some exported.\n   * Should return exactly 1 skill with correct metadata.\n   */\n  it('returns 1 skill for a single valid community', async () => {\n    const graph = createKnowledgeGraph();\n    graph.addNode(makeNode('fn:a', 'alpha', 'Function', `${tmpDir}/src/auth/login.ts`, 1, true));\n    graph.addNode(makeNode('fn:b', 'beta', 'Function', `${tmpDir}/src/auth/login.ts`, 20, false));\n    graph.addNode(makeNode('fn:c', 'gamma', 'Class', `${tmpDir}/src/auth/session.ts`, 1, true));\n    graph.addNode(makeNode('fn:d', 'delta', 'Function', `${tmpDir}/src/auth/session.ts`, 40, false));\n    graph.addNode(makeNode('fn:e', 'epsilon', 'Function', `${tmpDir}/src/auth/session.ts`, 60, true));\n\n    const communities = [makeCommunity('c1', 'Auth', 5, 0.8)];\n    const memberships = ['fn:a', 'fn:b', 'fn:c', 'fn:d', 'fn:e']\n      .map(id => makeMembership(id, 'c1'));\n\n    const result = await generateSkillFiles(tmpDir, 'TestProject', buildPipelineResult({\n      graph, repoPath: tmpDir, communities, memberships,\n    }));\n\n    expect(result.skills).toHaveLength(1);\n    expect(result.skills[0].label).toBe('Auth');\n    expect(result.skills[0].symbolCount).toBe(5);\n    expect(result.skills[0].fileCount).toBe(2);\n    expect(result.skills[0].name).toBe('auth');\n  });\n\n  /**\n   * Two communities with the 
same heuristicLabel should be aggregated\n   * into one skill with summed symbolCount.\n   */\n  it('aggregates communities with same label into one skill', async () => {\n    const graph = createKnowledgeGraph();\n    for (let i = 0; i < 8; i++) {\n      graph.addNode(makeNode(`fn:n${i}`, `n${i}`, 'Function', `${tmpDir}/src/auth/f${i}.ts`, 1, false));\n    }\n\n    const communities = [\n      makeCommunity('c1', 'Auth', 4, 0.7),\n      makeCommunity('c2', 'Auth', 4, 0.9),\n    ];\n    const memberships = [\n      ...['fn:n0', 'fn:n1', 'fn:n2', 'fn:n3'].map(id => makeMembership(id, 'c1')),\n      ...['fn:n4', 'fn:n5', 'fn:n6', 'fn:n7'].map(id => makeMembership(id, 'c2')),\n    ];\n\n    const result = await generateSkillFiles(tmpDir, 'TestProject', buildPipelineResult({\n      graph, repoPath: tmpDir, communities, memberships,\n    }));\n\n    expect(result.skills).toHaveLength(1);\n    expect(result.skills[0].label).toBe('Auth');\n    expect(result.skills[0].symbolCount).toBe(8);\n  });\n\n  /**\n   * The generator caps output at 20 skills regardless of how many\n   * communities pass the threshold.\n   */\n  it('caps skills at 20 even with more valid communities', async () => {\n    const graph = createKnowledgeGraph();\n    const communities: CommunityNode[] = [];\n    const memberships: CommunityMembership[] = [];\n\n    for (let i = 0; i < 25; i++) {\n      const commId = `c${i}`;\n      communities.push(makeCommunity(commId, `Area${i}`, 4));\n      for (let j = 0; j < 4; j++) {\n        const nodeId = `fn:c${i}_n${j}`;\n        graph.addNode(makeNode(nodeId, `func_${i}_${j}`, 'Function', `${tmpDir}/src/area${i}/f${j}.ts`, 1, false));\n        memberships.push(makeMembership(nodeId, commId));\n      }\n    }\n\n    const result = await generateSkillFiles(tmpDir, 'TestProject', buildPipelineResult({\n      graph, repoPath: tmpDir, communities, memberships,\n    }));\n\n    expect(result.skills).toHaveLength(20);\n  });\n\n  /**\n   * Skills should be sorted by 
symbolCount descending so the most\n   * significant community appears first.\n   */\n  it('sorts skills by symbol count descending', async () => {\n    const graph = createKnowledgeGraph();\n    const sizes = [10, 5, 3];\n    const communities: CommunityNode[] = [];\n    const memberships: CommunityMembership[] = [];\n\n    for (let ci = 0; ci < 3; ci++) {\n      const commId = `c${ci}`;\n      communities.push(makeCommunity(commId, `Area${ci}`, sizes[ci]));\n      for (let ni = 0; ni < sizes[ci]; ni++) {\n        const nodeId = `fn:c${ci}_n${ni}`;\n        graph.addNode(makeNode(nodeId, `func_${ci}_${ni}`, 'Function', `${tmpDir}/src/area${ci}/f${ni}.ts`, 1, false));\n        memberships.push(makeMembership(nodeId, commId));\n      }\n    }\n\n    const result = await generateSkillFiles(tmpDir, 'TestProject', buildPipelineResult({\n      graph, repoPath: tmpDir, communities, memberships,\n    }));\n\n    expect(result.skills).toHaveLength(3);\n    expect(result.skills[0].symbolCount).toBe(10);\n    expect(result.skills[1].symbolCount).toBe(5);\n    expect(result.skills[2].symbolCount).toBe(3);\n  });\n\n  /**\n   * When the communities array is empty but memberships exist with nodes\n   * in an \"auth/\" folder, the fallback builder should derive a label from\n   * the most common parent directory.\n   */\n  it('uses fallback builder when communities array is empty', async () => {\n    const graph = createKnowledgeGraph();\n    for (let i = 0; i < 4; i++) {\n      graph.addNode(makeNode(`fn:n${i}`, `authFunc${i}`, 'Function', `${tmpDir}/src/auth/file${i}.ts`, 1, true));\n    }\n\n    const memberships = [0, 1, 2, 3].map(i => makeMembership(`fn:n${i}`, 'comm_0'));\n\n    const result = await generateSkillFiles(tmpDir, 'TestProject', buildPipelineResult({\n      graph, repoPath: tmpDir, communities: [], memberships,\n    }));\n\n    expect(result.skills).toHaveLength(1);\n    expect(result.skills[0].label).toBe('Auth');\n  });\n\n  /**\n   * When processResult is 
undefined, the generator should still work\n   * without crashing — it simply has no execution flows.\n   */\n  it('does not crash when processResult is undefined', async () => {\n    const graph = createKnowledgeGraph();\n    for (let i = 0; i < 4; i++) {\n      graph.addNode(makeNode(`fn:n${i}`, `func${i}`, 'Function', `${tmpDir}/src/core/f${i}.ts`, 1, false));\n    }\n\n    const communities = [makeCommunity('c1', 'Core', 4)];\n    const memberships = [0, 1, 2, 3].map(i => makeMembership(`fn:n${i}`, 'c1'));\n\n    const pipeline: PipelineResult = {\n      graph,\n      repoPath: tmpDir,\n      totalFileCount: 0,\n      communityResult: {\n        communities,\n        memberships,\n        stats: { totalCommunities: 1, modularity: 0.5, nodesProcessed: 4 },\n      },\n      processResult: undefined,\n    };\n\n    const result = await generateSkillFiles(tmpDir, 'TestProject', pipeline);\n    expect(result.skills).toHaveLength(1);\n  });\n\n  /**\n   * Memberships that reference node IDs not present in the graph\n   * should be silently skipped without crashing.\n   */\n  it('does not crash when memberships reference missing nodes', async () => {\n    const graph = createKnowledgeGraph();\n    // Only add 2 real nodes but membership references 4\n    graph.addNode(makeNode('fn:real1', 'real1', 'Function', `${tmpDir}/src/mod/a.ts`, 1, false));\n    graph.addNode(makeNode('fn:real2', 'real2', 'Function', `${tmpDir}/src/mod/b.ts`, 1, false));\n\n    const communities = [makeCommunity('c1', 'Mod', 4)];\n    const memberships = [\n      makeMembership('fn:real1', 'c1'),\n      makeMembership('fn:real2', 'c1'),\n      makeMembership('fn:ghost1', 'c1'),\n      makeMembership('fn:ghost2', 'c1'),\n    ];\n\n    const result = await generateSkillFiles(tmpDir, 'TestProject', buildPipelineResult({\n      graph, repoPath: tmpDir, communities, memberships,\n    }));\n\n    // Community has symbolCount=4 which passes threshold, but only 2 real nodes resolve\n    
expect(result.skills).toHaveLength(1);\n    expect(result.skills[0].fileCount).toBe(2);\n  });\n\n  /**\n   * When the same nodeId appears in two raw community IDs that get\n   * aggregated into the same label, it should not be double-counted\n   * in the file output.\n   */\n  it('does not double-count nodes shared across aggregated communities', async () => {\n    const graph = createKnowledgeGraph();\n    graph.addNode(makeNode('fn:shared', 'shared', 'Function', `${tmpDir}/src/data/shared.ts`, 1, true));\n    graph.addNode(makeNode('fn:a', 'a', 'Function', `${tmpDir}/src/data/a.ts`, 1, false));\n    graph.addNode(makeNode('fn:b', 'b', 'Function', `${tmpDir}/src/data/b.ts`, 1, false));\n\n    // Two raw communities both named \"Data\", both containing fn:shared\n    const communities = [\n      makeCommunity('c1', 'Data', 2, 0.8),\n      makeCommunity('c2', 'Data', 2, 0.7),\n    ];\n    const memberships = [\n      makeMembership('fn:shared', 'c1'),\n      makeMembership('fn:a', 'c1'),\n      makeMembership('fn:shared', 'c2'),\n      makeMembership('fn:b', 'c2'),\n    ];\n\n    const result = await generateSkillFiles(tmpDir, 'TestProject', buildPipelineResult({\n      graph, repoPath: tmpDir, communities, memberships,\n    }));\n\n    expect(result.skills).toHaveLength(1);\n    // fileCount should be 3 (shared.ts, a.ts, b.ts) — not 4\n    expect(result.skills[0].fileCount).toBe(3);\n  });\n});\n\n// ============================================================================\n// TESTS — FILE OUTPUT\n// ============================================================================\n\ndescribe('generateSkillFiles — file output', () => {\n  let tmpDir: string;\n\n  beforeEach(async () => {\n    tmpDir = await fs.mkdtemp(path.join(os.tmpdir(), 'gn-skill-out-'));\n    vi.spyOn(console, 'log').mockImplementation(() => {});\n  });\n\n  afterEach(async () => {\n    vi.restoreAllMocks();\n    try {\n      await fs.rm(tmpDir, { recursive: true, force: true });\n    } catch { 
/* best-effort */ }\n  });\n\n  /** Helper: create a standard 2-community setup for file-output tests */\n  function twoCommSetup() {\n    const graph = createKnowledgeGraph();\n    for (let i = 0; i < 4; i++) {\n      graph.addNode(makeNode(`fn:a${i}`, `alphaFn${i}`, 'Function', `${tmpDir}/src/alpha/f${i}.ts`, i * 10 + 1, i < 2));\n    }\n    for (let i = 0; i < 4; i++) {\n      graph.addNode(makeNode(`fn:b${i}`, `betaFn${i}`, 'Function', `${tmpDir}/src/beta/f${i}.ts`, i * 10 + 1, i < 2));\n    }\n\n    const communities = [\n      makeCommunity('cA', 'Alpha', 4, 0.85),\n      makeCommunity('cB', 'Beta', 4, 0.60),\n    ];\n    const memberships = [\n      ...[0, 1, 2, 3].map(i => makeMembership(`fn:a${i}`, 'cA')),\n      ...[0, 1, 2, 3].map(i => makeMembership(`fn:b${i}`, 'cB')),\n    ];\n\n    return { graph, communities, memberships };\n  }\n\n  /**\n   * Verify that each community produces a directory under generated/\n   * containing a SKILL.md file.\n   */\n  it('creates generated/{name}/SKILL.md for each community', async () => {\n    const { graph, communities, memberships } = twoCommSetup();\n\n    await generateSkillFiles(tmpDir, 'TestProject', buildPipelineResult({\n      graph, repoPath: tmpDir, communities, memberships,\n    }));\n\n    const outputDir = path.join(tmpDir, '.claude', 'skills', 'generated');\n    const alphaSkill = await fs.readFile(path.join(outputDir, 'alpha', 'SKILL.md'), 'utf-8');\n    const betaSkill = await fs.readFile(path.join(outputDir, 'beta', 'SKILL.md'), 'utf-8');\n    expect(alphaSkill.length).toBeGreaterThan(0);\n    expect(betaSkill.length).toBeGreaterThan(0);\n  });\n\n  /**\n   * SKILL.md files should start with YAML frontmatter containing\n   * name and description fields.\n   */\n  it('starts with frontmatter containing name and description', async () => {\n    const { graph, communities, memberships } = twoCommSetup();\n\n    await generateSkillFiles(tmpDir, 'TestProject', buildPipelineResult({\n      graph, repoPath: 
tmpDir, communities, memberships,\n    }));\n\n    const content = await fs.readFile(\n      path.join(tmpDir, '.claude', 'skills', 'generated', 'alpha', 'SKILL.md'),\n      'utf-8',\n    );\n    expect(content.startsWith('---')).toBe(true);\n    expect(content).toContain('name:');\n    expect(content).toContain('description:');\n  });\n\n  /**\n   * A community with exported symbols, processes, and cross-community\n   * CALLS edges should have all optional sections rendered.\n   */\n  it('includes Entry Points, Execution Flows, Connected Areas when data exists', async () => {\n    const graph = createKnowledgeGraph();\n    // Community A: exported symbols\n    for (let i = 0; i < 4; i++) {\n      graph.addNode(makeNode(`fn:a${i}`, `alphaFn${i}`, 'Function', `${tmpDir}/src/alpha/f${i}.ts`, 1, true));\n    }\n    // Community B: target of cross-community calls\n    for (let i = 0; i < 4; i++) {\n      graph.addNode(makeNode(`fn:b${i}`, `betaFn${i}`, 'Function', `${tmpDir}/src/beta/f${i}.ts`, 1, false));\n    }\n    // Cross-community CALLS edge: A -> B\n    graph.addRelationship(makeRel('r1', 'fn:a0', 'fn:b0', 'CALLS'));\n\n    const communities = [\n      makeCommunity('cA', 'Alpha', 4, 0.85),\n      makeCommunity('cB', 'Beta', 4, 0.60),\n    ];\n    const memberships = [\n      ...[0, 1, 2, 3].map(i => makeMembership(`fn:a${i}`, 'cA')),\n      ...[0, 1, 2, 3].map(i => makeMembership(`fn:b${i}`, 'cB')),\n    ];\n\n    const processes = [makeProcess('p1', 'AlphaFlow', ['cA'], 5)];\n\n    await generateSkillFiles(tmpDir, 'TestProject', buildPipelineResult({\n      graph, repoPath: tmpDir, communities, memberships, processes,\n    }));\n\n    const content = await fs.readFile(\n      path.join(tmpDir, '.claude', 'skills', 'generated', 'alpha', 'SKILL.md'),\n      'utf-8',\n    );\n\n    expect(content).toContain('## Entry Points');\n    expect(content).toContain('## Execution Flows');\n    expect(content).toContain('## Connected Areas');\n  });\n\n  /**\n   * A 
community with no exports, no processes, and no cross-community\n   * calls should omit the optional sections entirely.\n   */\n  it('omits Entry Points, Execution Flows, Connected Areas when absent', async () => {\n    const graph = createKnowledgeGraph();\n    for (let i = 0; i < 4; i++) {\n      graph.addNode(makeNode(`fn:n${i}`, `func${i}`, 'Function', `${tmpDir}/src/isolated/f${i}.ts`, 1, false));\n    }\n\n    const communities = [makeCommunity('c1', 'Isolated', 4)];\n    const memberships = [0, 1, 2, 3].map(i => makeMembership(`fn:n${i}`, 'c1'));\n\n    await generateSkillFiles(tmpDir, 'TestProject', buildPipelineResult({\n      graph, repoPath: tmpDir, communities, memberships, processes: [],\n    }));\n\n    const content = await fs.readFile(\n      path.join(tmpDir, '.claude', 'skills', 'generated', 'isolated', 'SKILL.md'),\n      'utf-8',\n    );\n\n    expect(content).not.toContain('## Entry Points');\n    expect(content).not.toContain('## Execution Flows');\n    expect(content).not.toContain('## Connected Areas');\n  });\n\n  /**\n   * Running generateSkillFiles twice with different communities should\n   * clean up the first run's output directories.\n   */\n  it('cleans up previous run output on re-run', async () => {\n    const graph1 = createKnowledgeGraph();\n    for (let i = 0; i < 4; i++) {\n      graph1.addNode(makeNode(`fn:x${i}`, `xFunc${i}`, 'Function', `${tmpDir}/src/first/f${i}.ts`, 1, false));\n    }\n\n    // First run\n    await generateSkillFiles(tmpDir, 'TestProject', buildPipelineResult({\n      graph: graph1, repoPath: tmpDir,\n      communities: [makeCommunity('c1', 'First', 4)],\n      memberships: [0, 1, 2, 3].map(i => makeMembership(`fn:x${i}`, 'c1')),\n    }));\n\n    const outputDir = path.join(tmpDir, '.claude', 'skills', 'generated');\n    const firstRunDirs = await fs.readdir(outputDir);\n    expect(firstRunDirs).toContain('first');\n\n    // Second run with different community\n    const graph2 = createKnowledgeGraph();\n  
  for (let i = 0; i < 4; i++) {\n      graph2.addNode(makeNode(`fn:y${i}`, `yFunc${i}`, 'Function', `${tmpDir}/src/second/f${i}.ts`, 1, false));\n    }\n\n    await generateSkillFiles(tmpDir, 'TestProject', buildPipelineResult({\n      graph: graph2, repoPath: tmpDir,\n      communities: [makeCommunity('c2', 'Second', 4)],\n      memberships: [0, 1, 2, 3].map(i => makeMembership(`fn:y${i}`, 'c2')),\n    }));\n\n    const secondRunDirs = await fs.readdir(outputDir);\n    expect(secondRunDirs).toContain('second');\n    expect(secondRunDirs).not.toContain('first');\n  });\n\n  /**\n   * The rendered SKILL.md should contain a stats line matching the\n   * community's symbol count, file count, and cohesion percentage.\n   */\n  it('contains stats line with correct symbol count, file count, cohesion', async () => {\n    const graph = createKnowledgeGraph();\n    for (let i = 0; i < 5; i++) {\n      graph.addNode(makeNode(`fn:s${i}`, `statsFn${i}`, 'Function', `${tmpDir}/src/stats/f${i}.ts`, 1, false));\n    }\n\n    const communities = [makeCommunity('c1', 'Stats', 5, 0.82)];\n    const memberships = [0, 1, 2, 3, 4].map(i => makeMembership(`fn:s${i}`, 'c1'));\n\n    await generateSkillFiles(tmpDir, 'TestProject', buildPipelineResult({\n      graph, repoPath: tmpDir, communities, memberships,\n    }));\n\n    const content = await fs.readFile(\n      path.join(tmpDir, '.claude', 'skills', 'generated', 'stats', 'SKILL.md'),\n      'utf-8',\n    );\n\n    expect(content).toContain('5 symbols | 5 files | Cohesion: 82%');\n  });\n\n  /**\n   * Labels with special characters (like \"C++ Core\") should be converted\n   * to a valid kebab-case directory name without crashing.\n   */\n  it('handles special characters in label for directory name', async () => {\n    const graph = createKnowledgeGraph();\n    for (let i = 0; i < 4; i++) {\n      graph.addNode(makeNode(`fn:cpp${i}`, `cppFunc${i}`, 'Function', `${tmpDir}/src/cpp/f${i}.ts`, 1, false));\n    }\n\n    const communities 
= [makeCommunity('c1', 'C++ Core', 4)];\n    const memberships = [0, 1, 2, 3].map(i => makeMembership(`fn:cpp${i}`, 'c1'));\n\n    const result = await generateSkillFiles(tmpDir, 'TestProject', buildPipelineResult({\n      graph, repoPath: tmpDir, communities, memberships,\n    }));\n\n    expect(result.skills).toHaveLength(1);\n    // The kebab name should only contain lowercase alphanumerics and dashes\n    expect(result.skills[0].name).toMatch(/^[a-z0-9-]+$/);\n\n    const skillPath = path.join(tmpDir, '.claude', 'skills', 'generated', result.skills[0].name, 'SKILL.md');\n    const content = await fs.readFile(skillPath, 'utf-8');\n    expect(content.length).toBeGreaterThan(0);\n  });\n\n  /**\n   * Nodes with no filePath should not crash the generator.\n   * The skill should still be generated with fileCount 0.\n   */\n  it('handles nodes with no filePath', async () => {\n    const graph = createKnowledgeGraph();\n    for (let i = 0; i < 4; i++) {\n      graph.addNode(makeNode(`fn:nf${i}`, `nofileFunc${i}`, 'Function', '', 0, false));\n    }\n\n    const communities = [makeCommunity('c1', 'NoFile', 4)];\n    const memberships = [0, 1, 2, 3].map(i => makeMembership(`fn:nf${i}`, 'c1'));\n\n    const result = await generateSkillFiles(tmpDir, 'TestProject', buildPipelineResult({\n      graph, repoPath: tmpDir, communities, memberships,\n    }));\n\n    expect(result.skills).toHaveLength(1);\n    expect(result.skills[0].fileCount).toBe(0);\n  });\n\n  /**\n   * Node filePaths containing Windows-style backslashes should be\n   * normalized to forward slashes in the Key Files table (which uses\n   * toRelativePath). 
The Key Symbols table renders raw filePath as-is,\n   * so we only check the Key Files section for normalization.\n   */\n  it('normalizes Windows backslash paths in Key Files output', async () => {\n    const graph = createKnowledgeGraph();\n    for (let i = 0; i < 4; i++) {\n      graph.addNode(makeNode(\n        `fn:w${i}`, `winFunc${i}`, 'Function',\n        `${tmpDir}\\\\src\\\\win\\\\f${i}.ts`, 1, false,\n      ));\n    }\n\n    const communities = [makeCommunity('c1', 'Win', 4)];\n    const memberships = [0, 1, 2, 3].map(i => makeMembership(`fn:w${i}`, 'c1'));\n\n    const result = await generateSkillFiles(tmpDir, 'TestProject', buildPipelineResult({\n      graph, repoPath: tmpDir, communities, memberships,\n    }));\n\n    expect(result.skills).toHaveLength(1);\n\n    const content = await fs.readFile(\n      path.join(tmpDir, '.claude', 'skills', 'generated', 'win', 'SKILL.md'),\n      'utf-8',\n    );\n\n    // Extract the Key Files section between \"## Key Files\" and the next \"##\"\n    const keyFilesMatch = content.match(/## Key Files\\n([\\s\\S]*?)(?=\\n##)/);\n    expect(keyFilesMatch).not.toBeNull();\n    const keyFilesSection = keyFilesMatch![1];\n    // Key Files section should use forward slashes only\n    expect(keyFilesSection).not.toMatch(/\\\\/);\n    // Verify it actually has file paths\n    expect(keyFilesSection).toContain('src/win/f0.ts');\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/staleness.test.ts",
    "content": "/**\n * P2 Unit Tests: Staleness Check\n *\n * Tests: checkStaleness from staleness.ts\n * - HEAD matches → not stale\n * - HEAD differs → stale with commit count\n * - Git failure → fail open (not stale)\n */\nimport { describe, it, expect, vi, afterEach } from 'vitest';\nimport { execFileSync } from 'child_process';\nimport { checkStaleness } from '../../src/mcp/staleness.js';\n\n// We test checkStaleness with a real git repo (the project itself)\n// since mocking execFileSync across ESM modules is complex.\n\ndescribe('checkStaleness', () => {\n  it('returns not stale when HEAD matches lastCommit', () => {\n    // Get the actual HEAD commit of this repo\n    let headCommit: string;\n    try {\n      headCommit = execFileSync(\n        'git', ['rev-parse', 'HEAD'],\n        { encoding: 'utf-8', stdio: ['pipe', 'pipe', 'pipe'] },\n      ).trim();\n    } catch {\n      // If we can't get HEAD (e.g., not in a git repo), skip\n      return;\n    }\n\n    const result = checkStaleness(process.cwd(), headCommit);\n    expect(result.isStale).toBe(false);\n    expect(result.commitsBehind).toBe(0);\n    expect(result.hint).toBeUndefined();\n  });\n\n  it('returns stale when lastCommit is behind HEAD', () => {\n    // Use HEAD~1 — works in shallow clones (GitHub Actions) unlike rev-list --max-parents=0\n    let previousCommit: string;\n    try {\n      previousCommit = execFileSync(\n        'git', ['rev-parse', 'HEAD~1'],\n        { encoding: 'utf-8', stdio: ['pipe', 'pipe', 'pipe'] },\n      ).trim();\n    } catch {\n      return; // Not in a git repo or only 1 commit\n    }\n\n    if (!previousCommit) return;\n\n    const result = checkStaleness(process.cwd(), previousCommit);\n    expect(result.isStale).toBe(true);\n    expect(result.commitsBehind).toBeGreaterThan(0);\n    expect(result.hint).toContain('behind HEAD');\n  });\n\n  it('fails open when git command fails (e.g., invalid path)', () => {\n    const result = checkStaleness('/nonexistent/path', 
'abc123');\n    expect(result.isStale).toBe(false);\n    expect(result.commitsBehind).toBe(0);\n  });\n\n  it('fails open with invalid commit hash', () => {\n    const result = checkStaleness(process.cwd(), 'not-a-real-commit-hash');\n    expect(result.isStale).toBe(false);\n    expect(result.commitsBehind).toBe(0);\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/structure-processor.test.ts",
    "content": "import { describe, it, expect } from 'vitest';\nimport { processStructure } from '../../src/core/ingestion/structure-processor.js';\nimport { createKnowledgeGraph } from '../../src/core/graph/graph.js';\n\ndescribe('processStructure', () => {\n  it('creates File nodes for each path', () => {\n    const graph = createKnowledgeGraph();\n    processStructure(graph, ['src/index.ts', 'src/utils.ts']);\n    const fileNodes = graph.nodes.filter(n => n.label === 'File');\n    expect(fileNodes).toHaveLength(2);\n    expect(fileNodes.map(n => n.properties.name)).toContain('index.ts');\n    expect(fileNodes.map(n => n.properties.name)).toContain('utils.ts');\n  });\n\n  it('creates Folder nodes for directories', () => {\n    const graph = createKnowledgeGraph();\n    processStructure(graph, ['src/lib/utils.ts']);\n    const folderNodes = graph.nodes.filter(n => n.label === 'Folder');\n    expect(folderNodes.map(n => n.properties.name)).toContain('src');\n    expect(folderNodes.map(n => n.properties.name)).toContain('lib');\n  });\n\n  it('creates CONTAINS relationships from parent to child', () => {\n    const graph = createKnowledgeGraph();\n    processStructure(graph, ['src/index.ts']);\n    const rels = graph.relationships.filter(r => r.type === 'CONTAINS');\n    expect(rels).toHaveLength(1);\n    expect(rels[0].sourceId).toBe('Folder:src');\n    expect(rels[0].targetId).toBe('File:src/index.ts');\n  });\n\n  it('creates nested folder hierarchy', () => {\n    const graph = createKnowledgeGraph();\n    processStructure(graph, ['src/core/graph/types.ts']);\n    const folderNodes = graph.nodes.filter(n => n.label === 'Folder');\n    expect(folderNodes).toHaveLength(3); // src, core, graph\n    const rels = graph.relationships.filter(r => r.type === 'CONTAINS');\n    expect(rels).toHaveLength(3); // src->core, core->graph, graph->types.ts\n  });\n\n  it('deduplicates shared folders', () => {\n    const graph = createKnowledgeGraph();\n    
processStructure(graph, ['src/a.ts', 'src/b.ts']);\n    const folderNodes = graph.nodes.filter(n => n.label === 'Folder');\n    // 'src' should only appear once\n    expect(folderNodes.filter(n => n.properties.name === 'src')).toHaveLength(1);\n  });\n\n  it('handles single file without directory', () => {\n    const graph = createKnowledgeGraph();\n    processStructure(graph, ['index.ts']);\n    expect(graph.nodes).toHaveLength(1);\n    expect(graph.nodes[0].label).toBe('File');\n    expect(graph.relationships).toHaveLength(0);\n  });\n\n  it('handles empty paths array', () => {\n    const graph = createKnowledgeGraph();\n    processStructure(graph, []);\n    expect(graph.nodeCount).toBe(0);\n    expect(graph.relationshipCount).toBe(0);\n  });\n\n  it('sets CONTAINS relationship confidence to 1.0', () => {\n    const graph = createKnowledgeGraph();\n    processStructure(graph, ['src/index.ts']);\n    const rels = graph.relationships;\n    for (const rel of rels) {\n      expect(rel.confidence).toBe(1.0);\n    }\n  });\n\n  it('stores filePath as the full cumulative path', () => {\n    const graph = createKnowledgeGraph();\n    processStructure(graph, ['src/core/utils.ts']);\n    const utils = graph.nodes.find(n => n.properties.name === 'utils.ts');\n    expect(utils!.properties.filePath).toBe('src/core/utils.ts');\n    const core = graph.nodes.find(n => n.properties.name === 'core');\n    expect(core!.properties.filePath).toBe('src/core');\n  });\n\n  it('handles deeply nested paths', () => {\n    const graph = createKnowledgeGraph();\n    processStructure(graph, ['a/b/c/d/e.ts']);\n    expect(graph.nodes.filter(n => n.label === 'Folder')).toHaveLength(4);\n    expect(graph.nodes.filter(n => n.label === 'File')).toHaveLength(1);\n  });\n\n  it('generates correct node IDs', () => {\n    const graph = createKnowledgeGraph();\n    processStructure(graph, ['src/index.ts']);\n    expect(graph.getNode('Folder:src')).toBeDefined();\n    
expect(graph.getNode('File:src/index.ts')).toBeDefined();\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/suffix-index-ambiguity.test.ts",
    "content": "/**\n * Unit tests for proximity-based Python import resolution.\n *\n * When two files share the same bare name (e.g. user.py in two different\n * directories), suffixResolve alone picks whichever was indexed first.\n * resolvePythonImport addresses this by checking the importer's own directory\n * first, mirroring Python's sys.path resolution order.\n */\n\nimport { describe, it, expect } from 'vitest';\nimport { buildSuffixIndex, suffixResolve } from '../../src/core/ingestion/resolvers/utils.js';\nimport { resolvePythonImport } from '../../src/core/ingestion/resolvers/python.js';\nimport { resolveImportPath } from '../../src/core/ingestion/resolvers/standard.js';\nimport { SupportedLanguages } from '../../src/config/supported-languages.js';\n\n// ---------------------------------------------------------------------------\n// Helpers\n// ---------------------------------------------------------------------------\n\nfunction makeCtx(files: string[]) {\n  const normalized = files.map(f => f.replace(/\\\\/g, '/'));\n  const allFilesSet = new Set(files);\n  const index = buildSuffixIndex(normalized, files);\n  const cache = new Map<string, string | null>();\n  return { files, normalized, allFilesSet, index, cache };\n}\n\n/** Simulate the full dispatch: resolvePythonImport first, then suffixResolve fallback. */\nfunction resolvePython(\n  currentFile: string,\n  importPath: string,\n  ctx: ReturnType<typeof makeCtx>,\n): string | null {\n  const proximity = resolvePythonImport(currentFile, importPath, ctx.allFilesSet);\n  if (proximity) return proximity;\n  if (importPath.startsWith('.')) return null;\n  const pathLike = importPath.replace(/\\./g, '/');\n  const parts = pathLike.split('/').filter(Boolean);\n  return suffixResolve(parts, ctx.normalized, ctx.files, ctx.index);\n}\n\n/** For non-Python languages, delegate directly to standard resolveImportPath. 
*/\nfunction resolve(\n  currentFile: string,\n  importPath: string,\n  language: SupportedLanguages,\n  ctx: ReturnType<typeof makeCtx>,\n): string | null {\n  return resolveImportPath(\n    currentFile,\n    importPath,\n    ctx.allFilesSet,\n    ctx.files,\n    ctx.normalized,\n    ctx.cache,\n    language,\n    null,\n    ctx.index,\n  );\n}\n\n// ---------------------------------------------------------------------------\n// Python proximity resolution\n// ---------------------------------------------------------------------------\n\ndescribe('resolvePythonImport — proximity-based resolution for Python', () => {\n  it('resolves bare import to same-directory file when multiple files share the name', () => {\n    const ctx = makeCtx([\n      'app/models/user.py',   // indexed first — would win without proximity\n      'app/services/user.py',\n      'app/services/auth.py',\n    ]);\n\n    const result = resolvePython('app/services/auth.py', 'user', ctx);\n    expect(result).toBe('app/services/user.py');\n  });\n\n  it('falls back to suffix index when no same-directory match exists', () => {\n    const ctx = makeCtx([\n      'app/models/user.py',\n      'app/services/auth.py',\n    ]);\n\n    const result = resolvePython('app/services/auth.py', 'user', ctx);\n    expect(result).toBe('app/models/user.py');\n  });\n\n  it('handles importer at repo root (no directory) without crashing', () => {\n    const ctx = makeCtx([\n      'user.py',\n      'auth.py',\n    ]);\n\n    // importerDir is '' — proximity skipped, suffix fallback used\n    const result = resolvePython('auth.py', 'user', ctx);\n    expect(result).toBe('user.py');\n  });\n\n  it('does not apply proximity for multi-segment imports (dotted paths)', () => {\n    const ctx = makeCtx([\n      'app/models/utils/helpers.py',\n      'app/services/auth.py',\n    ]);\n\n    const result = resolvePython('app/services/auth.py', 'utils.helpers', ctx);\n    expect(result).toBe('app/models/utils/helpers.py');\n  
});\n\n  it('resolves same-directory package (user/__init__.py) via proximity', () => {\n    const ctx = makeCtx([\n      'app/models/user/__init__.py',  // indexed first — would win without proximity\n      'app/services/user/__init__.py',\n      'app/services/auth.py',\n    ]);\n\n    const result = resolvePython('app/services/auth.py', 'user', ctx);\n    expect(result).toBe('app/services/user/__init__.py');\n  });\n\n  it('falls back to suffixResolve for __init__.py when no same-directory package exists', () => {\n    const ctx = makeCtx([\n      'app/models/__init__.py',\n      'app/services/auth.py',\n    ]);\n\n    const result = resolvePython('app/services/auth.py', 'models', ctx);\n    expect(result).toBe('app/models/__init__.py');\n  });\n\n  it('handles Windows-style backslash paths in currentFile without crashing', () => {\n    const ctx = makeCtx([\n      'app/services/user.py',\n      'app/services/auth.py',\n    ]);\n\n    const result = resolvePython('app\\\\services\\\\auth.py', 'user', ctx);\n    expect(result).toBe('app/services/user.py');\n  });\n\n  it('resolves PEP 328 relative import (.user) to same-directory file', () => {\n    const ctx = makeCtx([\n      'app/services/user.py',\n      'app/services/auth.py',\n    ]);\n\n    const result = resolvePython('app/services/auth.py', '.user', ctx);\n    expect(result).toBe('app/services/user.py');\n  });\n\n  it('returns null when relative import dots exceed directory depth (PEP 328 over-traversal)', () => {\n    // auth.py is at depth 1 (one directory: 'app').\n    // '...user' has 3 dots → 2 upward hops required, but only 1 directory level exists.\n    // CPython raises ImportError; we return null.\n    const ctx = makeCtx([\n      'app/auth.py',\n      'user.py',\n    ]);\n\n    const result = resolvePython('app/auth.py', '...user', ctx);\n    expect(result).toBeNull();\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Namespace packages (PEP 420) 
— directory with no __init__.py\n// ---------------------------------------------------------------------------\n\ndescribe('resolvePythonImport — namespace packages (no __init__.py)', () => {\n  // user/ exists as a namespace package: no __init__.py, only submodules.\n  const files = [\n    'app/services/auth.py',\n    'app/services/user/model.py',   // user/ has no __init__.py\n    'app/services/user/queries.py',\n  ];\n\n  it('bare import of namespace package returns null (no file to resolve to)', () => {\n    // `import user` — proximity finds neither user.py nor user/__init__.py.\n    // suffixResolve also finds nothing because no file is literally named \"user\".\n    // This is expected: CPython itself sets user.__file__ = None for namespace packages.\n    const ctx = makeCtx(files);\n    const result = resolvePython('app/services/auth.py', 'user', ctx);\n    expect(result).toBeNull();\n  });\n\n  it('submodule form resolves correctly via suffixResolve fallback', () => {\n    // `import user.model` — multi-segment, proximity skipped, suffixResolve finds user/model.py.\n    const ctx = makeCtx(files);\n    const result = resolvePython('app/services/auth.py', 'user.model', ctx);\n    expect(result).toBe('app/services/user/model.py');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Ruby: bare require does NOT use proximity\n// ---------------------------------------------------------------------------\n\ndescribe('resolveImportPath — Ruby bare require does not use proximity', () => {\n  it('returns first-indexed file for bare require (Ruby $LOAD_PATH excludes current directory)', () => {\n    const ctx = makeCtx([\n      'lib/core/helpers.rb',   // indexed first\n      'lib/utils/helpers.rb',\n      'lib/utils/formatter.rb',\n    ]);\n\n    // Ruby bare `require 'helpers'` searches $LOAD_PATH — current directory not included.\n    // No proximity bias; first-indexed file is returned, same as before.\n    const 
result = resolve('lib/utils/formatter.rb', 'helpers', SupportedLanguages.Ruby, ctx);\n    expect(result).toBe('lib/core/helpers.rb');\n  });\n\n  it('resolves require_relative (dot-prefixed) to same-directory file via generic relative resolver', () => {\n    const ctx = makeCtx([\n      'lib/utils/helpers.rb',\n      'lib/utils/formatter.rb',\n    ]);\n\n    // require_relative arrives as \"./<path>\" — caught by generic relative resolver, not proximity\n    const result = resolve('lib/utils/formatter.rb', './helpers', SupportedLanguages.Ruby, ctx);\n    expect(result).toBe('lib/utils/helpers.rb');\n  });\n});\n\n// ---------------------------------------------------------------------------\n// Other languages: no proximity applied\n// ---------------------------------------------------------------------------\n\ndescribe('resolveImportPath — no proximity for Java or TypeScript', () => {\n  it('Java: fully-qualified import resolves to the correct file via unique suffix', () => {\n    const ctx = makeCtx([\n      'src/com/a/User.java',\n      'src/com/b/User.java',\n      'src/com/b/Service.java',\n    ]);\n\n    // \"com.b.User\" → \"com/b/User\" → unique suffix; no ambiguity\n    const result = resolve('src/com/b/Service.java', 'com.b.User', SupportedLanguages.Java, ctx);\n    expect(result).toBe('src/com/b/User.java');\n  });\n\n  it('TypeScript: relative import resolves via generic relative resolver', () => {\n    const ctx = makeCtx([\n      'src/services/user.ts',\n      'src/services/auth.ts',\n      'src/models/user.ts',\n    ]);\n\n    // \"./user\" is explicit relative — resolved before proximity is checked\n    const result = resolve('src/services/auth.ts', './user', SupportedLanguages.TypeScript, ctx);\n    expect(result).toBe('src/services/user.ts');\n  });\n});\n\n"
  },
  {
    "path": "gitnexus/test/unit/symbol-resolver.test.ts",
    "content": "import { describe, it, expect, beforeEach } from 'vitest';\nimport { createResolutionContext, type ResolutionContext } from '../../src/core/ingestion/resolution-context.js';\nimport { createSymbolTable } from '../../src/core/ingestion/symbol-table.js';\nimport { isFileInPackageDir } from '../../src/core/ingestion/import-processor.js';\n\n/** Helper: resolve to single best definition (refuses ambiguous global) */\nconst resolveOne = (ctx: ResolutionContext, name: string, fromFile: string) => {\n  const tiered = ctx.resolve(name, fromFile);\n  if (!tiered) return null;\n  if (tiered.tier === 'global' && tiered.candidates.length !== 1) return null;\n  return tiered.candidates[0];\n};\n\n/** Helper: resolve with tier metadata (refuses ambiguous global) */\nconst resolveInternal = (ctx: ResolutionContext, name: string, fromFile: string) => {\n  const tiered = ctx.resolve(name, fromFile);\n  if (!tiered) return null;\n  if (tiered.tier === 'global' && tiered.candidates.length !== 1) return null;\n  return { definition: tiered.candidates[0], tier: tiered.tier, candidateCount: tiered.candidates.length };\n};\n\ndescribe('ResolutionContext.resolve — resolveSymbol compatibility', () => {\n  let ctx: ResolutionContext;\n\n  beforeEach(() => {\n    ctx = createResolutionContext();\n  });\n\n  describe('Tier 1: Same-file resolution', () => {\n    it('resolves symbol defined in the same file', () => {\n      ctx.symbols.add('src/models/user.ts', 'User', 'Class:src/models/user.ts:User', 'Class');\n\n      const result = resolveOne(ctx, 'User', 'src/models/user.ts');\n\n      expect(result).not.toBeNull();\n      expect(result!.nodeId).toBe('Class:src/models/user.ts:User');\n      expect(result!.filePath).toBe('src/models/user.ts');\n      expect(result!.type).toBe('Class');\n    });\n\n    it('prefers same-file over imported definition', () => {\n      ctx.symbols.add('src/local.ts', 'Config', 'Class:src/local.ts:Config', 'Class');\n      
ctx.symbols.add('src/shared.ts', 'Config', 'Class:src/shared.ts:Config', 'Class');\n      ctx.importMap.set('src/local.ts', new Set(['src/shared.ts']));\n\n      const result = resolveOne(ctx, 'Config', 'src/local.ts');\n\n      expect(result!.nodeId).toBe('Class:src/local.ts:Config');\n      expect(result!.filePath).toBe('src/local.ts');\n    });\n  });\n\n  describe('Tier 2: Import-scoped resolution', () => {\n    it('resolves symbol from an imported file', () => {\n      ctx.symbols.add('src/services/auth.ts', 'AuthService', 'Class:src/services/auth.ts:AuthService', 'Class');\n      ctx.importMap.set('src/controllers/login.ts', new Set(['src/services/auth.ts']));\n\n      const result = resolveOne(ctx, 'AuthService', 'src/controllers/login.ts');\n\n      expect(result).not.toBeNull();\n      expect(result!.nodeId).toBe('Class:src/services/auth.ts:AuthService');\n      expect(result!.filePath).toBe('src/services/auth.ts');\n    });\n\n    it('prefers imported definition over non-imported with same name', () => {\n      ctx.symbols.add('src/services/logger.ts', 'Logger', 'Class:src/services/logger.ts:Logger', 'Class');\n      ctx.symbols.add('src/testing/mock-logger.ts', 'Logger', 'Class:src/testing/mock-logger.ts:Logger', 'Class');\n      ctx.importMap.set('src/app.ts', new Set(['src/services/logger.ts']));\n\n      const result = resolveOne(ctx, 'Logger', 'src/app.ts');\n\n      expect(result!.nodeId).toBe('Class:src/services/logger.ts:Logger');\n      expect(result!.filePath).toBe('src/services/logger.ts');\n    });\n\n    it('handles file with no imports — unique global falls through', () => {\n      ctx.symbols.add('src/utils.ts', 'Helper', 'Class:src/utils.ts:Helper', 'Class');\n\n      const result = resolveOne(ctx, 'Helper', 'src/app.ts');\n\n      expect(result).not.toBeNull();\n      expect(result!.nodeId).toBe('Class:src/utils.ts:Helper');\n    });\n  });\n\n  describe('Tier 3: Global resolution', () => {\n    it('resolves unique global when not in 
imports', () => {\n      ctx.symbols.add('src/external/base.ts', 'BaseModel', 'Class:src/external/base.ts:BaseModel', 'Class');\n      ctx.importMap.set('src/app.ts', new Set(['src/other.ts']));\n\n      const result = resolveOne(ctx, 'BaseModel', 'src/app.ts');\n\n      expect(result).not.toBeNull();\n      expect(result!.nodeId).toBe('Class:src/external/base.ts:BaseModel');\n    });\n\n    it('refuses ambiguous global — returns null when multiple candidates exist', () => {\n      ctx.symbols.add('src/a.ts', 'Config', 'Class:src/a.ts:Config', 'Class');\n      ctx.symbols.add('src/b.ts', 'Config', 'Class:src/b.ts:Config', 'Class');\n\n      const result = resolveOne(ctx, 'Config', 'src/other.ts');\n\n      expect(result).toBeNull();\n    });\n\n    it('ctx.resolve returns all candidates at global tier (consumers decide)', () => {\n      ctx.symbols.add('src/a.ts', 'Config', 'Class:src/a.ts:Config', 'Class');\n      ctx.symbols.add('src/b.ts', 'Config', 'Class:src/b.ts:Config', 'Class');\n\n      const tiered = ctx.resolve('Config', 'src/other.ts');\n\n      expect(tiered).not.toBeNull();\n      expect(tiered!.tier).toBe('global');\n      expect(tiered!.candidates.length).toBe(2);\n    });\n  });\n\n  describe('null cases', () => {\n    it('returns null for unknown symbol', () => {\n      const result = resolveOne(ctx, 'NonExistent', 'src/app.ts');\n      expect(result).toBeNull();\n    });\n\n    it('returns null when symbol table is empty', () => {\n      const result = resolveOne(ctx, 'Anything', 'src/app.ts');\n      expect(result).toBeNull();\n    });\n  });\n\n  describe('type preservation', () => {\n    it('preserves Interface type for heritage resolution', () => {\n      ctx.symbols.add('src/interfaces.ts', 'ILogger', 'Interface:src/interfaces.ts:ILogger', 'Interface');\n      ctx.importMap.set('src/app.ts', new Set(['src/interfaces.ts']));\n\n      const result = resolveOne(ctx, 'ILogger', 'src/app.ts');\n\n      expect(result!.type).toBe('Interface');\n    
});\n\n    it('preserves Class type for heritage resolution', () => {\n      ctx.symbols.add('src/base.ts', 'BaseService', 'Class:src/base.ts:BaseService', 'Class');\n      ctx.importMap.set('src/app.ts', new Set(['src/base.ts']));\n\n      const result = resolveOne(ctx, 'BaseService', 'src/app.ts');\n\n      expect(result!.type).toBe('Class');\n    });\n  });\n\n  describe('heritage-specific scenarios', () => {\n    it('resolves C# interface vs class ambiguity via imports', () => {\n      ctx.symbols.add('src/logging/ilogger.cs', 'ILogger', 'Interface:src/logging/ilogger.cs:ILogger', 'Interface');\n      ctx.symbols.add('src/testing/ilogger.cs', 'ILogger', 'Class:src/testing/ilogger.cs:ILogger', 'Class');\n      ctx.importMap.set('src/services/auth.cs', new Set(['src/logging/ilogger.cs']));\n\n      const result = resolveOne(ctx, 'ILogger', 'src/services/auth.cs');\n\n      expect(result!.type).toBe('Interface');\n      expect(result!.filePath).toBe('src/logging/ilogger.cs');\n    });\n\n    it('resolves parent class from imported file for extends', () => {\n      ctx.symbols.add('src/api/controller.ts', 'UserController', 'Class:src/api/controller.ts:UserController', 'Class');\n      ctx.symbols.add('src/base/controller.ts', 'BaseController', 'Class:src/base/controller.ts:BaseController', 'Class');\n      ctx.importMap.set('src/api/controller.ts', new Set(['src/base/controller.ts']));\n\n      const result = resolveOne(ctx, 'BaseController', 'src/api/controller.ts');\n\n      expect(result!.nodeId).toBe('Class:src/base/controller.ts:BaseController');\n    });\n  });\n});\n\ndescribe('ResolutionContext.resolve — tier metadata', () => {\n  let ctx: ResolutionContext;\n\n  beforeEach(() => {\n    ctx = createResolutionContext();\n  });\n\n  it('returns same-file tier for Tier 1 match', () => {\n    ctx.symbols.add('src/a.ts', 'Foo', 'Class:src/a.ts:Foo', 'Class');\n\n    const result = resolveInternal(ctx, 'Foo', 'src/a.ts');\n\n    expect(result).not.toBeNull();\n   
 expect(result!.tier).toBe('same-file');\n    expect(result!.candidateCount).toBe(1);\n    expect(result!.definition.nodeId).toBe('Class:src/a.ts:Foo');\n  });\n\n  it('returns import-scoped tier for Tier 2 match', () => {\n    ctx.symbols.add('src/logger.ts', 'Logger', 'Class:src/logger.ts:Logger', 'Class');\n    ctx.symbols.add('src/mock.ts', 'Logger', 'Class:src/mock.ts:Logger', 'Class');\n    ctx.importMap.set('src/app.ts', new Set(['src/logger.ts']));\n\n    const result = resolveInternal(ctx, 'Logger', 'src/app.ts');\n\n    expect(result).not.toBeNull();\n    expect(result!.tier).toBe('import-scoped');\n  });\n\n  it('returns global tier for Tier 3 match', () => {\n    ctx.symbols.add('src/only.ts', 'Singleton', 'Class:src/only.ts:Singleton', 'Class');\n\n    const result = resolveInternal(ctx, 'Singleton', 'src/other.ts');\n\n    expect(result).not.toBeNull();\n    expect(result!.tier).toBe('global');\n    expect(result!.candidateCount).toBe(1);\n  });\n\n  it('returns null for ambiguous global — refuses to guess', () => {\n    ctx.symbols.add('src/a.ts', 'Config', 'Class:src/a.ts:Config', 'Class');\n    ctx.symbols.add('src/b.ts', 'Config', 'Class:src/b.ts:Config', 'Class');\n\n    const result = resolveInternal(ctx, 'Config', 'src/other.ts');\n\n    expect(result).toBeNull();\n  });\n\n  it('returns null for unknown symbol', () => {\n    const result = resolveInternal(ctx, 'Ghost', 'src/any.ts');\n    expect(result).toBeNull();\n  });\n\n  it('Tier 1 wins over Tier 2 — same-file takes priority', () => {\n    ctx.symbols.add('src/app.ts', 'Util', 'Function:src/app.ts:Util', 'Function');\n    ctx.symbols.add('src/lib.ts', 'Util', 'Function:src/lib.ts:Util', 'Function');\n    ctx.importMap.set('src/app.ts', new Set(['src/lib.ts']));\n\n    const result = resolveInternal(ctx, 'Util', 'src/app.ts');\n\n    expect(result!.tier).toBe('same-file');\n    expect(result!.definition.filePath).toBe('src/app.ts');\n  });\n});\n\ndescribe('negative tests — ambiguous 
refusal per language family', () => {\n  let ctx: ResolutionContext;\n\n  beforeEach(() => {\n    ctx = createResolutionContext();\n  });\n\n  it('TS/JS: two Logger definitions with no import → returns null', () => {\n    ctx.symbols.add('src/services/logger.ts', 'Logger', 'Class:src/services/logger.ts:Logger', 'Class');\n    ctx.symbols.add('src/testing/logger.ts', 'Logger', 'Class:src/testing/logger.ts:Logger', 'Class');\n\n    const result = resolveOne(ctx, 'Logger', 'src/app.ts');\n    expect(result).toBeNull();\n  });\n\n  it('Java: same-named class in different packages, no import → returns null', () => {\n    ctx.symbols.add('com/example/models/User.java', 'User', 'Class:com/example/models/User.java:User', 'Class');\n    ctx.symbols.add('com/example/dto/User.java', 'User', 'Class:com/example/dto/User.java:User', 'Class');\n\n    const result = resolveOne(ctx, 'User', 'com/example/services/UserService.java');\n    expect(result).toBeNull();\n  });\n\n  it('C/C++: type defined in transitively-included header → returns null (not reachable via direct import)', () => {\n    ctx.symbols.add('src/c.h', 'Widget', 'Struct:src/c.h:Widget', 'Struct');\n    ctx.symbols.add('src/d.h', 'Widget', 'Struct:src/d.h:Widget', 'Struct');\n    ctx.importMap.set('src/a.c', new Set(['src/b.h']));\n\n    const result = resolveOne(ctx, 'Widget', 'src/a.c');\n    expect(result).toBeNull();\n  });\n\n  it('C#: two IService interfaces in different namespaces, no import → returns null', () => {\n    ctx.symbols.add('src/Services/IService.cs', 'IService', 'Interface:src/Services/IService.cs:IService', 'Interface');\n    ctx.symbols.add('src/Testing/IService.cs', 'IService', 'Interface:src/Testing/IService.cs:IService', 'Interface');\n\n    const result = resolveOne(ctx, 'IService', 'src/App.cs');\n    expect(result).toBeNull();\n  });\n});\n\ndescribe('heritage false-positive guard', () => {\n  let ctx: ResolutionContext;\n\n  beforeEach(() => {\n    ctx = createResolutionContext();\n  
});\n\n  it('null from resolve prevents false edge — generateId fallback produces synthetic ID, not wrong match', () => {\n    ctx.symbols.add('src/api/base.ts', 'BaseController', 'Class:src/api/base.ts:BaseController', 'Class');\n    ctx.symbols.add('src/testing/base.ts', 'BaseController', 'Class:src/testing/base.ts:BaseController', 'Class');\n\n    const result = resolveOne(ctx, 'BaseController', 'src/routes/admin.ts');\n    expect(result).toBeNull();\n\n    ctx.importMap.set('src/routes/admin.ts', new Set(['src/api/base.ts']));\n    const resolved = resolveOne(ctx, 'BaseController', 'src/routes/admin.ts');\n    expect(resolved).not.toBeNull();\n    expect(resolved!.filePath).toBe('src/api/base.ts');\n  });\n});\n\ndescribe('lookupExactFull', () => {\n  it('returns full SymbolDefinition for same-file lookup via O(1) direct storage', () => {\n    const symbolTable = createSymbolTable();\n    symbolTable.add('src/models/user.ts', 'User', 'Class:src/models/user.ts:User', 'Class');\n\n    const result = symbolTable.lookupExactFull('src/models/user.ts', 'User');\n\n    expect(result).not.toBeUndefined();\n    expect(result!.nodeId).toBe('Class:src/models/user.ts:User');\n    expect(result!.filePath).toBe('src/models/user.ts');\n    expect(result!.type).toBe('Class');\n  });\n\n  it('returns undefined for non-existent symbol', () => {\n    const symbolTable = createSymbolTable();\n    const result = symbolTable.lookupExactFull('src/app.ts', 'NonExistent');\n    expect(result).toBeUndefined();\n  });\n\n  it('returns undefined for wrong file', () => {\n    const symbolTable = createSymbolTable();\n    symbolTable.add('src/a.ts', 'Foo', 'Class:src/a.ts:Foo', 'Class');\n\n    const result = symbolTable.lookupExactFull('src/b.ts', 'Foo');\n    expect(result).toBeUndefined();\n  });\n\n  it('shares same object reference between fileIndex and globalIndex', () => {\n    const symbolTable = createSymbolTable();\n    symbolTable.add('src/x.ts', 'Bar', 'Class:src/x.ts:Bar', 
'Class');\n\n    const fromExact = symbolTable.lookupExactFull('src/x.ts', 'Bar');\n    const fromFuzzy = symbolTable.lookupFuzzy('Bar')[0];\n\n    expect(fromExact).toBe(fromFuzzy);\n  });\n\n  it('preserves optional callable metadata on stored definitions', () => {\n    const symbolTable = createSymbolTable();\n    symbolTable.add('src/math.ts', 'sum', 'Function:src/math.ts:sum', 'Function', { parameterCount: 2 });\n\n    const fromExact = symbolTable.lookupExactFull('src/math.ts', 'sum');\n    const fromFuzzy = symbolTable.lookupFuzzy('sum')[0];\n\n    expect(fromExact?.parameterCount).toBe(2);\n    expect(fromFuzzy.parameterCount).toBe(2);\n    expect(fromExact).toBe(fromFuzzy);\n  });\n});\n\ndescribe('isFileInPackageDir', () => {\n  it('matches file directly in the package directory', () => {\n    expect(isFileInPackageDir('internal/auth/handler.go', '/internal/auth/')).toBe(true);\n  });\n\n  it('matches with leading path segments', () => {\n    expect(isFileInPackageDir('myrepo/internal/auth/handler.go', '/internal/auth/')).toBe(true);\n    expect(isFileInPackageDir('src/github.com/user/repo/internal/auth/handler.go', '/internal/auth/')).toBe(true);\n  });\n\n  it('rejects files in subdirectories', () => {\n    expect(isFileInPackageDir('internal/auth/middleware/jwt.go', '/internal/auth/')).toBe(false);\n  });\n\n  it('matches any file extension in the directory', () => {\n    expect(isFileInPackageDir('internal/auth/README.md', '/internal/auth/')).toBe(true);\n    expect(isFileInPackageDir('Models/User.cs', '/Models/')).toBe(true);\n    expect(isFileInPackageDir('internal/auth/handler_test.go', '/internal/auth/')).toBe(true);\n  });\n\n  it('rejects files not in the package', () => {\n    expect(isFileInPackageDir('internal/db/connection.go', '/internal/auth/')).toBe(false);\n  });\n\n  it('handles backslash paths (Windows)', () => {\n    expect(isFileInPackageDir('internal\\\\auth\\\\handler.go', '/internal/auth/')).toBe(true);\n  });\n\n  it('matches C# 
namespace directories', () => {\n    expect(isFileInPackageDir('MyProject/Models/User.cs', '/MyProject/Models/')).toBe(true);\n    expect(isFileInPackageDir('MyProject/Models/Order.cs', '/MyProject/Models/')).toBe(true);\n    expect(isFileInPackageDir('MyProject/Models/Sub/Nested.cs', '/MyProject/Models/')).toBe(false);\n  });\n});\n\ndescribe('Tier 2b: PackageMap resolution (Go)', () => {\n  let ctx: ResolutionContext;\n\n  beforeEach(() => {\n    ctx = createResolutionContext();\n  });\n\n  it('resolves symbol via PackageMap when not in ImportMap', () => {\n    ctx.symbols.add('internal/auth/handler.go', 'HandleLogin', 'Function:internal/auth/handler.go:HandleLogin', 'Function');\n    ctx.packageMap.set('cmd/server/main.go', new Set(['/internal/auth/']));\n\n    const result = ctx.resolve('HandleLogin', 'cmd/server/main.go');\n\n    expect(result).not.toBeNull();\n    expect(result!.tier).toBe('import-scoped');\n    expect(result!.candidates[0].filePath).toBe('internal/auth/handler.go');\n  });\n\n  it('does not resolve symbol from wrong package', () => {\n    ctx.symbols.add('internal/db/connection.go', 'Connect', 'Function:internal/db/connection.go:Connect', 'Function');\n    ctx.packageMap.set('cmd/server/main.go', new Set(['/internal/auth/']));\n\n    const result = ctx.resolve('Connect', 'cmd/server/main.go');\n\n    // Not in imported package, single global def → global tier\n    expect(result).not.toBeNull();\n    expect(result!.tier).toBe('global');\n  });\n\n  it('Tier 2a (ImportMap) takes precedence over Tier 2b (PackageMap)', () => {\n    ctx.symbols.add('internal/auth/handler.go', 'Validate', 'Function:internal/auth/handler.go:Validate', 'Function');\n    ctx.symbols.add('internal/db/validator.go', 'Validate', 'Function:internal/db/validator.go:Validate', 'Function');\n\n    ctx.importMap.set('cmd/server/main.go', new Set(['internal/db/validator.go']));\n    ctx.packageMap.set('cmd/server/main.go', new Set(['/internal/auth/']));\n\n    const result = 
ctx.resolve('Validate', 'cmd/server/main.go');\n\n    expect(result).not.toBeNull();\n    expect(result!.tier).toBe('import-scoped');\n    expect(result!.candidates[0].filePath).toBe('internal/db/validator.go');\n  });\n\n  it('resolves both symbols in same imported package', () => {\n    ctx.symbols.add('internal/auth/handler.go', 'Run', 'Function:internal/auth/handler.go:Run', 'Function');\n    ctx.symbols.add('internal/auth/worker.go', 'Run', 'Function:internal/auth/worker.go:Run', 'Function');\n    ctx.packageMap.set('cmd/main.go', new Set(['/internal/auth/']));\n\n    const result = ctx.resolve('Run', 'cmd/main.go');\n\n    expect(result).not.toBeNull();\n    expect(result!.tier).toBe('import-scoped');\n    expect(result!.candidates.length).toBe(2);\n  });\n\n  it('returns null without packageMap when ambiguous', () => {\n    ctx.symbols.add('internal/auth/handler.go', 'X', 'Function:internal/auth/handler.go:X', 'Function');\n    ctx.symbols.add('internal/db/handler.go', 'X', 'Function:internal/db/handler.go:X', 'Function');\n\n    const result = resolveInternal(ctx, 'X', 'cmd/main.go');\n\n    // No import or package match, 2 candidates → ambiguous → null\n    expect(result).toBeNull();\n  });\n});\n\ndescribe('per-file cache', () => {\n  let ctx: ResolutionContext;\n\n  beforeEach(() => {\n    ctx = createResolutionContext();\n  });\n\n  it('caches results per file', () => {\n    ctx.symbols.add('src/a.ts', 'Foo', 'Class:src/a.ts:Foo', 'Class');\n\n    ctx.enableCache('src/a.ts');\n    const r1 = ctx.resolve('Foo', 'src/a.ts');\n    const r2 = ctx.resolve('Foo', 'src/a.ts');\n    ctx.clearCache();\n\n    // Same object reference from cache\n    expect(r1).toBe(r2);\n    expect(ctx.getStats().cacheHits).toBe(1);\n    expect(ctx.getStats().cacheMisses).toBe(1);\n  });\n\n  it('resolve works without cache enabled', () => {\n    ctx.symbols.add('src/a.ts', 'Foo', 'Class:src/a.ts:Foo', 'Class');\n\n    const result = ctx.resolve('Foo', 'src/a.ts');\n\n    
expect(result).not.toBeNull();\n    expect(result!.candidates[0].nodeId).toBe('Class:src/a.ts:Foo');\n    expect(ctx.getStats().cacheHits).toBe(0);\n  });\n\n  it('cache does not leak across files', () => {\n    ctx.symbols.add('src/a.ts', 'Foo', 'Class:src/a.ts:Foo', 'Class');\n\n    ctx.enableCache('src/a.ts');\n    ctx.resolve('Foo', 'src/a.ts'); // cached for a.ts\n\n    // Resolve from different file — should NOT use cache\n    const r = ctx.resolve('Foo', 'src/b.ts');\n    ctx.clearCache();\n\n    // Foo is not in src/b.ts, so same-file fails. Falls to global with 1 candidate.\n    expect(r!.tier).toBe('global');\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/symbol-table.test.ts",
    "content": "import { describe, it, expect, beforeEach } from 'vitest';\nimport { createSymbolTable, type SymbolTable } from '../../src/core/ingestion/symbol-table.js';\n\ndescribe('SymbolTable', () => {\n  let table: SymbolTable;\n\n  beforeEach(() => {\n    table = createSymbolTable();\n  });\n\n  describe('add', () => {\n    it('registers a symbol in the table', () => {\n      table.add('src/index.ts', 'main', 'func:main', 'Function');\n      expect(table.getStats().globalSymbolCount).toBe(1);\n      expect(table.getStats().fileCount).toBe(1);\n    });\n\n    it('handles multiple symbols in the same file', () => {\n      table.add('src/index.ts', 'main', 'func:main', 'Function');\n      table.add('src/index.ts', 'helper', 'func:helper', 'Function');\n      expect(table.getStats().fileCount).toBe(1);\n      expect(table.getStats().globalSymbolCount).toBe(2);\n    });\n\n    it('handles same name in different files', () => {\n      table.add('src/a.ts', 'init', 'func:a:init', 'Function');\n      table.add('src/b.ts', 'init', 'func:b:init', 'Function');\n      expect(table.getStats().fileCount).toBe(2);\n      // Global index groups by name, so 'init' has one entry with two definitions\n      expect(table.getStats().globalSymbolCount).toBe(1);\n    });\n\n    it('allows duplicate adds for same file and name (overloads preserved)', () => {\n      table.add('src/a.ts', 'foo', 'func:foo:1', 'Function');\n      table.add('src/a.ts', 'foo', 'func:foo:2', 'Function');\n      // File index stores both overloads; lookupExact returns first\n      expect(table.lookupExact('src/a.ts', 'foo')).toBe('func:foo:1');\n      // lookupExactAll returns all overloads\n      expect(table.lookupExactAll('src/a.ts', 'foo')).toHaveLength(2);\n      // Global index appends\n      expect(table.lookupFuzzy('foo')).toHaveLength(2);\n    });\n  });\n\n  describe('lookupExact', () => {\n    it('finds a symbol by file path and name', () => {\n      table.add('src/index.ts', 'main', 
'func:main', 'Function');\n      expect(table.lookupExact('src/index.ts', 'main')).toBe('func:main');\n    });\n\n    it('returns undefined for unknown file', () => {\n      table.add('src/index.ts', 'main', 'func:main', 'Function');\n      expect(table.lookupExact('src/other.ts', 'main')).toBeUndefined();\n    });\n\n    it('returns undefined for unknown symbol name', () => {\n      table.add('src/index.ts', 'main', 'func:main', 'Function');\n      expect(table.lookupExact('src/index.ts', 'notExist')).toBeUndefined();\n    });\n\n    it('returns undefined for empty table', () => {\n      expect(table.lookupExact('src/index.ts', 'main')).toBeUndefined();\n    });\n  });\n\n  describe('lookupFuzzy', () => {\n    it('finds all definitions of a symbol across files', () => {\n      table.add('src/a.ts', 'render', 'func:a:render', 'Function');\n      table.add('src/b.ts', 'render', 'func:b:render', 'Method');\n      const results = table.lookupFuzzy('render');\n      expect(results).toHaveLength(2);\n      expect(results[0]).toEqual({ nodeId: 'func:a:render', filePath: 'src/a.ts', type: 'Function' });\n      expect(results[1]).toEqual({ nodeId: 'func:b:render', filePath: 'src/b.ts', type: 'Method' });\n    });\n\n    it('returns empty array for unknown symbol', () => {\n      expect(table.lookupFuzzy('nonexistent')).toEqual([]);\n    });\n\n    it('returns empty array for empty table', () => {\n      expect(table.lookupFuzzy('anything')).toEqual([]);\n    });\n  });\n\n  describe('getStats', () => {\n    it('returns zero counts for empty table', () => {\n      expect(table.getStats()).toEqual({ fileCount: 0, globalSymbolCount: 0 });\n    });\n\n    it('tracks unique file count correctly', () => {\n      table.add('src/a.ts', 'foo', 'func:foo', 'Function');\n      table.add('src/a.ts', 'bar', 'func:bar', 'Function');\n      table.add('src/b.ts', 'baz', 'func:baz', 'Function');\n      expect(table.getStats().fileCount).toBe(2);\n    });\n\n    it('tracks unique global 
symbol names', () => {\n      table.add('src/a.ts', 'foo', 'func:a:foo', 'Function');\n      table.add('src/b.ts', 'foo', 'func:b:foo', 'Function');\n      table.add('src/a.ts', 'bar', 'func:a:bar', 'Function');\n      // 'foo' and 'bar' are 2 unique global names\n      expect(table.getStats().globalSymbolCount).toBe(2);\n    });\n  });\n\n  describe('returnType metadata', () => {\n    it('stores returnType in SymbolDefinition', () => {\n      table.add('src/utils.ts', 'getUser', 'func:getUser', 'Function', { returnType: 'User' });\n      const def = table.lookupExactFull('src/utils.ts', 'getUser');\n      expect(def).toBeDefined();\n      expect(def!.returnType).toBe('User');\n    });\n\n    it('returnType is available via lookupFuzzy', () => {\n      table.add('src/utils.ts', 'getUser', 'func:getUser', 'Function', { returnType: 'Promise<User>' });\n      const results = table.lookupFuzzy('getUser');\n      expect(results).toHaveLength(1);\n      expect(results[0].returnType).toBe('Promise<User>');\n    });\n\n    it('omits returnType when not provided', () => {\n      table.add('src/utils.ts', 'helper', 'func:helper', 'Function');\n      const def = table.lookupExactFull('src/utils.ts', 'helper');\n      expect(def).toBeDefined();\n      expect(def!.returnType).toBeUndefined();\n    });\n\n    it('stores returnType alongside parameterCount and ownerId', () => {\n      table.add('src/models.ts', 'save', 'method:save', 'Method', {\n        parameterCount: 1,\n        returnType: 'boolean',\n        ownerId: 'class:User',\n      });\n      const def = table.lookupExactFull('src/models.ts', 'save');\n      expect(def).toBeDefined();\n      expect(def!.parameterCount).toBe(1);\n      expect(def!.returnType).toBe('boolean');\n      expect(def!.ownerId).toBe('class:User');\n    });\n  });\n\n  describe('declaredType metadata', () => {\n    it('stores declaredType in SymbolDefinition', () => {\n      table.add('src/models.ts', 'address', 'prop:address', 'Property', {\n   
     declaredType: 'Address',\n        ownerId: 'class:User',\n      });\n      const def = table.lookupExactFull('src/models.ts', 'address');\n      expect(def).toBeDefined();\n      expect(def!.declaredType).toBe('Address');\n    });\n\n    it('omits declaredType when not provided', () => {\n      table.add('src/models.ts', 'name', 'prop:name', 'Property', { ownerId: 'class:User' });\n      const def = table.lookupExactFull('src/models.ts', 'name');\n      expect(def).toBeDefined();\n      expect(def!.declaredType).toBeUndefined();\n    });\n  });\n\n  describe('Property exclusion from globalIndex', () => {\n    it('Property with ownerId is NOT added to globalIndex', () => {\n      table.add('src/models.ts', 'name', 'prop:name', 'Property', {\n        declaredType: 'string',\n        ownerId: 'class:User',\n      });\n      // Should not appear in fuzzy lookup\n      expect(table.lookupFuzzy('name')).toEqual([]);\n      // But should still be in fileIndex\n      expect(table.lookupExact('src/models.ts', 'name')).toBe('prop:name');\n    });\n\n    it('Property without ownerId IS added to globalIndex', () => {\n      table.add('src/models.ts', 'name', 'prop:name', 'Property');\n      expect(table.lookupFuzzy('name')).toHaveLength(1);\n    });\n\n    it('Property without declaredType is still added to fieldByOwner index only (not globalIndex)', () => {\n      table.add('src/models.ts', 'name', 'prop:name', 'Property', { ownerId: 'class:User' });\n      // No declaredType → still indexed in fieldByOwner (for write-access tracking\n      // in dynamically-typed languages like Ruby/JS), but excluded from globalIndex\n      expect(table.lookupFuzzy('name')).toEqual([]);\n      expect(table.lookupFieldByOwner('class:User', 'name')).toEqual({\n        nodeId: 'prop:name',\n        filePath: 'src/models.ts',\n        type: 'Property',\n        ownerId: 'class:User',\n      });\n    });\n\n    it('non-Property types are always added to globalIndex', () => {\n      
table.add('src/models.ts', 'save', 'method:save', 'Method', { ownerId: 'class:User' });\n      expect(table.lookupFuzzy('save')).toHaveLength(1);\n    });\n  });\n\n  describe('conditional callableIndex invalidation', () => {\n    it('adding a Function invalidates callableIndex', () => {\n      table.add('src/a.ts', 'foo', 'func:foo', 'Function', { returnType: 'void' });\n      // First call builds the index\n      expect(table.lookupFuzzyCallable('foo')).toHaveLength(1);\n      // Add another callable — should invalidate and rebuild\n      table.add('src/a.ts', 'bar', 'func:bar', 'Method');\n      expect(table.lookupFuzzyCallable('bar')).toHaveLength(1);\n    });\n\n    it('adding a Property does NOT invalidate callableIndex', () => {\n      table.add('src/a.ts', 'foo', 'func:foo', 'Function');\n      // Build callable index\n      expect(table.lookupFuzzyCallable('foo')).toHaveLength(1);\n      // Add a Property — callable index should still be valid (foo still found)\n      table.add('src/models.ts', 'name', 'prop:name', 'Property', {\n        declaredType: 'string',\n        ownerId: 'class:User',\n      });\n      expect(table.lookupFuzzyCallable('foo')).toHaveLength(1);\n    });\n\n    it('adding a Class does NOT invalidate callableIndex', () => {\n      table.add('src/a.ts', 'foo', 'func:foo', 'Function');\n      expect(table.lookupFuzzyCallable('foo')).toHaveLength(1);\n      table.add('src/models.ts', 'User', 'class:User', 'Class');\n      // Class is not callable, should not trigger rebuild\n      expect(table.lookupFuzzyCallable('foo')).toHaveLength(1);\n    });\n  });\n\n  describe('lookupFieldByOwner', () => {\n    it('finds a Property by ownerNodeId and fieldName', () => {\n      table.add('src/models.ts', 'address', 'prop:address', 'Property', {\n        declaredType: 'Address',\n        ownerId: 'class:User',\n      });\n      const def = table.lookupFieldByOwner('class:User', 'address');\n      expect(def).toBeDefined();\n      
expect(def!.declaredType).toBe('Address');\n      expect(def!.nodeId).toBe('prop:address');\n    });\n\n    it('returns undefined for unknown owner', () => {\n      table.add('src/models.ts', 'address', 'prop:address', 'Property', {\n        declaredType: 'Address',\n        ownerId: 'class:User',\n      });\n      expect(table.lookupFieldByOwner('class:Unknown', 'address')).toBeUndefined();\n    });\n\n    it('returns undefined for unknown field name', () => {\n      table.add('src/models.ts', 'address', 'prop:address', 'Property', {\n        declaredType: 'Address',\n        ownerId: 'class:User',\n      });\n      expect(table.lookupFieldByOwner('class:User', 'email')).toBeUndefined();\n    });\n\n    it('returns undefined for empty table', () => {\n      expect(table.lookupFieldByOwner('class:User', 'name')).toBeUndefined();\n    });\n\n    it('indexes Property without declaredType (for dynamic language write-access)', () => {\n      table.add('src/models.ts', 'name', 'prop:name', 'Property', { ownerId: 'class:User' });\n      expect(table.lookupFieldByOwner('class:User', 'name')).toEqual({\n        nodeId: 'prop:name',\n        filePath: 'src/models.ts',\n        type: 'Property',\n        ownerId: 'class:User',\n      });\n    });\n\n    it('distinguishes fields by owner', () => {\n      table.add('src/models.ts', 'name', 'prop:user:name', 'Property', {\n        declaredType: 'string',\n        ownerId: 'class:User',\n      });\n      table.add('src/models.ts', 'name', 'prop:repo:name', 'Property', {\n        declaredType: 'RepoName',\n        ownerId: 'class:Repo',\n      });\n      expect(table.lookupFieldByOwner('class:User', 'name')!.declaredType).toBe('string');\n      expect(table.lookupFieldByOwner('class:Repo', 'name')!.declaredType).toBe('RepoName');\n    });\n  });\n\n  describe('lookupFuzzyCallable', () => {\n    it('returns only callable types (Function, Method, Constructor)', () => {\n      table.add('src/a.ts', 'foo', 'func:foo', 'Function');\n  
    table.add('src/a.ts', 'bar', 'method:bar', 'Method');\n      table.add('src/a.ts', 'Baz', 'ctor:Baz', 'Constructor');\n      table.add('src/a.ts', 'User', 'class:User', 'Class');\n      table.add('src/a.ts', 'IUser', 'iface:IUser', 'Interface');\n      expect(table.lookupFuzzyCallable('foo')).toHaveLength(1);\n      expect(table.lookupFuzzyCallable('bar')).toHaveLength(1);\n      expect(table.lookupFuzzyCallable('Baz')).toHaveLength(1);\n      expect(table.lookupFuzzyCallable('User')).toEqual([]);\n      expect(table.lookupFuzzyCallable('IUser')).toEqual([]);\n    });\n\n    it('returns empty array for unknown name', () => {\n      table.add('src/a.ts', 'foo', 'func:foo', 'Function');\n      expect(table.lookupFuzzyCallable('unknown')).toEqual([]);\n    });\n\n    it('rebuilds index after adding new callable', () => {\n      table.add('src/a.ts', 'foo', 'func:foo', 'Function');\n      expect(table.lookupFuzzyCallable('foo')).toHaveLength(1);\n      expect(table.lookupFuzzyCallable('bar')).toEqual([]);\n      table.add('src/a.ts', 'bar', 'func:bar', 'Function');\n      expect(table.lookupFuzzyCallable('bar')).toHaveLength(1);\n    });\n\n    it('filters non-callable types from mixed name entries', () => {\n      table.add('src/a.ts', 'save', 'func:save', 'Function');\n      table.add('src/b.ts', 'save', 'class:save', 'Class');\n      const callables = table.lookupFuzzyCallable('save');\n      expect(callables).toHaveLength(1);\n      expect(callables[0].type).toBe('Function');\n    });\n  });\n\n  describe('clear', () => {\n    it('resets all state including fieldByOwner', () => {\n      table.add('src/a.ts', 'foo', 'func:foo', 'Function');\n      table.add('src/b.ts', 'bar', 'func:bar', 'Function');\n      table.add('src/models.ts', 'address', 'prop:address', 'Property', {\n        declaredType: 'Address',\n        ownerId: 'class:User',\n      });\n      table.clear();\n      expect(table.getStats()).toEqual({ fileCount: 0, globalSymbolCount: 0 });\n      
expect(table.lookupExact('src/a.ts', 'foo')).toBeUndefined();\n      expect(table.lookupFuzzy('foo')).toEqual([]);\n      expect(table.lookupFieldByOwner('class:User', 'address')).toBeUndefined();\n      expect(table.lookupFuzzyCallable('foo')).toEqual([]);\n    });\n\n    it('allows re-adding after clear', () => {\n      table.add('src/a.ts', 'foo', 'func:foo', 'Function');\n      table.clear();\n      table.add('src/b.ts', 'bar', 'func:bar', 'Function');\n      expect(table.getStats()).toEqual({ fileCount: 1, globalSymbolCount: 1 });\n    });\n\n    it('resets callableIndex so first lookup after clear rebuilds from scratch', () => {\n      table.add('src/a.ts', 'foo', 'func:foo', 'Function');\n      // Populate the lazy callable index\n      expect(table.lookupFuzzyCallable('foo')).toHaveLength(1);\n      table.clear();\n      // After clear the callable index must be gone — empty table returns nothing\n      expect(table.lookupFuzzyCallable('foo')).toEqual([]);\n      // Re-adding and looking up rebuilds successfully\n      table.add('src/a.ts', 'foo', 'func:foo2', 'Function');\n      expect(table.lookupFuzzyCallable('foo')).toHaveLength(1);\n      expect(table.lookupFuzzyCallable('foo')[0].nodeId).toBe('func:foo2');\n    });\n  });\n\n  describe('metadata spread branches (individual optional fields)', () => {\n    it('stores only parameterCount when no other metadata is given', () => {\n      table.add('src/utils.ts', 'compute', 'func:compute', 'Function', { parameterCount: 3 });\n      const def = table.lookupExactFull('src/utils.ts', 'compute');\n      expect(def).toBeDefined();\n      expect(def!.parameterCount).toBe(3);\n      expect(def!.returnType).toBeUndefined();\n      expect(def!.declaredType).toBeUndefined();\n      expect(def!.ownerId).toBeUndefined();\n    });\n\n    it('stores only ownerId on a Method (non-Property) — still added to globalIndex', () => {\n      table.add('src/models.ts', 'save', 'method:save', 'Method', { ownerId: 'class:Repo' 
});\n      const def = table.lookupExactFull('src/models.ts', 'save');\n      expect(def).toBeDefined();\n      expect(def!.ownerId).toBe('class:Repo');\n      expect(def!.parameterCount).toBeUndefined();\n      expect(def!.returnType).toBeUndefined();\n      expect(def!.declaredType).toBeUndefined();\n      // Non-Property with ownerId must still appear in globalIndex\n      expect(table.lookupFuzzy('save')).toHaveLength(1);\n    });\n\n    it('stores declaredType alone (no ownerId) — symbol goes to globalIndex', () => {\n      // A Variable/Property without an owner should still be globally visible\n      table.add('src/config.ts', 'DEFAULT_TIMEOUT', 'var:DEFAULT_TIMEOUT', 'Variable', {\n        declaredType: 'number',\n      });\n      const def = table.lookupExactFull('src/config.ts', 'DEFAULT_TIMEOUT');\n      expect(def).toBeDefined();\n      expect(def!.declaredType).toBe('number');\n      expect(def!.ownerId).toBeUndefined();\n      // No ownerId → not a Property exclusion path → must be in globalIndex\n      expect(table.lookupFuzzy('DEFAULT_TIMEOUT')).toHaveLength(1);\n      expect(table.lookupFuzzy('DEFAULT_TIMEOUT')[0].declaredType).toBe('number');\n    });\n\n    it('stores all four optional metadata fields simultaneously on a Method', () => {\n      table.add('src/models.ts', 'find', 'method:find', 'Method', {\n        parameterCount: 2,\n        returnType: 'User | undefined',\n        declaredType: 'QueryResult',\n        ownerId: 'class:UserRepository',\n      });\n      const def = table.lookupExactFull('src/models.ts', 'find');\n      expect(def).toBeDefined();\n      expect(def!.parameterCount).toBe(2);\n      expect(def!.returnType).toBe('User | undefined');\n      expect(def!.declaredType).toBe('QueryResult');\n      expect(def!.ownerId).toBe('class:UserRepository');\n    });\n\n    it('omits all optional fields when metadata is not provided at all', () => {\n      table.add('src/utils.ts', 'noop', 'func:noop', 'Function');\n      const def = 
table.lookupExactFull('src/utils.ts', 'noop');\n      expect(def).toBeDefined();\n      expect(def!.parameterCount).toBeUndefined();\n      expect(def!.returnType).toBeUndefined();\n      expect(def!.declaredType).toBeUndefined();\n      expect(def!.ownerId).toBeUndefined();\n    });\n\n    it('stores parameterCount: 0 (falsy value) correctly', () => {\n      // parameterCount of 0 must not be dropped by the spread guard\n      table.add('src/utils.ts', 'noArgs', 'func:noArgs', 'Function', { parameterCount: 0 });\n      const def = table.lookupExactFull('src/utils.ts', 'noArgs');\n      expect(def).toBeDefined();\n      expect(def!.parameterCount).toBe(0);\n    });\n  });\n\n  describe('lookupFuzzyCallable — lazy index behaviour', () => {\n    it('returns empty array when table has no callables', () => {\n      table.add('src/models.ts', 'User', 'class:User', 'Class');\n      table.add('src/models.ts', 'IUser', 'iface:IUser', 'Interface');\n      expect(table.lookupFuzzyCallable('User')).toEqual([]);\n      expect(table.lookupFuzzyCallable('IUser')).toEqual([]);\n    });\n\n    it('uses cached index on second call without adding new symbols', () => {\n      table.add('src/a.ts', 'fetch', 'func:fetch', 'Function', { returnType: 'Response' });\n      // First call — builds the lazy index\n      const first = table.lookupFuzzyCallable('fetch');\n      expect(first).toHaveLength(1);\n      // Second call — must return equivalent result from cache\n      const second = table.lookupFuzzyCallable('fetch');\n      expect(second).toHaveLength(1);\n      expect(second[0].nodeId).toBe('func:fetch');\n      // Both calls return the same array reference (same cache entry)\n      expect(first).toBe(second);\n    });\n\n    it('invalidated cache is rebuilt correctly after adding a Method', () => {\n      table.add('src/a.ts', 'alpha', 'func:alpha', 'Function');\n      // Warm the cache\n      expect(table.lookupFuzzyCallable('alpha')).toHaveLength(1);\n      
expect(table.lookupFuzzyCallable('beta')).toEqual([]);\n      // Add a Method — must invalidate cache\n      table.add('src/a.ts', 'beta', 'method:beta', 'Method');\n      // Rebuilt cache must now include beta\n      const result = table.lookupFuzzyCallable('beta');\n      expect(result).toHaveLength(1);\n      expect(result[0].type).toBe('Method');\n    });\n\n    it('invalidated cache is rebuilt correctly after adding a Constructor', () => {\n      table.add('src/a.ts', 'existing', 'func:existing', 'Function');\n      expect(table.lookupFuzzyCallable('existing')).toHaveLength(1);\n      table.add('src/models.ts', 'MyClass', 'ctor:MyClass', 'Constructor');\n      expect(table.lookupFuzzyCallable('MyClass')).toHaveLength(1);\n      expect(table.lookupFuzzyCallable('MyClass')[0].type).toBe('Constructor');\n    });\n  });\n\n  describe('lookupExactFull — full SymbolDefinition shape', () => {\n    it('returns undefined for unknown file', () => {\n      table.add('src/a.ts', 'foo', 'func:foo', 'Function');\n      expect(table.lookupExactFull('src/other.ts', 'foo')).toBeUndefined();\n    });\n\n    it('returns undefined for unknown symbol name within a known file', () => {\n      table.add('src/a.ts', 'foo', 'func:foo', 'Function');\n      expect(table.lookupExactFull('src/a.ts', 'bar')).toBeUndefined();\n    });\n\n    it('returns undefined for empty table', () => {\n      expect(table.lookupExactFull('src/a.ts', 'foo')).toBeUndefined();\n    });\n\n    it('returns the full SymbolDefinition including nodeId, filePath, and type', () => {\n      table.add('src/models.ts', 'address', 'prop:address', 'Property', {\n        declaredType: 'Address',\n        ownerId: 'class:User',\n      });\n      const def = table.lookupExactFull('src/models.ts', 'address');\n      expect(def).toBeDefined();\n      expect(def!.nodeId).toBe('prop:address');\n      expect(def!.filePath).toBe('src/models.ts');\n      expect(def!.type).toBe('Property');\n      
expect(def!.declaredType).toBe('Address');\n      expect(def!.ownerId).toBe('class:User');\n    });\n\n    it('returns first definition when same file and name are added twice (overloads preserved)', () => {\n      table.add('src/a.ts', 'foo', 'func:foo:v1', 'Function', { returnType: 'void' });\n      table.add('src/a.ts', 'foo', 'func:foo:v2', 'Function', { returnType: 'string' });\n      // lookupExactFull returns first match\n      const def = table.lookupExactFull('src/a.ts', 'foo');\n      expect(def).toBeDefined();\n      expect(def!.nodeId).toBe('func:foo:v1');\n      expect(def!.returnType).toBe('void');\n      // lookupExactAll returns all overloads\n      const all = table.lookupExactAll('src/a.ts', 'foo');\n      expect(all).toHaveLength(2);\n      expect(all[0].nodeId).toBe('func:foo:v1');\n      expect(all[1].nodeId).toBe('func:foo:v2');\n      expect(all[1].returnType).toBe('string');\n    });\n  });\n\n  describe('lookupFieldByOwner — additional coverage', () => {\n    it('stores multiple distinct fields under the same owner', () => {\n      table.add('src/models.ts', 'id', 'prop:user:id', 'Property', {\n        declaredType: 'number',\n        ownerId: 'class:User',\n      });\n      table.add('src/models.ts', 'email', 'prop:user:email', 'Property', {\n        declaredType: 'string',\n        ownerId: 'class:User',\n      });\n      table.add('src/models.ts', 'createdAt', 'prop:user:createdAt', 'Property', {\n        declaredType: 'Date',\n        ownerId: 'class:User',\n      });\n      expect(table.lookupFieldByOwner('class:User', 'id')!.declaredType).toBe('number');\n      expect(table.lookupFieldByOwner('class:User', 'email')!.declaredType).toBe('string');\n      expect(table.lookupFieldByOwner('class:User', 'createdAt')!.declaredType).toBe('Date');\n    });\n\n    it('returns the full SymbolDefinition (nodeId + filePath + type) not just declaredType', () => {\n      table.add('src/models.ts', 'score', 'prop:score', 'Property', {\n        
declaredType: 'number',\n        ownerId: 'class:Player',\n      });\n      const def = table.lookupFieldByOwner('class:Player', 'score');\n      expect(def).toBeDefined();\n      expect(def!.nodeId).toBe('prop:score');\n      expect(def!.filePath).toBe('src/models.ts');\n      expect(def!.type).toBe('Property');\n    });\n\n    it('key collision is impossible between different owners sharing a field name', () => {\n      // Ensures the null-byte separator in the key prevents cross-owner leakage\n      table.add('src/models.ts', 'id', 'prop:a:id', 'Property', {\n        declaredType: 'string',\n        ownerId: 'class:A',\n      });\n      table.add('src/models.ts', 'id', 'prop:b:id', 'Property', {\n        declaredType: 'UUID',\n        ownerId: 'class:B',\n      });\n      expect(table.lookupFieldByOwner('class:A', 'id')!.nodeId).toBe('prop:a:id');\n      expect(table.lookupFieldByOwner('class:B', 'id')!.nodeId).toBe('prop:b:id');\n      // An owner whose id is the concatenation of A's ownerId + fieldName must not match\n      expect(table.lookupFieldByOwner('class:A\\0id', '')).toBeUndefined();\n    });\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/tools.test.ts",
    "content": "/**\n * Unit Tests: MCP Tool Definitions\n *\n * Tests: GITNEXUS_TOOLS from tools.ts\n * - All 7 tools are defined\n * - Each tool has valid name, description, inputSchema\n * - Required fields are correct\n * - Optional repo parameter is present on tools that need it\n */\nimport { describe, it, expect } from 'vitest';\nimport { GITNEXUS_TOOLS, type ToolDefinition } from '../../src/mcp/tools.js';\n\ndescribe('GITNEXUS_TOOLS', () => {\n  it('exports exactly 7 tools', () => {\n    expect(GITNEXUS_TOOLS).toHaveLength(7);\n  });\n\n  it('contains all expected tool names', () => {\n    const names = GITNEXUS_TOOLS.map(t => t.name);\n    expect(names).toEqual(\n      expect.arrayContaining([\n        'list_repos', 'query', 'cypher', 'context',\n        'detect_changes', 'rename', 'impact',\n      ])\n    );\n  });\n\n  it('each tool has name, description, and inputSchema', () => {\n    for (const tool of GITNEXUS_TOOLS) {\n      expect(tool.name).toBeTruthy();\n      expect(typeof tool.name).toBe('string');\n      expect(tool.description).toBeTruthy();\n      expect(typeof tool.description).toBe('string');\n      expect(tool.inputSchema).toBeDefined();\n      expect(tool.inputSchema.type).toBe('object');\n      expect(tool.inputSchema.properties).toBeDefined();\n      expect(Array.isArray(tool.inputSchema.required)).toBe(true);\n    }\n  });\n\n  it('query tool requires \"query\" parameter', () => {\n    const queryTool = GITNEXUS_TOOLS.find(t => t.name === 'query')!;\n    expect(queryTool.inputSchema.required).toContain('query');\n    expect(queryTool.inputSchema.properties.query).toBeDefined();\n    expect(queryTool.inputSchema.properties.query.type).toBe('string');\n  });\n\n  it('cypher tool requires \"query\" parameter', () => {\n    const cypherTool = GITNEXUS_TOOLS.find(t => t.name === 'cypher')!;\n    expect(cypherTool.inputSchema.required).toContain('query');\n  });\n\n  it('context tool has no required parameters', () => {\n    const 
contextTool = GITNEXUS_TOOLS.find(t => t.name === 'context')!;\n    expect(contextTool.inputSchema.required).toEqual([]);\n  });\n\n  it('impact tool requires target and direction', () => {\n    const impactTool = GITNEXUS_TOOLS.find(t => t.name === 'impact')!;\n    expect(impactTool.inputSchema.required).toContain('target');\n    expect(impactTool.inputSchema.required).toContain('direction');\n  });\n\n  it('rename tool requires new_name', () => {\n    const renameTool = GITNEXUS_TOOLS.find(t => t.name === 'rename')!;\n    expect(renameTool.inputSchema.required).toContain('new_name');\n  });\n\n  it('detect_changes tool has no required parameters', () => {\n    const detectTool = GITNEXUS_TOOLS.find(t => t.name === 'detect_changes')!;\n    expect(detectTool.inputSchema.required).toEqual([]);\n  });\n\n  it('list_repos tool has no parameters', () => {\n    const listTool = GITNEXUS_TOOLS.find(t => t.name === 'list_repos')!;\n    expect(Object.keys(listTool.inputSchema.properties)).toHaveLength(0);\n    expect(listTool.inputSchema.required).toEqual([]);\n  });\n\n  it('all tools except list_repos have optional repo parameter', () => {\n    for (const tool of GITNEXUS_TOOLS) {\n      if (tool.name === 'list_repos') continue;\n      expect(tool.inputSchema.properties.repo).toBeDefined();\n      expect(tool.inputSchema.properties.repo.type).toBe('string');\n      // repo should never be required\n      expect(tool.inputSchema.required).not.toContain('repo');\n    }\n  });\n\n  it('detect_changes scope has correct enum values', () => {\n    const detectTool = GITNEXUS_TOOLS.find(t => t.name === 'detect_changes')!;\n    const scopeProp = detectTool.inputSchema.properties.scope;\n    expect(scopeProp.enum).toEqual(['unstaged', 'staged', 'all', 'compare']);\n  });\n\n  it('impact relationTypes is array of strings', () => {\n    const impactTool = GITNEXUS_TOOLS.find(t => t.name === 'impact')!;\n    const relProp = impactTool.inputSchema.properties.relationTypes;\n    
expect(relProp.type).toBe('array');\n    expect(relProp.items).toEqual({ type: 'string' });\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/tree-sitter-queries.test.ts",
    "content": "import { describe, it, expect } from 'vitest';\nimport {\n  TYPESCRIPT_QUERIES,\n  JAVASCRIPT_QUERIES,\n  PYTHON_QUERIES,\n  JAVA_QUERIES,\n  C_QUERIES,\n  GO_QUERIES,\n  CPP_QUERIES,\n  CSHARP_QUERIES,\n  RUST_QUERIES,\n  PHP_QUERIES,\n  SWIFT_QUERIES,\n  LANGUAGE_QUERIES,\n} from '../../src/core/ingestion/tree-sitter-queries.js';\nimport { SupportedLanguages } from '../../src/config/supported-languages.js';\n\ndescribe('tree-sitter queries', () => {\n  describe('LANGUAGE_QUERIES map', () => {\n    it('has entries for all supported languages', () => {\n      const allLanguages = Object.values(SupportedLanguages);\n      for (const lang of allLanguages) {\n        expect(LANGUAGE_QUERIES[lang]).toBeDefined();\n        expect(LANGUAGE_QUERIES[lang].length).toBeGreaterThan(0);\n      }\n    });\n\n    it('maps to the correct query constants', () => {\n      expect(LANGUAGE_QUERIES[SupportedLanguages.TypeScript]).toBe(TYPESCRIPT_QUERIES);\n      expect(LANGUAGE_QUERIES[SupportedLanguages.JavaScript]).toBe(JAVASCRIPT_QUERIES);\n      expect(LANGUAGE_QUERIES[SupportedLanguages.Python]).toBe(PYTHON_QUERIES);\n      expect(LANGUAGE_QUERIES[SupportedLanguages.Java]).toBe(JAVA_QUERIES);\n      expect(LANGUAGE_QUERIES[SupportedLanguages.C]).toBe(C_QUERIES);\n      expect(LANGUAGE_QUERIES[SupportedLanguages.Go]).toBe(GO_QUERIES);\n      expect(LANGUAGE_QUERIES[SupportedLanguages.CPlusPlus]).toBe(CPP_QUERIES);\n      expect(LANGUAGE_QUERIES[SupportedLanguages.CSharp]).toBe(CSHARP_QUERIES);\n      expect(LANGUAGE_QUERIES[SupportedLanguages.Rust]).toBe(RUST_QUERIES);\n      expect(LANGUAGE_QUERIES[SupportedLanguages.PHP]).toBe(PHP_QUERIES);\n      expect(LANGUAGE_QUERIES[SupportedLanguages.Swift]).toBe(SWIFT_QUERIES);\n    });\n  });\n\n  describe('TypeScript queries', () => {\n    it('captures class declarations', () => {\n      expect(TYPESCRIPT_QUERIES).toContain('class_declaration');\n      expect(TYPESCRIPT_QUERIES).toContain('@definition.class');\n    
});\n\n    it('captures interface declarations', () => {\n      expect(TYPESCRIPT_QUERIES).toContain('interface_declaration');\n      expect(TYPESCRIPT_QUERIES).toContain('@definition.interface');\n    });\n\n    it('captures function declarations', () => {\n      expect(TYPESCRIPT_QUERIES).toContain('function_declaration');\n      expect(TYPESCRIPT_QUERIES).toContain('@definition.function');\n    });\n\n    it('captures method definitions', () => {\n      expect(TYPESCRIPT_QUERIES).toContain('method_definition');\n      expect(TYPESCRIPT_QUERIES).toContain('@definition.method');\n    });\n\n    it('captures arrow functions in variable declarations', () => {\n      expect(TYPESCRIPT_QUERIES).toContain('arrow_function');\n    });\n\n    it('captures imports', () => {\n      expect(TYPESCRIPT_QUERIES).toContain('import_statement');\n      expect(TYPESCRIPT_QUERIES).toContain('@import');\n    });\n\n    it('captures call expressions', () => {\n      expect(TYPESCRIPT_QUERIES).toContain('call_expression');\n      expect(TYPESCRIPT_QUERIES).toContain('@call');\n    });\n\n    it('captures heritage (extends/implements)', () => {\n      expect(TYPESCRIPT_QUERIES).toContain('@heritage.extends');\n      expect(TYPESCRIPT_QUERIES).toContain('@heritage.implements');\n    });\n  });\n\n  describe('JavaScript queries', () => {\n    it('captures function and class definitions', () => {\n      expect(JAVASCRIPT_QUERIES).toContain('@definition.class');\n      expect(JAVASCRIPT_QUERIES).toContain('@definition.function');\n      expect(JAVASCRIPT_QUERIES).toContain('@definition.method');\n    });\n\n    it('captures heritage (extends)', () => {\n      expect(JAVASCRIPT_QUERIES).toContain('@heritage.extends');\n    });\n\n    it('does not have interface declarations', () => {\n      expect(JAVASCRIPT_QUERIES).not.toContain('interface_declaration');\n    });\n  });\n\n  describe('Python queries', () => {\n    it('captures class and function definitions', () => {\n      
expect(PYTHON_QUERIES).toContain('class_definition');\n      expect(PYTHON_QUERIES).toContain('function_definition');\n    });\n\n    it('captures imports including from-imports', () => {\n      expect(PYTHON_QUERIES).toContain('import_statement');\n      expect(PYTHON_QUERIES).toContain('import_from_statement');\n    });\n\n    it('captures heritage (class inheritance)', () => {\n      expect(PYTHON_QUERIES).toContain('@heritage.extends');\n    });\n  });\n\n  describe('Java queries', () => {\n    it('captures all major declaration types', () => {\n      expect(JAVA_QUERIES).toContain('@definition.class');\n      expect(JAVA_QUERIES).toContain('@definition.interface');\n      expect(JAVA_QUERIES).toContain('@definition.enum');\n      expect(JAVA_QUERIES).toContain('@definition.method');\n      expect(JAVA_QUERIES).toContain('@definition.constructor');\n      expect(JAVA_QUERIES).toContain('@definition.annotation');\n    });\n\n    it('captures extends and implements heritage', () => {\n      expect(JAVA_QUERIES).toContain('@heritage.extends');\n      expect(JAVA_QUERIES).toContain('@heritage.implements');\n    });\n  });\n\n  describe('C queries', () => {\n    it('captures function definitions', () => {\n      expect(C_QUERIES).toContain('function_definition');\n      expect(C_QUERIES).toContain('@definition.function');\n    });\n\n    it('captures struct, union, enum, typedef', () => {\n      expect(C_QUERIES).toContain('@definition.struct');\n      expect(C_QUERIES).toContain('@definition.union');\n      expect(C_QUERIES).toContain('@definition.enum');\n      expect(C_QUERIES).toContain('@definition.typedef');\n    });\n\n    it('captures macros', () => {\n      expect(C_QUERIES).toContain('@definition.macro');\n    });\n\n    it('captures includes as imports', () => {\n      expect(C_QUERIES).toContain('preproc_include');\n    });\n  });\n\n  describe('Go queries', () => {\n    it('captures function and method declarations', () => {\n      
expect(GO_QUERIES).toContain('function_declaration');\n      expect(GO_QUERIES).toContain('method_declaration');\n    });\n\n    it('captures struct and interface types', () => {\n      expect(GO_QUERIES).toContain('@definition.struct');\n      expect(GO_QUERIES).toContain('@definition.interface');\n    });\n\n    it('captures import declarations', () => {\n      expect(GO_QUERIES).toContain('import_declaration');\n    });\n  });\n\n  describe('C++ queries', () => {\n    it('captures class, struct, namespace', () => {\n      expect(CPP_QUERIES).toContain('@definition.class');\n      expect(CPP_QUERIES).toContain('@definition.struct');\n      expect(CPP_QUERIES).toContain('@definition.namespace');\n    });\n\n    it('captures templates', () => {\n      expect(CPP_QUERIES).toContain('@definition.template');\n      expect(CPP_QUERIES).toContain('template_declaration');\n    });\n\n    it('captures heritage (base class)', () => {\n      expect(CPP_QUERIES).toContain('@heritage.extends');\n    });\n  });\n\n  describe('C# queries', () => {\n    it('captures all major types', () => {\n      expect(CSHARP_QUERIES).toContain('@definition.class');\n      expect(CSHARP_QUERIES).toContain('@definition.interface');\n      expect(CSHARP_QUERIES).toContain('@definition.struct');\n      expect(CSHARP_QUERIES).toContain('@definition.enum');\n      expect(CSHARP_QUERIES).toContain('@definition.record');\n      expect(CSHARP_QUERIES).toContain('@definition.delegate');\n    });\n\n    it('captures namespace declarations', () => {\n      expect(CSHARP_QUERIES).toContain('@definition.namespace');\n    });\n\n    it('captures constructor and property', () => {\n      expect(CSHARP_QUERIES).toContain('@definition.constructor');\n      expect(CSHARP_QUERIES).toContain('@definition.property');\n    });\n  });\n\n  describe('Rust queries', () => {\n    it('captures function items', () => {\n      expect(RUST_QUERIES).toContain('function_item');\n      
expect(RUST_QUERIES).toContain('@definition.function');\n    });\n\n    it('captures struct, enum, trait, impl', () => {\n      expect(RUST_QUERIES).toContain('@definition.struct');\n      expect(RUST_QUERIES).toContain('@definition.enum');\n      expect(RUST_QUERIES).toContain('@definition.trait');\n      expect(RUST_QUERIES).toContain('@definition.impl');\n    });\n\n    it('captures module, const, static, macro', () => {\n      expect(RUST_QUERIES).toContain('@definition.module');\n      expect(RUST_QUERIES).toContain('@definition.const');\n      expect(RUST_QUERIES).toContain('@definition.static');\n      expect(RUST_QUERIES).toContain('@definition.macro');\n    });\n\n    it('captures trait implementation heritage', () => {\n      expect(RUST_QUERIES).toContain('@heritage.trait');\n      expect(RUST_QUERIES).toContain('@heritage.class');\n    });\n  });\n\n  describe('PHP queries', () => {\n    it('captures class, interface, trait, enum', () => {\n      expect(PHP_QUERIES).toContain('@definition.class');\n      expect(PHP_QUERIES).toContain('@definition.interface');\n      expect(PHP_QUERIES).toContain('@definition.trait');\n      expect(PHP_QUERIES).toContain('@definition.enum');\n    });\n\n    it('captures top-level function definitions', () => {\n      expect(PHP_QUERIES).toContain('function_definition');\n      expect(PHP_QUERIES).toContain('@definition.function');\n    });\n\n    it('captures method declarations', () => {\n      expect(PHP_QUERIES).toContain('method_declaration');\n      expect(PHP_QUERIES).toContain('@definition.method');\n    });\n\n    it('captures class properties', () => {\n      expect(PHP_QUERIES).toContain('property_declaration');\n      expect(PHP_QUERIES).toContain('@definition.property');\n    });\n\n    it('captures heritage (extends, implements, use trait)', () => {\n      expect(PHP_QUERIES).toContain('@heritage.extends');\n      expect(PHP_QUERIES).toContain('@heritage.implements');\n      
expect(PHP_QUERIES).toContain('@heritage.trait');\n    });\n\n    it('captures namespace definitions', () => {\n      expect(PHP_QUERIES).toContain('namespace_definition');\n      expect(PHP_QUERIES).toContain('@definition.namespace');\n    });\n  });\n\n  describe('Swift queries', () => {\n    it('captures class, struct, enum', () => {\n      expect(SWIFT_QUERIES).toContain('@definition.class');\n      expect(SWIFT_QUERIES).toContain('@definition.struct');\n      expect(SWIFT_QUERIES).toContain('@definition.enum');\n    });\n\n    it('captures protocols as interfaces', () => {\n      expect(SWIFT_QUERIES).toContain('protocol_declaration');\n      expect(SWIFT_QUERIES).toContain('@definition.interface');\n    });\n\n    it('captures init declarations as constructors', () => {\n      expect(SWIFT_QUERIES).toContain('init_declaration');\n      expect(SWIFT_QUERIES).toContain('@definition.constructor');\n    });\n\n    it('captures function declarations', () => {\n      expect(SWIFT_QUERIES).toContain('function_declaration');\n      expect(SWIFT_QUERIES).toContain('@definition.function');\n    });\n\n    it('captures protocol method declarations', () => {\n      expect(SWIFT_QUERIES).toContain('protocol_function_declaration');\n      expect(SWIFT_QUERIES).toContain('@definition.method');\n    });\n\n    it('captures properties', () => {\n      expect(SWIFT_QUERIES).toContain('property_declaration');\n      expect(SWIFT_QUERIES).toContain('@definition.property');\n    });\n\n    it('captures heritage (inheritance)', () => {\n      expect(SWIFT_QUERIES).toContain('@heritage.extends');\n    });\n\n    it('captures type aliases', () => {\n      expect(SWIFT_QUERIES).toContain('typealias_declaration');\n      expect(SWIFT_QUERIES).toContain('@definition.type');\n    });\n\n    it('captures extensions as classes', () => {\n      expect(SWIFT_QUERIES).toContain('\"extension\"');\n    });\n\n    it('captures actors as classes', () => {\n      
expect(SWIFT_QUERIES).toContain('\"actor\"');\n    });\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/unit/type-env.test.ts",
    "content": "import { describe, it, expect } from 'vitest';\nimport { buildTypeEnv, type TypeEnv, type TypeEnvironment } from '../../src/core/ingestion/type-env.js';\nimport { stripNullable, extractSimpleTypeName } from '../../src/core/ingestion/type-extractors/shared.js';\nimport Parser from 'tree-sitter';\nimport TypeScript from 'tree-sitter-typescript';\nimport Java from 'tree-sitter-java';\nimport CSharp from 'tree-sitter-c-sharp';\nimport Go from 'tree-sitter-go';\nimport Rust from 'tree-sitter-rust';\nimport Python from 'tree-sitter-python';\nimport CPP from 'tree-sitter-cpp';\nimport Kotlin from 'tree-sitter-kotlin';\nimport PHP from 'tree-sitter-php';\nimport Ruby from 'tree-sitter-ruby';\n\nconst parser = new Parser();\n\nconst parse = (code: string, lang: any) => {\n  parser.setLanguage(lang);\n  return parser.parse(code);\n};\n\n/** Flatten a scoped TypeEnv into a simple name→type map (for simple test assertions). */\nfunction flatGet(env: TypeEnv, varName: string): string | undefined {\n  for (const [, scopeMap] of env) {\n    const val = scopeMap.get(varName);\n    if (val) return val;\n  }\n  return undefined;\n}\n\n/** Count all bindings across all scopes. 
*/\nfunction flatSize(env: TypeEnv): number {\n  let count = 0;\n  for (const [, scopeMap] of env) count += scopeMap.size;\n  return count;\n}\n\ndescribe('buildTypeEnv', () => {\n  describe('TypeScript', () => {\n    it('extracts type from const declaration', () => {\n      const tree = parse('const user: User = getUser();', TypeScript.typescript);\n      const { env } = buildTypeEnv(tree, 'typescript');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('extracts type from let declaration', () => {\n      const tree = parse('let repo: Repository;', TypeScript.typescript);\n      const { env } = buildTypeEnv(tree, 'typescript');\n      expect(flatGet(env, 'repo')).toBe('Repository');\n    });\n\n    it('extracts type from function parameters', () => {\n      const tree = parse('function save(user: User, repo: Repository) {}', TypeScript.typescript);\n      const { env } = buildTypeEnv(tree, 'typescript');\n      expect(flatGet(env, 'user')).toBe('User');\n      expect(flatGet(env, 'repo')).toBe('Repository');\n    });\n\n    it('extracts type from arrow function parameters', () => {\n      const tree = parse('const fn = (user: User) => user.save();', TypeScript.typescript);\n      const { env } = buildTypeEnv(tree, 'typescript');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('ignores variables without type annotations', () => {\n      const tree = parse('const x = 5; let y = \"hello\";', TypeScript.typescript);\n      const { env } = buildTypeEnv(tree, 'typescript');\n      expect(flatSize(env)).toBe(0);\n    });\n\n    it('extracts type from nullable union User | null', () => {\n      const tree = parse('const user: User | null = getUser();', TypeScript.typescript);\n      const { env } = buildTypeEnv(tree, 'typescript');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('extracts type from optional union User | undefined', () => {\n      const tree = parse('let user: User | undefined;', 
TypeScript.typescript);\n      const { env } = buildTypeEnv(tree, 'typescript');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('extracts type from triple nullable union User | null | undefined', () => {\n      const tree = parse('const user: User | null | undefined = getUser();', TypeScript.typescript);\n      const { env } = buildTypeEnv(tree, 'typescript');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('ignores non-nullable unions like User | Repo', () => {\n      const tree = parse('const entity: User | Repo = getEntity();', TypeScript.typescript);\n      const { env } = buildTypeEnv(tree, 'typescript');\n      expect(flatGet(env, 'entity')).toBeUndefined();\n    });\n  });\n\n  describe('Java', () => {\n    it('extracts type from local variable declaration', () => {\n      const tree = parse(`\n        class App {\n          void run() {\n            User user = new User();\n            Repository repo = getRepo();\n          }\n        }\n      `, Java);\n      const { env } = buildTypeEnv(tree, 'java');\n      expect(flatGet(env, 'user')).toBe('User');\n      expect(flatGet(env, 'repo')).toBe('Repository');\n    });\n\n    it('extracts type from method parameters', () => {\n      const tree = parse(`\n        class App {\n          void process(User user, Repository repo) {}\n        }\n      `, Java);\n      const { env } = buildTypeEnv(tree, 'java');\n      expect(flatGet(env, 'user')).toBe('User');\n      expect(flatGet(env, 'repo')).toBe('Repository');\n    });\n\n    it('extracts type from field declaration', () => {\n      const tree = parse(`\n        class App {\n          private User user;\n        }\n      `, Java);\n      const { env } = buildTypeEnv(tree, 'java');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n  });\n\n  describe('C#', () => {\n    it('extracts type from local variable declaration', () => {\n      const tree = parse(`\n        class App {\n          void Run() {\n         
   User user = new User();\n          }\n        }\n      `, CSharp);\n      const { env } = buildTypeEnv(tree, 'csharp');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('extracts type from var with new expression', () => {\n      const tree = parse(`\n        class App {\n          void Run() {\n            var user = new User();\n          }\n        }\n      `, CSharp);\n      const { env } = buildTypeEnv(tree, 'csharp');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('extracts type from method parameters', () => {\n      const tree = parse(`\n        class App {\n          void Process(User user, Repository repo) {}\n        }\n      `, CSharp);\n      const { env } = buildTypeEnv(tree, 'csharp');\n      expect(flatGet(env, 'user')).toBe('User');\n      expect(flatGet(env, 'repo')).toBe('Repository');\n    });\n\n    it('extracts type from is pattern matching (obj is User user)', () => {\n      const tree = parse(`\n        class User { public void Save() {} }\n        class App {\n          void Process(object obj) {\n            if (obj is User user) {\n              user.Save();\n            }\n          }\n        }\n      `, CSharp);\n      const { env } = buildTypeEnv(tree, 'csharp');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n  });\n\n  describe('Go', () => {\n    it('extracts type from var declaration', () => {\n      const tree = parse(`\n        package main\n        func main() {\n          var user User\n        }\n      `, Go);\n      const { env } = buildTypeEnv(tree, 'go');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('extracts type from short var with composite literal', () => {\n      const tree = parse(`\n        package main\n        func main() {\n          user := User{Name: \"Alice\"}\n        }\n      `, Go);\n      const { env } = buildTypeEnv(tree, 'go');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('extracts type from address-of 
composite literal (&User{})', () => {\n      const tree = parse(`\n        package main\n        func main() {\n          user := &User{Name: \"Alice\"}\n        }\n      `, Go);\n      const { env } = buildTypeEnv(tree, 'go');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('extracts type from address-of in multi-assignment', () => {\n      const tree = parse(`\n        package main\n        func main() {\n          user, repo := &User{}, &Repo{}\n        }\n      `, Go);\n      const { env } = buildTypeEnv(tree, 'go');\n      expect(flatGet(env, 'user')).toBe('User');\n      expect(flatGet(env, 'repo')).toBe('Repo');\n    });\n\n    it('infers type from new(User) built-in', () => {\n      const tree = parse(`\n        package main\n        func main() {\n          user := new(User)\n        }\n      `, Go);\n      const { env } = buildTypeEnv(tree, 'go');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('does not infer from non-new function calls', () => {\n      const tree = parse(`\n        package main\n        func main() {\n          user := getUser()\n        }\n      `, Go);\n      const { env } = buildTypeEnv(tree, 'go');\n      expect(flatGet(env, 'user')).toBeUndefined();\n    });\n\n    it('infers element type from make([]User, 0) slice builtin', () => {\n      const tree = parse(`\n        package main\n        func main() {\n          sl := make([]User, 0)\n        }\n      `, Go);\n      const { env } = buildTypeEnv(tree, 'go');\n      expect(flatGet(env, 'sl')).toBe('User');\n    });\n\n    it('infers value type from make(map[string]User) map builtin', () => {\n      const tree = parse(`\n        package main\n        func main() {\n          m := make(map[string]User)\n        }\n      `, Go);\n      const { env } = buildTypeEnv(tree, 'go');\n      expect(flatGet(env, 'm')).toBe('User');\n    });\n\n    it('infers type from type assertion: user := iface.(User)', () => {\n      const tree = parse(`\n        
package main\n        type Saver interface { Save() }\n        func process(s Saver) {\n          user := s.(User)\n        }\n      `, Go);\n      const { env } = buildTypeEnv(tree, 'go');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('infers type from type assertion in multi-assignment: user, ok := iface.(User)', () => {\n      const tree = parse(`\n        package main\n        type Saver interface { Save() }\n        func process(s Saver) {\n          user, ok := s.(User)\n        }\n      `, Go);\n      const { env } = buildTypeEnv(tree, 'go');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('extracts type from function parameters', () => {\n      const tree = parse(`\n        package main\n        func process(user User, repo Repository) {}\n      `, Go);\n      const { env } = buildTypeEnv(tree, 'go');\n      // Go parameter extraction depends on tree-sitter grammar structure\n      // Parameters may or may not have 'name'/'type' fields\n    });\n  });\n\n  describe('Rust', () => {\n    it('extracts type from let declaration', () => {\n      const tree = parse(`\n        fn main() {\n          let user: User = User::new();\n        }\n      `, Rust);\n      const { env } = buildTypeEnv(tree, 'rust');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('extracts type from function parameters', () => {\n      const tree = parse(`\n        fn process(user: User, repo: Repository) {}\n      `, Rust);\n      const { env } = buildTypeEnv(tree, 'rust');\n      expect(flatGet(env, 'user')).toBe('User');\n      expect(flatGet(env, 'repo')).toBe('Repository');\n    });\n\n    it('extracts type from let with reference', () => {\n      const tree = parse(`\n        fn main() {\n          let user: &User = &get_user();\n        }\n      `, Rust);\n      const { env } = buildTypeEnv(tree, 'rust');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n  });\n\n  describe('Python', () => {\n    it('extracts 
type from annotated assignment (PEP 484)', () => {\n      const tree = parse('user: User = get_user()', Python);\n      const { env } = buildTypeEnv(tree, 'python');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('extracts type from standalone annotation without value (file scope)', () => {\n      const tree = parse('active_user: User', Python);\n      const { env } = buildTypeEnv(tree, 'python');\n      expect(flatGet(env, 'active_user')).toBe('User');\n    });\n\n    it('extracts type from function parameters', () => {\n      const tree = parse('def process(user: User, repo: Repository): pass', Python);\n      const { env } = buildTypeEnv(tree, 'python');\n      // Python uses typed_parameter nodes, check if they match\n    });\n\n    it('extracts type from class-level annotation with default value', () => {\n      const tree = parse(`class User:\n    name: str = \"default\"\n    age: int = 0\n`, Python);\n      const { env } = buildTypeEnv(tree, 'python');\n      expect(flatGet(env, 'name')).toBe('str');\n      expect(flatGet(env, 'age')).toBe('int');\n    });\n\n    it('extracts type from class-level annotation without default value', () => {\n      const tree = parse(`class User:\n    repo: UserRepo\n`, Python);\n      const { env } = buildTypeEnv(tree, 'python');\n      expect(flatGet(env, 'repo')).toBe('UserRepo');\n    });\n\n    it('extracts types from mixed class-level annotations and methods', () => {\n      const tree = parse(`class User:\n    name: str = \"default\"\n    age: int = 0\n    repo: UserRepo\n\n    def save(self):\n        pass\n`, Python);\n      const { env } = buildTypeEnv(tree, 'python');\n      expect(flatGet(env, 'name')).toBe('str');\n      expect(flatGet(env, 'age')).toBe('int');\n      expect(flatGet(env, 'repo')).toBe('UserRepo');\n    });\n\n    describe('Python match/case as_pattern binding (Phase 6)', () => {\n      it('extracts type from `case User() as u` in match statement', () => {\n        const tree 
= parse(`\nclass User:\n    def save(self):\n        pass\n\ndef process(x):\n    match x:\n        case User() as u:\n            u.save()\n`, Python);\n        const { env } = buildTypeEnv(tree, 'python');\n        expect(flatGet(env, 'u')).toBe('User');\n      });\n\n      it('does NOT overwrite an existing binding in scopeEnv', () => {\n        const tree = parse(`\nclass User:\n    pass\n\ndef process(x):\n    u: User = x\n    match x:\n        case User() as u:\n            u.save()\n`, Python);\n        const { env } = buildTypeEnv(tree, 'python');\n        // u is already bound from the annotation, pattern binding should not overwrite\n        expect(flatGet(env, 'u')).toBe('User');\n      });\n\n      it('extracts type for each bound variable when multiple cases have as-patterns', () => {\n        const tree = parse(`\nclass User:\n    pass\n\nclass Repo:\n    pass\n\ndef process(x):\n    match x:\n        case User() as u:\n            u.save()\n        case Repo() as r:\n            r.save()\n`, Python);\n        const { env } = buildTypeEnv(tree, 'python');\n        expect(flatGet(env, 'u')).toBe('User');\n        expect(flatGet(env, 'r')).toBe('Repo');\n      });\n\n      it('does NOT extract binding when the pattern is not a class_pattern', () => {\n        // `case 42 as n:` — integer pattern, not a class_pattern\n        const tree = parse(`\ndef process(x):\n    match x:\n        case 42 as n:\n            pass\n`, Python);\n        const { env } = buildTypeEnv(tree, 'python');\n        // No class_pattern child — should return undefined\n        expect(flatGet(env, 'n')).toBeUndefined();\n      });\n    });\n  });\n\n  describe('C++', () => {\n    it('extracts type from local variable declaration', () => {\n      const tree = parse(`\n        void run() {\n          User user;\n        }\n      `, CPP);\n      const { env } = buildTypeEnv(tree, 'cpp');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('extracts type from 
initialized declaration', () => {\n      const tree = parse(`\n        void run() {\n          User user = getUser();\n        }\n      `, CPP);\n      const { env } = buildTypeEnv(tree, 'cpp');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('extracts type from pointer declaration', () => {\n      const tree = parse(`\n        void run() {\n          User* user = new User();\n        }\n      `, CPP);\n      const { env } = buildTypeEnv(tree, 'cpp');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('extracts type from function parameters', () => {\n      const tree = parse(`\n        void process(User user, Repository& repo) {}\n      `, CPP);\n      const { env } = buildTypeEnv(tree, 'cpp');\n      expect(flatGet(env, 'user')).toBe('User');\n      expect(flatGet(env, 'repo')).toBe('Repository');\n    });\n\n    it('extracts type from range-for with explicit type', () => {\n      const tree = parse(`\n        void run() {\n          std::vector<User> users;\n          for (User& user : users) {\n            user.save();\n          }\n        }\n      `, CPP);\n      const { env } = buildTypeEnv(tree, 'cpp');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('extracts type from range-for with const ref', () => {\n      const tree = parse(`\n        void run() {\n          std::vector<User> users;\n          for (const User& user : users) {\n            user.save();\n          }\n        }\n      `, CPP);\n      const { env } = buildTypeEnv(tree, 'cpp');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n  });\n\n  describe('PHP', () => {\n    it('extracts type from function parameters', () => {\n      const tree = parse(`<?php\n        function process(User $user, Repository $repo) {}\n      `, PHP.php);\n      const { env } = buildTypeEnv(tree, 'php');\n      // PHP parameter type extraction\n      expect(flatGet(env, '$user')).toBe('User');\n      expect(flatGet(env, 
'$repo')).toBe('Repository');\n    });\n\n    it('resolves $this to enclosing class name', () => {\n      const code = `<?php\nclass UserService {\n  public function process(): void {\n    $this->save();\n  }\n}`;\n      const tree = parse(code, PHP.php);\n      const typeEnv = buildTypeEnv(tree, 'php');\n\n      // Find the call node ($this->save())\n      const calls: any[] = [];\n      function findCalls(node: any) {\n        if (node.type === 'member_call_expression') calls.push(node);\n        for (let i = 0; i < node.childCount; i++) findCalls(node.child(i));\n      }\n      findCalls(tree.rootNode);\n\n      expect(calls.length).toBe(1);\n      // $this should resolve to enclosing class 'UserService'\n      expect(typeEnv.lookup('$this', calls[0])).toBe('UserService');\n    });\n\n    it('extracts type from constructor property promotion (PHP 8.0+)', () => {\n      const tree = parse(`<?php\nclass User {\n  public function __construct(\n    private string $name,\n    private UserRepo $repo\n  ) {}\n}\n      `, PHP.php);\n      const { env } = buildTypeEnv(tree, 'php');\n      expect(flatGet(env, '$repo')).toBe('UserRepo');\n    });\n\n    it('extracts type from typed class property (PHP 7.4+)', () => {\n      const tree = parse(`<?php\nclass UserService {\n  private UserRepo $repo;\n}\n      `, PHP.php);\n      const { env } = buildTypeEnv(tree, 'php');\n      expect(flatGet(env, '$repo')).toBe('UserRepo');\n    });\n\n    it('extracts type from typed class property with default value', () => {\n      const tree = parse(`<?php\nclass UserService {\n  public string $name = \"test\";\n}\n      `, PHP.php);\n      const { env } = buildTypeEnv(tree, 'php');\n      expect(flatGet(env, '$name')).toBe('string');\n    });\n\n    it('extracts PHPDoc @param with standard order: @param Type $name', () => {\n      const tree = parse(`<?php\n/**\n * @param UserRepo $repo the repository\n * @param string $name the user name\n */\nfunction create($repo, $name) {\n  
$repo->save();\n}\n      `, PHP.php);\n      const { env } = buildTypeEnv(tree, 'php');\n      expect(flatGet(env, '$repo')).toBe('UserRepo');\n      expect(flatGet(env, '$name')).toBe('string');\n    });\n\n    it('extracts PHPDoc @param with alternate order: @param $name Type', () => {\n      const tree = parse(`<?php\n/**\n * @param $repo UserRepo the repository\n * @param $name string the user name\n */\nfunction process($repo, $name) {\n  $repo->save();\n}\n      `, PHP.php);\n      const { env } = buildTypeEnv(tree, 'php');\n      expect(flatGet(env, '$repo')).toBe('UserRepo');\n      expect(flatGet(env, '$name')).toBe('string');\n    });\n  });\n\n  describe('Ruby YARD annotations', () => {\n    it('extracts @param type bindings from YARD comments', () => {\n      const tree = parse(`\nclass UserService\n  # @param repo [UserRepo] the repository\n  # @param name [String] the user's name\n  def create(repo, name)\n    repo.save\n  end\nend\n`, Ruby);\n      const { env } = buildTypeEnv(tree, 'ruby');\n      expect(flatGet(env, 'repo')).toBe('UserRepo');\n      expect(flatGet(env, 'name')).toBe('String');\n    });\n\n    it('handles qualified YARD types (Models::User → User)', () => {\n      const tree = parse(`\n# @param user [Models::User] the user\ndef process(user)\nend\n`, Ruby);\n      const { env } = buildTypeEnv(tree, 'ruby');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('handles nullable YARD types (String, nil → String)', () => {\n      const tree = parse(`\n# @param name [String, nil] optional name\ndef greet(name)\nend\n`, Ruby);\n      const { env } = buildTypeEnv(tree, 'ruby');\n      expect(flatGet(env, 'name')).toBe('String');\n    });\n\n    it('skips ambiguous union YARD types (String, Integer → undefined)', () => {\n      const tree = parse(`\n# @param value [String, Integer] mixed type\ndef process(value)\nend\n`, Ruby);\n      const { env } = buildTypeEnv(tree, 'ruby');\n      expect(flatGet(env, 
'value')).toBeUndefined();\n    });\n\n    it('extracts no types when no YARD comments present', () => {\n      const tree = parse(`\ndef create(repo, name)\n  repo.save\nend\n`, Ruby);\n      const { env } = buildTypeEnv(tree, 'ruby');\n      expect(flatSize(env)).toBe(0);\n    });\n\n    it('extracts types from singleton method YARD comments', () => {\n      const tree = parse(`\nclass UserService\n  # @param name [String] the user's name\n  def self.find(name)\n    name\n  end\nend\n`, Ruby);\n      const { env } = buildTypeEnv(tree, 'ruby');\n      expect(flatGet(env, 'name')).toBe('String');\n    });\n\n    it('handles generic YARD types (Array<User> → Array)', () => {\n      const tree = parse(`\n# @param users [Array<User>] list of users\ndef process(users)\nend\n`, Ruby);\n      const { env } = buildTypeEnv(tree, 'ruby');\n      expect(flatGet(env, 'users')).toBe('Array');\n    });\n  });\n\n  describe('super/base/parent resolution', () => {\n    it('resolves super to parent class name (TypeScript)', () => {\n      const code = `\nclass BaseModel {\n  save(): boolean { return true; }\n}\nclass User extends BaseModel {\n  save(): boolean {\n    super.save();\n    return true;\n  }\n}`;\n      const tree = parse(code, TypeScript.typescript);\n      const typeEnv = buildTypeEnv(tree, 'typescript');\n\n      const calls: any[] = [];\n      function findCalls(node: any) {\n        if (node.type === 'call_expression') calls.push(node);\n        for (let i = 0; i < node.childCount; i++) findCalls(node.child(i));\n      }\n      findCalls(tree.rootNode);\n\n      // Find the super.save() call (inside User class)\n      const superCall = calls.find((c: any) => {\n        const text = c.text;\n        return text.includes('super');\n      });\n      expect(superCall).toBeDefined();\n      expect(typeEnv.lookup('super', superCall)).toBe('BaseModel');\n    });\n\n    it('resolves super to parent class name (Java)', () => {\n      const code = `\nclass BaseModel {\n  
boolean save() { return true; }\n}\nclass User extends BaseModel {\n  boolean save() {\n    super.save();\n    return true;\n  }\n}`;\n      const tree = parse(code, Java);\n      const typeEnv = buildTypeEnv(tree, 'java');\n\n      const calls: any[] = [];\n      function findCalls(node: any) {\n        if (node.type === 'method_invocation') calls.push(node);\n        for (let i = 0; i < node.childCount; i++) findCalls(node.child(i));\n      }\n      findCalls(tree.rootNode);\n\n      const superCall = calls.find((c: any) => c.text.includes('super'));\n      expect(superCall).toBeDefined();\n      expect(typeEnv.lookup('super', superCall)).toBe('BaseModel');\n    });\n\n    it('resolves super to parent class name (Python)', () => {\n      const code = `\nclass BaseModel:\n    def save(self) -> bool:\n        return True\n\nclass User(BaseModel):\n    def save(self) -> bool:\n        super().save()\n        return True\n`;\n      const tree = parse(code, Python);\n      const typeEnv = buildTypeEnv(tree, 'python');\n\n      const calls: any[] = [];\n      function findCalls(node: any) {\n        if (node.type === 'call') calls.push(node);\n        for (let i = 0; i < node.childCount; i++) findCalls(node.child(i));\n      }\n      findCalls(tree.rootNode);\n\n      // Find a call inside the User class\n      const superCall = calls.find((c: any) => c.text.includes('super'));\n      expect(superCall).toBeDefined();\n      expect(typeEnv.lookup('super', superCall)).toBe('BaseModel');\n    });\n\n    it('returns undefined when class has no parent', () => {\n      const code = `\nclass Standalone {\n  save(): boolean {\n    return true;\n  }\n}`;\n      const tree = parse(code, TypeScript.typescript);\n      const typeEnv = buildTypeEnv(tree, 'typescript');\n\n      // No calls in this code — test the resolution function directly\n      // by using the class body as the context node\n      const classNode = tree.rootNode.firstNamedChild;\n      
expect(typeEnv.lookup('super', classNode!)).toBeUndefined();\n    });\n  });\n\n  describe('Kotlin object_declaration this resolution', () => {\n    it('resolves this inside object declaration', () => {\n      const code = `\nobject AppConfig {\n  fun setup() {\n    this.init()\n  }\n}`;\n      const tree = parse(code, Kotlin);\n      const typeEnv = buildTypeEnv(tree, 'kotlin');\n\n      const calls: any[] = [];\n      function findCalls(node: any) {\n        if (node.type === 'call_expression') calls.push(node);\n        for (let i = 0; i < node.childCount; i++) findCalls(node.child(i));\n      }\n      findCalls(tree.rootNode);\n\n      expect(calls.length).toBe(1);\n      expect(typeEnv.lookup('this', calls[0])).toBe('AppConfig');\n    });\n  });\n\n  describe('scope awareness', () => {\n    it('separates same-named variables in different functions', () => {\n      const tree = parse(`\n        function handleUser(user: User) {\n          user.save();\n        }\n        function handleRepo(user: Repo) {\n          user.save();\n        }\n      `, TypeScript.typescript);\n      const { env } = buildTypeEnv(tree, 'typescript');\n\n      // Each function has its own scope for 'user' (keyed by funcName@startIndex)\n      // Find the scope keys that start with handleUser/handleRepo\n      const scopes = [...env.keys()];\n      const handleUserKey = scopes.find(k => k.startsWith('handleUser@'));\n      const handleRepoKey = scopes.find(k => k.startsWith('handleRepo@'));\n      expect(handleUserKey).toBeDefined();\n      expect(handleRepoKey).toBeDefined();\n      expect(env.get(handleUserKey!)?.get('user')).toBe('User');\n      expect(env.get(handleRepoKey!)?.get('user')).toBe('Repo');\n    });\n\n    it('lookup resolves from enclosing function scope', () => {\n      const code = `\nfunction handleUser(user: User) {\n  user.save();\n}\nfunction handleRepo(user: Repo) {\n  user.save();\n}`;\n      const tree = parse(code, TypeScript.typescript);\n      const typeEnv 
= buildTypeEnv(tree, 'typescript');\n\n      // Find the call nodes inside each function\n      const calls: any[] = [];\n      function findCalls(node: any) {\n        if (node.type === 'call_expression') calls.push(node);\n        for (let i = 0; i < node.childCount; i++) {\n          findCalls(node.child(i));\n        }\n      }\n      findCalls(tree.rootNode);\n\n      expect(calls.length).toBe(2);\n      // First call is inside handleUser → user should be User\n      expect(typeEnv.lookup('user', calls[0])).toBe('User');\n      // Second call is inside handleRepo → user should be Repo\n      expect(typeEnv.lookup('user', calls[1])).toBe('Repo');\n    });\n\n    it('separates same-named methods in different classes via startIndex', () => {\n      const code = `\nclass UserService {\n  process(user: User) {\n    user.save();\n  }\n}\nclass RepoService {\n  process(repo: Repo) {\n    repo.save();\n  }\n}`;\n      const tree = parse(code, TypeScript.typescript);\n      const typeEnv = buildTypeEnv(tree, 'typescript');\n\n      // Find the call nodes inside each process method\n      const calls: any[] = [];\n      function findCalls(node: any) {\n        if (node.type === 'call_expression') calls.push(node);\n        for (let i = 0; i < node.childCount; i++) {\n          findCalls(node.child(i));\n        }\n      }\n      findCalls(tree.rootNode);\n\n      expect(calls.length).toBe(2);\n      // First call inside UserService.process → user should be User\n      expect(typeEnv.lookup('user', calls[0])).toBe('User');\n      // Second call inside RepoService.process → repo should be Repo\n      expect(typeEnv.lookup('repo', calls[1])).toBe('Repo');\n    });\n\n    it('file-level variables are accessible from all scopes', () => {\n      const tree = parse(`\n        const config: Config = getConfig();\n        function process(user: User) {\n          config.validate();\n          user.save();\n        }\n      `, TypeScript.typescript);\n      const typeEnv = 
buildTypeEnv(tree, 'typescript');\n\n      // config is at file-level scope\n      const fileScope = typeEnv.env.get('');\n      expect(fileScope?.get('config')).toBe('Config');\n\n      // user is in process scope (key includes startIndex)\n      // Find call nodes inside the process function\n      const calls: any[] = [];\n      function findCalls(node: any) {\n        if (node.type === 'call_expression') calls.push(node);\n        for (let i = 0; i < node.childCount; i++) findCalls(node.child(i));\n      }\n      findCalls(tree.rootNode);\n      // calls[0] = getConfig() at file level, calls[1] = config.validate(), calls[2] = user.save()\n      expect(typeEnv.lookup('user', calls[2])).toBe('User');\n      // config is file-level, accessible from any scope\n      expect(typeEnv.lookup('config', calls[1])).toBe('Config');\n    });\n  });\n\n  describe('destructuring patterns (known limitations)', () => {\n    it('captures the typed source variable but not destructured bindings', () => {\n      const tree = parse(`\n        const user: User = getUser();\n        const { name, email } = user;\n      `, TypeScript.typescript);\n      const { env } = buildTypeEnv(tree, 'typescript');\n      // The typed variable is captured\n      expect(flatGet(env, 'user')).toBe('User');\n      // Destructured bindings (name, email) would need type inference to resolve\n      // — not extractable from annotations alone\n      expect(flatGet(env, 'name')).toBeUndefined();\n      expect(flatGet(env, 'email')).toBeUndefined();\n    });\n\n    it('does not extract from object-type-annotated destructuring', () => {\n      // TypeScript allows: const { name }: { name: string } = user;\n      // The annotation is on the whole pattern, not individual bindings\n      const tree = parse(`\n        const { name }: { name: string } = getUser();\n      `, TypeScript.typescript);\n      const { env } = buildTypeEnv(tree, 'typescript');\n      // Complex type annotation (object type) — 
extractSimpleTypeName returns undefined\n      expect(flatSize(env)).toBe(0);\n    });\n  });\n\n  describe('constructor inference (Tier 1 fallback)', () => {\n    describe('TypeScript', () => {\n      it('infers type from new expression when no annotation', () => {\n        const tree = parse('const user = new User();', TypeScript.typescript);\n        const { env } = buildTypeEnv(tree, 'typescript');\n        expect(flatGet(env, 'user')).toBe('User');\n      });\n\n      it('prefers explicit annotation over constructor inference', () => {\n        const tree = parse('const user: BaseUser = new User();', TypeScript.typescript);\n        const { env } = buildTypeEnv(tree, 'typescript');\n        expect(flatGet(env, 'user')).toBe('BaseUser');\n      });\n\n      it('infers from namespaced constructor: new ns.Service()', () => {\n        // extractSimpleTypeName handles member_expression via property_identifier\n        const tree = parse('const svc = new ns.Service();', TypeScript.typescript);\n        const { env } = buildTypeEnv(tree, 'typescript');\n        expect(flatGet(env, 'svc')).toBe('Service');\n      });\n\n      it('infers type from new expression with as cast', () => {\n        const tree = parse('const x = new User() as BaseUser;', TypeScript.typescript);\n        const { env } = buildTypeEnv(tree, 'typescript');\n        // Unwraps as_expression to find the inner new_expression → User\n        expect(flatGet(env, 'x')).toBe('User');\n      });\n\n      it('infers type from new expression with non-null assertion', () => {\n        const tree = parse('const x = new User()!;', TypeScript.typescript);\n        const { env } = buildTypeEnv(tree, 'typescript');\n        // Unwraps non_null_expression to find the inner new_expression → User\n        expect(flatGet(env, 'x')).toBe('User');\n      });\n\n      it('infers type from double-cast (new X() as unknown as T)', () => {\n        const tree = parse('const x = new User() as unknown as Admin;', 
TypeScript.typescript);\n        const { env } = buildTypeEnv(tree, 'typescript');\n        // Unwraps nested as_expression to find inner new_expression → User\n        expect(flatGet(env, 'x')).toBe('User');\n      });\n\n      it('ignores non-new assignments', () => {\n        const tree = parse('const x = getUser();', TypeScript.typescript);\n        const { env } = buildTypeEnv(tree, 'typescript');\n        expect(flatSize(env)).toBe(0);\n      });\n\n      it('handles mixed annotated + unannotated declarators', () => {\n        const tree = parse('const a: A = getA(), b = new B();', TypeScript.typescript);\n        const { env } = buildTypeEnv(tree, 'typescript');\n        expect(flatGet(env, 'a')).toBe('A');\n        expect(flatGet(env, 'b')).toBe('B');\n      });\n    });\n\n    describe('Java', () => {\n      it('infers type from var with new expression (Java 10+)', () => {\n        const tree = parse(`\n          class App {\n            void run() {\n              var user = new User();\n            }\n          }\n        `, Java);\n        const { env } = buildTypeEnv(tree, 'java');\n        expect(flatGet(env, 'user')).toBe('User');\n      });\n\n      it('prefers explicit type over constructor inference', () => {\n        const tree = parse(`\n          class App {\n            void run() {\n              User user = new User();\n            }\n          }\n        `, Java);\n        const { env } = buildTypeEnv(tree, 'java');\n        expect(flatGet(env, 'user')).toBe('User');\n      });\n\n      it('does not infer from var without new expression', () => {\n        const tree = parse(`\n          class App {\n            void run() {\n              var x = getUser();\n            }\n          }\n        `, Java);\n        const { env } = buildTypeEnv(tree, 'java');\n        expect(flatGet(env, 'x')).toBeUndefined();\n      });\n    });\n\n    describe('Rust', () => {\n      it('infers type from Type::new()', () => {\n        const tree = parse(`\n    
      fn main() {\n            let user = User::new();\n          }\n        `, Rust);\n        const { env } = buildTypeEnv(tree, 'rust');\n        expect(flatGet(env, 'user')).toBe('User');\n      });\n\n      it('infers type from Type::default()', () => {\n        const tree = parse(`\n          fn main() {\n            let config = Config::default();\n          }\n        `, Rust);\n        const { env } = buildTypeEnv(tree, 'rust');\n        expect(flatGet(env, 'config')).toBe('Config');\n      });\n\n      it('does NOT emit scanner binding for Type::default() (handled by extractInitializer)', () => {\n        const tree = parse(`\n          fn main() {\n            let config = Config::default();\n          }\n        `, Rust);\n        const { constructorBindings } = buildTypeEnv(tree, 'rust');\n        // ::default() should be excluded from scanConstructorBinding just like ::new()\n        // extractInitializer already resolves it, so a scanner binding would be redundant\n        const defaultBinding = constructorBindings.find(b => b.calleeName === 'default');\n        expect(defaultBinding).toBeUndefined();\n      });\n\n      it('does NOT emit scanner binding for Type::new() (handled by extractInitializer)', () => {\n        const tree = parse(`\n          fn main() {\n            let user = User::new();\n          }\n        `, Rust);\n        const { constructorBindings } = buildTypeEnv(tree, 'rust');\n        const newBinding = constructorBindings.find(b => b.calleeName === 'new');\n        expect(newBinding).toBeUndefined();\n      });\n\n      it('prefers explicit annotation over constructor inference', () => {\n        // Uses DIFFERENT types to catch Tier 0 overwrite bugs\n        const tree = parse(`\n          fn main() {\n            let user: BaseUser = Admin::new();\n          }\n        `, Rust);\n        const { env } = buildTypeEnv(tree, 'rust');\n        expect(flatGet(env, 'user')).toBe('BaseUser');\n      });\n\n      it('infers type 
from let mut with ::new()', () => {\n        const tree = parse(`\n          fn main() {\n            let mut user = User::new();\n          }\n        `, Rust);\n        const { env } = buildTypeEnv(tree, 'rust');\n        expect(flatGet(env, 'user')).toBe('User');\n      });\n\n      it('resolves Self::new() to enclosing impl type', () => {\n        const tree = parse(`\n          struct User {}\n          impl User {\n            fn create() -> Self {\n              let instance = Self::new();\n            }\n          }\n        `, Rust);\n        const { env } = buildTypeEnv(tree, 'rust');\n        expect(flatGet(env, 'instance')).toBe('User');\n      });\n\n      it('resolves Self::default() to enclosing impl type', () => {\n        const tree = parse(`\n          struct Config {}\n          impl Config {\n            fn make() -> Self {\n              let cfg = Self::default();\n            }\n          }\n        `, Rust);\n        const { env } = buildTypeEnv(tree, 'rust');\n        expect(flatGet(env, 'cfg')).toBe('Config');\n      });\n\n      it('skips Self::new() outside impl block', () => {\n        const tree = parse(`\n          fn main() {\n            let x = Self::new();\n          }\n        `, Rust);\n        const { env } = buildTypeEnv(tree, 'rust');\n        expect(flatGet(env, 'x')).toBeUndefined();\n      });\n\n      it('does not infer from Type::other_method()', () => {\n        const tree = parse(`\n          fn main() {\n            let user = User::from_str(\"alice\");\n          }\n        `, Rust);\n        const { env } = buildTypeEnv(tree, 'rust');\n        expect(flatGet(env, 'user')).toBeUndefined();\n      });\n\n      it('infers type from struct literal (User { ... 
})', () => {\n        const tree = parse(`\n          fn main() {\n            let user = User { name: \"alice\", age: 30 };\n          }\n        `, Rust);\n        const { env } = buildTypeEnv(tree, 'rust');\n        expect(flatGet(env, 'user')).toBe('User');\n      });\n\n      it('infers type from empty struct literal (Config {})', () => {\n        const tree = parse(`\n          fn main() {\n            let config = Config {};\n          }\n        `, Rust);\n        const { env } = buildTypeEnv(tree, 'rust');\n        expect(flatGet(env, 'config')).toBe('Config');\n      });\n\n      it('prefers explicit annotation over struct literal inference', () => {\n        const tree = parse(`\n          fn main() {\n            let user: BaseUser = Admin { name: \"alice\" };\n          }\n        `, Rust);\n        const { env } = buildTypeEnv(tree, 'rust');\n        expect(flatGet(env, 'user')).toBe('BaseUser');\n      });\n\n      it('resolves Self {} struct literal to enclosing impl type', () => {\n        const tree = parse(`\n          struct User { name: String }\n          impl User {\n            fn reset(&self) -> Self {\n              let fresh = Self { name: String::new() };\n            }\n          }\n        `, Rust);\n        const { env } = buildTypeEnv(tree, 'rust');\n        expect(flatGet(env, 'fresh')).toBe('User');\n      });\n\n      it('skips Self {} outside impl block', () => {\n        const tree = parse(`\n          fn main() {\n            let x = Self { name: String::new() };\n          }\n        `, Rust);\n        const { env } = buildTypeEnv(tree, 'rust');\n        expect(flatGet(env, 'x')).toBeUndefined();\n      });\n    });\n\n    describe('Rust if-let / while-let pattern bindings', () => {\n      it('extracts type from captured_pattern in if let (user @ User { .. })', () => {\n        const tree = parse(`\n          fn process() {\n            if let user @ User { .. 
} = get_user() {\n              user.save();\n            }\n          }\n        `, Rust);\n        const { env } = buildTypeEnv(tree, 'rust');\n        expect(flatGet(env, 'user')).toBe('User');\n      });\n\n      it('extracts type from nested captured_pattern in if let Some(user @ User { .. })', () => {\n        const tree = parse(`\n          fn process(opt: Option<User>) {\n            if let Some(user @ User { .. }) = opt {\n              user.save();\n            }\n          }\n        `, Rust);\n        const { env } = buildTypeEnv(tree, 'rust');\n        expect(flatGet(env, 'user')).toBe('User');\n      });\n\n      it('extracts type from captured_pattern in while let', () => {\n        const tree = parse(`\n          fn process() {\n            while let item @ Config { .. } = iter.next() {\n              item.validate();\n            }\n          }\n        `, Rust);\n        const { env } = buildTypeEnv(tree, 'rust');\n        expect(flatGet(env, 'item')).toBe('Config');\n      });\n\n      it('extracts binding from if let Some(x) = opt via Phase 5.2 pattern binding', () => {\n        const tree = parse(`\n          fn process(opt: Option<User>) {\n            if let Some(user) = opt {\n              user.save();\n            }\n          }\n        `, Rust);\n        const { env } = buildTypeEnv(tree, 'rust');\n        // Option<User> is unwrapped to \"User\" in TypeEnv via NULLABLE_WRAPPER_TYPES.\n        // extractPatternBinding maps `user` → \"User\" from the scopeEnv lookup for `opt`.\n        expect(flatGet(env, 'user')).toBe('User');\n      });\n\n      it('does NOT extract field bindings from struct pattern destructuring', () => {\n        const tree = parse(`\n          fn process(val: User) {\n            if let User { name } = val {\n              name.len();\n            }\n          }\n        `, Rust);\n        const { env } = buildTypeEnv(tree, 'rust');\n        // 'name' is a field of User — we don't know its type without field-type 
resolution\n        expect(flatGet(env, 'name')).toBeUndefined();\n        // 'val' should still be extracted from the parameter annotation\n        expect(flatGet(env, 'val')).toBe('User');\n      });\n\n      it('extracts type from scoped struct pattern (Message::Data)', () => {\n        const tree = parse(`\n          fn process() {\n            if let msg @ Message::Data { .. } = get_msg() {\n              msg.process();\n            }\n          }\n        `, Rust);\n        const { env } = buildTypeEnv(tree, 'rust');\n        // scoped_type_identifier: Message::Data — extractSimpleTypeName returns \"Data\"\n        expect(flatGet(env, 'msg')).toBe('Data');\n      });\n\n      it('still extracts parameter types alongside if-let bindings', () => {\n        const tree = parse(`\n          fn process(opt: Option<User>) {\n            if let user @ User { .. } = get_user() {\n              user.save();\n            }\n          }\n        `, Rust);\n        const { env } = buildTypeEnv(tree, 'rust');\n        // Option<User> unwraps to User (nullable wrapper unwrapping)\n        expect(flatGet(env, 'opt')).toBe('User');\n        expect(flatGet(env, 'user')).toBe('User');\n      });\n\n      it('Phase 5.2: extracts binding from if let Some(x) = opt where opt: Option<User>', () => {\n        const tree = parse(`\n          fn process(opt: Option<User>) {\n            if let Some(user) = opt {\n              user.save();\n            }\n          }\n        `, Rust);\n        const { env } = buildTypeEnv(tree, 'rust');\n        // opt: Option<User> → scopeEnv stores \"User\" (NULLABLE_WRAPPER_TYPES unwrapping)\n        // extractPatternBinding maps user → opt's type → \"User\"\n        expect(flatGet(env, 'user')).toBe('User');\n      });\n\n      it('Phase 5.2: does NOT extract binding when source variable is unknown', () => {\n        const tree = parse(`\n          fn process() {\n            if let Some(x) = unknown_var {\n              x.foo();\n            }\n  
        }\n        `, Rust);\n        const { env } = buildTypeEnv(tree, 'rust');\n        // unknown_var is not in scopeEnv — conservative, produces no binding\n        expect(flatGet(env, 'x')).toBeUndefined();\n      });\n\n      it('Phase 5.2: does NOT extract binding for non-Option/Result wrappers', () => {\n        const tree = parse(`\n          fn process(vec: Vec<User>) {\n            if let SomeOtherVariant(x) = vec {\n              x.save();\n            }\n          }\n        `, Rust);\n        const { env } = buildTypeEnv(tree, 'rust');\n        // SomeOtherVariant is not a known unwrap wrapper — no binding\n        expect(flatGet(env, 'x')).toBeUndefined();\n      });\n    });\n\n    describe('Java instanceof pattern variable (Phase 5.2)', () => {\n      it('extracts binding from x instanceof User user', () => {\n        const tree = parse(`\n          class App {\n            void process(Object obj) {\n              if (obj instanceof User user) {\n                user.save();\n              }\n            }\n          }\n        `, Java);\n        const { env } = buildTypeEnv(tree, 'java');\n        expect(flatGet(env, 'user')).toBe('User');\n      });\n\n      it('extracts boolean type from plain instanceof (no pattern variable)', () => {\n        const tree = parse(`\n          class App {\n            void process(Object obj) {\n              boolean b = obj instanceof User;\n            }\n          }\n        `, Java);\n        const { env } = buildTypeEnv(tree, 'java');\n        // No pattern variable — b gets its declared type 'boolean', not 'User'\n        expect(flatGet(env, 'b')).toBe('boolean');\n      });\n\n      it('extracts correct type when multiple instanceof patterns exist', () => {\n        const tree = parse(`\n          class App {\n            void process(Object obj) {\n              if (obj instanceof User user) {\n                user.save();\n              }\n              if (obj instanceof Repo repo) {\n                
repo.save();\n              }\n            }\n          }\n        `, Java);\n        const { env } = buildTypeEnv(tree, 'java');\n        expect(flatGet(env, 'user')).toBe('User');\n        expect(flatGet(env, 'repo')).toBe('Repo');\n      });\n    });\n\n    describe('PHP', () => {\n      it('infers type from new expression', () => {\n        const tree = parse(`<?php\n          $user = new User();\n        `, PHP.php);\n        const { env } = buildTypeEnv(tree, 'php');\n        expect(flatGet(env, '$user')).toBe('User');\n      });\n\n      it('resolves new self() and new static() to enclosing class', () => {\n        const tree = parse(`<?php\n          class Foo {\n            function make() {\n              $a = new self();\n              $b = new static();\n            }\n          }\n        `, PHP.php);\n        const { env } = buildTypeEnv(tree, 'php');\n        expect(flatGet(env, '$a')).toBe('Foo');\n        expect(flatGet(env, '$b')).toBe('Foo');\n      });\n\n      it('resolves new parent() to superclass', () => {\n        const tree = parse(`<?php\n          class Bar {}\n          class Foo extends Bar {\n            function make() {\n              $p = new parent();\n            }\n          }\n        `, PHP.php);\n        const { env } = buildTypeEnv(tree, 'php');\n        expect(flatGet(env, '$p')).toBe('Bar');\n      });\n\n      it('skips self/static/parent outside class scope', () => {\n        const tree = parse(`<?php\n          $a = new self();\n        `, PHP.php);\n        const { env } = buildTypeEnv(tree, 'php');\n        expect(flatGet(env, '$a')).toBeUndefined();\n      });\n\n      it('does not infer from non-new assignments', () => {\n        const tree = parse(`<?php\n          $user = getUser();\n        `, PHP.php);\n        const { env } = buildTypeEnv(tree, 'php');\n        expect(flatGet(env, '$user')).toBeUndefined();\n      });\n    });\n\n    describe('C++', () => {\n      it('infers type from auto with new expression', 
() => {\n        const tree = parse(`\n          void run() {\n            auto user = new User();\n          }\n        `, CPP);\n        const { env } = buildTypeEnv(tree, 'cpp');\n        expect(flatGet(env, 'user')).toBe('User');\n      });\n\n      it('infers type from auto with direct construction when class is defined', () => {\n        const tree = parse(`\n          class User {};\n          void run() {\n            auto user = User();\n          }\n        `, CPP);\n        const { env } = buildTypeEnv(tree, 'cpp');\n        expect(flatGet(env, 'user')).toBe('User');\n      });\n\n      it('prefers explicit type over auto inference', () => {\n        const tree = parse(`\n          void run() {\n            User* user = new User();\n          }\n        `, CPP);\n        const { env } = buildTypeEnv(tree, 'cpp');\n        expect(flatGet(env, 'user')).toBe('User');\n      });\n\n      it('does not infer from auto with function call (not a known class)', () => {\n        const tree = parse(`\n          class User {};\n          User getUser() { return User(); }\n          void run() {\n            auto x = getUser();\n          }\n        `, CPP);\n        const { env } = buildTypeEnv(tree, 'cpp');\n        // getUser is an identifier but NOT a known class — no inference\n        expect(flatGet(env, 'x')).toBeUndefined();\n      });\n\n      it('infers type from brace initialization (User{})', () => {\n        const tree = parse(`\n          class User {};\n          void run() {\n            auto user = User{};\n          }\n        `, CPP);\n        const { env } = buildTypeEnv(tree, 'cpp');\n        expect(flatGet(env, 'user')).toBe('User');\n      });\n\n      it('infers type from brace initialization with args (User{1,2})', () => {\n        const tree = parse(`\n          class Config {};\n          void run() {\n            auto cfg = Config{1, 2};\n          }\n        `, CPP);\n        const { env } = buildTypeEnv(tree, 'cpp');\n        
expect(flatGet(env, 'cfg')).toBe('Config');\n      });\n\n      it('infers type from namespaced brace-init (ns::User{})', () => {\n        const tree = parse(`\n          namespace ns { class User {}; }\n          void run() {\n            auto user = ns::User{};\n          }\n        `, CPP);\n        const { env } = buildTypeEnv(tree, 'cpp');\n        expect(flatGet(env, 'user')).toBe('User');\n      });\n    });\n\n    describe('Kotlin constructor inference', () => {\n      it('still extracts explicit type annotations', () => {\n        const tree = parse(`\n          fun main() {\n            val user: User = User()\n          }\n        `, Kotlin);\n        const { env } = buildTypeEnv(tree, 'kotlin');\n        expect(flatGet(env, 'user')).toBe('User');\n      });\n\n      it('infers type from constructor call when class is in same file', () => {\n        const tree = parse(`\n          class User(val name: String)\n          fun main() {\n            val user = User(\"Alice\")\n          }\n        `, Kotlin);\n        const { env } = buildTypeEnv(tree, 'kotlin');\n        expect(flatGet(env, 'user')).toBe('User');\n      });\n\n      it('does NOT infer type from plain function call', () => {\n        const tree = parse(`\n          fun getUser(): User = User(\"Alice\")\n          fun main() {\n            val user = getUser()\n          }\n        `, Kotlin);\n        const { env } = buildTypeEnv(tree, 'kotlin');\n        // getUser is not a class name — should NOT produce a binding\n        expect(flatGet(env, 'user')).toBeUndefined();\n      });\n\n      it('infers type from constructor when class defined via SymbolTable', () => {\n        const tree = parse(`\n          fun main() {\n            val user = User(\"Alice\")\n          }\n        `, Kotlin);\n        // User is NOT defined in this file, but SymbolTable knows it's a Class\n        const mockSymbolTable = {\n          lookupFuzzy: (name: string) =>\n            name === 'User' ? 
[{ nodeId: 'n1', filePath: 'models.kt', type: 'Class' }] : [],\n          lookupExact: () => undefined,\n          lookupExactFull: () => undefined,\n          add: () => {},\n          getStats: () => ({ fileCount: 0, globalSymbolCount: 0 }),\n          clear: () => {},\n        };\n        const { env } = buildTypeEnv(tree, 'kotlin', { symbolTable: mockSymbolTable as any });\n        expect(flatGet(env, 'user')).toBe('User');\n      });\n\n      it('does NOT infer when SymbolTable says callee is a Function', () => {\n        const tree = parse(`\n          fun main() {\n            val result = doStuff()\n          }\n        `, Kotlin);\n        const mockSymbolTable = {\n          lookupFuzzy: (name: string) =>\n            name === 'doStuff' ? [{ nodeId: 'n1', filePath: 'utils.kt', type: 'Function' }] : [],\n          lookupFuzzyCallable: () => [],\n          lookupFieldByOwner: () => undefined,\n          lookupExact: () => undefined,\n          lookupExactFull: () => undefined,\n          add: () => {},\n          getStats: () => ({ fileCount: 0, globalSymbolCount: 0 }),\n          clear: () => {},\n        };\n        const { env } = buildTypeEnv(tree, 'kotlin', { symbolTable: mockSymbolTable as any });\n        expect(flatGet(env, 'result')).toBeUndefined();\n      });\n\n      it('prefers explicit annotation over constructor inference', () => {\n        const tree = parse(`\n          class User(val name: String)\n          fun main() {\n            val user: BaseEntity = User(\"Alice\")\n          }\n        `, Kotlin);\n        const { env } = buildTypeEnv(tree, 'kotlin');\n        // Tier 0 (explicit annotation) wins over Tier 1 (constructor inference)\n        expect(flatGet(env, 'user')).toBe('BaseEntity');\n      });\n    });\n\n    describe('Python constructor inference', () => {\n      it('infers type from direct constructor call when class is known', () => {\n        const tree = parse(`\nclass User:\n    pass\n\ndef main():\n    user = 
User(\"alice\")\n`, Python);\n        const { env } = buildTypeEnv(tree, 'python');\n        expect(flatGet(env, 'user')).toBe('User');\n      });\n\n      it('infers type from qualified constructor call (models.User)', () => {\n        const tree = parse(`\nclass User:\n    pass\n\ndef main():\n    user = models.User(\"alice\")\n`, Python);\n        const { env } = buildTypeEnv(tree, 'python');\n        // extractSimpleTypeName extracts \"User\" from attribute node \"models.User\"\n        expect(flatGet(env, 'user')).toBe('User');\n      });\n\n      it('does not infer from plain function call', () => {\n        const tree = parse(`\ndef main():\n    user = get_user()\n`, Python);\n        const { env } = buildTypeEnv(tree, 'python');\n        expect(flatGet(env, 'user')).toBeUndefined();\n      });\n    });\n\n    describe('Python walrus operator type inference', () => {\n      it('infers type from walrus operator with constructor call', () => {\n        const tree = parse(`\nclass User:\n    pass\n\ndef main():\n    if (user := User(\"alice\")):\n        pass\n`, Python);\n        const { env } = buildTypeEnv(tree, 'python');\n        expect(flatGet(env, 'user')).toBe('User');\n      });\n\n      it('does not infer type from walrus operator without known class', () => {\n        const tree = parse(`\ndef main():\n    if (data := get_data()):\n        pass\n`, Python);\n        const { env } = buildTypeEnv(tree, 'python');\n        expect(flatGet(env, 'data')).toBeUndefined();\n      });\n    });\n  });\n\n  describe('edge cases', () => {\n    it('returns empty map for code without type annotations', () => {\n      const tree = parse('const x = 5;', TypeScript.typescript);\n      const { env } = buildTypeEnv(tree, 'typescript');\n      expect(flatSize(env)).toBe(0);\n    });\n\n    it('last-write-wins for same variable name in same scope', () => {\n      const tree = parse(`\n        let x: User = getUser();\n        let x: Admin = getAdmin();\n      `, 
TypeScript.typescript);\n      const { env } = buildTypeEnv(tree, 'typescript');\n      // Both declarations are at file level; last one wins\n      expect(flatGet(env, 'x')).toBeDefined();\n    });\n  });\n\n  describe('generic parent class resolution', () => {\n    it('resolves super through generic parent (TypeScript)', () => {\n      const code = `\nclass BaseModel<T> {\n  save(): T { return {} as T; }\n}\nclass User extends BaseModel<string> {\n  save(): string {\n    super.save();\n    return \"ok\";\n  }\n}`;\n      const tree = parse(code, TypeScript.typescript);\n      const typeEnv = buildTypeEnv(tree, 'typescript');\n\n      const calls: any[] = [];\n      function findCalls(node: any) {\n        if (node.type === 'call_expression') calls.push(node);\n        for (let i = 0; i < node.childCount; i++) findCalls(node.child(i));\n      }\n      findCalls(tree.rootNode);\n\n      const superCall = calls.find((c: any) => c.text.includes('super'));\n      expect(superCall).toBeDefined();\n      // Should resolve to \"BaseModel\", not \"BaseModel<string>\"\n      expect(typeEnv.lookup('super', superCall)).toBe('BaseModel');\n    });\n\n    it('resolves super through generic parent (Java)', () => {\n      const code = `\nclass BaseModel<T> {\n  T save() { return null; }\n}\nclass User extends BaseModel<String> {\n  String save() {\n    super.save();\n    return \"ok\";\n  }\n}`;\n      const tree = parse(code, Java);\n      const typeEnv = buildTypeEnv(tree, 'java');\n\n      const calls: any[] = [];\n      function findCalls(node: any) {\n        if (node.type === 'method_invocation') calls.push(node);\n        for (let i = 0; i < node.childCount; i++) findCalls(node.child(i));\n      }\n      findCalls(tree.rootNode);\n\n      const superCall = calls.find((c: any) => c.text.includes('super'));\n      expect(superCall).toBeDefined();\n      // Should resolve to \"BaseModel\", not \"BaseModel<String>\"\n      expect(typeEnv.lookup('super', 
superCall)).toBe('BaseModel');\n    });\n\n    it('resolves super() through plain parent class (Python)', () => {\n      const code = `\nclass Model:\n    def save(self):\n        pass\n\nclass User(Model):\n    def save(self):\n        super().save()\n`;\n      const tree = parse(code, Python);\n      const typeEnv = buildTypeEnv(tree, 'python');\n\n      const calls: any[] = [];\n      function findCalls(node: any) {\n        if (node.type === 'call') calls.push(node);\n        for (let i = 0; i < node.childCount; i++) findCalls(node.child(i));\n      }\n      findCalls(tree.rootNode);\n\n      const superCall = calls.find((c: any) => c.text.includes('super'));\n      expect(superCall).toBeDefined();\n      expect(typeEnv.lookup('super', superCall)).toBe('Model');\n    });\n\n    it('resolves super through generic parent (C#)', () => {\n      const code = `\nclass BaseModel<T> {\n  public T Save() { return default; }\n}\nclass User : BaseModel<string> {\n  public string Save() {\n    base.Save();\n    return \"ok\";\n  }\n}`;\n      const tree = parse(code, CSharp);\n      const typeEnv = buildTypeEnv(tree, 'csharp');\n\n      const calls: any[] = [];\n      function findCalls(node: any) {\n        if (node.type === 'invocation_expression') calls.push(node);\n        for (let i = 0; i < node.childCount; i++) findCalls(node.child(i));\n      }\n      findCalls(tree.rootNode);\n\n      const baseCall = calls.find((c: any) => c.text.includes('base'));\n      expect(baseCall).toBeDefined();\n      // Should resolve to \"BaseModel\", not \"BaseModel<string>\"\n      expect(typeEnv.lookup('base', baseCall)).toBe('BaseModel');\n    });\n  });\n\n  describe('C++ namespaced constructor binding', () => {\n    it('infers type from auto with namespaced constructor (ns::User)', () => {\n      const tree = parse(`\n        namespace ns {\n          class HttpClient {};\n        }\n        void run() {\n          auto client = ns::HttpClient();\n        }\n      `,
CPP);\n      const { constructorBindings } = buildTypeEnv(tree, 'cpp');\n      // Should extract \"HttpClient\" from the scoped_identifier ns::HttpClient\n      const binding = constructorBindings.find(b => b.varName === 'client');\n      expect(binding).toBeDefined();\n      expect(binding!.calleeName).toBe('HttpClient');\n    });\n\n    it('does not extract from non-namespaced plain identifier (existing behavior)', () => {\n      const tree = parse(`\n        class User {};\n        void run() {\n          auto user = User();\n        }\n      `, CPP);\n      const { env, constructorBindings } = buildTypeEnv(tree, 'cpp');\n      // User() with known class resolves via extractInitializer, not constructor bindings\n      expect(flatGet(env, 'user')).toBe('User');\n      // No unresolved bindings since User is locally known\n      expect(constructorBindings.find(b => b.varName === 'user')).toBeUndefined();\n    });\n  });\n\n  describe('constructorBindings merged into buildTypeEnv', () => {\n    it('returns constructor bindings for Kotlin val x = UnknownClass()', () => {\n      const tree = parse(`\n        fun main() {\n          val user = UnknownClass()\n        }\n      `, Kotlin);\n      const { env, constructorBindings } = buildTypeEnv(tree, 'kotlin');\n      // UnknownClass is not defined locally — should appear as unverified binding\n      expect(flatGet(env, 'user')).toBeUndefined();\n      expect(constructorBindings.length).toBe(1);\n      expect(constructorBindings[0].varName).toBe('user');\n      expect(constructorBindings[0].calleeName).toBe('UnknownClass');\n    });\n\n    it('does NOT emit constructor binding when TypeEnv already resolved', () => {\n      const tree = parse(`\n        fun main() {\n          val user: User = User()\n        }\n      `, Kotlin);\n      const { env, constructorBindings } = buildTypeEnv(tree, 'kotlin');\n      // Explicit annotation resolves it — no unverified binding needed\n      expect(flatGet(env, 
'user')).toBe('User');\n      expect(constructorBindings.find(b => b.varName === 'user')).toBeUndefined();\n    });\n\n    it('returns constructor bindings for Python x = SomeClass()', () => {\n      const tree = parse(`\ndef main():\n    user = SomeClass()\n`, Python);\n      const { env, constructorBindings } = buildTypeEnv(tree, 'python');\n      expect(flatGet(env, 'user')).toBeUndefined();\n      expect(constructorBindings.length).toBe(1);\n      expect(constructorBindings[0].varName).toBe('user');\n      expect(constructorBindings[0].calleeName).toBe('SomeClass');\n    });\n\n    it('returns constructor bindings for Python qualified call (models.User)', () => {\n      const tree = parse(`\ndef main():\n    user = models.User(\"alice\")\n`, Python);\n      const { constructorBindings } = buildTypeEnv(tree, 'python');\n      expect(constructorBindings.length).toBe(1);\n      expect(constructorBindings[0].varName).toBe('user');\n      expect(constructorBindings[0].calleeName).toBe('User');\n    });\n\n    it('returns constructor bindings for Python walrus operator (user := SomeClass())', () => {\n      const tree = parse(`\ndef main():\n    if (user := SomeClass()):\n        pass\n`, Python);\n      const { env, constructorBindings } = buildTypeEnv(tree, 'python');\n      expect(flatGet(env, 'user')).toBeUndefined();\n      expect(constructorBindings.length).toBe(1);\n      expect(constructorBindings[0].varName).toBe('user');\n      expect(constructorBindings[0].calleeName).toBe('SomeClass');\n    });\n\n    it('returns empty bindings for language without scanner (Go)', () => {\n      const tree = parse(`\n        package main\n        func main() {\n          var x int = 5\n        }\n      `, Go);\n      const { constructorBindings } = buildTypeEnv(tree, 'go');\n      expect(constructorBindings).toEqual([]);\n    });\n\n    it('returns constructor bindings for Ruby constant assignment (REPO = Repo.new)', () => {\n      const tree = parse(`\nREPO = 
Repo.new\n`, Ruby);\n      const { constructorBindings } = buildTypeEnv(tree, 'ruby');\n      expect(constructorBindings.length).toBe(1);\n      expect(constructorBindings[0].varName).toBe('REPO');\n      expect(constructorBindings[0].calleeName).toBe('Repo');\n    });\n\n    it('returns constructor bindings for Ruby namespaced constructor (service = Models::UserService.new)', () => {\n      const tree = parse(`\nservice = Models::UserService.new\n`, Ruby);\n      const { constructorBindings } = buildTypeEnv(tree, 'ruby');\n      expect(constructorBindings.length).toBe(1);\n      expect(constructorBindings[0].varName).toBe('service');\n      expect(constructorBindings[0].calleeName).toBe('UserService');\n    });\n\n    it('returns constructor bindings for deeply namespaced Ruby constructor (svc = App::Models::Service.new)', () => {\n      const tree = parse(`\nsvc = App::Models::Service.new\n`, Ruby);\n      const { constructorBindings } = buildTypeEnv(tree, 'ruby');\n      expect(constructorBindings.length).toBe(1);\n      expect(constructorBindings[0].varName).toBe('svc');\n      expect(constructorBindings[0].calleeName).toBe('Service');\n    });\n\n    it('includes scope key in constructor bindings', () => {\n      const tree = parse(`\n        fun process() {\n          val user = RemoteUser()\n        }\n      `, Kotlin);\n      const { constructorBindings } = buildTypeEnv(tree, 'kotlin');\n      expect(constructorBindings.length).toBe(1);\n      expect(constructorBindings[0].scope).toMatch(/^process@\\d+$/);\n    });\n\n    it('returns constructor bindings for TypeScript const user = getUser()', () => {\n      const tree = parse('const user = getUser();', TypeScript.typescript);\n      const { env, constructorBindings } = buildTypeEnv(tree, 'typescript');\n      expect(flatGet(env, 'user')).toBeUndefined();\n      expect(constructorBindings.length).toBe(1);\n      expect(constructorBindings[0].varName).toBe('user');\n      
expect(constructorBindings[0].calleeName).toBe('getUser');\n    });\n\n    it('does NOT emit constructor binding when TypeScript var has explicit type annotation', () => {\n      const tree = parse('const user: User = getUser();', TypeScript.typescript);\n      const { env, constructorBindings } = buildTypeEnv(tree, 'typescript');\n      expect(flatGet(env, 'user')).toBe('User');\n      expect(constructorBindings.find(b => b.varName === 'user')).toBeUndefined();\n    });\n\n    it('skips destructuring patterns (array_pattern) for TypeScript', () => {\n      const tree = parse('const [a, b] = getPair();', TypeScript.typescript);\n      const { constructorBindings } = buildTypeEnv(tree, 'typescript');\n      expect(constructorBindings).toEqual([]);\n    });\n\n    it('skips destructuring patterns (object_pattern) for TypeScript', () => {\n      const tree = parse('const { name, age } = getUser();', TypeScript.typescript);\n      const { constructorBindings } = buildTypeEnv(tree, 'typescript');\n      expect(constructorBindings).toEqual([]);\n    });\n\n    it('unwraps await in TypeScript: const user = await fetchUser()', () => {\n      const tree = parse('async function f() { const user = await fetchUser(); }', TypeScript.typescript);\n      const { constructorBindings } = buildTypeEnv(tree, 'typescript');\n      expect(constructorBindings.length).toBe(1);\n      expect(constructorBindings[0].varName).toBe('user');\n      expect(constructorBindings[0].calleeName).toBe('fetchUser');\n    });\n\n    it('handles qualified callee in TypeScript: const user = repo.getUser()', () => {\n      const tree = parse('const user = repo.getUser();', TypeScript.typescript);\n      const { constructorBindings } = buildTypeEnv(tree, 'typescript');\n      expect(constructorBindings.length).toBe(1);\n      expect(constructorBindings[0].varName).toBe('user');\n      expect(constructorBindings[0].calleeName).toBe('getUser');\n    });\n\n    it('does not emit binding for TypeScript new 
expression (handled by extractInitializer)', () => {\n      const tree = parse('const user = new User();', TypeScript.typescript);\n      const { env, constructorBindings } = buildTypeEnv(tree, 'typescript');\n      expect(flatGet(env, 'user')).toBe('User');\n      expect(constructorBindings.find(b => b.varName === 'user')).toBeUndefined();\n    });\n\n    it('returns constructor binding for C# var user = svc.GetUser()', () => {\n      const tree = parse(`\n        class App {\n          void Run() {\n            var svc = new UserService();\n            var user = svc.GetUser(\"alice\");\n          }\n        }\n      `, CSharp);\n      const { constructorBindings } = buildTypeEnv(tree, 'csharp');\n      const binding = constructorBindings.find(b => b.varName === 'user');\n      expect(binding).toBeDefined();\n      expect(binding!.calleeName).toBe('GetUser');\n    });\n\n    it('unwraps .await in Rust: let user = get_user().await', () => {\n      const tree = parse(`\n        async fn process() {\n          let user = get_user().await;\n        }\n      `, Rust);\n      const { constructorBindings } = buildTypeEnv(tree, 'rust');\n      expect(constructorBindings.length).toBe(1);\n      expect(constructorBindings[0].varName).toBe('user');\n      expect(constructorBindings[0].calleeName).toBe('get_user');\n    });\n\n    it('unwraps await in C#: var user = await svc.GetUserAsync()', () => {\n      const tree = parse(`\n        class App {\n          async void Run() {\n            var svc = new UserService();\n            var user = await svc.GetUserAsync(\"alice\");\n          }\n        }\n      `, CSharp);\n      const { constructorBindings } = buildTypeEnv(tree, 'csharp');\n      const binding = constructorBindings.find(b => b.varName === 'user');\n      expect(binding).toBeDefined();\n      expect(binding!.calleeName).toBe('GetUserAsync');\n    });\n\n    it('returns constructor binding for C# var user = GetUser() (standalone call)', () => {\n      const tree 
= parse(`\n        class App {\n          void Run() {\n            var user = GetUser(\"alice\");\n          }\n        }\n      `, CSharp);\n      const { constructorBindings } = buildTypeEnv(tree, 'csharp');\n      const binding = constructorBindings.find(b => b.varName === 'user');\n      expect(binding).toBeDefined();\n      expect(binding!.calleeName).toBe('GetUser');\n    });\n  });\n\n  describe('assignment chain propagation (Tier 2, depth-1)', () => {\n    it('propagates explicit annotation: const a: User = ...; const b = a → b is User', () => {\n      const tree = parse(`\n        const a: User = getUser();\n        const b = a;\n      `, TypeScript.typescript);\n      const { env } = buildTypeEnv(tree, 'typescript');\n      expect(flatGet(env, 'a')).toBe('User');\n      expect(flatGet(env, 'b')).toBe('User');\n    });\n\n    it('propagates constructor inference: const a = new User(); const b = a → b is User', () => {\n      const tree = parse(`\n        const a = new User();\n        const b = a;\n      `, TypeScript.typescript);\n      const { env } = buildTypeEnv(tree, 'typescript');\n      expect(flatGet(env, 'a')).toBe('User');\n      expect(flatGet(env, 'b')).toBe('User');\n    });\n\n    it('depth-2 in declaration order resolves because single pass iterates sequentially', () => {\n      // b = a → resolved (a has User), c = b → also resolved because the same\n      // pass sets b before processing c (declarations are always in order).\n      // The \"depth-1\" limit applies to out-of-order or cyclic references.\n      const tree = parse(`\n        const a: User = getUser();\n        const b = a;\n        const c = b;\n      `, TypeScript.typescript);\n      const { env } = buildTypeEnv(tree, 'typescript');\n      expect(flatGet(env, 'a')).toBe('User');\n      expect(flatGet(env, 'b')).toBe('User');\n      expect(flatGet(env, 'c')).toBe('User');\n    });\n\n    it('propagates typed function parameter to local alias', () => {\n      const tree = 
parse(`\n        function process(user: User) {\n          const alias = user;\n          alias.save();\n        }\n      `, TypeScript.typescript);\n      const { env } = buildTypeEnv(tree, 'typescript');\n      // 'alias' should get User from the parameter 'user'\n      const scopeKey = [...env.keys()].find(k => k.startsWith('process@'));\n      expect(scopeKey).toBeDefined();\n      expect(env.get(scopeKey!)?.get('user')).toBe('User');\n      expect(env.get(scopeKey!)?.get('alias')).toBe('User');\n    });\n\n    it('propagates file-level typed variable to local alias inside function', () => {\n      const tree = parse(`\n        const config: Config = getConfig();\n        function process() {\n          const cfg = config;\n        }\n      `, TypeScript.typescript);\n      const { env } = buildTypeEnv(tree, 'typescript');\n      // cfg in process scope picks up Config from the file-level config binding\n      const scopeKey = [...env.keys()].find(k => k.startsWith('process@'));\n      expect(scopeKey).toBeDefined();\n      expect(env.get(scopeKey!)?.get('cfg')).toBe('Config');\n    });\n\n    it('does not propagate when RHS is a call expression (not a plain identifier)', () => {\n      const tree = parse(`\n        const x = getUser();\n      `, TypeScript.typescript);\n      const { env } = buildTypeEnv(tree, 'typescript');\n      // getUser() is a call_expression — should not create a binding\n      expect(flatGet(env, 'x')).toBeUndefined();\n    });\n  });\n\n  describe('stripNullable', () => {\n    it('strips User | null → User', () => {\n      expect(stripNullable('User | null')).toBe('User');\n    });\n\n    it('strips User | undefined → User', () => {\n      expect(stripNullable('User | undefined')).toBe('User');\n    });\n\n    it('strips User | null | undefined → User', () => {\n      expect(stripNullable('User | null | undefined')).toBe('User');\n    });\n\n    it('strips User? 
→ User', () => {\n      expect(stripNullable('User?')).toBe('User');\n    });\n\n    it('passes through User unchanged', () => {\n      expect(stripNullable('User')).toBe('User');\n    });\n\n    it('refuses genuine union User | Repo → undefined', () => {\n      expect(stripNullable('User | Repo')).toBeUndefined();\n    });\n\n    it('returns undefined for null alone', () => {\n      expect(stripNullable('null')).toBeUndefined();\n    });\n\n    it('returns undefined for empty string', () => {\n      expect(stripNullable('')).toBeUndefined();\n    });\n\n    it('strips User | void → User', () => {\n      expect(stripNullable('User | void')).toBe('User');\n    });\n\n    it('strips User | None → User (Python)', () => {\n      expect(stripNullable('User | None')).toBe('User');\n    });\n\n    it('strips User | nil → User (Ruby)', () => {\n      expect(stripNullable('User | nil')).toBe('User');\n    });\n\n    it('strips User | void | nil → User (multiple nullable keywords)', () => {\n      expect(stripNullable('User | void | nil')).toBe('User');\n    });\n\n    it('returns undefined for None alone', () => {\n      expect(stripNullable('None')).toBeUndefined();\n    });\n\n    it('returns undefined for nil alone', () => {\n      expect(stripNullable('nil')).toBeUndefined();\n    });\n\n    it('returns undefined for void alone', () => {\n      expect(stripNullable('void')).toBeUndefined();\n    });\n\n    it('returns undefined for undefined alone', () => {\n      expect(stripNullable('undefined')).toBeUndefined();\n    });\n\n    it('strips nullable suffix with spaces: User ? → User', () => {\n      expect(stripNullable(' User? 
')).toBe('User');\n    });\n\n    it('returns undefined for all-nullable union: null | undefined | void', () => {\n      expect(stripNullable('null | undefined | void')).toBeUndefined();\n    });\n\n    it('refuses triple non-null union: User | Repo | Service', () => {\n      expect(stripNullable('User | Repo | Service')).toBeUndefined();\n    });\n  });\n\n  // ── Assignment chain: reverse-order depth limitation ──────────────────\n\n  describe('assignment chain — reverse-order limitation', () => {\n    it('resolves reverse-declared Tier 2→Tier 0 (Tier 0 set during walk, before post-walk)', () => {\n      // Even though b = a appears before a: User in source, a's Tier 0 binding\n      // is set during the AST walk. The post-walk Tier 2 loop runs after all\n      // Tier 0/1 bindings exist, so b = a resolves.\n      const tree = parse(`\n        function process() {\n          const b = a;\n          const a: User = getUser();\n        }\n      `, TypeScript.typescript);\n      const { env } = buildTypeEnv(tree, 'typescript');\n      const scopeKey = [...env.keys()].find(k => k.startsWith('process@'));\n      expect(scopeKey).toBeDefined();\n      expect(env.get(scopeKey!)?.get('a')).toBe('User');\n      expect(env.get(scopeKey!)?.get('b')).toBe('User');\n    });\n\n    it('resolves reverse-ordered Tier 2 chains via fixpoint (b = a, a = c, c: User)', () => {\n      // Two chained Tier 2 assignments in reverse source order.\n      // The unified fixpoint loop resolves this in 2 iterations:\n      //   Iter 1: a = c (c is Tier 0 → a = User)\n      //   Iter 2: b = a (a now resolved → b = User)\n      const tree = parse(`\n        function process() {\n          const b = a;\n          const a = c;\n          const c: User = getUser();\n        }\n      `, TypeScript.typescript);\n      const { env } = buildTypeEnv(tree, 'typescript');\n      const scopeKey = [...env.keys()].find(k => k.startsWith('process@'));\n      expect(scopeKey).toBeDefined();\n      
expect(env.get(scopeKey!)?.get('c')).toBe('User');\n      expect(env.get(scopeKey!)?.get('a')).toBe('User');\n      // Fixpoint now resolves reverse-ordered chains\n      expect(env.get(scopeKey!)?.get('b')).toBe('User');\n    });\n  });\n\n  // ── Assignment chain: per-language coverage for refactored code ────────\n\n  describe('assignment chain — Go var_spec form', () => {\n    it('propagates var b = a when a has a known type (var_spec)', () => {\n      const tree = parse(`\n        package main\n        func process() {\n          var a User\n          var b = a\n        }\n      `, Go);\n      const { env } = buildTypeEnv(tree, 'go');\n      expect(flatGet(env, 'a')).toBe('User');\n      expect(flatGet(env, 'b')).toBe('User');\n    });\n  });\n\n  describe('assignment chain — C# equals_value_clause', () => {\n    it('propagates var alias = u when u has a known type', () => {\n      const tree = parse(`\n        class App {\n          void Process() {\n            User u = new User();\n            var alias = u;\n          }\n        }\n      `, CSharp);\n      const { env } = buildTypeEnv(tree, 'csharp');\n      expect(flatGet(env, 'u')).toBe('User');\n      expect(flatGet(env, 'alias')).toBe('User');\n    });\n  });\n\n  describe('assignment chain — Kotlin property_declaration', () => {\n    it('propagates val alias = u when u has an explicit type annotation', () => {\n      const tree = parse(`\n        fun process() {\n          val u: User = User()\n          val alias = u\n        }\n      `, Kotlin);\n      const { env } = buildTypeEnv(tree, 'kotlin');\n      expect(flatGet(env, 'u')).toBe('User');\n      expect(flatGet(env, 'alias')).toBe('User');\n    });\n\n    it('propagates val alias = u inside a class method with explicit type', () => {\n      const tree = parse(`\n        class Service {\n          fun process() {\n            val u: User = User()\n            val alias = u\n          }\n        }\n      `, Kotlin);\n      const { env } = 
buildTypeEnv(tree, 'kotlin');\n      expect(flatGet(env, 'u')).toBe('User');\n      expect(flatGet(env, 'alias')).toBe('User');\n    });\n  });\n\n  describe('assignment chain — Java variable_declarator', () => {\n    it('propagates var alias = u when u has an explicit type', () => {\n      const tree = parse(`\n        class App {\n          void process() {\n            User u = new User();\n            var alias = u;\n          }\n        }\n      `, Java);\n      const { env } = buildTypeEnv(tree, 'java');\n      expect(flatGet(env, 'u')).toBe('User');\n      expect(flatGet(env, 'alias')).toBe('User');\n    });\n  });\n\n  describe('assignment chain — Python identifier', () => {\n    it('propagates alias = u when u has a type annotation', () => {\n      const tree = parse(`\ndef process():\n    u: User = get_user()\n    alias = u\n      `, Python);\n      const { env } = buildTypeEnv(tree, 'python');\n      expect(flatGet(env, 'u')).toBe('User');\n      expect(flatGet(env, 'alias')).toBe('User');\n    });\n\n    it('propagates walrus alias := u when u has a type annotation', () => {\n      const tree = parse(`\ndef process():\n    u: User = get_user()\n    if (alias := u):\n        pass\n      `, Python);\n      const { env } = buildTypeEnv(tree, 'python');\n      expect(flatGet(env, 'u')).toBe('User');\n      expect(flatGet(env, 'alias')).toBe('User');\n    });\n  });\n\n  describe('assignment chain — Rust let_declaration', () => {\n    it('propagates let alias = u when u has a type annotation', () => {\n      const tree = parse(`\n        fn process() {\n          let u: User = User::new();\n          let alias = u;\n        }\n      `, Rust);\n      const { env } = buildTypeEnv(tree, 'rust');\n      expect(flatGet(env, 'u')).toBe('User');\n      expect(flatGet(env, 'alias')).toBe('User');\n    });\n  });\n\n  describe('assignment chain — PHP variable_name', () => {\n    it('propagates $alias = $u when $u has a type from new', () => {\n      const tree = 
parse(`<?php\n        function process() {\n          $u = new User();\n          $alias = $u;\n        }\n      `, PHP.php);\n      const { env } = buildTypeEnv(tree, 'php');\n      expect(flatGet(env, '$u')).toBe('User');\n      expect(flatGet(env, '$alias')).toBe('User');\n    });\n  });\n\n  describe('assignment chain — Ruby assignment', () => {\n    it('captures assignment of simple identifier for pending propagation', () => {\n      // Ruby assignment chains: alias_user = user where user is a simple identifier.\n      // In unit tests (no SymbolTable), constructor bindings are pending — so we test\n      // that the extractor captures the assignment relationship correctly.\n      // The actual propagation is tested via integration tests where User.new resolves.\n      const tree = parse(`\ndef process(user)\n  alias_user = user\n  alias_user.save\nend\n`, Ruby);\n      const { env } = buildTypeEnv(tree, 'ruby');\n      // Without a known type for 'user' (no annotation in Ruby), alias_user stays undefined.\n      // This verifies the extractor doesn't crash or produce false bindings.\n      expect(flatGet(env, 'alias_user')).toBeUndefined();\n    });\n\n    it('does not capture assignment from call expression (not a plain identifier)', () => {\n      const tree = parse(`\ndef process\n  user = get_user()\n  alias_user = user\nend\n`, Ruby);\n      const { env } = buildTypeEnv(tree, 'ruby');\n      // get_user() is a call — user has no resolved type, so alias_user should not resolve either\n      expect(flatGet(env, 'alias_user')).toBeUndefined();\n    });\n  });\n\n  // ── lookupInEnv with nullable stripping ───────────────────────────────\n\n  describe('lookup resolves through nullable stripping', () => {\n    it('TypeScript: lookup strips User | null to User', () => {\n      const tree = parse(`\n        function process(user: User | null) {\n          user.save();\n        }\n      `, TypeScript.typescript);\n      const typeEnv = buildTypeEnv(tree, 
'typescript');\n      // Find the call node for .save()\n      const { env } = typeEnv;\n      const scopeKey = [...env.keys()].find(k => k.startsWith('process@'));\n      expect(scopeKey).toBeDefined();\n      // The raw env stores 'User' because extractSimpleTypeName already unwraps union_type\n      expect(env.get(scopeKey!)?.get('user')).toBe('User');\n    });\n\n    it('Python: lookup strips User | None to User', () => {\n      const tree = parse(`\ndef process():\n    user: User | None = get_user()\n      `, Python);\n      const { env } = buildTypeEnv(tree, 'python');\n      // Python 3.10+ union syntax is stored as raw text \"User | None\"\n      // which stripNullable resolves at lookup time\n      const rawVal = flatGet(env, 'user');\n      expect(rawVal).toBeDefined();\n      // Either already unwrapped by AST, or stored as raw text for stripNullable\n      expect(stripNullable(rawVal!)).toBe('User');\n    });\n  });\n\n  // ── extractSimpleTypeName: nullable wrapper unwrapping ────────────────\n\n  describe('extractSimpleTypeName — nullable wrapper unwrapping', () => {\n    it('unwraps Java Optional<User> → User', () => {\n      const tree = parse(`\n        class App {\n          void process() {\n            Optional<User> user = findUser();\n          }\n        }\n      `, Java);\n      const { env } = buildTypeEnv(tree, 'java');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('unwraps Rust Option<User> → User', () => {\n      const tree = parse(`\n        fn process() {\n          let user: Option<User> = find_user();\n        }\n      `, Rust);\n      const { env } = buildTypeEnv(tree, 'rust');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('does NOT unwrap List<User> — containers stay as List', () => {\n      const tree = parse(`\n        class App {\n          void process() {\n            List<User> users = getUsers();\n          }\n        }\n      `, Java);\n      const { env } = buildTypeEnv(tree, 
'java');\n      expect(flatGet(env, 'users')).toBe('List');\n    });\n\n    it('does NOT unwrap Map<String, User> — containers stay as Map', () => {\n      const tree = parse(`\n        class App {\n          void process() {\n            Map<String, User> lookup = getLookup();\n          }\n        }\n      `, Java);\n      const { env } = buildTypeEnv(tree, 'java');\n      expect(flatGet(env, 'lookup')).toBe('Map');\n    });\n\n    it('does NOT unwrap CompletableFuture<User> — async wrappers stay', () => {\n      const tree = parse(`\n        class App {\n          void process() {\n            CompletableFuture<User> future = fetchUser();\n          }\n        }\n      `, Java);\n      const { env } = buildTypeEnv(tree, 'java');\n      expect(flatGet(env, 'future')).toBe('CompletableFuture');\n    });\n\n    it('unwraps Optional<User> when extractSimpleTypeName is called directly on a generic_type node', () => {\n      // Parse a Java Optional<User> and grab the type node to test extractSimpleTypeName\n      parser.setLanguage(Java);\n      const tree = parser.parse(`class A { void f() { Optional<User> x = null; } }`);\n      // Navigate to the type node: class > body > method > body > local_variable_declaration > type\n      const method = tree.rootNode.firstNamedChild?.lastNamedChild?.firstNamedChild;\n      const decl = method?.lastNamedChild?.firstNamedChild;\n      const typeNode = decl?.childForFieldName('type');\n      expect(typeNode).toBeDefined();\n      if (typeNode) {\n        expect(extractSimpleTypeName(typeNode)).toBe('User');\n      }\n    });\n  });\n\n  // ── C++ assignment chain propagation ──────────────────────────────────\n\n  describe('assignment chain — C++ auto alias', () => {\n    it('propagates auto alias = u when u has an explicit type', () => {\n      const tree = parse(`\n        void process() {\n          User u;\n          auto alias = u;\n        }\n      `, CPP);\n      const { env } = buildTypeEnv(tree, 'cpp');\n      expect(flatGet(env, 'u')).toBe('User');\n      expect(flatGet(env, 
'alias')).toBe('User');\n    });\n  });\n\n  // ── Tier 1c: for-loop element type inference ───────────────────────────\n\n  describe('for-loop element type inference (Tier 1c) — TypeScript', () => {\n    it('infers loop variable type from User[] parameter annotation (for...of)', () => {\n      const tree = parse(`\n        function process(users: User[]) {\n          for (const user of users) {\n            user.save();\n          }\n        }\n      `, TypeScript.typescript);\n      const { env } = buildTypeEnv(tree, 'typescript');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('infers loop variable type from Array<User> parameter annotation (for...of)', () => {\n      const tree = parse(`\n        function process(users: Array<User>) {\n          for (const user of users) {\n            user.save();\n          }\n        }\n      `, TypeScript.typescript);\n      const { env } = buildTypeEnv(tree, 'typescript');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('does NOT bind loop variable for for...in (produces string keys, not elements)', () => {\n      const tree = parse(`\n        function process(users: User[]) {\n          for (const key in users) {\n            console.log(key);\n          }\n        }\n      `, TypeScript.typescript);\n      const { env } = buildTypeEnv(tree, 'typescript');\n      // for...in yields string keys — extractor must NOT bind 'key' to User\n      expect(flatGet(env, 'key')).toBeUndefined();\n    });\n\n    it('does not infer type when iterable variable has no known type in scope', () => {\n      const tree = parse(`\n        function process(users: any) {\n          for (const user of users) {\n            user.save();\n          }\n        }\n      `, TypeScript.typescript);\n      const { env } = buildTypeEnv(tree, 'typescript');\n      expect(flatGet(env, 'user')).toBeUndefined();\n    });\n\n    it('infers loop variable from a locally declared const with User[] annotation', () => 
{\n      const tree = parse(`\n        function process() {\n          const users: User[] = getUsers();\n          for (const user of users) {\n            user.save();\n          }\n        }\n      `, TypeScript.typescript);\n      const { env } = buildTypeEnv(tree, 'typescript');\n      // Note: users itself is stored with no binding (extractSimpleTypeName returns undefined\n      // for array_type), but the for-loop extractor uses AST walking to resolve the element type.\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('infers loop variable from readonly User[] parameter', () => {\n      const tree = parse(`\n        function process(users: readonly User[]) {\n          for (const user of users) {\n            user.save();\n          }\n        }\n      `, TypeScript.typescript);\n      const { env } = buildTypeEnv(tree, 'typescript');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n  });\n\n  describe('for-loop element type inference (Tier 1c) — Python', () => {\n    it('infers loop variable type from List[User] parameter annotation', () => {\n      const tree = parse(`\ndef process(users: List[User]):\n    for user in users:\n        user.save()\n      `, Python);\n      const { env } = buildTypeEnv(tree, 'python');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('infers loop variable type from Sequence[User] annotation style', () => {\n      const tree = parse(`\ndef process(users: Sequence[User]):\n    for user in users:\n        user.save()\n      `, Python);\n      const { env } = buildTypeEnv(tree, 'python');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('does not infer type when iterable parameter has no annotation', () => {\n      const tree = parse(`\ndef process(users):\n    for user in users:\n        user.save()\n      `, Python);\n      const { env } = buildTypeEnv(tree, 'python');\n      expect(flatGet(env, 'user')).toBeUndefined();\n    });\n\n    it('infers loop 
variable from a locally annotated variable', () => {\n      const tree = parse(`\ndef process():\n    users: List[User] = get_users()\n    for user in users:\n        user.save()\n      `, Python);\n      const { env } = buildTypeEnv(tree, 'python');\n      // List[User] → extractSimpleTypeName returns 'List' (base name), stored as 'List'\n      // extractElementTypeFromString('List') → undefined (no brackets in the string)\n      // So user is unresolved unless users is stored as 'List[User]' raw.\n      // The locally annotated var stores the base type 'List' via extractSimpleTypeName.\n      // This test documents the actual behavior.\n      const usersType = flatGet(env, 'users');\n      expect(usersType).toBeDefined(); // users has a type annotation\n    });\n  });\n\n  describe('for-loop element type inference (Tier 1c) — Go', () => {\n    it('infers loop variable type from []User slice parameter (_, user := range users)', () => {\n      const tree = parse(`\npackage main\nfunc process(users []User) {\n    for _, user := range users {\n        user.Save()\n    }\n}\n      `, Go);\n      const { env } = buildTypeEnv(tree, 'go');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('does NOT infer element type for single-var slice range (yields index, not element)', () => {\n      const tree = parse(`\npackage main\nfunc process(users []User) {\n    for user := range users {\n        user.Save()\n    }\n}\n      `, Go);\n      const { env } = buildTypeEnv(tree, 'go');\n      // In Go, `for v := range slice` gives the INDEX (int), not the element.\n      expect(flatGet(env, 'user')).toBeUndefined();\n    });\n\n    it('infers loop variable from map range (_, v := range myMap)', () => {\n      const tree = parse(`\npackage main\nfunc process(myMap map[string]User) {\n    for _, v := range myMap {\n        v.Save()\n    }\n}\n      `, Go);\n      const { env } = buildTypeEnv(tree, 'go');\n      expect(flatGet(env, 'v')).toBe('User');\n    });\n\n   
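 // Hedged sketch (editor's illustration; goElementType is a hypothetical helper, not the\n    // extractor's API): for Go type text like []User or map[string]User, the element/value\n    // type is the identifier after the last closing bracket, matching the range tests here.\n    const goElementType = (typeText: string): string | undefined => {\n      const close = typeText.lastIndexOf(']');\n      if (close === -1 || close === typeText.length - 1) return undefined;\n      return typeText.slice(close + 1);\n    };\n    if (goElementType('map[string]User') !== 'User' || goElementType('[]User') !== 'User') {\n      throw new Error('goElementType sketch failed');\n    }\n\n   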
 it('does NOT infer element type for single-var map range (yields key, not value)', () => {\n      const tree = parse(`\npackage main\nfunc process(myMap map[string]User) {\n    for k := range myMap {\n        _ = k\n    }\n}\n      `, Go);\n      const { env } = buildTypeEnv(tree, 'go');\n      // Single-var map range gives the KEY, not the value\n      expect(flatGet(env, 'k')).toBeUndefined();\n    });\n\n    it('does not infer type for C-style for loops (no range_clause)', () => {\n      const tree = parse(`\npackage main\nfunc process() {\n    for i := 0; i < 10; i++ {\n    }\n}\n      `, Go);\n      const { env } = buildTypeEnv(tree, 'go');\n      // C-style for loop has no range_clause — extractor must return early\n      expect(flatGet(env, 'i')).toBeUndefined();\n    });\n\n    it('does not infer type when iterable has no annotation in scope', () => {\n      const tree = parse(`\npackage main\nfunc process() {\n    users := getUsers()\n    for _, user := range users {\n        user.Save()\n    }\n}\n      `, Go);\n      const { env } = buildTypeEnv(tree, 'go');\n      // users has no type annotation — only a constructor binding candidate\n      // Without a resolved type for users, user cannot be inferred\n      expect(flatGet(env, 'user')).toBeUndefined();\n    });\n  });\n\n  describe('for-loop element type inference (Tier 1c) — Rust', () => {\n    it('infers loop variable from Vec<User> parameter (for user in &users)', () => {\n      const tree = parse(`\nfn process(users: Vec<User>) {\n    for user in &users {\n        user.save();\n    }\n}\n      `, Rust);\n      const { env } = buildTypeEnv(tree, 'rust');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('infers loop variable from &[User] slice parameter', () => {\n      const tree = parse(`\nfn process(users: &[User]) {\n    for user in users {\n        user.save();\n    }\n}\n      `, Rust);\n      const { env } = buildTypeEnv(tree, 'rust');\n      expect(flatGet(env, 
'user')).toBe('User');\n    });\n\n    it('does not infer type for range expression (0..10)', () => {\n      const tree = parse(`\nfn process() {\n    for i in 0..10 {\n        println!(\"{}\", i);\n    }\n}\n      `, Rust);\n      const { env } = buildTypeEnv(tree, 'rust');\n      expect(flatGet(env, 'i')).toBeUndefined();\n    });\n\n    it('does not infer type when iterable has no annotation', () => {\n      const tree = parse(`\nfn process() {\n    let users = get_users();\n    for user in &users {\n        user.save();\n    }\n}\n      `, Rust);\n      const { env } = buildTypeEnv(tree, 'rust');\n      expect(flatGet(env, 'user')).toBeUndefined();\n    });\n  });\n\n  describe('for-loop element type inference (Tier 1c) — C#', () => {\n    it('infers loop variable from var foreach with List<User> parameter', () => {\n      const tree = parse(`\nusing System.Collections.Generic;\nclass Foo {\n  void Process(List<User> users) {\n    foreach (var user in users) {\n      user.Save();\n    }\n  }\n}\n      `, CSharp);\n      const { env } = buildTypeEnv(tree, 'csharp');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('still resolves explicit type foreach (regression)', () => {\n      const tree = parse(`\nclass Foo {\n  void Process(List<User> users) {\n    foreach (User user in users) {\n      user.Save();\n    }\n  }\n}\n      `, CSharp);\n      const { env } = buildTypeEnv(tree, 'csharp');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('does not infer type when iterable has no annotation', () => {\n      const tree = parse(`\nclass Foo {\n  void Process() {\n    var users = GetUsers();\n    foreach (var user in users) {\n      user.Save();\n    }\n  }\n}\n      `, CSharp);\n      const { env } = buildTypeEnv(tree, 'csharp');\n      expect(flatGet(env, 'user')).toBeUndefined();\n    });\n  });\n\n  describe('for-loop element type inference (Tier 1c) — Kotlin', () => {\n    it('infers loop variable from unannotated for 
with List<User> parameter', () => {\n      const tree = parse(`\nfun process(users: List<User>) {\n    for (user in users) {\n        user.save()\n    }\n}\n      `, Kotlin);\n      const { env } = buildTypeEnv(tree, 'kotlin');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('still resolves explicit type annotation (regression)', () => {\n      const tree = parse(`\nfun process(users: List<User>) {\n    for (user: User in users) {\n        user.save()\n    }\n}\n      `, Kotlin);\n      const { env } = buildTypeEnv(tree, 'kotlin');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('does not infer type when iterable has no annotation', () => {\n      const tree = parse(`\nfun process() {\n    val users = getUsers()\n    for (user in users) {\n        user.save()\n    }\n}\n      `, Kotlin);\n      const { env } = buildTypeEnv(tree, 'kotlin');\n      expect(flatGet(env, 'user')).toBeUndefined();\n    });\n  });\n\n  describe('for-loop element type inference (Tier 1c) — Java', () => {\n    it('still resolves explicit type enhanced-for (regression)', () => {\n      const tree = parse(`\nclass Foo {\n  void process(List<User> users) {\n    for (User user : users) {\n      user.save();\n    }\n  }\n}\n      `, Java);\n      const { env } = buildTypeEnv(tree, 'java');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('does not infer type when iterable has no annotation', () => {\n      const tree = parse(`\nclass Foo {\n  void process() {\n    var users = getUsers();\n    for (var user : users) {\n      user.save();\n    }\n  }\n}\n      `, Java);\n      const { env } = buildTypeEnv(tree, 'java');\n      expect(flatGet(env, 'user')).toBeUndefined();\n    });\n  });\n\n  describe('previously-skipped limitations (now resolved)', () => {\n    it('TS destructured for-of: for (const [k, v] of entries) — last-child heuristic', () => {\n      // array_pattern handled by binding last named child to element type.\n      // 
Map<string, User> resolves to 'User' via last generic type arg.\n      const tree = parse(`\nfunction process(entries: Map<string, User>) {\n  for (const [key, user] of entries) {\n    user.save();\n  }\n}\n      `, TypeScript.typescript);\n      const { env } = buildTypeEnv(tree, 'typescript');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('Python tuple unpacking: for key, value in dict.items() — call iterable + pattern_list', () => {\n      // call iterable: data.items() → extract receiver 'data' for type lookup.\n      // pattern_list: bind last named child to element type.\n      // dict[str, User] resolves to 'User' via last generic type arg.\n      const tree = parse(`\ndef process(data: dict[str, User]):\n    for key, user in data.items():\n        user.save()\n      `, Python);\n      const { env } = buildTypeEnv(tree, 'python');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('Python enumerate(dict.items()): for i, k, v — skips int index, binds value var to User', () => {\n      // enumerate() wraps the iterable: right node is call with fn='enumerate', not fn.attribute.\n      // Without enumerate() support, iterableName is never set → v stays unbound.\n      // With the fix: unwrap enumerate → inner call → data.items() → v binds to User.\n      const tree = parse(`\ndef process(data: dict[str, User]):\n    for i, k, v in enumerate(data.items()):\n        v.save()\n      `, Python);\n      const { env } = buildTypeEnv(tree, 'python');\n      expect(flatGet(env, 'v')).toBe('User');\n      // i is the int index from enumerate — must NOT be bound to User\n      expect(flatGet(env, 'i')).toBeUndefined();\n    });\n\n    it('Python enumerate(dict.items()) with nested tuple: for i, (k, v) — binds v to User', () => {\n      // Nested tuple pattern: `(k, v)` is a tuple_pattern inside the pattern_list.\n      // AST: pattern_list > [identifier('i'), tuple_pattern > [identifier('k'), identifier('v')]]\n      // Must 
descend into tuple_pattern to extract v, not just collect top-level identifiers.\n      const tree = parse(`\ndef process(data: dict[str, User]):\n    for i, (k, v) in enumerate(data.items()):\n        v.save()\n      `, Python);\n      const { env } = buildTypeEnv(tree, 'python');\n      expect(flatGet(env, 'v')).toBe('User');\n      expect(flatGet(env, 'i')).toBeUndefined();\n    });\n\n    it('Python enumerate with parenthesized tuple: for (k, v) in enumerate(users) — binds v to User', () => {\n      // Parenthesized tuple pattern: `(k, v)` is a tuple_pattern, not pattern_list.\n      // AST: for_statement > left: tuple_pattern > [identifier('k'), identifier('v')]\n      // Must handle tuple_pattern as top-level left node, not just nested inside pattern_list.\n      const tree = parse(`\ndef process(users: List[User]):\n    for (k, v) in enumerate(users):\n        v.save()\n      `, Python);\n      const { env } = buildTypeEnv(tree, 'python');\n      // enumerate yields (index, element) — k is int (unbound), v is User\n      expect(flatGet(env, 'v')).toBe('User');\n      expect(flatGet(env, 'k')).toBeUndefined();\n    });\n\n    it('TS instanceof narrowing: if (x instanceof User) — first-writer-wins, not block-scoped', () => {\n      // Binds x to User via extractPatternBinding on binary_expression.\n      // Only works when x has no prior type binding in scopeEnv.\n      // True block-level scoping (overwriting existing bindings) is Phase 5.\n      const tree = parse(`\nfunction process(x) {\n  if (x instanceof User) {\n    x.save();\n  }\n}\n      `, TypeScript.typescript);\n      const { env } = buildTypeEnv(tree, 'typescript');\n      expect(flatGet(env, 'x')).toBe('User');\n    });\n\n    it('Rust for with .iter(): for user in users.iter() — call_expression iterable', () => {\n      // Extracts receiver from call_expression > field_expression > identifier.\n      // .iter()/.into_iter()/.iter_mut() is the dominant Rust iteration pattern.\n      const tree = 
parse(`\nfn process(users: Vec<User>) {\n    for user in users.iter() {\n        user.save();\n    }\n}\n      `, Rust);\n      const { env } = buildTypeEnv(tree, 'rust');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n  });\n\n  describe('method-aware type arg selection (.keys() vs .values())', () => {\n    it('TS for-of map.values() resolves to value type (User)', () => {\n      const tree = parse(`\nfunction process(data: Map<string, User>) {\n  for (const user of data.values()) {\n    user.save();\n  }\n}\n      `, TypeScript.typescript);\n      const { env } = buildTypeEnv(tree, 'typescript');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('TS for-of map.keys() resolves to key type (string)', () => {\n      const tree = parse(`\nfunction process(data: Map<string, User>) {\n  for (const key of data.keys()) {\n    key.trim();\n  }\n}\n      `, TypeScript.typescript);\n      const { env } = buildTypeEnv(tree, 'typescript');\n      expect(flatGet(env, 'key')).toBe('string');\n    });\n\n    it('Python for key in data.keys() resolves to key type (str)', () => {\n      const tree = parse(`\ndef process(data: dict[str, User]):\n    for key in data.keys():\n        key.strip()\n      `, Python);\n      const { env } = buildTypeEnv(tree, 'python');\n      expect(flatGet(env, 'key')).toBe('str');\n    });\n\n    it('Python for user in data.values() resolves to value type (User)', () => {\n      const tree = parse(`\ndef process(data: dict[str, User]):\n    for user in data.values():\n        user.save()\n      `, Python);\n      const { env } = buildTypeEnv(tree, 'python');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('Rust for key in map.keys() resolves to key type (String)', () => {\n      const tree = parse(`\nfn process(data: HashMap<String, User>) {\n    for key in data.keys() {\n        key.len();\n    }\n}\n      `, Rust);\n      const { env } = buildTypeEnv(tree, 'rust');\n      expect(flatGet(env, 
'key')).toBe('String');\n    });\n\n    it('Rust for user in map.values() resolves to value type (User)', () => {\n      const tree = parse(`\nfn process(data: HashMap<String, User>) {\n    for user in data.values() {\n        user.save();\n    }\n}\n      `, Rust);\n      const { env } = buildTypeEnv(tree, 'rust');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n  });\n\n  describe('container descriptor-aware type arg selection', () => {\n    it('HashMap.keys() resolves to key type (String) via descriptor', () => {\n      const tree = parse(`\nfn process(data: HashMap<String, User>) {\n    for key in data.keys() {\n        key.len();\n    }\n}\n      `, Rust);\n      const { env } = buildTypeEnv(tree, 'rust');\n      expect(flatGet(env, 'key')).toBe('String');\n    });\n\n    it('HashMap.values() resolves to value type (User) via descriptor', () => {\n      const tree = parse(`\nfn process(data: HashMap<String, User>) {\n    for user in data.values() {\n        user.save();\n    }\n}\n      `, Rust);\n      const { env } = buildTypeEnv(tree, 'rust');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('Vec.iter() resolves to element type (User) — arity 1 always returns last', () => {\n      const tree = parse(`\nfn process(users: Vec<User>) {\n    for user in users.iter() {\n        user.save();\n    }\n}\n      `, Rust);\n      const { env } = buildTypeEnv(tree, 'rust');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('unknown container falls back to the method-name heuristic (.keys() selects the first type arg)', () => {\n      // MyCache is not in CONTAINER_DESCRIPTORS, so .keys() still returns the first type arg via the method-name fallback\n      const tree = parse(`\nfunction process(cache: MyCache<string, User>) {\n  for (const key of cache.keys()) {\n    key.trim();\n  }\n}\n      `, TypeScript.typescript);\n      const { env } = buildTypeEnv(tree, 'typescript');\n      expect(flatGet(env, 'key')).toBe('string');\n    });\n  });\n\n  describe('for-loop Phase 2 enhancements', () 
=> {\n    it('TS object destructuring skip: for (const { id, name } of users) — no binding produced', () => {\n      const tree = parse(`\nfunction process(users: User[]) {\n  for (const { id, name } of users) {\n    console.log(id, name);\n  }\n}\n      `, TypeScript.typescript);\n      const { env } = buildTypeEnv(tree, 'typescript');\n      // Object destructuring should NOT produce bindings — field types are unknown\n      expect(flatGet(env, 'id')).toBeUndefined();\n      expect(flatGet(env, 'name')).toBeUndefined();\n    });\n\n    it('TS member access: for (const user of this.users) with users: User[] param — resolves', () => {\n      const tree = parse(`\nfunction process(users: User[]) {\n  for (const user of this.users) {\n    user.save();\n  }\n}\n      `, TypeScript.typescript);\n      const { env } = buildTypeEnv(tree, 'typescript');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('Python member access: for user in self.users with users: List[User] param — resolves', () => {\n      const tree = parse(`\ndef process(users: List[User]):\n    for user in self.users:\n        user.save()\n      `, Python);\n      const { env } = buildTypeEnv(tree, 'python');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('C++ structured bindings: for (auto& [key, value] : map) with map<string, User> param — binds value', () => {\n      const tree = parse(`\nvoid process(std::map<std::string, User>& map) {\n  for (auto& [key, value] : map) {\n    value.save();\n  }\n}\n      `, CPP);\n      const { env } = buildTypeEnv(tree, 'cpp');\n      expect(flatGet(env, 'value')).toBe('User');\n    });\n\n    it('C++ structured bindings: exact App.cpp fixture — binds user and repo', () => {\n      const tree = parse(`\n#include \"User.h\"\n#include \"Repo.h\"\n#include <map>\n#include <string>\n#include <vector>\n\nvoid processUserMap(std::map<std::string, User> userMap) {\n    for (auto& [key, user] : userMap) {\n        user.save();\n    
}\n}\n\nvoid processRepoMap(std::map<std::string, Repo> repoMap) {\n    for (const auto& [key, repo] : repoMap) {\n        repo.save();\n    }\n}\n      `, CPP);\n      const { env } = buildTypeEnv(tree, 'cpp');\n      expect(flatGet(env, 'user')).toBe('User');\n      expect(flatGet(env, 'repo')).toBe('Repo');\n    });\n  });\n\n  describe('known limitations (documented skip tests)', () => {\n    it.skip('Ruby block parameter: users.each { |user| } — closure param inference, different feature', () => {\n      // Not a for-loop; .each { |user| } is a method call with a block.\n      // Requires closure parameter inference — a different feature category\n      // applicable to Ruby, Swift closures, Kotlin lambdas, and Java lambdas.\n      const tree = parse(`\ndef process(users)\n  users.each { |user| user.save }\nend\n      `, Ruby);\n      const { env } = buildTypeEnv(tree, 'ruby');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n  });\n\n  describe('Kotlin when/is pattern binding (Phase 6)', () => {\n    it('when (x) { is User -> } binds x to User', () => {\n      const tree = parse(`\nfun process(x: Any) {\n    when (x) {\n        is User -> x.name\n    }\n}\n      `, Kotlin);\n      const { env } = buildTypeEnv(tree, 'kotlin');\n      expect(flatGet(env, 'x')).toBe('User');\n    });\n\n    it('when (x) { is User -> ...; is Admin -> ... 
} — last arm overwrites (allowPatternBindingOverwrite)', () => {\n      const tree = parse(`\nfun process(x: Any) {\n    when (x) {\n        is User -> x.name\n        is Admin -> x.role\n    }\n}\n      `, Kotlin);\n      const { env } = buildTypeEnv(tree, 'kotlin');\n      // allowPatternBindingOverwrite means each arm overwrites — last one wins\n      expect(flatGet(env, 'x')).toBe('Admin');\n    });\n\n    it('when (x) { else -> } — no type check, no pattern binding produced', () => {\n      const tree = parse(`\nfun process() {\n    val x: String = \"\"\n    when (x) {\n        else -> println(x)\n    }\n}\n      `, Kotlin);\n      const { env } = buildTypeEnv(tree, 'kotlin');\n      // x has type String from its declaration — no pattern binding should narrow it\n      // (else branch has no type_test node, so extractKotlinPatternBinding never fires)\n      expect(flatGet(env, 'x')).toBe('String');\n    });\n  });\n\n  describe('Kotlin for-loop HashMap.values resolution (Phase 6)', () => {\n    it('for (user in data.values) binds user to User via HashMap<String, User>', () => {\n      const tree = parse(`\nfun processValues(data: HashMap<String, User>) {\n    for (user in data.values) {\n        user.save()\n    }\n}\n      `, Kotlin);\n      const { env } = buildTypeEnv(tree, 'kotlin');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('for (user in users) binds user to User via List<User> param', () => {\n      const tree = parse(`\nfun processList(users: List<User>) {\n    for (user in users) {\n        user.save()\n    }\n}\n      `, Kotlin);\n      const { env } = buildTypeEnv(tree, 'kotlin');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n  });\n\n  describe('Java switch pattern variable (Phase 6)', () => {\n    it('switch (obj) { case User u -> } binds u to User', () => {\n      const tree = parse(`\nclass App {\n    void process(Object obj) {\n        switch (obj) {\n            case User u -> u.save();\n        }\n    
}\n}\n      `, Java);\n      const { env } = buildTypeEnv(tree, 'java');\n      expect(flatGet(env, 'u')).toBe('User');\n    });\n\n    it('switch (obj) { case User u -> ...; case Admin a -> ... } — both bind', () => {\n      const tree = parse(`\nclass App {\n    void process(Object obj) {\n        switch (obj) {\n            case User u -> u.save();\n            case Admin a -> a.manage();\n        }\n    }\n}\n      `, Java);\n      const { env } = buildTypeEnv(tree, 'java');\n      expect(flatGet(env, 'u')).toBe('User');\n      expect(flatGet(env, 'a')).toBe('Admin');\n    });\n\n    it('switch (x) { case 42 -> ... } — no pattern variable, no binding', () => {\n      const tree = parse(`\nclass App {\n    void process(Object x) {\n        switch (x) {\n            case 42 -> System.out.println(\"answer\");\n        }\n    }\n}\n      `, Java);\n      const { env } = buildTypeEnv(tree, 'java');\n      // Only the parameter x:Object should exist, no extra bindings from case 42\n      expect(flatGet(env, 'x')).toBe('Object');\n    });\n\n    it('obj instanceof User user — regression: still works after type_pattern addition', () => {\n      const tree = parse(`\nclass App {\n    void process(Object obj) {\n        if (obj instanceof User user) {\n            user.save();\n        }\n    }\n}\n      `, Java);\n      const { env } = buildTypeEnv(tree, 'java');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n  });\n\n  describe('new container descriptors (Phase 6.1)', () => {\n    it('Collection<User> resolves element type via descriptor (arity 1)', () => {\n      const tree = parse(`\nusing System.Collections.ObjectModel;\npublic class App {\n    public void Process(Collection<User> users) {\n        foreach (var user in users) {\n            user.Save();\n        }\n    }\n}\n      `, CSharp);\n      const { env } = buildTypeEnv(tree, 'csharp');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('MutableMap<String, User>.values() 
resolves to User via descriptor (arity 2)', () => {\n      const tree = parse(`\nfun process(data: MutableMap<String, User>) {\n    for (user in data.values()) {\n        user.save()\n    }\n}\n      `, Kotlin);\n      const { env } = buildTypeEnv(tree, 'kotlin');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('MutableList<User> resolves element type via descriptor', () => {\n      const tree = parse(`\nfun process(users: MutableList<User>) {\n    for (user in users) {\n        user.save()\n    }\n}\n      `, Kotlin);\n      const { env } = buildTypeEnv(tree, 'kotlin');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('SortedSet<User> resolves element type via descriptor (C#)', () => {\n      const tree = parse(`\nusing System.Collections.Generic;\npublic class App {\n    public void Process(SortedSet<User> users) {\n        foreach (var user in users) {\n            user.Save();\n        }\n    }\n}\n      `, CSharp);\n      const { env } = buildTypeEnv(tree, 'csharp');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('Stream<User> resolves element type via descriptor (Java)', () => {\n      const tree = parse(`\nclass App {\n    void process(Stream<User> users) {\n        for (User user : users.toList()) {\n            user.save();\n        }\n    }\n}\n      `, Java);\n      const { env } = buildTypeEnv(tree, 'java');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n  });\n\n  describe('C# recursive_pattern binding (Phase 6.1)', () => {\n    it('obj is User { Name: \"Alice\" } u — binds u to User', () => {\n      const tree = parse(`\npublic class App {\n    public void Process(object obj) {\n        if (obj is User { Name: \"Alice\" } u) {\n            u.Save();\n        }\n    }\n}\n      `, CSharp);\n      const { env } = buildTypeEnv(tree, 'csharp');\n      expect(flatGet(env, 'u')).toBe('User');\n    });\n\n    it('switch expression with recursive_pattern — binds r to Repo', () => {\n  
    const tree = parse(`\npublic class App {\n    public void Process(object obj) {\n        var result = obj switch {\n            Repo { Name: \"main\" } r => r.Save(),\n            _ => false\n        };\n    }\n}\n      `, CSharp);\n      const { env } = buildTypeEnv(tree, 'csharp');\n      expect(flatGet(env, 'r')).toBe('Repo');\n    });\n\n    it('recursive_pattern without designation — no pattern binding produced', () => {\n      const tree = parse(`\npublic class App {\n    public void Process(object obj) {\n        if (obj is User { Name: \"Alice\" }) {\n        }\n    }\n}\n      `, CSharp);\n      const { env } = buildTypeEnv(tree, 'csharp');\n      // obj → object from the parameter, but no pattern binding\n      expect(flatGet(env, 'obj')).toBe('object');\n      expect(flatSize(env)).toBe(1); // only the parameter binding\n    });\n  });\n\n  describe('C# await foreach (Phase 6.1)', () => {\n    it('await foreach (var user in users) — same node type as foreach, resolves element type', () => {\n      const tree = parse(`\nusing System.Collections.Generic;\npublic class App {\n    public async Task Process(IAsyncEnumerable<User> users) {\n        await foreach (var user in users) {\n            user.Save();\n        }\n    }\n}\n      `, CSharp);\n      const { env } = buildTypeEnv(tree, 'csharp');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('foreach (var user in this.data.Values) — nested member access with container property', () => {\n      const tree = parse(`\nusing System.Collections.Generic;\npublic class App {\n    private Dictionary<string, User> data;\n    public void ProcessValues() {\n        foreach (var user in this.data.Values) {\n            user.Save();\n        }\n    }\n}\n      `, CSharp);\n      const { env, lookup } = buildTypeEnv(tree, 'csharp');\n      expect(flatGet(env, 'user')).toBe('User');\n      // Verify lookup works from the call site (user.Save())\n      const saveCall = 
tree.rootNode.descendantsOfType('invocation_expression')[0];\n      expect(lookup('user', saveCall)).toBe('User');\n    });\n  });\n\n  describe('TypeScript class field declaration (Phase 6.1)', () => {\n    it('class field with array type — for-loop resolves element type via declarationTypeNodes', () => {\n      const tree = parse(`\nclass UserService {\n    private users: User[] = [];\n    processUsers() {\n        for (const user of this.users) {\n            user.save();\n        }\n    }\n}\n      `, TypeScript.typescript);\n      const { env } = buildTypeEnv(tree, 'typescript');\n      // User[] is an array_type — extractSimpleTypeName returns undefined (no simple base name).\n      // But declarationTypeNodes captures the raw AST node, so for-loop resolution\n      // uses Strategy 1 (extractTsElementTypeFromAnnotation) to resolve the element type.\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('class field with generic type annotation — binds field name to base type', () => {\n      const tree = parse(`\nclass RepoService {\n    repos: Map<string, Repo> = new Map();\n}\n      `, TypeScript.typescript);\n      const { env } = buildTypeEnv(tree, 'typescript');\n      expect(flatGet(env, 'repos')).toBe('Map');\n    });\n  });\n\n  describe('PHP foreach $this->property (Phase 7.4 — Strategy C)', () => {\n    it('resolves loop variable from @var User[] property without @param workaround', () => {\n      const tree = parse(`<?php\nclass App {\n    /** @var User[] */\n    private $users;\n    public function process(): void {\n        foreach ($this->users as $user) {\n            $user->save();\n        }\n    }\n}\n      `, PHP.php);\n      const { env } = buildTypeEnv(tree, 'php');\n      expect(flatGet(env, '$user')).toBe('User');\n    });\n\n    it('does not bind from unknown $this->property (conservative)', () => {\n      const tree = parse(`<?php\nclass App {\n    private $unknownProp;\n    public function process(): void {\n        
foreach ($this->unknownProp as $item) {\n            $item->save();\n        }\n    }\n}\n      `, PHP.php);\n      const { env } = buildTypeEnv(tree, 'php');\n      expect(flatGet(env, '$item')).toBeUndefined();\n    });\n\n    it('multi-class file: resolves correct property for each class', () => {\n      const tree = parse(`<?php\nclass A {\n    /** @var User[] */\n    private $items;\n    public function processA(): void {\n        foreach ($this->items as $item) {\n            $item->save();\n        }\n    }\n}\nclass B {\n    /** @var Order[] */\n    private $items;\n    public function processB(): void {\n        foreach ($this->items as $item) {\n            $item->submit();\n        }\n    }\n}\n      `, PHP.php);\n      const { env } = buildTypeEnv(tree, 'php');\n      // Both $item bindings exist but may share the same key if scoped to method name\n      // Conservative: just verify at least one resolves correctly\n      expect(flatGet(env, '$item')).toBeDefined();\n    });\n  });\n\n  describe('match arm scoping — first-writer-wins regression', () => {\n    it('Rust: first match arm binding wins, later arms do not overwrite', () => {\n      const tree = parse(`\nfn process(opt: Option<User>) {\n    match opt {\n        Some(user) => user.save(),\n        None => {},\n    }\n}\n      `, Rust);\n      const { env } = buildTypeEnv(tree, 'rust');\n      // user should be typed from the first arm (Some unwrap)\n      // Known limitation: binding leaks across arms (first-writer-wins)\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n  });\n\n  describe('performance optimizations — coverage for new code paths', () => {\n    it('fastStripNullable: passes through simple identifier without stripping', () => {\n      const tree = parse('function f(user: User) { user.save(); }', TypeScript.typescript);\n      const typeEnv = buildTypeEnv(tree, 'typescript');\n      // lookup exercises fastStripNullable — \"User\" has no | or ? 
markers\n      const callNode = tree.rootNode.descendantForIndex(tree.rootNode.text.indexOf('save'));\n      expect(typeEnv.lookup('user', callNode)).toBe('User');\n    });\n\n    it('fastStripNullable: strips nullable union type via full stripNullable', () => {\n      const tree = parse('function f(user: User | null) { user.save(); }', TypeScript.typescript);\n      const typeEnv = buildTypeEnv(tree, 'typescript');\n      const callNode = tree.rootNode.descendantForIndex(tree.rootNode.text.indexOf('save'));\n      expect(typeEnv.lookup('user', callNode)).toBe('User');\n    });\n\n    it('fastStripNullable: rejects bare nullable keyword', () => {\n      const tree = parse('function f(x: null) { x.save(); }', TypeScript.typescript);\n      const typeEnv = buildTypeEnv(tree, 'typescript');\n      const callNode = tree.rootNode.descendantForIndex(tree.rootNode.text.indexOf('save'));\n      expect(typeEnv.lookup('x', callNode)).toBeUndefined();\n    });\n\n    it('fastStripNullable: strips optional type suffix', () => {\n      const tree = parse(`\nclass Foo {\n    process(user: User) {\n        user.save();\n    }\n}\n      `, TypeScript.typescript);\n      const typeEnv = buildTypeEnv(tree, 'typescript');\n      const callNode = tree.rootNode.descendantForIndex(tree.rootNode.text.indexOf('save'));\n      expect(typeEnv.lookup('user', callNode)).toBe('User');\n    });\n\n    it('SKIP_SUBTREE_TYPES: string literal subtrees do not affect type extraction', () => {\n      const tree = parse(`\nfunction f(user: User) {\n    const msg = \"hello world this is a long string\";\n    user.save();\n}\n      `, TypeScript.typescript);\n      const { env } = buildTypeEnv(tree, 'typescript');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('interestingNodeTypes: non-declaration nodes skip extractTypeBinding', () => {\n      // Large code with many non-interesting nodes (binary expressions, calls, etc.)\n      const tree = parse(`\nfunction calculate(service: 
Service) {\n    const a = 1 + 2 + 3;\n    const b = true && false;\n    if (a > b) { service.run(); }\n}\n      `, TypeScript.typescript);\n      const { env } = buildTypeEnv(tree, 'typescript');\n      expect(flatGet(env, 'service')).toBe('Service');\n    });\n  });\n\n  describe('null-check narrowing via patternOverrides (Phase C Task 7)', () => {\n    it('TS: if (x !== null) narrows User | null to User inside if-body', () => {\n      const code = `\nfunction process(x: User | null) {\n  if (x !== null) {\n    x.save();\n  }\n}`;\n      const tree = parse(code, TypeScript.typescript);\n      const typeEnv = buildTypeEnv(tree, 'typescript');\n      // Inside the if-body, x should resolve to User (nullable stripped)\n      const saveCall = tree.rootNode.descendantForIndex(tree.rootNode.text.indexOf('x.save'));\n      expect(typeEnv.lookup('x', saveCall)).toBe('User');\n    });\n\n    it('TS: if (x !== undefined) narrows User | undefined to User inside if-body', () => {\n      const code = `\nfunction process(x: User | undefined) {\n  if (x !== undefined) {\n    x.save();\n  }\n}`;\n      const tree = parse(code, TypeScript.typescript);\n      const typeEnv = buildTypeEnv(tree, 'typescript');\n      const saveCall = tree.rootNode.descendantForIndex(tree.rootNode.text.indexOf('x.save'));\n      expect(typeEnv.lookup('x', saveCall)).toBe('User');\n    });\n\n    it('TS: if (x != null) narrows with loose inequality', () => {\n      const code = `\nfunction process(x: User | null) {\n  if (x != null) {\n    x.save();\n  }\n}`;\n      const tree = parse(code, TypeScript.typescript);\n      const typeEnv = buildTypeEnv(tree, 'typescript');\n      const saveCall = tree.rootNode.descendantForIndex(tree.rootNode.text.indexOf('x.save'));\n      expect(typeEnv.lookup('x', saveCall)).toBe('User');\n    });\n\n    it('TS: null-check narrowing does NOT leak to else branch', () => {\n      const code = `\nfunction process(x: User | null) {\n  if (x !== null) {\n    x.save();\n  } 
else {\n    x.fallback();\n  }\n}`;\n      const tree = parse(code, TypeScript.typescript);\n      const typeEnv = buildTypeEnv(tree, 'typescript');\n      // Inside else branch, x should retain original nullable type (User via fastStripNullable)\n      const fallbackCall = tree.rootNode.descendantForIndex(tree.rootNode.text.indexOf('x.fallback'));\n      // The else branch is NOT in the narrowing range, so lookup falls through to\n      // the flat scopeEnv which has \"User | null\" — fastStripNullable strips it to User.\n      // This is expected: without negative narrowing (Phase 13A), else branches still get\n      // the base stripped type. The key invariant is that the narrowing override does NOT\n      // apply outside the if-body range.\n      expect(typeEnv.lookup('x', fallbackCall)).toBe('User');\n    });\n\n    it('TS: null-check narrowing does NOT apply outside the if block', () => {\n      const code = `\nfunction process(x: User | null) {\n  if (x !== null) {\n    x.save();\n  }\n  x.other();\n}`;\n      const tree = parse(code, TypeScript.typescript);\n      const typeEnv = buildTypeEnv(tree, 'typescript');\n      // After the if-block, x should use the flat scopeEnv (User | null → User via fastStripNullable)\n      const otherCall = tree.rootNode.descendantForIndex(tree.rootNode.text.indexOf('x.other'));\n      expect(typeEnv.lookup('x', otherCall)).toBe('User');\n    });\n\n    it('TS: no narrowing when variable has no nullable type', () => {\n      const code = `\nfunction process(x: User) {\n  if (x !== null) {\n    x.save();\n  }\n}`;\n      const tree = parse(code, TypeScript.typescript);\n      const typeEnv = buildTypeEnv(tree, 'typescript');\n      // x is already non-nullable — no narrowing override is emitted, but lookup still works\n      const saveCall = tree.rootNode.descendantForIndex(tree.rootNode.text.indexOf('x.save'));\n      expect(typeEnv.lookup('x', saveCall)).toBe('User');\n    });\n\n    it('TS: instanceof still works 
alongside null-check narrowing', () => {\n      const tree = parse(`\nfunction process(x) {\n  if (x instanceof User) {\n    x.save();\n  }\n}\n      `, TypeScript.typescript);\n      const { env } = buildTypeEnv(tree, 'typescript');\n      expect(flatGet(env, 'x')).toBe('User');\n    });\n\n    it('Kotlin: if (x != null) narrows nullable type inside if-body', () => {\n      const code = `\nfun process(x: User?) {\n    if (x != null) {\n        x.save()\n    }\n}`;\n      const tree = parse(code, Kotlin);\n      const typeEnv = buildTypeEnv(tree, 'kotlin');\n      const saveCall = tree.rootNode.descendantForIndex(tree.rootNode.text.indexOf('x.save'));\n      expect(typeEnv.lookup('x', saveCall)).toBe('User');\n    });\n\n    it('Kotlin: when/is still works alongside null-check narrowing', () => {\n      const tree = parse(`\nfun process(x: Any) {\n    when (x) {\n        is User -> x.name\n    }\n}\n      `, Kotlin);\n      const { env } = buildTypeEnv(tree, 'kotlin');\n      expect(flatGet(env, 'x')).toBe('User');\n    });\n\n    it('C#: if (x != null) narrows nullable type inside if-body', () => {\n      const code = `\nclass App {\n    void Process(User? 
x) {\n        if (x != null) {\n            x.Save();\n        }\n    }\n}`;\n      const tree = parse(code, CSharp);\n      const typeEnv = buildTypeEnv(tree, 'csharp');\n      const saveCall = tree.rootNode.descendantForIndex(tree.rootNode.text.indexOf('x.Save'));\n      expect(typeEnv.lookup('x', saveCall)).toBe('User');\n    });\n\n    it('C#: is_pattern_expression type pattern still works alongside null-check', () => {\n      const tree = parse(`\nclass App {\n    void Process(object obj) {\n        if (obj is User user) {\n            user.Save();\n        }\n    }\n}\n      `, CSharp);\n      const { env } = buildTypeEnv(tree, 'csharp');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n  });\n\n  describe('multi-declarator type association (sizeBefore optimization)', () => {\n    it('Java: multi-declarator captures all variable names with shared type', () => {\n      const tree = parse(`\nclass App {\n    void run() {\n        User a = getA(), b = getB();\n        a.save();\n        b.save();\n    }\n}\n      `, Java);\n      const { env } = buildTypeEnv(tree, 'java');\n      expect(flatGet(env, 'a')).toBe('User');\n      expect(flatGet(env, 'b')).toBe('User');\n    });\n\n    it('Java: untyped declaration before typed does not get false type association', () => {\n      // `x` has no type annotation → must NOT be associated with the User type\n      // from the later declaration. 
This guards the sizeBefore skip logic.\n      const tree = parse(`\nclass App {\n    void run() {\n        var x = getX();\n        User user = getUser();\n        user.save();\n    }\n}\n      `, Java);\n      const { env } = buildTypeEnv(tree, 'java');\n      expect(flatGet(env, 'user')).toBe('User');\n      // x should NOT have a type binding (it's untyped via var)\n      expect(flatGet(env, 'x')).toBeUndefined();\n    });\n\n    it('C#: multi-declarator with shared type captures both variables', () => {\n      const tree = parse(`\nclass App {\n    void Run() {\n        User a = GetA(), b = GetB();\n        a.Save();\n        b.Save();\n    }\n}\n      `, CSharp);\n      const { env } = buildTypeEnv(tree, 'csharp');\n      expect(flatGet(env, 'a')).toBe('User');\n      expect(flatGet(env, 'b')).toBe('User');\n    });\n\n    it('Java: single declarator with type still works after optimization', () => {\n      const tree = parse(`\nclass App {\n    void run() {\n        User user = getUser();\n        user.save();\n    }\n}\n      `, Java);\n      const { env } = buildTypeEnv(tree, 'java');\n      expect(flatGet(env, 'user')).toBe('User');\n    });\n\n    it('Java: for-loop resolves element type from multi-declarator typed iterable', () => {\n      // Tests that declarationTypeNodes is correctly populated for multi-declarator\n      // variables, enabling for-loop element type resolution (Strategy 1).\n      const tree = parse(`\nclass App {\n    void run() {\n        List<User> users = getUsers(), admins = getAdmins();\n        for (User u : users) {\n            u.save();\n        }\n    }\n}\n      `, Java);\n      const { env } = buildTypeEnv(tree, 'java');\n      expect(flatGet(env, 'users')).toBe('List');\n      expect(flatGet(env, 'admins')).toBe('List');\n      expect(flatGet(env, 'u')).toBe('User');\n    });\n  });\n\n  describe('constructorTypeMap (virtual dispatch detection)', () => {\n    it('Java: Animal a = new Dog() populates constructorTypeMap 
with Dog', () => {\n      const tree = parse(`\nclass Animal {}\nclass Dog extends Animal {}\nclass App {\n    void run() {\n        Animal a = new Dog();\n    }\n}\n      `, Java);\n      const { constructorTypeMap } = buildTypeEnv(tree, 'java');\n      // Find the entry for variable 'a'\n      let ctorType: string | undefined;\n      for (const [key, value] of constructorTypeMap) {\n        if (key.endsWith('\\0a')) { ctorType = value; break; }\n      }\n      expect(ctorType).toBe('Dog');\n    });\n\n    it('Java: same-type constructor does NOT populate constructorTypeMap', () => {\n      const tree = parse(`\nclass User {}\nclass App {\n    void run() {\n        User u = new User();\n    }\n}\n      `, Java);\n      const { constructorTypeMap } = buildTypeEnv(tree, 'java');\n      let found = false;\n      for (const [key] of constructorTypeMap) {\n        if (key.endsWith('\\0u')) { found = true; break; }\n      }\n      expect(found).toBe(false);\n    });\n\n    it('TypeScript: const a: Animal = new Dog() — constructorTypeMap not populated (type on variable_declarator, not lexical_declaration)', () => {\n      // TS virtual dispatch for this pattern works through call-processor,\n      // not constructorTypeMap — the type annotation is on the child\n      // variable_declarator, not the outer lexical_declaration.\n      const tree = parse(`\nclass Animal {}\nclass Dog extends Animal {}\nconst a: Animal = new Dog();\n      `, TypeScript.typescript);\n      const { env, constructorTypeMap } = buildTypeEnv(tree, 'typescript');\n      expect(flatGet(env, 'a')).toBe('Animal');\n      let found = false;\n      for (const [key] of constructorTypeMap) {\n        if (key.endsWith('\\0a')) { found = true; break; }\n      }\n      expect(found).toBe(false);\n    });\n\n    it('C++: Animal* a = new Dog() populates constructorTypeMap', () => {\n      const tree = parse(`\nclass Animal {};\nclass Dog : public Animal {};\nvoid run() {\n    Animal* a = new Dog();\n}\n      
`, CPP);\n      const { constructorTypeMap } = buildTypeEnv(tree, 'cpp');\n      let ctorType: string | undefined;\n      for (const [key, value] of constructorTypeMap) {\n        if (key.endsWith('\\0a')) { ctorType = value; break; }\n      }\n      expect(ctorType).toBe('Dog');\n    });\n\n    it('C#: Animal a = new Dog() populates constructorTypeMap', () => {\n      const tree = parse(`\nclass Animal {}\nclass Dog : Animal {}\nclass App {\n    void Run() {\n        Animal a = new Dog();\n    }\n}\n      `, CSharp);\n      const { constructorTypeMap } = buildTypeEnv(tree, 'csharp');\n      let ctorType: string | undefined;\n      for (const [key, value] of constructorTypeMap) {\n        if (key.endsWith('\\0a')) { ctorType = value; break; }\n      }\n      expect(ctorType).toBe('Dog');\n    });\n\n    it('C#: implicit new() does NOT populate constructorTypeMap (type from declaration)', () => {\n      const tree = parse(`\nclass Dog {}\nclass App {\n    void Run() {\n        Dog d = new();\n    }\n}\n      `, CSharp);\n      const { env, constructorTypeMap } = buildTypeEnv(tree, 'csharp');\n      // d should be bound via declared type path\n      expect(flatGet(env, 'd')).toBe('Dog');\n      // constructorTypeMap should NOT have an entry (same type, no override needed)\n      let found = false;\n      for (const [key] of constructorTypeMap) {\n        if (key.endsWith('\\0d')) { found = true; break; }\n      }\n      expect(found).toBe(false);\n    });\n  });\n\n});\n"
  },
  {
    "path": "gitnexus/test/unit/utils.test.ts",
    "content": "import { describe, it, expect } from 'vitest';\nimport { generateId } from '../../src/lib/utils.js';\n\ndescribe('generateId', () => {\n  it('creates id from label and name', () => {\n    expect(generateId('Function', 'main')).toBe('Function:main');\n  });\n\n  it('handles labels with various node types', () => {\n    expect(generateId('File', 'src/index.ts')).toBe('File:src/index.ts');\n    expect(generateId('Class', 'UserService')).toBe('Class:UserService');\n    expect(generateId('Method', 'getData')).toBe('Method:getData');\n    expect(generateId('Folder', 'src')).toBe('Folder:src');\n    expect(generateId('Interface', 'IUser')).toBe('Interface:IUser');\n  });\n\n  it('handles special characters in name', () => {\n    expect(generateId('Function', 'path/to/file.ts:init')).toBe('Function:path/to/file.ts:init');\n  });\n\n  it('handles empty strings', () => {\n    expect(generateId('', '')).toBe(':');\n    expect(generateId('', 'name')).toBe(':name');\n    expect(generateId('label', '')).toBe('label:');\n  });\n\n  it('handles relationship IDs', () => {\n    expect(generateId('CONTAINS', 'Folder:src->File:src/index.ts')).toBe('CONTAINS:Folder:src->File:src/index.ts');\n  });\n\n  it('handles multi-language node types', () => {\n    expect(generateId('Struct', 'Point')).toBe('Struct:Point');\n    expect(generateId('Trait', 'Display')).toBe('Trait:Display');\n    expect(generateId('Impl', 'Display for Point')).toBe('Impl:Display for Point');\n    expect(generateId('Enum', 'Color')).toBe('Enum:Color');\n    expect(generateId('Namespace', 'std')).toBe('Namespace:std');\n    expect(generateId('Constructor', 'User')).toBe('Constructor:User');\n  });\n});\n"
  },
  {
    "path": "gitnexus/test/utils/hook-test-helpers.ts",
    "content": "/**\n * Shared helpers for hook test files (unit + integration).\n */\nimport { spawnSync } from 'child_process';\n\nexport function runHook(\n  hookPath: string,\n  input: Record<string, any>,\n  cwd?: string,\n): { stdout: string; stderr: string; status: number | null } {\n  const result = spawnSync(process.execPath, [hookPath], {\n    input: JSON.stringify(input),\n    encoding: 'utf-8',\n    timeout: 10000,\n    cwd,\n    stdio: ['pipe', 'pipe', 'pipe'],\n  });\n  return {\n    stdout: result.stdout || '',\n    stderr: result.stderr || '',\n    status: result.status,\n  };\n}\n\nexport function parseHookOutput(\n  stdout: string,\n): { hookEventName?: string; additionalContext?: string } | null {\n  if (!stdout.trim()) return null;\n  try {\n    const parsed = JSON.parse(stdout.trim());\n    return parsed.hookSpecificOutput || null;\n  } catch {\n    return null;\n  }\n}\n"
  },
  {
    "path": "gitnexus/test/vitest.d.ts",
    "content": "import 'vitest';\n\ndeclare module 'vitest' {\n  export interface ProvidedContext {\n    lbugDbPath: string;\n  }\n}\n"
  },
  {
    "path": "gitnexus/tsconfig.json",
    "content": "{\n  \"compilerOptions\": {\n    \"target\": \"ES2022\",\n    \"lib\": [\n      \"ES2022\"\n    ],\n    \"module\": \"NodeNext\",\n    \"moduleResolution\": \"NodeNext\",\n    \"outDir\": \"dist\",\n    \"rootDir\": \"src\",\n    \"strict\": false,\n    \"esModuleInterop\": true,\n    \"skipLibCheck\": true,\n    \"resolveJsonModule\": true,\n    \"forceConsistentCasingInFileNames\": true,\n    \"declaration\": true,\n    \"types\": [\n      \"node\"\n    ]\n  },\n  \"include\": [\n    \"src/**/*\"\n  ]\n}"
  },
  {
    "path": "gitnexus/tsconfig.test.json",
    "content": "{\n  \"extends\": \"./tsconfig.json\",\n  \"compilerOptions\": {\n    \"rootDir\": \".\",\n    \"noEmit\": true,\n    \"types\": [\"node\", \"vitest/globals\"]\n  },\n  \"include\": [\"src/**/*\", \"test/**/*\"],\n  \"exclude\": [\"test/fixtures/mini-repo/**\", \"test/fixtures/sample-code/**\"]\n}\n"
  },
  {
    "path": "gitnexus/vendor/leiden/index.cjs",
    "content": "/**\n * Graphology Leiden Algorithm\n * ============================\n *\n * JavaScript implementation of the Leiden community detection\n * algorithm for graphology.\n *\n * Vendored from: https://github.com/graphology/graphology/tree/master/src/communities-leiden\n * License: MIT\n *\n * [Reference]\n * Traag, V. A., et al. \"From Louvain to Leiden: Guaranteeing Well-Connected\n * Communities\". Scientific Reports, vol. 9, no 1, 2019, p. 5233.\n * https://arxiv.org/abs/1810.08473\n */\nvar resolveDefaults = require('graphology-utils/defaults');\nvar isGraph = require('graphology-utils/is-graph');\nvar inferType = require('graphology-utils/infer-type');\nvar SparseMap = require('mnemonist/sparse-map');\nvar SparseQueueSet = require('mnemonist/sparse-queue-set');\nvar createRandomIndex = require('pandemonium/random-index').createRandomIndex;\nvar utils = require('./utils.cjs');\n\nvar indices = require('graphology-indices/louvain');\nvar addWeightToCommunity = utils.addWeightToCommunity;\n\nvar UndirectedLouvainIndex = indices.UndirectedLouvainIndex;\n\nvar UndirectedLeidenAddenda = utils.UndirectedLeidenAddenda;\n\nvar DEFAULTS = {\n attributes: {\n community: 'community',\n weight: 'weight'\n },\n randomness: 0.01,\n randomWalk: true,\n resolution: 1,\n rng: Math.random,\n weighted: false\n};\n\nvar EPSILON = 1e-10;\n\nfunction tieBreaker(\n bestCommunity,\n currentCommunity,\n targetCommunity,\n delta,\n bestDelta\n) {\n if (Math.abs(delta - bestDelta) < EPSILON) {\n if (bestCommunity === currentCommunity) {\n return false;\n } else {\n return targetCommunity > bestCommunity;\n }\n } else if (delta > bestDelta) {\n return true;\n }\n\n return false;\n}\n\nfunction undirectedLeiden(detailed, graph, options) {\n var index = new UndirectedLouvainIndex(graph, {\n attributes: {\n weight: options.attributes.weight\n },\n keepDendrogram: detailed,\n resolution: options.resolution,\n weighted: options.weighted\n });\n\n var addenda = new 
UndirectedLeidenAddenda(index, {\n randomness: options.randomness,\n rng: options.rng\n });\n\n var randomIndex = createRandomIndex(options.rng);\n\n // Communities\n var currentCommunity, targetCommunity;\n var communities = new SparseMap(Float64Array, index.C);\n\n // Traversal\n var queue = new SparseQueueSet(index.C),\n start,\n end,\n weight,\n ci,\n ri,\n s,\n i,\n j,\n l;\n\n // Metrics\n var degree, targetCommunityDegree;\n\n // Moves\n var bestCommunity, bestDelta, deltaIsBetter, delta;\n\n // Details\n var deltaComputations = 0,\n nodesVisited = 0,\n moves = [],\n currentMoves;\n\n while (true) {\n l = index.C;\n\n currentMoves = 0;\n\n // Traversal of the graph\n ri = options.randomWalk ? randomIndex(l) : 0;\n\n for (s = 0; s < l; s++, ri++) {\n i = ri % l;\n queue.enqueue(i);\n }\n\n while (queue.size !== 0) {\n i = queue.dequeue();\n nodesVisited++;\n\n degree = 0;\n communities.clear();\n\n currentCommunity = index.belongings[i];\n\n start = index.starts[i];\n end = index.starts[i + 1];\n\n // Traversing neighbors\n for (; start < end; start++) {\n j = index.neighborhood[start];\n weight = index.weights[start];\n\n targetCommunity = index.belongings[j];\n\n // Incrementing metrics\n degree += weight;\n addWeightToCommunity(communities, targetCommunity, weight);\n }\n\n // Finding best community to move to\n bestDelta = index.fastDeltaWithOwnCommunity(\n i,\n degree,\n communities.get(currentCommunity) || 0,\n currentCommunity\n );\n bestCommunity = currentCommunity;\n\n for (ci = 0; ci < communities.size; ci++) {\n targetCommunity = communities.dense[ci];\n\n if (targetCommunity === currentCommunity) continue;\n\n targetCommunityDegree = communities.vals[ci];\n\n deltaComputations++;\n\n delta = index.fastDelta(\n i,\n degree,\n targetCommunityDegree,\n targetCommunity\n );\n\n deltaIsBetter = tieBreaker(\n bestCommunity,\n currentCommunity,\n targetCommunity,\n delta,\n bestDelta\n );\n\n if (deltaIsBetter) {\n bestDelta = delta;\n bestCommunity = 
targetCommunity;\n }\n }\n\n if (bestDelta < 0) {\n bestCommunity = index.isolate(i, degree);\n\n if (bestCommunity === currentCommunity) continue;\n } else {\n if (bestCommunity === currentCommunity) {\n continue;\n } else {\n index.move(i, degree, bestCommunity);\n }\n }\n\n currentMoves++;\n\n // Adding neighbors from other communities to the queue\n start = index.starts[i];\n end = index.starts[i + 1];\n\n for (; start < end; start++) {\n j = index.neighborhood[start];\n targetCommunity = index.belongings[j];\n\n if (targetCommunity !== bestCommunity) queue.enqueue(j);\n }\n }\n\n moves.push(currentMoves);\n\n if (currentMoves === 0) {\n index.zoomOut();\n break;\n }\n\n if (!addenda.onlySingletons()) {\n // We continue working on the induced graph\n addenda.zoomOut();\n continue;\n }\n\n break;\n }\n\n var results = {\n index: index,\n deltaComputations: deltaComputations,\n nodesVisited: nodesVisited,\n moves: moves\n };\n\n return results;\n}\n\n/**\n * Function returning the communities mapping of the graph.\n *\n * @param {boolean} assign - Assign communities to nodes attributes?\n * @param {boolean} detailed - Whether to return detailed information.\n * @param {Graph} graph - Target graph.\n * @param {object} options - Options:\n * @param {object} attributes - Attribute names:\n * @param {string} community - Community node attribute name.\n * @param {string} weight - Weight edge attribute name.\n * @param {number} randomness - Randomness parameter.\n * @param {boolean} randomWalk - Whether to traverse the graph in random order.\n * @param {number} resolution - Resolution parameter.\n * @param {function} rng - RNG function to use.\n * @param {boolean} weighted - Whether to compute the weighted version.\n * @return {object}\n */\nfunction leiden(assign, detailed, graph, options) {\n if (!isGraph(graph))\n throw new Error(\n 'graphology-communities-leiden: the given graph is not a valid graphology instance.'\n );\n\n var type = inferType(graph);\n\n if (type 
=== 'mixed')\n throw new Error(\n 'graphology-communities-leiden: cannot run the algorithm on a true mixed graph.'\n );\n\n if (type === 'directed')\n throw new Error(\n 'graphology-communities-leiden: not yet implemented for directed graphs.'\n );\n\n // Attributes name\n options = resolveDefaults(options, DEFAULTS);\n\n // Empty graph case\n var c = 0;\n\n if (graph.size === 0) {\n if (assign) {\n graph.forEachNode(function (node) {\n graph.setNodeAttribute(node, options.attributes.communities, c++);\n });\n\n return;\n }\n\n var communities = {};\n\n graph.forEachNode(function (node) {\n communities[node] = c++;\n });\n\n if (!detailed) return communities;\n\n return {\n communities: communities,\n count: graph.order,\n deltaComputations: 0,\n dendrogram: null,\n level: 0,\n modularity: NaN,\n moves: null,\n nodesVisited: 0,\n resolution: options.resolution\n };\n }\n\n var fn = undirectedLeiden;\n\n var results = fn(detailed, graph, options);\n\n var index = results.index;\n\n // Standard output\n if (!detailed) {\n if (assign) {\n index.assign(options.attributes.community);\n return;\n }\n\n return index.collect();\n }\n\n // Detailed output\n var output = {\n count: index.C,\n deltaComputations: results.deltaComputations,\n dendrogram: index.dendrogram,\n level: index.level,\n modularity: index.modularity(),\n moves: results.moves,\n nodesVisited: results.nodesVisited,\n resolution: options.resolution\n };\n\n if (assign) {\n index.assign(options.attributes.community);\n return output;\n }\n\n output.communities = index.collect();\n\n return output;\n}\n\n/**\n * Exporting.\n */\nvar fn = leiden.bind(null, false, false);\nfn.assign = leiden.bind(null, true, false);\nfn.detailed = leiden.bind(null, false, true);\nfn.defaults = DEFAULTS;\n\nmodule.exports = fn;\n"
  },
  {
    "path": "gitnexus/vendor/leiden/utils.cjs",
    "content": "/**\n * Graphology Leiden Utils\n * ========================\n *\n * Miscellaneous utilities used by the Leiden algorithm.\n *\n * Vendored from: https://github.com/graphology/graphology/tree/master/src/communities-leiden\n * License: MIT\n */\nvar SparseMap = require('mnemonist/sparse-map');\nvar createRandom = require('pandemonium/random').createRandom;\n\nfunction addWeightToCommunity(map, community, weight) {\n var currentWeight = map.get(community);\n\n if (typeof currentWeight === 'undefined') currentWeight = 0;\n\n currentWeight += weight;\n\n map.set(community, currentWeight);\n}\n\nfunction UndirectedLeidenAddenda(index, options) {\n options = options || {};\n\n var rng = options.rng || Math.random;\n var randomness = 'randomness' in options ? options.randomness : 0.01;\n\n this.index = index;\n this.random = createRandom(rng);\n this.randomness = randomness;\n this.rng = rng;\n\n var NodesPointerArray = index.counts.constructor;\n var WeightsArray = index.weights.constructor;\n\n var order = index.C;\n this.resolution = index.resolution;\n\n // Used to group nodes by communities\n this.B = index.C;\n this.C = 0;\n this.communitiesOffsets = new NodesPointerArray(order);\n this.nodesSortedByCommunities = new NodesPointerArray(order);\n this.communitiesBounds = new NodesPointerArray(order + 1);\n\n // Used to merge nodes subsets\n this.communityWeights = new WeightsArray(order);\n this.degrees = new WeightsArray(order);\n this.nonSingleton = new Uint8Array(order);\n this.externalEdgeWeightPerCommunity = new WeightsArray(order);\n this.belongings = new NodesPointerArray(order);\n this.neighboringCommunities = new SparseMap(WeightsArray, order);\n this.cumulativeIncrement = new Float64Array(order);\n this.macroCommunities = null;\n}\n\nUndirectedLeidenAddenda.prototype.groupByCommunities = function () {\n var index = this.index;\n\n var n, i, c, b, o;\n\n n = 0;\n o = 0;\n\n for (i = 0; i < index.C; i++) {\n c = index.counts[i];\n\n if (c !== 
0) {\n this.communitiesBounds[o++] = n;\n n += c;\n this.communitiesOffsets[i] = n;\n }\n }\n\n this.communitiesBounds[o] = n;\n\n o = 0;\n\n for (i = 0; i < index.C; i++) {\n b = index.belongings[i];\n o = --this.communitiesOffsets[b];\n this.nodesSortedByCommunities[o] = i;\n }\n\n this.B = index.C - index.U;\n this.C = index.C;\n};\n\nUndirectedLeidenAddenda.prototype.communities = function () {\n var communities = new Array(this.B);\n\n var i, j, community, start, stop;\n\n for (i = 0; i < this.B; i++) {\n start = this.communitiesBounds[i];\n stop = this.communitiesBounds[i + 1];\n community = [];\n\n for (j = start; j < stop; j++) {\n community.push(j);\n }\n\n communities[i] = community;\n }\n\n return communities;\n};\n\nUndirectedLeidenAddenda.prototype.mergeNodesSubset = function (start, stop) {\n var index = this.index;\n var currentMacroCommunity =\n index.belongings[this.nodesSortedByCommunities[start]];\n var neighboringCommunities = this.neighboringCommunities;\n\n var totalNodeWeight = 0;\n\n var i, j, w;\n var ei, el, et;\n\n // Initializing singletons\n for (j = start; j < stop; j++) {\n i = this.nodesSortedByCommunities[j];\n\n this.belongings[i] = i;\n this.nonSingleton[i] = 0;\n this.degrees[i] = 0;\n totalNodeWeight += index.loops[i] / 2;\n\n this.communityWeights[i] = index.loops[i];\n this.externalEdgeWeightPerCommunity[i] = 0;\n\n ei = index.starts[i];\n el = index.starts[i + 1];\n\n for (; ei < el; ei++) {\n et = index.neighborhood[ei];\n w = index.weights[ei];\n\n this.degrees[i] += w;\n\n if (index.belongings[et] !== currentMacroCommunity) continue;\n\n totalNodeWeight += w;\n this.externalEdgeWeightPerCommunity[i] += w;\n this.communityWeights[i] += w;\n }\n }\n\n var microDegrees = this.externalEdgeWeightPerCommunity.slice();\n\n var s, ri, ci;\n var order = stop - start;\n\n var degree,\n bestCommunity,\n qualityValueIncrement,\n maxQualityValueIncrement,\n totalTransformedQualityValueIncrement,\n targetCommunity,\n 
targetCommunityDegree,\n targetCommunityWeight;\n\n var r, lo, hi, mid, chosenCommunity;\n\n ri = this.random(start, stop - 1);\n\n for (s = start; s < stop; s++, ri++) {\n j = start + (ri % order);\n\n i = this.nodesSortedByCommunities[j];\n\n if (this.nonSingleton[i] === 1) {\n continue;\n }\n\n if (\n this.externalEdgeWeightPerCommunity[i] <\n this.communityWeights[i] *\n (totalNodeWeight / 2 - this.communityWeights[i]) *\n this.resolution\n ) {\n continue;\n }\n\n this.communityWeights[i] = 0;\n this.externalEdgeWeightPerCommunity[i] = 0;\n\n neighboringCommunities.clear();\n neighboringCommunities.set(i, 0);\n\n degree = 0;\n\n ei = index.starts[i];\n el = index.starts[i + 1];\n\n for (; ei < el; ei++) {\n et = index.neighborhood[ei];\n\n if (index.belongings[et] !== currentMacroCommunity) continue;\n\n w = index.weights[ei];\n\n degree += w;\n\n addWeightToCommunity(neighboringCommunities, this.belongings[et], w);\n }\n\n bestCommunity = i;\n maxQualityValueIncrement = 0;\n totalTransformedQualityValueIncrement = 0;\n\n for (ci = 0; ci < neighboringCommunities.size; ci++) {\n targetCommunity = neighboringCommunities.dense[ci];\n targetCommunityDegree = neighboringCommunities.vals[ci];\n targetCommunityWeight = this.communityWeights[targetCommunity];\n\n if (\n this.externalEdgeWeightPerCommunity[targetCommunity] >=\n targetCommunityWeight *\n (totalNodeWeight / 2 - targetCommunityWeight) *\n this.resolution\n ) {\n qualityValueIncrement =\n targetCommunityDegree -\n ((degree + index.loops[i]) *\n targetCommunityWeight *\n this.resolution) /\n totalNodeWeight;\n\n if (qualityValueIncrement > maxQualityValueIncrement) {\n bestCommunity = targetCommunity;\n maxQualityValueIncrement = qualityValueIncrement;\n }\n\n if (qualityValueIncrement >= 0)\n totalTransformedQualityValueIncrement += Math.exp(\n qualityValueIncrement / this.randomness\n );\n }\n\n this.cumulativeIncrement[ci] = totalTransformedQualityValueIncrement;\n }\n\n if (\n 
totalTransformedQualityValueIncrement < Number.MAX_VALUE &&\n totalTransformedQualityValueIncrement < Infinity\n ) {\n r = totalTransformedQualityValueIncrement * this.rng();\n lo = -1;\n hi = neighboringCommunities.size + 1;\n\n while (lo < hi - 1) {\n mid = (lo + hi) >>> 1;\n\n if (this.cumulativeIncrement[mid] >= r) hi = mid;\n else lo = mid;\n }\n\n chosenCommunity = neighboringCommunities.dense[hi];\n } else {\n chosenCommunity = bestCommunity;\n }\n\n this.communityWeights[chosenCommunity] += degree + index.loops[i];\n\n ei = index.starts[i];\n el = index.starts[i + 1];\n\n for (; ei < el; ei++) {\n et = index.neighborhood[ei];\n\n if (index.belongings[et] !== currentMacroCommunity) continue;\n\n targetCommunity = this.belongings[et];\n\n if (targetCommunity === chosenCommunity) {\n this.externalEdgeWeightPerCommunity[chosenCommunity] -=\n microDegrees[et];\n } else {\n this.externalEdgeWeightPerCommunity[chosenCommunity] +=\n microDegrees[et];\n }\n }\n\n if (chosenCommunity !== i) {\n this.belongings[i] = chosenCommunity;\n this.nonSingleton[chosenCommunity] = 1;\n this.C--;\n }\n }\n\n var microCommunities = this.neighboringCommunities;\n microCommunities.clear();\n\n for (j = start; j < stop; j++) {\n i = this.nodesSortedByCommunities[j];\n microCommunities.set(this.belongings[i], 1);\n }\n\n return microCommunities.dense.slice(0, microCommunities.size);\n};\n\nUndirectedLeidenAddenda.prototype.refinePartition = function () {\n this.groupByCommunities();\n\n this.macroCommunities = new Array(this.B);\n\n var i, start, stop, mapping;\n\n var bounds = this.communitiesBounds;\n\n for (i = 0; i < this.B; i++) {\n start = bounds[i];\n stop = bounds[i + 1];\n\n mapping = this.mergeNodesSubset(start, stop);\n this.macroCommunities[i] = mapping;\n }\n};\n\nUndirectedLeidenAddenda.prototype.split = function () {\n var index = this.index;\n var isolates = this.neighboringCommunities;\n\n isolates.clear();\n\n var i, community, isolated;\n\n for (i = 0; i < index.C; 
i++) {\n community = this.belongings[i];\n\n if (i !== community) continue;\n\n isolated = index.isolate(i, this.degrees[i]);\n isolates.set(community, isolated);\n }\n\n for (i = 0; i < index.C; i++) {\n community = this.belongings[i];\n\n if (i === community) continue;\n\n isolated = isolates.get(community);\n index.move(i, this.degrees[i], isolated);\n }\n\n var j, macro;\n\n for (i = 0; i < this.macroCommunities.length; i++) {\n macro = this.macroCommunities[i];\n\n for (j = 0; j < macro.length; j++) macro[j] = isolates.get(macro[j]);\n }\n};\n\nUndirectedLeidenAddenda.prototype.zoomOut = function () {\n var index = this.index;\n this.refinePartition();\n this.split();\n\n var newLabels = index.zoomOut();\n\n var macro, leader, follower;\n\n var i, j;\n\n for (i = 0; i < this.macroCommunities.length; i++) {\n macro = this.macroCommunities[i];\n leader = newLabels[macro[0]];\n\n for (j = 1; j < macro.length; j++) {\n follower = newLabels[macro[j]];\n index.expensiveMove(follower, leader);\n }\n }\n};\n\nUndirectedLeidenAddenda.prototype.onlySingletons = function () {\n var index = this.index;\n\n var i;\n\n for (i = 0; i < index.C; i++) {\n if (index.counts[i] > 1) return false;\n }\n\n return true;\n};\n\nexports.addWeightToCommunity = addWeightToCommunity;\nexports.UndirectedLeidenAddenda = UndirectedLeidenAddenda;\n"
  },
  {
    "path": "gitnexus/vitest.config.ts",
    "content": "import { defineConfig } from 'vitest/config';\n\nexport default defineConfig({\n  test: {\n    // Shared settings — inherited by all projects via extends: true\n    globalSetup: ['test/global-setup.ts'],\n    testTimeout: 30000,\n    hookTimeout: 120000,\n    pool: 'forks',\n    globals: true,\n    teardownTimeout: 3000,\n    // N-API destructors can crash worker forks on macOS during process exit.\n    // This is independent of the QueryResult lifetime fix in @ladybugdb/core 0.15.2 —\n    // it's a vitest forks + native addon interaction where destructors run in\n    // arbitrary order at exit. Tests themselves pass; only the exit crashes.\n    // TODO: remove once LadybugDB fixes all N-API destructor ordering issues.\n    dangerouslyIgnoreUnhandledErrors: true,\n\n    // Coverage stays at root (not supported in project configs)\n    coverage: {\n      provider: 'v8',\n      include: ['src/**/*.ts'],\n      exclude: [\n        'src/cli/index.ts',          // CLI entry point (commander wiring)\n        'src/server/**',              // HTTP server (requires network)\n        'src/core/wiki/**',           // Wiki generation (requires LLM)\n      ],\n      // Auto-ratchet: vitest bumps thresholds when coverage exceeds them.\n      // CI will fail if a PR drops below these floors.\n      thresholds: {\n        statements: 26,\n        branches: 23,\n        functions: 28,\n        lines: 27,\n        autoUpdate: true,\n      },\n    },\n\n    // LadybugDB's native mmap addon causes file-lock conflicts when vitest\n    // runs lbug test files in parallel forks on Windows.  The 'lbug-db'\n    // project forces sequential execution (fileParallelism: false).\n    //\n    // Each file runs in its own fork — the fork exits after the file\n    // completes, triggering an N-API destructor segfault that is caught\n    // by dangerouslyIgnoreUnhandledErrors.  Tests themselves pass; only\n    // the exit crashes.  
This is safer than isolate: false, which causes\n    // native state corruption after 2-3 open/close cycles in the same fork.\n    projects: [\n      {\n        extends: true,\n        test: {\n          name: 'lbug-db',\n          include: [\n            'test/integration/lbug-core-adapter.test.ts',\n            'test/integration/lbug-pool.test.ts',\n            'test/integration/lbug-pool-stability.test.ts',\n            'test/integration/local-backend.test.ts',\n            'test/integration/local-backend-calltool.test.ts',\n            'test/integration/search-core.test.ts',\n            'test/integration/search-pool.test.ts',\n            'test/integration/augmentation.test.ts',\n          ],\n          fileParallelism: false,\n          sequence: { groupOrder: 1 },\n        },\n      },\n      {\n        extends: true,\n        test: {\n          name: 'default',\n          sequence: { groupOrder: 2 },\n          include: ['test/**/*.test.ts'],\n          exclude: [\n            'test/integration/lbug-core-adapter.test.ts',\n            'test/integration/lbug-pool.test.ts',\n            'test/integration/lbug-pool-stability.test.ts',\n            'test/integration/local-backend.test.ts',\n            'test/integration/local-backend-calltool.test.ts',\n            'test/integration/search-core.test.ts',\n            'test/integration/search-pool.test.ts',\n            'test/integration/augmentation.test.ts',\n          ],\n        },\n      },\n    ],\n  },\n});\n"
  },
  {
    "path": "gitnexus-claude-plugin/.claude-plugin/plugin.json",
    "content": "{\n  \"name\": \"gitnexus\",\n  \"description\": \"Code intelligence powered by a knowledge graph. Provides execution flow tracing, blast radius analysis, and augmented search across your codebase.\",\n  \"version\": \"1.3.6\",\n  \"author\": {\n    \"name\": \"GitNexus\"\n  },\n  \"homepage\": \"https://github.com/abhigyanpatwari/GitNexus\",\n  \"repository\": \"https://github.com/abhigyanpatwari/GitNexus\",\n  \"keywords\": [\"code-intelligence\", \"knowledge-graph\", \"mcp\", \"static-analysis\"]\n}\n"
  },
  {
    "path": "gitnexus-claude-plugin/.mcp.json",
    "content": "{\n  \"mcpServers\": {\n    \"gitnexus\": {\n      \"command\": \"npx\",\n      \"args\": [\"-y\", \"gitnexus@latest\", \"mcp\"]\n    }\n  }\n}\n"
  },
  {
    "path": "gitnexus-claude-plugin/hooks/gitnexus-hook.js",
    "content": "#!/usr/bin/env node\n/**\n * GitNexus Claude Code Plugin Hook\n *\n * PreToolUse  — intercepts Grep/Glob/Bash searches and augments\n *               with graph context from the GitNexus index.\n * PostToolUse — detects stale index after git mutations and notifies\n *               the agent to reindex.\n *\n * NOTE: SessionStart hooks are broken on Windows (Claude Code bug #23576).\n * Session context is injected via CLAUDE.md / skills instead.\n */\n\nconst fs = require('fs');\nconst path = require('path');\nconst { spawnSync } = require('child_process');\n\n/**\n * Read JSON input from stdin synchronously.\n */\nfunction readInput() {\n  try {\n    const data = fs.readFileSync(0, 'utf-8');\n    return JSON.parse(data);\n  } catch {\n    return {};\n  }\n}\n\n/**\n * Find the .gitnexus directory by walking up from startDir.\n * Returns the path to .gitnexus/ or null if not found.\n */\nfunction findGitNexusDir(startDir) {\n  let dir = startDir || process.cwd();\n  for (let i = 0; i < 5; i++) {\n    const candidate = path.join(dir, '.gitnexus');\n    if (fs.existsSync(candidate)) return candidate;\n    const parent = path.dirname(dir);\n    if (parent === dir) break;\n    dir = parent;\n  }\n  return null;\n}\n\n/**\n * Extract search pattern from tool input.\n */\nfunction extractPattern(toolName, toolInput) {\n  if (toolName === 'Grep') {\n    return toolInput.pattern || null;\n  }\n\n  if (toolName === 'Glob') {\n    const raw = toolInput.pattern || '';\n    const match = raw.match(/[*\\/]([a-zA-Z][a-zA-Z0-9_-]{2,})/);\n    return match ? 
match[1] : null;\n  }\n\n  if (toolName === 'Bash') {\n    const cmd = toolInput.command || '';\n    if (!/\\brg\\b|\\bgrep\\b/.test(cmd)) return null;\n\n    const tokens = cmd.split(/\\s+/);\n    let foundCmd = false;\n    let skipNext = false;\n    const flagsWithValues = new Set(['-e', '-f', '-m', '-A', '-B', '-C', '-g', '--glob', '-t', '--type', '--include', '--exclude']);\n\n    for (const token of tokens) {\n      if (skipNext) { skipNext = false; continue; }\n      if (!foundCmd) {\n        if (/\\brg$|\\bgrep$/.test(token)) foundCmd = true;\n        continue;\n      }\n      if (token.startsWith('-')) {\n        if (flagsWithValues.has(token)) skipNext = true;\n        continue;\n      }\n      const cleaned = token.replace(/['\"]/g, '');\n      return cleaned.length >= 3 ? cleaned : null;\n    }\n    return null;\n  }\n\n  return null;\n}\n\n/**\n * Spawn a gitnexus CLI command synchronously.\n * Detects binary on PATH once, then runs exactly once.\n *\n * SECURITY: Never use shell: true with user-controlled arguments.\n * On Windows, invoke gitnexus.cmd directly (no shell needed).\n */\nfunction runGitNexusCli(args, cwd, timeout) {\n  const isWin = process.platform === 'win32';\n\n  // Detect whether 'gitnexus' is on PATH (cheap check, no execution)\n  let useDirectBinary = false;\n  try {\n    const which = spawnSync(\n      isWin ? 'where' : 'which', ['gitnexus'],\n      { encoding: 'utf-8', timeout: 3000, stdio: ['pipe', 'pipe', 'pipe'] }\n    );\n    useDirectBinary = which.status === 0;\n  } catch { /* not on PATH */ }\n\n  if (useDirectBinary) {\n    return spawnSync(\n      isWin ? 'gitnexus.cmd' : 'gitnexus', args,\n      { encoding: 'utf-8', timeout, cwd, stdio: ['pipe', 'pipe', 'pipe'] }\n    );\n  }\n  // npx fallback: on Windows invoke npx.cmd directly (no shell, per the security note above);\n  // extra 5s of timeout covers npx startup overhead\n  return spawnSync(\n    isWin ? 
'npx.cmd' : 'npx', ['-y', 'gitnexus', ...args],\n    { encoding: 'utf-8', timeout: timeout + 5000, cwd, stdio: ['pipe', 'pipe', 'pipe'] }\n  );\n}\n\n/**\n * Emit a hook response with additional context for the agent.\n */\nfunction sendHookResponse(hookEventName, message) {\n  console.log(JSON.stringify({\n    hookSpecificOutput: { hookEventName, additionalContext: message }\n  }));\n}\n\n/**\n * PreToolUse handler — augment searches with graph context.\n */\nfunction handlePreToolUse(input) {\n  const cwd = input.cwd || process.cwd();\n  if (!path.isAbsolute(cwd)) return;\n  if (!findGitNexusDir(cwd)) return;\n\n  const toolName = input.tool_name || '';\n  const toolInput = input.tool_input || {};\n\n  if (toolName !== 'Grep' && toolName !== 'Glob' && toolName !== 'Bash') return;\n\n  const pattern = extractPattern(toolName, toolInput);\n  if (!pattern || pattern.length < 3) return;\n\n  let result = '';\n  try {\n    const child = runGitNexusCli(['augment', '--', pattern], cwd, 7000);\n    if (!child.error && child.status === 0) {\n      result = child.stderr || '';\n    }\n  } catch { /* graceful failure */ }\n\n  if (result && result.trim()) {\n    sendHookResponse('PreToolUse', result.trim());\n  }\n}\n\n/**\n * PostToolUse handler — detect index staleness after git mutations.\n *\n * Instead of spawning a full `gitnexus analyze` synchronously (which blocks\n * the agent for up to 120s and risks LadybugDB corruption on timeout), we do a\n * lightweight staleness check: compare `git rev-parse HEAD` against the\n * lastCommit stored in `.gitnexus/meta.json`. 
If they differ, notify the\n * agent so it can decide when to reindex.\n */\nfunction handlePostToolUse(input) {\n  const toolName = input.tool_name || '';\n  if (toolName !== 'Bash') return;\n\n  const command = (input.tool_input || {}).command || '';\n  if (!/\\bgit\\s+(commit|merge|rebase|cherry-pick|pull)(\\s|$)/.test(command)) return;\n\n  // Only proceed if the command succeeded\n  const toolOutput = input.tool_output || {};\n  if (toolOutput.exit_code !== undefined && toolOutput.exit_code !== 0) return;\n\n  const cwd = input.cwd || process.cwd();\n  if (!path.isAbsolute(cwd)) return;\n  const gitNexusDir = findGitNexusDir(cwd);\n  if (!gitNexusDir) return;\n\n  // Compare HEAD against last indexed commit — skip if unchanged\n  let currentHead = '';\n  try {\n    const headResult = spawnSync('git', ['rev-parse', 'HEAD'], {\n      encoding: 'utf-8', timeout: 3000, cwd, stdio: ['pipe', 'pipe', 'pipe'],\n    });\n    currentHead = (headResult.stdout || '').trim();\n  } catch { return; }\n\n  if (!currentHead) return;\n\n  let lastCommit = '';\n  let hadEmbeddings = false;\n  try {\n    const meta = JSON.parse(fs.readFileSync(path.join(gitNexusDir, 'meta.json'), 'utf-8'));\n    lastCommit = meta.lastCommit || '';\n    hadEmbeddings = (meta.stats && meta.stats.embeddings > 0);\n  } catch { /* no meta — treat as stale */ }\n\n  // If HEAD matches last indexed commit, no reindex needed\n  if (currentHead && currentHead === lastCommit) return;\n\n  const analyzeCmd = `npx gitnexus analyze${hadEmbeddings ? ' --embeddings' : ''}`;\n  sendHookResponse('PostToolUse',\n    `GitNexus index is stale (last indexed: ${lastCommit ? lastCommit.slice(0, 7) : 'never'}). 
` +\n    `Run \\`${analyzeCmd}\\` to update the knowledge graph.`\n  );\n}\n\n// Dispatch map for hook events\nconst handlers = {\n  PreToolUse: handlePreToolUse,\n  PostToolUse: handlePostToolUse,\n};\n\nfunction main() {\n  try {\n    const input = readInput();\n    const handler = handlers[input.hook_event_name || ''];\n    if (handler) handler(input);\n  } catch (err) {\n    if (process.env.GITNEXUS_DEBUG) {\n      console.error('GitNexus hook error:', (err.message || '').slice(0, 200));\n    }\n  }\n}\n\nmain();\n"
  },
  {
    "path": "gitnexus-claude-plugin/hooks/hooks.json",
    "content": "{\n  \"hooks\": {\n    \"PreToolUse\": [\n      {\n        \"matcher\": \"Grep|Glob|Bash\",\n        \"hooks\": [\n          {\n            \"type\": \"command\",\n            \"command\": \"node ${CLAUDE_PLUGIN_ROOT}/hooks/gitnexus-hook.js\",\n            \"timeout\": 10,\n            \"statusMessage\": \"Enriching with GitNexus graph context...\"\n          }\n        ]\n      }\n    ],\n    \"PostToolUse\": [\n      {\n        \"matcher\": \"Bash\",\n        \"hooks\": [\n          {\n            \"type\": \"command\",\n            \"command\": \"node ${CLAUDE_PLUGIN_ROOT}/hooks/gitnexus-hook.js\",\n            \"timeout\": 10,\n            \"statusMessage\": \"Checking GitNexus index freshness...\"\n          }\n        ]\n      }\n    ]\n  }\n}\n"
  },
  {
    "path": "gitnexus-claude-plugin/skills/gitnexus-cli/SKILL.md",
    "content": "---\nname: gitnexus-cli\ndescription: \"Use when the user needs to run GitNexus CLI commands like analyze/index a repo, check status, clean the index, generate a wiki, or list indexed repos. Examples: \\\"Index this repo\\\", \\\"Reanalyze the codebase\\\", \\\"Generate a wiki\\\"\"\n---\n\n# GitNexus CLI Commands\n\nAll commands work via `npx` — no global install required.\n\n## Commands\n\n### analyze — Build or refresh the index\n\n```bash\nnpx gitnexus analyze\n```\n\nRun from the project root. This parses all source files, builds the knowledge graph, writes it to `.gitnexus/`, and generates CLAUDE.md / AGENTS.md context files.\n\n| Flag | Effect |\n|------|--------|\n| `--force` | Force full re-index even if up to date |\n| `--embeddings` | Enable embedding generation for semantic search (off by default) |\n\n**When to run:** First time in a project, after major code changes, or when `gitnexus://repo/{name}/context` reports the index is stale.\n\n### status — Check index freshness\n\n```bash\nnpx gitnexus status\n```\n\nShows whether the current repo has a GitNexus index, when it was last updated, and symbol/relationship counts. Use this to check if re-indexing is needed.\n\n### clean — Delete the index\n\n```bash\nnpx gitnexus clean\n```\n\nDeletes the `.gitnexus/` directory and unregisters the repo from the global registry. Use before re-indexing if the index is corrupt or after removing GitNexus from a project.\n\n| Flag | Effect |\n|------|--------|\n| `--force` | Skip confirmation prompt |\n| `--all` | Clean all indexed repos, not just the current one |\n\n### wiki — Generate documentation from the graph\n\n```bash\nnpx gitnexus wiki\n```\n\nGenerates repository documentation from the knowledge graph using an LLM. 
Requires an API key (saved to `~/.gitnexus/config.json` on first use).\n\n| Flag | Effect |\n|------|--------|\n| `--force` | Force full regeneration |\n| `--model <model>` | LLM model (default: minimax/minimax-m2.5) |\n| `--base-url <url>` | LLM API base URL |\n| `--api-key <key>` | LLM API key |\n| `--concurrency <n>` | Parallel LLM calls (default: 3) |\n| `--gist` | Publish wiki as a public GitHub Gist |\n\n### list — Show all indexed repos\n\n```bash\nnpx gitnexus list\n```\n\nLists all repositories registered in `~/.gitnexus/registry.json`. The MCP `list_repos` tool provides the same information.\n\n## After Indexing\n\n1. **Read `gitnexus://repo/{name}/context`** to verify the index loaded\n2. Use the other GitNexus skills (`exploring`, `debugging`, `impact-analysis`, `refactoring`) for your task\n\n## Troubleshooting\n\n- **\"Not inside a git repository\"**: Run from a directory inside a git repo\n- **Index is stale after re-analyzing**: Restart Claude Code to reload the MCP server\n- **Embeddings slow**: Omit `--embeddings` (it's off by default) or set `OPENAI_API_KEY` for faster API-based embedding\n"
  },
  {
    "path": "gitnexus-claude-plugin/skills/gitnexus-cli/mcp.json",
    "content": "{\n  \"mcpServers\": {\n    \"gitnexus\": {\n      \"command\": \"npx\",\n      \"args\": [\"-y\", \"gitnexus@latest\", \"mcp\"]\n    }\n  }\n}\n"
  },
  {
    "path": "gitnexus-claude-plugin/skills/gitnexus-debugging/SKILL.md",
    "content": "---\nname: gitnexus-debugging\ndescription: \"Use when the user is debugging a bug, tracing an error, or asking why something fails. Examples: \\\"Why is X failing?\\\", \\\"Where does this error come from?\\\", \\\"Trace this bug\\\"\"\n---\n\n# Debugging with GitNexus\n\n## When to Use\n\n- \"Why is this function failing?\"\n- \"Trace where this error comes from\"\n- \"Who calls this method?\"\n- \"This endpoint returns 500\"\n- Investigating bugs, errors, or unexpected behavior\n\n## Workflow\n\n```\n1. gitnexus_query({query: \"<error or symptom>\"})            → Find related execution flows\n2. gitnexus_context({name: \"<suspect>\"})                    → See callers/callees/processes\n3. READ gitnexus://repo/{name}/process/{name}                → Trace execution flow\n4. gitnexus_cypher({query: \"MATCH path...\"})                 → Custom traces if needed\n```\n\n> If \"Index is stale\" → run `npx gitnexus analyze` in terminal.\n\n## Checklist\n\n```\n- [ ] Understand the symptom (error message, unexpected behavior)\n- [ ] gitnexus_query for error text or related code\n- [ ] Identify the suspect function from returned processes\n- [ ] gitnexus_context to see callers and callees\n- [ ] Trace execution flow via process resource if applicable\n- [ ] gitnexus_cypher for custom call chain traces if needed\n- [ ] Read source files to confirm root cause\n```\n\n## Debugging Patterns\n\n| Symptom              | GitNexus Approach                                          |\n| -------------------- | ---------------------------------------------------------- |\n| Error message        | `gitnexus_query` for error text → `context` on throw sites |\n| Wrong return value   | `context` on the function → trace callees for data flow    |\n| Intermittent failure | `context` → look for external calls, async deps            |\n| Performance issue    | `context` → find symbols with many callers (hot paths)     |\n| Recent regression    | `detect_changes` to see what 
your changes affect           |\n\n## Tools\n\n**gitnexus_query** — find code related to error:\n\n```\ngitnexus_query({query: \"payment validation error\"})\n→ Processes: CheckoutFlow, ErrorHandling\n→ Symbols: validatePayment, handlePaymentError, PaymentException\n```\n\n**gitnexus_context** — full context for a suspect:\n\n```\ngitnexus_context({name: \"validatePayment\"})\n→ Incoming calls: processCheckout, webhookHandler\n→ Outgoing calls: verifyCard, fetchRates (external API!)\n→ Processes: CheckoutFlow (step 3/7)\n```\n\n**gitnexus_cypher** — custom call chain traces:\n\n```cypher\nMATCH path = (a)-[:CodeRelation*1..2 {type: 'CALLS'}]->(b:Function {name: \"validatePayment\"})\nRETURN [n IN nodes(path) | n.name] AS chain\n```\n\n## Example: \"Payment endpoint returns 500 intermittently\"\n\n```\n1. gitnexus_query({query: \"payment error handling\"})\n   → Processes: CheckoutFlow, ErrorHandling\n   → Symbols: validatePayment, handlePaymentError\n\n2. gitnexus_context({name: \"validatePayment\"})\n   → Outgoing calls: verifyCard, fetchRates (external API!)\n\n3. READ gitnexus://repo/my-app/process/CheckoutFlow\n   → Step 3: validatePayment → calls fetchRates (external)\n\n4. Root cause: fetchRates calls external API without proper timeout\n```\n"
  },
  {
    "path": "gitnexus-claude-plugin/skills/gitnexus-debugging/mcp.json",
    "content": "{\n  \"mcpServers\": {\n    \"gitnexus\": {\n      \"command\": \"npx\",\n      \"args\": [\"-y\", \"gitnexus@latest\", \"mcp\"]\n    }\n  }\n}\n"
  },
  {
    "path": "gitnexus-claude-plugin/skills/gitnexus-exploring/SKILL.md",
    "content": "---\nname: gitnexus-exploring\ndescription: \"Use when the user asks how code works, wants to understand architecture, trace execution flows, or explore unfamiliar parts of the codebase. Examples: \\\"How does X work?\\\", \\\"What calls this function?\\\", \\\"Show me the auth flow\\\"\"\n---\n\n# Exploring Codebases with GitNexus\n\n## When to Use\n\n- \"How does authentication work?\"\n- \"What's the project structure?\"\n- \"Show me the main components\"\n- \"Where is the database logic?\"\n- Understanding code you haven't seen before\n\n## Workflow\n\n```\n1. READ gitnexus://repos                          → Discover indexed repos\n2. READ gitnexus://repo/{name}/context             → Codebase overview, check staleness\n3. gitnexus_query({query: \"<what you want to understand>\"})  → Find related execution flows\n4. gitnexus_context({name: \"<symbol>\"})            → Deep dive on specific symbol\n5. READ gitnexus://repo/{name}/process/{name}      → Trace full execution flow\n```\n\n> If step 2 says \"Index is stale\" → run `npx gitnexus analyze` in terminal.\n\n## Checklist\n\n```\n- [ ] READ gitnexus://repo/{name}/context\n- [ ] gitnexus_query for the concept you want to understand\n- [ ] Review returned processes (execution flows)\n- [ ] gitnexus_context on key symbols for callers/callees\n- [ ] READ process resource for full execution traces\n- [ ] Read source files for implementation details\n```\n\n## Resources\n\n| Resource                                | What you get                                            |\n| --------------------------------------- | ------------------------------------------------------- |\n| `gitnexus://repo/{name}/context`        | Stats, staleness warning (~150 tokens)                  |\n| `gitnexus://repo/{name}/clusters`       | All functional areas with cohesion scores (~300 tokens) |\n| `gitnexus://repo/{name}/cluster/{name}` | Area members with file paths (~500 tokens)              |\n| 
`gitnexus://repo/{name}/process/{name}` | Step-by-step execution trace (~200 tokens)              |\n\n## Tools\n\n**gitnexus_query** — find execution flows related to a concept:\n\n```\ngitnexus_query({query: \"payment processing\"})\n→ Processes: CheckoutFlow, RefundFlow, WebhookHandler\n→ Symbols grouped by flow with file locations\n```\n\n**gitnexus_context** — 360-degree view of a symbol:\n\n```\ngitnexus_context({name: \"validateUser\"})\n→ Incoming calls: loginHandler, apiMiddleware\n→ Outgoing calls: checkToken, getUserById\n→ Processes: LoginFlow (step 2/5), TokenRefresh (step 1/3)\n```\n\n## Example: \"How does payment processing work?\"\n\n```\n1. READ gitnexus://repo/my-app/context       → 918 symbols, 45 processes\n2. gitnexus_query({query: \"payment processing\"})\n   → CheckoutFlow: processPayment → validateCard → chargeStripe\n   → RefundFlow: initiateRefund → calculateRefund → processRefund\n3. gitnexus_context({name: \"processPayment\"})\n   → Incoming: checkoutHandler, webhookHandler\n   → Outgoing: validateCard, chargeStripe, saveTransaction\n4. Read src/payments/processor.ts for implementation details\n```\n"
  },
  {
    "path": "gitnexus-claude-plugin/skills/gitnexus-exploring/mcp.json",
    "content": "{\n  \"mcpServers\": {\n    \"gitnexus\": {\n      \"command\": \"npx\",\n      \"args\": [\"-y\", \"gitnexus@latest\", \"mcp\"]\n    }\n  }\n}\n"
  },
  {
    "path": "gitnexus-claude-plugin/skills/gitnexus-guide/SKILL.md",
    "content": "---\nname: gitnexus-guide\ndescription: \"Use when the user asks about GitNexus itself — available tools, how to query the knowledge graph, MCP resources, graph schema, or workflow reference. Examples: \\\"What GitNexus tools are available?\\\", \\\"How do I use GitNexus?\\\"\"\n---\n\n# GitNexus Guide\n\nQuick reference for all GitNexus MCP tools, resources, and the knowledge graph schema.\n\n## Always Start Here\n\nFor any task involving code understanding, debugging, impact analysis, or refactoring:\n\n1. **Read `gitnexus://repo/{name}/context`** — codebase overview + check index freshness\n2. **Match your task to a skill below** and **read that skill file**\n3. **Follow the skill's workflow and checklist**\n\n> If step 1 warns the index is stale, run `npx gitnexus analyze` in the terminal first.\n\n## Skills\n\n| Task                                         | Skill to read       |\n| -------------------------------------------- | ------------------- |\n| Understand architecture / \"How does X work?\" | `gitnexus-exploring`         |\n| Blast radius / \"What breaks if I change X?\"  | `gitnexus-impact-analysis`   |\n| Trace bugs / \"Why is X failing?\"             | `gitnexus-debugging`         |\n| Rename / extract / split / refactor          | `gitnexus-refactoring`       |\n| Tools, resources, schema reference           | `gitnexus-guide` (this file) |\n| Index, status, clean, wiki CLI commands      | `gitnexus-cli`               |\n\n## Tools Reference\n\n| Tool             | What it gives you                                                        |\n| ---------------- | ------------------------------------------------------------------------ |\n| `query`          | Process-grouped code intelligence — execution flows related to a concept |\n| `context`        | 360-degree symbol view — categorized refs, processes it participates in  |\n| `impact`         | Symbol blast radius — what breaks at depth 1/2/3 with confidence         |\n| 
`detect_changes` | Git-diff impact — what your current changes affect                       |\n| `rename`         | Multi-file coordinated rename with confidence-tagged edits               |\n| `cypher`         | Raw graph queries (read `gitnexus://repo/{name}/schema` first)           |\n| `list_repos`     | Discover indexed repos                                                   |\n\n## Resources Reference\n\nLightweight reads (~100-500 tokens) for navigation:\n\n| Resource                                       | Content                                   |\n| ---------------------------------------------- | ----------------------------------------- |\n| `gitnexus://repo/{name}/context`               | Stats, staleness check                    |\n| `gitnexus://repo/{name}/clusters`              | All functional areas with cohesion scores |\n| `gitnexus://repo/{name}/cluster/{clusterName}` | Area members                              |\n| `gitnexus://repo/{name}/processes`             | All execution flows                       |\n| `gitnexus://repo/{name}/process/{processName}` | Step-by-step trace                        |\n| `gitnexus://repo/{name}/schema`                | Graph schema for Cypher                   |\n\n## Graph Schema\n\n**Nodes:** File, Function, Class, Interface, Method, Community, Process\n**Edges (via CodeRelation.type):** CALLS, IMPORTS, EXTENDS, IMPLEMENTS, DEFINES, MEMBER_OF, STEP_IN_PROCESS\n\n```cypher\nMATCH (caller)-[:CodeRelation {type: 'CALLS'}]->(f:Function {name: \"myFunc\"})\nRETURN caller.name, caller.filePath\n```\n"
  },
  {
    "path": "gitnexus-claude-plugin/skills/gitnexus-guide/mcp.json",
    "content": "{\n  \"mcpServers\": {\n    \"gitnexus\": {\n      \"command\": \"npx\",\n      \"args\": [\"-y\", \"gitnexus@latest\", \"mcp\"]\n    }\n  }\n}\n"
  },
  {
    "path": "gitnexus-claude-plugin/skills/gitnexus-impact-analysis/SKILL.md",
    "content": "---\nname: gitnexus-impact-analysis\ndescription: \"Use when the user wants to know what will break if they change something, or needs safety analysis before editing code. Examples: \\\"Is it safe to change X?\\\", \\\"What depends on this?\\\", \\\"What will break?\\\"\"\n---\n\n# Impact Analysis with GitNexus\n\n## When to Use\n\n- \"Is it safe to change this function?\"\n- \"What will break if I modify X?\"\n- \"Show me the blast radius\"\n- \"Who uses this code?\"\n- Before making non-trivial code changes\n- Before committing — to understand what your changes affect\n\n## Workflow\n\n```\n1. gitnexus_impact({target: \"X\", direction: \"upstream\"})  → What depends on this\n2. READ gitnexus://repo/{name}/processes                   → Check affected execution flows\n3. gitnexus_detect_changes()                               → Map current git changes to affected flows\n4. Assess risk and report to user\n```\n\n> If \"Index is stale\" → run `npx gitnexus analyze` in terminal.\n\n## Checklist\n\n```\n- [ ] gitnexus_impact({target, direction: \"upstream\"}) to find dependents\n- [ ] Review d=1 items first (these WILL BREAK)\n- [ ] Check high-confidence (>0.8) dependencies\n- [ ] READ processes to check affected execution flows\n- [ ] gitnexus_detect_changes() for pre-commit check\n- [ ] Assess risk level and report to user\n```\n\n## Understanding Output\n\n| Depth | Risk Level       | Meaning                  |\n| ----- | ---------------- | ------------------------ |\n| d=1   | **WILL BREAK**   | Direct callers/importers |\n| d=2   | LIKELY AFFECTED  | Indirect dependencies    |\n| d=3   | MAY NEED TESTING | Transitive effects       |\n\n## Risk Assessment\n\n| Affected                       | Risk     |\n| ------------------------------ | -------- |\n| <5 symbols, few processes      | LOW      |\n| 5-15 symbols, 2-5 processes    | MEDIUM   |\n| >15 symbols or many processes  | HIGH     |\n| Critical path (auth, payments) | CRITICAL |\n\n## 
Tools\n\n**gitnexus_impact** — the primary tool for symbol blast radius:\n\n```\ngitnexus_impact({\n  target: \"validateUser\",\n  direction: \"upstream\",\n  minConfidence: 0.8,\n  maxDepth: 3\n})\n\n→ d=1 (WILL BREAK):\n  - loginHandler (src/auth/login.ts:42) [CALLS, 100%]\n  - apiMiddleware (src/api/middleware.ts:15) [CALLS, 100%]\n\n→ d=2 (LIKELY AFFECTED):\n  - authRouter (src/routes/auth.ts:22) [CALLS, 95%]\n```\n\n**gitnexus_detect_changes** — git-diff based impact analysis:\n\n```\ngitnexus_detect_changes({scope: \"staged\"})\n\n→ Changed: 5 symbols in 3 files\n→ Affected: LoginFlow, TokenRefresh, APIMiddlewarePipeline\n→ Risk: MEDIUM\n```\n\n## Example: \"What breaks if I change validateUser?\"\n\n```\n1. gitnexus_impact({target: \"validateUser\", direction: \"upstream\"})\n   → d=1: loginHandler, apiMiddleware (WILL BREAK)\n   → d=2: authRouter, sessionManager (LIKELY AFFECTED)\n\n2. READ gitnexus://repo/my-app/processes\n   → LoginFlow and TokenRefresh touch validateUser\n\n3. Risk: 2 direct callers, 2 processes = MEDIUM\n```\n"
  },
  {
    "path": "gitnexus-claude-plugin/skills/gitnexus-impact-analysis/mcp.json",
    "content": "{\n  \"mcpServers\": {\n    \"gitnexus\": {\n      \"command\": \"npx\",\n      \"args\": [\"-y\", \"gitnexus@latest\", \"mcp\"]\n    }\n  }\n}\n"
  },
  {
    "path": "gitnexus-claude-plugin/skills/gitnexus-pr-review/SKILL.md",
    "content": "---\nname: gitnexus-pr-review\ndescription: \"Use when the user wants to review a pull request, understand what a PR changes, assess risk of merging, or check for missing test coverage. Examples: \\\"Review this PR\\\", \\\"What does PR #42 change?\\\", \\\"Is this PR safe to merge?\\\"\"\n---\n\n# PR Review with GitNexus\n\n## When to Use\n\n- \"Review this PR\"\n- \"What does PR #42 change?\"\n- \"Is this safe to merge?\"\n- \"What's the blast radius of this PR?\"\n- \"Are there missing tests for this PR?\"\n- Reviewing someone else's code changes before merge\n\n## Workflow\n\n```\n1. gh pr diff <number>                                    → Get the raw diff\n2. gitnexus_detect_changes({scope: \"compare\", base_ref: \"main\"})  → Map diff to affected flows\n3. For each changed symbol:\n   gitnexus_impact({target: \"<symbol>\", direction: \"upstream\"})    → Blast radius per change\n4. gitnexus_context({name: \"<key symbol>\"})               → Understand callers/callees\n5. READ gitnexus://repo/{name}/processes                   → Check affected execution flows\n6. Summarize findings with risk assessment\n```\n\n> If \"Index is stale\" → run `npx gitnexus analyze` in terminal before reviewing.\n\n## Checklist\n\n```\n- [ ] Fetch PR diff (gh pr diff or git diff base...head)\n- [ ] gitnexus_detect_changes to map changes to affected execution flows\n- [ ] gitnexus_impact on each non-trivial changed symbol\n- [ ] Review d=1 items (WILL BREAK) — are callers updated?\n- [ ] gitnexus_context on key changed symbols to understand full picture\n- [ ] Check if affected processes have test coverage\n- [ ] Assess overall risk level\n- [ ] Write review summary with findings\n```\n\n## Review Dimensions\n\n| Dimension | How GitNexus Helps |\n| --- | --- |\n| **Correctness** | `context` shows callers — are they all compatible with the change? |\n| **Blast radius** | `impact` shows d=1/d=2/d=3 dependents — anything missed? 
|\n| **Completeness** | `detect_changes` shows all affected flows — are they all handled? |\n| **Test coverage** | `impact({includeTests: true})` shows which tests touch changed code |\n| **Breaking changes** | d=1 upstream items that aren't updated in the PR = potential breakage |\n\n## Risk Assessment\n\n| Signal | Risk |\n| --- | --- |\n| Changes touch <3 symbols, 0-1 processes | LOW |\n| Changes touch 3-10 symbols, 2-5 processes | MEDIUM |\n| Changes touch >10 symbols or many processes | HIGH |\n| Changes touch auth, payments, or data integrity code | CRITICAL |\n| d=1 callers exist outside the PR diff | Potential breakage — flag it |\n\n## Tools\n\n**gitnexus_detect_changes** — map PR diff to affected execution flows:\n\n```\ngitnexus_detect_changes({scope: \"compare\", base_ref: \"main\"})\n\n→ Changed: 8 symbols in 4 files\n→ Affected processes: CheckoutFlow, RefundFlow, WebhookHandler\n→ Risk: MEDIUM\n```\n\n**gitnexus_impact** — blast radius per changed symbol:\n\n```\ngitnexus_impact({target: \"validatePayment\", direction: \"upstream\"})\n\n→ d=1 (WILL BREAK):\n  - processCheckout (src/checkout.ts:42) [CALLS, 100%]\n  - webhookHandler (src/webhooks.ts:15) [CALLS, 100%]\n\n→ d=2 (LIKELY AFFECTED):\n  - checkoutRouter (src/routes/checkout.ts:22) [CALLS, 95%]\n```\n\n**gitnexus_impact with tests** — check test coverage:\n\n```\ngitnexus_impact({target: \"validatePayment\", direction: \"upstream\", includeTests: true})\n\n→ Tests that cover this symbol:\n  - validatePayment.test.ts [direct]\n  - checkout.integration.test.ts [via processCheckout]\n```\n\n**gitnexus_context** — understand a changed symbol's role:\n\n```\ngitnexus_context({name: \"validatePayment\"})\n\n→ Incoming calls: processCheckout, webhookHandler\n→ Outgoing calls: verifyCard, fetchRates\n→ Processes: CheckoutFlow (step 3/7), RefundFlow (step 1/5)\n```\n\n## Example: \"Review PR #42\"\n\n```\n1. 
gh pr diff 42 > /tmp/pr42.diff\n   → 4 files changed: payments.ts, checkout.ts, types.ts, utils.ts\n\n2. gitnexus_detect_changes({scope: \"compare\", base_ref: \"main\"})\n   → Changed symbols: validatePayment, PaymentInput, formatAmount\n   → Affected processes: CheckoutFlow, RefundFlow\n   → Risk: MEDIUM\n\n3. gitnexus_impact({target: \"validatePayment\", direction: \"upstream\"})\n   → d=1: processCheckout, webhookHandler (WILL BREAK)\n   → webhookHandler is NOT in the PR diff — potential breakage!\n\n4. gitnexus_impact({target: \"PaymentInput\", direction: \"upstream\"})\n   → d=1: validatePayment (in PR), createPayment (NOT in PR)\n   → createPayment uses the old PaymentInput shape — breaking change!\n\n5. gitnexus_context({name: \"formatAmount\"})\n   → Called by 12 functions — but change is backwards-compatible (added optional param)\n\n6. Review summary:\n   - MEDIUM risk — 3 changed symbols affect 2 execution flows\n   - BUG: webhookHandler calls validatePayment but isn't updated for new signature\n   - BUG: createPayment depends on PaymentInput type which changed\n   - OK: formatAmount change is backwards-compatible\n   - Tests: checkout.test.ts covers processCheckout path, but no webhook test\n```\n\n## Review Output Format\n\nStructure your review as:\n\n```markdown\n## PR Review: <title>\n\n**Risk: LOW / MEDIUM / HIGH / CRITICAL**\n\n### Changes Summary\n- <N> symbols changed across <M> files\n- <P> execution flows affected\n\n### Findings\n1. **[severity]** Description of finding\n   - Evidence from GitNexus tools\n   - Affected callers/flows\n\n### Missing Coverage\n- Callers not updated in PR: ...\n- Untested flows: ...\n\n### Recommendation\nAPPROVE / REQUEST CHANGES / NEEDS DISCUSSION\n```\n"
  },
  {
    "path": "gitnexus-claude-plugin/skills/gitnexus-refactoring/SKILL.md",
    "content": "---\nname: gitnexus-refactoring\ndescription: \"Use when the user wants to rename, extract, split, move, or restructure code safely. Examples: \\\"Rename this function\\\", \\\"Extract this into a module\\\", \\\"Refactor this class\\\", \\\"Move this to a separate file\\\"\"\n---\n\n# Refactoring with GitNexus\n\n## When to Use\n\n- \"Rename this function safely\"\n- \"Extract this into a module\"\n- \"Split this service\"\n- \"Move this to a new file\"\n- Any task involving renaming, extracting, splitting, or restructuring code\n\n## Workflow\n\n```\n1. gitnexus_impact({target: \"X\", direction: \"upstream\"})  → Map all dependents\n2. gitnexus_query({query: \"X\"})                            → Find execution flows involving X\n3. gitnexus_context({name: \"X\"})                           → See all incoming/outgoing refs\n4. Plan update order: interfaces → implementations → callers → tests\n```\n\n> If \"Index is stale\" → run `npx gitnexus analyze` in terminal.\n\n## Checklists\n\n### Rename Symbol\n\n```\n- [ ] gitnexus_rename({symbol_name: \"oldName\", new_name: \"newName\", dry_run: true}) — preview all edits\n- [ ] Review graph edits (high confidence) and ast_search edits (review carefully)\n- [ ] If satisfied: gitnexus_rename({..., dry_run: false}) — apply edits\n- [ ] gitnexus_detect_changes() — verify only expected files changed\n- [ ] Run tests for affected processes\n```\n\n### Extract Module\n\n```\n- [ ] gitnexus_context({name: target}) — see all incoming/outgoing refs\n- [ ] gitnexus_impact({target, direction: \"upstream\"}) — find all external callers\n- [ ] Define new module interface\n- [ ] Extract code, update imports\n- [ ] gitnexus_detect_changes() — verify affected scope\n- [ ] Run tests for affected processes\n```\n\n### Split Function/Service\n\n```\n- [ ] gitnexus_context({name: target}) — understand all callees\n- [ ] Group callees by responsibility\n- [ ] gitnexus_impact({target, direction: \"upstream\"}) — map callers to 
update\n- [ ] Create new functions/services\n- [ ] Update callers\n- [ ] gitnexus_detect_changes() — verify affected scope\n- [ ] Run tests for affected processes\n```\n\n## Tools\n\n**gitnexus_rename** — automated multi-file rename:\n\n```\ngitnexus_rename({symbol_name: \"validateUser\", new_name: \"authenticateUser\", dry_run: true})\n→ 12 edits across 8 files\n→ 10 graph edits (high confidence), 2 ast_search edits (review)\n→ Changes: [{file_path, edits: [{line, old_text, new_text, confidence}]}]\n```\n\n**gitnexus_impact** — map all dependents first:\n\n```\ngitnexus_impact({target: \"validateUser\", direction: \"upstream\"})\n→ d=1: loginHandler, apiMiddleware, testUtils\n→ Affected Processes: LoginFlow, TokenRefresh\n```\n\n**gitnexus_detect_changes** — verify your changes after refactoring:\n\n```\ngitnexus_detect_changes({scope: \"all\"})\n→ Changed: 8 files, 12 symbols\n→ Affected processes: LoginFlow, TokenRefresh\n→ Risk: MEDIUM\n```\n\n**gitnexus_cypher** — custom reference queries:\n\n```cypher\nMATCH (caller)-[:CodeRelation {type: 'CALLS'}]->(f:Function {name: \"validateUser\"})\nRETURN caller.name, caller.filePath ORDER BY caller.filePath\n```\n\n## Risk Rules\n\n| Risk Factor         | Mitigation                                |\n| ------------------- | ----------------------------------------- |\n| Many callers (>5)   | Use gitnexus_rename for automated updates |\n| Cross-area refs     | Use detect_changes after to verify scope  |\n| String/dynamic refs | gitnexus_query to find them               |\n| External/public API | Version and deprecate properly            |\n\n## Example: Rename `validateUser` to `authenticateUser`\n\n```\n1. gitnexus_rename({symbol_name: \"validateUser\", new_name: \"authenticateUser\", dry_run: true})\n   → 12 edits: 10 graph (safe), 2 ast_search (review)\n   → Files: validator.ts, login.ts, middleware.ts, config.json...\n\n2. Review ast_search edits (config.json: dynamic reference!)\n\n3. 
gitnexus_rename({symbol_name: \"validateUser\", new_name: \"authenticateUser\", dry_run: false})\n   → Applied 12 edits across 8 files\n\n4. gitnexus_detect_changes({scope: \"all\"})\n   → Affected: LoginFlow, TokenRefresh\n   → Risk: MEDIUM — run tests for these flows\n```\n"
  },
  {
    "path": "gitnexus-claude-plugin/skills/gitnexus-refactoring/mcp.json",
    "content": "{\n  \"mcpServers\": {\n    \"gitnexus\": {\n      \"command\": \"npx\",\n      \"args\": [\"-y\", \"gitnexus@latest\", \"mcp\"]\n    }\n  }\n}\n"
  },
  {
    "path": "gitnexus-cursor-integration/hooks/augment-shell.sh",
    "content": "#!/bin/bash\n# GitNexus beforeShellExecution hook for Cursor\n# Receives JSON on stdin with { command, cwd, timeout }\n# Returns JSON on stdout with { permission, agent_message }\n#\n# Extracts search pattern from grep/rg commands, runs gitnexus augment,\n# and injects the enriched context via agent_message.\n\nINPUT=$(cat)\n\nCOMMAND=$(echo \"$INPUT\" | jq -r '.command // empty' 2>/dev/null)\n\nif [ -z \"$COMMAND\" ]; then\n  echo '{\"permission\":\"allow\"}'\n  exit 0\nfi\n\n# Skip non-search commands\ncase \"$COMMAND\" in\n  cd\\ *|npm\\ *|yarn\\ *|pnpm\\ *|git\\ commit*|git\\ push*|git\\ pull*|mkdir\\ *|rm\\ *|cp\\ *|mv\\ *|echo\\ *|cat\\ *)\n    echo '{\"permission\":\"allow\"}'\n    exit 0\n    ;;\nesac\n\n# Extract search pattern from rg/grep commands\nPATTERN=\"\"\nif echo \"$COMMAND\" | grep -qE '\\brg\\b'; then\n  PATTERN=$(echo \"$COMMAND\" | sed -n \"s/.*\\brg\\s\\+\\(--[^ ]*\\s\\+\\)*['\\\"]\\\\?\\([^'\\\";\\| >]*\\\\).*/\\2/p\")\nelif echo \"$COMMAND\" | grep -qE '\\bgrep\\b'; then\n  PATTERN=$(echo \"$COMMAND\" | sed -n \"s/.*\\bgrep\\s\\+\\(-[^ ]*\\s\\+\\)*['\\\"]\\\\?\\([^'\\\";\\| >]*\\\\).*/\\2/p\")\nfi\n\nif [ -z \"$PATTERN\" ] || [ ${#PATTERN} -lt 3 ]; then\n  echo '{\"permission\":\"allow\"}'\n  exit 0\nfi\n\n# Run gitnexus augment\nRESULT=$(npx -y gitnexus augment \"$PATTERN\" 2>/dev/null)\n\nif [ -n \"$RESULT\" ]; then\n  # Escape for JSON\n  ESCAPED=$(echo \"$RESULT\" | jq -Rs .)\n  echo \"{\\\"permission\\\":\\\"allow\\\",\\\"agent_message\\\":$ESCAPED}\"\nelse\n  echo '{\"permission\":\"allow\"}'\nfi\n\nexit 0\n"
  },
  {
    "path": "gitnexus-cursor-integration/hooks/hooks.json",
    "content": "{\n  \"version\": 1,\n  \"hooks\": {\n    \"beforeShellExecution\": [\n      {\n        \"command\": \"./hooks/augment-shell.sh\",\n        \"timeout\": 5,\n        \"matcher\": \"\\\\brg\\\\b|\\\\bgrep\\\\b\"\n      }\n    ]\n  }\n}\n"
  },
  {
    "path": "gitnexus-cursor-integration/skills/gitnexus-debugging/SKILL.md",
    "content": "---\nname: gitnexus-debugging\ndescription: Trace bugs through call chains using knowledge graph\n---\n\n# Debugging with GitNexus\n\n## When to Use\n- \"Why is this function failing?\"\n- \"Trace where this error comes from\"\n- \"Who calls this method?\"\n- \"This endpoint returns 500\"\n- Investigating bugs, errors, or unexpected behavior\n\n## Workflow\n\n```\n1. gitnexus_query({query: \"<error or symptom>\"})            → Find related execution flows\n2. gitnexus_context({name: \"<suspect>\"})                    → See callers/callees/processes\n3. READ gitnexus://repo/{name}/process/{name}                → Trace execution flow\n4. gitnexus_cypher({query: \"MATCH path...\"})                 → Custom traces if needed\n```\n\n> If \"Index is stale\" → run `npx gitnexus analyze` in terminal.\n\n## Checklist\n\n```\n- [ ] Understand the symptom (error message, unexpected behavior)\n- [ ] gitnexus_query for error text or related code\n- [ ] Identify the suspect function from returned processes\n- [ ] gitnexus_context to see callers and callees\n- [ ] Trace execution flow via process resource if applicable\n- [ ] gitnexus_cypher for custom call chain traces if needed\n- [ ] Read source files to confirm root cause\n```\n\n## Debugging Patterns\n\n| Symptom | GitNexus Approach |\n|---------|-------------------|\n| Error message | `gitnexus_query` for error text → `context` on throw sites |\n| Wrong return value | `context` on the function → trace callees for data flow |\n| Intermittent failure | `context` → look for external calls, async deps |\n| Performance issue | `context` → find symbols with many callers (hot paths) |\n| Recent regression | `detect_changes` to see what your changes affect |\n\n## Tools\n\n**gitnexus_query** — find code related to error:\n```\ngitnexus_query({query: \"payment validation error\"})\n→ Processes: CheckoutFlow, ErrorHandling\n→ Symbols: validatePayment, handlePaymentError, PaymentException\n```\n\n**gitnexus_context** — 
full context for a suspect:\n```\ngitnexus_context({name: \"validatePayment\"})\n→ Incoming calls: processCheckout, webhookHandler\n→ Outgoing calls: verifyCard, fetchRates (external API!)\n→ Processes: CheckoutFlow (step 3/7)\n```\n\n**gitnexus_cypher** — custom call chain traces:\n```cypher\nMATCH path = (a)-[:CodeRelation {type: 'CALLS'}*1..2]->(b:Function {name: \"validatePayment\"})\nRETURN [n IN nodes(path) | n.name] AS chain\n```\n\n## Example: \"Payment endpoint returns 500 intermittently\"\n\n```\n1. gitnexus_query({query: \"payment error handling\"})\n   → Processes: CheckoutFlow, ErrorHandling\n   → Symbols: validatePayment, handlePaymentError\n\n2. gitnexus_context({name: \"validatePayment\"})\n   → Outgoing calls: verifyCard, fetchRates (external API!)\n\n3. READ gitnexus://repo/my-app/process/CheckoutFlow\n   → Step 3: validatePayment → calls fetchRates (external)\n\n4. Root cause: fetchRates calls external API without proper timeout\n```\n"
  },
  {
    "path": "gitnexus-cursor-integration/skills/gitnexus-exploring/SKILL.md",
    "content": "---\nname: gitnexus-exploring\ndescription: Navigate unfamiliar code using GitNexus knowledge graph\n---\n\n# Exploring Codebases with GitNexus\n\n## When to Use\n- \"How does authentication work?\"\n- \"What's the project structure?\"\n- \"Show me the main components\"\n- \"Where is the database logic?\"\n- Understanding code you haven't seen before\n\n## Workflow\n\n```\n1. READ gitnexus://repos                          → Discover indexed repos\n2. READ gitnexus://repo/{name}/context             → Codebase overview, check staleness\n3. gitnexus_query({query: \"<what you want to understand>\"})  → Find related execution flows\n4. gitnexus_context({name: \"<symbol>\"})            → Deep dive on specific symbol\n5. READ gitnexus://repo/{name}/process/{name}      → Trace full execution flow\n```\n\n> If step 2 says \"Index is stale\" → run `npx gitnexus analyze` in terminal.\n\n## Checklist\n\n```\n- [ ] READ gitnexus://repo/{name}/context\n- [ ] gitnexus_query for the concept you want to understand\n- [ ] Review returned processes (execution flows)\n- [ ] gitnexus_context on key symbols for callers/callees\n- [ ] READ process resource for full execution traces\n- [ ] Read source files for implementation details\n```\n\n## Resources\n\n| Resource | What you get |\n|----------|-------------|\n| `gitnexus://repo/{name}/context` | Stats, staleness warning (~150 tokens) |\n| `gitnexus://repo/{name}/clusters` | All functional areas with cohesion scores (~300 tokens) |\n| `gitnexus://repo/{name}/cluster/{name}` | Area members with file paths (~500 tokens) |\n| `gitnexus://repo/{name}/process/{name}` | Step-by-step execution trace (~200 tokens) |\n\n## Tools\n\n**gitnexus_query** — find execution flows related to a concept:\n```\ngitnexus_query({query: \"payment processing\"})\n→ Processes: CheckoutFlow, RefundFlow, WebhookHandler\n→ Symbols grouped by flow with file locations\n```\n\n**gitnexus_context** — 360-degree view of a 
symbol:\n```\ngitnexus_context({name: \"validateUser\"})\n→ Incoming calls: loginHandler, apiMiddleware\n→ Outgoing calls: checkToken, getUserById\n→ Processes: LoginFlow (step 2/5), TokenRefresh (step 1/3)\n```\n\n## Example: \"How does payment processing work?\"\n\n```\n1. READ gitnexus://repo/my-app/context       → 918 symbols, 45 processes\n2. gitnexus_query({query: \"payment processing\"})\n   → CheckoutFlow: processPayment → validateCard → chargeStripe\n   → RefundFlow: initiateRefund → calculateRefund → processRefund\n3. gitnexus_context({name: \"processPayment\"})\n   → Incoming: checkoutHandler, webhookHandler\n   → Outgoing: validateCard, chargeStripe, saveTransaction\n4. Read src/payments/processor.ts for implementation details\n```\n"
  },
  {
    "path": "gitnexus-cursor-integration/skills/gitnexus-impact-analysis/SKILL.md",
    "content": "---\nname: gitnexus-impact-analysis\ndescription: Analyze blast radius before making code changes\n---\n\n# Impact Analysis with GitNexus\n\n## When to Use\n- \"Is it safe to change this function?\"\n- \"What will break if I modify X?\"\n- \"Show me the blast radius\"\n- \"Who uses this code?\"\n- Before making non-trivial code changes\n- Before committing — to understand what your changes affect\n\n## Workflow\n\n```\n1. gitnexus_impact({target: \"X\", direction: \"upstream\"})  → What depends on this\n2. READ gitnexus://repo/{name}/processes                   → Check affected execution flows\n3. gitnexus_detect_changes()                               → Map current git changes to affected flows\n4. Assess risk and report to user\n```\n\n> If \"Index is stale\" → run `npx gitnexus analyze` in terminal.\n\n## Checklist\n\n```\n- [ ] gitnexus_impact({target, direction: \"upstream\"}) to find dependents\n- [ ] Review d=1 items first (these WILL BREAK)\n- [ ] Check high-confidence (>0.8) dependencies\n- [ ] READ processes to check affected execution flows\n- [ ] gitnexus_detect_changes() for pre-commit check\n- [ ] Assess risk level and report to user\n```\n\n## Understanding Output\n\n| Depth | Risk Level | Meaning |\n|-------|-----------|---------|\n| d=1 | **WILL BREAK** | Direct callers/importers |\n| d=2 | LIKELY AFFECTED | Indirect dependencies |\n| d=3 | MAY NEED TESTING | Transitive effects |\n\n## Risk Assessment\n\n| Affected | Risk |\n|----------|------|\n| <5 symbols, few processes | LOW |\n| 5-15 symbols, 2-5 processes | MEDIUM |\n| >15 symbols or many processes | HIGH |\n| Critical path (auth, payments) | CRITICAL |\n\n## Tools\n\n**gitnexus_impact** — the primary tool for symbol blast radius:\n```\ngitnexus_impact({\n  target: \"validateUser\",\n  direction: \"upstream\",\n  minConfidence: 0.8,\n  maxDepth: 3\n})\n\n→ d=1 (WILL BREAK):\n  - loginHandler (src/auth/login.ts:42) [CALLS, 100%]\n  - apiMiddleware (src/api/middleware.ts:15) 
[CALLS, 100%]\n\n→ d=2 (LIKELY AFFECTED):\n  - authRouter (src/routes/auth.ts:22) [CALLS, 95%]\n```\n\n**gitnexus_detect_changes** — git-diff based impact analysis:\n```\ngitnexus_detect_changes({scope: \"staged\"})\n\n→ Changed: 5 symbols in 3 files\n→ Affected: LoginFlow, TokenRefresh, APIMiddlewarePipeline\n→ Risk: MEDIUM\n```\n\n## Example: \"What breaks if I change validateUser?\"\n\n```\n1. gitnexus_impact({target: \"validateUser\", direction: \"upstream\"})\n   → d=1: loginHandler, apiMiddleware (WILL BREAK)\n   → d=2: authRouter, sessionManager (LIKELY AFFECTED)\n\n2. READ gitnexus://repo/my-app/processes\n   → LoginFlow and TokenRefresh touch validateUser\n\n3. Risk: 2 direct callers, 2 processes = MEDIUM\n```\n"
  },
  {
    "path": "gitnexus-cursor-integration/skills/gitnexus-pr-review/SKILL.md",
    "content": "---\nname: gitnexus-pr-review\ndescription: \"Use when the user wants to review a pull request, understand what a PR changes, assess risk of merging, or check for missing test coverage. Examples: \\\"Review this PR\\\", \\\"What does PR #42 change?\\\", \\\"Is this PR safe to merge?\\\"\"\n---\n\n# PR Review with GitNexus\n\n## When to Use\n\n- \"Review this PR\"\n- \"What does PR #42 change?\"\n- \"Is this safe to merge?\"\n- \"What's the blast radius of this PR?\"\n- \"Are there missing tests for this PR?\"\n- Reviewing someone else's code changes before merge\n\n## Workflow\n\n```\n1. gh pr diff <number>                                    → Get the raw diff\n2. gitnexus_detect_changes({scope: \"compare\", base_ref: \"main\"})  → Map diff to affected flows\n3. For each changed symbol:\n   gitnexus_impact({target: \"<symbol>\", direction: \"upstream\"})    → Blast radius per change\n4. gitnexus_context({name: \"<key symbol>\"})               → Understand callers/callees\n5. READ gitnexus://repo/{name}/processes                   → Check affected execution flows\n6. Summarize findings with risk assessment\n```\n\n> If \"Index is stale\" → run `npx gitnexus analyze` in terminal before reviewing.\n\n## Checklist\n\n```\n- [ ] Fetch PR diff (gh pr diff or git diff base...head)\n- [ ] gitnexus_detect_changes to map changes to affected execution flows\n- [ ] gitnexus_impact on each non-trivial changed symbol\n- [ ] Review d=1 items (WILL BREAK) — are callers updated?\n- [ ] gitnexus_context on key changed symbols to understand full picture\n- [ ] Check if affected processes have test coverage\n- [ ] Assess overall risk level\n- [ ] Write review summary with findings\n```\n\n## Review Dimensions\n\n| Dimension | How GitNexus Helps |\n| --- | --- |\n| **Correctness** | `context` shows callers — are they all compatible with the change? |\n| **Blast radius** | `impact` shows d=1/d=2/d=3 dependents — anything missed? 
|\n| **Completeness** | `detect_changes` shows all affected flows — are they all handled? |\n| **Test coverage** | `impact({includeTests: true})` shows which tests touch changed code |\n| **Breaking changes** | d=1 upstream items that aren't updated in the PR = potential breakage |\n\n## Risk Assessment\n\n| Signal | Risk |\n| --- | --- |\n| Changes touch <3 symbols, 0-1 processes | LOW |\n| Changes touch 3-10 symbols, 2-5 processes | MEDIUM |\n| Changes touch >10 symbols or many processes | HIGH |\n| Changes touch auth, payments, or data integrity code | CRITICAL |\n| d=1 callers exist outside the PR diff | Potential breakage — flag it |\n\n## Tools\n\n**gitnexus_detect_changes** — map PR diff to affected execution flows:\n\n```\ngitnexus_detect_changes({scope: \"compare\", base_ref: \"main\"})\n\n→ Changed: 8 symbols in 4 files\n→ Affected processes: CheckoutFlow, RefundFlow, WebhookHandler\n→ Risk: MEDIUM\n```\n\n**gitnexus_impact** — blast radius per changed symbol:\n\n```\ngitnexus_impact({target: \"validatePayment\", direction: \"upstream\"})\n\n→ d=1 (WILL BREAK):\n  - processCheckout (src/checkout.ts:42) [CALLS, 100%]\n  - webhookHandler (src/webhooks.ts:15) [CALLS, 100%]\n\n→ d=2 (LIKELY AFFECTED):\n  - checkoutRouter (src/routes/checkout.ts:22) [CALLS, 95%]\n```\n\n**gitnexus_impact with tests** — check test coverage:\n\n```\ngitnexus_impact({target: \"validatePayment\", direction: \"upstream\", includeTests: true})\n\n→ Tests that cover this symbol:\n  - validatePayment.test.ts [direct]\n  - checkout.integration.test.ts [via processCheckout]\n```\n\n**gitnexus_context** — understand a changed symbol's role:\n\n```\ngitnexus_context({name: \"validatePayment\"})\n\n→ Incoming calls: processCheckout, webhookHandler\n→ Outgoing calls: verifyCard, fetchRates\n→ Processes: CheckoutFlow (step 3/7), RefundFlow (step 1/5)\n```\n\n## Example: \"Review PR #42\"\n\n```\n1. 
gh pr diff 42 > /tmp/pr42.diff\n   → 4 files changed: payments.ts, checkout.ts, types.ts, utils.ts\n\n2. gitnexus_detect_changes({scope: \"compare\", base_ref: \"main\"})\n   → Changed symbols: validatePayment, PaymentInput, formatAmount\n   → Affected processes: CheckoutFlow, RefundFlow\n   → Risk: MEDIUM\n\n3. gitnexus_impact({target: \"validatePayment\", direction: \"upstream\"})\n   → d=1: processCheckout, webhookHandler (WILL BREAK)\n   → webhookHandler is NOT in the PR diff — potential breakage!\n\n4. gitnexus_impact({target: \"PaymentInput\", direction: \"upstream\"})\n   → d=1: validatePayment (in PR), createPayment (NOT in PR)\n   → createPayment uses the old PaymentInput shape — breaking change!\n\n5. gitnexus_context({name: \"formatAmount\"})\n   → Called by 12 functions — but change is backwards-compatible (added optional param)\n\n6. Review summary:\n   - MEDIUM risk — 3 changed symbols affect 2 execution flows\n   - BUG: webhookHandler calls validatePayment but isn't updated for new signature\n   - BUG: createPayment depends on PaymentInput type which changed\n   - OK: formatAmount change is backwards-compatible\n   - Tests: checkout.test.ts covers processCheckout path, but no webhook test\n```\n\n## Review Output Format\n\nStructure your review as:\n\n```markdown\n## PR Review: <title>\n\n**Risk: LOW / MEDIUM / HIGH / CRITICAL**\n\n### Changes Summary\n- <N> symbols changed across <M> files\n- <P> execution flows affected\n\n### Findings\n1. **[severity]** Description of finding\n   - Evidence from GitNexus tools\n   - Affected callers/flows\n\n### Missing Coverage\n- Callers not updated in PR: ...\n- Untested flows: ...\n\n### Recommendation\nAPPROVE / REQUEST CHANGES / NEEDS DISCUSSION\n```\n"
  },
  {
    "path": "gitnexus-cursor-integration/skills/gitnexus-refactoring/SKILL.md",
    "content": "---\nname: gitnexus-refactoring\ndescription: Plan safe refactors using blast radius and dependency mapping\n---\n\n# Refactoring with GitNexus\n\n## When to Use\n- \"Rename this function safely\"\n- \"Extract this into a module\"\n- \"Split this service\"\n- \"Move this to a new file\"\n- Any task involving renaming, extracting, splitting, or restructuring code\n\n## Workflow\n\n```\n1. gitnexus_impact({target: \"X\", direction: \"upstream\"})  → Map all dependents\n2. gitnexus_query({query: \"X\"})                            → Find execution flows involving X\n3. gitnexus_context({name: \"X\"})                           → See all incoming/outgoing refs\n4. Plan update order: interfaces → implementations → callers → tests\n```\n\n> If \"Index is stale\" → run `npx gitnexus analyze` in terminal.\n\n## Checklists\n\n### Rename Symbol\n```\n- [ ] gitnexus_rename({symbol_name: \"oldName\", new_name: \"newName\", dry_run: true}) — preview all edits\n- [ ] Review graph edits (high confidence) and ast_search edits (review carefully)\n- [ ] If satisfied: gitnexus_rename({..., dry_run: false}) — apply edits\n- [ ] gitnexus_detect_changes() — verify only expected files changed\n- [ ] Run tests for affected processes\n```\n\n### Extract Module\n```\n- [ ] gitnexus_context({name: target}) — see all incoming/outgoing refs\n- [ ] gitnexus_impact({target, direction: \"upstream\"}) — find all external callers\n- [ ] Define new module interface\n- [ ] Extract code, update imports\n- [ ] gitnexus_detect_changes() — verify affected scope\n- [ ] Run tests for affected processes\n```\n\n### Split Function/Service\n```\n- [ ] gitnexus_context({name: target}) — understand all callees\n- [ ] Group callees by responsibility\n- [ ] gitnexus_impact({target, direction: \"upstream\"}) — map callers to update\n- [ ] Create new functions/services\n- [ ] Update callers\n- [ ] gitnexus_detect_changes() — verify affected scope\n- [ ] Run tests for affected processes\n```\n\n## 
Tools\n\n**gitnexus_rename** — automated multi-file rename:\n```\ngitnexus_rename({symbol_name: \"validateUser\", new_name: \"authenticateUser\", dry_run: true})\n→ 12 edits across 8 files\n→ 10 graph edits (high confidence), 2 ast_search edits (review)\n→ Changes: [{file_path, edits: [{line, old_text, new_text, confidence}]}]\n```\n\n**gitnexus_impact** — map all dependents first:\n```\ngitnexus_impact({target: \"validateUser\", direction: \"upstream\"})\n→ d=1: loginHandler, apiMiddleware, testUtils\n→ Affected Processes: LoginFlow, TokenRefresh\n```\n\n**gitnexus_detect_changes** — verify your changes after refactoring:\n```\ngitnexus_detect_changes({scope: \"all\"})\n→ Changed: 8 files, 12 symbols\n→ Affected processes: LoginFlow, TokenRefresh\n→ Risk: MEDIUM\n```\n\n**gitnexus_cypher** — custom reference queries:\n```cypher\nMATCH (caller)-[:CodeRelation {type: 'CALLS'}]->(f:Function {name: \"validateUser\"})\nRETURN caller.name, caller.filePath ORDER BY caller.filePath\n```\n\n## Risk Rules\n\n| Risk Factor | Mitigation |\n|-------------|------------|\n| Many callers (>5) | Use gitnexus_rename for automated updates |\n| Cross-area refs | Use detect_changes after to verify scope |\n| String/dynamic refs | gitnexus_query to find them |\n| External/public API | Version and deprecate properly |\n\n## Example: Rename `validateUser` to `authenticateUser`\n\n```\n1. gitnexus_rename({symbol_name: \"validateUser\", new_name: \"authenticateUser\", dry_run: true})\n   → 12 edits: 10 graph (safe), 2 ast_search (review)\n   → Files: validator.ts, login.ts, middleware.ts, config.json...\n\n2. Review ast_search edits (config.json: dynamic reference!)\n\n3. gitnexus_rename({symbol_name: \"validateUser\", new_name: \"authenticateUser\", dry_run: false})\n   → Applied 12 edits across 8 files\n\n4. gitnexus_detect_changes({scope: \"all\"})\n   → Affected: LoginFlow, TokenRefresh\n   → Risk: MEDIUM — run tests for these flows\n```\n"
  },
  {
    "path": "gitnexus-test-setup/.gitignore",
    "content": "\n# GitNexus AI Context\n.gitnexus-rules.md\n.cursorrules\n.windsurfrules\nCLAUDE.md\n.github/copilot-instructions.md\n"
  },
  {
    "path": "gitnexus-web/.gitignore",
    "content": ".vercel\n.env*.local\n"
  },
  {
    "path": "gitnexus-web/api/proxy.ts",
    "content": "import type { VercelRequest, VercelResponse } from '@vercel/node';\n\n/**\n * CORS Proxy for isomorphic-git\n * \n * isomorphic-git calls: /api/proxy?url=https://github.com/...\n */\nexport default async function handler(req: VercelRequest, res: VercelResponse) {\n  // Handle CORS preflight\n  if (req.method === 'OPTIONS') {\n    res.setHeader('Access-Control-Allow-Origin', '*');\n    res.setHeader('Access-Control-Allow-Methods', 'GET, POST, OPTIONS');\n    res.setHeader('Access-Control-Allow-Headers', 'Content-Type, Authorization, Git-Protocol, Accept');\n    res.status(200).end();\n    return;\n  }\n\n  // Get URL from query parameter\n  const { url } = req.query;\n  \n  if (!url || typeof url !== 'string') {\n    res.status(400).json({ error: 'Missing url query parameter' });\n    return;\n  }\n\n  // Only allow GitHub URLs for security\n  const allowedHosts = ['github.com', 'raw.githubusercontent.com'];\n  let parsedUrl: URL;\n  \n  try {\n    parsedUrl = new URL(url);\n  } catch {\n    res.status(400).json({ error: 'Invalid URL' });\n    return;\n  }\n  \n  // Match the host exactly or as a true subdomain: a bare endsWith() check\n  // would also accept lookalike hosts such as evilgithub.com.\n  if (!allowedHosts.some(host => parsedUrl.hostname === host || parsedUrl.hostname.endsWith('.' + host))) {\n    res.status(403).json({ error: 'Only GitHub URLs are allowed' });\n    return;\n  }\n\n  try {\n    const headers: Record<string, string> = {\n      'User-Agent': 'git/isomorphic-git',\n    };\n    \n    // Forward relevant headers\n    if (req.headers.authorization) {\n      headers['Authorization'] = req.headers.authorization as string;\n    }\n    if (req.headers['content-type']) {\n      headers['Content-Type'] = req.headers['content-type'] as string;\n    }\n    if (req.headers['git-protocol']) {\n      headers['Git-Protocol'] = req.headers['git-protocol'] as string;\n    }\n    if (req.headers.accept) {\n      headers['Accept'] = req.headers.accept as string;\n    }\n\n    // Get request body for POST requests\n    let body: Buffer | undefined;\n    if (req.method === 'POST') {\n      const chunks: Buffer[] = 
[];\n      for await (const chunk of req) {\n        chunks.push(typeof chunk === 'string' ? Buffer.from(chunk) : chunk);\n      }\n      body = Buffer.concat(chunks);\n    }\n\n    const response = await fetch(url, {\n      method: req.method || 'GET',\n      headers,\n      body: body ? new Uint8Array(body) : undefined,\n    });\n\n    // Set CORS headers\n    res.setHeader('Access-Control-Allow-Origin', '*');\n    res.setHeader('Access-Control-Expose-Headers', '*');\n\n    // Forward response headers (except ones that cause issues)\n    const skipHeaders = [\n      'content-encoding', \n      'transfer-encoding', \n      'connection',\n      'www-authenticate', // IMPORTANT: Strip this to prevent browser's native auth popup!\n    ];\n    \n    response.headers.forEach((value, key) => {\n      if (!skipHeaders.includes(key.toLowerCase())) {\n        res.setHeader(key, value);\n      }\n    });\n\n    res.status(response.status);\n    const buffer = await response.arrayBuffer();\n    res.send(Buffer.from(buffer));\n    \n  } catch (error) {\n    console.error('Proxy error:', error);\n    res.status(500).json({ error: 'Proxy request failed', details: String(error) });\n  }\n}\n\n"
  },
  {
    "path": "gitnexus-web/index.html",
    "content": "<!doctype html>\n<html lang=\"en\">\n  <head>\n    <meta charset=\"UTF-8\" />\n    <meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\" />\n    <title>GitNexus</title>\n    <link rel=\"preconnect\" href=\"https://fonts.googleapis.com\">\n    <link rel=\"preconnect\" href=\"https://fonts.gstatic.com\" crossorigin>\n    <link href=\"https://fonts.googleapis.com/css2?family=JetBrains+Mono:wght@400;500;600&family=Outfit:wght@300;400;500;600;700&display=swap\" rel=\"stylesheet\">\n  </head>\n  <body>\n    <div id=\"root\"></div>\n    <script type=\"module\" src=\"/src/main.tsx\"></script>\n  </body>\n</html>\n"
  },
  {
    "path": "gitnexus-web/package.json",
    "content": "{\n  \"name\": \"gitnexus\",\n  \"private\": true,\n  \"version\": \"0.0.0\",\n  \"type\": \"module\",\n  \"scripts\": {\n    \"dev\": \"vite\",\n    \"build\": \"tsc -b && vite build\",\n    \"preview\": \"vite preview\"\n  },\n  \"dependencies\": {\n    \"@huggingface/transformers\": \"^3.0.0\",\n    \"@isomorphic-git/lightning-fs\": \"^4.6.2\",\n    \"@langchain/anthropic\": \"^1.3.10\",\n    \"@langchain/core\": \"^1.1.15\",\n    \"@langchain/google-genai\": \"^2.1.10\",\n    \"@langchain/langgraph\": \"^1.1.0\",\n    \"@langchain/ollama\": \"^1.2.0\",\n    \"@langchain/openai\": \"^1.2.2\",\n    \"@sigma/edge-curve\": \"^3.1.0\",\n    \"@tailwindcss/vite\": \"^4.1.18\",\n    \"axios\": \"^1.13.2\",\n    \"buffer\": \"^6.0.3\",\n    \"comlink\": \"^4.4.2\",\n    \"d3\": \"^7.9.0\",\n    \"graphology\": \"^0.26.0\",\n    \"graphology-indices\": \"^0.17.0\",\n    \"graphology-utils\": \"^2.3.0\",\n    \"mnemonist\": \"^0.39.0\",\n    \"pandemonium\": \"^2.4.0\",\n    \"graphology-layout-force\": \"^0.2.4\",\n    \"graphology-layout-forceatlas2\": \"^0.10.1\",\n    \"graphology-layout-noverlap\": \"^0.4.2\",\n    \"isomorphic-git\": \"^1.36.1\",\n    \"jszip\": \"^3.10.1\",\n    \"@ladybugdb/wasm-core\": \"^0.15.2\",\n    \"langchain\": \"^1.2.10\",\n    \"lru-cache\": \"^11.2.4\",\n    \"lucide-react\": \"^0.562.0\",\n    \"mermaid\": \"^11.12.2\",\n    \"minisearch\": \"^7.2.0\",\n    \"react\": \"^18.3.1\",\n    \"react-dom\": \"^18.3.1\",\n    \"react-markdown\": \"^10.1.0\",\n    \"react-syntax-highlighter\": \"^16.1.0\",\n    \"react-zoom-pan-pinch\": \"^3.7.0\",\n    \"remark-gfm\": \"^4.0.1\",\n    \"sigma\": \"^3.0.2\",\n    \"tailwindcss\": \"^4.1.18\",\n    \"uuid\": \"^13.0.0\",\n    \"vite-plugin-top-level-await\": \"^1.6.0\",\n    \"vite-plugin-wasm\": \"^3.5.0\",\n    \"web-tree-sitter\": \"^0.20.8\",\n    \"zod\": \"^3.25.76\"\n  },\n  \"devDependencies\": {\n    \"@babel/types\": \"^7.28.5\",\n    \"@types/jszip\": \"^3.4.0\",\n    
\"@types/node\": \"^24.10.1\",\n    \"@types/react\": \"^18.3.5\",\n    \"@types/react-dom\": \"^18.3.0\",\n    \"@types/react-syntax-highlighter\": \"^15.5.13\",\n    \"@vercel/node\": \"^5.5.16\",\n    \"@vitejs/plugin-react\": \"^5.1.0\",\n    \"tree-sitter-wasms\": \"^0.1.13\",\n    \"typescript\": \"^5.4.5\",\n    \"vite\": \"^5.2.0\",\n    \"vite-plugin-static-copy\": \"^3.1.4\"\n  }\n}\n"
  },
  {
    "path": "gitnexus-web/src/App.tsx",
    "content": "import { useCallback, useEffect, useRef } from 'react';\nimport { AppStateProvider, useAppState } from './hooks/useAppState';\nimport { DropZone } from './components/DropZone';\nimport { LoadingOverlay } from './components/LoadingOverlay';\nimport { Header } from './components/Header';\nimport { GraphCanvas, GraphCanvasHandle } from './components/GraphCanvas';\nimport { RightPanel } from './components/RightPanel';\nimport { SettingsPanel } from './components/SettingsPanel';\nimport { StatusBar } from './components/StatusBar';\nimport { FileTreePanel } from './components/FileTreePanel';\nimport { CodeReferencesPanel } from './components/CodeReferencesPanel';\nimport { FileEntry } from './services/zip';\nimport { getActiveProviderConfig } from './core/llm/settings-service';\nimport { createKnowledgeGraph } from './core/graph/graph';\nimport { connectToServer, fetchRepos, normalizeServerUrl, type ConnectToServerResult } from './services/server-connection';\n\nconst AppContent = () => {\n  const {\n    viewMode,\n    setViewMode,\n    setGraph,\n    setFileContents,\n    setProgress,\n    setProjectName,\n    progress,\n    isRightPanelOpen,\n    runPipeline,\n    runPipelineFromFiles,\n    isSettingsPanelOpen,\n    setSettingsPanelOpen,\n    refreshLLMSettings,\n    initializeAgent,\n    startEmbeddings,\n    embeddingStatus,\n    codeReferences,\n    selectedNode,\n    isCodePanelOpen,\n    serverBaseUrl,\n    setServerBaseUrl,\n    availableRepos,\n    setAvailableRepos,\n    switchRepo,\n  } = useAppState();\n\n  const graphCanvasRef = useRef<GraphCanvasHandle>(null);\n\n  const handleFileSelect = useCallback(async (file: File) => {\n    const projectName = file.name.replace('.zip', '');\n    setProjectName(projectName);\n    setProgress({ phase: 'extracting', percent: 0, message: 'Starting...', detail: 'Preparing to extract files' });\n    setViewMode('loading');\n\n    try {\n      const result = await runPipeline(file, (progress) => {\n        
setProgress(progress);\n      });\n\n      setGraph(result.graph);\n      setFileContents(result.fileContents);\n      setViewMode('exploring');\n\n      // Initialize (or re-initialize) the agent AFTER a repo loads so it captures\n      // the current codebase context (file contents + graph tools) in the worker.\n      if (getActiveProviderConfig()) {\n        initializeAgent(projectName);\n      }\n\n      // Auto-start embeddings pipeline in background\n      // Uses WebGPU if available, falls back to WASM\n      startEmbeddings().catch((err) => {\n        if (err?.name === 'WebGPUNotAvailableError' || err?.message?.includes('WebGPU')) {\n          startEmbeddings('wasm').catch(console.warn);\n        } else {\n          console.warn('Embeddings auto-start failed:', err);\n        }\n      });\n    } catch (error) {\n      console.error('Pipeline error:', error);\n      setProgress({\n        phase: 'error',\n        percent: 0,\n        message: 'Error processing file',\n        detail: error instanceof Error ? 
error.message : 'Unknown error',\n      });\n      setTimeout(() => {\n        setViewMode('onboarding');\n        setProgress(null);\n      }, 3000);\n    }\n  }, [setViewMode, setGraph, setFileContents, setProgress, setProjectName, runPipeline, startEmbeddings, initializeAgent]);\n\n  const handleGitClone = useCallback(async (files: FileEntry[]) => {\n    const firstPath = files[0]?.path || 'repository';\n    const projectName = firstPath.split('/')[0].replace(/-\\d+$/, '') || 'repository';\n\n    setProjectName(projectName);\n    setProgress({ phase: 'extracting', percent: 0, message: 'Starting...', detail: 'Preparing to process files' });\n    setViewMode('loading');\n\n    try {\n      const result = await runPipelineFromFiles(files, (progress) => {\n        setProgress(progress);\n      });\n\n      setGraph(result.graph);\n      setFileContents(result.fileContents);\n      setViewMode('exploring');\n\n      if (getActiveProviderConfig()) {\n        initializeAgent(projectName);\n      }\n\n      startEmbeddings().catch((err) => {\n        if (err?.name === 'WebGPUNotAvailableError' || err?.message?.includes('WebGPU')) {\n          startEmbeddings('wasm').catch(console.warn);\n        } else {\n          console.warn('Embeddings auto-start failed:', err);\n        }\n      });\n    } catch (error) {\n      console.error('Pipeline error:', error);\n      setProgress({\n        phase: 'error',\n        percent: 0,\n        message: 'Error processing repository',\n        detail: error instanceof Error ? 
error.message : 'Unknown error',\n      });\n      setTimeout(() => {\n        setViewMode('onboarding');\n        setProgress(null);\n      }, 3000);\n    }\n  }, [setViewMode, setGraph, setFileContents, setProgress, setProjectName, runPipelineFromFiles, startEmbeddings, initializeAgent]);\n\n  const handleServerConnect = useCallback((result: ConnectToServerResult) => {\n    // Extract project name from repoPath\n    const repoPath = result.repoInfo.repoPath;\n    const projectName = repoPath.split('/').pop() || 'server-project';\n    setProjectName(projectName);\n\n    // Build KnowledgeGraph from server data (bypasses WASM pipeline entirely)\n    const graph = createKnowledgeGraph();\n    for (const node of result.nodes) {\n      graph.addNode(node);\n    }\n    for (const rel of result.relationships) {\n      graph.addRelationship(rel);\n    }\n    setGraph(graph);\n\n    // Set file contents from extracted File node content\n    const fileMap = new Map<string, string>();\n    for (const [path, content] of Object.entries(result.fileContents)) {\n      fileMap.set(path, content);\n    }\n    setFileContents(fileMap);\n\n    // Transition directly to exploring view\n    setViewMode('exploring');\n\n    // Initialize agent if LLM is configured\n    if (getActiveProviderConfig()) {\n      initializeAgent(projectName);\n    }\n\n    // Auto-start embeddings\n    startEmbeddings().catch((err) => {\n      if (err?.name === 'WebGPUNotAvailableError' || err?.message?.includes('WebGPU')) {\n        startEmbeddings('wasm').catch(console.warn);\n      } else {\n        console.warn('Embeddings auto-start failed:', err);\n      }\n    });\n  }, [setViewMode, setGraph, setFileContents, setProjectName, initializeAgent, startEmbeddings]);\n\n  // Auto-connect when ?server query param is present (bookmarkable shortcut)\n  const autoConnectRan = useRef(false);\n  useEffect(() => {\n    if (autoConnectRan.current) return;\n    const params = new 
URLSearchParams(window.location.search);\n    if (!params.has('server')) return;\n    autoConnectRan.current = true;\n\n    // Clean the URL so a refresh won't re-trigger\n    const cleanUrl = window.location.pathname + window.location.hash;\n    window.history.replaceState(null, '', cleanUrl);\n\n    setProgress({ phase: 'extracting', percent: 0, message: 'Connecting to server...', detail: 'Validating server' });\n    setViewMode('loading');\n\n    const serverUrl = params.get('server') || window.location.origin;\n\n    const baseUrl = normalizeServerUrl(serverUrl);\n\n    connectToServer(serverUrl, (phase, downloaded, total) => {\n      if (phase === 'validating') {\n        setProgress({ phase: 'extracting', percent: 5, message: 'Connecting to server...', detail: 'Validating server' });\n      } else if (phase === 'downloading') {\n        const pct = total ? Math.round((downloaded / total) * 90) + 5 : 50;\n        const mb = (downloaded / (1024 * 1024)).toFixed(1);\n        setProgress({ phase: 'extracting', percent: pct, message: 'Downloading graph...', detail: `${mb} MB downloaded` });\n      } else if (phase === 'extracting') {\n        setProgress({ phase: 'extracting', percent: 97, message: 'Processing...', detail: 'Extracting file contents' });\n      }\n    }).then(async (result) => {\n      handleServerConnect(result);\n\n      // Store server URL and fetch available repos for the repo switcher\n      setServerBaseUrl(baseUrl);\n      try {\n        const repos = await fetchRepos(baseUrl);\n        setAvailableRepos(repos);\n      } catch (e) {\n        console.warn('Failed to fetch repo list:', e);\n      }\n    }).catch((err) => {\n      console.error('Auto-connect failed:', err);\n      setProgress({\n        phase: 'error',\n        percent: 0,\n        message: 'Failed to connect to server',\n        detail: err instanceof Error ? 
err.message : 'Unknown error',\n      });\n      setTimeout(() => {\n        setViewMode('onboarding');\n        setProgress(null);\n      }, 3000);\n    });\n  }, [handleServerConnect, setProgress, setViewMode, setServerBaseUrl, setAvailableRepos]);\n\n  const handleFocusNode = useCallback((nodeId: string) => {\n    graphCanvasRef.current?.focusNode(nodeId);\n  }, []);\n\n  // Handle settings saved - refresh and reinitialize agent\n  // NOTE: Must be defined BEFORE any conditional returns (React hooks rule)\n  const handleSettingsSaved = useCallback(() => {\n    refreshLLMSettings();\n    initializeAgent();\n  }, [refreshLLMSettings, initializeAgent]);\n\n  // Render based on view mode\n  if (viewMode === 'onboarding') {\n    return (\n      <DropZone\n        onFileSelect={handleFileSelect}\n        onGitClone={handleGitClone}\n        onServerConnect={async (result, serverUrl) => {\n          handleServerConnect(result);\n          if (serverUrl) {\n            const baseUrl = normalizeServerUrl(serverUrl);\n            setServerBaseUrl(baseUrl);\n            try {\n              const repos = await fetchRepos(baseUrl);\n              setAvailableRepos(repos);\n            } catch (e) {\n              console.warn('Failed to fetch repo list:', e);\n            }\n          }\n        }}\n      />\n    );\n  }\n\n  if (viewMode === 'loading' && progress) {\n    return <LoadingOverlay progress={progress} />;\n  }\n\n  // Exploring view\n  return (\n    <div className=\"flex flex-col h-screen bg-void overflow-hidden\">\n      <Header onFocusNode={handleFocusNode} availableRepos={availableRepos} onSwitchRepo={switchRepo} />\n\n      <main className=\"flex-1 flex min-h-0\">\n        {/* Left Panel - File Tree */}\n        <FileTreePanel onFocusNode={handleFocusNode} />\n\n        {/* Graph area - takes remaining space */}\n        <div className=\"flex-1 relative min-w-0\">\n          <GraphCanvas ref={graphCanvasRef} />\n\n          {/* Code References Panel 
(overlay) - does NOT resize the graph, it overlaps on top */}\n          {isCodePanelOpen && (codeReferences.length > 0 || !!selectedNode) && (\n            <div className=\"absolute inset-y-0 left-0 z-30 pointer-events-auto\">\n              <CodeReferencesPanel onFocusNode={handleFocusNode} />\n            </div>\n          )}\n        </div>\n\n        {/* Right Panel - Code & Chat (tabbed) */}\n        {isRightPanelOpen && <RightPanel />}\n      </main>\n\n      <StatusBar />\n\n      {/* Settings Panel (modal) */}\n      <SettingsPanel\n        isOpen={isSettingsPanelOpen}\n        onClose={() => setSettingsPanelOpen(false)}\n        onSettingsSaved={handleSettingsSaved}\n      />\n\n    </div>\n  );\n};\n\nfunction App() {\n  return (\n    <AppStateProvider>\n      <AppContent />\n    </AppStateProvider>\n  );\n}\n\nexport default App;\n"
  },
  {
    "path": "gitnexus-web/src/components/BackendRepoSelector.tsx",
    "content": "import { Server, ArrowRight } from 'lucide-react';\nimport { BackendRepo } from '../services/backend';\n\ninterface BackendRepoSelectorProps {\n  repos: BackendRepo[];\n  onSelectRepo: (repoName: string) => void;\n  backendUrl: string;\n  isConnected: boolean;\n}\n\nexport const BackendRepoSelector = ({\n  repos,\n  onSelectRepo,\n  backendUrl,\n  isConnected,\n}: BackendRepoSelectorProps) => {\n  return (\n    <div className=\"p-8 bg-surface border border-border-default rounded-3xl\">\n      {/* Icon */}\n      <div className=\"mx-auto w-20 h-20 mb-6 flex items-center justify-center bg-gradient-to-br from-accent to-node-interface rounded-2xl shadow-glow\">\n        <Server className=\"w-10 h-10 text-white\" />\n      </div>\n\n      {/* Title */}\n      <h2 className=\"text-xl font-semibold text-text-primary text-center mb-2\">\n        Local Repositories\n      </h2>\n      <p className=\"text-sm text-text-secondary text-center mb-4\">\n        Select an indexed repository from your local GitNexus server\n      </p>\n\n      {/* Connected status badge */}\n      {isConnected && (\n        <div className=\"flex items-center justify-center gap-2 mb-6\">\n          <span className=\"w-2 h-2 bg-green-400 rounded-full animate-pulse\" />\n          <span className=\"text-xs text-green-400\">Connected to {backendUrl}</span>\n        </div>\n      )}\n\n      {/* Repo list or empty state */}\n      {repos.length > 0 ? 
(\n        <div className=\"max-h-80 overflow-y-auto space-y-2\">\n          {repos.map((repo) => (\n            <button\n              key={repo.name}\n              onClick={() => onSelectRepo(repo.name)}\n              className=\"w-full p-4 bg-elevated border border-border-subtle rounded-xl hover:border-accent/50 hover:bg-hover transition-all text-left group\"\n            >\n              <div className=\"flex items-center justify-between mb-2\">\n                <span className=\"font-medium text-text-primary group-hover:text-accent transition-colors\">\n                  {repo.name}\n                </span>\n                <ArrowRight className=\"w-4 h-4 text-text-muted group-hover:text-accent transition-colors\" />\n              </div>\n              <div className=\"flex items-center gap-3 text-xs text-text-muted\">\n                {repo.stats?.files != null && <span>{repo.stats.files} files</span>}\n                {repo.stats?.nodes != null && <span>{repo.stats.nodes} nodes</span>}\n                {repo.stats?.edges != null && <span>{repo.stats.edges} edges</span>}\n              </div>\n              <div className=\"text-xs text-text-muted mt-1\">\n                Indexed {new Date(repo.indexedAt).toLocaleDateString()}\n              </div>\n            </button>\n          ))}\n        </div>\n      ) : (\n        <div className=\"text-center text-text-muted py-8\">\n          <p className=\"text-sm mb-2\">No indexed repositories found</p>\n          <p className=\"text-xs\">\n            Run{' '}\n            <code className=\"px-1 py-0.5 bg-elevated rounded\">gitnexus analyze</code>{' '}\n            in a repository\n          </p>\n        </div>\n      )}\n\n      {/* Bottom hints */}\n      <div className=\"mt-4 flex items-center justify-center gap-3 text-xs text-text-muted\">\n        <span className=\"px-3 py-1.5 bg-elevated border border-border-subtle rounded-md\">\n          {repos.length} {repos.length === 1 ? 
'repo' : 'repos'}\n        </span>\n        <span className=\"px-3 py-1.5 bg-elevated border border-border-subtle rounded-md\">\n          Pre-indexed\n        </span>\n      </div>\n    </div>\n  );\n};\n"
  },
  {
    "path": "gitnexus-web/src/components/CodeReferencesPanel.tsx",
    "content": "import { useCallback, useEffect, useMemo, useRef, useState } from 'react';\nimport { Code, PanelLeftClose, PanelLeft, Trash2, X, Target, FileCode, Sparkles, MousePointerClick } from 'lucide-react';\nimport { Prism as SyntaxHighlighter } from 'react-syntax-highlighter';\nimport { vscDarkPlus } from 'react-syntax-highlighter/dist/esm/styles/prism';\nimport { useAppState } from '../hooks/useAppState';\nimport { NODE_COLORS } from '../lib/constants';\n\n/** Map file extension to Prism syntax highlighter language identifier */\nconst getSyntaxLanguage = (filePath: string | undefined): string => {\n  if (!filePath) return 'text';\n  const ext = filePath.split('.').pop()?.toLowerCase();\n  switch (ext) {\n    case 'js': case 'jsx': case 'mjs': case 'cjs': return 'javascript';\n    case 'ts': case 'tsx': case 'mts': case 'cts': return 'typescript';\n    case 'py': case 'pyw': return 'python';\n    case 'rb': case 'rake': case 'gemspec': return 'ruby';\n    case 'java': return 'java';\n    case 'go': return 'go';\n    case 'rs': return 'rust';\n    case 'c': case 'h': return 'c';\n    case 'cpp': case 'cc': case 'cxx': case 'hpp': case 'hxx': case 'hh': return 'cpp';\n    case 'cs': return 'csharp';\n    case 'php': return 'php';\n    case 'kt': case 'kts': return 'kotlin';\n    case 'swift': return 'swift';\n    case 'json': return 'json';\n    case 'yaml': case 'yml': return 'yaml';\n    case 'md': case 'mdx': return 'markdown';\n    case 'html': case 'htm': case 'erb': return 'markup';\n    case 'css': case 'scss': case 'sass': return 'css';\n    case 'sh': case 'bash': case 'zsh': return 'bash';\n    case 'sql': return 'sql';\n    case 'xml': return 'xml';\n    default: break;\n  }\n  // Handle extensionless Ruby files\n  const basename = filePath.split('/').pop() || '';\n  if (['Rakefile', 'Gemfile', 'Guardfile', 'Vagrantfile', 'Brewfile'].includes(basename)) return 'ruby';\n  if (['Makefile'].includes(basename)) return 'makefile';\n  if 
(['Dockerfile'].includes(basename)) return 'docker';\n  return 'text';\n};\n\n// Match the code theme used elsewhere in the app\nconst customTheme = {\n  ...vscDarkPlus,\n  'pre[class*=\"language-\"]': {\n    ...vscDarkPlus['pre[class*=\"language-\"]'],\n    background: '#0a0a10',\n    margin: 0,\n    padding: '12px 0',\n    fontSize: '13px',\n    lineHeight: '1.6',\n  },\n  'code[class*=\"language-\"]': {\n    ...vscDarkPlus['code[class*=\"language-\"]'],\n    background: 'transparent',\n    fontFamily: '\"JetBrains Mono\", \"Fira Code\", monospace',\n  },\n};\n\nexport interface CodeReferencesPanelProps {\n  onFocusNode: (nodeId: string) => void;\n}\n\nexport const CodeReferencesPanel = ({ onFocusNode }: CodeReferencesPanelProps) => {\n  const {\n    graph,\n    fileContents,\n    selectedNode,\n    codeReferences,\n    removeCodeReference,\n    clearCodeReferences,\n    setSelectedNode,\n    codeReferenceFocus,\n  } = useAppState();\n\n  const [isCollapsed, setIsCollapsed] = useState(false);\n  const [glowRefId, setGlowRefId] = useState<string | null>(null);\n  const panelRef = useRef<HTMLElement | null>(null);\n  const resizeRef = useRef<{ startX: number; startWidth: number } | null>(null);\n  const refCardEls = useRef<Map<string, HTMLDivElement | null>>(new Map());\n  const glowTimerRef = useRef<number | null>(null);\n\n  useEffect(() => {\n    return () => {\n      if (glowTimerRef.current) {\n        window.clearTimeout(glowTimerRef.current);\n        glowTimerRef.current = null;\n      }\n    };\n  }, []);\n\n  const [panelWidth, setPanelWidth] = useState<number>(() => {\n    try {\n      const saved = window.localStorage.getItem('gitnexus.codePanelWidth');\n      const parsed = saved ? 
parseInt(saved, 10) : NaN;\n      if (!Number.isFinite(parsed)) return 560; // increased default\n      return Math.max(420, Math.min(parsed, 900));\n    } catch {\n      return 560;\n    }\n  });\n\n  useEffect(() => {\n    try {\n      window.localStorage.setItem('gitnexus.codePanelWidth', String(panelWidth));\n    } catch {\n      // ignore\n    }\n  }, [panelWidth]);\n\n  const startResize = useCallback((e: React.MouseEvent) => {\n    e.preventDefault();\n    e.stopPropagation();\n    resizeRef.current = { startX: e.clientX, startWidth: panelWidth };\n    document.body.style.cursor = 'col-resize';\n    document.body.style.userSelect = 'none';\n\n    const onMove = (ev: MouseEvent) => {\n      const state = resizeRef.current;\n      if (!state) return;\n      const delta = ev.clientX - state.startX;\n      const next = Math.max(420, Math.min(state.startWidth + delta, 900));\n      setPanelWidth(next);\n    };\n\n    const onUp = () => {\n      resizeRef.current = null;\n      document.body.style.cursor = '';\n      document.body.style.userSelect = '';\n      window.removeEventListener('mousemove', onMove);\n      window.removeEventListener('mouseup', onUp);\n    };\n\n    window.addEventListener('mousemove', onMove);\n    window.addEventListener('mouseup', onUp);\n  }, [panelWidth]);\n\n  const aiReferences = useMemo(() => codeReferences.filter(r => r.source === 'ai'), [codeReferences]);\n\n  // When the user clicks a citation badge in chat, focus the corresponding snippet card:\n  // - expand the panel if collapsed\n  // - smooth-scroll the card into view\n  // - briefly glow it for discoverability\n  useEffect(() => {\n    if (!codeReferenceFocus) return;\n\n    // Ensure panel is expanded\n    setIsCollapsed(false);\n\n    const { filePath, startLine, endLine } = codeReferenceFocus;\n    const target =\n      aiReferences.find(r =>\n        r.filePath === filePath &&\n        r.startLine === startLine &&\n        r.endLine === endLine\n      ) ??\n      
aiReferences.find(r => r.filePath === filePath);\n\n    if (!target) return;\n\n    // Double rAF: wait for collapse state + list DOM to render.\n    requestAnimationFrame(() => {\n      requestAnimationFrame(() => {\n        const el = refCardEls.current.get(target.id);\n        if (!el) return;\n\n        el.scrollIntoView({ behavior: 'smooth', block: 'center' });\n        setGlowRefId(target.id);\n\n        if (glowTimerRef.current) {\n          window.clearTimeout(glowTimerRef.current);\n        }\n        glowTimerRef.current = window.setTimeout(() => {\n          setGlowRefId((prev) => (prev === target.id ? null : prev));\n          glowTimerRef.current = null;\n        }, 1200);\n      });\n    });\n  }, [codeReferenceFocus?.ts, aiReferences]);\n\n  const refsWithSnippets = useMemo(() => {\n    return aiReferences.map((ref) => {\n      const content = fileContents.get(ref.filePath);\n      if (!content) {\n        return { ref, content: null as string | null, start: 0, end: 0, highlightStart: 0, highlightEnd: 0, totalLines: 0 };\n      }\n\n      const lines = content.split('\\n');\n      const totalLines = lines.length;\n\n      const startLine = ref.startLine ?? 0;\n      const endLine = ref.endLine ?? startLine;\n\n      const contextBefore = 3;\n      const contextAfter = 20;\n      const start = Math.max(0, startLine - contextBefore);\n      const end = Math.min(totalLines - 1, endLine + contextAfter);\n\n      return {\n        ref,\n        content: lines.slice(start, end + 1).join('\\n'),\n        start,\n        end,\n        highlightStart: Math.max(0, startLine - start),\n        highlightEnd: Math.max(0, endLine - start),\n        totalLines,\n      };\n    });\n  }, [aiReferences, fileContents]);\n\n  const selectedFilePath = selectedNode?.properties?.filePath;\n  const selectedFileContent = selectedFilePath ? 
fileContents.get(selectedFilePath) : undefined;\n  const selectedIsFile = selectedNode?.label === 'File' && !!selectedFilePath;\n  const showSelectedViewer = !!selectedNode && !!selectedFilePath;\n  const showCitations = aiReferences.length > 0;\n\n  if (isCollapsed) {\n    return (\n      <aside className=\"h-full w-12 bg-surface border-r border-border-subtle flex flex-col items-center py-3 gap-2 flex-shrink-0\">\n        <button\n          onClick={() => setIsCollapsed(false)}\n          className=\"p-2 text-text-secondary hover:text-cyan-400 hover:bg-cyan-500/10 rounded transition-colors\"\n          title=\"Expand Code Panel\"\n        >\n          <PanelLeft className=\"w-5 h-5\" />\n        </button>\n        <div className=\"w-6 h-px bg-border-subtle my-1\" />\n        {showSelectedViewer && (\n          <div className=\"text-[9px] text-amber-400 rotate-90 whitespace-nowrap font-medium tracking-wide\">\n            SELECTED\n          </div>\n        )}\n        {showCitations && (\n          <div className=\"text-[9px] text-cyan-400 rotate-90 whitespace-nowrap font-medium tracking-wide mt-4\">\n            AI • {aiReferences.length}\n          </div>\n        )}\n      </aside>\n    );\n  }\n\n  return (\n    <aside\n      ref={(el) => { panelRef.current = el; }}\n      className=\"h-full bg-surface/95 backdrop-blur-md border-r border-border-subtle flex flex-col animate-slide-in relative shadow-2xl\"\n      style={{ width: panelWidth }}\n    >\n      {/* Resize handle */}\n      <div\n        onMouseDown={startResize}\n        className=\"absolute top-0 right-0 h-full w-2 cursor-col-resize bg-transparent hover:bg-cyan-500/25 transition-colors\"\n        title=\"Drag to resize\"\n      />\n      {/* Header */}\n      <div className=\"flex items-center justify-between px-3 py-2.5 border-b border-border-subtle bg-gradient-to-r from-elevated/60 to-surface/60\">\n        <div className=\"flex items-center gap-2\">\n          <Code className=\"w-4 h-4 
text-cyan-400\" />\n          <span className=\"text-sm font-semibold text-text-primary\">Code Inspector</span>\n        </div>\n        <div className=\"flex items-center gap-1.5\">\n          {showCitations && (\n            <button\n              onClick={() => clearCodeReferences()}\n              className=\"p-1.5 text-text-muted hover:text-red-400 hover:bg-red-500/10 rounded transition-colors\"\n              title=\"Clear AI citations\"\n            >\n              <Trash2 className=\"w-4 h-4\" />\n            </button>\n          )}\n          <button\n            onClick={() => setIsCollapsed(true)}\n            className=\"p-1.5 text-text-muted hover:text-text-primary hover:bg-hover rounded transition-colors\"\n            title=\"Collapse Panel\"\n          >\n            <PanelLeftClose className=\"w-4 h-4\" />\n          </button>\n        </div>\n      </div>\n\n      <div className=\"flex-1 min-h-0 flex flex-col\">\n        {/* Top: Selected file viewer (when a node is selected) */}\n        {showSelectedViewer && (\n          <div className={`${showCitations ? 'h-[42%]' : 'flex-1'} min-h-0 flex flex-col`}>\n            <div className=\"px-3 py-2 bg-gradient-to-r from-amber-500/8 to-orange-500/5 border-b border-amber-500/20 flex items-center gap-2\">\n              <div className=\"flex items-center gap-1.5 px-2 py-0.5 bg-amber-500/15 rounded-md border border-amber-500/25\">\n                <MousePointerClick className=\"w-3 h-3 text-amber-400\" />\n                <span className=\"text-[10px] text-amber-300 font-semibold uppercase tracking-wide\">Selected</span>\n              </div>\n              <FileCode className=\"w-3.5 h-3.5 text-amber-400/70 ml-1\" />\n              <span className=\"text-xs text-text-primary font-mono truncate flex-1\">\n                {selectedNode?.properties?.filePath?.split('/').pop() ?? 
selectedNode?.properties?.name}\n              </span>\n              <button\n                onClick={() => setSelectedNode(null)}\n                className=\"p-1 text-text-muted hover:text-amber-400 hover:bg-amber-500/10 rounded transition-colors\"\n                title=\"Clear selection\"\n              >\n                <X className=\"w-4 h-4\" />\n              </button>\n            </div>\n            <div className=\"flex-1 min-h-0 overflow-auto scrollbar-thin\">\n              {selectedFileContent ? (\n                <SyntaxHighlighter\n                  language={getSyntaxLanguage(selectedFilePath)}\n                  style={customTheme as any}\n                  showLineNumbers\n                  startingLineNumber={1}\n                  lineNumberStyle={{\n                    minWidth: '3em',\n                    paddingRight: '1em',\n                    color: '#5a5a70',\n                    textAlign: 'right',\n                    userSelect: 'none',\n                  }}\n                  lineProps={(lineNumber) => {\n                    const startLine = selectedNode?.properties?.startLine;\n                    const endLine = selectedNode?.properties?.endLine ?? startLine;\n                    const isHighlighted =\n                      typeof startLine === 'number' &&\n                      lineNumber >= startLine + 1 &&\n                      lineNumber <= (endLine ?? startLine) + 1;\n                    return {\n                      style: {\n                        display: 'block',\n                        backgroundColor: isHighlighted ? 'rgba(6, 182, 212, 0.14)' : 'transparent',\n                        borderLeft: isHighlighted ? 
'3px solid #06b6d4' : '3px solid transparent',\n                        paddingLeft: '12px',\n                        paddingRight: '16px',\n                      },\n                    };\n                  }}\n                  wrapLines\n                >\n                  {selectedFileContent}\n                </SyntaxHighlighter>\n              ) : (\n                <div className=\"px-3 py-3 text-sm text-text-muted\">\n                  {selectedIsFile ? (\n                    <>Code not available in memory for <span className=\"font-mono\">{selectedFilePath}</span></>\n                  ) : (\n                    <>Select a file node to preview its contents.</>\n                  )}\n                </div>\n              )}\n            </div>\n          </div>\n        )}\n\n        {/* Divider between Selected viewer and AI refs (more visible) */}\n        {showSelectedViewer && showCitations && (\n          <div className=\"h-1.5 bg-gradient-to-r from-transparent via-border-subtle to-transparent\" />\n        )}\n\n        {/* Bottom: AI citations list */}\n        {showCitations && (\n          <div className=\"flex-1 min-h-0 flex flex-col\">\n            {/* AI Citations Section Header */}\n            <div className=\"px-3 py-2 bg-gradient-to-r from-cyan-500/8 to-teal-500/5 border-b border-cyan-500/20 flex items-center gap-2\">\n              <div className=\"flex items-center gap-1.5 px-2 py-0.5 bg-cyan-500/15 rounded-md border border-cyan-500/25\">\n                <Sparkles className=\"w-3 h-3 text-cyan-400\" />\n                <span className=\"text-[10px] text-cyan-300 font-semibold uppercase tracking-wide\">AI Citations</span>\n              </div>\n              <span className=\"text-xs text-text-muted ml-1\">{aiReferences.length} reference{aiReferences.length !== 1 ? 
's' : ''}</span>\n            </div>\n            <div className=\"flex-1 min-h-0 overflow-y-auto scrollbar-thin p-3 space-y-3\">\n            {refsWithSnippets.map(({ ref, content, start, highlightStart, highlightEnd, totalLines }) => {\n          const nodeColor = ref.label ? (NODE_COLORS as any)[ref.label] || '#6b7280' : '#6b7280';\n          const hasRange = typeof ref.startLine === 'number';\n          const startDisplay = hasRange ? (ref.startLine ?? 0) + 1 : undefined;\n          const endDisplay = hasRange ? (ref.endLine ?? ref.startLine ?? 0) + 1 : undefined;\n          const language = getSyntaxLanguage(ref.filePath);\n\n          const isGlowing = glowRefId === ref.id;\n\n          return (\n            <div\n              key={ref.id}\n              ref={(el) => { refCardEls.current.set(ref.id, el); }}\n              className={[\n                'bg-elevated border border-border-subtle rounded-xl overflow-hidden transition-all',\n                isGlowing ? 'ring-2 ring-cyan-300/70 shadow-[0_0_0_6px_rgba(34,211,238,0.14)] animate-pulse' : '',\n              ].join(' ')}\n            >\n              <div className=\"px-3 py-2 border-b border-border-subtle bg-surface/40 flex items-start gap-2\">\n                <span\n                  className=\"mt-0.5 px-2 py-0.5 rounded text-[10px] font-semibold uppercase tracking-wide flex-shrink-0\"\n                  style={{ backgroundColor: nodeColor, color: '#06060a' }}\n                  title={ref.label ?? 'Code'}\n                >\n                  {ref.label ?? 'Code'}\n                </span>\n                <div className=\"min-w-0 flex-1\">\n                  <div className=\"text-xs text-text-primary font-medium truncate\">\n                    {ref.name ?? ref.filePath.split('/').pop() ?? 
ref.filePath}\n                  </div>\n                  <div className=\"text-[11px] text-text-muted font-mono truncate\">\n                    {ref.filePath}\n                    {startDisplay !== undefined && (\n                      <span className=\"text-text-secondary\">\n                        {' '}\n                        • L{startDisplay}\n                        {endDisplay !== startDisplay ? `–${endDisplay}` : ''}\n                      </span>\n                    )}\n                    {totalLines > 0 && <span className=\"text-text-muted\"> • {totalLines} lines</span>}\n                  </div>\n                </div>\n                <div className=\"flex items-center gap-1\">\n                  {ref.nodeId && (\n                    <button\n                      onClick={() => {\n                        const nodeId = ref.nodeId!;\n                        // Sync selection + focus graph\n                        if (graph) {\n                          const node = graph.nodes.find((n) => n.id === nodeId);\n                          if (node) setSelectedNode(node);\n                        }\n                        onFocusNode(nodeId);\n                      }}\n                      className=\"p-1.5 text-text-muted hover:text-text-primary hover:bg-hover rounded transition-colors\"\n                      title=\"Focus in graph\"\n                    >\n                      <Target className=\"w-4 h-4\" />\n                    </button>\n                  )}\n                  <button\n                    onClick={() => removeCodeReference(ref.id)}\n                    className=\"p-1.5 text-text-muted hover:text-text-primary hover:bg-hover rounded transition-colors\"\n                    title=\"Remove\"\n                  >\n                    <X className=\"w-4 h-4\" />\n                  </button>\n                </div>\n              </div>\n\n              <div className=\"overflow-x-auto\">\n                {content ? 
(\n                  <SyntaxHighlighter\n                    language={language}\n                    style={customTheme as any}\n                    showLineNumbers\n                    startingLineNumber={start + 1}\n                    lineNumberStyle={{\n                      minWidth: '3em',\n                      paddingRight: '1em',\n                      color: '#5a5a70',\n                      textAlign: 'right',\n                      userSelect: 'none',\n                    }}\n                    lineProps={(lineNumber) => {\n                      const isHighlighted =\n                        hasRange &&\n                        lineNumber >= start + highlightStart + 1 &&\n                        lineNumber <= start + highlightEnd + 1;\n                      return {\n                        style: {\n                          display: 'block',\n                          backgroundColor: isHighlighted ? 'rgba(6, 182, 212, 0.14)' : 'transparent',\n                          borderLeft: isHighlighted ? '3px solid #06b6d4' : '3px solid transparent',\n                          paddingLeft: '12px',\n                          paddingRight: '16px',\n                        },\n                      };\n                    }}\n                    wrapLines\n                  >\n                    {content}\n                  </SyntaxHighlighter>\n                ) : (\n                  <div className=\"px-3 py-3 text-sm text-text-muted\">\n                    Code not available in memory for <span className=\"font-mono\">{ref.filePath}</span>\n                  </div>\n                )}\n              </div>\n            </div>\n          );\n            })}\n            </div>\n          </div>\n        )}\n      </div>\n    </aside>\n  );\n};\n"
  },
  {
    "path": "gitnexus-web/src/components/DropZone.tsx",
    "content": "import { useState, useCallback, useRef, DragEvent } from 'react';\nimport { Upload, FileArchive, Github, Loader2, ArrowRight, Key, Eye, EyeOff, Globe, X } from 'lucide-react';\nimport { cloneRepository, parseGitHubUrl } from '../services/git-clone';\nimport { connectToServer, type ConnectToServerResult } from '../services/server-connection';\nimport { FileEntry } from '../services/zip';\n\ninterface DropZoneProps {\n  onFileSelect: (file: File) => void;\n  onGitClone?: (files: FileEntry[]) => void;\n  onServerConnect?: (result: ConnectToServerResult, serverUrl?: string) => void;\n}\n\nfunction formatBytes(bytes: number): string {\n  if (bytes < 1024) return `${bytes} B`;\n  if (bytes < 1024 * 1024) return `${(bytes / 1024).toFixed(1)} KB`;\n  return `${(bytes / (1024 * 1024)).toFixed(1)} MB`;\n}\n\nexport const DropZone = ({ onFileSelect, onGitClone, onServerConnect }: DropZoneProps) => {\n  const [isDragging, setIsDragging] = useState(false);\n  const [activeTab, setActiveTab] = useState<'zip' | 'github' | 'server'>('zip');\n  const [githubUrl, setGithubUrl] = useState('');\n  const [githubToken, setGithubToken] = useState('');\n  const [showToken, setShowToken] = useState(false);\n  const [isCloning, setIsCloning] = useState(false);\n  const [cloneProgress, setCloneProgress] = useState({ phase: '', percent: 0 });\n  const [error, setError] = useState<string | null>(null);\n\n  // Server tab state\n  const [serverUrl, setServerUrl] = useState(() =>\n    localStorage.getItem('gitnexus-server-url') || ''\n  );\n  const [isConnecting, setIsConnecting] = useState(false);\n  const [serverProgress, setServerProgress] = useState<{\n    phase: string;\n    downloaded: number;\n    total: number | null;\n  }>({ phase: '', downloaded: 0, total: null });\n  const abortControllerRef = useRef<AbortController | null>(null);\n\n  const handleDragOver = useCallback((e: DragEvent<HTMLDivElement>) => {\n    e.preventDefault();\n    e.stopPropagation();\n    
setIsDragging(true);\n  }, []);\n\n  const handleDragLeave = useCallback((e: DragEvent<HTMLDivElement>) => {\n    e.preventDefault();\n    e.stopPropagation();\n    setIsDragging(false);\n  }, []);\n\n  const handleDrop = useCallback((e: DragEvent<HTMLDivElement>) => {\n    e.preventDefault();\n    e.stopPropagation();\n    setIsDragging(false);\n\n    const files = e.dataTransfer.files;\n    if (files.length > 0) {\n      const file = files[0];\n      if (file.name.endsWith('.zip')) {\n        onFileSelect(file);\n      } else {\n        setError('Please drop a .zip file');\n      }\n    }\n  }, [onFileSelect]);\n\n  const handleFileInput = useCallback((e: React.ChangeEvent<HTMLInputElement>) => {\n    const files = e.target.files;\n    if (files && files.length > 0) {\n      const file = files[0];\n      if (file.name.endsWith('.zip')) {\n        onFileSelect(file);\n      } else {\n        setError('Please select a .zip file');\n      }\n    }\n  }, [onFileSelect]);\n\n  const handleGitClone = async () => {\n    if (!githubUrl.trim()) {\n      setError('Please enter a GitHub URL');\n      return;\n    }\n\n    const parsed = parseGitHubUrl(githubUrl);\n    if (!parsed) {\n      setError('Invalid GitHub URL. Use format: https://github.com/owner/repo');\n      return;\n    }\n\n    setError(null);\n    setIsCloning(true);\n    setCloneProgress({ phase: 'starting', percent: 0 });\n\n    try {\n      const files = await cloneRepository(\n        githubUrl,\n        (phase, percent) => setCloneProgress({ phase, percent }),\n        githubToken || undefined\n      );\n\n      setGithubToken('');\n\n      if (onGitClone) {\n        onGitClone(files);\n      }\n    } catch (err) {\n      console.error('Clone failed:', err);\n      const message = err instanceof Error ? 
err.message : 'Failed to clone repository';\n      if (message.includes('401') || message.includes('403') || message.includes('Authentication')) {\n        if (!githubToken) {\n          setError('This looks like a private repo. Add a GitHub PAT (Personal Access Token) to access it.');\n        } else {\n          setError('Authentication failed. Check your token permissions (needs repo access).');\n        }\n      } else if (message.includes('404') || message.includes('not found')) {\n        setError('Repository not found. Check the URL or it might be private (needs PAT).');\n      } else {\n        setError(message);\n      }\n    } finally {\n      setIsCloning(false);\n    }\n  };\n\n  const handleServerConnect = async () => {\n    const urlToUse = serverUrl.trim() || window.location.origin;\n    if (!urlToUse) {\n      setError('Please enter a server URL');\n      return;\n    }\n\n    // Persist URL to localStorage\n    localStorage.setItem('gitnexus-server-url', serverUrl);\n\n    setError(null);\n    setIsConnecting(true);\n    setServerProgress({ phase: 'validating', downloaded: 0, total: null });\n\n    const abortController = new AbortController();\n    abortControllerRef.current = abortController;\n\n    try {\n      const result = await connectToServer(\n        urlToUse,\n        (phase, downloaded, total) => {\n          setServerProgress({ phase, downloaded, total });\n        },\n        abortController.signal\n      );\n\n      if (onServerConnect) {\n        onServerConnect(result, urlToUse);\n      }\n    } catch (err) {\n      if ((err as Error).name === 'AbortError') {\n        // User cancelled\n        return;\n      }\n      console.error('Server connect failed:', err);\n      const message = err instanceof Error ? err.message : 'Failed to connect to server';\n      if (message.includes('Failed to fetch') || message.includes('NetworkError')) {\n        setError('Cannot reach server. 
Check the URL and ensure the server is running.');\n      } else {\n        setError(message);\n      }\n    } finally {\n      setIsConnecting(false);\n      abortControllerRef.current = null;\n    }\n  };\n\n  const handleCancelConnect = () => {\n    abortControllerRef.current?.abort();\n    setIsConnecting(false);\n  };\n\n  const serverProgressPercent = serverProgress.total\n    ? Math.round((serverProgress.downloaded / serverProgress.total) * 100)\n    : null;\n\n  return (\n    <div className=\"flex items-center justify-center min-h-screen p-8 bg-void\">\n      {/* Background gradient effects */}\n      <div className=\"fixed inset-0 pointer-events-none\">\n        <div className=\"absolute top-1/4 left-1/4 w-96 h-96 bg-accent/10 rounded-full blur-3xl\" />\n        <div className=\"absolute bottom-1/4 right-1/4 w-96 h-96 bg-node-interface/10 rounded-full blur-3xl\" />\n      </div>\n\n      <div className=\"relative w-full max-w-lg\">\n        {/* Tab Switcher */}\n        <div className=\"flex mb-4 bg-surface border border-border-default rounded-xl p-1\">\n          <button\n            onClick={() => { setActiveTab('zip'); setError(null); }}\n            className={`\n              flex-1 flex items-center justify-center gap-2 py-2.5 px-4 rounded-lg\n              text-sm font-medium transition-all duration-200\n              ${activeTab === 'zip'\n                ? 'bg-accent text-white shadow-md'\n                : 'text-text-secondary hover:text-text-primary hover:bg-elevated'\n              }\n            `}\n          >\n            <FileArchive className=\"w-4 h-4\" />\n            ZIP Upload\n          </button>\n          <button\n            onClick={() => { setActiveTab('github'); setError(null); }}\n            className={`\n              flex-1 flex items-center justify-center gap-2 py-2.5 px-4 rounded-lg\n              text-sm font-medium transition-all duration-200\n              ${activeTab === 'github'\n                ? 
'bg-accent text-white shadow-md'\n                : 'text-text-secondary hover:text-text-primary hover:bg-elevated'\n              }\n            `}\n          >\n            <Github className=\"w-4 h-4\" />\n            GitHub URL\n          </button>\n          <button\n            onClick={() => { setActiveTab('server'); setError(null); }}\n            className={`\n              flex-1 flex items-center justify-center gap-2 py-2.5 px-4 rounded-lg\n              text-sm font-medium transition-all duration-200\n              ${activeTab === 'server'\n                ? 'bg-accent text-white shadow-md'\n                : 'text-text-secondary hover:text-text-primary hover:bg-elevated'\n              }\n            `}\n          >\n            <Globe className=\"w-4 h-4\" />\n            Server\n          </button>\n        </div>\n\n        {/* Error Message */}\n        {error && (\n          <div className=\"mb-4 p-3 bg-red-500/10 border border-red-500/30 rounded-xl text-red-400 text-sm text-center\">\n            {error}\n          </div>\n        )}\n\n        {/* ZIP Upload Tab */}\n        {activeTab === 'zip' && (\n          <>\n            <div\n              className={`\n                relative p-16\n                bg-surface border-2 border-dashed rounded-3xl\n                transition-all duration-300 cursor-pointer\n                ${isDragging\n                  ? 
'border-accent bg-elevated scale-105 shadow-glow'\n                  : 'border-border-default hover:border-accent/50 hover:bg-elevated/50 animate-breathe'\n                }\n              `}\n              onDragOver={handleDragOver}\n              onDragLeave={handleDragLeave}\n              onDrop={handleDrop}\n              onClick={() => document.getElementById('file-input')?.click()}\n            >\n              <input\n                id=\"file-input\"\n                type=\"file\"\n                accept=\".zip\"\n                className=\"hidden\"\n                onChange={handleFileInput}\n              />\n\n              {/* Icon */}\n              <div className={`\n                mx-auto w-20 h-20 mb-6\n                flex items-center justify-center\n                bg-gradient-to-br from-accent to-node-interface\n                rounded-2xl shadow-glow\n                transition-transform duration-300\n                ${isDragging ? 'scale-110' : ''}\n              `}>\n                {isDragging ? (\n                  <Upload className=\"w-10 h-10 text-white\" />\n                ) : (\n                  <FileArchive className=\"w-10 h-10 text-white\" />\n                )}\n              </div>\n\n              {/* Text */}\n              <h2 className=\"text-xl font-semibold text-text-primary text-center mb-2\">\n                {isDragging ? 'Drop it here!' 
: 'Drop your codebase'}\n              </h2>\n              <p className=\"text-sm text-text-secondary text-center mb-6\">\n                Drag & drop a .zip file to generate a knowledge graph\n              </p>\n\n              {/* Hints */}\n              <div className=\"flex items-center justify-center gap-3 text-xs text-text-muted\">\n                <span className=\"px-3 py-1.5 bg-elevated border border-border-subtle rounded-md\">\n                  .zip\n                </span>\n              </div>\n            </div>\n\n          </>\n        )}\n\n        {/* GitHub URL Tab */}\n        {activeTab === 'github' && (\n          <div className=\"p-8 bg-surface border border-border-default rounded-3xl\">\n            {/* Icon */}\n            <div className=\"mx-auto w-20 h-20 mb-6 flex items-center justify-center bg-gradient-to-br from-[#333] to-[#24292e] rounded-2xl shadow-lg\">\n              <Github className=\"w-10 h-10 text-white\" />\n            </div>\n\n            {/* Text */}\n            <h2 className=\"text-xl font-semibold text-text-primary text-center mb-2\">\n              Clone from GitHub\n            </h2>\n            <p className=\"text-sm text-text-secondary text-center mb-6\">\n              Enter a repository URL to clone directly\n            </p>\n\n            {/* Inputs - wrapped in div to prevent form autofill */}\n            <div className=\"space-y-3\" data-form-type=\"other\">\n              <input\n                type=\"url\"\n                name=\"github-repo-url-input\"\n                value={githubUrl}\n                onChange={(e) => setGithubUrl(e.target.value)}\n                onKeyDown={(e) => e.key === 'Enter' && !isCloning && handleGitClone()}\n                placeholder=\"https://github.com/owner/repo\"\n                disabled={isCloning}\n                autoComplete=\"off\"\n                data-lpignore=\"true\"\n                data-1p-ignore=\"true\"\n                data-form-type=\"other\"\n       
         className=\"\n                  w-full px-4 py-3\n                  bg-elevated border border-border-default rounded-xl\n                  text-text-primary placeholder-text-muted\n                  focus:outline-none focus:border-accent focus:ring-1 focus:ring-accent\n                  disabled:opacity-50 disabled:cursor-not-allowed\n                  transition-all duration-200\n                \"\n              />\n\n              {/* Token input for private repos */}\n              <div className=\"relative\">\n                <div className=\"absolute left-3 top-1/2 -translate-y-1/2 text-text-muted\">\n                  <Key className=\"w-4 h-4\" />\n                </div>\n                <input\n                  type={showToken ? 'text' : 'password'}\n                  name=\"github-pat-token-input\"\n                  value={githubToken}\n                  onChange={(e) => setGithubToken(e.target.value)}\n                  placeholder=\"GitHub PAT (optional, for private repos)\"\n                  disabled={isCloning}\n                  autoComplete=\"new-password\"\n                  data-lpignore=\"true\"\n                  data-1p-ignore=\"true\"\n                  data-form-type=\"other\"\n                  className=\"\n                    w-full pl-10 pr-10 py-3\n                    bg-elevated border border-border-default rounded-xl\n                    text-text-primary placeholder-text-muted\n                    focus:outline-none focus:border-accent focus:ring-1 focus:ring-accent\n                    disabled:opacity-50 disabled:cursor-not-allowed\n                    transition-all duration-200\n                  \"\n                />\n                <button\n                  type=\"button\"\n                  onClick={() => setShowToken(!showToken)}\n                  className=\"absolute right-3 top-1/2 -translate-y-1/2 text-text-muted hover:text-text-secondary transition-colors\"\n                >\n                  {showToken ? 
<EyeOff className=\"w-4 h-4\" /> : <Eye className=\"w-4 h-4\" />}\n                </button>\n              </div>\n\n              <button\n                onClick={handleGitClone}\n                disabled={isCloning || !githubUrl.trim()}\n                className=\"\n                  w-full flex items-center justify-center gap-2\n                  px-4 py-3\n                  bg-accent hover:bg-accent/90\n                  text-white font-medium rounded-xl\n                  disabled:opacity-50 disabled:cursor-not-allowed\n                  transition-all duration-200\n                \"\n              >\n                {isCloning ? (\n                  <>\n                    <Loader2 className=\"w-5 h-5 animate-spin\" />\n                    {cloneProgress.phase === 'cloning'\n                      ? `Cloning... ${cloneProgress.percent}%`\n                      : cloneProgress.phase === 'reading'\n                        ? 'Reading files...'\n                        : 'Starting...'\n                    }\n                  </>\n                ) : (\n                  <>\n                    Clone Repository\n                    <ArrowRight className=\"w-5 h-5\" />\n                  </>\n                )}\n              </button>\n            </div>\n\n            {/* Progress bar */}\n            {isCloning && (\n              <div className=\"mt-4\">\n                <div className=\"h-2 bg-elevated rounded-full overflow-hidden\">\n                  <div\n                    className=\"h-full bg-accent transition-all duration-300 ease-out\"\n                    style={{ width: `${cloneProgress.percent}%` }}\n                  />\n                </div>\n              </div>\n            )}\n\n            {/* Security note */}\n            {githubToken && (\n              <p className=\"mt-3 text-xs text-text-muted text-center\">\n                Token stays in your browser only, never sent to any server\n              </p>\n            )}\n\n           
 {/* Hints */}\n            <div className=\"mt-4 flex items-center justify-center gap-3 text-xs text-text-muted\">\n              <span className=\"px-3 py-1.5 bg-elevated border border-border-subtle rounded-md\">\n                {githubToken ? 'Private + Public' : 'Public repos'}\n              </span>\n              <span className=\"px-3 py-1.5 bg-elevated border border-border-subtle rounded-md\">\n                Shallow clone\n              </span>\n            </div>\n          </div>\n        )}\n\n        {/* Server Tab */}\n        {activeTab === 'server' && (\n          <div className=\"p-8 bg-surface border border-border-default rounded-3xl\">\n            {/* Icon */}\n            <div className=\"mx-auto w-20 h-20 mb-6 flex items-center justify-center bg-gradient-to-br from-accent to-emerald-600 rounded-2xl shadow-lg\">\n              <Globe className=\"w-10 h-10 text-white\" />\n            </div>\n\n            {/* Text */}\n            <h2 className=\"text-xl font-semibold text-text-primary text-center mb-2\">\n              Connect to Server\n            </h2>\n            <p className=\"text-sm text-text-secondary text-center mb-6\">\n              Load a pre-built knowledge graph from a running GitNexus server\n            </p>\n\n            {/* Inputs */}\n            <div className=\"space-y-3\" data-form-type=\"other\">\n              <input\n                type=\"url\"\n                name=\"server-url-input\"\n                value={serverUrl}\n                onChange={(e) => setServerUrl(e.target.value)}\n                onKeyDown={(e) => e.key === 'Enter' && !isConnecting && handleServerConnect()}\n                placeholder={window.location.origin}\n                disabled={isConnecting}\n                autoComplete=\"off\"\n                data-lpignore=\"true\"\n                data-1p-ignore=\"true\"\n                data-form-type=\"other\"\n                className=\"\n                  w-full px-4 py-3\n                  
bg-elevated border border-border-default rounded-xl\n                  text-text-primary placeholder-text-muted\n                  focus:outline-none focus:border-accent focus:ring-1 focus:ring-accent\n                  disabled:opacity-50 disabled:cursor-not-allowed\n                  transition-all duration-200\n                \"\n              />\n\n              <div className=\"flex gap-2\">\n                <button\n                  onClick={handleServerConnect}\n                  disabled={isConnecting}\n                  className=\"\n                    flex-1 flex items-center justify-center gap-2\n                    px-4 py-3\n                    bg-accent hover:bg-accent/90\n                    text-white font-medium rounded-xl\n                    disabled:opacity-50 disabled:cursor-not-allowed\n                    transition-all duration-200\n                  \"\n                >\n                  {isConnecting ? (\n                    <>\n                      <Loader2 className=\"w-5 h-5 animate-spin\" />\n                      {serverProgress.phase === 'validating'\n                        ? 'Validating...'\n                        : serverProgress.phase === 'downloading'\n                          ? serverProgressPercent !== null\n                            ? `Downloading... ${serverProgressPercent}%`\n                            : `Downloading... ${formatBytes(serverProgress.downloaded)}`\n                          : serverProgress.phase === 'extracting'\n                            ? 
'Processing...'\n                            : 'Connecting...'\n                      }\n                    </>\n                  ) : (\n                    <>\n                      Connect\n                      <ArrowRight className=\"w-5 h-5\" />\n                    </>\n                  )}\n                </button>\n\n                {isConnecting && (\n                  <button\n                    onClick={handleCancelConnect}\n                    className=\"\n                      flex items-center justify-center\n                      px-4 py-3\n                      bg-red-500/20 hover:bg-red-500/30\n                      text-red-400 font-medium rounded-xl\n                      transition-all duration-200\n                    \"\n                  >\n                    <X className=\"w-5 h-5\" />\n                  </button>\n                )}\n              </div>\n            </div>\n\n            {/* Progress bar */}\n            {isConnecting && serverProgress.phase === 'downloading' && (\n              <div className=\"mt-4\">\n                <div className=\"h-2 bg-elevated rounded-full overflow-hidden\">\n                  <div\n                    className={`h-full bg-accent transition-all duration-300 ease-out ${\n                      serverProgressPercent === null ? 'animate-pulse' : ''\n                    }`}\n                    style={{\n                      width: serverProgressPercent !== null\n                        ? 
`${serverProgressPercent}%`\n                        : '100%',\n                    }}\n                  />\n                </div>\n                {serverProgress.total && (\n                  <p className=\"mt-1 text-xs text-text-muted text-center\">\n                    {formatBytes(serverProgress.downloaded)} / {formatBytes(serverProgress.total)}\n                  </p>\n                )}\n              </div>\n            )}\n\n            {/* Hints */}\n            <div className=\"mt-4 flex items-center justify-center gap-3 text-xs text-text-muted\">\n              <span className=\"px-3 py-1.5 bg-elevated border border-border-subtle rounded-md\">\n                Pre-indexed\n              </span>\n              <span className=\"px-3 py-1.5 bg-elevated border border-border-subtle rounded-md\">\n                No WASM needed\n              </span>\n            </div>\n          </div>\n        )}\n      </div>\n    </div>\n  );\n};\n"
  },
  {
    "path": "gitnexus-web/src/components/EmbeddingStatus.tsx",
    "content": "import { Brain, Loader2, Check, AlertCircle, Zap, FlaskConical } from 'lucide-react';\nimport { useAppState } from '../hooks/useAppState';\nimport { useState } from 'react';\nimport { WebGPUFallbackDialog } from './WebGPUFallbackDialog';\n\n/**\n * Embedding status indicator and trigger button\n * Shows in header when graph is loaded\n */\nexport const EmbeddingStatus = () => {\n  const {\n    embeddingStatus,\n    embeddingProgress,\n    startEmbeddings,\n    graph,\n    viewMode,\n    serverBaseUrl,\n    testArrayParams,\n  } = useAppState();\n\n  const [testResult, setTestResult] = useState<string | null>(null);\n  const [showFallbackDialog, setShowFallbackDialog] = useState(false);\n\n  // Only show when exploring a loaded graph; hide in backend mode (no WASM DB)\n  if (viewMode !== 'exploring' || !graph || serverBaseUrl) return null;\n\n  const nodeCount = graph.nodes.length;\n\n  const handleStartEmbeddings = async (forceDevice?: 'webgpu' | 'wasm') => {\n    try {\n      await startEmbeddings(forceDevice);\n    } catch (error: any) {\n      // Check if it's a WebGPU not available error\n      if (error?.name === 'WebGPUNotAvailableError' || \n          error?.message?.includes('WebGPU not available')) {\n        setShowFallbackDialog(true);\n      } else {\n        console.error('Embedding failed:', error);\n      }\n    }\n  };\n\n  const handleUseCPU = () => {\n    setShowFallbackDialog(false);\n    handleStartEmbeddings('wasm');\n  };\n\n  const handleSkipEmbeddings = () => {\n    setShowFallbackDialog(false);\n    // Just close - user can try again later if they want\n  };\n  \n  const handleTestArrayParams = async () => {\n    setTestResult('Testing...');\n    const result = await testArrayParams();\n    if (result.success) {\n      setTestResult('✅ Array params WORK!');\n      console.log('✅ Array params test passed!');\n    } else {\n      setTestResult(`❌ ${result.error}`);\n      console.error('❌ Array params test failed:', 
result.error);\n    }\n  };\n\n  // WebGPU fallback dialog - rendered independently of state\n  const fallbackDialog = (\n    <WebGPUFallbackDialog\n      isOpen={showFallbackDialog}\n      onClose={() => setShowFallbackDialog(false)}\n      onUseCPU={handleUseCPU}\n      onSkip={handleSkipEmbeddings}\n      nodeCount={nodeCount}\n    />\n  );\n\n  // Idle state - show button to start\n  if (embeddingStatus === 'idle') {\n    return (\n      <>\n        <div className=\"flex items-center gap-2\">\n          {/* Test button (dev only) */}\n          {import.meta.env.DEV && (\n            <button\n              onClick={handleTestArrayParams}\n              className=\"flex items-center gap-1 px-2 py-1.5 bg-surface border border-border-subtle rounded-lg text-xs text-text-muted hover:bg-hover hover:text-text-secondary transition-all\"\n              title=\"Test if LadybugDB supports array params\"\n            >\n              <FlaskConical className=\"w-3 h-3\" />\n              {testResult || 'Test'}\n            </button>\n          )}\n          \n          <button\n            onClick={() => handleStartEmbeddings()}\n            className=\"flex items-center gap-2 px-3 py-1.5 bg-surface border border-border-subtle rounded-lg text-sm text-text-secondary hover:bg-hover hover:text-text-primary hover:border-accent/50 transition-all group\"\n            title=\"Generate embeddings for semantic search\"\n          >\n            <Brain className=\"w-4 h-4 text-node-interface group-hover:text-accent transition-colors\" />\n            <span className=\"hidden sm:inline\">Enable Semantic Search</span>\n            <Zap className=\"w-3 h-3 text-text-muted\" />\n          </button>\n        </div>\n        {fallbackDialog}\n      </>\n    );\n  }\n\n  // Loading model\n  if (embeddingStatus === 'loading') {\n    const downloadPercent = embeddingProgress?.modelDownloadPercent ?? 
0;\n    return (\n      <>\n        <div className=\"flex items-center gap-2.5 px-3 py-1.5 bg-surface border border-accent/30 rounded-lg text-sm\">\n          <Loader2 className=\"w-4 h-4 text-accent animate-spin\" />\n          <div className=\"flex flex-col gap-0.5\">\n            <span className=\"text-text-secondary text-xs\">Loading AI model...</span>\n            <div className=\"w-24 h-1 bg-elevated rounded-full overflow-hidden\">\n              <div \n                className=\"h-full bg-gradient-to-r from-accent to-node-interface rounded-full transition-all duration-300\"\n                style={{ width: `${downloadPercent}%` }}\n              />\n            </div>\n          </div>\n        </div>\n        {fallbackDialog}\n      </>\n    );\n  }\n\n  // Embedding in progress\n  if (embeddingStatus === 'embedding') {\n    const processed = embeddingProgress?.nodesProcessed ?? 0;\n    const total = embeddingProgress?.totalNodes ?? 0;\n    const percent = embeddingProgress?.percent ?? 
0;\n    \n    return (\n      <div className=\"flex items-center gap-2.5 px-3 py-1.5 bg-surface border border-node-function/30 rounded-lg text-sm\">\n        <Loader2 className=\"w-4 h-4 text-node-function animate-spin\" />\n        <div className=\"flex flex-col gap-0.5\">\n          <span className=\"text-text-secondary text-xs\">\n            Embedding {processed}/{total} nodes\n          </span>\n          <div className=\"w-24 h-1 bg-elevated rounded-full overflow-hidden\">\n            <div \n              className=\"h-full bg-gradient-to-r from-node-function to-accent rounded-full transition-all duration-300\"\n              style={{ width: `${percent}%` }}\n            />\n          </div>\n        </div>\n      </div>\n    );\n  }\n\n  // Indexing\n  if (embeddingStatus === 'indexing') {\n    return (\n      <div className=\"flex items-center gap-2 px-3 py-1.5 bg-surface border border-node-interface/30 rounded-lg text-sm text-text-secondary\">\n        <Loader2 className=\"w-4 h-4 text-node-interface animate-spin\" />\n        <span className=\"text-xs\">Creating vector index...</span>\n      </div>\n    );\n  }\n\n  // Ready\n  if (embeddingStatus === 'ready') {\n    return (\n      <div \n        className=\"flex items-center gap-2 px-3 py-1.5 bg-node-function/10 border border-node-function/30 rounded-lg text-sm text-node-function\"\n        title=\"Semantic search is ready! Use natural language in the AI chat.\"\n      >\n        <Check className=\"w-4 h-4\" />\n        <span className=\"text-xs font-medium\">Semantic Ready</span>\n      </div>\n    );\n  }\n\n  // Error\n  if (embeddingStatus === 'error') {\n    return (\n      <>\n        <button\n          onClick={() => handleStartEmbeddings()}\n          className=\"flex items-center gap-2 px-3 py-1.5 bg-red-500/10 border border-red-500/30 rounded-lg text-sm text-red-400 hover:bg-red-500/20 transition-colors\"\n          title={embeddingProgress?.error || 'Embedding failed. 
Click to retry.'}\n        >\n          <AlertCircle className=\"w-4 h-4\" />\n          <span className=\"text-xs\">Failed - Retry</span>\n        </button>\n        {fallbackDialog}\n      </>\n    );\n  }\n\n  return null;\n};\n\n"
  },
  {
    "path": "gitnexus-web/src/components/FileTreePanel.tsx",
    "content": "import { useState, useMemo, useCallback, useEffect } from 'react';\nimport {\n  ChevronRight,\n  ChevronDown,\n  Folder,\n  FolderOpen,\n  FileCode,\n  Search,\n  Filter,\n  PanelLeftClose,\n  PanelLeft,\n  Box,\n  Braces,\n  Variable,\n  Hash,\n  Target,\n} from 'lucide-react';\nimport { useAppState } from '../hooks/useAppState';\nimport { FILTERABLE_LABELS, NODE_COLORS, ALL_EDGE_TYPES, EDGE_INFO, type EdgeType } from '../lib/constants';\nimport { GraphNode, NodeLabel } from '../core/graph/types';\n\n// Tree node structure\ninterface TreeNode {\n  id: string;\n  name: string;\n  type: 'folder' | 'file';\n  path: string;\n  children: TreeNode[];\n  graphNode?: GraphNode;\n}\n\n// Build tree from graph nodes\nconst buildFileTree = (nodes: GraphNode[]): TreeNode[] => {\n  const root: TreeNode[] = [];\n  const pathMap = new Map<string, TreeNode>();\n\n  // Filter to only folders and files\n  const fileNodes = nodes.filter(n => n.label === 'Folder' || n.label === 'File');\n\n  // Sort by path to ensure parents come before children\n  fileNodes.sort((a, b) => a.properties.filePath.localeCompare(b.properties.filePath));\n\n  fileNodes.forEach(node => {\n    const parts = node.properties.filePath.split('/').filter(Boolean);\n    let currentPath = '';\n    let currentLevel = root;\n\n    parts.forEach((part, index) => {\n      currentPath = currentPath ? `${currentPath}/${part}` : part;\n\n      let existing = pathMap.get(currentPath);\n\n      if (!existing) {\n        const isLastPart = index === parts.length - 1;\n        const isFile = isLastPart && node.label === 'File';\n\n        existing = {\n          id: isLastPart ? node.id : currentPath,\n          name: part,\n          type: isFile ? 'file' : 'folder',\n          path: currentPath,\n          children: [],\n          graphNode: isLastPart ? 
node : undefined,\n        };\n\n        pathMap.set(currentPath, existing);\n        currentLevel.push(existing);\n      }\n\n      currentLevel = existing.children;\n    });\n  });\n\n  return root;\n};\n\n// Tree item component\ninterface TreeItemProps {\n  node: TreeNode;\n  depth: number;\n  searchQuery: string;\n  onNodeClick: (node: TreeNode) => void;\n  expandedPaths: Set<string>;\n  toggleExpanded: (path: string) => void;\n  selectedPath: string | null;\n}\n\nconst TreeItem = ({\n  node,\n  depth,\n  searchQuery,\n  onNodeClick,\n  expandedPaths,\n  toggleExpanded,\n  selectedPath,\n}: TreeItemProps) => {\n  const isExpanded = expandedPaths.has(node.path);\n  const isSelected = selectedPath === node.path;\n  const hasChildren = node.children.length > 0;\n\n  // Filter children based on search\n  const filteredChildren = useMemo(() => {\n    if (!searchQuery) return node.children;\n    return node.children.filter(child =>\n      child.name.toLowerCase().includes(searchQuery.toLowerCase()) ||\n      child.children.some(c => c.name.toLowerCase().includes(searchQuery.toLowerCase()))\n    );\n  }, [node.children, searchQuery]);\n\n  // Check if this node matches search\n  const matchesSearch = searchQuery && node.name.toLowerCase().includes(searchQuery.toLowerCase());\n\n  const handleClick = () => {\n    if (hasChildren) {\n      toggleExpanded(node.path);\n    }\n    onNodeClick(node);\n  };\n\n  return (\n    <div>\n      <button\n        onClick={handleClick}\n        className={`\n          w-full flex items-center gap-1.5 px-2 py-1 text-left text-sm\n          hover:bg-hover transition-colors rounded relative\n          ${isSelected ? 'bg-amber-500/15 text-amber-300 border-l-2 border-amber-400' : 'text-text-secondary hover:text-text-primary border-l-2 border-transparent'}\n          ${matchesSearch ? 
'bg-accent/10' : ''}\n        `}\n        style={{ paddingLeft: `${depth * 12 + 8}px` }}\n      >\n        {/* Expand/collapse icon */}\n        {hasChildren ? (\n          isExpanded ? (\n            <ChevronDown className=\"w-3.5 h-3.5 shrink-0 text-text-muted\" />\n          ) : (\n            <ChevronRight className=\"w-3.5 h-3.5 shrink-0 text-text-muted\" />\n          )\n        ) : (\n          <span className=\"w-3.5\" />\n        )}\n\n        {/* Node icon */}\n        {node.type === 'folder' ? (\n          isExpanded ? (\n            <FolderOpen className=\"w-4 h-4 shrink-0\" style={{ color: NODE_COLORS.Folder }} />\n          ) : (\n            <Folder className=\"w-4 h-4 shrink-0\" style={{ color: NODE_COLORS.Folder }} />\n          )\n        ) : (\n          <FileCode className=\"w-4 h-4 shrink-0\" style={{ color: NODE_COLORS.File }} />\n        )}\n\n        {/* Name */}\n        <span className=\"truncate font-mono text-xs\">{node.name}</span>\n      </button>\n\n      {/* Children */}\n      {isExpanded && filteredChildren.length > 0 && (\n        <div>\n          {filteredChildren.map(child => (\n            <TreeItem\n              key={child.id}\n              node={child}\n              depth={depth + 1}\n              searchQuery={searchQuery}\n              onNodeClick={onNodeClick}\n              expandedPaths={expandedPaths}\n              toggleExpanded={toggleExpanded}\n              selectedPath={selectedPath}\n            />\n          ))}\n        </div>\n      )}\n    </div>\n  );\n};\n\n// Icon for node types\nconst getNodeTypeIcon = (label: NodeLabel) => {\n  switch (label) {\n    case 'Folder': return Folder;\n    case 'File': return FileCode;\n    case 'Class': return Box;\n    case 'Function': return Braces;\n    case 'Method': return Braces;\n    case 'Interface': return Hash;\n    case 'Import': return FileCode;\n    default: return Variable;\n  }\n};\n\ninterface FileTreePanelProps {\n  onFocusNode: (nodeId: string) => 
void;\n}\n\nexport const FileTreePanel = ({ onFocusNode }: FileTreePanelProps) => {\n  const { graph, visibleLabels, toggleLabelVisibility, visibleEdgeTypes, toggleEdgeVisibility, selectedNode, setSelectedNode, openCodePanel, depthFilter, setDepthFilter } = useAppState();\n\n  const [isCollapsed, setIsCollapsed] = useState(false);\n  const [searchQuery, setSearchQuery] = useState('');\n  const [expandedPaths, setExpandedPaths] = useState<Set<string>>(new Set());\n  const [activeTab, setActiveTab] = useState<'files' | 'filters'>('files');\n\n  // Build file tree from graph\n  const fileTree = useMemo(() => {\n    if (!graph) return [];\n    return buildFileTree(graph.nodes);\n  }, [graph]);\n\n  // Auto-expand first level on initial load\n  useEffect(() => {\n    if (fileTree.length > 0 && expandedPaths.size === 0) {\n      const firstLevel = new Set(fileTree.map(n => n.path));\n      setExpandedPaths(firstLevel);\n    }\n  }, [fileTree.length]); // Only run when tree first loads\n\n  // Auto-expand to selected file when selectedNode changes (e.g., from graph click)\n  useEffect(() => {\n    const path = selectedNode?.properties?.filePath;\n    if (!path) return;\n\n    // Expand all parent folders leading to this file\n    const parts = path.split('/').filter(Boolean);\n    const pathsToExpand: string[] = [];\n    let currentPath = '';\n\n    // Build all parent paths (exclude the last part if it's a file)\n    for (let i = 0; i < parts.length - 1; i++) {\n      currentPath = currentPath ? 
`${currentPath}/${parts[i]}` : parts[i];\n      pathsToExpand.push(currentPath);\n    }\n\n    if (pathsToExpand.length > 0) {\n      setExpandedPaths(prev => {\n        const next = new Set(prev);\n        pathsToExpand.forEach(p => next.add(p));\n        return next;\n      });\n    }\n  }, [selectedNode?.id]); // Trigger when selected node changes\n\n  const toggleExpanded = useCallback((path: string) => {\n    setExpandedPaths(prev => {\n      const next = new Set(prev);\n      if (next.has(path)) {\n        next.delete(path);\n      } else {\n        next.add(path);\n      }\n      return next;\n    });\n  }, []);\n\n  const handleNodeClick = useCallback((treeNode: TreeNode) => {\n    if (treeNode.graphNode) {\n      // Only focus if selecting a different node\n      const isSameNode = selectedNode?.id === treeNode.graphNode.id;\n      setSelectedNode(treeNode.graphNode);\n      openCodePanel();\n      if (!isSameNode) {\n        onFocusNode(treeNode.graphNode.id);\n      }\n    }\n  }, [setSelectedNode, openCodePanel, onFocusNode, selectedNode]);\n\n  const selectedPath = selectedNode?.properties.filePath || null;\n\n  if (isCollapsed) {\n    return (\n      <div className=\"h-full w-12 bg-surface border-r border-border-subtle flex flex-col items-center py-3 gap-2\">\n        <button\n          onClick={() => setIsCollapsed(false)}\n          className=\"p-2 text-text-secondary hover:text-text-primary hover:bg-hover rounded transition-colors\"\n          title=\"Expand Panel\"\n        >\n          <PanelLeft className=\"w-5 h-5\" />\n        </button>\n        <div className=\"w-6 h-px bg-border-subtle my-1\" />\n        <button\n          onClick={() => { setIsCollapsed(false); setActiveTab('files'); }}\n          className={`p-2 rounded transition-colors ${activeTab === 'files' ? 
'text-accent bg-accent/10' : 'text-text-secondary hover:text-text-primary hover:bg-hover'}`}\n          title=\"File Explorer\"\n        >\n          <Folder className=\"w-5 h-5\" />\n        </button>\n        <button\n          onClick={() => { setIsCollapsed(false); setActiveTab('filters'); }}\n          className={`p-2 rounded transition-colors ${activeTab === 'filters' ? 'text-accent bg-accent/10' : 'text-text-secondary hover:text-text-primary hover:bg-hover'}`}\n          title=\"Filters\"\n        >\n          <Filter className=\"w-5 h-5\" />\n        </button>\n      </div>\n    );\n  }\n\n  return (\n    <div className=\"h-full w-64 bg-surface border-r border-border-subtle flex flex-col animate-slide-in\">\n      {/* Header */}\n      <div className=\"flex items-center justify-between px-3 py-2 border-b border-border-subtle\">\n        <div className=\"flex items-center gap-1\">\n          <button\n            onClick={() => setActiveTab('files')}\n            className={`px-2 py-1 text-xs rounded transition-colors ${activeTab === 'files'\n              ? 'bg-accent/20 text-accent'\n              : 'text-text-secondary hover:text-text-primary hover:bg-hover'\n              }`}\n          >\n            Explorer\n          </button>\n          <button\n            onClick={() => setActiveTab('filters')}\n            className={`px-2 py-1 text-xs rounded transition-colors ${activeTab === 'filters'\n              ? 
'bg-accent/20 text-accent'\n              : 'text-text-secondary hover:text-text-primary hover:bg-hover'\n              }`}\n          >\n            Filters\n          </button>\n        </div>\n        <button\n          onClick={() => setIsCollapsed(true)}\n          className=\"p-1 text-text-muted hover:text-text-primary hover:bg-hover rounded transition-colors\"\n          title=\"Collapse Panel\"\n        >\n          <PanelLeftClose className=\"w-4 h-4\" />\n        </button>\n      </div>\n\n      {activeTab === 'files' && (\n        <>\n          {/* Search */}\n          <div className=\"px-3 py-2 border-b border-border-subtle\">\n            <div className=\"relative\">\n              <Search className=\"absolute left-2.5 top-1/2 -translate-y-1/2 w-3.5 h-3.5 text-text-muted\" />\n              <input\n                type=\"text\"\n                placeholder=\"Search files...\"\n                value={searchQuery}\n                onChange={(e) => setSearchQuery(e.target.value)}\n                className=\"w-full pl-8 pr-3 py-1.5 bg-elevated border border-border-subtle rounded text-xs text-text-primary placeholder:text-text-muted focus:outline-none focus:border-accent\"\n              />\n            </div>\n          </div>\n\n          {/* File tree */}\n          <div className=\"flex-1 overflow-y-auto scrollbar-thin py-2\">\n            {fileTree.length === 0 ? 
(\n              <div className=\"px-3 py-4 text-center text-text-muted text-xs\">\n                No files loaded\n              </div>\n            ) : (\n              fileTree.map(node => (\n                <TreeItem\n                  key={node.id}\n                  node={node}\n                  depth={0}\n                  searchQuery={searchQuery}\n                  onNodeClick={handleNodeClick}\n                  expandedPaths={expandedPaths}\n                  toggleExpanded={toggleExpanded}\n                  selectedPath={selectedPath}\n                />\n              ))\n            )}\n          </div>\n        </>\n      )}\n\n      {activeTab === 'filters' && (\n        <div className=\"flex-1 overflow-y-auto scrollbar-thin p-3\">\n          <div className=\"mb-3\">\n            <h3 className=\"text-xs font-medium text-text-secondary uppercase tracking-wide mb-2\">\n              Node Types\n            </h3>\n            <p className=\"text-[11px] text-text-muted mb-3\">\n              Toggle visibility of node types in the graph\n            </p>\n          </div>\n\n          <div className=\"flex flex-col gap-1\">\n            {FILTERABLE_LABELS.map((label) => {\n              const Icon = getNodeTypeIcon(label);\n              const isVisible = visibleLabels.includes(label);\n\n              return (\n                <button\n                  key={label}\n                  onClick={() => toggleLabelVisibility(label)}\n                  className={`\n                    flex items-center gap-2.5 px-2 py-1.5 rounded text-left transition-colors\n                    ${isVisible\n                      ? 'bg-elevated text-text-primary'\n                      : 'text-text-muted hover:bg-hover hover:text-text-secondary'\n                    }\n                  `}\n                >\n                  <div\n                    className={`w-5 h-5 rounded flex items-center justify-center ${isVisible ? 
'' : 'opacity-40'}`}\n                    style={{ backgroundColor: `${NODE_COLORS[label]}20` }}\n                  >\n                    <Icon className=\"w-3 h-3\" style={{ color: NODE_COLORS[label] }} />\n                  </div>\n                  <span className=\"text-xs flex-1\">{label}</span>\n                  <div\n                    className={`w-2 h-2 rounded-full transition-colors ${isVisible ? 'bg-accent' : 'bg-border-subtle'}`}\n                  />\n                </button>\n              );\n            })}\n          </div>\n\n          {/* Edge Type Toggles */}\n          <div className=\"mt-6 pt-4 border-t border-border-subtle\">\n            <h3 className=\"text-xs font-medium text-text-secondary uppercase tracking-wide mb-2\">\n              Edge Types\n            </h3>\n            <p className=\"text-[11px] text-text-muted mb-3\">\n              Toggle visibility of relationship types\n            </p>\n\n            <div className=\"flex flex-col gap-1\">\n              {ALL_EDGE_TYPES.map((edgeType) => {\n                const info = EDGE_INFO[edgeType];\n                const isVisible = visibleEdgeTypes.includes(edgeType);\n\n                return (\n                  <button\n                    key={edgeType}\n                    onClick={() => toggleEdgeVisibility(edgeType)}\n                    className={`\n                      flex items-center gap-2.5 px-2 py-1.5 rounded text-left transition-colors\n                      ${isVisible\n                        ? 'bg-elevated text-text-primary'\n                        : 'text-text-muted hover:bg-hover hover:text-text-secondary'\n                      }\n                    `}\n                  >\n                    <div\n                      className={`w-6 h-1.5 rounded-full ${isVisible ? 
'' : 'opacity-40'}`}\n                      style={{ backgroundColor: info.color }}\n                    />\n                    <span className=\"text-xs flex-1\">{info.label}</span>\n                    <div\n                      className={`w-2 h-2 rounded-full transition-colors ${isVisible ? 'bg-accent' : 'bg-border-subtle'}`}\n                    />\n                  </button>\n                );\n              })}\n            </div>\n          </div>\n\n          {/* Depth Filter */}\n          <div className=\"mt-6 pt-4 border-t border-border-subtle\">\n            <h3 className=\"text-xs font-medium text-text-secondary uppercase tracking-wide mb-2\">\n              <Target className=\"w-3 h-3 inline mr-1.5\" />\n              Focus Depth\n            </h3>\n            <p className=\"text-[11px] text-text-muted mb-3\">\n              Show nodes within N hops of selection\n            </p>\n\n            <div className=\"flex flex-wrap gap-1.5\">\n              {[\n                { value: null, label: 'All' },\n                { value: 1, label: '1 hop' },\n                { value: 2, label: '2 hops' },\n                { value: 3, label: '3 hops' },\n                { value: 5, label: '5 hops' },\n              ].map(({ value, label }) => (\n                <button\n                  key={label}\n                  onClick={() => setDepthFilter(value)}\n                  className={`\n                    px-2 py-1 text-xs rounded transition-colors\n                    ${depthFilter === value\n                      ? 
'bg-accent text-white'\n                      : 'bg-elevated text-text-secondary hover:bg-hover hover:text-text-primary'\n                    }\n                  `}\n                >\n                  {label}\n                </button>\n              ))}\n            </div>\n\n            {depthFilter !== null && !selectedNode && (\n              <p className=\"mt-2 text-[10px] text-amber-400\">\n                Select a node to apply depth filter\n              </p>\n            )}\n          </div>\n\n          {/* Legend */}\n          <div className=\"mt-6 pt-4 border-t border-border-subtle\">\n            <h3 className=\"text-xs font-medium text-text-secondary uppercase tracking-wide mb-3\">\n              Color Legend\n            </h3>\n            <div className=\"grid grid-cols-2 gap-2\">\n              {(['Folder', 'File', 'Class', 'Function', 'Interface', 'Method'] as NodeLabel[]).map(label => (\n                <div key={label} className=\"flex items-center gap-1.5\">\n                  <div\n                    className=\"w-2.5 h-2.5 rounded-full\"\n                    style={{ backgroundColor: NODE_COLORS[label] }}\n                  />\n                  <span className=\"text-[10px] text-text-muted\">{label}</span>\n                </div>\n              ))}\n            </div>\n          </div>\n        </div>\n      )}\n\n      {/* Stats footer */}\n      {graph && (\n        <div className=\"px-3 py-2 border-t border-border-subtle bg-elevated/50\">\n          <div className=\"flex items-center justify-between text-[10px] text-text-muted\">\n            <span>{graph.nodes.length} nodes</span>\n            <span>{graph.relationships.length} edges</span>\n          </div>\n        </div>\n      )}\n    </div>\n  );\n};\n\n"
  },
  {
    "path": "gitnexus-web/src/components/GraphCanvas.tsx",
    "content": "import { useEffect, useCallback, useMemo, useState, forwardRef, useImperativeHandle } from 'react';\nimport { ZoomIn, ZoomOut, Maximize2, Focus, RotateCcw, Play, Pause, Lightbulb, LightbulbOff } from 'lucide-react';\nimport { useSigma } from '../hooks/useSigma';\nimport { useAppState } from '../hooks/useAppState';\nimport { knowledgeGraphToGraphology, filterGraphByDepth, SigmaNodeAttributes, SigmaEdgeAttributes } from '../lib/graph-adapter';\nimport { QueryFAB } from './QueryFAB';\nimport Graph from 'graphology';\n\nexport interface GraphCanvasHandle {\n  focusNode: (nodeId: string) => void;\n}\n\nexport const GraphCanvas = forwardRef<GraphCanvasHandle>((_, ref) => {\n  const {\n    graph,\n    setSelectedNode,\n    selectedNode: appSelectedNode,\n    visibleLabels,\n    visibleEdgeTypes,\n    openCodePanel,\n    depthFilter,\n    highlightedNodeIds,\n    setHighlightedNodeIds,\n    aiCitationHighlightedNodeIds,\n    aiToolHighlightedNodeIds,\n    blastRadiusNodeIds,\n    isAIHighlightsEnabled,\n    toggleAIHighlights,\n    animatedNodes,\n  } = useAppState();\n  const [hoveredNodeName, setHoveredNodeName] = useState<string | null>(null);\n\n  const effectiveHighlightedNodeIds = useMemo(() => {\n    if (!isAIHighlightsEnabled) return highlightedNodeIds;\n    const next = new Set(highlightedNodeIds);\n    for (const id of aiCitationHighlightedNodeIds) next.add(id);\n    for (const id of aiToolHighlightedNodeIds) next.add(id);\n    // Note: blast radius nodes are handled separately with red color\n    return next;\n  }, [highlightedNodeIds, aiCitationHighlightedNodeIds, aiToolHighlightedNodeIds, isAIHighlightsEnabled]);\n\n  // Blast radius nodes (only when AI highlights enabled)\n  const effectiveBlastRadiusNodeIds = useMemo(() => {\n    if (!isAIHighlightsEnabled) return new Set<string>();\n    return blastRadiusNodeIds;\n  }, [blastRadiusNodeIds, isAIHighlightsEnabled]);\n\n  // Animated nodes (only when AI highlights enabled)\n  const 
effectiveAnimatedNodes = useMemo(() => {\n    if (!isAIHighlightsEnabled) return new Map();\n    return animatedNodes;\n  }, [animatedNodes, isAIHighlightsEnabled]);\n\n  const handleNodeClick = useCallback((nodeId: string) => {\n    if (!graph) return;\n    const node = graph.nodes.find(n => n.id === nodeId);\n    if (node) {\n      setSelectedNode(node);\n      openCodePanel();\n    }\n  }, [graph, setSelectedNode, openCodePanel]);\n\n  const handleNodeHover = useCallback((nodeId: string | null) => {\n    if (!nodeId || !graph) {\n      setHoveredNodeName(null);\n      return;\n    }\n    const node = graph.nodes.find(n => n.id === nodeId);\n    if (node) {\n      setHoveredNodeName(node.properties.name);\n    }\n  }, [graph]);\n\n  const handleStageClick = useCallback(() => {\n    setSelectedNode(null);\n  }, [setSelectedNode]);\n\n  const {\n    containerRef,\n    sigmaRef,\n    setGraph: setSigmaGraph,\n    zoomIn,\n    zoomOut,\n    resetZoom,\n    focusNode,\n    isLayoutRunning,\n    startLayout,\n    stopLayout,\n    selectedNode: sigmaSelectedNode,\n    setSelectedNode: setSigmaSelectedNode,\n  } = useSigma({\n    onNodeClick: handleNodeClick,\n    onNodeHover: handleNodeHover,\n    onStageClick: handleStageClick,\n    highlightedNodeIds: effectiveHighlightedNodeIds,\n    blastRadiusNodeIds: effectiveBlastRadiusNodeIds,\n    animatedNodes: effectiveAnimatedNodes,\n    visibleEdgeTypes,\n  });\n\n  // Expose focusNode to parent via ref\n  useImperativeHandle(ref, () => ({\n    focusNode: (nodeId: string) => {\n      // Also update app state so the selection syncs properly\n      if (graph) {\n        const node = graph.nodes.find(n => n.id === nodeId);\n        if (node) {\n          setSelectedNode(node);\n          openCodePanel();\n        }\n      }\n      focusNode(nodeId);\n    }\n  }), [focusNode, graph, setSelectedNode, openCodePanel]);\n\n  // Update Sigma graph when KnowledgeGraph changes\n  useEffect(() => {\n    if (!graph) return;\n\n    // 
Build communityMemberships map from MEMBER_OF relationships\n    // MEMBER_OF edges: nodeId -> communityId (stored as targetId)\n    const communityMemberships = new Map<string, number>();\n    graph.relationships.forEach(rel => {\n      if (rel.type === 'MEMBER_OF') {\n        // Find the community node to get its index\n        const communityNode = graph.nodes.find(n => n.id === rel.targetId && n.label === 'Community');\n        if (communityNode) {\n          // Extract community index from id (e.g., \"comm_5\" -> 5)\n          const communityIdx = parseInt(rel.targetId.replace('comm_', ''), 10) || 0;\n          communityMemberships.set(rel.sourceId, communityIdx);\n        }\n      }\n    });\n\n    const sigmaGraph = knowledgeGraphToGraphology(graph, communityMemberships);\n    setSigmaGraph(sigmaGraph);\n  }, [graph, setSigmaGraph]);\n\n  // Update node visibility when filters change\n  useEffect(() => {\n    const sigma = sigmaRef.current;\n    if (!sigma) return;\n\n    const sigmaGraph = sigma.getGraph() as Graph<SigmaNodeAttributes, SigmaEdgeAttributes>;\n    if (sigmaGraph.order === 0) return; // Don't filter empty graph\n\n    filterGraphByDepth(sigmaGraph, appSelectedNode?.id || null, depthFilter, visibleLabels);\n    sigma.refresh();\n  }, [visibleLabels, depthFilter, appSelectedNode, sigmaRef]);\n\n  // Sync app selected node with sigma\n  useEffect(() => {\n    if (appSelectedNode) {\n      setSigmaSelectedNode(appSelectedNode.id);\n    } else {\n      setSigmaSelectedNode(null);\n    }\n  }, [appSelectedNode, setSigmaSelectedNode]);\n\n  // Focus on selected node\n  const handleFocusSelected = useCallback(() => {\n    if (appSelectedNode) {\n      focusNode(appSelectedNode.id);\n    }\n  }, [appSelectedNode, focusNode]);\n\n  // Clear selection\n  const handleClearSelection = useCallback(() => {\n    setSelectedNode(null);\n    setSigmaSelectedNode(null);\n    resetZoom();\n  }, [setSelectedNode, setSigmaSelectedNode, resetZoom]);\n\n  return (\n  
  <div className=\"relative w-full h-full bg-void\">\n      {/* Background gradient */}\n      <div className=\"absolute inset-0 pointer-events-none\">\n        <div\n          className=\"absolute inset-0\"\n          style={{\n            background: `\n              radial-gradient(circle at 50% 50%, rgba(124, 58, 237, 0.03) 0%, transparent 70%),\n              linear-gradient(to bottom, #06060a, #0a0a10)\n            `\n          }}\n        />\n      </div>\n\n      {/* Sigma container */}\n      <div\n        ref={containerRef}\n        className=\"sigma-container w-full h-full cursor-grab active:cursor-grabbing\"\n      />\n\n      {/* Hovered node tooltip - only show when NOT selected */}\n      {hoveredNodeName && !sigmaSelectedNode && (\n        <div className=\"absolute top-4 left-1/2 -translate-x-1/2 px-3 py-1.5 bg-elevated/95 border border-border-subtle rounded-lg backdrop-blur-sm z-20 pointer-events-none animate-fade-in\">\n          <span className=\"font-mono text-sm text-text-primary\">{hoveredNodeName}</span>\n        </div>\n      )}\n\n      {/* Selection info bar */}\n      {sigmaSelectedNode && appSelectedNode && (\n        <div className=\"absolute top-4 left-1/2 -translate-x-1/2 flex items-center gap-2 px-4 py-2 bg-accent/20 border border-accent/30 rounded-xl backdrop-blur-sm z-20 animate-slide-up\">\n          <div className=\"w-2 h-2 bg-accent rounded-full animate-pulse\" />\n          <span className=\"font-mono text-sm text-text-primary\">\n            {appSelectedNode.properties.name}\n          </span>\n          <span className=\"text-xs text-text-muted\">\n            ({appSelectedNode.label})\n          </span>\n          <button\n            onClick={handleClearSelection}\n            className=\"ml-2 px-2 py-0.5 text-xs text-text-secondary hover:text-text-primary hover:bg-white/10 rounded transition-colors\"\n          >\n            Clear\n          </button>\n        </div>\n      )}\n\n      {/* Graph Controls - Bottom Right 
*/}\n      <div className=\"absolute bottom-4 right-4 flex flex-col gap-1 z-10\">\n        <button\n          onClick={zoomIn}\n          className=\"w-9 h-9 flex items-center justify-center bg-elevated border border-border-subtle rounded-md text-text-secondary hover:bg-hover hover:text-text-primary transition-colors\"\n          title=\"Zoom In\"\n        >\n          <ZoomIn className=\"w-4 h-4\" />\n        </button>\n        <button\n          onClick={zoomOut}\n          className=\"w-9 h-9 flex items-center justify-center bg-elevated border border-border-subtle rounded-md text-text-secondary hover:bg-hover hover:text-text-primary transition-colors\"\n          title=\"Zoom Out\"\n        >\n          <ZoomOut className=\"w-4 h-4\" />\n        </button>\n        <button\n          onClick={resetZoom}\n          className=\"w-9 h-9 flex items-center justify-center bg-elevated border border-border-subtle rounded-md text-text-secondary hover:bg-hover hover:text-text-primary transition-colors\"\n          title=\"Fit to Screen\"\n        >\n          <Maximize2 className=\"w-4 h-4\" />\n        </button>\n\n        {/* Divider */}\n        <div className=\"h-px bg-border-subtle my-1\" />\n\n        {/* Focus on selected */}\n        {appSelectedNode && (\n          <button\n            onClick={handleFocusSelected}\n            className=\"w-9 h-9 flex items-center justify-center bg-accent/20 border border-accent/30 rounded-md text-accent hover:bg-accent/30 transition-colors\"\n            title=\"Focus on Selected Node\"\n          >\n            <Focus className=\"w-4 h-4\" />\n          </button>\n        )}\n\n        {/* Clear selection */}\n        {sigmaSelectedNode && (\n          <button\n            onClick={handleClearSelection}\n            className=\"w-9 h-9 flex items-center justify-center bg-elevated border border-border-subtle rounded-md text-text-secondary hover:bg-hover hover:text-text-primary transition-colors\"\n            title=\"Clear 
Selection\"\n          >\n            <RotateCcw className=\"w-4 h-4\" />\n          </button>\n        )}\n\n        {/* Divider */}\n        <div className=\"h-px bg-border-subtle my-1\" />\n\n        {/* Layout control */}\n        <button\n          onClick={isLayoutRunning ? stopLayout : startLayout}\n          className={`\n            w-9 h-9 flex items-center justify-center border rounded-md transition-all\n            ${isLayoutRunning\n              ? 'bg-accent border-accent text-white shadow-glow animate-pulse'\n              : 'bg-elevated border-border-subtle text-text-secondary hover:bg-hover hover:text-text-primary'\n            }\n          `}\n          title={isLayoutRunning ? 'Stop Layout' : 'Run Layout Again'}\n        >\n          {isLayoutRunning ? (\n            <Pause className=\"w-4 h-4\" />\n          ) : (\n            <Play className=\"w-4 h-4\" />\n          )}\n        </button>\n      </div>\n\n      {/* Layout running indicator */}\n      {isLayoutRunning && (\n        <div className=\"absolute bottom-4 left-1/2 -translate-x-1/2 flex items-center gap-2 px-3 py-1.5 bg-emerald-500/20 border border-emerald-500/30 rounded-full backdrop-blur-sm z-10 animate-fade-in\">\n          <div className=\"w-2 h-2 bg-emerald-400 rounded-full animate-ping\" />\n          <span className=\"text-xs text-emerald-400 font-medium\">Layout optimizing...</span>\n        </div>\n      )}\n\n      {/* Query FAB */}\n      <QueryFAB />\n\n      {/* AI Highlights toggle - Top Right */}\n      <div className=\"absolute top-4 right-4 z-20\">\n        <button\n          onClick={() => {\n            // If turning off, also clear process highlights\n            if (isAIHighlightsEnabled) {\n              setHighlightedNodeIds(new Set());\n            }\n            toggleAIHighlights();\n          }}\n          className={\n            isAIHighlightsEnabled\n              ? 
'w-10 h-10 flex items-center justify-center bg-cyan-500/15 border border-cyan-400/40 rounded-lg text-cyan-200 hover:bg-cyan-500/20 hover:border-cyan-300/60 transition-colors'\n              : 'w-10 h-10 flex items-center justify-center bg-elevated border border-border-subtle rounded-lg text-text-muted hover:bg-hover hover:text-text-primary transition-colors'\n          }\n          title={isAIHighlightsEnabled ? 'Turn off all highlights' : 'Turn on AI highlights'}\n        >\n          {isAIHighlightsEnabled ? <Lightbulb className=\"w-4 h-4\" /> : <LightbulbOff className=\"w-4 h-4\" />}\n        </button>\n      </div>\n    </div>\n  );\n});\n\nGraphCanvas.displayName = 'GraphCanvas';\n"
  },
  {
    "path": "gitnexus-web/src/components/Header.tsx",
    "content": "import { Search, Settings, HelpCircle, Sparkles, Github, Star, ChevronDown } from 'lucide-react';\nimport { useAppState } from '../hooks/useAppState';\nimport type { RepoSummary } from '../services/server-connection';\nimport { useState, useMemo, useRef, useEffect, useCallback } from 'react';\nimport { GraphNode } from '../core/graph/types';\nimport { EmbeddingStatus } from './EmbeddingStatus';\n\n// Color mapping for node types in search results\nconst NODE_TYPE_COLORS: Record<string, string> = {\n  Folder: '#6366f1',\n  File: '#3b82f6',\n  Function: '#10b981',\n  Class: '#f59e0b',\n  Method: '#14b8a6',\n  Interface: '#ec4899',\n  Variable: '#64748b',\n  Import: '#475569',\n  Type: '#a78bfa',\n};\n\ninterface HeaderProps {\n  onFocusNode?: (nodeId: string) => void;\n  availableRepos?: RepoSummary[];\n  onSwitchRepo?: (repoName: string) => void;\n}\n\nexport const Header = ({ onFocusNode, availableRepos = [], onSwitchRepo }: HeaderProps) => {\n  const {\n    projectName,\n    graph,\n    openChatPanel,\n    isRightPanelOpen,\n    rightPanelTab,\n    setSettingsPanelOpen,\n  } = useAppState();\n  const [isRepoDropdownOpen, setIsRepoDropdownOpen] = useState(false);\n  const repoDropdownRef = useRef<HTMLDivElement>(null);\n  const [searchQuery, setSearchQuery] = useState('');\n  const [isSearchOpen, setIsSearchOpen] = useState(false);\n  const [selectedIndex, setSelectedIndex] = useState(0);\n  const searchRef = useRef<HTMLDivElement>(null);\n  const inputRef = useRef<HTMLInputElement>(null);\n\n  const nodeCount = graph?.nodes.length ?? 0;\n  const edgeCount = graph?.relationships.length ?? 
0;\n\n  // Search results - filter nodes by name\n  const searchResults = useMemo(() => {\n    if (!graph || !searchQuery.trim()) return [];\n\n    const query = searchQuery.toLowerCase();\n    return graph.nodes\n      .filter(node => node.properties.name.toLowerCase().includes(query))\n      .slice(0, 10); // Limit to 10 results\n  }, [graph, searchQuery]);\n\n  // Handle clicking outside to close dropdowns\n  useEffect(() => {\n    const handleClickOutside = (e: MouseEvent) => {\n      if (searchRef.current && !searchRef.current.contains(e.target as Node)) {\n        setIsSearchOpen(false);\n      }\n      if (repoDropdownRef.current && !repoDropdownRef.current.contains(e.target as Node)) {\n        setIsRepoDropdownOpen(false);\n      }\n    };\n    document.addEventListener('mousedown', handleClickOutside);\n    return () => document.removeEventListener('mousedown', handleClickOutside);\n  }, []);\n\n  // Keyboard shortcut (Cmd+K / Ctrl+K)\n  useEffect(() => {\n    const handleKeyDown = (e: KeyboardEvent) => {\n      if ((e.metaKey || e.ctrlKey) && e.key === 'k') {\n        e.preventDefault();\n        inputRef.current?.focus();\n        setIsSearchOpen(true);\n      }\n      if (e.key === 'Escape') {\n        setIsSearchOpen(false);\n        inputRef.current?.blur();\n      }\n    };\n    document.addEventListener('keydown', handleKeyDown);\n    return () => document.removeEventListener('keydown', handleKeyDown);\n  }, []);\n\n  // Handle keyboard navigation in results\n  const handleKeyDown = (e: React.KeyboardEvent) => {\n    if (!isSearchOpen || searchResults.length === 0) return;\n\n    if (e.key === 'ArrowDown') {\n      e.preventDefault();\n      setSelectedIndex(i => Math.min(i + 1, searchResults.length - 1));\n    } else if (e.key === 'ArrowUp') {\n      e.preventDefault();\n      setSelectedIndex(i => Math.max(i - 1, 0));\n    } else if (e.key === 'Enter') {\n      e.preventDefault();\n      const selected = searchResults[selectedIndex];\n      if 
(selected) {\n        handleSelectNode(selected);\n      }\n    }\n  };\n\n  const handleSelectNode = (node: GraphNode) => {\n    // onFocusNode handles both camera focus AND selection in useSigma\n    onFocusNode?.(node.id);\n    setSearchQuery('');\n    setIsSearchOpen(false);\n    setSelectedIndex(0);\n  };\n\n  return (\n    <header className=\"flex items-center justify-between px-5 py-3 bg-deep border-b border-dashed border-border-subtle\">\n      {/* Left section */}\n      <div className=\"flex items-center gap-4\">\n        {/* Logo */}\n        <div className=\"flex items-center gap-2.5\">\n          <div className=\"w-7 h-7 flex items-center justify-center bg-gradient-to-br from-accent to-node-interface rounded-md shadow-glow text-white text-sm font-bold\">\n            ◇\n          </div>\n          <span className=\"font-semibold text-[15px] tracking-tight\">GitNexus</span>\n        </div>\n\n        {/* Project badge / Repo selector dropdown */}\n        {projectName && (\n          <div className=\"relative\" ref={repoDropdownRef}>\n            <button\n              onClick={() => availableRepos.length >= 2 && setIsRepoDropdownOpen(prev => !prev)}\n              className={`flex items-center gap-2 px-3 py-1.5 bg-surface border border-border-subtle rounded-lg text-sm text-text-secondary transition-colors ${availableRepos.length >= 2 ? 'hover:bg-hover cursor-pointer' : ''}`}\n            >\n              <span className=\"w-1.5 h-1.5 bg-node-function rounded-full animate-pulse\" />\n              <span className=\"truncate max-w-[200px]\">{projectName}</span>\n              {availableRepos.length >= 2 && (\n                <ChevronDown className={`w-3.5 h-3.5 text-text-muted transition-transform ${isRepoDropdownOpen ? 
'rotate-180' : ''}`} />\n              )}\n            </button>\n\n            {/* Repo dropdown */}\n            {isRepoDropdownOpen && availableRepos.length >= 2 && (\n              <div className=\"absolute top-full left-0 mt-1 w-72 bg-surface border border-border-subtle rounded-lg shadow-xl overflow-hidden z-50\">\n                {availableRepos.map((repo) => {\n                  const isCurrent = repo.name === projectName;\n                  return (\n                    <button\n                      key={repo.name}\n                      onClick={() => {\n                        if (!isCurrent && onSwitchRepo) {\n                          onSwitchRepo(repo.name);\n                        }\n                        setIsRepoDropdownOpen(false);\n                      }}\n                      className={`w-full px-4 py-3 flex items-center gap-3 text-left transition-colors ${isCurrent ? 'bg-accent/10 border-l-2 border-accent' : 'hover:bg-hover border-l-2 border-transparent'}`}\n                    >\n                      <span className={`w-2 h-2 rounded-full flex-shrink-0 ${isCurrent ? 'bg-node-function animate-pulse' : 'bg-text-muted'}`} />\n                      <div className=\"flex-1 min-w-0\">\n                        <div className={`text-sm font-medium truncate ${isCurrent ? 'text-accent' : 'text-text-primary'}`}>\n                          {repo.name}\n                        </div>\n                        <div className=\"text-xs text-text-muted mt-0.5\">\n                          {repo.stats?.nodes ?? '?'} nodes &middot; {repo.stats?.files ?? 
'?'} files\n                        </div>\n                      </div>\n                    </button>\n                  );\n                })}\n              </div>\n            )}\n          </div>\n        )}\n      </div>\n\n      {/* Center - Search */}\n      <div className=\"flex-1 max-w-md mx-6 relative\" ref={searchRef}>\n        <div className=\"flex items-center gap-2.5 px-3.5 py-2 bg-surface border border-border-subtle rounded-lg transition-all focus-within:border-accent focus-within:ring-2 focus-within:ring-accent/20\">\n          <Search className=\"w-4 h-4 text-text-muted flex-shrink-0\" />\n          <input\n            ref={inputRef}\n            type=\"text\"\n            placeholder=\"Search nodes...\"\n            value={searchQuery}\n            onChange={(e) => {\n              setSearchQuery(e.target.value);\n              setIsSearchOpen(true);\n              setSelectedIndex(0);\n            }}\n            onFocus={() => setIsSearchOpen(true)}\n            onKeyDown={handleKeyDown}\n            className=\"flex-1 bg-transparent border-none outline-none text-sm text-text-primary placeholder:text-text-muted\"\n          />\n          <kbd className=\"px-1.5 py-0.5 bg-elevated border border-border-subtle rounded text-[10px] text-text-muted font-mono\">\n            ⌘K\n          </kbd>\n        </div>\n\n        {/* Search Results Dropdown */}\n        {isSearchOpen && searchQuery.trim() && (\n          <div className=\"absolute top-full left-0 right-0 mt-1 bg-surface border border-border-subtle rounded-lg shadow-xl overflow-hidden z-50\">\n            {searchResults.length === 0 ? 
(\n              <div className=\"px-4 py-3 text-sm text-text-muted\">\n                No nodes found for \"{searchQuery}\"\n              </div>\n            ) : (\n              <div className=\"max-h-80 overflow-y-auto\">\n                {searchResults.map((node, index) => (\n                  <button\n                    key={node.id}\n                    onClick={() => handleSelectNode(node)}\n                    className={`w-full px-4 py-2.5 flex items-center gap-3 text-left transition-colors ${index === selectedIndex\n                      ? 'bg-accent/20 text-text-primary'\n                      : 'hover:bg-hover text-text-secondary'\n                      }`}\n                  >\n                    {/* Node type indicator */}\n                    <span\n                      className=\"w-2.5 h-2.5 rounded-full flex-shrink-0\"\n                      style={{ backgroundColor: NODE_TYPE_COLORS[node.label] || '#6b7280' }}\n                    />\n                    {/* Node name */}\n                    <span className=\"flex-1 truncate text-sm font-medium\">\n                      {node.properties.name}\n                    </span>\n                    {/* Node type badge */}\n                    <span className=\"text-xs text-text-muted px-2 py-0.5 bg-elevated rounded\">\n                      {node.label}\n                    </span>\n                  </button>\n                ))}\n              </div>\n            )}\n          </div>\n        )}\n      </div>\n\n      {/* Right section */}\n      <div className=\"flex items-center gap-2\">\n        {/* GitHub Star Button */}\n        <a\n          href=\"https://github.com/abhigyanpatwari/GitNexus\"\n          target=\"_blank\"\n          rel=\"noopener noreferrer\"\n          className=\"flex items-center gap-2 px-3.5 py-2 bg-gradient-to-r from-purple-600 to-pink-600 hover:from-purple-500 hover:to-pink-500 rounded-lg text-white text-sm font-medium shadow-lg hover:shadow-xl hover:-translate-y-0.5 
transition-all duration-200 group\"\n        >\n          <Github className=\"w-4 h-4\" />\n          <span className=\"hidden sm:inline\">Star if cool</span>\n          <Star className=\"w-3.5 h-3.5 group-hover:fill-yellow-300 group-hover:text-yellow-300 transition-all\" />\n          <span className=\"hidden sm:inline\">✨</span>\n        </a>\n\n        {/* Stats */}\n        {graph && (\n          <div className=\"flex items-center gap-4 mr-2 text-xs text-text-muted\">\n            <span>{nodeCount} nodes</span>\n            <span>{edgeCount} edges</span>\n          </div>\n        )}\n\n        {/* Embedding Status */}\n        <EmbeddingStatus />\n\n        {/* Icon buttons */}\n        <button\n          onClick={() => setSettingsPanelOpen(true)}\n          className=\"w-9 h-9 flex items-center justify-center rounded-md text-text-secondary hover:bg-hover hover:text-text-primary transition-colors\"\n          title=\"AI Settings\"\n        >\n          <Settings className=\"w-[18px] h-[18px]\" />\n        </button>\n        <button className=\"w-9 h-9 flex items-center justify-center rounded-md text-text-secondary hover:bg-hover hover:text-text-primary transition-colors\">\n          <HelpCircle className=\"w-[18px] h-[18px]\" />\n        </button>\n\n        {/* AI Button */}\n        <button\n          onClick={openChatPanel}\n          className={`\n            flex items-center gap-1.5 px-3.5 py-2 rounded-lg text-sm font-medium transition-all\n            ${isRightPanelOpen && rightPanelTab === 'chat'\n              ? 'bg-accent text-white shadow-glow'\n              : 'bg-gradient-to-r from-accent to-accent-dim text-white shadow-glow hover:shadow-lg hover:-translate-y-0.5'\n            }\n          `}\n        >\n          <Sparkles className=\"w-4 h-4\" />\n          <span>Nexus AI</span>\n        </button>\n      </div>\n    </header>\n  );\n};\n\n"
  },
  {
    "path": "gitnexus-web/src/components/LoadingOverlay.tsx",
    "content": "import { PipelineProgress } from '../types/pipeline';\n\ninterface LoadingOverlayProps {\n  progress: PipelineProgress;\n}\n\nexport const LoadingOverlay = ({ progress }: LoadingOverlayProps) => {\n  return (\n    <div className=\"fixed inset-0 flex flex-col items-center justify-center bg-void z-50\">\n      {/* Background gradient effects */}\n      <div className=\"absolute inset-0 pointer-events-none\">\n        <div className=\"absolute top-1/3 left-1/3 w-96 h-96 bg-accent/10 rounded-full blur-3xl animate-pulse\" />\n        <div className=\"absolute bottom-1/3 right-1/3 w-96 h-96 bg-node-interface/10 rounded-full blur-3xl animate-pulse\" />\n      </div>\n\n      {/* Pulsing orb */}\n      <div className=\"relative mb-10\">\n        <div className=\"w-28 h-28 bg-gradient-to-br from-accent to-node-interface rounded-full animate-pulse-glow\" />\n        <div className=\"absolute inset-0 w-28 h-28 bg-gradient-to-br from-accent to-node-interface rounded-full blur-xl opacity-50\" />\n      </div>\n\n      {/* Progress bar */}\n      <div className=\"w-80 mb-4\">\n        <div className=\"h-1.5 bg-elevated rounded-full overflow-hidden\">\n          <div \n            className=\"h-full bg-gradient-to-r from-accent to-node-interface rounded-full transition-all duration-300 ease-out\"\n            style={{ width: `${progress.percent}%` }}\n          />\n        </div>\n      </div>\n\n      {/* Status text */}\n      <div className=\"text-center\">\n        <p className=\"font-mono text-sm text-text-secondary mb-1\">\n          {progress.message}\n          <span className=\"animate-pulse\">|</span>\n        </p>\n        {progress.detail && (\n          <p className=\"font-mono text-xs text-text-muted truncate max-w-md\">\n            {progress.detail}\n          </p>\n        )}\n      </div>\n\n      {/* Stats */}\n      {progress.stats && (\n        <div className=\"mt-8 flex items-center gap-6 text-xs text-text-muted\">\n          <div 
className=\"flex items-center gap-2\">\n            <span className=\"w-2 h-2 bg-node-file rounded-full\" />\n            <span>{progress.stats.filesProcessed} / {progress.stats.totalFiles} files</span>\n          </div>\n          <div className=\"flex items-center gap-2\">\n            <span className=\"w-2 h-2 bg-node-function rounded-full\" />\n            <span>{progress.stats.nodesCreated} nodes</span>\n          </div>\n        </div>\n      )}\n\n      {/* Percent */}\n      <p className=\"mt-4 font-mono text-3xl font-semibold text-text-primary\">\n        {progress.percent}%\n      </p>\n    </div>\n  );\n};\n\n"
  },
  {
    "path": "gitnexus-web/src/components/MarkdownRenderer.tsx",
    "content": "import React, { useState } from 'react';\nimport ReactMarkdown from 'react-markdown';\nimport remarkGfm from 'remark-gfm';\nimport { Prism as SyntaxHighlighter } from 'react-syntax-highlighter';\nimport { vscDarkPlus } from 'react-syntax-highlighter/dist/esm/styles/prism';\nimport { MermaidDiagram } from './MermaidDiagram';\nimport { ToolCallCard } from './ToolCallCard';\nimport { Copy, Check } from 'lucide-react';\n\n// Custom syntax theme\nconst customTheme = {\n    ...vscDarkPlus,\n    'pre[class*=\"language-\"]': {\n        ...vscDarkPlus['pre[class*=\"language-\"]'],\n        background: '#0a0a10',\n        margin: 0,\n        padding: '16px 0',\n        fontSize: '13px',\n        lineHeight: '1.6',\n    },\n    'code[class*=\"language-\"]': {\n        ...vscDarkPlus['code[class*=\"language-\"]'],\n        background: 'transparent',\n        fontFamily: '\"JetBrains Mono\", \"Fira Code\", monospace',\n    },\n};\n\ninterface MarkdownRendererProps {\n    content: string;\n    onLinkClick?: (href: string) => void;\n    toolCalls?: any[]; // Keep flexible for now\n    showCopyButton?: boolean;\n}\n\nexport const MarkdownRenderer: React.FC<MarkdownRendererProps> = ({\n    content,\n    onLinkClick,\n    toolCalls,\n    showCopyButton = false\n}) => {\n    const [copied, setCopied] = useState(false);\n\n    const handleCopy = async () => {\n        try {\n            await navigator.clipboard.writeText(content);\n            setCopied(true);\n            setTimeout(() => setCopied(false), 2000);\n        } catch (err) {\n            console.error('Failed to copy:', err);\n        }\n    };\n\n    // Helper to format text for display (convert [[links]] to markdown links)\n    const formatMarkdownForDisplay = (md: string) => {\n        // Avoid rewriting inside fenced code blocks.\n        const parts = md.split('```');\n        for (let i = 0; i < parts.length; i += 2) {\n            // Pattern 1: File grounding - [[file.ext]]\n            parts[i] = 
parts[i].replace(\n                /\\[\\[([a-zA-Z0-9_\\-./\\\\]+\\.[a-zA-Z0-9]+(?::\\d+(?:[-–]\\d+)?)?)\\]\\]/g,\n                (_m, inner: string) => {\n                    const trimmed = inner.trim();\n                    const href = `code-ref:${encodeURIComponent(trimmed)}`;\n                    return `[${trimmed}](${href})`;\n                }\n            );\n\n            // Pattern 2: Node grounding - [[Type:Name]]\n            parts[i] = parts[i].replace(\n                /\\[\\[(?:graph:)?(Class|Function|Method|Interface|File|Folder|Variable|Enum|Type|CodeElement):([^\\]]+)\\]\\]/g,\n                (_m, nodeType: string, nodeName: string) => {\n                    const trimmed = `${nodeType}:${nodeName.trim()}`;\n                    const href = `node-ref:${encodeURIComponent(trimmed)}`;\n                    return `[${trimmed}](${href})`;\n                }\n            );\n        }\n        return parts.join('```');\n    };\n\n    const handleLinkClick = (e: React.MouseEvent<HTMLAnchorElement>, href: string) => {\n        if (href.startsWith('code-ref:') || href.startsWith('node-ref:')) {\n            e.preventDefault();\n            onLinkClick?.(href);\n        }\n        // External links open in new tab (default behavior)\n    };\n\n    const formattedContent = React.useMemo(() => formatMarkdownForDisplay(content), [content]);\n\n    const markdownComponents = React.useMemo(() => ({\n        a: ({ href, children, ...props }: any) => {\n            const hrefStr = href || '';\n\n            // Grounding links (Code refs & Node refs)\n            if (hrefStr.startsWith('code-ref:') || hrefStr.startsWith('node-ref:')) {\n                const isNodeRef = hrefStr.startsWith('node-ref:');\n                const inner = decodeURIComponent(hrefStr.slice(9)); // 'code-ref:' and 'node-ref:' are both 9 characters\n\n                // Styles\n                const baseParams = \"code-ref-btn inline-flex items-center px-2 py-0.5 rounded-md font-mono text-[12px] !no-underline hover:!no-underline transition-colors\";\n                const colorParams = isNodeRef\n                    ? \"border border-amber-300/55 bg-amber-400/10 !text-amber-200 visited:!text-amber-200 hover:bg-amber-400/15 hover:border-amber-200/70\"\n                    : \"border border-cyan-300/55 bg-cyan-400/10 !text-cyan-200 visited:!text-cyan-200 hover:bg-cyan-400/15 hover:border-cyan-200/70\";\n\n                return (\n                    <a\n                        href={hrefStr}\n                        onClick={(e) => handleLinkClick(e, hrefStr)}\n                        className={`${baseParams} ${colorParams}`}\n                        title={isNodeRef ? `View ${inner} in Code panel` : `Open in Code panel • ${inner}`}\n                        {...props}\n                    >\n                        <span className=\"text-inherit\">{children}</span>\n                    </a>\n                );\n            }\n\n            // External links\n            return (\n                <a\n                    href={hrefStr}\n                    className=\"text-accent underline underline-offset-2 hover:text-purple-300\"\n                    target=\"_blank\"\n                    rel=\"noopener noreferrer\"\n                    {...props}\n                >\n                    {children}\n                </a>\n            );\n        },\n        code: ({ className, children, ...props }: any) => {\n            const match = /language-(\\w+)/.exec(className || '');\n            const isInline = !className && !match;\n            const codeContent = String(children).replace(/\\n$/, '');\n\n            if (isInline) {\n                return <code {...props}>{children}</code>;\n            }\n\n            const language = match ? 
match[1] : 'text';\n\n            // Render Mermaid diagrams\n            if (language === 'mermaid') {\n                return <MermaidDiagram code={codeContent} />;\n            }\n\n            return (\n                <SyntaxHighlighter\n                    style={customTheme}\n                    language={language}\n                    PreTag=\"div\"\n                    customStyle={{\n                        margin: 0,\n                        padding: '14px 16px',\n                        borderRadius: '8px',\n                        fontSize: '13px',\n                        background: '#0a0a10',\n                        border: '1px solid #1e1e2a',\n                    }}\n                >\n                    {codeContent}\n                </SyntaxHighlighter>\n            );\n        },\n        pre: ({ children }: any) => <>{children}</>,\n    }), [onLinkClick]); // handleLinkClick closes over onLinkClick only, so memoizing on onLinkClick is sufficient\n\n    return (\n        <div className=\"text-text-primary text-sm\">\n            <ReactMarkdown\n                remarkPlugins={[remarkGfm]}\n                urlTransform={(url) => {\n                    if (url.startsWith('code-ref:') || url.startsWith('node-ref:')) return url;\n                    // Default behavior for http/https/etc\n                    return url;\n                }}\n                components={markdownComponents}\n            >\n                {formattedContent}\n            </ReactMarkdown>\n\n            {/* Copy Button */}\n            {showCopyButton && (\n                <div className=\"mt-2 flex justify-end\">\n                    <button\n                        onClick={handleCopy}\n                        className=\"flex items-center gap-1.5 px-2 py-1 text-xs text-text-muted hover:text-text-primary hover:bg-surface border border-transparent hover:border-border-subtle rounded transition-all\"\n                        title=\"Copy to clipboard\"\n         
           >\n                        {copied ? <Check className=\"w-3.5 h-3.5 text-emerald-400\" /> : <Copy className=\"w-3.5 h-3.5\" />}\n                        <span>{copied ? 'Copied' : 'Copy'}</span>\n                    </button>\n                </div>\n            )}\n\n            {/* Tool Call Cards appended at the bottom if provided */}\n            {toolCalls && toolCalls.length > 0 && (\n                <div className=\"mt-3 space-y-2\">\n                    {toolCalls.map(tc => (\n                        <ToolCallCard key={tc.id} toolCall={tc} defaultExpanded={false} />\n                    ))}\n                </div>\n            )}\n        </div>\n    );\n};\n\n\n"
  },
  {
    "path": "gitnexus-web/src/components/MermaidDiagram.tsx",
    "content": "import { useEffect, useRef, useState } from 'react';\nimport mermaid from 'mermaid';\nimport { AlertTriangle, Maximize2 } from 'lucide-react';\nimport { ProcessFlowModal } from './ProcessFlowModal';\nimport type { ProcessData } from '../lib/mermaid-generator';\n\n// Initialize mermaid with cyan theme matching ProcessFlowModal\nmermaid.initialize({\n  startOnLoad: false,\n  maxTextSize: 900000,\n  theme: 'base',\n  themeVariables: {\n    primaryColor: '#1e293b', // node bg - slate\n    primaryTextColor: '#f1f5f9',\n    primaryBorderColor: '#22d3ee', // cyan\n    lineColor: '#94a3b8',\n    secondaryColor: '#1e293b',\n    tertiaryColor: '#0f172a',\n    mainBkg: '#1e293b',\n    nodeBorder: '#22d3ee', // cyan\n    clusterBkg: '#1e293b',\n    clusterBorder: '#475569',\n    titleColor: '#f1f5f9',\n    edgeLabelBackground: '#0f172a',\n  },\n  flowchart: {\n    curve: 'basis',\n    padding: 15,\n    nodeSpacing: 50,\n    rankSpacing: 50,\n    htmlLabels: true,\n  },\n  sequence: {\n    actorMargin: 50,\n    boxMargin: 10,\n    boxTextMargin: 5,\n    noteMargin: 10,\n    messageMargin: 35,\n  },\n  fontFamily: '\"JetBrains Mono\", \"Fira Code\", monospace',\n  fontSize: 13,\n  suppressErrorRendering: true,\n});\n\n// Override the default error handler to prevent it from logging to UI\nmermaid.parseError = (_err) => {\n  // Silent catch\n};\n\ninterface MermaidDiagramProps {\n  code: string;\n}\n\nexport const MermaidDiagram = ({ code }: MermaidDiagramProps) => {\n  const containerRef = useRef<HTMLDivElement>(null);\n  const [error, setError] = useState<string | null>(null);\n  const [showModal, setShowModal] = useState(false);\n  const [svg, setSvg] = useState<string>('');\n\n  useEffect(() => {\n    const renderDiagram = async () => {\n      if (!containerRef.current) return;\n\n      try {\n        // Generate unique ID for this diagram\n        const id = `mermaid-${Date.now()}-${Math.random().toString(36).slice(2, 8)}`;\n\n        // Render the diagram\n  
      const { svg: renderedSvg } = await mermaid.render(id, code.trim());\n        setSvg(renderedSvg);\n        setError(null);\n      } catch (err) {\n        // Silent catch for streaming: \n        // If render fails (common during partial streaming), we:\n        // 1. Log to console for debugging\n        // 2. Do NOT set error state (avoids flashing red box)\n        // 3. Do NOT clear existing SVG (keeps last valid state visible)\n        console.debug('Mermaid render skipped (incomplete):', err);\n      }\n    };\n\n    // Debounce rendering to prevent \"jerking\" during high-speed streaming\n    const timeoutId = setTimeout(() => {\n      renderDiagram();\n    }, 300);\n\n    return () => clearTimeout(timeoutId);\n  }, [code]);\n\n  // Create a pseudo ProcessData for the modal (with custom rawMermaid property)\n  const processData: any = showModal ? {\n    id: 'ai-generated',\n    label: 'AI Generated Diagram',\n    processType: 'intra_community',\n    steps: [], // Empty - we'll render raw mermaid\n    edges: [],\n    clusters: [],\n    rawMermaid: code, // Pass raw mermaid code\n  } : null;\n\n  if (error) {\n    return (\n      <div className=\"my-3 p-4 bg-rose-500/10 border border-rose-500/30 rounded-lg\">\n        <div className=\"flex items-center gap-2 text-rose-300 text-sm mb-2\">\n          <AlertTriangle className=\"w-4 h-4\" />\n          <span className=\"font-medium\">Diagram Error</span>\n        </div>\n        <pre className=\"text-xs text-rose-200/70 font-mono whitespace-pre-wrap\">{error}</pre>\n        <details className=\"mt-2\">\n          <summary className=\"text-xs text-text-muted cursor-pointer hover:text-text-secondary\">\n            Show source\n          </summary>\n          <pre className=\"mt-2 p-2 bg-surface rounded text-xs text-text-muted overflow-x-auto\">\n            {code}\n          </pre>\n        </details>\n      </div>\n    );\n  }\n\n  return (\n    <>\n      <div className=\"my-3 relative group\">\n        <div 
className=\"relative bg-gradient-to-b from-surface to-elevated border border-border-subtle rounded-xl overflow-hidden\">\n          {/* Header */}\n          <div className=\"flex items-center justify-between px-3 py-2 bg-surface/60 border-b border-border-subtle\">\n            <span className=\"text-[10px] text-text-muted uppercase tracking-wider font-medium\">\n              Diagram\n            </span>\n            <button\n              onClick={() => setShowModal(true)}\n              className=\"p-1 text-text-muted hover:text-text-primary hover:bg-hover rounded transition-colors\"\n              title=\"Expand\"\n            >\n              <Maximize2 className=\"w-3.5 h-3.5\" />\n            </button>\n          </div>\n\n          {/* Diagram container */}\n          <div\n            ref={containerRef}\n            className=\"flex items-center justify-center p-4 overflow-auto max-h-[400px]\"\n            dangerouslySetInnerHTML={{ __html: svg }}\n          />\n        </div>\n      </div>\n\n      {/* Use ProcessFlowModal for expansion */}\n      {showModal && processData && (\n        <ProcessFlowModal\n          process={processData}\n          onClose={() => setShowModal(false)}\n        />\n      )}\n    </>\n  );\n};\n"
  },
  {
    "path": "gitnexus-web/src/components/ProcessFlowModal.tsx",
    "content": "/**\n * Process Flow Modal\n * \n * Displays a Mermaid flowchart for a process in a centered modal popup.\n */\n\nimport { useEffect, useRef, useCallback, useState } from 'react';\nimport { Copy, Focus, ZoomIn, ZoomOut } from 'lucide-react';\nimport mermaid from 'mermaid';\nimport { ProcessData, generateProcessMermaid } from '../lib/mermaid-generator';\n\ninterface ProcessFlowModalProps {\n    process: ProcessData | null;\n    onClose: () => void;\n    onFocusInGraph?: (nodeIds: string[], processId: string) => void;\n    isFullScreen?: boolean;\n}\n\n// Initialize mermaid with cyan/purple theme matching GitNexus\nmermaid.initialize({\n    startOnLoad: false,\n    suppressErrorRendering: true, // Try to suppress if supported\n    maxTextSize: 900000, // Increase from default 50000 to handle large combined diagrams\n    theme: 'base',\n    themeVariables: {\n        primaryColor: '#1e293b', // node bg\n        primaryTextColor: '#f1f5f9',\n        primaryBorderColor: '#22d3ee',\n        lineColor: '#94a3b8',\n        secondaryColor: '#1e293b',\n        tertiaryColor: '#0f172a',\n        mainBkg: '#1e293b', // background\n        nodeBorder: '#22d3ee',\n        clusterBkg: '#1e293b',\n        clusterBorder: '#475569',\n        titleColor: '#f1f5f9',\n        edgeLabelBackground: '#0f172a',\n    },\n    flowchart: {\n        curve: 'basis',\n        padding: 50,\n        nodeSpacing: 120,\n        rankSpacing: 140,\n        htmlLabels: true,\n    },\n});\n\n// Suppress Mermaid's default syntax error overlay\nmermaid.parseError = (err) => {\n    // Suppress visual error - we handle errors in the render try/catch\n    console.debug('Mermaid parse error (suppressed):', err);\n};\n\nexport const ProcessFlowModal = ({ process, onClose, onFocusInGraph, isFullScreen = false }: ProcessFlowModalProps) => {\n    const containerRef = useRef<HTMLDivElement>(null);\n    const diagramRef = 
useRef<HTMLDivElement>(null);\n    const scrollContainerRef = useRef<HTMLDivElement>(null);\n    \n    // Full process map gets higher default zoom (667%) and max zoom (3000%)\n    const defaultZoom = isFullScreen ? 6.67 : 1;\n    const maxZoom = isFullScreen ? 30 : 10;\n    \n    const [zoom, setZoom] = useState(defaultZoom);\n    const [pan, setPan] = useState({ x: 0, y: 0 });\n    const [isPanning, setIsPanning] = useState(false);\n    const [panStart, setPanStart] = useState({ x: 0, y: 0 });\n    \n    // Reset zoom when switching between full screen and regular mode\n    useEffect(() => {\n        setZoom(defaultZoom);\n        setPan({ x: 0, y: 0 });\n    }, [isFullScreen, defaultZoom]);\n\n    // Handle zoom with scroll wheel\n    useEffect(() => {\n        const handleWheel = (e: WheelEvent) => {\n            e.preventDefault();\n            const delta = e.deltaY * -0.001;\n            setZoom(prev => Math.min(Math.max(0.1, prev + delta), maxZoom));\n        };\n\n        const container = scrollContainerRef.current;\n        if (container) {\n            container.addEventListener('wheel', handleWheel, { passive: false });\n            return () => container.removeEventListener('wheel', handleWheel);\n        }\n    }, [process, maxZoom]); // Re-attach when process or maxZoom changes\n\n    // Handle keyboard zoom\n    useEffect(() => {\n        const handleKeyDown = (e: KeyboardEvent) => {\n            if (e.key === '+' || e.key === '=') {\n                setZoom(prev => Math.min(prev + 0.2, maxZoom));\n            } else if (e.key === '-' || e.key === '_') {\n                setZoom(prev => Math.max(prev - 0.2, 0.1));\n            }\n        };\n        window.addEventListener('keydown', handleKeyDown);\n        return () => window.removeEventListener('keydown', handleKeyDown);\n    }, [maxZoom]);\n\n    // Zoom in/out handlers\n    const handleZoomIn = useCallback(() => {\n        setZoom(prev => Math.min(prev + 0.25, maxZoom));\n    }, 
[maxZoom]);\n\n    const handleZoomOut = useCallback(() => {\n        setZoom(prev => Math.max(prev - 0.25, 0.1));\n    }, []);\n\n    // Handle pan with mouse drag\n    const handleMouseDown = useCallback((e: React.MouseEvent) => {\n        setIsPanning(true);\n        setPanStart({ x: e.clientX - pan.x, y: e.clientY - pan.y });\n    }, [pan]);\n\n    const handleMouseMove = useCallback((e: React.MouseEvent) => {\n        if (!isPanning) return;\n        setPan({ x: e.clientX - panStart.x, y: e.clientY - panStart.y });\n    }, [isPanning, panStart]);\n\n    const handleMouseUp = useCallback(() => {\n        setIsPanning(false);\n    }, []);\n\n    const resetView = useCallback(() => {\n        setZoom(defaultZoom);\n        setPan({ x: 0, y: 0 });\n    }, [defaultZoom]);\n\n    // Render mermaid diagram\n    useEffect(() => {\n        if (!process || !diagramRef.current) return;\n\n        const renderDiagram = async () => {\n            try {\n                // Check if we have raw mermaid code (from AI chat) or need to generate it\n                const mermaidCode = (process as any).rawMermaid\n                    ? (process as any).rawMermaid\n                    : generateProcessMermaid(process);\n                const id = `mermaid-${Date.now()}`;\n\n                // Clear previous content\n                diagramRef.current!.innerHTML = '';\n\n                const { svg } = await mermaid.render(id, mermaidCode);\n                diagramRef.current!.innerHTML = svg;\n            } catch (error) {\n                console.error('Mermaid render error:', error);\n                const errorMessage = error instanceof Error ? error.message : String(error);\n                const isSizeError = errorMessage.includes('Maximum') || errorMessage.includes('exceeded');\n\n                diagramRef.current!.innerHTML = `\n          <div class=\"text-center p-8\">\n            <div class=\"text-red-400 text-sm font-medium mb-2\">\n              ${isSizeError ? 
'📊 Diagram Too Large' : '⚠️ Render Error'}\n            </div>\n            <div class=\"text-slate-400 text-xs max-w-md\">\n              ${isSizeError\n                        ? `This diagram has ${process.steps?.length || 0} steps and is too complex to render. Try viewing individual processes instead of \"All Processes\".`\n                        : `Unable to render diagram. Steps: ${process.steps?.length || 0}`\n                    }\n            </div>\n          </div>\n        `;\n            }\n        };\n\n        renderDiagram();\n    }, [process]);\n\n    // Close on escape\n    useEffect(() => {\n        const handleEscape = (e: KeyboardEvent) => {\n            if (e.key === 'Escape') onClose();\n        };\n        window.addEventListener('keydown', handleEscape);\n        return () => window.removeEventListener('keydown', handleEscape);\n    }, [onClose]);\n\n    // Close on backdrop click\n    const handleBackdropClick = useCallback((e: React.MouseEvent) => {\n        if (e.target === containerRef.current) {\n            onClose();\n        }\n    }, [onClose]);\n\n    // Copy mermaid code to clipboard\n    const handleCopyMermaid = useCallback(async () => {\n        if (!process) return;\n        const mermaidCode = generateProcessMermaid(process);\n        await navigator.clipboard.writeText(mermaidCode);\n    }, [process]);\n\n    // Focus in graph\n    const handleFocusInGraph = useCallback(() => {\n        if (!process || !onFocusInGraph) return;\n        const nodeIds = process.steps.map(s => s.id);\n        onFocusInGraph(nodeIds, process.id);\n        onClose();\n    }, [process, onFocusInGraph, onClose]);\n\n    if (!process) return null;\n\n    return (\n        <div\n            ref={containerRef}\n            className=\"fixed inset-0 z-50 flex items-center justify-center bg-black/20 animate-fade-in\"\n            onClick={handleBackdropClick}\n        >\n            {/* Glassmorphism Modal */}\n            <div 
className={`bg-slate-900/60 backdrop-blur-2xl border border-white/10 rounded-3xl shadow-2xl shadow-cyan-500/10 flex flex-col animate-scale-in overflow-hidden relative ${isFullScreen\n                ? 'w-[98%] h-[95vh] max-w-none'\n                : 'w-[95%] max-w-5xl max-h-[90vh]'\n                }`}>\n                {/* Subtle gradient overlay for extra glass feel */}\n                <div className=\"absolute inset-0 bg-gradient-to-br from-white/5 to-transparent pointer-events-none\" />\n\n                {/* Header */}\n                <div className=\"px-6 py-5 border-b border-white/10 relative z-10\">\n                    <h2 className=\"text-lg font-semibold text-white\">\n                        Process: {process.label}\n                    </h2>\n                </div>\n\n                {/* Diagram */}\n                <div\n                    ref={scrollContainerRef}\n                    className={`flex-1 p-8 flex items-center justify-center relative z-10 overflow-hidden ${isFullScreen ? 'min-h-[70vh]' : 'min-h-[400px]'}`}\n                    onMouseDown={handleMouseDown}\n                    onMouseMove={handleMouseMove}\n                    onMouseUp={handleMouseUp}\n                    onMouseLeave={handleMouseUp}\n                    style={{ cursor: isPanning ? 
'grabbing' : 'grab' }}\n                >\n                    <div\n                        ref={diagramRef}\n                        className=\"[&_.edgePath_.path]:stroke-slate-400 [&_.edgePath_.path]:stroke-2 [&_.marker]:fill-slate-400 transition-transform origin-center w-fit h-fit\"\n                        style={{\n                            transform: `translate(${pan.x}px, ${pan.y}px) scale(${zoom})`,\n                        }}\n                    />\n                </div>\n\n                {/* Footer Actions */}\n                <div className=\"flex items-center justify-center gap-3 px-6 py-4 border-t border-white/10 bg-slate-900/50 relative z-10\">\n                    {/* Zoom controls */}\n                    <div className=\"flex items-center gap-1 bg-white/5 border border-white/10 rounded-lg p-1\">\n                        <button\n                            onClick={handleZoomOut}\n                            className=\"p-2 text-slate-300 hover:text-white hover:bg-white/10 rounded-md transition-all\"\n                            title=\"Zoom out (-)\"\n                        >\n                            <ZoomOut className=\"w-4 h-4\" />\n                        </button>\n                        <span className=\"px-2 text-xs text-slate-400 font-mono min-w-[3rem] text-center\">\n                            {Math.round(zoom * 100)}%\n                        </span>\n                        <button\n                            onClick={handleZoomIn}\n                            className=\"p-2 text-slate-300 hover:text-white hover:bg-white/10 rounded-md transition-all\"\n                            title=\"Zoom in (+)\"\n                        >\n                            <ZoomIn className=\"w-4 h-4\" />\n                        </button>\n                    </div>\n                    <button\n                        onClick={resetView}\n                        className=\"flex items-center gap-2 px-4 py-2.5 text-sm font-medium 
text-slate-300 hover:text-white bg-white/5 hover:bg-white/10 border border-white/10 rounded-lg transition-all\"\n                        title=\"Reset zoom and pan\"\n                    >\n                        Reset View\n                    </button>\n                    {onFocusInGraph && (\n                        <button\n                            onClick={handleFocusInGraph}\n                            className=\"flex items-center gap-2 px-5 py-2.5 text-sm font-medium text-slate-900 bg-cyan-400 hover:bg-cyan-300 rounded-lg transition-all shadow-lg shadow-cyan-500/20\"\n                        >\n                            <Focus className=\"w-4 h-4\" />\n                            Toggle Focus\n                        </button>\n                    )}\n                    <button\n                        onClick={handleCopyMermaid}\n                        className=\"flex items-center gap-2 px-5 py-2.5 text-sm font-medium text-white bg-purple-600 hover:bg-purple-500 rounded-lg transition-all shadow-lg shadow-purple-500/20\"\n                    >\n                        <Copy className=\"w-4 h-4\" />\n                        Copy Mermaid\n                    </button>\n                    <button\n                        onClick={onClose}\n                        className=\"px-5 py-2.5 text-sm font-medium text-slate-300 hover:text-white bg-white/5 hover:bg-white/10 border border-white/10 rounded-lg transition-all\"\n                    >\n                        Close\n                    </button>\n                </div>\n            </div>\n        </div>\n    );\n};\n"
  },
  {
    "path": "gitnexus-web/src/components/ProcessesPanel.tsx",
    "content": "/**\n * Processes Panel\n * \n * Lists all detected processes grouped by type (cross-community / intra-community).\n * Clicking a process opens the ProcessFlowModal with a flowchart.\n */\n\nimport { useState, useMemo, useCallback, useEffect } from 'react';\nimport { GitBranch, Search, Eye, Zap, Home, ChevronDown, ChevronRight, Sparkles, Lightbulb, Layers } from 'lucide-react';\nimport { useAppState } from '../hooks/useAppState';\nimport { ProcessFlowModal } from './ProcessFlowModal';\nimport type { ProcessData, ProcessStep } from '../lib/mermaid-generator';\n\nexport const ProcessesPanel = () => {\n    const { graph, runQuery, setHighlightedNodeIds, highlightedNodeIds } = useAppState();\n    const [searchQuery, setSearchQuery] = useState('');\n    const [selectedProcess, setSelectedProcess] = useState<ProcessData | null>(null);\n    const [expandedSections, setExpandedSections] = useState<Set<string>>(new Set(['cross', 'intra']));\n    const [loadingProcess, setLoadingProcess] = useState<string | null>(null);\n    const [focusedProcessId, setFocusedProcessId] = useState<string | null>(null);\n\n    // Extract processes from graph\n    const processes = useMemo(() => {\n        if (!graph) return { cross: [], intra: [] };\n\n        const processNodes = graph.nodes.filter(n => n.label === 'Process');\n\n        const cross: Array<{ id: string; label: string; stepCount: number; clusters: string[] }> = [];\n        const intra: Array<{ id: string; label: string; stepCount: number; clusters: string[] }> = [];\n\n        for (const node of processNodes) {\n            const item = {\n                id: node.id,\n                label: node.properties.heuristicLabel || node.properties.name || node.id,\n                stepCount: node.properties.stepCount || 0,\n                clusters: node.properties.communities || [],\n            };\n\n            if (node.properties.processType === 'cross_community') {\n                cross.push(item);\n           
 } else {\n                intra.push(item);\n            }\n        }\n\n        // Sort by step count (most complex first)\n        cross.sort((a, b) => b.stepCount - a.stepCount);\n        intra.sort((a, b) => b.stepCount - a.stepCount);\n\n        return { cross, intra };\n    }, [graph]);\n\n    // Filter by search\n    const filteredProcesses = useMemo(() => {\n        if (!searchQuery.trim()) return processes;\n\n        const query = searchQuery.toLowerCase();\n        return {\n            cross: processes.cross.filter(p => p.label.toLowerCase().includes(query)),\n            intra: processes.intra.filter(p => p.label.toLowerCase().includes(query)),\n        };\n    }, [processes, searchQuery]);\n\n    // Toggle section expansion\n    const toggleSection = useCallback((section: string) => {\n        setExpandedSections(prev => {\n            const next = new Set(prev);\n            if (next.has(section)) {\n                next.delete(section);\n            } else {\n                next.add(section);\n            }\n            return next;\n        });\n    }, []);\n\n    // Load ALL processes and combine into one mega-diagram\n    const handleViewAllProcesses = useCallback(async () => {\n        setLoadingProcess('all');\n\n        try {\n            const allProcessIds = [...processes.cross, ...processes.intra].map(p => p.id);\n\n            if (allProcessIds.length === 0) return;\n\n            // Collect all steps from all processes\n            const allStepsMap = new Map<string, ProcessStep>();\n            const allEdges: Array<{ from: string; to: string; type: string }> = [];\n\n            // Fetch the steps for every process in a single batched query\n            const allStepsQuery = `\n                MATCH (s)-[r:CodeRelation {type: 'STEP_IN_PROCESS'}]->(p:Process)\n                WHERE p.id IN [${allProcessIds.map(id => 
`'${id.replace(/'/g, \"''\")}'`).join(',')}]\n                RETURN s.id AS id, s.name AS name, s.filePath AS filePath, r.step AS stepNumber\n            `;\n\n            const stepsResult = await runQuery(allStepsQuery);\n\n            for (const row of stepsResult) {\n                const stepId = row.id || row[0];\n                if (!allStepsMap.has(stepId)) {\n                    allStepsMap.set(stepId, {\n                        id: stepId,\n                        name: row.name || row[1] || 'Unknown',\n                        filePath: row.filePath || row[2],\n                        stepNumber: row.stepNumber || row.step || row[3] || 0,\n                    });\n                }\n            }\n\n            const allSteps = Array.from(allStepsMap.values());\n            const stepIds = allSteps.map(s => s.id);\n\n            // Query for all CALLS edges between the combined steps\n            if (stepIds.length > 0) {\n                // Batch query if too many steps\n                const edgesQuery = `\n                    MATCH (from)-[r:CodeRelation {type: 'CALLS'}]->(to)\n                    WHERE from.id IN [${stepIds.map(id => `'${id.replace(/'/g, \"''\")}'`).join(',')}]\n                      AND to.id IN [${stepIds.map(id => `'${id.replace(/'/g, \"''\")}'`).join(',')}]\n                    RETURN from.id AS fromId, to.id AS toId, r.type AS type\n                `;\n\n                try {\n                    const edgesResult = await runQuery(edgesQuery);\n                    allEdges.push(...edgesResult\n                        .map((row: any) => ({\n                            from: row.fromId || row[0],\n                            to: row.toId || row[1],\n                            type: row.type || row[2] || 'CALLS',\n                        }))\n                        .filter(edge => edge.from !== edge.to));\n                } catch (err) {\n                    console.warn('Could not fetch combined edges:', err);\n                
}\n            }\n\n            const combinedProcessData: ProcessData = {\n                id: 'combined-all',\n                label: `All Processes (${allProcessIds.length} combined)`,\n                processType: 'cross_community', // Treat as cross-community for styling\n                steps: allSteps,\n                edges: allEdges,\n                clusters: [],\n            };\n\n            setSelectedProcess(combinedProcessData);\n        } catch (error) {\n            console.error('Failed to load combined processes:', error);\n        } finally {\n            setLoadingProcess(null);\n        }\n    }, [processes, runQuery]);\n\n    // Load process steps and open modal\n    const handleViewProcess = useCallback(async (processId: string, label: string, processType: string) => {\n        setLoadingProcess(processId);\n\n        try {\n            // Query for process steps\n            const stepsQuery = `\n        MATCH (s)-[r:CodeRelation {type: 'STEP_IN_PROCESS'}]->(p:Process {id: '${processId.replace(/'/g, \"''\")}'})\n        RETURN s.id AS id, s.name AS name, s.filePath AS filePath, r.step AS stepNumber\n        ORDER BY r.step\n      `;\n\n            const stepsResult = await runQuery(stepsQuery);\n\n            const steps: ProcessStep[] = stepsResult.map((row: any) => ({\n                id: row.id || row[0],\n                name: row.name || row[1] || 'Unknown',\n                filePath: row.filePath || row[2],\n                stepNumber: row.stepNumber || row.step || row[3] || 0,\n            }));\n\n            // Get step IDs for edge query\n            const stepIds = steps.map(s => s.id);\n\n            // Query for CALLS edges between the steps in this process\n            let edges: Array<{ from: string; to: string; type: string }> = [];\n            if (stepIds.length > 0) {\n                const edgesQuery = `\n          MATCH (from)-[r:CodeRelation {type: 'CALLS'}]->(to)\n          WHERE from.id IN [${stepIds.map(id => 
`'${id.replace(/'/g, \"''\")}'`).join(',')}]\n            AND to.id IN [${stepIds.map(id => `'${id.replace(/'/g, \"''\")}'`).join(',')}]\n          RETURN from.id AS fromId, to.id AS toId, r.type AS type\n        `;\n\n                try {\n                    const edgesResult = await runQuery(edgesQuery);\n                    edges = edgesResult\n                        .map((row: any) => ({\n                            from: row.fromId || row[0],\n                            to: row.toId || row[1],\n                            type: row.type || row[2] || 'CALLS',\n                        }))\n                        .filter(edge => edge.from !== edge.to); // Remove self-loops\n                } catch (err) {\n                    console.warn('Could not fetch edges:', err);\n                    // Continue with empty edges - will fallback to linear\n                }\n            }\n\n            // Get clusters for this process\n            const processNode = graph?.nodes.find(n => n.id === processId);\n            const clusters = processNode?.properties.communities || [];\n\n            const processData: ProcessData = {\n                id: processId,\n                label,\n                processType: processType as 'cross_community' | 'intra_community',\n                steps,\n                edges,\n                clusters,\n            };\n\n            setSelectedProcess(processData);\n        } catch (error) {\n            console.error('Failed to load process steps:', error);\n        } finally {\n            setLoadingProcess(null);\n        }\n    }, [runQuery, graph]);\n\n    // Cache for process steps (so we don't re-query when toggling focus)\n    const [processStepsCache, setProcessStepsCache] = useState<Map<string, string[]>>(new Map());\n\n    // Toggle focus for any process - loads steps on demand\n    const handleToggleFocusForProcess = useCallback(async (processId: string) => {\n        // If already focused on this process, turn off\n 
       if (focusedProcessId === processId) {\n            setHighlightedNodeIds(new Set());\n            setFocusedProcessId(null);\n            return;\n        }\n\n        // Check if we have cached steps\n        if (processStepsCache.has(processId)) {\n            const stepIds = processStepsCache.get(processId)!;\n            setHighlightedNodeIds(new Set(stepIds));\n            setFocusedProcessId(processId);\n            return;\n        }\n\n        // Load steps for this process\n        setLoadingProcess(processId);\n        try {\n            const stepsQuery = `\n                MATCH (s)-[r:CodeRelation {type: 'STEP_IN_PROCESS'}]->(p:Process {id: '${processId.replace(/'/g, \"''\")}'})\n                RETURN s.id AS id\n            `;\n            const stepsResult = await runQuery(stepsQuery);\n            const stepIds = stepsResult.map((row: any) => row.id || row[0]);\n\n            // Cache the result\n            setProcessStepsCache(prev => new Map(prev).set(processId, stepIds));\n\n            // Set focus\n            setHighlightedNodeIds(new Set(stepIds));\n            setFocusedProcessId(processId);\n        } catch (error) {\n            console.error('Failed to load process steps for focus:', error);\n        } finally {\n            setLoadingProcess(null);\n        }\n    }, [focusedProcessId, processStepsCache, runQuery, setHighlightedNodeIds]);\n\n    // Focus in graph callback - toggles highlight (used by modal)\n    const handleFocusInGraph = useCallback((nodeIds: string[], processId: string) => {\n        // Check if this process is already focused\n        if (focusedProcessId === processId) {\n            // Clear focus\n            setHighlightedNodeIds(new Set());\n            setFocusedProcessId(null);\n        } else {\n            // Set focus and cache\n            setHighlightedNodeIds(new Set(nodeIds));\n            setFocusedProcessId(processId);\n            setProcessStepsCache(prev => new Map(prev).set(processId, 
nodeIds));\n        }\n    }, [focusedProcessId, setHighlightedNodeIds]);\n\n    // Clear focused process when highlights are cleared externally\n    useEffect(() => {\n        if (highlightedNodeIds.size === 0 && focusedProcessId !== null) {\n            setFocusedProcessId(null);\n        }\n    }, [highlightedNodeIds, focusedProcessId]);\n\n    const totalCount = processes.cross.length + processes.intra.length;\n\n\n    if (totalCount === 0) {\n        return (\n            <div className=\"flex flex-col items-center justify-center h-full p-6 text-center\">\n                <div className=\"w-14 h-14 mb-4 flex items-center justify-center bg-surface rounded-xl\">\n                    <GitBranch className=\"w-7 h-7 text-text-muted\" />\n                </div>\n                <h3 className=\"text-base font-medium text-text-primary mb-2\">No Processes Detected</h3>\n                <p className=\"text-sm text-text-secondary max-w-xs\">\n                    Processes are execution flows traced from entry points. 
Load a codebase to see detected processes.\n                </p>\n            </div>\n        );\n    }\n\n    return (\n        <div className=\"flex flex-col h-full\">\n            {/* Header with search */}\n            <div className=\"p-3 border-b border-border-subtle\">\n                <div className=\"flex items-center gap-2 mb-2\">\n                    <div className=\"flex-1 flex items-center gap-2 px-3 py-2 bg-elevated border border-border-subtle rounded-lg focus-within:border-accent focus-within:ring-2 focus-within:ring-accent/20\">\n                        <Search className=\"w-4 h-4 text-text-muted\" />\n                        <input\n                            type=\"text\"\n                            value={searchQuery}\n                            onChange={(e) => setSearchQuery(e.target.value)}\n                            placeholder=\"Filter processes...\"\n                            className=\"flex-1 bg-transparent border-none outline-none text-sm text-text-primary placeholder:text-text-muted\"\n                        />\n                    </div>\n                </div>\n                <div className=\"flex items-center gap-2 text-xs text-text-muted\">\n                    <span>{totalCount} processes detected</span>\n                </div>\n            </div>\n\n            {/* Process list */}\n            <div className=\"flex-1 overflow-y-auto scrollbar-thin\">\n                {/* View All Processes Card */}\n                <div className=\"px-4 py-3\">\n                    <button\n                        onClick={handleViewAllProcesses}\n                        disabled={loadingProcess !== null}\n                        className=\"w-full flex items-center gap-3 p-3 bg-elevated/40 hover:bg-elevated/80 border border-border-subtle hover:border-cyan-500/30 rounded-xl transition-all group shadow-sm hover:shadow-cyan-900/10 text-left\"\n                    >\n                        <div className=\"p-2 bg-cyan-500/10 rounded-lg 
group-hover:bg-cyan-500/20 transition-colors\">\n                            <Layers className=\"w-5 h-5 text-cyan-400\" />\n                        </div>\n                        <div className=\"flex-1\">\n                            <h4 className=\"text-sm font-medium text-text-primary group-hover:text-cyan-200\">Full Process Map</h4>\n                            <p className=\"text-xs text-text-muted\">View combined map of {totalCount} processes</p>\n                        </div>\n                        {loadingProcess === 'all' ? (\n                            <span className=\"animate-spin mr-1\">\n                                <Sparkles className=\"w-4 h-4 text-cyan-400\" />\n                            </span>\n                        ) : (\n                            <Eye className=\"w-4 h-4 text-text-muted group-hover:text-cyan-400\" />\n                        )}\n                    </button>\n                </div>\n\n                {/* Cross-Community Section */}\n                {filteredProcesses.cross.length > 0 && (\n                    <div className=\"border-b border-border-subtle\">\n                        <button\n                            onClick={() => toggleSection('cross')}\n                            className=\"w-full flex items-center gap-2 px-4 py-2.5 text-left hover:bg-hover transition-colors\"\n                        >\n                            {expandedSections.has('cross') ? 
(\n                                <ChevronDown className=\"w-4 h-4 text-text-muted\" />\n                            ) : (\n                                <ChevronRight className=\"w-4 h-4 text-text-muted\" />\n                            )}\n                            <Zap className=\"w-4 h-4 text-amber-400\" />\n                            <span className=\"text-sm font-medium text-text-primary\">Cross-Community</span>\n                            <span className=\"ml-auto text-xs text-text-muted bg-surface px-2 py-0.5 rounded-full\">\n                                {filteredProcesses.cross.length}\n                            </span>\n                        </button>\n\n                        {expandedSections.has('cross') && (\n                            <div className=\"pb-2\">\n                                {filteredProcesses.cross.map((process) => (\n                                    <ProcessItem\n                                        key={process.id}\n                                        process={process}\n                                        isLoading={loadingProcess === process.id}\n                                        isSelected={selectedProcess?.id === process.id}\n                                        isFocused={focusedProcessId === process.id}\n                                        onView={() => handleViewProcess(process.id, process.label, 'cross_community')}\n                                        onToggleFocus={() => handleToggleFocusForProcess(process.id)}\n                                    />\n                                ))}\n                            </div>\n                        )}\n                    </div>\n                )}\n\n                {/* Intra-Community Section */}\n                {filteredProcesses.intra.length > 0 && (\n                    <div>\n                        <button\n                            onClick={() => toggleSection('intra')}\n                            className=\"w-full 
flex items-center gap-2 px-4 py-2.5 text-left hover:bg-hover transition-colors\"\n                        >\n                            {expandedSections.has('intra') ? (\n                                <ChevronDown className=\"w-4 h-4 text-text-muted\" />\n                            ) : (\n                                <ChevronRight className=\"w-4 h-4 text-text-muted\" />\n                            )}\n                            <Home className=\"w-4 h-4 text-emerald-400\" />\n                            <span className=\"text-sm font-medium text-text-primary\">Intra-Community</span>\n                            <span className=\"ml-auto text-xs text-text-muted bg-surface px-2 py-0.5 rounded-full\">\n                                {filteredProcesses.intra.length}\n                            </span>\n                        </button>\n\n                        {expandedSections.has('intra') && (\n                            <div className=\"pb-2\">\n                                {filteredProcesses.intra.map((process) => (\n                                    <ProcessItem\n                                        key={process.id}\n                                        process={process}\n                                        isLoading={loadingProcess === process.id}\n                                        isSelected={selectedProcess?.id === process.id}\n                                        isFocused={focusedProcessId === process.id}\n                                        onView={() => handleViewProcess(process.id, process.label, 'intra_community')}\n                                        onToggleFocus={() => handleToggleFocusForProcess(process.id)}\n                                    />\n                                ))}\n                            </div>\n                        )}\n                    </div>\n                )}\n            </div>\n\n            {/* Modal */}\n            <ProcessFlowModal\n                
process={selectedProcess}\n                onClose={() => setSelectedProcess(null)}\n                onFocusInGraph={handleFocusInGraph}\n                isFullScreen={selectedProcess?.id === 'combined-all'}\n            />\n        </div>\n    );\n};\n\n// Individual process item\ninterface ProcessItemProps {\n    process: { id: string; label: string; stepCount: number; clusters: string[] };\n    isLoading: boolean;\n    isSelected: boolean;\n    isFocused: boolean;\n    onView: () => void;\n    onToggleFocus: () => void;\n}\n\nconst ProcessItem = ({ process, isLoading, isSelected, isFocused, onView, onToggleFocus }: ProcessItemProps) => {\n    // Determine row styling - focused gets special highlight\n    const rowClass = isFocused\n        ? 'bg-amber-950/40 border border-amber-500/50 ring-1 ring-amber-400/30'\n        : isSelected\n            ? 'bg-cyan-950/40 border border-cyan-500/50 ring-1 ring-cyan-400/30'\n            : '';\n\n    return (\n        <div className={`flex items-center gap-2 px-4 py-2 mx-2 rounded-lg hover:bg-hover group transition-all ${rowClass}`}>\n            <GitBranch className=\"w-4 h-4 text-text-muted flex-shrink-0\" />\n            <div className=\"flex-1 min-w-0\">\n                <div className=\"text-sm text-text-primary truncate\">{process.label}</div>\n                <div className=\"flex items-center gap-2 text-xs text-text-muted\">\n                    <span>{process.stepCount} steps</span>\n                    {process.clusters.length > 0 && (\n                        <>\n                            <span>•</span>\n                            <span>{process.clusters.length} clusters</span>\n                        </>\n                    )}\n                </div>\n            </div>\n            {/* Lightbulb icon - appears on hover, always visible when focused */}\n            <button\n                onClick={onToggleFocus}\n                className={`p-1.5 rounded-md transition-all ${isFocused\n                    ? 
'text-amber-400 hover:text-amber-300 bg-amber-500/20 hover:bg-amber-500/30 border border-amber-400/40 animate-pulse opacity-100'\n                    : 'text-text-muted hover:text-cyan-400 bg-white/5 hover:bg-cyan-500/20 border border-white/10 hover:border-cyan-400/40 opacity-0 group-hover:opacity-100'\n                    }`}\n                title={isFocused ? 'Click to remove highlight from graph' : 'Click to highlight in graph'}\n            >\n                <Lightbulb className=\"w-4 h-4\" />\n            </button>\n            <button\n                onClick={onView}\n                disabled={isLoading}\n                className={`flex items-center gap-1.5 px-2.5 py-1.5 text-xs font-medium rounded-md transition-all disabled:opacity-50 shadow-sm ${isSelected\n                    ? 'text-cyan-300 bg-cyan-900/60 border border-cyan-400/60 opacity-100'\n                    : 'text-cyan-400 hover:text-cyan-300 bg-cyan-950/30 hover:bg-cyan-900/50 border border-cyan-500/30 hover:border-cyan-400/50 opacity-0 group-hover:opacity-100 shadow-cyan-900/20'\n                    }`}\n            >\n                {isLoading ? (\n                    <span className=\"animate-pulse\">Loading...</span>\n                ) : isSelected ? (\n                    <>\n                        <Eye className=\"w-3.5 h-3.5\" />\n                        Viewing\n                    </>\n                ) : (\n                    <>\n                        <Eye className=\"w-3.5 h-3.5\" />\n                        View\n                    </>\n                )}\n            </button>\n        </div>\n    );\n};\n"
  },
  {
    "path": "gitnexus-web/src/components/QueryFAB.tsx",
    "content": "import { useState, useRef, useEffect, useCallback } from 'react';\nimport { Terminal, Play, X, ChevronDown, ChevronUp, Loader2, Sparkles, Table } from 'lucide-react';\nimport { useAppState } from '../hooks/useAppState';\n\nconst EXAMPLE_QUERIES = [\n  {\n    label: 'All Functions',\n    query: `MATCH (n:Function) RETURN n.id AS id, n.name AS name, n.filePath AS path LIMIT 50`,\n  },\n  {\n    label: 'All Classes',\n    query: `MATCH (n:Class) RETURN n.id AS id, n.name AS name, n.filePath AS path LIMIT 50`,\n  },\n  {\n    label: 'All Interfaces',\n    query: `MATCH (n:Interface) RETURN n.id AS id, n.name AS name, n.filePath AS path LIMIT 50`,\n  },\n  {\n    label: 'Function Calls',\n    query: `MATCH (a:File)-[r:CodeRelation {type: 'CALLS'}]->(b:Function) RETURN a.id AS id, a.name AS caller, b.name AS callee LIMIT 50`,\n  },\n  {\n    label: 'Import Dependencies',\n    query: `MATCH (a:File)-[r:CodeRelation {type: 'IMPORTS'}]->(b:File) RETURN a.id AS id, a.name AS from, b.name AS imports LIMIT 50`,\n  },\n];\n\nexport const QueryFAB = () => {\n  const { setHighlightedNodeIds, setQueryResult, queryResult, clearQueryHighlights, graph, runQuery, isDatabaseReady } = useAppState();\n\n  const [isExpanded, setIsExpanded] = useState(false);\n  const [query, setQuery] = useState('');\n  const [isRunning, setIsRunning] = useState(false);\n  const [error, setError] = useState<string | null>(null);\n  const [showExamples, setShowExamples] = useState(false);\n  const [showResults, setShowResults] = useState(true);\n\n  const textareaRef = useRef<HTMLTextAreaElement>(null);\n  const panelRef = useRef<HTMLDivElement>(null);\n\n  useEffect(() => {\n    if (isExpanded && textareaRef.current) {\n      textareaRef.current.focus();\n    }\n  }, [isExpanded]);\n\n  useEffect(() => {\n    const handleClickOutside = (e: MouseEvent) => {\n      if (panelRef.current && !panelRef.current.contains(e.target as Node)) {\n        setShowExamples(false);\n      }\n    };\n    
document.addEventListener('mousedown', handleClickOutside);\n    return () => document.removeEventListener('mousedown', handleClickOutside);\n  }, []);\n\n  useEffect(() => {\n    const handleKeyDown = (e: KeyboardEvent) => {\n      if (e.key === 'Escape' && isExpanded) {\n        setIsExpanded(false);\n        setShowExamples(false);\n      }\n    };\n    document.addEventListener('keydown', handleKeyDown);\n    return () => document.removeEventListener('keydown', handleKeyDown);\n  }, [isExpanded]);\n\n  const handleRunQuery = useCallback(async () => {\n    if (!query.trim() || isRunning) return;\n\n    if (!graph) {\n      setError('No project loaded. Load a project first.');\n      return;\n    }\n\n    const ready = await isDatabaseReady();\n    if (!ready) {\n      setError('Database not ready. Please wait for loading to complete.');\n      return;\n    }\n\n    setIsRunning(true);\n    setError(null);\n\n    const startTime = performance.now();\n\n    try {\n      const rows = await runQuery(query);\n      const executionTime = performance.now() - startTime;\n\n      // Extract node IDs from results - handles various formats\n      // 1. Array format: any element that looks like a node ID\n      // 2. Object format: any string field whose name contains 'id' (case-insensitive)\n      // 3. 
Values matching node ID pattern: Label:path:name\n      const nodeIdPattern = /^(File|Function|Class|Method|Interface|Folder|CodeElement):/;\n\n      const nodeIds = rows\n        .flatMap(row => {\n          const ids: string[] = [];\n\n          if (Array.isArray(row)) {\n            // Array format - check all elements for node ID patterns\n            row.forEach(val => {\n              if (typeof val === 'string' && (nodeIdPattern.test(val) || val.includes(':'))) {\n                ids.push(val);\n              }\n            });\n          } else if (typeof row === 'object' && row !== null) {\n            // Object format - check fields whose names contain 'id' and values matching patterns\n            Object.entries(row).forEach(([key, val]) => {\n              const keyLower = key.toLowerCase();\n              if (typeof val === 'string') {\n                // Field name contains 'id'\n                if (keyLower.includes('id')) {\n                  ids.push(val);\n                }\n                // Value matches node ID pattern\n                else if (nodeIdPattern.test(val)) {\n                  ids.push(val);\n                }\n              }\n            });\n          }\n\n          return ids;\n        })\n        .filter(Boolean)\n        .filter((id, index, arr) => arr.indexOf(id) === index);\n\n      setQueryResult({ rows, nodeIds, executionTime });\n      setHighlightedNodeIds(new Set(nodeIds));\n    } catch (err) {\n      setError(err instanceof Error ? 
err.message : 'Query execution failed');\n      setQueryResult(null);\n      setHighlightedNodeIds(new Set());\n    } finally {\n      setIsRunning(false);\n    }\n  }, [query, isRunning, graph, isDatabaseReady, runQuery, setHighlightedNodeIds, setQueryResult]);\n\n  const handleKeyDown = (e: React.KeyboardEvent) => {\n    if (e.key === 'Enter' && (e.ctrlKey || e.metaKey)) {\n      e.preventDefault();\n      handleRunQuery();\n    }\n  };\n\n  const handleSelectExample = (exampleQuery: string) => {\n    setQuery(exampleQuery);\n    setShowExamples(false);\n    textareaRef.current?.focus();\n  };\n\n  const handleClose = () => {\n    setIsExpanded(false);\n    setShowExamples(false);\n    clearQueryHighlights();\n    setError(null);\n  };\n\n  const handleClear = () => {\n    setQuery('');\n    clearQueryHighlights();\n    setError(null);\n    textareaRef.current?.focus();\n  };\n\n  if (!isExpanded) {\n    return (\n      <button\n        onClick={() => setIsExpanded(true)}\n        className=\"\n          group absolute bottom-4 left-4 z-20\n          flex items-center gap-2 px-4 py-2.5\n          bg-gradient-to-r from-cyan-500 to-teal-500\n          rounded-xl text-white font-medium text-sm\n          shadow-[0_0_20px_rgba(6,182,212,0.4)]\n          hover:shadow-[0_0_30px_rgba(6,182,212,0.6)]\n          hover:-translate-y-0.5\n          transition-all duration-200\n        \"\n      >\n        <Terminal className=\"w-4 h-4\" />\n        <span>Query</span>\n        {queryResult && queryResult.nodeIds.length > 0 && (\n          <span className=\"\n            px-1.5 py-0.5 ml-1\n            bg-white/20 rounded-md\n            text-xs font-semibold\n          \">\n            {queryResult.nodeIds.length}\n          </span>\n        )}\n      </button>\n    );\n  }\n\n  return (\n    <div\n      ref={panelRef}\n      className=\"\n        absolute bottom-4 left-4 z-20\n        w-[480px] max-w-[calc(100%-2rem)]\n        bg-deep/95 backdrop-blur-md\n        border 
border-cyan-500/30\n        rounded-xl\n        shadow-[0_0_40px_rgba(6,182,212,0.2)]\n        animate-fade-in\n      \"\n    >\n      <div className=\"flex items-center justify-between px-4 py-3 border-b border-border-subtle\">\n        <div className=\"flex items-center gap-2\">\n          <div className=\"w-7 h-7 flex items-center justify-center bg-gradient-to-br from-cyan-500 to-teal-500 rounded-lg\">\n            <Terminal className=\"w-4 h-4 text-white\" />\n          </div>\n          <span className=\"font-medium text-sm\">Cypher Query</span>\n        </div>\n        <button\n          onClick={handleClose}\n          className=\"p-1.5 text-text-muted hover:text-text-primary hover:bg-hover rounded-md transition-colors\"\n        >\n          <X className=\"w-4 h-4\" />\n        </button>\n      </div>\n\n      <div className=\"p-3\">\n        <div className=\"relative\">\n          <textarea\n            ref={textareaRef}\n            value={query}\n            onChange={(e) => setQuery(e.target.value)}\n            onKeyDown={handleKeyDown}\n            placeholder=\"MATCH (n:Function) RETURN n.name, n.filePath LIMIT 10\"\n            rows={3}\n            className=\"\n              w-full px-3 py-2.5\n              bg-surface border border-border-subtle rounded-lg\n              text-sm font-mono text-text-primary\n              placeholder:text-text-muted\n              focus:border-cyan-500/50 focus:ring-2 focus:ring-cyan-500/20\n              outline-none resize-none\n              transition-all\n            \"\n          />\n        </div>\n\n        <div className=\"flex items-center justify-between mt-3\">\n          <div className=\"relative\">\n            <button\n              onClick={() => setShowExamples(!showExamples)}\n              className=\"\n                flex items-center gap-1.5 px-3 py-1.5\n                text-xs text-text-secondary\n                hover:text-text-primary hover:bg-hover\n                rounded-md 
transition-colors\n              \"\n            >\n              <Sparkles className=\"w-3.5 h-3.5\" />\n              <span>Examples</span>\n              <ChevronDown className={`w-3.5 h-3.5 transition-transform ${showExamples ? 'rotate-180' : ''}`} />\n            </button>\n\n            {showExamples && (\n              <div className=\"\n                absolute bottom-full left-0 mb-2\n                w-64 py-1\n                bg-surface border border-border-subtle rounded-lg\n                shadow-xl\n                animate-fade-in\n              \">\n                {EXAMPLE_QUERIES.map((example) => (\n                  <button\n                    key={example.label}\n                    onClick={() => handleSelectExample(example.query)}\n                    className=\"\n                      w-full px-3 py-2 text-left\n                      text-sm text-text-secondary\n                      hover:bg-hover hover:text-text-primary\n                      transition-colors\n                    \"\n                  >\n                    {example.label}\n                  </button>\n                ))}\n              </div>\n            )}\n          </div>\n\n          <div className=\"flex items-center gap-2\">\n            {query && (\n              <button\n                onClick={handleClear}\n                className=\"\n                  px-3 py-1.5\n                  text-xs text-text-secondary\n                  hover:text-text-primary hover:bg-hover\n                  rounded-md transition-colors\n                \"\n              >\n                Clear\n              </button>\n            )}\n            <button\n              onClick={handleRunQuery}\n              disabled={!query.trim() || isRunning}\n              className=\"\n                flex items-center gap-1.5 px-4 py-1.5\n                bg-gradient-to-r from-cyan-500 to-teal-500\n                rounded-md text-white text-sm font-medium\n                
shadow-[0_0_15px_rgba(6,182,212,0.3)]\n                hover:shadow-[0_0_20px_rgba(6,182,212,0.5)]\n                disabled:opacity-50 disabled:cursor-not-allowed disabled:shadow-none\n                transition-all\n              \"\n            >\n              {isRunning ? (\n                <Loader2 className=\"w-3.5 h-3.5 animate-spin\" />\n              ) : (\n                <Play className=\"w-3.5 h-3.5\" />\n              )}\n              <span>Run</span>\n              <kbd className=\"ml-1 px-1 py-0.5 bg-white/20 rounded text-[10px]\">⌘↵</kbd>\n            </button>\n          </div>\n        </div>\n      </div>\n\n      {error && (\n        <div className=\"px-4 py-2 bg-red-500/10 border-t border-red-500/20\">\n          <p className=\"text-xs text-red-400 font-mono\">{error}</p>\n        </div>\n      )}\n\n      {queryResult && !error && (\n        <div className=\"border-t border-cyan-500/20\">\n          <div className=\"px-4 py-2.5 bg-cyan-500/5 flex items-center justify-between\">\n            <div className=\"flex items-center gap-3 text-xs\">\n              <span className=\"text-text-secondary\">\n                <span className=\"text-cyan-400 font-semibold\">{queryResult.rows.length}</span> rows\n              </span>\n              {queryResult.nodeIds.length > 0 && (\n                <span className=\"text-text-secondary\">\n                  <span className=\"text-cyan-400 font-semibold\">{queryResult.nodeIds.length}</span> highlighted\n                </span>\n              )}\n              <span className=\"text-text-muted\">\n                {queryResult.executionTime.toFixed(1)}ms\n              </span>\n            </div>\n            <div className=\"flex items-center gap-2\">\n              {queryResult.nodeIds.length > 0 && (\n                <button\n                  onClick={clearQueryHighlights}\n                  className=\"text-xs text-text-muted hover:text-text-primary transition-colors\"\n                >\n            
      Clear\n                </button>\n              )}\n              <button\n                onClick={() => setShowResults(!showResults)}\n                className=\"flex items-center gap-1 text-xs text-text-muted hover:text-text-primary transition-colors\"\n              >\n                <Table className=\"w-3 h-3\" />\n                {showResults ? <ChevronDown className=\"w-3 h-3\" /> : <ChevronUp className=\"w-3 h-3\" />}\n              </button>\n            </div>\n          </div>\n\n          {showResults && queryResult.rows.length > 0 && (\n            <div className=\"max-h-48 overflow-auto scrollbar-thin border-t border-border-subtle\">\n              <table className=\"w-full text-xs\">\n                <thead className=\"bg-surface sticky top-0\">\n                  <tr>\n                    {Object.keys(queryResult.rows[0]).map((key) => (\n                      <th key={key} className=\"px-3 py-2 text-left text-text-muted font-medium border-b border-border-subtle\">\n                        {key}\n                      </th>\n                    ))}\n                  </tr>\n                </thead>\n                <tbody>\n                  {queryResult.rows.slice(0, 50).map((row, i) => (\n                    <tr key={i} className=\"hover:bg-hover/50 transition-colors\">\n                      {Object.values(row).map((val, j) => (\n                        <td key={j} className=\"px-3 py-1.5 text-text-secondary border-b border-border-subtle/50 font-mono truncate max-w-[200px]\">\n                          {typeof val === 'object' ? JSON.stringify(val) : String(val ?? 
'')}\n                        </td>\n                      ))}\n                    </tr>\n                  ))}\n                </tbody>\n              </table>\n              {queryResult.rows.length > 50 && (\n                <div className=\"px-3 py-2 text-xs text-text-muted bg-surface border-t border-border-subtle\">\n                  Showing 50 of {queryResult.rows.length} rows\n                </div>\n              )}\n            </div>\n          )}\n        </div>\n      )}\n    </div>\n  );\n};\n\n"
  },
  {
    "path": "gitnexus-web/src/components/RightPanel.tsx",
    "content": "import { useState, useRef, useEffect, useCallback } from 'react';\nimport {\n  Send, Square, Sparkles, User,\n  PanelRightClose, Loader2, AlertTriangle, GitBranch\n} from 'lucide-react';\nimport { useAppState } from '../hooks/useAppState';\nimport { ToolCallCard } from './ToolCallCard';\nimport { isProviderConfigured } from '../core/llm/settings-service';\nimport { MarkdownRenderer } from './MarkdownRenderer';\nimport { ProcessesPanel } from './ProcessesPanel';\nexport const RightPanel = () => {\n  const {\n    isRightPanelOpen,\n    setRightPanelOpen,\n    fileContents,\n    graph,\n    addCodeReference,\n    // LLM / chat state\n    chatMessages,\n    isChatLoading,\n    currentToolCalls,\n    agentError,\n    isAgentReady,\n    isAgentInitializing,\n    sendChatMessage,\n    stopChatResponse,\n    clearChat,\n  } = useAppState();\n\n  const [chatInput, setChatInput] = useState('');\n  const [activeTab, setActiveTab] = useState<'chat' | 'processes'>('chat');\n  const textareaRef = useRef<HTMLTextAreaElement>(null);\n  const messagesEndRef = useRef<HTMLDivElement>(null);\n\n  // Auto-scroll to bottom when messages update or while streaming\n  useEffect(() => {\n    if (messagesEndRef.current) {\n      messagesEndRef.current.scrollIntoView({ behavior: 'smooth' });\n    }\n  }, [chatMessages, isChatLoading]);\n\n  const resolveFilePathForUI = useCallback((requestedPath: string): string | null => {\n    const req = requestedPath.replace(/\\\\/g, '/').replace(/^\\.?\\//, '').toLowerCase();\n    if (!req) return null;\n\n    // Exact match first (case-insensitive)\n    for (const key of fileContents.keys()) {\n      const norm = key.replace(/\\\\/g, '/').replace(/^\\.?\\//, '').toLowerCase();\n      if (norm === req) return key;\n    }\n\n    // Ends-with match (best for partial paths)\n    let best: { path: string; score: number } | null = null;\n    for (const key of fileContents.keys()) {\n      const norm = key.replace(/\\\\/g, 
'/').replace(/^\\.?\\//, '').toLowerCase();\n      if (norm.endsWith(req)) {\n        const score = 1000 - norm.length;\n        if (!best || score > best.score) best = { path: key, score };\n      }\n    }\n    return best?.path ?? null;\n  }, [fileContents]);\n\n  const findFileNodeIdForUI = useCallback((filePath: string): string | undefined => {\n    if (!graph) return undefined;\n    const target = filePath.replace(/\\\\/g, '/').replace(/^\\.?\\//, '');\n    const node = graph.nodes.find(\n      (n) => n.label === 'File' && n.properties.filePath.replace(/\\\\/g, '/').replace(/^\\.?\\//, '') === target\n    );\n    return node?.id;\n  }, [graph]);\n\n  const handleGroundingClick = useCallback((inner: string) => {\n    const raw = inner.trim();\n    if (!raw) return;\n\n    let rawPath = raw;\n    let startLine1: number | undefined;\n    let endLine1: number | undefined;\n\n    // Match line:num or line:num-num (supports both hyphen - and en dash –)\n    const lineMatch = raw.match(/^(.*):(\\d+)(?:[-–](\\d+))?$/);\n    if (lineMatch) {\n      rawPath = lineMatch[1].trim();\n      startLine1 = parseInt(lineMatch[2], 10);\n      endLine1 = parseInt(lineMatch[3] || lineMatch[2], 10);\n    }\n\n    const resolvedPath = resolveFilePathForUI(rawPath);\n    if (!resolvedPath) return;\n\n    const nodeId = findFileNodeIdForUI(resolvedPath);\n\n    addCodeReference({\n      filePath: resolvedPath,\n      startLine: startLine1 ? Math.max(0, startLine1 - 1) : undefined,\n      endLine: endLine1 ? Math.max(0, endLine1 - 1) : (startLine1 ? Math.max(0, startLine1 - 1) : undefined),\n      nodeId,\n      label: 'File',\n      name: resolvedPath.split('/').pop() ?? 
resolvedPath,\n      source: 'ai',\n    });\n  }, [addCodeReference, findFileNodeIdForUI, resolveFilePathForUI]);\n\n  // Handler for node grounding: [[Class:View]], [[Function:trigger]], etc.\n  const handleNodeGroundingClick = useCallback((nodeTypeAndName: string) => {\n    const raw = nodeTypeAndName.trim();\n    if (!raw || !graph) return;\n\n    // Parse Type:Name format\n    const match = raw.match(/^(Class|Function|Method|Interface|File|Folder|Variable|Enum|Type|CodeElement):(.+)$/);\n    if (!match) return;\n\n    const [, nodeType, nodeName] = match;\n    const trimmedName = nodeName.trim();\n\n    // Find node in graph by type + name\n    const node = graph.nodes.find(n =>\n      n.label === nodeType &&\n      n.properties.name === trimmedName\n    );\n\n    if (!node) {\n      console.warn(`Node not found: ${nodeType}:${trimmedName}`);\n      return;\n    }\n\n    // 1. Highlight in graph (add to AI citation highlights)\n    // Note: This requires accessing the state setter from parent context\n    // For now, we'll add to code references which triggers the highlight\n\n    // 2. Add to Code Panel (if node has file/line info)\n    if (node.properties.filePath) {\n      const resolvedPath = resolveFilePathForUI(node.properties.filePath);\n      if (resolvedPath) {\n        addCodeReference({\n          filePath: resolvedPath,\n          startLine: node.properties.startLine ? node.properties.startLine - 1 : undefined,\n          endLine: node.properties.endLine ? 
node.properties.endLine - 1 : undefined,\n          nodeId: node.id,\n          label: node.label,\n          name: node.properties.name,\n          source: 'ai',\n        });\n      }\n    }\n  }, [graph, resolveFilePathForUI, addCodeReference]);\n\n  const handleLinkClick = useCallback((href: string) => {\n    if (href.startsWith('code-ref:')) {\n      const inner = decodeURIComponent(href.slice('code-ref:'.length));\n      handleGroundingClick(inner);\n    } else if (href.startsWith('node-ref:')) {\n      const inner = decodeURIComponent(href.slice('node-ref:'.length));\n      handleNodeGroundingClick(inner);\n    }\n  }, [handleGroundingClick, handleNodeGroundingClick]);\n\n\n\n  // Auto-resize textarea as user types\n  const adjustTextareaHeight = useCallback(() => {\n    const textarea = textareaRef.current;\n    if (!textarea) return;\n\n    // Reset height to get accurate scrollHeight\n    textarea.style.height = 'auto';\n    // Set to scrollHeight, capped at max\n    const maxHeight = 160; // ~6 lines\n    const newHeight = Math.min(textarea.scrollHeight, maxHeight);\n    textarea.style.height = `${newHeight}px`;\n    // Show scrollbar if content exceeds max\n    textarea.style.overflowY = textarea.scrollHeight > maxHeight ? 
'auto' : 'hidden';\n  }, []);\n\n  // Adjust height when input changes\n  useEffect(() => {\n    adjustTextareaHeight();\n  }, [chatInput, adjustTextareaHeight]);\n\n  // Chat handlers\n  const handleSendMessage = async () => {\n    if (!chatInput.trim()) return;\n    const text = chatInput.trim();\n    setChatInput('');\n    // Reset textarea height after sending\n    if (textareaRef.current) {\n      textareaRef.current.style.height = '36px';\n      textareaRef.current.style.overflowY = 'hidden';\n    }\n    await sendChatMessage(text);\n  };\n\n  const handleKeyDown = (e: React.KeyboardEvent) => {\n    if (e.key === 'Enter' && !e.shiftKey) {\n      e.preventDefault();\n      handleSendMessage();\n    }\n  };\n\n  const chatSuggestions = [\n    'Explain the project architecture',\n    'What does this project do?',\n    'Show me the most important files',\n    'Find all API handlers',\n  ];\n\n  if (!isRightPanelOpen) return null;\n\n  return (\n    <aside className=\"w-[40%] min-w-[400px] max-w-[600px] flex flex-col bg-deep border-l border-border-subtle animate-slide-in relative z-30 flex-shrink-0\">\n      {/* Header with Tabs */}\n      <div className=\"flex items-center justify-between px-4 py-2 bg-surface border-b border-border-subtle\">\n        <div className=\"flex items-center gap-1\">\n          {/* Chat Tab */}\n          <button\n            onClick={() => setActiveTab('chat')}\n            className={`flex items-center gap-1.5 px-3 py-1.5 rounded-md text-sm font-medium transition-colors ${activeTab === 'chat'\n              ? 
'bg-accent/15 text-accent'\n              : 'text-text-muted hover:text-text-primary hover:bg-hover'\n              }`}\n          >\n            <Sparkles className=\"w-3.5 h-3.5\" />\n            <span>Nexus AI</span>\n          </button>\n\n          {/* Processes Tab */}\n          <button\n            onClick={() => setActiveTab('processes')}\n            className={`flex items-center gap-1.5 px-3 py-1.5 rounded-md text-sm font-medium transition-colors ${activeTab === 'processes'\n              ? 'bg-accent/15 text-accent'\n              : 'text-text-muted hover:text-text-primary hover:bg-hover'\n              }`}\n          >\n            <GitBranch className=\"w-3.5 h-3.5\" />\n            <span>Processes</span>\n            <span className=\"text-[10px] px-1.5 py-0.5 bg-gradient-to-r from-violet-500 to-fuchsia-500 text-white rounded-full font-semibold\">\n              NEW\n            </span>\n          </button>\n        </div>\n\n        {/* Close button */}\n        <button\n          onClick={() => setRightPanelOpen(false)}\n          className=\"p-1.5 text-text-muted hover:text-text-primary hover:bg-hover rounded transition-colors\"\n          title=\"Close Panel\"\n        >\n          <PanelRightClose className=\"w-4 h-4\" />\n        </button>\n      </div>\n\n      {/* Processes Tab */}\n      {activeTab === 'processes' && (\n        <div className=\"flex-1 flex flex-col overflow-hidden\">\n          <ProcessesPanel />\n        </div>\n      )}\n\n      {/* Chat Content - only show when chat tab is active */}\n      {activeTab === 'chat' && (\n        <div className=\"flex-1 flex flex-col overflow-hidden\">\n          {/* Status bar */}\n          <div className=\"flex items-center gap-2.5 px-4 py-3 bg-elevated/50 border-b border-border-subtle\">\n            <div className=\"ml-auto flex items-center gap-2\">\n              {!isAgentReady && (\n                <span className=\"text-[11px] px-2 py-1 rounded-full bg-amber-500/15 text-amber-300 
border border-amber-500/30\">\n                  Configure AI\n                </span>\n              )}\n              {isAgentInitializing && (\n                <span className=\"text-[11px] px-2 py-1 rounded-full bg-surface border border-border-subtle flex items-center gap-1 text-text-muted\">\n                  <Loader2 className=\"w-3 h-3 animate-spin\" /> Connecting\n                </span>\n              )}\n            </div>\n          </div>\n\n          {/* Status / errors */}\n          {agentError && (\n            <div className=\"px-4 py-3 bg-rose-500/10 border-b border-rose-500/30 text-rose-100 text-sm flex items-center gap-2\">\n              <AlertTriangle className=\"w-4 h-4\" />\n              <span>{agentError}</span>\n            </div>\n          )}\n\n\n\n          {/* Messages */}\n          <div className=\"flex-1 overflow-y-auto p-4 scrollbar-thin\">\n            {chatMessages.length === 0 ? (\n              <div className=\"flex flex-col items-center justify-center h-full text-center px-4\">\n                <div className=\"w-14 h-14 mb-4 flex items-center justify-center bg-gradient-to-br from-accent to-node-interface rounded-xl shadow-glow text-2xl\">\n                  🧠\n                </div>\n                <h3 className=\"text-base font-medium mb-2\">\n                  Ask me anything\n                </h3>\n                <p className=\"text-sm text-text-secondary leading-relaxed mb-5\">\n                  I can help you understand the architecture, find functions, or explain connections.\n                </p>\n                <div className=\"flex flex-wrap gap-2 justify-center\">\n                  {chatSuggestions.map((suggestion) => (\n                    <button\n                      key={suggestion}\n                      onClick={() => setChatInput(suggestion)}\n                      className=\"px-3 py-1.5 bg-elevated border border-border-subtle rounded-full text-xs text-text-secondary hover:border-accent 
hover:text-text-primary transition-colors\"\n                    >\n                      {suggestion}\n                    </button>\n                  ))}\n                </div>\n              </div>\n            ) : (\n              <div className=\"flex flex-col gap-6\">\n                {chatMessages.map((message) => (\n                  <div\n                    key={message.id}\n                    className=\"animate-fade-in\"\n                  >\n                    {/* User message - compact label style */}\n                    {message.role === 'user' && (\n                      <div className=\"mb-4\">\n                        <div className=\"flex items-center gap-2 mb-2\">\n                          <User className=\"w-4 h-4 text-text-muted\" />\n                          <span className=\"text-xs font-medium text-text-muted uppercase tracking-wide\">You</span>\n                        </div>\n                        <div className=\"pl-6 text-sm text-text-primary\">\n                          {message.content}\n                        </div>\n                      </div>\n                    )}\n\n                    {/* Assistant message - copilot style */}\n                    {message.role === 'assistant' && (\n                      <div>\n                        <div className=\"flex items-center gap-2 mb-3\">\n                          <Sparkles className=\"w-4 h-4 text-accent\" />\n                          <span className=\"text-xs font-medium text-text-muted uppercase tracking-wide\">Nexus AI</span>\n                          {isChatLoading && message === chatMessages[chatMessages.length - 1] && (\n                            <Loader2 className=\"w-3 h-3 animate-spin text-accent\" />\n                          )}\n                        </div>\n                        <div className=\"pl-6 chat-prose\">\n                          {/* Render steps in order (reasoning, tool calls, content interleaved) */}\n                          
{message.steps && message.steps.length > 0 ? (\n                            <div className=\"space-y-4\">\n                              {message.steps.map((step, index) => (\n                                <div key={step.id}>\n                                  {step.type === 'reasoning' && step.content && (\n                                    <div className=\"text-text-secondary text-sm italic border-l-2 border-text-muted/30 pl-3 mb-3\">\n                                      <MarkdownRenderer\n                                        content={step.content}\n                                        onLinkClick={handleLinkClick}\n                                      />\n                                    </div>\n                                  )}\n                                  {step.type === 'tool_call' && step.toolCall && (\n                                    <div className=\"mb-3\">\n                                      <ToolCallCard toolCall={step.toolCall} defaultExpanded={false} />\n                                    </div>\n                                  )}\n                                  {step.type === 'content' && step.content && (\n                                    <MarkdownRenderer\n                                      content={step.content}\n                                      onLinkClick={handleLinkClick}\n                                      showCopyButton={index === message.steps!.length - 1}\n                                    />\n                                  )}\n                                </div>\n                              ))}\n                            </div>\n                          ) : (\n                            // Fallback: render content + toolCalls separately (old format)\n                            <MarkdownRenderer\n                              content={message.content}\n                              onLinkClick={handleLinkClick}\n                              toolCalls={message.toolCalls}\n     
                         showCopyButton={true}\n                            />\n                          )}\n                        </div>\n                      </div>\n                    )}\n                  </div>\n                ))}\n\n\n              </div>\n            )}\n            {/* Scroll anchor for auto-scroll */}\n            <div ref={messagesEndRef} />\n          </div>\n\n          {/* Input */}\n          <div className=\"p-3 bg-surface border-t border-border-subtle\">\n            <div className=\"flex items-end gap-2 px-3 py-2 bg-elevated border border-border-subtle rounded-xl transition-all focus-within:border-accent focus-within:ring-2 focus-within:ring-accent/20\">\n              <textarea\n                ref={textareaRef}\n                value={chatInput}\n                onChange={(e) => setChatInput(e.target.value)}\n                onKeyDown={handleKeyDown}\n                placeholder=\"Ask about the codebase...\"\n                rows={1}\n                className=\"flex-1 bg-transparent border-none outline-none text-sm text-text-primary placeholder:text-text-muted resize-none min-h-[36px] scrollbar-thin\"\n                style={{ height: '36px', overflowY: 'hidden' }}\n              />\n              <button\n                onClick={clearChat}\n                className=\"px-2 py-1 text-xs text-text-muted hover:text-text-primary transition-colors\"\n                title=\"Clear chat\"\n              >\n                Clear\n              </button>\n              {isChatLoading ? 
(\n                <button\n                  onClick={stopChatResponse}\n                  className=\"w-9 h-9 flex items-center justify-center bg-red-500/80 rounded-md text-white transition-all hover:bg-red-500\"\n                  title=\"Stop response\"\n                >\n                  <Square className=\"w-3.5 h-3.5 fill-current\" />\n                </button>\n              ) : (\n                <button\n                  onClick={handleSendMessage}\n                  disabled={!chatInput.trim() || isAgentInitializing}\n                  className=\"w-9 h-9 flex items-center justify-center bg-accent rounded-md text-white transition-all hover:bg-accent-dim disabled:opacity-50 disabled:cursor-not-allowed\"\n                >\n                  <Send className=\"w-3.5 h-3.5\" />\n                </button>\n              )}\n            </div>\n            {!isAgentReady && !isAgentInitializing && (\n              <div className=\"mt-2 text-xs text-amber-200 flex items-center gap-2\">\n                <AlertTriangle className=\"w-3.5 h-3.5\" />\n                <span>\n                  {isProviderConfigured()\n                    ? 'Initializing AI agent...'\n                    : 'Configure an LLM provider to enable chat.'}\n                </span>\n              </div>\n            )}\n          </div>\n        </div>\n      )}\n    </aside>\n  );\n};\n\n\n\n"
  },
  {
    "path": "gitnexus-web/src/components/SettingsPanel.tsx",
    "content": "import { useState, useEffect, useCallback, useRef, useMemo } from 'react';\nimport { X, Key, Server, Brain, Check, AlertCircle, Eye, EyeOff, RefreshCw, ChevronDown, Loader2, Search } from 'lucide-react';\nimport {\n  loadSettings,\n  saveSettings,\n  getProviderDisplayName,\n  fetchOpenRouterModels,\n} from '../core/llm/settings-service';\nimport type { LLMSettings, LLMProvider } from '../core/llm/types';\n\ninterface SettingsPanelProps {\n  isOpen: boolean;\n  onClose: () => void;\n  onSettingsSaved?: () => void;\n  backendUrl?: string;\n  isBackendConnected?: boolean;\n  onBackendUrlChange?: (url: string) => void;\n}\n\n/**\n * Searchable combobox for OpenRouter model selection\n */\ninterface OpenRouterModelComboboxProps {\n  value: string;\n  onChange: (model: string) => void;\n  models: Array<{ id: string; name: string }>;\n  isLoading: boolean;\n  onLoadModels: () => void;\n}\n\nconst OpenRouterModelCombobox = ({ value, onChange, models, isLoading, onLoadModels }: OpenRouterModelComboboxProps) => {\n  const [isOpen, setIsOpen] = useState(false);\n  const [searchTerm, setSearchTerm] = useState('');\n  const inputRef = useRef<HTMLInputElement>(null);\n  const containerRef = useRef<HTMLDivElement>(null);\n\n  // Filter models based on search term\n  const filteredModels = useMemo(() => {\n    if (!searchTerm.trim()) return models;\n    const lower = searchTerm.toLowerCase();\n    return models.filter(m =>\n      m.id.toLowerCase().includes(lower) ||\n      m.name.toLowerCase().includes(lower)\n    );\n  }, [models, searchTerm]);\n\n  // Find display name for current value\n  const displayValue = useMemo(() => {\n    if (!value) return '';\n    const found = models.find(m => m.id === value);\n    return found ? 
found.name : value;\n  }, [value, models]);\n\n  // Close dropdown when clicking outside\n  useEffect(() => {\n    const handleClickOutside = (e: MouseEvent) => {\n      if (containerRef.current && !containerRef.current.contains(e.target as Node)) {\n        setIsOpen(false);\n        setSearchTerm('');\n      }\n    };\n    document.addEventListener('mousedown', handleClickOutside);\n    return () => document.removeEventListener('mousedown', handleClickOutside);\n  }, []);\n\n  // Load models when opening\n  const handleOpen = () => {\n    setIsOpen(true);\n    if (models.length === 0 && !isLoading) {\n      onLoadModels();\n    }\n    setTimeout(() => inputRef.current?.focus(), 10);\n  };\n\n  const handleSelect = (modelId: string) => {\n    onChange(modelId);\n    setIsOpen(false);\n    setSearchTerm('');\n  };\n\n  const handleInputChange = (e: React.ChangeEvent<HTMLInputElement>) => {\n    const val = e.target.value;\n    setSearchTerm(val);\n    // Also allow direct typing of model ID\n    if (val && models.length === 0) {\n      onChange(val);\n    }\n  };\n\n  const handleKeyDown = (e: React.KeyboardEvent) => {\n    if (e.key === 'Enter' && searchTerm) {\n      // If exact match in filtered, select it; otherwise use raw input\n      const exact = filteredModels.find(m => m.id.toLowerCase() === searchTerm.toLowerCase());\n      if (exact) {\n        handleSelect(exact.id);\n      } else if (filteredModels.length === 1) {\n        handleSelect(filteredModels[0].id);\n      } else {\n        // Allow custom model ID input\n        onChange(searchTerm);\n        setIsOpen(false);\n        setSearchTerm('');\n      }\n    } else if (e.key === 'Escape') {\n      setIsOpen(false);\n      setSearchTerm('');\n    }\n  };\n\n  return (\n    <div ref={containerRef} className=\"relative\">\n      {/* Main input/button */}\n      <div\n        onClick={handleOpen}\n        className={`w-full px-4 py-3 bg-elevated border rounded-xl cursor-pointer transition-all flex 
items-center gap-2\n          ${isOpen ? 'border-accent ring-2 ring-accent/20' : 'border-border-subtle hover:border-accent/50'}`}\n      >\n        {isOpen ? (\n          <input\n            ref={inputRef}\n            type=\"text\"\n            value={searchTerm}\n            onChange={handleInputChange}\n            onKeyDown={handleKeyDown}\n            placeholder=\"Search or type model ID...\"\n            className=\"flex-1 bg-transparent text-text-primary placeholder:text-text-muted outline-none font-mono text-sm\"\n            onClick={e => e.stopPropagation()}\n          />\n        ) : (\n          <span className={`flex-1 font-mono text-sm truncate ${value ? 'text-text-primary' : 'text-text-muted'}`}>\n            {displayValue || 'Select or type a model...'}\n          </span>\n        )}\n        <div className=\"flex items-center gap-1\">\n          {isLoading && <Loader2 className=\"w-4 h-4 animate-spin text-text-muted\" />}\n          <ChevronDown className={`w-4 h-4 text-text-muted transition-transform ${isOpen ? 'rotate-180' : ''}`} />\n        </div>\n      </div>\n\n      {/* Dropdown */}\n      {isOpen && (\n        <div className=\"absolute z-50 w-full mt-1 bg-elevated border border-border-subtle rounded-xl shadow-xl overflow-hidden\">\n          {isLoading ? (\n            <div className=\"px-4 py-6 text-center text-text-muted text-sm flex items-center justify-center gap-2\">\n              <Loader2 className=\"w-4 h-4 animate-spin\" />\n              Loading models...\n            </div>\n          ) : filteredModels.length === 0 ? (\n            <div className=\"px-4 py-4 text-center\">\n              {models.length === 0 ? (\n                <div className=\"text-text-muted text-sm\">\n                  <Search className=\"w-5 h-5 mx-auto mb-2 opacity-50\" />\n                  <p>Type a model ID or press Enter</p>\n                  <p className=\"text-xs mt-1\">e.g. 
openai/gpt-4o</p>\n                </div>\n              ) : (\n                <div className=\"text-text-muted text-sm\">\n                  <p>No models match \"{searchTerm}\"</p>\n                  <p className=\"text-xs mt-1\">Press Enter to use as custom ID</p>\n                </div>\n              )}\n            </div>\n          ) : (\n            <div className=\"max-h-64 overflow-y-auto\">\n              {filteredModels.slice(0, 50).map(model => (\n                <button\n                  key={model.id}\n                  onClick={() => handleSelect(model.id)}\n                  className={`w-full px-4 py-2.5 text-left hover:bg-hover transition-colors flex flex-col\n                    ${model.id === value ? 'bg-accent/10' : ''}`}\n                >\n                  <span className=\"text-text-primary text-sm truncate\">{model.name}</span>\n                  <span className=\"text-text-muted text-xs font-mono truncate\">{model.id}</span>\n                </button>\n              ))}\n              {filteredModels.length > 50 && (\n                <div className=\"px-4 py-2 text-xs text-text-muted text-center border-t border-border-subtle\">\n                  +{filteredModels.length - 50} more • Refine your search\n                </div>\n              )}\n            </div>\n          )}\n        </div>\n      )}\n    </div>\n  );\n};\n\n/**\n * Check connection to local Ollama instance\n */\nconst checkOllamaStatus = async (baseUrl: string): Promise<{ ok: boolean; error: string | null }> => {\n  try {\n    const response = await fetch(`${baseUrl}/api/tags`, {\n      method: 'GET',\n      headers: { 'Content-Type': 'application/json' },\n    });\n\n    if (!response.ok) {\n      if (response.status === 0 || response.status === 404) {\n        return { ok: false, error: 'Cannot connect to Ollama. 
Make sure it\\'s running with `ollama serve`' };\n      }\n      return { ok: false, error: `Ollama API error: ${response.status}` };\n    }\n\n    return { ok: true, error: null };\n  } catch (error) {\n    return {\n      ok: false,\n      error: 'Cannot connect to Ollama. Make sure it\\'s running with `ollama serve`'\n    };\n  }\n};\n\nexport const SettingsPanel = ({ isOpen, onClose, onSettingsSaved, backendUrl, isBackendConnected, onBackendUrlChange }: SettingsPanelProps) => {\n  const [settings, setSettings] = useState<LLMSettings>(loadSettings);\n  const [showApiKey, setShowApiKey] = useState<Record<string, boolean>>({});\n  const [saveStatus, setSaveStatus] = useState<'idle' | 'saved' | 'error'>('idle');\n  // Ollama connection state\n  const [ollamaError, setOllamaError] = useState<string | null>(null);\n  const [isCheckingOllama, setIsCheckingOllama] = useState(false);\n  // OpenRouter models state\n  const [openRouterModels, setOpenRouterModels] = useState<Array<{ id: string; name: string }>>([]);\n  const [isLoadingModels, setIsLoadingModels] = useState(false);\n\n  // Load settings when panel opens\n  useEffect(() => {\n    if (isOpen) {\n      setSettings(loadSettings());\n      setSaveStatus('idle');\n      setOllamaError(null);\n    }\n  }, [isOpen]);\n\n  // Check Ollama connection when provider is selected or base URL changes\n  const checkOllamaConnection = useCallback(async (baseUrl: string) => {\n    setIsCheckingOllama(true);\n    setOllamaError(null);\n\n    const { error } = await checkOllamaStatus(baseUrl);\n    setIsCheckingOllama(false);\n    setOllamaError(error);\n  }, []);\n\n  // Load OpenRouter models\n  const loadOpenRouterModels = useCallback(async () => {\n    setIsLoadingModels(true);\n    const models = await fetchOpenRouterModels();\n    setOpenRouterModels(models);\n    setIsLoadingModels(false);\n  }, []);\n\n  useEffect(() => {\n    if (settings.activeProvider === 'ollama') {\n      const baseUrl = settings.ollama?.baseUrl 
?? 'http://localhost:11434';\n      const timer = setTimeout(() => {\n        checkOllamaConnection(baseUrl);\n      }, 300);\n      return () => clearTimeout(timer);\n    }\n  }, [settings.ollama?.baseUrl, settings.activeProvider, checkOllamaConnection]);\n\n  const handleProviderChange = (provider: LLMProvider) => {\n    setSettings(prev => ({ ...prev, activeProvider: provider }));\n  };\n\n  const handleSave = () => {\n    try {\n      saveSettings(settings);\n      setSaveStatus('saved');\n      onSettingsSaved?.();\n      setTimeout(() => setSaveStatus('idle'), 2000);\n    } catch {\n      setSaveStatus('error');\n    }\n  };\n\n  const toggleApiKeyVisibility = (key: string) => {\n    setShowApiKey(prev => ({ ...prev, [key]: !prev[key] }));\n  };\n\n  if (!isOpen) return null;\n\n  const providers: LLMProvider[] = ['openai', 'gemini', 'anthropic', 'azure-openai', 'ollama', 'openrouter'];\n\n\n  return (\n    <div className=\"fixed inset-0 z-50 flex items-center justify-center\">\n      {/* Backdrop */}\n      <div\n        className=\"absolute inset-0 bg-black/60 backdrop-blur-sm\"\n        onClick={onClose}\n      />\n\n      {/* Panel */}\n      <div className=\"relative bg-surface border border-border-subtle rounded-2xl shadow-2xl max-w-lg w-full mx-4 overflow-hidden max-h-[90vh] flex flex-col\">\n        {/* Header */}\n        <div className=\"flex items-center justify-between px-6 py-4 border-b border-border-subtle bg-elevated/50\">\n          <div className=\"flex items-center gap-3\">\n            <div className=\"w-10 h-10 flex items-center justify-center bg-accent/20 rounded-xl\">\n              <Brain className=\"w-5 h-5 text-accent\" />\n            </div>\n            <div>\n              <h2 className=\"text-lg font-semibold text-text-primary\">AI Settings</h2>\n              <p className=\"text-xs text-text-muted\">Configure your LLM provider</p>\n            </div>\n          </div>\n          <button\n            onClick={onClose}\n            
className=\"p-2 text-text-muted hover:text-text-primary hover:bg-hover rounded-lg transition-colors\"\n          >\n            <X className=\"w-5 h-5\" />\n          </button>\n        </div>\n\n        {/* Content */}\n        <div className=\"flex-1 overflow-y-auto p-6 space-y-6\">\n          {/* Local Server */}\n          {backendUrl !== undefined && onBackendUrlChange && (\n            <div className=\"space-y-3\">\n              <label className=\"block text-sm font-medium text-text-secondary\">\n                Local Server\n              </label>\n              <div className=\"space-y-2\">\n                <div className=\"flex items-center gap-2 mb-2\">\n                  <Server className=\"w-4 h-4 text-text-muted\" />\n                  <span className=\"text-sm text-text-secondary\">Backend URL</span>\n                  <span className={`w-2 h-2 rounded-full ${isBackendConnected ? 'bg-green-400' : 'bg-red-400'}`} />\n                  <span className=\"text-xs text-text-muted\">\n                    {isBackendConnected ? 
'Connected' : 'Not connected'}\n                  </span>\n                </div>\n                <input\n                  type=\"url\"\n                  value={backendUrl}\n                  onChange={(e) => onBackendUrlChange(e.target.value)}\n                  placeholder=\"http://localhost:4747\"\n                  className=\"w-full px-4 py-3 bg-elevated border border-border-subtle rounded-xl text-text-primary placeholder:text-text-muted focus:border-accent focus:ring-2 focus:ring-accent/20 outline-none transition-all font-mono text-sm\"\n                />\n                <p className=\"text-xs text-text-muted\">\n                  Run <code className=\"px-1 py-0.5 bg-elevated rounded\">gitnexus serve</code> to start the local server\n                </p>\n              </div>\n            </div>\n          )}\n\n          {/* Provider Selection */}\n          <div className=\"space-y-3\">\n            <label className=\"block text-sm font-medium text-text-secondary\">\n              Provider\n            </label>\n            <div className=\"grid grid-cols-1 sm:grid-cols-3 gap-3\">\n              {providers.map(provider => (\n                <button\n                  key={provider}\n                  onClick={() => handleProviderChange(provider)}\n                  className={`\n                    flex items-center gap-3 p-4 rounded-xl border-2 transition-all\n                    ${settings.activeProvider === provider\n                      ? 'border-accent bg-accent/10 text-text-primary'\n                      : 'border-border-subtle bg-elevated hover:border-accent/50 text-text-secondary'\n                    }\n                  `}\n                >\n                  <div className={`\n                    w-8 h-8 rounded-lg flex items-center justify-center text-lg\n                    ${settings.activeProvider === provider ? 'bg-accent/20' : 'bg-surface'}\n                  `}>\n                    {provider === 'openai' ? 
'🤖' : provider === 'gemini' ? '💎' : provider === 'anthropic' ? '🧠' : provider === 'ollama' ? '🦙' : provider === 'openrouter' ? '🌐' : '☁️'}\n                  </div>\n                  <span className=\"font-medium\">{getProviderDisplayName(provider)}</span>\n                </button>\n              ))}\n            </div>\n          </div>\n\n          {/* OpenAI Settings */}\n          {settings.activeProvider === 'openai' && (\n            <div className=\"space-y-4 animate-fade-in\">\n              <div className=\"space-y-2\">\n                <label className=\"flex items-center gap-2 text-sm font-medium text-text-secondary\">\n                  <Key className=\"w-4 h-4\" />\n                  API Key\n                </label>\n                <div className=\"relative\">\n                  <input\n                    type={showApiKey['openai'] ? 'text' : 'password'}\n                    value={settings.openai?.apiKey ?? ''}\n                    onChange={e => setSettings(prev => ({\n                      ...prev,\n                      openai: { ...prev.openai!, apiKey: e.target.value }\n                    }))}\n                    placeholder=\"Enter your OpenAI API key\"\n                    className=\"w-full px-4 py-3 pr-12 bg-elevated border border-border-subtle rounded-xl text-text-primary placeholder:text-text-muted focus:border-accent focus:ring-2 focus:ring-accent/20 outline-none transition-all\"\n                  />\n                  <button\n                    type=\"button\"\n                    onClick={() => toggleApiKeyVisibility('openai')}\n                    className=\"absolute right-3 top-1/2 -translate-y-1/2 p-1 text-text-muted hover:text-text-primary transition-colors\"\n                  >\n                    {showApiKey['openai'] ? 
<EyeOff className=\"w-4 h-4\" /> : <Eye className=\"w-4 h-4\" />}\n                  </button>\n                </div>\n                <p className=\"text-xs text-text-muted\">\n                  Get your API key from{' '}\n                  <a\n                    href=\"https://platform.openai.com/api-keys\"\n                    target=\"_blank\"\n                    rel=\"noopener noreferrer\"\n                    className=\"text-accent hover:underline\"\n                  >\n                    OpenAI Platform\n                  </a>\n                </p>\n              </div>\n\n              <div className=\"space-y-2\">\n                <label className=\"text-sm font-medium text-text-secondary\">Model</label>\n                <input\n                  type=\"text\"\n                  value={settings.openai?.model ?? 'gpt-5.2-chat'}\n                  onChange={e => setSettings(prev => ({\n                    ...prev,\n                    openai: { ...prev.openai!, model: e.target.value }\n                  }))}\n                  placeholder=\"e.g., gpt-4o, gpt-4-turbo, gpt-3.5-turbo\"\n                  className=\"w-full px-4 py-3 bg-elevated border border-border-subtle rounded-xl text-text-primary placeholder:text-text-muted focus:border-accent focus:ring-2 focus:ring-accent/20 outline-none transition-all font-mono text-sm\"\n                />\n              </div>\n\n              <div className=\"space-y-2\">\n                <label className=\"flex items-center gap-2 text-sm font-medium text-text-secondary\">\n                  <Server className=\"w-4 h-4\" />\n                  Base URL <span className=\"text-text-muted font-normal\">(optional)</span>\n                </label>\n                <input\n                  type=\"url\"\n                  value={settings.openai?.baseUrl ?? 
''}\n                  onChange={e => setSettings(prev => ({\n                    ...prev,\n                    openai: { ...prev.openai!, baseUrl: e.target.value }\n                  }))}\n                  placeholder=\"https://api.openai.com/v1 (default)\"\n                  className=\"w-full px-4 py-3 bg-elevated border border-border-subtle rounded-xl text-text-primary placeholder:text-text-muted focus:border-accent focus:ring-2 focus:ring-accent/20 outline-none transition-all\"\n                />\n                <p className=\"text-xs text-text-muted\">\n                  Leave empty to use the default OpenAI API. Set a custom URL for proxies or compatible APIs.\n                </p>\n              </div>\n            </div>\n          )}\n\n          {/* Gemini Settings */}\n          {settings.activeProvider === 'gemini' && (\n            <div className=\"space-y-4 animate-fade-in\">\n              <div className=\"space-y-2\">\n                <label className=\"flex items-center gap-2 text-sm font-medium text-text-secondary\">\n                  <Key className=\"w-4 h-4\" />\n                  API Key\n                </label>\n                <div className=\"relative\">\n                  <input\n                    type={showApiKey['gemini'] ? 'text' : 'password'}\n                    value={settings.gemini?.apiKey ?? 
''}\n                    onChange={e => setSettings(prev => ({\n                      ...prev,\n                      gemini: { ...prev.gemini!, apiKey: e.target.value }\n                    }))}\n                    placeholder=\"Enter your Google AI API key\"\n                    className=\"w-full px-4 py-3 pr-12 bg-elevated border border-border-subtle rounded-xl text-text-primary placeholder:text-text-muted focus:border-accent focus:ring-2 focus:ring-accent/20 outline-none transition-all\"\n                  />\n                  <button\n                    type=\"button\"\n                    onClick={() => toggleApiKeyVisibility('gemini')}\n                    className=\"absolute right-3 top-1/2 -translate-y-1/2 p-1 text-text-muted hover:text-text-primary transition-colors\"\n                  >\n                    {showApiKey['gemini'] ? <EyeOff className=\"w-4 h-4\" /> : <Eye className=\"w-4 h-4\" />}\n                  </button>\n                </div>\n                <p className=\"text-xs text-text-muted\">\n                  Get your API key from{' '}\n                  <a\n                    href=\"https://aistudio.google.com/app/apikey\"\n                    target=\"_blank\"\n                    rel=\"noopener noreferrer\"\n                    className=\"text-accent hover:underline\"\n                  >\n                    Google AI Studio\n                  </a>\n                </p>\n              </div>\n\n              <div className=\"space-y-2\">\n                <label className=\"text-sm font-medium text-text-secondary\">Model</label>\n                <input\n                  type=\"text\"\n                  value={settings.gemini?.model ?? 
'gemini-2.0-flash'}\n                  onChange={e => setSettings(prev => ({\n                    ...prev,\n                    gemini: { ...prev.gemini!, model: e.target.value }\n                  }))}\n                  placeholder=\"e.g., gemini-2.0-flash, gemini-1.5-pro\"\n                  className=\"w-full px-4 py-3 bg-elevated border border-border-subtle rounded-xl text-text-primary placeholder:text-text-muted focus:border-accent focus:ring-2 focus:ring-accent/20 outline-none transition-all font-mono text-sm\"\n                />\n              </div>\n            </div>\n          )}\n\n          {/* Anthropic Settings */}\n          {settings.activeProvider === 'anthropic' && (\n            <div className=\"space-y-4 animate-fade-in\">\n              <div className=\"space-y-2\">\n                <label className=\"flex items-center gap-2 text-sm font-medium text-text-secondary\">\n                  <Key className=\"w-4 h-4\" />\n                  API Key\n                </label>\n                <div className=\"relative\">\n                  <input\n                    type={showApiKey['anthropic'] ? 'text' : 'password'}\n                    value={settings.anthropic?.apiKey ?? 
''}\n                    onChange={e => setSettings(prev => ({\n                      ...prev,\n                      anthropic: { ...prev.anthropic!, apiKey: e.target.value }\n                    }))}\n                    placeholder=\"Enter your Anthropic API key\"\n                    className=\"w-full px-4 py-3 pr-12 bg-elevated border border-border-subtle rounded-xl text-text-primary placeholder:text-text-muted focus:border-accent focus:ring-2 focus:ring-accent/20 outline-none transition-all\"\n                  />\n                  <button\n                    type=\"button\"\n                    onClick={() => toggleApiKeyVisibility('anthropic')}\n                    className=\"absolute right-3 top-1/2 -translate-y-1/2 p-1 text-text-muted hover:text-text-primary transition-colors\"\n                  >\n                    {showApiKey['anthropic'] ? <EyeOff className=\"w-4 h-4\" /> : <Eye className=\"w-4 h-4\" />}\n                  </button>\n                </div>\n                <p className=\"text-xs text-text-muted\">\n                  Get your API key from{' '}\n                  <a\n                    href=\"https://console.anthropic.com/settings/keys\"\n                    target=\"_blank\"\n                    rel=\"noopener noreferrer\"\n                    className=\"text-accent hover:underline\"\n                  >\n                    Anthropic Console\n                  </a>\n                </p>\n              </div>\n\n              <div className=\"space-y-2\">\n                <label className=\"text-sm font-medium text-text-secondary\">Model</label>\n                <input\n                  type=\"text\"\n                  value={settings.anthropic?.model ?? 
'claude-sonnet-4-20250514'}\n                  onChange={e => setSettings(prev => ({\n                    ...prev,\n                    anthropic: { ...prev.anthropic!, model: e.target.value }\n                  }))}\n                  placeholder=\"e.g., claude-sonnet-4-20250514, claude-3-opus\"\n                  className=\"w-full px-4 py-3 bg-elevated border border-border-subtle rounded-xl text-text-primary placeholder:text-text-muted focus:border-accent focus:ring-2 focus:ring-accent/20 outline-none transition-all font-mono text-sm\"\n                />\n              </div>\n            </div>\n          )}\n\n          {/* Azure OpenAI Settings */}\n          {settings.activeProvider === 'azure-openai' && (\n            <div className=\"space-y-4 animate-fade-in\">\n              <div className=\"space-y-2\">\n                <label className=\"flex items-center gap-2 text-sm font-medium text-text-secondary\">\n                  <Key className=\"w-4 h-4\" />\n                  API Key\n                </label>\n                <div className=\"relative\">\n                  <input\n                    type={showApiKey['azure'] ? 'text' : 'password'}\n                    value={settings.azureOpenAI?.apiKey ?? 
''}\n                    onChange={e => setSettings(prev => ({\n                      ...prev,\n                      azureOpenAI: { ...prev.azureOpenAI!, apiKey: e.target.value }\n                    }))}\n                    placeholder=\"Enter your Azure OpenAI API key\"\n                    className=\"w-full px-4 py-3 pr-12 bg-elevated border border-border-subtle rounded-xl text-text-primary placeholder:text-text-muted focus:border-accent focus:ring-2 focus:ring-accent/20 outline-none transition-all\"\n                  />\n                  <button\n                    type=\"button\"\n                    onClick={() => toggleApiKeyVisibility('azure')}\n                    className=\"absolute right-3 top-1/2 -translate-y-1/2 p-1 text-text-muted hover:text-text-primary transition-colors\"\n                  >\n                    {showApiKey['azure'] ? <EyeOff className=\"w-4 h-4\" /> : <Eye className=\"w-4 h-4\" />}\n                  </button>\n                </div>\n              </div>\n\n              <div className=\"space-y-2\">\n                <label className=\"flex items-center gap-2 text-sm font-medium text-text-secondary\">\n                  <Server className=\"w-4 h-4\" />\n                  Endpoint\n                </label>\n                <input\n                  type=\"url\"\n                  value={settings.azureOpenAI?.endpoint ?? 
''}\n                  onChange={e => setSettings(prev => ({\n                    ...prev,\n                    azureOpenAI: { ...prev.azureOpenAI!, endpoint: e.target.value }\n                  }))}\n                  placeholder=\"https://your-resource.openai.azure.com\"\n                  className=\"w-full px-4 py-3 bg-elevated border border-border-subtle rounded-xl text-text-primary placeholder:text-text-muted focus:border-accent focus:ring-2 focus:ring-accent/20 outline-none transition-all\"\n                />\n              </div>\n\n              <div className=\"space-y-2\">\n                <label className=\"text-sm font-medium text-text-secondary\">Deployment Name</label>\n                <input\n                  type=\"text\"\n                  value={settings.azureOpenAI?.deploymentName ?? ''}\n                  onChange={e => setSettings(prev => ({\n                    ...prev,\n                    azureOpenAI: { ...prev.azureOpenAI!, deploymentName: e.target.value }\n                  }))}\n                  placeholder=\"e.g., gpt-4o-deployment\"\n                  className=\"w-full px-4 py-3 bg-elevated border border-border-subtle rounded-xl text-text-primary placeholder:text-text-muted focus:border-accent focus:ring-2 focus:ring-accent/20 outline-none transition-all\"\n                />\n              </div>\n\n              <div className=\"grid grid-cols-2 gap-4\">\n                <div className=\"space-y-2\">\n                  <label className=\"text-sm font-medium text-text-secondary\">Model</label>\n                  <input\n                    type=\"text\"\n                    value={settings.azureOpenAI?.model ?? 
'gpt-4o'}\n                    onChange={e => setSettings(prev => ({\n                      ...prev,\n                      azureOpenAI: { ...prev.azureOpenAI!, model: e.target.value }\n                    }))}\n                    placeholder=\"gpt-4o\"\n                    className=\"w-full px-4 py-3 bg-elevated border border-border-subtle rounded-xl text-text-primary placeholder:text-text-muted focus:border-accent focus:ring-2 focus:ring-accent/20 outline-none transition-all\"\n                  />\n                </div>\n\n                <div className=\"space-y-2\">\n                  <label className=\"text-sm font-medium text-text-secondary\">API Version</label>\n                  <input\n                    type=\"text\"\n                    value={settings.azureOpenAI?.apiVersion ?? '2024-08-01-preview'}\n                    onChange={e => setSettings(prev => ({\n                      ...prev,\n                      azureOpenAI: { ...prev.azureOpenAI!, apiVersion: e.target.value }\n                    }))}\n                    placeholder=\"2024-08-01-preview\"\n                    className=\"w-full px-4 py-3 bg-elevated border border-border-subtle rounded-xl text-text-primary placeholder:text-text-muted focus:border-accent focus:ring-2 focus:ring-accent/20 outline-none transition-all\"\n                  />\n                </div>\n              </div>\n\n              <p className=\"text-xs text-text-muted\">\n                Configure your Azure OpenAI service in the{' '}\n                <a\n                  href=\"https://portal.azure.com/#view/Microsoft_Azure_ProjectOxford/CognitiveServicesHub/~/OpenAI\"\n                  target=\"_blank\"\n                  rel=\"noopener noreferrer\"\n                  className=\"text-accent hover:underline\"\n                >\n                  Azure Portal\n                </a>\n              </p>\n            </div>\n          )}\n\n          {/* Ollama Settings */}\n          {settings.activeProvider 
=== 'ollama' && (\n            <div className=\"space-y-4 animate-fade-in\">\n              {/* How to run Ollama */}\n              <div className=\"p-3 bg-amber-500/10 border border-amber-500/30 rounded-xl\">\n                <p className=\"text-xs text-amber-300 leading-relaxed\">\n                  <span className=\"font-medium\">📋 Quick Start:</span> Install Ollama from{' '}\n                  <a\n                    href=\"https://ollama.ai\"\n                    target=\"_blank\"\n                    rel=\"noopener noreferrer\"\n                    className=\"text-accent hover:underline\"\n                  >\n                    ollama.ai\n                  </a>, then run:\n                </p>\n                <code className=\"block mt-2 px-3 py-2 bg-black/30 rounded-lg text-amber-200 font-mono text-sm\">\n                  ollama serve\n                </code>\n              </div>\n\n              <div className=\"space-y-2\">\n                <label className=\"flex items-center gap-2 text-sm font-medium text-text-secondary\">\n                  <Server className=\"w-4 h-4\" />\n                  Base URL\n                </label>\n                <div className=\"flex gap-2\">\n                  <input\n                    type=\"url\"\n                    value={settings.ollama?.baseUrl ?? 
'http://localhost:11434'}\n                    onChange={e => setSettings(prev => ({\n                      ...prev,\n                      ollama: { ...prev.ollama!, baseUrl: e.target.value }\n                    }))}\n                    placeholder=\"http://localhost:11434\"\n                    className=\"flex-1 px-4 py-3 bg-elevated border border-border-subtle rounded-xl text-text-primary placeholder:text-text-muted focus:border-accent focus:ring-2 focus:ring-accent/20 outline-none transition-all font-mono text-sm\"\n                  />\n                  <button\n                    type=\"button\"\n                    onClick={() => checkOllamaConnection(settings.ollama?.baseUrl ?? 'http://localhost:11434')}\n                    disabled={isCheckingOllama}\n                    className=\"px-3 py-3 bg-elevated border border-border-subtle rounded-xl text-text-secondary hover:text-text-primary hover:border-accent/50 transition-colors disabled:opacity-50\"\n                    title=\"Check connection\"\n                  >\n                    <RefreshCw className={`w-4 h-4 ${isCheckingOllama ? 
'animate-spin' : ''}`} />\n                  </button>\n                </div>\n                <p className=\"text-xs text-text-muted\">\n                  Default port is <code className=\"px-1 py-0.5 bg-elevated rounded\">11434</code>.\n                </p>\n              </div>\n\n              <div className=\"space-y-2\">\n                <label className=\"text-sm font-medium text-text-secondary\">Model</label>\n\n                {ollamaError && !isCheckingOllama && (\n                  <div className=\"p-2 bg-red-500/10 border border-red-500/30 rounded-lg\">\n                    <p className=\"text-xs text-red-400 flex items-center gap-1\">\n                      <AlertCircle className=\"w-3 h-3\" />\n                      {ollamaError}\n                    </p>\n                  </div>\n                )}\n\n                <input\n                  type=\"text\"\n                  value={settings.ollama?.model ?? ''}\n                  onChange={e => setSettings(prev => ({\n                    ...prev,\n                    ollama: { ...prev.ollama!, model: e.target.value }\n                  }))}\n                  placeholder=\"e.g., llama3.2, mistral, codellama\"\n                  className=\"w-full px-4 py-3 bg-elevated border border-border-subtle rounded-xl text-text-primary placeholder:text-text-muted focus:border-accent focus:ring-2 focus:ring-accent/20 outline-none transition-all font-mono text-sm\"\n                />\n                <p className=\"text-xs text-text-muted\">\n                  Pull a model with <code className=\"px-1 py-0.5 bg-elevated rounded\">ollama pull llama3.2</code>\n                </p>\n              </div>\n            </div>\n          )}\n\n          {/* OpenRouter Settings */}\n          {settings.activeProvider === 'openrouter' && (\n            <div className=\"space-y-4 animate-fade-in\">\n              <div className=\"space-y-2\">\n                <label className=\"flex items-center gap-2 text-sm font-medium 
text-text-secondary\">\n                  <Key className=\"w-4 h-4\" />\n                  API Key\n                </label>\n                <div className=\"relative\">\n                  <input\n                    type={showApiKey['openrouter'] ? 'text' : 'password'}\n                    value={settings.openrouter?.apiKey ?? ''}\n                    onChange={e => setSettings(prev => ({\n                      ...prev,\n                      openrouter: { ...prev.openrouter!, apiKey: e.target.value }\n                    }))}\n                    placeholder=\"Enter your OpenRouter API key\"\n                    className=\"w-full px-4 py-3 pr-12 bg-elevated border border-border-subtle rounded-xl text-text-primary placeholder:text-text-muted focus:border-accent focus:ring-2 focus:ring-accent/20 outline-none transition-all\"\n                  />\n                  <button\n                    type=\"button\"\n                    onClick={() => toggleApiKeyVisibility('openrouter')}\n                    className=\"absolute right-3 top-1/2 -translate-y-1/2 p-1 text-text-muted hover:text-text-primary transition-colors\"\n                  >\n                    {showApiKey['openrouter'] ? 
<EyeOff className=\"w-4 h-4\" /> : <Eye className=\"w-4 h-4\" />}\n                  </button>\n                </div>\n                <p className=\"text-xs text-text-muted\">\n                  Get your API key from{' '}\n                  <a\n                    href=\"https://openrouter.ai/keys\"\n                    target=\"_blank\"\n                    rel=\"noopener noreferrer\"\n                    className=\"text-accent hover:underline\"\n                  >\n                    OpenRouter Keys\n                  </a>\n                </p>\n              </div>\n\n              <div className=\"space-y-2\">\n                <label className=\"text-sm font-medium text-text-secondary\">Model</label>\n                <OpenRouterModelCombobox\n                  value={settings.openrouter?.model ?? ''}\n                  onChange={(model) => setSettings(prev => ({\n                    ...prev,\n                    openrouter: { ...prev.openrouter!, model }\n                  }))}\n                  models={openRouterModels}\n                  isLoading={isLoadingModels}\n                  onLoadModels={loadOpenRouterModels}\n                />\n                <p className=\"text-xs text-text-muted\">\n                  Browse all models at{' '}\n                  <a\n                    href=\"https://openrouter.ai/models\"\n                    target=\"_blank\"\n                    rel=\"noopener noreferrer\"\n                    className=\"text-accent hover:underline\"\n                  >\n                    OpenRouter Models\n                  </a>\n                </p>\n              </div>\n            </div>\n          )}\n\n\n\n          {/* Privacy Note */}\n          <div className=\"p-4 bg-elevated/50 border border-border-subtle rounded-xl\">\n            <div className=\"flex gap-3\">\n              <div className=\"w-8 h-8 flex items-center justify-center bg-green-500/20 rounded-lg text-green-400 flex-shrink-0\">\n                🔒\n          
    </div>\n              <div className=\"text-xs text-text-muted leading-relaxed\">\n                <span className=\"text-text-secondary font-medium\">Privacy:</span> Your API keys are stored only in your browser's local storage.\n                They're sent directly to the LLM provider when you chat. Your code never leaves your machine.\n              </div>\n            </div>\n          </div>\n        </div>\n\n        {/* Footer */}\n        <div className=\"flex items-center justify-between px-6 py-4 border-t border-border-subtle bg-elevated/30\">\n          <div className=\"flex items-center gap-2 text-sm\">\n            {saveStatus === 'saved' && (\n              <span className=\"flex items-center gap-1.5 text-green-400 animate-fade-in\">\n                <Check className=\"w-4 h-4\" />\n                Settings saved\n              </span>\n            )}\n            {saveStatus === 'error' && (\n              <span className=\"flex items-center gap-1.5 text-red-400 animate-fade-in\">\n                <AlertCircle className=\"w-4 h-4\" />\n                Failed to save\n              </span>\n            )}\n          </div>\n          <div className=\"flex items-center gap-3\">\n            <button\n              onClick={onClose}\n              className=\"px-4 py-2 text-sm text-text-secondary hover:text-text-primary transition-colors\"\n            >\n              Cancel\n            </button>\n            <button\n              onClick={handleSave}\n              className=\"px-5 py-2 bg-accent text-white text-sm font-medium rounded-lg hover:bg-accent-dim transition-colors\"\n            >\n              Save Settings\n            </button>\n          </div>\n        </div>\n      </div>\n    </div>\n  );\n};\n\n"
  },
  {
    "path": "gitnexus-web/src/components/StatusBar.tsx",
    "content": "import { Heart } from 'lucide-react';\nimport { useAppState } from '../hooks/useAppState';\n\nexport const StatusBar = () => {\n  const { graph, progress } = useAppState();\n\n  const nodeCount = graph?.nodes.length ?? 0;\n  const edgeCount = graph?.relationships.length ?? 0;\n\n  // Detect primary language\n  const primaryLanguage = (() => {\n    if (!graph) return null;\n    const languages = graph.nodes\n      .map(n => n.properties.language)\n      .filter(Boolean);\n    if (languages.length === 0) return null;\n\n    const counts = languages.reduce((acc, lang) => {\n      acc[lang!] = (acc[lang!] || 0) + 1;\n      return acc;\n    }, {} as Record<string, number>);\n\n    return Object.entries(counts).sort((a, b) => b[1] - a[1])[0]?.[0];\n  })();\n\n  return (\n    <footer className=\"flex items-center justify-between px-5 py-2 bg-deep border-t border-dashed border-border-subtle text-[11px] text-text-muted\">\n      {/* Left - Status */}\n      <div className=\"flex items-center gap-4\">\n        {progress && progress.phase !== 'complete' ? 
(\n          <>\n            <div className=\"w-28 h-1 bg-elevated rounded-full overflow-hidden\">\n              <div\n                className=\"h-full bg-gradient-to-r from-accent to-node-interface rounded-full transition-all duration-300\"\n                style={{ width: `${progress.percent}%` }}\n              />\n            </div>\n            <span>{progress.message}</span>\n          </>\n        ) : (\n          <div className=\"flex items-center gap-1.5\">\n            <span className=\"w-1.5 h-1.5 bg-node-function rounded-full\" />\n            <span>Ready</span>\n          </div>\n        )}\n      </div>\n\n      {/* Center - Sponsor */}\n      <a\n        href=\"https://github.com/sponsors/abhigyanpatwari\"\n        target=\"_blank\"\n        rel=\"noopener noreferrer\"\n        className=\"group flex items-center gap-2 px-3 py-1 rounded-full bg-pink-500/10 border border-pink-500/20 hover:bg-pink-500/20 hover:border-pink-500/40 hover:scale-[1.02] transition-all duration-200 cursor-pointer\"\n      >\n        <Heart className=\"w-3.5 h-3.5 text-pink-500 fill-pink-500/40 group-hover:fill-pink-500 group-hover:scale-110 transition-all duration-200 animate-pulse\" />\n        <span className=\"text-[11px] font-medium text-pink-400 group-hover:text-pink-300 transition-colors\">Sponsor</span>\n        <span className=\"text-[10px] text-pink-300/50 group-hover:text-pink-300/80 italic hidden md:inline transition-colors\">\n          need to buy some API credits to run SWE-bench 😅\n        </span>\n      </a>\n\n      {/* Right - Stats */}\n      <div className=\"flex items-center gap-3\">\n        {graph && (\n          <>\n            <span>{nodeCount} nodes</span>\n            <span className=\"text-border-default\">•</span>\n            <span>{edgeCount} edges</span>\n            {primaryLanguage && (\n              <>\n                <span className=\"text-border-default\">•</span>\n                <span>{primaryLanguage}</span>\n              </>\n    
        )}\n          </>\n        )}\n      </div>\n    </footer>\n  );\n};\n"
  },
  {
    "path": "gitnexus-web/src/components/ToolCallCard.tsx",
    "content": "/**\n * ToolCallCard Component\n * \n * Displays a tool call with expand/collapse functionality.\n * Shows the tool name, status, and when expanded, the query/args and result.\n */\n\nimport { useState } from 'react';\nimport { ChevronDown, ChevronRight, Sparkles, Check, Loader2, AlertCircle } from 'lucide-react';\nimport type { ToolCallInfo } from '../core/llm/types';\n\ninterface ToolCallCardProps {\n  toolCall: ToolCallInfo;\n  /** Start expanded (useful for in-progress calls) */\n  defaultExpanded?: boolean;\n}\n\n/**\n * Format tool arguments for display\n */\nconst formatArgs = (args: Record<string, unknown>): string => {\n  if (!args || Object.keys(args).length === 0) {\n    return '';\n  }\n\n  // Special handling for Cypher queries\n  if ('cypher' in args && typeof args.cypher === 'string') {\n    let result = '';\n    if ('query' in args && typeof args.query === 'string') {\n      result += `Search: \"${args.query}\"\\n\\n`;\n    }\n    result += args.cypher;\n    return result;\n  }\n\n  // Special handling for search/grep queries\n  if ('query' in args && typeof args.query === 'string') {\n    return args.query;\n  }\n\n  // For other tools, show as formatted JSON\n  return JSON.stringify(args, null, 2);\n};\n\n/**\n * Get status icon and color\n */\nconst getStatusDisplay = (status: ToolCallInfo['status']) => {\n  switch (status) {\n    case 'running':\n      return {\n        icon: <Loader2 className=\"w-3.5 h-3.5 animate-spin\" />,\n        color: 'text-amber-400',\n        bgColor: 'bg-amber-500/10',\n        borderColor: 'border-amber-500/30',\n      };\n    case 'completed':\n      return {\n        icon: <Check className=\"w-3.5 h-3.5\" />,\n        color: 'text-emerald-400',\n        bgColor: 'bg-emerald-500/10',\n        borderColor: 'border-emerald-500/30',\n      };\n    case 'error':\n      return {\n        icon: <AlertCircle className=\"w-3.5 h-3.5\" />,\n        color: 'text-rose-400',\n        bgColor: 'bg-rose-500/10',\n 
       borderColor: 'border-rose-500/30',\n      };\n    default:\n      return {\n        icon: <Sparkles className=\"w-3.5 h-3.5\" />,\n        color: 'text-text-muted',\n        bgColor: 'bg-surface',\n        borderColor: 'border-border-subtle',\n      };\n  }\n};\n\n/**\n * Get a friendly display name for the tool\n */\nconst getToolDisplayName = (name: string): string => {\n  const names: Record<string, string> = {\n    // Current 7-tool architecture\n    'search': '🔍 Search Code',\n    'cypher': '🔗 Cypher Query',\n    'grep': '🔎 Pattern Search',\n    'read': '📄 Read File',\n    'overview': '🗺️ Codebase Overview',\n    'explore': '🔬 Deep Dive',\n    'impact': '💥 Impact Analysis',\n  };\n  return names[name] || name;\n};\n\nexport const ToolCallCard = ({ toolCall, defaultExpanded = false }: ToolCallCardProps) => {\n  const [isExpanded, setIsExpanded] = useState(defaultExpanded);\n  const status = getStatusDisplay(toolCall.status);\n  const formattedArgs = formatArgs(toolCall.args);\n\n  return (\n    <div className={`rounded-lg border ${status.borderColor} ${status.bgColor} overflow-hidden transition-all`}>\n      {/* Header - always visible */}\n      <div\n        role=\"button\"\n        tabIndex={0}\n        onClick={() => setIsExpanded(!isExpanded)}\n        onKeyDown={(e) => { if (e.key === 'Enter' || e.key === ' ') { e.preventDefault(); setIsExpanded(!isExpanded); } }}\n        className=\"w-full flex items-center gap-2 px-3 py-2 text-left hover:bg-white/5 transition-colors cursor-pointer select-none\"\n      >\n        {/* Expand/collapse icon */}\n        <span className=\"text-text-muted\">\n          {isExpanded ? 
<ChevronDown className=\"w-4 h-4\" /> : <ChevronRight className=\"w-4 h-4\" />}\n        </span>\n\n        {/* Tool name */}\n        <span className=\"flex-1 text-sm font-medium text-text-primary\">\n          {getToolDisplayName(toolCall.name)}\n        </span>\n\n        {/* Status indicator */}\n        <span className={`flex items-center gap-1 text-xs ${status.color}`}>\n          {status.icon}\n          <span className=\"capitalize\">{toolCall.status}</span>\n        </span>\n      </div>\n\n      {/* Expanded content */}\n      {isExpanded && (\n        <div className=\"border-t border-border-subtle/50\">\n          {/* Arguments/Query */}\n          {formattedArgs && (\n            <div className=\"px-3 py-2 border-b border-border-subtle/50\">\n              <div className=\"text-[10px] uppercase tracking-wider text-text-muted mb-1.5\">\n                {toolCall.name === 'cypher' ? 'Query' : 'Input'}\n              </div>\n              <pre className=\"text-xs text-text-secondary bg-surface/50 rounded p-2 overflow-x-auto whitespace-pre-wrap font-mono\">\n                {formattedArgs}\n              </pre>\n            </div>\n          )}\n\n          {/* Result */}\n          {toolCall.result && (\n            <div className=\"px-3 py-2\">\n              <div className=\"text-[10px] uppercase tracking-wider text-text-muted mb-1.5\">\n                Result\n              </div>\n              <div className=\"max-h-[400px] overflow-y-auto bg-surface/50 rounded\">\n                <pre className=\"text-xs text-text-secondary p-2 whitespace-pre-wrap font-mono\">\n                  {toolCall.result.length > 3000\n                    ? toolCall.result.slice(0, 3000) + '\\n\\n... 
(truncated)'\n                    : toolCall.result\n                  }\n                </pre>\n              </div>\n            </div>\n          )}\n\n          {/* Loading state for in-progress */}\n          {toolCall.status === 'running' && !toolCall.result && (\n            <div className=\"px-3 py-3 flex items-center gap-2 text-xs text-text-muted\">\n              <Loader2 className=\"w-3 h-3 animate-spin\" />\n              <span>Executing...</span>\n            </div>\n          )}\n        </div>\n      )}\n    </div>\n  );\n};\n\nexport default ToolCallCard;\n"
  },
  {
    "path": "gitnexus-web/src/components/WebGPUFallbackDialog.tsx",
    "content": "import { useState, useEffect } from 'react';\nimport { X, Snail, Rocket, SkipForward } from 'lucide-react';\n\ninterface WebGPUFallbackDialogProps {\n  isOpen: boolean;\n  onClose: () => void;\n  onUseCPU: () => void;\n  onSkip: () => void;\n  nodeCount: number;\n}\n\n/**\n * Fun dialog shown when WebGPU isn't available\n * Lets user choose: CPU fallback (slow) or skip embeddings\n */\nexport const WebGPUFallbackDialog = ({\n  isOpen,\n  onClose,\n  onUseCPU,\n  onSkip,\n  nodeCount,\n}: WebGPUFallbackDialogProps) => {\n  const [isAnimating, setIsAnimating] = useState(true);\n  const [isVisible, setIsVisible] = useState(false);\n\n  useEffect(() => {\n    if (isOpen) {\n      // Trigger animation after mount\n      requestAnimationFrame(() => setIsVisible(true));\n    } else {\n      setIsVisible(false);\n    }\n  }, [isOpen]);\n\n  if (!isOpen) return null;\n\n  // Estimate time based on node count (rough: ~50ms per node on CPU)\n  const estimatedMinutes = Math.ceil((nodeCount * 50) / 60000);\n  const isSmallCodebase = nodeCount < 200;\n\n  return (\n    <div className=\"fixed inset-0 z-50 flex items-center justify-center\">\n      {/* Backdrop */}\n      <div \n        className={`absolute inset-0 bg-black/60 backdrop-blur-sm transition-opacity duration-200 ${isVisible ? 'opacity-100' : 'opacity-0'}`}\n        onClick={onClose}\n      />\n      \n      {/* Dialog */}\n      <div \n        className={`relative bg-surface border border-border-subtle rounded-2xl shadow-2xl max-w-md w-full mx-4 overflow-hidden transition-all duration-200 ${isVisible ? 
'opacity-100 scale-100' : 'opacity-0 scale-95'}`}\n      >\n        {/* Header with scratching emoji */}\n        <div className=\"relative bg-gradient-to-r from-amber-500/20 to-orange-500/20 px-6 py-5 border-b border-border-subtle\">\n          <button\n            onClick={onClose}\n            className=\"absolute top-4 right-4 p-1 text-text-muted hover:text-text-primary transition-colors\"\n          >\n            <X className=\"w-5 h-5\" />\n          </button>\n          \n          <div className=\"flex items-center gap-4\">\n            {/* Animated emoji */}\n            <div \n              className={`text-5xl ${isAnimating ? 'animate-bounce' : ''}`}\n              onAnimationEnd={() => setIsAnimating(false)}\n              onClick={() => setIsAnimating(true)}\n            >\n              🤔\n            </div>\n            <div>\n              <h2 className=\"text-lg font-semibold text-text-primary\">\n                WebGPU said \"nope\"\n              </h2>\n              <p className=\"text-sm text-text-muted mt-0.5\">\n                Your browser doesn't support GPU acceleration\n              </p>\n            </div>\n          </div>\n        </div>\n\n        {/* Content */}\n        <div className=\"px-6 py-5 space-y-4\">\n          <p className=\"text-sm text-text-secondary leading-relaxed\">\n            Couldn't create embeddings with WebGPU, so semantic search (Graph RAG) \n            won't be as smart. The graph still works fine though! 
\n          </p>\n          \n          <div className=\"bg-elevated/50 rounded-lg p-4 border border-border-subtle\">\n            <p className=\"text-sm text-text-secondary\">\n              <span className=\"font-medium text-text-primary\">Your options:</span>\n            </p>\n            <ul className=\"mt-2 space-y-1.5 text-sm text-text-muted\">\n              <li className=\"flex items-start gap-2\">\n                <Snail className=\"w-4 h-4 mt-0.5 text-amber-400 flex-shrink-0\" />\n                <span>\n                  <strong className=\"text-text-secondary\">Use CPU</strong> — Works but {isSmallCodebase ? 'a bit' : 'way'} slower\n                  {nodeCount > 0 && (\n                    <span className=\"text-text-muted\"> (~{estimatedMinutes} min for {nodeCount} nodes)</span>\n                  )}\n                </span>\n              </li>\n              <li className=\"flex items-start gap-2\">\n                <SkipForward className=\"w-4 h-4 mt-0.5 text-blue-400 flex-shrink-0\" />\n                <span>\n                  <strong className=\"text-text-secondary\">Skip it</strong> — Graph works, just no AI semantic search\n                </span>\n              </li>\n            </ul>\n          </div>\n\n          {isSmallCodebase && (\n            <p className=\"text-xs text-node-function flex items-center gap-1.5 bg-node-function/10 px-3 py-2 rounded-lg\">\n              <Rocket className=\"w-3.5 h-3.5\" />\n              Small codebase detected! 
CPU should be fine.\n            </p>\n          )}\n\n          <p className=\"text-xs text-text-muted\">\n            💡 Tip: Try Chrome or Edge for WebGPU support\n          </p>\n        </div>\n\n        {/* Actions */}\n        <div className=\"px-6 py-4 bg-elevated/30 border-t border-border-subtle flex gap-3\">\n          <button\n            onClick={onSkip}\n            className=\"flex-1 px-4 py-2.5 text-sm font-medium text-text-secondary bg-surface border border-border-subtle rounded-lg hover:bg-hover hover:text-text-primary transition-all flex items-center justify-center gap-2\"\n          >\n            <SkipForward className=\"w-4 h-4\" />\n            Skip Embeddings\n          </button>\n          <button\n            onClick={onUseCPU}\n            className={`flex-1 px-4 py-2.5 text-sm font-medium rounded-lg transition-all flex items-center justify-center gap-2 ${\n              isSmallCodebase\n                ? 'bg-node-function text-white hover:bg-node-function/90'\n                : 'bg-amber-500/20 text-amber-300 border border-amber-500/30 hover:bg-amber-500/30'\n            }`}\n          >\n            <Snail className=\"w-4 h-4\" />\n            Use CPU {isSmallCodebase ? '(Recommended)' : '(Slow)'}\n          </button>\n        </div>\n      </div>\n    </div>\n  );\n};\n\n"
  },
  {
    "path": "gitnexus-web/src/config/ignore-service.ts",
    "content": "const DEFAULT_IGNORE_LIST = new Set([\n    // Version Control\n    '.git',\n    '.svn',\n    '.hg',\n    '.bzr',\n    \n    // IDEs & Editors\n    '.idea',\n    '.vscode',\n    '.vs',\n    '.eclipse',\n    '.settings',\n    '.DS_Store',\n    'Thumbs.db',\n  \n    // Dependencies\n    'node_modules',\n    'bower_components',\n    'jspm_packages',\n    'vendor',           // PHP/Go\n    // 'packages' removed - commonly used for monorepo source code (lerna, pnpm, yarn workspaces)\n    'venv',\n    '.venv',\n    'env',\n    '.env',\n    '__pycache__',\n    '.pytest_cache',\n    '.mypy_cache',\n    'site-packages',\n    '.tox',\n    'eggs',\n    '.eggs',\n    'lib64',\n    'parts',\n    'sdist',\n    'wheels',\n  \n    // Build Outputs\n    'dist',\n    'build',\n    'out',\n    'output',\n    'bin',\n    'obj',\n    'target',           // Java/Rust\n    '.next',\n    '.nuxt',\n    '.output',\n    '.vercel',\n    '.netlify',\n    '.serverless',\n    '_build',\n    'public/build',\n    '.parcel-cache',\n    '.turbo',\n    '.svelte-kit',\n  \n    // Test & Coverage\n    'coverage',\n    '.nyc_output',\n    'htmlcov',\n    '.coverage',\n    '__tests__',        // Often just test files\n    '__mocks__',\n    '.jest',\n    \n    // Logs & Temp\n    'logs',\n    'log',\n    'tmp',\n    'temp',\n    'cache',\n    '.cache',\n    '.tmp',\n    '.temp',\n    \n    // Generated/Compiled\n    '.generated',\n    'generated',\n    'auto-generated',\n    '.terraform',\n    '.serverless',\n    \n    // Documentation (optional - might want to keep)\n    // 'docs',\n    // 'documentation',\n    \n    // Misc\n    '.husky',\n    '.github',          // GitHub config, not code\n    '.circleci',\n    '.gitlab',\n    'fixtures',         // Test fixtures\n    'snapshots',        // Jest snapshots\n    '__snapshots__',\n]);\n\nconst IGNORED_EXTENSIONS = new Set([\n    // Images\n    '.png', '.jpg', '.jpeg', '.gif', '.svg', '.ico', '.webp', '.bmp', '.tiff', '.tif',\n    '.psd', 
'.ai', '.sketch', '.fig', '.xd',\n    \n    // Archives\n    '.zip', '.tar', '.gz', '.rar', '.7z', '.bz2', '.xz', '.tgz',\n    \n    // Binary/Compiled\n    '.exe', '.dll', '.so', '.dylib', '.a', '.lib', '.o', '.obj',\n    '.class', '.jar', '.war', '.ear',\n    '.pyc', '.pyo', '.pyd',\n    '.beam',            // Erlang\n    '.wasm',            // WebAssembly - important!\n    '.node',            // Native Node addons\n    \n    // Documents\n    '.pdf', '.doc', '.docx', '.xls', '.xlsx', '.ppt', '.pptx',\n    '.odt', '.ods', '.odp',\n    \n    // Media\n    '.mp4', '.mp3', '.wav', '.mov', '.avi', '.mkv', '.flv', '.wmv',\n    '.ogg', '.webm', '.flac', '.aac', '.m4a',\n    \n    // Fonts\n    '.woff', '.woff2', '.ttf', '.eot', '.otf',\n    \n    // Databases\n    '.db', '.sqlite', '.sqlite3', '.mdb', '.accdb',\n    \n    // Minified/Bundled files\n    '.min.js', '.min.css', '.bundle.js', '.chunk.js',\n    \n    // Source maps (debug files, not source)\n    '.map',\n    \n    // Lock files (handled separately, but also here)\n    '.lock',\n    \n    // Certificates & Keys (security - don't index!)\n    '.pem', '.key', '.crt', '.cer', '.p12', '.pfx',\n    \n    // Data files (often large/binary)\n    '.csv', '.tsv', '.parquet', '.avro', '.feather',\n    '.npy', '.npz', '.pkl', '.pickle', '.h5', '.hdf5',\n    \n    // Misc binary\n    '.bin', '.dat', '.data', '.raw',\n    '.iso', '.img', '.dmg',\n]);\n\n// Files to ignore by exact name\nconst IGNORED_FILES = new Set([\n    'package-lock.json',\n    'yarn.lock',\n    'pnpm-lock.yaml',\n    'composer.lock',\n    'Gemfile.lock',\n    'poetry.lock',\n    'Cargo.lock',\n    'go.sum',\n    '.gitignore',\n    '.gitattributes',\n    '.npmrc',\n    '.yarnrc',\n    '.editorconfig',\n    '.prettierrc',\n    '.prettierignore',\n    '.eslintignore',\n    '.dockerignore',\n    'Thumbs.db',\n    '.DS_Store',\n    'LICENSE',\n    'LICENSE.md',\n    'LICENSE.txt',\n    'CHANGELOG.md',\n    'CHANGELOG',\n    'CONTRIBUTING.md',\n    
'CODE_OF_CONDUCT.md',\n    'SECURITY.md',\n    '.env',\n    '.env.local',\n    '.env.development',\n    '.env.production',\n    '.env.test',\n    '.env.example',\n]);\n\nexport const shouldIgnorePath = (filePath: string): boolean => {\n  const normalizedPath = filePath.replace(/\\\\/g, '/');\n  const parts = normalizedPath.split('/');\n  const fileName = parts[parts.length - 1];\n  const fileNameLower = fileName.toLowerCase();\n\n  // Check if any path segment is in ignore list\n  for (const part of parts) {\n    if (DEFAULT_IGNORE_LIST.has(part)) {\n      return true;\n    }\n  }\n\n  // Check exact filename matches\n  if (IGNORED_FILES.has(fileName) || IGNORED_FILES.has(fileNameLower)) {\n    return true;\n  }\n\n  // Check extension\n  const lastDotIndex = fileNameLower.lastIndexOf('.');\n  if (lastDotIndex !== -1) {\n    const ext = fileNameLower.substring(lastDotIndex);\n    if (IGNORED_EXTENSIONS.has(ext)) return true;\n\n    // Handle compound extensions like .min.js, .bundle.js\n    const secondLastDot = fileNameLower.lastIndexOf('.', lastDotIndex - 1);\n    if (secondLastDot !== -1) {\n      const compoundExt = fileNameLower.substring(secondLastDot);\n      if (IGNORED_EXTENSIONS.has(compoundExt)) return true;\n    }\n  }\n\n  // Note: dot files are NOT blanket-ignored - many (e.g. .prettierrc) are\n  // important configs. Specific ones to skip are listed in IGNORED_FILES above.\n\n  // Ignore files that look like generated/bundled code\n  if (fileNameLower.includes('.bundle.') ||\n      fileNameLower.includes('.chunk.') ||\n      fileNameLower.includes('.generated.') ||\n      fileNameLower.endsWith('.d.ts')) { // TypeScript declaration files\n    return true;\n  }\n\n  return false;\n};\n\n"
  },
  {
    "path": "gitnexus-web/src/config/supported-languages.ts",
    "content": "export enum SupportedLanguages {\n    JavaScript = 'javascript',\n    TypeScript = 'typescript',\n    Python = 'python',\n    Java = 'java',\n    C = 'c',\n    CPlusPlus = 'cpp',\n    CSharp = 'csharp',\n    Go = 'go',\n    Rust = 'rust',\n    PHP = 'php',\n    Ruby = 'ruby',\n    Kotlin = 'kotlin',\n    Swift = 'swift',\n}"
  },
  {
    "path": "gitnexus-web/src/core/embeddings/embedder.ts",
    "content": "/**\n * Embedder Module\n * \n * Singleton factory for transformers.js embedding pipeline.\n * Handles model loading, caching, and both single and batch embedding operations.\n * \n * Uses snowflake-arctic-embed-xs by default (22M params, 384 dims, ~90MB)\n */\n\nimport { pipeline, env, type FeatureExtractionPipeline } from '@huggingface/transformers';\nimport { DEFAULT_EMBEDDING_CONFIG, type EmbeddingConfig, type ModelProgress } from './types';\n\n// Module-level state for singleton pattern\nlet embedderInstance: FeatureExtractionPipeline | null = null;\nlet isInitializing = false;\nlet initPromise: Promise<FeatureExtractionPipeline> | null = null;\nlet currentDevice: 'webgpu' | 'wasm' | null = null;\n\n/**\n * Progress callback type for model loading\n */\nexport type ModelProgressCallback = (progress: ModelProgress) => void;\n\n/**\n * Custom error thrown when WebGPU is not available\n * Allows UI to prompt user for fallback choice\n */\nexport class WebGPUNotAvailableError extends Error {\n  constructor(originalError?: Error) {\n    super('WebGPU not available in this browser');\n    this.name = 'WebGPUNotAvailableError';\n    this.cause = originalError;\n  }\n}\n\n/**\n * Check if WebGPU is available in this browser\n * Quick check without loading the model\n */\nexport const checkWebGPUAvailability = async (): Promise<boolean> => {\n  try {\n    // Cast to any to avoid WebGPU types not being available in all TS configs\n    const nav = navigator as any;\n    if (!nav.gpu) {\n      return false;\n    }\n    const adapter = await nav.gpu.requestAdapter();\n    if (!adapter) {\n      return false;\n    }\n    // Try to get a device - this is where it usually fails\n    const device = await adapter.requestDevice();\n    device.destroy(); // Clean up\n    return true;\n  } catch {\n    return false;\n  }\n};\n\n/**\n * Get the current device being used for inference\n */\nexport const getCurrentDevice = (): 'webgpu' | 'wasm' | null => 
currentDevice;\n\n/**\n * Initialize the embedding model\n * Uses singleton pattern - only loads once, subsequent calls return cached instance\n * \n * @param onProgress - Optional callback for model download progress\n * @param config - Optional configuration override\n * @param forceDevice - Force a specific device (bypasses WebGPU check)\n * @returns Promise resolving to the embedder pipeline\n * @throws WebGPUNotAvailableError if WebGPU is requested but unavailable\n */\nexport const initEmbedder = async (\n  onProgress?: ModelProgressCallback,\n  config: Partial<EmbeddingConfig> = {},\n  forceDevice?: 'webgpu' | 'wasm'\n): Promise<FeatureExtractionPipeline> => {\n  // Return existing instance if available\n  if (embedderInstance) {\n    return embedderInstance;\n  }\n\n  // If already initializing, wait for that promise\n  if (isInitializing && initPromise) {\n    return initPromise;\n  }\n\n  isInitializing = true;\n  \n  const finalConfig = { ...DEFAULT_EMBEDDING_CONFIG, ...config };\n  const requestedDevice = forceDevice || finalConfig.device;\n\n  initPromise = (async () => {\n    try {\n      // Configure transformers.js environment\n      env.allowLocalModels = false;\n      \n      if (import.meta.env.DEV) {\n        console.log(`🧠 Loading embedding model: ${finalConfig.modelId}`);\n      }\n\n      const progressCallback = onProgress ? 
(data: any) => {\n        const progress: ModelProgress = {\n          status: data.status || 'progress',\n          file: data.file,\n          progress: data.progress,\n          loaded: data.loaded,\n          total: data.total,\n        };\n        onProgress(progress);\n      } : undefined;\n\n      // If WebGPU is requested (default), check availability first\n      if (requestedDevice === 'webgpu') {\n        if (import.meta.env.DEV) {\n          console.log('🔧 Checking WebGPU availability...');\n        }\n        \n        const webgpuAvailable = await checkWebGPUAvailability();\n        \n        if (!webgpuAvailable) {\n          if (import.meta.env.DEV) {\n            console.warn('⚠️ WebGPU not available');\n          }\n          isInitializing = false;\n          initPromise = null;\n          throw new WebGPUNotAvailableError();\n        }\n        \n        // Try WebGPU\n        try {\n          if (import.meta.env.DEV) {\n            console.log('🔧 Initializing WebGPU backend...');\n          }\n          \n          // Type assertion needed due to complex union types in transformers.js\n          embedderInstance = await (pipeline as any)(\n            'feature-extraction',\n            finalConfig.modelId,\n            {\n              device: 'webgpu',\n              dtype: 'fp32',\n              progress_callback: progressCallback,\n            }\n          );\n          currentDevice = 'webgpu';\n          \n          if (import.meta.env.DEV) {\n            console.log('✅ Using WebGPU backend');\n          }\n        } catch (err) {\n          if (import.meta.env.DEV) {\n            console.warn('⚠️ WebGPU initialization failed:', err);\n          }\n          isInitializing = false;\n          initPromise = null;\n          embedderInstance = null;\n          throw new WebGPUNotAvailableError(err as Error);\n        }\n      } else {\n        // WASM mode requested (user chose fallback)\n        if (import.meta.env.DEV) {\n          
console.log('🔧 Initializing WASM backend (this will be slower)...');\n        }\n        \n        // Type assertion needed due to complex union types in transformers.js\n        embedderInstance = await (pipeline as any)(\n          'feature-extraction',\n          finalConfig.modelId,\n          {\n            device: 'wasm', // WASM-based CPU execution\n            dtype: 'fp32',\n            progress_callback: progressCallback,\n          }\n        );\n        currentDevice = 'wasm';\n        \n        if (import.meta.env.DEV) {\n          console.log('✅ Using WASM backend');\n        }\n      }\n\n      if (import.meta.env.DEV) {\n        console.log('✅ Embedding model loaded successfully');\n      }\n\n      return embedderInstance!;\n    } catch (error) {\n      // Re-throw WebGPUNotAvailableError as-is\n      if (error instanceof WebGPUNotAvailableError) {\n        throw error;\n      }\n      isInitializing = false;\n      initPromise = null;\n      embedderInstance = null;\n      throw error;\n    } finally {\n      isInitializing = false;\n    }\n  })();\n\n  return initPromise;\n};\n\n/**\n * Check if the embedder is initialized and ready\n */\nexport const isEmbedderReady = (): boolean => {\n  return embedderInstance !== null;\n};\n\n/**\n * Get the embedder instance (throws if not initialized)\n */\nexport const getEmbedder = (): FeatureExtractionPipeline => {\n  if (!embedderInstance) {\n    throw new Error('Embedder not initialized. 
Call initEmbedder() first.');\n  }\n  return embedderInstance;\n};\n\n/**\n * Embed a single text string\n * \n * @param text - Text to embed\n * @returns Float32Array of embedding vector (384 dimensions)\n */\nexport const embedText = async (text: string): Promise<Float32Array> => {\n  const embedder = getEmbedder();\n  \n  const result = await embedder(text, {\n    pooling: 'mean',\n    normalize: true,\n  });\n  \n  // Result is a Tensor, convert to Float32Array\n  return new Float32Array(result.data as ArrayLike<number>);\n};\n\n/**\n * Embed multiple texts in a single batch\n * More efficient than calling embedText multiple times\n * \n * @param texts - Array of texts to embed\n * @returns Array of Float32Array embedding vectors\n */\nexport const embedBatch = async (texts: string[]): Promise<Float32Array[]> => {\n  if (texts.length === 0) {\n    return [];\n  }\n\n  const embedder = getEmbedder();\n  \n  // Process batch\n  const result = await embedder(texts, {\n    pooling: 'mean',\n    normalize: true,\n  });\n  \n  // Result shape is [batch_size, dimensions]\n  // Need to split into individual vectors\n  const data = result.data as ArrayLike<number>;\n  const dimensions = DEFAULT_EMBEDDING_CONFIG.dimensions;\n  const embeddings: Float32Array[] = [];\n  \n  for (let i = 0; i < texts.length; i++) {\n    const start = i * dimensions;\n    const end = start + dimensions;\n    embeddings.push(new Float32Array(Array.prototype.slice.call(data, start, end)));\n  }\n  \n  return embeddings;\n};\n\n/**\n * Convert Float32Array to regular number array (for LadybugDB storage)\n */\nexport const embeddingToArray = (embedding: Float32Array): number[] => {\n  return Array.from(embedding);\n};\n\n/**\n * Cleanup the embedder (free memory)\n * Call this when done with embeddings\n */\nexport const disposeEmbedder = async (): Promise<void> => {\n  if (embedderInstance) {\n    // transformers.js pipelines may have a dispose method\n    try {\n      if ('dispose' in 
embedderInstance && typeof embedderInstance.dispose === 'function') {\n        await embedderInstance.dispose();\n      }\n    } catch {\n      // Ignore disposal errors\n    }\n    embedderInstance = null;\n    initPromise = null;\n  }\n};\n\n"
  },
  {
    "path": "gitnexus-web/src/core/embeddings/embedding-pipeline.ts",
    "content": "/**\n * Embedding Pipeline Module\n * \n * Orchestrates the background embedding process:\n * 1. Query embeddable nodes from LadybugDB\n * 2. Generate text representations\n * 3. Batch embed using transformers.js\n * 4. Update LadybugDB with embeddings\n * 5. Create vector index for semantic search\n */\n\nimport { initEmbedder, embedBatch, embedText, embeddingToArray, isEmbedderReady } from './embedder';\nimport { generateBatchEmbeddingTexts, generateEmbeddingText } from './text-generator';\nimport {\n  type EmbeddingProgress,\n  type EmbeddingConfig,\n  type EmbeddableNode,\n  type SemanticSearchResult,\n  type ModelProgress,\n  DEFAULT_EMBEDDING_CONFIG,\n  EMBEDDABLE_LABELS,\n} from './types';\n\n/**\n * Progress callback type\n */\nexport type EmbeddingProgressCallback = (progress: EmbeddingProgress) => void;\n\n/**\n * Query all embeddable nodes from LadybugDB\n * Uses table-specific queries (File has different schema than code elements)\n */\nconst queryEmbeddableNodes = async (\n  executeQuery: (cypher: string) => Promise<any[]>\n): Promise<EmbeddableNode[]> => {\n  const allNodes: EmbeddableNode[] = [];\n  \n  // Query each embeddable table with table-specific columns\n  for (const label of EMBEDDABLE_LABELS) {\n    try {\n      let query: string;\n      \n      if (label === 'File') {\n        // File nodes don't have startLine/endLine\n        query = `\n          MATCH (n:File)\n          RETURN n.id AS id, n.name AS name, 'File' AS label, \n                 n.filePath AS filePath, n.content AS content\n        `;\n      } else {\n        // Code elements have startLine/endLine\n        query = `\n          MATCH (n:${label})\n          RETURN n.id AS id, n.name AS name, '${label}' AS label, \n                 n.filePath AS filePath, n.content AS content,\n                 n.startLine AS startLine, n.endLine AS endLine\n        `;\n      }\n      \n      const rows = await executeQuery(query);\n      for (const row of rows) {\n        
allNodes.push({\n          id: row.id ?? row[0],\n          name: row.name ?? row[1],\n          label: row.label ?? row[2],\n          filePath: row.filePath ?? row[3],\n          content: row.content ?? row[4] ?? '',\n          startLine: row.startLine ?? row[5],\n          endLine: row.endLine ?? row[6],\n        });\n      }\n    } catch (error) {\n      // Table might not exist or be empty, continue\n      if (import.meta.env.DEV) {\n        console.warn(`Query for ${label} nodes failed:`, error);\n      }\n    }\n  }\n\n  return allNodes;\n};\n\n/**\n * Batch INSERT embeddings into separate CodeEmbedding table\n * Using a separate lightweight table avoids copy-on-write overhead\n * that occurs when UPDATEing nodes with large content fields\n */\nconst batchInsertEmbeddings = async (\n  executeWithReusedStatement: (\n    cypher: string,\n    paramsList: Array<Record<string, any>>\n  ) => Promise<void>,\n  updates: Array<{ id: string; embedding: number[] }>\n): Promise<void> => {\n  // INSERT into separate embedding table - much more memory efficient!\n  const cypher = `CREATE (e:CodeEmbedding {nodeId: $nodeId, embedding: $embedding})`;\n  const paramsList = updates.map(u => ({ nodeId: u.id, embedding: u.embedding }));\n  await executeWithReusedStatement(cypher, paramsList);\n};\n\n/**\n * Create the vector index for semantic search\n * Now indexes the separate CodeEmbedding table\n */\nlet vectorExtensionLoaded = false;\n\nconst createVectorIndex = async (\n  executeQuery: (cypher: string) => Promise<any[]>\n): Promise<void> => {\n  // LadybugDB v0.15+ requires explicit VECTOR extension loading (once per session)\n  if (!vectorExtensionLoaded) {\n    try {\n      await executeQuery('INSTALL VECTOR');\n      await executeQuery('LOAD EXTENSION VECTOR');\n      vectorExtensionLoaded = true;\n    } catch {\n      // Extension may already be loaded — CREATE_VECTOR_INDEX will fail clearly if not\n      vectorExtensionLoaded = true;\n    }\n  }\n\n  const cypher = 
`\n    CALL CREATE_VECTOR_INDEX('CodeEmbedding', 'code_embedding_idx', 'embedding', metric := 'cosine')\n  `;\n\n  try {\n    await executeQuery(cypher);\n  } catch (error) {\n    // Index might already exist\n    if (import.meta.env.DEV) {\n      console.warn('Vector index creation warning:', error);\n    }\n  }\n};\n\n/**\n * Run the embedding pipeline\n * \n * @param executeQuery - Function to execute Cypher queries against LadybugDB\n * @param executeWithReusedStatement - Function to execute with reused prepared statement\n * @param onProgress - Callback for progress updates\n * @param config - Optional configuration override\n */\nexport const runEmbeddingPipeline = async (\n  executeQuery: (cypher: string) => Promise<any[]>,\n  executeWithReusedStatement: (cypher: string, paramsList: Array<Record<string, any>>) => Promise<void>,\n  onProgress: EmbeddingProgressCallback,\n  config: Partial<EmbeddingConfig> = {}\n): Promise<void> => {\n  const finalConfig = { ...DEFAULT_EMBEDDING_CONFIG, ...config };\n\n  try {\n    // Phase 1: Load embedding model\n    onProgress({\n      phase: 'loading-model',\n      percent: 0,\n      modelDownloadPercent: 0,\n    });\n\n    await initEmbedder((modelProgress: ModelProgress) => {\n      // Report model download progress\n      const downloadPercent = modelProgress.progress ?? 
0;\n      onProgress({\n        phase: 'loading-model',\n        percent: Math.round(downloadPercent * 0.2), // 0-20% for model loading\n        modelDownloadPercent: downloadPercent,\n      });\n    }, finalConfig);\n\n    onProgress({\n      phase: 'loading-model',\n      percent: 20,\n      modelDownloadPercent: 100,\n    });\n\n    if (import.meta.env.DEV) {\n      console.log('🔍 Querying embeddable nodes...');\n    }\n\n    // Phase 2: Query embeddable nodes\n    const nodes = await queryEmbeddableNodes(executeQuery);\n    const totalNodes = nodes.length;\n\n    if (import.meta.env.DEV) {\n      console.log(`📊 Found ${totalNodes} embeddable nodes`);\n    }\n\n    if (totalNodes === 0) {\n      onProgress({\n        phase: 'ready',\n        percent: 100,\n        nodesProcessed: 0,\n        totalNodes: 0,\n      });\n      return;\n    }\n\n    // Phase 3: Batch embed nodes\n    const batchSize = finalConfig.batchSize;\n    const totalBatches = Math.ceil(totalNodes / batchSize);\n    let processedNodes = 0;\n\n    onProgress({\n      phase: 'embedding',\n      percent: 20,\n      nodesProcessed: 0,\n      totalNodes,\n      currentBatch: 0,\n      totalBatches,\n    });\n\n    for (let batchIndex = 0; batchIndex < totalBatches; batchIndex++) {\n      const start = batchIndex * batchSize;\n      const end = Math.min(start + batchSize, totalNodes);\n      const batch = nodes.slice(start, end);\n\n      // Generate texts for this batch\n      const texts = generateBatchEmbeddingTexts(batch, finalConfig);\n\n      // Embed the batch\n      const embeddings = await embedBatch(texts);\n\n      // Update LadybugDB with embeddings\n      const updates = batch.map((node, i) => ({\n        id: node.id,\n        embedding: embeddingToArray(embeddings[i]),\n      }));\n\n      await batchInsertEmbeddings(executeWithReusedStatement, updates);\n\n      processedNodes += batch.length;\n\n      // Report progress (20-90% for embedding phase)\n      const embeddingProgress = 20 
+ ((processedNodes / totalNodes) * 70);\n      onProgress({\n        phase: 'embedding',\n        percent: Math.round(embeddingProgress),\n        nodesProcessed: processedNodes,\n        totalNodes,\n        currentBatch: batchIndex + 1,\n        totalBatches,\n      });\n    }\n\n    // Phase 4: Create vector index\n    onProgress({\n      phase: 'indexing',\n      percent: 90,\n      nodesProcessed: totalNodes,\n      totalNodes,\n    });\n\n    if (import.meta.env.DEV) {\n      console.log('📇 Creating vector index...');\n    }\n\n    await createVectorIndex(executeQuery);\n\n    // Complete\n    onProgress({\n      phase: 'ready',\n      percent: 100,\n      nodesProcessed: totalNodes,\n      totalNodes,\n    });\n\n    if (import.meta.env.DEV) {\n      console.log('✅ Embedding pipeline complete!');\n    }\n  } catch (error) {\n    const errorMessage = error instanceof Error ? error.message : 'Unknown error';\n    \n    if (import.meta.env.DEV) {\n      console.error('❌ Embedding pipeline error:', error);\n    }\n\n    onProgress({\n      phase: 'error',\n      percent: 0,\n      error: errorMessage,\n    });\n\n    throw error;\n  }\n};\n\n/**\n * Perform semantic search using the vector index\n * \n * Uses CodeEmbedding table and queries each node table to get metadata\n * \n * @param executeQuery - Function to execute Cypher queries\n * @param query - Search query text\n * @param k - Number of results to return (default: 10)\n * @param maxDistance - Maximum distance threshold (default: 0.5)\n * @returns Array of search results ordered by relevance\n */\nexport const semanticSearch = async (\n  executeQuery: (cypher: string) => Promise<any[]>,\n  query: string,\n  k: number = 10,\n  maxDistance: number = 0.5\n): Promise<SemanticSearchResult[]> => {\n  if (!isEmbedderReady()) {\n    throw new Error('Embedding model not initialized. 
Run embedding pipeline first.');\n  }\n\n  // Embed the query\n  const queryEmbedding = await embedText(query);\n  const queryVec = embeddingToArray(queryEmbedding);\n  const queryVecStr = `[${queryVec.join(',')}]`;\n\n  // Query the vector index on CodeEmbedding to get nodeIds and distances\n  const vectorQuery = `\n    CALL QUERY_VECTOR_INDEX('CodeEmbedding', 'code_embedding_idx', \n      CAST(${queryVecStr} AS FLOAT[384]), ${k})\n    YIELD node AS emb, distance\n    WITH emb, distance\n    WHERE distance < ${maxDistance}\n    RETURN emb.nodeId AS nodeId, distance\n    ORDER BY distance\n  `;\n\n  const embResults = await executeQuery(vectorQuery);\n  \n  if (embResults.length === 0) {\n    return [];\n  }\n\n  // Group results by label for batched metadata queries\n  const byLabel = new Map<string, Array<{ nodeId: string; distance: number }>>();\n  for (const embRow of embResults) {\n    const nodeId = embRow.nodeId ?? embRow[0];\n    const distance = embRow.distance ?? embRow[1];\n    const labelEndIdx = nodeId.indexOf(':');\n    const label = labelEndIdx > 0 ? 
nodeId.substring(0, labelEndIdx) : 'Unknown';\n    if (!byLabel.has(label)) byLabel.set(label, []);\n    byLabel.get(label)!.push({ nodeId, distance });\n  }\n\n  // Batch-fetch metadata per label\n  const results: SemanticSearchResult[] = [];\n\n  for (const [label, items] of byLabel) {\n    const idList = items.map(i => `'${i.nodeId.replace(/'/g, \"''\")}'`).join(', ');\n    try {\n      let nodeQuery: string;\n      if (label === 'File') {\n        nodeQuery = `\n          MATCH (n:File) WHERE n.id IN [${idList}]\n          RETURN n.id AS id, n.name AS name, n.filePath AS filePath\n        `;\n      } else {\n        nodeQuery = `\n          MATCH (n:${label}) WHERE n.id IN [${idList}]\n          RETURN n.id AS id, n.name AS name, n.filePath AS filePath,\n                 n.startLine AS startLine, n.endLine AS endLine\n        `;\n      }\n      const nodeRows = await executeQuery(nodeQuery);\n      const rowMap = new Map<string, any>();\n      for (const row of nodeRows) {\n        const id = row.id ?? row[0];\n        rowMap.set(id, row);\n      }\n      for (const item of items) {\n        const nodeRow = rowMap.get(item.nodeId);\n        if (nodeRow) {\n          results.push({\n            nodeId: item.nodeId,\n            name: nodeRow.name ?? nodeRow[1] ?? '',\n            label,\n            filePath: nodeRow.filePath ?? nodeRow[2] ?? '',\n            distance: item.distance,\n            startLine: label !== 'File' ? (nodeRow.startLine ?? nodeRow[3]) : undefined,\n            endLine: label !== 'File' ? (nodeRow.endLine ?? 
nodeRow[4]) : undefined,\n          });\n        }\n      }\n    } catch {\n      // Table might not exist, skip\n    }\n  }\n\n  // Re-sort by distance since batch queries may have mixed order\n  results.sort((a, b) => a.distance - b.distance);\n\n  return results;\n};\n\n/**\n * Semantic search with graph expansion (flattened results)\n * \n * Note: With multi-table schema, graph traversal is simplified.\n * Returns semantic matches with their metadata.\n * For full graph traversal, use execute_vector_cypher tool directly.\n * \n * @param executeQuery - Function to execute Cypher queries\n * @param query - Search query text\n * @param k - Number of initial semantic matches (default: 5)\n * @param _hops - Unused (kept for API compatibility).\n * @returns Semantic matches with metadata\n */\nexport const semanticSearchWithContext = async (\n  executeQuery: (cypher: string) => Promise<any[]>,\n  query: string,\n  k: number = 5,\n  _hops: number = 1\n): Promise<any[]> => {\n  // For multi-table schema, just return semantic search results\n  // Graph traversal is complex with separate tables - use execute_vector_cypher instead\n  const results = await semanticSearch(executeQuery, query, k, 0.5);\n  \n  return results.map(r => ({\n    matchId: r.nodeId,\n    matchName: r.name,\n    matchLabel: r.label,\n    matchPath: r.filePath,\n    distance: r.distance,\n    connectedId: null,\n    connectedName: null,\n    connectedLabel: null,\n    relationType: null,\n  }));\n};\n\n"
  },
  {
    "path": "gitnexus-web/src/core/embeddings/index.ts",
    "content": "/**\n * Embeddings Module\n * \n * Re-exports for the embedding pipeline system.\n */\n\nexport * from './types';\nexport * from './embedder';\nexport * from './text-generator';\nexport * from './embedding-pipeline';\n\n"
  },
  {
    "path": "gitnexus-web/src/core/embeddings/text-generator.ts",
    "content": "/**\n * Text Generator Module\n * \n * Pure functions to generate embedding text from code nodes.\n * Combines node metadata with code snippets for semantic matching.\n */\n\nimport type { EmbeddableNode, EmbeddingConfig } from './types';\nimport { DEFAULT_EMBEDDING_CONFIG } from './types';\n\n/**\n * Extract the filename from a file path\n */\nconst getFileName = (filePath: string): string => {\n  const parts = filePath.split('/');\n  return parts[parts.length - 1] || filePath;\n};\n\n/**\n * Extract the directory path from a file path\n */\nconst getDirectory = (filePath: string): string => {\n  const parts = filePath.split('/');\n  parts.pop();\n  return parts.join('/') || '';\n};\n\n/**\n * Truncate content to max length, preserving word boundaries\n */\nconst truncateContent = (content: string, maxLength: number): string => {\n  if (content.length <= maxLength) {\n    return content;\n  }\n  \n  // Find last space before maxLength to avoid cutting words\n  const truncated = content.slice(0, maxLength);\n  const lastSpace = truncated.lastIndexOf(' ');\n  \n  if (lastSpace > maxLength * 0.8) {\n    return truncated.slice(0, lastSpace) + '...';\n  }\n  \n  return truncated + '...';\n};\n\n/**\n * Clean code content for embedding\n * Removes excessive whitespace while preserving structure\n */\nconst cleanContent = (content: string): string => {\n  return content\n    // Normalize line endings\n    .replace(/\\r\\n/g, '\\n')\n    // Remove excessive blank lines (more than 2)\n    .replace(/\\n{3,}/g, '\\n\\n')\n    // Trim each line\n    .split('\\n')\n    .map(line => line.trimEnd())\n    .join('\\n')\n    .trim();\n};\n\n/**\n * Generate embedding text for a Function node\n */\nconst generateFunctionText = (\n  node: EmbeddableNode,\n  maxSnippetLength: number\n): string => {\n  const parts: string[] = [\n    `Function: ${node.name}`,\n    `File: ${getFileName(node.filePath)}`,\n  ];\n\n  const dir = getDirectory(node.filePath);\n  if (dir) {\n   
 parts.push(`Directory: ${dir}`);\n  }\n\n  if (node.content) {\n    const cleanedContent = cleanContent(node.content);\n    const snippet = truncateContent(cleanedContent, maxSnippetLength);\n    parts.push('', snippet);\n  }\n\n  return parts.join('\\n');\n};\n\n/**\n * Generate embedding text for a Class node\n */\nconst generateClassText = (\n  node: EmbeddableNode,\n  maxSnippetLength: number\n): string => {\n  const parts: string[] = [\n    `Class: ${node.name}`,\n    `File: ${getFileName(node.filePath)}`,\n  ];\n\n  const dir = getDirectory(node.filePath);\n  if (dir) {\n    parts.push(`Directory: ${dir}`);\n  }\n\n  if (node.content) {\n    const cleanedContent = cleanContent(node.content);\n    const snippet = truncateContent(cleanedContent, maxSnippetLength);\n    parts.push('', snippet);\n  }\n\n  return parts.join('\\n');\n};\n\n/**\n * Generate embedding text for a Method node\n */\nconst generateMethodText = (\n  node: EmbeddableNode,\n  maxSnippetLength: number\n): string => {\n  const parts: string[] = [\n    `Method: ${node.name}`,\n    `File: ${getFileName(node.filePath)}`,\n  ];\n\n  const dir = getDirectory(node.filePath);\n  if (dir) {\n    parts.push(`Directory: ${dir}`);\n  }\n\n  if (node.content) {\n    const cleanedContent = cleanContent(node.content);\n    const snippet = truncateContent(cleanedContent, maxSnippetLength);\n    parts.push('', snippet);\n  }\n\n  return parts.join('\\n');\n};\n\n/**\n * Generate embedding text for an Interface node\n */\nconst generateInterfaceText = (\n  node: EmbeddableNode,\n  maxSnippetLength: number\n): string => {\n  const parts: string[] = [\n    `Interface: ${node.name}`,\n    `File: ${getFileName(node.filePath)}`,\n  ];\n\n  const dir = getDirectory(node.filePath);\n  if (dir) {\n    parts.push(`Directory: ${dir}`);\n  }\n\n  if (node.content) {\n    const cleanedContent = cleanContent(node.content);\n    const snippet = truncateContent(cleanedContent, maxSnippetLength);\n    parts.push('', 
snippet);\n  }\n\n  return parts.join('\\n');\n};\n\n/**\n * Generate embedding text for a File node\n * Uses file name and first N characters of content\n */\nconst generateFileText = (\n  node: EmbeddableNode,\n  maxSnippetLength: number\n): string => {\n  const parts: string[] = [\n    `File: ${node.name}`,\n    `Path: ${node.filePath}`,\n  ];\n\n  if (node.content) {\n    const cleanedContent = cleanContent(node.content);\n    // For files, use a shorter snippet since they can be very long\n    const snippet = truncateContent(cleanedContent, Math.min(maxSnippetLength, 300));\n    parts.push('', snippet);\n  }\n\n  return parts.join('\\n');\n};\n\n/**\n * Generate embedding text for any embeddable node\n * Dispatches to the appropriate generator based on node label\n * \n * @param node - The node to generate text for\n * @param config - Optional configuration for max snippet length\n * @returns Text suitable for embedding\n */\nexport const generateEmbeddingText = (\n  node: EmbeddableNode,\n  config: Partial<EmbeddingConfig> = {}\n): string => {\n  const maxSnippetLength = config.maxSnippetLength ?? 
DEFAULT_EMBEDDING_CONFIG.maxSnippetLength;\n\n  switch (node.label) {\n    case 'Function':\n      return generateFunctionText(node, maxSnippetLength);\n    case 'Class':\n      return generateClassText(node, maxSnippetLength);\n    case 'Method':\n      return generateMethodText(node, maxSnippetLength);\n    case 'Interface':\n      return generateInterfaceText(node, maxSnippetLength);\n    case 'File':\n      return generateFileText(node, maxSnippetLength);\n    default:\n      // Fallback for any other embeddable type\n      return `${node.label}: ${node.name}\\nPath: ${node.filePath}`;\n  }\n};\n\n/**\n * Generate embedding texts for a batch of nodes\n * \n * @param nodes - Array of nodes to generate text for\n * @param config - Optional configuration\n * @returns Array of texts in the same order as input nodes\n */\nexport const generateBatchEmbeddingTexts = (\n  nodes: EmbeddableNode[],\n  config: Partial<EmbeddingConfig> = {}\n): string[] => {\n  return nodes.map(node => generateEmbeddingText(node, config));\n};\n\n"
  },
  {
    "path": "gitnexus-web/src/core/embeddings/types.ts",
    "content": "/**\n * Embedding Pipeline Types\n * \n * Type definitions for the embedding generation and semantic search system.\n */\n\n/**\n * Node labels that should be embedded for semantic search\n * These are code elements that benefit from semantic matching\n */\nexport const EMBEDDABLE_LABELS = [\n  'Function',\n  'Class', \n  'Method',\n  'Interface',\n  'File',\n] as const;\n\nexport type EmbeddableLabel = typeof EMBEDDABLE_LABELS[number];\n\n/**\n * Check if a label should be embedded\n */\nexport const isEmbeddableLabel = (label: string): label is EmbeddableLabel =>\n  EMBEDDABLE_LABELS.includes(label as EmbeddableLabel);\n\n/**\n * Embedding pipeline phases\n */\nexport type EmbeddingPhase = \n  | 'idle'\n  | 'loading-model'\n  | 'embedding'\n  | 'indexing'\n  | 'ready'\n  | 'error';\n\n/**\n * Progress information for the embedding pipeline\n */\nexport interface EmbeddingProgress {\n  phase: EmbeddingPhase;\n  percent: number;\n  modelDownloadPercent?: number;\n  nodesProcessed?: number;\n  totalNodes?: number;\n  currentBatch?: number;\n  totalBatches?: number;\n  error?: string;\n}\n\n/**\n * Configuration for the embedding pipeline\n */\nexport interface EmbeddingConfig {\n  /** Model identifier for transformers.js */\n  modelId: string;\n  /** Number of nodes to embed in each batch */\n  batchSize: number;\n  /** Embedding vector dimensions */\n  dimensions: number;\n  /** Device to use for inference: 'webgpu' for GPU acceleration, 'wasm' for WASM-based CPU */\n  device: 'webgpu' | 'wasm';\n  /** Maximum characters of code snippet to include */\n  maxSnippetLength: number;\n}\n\n/**\n * Default embedding configuration\n * Uses snowflake-arctic-embed-xs for browser efficiency\n * Tries WebGPU first (fast), user can choose WASM fallback if unavailable\n */\nexport const DEFAULT_EMBEDDING_CONFIG: EmbeddingConfig = {\n  modelId: 'Snowflake/snowflake-arctic-embed-xs',\n  batchSize: 16,\n  dimensions: 384,\n  device: 'webgpu', // WebGPU preferred, 
WASM fallback available if user chooses\n  maxSnippetLength: 500,\n};\n\n/**\n * Result from semantic search\n */\nexport interface SemanticSearchResult {\n  nodeId: string;\n  name: string;\n  label: string;\n  filePath: string;\n  distance: number;\n  startLine?: number;\n  endLine?: number;\n}\n\n/**\n * Node data for embedding (minimal structure from LadybugDB query)\n */\nexport interface EmbeddableNode {\n  id: string;\n  name: string;\n  label: string;\n  filePath: string;\n  content: string;\n  startLine?: number;\n  endLine?: number;\n}\n\n/**\n * Model download progress from transformers.js\n */\nexport interface ModelProgress {\n  status: 'initiate' | 'download' | 'progress' | 'done' | 'ready';\n  file?: string;\n  progress?: number;\n  loaded?: number;\n  total?: number;\n}\n\n"
  },
  {
    "path": "gitnexus-web/src/core/graph/graph.ts",
    "content": "import { GraphNode, GraphRelationship, KnowledgeGraph } from './types'\n\nexport const createKnowledgeGraph = (): KnowledgeGraph => {\n  const nodeMap = new Map<string, GraphNode>();\n  const relationshipMap = new Map<string, GraphRelationship>();\n\n  const addNode = (node: GraphNode) => {\n    if (!nodeMap.has(node.id)) {\n      nodeMap.set(node.id, node);\n    }\n  };\n\n  const addRelationship = (relationship: GraphRelationship) => {\n    if (!relationshipMap.has(relationship.id)) {\n      relationshipMap.set(relationship.id, relationship);\n    }\n  };\n\n  return {\n    get nodes() {\n      return Array.from(nodeMap.values());\n    },\n\n    get relationships() {\n      return Array.from(relationshipMap.values());\n    },\n\n    // O(1) count getters - avoid creating arrays just for length\n    get nodeCount() {\n      return nodeMap.size;\n    },\n\n    get relationshipCount() {\n      return relationshipMap.size;\n    },\n\n    addNode,\n    addRelationship,\n  };\n};\n"
  },
  {
    "path": "gitnexus-web/src/core/graph/types.ts",
    "content": "export type NodeLabel =\n  | 'Project'\n  | 'Package'\n  | 'Module'\n  | 'Folder'\n  | 'File'\n  | 'Class'\n  | 'Function'\n  | 'Method'\n  | 'Variable'\n  | 'Interface'\n  | 'Enum'\n  | 'Decorator'\n  | 'Import'\n  | 'Type'\n  | 'CodeElement'\n  | 'Community'\n  | 'Process';\n\n\nexport type NodeProperties = {\n  name: string,\n  filePath: string,\n  startLine?: number,\n  endLine?: number,\n  language?: string,\n  isExported?: boolean,\n  // Community-specific properties\n  heuristicLabel?: string,\n  cohesion?: number,\n  symbolCount?: number,\n  keywords?: string[],\n  description?: string,\n  enrichedBy?: 'heuristic' | 'llm',\n  // Process-specific properties\n  processType?: 'intra_community' | 'cross_community',\n  stepCount?: number,\n  communities?: string[],\n  entryPointId?: string,\n  terminalId?: string,\n  // Entry point scoring (computed by process detection)\n  entryPointScore?: number,\n  entryPointReason?: string,\n}\n\nexport type RelationshipType = \n  | 'CONTAINS' \n  | 'CALLS' \n  | 'INHERITS' \n  | 'OVERRIDES' \n  | 'IMPORTS'\n  | 'USES'\n  | 'DEFINES'\n  | 'DECORATES'\n  | 'IMPLEMENTS'\n  | 'EXTENDS'\n  | 'HAS_METHOD'\n  | 'MEMBER_OF'\n  | 'STEP_IN_PROCESS'\n\nexport interface GraphNode {\n  id:  string,\n  label: NodeLabel,\n  properties: NodeProperties,  \n}\n\nexport interface GraphRelationship {\n  id: string,\n  sourceId: string,\n  targetId: string,\n  type: RelationshipType,\n  /** Confidence score 0-1 (1.0 = certain, lower = uncertain resolution) */\n  confidence: number,\n  /** Resolution reason: 'import-resolved', 'same-file', 'fuzzy-global', or empty for non-CALLS */\n  reason: string,\n  /** Step number for STEP_IN_PROCESS relationships (1-indexed) */\n  step?: number,\n}\n\nexport interface KnowledgeGraph {\n  nodes: GraphNode[],\n  relationships: GraphRelationship[],\n  nodeCount: number,\n  relationshipCount: number,\n  addNode: (node: GraphNode) => void,\n  addRelationship: (relationship: GraphRelationship) => 
void,\n}"
  },
  {
    "path": "gitnexus-web/src/core/ingestion/ast-cache.ts",
    "content": "import { LRUCache } from 'lru-cache';\nimport Parser from 'web-tree-sitter';\n\n// Define the interface for the Cache\nexport interface ASTCache {\n  get: (filePath: string) => Parser.Tree | undefined;\n  set: (filePath: string, tree: Parser.Tree) => void;\n  clear: () => void;\n  stats: () => { size: number; maxSize: number };\n}\n\nexport const createASTCache = (maxSize: number = 50): ASTCache => {\n  // Initialize the cache with a 'dispose' handler:\n  // whenever an entry is evicted, the handler runs automatically.\n  const cache = new LRUCache<string, Parser.Tree>({\n    max: maxSize,\n    dispose: (tree) => {\n      try {\n        // CRITICAL: Free the WASM memory when the tree leaves the cache\n        tree.delete();\n      } catch (e) {\n        console.warn('Failed to delete tree from WASM memory', e);\n      }\n    }\n  });\n\n  return {\n    get: (filePath: string) => {\n      const tree = cache.get(filePath);\n      return tree; // Returns undefined if not found\n    },\n\n    set: (filePath: string, tree: Parser.Tree) => {\n      cache.set(filePath, tree);\n    },\n\n    clear: () => {\n      cache.clear();\n    },\n\n    stats: () => ({\n      size: cache.size,\n      maxSize: maxSize\n    })\n  };\n};\n"
  },
  {
    "path": "gitnexus-web/src/core/ingestion/call-processor.ts",
    "content": "import { KnowledgeGraph } from '../graph/types';\nimport { ASTCache } from './ast-cache';\nimport { SymbolTable } from './symbol-table';\nimport { ImportMap } from './import-processor';\nimport { loadParser, loadLanguage } from '../tree-sitter/parser-loader';\nimport { LANGUAGE_QUERIES } from './tree-sitter-queries';\nimport { generateId } from '../../lib/utils';\nimport { getLanguageFromFilename } from './utils';\nimport { callRouters } from './call-routing';\n\n/**\n * Node types that represent function/method definitions across languages.\n * Used to find the enclosing function for a call site.\n */\nconst FUNCTION_NODE_TYPES = new Set([\n  // TypeScript/JavaScript\n  'function_declaration',\n  'arrow_function',\n  'function_expression',\n  'method_definition',\n  'generator_function_declaration',\n  // Python\n  'function_definition',\n  // Common async variants\n  'async_function_declaration',\n  'async_arrow_function',\n  // Java\n  'method_declaration',\n  'constructor_declaration',\n  // C/C++\n  // 'function_definition' already included above\n  // Go\n  // 'method_declaration' already included from Java\n  // C#\n  'local_function_statement',\n  // Rust\n  'function_item',\n  'impl_item', // Methods inside impl blocks\n  // Ruby\n  'method',           // def foo\n  'singleton_method', // def self.foo\n]);\n\n/**\n * Walk up the AST from a node to find the enclosing function/method.\n * Returns null if the call is at module/file level (top-level code).\n */\nconst findEnclosingFunction = (\n  node: any,\n  filePath: string,\n  symbolTable: SymbolTable\n): string | null => {\n  let current = node.parent;\n  \n  while (current) {\n    if (FUNCTION_NODE_TYPES.has(current.type)) {\n      // Found enclosing function - try to get its name\n      let funcName: string | null = null;\n      let label = 'Function';\n      \n      // Different node types have different name locations\n      if (current.type === 'function_declaration' || \n          
current.type === 'function_definition' ||\n          current.type === 'async_function_declaration' ||\n          current.type === 'generator_function_declaration' ||\n          current.type === 'function_item') { // Rust function\n        // Named function: function foo() {}\n        const nameNode = current.childForFieldName?.('name') || \n                         current.children?.find((c: any) => c.type === 'identifier' || c.type === 'property_identifier');\n        funcName = nameNode?.text;\n      } else if (current.type === 'impl_item') {\n        // Rust method inside impl block: wrapper around function_item or const_item\n        // We need to look inside for the function_item\n        const funcItem = current.children?.find((c: any) => c.type === 'function_item');\n        if (funcItem) {\n           const nameNode = funcItem.childForFieldName?.('name') || \n                            funcItem.children?.find((c: any) => c.type === 'identifier');\n           funcName = nameNode?.text;\n           label = 'Method';\n        }\n      } else if (current.type === 'method_definition') {\n        // Method: foo() {} inside class (JS/TS)\n        const nameNode = current.childForFieldName?.('name') ||\n                         current.children?.find((c: any) => c.type === 'property_identifier');\n        funcName = nameNode?.text;\n        label = 'Method';\n      } else if (current.type === 'method_declaration') {\n        // Java method: public void foo() {}\n        const nameNode = current.childForFieldName?.('name') ||\n                         current.children?.find((c: any) => c.type === 'identifier');\n        funcName = nameNode?.text;\n        label = 'Method';\n      } else if (current.type === 'constructor_declaration') {\n        // Java constructor: public ClassName() {}\n        const nameNode = current.childForFieldName?.('name') ||\n                         current.children?.find((c: any) => c.type === 'identifier');\n        funcName = 
nameNode?.text;\n        label = 'Method'; // Treat constructors as methods for process detection\n      } else if (current.type === 'method') {\n        // Ruby instance method: def foo\n        const nameNode = current.childForFieldName?.('name') ||\n                         current.children?.find((c: any) => c.type === 'identifier');\n        funcName = nameNode?.text;\n        label = 'Method';\n      } else if (current.type === 'singleton_method') {\n        // Ruby class method: def self.foo\n        const nameNode = current.childForFieldName?.('name') ||\n                         current.children?.find((c: any) => c.type === 'identifier');\n        funcName = nameNode?.text;\n        label = 'Method';\n      } else if (current.type === 'arrow_function' || current.type === 'function_expression') {\n        // Arrow/expression: const foo = () => {} - check parent variable declarator\n        const parent = current.parent;\n        if (parent?.type === 'variable_declarator') {\n          const nameNode = parent.childForFieldName?.('name') ||\n                           parent.children?.find((c: any) => c.type === 'identifier');\n          funcName = nameNode?.text;\n        }\n      }\n      \n      if (funcName) {\n        // Look up the function in symbol table to get its node ID\n        // Try exact match first\n        const nodeId = symbolTable.lookupExact(filePath, funcName);\n        if (nodeId) return nodeId;\n        \n        // Try construct ID manually if lookup fails (common for non-exported internal functions)\n        // Format should match what parsing-processor generates: \"Function:path/to/file:funcName\"\n        // Check if we already have a node with this ID in the symbol table to be safe\n        const generatedId = generateId(label, `${filePath}:${funcName}`);\n        \n        // Ideally we should verify this ID exists, but strictly speaking if we are inside it,\n        // it SHOULD exist. 
Returning it is better than falling back to File.\n        return generatedId;\n      }\n      \n      // Couldn't determine function name - try parent (might be nested)\n    }\n    current = current.parent;\n  }\n  \n  return null; // Top-level call (not inside any function)\n};\n\n/** AST node types that represent a class-like container */\nconst CLASS_CONTAINER_TYPES = new Set([\n  'class_declaration', 'abstract_class_declaration',\n  'interface_declaration', 'struct_declaration', 'record_declaration',\n  'class_specifier', 'struct_specifier',\n  'impl_item', 'trait_item',\n  'class_definition',\n  'trait_declaration',\n  'protocol_declaration',\n  'class', 'module', // Ruby\n]);\n\nconst CONTAINER_TYPE_TO_LABEL: Record<string, string> = {\n  class_declaration: 'Class', abstract_class_declaration: 'Class',\n  interface_declaration: 'Interface',\n  struct_declaration: 'Struct', struct_specifier: 'Struct',\n  class_specifier: 'Class', class_definition: 'Class',\n  impl_item: 'Impl', trait_item: 'Trait', trait_declaration: 'Trait',\n  record_declaration: 'Record', protocol_declaration: 'Interface',\n  class: 'Class', module: 'Module',\n};\n\n/** Walk up AST to find enclosing class/struct/interface, return its generateId or null. */\nconst findEnclosingClassId = (node: any, filePath: string): string | null => {\n  let current = node.parent;\n  while (current) {\n    if (CLASS_CONTAINER_TYPES.has(current.type)) {\n      const nameNode = current.childForFieldName?.('name')\n        ?? 
current.children?.find((c: any) =>\n          c.type === 'type_identifier' || c.type === 'identifier' || c.type === 'name' || c.type === 'constant'\n        );\n      if (nameNode) {\n        const label = CONTAINER_TYPE_TO_LABEL[current.type] || 'Class';\n        return generateId(label, `${filePath}:${nameNode.text}`);\n      }\n    }\n    current = current.parent;\n  }\n  return null;\n};\n\nexport const processCalls = async (\n  graph: KnowledgeGraph,\n  files: { path: string; content: string }[],\n  astCache: ASTCache,\n  symbolTable: SymbolTable,\n  importMap: ImportMap,\n  onProgress?: (current: number, total: number) => void\n) => {\n  const parser = await loadParser();\n\n  for (let i = 0; i < files.length; i++) {\n    const file = files[i];\n    onProgress?.(i + 1, files.length);\n\n    // 1. Check language support first\n    const language = getLanguageFromFilename(file.path);\n    if (!language) continue;\n\n    const queryStr = LANGUAGE_QUERIES[language];\n    if (!queryStr) continue;\n\n    // 2. ALWAYS load the language before querying (parser is stateful)\n    await loadLanguage(language, file.path);\n\n    // 3. Get AST (Try Cache First)\n    let tree = astCache.get(file.path);\n    let wasReparsed = false;\n\n    if (!tree) {\n      // Cache Miss: Re-parse\n      tree = parser.parse(file.content);\n      wasReparsed = true;\n    }\n\n    let query;\n    let matches;\n    try {\n      query = parser.getLanguage().query(queryStr);\n      matches = query.matches(tree.rootNode);\n    } catch (queryError) {\n      console.warn(`Query error for ${file.path}:`, queryError);\n      if (wasReparsed) tree.delete();\n      continue;\n    }\n\n    const callRouter = callRouters[language];\n\n    // 3. 
Process each call match\n    matches.forEach(match => {\n      const captureMap: Record<string, any> = {};\n      match.captures.forEach(c => captureMap[c.name] = c.node);\n\n      // Only process @call captures\n      if (!captureMap['call']) return;\n\n      const nameNode = captureMap['call.name'];\n      if (!nameNode) return;\n\n      const calledName = nameNode.text;\n\n      // Dispatch: route language-specific calls (heritage, properties, imports)\n      const routed = callRouter(calledName, captureMap['call']);\n      if (routed) {\n        switch (routed.kind) {\n          case 'skip':\n          case 'import': // handled by import-processor\n            return;\n\n          case 'heritage':\n            for (const item of routed.items) {\n              const childId = symbolTable.lookupExact(file.path, item.enclosingClass) ||\n                              symbolTable.lookupFuzzy(item.enclosingClass)[0]?.nodeId ||\n                              generateId('Class', `${file.path}:${item.enclosingClass}`);\n              const parentId = symbolTable.lookupFuzzy(item.mixinName)[0]?.nodeId ||\n                               generateId('Module', `${item.mixinName}`);\n              if (childId && parentId) {\n                const relId = generateId('IMPLEMENTS', `${childId}->${parentId}:${item.heritageKind}`);\n                graph.addRelationship({\n                  id: relId, sourceId: childId, targetId: parentId,\n                  type: 'IMPLEMENTS', confidence: 1.0, reason: item.heritageKind,\n                });\n              }\n            }\n            return;\n\n          case 'properties': {\n            const fileId = generateId('File', file.path);\n            const propEnclosingClassId = findEnclosingClassId(captureMap['call'], file.path);\n            for (const item of routed.items) {\n              const nodeId = generateId('Property', `${file.path}:${item.propName}`);\n              graph.addNode({\n                id: nodeId,\n           
     label: 'Property' as any, // TODO: add 'Property' to graph node label union\n                properties: {\n                  name: item.propName, filePath: file.path,\n                  startLine: item.startLine, endLine: item.endLine,\n                  language, isExported: true,\n                  description: item.accessorType,\n                },\n              });\n              symbolTable.add(file.path, item.propName, nodeId, 'Property');\n              const relId = generateId('DEFINES', `${fileId}->${nodeId}`);\n              graph.addRelationship({\n                id: relId, sourceId: fileId, targetId: nodeId,\n                type: 'DEFINES', confidence: 1.0, reason: '',\n              });\n              if (propEnclosingClassId) {\n                graph.addRelationship({\n                  id: generateId('HAS_METHOD', `${propEnclosingClassId}->${nodeId}`),\n                  sourceId: propEnclosingClassId, targetId: nodeId,\n                  type: 'HAS_METHOD', confidence: 1.0, reason: '',\n                });\n              }\n            }\n            return;\n          }\n\n          case 'call':\n            break; // fall through to normal call processing below\n        }\n      }\n\n      // Skip common built-ins and noise\n      if (isBuiltInOrNoise(calledName)) return;\n\n      // 4. Resolve the target using priority strategy (returns confidence)\n      const resolved = resolveCallTarget(\n        calledName,\n        file.path,\n        symbolTable,\n        importMap\n      );\n\n      if (!resolved) return;\n\n      // 5. 
Find the enclosing function (caller)\n      const callNode = captureMap['call'];\n      const enclosingFuncId = findEnclosingFunction(callNode, file.path, symbolTable);\n\n      // Use enclosing function as source, fallback to file for top-level calls\n      const sourceId = enclosingFuncId || generateId('File', file.path);\n\n      const relId = generateId('CALLS', `${sourceId}:${calledName}->${resolved.nodeId}`);\n\n      graph.addRelationship({\n        id: relId,\n        sourceId,\n        targetId: resolved.nodeId,\n        type: 'CALLS',\n        confidence: resolved.confidence,\n        reason: resolved.reason,\n      });\n    });\n\n    // Extract Laravel routes from route files via procedural AST walk\n    if (language === 'php' && (file.path.includes('/routes/') || file.path.startsWith('routes/')) && file.path.endsWith('.php')) {\n      const extractedRoutes = extractLaravelRoutes(tree, file.path);\n      for (const route of extractedRoutes) {\n        if (!route.controllerName || !route.methodName) continue;\n\n        const controllerDefs = symbolTable.lookupFuzzy(route.controllerName);\n        if (controllerDefs.length === 0) continue;\n\n        const routeImportedFiles = importMap.get(route.filePath);\n        let controllerDef = controllerDefs[0];\n        let conf = controllerDefs.length === 1 ? 
0.7 : 0.5;\n\n        if (routeImportedFiles) {\n          for (const def of controllerDefs) {\n            if (routeImportedFiles.has(def.filePath)) {\n              controllerDef = def;\n              conf = 0.9;\n              break;\n            }\n          }\n        }\n\n        const methodId = symbolTable.lookupExact(controllerDef.filePath, route.methodName);\n        const routeSourceId = generateId('File', route.filePath);\n\n        if (!methodId) {\n          const guessedId = generateId('Method', `${controllerDef.filePath}:${route.methodName}`);\n          const routeRelId = generateId('CALLS', `${routeSourceId}:route->${guessedId}`);\n          graph.addRelationship({\n            id: routeRelId,\n            sourceId: routeSourceId,\n            targetId: guessedId,\n            type: 'CALLS',\n            confidence: conf * 0.8,\n            reason: 'laravel-route',\n          });\n          continue;\n        }\n\n        const routeRelId = generateId('CALLS', `${routeSourceId}:route->${methodId}`);\n        graph.addRelationship({\n          id: routeRelId,\n          sourceId: routeSourceId,\n          targetId: methodId,\n          type: 'CALLS',\n          confidence: conf,\n          reason: 'laravel-route',\n        });\n      }\n    }\n\n    // Cleanup if re-parsed\n    if (wasReparsed) {\n      tree.delete();\n    }\n  }\n};\n\n// ============================================================================\n// Laravel Route Extraction (procedural AST walk)\n// ============================================================================\n\ninterface ExtractedRoute {\n  filePath: string;\n  httpMethod: string;\n  routePath: string | null;\n  controllerName: string | null;\n  methodName: string | null;\n  middleware: string[];\n  prefix: string | null;\n  lineNumber: number;\n}\n\ninterface RouteGroupContext {\n  middleware: string[];\n  prefix: string | null;\n  controller: string | null;\n}\n\nconst ROUTE_HTTP_METHODS = new Set([\n  'get', 
'post', 'put', 'patch', 'delete', 'options', 'any', 'match',\n]);\n\nconst ROUTE_RESOURCE_METHODS = new Set(['resource', 'apiResource']);\n\nconst RESOURCE_ACTIONS = ['index', 'create', 'store', 'show', 'edit', 'update', 'destroy'];\nconst API_RESOURCE_ACTIONS = ['index', 'store', 'show', 'update', 'destroy'];\n\nfunction isRouteStaticCall(node: any): boolean {\n  if (node.type !== 'scoped_call_expression') return false;\n  const obj = node.childForFieldName?.('object') ?? node.children?.[0];\n  return obj?.text === 'Route';\n}\n\nfunction getCallMethodName(node: any): string | null {\n  const nameNode = node.childForFieldName?.('name') ??\n    node.children?.find((c: any) => c.type === 'name');\n  return nameNode?.text ?? null;\n}\n\nfunction getArguments(node: any): any {\n  return node.children?.find((c: any) => c.type === 'arguments') ?? null;\n}\n\nfunction findClosureBody(argsNode: any): any | null {\n  if (!argsNode) return null;\n  for (const child of argsNode.children ?? []) {\n    if (child.type === 'argument') {\n      for (const inner of child.children ?? []) {\n        if (inner.type === 'anonymous_function' ||\n            inner.type === 'arrow_function') {\n          return inner.childForFieldName?.('body') ??\n            inner.children?.find((c: any) => c.type === 'compound_statement');\n        }\n      }\n    }\n    if (child.type === 'anonymous_function' ||\n        child.type === 'arrow_function') {\n      return child.childForFieldName?.('body') ??\n        child.children?.find((c: any) => c.type === 'compound_statement');\n    }\n  }\n  return null;\n}\n\nfunction findDescendant(node: any, type: string): any {\n  if (node.type === type) return node;\n  for (const child of (node.children ?? 
[])) {\n    const found = findDescendant(child, type);\n    if (found) return found;\n  }\n  return null;\n}\n\nfunction extractStringContent(node: any): string | null {\n  if (!node) return null;\n  const content = node.children?.find((c: any) => c.type === 'string_content');\n  if (content) return content.text;\n  if (node.type === 'string_content') return node.text;\n  return null;\n}\n\nfunction extractFirstStringArg(argsNode: any): string | null {\n  if (!argsNode) return null;\n  for (const child of argsNode.children ?? []) {\n    const target = child.type === 'argument' ? child.children?.[0] : child;\n    if (!target) continue;\n    if (target.type === 'string' || target.type === 'encapsed_string') {\n      return extractStringContent(target);\n    }\n  }\n  return null;\n}\n\nfunction extractMiddlewareArg(argsNode: any): string[] {\n  if (!argsNode) return [];\n  for (const child of argsNode.children ?? []) {\n    const target = child.type === 'argument' ? child.children?.[0] : child;\n    if (!target) continue;\n    if (target.type === 'string' || target.type === 'encapsed_string') {\n      const val = extractStringContent(target);\n      return val ? [val] : [];\n    }\n    if (target.type === 'array_creation_expression') {\n      const items: string[] = [];\n      for (const el of target.children ?? []) {\n        if (el.type === 'array_element_initializer') {\n          const str = el.children?.find((c: any) => c.type === 'string' || c.type === 'encapsed_string');\n          const val = str ? extractStringContent(str) : null;\n          if (val) items.push(val);\n        }\n      }\n      return items;\n    }\n  }\n  return [];\n}\n\nfunction extractClassArg(argsNode: any): string | null {\n  if (!argsNode) return null;\n  for (const child of argsNode.children ?? []) {\n    const target = child.type === 'argument' ? 
child.children?.[0] : child;\n    if (target?.type === 'class_constant_access_expression') {\n      return target.children?.find((c: any) => c.type === 'name')?.text ?? null;\n    }\n  }\n  return null;\n}\n\nfunction extractControllerTarget(argsNode: any): { controller: string | null; method: string | null } {\n  if (!argsNode) return { controller: null, method: null };\n\n  const args: any[] = [];\n  for (const child of argsNode.children ?? []) {\n    if (child.type === 'argument') args.push(child.children?.[0]);\n    else if (child.type !== '(' && child.type !== ')' && child.type !== ',') args.push(child);\n  }\n\n  const handlerNode = args[1];\n  if (!handlerNode) return { controller: null, method: null };\n\n  if (handlerNode.type === 'array_creation_expression') {\n    let controller: string | null = null;\n    let method: string | null = null;\n    const elements: any[] = [];\n    for (const el of handlerNode.children ?? []) {\n      if (el.type === 'array_element_initializer') elements.push(el);\n    }\n    if (elements[0]) {\n      const classAccess = findDescendant(elements[0], 'class_constant_access_expression');\n      if (classAccess) {\n        controller = classAccess.children?.find((c: any) => c.type === 'name')?.text ?? null;\n      }\n    }\n    if (elements[1]) {\n      // Match both single-quoted ('string') and double-quoted ('encapsed_string') method names\n      const str = findDescendant(elements[1], 'string') ?? findDescendant(elements[1], 'encapsed_string');\n      method = str ? extractStringContent(str) : null;\n    }\n    return { controller, method };\n  }\n\n  if (handlerNode.type === 'string' || handlerNode.type === 'encapsed_string') {\n    const text = extractStringContent(handlerNode);\n    if (text?.includes('@')) {\n      const [controller, method] = text.split('@');\n      return { controller, method };\n    }\n  }\n\n  if (handlerNode.type === 'class_constant_access_expression') {\n    const controller = handlerNode.children?.find((c: any) => c.type === 'name')?.text ?? 
null;\n    return { controller, method: '__invoke' };\n  }\n\n  return { controller: null, method: null };\n}\n\ninterface ChainedRouteCall {\n  isRouteFacade: boolean;\n  terminalMethod: string;\n  attributes: { method: string; argsNode: any }[];\n  terminalArgs: any;\n  node: any;\n}\n\nfunction unwrapRouteChain(node: any): ChainedRouteCall | null {\n  if (node.type !== 'member_call_expression') return null;\n\n  const terminalMethod = getCallMethodName(node);\n  if (!terminalMethod) return null;\n\n  const terminalArgs = getArguments(node);\n  const attributes: { method: string; argsNode: any }[] = [];\n\n  let current = node.children?.[0];\n\n  while (current) {\n    if (current.type === 'member_call_expression') {\n      const method = getCallMethodName(current);\n      const args = getArguments(current);\n      if (method) attributes.unshift({ method, argsNode: args });\n      current = current.children?.[0];\n    } else if (current.type === 'scoped_call_expression') {\n      const obj = current.childForFieldName?.('object') ?? current.children?.[0];\n      if (obj?.text !== 'Route') return null;\n\n      const method = getCallMethodName(current);\n      const args = getArguments(current);\n      if (method) attributes.unshift({ method, argsNode: args });\n\n      return { isRouteFacade: true, terminalMethod, attributes, terminalArgs, node };\n    } else {\n      break;\n    }\n  }\n\n  return null;\n}\n\nfunction parseArrayGroupArgs(argsNode: any): RouteGroupContext {\n  const ctx: RouteGroupContext = { middleware: [], prefix: null, controller: null };\n  if (!argsNode) return ctx;\n\n  for (const child of argsNode.children ?? []) {\n    const target = child.type === 'argument' ? child.children?.[0] : child;\n    if (target?.type === 'array_creation_expression') {\n      for (const el of target.children ?? []) {\n        if (el.type !== 'array_element_initializer') continue;\n        const children = el.children ?? 
[];\n        const arrowIdx = children.findIndex((c: any) => c.type === '=>');\n        if (arrowIdx === -1) continue;\n        const key = extractStringContent(children[arrowIdx - 1]);\n        const val = children[arrowIdx + 1];\n        if (key === 'middleware') {\n          if (val?.type === 'string' || val?.type === 'encapsed_string') {\n            const s = extractStringContent(val);\n            if (s) ctx.middleware.push(s);\n          } else if (val?.type === 'array_creation_expression') {\n            for (const item of val.children ?? []) {\n              if (item.type === 'array_element_initializer') {\n                const str = item.children?.find((c: any) => c.type === 'string' || c.type === 'encapsed_string');\n                const s = str ? extractStringContent(str) : null;\n                if (s) ctx.middleware.push(s);\n              }\n            }\n          }\n        } else if (key === 'prefix') {\n          ctx.prefix = extractStringContent(val) ?? null;\n        } else if (key === 'controller') {\n          if (val?.type === 'class_constant_access_expression') {\n            ctx.controller = val.children?.find((c: any) => c.type === 'name')?.text ?? null;\n          }\n        }\n      }\n    }\n  }\n  return ctx;\n}\n\nfunction extractLaravelRoutes(tree: any, filePath: string): ExtractedRoute[] {\n  const routes: ExtractedRoute[] = [];\n\n  function resolveStack(stack: RouteGroupContext[]): { middleware: string[]; prefix: string | null; controller: string | null } {\n    const middleware: string[] = [];\n    let prefix: string | null = null;\n    let controller: string | null = null;\n    for (const ctx of stack) {\n      middleware.push(...ctx.middleware);\n      if (ctx.prefix) prefix = prefix ? 
`${prefix}/${ctx.prefix}`.replace(/\\/+/g, '/') : ctx.prefix;\n      if (ctx.controller) controller = ctx.controller;\n    }\n    return { middleware, prefix, controller };\n  }\n\n  function emitRoute(\n    httpMethod: string,\n    argsNode: any,\n    lineNumber: number,\n    groupStack: RouteGroupContext[],\n    chainAttrs: { method: string; argsNode: any }[],\n  ) {\n    const effective = resolveStack(groupStack);\n\n    for (const attr of chainAttrs) {\n      if (attr.method === 'middleware') effective.middleware.push(...extractMiddlewareArg(attr.argsNode));\n      if (attr.method === 'prefix') {\n        const p = extractFirstStringArg(attr.argsNode);\n        if (p) effective.prefix = effective.prefix ? `${effective.prefix}/${p}` : p;\n      }\n      if (attr.method === 'controller') {\n        const cls = extractClassArg(attr.argsNode);\n        if (cls) effective.controller = cls;\n      }\n    }\n\n    const routePath = extractFirstStringArg(argsNode);\n\n    if (ROUTE_RESOURCE_METHODS.has(httpMethod)) {\n      const target = extractControllerTarget(argsNode);\n      const actions = httpMethod === 'apiResource' ? API_RESOURCE_ACTIONS : RESOURCE_ACTIONS;\n      for (const action of actions) {\n        routes.push({\n          filePath, httpMethod, routePath,\n          controllerName: target.controller ?? effective.controller,\n          methodName: action,\n          middleware: [...effective.middleware],\n          prefix: effective.prefix,\n          lineNumber,\n        });\n      }\n    } else {\n      const target = extractControllerTarget(argsNode);\n      routes.push({\n        filePath, httpMethod, routePath,\n        controllerName: target.controller ?? 
effective.controller,\n        methodName: target.method,\n        middleware: [...effective.middleware],\n        prefix: effective.prefix,\n        lineNumber,\n      });\n    }\n  }\n\n  function walk(node: any, groupStack: RouteGroupContext[]) {\n    if (isRouteStaticCall(node)) {\n      const method = getCallMethodName(node);\n      if (method && (ROUTE_HTTP_METHODS.has(method) || ROUTE_RESOURCE_METHODS.has(method))) {\n        emitRoute(method, getArguments(node), node.startPosition.row, groupStack, []);\n        return;\n      }\n      if (method === 'group') {\n        const argsNode = getArguments(node);\n        const groupCtx = parseArrayGroupArgs(argsNode);\n        const body = findClosureBody(argsNode);\n        if (body) {\n          groupStack.push(groupCtx);\n          walkChildren(body, groupStack);\n          groupStack.pop();\n        }\n        return;\n      }\n    }\n\n    const chain = unwrapRouteChain(node);\n    if (chain) {\n      if (chain.terminalMethod === 'group') {\n        const groupCtx: RouteGroupContext = { middleware: [], prefix: null, controller: null };\n        for (const attr of chain.attributes) {\n          if (attr.method === 'middleware') groupCtx.middleware.push(...extractMiddlewareArg(attr.argsNode));\n          if (attr.method === 'prefix') groupCtx.prefix = extractFirstStringArg(attr.argsNode);\n          if (attr.method === 'controller') groupCtx.controller = extractClassArg(attr.argsNode);\n        }\n        const body = findClosureBody(chain.terminalArgs);\n        if (body) {\n          groupStack.push(groupCtx);\n          walkChildren(body, groupStack);\n          groupStack.pop();\n        }\n        return;\n      }\n      if (ROUTE_HTTP_METHODS.has(chain.terminalMethod) || ROUTE_RESOURCE_METHODS.has(chain.terminalMethod)) {\n        emitRoute(chain.terminalMethod, chain.terminalArgs, node.startPosition.row, groupStack, chain.attributes);\n        return;\n      }\n    }\n\n    walkChildren(node, 
groupStack);\n  }\n\n  function walkChildren(node: any, groupStack: RouteGroupContext[]) {\n    for (const child of node.children ?? []) {\n      walk(child, groupStack);\n    }\n  }\n\n  walk(tree.rootNode, []);\n  return routes;\n}\n\n/**\n * Resolution result with confidence scoring\n */\ninterface ResolveResult {\n  nodeId: string;\n  confidence: number;  // 0-1: how sure are we?\n  reason: string;      // 'import-resolved' | 'same-file' | 'fuzzy-global'\n}\n\n/**\n * Resolve a function call to its target node ID using priority strategy:\n * A. Check imported files first (highest confidence)\n * B. Check local file definitions\n * C. Fuzzy global search (lowest confidence)\n * \n * Returns confidence score so agents know what to trust.\n */\nconst resolveCallTarget = (\n  calledName: string,\n  currentFile: string,\n  symbolTable: SymbolTable,\n  importMap: ImportMap\n): ResolveResult | null => {\n  // Strategy A: Check imported files (HIGH confidence - we know the import chain)\n  const importedFiles = importMap.get(currentFile);\n  if (importedFiles) {\n    for (const importedFile of importedFiles) {\n      const nodeId = symbolTable.lookupExact(importedFile, calledName);\n      if (nodeId) {\n        return { nodeId, confidence: 0.9, reason: 'import-resolved' };\n      }\n    }\n  }\n\n  // Strategy B: Check local file (HIGH confidence - same file definition)\n  const localNodeId = symbolTable.lookupExact(currentFile, calledName);\n  if (localNodeId) {\n    return { nodeId: localNodeId, confidence: 0.85, reason: 'same-file' };\n  }\n\n  // Strategy C: Fuzzy global search (LOW confidence - just matching by name)\n  const fuzzyMatches = symbolTable.lookupFuzzy(calledName);\n  if (fuzzyMatches.length > 0) {\n    // Lower confidence if multiple matches exist (more ambiguous)\n    const confidence = fuzzyMatches.length === 1 ? 
0.5 : 0.3;\n    return { nodeId: fuzzyMatches[0].nodeId, confidence, reason: 'fuzzy-global' };\n  }\n\n  return null;\n};\n\n/**\n * Filter out common built-in functions and noise\n * that shouldn't be tracked as calls\n */\n/** Pre-built set (module-level singleton) to avoid re-creating per call */\nconst BUILT_IN_NAMES = new Set([\n  // JavaScript/TypeScript built-ins\n  'console', 'log', 'warn', 'error', 'info', 'debug',\n  'setTimeout', 'setInterval', 'clearTimeout', 'clearInterval',\n  'parseInt', 'parseFloat', 'isNaN', 'isFinite',\n  'encodeURI', 'decodeURI', 'encodeURIComponent', 'decodeURIComponent',\n  'JSON', 'parse', 'stringify',\n  'Object', 'Array', 'String', 'Number', 'Boolean', 'Symbol', 'BigInt',\n  'Map', 'Set', 'WeakMap', 'WeakSet',\n  'Promise', 'resolve', 'reject', 'then', 'catch', 'finally',\n  'Math', 'Date', 'RegExp', 'Error',\n  'require', 'import', 'export',\n  'fetch', 'Response', 'Request',\n  // React hooks and common functions\n  'useState', 'useEffect', 'useCallback', 'useMemo', 'useRef', 'useContext',\n  'useReducer', 'useLayoutEffect', 'useImperativeHandle', 'useDebugValue',\n  'createElement', 'createContext', 'createRef', 'forwardRef', 'memo', 'lazy',\n  // Common array/object methods\n  'map', 'filter', 'reduce', 'forEach', 'find', 'findIndex', 'some', 'every',\n  'includes', 'indexOf', 'slice', 'splice', 'concat', 'join', 'split',\n  'push', 'pop', 'shift', 'unshift', 'sort', 'reverse',\n  'keys', 'values', 'entries', 'assign', 'freeze', 'seal',\n  'hasOwnProperty', 'toString', 'valueOf',\n  // Python built-ins\n  'print', 'len', 'range', 'str', 'int', 'float', 'list', 'dict', 'set', 'tuple',\n  'open', 'read', 'write', 'close', 'append', 'extend', 'update',\n  'super', 'type', 'isinstance', 'issubclass', 'getattr', 'setattr', 'hasattr',\n  'enumerate', 'zip', 'sorted', 'reversed', 'min', 'max', 'sum', 'abs',\n  // C/C++ standard library and common kernel helpers\n  'printf', 'fprintf', 'sprintf', 'snprintf', 'vprintf', 
'vfprintf', 'vsprintf', 'vsnprintf',\n  'scanf', 'fscanf', 'sscanf',\n  'malloc', 'calloc', 'realloc', 'free', 'memcpy', 'memmove', 'memset', 'memcmp',\n  'strlen', 'strcpy', 'strncpy', 'strcat', 'strncat', 'strcmp', 'strncmp', 'strstr', 'strchr', 'strrchr',\n  'atoi', 'atol', 'atof', 'strtol', 'strtoul', 'strtoll', 'strtoull', 'strtod',\n  'sizeof', 'offsetof', 'typeof',\n  'assert', 'abort', 'exit', '_exit',\n  'fopen', 'fclose', 'fread', 'fwrite', 'fseek', 'ftell', 'rewind', 'fflush', 'fgets', 'fputs',\n  // Linux kernel common macros/helpers (not real call targets)\n  'likely', 'unlikely', 'BUG', 'BUG_ON', 'WARN', 'WARN_ON', 'WARN_ONCE',\n  'IS_ERR', 'PTR_ERR', 'ERR_PTR', 'IS_ERR_OR_NULL',\n  'ARRAY_SIZE', 'container_of', 'list_for_each_entry', 'list_for_each_entry_safe',\n  'min', 'max', 'clamp', 'abs', 'swap',\n  'pr_info', 'pr_warn', 'pr_err', 'pr_debug', 'pr_notice', 'pr_crit', 'pr_emerg',\n  'printk', 'dev_info', 'dev_warn', 'dev_err', 'dev_dbg',\n  'GFP_KERNEL', 'GFP_ATOMIC',\n  'spin_lock', 'spin_unlock', 'spin_lock_irqsave', 'spin_unlock_irqrestore',\n  'mutex_lock', 'mutex_unlock', 'mutex_init',\n  'kfree', 'kmalloc', 'kzalloc', 'kcalloc', 'krealloc', 'kvmalloc', 'kvfree',\n  'get', 'put',\n  // Ruby built-ins and Kernel methods\n  'puts', 'print', 'p', 'pp', 'warn', 'raise', 'fail',\n  'require', 'require_relative', 'load', 'autoload',\n  'include', 'extend', 'prepend',\n  'attr_accessor', 'attr_reader', 'attr_writer',\n  'public', 'private', 'protected', 'module_function',\n  'lambda', 'proc', 'block_given?',\n  'nil?', 'is_a?', 'kind_of?', 'instance_of?', 'respond_to?',\n  'freeze', 'frozen?', 'dup', 'clone', 'tap', 'then', 'yield_self',\n  // Ruby enumerables\n  'each', 'map', 'select', 'reject', 'find', 'detect', 'collect',\n  'inject', 'reduce', 'flat_map', 'each_with_object', 'each_with_index',\n  'any?', 'all?', 'none?', 'count', 'first', 'last',\n  'sort', 'sort_by', 'min', 'max', 'min_by', 'max_by',\n  'group_by', 'partition', 'zip', 
'compact', 'flatten', 'uniq',\n]);\n\nconst isBuiltInOrNoise = (name: string): boolean => BUILT_IN_NAMES.has(name);\n\n"
  },
  {
    "path": "gitnexus-web/src/core/ingestion/call-routing.ts",
    "content": "/**\n * Shared Ruby call routing logic.\n *\n * Ruby expresses imports, heritage (mixins), and property definitions as\n * method calls rather than syntax-level constructs. This module provides a\n * routing function used by the CLI call-processor, CLI parse-worker, and\n * the web call-processor so that the classification logic lives in one place.\n *\n * NOTE: This file is intentionally duplicated in gitnexus-web/ because the\n * two packages have separate build targets (Node native vs WASM/browser).\n * Keep both copies in sync until a shared package is introduced.\n */\n\nimport { SupportedLanguages } from '../../config/supported-languages';\n\n// ── Call routing dispatch table ─────────────────────────────────────────────\n\n/** null = this call was not routed; fall through to default call handling */\nexport type CallRoutingResult = RubyCallRouting | null;\n\nexport type CallRouter = (\n  calledName: string,\n  callNode: any,\n) => CallRoutingResult;\n\n/** No-op router: returns null for every call (passthrough to normal processing) */\nconst noRouting: CallRouter = () => null;\n\n/** Per-language call routing. 
noRouting = no special routing (normal call processing) */\nexport const callRouters = {\n  [SupportedLanguages.JavaScript]: noRouting,\n  [SupportedLanguages.TypeScript]: noRouting,\n  [SupportedLanguages.Python]: noRouting,\n  [SupportedLanguages.Java]: noRouting,\n  [SupportedLanguages.Go]: noRouting,\n  [SupportedLanguages.Rust]: noRouting,\n  [SupportedLanguages.CSharp]: noRouting,\n  [SupportedLanguages.PHP]: noRouting,\n  [SupportedLanguages.Swift]: noRouting,\n  [SupportedLanguages.CPlusPlus]: noRouting,\n  [SupportedLanguages.C]: noRouting,\n  [SupportedLanguages.Ruby]: routeRubyCall,\n  [SupportedLanguages.Kotlin]: noRouting,\n} satisfies Record<SupportedLanguages, CallRouter>;\n\n// ── Result types ────────────────────────────────────────────────────────────\n\nexport type RubyCallRouting =\n  | { kind: 'import'; importPath: string; isRelative: boolean }\n  | { kind: 'heritage'; items: RubyHeritageItem[] }\n  | { kind: 'properties'; items: RubyPropertyItem[] }\n  | { kind: 'call' }\n  | { kind: 'skip' };\n\nexport interface RubyHeritageItem {\n  enclosingClass: string;\n  mixinName: string;\n  heritageKind: 'include' | 'extend' | 'prepend';\n}\n\nexport type RubyAccessorType = 'attr_accessor' | 'attr_reader' | 'attr_writer';\n\nexport interface RubyPropertyItem {\n  propName: string;\n  accessorType: RubyAccessorType;\n  startLine: number;\n  endLine: number;\n}\n\n// ── Pre-allocated singletons for common return values ────────────────────────\nconst CALL_RESULT: RubyCallRouting = { kind: 'call' };\nconst SKIP_RESULT: RubyCallRouting = { kind: 'skip' };\n\n/** Max depth for parent-walking loops to prevent pathological AST traversals */\nconst MAX_PARENT_DEPTH = 50;\n\n// ── Routing function ────────────────────────────────────────────────────────\n\n/**\n * Classify a Ruby call node and extract its semantic payload.\n *\n * @param calledName - The method name (e.g. 
'require', 'include', 'attr_accessor')\n * @param callNode   - The tree-sitter `call` AST node\n * @returns A discriminated union describing the call's semantic role\n */\nexport function routeRubyCall(calledName: string, callNode: any): RubyCallRouting {\n  // ── require / require_relative → import ─────────────────────────────────\n  if (calledName === 'require' || calledName === 'require_relative') {\n    const argList = callNode.childForFieldName?.('arguments');\n    const stringNode = argList?.children?.find((c: any) => c.type === 'string');\n    const contentNode = stringNode?.children?.find((c: any) => c.type === 'string_content');\n    if (!contentNode) return SKIP_RESULT;\n\n    let importPath: string = contentNode.text;\n    // Validate: reject null bytes, control chars, excessively long paths\n    if (!importPath || importPath.length > 1024 || /[\\x00-\\x1f]/.test(importPath)) {\n      return SKIP_RESULT;\n    }\n    const isRelative = calledName === 'require_relative';\n    if (isRelative && !importPath.startsWith('.')) {\n      importPath = './' + importPath;\n    }\n    return { kind: 'import', importPath, isRelative };\n  }\n\n  // ── include / extend / prepend → heritage (mixin) ──────────────────────\n  if (calledName === 'include' || calledName === 'extend' || calledName === 'prepend') {\n    let enclosingClass: string | null = null;\n    let current = callNode.parent;\n    let depth = 0;\n    while (current && ++depth <= MAX_PARENT_DEPTH) {\n      if (current.type === 'class' || current.type === 'module') {\n        const nameNode = current.childForFieldName?.('name');\n        if (nameNode) { enclosingClass = nameNode.text; break; }\n      }\n      current = current.parent;\n    }\n    if (!enclosingClass) return SKIP_RESULT;\n\n    const items: RubyHeritageItem[] = [];\n    const argList = callNode.childForFieldName?.('arguments');\n    for (const arg of (argList?.children ?? 
[])) {\n      if (arg.type === 'constant' || arg.type === 'scope_resolution') {\n        items.push({ enclosingClass, mixinName: arg.text, heritageKind: calledName as 'include' | 'extend' | 'prepend' });\n      }\n    }\n    return items.length > 0 ? { kind: 'heritage', items } : SKIP_RESULT;\n  }\n\n  // ── attr_accessor / attr_reader / attr_writer → property definitions ───\n  if (calledName === 'attr_accessor' || calledName === 'attr_reader' || calledName === 'attr_writer') {\n    const items: RubyPropertyItem[] = [];\n    const argList = callNode.childForFieldName?.('arguments');\n    for (const arg of (argList?.children ?? [])) {\n      if (arg.type === 'simple_symbol') {\n        items.push({\n          propName: arg.text.startsWith(':') ? arg.text.slice(1) : arg.text,\n          accessorType: calledName as RubyAccessorType,\n          startLine: arg.startPosition.row,\n          endLine: arg.endPosition.row,\n        });\n      }\n    }\n    return items.length > 0 ? { kind: 'properties', items } : SKIP_RESULT;\n  }\n\n  // ── Everything else → regular call ─────────────────────────────────────\n  return CALL_RESULT;\n}\n"
  },
  {
    "path": "gitnexus-web/src/core/ingestion/cluster-enricher.ts",
    "content": "/**\n * Cluster Enricher\n * \n * LLM-based enrichment for community clusters.\n * Generates semantic names, keywords, and descriptions using an LLM.\n */\n\nimport { CommunityNode } from './community-processor';\n\n// ============================================================================\n// TYPES\n// ============================================================================\n\nexport interface ClusterEnrichment {\n  name: string;\n  keywords: string[];\n  description: string;\n}\n\nexport interface EnrichmentResult {\n  enrichments: Map<string, ClusterEnrichment>;\n  tokensUsed: number;\n}\n\nexport interface LLMClient {\n  generate: (prompt: string) => Promise<string>;\n}\n\nexport interface ClusterMemberInfo {\n  name: string;\n  filePath: string;\n  type: string; // 'Function' | 'Class' | 'Method' | 'Interface'\n}\n\n// ============================================================================\n// PROMPT TEMPLATE\n// ============================================================================\n\nconst buildEnrichmentPrompt = (\n  members: ClusterMemberInfo[],\n  heuristicLabel: string\n): string => {\n  // Limit to first 20 members to control token usage\n  const limitedMembers = members.slice(0, 20);\n  \n  const memberList = limitedMembers\n    .map(m => `${m.name} (${m.type})`)\n    .join(', ');\n  \n  return `Analyze this code cluster and provide a semantic name and short description.\n\nHeuristic: \"${heuristicLabel}\"\nMembers: ${memberList}${members.length > 20 ? 
` (+${members.length - 20} more)` : ''}\n\nReply with JSON only:\n{\"name\": \"2-4 word semantic name\", \"keywords\": [\"3-5 lowercase keywords\"], \"description\": \"One sentence describing purpose\"}`\n};\n\n// ============================================================================\n// PARSE LLM RESPONSE\n// ============================================================================\n\nconst parseEnrichmentResponse = (\n  response: string,\n  fallbackLabel: string\n): ClusterEnrichment => {\n  try {\n    // Extract JSON from response (handles markdown code blocks)\n    const jsonMatch = response.match(/\\{[\\s\\S]*\\}/);\n    if (!jsonMatch) {\n      throw new Error('No JSON found in response');\n    }\n    \n    const parsed = JSON.parse(jsonMatch[0]);\n    \n    return {\n      name: parsed.name || fallbackLabel,\n      keywords: Array.isArray(parsed.keywords) ? parsed.keywords : [],\n      description: parsed.description || '',\n    };\n  } catch {\n    // Fallback if parsing fails\n    return {\n      name: fallbackLabel,\n      keywords: [],\n      description: '',\n    };\n  }\n};\n\n// ============================================================================\n// MAIN ENRICHMENT FUNCTION\n// ============================================================================\n\n/**\n * Enrich clusters with LLM-generated names, keywords, and descriptions\n * \n * @param communities - Community nodes to enrich\n * @param memberMap - Map of communityId -> member info\n * @param llmClient - LLM client for generation\n * @param onProgress - Progress callback\n */\nexport const enrichClusters = async (\n  communities: CommunityNode[],\n  memberMap: Map<string, ClusterMemberInfo[]>,\n  llmClient: LLMClient,\n  onProgress?: (current: number, total: number) => void\n): Promise<EnrichmentResult> => {\n  const enrichments = new Map<string, ClusterEnrichment>();\n  let tokensUsed = 0;\n  \n  for (let i = 0; i < communities.length; i++) {\n    const community = communities[i];\n    const members = 
memberMap.get(community.id) || [];\n    \n    onProgress?.(i + 1, communities.length);\n    \n    if (members.length === 0) {\n      // No members, use heuristic\n      enrichments.set(community.id, {\n        name: community.heuristicLabel,\n        keywords: [],\n        description: '',\n      });\n      continue;\n    }\n    \n    try {\n      const prompt = buildEnrichmentPrompt(members, community.heuristicLabel);\n      const response = await llmClient.generate(prompt);\n      \n      // Rough token estimate\n      tokensUsed += prompt.length / 4 + response.length / 4;\n      \n      const enrichment = parseEnrichmentResponse(response, community.heuristicLabel);\n      enrichments.set(community.id, enrichment);\n    } catch (error) {\n      // On error, fallback to heuristic\n      console.warn(`Failed to enrich cluster ${community.id}:`, error);\n      enrichments.set(community.id, {\n        name: community.heuristicLabel,\n        keywords: [],\n        description: '',\n      });\n    }\n  }\n  \n  return { enrichments, tokensUsed };\n};\n\n// ============================================================================\n// BATCH ENRICHMENT (more efficient)\n// ============================================================================\n\n/**\n * Enrich multiple clusters in a single LLM call (batch mode)\n * More efficient for token usage but requires larger context window\n */\nexport const enrichClustersBatch = async (\n  communities: CommunityNode[],\n  memberMap: Map<string, ClusterMemberInfo[]>,\n  llmClient: LLMClient,\n  batchSize: number = 5,\n  onProgress?: (current: number, total: number) => void\n): Promise<EnrichmentResult> => {\n  const enrichments = new Map<string, ClusterEnrichment>();\n  let tokensUsed = 0;\n  \n  // Process in batches\n  for (let i = 0; i < communities.length; i += batchSize) {\n    // Report progress\n    onProgress?.(Math.min(i + batchSize, communities.length), communities.length);\n\n    const batch = 
communities.slice(i, i + batchSize);\n    \n    const batchPrompt = batch.map((community, idx) => {\n      const members = memberMap.get(community.id) || [];\n      const limitedMembers = members.slice(0, 15);\n      const memberList = limitedMembers\n        .map(m => `${m.name} (${m.type})`)\n        .join(', ');\n      \n      return `Cluster ${idx + 1} (id: ${community.id}):\nHeuristic: \"${community.heuristicLabel}\"\nMembers: ${memberList}`;\n    }).join('\\n\\n');\n    \n    const prompt = `Analyze these code clusters and generate semantic names, keywords, and descriptions.\n\n${batchPrompt}\n\nOutput JSON array:\n[\n  {\"id\": \"comm_X\", \"name\": \"...\", \"keywords\": [...], \"description\": \"...\"},\n  ...\n]`;\n    \n    try {\n      const response = await llmClient.generate(prompt);\n      tokensUsed += prompt.length / 4 + response.length / 4;\n      \n      // Parse batch response\n      const jsonMatch = response.match(/\\[[\\s\\S]*\\]/);\n      if (jsonMatch) {\n        const parsed = JSON.parse(jsonMatch[0]) as Array<{\n          id: string;\n          name: string;\n          keywords: string[];\n          description: string;\n        }>;\n        \n        for (const item of parsed) {\n          enrichments.set(item.id, {\n            name: item.name,\n            keywords: item.keywords || [],\n            description: item.description || '',\n          });\n        }\n      }\n    } catch (error) {\n      console.warn('Batch enrichment failed, falling back to heuristics:', error);\n      // Fallback for this batch\n      for (const community of batch) {\n        enrichments.set(community.id, {\n          name: community.heuristicLabel,\n          keywords: [],\n          description: '',\n        });\n      }\n    }\n  }\n  \n  // Fill in any missing communities\n  for (const community of communities) {\n    if (!enrichments.has(community.id)) {\n      enrichments.set(community.id, {\n        name: community.heuristicLabel,\n        
keywords: [],\n        description: '',\n      });\n    }\n  }\n  \n  return { enrichments, tokensUsed };\n};\n"
  },
  {
    "path": "gitnexus-web/src/core/ingestion/community-processor.ts",
    "content": "/**\n * Community Detection Processor\n * \n * Uses the Leiden algorithm (vendored from graphology-communities-leiden) to detect\n * communities/clusters in the code graph based on CALLS relationships.\n * \n * Communities represent groups of code that work together frequently,\n * helping agents navigate the codebase by functional area rather than file structure.\n */\n\nimport Graph from 'graphology';\nimport leiden from '../../vendor/leiden/index.js';\nimport { KnowledgeGraph, NodeLabel } from '../graph/types';\n\n// ============================================================================\n// TYPES\n// ============================================================================\n\nexport interface CommunityNode {\n  id: string;\n  label: string;\n  heuristicLabel: string;\n  cohesion: number;\n  symbolCount: number;\n}\n\nexport interface CommunityMembership {\n  nodeId: string;\n  communityId: string;\n}\n\nexport interface CommunityDetectionResult {\n  communities: CommunityNode[];\n  memberships: CommunityMembership[];\n  stats: {\n    totalCommunities: number;\n    modularity: number;\n    nodesProcessed: number;\n  };\n}\n\n// ============================================================================\n// COMMUNITY COLORS (for visualization)\n// ============================================================================\n\nexport const COMMUNITY_COLORS = [\n  '#ef4444', // red\n  '#f97316', // orange\n  '#eab308', // yellow\n  '#22c55e', // green\n  '#06b6d4', // cyan\n  '#3b82f6', // blue\n  '#8b5cf6', // violet\n  '#d946ef', // fuchsia\n  '#ec4899', // pink\n  '#f43f5e', // rose\n  '#14b8a6', // teal\n  '#84cc16', // lime\n];\n\nexport const getCommunityColor = (communityIndex: number): string => {\n  return COMMUNITY_COLORS[communityIndex % COMMUNITY_COLORS.length];\n};\n\n// ============================================================================\n// MAIN PROCESSOR\n// 
============================================================================\n\n/**\n * Detect communities in the knowledge graph using Leiden algorithm\n * \n * This runs AFTER all relationships (CALLS, IMPORTS, etc.) have been built.\n * It uses primarily CALLS edges to cluster code that works together.\n */\nexport const processCommunities = async (\n  knowledgeGraph: KnowledgeGraph,\n  onProgress?: (message: string, progress: number) => void\n): Promise<CommunityDetectionResult> => {\n  onProgress?.('Building graph for community detection...', 0);\n\n  // Step 1: Build a graphology graph from the knowledge graph\n  // We only include symbol nodes (Function, Class, Method) and CALLS edges\n  const graph = buildGraphologyGraph(knowledgeGraph);\n  \n  if (graph.order === 0) {\n    // No nodes to cluster\n    return {\n      communities: [],\n      memberships: [],\n      stats: { totalCommunities: 0, modularity: 0, nodesProcessed: 0 }\n    };\n  }\n\n  onProgress?.(`Running Leiden algorithm on ${graph.order} nodes...`, 30);\n\n  // Step 2: Run Leiden algorithm for community detection\n  const details = leiden.detailed(graph, {\n    resolution: 1.0,  // Default resolution, can be tuned\n    randomWalk: true,\n  });\n\n  onProgress?.(`Found ${details.count} communities...`, 60);\n\n  // Step 3: Create community nodes with heuristic labels\n  const communityNodes = createCommunityNodes(\n    details.communities as Record<string, number>,\n    details.count,\n    graph,\n    knowledgeGraph\n  );\n\n  onProgress?.('Creating membership edges...', 80);\n\n  // Step 4: Create membership mappings\n  const memberships: CommunityMembership[] = [];\n  Object.entries(details.communities).forEach(([nodeId, communityNum]) => {\n    memberships.push({\n      nodeId,\n      communityId: `comm_${communityNum}`,\n    });\n  });\n\n  onProgress?.('Community detection complete!', 100);\n\n  return {\n    communities: communityNodes,\n    memberships,\n    stats: {\n      
totalCommunities: details.count,\n      modularity: details.modularity,\n      nodesProcessed: graph.order,\n    }\n  };\n};\n\n// ============================================================================\n// HELPER: Build graphology graph from knowledge graph\n// ============================================================================\n\n/**\n * Build a graphology graph containing only symbol nodes and CALLS edges\n * This is what the Leiden algorithm will cluster\n */\nconst buildGraphologyGraph = (knowledgeGraph: KnowledgeGraph): Graph => {\n  // Use undirected graph for Leiden - it looks at edge density, not direction\n  const graph = new Graph({ type: 'undirected', allowSelfLoops: false });\n\n  // Symbol types that should be clustered\n  const symbolTypes = new Set<NodeLabel>(['Function', 'Class', 'Method', 'Interface']);\n  \n  // Add symbol nodes\n  knowledgeGraph.nodes.forEach(node => {\n    if (symbolTypes.has(node.label)) {\n      graph.addNode(node.id, {\n        name: node.properties.name,\n        filePath: node.properties.filePath,\n        type: node.label,\n      });\n    }\n  });\n\n  // Add CALLS edges (primary clustering signal)\n  // We can also include EXTENDS/IMPLEMENTS for OOP clustering\n  const clusteringRelTypes = new Set(['CALLS', 'EXTENDS', 'IMPLEMENTS']);\n  \n  knowledgeGraph.relationships.forEach(rel => {\n    if (clusteringRelTypes.has(rel.type)) {\n      // Only add edge if both nodes exist in our symbol graph\n      // Also skip self-loops (recursive calls) - not allowed in undirected graph\n      if (graph.hasNode(rel.sourceId) && graph.hasNode(rel.targetId) && rel.sourceId !== rel.targetId) {\n        // Avoid duplicate edges\n        if (!graph.hasEdge(rel.sourceId, rel.targetId)) {\n          graph.addEdge(rel.sourceId, rel.targetId);\n        }\n      }\n    }\n  });\n\n  return graph;\n};\n\n// ============================================================================\n// HELPER: Create community nodes with 
heuristic labels\n// ============================================================================\n\n/**\n * Create Community nodes with auto-generated labels based on member file paths\n */\nconst createCommunityNodes = (\n  communities: Record<string, number>,\n  communityCount: number,\n  graph: Graph,\n  knowledgeGraph: KnowledgeGraph\n): CommunityNode[] => {\n  // Group node IDs by community\n  const communityMembers = new Map<number, string[]>();\n  \n  Object.entries(communities).forEach(([nodeId, commNum]) => {\n    if (!communityMembers.has(commNum)) {\n      communityMembers.set(commNum, []);\n    }\n    communityMembers.get(commNum)!.push(nodeId);\n  });\n\n  // Build node lookup for file paths\n  const nodePathMap = new Map<string, string>();\n  knowledgeGraph.nodes.forEach(node => {\n    if (node.properties.filePath) {\n      nodePathMap.set(node.id, node.properties.filePath);\n    }\n  });\n\n  // Create community nodes - SKIP SINGLETONS (isolated nodes)\n  const communityNodes: CommunityNode[] = [];\n  \n  communityMembers.forEach((memberIds, commNum) => {\n    // Skip singleton communities - they're just isolated nodes\n    if (memberIds.length < 2) return;\n    \n    const heuristicLabel = generateHeuristicLabel(memberIds, nodePathMap, graph, commNum);\n    \n    communityNodes.push({\n      id: `comm_${commNum}`,\n      label: heuristicLabel,\n      heuristicLabel,\n      cohesion: calculateCohesion(memberIds, graph),\n      symbolCount: memberIds.length,\n    });\n  });\n\n  // Sort by size descending\n  communityNodes.sort((a, b) => b.symbolCount - a.symbolCount);\n\n  return communityNodes;\n};\n\n// ============================================================================\n// HELPER: Generate heuristic label from folder patterns\n// ============================================================================\n\n/**\n * Generate a human-readable label from the most common folder name in the community\n */\nconst generateHeuristicLabel = (\n  
memberIds: string[],\n  nodePathMap: Map<string, string>,\n  graph: Graph,\n  commNum: number\n): string => {\n  // Collect folder names from file paths\n  const folderCounts = new Map<string, number>();\n  \n  memberIds.forEach(nodeId => {\n    const filePath = nodePathMap.get(nodeId) || '';\n    const parts = filePath.split('/').filter(Boolean);\n    \n    // Get the most specific folder (parent directory)\n    if (parts.length >= 2) {\n      const folder = parts[parts.length - 2];\n      // Skip generic folder names\n      if (!['src', 'lib', 'core', 'utils', 'common', 'shared', 'helpers'].includes(folder.toLowerCase())) {\n        folderCounts.set(folder, (folderCounts.get(folder) || 0) + 1);\n      }\n    }\n  });\n\n  // Find most common folder\n  let maxCount = 0;\n  let bestFolder = '';\n  \n  folderCounts.forEach((count, folder) => {\n    if (count > maxCount) {\n      maxCount = count;\n      bestFolder = folder;\n    }\n  });\n\n  if (bestFolder) {\n    // Capitalize first letter\n    return bestFolder.charAt(0).toUpperCase() + bestFolder.slice(1);\n  }\n\n  // Fallback: use function names to detect patterns\n  const names: string[] = [];\n  memberIds.forEach(nodeId => {\n    const name = graph.getNodeAttribute(nodeId, 'name');\n    if (name) names.push(name);\n  });\n\n  // Look for common prefixes\n  if (names.length > 2) {\n    const commonPrefix = findCommonPrefix(names);\n    if (commonPrefix.length > 2) {\n      return commonPrefix.charAt(0).toUpperCase() + commonPrefix.slice(1);\n    }\n  }\n\n  // Last resort: generic name with community ID for uniqueness\n  return `Cluster_${commNum}`;\n};\n\n/**\n * Find common prefix among strings\n */\nconst findCommonPrefix = (strings: string[]): string => {\n  if (strings.length === 0) return '';\n  \n  const sorted = strings.slice().sort();\n  const first = sorted[0];\n  const last = sorted[sorted.length - 1];\n  \n  let i = 0;\n  while (i < first.length && first[i] === last[i]) {\n    i++;\n  }\n  \n  
return first.substring(0, i);\n};\n\n// ============================================================================\n// HELPER: Calculate community cohesion\n// ============================================================================\n\n/**\n * Calculate cohesion score (0-1) based on internal edge density\n * Higher cohesion = more internal connections relative to size\n */\nconst calculateCohesion = (memberIds: string[], graph: Graph): number => {\n  if (memberIds.length <= 1) return 1.0;\n\n  const memberSet = new Set(memberIds);\n  let internalEdges = 0;\n  let totalEdges = 0;\n\n  // Count internal vs total edges for community members\n  memberIds.forEach(nodeId => {\n    if (graph.hasNode(nodeId)) {\n      graph.forEachNeighbor(nodeId, neighbor => {\n        totalEdges++;\n        if (memberSet.has(neighbor)) {\n          internalEdges++;\n        }\n      });\n    }\n  });\n\n  if (totalEdges === 0) return 1.0;\n  return Math.min(1.0, internalEdges / totalEdges);\n};\n"
  },
  {
    "path": "gitnexus-web/src/core/ingestion/entry-point-scoring.ts",
    "content": "/**\n * Entry Point Scoring\n * \n * Calculates entry point scores for process detection based on:\n * 1. Call ratio (existing algorithm - callees / (callers + 1))\n * 2. Export status (exported functions get higher priority)\n * 3. Name patterns (functions matching entry point patterns like handle*, on*, *Controller)\n * 4. Framework detection (path-based detection for Next.js, Express, Django, etc.)\n * \n * This module is language-agnostic - language-specific patterns are defined per language.\n */\n\nimport { detectFrameworkFromPath } from './framework-detection';\n\n// ============================================================================\n// NAME PATTERNS - All 11 supported languages\n// ============================================================================\n\n/**\n * Common entry point naming patterns by language\n * These patterns indicate functions that are likely feature entry points\n */\nconst ENTRY_POINT_PATTERNS: Record<string, RegExp[]> = {\n  // Universal patterns (apply to all languages)\n  '*': [\n    /^(main|init|bootstrap|start|run|setup|configure)$/i,\n    /^handle[A-Z]/,           // handleLogin, handleSubmit\n    /^on[A-Z]/,               // onClick, onSubmit\n    /Handler$/,               // RequestHandler\n    /Controller$/,            // UserController\n    /^process[A-Z]/,          // processPayment\n    /^execute[A-Z]/,          // executeQuery\n    /^perform[A-Z]/,          // performAction\n    /^dispatch[A-Z]/,         // dispatchEvent\n    /^trigger[A-Z]/,          // triggerAction\n    /^fire[A-Z]/,             // fireEvent\n    /^emit[A-Z]/,             // emitEvent\n  ],\n  \n  // JavaScript/TypeScript\n  'javascript': [\n    /^use[A-Z]/,              // React hooks (useEffect, etc.)\n  ],\n  'typescript': [\n    /^use[A-Z]/,              // React hooks\n  ],\n  \n  // Python\n  'python': [\n    /^app$/,                  // Flask/FastAPI app\n    /^(get|post|put|delete|patch)_/i,  // REST conventions\n  
  /^api_/,                  // API functions\n    /^view_/,                 // Django views\n  ],\n  \n  // Java\n  'java': [\n    /^do[A-Z]/,               // doGet, doPost (Servlets)\n    /^create[A-Z]/,           // Factory patterns\n    /^build[A-Z]/,            // Builder patterns\n    /Service$/,               // UserService\n  ],\n  \n  // C#\n  'csharp': [\n    /^(Get|Post|Put|Delete)/,  // ASP.NET conventions\n    /Action$/,                 // MVC actions\n    /^On[A-Z]/,               // Event handlers\n    /Async$/,                 // Async entry points\n  ],\n  \n  // Go\n  'go': [\n    /Handler$/,               // http.Handler pattern\n    /^Serve/,                 // ServeHTTP\n    /^New[A-Z]/,              // Constructor pattern (returns new instance)\n    /^Make[A-Z]/,             // Make functions\n  ],\n  \n  // Rust\n  'rust': [\n    /^(get|post|put|delete)_handler$/i,\n    /^handle_/,               // handle_request\n    /^new$/,                  // Constructor pattern\n    /^run$/,                  // run entry point\n    /^spawn/,                 // Async spawn\n  ],\n  \n  // C - explicit main() boost (critical for C programs)\n  'c': [\n    /^main$/,                 // THE entry point\n    /^init_/,                 // Initialization functions\n    /^start_/,                // Start functions\n    /^run_/,                  // Run functions\n  ],\n  \n  // C++ - same as C plus class patterns\n  'cpp': [\n    /^main$/,                 // THE entry point\n    /^init_/,\n    /^Create[A-Z]/,           // Factory patterns\n    /^Run$/,                  // Run methods\n    /^Start$/,                // Start methods\n  ],\n\n  // Swift / iOS\n  'swift': [\n    /^viewDidLoad$/,                  // UIKit lifecycle\n    /^viewWillAppear$/,               // UIKit lifecycle\n    /^viewDidAppear$/,                // UIKit lifecycle\n    /^viewWillDisappear$/,            // UIKit lifecycle\n    /^viewDidDisappear$/,             // UIKit lifecycle\n    
/^application\\(/,                 // AppDelegate methods\n    /^scene\\(/,                       // SceneDelegate methods\n    /^body$/,                         // SwiftUI View.body\n    /Coordinator$/,                   // Coordinator pattern\n    /^sceneDidBecomeActive$/,         // SceneDelegate lifecycle\n    /^sceneWillResignActive$/,        // SceneDelegate lifecycle\n    /^didFinishLaunchingWithOptions$/, // AppDelegate\n    /ViewController$/,                // ViewController classes\n    /^configure[A-Z]/,               // Configuration methods\n    /^setup[A-Z]/,                    // Setup methods\n    /^makeBody$/,                     // SwiftUI ViewModifier\n  ],\n\n  // PHP / Laravel\n  'php': [\n    /Controller$/,            // UserController (class name convention)\n    /^handle$/,               // Job::handle(), Listener::handle()\n    /^execute$/,              // Command::execute()\n    /^boot$/,                 // ServiceProvider::boot()\n    /^register$/,             // ServiceProvider::register()\n    /^__invoke$/,             // Invokable controllers/actions\n    /^(index|show|store|update|destroy|create|edit)$/,  // RESTful resource methods\n    /^(get|post|put|delete|patch)[A-Z]/,  // Explicit HTTP method actions\n    /^run$/,                  // Command/Job run()\n    /^fire$/,                 // Event fire()\n    /^dispatch$/,             // Dispatchable jobs\n    /Service$/,               // UserService (Service layer)\n    /Repository$/,            // UserRepository (Repository pattern)\n    /^find$/,                 // Repository::find()\n    /^findAll$/,              // Repository::findAll()\n    /^save$/,                 // Repository::save()\n    /^delete$/,               // Repository::delete()\n  ],\n\n  // Ruby\n  'ruby': [\n    /^call$/,                 // Service objects (MyService.call)\n    /^perform$/,              // Background jobs (Sidekiq, ActiveJob)\n    /^execute$/,              // Command pattern\n  ],\n};\n\n// 
============================================================================\n// UTILITY PATTERNS - Functions that should be penalized\n// ============================================================================\n\n/**\n * Patterns that indicate utility/helper functions (NOT entry points)\n * These get penalized in scoring\n */\nconst UTILITY_PATTERNS: RegExp[] = [\n  /^(get|set|is|has|can|should|will|did)[A-Z]/,  // Accessors/predicates\n  /^_/,                                            // Private by convention\n  /^(format|parse|validate|convert|transform)/i,  // Transformation utilities\n  /^(log|debug|error|warn|info)$/i,               // Logging\n  /^(to|from)[A-Z]/,                              // Conversions\n  /^(encode|decode)/i,                            // Encoding utilities\n  /^(serialize|deserialize)/i,                    // Serialization\n  /^(clone|copy|deep)/i,                          // Cloning utilities\n  /^(merge|extend|assign)/i,                      // Object utilities\n  /^(filter|map|reduce|sort|find)/i,             // Collection utilities (standalone)\n  /Helper$/,\n  /Util$/,\n  /Utils$/,\n  /^utils?$/i,\n  /^helpers?$/i,\n];\n\n// ============================================================================\n// TYPES\n// ============================================================================\n\nexport interface EntryPointScoreResult {\n  score: number;\n  reasons: string[];\n}\n\n// ============================================================================\n// MAIN SCORING FUNCTION\n// ============================================================================\n\n/**\n * Calculate an entry point score for a function/method\n * \n * Higher scores indicate better entry point candidates.\n * Score = baseScore × exportMultiplier × nameMultiplier × frameworkMultiplier\n * \n * @param name - Function/method name\n * @param language - Programming language\n * @param isExported - Whether the function is exported/public\n * @param callerCount - Number of 
functions that call this function\n * @param calleeCount - Number of functions this function calls\n * @param filePath - Optional file path used for framework detection (empty = no framework bonus)\n * @returns Score and array of reasons explaining the score\n */\nexport function calculateEntryPointScore(\n  name: string,\n  language: string,\n  isExported: boolean,\n  callerCount: number,\n  calleeCount: number,\n  filePath: string = ''  // Optional for backwards compatibility\n): EntryPointScoreResult {\n  const reasons: string[] = [];\n  \n  // Must have outgoing calls to be an entry point (we need to trace forward)\n  if (calleeCount === 0) {\n    return { score: 0, reasons: ['no-outgoing-calls'] };\n  }\n  \n  // Base score: call ratio (existing algorithm)\n  // High ratio = calls many, called by few = likely entry point\n  const baseScore = calleeCount / (callerCount + 1);\n  reasons.push(`base:${baseScore.toFixed(2)}`);\n  \n  // Export bonus: exported/public functions are more likely entry points\n  const exportMultiplier = isExported ? 2.0 : 1.0;\n  if (isExported) {\n    reasons.push('exported');\n  }\n  \n  // Name pattern scoring\n  let nameMultiplier = 1.0;\n  \n  // Check negative patterns first (utilities get penalized)\n  if (UTILITY_PATTERNS.some(p => p.test(name))) {\n    nameMultiplier = 0.3;  // Significant penalty\n    reasons.push('utility-pattern');\n  } else {\n    // Check positive patterns\n    const universalPatterns = ENTRY_POINT_PATTERNS['*'] || [];\n    const langPatterns = ENTRY_POINT_PATTERNS[language] || [];\n    const allPatterns = [...universalPatterns, ...langPatterns];\n    \n    if (allPatterns.some(p => p.test(name))) {\n      nameMultiplier = 1.5;  // Bonus for matching entry point pattern\n      reasons.push('entry-pattern');\n    }\n  }\n  \n  // Framework detection bonus (Phase 2)\n  let frameworkMultiplier = 1.0;\n  if (filePath) {\n    const frameworkHint = detectFrameworkFromPath(filePath);\n    if (frameworkHint) {\n      frameworkMultiplier = frameworkHint.entryPointMultiplier;\n      
reasons.push(`framework:${frameworkHint.reason}`);\n    }\n  }\n  \n  // Calculate final score\n  const finalScore = baseScore * exportMultiplier * nameMultiplier * frameworkMultiplier;\n  \n  return {\n    score: finalScore,\n    reasons,\n  };\n}\n\n// ============================================================================\n// HELPER FUNCTIONS\n// ============================================================================\n\n/**\n * Check if a file path is a test file (should be excluded from entry points)\n * Covers common test file patterns across all supported languages\n */\nexport function isTestFile(filePath: string): boolean {\n  const p = filePath.toLowerCase().replace(/\\\\/g, '/');\n  \n  return (\n    // JavaScript/TypeScript test patterns\n    p.includes('.test.') || \n    p.includes('.spec.') || \n    p.includes('__tests__/') || \n    p.includes('__mocks__/') ||\n    // Generic test folders\n    p.includes('/test/') ||\n    p.includes('/tests/') ||\n    p.includes('/testing/') ||\n    // Python test patterns\n    p.endsWith('_test.py') ||\n    p.includes('/test_') ||\n    // Go test patterns\n    p.endsWith('_test.go') ||\n    // Java test patterns\n    p.includes('/src/test/') ||\n    // Rust test patterns (inline tests are different, but test files)\n    p.includes('/tests/') ||\n    // Swift/iOS test patterns\n    p.endsWith('tests.swift') ||\n    p.endsWith('test.swift') ||\n    p.includes('uitests/') ||\n    // C# test patterns\n    p.includes('.tests/') ||\n    p.includes('tests.cs') ||\n    // PHP/Laravel test patterns\n    p.endsWith('test.php') ||\n    p.endsWith('spec.php') ||\n    p.includes('/tests/feature/') ||\n    p.includes('/tests/unit/') ||\n    // Ruby test patterns\n    p.endsWith('_spec.rb') ||\n    p.endsWith('_test.rb') ||\n    p.includes('/spec/') ||\n    p.includes('/test/fixtures/')\n  );\n}\n\n/**\n * Check if a file path is likely a utility/helper file\n * These might still have entry points but should be lower 
priority\n */\nexport function isUtilityFile(filePath: string): boolean {\n  const p = filePath.toLowerCase().replace(/\\\\/g, '/');\n  \n  return (\n    p.includes('/utils/') ||\n    p.includes('/util/') ||\n    p.includes('/helpers/') ||\n    p.includes('/helper/') ||\n    p.includes('/common/') ||\n    p.includes('/shared/') ||\n    p.includes('/lib/') ||\n    p.endsWith('/utils.ts') ||\n    p.endsWith('/utils.js') ||\n    p.endsWith('/helpers.ts') ||\n    p.endsWith('/helpers.js') ||\n    p.endsWith('_utils.py') ||\n    p.endsWith('_helpers.py')\n  );\n}\n"
  },
  {
    "path": "gitnexus-web/src/core/ingestion/framework-detection.ts",
    "content": "/**\n * Framework Detection\n * \n * Detects frameworks from file path patterns and provides entry point multipliers.\n * This enables framework-aware entry point scoring.\n * \n * DESIGN: Returns null for unknown frameworks, which causes a 1.0 multiplier\n * (no bonus, no penalty) - same behavior as before this feature.\n */\n\n// ============================================================================\n// TYPES\n// ============================================================================\n\nexport interface FrameworkHint {\n  framework: string;\n  entryPointMultiplier: number;\n  reason: string;\n}\n\n// ============================================================================\n// PATH-BASED FRAMEWORK DETECTION\n// ============================================================================\n\n/**\n * Detect framework from file path patterns\n * \n * This provides entry point multipliers based on well-known framework conventions.\n * Returns null if no framework pattern is detected (falls back to 1.0 multiplier).\n */\nexport function detectFrameworkFromPath(filePath: string): FrameworkHint | null {\n  // Normalize path separators and ensure leading slash for consistent matching\n  let p = filePath.toLowerCase().replace(/\\\\/g, '/');\n  if (!p.startsWith('/')) {\n    p = '/' + p;  // Add leading slash so patterns like '/app/' match 'app/...'\n  }\n  \n  // ========== JAVASCRIPT / TYPESCRIPT FRAMEWORKS ==========\n  \n  // Next.js - Pages Router (high confidence)\n  if (p.includes('/pages/') && !p.includes('/_') && !p.includes('/api/')) {\n    if (p.endsWith('.tsx') || p.endsWith('.ts') || p.endsWith('.jsx') || p.endsWith('.js')) {\n      return { framework: 'nextjs-pages', entryPointMultiplier: 3.0, reason: 'nextjs-page' };\n    }\n  }\n  \n  // Next.js - App Router (page.tsx files)\n  if (p.includes('/app/') && (\n    p.endsWith('page.tsx') || p.endsWith('page.ts') || \n    p.endsWith('page.jsx') || p.endsWith('page.js')\n  )) {\n    
return { framework: 'nextjs-app', entryPointMultiplier: 3.0, reason: 'nextjs-app-page' };\n  }\n  \n  // Next.js - API Routes\n  if (p.includes('/pages/api/') || (p.includes('/app/') && p.includes('/api/') && p.endsWith('route.ts'))) {\n    return { framework: 'nextjs-api', entryPointMultiplier: 3.0, reason: 'nextjs-api-route' };\n  }\n  \n  // Next.js - Layout files (moderate - they're entry-ish but not the main entry)\n  if (p.includes('/app/') && (p.endsWith('layout.tsx') || p.endsWith('layout.ts'))) {\n    return { framework: 'nextjs-app', entryPointMultiplier: 2.0, reason: 'nextjs-layout' };\n  }\n  \n  // Express / Node.js routes\n  if (p.includes('/routes/') && (p.endsWith('.ts') || p.endsWith('.js'))) {\n    return { framework: 'express', entryPointMultiplier: 2.5, reason: 'routes-folder' };\n  }\n  \n  // Generic controllers (MVC pattern)\n  if (p.includes('/controllers/') && (p.endsWith('.ts') || p.endsWith('.js'))) {\n    return { framework: 'mvc', entryPointMultiplier: 2.5, reason: 'controllers-folder' };\n  }\n  \n  // Generic handlers\n  if (p.includes('/handlers/') && (p.endsWith('.ts') || p.endsWith('.js'))) {\n    return { framework: 'handlers', entryPointMultiplier: 2.5, reason: 'handlers-folder' };\n  }\n  \n  // React components (lower priority - not all are entry points)\n  if ((p.includes('/components/') || p.includes('/views/')) && \n      (p.endsWith('.tsx') || p.endsWith('.jsx'))) {\n    // Only boost if PascalCase filename (likely a component, not util)\n    const fileName = p.split('/').pop() || '';\n    if (/^[A-Z]/.test(fileName)) {\n      return { framework: 'react', entryPointMultiplier: 1.5, reason: 'react-component' };\n    }\n  }\n  \n  // ========== PYTHON FRAMEWORKS ==========\n  \n  // Django views (high confidence)\n  if (p.endsWith('views.py')) {\n    return { framework: 'django', entryPointMultiplier: 3.0, reason: 'django-views' };\n  }\n  \n  // Django URL configs\n  if (p.endsWith('urls.py')) {\n    return { framework: 
'django', entryPointMultiplier: 2.0, reason: 'django-urls' };\n  }\n  \n  // FastAPI / Flask routers\n  if ((p.includes('/routers/') || p.includes('/endpoints/') || p.includes('/routes/')) && \n      p.endsWith('.py')) {\n    return { framework: 'fastapi', entryPointMultiplier: 2.5, reason: 'api-routers' };\n  }\n  \n  // Python API folder\n  if (p.includes('/api/') && p.endsWith('.py') && !p.endsWith('__init__.py')) {\n    return { framework: 'python-api', entryPointMultiplier: 2.0, reason: 'api-folder' };\n  }\n  \n  // ========== JAVA FRAMEWORKS ==========\n  \n  // Spring Boot controllers\n  if ((p.includes('/controller/') || p.includes('/controllers/')) && p.endsWith('.java')) {\n    return { framework: 'spring', entryPointMultiplier: 3.0, reason: 'spring-controller' };\n  }\n  \n  // Spring Boot - files ending in Controller.java\n  if (p.endsWith('controller.java')) {\n    return { framework: 'spring', entryPointMultiplier: 3.0, reason: 'spring-controller-file' };\n  }\n  \n  // Java service layer (often entry points for business logic)\n  if ((p.includes('/service/') || p.includes('/services/')) && p.endsWith('.java')) {\n    return { framework: 'java-service', entryPointMultiplier: 1.8, reason: 'java-service' };\n  }\n  \n  // ========== C# / .NET FRAMEWORKS ==========\n  \n  // ASP.NET Controllers\n  if (p.includes('/controllers/') && p.endsWith('.cs')) {\n    return { framework: 'aspnet', entryPointMultiplier: 3.0, reason: 'aspnet-controller' };\n  }\n  \n  // ASP.NET - files ending in Controller.cs\n  if (p.endsWith('controller.cs')) {\n    return { framework: 'aspnet', entryPointMultiplier: 3.0, reason: 'aspnet-controller-file' };\n  }\n  \n  // Blazor pages\n  if (p.includes('/pages/') && p.endsWith('.razor')) {\n    return { framework: 'blazor', entryPointMultiplier: 2.5, reason: 'blazor-page' };\n  }\n  \n  // ========== GO FRAMEWORKS ==========\n  \n  // Go handlers\n  if ((p.includes('/handlers/') || p.includes('/handler/')) && p.endsWith('.go')) 
{\n    return { framework: 'go-http', entryPointMultiplier: 2.5, reason: 'go-handlers' };\n  }\n  \n  // Go routes\n  if (p.includes('/routes/') && p.endsWith('.go')) {\n    return { framework: 'go-http', entryPointMultiplier: 2.5, reason: 'go-routes' };\n  }\n  \n  // Go controllers\n  if (p.includes('/controllers/') && p.endsWith('.go')) {\n    return { framework: 'go-mvc', entryPointMultiplier: 2.5, reason: 'go-controller' };\n  }\n  \n  // Go main.go files and cmd/ binaries (THE entry points)\n  if (p.endsWith('/main.go') || (p.includes('/cmd/') && p.endsWith('.go'))) {\n    return { framework: 'go', entryPointMultiplier: 3.0, reason: 'go-main' };\n  }\n  \n  // ========== RUST FRAMEWORKS ==========\n  \n  // Rust handlers/routes\n  if ((p.includes('/handlers/') || p.includes('/routes/')) && p.endsWith('.rs')) {\n    return { framework: 'rust-web', entryPointMultiplier: 2.5, reason: 'rust-handlers' };\n  }\n  \n  // Rust main.rs (THE entry point)\n  if (p.endsWith('/main.rs')) {\n    return { framework: 'rust', entryPointMultiplier: 3.0, reason: 'rust-main' };\n  }\n  \n  // Rust bin folder (executables)\n  if (p.includes('/bin/') && p.endsWith('.rs')) {\n    return { framework: 'rust', entryPointMultiplier: 2.5, reason: 'rust-bin' };\n  }\n  \n  // ========== C / C++ ==========\n  \n  // C/C++ main files\n  if (p.endsWith('/main.c') || p.endsWith('/main.cpp') || p.endsWith('/main.cc')) {\n    return { framework: 'c-cpp', entryPointMultiplier: 3.0, reason: 'c-main' };\n  }\n  \n  // C/C++ src folder entry points (if named specifically)\n  if ((p.includes('/src/') && (p.endsWith('/app.c') || p.endsWith('/app.cpp')))) {\n    return { framework: 'c-cpp', entryPointMultiplier: 2.5, reason: 'c-app' };\n  }\n  \n  // ========== PHP / LARAVEL FRAMEWORKS ==========\n\n  // Laravel routes (highest - these ARE the entry point definitions)\n  if (p.includes('/routes/') && p.endsWith('.php')) {\n    return { framework: 'laravel', entryPointMultiplier: 3.0, reason: 'laravel-routes' };\n  }\n\n  
// Laravel controllers (very high - receive HTTP requests)\n  if ((p.includes('/http/controllers/') || p.includes('/controllers/')) && p.endsWith('.php')) {\n    return { framework: 'laravel', entryPointMultiplier: 3.0, reason: 'laravel-controller' };\n  }\n\n  // Laravel controller by file name convention\n  if (p.endsWith('controller.php')) {\n    return { framework: 'laravel', entryPointMultiplier: 3.0, reason: 'laravel-controller-file' };\n  }\n\n  // Laravel console commands\n  if ((p.includes('/console/commands/') || p.includes('/commands/')) && p.endsWith('.php')) {\n    return { framework: 'laravel', entryPointMultiplier: 2.5, reason: 'laravel-command' };\n  }\n\n  // Laravel jobs (queue entry points)\n  if (p.includes('/jobs/') && p.endsWith('.php')) {\n    return { framework: 'laravel', entryPointMultiplier: 2.5, reason: 'laravel-job' };\n  }\n\n  // Laravel listeners (event-driven entry points)\n  if (p.includes('/listeners/') && p.endsWith('.php')) {\n    return { framework: 'laravel', entryPointMultiplier: 2.5, reason: 'laravel-listener' };\n  }\n\n  // Laravel middleware\n  if (p.includes('/http/middleware/') && p.endsWith('.php')) {\n    return { framework: 'laravel', entryPointMultiplier: 2.5, reason: 'laravel-middleware' };\n  }\n\n  // Laravel service providers\n  if (p.includes('/providers/') && p.endsWith('.php')) {\n    return { framework: 'laravel', entryPointMultiplier: 1.8, reason: 'laravel-provider' };\n  }\n\n  // Laravel policies\n  if (p.includes('/policies/') && p.endsWith('.php')) {\n    return { framework: 'laravel', entryPointMultiplier: 2.0, reason: 'laravel-policy' };\n  }\n\n  // Laravel models (important but not entry points per se)\n  if (p.includes('/models/') && p.endsWith('.php')) {\n    return { framework: 'laravel', entryPointMultiplier: 1.5, reason: 'laravel-model' };\n  }\n\n  // Laravel services (Service Repository pattern)\n  if (p.includes('/services/') && p.endsWith('.php')) {\n    return { framework: 'laravel', 
entryPointMultiplier: 1.8, reason: 'laravel-service' };\n  }\n\n  // Laravel repositories (Service Repository pattern)\n  if (p.includes('/repositories/') && p.endsWith('.php')) {\n    return { framework: 'laravel', entryPointMultiplier: 1.5, reason: 'laravel-repository' };\n  }\n\n  // ========== RUBY ==========\n\n  // Ruby: bin/ or exe/ (CLI entry points)\n  if ((p.includes('/bin/') || p.includes('/exe/')) && p.endsWith('.rb')) {\n    return { framework: 'ruby', entryPointMultiplier: 2.5, reason: 'ruby-executable' };\n  }\n\n  // Ruby: Rakefile or *.rake (task definitions)\n  if (p.endsWith('/rakefile') || p.endsWith('.rake')) {\n    return { framework: 'ruby', entryPointMultiplier: 1.5, reason: 'ruby-rake' };\n  }\n  // ========== SWIFT / iOS ==========\n\n  // iOS App entry points (highest priority)\n  if (p.endsWith('/appdelegate.swift') || p.endsWith('/scenedelegate.swift') || p.endsWith('/app.swift')) {\n    return { framework: 'ios', entryPointMultiplier: 3.0, reason: 'ios-app-entry' };\n  }\n\n  // SwiftUI App entry (@main)\n  if (p.endsWith('app.swift') && p.includes('/sources/')) {\n    return { framework: 'swiftui', entryPointMultiplier: 3.0, reason: 'swiftui-app' };\n  }\n\n  // UIKit ViewControllers (high priority - screen entry points)\n  if ((p.includes('/viewcontrollers/') || p.includes('/controllers/') || p.includes('/screens/')) && p.endsWith('.swift')) {\n    return { framework: 'uikit', entryPointMultiplier: 2.5, reason: 'uikit-viewcontroller' };\n  }\n\n  // ViewController by filename convention\n  if (p.endsWith('viewcontroller.swift') || p.endsWith('vc.swift')) {\n    return { framework: 'uikit', entryPointMultiplier: 2.5, reason: 'uikit-viewcontroller-file' };\n  }\n\n  // Coordinator pattern (navigation entry points)\n  if (p.includes('/coordinators/') && p.endsWith('.swift')) {\n    return { framework: 'ios-coordinator', entryPointMultiplier: 2.5, reason: 'ios-coordinator' };\n  }\n\n  // Coordinator by filename\n  if 
(p.endsWith('coordinator.swift')) {\n    return { framework: 'ios-coordinator', entryPointMultiplier: 2.5, reason: 'ios-coordinator-file' };\n  }\n\n  // SwiftUI Views (moderate - reusable components)\n  if ((p.includes('/views/') || p.includes('/scenes/')) && p.endsWith('.swift')) {\n    return { framework: 'swiftui', entryPointMultiplier: 1.8, reason: 'swiftui-view' };\n  }\n\n  // Service layer\n  if (p.includes('/services/') && p.endsWith('.swift')) {\n    return { framework: 'ios-service', entryPointMultiplier: 1.8, reason: 'ios-service' };\n  }\n\n  // Router / navigation\n  if (p.includes('/router/') && p.endsWith('.swift')) {\n    return { framework: 'ios-router', entryPointMultiplier: 2.0, reason: 'ios-router' };\n  }\n\n  // ========== GENERIC PATTERNS ==========\n\n  // Any language: index files in API folders\n  if (p.includes('/api/') && (\n    p.endsWith('/index.ts') || p.endsWith('/index.js') ||\n    p.endsWith('/__init__.py')\n  )) {\n    return { framework: 'api', entryPointMultiplier: 1.8, reason: 'api-index' };\n  }\n\n  // No framework detected - return null for graceful fallback (1.0 multiplier)\n  return null;\n}\n\n// ============================================================================\n// PARTIALLY IMPLEMENTED: Route::* detection via procedural AST walk in parse-worker/call-processor\n// Remaining: NestJS, Express, FastAPI, Flask, Spring, etc.\n// ============================================================================\n\n/**\n * Patterns that indicate entry points within code (for future AST-based detection)\n * These would require parsing decorators/annotations in the code itself.\n */\nexport const FRAMEWORK_AST_PATTERNS = {\n  // JavaScript/TypeScript decorators\n  'nestjs': ['@Controller', '@Get', '@Post', '@Put', '@Delete', '@Patch'],\n  'express': ['app.get', 'app.post', 'app.put', 'app.delete', 'router.get', 'router.post'],\n  \n  // Python decorators\n  'fastapi': ['@app.get', '@app.post', '@app.put', '@app.delete', 
'@router.get'],\n  'flask': ['@app.route', '@blueprint.route'],\n  \n  // Java annotations\n  'spring': ['@RestController', '@Controller', '@GetMapping', '@PostMapping', '@RequestMapping'],\n  'jaxrs': ['@Path', '@GET', '@POST', '@PUT', '@DELETE'],\n  \n  // C# attributes\n  'aspnet': ['[ApiController]', '[HttpGet]', '[HttpPost]', '[Route]'],\n  \n  // Go patterns (function signatures)\n  'go-http': ['http.Handler', 'http.HandlerFunc', 'ServeHTTP'],\n\n  // PHP/Laravel\n  'laravel': ['Route::get', 'Route::post', 'Route::put', 'Route::delete',\n              'Route::resource', 'Route::apiResource', '#[Route('],\n\n  // Rust macros\n  'actix': ['#[get', '#[post', '#[put', '#[delete'],\n  'axum': ['Router::new'],\n  'rocket': ['#[get', '#[post'],\n\n  // Swift/iOS\n  'uikit': ['viewDidLoad', 'viewWillAppear', 'viewDidAppear', 'UIViewController'],\n  'swiftui': ['@main', 'WindowGroup', 'ContentView', '@StateObject', '@ObservedObject'],\n  'combine': ['sink', 'assign', 'Publisher', 'Subscriber'],\n};\n"
  },
  {
    "path": "gitnexus-web/src/core/ingestion/heritage-processor.ts",
"content": "/**\n * Heritage Processor\n *\n * Extracts inheritance relationships:\n * - EXTENDS: Class extends another Class (TS, JS, Python)\n * - IMPLEMENTS: Class implements an Interface (TS), or impl Trait for Struct (Rust)\n */\n\nimport { KnowledgeGraph } from '../graph/types';\nimport { ASTCache } from './ast-cache';\nimport { SymbolTable } from './symbol-table';\nimport { loadParser, loadLanguage } from '../tree-sitter/parser-loader';\nimport { LANGUAGE_QUERIES } from './tree-sitter-queries';\nimport { generateId } from '../../lib/utils';\nimport { getLanguageFromFilename } from './utils';\n\nexport const processHeritage = async (\n  graph: KnowledgeGraph,\n  files: { path: string; content: string }[],\n  astCache: ASTCache,\n  symbolTable: SymbolTable,\n  onProgress?: (current: number, total: number) => void\n) => {\n  const parser = await loadParser();\n\n  for (let i = 0; i < files.length; i++) {\n    const file = files[i];\n    onProgress?.(i + 1, files.length);\n\n    // 1. Check language support\n    const language = getLanguageFromFilename(file.path);\n    if (!language) continue;\n\n    const queryStr = LANGUAGE_QUERIES[language];\n    if (!queryStr) continue;\n\n    // 2. Load the language\n    await loadLanguage(language, file.path);\n\n    // 3. Get AST\n    let tree = astCache.get(file.path);\n    let wasReparsed = false;\n\n    if (!tree) {\n      tree = parser.parse(file.content);\n      wasReparsed = true;\n    }\n\n    let query;\n    let matches;\n    try {\n      query = parser.getLanguage().query(queryStr);\n      matches = query.matches(tree.rootNode);\n    } catch (queryError) {\n      console.warn(`Heritage query error for ${file.path}:`, queryError);\n      if (wasReparsed) tree.delete();\n      continue;\n    }\n\n    // 4. 
Process heritage matches\n    matches.forEach(match => {\n      const captureMap: Record<string, any> = {};\n      match.captures.forEach(c => {\n        captureMap[c.name] = c.node;\n      });\n\n      // EXTENDS: Class extends another Class\n      if (captureMap['heritage.class'] && captureMap['heritage.extends']) {\n        const className = captureMap['heritage.class'].text;\n        const parentClassName = captureMap['heritage.extends'].text;\n\n        // Resolve both class IDs\n        const childId = symbolTable.lookupExact(file.path, className) ||\n                        symbolTable.lookupFuzzy(className)[0]?.nodeId ||\n                        generateId('Class', `${file.path}:${className}`);\n        \n        const parentId = symbolTable.lookupFuzzy(parentClassName)[0]?.nodeId ||\n                         generateId('Class', `${parentClassName}`);\n\n        if (childId && parentId && childId !== parentId) {\n          const relId = generateId('EXTENDS', `${childId}->${parentId}`);\n          \n          graph.addRelationship({\n            id: relId,\n            sourceId: childId,\n            targetId: parentId,\n            type: 'EXTENDS',\n            confidence: 1.0,\n            reason: '',\n          });\n        }\n      }\n\n      // IMPLEMENTS: Class implements Interface (TypeScript only)\n      if (captureMap['heritage.class'] && captureMap['heritage.implements']) {\n        const className = captureMap['heritage.class'].text;\n        const interfaceName = captureMap['heritage.implements'].text;\n\n        // Resolve class and interface IDs\n        const classId = symbolTable.lookupExact(file.path, className) ||\n                        symbolTable.lookupFuzzy(className)[0]?.nodeId ||\n                        generateId('Class', `${file.path}:${className}`);\n        \n        const interfaceId = symbolTable.lookupFuzzy(interfaceName)[0]?.nodeId ||\n                            generateId('Interface', `${interfaceName}`);\n\n        if 
(classId && interfaceId) {\n          const relId = generateId('IMPLEMENTS', `${classId}->${interfaceId}`);\n          \n          graph.addRelationship({\n            id: relId,\n            sourceId: classId,\n            targetId: interfaceId,\n            type: 'IMPLEMENTS',\n            confidence: 1.0,\n            reason: '',\n          });\n        }\n      }\n\n      // IMPLEMENTS (Rust): impl Trait for Struct\n      if (captureMap['heritage.trait'] && captureMap['heritage.class']) {\n        const structName = captureMap['heritage.class'].text;\n        const traitName = captureMap['heritage.trait'].text;\n\n        // Resolve struct and trait IDs\n        const structId = symbolTable.lookupExact(file.path, structName) ||\n                         symbolTable.lookupFuzzy(structName)[0]?.nodeId ||\n                         generateId('Struct', `${file.path}:${structName}`);\n        \n        const traitId = symbolTable.lookupFuzzy(traitName)[0]?.nodeId ||\n                        generateId('Trait', `${traitName}`);\n\n        if (structId && traitId) {\n          const relId = generateId('IMPLEMENTS', `${structId}->${traitId}`);\n          \n          graph.addRelationship({\n            id: relId,\n            sourceId: structId,\n            targetId: traitId,\n            type: 'IMPLEMENTS',\n            confidence: 1.0,\n            reason: 'trait-impl',\n          });\n        }\n      }\n    });\n\n    // Cleanup\n    if (wasReparsed) {\n      tree.delete();\n    }\n  }\n};\n"
  },
  {
    "path": "gitnexus-web/src/core/ingestion/import-processor.ts",
    "content": "import { KnowledgeGraph } from '../graph/types';\nimport { ASTCache } from './ast-cache';\nimport { loadParser, loadLanguage } from '../tree-sitter/parser-loader';\nimport { LANGUAGE_QUERIES } from './tree-sitter-queries';\nimport { generateId } from '../../lib/utils';\nimport { getLanguageFromFilename } from './utils';\nimport { callRouters } from './call-routing';\n\n// Type: Map<FilePath, Set<ResolvedFilePath>>\n// Stores all files that a given file imports from\nexport type ImportMap = Map<string, Set<string>>;\n\nexport const createImportMap = (): ImportMap => new Map();\n\n// Helper: Resolve import paths (relative and absolute/package-style)\nconst resolveImportPath = (\n  currentFile: string, \n  importPath: string, \n  allFiles: Set<string>,\n  allFileList: string[],\n  resolveCache: Map<string, string | null>\n): string | null => {\n  const cacheKey = `${currentFile}::${importPath}`;\n  if (resolveCache.has(cacheKey)) return resolveCache.get(cacheKey) ?? null;\n\n  // 1. Resolve '..' and '.' for relative imports\n  const currentDir = currentFile.split('/').slice(0, -1);\n  const parts = importPath.split('/');\n  \n  for (const part of parts) {\n    if (part === '.') continue;\n    if (part === '..') {\n      currentDir.pop();\n    } else {\n      currentDir.push(part);\n    }\n  }\n  \n  const basePath = currentDir.join('/');\n\n  // 2. 
Try extensions for all supported languages\n  const extensions = [\n    '', \n    // TypeScript/JavaScript\n    '.tsx', '.ts', '.jsx', '.js', '/index.tsx', '/index.ts', '/index.jsx', '/index.js',\n    // Python\n    '.py', '/__init__.py',\n    // Java\n    '.java',\n    // C/C++\n    '.c', '.h', '.cpp', '.hpp', '.cc', '.cxx', '.hxx', '.hh',\n    // C#\n    '.cs',\n    // Go\n    '.go',\n    // Rust\n    '.rs', '/mod.rs',\n    // Ruby\n    '.rb', '.rake',\n  ];\n  \n  if (importPath.startsWith('.')) {\n    for (const ext of extensions) {\n      const candidate = basePath + ext;\n      if (allFiles.has(candidate)) {\n        resolveCache.set(cacheKey, candidate);\n        return candidate;\n      }\n    }\n    resolveCache.set(cacheKey, null);\n    return null;\n  }\n\n  // 3. Handle absolute/package imports (Java, Go, Python, etc.)\n  if (importPath.endsWith('.*')) {\n    resolveCache.set(cacheKey, null);\n    return null;\n  }\n\n  const pathLike = importPath.includes('/')\n    ? importPath\n    : importPath.replace(/\\./g, '/');\n  const pathParts = pathLike.split('/').filter(Boolean);\n\n  // Normalize all file paths to forward slashes for matching\n  const normalizedFileList = allFileList.map(p => p.replace(/\\\\/g, '/'));\n\n  for (let i = 0; i < pathParts.length; i++) {\n    const suffix = pathParts.slice(i).join('/');\n    for (const ext of extensions) {\n      const suffixWithExt = suffix + ext;\n      // Require path separator before match to avoid false positives like \"View.java\" matching \"RootView.java\"\n      const suffixPattern = '/' + suffixWithExt;\n      const matchIdx = normalizedFileList.findIndex(filePath => \n        filePath.endsWith(suffixPattern) || filePath.toLowerCase().endsWith(suffixPattern.toLowerCase())\n      );\n      if (matchIdx !== -1) {\n        const match = allFileList[matchIdx];\n        resolveCache.set(cacheKey, match);\n        return match;\n      }\n    }\n  }\n\n  // Unresolved imports (external packages, SDK imports) 
are expected - don't log\n  resolveCache.set(cacheKey, null);\n  return null;\n};\n\nexport const processImports = async (\n  graph: KnowledgeGraph,\n  files: { path: string; content: string }[],\n  astCache: ASTCache,\n  importMap: ImportMap,\n  onProgress?: (current: number, total: number) => void\n) => {\n  // Create a Set of all file paths for fast lookup during resolution\n  const allFilePaths = new Set(files.map(f => f.path));\n  const parser = await loadParser();\n  const resolveCache = new Map<string, string | null>();\n  const allFileList = files.map(f => f.path);\n  \n  // Track import statistics\n  let totalImportsFound = 0;\n  let totalImportsResolved = 0;\n\n  for (let i = 0; i < files.length; i++) {\n    const file = files[i];\n    onProgress?.(i + 1, files.length);\n\n    // 1. Check language support first\n    const language = getLanguageFromFilename(file.path);\n    if (!language) continue;\n    \n    const queryStr = LANGUAGE_QUERIES[language];\n    if (!queryStr) continue;\n\n    // 2. ALWAYS load the language before querying (parser is stateful)\n    await loadLanguage(language, file.path);\n\n    // 3. 
Get AST (Try Cache First)\n    let tree = astCache.get(file.path);\n    let wasReparsed = false;\n    \n    if (!tree) {\n      // Cache Miss: Re-parse (slower, but necessary if evicted)\n      tree = parser.parse(file.content);\n      wasReparsed = true;\n    }\n\n    let query;\n    let matches;\n    try {\n      query = parser.getLanguage().query(queryStr);\n      matches = query.matches(tree.rootNode);\n      \n      // Removed verbose Java import logging\n    } catch (queryError: any) {\n      // Detailed debug logging for query failures\n      console.group(`🔴 Query Error: ${file.path}`);\n      console.log('Language:', language);\n      console.log('Query (first 200 chars):', queryStr.substring(0, 200) + '...');\n      console.log('Error:', queryError?.message || queryError);\n      console.log('File content (first 300 chars):', file.content.substring(0, 300));\n      console.log('AST root type:', tree.rootNode?.type);\n      console.log('AST has errors:', tree.rootNode?.hasError);\n      console.groupEnd();\n      \n      if (wasReparsed) tree.delete();\n      continue;\n    }\n\n    matches.forEach(match => {\n      const captureMap: Record<string, any> = {};\n      match.captures.forEach(c => captureMap[c.name] = c.node);\n\n      if (captureMap['import']) {\n        const sourceNode = captureMap['import.source'];\n        if (!sourceNode) {\n          if (import.meta.env.DEV) {\n            console.log(`⚠️ Import captured but no source node in ${file.path}`);\n          }\n          return;\n        }\n\n        // Clean path (remove quotes)\n        const rawImportPath = sourceNode.text.replace(/['\"]/g, '');\n        totalImportsFound++;\n        \n        // Removed verbose per-import logging\n        \n        // Resolve to actual file in the system\n        const resolvedPath = resolveImportPath(\n          file.path,\n          rawImportPath,\n          allFilePaths,\n          allFileList,\n          resolveCache\n        );\n\n        if 
(resolvedPath) {\n          // A. Update Graph (File -> IMPORTS -> File)\n          const sourceId = generateId('File', file.path);\n          const targetId = generateId('File', resolvedPath);\n          const relId = generateId('IMPORTS', `${file.path}->${resolvedPath}`);\n\n          totalImportsResolved++;\n\n          graph.addRelationship({\n            id: relId,\n            sourceId,\n            targetId,\n            type: 'IMPORTS',\n            confidence: 1.0,\n            reason: '',\n          });\n\n          // B. Update Import Map (For Pass 4)\n          // Store all resolved import paths for this file\n          if (!importMap.has(file.path)) {\n            importMap.set(file.path, new Set());\n          }\n          importMap.get(file.path)!.add(resolvedPath);\n        }\n      }\n\n      // ---- Language-specific call-as-import routing (Ruby require, etc.) ----\n      if (captureMap['call']) {\n        const callNameNode = captureMap['call.name'];\n        if (callNameNode) {\n          // Not every language registers a call router, so guard against undefined\n          const callRouter = callRouters[language];\n          const routed = callRouter?.(callNameNode.text, captureMap['call']);\n          if (routed && routed.kind === 'import') {\n            totalImportsFound++;\n            const resolvedPath = resolveImportPath(\n              file.path, routed.importPath, allFilePaths, allFileList, resolveCache\n            );\n            if (resolvedPath) {\n              const sourceId = generateId('File', file.path);\n              const targetId = generateId('File', resolvedPath);\n              const relId = generateId('IMPORTS', `${file.path}->${resolvedPath}`);\n              totalImportsResolved++;\n              graph.addRelationship({\n                id: relId, sourceId, targetId,\n                type: 'IMPORTS', confidence: 1.0, reason: '',\n              });\n              if (!importMap.has(file.path)) {\n                importMap.set(file.path, new Set());\n              }\n              
importMap.get(file.path)!.add(resolvedPath);\n            }\n          }\n        }\n      }\n    });\n\n    // If re-parsed just for this, delete the tree to save memory\n    if (wasReparsed) {\n      tree.delete();\n    }\n  }\n  \n  if (import.meta.env.DEV) {\n    console.log(`📊 Import processing complete: ${totalImportsResolved}/${totalImportsFound} imports resolved to graph edges`);\n  }\n};\n"
  },
  {
    "path": "gitnexus-web/src/core/ingestion/parsing-processor.ts",
    "content": "import { KnowledgeGraph, GraphNode, GraphRelationship } from '../graph/types';\nimport { loadParser, loadLanguage } from '../tree-sitter/parser-loader';\nimport { LANGUAGE_QUERIES } from './tree-sitter-queries';\nimport { generateId } from '../../lib/utils';\nimport { SymbolTable } from './symbol-table';\nimport { ASTCache } from './ast-cache';\nimport { getLanguageFromFilename } from './utils';\n\nexport type FileProgressCallback = (current: number, total: number, filePath: string) => void;\n\n// ============================================================================\n// EXPORT DETECTION - Language-specific visibility detection\n// ============================================================================\n\n/**\n * Check if a symbol (function, class, etc.) is exported/public\n * Handles all 11 supported languages with explicit logic\n * \n * @param node - The AST node for the symbol name\n * @param name - The symbol name\n * @param language - The programming language\n * @returns true if the symbol is exported/public\n */\nconst isNodeExported = (node: any, name: string, language: string): boolean => {\n  let current = node;\n  \n  switch (language) {\n    // JavaScript/TypeScript: Check for export keyword in ancestors\n    case 'javascript':\n    case 'typescript':\n      while (current) {\n        const type = current.type;\n        if (type === 'export_statement' || \n            type === 'export_specifier' ||\n            type === 'lexical_declaration' && current.parent?.type === 'export_statement') {\n          return true;\n        }\n        // Also check if text starts with 'export '\n        if (current.text?.startsWith('export ')) {\n          return true;\n        }\n        current = current.parent;\n      }\n      return false;\n    \n    // Python: Public if no leading underscore (convention)\n    case 'python':\n      return !name.startsWith('_');\n    \n    // Java: Check for 'public' modifier\n    // In tree-sitter Java, 
modifiers are siblings of the name node, not parents\n    case 'java':\n      while (current) {\n        // Check if this node or any sibling is a 'modifiers' node containing 'public'\n        if (current.parent) {\n          const parent = current.parent;\n          // Check all children of the parent for modifiers\n          for (let i = 0; i < parent.childCount; i++) {\n            const child = parent.child(i);\n            if (child?.type === 'modifiers' && child.text?.includes('public')) {\n              return true;\n            }\n          }\n          // Also check if the parent's text starts with 'public' (fallback)\n          if (parent.type === 'method_declaration' || parent.type === 'constructor_declaration') {\n            if (parent.text?.trimStart().startsWith('public')) {\n              return true;\n            }\n          }\n        }\n        current = current.parent;\n      }\n      return false;\n    \n    // C#: Check for 'public' modifier in ancestors\n    case 'csharp':\n      while (current) {\n        if (current.type === 'modifier' || current.type === 'modifiers') {\n          if (current.text?.includes('public')) return true;\n        }\n        current = current.parent;\n      }\n      return false;\n    \n    // Go: Uppercase first letter = exported\n    case 'go':\n      if (name.length === 0) return false;\n      const first = name[0];\n      // Must be uppercase letter (not a number or symbol)\n      return first === first.toUpperCase() && first !== first.toLowerCase();\n    \n    // Rust: Check for 'pub' visibility modifier\n    case 'rust':\n      while (current) {\n        if (current.type === 'visibility_modifier') {\n          if (current.text?.includes('pub')) return true;\n        }\n        current = current.parent;\n      }\n      return false;\n    \n    // C/C++: No native export concept at language level\n    // Entry points will be detected via name patterns (main, etc.)\n    case 'c':\n    case 'cpp':\n      return 
false;\n\n    // Ruby: All top-level definitions are public by default\n    case 'ruby':\n      return true;\n\n    default:\n      return false;\n  }\n};\n\nexport const processParsing = async (\n  graph: KnowledgeGraph, \n  files: { path: string; content: string }[],\n  symbolTable: SymbolTable,\n  astCache: ASTCache,\n  onFileProgress?: FileProgressCallback\n) => {\n \n  const parser = await loadParser();\n  const total = files.length;\n\n  for (let i = 0; i < files.length; i++) {\n    const file = files[i];\n    \n    // Report progress for each file\n    onFileProgress?.(i + 1, total, file.path);\n    \n    const language = getLanguageFromFilename(file.path);\n\n    if (!language) continue;\n\n    await loadLanguage(language, file.path);\n    \n    // 3. Parse the text content into an AST\n    const tree = parser.parse(file.content);\n    \n    // Store in cache immediately (this might evict an old one)\n    astCache.set(file.path, tree);\n    \n    // 4. Get the specific query string for this language\n    const queryString = LANGUAGE_QUERIES[language];\n    if (!queryString) {\n      continue;\n    }\n\n    // 5. Run the query against the AST root node\n    // This looks for patterns like (function_declaration)\n    let query;\n    let matches;\n    try {\n      query = parser.getLanguage().query(queryString);\n      matches = query.matches(tree.rootNode);\n    } catch (queryError) {\n      console.warn(`Query error for ${file.path}:`, queryError);\n      continue;\n    }\n\n    // 6. 
Process every match found\n    matches.forEach(match => {\n      const captureMap: Record<string, any> = {};\n      \n      match.captures.forEach(c => {\n        captureMap[c.name] = c.node;\n      });\n\n      // Skip imports here - they are handled by import-processor.ts\n      // which creates proper File -> IMPORTS -> File relationships\n      if (captureMap['import']) {\n        return;\n      }\n\n      // Skip call expressions - they are handled by call-processor.ts\n      if (captureMap['call']) {\n        return;\n      }\n\n      const nameNode = captureMap['name'];\n      if (!nameNode) return;\n\n      const nodeName = nameNode.text;\n      \n      let nodeLabel = 'CodeElement';\n      \n      // Core types\n      if (captureMap['definition.function']) nodeLabel = 'Function';\n      else if (captureMap['definition.class']) nodeLabel = 'Class';\n      else if (captureMap['definition.interface']) nodeLabel = 'Interface';\n      else if (captureMap['definition.method']) nodeLabel = 'Method';\n      // Struct types (C, C++, Go, Rust, C#)\n      else if (captureMap['definition.struct']) nodeLabel = 'Struct';\n      // Enum types\n      else if (captureMap['definition.enum']) nodeLabel = 'Enum';\n      // Namespace/Module (C++, C#, Rust)\n      else if (captureMap['definition.namespace']) nodeLabel = 'Namespace';\n      else if (captureMap['definition.module']) nodeLabel = 'Module';\n      // Rust-specific\n      else if (captureMap['definition.trait']) nodeLabel = 'Trait';\n      else if (captureMap['definition.impl']) nodeLabel = 'Impl';\n      else if (captureMap['definition.type']) nodeLabel = 'TypeAlias';\n      else if (captureMap['definition.const']) nodeLabel = 'Const';\n      else if (captureMap['definition.static']) nodeLabel = 'Static';\n      // C-specific\n      else if (captureMap['definition.typedef']) nodeLabel = 'Typedef';\n      else if (captureMap['definition.macro']) nodeLabel = 'Macro';\n      else if (captureMap['definition.union']) 
nodeLabel = 'Union';\n      // C#-specific\n      else if (captureMap['definition.property']) nodeLabel = 'Property';\n      else if (captureMap['definition.record']) nodeLabel = 'Record';\n      else if (captureMap['definition.delegate']) nodeLabel = 'Delegate';\n      // Java-specific\n      else if (captureMap['definition.annotation']) nodeLabel = 'Annotation';\n      else if (captureMap['definition.constructor']) nodeLabel = 'Constructor';\n      // C++ template\n      else if (captureMap['definition.template']) nodeLabel = 'Template';\n\n      const nodeId = generateId(nodeLabel, `${file.path}:${nodeName}`);\n      \n      const node: GraphNode = {\n        id: nodeId,\n        label: nodeLabel as any,\n        properties: {\n          name: nodeName,\n          filePath: file.path,\n          startLine: nameNode.startPosition.row,\n          endLine: nameNode.endPosition.row,\n          language: language,\n          isExported: isNodeExported(nameNode, nodeName, language),\n        }\n      };\n\n      graph.addNode(node);\n\n      // Register in Symbol Table (only definitions, not imports)\n      symbolTable.add(file.path, nodeName, nodeId, nodeLabel);\n\n      const fileId = generateId('File', file.path);\n      \n      const relId = generateId('DEFINES', `${fileId}->${nodeId}`);\n      \n      const relationship: GraphRelationship = {\n        id: relId,\n        sourceId: fileId,\n        targetId: nodeId,\n        type: 'DEFINES',\n        confidence: 1.0,\n        reason: '',\n      };\n\n      graph.addRelationship(relationship);\n    });\n    \n    // Don't delete tree here - LRU cache handles cleanup when evicted\n  }\n};\n"
  },
  {
    "path": "gitnexus-web/src/core/ingestion/pipeline.ts",
    "content": "import { createKnowledgeGraph } from '../graph/graph';\nimport { extractZip, FileEntry } from '../../services/zip';\nimport { processStructure } from './structure-processor';\nimport { processParsing } from './parsing-processor';\nimport { processImports, createImportMap } from './import-processor';\nimport { processCalls } from './call-processor';\nimport { processHeritage } from './heritage-processor';\nimport { processCommunities, CommunityDetectionResult } from './community-processor';\nimport { processProcesses, ProcessDetectionResult } from './process-processor';\nimport { createSymbolTable } from './symbol-table';\nimport { createASTCache } from './ast-cache';\nimport { PipelineProgress, PipelineResult } from '../../types/pipeline';\n\n/**\n * Run the ingestion pipeline from a ZIP file\n */\nexport const runIngestionPipeline = async ( file: File, onProgress: (progress: PipelineProgress) => void): Promise<PipelineResult> => {\n  // Phase 1: Extracting (0-15%)\n  onProgress({\n    phase: 'extracting',\n    percent: 0,\n    message: 'Extracting ZIP file...',\n  });\n  \n  // Fake progress for extraction (JSZip doesn't expose progress)\n  const fakeExtractionProgress = setInterval(() => {\n    onProgress({\n      phase: 'extracting',\n      percent: Math.min(14, Math.random() * 10 + 5),\n      message: 'Extracting ZIP file...',\n    });\n  }, 200);\n  \n  const files = await extractZip(file);\n  clearInterval(fakeExtractionProgress);\n  \n  // Continue with common pipeline\n  return runPipelineFromFiles(files, onProgress);\n};\n\n/**\n * Run the ingestion pipeline from pre-extracted files (e.g., from git clone)\n */\nexport const runPipelineFromFiles = async (\n  files: FileEntry[],\n  onProgress: (progress: PipelineProgress) => void\n): Promise<PipelineResult> => {\n  const graph = createKnowledgeGraph();\n  const fileContents = new Map<string, string>();\n  const symbolTable = createSymbolTable();\n  const astCache = createASTCache(50); // Keep 
last 50 files hot\n  const importMap = createImportMap();\n\n  // Cleanup function for error handling\n  const cleanup = () => {\n    astCache.clear();\n    symbolTable.clear();\n  };\n  \n  try {\n  // Store file contents for code panel\n  files.forEach(f => fileContents.set(f.path, f.content));\n  \n  onProgress({\n    phase: 'extracting',\n    percent: 15,\n    message: 'ZIP extracted successfully',\n    stats: { filesProcessed: 0, totalFiles: files.length, nodesCreated: 0 },\n  });\n  \n  // Phase 2: Structure (15-30%)\n  onProgress({\n    phase: 'structure',\n    percent: 15,\n    message: 'Analyzing project structure...',\n    stats: { filesProcessed: 0, totalFiles: files.length, nodesCreated: 0 },\n  });\n  \n  const filePaths = files.map(f => f.path);\n  processStructure(graph, filePaths);\n  \n  onProgress({\n    phase: 'structure',\n    percent: 30,\n    message: 'Project structure analyzed',\n    stats: { filesProcessed: files.length, totalFiles: files.length, nodesCreated: graph.nodeCount },\n  });\n  \n  // Phase 3: Parsing (30-70%)\n  onProgress({\n    phase: 'parsing',\n    percent: 30,\n    message: 'Parsing code definitions...',\n    stats: { filesProcessed: 0, totalFiles: files.length, nodesCreated: graph.nodeCount },\n  });\n  \n  await processParsing(graph, files, symbolTable, astCache, (current, total, filePath) => {\n    const parsingProgress = 30 + ((current / total) * 40);\n    onProgress({\n      phase: 'parsing',\n      percent: Math.round(parsingProgress),\n      message: 'Parsing code definitions...',\n      detail: filePath,\n      stats: { filesProcessed: current, totalFiles: total, nodesCreated: graph.nodeCount },\n    });\n  });\n\n\n  // Phase 4: Imports (70-82%)\n  onProgress({\n    phase: 'imports',\n    percent: 70,\n    message: 'Resolving imports...',\n    stats: { filesProcessed: 0, totalFiles: files.length, nodesCreated: graph.nodeCount },\n  });\n\n  await processImports(graph, files, astCache, importMap, (current, total) => 
{\n    const importProgress = 70 + ((current / total) * 12);\n    onProgress({\n      phase: 'imports',\n      percent: Math.round(importProgress),\n      message: 'Resolving imports...',\n      stats: { filesProcessed: current, totalFiles: total, nodesCreated: graph.nodeCount },\n    });\n  });\n  \n  // Debug: Count IMPORTS relationships\n  if (import.meta.env.DEV) {\n    const importsCount = graph.relationships.filter(r => r.type === 'IMPORTS').length;\n    console.log(`📊 Pipeline: After import phase, graph has ${importsCount} IMPORTS relationships (total: ${graph.relationshipCount})`);\n    if (importsCount > 0) {\n      const sample = graph.relationships.filter(r => r.type === 'IMPORTS').slice(0, 3);\n      sample.forEach(r => console.log(`   Sample IMPORTS: ${r.sourceId} → ${r.targetId}`));\n    }\n  }\n\n  // Phase 5: Calls (82-88%)\n  onProgress({\n    phase: 'calls',\n    percent: 82,\n    message: 'Tracing function calls...',\n    stats: { filesProcessed: 0, totalFiles: files.length, nodesCreated: graph.nodeCount },\n  });\n\n  await processCalls(graph, files, astCache, symbolTable, importMap, (current, total) => {\n    const callProgress = 82 + ((current / total) * 6);\n    onProgress({\n      phase: 'calls',\n      percent: Math.round(callProgress),\n      message: 'Tracing function calls...',\n      stats: { filesProcessed: current, totalFiles: total, nodesCreated: graph.nodeCount },\n    });\n  });\n\n  // Phase 6: Heritage - Class inheritance (88-92%)\n  onProgress({\n    phase: 'heritage',\n    percent: 88,\n    message: 'Extracting class inheritance...',\n    stats: { filesProcessed: 0, totalFiles: files.length, nodesCreated: graph.nodeCount },\n  });\n\n  await processHeritage(graph, files, astCache, symbolTable, (current, total) => {\n    const heritageProgress = 88 + ((current / total) * 4);\n    onProgress({\n      phase: 'heritage',\n      percent: Math.round(heritageProgress),\n      message: 'Extracting class inheritance...',\n      
stats: { filesProcessed: current, totalFiles: total, nodesCreated: graph.nodeCount },\n    });\n  });\n\n  // Phase 7: Community Detection (92-98%)\n  onProgress({\n    phase: 'communities',\n    percent: 92,\n    message: 'Detecting code communities...',\n    stats: { filesProcessed: files.length, totalFiles: files.length, nodesCreated: graph.nodeCount },\n  });\n\n  const communityResult = await processCommunities(graph, (message, progress) => {\n    const communityProgress = 92 + (progress * 0.06);\n    onProgress({\n      phase: 'communities',\n      percent: Math.round(communityProgress),\n      message,\n      stats: { filesProcessed: files.length, totalFiles: files.length, nodesCreated: graph.nodeCount },\n    });\n  });\n\n  // Log community detection results\n  if (import.meta.env.DEV) {\n    console.log(`🏘️ Community detection: ${communityResult.stats.totalCommunities} communities found (modularity: ${communityResult.stats.modularity.toFixed(3)})`);\n  }\n\n  // Add community nodes to the graph\n  communityResult.communities.forEach(comm => {\n    graph.addNode({\n      id: comm.id,\n      label: 'Community' as const,\n      properties: {\n        name: comm.label,\n        filePath: '',\n        heuristicLabel: comm.heuristicLabel,\n        cohesion: comm.cohesion,\n        symbolCount: comm.symbolCount,\n      }\n    });\n  });\n\n  // Add MEMBER_OF relationships\n  communityResult.memberships.forEach(membership => {\n    graph.addRelationship({\n      id: `${membership.nodeId}_member_of_${membership.communityId}`,\n      type: 'MEMBER_OF',\n      sourceId: membership.nodeId,\n      targetId: membership.communityId,\n      confidence: 1.0,\n      reason: 'leiden-algorithm',\n    });\n  });\n\n  // Phase 8: Process Detection (98-99%)\n  onProgress({\n    phase: 'processes',\n    percent: 98,\n    message: 'Detecting execution flows...',\n    stats: { filesProcessed: files.length, totalFiles: files.length, nodesCreated: graph.nodeCount },\n  });\n\n  
const processResult = await processProcesses(\n    graph,\n    communityResult.memberships,\n    (message, progress) => {\n      const processProgress = 98 + (progress * 0.01);\n      onProgress({\n        phase: 'processes',\n        percent: Math.round(processProgress),\n        message,\n        stats: { filesProcessed: files.length, totalFiles: files.length, nodesCreated: graph.nodeCount },\n      });\n    }\n  );\n\n  // Log process detection results\n  if (import.meta.env.DEV) {\n    console.log(`🔄 Process detection: ${processResult.stats.totalProcesses} processes found (${processResult.stats.crossCommunityCount} cross-community)`);\n  }\n\n  // Add Process nodes to the graph\n  processResult.processes.forEach(proc => {\n    graph.addNode({\n      id: proc.id,\n      label: 'Process' as const,\n      properties: {\n        name: proc.label,\n        filePath: '',\n        heuristicLabel: proc.heuristicLabel,\n        processType: proc.processType,\n        stepCount: proc.stepCount,\n        communities: proc.communities,\n        entryPointId: proc.entryPointId,\n        terminalId: proc.terminalId,\n      }\n    });\n  });\n\n  // Add STEP_IN_PROCESS relationships\n  processResult.steps.forEach(step => {\n    graph.addRelationship({\n      id: `${step.nodeId}_step_${step.step}_${step.processId}`,\n      type: 'STEP_IN_PROCESS',\n      sourceId: step.nodeId,\n      targetId: step.processId,\n      confidence: 1.0,\n      reason: 'trace-detection',\n      step: step.step,\n    });\n  });\n\n  \n  // Phase 9: Complete (100%)\n  onProgress({\n    phase: 'complete',\n    percent: 100,\n    message: `Graph complete! 
${communityResult.stats.totalCommunities} communities, ${processResult.stats.totalProcesses} processes detected.`,\n    stats: { \n      filesProcessed: files.length, \n      totalFiles: files.length, \n      nodesCreated: graph.nodeCount \n    },\n  });\n\n  // Cleanup WASM memory before returning\n  astCache.clear();\n  \n  return { graph, fileContents, communityResult, processResult };\n\n  } catch (error) {\n    cleanup();\n    throw error;\n  }\n};\n"
  },
  {
    "path": "gitnexus-web/src/core/ingestion/process-processor.ts",
    "content": "/**\n * Process Detection Processor\n * \n * Detects execution flows (Processes) in the code graph by:\n * 1. Finding entry points (functions with few or no internal callers)\n * 2. Tracing forward via CALLS edges (BFS)\n * 3. Grouping and deduplicating similar paths\n * 4. Labeling with heuristic names\n * \n * Processes help agents understand how features work through the codebase.\n */\n\nimport { KnowledgeGraph, GraphNode, GraphRelationship, NodeLabel } from '../graph/types';\nimport { CommunityMembership } from './community-processor';\nimport { calculateEntryPointScore, isTestFile } from './entry-point-scoring';\n\n// ============================================================================\n// CONFIGURATION\n// ============================================================================\n\nexport interface ProcessDetectionConfig {\n  maxTraceDepth: number;      // Maximum steps to trace (default: 10)\n  maxBranching: number;       // Max branches to follow per node (default: 4)\n  maxProcesses: number;       // Maximum processes to detect (default: 75)\n  minSteps: number;           // Minimum steps for a valid process (default: 2)\n}\n\nconst DEFAULT_CONFIG: ProcessDetectionConfig = {\n  maxTraceDepth: 10,\n  maxBranching: 4,\n  maxProcesses: 75,\n  minSteps: 2,\n};\n\n// ============================================================================\n// TYPES\n// ============================================================================\n\nexport interface ProcessNode {\n  id: string;                    // \"proc_handleLogin_createSession\"\n  label: string;                 // \"HandleLogin → CreateSession\"\n  heuristicLabel: string;\n  processType: 'intra_community' | 'cross_community';\n  stepCount: number;\n  communities: string[];         // Community IDs touched\n  entryPointId: string;\n  terminalId: string;\n  trace: string[];               // Ordered array of node IDs\n}\n\nexport interface ProcessStep {\n  nodeId: string;\n  processId: 
string;\n  step: number;                  // 1-indexed position in trace\n}\n\nexport interface ProcessDetectionResult {\n  processes: ProcessNode[];\n  steps: ProcessStep[];\n  stats: {\n    totalProcesses: number;\n    crossCommunityCount: number;\n    avgStepCount: number;\n    entryPointsFound: number;\n  };\n}\n\n// ============================================================================\n// MAIN PROCESSOR\n// ============================================================================\n\n/**\n * Detect processes (execution flows) in the knowledge graph\n * \n * This runs AFTER community detection, using CALLS edges to trace flows.\n */\nexport const processProcesses = async (\n  knowledgeGraph: KnowledgeGraph,\n  memberships: CommunityMembership[],\n  onProgress?: (message: string, progress: number) => void,\n  config: Partial<ProcessDetectionConfig> = {}\n): Promise<ProcessDetectionResult> => {\n  const cfg = { ...DEFAULT_CONFIG, ...config };\n  \n  onProgress?.('Finding entry points...', 0);\n  \n  // Build lookup maps\n  const membershipMap = new Map<string, string>();\n  memberships.forEach(m => membershipMap.set(m.nodeId, m.communityId));\n  \n  const callsEdges = buildCallsGraph(knowledgeGraph);\n  const reverseCallsEdges = buildReverseCallsGraph(knowledgeGraph);\n  const nodeMap = new Map<string, GraphNode>();\n  knowledgeGraph.nodes.forEach(n => nodeMap.set(n.id, n));\n  \n  // Step 1: Find entry points (functions that call others but have few callers)\n  const entryPoints = findEntryPoints(knowledgeGraph, reverseCallsEdges, callsEdges);\n  \n  onProgress?.(`Found ${entryPoints.length} entry points, tracing flows...`, 20);\n  \n  // Step 2: Trace processes from each entry point\n  const allTraces: string[][] = [];\n  \n  for (let i = 0; i < entryPoints.length && allTraces.length < cfg.maxProcesses * 2; i++) {\n    const entryId = entryPoints[i];\n    const 
traces = traceFromEntryPoint(entryId, callsEdges, cfg);\n    \n    // Filter out traces that are too short\n    traces.filter(t => t.length >= cfg.minSteps).forEach(t => allTraces.push(t));\n    \n    if (i % 10 === 0) {\n      onProgress?.(`Tracing entry point ${i + 1}/${entryPoints.length}...`, 20 + (i / entryPoints.length) * 40);\n    }\n  }\n  \n  onProgress?.(`Found ${allTraces.length} traces, deduplicating...`, 60);\n  \n  // Step 3: Deduplicate similar traces\n  const uniqueTraces = deduplicateTraces(allTraces);\n  \n  // Step 4: Limit to max processes (prioritize longer traces)\n  const limitedTraces = uniqueTraces\n    .sort((a, b) => b.length - a.length)\n    .slice(0, cfg.maxProcesses);\n  \n  onProgress?.(`Creating ${limitedTraces.length} process nodes...`, 80);\n  \n  // Step 5: Create process nodes\n  const processes: ProcessNode[] = [];\n  const steps: ProcessStep[] = [];\n  \n  limitedTraces.forEach((trace, idx) => {\n    const entryPointId = trace[0];\n    const terminalId = trace[trace.length - 1];\n    \n    // Get communities touched\n    const communitiesSet = new Set<string>();\n    trace.forEach(nodeId => {\n      const comm = membershipMap.get(nodeId);\n      if (comm) communitiesSet.add(comm);\n    });\n    const communities = Array.from(communitiesSet);\n    \n    // Determine process type\n    const processType: 'intra_community' | 'cross_community' = \n      communities.length > 1 ? 
'cross_community' : 'intra_community';\n    \n    // Generate label\n    const entryNode = nodeMap.get(entryPointId);\n    const terminalNode = nodeMap.get(terminalId);\n    const entryName = entryNode?.properties.name || 'Unknown';\n    const terminalName = terminalNode?.properties.name || 'Unknown';\n    const heuristicLabel = `${capitalize(entryName)} → ${capitalize(terminalName)}`;\n    \n    const processId = `proc_${idx}_${sanitizeId(entryName)}`;\n    \n    processes.push({\n      id: processId,\n      label: heuristicLabel,\n      heuristicLabel,\n      processType,\n      stepCount: trace.length,\n      communities,\n      entryPointId,\n      terminalId,\n      trace,\n    });\n    \n    // Create step relationships\n    trace.forEach((nodeId, stepIdx) => {\n      steps.push({\n        nodeId,\n        processId,\n        step: stepIdx + 1,  // 1-indexed\n      });\n    });\n  });\n  \n  onProgress?.('Process detection complete!', 100);\n  \n  // Calculate stats\n  const crossCommunityCount = processes.filter(p => p.processType === 'cross_community').length;\n  const avgStepCount = processes.length > 0 \n    ? 
processes.reduce((sum, p) => sum + p.stepCount, 0) / processes.length \n    : 0;\n  \n  return {\n    processes,\n    steps,\n    stats: {\n      totalProcesses: processes.length,\n      crossCommunityCount,\n      avgStepCount: Math.round(avgStepCount * 10) / 10,\n      entryPointsFound: entryPoints.length,\n    },\n  };\n};\n\n// ============================================================================\n// HELPER: Build CALLS adjacency list\n// ============================================================================\n\ntype AdjacencyList = Map<string, string[]>;\n\nconst buildCallsGraph = (graph: KnowledgeGraph): AdjacencyList => {\n  const adj = new Map<string, string[]>();\n  \n  graph.relationships.forEach(rel => {\n    if (rel.type === 'CALLS') {\n      if (!adj.has(rel.sourceId)) {\n        adj.set(rel.sourceId, []);\n      }\n      adj.get(rel.sourceId)!.push(rel.targetId);\n    }\n  });\n  \n  return adj;\n};\n\nconst buildReverseCallsGraph = (graph: KnowledgeGraph): AdjacencyList => {\n  const adj = new Map<string, string[]>();\n  \n  graph.relationships.forEach(rel => {\n    if (rel.type === 'CALLS') {\n      if (!adj.has(rel.targetId)) {\n        adj.set(rel.targetId, []);\n      }\n      adj.get(rel.targetId)!.push(rel.sourceId);\n    }\n  });\n  \n  return adj;\n};\n\n/**\n * Find functions/methods that are good entry points for tracing.\n * \n * Entry points are scored based on:\n * 1. Call ratio (calls many, called by few)\n * 2. Export status (exported/public functions rank higher)\n * 3. 
Name patterns (handle*, on*, *Controller, etc.)\n * \n * Test files are excluded entirely.\n */\nconst findEntryPoints = (\n  graph: KnowledgeGraph, \n  reverseCallsEdges: AdjacencyList,\n  callsEdges: AdjacencyList\n): string[] => {\n  const symbolTypes = new Set<NodeLabel>(['Function', 'Method']);\n  const entryPointCandidates: { \n    id: string; \n    score: number; \n    reasons: string[];\n  }[] = [];\n  \n  graph.nodes.forEach(node => {\n    if (!symbolTypes.has(node.label)) return;\n    \n    const filePath = node.properties.filePath || '';\n    \n    // Skip test files entirely\n    if (isTestFile(filePath)) return;\n    \n    const callers = reverseCallsEdges.get(node.id) || [];\n    const callees = callsEdges.get(node.id) || [];\n    \n    // Must have at least 1 outgoing call to trace forward\n    if (callees.length === 0) return;\n    \n    // Calculate entry point score using new scoring system\n    const { score, reasons } = calculateEntryPointScore(\n      node.properties.name,\n      node.properties.language || 'javascript',\n      node.properties.isExported ?? false,\n      callers.length,\n      callees.length,\n      filePath  // Pass filePath for framework detection\n    );\n    \n    if (score > 0) {\n      entryPointCandidates.push({ id: node.id, score, reasons });\n    }\n  });\n  \n  // Sort by score descending and return top candidates\n  const sorted = entryPointCandidates.sort((a, b) => b.score - a.score);\n  \n  // DEBUG: Log top candidates with new scoring details\n  if (sorted.length > 0 && typeof import.meta !== 'undefined' && import.meta.env?.DEV) {\n    console.log(`[Process] Top 10 entry point candidates (new scoring):`);\n    sorted.slice(0, 10).forEach((c, i) => {\n      const node = graph.nodes.find(n => n.id === c.id);\n      const exported = node?.properties.isExported ? '✓' : '✗';\n      const shortPath = node?.properties.filePath?.split('/').slice(-2).join('/') || '';\n      console.log(`  ${i+1}. 
${node?.properties.name} [exported:${exported}] (${shortPath})`);\n      console.log(`     score: ${c.score.toFixed(2)} = [${c.reasons.join(' × ')}]`);\n    });\n  }\n  \n  return sorted\n    .slice(0, 200)  // Limit to prevent explosion\n    .map(c => c.id);\n};\n\n// ============================================================================\n// HELPER: Trace from entry point (BFS)\n// ============================================================================\n\n/**\n * Trace forward from an entry point using BFS.\n * Returns all distinct paths up to maxDepth.\n */\nconst traceFromEntryPoint = (\n  entryId: string,\n  callsEdges: AdjacencyList,\n  config: ProcessDetectionConfig\n): string[][] => {\n  const traces: string[][] = [];\n  \n  // BFS with path tracking\n  // Each queue item: [currentNodeId, pathSoFar]\n  const queue: [string, string[]][] = [[entryId, [entryId]]];\n  const visited = new Set<string>();\n  \n  while (queue.length > 0 && traces.length < config.maxBranching * 3) {\n    const [currentId, path] = queue.shift()!;\n    \n    // Get outgoing calls\n    const callees = callsEdges.get(currentId) || [];\n    \n    if (callees.length === 0) {\n      // Terminal node - this is a complete trace\n      if (path.length >= config.minSteps) {\n        traces.push([...path]);\n      }\n    } else if (path.length >= config.maxTraceDepth) {\n      // Max depth reached - save what we have\n      if (path.length >= config.minSteps) {\n        traces.push([...path]);\n      }\n    } else {\n      // Continue tracing - limit branching\n      const limitedCallees = callees.slice(0, config.maxBranching);\n      let addedBranch = false;\n      \n      for (const calleeId of limitedCallees) {\n        // Avoid cycles\n        if (!path.includes(calleeId)) {\n          queue.push([calleeId, [...path, calleeId]]);\n          addedBranch = true;\n        }\n      }\n      \n      // If all branches were cycles, save current path as terminal\n      if (!addedBranch 
&& path.length >= config.minSteps) {\n        traces.push([...path]);\n      }\n    }\n  }\n  \n  return traces;\n};\n\n// ============================================================================\n// HELPER: Deduplicate traces\n// ============================================================================\n\n/**\n * Merge traces that are subsets of other traces.\n * Keep longer traces, remove redundant shorter ones.\n */\nconst deduplicateTraces = (traces: string[][]): string[][] => {\n  if (traces.length === 0) return [];\n  \n  // Sort by length descending\n  const sorted = [...traces].sort((a, b) => b.length - a.length);\n  const unique: string[][] = [];\n  \n  for (const trace of sorted) {\n    // Check if this trace is a contiguous sub-path of any already-added trace.\n    // Pad both keys with the delimiter so node IDs cannot falsely match across\n    // boundaries (e.g. 'a->b' matching inside 'za->bq').\n    const traceKey = `->${trace.join('->')}->`;\n    const isSubset = unique.some(existing => {\n      const existingKey = `->${existing.join('->')}->`;\n      return existingKey.includes(traceKey);\n    });\n    \n    if (!isSubset) {\n      unique.push(trace);\n    }\n  }\n  \n  return unique;\n};\n\n// ============================================================================\n// HELPER: String utilities\n// ============================================================================\n\nconst capitalize = (s: string): string => {\n  if (!s) return s;\n  return s.charAt(0).toUpperCase() + s.slice(1);\n};\n\nconst sanitizeId = (s: string): string => {\n  return s.replace(/[^a-zA-Z0-9]/g, '_').substring(0, 20).toLowerCase();\n};\n"
  },
  {
    "path": "gitnexus-web/src/core/ingestion/structure-processor.ts",
    "content": "import { generateId } from \"@/lib/utils\";\nimport { KnowledgeGraph, GraphNode, GraphRelationship } from \"../graph/types\";\n\n/**\n * Build File/Folder nodes and CONTAINS relationships from repository paths.\n */\nexport const processStructure = (graph: KnowledgeGraph, paths: string[]) => {\n  paths.forEach(path => {\n    const parts = path.split('/');\n    let currentPath = '';\n    let parentId = '';\n\n    parts.forEach((part, index) => {\n      const isFile = index === parts.length - 1;\n      const label = isFile ? 'File' : 'Folder';\n\n      currentPath = currentPath ? `${currentPath}/${part}` : part;\n\n      const nodeId = generateId(label, currentPath);\n\n      const node: GraphNode = {\n        id: nodeId,\n        label: label,\n        properties: {\n          name: part,\n          filePath: currentPath,\n        },\n      };\n      graph.addNode(node);\n\n      if (parentId) {\n        const relId = generateId('CONTAINS', `${parentId}->${nodeId}`);\n\n        const relationship: GraphRelationship = {\n          id: relId,\n          type: 'CONTAINS',\n          sourceId: parentId,\n          targetId: nodeId,\n          confidence: 1.0,\n          reason: '',\n        };\n\n        graph.addRelationship(relationship);\n      }\n\n      parentId = nodeId;\n    });\n  });\n};\n"
  },
  {
    "path": "gitnexus-web/src/core/ingestion/symbol-table.ts",
    "content": "export interface SymbolDefinition {\n  nodeId: string;\n  filePath: string;\n  type: string; // 'Function', 'Class', etc.\n}\n\nexport interface SymbolTable {\n  /**\n   * Register a new symbol definition\n   */\n  add: (filePath: string, name: string, nodeId: string, type: string) => void;\n  \n  /**\n   * High Confidence: Look for a symbol specifically inside a file\n   * Returns the Node ID if found\n   */\n  lookupExact: (filePath: string, name: string) => string | undefined;\n  \n  /**\n   * Low Confidence: Look for a symbol anywhere in the project\n   * Used when imports are missing or for framework magic\n   */\n  lookupFuzzy: (name: string) => SymbolDefinition[];\n  \n  /**\n   * Debugging: See how many symbols are tracked\n   */\n  getStats: () => { fileCount: number; globalSymbolCount: number };\n  \n  /**\n   * Cleanup memory\n   */\n  clear: () => void;\n}\n\nexport const createSymbolTable = (): SymbolTable => {\n  // 1. File-Specific Index (The \"Good\" one)\n  // Structure: FilePath -> (SymbolName -> NodeID)\n  const fileIndex = new Map<string, Map<string, string>>();\n\n  // 2. Global Reverse Index (The \"Backup\")\n  // Structure: SymbolName -> [List of Definitions]\n  const globalIndex = new Map<string, SymbolDefinition[]>();\n\n  const add = (filePath: string, name: string, nodeId: string, type: string) => {\n    // A. Add to File Index\n    if (!fileIndex.has(filePath)) {\n      fileIndex.set(filePath, new Map());\n    }\n    fileIndex.get(filePath)!.set(name, nodeId);\n\n    // B. 
Add to Global Index\n    if (!globalIndex.has(name)) {\n      globalIndex.set(name, []);\n    }\n    globalIndex.get(name)!.push({ nodeId, filePath, type });\n  };\n\n  const lookupExact = (filePath: string, name: string): string | undefined => {\n    const fileSymbols = fileIndex.get(filePath);\n    if (!fileSymbols) return undefined;\n    return fileSymbols.get(name);\n  };\n\n  const lookupFuzzy = (name: string): SymbolDefinition[] => {\n    return globalIndex.get(name) || [];\n  };\n\n  const getStats = () => ({\n    fileCount: fileIndex.size,\n    globalSymbolCount: globalIndex.size\n  });\n\n  const clear = () => {\n    fileIndex.clear();\n    globalIndex.clear();\n  };\n\n  return { add, lookupExact, lookupFuzzy, getStats, clear };\n};"
  },
  {
    "path": "gitnexus-web/src/core/ingestion/tree-sitter-queries.ts",
    "content": "import { SupportedLanguages } from '../../config/supported-languages';\n\n/* \n * Tree-sitter queries for extracting code definitions.\n * \n * Note: Different grammars (typescript vs tsx vs javascript) may have\n * slightly different node types. These queries are designed to be \n * compatible with the standard tree-sitter grammars.\n */\n\n// TypeScript queries - works with tree-sitter-typescript\nexport const TYPESCRIPT_QUERIES = `\n(class_declaration\n  name: (type_identifier) @name) @definition.class\n\n(interface_declaration\n  name: (type_identifier) @name) @definition.interface\n\n(function_declaration\n  name: (identifier) @name) @definition.function\n\n(method_definition\n  name: (property_identifier) @name) @definition.method\n\n(lexical_declaration\n  (variable_declarator\n    name: (identifier) @name\n    value: (arrow_function))) @definition.function\n\n(lexical_declaration\n  (variable_declarator\n    name: (identifier) @name\n    value: (function_expression))) @definition.function\n\n(export_statement\n  declaration: (lexical_declaration\n    (variable_declarator\n      name: (identifier) @name\n      value: (arrow_function)))) @definition.function\n\n(export_statement\n  declaration: (lexical_declaration\n    (variable_declarator\n      name: (identifier) @name\n      value: (function_expression)))) @definition.function\n\n(import_statement\n  source: (string) @import.source) @import\n\n(call_expression\n  function: (identifier) @call.name) @call\n\n(call_expression\n  function: (member_expression\n    property: (property_identifier) @call.name)) @call\n\n; Heritage queries - class extends\n(class_declaration\n  name: (type_identifier) @heritage.class\n  (class_heritage\n    (extends_clause\n      value: (identifier) @heritage.extends))) @heritage\n\n; Heritage queries - class implements interface\n(class_declaration\n  name: (type_identifier) @heritage.class\n  (class_heritage\n    (implements_clause\n      (type_identifier) 
@heritage.implements))) @heritage.impl\n`;\n\n// JavaScript queries - works with tree-sitter-javascript  \nexport const JAVASCRIPT_QUERIES = `\n(class_declaration\n  name: (identifier) @name) @definition.class\n\n(function_declaration\n  name: (identifier) @name) @definition.function\n\n(method_definition\n  name: (property_identifier) @name) @definition.method\n\n(lexical_declaration\n  (variable_declarator\n    name: (identifier) @name\n    value: (arrow_function))) @definition.function\n\n(lexical_declaration\n  (variable_declarator\n    name: (identifier) @name\n    value: (function_expression))) @definition.function\n\n(export_statement\n  declaration: (lexical_declaration\n    (variable_declarator\n      name: (identifier) @name\n      value: (arrow_function)))) @definition.function\n\n(export_statement\n  declaration: (lexical_declaration\n    (variable_declarator\n      name: (identifier) @name\n      value: (function_expression)))) @definition.function\n\n(import_statement\n  source: (string) @import.source) @import\n\n(call_expression\n  function: (identifier) @call.name) @call\n\n(call_expression\n  function: (member_expression\n    property: (property_identifier) @call.name)) @call\n\n; Heritage queries - class extends (JavaScript uses different AST than TypeScript)\n; In tree-sitter-javascript, class_heritage directly contains the parent identifier\n(class_declaration\n  name: (identifier) @heritage.class\n  (class_heritage\n    (identifier) @heritage.extends)) @heritage\n`;\n\n// Python queries - works with tree-sitter-python\nexport const PYTHON_QUERIES = `\n(class_definition\n  name: (identifier) @name) @definition.class\n\n(function_definition\n  name: (identifier) @name) @definition.function\n\n(import_statement\n  name: (dotted_name) @import.source) @import\n\n(import_from_statement\n  module_name: (dotted_name) @import.source) @import\n\n(call\n  function: (identifier) @call.name) @call\n\n(call\n  function: (attribute\n    attribute: 
(identifier) @call.name)) @call\n\n; Heritage queries - Python class inheritance\n(class_definition\n  name: (identifier) @heritage.class\n  superclasses: (argument_list\n    (identifier) @heritage.extends)) @heritage\n`;\n\n// Java queries - works with tree-sitter-java\nexport const JAVA_QUERIES = `\n; Classes, Interfaces, Enums, Annotations\n(class_declaration name: (identifier) @name) @definition.class\n(interface_declaration name: (identifier) @name) @definition.interface\n(enum_declaration name: (identifier) @name) @definition.enum\n(annotation_type_declaration name: (identifier) @name) @definition.annotation\n\n; Methods & Constructors\n(method_declaration name: (identifier) @name) @definition.method\n(constructor_declaration name: (identifier) @name) @definition.constructor\n\n; Imports - capture any import declaration child as source\n(import_declaration (_) @import.source) @import\n\n; Calls\n(method_invocation name: (identifier) @call.name) @call\n(method_invocation object: (_) name: (identifier) @call.name) @call\n\n; Heritage - extends class\n(class_declaration name: (identifier) @heritage.class\n  (superclass (type_identifier) @heritage.extends)) @heritage\n\n; Heritage - implements interfaces\n(class_declaration name: (identifier) @heritage.class\n  (super_interfaces (type_list (type_identifier) @heritage.implements))) @heritage.impl\n`;\n\n// C queries - works with tree-sitter-c\nexport const C_QUERIES = `\n; Functions\n(function_definition declarator: (function_declarator declarator: (identifier) @name)) @definition.function\n(declaration declarator: (function_declarator declarator: (identifier) @name)) @definition.function\n\n; Structs, Unions, Enums, Typedefs\n(struct_specifier name: (type_identifier) @name) @definition.struct\n(union_specifier name: (type_identifier) @name) @definition.union\n(enum_specifier name: (type_identifier) @name) @definition.enum\n(type_definition declarator: (type_identifier) @name) @definition.typedef\n\n; 
Macros\n(preproc_function_def name: (identifier) @name) @definition.macro\n(preproc_def name: (identifier) @name) @definition.macro\n\n; Includes\n(preproc_include path: (_) @import.source) @import\n\n; Calls\n(call_expression function: (identifier) @call.name) @call\n(call_expression function: (field_expression field: (field_identifier) @call.name)) @call\n`;\n\n// Go queries - works with tree-sitter-go\nexport const GO_QUERIES = `\n; Functions & Methods\n(function_declaration name: (identifier) @name) @definition.function\n(method_declaration name: (field_identifier) @name) @definition.method\n\n; Types\n(type_declaration (type_spec name: (type_identifier) @name type: (struct_type))) @definition.struct\n(type_declaration (type_spec name: (type_identifier) @name type: (interface_type))) @definition.interface\n(type_declaration (type_spec name: (type_identifier) @name)) @definition.type\n\n; Imports\n(import_declaration (import_spec path: (interpreted_string_literal) @import.source)) @import\n(import_declaration (import_spec_list (import_spec path: (interpreted_string_literal) @import.source))) @import\n\n; Calls\n(call_expression function: (identifier) @call.name) @call\n(call_expression function: (selector_expression field: (field_identifier) @call.name)) @call\n`;\n\n// C++ queries - works with tree-sitter-cpp\nexport const CPP_QUERIES = `\n; Classes, Structs, Namespaces\n(class_specifier name: (type_identifier) @name) @definition.class\n(struct_specifier name: (type_identifier) @name) @definition.struct\n(namespace_definition name: (namespace_identifier) @name) @definition.namespace\n(enum_specifier name: (type_identifier) @name) @definition.enum\n\n; Functions & Methods\n(function_definition declarator: (function_declarator declarator: (identifier) @name)) @definition.function\n(function_definition declarator: (function_declarator declarator: (qualified_identifier name: (identifier) @name))) @definition.method\n\n; Templates\n(template_declaration 
(class_specifier name: (type_identifier) @name)) @definition.template\n(template_declaration (function_definition declarator: (function_declarator declarator: (identifier) @name))) @definition.template\n\n; Includes\n(preproc_include path: (_) @import.source) @import\n\n; Calls\n(call_expression function: (identifier) @call.name) @call\n(call_expression function: (field_expression field: (field_identifier) @call.name)) @call\n(call_expression function: (qualified_identifier name: (identifier) @call.name)) @call\n(call_expression function: (template_function name: (identifier) @call.name)) @call\n\n; Heritage\n(class_specifier name: (type_identifier) @heritage.class\n  (base_class_clause (type_identifier) @heritage.extends)) @heritage\n(class_specifier name: (type_identifier) @heritage.class\n  (base_class_clause (access_specifier) (type_identifier) @heritage.extends)) @heritage\n`;\n\n// C# queries - works with tree-sitter-c-sharp\nexport const CSHARP_QUERIES = `\n; Types\n(class_declaration name: (identifier) @name) @definition.class\n(interface_declaration name: (identifier) @name) @definition.interface\n(struct_declaration name: (identifier) @name) @definition.struct\n(enum_declaration name: (identifier) @name) @definition.enum\n(record_declaration name: (identifier) @name) @definition.record\n(delegate_declaration name: (identifier) @name) @definition.delegate\n\n; Namespaces\n(namespace_declaration name: (identifier) @name) @definition.namespace\n(namespace_declaration name: (qualified_name) @name) @definition.namespace\n\n; Methods & Properties\n(method_declaration name: (identifier) @name) @definition.method\n(local_function_statement name: (identifier) @name) @definition.function\n(constructor_declaration name: (identifier) @name) @definition.constructor\n(property_declaration name: (identifier) @name) @definition.property\n\n; Using\n(using_directive (qualified_name) @import.source) @import\n(using_directive (identifier) @import.source) @import\n\n; 
Calls\n(invocation_expression function: (identifier) @call.name) @call\n(invocation_expression function: (member_access_expression name: (identifier) @call.name)) @call\n\n; Heritage\n(class_declaration name: (identifier) @heritage.class\n  (base_list (simple_base_type (identifier) @heritage.extends))) @heritage\n(class_declaration name: (identifier) @heritage.class\n  (base_list (simple_base_type (generic_name (identifier) @heritage.extends)))) @heritage\n`;\n\n// Rust queries - works with tree-sitter-rust\nexport const RUST_QUERIES = `\n; Functions & Items\n(function_item name: (identifier) @name) @definition.function\n(struct_item name: (type_identifier) @name) @definition.struct\n(enum_item name: (type_identifier) @name) @definition.enum\n(trait_item name: (type_identifier) @name) @definition.trait\n(impl_item type: (type_identifier) @name) @definition.impl\n(mod_item name: (identifier) @name) @definition.module\n\n; Type aliases, const, static, macros\n(type_item name: (type_identifier) @name) @definition.type\n(const_item name: (identifier) @name) @definition.const\n(static_item name: (identifier) @name) @definition.static\n(macro_definition name: (identifier) @name) @definition.macro\n\n; Use statements\n(use_declaration argument: (_) @import.source) @import\n\n; Calls\n(call_expression function: (identifier) @call.name) @call\n(call_expression function: (field_expression field: (field_identifier) @call.name)) @call\n(call_expression function: (scoped_identifier name: (identifier) @call.name)) @call\n(call_expression function: (generic_function function: (identifier) @call.name)) @call\n\n; Heritage (trait implementation)\n(impl_item trait: (type_identifier) @heritage.trait type: (type_identifier) @heritage.class) @heritage\n(impl_item trait: (generic_type type: (type_identifier) @heritage.trait) type: (type_identifier) @heritage.class) @heritage\n`;\n\n// PHP queries - works with tree-sitter-php (php_only grammar)\nexport const PHP_QUERIES = `\n; ── 
Namespace ────────────────────────────────────────────────────────────────\n(namespace_definition\n  name: (namespace_name) @name) @definition.namespace\n\n; ── Classes ──────────────────────────────────────────────────────────────────\n(class_declaration\n  name: (name) @name) @definition.class\n\n; ── Interfaces ───────────────────────────────────────────────────────────────\n(interface_declaration\n  name: (name) @name) @definition.interface\n\n; ── Traits ───────────────────────────────────────────────────────────────────\n(trait_declaration\n  name: (name) @name) @definition.trait\n\n; ── Enums (PHP 8.1) ──────────────────────────────────────────────────────────\n(enum_declaration\n  name: (name) @name) @definition.enum\n\n; ── Top-level functions ───────────────────────────────────────────────────────\n(function_definition\n  name: (name) @name) @definition.function\n\n; ── Methods (including constructors) ─────────────────────────────────────────\n(method_declaration\n  name: (name) @name) @definition.method\n\n; ── Class properties (including Eloquent $fillable, $casts, etc.) 
────────────\n(property_declaration\n  (property_element\n    (variable_name\n      (name) @name))) @definition.property\n\n; ── Imports: use statements ──────────────────────────────────────────────────\n; Simple: use App\\\\Models\\\\User;\n(namespace_use_declaration\n  (namespace_use_clause\n    (qualified_name) @import.source)) @import\n\n; ── Function/method calls ────────────────────────────────────────────────────\n; Regular function call: foo()\n(function_call_expression\n  function: (name) @call.name) @call\n\n; Method call: $obj->method()\n(member_call_expression\n  name: (name) @call.name) @call\n\n; Nullsafe method call: $obj?->method()\n(nullsafe_member_call_expression\n  name: (name) @call.name) @call\n\n; Static call: Foo::bar() (php_only uses scoped_call_expression)\n(scoped_call_expression\n  name: (name) @call.name) @call\n\n; ── Heritage: extends ────────────────────────────────────────────────────────\n(class_declaration\n  name: (name) @heritage.class\n  (base_clause\n    [(name) (qualified_name)] @heritage.extends)) @heritage\n\n; ── Heritage: implements ─────────────────────────────────────────────────────\n(class_declaration\n  name: (name) @heritage.class\n  (class_interface_clause\n    [(name) (qualified_name)] @heritage.implements)) @heritage.impl\n\n; ── Heritage: use trait (must capture enclosing class name) ──────────────────\n(class_declaration\n  name: (name) @heritage.class\n  body: (declaration_list\n    (use_declaration\n      [(name) (qualified_name)] @heritage.trait))) @heritage\n`;\n\n// Ruby queries - works with tree-sitter-ruby\n// NOTE: Ruby uses `call` for require, include, extend, prepend, attr_* etc.\n// These are all captured as @call and routed in JS post-processing:\n//   - require/require_relative → import extraction\n//   - include/extend/prepend → heritage (mixin) extraction\n//   - attr_accessor/attr_reader/attr_writer → property definition extraction\n//   - everything else → regular call extraction\nexport const 
RUBY_QUERIES = `\n; ── Modules ──────────────────────────────────────────────────────────────────\n(module\n  name: (constant) @name) @definition.module\n\n; ── Classes ──────────────────────────────────────────────────────────────────\n(class\n  name: (constant) @name) @definition.class\n\n; ── Instance methods ─────────────────────────────────────────────────────────\n(method\n  name: (identifier) @name) @definition.method\n\n; ── Singleton (class-level) methods ──────────────────────────────────────────\n(singleton_method\n  name: (identifier) @name) @definition.function\n\n; ── All calls (require, include, attr_*, and regular calls routed in JS) ─────\n(call\n  method: (identifier) @call.name) @call\n\n; ── Heritage: class < SuperClass ─────────────────────────────────────────────\n(class\n  name: (constant) @heritage.class\n  superclass: (superclass\n    (constant) @heritage.extends)) @heritage`;\n    \n// Swift queries - works with tree-sitter-swift\nexport const SWIFT_QUERIES = `\n; Classes\n(class_declaration \"class\" name: (type_identifier) @name) @definition.class\n\n; Structs\n(class_declaration \"struct\" name: (type_identifier) @name) @definition.struct\n\n; Enums\n(class_declaration \"enum\" name: (type_identifier) @name) @definition.enum\n\n; Extensions (mapped to class — no dedicated label in schema)\n(class_declaration \"extension\" name: (user_type (type_identifier) @name)) @definition.class\n\n; Actors\n(class_declaration \"actor\" name: (type_identifier) @name) @definition.class\n\n; Protocols (mapped to interface)\n(protocol_declaration name: (type_identifier) @name) @definition.interface\n\n; Type aliases\n(typealias_declaration name: (type_identifier) @name) @definition.type\n\n; Functions (top-level and methods)\n(function_declaration name: (simple_identifier) @name) @definition.function\n\n; Protocol method declarations\n(protocol_function_declaration name: (simple_identifier) @name) @definition.method\n\n; Initializers\n(init_declaration) 
@definition.constructor\n\n; Properties (stored and computed)\n(property_declaration (pattern (simple_identifier) @name)) @definition.property\n\n; Imports\n(import_declaration (identifier (simple_identifier) @import.source)) @import\n\n; Calls - direct function calls\n(call_expression (simple_identifier) @call.name) @call\n\n; Calls - member/navigation calls (obj.method())\n(call_expression (navigation_expression (navigation_suffix (simple_identifier) @call.name))) @call\n\n; Heritage - class/struct/enum inheritance and protocol conformance\n(class_declaration name: (type_identifier) @heritage.class\n  (inheritance_specifier inherits_from: (user_type (type_identifier) @heritage.extends))) @heritage\n\n; Heritage - protocol inheritance\n(protocol_declaration name: (type_identifier) @heritage.class\n  (inheritance_specifier inherits_from: (user_type (type_identifier) @heritage.extends))) @heritage\n`;\n\nexport const LANGUAGE_QUERIES: Record<SupportedLanguages, string> = {\n  [SupportedLanguages.TypeScript]: TYPESCRIPT_QUERIES,\n  [SupportedLanguages.JavaScript]: JAVASCRIPT_QUERIES,\n  [SupportedLanguages.Python]: PYTHON_QUERIES,\n  [SupportedLanguages.Java]: JAVA_QUERIES,\n  [SupportedLanguages.C]: C_QUERIES,\n  [SupportedLanguages.Go]: GO_QUERIES,\n  [SupportedLanguages.CPlusPlus]: CPP_QUERIES,\n  [SupportedLanguages.CSharp]: CSHARP_QUERIES,\n  [SupportedLanguages.Rust]: RUST_QUERIES,\n  [SupportedLanguages.PHP]: PHP_QUERIES,\n  [SupportedLanguages.Ruby]: RUBY_QUERIES,\n  [SupportedLanguages.Kotlin]: '', // Kotlin WASM parser not yet available for web\n  [SupportedLanguages.Swift]: SWIFT_QUERIES,\n};\n "
  },
  {
    "path": "gitnexus-web/src/core/ingestion/utils.ts",
    "content": "import { SupportedLanguages } from '../../config/supported-languages';\n\n/** Ruby extensionless filenames recognised as Ruby source */\nconst RUBY_EXTENSIONLESS_FILES = new Set(['Rakefile', 'Gemfile', 'Guardfile', 'Vagrantfile', 'Brewfile']);\n\n/**\n * Map file extension to SupportedLanguage enum\n */\nexport const getLanguageFromFilename = (filename: string): SupportedLanguages | null => {\n  // TypeScript (including TSX)\n  if (filename.endsWith('.tsx')) return SupportedLanguages.TypeScript;\n  if (filename.endsWith('.ts')) return SupportedLanguages.TypeScript;\n  // JavaScript (including JSX)\n  if (filename.endsWith('.jsx')) return SupportedLanguages.JavaScript;\n  if (filename.endsWith('.js')) return SupportedLanguages.JavaScript;\n  // Python\n  if (filename.endsWith('.py')) return SupportedLanguages.Python;\n  // Java\n  if (filename.endsWith('.java')) return SupportedLanguages.Java;\n  // C (source and headers)\n  if (filename.endsWith('.c') || filename.endsWith('.h')) return SupportedLanguages.C;\n  // C++ (all common extensions)\n  if (filename.endsWith('.cpp') || filename.endsWith('.cc') || filename.endsWith('.cxx') ||\n      filename.endsWith('.hpp') || filename.endsWith('.hxx') || filename.endsWith('.hh')) return SupportedLanguages.CPlusPlus;\n  // C#\n  if (filename.endsWith('.cs')) return SupportedLanguages.CSharp;\n  // Go\n  if (filename.endsWith('.go')) return SupportedLanguages.Go;\n  // Rust\n  if (filename.endsWith('.rs')) return SupportedLanguages.Rust;\n  // PHP (all common extensions)\n  if (filename.endsWith('.php') || filename.endsWith('.phtml') ||\n      filename.endsWith('.php3') || filename.endsWith('.php4') ||\n      filename.endsWith('.php5') || filename.endsWith('.php8')) {\n    return SupportedLanguages.PHP;\n  }\n  // Ruby (extensions)\n  if (filename.endsWith('.rb') || filename.endsWith('.rake') || filename.endsWith('.gemspec')) {\n    return SupportedLanguages.Ruby;\n  }\n  // Ruby (extensionless files)\n  const 
basename = filename.split('/').pop() || filename;\n  if (RUBY_EXTENSIONLESS_FILES.has(basename)) {\n    return SupportedLanguages.Ruby;\n  }\n  // Swift\n  if (filename.endsWith('.swift')) return SupportedLanguages.Swift;\n  return null;\n};\n\n"
  },
  {
    "path": "gitnexus-web/src/core/lbug/csv-generator.ts",
    "content": "/**\n * CSV Generator for LadybugDB Hybrid Schema\n * \n * Generates separate CSV files for each node table and one relation CSV.\n * This enables efficient bulk loading via COPY FROM for hybrid schema.\n * \n * RFC 4180 Compliant:\n * - Fields containing commas, double quotes, or newlines are enclosed in double quotes\n * - Double quotes within fields are escaped by doubling them (\"\")\n * - All fields are consistently quoted for safety with code content\n */\n\nimport { KnowledgeGraph, GraphNode, NodeLabel } from '../graph/types';\nimport { NODE_TABLES, NodeTableName } from './schema';\n\n// ============================================================================\n// CSV ESCAPE UTILITIES\n// ============================================================================\n\n/**\n * Sanitize string to ensure valid UTF-8 and safe CSV content for LadybugDB\n * Removes or replaces invalid characters that would break CSV parsing.\n * \n * Critical: LadybugDB's CSV parser can misinterpret \\r\\n inside quoted fields.\n * We normalize all line endings to \\n only.\n */\nconst sanitizeUTF8 = (str: string): string => {\n  return str\n    .replace(/\\r\\n/g, '\\n')          // Normalize Windows line endings first\n    .replace(/\\r/g, '\\n')            // Normalize remaining \\r to \\n\n    .replace(/[\\x00-\\x08\\x0B\\x0C\\x0E-\\x1F\\x7F]/g, '') // Remove control chars except \\t \\n\n    .replace(/[\\uD800-\\uDFFF]/g, '') // Remove surrogate pairs (invalid standalone)\n    .replace(/[\\uFFFE\\uFFFF]/g, ''); // Remove BOM and special chars\n};\n\n/**\n * RFC 4180 compliant CSV field escaping\n * ALWAYS wraps in double quotes for safety with code content\n */\nconst escapeCSVField = (value: string | number | undefined | null): string => {\n  if (value === undefined || value === null) {\n    return '\"\"';\n  }\n  let str = String(value);\n  str = sanitizeUTF8(str);\n  return `\"${str.replace(/\"/g, '\"\"')}\"`;\n};\n\n/**\n * Escape a numeric value (no 
quotes needed for numbers)\n */\nconst escapeCSVNumber = (value: number | undefined | null, defaultValue: number = -1): string => {\n  if (value === undefined || value === null) {\n    return String(defaultValue);\n  }\n  return String(value);\n};\n\n// ============================================================================\n// CONTENT EXTRACTION\n// ============================================================================\n\n/**\n * Check if content looks like binary data\n */\nconst isBinaryContent = (content: string): boolean => {\n  if (!content || content.length === 0) return false;\n  const sample = content.slice(0, 1000);\n  let nonPrintable = 0;\n  for (let i = 0; i < sample.length; i++) {\n    const code = sample.charCodeAt(i);\n    if ((code < 9) || (code > 13 && code < 32) || code === 127) {\n      nonPrintable++;\n    }\n  }\n  return (nonPrintable / sample.length) > 0.1;\n};\n\n/**\n * Extract code content for a node\n */\nconst extractContent = (\n  node: GraphNode,\n  fileContents: Map<string, string>\n): string => {\n  const filePath = node.properties.filePath;\n  const content = fileContents.get(filePath);\n  \n  if (!content) return '';\n  if (node.label === 'Folder') return '';\n  if (isBinaryContent(content)) return '[Binary file - content not stored]';\n  \n  // For File nodes, return content (limited)\n  if (node.label === 'File') {\n    const MAX_FILE_CONTENT = 10000;\n    if (content.length > MAX_FILE_CONTENT) {\n      return content.slice(0, MAX_FILE_CONTENT) + '\\n... 
[truncated]';\n    }\n    return content;\n  }\n  \n  // For code elements, extract the relevant lines with context\n  const startLine = node.properties.startLine;\n  const endLine = node.properties.endLine;\n  \n  if (startLine === undefined || endLine === undefined) return '';\n  \n  const lines = content.split('\\n');\n  const contextLines = 2;\n  const start = Math.max(0, startLine - contextLines);\n  const end = Math.min(lines.length - 1, endLine + contextLines);\n  \n  const snippet = lines.slice(start, end + 1).join('\\n');\n  const MAX_SNIPPET = 5000;\n  if (snippet.length > MAX_SNIPPET) {\n    return snippet.slice(0, MAX_SNIPPET) + '\\n... [truncated]';\n  }\n  return snippet;\n};\n\n// ============================================================================\n// CSV GENERATION RESULT TYPE\n// ============================================================================\n\nexport interface CSVData {\n  nodes: Map<NodeTableName, string>;\n  relCSV: string;  // Single relation CSV with from,to,type,confidence,reason columns\n}\n\n// ============================================================================\n// NODE CSV GENERATORS\n// ============================================================================\n\n/**\n * Generate CSV for File nodes\n * Headers: id,name,filePath,content\n */\nconst generateFileCSV = (nodes: GraphNode[], fileContents: Map<string, string>): string => {\n  const headers = ['id', 'name', 'filePath', 'content'];\n  const rows: string[] = [headers.join(',')];\n  \n  for (const node of nodes) {\n    if (node.label !== 'File') continue;\n    const content = extractContent(node, fileContents);\n    rows.push([\n      escapeCSVField(node.id),\n      escapeCSVField(node.properties.name || ''),\n      escapeCSVField(node.properties.filePath || ''),\n      escapeCSVField(content),\n    ].join(','));\n  }\n  \n  return rows.join('\\n');\n};\n\n/**\n * Generate CSV for Folder nodes\n * Headers: id,name,filePath\n */\nconst 
generateFolderCSV = (nodes: GraphNode[]): string => {\n  const headers = ['id', 'name', 'filePath'];\n  const rows: string[] = [headers.join(',')];\n  \n  for (const node of nodes) {\n    if (node.label !== 'Folder') continue;\n    rows.push([\n      escapeCSVField(node.id),\n      escapeCSVField(node.properties.name || ''),\n      escapeCSVField(node.properties.filePath || ''),\n    ].join(','));\n  }\n  \n  return rows.join('\\n');\n};\n\n/**\n * Generate CSV for code element nodes (Function, Class, Interface, Method, CodeElement)\n * Headers: id,name,filePath,startLine,endLine,isExported,content\n */\nconst generateCodeElementCSV = (\n  nodes: GraphNode[],\n  label: NodeLabel,\n  fileContents: Map<string, string>\n): string => {\n  const headers = ['id', 'name', 'filePath', 'startLine', 'endLine', 'isExported', 'content'];\n  const rows: string[] = [headers.join(',')];\n  \n  for (const node of nodes) {\n    if (node.label !== label) continue;\n    const content = extractContent(node, fileContents);\n    rows.push([\n      escapeCSVField(node.id),\n      escapeCSVField(node.properties.name || ''),\n      escapeCSVField(node.properties.filePath || ''),\n      escapeCSVNumber(node.properties.startLine, -1),\n      escapeCSVNumber(node.properties.endLine, -1),\n      node.properties.isExported ? 
'true' : 'false',\n      escapeCSVField(content),\n    ].join(','));\n  }\n  \n  return rows.join('\\n');\n};\n\n/**\n * Generate CSV for Community nodes (from Leiden algorithm)\n * Headers: id,label,heuristicLabel,keywords,description,enrichedBy,cohesion,symbolCount\n */\nconst generateCommunityCSV = (nodes: GraphNode[]): string => {\n  const headers = ['id', 'label', 'heuristicLabel', 'keywords', 'description', 'enrichedBy', 'cohesion', 'symbolCount'];\n  const rows: string[] = [headers.join(',')];\n  \n  for (const node of nodes) {\n    if (node.label !== 'Community') continue;\n    \n    // Handle keywords array - convert to LadybugDB array format\n    const keywords = (node.properties as any).keywords || [];\n    const keywordsStr = `[${keywords.map((k: string) => `'${k.replace(/'/g, \"''\")}'`).join(',')}]`;\n    \n    rows.push([\n      escapeCSVField(node.id),\n      escapeCSVField(node.properties.name || ''),  // label is stored in name\n      escapeCSVField(node.properties.heuristicLabel || ''),\n      escapeCSVField(keywordsStr),  // Array format for LadybugDB - contains commas, so it needs CSV escaping (as with Process communities)\n      escapeCSVField((node.properties as any).description || ''),\n      escapeCSVField((node.properties as any).enrichedBy || 'heuristic'),\n      escapeCSVNumber(node.properties.cohesion, 0),\n      escapeCSVNumber(node.properties.symbolCount, 0),\n    ].join(','));\n  }\n  \n  return rows.join('\\n');\n};\n\n/**\n * Generate CSV for Process nodes\n * Headers: id,label,heuristicLabel,processType,stepCount,communities,entryPointId,terminalId\n */\nconst generateProcessCSV = (nodes: GraphNode[]): string => {\n  const headers = ['id', 'label', 'heuristicLabel', 'processType', 'stepCount', 'communities', 'entryPointId', 'terminalId'];\n  const rows: string[] = [headers.join(',')];\n  \n  for (const node of nodes) {\n    if (node.label !== 'Process') continue;\n    \n    // Handle communities array (string[])\n    const communities = (node.properties as any).communities || [];\n    const communitiesStr = 
`[${communities.map((c: string) => `'${c.replace(/'/g, \"''\")}'`).join(',')}]`;\n    \n    rows.push([\n      escapeCSVField(node.id),\n      escapeCSVField(node.properties.name || ''), // label stores name\n      escapeCSVField((node.properties as any).heuristicLabel || ''),\n      escapeCSVField((node.properties as any).processType || ''),\n      escapeCSVNumber((node.properties as any).stepCount, 0),\n      escapeCSVField(communitiesStr), // Needs CSV escaping because it contains commas!\n      escapeCSVField((node.properties as any).entryPointId || ''),\n      escapeCSVField((node.properties as any).terminalId || ''),\n    ].join(','));\n  }\n  \n  return rows.join('\\n');\n};\n\n/**\n * Generate CSV for the single CodeRelation table\n * Headers: from,to,type,confidence,reason\n * \n * confidence: 0-1 score for CALLS edges (how sure are we about the target?)\n * reason: 'import-resolved' | 'same-file' | 'fuzzy-global' (or empty for non-CALLS)\n */\nconst generateRelationCSV = (graph: KnowledgeGraph): string => {\n  const headers = ['from', 'to', 'type', 'confidence', 'reason', 'step'];\n  const rows: string[] = [headers.join(',')];\n  \n  for (const rel of graph.relationships) {\n    rows.push([\n      escapeCSVField(rel.sourceId),\n      escapeCSVField(rel.targetId),\n      escapeCSVField(rel.type),\n      escapeCSVNumber(rel.confidence, 1.0),\n      escapeCSVField(rel.reason),\n      escapeCSVNumber((rel as any).step, 0),\n    ].join(','));\n  }\n  \n  return rows.join('\\n');\n};\n\n// ============================================================================\n// MAIN CSV GENERATION FUNCTION\n// ============================================================================\n\n/**\n * Generate all CSV data for hybrid schema bulk loading\n * Returns Maps of node table name -> CSV content, and single relation CSV\n */\nexport const generateAllCSVs = (\n  graph: KnowledgeGraph,\n  fileContents: Map<string, string>\n): CSVData => {\n  const nodes = 
Array.from(graph.nodes);\n  \n  // Generate node CSVs\n  const nodeCSVs = new Map<NodeTableName, string>();\n  nodeCSVs.set('File', generateFileCSV(nodes, fileContents));\n  nodeCSVs.set('Folder', generateFolderCSV(nodes));\n  nodeCSVs.set('Function', generateCodeElementCSV(nodes, 'Function', fileContents));\n  nodeCSVs.set('Class', generateCodeElementCSV(nodes, 'Class', fileContents));\n  nodeCSVs.set('Interface', generateCodeElementCSV(nodes, 'Interface', fileContents));\n  nodeCSVs.set('Method', generateCodeElementCSV(nodes, 'Method', fileContents));\n  nodeCSVs.set('CodeElement', generateCodeElementCSV(nodes, 'CodeElement', fileContents));\n  nodeCSVs.set('Community', generateCommunityCSV(nodes));\n  nodeCSVs.set('Process', generateProcessCSV(nodes));\n  \n  // Generate single relation CSV\n  const relCSV = generateRelationCSV(graph);\n  \n  return { nodes: nodeCSVs, relCSV };\n};\n\n"
  },
  {
    "path": "gitnexus-web/src/core/lbug/lbug-adapter.ts",
    "content": "/**\n * LadybugDB Adapter\n *\n * Manages the LadybugDB WASM instance for client-side graph database operations.\n * Uses the \"Snapshot / Bulk Load\" pattern with COPY FROM for performance.\n *\n * Multi-table schema: separate tables for File, Function, Class, etc.\n */\n\nimport { KnowledgeGraph } from '../graph/types';\nimport {\n  NODE_TABLES,\n  REL_TABLE_NAME,\n  SCHEMA_QUERIES,\n  EMBEDDING_TABLE_NAME,\n  NodeTableName,\n} from './schema';\nimport { generateAllCSVs } from './csv-generator';\n\n// Holds the reference to the dynamically loaded module\nlet lbug: any = null;\nlet db: any = null;\nlet conn: any = null;\n\n/**\n * Initialize LadybugDB WASM module and create in-memory database\n */\nexport const initLbug = async () => {\n  if (conn) return { db, conn, lbug };\n\n  try {\n    if (import.meta.env.DEV) console.log('🚀 Initializing LadybugDB...');\n\n    // 1. Dynamic Import (Fixes the \"not a function\" bundler issue)\n    const lbugModule = await import('@ladybugdb/wasm-core');\n\n    // 2. Handle Vite/Webpack \"default\" wrapping\n    lbug = lbugModule.default || lbugModule;\n\n    // 3. Initialize WASM\n    await lbug.init();\n\n    // 4. Create Database with 512MB buffer manager\n    const BUFFER_POOL_SIZE = 512 * 1024 * 1024; // 512MB\n    db = new lbug.Database(':memory:', BUFFER_POOL_SIZE);\n    conn = new lbug.Connection(db);\n\n    if (import.meta.env.DEV) console.log('✅ LadybugDB WASM Initialized');\n\n    // 5. 
Initialize Schema (all node tables, then rel tables, then embedding table)\n    for (const schemaQuery of SCHEMA_QUERIES) {\n      try {\n        await conn.query(schemaQuery);\n      } catch (e) {\n        // Schema might already exist, skip\n        if (import.meta.env.DEV) {\n          console.warn('Schema creation skipped (may already exist):', e);\n        }\n      }\n    }\n\n    if (import.meta.env.DEV) console.log('✅ LadybugDB Multi-Table Schema Created');\n\n    return { db, conn, lbug };\n  } catch (error) {\n    if (import.meta.env.DEV) console.error('❌ LadybugDB Initialization Failed:', error);\n    throw error;\n  }\n};\n\n/**\n * Load a KnowledgeGraph into LadybugDB using COPY FROM (bulk load)\n * Uses batched CSV writes and COPY statements for optimal performance\n */\nexport const loadGraphToLbug = async (\n  graph: KnowledgeGraph,\n  fileContents: Map<string, string>\n) => {\n  const { conn, lbug } = await initLbug();\n\n  try {\n    if (import.meta.env.DEV) console.log(`LadybugDB: Generating CSVs for ${graph.nodeCount} nodes...`);\n\n    // 1. Generate all CSVs (per-table)\n    const csvData = generateAllCSVs(graph, fileContents);\n\n    const fs = lbug.FS;\n\n    // 2. Write all node CSVs to virtual filesystem\n    const nodeFiles: Array<{ table: NodeTableName; path: string }> = [];\n    for (const [tableName, csv] of csvData.nodes.entries()) {\n      // Skip empty CSVs (only header row)\n      if (csv.split('\\n').length <= 1) continue;\n\n      const path = `/${tableName.toLowerCase()}.csv`;\n      try { await fs.unlink(path); } catch {}\n      await fs.writeFile(path, csv);\n      nodeFiles.push({ table: tableName, path });\n    }\n\n    // 3. 
Parse relation CSV and prepare for INSERT (COPY FROM doesn't work with multi-pair tables)\n    const relLines = csvData.relCSV.split('\\n').slice(1).filter(line => line.trim());\n    const relCount = relLines.length;\n\n    if (import.meta.env.DEV) {\n      console.log(`LadybugDB: Wrote ${nodeFiles.length} node CSVs, ${relCount} relations to insert`);\n    }\n\n    // 4. COPY all node tables (must complete before rels due to FK constraints)\n    for (const { table, path } of nodeFiles) {\n      const copyQuery = getCopyQuery(table, path);\n      await conn.query(copyQuery);\n    }\n\n    // 5. INSERT relations one by one (COPY doesn't work with multi-pair REL tables)\n    // Build a set of valid table names for fast lookup\n    const validTables = new Set<string>(NODE_TABLES as readonly string[]);\n\n    const getNodeLabel = (nodeId: string): string => {\n      if (nodeId.startsWith('comm_')) return 'Community';\n      if (nodeId.startsWith('proc_')) return 'Process';\n      return nodeId.split(':')[0];\n    };\n\n    // All multi-language tables are created with backticks - must always reference them with backticks\n    const escapeLabel = (label: string): string => {\n      return BACKTICK_TABLES.has(label) ? 
`\\`${label}\\`` : label;\n    };\n\n    let insertedRels = 0;\n    let skippedRels = 0;\n    const skippedRelStats = new Map<string, number>();\n    for (const line of relLines) {\n      try {\n        // Format: \"from\",\"to\",\"type\",confidence,\"reason\",step\n        const match = line.match(/\"([^\"]*)\",\"([^\"]*)\",\"([^\"]*)\",([0-9.]+),\"([^\"]*)\",([0-9-]+)/);\n        if (!match) continue;\n\n        const [, fromId, toId, relType, confidenceStr, reason, stepStr] = match;\n\n        const fromLabel = getNodeLabel(fromId);\n        const toLabel = getNodeLabel(toId);\n\n        // Skip relationships where either node's label doesn't have a table in LadybugDB\n        // Querying a non-existent table causes a fatal native crash\n        if (!validTables.has(fromLabel) || !validTables.has(toLabel)) {\n          skippedRels++;\n          continue;\n        }\n\n        // Don't use `|| 1.0` here: it would coerce a genuine confidence of 0 to max confidence\n        const parsedConfidence = parseFloat(confidenceStr);\n        const confidence = Number.isNaN(parsedConfidence) ? 1.0 : parsedConfidence;\n        const step = parseInt(stepStr, 10) || 0;\n\n        const insertQuery = `\n          MATCH (a:${escapeLabel(fromLabel)} {id: '${fromId.replace(/'/g, \"''\")}'}),\n                (b:${escapeLabel(toLabel)} {id: '${toId.replace(/'/g, \"''\")}'})\n          CREATE (a)-[:${REL_TABLE_NAME} {type: '${relType}', confidence: ${confidence}, reason: '${reason.replace(/'/g, \"''\")}', step: ${step}}]->(b)\n        `;\n        await conn.query(insertQuery);\n        insertedRels++;\n      } catch (err) {\n        skippedRels++;\n        const match = line.match(/\"([^\"]*)\",\"([^\"]*)\",\"([^\"]*)\",([0-9.]+),\"([^\"]*)\"/);\n        if (match) {\n          const [, fromId, toId, relType] = match;\n          const fromLabel = getNodeLabel(fromId);\n          const toLabel = getNodeLabel(toId);\n          const key = `${relType}:${fromLabel}->` + toLabel;\n          skippedRelStats.set(key, (skippedRelStats.get(key) || 0) + 1);\n\n          if (import.meta.env.DEV) {\n            console.warn(`⚠️ Skipped: ${key} | \"${fromId}\" → \"${toId}\" | 
${err instanceof Error ? err.message : String(err)}`);\n          }\n        }\n      }\n    }\n\n    if (import.meta.env.DEV) {\n      console.log(`LadybugDB: Inserted ${insertedRels}/${relCount} relations`);\n      if (skippedRels > 0) {\n        const topSkipped = Array.from(skippedRelStats.entries())\n          .sort((a, b) => b[1] - a[1])\n          .slice(0, 10);\n        console.warn(`LadybugDB: Skipped ${skippedRels}/${relCount} relations (top by kind/pair):`, topSkipped);\n      }\n    }\n\n    // 6. Verify results\n    let totalNodes = 0;\n    for (const tableName of NODE_TABLES) {\n      try {\n        const countRes = await conn.query(`MATCH (n:${tableName}) RETURN count(n) AS cnt`);\n        const countRows = await countRes.getAll();\n        const countRow = countRows[0];\n        const count = countRow ? (countRow.cnt ?? countRow[0] ?? 0) : 0;\n        totalNodes += Number(count);\n      } catch {\n        // Table might be empty, skip\n      }\n    }\n\n    if (import.meta.env.DEV) console.log(`✅ LadybugDB Bulk Load Complete. Total nodes: ${totalNodes}, edges: ${insertedRels}`);\n\n    // 7. 
Cleanup CSV files\n    for (const { path } of nodeFiles) {\n      try { await fs.unlink(path); } catch {}\n    }\n\n    return { success: true, count: totalNodes };\n\n  } catch (error) {\n    if (import.meta.env.DEV) console.error('❌ LadybugDB Bulk Load Failed:', error);\n    return { success: false, count: 0 };\n  }\n};\n\n// LadybugDB default ESCAPE is '\\' (backslash), but our CSV uses RFC 4180 escaping (\"\" for literal quotes).\n// Source code content is full of backslashes which confuse the auto-detection.\n// We MUST explicitly set ESCAPE='\"' and disable auto_detect.\nconst COPY_CSV_OPTS = `(HEADER=true, ESCAPE='\"', DELIM=',', QUOTE='\"', PARALLEL=false, auto_detect=false)`;\n\n// Multi-language table names created with backticks in CODE_ELEMENT_BASE\nconst BACKTICK_TABLES = new Set([\n  'Struct', 'Enum', 'Macro', 'Typedef', 'Union', 'Namespace', 'Trait', 'Impl',\n  'TypeAlias', 'Const', 'Static', 'Property', 'Record', 'Delegate', 'Annotation',\n  'Constructor', 'Template', 'Module',\n]);\n\nconst escapeTableName = (table: string): string => {\n  return BACKTICK_TABLES.has(table) ? 
`\\`${table}\\`` : table;\n};\n\n/** Tables with isExported column (TypeScript/JS-native types) */\nconst TABLES_WITH_EXPORTED = new Set<string>(['Function', 'Class', 'Interface', 'Method', 'CodeElement']);\n\n/**\n * Get the COPY query for a node table with correct column mapping\n */\nconst getCopyQuery = (table: NodeTableName, path: string): string => {\n  const t = escapeTableName(table);\n  if (table === 'File') {\n    return `COPY ${t}(id, name, filePath, content) FROM \"${path}\" ${COPY_CSV_OPTS}`;\n  }\n  if (table === 'Folder') {\n    return `COPY ${t}(id, name, filePath) FROM \"${path}\" ${COPY_CSV_OPTS}`;\n  }\n  if (table === 'Community') {\n    return `COPY ${t}(id, label, heuristicLabel, keywords, description, enrichedBy, cohesion, symbolCount) FROM \"${path}\" ${COPY_CSV_OPTS}`;\n  }\n  if (table === 'Process') {\n    return `COPY ${t}(id, label, heuristicLabel, processType, stepCount, communities, entryPointId, terminalId) FROM \"${path}\" ${COPY_CSV_OPTS}`;\n  }\n  // TypeScript/JS code element tables have isExported; multi-language tables do not\n  if (TABLES_WITH_EXPORTED.has(table)) {\n    return `COPY ${t}(id, name, filePath, startLine, endLine, isExported, content) FROM \"${path}\" ${COPY_CSV_OPTS}`;\n  }\n  // Multi-language tables (Struct, Impl, Trait, Macro, etc.)\n  return `COPY ${t}(id, name, filePath, startLine, endLine, content) FROM \"${path}\" ${COPY_CSV_OPTS}`;\n};\n\n/**\n * Execute a Cypher query against the database\n * Returns results as named objects (not tuples) for better usability\n */\nexport const executeQuery = async (cypher: string): Promise<any[]> => {\n  if (!conn) {\n    await initLbug();\n  }\n\n  try {\n    const result = await conn.query(cypher);\n\n    // Extract column names from RETURN clause\n    const returnMatch = cypher.match(/RETURN\\s+(.+?)(?:\\s+ORDER|\\s+LIMIT|\\s+SKIP|\\s*$)/is);\n    let columnNames: string[] = [];\n    if (returnMatch) {\n      // Parse RETURN clause to get column names/aliases\n      
// Handles: \"a.name, b.filePath AS path, count(x) AS cnt\"\n      const returnClause = returnMatch[1];\n      columnNames = returnClause.split(',').map(col => {\n        col = col.trim();\n        // Check for AS alias\n        const asMatch = col.match(/\\s+AS\\s+(\\w+)\\s*$/i);\n        if (asMatch) return asMatch[1];\n        // Check for property access like n.name\n        const propMatch = col.match(/\\.(\\w+)\\s*$/);\n        if (propMatch) return propMatch[1];\n        // Check for function call like count(x)\n        const funcMatch = col.match(/^(\\w+)\\s*\\(/);\n        if (funcMatch) return funcMatch[1];\n        // Just use as-is if simple identifier\n        return col.replace(/[^a-zA-Z0-9_]/g, '_');\n      });\n    }\n\n    // Collect all rows\n    const allRows = await result.getAll();\n    const rows: any[] = [];\n    for (const row of allRows) {\n      // Convert tuple to named object if we have column names and row is array\n      if (Array.isArray(row) && columnNames.length === row.length) {\n        const namedRow: Record<string, any> = {};\n        for (let i = 0; i < row.length; i++) {\n          namedRow[columnNames[i]] = row[i];\n        }\n        rows.push(namedRow);\n      } else {\n        // Already an object or column count doesn't match\n        rows.push(row);\n      }\n    }\n\n    return rows;\n  } catch (error) {\n    if (import.meta.env.DEV) console.error('Query execution failed:', error);\n    throw error;\n  }\n};\n\n/**\n * Get database statistics\n */\nexport const getLbugStats = async (): Promise<{ nodes: number; edges: number }> => {\n  if (!conn) {\n    return { nodes: 0, edges: 0 };\n  }\n\n  try {\n    // Count nodes across all tables\n    let totalNodes = 0;\n    for (const tableName of NODE_TABLES) {\n      try {\n        const nodeResult = await conn.query(`MATCH (n:${tableName}) RETURN count(n) AS cnt`);\n        const nodeRows = await nodeResult.getAll();\n        const nodeRow = nodeRows[0];\n        totalNodes 
+= Number(nodeRow?.cnt ?? nodeRow?.[0] ?? 0);\n      } catch {\n        // Table might not exist or be empty\n      }\n    }\n\n    // Count edges from single relation table\n    let totalEdges = 0;\n    try {\n      const edgeResult = await conn.query(`MATCH ()-[r:${REL_TABLE_NAME}]->() RETURN count(r) AS cnt`);\n      const edgeRows = await edgeResult.getAll();\n      const edgeRow = edgeRows[0];\n      totalEdges = Number(edgeRow?.cnt ?? edgeRow?.[0] ?? 0);\n    } catch {\n      // Table might not exist or be empty\n    }\n\n    return { nodes: totalNodes, edges: totalEdges };\n  } catch (error) {\n    if (import.meta.env.DEV) {\n      console.warn('Failed to get LadybugDB stats:', error);\n    }\n    return { nodes: 0, edges: 0 };\n  }\n};\n\n/**\n * Check if LadybugDB is initialized and has data\n */\nexport const isLbugReady = (): boolean => {\n  return conn !== null && db !== null;\n};\n\n/**\n * Close the database connection (cleanup)\n */\nexport const closeLbug = async (): Promise<void> => {\n  if (conn) {\n    try {\n      await conn.close();\n    } catch {}\n    conn = null;\n  }\n  if (db) {\n    try {\n      await db.close();\n    } catch {}\n    db = null;\n  }\n  lbug = null;\n};\n\n/**\n * Execute a prepared statement with parameters\n * @param cypher - Cypher query with $param placeholders\n * @param params - Object mapping param names to values\n * @returns Query results\n */\nexport const executePrepared = async (\n  cypher: string,\n  params: Record<string, any>\n): Promise<any[]> => {\n  if (!conn) {\n    await initLbug();\n  }\n\n  try {\n    const stmt = await conn.prepare(cypher);\n    if (!stmt.isSuccess()) {\n      const errMsg = await stmt.getErrorMessage();\n      throw new Error(`Prepare failed: ${errMsg}`);\n    }\n\n    const result = await conn.execute(stmt, params);\n\n    const rows = await result.getAll();\n\n    await stmt.close();\n    return rows;\n  } catch (error) {\n    if (import.meta.env.DEV) console.error('Prepared query 
failed:', error);\n    throw error;\n  }\n};\n\n/**\n * Execute a prepared statement with multiple parameter sets in small sub-batches\n */\nexport const executeWithReusedStatement = async (\n  cypher: string,\n  paramsList: Array<Record<string, any>>\n): Promise<void> => {\n  if (!conn) {\n    await initLbug();\n  }\n\n  if (paramsList.length === 0) return;\n\n  const SUB_BATCH_SIZE = 4;\n\n  for (let i = 0; i < paramsList.length; i += SUB_BATCH_SIZE) {\n    const subBatch = paramsList.slice(i, i + SUB_BATCH_SIZE);\n\n    const stmt = await conn.prepare(cypher);\n    if (!stmt.isSuccess()) {\n      const errMsg = await stmt.getErrorMessage();\n      throw new Error(`Prepare failed: ${errMsg}`);\n    }\n\n    try {\n      for (const params of subBatch) {\n        await conn.execute(stmt, params);\n      }\n    } finally {\n      await stmt.close();\n    }\n\n    if (i + SUB_BATCH_SIZE < paramsList.length) {\n      await new Promise(r => setTimeout(r, 0));\n    }\n  }\n};\n\n/**\n * Test if array parameters work with prepared statements\n */\nexport const testArrayParams = async (): Promise<{ success: boolean; error?: string }> => {\n  if (!conn) {\n    await initLbug();\n  }\n\n  try {\n    const testEmbedding = new Array(384).fill(0).map((_, i) => i / 384);\n\n    // Get any node ID to test with (try File first, then others)\n    let testNodeId: string | null = null;\n    for (const tableName of NODE_TABLES) {\n      try {\n        const nodeResult = await conn.query(`MATCH (n:${tableName}) RETURN n.id AS id LIMIT 1`);\n        const nodeRows = await nodeResult.getAll();\n        const nodeRow = nodeRows[0];\n        if (nodeRow) {\n          testNodeId = nodeRow.id ?? 
nodeRow[0];\n          break;\n        }\n      } catch {}\n    }\n\n    if (!testNodeId) {\n      return { success: false, error: 'No nodes found to test with' };\n    }\n\n    if (import.meta.env.DEV) {\n      console.log('🧪 Testing array params with node:', testNodeId);\n    }\n\n    // First create an embedding entry\n    const createQuery = `CREATE (e:${EMBEDDING_TABLE_NAME} {nodeId: $nodeId, embedding: $embedding})`;\n    const stmt = await conn.prepare(createQuery);\n\n    if (!stmt.isSuccess()) {\n      const errMsg = await stmt.getErrorMessage();\n      return { success: false, error: `Prepare failed: ${errMsg}` };\n    }\n\n    await conn.execute(stmt, {\n      nodeId: testNodeId,\n      embedding: testEmbedding,\n    });\n\n    await stmt.close();\n\n    // Verify it was stored\n    const verifyResult = await conn.query(\n      `MATCH (e:${EMBEDDING_TABLE_NAME} {nodeId: '${testNodeId}'}) RETURN e.embedding AS emb`\n    );\n    const verifyRows = await verifyResult.getAll();\n    const verifyRow = verifyRows[0];\n    const storedEmb = verifyRow?.emb ?? verifyRow?.[0];\n\n    if (storedEmb && Array.isArray(storedEmb) && storedEmb.length === 384) {\n      if (import.meta.env.DEV) {\n        console.log('✅ Array params WORK! Stored embedding length:', storedEmb.length);\n      }\n      return { success: true };\n    } else {\n      return {\n        success: false,\n        error: `Embedding not stored correctly. Got: ${typeof storedEmb}, length: ${storedEmb?.length}`\n      };\n    }\n  } catch (error) {\n    const errorMsg = error instanceof Error ? error.message : String(error);\n    if (import.meta.env.DEV) {\n      console.error('❌ Array params test failed:', errorMsg);\n    }\n    return { success: false, error: errorMsg };\n  }\n};\n"
  },
  {
    "path": "gitnexus-web/src/core/lbug/schema.ts",
    "content": "/**\n * LadybugDB Schema Definitions\n * \n * Hybrid Schema:\n * - Separate node tables for each code element type (File, Function, Class, etc.)\n * - Single CodeRelation table with 'type' property for all relationships\n * \n * This allows LLMs to write natural Cypher queries like:\n *   MATCH (f:Function)-[r:CodeRelation {type: 'CALLS'}]->(g:Function) RETURN f, g\n */\n\n// ============================================================================\n// NODE TABLE NAMES\n// ============================================================================\nexport const NODE_TABLES = [\n  'File', 'Folder', 'Function', 'Class', 'Interface', 'Method', 'CodeElement', 'Community', 'Process',\n  // Multi-language support\n  'Struct', 'Enum', 'Macro', 'Typedef', 'Union', 'Namespace', 'Trait', 'Impl',\n  'TypeAlias', 'Const', 'Static', 'Property', 'Record', 'Delegate', 'Annotation', 'Constructor', 'Template', 'Module'\n] as const;\nexport type NodeTableName = typeof NODE_TABLES[number];\n\n// ============================================================================\n// RELATION TABLE\n// ============================================================================\nexport const REL_TABLE_NAME = 'CodeRelation';\n\n// Valid relation types\nexport const REL_TYPES = ['CONTAINS', 'DEFINES', 'IMPORTS', 'CALLS', 'EXTENDS', 'IMPLEMENTS', 'MEMBER_OF', 'STEP_IN_PROCESS'] as const;\nexport type RelType = typeof REL_TYPES[number];\n\n// ============================================================================\n// EMBEDDING TABLE\n// ============================================================================\nexport const EMBEDDING_TABLE_NAME = 'CodeEmbedding';\n\n// ============================================================================\n// NODE TABLE SCHEMAS\n// ============================================================================\n\nexport const FILE_SCHEMA = `\nCREATE NODE TABLE File (\n  id STRING,\n  name STRING,\n  filePath STRING,\n  content 
STRING,\n  PRIMARY KEY (id)\n)`;\n\nexport const FOLDER_SCHEMA = `\nCREATE NODE TABLE Folder (\n  id STRING,\n  name STRING,\n  filePath STRING,\n  PRIMARY KEY (id)\n)`;\n\nexport const FUNCTION_SCHEMA = `\nCREATE NODE TABLE Function (\n  id STRING,\n  name STRING,\n  filePath STRING,\n  startLine INT64,\n  endLine INT64,\n  isExported BOOLEAN,\n  content STRING,\n  PRIMARY KEY (id)\n)`;\n\nexport const CLASS_SCHEMA = `\nCREATE NODE TABLE Class (\n  id STRING,\n  name STRING,\n  filePath STRING,\n  startLine INT64,\n  endLine INT64,\n  isExported BOOLEAN,\n  content STRING,\n  PRIMARY KEY (id)\n)`;\n\nexport const INTERFACE_SCHEMA = `\nCREATE NODE TABLE Interface (\n  id STRING,\n  name STRING,\n  filePath STRING,\n  startLine INT64,\n  endLine INT64,\n  isExported BOOLEAN,\n  content STRING,\n  PRIMARY KEY (id)\n)`;\n\nexport const METHOD_SCHEMA = `\nCREATE NODE TABLE Method (\n  id STRING,\n  name STRING,\n  filePath STRING,\n  startLine INT64,\n  endLine INT64,\n  isExported BOOLEAN,\n  content STRING,\n  PRIMARY KEY (id)\n)`;\n\nexport const CODE_ELEMENT_SCHEMA = `\nCREATE NODE TABLE CodeElement (\n  id STRING,\n  name STRING,\n  filePath STRING,\n  startLine INT64,\n  endLine INT64,\n  isExported BOOLEAN,\n  content STRING,\n  PRIMARY KEY (id)\n)`;\n\n// ============================================================================\n// COMMUNITY NODE TABLE (for Leiden algorithm clusters)\n// ============================================================================\n\nexport const COMMUNITY_SCHEMA = `\nCREATE NODE TABLE Community (\n  id STRING,\n  label STRING,\n  heuristicLabel STRING,\n  keywords STRING[],\n  description STRING,\n  enrichedBy STRING,\n  cohesion DOUBLE,\n  symbolCount INT32,\n  PRIMARY KEY (id)\n)`;\n\n// ============================================================================\n// PROCESS NODE TABLE (for execution flow detection)\n// ============================================================================\n\nexport const 
PROCESS_SCHEMA = `\nCREATE NODE TABLE Process (\n  id STRING,\n  label STRING,\n  heuristicLabel STRING,\n  processType STRING,\n  stepCount INT32,\n  communities STRING[],\n  entryPointId STRING,\n  terminalId STRING,\n  PRIMARY KEY (id)\n)`;\n\n// ============================================================================\n// MULTI-LANGUAGE NODE TABLE SCHEMAS\n// ============================================================================\n\n// Generic code element with startLine/endLine for C, C++, Rust, Go, Java, C#\nconst CODE_ELEMENT_BASE = (name: string) => `\nCREATE NODE TABLE \\`${name}\\` (\n  id STRING,\n  name STRING,\n  filePath STRING,\n  startLine INT64,\n  endLine INT64,\n  content STRING,\n  PRIMARY KEY (id)\n)`;\n\nexport const STRUCT_SCHEMA = CODE_ELEMENT_BASE('Struct');\nexport const ENUM_SCHEMA = CODE_ELEMENT_BASE('Enum');\nexport const MACRO_SCHEMA = CODE_ELEMENT_BASE('Macro');\nexport const TYPEDEF_SCHEMA = CODE_ELEMENT_BASE('Typedef');\nexport const UNION_SCHEMA = CODE_ELEMENT_BASE('Union');\nexport const NAMESPACE_SCHEMA = CODE_ELEMENT_BASE('Namespace');\nexport const TRAIT_SCHEMA = CODE_ELEMENT_BASE('Trait');\nexport const IMPL_SCHEMA = CODE_ELEMENT_BASE('Impl');\nexport const TYPE_ALIAS_SCHEMA = CODE_ELEMENT_BASE('TypeAlias');\nexport const CONST_SCHEMA = CODE_ELEMENT_BASE('Const');\nexport const STATIC_SCHEMA = CODE_ELEMENT_BASE('Static');\nexport const PROPERTY_SCHEMA = CODE_ELEMENT_BASE('Property');\nexport const RECORD_SCHEMA = CODE_ELEMENT_BASE('Record');\nexport const DELEGATE_SCHEMA = CODE_ELEMENT_BASE('Delegate');\nexport const ANNOTATION_SCHEMA = CODE_ELEMENT_BASE('Annotation');\nexport const CONSTRUCTOR_SCHEMA = CODE_ELEMENT_BASE('Constructor');\nexport const TEMPLATE_SCHEMA = CODE_ELEMENT_BASE('Template');\nexport const MODULE_SCHEMA = CODE_ELEMENT_BASE('Module');\n\n// ============================================================================\n// RELATION TABLE SCHEMA\n// Single table with 'type' property - connects all 
node tables\n// ============================================================================\n\nexport const RELATION_SCHEMA = `\nCREATE REL TABLE ${REL_TABLE_NAME} (\n  FROM File TO File,\n  FROM File TO Folder,\n  FROM File TO Function,\n  FROM File TO Class,\n  FROM File TO Interface,\n  FROM File TO Method,\n  FROM File TO CodeElement,\n  FROM File TO \\`Struct\\`,\n  FROM File TO \\`Enum\\`,\n  FROM File TO \\`Macro\\`,\n  FROM File TO \\`Typedef\\`,\n  FROM File TO \\`Union\\`,\n  FROM File TO \\`Namespace\\`,\n  FROM File TO \\`Trait\\`,\n  FROM File TO \\`Impl\\`,\n  FROM File TO \\`TypeAlias\\`,\n  FROM File TO \\`Const\\`,\n  FROM File TO \\`Static\\`,\n  FROM File TO \\`Property\\`,\n  FROM File TO \\`Record\\`,\n  FROM File TO \\`Delegate\\`,\n  FROM File TO \\`Annotation\\`,\n  FROM File TO \\`Constructor\\`,\n  FROM File TO \\`Template\\`,\n  FROM File TO \\`Module\\`,\n  FROM Folder TO Folder,\n  FROM Folder TO File,\n  FROM Function TO Function,\n  FROM Function TO Method,\n  FROM Function TO Class,\n  FROM Function TO Community,\n  FROM Function TO \\`Macro\\`,\n  FROM Function TO \\`Struct\\`,\n  FROM Function TO \\`Template\\`,\n  FROM Function TO \\`Enum\\`,\n  FROM Function TO \\`Namespace\\`,\n  FROM Function TO \\`TypeAlias\\`,\n  FROM Function TO \\`Module\\`,\n  FROM Function TO \\`Impl\\`,\n  FROM Function TO Interface,\n  FROM Function TO \\`Constructor\\`,\n  FROM Class TO Method,\n  FROM Class TO Function,\n  FROM Class TO Class,\n  FROM Class TO Interface,\n  FROM Class TO Community,\n  FROM Class TO \\`Template\\`,\n  FROM Class TO \\`TypeAlias\\`,\n  FROM Class TO \\`Struct\\`,\n  FROM Class TO \\`Enum\\`,\n  FROM Class TO \\`Constructor\\`,\n  FROM Method TO Function,\n  FROM Method TO Method,\n  FROM Method TO Class,\n  FROM Method TO Community,\n  FROM Method TO \\`Template\\`,\n  FROM Method TO \\`Struct\\`,\n  FROM Method TO \\`TypeAlias\\`,\n  FROM Method TO \\`Enum\\`,\n  FROM Method TO \\`Macro\\`,\n  FROM Method TO 
\\`Namespace\\`,\n  FROM Method TO \\`Module\\`,\n  FROM Method TO \\`Impl\\`,\n  FROM Method TO Interface,\n  FROM Method TO \\`Constructor\\`,\n  FROM \\`Template\\` TO \\`Template\\`,\n  FROM \\`Template\\` TO Function,\n  FROM \\`Template\\` TO Method,\n  FROM \\`Template\\` TO Class,\n  FROM \\`Template\\` TO \\`Struct\\`,\n  FROM \\`Template\\` TO \\`TypeAlias\\`,\n  FROM \\`Template\\` TO \\`Enum\\`,\n  FROM \\`Template\\` TO \\`Macro\\`,\n  FROM \\`Template\\` TO Interface,\n  FROM \\`Template\\` TO \\`Constructor\\`,\n  FROM \\`Module\\` TO \\`Module\\`,\n  FROM CodeElement TO Community,\n  FROM Interface TO Community,\n  FROM Interface TO Function,\n  FROM Interface TO Method,\n  FROM Interface TO Class,\n  FROM Interface TO Interface,\n  FROM Interface TO \\`TypeAlias\\`,\n  FROM Interface TO \\`Struct\\`,\n  FROM Interface TO \\`Constructor\\`,\n  FROM \\`Struct\\` TO Community,\n  FROM \\`Struct\\` TO \\`Trait\\`,\n  FROM \\`Struct\\` TO Function,\n  FROM \\`Struct\\` TO Method,\n  FROM \\`Enum\\` TO Community,\n  FROM \\`Macro\\` TO Community,\n  FROM \\`Macro\\` TO Function,\n  FROM \\`Macro\\` TO Method,\n  FROM \\`Module\\` TO Function,\n  FROM \\`Module\\` TO Method,\n  FROM \\`Typedef\\` TO Community,\n  FROM \\`Union\\` TO Community,\n  FROM \\`Namespace\\` TO Community,\n  FROM \\`Trait\\` TO Community,\n  FROM \\`Impl\\` TO Community,\n  FROM \\`Impl\\` TO \\`Trait\\`,\n  FROM \\`TypeAlias\\` TO Community,\n  FROM \\`Const\\` TO Community,\n  FROM \\`Static\\` TO Community,\n  FROM \\`Property\\` TO Community,\n  FROM \\`Record\\` TO Community,\n  FROM \\`Delegate\\` TO Community,\n  FROM \\`Annotation\\` TO Community,\n  FROM \\`Constructor\\` TO Community,\n  FROM \\`Constructor\\` TO Interface,\n  FROM \\`Constructor\\` TO Class,\n  FROM \\`Constructor\\` TO Method,\n  FROM \\`Constructor\\` TO Function,\n  FROM \\`Constructor\\` TO \\`Constructor\\`,\n  FROM \\`Constructor\\` TO \\`Struct\\`,\n  FROM \\`Constructor\\` TO \\`Macro\\`,\n  
FROM \\`Constructor\\` TO \\`Template\\`,\n  FROM \\`Constructor\\` TO \\`TypeAlias\\`,\n  FROM \\`Constructor\\` TO \\`Enum\\`,\n  FROM \\`Constructor\\` TO \\`Impl\\`,\n  FROM \\`Constructor\\` TO \\`Namespace\\`,\n  FROM \\`Template\\` TO Community,\n  FROM \\`Module\\` TO Community,\n  FROM Function TO Process,\n  FROM Method TO Process,\n  FROM Class TO Process,\n  FROM Interface TO Process,\n  FROM \\`Struct\\` TO Process,\n  FROM \\`Constructor\\` TO Process,\n  FROM \\`Module\\` TO Process,\n  FROM \\`Macro\\` TO Process,\n  FROM \\`Impl\\` TO Process,\n  FROM \\`Typedef\\` TO Process,\n  FROM \\`TypeAlias\\` TO Process,\n  FROM \\`Enum\\` TO Process,\n  FROM \\`Union\\` TO Process,\n  FROM \\`Namespace\\` TO Process,\n  FROM \\`Trait\\` TO Process,\n  FROM \\`Const\\` TO Process,\n  FROM \\`Static\\` TO Process,\n  FROM \\`Property\\` TO Process,\n  FROM \\`Record\\` TO Process,\n  FROM \\`Delegate\\` TO Process,\n  FROM \\`Annotation\\` TO Process,\n  FROM \\`Template\\` TO Process,\n  FROM CodeElement TO Process,\n  type STRING,\n  confidence DOUBLE,\n  reason STRING,\n  step INT32\n)`;\n\n// ============================================================================\n// EMBEDDING TABLE SCHEMA\n// Separate table for vector storage to avoid copy-on-write overhead\n// ============================================================================\n\nexport const EMBEDDING_SCHEMA = `\nCREATE NODE TABLE ${EMBEDDING_TABLE_NAME} (\n  nodeId STRING,\n  embedding FLOAT[384],\n  PRIMARY KEY (nodeId)\n)`;\n\n/**\n * Create vector index for semantic search\n * Uses HNSW (Hierarchical Navigable Small World) algorithm with cosine similarity\n */\nexport const CREATE_VECTOR_INDEX_QUERY = `\nCALL CREATE_VECTOR_INDEX('${EMBEDDING_TABLE_NAME}', 'code_embedding_idx', 'embedding', metric := 'cosine')\n`;\n\n// ============================================================================\n// ALL SCHEMA QUERIES IN ORDER\n// Node tables must be created before relationship tables 
that reference them\n// ============================================================================\n\nexport const NODE_SCHEMA_QUERIES = [\n  FILE_SCHEMA,\n  FOLDER_SCHEMA,\n  FUNCTION_SCHEMA,\n  CLASS_SCHEMA,\n  INTERFACE_SCHEMA,\n  METHOD_SCHEMA,\n  CODE_ELEMENT_SCHEMA,\n  COMMUNITY_SCHEMA,\n  PROCESS_SCHEMA,\n  // Multi-language support\n  STRUCT_SCHEMA,\n  ENUM_SCHEMA,\n  MACRO_SCHEMA,\n  TYPEDEF_SCHEMA,\n  UNION_SCHEMA,\n  NAMESPACE_SCHEMA,\n  TRAIT_SCHEMA,\n  IMPL_SCHEMA,\n  TYPE_ALIAS_SCHEMA,\n  CONST_SCHEMA,\n  STATIC_SCHEMA,\n  PROPERTY_SCHEMA,\n  RECORD_SCHEMA,\n  DELEGATE_SCHEMA,\n  ANNOTATION_SCHEMA,\n  CONSTRUCTOR_SCHEMA,\n  TEMPLATE_SCHEMA,\n  MODULE_SCHEMA,\n];\n\nexport const REL_SCHEMA_QUERIES = [\n  RELATION_SCHEMA,\n];\n\nexport const SCHEMA_QUERIES = [\n  ...NODE_SCHEMA_QUERIES,\n  ...REL_SCHEMA_QUERIES,\n  EMBEDDING_SCHEMA,\n];\n"
  },
  {
    "path": "gitnexus-web/src/core/llm/agent.ts",
    "content": "/**\n * Graph RAG Agent Factory\n * \n * Creates a LangChain agent configured for code graph analysis.\n * Supports Azure OpenAI and Google Gemini providers.\n */\n\nimport { createReactAgent } from '@langchain/langgraph/prebuilt';\nimport { SystemMessage } from '@langchain/core/messages';\nimport { ChatOpenAI, AzureChatOpenAI } from '@langchain/openai';\nimport { ChatGoogleGenerativeAI } from '@langchain/google-genai';\nimport { ChatAnthropic } from '@langchain/anthropic';\nimport { ChatOllama } from '@langchain/ollama';\nimport type { BaseChatModel } from '@langchain/core/language_models/chat_models';\nimport { createGraphRAGTools } from './tools';\nimport type { \n  ProviderConfig, \n  OpenAIConfig,\n  AzureOpenAIConfig, \n  GeminiConfig,\n  AnthropicConfig,\n  OllamaConfig,\n  OpenRouterConfig,\n  AgentStreamChunk,\n} from './types';\nimport { \n  type CodebaseContext,\n  buildDynamicSystemPrompt,\n} from './context-builder';\n\n/**\n * System prompt for the Graph RAG agent\n * \n * Design principles (based on Aider/Cline research):\n * - Short, punchy directives > long explanations\n * - No template-inducing examples\n * - Let LLM figure out HOW, just tell it WHAT behavior we want\n * - Explicit progress reporting requirement\n * - Anti-laziness directives\n */\n/**\n * Base system prompt - exported so it can be used with dynamic context injection\n * \n * Structure (optimized for instruction following):\n * 1. Identity + GROUNDING mandate (most important)\n * 2. Core protocol (how to work)\n * 3. Tools reference\n * 4. Output format & rules\n * 5. [Dynamic context appended at end]\n */\nexport const BASE_SYSTEM_PROMPT = `You are Nexus, a Code Analysis Agent with access to a Knowledge Graph. Your responses MUST be grounded.\n\n## ⚠️ MANDATORY: GROUNDING\nEvery factual claim MUST include a citation.\n- File refs: [[src/auth.ts:45-60]] (line range with hyphen)\n- NO citation = NO claim. 
Say \"I didn't find evidence\" instead of guessing.\n\n## ⚠️ MANDATORY: VALIDATION\nEvery output MUST be validated.\n- Use cypher to validate the results and confirm completeness of context before final output.\n- NO validation = NO claim. Say \"I didn't find evidence\" instead of guessing.\n- Do not blindly trust the README or a single source of truth. Always validate and cross-reference. Never be lazy.\n\n## 🧠 CORE PROTOCOL\nYou are an investigator. For each question:\n1. **Search** → Use cypher, search, or grep to find relevant code\n2. **Read** → Use read to see the actual source\n3. **Trace** → Use cypher to follow connections in the graph\n4. **Cite** → Ground every finding with [[file:line]] or [[Type:Name]]\n5. **Validate** → Use cypher to validate the results and confirm completeness of context before final output. (MUST DO)\n\n## 🛠️ TOOLS\n- **\\`search\\`** — Hybrid search. Results grouped by process with cluster context.\n- **\\`cypher\\`** — Cypher queries against the graph. Use \\`{{QUERY_VECTOR}}\\` for vector search.\n- **\\`grep\\`** — Regex search. Best for exact strings, TODOs, error codes.\n- **\\`read\\`** — Read file content. Always use after search/grep to see full code.\n- **\\`explore\\`** — Deep dive on a symbol, cluster, or process. Shows membership, participation, connections.\n- **\\`overview\\`** — Codebase map showing all clusters and processes.\n- **\\`impact\\`** — Impact analysis. Shows affected processes, clusters, and risk level.\n\n## 📊 GRAPH SCHEMA\nNodes: File, Folder, Function, Class, Interface, Method, Community, Process\nRelations: \\`CodeRelation\\` with \\`type\\` property: CONTAINS, DEFINES, IMPORTS, CALLS, EXTENDS, IMPLEMENTS, MEMBER_OF, STEP_IN_PROCESS\n\n## 📐 GRAPH SEMANTICS (Important!)\n**Edge Types:**\n- \\`CALLS\\`: Method invocation OR constructor injection. If A receives B as a parameter and uses it, A→B is CALLS. 
This is intentional simplification.\n- \\`IMPORTS\\`: File-level import/include statement.\n- \\`EXTENDS/IMPLEMENTS\\`: Class inheritance.\n\n**Process Nodes:**\n- Process labels use the format: \"EntryPoint → Terminal\" (e.g., \"onCreate → showToast\")\n- These are heuristic names from tracing execution flow, NOT application-defined names\n- Entry points are detected via export status, naming patterns, and framework conventions\n\nCypher examples:\n- \\`MATCH (f:Function) RETURN f.name LIMIT 10\\`\n- \\`MATCH (f:File)-[:CodeRelation {type: 'IMPORTS'}]->(g:File) RETURN f.name, g.name\\`\n\n## 📝 CRITICAL RULES\n- **impact output is trusted.** Do NOT re-validate with cypher. Optionally run the suggested grep commands for dynamic patterns.\n- **Cite or retract.** Never state something you can't ground.\n- **Read before concluding.** Don't guess from names alone.\n- **Retry on failure.** If a tool fails, fix the input and try again.\n- **Cypher tool validation.** Prefer the cypher tool for anything that requires graph connections.\n- **OUTPUT STYLE.** Prefer tables and mermaid diagrams over long explanations.\n- ALWAYS USE MERMAID FOR VISUALIZATION AND STRUCTURING THE OUTPUT.\n\n## 🎯 OUTPUT STYLE\nThink like a senior architect. 
Be concise—no fluff, short, precise and to the point.\n- Use tables for comparisons/rankings\n- Use mermaid diagrams for flows/dependencies\n- Surface deep insights: patterns, coupling, design decisions\n- End with **TL;DR** (short summary of the response, summing up the response and the most critical parts)\n\n## MERMAID RULES\nWhen generating diagrams:\n- NO special characters in node labels: quotes, (), /, &, <, >\n- Wrap labels with spaces in quotes: A[\"My Label\"]\n- Use simple IDs: A, B, C or auth, db, api\n- Flowchart: graph TD or graph LR (not flowchart)\n- Always test mentally: would this parse?\n\nBAD:  A[User's Data] --> B(Process & Save)\nGOOD: A[\"User Data\"] --> B[\"Process and Save\"]\n`;\nexport const createChatModel = (config: ProviderConfig): BaseChatModel => {\n  switch (config.provider) {\n    case 'openai': {\n      const openaiConfig = config as OpenAIConfig;\n      \n      if (!openaiConfig.apiKey || openaiConfig.apiKey.trim() === '') {\n        throw new Error('OpenAI API key is required but was not provided');\n      }\n      \n      return new ChatOpenAI({\n        apiKey: openaiConfig.apiKey,\n        modelName: openaiConfig.model,\n        temperature: openaiConfig.temperature ?? 0.1,\n        maxTokens: openaiConfig.maxTokens,\n        configuration: {\n          apiKey: openaiConfig.apiKey,\n          ...(openaiConfig.baseUrl ? { baseURL: openaiConfig.baseUrl } : {}),\n        },\n        streaming: true,\n      });\n    }\n    \n    case 'azure-openai': {\n      const azureConfig = config as AzureOpenAIConfig;\n      return new AzureChatOpenAI({\n        azureOpenAIApiKey: azureConfig.apiKey,\n        azureOpenAIApiInstanceName: extractInstanceName(azureConfig.endpoint),\n        azureOpenAIApiDeploymentName: azureConfig.deploymentName,\n        azureOpenAIApiVersion: azureConfig.apiVersion ?? 
'2024-12-01-preview',\n        // Note: gpt-5.2-chat only supports temperature=1 (default)\n        streaming: true,\n      });\n    }\n    \n    case 'gemini': {\n      const geminiConfig = config as GeminiConfig;\n      return new ChatGoogleGenerativeAI({\n        apiKey: geminiConfig.apiKey,\n        model: geminiConfig.model,\n        temperature: geminiConfig.temperature ?? 0.1,\n        maxOutputTokens: geminiConfig.maxTokens,\n        streaming: true,\n      });\n    }\n    \n    case 'anthropic': {\n      const anthropicConfig = config as AnthropicConfig;\n      return new ChatAnthropic({\n        anthropicApiKey: anthropicConfig.apiKey,\n        model: anthropicConfig.model,\n        temperature: anthropicConfig.temperature ?? 0.1,\n        maxTokens: anthropicConfig.maxTokens ?? 8192,\n        streaming: true,\n      });\n    }\n    \n    case 'ollama': {\n      const ollamaConfig = config as OllamaConfig;\n      return new ChatOllama({\n        baseUrl: ollamaConfig.baseUrl ?? 'http://localhost:11434',\n        model: ollamaConfig.model,\n        temperature: ollamaConfig.temperature ?? 
0.1,\n        streaming: true,\n        // Allow longer responses (Ollama default is often 128-2048)\n        numPredict: 30000,\n        // Increase context window (Ollama default is only 2048!)\n        // This is critical for agentic workflows with tool calls\n        numCtx: 32768,\n      });\n    }\n    \n    case 'openrouter': {\n      const openRouterConfig = config as OpenRouterConfig;\n      \n      // Debug logging\n      if (import.meta.env.DEV) {\n        console.log('🌐 OpenRouter config:', {\n          hasApiKey: !!openRouterConfig.apiKey,\n          apiKeyLength: openRouterConfig.apiKey?.length || 0,\n          model: openRouterConfig.model,\n          baseUrl: openRouterConfig.baseUrl,\n        });\n      }\n      \n      if (!openRouterConfig.apiKey || openRouterConfig.apiKey.trim() === '') {\n        throw new Error('OpenRouter API key is required but was not provided');\n      }\n      \n      return new ChatOpenAI({\n        openAIApiKey: openRouterConfig.apiKey,\n        apiKey: openRouterConfig.apiKey, // Fallback for some versions\n        modelName: openRouterConfig.model,\n        temperature: openRouterConfig.temperature ?? 0.1,\n        maxTokens: openRouterConfig.maxTokens,\n        configuration: {\n          apiKey: openRouterConfig.apiKey, // Ensure client receives it\n          baseURL: openRouterConfig.baseUrl ?? 
'https://openrouter.ai/api/v1',\n        },\n        streaming: true,\n      });\n    }\n    \n    default:\n      throw new Error(`Unsupported provider: ${(config as any).provider}`);\n  }\n};\n\n/**\n * Extract instance name from Azure endpoint URL\n * e.g., \"https://my-resource.openai.azure.com\" -> \"my-resource\"\n */\nconst extractInstanceName = (endpoint: string): string => {\n  try {\n    const url = new URL(endpoint);\n    const hostname = url.hostname;\n    // Extract the first part before .openai.azure.com\n    const match = hostname.match(/^([^.]+)\\.openai\\.azure\\.com/);\n    if (match) {\n      return match[1];\n    }\n    // Fallback: just use the first part of hostname\n    return hostname.split('.')[0];\n  } catch {\n    return endpoint;\n  }\n};\n\n/**\n * Create a Graph RAG agent\n */\nexport const createGraphRAGAgent = (\n  config: ProviderConfig,\n  executeQuery: (cypher: string) => Promise<any[]>,\n  semanticSearch: (query: string, k?: number, maxDistance?: number) => Promise<any[]>,\n  semanticSearchWithContext: (query: string, k?: number, hops?: number) => Promise<any[]>,\n  hybridSearch: (query: string, k?: number) => Promise<any[]>,\n  isEmbeddingReady: () => boolean,\n  isBM25Ready: () => boolean,\n  fileContents: Map<string, string>,\n  codebaseContext?: CodebaseContext\n) => {\n  const model = createChatModel(config);\n  const tools = createGraphRAGTools(\n    executeQuery,\n    semanticSearch,\n    semanticSearchWithContext,\n    hybridSearch,\n    isEmbeddingReady,\n    isBM25Ready,\n    fileContents\n  );\n  \n  // Use dynamic prompt if context is provided, otherwise use base prompt\n  const systemPrompt = codebaseContext \n    ? 
buildDynamicSystemPrompt(BASE_SYSTEM_PROMPT, codebaseContext)\n    : BASE_SYSTEM_PROMPT;\n  \n  // Log the full prompt for debugging\n  if (import.meta.env.DEV) {\n    console.log('🤖 AGENT SYSTEM PROMPT:\\n', systemPrompt);\n  }\n  \n  const agent = createReactAgent({\n    llm: model as any,\n    tools: tools as any,\n    messageModifier: new SystemMessage(systemPrompt) as any,\n  });\n  \n  return agent;\n};\n\n/**\n * Message type for agent conversation\n */\nexport interface AgentMessage {\n  role: 'user' | 'assistant';\n  content: string;\n}\n\n/**\n * Stream a response from the agent\n * Uses BOTH streamModes for best of both worlds:\n * - 'values' for state transitions (tool calls, results) in proper order\n * - 'messages' for token-by-token text streaming\n * \n * This preserves the natural progression: reasoning → tool → reasoning → tool → answer\n */\nexport async function* streamAgentResponse(\n  agent: ReturnType<typeof createReactAgent>,\n  messages: AgentMessage[]\n): AsyncGenerator<AgentStreamChunk> {\n  try {\n    const formattedMessages = messages.map(m => ({\n      role: m.role,\n      content: m.content,\n    }));\n    \n    // Use BOTH modes: 'values' for structure, 'messages' for token streaming\n    const stream = await agent.stream(\n      { messages: formattedMessages },\n      {\n        streamMode: ['values', 'messages'] as any,\n        // Allow longer tool/reasoning loops (more Cursor-like persistence)\n        recursionLimit: 50,\n      } as any\n    );\n    \n    // Track what we've yielded to avoid duplicates\n    const yieldedToolCalls = new Set<string>();\n    const yieldedToolResults = new Set<string>();\n    let lastProcessedMsgCount = formattedMessages.length;\n    // Track if all tools are done (for distinguishing reasoning vs final content)\n    let allToolsDone = true;\n    // Track if we've seen any tool calls in this response turn.\n    // Anything before the first tool call should be treated as \"reasoning/narration\"\n    
// so the UI can show the Cursor-like loop: plan → tool → update → tool → answer.\n    let hasSeenToolCallThisTurn = false;\n    \n    for await (const event of stream) {\n      // Events come as [streamMode, data] tuples when using multiple modes\n      // or just data when using single mode\n      let mode: string;\n      let data: any;\n      \n      if (Array.isArray(event) && event.length === 2 && typeof event[0] === 'string') {\n        [mode, data] = event;\n      } else if (Array.isArray(event) && event[0]?._getType) {\n        // Single messages mode format: [message, metadata]\n        mode = 'messages';\n        data = event;\n      } else {\n        // Assume values mode\n        mode = 'values';\n        data = event;\n      }\n      \n      // DEBUG: Enhanced logging\n      if (import.meta.env.DEV) {\n        const msgType = mode === 'messages' && data?.[0]?._getType?.() || 'n/a';\n        const hasContent = mode === 'messages' && data?.[0]?.content;\n        const hasToolCalls = mode === 'messages' && data?.[0]?.tool_calls?.length > 0;\n        console.log(`🔄 [${mode}] type:${msgType} content:${!!hasContent} tools:${hasToolCalls}`);\n      }\n      // Handle 'messages' mode - token-by-token streaming\n      if (mode === 'messages') {\n        const [msg] = Array.isArray(data) ? 
data : [data];\n        if (!msg) continue;\n        \n        const msgType = msg._getType?.() || msg.type || msg.constructor?.name || 'unknown';\n        \n        // AIMessageChunk - streaming text tokens\n        if (msgType === 'ai' || msgType === 'AIMessage' || msgType === 'AIMessageChunk') {\n          const rawContent = msg.content;\n          const toolCalls = msg.tool_calls || [];\n          \n          // Handle content that can be string or array of content blocks\n          let content: string = '';\n          if (typeof rawContent === 'string') {\n            content = rawContent;\n          } else if (Array.isArray(rawContent)) {\n            // Content blocks format: [{type: 'text', text: '...'}, ...]\n            content = rawContent\n              .filter((block: any) => block.type === 'text' || typeof block === 'string')\n              .map((block: any) => typeof block === 'string' ? block : block.text || '')\n              .join('');\n          }\n          \n          // If chunk has content, stream it\n          if (content && content.length > 0) {\n            // Determine if this is reasoning/narration vs final answer content.\n            // - Before the first tool call: treat as reasoning (narration)\n            // - Between tool calls/results: treat as reasoning\n            // - After all tools are done: treat as final content\n            const isReasoning =\n              !hasSeenToolCallThisTurn ||\n              toolCalls.length > 0 ||\n              !allToolsDone;\n            yield {\n              type: isReasoning ? 'reasoning' : 'content',\n              [isReasoning ? 
'reasoning' : 'content']: content,\n            };\n          }\n          \n          // Track tool calls from message chunks\n          if (toolCalls.length > 0) {\n            hasSeenToolCallThisTurn = true;\n            allToolsDone = false;\n            for (const tc of toolCalls) {\n              const toolId = tc.id || `tool-${Date.now()}-${Math.random().toString(36).slice(2)}`;\n              if (!yieldedToolCalls.has(toolId)) {\n                yieldedToolCalls.add(toolId);\n                yield {\n                  type: 'tool_call',\n                  toolCall: {\n                    id: toolId,\n                    name: tc.name || tc.function?.name || 'unknown',\n                    args: tc.args || (tc.function?.arguments ? JSON.parse(tc.function.arguments) : {}),\n                    status: 'running',\n                  },\n                };\n              }\n            }\n          }\n        }\n        \n        // ToolMessage in messages mode\n        if (msgType === 'tool' || msgType === 'ToolMessage') {\n          const toolCallId = msg.tool_call_id || '';\n          if (toolCallId && !yieldedToolResults.has(toolCallId)) {\n            yieldedToolResults.add(toolCallId);\n            const result = typeof msg.content === 'string' ? 
msg.content : JSON.stringify(msg.content);\n            yield {\n              type: 'tool_result',\n              toolCall: {\n                id: toolCallId,\n                name: msg.name || 'tool',\n                args: {},\n                result: result,\n                status: 'completed',\n              },\n            };\n            // After tool result, next AI content could be reasoning or final\n            allToolsDone = true;\n          }\n        }\n      }\n      \n      // Handle 'values' mode - state snapshots for structure\n      if (mode === 'values' && data?.messages) {\n        const stepMessages = data.messages || [];\n        \n        // Process new messages for tool calls/results we might have missed\n        for (let i = lastProcessedMsgCount; i < stepMessages.length; i++) {\n          const msg = stepMessages[i];\n          const msgType = msg._getType?.() || msg.type || 'unknown';\n          \n          // Catch tool calls from values mode (backup)\n          if ((msgType === 'ai' || msgType === 'AIMessage') && !yieldedToolCalls.size) {\n            const toolCalls = msg.tool_calls || [];\n            for (const tc of toolCalls) {\n              const toolId = tc.id || `tool-${Date.now()}`;\n              if (!yieldedToolCalls.has(toolId)) {\n                allToolsDone = false;\n                yieldedToolCalls.add(toolId);\n                yield {\n                  type: 'tool_call',\n                  toolCall: {\n                    id: toolId,\n                    name: tc.name || 'unknown',\n                    args: tc.args || {},\n                    status: 'running',\n                  },\n                };\n              }\n            }\n          }\n          \n          // Catch tool results from values mode (backup)\n          if (msgType === 'tool' || msgType === 'ToolMessage') {\n            const toolCallId = msg.tool_call_id || '';\n            if (toolCallId && !yieldedToolResults.has(toolCallId)) {\n          
    yieldedToolResults.add(toolCallId);\n              const result = typeof msg.content === 'string' ? msg.content : JSON.stringify(msg.content);\n              yield {\n                type: 'tool_result',\n                toolCall: {\n                  id: toolCallId,\n                  name: msg.name || 'tool',\n                  args: {},\n                  result: result,\n                  status: 'completed',\n                },\n              };\n              allToolsDone = true;\n            }\n          }\n        }\n        \n        lastProcessedMsgCount = stepMessages.length;\n      }\n    }\n    \n    // DEBUG: Stream completed normally\n    if (import.meta.env.DEV) {\n      console.log('✅ Stream completed normally, yielding done');\n    }\n    yield { type: 'done' };\n  } catch (error) {\n    const message = error instanceof Error ? error.message : String(error);\n    // DEBUG: Stream error\n    if (import.meta.env.DEV) {\n      console.error('❌ Stream error:', message, error);\n    }\n    yield { \n      type: 'error', \n      error: message,\n    };\n  }\n}\n\n/**\n * Get a non-streaming response from the agent\n * Simpler for cases where streaming isn't needed\n */\nexport const invokeAgent = async (\n  agent: ReturnType<typeof createReactAgent>,\n  messages: AgentMessage[]\n): Promise<string> => {\n  const formattedMessages = messages.map(m => ({\n    role: m.role,\n    content: m.content,\n  }));\n  \n  const result = await agent.invoke({ messages: formattedMessages });\n  \n  // result.messages is the full conversation state\n  const lastMessage = result.messages[result.messages.length - 1];\n  return lastMessage?.content?.toString() ?? 'No response generated.';\n};\n\n"
  },
  {
    "path": "gitnexus-web/src/core/llm/context-builder.ts",
    "content": "/**\n * Context Builder for Graph RAG Agent\n * \n * Generates dynamic context about the loaded codebase to inject into the system prompt.\n * This helps the LLM understand the project structure, scale, and key entry points\n * without needing to explore from scratch.\n */\n\n/**\n * Codebase statistics\n */\nexport interface CodebaseStats {\n  projectName: string;\n  fileCount: number;\n  functionCount: number;\n  classCount: number;\n  interfaceCount: number;\n  methodCount: number;\n}\n\n/**\n * Hotspot - highly connected node\n */\nexport interface Hotspot {\n  name: string;\n  type: string;\n  filePath: string;\n  connections: number;\n}\n\n/**\n * Folder info for tree rendering\n */\ninterface FolderInfo {\n  path: string;\n  name: string;\n  depth: number;\n  fileCount: number;\n  children: FolderInfo[];\n}\n\n/**\n * Complete codebase context for prompt injection\n * Simplified: stats + hotspots + folder tree (no entry points or language detection)\n */\nexport interface CodebaseContext {\n  stats: CodebaseStats;\n  hotspots: Hotspot[];\n  folderTree: string;\n}\n\n/**\n * Get codebase statistics via Cypher queries\n */\nexport async function getCodebaseStats(\n  executeQuery: (cypher: string) => Promise<any[]>,\n  projectName: string\n): Promise<CodebaseStats> {\n  try {\n    // Count each node type\n    const countQueries = [\n      { type: 'files', query: 'MATCH (n:File) RETURN COUNT(n) AS count' },\n      { type: 'functions', query: 'MATCH (n:Function) RETURN COUNT(n) AS count' },\n      { type: 'classes', query: 'MATCH (n:Class) RETURN COUNT(n) AS count' },\n      { type: 'interfaces', query: 'MATCH (n:Interface) RETURN COUNT(n) AS count' },\n      { type: 'methods', query: 'MATCH (n:Method) RETURN COUNT(n) AS count' },\n    ];\n\n    const counts: Record<string, number> = {};\n    \n    for (const { type, query } of countQueries) {\n      try {\n        const result = await executeQuery(query);\n        // Handle both array and object 
result formats\n        const row = result[0];\n        counts[type] = Array.isArray(row) ? (row[0] ?? 0) : (row?.count ?? 0);\n      } catch {\n        counts[type] = 0;\n      }\n    }\n\n    return {\n      projectName,\n      fileCount: counts.files,\n      functionCount: counts.functions,\n      classCount: counts.classes,\n      interfaceCount: counts.interfaces,\n      methodCount: counts.methods,\n    };\n  } catch (error) {\n    console.error('Failed to get codebase stats:', error);\n    return {\n      projectName,\n      fileCount: 0,\n      functionCount: 0,\n      classCount: 0,\n      interfaceCount: 0,\n      methodCount: 0,\n    };\n  }\n}\n\n\n/**\n * Find hotspots - nodes with the most connections\n */\nexport async function getHotspots(\n  executeQuery: (cypher: string) => Promise<any[]>,\n  limit: number = 8\n): Promise<Hotspot[]> {\n  try {\n    // Find nodes with most edges (both directions)\n    const query = `\n      MATCH (n)-[r:CodeRelation]-(m)\n      WHERE n.name IS NOT NULL\n      WITH n, COUNT(r) AS connections\n      ORDER BY connections DESC\n      LIMIT ${limit}\n      RETURN n.name AS name, LABEL(n) AS type, n.filePath AS filePath, connections\n    `;\n    \n    const results = await executeQuery(query);\n    \n    return results.map(row => {\n      if (Array.isArray(row)) {\n        return {\n          name: row[0],\n          type: row[1],\n          filePath: row[2],\n          connections: row[3],\n        };\n      }\n      return {\n        name: row.name,\n        type: row.type,\n        filePath: row.filePath,\n        connections: row.connections,\n      };\n    }).filter(h => h.name && h.type);\n  } catch (error) {\n    console.error('Failed to get hotspots:', error);\n    return [];\n  }\n}\n\n/**\n * Build folder tree structure from file paths\n * Returns ASCII tree format with smart truncation for readability\n */\nexport async function getFolderTree(\n  executeQuery: (cypher: string) => Promise<any[]>,\n  maxDepth: 
number = 10\n): Promise<string> {\n  try {\n    // Get all file paths\n    const query = 'MATCH (f:File) RETURN f.filePath AS path ORDER BY path';\n    const results = await executeQuery(query);\n    \n    const paths = results.map(row => {\n      if (Array.isArray(row)) return row[0];\n      return row.path;\n    }).filter(Boolean);\n\n    if (paths.length === 0) return '';\n\n    // Indented tree format: clear hierarchy; directories beyond maxDepth collapse to a file count\n    return formatAsHybridAscii(paths, maxDepth);\n  } catch (error) {\n    console.error('Failed to get folder tree:', error);\n    return '';\n  }\n}\n\n/**\n * Format paths as indented tree (TOON-style, no ASCII box chars)\n * Uses indentation only for hierarchy - more token efficient than ASCII tree\n * Directories deeper than maxDepth are collapsed to a file count\n * \n * Example output:\n * src/\n *   components/ (45 files)\n *   hooks/\n *     useAppState.tsx\n *     useSigma.ts\n *   core/ (15 files)\n * test/ (12 files)\n */\nfunction formatAsHybridAscii(paths: string[], maxDepth: number): string {\n  // Build tree structure\n  interface TreeNode {\n    isFile: boolean;\n    children: Map<string, TreeNode>;\n    fileCount: number;\n  }\n  \n  const root: TreeNode = { isFile: false, children: new Map(), fileCount: 0 };\n  \n  for (const path of paths) {\n    const normalized = path.replace(/\\\\/g, '/');\n    const parts = normalized.split('/').filter(Boolean);\n    \n    let current = root;\n    for (let i = 0; i < parts.length; i++) {\n      const part = parts[i];\n      const isFile = i === parts.length - 1;\n      \n      if (!current.children.has(part)) {\n        current.children.set(part, { isFile, children: new Map(), fileCount: 0 });\n      }\n      \n      current = current.children.get(part)!;\n      if (isFile) {\n        // Count files in parent directories\n        let parent = root;\n        for (let j = 0; j < i; j++) {\n          parent = parent.children.get(parts[j])!;\n          parent.fileCount++;\n        }\n    
  }\n    }\n  }\n  \n  // Render tree with indentation only (no ASCII box chars)\n  const lines: string[] = [];\n  \n  function renderNode(node: TreeNode, indent: string, depth: number): void {\n    const entries = [...node.children.entries()];\n    // Sort: folders first (by file count desc), then files alphabetically\n    entries.sort(([aName, aNode], [bName, bNode]) => {\n      if (aNode.isFile !== bNode.isFile) return aNode.isFile ? 1 : -1;\n      if (!aNode.isFile && !bNode.isFile) return bNode.fileCount - aNode.fileCount;\n      return aName.localeCompare(bName);\n    });\n    \n    for (const [name, childNode] of entries) {\n      if (childNode.isFile) {\n        // File\n        lines.push(`${indent}${name}`);\n      } else {\n        // Directory\n        const childCount = childNode.children.size;\n        const fileCount = childNode.fileCount;\n        \n        // Only collapse if beyond maxDepth\n        if (depth >= maxDepth) {\n          lines.push(`${indent}${name}/ (${fileCount} files)`);\n        } else {\n          lines.push(`${indent}${name}/`);\n          renderNode(childNode, indent + '  ', depth + 1);\n        }\n      }\n    }\n  }\n  \n  renderNode(root, '', 0);\n  \n  return lines.join('\\n');\n}\n\n/**\n * Build a tree structure from file paths\n */\nfunction buildTreeFromPaths(paths: string[], maxDepth: number): Map<string, any> {\n  const root = new Map<string, any>();\n  \n  for (const fullPath of paths) {\n    // Normalize path separators\n    const normalizedPath = fullPath.replace(/\\\\/g, '/');\n    const parts = normalizedPath.split('/').filter(Boolean);\n    \n    let current = root;\n    const depth = Math.min(parts.length, maxDepth + 1); // +1 to include files at maxDepth\n    \n    for (let i = 0; i < depth; i++) {\n      const part = parts[i];\n      const isFile = i === parts.length - 1;\n      \n      if (!current.has(part)) {\n        current.set(part, isFile ? 
null : new Map<string, any>());\n      }\n      \n      const next = current.get(part);\n      if (next instanceof Map) {\n        current = next;\n      } else {\n        break;\n      }\n    }\n  }\n  \n  return root;\n}\n\n/**\n * Format tree as ASCII (like VS Code sidebar)\n */\nfunction formatTreeAsAscii(\n  tree: Map<string, any>,\n  prefix: string,\n  isLast: boolean = true\n): string {\n  const lines: string[] = [];\n  const entries = Array.from(tree.entries());\n  \n  // Sort: folders first, then files, alphabetically\n  entries.sort(([a, aVal], [b, bVal]) => {\n    const aIsDir = aVal instanceof Map;\n    const bIsDir = bVal instanceof Map;\n    if (aIsDir !== bIsDir) return bIsDir ? 1 : -1;\n    return a.localeCompare(b);\n  });\n  \n  entries.forEach(([name, subtree], index) => {\n    const isLastItem = index === entries.length - 1;\n    const connector = isLastItem ? '└── ' : '├── ';\n    const childPrefix = prefix + (isLastItem ? '    ' : '│   ');\n    \n    if (subtree instanceof Map && subtree.size > 0) {\n      // Folder with children\n      const childCount = countItems(subtree);\n      const annotation = childCount > 3 ? 
` (${childCount} items)` : '';\n      lines.push(`${prefix}${connector}${name}/${annotation}`);\n      lines.push(formatTreeAsAscii(subtree, childPrefix, isLastItem));\n    } else if (subtree instanceof Map) {\n      // Empty folder\n      lines.push(`${prefix}${connector}${name}/`);\n    } else {\n      // File\n      lines.push(`${prefix}${connector}${name}`);\n    }\n  });\n  \n  return lines.filter(Boolean).join('\\n');\n}\n\n/**\n * Count items in a tree node\n */\nfunction countItems(tree: Map<string, any>): number {\n  let count = 0;\n  for (const [, value] of tree) {\n    if (value instanceof Map) {\n      count += 1 + countItems(value);\n    } else {\n      count += 1;\n    }\n  }\n  return count;\n}\n\n/**\n * Build complete codebase context\n */\nexport async function buildCodebaseContext(\n  executeQuery: (cypher: string) => Promise<any[]>,\n  projectName: string\n): Promise<CodebaseContext> {\n  // Run all queries in parallel for speed\n  const [stats, hotspots, folderTree] = await Promise.all([\n    getCodebaseStats(executeQuery, projectName),\n    getHotspots(executeQuery),\n    getFolderTree(executeQuery),\n  ]);\n\n  return {\n    stats,\n    hotspots,\n    folderTree,\n  };\n}\n\n/**\n * Format context as markdown for prompt injection\n */\nexport function formatContextForPrompt(context: CodebaseContext): string {\n  const { stats, hotspots, folderTree } = context;\n  \n  const lines: string[] = [];\n  \n  // Project header with stats\n  lines.push(`### 📊 CODEBASE: ${stats.projectName}`);\n  \n  const statParts = [\n    `Files: ${stats.fileCount}`,\n    `Functions: ${stats.functionCount}`,\n    stats.classCount > 0 ? `Classes: ${stats.classCount}` : null,\n    stats.interfaceCount > 0 ? 
`Interfaces: ${stats.interfaceCount}` : null,\n  ].filter(Boolean);\n  lines.push(statParts.join(' | '));\n  lines.push('');\n  \n  // Hotspots\n  if (hotspots.length > 0) {\n    lines.push('**Hotspots** (most connected):');\n    hotspots.slice(0, 5).forEach(h => {\n      lines.push(`- \\`${h.name}\\` (${h.type}) — ${h.connections} edges`);\n    });\n    lines.push('');\n  }\n  \n  // Folder tree\n  if (folderTree) {\n    lines.push('### 📁 STRUCTURE');\n    lines.push('```');\n    lines.push(stats.projectName + '/');\n    lines.push(folderTree);\n    lines.push('```');\n  }\n  \n  return lines.join('\\n');\n}\n\n/**\n * Build the complete dynamic system prompt\n * Context is appended at the END so core instructions remain at the top\n */\nexport function buildDynamicSystemPrompt(\n  basePrompt: string,\n  context: CodebaseContext\n): string {\n  const contextSection = formatContextForPrompt(context);\n  \n  // Append context at the END - keeps core instructions at top for better adherence\n  return `${basePrompt}\n\n---\n\n## 📦 CURRENT CODEBASE\n${contextSection}`;\n}\n"
  },
  {
    "path": "gitnexus-web/src/core/llm/index.ts",
    "content": "/**\n * LLM Module Exports\n * \n * Provides Graph RAG agent capabilities for code analysis.\n */\n\n// Types\nexport * from './types';\n\n// Settings management\nexport {\n  loadSettings,\n  saveSettings,\n  updateProviderSettings,\n  setActiveProvider,\n  getActiveProviderConfig,\n  isProviderConfigured,\n  clearSettings,\n  getProviderDisplayName,\n  getAvailableModels,\n} from './settings-service';\n\n// Tools\nexport { createGraphRAGTools } from './tools';\n\n// Context Builder\nexport {\n  buildCodebaseContext,\n  formatContextForPrompt,\n  buildDynamicSystemPrompt,\n  type CodebaseContext,\n  type CodebaseStats,\n  type Hotspot,\n} from './context-builder';\n\n// Agent\nexport {\n  createChatModel,\n  createGraphRAGAgent,\n  streamAgentResponse,\n  invokeAgent,\n  BASE_SYSTEM_PROMPT,\n  type AgentMessage,\n} from './agent';\n"
  },
  {
    "path": "gitnexus-web/src/core/llm/settings-service.ts",
    "content": "/**\n * Settings Service\n * \n * Handles localStorage persistence for LLM provider settings.\n * All API keys are stored locally - never sent to any server except the LLM provider.\n */\n\nimport { \n  LLMSettings, \n  DEFAULT_LLM_SETTINGS, \n  LLMProvider,\n  OpenAIConfig,\n  AzureOpenAIConfig,\n  GeminiConfig,\n  AnthropicConfig,\n  OllamaConfig,\n  OpenRouterConfig,\n  ProviderConfig,\n} from './types';\n\nconst STORAGE_KEY = 'gitnexus-llm-settings';\n\n/**\n * Load settings from localStorage\n */\nexport const loadSettings = (): LLMSettings => {\n  try {\n    const stored = localStorage.getItem(STORAGE_KEY);\n    if (!stored) {\n      return DEFAULT_LLM_SETTINGS;\n    }\n    \n    const parsed = JSON.parse(stored) as Partial<LLMSettings>;\n    \n    // Merge with defaults to handle new fields\n    return {\n      ...DEFAULT_LLM_SETTINGS,\n      ...parsed,\n      openai: {\n        ...DEFAULT_LLM_SETTINGS.openai,\n        ...parsed.openai,\n      },\n      azureOpenAI: {\n        ...DEFAULT_LLM_SETTINGS.azureOpenAI,\n        ...parsed.azureOpenAI,\n      },\n      gemini: {\n        ...DEFAULT_LLM_SETTINGS.gemini,\n        ...parsed.gemini,\n      },\n      anthropic: {\n        ...DEFAULT_LLM_SETTINGS.anthropic,\n        ...parsed.anthropic,\n      },\n      ollama: {\n        ...DEFAULT_LLM_SETTINGS.ollama,\n        ...parsed.ollama,\n      },\n      openrouter: {\n        ...DEFAULT_LLM_SETTINGS.openrouter,\n        ...parsed.openrouter,\n      },\n    };\n  } catch (error) {\n    console.warn('Failed to load LLM settings:', error);\n    return DEFAULT_LLM_SETTINGS;\n  }\n};\n\n/**\n * Save settings to localStorage\n */\nexport const saveSettings = (settings: LLMSettings): void => {\n  try {\n    localStorage.setItem(STORAGE_KEY, JSON.stringify(settings));\n  } catch (error) {\n    console.error('Failed to save LLM settings:', error);\n  }\n};\n\n/**\n * Update a specific provider's settings\n */\nexport const updateProviderSettings = <T 
extends LLMProvider>(\n  provider: T,\n  updates: Partial<\n    T extends 'openai' ? Partial<Omit<OpenAIConfig, 'provider'>> :\n    T extends 'azure-openai' ? Partial<Omit<AzureOpenAIConfig, 'provider'>> :\n    T extends 'gemini' ? Partial<Omit<GeminiConfig, 'provider'>> :\n    T extends 'anthropic' ? Partial<Omit<AnthropicConfig, 'provider'>> :\n    T extends 'ollama' ? Partial<Omit<OllamaConfig, 'provider'>> :\n    T extends 'openrouter' ? Partial<Omit<OpenRouterConfig, 'provider'>> :\n    never\n  >\n): LLMSettings => {\n  const current = loadSettings();\n\n  // Avoid spreading unions like LLMSettings[keyof LLMSettings] (can be string/undefined)\n  switch (provider) {\n    case 'openai': {\n      const updated: LLMSettings = {\n        ...current,\n        openai: {\n          ...(current.openai ?? {}),\n          ...(updates as Partial<Omit<OpenAIConfig, 'provider'>>),\n        },\n      };\n      saveSettings(updated);\n      return updated;\n    }\n    case 'azure-openai': {\n      const updated: LLMSettings = {\n        ...current,\n        azureOpenAI: {\n          ...(current.azureOpenAI ?? {}),\n          ...(updates as Partial<Omit<AzureOpenAIConfig, 'provider'>>),\n        },\n      };\n      saveSettings(updated);\n      return updated;\n    }\n    case 'gemini': {\n      const updated: LLMSettings = {\n        ...current,\n        gemini: {\n          ...(current.gemini ?? {}),\n          ...(updates as Partial<Omit<GeminiConfig, 'provider'>>),\n        },\n      };\n      saveSettings(updated);\n      return updated;\n    }\n    case 'anthropic': {\n      const updated: LLMSettings = {\n        ...current,\n        anthropic: {\n          ...(current.anthropic ?? {}),\n          ...(updates as Partial<Omit<AnthropicConfig, 'provider'>>),\n        },\n      };\n      saveSettings(updated);\n      return updated;\n    }\n    case 'ollama': {\n      const updated: LLMSettings = {\n        ...current,\n        ollama: {\n          ...(current.ollama ?? 
{}),\n          ...(updates as Partial<Omit<OllamaConfig, 'provider'>>),\n        },\n      };\n      saveSettings(updated);\n      return updated;\n    }\n    case 'openrouter': {\n      const updated: LLMSettings = {\n        ...current,\n        openrouter: {\n          ...(current.openrouter ?? {}),\n          ...(updates as Partial<Omit<OpenRouterConfig, 'provider'>>),\n        },\n      };\n      saveSettings(updated);\n      return updated;\n    }\n    default: {\n      // Should be unreachable due to T extends LLMProvider, but keep a safe fallback\n      const updated: LLMSettings = { ...current };\n      saveSettings(updated);\n      return updated;\n    }\n  }\n};\n\n/**\n * Set the active provider\n */\nexport const setActiveProvider = (provider: LLMProvider): LLMSettings => {\n  const current = loadSettings();\n  const updated: LLMSettings = {\n    ...current,\n    activeProvider: provider,\n  };\n  saveSettings(updated);\n  return updated;\n};\n\n/**\n * Get the current provider configuration\n */\nexport const getActiveProviderConfig = (): ProviderConfig | null => {\n  const settings = loadSettings();\n  \n  switch (settings.activeProvider) {\n    case 'openai':\n      if (!settings.openai?.apiKey) {\n        return null;\n      }\n      return {\n        provider: 'openai',\n        ...settings.openai,\n      } as OpenAIConfig;\n      \n    case 'azure-openai':\n      if (!settings.azureOpenAI?.apiKey || !settings.azureOpenAI?.endpoint) {\n        return null;\n      }\n      return {\n        provider: 'azure-openai',\n        ...settings.azureOpenAI,\n      } as AzureOpenAIConfig;\n      \n    case 'gemini':\n      if (!settings.gemini?.apiKey) {\n        return null;\n      }\n      return {\n        provider: 'gemini',\n        ...settings.gemini,\n      } as GeminiConfig;\n      \n    case 'anthropic':\n      if (!settings.anthropic?.apiKey) {\n        return null;\n      }\n      return {\n        provider: 'anthropic',\n        
...settings.anthropic,\n      } as AnthropicConfig;\n      \n    case 'ollama':\n      return {\n        provider: 'ollama',\n        ...settings.ollama,\n      } as OllamaConfig;\n      \n    case 'openrouter':\n      if (!settings.openrouter?.apiKey || settings.openrouter.apiKey.trim() === '') {\n        return null;\n      }\n      return {\n        provider: 'openrouter',\n        apiKey: settings.openrouter.apiKey,\n        model: settings.openrouter.model || '',\n        baseUrl: settings.openrouter.baseUrl || 'https://openrouter.ai/api/v1',\n        temperature: settings.openrouter.temperature,\n        maxTokens: settings.openrouter.maxTokens,\n      } as OpenRouterConfig;\n      \n    default:\n      return null;\n  }\n};\n\n/**\n * Check if the active provider is properly configured\n */\nexport const isProviderConfigured = (): boolean => {\n  return getActiveProviderConfig() !== null;\n};\n\n/**\n * Clear all settings (reset to defaults)\n */\nexport const clearSettings = (): void => {\n  localStorage.removeItem(STORAGE_KEY);\n};\n\n/**\n * Get display name for a provider\n */\nexport const getProviderDisplayName = (provider: LLMProvider): string => {\n  switch (provider) {\n    case 'openai':\n      return 'OpenAI';\n    case 'azure-openai':\n      return 'Azure OpenAI';\n    case 'gemini':\n      return 'Google Gemini';\n    case 'anthropic':\n      return 'Anthropic';\n    case 'ollama':\n      return 'Ollama (Local)';\n    case 'openrouter':\n      return 'OpenRouter';\n    default:\n      return provider;\n  }\n};\n\n/**\n * Get available models for a provider\n */\nexport const getAvailableModels = (provider: LLMProvider): string[] => {\n  switch (provider) {\n    case 'openai':\n      return ['gpt-4.5-preview', 'gpt-4o', 'gpt-4o-mini', 'gpt-4-turbo', 'gpt-4', 'gpt-3.5-turbo'];\n    case 'azure-openai':\n      // Azure models depend on deployment, so we show common ones\n      return ['gpt-4o', 'gpt-4o-mini', 'gpt-4-turbo', 'gpt-4', 
'gpt-35-turbo'];\n    case 'gemini':\n      return ['gemini-2.0-flash', 'gemini-1.5-pro', 'gemini-1.5-flash', 'gemini-1.0-pro'];\n    case 'anthropic':\n      return ['claude-sonnet-4-20250514', 'claude-3-5-sonnet-20241022', 'claude-3-5-haiku-20241022', 'claude-3-opus-20240229'];\n    case 'ollama':\n      return ['llama3.2', 'llama3.1', 'mistral', 'codellama', 'deepseek-coder'];\n    default:\n      return [];\n  }\n};\n\n/**\n * Fetch available models from OpenRouter API\n */\nexport const fetchOpenRouterModels = async (): Promise<Array<{ id: string; name: string }>> => {\n  try {\n    const response = await fetch('https://openrouter.ai/api/v1/models');\n    if (!response.ok) throw new Error('Failed to fetch models');\n    const data = await response.json();\n    return data.data.map((model: any) => ({\n      id: model.id,\n      name: model.name || model.id,\n    }));\n  } catch (error) {\n    console.error('Error fetching OpenRouter models:', error);\n    return [];\n  }\n};\n\n"
  },
  {
    "path": "gitnexus-web/src/core/llm/tools.ts",
    "content": "/**\n * Graph RAG Tools for LangChain Agent\n * \n * Consolidated tools (7 total):\n * - search: Hybrid search (BM25 + semantic + RRF), grouped by process/cluster\n * - cypher: Execute Cypher queries (auto-embeds {{QUERY_VECTOR}} if present)\n * - grep: Regex pattern search across files\n * - read: Read file content by path\n * - overview: Codebase map (clusters + processes)\n * - explore: Deep dive on a symbol, cluster, or process\n * - impact: Impact analysis (what depends on / is affected by changes)\n */\n\nimport { tool } from '@langchain/core/tools';\nimport { z } from 'zod';\n// Note: GRAPH_SCHEMA_DESCRIPTION from './types' is available if needed for additional context\nimport { WebGPUNotAvailableError, embedText, embeddingToArray, initEmbedder, isEmbedderReady } from '../embeddings/embedder';\n\n/**\n * Tool factory - creates tools bound to the LadybugDB query functions\n */\nexport const createGraphRAGTools = (\n  executeQuery: (cypher: string) => Promise<any[]>,\n  semanticSearch: (query: string, k?: number, maxDistance?: number) => Promise<any[]>,\n  semanticSearchWithContext: (query: string, k?: number, hops?: number) => Promise<any[]>,\n  hybridSearch: (query: string, k?: number) => Promise<any[]>,\n  isEmbeddingReady: () => boolean,\n  isBM25Ready: () => boolean,\n  fileContents: Map<string, string>\n) => {\n\n  // ============================================================================\n  // TOOL 1: SEARCH (Hybrid + 1-hop expansion)\n  // ============================================================================\n  \n  /**\n   * Unified search tool: BM25 + Semantic + RRF, with 1-hop graph context\n   */\n  const searchTool = tool(\n    async ({ query, limit, groupByProcess }: { query: string; limit?: number; groupByProcess?: boolean }) => {\n      const k = limit ?? 10;\n      const shouldGroup = groupByProcess ?? 
true;\n      \n      // Step 1: Hybrid search (BM25 + semantic with RRF)\n      let searchResults: any[] = [];\n      \n      if (isBM25Ready()) {\n        try {\n          searchResults = await hybridSearch(query, k);\n        } catch (error) {\n          // Fallback to semantic-only if hybrid fails\n          if (isEmbeddingReady()) {\n            searchResults = await semanticSearch(query, k);\n          }\n        }\n      } else if (isEmbeddingReady()) {\n        // Semantic only if BM25 not ready\n        searchResults = await semanticSearch(query, k);\n      } else {\n        return 'Search is not available. Please load a repository first.';\n      }\n      \n      if (searchResults.length === 0) {\n        return `No code found matching \"${query}\". Try different terms or use grep for exact patterns.`;\n      }\n      \n      type ProcessInfo = { id: string; label: string; step?: number; stepCount?: number };\n      type ResultInfo = {\n        idx: number;\n        nodeId: string;\n        name: string;\n        label: string;\n        filePath: string;\n        location: string;\n        sources: string;\n        score: string;\n        connections: string;\n        clusterLabel: string;\n        processes: ProcessInfo[];\n      };\n      \n      const results: ResultInfo[] = [];\n      \n      for (let i = 0; i < Math.min(searchResults.length, k); i++) {\n        const r = searchResults[i];\n        const nodeId = r.nodeId || r.id || '';\n        const name = r.name || r.filePath?.split('/').pop() || 'Unknown';\n        const label = r.label || 'File';\n        const filePath = r.filePath || '';\n        const location = r.startLine ? ` (lines ${r.startLine}-${r.endLine})` : '';\n        const sources = r.sources?.join('+') || 'hybrid';\n        const score = r.score ? 
` [score: ${r.score.toFixed(2)}]` : '';\n        \n        // Get 1-hop connections using single CodeRelation table\n        let connections = '';\n        if (nodeId) {\n          try {\n            const nodeLabel = nodeId.split(':')[0];\n            const connectionsQuery = `\n              MATCH (n:${nodeLabel} {id: '${nodeId.replace(/'/g, \"''\")}'})\n              OPTIONAL MATCH (n)-[r1:CodeRelation]->(dst)\n              OPTIONAL MATCH (src)-[r2:CodeRelation]->(n)\n              RETURN \n                collect(DISTINCT {name: dst.name, type: r1.type, confidence: r1.confidence}) AS outgoing,\n                collect(DISTINCT {name: src.name, type: r2.type, confidence: r2.confidence}) AS incoming\n              LIMIT 1\n            `;\n            const connRes = await executeQuery(connectionsQuery);\n            if (connRes.length > 0) {\n              const row = connRes[0];\n              const rawOutgoing = Array.isArray(row) ? row[0] : (row.outgoing || []);\n              const rawIncoming = Array.isArray(row) ? row[1] : (row.incoming || []);\n              const outgoing = (rawOutgoing || []).filter((c: any) => c && c.name).slice(0, 3);\n              const incoming = (rawIncoming || []).filter((c: any) => c && c.name).slice(0, 3);\n              \n              const fmt = (c: any, dir: 'out' | 'in') => {\n                const conf = c.confidence ? Math.round(c.confidence * 100) : 100;\n                return dir === 'out' \n                  ? 
`-[${c.type} ${conf}%]-> ${c.name}`\n                  : `<-[${c.type} ${conf}%]- ${c.name}`;\n              };\n              \n              const outList = outgoing.map((c: any) => fmt(c, 'out'));\n              const inList = incoming.map((c: any) => fmt(c, 'in'));\n              if (outList.length || inList.length) {\n                connections = `\\n    Connections: ${[...outList, ...inList].join(', ')}`;\n              }\n            }\n          } catch {\n            // Skip connections if query fails\n          }\n        }\n        \n        // Cluster membership\n        let clusterLabel = 'Unclustered';\n        if (nodeId) {\n          try {\n            const nodeLabel = nodeId.split(':')[0];\n            const clusterQuery = `\n              MATCH (n:${nodeLabel} {id: '${nodeId.replace(/'/g, \"''\")}'})\n              MATCH (n)-[:CodeRelation {type: 'MEMBER_OF'}]->(c:Community)\n              RETURN c.label AS label\n              LIMIT 1\n            `;\n            const clusterRes = await executeQuery(clusterQuery);\n            if (clusterRes.length > 0) {\n              const row = clusterRes[0];\n              const labelValue = Array.isArray(row) ? 
row[0] : row.label;\n              if (labelValue) clusterLabel = labelValue;\n            }\n          } catch {\n            // Skip cluster lookup if query fails\n          }\n        }\n        \n        // Process participation\n        const processes: ProcessInfo[] = [];\n        if (nodeId) {\n          try {\n            const nodeLabel = nodeId.split(':')[0];\n            const processQuery = `\n              MATCH (n:${nodeLabel} {id: '${nodeId.replace(/'/g, \"''\")}'})\n              MATCH (n)-[r:CodeRelation {type: 'STEP_IN_PROCESS'}]->(p:Process)\n              RETURN p.id AS id, p.label AS label, r.step AS step, p.stepCount AS stepCount\n              ORDER BY r.step\n            `;\n            const procRes = await executeQuery(processQuery);\n            for (const row of procRes) {\n              const id = Array.isArray(row) ? row[0] : row.id;\n              const labelValue = Array.isArray(row) ? row[1] : row.label;\n              const step = Array.isArray(row) ? row[2] : row.step;\n              const stepCount = Array.isArray(row) ? row[3] : row.stepCount;\n              if (id && labelValue) {\n                processes.push({ id, label: labelValue, step, stepCount });\n              }\n            }\n          } catch {\n            // Skip process lookup if query fails\n          }\n        }\n        \n        results.push({\n          idx: i + 1,\n          nodeId,\n          name,\n          label,\n          filePath,\n          location,\n          sources,\n          score,\n          connections,\n          clusterLabel,\n          processes,\n        });\n      }\n      \n      const formatResult = (r: ResultInfo, stepInfo?: ProcessInfo) => {\n        const stepLabel = stepInfo?.step ? ` (step ${stepInfo.step}/${stepInfo.stepCount ?? 
'?'})` : '';\n        return `[${r.idx}] ${r.label}: ${r.name}${r.score}${stepLabel}\\n    ID: ${r.nodeId}\\n    File: ${r.filePath}${r.location}\\n    Cluster: ${r.clusterLabel}\\n    Found by: ${r.sources}${r.connections}`;\n      };\n      \n      if (!shouldGroup) {\n        return `Found ${searchResults.length} matches:\\n\\n${results.map(r => formatResult(r)).join('\\n\\n')}`;\n      }\n      \n      // Group by process (or \"No process\")\n      const processMap = new Map<string, { label: string; stepCount?: number; entries: { result: ResultInfo; step?: number; stepCount?: number }[] }>();\n      const noProcessKey = '__no_process__';\n      \n      for (const r of results) {\n        if (r.processes.length === 0) {\n          if (!processMap.has(noProcessKey)) {\n            processMap.set(noProcessKey, { label: 'No process', entries: [] });\n          }\n          processMap.get(noProcessKey)!.entries.push({ result: r });\n          continue;\n        }\n        \n        for (const p of r.processes) {\n          if (!processMap.has(p.id)) {\n            processMap.set(p.id, { label: p.label, stepCount: p.stepCount, entries: [] });\n          }\n          processMap.get(p.id)!.entries.push({ result: r, step: p.step, stepCount: p.stepCount });\n        }\n      }\n      \n      const sortedProcesses = Array.from(processMap.entries()).sort((a, b) => {\n        const aCount = a[1].entries.length;\n        const bCount = b[1].entries.length;\n        return bCount - aCount;\n      });\n      \n      const lines: string[] = [];\n      lines.push(`Found ${searchResults.length} matches grouped by process:`);\n      lines.push('');\n      \n      for (const [pid, group] of sortedProcesses) {\n        const stepInfo = group.stepCount ? `, ${group.stepCount} steps` : '';\n        const header = pid === noProcessKey\n          ? 
`NO PROCESS (${group.entries.length} matches)`\n          : `PROCESS: ${group.label} (${group.entries.length} matches${stepInfo})`;\n        lines.push(header);\n        group.entries.forEach(entry => {\n          const stepLabel = entry.step ? { id: pid, label: group.label, step: entry.step, stepCount: entry.stepCount } : undefined;\n          lines.push(formatResult(entry.result, stepLabel));\n        });\n        lines.push('');\n      }\n      \n      return lines.join('\\n').trim();\n    },\n    {\n      name: 'search',\n      description: 'Search for code by keywords or concepts. Combines keyword matching and semantic understanding. Groups results by process with cluster context.',\n      schema: z.object({\n        query: z.string().describe('What you are looking for (e.g., \"authentication middleware\", \"database connection\")'),\n        groupByProcess: z.boolean().optional().nullable().describe('Group results by process (default: true)'),\n        limit: z.number().optional().nullable().describe('Max results to return (default: 10)'),\n      }),\n    }\n  );\n\n  // ============================================================================\n  // TOOL 2: CYPHER (Raw Cypher, auto-embeds {{QUERY_VECTOR}} if present)\n  // ============================================================================\n  \n  /**\n   * Execute Cypher queries with optional vector embedding\n   */\n  const cypherTool = tool(\n    async ({ query, cypher }: { query?: string; cypher: string }) => {\n      try {\n        let finalCypher = cypher;\n        \n        // Auto-embed if {{QUERY_VECTOR}} placeholder is present\n        if (cypher.includes('{{QUERY_VECTOR}}')) {\n          if (!query) {\n            return \"Error: Your Cypher contains {{QUERY_VECTOR}} but you didn't provide a 'query' to embed. 
Add a natural language query.\";\n          }\n          \n          if (!isEmbeddingReady()) {\n            // Try to init embedder\n            try {\n              await initEmbedder();\n            } catch (err) {\n              if (err instanceof WebGPUNotAvailableError) {\n                await initEmbedder(undefined, {}, 'wasm');\n              } else {\n                return 'Embeddings not available. Remove {{QUERY_VECTOR}} and use a non-vector query.';\n              }\n            }\n          }\n          \n          const queryEmbedding = await embedText(query);\n          const queryVec = embeddingToArray(queryEmbedding);\n          const queryVecStr = `CAST([${queryVec.join(',')}] AS FLOAT[384])`;\n          finalCypher = cypher.replace(/\\{\\{\\s*QUERY_VECTOR\\s*\\}\\}/g, queryVecStr);\n        }\n        \n        const results = await executeQuery(finalCypher);\n        \n        if (results.length === 0) {\n          return 'Query returned no results.';\n        }\n        \n        // Get column names from first result (now objects from executeQuery)\n        const firstRow = results[0];\n        const columnNames = typeof firstRow === 'object' && !Array.isArray(firstRow)\n          ? Object.keys(firstRow)\n          : [];\n        \n        // Format as markdown table (more token efficient than JSON per row)\n        if (columnNames.length > 0) {\n          const header = `| ${columnNames.join(' | ')} |`;\n          const separator = `|${columnNames.map(() => '---').join('|')}|`;\n          \n          const rows = results.slice(0, 50).map(row => {\n            const values = columnNames.map(col => {\n              const val = row[col];\n              if (val === null || val === undefined) return '';\n              if (typeof val === 'object') return JSON.stringify(val);\n              // Truncate long values and escape pipe characters\n              const str = String(val).replace(/\\|/g, '\\\\|');\n              return str.length > 60 ? 
str.slice(0, 57) + '...' : str;\n            });\n            return `| ${values.join(' | ')} |`;\n          }).join('\\n');\n          \n          const truncated = results.length > 50 ? `\\n\\n_(${results.length - 50} more rows)_` : '';\n          return `**${results.length} results:**\\n\\n${header}\\n${separator}\\n${rows}${truncated}`;\n        }\n        \n        // Fallback for non-object results\n        const formatted = results.slice(0, 50).map((row, i) => {\n          return `[${i + 1}] ${JSON.stringify(row)}`;\n        });\n        const truncated = results.length > 50 ? `\\n... (${results.length - 50} more)` : '';\n        return `${results.length} results:\\n${formatted.join('\\n')}${truncated}`;\n      } catch (error) {\n        const message = error instanceof Error ? error.message : String(error);\n        return `Cypher error: ${message}\\n\\nCheck your query syntax. Node tables: File, Folder, Function, Class, Interface, Method, CodeElement. Relation: CodeRelation with type property (CONTAINS, DEFINES, IMPORTS, CALLS, EXTENDS, IMPLEMENTS). Example: MATCH (f:File)-[:CodeRelation {type: 'IMPORTS'}]->(g:File) RETURN f, g`;\n      }\n    },\n    {\n      name: 'cypher',\n      description: `Execute a Cypher query against the code graph. 
Use for structural queries like finding callers, tracing imports, class inheritance, or custom traversals.\n\nNode tables: File, Folder, Function, Class, Interface, Method, CodeElement\nRelation: CodeRelation (single table with 'type' property: CONTAINS, DEFINES, IMPORTS, CALLS, EXTENDS, IMPLEMENTS)\n\nExample queries:\n- Functions calling a function: MATCH (caller:Function)-[:CodeRelation {type: 'CALLS'}]->(fn:Function {name: 'validate'}) RETURN caller.name, caller.filePath\n- Class inheritance: MATCH (child:Class)-[:CodeRelation {type: 'EXTENDS'}]->(parent:Class) RETURN child.name, parent.name\n- Classes implementing interface: MATCH (c:Class)-[:CodeRelation {type: 'IMPLEMENTS'}]->(i:Interface) RETURN c.name, i.name\n- Files importing a file: MATCH (f:File)-[:CodeRelation {type: 'IMPORTS'}]->(target:File) WHERE target.name = 'utils.ts' RETURN f.name\n- All connections (with confidence): MATCH (n)-[r:CodeRelation]-(m) WHERE n.name = 'MyClass' AND r.confidence > 0.8 RETURN m.name, r.type, r.confidence\n- Find fuzzy matches: MATCH (n)-[r:CodeRelation]-(m) WHERE r.confidence < 0.8 RETURN n.name, r.reason\n\nFor semantic+graph queries, include {{QUERY_VECTOR}} placeholder and provide a 'query' parameter:\nCALL QUERY_VECTOR_INDEX('CodeEmbedding', 'code_embedding_idx', {{QUERY_VECTOR}}, 10) YIELD node AS emb, distance\nWITH emb, distance WHERE distance < 0.5\nMATCH (n:Function {id: emb.nodeId}) RETURN n`,\n      schema: z.object({\n        cypher: z.string().describe('The Cypher query to execute'),\n        query: z.string().optional().nullable().describe('Natural language query to embed (required if cypher contains {{QUERY_VECTOR}})'),\n      }),\n    }\n  );\n\n  // ============================================================================\n  // TOOL 3: GREP (Regex pattern search)\n  // ============================================================================\n  \n  const grepTool = tool(\n    async ({ pattern, fileFilter, caseSensitive, maxResults }: { \n      
pattern: string; \n      fileFilter?: string;\n      caseSensitive?: boolean;\n      maxResults?: number;\n    }) => {\n      try {\n        const flags = caseSensitive ? 'g' : 'gi';\n        let regex: RegExp;\n        try {\n          regex = new RegExp(pattern, flags);\n        } catch (e) {\n          return `Invalid regex: ${pattern}. Error: ${e instanceof Error ? e.message : String(e)}`;\n        }\n        \n        const results: Array<{ file: string; line: number; content: string }> = [];\n        const limit = maxResults ?? 100;\n        \n        for (const [filePath, content] of fileContents.entries()) {\n          if (fileFilter && !filePath.toLowerCase().includes(fileFilter.toLowerCase())) {\n            continue;\n          }\n          \n          const lines = content.split('\\n');\n          for (let i = 0; i < lines.length; i++) {\n            if (regex.test(lines[i])) {\n              results.push({\n                file: filePath,\n                line: i + 1,\n                content: lines[i].trim().slice(0, 150),\n              });\n              if (results.length >= limit) break;\n            }\n            regex.lastIndex = 0;\n          }\n          if (results.length >= limit) break;\n        }\n        \n        if (results.length === 0) {\n          return `No matches for \"${pattern}\"${fileFilter ? ` in files matching \"${fileFilter}\"` : ''}`;\n        }\n        \n        const formatted = results.map(r => `${r.file}:${r.line}: ${r.content}`).join('\\n');\n        const truncatedMsg = results.length >= limit ? `\\n\\n(Showing first ${limit} results)` : '';\n        \n        return `Found ${results.length} matches:\\n\\n${formatted}${truncatedMsg}`;\n      } catch (error) {\n        return `Grep error: ${error instanceof Error ? error.message : String(error)}`;\n      }\n    },\n    {\n      name: 'grep',\n      description: 'Search for exact text patterns across all files using regex. 
Use for finding specific strings, error messages, TODOs, variable names, etc.',\n      schema: z.object({\n        pattern: z.string().describe('Regex pattern to search for (e.g., \"TODO\", \"console\\\\.log\", \"API_KEY\")'),\n        fileFilter: z.string().optional().nullable().describe('Only search files containing this string (e.g., \".ts\", \"src/api\")'),\n        caseSensitive: z.boolean().optional().nullable().describe('Case-sensitive search (default: false)'),\n        maxResults: z.number().optional().nullable().describe('Max results (default: 100)'),\n      }),\n    }\n  );\n\n  // ============================================================================\n  // TOOL 4: READ (Read file content)\n  // ============================================================================\n  \n  const readTool = tool(\n    async ({ filePath }: { filePath: string }) => {\n      const normalizedRequest = filePath.replace(/\\\\/g, '/').toLowerCase();\n      \n      // Try exact match first\n      let content = fileContents.get(filePath);\n      let actualPath = filePath;\n      \n      // Smart matching if not found\n      if (!content) {\n        const candidates: Array<{ path: string; score: number }> = [];\n        \n        for (const [path] of fileContents.entries()) {\n          const normalizedPath = path.toLowerCase();\n          \n          if (normalizedPath === normalizedRequest) {\n            candidates.push({ path, score: 1000 });\n          } else if (normalizedPath.endsWith(normalizedRequest)) {\n            candidates.push({ path, score: 100 + (200 - path.length) });\n          } else {\n            const requestSegments = normalizedRequest.split('/').filter(Boolean);\n            const pathSegments = normalizedPath.split('/');\n            let matchScore = 0;\n            let lastMatchIdx = -1;\n            \n            for (const seg of requestSegments) {\n              const idx = pathSegments.findIndex((s, i) => i > lastMatchIdx && 
s.includes(seg));\n              if (idx > lastMatchIdx) {\n                matchScore += 10;\n                lastMatchIdx = idx;\n              }\n            }\n            \n            if (matchScore >= requestSegments.length * 5) {\n              candidates.push({ path, score: matchScore });\n            }\n          }\n        }\n        \n        candidates.sort((a, b) => b.score - a.score);\n        if (candidates.length > 0) {\n          actualPath = candidates[0].path;\n          content = fileContents.get(actualPath);\n        }\n      }\n      \n      if (!content) {\n        const fileName = filePath.split('/').pop()?.toLowerCase() || '';\n        const similar = Array.from(fileContents.keys())\n          .filter(p => p.toLowerCase().includes(fileName))\n          .slice(0, 5);\n        \n        if (similar.length > 0) {\n          return `File not found: \"${filePath}\"\\n\\nDid you mean:\\n${similar.map(f => `  - ${f}`).join('\\n')}`;\n        }\n        return `File not found: \"${filePath}\"`;\n      }\n      \n      // Truncate large files\n      const MAX_CONTENT = 50000;\n      if (content.length > MAX_CONTENT) {\n        const lines = content.split('\\n').length;\n        return `File: ${actualPath} (${lines} lines, truncated)\\n\\n${content.slice(0, MAX_CONTENT)}\\n\\n... [truncated]`;\n      }\n      \n      const lines = content.split('\\n').length;\n      return `File: ${actualPath} (${lines} lines)\\n\\n${content}`;\n    },\n    {\n      name: 'read',\n      description: 'Read the full content of a file. 
Use to see source code after finding files via search or grep.',\n      schema: z.object({\n        filePath: z.string().describe('File path to read (can be partial like \"src/utils.ts\")'),\n      }),\n    }\n  );\n\n  // ============================================================================\n  // TOOL 5: OVERVIEW (Codebase map)\n  // ============================================================================\n  \n  const overviewTool = tool(\n    async () => {\n      try {\n        const clustersQuery = `\n          MATCH (c:Community)\n          RETURN c.id AS id, c.label AS label, c.cohesion AS cohesion, c.symbolCount AS symbolCount, c.description AS description\n          ORDER BY c.symbolCount DESC\n          LIMIT 200\n        `;\n        const processesQuery = `\n          MATCH (p:Process)\n          RETURN p.id AS id, p.label AS label, p.processType AS type, p.stepCount AS stepCount, p.communities AS communities\n          ORDER BY p.stepCount DESC\n          LIMIT 200\n        `;\n        const depsQuery = `\n          MATCH (a)-[:CodeRelation {type: 'CALLS'}]->(b)\n          MATCH (a)-[:CodeRelation {type: 'MEMBER_OF'}]->(c1:Community)\n          MATCH (b)-[:CodeRelation {type: 'MEMBER_OF'}]->(c2:Community)\n          WHERE c1.id <> c2.id\n          RETURN c1.label AS \\`from\\`, c2.label AS \\`to\\`, COUNT(*) AS calls\n          ORDER BY calls DESC\n          LIMIT 15\n        `;\n        const criticalQuery = `\n          MATCH (s)-[r:CodeRelation {type: 'STEP_IN_PROCESS'}]->(p:Process)\n          RETURN p.label AS label, COUNT(r) AS steps\n          ORDER BY steps DESC\n          LIMIT 10\n        `;\n        \n        const [clusters, processes, deps, critical] = await Promise.all([\n          executeQuery(clustersQuery),\n          executeQuery(processesQuery),\n          executeQuery(depsQuery),\n          executeQuery(criticalQuery),\n        ]);\n        \n        const clusterLines = clusters.map((row: any) => {\n          const label = 
Array.isArray(row) ? row[1] : row.label;\n          const symbols = Array.isArray(row) ? row[3] : row.symbolCount;\n          const cohesion = Array.isArray(row) ? row[2] : row.cohesion;\n          const desc = Array.isArray(row) ? row[4] : row.description;\n          const cohesionText = cohesion !== null && cohesion !== undefined ? Number(cohesion).toFixed(2) : '';\n          return `| ${label || ''} | ${symbols ?? ''} | ${cohesionText} | ${desc ?? ''} |`;\n        });\n        \n        const processLines = processes.map((row: any) => {\n          const label = Array.isArray(row) ? row[1] : row.label;\n          const steps = Array.isArray(row) ? row[3] : row.stepCount;\n          const type = Array.isArray(row) ? row[2] : row.type;\n          const communities = Array.isArray(row) ? row[4] : row.communities;\n          const clusterText = Array.isArray(communities) ? communities.length : (communities ? 1 : 0);\n          return `| ${label || ''} | ${steps ?? ''} | ${type ?? ''} | ${clusterText} |`;\n        });\n        \n        const depLines = deps.map((row: any) => {\n          const from = Array.isArray(row) ? row[0] : row.from;\n          const to = Array.isArray(row) ? row[1] : row.to;\n          const calls = Array.isArray(row) ? row[2] : row.calls;\n          return `- ${from} -> ${to} (${calls} calls)`;\n        });\n        \n        const criticalLines = critical.map((row: any) => {\n          const label = Array.isArray(row) ? row[0] : row.label;\n          const steps = Array.isArray(row) ? 
row[1] : row.steps;\n          return `- ${label} (${steps} steps)`;\n        });\n        \n        return [\n          `CLUSTERS (${clusters.length} total):`,\n          `| Cluster | Symbols | Cohesion | Description |`,\n          `| --- | --- | --- | --- |`,\n          ...clusterLines,\n          ``,\n          `PROCESSES (${processes.length} total):`,\n          `| Process | Steps | Type | Clusters |`,\n          `| --- | --- | --- | --- |`,\n          ...processLines,\n          ``,\n          `CLUSTER DEPENDENCIES:`,\n          ...(depLines.length > 0 ? depLines : ['- None found']),\n          ``,\n          `CRITICAL PATHS:`,\n          ...(criticalLines.length > 0 ? criticalLines : ['- None found']),\n        ].join('\\n');\n      } catch (error) {\n        return `Overview error: ${error instanceof Error ? error.message : String(error)}`;\n      }\n    },\n    {\n      name: 'overview',\n      description: 'Codebase map showing all clusters and processes, plus cross-cluster dependencies.',\n      schema: z.object({}),\n    }\n  );\n\n  // ============================================================================\n  // TOOL 6: EXPLORE (Deep dive on symbol, cluster, or process)\n  // ============================================================================\n  \n  const exploreTool = tool(\n    async ({ target, type }: { target: string; type?: 'symbol' | 'cluster' | 'process' | null }) => {\n      const safeTarget = target.replace(/'/g, \"''\");\n      let resolvedType = type ?? null;\n      let processRow: any | null = null;\n      let communityRow: any | null = null;\n      let symbolRow: any | null = null;\n      \n      const getRowValue = (row: any, idx: number, key: string) => Array.isArray(row) ? 
row[idx] : row[key];\n      \n      if (!resolvedType || resolvedType === 'process') {\n        const processQuery = `\n          MATCH (p:Process)\n          WHERE p.id = '${safeTarget}' OR p.label = '${safeTarget}'\n          RETURN p.id AS id, p.label AS label, p.processType AS type, p.stepCount AS stepCount\n          LIMIT 1\n        `;\n        const processRes = await executeQuery(processQuery);\n        if (processRes.length > 0) {\n          processRow = processRes[0];\n          resolvedType = 'process';\n        }\n      }\n      \n      if (!resolvedType || resolvedType === 'cluster') {\n        const communityQuery = `\n          MATCH (c:Community)\n          WHERE c.id = '${safeTarget}' OR c.label = '${safeTarget}' OR c.heuristicLabel = '${safeTarget}'\n          RETURN c.id AS id, c.label AS label, c.cohesion AS cohesion, c.symbolCount AS symbolCount, c.description AS description\n          LIMIT 1\n        `;\n        const communityRes = await executeQuery(communityQuery);\n        if (communityRes.length > 0) {\n          communityRow = communityRes[0];\n          resolvedType = 'cluster';\n        }\n      }\n      \n      if (!resolvedType || resolvedType === 'symbol') {\n        const symbolQuery = `\n          MATCH (n)\n          WHERE n.name = '${safeTarget}' OR n.id = '${safeTarget}' OR n.filePath = '${safeTarget}'\n          RETURN n.id AS id, n.name AS name, n.filePath AS filePath, label(n) AS nodeType\n          LIMIT 5\n        `;\n        const symbolRes = await executeQuery(symbolQuery);\n        if (symbolRes.length > 0) {\n          symbolRow = symbolRes[0];\n          resolvedType = 'symbol';\n        }\n      }\n      \n      if (!resolvedType) {\n        return `Could not find \"${target}\" as a symbol, cluster, or process. 
Try search first.`;\n      }\n      \n      if (resolvedType === 'process') {\n        const pid = getRowValue(processRow, 0, 'id');\n        const label = getRowValue(processRow, 1, 'label');\n        const ptype = getRowValue(processRow, 2, 'type');\n        const stepCount = getRowValue(processRow, 3, 'stepCount');\n        \n        const stepsQuery = `\n          MATCH (s)-[r:CodeRelation {type: 'STEP_IN_PROCESS'}]->(p:Process {id: '${pid.replace(/'/g, \"''\")}'})\n          RETURN s.name AS name, s.filePath AS filePath, r.step AS step\n          ORDER BY r.step\n        `;\n        const clustersQuery = `\n          MATCH (c:Community)<-[:CodeRelation {type: 'MEMBER_OF'}]-(s)\n          MATCH (s)-[:CodeRelation {type: 'STEP_IN_PROCESS'}]->(p:Process {id: '${pid.replace(/'/g, \"''\")}'})\n          RETURN DISTINCT c.id AS id, c.label AS label, c.description AS description\n          ORDER BY c.label\n          LIMIT 20\n        `;\n        \n        const [steps, clusters] = await Promise.all([\n          executeQuery(stepsQuery),\n          executeQuery(clustersQuery),\n        ]);\n        \n        const stepLines = steps.map((row: any) => {\n          const name = getRowValue(row, 0, 'name');\n          const filePath = getRowValue(row, 1, 'filePath');\n          const step = getRowValue(row, 2, 'step');\n          return `- ${step}. ${name} (${filePath || 'n/a'})`;\n        });\n        \n        const clusterLines = clusters.map((row: any) => {\n          const clabel = getRowValue(row, 1, 'label');\n          const desc = getRowValue(row, 2, 'description');\n          return `- ${clabel}${desc ? ` — ${desc}` : ''}`;\n        });\n        \n        return [\n          `PROCESS: ${label}`,\n          `Type: ${ptype || 'n/a'}`,\n          `Steps: ${stepCount ?? steps.length}`,\n          ``,\n          `STEPS:`,\n          ...(stepLines.length > 0 ? 
stepLines : ['- None found']),\n          ``,\n          `CLUSTERS TOUCHED:`,\n          ...(clusterLines.length > 0 ? clusterLines : ['- None found']),\n        ].join('\\n');\n      }\n      \n      if (resolvedType === 'cluster') {\n        const cid = getRowValue(communityRow, 0, 'id');\n        const label = getRowValue(communityRow, 1, 'label');\n        const cohesion = getRowValue(communityRow, 2, 'cohesion');\n        const symbolCount = getRowValue(communityRow, 3, 'symbolCount');\n        const description = getRowValue(communityRow, 4, 'description');\n        \n        const membersQuery = `\n          MATCH (c:Community {id: '${cid.replace(/'/g, \"''\")}'})<-[:CodeRelation {type: 'MEMBER_OF'}]-(m)\n          RETURN m.name AS name, m.filePath AS filePath, label(m) AS nodeType\n          LIMIT 50\n        `;\n        const processesQuery = `\n          MATCH (c:Community {id: '${cid.replace(/'/g, \"''\")}'})<-[:CodeRelation {type: 'MEMBER_OF'}]-(s)\n          MATCH (s)-[:CodeRelation {type: 'STEP_IN_PROCESS'}]->(p:Process)\n          RETURN DISTINCT p.id AS id, p.label AS label, p.stepCount AS stepCount\n          ORDER BY p.stepCount DESC\n          LIMIT 20\n        `;\n        \n        const [members, processes] = await Promise.all([\n          executeQuery(membersQuery),\n          executeQuery(processesQuery),\n        ]);\n        \n        const memberLines = members.map((row: any) => {\n          const name = getRowValue(row, 0, 'name');\n          const filePath = getRowValue(row, 1, 'filePath');\n          const nodeType = getRowValue(row, 2, 'nodeType');\n          return `- ${nodeType}: ${name} (${filePath || 'n/a'})`;\n        });\n        \n        const processLines = processes.map((row: any) => {\n          const plabel = getRowValue(row, 1, 'label');\n          const steps = getRowValue(row, 2, 'stepCount');\n          return `- ${plabel} (${steps} steps)`;\n        });\n        \n        return [\n          `CLUSTER: ${label}`,\n      
    `Symbols: ${symbolCount ?? members.length}`,\n          `Cohesion: ${cohesion !== null && cohesion !== undefined ? Number(cohesion).toFixed(2) : 'n/a'}`,\n          `Description: ${description || 'n/a'}`,\n          ``,\n          `TOP MEMBERS:`,\n          ...(memberLines.length > 0 ? memberLines : ['- None found']),\n          ``,\n          `PROCESSES TOUCHING THIS CLUSTER:`,\n          ...(processLines.length > 0 ? processLines : ['- None found']),\n        ].join('\\n');\n      }\n      \n      if (resolvedType === 'symbol') {\n        const nodeId = getRowValue(symbolRow, 0, 'id');\n        const name = getRowValue(symbolRow, 1, 'name');\n        const filePath = getRowValue(symbolRow, 2, 'filePath');\n        const nodeType = getRowValue(symbolRow, 3, 'nodeType');\n        \n        const clusterQuery = `\n          MATCH (n:${nodeType} {id: '${String(nodeId).replace(/'/g, \"''\")}'})\n          MATCH (n)-[:CodeRelation {type: 'MEMBER_OF'}]->(c:Community)\n          RETURN c.label AS label, c.description AS description\n          LIMIT 1\n        `;\n        const processQuery = `\n          MATCH (n:${nodeType} {id: '${String(nodeId).replace(/'/g, \"''\")}'})\n          MATCH (n)-[r:CodeRelation {type: 'STEP_IN_PROCESS'}]->(p:Process)\n          RETURN p.label AS label, r.step AS step, p.stepCount AS stepCount\n          ORDER BY r.step\n        `;\n        const connectionsQuery = `\n          MATCH (n:${nodeType} {id: '${String(nodeId).replace(/'/g, \"''\")}'})\n          OPTIONAL MATCH (n)-[r1:CodeRelation]->(dst)\n          OPTIONAL MATCH (src)-[r2:CodeRelation]->(n)\n          RETURN \n            collect(DISTINCT {name: dst.name, type: r1.type, confidence: r1.confidence}) AS outgoing,\n            collect(DISTINCT {name: src.name, type: r2.type, confidence: r2.confidence}) AS incoming\n          LIMIT 1\n        `;\n        \n        const [clusterRes, processRes, connRes] = await Promise.all([\n          executeQuery(clusterQuery),\n          
executeQuery(processQuery),\n          executeQuery(connectionsQuery),\n        ]);\n        \n        const clusterLabel = clusterRes.length > 0 ? getRowValue(clusterRes[0], 0, 'label') : 'Unclustered';\n        const clusterDesc = clusterRes.length > 0 ? getRowValue(clusterRes[0], 1, 'description') : '';\n        \n        const processLines = processRes.map((row: any) => {\n          const plabel = getRowValue(row, 0, 'label');\n          const step = getRowValue(row, 1, 'step');\n          const stepCount = getRowValue(row, 2, 'stepCount');\n          return `- ${plabel} (step ${step}/${stepCount ?? '?'})`;\n        });\n        \n        let connections = 'None';\n        if (connRes.length > 0) {\n          const row = connRes[0];\n          const rawOutgoing = Array.isArray(row) ? row[0] : (row.outgoing || []);\n          const rawIncoming = Array.isArray(row) ? row[1] : (row.incoming || []);\n          const outgoing = (rawOutgoing || []).filter((c: any) => c && c.name).slice(0, 5);\n          const incoming = (rawIncoming || []).filter((c: any) => c && c.name).slice(0, 5);\n          \n          const fmt = (c: any, dir: 'out' | 'in') => {\n            const conf = c.confidence ? Math.round(c.confidence * 100) : 100;\n            return dir === 'out' \n              ? `-[${c.type} ${conf}%]-> ${c.name}`\n              : `<-[${c.type} ${conf}%]- ${c.name}`;\n          };\n          const outList = outgoing.map((c: any) => fmt(c, 'out'));\n          const inList = incoming.map((c: any) => fmt(c, 'in'));\n          if (outList.length || inList.length) {\n            connections = [...outList, ...inList].join(', ');\n          }\n        }\n        \n        return [\n          `SYMBOL: ${nodeType} ${name}`,\n          `ID: ${nodeId}`,\n          `File: ${filePath || 'n/a'}`,\n          `Cluster: ${clusterLabel}${clusterDesc ? ` — ${clusterDesc}` : ''}`,\n          ``,\n          `PROCESSES:`,\n          ...(processLines.length > 0 ? 
processLines : ['- None found']),\n          ``,\n          `CONNECTIONS:`,\n          connections,\n        ].join('\\n');\n      }\n      \n      return `Unable to explore \"${target}\".`;\n    },\n    {\n      name: 'explore',\n      description: 'Deep dive on a symbol, cluster, or process. Shows membership, participation, and connections.',\n      schema: z.object({\n        target: z.string().describe('Name or ID of a symbol, cluster, or process'),\n        type: z.enum(['symbol', 'cluster', 'process']).optional().nullable().describe('Optional target type (auto-detected if omitted)'),\n      }),\n    }\n  );\n\n  // ============================================================================\n  // TOOL 7: IMPACT (Impact analysis)\n  // ============================================================================\n  \n  const impactTool = tool(\n    async ({ target, direction, maxDepth, relationTypes, includeTests, minConfidence }: { \n      target: string; \n      direction: 'upstream' | 'downstream';\n      maxDepth?: number;\n      relationTypes?: string[];\n      includeTests?: boolean;\n      minConfidence?: number;\n    }) => {\n      const depth = Math.min(maxDepth ?? 3, 10);\n      const showTests = includeTests ?? false; // Default: exclude test files\n      const minConf = minConfidence ?? 
0.7; // Default: exclude fuzzy matches (<70% confidence)\n      \n      // Test file patterns\n      const isTestFile = (path: string): boolean => {\n        if (!path) return false;\n        const p = path.toLowerCase();\n        return p.includes('.test.') || p.includes('.spec.') || \n               p.includes('__tests__') || p.includes('__mocks__') ||\n               p.endsWith('.test.ts') || p.endsWith('.test.tsx') ||\n               p.endsWith('.spec.ts') || p.endsWith('.spec.tsx');\n      };\n      \n      // Default to usage-based relation types (exclude CONTAINS, DEFINES for impact analysis)\n      const defaultRelTypes = ['CALLS', 'IMPORTS', 'EXTENDS', 'IMPLEMENTS'];\n      const activeRelTypes = relationTypes && relationTypes.length > 0 \n        ? relationTypes \n        : defaultRelTypes;\n      const relTypeFilter = activeRelTypes.map(t => `'${t}'`).join(', ');\n      \n      const directionLabel = direction === 'upstream' \n        ? 'Files that DEPEND ON this (breakage risk)'\n        : 'Dependencies this RELIES ON';\n      \n      // Try to find the target node first\n      // If target contains '/', search by filePath; otherwise by name\n      const isPathQuery = target.includes('/');\n      const escapedTarget = target.replace(/'/g, \"''\");\n      \n      const findTargetQuery = isPathQuery\n        ? 
`\n          MATCH (n) \n          WHERE n.filePath IS NOT NULL AND n.filePath CONTAINS '${escapedTarget}'\n          RETURN n.id AS id, label(n) AS nodeType, n.filePath AS filePath\n          LIMIT 10\n        `\n        : `\n          MATCH (n) \n          WHERE n.name = '${escapedTarget}'\n          RETURN n.id AS id, label(n) AS nodeType, n.filePath AS filePath\n          LIMIT 10\n        `;\n      \n      let targetResults;\n      try {\n        targetResults = await executeQuery(findTargetQuery);\n      } catch (error) {\n        return `Error finding target \"${target}\": ${error}`;\n      }\n      \n      if (!targetResults || targetResults.length === 0) {\n        return `Could not find \"${target}\" in the codebase. Try using the search tool first to find the exact name.`;\n      }\n      \n      // Handle multiple matches - require disambiguation\n      const allPaths = targetResults.map((r: any) => Array.isArray(r) ? r[2] : r.filePath).filter(Boolean);\n      \n      // If multiple matches and target doesn't look like a specific path, ask for clarification\n      if (targetResults.length > 1 && !target.includes('/')) {\n        return `⚠️ AMBIGUOUS TARGET: Multiple files named \"${target}\" found:\\n\\n${allPaths.map((p: string, i: number) => `${i + 1}. ${p}`).join('\\n')}\\n\\nPlease specify which file you mean by using a more specific path, e.g.:\\n- impact(\"${allPaths[0].split('/').slice(-3).join('/')}\")\\n- impact(\"${allPaths[1]?.split('/').slice(-3).join('/') || allPaths[0]}\")`;\n      }\n      \n      // If target contains a path, try to find matching file\n      let targetNode = targetResults[0];\n      if (target.includes('/') && targetResults.length > 1) {\n        const exactMatch = targetResults.find((r: any) => {\n          const path = Array.isArray(r) ? 
r[2] : r.filePath;\n          return path && path.toLowerCase().includes(target.toLowerCase());\n        });\n        if (exactMatch) {\n          targetNode = exactMatch;\n        } else {\n          // Still ambiguous even with path\n          return `⚠️ AMBIGUOUS TARGET: Could not uniquely match \"${target}\". Found:\\n\\n${allPaths.map((p: string, i: number) => `${i + 1}. ${p}`).join('\\n')}\\n\\nPlease use a more specific path.`;\n        }\n      }\n      \n      const targetId = Array.isArray(targetNode) ? targetNode[0] : targetNode.id;\n      const targetType = Array.isArray(targetNode) ? targetNode[1] : targetNode.nodeType;\n      const targetFilePath = Array.isArray(targetNode) ? targetNode[2] : targetNode.filePath;\n      \n      if (import.meta.env.DEV) {\n        console.log(`🎯 Impact: Found target \"${target}\" → id=${targetId}, type=${targetType}, filePath=${targetFilePath}`);\n      }\n      \n      // No more multipleMatchWarning needed - we either disambiguated or returned early\n      const multipleMatchWarning = '';\n      \n      // For File targets, find what calls code INSIDE the file (by filePath)\n      // For code elements (Function, Class, etc.), use the direct id\n      const isFileTarget = targetType === 'File';\n      \n      // Query each depth level separately (LadybugDB doesn't support list comprehensions on paths)\n      // For depth 1: direct connections only\n      // For depth 2+: chain multiple single-hop queries\n      const depthQueries: Promise<any[]>[] = [];\n      \n      // Depth 1 query - direct connections with edge metadata\n      // For File targets: find callers of any code element with matching filePath\n      const d1Query = direction === 'upstream'\n        ? isFileTarget\n          ? 
`\n            MATCH (affected)-[r:CodeRelation]->(callee)\n            WHERE callee.filePath = '${(targetFilePath || target).replace(/'/g, \"''\")}'\n              AND r.type IN [${relTypeFilter}]\n              AND affected.filePath <> callee.filePath\n              AND (r.confidence IS NULL OR r.confidence >= ${minConf})\n            RETURN DISTINCT \n              affected.id AS id, \n              affected.name AS name, \n              label(affected) AS nodeType, \n              affected.filePath AS filePath,\n              affected.startLine AS startLine,\n              1 AS depth,\n              r.type AS edgeType,\n              r.confidence AS confidence,\n              r.reason AS reason\n            LIMIT 300\n          `\n          : `\n            MATCH (target {id: '${targetId.replace(/'/g, \"''\")}'})\n            MATCH (affected)-[r:CodeRelation]->(target)\n            WHERE r.type IN [${relTypeFilter}]\n              AND (r.confidence IS NULL OR r.confidence >= ${minConf})\n            RETURN DISTINCT \n              affected.id AS id, \n              affected.name AS name, \n              label(affected) AS nodeType, \n              affected.filePath AS filePath,\n              affected.startLine AS startLine,\n              1 AS depth,\n              r.type AS edgeType,\n              r.confidence AS confidence,\n              r.reason AS reason\n            LIMIT 300\n          `\n        : isFileTarget\n          ? 
`\n            MATCH (caller)-[r:CodeRelation]->(affected)\n            WHERE caller.filePath = '${(targetFilePath || target).replace(/'/g, \"''\")}'\n              AND r.type IN [${relTypeFilter}]\n              AND caller.filePath <> affected.filePath\n              AND (r.confidence IS NULL OR r.confidence >= ${minConf})\n            RETURN DISTINCT \n              affected.id AS id, \n              affected.name AS name, \n              label(affected) AS nodeType, \n              affected.filePath AS filePath,\n              affected.startLine AS startLine,\n              1 AS depth,\n              r.type AS edgeType,\n              r.confidence AS confidence,\n              r.reason AS reason\n            LIMIT 300\n          `\n          : `\n            MATCH (target {id: '${targetId.replace(/'/g, \"''\")}'})\n            MATCH (target)-[r:CodeRelation]->(affected)\n            WHERE r.type IN [${relTypeFilter}]\n              AND (r.confidence IS NULL OR r.confidence >= ${minConf})\n            RETURN DISTINCT \n              affected.id AS id, \n              affected.name AS name, \n              label(affected) AS nodeType, \n              affected.filePath AS filePath,\n              affected.startLine AS startLine,\n              1 AS depth,\n              r.type AS edgeType,\n              r.confidence AS confidence,\n              r.reason AS reason\n            LIMIT 300\n          `;\n      if (import.meta.env.DEV) {\n        console.log(`🔍 Impact d=1 query:\\n${d1Query}`);\n      }\n      depthQueries.push(executeQuery(d1Query).then(results => {\n        if (import.meta.env.DEV) {\n          console.log(`📊 Impact d=1 results: ${results.length} rows`);\n          if (results.length > 0) {\n            console.log('   Sample:', results.slice(0, 3));\n          }\n        }\n        return results;\n      }).catch(err => {\n        if (import.meta.env.DEV) console.warn('Impact d=1 query failed:', err);\n        return [];\n      }));\n      \n      
// Depth 2 query - 2 hops\n      if (depth >= 2) {\n        const d2Query = direction === 'upstream'\n          ? `\n            MATCH (target {id: '${targetId.replace(/'/g, \"''\")}'})\n            MATCH (a)-[r1:CodeRelation]->(target)\n            MATCH (affected)-[r2:CodeRelation]->(a)\n            WHERE r1.type IN [${relTypeFilter}] AND r2.type IN [${relTypeFilter}]\n              AND affected.id <> target.id\n              AND (r1.confidence IS NULL OR r1.confidence >= ${minConf})\n              AND (r2.confidence IS NULL OR r2.confidence >= ${minConf})\n            RETURN DISTINCT \n              affected.id AS id, \n              affected.name AS name, \n              label(affected) AS nodeType, \n              affected.filePath AS filePath,\n              affected.startLine AS startLine,\n              2 AS depth,\n              r2.type AS edgeType,\n              r2.confidence AS confidence,\n              r2.reason AS reason\n            LIMIT 200\n          `\n          : `\n            MATCH (target {id: '${targetId.replace(/'/g, \"''\")}'})\n            MATCH (target)-[r1:CodeRelation]->(a)\n            MATCH (a)-[r2:CodeRelation]->(affected)\n            WHERE r1.type IN [${relTypeFilter}] AND r2.type IN [${relTypeFilter}]\n              AND affected.id <> target.id\n              AND (r1.confidence IS NULL OR r1.confidence >= ${minConf})\n              AND (r2.confidence IS NULL OR r2.confidence >= ${minConf})\n            RETURN DISTINCT \n              affected.id AS id, \n              affected.name AS name, \n              label(affected) AS nodeType, \n              affected.filePath AS filePath,\n              affected.startLine AS startLine,\n              2 AS depth,\n              r2.type AS edgeType,\n              r2.confidence AS confidence,\n              r2.reason AS reason\n            LIMIT 200\n          `;\n        depthQueries.push(executeQuery(d2Query).catch(err => {\n          if (import.meta.env.DEV) console.warn('Impact d=2 
query failed:', err);\n          return [];\n        }));\n      }\n      \n      // Depth 3 query - 3 hops\n      if (depth >= 3) {\n        const d3Query = direction === 'upstream'\n          ? `\n            MATCH (target {id: '${targetId.replace(/'/g, \"''\")}'})\n            MATCH (a)-[r1:CodeRelation]->(target)\n            MATCH (b)-[r2:CodeRelation]->(a)\n            MATCH (affected)-[r3:CodeRelation]->(b)\n            WHERE r1.type IN [${relTypeFilter}] AND r2.type IN [${relTypeFilter}] AND r3.type IN [${relTypeFilter}]\n              AND affected.id <> target.id AND affected.id <> a.id\n              AND (r1.confidence IS NULL OR r1.confidence >= ${minConf})\n              AND (r2.confidence IS NULL OR r2.confidence >= ${minConf})\n              AND (r3.confidence IS NULL OR r3.confidence >= ${minConf})\n            RETURN DISTINCT \n              affected.id AS id, \n              affected.name AS name, \n              label(affected) AS nodeType, \n              affected.filePath AS filePath,\n              affected.startLine AS startLine,\n              3 AS depth,\n              r3.type AS edgeType,\n              r3.confidence AS confidence,\n              r3.reason AS reason\n            LIMIT 100\n          `\n          : `\n            MATCH (target {id: '${targetId.replace(/'/g, \"''\")}'})\n            MATCH (target)-[r1:CodeRelation]->(a)\n            MATCH (a)-[r2:CodeRelation]->(b)\n            MATCH (b)-[r3:CodeRelation]->(affected)\n            WHERE r1.type IN [${relTypeFilter}] AND r2.type IN [${relTypeFilter}] AND r3.type IN [${relTypeFilter}]\n              AND affected.id <> target.id AND affected.id <> a.id\n              AND (r1.confidence IS NULL OR r1.confidence >= ${minConf})\n              AND (r2.confidence IS NULL OR r2.confidence >= ${minConf})\n              AND (r3.confidence IS NULL OR r3.confidence >= ${minConf})\n            RETURN DISTINCT \n              affected.id AS id, \n              affected.name AS name, \n       
       label(affected) AS nodeType, \n              affected.filePath AS filePath,\n              affected.startLine AS startLine,\n              3 AS depth,\n              r3.type AS edgeType,\n              r3.confidence AS confidence,\n              r3.reason AS reason\n            LIMIT 100\n          `;\n        depthQueries.push(executeQuery(d3Query).catch(err => {\n          if (import.meta.env.DEV) console.warn('Impact d=3 query failed:', err);\n          return [];\n        }));\n      }\n      \n      // Wait for all depth queries\n      const depthResults = await Promise.all(depthQueries);\n      \n      // Combine results by depth\n      interface NodeInfo {\n        id: string;\n        name: string;\n        nodeType: string;\n        filePath: string;\n        startLine?: number;\n        edgeType: string;\n        confidence: number;\n        reason: string;\n      }\n      const byDepth: Map<number, NodeInfo[]> = new Map();\n      const allNodeIds: string[] = [];\n      const seenIds = new Set<string>();\n      \n      depthResults.forEach((results, idx) => {\n        const d = idx + 1;\n        results.forEach((row: any) => {\n          const nodeId = Array.isArray(row) ? row[0] : row.id;\n          const filePath = Array.isArray(row) ? row[3] : row.filePath;\n          \n          // Skip test files if includeTests is false\n          if (!showTests && isTestFile(filePath)) return;\n          \n          // Avoid duplicates (a node might appear at multiple depths)\n          if (nodeId && !seenIds.has(nodeId)) {\n            seenIds.add(nodeId);\n            if (!byDepth.has(d)) byDepth.set(d, []);\n            \n            const info: NodeInfo = {\n              id: nodeId,\n              name: Array.isArray(row) ? row[1] : row.name,\n              nodeType: Array.isArray(row) ? row[2] : row.nodeType,\n              filePath: filePath,\n              startLine: Array.isArray(row) ? 
row[4] : row.startLine,\n              edgeType: Array.isArray(row) ? row[5] : row.edgeType || 'CALLS',\n              confidence: Array.isArray(row) ? row[6] : row.confidence ?? 1.0,\n              reason: Array.isArray(row) ? row[7] : row.reason || '',\n            };\n            byDepth.get(d)!.push(info);\n            allNodeIds.push(nodeId);\n          }\n        });\n      });\n      \n      const totalAffected = allNodeIds.length;\n      \n      if (totalAffected === 0) {\n        if (isFileTarget) {\n          const escapeRegex = (value: string) => value.replace(/[.*+?^${}()|[\\]\\\\]/g, '\\\\$&');\n          const targetFileName = (targetFilePath || target).split('/').pop() || target;\n          const baseName = targetFileName.replace(/\\.[^/.]+$/, '');\n          const refRegex = new RegExp(`\\\\b${escapeRegex(baseName)}\\\\b`, 'g');\n          const hints: Array<{ file: string; line: number; content: string }> = [];\n          const hintLimit = 15;\n          \n          for (const [filePath, content] of fileContents.entries()) {\n            if (filePath === targetFilePath) continue;\n            const lines = content.split('\\n');\n            for (let i = 0; i < lines.length; i++) {\n              if (refRegex.test(lines[i])) {\n                hints.push({\n                  file: filePath,\n                  line: i + 1,\n                  content: lines[i].trim().slice(0, 150),\n                });\n                if (hints.length >= hintLimit) break;\n              }\n              refRegex.lastIndex = 0;\n            }\n            if (hints.length >= hintLimit) break;\n          }\n          \n          if (hints.length > 0) {\n            const formatted = hints.map(h => `${h.file}:${h.line}: ${h.content}`).join('\\n');\n            return `No ${direction} dependencies found for \"${target}\" (types: ${activeRelTypes.join(', ')}), but textual references were detected (graph may be incomplete):\\n\\n${formatted}${multipleMatchWarning}`;\n      
    }\n        }\n\n        return `No ${direction} dependencies found for \"${target}\" (types: ${activeRelTypes.join(', ')}). This code appears to be ${direction === 'upstream' ? 'unused (not called by anything)' : 'self-contained (no outgoing dependencies)'}.${multipleMatchWarning}`;\n      }\n      \n      const depth1 = byDepth.get(1) || [];\n      const depth2 = byDepth.get(2) || [];\n      const depth3 = byDepth.get(3) || [];\n      \n      // Confidence buckets\n      const confidenceBuckets = { high: 0, medium: 0, low: 0 };\n      for (const nodes of byDepth.values()) {\n        for (const n of nodes) {\n          const conf = n.confidence ?? 1;\n          if (conf >= 0.9) confidenceBuckets.high += 1;\n          else if (conf >= 0.8) confidenceBuckets.medium += 1;\n          else confidenceBuckets.low += 1;\n        }\n      }\n      \n      // Affected processes and clusters\n      const maxIdsForContext = 500;\n      const trimmedIds = allNodeIds.slice(0, maxIdsForContext);\n      const idList = trimmedIds.map(id => `'${id.replace(/'/g, \"''\")}'`).join(', ');\n      let affectedProcesses: Array<{ label: string; hits: number; minStep: number | null; stepCount: number | null }> = [];\n      let affectedClusters: Array<{ label: string; hits: number; impact: string }> = [];\n      \n      if (trimmedIds.length > 0) {\n        const processQuery = `\n          MATCH (s)-[r:CodeRelation {type: 'STEP_IN_PROCESS'}]->(p:Process)\n          WHERE s.id IN [${idList}]\n          RETURN p.label AS label, COUNT(DISTINCT s.id) AS hits, MIN(r.step) AS minStep, p.stepCount AS stepCount\n          ORDER BY hits DESC\n          LIMIT 20\n        `;\n        const clusterQuery = `\n          MATCH (s)-[:CodeRelation {type: 'MEMBER_OF'}]->(c:Community)\n          WHERE s.id IN [${idList}]\n          RETURN c.label AS label, COUNT(DISTINCT s.id) AS hits\n          ORDER BY hits DESC\n          LIMIT 20\n        `;\n        const directIdList = depth1.map(n => 
`'${n.id.replace(/'/g, \"''\")}'`).join(', ');\n        const directClusterQuery = depth1.length > 0 ? `\n          MATCH (s)-[:CodeRelation {type: 'MEMBER_OF'}]->(c:Community)\n          WHERE s.id IN [${directIdList}]\n          RETURN DISTINCT c.label AS label\n        ` : '';\n        \n        const [processRes, clusterRes, directClusterRes] = await Promise.all([\n          executeQuery(processQuery),\n          executeQuery(clusterQuery),\n          directClusterQuery ? executeQuery(directClusterQuery) : Promise.resolve([]),\n        ]);\n        \n        const directClusterSet = new Set<string>();\n        directClusterRes.forEach((row: any) => {\n          const label = Array.isArray(row) ? row[0] : row.label;\n          if (label) directClusterSet.add(label);\n        });\n        \n        affectedProcesses = processRes.map((row: any) => ({\n          label: Array.isArray(row) ? row[0] : row.label,\n          hits: Array.isArray(row) ? row[1] : row.hits,\n          minStep: Array.isArray(row) ? row[2] : row.minStep,\n          stepCount: Array.isArray(row) ? row[3] : row.stepCount,\n        }));\n        \n        affectedClusters = clusterRes.map((row: any) => {\n          const label = Array.isArray(row) ? row[0] : row.label;\n          const hits = Array.isArray(row) ? row[1] : row.hits;\n          const impact = directClusterSet.has(label) ? 
'direct' : 'indirect';\n          return { label, hits, impact };\n        });\n      }\n      \n      const directCount = depth1.length;\n      const processCount = affectedProcesses.length;\n      const clusterCount = affectedClusters.length;\n      let risk = 'LOW';\n      if (directCount >= 30 || processCount >= 5 || clusterCount >= 5 || totalAffected >= 200) {\n        risk = 'CRITICAL';\n      } else if (directCount >= 15 || processCount >= 3 || clusterCount >= 3 || totalAffected >= 100) {\n        risk = 'HIGH';\n      } else if (directCount >= 5 || totalAffected >= 30) {\n        risk = 'MEDIUM';\n      }\n      \n      // ===== COMPACT TABULAR OUTPUT =====\n      const lines: string[] = [\n        `🔴 IMPACT: ${target} | ${direction} | ${totalAffected} affected`,\n        `Confidence: High ${confidenceBuckets.high} | Medium ${confidenceBuckets.medium} | Low ${confidenceBuckets.low}`,\n        ``,\n        `AFFECTED PROCESSES:`,\n        ...(affectedProcesses.length > 0\n          ? affectedProcesses.map(p => `- ${p.label} - BROKEN at step ${p.minStep ?? '?'} (${p.hits} symbols, ${p.stepCount ?? '?'} steps)`)\n          : ['- None found']),\n        ``,\n        `AFFECTED CLUSTERS:`,\n        ...(affectedClusters.length > 0\n          ? affectedClusters.map(c => `- ${c.label} (${c.impact}, ${c.hits} symbols)`)\n          : ['- None found']),\n        ``,\n        `RISK: ${risk}`,\n        `- Direct callers: ${directCount}`,\n        `- Processes affected: ${processCount}`,\n        `- Clusters affected: ${clusterCount}`,\n        ``,\n      ];\n      \n      // Format helper: Type|Name|File:Line|EdgeType|Confidence\n      const formatNode = (n: NodeInfo): string => {\n        const fileName = n.filePath?.split('/').pop() || '';\n        const loc = n.startLine ? `${fileName}:${n.startLine}` : fileName;\n        const confPct = Math.round((n.confidence ?? 1) * 100);\n        const fuzzyMarker = confPct < 80 ? 
'[fuzzy]' : '';\n        return `  ${n.nodeType}|${n.name}|${loc}|${n.edgeType}|${confPct}%${fuzzyMarker}`;\n      };\n      \n      // Helper to get code snippet for a node (call site context)\n      const getCallSiteSnippet = (n: NodeInfo): string | null => {\n        if (!n.filePath || !n.startLine) return null;\n        \n        // Find the file in fileContents (try multiple path formats)\n        let content: string | undefined;\n        const normalizedPath = n.filePath.replace(/\\\\/g, '/');\n        \n        for (const [path, c] of fileContents.entries()) {\n          const normalizedKey = path.replace(/\\\\/g, '/');\n          if (normalizedKey === normalizedPath || \n              normalizedKey.endsWith(normalizedPath) || \n              normalizedPath.endsWith(normalizedKey)) {\n            content = c;\n            break;\n          }\n        }\n        \n        if (!content) return null;\n        \n        const lines = content.split('\\n');\n        const lineIdx = n.startLine - 1;\n        if (lineIdx < 0 || lineIdx >= lines.length) return null;\n        \n        // Get the line and trim it, max 80 chars\n        let snippet = lines[lineIdx].trim();\n        if (snippet.length > 80) snippet = snippet.slice(0, 77) + '...';\n        return snippet;\n      };\n      \n      // Depth 1 - Critical (with call site snippets)\n      if (depth1.length > 0) {\n        const header = direction === 'upstream'\n          ? `d=1 (Directly DEPEND ON ${target}):`\n          : `d=1 (${target} USES these):`;\n        lines.push(header);\n        depth1.slice(0, 15).forEach(n => {\n          lines.push(formatNode(n));\n          // Add call site snippet for d=1 results\n          const snippet = getCallSiteSnippet(n);\n          if (snippet) {\n            lines.push(`    ↳ \"${snippet}\"`);\n          }\n        });\n        if (depth1.length > 15) lines.push(`  ... 
+${depth1.length - 15} more`);\n        lines.push(``);\n      }\n      \n      // Depth 2 - High impact\n      if (depth2.length > 0) {\n        const header = direction === 'upstream'\n          ? `d=2 (Indirectly DEPEND ON ${target}):`\n          : `d=2 (${target} USES these indirectly):`;\n        lines.push(header);\n        depth2.slice(0, 15).forEach(n => lines.push(formatNode(n)));\n        if (depth2.length > 15) lines.push(`  ... +${depth2.length - 15} more`);\n        lines.push(``);\n      }\n      \n      // Depth 3 - Transitive\n      if (depth3.length > 0) {\n        lines.push(`d=3 (Deep impact/dependency):`);\n        depth3.slice(0, 5).forEach(n => lines.push(formatNode(n)));\n        if (depth3.length > 5) lines.push(`  ... +${depth3.length - 5} more`);\n        lines.push(``);\n      }\n      \n      // Compact footer\n      lines.push(`✅ GRAPH ANALYSIS COMPLETE (trusted)`);\n      lines.push(`⚠️ Optional: grep(\"${target}\") for dynamic patterns`);\n      if (multipleMatchWarning) {\n        lines.push(multipleMatchWarning);\n      }\n      lines.push(``);\n      \n      return lines.join('\\n');\n    },\n    {\n      name: 'impact',\n      description: `Analyze the impact of changing a function, class, or file.\n\nUse when users ask:\n- \"What would break if I changed X?\"\n- \"What depends on X?\"\n- \"Impact analysis for X\"\n\nDirection:\n- upstream: Find what CALLS/IMPORTS/EXTENDS this target (what would break)\n- downstream: Find what this target CALLS/IMPORTS/EXTENDS (dependencies)\n\nOutput format (compact tabular):\n  Type|Name|File:Line|EdgeType|Confidence%\n  \nEdgeType: CALLS, IMPORTS, EXTENDS, IMPLEMENTS\nConfidence: 100% = certain, <80% = fuzzy match (may be false positive)\n\nrelationTypes filter (optional):\n- Default: CALLS, IMPORTS, EXTENDS, IMPLEMENTS (usage-based)\n- Can add CONTAINS, DEFINES for structural analysis\n\nAdditional output sections:\n- Affected processes (with step impact)\n- Affected clusters 
(direct/indirect)\n- Risk summary (based on direct callers, processes, clusters)`,\n      schema: z.object({\n        target: z.string().describe('Name of the function, class, or file to analyze'),\n        direction: z.enum(['upstream', 'downstream']).describe('upstream = what depends on this; downstream = what this depends on'),\n        maxDepth: z.number().optional().nullable().describe('Max traversal depth (default: 3, max: 10)'),\n        relationTypes: z.array(z.string()).optional().nullable().describe('Filter by relation types: CALLS, IMPORTS, EXTENDS, IMPLEMENTS, CONTAINS, DEFINES (default: usage-based)'),\n        includeTests: z.boolean().optional().nullable().describe('Include test files in results (default: false, excludes .test.ts, .spec.ts, __tests__)'),\n        minConfidence: z.number().optional().nullable().describe('Minimum edge confidence 0-1 (default: 0.7, excludes fuzzy/inferred matches)'),\n      }),\n    }\n  );\n\n  // ============================================================================\n  // RETURN ALL TOOLS\n  // ============================================================================\n  \n  return [\n    searchTool,\n    cypherTool,\n    grepTool,\n    readTool,\n    overviewTool,\n    exploreTool,\n    impactTool,\n  ];\n};\n"
  },
  {
    "path": "gitnexus-web/src/core/llm/types.ts",
    "content": "/**\n * LLM Provider Types\n * \n * Type definitions for multi-provider LLM support.\n * Supports Azure OpenAI and Google Gemini (with extensibility for others).\n */\n\n/**\n * Supported LLM providers\n */\nexport type LLMProvider = 'openai' | 'azure-openai' | 'gemini' | 'anthropic' | 'ollama' | 'openrouter';\n\n/**\n * Base configuration shared by all providers\n */\nexport interface BaseProviderConfig {\n  provider: LLMProvider;\n  model: string;\n  temperature?: number;\n  maxTokens?: number;\n}\n\n/**\n * OpenAI specific configuration\n */\nexport interface OpenAIConfig extends BaseProviderConfig {\n  provider: 'openai';\n  apiKey: string;\n  model: string;  // e.g., 'gpt-4o', 'gpt-4o-mini', 'gpt-4-turbo'\n  baseUrl?: string;  // optional, for custom endpoints or proxies\n}\n\n/**\n * Azure OpenAI specific configuration\n */\nexport interface AzureOpenAIConfig extends BaseProviderConfig {\n  provider: 'azure-openai';\n  apiKey: string;\n  endpoint: string;  // e.g., https://your-resource.openai.azure.com\n  deploymentName: string;\n  apiVersion?: string;  // defaults to '2024-08-01-preview'\n}\n\n/**\n * Google Gemini specific configuration\n */\nexport interface GeminiConfig extends BaseProviderConfig {\n  provider: 'gemini';\n  apiKey: string;\n  model: string;  // e.g., 'gemini-2.0-flash', 'gemini-1.5-pro'\n}\n\n/**\n * Anthropic (Claude) configuration\n */\nexport interface AnthropicConfig extends BaseProviderConfig {\n  provider: 'anthropic';\n  apiKey: string;\n  model: string;  // e.g., 'claude-sonnet-4-20250514', 'claude-3-5-sonnet-20241022'\n}\n\n/**\n * Ollama configuration (for future use)\n */\nexport interface OllamaConfig extends BaseProviderConfig {\n  provider: 'ollama';\n  baseUrl?: string;  // defaults to http://localhost:11434\n  model: string;\n}\n\n/**\n * OpenRouter configuration\n */\nexport interface OpenRouterConfig extends BaseProviderConfig {\n  provider: 'openrouter';\n  apiKey: string;\n  model: string;  // e.g., 
'anthropic/claude-3.5-sonnet', 'openai/gpt-4-turbo'\n  baseUrl?: string;  // defaults to https://openrouter.ai/api/v1\n}\n\n/**\n * Union type for all provider configurations\n */\nexport type ProviderConfig = OpenAIConfig | AzureOpenAIConfig | GeminiConfig | AnthropicConfig | OllamaConfig | OpenRouterConfig;\n\n/**\n * Stored settings (what goes to localStorage)\n */\nexport interface LLMSettings {\n  activeProvider: LLMProvider;\n  /**\n   * Provider settings are persisted to localStorage and may be partially configured.\n   * We validate required fields at runtime before creating a ProviderConfig.\n   */\n  openai?: Partial<Omit<OpenAIConfig, 'provider'>>;\n  azureOpenAI?: Partial<Omit<AzureOpenAIConfig, 'provider'>>;\n  gemini?: Partial<Omit<GeminiConfig, 'provider'>>;\n  anthropic?: Partial<Omit<AnthropicConfig, 'provider'>>;\n  ollama?: Partial<Omit<OllamaConfig, 'provider'>>;\n  openrouter?: Partial<Omit<OpenRouterConfig, 'provider'>>;\n\n  // Intelligent Clustering Settings\n  intelligentClustering: boolean;\n  hasSeenClusteringPrompt: boolean;\n  useSameModelForClustering: boolean;\n  clusteringProvider?: Partial<ProviderConfig>; // Optional specific config for clustering\n}\n\n/**\n * Default LLM settings\n */\nexport const DEFAULT_LLM_SETTINGS: LLMSettings = {\n  activeProvider: 'gemini',\n  intelligentClustering: false,\n  hasSeenClusteringPrompt: false,\n  useSameModelForClustering: true,\n  openai: {\n    apiKey: '',\n    model: 'gpt-4o',\n    temperature: 0.1,\n  },\n  gemini: {\n    apiKey: '',\n    model: 'gemini-2.0-flash',\n    temperature: 0.1,\n  },\n  azureOpenAI: {\n    apiKey: '',\n    endpoint: '',\n    deploymentName: '',\n    model: 'gpt-4o',\n    apiVersion: '2024-08-01-preview',\n    temperature: 0.1,\n  },\n  anthropic: {\n    apiKey: '',\n    model: 'claude-sonnet-4-20250514',\n    temperature: 0.1,\n  },\n  ollama: {\n    baseUrl: 'http://localhost:11434',\n    model: 'llama3.2',\n    temperature: 0.1,\n  },\n  openrouter: {\n    
apiKey: '',\n    model: '',\n    baseUrl: 'https://openrouter.ai/api/v1',\n    temperature: 0.1,\n  },\n};\n\n/**\n * A single step in the agent's execution (reasoning or tool call)\n * Steps are rendered in order to show the agent's thought process\n */\nexport interface MessageStep {\n  id: string;\n  type: 'reasoning' | 'tool_call' | 'content';\n  /** For reasoning/content steps */\n  content?: string;\n  /** For tool_call steps */\n  toolCall?: ToolCallInfo;\n}\n\n/**\n * Chat message for agent interaction\n */\nexport interface ChatMessage {\n  id: string;\n  role: 'user' | 'assistant' | 'tool';\n  content: string;\n  /** @deprecated Use steps instead for proper ordering */\n  toolCalls?: ToolCallInfo[];\n  /** Ordered steps: reasoning, tool calls, and final content interleaved */\n  steps?: MessageStep[];\n  toolCallId?: string;\n  timestamp: number;\n}\n\n/**\n * Tool call information for UI display\n */\nexport interface ToolCallInfo {\n  id: string;\n  name: string;\n  args: Record<string, unknown>;\n  result?: string;\n  status: 'pending' | 'running' | 'completed' | 'error';\n}\n\n/**\n * Streaming chunk from agent\n * Now supports step-based streaming where each step is a distinct message\n */\nexport interface AgentStreamChunk {\n  type: 'reasoning' | 'tool_call' | 'tool_result' | 'content' | 'error' | 'done';\n  /** LLM's reasoning/thinking text (shown as a step) */\n  reasoning?: string;\n  /** Final answer content (streamed token by token) */\n  content?: string;\n  /** Tool call information */\n  toolCall?: ToolCallInfo;\n  /** Error message */\n  error?: string;\n}\n\n/**\n * A single step in the agent's execution\n * Used for displaying the agent's thought process\n */\nexport interface AgentStep {\n  id: string;\n  type: 'reasoning' | 'tool_call' | 'answer';\n  /** For reasoning steps */\n  content?: string;\n  /** For tool_call steps */\n  toolCall?: ToolCallInfo;\n  /** Timestamp */\n  timestamp: number;\n}\n\n/**\n * Graph schema information 
for LLM context\n */\nexport const GRAPH_SCHEMA_DESCRIPTION = `\nLADYBUG GRAPH DATABASE SCHEMA (Multi-Table):\n\nNODE TABLES:\n1. File - Source files\n   - id: STRING (primary key)\n   - name: STRING\n   - filePath: STRING\n   - content: STRING\n\n2. Folder - Directories\n   - id: STRING (primary key)\n   - name: STRING\n   - filePath: STRING\n\n3. Function - Function definitions\n   - id: STRING (primary key)\n   - name: STRING\n   - filePath: STRING\n   - startLine: INT64\n   - endLine: INT64\n   - content: STRING\n\n4. Class - Class definitions\n   - id: STRING (primary key)\n   - name: STRING\n   - filePath: STRING\n   - startLine: INT64\n   - endLine: INT64\n   - content: STRING\n\n5. Interface - Interface/Type definitions\n   - id: STRING (primary key)\n   - name: STRING\n   - filePath: STRING\n   - startLine: INT64\n   - endLine: INT64\n   - content: STRING\n\n6. Method - Class methods\n   - id: STRING (primary key)\n   - name: STRING\n   - filePath: STRING\n   - startLine: INT64\n   - endLine: INT64\n   - content: STRING\n\n7. CodeElement - Other code elements (fallback)\n   - id: STRING (primary key)\n   - name: STRING\n   - filePath: STRING\n   - startLine: INT64\n   - endLine: INT64\n   - content: STRING\n\n8. CodeEmbedding - Vector embeddings (separate for efficiency)\n   - nodeId: STRING (primary key)\n   - embedding: FLOAT[384]\n\nRELATIONSHIP TABLE:\nCodeRelation - Single table with 'type' property connecting all node tables\n  - type: STRING (values: CONTAINS, DEFINES, IMPORTS, CALLS)\n\nConnection patterns:\n- CONTAINS: Folder->Folder, Folder->File\n- DEFINES: File->Function, File->Class, File->Interface, File->Method, File->CodeElement\n- IMPORTS: File->File\n- CALLS: File->Function, File->Method, Function->Function, Function->Method\n\nQUERY PATTERNS:\n\n1. Find all functions:\n   MATCH (f:Function) RETURN f.name, f.filePath LIMIT 10\n\n2. 
Find what a file defines:\n   MATCH (f:File)-[:CodeRelation {type: 'DEFINES'}]->(fn:Function)\n   WHERE f.name = 'utils.ts'\n   RETURN fn.name\n\n3. Find function callers:\n   MATCH (caller:File)-[:CodeRelation {type: 'CALLS'}]->(fn:Function {name: 'myFunction'})\n   RETURN caller.name, caller.filePath\n\n4. Find imports:\n   MATCH (f:File {name: 'main.ts'})-[:CodeRelation {type: 'IMPORTS'}]->(imported:File)\n   RETURN imported.name\n\n5. Find files that import a specific file:\n   MATCH (f:File)-[:CodeRelation {type: 'IMPORTS'}]->(target:File {name: 'utils.ts'})\n   RETURN f.name, f.filePath\n\n6. SEMANTIC SEARCH (embeddings in separate table - MUST JOIN):\n   CALL QUERY_VECTOR_INDEX('CodeEmbedding', 'code_embedding_idx', $queryVector, 10)\n   YIELD node AS emb, distance\n   WITH emb, distance\n   WHERE distance < 0.4\n   MATCH (n:Function {id: emb.nodeId})\n   RETURN n.name, n.filePath, distance\n   ORDER BY distance\n\n7. Search across all code types (use UNION or separate queries):\n   MATCH (f:Function) WHERE f.name CONTAINS 'auth' RETURN f.id, f.name, 'Function' AS type\n   UNION ALL\n   MATCH (c:Class) WHERE c.name CONTAINS 'auth' RETURN c.id, c.name, 'Class' AS type\n\n8. Folder structure:\n   MATCH (parent:Folder)-[:CodeRelation {type: 'CONTAINS'}]->(child)\n   WHERE parent.name = 'src'\n   RETURN child.name, labels(child)[0] AS type\n\n9. Get all connections for a node:\n   MATCH (f:File {name: 'index.ts'})-[r:CodeRelation]-(m)\n   RETURN m.name, r.type\n\nTOOLING NOTE (for execute_vector_cypher):\n- Write Cypher containing {{QUERY_VECTOR}} where the vector should go.\n- The tool will replace {{QUERY_VECTOR}} with CAST([..] AS FLOAT[384]).\n\nNOTES:\n- Use proper table names: File, Folder, Function, Class, Interface, Method, CodeElement\n- Use CodeRelation with type property: [:CodeRelation {type: 'DEFINES'}]\n- For vector search, join CodeEmbedding.nodeId to the appropriate table's id\n- Use LIMIT to avoid returning too many results\n`;\n\n"
  },
  {
    "path": "gitnexus-web/src/core/search/bm25-index.ts",
    "content": "/**\n * BM25 Full-Text Search Index\n * \n * Uses MiniSearch for fast keyword-based search with BM25 ranking.\n * Complements semantic search - BM25 finds exact terms, semantic finds concepts.\n */\n\nimport MiniSearch from 'minisearch';\n\nexport interface BM25Document {\n  id: string;       // File path\n  content: string;  // File content\n  name: string;     // File name (boosted in search)\n}\n\nexport interface BM25SearchResult {\n  filePath: string;\n  score: number;\n  rank: number;\n}\n\n/**\n * BM25 Index singleton\n * Stores the MiniSearch instance and provides search methods\n */\nlet searchIndex: MiniSearch<BM25Document> | null = null;\nlet indexedDocCount = 0;\n\n/**\n * Build the BM25 index from file contents\n * Should be called after ingestion completes\n * \n * @param fileContents - Map of file path to content\n * @returns Number of documents indexed\n */\nexport const buildBM25Index = (fileContents: Map<string, string>): number => {\n  // Create new MiniSearch instance with BM25-like scoring\n  searchIndex = new MiniSearch<BM25Document>({\n    fields: ['content', 'name'], // Fields to index\n    storeFields: ['id'],         // Fields to return in results\n    \n    // Tokenizer: split on non-alphanumeric, camelCase, snake_case\n    tokenize: (text: string) => {\n      // Split on whitespace and punctuation (keep case so camelCase boundaries survive)\n      const tokens = text.split(/[\\s\\-_./\\\\(){}[\\]<>:;,!?'\"]+/);\n      \n      // Also split camelCase: \"getUserById\" -> [\"get\", \"user\", \"by\", \"id\"]\n      const expanded: string[] = [];\n      for (const token of tokens) {\n        if (token.length === 0) continue;\n        \n        // Split camelCase, then lowercase the parts\n        const camelParts = token.replace(/([a-z])([A-Z])/g, '$1 $2').toLowerCase().split(' ');\n        expanded.push(...camelParts);\n        \n        // Also keep the original token (lowercased) for exact matches\n        if (camelParts.length > 1) {\n          expanded.push(token.toLowerCase());\n        }\n      }\n      
\n      // Filter out very short tokens and common noise\n      return expanded.filter(t => t.length > 1 && !STOP_WORDS.has(t));\n    },\n  });\n  \n  // Index all files\n  const documents: BM25Document[] = [];\n  \n  for (const [filePath, content] of fileContents.entries()) {\n    // Extract filename from path\n    const name = filePath.split('/').pop() || filePath;\n    \n    documents.push({\n      id: filePath,\n      content: content,\n      name: name,\n    });\n  }\n  \n  // Batch add for efficiency\n  searchIndex.addAll(documents);\n  indexedDocCount = documents.length;\n  \n  if (import.meta.env.DEV) {\n    console.log(`📚 BM25 index built: ${indexedDocCount} documents`);\n  }\n  \n  return indexedDocCount;\n};\n\n/**\n * Search the BM25 index\n * \n * @param query - Search query (keywords)\n * @param limit - Maximum results to return\n * @returns Ranked search results with file paths and scores\n */\nexport const searchBM25 = (query: string, limit: number = 20): BM25SearchResult[] => {\n  if (!searchIndex) {\n    return [];\n  }\n  \n  // Search with fuzzy matching and prefix support\n  const results = searchIndex.search(query, {\n    fuzzy: 0.2,\n    prefix: true,\n    boost: { name: 2 },  // Boost file name matches\n  });\n  \n  // Limit results and add rank\n  return results.slice(0, limit).map((r, index) => ({\n    filePath: r.id,\n    score: r.score,\n    rank: index + 1,\n  }));\n};\n\n/**\n * Check if the BM25 index is ready\n */\nexport const isBM25Ready = (): boolean => {\n  return searchIndex !== null && indexedDocCount > 0;\n};\n\n/**\n * Get index statistics\n */\nexport const getBM25Stats = (): { documentCount: number; termCount: number } => {\n  if (!searchIndex) {\n    return { documentCount: 0, termCount: 0 };\n  }\n  \n  return {\n    documentCount: indexedDocCount,\n    termCount: searchIndex.termCount,\n  };\n};\n\n/**\n * Clear the index (for cleanup or re-indexing)\n */\nexport const clearBM25Index = (): void => {\n  searchIndex = 
null;\n  indexedDocCount = 0;\n};\n\n/**\n * Common stop words to filter out (too common to be useful)\n */\nconst STOP_WORDS = new Set([\n  // JavaScript/TypeScript keywords\n  'const', 'let', 'var', 'function', 'return', 'if', 'else', 'for', 'while',\n  'class', 'new', 'this', 'import', 'export', 'from', 'default', 'async', 'await',\n  'try', 'catch', 'throw', 'typeof', 'instanceof', 'true', 'false', 'null', 'undefined',\n  \n  // Common English stop words\n  'the', 'is', 'at', 'which', 'on', 'a', 'an', 'and', 'or', 'but', 'in', 'with',\n  'to', 'of', 'it', 'be', 'as', 'by', 'that', 'for', 'are', 'was', 'were',\n]);\n\n"
  },
  {
    "path": "gitnexus-web/src/core/search/hybrid-search.ts",
    "content": "/**\n * Hybrid Search with Reciprocal Rank Fusion (RRF)\n * \n * Combines BM25 (keyword) and semantic (embedding) search results.\n * Uses RRF to merge rankings without needing score normalization.\n * \n * This is the same approach used by Elasticsearch, Pinecone, and other\n * production search systems.\n */\n\nimport { searchBM25, isBM25Ready, type BM25SearchResult } from './bm25-index';\nimport type { SemanticSearchResult } from '../embeddings/types';\n\n/**\n * RRF constant - standard value used in the literature\n * Higher values give more weight to lower-ranked results\n */\nconst RRF_K = 60;\n\nexport interface HybridSearchResult {\n  filePath: string;\n  score: number;           // RRF score\n  rank: number;            // Final rank\n  sources: ('bm25' | 'semantic')[];  // Which methods found this\n  \n  // Metadata from semantic search (if available)\n  nodeId?: string;\n  name?: string;\n  label?: string;\n  startLine?: number;\n  endLine?: number;\n  \n  // Original scores for debugging\n  bm25Score?: number;\n  semanticScore?: number;\n}\n\n/**\n * Perform hybrid search combining BM25 and semantic results\n * \n * @param bm25Results - Results from BM25 keyword search\n * @param semanticResults - Results from semantic/embedding search\n * @param limit - Maximum results to return\n * @returns Merged and re-ranked results\n */\nexport const mergeWithRRF = (\n  bm25Results: BM25SearchResult[],\n  semanticResults: SemanticSearchResult[],\n  limit: number = 10\n): HybridSearchResult[] => {\n  const merged = new Map<string, HybridSearchResult>();\n  \n  // Process BM25 results\n  for (let i = 0; i < bm25Results.length; i++) {\n    const r = bm25Results[i];\n    const rrfScore = 1 / (RRF_K + i + 1);  // i+1 because rank starts at 1\n    \n    merged.set(r.filePath, {\n      filePath: r.filePath,\n      score: rrfScore,\n      rank: 0,  // Will be set after sorting\n      sources: ['bm25'],\n      bm25Score: r.score,\n    });\n  }\n  \n  // 
Process semantic results and merge\n  for (let i = 0; i < semanticResults.length; i++) {\n    const r = semanticResults[i];\n    const rrfScore = 1 / (RRF_K + i + 1);\n    \n    const existing = merged.get(r.filePath);\n    if (existing) {\n      // Found by both methods - add scores\n      existing.score += rrfScore;\n      existing.sources.push('semantic');\n      existing.semanticScore = 1 - r.distance;\n      \n      // Add semantic metadata\n      existing.nodeId = r.nodeId;\n      existing.name = r.name;\n      existing.label = r.label;\n      existing.startLine = r.startLine;\n      existing.endLine = r.endLine;\n    } else {\n      // Only found by semantic\n      merged.set(r.filePath, {\n        filePath: r.filePath,\n        score: rrfScore,\n        rank: 0,\n        sources: ['semantic'],\n        semanticScore: 1 - r.distance,\n        nodeId: r.nodeId,\n        name: r.name,\n        label: r.label,\n        startLine: r.startLine,\n        endLine: r.endLine,\n      });\n    }\n  }\n  \n  // Sort by RRF score descending\n  const sorted = Array.from(merged.values())\n    .sort((a, b) => b.score - a.score)\n    .slice(0, limit);\n  \n  // Assign final ranks\n  sorted.forEach((r, i) => {\n    r.rank = i + 1;\n  });\n  \n  return sorted;\n};\n\n/**\n * Check if hybrid search is available\n * Requires BM25 index to be built\n * Note: Semantic search is optional - hybrid works with just BM25 if embeddings aren't ready\n */\nexport const isHybridSearchReady = (): boolean => {\n  return isBM25Ready();\n};\n\n/**\n * Format hybrid results for LLM consumption\n */\nexport const formatHybridResults = (results: HybridSearchResult[]): string => {\n  if (results.length === 0) {\n    return 'No results found.';\n  }\n  \n  const formatted = results.map((r, i) => {\n    const sources = r.sources.join(' + ');\n    const location = r.startLine ? ` (lines ${r.startLine}-${r.endLine})` : '';\n    const label = r.label ? 
`${r.label}: ` : 'File: ';\n    const name = r.name || r.filePath.split('/').pop() || r.filePath;\n    \n    return `[${i + 1}] ${label}${name}\n    File: ${r.filePath}${location}\n    Found by: ${sources}\n    Relevance: ${r.score.toFixed(4)}`;\n  });\n  \n  return `Found ${results.length} results:\\n\\n${formatted.join('\\n\\n')}`;\n};\n\n\n\n\n"
  },
  {
    "path": "gitnexus-web/src/core/search/index.ts",
    "content": "/**\n * Search Module\n * \n * Exports BM25 indexing and hybrid search functionality.\n */\n\nexport { \n  buildBM25Index, \n  searchBM25, \n  isBM25Ready, \n  getBM25Stats,\n  clearBM25Index,\n  type BM25SearchResult,\n} from './bm25-index';\n\nexport { \n  mergeWithRRF, \n  isHybridSearchReady,\n  formatHybridResults,\n  type HybridSearchResult,\n} from './hybrid-search';\n"
  },
  {
    "path": "gitnexus-web/src/core/tree-sitter/parser-loader.ts",
    "content": "import Parser from 'web-tree-sitter';\nimport { SupportedLanguages } from '../../config/supported-languages';\n\nlet parser: Parser | null = null;\n\n// Cache the compiled Language objects to avoid fetching/compiling twice\nconst languageCache = new Map<string, Parser.Language>();\n\nexport const loadParser = async (): Promise<Parser> => {\n    if (parser) return parser;\n\n    await Parser.init({\n        locateFile: (scriptName: string) => {\n            return `/wasm/${scriptName}`;\n        }\n    })\n\n    parser = new Parser();\n    return parser;\n}\n\n// Get the appropriate WASM file based on language and file extension\nconst getWasmPath = (language: SupportedLanguages, filePath?: string): string => {\n    // For TypeScript, check if it's a TSX file\n    if (language === SupportedLanguages.TypeScript) {\n        if (filePath?.endsWith('.tsx')) {\n            return '/wasm/typescript/tree-sitter-tsx.wasm';\n        }\n        return '/wasm/typescript/tree-sitter-typescript.wasm';\n    }\n    \n    const languageFileMap: Record<SupportedLanguages, string> = {\n        [SupportedLanguages.JavaScript]: '/wasm/javascript/tree-sitter-javascript.wasm',\n        [SupportedLanguages.TypeScript]: '/wasm/typescript/tree-sitter-typescript.wasm',\n        [SupportedLanguages.Python]: '/wasm/python/tree-sitter-python.wasm',\n        [SupportedLanguages.Java]: '/wasm/java/tree-sitter-java.wasm',\n        [SupportedLanguages.C]: '/wasm/c/tree-sitter-c.wasm',\n        [SupportedLanguages.CPlusPlus]: '/wasm/cpp/tree-sitter-cpp.wasm',\n        [SupportedLanguages.CSharp]: '/wasm/csharp/tree-sitter-csharp.wasm',\n        [SupportedLanguages.Go]: '/wasm/go/tree-sitter-go.wasm',\n        [SupportedLanguages.Rust]: '/wasm/rust/tree-sitter-rust.wasm',\n        [SupportedLanguages.PHP]: '/wasm/php/tree-sitter-php.wasm',\n        [SupportedLanguages.Ruby]: '/wasm/ruby/tree-sitter-ruby.wasm',\n        [SupportedLanguages.Kotlin]: '', // Kotlin WASM parser not yet 
available for web\n        [SupportedLanguages.Swift]: '/wasm/swift/tree-sitter-swift.wasm',\n    };\n    \n    return languageFileMap[language];\n};\n\nexport const loadLanguage = async (language: SupportedLanguages, filePath?: string): Promise<void> => {\n    if (!parser) await loadParser();\n    const wasmPath = getWasmPath(language, filePath);\n\n    // Validate before the cache lookup so an empty path fails fast\n    if (!wasmPath) {\n        console.error(`❌ [Parser] No WASM path configured for language: ${language}`);\n        throw new Error(`Unsupported language: ${language}`);\n    }\n\n    if (languageCache.has(wasmPath)) {\n        parser!.setLanguage(languageCache.get(wasmPath)!);\n        return;\n    }\n    \n    try {\n        const loadedLanguage = await Parser.Language.load(wasmPath);\n        languageCache.set(wasmPath, loadedLanguage);\n        parser!.setLanguage(loadedLanguage);\n    } catch (error: unknown) {\n        const errorMessage = error instanceof Error ? error.message : String(error);\n        console.error(`❌ [Parser] Failed to load WASM grammar for ${language}`);\n        console.error(`   WASM Path: ${wasmPath}`);\n        console.error(`   Error: ${errorMessage}`);\n        throw new Error(`Failed to load grammar for ${language}: ${errorMessage}`);\n    }\n}\n"
  },
  {
    "path": "gitnexus-web/src/hooks/useAppState.tsx",
    "content": "import { createContext, useContext, useState, useCallback, useRef, useEffect, ReactNode } from 'react';\nimport * as Comlink from 'comlink';\nimport { KnowledgeGraph, GraphNode, NodeLabel } from '../core/graph/types';\nimport { PipelineProgress, PipelineResult, deserializePipelineResult } from '../types/pipeline';\nimport { createKnowledgeGraph } from '../core/graph/graph';\nimport { DEFAULT_VISIBLE_LABELS } from '../lib/constants';\nimport type { IngestionWorkerApi } from '../workers/ingestion.worker';\nimport type { FileEntry } from '../services/zip';\nimport type { EmbeddingProgress, SemanticSearchResult } from '../core/embeddings/types';\nimport type { LLMSettings, ProviderConfig, AgentStreamChunk, ChatMessage, ToolCallInfo, MessageStep } from '../core/llm/types';\nimport { loadSettings, getActiveProviderConfig, saveSettings } from '../core/llm/settings-service';\nimport type { AgentMessage } from '../core/llm/agent';\nimport { DEFAULT_VISIBLE_EDGES, type EdgeType } from '../lib/constants';\nimport type { RepoSummary, ConnectToServerResult } from '../services/server-connection';\nimport { fetchRepos, connectToServer } from '../services/server-connection';\n\nexport type ViewMode = 'onboarding' | 'loading' | 'exploring';\nexport type RightPanelTab = 'code' | 'chat';\nexport type EmbeddingStatus = 'idle' | 'loading' | 'embedding' | 'indexing' | 'ready' | 'error';\n\nexport interface QueryResult {\n  rows: Record<string, any>[];\n  nodeIds: string[];\n  executionTime: number;\n}\n\n// Animation types for graph nodes\nexport type AnimationType = 'pulse' | 'ripple' | 'glow';\n\nexport interface NodeAnimation {\n  type: AnimationType;\n  startTime: number;\n  duration: number;\n}\n\n// Code reference from AI grounding or user selection\nexport interface CodeReference {\n  id: string;\n  filePath: string;\n  startLine?: number;\n  endLine?: number;\n  nodeId?: string;  // Associated graph node ID\n  label?: string;   // File, Function, Class, etc.\n  
name?: string;    // Display name\n  source: 'ai' | 'user';  // How it was added\n}\n\nexport interface CodeReferenceFocus {\n  filePath: string;\n  startLine?: number;\n  endLine?: number;\n  ts: number;\n}\n\ninterface AppState {\n  // View state\n  viewMode: ViewMode;\n  setViewMode: (mode: ViewMode) => void;\n\n  // Graph data\n  graph: KnowledgeGraph | null;\n  setGraph: (graph: KnowledgeGraph | null) => void;\n  fileContents: Map<string, string>;\n  setFileContents: (contents: Map<string, string>) => void;\n\n  // Selection\n  selectedNode: GraphNode | null;\n  setSelectedNode: (node: GraphNode | null) => void;\n\n  // Right Panel (unified Code + Chat)\n  isRightPanelOpen: boolean;\n  setRightPanelOpen: (open: boolean) => void;\n  rightPanelTab: RightPanelTab;\n  setRightPanelTab: (tab: RightPanelTab) => void;\n  openCodePanel: () => void;\n  openChatPanel: () => void;\n\n  // Filters\n  visibleLabels: NodeLabel[];\n  toggleLabelVisibility: (label: NodeLabel) => void;\n  visibleEdgeTypes: EdgeType[];\n  toggleEdgeVisibility: (edgeType: EdgeType) => void;\n\n  // Depth filter (N hops from selection)\n  depthFilter: number | null;\n  setDepthFilter: (depth: number | null) => void;\n\n  // Query state\n  highlightedNodeIds: Set<string>;\n  setHighlightedNodeIds: (ids: Set<string>) => void;\n  // AI highlights (toggable)\n  aiCitationHighlightedNodeIds: Set<string>;\n  aiToolHighlightedNodeIds: Set<string>;\n  blastRadiusNodeIds: Set<string>;\n  isAIHighlightsEnabled: boolean;\n  toggleAIHighlights: () => void;\n  clearAIToolHighlights: () => void;\n  clearBlastRadius: () => void;\n  queryResult: QueryResult | null;\n  setQueryResult: (result: QueryResult | null) => void;\n  clearQueryHighlights: () => void;\n\n  // Node animations (for MCP tool visual feedback)\n  animatedNodes: Map<string, NodeAnimation>;\n  triggerNodeAnimation: (nodeIds: string[], type: AnimationType) => void;\n  clearAnimations: () => void;\n\n  // Progress\n  progress: PipelineProgress | 
null;\n  setProgress: (progress: PipelineProgress | null) => void;\n\n  // Project info\n  projectName: string;\n  setProjectName: (name: string) => void;\n\n  // Multi-repo switching\n  serverBaseUrl: string | null;\n  setServerBaseUrl: (url: string | null) => void;\n  availableRepos: RepoSummary[];\n  setAvailableRepos: (repos: RepoSummary[]) => void;\n  switchRepo: (repoName: string) => Promise<void>;\n\n  // Worker API (shared across app)\n  runPipeline: (file: File, onProgress: (p: PipelineProgress) => void, clusteringConfig?: ProviderConfig) => Promise<PipelineResult>;\n  runPipelineFromFiles: (files: FileEntry[], onProgress: (p: PipelineProgress) => void, clusteringConfig?: ProviderConfig) => Promise<PipelineResult>;\n  runQuery: (cypher: string) => Promise<any[]>;\n  isDatabaseReady: () => Promise<boolean>;\n\n  // Embedding state\n  embeddingStatus: EmbeddingStatus;\n  embeddingProgress: EmbeddingProgress | null;\n\n  // Embedding methods\n  startEmbeddings: (forceDevice?: 'webgpu' | 'wasm') => Promise<void>;\n  semanticSearch: (query: string, k?: number) => Promise<SemanticSearchResult[]>;\n  semanticSearchWithContext: (query: string, k?: number, hops?: number) => Promise<any[]>;\n  isEmbeddingReady: boolean;\n\n  // Debug/test methods\n  testArrayParams: () => Promise<{ success: boolean; error?: string }>;\n\n  // LLM/Agent state\n  llmSettings: LLMSettings;\n  updateLLMSettings: (updates: Partial<LLMSettings>) => void;\n  isSettingsPanelOpen: boolean;\n  setSettingsPanelOpen: (open: boolean) => void;\n  isAgentReady: boolean;\n  isAgentInitializing: boolean;\n  agentError: string | null;\n\n  // Chat state\n  chatMessages: ChatMessage[];\n  isChatLoading: boolean;\n  currentToolCalls: ToolCallInfo[];\n\n  // LLM methods\n  refreshLLMSettings: () => void;\n  initializeAgent: (overrideProjectName?: string) => Promise<void>;\n  sendChatMessage: (message: string) => Promise<void>;\n  stopChatResponse: () => void;\n  clearChat: () => void;\n\n  // Code 
References Panel\n  codeReferences: CodeReference[];\n  isCodePanelOpen: boolean;\n  setCodePanelOpen: (open: boolean) => void;\n  addCodeReference: (ref: Omit<CodeReference, 'id'>) => void;\n  removeCodeReference: (id: string) => void;\n  clearAICodeReferences: () => void;\n  clearCodeReferences: () => void;\n  codeReferenceFocus: CodeReferenceFocus | null;\n}\n\nconst AppStateContext = createContext<AppState | null>(null);\n\nexport const AppStateProvider = ({ children }: { children: ReactNode }) => {\n  // View state\n  const [viewMode, setViewMode] = useState<ViewMode>('onboarding');\n\n  // Graph data\n  const [graph, setGraph] = useState<KnowledgeGraph | null>(null);\n  const [fileContents, setFileContents] = useState<Map<string, string>>(new Map());\n\n  // Selection\n  const [selectedNode, setSelectedNode] = useState<GraphNode | null>(null);\n\n  // Right Panel\n  const [isRightPanelOpen, setRightPanelOpen] = useState(false);\n  const [rightPanelTab, setRightPanelTab] = useState<RightPanelTab>('code');\n\n  const openCodePanel = useCallback(() => {\n    // Legacy API: used by graph/tree selection.\n    // Code is now shown in the Code References Panel (left of the graph),\n    // so \"openCodePanel\" just ensures that panel becomes visible when needed.\n    setCodePanelOpen(true);\n  }, []);\n\n  const openChatPanel = useCallback(() => {\n    setRightPanelOpen(true);\n    setRightPanelTab('chat');\n  }, []);\n\n  // Filters\n  const [visibleLabels, setVisibleLabels] = useState<NodeLabel[]>(DEFAULT_VISIBLE_LABELS);\n  const [visibleEdgeTypes, setVisibleEdgeTypes] = useState<EdgeType[]>(DEFAULT_VISIBLE_EDGES);\n\n  // Depth filter\n  const [depthFilter, setDepthFilter] = useState<number | null>(null);\n\n  // Query state\n  const [highlightedNodeIds, setHighlightedNodeIds] = useState<Set<string>>(new Set());\n  const [queryResult, setQueryResult] = useState<QueryResult | null>(null);\n\n  // AI highlights (separate from user/query highlights)\n  const 
[aiCitationHighlightedNodeIds, setAICitationHighlightedNodeIds] = useState<Set<string>>(new Set());\n  const [aiToolHighlightedNodeIds, setAIToolHighlightedNodeIds] = useState<Set<string>>(new Set());\n  const [blastRadiusNodeIds, setBlastRadiusNodeIds] = useState<Set<string>>(new Set());\n  const [isAIHighlightsEnabled, setAIHighlightsEnabled] = useState(true);\n\n  const toggleAIHighlights = useCallback(() => {\n    setAIHighlightsEnabled(prev => !prev);\n  }, []);\n\n  const clearAIToolHighlights = useCallback(() => {\n    setAIToolHighlightedNodeIds(new Set());\n  }, []);\n\n  const clearBlastRadius = useCallback(() => {\n    setBlastRadiusNodeIds(new Set());\n  }, []);\n\n  const clearQueryHighlights = useCallback(() => {\n    setHighlightedNodeIds(new Set());\n    setQueryResult(null);\n  }, []);\n\n  // Node animations (for MCP tool visual feedback)\n  const [animatedNodes, setAnimatedNodes] = useState<Map<string, NodeAnimation>>(new Map());\n  const animationTimerRef = useRef<ReturnType<typeof setInterval> | null>(null);\n\n  const triggerNodeAnimation = useCallback((nodeIds: string[], type: AnimationType) => {\n    const now = Date.now();\n    const duration = type === 'pulse' ? 2000 : type === 'ripple' ? 
3000 : 4000;\n\n    setAnimatedNodes(prev => {\n      const next = new Map(prev);\n      for (const id of nodeIds) {\n        next.set(id, { type, startTime: now, duration });\n      }\n      return next;\n    });\n\n    // Auto-cleanup after duration\n    setTimeout(() => {\n      setAnimatedNodes(prev => {\n        const next = new Map(prev);\n        for (const id of nodeIds) {\n          const anim = next.get(id);\n          if (anim && anim.startTime === now) {\n            next.delete(id);\n          }\n        }\n        return next;\n      });\n    }, duration + 100);\n  }, []);\n\n  const clearAnimations = useCallback(() => {\n    setAnimatedNodes(new Map());\n    if (animationTimerRef.current) {\n      clearInterval(animationTimerRef.current);\n      animationTimerRef.current = null;\n    }\n  }, []);\n\n  // Progress\n  const [progress, setProgress] = useState<PipelineProgress | null>(null);\n\n  // Project info\n  const [projectName, setProjectName] = useState<string>('');\n\n  // Multi-repo switching\n  const [serverBaseUrl, setServerBaseUrl] = useState<string | null>(null);\n  const [availableRepos, setAvailableRepos] = useState<RepoSummary[]>([]);\n\n  // Embedding state\n  const [embeddingStatus, setEmbeddingStatus] = useState<EmbeddingStatus>('idle');\n  const [embeddingProgress, setEmbeddingProgress] = useState<EmbeddingProgress | null>(null);\n\n  // LLM/Agent state\n  const [llmSettings, setLLMSettings] = useState<LLMSettings>(loadSettings);\n  const [isSettingsPanelOpen, setSettingsPanelOpen] = useState(false);\n  const [isAgentReady, setIsAgentReady] = useState(false);\n  const [isAgentInitializing, setIsAgentInitializing] = useState(false);\n  const [agentError, setAgentError] = useState<string | null>(null);\n\n  // Chat state\n  const [chatMessages, setChatMessages] = useState<ChatMessage[]>([]);\n  const [isChatLoading, setIsChatLoading] = useState(false);\n  const [currentToolCalls, setCurrentToolCalls] = useState<ToolCallInfo[]>([]);\n\n 
 // Code References Panel state\n  const [codeReferences, setCodeReferences] = useState<CodeReference[]>([]);\n  const [isCodePanelOpen, setCodePanelOpen] = useState(false);\n  const [codeReferenceFocus, setCodeReferenceFocus] = useState<CodeReferenceFocus | null>(null);\n\n    const normalizePath = useCallback((p: string) => {\n    return p.replace(/\\\\/g, '/').replace(/^\\.?\\//, '');\n  }, []);\n\n  const resolveFilePath = useCallback((requestedPath: string): string | null => {\n    const req = normalizePath(requestedPath).toLowerCase();\n    if (!req) return null;\n\n    // Exact match first\n    for (const key of fileContents.keys()) {\n      if (normalizePath(key).toLowerCase() === req) return key;\n    }\n\n    // Ends-with match (best for partial paths like \"src/foo.ts\")\n    let best: { path: string; score: number } | null = null;\n    for (const key of fileContents.keys()) {\n      const norm = normalizePath(key).toLowerCase();\n      if (norm.endsWith(req)) {\n        const score = 1000 - norm.length; // shorter is better\n        if (!best || score > best.score) best = { path: key, score };\n      }\n    }\n    if (best) return best.path;\n\n    // Segment match fallback\n    const segs = req.split('/').filter(Boolean);\n    for (const key of fileContents.keys()) {\n      const normSegs = normalizePath(key).toLowerCase().split('/').filter(Boolean);\n      let idx = 0;\n      for (const s of segs) {\n        const found = normSegs.findIndex((x, i) => i >= idx && x.includes(s));\n        if (found === -1) { idx = -1; break; }\n        idx = found + 1;\n      }\n      if (idx !== -1) return key;\n    }\n\n    return null;\n  }, [fileContents, normalizePath]);\n\n  const findFileNodeId = useCallback((filePath: string): string | undefined => {\n    if (!graph) return undefined;\n    const target = normalizePath(filePath);\n    const fileNode = graph.nodes.find(\n      (n) => n.label === 'File' && normalizePath(n.properties.filePath) === target\n    );\n   
 return fileNode?.id;\n  }, [graph, normalizePath]);\n\n  // Code References methods\n  const addCodeReference = useCallback((ref: Omit<CodeReference, 'id'>) => {\n    const id = `ref-${Date.now()}-${Math.random().toString(36).slice(2, 8)}`;\n    const newRef: CodeReference = { ...ref, id };\n\n    setCodeReferences(prev => {\n      // Don't add duplicates (same file + line range)\n      const isDuplicate = prev.some(r =>\n        r.filePath === ref.filePath &&\n        r.startLine === ref.startLine &&\n        r.endLine === ref.endLine\n      );\n      if (isDuplicate) return prev;\n      return [...prev, newRef];\n    });\n\n    // Auto-open panel when references are added\n    setCodePanelOpen(true);\n\n    // Signal the Code Inspector to focus (scroll + glow) this reference.\n    // This should happen even if the reference already exists (duplicates are ignored),\n    // so it must be separate from the add-to-list behavior.\n    setCodeReferenceFocus({\n      filePath: ref.filePath,\n      startLine: ref.startLine,\n      endLine: ref.endLine,\n      ts: Date.now(),\n    });\n\n    // Track AI highlights separately so they can be toggled off in the UI\n    if (ref.nodeId && ref.source === 'ai') {\n      setAICitationHighlightedNodeIds(prev => new Set([...prev, ref.nodeId!]));\n    }\n  }, []);\n\n  // Remove ONLY AI-provided refs so each new chat response refreshes the Code panel\n  const clearAICodeReferences = useCallback(() => {\n    setCodeReferences(prev => {\n      const removed = prev.filter(r => r.source === 'ai');\n      const kept = prev.filter(r => r.source !== 'ai');\n\n      // Remove citation-based AI highlights for removed refs\n      const removedNodeIds = new Set(removed.map(r => r.nodeId).filter(Boolean) as string[]);\n      if (removedNodeIds.size > 0) {\n        setAICitationHighlightedNodeIds(prevIds => {\n          const next = new Set(prevIds);\n          for (const id of removedNodeIds) next.delete(id);\n          return next;\n        
});\n      }\n\n      // Don't auto-close if the user has something selected (top viewer)\n      if (kept.length === 0 && !selectedNode) {\n        setCodePanelOpen(false);\n      }\n      return kept;\n    });\n  }, [selectedNode]);\n\n  // Open the Code panel when the user selects a node in the graph/tree\n  useEffect(() => {\n    if (!selectedNode) return;\n    // User selection should show in the top \"Selected file\" viewer,\n    // not be appended to the AI citations list.\n    setCodePanelOpen(true);\n  }, [selectedNode]);\n\n  // Worker (single instance shared across app)\n  const workerRef = useRef<Worker | null>(null);\n  const apiRef = useRef<Comlink.Remote<IngestionWorkerApi> | null>(null);\n\n  useEffect(() => {\n    const worker = new Worker(\n      new URL('../workers/ingestion.worker.ts', import.meta.url),\n      { type: 'module' }\n    );\n    const api = Comlink.wrap<IngestionWorkerApi>(worker);\n    workerRef.current = worker;\n    apiRef.current = api;\n\n    return () => {\n      worker.terminate();\n      workerRef.current = null;\n      apiRef.current = null;\n    };\n  }, []);\n\n  const runPipeline = useCallback(async (\n    file: File,\n    onProgress: (progress: PipelineProgress) => void,\n    clusteringConfig?: ProviderConfig\n  ): Promise<PipelineResult> => {\n    const api = apiRef.current;\n    if (!api) throw new Error('Worker not initialized');\n\n    const proxiedOnProgress = Comlink.proxy(onProgress);\n    const serializedResult = await api.runPipeline(file, proxiedOnProgress, clusteringConfig);\n    return deserializePipelineResult(serializedResult, createKnowledgeGraph);\n  }, []);\n\n  const runPipelineFromFiles = useCallback(async (\n    files: FileEntry[],\n    onProgress: (progress: PipelineProgress) => void,\n    clusteringConfig?: ProviderConfig\n  ): Promise<PipelineResult> => {\n    const api = apiRef.current;\n    if (!api) throw new Error('Worker not initialized');\n\n    const proxiedOnProgress = 
Comlink.proxy(onProgress);\n    const serializedResult = await api.runPipelineFromFiles(files, proxiedOnProgress, clusteringConfig);\n    return deserializePipelineResult(serializedResult, createKnowledgeGraph);\n  }, []);\n\n  const runQuery = useCallback(async (cypher: string): Promise<any[]> => {\n    const api = apiRef.current;\n    if (!api) throw new Error('Worker not initialized');\n    return api.runQuery(cypher);\n  }, []);\n\n  const isDatabaseReady = useCallback(async (): Promise<boolean> => {\n    const api = apiRef.current;\n    if (!api) return false;\n    try {\n      return await api.isReady();\n    } catch {\n      return false;\n    }\n  }, []);\n\n  // Embedding methods\n  const startEmbeddings = useCallback(async (forceDevice?: 'webgpu' | 'wasm'): Promise<void> => {\n    const api = apiRef.current;\n    if (!api) throw new Error('Worker not initialized');\n\n    setEmbeddingStatus('loading');\n    setEmbeddingProgress(null);\n\n    try {\n      const proxiedOnProgress = Comlink.proxy((progress: EmbeddingProgress) => {\n        setEmbeddingProgress(progress);\n\n        // Update status based on phase\n        switch (progress.phase) {\n          case 'loading-model':\n            setEmbeddingStatus('loading');\n            break;\n          case 'embedding':\n            setEmbeddingStatus('embedding');\n            break;\n          case 'indexing':\n            setEmbeddingStatus('indexing');\n            break;\n          case 'ready':\n            setEmbeddingStatus('ready');\n            break;\n          case 'error':\n            setEmbeddingStatus('error');\n            break;\n        }\n      });\n\n      await api.startEmbeddingPipeline(proxiedOnProgress, forceDevice);\n    } catch (error: any) {\n      // Check if it's WebGPU not available - let caller handle the dialog\n      if (error?.name === 'WebGPUNotAvailableError' ||\n        error?.message?.includes('WebGPU not available')) {\n        setEmbeddingStatus('idle'); // Reset to 
idle so user can try again\n      } else {\n        setEmbeddingStatus('error');\n      }\n      throw error;\n    }\n  }, []);\n\n  const semanticSearch = useCallback(async (\n    query: string,\n    k: number = 10\n  ): Promise<SemanticSearchResult[]> => {\n    const api = apiRef.current;\n    if (!api) throw new Error('Worker not initialized');\n    return api.semanticSearch(query, k);\n  }, []);\n\n  const semanticSearchWithContext = useCallback(async (\n    query: string,\n    k: number = 5,\n    hops: number = 2\n  ): Promise<any[]> => {\n    const api = apiRef.current;\n    if (!api) throw new Error('Worker not initialized');\n    return api.semanticSearchWithContext(query, k, hops);\n  }, []);\n\n  const testArrayParams = useCallback(async (): Promise<{ success: boolean; error?: string }> => {\n    const api = apiRef.current;\n    if (!api) return { success: false, error: 'Worker not initialized' };\n    return api.testArrayParams();\n  }, []);\n\n  // LLM methods\n  const updateLLMSettings = useCallback((updates: Partial<LLMSettings>) => {\n    setLLMSettings(prev => {\n      const next = { ...prev, ...updates };\n      saveSettings(next);\n      return next;\n    });\n  }, []);\n\n  const refreshLLMSettings = useCallback(() => {\n    setLLMSettings(loadSettings());\n  }, []);\n\n  const initializeAgent = useCallback(async (overrideProjectName?: string): Promise<void> => {\n    const api = apiRef.current;\n    if (!api) {\n      setAgentError('Worker not initialized');\n      return;\n    }\n\n    const config = getActiveProviderConfig();\n    if (!config) {\n      setAgentError('Please configure an LLM provider in settings');\n      return;\n    }\n\n    setIsAgentInitializing(true);\n    setAgentError(null);\n\n    try {\n      // Use override if provided (for fresh loads), fallback to state (for re-init)\n      const effectiveProjectName = overrideProjectName || projectName || 'project';\n      const result = await api.initializeAgent(config, 
effectiveProjectName);\n      if (result.success) {\n        setIsAgentReady(true);\n        setAgentError(null);\n        if (import.meta.env.DEV) {\n          console.log('✅ Agent initialized successfully');\n        }\n      } else {\n        setAgentError(result.error ?? 'Failed to initialize agent');\n        setIsAgentReady(false);\n      }\n    } catch (error) {\n      const message = error instanceof Error ? error.message : String(error);\n      setAgentError(message);\n      setIsAgentReady(false);\n    } finally {\n      setIsAgentInitializing(false);\n    }\n  }, [projectName]);\n\n  const sendChatMessage = useCallback(async (message: string): Promise<void> => {\n    const api = apiRef.current;\n    if (!api) {\n      setAgentError('Worker not initialized');\n      return;\n    }\n\n    // Refresh Code panel for the new question: keep user-pinned refs, clear old AI citations\n    clearAICodeReferences();\n    // Also clear previous tool-driven AI highlights (highlight_in_graph)\n    clearAIToolHighlights();\n\n    if (!isAgentReady) {\n      // Try to initialize first\n      await initializeAgent();\n      if (!apiRef.current) return;\n    }\n\n    // Add user message\n    const userMessage: ChatMessage = {\n      id: `user-${Date.now()}`,\n      role: 'user',\n      content: message,\n      timestamp: Date.now(),\n    };\n    setChatMessages(prev => [...prev, userMessage]);\n\n    // If embeddings are running and we're currently creating the vector index,\n    // avoid a confusing \"Embeddings not ready\" error and give a clear wait message.\n    if (embeddingStatus === 'indexing') {\n      const assistantMessage: ChatMessage = {\n        id: `assistant-${Date.now()}`,\n        role: 'assistant',\n        content: 'The vector index is still being created. Please try again in a moment.',\n        timestamp: Date.now(),\n      };\n      setChatMessages(prev => [...prev, assistantMessage]);\n      setAgentError(null);\n      setIsChatLoading(false);\n      
setCurrentToolCalls([]);\n      return;\n    }\n\n    setIsChatLoading(true);\n    setCurrentToolCalls([]);\n\n    // Prepare message history for agent (convert our format to AgentMessage format)\n    const history: AgentMessage[] = [...chatMessages, userMessage].map(m => ({\n      role: m.role === 'tool' ? 'assistant' : m.role,\n      content: m.content,\n    }));\n\n    // Create placeholder for assistant response\n    const assistantMessageId = `assistant-${Date.now()}`;\n    // Use an ordered steps array to preserve execution order (reasoning → tool → reasoning → tool → answer)\n    const stepsForMessage: MessageStep[] = [];\n    // Keep toolCalls for backwards compat and currentToolCalls state\n    const toolCallsForMessage: ToolCallInfo[] = [];\n    let stepCounter = 0;\n\n    // Helper to update the message with current steps\n    const updateMessage = () => {\n      // Build content from steps for backwards compatibility\n      const contentParts = stepsForMessage\n        .filter(s => s.type === 'reasoning' || s.type === 'content')\n        .map(s => s.content)\n        .filter(Boolean);\n      const content = contentParts.join('\\n\\n');\n\n      setChatMessages(prev => {\n        const existing = prev.find(m => m.id === assistantMessageId);\n        const newMessage: ChatMessage = {\n          id: assistantMessageId,\n          role: 'assistant' as const,\n          content,\n          steps: [...stepsForMessage],\n          toolCalls: [...toolCallsForMessage],\n          timestamp: existing?.timestamp ?? Date.now(),\n        };\n        if (existing) {\n          return prev.map(m => m.id === assistantMessageId ? 
newMessage : m);\n        } else {\n          return [...prev, newMessage];\n        }\n      });\n    };\n\n    try {\n      const onChunk = Comlink.proxy((chunk: AgentStreamChunk) => {\n        switch (chunk.type) {\n          case 'reasoning':\n            // LLM's thinking/reasoning - accumulate contiguous reasoning\n            if (chunk.reasoning) {\n              const lastStep = stepsForMessage[stepsForMessage.length - 1];\n              if (lastStep && lastStep.type === 'reasoning') {\n                // Append to existing reasoning step\n                stepsForMessage[stepsForMessage.length - 1] = {\n                  ...lastStep,\n                  content: (lastStep.content || '') + chunk.reasoning,\n                };\n              } else {\n                // Create new reasoning step (after tool calls or at start)\n                stepsForMessage.push({\n                  id: `step-${stepCounter++}`,\n                  type: 'reasoning',\n                  content: chunk.reasoning,\n                });\n              }\n              updateMessage();\n            }\n            break;\n\n          case 'content':\n            // Final answer content - accumulate into contiguous content step\n            if (chunk.content) {\n              // Only append if the LAST step is a content step (contiguous streaming)\n              const lastStep = stepsForMessage[stepsForMessage.length - 1];\n              if (lastStep && lastStep.type === 'content') {\n                // Append to existing content step\n                stepsForMessage[stepsForMessage.length - 1] = {\n                  ...lastStep,\n                  content: (lastStep.content || '') + chunk.content,\n                };\n              } else {\n                // Create new content step (after tool calls or at start)\n                stepsForMessage.push({\n                  id: `step-${stepCounter++}`,\n                  type: 'content',\n                  content: chunk.content,\n      
          });\n              }\n              updateMessage();\n\n              // Parse inline grounding references and add them to the Code References panel.\n              // Supports: [[file.ts:10-25]] (file refs) and [[Class:View]] (node refs)\n              const currentContentStep = stepsForMessage[stepsForMessage.length - 1];\n              const fullText = (currentContentStep && currentContentStep.type === 'content')\n                ? (currentContentStep.content || '')\n                : '';\n\n              // Pattern 1: File refs - [[path/file.ext]] or [[path/file.ext:line]] or [[path/file.ext:line-line]]\n              // Line numbers are optional\n              const fileRefRegex = /\\[\\[([a-zA-Z0-9_\\-./\\\\]+\\.[a-zA-Z0-9]+)(?::(\\d+)(?:[-–](\\d+))?)?\\]\\]/g;\n              let fileMatch: RegExpExecArray | null;\n              while ((fileMatch = fileRefRegex.exec(fullText)) !== null) {\n                const rawPath = fileMatch[1].trim();\n                const startLine1 = fileMatch[2] ? parseInt(fileMatch[2], 10) : undefined;\n                const endLine1 = fileMatch[3] ? parseInt(fileMatch[3], 10) : startLine1;\n\n                const resolvedPath = resolveFilePath(rawPath);\n                if (!resolvedPath) continue;\n\n                const startLine0 = startLine1 !== undefined ? Math.max(0, startLine1 - 1) : undefined;\n                const endLine0 = endLine1 !== undefined ? Math.max(0, endLine1 - 1) : startLine0;\n                const nodeId = findFileNodeId(resolvedPath);\n\n                addCodeReference({\n                  filePath: resolvedPath,\n                  startLine: startLine0,\n                  endLine: endLine0,\n                  nodeId,\n                  label: 'File',\n                  name: resolvedPath.split('/').pop() ?? 
resolvedPath,\n                  source: 'ai',\n                });\n              }\n\n              // Pattern 2: Node refs - [[Type:Name]] or [[graph:Type:Name]]\n              const nodeRefRegex = /\\[\\[(?:graph:)?(Class|Function|Method|Interface|File|Folder|Variable|Enum|Type|CodeElement):([^\\]]+)\\]\\]/g;\n              let nodeMatch: RegExpExecArray | null;\n              while ((nodeMatch = nodeRefRegex.exec(fullText)) !== null) {\n                const nodeType = nodeMatch[1];\n                const nodeName = nodeMatch[2].trim();\n\n                // Find node in graph\n                if (!graph) continue;\n                const node = graph.nodes.find(n =>\n                  n.label === nodeType &&\n                  n.properties.name === nodeName\n                );\n                if (!node || !node.properties.filePath) continue;\n\n                const resolvedPath = resolveFilePath(node.properties.filePath);\n                if (!resolvedPath) continue;\n\n                addCodeReference({\n                  filePath: resolvedPath,\n                  startLine: node.properties.startLine ? node.properties.startLine - 1 : undefined,\n                  endLine: node.properties.endLine ? 
node.properties.endLine - 1 : undefined,\n                  nodeId: node.id,\n                  label: node.label,\n                  name: node.properties.name,\n                  source: 'ai',\n                });\n              }\n            }\n            break;\n\n          case 'tool_call':\n            if (chunk.toolCall) {\n              const tc = chunk.toolCall;\n              toolCallsForMessage.push(tc);\n              // Add tool call as a step (in order with reasoning)\n              stepsForMessage.push({\n                id: `step-${stepCounter++}`,\n                type: 'tool_call',\n                toolCall: tc,\n              });\n              setCurrentToolCalls(prev => [...prev, tc]);\n              updateMessage();\n            }\n            break;\n\n          case 'tool_result':\n            if (chunk.toolCall) {\n              const tc = chunk.toolCall;\n              // Update the tool call status in toolCallsForMessage\n              let idx = toolCallsForMessage.findIndex(t => t.id === tc.id);\n              if (idx < 0) {\n                idx = toolCallsForMessage.findIndex(t => t.name === tc.name && t.status === 'running');\n              }\n              if (idx < 0) {\n                idx = toolCallsForMessage.findIndex(t => t.name === tc.name && !t.result);\n              }\n              if (idx >= 0) {\n                toolCallsForMessage[idx] = {\n                  ...toolCallsForMessage[idx],\n                  result: tc.result,\n                  status: 'completed'\n                };\n              }\n\n              // Also update the tool call in steps\n              const stepIdx = stepsForMessage.findIndex(s =>\n                s.type === 'tool_call' && s.toolCall && (\n                  s.toolCall.id === tc.id ||\n                  (s.toolCall.name === tc.name && s.toolCall.status === 'running')\n                )\n              );\n              if (stepIdx >= 0 && stepsForMessage[stepIdx].toolCall) {\n             
   stepsForMessage[stepIdx] = {\n                  ...stepsForMessage[stepIdx],\n                  toolCall: {\n                    ...stepsForMessage[stepIdx].toolCall!,\n                    result: tc.result,\n                    status: 'completed',\n                  },\n                };\n              }\n\n              // Update currentToolCalls\n              setCurrentToolCalls(prev => {\n                let targetIdx = prev.findIndex(t => t.id === tc.id);\n                if (targetIdx < 0) {\n                  targetIdx = prev.findIndex(t => t.name === tc.name && t.status === 'running');\n                }\n                if (targetIdx < 0) {\n                  targetIdx = prev.findIndex(t => t.name === tc.name && !t.result);\n                }\n                if (targetIdx >= 0) {\n                  return prev.map((t, i) => i === targetIdx\n                    ? { ...t, result: tc.result, status: 'completed' }\n                    : t\n                  );\n                }\n                return prev;\n              });\n\n              updateMessage();\n\n              // Parse highlight marker from tool results\n              if (tc.result) {\n                const highlightMatch = tc.result.match(/\\[HIGHLIGHT_NODES:([^\\]]+)\\]/);\n                if (highlightMatch) {\n                  const rawIds = highlightMatch[1].split(',').map((id: string) => id.trim()).filter(Boolean);\n                  if (rawIds.length > 0 && graph) {\n                    const matchedIds = new Set<string>();\n                    const graphNodeIds = graph.nodes.map(n => n.id);\n\n                    for (const rawId of rawIds) {\n                      if (graphNodeIds.includes(rawId)) {\n                        matchedIds.add(rawId);\n                      } else {\n                        const found = graphNodeIds.find(gid =>\n                          gid.endsWith(rawId) || gid.endsWith(':' + rawId)\n                        );\n                        if 
(found) {\n                          matchedIds.add(found);\n                        }\n                      }\n                    }\n\n                    if (matchedIds.size > 0) {\n                      setAIToolHighlightedNodeIds(matchedIds);\n                    }\n                  } else if (rawIds.length > 0) {\n                    setAIToolHighlightedNodeIds(new Set(rawIds));\n                  }\n                }\n\n                // Parse impact marker from tool results\n                const impactMatch = tc.result.match(/\\[IMPACT:([^\\]]+)\\]/);\n                if (impactMatch) {\n                  const rawIds = impactMatch[1].split(',').map((id: string) => id.trim()).filter(Boolean);\n                  if (rawIds.length > 0 && graph) {\n                    const matchedIds = new Set<string>();\n                    const graphNodeIds = graph.nodes.map(n => n.id);\n\n                    for (const rawId of rawIds) {\n                      if (graphNodeIds.includes(rawId)) {\n                        matchedIds.add(rawId);\n                      } else {\n                        const found = graphNodeIds.find(gid =>\n                          gid.endsWith(rawId) || gid.endsWith(':' + rawId)\n                        );\n                        if (found) {\n                          matchedIds.add(found);\n                        }\n                      }\n                    }\n\n                    if (matchedIds.size > 0) {\n                      setBlastRadiusNodeIds(matchedIds);\n                    }\n                  } else if (rawIds.length > 0) {\n                    setBlastRadiusNodeIds(new Set(rawIds));\n                  }\n                }\n              }\n            }\n            break;\n\n          case 'error':\n            setAgentError(chunk.error ?? 
'Unknown error');\n            break;\n\n          case 'done':\n            // Finalize the assistant message - just call updateMessage one more time\n            updateMessage();\n            break;\n        }\n      });\n\n      await api.chatStream(history, onChunk);\n    } catch (error) {\n      const message = error instanceof Error ? error.message : String(error);\n      setAgentError(message);\n    } finally {\n      setIsChatLoading(false);\n      setCurrentToolCalls([]);\n    }\n  }, [chatMessages, isAgentReady, initializeAgent, resolveFilePath, findFileNodeId, addCodeReference, clearAICodeReferences, clearAIToolHighlights, graph, embeddingStatus]);\n\n  const stopChatResponse = useCallback(() => {\n    const api = apiRef.current;\n    if (api && isChatLoading) {\n      api.stopChat();\n      setIsChatLoading(false);\n      setCurrentToolCalls([]);\n    }\n  }, [isChatLoading]);\n\n  const clearChat = useCallback(() => {\n    setChatMessages([]);\n    setCurrentToolCalls([]);\n    setAgentError(null);\n  }, []);\n\n  // Switch to a different repo on the connected server\n  const switchRepo = useCallback(async (repoName: string) => {\n    if (!serverBaseUrl) return;\n\n    setProgress({ phase: 'extracting', percent: 0, message: 'Switching repository...', detail: `Loading ${repoName}` });\n    setViewMode('loading');\n\n    // Clear stale graph state from previous repo (highlights, selections, blast radius)\n    // Without this, sigma reducers dim ALL nodes/edges because old node IDs don't match\n    setHighlightedNodeIds(new Set());\n    clearAIToolHighlights();\n    clearBlastRadius();\n    setSelectedNode(null);\n    setQueryResult(null);\n    setCodeReferences([]);\n    setCodePanelOpen(false);\n    setCodeReferenceFocus(null);\n\n    try {\n      const result: ConnectToServerResult = await connectToServer(serverBaseUrl, (phase, downloaded, total) => {\n        if (phase === 'validating') {\n          setProgress({ phase: 'extracting', percent: 5, 
message: 'Switching repository...', detail: 'Validating' });\n        } else if (phase === 'downloading') {\n          const pct = total ? Math.round((downloaded / total) * 90) + 5 : 50;\n          const mb = (downloaded / (1024 * 1024)).toFixed(1);\n          setProgress({ phase: 'extracting', percent: pct, message: 'Downloading graph...', detail: `${mb} MB downloaded` });\n        } else if (phase === 'extracting') {\n          setProgress({ phase: 'extracting', percent: 97, message: 'Processing...', detail: 'Extracting file contents' });\n        }\n      }, undefined, repoName);\n\n      // Reuse the same handleServerConnect logic inline\n      const repoPath = result.repoInfo.repoPath;\n      const pName = result.repoInfo.name || repoPath.split('/').pop() || 'server-project';\n      setProjectName(pName);\n\n      const graph = createKnowledgeGraph();\n      for (const node of result.nodes) graph.addNode(node);\n      for (const rel of result.relationships) graph.addRelationship(rel);\n      setGraph(graph);\n\n      const fileMap = new Map<string, string>();\n      for (const [p, c] of Object.entries(result.fileContents)) fileMap.set(p, c);\n      setFileContents(fileMap);\n\n      setViewMode('exploring');\n\n      if (getActiveProviderConfig()) initializeAgent(pName);\n\n      startEmbeddings().catch((err) => {\n        if (err?.name === 'WebGPUNotAvailableError' || err?.message?.includes('WebGPU')) {\n          startEmbeddings('wasm').catch(console.warn);\n        } else {\n          console.warn('Embeddings auto-start failed:', err);\n        }\n      });\n    } catch (err) {\n      console.error('Repo switch failed:', err);\n      setProgress({\n        phase: 'error', percent: 0,\n        message: 'Failed to switch repository',\n        detail: err instanceof Error ? 
err.message : 'Unknown error',\n      });\n      setTimeout(() => { setViewMode('exploring'); setProgress(null); }, 3000);\n    }\n  }, [serverBaseUrl, setProgress, setViewMode, setProjectName, setGraph, setFileContents, initializeAgent, startEmbeddings, setHighlightedNodeIds, clearAIToolHighlights, clearBlastRadius, setSelectedNode, setQueryResult, setCodeReferences, setCodePanelOpen, setCodeReferenceFocus]);\n\n  const removeCodeReference = useCallback((id: string) => {\n    setCodeReferences(prev => {\n      const ref = prev.find(r => r.id === id);\n      const newRefs = prev.filter(r => r.id !== id);\n\n      // Remove AI citation highlight if this was the only AI reference to that node\n      if (ref?.nodeId && ref.source === 'ai') {\n        const stillReferenced = newRefs.some(r => r.nodeId === ref.nodeId && r.source === 'ai');\n        if (!stillReferenced) {\n          setAICitationHighlightedNodeIds(prev => {\n            const next = new Set(prev);\n            next.delete(ref.nodeId!);\n            return next;\n          });\n        }\n      }\n\n      // Auto-close panel if no references left AND no selection in top viewer\n      if (newRefs.length === 0 && !selectedNode) {\n        setCodePanelOpen(false);\n      }\n\n      return newRefs;\n    });\n  }, [selectedNode]);\n\n  const clearCodeReferences = useCallback(() => {\n    setCodeReferences([]);\n    setCodePanelOpen(false);\n    setCodeReferenceFocus(null);\n  }, []);\n\n  const toggleLabelVisibility = useCallback((label: NodeLabel) => {\n    setVisibleLabels(prev => {\n      if (prev.includes(label)) {\n        return prev.filter(l => l !== label);\n      } else {\n        return [...prev, label];\n      }\n    });\n  }, []);\n\n  const toggleEdgeVisibility = useCallback((edgeType: EdgeType) => {\n    setVisibleEdgeTypes(prev => {\n      if (prev.includes(edgeType)) {\n        return prev.filter(t => t !== edgeType);\n      } else {\n        return [...prev, edgeType];\n      }\n    });\n  }, 
[]);\n\n  const value: AppState = {\n    viewMode,\n    setViewMode,\n    graph,\n    setGraph,\n    fileContents,\n    setFileContents,\n    selectedNode,\n    setSelectedNode,\n    isRightPanelOpen,\n    setRightPanelOpen,\n    rightPanelTab,\n    setRightPanelTab,\n    openCodePanel,\n    openChatPanel,\n    visibleLabels,\n    toggleLabelVisibility,\n    visibleEdgeTypes,\n    toggleEdgeVisibility,\n    depthFilter,\n    setDepthFilter,\n    highlightedNodeIds,\n    setHighlightedNodeIds,\n    aiCitationHighlightedNodeIds,\n    aiToolHighlightedNodeIds,\n    blastRadiusNodeIds,\n    isAIHighlightsEnabled,\n    toggleAIHighlights,\n    clearAIToolHighlights,\n    clearBlastRadius,\n    queryResult,\n    setQueryResult,\n    clearQueryHighlights,\n    // Node animations\n    animatedNodes,\n    triggerNodeAnimation,\n    clearAnimations,\n    progress,\n    setProgress,\n    projectName,\n    setProjectName,\n    // Multi-repo switching\n    serverBaseUrl,\n    setServerBaseUrl,\n    availableRepos,\n    setAvailableRepos,\n    switchRepo,\n    runPipeline,\n    runPipelineFromFiles,\n    runQuery,\n    isDatabaseReady,\n    // Embedding state and methods\n    embeddingStatus,\n    embeddingProgress,\n    startEmbeddings,\n    semanticSearch,\n    semanticSearchWithContext,\n    isEmbeddingReady: embeddingStatus === 'ready',\n    // Debug\n    testArrayParams,\n    // LLM/Agent state\n    llmSettings,\n    updateLLMSettings,\n    isSettingsPanelOpen,\n    setSettingsPanelOpen,\n    isAgentReady,\n    isAgentInitializing,\n    agentError,\n    // Chat state\n    chatMessages,\n    isChatLoading,\n    currentToolCalls,\n    // LLM methods\n    refreshLLMSettings,\n    initializeAgent,\n    sendChatMessage,\n    stopChatResponse,\n    clearChat,\n    // Code References Panel\n    codeReferences,\n    isCodePanelOpen,\n    setCodePanelOpen,\n    addCodeReference,\n    removeCodeReference,\n    clearAICodeReferences,\n    clearCodeReferences,\n    
codeReferenceFocus,\n  };\n\n  return (\n    <AppStateContext.Provider value={value}>\n      {children}\n    </AppStateContext.Provider>\n  );\n};\n\nexport const useAppState = (): AppState => {\n  const context = useContext(AppStateContext);\n  if (!context) {\n    throw new Error('useAppState must be used within AppStateProvider');\n  }\n  return context;\n};\n\n"
  },
  {
    "path": "gitnexus-web/src/hooks/useBackend.ts",
    "content": "import { useState, useEffect, useCallback, useRef } from 'react';\nimport {\n  probeBackend,\n  fetchRepos,\n  setBackendUrl as setServiceUrl,\n  getBackendUrl,\n  type BackendRepo,\n} from '../services/backend';\n\n// ── localStorage keys ────────────────────────────────────────────────────────\n\nconst LS_URL_KEY = 'gitnexus-backend-url';\nconst LS_REPO_KEY = 'gitnexus-backend-repo';\nconst DEFAULT_URL = 'http://localhost:4747';\n\n// ── Debounce delay ───────────────────────────────────────────────────────────\n\nconst DEBOUNCE_MS = 500;\n\n// ── Public interface ─────────────────────────────────────────────────────────\n\nexport interface UseBackendResult {\n  /** Backend probe succeeded */\n  isConnected: boolean;\n  /** Currently checking connection */\n  isProbing: boolean;\n  /** Current backend URL */\n  backendUrl: string;\n\n  /** Available repos from the server */\n  repos: BackendRepo[];\n  /** Currently selected repo name */\n  selectedRepo: string | null;\n\n  /** Change the backend URL, persist to localStorage, and re-probe */\n  setBackendUrl: (url: string) => void;\n  /** Select a repo (persisted to localStorage) */\n  selectRepo: (name: string) => void;\n  /** Manually re-check the backend connection */\n  probe: () => Promise<boolean>;\n  /** Clear connection state and go back to browser-only mode */\n  disconnect: () => void;\n}\n\n// ── Hook implementation ──────────────────────────────────────────────────────\n\nexport function useBackend(): UseBackendResult {\n  // Read persisted values on first render only\n  const [backendUrl, setUrlState] = useState<string>(() => {\n    try {\n      return localStorage.getItem(LS_URL_KEY) ?? 
DEFAULT_URL;\n    } catch {\n      return DEFAULT_URL;\n    }\n  });\n\n  const [isConnected, setIsConnected] = useState(false);\n  const [isProbing, setIsProbing] = useState(false);\n  const [repos, setRepos] = useState<BackendRepo[]>([]);\n  const [selectedRepo, setSelectedRepo] = useState<string | null>(() => {\n    try {\n      return localStorage.getItem(LS_REPO_KEY);\n    } catch {\n      return null;\n    }\n  });\n\n  // Race-condition guard: monotonically increasing probe ID\n  const probeIdRef = useRef(0);\n  // Debounce timer handle\n  const debounceRef = useRef<ReturnType<typeof setTimeout> | null>(null);\n\n  // ── Core probe logic (not debounced) ─────────────────────────────────────\n\n  const probe = useCallback(async (): Promise<boolean> => {\n    const id = ++probeIdRef.current;\n    setIsProbing(true);\n\n    try {\n      const ok = await probeBackend();\n\n      // If a newer probe was started while we were in-flight, discard this result\n      if (id !== probeIdRef.current) return false;\n\n      setIsConnected(ok);\n\n      if (ok) {\n        try {\n          const repoList = await fetchRepos();\n          // Re-check: still the latest probe?\n          if (id !== probeIdRef.current) return false;\n          setRepos(repoList);\n        } catch {\n          if (id === probeIdRef.current) {\n            setRepos([]);\n          }\n        }\n      } else {\n        setRepos([]);\n      }\n\n      return ok;\n    } catch {\n      if (id === probeIdRef.current) {\n        setIsConnected(false);\n        setRepos([]);\n      }\n      return false;\n    } finally {\n      if (id === probeIdRef.current) {\n        setIsProbing(false);\n      }\n    }\n  }, []);\n\n  // ── setBackendUrl: persist, update service, trigger debounced re-probe ───\n\n  const setBackendUrl = useCallback(\n    (url: string) => {\n      setUrlState(url);\n      setServiceUrl(url);\n\n      try {\n        localStorage.setItem(LS_URL_KEY, url);\n      } catch {\n        // 
localStorage may be unavailable (e.g. incognito quota exceeded)\n      }\n\n      // Debounce: clear any pending probe, schedule a new one\n      if (debounceRef.current !== null) {\n        clearTimeout(debounceRef.current);\n      }\n      debounceRef.current = setTimeout(() => {\n        debounceRef.current = null;\n        void probe();\n      }, DEBOUNCE_MS);\n    },\n    [probe],\n  );\n\n  // ── selectRepo: persist and update state ─────────────────────────────────\n\n  const selectRepo = useCallback((name: string) => {\n    setSelectedRepo(name);\n    try {\n      localStorage.setItem(LS_REPO_KEY, name);\n    } catch {\n      // localStorage may be unavailable\n    }\n  }, []);\n\n  // ── disconnect: clear connection state (URL stays in localStorage) ───────\n\n  const disconnect = useCallback(() => {\n    // Bump probe ID so any in-flight probe is ignored\n    probeIdRef.current++;\n    setIsConnected(false);\n    setIsProbing(false);\n    setRepos([]);\n    setSelectedRepo(null);\n    try {\n      localStorage.removeItem(LS_REPO_KEY);\n    } catch {\n      // localStorage may be unavailable\n    }\n  }, []);\n\n  // ── Mount: sync service URL + auto-probe ─────────────────────────────────\n\n  useEffect(() => {\n    // Ensure the service module is in sync with the persisted URL\n    setServiceUrl(backendUrl);\n    void probe();\n\n    // Cleanup debounce timer on unmount\n    return () => {\n      if (debounceRef.current !== null) {\n        clearTimeout(debounceRef.current);\n      }\n    };\n    // Only run on mount — backendUrl and probe are stable refs from useState/useCallback\n    // eslint-disable-next-line react-hooks/exhaustive-deps\n  }, []);\n\n  return {\n    isConnected,\n    isProbing,\n    backendUrl,\n    repos,\n    selectedRepo,\n    setBackendUrl,\n    selectRepo,\n    probe,\n    disconnect,\n  };\n}\n"
  },
  {
    "path": "gitnexus-web/src/hooks/useSettings.ts",
    "content": "import { useAppState } from './useAppState';\n\nexport const useSettings = () => {\n  const { llmSettings, updateLLMSettings } = useAppState();\n  \n  return {\n    settings: llmSettings,\n    updateSettings: updateLLMSettings\n  };\n};\n"
  },
  {
    "path": "gitnexus-web/src/hooks/useSigma.ts",
    "content": "import { useRef, useEffect, useCallback, useState } from 'react';\nimport Sigma from 'sigma';\nimport Graph from 'graphology';\nimport FA2Layout from 'graphology-layout-forceatlas2/worker';\nimport forceAtlas2 from 'graphology-layout-forceatlas2';\nimport noverlap from 'graphology-layout-noverlap';\nimport EdgeCurveProgram from '@sigma/edge-curve';\nimport { SigmaNodeAttributes, SigmaEdgeAttributes } from '../lib/graph-adapter';\nimport type { NodeAnimation } from './useAppState';\nimport type { EdgeType } from '../lib/constants';\n// Helper: Parse hex color to RGB\nconst hexToRgb = (hex: string): { r: number; g: number; b: number } => {\n  const result = /^#?([a-f\\d]{2})([a-f\\d]{2})([a-f\\d]{2})$/i.exec(hex);\n  return result\n    ? {\n        r: parseInt(result[1], 16),\n        g: parseInt(result[2], 16),\n        b: parseInt(result[3], 16),\n      }\n    : { r: 100, g: 100, b: 100 };\n};\n\n// Helper: RGB to hex\nconst rgbToHex = (r: number, g: number, b: number): string => {\n  return '#' + [r, g, b].map(x => {\n    const hex = Math.max(0, Math.min(255, Math.round(x))).toString(16);\n    return hex.length === 1 ? 
'0' + hex : hex;\n  }).join('');\n};\n\n// Dim a color by mixing with dark background (keeps color hint)\nconst dimColor = (hex: string, amount: number): string => {\n  const rgb = hexToRgb(hex);\n  const darkBg = { r: 18, g: 18, b: 28 }; // #12121c - dark background\n  return rgbToHex(\n    darkBg.r + (rgb.r - darkBg.r) * amount,\n    darkBg.g + (rgb.g - darkBg.g) * amount,\n    darkBg.b + (rgb.b - darkBg.b) * amount\n  );\n};\n\n// Brighten a color (increase luminosity)\nconst brightenColor = (hex: string, factor: number): string => {\n  const rgb = hexToRgb(hex);\n  return rgbToHex(\n    rgb.r + (255 - rgb.r) * (factor - 1) / factor,\n    rgb.g + (255 - rgb.g) * (factor - 1) / factor,\n    rgb.b + (255 - rgb.b) * (factor - 1) / factor\n  );\n};\n\ninterface UseSigmaOptions {\n  onNodeClick?: (nodeId: string) => void;\n  onNodeHover?: (nodeId: string | null) => void;\n  onStageClick?: () => void;\n  highlightedNodeIds?: Set<string>;\n  blastRadiusNodeIds?: Set<string>;\n  animatedNodes?: Map<string, NodeAnimation>;\n  visibleEdgeTypes?: EdgeType[];\n}\n\ninterface UseSigmaReturn {\n  containerRef: React.RefObject<HTMLDivElement>;\n  sigmaRef: React.RefObject<Sigma | null>;\n  setGraph: (graph: Graph<SigmaNodeAttributes, SigmaEdgeAttributes>) => void;\n  zoomIn: () => void;\n  zoomOut: () => void;\n  resetZoom: () => void;\n  focusNode: (nodeId: string) => void;\n  isLayoutRunning: boolean;\n  startLayout: () => void;\n  stopLayout: () => void;\n  selectedNode: string | null;\n  setSelectedNode: (nodeId: string | null) => void;\n  refreshHighlights: () => void;\n}\n\n// Noverlap for final cleanup - minimal since it starts with good positions\nconst NOVERLAP_SETTINGS = {\n  maxIterations: 20,  // Reduced - less cleanup needed\n  ratio: 1.1,\n  margin: 10,\n  expansion: 1.05,\n};\n\n// ForceAtlas2 settings - FAST convergence since nodes start near their parents\nconst getFA2Settings = (nodeCount: number) => {\n  const isSmall = nodeCount < 500;\n  const isMedium = 
nodeCount >= 500 && nodeCount < 2000;\n  const isLarge = nodeCount >= 2000 && nodeCount < 10000;\n  \n  return {\n    // Lower gravity allows folders to stay spread out\n    gravity: isSmall ? 0.8 : isMedium ? 0.5 : isLarge ? 0.3 : 0.15,\n    \n    // Higher scaling ratio = more spread out overall\n    scalingRatio: isSmall ? 15 : isMedium ? 30 : isLarge ? 60 : 100,\n    \n    // LOW slowDown = FASTER movement (converges quicker)\n    slowDown: isSmall ? 1 : isMedium ? 2 : isLarge ? 3 : 5,\n    \n    // Barnes-Hut for performance - use it even on smaller graphs\n    barnesHutOptimize: nodeCount > 200,\n    barnesHutTheta: isLarge ? 0.8 : 0.6,  // Higher = faster but less accurate\n    \n    // These help with clustering while keeping spread\n    strongGravityMode: false,\n    outboundAttractionDistribution: true,\n    linLogMode: false,\n    adjustSizes: true,\n    edgeWeightInfluence: 1,\n  };\n};\n\n// Layout duration - let it run longer for better results\n// Web Worker + WebGL means minimal system impact\nconst getLayoutDuration = (nodeCount: number): number => {\n  if (nodeCount > 10000) return 45000;  // 45s for huge graphs\n  if (nodeCount > 5000) return 35000;   // 35s\n  if (nodeCount > 2000) return 30000;   // 30s\n  if (nodeCount > 1000) return 30000;   // 30s\n  if (nodeCount > 500) return 25000;    // 25s\n  return 20000;                         // 20s for small graphs\n};\n\nexport const useSigma = (options: UseSigmaOptions = {}): UseSigmaReturn => {\n  const containerRef = useRef<HTMLDivElement>(null);\n  const sigmaRef = useRef<Sigma | null>(null);\n  const graphRef = useRef<Graph<SigmaNodeAttributes, SigmaEdgeAttributes> | null>(null);\n  const layoutRef = useRef<FA2Layout | null>(null);\n  const selectedNodeRef = useRef<string | null>(null);\n  const highlightedRef = useRef<Set<string>>(new Set());\n  const blastRadiusRef = useRef<Set<string>>(new Set());\n  const animatedNodesRef = useRef<Map<string, NodeAnimation>>(new Map());\n  const 
visibleEdgeTypesRef = useRef<EdgeType[] | null>(null);\n  const layoutTimeoutRef = useRef<ReturnType<typeof setTimeout> | null>(null);\n  const animationFrameRef = useRef<number | null>(null);\n  const [isLayoutRunning, setIsLayoutRunning] = useState(false);\n  const [selectedNode, setSelectedNodeState] = useState<string | null>(null);\n\n  useEffect(() => {\n    highlightedRef.current = options.highlightedNodeIds || new Set();\n    blastRadiusRef.current = options.blastRadiusNodeIds || new Set();\n    animatedNodesRef.current = options.animatedNodes || new Map();\n    visibleEdgeTypesRef.current = options.visibleEdgeTypes || null;\n    sigmaRef.current?.refresh();\n  }, [options.highlightedNodeIds, options.blastRadiusNodeIds, options.animatedNodes, options.visibleEdgeTypes]);\n\n  // Animation loop for node effects\n  useEffect(() => {\n    if (!options.animatedNodes || options.animatedNodes.size === 0) {\n      if (animationFrameRef.current) {\n        cancelAnimationFrame(animationFrameRef.current);\n        animationFrameRef.current = null;\n      }\n      return;\n    }\n\n    const animate = () => {\n      sigmaRef.current?.refresh();\n      animationFrameRef.current = requestAnimationFrame(animate);\n    };\n\n    animate();\n\n    return () => {\n      if (animationFrameRef.current) {\n        cancelAnimationFrame(animationFrameRef.current);\n        animationFrameRef.current = null;\n      }\n    };\n  }, [options.animatedNodes]);\n\n  const setSelectedNode = useCallback((nodeId: string | null) => {\n    selectedNodeRef.current = nodeId;\n    setSelectedNodeState(nodeId);\n    \n    const sigma = sigmaRef.current;\n    if (!sigma) return;\n    \n    // Tiny camera nudge to force edge refresh (workaround for Sigma edge caching)\n    const camera = sigma.getCamera();\n    const currentRatio = camera.ratio;\n    // Imperceptible zoom change that triggers re-render\n    camera.animate(\n      { ratio: currentRatio * 1.0001 },\n      { duration: 50 }\n    );\n  
  \n    sigma.refresh();\n  }, []);\n\n  // Initialize Sigma ONCE\n  useEffect(() => {\n    if (!containerRef.current) return;\n\n    const graph = new Graph<SigmaNodeAttributes, SigmaEdgeAttributes>();\n    graphRef.current = graph;\n\n    const sigma = new Sigma(graph, containerRef.current, {\n      renderLabels: true,\n      labelFont: 'JetBrains Mono, monospace',\n      labelSize: 11,\n      labelWeight: '500',\n      labelColor: { color: '#e4e4ed' },\n      labelRenderedSizeThreshold: 8,\n      labelDensity: 0.1,\n      labelGridCellSize: 70,\n      \n      defaultNodeColor: '#6b7280',\n      defaultEdgeColor: '#2a2a3a',\n      \n      defaultEdgeType: 'curved',\n      edgeProgramClasses: {\n        curved: EdgeCurveProgram,\n      },\n      \n      // Custom hover renderer - dark background instead of white\n      defaultDrawNodeHover: (context, data, settings) => {\n        const label = data.label;\n        if (!label) return;\n        \n        const size = settings.labelSize || 11;\n        const font = settings.labelFont || 'JetBrains Mono, monospace';\n        const weight = settings.labelWeight || '500';\n        \n        context.font = `${weight} ${size}px ${font}`;\n        const textWidth = context.measureText(label).width;\n        \n        const nodeSize = data.size || 8;\n        const x = data.x;\n        const y = data.y - nodeSize - 10;\n        const paddingX = 8;\n        const paddingY = 5;\n        const height = size + paddingY * 2;\n        const width = textWidth + paddingX * 2;\n        const radius = 4;\n        \n        // Dark background pill\n        context.fillStyle = '#12121c';\n        context.beginPath();\n        context.roundRect(x - width / 2, y - height / 2, width, height, radius);\n        context.fill();\n        \n        // Border matching node color\n        context.strokeStyle = data.color || '#6366f1';\n        context.lineWidth = 2;\n        context.stroke();\n        \n        // Label text - light color\n      
  context.fillStyle = '#f5f5f7';\n        context.textAlign = 'center';\n        context.textBaseline = 'middle';\n        context.fillText(label, x, y);\n        \n        // Also draw a subtle glow ring around the node\n        context.beginPath();\n        context.arc(data.x, data.y, nodeSize + 4, 0, Math.PI * 2);\n        context.strokeStyle = data.color || '#6366f1';\n        context.lineWidth = 2;\n        context.globalAlpha = 0.5;\n        context.stroke();\n        context.globalAlpha = 1;\n      },\n      \n      minCameraRatio: 0.002,\n      maxCameraRatio: 50,\n      hideEdgesOnMove: true,\n      zIndex: true,\n      \n      nodeReducer: (node, data) => {\n        const res = { ...data };\n        \n        if (data.hidden) {\n          res.hidden = true;\n          return res;\n        }\n        \n        const currentSelected = selectedNodeRef.current;\n        const highlighted = highlightedRef.current;\n        const blastRadius = blastRadiusRef.current;\n        const animatedNodes = animatedNodesRef.current;\n        const hasHighlights = highlighted.size > 0;\n        const hasBlastRadius = blastRadius.size > 0;\n        const isQueryHighlighted = highlighted.has(node);\n        const isBlastRadiusNode = blastRadius.has(node);\n        \n        // Apply animation effects FIRST (before other highlighting)\n        const animation = animatedNodes.get(node);\n        if (animation) {\n          const now = Date.now();\n          const elapsed = now - animation.startTime;\n          const progress = Math.min(elapsed / animation.duration, 1);\n          \n          // Calculate animation phase (0-1-0-1... oscillation)\n          const phase = (Math.sin(progress * Math.PI * 4) + 1) / 2;\n          \n          if (animation.type === 'pulse') {\n            // Cyan pulse for search results\n            const sizeMultiplier = 1.5 + phase * 0.8;\n            res.size = (data.size || 8) * sizeMultiplier;\n            res.color = phase > 0.5 ? 
'#06b6d4' : brightenColor('#06b6d4', 1.3);\n            res.zIndex = 5;\n            res.highlighted = true;\n          } else if (animation.type === 'ripple') {\n            // Red ripple for blast radius\n            const sizeMultiplier = 1.3 + phase * 1.2;\n            res.size = (data.size || 8) * sizeMultiplier;\n            res.color = phase > 0.5 ? '#ef4444' : '#f87171';\n            res.zIndex = 5;\n            res.highlighted = true;\n          } else if (animation.type === 'glow') {\n            // Purple glow for highlight\n            const sizeMultiplier = 1.4 + phase * 0.6;\n            res.size = (data.size || 8) * sizeMultiplier;\n            res.color = phase > 0.5 ? '#a855f7' : '#c084fc';\n            res.zIndex = 5;\n            res.highlighted = true;\n          }\n          \n          return res;\n        }\n        \n        // Blast radius takes priority (red highlighting)\n        if (hasBlastRadius && !currentSelected) {\n          if (isBlastRadiusNode) {\n            res.color = '#ef4444'; // Red for blast radius\n            res.size = (data.size || 8) * 1.8;\n            res.zIndex = 3;\n            res.highlighted = true;\n          } else if (isQueryHighlighted) {\n            // Regular cyan highlight for non-blast-radius nodes\n            res.color = '#06b6d4';\n            res.size = (data.size || 8) * 1.4;\n            res.zIndex = 2;\n            res.highlighted = true;\n          } else {\n            res.color = dimColor(data.color, 0.15);\n            res.size = (data.size || 8) * 0.4;\n            res.zIndex = 0;\n          }\n          return res;\n        }\n        \n        if (hasHighlights && !currentSelected) {\n          if (isQueryHighlighted) {\n            res.color = '#06b6d4';\n            res.size = (data.size || 8) * 1.6;\n            res.zIndex = 2;\n            res.highlighted = true;\n          } else {\n            res.color = dimColor(data.color, 0.2);\n            res.size = (data.size || 8) * 0.5;\n   
         res.zIndex = 0;\n          }\n          return res;\n        }\n        \n        if (currentSelected) {\n          const graph = graphRef.current;\n          if (graph) {\n            const isSelected = node === currentSelected;\n            const isNeighbor = graph.hasEdge(node, currentSelected) || graph.hasEdge(currentSelected, node);\n            \n            if (isSelected) {\n              res.color = data.color;\n              res.size = (data.size || 8) * 1.8;\n              res.zIndex = 2;\n              res.highlighted = true;\n            } else if (isNeighbor) {\n              res.color = data.color;\n              res.size = (data.size || 8) * 1.3;\n              res.zIndex = 1;\n            } else {\n              res.color = dimColor(data.color, 0.25);\n              res.size = (data.size || 8) * 0.6;\n              res.zIndex = 0;\n            }\n          }\n        }\n        \n        return res;\n      },\n      \n      edgeReducer: (edge, data) => {\n        const res = { ...data };\n        \n        // Check edge type visibility first\n        const visibleTypes = visibleEdgeTypesRef.current;\n        if (visibleTypes && data.relationType) {\n          if (!visibleTypes.includes(data.relationType as EdgeType)) {\n            res.hidden = true;\n            return res;\n          }\n        }\n        \n        const currentSelected = selectedNodeRef.current;\n        const highlighted = highlightedRef.current;\n        const blastRadius = blastRadiusRef.current;\n        const hasHighlights = highlighted.size > 0 || blastRadius.size > 0; // Check BOTH sets\n        \n        if (hasHighlights && !currentSelected) {\n          const graph = graphRef.current;\n          if (graph) {\n            const [source, target] = graph.extremities(edge);\n            \n            // Check if nodes are in EITHER set\n            const isSourceActive = highlighted.has(source) || blastRadius.has(source);\n            const isTargetActive = 
highlighted.has(target) || blastRadius.has(target);\n            \n            const bothHighlighted = isSourceActive && isTargetActive;\n            const oneHighlighted = isSourceActive || isTargetActive;\n            \n            if (bothHighlighted) {\n              // If both nodes are in blast radius, use red edge\n              if (blastRadius.has(source) && blastRadius.has(target)) {\n                res.color = '#ef4444';\n              } else {\n                res.color = '#06b6d4';\n              }\n              res.size = Math.max(2, (data.size || 1) * 3);\n              res.zIndex = 2;\n            } else if (oneHighlighted) {\n              res.color = dimColor('#06b6d4', 0.4);\n              res.size = 1;\n              res.zIndex = 1;\n            } else {\n              res.color = dimColor(data.color, 0.08);\n              res.size = 0.2;\n              res.zIndex = 0;\n            }\n          }\n          return res;\n        }\n        \n        if (currentSelected) {\n          const graph = graphRef.current;\n          if (graph) {\n            const [source, target] = graph.extremities(edge);\n            const isConnected = source === currentSelected || target === currentSelected;\n            \n            if (isConnected) {\n              res.color = brightenColor(data.color, 1.5);\n              res.size = Math.max(3, (data.size || 1) * 4);\n              res.zIndex = 2;\n            } else {\n              res.color = dimColor(data.color, 0.1);\n              res.size = 0.3;\n              res.zIndex = 0;\n            }\n          }\n        }\n        \n        return res;\n      },\n    });\n\n    sigmaRef.current = sigma;\n\n    sigma.on('clickNode', ({ node }) => {\n      setSelectedNode(node);\n      options.onNodeClick?.(node);\n    });\n\n    sigma.on('clickStage', () => {\n      setSelectedNode(null);\n      options.onStageClick?.();\n    });\n\n    sigma.on('enterNode', ({ node }) => {\n      options.onNodeHover?.(node);\n   
   if (containerRef.current) {\n        containerRef.current.style.cursor = 'pointer';\n      }\n    });\n\n    sigma.on('leaveNode', () => {\n      options.onNodeHover?.(null);\n      if (containerRef.current) {\n        containerRef.current.style.cursor = 'grab';\n      }\n    });\n\n    return () => {\n      if (layoutTimeoutRef.current) {\n        clearTimeout(layoutTimeoutRef.current);\n      }\n      layoutRef.current?.kill();\n      sigma.kill();\n      sigmaRef.current = null;\n      graphRef.current = null;\n    };\n  }, []);\n\n  // Run ForceAtlas2 layout\n  const runLayout = useCallback((graph: Graph<SigmaNodeAttributes, SigmaEdgeAttributes>) => {\n    const nodeCount = graph.order;\n    if (nodeCount === 0) return;\n\n    // Kill existing\n    if (layoutRef.current) {\n      layoutRef.current.kill();\n      layoutRef.current = null;\n    }\n    if (layoutTimeoutRef.current) {\n      clearTimeout(layoutTimeoutRef.current);\n      layoutTimeoutRef.current = null;\n    }\n\n    // Get settings\n    const inferredSettings = forceAtlas2.inferSettings(graph);\n    const customSettings = getFA2Settings(nodeCount);\n    const settings = { ...inferredSettings, ...customSettings };\n    \n    const layout = new FA2Layout(graph, { settings });\n    \n    layoutRef.current = layout;\n    layout.start();\n    setIsLayoutRunning(true);\n\n    const duration = getLayoutDuration(nodeCount);\n    \n    layoutTimeoutRef.current = setTimeout(() => {\n      if (layoutRef.current) {\n        layoutRef.current.stop();\n        layoutRef.current = null;\n        \n        // Light noverlap cleanup\n        noverlap.assign(graph, NOVERLAP_SETTINGS);\n        sigmaRef.current?.refresh();\n        \n        setIsLayoutRunning(false);\n      }\n    }, duration);\n  }, []);\n\n  const setGraph = useCallback((newGraph: Graph<SigmaNodeAttributes, SigmaEdgeAttributes>) => {\n    const sigma = sigmaRef.current;\n    if (!sigma) return;\n\n    if (layoutRef.current) {\n      
layoutRef.current.kill();\n      layoutRef.current = null;\n    }\n    if (layoutTimeoutRef.current) {\n      clearTimeout(layoutTimeoutRef.current);\n      layoutTimeoutRef.current = null;\n    }\n\n    graphRef.current = newGraph;\n    sigma.setGraph(newGraph);\n    setSelectedNode(null);\n\n    runLayout(newGraph);\n    sigma.getCamera().animatedReset({ duration: 500 });\n  }, [runLayout, setSelectedNode]);\n\n  const focusNode = useCallback((nodeId: string) => {\n    const sigma = sigmaRef.current;\n    const graph = graphRef.current;\n    if (!sigma || !graph || !graph.hasNode(nodeId)) return;\n\n    // Skip if already focused on this node (prevents double-click issues)\n    const alreadySelected = selectedNodeRef.current === nodeId;\n    \n    // Set selection state directly (without the camera nudge from setSelectedNode)\n    selectedNodeRef.current = nodeId;\n    setSelectedNodeState(nodeId);\n    \n    // Only animate camera if selecting a new node\n    if (!alreadySelected) {\n      const nodeAttrs = graph.getNodeAttributes(nodeId);\n      sigma.getCamera().animate(\n        { x: nodeAttrs.x, y: nodeAttrs.y, ratio: 0.15 },\n        { duration: 400 }\n      );\n    }\n    \n    sigma.refresh();\n  }, []);\n\n  const zoomIn = useCallback(() => {\n    sigmaRef.current?.getCamera().animatedZoom({ duration: 200 });\n  }, []);\n\n  const zoomOut = useCallback(() => {\n    sigmaRef.current?.getCamera().animatedUnzoom({ duration: 200 });\n  }, []);\n\n  const resetZoom = useCallback(() => {\n    sigmaRef.current?.getCamera().animatedReset({ duration: 300 });\n    setSelectedNode(null);\n  }, [setSelectedNode]);\n\n  const startLayout = useCallback(() => {\n    const graph = graphRef.current;\n    if (!graph || graph.order === 0) return;\n    runLayout(graph);\n  }, [runLayout]);\n\n  const stopLayout = useCallback(() => {\n    if (layoutTimeoutRef.current) {\n      clearTimeout(layoutTimeoutRef.current);\n      layoutTimeoutRef.current = null;\n    }\n    if 
(layoutRef.current) {\n      layoutRef.current.stop();\n      layoutRef.current = null;\n      \n      const graph = graphRef.current;\n      if (graph) {\n        noverlap.assign(graph, NOVERLAP_SETTINGS);\n        sigmaRef.current?.refresh();\n      }\n      \n      setIsLayoutRunning(false);\n    }\n  }, []);\n\n  const refreshHighlights = useCallback(() => {\n    sigmaRef.current?.refresh();\n  }, []);\n\n  return {\n    containerRef,\n    sigmaRef,\n    setGraph,\n    zoomIn,\n    zoomOut,\n    resetZoom,\n    focusNode,\n    isLayoutRunning,\n    startLayout,\n    stopLayout,\n    selectedNode,\n    setSelectedNode,\n    refreshHighlights,\n  };\n};\n"
  },
  {
    "path": "gitnexus-web/src/index.css",
    "content": "@import \"tailwindcss\";\n\n/* ═══════════════════════════════════════════════════════════════\n   TAILWIND V4 THEME CONFIGURATION\n═══════════════════════════════════════════════════════════════ */\n@theme {\n  /* Backgrounds */\n  --color-void: #06060a;\n  --color-deep: #0a0a10;\n  --color-surface: #101018;\n  --color-elevated: #16161f;\n  --color-hover: #1c1c28;\n\n  /* Borders */\n  --color-border-subtle: #1e1e2a;\n  --color-border-default: #2a2a3a;\n\n  /* Text */\n  --color-text-primary: #e4e4ed;\n  --color-text-secondary: #8888a0;\n  --color-text-muted: #5a5a70;\n\n  /* Accent */\n  --color-accent: #7c3aed;\n  --color-accent-dim: #5b21b6;\n\n  /* Node colors */\n  --color-node-file: #3b82f6;\n  --color-node-folder: #6366f1;\n  --color-node-class: #f59e0b;\n  --color-node-function: #10b981;\n  --color-node-interface: #ec4899;\n  --color-node-import: #6b7280;\n  --color-node-method: #14b8a6;\n\n  /* Fonts */\n  --font-sans: 'Outfit', system-ui, sans-serif;\n  --font-mono: 'JetBrains Mono', 'Fira Code', monospace;\n\n  /* Animations */\n  --animate-breathe: breathe 3s ease-in-out infinite;\n  --animate-pulse-glow: pulse-glow 2s ease-in-out infinite;\n  --animate-slide-in: slide-in 0.3s cubic-bezier(0.4, 0, 0.2, 1);\n  --animate-slide-up: slide-up 0.3s cubic-bezier(0.4, 0, 0.2, 1);\n  --animate-fade-in: fade-in 0.3s ease-out;\n\n  /* Box shadows */\n  --shadow-glow: 0 0 20px rgba(124, 58, 237, 0.4);\n  --shadow-glow-soft: 0 0 40px rgba(124, 58, 237, 0.15);\n}\n\n/* Keyframes */\n@keyframes breathe {\n\n  0%,\n  100% {\n    border-color: #2a2a3a;\n    box-shadow: 0 0 0 0 rgba(124, 58, 237, 0.3);\n  }\n\n  50% {\n    border-color: #7c3aed;\n    box-shadow: 0 0 40px 10px rgba(124, 58, 237, 0.3);\n  }\n}\n\n@keyframes pulse-glow {\n\n  0%,\n  100% {\n    transform: scale(1);\n    box-shadow: 0 0 40px rgba(124, 58, 237, 0.4);\n  }\n\n  50% {\n    transform: scale(1.1);\n    box-shadow: 0 0 80px rgba(124, 58, 237, 0.6);\n  }\n}\n\n@keyframes slide-in 
{\n  from {\n    opacity: 0;\n    transform: translateX(20px);\n  }\n\n  to {\n    opacity: 1;\n    transform: translateX(0);\n  }\n}\n\n@keyframes slide-up {\n  from {\n    opacity: 0;\n    transform: translateY(20px);\n  }\n\n  to {\n    opacity: 1;\n    transform: translateY(0);\n  }\n}\n\n@keyframes fade-in {\n  from {\n    opacity: 0;\n  }\n\n  to {\n    opacity: 1;\n  }\n}\n\n/* ═══════════════════════════════════════════════════════════════\n   BASE STYLES\n═══════════════════════════════════════════════════════════════ */\n* {\n  box-sizing: border-box;\n}\n\nhtml,\nbody,\n#root {\n  height: 100%;\n}\n\nbody {\n  background-color: var(--color-void);\n  color: var(--color-text-primary);\n  font-family: var(--font-sans);\n  -webkit-font-smoothing: antialiased;\n  -moz-osx-font-smoothing: grayscale;\n}\n\n/* ═══════════════════════════════════════════════════════════════\n   CUSTOM SCROLLBAR\n═══════════════════════════════════════════════════════════════ */\n.scrollbar-thin {\n  scrollbar-width: thin;\n  scrollbar-color: #2a2a3a #0a0a10;\n}\n\n.scrollbar-thin::-webkit-scrollbar {\n  width: 8px;\n  height: 8px;\n}\n\n.scrollbar-thin::-webkit-scrollbar-track {\n  background: var(--color-deep);\n}\n\n.scrollbar-thin::-webkit-scrollbar-thumb {\n  background: var(--color-border-default);\n  border-radius: 4px;\n}\n\n.scrollbar-thin::-webkit-scrollbar-thumb:hover {\n  background: var(--color-text-muted);\n}\n\n/* ═══════════════════════════════════════════════════════════════\n   CHAT MESSAGE PROSE STYLES\n═══════════════════════════════════════════════════════════════ */\n.chat-prose {\n  font-size: 14px;\n  line-height: 1.75;\n  color: var(--color-text-primary);\n}\n\n/* Spacing between all block elements */\n.chat-prose p,\n.chat-prose ul,\n.chat-prose ol,\n.chat-prose pre,\n.chat-prose blockquote,\n.chat-prose table {\n  margin-top: 0;\n  margin-bottom: 1em;\n}\n\n.chat-prose p:last-child,\n.chat-prose ul:last-child,\n.chat-prose ol:last-child,\n.chat-prose 
pre:last-child,\n.chat-prose blockquote:last-child {\n  margin-bottom: 0;\n}\n\n.chat-prose strong {\n  font-weight: 600;\n  color: #fff;\n}\n\n.chat-prose em {\n  font-style: italic;\n  color: var(--color-text-secondary);\n}\n\n/* Headers */\n.chat-prose h1,\n.chat-prose h2,\n.chat-prose h3,\n.chat-prose h4 {\n  font-weight: 600;\n  color: #fff;\n  margin-top: 1.5em;\n  margin-bottom: 0.5em;\n  line-height: 1.3;\n}\n\n.chat-prose h1 {\n  font-size: 1.25em;\n}\n\n.chat-prose h2 {\n  font-size: 1.125em;\n}\n\n.chat-prose h3 {\n  font-size: 1em;\n}\n\n.chat-prose h4 {\n  font-size: 0.875em;\n  text-transform: uppercase;\n  letter-spacing: 0.05em;\n  color: var(--color-text-secondary);\n}\n\n.chat-prose>h1:first-child,\n.chat-prose>h2:first-child,\n.chat-prose>h3:first-child,\n.chat-prose>h4:first-child {\n  margin-top: 0;\n}\n\n/* Lists */\n.chat-prose ul,\n.chat-prose ol {\n  margin: 0.75em 0;\n  padding-left: 1.5em;\n}\n\n.chat-prose li {\n  margin: 0.375em 0;\n  padding-left: 0.25em;\n}\n\n.chat-prose li::marker {\n  color: var(--color-accent);\n}\n\n.chat-prose ul ul,\n.chat-prose ol ol,\n.chat-prose ul ol,\n.chat-prose ol ul {\n  margin: 0.25em 0;\n}\n\n/* Inline code - VS Code style */\n.chat-prose code:not([class*=\"language-\"]) {\n  padding: 0.2em 0.5em;\n  background: rgba(110, 118, 129, 0.2);\n  border-radius: 6px;\n  font-family: var(--font-mono);\n  font-size: 0.875em;\n  color: #e6b450 !important;\n  border: 1px solid rgba(110, 118, 129, 0.3);\n  font-weight: 500;\n}\n\n/* Ensure inline code keeps its color inside links and other elements */\n.chat-prose a code:not([class*=\"language-\"]),\n.chat-prose strong code:not([class*=\"language-\"]),\n.chat-prose em code:not([class*=\"language-\"]) {\n  color: #e6b450 !important;\n}\n\n/* Code blocks */\n.chat-prose pre {\n  margin: 1em 0;\n  border-radius: 8px;\n  overflow: hidden;\n}\n\n/* Blockquotes */\n.chat-prose blockquote {\n  margin: 1em 0;\n  padding: 0.5em 1em;\n  border-left: 3px solid 
var(--color-accent);\n  background: var(--color-surface);\n  border-radius: 0 6px 6px 0;\n  color: var(--color-text-secondary);\n}\n\n.chat-prose blockquote p {\n  margin: 0;\n}\n\n/* Horizontal rules */\n.chat-prose hr {\n  margin: 1.5em 0;\n  border: none;\n  border-top: 1px solid var(--color-border-subtle);\n}\n\n/* Links - but NOT grounding/citation buttons (those have their own Tailwind styles) */\n.chat-prose a:not(.code-ref-btn) {\n  color: var(--color-accent);\n  text-decoration: underline;\n  text-underline-offset: 2px;\n}\n\n.chat-prose a:not(.code-ref-btn):hover {\n  color: #a78bfa;\n}\n\n/* Tables */\n.chat-prose table {\n  width: 100%;\n  margin: 1em 0;\n  border-collapse: collapse;\n  font-size: 0.875em;\n}\n\n.chat-prose th,\n.chat-prose td {\n  padding: 0.5em 0.75em;\n  border: 1px solid var(--color-border-subtle);\n  text-align: left;\n}\n\n.chat-prose th {\n  background: var(--color-surface);\n  font-weight: 600;\n  color: var(--color-text-secondary);\n}\n\n.chat-prose tr:nth-child(even) td {\n  background: var(--color-surface);\n}\n\n/* ═══════════════════════════════════════════════════════════════\n   SIGMA.JS CONTAINER\n═══════════════════════════════════════════════════════════════ */\n.sigma-container {\n  width: 100%;\n  height: 100%;\n}\n\n.sigma-container canvas {\n  outline: none;\n}"
  },
  {
    "path": "gitnexus-web/src/lib/constants.ts",
    "content": "import { NodeLabel } from '../core/graph/types';\n\n// Node colors by type - slightly muted for less visual noise\nexport const NODE_COLORS: Record<NodeLabel, string> = {\n  Project: '#a855f7',    // Purple - prominent\n  Package: '#8b5cf6',    // Violet\n  Module: '#7c3aed',     // Violet darker\n  Folder: '#6366f1',     // Indigo\n  File: '#3b82f6',       // Blue\n  Class: '#f59e0b',      // Amber - stands out\n  Function: '#10b981',   // Emerald\n  Method: '#14b8a6',     // Teal\n  Variable: '#64748b',   // Slate - muted (less important)\n  Interface: '#ec4899',  // Pink\n  Enum: '#f97316',       // Orange\n  Decorator: '#eab308',  // Yellow\n  Import: '#475569',     // Slate darker - very muted\n  Type: '#a78bfa',       // Violet light\n  CodeElement: '#64748b', // Slate - muted\n  Community: '#818cf8',  // Indigo light - cluster indicator\n  Process: '#f43f5e',    // Rose - execution flow indicator\n};\n\n// Node sizes by type - clear visual hierarchy with dramatic size differences\n// Structural nodes are MUCH larger to make hierarchy obvious\nexport const NODE_SIZES: Record<NodeLabel, number> = {\n  Project: 20,     // Largest - root of everything\n  Package: 16,     // Major structural element\n  Module: 13,      // Important container\n  Folder: 10,      // Structural - clearly bigger than files\n  File: 6,         // Common element - smaller than folders\n  Class: 8,        // Important code structure\n  Function: 4,     // Common code element - small\n  Method: 3,       // Smaller than function\n  Variable: 2,     // Tiny - leaf node\n  Interface: 7,    // Important type definition\n  Enum: 5,         // Type definition\n  Decorator: 2,    // Tiny modifier\n  Import: 1.5,     // Very small - usually hidden anyway\n  Type: 3,         // Type alias - small\n  CodeElement: 2,  // Generic small\n  Community: 0,    // Hidden by default - metadata node\n  Process: 0,      // Hidden by default - metadata node\n};\n\n// Community color palette 
for cluster-based coloring\nexport const COMMUNITY_COLORS = [\n  '#ef4444', // red\n  '#f97316', // orange\n  '#eab308', // yellow\n  '#22c55e', // green\n  '#06b6d4', // cyan\n  '#3b82f6', // blue\n  '#8b5cf6', // violet\n  '#d946ef', // fuchsia\n  '#ec4899', // pink\n  '#f43f5e', // rose\n  '#14b8a6', // teal\n  '#84cc16', // lime\n];\n\nexport const getCommunityColor = (communityIndex: number): string => {\n  return COMMUNITY_COLORS[communityIndex % COMMUNITY_COLORS.length];\n};\n\n// Labels to show by default (hide imports and variables by default as they clutter)\nexport const DEFAULT_VISIBLE_LABELS: NodeLabel[] = [\n  'Project',\n  'Package',\n  'Module',\n  'Folder',\n  'File',\n  'Class',\n  'Function',\n  'Method',\n  'Interface',\n  'Enum',\n  'Type',\n];\n\n// All filterable labels\nexport const FILTERABLE_LABELS: NodeLabel[] = [\n  'Folder',\n  'File',\n  'Class',\n  'Function',\n  'Method',\n  'Variable',\n  'Interface',\n  'Import',\n];\n\n// Edge/Relation types\nexport type EdgeType = 'CONTAINS' | 'DEFINES' | 'IMPORTS' | 'CALLS' | 'EXTENDS' | 'IMPLEMENTS';\n\nexport const ALL_EDGE_TYPES: EdgeType[] = [\n  'CONTAINS',\n  'DEFINES',\n  'IMPORTS',\n  'CALLS',\n  'EXTENDS',\n  'IMPLEMENTS',\n];\n\n// Default visible edges (all relationship types are shown by default)\nexport const DEFAULT_VISIBLE_EDGES: EdgeType[] = [\n  'CONTAINS',\n  'DEFINES',\n  'IMPORTS',\n  'EXTENDS',\n  'IMPLEMENTS',\n  'CALLS',\n];\n\n// Edge display info for UI\nexport const EDGE_INFO: Record<EdgeType, { color: string; label: string }> = {\n  CONTAINS: { color: '#2d5a3d', label: 'Contains' },\n  DEFINES: { color: '#0e7490', label: 'Defines' },\n  IMPORTS: { color: '#1d4ed8', label: 'Imports' },\n  CALLS: { color: '#7c3aed', label: 'Calls' },\n  EXTENDS: { color: '#c2410c', label: 'Extends' },\n  IMPLEMENTS: { color: '#be185d', label: 'Implements' },\n};\n"
  },
  {
    "path": "gitnexus-web/src/lib/graph-adapter.ts",
    "content": "import Graph from 'graphology';\nimport { KnowledgeGraph, NodeLabel } from '../core/graph/types';\nimport { NODE_COLORS, NODE_SIZES, getCommunityColor } from './constants';\n\nexport interface SigmaNodeAttributes {\n  x: number;\n  y: number;\n  size: number;\n  color: string;\n  label: string;\n  nodeType: NodeLabel;\n  filePath: string;\n  startLine?: number;\n  endLine?: number;\n  hidden?: boolean;\n  zIndex?: number;\n  highlighted?: boolean;\n  mass?: number; // ForceAtlas2 mass - higher = more repulsion\n  community?: number; // Community index from Leiden algorithm\n  communityColor?: string; // Color assigned by community\n}\n\nexport interface SigmaEdgeAttributes {\n  size: number;\n  color: string;\n  relationType: string;\n  type?: string;\n  curvature?: number;\n  zIndex?: number;\n}\n\n/**\n * Get node size scaled for graph density\n * Uses lower minimums to maintain hierarchy visibility even in huge graphs\n */\nconst getScaledNodeSize = (baseSize: number, nodeCount: number): number => {\n  // Scale factor decreases as graph gets larger\n  // But a minimum is used that preserves relative differences\n  if (nodeCount > 50000) return Math.max(1, baseSize * 0.4);\n  if (nodeCount > 20000) return Math.max(1.5, baseSize * 0.5);\n  if (nodeCount > 5000) return Math.max(2, baseSize * 0.65);\n  if (nodeCount > 1000) return Math.max(2.5, baseSize * 0.8);\n  return baseSize;\n};\n\n/**\n * Get mass for node type - higher mass = more repulsion in ForceAtlas2\n * Folders get MUCH higher mass so they spread out and pull their files with them\n */\nconst getNodeMass = (nodeType: NodeLabel, nodeCount: number): number => {\n  // Scale mass based on graph size\n  const baseMassMultiplier = nodeCount > 5000 ? 2 : nodeCount > 1000 ? 
1.5 : 1;\n  \n  switch (nodeType) {\n    case 'Project':\n      return 50 * baseMassMultiplier;  // Heaviest - anchors everything\n    case 'Package':\n      return 30 * baseMassMultiplier;  // Very heavy\n    case 'Module':\n      return 20 * baseMassMultiplier;  // Heavy\n    case 'Folder':\n      return 15 * baseMassMultiplier;  // Heavy - blasts folders apart!\n    case 'File':\n      return 3 * baseMassMultiplier;   // Medium - follows folders\n    case 'Class':\n    case 'Interface':\n      return 5 * baseMassMultiplier;   // Medium-heavy\n    case 'Function':\n    case 'Method':\n      return 2 * baseMassMultiplier;   // Light\n    default:\n      return 1;  // Default mass\n  }\n};\n\n/**\n * Converts the KnowledgeGraph to a graphology Graph for Sigma.js\n * Folders are positioned in a wide spread, children positioned NEAR their parents\n * \n * @param knowledgeGraph - The knowledge graph to convert\n * @param communityMemberships - Optional map of nodeId -> communityIndex for community coloring\n */\nexport const knowledgeGraphToGraphology = (\n  knowledgeGraph: KnowledgeGraph,\n  communityMemberships?: Map<string, number>\n): Graph<SigmaNodeAttributes, SigmaEdgeAttributes> => {\n  const graph = new Graph<SigmaNodeAttributes, SigmaEdgeAttributes>();\n  const nodeCount = knowledgeGraph.nodes.length;\n  \n  // Build parent-child map from hierarchy relationships\n  // CONTAINS: Folder -> File\n  // DEFINES: File -> Function/Class/Interface/Method\n  // IMPORTS: File -> Import\n  // parent -> children\n  const parentToChildren = new Map<string, string[]>();\n  // child -> parent\n  const childToParent = new Map<string, string>();\n  \n  const hierarchyRelations = new Set(['CONTAINS', 'DEFINES', 'IMPORTS']);\n  \n  knowledgeGraph.relationships.forEach(rel => {\n    // These relationships represent parent-child hierarchy for positioning\n    if (hierarchyRelations.has(rel.type)) {\n      // source CONTAINS/DEFINES/IMPORTS target, so source is parent\n      if 
(!parentToChildren.has(rel.sourceId)) {\n        parentToChildren.set(rel.sourceId, []);\n      }\n      parentToChildren.get(rel.sourceId)!.push(rel.targetId);\n      childToParent.set(rel.targetId, rel.sourceId);\n    }\n  });\n  \n  // Create node lookup\n  const nodeMap = new Map(knowledgeGraph.nodes.map(n => [n.id, n]));\n  \n  // Separate structural nodes (folders, packages) from content nodes\n  const structuralTypes = new Set(['Project', 'Package', 'Module', 'Folder']);\n  const structuralNodes = knowledgeGraph.nodes.filter(n => structuralTypes.has(n.label));\n  \n  // Much wider spread for structural nodes - this is the key!\n  const structuralSpread = Math.sqrt(nodeCount) * 40;\n  // Small jitter for children around their parent\n  const childJitter = Math.sqrt(nodeCount) * 3;\n\n  // === CLUSTER-BASED POSITIONING ===\n  // Calculate cluster centers - each cluster gets a region of the graph\n  const clusterCenters = new Map<number, { x: number; y: number }>();\n  if (communityMemberships && communityMemberships.size > 0) {\n    // Find unique community IDs\n    const communities = new Set(communityMemberships.values());\n    const communityCount = communities.size;\n    const clusterSpread = structuralSpread * 0.8; // Clusters spread across 80% of graph\n    \n    // Position cluster centers using golden angle for even distribution\n    const goldenAngle = Math.PI * (3 - Math.sqrt(5));\n    let idx = 0;\n    communities.forEach(communityId => {\n      const angle = idx * goldenAngle;\n      const radius = clusterSpread * Math.sqrt((idx + 1) / communityCount);\n      clusterCenters.set(communityId, {\n        x: radius * Math.cos(angle),\n        y: radius * Math.sin(angle),\n      });\n      idx++;\n    });\n  }\n  // Jitter within cluster (tighter than childJitter)\n  const clusterJitter = Math.sqrt(nodeCount) * 1.5;\n\n  // Store positions for parent lookup\n  const nodePositions = new Map<string, { x: number; y: number }>();\n\n  // Position structural 
nodes (folders, etc.) in a wide radial pattern FIRST\n  structuralNodes.forEach((node, index) => {\n    // Use golden angle for even distribution\n    const goldenAngle = Math.PI * (3 - Math.sqrt(5));\n    const angle = index * goldenAngle;\n    const radius = structuralSpread * Math.sqrt((index + 1) / Math.max(structuralNodes.length, 1));\n    \n    // Add some randomness to prevent perfect patterns\n    const jitter = structuralSpread * 0.15;\n    const x = radius * Math.cos(angle) + (Math.random() - 0.5) * jitter;\n    const y = radius * Math.sin(angle) + (Math.random() - 0.5) * jitter;\n    \n    nodePositions.set(node.id, { x, y });\n    \n    const baseSize = NODE_SIZES[node.label] || 8;\n    const scaledSize = getScaledNodeSize(baseSize, nodeCount);\n    \n    // Structural nodes keep their type-based color\n    graph.addNode(node.id, {\n      x,\n      y,\n      size: scaledSize,\n      color: NODE_COLORS[node.label] || '#9ca3af',\n      label: node.properties.name,\n      nodeType: node.label,\n      filePath: node.properties.filePath,\n      startLine: node.properties.startLine,\n      endLine: node.properties.endLine,\n      hidden: false,\n      mass: getNodeMass(node.label, nodeCount),\n    });\n  });\n\n  // Process remaining nodes in HIERARCHY ORDER (parents before children)\n  // Use BFS starting from structural nodes to ensure parents are positioned first\n  const addNodeWithPosition = (nodeId: string) => {\n    if (graph.hasNode(nodeId)) return;\n    \n    const node = nodeMap.get(nodeId);\n    if (!node) return;\n    \n    let x: number, y: number;\n    \n    // Check if this is a symbol node with a community assignment\n    const communityIndex = communityMemberships?.get(nodeId);\n    const symbolTypes = new Set(['Function', 'Class', 'Method', 'Interface']);\n    const clusterCenter = communityIndex !== undefined ? 
clusterCenters.get(communityIndex) : null;\n    \n    if (clusterCenter && symbolTypes.has(node.label)) {\n      // CLUSTER-BASED POSITIONING: Position near cluster center with tight jitter\n      x = clusterCenter.x + (Math.random() - 0.5) * clusterJitter;\n      y = clusterCenter.y + (Math.random() - 0.5) * clusterJitter;\n    } else {\n      // HIERARCHY-BASED POSITIONING: Position near parent\n      const parentId = childToParent.get(nodeId);\n      const parentPos = parentId ? nodePositions.get(parentId) : null;\n      \n      if (parentPos) {\n        x = parentPos.x + (Math.random() - 0.5) * childJitter;\n        y = parentPos.y + (Math.random() - 0.5) * childJitter;\n      } else {\n        // No parent found - position randomly but still spread out\n        x = (Math.random() - 0.5) * structuralSpread * 0.5;\n        y = (Math.random() - 0.5) * structuralSpread * 0.5;\n      }\n    }\n    \n    nodePositions.set(nodeId, { x, y });\n    \n    const baseSize = NODE_SIZES[node.label] || 8;\n    const scaledSize = getScaledNodeSize(baseSize, nodeCount);\n    \n    // Check if this node has a community assignment (reuse communityIndex from above)\n    const hasCommunity = communityIndex !== undefined;\n    \n    // Symbol nodes get colored by community if available\n    const usesCommunityColor = hasCommunity && symbolTypes.has(node.label);\n    const nodeColor = usesCommunityColor \n      ? getCommunityColor(communityIndex!)\n      : NODE_COLORS[node.label] || '#9ca3af';\n    \n    graph.addNode(nodeId, {\n      x,\n      y,\n      size: scaledSize,\n      color: nodeColor,\n      label: node.properties.name,\n      nodeType: node.label,\n      filePath: node.properties.filePath,\n      startLine: node.properties.startLine,\n      endLine: node.properties.endLine,\n      hidden: false,\n      mass: getNodeMass(node.label, nodeCount),\n      community: communityIndex,\n      communityColor: hasCommunity ? getCommunityColor(communityIndex!) 
: undefined,\n    });\n  };\n  \n  // BFS from structural nodes - this ensures parent is ALWAYS positioned before child\n  const queue: string[] = [...structuralNodes.map(n => n.id)];\n  const visited = new Set<string>(queue);\n  \n  while (queue.length > 0) {\n    const currentId = queue.shift()!;\n    \n    // Get children of current node and add them\n    const children = parentToChildren.get(currentId) || [];\n    for (const childId of children) {\n      if (!visited.has(childId)) {\n        visited.add(childId);\n        addNodeWithPosition(childId);\n        queue.push(childId); // Add to queue so its children are processed too\n      }\n    }\n  }\n  \n  // Add any orphan nodes that weren't reached (no parent relationship)\n  knowledgeGraph.nodes.forEach((node) => {\n    if (!graph.hasNode(node.id)) {\n      addNodeWithPosition(node.id);\n    }\n  });\n\n  // Add edges with distinct colors per relationship type\n  const edgeBaseSize = nodeCount > 20000 ? 0.4 : nodeCount > 5000 ? 0.6 : 1.0;\n  \n  // Edge styles - each relationship type has a DISTINCT color for clarity\n  // Using varied hues so relationships are easily distinguishable\n  const EDGE_STYLES: Record<string, { color: string; sizeMultiplier: number }> = {\n    // STRUCTURAL - Greens (folder/file hierarchy)\n    CONTAINS: { color: '#2d5a3d', sizeMultiplier: 0.4 },    // Forest green - folder contains\n    \n    // DEFINITIONS - Cyan/Teal (code definitions)\n    DEFINES: { color: '#0e7490', sizeMultiplier: 0.5 },     // Cyan - file defines function/class\n    \n    // DEPENDENCIES - Blue (imports between files)  \n    IMPORTS: { color: '#1d4ed8', sizeMultiplier: 0.6 },     // Blue - file imports file\n    \n    // FUNCTION FLOW - Purple (call graph)\n    CALLS: { color: '#7c3aed', sizeMultiplier: 0.8 },       // Violet - function calls\n    \n    // TYPE RELATIONSHIPS - Warm colors (OOP)\n    EXTENDS: { color: '#c2410c', sizeMultiplier: 1.0 },     // Orange - extension\n    IMPLEMENTS: { color: 
'#be185d', sizeMultiplier: 0.9 },  // Pink - interface implementation\n  };\n  \n  knowledgeGraph.relationships.forEach((rel) => {\n    if (graph.hasNode(rel.sourceId) && graph.hasNode(rel.targetId)) {\n      if (!graph.hasEdge(rel.sourceId, rel.targetId)) {\n        const style = EDGE_STYLES[rel.type] || { color: '#4a4a5a', sizeMultiplier: 0.5 };\n        const curvature = 0.12 + (Math.random() * 0.08);\n        \n        graph.addEdge(rel.sourceId, rel.targetId, {\n          size: edgeBaseSize * style.sizeMultiplier,\n          color: style.color,\n          relationType: rel.type,\n          type: 'curved',\n          curvature: curvature,\n        });\n      }\n    }\n  });\n\n  return graph;\n};\n\n/**\n * Filter nodes by visibility - sets hidden attribute\n */\nexport const filterGraphByLabels = (\n  graph: Graph<SigmaNodeAttributes, SigmaEdgeAttributes>,\n  visibleLabels: NodeLabel[]\n): void => {\n  graph.forEachNode((nodeId, attributes) => {\n    const isVisible = visibleLabels.includes(attributes.nodeType);\n    graph.setNodeAttribute(nodeId, 'hidden', !isVisible);\n  });\n};\n\n/**\n * Get all nodes within N hops of a starting node\n */\nexport const getNodesWithinHops = (\n  graph: Graph<SigmaNodeAttributes, SigmaEdgeAttributes>,\n  startNodeId: string,\n  maxHops: number\n): Set<string> => {\n  const visited = new Set<string>();\n  const queue: { nodeId: string; depth: number }[] = [{ nodeId: startNodeId, depth: 0 }];\n  \n  while (queue.length > 0) {\n    const { nodeId, depth } = queue.shift()!;\n    \n    if (visited.has(nodeId)) continue;\n    visited.add(nodeId);\n    \n    if (depth < maxHops) {\n      graph.forEachNeighbor(nodeId, (neighborId) => {\n        if (!visited.has(neighborId)) {\n          queue.push({ nodeId: neighborId, depth: depth + 1 });\n        }\n      });\n    }\n  }\n  \n  return visited;\n};\n\n/**\n * Filter nodes by depth from selected node\n */\nexport const filterGraphByDepth = (\n  graph: Graph<SigmaNodeAttributes, 
SigmaEdgeAttributes>,\n  selectedNodeId: string | null,\n  maxHops: number | null,\n  visibleLabels: NodeLabel[]\n): void => {\n  if (maxHops === null) {\n    filterGraphByLabels(graph, visibleLabels);\n    return;\n  }\n  \n  if (selectedNodeId === null || !graph.hasNode(selectedNodeId)) {\n    filterGraphByLabels(graph, visibleLabels);\n    return;\n  }\n  \n  const nodesInRange = getNodesWithinHops(graph, selectedNodeId, maxHops);\n  \n  graph.forEachNode((nodeId, attributes) => {\n    const isLabelVisible = visibleLabels.includes(attributes.nodeType);\n    const isInRange = nodesInRange.has(nodeId);\n    graph.setNodeAttribute(nodeId, 'hidden', !isLabelVisible || !isInRange);\n  });\n};\n"
  },
  {
    "path": "gitnexus-web/src/lib/mermaid-generator.ts",
    "content": "/**\n * Mermaid Diagram Generator for Processes\n * \n * Generates Mermaid flowchart syntax from Process step data.\n * Designed to show branching/merging when CALLS edges exist between steps.\n */\n\nexport interface ProcessStep {\n  id: string;\n  name: string;\n  filePath?: string;\n  stepNumber: number;\n  cluster?: string;\n}\n\nexport interface ProcessEdge {\n  from: string;\n  to: string;\n  type: string;\n}\n\nexport interface ProcessData {\n  id: string;\n  label: string;\n  processType: 'intra_community' | 'cross_community';\n  steps: ProcessStep[];\n  edges?: ProcessEdge[];  // CALLS edges between steps for branching\n  clusters?: string[];\n}\n\n/**\n * Generate Mermaid flowchart from process data\n */\nexport function generateProcessMermaid(process: ProcessData): string {\n  const { steps, edges, clusters } = process;\n  \n  if (!steps || steps.length === 0) {\n    return 'graph TD\\n  A[No steps found]';\n  }\n\n  const lines: string[] = ['graph TD'];\n\n  // Add class definitions for styling (rounded corners + colors)\n  lines.push('  %% Styles');\n  lines.push('  classDef default fill:#1e293b,stroke:#94a3b8,stroke-width:3px,color:#f8fafc,rx:10,ry:10,font-size:24px;');\n  lines.push('  classDef entry fill:#1e293b,stroke:#34d399,stroke-width:5px,color:#f8fafc,rx:10,ry:10,font-size:24px;');\n  lines.push('  classDef step fill:#1e293b,stroke:#22d3ee,stroke-width:3px,color:#f8fafc,rx:10,ry:10,font-size:24px;');\n  lines.push('  classDef terminal fill:#1e293b,stroke:#f472b6,stroke-width:5px,color:#f8fafc,rx:10,ry:10,font-size:24px;');\n  lines.push('  classDef cluster fill:#0f172a,stroke:#334155,stroke-width:3px,color:#94a3b8,rx:4,ry:4,font-size:20px;');\n\n  // Track clusters for subgraph grouping\n  const clusterGroups = new Map<string, ProcessStep[]>();\n  const noCluster: ProcessStep[] = [];\n  \n  for (const step of steps) {\n    if (step.cluster) {\n      const group = clusterGroups.get(step.cluster) || [];\n      group.push(step);\n 
     clusterGroups.set(step.cluster, group);\n    } else {\n      noCluster.push(step);\n    }\n  }\n\n  // Generate node IDs (sanitized) - use actual ID to avoid collisions when combining processes\n  const nodeId = (step: ProcessStep) => {\n    // Sanitize the actual ID to be Mermaid-safe\n    return step.id.replace(/[^a-zA-Z0-9_]/g, '_');\n  };\n  const sanitizeLabel = (text: string) => text.replace(/[\"\\[\\]<>{}()]/g, '').substring(0, 30);\n  const getFileName = (path?: string) => path?.split('/').pop() || '';\n\n  // Determine node class (entry, terminal, or normal step)\n  const sortedStepsRef = [...steps].sort((a, b) => a.stepNumber - b.stepNumber);\n  const entryId = sortedStepsRef[0]?.id;\n  const terminalId = sortedStepsRef[sortedStepsRef.length - 1]?.id;\n\n  const getNodeClass = (step: ProcessStep) => {\n    if (step.id === entryId) return 'entry';\n    if (step.id === terminalId) return 'terminal';\n    return 'step';\n  };\n\n  // If we have cluster groupings and cross-community, use subgraphs\n  const useClusters = process.processType === 'cross_community' && clusterGroups.size > 1;\n\n  if (useClusters) {\n    // Generate subgraphs for each cluster\n    let clusterIndex = 0;\n    \n    for (const [clusterName, clusterSteps] of clusterGroups) {\n      // Mermaid subgraph IDs must not contain spaces - use a synthetic ID and keep the readable name as the quoted title\n      lines.push(`  subgraph cluster_${clusterIndex}[\"${sanitizeLabel(clusterName)}\"]:::cluster`);\n      \n      for (const step of clusterSteps) {\n        const id = nodeId(step);\n        const label = `${step.stepNumber}. ${sanitizeLabel(step.name)}`;\n        const file = getFileName(step.filePath);\n        const className = getNodeClass(step);\n        lines.push(`    ${id}[\"${label}<br/><small>${file}</small>\"]:::${className}`);\n      }\n      lines.push('  end');\n      clusterIndex++;\n    }\n    \n    // Add unclustered steps\n    for (const step of noCluster) {\n      const id = nodeId(step);\n      const label = `${step.stepNumber}. 
${sanitizeLabel(step.name)}`;\n      const file = getFileName(step.filePath);\n      const className = getNodeClass(step);\n      lines.push(`  ${id}[\"${label}<br/><small>${file}</small>\"]:::${className}`);\n    }\n  } else {\n    // Simple flat layout\n    for (const step of steps) {\n      const id = nodeId(step);\n      const label = `${step.stepNumber}. ${sanitizeLabel(step.name)}`;\n      const file = getFileName(step.filePath);\n      const className = getNodeClass(step);\n      lines.push(`  ${id}[\"${label}<br/><small>${file}</small>\"]:::${className}`);\n    }\n  }\n\n  // Generate edges\n  if (edges && edges.length > 0) {\n    // Use actual CALLS edges for branching\n    const stepById = new Map(steps.map(s => [s.id, s]));\n    for (const edge of edges) {\n      const fromStep = stepById.get(edge.from);\n      const toStep = stepById.get(edge.to);\n      if (fromStep && toStep) {\n        lines.push(`  ${nodeId(fromStep)} --> ${nodeId(toStep)}`);\n      }\n    }\n    // Ensure all nodes are connected (fallback for disconnected components)\n    // For now assume graph is connected enough or user accepts fragments.\n  } else {\n    // Fallback: linear chain based on step order\n    const sortedSteps = [...steps].sort((a, b) => a.stepNumber - b.stepNumber);\n    for (let i = 0; i < sortedSteps.length - 1; i++) {\n      lines.push(`  ${nodeId(sortedSteps[i])} --> ${nodeId(sortedSteps[i + 1])}`);\n    }\n  }\n\n  return lines.join('\\n');\n}\n\n/**\n * Simple linear mermaid for quick preview\n */\nexport function generateSimpleMermaid(processLabel: string, stepCount: number): string {\n  const [entry, terminal] = processLabel.split(' → ').map(s => s.trim());\n  \n  return `graph LR\n  classDef entry fill:#059669,stroke:#34d399,stroke-width:2px,color:#ffffff,rx:10,ry:10;\n  classDef terminal fill:#be185d,stroke:#f472b6,stroke-width:2px,color:#ffffff,rx:10,ry:10;\n  A[\"🟢 ${entry || 'Start'}\"]:::entry --> B[\"... 
${Math.max(0, stepCount - 2)} steps ...\"]:::terminal`;\n}\n"
  },
  {
    "path": "gitnexus-web/src/lib/utils.ts",
    "content": "export const generateId = (label: string, name: string): string => {\n  return `${label}:${name}`;\n};\n"
  },
  {
    "path": "gitnexus-web/src/main.tsx",
    "content": "import React from 'react';\nimport ReactDOM from 'react-dom/client';\nimport { Buffer } from 'buffer';\nimport App from './App';\nimport './index.css';\n\n// Polyfill Buffer for isomorphic-git (requires Node.js Buffer API)\nglobalThis.Buffer = Buffer;\n\nReactDOM.createRoot(document.getElementById('root') as HTMLElement).render(\n  <React.StrictMode>\n    <App />\n  </React.StrictMode>\n);\n"
  },
  {
    "path": "gitnexus-web/src/services/backend.ts",
    "content": "/**\n * Stateless HTTP client for the local GitNexus backend server.\n * All functions use fetch() with AbortController timeouts.\n */\n\n// ── Types ──────────────────────────────────────────────────────────────────\n\nexport interface BackendRepo {\n  name: string;\n  path: string;\n  indexedAt: string;\n  lastCommit: string;\n  stats?: {\n    files?: number;\n    nodes?: number;\n    edges?: number;\n    communities?: number;\n    processes?: number;\n  };\n}\n\n// ── Configuration ──────────────────────────────────────────────────────────\n\nlet backendUrl = 'http://localhost:4747';\n\nexport const setBackendUrl = (url: string): void => {\n  backendUrl = url.replace(/\\/$/, '');\n};\n\nexport const getBackendUrl = (): string => backendUrl;\n\n// ── Helpers ────────────────────────────────────────────────────────────────\n\nconst DEFAULT_TIMEOUT_MS = 10_000;\nconst PROBE_TIMEOUT_MS = 2_000;\n\n/**\n * Perform a fetch with an AbortController timeout.\n * Throws a cleaner error message on network failures.\n */\nconst fetchWithTimeout = async (\n  url: string,\n  init: RequestInit = {},\n  timeoutMs: number = DEFAULT_TIMEOUT_MS,\n): Promise<Response> => {\n  const controller = new AbortController();\n  const timer = setTimeout(() => controller.abort(), timeoutMs);\n\n  try {\n    const response = await fetch(url, { ...init, signal: controller.signal });\n    return response;\n  } catch (error: unknown) {\n    if (error instanceof DOMException && error.name === 'AbortError') {\n      throw new Error(`Request to ${url} timed out after ${timeoutMs}ms`);\n    }\n    if (error instanceof TypeError) {\n      throw new Error(`Network error reaching GitNexus backend at ${backendUrl}: ${error.message}`);\n    }\n    throw error;\n  } finally {\n    clearTimeout(timer);\n  }\n};\n\n/**\n * Assert the response is OK, otherwise throw with the server's error message if available.\n */\nconst assertOk = async (response: Response): Promise<void> => {\n  if 
(response.ok) return;\n\n  let message = `Backend returned ${response.status} ${response.statusText}`;\n  try {\n    const body = await response.json();\n    if (body && typeof body.error === 'string') {\n      message = body.error;\n    }\n  } catch {\n    // Response body was not JSON — use the status text\n  }\n  throw new Error(message);\n};\n\n// ── API functions ──────────────────────────────────────────────────────────\n\n/**\n * Probe the backend to check if it is reachable.\n * Uses a short 2-second timeout. Returns true if reachable, false otherwise.\n */\nexport const probeBackend = async (): Promise<boolean> => {\n  try {\n    const response = await fetchWithTimeout(\n      `${backendUrl}/api/repos`,\n      {},\n      PROBE_TIMEOUT_MS,\n    );\n    return response.status === 200;\n  } catch {\n    return false;\n  }\n};\n\n/**\n * Fetch the list of indexed repositories.\n */\nexport const fetchRepos = async (): Promise<BackendRepo[]> => {\n  const response = await fetchWithTimeout(`${backendUrl}/api/repos`);\n  await assertOk(response);\n  return response.json() as Promise<BackendRepo[]>;\n};\n\n/**\n * Fetch the full graph (nodes + relationships) for a repository.\n */\nexport const fetchGraph = async (\n  repo: string,\n): Promise<{ nodes: unknown[]; relationships: unknown[] }> => {\n  // Graph loading can take a while for large repos — use 60s timeout\n  const response = await fetchWithTimeout(\n    `${backendUrl}/api/graph?repo=${encodeURIComponent(repo)}`,\n    {},\n    60_000,\n  );\n  await assertOk(response);\n  return response.json() as Promise<{ nodes: unknown[]; relationships: unknown[] }>;\n};\n\n/**\n * Execute a raw Cypher query against the repository's graph.\n * Unwraps the `{ result }` wrapper returned by the server.\n */\nexport const runCypherQuery = async (\n  repo: string,\n  cypher: string,\n): Promise<unknown[]> => {\n  const response = await fetchWithTimeout(`${backendUrl}/api/query`, {\n    method: 'POST',\n    headers: { 
'Content-Type': 'application/json' },\n    body: JSON.stringify({ cypher, repo }),\n  });\n  await assertOk(response);\n\n  const body = await response.json();\n  if (body && typeof body.error === 'string') {\n    throw new Error(body.error);\n  }\n  return (body.result ?? body) as unknown[];\n};\n\n/**\n * Run a semantic search across the repository's graph.\n */\nexport const runSearch = async (\n  repo: string,\n  query: string,\n  limit?: number,\n): Promise<unknown> => {\n  const response = await fetchWithTimeout(`${backendUrl}/api/search`, {\n    method: 'POST',\n    headers: { 'Content-Type': 'application/json' },\n    body: JSON.stringify({ query, limit, repo }),\n  });\n  await assertOk(response);\n  return response.json();\n};\n\n/**\n * Fetch the source content of a file in a repository.\n */\nexport const fetchFileContent = async (\n  repo: string,\n  filePath: string,\n): Promise<string> => {\n  const response = await fetchWithTimeout(\n    `${backendUrl}/api/file?repo=${encodeURIComponent(repo)}&path=${encodeURIComponent(filePath)}`,\n  );\n  await assertOk(response);\n\n  const body = (await response.json()) as { content: string };\n  return body.content;\n};\n\n/**\n * Fetch all execution-flow processes for a repository.\n */\nexport const fetchProcesses = async (repo: string): Promise<unknown> => {\n  const response = await fetchWithTimeout(\n    `${backendUrl}/api/processes?repo=${encodeURIComponent(repo)}`,\n  );\n  await assertOk(response);\n  return response.json();\n};\n\n/**\n * Fetch the detailed step-by-step trace for a single process.\n */\nexport const fetchProcessDetail = async (\n  repo: string,\n  name: string,\n): Promise<unknown> => {\n  const response = await fetchWithTimeout(\n    `${backendUrl}/api/process?repo=${encodeURIComponent(repo)}&name=${encodeURIComponent(name)}`,\n  );\n  await assertOk(response);\n  return response.json();\n};\n\n/**\n * Fetch all functional-area clusters for a repository.\n */\nexport const 
fetchClusters = async (repo: string): Promise<unknown> => {\n  const response = await fetchWithTimeout(\n    `${backendUrl}/api/clusters?repo=${encodeURIComponent(repo)}`,\n  );\n  await assertOk(response);\n  return response.json();\n};\n\n/**\n * Fetch the members of a single cluster.\n */\nexport const fetchClusterDetail = async (\n  repo: string,\n  name: string,\n): Promise<unknown> => {\n  const response = await fetchWithTimeout(\n    `${backendUrl}/api/cluster?repo=${encodeURIComponent(repo)}&name=${encodeURIComponent(name)}`,\n  );\n  await assertOk(response);\n  return response.json();\n};\n"
  },
  {
    "path": "gitnexus-web/src/services/git-clone.ts",
    "content": "import git from 'isomorphic-git';\nimport http from 'isomorphic-git/http/web';\nimport LightningFS from '@isomorphic-git/lightning-fs';\nimport { shouldIgnorePath } from '../config/ignore-service';\nimport { FileEntry } from './zip';\n\n// Initialize virtual filesystem (persists in IndexedDB)\n// Use a unique name each time to avoid stale data issues\nlet fs: LightningFS;\nlet pfs: any;\n\nconst initFS = () => {\n  // Create a fresh filesystem instance\n  const fsName = `gitnexus-git-${Date.now()}`;\n  fs = new LightningFS(fsName);\n  pfs = fs.promises;\n  return fsName;\n};\n\n// Hosted proxy URL - use this for localhost to avoid local proxy issues\nconst HOSTED_PROXY_URL = 'https://gitnexus.vercel.app/api/proxy';\n\n/**\n * Custom HTTP client that uses a query-param based proxy\n * - In development (localhost): uses the hosted Vercel proxy for reliability\n * - In production: uses the local /api/proxy endpoint\n */\nconst createProxiedHttp = (): typeof http => {\n  const isDev = typeof window !== 'undefined' && window.location.hostname === 'localhost';\n  \n  return {\n    request: async (config) => {\n      // Use hosted proxy for localhost, local proxy for production\n      const proxyBase = isDev ? 
HOSTED_PROXY_URL : '/api/proxy';\n      const proxyUrl = `${proxyBase}?url=${encodeURIComponent(config.url)}`;\n      \n      // Call the original http.request with the proxied URL\n      return http.request({\n        ...config,\n        url: proxyUrl,\n      });\n    },\n  };\n};\n\n/**\n * Parse GitHub URL to extract owner and repo\n * Supports: \n *   - https://github.com/owner/repo\n *   - https://github.com/owner/repo.git\n *   - github.com/owner/repo\n */\nexport const parseGitHubUrl = (url: string): { owner: string; repo: string } | null => {\n  const cleaned = url.trim().replace(/\\.git$/, '');\n  const match = cleaned.match(/github\\.com\\/([^\\/]+)\\/([^\\/]+)/);\n  \n  if (!match) return null;\n  \n  return {\n    owner: match[1],\n    repo: match[2],\n  };\n};\n\n/**\n * Clone a GitHub repository using isomorphic-git\n * Returns files in the same format as extractZip for compatibility\n * \n * @param url - GitHub repository URL\n * @param onProgress - Progress callback\n * @param token - Optional GitHub PAT for private repos (stays client-side only)\n */\nexport const cloneRepository = async (\n  url: string,\n  onProgress?: (phase: string, progress: number) => void,\n  token?: string\n): Promise<FileEntry[]> => {\n  const parsed = parseGitHubUrl(url);\n  if (!parsed) {\n    throw new Error('Invalid GitHub URL. Use format: https://github.com/owner/repo');\n  }\n\n  // Initialize fresh filesystem to avoid stale IndexedDB data\n  const fsName = initFS();\n  \n  const dir = `/${parsed.repo}`;\n  const repoUrl = `https://github.com/${parsed.owner}/${parsed.repo}.git`;\n\n  try {\n    onProgress?.('cloning', 0);\n\n    const httpClient = createProxiedHttp();\n    \n    // Clone with shallow depth for speed\n    await git.clone({\n      fs,\n      http: httpClient,\n      dir,\n      url: repoUrl,\n      depth: 1,\n      // Auth callback for private repos (PAT stays client-side)\n      onAuth: token ? 
() => ({ username: token, password: 'x-oauth-basic' }) : undefined,\n      onProgress: (event) => {\n        if (event.total) {\n          const percent = Math.round((event.loaded / event.total) * 100);\n          onProgress?.('cloning', percent);\n        }\n      },\n    });\n\n    onProgress?.('reading', 0);\n\n    // Read all files from the cloned repo\n    const files = await readAllFiles(dir, dir);\n\n    // Cleanup: remove the cloned repo from virtual FS to save space\n    await removeDirectory(dir);\n    \n    // Also try to clean up the IndexedDB database\n    try {\n      indexedDB.deleteDatabase(fsName);\n    } catch {}\n\n    onProgress?.('complete', 100);\n\n    return files;\n  } catch (error) {\n    // Cleanup on error\n    try {\n      await removeDirectory(dir);\n      indexedDB.deleteDatabase(fsName);\n    } catch {}\n    \n    throw error;\n  }\n};\n\n/**\n * Recursively read all files from a directory in the virtual filesystem\n */\nconst readAllFiles = async (baseDir: string, currentDir: string): Promise<FileEntry[]> => {\n  const files: FileEntry[] = [];\n  \n  let entries: string[];\n  try {\n    entries = await pfs.readdir(currentDir);\n  } catch (err) {\n    // Directory might not exist or be inaccessible\n    console.warn(`Cannot read directory: ${currentDir}`);\n    return files;\n  }\n\n  for (const entry of entries) {\n    // Skip .git directory\n    if (entry === '.git') continue;\n\n    const fullPath = `${currentDir}/${entry}`;\n    const relativePath = fullPath.replace(`${baseDir}/`, '');\n\n    // Check ignore rules\n    if (shouldIgnorePath(relativePath)) continue;\n\n    // Try to stat the file - skip if it fails (broken symlinks, etc.)\n    let stat;\n    try {\n      stat = await pfs.stat(fullPath);\n    } catch {\n      // Skip files that can't be stat'd (broken symlinks, permission issues)\n      if (import.meta.env.DEV) {\n        console.warn(`Skipping unreadable entry: ${relativePath}`);\n      }\n      continue;\n    
}\n\n    if (stat.isDirectory()) {\n      // Recurse into subdirectory\n      const subFiles = await readAllFiles(baseDir, fullPath);\n      files.push(...subFiles);\n    } else {\n      // Read file content\n      try {\n        const content = await pfs.readFile(fullPath, { encoding: 'utf8' }) as string;\n        files.push({\n          path: relativePath,\n          content,\n        });\n      } catch {\n        // Skip binary files or files that can't be read as text\n      }\n    }\n  }\n\n  return files;\n};\n\n/**\n * Recursively remove a directory from the virtual filesystem\n */\nconst removeDirectory = async (dir: string): Promise<void> => {\n  try {\n    const entries = await pfs.readdir(dir);\n    \n    for (const entry of entries) {\n      const fullPath = `${dir}/${entry}`;\n      const stat = await pfs.stat(fullPath);\n      \n      if (stat.isDirectory()) {\n        await removeDirectory(fullPath);\n      } else {\n        await pfs.unlink(fullPath);\n      }\n    }\n    \n    await pfs.rmdir(dir);\n  } catch {\n    // Ignore errors during cleanup\n  }\n};\n\n"
  },
  {
    "path": "gitnexus-web/src/services/server-connection.ts",
    "content": "import { GraphNode, GraphRelationship } from '../core/graph/types';\n\nexport interface RepoSummary {\n  name: string;\n  path: string;\n  indexedAt: string;\n  lastCommit: string;\n  stats: {\n    files: number;\n    nodes: number;\n    edges: number;\n    communities: number;\n    processes: number;\n  };\n}\n\nexport interface ServerRepoInfo {\n  name: string;\n  repoPath: string;\n  indexedAt: string;\n  stats: {\n    files: number;\n    nodes: number;\n    edges: number;\n    communities: number;\n    processes: number;\n  };\n}\n\nexport interface ConnectToServerResult {\n  nodes: GraphNode[];\n  relationships: GraphRelationship[];\n  fileContents: Record<string, string>;\n  repoInfo: ServerRepoInfo;\n}\n\nexport function normalizeServerUrl(input: string): string {\n  let url = input.trim();\n\n  // Strip trailing slashes\n  url = url.replace(/\\/+$/, '');\n\n  // Add protocol if missing\n  if (!url.startsWith('http://') && !url.startsWith('https://')) {\n    if (url.startsWith('localhost') || url.startsWith('127.0.0.1')) {\n      url = `http://${url}`;\n    } else {\n      url = `https://${url}`;\n    }\n  }\n\n  // Add /api if not already present\n  if (!url.endsWith('/api')) {\n    url = `${url}/api`;\n  }\n\n  return url;\n}\n\nexport async function fetchRepos(baseUrl: string): Promise<RepoSummary[]> {\n  const response = await fetch(`${baseUrl}/repos`);\n  if (!response.ok) throw new Error(`Server returned ${response.status}`);\n  return response.json();\n}\n\nexport async function fetchRepoInfo(baseUrl: string, repoName?: string): Promise<ServerRepoInfo> {\n  const url = repoName ? 
`${baseUrl}/repo?repo=${encodeURIComponent(repoName)}` : `${baseUrl}/repo`;\n  const response = await fetch(url);\n  if (!response.ok) {\n    throw new Error(`Server returned ${response.status}: ${response.statusText}`);\n  }\n  const data = await response.json();\n  // npm gitnexus@1.3.3 returns \"path\"; git HEAD returns \"repoPath\"\n  return { ...data, repoPath: data.repoPath ?? data.path };\n}\n\nexport async function fetchGraph(\n  baseUrl: string,\n  onProgress?: (downloaded: number, total: number | null) => void,\n  signal?: AbortSignal,\n  repoName?: string\n): Promise<{ nodes: GraphNode[]; relationships: GraphRelationship[] }> {\n  const url = repoName ? `${baseUrl}/graph?repo=${encodeURIComponent(repoName)}` : `${baseUrl}/graph`;\n  const response = await fetch(url, { signal });\n  if (!response.ok) {\n    throw new Error(`Server returned ${response.status}: ${response.statusText}`);\n  }\n\n  const contentLength = response.headers.get('Content-Length');\n  const total = contentLength ? 
parseInt(contentLength, 10) : null;\n\n  if (!response.body) {\n    const data = await response.json();\n    return data;\n  }\n\n  const reader = response.body.getReader();\n  const chunks: Uint8Array[] = [];\n  let downloaded = 0;\n\n  while (true) {\n    const { done, value } = await reader.read();\n    if (done) break;\n\n    chunks.push(value);\n    downloaded += value.length;\n    onProgress?.(downloaded, total);\n  }\n\n  const combined = new Uint8Array(downloaded);\n  let offset = 0;\n  for (const chunk of chunks) {\n    combined.set(chunk, offset);\n    offset += chunk.length;\n  }\n\n  const text = new TextDecoder().decode(combined);\n  return JSON.parse(text);\n}\n\nexport function extractFileContents(nodes: GraphNode[]): Record<string, string> {\n  const contents: Record<string, string> = {};\n  for (const node of nodes) {\n    if (node.label === 'File' && (node.properties as any).content) {\n      contents[node.properties.filePath] = (node.properties as any).content;\n    }\n  }\n  return contents;\n}\n\nexport async function connectToServer(\n  url: string,\n  onProgress?: (phase: string, downloaded: number, total: number | null) => void,\n  signal?: AbortSignal,\n  repoName?: string\n): Promise<ConnectToServerResult> {\n  const baseUrl = normalizeServerUrl(url);\n\n  // Phase 1: Validate server\n  onProgress?.('validating', 0, null);\n  const repoInfo = await fetchRepoInfo(baseUrl, repoName);\n\n  // Phase 2: Download graph\n  onProgress?.('downloading', 0, null);\n  const { nodes, relationships } = await fetchGraph(\n    baseUrl,\n    (downloaded, total) => onProgress?.('downloading', downloaded, total),\n    signal,\n    repoName\n  );\n\n  // Phase 3: Extract file contents\n  onProgress?.('extracting', 0, null);\n  const fileContents = extractFileContents(nodes);\n\n  return { nodes, relationships, fileContents, repoInfo };\n}\n"
  },
  {
    "path": "gitnexus-web/src/services/zip.ts",
    "content": "import JSZip from 'jszip';\nimport { shouldIgnorePath } from '../config/ignore-service';\n\nexport interface FileEntry {\n    path: string;\n    content: string;\n}\n\n/**\n * Find the common root folder prefix in ZIP files\n * GitHub ZIPs have a root folder like \"repo-main/\" or \"repo-branch/\"\n */\nconst findRootPrefix = (paths: string[]): string => {\n    if (paths.length === 0) return '';\n    \n    // Get the first path segment of each file\n    const firstSegments = paths\n        .filter(p => p.includes('/'))\n        .map(p => p.split('/')[0]);\n    \n    if (firstSegments.length === 0) return '';\n    \n    // Check if ALL files share the same first segment\n    const firstSegment = firstSegments[0];\n    const allSameRoot = firstSegments.every(s => s === firstSegment);\n    \n    if (allSameRoot) {\n        return firstSegment + '/';\n    }\n    \n    return '';\n};\n\nexport const extractZip = async (file: File): Promise<FileEntry[]> => {\n    const zip = await JSZip.loadAsync(file);\n    const files: FileEntry[] = [];\n    const allPaths: string[] = [];\n    \n    // First pass: collect all paths to find common root\n    zip.forEach((relativePath, entry) => {\n        if (!entry.dir) {\n            allPaths.push(relativePath);\n        }\n    });\n    \n    // Find and strip root prefix (e.g., \"repo-main/\")\n    const rootPrefix = findRootPrefix(allPaths);\n    \n    const promises: Promise<void>[] = [];\n\n    const processEntry = async (relativePath: string, entry: JSZip.JSZipObject) => {\n        if (entry.dir) return;\n        \n        // Strip root prefix if present\n        const normalizedPath = rootPrefix && relativePath.startsWith(rootPrefix)\n            ? 
relativePath.slice(rootPrefix.length)\n            : relativePath;\n        \n        if (!normalizedPath) return; // Skip if path becomes empty\n        if (shouldIgnorePath(normalizedPath)) return;\n\n        const content = await entry.async('string');\n        \n        files.push({\n            path: normalizedPath,\n            content: content\n        });\n    };\n\n    zip.forEach((relativePath, entry) => {\n        promises.push(processEntry(relativePath, entry));\n    });\n    \n    await Promise.all(promises);\n    \n    return files;\n};\n"
  },
  {
    "path": "gitnexus-web/src/types/lbug-wasm.d.ts",
    "content": "declare module '@ladybugdb/wasm-core' {\n  export function init(): Promise<void>;\n  export class Database {\n    constructor(path: string, bufferPoolSize?: number);\n    close(): Promise<void>;\n  }\n  export class Connection {\n    constructor(db: Database);\n    query(cypher: string): Promise<QueryResult>;\n    prepare(cypher: string): Promise<PreparedStatement>;\n    execute(stmt: PreparedStatement, params?: Record<string, any>): Promise<QueryResult>;\n    close(): Promise<void>;\n  }\n  export interface QueryResult {\n    getAll(): Promise<any[]>;\n    hasNext(): Promise<boolean>;\n    getNext(): Promise<any>;\n  }\n  export interface PreparedStatement {\n    isSuccess(): boolean;\n    getErrorMessage(): Promise<string>;\n    close(): Promise<void>;\n  }\n  export const FS: {\n    writeFile(path: string, data: string): Promise<void>;\n    unlink(path: string): Promise<void>;\n  };\n  const lbug: {\n    init: typeof init;\n    Database: typeof Database;\n    Connection: typeof Connection;\n    FS: typeof FS;\n  };\n  export default lbug;\n}\n"
  },
  {
    "path": "gitnexus-web/src/types/pipeline.ts",
    "content": "import { GraphNode, GraphRelationship, KnowledgeGraph } from '../core/graph/types';\nimport { CommunityDetectionResult } from '../core/ingestion/community-processor';\nimport { ProcessDetectionResult } from '../core/ingestion/process-processor';\n\nexport type PipelinePhase = 'idle' | 'extracting' | 'structure' | 'parsing' | 'imports' | 'calls' | 'heritage' | 'communities' | 'processes' | 'enriching' | 'complete' | 'error';\n\nexport interface PipelineProgress {\n  phase: PipelinePhase;\n  percent: number;\n  message: string;\n  detail?: string;\n  stats?: {\n    filesProcessed: number;\n    totalFiles: number;\n    nodesCreated: number;\n  };\n}\n\n// Original result type (used internally in pipeline)\nexport interface PipelineResult {\n  graph: KnowledgeGraph;\n  fileContents: Map<string, string>;\n  communityResult?: CommunityDetectionResult;\n  processResult?: ProcessDetectionResult;\n}\n\n// Serializable version for Web Worker communication\n// Maps and functions cannot be transferred via postMessage\nexport interface SerializablePipelineResult {\n  nodes: GraphNode[];\n  relationships: GraphRelationship[];\n  fileContents: Record<string, string>; // Object instead of Map\n}\n\n// Helper to convert PipelineResult to serializable format\nexport const serializePipelineResult = (result: PipelineResult): SerializablePipelineResult => ({\n  nodes: result.graph.nodes,\n  relationships: result.graph.relationships,\n  fileContents: Object.fromEntries(result.fileContents),\n});\n\n// Helper to reconstruct from serializable format (used in main thread)\nexport const deserializePipelineResult = (\n  serialized: SerializablePipelineResult,\n  createGraph: () => KnowledgeGraph\n): PipelineResult => {\n  const graph = createGraph();\n  serialized.nodes.forEach(node => graph.addNode(node));\n  serialized.relationships.forEach(rel => graph.addRelationship(rel));\n  \n  return {\n    graph,\n    fileContents: new Map(Object.entries(serialized.fileContents)),\n  
};\n};\n\n"
  },
  {
    "path": "gitnexus-web/src/vendor/leiden/index.d.ts",
    "content": "import Graph from 'graphology';\n\ntype RNGFunction = () => number;\n\nexport type LeidenOptions = {\n  attributes?: {\n    community?: string;\n    weight?: string;\n  };\n  randomWalk?: boolean;\n  resolution?: number;\n  rng?: RNGFunction;\n  weighted?: boolean;\n};\n\ntype LeidenMapping = { [key: string]: number };\n\nexport type DetailedLeidenOutput = {\n  communities: LeidenMapping;\n  count: number;\n  deltaComputations: number;\n  dendrogram: Array<any>;\n  modularity: number;\n  moves: Array<Array<number>> | Array<number>;\n  nodesVisited: number;\n  resolution: number;\n};\n\ndeclare const leiden: {\n  (graph: Graph, options?: LeidenOptions): LeidenMapping;\n  assign(graph: Graph, options?: LeidenOptions): void;\n  detailed(graph: Graph, options?: LeidenOptions): DetailedLeidenOutput;\n};\n\nexport default leiden;\n"
  },
  {
    "path": "gitnexus-web/src/vendor/leiden/index.js",
    "content": "/**\n * Graphology Leiden Algorithm\n * ============================\n *\n * JavaScript implementation of the Leiden community detection\n * algorithm for graphology.\n *\n * Vendored from: https://github.com/graphology/graphology/tree/master/src/communities-leiden\n * License: MIT\n *\n * Converted to ESM for Vite compatibility.\n *\n * [Reference]\n * Traag, V. A., et al. \"From Louvain to Leiden: Guaranteeing Well-Connected\n * Communities\". Scientific Reports, vol. 9, no 1, 2019, p. 5233.\n * https://arxiv.org/abs/1810.08473\n */\nimport resolveDefaults from 'graphology-utils/defaults';\nimport isGraph from 'graphology-utils/is-graph';\nimport inferType from 'graphology-utils/infer-type';\nimport SparseMap from 'mnemonist/sparse-map';\nimport SparseQueueSet from 'mnemonist/sparse-queue-set';\nimport randomIndexModule from 'pandemonium/random-index';\nimport { addWeightToCommunity, UndirectedLeidenAddenda } from './utils.js';\n\nimport indices from 'graphology-indices/louvain';\n\nvar createRandomIndex = randomIndexModule.createRandomIndex || randomIndexModule;\nvar UndirectedLouvainIndex = indices.UndirectedLouvainIndex;\n\nvar DEFAULTS = {\n attributes: {\n community: 'community',\n weight: 'weight'\n },\n randomness: 0.01,\n randomWalk: true,\n resolution: 1,\n rng: Math.random,\n weighted: false\n};\n\nvar EPSILON = 1e-10;\n\nfunction tieBreaker(\n bestCommunity,\n currentCommunity,\n targetCommunity,\n delta,\n bestDelta\n) {\n if (Math.abs(delta - bestDelta) < EPSILON) {\n if (bestCommunity === currentCommunity) {\n return false;\n } else {\n return targetCommunity > bestCommunity;\n }\n } else if (delta > bestDelta) {\n return true;\n }\n\n return false;\n}\n\nfunction undirectedLeiden(detailed, graph, options) {\n var index = new UndirectedLouvainIndex(graph, {\n attributes: {\n weight: options.attributes.weight\n },\n keepDendrogram: detailed,\n resolution: options.resolution,\n weighted: options.weighted\n });\n\n var addenda = new 
UndirectedLeidenAddenda(index, {\n randomness: options.randomness,\n rng: options.rng\n });\n\n var randomIndex = createRandomIndex(options.rng);\n\n // Communities\n var currentCommunity, targetCommunity;\n var communities = new SparseMap(Float64Array, index.C);\n\n // Traversal\n var queue = new SparseQueueSet(index.C),\n start,\n end,\n weight,\n ci,\n ri,\n s,\n i,\n j,\n l;\n\n // Metrics\n var degree, targetCommunityDegree;\n\n // Moves\n var bestCommunity, bestDelta, deltaIsBetter, delta;\n\n // Details\n var deltaComputations = 0,\n nodesVisited = 0,\n moves = [],\n currentMoves;\n\n while (true) {\n l = index.C;\n\n currentMoves = 0;\n\n // Traversal of the graph\n ri = options.randomWalk ? randomIndex(l) : 0;\n\n for (s = 0; s < l; s++, ri++) {\n i = ri % l;\n queue.enqueue(i);\n }\n\n while (queue.size !== 0) {\n i = queue.dequeue();\n nodesVisited++;\n\n degree = 0;\n communities.clear();\n\n currentCommunity = index.belongings[i];\n\n start = index.starts[i];\n end = index.starts[i + 1];\n\n // Traversing neighbors\n for (; start < end; start++) {\n j = index.neighborhood[start];\n weight = index.weights[start];\n\n targetCommunity = index.belongings[j];\n\n // Incrementing metrics\n degree += weight;\n addWeightToCommunity(communities, targetCommunity, weight);\n }\n\n // Finding best community to move to\n bestDelta = index.fastDeltaWithOwnCommunity(\n i,\n degree,\n communities.get(currentCommunity) || 0,\n currentCommunity\n );\n bestCommunity = currentCommunity;\n\n for (ci = 0; ci < communities.size; ci++) {\n targetCommunity = communities.dense[ci];\n\n if (targetCommunity === currentCommunity) continue;\n\n targetCommunityDegree = communities.vals[ci];\n\n deltaComputations++;\n\n delta = index.fastDelta(\n i,\n degree,\n targetCommunityDegree,\n targetCommunity\n );\n\n deltaIsBetter = tieBreaker(\n bestCommunity,\n currentCommunity,\n targetCommunity,\n delta,\n bestDelta\n );\n\n if (deltaIsBetter) {\n bestDelta = delta;\n bestCommunity = 
targetCommunity;\n }\n }\n\n if (bestDelta < 0) {\n bestCommunity = index.isolate(i, degree);\n\n if (bestCommunity === currentCommunity) continue;\n } else {\n if (bestCommunity === currentCommunity) {\n continue;\n } else {\n index.move(i, degree, bestCommunity);\n }\n }\n\n currentMoves++;\n\n // Adding neighbors from other communities to the queue\n start = index.starts[i];\n end = index.starts[i + 1];\n\n for (; start < end; start++) {\n j = index.neighborhood[start];\n targetCommunity = index.belongings[j];\n\n if (targetCommunity !== bestCommunity) queue.enqueue(j);\n }\n }\n\n moves.push(currentMoves);\n\n if (currentMoves === 0) {\n index.zoomOut();\n break;\n }\n\n if (!addenda.onlySingletons()) {\n // We continue working on the induced graph\n addenda.zoomOut();\n continue;\n }\n\n break;\n }\n\n var results = {\n index: index,\n deltaComputations: deltaComputations,\n nodesVisited: nodesVisited,\n moves: moves\n };\n\n return results;\n}\n\n/**\n * Function returning the communities mapping of the graph.\n */\nfunction leiden(assign, detailed, graph, options) {\n if (!isGraph(graph))\n throw new Error(\n 'graphology-communities-leiden: the given graph is not a valid graphology instance.'\n );\n\n var type = inferType(graph);\n\n if (type === 'mixed')\n throw new Error(\n 'graphology-communities-leiden: cannot run the algorithm on a true mixed graph.'\n );\n\n if (type === 'directed')\n throw new Error(\n 'graphology-communities-leiden: not yet implemented for directed graphs.'\n );\n\n // Attributes name\n options = resolveDefaults(options, DEFAULTS);\n\n // Empty graph case\n var c = 0;\n\n if (graph.size === 0) {\n if (assign) {\n graph.forEachNode(function (node) {\n graph.setNodeAttribute(node, options.attributes.communities, c++);\n });\n\n return;\n }\n\n var communities = {};\n\n graph.forEachNode(function (node) {\n communities[node] = c++;\n });\n\n if (!detailed) return communities;\n\n return {\n communities: communities,\n count: 
graph.order,\n deltaComputations: 0,\n dendrogram: null,\n level: 0,\n modularity: NaN,\n moves: null,\n nodesVisited: 0,\n resolution: options.resolution\n };\n }\n\n var fn = undirectedLeiden;\n\n var results = fn(detailed, graph, options);\n\n var index = results.index;\n\n // Standard output\n if (!detailed) {\n if (assign) {\n index.assign(options.attributes.community);\n return;\n }\n\n return index.collect();\n }\n\n // Detailed output\n var output = {\n count: index.C,\n deltaComputations: results.deltaComputations,\n dendrogram: index.dendrogram,\n level: index.level,\n modularity: index.modularity(),\n moves: results.moves,\n nodesVisited: results.nodesVisited,\n resolution: options.resolution\n };\n\n if (assign) {\n index.assign(options.attributes.community);\n return output;\n }\n\n output.communities = index.collect();\n\n return output;\n}\n\n/**\n * Exporting.\n */\nvar fn = leiden.bind(null, false, false);\nfn.assign = leiden.bind(null, true, false);\nfn.detailed = leiden.bind(null, false, true);\nfn.defaults = DEFAULTS;\n\nexport default fn;\n"
  },
  {
    "path": "gitnexus-web/src/vendor/leiden/utils.js",
    "content": "/**\n * Graphology Leiden Utils\n * ========================\n *\n * Miscellaneous utilities used by the Leiden algorithm.\n *\n * Vendored from: https://github.com/graphology/graphology/tree/master/src/communities-leiden\n * License: MIT\n *\n * Converted to ESM for Vite compatibility.\n */\nimport SparseMap from 'mnemonist/sparse-map';\nimport randomModule from 'pandemonium/random';\nvar createRandom = randomModule.createRandom || randomModule;\n\nexport function addWeightToCommunity(map, community, weight) {\n var currentWeight = map.get(community);\n\n if (typeof currentWeight === 'undefined') currentWeight = 0;\n\n currentWeight += weight;\n\n map.set(community, currentWeight);\n}\n\nexport function UndirectedLeidenAddenda(index, options) {\n options = options || {};\n\n var rng = options.rng || Math.random;\n var randomness = 'randomness' in options ? options.randomness : 0.01;\n\n this.index = index;\n this.random = createRandom(rng);\n this.randomness = randomness;\n this.rng = rng;\n\n var NodesPointerArray = index.counts.constructor;\n var WeightsArray = index.weights.constructor;\n\n var order = index.C;\n this.resolution = index.resolution;\n\n // Used to group nodes by communities\n this.B = index.C;\n this.C = 0;\n this.communitiesOffsets = new NodesPointerArray(order);\n this.nodesSortedByCommunities = new NodesPointerArray(order);\n this.communitiesBounds = new NodesPointerArray(order + 1);\n\n // Used to merge nodes subsets\n this.communityWeights = new WeightsArray(order);\n this.degrees = new WeightsArray(order);\n this.nonSingleton = new Uint8Array(order);\n this.externalEdgeWeightPerCommunity = new WeightsArray(order);\n this.belongings = new NodesPointerArray(order);\n this.neighboringCommunities = new SparseMap(WeightsArray, order);\n this.cumulativeIncrement = new Float64Array(order);\n this.macroCommunities = null;\n}\n\nUndirectedLeidenAddenda.prototype.groupByCommunities = function () {\n var index = this.index;\n\n var n, 
i, c, b, o;\n\n n = 0;\n o = 0;\n\n for (i = 0; i < index.C; i++) {\n c = index.counts[i];\n\n if (c !== 0) {\n this.communitiesBounds[o++] = n;\n n += c;\n this.communitiesOffsets[i] = n;\n }\n }\n\n this.communitiesBounds[o] = n;\n\n o = 0;\n\n for (i = 0; i < index.C; i++) {\n b = index.belongings[i];\n o = --this.communitiesOffsets[b];\n this.nodesSortedByCommunities[o] = i;\n }\n\n this.B = index.C - index.U;\n this.C = index.C;\n};\n\nUndirectedLeidenAddenda.prototype.communities = function () {\n var communities = new Array(this.B);\n\n var i, j, community, start, stop;\n\n for (i = 0; i < this.B; i++) {\n start = this.communitiesBounds[i];\n stop = this.communitiesBounds[i + 1];\n community = [];\n\n for (j = start; j < stop; j++) {\n community.push(j);\n }\n\n communities[i] = community;\n }\n\n return communities;\n};\n\nUndirectedLeidenAddenda.prototype.mergeNodesSubset = function (start, stop) {\n var index = this.index;\n var currentMacroCommunity =\n index.belongings[this.nodesSortedByCommunities[start]];\n var neighboringCommunities = this.neighboringCommunities;\n\n var totalNodeWeight = 0;\n\n var i, j, w;\n var ei, el, et;\n\n // Initializing singletons\n for (j = start; j < stop; j++) {\n i = this.nodesSortedByCommunities[j];\n\n this.belongings[i] = i;\n this.nonSingleton[i] = 0;\n this.degrees[i] = 0;\n totalNodeWeight += index.loops[i] / 2;\n\n this.communityWeights[i] = index.loops[i];\n this.externalEdgeWeightPerCommunity[i] = 0;\n\n ei = index.starts[i];\n el = index.starts[i + 1];\n\n for (; ei < el; ei++) {\n et = index.neighborhood[ei];\n w = index.weights[ei];\n\n this.degrees[i] += w;\n\n if (index.belongings[et] !== currentMacroCommunity) continue;\n\n totalNodeWeight += w;\n this.externalEdgeWeightPerCommunity[i] += w;\n this.communityWeights[i] += w;\n }\n }\n\n var microDegrees = this.externalEdgeWeightPerCommunity.slice();\n\n var s, ri, ci;\n var order = stop - start;\n\n var degree,\n bestCommunity,\n qualityValueIncrement,\n 
maxQualityValueIncrement,\n totalTransformedQualityValueIncrement,\n targetCommunity,\n targetCommunityDegree,\n targetCommunityWeight;\n\n var r, lo, hi, mid, chosenCommunity;\n\n ri = this.random(start, stop - 1);\n\n for (s = start; s < stop; s++, ri++) {\n j = start + (ri % order);\n\n i = this.nodesSortedByCommunities[j];\n\n if (this.nonSingleton[i] === 1) {\n continue;\n }\n\n if (\n this.externalEdgeWeightPerCommunity[i] <\n this.communityWeights[i] *\n (totalNodeWeight / 2 - this.communityWeights[i]) *\n this.resolution\n ) {\n continue;\n }\n\n this.communityWeights[i] = 0;\n this.externalEdgeWeightPerCommunity[i] = 0;\n\n neighboringCommunities.clear();\n neighboringCommunities.set(i, 0);\n\n degree = 0;\n\n ei = index.starts[i];\n el = index.starts[i + 1];\n\n for (; ei < el; ei++) {\n et = index.neighborhood[ei];\n\n if (index.belongings[et] !== currentMacroCommunity) continue;\n\n w = index.weights[ei];\n\n degree += w;\n\n addWeightToCommunity(neighboringCommunities, this.belongings[et], w);\n }\n\n bestCommunity = i;\n maxQualityValueIncrement = 0;\n totalTransformedQualityValueIncrement = 0;\n\n for (ci = 0; ci < neighboringCommunities.size; ci++) {\n targetCommunity = neighboringCommunities.dense[ci];\n targetCommunityDegree = neighboringCommunities.vals[ci];\n targetCommunityWeight = this.communityWeights[targetCommunity];\n\n if (\n this.externalEdgeWeightPerCommunity[targetCommunity] >=\n targetCommunityWeight *\n (totalNodeWeight / 2 - targetCommunityWeight) *\n this.resolution\n ) {\n qualityValueIncrement =\n targetCommunityDegree -\n ((degree + index.loops[i]) *\n targetCommunityWeight *\n this.resolution) /\n totalNodeWeight;\n\n if (qualityValueIncrement > maxQualityValueIncrement) {\n bestCommunity = targetCommunity;\n maxQualityValueIncrement = qualityValueIncrement;\n }\n\n if (qualityValueIncrement >= 0)\n totalTransformedQualityValueIncrement += Math.exp(\n qualityValueIncrement / this.randomness\n );\n }\n\n 
this.cumulativeIncrement[ci] = totalTransformedQualityValueIncrement;\n }\n\n if (\n totalTransformedQualityValueIncrement < Number.MAX_VALUE &&\n totalTransformedQualityValueIncrement < Infinity\n ) {\n r = totalTransformedQualityValueIncrement * this.rng();\n lo = -1;\n hi = neighboringCommunities.size + 1;\n\n while (lo < hi - 1) {\n mid = (lo + hi) >>> 1;\n\n if (this.cumulativeIncrement[mid] >= r) hi = mid;\n else lo = mid;\n }\n\n chosenCommunity = neighboringCommunities.dense[hi];\n } else {\n chosenCommunity = bestCommunity;\n }\n\n this.communityWeights[chosenCommunity] += degree + index.loops[i];\n\n ei = index.starts[i];\n el = index.starts[i + 1];\n\n for (; ei < el; ei++) {\n et = index.neighborhood[ei];\n\n if (index.belongings[et] !== currentMacroCommunity) continue;\n\n targetCommunity = this.belongings[et];\n\n if (targetCommunity === chosenCommunity) {\n this.externalEdgeWeightPerCommunity[chosenCommunity] -=\n microDegrees[et];\n } else {\n this.externalEdgeWeightPerCommunity[chosenCommunity] +=\n microDegrees[et];\n }\n }\n\n if (chosenCommunity !== i) {\n this.belongings[i] = chosenCommunity;\n this.nonSingleton[chosenCommunity] = 1;\n this.C--;\n }\n }\n\n var microCommunities = this.neighboringCommunities;\n microCommunities.clear();\n\n for (j = start; j < stop; j++) {\n i = this.nodesSortedByCommunities[j];\n microCommunities.set(this.belongings[i], 1);\n }\n\n return microCommunities.dense.slice(0, microCommunities.size);\n};\n\nUndirectedLeidenAddenda.prototype.refinePartition = function () {\n this.groupByCommunities();\n\n this.macroCommunities = new Array(this.B);\n\n var i, start, stop, mapping;\n\n var bounds = this.communitiesBounds;\n\n for (i = 0; i < this.B; i++) {\n start = bounds[i];\n stop = bounds[i + 1];\n\n mapping = this.mergeNodesSubset(start, stop);\n this.macroCommunities[i] = mapping;\n }\n};\n\nUndirectedLeidenAddenda.prototype.split = function () {\n var index = this.index;\n var isolates = 
this.neighboringCommunities;\n\n isolates.clear();\n\n var i, community, isolated;\n\n for (i = 0; i < index.C; i++) {\n community = this.belongings[i];\n\n if (i !== community) continue;\n\n isolated = index.isolate(i, this.degrees[i]);\n isolates.set(community, isolated);\n }\n\n for (i = 0; i < index.C; i++) {\n community = this.belongings[i];\n\n if (i === community) continue;\n\n isolated = isolates.get(community);\n index.move(i, this.degrees[i], isolated);\n }\n\n var j, macro;\n\n for (i = 0; i < this.macroCommunities.length; i++) {\n macro = this.macroCommunities[i];\n\n for (j = 0; j < macro.length; j++) macro[j] = isolates.get(macro[j]);\n }\n};\n\nUndirectedLeidenAddenda.prototype.zoomOut = function () {\n var index = this.index;\n this.refinePartition();\n this.split();\n\n var newLabels = index.zoomOut();\n\n var macro, leader, follower;\n\n var i, j;\n\n for (i = 0; i < this.macroCommunities.length; i++) {\n macro = this.macroCommunities[i];\n leader = newLabels[macro[0]];\n\n for (j = 1; j < macro.length; j++) {\n follower = newLabels[macro[j]];\n index.expensiveMove(follower, leader);\n }\n }\n};\n\nUndirectedLeidenAddenda.prototype.onlySingletons = function () {\n var index = this.index;\n\n var i;\n\n for (i = 0; i < index.C; i++) {\n if (index.counts[i] > 1) return false;\n }\n\n return true;\n};\n"
  },
  {
    "path": "gitnexus-web/src/vite-env.d.ts",
    "content": "/// <reference types=\"vite/client\" />\n"
  },
  {
    "path": "gitnexus-web/src/workers/ingestion.worker.ts",
    "content": "import * as Comlink from 'comlink';\nimport { runIngestionPipeline, runPipelineFromFiles } from '../core/ingestion/pipeline';\nimport { PipelineProgress, SerializablePipelineResult, serializePipelineResult } from '../types/pipeline';\nimport { FileEntry } from '../services/zip';\nimport {\n  runEmbeddingPipeline,\n  semanticSearch as doSemanticSearch,\n  semanticSearchWithContext as doSemanticSearchWithContext,\n  type EmbeddingProgressCallback,\n} from '../core/embeddings/embedding-pipeline';\nimport { isEmbedderReady, disposeEmbedder } from '../core/embeddings/embedder';\nimport type { EmbeddingProgress, SemanticSearchResult } from '../core/embeddings/types';\nimport type { ProviderConfig, AgentStreamChunk } from '../core/llm/types';\nimport { createGraphRAGAgent, streamAgentResponse, type AgentMessage, createChatModel } from '../core/llm/agent';\nimport { SystemMessage } from '@langchain/core/messages';\nimport { enrichClustersBatch, ClusterMemberInfo, ClusterEnrichment } from '../core/ingestion/cluster-enricher';\nimport { CommunityNode } from '../core/ingestion/community-processor';\nimport { PipelineResult } from '../types/pipeline';\nimport { buildCodebaseContext, type CodebaseContext } from '../core/llm/context-builder';\nimport { \n  buildBM25Index, \n  searchBM25, \n  isBM25Ready, \n  getBM25Stats,\n  mergeWithRRF,\n  type HybridSearchResult,\n} from '../core/search';\n\n// Lazy import for LadybugDB to avoid breaking worker if SharedArrayBuffer unavailable\nlet lbugAdapter: typeof import('../core/lbug/lbug-adapter') | null = null;\nconst getLbugAdapter = async () => {\n  if (!lbugAdapter) {\n    lbugAdapter = await import('../core/lbug/lbug-adapter');\n  }\n  return lbugAdapter;\n};\n\n// Embedding state\nlet embeddingProgress: EmbeddingProgress | null = null;\nlet isEmbeddingComplete = false;\n\n// File contents state - stores full file contents for grep/read tools\nlet storedFileContents: Map<string, string> = new Map();\n\n// Agent 
state\nlet currentAgent: ReturnType<typeof createGraphRAGAgent> | null = null;\nlet currentProviderConfig: ProviderConfig | null = null;\nlet currentGraphResult: PipelineResult | null = null;\n\n// Pending enrichment config (for background processing)\nlet pendingEnrichmentConfig: ProviderConfig | null = null;\nlet enrichmentCancelled = false;\n\n// Chat cancellation flag\nlet chatCancelled = false;\n\n// ============================================================\n// HTTP helpers for backend mode\n// ============================================================\n\nconst httpFetchWithTimeout = async (\n  url: string,\n  init: RequestInit = {},\n  timeoutMs: number = 30_000,\n): Promise<Response> => {\n  const controller = new AbortController();\n  const timer = setTimeout(() => controller.abort(), timeoutMs);\n  try {\n    return await fetch(url, { ...init, signal: controller.signal });\n  } finally {\n    clearTimeout(timer);\n  }\n};\n\nconst createHttpExecuteQuery = (backendUrl: string, repo: string) => {\n  return async (cypher: string): Promise<any[]> => {\n    const response = await httpFetchWithTimeout(`${backendUrl}/api/query`, {\n      method: 'POST',\n      headers: { 'Content-Type': 'application/json' },\n      body: JSON.stringify({ cypher, repo }),\n    });\n    if (!response.ok) {\n      const body = await response.json().catch(() => ({}));\n      throw new Error(body.error || `Backend query failed: ${response.status}`);\n    }\n    const body = await response.json();\n    return (body.result ?? 
body) as any[];\n  };\n};\n\n/**\n * Create a search function that calls the backend's /api/search endpoint,\n * which runs full hybrid search (BM25 + semantic + RRF) on the server.\n * Results are flattened from the process-grouped response into the flat\n * array format expected by createGraphRAGTools.\n */\nconst createHttpHybridSearch = (backendUrl: string, repo: string) => {\n  return async (query: string, k: number = 15): Promise<any[]> => {\n    try {\n      const response = await httpFetchWithTimeout(`${backendUrl}/api/search`, {\n        method: 'POST',\n        headers: { 'Content-Type': 'application/json' },\n        body: JSON.stringify({ query, limit: k, repo }),\n      });\n      if (!response.ok) {\n        return [];\n      }\n      const body = await response.json();\n      const data = body.results ?? body;\n\n      // Flatten process_symbols + definitions into a single ranked list\n      const symbols: any[] = (data.process_symbols ?? []).map((s: any, i: number) => ({\n        nodeId: s.id,\n        id: s.id,\n        name: s.name,\n        label: s.type,\n        filePath: s.filePath,\n        startLine: s.startLine,\n        endLine: s.endLine,\n        content: s.content ?? '',\n        sources: ['bm25', 'semantic'],\n        score: 1 - (i * 0.02),\n      }));\n\n      const defs: any[] = (data.definitions ?? 
[]).map((d: any, i: number) => ({\n        id: d.name,\n        name: d.name,\n        label: d.type || 'File',\n        filePath: d.filePath,\n        content: '',\n        sources: ['bm25'],\n        score: 0.5 - (i * 0.02),\n      }));\n\n      return [...symbols, ...defs].slice(0, k);\n    } catch {\n      return [];\n    }\n  };\n};\n\n/**\n * Worker API exposed via Comlink\n * \n * Note: The onProgress callback is passed as a Comlink.proxy() from the main thread,\n * allowing it to be called from the worker and have it execute on the main thread.\n */\nconst workerApi = {\n  /**\n   * Run the ingestion pipeline in the worker thread\n   * @param file - The ZIP file to process\n   * @param onProgress - Proxied callback for progress updates (runs on main thread)\n   * @returns Serializable result (nodes, relationships, fileContents as object)\n   */\n  async runPipeline(\n    file: File,\n    onProgress: (progress: PipelineProgress) => void,\n    clusteringConfig?: ProviderConfig\n  ): Promise<SerializablePipelineResult> {\n    // Debug logging\n    console.log('🔧 runPipeline called with clusteringConfig:', !!clusteringConfig);\n    // Run the actual pipeline\n    const result = await runIngestionPipeline(file, onProgress);\n    currentGraphResult = result;\n    \n    // Store file contents for grep/read tools (full content, not truncated)\n    storedFileContents = result.fileContents;\n    \n    // Build BM25 index for keyword search (instant, ~100ms)\n    const bm25DocCount = buildBM25Index(storedFileContents);\n    if (import.meta.env.DEV) {\n      console.log(`🔍 BM25 index built: ${bm25DocCount} documents`);\n    }\n    \n    // Load graph into LadybugDB for querying (optional - gracefully degrades)\n    try {\n      onProgress({\n        phase: 'complete',\n        percent: 98,\n        message: 'Loading into LadybugDB...',\n        stats: {\n          filesProcessed: result.graph.nodeCount,\n          totalFiles: result.graph.nodeCount,\n          
nodesCreated: result.graph.nodeCount,\n        },\n      });\n\n      const lbug = await getLbugAdapter();\n      await lbug.loadGraphToLbug(result.graph, result.fileContents);\n\n      if (import.meta.env.DEV) {\n        const stats = await lbug.getLbugStats();\n        console.log('LadybugDB loaded:', stats);\n        console.log('📁 Stored', storedFileContents.size, 'files for grep/read tools');\n      }\n    } catch {\n      // LadybugDB is optional - silently continue without it\n    }\n\n    // Store clustering config for background enrichment (runs after graph loads)\n    if (clusteringConfig) {\n      pendingEnrichmentConfig = clusteringConfig;\n      console.log('📋 Clustering config saved for background enrichment');\n    }\n\n    // Convert to serializable format for transfer back to main thread\n    return serializePipelineResult(result);\n  },\n\n  /**\n   * Execute a Cypher query against the LadybugDB database\n   * @param cypher - The Cypher query string\n   * @returns Query results as an array of objects\n   */\n  async runQuery(cypher: string): Promise<any[]> {\n    const lbug = await getLbugAdapter();\n    if (!lbug.isLbugReady()) {\n      throw new Error('Database not ready. 
Please load a repository first.');\n    }\n    return lbug.executeQuery(cypher);\n  },\n\n  /**\n   * Check if the database is ready for queries\n   */\n  async isReady(): Promise<boolean> {\n    try {\n      const lbug = await getLbugAdapter();\n      return lbug.isLbugReady();\n    } catch {\n      return false;\n    }\n  },\n\n  /**\n   * Get database statistics\n   */\n  async getStats(): Promise<{ nodes: number; edges: number }> {\n    try {\n      const lbug = await getLbugAdapter();\n      return lbug.getLbugStats();\n    } catch {\n      return { nodes: 0, edges: 0 };\n    }\n  },\n\n  /**\n   * Run the ingestion pipeline from pre-extracted files (e.g., from git clone)\n   * @param files - Array of file entries with path and content\n   * @param onProgress - Proxied callback for progress updates\n   * @returns Serializable result\n   */\n  async runPipelineFromFiles(\n    files: FileEntry[],\n    onProgress: (progress: PipelineProgress) => void,\n    clusteringConfig?: ProviderConfig\n  ): Promise<SerializablePipelineResult> {\n    // Skip extraction phase, start from 15%\n    onProgress({\n      phase: 'extracting',\n      percent: 15,\n      message: 'Files ready',\n      stats: { filesProcessed: 0, totalFiles: files.length, nodesCreated: 0 },\n    });\n\n    // Run the pipeline\n    const result = await runPipelineFromFiles(files, onProgress);\n    currentGraphResult = result;\n    \n    // Store file contents for grep/read tools (full content, not truncated)\n    storedFileContents = result.fileContents;\n    \n    // Build BM25 index for keyword search (instant, ~100ms)\n    const bm25DocCount = buildBM25Index(storedFileContents);\n    if (import.meta.env.DEV) {\n      console.log(`🔍 BM25 index built: ${bm25DocCount} documents`);\n    }\n    \n    // Load graph into LadybugDB for querying (optional - gracefully degrades)\n    try {\n      onProgress({\n        phase: 'complete',\n        percent: 98,\n        message: 'Loading into LadybugDB...',\n     
   stats: {\n          filesProcessed: result.graph.nodeCount,\n          totalFiles: result.graph.nodeCount,\n          nodesCreated: result.graph.nodeCount,\n        },\n      });\n\n      const lbug = await getLbugAdapter();\n      await lbug.loadGraphToLbug(result.graph, result.fileContents);\n\n      if (import.meta.env.DEV) {\n        const stats = await lbug.getLbugStats();\n        console.log('LadybugDB loaded:', stats);\n        console.log('📁 Stored', storedFileContents.size, 'files for grep/read tools');\n      }\n    } catch {\n      // LadybugDB is optional - silently continue without it\n    }\n    \n    // Store clustering config for background enrichment (runs after graph loads)\n    if (clusteringConfig) {\n      pendingEnrichmentConfig = clusteringConfig;\n      console.log('📋 Clustering config saved for background enrichment');\n    }\n    \n    // Convert to serializable format for transfer back to main thread\n    return serializePipelineResult(result);\n  },\n\n  // ============================================================\n  // Embedding Pipeline Methods\n  // ============================================================\n\n  /**\n   * Start the embedding pipeline in the background\n   * Generates embeddings for all embeddable nodes and creates vector index\n   * @param onProgress - Proxied callback for embedding progress updates\n   * @param forceDevice - Force a specific device ('webgpu' or 'wasm')\n   */\n  async startEmbeddingPipeline(\n    onProgress: (progress: EmbeddingProgress) => void,\n    forceDevice?: 'webgpu' | 'wasm'\n  ): Promise<void> {\n    const lbug = await getLbugAdapter();\n    if (!lbug.isLbugReady()) {\n      throw new Error('Database not ready. 
Please load a repository first.');\n    }\n\n    // Reset state\n    embeddingProgress = null;\n    isEmbeddingComplete = false;\n\n    const progressCallback: EmbeddingProgressCallback = (progress) => {\n      embeddingProgress = progress;\n      if (progress.phase === 'ready') {\n        isEmbeddingComplete = true;\n      }\n      onProgress(progress);\n    };\n\n    await runEmbeddingPipeline(\n      lbug.executeQuery,\n      lbug.executeWithReusedStatement,\n      progressCallback,\n      forceDevice ? { device: forceDevice } : {}\n    );\n  },\n\n  /**\n   * Start background cluster enrichment (if pending)\n   * Called after graph loads, runs in background like embeddings\n   * @param onProgress - Progress callback\n   */\n  async startBackgroundEnrichment(\n    onProgress?: (current: number, total: number) => void\n  ): Promise<{ enriched: number; skipped: boolean }> {\n    if (!pendingEnrichmentConfig) {\n      console.log('⏭️ No pending enrichment config, skipping');\n      return { enriched: 0, skipped: true };\n    }\n    \n    console.log('✨ Starting background LLM enrichment...');\n    try {\n      await workerApi.enrichCommunities(\n        pendingEnrichmentConfig,\n        onProgress ?? 
(() => {})\n      );\n      pendingEnrichmentConfig = null; // Clear after running\n      console.log('✅ Background enrichment completed');\n      return { enriched: 1, skipped: false };\n    } catch (err) {\n      console.error('❌ Background enrichment failed:', err);\n      pendingEnrichmentConfig = null;\n      return { enriched: 0, skipped: false };\n    }\n  },\n\n  /**\n   * Cancel the current enrichment operation\n   */\n  async cancelEnrichment(): Promise<void> {\n    enrichmentCancelled = true;\n    pendingEnrichmentConfig = null;\n    console.log('⏸️ Enrichment cancelled by user');\n  },\n\n  /**\n   * Perform semantic search on the codebase\n   * @param query - Natural language search query\n   * @param k - Number of results to return (default: 10)\n   * @param maxDistance - Maximum distance threshold (default: 0.5)\n   * @returns Array of search results ordered by relevance\n   */\n  async semanticSearch(\n    query: string,\n    k: number = 10,\n    maxDistance: number = 0.5\n  ): Promise<SemanticSearchResult[]> {\n    const lbug = await getLbugAdapter();\n    if (!lbug.isLbugReady()) {\n      throw new Error('Database not ready. Please load a repository first.');\n    }\n    if (!isEmbeddingComplete) {\n      throw new Error('Embeddings not ready. Please wait for embedding pipeline to complete.');\n    }\n\n    return doSemanticSearch(lbug.executeQuery, query, k, maxDistance);\n  },\n\n  /**\n   * Perform semantic search with graph expansion\n   * Finds similar nodes AND their connections\n   * @param query - Natural language search query\n   * @param k - Number of initial results (default: 5)\n   * @param hops - Number of graph hops to expand (default: 2)\n   * @returns Search results with connected nodes\n   */\n  async semanticSearchWithContext(\n    query: string,\n    k: number = 5,\n    hops: number = 2\n  ): Promise<any[]> {\n    const lbug = await getLbugAdapter();\n    if (!lbug.isLbugReady()) {\n      throw new Error('Database not ready. 
Please load a repository first.');\n    }\n    if (!isEmbeddingComplete) {\n      throw new Error('Embeddings not ready. Please wait for embedding pipeline to complete.');\n    }\n\n    return doSemanticSearchWithContext(lbug.executeQuery, query, k, hops);\n  },\n\n  /**\n   * Perform hybrid search combining BM25 (keyword) and semantic (embedding) search\n   * Uses Reciprocal Rank Fusion (RRF) to merge results\n   * \n   * @param query - Search query\n   * @param k - Number of results to return (default: 10)\n   * @returns Hybrid search results with RRF scores\n   */\n  async hybridSearch(\n    query: string,\n    k: number = 10\n  ): Promise<HybridSearchResult[]> {\n    if (!isBM25Ready()) {\n      throw new Error('Search index not ready. Please load a repository first.');\n    }\n    \n    // Get BM25 results (always available after ingestion)\n    const bm25Results = searchBM25(query, k * 3);  // Get more for better RRF merge\n    \n    // Get semantic results if embeddings are ready\n    let semanticResults: SemanticSearchResult[] = [];\n    if (isEmbeddingComplete) {\n      try {\n        const lbug = await getLbugAdapter();\n        if (lbug.isLbugReady()) {\n          semanticResults = await doSemanticSearch(lbug.executeQuery, query, k * 3, 0.5);\n        }\n      } catch {\n        // Semantic search failed, continue with BM25 only\n      }\n    }\n    \n    // Merge with RRF\n    return mergeWithRRF(bm25Results, semanticResults, k);\n  },\n\n  /**\n   * Check if BM25 search index is ready\n   */\n  isBM25Ready(): boolean {\n    return isBM25Ready();\n  },\n\n  /**\n   * Get BM25 index statistics\n   */\n  getBM25Stats(): { documentCount: number; termCount: number } {\n    return getBM25Stats();\n  },\n\n  /**\n   * Check if the embedding model is loaded and ready\n   */\n  isEmbeddingModelReady(): boolean {\n    return isEmbedderReady();\n  },\n\n  /**\n   * Check if embeddings are fully generated and indexed\n   */\n  isEmbeddingComplete(): boolean {\n    
return isEmbeddingComplete;\n  },\n\n  /**\n   * Get current embedding progress\n   */\n  getEmbeddingProgress(): EmbeddingProgress | null {\n    return embeddingProgress;\n  },\n\n  /**\n   * Cleanup embedding model resources\n   */\n  async disposeEmbeddingModel(): Promise<void> {\n    await disposeEmbedder();\n    isEmbeddingComplete = false;\n    embeddingProgress = null;\n  },\n\n  /**\n   * Test if LadybugDB supports array parameters in prepared statements\n   * This is a diagnostic function\n   */\n  async testArrayParams(): Promise<{ success: boolean; error?: string }> {\n    const lbug = await getLbugAdapter();\n    if (!lbug.isLbugReady()) {\n      return { success: false, error: 'Database not ready' };\n    }\n    return lbug.testArrayParams();\n  },\n\n  // ============================================================\n  // Graph RAG Agent Methods\n  // ============================================================\n\n  /**\n   * Initialize the Graph RAG agent with a provider configuration\n   * Must be called before using chat methods\n   * @param config - Provider configuration (Azure OpenAI or Gemini)\n   * @param projectName - Name of the loaded project/repository\n   */\n  async initializeAgent(config: ProviderConfig, projectName?: string): Promise<{ success: boolean; error?: string }> {\n    try {\n      const lbug = await getLbugAdapter();\n      if (!lbug.isLbugReady()) {\n        return { success: false, error: 'Database not ready. Please load a repository first.' 
};\n      }\n\n      // Create semantic search wrappers that handle embedding state\n      const semanticSearchWrapper = async (query: string, k?: number, maxDistance?: number) => {\n        if (!isEmbeddingComplete) {\n          throw new Error('Embeddings not ready');\n        }\n        return doSemanticSearch(lbug.executeQuery, query, k, maxDistance);\n      };\n\n      const semanticSearchWithContextWrapper = async (query: string, k?: number, hops?: number) => {\n        if (!isEmbeddingComplete) {\n          throw new Error('Embeddings not ready');\n        }\n        return doSemanticSearchWithContext(lbug.executeQuery, query, k, hops);\n      };\n\n      // Hybrid search wrapper - combines BM25 + semantic\n      const hybridSearchWrapper = async (query: string, k?: number) => {\n        // Get BM25 results (always available after ingestion)\n        const bm25Results = searchBM25(query, (k ?? 10) * 3);\n\n        // Get semantic results if embeddings are ready\n        let semanticResults: any[] = [];\n        if (isEmbeddingComplete) {\n          try {\n            semanticResults = await doSemanticSearch(lbug.executeQuery, query, (k ?? 10) * 3, 0.5);\n          } catch {\n            // Semantic search failed, continue with BM25 only\n          }\n        }\n\n        // Merge with RRF\n        return mergeWithRRF(bm25Results, semanticResults, k ?? 
10);\n      };\n\n      // Use provided projectName, or fallback to 'project' if not provided\n      const resolvedProjectName = projectName || 'project';\n      if (import.meta.env.DEV) {\n        console.log('📛 Project name received:', { provided: projectName, resolved: resolvedProjectName });\n      }\n      \n      let codebaseContext;\n      try {\n        codebaseContext = await buildCodebaseContext(lbug.executeQuery, resolvedProjectName);\n        if (import.meta.env.DEV) {\n          console.log('📊 Codebase context built:', {\n            files: codebaseContext.stats.fileCount,\n            functions: codebaseContext.stats.functionCount,\n            hotspots: codebaseContext.hotspots.length,\n          });\n        }\n      } catch (err) {\n        console.warn('Failed to build codebase context, proceeding without:', err);\n      }\n\n      currentAgent = createGraphRAGAgent(\n        config,\n        lbug.executeQuery,\n        semanticSearchWrapper,\n        semanticSearchWithContextWrapper,\n        hybridSearchWrapper,\n        () => isEmbeddingComplete,\n        () => isBM25Ready(),\n        storedFileContents,\n        codebaseContext\n      );\n      currentProviderConfig = config;\n\n      if (import.meta.env.DEV) {\n        console.log('🤖 Graph RAG Agent initialized with provider:', config.provider);\n      }\n\n      return { success: true };\n    } catch (error) {\n      const message = error instanceof Error ? 
error.message : String(error);\n      if (import.meta.env.DEV) {\n        console.error('❌ Agent initialization failed:', error);\n      }\n      return { success: false, error: message };\n    }\n  },\n\n  /**\n   * Initialize the Graph RAG agent in backend mode (HTTP-backed tools).\n   * Uses HTTP wrappers instead of local LadybugDB for all tool queries.\n   * @param config - Provider configuration for the LLM\n   * @param backendUrl - Base URL of the gitnexus serve backend\n   * @param repoName - Repository name on the backend\n   * @param fileContentsEntries - File contents as [path, content][] (Comlink can't transfer Maps)\n   * @param projectName - Display name for the project\n   */\n  async initializeBackendAgent(\n    config: ProviderConfig,\n    backendUrl: string,\n    repoName: string,\n    fileContentsEntries: [string, string][],\n    projectName?: string,\n  ): Promise<{ success: boolean; error?: string }> {\n    try {\n      // Rebuild Map from serializable entries (Comlink can't transfer Maps)\n      const contents = new Map<string, string>(fileContentsEntries);\n      storedFileContents = contents;\n\n      // Create HTTP-based tool wrappers\n      const executeQuery = createHttpExecuteQuery(backendUrl, repoName);\n      const hybridSearch = createHttpHybridSearch(backendUrl, repoName);\n\n      // Build codebase context (uses Cypher queries — works via HTTP)\n      let codebaseContext: CodebaseContext | undefined;\n      try {\n        codebaseContext = await buildCodebaseContext(executeQuery, projectName || repoName);\n      } catch {\n        // Non-fatal — agent works without context\n      }\n\n      // Create agent with HTTP-backed tools.\n      // hybridSearch calls /api/search which runs full BM25 + semantic + RRF on the server.\n      // isEmbeddingReady is false — no local embedding model is loaded in backend mode.\n      // isBM25Ready is true — BM25 is available via the server's hybrid search.\n      currentAgent = 
createGraphRAGAgent(\n        config,\n        executeQuery,          // Cypher via HTTP\n        hybridSearch,          // semanticSearch → server hybrid search\n        hybridSearch,          // semanticSearchWithContext → same\n        hybridSearch,          // hybridSearch → server hybrid search\n        () => false,           // isEmbeddingReady → no local embedder\n        () => true,            // isBM25Ready → available via server\n        contents,              // fileContents Map\n        codebaseContext,\n      );\n\n      currentProviderConfig = config;\n\n      if (import.meta.env.DEV) {\n        console.log('🤖 Backend agent initialized with provider:', config.provider);\n      }\n\n      return { success: true };\n    } catch (err: any) {\n      if (import.meta.env.DEV) {\n        console.error('❌ Backend agent initialization failed:', err);\n      }\n      return { success: false, error: err.message || 'Failed to initialize backend agent' };\n    }\n  },\n\n  /**\n   * Check if the agent is initialized\n   */\n  isAgentReady(): boolean {\n    return currentAgent !== null;\n  },\n\n  /**\n   * Get current provider info\n   */\n  getAgentProvider(): { provider: string; model: string } | null {\n    if (!currentProviderConfig) return null;\n    return {\n      provider: currentProviderConfig.provider,\n      model: currentProviderConfig.model,\n    };\n  },\n\n  /**\n   * Chat with the Graph RAG agent (streaming)\n   * Sends response chunks via the onChunk callback\n   * @param messages - Conversation history\n   * @param onChunk - Proxied callback for streaming chunks (runs on main thread)\n   */\n  async chatStream(\n    messages: AgentMessage[],\n    onChunk: (chunk: AgentStreamChunk) => void\n  ): Promise<void> {\n    if (!currentAgent) {\n      onChunk({ type: 'error', error: 'Agent not initialized. Please configure an LLM provider first.' 
});\n      return;\n    }\n\n    chatCancelled = false;\n\n    try {\n      for await (const chunk of streamAgentResponse(currentAgent, messages)) {\n        if (chatCancelled) {\n          onChunk({ type: 'done' });\n          break;\n        }\n        onChunk(chunk);\n      }\n    } catch (error) {\n      if (chatCancelled) {\n        // Swallow errors from cancellation\n        onChunk({ type: 'done' });\n        return;\n      }\n      const message = error instanceof Error ? error.message : String(error);\n      onChunk({ type: 'error', error: message });\n    }\n  },\n\n  /**\n   * Stop the current chat stream\n   */\n  stopChat(): void {\n    chatCancelled = true;\n  },\n\n  /**\n   * Dispose of the current agent\n   */\n  disposeAgent(): void {\n    currentAgent = null;\n    currentProviderConfig = null;\n  },\n\n  /**\n   * Enrich community clusters using LLM\n   */\n  async enrichCommunities(\n    providerConfig: ProviderConfig,\n    onProgress: (current: number, total: number) => void\n  ): Promise<{ enrichments: Record<string, ClusterEnrichment>, tokensUsed: number }> {\n    if (!currentGraphResult) {\n      throw new Error('No graph loaded. 
Please ingest a repository first.');\n    }\n\n    const { graph } = currentGraphResult;\n    \n    // Filter for community nodes\n    const communityNodes = graph.nodes\n      .filter(n => n.label === 'Community')\n      .map(n => ({\n        id: n.id,\n        label: 'Community',\n        heuristicLabel: n.properties.heuristicLabel,\n        cohesion: n.properties.cohesion,\n        symbolCount: n.properties.symbolCount\n      } as CommunityNode));\n\n    if (communityNodes.length === 0) {\n      return { enrichments: {}, tokensUsed: 0 };\n    }\n\n    // Build member map: CommunityID -> Member Info\n    const memberMap = new Map<string, ClusterMemberInfo[]>();\n    \n    // Initialize map\n    communityNodes.forEach(c => memberMap.set(c.id, []));\n    \n    // Find all MEMBER_OF edges\n    graph.relationships.forEach(rel => {\n      if (rel.type === 'MEMBER_OF') {\n        const communityId = rel.targetId;\n        const memberId = rel.sourceId; // MEMBER_OF goes Member -> Community\n        \n        if (memberMap.has(communityId)) {\n          // Find member node details\n          const memberNode = graph.nodes.find(n => n.id === memberId);\n          if (memberNode) {\n            memberMap.get(communityId)?.push({\n              name: memberNode.properties.name,\n              filePath: memberNode.properties.filePath,\n              type: memberNode.label\n            });\n          }\n        }\n      }\n    });\n\n    // Create LLM client adapter for LangChain model\n    const chatModel = createChatModel(providerConfig);\n    const llmClient = {\n      generate: async (prompt: string): Promise<string> => {\n        const response = await chatModel.invoke([\n          new SystemMessage('You are a helpful code analysis assistant.'),\n          { role: 'user', content: prompt }\n        ]);\n        return response.content as string;\n      }\n    };\n\n    // Run enrichment\n    const { enrichments, tokensUsed } = await enrichClustersBatch(\n      
communityNodes,\n      memberMap,\n      llmClient,\n      5, // Batch size\n      onProgress\n    );\n\n    if (import.meta.env.DEV) {\n      console.log(`✨ Enriched ${enrichments.size} clusters using ~${Math.round(tokensUsed)} tokens`);\n    }\n\n    // Update graph nodes with enrichment data\n    graph.nodes.forEach(node => {\n      if (node.label === 'Community' && enrichments.has(node.id)) {\n        const enrichment = enrichments.get(node.id)!;\n        node.properties.name = enrichment.name; // Update display label\n        node.properties.keywords = enrichment.keywords;\n        node.properties.description = enrichment.description;\n        node.properties.enrichedBy = 'llm';\n      }\n    });\n\n    // Update LadybugDB with new data\n    try {\n      const lbug = await getLbugAdapter();\n        \n      onProgress(enrichments.size, enrichments.size); // Done\n      \n      // Update one by one via Cypher (simplest for now)\n      for (const [id, enrichment] of enrichments.entries()) {\n         // Escape strings for Cypher - replace backslash first, then quotes\n         const escapeCypher = (str: string) => str.replace(/\\\\/g, '\\\\\\\\').replace(/\"/g, '\\\\\"');\n         \n         const keywordsStr = JSON.stringify(enrichment.keywords);\n         const descStr = escapeCypher(enrichment.description);\n         const nameStr = escapeCypher(enrichment.name);\n         const escapedId = escapeCypher(id);\n         \n         const query = `\n           MATCH (c:Community {id: \"${escapedId}\"})\n           SET c.label = \"${nameStr}\", \n               c.keywords = ${keywordsStr}, \n               c.description = \"${descStr}\",\n               c.enrichedBy = \"llm\"\n         `;\n         \n         await lbug.executeQuery(query);\n      }\n\n    } catch (err) {\n      console.error('Failed to update LadybugDB with enrichment:', err);\n    }\n    \n    // Convert Map to Record for serialization\n    const enrichmentsRecord: Record<string, 
ClusterEnrichment> = {};\n    for (const [id, val] of enrichments.entries()) {\n      enrichmentsRecord[id] = val;\n    }\n     \n    return { enrichments: enrichmentsRecord, tokensUsed };\n  \n  },\n};\n\n// Expose the worker API to the main thread\nComlink.expose(workerApi);\n\n// TypeScript type for the exposed API (used by the hook)\nexport type IngestionWorkerApi = typeof workerApi;\n\n"
  },
  {
    "path": "gitnexus-web/tsconfig.app.json",
    "content": "{\n  \"compilerOptions\": {\n    \"target\": \"ESNext\",\n    \"useDefineForClassFields\": true,\n    \"module\": \"ESNext\",\n    \"moduleResolution\": \"Bundler\",\n    \"allowImportingTsExtensions\": true,\n    \"resolveJsonModule\": true,\n    \"isolatedModules\": true,\n    \"noEmit\": true,\n    \"jsx\": \"react-jsx\",\n    \"strict\": true,\n    \"esModuleInterop\": true,\n    \"allowSyntheticDefaultImports\": true,\n    \"forceConsistentCasingInFileNames\": true,\n    \"skipLibCheck\": true,\n    \"baseUrl\": \"./\",\n    \"paths\": {\n      \"@/*\": [\"./src/*\"]\n    },\n    \"types\": [\"vite/client\"]\n  },\n  \"include\": [\"src\"]\n}\n"
  },
  {
    "path": "gitnexus-web/tsconfig.json",
    "content": "{\n  \"files\": [],\n  \"references\": [\n    { \"path\": \"./tsconfig.app.json\" },\n    { \"path\": \"./tsconfig.node.json\" }\n  ]\n}\n"
  },
  {
    "path": "gitnexus-web/tsconfig.node.json",
    "content": "{\n  \"compilerOptions\": {\n    \"composite\": true,\n    \"skipLibCheck\": true,\n    \"module\": \"ESNext\",\n    \"moduleResolution\": \"Bundler\",\n    \"allowSyntheticDefaultImports\": true,\n    \"esModuleInterop\": true,\n    \"noEmit\": true,\n    \"types\": [\"node\"]\n  },\n  \"include\": [\"vite.config.ts\"]\n}\n"
  },
  {
    "path": "gitnexus-web/vercel.json",
    "content": "{\n  \"headers\": [\n    {\n      \"source\": \"/(.*)\",\n      \"headers\": [\n        {\n          \"key\": \"Cross-Origin-Opener-Policy\",\n          \"value\": \"same-origin\"\n        },\n        {\n          \"key\": \"Cross-Origin-Embedder-Policy\",\n          \"value\": \"require-corp\"\n        }\n      ]\n    }\n  ]\n}\n"
  },
  {
    "path": "gitnexus-web/vite.config.ts",
    "content": "import { defineConfig } from 'vite';\nimport react from '@vitejs/plugin-react';\nimport tailwindcss from '@tailwindcss/vite';\nimport wasm from 'vite-plugin-wasm';\nimport topLevelAwait from 'vite-plugin-top-level-await';\nimport { viteStaticCopy } from 'vite-plugin-static-copy';\nimport path from 'path';\n\nexport default defineConfig({\n  plugins: [\n    react(),\n    tailwindcss(),\n    wasm(),\n    topLevelAwait(),\n    // Copy lbug-wasm worker file to assets folder for production\n    viteStaticCopy({\n      targets: [\n        {\n          src: 'node_modules/@ladybugdb/wasm-core/lbug_wasm_worker.js',\n          dest: 'assets'\n        }\n      ]\n    }),\n  ],\n  resolve: {\n    alias: {\n      '@': path.resolve(__dirname, './src'),\n      // Fix for Rollup failing to resolve this deep import from @langchain/anthropic\n      '@anthropic-ai/sdk/lib/transform-json-schema': path.resolve(__dirname, 'node_modules/@anthropic-ai/sdk/lib/transform-json-schema.mjs'),\n      // Fix for mermaid d3-color prototype crash on Vercel (known issue with mermaid 10.9.0+ and Vite)\n      'mermaid': path.resolve(__dirname, 'node_modules/mermaid/dist/mermaid.esm.min.mjs'),\n    },\n  },\n  // Polyfill Buffer for isomorphic-git (Node.js API needed in browser)\n  define: {\n    global: 'globalThis',\n  },\n  // Optimize deps - exclude lbug-wasm from pre-bundling (it has WASM files)\n  optimizeDeps: {\n    exclude: ['@ladybugdb/wasm-core'],\n    include: ['buffer'],\n  },\n  // Required for LadybugDB WASM (SharedArrayBuffer needs Cross-Origin Isolation)\n  server: {\n    headers: {\n      'Cross-Origin-Opener-Policy': 'same-origin',\n      'Cross-Origin-Embedder-Policy': 'require-corp',\n    },\n    // Allow serving files from node_modules\n    fs: {\n      allow: ['..'],\n    },\n  },\n  // Also set for preview/production builds\n  preview: {\n    headers: {\n      'Cross-Origin-Opener-Policy': 'same-origin',\n      'Cross-Origin-Embedder-Policy': 'require-corp',\n   
 },\n  },\n  // Worker configuration\n  worker: {\n    format: 'es',\n    plugins: () => [wasm(), topLevelAwait()],\n  },\n});\n"
  },
  {
    "path": "skills.mdm",
    "content": ""
  },
  {
    "path": "type-resolution-roadmap.md",
    "content": "# Type Resolution Roadmap\n\nThis roadmap describes the evolution of GitNexus's type-resolution layer from a receiver-disambiguation aid into a production-grade static-analysis foundation.\n\n---\n\n## Principles\n\n- **stay conservative** — prefer missing a binding over introducing a misleading one\n- **prefer explainable inference over clever but brittle inference**\n- **limit performance overhead during ingestion**\n- **keep per-language extractors explicit rather than over-generic**\n- **separate \"better receiver resolution\" from \"compiler-grade typing\"**\n\nThe goal is not to build a compiler. The goal is to support high-value static analysis for call graphs, impact analysis, context gathering, and downstream graph features.\n\n---\n\n## Delivered Phases\n\n### Phase 7: Cross-Scope and Return-Aware Propagation ✅\n\n**Shipped in** `feat/phase7-type-resolution`.\n\n- `ReturnTypeLookup` interface threading return-type knowledge into TypeEnv\n- Iterable call-expression support across 7 languages (Go, TS, Python, Rust, Java, Kotlin, C#)\n- PHP class-level `@var` property typing for `$this->property` foreach (Strategy C)\n- `pendingCallResults` infrastructure (Tier 2b loop + `PendingAssignment` union) — activated by Phase 9\n\n### Phase 8: Field and Property Type Resolution ✅\n\n**Shipped in** `feat/phase8-field-property-type-resolution`.\n\n- SymbolTable `fieldByOwner` index — O(1) field lookup by `ownerNodeId\\0fieldName`\n- `HAS_PROPERTY` edge type + `declaredType` on Property symbols\n- Deep chain resolution up to 3 levels (`user.address.city.getName()`) across 10 languages\n- Mixed field+method chains via unified `MixedChainStep[]` (`svc.getUser().address.save()`)\n- Type-preserving stdlib passthroughs (`unwrap`, `clone`, `expect`, etc.)\n- `ACCESSES` edge type — read/write field access tracking across 12 languages\n- C++ `field_declaration` capture, `field_expression` receiver support\n- Rust unit struct instantiation, Ruby YARD `@return` 
for `attr_accessor`\n\n### Phase 9 + 9C: Return-Type-Aware Variable Binding ✅\n\n**Shipped in** `feat/phase9-call-result-binding` (PR #379).\n\n- Simple call-result binding: `const user = getUser(); user.save()` across 11 languages\n- Unified fixpoint loop replacing sequential Tier 2b/2a — handles 4 binding kinds (`callResult`, `copy`, `fieldAccess`, `methodCallResult`) at arbitrary depth\n- Field access binding: `const addr = user.address` resolves via `lookupFieldByOwner` + `declaredType`\n- Method-call-result binding: `const city = addr.getCity()` resolves via `lookupFuzzyCallable` filtered by `ownerId`\n- Fixpoint iterates until stable (max 10 iterations), enabling chains like `getUser() → .address → .getCity() → city.save()`\n- Reverse-order copy chains now resolve (`const b = a; const a: User = x` → both resolve)\n\n### Milestone D — Completeness ✅\n\n**Shipped in** `feat/type-resolution-milestone-d` (PR #387). Consolidated original Phases 10–13 into 3 balanced phases.\n\n#### Phase A: Fixpoint Completeness ✅\n\n- Post-fixpoint for-loop replay (ex-9B): `pendingForLoops` collection + replay after fixpoint resolves iterable types\n- Object destructuring via `fieldAccess` items (TS/JS `object_pattern`, Rust `struct_pattern`) — no new `destructure` PendingAssignment variant needed\n- Extracted `resolveFixpointBindings()` helper with exhaustive switch + `classDefCache` memoization\n\n#### Phase B: Inheritance & Receivers ✅\n\n- `BuildTypeEnvOptions` interface replacing positional params for `buildTypeEnv`\n- Heritage pre-pass constructing `parentMap` from tree-sitter query matches (not graph edges — heritage-processor runs in parallel)\n- MRO-aware `walkParentChain()` (depth 5, cycle-safe BFS) for `resolveFieldType` and `resolveMethodReturnType`\n- `this`/`self`/`$this`/`Me` receiver substitution via `substituteThisReceiver` hook\n- Go `inc_statement`/`dec_statement` write-access queries\n\n#### Phase C: Branch-Sensitive Narrowing ✅\n\n- Null-check narrowing (`!= 
null`, `!== undefined`, `is not null`) via position-indexed `patternOverrides`\n- Supported for TS, Kotlin, C# — renamed `PATTERN_BRANCH_TYPES` → `NARROWING_BRANCH_TYPES`\n- Bug fix: Kotlin narrowing required 3 fixes in `jvm.ts` (AST node type `equality_expression`, anonymous `null` node, `nullable_type` parameter fallback)\n\n#### Deferred from Milestone D\n\n- **Type predicates (13A):** Cross-function analysis for niche TS `x is User` feature — deferred\n- **Swift parity (11D):** tree-sitter-swift Node 22 issues — all Swift work consolidated to Phase S\n- **Positional destructuring (12C):** Python/Kotlin/C#/C++ tuple-position-to-field mapping — deferred\n- **Discriminated union narrowing (13C):** Needs tagged union metadata not in SymbolTable — deferred\n\n#### Integration Test Coverage\n\n17 fixture directories, 23 describe blocks, 705 lines of test code covering all 11 languages:\n- Grandparent MRO (depth-2 C→B→A): TS, JS, Kotlin, C#, C++, Java, PHP, Python, Ruby\n- Object destructuring: TS, JS\n- Struct destructuring: Rust\n- Post-fixpoint for-loop replay: TS, JS\n- Go inc/dec write access\n- Null-check narrowing: TS, C#, Kotlin\n\n---\n\n## Open Phases\n\n### Phase P: Polymorphism & Overloading\n\n**Plan:** `docs/plans/2026-03-19-feat-polymorphism-overloading-type-resolution-plan.md`\n\nFive incremental phases:\n1. **Parameter type metadata** — extend `SymbolDefinition` with `parameterTypes: string[]` extracted during parsing — **DELIVERED**\n2. **Overload disambiguation** — filter overloaded methods by argument literal types at call sites — **DELIVERED** (Java, Kotlin, C#, C++, TypeScript)\n3. **Constructor-visible virtual dispatch** — `Base b = new Derived(); b.method()` resolves to `Derived#method` when constructor type is a known subclass — **DELIVERED** (Java, C#, TS, C++, Kotlin via `detectConstructorType` hook, C++ smart pointers via `make_shared`/`make_unique`)\n4. 
**Optional parameter arity resolution** — calls with omitted optional/default args now resolve via `requiredParameterCount` range check — **DELIVERED** (TS, Python, Kotlin, C#, C++, PHP, Ruby)\n5. **Covariant return type awareness** — prefer child's return type over inherited definition\n\nLanguages benefiting: Java, Kotlin, C#, C++, TypeScript (overloading). All OOP languages (virtual dispatch).\n\n**Impact: High | Effort: High** (P.1–P.4 delivered; P.5 covariant return types remains open)\n\n---\n\n### Phase S: Swift Parity\n\n**Blocked on** tree-sitter-swift Node 22 compatibility.\n\n- For-loop element binding (from Phase 10)\n- Assignment chains: copy, callResult, fieldAccess, methodCallResult (from Phase 11D)\n- `guard let` narrowing (from Phase 13B) — uses scopeEnv path, not `patternOverrides`\n\n**Impact: Medium | Effort: Medium**\n\n---\n\n### Phase 14: Cross-File Binding Propagation\n\n**Problem:** `buildTypeEnv` is per-file. Inferred types don't cross file boundaries.\n\n```typescript\n// file-a.ts — fixpoint resolves: config → Config\nexport const config = getConfig();\n\n// file-b.ts — config has no type\nimport { config } from './file-a';\nconfig.validate();  // missed\n```\n\n**Approach: Export-type index.** After each file's fixpoint, export resolved bindings for exported symbols into `ExportedBindings: Map<filePath, Map<symbolName, typeName>>`. 
Subsequent files seed scopeEnv from this index for imported symbols.\n\n**Details:**\n- Process files in topological import order (import-processor already builds the dependency graph)\n- Re-exports: follow import chain transitively in `ExportedBindings`\n- Barrel files (`index.ts`): chain of re-exports — same mechanism\n- Default exports: keyed as `\"default\"` in the map, mapped to local name at import site\n- Dynamic imports (`import()`, conditional `require()`): excluded — runtime-only edges\n- Circular imports: files in a cycle processed in arbitrary order within the cycle; cross-cycle bindings don't propagate (conservative)\n- Parallelism preserved within topological levels\n\n**Why this is last:** Every earlier phase makes the per-file fixpoint stronger, reducing cases where cross-file propagation is needed. This is also the highest-risk architectural change.\n\n**Risks:** Topological ordering correctness (mitigated by reusing import-processor's existing graph). Re-export chain depth (bounded by import depth, typically 2-3). Memory for `ExportedBindings` (~100K entries for 10K-file monorepo — negligible).\n\n**Impact: High | Effort: High**\n\n---\n\n## Dependency Graph\n\n```\nMilestone D (Phases A, B, C) ✅ ──┐\n                                   ├──→ Phase 14 (cross-file)\nPhase P (polymorphism) ───────────┤\n                                   │\nPhase S (Swift parity) ───────────┘\n\nPhase P.1–P.4 are delivered. 
P.5 (covariant return types) remains open.\nPhase P and Phase S are independent of each other and Phase 14.\nPhase 14 benefits from Phase P (better per-file resolution = fewer cross-file gaps).\n```\n\n---\n\n## Language-Specific Gaps (remaining)\n\n### Swift\n- For-loop element binding → Phase S\n- Assignment chains (copy, callResult, fieldAccess, methodCallResult) → Phase S\n- `guard let` narrowing → Phase S\n\n### Kotlin\n- ~~Virtual dispatch: `Dog()` uses `call_expression` (no `new` keyword)~~ — **RESOLVED** via `detectConstructorType` hook\n\n### All languages\n- Cross-file binding propagation → Phase 14\n\n---\n\n## Milestones\n\n### Milestone A — Inference Expansion ✅ (Phase 7)\n\nLoop inference, `ReturnTypeLookup`, PHP Strategy C.\n\n### Milestone B — Structural Member Typing ✅ (Phase 8)\n\nField/property maps, deep chains, mixed chains, stdlib passthroughs.\n\n### Milestone C — Static-Analysis Foundation ✅ (Phase 9 + 9C)\n\nUnified fixpoint loop, call-result binding, field access binding, method-call-result binding, arbitrary-depth chain propagation.\n\n### Milestone D — Completeness ✅ (Phases A, B, C)\n\nConsolidated Phases 10–13 into 3 balanced phases. Loop-fixpoint bridge, MRO-aware inheritance walking, `this`/`self` resolution, object/struct destructuring, null-check narrowing. Kotlin null-check bug fix. Full 11-language integration test coverage.\n\n### Milestone E — Cross-Boundary ← **next** (Phase 14)\n\nExport-type index, cross-file binding propagation.\n\n### Milestone P — Polymorphism & Overloading (Phase P)\n\nParameter type metadata, overload disambiguation, constructor-visible virtual dispatch (including Kotlin `detectConstructorType` and C++ smart pointer factories), optional parameter arity resolution, covariant return types (open).\n\n### Milestone S — Swift Parity (Phase S)\n\nFor-loop binding, assignment chains, `guard let` narrowing. 
Blocked on tree-sitter-swift Node 22.\n\n---\n\n## Open Design Questions\n\n| # | Question | Status |\n|---|----------|--------|\n| 1 | Where should field-type metadata live? | ✅ Resolved: `fieldByOwner` index in SymbolTable |\n| 2 | How should ambiguity be represented? | ✅ Resolved: keep `undefined`. Conservative approach proven through 9 phases. |\n| 3 | How much receiver context for return types? | ✅ Resolved: Phase 9C `resolveMethodReturnType` filters by `ownerId`. |\n| 4 | How much branch sensitivity? | ✅ Resolved: type predicates + null checks only. No control-flow graph. (Phase 13) |\n| 5 | Field typing and chain typing — one phase or two? | ✅ Resolved: incremental delivery within phases (Phase 8/8A precedent). |\n| 6 | Phase 9B vs Phase 10? | ✅ Resolved: Phase 10 supersedes 9B via post-fixpoint replay. |\n\n---\n\n## What \"Production-Grade\" Means Here\n\nFor GitNexus, production-grade does **not** mean replacing a language compiler. The target:\n\n- Strong receiver-constrained call resolution across common language idioms\n- Reliable handling of typed loops, constructors, and common patterns\n- Return-type propagation for service/repository code\n- Field/property knowledge for chained-member analysis\n- Inheritance-aware lookups\n- Conservative behavior under ambiguity\n- Predictable performance during indexing\n\nThat supports: better call graphs, more accurate impact analysis, stronger AI context assembly, more trustworthy graph traversal.\n\n---\n\n## Summary\n\n**Complete:** Phases 7, 8, 9, 9C, Milestone D (A, B, C) — explicit types, constructor inference, loop inference, field/property resolution, deep chains, mixed chains, stdlib passthroughs, comment-based types, unified fixpoint with 4 binding kinds, arbitrary-depth chain propagation, MRO-aware inheritance walking, this/self resolution, object/struct destructuring, null-check narrowing — across 11 languages with full integration test coverage.\n\n**Next:** Phase 14 (cross-file binding propagation) 
— the architectural capstone. Phase S (Swift parity) is independent and unblocked once tree-sitter-swift Node 22 is resolved.\n"
  },
  {
    "path": "type-resolution-system.md",
    "content": "# Type Resolution System\n\nGitNexus's type resolution system maps variables to likely declared types across the supported languages so the ingestion pipeline can perform **receiver-constrained call resolution**.\n\nWhen the code contains a call such as `user.save()`, the resolver tries to determine that `user` is a `User`, allowing call resolution to prefer `User#save` over unrelated methods such as `Repo#save`.\n\nThis system is designed to be:\n\n- **Conservative** — it prefers missing a binding over introducing a misleading one\n- **Walk + fixpoint** — bindings are collected during a single AST walk, then a unified fixpoint loop iterates over pending assignments (copy, callResult, fieldAccess, methodCallResult) until no new bindings are produced\n- **Scope-aware** — function-local bindings are isolated from file-level bindings\n- **Per-file** — the environment is built for one file at a time, though it may consult the global `SymbolTable` for validation in specific cases\n\nIt is **not** a full compiler type checker. Its job is to recover enough type information to improve call-edge accuracy during ingestion.\n\n---\n\n## Purpose in the Pipeline\n\nType resolution sits between parsing and call resolution.\n\n```text\nparse-worker.ts\n     │\n     ▼\nbuildTypeEnv(tree, language, symbolTable?)\n     │\n     ├──► TypeEnvironment.lookup(varName, callNode)\n     │         │\n     │         ▼\n     │    call-processor.ts\n     │    - resolves receiver type for method calls\n     │    - filters candidates by receiver match\n     │    - verifies deferred constructor / initializer bindings\n     │\n     └──► discarded after file processing\n```\n\nThe `TypeEnvironment` is built once per file. 
`call-processor.ts` then uses `lookup()` to determine receiver types and narrow candidate symbols from the `SymbolTable`.\n\n---\n\n## Architecture\n\n```text\n                                 ┌──────────────────────┐\n                                 │     type-env.ts      │\n                                 │                      │\n                                 │  buildTypeEnv()      │\n                                 │  - Single AST walk   │\n                                 │  - Scope tracking    │\n                                 │  - Tier orchestration│\n                                 └──────────┬───────────┘\n                                            │ dispatches to\n                    ┌───────────────────────┬┴┬────────────────────────┐\n                    │                       │ │                        │\n          ┌─────────▼──────────┐  ┌─────────▼─▼─────────┐  ┌──────────▼─────────┐\n          │   shared.ts        │  │  <language>.ts      │  │    types.ts        │\n          │                    │  │                      │  │                    │\n          │  Container table   │  │  Per-language        │  │  Extractor         │\n          │  Type helpers      │  │  extractors          │  │  interface defs    │\n          │  Generic helpers   │  │  (shared + per-lang) │  │                    │\n          └────────────────────┘  └──────────────────────┘  └────────────────────┘\n```\n\n### Main files\n\n| File | Purpose |\n|------|---------|\n| `type-env.ts` | Core engine. Walks the AST once, tracks scopes, collects bindings, and exposes `buildTypeEnv()` plus the `TypeEnvironment` interface. |\n| `types.ts` | TypeScript interfaces for extractor hooks such as `TypeBindingExtractor`, `ForLoopExtractor`, and `PatternBindingExtractor`. |\n| `shared.ts` | Language-agnostic helpers such as `extractSimpleTypeName`, `extractElementTypeFromString`, `resolveIterableElementType`, `CONTAINER_DESCRIPTORS`, and `TYPED_PARAMETER_TYPES`. 
|\n| `index.ts` | Dispatch map from `SupportedLanguages` to `LanguageTypeConfig`. |\n| `typescript.ts` | TypeScript and JavaScript extractors, including JSDoc support. |\n| `jvm.ts` | Java and Kotlin extractors. |\n| `csharp.ts` | C# extractors. |\n| `go.ts` | Go extractors, including range semantics. |\n| `rust.ts` | Rust extractors, including `if let`, match-related handling, and `Self` resolution. |\n| `python.ts` | Python extractors, including `match` / `case` handling. |\n| `php.ts` | PHP extractors, including PHPDoc support. |\n| `ruby.ts` | Ruby extractors, including YARD support. |\n| `swift.ts` | Swift extractors. Currently the most minimal configuration. |\n| `c-cpp.ts` | Shared C / C++ extractors. |\n\n---\n\n## Supported Languages\n\nThe current type-resolution layer supports **13 languages**:\n\n- TypeScript\n- JavaScript\n- Python\n- Java\n- Kotlin\n- C#\n- Go\n- Rust\n- PHP\n- Ruby\n- Swift\n- C\n- C++\n\nNot all languages have the same level of coverage. Swift remains the most minimal. 
C and some C++ cases naturally benefit less from receiver typing than object-oriented languages.\n\n---\n\n## Design Constraints\n\nThe type resolution layer is intentionally narrower than a compiler-grade type system.\n\nIt does:\n\n- resolve variable types from declarations, parameters, initializers, loops, and selected pattern constructs\n- normalize common wrappers such as nullable types and generic containers\n- improve receiver matching during call resolution\n- verify some ambiguous initializer bindings against the `SymbolTable`\n\nIt does not:\n\n- perform full semantic type checking\n- run fixpoint inference\n- propagate inferred bindings across files as ordinary environment entries\n- guarantee resolution for every ambiguous construct\n\n---\n\n## TypeEnvironment Model\n\n`buildTypeEnv()` returns a `TypeEnvironment` that contains:\n\n- scoped bindings collected from the current file\n- deferred constructor / initializer binding candidates\n- lookup helpers used by call resolution\n- pattern override data for branch-local narrowing where supported\n\n### Scope model\n\nThe environment is scope-aware so identical variable names in different functions do not collide.\n\n```text\nFile scope ('')\n├── config → Config\n├── users → Map\n│\n├── processUsers@100\n│   ├── user → User\n│   └── alias → User\n│\n└── processRepos@200\n    └── repo → Repo\n```\n\n### Scope keys\n\n- `''` for file scope\n- `functionName@startIndex` for function-local scope\n\nThese scope keys are also used later when verifying deferred bindings in call processing, so any future change to scope-key format must stay consistent across both layers.\n\n---\n\n## Lookup Semantics\n\n`TypeEnvironment.lookup()` resolves types in this effective order:\n\n1. special receivers\n   - `this`, `self`, `$this` → enclosing class\n   - `super`, `base`, `parent` → parent class\n2. position-indexed pattern overrides\n3. function-local scope\n4. 
file-level scope\n\nSpecial receivers are handled as a dedicated fast path rather than ordinary lexical bindings.\n\n---\n\n## Resolution Tiers\n\nBindings are collected during the same AST walk. Higher-confidence sources win over weaker inference.\n\n### Tier 0: Explicit Type Annotations\n\nDirect extraction from AST type nodes.\n\n```typescript\n// TypeScript\nconst user: User = getUser()\n\n// Java\nUser user = getUser()\n\n// Go\nvar user User\n\n// Rust\nlet user: User = get_user()\n\n// Python\nuser: User = get_user()\n```\n\n`extractDeclaration()` reads the declaration type node and normalizes it through `extractSimpleTypeName()`.\n\nParameters are handled separately by `extractParameter()` using the same normalization logic. The shared `TYPED_PARAMETER_TYPES` set controls which AST node types are treated as typed parameters.\n\n### Tier 0b: For-Loop Element Type Resolution\n\nAlso referred to as **Tier 1c** in Phase 6 PR and test naming.\n\nFor-each style loops often introduce a variable with no explicit type. In those cases, the resolver derives the loop variable type from the iterable's container type.\n\n```csharp\nforeach (var user in users) { user.Save(); }\n\n// TypeScript\nfor (const user of users) { user.save(); }\n\n// Rust\nfor user in users { user.save(); }\n```\n\nThis is handled by `resolveIterableElementType()` through a three-step cascade:\n\n1. **Declaration type nodes**  \n   Uses raw type annotation nodes when available, including cases such as `User[]` or `List[User]`.\n\n2. **Scope environment string**  \n   Uses `extractElementTypeFromString()` to parse a stored type string.\n\n3. 
**AST walk fallback**  \n   Walks upward to enclosing declarations or parameters when needed.\n\n### Tier 0c: Pattern Binding\n\nPattern-matching constructs may introduce a new variable or temporarily narrow an existing one.\n\n```csharp\nif (obj is User user) { user.Save(); }\n\n// Java\nif (obj instanceof User user) { user.save(); }\n\n// Rust\nif let Some(user) = opt { user.save(); }\n\n// Python\nmatch obj:\n    case User() as user:\n        user.save()\n```\n\nBinding behavior depends on the language:\n\n- **first-writer-wins** is used by default\n- **position-indexed branch overrides** are used where branch-local narrowing must not leak between branches, most notably Kotlin\n\n### Tier 1: Initializer / Constructor Inference\n\nWhen there is no explicit annotation, the resolver can infer a type from the initializer.\n\n```typescript\nconst user = new User()\n\n// C#\nvar user = new User()\n\n// Kotlin\nval user = User()\n\n// Go\nuser := User{}\nptr := &User{}\nuser2 := new(User)\n\n// Ruby\nuser = User.new\n```\n\nSome languages can identify constructor-like syntax directly. Others need validation through the `SymbolTable`, because syntax alone cannot always distinguish `User()` from `getUser()`.\n\nIn those cases the system records an unverified binding candidate and later validates it against known class / struct symbols.\n\n### Tier 2: Assignment Chain Propagation\n\nBindings can propagate through simple identifier assignments.\n\n```typescript\nconst user: User = getUser()\nconst alias = user\nconst other = alias\n```\n\nThis is handled after the main walk through a unified fixpoint loop over all pending assignments (copy, callResult, fieldAccess, methodCallResult). 
The loop iterates until no new bindings are produced (max 10 iterations), enabling arbitrary-depth mixed chains and reverse-order resolution:\n\n```typescript\nconst b = a              // iteration 2: b → User (a now resolved)\nconst a: User = getUser()  // iteration 1: a → User\n```\n\nBoth `a` and `b` resolve correctly. The fixpoint also handles chains mixing field access and method calls:\n\n```typescript\nconst user = getUser()       // callResult → User\nconst addr = user.address    // fieldAccess → Address\nconst city = addr.getCity()  // methodCallResult → City\n```\n\n---\n\n## Container Type Descriptors\n\n`CONTAINER_DESCRIPTORS` defines the type-parameter semantics for common containers.\n\nThat allows the resolver to distinguish key-yielding methods from value-yielding methods instead of always assuming the last generic argument.\n\n```typescript\nfor (const key of map.keys()) { ... }    // key → string\nfor (const val of map.values()) { ... }  // val → User\n```\n\nUnknown containers fall back to heuristics, keeping the system conservative rather than fully semantic.\n\n### Examples of descriptor-driven behavior\n\n- `Map<K, V>` / `Dictionary<K, V>` / similar key-value containers\n- `List<T>` / `Array<T>` / `Vec<T>` / `Set<T>` / similar single-element containers\n- method-aware yield selection such as `.keys()`, `.values()`, `.keySet()`, `.Values`\n\n---\n\n## Comment-Based Types\n\nFor less strictly typed ecosystems, the resolver can fall back to documentation-based type information.\n\nSupported comment systems:\n\n- **JSDoc** for JavaScript / TypeScript\n- **PHPDoc** for PHP\n- **YARD** for Ruby\n\nThese are used conservatively and only when AST-level type information is missing or insufficient.\n\n---\n\n## SymbolTable Interaction\n\nAlthough the environment is built per file, it may consult the global `SymbolTable` in specific validation paths.\n\nThis is important for languages where constructor-like syntax is ambiguous. 
A binding candidate such as `val user = User()` may need confirmation that `User` is a class-like symbol rather than an ordinary function.\n\nThis means the system is still **per-file in binding construction**, but not completely isolated from project-wide symbol knowledge.\n\n---\n\n## Deferred Binding Verification in Call Processing\n\nA key detail is that some initializer bindings are not fully resolved inside `TypeEnv` itself.\n\n`call-processor.ts` later verifies deferred bindings and may infer receiver types from:\n\n- validated class / struct constructor candidates\n- uniquely resolved function or method calls that expose a usable return type\n\nSo return-type-aware receiver inference already exists in a constrained downstream form today. Phase 7.3 extended this by threading `ReturnTypeLookup` into `TypeEnv` via `ForLoopExtractorContext`, enabling for-loop call-expression iterables (e.g., `for (const u of getUsers())`) to resolve element types across TS/JS, Java, Kotlin, C#, Go, Rust, Python, and PHP. Phase 9 activated simple call-result binding (`var x = f()`) across all 11 supported languages (Swift and C excluded). Phase 9C replaced the sequential Tier 2b/2a with a unified fixpoint loop that handles four binding kinds — `callResult`, `copy`, `fieldAccess`, and `methodCallResult` — iterating until no new bindings are produced. 
This enables arbitrary-depth mixed chains like `const user = getUser(); const addr = user.address; const city = addr.getCity(); city.save()`.\n\n---\n\n## Language Feature Matrix\n\n| Feature | TS | JS | Java | Kotlin | C# | Go | Rust | Python | PHP | Ruby | Swift | C++ | C |\n|---------|:--:|:--:|:----:|:------:|:--:|:--:|:----:|:------:|:---:|:----:|:-----:|:---:|:-:|\n| Declarations | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |\n| Parameters | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |\n| Initializer / constructor inference | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |\n| Constructor binding scan | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |\n| For-loop element types | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes |\n| Pattern binding | Yes | Yes | Yes | Yes | No | Yes | Yes | No | No | No | No | No | No |\n| Assignment chains | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes |\n| Field/property type resolution | Yes | No† | Yes | Yes | Yes | Yes | Yes | Yes* | Yes | YARD | No | Yes | No‡ |\n| Comment-based types | JSDoc | JSDoc | No | No | No | No | No | No | PHPDoc | YARD | No | No | No |\n| Return type extraction | JSDoc | JSDoc | No | No | No | No | No | No | PHPDoc | YARD | No | No | No |\n| Call-result variable binding | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes¶ | No | Yes | No |\n| Field access binding | Yes | No† | Yes | Yes | Yes | Yes | Yes | No‖ | Yes | N/A | No | Yes | No |\n| Method-call-result binding | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes¶ | No | Yes | No |\n| Write access (ACCESSES write) | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes§ | Yes | Yes | Yes | No |\n| Parameter types extracted | Yes** | No | Yes | Yes | Yes | Yes | Yes | Partial†† | No | No | No | Yes | No |\n| Method overload disambiguation | Yes** | 
No | Yes | Yes | Yes | No | No | No | No | No | No | Yes | No |\n| Constructor-visible virtual dispatch | Yes | No | Yes | Yes‡‡ | Yes | No | No | No | No | No | No | Yes§§ | No |\n| Optional parameter arity resolution | Yes | No | No | Yes | Yes | No | No | Yes | Yes | Yes | No | Yes | No |\n\n\\* Python class-level annotated attributes (`address: Address`) now resolve `declaredType` correctly. The `self.x` instance attribute pattern is not yet supported.\n\n† JS field topology is captured (`field_definition` → `HAS_PROPERTY` edges) but `declaredType` is never set — JS has no AST type annotations. Disambiguation via `lookupFieldByOwner` requires `declaredType`. JSDoc `@type` support is a Phase 9 candidate.\n\n‡ C has no `@definition.property` query pattern. Struct member fields are not captured. C++ captures class/struct member fields via `field_declaration`.\n\n¶ Ruby call-result and method-call-result binding work via `call`/`method_call` nodes. Ruby uses method calls for both field access and method calls — there is no separate field access node type.\n\n‖ Python class-level annotated attributes (`address: Address`) have `declaredType`, but `self.x` instance attributes do not. Field access binding only works for class-level annotated fields.\n\n**Note on `this`/`self`/`$this` receivers:** Field access and method-call-result binding with `this`/`self`/`$this` as the receiver do not resolve in the fixpoint loop because these keywords are not stored in `scopeEnv`. They are resolved on-demand at call sites via `findEnclosingClassName()` AST walk. This is consistent across all languages and not a regression.\n\n§ PHP write access covers instance property writes (`$obj->field = value`) and static property writes (`ClassName::$field = value`). 
Nullsafe writes (`$obj?->field = value`) are not tracked because this is invalid PHP syntax — null-safe member access on the left-hand side of assignment is a parse error.\n\n\\*\\* TS: `parameterTypes` populated with `inferLiteralType` for overload disambiguation. TS overloads share one implementation body (generateId collision), but disambiguation selects the correct candidate.\n\n†† Python: parameter types extracted only with PEP 3107 type annotations (`def f(x: int)`).\n\n‡‡ Kotlin virtual dispatch supported via `detectConstructorType` hook — detects `Dog()` constructor calls (no `new` keyword) by verifying callee against `ClassNameLookup`.\n\n§§ C++ smart pointer virtual dispatch supported for `make_shared<T>()`/`make_unique<T>()` factory patterns. Raw pointer `new` also supported.\n\n---\n\n## Current Strengths\n\nThe current system provides strong value for call resolution because it combines:\n\n- explicit annotation extraction across 13 languages\n- generic-aware loop element typing (including call-expression iterables)\n- initializer-based inference with SymbolTable validation\n- selected pattern-based narrowing\n- scope-aware lookups\n- comment-based fallbacks for dynamic ecosystems (JSDoc, PHPDoc, YARD)\n- constrained return-type-aware receiver inference in call processing\n- deep field/property chains up to 3 levels across 9 languages\n- ACCESSES edge emission for field read access (via chain walking) and field write access (via assignment capture) across 12 languages\n- mixed field+method chain resolution (e.g. 
`svc.getUser().address.save()`)\n- type-preserving stdlib passthrough for `unwrap()`, `clone()`, `expect()`, etc.\n- method overload disambiguation via argument literal types (TypeScript, Java, Kotlin, C#, C++)\n- constructor-visible virtual dispatch for same-file subclasses (Java, C#, TypeScript, C++, Kotlin)\n- optional/default parameter arity resolution — calls with omitted optional args still resolve (TS, Python, Kotlin, C#, C++, PHP, Ruby)\n\nThis is enough to materially improve call-edge precision even without implementing a full static type system.\n\n---\n\n## Current Limitations\n\nImportant gaps still remain:\n\n- no general cross-file propagation of inferred bindings\n- `this`/`self`/`$this` receivers are not resolved in the fixpoint loop (resolved on-demand at call sites via AST walk instead)\n- limited branch-sensitive narrowing outside selected pattern constructs\n- limited Swift support compared with other languages\n- no complete destructuring-based field typing\n- no MRO/inheritance walking for field lookups (`lookupFieldByOwner` is direct-only)\n- for-loop variables bound at walk time cannot see fixpoint-resolved types (Phase 9B gap)\n- overloaded same-file methods share a graph node ID (generateId collision) — CALLS edges deduplicate to one per callee name\n\n---\n\n## Contributor Notes\n\nWhen modifying this system, treat the following as load-bearing invariants:\n\n1. **Conservatism matters more than recall**  \n   A missed binding is usually safer than a misleading receiver type.\n\n2. **Scope-key format is shared behavior**  \n   If scope keys change, constructor-binding verification and any downstream lookup using those keys must change in sync.\n\n3. **Tier naming may differ across code and PR discussions**  \n   For-loop element inference may appear as \"Tier 0b\" in documentation and \"Tier 1c\" in Phase 6 PR / test naming.\n\n4. 
**Comment-based types are fallback signals, not primary truth**  \n   They should remain lower-trust than explicit AST-derived types.\n\n5. **Return-type-aware inference already exists in constrained form**  \n   Future roadmap work should extend and generalize it rather than reintroduce it from scratch.\n"
  }
]