[
  {
    "path": ".github/ISSUE_TEMPLATE/bug_report.md",
    "content": "---\nname: Bug Report 🐛\nabout: Create a report to help improve Evo\ntitle: '[BUG] '\nlabels: bug\nassignees: ''\n---\n\n## Bug Description 🔍\n<!-- A clear and concise description of what the bug is -->\n\n## Steps to Reproduce 🔄\n1. \n2. \n3. \n\n## Expected Behavior 🌿\n<!-- What did you expect to happen? -->\n\n## Actual Behavior 🍂\n<!-- What actually happened? -->\n\n## Environment 🌍\n- Evo Version: \n- OS: \n- Go Version: \n\n## Additional Context 📝\n<!-- Add any other context, screenshots, or error messages about the problem here -->\n\n## Possible Solution 💡\n<!-- Optional: If you have any ideas on how to fix this, let us know! -->\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/feature_request.md",
    "content": "---\nname: Feature Request 💫\nabout: Suggest an idea to help evolve Evo\ntitle: '[FEATURE] '\nlabels: enhancement\nassignees: ''\n---\n\n## Feature Description 🌱\n<!-- A clear and concise description of what you'd like to see -->\n\n## Use Case 🎯\n<!-- Describe the problem this feature would solve or how it would improve Evo -->\n\n## Proposed Solution 💡\n<!-- If you have a specific solution in mind, describe it here -->\n\n## Alternatives Considered 🤔\n<!-- Have you considered any alternative solutions or workarounds? -->\n\n## Additional Context 📝\n<!-- Add any other context, mockups, or examples about the feature request here -->\n\n## Impact on Project 🌿\n<!-- How would this feature align with Evo's goals of being:\n- Conflict-free\n- Rename-friendly\n- Large-file ready\n- Offline-first\n-->\n"
  },
  {
    "path": ".github/pull_request_template.md",
    "content": "## Description 🌿\n<!-- Provide a clear and concise description of your changes -->\n\n## Type of Change 🔄\n<!-- Mark the relevant option with an [x] -->\n- [ ] 🐛 Bug Fix\n- [ ] ✨ New Feature\n- [ ] 📈 Performance Improvement\n- [ ] 🔧 Code Refactoring\n- [ ] 📚 Documentation\n- [ ] 🧪 Test Enhancement\n- [ ] 🛠️ Build/CI Pipeline\n\n## Related Issues 🔗\n<!-- Link any related issues using #issue-number -->\nCloses #\n\n## Testing Done 🧪\n<!-- Describe the tests you've added or run to verify your changes -->\n\n## Checklist ✅\n- [ ] My code follows Evo's style guidelines\n- [ ] I have added/updated necessary documentation\n- [ ] I have added appropriate tests\n- [ ] My changes don't introduce new merge conflicts (Evo magic! 🎩)\n- [ ] I have tested my changes with large files (if applicable)\n- [ ] All new and existing tests pass\n\n## Screenshots 📸\n<!-- If applicable, add screenshots to help explain your changes -->\n\n## Additional Notes 📝\n<!-- Add any other context about your PR here -->\n\n---\nRemember: Evo is all about making version control effortless! Thanks for contributing! 💪\n"
  },
  {
    "path": ".gitignore",
    "content": "# Binaries and build artifacts\nbin/\ndist/\nbuild/\n*.exe\n*.exe~\n*.dll\n*.so\n*.dylib\n\n# Go cache / coverage\n*.test\n*.out\n*.swp\n*.swo\n*.tmp\n*.temp\ncoverage.out\ncoverage.html\n\n# OS-specific\n.DS_Store\nThumbs.db\n\n# Evo logs & data\n.evo/\n*.log\n\n# Editor / IDE\n.vscode/\n.idea/\n"
  },
  {
    "path": "DESIGN.md",
    "content": "# Evo Design Document\n\n## Overview & Motivation\n\nEvo is a next-generation version control system designed to solve problems that legacy systems (like Git) struggle with—especially around complex merges, large file handling, and rename tracking. By leveraging CRDTs (Conflict-Free Replicated Data Types), Evo can integrate changes from multiple developers without forcing manual merges or conflicts, all while supporting a familiar commit/branch-like workflow.\n\n## Key Goals\n\n1. **Branch-Free, Named Streams**\n   - Instead of Git branches, Evo uses named streams to isolate sets of changes\n   - Merging is a matter of replicating CRDT operations from one stream to another\n\n2. **CRDT-Powered Concurrency**\n   - No more \"merge conflicts\"\n   - Evo's line-based RGA (Replicated Growable Array) CRDT automatically merges line insertions, updates, and deletions even when multiple developers modify the same file concurrently\n\n3. **Stable File IDs for Renames**\n   - Renames no longer break history\n   - Evo maintains a `.evo/index` that assigns each file a stable, UUID-based ID so renaming a file doesn't lose references to its log\n\n4. **Large File Support**\n   - Files exceeding a configurable threshold are stored in `.evo/largefiles/<fileID>` with only a stub line in the CRDT logs\n   - This prevents huge content from bloating the text-based logs\n\n5. **Full Revert & Partial Merges**\n   - Every commit tracks the old content on updates, allowing truly comprehensive revert\n   - Partial merges (or \"cherry-picks\") replicate only a single commit's changes from one stream to another, as opposed to pulling everything\n\n6. **Optional Commit Signing**\n   - Evo supports Ed25519-based signatures for verifying authenticity\n   - Commits store a signature field to guard against tampering\n\n## Architecture\n\nBelow is a high-level view of Evo's architecture and rationale:\n\n### 1. 
Named Streams\n- Each stream is effectively a separate CRDT operation log stored in `.evo/ops/<stream>`\n- Users can create or switch streams (akin to branches)\n- Merging means copying missing commits (and their CRDT operations) from one stream's logs to another\n\n**Design Decision:** This approach provides a branch-like user experience but avoids the complexity of Git merges and HEAD pointers. CRDT ensures no merge conflicts.\n\n### 2. RGA-Based CRDT\n- We employ an RGA (Replicated Growable Array) for each file, which can handle line insertion, deletion, and reordering\n- The RGA logic is stored in `.evo/ops/<stream>/<fileID>.bin` in a custom binary format (no JSON overhead)\n- Each operation has `(lamport, nodeID)` for concurrency ordering, plus a `lineID` for each line\n\n**Design Decision:**\n- RGA allows lines to be re-inserted anywhere, supporting reordering or partial merges with minimal overhead\n- Using a binary format speeds up parsing and reduces disk usage\n\n### 3. Stable File IDs\n- `.evo/index` maps `filePath -> fileID`. If a user renames a file, we only update the index; the CRDT logs still reference the same fileID\n- This ensures rename history is never lost, unlike older VCS tools that rely on heuristics to guess renames\n\n### 4. Commits & Reverts\n- A commit is a snapshot of newly added operations since the previous commit, stored in `.evo/commits/<stream>/<commitID>.bin`\n- For update operations, we store the `oldContent` so revert can truly restore lines to what they were\n- Revert automatically generates inverse operations (e.g., an insert becomes a delete) and re-applies them to the CRDT logs\n\n**Design Decision:**\n- By storing old content in commits, we can revert precisely, even for partial updates or line changes, avoiding the simplistic \"delete everything\" approach\n\n### 5. 
Large File Handling\n- If a file's size exceeds a configurable threshold (`files.largeThreshold`), Evo writes a CRDT stub line `EVO-LFS:<fileID>` and places the real file content into `.evo/largefiles/<fileID>/`\n- This keeps the CRDT logs small and is reminiscent of Git-LFS, but simpler and built-in\n\n### 6. Partial Merges & Cherry-Pick\n- `evo stream merge <src> <target>` merges all missing commits from `<src>` to `<target>`\n- `evo stream cherry-pick <commitID> <target>` merges only that single commit\n- Because each commit references discrete CRDT operations by file ID, partial merges replicate exactly the needed ops\n\n### 7. Optional Ed25519 Signing\n- Users can configure a signing key path (`signing.keyPath` in config)\n- On commit, Evo can hash the commit's stable representation (metadata + ops) and sign the digest\n- If `verifySignatures = true`, the CLI warns if the signature fails verification\n\n**Design Decision:**\n- This approach is offline-first: no server needed\n- The user's private key is local, and signatures are purely a cryptographic measure for authenticity\n\n## CLI Summary\n\n1. **Initialize Repository**\n   ```bash\n   evo init [dir]\n   ```\n   - Creates `.evo/` structure, \"main\" stream, config, etc.\n\n2. **Configuration**\n   ```bash\n   evo config [get|set] ...\n   ```\n   - Manage global/repo-level settings (`user.name`, `user.email`, `remote origin`, etc.)\n\n3. **Status**\n   ```bash\n   evo status\n   ```\n   - Shows changed files, new files, renames, etc.\n   - Lists current stream and pending operations\n\n4. **Commit**\n   ```bash\n   evo commit -m <msg> [--sign]\n   ```\n   - Groups newly added ops into a commit with a user-provided message, optional signing\n\n5. **Revert**\n   ```bash\n   evo revert <commit-id>\n   ```\n   - Generates inverse ops to restore lines from a prior commit\n\n6. 
**Log**\n   ```bash\n   evo log\n   ```\n   - Lists commits in the current stream, optionally verifying signatures\n\n7. **Stream**\n   ```bash\n   evo stream <create|switch|list|merge|cherry-pick>\n   ```\n   - Manages named streams (branch-like workflows)\n\n8. **Sync**\n   ```bash\n   evo sync <remote-url>\n   ```\n   - Stub for pushing/pulling CRDT logs from a future Evo server (not fully implemented)\n\n## Config & Auth\n\n- Global config at `~/.config/evo/config.toml`\n- Repo config at `.evo/config/config.toml`\n- Example keys:\n  - `user.name`, `user.email`\n  - `files.largeThreshold`\n  - `verifySignatures` (true/false)\n  - `signing.keyPath` (path to Ed25519 private key)\n\n## Why Evo is Different\n\n- **No Merge Conflicts:** CRDT concurrency means each line insertion, update, or deletion merges automatically\n- **Renames Are Trivial:** The stable file ID approach eliminates guesswork\n- **Partial Merges:** Cherry-pick or revert individual commits cleanly, thanks to the operation-based CRDT approach\n- **Offline-First:** No central server required; commits and merges work locally with minimal overhead\n- **Extensible:** We can add \"pull requests,\" \"server-based merges,\" or advanced partial file merges without rewriting the entire engine\n\n## Conclusion\n\nEvo aims to simplify version control while enhancing concurrency and rename support. It merges automatically using a robust line-based CRDT, organizes changes into named streams instead of ephemeral branches, and offers optional commit signing plus large file offloading.\n\nThe result is a production-ready, innovative VCS that supports both small personal projects and large enterprise codebases—offline or with a future server for collaboration. Evo's design choices reflect the vision of replacing traditional DVCS with something more powerful, more flexible, and less conflict-prone."
  },
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2025 Brayden Moon\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "# Evo 🌿\n\n> **IMPORTANT**: This project has been discontinued. Due to persistent harassment and hostile behavior from certain members of the community, I have made the difficult decision to cease development of this project. While I'm proud of what was built and the vision it represented, I cannot continue maintaining it under these circumstances. The repository will remain archived for reference, but will no longer receive updates or support. Thank you to those who supported this project constructively.\n\n> ~~**Note**: This is my hobby project in active development! While the core concepts are working, some features are still experimental and under construction. If you like the vision, contributions and feedback are very welcome! 🚧~~\n\nNext-Generation, CRDT-Based Version Control\nNo Merge Conflicts • Named Streams • Stable File IDs • Large File Support\n\nEvo 🌿 aims to evolve version control by abandoning outdated branch merges and conflict resolutions. Instead, it leverages CRDT (Conflict-Free Replicated Data Type) magic so that changes from multiple users automatically converge—no fighting with merges or losing work when files are renamed!\n\n## Why Evo? 🌿\n\n1. **Zero Merge Conflicts**\n   The line-based RGA CRDT merges text changes from different developers seamlessly.\n2. **Named Streams Instead of Branches**\n   Create and switch streams for new features, merge or cherry-pick commits from one stream to another—no more complicated branching.\n3. **Renames Made Simple**\n   Files get stable UUIDs in .evo/index so that renames never lose history.\n4. **Large File Support**\n   Automatic detection moves big files to .evo/largefiles/ and stores only a stub in the CRDT logs.\n5. **Offline-First**\n   Commit, revert, or switch streams locally with no server required.\n6. 
**Commit Signing**\n   Optional Ed25519 signing for users who need authenticity checks.\n\n## ~~Work in Progress~~ Project Status 🌿\n\n~~While Evo's core is functional, there's active development on:\n- Advanced partial merges for even more granular change selection\n- Extended tests (unit/integration/E2E)\n- Server-based PR flows for code reviews\n- Performance (packfiles, caching)\n- CLI & UI polish~~\n\nThis project has been discontinued and is no longer under development. The code remains as-is for reference purposes, but no further updates or improvements will be made.\n\n~~Your feedback and contributions can help shape Evo's future!~~\n\n## Vision 🌿\n\nThe goal is to make version control feel effortless: merges happen automatically, renames never break history, large files don't slow you down, and everything works offline. The future roadmap includes a fully realized server for pull requests, enterprise auth, and real-time collaboration—all powered by CRDT behind the scenes.\n\n## Installing Evo 🛠️\n\n> **Note**: ~~As this is a hobby project, some features might not work as described. Feel free to experiment and contribute improvements!~~ Some features may not work as described; the project is archived, so no fixes or support will be provided.\n\n1. Clone & Build:\n```bash\ngit clone https://github.com/crazywolf132/evo.git\ncd evo\ngo mod tidy\ngo build -o evo ./cmd/evo\n```\n\n2. (Optional) Install:\n```bash\ngo install ./cmd/evo\n```\n\n## Quick Start 🚀\n\n```bash\n# Initialize a new Evo repo\nevo init\n\n# Check for changed or renamed files\nevo status\n\n# Commit changes (optionally sign)\nevo commit -m \"Initial commit\"\n\n# Create a new stream (like a branch)\nevo stream create feature-x\nevo stream switch feature-x\n\n# Make changes -> evo status -> evo commit ...\n# Merge everything back into main when ready\nevo stream merge feature-x main\n```\n\n## Contributing 💪\n\nThis project is no longer accepting contributions as it has been discontinued. 
The repository is archived for reference purposes only.\n\n## License 📜\n\nEvo 🌿 is released under the MIT License. Hope you find it as fun and liberating to use as it is to build!\n\n---\n\n~~Thanks for checking out Evo 🌿! I'm excited to see this project grow into a conflict-free, rename-friendly, large-file-ready version control system. Remember, it's a work in progress, so expect some rough edges - but with your help, it can become amazing! ✨~~\n\nThanks for checking out Evo 🌿! The project is archived, but the code remains available as a reference for conflict-free, rename-friendly, large-file-ready version control. ✨\n"
  },
  {
    "path": "cmd/evo/commit_cmd.go",
    "content": "package main\n\nimport (\n\t\"evo/internal/commits\"\n\t\"evo/internal/config\"\n\t\"evo/internal/index\"\n\t\"evo/internal/repo\"\n\t\"evo/internal/streams\"\n\t\"evo/internal/types\"\n\t\"fmt\"\n\n\t\"github.com/spf13/cobra\"\n)\n\nvar (\n\tcommitMsg  string\n\tcommitSign bool\n)\n\nfunc init() {\n\tvar commitCmd = &cobra.Command{\n\t\tUse:   \"commit\",\n\t\tShort: \"Group new CRDT ops into a commit, optionally signed\",\n\t\tLong: `Collect newly added CRDT ops (including old content for updates) into a single commit\nwith a message and optional Ed25519 signature, if configured.`,\n\t\tRunE: func(cmd *cobra.Command, args []string) error {\n\t\t\tif commitMsg == \"\" {\n\t\t\t\treturn fmt.Errorf(\"use -m to specify a commit message\")\n\t\t\t}\n\t\t\trp, err := repo.FindRepoRoot(\".\")\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tstream, err := streams.CurrentStream(rp)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\t// update index\n\t\t\tif err := index.UpdateIndex(rp); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tname, _ := config.GetConfigValue(rp, \"user.name\")\n\t\t\temail, _ := config.GetConfigValue(rp, \"user.email\")\n\t\t\tif name == \"\" {\n\t\t\t\tname = \"EvoUser\"\n\t\t\t}\n\t\t\tif email == \"\" {\n\t\t\t\temail = \"user@evo\"\n\t\t\t}\n\t\t\tcid, err := commits.CreateCommit(rp, stream, commitMsg, name, email, []types.ExtendedOp{}, commitSign)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tfmt.Printf(\"Created commit %s in stream %s\\n\", cid.ID, stream)\n\t\t\treturn nil\n\t\t},\n\t}\n\tcommitCmd.Flags().StringVarP(&commitMsg, \"message\", \"m\", \"\", \"Commit message\")\n\tcommitCmd.Flags().BoolVar(&commitSign, \"sign\", false, \"Sign commit using Ed25519 if configured\")\n\trootCmd.AddCommand(commitCmd)\n}\n"
  },
  {
    "path": "cmd/evo/config_cmd.go",
    "content": "package main\n\nimport (\n\t\"evo/internal/config\"\n\t\"evo/internal/repo\"\n\t\"fmt\"\n\n\t\"github.com/spf13/cobra\"\n)\n\nvar cfgGlobal bool\n\nfunc init() {\n\tvar setCmd = &cobra.Command{\n\t\tUse:   \"set <key> <value>\",\n\t\tShort: \"Set a config key (repo-level by default, or --global)\",\n\t\tRunE: func(cmd *cobra.Command, args []string) error {\n\t\t\tif len(args) < 2 {\n\t\t\t\treturn fmt.Errorf(\"usage: evo config set <key> <value>\")\n\t\t\t}\n\t\t\tkey, val := args[0], args[1]\n\n\t\t\tif cfgGlobal {\n\t\t\t\treturn config.SetGlobalConfigValue(key, val)\n\t\t\t}\n\t\t\trp, err := repo.FindRepoRoot(\".\")\n\t\t\tif err != nil {\n\t\t\t\t// fallback to global\n\t\t\t\treturn config.SetGlobalConfigValue(key, val)\n\t\t\t}\n\t\t\treturn config.SetRepoConfigValue(rp, key, val)\n\t\t},\n\t}\n\tsetCmd.Flags().BoolVar(&cfgGlobal, \"global\", false, \"Set global config instead of repo-level\")\n\n\tvar getCmd = &cobra.Command{\n\t\tUse:   \"get <key>\",\n\t\tShort: \"Get a config value (repo-level overrides global)\",\n\t\tRunE: func(cmd *cobra.Command, args []string) error {\n\t\t\tif len(args) < 1 {\n\t\t\t\treturn fmt.Errorf(\"usage: evo config get <key>\")\n\t\t\t}\n\t\t\tkey := args[0]\n\t\t\trp, err := repo.FindRepoRoot(\".\")\n\t\t\tvar val string\n\t\t\tif err != nil {\n\t\t\t\t// fallback global\n\t\t\t\tval, err = config.GetConfigValue(\"\", key)\n\t\t\t} else {\n\t\t\t\tval, err = config.GetConfigValue(rp, key)\n\t\t\t}\n\t\t\tif err != nil {\n\t\t\t\tfmt.Println(\"Error:\", err)\n\t\t\t\treturn nil\n\t\t\t}\n\t\t\tif val == \"\" {\n\t\t\t\tfmt.Printf(\"No value found for key: %s\\n\", key)\n\t\t\t} else {\n\t\t\t\tfmt.Println(val)\n\t\t\t}\n\t\t\treturn nil\n\t\t},\n\t}\n\n\tvar configCmd = &cobra.Command{\n\t\tUse:   \"config\",\n\t\tShort: \"Manage Evo configuration\",\n\t}\n\n\tconfigCmd.AddCommand(setCmd, getCmd)\n\trootCmd.AddCommand(configCmd)\n}\n"
  },
  {
    "path": "cmd/evo/init_cmd.go",
    "content": "package main\n\nimport (\n\t\"evo/internal/repo\"\n\t\"fmt\"\n\n\t\"github.com/spf13/cobra\"\n)\n\nfunc init() {\n\tvar initCmd = &cobra.Command{\n\t\tUse:   \"init [path]\",\n\t\tShort: \"Initialize a new Evo repository\",\n\t\tLong: `Creates a .evo directory with default stream \"main\", config folder, index for stable file IDs,\nand other structures needed for CRDT-based version control.`,\n\t\tRunE: func(cmd *cobra.Command, args []string) error {\n\t\t\tpath := \".\"\n\t\t\tif len(args) > 0 {\n\t\t\t\tpath = args[0]\n\t\t\t}\n\t\t\tif err := repo.InitRepo(path); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tfmt.Println(\"Initialized Evo repository at\", path)\n\t\t\treturn nil\n\t\t},\n\t}\n\trootCmd.AddCommand(initCmd)\n}\n"
  },
  {
    "path": "cmd/evo/log_cmd.go",
    "content": "package main\n\nimport (\n\t\"evo/internal/commits\"\n\t\"evo/internal/config\"\n\t\"evo/internal/repo\"\n\t\"evo/internal/signing\"\n\t\"evo/internal/streams\"\n\t\"fmt\"\n\n\t\"github.com/spf13/cobra\"\n)\n\nfunc init() {\n\tvar logCmd = &cobra.Command{\n\t\tUse:   \"log\",\n\t\tShort: \"Show commit history for the current stream\",\n\t\tRunE: func(cmd *cobra.Command, args []string) error {\n\t\t\trp, err := repo.FindRepoRoot(\".\")\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tstream, err := streams.CurrentStream(rp)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tverifyStr, _ := config.GetConfigValue(rp, \"verifySignatures\")\n\t\t\tdoVerify := (verifyStr == \"true\")\n\n\t\t\tcc, err := commits.ListCommits(rp, stream)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif len(cc) == 0 {\n\t\t\t\tfmt.Println(\"No commits found in this stream.\")\n\t\t\t\treturn nil\n\t\t\t}\n\t\t\tfor _, c := range cc {\n\t\t\t\tver := \"\"\n\t\t\t\tif c.Signature != \"\" && doVerify {\n\t\t\t\t\tvalid, err := signing.VerifyCommit(&c, rp)\n\t\t\t\t\tif err != nil {\n\t\t\t\t\t\tver = \" (error: \" + err.Error() + \")\"\n\t\t\t\t\t} else if valid {\n\t\t\t\t\t\tver = \" (verified)\"\n\t\t\t\t\t} else {\n\t\t\t\t\t\tver = \" (INVALID!)\"\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tfmt.Printf(\"commit %s%s\\nAuthor: %s <%s>\\nDate:   %s\\n\\n    %s\\n\\n\",\n\t\t\t\t\tc.ID, ver, c.AuthorName, c.AuthorEmail, c.Timestamp.Local(), c.Message)\n\t\t\t}\n\t\t\treturn nil\n\t\t},\n\t}\n\trootCmd.AddCommand(logCmd)\n}\n"
  },
  {
    "path": "cmd/evo/main.go",
    "content": "package main\n\nfunc main() {\n\tExecute()\n}\n"
  },
  {
    "path": "cmd/evo/revert_cmd.go",
    "content": "package main\n\nimport (\n\t\"evo/internal/commits\"\n\t\"evo/internal/repo\"\n\t\"evo/internal/streams\"\n\t\"fmt\"\n\n\t\"github.com/spf13/cobra\"\n)\n\nfunc init() {\n\tvar revertCmd = &cobra.Command{\n\t\tUse:   \"revert <commit-id>\",\n\t\tShort: \"Revert the specified commit by generating inverse ops\",\n\t\tLong:  `This properly restores old lines if the commit performed updates, removing inserted lines, etc.`,\n\t\tRunE: func(cmd *cobra.Command, args []string) error {\n\t\t\tif len(args) < 1 {\n\t\t\t\treturn fmt.Errorf(\"usage: evo revert <commit-id>\")\n\t\t\t}\n\t\t\tcommitID := args[0]\n\t\t\trp, err := repo.FindRepoRoot(\".\")\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tstr, err := streams.CurrentStream(rp)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tnewC, err := commits.RevertCommit(rp, str, commitID)\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"failed to revert commit: %w\", err)\n\t\t\t}\n\t\t\tfmt.Printf(\"Created revert commit %s\\n\", newC.ID)\n\t\t\treturn nil\n\t\t},\n\t}\n\trootCmd.AddCommand(revertCmd)\n}\n"
  },
  {
    "path": "cmd/evo/root.go",
    "content": "package main\n\nimport (\n\t\"fmt\"\n\t\"os\"\n\n\t\"github.com/spf13/cobra\"\n)\n\nvar rootCmd = &cobra.Command{\n\tUse:   \"evo\",\n\tShort: \"Evo (🌿) - next-generation CRDT-based version control\",\n\tLong: `Evo is a production-ready version control system that uses named streams,\nline-based CRDT (with RGA for reordering), stable file IDs, commit signing, and large file support.`,\n}\n\n// Execute runs the CLI\nfunc Execute() {\n\tif err := rootCmd.Execute(); err != nil {\n\t\tfmt.Fprintln(os.Stderr, \"Error:\", err)\n\t\tos.Exit(1)\n\t}\n}\n"
  },
  {
    "path": "cmd/evo/status_cmd.go",
    "content": "package main\n\nimport (\n\t\"evo/internal/repo\"\n\t\"evo/internal/status\"\n\t\"fmt\"\n\n\t\"github.com/spf13/cobra\"\n)\n\nfunc init() {\n\tvar statusCmd = &cobra.Command{\n\t\tUse:   \"status\",\n\t\tShort: \"Show the working tree status\",\n\t\tLong: `Shows the status of files in the working directory:\n- New (untracked) files\n- Modified files\n- Deleted files\n- Renamed files\nRespects .evo-ignore patterns for excluding files.`,\n\t\tRunE: func(cmd *cobra.Command, args []string) error {\n\t\t\trp, err := repo.FindRepoRoot(\".\")\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\n\t\t\tst, err := status.GetStatus(rp)\n\t\t\tif err != nil {\n\t\t\t\treturn fmt.Errorf(\"failed to get status: %w\", err)\n\t\t\t}\n\n\t\t\tfmt.Print(status.FormatStatus(st))\n\t\t\treturn nil\n\t\t},\n\t}\n\trootCmd.AddCommand(statusCmd)\n}\n"
  },
  {
    "path": "cmd/evo/stream_cmd.go",
    "content": "package main\n\nimport (\n\t\"evo/internal/repo\"\n\t\"evo/internal/streams\"\n\t\"fmt\"\n\n\t\"github.com/spf13/cobra\"\n)\n\nfunc init() {\n\tvar streamCmd = &cobra.Command{\n\t\tUse:   \"stream\",\n\t\tShort: \"Manage named streams (like branches)\",\n\t\tLong:  \"Create, switch, list, merge, or cherry-pick commits in named streams.\",\n\t}\n\n\tvar createCmd = &cobra.Command{\n\t\tUse:   \"create <name>\",\n\t\tShort: \"Create a new stream\",\n\t\tRunE: func(cmd *cobra.Command, args []string) error {\n\t\t\tif len(args) < 1 {\n\t\t\t\treturn fmt.Errorf(\"usage: evo stream create <name>\")\n\t\t\t}\n\t\t\trp, err := repo.FindRepoRoot(\".\")\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif err := streams.CreateStream(rp, args[0]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tfmt.Println(\"Created stream:\", args[0])\n\t\t\treturn nil\n\t\t},\n\t}\n\n\tvar switchCmd = &cobra.Command{\n\t\tUse:   \"switch <name>\",\n\t\tShort: \"Switch to another stream locally\",\n\t\tRunE: func(cmd *cobra.Command, args []string) error {\n\t\t\tif len(args) < 1 {\n\t\t\t\treturn fmt.Errorf(\"usage: evo stream switch <name>\")\n\t\t\t}\n\t\t\trp, err := repo.FindRepoRoot(\".\")\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif err := streams.SwitchStream(rp, args[0]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tfmt.Println(\"Switched to stream:\", args[0])\n\t\t\treturn nil\n\t\t},\n\t}\n\n\tvar listCmd = &cobra.Command{\n\t\tUse:   \"list\",\n\t\tShort: \"List named streams\",\n\t\tRunE: func(cmd *cobra.Command, args []string) error {\n\t\t\trp, err := repo.FindRepoRoot(\".\")\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tss, err := streams.ListStreams(rp)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tcur, _ := streams.CurrentStream(rp)\n\t\t\tfor _, s := range ss {\n\t\t\t\tprefix := \"  \"\n\t\t\t\tif s == cur {\n\t\t\t\t\tprefix = \"* \"\n\t\t\t\t}\n\t\t\t\tfmt.Println(prefix + s)\n\t\t\t}\n\t\t\treturn 
nil\n\t\t},\n\t}\n\n\tvar mergeCmd = &cobra.Command{\n\t\tUse:   \"merge <source> <target>\",\n\t\tShort: \"Merge all commits from source stream into target stream\",\n\t\tRunE: func(cmd *cobra.Command, args []string) error {\n\t\t\tif len(args) < 2 {\n\t\t\t\treturn fmt.Errorf(\"usage: evo stream merge <source> <target>\")\n\t\t\t}\n\t\t\trp, err := repo.FindRepoRoot(\".\")\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif err := streams.MergeStreams(rp, args[0], args[1]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tfmt.Printf(\"Merged all missing commits from '%s' into '%s'\\n\", args[0], args[1])\n\t\t\treturn nil\n\t\t},\n\t}\n\n\tvar cherryPickCmd = &cobra.Command{\n\t\tUse:   \"cherry-pick <commit-id> <target-stream>\",\n\t\tShort: \"Replicate only one commit's ops into the target stream\",\n\t\tRunE: func(cmd *cobra.Command, args []string) error {\n\t\t\tif len(args) < 2 {\n\t\t\t\treturn fmt.Errorf(\"usage: evo stream cherry-pick <commit-id> <target-stream>\")\n\t\t\t}\n\t\t\trp, err := repo.FindRepoRoot(\".\")\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tif err := streams.CherryPick(rp, args[0], args[1]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tfmt.Printf(\"Cherry-picked commit %s into stream %s\\n\", args[0], args[1])\n\t\t\treturn nil\n\t\t},\n\t}\n\n\tstreamCmd.AddCommand(createCmd, switchCmd, listCmd, mergeCmd, cherryPickCmd)\n\trootCmd.AddCommand(streamCmd)\n}\n"
  },
  {
    "path": "cmd/evo/sync_cmd.go",
    "content": "package main\n\nimport (\n\t\"evo/internal/repo\"\n\t\"fmt\"\n\n\t\"github.com/spf13/cobra\"\n)\n\nfunc init() {\n\tvar syncCmd = &cobra.Command{\n\t\tUse:   \"sync <remote-url>\",\n\t\tShort: \"Synchronize CRDT logs with remote (not fully implemented)\",\n\t\tLong: `Pull missing ops from remote for the current stream and push local ops\nto the remote. Requires a future Evo server implementation for full functionality.`,\n\t\tRunE: func(cmd *cobra.Command, args []string) error {\n\t\t\tif len(args) < 1 {\n\t\t\t\treturn fmt.Errorf(\"usage: evo sync <remote-url>\")\n\t\t\t}\n\t\t\tremote := args[0]\n\t\t\t_, err := repo.FindRepoRoot(\".\")\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tfmt.Printf(\"Sync with %s is not yet implemented.\\n\", remote)\n\t\t\treturn nil\n\t\t},\n\t}\n\trootCmd.AddCommand(syncCmd)\n}\n"
  },
  {
    "path": "go.mod",
    "content": "module evo\n\ngo 1.23.4\n\nrequire (\n\tgithub.com/google/uuid v1.6.0\n\tgithub.com/pelletier/go-toml v1.9.5\n\tgithub.com/spf13/cobra v1.8.1\n)\n\nrequire (\n\tgithub.com/bmatcuk/doublestar/v4 v4.8.0 // indirect\n\tgithub.com/davecgh/go-spew v1.1.1 // indirect\n\tgithub.com/inconshreveable/mousetrap v1.1.0 // indirect\n\tgithub.com/pmezard/go-difflib v1.0.0 // indirect\n\tgithub.com/spf13/pflag v1.0.5 // indirect\n\tgithub.com/stretchr/testify v1.10.0 // indirect\n\tgopkg.in/yaml.v3 v3.0.1 // indirect\n)\n"
  },
  {
    "path": "go.sum",
    "content": "github.com/bmatcuk/doublestar/v4 v4.8.0 h1:DSXtrypQddoug1459viM9X9D3dp1Z7993fw36I2kNcQ=\ngithub.com/bmatcuk/doublestar/v4 v4.8.0/go.mod h1:xBQ8jztBU6kakFMg+8WGxn0c6z1fTSPVIjEY1Wr7jzc=\ngithub.com/cpuguy83/go-md2man/v2 v2.0.4/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=\ngithub.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=\ngithub.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=\ngithub.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=\ngithub.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=\ngithub.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=\ngithub.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=\ngithub.com/pelletier/go-toml v1.9.5 h1:4yBQzkHv+7BHq2PQUZF3Mx0IYxG7LsP222s7Agd3ve8=\ngithub.com/pelletier/go-toml v1.9.5/go.mod h1:u1nR/EPcESfeI/szUZKdtJ0xRNbUoANCkoOuaOx1Y+c=\ngithub.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=\ngithub.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=\ngithub.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=\ngithub.com/spf13/cobra v1.8.1 h1:e5/vxKd/rZsfSJMUX1agtjeTDf+qv1/JdBF8gg5k9ZM=\ngithub.com/spf13/cobra v1.8.1/go.mod h1:wHxEcudfqmLYa8iTfL+OuZPbBZkmvliBWKIezN3kD9Y=\ngithub.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=\ngithub.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=\ngithub.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=\ngithub.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=\ngopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=\ngopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=\ngopkg.in/yaml.v3 
v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=\n"
  },
  {
    "path": "internal/commits/commits.go",
    "content": "package commits\n\nimport (\n\t\"crypto/sha256\"\n\t\"encoding/binary\"\n\t\"encoding/json\"\n\t\"evo/internal/crdt\"\n\t\"evo/internal/ops\"\n\t\"evo/internal/signing\"\n\t\"evo/internal/types\"\n\t\"fmt\"\n\t\"io/fs\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"sort\"\n\t\"strings\"\n\t\"time\"\n\n\t\"github.com/google/uuid\"\n)\n\n// ExtendedOp includes oldContent for update ops\ntype ExtendedOp = types.ExtendedOp\n\n// CreateCommit creates a new commit with the given operations\nfunc CreateCommit(repoPath, stream, message, authorName, authorEmail string, ops []types.ExtendedOp, sign bool) (*types.Commit, error) {\n\tcommit := &types.Commit{\n\t\tID:          uuid.New().String(),\n\t\tStream:      stream,\n\t\tMessage:     message,\n\t\tAuthorName:  authorName,\n\t\tAuthorEmail: authorEmail,\n\t\tTimestamp:   time.Now().UTC(),\n\t\tOperations:  ops,\n\t}\n\n\t// Sign commit if requested\n\tif sign {\n\t\tsig, err := signing.SignCommit(commit, repoPath)\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"failed to sign commit: %w\", err)\n\t\t}\n\t\tcommit.Signature = sig\n\n\t\t// Verify signature immediately\n\t\tvalid, err := signing.VerifyCommit(commit, repoPath)\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"failed to verify commit signature: %w\", err)\n\t\t}\n\t\tif !valid {\n\t\t\treturn nil, fmt.Errorf(\"commit signature verification failed\")\n\t\t}\n\t}\n\n\t// Save commit\n\tif err := SaveCommit(repoPath, commit); err != nil {\n\t\treturn nil, fmt.Errorf(\"failed to save commit: %w\", err)\n\t}\n\n\treturn commit, nil\n}\n\n// LoadCommit loads a commit from disk\nfunc LoadCommit(repoPath, stream, commitID string) (*types.Commit, error) {\n\tcommitPath := filepath.Join(repoPath, \".evo\", \"commits\", stream, commitID+\".bin\")\n\tdata, err := os.ReadFile(commitPath)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"failed to read commit file: %w\", err)\n\t}\n\n\tvar commit types.Commit\n\tif err := json.Unmarshal(data, &commit); err != 
nil {\n\t\treturn nil, fmt.Errorf(\"failed to unmarshal commit: %w\", err)\n\t}\n\n\t// Verify signature if present\n\tif commit.Signature != \"\" {\n\t\tvalid, err := signing.VerifyCommit(&commit, repoPath)\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"failed to verify commit signature: %w\", err)\n\t\t}\n\t\tif !valid {\n\t\t\treturn nil, fmt.Errorf(\"commit signature verification failed\")\n\t\t}\n\t}\n\n\treturn &commit, nil\n}\n\n// SaveCommit saves a commit to disk\nfunc SaveCommit(repoPath string, commit *types.Commit) error {\n\tcommitDir := filepath.Join(repoPath, \".evo\", \"commits\", commit.Stream)\n\tif err := os.MkdirAll(commitDir, 0755); err != nil {\n\t\treturn fmt.Errorf(\"failed to create commit directory: %w\", err)\n\t}\n\n\tdata, err := json.Marshal(commit)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"failed to marshal commit: %w\", err)\n\t}\n\n\tcommitPath := filepath.Join(commitDir, commit.ID+\".bin\")\n\tif err := os.WriteFile(commitPath, data, 0644); err != nil {\n\t\treturn fmt.Errorf(\"failed to write commit file: %w\", err)\n\t}\n\n\treturn nil\n}\n\n// gatherNewOps => find ops not in prior commits, augment 'update' ops with oldContent\nfunc gatherNewOps(repoPath, stream string) ([]ExtendedOp, error) {\n\tall, err := ListCommits(repoPath, stream)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tknown := make(map[string]bool)\n\tfor _, cc := range all {\n\t\tfor _, eop := range cc.Operations {\n\t\t\tknown[opKey(eop.Op)] = true\n\t\t}\n\t}\n\n\t// load all current ops\n\tvar allOps []ExtendedOp\n\topsDir := filepath.Join(repoPath, \".evo\", \"ops\", stream)\n\tif err := filepath.WalkDir(opsDir, func(path string, d fs.DirEntry, err error) error {\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tif !d.IsDir() && filepath.Ext(path) == \".bin\" {\n\t\t\tf, err := os.Open(path)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tdefer f.Close()\n\t\t\tvar size int64\n\t\t\tif err := binary.Read(f, binary.LittleEndian, &size); 
err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tdata := make([]byte, size)\n\t\t\t// binary.Read fills the whole slice (io.ReadFull semantics), so a\n\t\t\t// short read is reported as an error instead of silently truncating\n\t\t\tif err := binary.Read(f, binary.LittleEndian, data); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tvar op crdt.Operation\n\t\t\tif err := json.Unmarshal(data, &op); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tallOps = append(allOps, ExtendedOp{Op: op})\n\t\t}\n\t\treturn nil\n\t}); err != nil && !os.IsNotExist(err) {\n\t\treturn nil, err\n\t}\n\n\t// build doc states to find old text\n\tdocStates := buildDocStates(repoPath, stream)\n\tvar newEops []ExtendedOp\n\tfor _, op := range allOps {\n\t\tk := opKey(op.Op)\n\t\tif !known[k] {\n\t\t\tvar old string\n\t\t\tif op.Op.Type == crdt.OpUpdate {\n\t\t\t\told = findOldContent(docStates, op.Op.LineID)\n\t\t\t}\n\t\t\tnewEops = append(newEops, ExtendedOp{\n\t\t\t\tOp:         op.Op,\n\t\t\t\tOldContent: old,\n\t\t\t})\n\t\t}\n\t}\n\tsort.Slice(newEops, func(i, j int) bool {\n\t\treturn newEops[i].Op.LessThan(&newEops[j].Op)\n\t})\n\treturn newEops, nil\n}\n\nfunc opKey(op crdt.Operation) string {\n\treturn fmt.Sprintf(\"%d_%s_%s\", op.Lamport, op.NodeID.String(), op.LineID.String())\n}\n\nfunc buildDocStates(repoPath, stream string) map[uuid.UUID]map[uuid.UUID]string {\n\tres := make(map[uuid.UUID]map[uuid.UUID]string)\n\troot := filepath.Join(repoPath, \".evo\", \"ops\", stream)\n\tif err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tif !d.IsDir() && strings.HasSuffix(path, \".bin\") {\n\t\t\tfn := filepath.Base(path)\n\t\t\tfidStr := strings.TrimSuffix(fn, \".bin\")\n\t\t\tfid, err := uuid.Parse(fidStr)\n\t\t\tif err == nil {\n\t\t\t\tops2, _ := ops.LoadAllOps(path)\n\t\t\t\tdoc := crdt.NewRGA()\n\t\t\t\tfor _, op := range ops2 {\n\t\t\t\t\tdoc.Apply(op)\n\t\t\t\t}\n\t\t\t\tres[fid] = doc.LineMap()\n\t\t\t}\n\t\t}\n\t\treturn nil\n\t}); err != nil && !os.IsNotExist(err) {\n\t\treturn nil\n\t}\n\treturn res\n}\n\nfunc findOldContent(ds map[uuid.UUID]map[uuid.UUID]string, 
lineID uuid.UUID) string {\n\tfor _, linesMap := range ds {\n\t\tif txt, ok := linesMap[lineID]; ok {\n\t\t\treturn txt\n\t\t}\n\t}\n\treturn \"\"\n}\n\n// ListCommits returns all commits in a stream, sorted by timestamp\nfunc ListCommits(repoPath, stream string) ([]types.Commit, error) {\n\tcommitDir := filepath.Join(repoPath, \".evo\", \"commits\", stream)\n\tentries, err := os.ReadDir(commitDir)\n\tif err != nil {\n\t\tif os.IsNotExist(err) {\n\t\t\treturn nil, nil\n\t\t}\n\t\treturn nil, fmt.Errorf(\"failed to read commit directory: %w\", err)\n\t}\n\n\tvar commits []types.Commit\n\tfor _, entry := range entries {\n\t\tif !entry.IsDir() && strings.HasSuffix(entry.Name(), \".bin\") {\n\t\t\tcommit, err := LoadCommit(repoPath, stream, strings.TrimSuffix(entry.Name(), \".bin\"))\n\t\t\tif err != nil {\n\t\t\t\treturn nil, fmt.Errorf(\"failed to load commit %s: %w\", entry.Name(), err)\n\t\t\t}\n\t\t\tcommits = append(commits, *commit)\n\t\t}\n\t}\n\n\t// Sort by timestamp\n\tsort.Slice(commits, func(i, j int) bool {\n\t\treturn commits[i].Timestamp.Before(commits[j].Timestamp)\n\t})\n\n\treturn commits, nil\n}\n\nfunc saveCommit(repoPath string, c *types.Commit) error {\n\tdir := filepath.Join(repoPath, \".evo\", \"commits\", c.Stream)\n\tif err := os.MkdirAll(dir, 0755); err != nil {\n\t\treturn err\n\t}\n\tfp := filepath.Join(dir, c.ID+\".bin\")\n\tb, _ := json.Marshal(c)\n\tsz := make([]byte, 4)\n\tbinary.BigEndian.PutUint32(sz, uint32(len(b)))\n\tf, err := os.Create(fp)\n\tif err != nil {\n\t\treturn err\n\t}\n\tdefer f.Close()\n\tf.Write(sz)\n\tf.Write(b)\n\treturn nil\n}\n\nfunc SaveCommitFile(dir string, c *types.Commit) error {\n\tif err := os.MkdirAll(dir, 0755); err != nil {\n\t\treturn err\n\t}\n\tfp := filepath.Join(dir, c.ID+\".bin\")\n\tb, _ := json.Marshal(c)\n\tsz := make([]byte, 4)\n\tbinary.BigEndian.PutUint32(sz, uint32(len(b)))\n\tf, err := os.Create(fp)\n\tif err != nil {\n\t\treturn err\n\t}\n\tdefer 
f.Close()\n\tf.Write(sz)\n\tf.Write(b)\n\treturn nil\n}\n\nfunc loadCommit(fp string) (*types.Commit, error) {\n\tf, err := os.Open(fp)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tdefer f.Close()\n\tszBuf := make([]byte, 4)\n\tif _, err := f.Read(szBuf); err != nil {\n\t\treturn nil, err\n\t}\n\tsz := binary.BigEndian.Uint32(szBuf)\n\tdata := make([]byte, sz)\n\tif _, err := f.Read(data); err != nil {\n\t\treturn nil, err\n\t}\n\tvar c types.Commit\n\tif err := json.Unmarshal(data, &c); err != nil {\n\t\treturn nil, err\n\t}\n\treturn &c, nil\n}\n\n// RevertCommit creates a new commit that reverts the changes in the specified commit\nfunc RevertCommit(repoPath, stream, commitID string) (*types.Commit, error) {\n\ttarget, err := LoadCommit(repoPath, stream, commitID)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"failed to load commit %s: %w\", commitID, err)\n\t}\n\n\t// Generate inverse operations\n\tinverted, err := invertOps(target.Operations)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"failed to invert operations: %w\", err)\n\t}\n\n\t// Create revert commit\n\trevert := &types.Commit{\n\t\tID:          uuid.New().String(),\n\t\tStream:      stream,\n\t\tMessage:     fmt.Sprintf(\"Revert commit %s\", commitID),\n\t\tAuthorName:  target.AuthorName,\n\t\tAuthorEmail: target.AuthorEmail,\n\t\tTimestamp:   time.Now().UTC(),\n\t\tOperations:  inverted,\n\t}\n\n\t// Save revert commit\n\tif err := SaveCommit(repoPath, revert); err != nil {\n\t\treturn nil, fmt.Errorf(\"failed to save revert commit: %w\", err)\n\t}\n\n\treturn revert, nil\n}\n\n// invertOps generates inverse operations for a commit\nfunc invertOps(ops []types.ExtendedOp) ([]types.ExtendedOp, error) {\n\tvar inverted []types.ExtendedOp\n\n\t// Process operations in reverse order\n\tfor i := len(ops) - 1; i >= 0; i-- {\n\t\top := ops[i]\n\t\tswitch op.Op.Type {\n\t\tcase crdt.OpInsert:\n\t\t\t// Invert insert -> delete\n\t\t\tinverted = append(inverted, types.ExtendedOp{\n\t\t\t\tOp: 
crdt.Operation{\n\t\t\t\t\tType:      crdt.OpDelete,\n\t\t\t\t\tLineID:    op.Op.LineID,\n\t\t\t\t\tTimestamp: time.Now(),\n\t\t\t\t},\n\t\t\t})\n\n\t\tcase crdt.OpDelete:\n\t\t\t// Invert delete -> insert with original content\n\t\t\tif op.Op.Content == \"\" {\n\t\t\t\treturn nil, fmt.Errorf(\"cannot revert delete operation: missing original content\")\n\t\t\t}\n\t\t\tinverted = append(inverted, types.ExtendedOp{\n\t\t\t\tOp: crdt.Operation{\n\t\t\t\t\tType:      crdt.OpInsert,\n\t\t\t\t\tLineID:    op.Op.LineID,\n\t\t\t\t\tContent:   op.Op.Content,\n\t\t\t\t\tTimestamp: time.Now(),\n\t\t\t\t},\n\t\t\t})\n\n\t\tcase crdt.OpUpdate:\n\t\t\t// Invert update -> update with old content\n\t\t\tif op.OldContent == \"\" {\n\t\t\t\treturn nil, fmt.Errorf(\"cannot revert update operation: missing old content\")\n\t\t\t}\n\t\t\tinverted = append(inverted, types.ExtendedOp{\n\t\t\t\tOp: crdt.Operation{\n\t\t\t\t\tType:      crdt.OpUpdate,\n\t\t\t\t\tLineID:    op.Op.LineID,\n\t\t\t\t\tContent:   op.OldContent,\n\t\t\t\t\tTimestamp: time.Now(),\n\t\t\t\t},\n\t\t\t\tOldContent: op.Op.Content,\n\t\t\t})\n\t\t}\n\t}\n\n\treturn inverted, nil\n}\n\nfunc newLamport() uint64 {\n\treturn uint64(time.Now().UnixNano())\n}\n\nfunc applyOps(repoPath, stream string, eops []ExtendedOp) error {\n\t// for each extended op, append to .evo/ops/<stream>/<fileID>.bin\n\topsRoot := filepath.Join(repoPath, \".evo\", \"ops\", stream)\n\tif err := os.MkdirAll(opsRoot, 0755); err != nil {\n\t\treturn err\n\t}\n\tfor _, eop := range eops {\n\t\tfid := eop.Op.FileID.String()\n\t\tbinFile := filepath.Join(opsRoot, fid+\".bin\")\n\t\tif err := ops.AppendOp(binFile, eop.Op); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\treturn nil\n}\n\n// For signing\nfunc CommitHashString(c *types.Commit) string {\n\t// stable representation => ID + stream + message + etc\n\th := 
sha256.New()\n\th.Write([]byte(c.ID))\n\th.Write([]byte(c.Stream))\n\th.Write([]byte(c.Message))\n\th.Write([]byte(c.AuthorName))\n\th.Write([]byte(c.AuthorEmail))\n\th.Write([]byte(c.Timestamp.String()))\n\tfor _, eop := range c.Operations {\n\t\t// incorporate lamport, node, lineID, content, oldContent\n\t\th.Write([]byte(fmt.Sprintf(\"%d_%s_%s_%s_old=%s\",\n\t\t\teop.Op.Lamport, eop.Op.NodeID, eop.Op.LineID, eop.Op.Content, eop.OldContent)))\n\t}\n\treturn fmt.Sprintf(\"%x\", h.Sum(nil))\n}\n"
  },
  {
    "path": "internal/commits/commits_test.go",
    "content": "package commits\n\nimport (\n\t\"evo/internal/crdt\"\n\t\"evo/internal/types\"\n\t\"path/filepath\"\n\t\"testing\"\n\n\t\"evo/internal/config\"\n\t\"evo/internal/signing\"\n)\n\nfunc TestRevertCommit(t *testing.T) {\n\t// Create temp directory for test\n\ttestDir := t.TempDir()\n\n\tt.Run(\"Revert_Insert\", func(t *testing.T) {\n\t\t// Create original commit with insert operation\n\t\tops := []types.ExtendedOp{\n\t\t\t{Op: crdt.Operation{Type: crdt.OpInsert, Content: \"test\"}},\n\t\t}\n\t\tcommit, err := CreateCommit(testDir, \"main\", \"Test commit\", \"Test User\", \"test@example.com\", ops, false)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Failed to create commit: %v\", err)\n\t\t}\n\n\t\t// Revert the commit\n\t\trevertCommit, err := RevertCommit(testDir, \"main\", commit.ID)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Failed to revert commit: %v\", err)\n\t\t}\n\n\t\t// Verify revert operations\n\t\tif len(revertCommit.Operations) != len(commit.Operations) {\n\t\t\tt.Errorf(\"Expected %d operations, got %d\", len(commit.Operations), len(revertCommit.Operations))\n\t\t}\n\n\t\t// Check that insert was reverted to delete\n\t\tif revertCommit.Operations[0].Op.Type != crdt.OpDelete {\n\t\t\tt.Error(\"Expected delete operation in revert commit\")\n\t\t}\n\t})\n\n\tt.Run(\"Revert_Delete\", func(t *testing.T) {\n\t\t// Create original commit with delete operation\n\t\tops := []types.ExtendedOp{\n\t\t\t{Op: crdt.Operation{Type: crdt.OpDelete, Content: \"test\"}},\n\t\t}\n\t\tcommit, err := CreateCommit(testDir, \"main\", \"Test commit\", \"Test User\", \"test@example.com\", ops, false)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Failed to create commit: %v\", err)\n\t\t}\n\n\t\t// Revert the commit\n\t\trevertCommit, err := RevertCommit(testDir, \"main\", commit.ID)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Failed to revert commit: %v\", err)\n\t\t}\n\n\t\t// Verify revert operations\n\t\tif len(revertCommit.Operations) != len(commit.Operations) 
{\n\t\t\tt.Errorf(\"Expected %d operations, got %d\", len(commit.Operations), len(revertCommit.Operations))\n\t\t}\n\n\t\t// Check that delete was reverted to insert with original content\n\t\tif revertCommit.Operations[0].Op.Type != crdt.OpInsert {\n\t\t\tt.Error(\"Expected insert operation in revert commit\")\n\t\t}\n\t\tif revertCommit.Operations[0].Op.Content != commit.Operations[0].Op.Content {\n\t\t\tt.Error(\"Content not preserved in revert operation\")\n\t\t}\n\t})\n\n\tt.Run(\"Revert_Update\", func(t *testing.T) {\n\t\t// Create original commit with update operation\n\t\toldContent := \"old\"\n\t\tnewContent := \"new\"\n\t\tops := []types.ExtendedOp{\n\t\t\t{\n\t\t\t\tOp: crdt.Operation{\n\t\t\t\t\tType:    crdt.OpUpdate,\n\t\t\t\t\tContent: newContent,\n\t\t\t\t},\n\t\t\t\tOldContent: oldContent,\n\t\t\t},\n\t\t}\n\t\tcommit, err := CreateCommit(testDir, \"main\", \"Test commit\", \"Test User\", \"test@example.com\", ops, false)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Failed to create commit: %v\", err)\n\t\t}\n\n\t\t// Revert the commit\n\t\trevertCommit, err := RevertCommit(testDir, \"main\", commit.ID)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Failed to revert commit: %v\", err)\n\t\t}\n\n\t\t// Verify revert operations\n\t\tif len(revertCommit.Operations) != len(commit.Operations) {\n\t\t\tt.Errorf(\"Expected %d operations, got %d\", len(commit.Operations), len(revertCommit.Operations))\n\t\t}\n\n\t\t// Check that update was reverted with old content\n\t\tif revertCommit.Operations[0].Op.Type != crdt.OpUpdate {\n\t\t\tt.Error(\"Expected update operation in revert commit\")\n\t\t}\n\t\tif revertCommit.Operations[0].Op.Content != oldContent {\n\t\t\tt.Error(\"Old content not restored in revert operation\")\n\t\t}\n\t})\n\n\tt.Run(\"Revert_Multiple_Operations\", func(t *testing.T) {\n\t\t// Create original commit with multiple operations\n\t\tops := []types.ExtendedOp{\n\t\t\t{Op: crdt.Operation{Type: crdt.OpInsert, Content: \"test1\"}},\n\t\t\t{Op: 
crdt.Operation{Type: crdt.OpInsert, Content: \"test2\"}},\n\t\t}\n\t\tcommit, err := CreateCommit(testDir, \"main\", \"Test commit\", \"Test User\", \"test@example.com\", ops, false)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Failed to create commit: %v\", err)\n\t\t}\n\n\t\t// Revert the commit\n\t\trevertCommit, err := RevertCommit(testDir, \"main\", commit.ID)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Failed to revert commit: %v\", err)\n\t\t}\n\n\t\t// Verify revert operations\n\t\tif len(revertCommit.Operations) != len(commit.Operations) {\n\t\t\tt.Errorf(\"Expected %d operations, got %d\", len(commit.Operations), len(revertCommit.Operations))\n\t\t}\n\n\t\t// Check that operations were reverted in reverse order\n\t\tfor i := 0; i < len(revertCommit.Operations); i++ {\n\t\t\tif revertCommit.Operations[i].Op.Type != crdt.OpDelete {\n\t\t\t\tt.Error(\"Expected delete operation in revert commit\")\n\t\t\t}\n\t\t}\n\t})\n}\n\nfunc TestSignedCommits(t *testing.T) {\n\t// Create temp directory for test\n\ttmpDir := t.TempDir()\n\tkeyPath := filepath.Join(tmpDir, \"signing_key\")\n\n\t// Set up config for test\n\terr := config.SetConfigValue(tmpDir, \"signing.keyPath\", keyPath)\n\tif err != nil {\n\t\tt.Fatalf(\"Failed to set config value: %v\", err)\n\t}\n\n\t// Generate key pair for signing\n\terr = signing.GenerateKeyPair(tmpDir)\n\tif err != nil {\n\t\tt.Fatalf(\"Failed to generate key pair: %v\", err)\n\t}\n\n\tt.Run(\"Create_Signed_Commit\", func(t *testing.T) {\n\t\tops := []types.ExtendedOp{\n\t\t\t{Op: crdt.Operation{Type: crdt.OpInsert, Content: \"test\"}},\n\t\t}\n\n\t\tcommit, err := CreateCommit(tmpDir, \"main\", \"Test commit\", \"Test User\", \"test@example.com\", ops, true)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Failed to create signed commit: %v\", err)\n\t\t}\n\n\t\tif commit.Signature == \"\" {\n\t\t\tt.Error(\"Commit not signed\")\n\t\t}\n\n\t\t// Load and verify commit\n\t\tloaded, err := LoadCommit(tmpDir, \"main\", commit.ID)\n\t\tif err != nil 
{\n\t\t\tt.Fatalf(\"Failed to load commit: %v\", err)\n\t\t}\n\n\t\tif loaded.Signature != commit.Signature {\n\t\t\tt.Error(\"Loaded commit signature does not match\")\n\t\t}\n\t})\n\n\tt.Run(\"Create_Unsigned_Commit\", func(t *testing.T) {\n\t\tops := []types.ExtendedOp{\n\t\t\t{Op: crdt.Operation{Type: crdt.OpInsert, Content: \"test\"}},\n\t\t}\n\n\t\tcommit, err := CreateCommit(tmpDir, \"main\", \"Test commit\", \"Test User\", \"test@example.com\", ops, false)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Failed to create unsigned commit: %v\", err)\n\t\t}\n\n\t\tif commit.Signature != \"\" {\n\t\t\tt.Error(\"Unsigned commit has signature\")\n\t\t}\n\n\t\t// List commits and verify both signed and unsigned are present\n\t\tcommits, err := ListCommits(tmpDir, \"main\")\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Failed to list commits: %v\", err)\n\t\t}\n\n\t\tif len(commits) != 2 {\n\t\t\tt.Errorf(\"Expected 2 commits, got %d\", len(commits))\n\t\t}\n\t})\n\n\tt.Run(\"Invalid_Signature\", func(t *testing.T) {\n\t\tcommit := &types.Commit{\n\t\t\tID:        \"test\",\n\t\t\tStream:    \"main\",\n\t\t\tMessage:   \"Test commit\",\n\t\t\tSignature: \"invalid\",\n\t\t}\n\n\t\t// Save commit with invalid signature\n\t\terr := SaveCommit(tmpDir, commit)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Failed to save commit: %v\", err)\n\t\t}\n\n\t\t// Try to load commit - should fail verification\n\t\t_, err = LoadCommit(tmpDir, \"main\", commit.ID)\n\t\tif err == nil {\n\t\t\tt.Error(\"Expected error loading commit with invalid signature\")\n\t\t}\n\t})\n}\n"
  },
  {
    "path": "internal/config/config.go",
    "content": "package config\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"os\"\n\t\"path/filepath\"\n\n\t\"github.com/pelletier/go-toml\"\n)\n\n// For example: user.name, user.email, signing.keyPath, files.largeThreshold, verifySignatures\n\nfunc globalConfigPath() (string, error) {\n\thome, err := os.UserHomeDir()\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\tcfgDir := filepath.Join(home, \".config\", \"evo\")\n\tif err := os.MkdirAll(cfgDir, 0755); err != nil {\n\t\treturn \"\", err\n\t}\n\treturn filepath.Join(cfgDir, \"config.toml\"), nil\n}\n\nfunc repoConfigPath(repoPath string) string {\n\treturn filepath.Join(repoPath, \".evo\", \"config\", \"config.toml\")\n}\n\nfunc loadToml(path string) (*toml.Tree, error) {\n\tif _, err := os.Stat(path); os.IsNotExist(err) {\n\t\ttree, err := toml.TreeFromMap(map[string]interface{}{})\n\t\tif err != nil {\n\t\t\treturn nil, fmt.Errorf(\"failed to create empty config: %w\", err)\n\t\t}\n\t\treturn tree, nil\n\t}\n\tb, err := os.ReadFile(path)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn toml.LoadBytes(b)\n}\n\nfunc saveToml(tree *toml.Tree, path string) error {\n\treturn os.WriteFile(path, []byte(tree.String()), 0644)\n}\n\n// SetGlobalConfigValue sets key=val in ~/.config/evo/config.toml\nfunc SetGlobalConfigValue(key, val string) error {\n\tgp, err := globalConfigPath()\n\tif err != nil {\n\t\treturn err\n\t}\n\ttree, err := loadToml(gp)\n\tif err != nil {\n\t\treturn err\n\t}\n\ttree.Set(key, val)\n\treturn saveToml(tree, gp)\n}\n\n// SetRepoConfigValue sets key=val in .evo/config/config.toml\nfunc SetRepoConfigValue(repoPath, key, val string) error {\n\trp := repoConfigPath(repoPath)\n\ttree, err := loadToml(rp)\n\tif err != nil {\n\t\treturn err\n\t}\n\ttree.Set(key, val)\n\treturn saveToml(tree, rp)\n}\n\n// GetConfigValue retrieves a value from the config file\nfunc GetConfigValue(repoPath, key string) (string, error) {\n\tconfig, err := loadConfig(repoPath)\n\tif err != nil {\n\t\treturn \"\", 
err\n\t}\n\n\tvalue, ok := config[key]\n\tif !ok {\n\t\treturn \"\", fmt.Errorf(\"no config value for %s\", key)\n\t}\n\n\treturn value, nil\n}\n\n// SetConfigValue stores a value in the config file\nfunc SetConfigValue(repoPath, key, value string) error {\n\tconfig, err := loadConfig(repoPath)\n\tif err != nil {\n\t\t// If config doesn't exist, create new map\n\t\tconfig = make(map[string]string)\n\t}\n\n\tconfig[key] = value\n\n\t// Ensure .evo directory exists\n\tconfigDir := filepath.Join(repoPath, \".evo\")\n\tif err := os.MkdirAll(configDir, 0755); err != nil {\n\t\treturn fmt.Errorf(\"failed to create config directory: %w\", err)\n\t}\n\n\t// Write updated config\n\tconfigPath := filepath.Join(configDir, \"config.json\")\n\tdata, err := json.MarshalIndent(config, \"\", \"  \")\n\tif err != nil {\n\t\treturn fmt.Errorf(\"failed to marshal config: %w\", err)\n\t}\n\n\tif err := os.WriteFile(configPath, data, 0644); err != nil {\n\t\treturn fmt.Errorf(\"failed to write config file: %w\", err)\n\t}\n\n\treturn nil\n}\n\nfunc loadConfig(repoPath string) (map[string]string, error) {\n\tconfigPath := filepath.Join(repoPath, \".evo\", \"config.json\")\n\tdata, err := os.ReadFile(configPath)\n\tif err != nil {\n\t\tif os.IsNotExist(err) {\n\t\t\treturn nil, fmt.Errorf(\"config file does not exist\")\n\t\t}\n\t\treturn nil, fmt.Errorf(\"failed to read config file: %w\", err)\n\t}\n\n\tvar config map[string]string\n\tif err := json.Unmarshal(data, &config); err != nil {\n\t\treturn nil, fmt.Errorf(\"failed to parse config file: %w\", err)\n\t}\n\n\treturn config, nil\n}\n"
  },
  {
    "path": "internal/crdt/compact/compact.go",
    "content": "package compact\n\nimport (\n\t\"evo/internal/crdt\"\n\t\"sort\"\n\t\"time\"\n\n\t\"github.com/google/uuid\"\n)\n\n// CompactOperations compacts a list of operations by:\n// 1. Pruning old tombstones\n// 2. Collapsing multiple operations on the same line into a single op\n// 3. Removing redundant operations that don't affect the final state\nfunc CompactOperations(ops []crdt.Operation, cfg *Config) []crdt.Operation {\n\tif len(ops) < cfg.MaxOps {\n\t\treturn ops\n\t}\n\n\t// Build a map of lineID to its operations\n\tlineOps := make(map[uuid.UUID][]crdt.Operation)\n\tfor _, op := range ops {\n\t\tlineOps[op.LineID] = append(lineOps[op.LineID], op)\n\t}\n\n\tvar compacted []crdt.Operation\n\tnow := time.Now()\n\n\tfor _, lineHistory := range lineOps {\n\t\t// Sort operations by lamport timestamp\n\t\tsortOps(lineHistory)\n\n\t\t// Keep only the latest operation for each line\n\t\tfinalOp := lineHistory[len(lineHistory)-1]\n\t\t\n\t\t// Skip old tombstones\n\t\tif finalOp.Type == crdt.OpDelete {\n\t\t\tage := now.Sub(finalOp.Timestamp)\n\t\t\tif age > cfg.TombstoneTTL {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t}\n\n\t\tcompacted = append(compacted, finalOp)\n\t}\n\n\t// Sort compacted operations\n\tsortOps(compacted)\n\n\t// Ensure we keep minimum number of ops\n\tif len(compacted) < cfg.MinOpsToKeep {\n\t\treturn ops[:cfg.MinOpsToKeep]\n\t}\n\n\treturn compacted\n}\n\n// sortOps sorts operations by lamport timestamp and nodeID\nfunc sortOps(ops []crdt.Operation) {\n\tsort.Slice(ops, func(i, j int) bool {\n\t\treturn ops[i].LessThan(&ops[j])\n\t})\n}\n\n// CompactRGA creates a new RGA with compacted operations\nfunc CompactRGA(rga *crdt.RGA, cfg *Config) *crdt.RGA {\n\tops := rga.GetOperations()\n\tcompacted := CompactOperations(ops, cfg)\n\n\tnewRGA := crdt.NewRGA()\n\tfor _, op := range compacted {\n\t\tif err := newRGA.Apply(op); err != nil {\n\t\t\t// Log error but continue\n\t\t\tcontinue\n\t\t}\n\t}\n\n\treturn newRGA\n}\n"
  },
  {
    "path": "internal/crdt/compact/config.go",
    "content": "package compact\n\nimport \"time\"\n\n// Config defines thresholds for when to perform compaction\ntype Config struct {\n\t// Maximum number of operations before triggering compaction\n\tMaxOps int\n\t// Maximum age of tombstones before pruning\n\tTombstoneTTL time.Duration\n\t// Minimum number of operations to keep after compaction\n\tMinOpsToKeep int\n\t// How often to run compaction\n\tCompactionInterval time.Duration\n}\n\n// DefaultConfig returns sensible defaults for compaction\nfunc DefaultConfig() *Config {\n\treturn &Config{\n\t\tMaxOps:             10000,                // Compact when we have more than 10k ops\n\t\tTombstoneTTL:       7 * 24 * time.Hour,  // Keep tombstones for 1 week\n\t\tMinOpsToKeep:       1000,                // Keep at least 1k ops after compaction\n\t\tCompactionInterval: 1 * time.Hour,       // Run compaction every hour\n\t}\n}\n"
  },
  {
    "path": "internal/crdt/compact/service.go",
    "content": "package compact\n\nimport (\n\t\"encoding/binary\"\n\t\"encoding/json\"\n\t\"evo/internal/crdt\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"strings\"\n\t\"sync\"\n\t\"time\"\n)\n\n// CompactionService manages operation compaction and tombstone pruning\ntype CompactionService struct {\n\trepoPath string\n\tconfig   *Config\n\tmu       sync.RWMutex\n\tdone     chan struct{}\n}\n\n// NewCompactionService creates a new compaction service\nfunc NewCompactionService(repoPath string, config *Config) *CompactionService {\n\tif config == nil {\n\t\tconfig = DefaultConfig()\n\t}\n\treturn &CompactionService{\n\t\trepoPath: repoPath,\n\t\tconfig:   config,\n\t\tdone:     make(chan struct{}),\n\t}\n}\n\n// Start begins the compaction service\nfunc (s *CompactionService) Start() error {\n\ts.mu.Lock()\n\tdefer s.mu.Unlock()\n\n\t// Create ticker for periodic compaction\n\tticker := time.NewTicker(s.config.CompactionInterval)\n\n\t// Start background goroutine\n\tgo func() {\n\t\tfor {\n\t\t\tselect {\n\t\t\tcase <-ticker.C:\n\t\t\t\tif err := s.CompactOperations(); err != nil {\n\t\t\t\t\t// Log error but continue running\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\tif err := s.PruneTombstones(); err != nil {\n\t\t\t\t\t// Log error but continue running\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\tcase <-s.done:\n\t\t\t\tticker.Stop()\n\t\t\t\treturn\n\t\t\t}\n\t\t}\n\t}()\n\n\treturn nil\n}\n\n// Stop stops the compaction service\nfunc (s *CompactionService) Stop() {\n\tclose(s.done)\n}\n\n// CompactOperations compacts operations by combining sequential operations\nfunc (s *CompactionService) CompactOperations() error {\n\ts.mu.Lock()\n\tdefer s.mu.Unlock()\n\n\topsDir := filepath.Join(s.repoPath, \".evo\", \"ops\")\n\tstreams, err := os.ReadDir(opsDir)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tfor _, stream := range streams {\n\t\tif !stream.IsDir() {\n\t\t\tcontinue\n\t\t}\n\n\t\tstreamDir := filepath.Join(opsDir, stream.Name())\n\t\tfiles, err := 
os.ReadDir(streamDir)\n\t\tif err != nil {\n\t\t\tcontinue\n\t\t}\n\n\t\tvar ops []crdt.Operation\n\t\tfor _, f := range files {\n\t\t\tif !strings.HasSuffix(f.Name(), \".bin\") {\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\tdata, err := os.ReadFile(filepath.Join(streamDir, f.Name()))\n\t\t\tif err != nil {\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\t// Read size prefix\n\t\t\tif len(data) < 4 {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tsize := binary.BigEndian.Uint32(data[:4])\n\t\t\tif len(data) < int(4+size) {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\topData := data[4 : 4+size]\n\n\t\t\tvar op crdt.Operation\n\t\t\tif err := json.Unmarshal(opData, &op); err != nil {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tops = append(ops, op)\n\t\t}\n\n\t\tif len(ops) < s.config.MaxOps {\n\t\t\tcontinue\n\t\t}\n\n\t\t// Collapse runs of operations on the same line, keeping only the\n\t\t// latest op. Mutating the index or the slice inside a range loop has\n\t\t// no effect on iteration, so use an explicit index instead.\n\t\tfor i := 1; i < len(ops); {\n\t\t\tif ops[i-1].LineID == ops[i].LineID {\n\t\t\t\t// Combine with previous operation\n\t\t\t\tops[i-1].Content = ops[i].Content\n\t\t\t\tops[i-1].Lamport = ops[i].Lamport\n\t\t\t\tops[i-1].Timestamp = ops[i].Timestamp\n\t\t\t\tops = append(ops[:i], ops[i+1:]...)\n\t\t\t} else {\n\t\t\t\ti++\n\t\t\t}\n\t\t}\n\n\t\t// Write compacted operations back\n\t\tcompacted := make([]crdt.Operation, 0, len(ops))\n\t\tfor _, op := range ops {\n\t\t\tif op.Type != crdt.OpDelete || time.Since(op.Timestamp) <= s.config.TombstoneTTL {\n\t\t\t\tcompacted = append(compacted, op)\n\t\t\t}\n\t\t}\n\n\t\t// Save compacted operations\n\t\tfor _, op := range compacted {\n\t\t\tdata, err := json.Marshal(op)\n\t\t\tif err != nil {\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\t// Write size prefix followed by data\n\t\t\topPath := filepath.Join(streamDir, op.LineID.String()+\".bin\")\n\t\t\tf, err := os.Create(opPath)\n\t\t\tif err != nil {\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\t// Write 4-byte size prefix\n\t\t\tsize := uint32(len(data))\n\t\t\tvar sizeBuf [4]byte\n\t\t\tbinary.BigEndian.PutUint32(sizeBuf[:], size)\n\t\t\tif _, err := f.Write(sizeBuf[:]); err != 
nil {\n\t\t\t\tf.Close()\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\t// Write operation data\n\t\t\tif _, err := f.Write(data); err != nil {\n\t\t\t\tf.Close()\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tf.Close()\n\t\t}\n\n\t\t// Remove old operations\n\t\tfor _, op := range ops {\n\t\t\tfound := false\n\t\t\tfor _, c := range compacted {\n\t\t\t\tif c.LineID == op.LineID {\n\t\t\t\t\tfound = true\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif !found {\n\t\t\t\tos.Remove(filepath.Join(streamDir, op.LineID.String()+\".bin\"))\n\t\t\t}\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// PruneTombstones removes old tombstones\nfunc (s *CompactionService) PruneTombstones() error {\n\ts.mu.Lock()\n\tdefer s.mu.Unlock()\n\n\topsDir := filepath.Join(s.repoPath, \".evo\", \"ops\")\n\tstreams, err := os.ReadDir(opsDir)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tcutoff := time.Now().Add(-s.config.TombstoneTTL)\n\n\tfor _, stream := range streams {\n\t\tif !stream.IsDir() {\n\t\t\tcontinue\n\t\t}\n\n\t\tstreamDir := filepath.Join(opsDir, stream.Name())\n\t\tfiles, err := os.ReadDir(streamDir)\n\t\tif err != nil {\n\t\t\tcontinue\n\t\t}\n\n\t\tvar ops []crdt.Operation\n\t\tvar filesToRemove []string\n\n\t\t// Read all operations in this stream\n\t\tfor _, f := range files {\n\t\t\tif !strings.HasSuffix(f.Name(), \".bin\") {\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\tdata, err := os.ReadFile(filepath.Join(streamDir, f.Name()))\n\t\t\tif err != nil {\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\t// Read size prefix\n\t\t\tif len(data) < 4 {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tsize := binary.BigEndian.Uint32(data[:4])\n\t\t\tif len(data) < int(4+size) {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\topData := data[4 : 4+size]\n\n\t\t\tvar op crdt.Operation\n\t\t\tif err := json.Unmarshal(opData, &op); err != nil {\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\t// Keep non-delete operations and recent tombstones\n\t\t\tif op.Type != crdt.OpDelete || op.Timestamp.After(cutoff) {\n\t\t\t\tops = append(ops, op)\n\t\t\t} else {\n\t\t\t\tfilesToRemove = 
append(filesToRemove, f.Name())\n\t\t\t}\n\t\t}\n\n\t\t// Remove old tombstones\n\t\tfor _, name := range filesToRemove {\n\t\t\tif err := os.Remove(filepath.Join(streamDir, name)); err != nil && !os.IsNotExist(err) {\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\n\t\t// Write remaining operations back\n\t\tfor _, op := range ops {\n\t\t\tdata, err := json.Marshal(op)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\n\t\t\t// Write size prefix followed by data\n\t\t\topPath := filepath.Join(streamDir, op.LineID.String()+\".bin\")\n\t\t\ttempPath := opPath + \".tmp\"\n\t\t\tf, err := os.Create(tempPath)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\n\t\t\t// Write 4-byte size prefix\n\t\t\tsize := uint32(len(data))\n\t\t\tvar sizeBuf [4]byte\n\t\t\tbinary.BigEndian.PutUint32(sizeBuf[:], size)\n\t\t\tif _, err := f.Write(sizeBuf[:]); err != nil {\n\t\t\t\tf.Close()\n\t\t\t\tos.Remove(tempPath)\n\t\t\t\treturn err\n\t\t\t}\n\n\t\t\t// Write operation data\n\t\t\tif _, err := f.Write(data); err != nil {\n\t\t\t\tf.Close()\n\t\t\t\tos.Remove(tempPath)\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tf.Close()\n\n\t\t\t// Atomically replace the old file with the new one\n\t\t\tif err := os.Rename(tempPath, opPath); err != nil {\n\t\t\t\tos.Remove(tempPath)\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\n\t\t// Remove any remaining files that weren't rewritten\n\t\tfor _, f := range files {\n\t\t\tif !strings.HasSuffix(f.Name(), \".bin\") {\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\tfound := false\n\t\t\tfor _, op := range ops {\n\t\t\t\tif f.Name() == op.LineID.String()+\".bin\" {\n\t\t\t\t\tfound = true\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif !found {\n\t\t\t\tif err := os.Remove(filepath.Join(streamDir, f.Name())); err != nil && !os.IsNotExist(err) {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\treturn nil\n}\n"
  },
  {
    "path": "internal/crdt/compact/service_test.go",
    "content": "package compact\n\nimport (\n\t\"encoding/binary\"\n\t\"encoding/json\"\n\t\"evo/internal/crdt\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/google/uuid\"\n)\n\nfunc TestCompactionService(t *testing.T) {\n\t// Create temp directory for testing\n\ttmpDir, err := os.MkdirTemp(\"\", \"evo-compact-test-*\")\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tdefer os.RemoveAll(tmpDir)\n\n\t// Create test repository structure\n\trepoPath := filepath.Join(tmpDir, \"test-repo\")\n\tif err := os.MkdirAll(filepath.Join(repoPath, \".evo\", \"ops\"), 0755); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tt.Run(\"Service Lifecycle\", func(t *testing.T) {\n\t\tconfig := &Config{\n\t\t\tCompactionInterval: 100 * time.Millisecond,\n\t\t\tTombstoneTTL:       1 * time.Hour,\n\t\t\tMinOpsToKeep:       10,\n\t\t\tMaxOps:             100,\n\t\t}\n\n\t\tservice := NewCompactionService(repoPath, config)\n\t\tif err := service.Start(); err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\n\t\t// Let it run for a bit\n\t\ttime.Sleep(200 * time.Millisecond)\n\n\t\tservice.Stop()\n\t})\n\n\tt.Run(\"Operation Compaction\", func(t *testing.T) {\n\t\tfileID := uuid.New()\n\t\tlineID := uuid.New()\n\t\tnodeID := uuid.New()\n\n\t\t// Create test operations\n\t\tops := []crdt.Operation{\n\t\t\t{\n\t\t\t\tType:      crdt.OpUpdate,\n\t\t\t\tLamport:   1,\n\t\t\t\tNodeID:    nodeID,\n\t\t\t\tFileID:    fileID,\n\t\t\t\tLineID:    lineID,\n\t\t\t\tContent:   \"value1\",\n\t\t\t\tStream:    \"stream1\",\n\t\t\t\tTimestamp: time.Now().Add(-2 * time.Hour),\n\t\t\t\tVector:    []int64{1, 0, 0},\n\t\t\t},\n\t\t\t{\n\t\t\t\tType:      crdt.OpUpdate,\n\t\t\t\tLamport:   2,\n\t\t\t\tNodeID:    nodeID,\n\t\t\t\tFileID:    fileID,\n\t\t\t\tLineID:    lineID,\n\t\t\t\tContent:   \"value2\",\n\t\t\t\tStream:    \"stream1\",\n\t\t\t\tTimestamp: time.Now().Add(-1 * time.Hour),\n\t\t\t\tVector:    []int64{1, 1, 0},\n\t\t\t},\n\t\t\t{\n\t\t\t\tType:      crdt.OpDelete,\n\t\t\t\tLamport:   
3,\n\t\t\t\tNodeID:    nodeID,\n\t\t\t\tFileID:    fileID,\n\t\t\t\tLineID:    lineID,\n\t\t\t\tStream:    \"stream1\",\n\t\t\t\tTimestamp: time.Now(),\n\t\t\t\tVector:    []int64{1, 1, 1},\n\t\t\t},\n\t\t}\n\n\t\t// Write operations in the service's on-disk format:\n\t\t// .evo/ops/<stream>/<lineID>.bin, each with a 4-byte size prefix\n\t\tstreamDir := filepath.Join(repoPath, \".evo\", \"ops\", \"stream1\")\n\t\tif err := os.MkdirAll(streamDir, 0755); err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\n\t\tfor _, op := range ops {\n\t\t\tdata, err := json.Marshal(op)\n\t\t\tif err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\n\t\t\tvar sizeBuf [4]byte\n\t\t\tbinary.BigEndian.PutUint32(sizeBuf[:], uint32(len(data)))\n\n\t\t\topFile := filepath.Join(streamDir, op.LineID.String()+\".bin\")\n\t\t\tif err := os.WriteFile(opFile, append(sizeBuf[:], data...), 0644); err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\t\t}\n\n\t\t// Run compaction\n\t\tconfig := &Config{\n\t\t\tCompactionInterval: 100 * time.Millisecond,\n\t\t\tTombstoneTTL:       30 * time.Minute,\n\t\t\tMinOpsToKeep:       1,\n\t\t\tMaxOps:             2,\n\t\t}\n\n\t\tservice := NewCompactionService(repoPath, config)\n\t\tif err := service.CompactOperations(); err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\n\t\t// Verify results\n\t\t// TODO: Add verification logic\n\t})\n\n\tt.Run(\"Tombstone Pruning\", func(t *testing.T) {\n\t\tfileID := uuid.New()\n\t\tlineID := uuid.New()\n\t\tnodeID := uuid.New()\n\n\t\t// Create test operations including a tombstone\n\t\tops := []crdt.Operation{\n\t\t\t{\n\t\t\t\tType:      crdt.OpUpdate,\n\t\t\t\tLamport:   1,\n\t\t\t\tNodeID:    nodeID,\n\t\t\t\tFileID:    fileID,\n\t\t\t\tLineID:    lineID,\n\t\t\t\tContent:   \"value1\",\n\t\t\t\tStream:    \"stream1\",\n\t\t\t\tTimestamp: time.Now(),\n\t\t\t\tVector:    []int64{1, 0, 0},\n\t\t\t},\n\t\t\t{\n\t\t\t\tType:      crdt.OpDelete,\n\t\t\t\tLamport:   2,\n\t\t\t\tNodeID:    nodeID,\n\t\t\t\tFileID:    fileID,\n\t\t\t\tLineID:    uuid.New(), // Use a different LineID for the tombstone\n\t\t\t\tStream:    \"stream1\",\n\t\t\t\tTimestamp: time.Now().Add(-2 * time.Hour), // Old tombstone\n\t\t\t\tVector:    []int64{1, 1, 0},\n\t\t\t},\n\t\t}\n\n\t\t// Write operations to disk\n\t\topsDir := filepath.Join(repoPath, 
\".evo\", \"ops\")\n\t\tstreamDir := filepath.Join(opsDir, \"stream1\")\n\t\tif err := os.MkdirAll(streamDir, 0755); err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\n\t\tfor _, op := range ops {\n\t\t\tdata, err := json.Marshal(op)\n\t\t\tif err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\n\t\t\t// Write size prefix followed by data\n\t\t\topFile := filepath.Join(streamDir, op.LineID.String()+\".bin\")\n\t\t\tf, err := os.Create(opFile)\n\t\t\tif err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\n\t\t\t// Write 4-byte size prefix\n\t\t\tsize := uint32(len(data))\n\t\t\tvar sizeBuf [4]byte\n\t\t\tbinary.BigEndian.PutUint32(sizeBuf[:], size)\n\t\t\tif _, err := f.Write(sizeBuf[:]); err != nil {\n\t\t\t\tf.Close()\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\n\t\t\t// Write operation data\n\t\t\tif _, err := f.Write(data); err != nil {\n\t\t\t\tf.Close()\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\t\t\tf.Close()\n\t\t}\n\n\t\t// Create and run compaction service\n\t\tconfig := &Config{\n\t\t\tCompactionInterval: 1 * time.Hour,\n\t\t\tTombstoneTTL:       1 * time.Hour,\n\t\t\tMinOpsToKeep:       1,\n\t\t\tMaxOps:             10,\n\t\t}\n\n\t\tservice := NewCompactionService(repoPath, config)\n\t\tif err := service.PruneTombstones(); err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\n\t\t// Check that old tombstone was removed\n\t\tfiles, err := os.ReadDir(streamDir)\n\t\tif err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\n\t\tif len(files) != 1 {\n\t\t\tt.Errorf(\"Expected 1 operation after pruning, got %d\", len(files))\n\t\t}\n\n\t\t// The remaining operation should be the update\n\t\tfor _, f := range files {\n\t\t\tdata, err := os.ReadFile(filepath.Join(streamDir, f.Name()))\n\t\t\tif err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\n\t\t\t// Read size prefix\n\t\t\tif len(data) < 4 {\n\t\t\t\tt.Fatal(\"Invalid operation file: too short\")\n\t\t\t}\n\t\t\tsize := binary.BigEndian.Uint32(data[:4])\n\t\t\tif len(data) < int(4+size) {\n\t\t\t\tt.Fatalf(\"Invalid operation file: expected %d bytes after size prefix, got 
%d\", size, len(data)-4)\n\t\t\t}\n\t\t\topData := data[4 : 4+size]\n\n\t\t\tvar op crdt.Operation\n\t\t\tif err := json.Unmarshal(opData, &op); err != nil {\n\t\t\t\tt.Fatal(err)\n\t\t\t}\n\n\t\t\tif op.Type == crdt.OpDelete {\n\t\t\t\tt.Error(\"Expected tombstone to be pruned\")\n\t\t\t}\n\t\t}\n\t})\n}\n\nfunc TestCompactionConfig(t *testing.T) {\n\tt.Run(\"Default Config\", func(t *testing.T) {\n\t\tcfg := DefaultConfig()\n\t\tif cfg.MaxOps <= cfg.MinOpsToKeep {\n\t\t\tt.Error(\"MaxOps should be greater than MinOpsToKeep\")\n\t\t}\n\t\tif cfg.TombstoneTTL <= 0 {\n\t\t\tt.Error(\"TombstoneTTL should be positive\")\n\t\t}\n\t})\n\n\tt.Run(\"Custom Config\", func(t *testing.T) {\n\t\tcfg := &Config{\n\t\t\tMaxOps:             5000,\n\t\t\tMinOpsToKeep:       500,\n\t\t\tTombstoneTTL:       48 * time.Hour,\n\t\t\tCompactionInterval: time.Hour,\n\t\t}\n\n\t\tservice := NewCompactionService(\"test-path\", cfg)\n\t\tif service.config.MaxOps != 5000 {\n\t\t\tt.Error(\"Failed to set custom MaxOps\")\n\t\t}\n\t\tif service.config.MinOpsToKeep != 500 {\n\t\t\tt.Error(\"Failed to set custom MinOpsToKeep\")\n\t\t}\n\t\tif service.config.TombstoneTTL != 48*time.Hour {\n\t\t\tt.Error(\"Failed to set custom TombstoneTTL\")\n\t\t}\n\t})\n}\n"
  },
  {
    "path": "internal/crdt/operation.go",
    "content": "package crdt\n\nimport (\n\t\"time\"\n\n\t\"github.com/google/uuid\"\n)\n\n// OpType represents the type of operation\ntype OpType int\n\nconst (\n\tOpInsert OpType = iota\n\tOpUpdate\n\tOpDelete\n)\n\n// Operation represents a CRDT operation\ntype Operation struct {\n\tType      OpType    // Type of operation\n\tLamport   uint64    // Lamport timestamp for ordering\n\tNodeID    uuid.UUID // ID of the node that created this operation\n\tFileID    uuid.UUID // ID of the file being modified\n\tLineID    uuid.UUID // ID of the line being modified\n\tContent   string    // Content for insert/update operations\n\tStream    string    // Stream this operation belongs to\n\tTimestamp time.Time // When the operation occurred\n\tVector    []int64   // Vector clock for causal ordering\n}\n\n// CanCombine checks if two operations can be combined\nfunc (o *Operation) CanCombine(other *Operation) bool {\n\t// Can only combine operations in same stream\n\tif o.Stream != other.Stream {\n\t\treturn false\n\t}\n\n\t// Can only combine operations on same file\n\tif o.FileID != other.FileID {\n\t\treturn false\n\t}\n\n\t// Can't combine deletes\n\tif o.Type == OpDelete || other.Type == OpDelete {\n\t\treturn false\n\t}\n\n\t// Must be sequential in Lamport time\n\treturn o.Lamport < other.Lamport\n}\n\n// Combine merges another operation into this one\nfunc (o *Operation) Combine(other *Operation) {\n\t// Take the latest content and Lamport timestamp\n\to.Content = other.Content\n\to.Lamport = other.Lamport\n\to.Timestamp = other.Timestamp\n\n\t// Extend vector clock if needed\n\tif len(other.Vector) > len(o.Vector) {\n\t\tnewVec := make([]int64, len(other.Vector))\n\t\tcopy(newVec, o.Vector)\n\t\to.Vector = newVec\n\t}\n\n\t// Update vector clock values\n\tfor i := 0; i < len(other.Vector); i++ {\n\t\tif i < len(o.Vector) {\n\t\t\to.Vector[i] = other.Vector[i]\n\t\t}\n\t}\n}\n\n// LessThan compares operations for ordering\nfunc (o *Operation) LessThan(other 
*Operation) bool {\n\tif o.Lamport != other.Lamport {\n\t\treturn o.Lamport < other.Lamport\n\t}\n\treturn o.NodeID.String() < other.NodeID.String()\n}\n"
  },
  {
    "path": "internal/crdt/operation_test.go",
    "content": "package crdt\n\nimport (\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/google/uuid\"\n)\n\nfunc TestOperationCombining(t *testing.T) {\n\tt.Run(\"Same Stream Operations\", func(t *testing.T) {\n\t\tfileID := uuid.New()\n\t\tlineID := uuid.New()\n\t\tnodeID := uuid.New()\n\n\t\top1 := &Operation{\n\t\t\tType:      OpUpdate,\n\t\t\tLamport:   1,\n\t\t\tNodeID:    nodeID,\n\t\t\tFileID:    fileID,\n\t\t\tLineID:    lineID,\n\t\t\tContent:   \"value1\",\n\t\t\tStream:    \"stream1\",\n\t\t\tTimestamp: time.Now(),\n\t\t\tVector:    []int64{1, 0, 0},\n\t\t}\n\n\t\top2 := &Operation{\n\t\t\tType:      OpUpdate,\n\t\t\tLamport:   2,\n\t\t\tNodeID:    nodeID,\n\t\t\tFileID:    fileID,\n\t\t\tLineID:    lineID,\n\t\t\tContent:   \"value2\",\n\t\t\tStream:    \"stream1\",\n\t\t\tTimestamp: op1.Timestamp.Add(time.Second),\n\t\t\tVector:    []int64{1, 1, 0},\n\t\t}\n\n\t\tif !op1.CanCombine(op2) {\n\t\t\tt.Error(\"Expected operations to be combinable\")\n\t\t}\n\n\t\top1.Combine(op2)\n\n\t\tif op1.Content != \"value2\" {\n\t\t\tt.Errorf(\"Expected combined content to be 'value2', got '%s'\", op1.Content)\n\t\t}\n\n\t\tif op1.Vector[1] != 1 {\n\t\t\tt.Errorf(\"Expected vector clock [1] to be 1, got %d\", op1.Vector[1])\n\t\t}\n\t})\n\n\tt.Run(\"Different Stream Operations\", func(t *testing.T) {\n\t\tfileID := uuid.New()\n\t\tlineID := uuid.New()\n\t\tnodeID := uuid.New()\n\n\t\top1 := &Operation{\n\t\t\tType:      OpUpdate,\n\t\t\tLamport:   1,\n\t\t\tNodeID:    nodeID,\n\t\t\tFileID:    fileID,\n\t\t\tLineID:    lineID,\n\t\t\tContent:   \"value1\",\n\t\t\tStream:    \"stream1\",\n\t\t\tTimestamp: time.Now(),\n\t\t\tVector:    []int64{1, 0, 0},\n\t\t}\n\n\t\top2 := &Operation{\n\t\t\tType:      OpUpdate,\n\t\t\tLamport:   2,\n\t\t\tNodeID:    nodeID,\n\t\t\tFileID:    fileID,\n\t\t\tLineID:    lineID,\n\t\t\tContent:   \"value2\",\n\t\t\tStream:    \"stream2\",\n\t\t\tTimestamp: op1.Timestamp.Add(time.Second),\n\t\t\tVector:    []int64{1, 1, 0},\n\t\t}\n\n\t\tif 
op1.CanCombine(op2) {\n\t\t\tt.Error(\"Expected operations from different streams to not be combinable\")\n\t\t}\n\t})\n\n\tt.Run(\"Delete Operations\", func(t *testing.T) {\n\t\tfileID := uuid.New()\n\t\tlineID := uuid.New()\n\t\tnodeID := uuid.New()\n\n\t\top1 := &Operation{\n\t\t\tType:      OpUpdate,\n\t\t\tLamport:   1,\n\t\t\tNodeID:    nodeID,\n\t\t\tFileID:    fileID,\n\t\t\tLineID:    lineID,\n\t\t\tContent:   \"value1\",\n\t\t\tStream:    \"stream1\",\n\t\t\tTimestamp: time.Now(),\n\t\t\tVector:    []int64{1, 0, 0},\n\t\t}\n\n\t\top2 := &Operation{\n\t\t\tType:      OpDelete,\n\t\t\tLamport:   2,\n\t\t\tNodeID:    nodeID,\n\t\t\tFileID:    fileID,\n\t\t\tLineID:    lineID,\n\t\t\tStream:    \"stream1\",\n\t\t\tTimestamp: op1.Timestamp.Add(time.Second),\n\t\t\tVector:    []int64{1, 1, 0},\n\t\t}\n\n\t\tif op1.CanCombine(op2) {\n\t\t\tt.Error(\"Expected delete operations to not be combinable\")\n\t\t}\n\t})\n\n\tt.Run(\"Vector Clock Extension\", func(t *testing.T) {\n\t\tfileID := uuid.New()\n\t\tlineID := uuid.New()\n\t\tnodeID := uuid.New()\n\n\t\top1 := &Operation{\n\t\t\tType:      OpUpdate,\n\t\t\tLamport:   1,\n\t\t\tNodeID:    nodeID,\n\t\t\tFileID:    fileID,\n\t\t\tLineID:    lineID,\n\t\t\tContent:   \"value1\",\n\t\t\tStream:    \"stream1\",\n\t\t\tTimestamp: time.Now(),\n\t\t\tVector:    []int64{1, 0},\n\t\t}\n\n\t\top2 := &Operation{\n\t\t\tType:      OpUpdate,\n\t\t\tLamport:   2,\n\t\t\tNodeID:    nodeID,\n\t\t\tFileID:    fileID,\n\t\t\tLineID:    lineID,\n\t\t\tContent:   \"value2\",\n\t\t\tStream:    \"stream1\",\n\t\t\tTimestamp: op1.Timestamp.Add(time.Second),\n\t\t\tVector:    []int64{1, 1, 1},\n\t\t}\n\n\t\tif !op1.CanCombine(op2) {\n\t\t\tt.Error(\"Expected operations to be combinable\")\n\t\t}\n\n\t\top1.Combine(op2)\n\n\t\tif len(op1.Vector) != 3 {\n\t\t\tt.Errorf(\"Expected vector clock length to be 3, got %d\", len(op1.Vector))\n\t\t}\n\n\t\tif op1.Vector[2] != 1 {\n\t\t\tt.Errorf(\"Expected vector clock [2] to be 1, got %d\", 
op1.Vector[2])\n\t\t}\n\t})\n\n\tt.Run(\"Non-Sequential Operations\", func(t *testing.T) {\n\t\tfileID := uuid.New()\n\t\tlineID := uuid.New()\n\t\tnodeID := uuid.New()\n\n\t\top1 := &Operation{\n\t\t\tType:      OpUpdate,\n\t\t\tLamport:   2, // Higher Lamport\n\t\t\tNodeID:    nodeID,\n\t\t\tFileID:    fileID,\n\t\t\tLineID:    lineID,\n\t\t\tContent:   \"value1\",\n\t\t\tStream:    \"stream1\",\n\t\t\tTimestamp: time.Now().Add(time.Second),\n\t\t\tVector:    []int64{1, 0, 0},\n\t\t}\n\n\t\top2 := &Operation{\n\t\t\tType:      OpUpdate,\n\t\t\tLamport:   1, // Lower Lamport\n\t\t\tNodeID:    nodeID,\n\t\t\tFileID:    fileID,\n\t\t\tLineID:    lineID,\n\t\t\tContent:   \"value2\",\n\t\t\tStream:    \"stream1\",\n\t\t\tTimestamp: time.Now(),\n\t\t\tVector:    []int64{1, 1, 0},\n\t\t}\n\n\t\tif op1.CanCombine(op2) {\n\t\t\tt.Error(\"Expected non-sequential operations to not be combinable\")\n\t\t}\n\t})\n}\n\nfunc TestOperationOrdering(t *testing.T) {\n\tnow := time.Now()\n\tfileID := uuid.New()\n\tlineID := uuid.New()\n\tnodeID := uuid.New()\n\n\tops := []Operation{\n\t\t{\n\t\t\tType:      OpInsert,\n\t\t\tLamport:   1,\n\t\t\tNodeID:    nodeID,\n\t\t\tFileID:    fileID,\n\t\t\tLineID:    lineID,\n\t\t\tContent:   \"value1\",\n\t\t\tStream:    \"stream1\",\n\t\t\tTimestamp: now,\n\t\t\tVector:    []int64{1, 0, 0},\n\t\t},\n\t\t{\n\t\t\tType:      OpUpdate,\n\t\t\tLamport:   2,\n\t\t\tNodeID:    nodeID,\n\t\t\tFileID:    fileID,\n\t\t\tLineID:    lineID,\n\t\t\tContent:   \"value2\",\n\t\t\tStream:    \"stream1\",\n\t\t\tTimestamp: now.Add(time.Second),\n\t\t\tVector:    []int64{1, 1, 0},\n\t\t},\n\t\t{\n\t\t\tType:      OpDelete,\n\t\t\tLamport:   3,\n\t\t\tNodeID:    nodeID,\n\t\t\tFileID:    fileID,\n\t\t\tLineID:    lineID,\n\t\t\tStream:    \"stream1\",\n\t\t\tTimestamp: now.Add(2 * time.Second),\n\t\t\tVector:    []int64{1, 1, 1},\n\t\t},\n\t}\n\n\tt.Run(\"Timestamp Order\", func(t *testing.T) {\n\t\tif !ops[0].Timestamp.Before(ops[1].Timestamp) 
{\n\t\t\tt.Error(\"Expected op1 timestamp to be before op2\")\n\t\t}\n\n\t\tif !ops[1].Timestamp.Before(ops[2].Timestamp) {\n\t\t\tt.Error(\"Expected op2 timestamp to be before op3\")\n\t\t}\n\t})\n\n\tt.Run(\"Vector Clock Order\", func(t *testing.T) {\n\t\t// Test that vector clocks are monotonically increasing\n\t\tfor i := 1; i < len(ops); i++ {\n\t\t\tprev := ops[i-1].Vector\n\t\t\tcurr := ops[i].Vector\n\t\t\tincreasing := false\n\t\t\tfor j := 0; j < len(prev) && j < len(curr); j++ {\n\t\t\t\tif curr[j] > prev[j] {\n\t\t\t\t\tincreasing = true\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif !increasing {\n\t\t\t\tt.Errorf(\"Expected vector clock to increase between op%d and op%d\", i, i+1)\n\t\t\t}\n\t\t}\n\t})\n}\n"
  },
  {
    "path": "internal/crdt/rga.go",
    "content": "package crdt\n\nimport (\n\t\"fmt\"\n\t\"sort\"\n\t\"sync\"\n\n\t\"github.com/google/uuid\"\n)\n\n// RGAOperation extends Operation with additional fields\ntype RGAOperation struct {\n\tOperation\n\tIndex int\n}\n\n// NewRGAOperation creates a new RGAOperation instance\nfunc NewRGAOperation(op Operation, index int) RGAOperation {\n\treturn RGAOperation{\n\t\tOperation: op,\n\t\tIndex:     index,\n\t}\n}\n\n// RGA represents a Replicated Growable Array CRDT\ntype RGA struct {\n\tmu        sync.RWMutex\n\tops       []RGAOperation\n\ttombstone map[string]bool\n}\n\n// NewRGA creates a new RGA instance\nfunc NewRGA() *RGA {\n\treturn &RGA{\n\t\tops:       make([]RGAOperation, 0),\n\t\ttombstone: make(map[string]bool),\n\t}\n}\n\n// Apply applies an operation to the RGA\nfunc (r *RGA) Apply(op Operation) error {\n\tr.mu.Lock()\n\tdefer r.mu.Unlock()\n\n\trgaOp := NewRGAOperation(op, len(r.ops))\n\n\tswitch op.Type {\n\tcase OpInsert:\n\t\t// Filter out any previous operations for this LineID\n\t\tnewOps := make([]RGAOperation, 0)\n\t\tfor _, existingOp := range r.ops {\n\t\t\tif existingOp.LineID != op.LineID {\n\t\t\t\tnewOps = append(newOps, existingOp)\n\t\t\t}\n\t\t}\n\t\tr.ops = append(newOps, rgaOp)\n\t\t// Clear tombstone status\n\t\tdelete(r.tombstone, op.LineID.String())\n\t\tsort.Slice(r.ops, func(i, j int) bool {\n\t\t\treturn r.ops[i].LessThan(&r.ops[j].Operation)\n\t\t})\n\tcase OpDelete:\n\t\t// Get content before marking as deleted\n\t\tvar content string\n\t\tfor _, op := range r.ops {\n\t\t\tif op.LineID == rgaOp.LineID && !r.tombstone[op.LineID.String()] {\n\t\t\t\tcontent = op.Content\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\trgaOp.Content = content // Store content in the delete operation\n\t\tr.ops = append(r.ops, rgaOp)\n\t\tr.tombstone[op.LineID.String()] = true\n\tcase OpUpdate:\n\t\tfound := false\n\t\tfor i := range r.ops {\n\t\t\tif r.ops[i].LineID == op.LineID {\n\t\t\t\tr.ops[i].Content = op.Content\n\t\t\t\tfound = 
true\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tif !found {\n\t\t\treturn fmt.Errorf(\"line not found for update: %s\", op.LineID)\n\t\t}\n\tdefault:\n\t\treturn fmt.Errorf(\"unknown operation type: %d\", op.Type)\n\t}\n\n\treturn nil\n}\n\n// Get returns the current state of the RGA\nfunc (r *RGA) Get() []string {\n\tr.mu.RLock()\n\tdefer r.mu.RUnlock()\n\n\tvar result []string\n\tfor _, op := range r.ops {\n\t\tif !r.tombstone[op.LineID.String()] {\n\t\t\tresult = append(result, op.Content)\n\t\t}\n\t}\n\treturn result\n}\n\n// GetOperations returns all operations in order\nfunc (r *RGA) GetOperations() []Operation {\n\tr.mu.RLock()\n\tdefer r.mu.RUnlock()\n\n\tresult := make([]Operation, len(r.ops))\n\tfor i, op := range r.ops {\n\t\tresult[i] = op.Operation\n\t}\n\treturn result\n}\n\n// Clear removes all operations and resets the RGA\nfunc (r *RGA) Clear() {\n\tr.mu.Lock()\n\tdefer r.mu.Unlock()\n\n\tr.ops = make([]RGAOperation, 0)\n\tr.tombstone = make(map[string]bool)\n}\n\n// Materialize returns the current document state as a slice of strings\nfunc (r *RGA) Materialize() []string {\n\tr.mu.RLock()\n\tdefer r.mu.RUnlock()\n\n\tvar result []string\n\tfor _, op := range r.ops {\n\t\tif !r.tombstone[op.LineID.String()] {\n\t\t\tresult = append(result, op.Content)\n\t\t}\n\t}\n\treturn result\n}\n\n// GetPositions returns the positions of all active lines\nfunc (r *RGA) GetPositions() []int {\n\tr.mu.RLock()\n\tdefer r.mu.RUnlock()\n\n\tvar positions []int\n\tfor i, op := range r.ops {\n\t\tif !r.tombstone[op.LineID.String()] {\n\t\t\tpositions = append(positions, i)\n\t\t}\n\t}\n\treturn positions\n}\n\n// GetLineIDs returns the LineIDs of all active lines in order\nfunc (r *RGA) GetLineIDs() []uuid.UUID {\n\tr.mu.RLock()\n\tdefer r.mu.RUnlock()\n\n\tvar lineIDs []uuid.UUID\n\tfor _, op := range r.ops {\n\t\tif !r.tombstone[op.LineID.String()] {\n\t\t\tlineIDs = append(lineIDs, op.LineID)\n\t\t}\n\t}\n\treturn lineIDs\n}\n\n// LineMap returns a map of LineID to 
Content for all active lines\nfunc (r *RGA) LineMap() map[uuid.UUID]string {\n\tr.mu.RLock()\n\tdefer r.mu.RUnlock()\n\n\tresult := make(map[uuid.UUID]string)\n\tfor _, op := range r.ops {\n\t\tif !r.tombstone[op.LineID.String()] {\n\t\t\tresult[op.LineID] = op.Content\n\t\t}\n\t}\n\treturn result\n}\n"
  },
  {
    "path": "internal/crdt/rga_test.go",
    "content": "package crdt\n\nimport (\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/google/uuid\"\n)\n\nfunc TestRGA(t *testing.T) {\n\tt.Run(\"Insert Operations\", func(t *testing.T) {\n\t\trga := NewRGA()\n\t\tfileID := uuid.New()\n\t\tlineID1 := uuid.New()\n\t\tlineID2 := uuid.New()\n\t\tnodeID := uuid.New()\n\n\t\t// Create operations\n\t\top1 := Operation{\n\t\t\tType:      OpInsert,\n\t\t\tLamport:   1,\n\t\t\tNodeID:    nodeID,\n\t\t\tFileID:    fileID,\n\t\t\tLineID:    lineID1,\n\t\t\tContent:   \"value1\",\n\t\t\tStream:    \"stream1\",\n\t\t\tTimestamp: time.Now(),\n\t\t\tVector:    []int64{1},\n\t\t}\n\n\t\top2 := Operation{\n\t\t\tType:      OpInsert,\n\t\t\tLamport:   2,\n\t\t\tNodeID:    nodeID,\n\t\t\tFileID:    fileID,\n\t\t\tLineID:    lineID2,\n\t\t\tContent:   \"value2\",\n\t\t\tStream:    \"stream1\",\n\t\t\tTimestamp: time.Now().Add(time.Second),\n\t\t\tVector:    []int64{2},\n\t\t}\n\n\t\t// Apply operations\n\t\terr := rga.Apply(op1)\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Failed to apply operation 1: %v\", err)\n\t\t}\n\n\t\terr = rga.Apply(op2)\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Failed to apply operation 2: %v\", err)\n\t\t}\n\n\t\t// Check state\n\t\tvalues := rga.Get()\n\t\tif len(values) != 2 {\n\t\t\tt.Errorf(\"Expected 2 values, got %d\", len(values))\n\t\t}\n\n\t\tif values[0] != \"value1\" || values[1] != \"value2\" {\n\t\t\tt.Errorf(\"Values not in expected order: %v\", values)\n\t\t}\n\t})\n\n\tt.Run(\"Delete Operations\", func(t *testing.T) {\n\t\trga := NewRGA()\n\t\tfileID := uuid.New()\n\t\tlineID := uuid.New()\n\t\tnodeID := uuid.New()\n\n\t\t// Insert operation\n\t\tinsertOp := Operation{\n\t\t\tType:      OpInsert,\n\t\t\tLamport:   1,\n\t\t\tNodeID:    nodeID,\n\t\t\tFileID:    fileID,\n\t\t\tLineID:    lineID,\n\t\t\tContent:   \"value1\",\n\t\t\tStream:    \"stream1\",\n\t\t\tTimestamp: time.Now(),\n\t\t\tVector:    []int64{1},\n\t\t}\n\n\t\t// Apply insert\n\t\terr := rga.Apply(insertOp)\n\t\tif err != nil 
{\n\t\t\tt.Errorf(\"Failed to apply insert operation: %v\", err)\n\t\t}\n\n\t\t// Delete operation\n\t\tdeleteOp := Operation{\n\t\t\tType:      OpDelete,\n\t\t\tLamport:   2,\n\t\t\tNodeID:    nodeID,\n\t\t\tFileID:    fileID,\n\t\t\tLineID:    lineID,\n\t\t\tStream:    \"stream1\",\n\t\t\tTimestamp: time.Now().Add(time.Second),\n\t\t\tVector:    []int64{2},\n\t\t}\n\n\t\t// Apply delete\n\t\terr = rga.Apply(deleteOp)\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Failed to apply delete operation: %v\", err)\n\t\t}\n\n\t\t// Check state\n\t\tvalues := rga.Get()\n\t\tif len(values) != 0 {\n\t\t\tt.Errorf(\"Expected 0 values after delete, got %d\", len(values))\n\t\t}\n\t})\n\n\tt.Run(\"Update Operations\", func(t *testing.T) {\n\t\trga := NewRGA()\n\t\tfileID := uuid.New()\n\t\tlineID := uuid.New()\n\t\tnodeID := uuid.New()\n\n\t\t// Insert operation\n\t\tinsertOp := Operation{\n\t\t\tType:      OpInsert,\n\t\t\tLamport:   1,\n\t\t\tNodeID:    nodeID,\n\t\t\tFileID:    fileID,\n\t\t\tLineID:    lineID,\n\t\t\tContent:   \"value1\",\n\t\t\tStream:    \"stream1\",\n\t\t\tTimestamp: time.Now(),\n\t\t\tVector:    []int64{1},\n\t\t}\n\n\t\t// Apply insert\n\t\terr := rga.Apply(insertOp)\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Failed to apply insert operation: %v\", err)\n\t\t}\n\n\t\t// Update operation\n\t\tupdateOp := Operation{\n\t\t\tType:      OpUpdate,\n\t\t\tLamport:   2,\n\t\t\tNodeID:    nodeID,\n\t\t\tFileID:    fileID,\n\t\t\tLineID:    lineID,\n\t\t\tContent:   \"updated\",\n\t\t\tStream:    \"stream1\",\n\t\t\tTimestamp: time.Now().Add(time.Second),\n\t\t\tVector:    []int64{2},\n\t\t}\n\n\t\t// Apply update\n\t\terr = rga.Apply(updateOp)\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Failed to apply update operation: %v\", err)\n\t\t}\n\n\t\t// Check state\n\t\tvalues := rga.Get()\n\t\tif len(values) != 1 {\n\t\t\tt.Errorf(\"Expected 1 value after update, got %d\", len(values))\n\t\t}\n\n\t\tif values[0] != \"updated\" {\n\t\t\tt.Errorf(\"Expected updated value, got %s\", 
values[0])\n\t\t}\n\t})\n\n\tt.Run(\"Invalid Update Operation\", func(t *testing.T) {\n\t\trga := NewRGA()\n\t\tfileID := uuid.New()\n\t\tlineID := uuid.New()\n\t\tnodeID := uuid.New()\n\n\t\t// Update operation without insert\n\t\tupdateOp := Operation{\n\t\t\tType:      OpUpdate,\n\t\t\tLamport:   1,\n\t\t\tNodeID:    nodeID,\n\t\t\tFileID:    fileID,\n\t\t\tLineID:    lineID,\n\t\t\tContent:   \"updated\",\n\t\t\tStream:    \"stream1\",\n\t\t\tTimestamp: time.Now(),\n\t\t\tVector:    []int64{1},\n\t\t}\n\n\t\t// Apply update\n\t\terr := rga.Apply(updateOp)\n\t\tif err == nil {\n\t\t\tt.Error(\"Expected error when updating non-existent line\")\n\t\t}\n\t})\n\n\tt.Run(\"Clear Operations\", func(t *testing.T) {\n\t\trga := NewRGA()\n\t\tfileID := uuid.New()\n\t\tlineID1 := uuid.New()\n\t\tlineID2 := uuid.New()\n\t\tnodeID := uuid.New()\n\n\t\t// Insert operations\n\t\top1 := Operation{\n\t\t\tType:      OpInsert,\n\t\t\tLamport:   1,\n\t\t\tNodeID:    nodeID,\n\t\t\tFileID:    fileID,\n\t\t\tLineID:    lineID1,\n\t\t\tContent:   \"value1\",\n\t\t\tStream:    \"stream1\",\n\t\t\tTimestamp: time.Now(),\n\t\t\tVector:    []int64{1},\n\t\t}\n\n\t\top2 := Operation{\n\t\t\tType:      OpInsert,\n\t\t\tLamport:   2,\n\t\t\tNodeID:    nodeID,\n\t\t\tFileID:    fileID,\n\t\t\tLineID:    lineID2,\n\t\t\tContent:   \"value2\",\n\t\t\tStream:    \"stream1\",\n\t\t\tTimestamp: time.Now().Add(time.Second),\n\t\t\tVector:    []int64{2},\n\t\t}\n\n\t\t// Apply operations\n\t\terr := rga.Apply(op1)\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Failed to apply operation 1: %v\", err)\n\t\t}\n\n\t\terr = rga.Apply(op2)\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Failed to apply operation 2: %v\", err)\n\t\t}\n\n\t\t// Clear RGA\n\t\trga.Clear()\n\n\t\t// Check state\n\t\tvalues := rga.Get()\n\t\tif len(values) != 0 {\n\t\t\tt.Errorf(\"Expected 0 values after clear, got %d\", len(values))\n\t\t}\n\n\t\tops := rga.GetOperations()\n\t\tif len(ops) != 0 {\n\t\t\tt.Errorf(\"Expected 0 operations 
after clear, got %d\", len(ops))\n\t\t}\n\t})\n\n\tt.Run(\"Delete Content Preservation\", func(t *testing.T) {\n\t\trga := NewRGA()\n\t\tfileID := uuid.New()\n\t\tlineID := uuid.New()\n\t\tnodeID := uuid.New()\n\t\tcontent := \"test content to preserve\"\n\n\t\t// Insert operation\n\t\tinsertOp := Operation{\n\t\t\tType:      OpInsert,\n\t\t\tLamport:   1,\n\t\t\tNodeID:    nodeID,\n\t\t\tFileID:    fileID,\n\t\t\tLineID:    lineID,\n\t\t\tContent:   content,\n\t\t\tStream:    \"stream1\",\n\t\t\tTimestamp: time.Now(),\n\t\t\tVector:    []int64{1},\n\t\t}\n\n\t\t// Apply insert\n\t\terr := rga.Apply(insertOp)\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Failed to apply insert operation: %v\", err)\n\t\t}\n\n\t\t// Delete operation\n\t\tdeleteOp := Operation{\n\t\t\tType:      OpDelete,\n\t\t\tLamport:   2,\n\t\t\tNodeID:    nodeID,\n\t\t\tFileID:    fileID,\n\t\t\tLineID:    lineID,\n\t\t\tStream:    \"stream1\",\n\t\t\tTimestamp: time.Now().Add(time.Second),\n\t\t\tVector:    []int64{2},\n\t\t}\n\n\t\t// Apply delete\n\t\terr = rga.Apply(deleteOp)\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Failed to apply delete operation: %v\", err)\n\t\t}\n\n\t\t// Get all operations\n\t\tops := rga.GetOperations()\n\t\tvar foundDelete bool\n\t\tfor _, op := range ops {\n\t\t\tif op.Type == OpDelete && op.LineID == lineID {\n\t\t\t\tfoundDelete = true\n\t\t\t\tif op.Content != content {\n\t\t\t\t\tt.Errorf(\"Delete operation did not preserve content, expected %q, got %q\", content, op.Content)\n\t\t\t\t}\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tif !foundDelete {\n\t\t\tt.Error(\"Delete operation not found in operations list\")\n\t\t}\n\t})\n\n\tt.Run(\"Delete and Reinsert\", func(t *testing.T) {\n\t\trga := NewRGA()\n\t\tfileID := uuid.New()\n\t\tlineID := uuid.New()\n\t\tnodeID := uuid.New()\n\t\tcontent := \"test content for reinsert\"\n\n\t\t// Insert operation\n\t\tinsertOp := Operation{\n\t\t\tType:      OpInsert,\n\t\t\tLamport:   1,\n\t\t\tNodeID:    nodeID,\n\t\t\tFileID:    
fileID,\n\t\t\tLineID:    lineID,\n\t\t\tContent:   content,\n\t\t\tStream:    \"stream1\",\n\t\t\tTimestamp: time.Now(),\n\t\t\tVector:    []int64{1},\n\t\t}\n\n\t\t// Apply insert\n\t\terr := rga.Apply(insertOp)\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Failed to apply insert operation: %v\", err)\n\t\t}\n\n\t\t// Delete operation\n\t\tdeleteOp := Operation{\n\t\t\tType:      OpDelete,\n\t\t\tLamport:   2,\n\t\t\tNodeID:    nodeID,\n\t\t\tFileID:    fileID,\n\t\t\tLineID:    lineID,\n\t\t\tStream:    \"stream1\",\n\t\t\tTimestamp: time.Now().Add(time.Second),\n\t\t\tVector:    []int64{2},\n\t\t}\n\n\t\t// Apply delete\n\t\terr = rga.Apply(deleteOp)\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Failed to apply delete operation: %v\", err)\n\t\t}\n\n\t\t// Reinsert operation (simulating revert)\n\t\treinsertOp := Operation{\n\t\t\tType:      OpInsert,\n\t\t\tLamport:   3,\n\t\t\tNodeID:    nodeID,\n\t\t\tFileID:    fileID,\n\t\t\tLineID:    lineID,\n\t\t\tContent:   content,\n\t\t\tStream:    \"stream1\",\n\t\t\tTimestamp: time.Now().Add(2 * time.Second),\n\t\t\tVector:    []int64{3},\n\t\t}\n\n\t\t// Apply reinsert\n\t\terr = rga.Apply(reinsertOp)\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Failed to apply reinsert operation: %v\", err)\n\t\t}\n\n\t\t// Verify content is restored\n\t\tvalues := rga.Get()\n\t\tif len(values) != 1 {\n\t\t\tt.Errorf(\"Expected 1 value after reinsert, got %d\", len(values))\n\t\t} else if values[0] != content {\n\t\t\tt.Errorf(\"Content mismatch after reinsert, expected %q, got %q\", content, values[0])\n\t\t}\n\t})\n}\n"
  },
  {
    "path": "internal/ignore/ignore.go",
    "content": "package ignore\n\nimport (\n\t\"bufio\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"strings\"\n\n\t\"github.com/bmatcuk/doublestar/v4\"\n)\n\n// IgnoreList represents a collection of ignore patterns\ntype IgnoreList struct {\n\tpatterns []string\n}\n\n// LoadIgnoreFile reads and parses the .evo-ignore file from the given repository path\nfunc LoadIgnoreFile(repoPath string) (*IgnoreList, error) {\n\tignorePath := filepath.Join(repoPath, \".evo-ignore\")\n\tfile, err := os.Open(ignorePath)\n\tif os.IsNotExist(err) {\n\t\treturn &IgnoreList{}, nil\n\t}\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tdefer file.Close()\n\n\tvar patterns []string\n\tscanner := bufio.NewScanner(file)\n\tfor scanner.Scan() {\n\t\tpattern := strings.TrimSpace(scanner.Text())\n\t\tif pattern != \"\" && !strings.HasPrefix(pattern, \"#\") {\n\t\t\t// Handle directory patterns\n\t\t\tif strings.HasSuffix(pattern, \"/\") {\n\t\t\t\tpattern = strings.TrimSuffix(pattern, \"/\")\n\t\t\t\tif !strings.Contains(pattern, \"**\") {\n\t\t\t\t\tpattern = pattern + \"/**\"\n\t\t\t\t}\n\t\t\t}\n\t\t\tpatterns = append(patterns, pattern)\n\t\t}\n\t}\n\n\tif err := scanner.Err(); err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn &IgnoreList{patterns: patterns}, nil\n}\n\n// IsIgnored checks if a given path should be ignored based on the ignore patterns\nfunc (il *IgnoreList) IsIgnored(path string) bool {\n\t// Clean and normalize the path first so prefixed forms like \"./.evo/config\" are caught\n\tpath = filepath.ToSlash(filepath.Clean(path))\n\tpath = strings.TrimPrefix(path, \"./\")\n\tpath = strings.TrimPrefix(path, \"../\")\n\n\t// Always ignore the .evo directory itself, but not look-alike siblings such as \".evocache\"\n\tif path == \".evo\" || strings.HasPrefix(path, \".evo/\") {\n\t\treturn true\n\t}\n\n\tfor _, pattern := range il.patterns {\n\t\t// Handle negation patterns\n\t\tif strings.HasPrefix(pattern, \"!\") {\n\t\t\tmatched, err := doublestar.Match(pattern[1:], path)\n\t\t\tif err == nil && matched {\n\t\t\t\treturn false\n\t\t\t}\n\t\t\tcontinue\n\t\t}\n\n\t\t// For directory patterns ending with /**, try prefix 
matching first\n\t\tif strings.HasSuffix(pattern, \"/**\") {\n\t\t\tbase := strings.TrimSuffix(pattern, \"/**\")\n\t\t\tif path == base || strings.HasPrefix(path, base+\"/\") {\n\t\t\t\treturn true\n\t\t\t}\n\t\t}\n\n\t\t// Try matching the pattern directly\n\t\tmatched, err := doublestar.Match(pattern, path)\n\t\tif err == nil && matched {\n\t\t\treturn true\n\t\t}\n\n\t\t// Try matching with **/ prefix\n\t\tif !strings.HasPrefix(pattern, \"**/\") {\n\t\t\tmatched, err := doublestar.Match(\"**/\"+pattern, path)\n\t\t\tif err == nil && matched {\n\t\t\t\treturn true\n\t\t\t}\n\t\t}\n\n\t\t// For directory patterns without /**, try matching with /** suffix\n\t\tif !strings.HasSuffix(pattern, \"/**\") {\n\t\t\t// Try with /** suffix\n\t\t\tmatched, err := doublestar.Match(pattern+\"/**\", path)\n\t\t\tif err == nil && matched {\n\t\t\t\treturn true\n\t\t\t}\n\n\t\t\t// Try with **/ prefix and /** suffix\n\t\t\tmatched, err = doublestar.Match(\"**/\"+pattern+\"/**\", path)\n\t\t\tif err == nil && matched {\n\t\t\t\treturn true\n\t\t\t}\n\n\t\t\t// Try with /** suffix for each path component\n\t\t\tparts := strings.Split(path, \"/\")\n\t\t\tfor i := range parts {\n\t\t\t\tprefix := strings.Join(parts[:i+1], \"/\")\n\t\t\t\tif prefix == pattern {\n\t\t\t\t\treturn true\n\t\t\t\t}\n\t\t\t\tif strings.HasSuffix(pattern, \"/\") {\n\t\t\t\t\tpattern = strings.TrimSuffix(pattern, \"/\")\n\t\t\t\t\tif prefix == pattern {\n\t\t\t\t\t\treturn true\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\treturn false\n}\n\n// AddPattern adds a new ignore pattern\nfunc (il *IgnoreList) AddPattern(pattern string) {\n\t// Handle directory patterns\n\tif strings.HasSuffix(pattern, \"/\") {\n\t\tpattern = strings.TrimSuffix(pattern, \"/\")\n\t\tif !strings.Contains(pattern, \"**\") {\n\t\t\tpattern = pattern + \"/**\"\n\t\t}\n\t}\n\til.patterns = append(il.patterns, pattern)\n}\n\n// GetPatterns returns all current ignore patterns\nfunc (il *IgnoreList) GetPatterns() []string {\n\treturn 
append([]string{}, il.patterns...)\n}\n"
  },
  {
    "path": "internal/ignore/ignore_test.go",
    "content": "package ignore\n\nimport (\n\t\"os\"\n\t\"path/filepath\"\n\t\"testing\"\n)\n\nfunc TestLoadIgnoreFile(t *testing.T) {\n\t// Create a temporary directory for testing\n\ttmpDir, err := os.MkdirTemp(\"\", \"evo-ignore-test\")\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tdefer os.RemoveAll(tmpDir)\n\n\t// Test case 1: No .evo-ignore file\n\til, err := LoadIgnoreFile(tmpDir)\n\tif err != nil {\n\t\tt.Errorf(\"Expected no error when .evo-ignore doesn't exist, got %v\", err)\n\t}\n\tif len(il.patterns) != 0 {\n\t\tt.Errorf(\"Expected empty patterns list, got %v\", il.patterns)\n\t}\n\n\t// Test case 2: With .evo-ignore file\n\tignoreContent := `\n# Comment line\n*.log\nbuild/\n**/*.tmp\ntest/*.txt\nnode_modules/\n*.bak\n!important.bak\n`\n\tignorePath := filepath.Join(tmpDir, \".evo-ignore\")\n\tif err := os.WriteFile(ignorePath, []byte(ignoreContent), 0644); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\til, err = LoadIgnoreFile(tmpDir)\n\tif err != nil {\n\t\tt.Errorf(\"Failed to load .evo-ignore file: %v\", err)\n\t}\n\n\texpectedPatterns := []string{\n\t\t\"*.log\",\n\t\t\"build/**\",\n\t\t\"**/*.tmp\",\n\t\t\"test/*.txt\",\n\t\t\"node_modules/**\",\n\t\t\"*.bak\",\n\t\t\"!important.bak\",\n\t}\n\tpatterns := il.GetPatterns()\n\tif len(patterns) != len(expectedPatterns) {\n\t\tt.Errorf(\"Expected %d patterns, got %d\", len(expectedPatterns), len(patterns))\n\t}\n\n\tfor i, pattern := range patterns {\n\t\tif pattern != expectedPatterns[i] {\n\t\t\tt.Errorf(\"Pattern %d: expected %s, got %s\", i, expectedPatterns[i], pattern)\n\t\t}\n\t}\n\n\t// Test case 3: Invalid file permissions\n\tif err := os.Chmod(ignorePath, 0000); err != nil {\n\t\tt.Fatal(err)\n\t}\n\t_, err = LoadIgnoreFile(tmpDir)\n\tif err == nil {\n\t\tt.Error(\"Expected error when loading file with no permissions\")\n\t}\n}\n\nfunc TestIsIgnored(t *testing.T) {\n\ttests := []struct {\n\t\tname     string\n\t\tpatterns []string\n\t\tpaths    map[string]bool // path -> should be 
ignored\n\t}{\n\t\t{\n\t\t\tname:     \"Empty patterns\",\n\t\t\tpatterns: []string{},\n\t\t\tpaths: map[string]bool{\n\t\t\t\t\"file.txt\":     false,\n\t\t\t\t\".evo/config\":  true, // .evo is always ignored\n\t\t\t\t\".evo/objects\": true,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"Simple glob patterns\",\n\t\t\tpatterns: []string{\n\t\t\t\t\"*.log\",\n\t\t\t\t\"*.tmp\",\n\t\t\t},\n\t\t\tpaths: map[string]bool{\n\t\t\t\t\"test.log\":      true,\n\t\t\t\t\"logs/test.log\": true,\n\t\t\t\t\"test.txt\":      false,\n\t\t\t\t\"test.tmp\":      true,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"Directory patterns\",\n\t\t\tpatterns: []string{\n\t\t\t\t\"build/\",\n\t\t\t\t\"node_modules/\",\n\t\t\t\t\"test/fixtures/\",\n\t\t\t},\n\t\t\tpaths: map[string]bool{\n\t\t\t\t\"build/output.txt\":          true,\n\t\t\t\t\"build/temp/file.txt\":       true,\n\t\t\t\t\"src/build/file.txt\":        false,\n\t\t\t\t\"node_modules/package.json\": true,\n\t\t\t\t\"test/fixtures/data.json\":   true,\n\t\t\t\t\"test/file.txt\":             false,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"Double-star patterns\",\n\t\t\tpatterns: []string{\n\t\t\t\t\"**/*.tmp\",\n\t\t\t\t\"**/vendor/**\",\n\t\t\t\t\"**/__pycache__/**\",\n\t\t\t},\n\t\t\tpaths: map[string]bool{\n\t\t\t\t\"file.tmp\":                    true,\n\t\t\t\t\"temp/file.tmp\":               true,\n\t\t\t\t\"a/b/c/file.tmp\":              true,\n\t\t\t\t\"vendor/lib.js\":               true,\n\t\t\t\t\"src/vendor/lib.js\":           true,\n\t\t\t\t\"src/__pycache__/module.pyc\":  true,\n\t\t\t\t\"test/__pycache__/cache.json\": true,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"Complex patterns\",\n\t\t\tpatterns: []string{\n\t\t\t\t\"*.{log,tmp}\",\n\t\t\t\t\"**/{test,mock}_*.go\",\n\t\t\t\t\"**/.DS_Store\",\n\t\t\t},\n\t\t\tpaths: map[string]bool{\n\t\t\t\t\"error.log\":           true,\n\t\t\t\t\"temp.tmp\":            true,\n\t\t\t\t\"test_handler.go\":     true,\n\t\t\t\t\"mock_service.go\":     true,\n\t\t\t\t\"internal/test_db.go\": 
true,\n\t\t\t\t\".DS_Store\":           true,\n\t\t\t\t\"src/.DS_Store\":       true,\n\t\t\t\t\"handler.go\":          false,\n\t\t\t\t\"service_test.go\":     false,\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"Path normalization\",\n\t\t\tpatterns: []string{\n\t\t\t\t\"build/\",\n\t\t\t\t\"**/temp/**\",\n\t\t\t},\n\t\t\tpaths: map[string]bool{\n\t\t\t\t\"build/file.txt\":        true,\n\t\t\t\t\"./build/file.txt\":      true,\n\t\t\t\t\"build/../build/file\":   true,\n\t\t\t\t\"temp/file.txt\":         true,\n\t\t\t\t\"./temp/file.txt\":       true,\n\t\t\t\t\"a/temp/b/file.txt\":     true,\n\t\t\t\t\"./a/temp/b/file.txt\":   true,\n\t\t\t\t\"../repo/temp/file.txt\": true,\n\t\t\t},\n\t\t},\n\t}\n\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\til := &IgnoreList{patterns: tt.patterns}\n\t\t\tfor path, shouldIgnore := range tt.paths {\n\t\t\t\tif got := il.IsIgnored(path); got != shouldIgnore {\n\t\t\t\t\tt.Errorf(\"IsIgnored(%q) = %v, want %v\", path, got, shouldIgnore)\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestAddPattern(t *testing.T) {\n\til := &IgnoreList{}\n\n\t// Test adding various pattern types\n\tpatterns := []struct {\n\t\tinput    string\n\t\texpected string\n\t}{\n\t\t{\"*.log\", \"*.log\"},\n\t\t{\"build/\", \"build/**\"},\n\t\t{\"node_modules/\", \"node_modules/**\"},\n\t\t{\"**/*.tmp\", \"**/*.tmp\"},\n\t\t{\"test/*.txt\", \"test/*.txt\"},\n\t\t{\".env\", \".env\"},\n\t\t{\"dist/\", \"dist/**\"},\n\t}\n\n\tfor _, p := range patterns {\n\t\til.AddPattern(p.input)\n\t\tfound := false\n\t\tfor _, pattern := range il.patterns {\n\t\t\tif pattern == p.expected {\n\t\t\t\tfound = true\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tif !found {\n\t\t\tt.Errorf(\"Pattern %q not found in patterns after AddPattern, expected %q\", p.input, p.expected)\n\t\t}\n\t}\n\n\t// Test pattern order preservation\n\til = &IgnoreList{}\n\tvar expectedPatterns []string\n\tfor _, p := range patterns {\n\t\til.AddPattern(p.input)\n\t\texpectedPatterns = 
append(expectedPatterns, p.expected)\n\t}\n\n\tactualPatterns := il.GetPatterns()\n\tif len(actualPatterns) != len(expectedPatterns) {\n\t\tt.Errorf(\"Expected %d patterns, got %d\", len(expectedPatterns), len(actualPatterns))\n\t}\n\n\tfor i, pattern := range actualPatterns {\n\t\tif pattern != expectedPatterns[i] {\n\t\t\tt.Errorf(\"Pattern at index %d: expected %q, got %q\", i, expectedPatterns[i], pattern)\n\t\t}\n\t}\n}\n\nfunc TestGetPatterns(t *testing.T) {\n\t// Test that GetPatterns returns a copy of the patterns slice\n\til := &IgnoreList{patterns: []string{\"*.log\", \"build/**\", \"**/*.tmp\"}}\n\n\tpatterns1 := il.GetPatterns()\n\tpatterns2 := il.GetPatterns()\n\n\t// Verify both slices have the same content\n\tif len(patterns1) != len(patterns2) {\n\t\tt.Errorf(\"Pattern slices have different lengths: %d vs %d\", len(patterns1), len(patterns2))\n\t}\n\tfor i := range patterns1 {\n\t\tif patterns1[i] != patterns2[i] {\n\t\t\tt.Errorf(\"Pattern mismatch at index %d: %q vs %q\", i, patterns1[i], patterns2[i])\n\t\t}\n\t}\n\n\t// Modify the first slice and verify it doesn't affect the second\n\tpatterns1[0] = \"modified\"\n\tif patterns1[0] == patterns2[0] {\n\t\tt.Error(\"Modifying one pattern slice affected the other\")\n\t}\n\n\t// Verify the original patterns are unchanged\n\toriginalPatterns := il.GetPatterns()\n\tif originalPatterns[0] != \"*.log\" {\n\t\tt.Errorf(\"Original patterns were modified: expected %q, got %q\", \"*.log\", originalPatterns[0])\n\t}\n}\n"
  },
  {
    "path": "internal/index/index.go",
    "content": "package index\n\nimport (\n\t\"bufio\"\n\t\"errors\"\n\t\"fmt\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"strings\"\n\n\t\"github.com/google/uuid\"\n)\n\n// The .evo/index is lines: \"<fileID> <path>\"\n\nfunc LoadIndex(repoPath string) (map[string]string, map[string]string, error) {\n\t// path->fileID, fileID->path\n\tpath2id := make(map[string]string)\n\tid2path := make(map[string]string)\n\tidxPath := filepath.Join(repoPath, \".evo\", \"index\")\n\tf, err := os.Open(idxPath)\n\tif os.IsNotExist(err) {\n\t\treturn path2id, id2path, nil\n\t}\n\tif err != nil {\n\t\treturn path2id, id2path, err\n\t}\n\tdefer f.Close()\n\tsc := bufio.NewScanner(f)\n\tfor sc.Scan() {\n\t\tline := strings.TrimSpace(sc.Text())\n\t\tif line == \"\" {\n\t\t\tcontinue\n\t\t}\n\t\tparts := strings.SplitN(line, \" \", 2)\n\t\tif len(parts) == 2 {\n\t\t\tfid := parts[0]\n\t\t\tp := parts[1]\n\t\t\tpath2id[p] = fid\n\t\t\tid2path[fid] = p\n\t\t}\n\t}\n\tif err := sc.Err(); err != nil {\n\t\treturn path2id, id2path, err\n\t}\n\treturn path2id, id2path, nil\n}\n\nfunc SaveIndex(repoPath string, path2id map[string]string) error {\n\tidxPath := filepath.Join(repoPath, \".evo\", \"index\")\n\tf, err := os.OpenFile(idxPath, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0644)\n\tif err != nil {\n\t\treturn err\n\t}\n\tdefer f.Close()\n\tfor p, fid := range path2id {\n\t\tfmt.Fprintf(f, \"%s %s\\n\", fid, p)\n\t}\n\treturn nil\n}\n\n// UpdateIndex => scans working dir, assigns stable fileIDs, removes missing files\nfunc UpdateIndex(repoPath string) error {\n\tp2id, id2p, err := LoadIndex(repoPath)\n\tif err != nil {\n\t\treturn err\n\t}\n\tvar working []string\n\terr = filepath.Walk(repoPath, func(path string, info os.FileInfo, e error) error {\n\t\tif e != nil {\n\t\t\treturn e\n\t\t}\n\t\tif !info.IsDir() {\n\t\t\trel, _ := filepath.Rel(repoPath, path)\n\t\t\tif !strings.HasPrefix(rel, \".evo\") {\n\t\t\t\tworking = append(working, rel)\n\t\t\t}\n\t\t}\n\t\treturn nil\n\t})\n\tif err != nil {\n\t\treturn err\n\t}\n\t// detect new files\n\tfor _, w := range working {\n\t\tif _, ok := p2id[w]; !ok {\n\t\t\t// assign 
new fileID\n\t\t\tfid := uuid.New().String()\n\t\t\tp2id[w] = fid\n\t\t\tid2p[fid] = w\n\t\t}\n\t}\n\t// detect removed\n\tfor p, fid := range p2id {\n\t\tfound := false\n\t\tfor _, w := range working {\n\t\t\tif w == p {\n\t\t\t\tfound = true\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tif !found {\n\t\t\tdelete(p2id, p)\n\t\t\tdelete(id2p, fid)\n\t\t}\n\t}\n\treturn SaveIndex(repoPath, p2id)\n}\n\n// LookupFileID => returns stable fileID for a given path\nfunc LookupFileID(repoPath, relPath string) (string, error) {\n\tp2id, _, err := LoadIndex(repoPath)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\tfid, ok := p2id[relPath]\n\tif !ok {\n\t\treturn \"\", errors.New(\"file not tracked in index: \" + relPath)\n\t}\n\treturn fid, nil\n}\n"
  },
  {
    "path": "internal/lfs/diff.go",
    "content": "package lfs\n\nimport (\n\t\"bytes\"\n\t\"io\"\n)\n\nconst (\n\t// RollingHashWindow is the size of the rolling hash window\n\tRollingHashWindow = 64\n\n\t// MinMatchSize is the minimum size of a matching block\n\tMinMatchSize = 32\n)\n\n// RollingHash implements a simple rolling hash for binary diff\ntype RollingHash struct {\n\twindow []byte\n\tpos    int\n\thash   uint32\n}\n\n// NewRollingHash creates a new rolling hash\nfunc NewRollingHash() *RollingHash {\n\treturn &RollingHash{\n\t\twindow: make([]byte, RollingHashWindow),\n\t}\n}\n\n// Update updates the rolling hash with a new byte\nfunc (r *RollingHash) Update(b byte) uint32 {\n\t// Remove old byte's contribution\n\told := r.window[r.pos]\n\tr.hash = (r.hash - uint32(old)) + uint32(b)\n\n\t// Add new byte\n\tr.window[r.pos] = b\n\tr.pos = (r.pos + 1) % RollingHashWindow\n\n\treturn r.hash\n}\n\n// BinaryDiff generates a binary diff between two readers\nfunc BinaryDiff(old, new io.Reader) ([]DiffEntry, error) {\n\t// Read old content into memory for efficient matching\n\toldData, err := io.ReadAll(old)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\t// Read new content into memory for efficient matching\n\tnewData, err := io.ReadAll(new)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\t// Initialize rolling hash\n\trh := NewRollingHash()\n\tblockIndex := make(map[uint32][]int)\n\n\t// Build block index for old content\n\tif len(oldData) >= RollingHashWindow {\n\t\tfor i := 0; i <= len(oldData)-RollingHashWindow; i++ {\n\t\t\t// Update rolling hash\n\t\t\tif i == 0 {\n\t\t\t\tfor j := 0; j < RollingHashWindow && j < len(oldData); j++ {\n\t\t\t\t\trh.Update(oldData[j])\n\t\t\t\t}\n\t\t\t} else if i+RollingHashWindow-1 < len(oldData) {\n\t\t\t\trh.Update(oldData[i+RollingHashWindow-1])\n\t\t\t}\n\t\t\thash := rh.hash\n\n\t\t\t// Store position for this hash\n\t\t\tblockIndex[hash] = append(blockIndex[hash], i)\n\t\t}\n\t}\n\n\t// Process new content to find matches\n\tvar diff 
[]DiffEntry\n\tnewBuf := &bytes.Buffer{}\n\tpos := 0\n\n\tfor pos < len(newData) {\n\t\t// Calculate rolling hash for current window\n\t\trh = NewRollingHash()\n\t\twindowEnd := pos + RollingHashWindow\n\t\tif windowEnd > len(newData) {\n\t\t\twindowEnd = len(newData)\n\t\t}\n\t\tfor i := pos; i < windowEnd; i++ {\n\t\t\trh.Update(newData[i])\n\t\t}\n\t\thash := rh.hash\n\n\t\t// Look for matches\n\t\tmatched := false\n\t\tif positions, ok := blockIndex[hash]; ok {\n\t\t\tfor _, oldPos := range positions {\n\t\t\t\t// Verify full match\n\t\t\t\tmatchLen := 0\n\t\t\t\tfor i := 0; i < MinMatchSize && pos+i < len(newData) && oldPos+i < len(oldData); i++ {\n\t\t\t\t\tif oldData[oldPos+i] != newData[pos+i] {\n\t\t\t\t\t\tbreak\n\t\t\t\t\t}\n\t\t\t\t\tmatchLen++\n\t\t\t\t}\n\n\t\t\t\tif matchLen >= MinMatchSize {\n\t\t\t\t\t// Found a match, extend it\n\t\t\t\t\tfor oldPos+matchLen < len(oldData) && pos+matchLen < len(newData) && \n\t\t\t\t\t\toldData[oldPos+matchLen] == newData[pos+matchLen] {\n\t\t\t\t\t\tmatchLen++\n\t\t\t\t\t}\n\n\t\t\t\t\t// Add any pending new data\n\t\t\t\t\tif newBuf.Len() > 0 {\n\t\t\t\t\t\tdiff = append(diff, DiffEntry{\n\t\t\t\t\t\t\tType: DiffNew,\n\t\t\t\t\t\t\tData: newBuf.Bytes(),\n\t\t\t\t\t\t})\n\t\t\t\t\t\tnewBuf.Reset()\n\t\t\t\t\t}\n\n\t\t\t\t\t// Add the match\n\t\t\t\t\tdiff = append(diff, DiffEntry{\n\t\t\t\t\t\tType:     DiffCopy,\n\t\t\t\t\t\tOffset:   int64(oldPos),\n\t\t\t\t\t\tLength:   int64(matchLen),\n\t\t\t\t\t})\n\n\t\t\t\t\tpos += matchLen\n\t\t\t\t\tmatched = true\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tif !matched && pos < len(newData) {\n\t\t\t// No match found, add to new data buffer\n\t\t\tnewBuf.WriteByte(newData[pos])\n\t\t\tpos++\n\t\t}\n\t}\n\n\t// Add any remaining new data\n\tif newBuf.Len() > 0 {\n\t\tdiff = append(diff, DiffEntry{\n\t\t\tType: DiffNew,\n\t\t\tData: newBuf.Bytes(),\n\t\t})\n\t}\n\n\treturn diff, nil\n}\n\n// DiffType represents the type of a diff entry\ntype DiffType byte\n\nconst 
(\n\tDiffCopy DiffType = iota // Copy from old file\n\tDiffNew                  // New data\n)\n\n// DiffEntry represents a single entry in a binary diff\ntype DiffEntry struct {\n\tType   DiffType // Type of entry\n\tOffset int64    // Offset in old file (for Copy)\n\tLength int64    // Length to copy (for Copy)\n\tData   []byte   // New data (for New)\n}\n\n// ApplyDiff applies a binary diff to generate new content\nfunc ApplyDiff(old io.Reader, diff []DiffEntry, w io.Writer) error {\n\t// Read old content\n\toldData, err := io.ReadAll(old)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t// Apply diff entries\n\tfor _, entry := range diff {\n\t\tswitch entry.Type {\n\t\tcase DiffCopy:\n\t\t\t// Copy from old file\n\t\t\tif entry.Offset+entry.Length > int64(len(oldData)) {\n\t\t\t\treturn io.ErrUnexpectedEOF\n\t\t\t}\n\t\t\tif _, err := w.Write(oldData[entry.Offset:entry.Offset+entry.Length]); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\tcase DiffNew:\n\t\t\t// Write new data\n\t\t\tif _, err := w.Write(entry.Data); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\t}\n\n\treturn nil\n}\n"
  },
  {
    "path": "internal/lfs/diff_test.go",
    "content": "package lfs\n\nimport (\n\t\"bytes\"\n\t\"io\"\n\t\"testing\"\n)\n\nfunc TestBinaryDiff(t *testing.T) {\n\tt.Run(\"Small Changes\", func(t *testing.T) {\n\t\t// Original content\n\t\toldData := []byte(\"Hello, this is a test file for binary diff!\")\n\t\t// Modified content (changed one word)\n\t\tnewData := []byte(\"Hello, this is a sample file for binary diff!\")\n\n\t\t// Generate diff\n\t\tdiff, err := BinaryDiff(bytes.NewReader(oldData), bytes.NewReader(newData))\n\t\tif err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\n\t\t// Apply diff\n\t\tvar result bytes.Buffer\n\t\tif err := ApplyDiff(bytes.NewReader(oldData), diff, &result); err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\n\t\t// Verify result\n\t\tif !bytes.Equal(result.Bytes(), newData) {\n\t\t\tt.Error(\"Diff application failed to reproduce new content\")\n\t\t}\n\t})\n\n\tt.Run(\"Large Block Changes\", func(t *testing.T) {\n\t\t// Create large test data\n\t\toldData := make([]byte, 100*1024) // 100KB\n\t\tnewData := make([]byte, 100*1024)\n\n\t\t// Fill with pattern\n\t\tfor i := range oldData {\n\t\t\toldData[i] = byte(i % 256)\n\t\t\tnewData[i] = byte(i % 256)\n\t\t}\n\n\t\t// Modify a block in the middle\n\t\tcopy(newData[50*1024:], bytes.Repeat([]byte(\"modified\"), 1024))\n\n\t\t// Generate and apply diff\n\t\tdiff, err := BinaryDiff(bytes.NewReader(oldData), bytes.NewReader(newData))\n\t\tif err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\n\t\tvar result bytes.Buffer\n\t\tif err := ApplyDiff(bytes.NewReader(oldData), diff, &result); err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\n\t\tif !bytes.Equal(result.Bytes(), newData) {\n\t\t\tt.Error(\"Failed to reproduce large modified content\")\n\t\t}\n\t})\n\n\tt.Run(\"Rolling Hash\", func(t *testing.T) {\n\t\trh := NewRollingHash()\n\n\t\t// Use strictly increasing bytes so each window sums to a distinct hash;\n\t\t// data must be longer than RollingHashWindow for any window to exist\n\t\tdata := make([]byte, RollingHashWindow+16)\n\t\tfor i := range data {\n\t\t\tdata[i] = byte(i)\n\t\t}\n\t\tvar hashes []uint32\n\n\t\t// Calculate rolling hash for each window\n\t\tfor i := 0; i <= len(data)-RollingHashWindow; i++ {\n\t\t\t// Reset hash 
for new window\n\t\t\trh = NewRollingHash()\n\t\t\tfor j := 0; j < RollingHashWindow; j++ {\n\t\t\t\trh.Update(data[i+j])\n\t\t\t}\n\t\t\thashes = append(hashes, rh.hash)\n\t\t}\n\n\t\t// Verify we get different hashes for different windows\n\t\tseen := make(map[uint32]bool)\n\t\tfor _, h := range hashes {\n\t\t\tif seen[h] {\n\t\t\t\tt.Error(\"Hash collision in rolling hash\")\n\t\t\t}\n\t\t\tseen[h] = true\n\t\t}\n\t})\n\n\tt.Run(\"Empty Input\", func(t *testing.T) {\n\t\tdiff, err := BinaryDiff(bytes.NewReader([]byte{}), bytes.NewReader([]byte{}))\n\t\tif err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\t\tif len(diff) != 0 {\n\t\t\tt.Error(\"Expected empty diff for empty input\")\n\t\t}\n\t})\n\n\tt.Run(\"Append Content\", func(t *testing.T) {\n\t\toldData := []byte(\"Original content\")\n\t\tnewData := []byte(\"Original content with appended text\")\n\n\t\tdiff, err := BinaryDiff(bytes.NewReader(oldData), bytes.NewReader(newData))\n\t\tif err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\n\t\tvar result bytes.Buffer\n\t\tif err := ApplyDiff(bytes.NewReader(oldData), diff, &result); err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\n\t\tif !bytes.Equal(result.Bytes(), newData) {\n\t\t\tt.Error(\"Failed to handle appended content\")\n\t\t}\n\t})\n\n\tt.Run(\"Streaming Large Content\", func(t *testing.T) {\n\t\t// Create large streamed inputs; note BinaryDiff still buffers both readers fully in memory\n\t\toldReader := &infiniteReader{limit: 10 * 1024 * 1024} // 10MB\n\t\tnewReader := &infiniteReader{limit: 10 * 1024 * 1024, modified: true}\n\n\t\t// Generate diff\n\t\tdiff, err := BinaryDiff(oldReader, newReader)\n\t\tif err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\n\t\t// Verify the diff stays compact\n\t\tif len(diff) > 1024*1024 { // entry count should be far below the input size\n\t\t\tt.Error(\"Diff size too large for similar content\")\n\t\t}\n\t})\n}\n\n// infiniteReader generates predictable content for testing\ntype infiniteReader struct {\n\tpos      int64\n\tlimit    int64\n\tmodified bool\n}\n\nfunc (r *infiniteReader) Read(p 
[]byte) (n int, err error) {\n\tif r.pos >= r.limit {\n\t\treturn 0, io.EOF\n\t}\n\n\tfor i := range p {\n\t\tif r.pos+int64(i) >= r.limit {\n\t\t\treturn i, io.EOF\n\t\t}\n\t\tif r.modified && r.pos+int64(i) >= r.limit/2 {\n\t\t\tp[i] = byte((r.pos + int64(i)) % 251) // Different pattern\n\t\t} else {\n\t\t\tp[i] = byte((r.pos + int64(i)) % 250)\n\t\t}\n\t}\n\tr.pos += int64(len(p))\n\treturn len(p), nil\n}\n"
  },
  {
    "path": "internal/lfs/gc.go",
    "content": "package lfs\n\nimport (\n\t\"fmt\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"sync\"\n\t\"time\"\n)\n\n// GarbageCollector manages cleanup of unreferenced chunks\ntype GarbageCollector struct {\n\tstore *Store\n\tmu    sync.Mutex\n\tdone  chan struct{}\n}\n\n// NewGarbageCollector creates a new garbage collector\nfunc NewGarbageCollector(store *Store) *GarbageCollector {\n\treturn &GarbageCollector{\n\t\tstore: store,\n\t\tdone:  make(chan struct{}),\n\t}\n}\n\n// Start begins periodic garbage collection\nfunc (gc *GarbageCollector) Start() {\n\tticker := time.NewTicker(24 * time.Hour) // Run daily\n\tgo func() {\n\t\tfor {\n\t\t\tselect {\n\t\t\tcase <-ticker.C:\n\t\t\t\tif err := gc.Run(); err != nil {\n\t\t\t\t\tfmt.Fprintf(os.Stderr, \"Error during LFS garbage collection: %v\\n\", err)\n\t\t\t\t}\n\t\t\tcase <-gc.done:\n\t\t\t\tticker.Stop()\n\t\t\t\treturn\n\t\t\t}\n\t\t}\n\t}()\n}\n\n// Stop stops the garbage collector\nfunc (gc *GarbageCollector) Stop() {\n\tclose(gc.done)\n}\n\n// Run performs garbage collection\nfunc (gc *GarbageCollector) Run() error {\n\tgc.mu.Lock()\n\tdefer gc.mu.Unlock()\n\n\t// Get all chunks\n\tchunksDir := filepath.Join(gc.store.root, \".evo\", \"chunks\")\n\tchunks, err := os.ReadDir(chunksDir)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"failed to read chunks directory: %w\", err)\n\t}\n\n\t// Check each chunk\n\tfor _, chunk := range chunks {\n\t\tif chunk.IsDir() {\n\t\t\tcontinue\n\t\t}\n\n\t\t// Delete if not referenced\n\t\tchunkHash := chunk.Name()\n\t\tif !gc.store.isChunkReferenced(chunkHash) {\n\t\t\tchunkPath := filepath.Join(chunksDir, chunkHash)\n\t\t\tif err := os.Remove(chunkPath); err != nil {\n\t\t\t\treturn fmt.Errorf(\"failed to delete unreferenced chunk %s: %w\", chunkHash, err)\n\t\t\t}\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// PruneTombstones removes old tombstones\nfunc (gc *GarbageCollector) PruneTombstones(maxAge time.Duration) error {\n\tgc.mu.Lock()\n\tdefer gc.mu.Unlock()\n\n\t// Get all 
files\n\tfilesDir := filepath.Join(gc.store.root, \".evo\", \"lfs\")\n\tfiles, err := os.ReadDir(filesDir)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"failed to read files directory: %w\", err)\n\t}\n\n\tcutoff := time.Now().Add(-maxAge)\n\n\t// Check each file\n\tfor _, file := range files {\n\t\tif !file.IsDir() {\n\t\t\tcontinue\n\t\t}\n\n\t\t// Load file info\n\t\tinfo, err := gc.store.loadFileInfo(file.Name())\n\t\tif err != nil {\n\t\t\tcontinue\n\t\t}\n\n\t\t// Delete if it's a tombstone older than maxAge\n\t\tif info.RefCount == 0 && info.Created.Before(cutoff) {\n\t\t\tif err := gc.store.DeleteFile(file.Name()); err != nil {\n\t\t\t\treturn fmt.Errorf(\"failed to delete old tombstone %s: %w\", file.Name(), err)\n\t\t\t}\n\t\t}\n\t}\n\n\treturn nil\n}\n"
  },
  {
    "path": "internal/lfs/store.go",
    "content": "package lfs\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"io\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"sync\"\n\t\"time\"\n)\n\n// Store manages large file storage with deduplication\ntype Store struct {\n\tmu   sync.RWMutex\n\troot string\n}\n\n// NewStore creates a new LFS store at the given root path\nfunc NewStore(root string) *Store {\n\t// Create necessary directories\n\tos.MkdirAll(filepath.Join(root, \".evo\", \"lfs\"), 0755)\n\tos.MkdirAll(filepath.Join(root, \".evo\", \"chunks\"), 0755)\n\n\treturn &Store{\n\t\troot: root,\n\t}\n}\n\n// StoreFile stores a file in chunks and returns file info\nfunc (s *Store) StoreFile(id string, r io.Reader, size int64) (*FileInfo, error) {\n\ts.mu.Lock()\n\tdefer s.mu.Unlock()\n\n\t// Create file directory\n\tfileDir := filepath.Join(s.root, \".evo\", \"lfs\", id)\n\tif err := os.MkdirAll(fileDir, 0755); err != nil {\n\t\treturn nil, err\n\t}\n\n\t// Calculate content hash and split into chunks\n\tchunks := make([]ChunkInfo, 0)\n\tcontentHash := NewHash()\n\n\t// Read file in chunks to calculate hash and store chunks\n\tvar totalSize int64\n\tbuf := make([]byte, ChunkSize)\n\tfor totalSize < size {\n\t\t// Calculate remaining size and read size\n\t\tremaining := size - totalSize\n\t\treadSize := ChunkSize\n\t\tif remaining < ChunkSize {\n\t\t\treadSize = int(remaining)\n\t\t}\n\n\t\t// Read chunk\n\t\tn, err := io.ReadFull(r, buf[:readSize])\n\t\tif err != nil && err != io.EOF && err != io.ErrUnexpectedEOF {\n\t\t\treturn nil, err\n\t\t}\n\t\tif n == 0 {\n\t\t\tbreak\n\t\t}\n\n\t\t// Calculate content hash for this chunk\n\t\tcontentHash.Write(buf[:n])\n\n\t\t// Calculate chunk hash and store chunk\n\t\tchunk := make([]byte, n)\n\t\tcopy(chunk, buf[:n])\n\t\tchunkHash := HashBytes(chunk)\n\n\t\t// Store chunk if it doesn't exist\n\t\tchunkPath := filepath.Join(s.root, \".evo\", \"chunks\", chunkHash)\n\t\tif _, err := os.Stat(chunkPath); os.IsNotExist(err) {\n\t\t\t// Store new chunk\n\t\t\tchunkData := 
make([]byte, n)\n\t\t\tcopy(chunkData, chunk)\n\t\t\tif err := os.WriteFile(chunkPath, chunkData, 0644); err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t}\n\n\t\tchunks = append(chunks, ChunkInfo{\n\t\t\tHash: chunkHash,\n\t\t\tSize: int64(n),\n\t\t})\n\n\t\ttotalSize += int64(n)\n\n\t\t// Break if we've read all the data\n\t\tif totalSize >= size {\n\t\t\tbreak\n\t\t}\n\t}\n\n\t// Verify total size matches expected size\n\tif totalSize != size {\n\t\treturn nil, fmt.Errorf(\"expected size %d, got %d\", size, totalSize)\n\t}\n\n\thashStr := contentHash.Sum()\n\n\t// Check for existing file with same content hash\n\texistingFiles, err := os.ReadDir(filepath.Join(s.root, \".evo\", \"lfs\"))\n\tif err == nil {\n\t\tfor _, f := range existingFiles {\n\t\t\tif !f.IsDir() {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\texistingInfo, err := s.loadFileInfo(f.Name())\n\t\t\tif err != nil {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tif existingInfo.ContentHash == hashStr {\n\t\t\t\t// Found existing file with same content\n\t\t\t\texistingInfo.RefCount++\n\t\t\t\tif err := s.saveFileInfo(f.Name(), existingInfo); err != nil {\n\t\t\t\t\treturn nil, err\n\t\t\t\t}\n\n\t\t\t\t// Create new file info pointing to same chunks\n\t\t\t\tnewInfo := &FileInfo{\n\t\t\t\t\tID:          id,\n\t\t\t\t\tSize:        existingInfo.Size,\n\t\t\t\t\tContentHash: existingInfo.ContentHash,\n\t\t\t\t\tNumChunks:   existingInfo.NumChunks,\n\t\t\t\t\tChunks:      existingInfo.Chunks,\n\t\t\t\t\tRefCount:    existingInfo.RefCount, // Use same ref count as existing file\n\t\t\t\t\tCreated:     time.Now(),\n\t\t\t\t}\n\t\t\t\tif err := s.saveFileInfo(id, newInfo); err != nil {\n\t\t\t\t\treturn nil, err\n\t\t\t\t}\n\t\t\t\treturn newInfo, nil\n\t\t\t}\n\t\t}\n\t}\n\n\t// Create file info\n\tinfo := &FileInfo{\n\t\tID:          id,\n\t\tSize:        size,\n\t\tContentHash: hashStr,\n\t\tNumChunks:   len(chunks),\n\t\tChunks:      chunks,\n\t\tRefCount:    1,\n\t\tCreated:     time.Now(),\n\t}\n\n\t// Save file 
info\n\tif err := s.saveFileInfo(id, info); err != nil {\n\t\treturn nil, err\n\t}\n\n\treturn info, nil\n}\n\nfunc (s *Store) saveFileInfo(id string, info *FileInfo) error {\n\tdata, err := json.Marshal(info)\n\tif err != nil {\n\t\treturn err\n\t}\n\treturn os.WriteFile(filepath.Join(s.root, \".evo\", \"lfs\", id, \"info.json\"), data, 0644)\n}\n\nfunc (s *Store) loadFileInfo(id string) (*FileInfo, error) {\n\tdata, err := os.ReadFile(filepath.Join(s.root, \".evo\", \"lfs\", id, \"info.json\"))\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tvar info FileInfo\n\tif err := json.Unmarshal(data, &info); err != nil {\n\t\treturn nil, err\n\t}\n\treturn &info, nil\n}\n\n// ReadFile reads a file from chunks into the writer\nfunc (s *Store) ReadFile(id string, w io.Writer) error {\n\ts.mu.RLock()\n\tdefer s.mu.RUnlock()\n\n\t// Load file info\n\tinfo, err := s.loadFileInfo(id)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t// Read chunks\n\tfor _, chunk := range info.Chunks {\n\t\tdata, err := os.ReadFile(filepath.Join(s.root, \".evo\", \"chunks\", chunk.Hash))\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\t\tif _, err := w.Write(data); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// DeleteFile deletes a file and its chunks if no longer referenced\nfunc (s *Store) DeleteFile(id string) error {\n\ts.mu.Lock()\n\tdefer s.mu.Unlock()\n\n\t// Load file info\n\tinfo, err := s.loadFileInfo(id)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t// Delete file info\n\tfileDir := filepath.Join(s.root, \".evo\", \"lfs\", id)\n\tif err := os.RemoveAll(fileDir); err != nil {\n\t\treturn err\n\t}\n\n\t// Find other files with same content hash\n\texistingFiles, err := os.ReadDir(filepath.Join(s.root, \".evo\", \"lfs\"))\n\tif err == nil {\n\t\tfor _, f := range existingFiles {\n\t\t\tif !f.IsDir() || f.Name() == id {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\texistingInfo, err := s.loadFileInfo(f.Name())\n\t\t\tif err != nil {\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tif 
existingInfo.ContentHash == info.ContentHash {\n\t\t\t\t// Found another file with same content, decrement its ref count\n\t\t\t\texistingInfo.RefCount--\n\t\t\t\tif err := s.saveFileInfo(f.Name(), existingInfo); err != nil {\n\t\t\t\t\treturn err\n\t\t\t\t}\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t}\n\n\t// Delete unreferenced chunks\n\tfor _, chunk := range info.Chunks {\n\t\tchunkPath := filepath.Join(s.root, \".evo\", \"chunks\", chunk.Hash)\n\t\tif s.isChunkReferenced(chunk.Hash) {\n\t\t\tcontinue\n\t\t}\n\t\tif err := os.Remove(chunkPath); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// isChunkReferenced checks if a chunk is referenced by any file\nfunc (s *Store) isChunkReferenced(hash string) bool {\n\tfiles, err := os.ReadDir(filepath.Join(s.root, \".evo\", \"lfs\"))\n\tif err != nil {\n\t\treturn false\n\t}\n\n\tfor _, file := range files {\n\t\tif !file.IsDir() {\n\t\t\tcontinue\n\t\t}\n\n\t\tinfo, err := s.loadFileInfo(file.Name())\n\t\tif err != nil {\n\t\t\tcontinue\n\t\t}\n\n\t\tfor _, chunk := range info.Chunks {\n\t\t\tif chunk.Hash == hash {\n\t\t\t\treturn true\n\t\t\t}\n\t\t}\n\t}\n\n\treturn false\n}\n\nfunc min(a, b int64) int64 {\n\tif a < b {\n\t\treturn a\n\t}\n\treturn b\n}\n"
  },
  {
    "path": "internal/lfs/store_test.go",
    "content": "package lfs\n\nimport (\n\t\"bytes\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"testing\"\n)\n\nfunc TestStore(t *testing.T) {\n\t// Create temp dir for testing\n\ttmpDir, err := os.MkdirTemp(\"\", \"evo-lfs-test-*\")\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tdefer os.RemoveAll(tmpDir)\n\n\t// Create test file\n\ttestData := []byte(\"Hello, this is test data for LFS!\")\n\ttestFile := filepath.Join(tmpDir, \"test.txt\")\n\tif err := os.WriteFile(testFile, testData, 0644); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\t// Initialize store\n\tstore := NewStore(tmpDir)\n\n\tt.Run(\"Store and Read File\", func(t *testing.T) {\n\t\t// Store file\n\t\tf, err := os.Open(testFile)\n\t\tif err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\t\tdefer f.Close()\n\n\t\tinfo, err := store.StoreFile(\"test123\", f, int64(len(testData)))\n\t\tif err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\n\t\t// Verify file info\n\t\tif info.Size != int64(len(testData)) {\n\t\t\tt.Errorf(\"Expected size %d, got %d\", len(testData), info.Size)\n\t\t}\n\t\tif info.RefCount != 1 {\n\t\t\tt.Errorf(\"Expected refCount 1, got %d\", info.RefCount)\n\t\t}\n\n\t\t// Read file back\n\t\tvar buf bytes.Buffer\n\t\tif err := store.ReadFile(\"test123\", &buf); err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\n\t\t// Verify content\n\t\tif !bytes.Equal(buf.Bytes(), testData) {\n\t\t\tt.Error(\"Read data doesn't match original\")\n\t\t}\n\t})\n\n\tt.Run(\"Deduplication\", func(t *testing.T) {\n\t\t// Store same file again\n\t\tf, err := os.Open(testFile)\n\t\tif err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\t\tdefer f.Close()\n\n\t\tinfo, err := store.StoreFile(\"test456\", f, int64(len(testData)))\n\t\tif err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\n\t\t// Verify increased ref count\n\t\tif info.RefCount != 2 {\n\t\t\tt.Errorf(\"Expected refCount 2, got %d\", info.RefCount)\n\t\t}\n\n\t\t// Check chunks directory\n\t\tchunksDir := filepath.Join(tmpDir, \".evo\", \"chunks\")\n\t\tentries, err := os.ReadDir(chunksDir)\n\t\tif err 
!= nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\n\t\t// Should only have one chunk since content is identical\n\t\tif len(entries) != 1 {\n\t\t\tt.Errorf(\"Expected 1 chunk, got %d\", len(entries))\n\t\t}\n\t})\n\n\tt.Run(\"Reference Counting\", func(t *testing.T) {\n\t\t// Delete first file\n\t\tif err := store.DeleteFile(\"test123\"); err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\n\t\t// Verify file still exists\n\t\tinfo, err := store.loadFileInfo(\"test456\")\n\t\tif err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\t\tif info.RefCount != 1 {\n\t\t\tt.Errorf(\"Expected refCount 1, got %d\", info.RefCount)\n\t\t}\n\n\t\t// Delete last reference\n\t\tif err := store.DeleteFile(\"test456\"); err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\n\t\t// Verify file is gone\n\t\tif _, err := store.loadFileInfo(\"test456\"); err == nil {\n\t\t\tt.Error(\"File should not exist\")\n\t\t}\n\t})\n}\n\nfunc TestLargeFileChunking(t *testing.T) {\n\t// Create temp dir\n\ttmpDir, err := os.MkdirTemp(\"\", \"evo-lfs-chunks-*\")\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tdefer os.RemoveAll(tmpDir)\n\n\t// Create large test data (5MB)\n\tsize := 5 * 1024 * 1024\n\tdata := make([]byte, size)\n\n\t// Ensure each 1MB chunk is unique:\n\tfor i := 0; i < size; i++ {\n\t\tchunkIndex := i >> 20 // i / 1 MB\n\t\tdata[i] = byte(chunkIndex)\n\t}\n\n\t// Write to testFile, then store in LFS, expecting 5 distinct chunks\n\ttestFile := filepath.Join(tmpDir, \"large.bin\")\n\tif err := os.WriteFile(testFile, data, 0644); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tstore := NewStore(tmpDir)\n\n\tt.Run(\"Chunk Storage\", func(t *testing.T) {\n\t\t// Store large file\n\t\tf, err := os.Open(testFile)\n\t\tif err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\t\tdefer f.Close()\n\n\t\tinfo, err := store.StoreFile(\"large123\", f, int64(size))\n\t\tif err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\n\t\t// Verify number of chunks\n\t\texpectedChunks := (size + ChunkSize - 1) / ChunkSize\n\t\tif info.NumChunks != expectedChunks 
{\n\t\t\tt.Errorf(\"Expected %d chunks, got %d\", expectedChunks, info.NumChunks)\n\t\t}\n\n\t\t// Check chunks directory\n\t\tchunksDir := filepath.Join(tmpDir, \".evo\", \"chunks\")\n\t\tentries, err := os.ReadDir(chunksDir)\n\t\tif err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\t\tif len(entries) != expectedChunks {\n\t\t\tt.Errorf(\"Expected %d chunk files, got %d\", expectedChunks, len(entries))\n\t\t}\n\t})\n\n\tt.Run(\"Streaming Read\", func(t *testing.T) {\n\t\t// Read file back in chunks\n\t\tvar buf bytes.Buffer\n\t\tif err := store.ReadFile(\"large123\", &buf); err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\n\t\t// Verify content\n\t\tif !bytes.Equal(buf.Bytes(), data) {\n\t\t\tt.Error(\"Read data doesn't match original\")\n\t\t}\n\t})\n}\n\nfunc TestGarbageCollection(t *testing.T) {\n\t// Create temp dir\n\ttmpDir, err := os.MkdirTemp(\"\", \"evo-lfs-gc-*\")\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tdefer os.RemoveAll(tmpDir)\n\n\tstore := NewStore(tmpDir)\n\tgc := NewGarbageCollector(store)\n\n\t// Create test files\n\tfiles := []struct {\n\t\tname    string\n\t\tcontent []byte\n\t}{\n\t\t{\"file1\", []byte(\"content1\")},\n\t\t{\"file2\", []byte(\"content2\")},\n\t\t{\"file3\", []byte(\"content3\")},\n\t}\n\n\tfor _, f := range files {\n\t\tr := bytes.NewReader(f.content)\n\t\tif _, err := store.StoreFile(f.name, r, int64(len(f.content))); err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\t}\n\n\tt.Run(\"GC Cleanup\", func(t *testing.T) {\n\t\t// Delete some files\n\t\tif err := store.DeleteFile(\"file1\"); err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\t\tif err := store.DeleteFile(\"file2\"); err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\n\t\t// Run GC\n\t\tif err := gc.Run(); err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\n\t\t// Verify only file3's chunks remain\n\t\tchunksDir := filepath.Join(tmpDir, \".evo\", \"chunks\")\n\t\tentries, err := os.ReadDir(chunksDir)\n\t\tif err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\n\t\texpectedChunks := 1 // Only file3's chunk should 
remain\n\t\tif len(entries) != expectedChunks {\n\t\t\tt.Errorf(\"Expected %d chunks after GC, got %d\", expectedChunks, len(entries))\n\t\t}\n\t})\n}\n"
  },
  {
    "path": "internal/lfs/types.go",
    "content": "package lfs\n\nimport (\n\t\"crypto/sha256\"\n\t\"encoding/hex\"\n\t\"hash\"\n\t\"time\"\n)\n\nconst (\n\t// ChunkSize is the size of each chunk in bytes (1MB)\n\tChunkSize = 1024 * 1024\n)\n\n// FileInfo contains metadata about a stored file\ntype FileInfo struct {\n\tID          string      `json:\"id\"`          // Unique file identifier\n\tSize        int64       `json:\"size\"`        // Total file size in bytes\n\tContentHash string      `json:\"contentHash\"` // Hash of entire file content\n\tNumChunks   int         `json:\"numChunks\"`   // Number of chunks\n\tChunks      []ChunkInfo `json:\"chunks\"`      // List of chunks\n\tRefCount    int         `json:\"refCount\"`    // Number of references to this file\n\tCreated     time.Time   `json:\"created\"`     // When the file was created\n}\n\n// ChunkInfo contains metadata about a file chunk\ntype ChunkInfo struct {\n\tHash string `json:\"hash\"` // Hash of chunk content\n\tSize int64  `json:\"size\"` // Size of chunk in bytes\n}\n\n// Hash represents a content-addressable hash\ntype Hash struct {\n\th hash.Hash\n}\n\n// NewHash creates a new hash\nfunc NewHash() *Hash {\n\treturn &Hash{h: sha256.New()}\n}\n\n// Write implements io.Writer\nfunc (h *Hash) Write(p []byte) (n int, err error) {\n\treturn h.h.Write(p)\n}\n\n// Sum returns the hash as a hex string\nfunc (h *Hash) Sum() string {\n\treturn hex.EncodeToString(h.h.Sum(nil))\n}\n\n// HashBytes returns the hash of a byte slice\nfunc HashBytes(data []byte) string {\n\th := sha256.New()\n\th.Write(data)\n\treturn hex.EncodeToString(h.Sum(nil))\n}\n"
  },
  {
    "path": "internal/ops/binary_log.go",
    "content": "package ops\n\nimport (\n\t\"encoding/binary\"\n\t\"evo/internal/crdt\"\n\t\"io\"\n\t\"os\"\n\n\t\"github.com/google/uuid\"\n)\n\n// WriteOp writes a single CRDT op in binary\nfunc WriteOp(w io.Writer, op crdt.Operation) error {\n\t// Format:\n\t// [1 byte opType]\n\t// [8 bytes lamport]\n\t// [16 bytes nodeID]\n\t// [16 bytes fileID]\n\t// [16 bytes lineID]\n\t// [4 bytes contentLen]\n\t// [content]\n\tbuf := make([]byte, 1+8+16+16+16+4)\n\tbuf[0] = byte(op.Type)\n\tbinary.BigEndian.PutUint64(buf[1:9], op.Lamport)\n\tcopy(buf[9:25], op.NodeID[:])\n\tcopy(buf[25:41], op.FileID[:])\n\tcopy(buf[41:57], op.LineID[:])\n\n\tcontentBytes := []byte(op.Content)\n\tbinary.BigEndian.PutUint32(buf[57:61], uint32(len(contentBytes)))\n\tif _, err := w.Write(buf); err != nil {\n\t\treturn err\n\t}\n\tif len(contentBytes) > 0 {\n\t\tif _, err := w.Write(contentBytes); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc ReadOp(r io.Reader) (*crdt.Operation, error) {\n\theader := make([]byte, 1+8+16+16+16+4)\n\t_, err := io.ReadFull(r, header)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\topType := crdt.OpType(header[0])\n\tlamport := binary.BigEndian.Uint64(header[1:9])\n\tvar nodeID, fileID, lineID uuid.UUID\n\tcopy(nodeID[:], header[9:25])\n\tcopy(fileID[:], header[25:41])\n\tcopy(lineID[:], header[41:57])\n\tcontentLen := binary.BigEndian.Uint32(header[57:61])\n\tcontent := make([]byte, contentLen)\n\tif contentLen > 0 {\n\t\tif _, err := io.ReadFull(r, content); err != nil {\n\t\t\treturn nil, err\n\t\t}\n\t}\n\treturn &crdt.Operation{\n\t\tType:    opType,\n\t\tLamport: lamport,\n\t\tNodeID:  nodeID,\n\t\tFileID:  fileID,\n\t\tLineID:  lineID,\n\t\tContent: string(content),\n\t}, nil\n}\n\nfunc LoadAllOps(filename string) ([]crdt.Operation, error) {\n\tvar out []crdt.Operation\n\tf, err := os.Open(filename)\n\tif os.IsNotExist(err) {\n\t\treturn out, nil\n\t}\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tdefer f.Close()\n\n\tfor {\n\t\top, 
e := ReadOp(f)\n\t\tif e == io.EOF {\n\t\t\tbreak\n\t\t}\n\t\tif e != nil {\n\t\t\t// A trailing partial record (e.g. an interrupted append) is\n\t\t\t// tolerated: return the ops successfully read so far.\n\t\t\treturn out, nil\n\t\t}\n\t\tout = append(out, *op)\n\t}\n\treturn out, nil\n}\n\nfunc AppendOp(filename string, op crdt.Operation) error {\n\tif err := os.MkdirAll(dirOf(filename), 0755); err != nil {\n\t\treturn err\n\t}\n\tf, err := os.OpenFile(filename, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0644)\n\tif err != nil {\n\t\treturn err\n\t}\n\tdefer f.Close()\n\treturn WriteOp(f, op)\n}\n\n// dirOf returns the parent directory of fp, handling both slash styles\n// without importing path/filepath.\nfunc dirOf(fp string) string {\n\tfor i := len(fp) - 1; i >= 0; i-- {\n\t\tif fp[i] == '/' || fp[i] == '\\\\' {\n\t\t\treturn fp[:i]\n\t\t}\n\t}\n\treturn \".\"\n}\n"
  },
  {
    "path": "internal/ops/ops.go",
    "content": "package ops\n\nimport (\n\t\"evo/internal/crdt\"\n\t\"evo/internal/index\"\n\t\"evo/internal/lfs\"\n\t\"evo/internal/util\"\n\t\"fmt\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"strings\"\n\t\"sync\"\n\t\"time\"\n\n\t\"github.com/google/uuid\"\n)\n\n// IngestLocalChanges checks each file in the working directory, handles large-file threshold, stable fileID, then line CRDT logic.\nfunc IngestLocalChanges(repoPath, stream string) ([]string, error) {\n\tfiles, err := util.ListAllFiles(repoPath)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tvar changed []string\n\tvar mu sync.Mutex\n\tvar wg sync.WaitGroup\n\tchWork := make(chan string, len(files))\n\tchErr := make(chan error, 8)\n\n\tfor _, f := range files {\n\t\tchWork <- f\n\t}\n\tclose(chWork)\n\n\tfor i := 0; i < 8; i++ {\n\t\twg.Add(1)\n\t\tgo func() {\n\t\t\tdefer wg.Done()\n\t\t\tfor rel := range chWork {\n\t\t\t\tif strings.HasPrefix(rel, \".evo\") {\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\tabs := filepath.Join(repoPath, rel)\n\t\t\t\tfi, errStat := os.Stat(abs)\n\t\t\t\tif errStat != nil || fi.IsDir() {\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\tok, e2 := processFile(repoPath, stream, rel, abs, fi.Size())\n\t\t\t\tif e2 != nil {\n\t\t\t\t\tchErr <- e2\n\t\t\t\t\treturn\n\t\t\t\t}\n\t\t\t\tif ok {\n\t\t\t\t\tmu.Lock()\n\t\t\t\t\tchanged = append(changed, rel)\n\t\t\t\t\tmu.Unlock()\n\t\t\t\t}\n\t\t\t}\n\t\t}()\n\t}\n\twg.Wait()\n\tclose(chErr)\n\tfor e := range chErr {\n\t\tif e != nil {\n\t\t\treturn nil, e\n\t\t}\n\t}\n\treturn changed, nil\n}\n\nfunc processFile(repoPath, stream, relPath, absPath string, fsize int64) (bool, error) {\n\tfileID, err := index.LookupFileID(repoPath, relPath)\n\tif err != nil {\n\t\t// not tracked => skip\n\t\treturn false, nil\n\t}\n\topsFile := filepath.Join(repoPath, \".evo\", \"ops\", stream, fileID+\".bin\")\n\texisting, _ := LoadAllOps(opsFile)\n\n\t// build doc\n\tdoc := crdt.NewRGA()\n\tfor _, op := range existing {\n\t\tif err := doc.Apply(op); err != nil 
{\n\t\t\treturn false, fmt.Errorf(\"applying operation: %v\", err)\n\t\t}\n\t}\n\n\tthreshold := readLargeThreshold(repoPath)\n\tif fsize > threshold {\n\t\t// large file => store stub\n\t\treturn storeLargeFile(repoPath, stream, fileID, relPath, absPath, doc, opsFile)\n\t}\n\n\t// normal text => read lines\n\tdata, err := os.ReadFile(absPath)\n\tif err != nil {\n\t\treturn false, err\n\t}\n\tdiskLines := strings.Split(strings.ReplaceAll(string(data), \"\\r\\n\", \"\\n\"), \"\\n\")\n\tdocLines := doc.Materialize()\n\tif eqLines(docLines, diskLines) {\n\t\treturn false, nil\n\t}\n\tchanged := false\n\tvar lamport uint64 = uint64(time.Now().UnixNano())\n\tnodeID := uuid.New()\n\n\tlineIDs := doc.GetLineIDs()\n\tprefix := 0\n\tminLen := len(docLines)\n\tif len(diskLines) < minLen {\n\t\tminLen = len(diskLines)\n\t}\n\tfor prefix < minLen && docLines[prefix] == diskLines[prefix] {\n\t\tprefix++\n\t}\n\tsuffix := 0\n\tfor suffix < minLen-prefix && docLines[len(docLines)-1-suffix] == diskLines[len(diskLines)-1-suffix] {\n\t\tsuffix++\n\t}\n\tdocMid := docLines[prefix : len(docLines)-suffix]\n\tdiskMid := diskLines[prefix : len(diskLines)-suffix]\n\n\tstartPos := prefix\n\tvar i int\n\tfor i = 0; i < len(docMid) && i < len(diskMid); i++ {\n\t\tif docMid[i] != diskMid[i] {\n\t\t\top := crdt.Operation{\n\t\t\t\tType:      crdt.OpUpdate,\n\t\t\t\tLamport:   lamport + uint64(i),\n\t\t\t\tNodeID:    nodeID,\n\t\t\t\tFileID:    parseUUID(fileID),\n\t\t\t\tLineID:    lineIDs[startPos+i],\n\t\t\t\tContent:   diskMid[i],\n\t\t\t\tStream:    stream,\n\t\t\t\tTimestamp: time.Now(),\n\t\t\t}\n\t\t\tif err := AppendOp(opsFile, op); err != nil {\n\t\t\t\treturn false, err\n\t\t\t}\n\t\t\tchanged = true\n\t\t}\n\t}\n\tfor j := len(diskMid); j < len(docMid); j++ {\n\t\top := crdt.Operation{\n\t\t\tType:      crdt.OpDelete,\n\t\t\tLamport:   lamport + uint64(j),\n\t\t\tNodeID:    nodeID,\n\t\t\tFileID:    parseUUID(fileID),\n\t\t\tLineID:    lineIDs[startPos+j],\n\t\t\tStream:    
stream,\n\t\t\tTimestamp: time.Now(),\n\t\t}\n\t\tif err := AppendOp(opsFile, op); err != nil {\n\t\t\treturn false, err\n\t\t}\n\t\tchanged = true\n\t}\n\tif i < len(diskMid) {\n\t\t// disk has extra lines => insert them, reusing this session's nodeID\n\t\tfor j := i; j < len(diskMid); j++ {\n\t\t\tinsOp := crdt.Operation{\n\t\t\t\tFileID:    parseUUID(fileID),\n\t\t\t\tType:      crdt.OpInsert,\n\t\t\t\tLamport:   lamport + uint64(j),\n\t\t\t\tNodeID:    nodeID,\n\t\t\t\tLineID:    uuid.New(),\n\t\t\t\tContent:   diskMid[j],\n\t\t\t\tStream:    stream,\n\t\t\t\tTimestamp: time.Now(),\n\t\t\t}\n\t\t\tif err := AppendOp(opsFile, insOp); err != nil {\n\t\t\t\treturn false, err\n\t\t\t}\n\t\t\tlamport++\n\t\t\tchanged = true\n\t\t}\n\t}\n\treturn changed, nil\n}\n\nfunc storeLargeFile(repoPath, stream, fileID, relPath, absPath string, doc *crdt.RGA, opsFile string) (bool, error) {\n\t// Initialize LFS store\n\tstore := lfs.NewStore(repoPath)\n\n\t// Open file\n\tf, err := os.Open(absPath)\n\tif err != nil {\n\t\treturn false, err\n\t}\n\tdefer f.Close()\n\n\t// Get file info\n\tstat, err := f.Stat()\n\tif err != nil {\n\t\treturn false, err\n\t}\n\n\t// Store in LFS\n\tinfo, err := store.StoreFile(fileID, f, stat.Size())\n\tif err != nil {\n\t\treturn false, err\n\t}\n\n\t// Add LFS stub line\n\tdocLines := doc.Materialize()\n\tif len(docLines) == 1 && strings.HasPrefix(docLines[0], \"EVO-LFS:\") {\n\t\t// already a stub\n\t\treturn false, nil\n\t}\n\n\t// Replace content with LFS stub\n\tlop := crdt.Operation{\n\t\tFileID:    parseUUID(fileID),\n\t\tType:      crdt.OpInsert,\n\t\tLamport:   uint64(time.Now().UnixNano()),\n\t\tNodeID:    uuid.New(),\n\t\tLineID:    uuid.New(),\n\t\tContent:   fmt.Sprintf(\"EVO-LFS:%s:%d\", fileID, info.Size),\n\t\tStream:    stream,\n\t\tTimestamp: time.Now(),\n\t}\n\tif err := AppendOp(opsFile, lop); err != nil {\n\t\treturn false, err\n\t}\n\n\treturn true, nil\n}\n\nfunc copyFile(src, dst string) error {\n\ts, err := os.Open(src)\n\tif err != nil {\n\t\treturn err\n\t}\n\tdefer s.Close()\n\td, err := os.Create(dst)\n\tif err != nil {\n\t\treturn err\n\t}\n\tdefer d.Close()\n\tbuf := make([]byte, 64*1024)\n\tfor {\n\t\tn, e := s.Read(buf)\n\t\tif n > 0 
{\n\t\t\tif _, werr := d.Write(buf[:n]); werr != nil {\n\t\t\t\treturn werr\n\t\t\t}\n\t\t}\n\t\tif e != nil {\n\t\t\t// e is io.EOF at end of file; any read error ends the copy here,\n\t\t\t// so callers needing strict error reporting should use io.Copy.\n\t\t\tbreak\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc readLargeThreshold(repoPath string) int64 {\n\t// TODO: read files.largeThreshold from the repo config; fall back to 1MB.\n\treturn 1_000_000\n}\n\nfunc parseUUID(s string) uuid.UUID {\n\t// Invalid input yields the zero UUID; IDs here come from the index,\n\t// which only stores values produced by uuid.New().\n\tid, _ := uuid.Parse(s)\n\treturn id\n}\n\nfunc eqLines(a, b []string) bool {\n\tif len(a) != len(b) {\n\t\treturn false\n\t}\n\tfor i := range a {\n\t\tif a[i] != b[i] {\n\t\t\treturn false\n\t\t}\n\t}\n\treturn true\n}\n"
  },
  {
    "path": "internal/repo/repo.go",
    "content": "package repo\n\nimport (\n\t\"errors\"\n\t\"evo/internal/crdt/compact\"\n\t\"evo/internal/lfs\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"sync\"\n)\n\nconst EvoDir = \".evo\"\n\nvar (\n\tcompactionService *compact.CompactionService\n\tgarbageCollector  *lfs.GarbageCollector\n\tserviceMutex      sync.Mutex\n)\n\n// InitRepo creates the .evo folder structure, default stream, config, index, etc.\nfunc InitRepo(path string) error {\n\tserviceMutex.Lock()\n\tdefer serviceMutex.Unlock()\n\n\tevoPath := filepath.Join(path, EvoDir)\n\tif _, err := os.Stat(evoPath); err == nil {\n\t\treturn errors.New(\"Evo repository already exists here\")\n\t}\n\n\tdirs := []string{\n\t\tfilepath.Join(path, EvoDir),\n\t\tfilepath.Join(path, EvoDir, \"ops\"),\n\t\tfilepath.Join(path, EvoDir, \"commits\"),\n\t\tfilepath.Join(path, EvoDir, \"config\"),\n\t\tfilepath.Join(path, EvoDir, \"streams\"),\n\t\tfilepath.Join(path, EvoDir, \"largefiles\"),\n\t\tfilepath.Join(path, EvoDir, \"cache\"),\n\t\tfilepath.Join(path, EvoDir, \"chunks\"),\n\t\tfilepath.Join(path, EvoDir, \"lfs\"),\n\t}\n\tfor _, d := range dirs {\n\t\tif err := os.MkdirAll(d, 0755); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\t// Start compaction service\n\tcs := compact.NewCompactionService(path, compact.DefaultConfig())\n\tif err := cs.Start(); err != nil {\n\t\treturn err\n\t}\n\tcompactionService = cs\n\n\t// Start LFS garbage collector\n\tstore := lfs.NewStore(path)\n\tgc := lfs.NewGarbageCollector(store)\n\tgc.Start()\n\tgarbageCollector = gc\n\n\t// HEAD => \"main\"\n\tif err := os.WriteFile(filepath.Join(evoPath, \"HEAD\"), []byte(\"main\"), 0644); err != nil {\n\t\treturn err\n\t}\n\t// create stream \"main\"\n\tif err := os.WriteFile(filepath.Join(evoPath, \"streams\", \"main\"), []byte{}, 0644); err != nil {\n\t\treturn err\n\t}\n\n\t// create empty .evo/index\n\tif err := os.WriteFile(filepath.Join(evoPath, \"index\"), []byte{}, 0644); err != nil {\n\t\treturn err\n\t}\n\n\treturn nil\n}\n\n// Cleanup 
stops all background services\nfunc Cleanup() {\n\tserviceMutex.Lock()\n\tdefer serviceMutex.Unlock()\n\n\tif compactionService != nil {\n\t\tcompactionService.Stop()\n\t\tcompactionService = nil\n\t}\n\n\tif garbageCollector != nil {\n\t\tgarbageCollector.Stop()\n\t\tgarbageCollector = nil\n\t}\n}\n\n// FindRepoRoot searches for .evo directory walking up from start\nfunc FindRepoRoot(start string) (string, error) {\n\tcur, err := filepath.Abs(start)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\tfor {\n\t\tif _, err := os.Stat(filepath.Join(cur, EvoDir)); err == nil {\n\t\t\treturn cur, nil\n\t\t}\n\t\tparent := filepath.Dir(cur)\n\t\tif parent == cur {\n\t\t\treturn \"\", os.ErrNotExist\n\t\t}\n\t\tcur = parent\n\t}\n}\n"
  },
  {
    "path": "internal/repo/repo_test.go",
    "content": "package repo\n\nimport (\n\t\"os\"\n\t\"path/filepath\"\n\t\"testing\"\n)\n\nfunc TestRepo(t *testing.T) {\n\t// Create temp dir for testing\n\ttmpDir, err := os.MkdirTemp(\"\", \"evo-repo-test-*\")\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tdefer os.RemoveAll(tmpDir)\n\n\tt.Run(\"Init Repository\", func(t *testing.T) {\n\t\trepoPath := filepath.Join(tmpDir, \"test-repo\")\n\t\tif err := InitRepo(repoPath); err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\n\t\t// Verify directory structure\n\t\tdirs := []string{\n\t\t\t\".evo\",\n\t\t\t\".evo/ops\",\n\t\t\t\".evo/commits\",\n\t\t\t\".evo/config\",\n\t\t\t\".evo/streams\",\n\t\t\t\".evo/chunks\",\n\t\t\t\".evo/lfs\",\n\t\t}\n\n\t\tfor _, dir := range dirs {\n\t\t\tpath := filepath.Join(repoPath, dir)\n\t\t\tif _, err := os.Stat(path); os.IsNotExist(err) {\n\t\t\t\tt.Errorf(\"Directory %s not created\", dir)\n\t\t\t}\n\t\t}\n\n\t\t// Verify HEAD file\n\t\thead, err := os.ReadFile(filepath.Join(repoPath, \".evo\", \"HEAD\"))\n\t\tif err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\t\tif string(head) != \"main\" {\n\t\t\tt.Errorf(\"Expected HEAD to be 'main', got '%s'\", string(head))\n\t\t}\n\t})\n\n\tt.Run(\"Find Repository Root\", func(t *testing.T) {\n\t\t// Create test repository\n\t\trepoPath := filepath.Join(tmpDir, \"find-repo-test\")\n\t\tif err := InitRepo(repoPath); err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\n\t\t// Create nested directory structure\n\t\tnestedPath := filepath.Join(repoPath, \"dir1\", \"dir2\", \"dir3\")\n\t\tif err := os.MkdirAll(nestedPath, 0755); err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\n\t\t// Test finding root from nested directory\n\t\tfound, err := FindRepoRoot(nestedPath)\n\t\tif err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\t\tif found != repoPath {\n\t\t\tt.Errorf(\"Expected root %s, got %s\", repoPath, found)\n\t\t}\n\n\t\t// Test finding root from repository root\n\t\tfound, err = FindRepoRoot(repoPath)\n\t\tif err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\t\tif found != repoPath 
{\n\t\t\tt.Errorf(\"Expected root %s, got %s\", repoPath, found)\n\t\t}\n\n\t\t// Test finding root from non-repository directory\n\t\tnonRepoPath := filepath.Join(tmpDir, \"non-repo\")\n\t\tif err := os.MkdirAll(nonRepoPath, 0755); err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\n\t\t_, err = FindRepoRoot(nonRepoPath)\n\t\tif err == nil {\n\t\t\tt.Error(\"Expected error when finding root in non-repository\")\n\t\t}\n\t})\n\n\tt.Run(\"Multiple Init Prevention\", func(t *testing.T) {\n\t\trepoPath := filepath.Join(tmpDir, \"multi-init-test\")\n\t\t\n\t\t// First init should succeed\n\t\tif err := InitRepo(repoPath); err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\n\t\t// Second init should fail\n\t\tif err := InitRepo(repoPath); err == nil {\n\t\t\tt.Error(\"Expected error on second init\")\n\t\t}\n\t})\n\n\tt.Run(\"Init with Existing Files\", func(t *testing.T) {\n\t\trepoPath := filepath.Join(tmpDir, \"existing-files-test\")\n\t\t\n\t\t// Create some existing files\n\t\tif err := os.MkdirAll(repoPath, 0755); err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\t\tif err := os.WriteFile(filepath.Join(repoPath, \"test.txt\"), []byte(\"test\"), 0644); err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\n\t\t// Init should succeed with existing files\n\t\tif err := InitRepo(repoPath); err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\n\t\t// Verify existing files are untouched\n\t\tif _, err := os.Stat(filepath.Join(repoPath, \"test.txt\")); os.IsNotExist(err) {\n\t\t\tt.Error(\"Existing file was removed during init\")\n\t\t}\n\t})\n\n\tt.Run(\"Init Permission Handling\", func(t *testing.T) {\n\t\trepoPath := filepath.Join(tmpDir, \"permission-test\")\n\t\t\n\t\t// Create directory with restricted permissions\n\t\tif err := os.MkdirAll(repoPath, 0444); err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\n\t\t// Init should fail with insufficient permissions\n\t\terr := InitRepo(repoPath)\n\t\tif err == nil {\n\t\t\tt.Error(\"Expected error with insufficient permissions\")\n\t\t}\n\n\t\t// Reset permissions\n\t\tif err 
:= os.Chmod(repoPath, 0755); err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\n\t\t// Init should now succeed\n\t\tif err := InitRepo(repoPath); err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\t})\n\n\tt.Run(\"Service Initialization\", func(t *testing.T) {\n\t\trepoPath := filepath.Join(tmpDir, \"service-test\")\n\t\t\n\t\tif err := InitRepo(repoPath); err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\n\t\t// Verify services are running by checking their directories\n\t\tservices := []string{\n\t\t\t\".evo/chunks\",  // LFS chunks directory\n\t\t\t\".evo/lfs\",     // LFS metadata directory\n\t\t}\n\n\t\tfor _, dir := range services {\n\t\t\tpath := filepath.Join(repoPath, dir)\n\t\t\tif _, err := os.Stat(path); os.IsNotExist(err) {\n\t\t\t\tt.Errorf(\"Service directory %s not created\", dir)\n\t\t\t}\n\t\t}\n\t})\n}\n"
  },
  {
    "path": "internal/signing/signing.go",
    "content": "package signing\n\nimport (\n\t\"crypto/ed25519\"\n\t\"crypto/rand\"\n\t\"encoding/hex\"\n\t\"evo/internal/config\"\n\t\"evo/internal/types\"\n\t\"fmt\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"time\"\n)\n\ntype KeyPair struct {\n\tPrivateKey ed25519.PrivateKey\n\tPublicKey  ed25519.PublicKey\n\tCreated    time.Time\n}\n\n// GenerateKeyPair creates a new Ed25519 key pair and stores it\nfunc GenerateKeyPair(repoPath string) error {\n\tpub, priv, err := ed25519.GenerateKey(rand.Reader)\n\tif err != nil {\n\t\treturn fmt.Errorf(\"failed to generate key pair: %w\", err)\n\t}\n\n\tkeyPath, err := getKeyPath(repoPath)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t// Ensure directory exists\n\tif err := os.MkdirAll(filepath.Dir(keyPath), 0700); err != nil {\n\t\treturn fmt.Errorf(\"failed to create key directory: %w\", err)\n\t}\n\n\t// Write private key\n\tif err := os.WriteFile(keyPath, priv, 0600); err != nil {\n\t\treturn fmt.Errorf(\"failed to write private key: %w\", err)\n\t}\n\n\t// Write public key\n\tpubFile := keyPath + \".pub\"\n\tif err := os.WriteFile(pubFile, pub, 0644); err != nil {\n\t\treturn fmt.Errorf(\"failed to write public key: %w\", err)\n\t}\n\n\tfmt.Printf(\"Generated new Ed25519 key pair:\\n\")\n\tfmt.Printf(\"Private key: %s\\n\", keyPath)\n\tfmt.Printf(\"Public key: %s\\n\", pubFile)\n\treturn nil\n}\n\n// LoadKeyPair loads an existing key pair from disk\nfunc LoadKeyPair(repoPath string) (*KeyPair, error) {\n\tkeyPath, err := getKeyPath(repoPath)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\n\tpriv, err := os.ReadFile(keyPath)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"failed to read private key: %w\", err)\n\t}\n\n\t// Validate private key\n\tvar pk ed25519.PrivateKey\n\tif len(priv) == ed25519.SeedSize {\n\t\tpk = ed25519.NewKeyFromSeed(priv)\n\t} else if len(priv) == ed25519.PrivateKeySize {\n\t\tpk = priv\n\t} else {\n\t\treturn nil, fmt.Errorf(\"invalid Ed25519 key length: %d\", len(priv))\n\t}\n\n\t// Load public 
key\n\tpub, err := os.ReadFile(keyPath + \".pub\")\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"failed to read public key: %w\", err)\n\t}\n\n\tif len(pub) != ed25519.PublicKeySize {\n\t\treturn nil, fmt.Errorf(\"invalid public key length: %d\", len(pub))\n\t}\n\n\treturn &KeyPair{\n\t\tPrivateKey: pk,\n\t\tPublicKey:  pub,\n\t\tCreated:    getFileCreationTime(keyPath),\n\t}, nil\n}\n\n// SignCommit signs a commit using the configured key\nfunc SignCommit(c *types.Commit, repoPath string) (string, error) {\n\tkp, err := LoadKeyPair(repoPath)\n\tif err != nil {\n\t\treturn \"\", fmt.Errorf(\"failed to load signing key: %w\", err)\n\t}\n\n\tmsg := types.CommitHashString(c)\n\tsig := ed25519.Sign(kp.PrivateKey, []byte(msg))\n\treturn hex.EncodeToString(sig), nil\n}\n\n// VerifyCommit verifies a commit's signature\nfunc VerifyCommit(c *types.Commit, repoPath string) (bool, error) {\n\tif c.Signature == \"\" {\n\t\treturn false, fmt.Errorf(\"commit has no signature\")\n\t}\n\n\tkp, err := LoadKeyPair(repoPath)\n\tif err != nil {\n\t\treturn false, fmt.Errorf(\"failed to load public key: %w\", err)\n\t}\n\n\tsigBytes, err := hex.DecodeString(c.Signature)\n\tif err != nil {\n\t\treturn false, fmt.Errorf(\"invalid signature format: %w\", err)\n\t}\n\n\tmsg := types.CommitHashString(c)\n\tif !ed25519.Verify(kp.PublicKey, []byte(msg), sigBytes) {\n\t\treturn false, fmt.Errorf(\"signature verification failed\")\n\t}\n\n\treturn true, nil\n}\n\nfunc getKeyPath(repoPath string) (string, error) {\n\tkeyPath, err := config.GetConfigValue(repoPath, \"signing.keyPath\")\n\tif err != nil {\n\t\treturn \"\", fmt.Errorf(\"failed to get key path from config: %w\", err)\n\t}\n\tif keyPath == \"\" {\n\t\thome, err := os.UserHomeDir()\n\t\tif err != nil {\n\t\t\treturn \"\", fmt.Errorf(\"failed to get user home directory: %w\", err)\n\t\t}\n\t\tkeyPath = filepath.Join(home, \".config\", \"evo\", \"signing_key\")\n\t}\n\treturn keyPath, nil\n}\n\nfunc getFileCreationTime(path string) 
time.Time {\n\t// Best effort: most platforms don't expose a true creation time via\n\t// os.Stat, so the file's modification time is used as an approximation.\n\tinfo, err := os.Stat(path)\n\tif err != nil {\n\t\treturn time.Time{}\n\t}\n\treturn info.ModTime()\n}\n"
  },
  {
    "path": "internal/signing/signing_test.go",
    "content": "package signing\n\nimport (\n\t\"evo/internal/config\"\n\t\"evo/internal/types\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"testing\"\n)\n\nfunc TestSigningKeyPair(t *testing.T) {\n\t// Create temp directory for test\n\ttmpDir := t.TempDir()\n\tkeyPath := filepath.Join(tmpDir, \"signing_key\")\n\n\t// Set up config for test\n\terr := config.SetConfigValue(tmpDir, \"signing.keyPath\", keyPath)\n\tif err != nil {\n\t\tt.Fatalf(\"Failed to set config value: %v\", err)\n\t}\n\n\tt.Run(\"Generate_Key_Pair\", func(t *testing.T) {\n\t\terr := GenerateKeyPair(tmpDir)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Failed to generate key pair: %v\", err)\n\t\t}\n\n\t\t// Check that key files exist\n\t\tif _, err := os.Stat(keyPath); err != nil {\n\t\t\tt.Errorf(\"Private key file not found: %v\", err)\n\t\t}\n\t\tif _, err := os.Stat(keyPath + \".pub\"); err != nil {\n\t\t\tt.Errorf(\"Public key file not found: %v\", err)\n\t\t}\n\t})\n\n\tt.Run(\"Load_Key_Pair\", func(t *testing.T) {\n\t\tkp, err := LoadKeyPair(tmpDir)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Failed to load key pair: %v\", err)\n\t\t}\n\n\t\tif kp.PrivateKey == nil {\n\t\t\tt.Error(\"Private key is nil\")\n\t\t}\n\t\tif kp.PublicKey == nil {\n\t\t\tt.Error(\"Public key is nil\")\n\t\t}\n\t\tif kp.Created.IsZero() {\n\t\t\tt.Error(\"Creation time not set\")\n\t\t}\n\t})\n\n\tt.Run(\"Sign_and_Verify_Commit\", func(t *testing.T) {\n\t\tcommit := &types.Commit{\n\t\t\tMessage: \"Test commit\",\n\t\t}\n\n\t\tsig, err := SignCommit(commit, tmpDir)\n\t\tif err != nil {\n\t\t\tt.Fatalf(\"Failed to sign commit: %v\", err)\n\t\t}\n\t\tif sig == \"\" {\n\t\t\tt.Error(\"Empty signature returned\")\n\t\t}\n\n\t\tcommit.Signature = sig\n\t\tvalid, err := VerifyCommit(commit, tmpDir)\n\t\tif err != nil {\n\t\t\tt.Errorf(\"Failed to verify commit: %v\", err)\n\t\t}\n\t\tif !valid {\n\t\t\tt.Error(\"Signature verification failed\")\n\t\t}\n\t})\n\n\tt.Run(\"Invalid_Signature\", func(t *testing.T) {\n\t\tcommit := 
&types.Commit{\n\t\t\tMessage:   \"Test commit\",\n\t\t\tSignature: \"invalid\",\n\t\t}\n\n\t\tvalid, err := VerifyCommit(commit, tmpDir)\n\t\tif err == nil {\n\t\t\tt.Error(\"Expected error for invalid signature\")\n\t\t}\n\t\tif valid {\n\t\t\tt.Error(\"Invalid signature reported as valid\")\n\t\t}\n\t})\n\n\tt.Run(\"Missing_Signature\", func(t *testing.T) {\n\t\tcommit := &types.Commit{\n\t\t\tMessage: \"Test commit\",\n\t\t}\n\n\t\tvalid, err := VerifyCommit(commit, tmpDir)\n\t\tif err == nil {\n\t\t\tt.Error(\"Expected error for missing signature\")\n\t\t}\n\t\tif valid {\n\t\t\tt.Error(\"Missing signature reported as valid\")\n\t\t}\n\t})\n}\n"
  },
  {
    "path": "internal/status/status.go",
    "content": "package status\n\nimport (\n\t\"bufio\"\n\t\"evo/internal/ignore\"\n\t\"evo/internal/streams\"\n\t\"fmt\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"sort\"\n\t\"strings\"\n)\n\ntype FileStatus struct {\n\tPath    string\n\tStatus  string // \"modified\", \"new\", \"deleted\", \"renamed\"\n\tOldPath string // only set for renamed files\n}\n\ntype RepoStatus struct {\n\tCurrentStream string\n\tFiles         []FileStatus\n}\n\n// loadIndex loads the index file directly to avoid dependency cycles\nfunc loadIndex(repoPath string) (map[string]string, error) {\n\tindexPath := filepath.Join(repoPath, \".evo\", \"index\")\n\tfile, err := os.Open(indexPath)\n\tif os.IsNotExist(err) {\n\t\treturn make(map[string]string), nil\n\t}\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tdefer file.Close()\n\n\tidx := make(map[string]string)\n\tscanner := bufio.NewScanner(file)\n\tfor scanner.Scan() {\n\t\tparts := strings.Split(scanner.Text(), \":\")\n\t\tif len(parts) == 2 {\n\t\t\tidx[parts[0]] = parts[1]\n\t\t}\n\t}\n\treturn idx, scanner.Err()\n}\n\nfunc GetStatus(repoPath string) (*RepoStatus, error) {\n\t// Get current stream\n\tstream, err := streams.CurrentStream(repoPath)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"failed to get current stream: %w\", err)\n\t}\n\n\t// Verify stream exists\n\tstreamPath := filepath.Join(repoPath, \".evo\", \"streams\", stream)\n\tif _, err := os.Stat(streamPath); os.IsNotExist(err) {\n\t\treturn nil, fmt.Errorf(\"stream %s does not exist\", stream)\n\t}\n\n\t// Load ignore patterns\n\tignoreList, err := ignore.LoadIgnoreFile(repoPath)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"failed to load ignore file: %w\", err)\n\t}\n\n\t// Get current index state\n\tidx, err := loadIndex(repoPath)\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"failed to load index: %w\", err)\n\t}\n\n\tstatus := &RepoStatus{\n\t\tCurrentStream: stream,\n\t}\n\n\t// Track processed files and their content hashes\n\tprocessedFiles := 
make(map[string]string) // path -> file contents\n\trenamedOld := make(map[string]bool) // old paths already reported as renames\n\n\t// Walk the repository to find new and modified files\n\terr = filepath.Walk(repoPath, func(path string, info os.FileInfo, err error) error {\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\t// Get relative path\n\t\trelPath, err := filepath.Rel(repoPath, path)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\t// Skip the .evo directory\n\t\tif strings.HasPrefix(relPath, \".evo\") {\n\t\t\tif info.IsDir() {\n\t\t\t\treturn filepath.SkipDir\n\t\t\t}\n\t\t\treturn nil\n\t\t}\n\n\t\t// Skip directories\n\t\tif info.IsDir() {\n\t\t\treturn nil\n\t\t}\n\n\t\t// Skip ignored files\n\t\tif ignoreList.IsIgnored(relPath) {\n\t\t\treturn nil\n\t\t}\n\n\t\t// Read current file content\n\t\tcurrentContent, err := os.ReadFile(path)\n\t\tif err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\t// Store content for rename detection\n\t\tprocessedFiles[relPath] = string(currentContent)\n\n\t\t// Check if file is in index\n\t\tfileID, exists := idx[relPath]\n\t\tif !exists {\n\t\t\t// Check if this might be a renamed file\n\t\t\tvar foundRename bool\n\t\t\tfor oldPath, oldID := range idx {\n\t\t\t\tif oldPath == relPath {\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\t// A rename requires the old path to be gone from the working tree\n\t\t\t\tif _, statErr := os.Stat(filepath.Join(repoPath, oldPath)); statErr == nil {\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\tstoredContent, err := os.ReadFile(filepath.Join(repoPath, \".evo\", \"objects\", oldID))\n\t\t\t\tif err == nil && string(currentContent) == string(storedContent) {\n\t\t\t\t\t// Found a rename; remember the old path so the deleted-files\n\t\t\t\t\t// pass below does not report it a second time\n\t\t\t\t\tstatus.Files = append(status.Files, FileStatus{\n\t\t\t\t\t\tPath:    relPath,\n\t\t\t\t\t\tStatus:  \"renamed\",\n\t\t\t\t\t\tOldPath: oldPath,\n\t\t\t\t\t})\n\t\t\t\t\trenamedOld[oldPath] = true\n\t\t\t\t\tfoundRename = true\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t\tif !foundRename {\n\t\t\t\t// New file\n\t\t\t\tstatus.Files = append(status.Files, FileStatus{\n\t\t\t\t\tPath:   relPath,\n\t\t\t\t\tStatus: \"new\",\n\t\t\t\t})\n\t\t\t}\n\t\t\treturn nil\n\t\t}\n\n\t\t// Check if file has been modified\n\t\tstoredContent, err := os.ReadFile(filepath.Join(repoPath, \".evo\", \"objects\", fileID))\n\t\tif err != nil || string(currentContent) != string(storedContent) {\n\t\t\tstatus.Files = append(status.Files, FileStatus{\n\t\t\t\tPath:   relPath,\n\t\t\t\tStatus: \"modified\",\n\t\t\t})\n\t\t}\n\n\t\treturn nil\n\t})\n\n\tif err != nil {\n\t\treturn nil, fmt.Errorf(\"failed to walk repository: %w\", err)\n\t}\n\n\t// Check for deleted files\n\tfor path, id := range idx {\n\t\t// Skip if file was already processed\n\t\tif _, exists := processedFiles[path]; exists {\n\t\t\tcontinue\n\t\t}\n\n\t\t// Skip old paths already reported as renames during the walk\n\t\tif renamedOld[path] {\n\t\t\tcontinue\n\t\t}\n\n\t\t// Check if file was renamed by looking for matching content\n\t\tstoredContent, readErr := os.ReadFile(filepath.Join(repoPath, \".evo\", \"objects\", id))\n\t\tvar renamed bool\n\t\tif readErr == nil {\n\t\t\tfor newPath, content := range processedFiles {\n\t\t\t\t// Only untracked paths can be rename targets\n\t\t\t\tif _, tracked := idx[newPath]; tracked {\n\t\t\t\t\tcontinue\n\t\t\t\t}\n\t\t\t\tif content == string(storedContent) {\n\t\t\t\t\t// Found a rename\n\t\t\t\t\tstatus.Files = append(status.Files, FileStatus{\n\t\t\t\t\t\tPath:    newPath,\n\t\t\t\t\t\tStatus:  \"renamed\",\n\t\t\t\t\t\tOldPath: path,\n\t\t\t\t\t})\n\t\t\t\t\trenamed = true\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tif !renamed {\n\t\t\tstatus.Files = append(status.Files, FileStatus{\n\t\t\t\tPath:   path,\n\t\t\t\tStatus: \"deleted\",\n\t\t\t})\n\t\t}\n\t}\n\n\t// Sort files by status and path\n\tsort.Slice(status.Files, func(i, j int) bool {\n\t\tif status.Files[i].Status != status.Files[j].Status {\n\t\t\treturn status.Files[i].Status < status.Files[j].Status\n\t\t}\n\t\treturn status.Files[i].Path < status.Files[j].Path\n\t})\n\n\treturn status, nil\n}\n\n// FormatStatus returns a formatted string representation of the repository status\nfunc FormatStatus(status *RepoStatus) string {\n\tvar sb strings.Builder\n\n\tsb.WriteString(fmt.Sprintf(\"On stream %s\\n\\n\", status.CurrentStream))\n\n\tif len(status.Files) == 0 {\n\t\tsb.WriteString(\"nothing to commit, working tree clean\\n\")\n\t\treturn sb.String()\n\t}\n\n\t// Group files by status (\"untracked\" avoids shadowing Go's built-in new)\n\tvar modified, untracked, deleted, renamed []FileStatus\n\tfor _, f := range status.Files {\n\t\tswitch f.Status {\n\t\tcase \"modified\":\n\t\t\tmodified = append(modified, f)\n\t\tcase \"new\":\n\t\t\tuntracked = append(untracked, f)\n\t\tcase \"deleted\":\n\t\t\tdeleted = append(deleted, f)\n\t\tcase \"renamed\":\n\t\t\trenamed = append(renamed, f)\n\t\t}\n\t}\n\n\tif len(modified) > 0 {\n\t\tsb.WriteString(\"Changes not staged for commit:\\n\")\n\t\tfor _, f := range modified {\n\t\t\tsb.WriteString(fmt.Sprintf(\"  modified: %s\\n\", f.Path))\n\t\t}\n\t\tsb.WriteString(\"\\n\")\n\t}\n\n\tif len(untracked) > 0 {\n\t\tsb.WriteString(\"Untracked files:\\n\")\n\t\tfor _, f := range untracked {\n\t\t\tsb.WriteString(fmt.Sprintf(\"  %s\\n\", f.Path))\n\t\t}\n\t\tsb.WriteString(\"\\n\")\n\t}\n\n\tif len(deleted) > 0 {\n\t\tsb.WriteString(\"Deleted files:\\n\")\n\t\tfor _, f := range deleted {\n\t\t\tsb.WriteString(fmt.Sprintf(\"  %s\\n\", f.Path))\n\t\t}\n\t\tsb.WriteString(\"\\n\")\n\t}\n\n\tif len(renamed) > 0 {\n\t\tsb.WriteString(\"Renamed files:\\n\")\n\t\tfor _, f := range renamed {\n\t\t\tsb.WriteString(fmt.Sprintf(\"  %s -> %s\\n\", f.OldPath, f.Path))\n\t\t}\n\t\tsb.WriteString(\"\\n\")\n\t}\n\n\treturn sb.String()\n}\n"
  },
  {
    "path": "internal/status/status_test.go",
    "content": "package status\n\nimport (\n\t\"os\"\n\t\"path/filepath\"\n\t\"strings\"\n\t\"testing\"\n)\n\nfunc setupTestRepo(t *testing.T) string {\n\t// Create a temporary directory for the test repository\n\ttmpDir, err := os.MkdirTemp(\"\", \"evo-status-test\")\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\t// Create .evo directory structure\n\tevoDir := filepath.Join(tmpDir, \".evo\")\n\tfor _, dir := range []string{\n\t\t\"objects\",\n\t\t\"streams\",\n\t\t\"commits\",\n\t} {\n\t\tif err := os.MkdirAll(filepath.Join(evoDir, dir), 0755); err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\t}\n\n\t// Create main stream\n\tif err := os.WriteFile(filepath.Join(evoDir, \"streams\", \"main\"), []byte{}, 0644); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\t// Set current stream\n\tif err := os.WriteFile(filepath.Join(evoDir, \"HEAD\"), []byte(\"main\"), 0644); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\treturn tmpDir\n}\n\nfunc TestGetStatus(t *testing.T) {\n\trepoPath := setupTestRepo(t)\n\tdefer os.RemoveAll(repoPath)\n\n\t// Create .evo-ignore file first\n\tignoreContent := `\n# Test ignore file\n*.log\nbuild/\n**/*.tmp\n`\n\tif err := os.WriteFile(filepath.Join(repoPath, \".evo-ignore\"), []byte(ignoreContent), 0644); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\t// Create some test files\n\tfiles := map[string]string{\n\t\t\"file1.txt\":     \"content1\",\n\t\t\"file2.txt\":     \"content2\",\n\t\t\"dir/file3.txt\": \"content3\",\n\t}\n\n\tfor path, content := range files {\n\t\tfullPath := filepath.Join(repoPath, path)\n\t\tif err := os.MkdirAll(filepath.Dir(fullPath), 0755); err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\t\tif err := os.WriteFile(fullPath, []byte(content), 0644); err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\t}\n\n\t// Create some files that should be ignored\n\tignoredFiles := map[string]string{\n\t\t\"test.log\":         \"log content\",\n\t\t\"build/output.txt\": \"build output\",\n\t\t\"temp.tmp\":         \"temporary file\",\n\t}\n\n\tfor path, content := range 
ignoredFiles {\n\t\tfullPath := filepath.Join(repoPath, path)\n\t\tif err := os.MkdirAll(filepath.Dir(fullPath), 0755); err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\t\tif err := os.WriteFile(fullPath, []byte(content), 0644); err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\t}\n\n\t// Get initial status (before index exists)\n\tstatus, err := GetStatus(repoPath)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\t// Verify all non-ignored files are marked as new\n\tnewFiles := make(map[string]bool)\n\tfor _, f := range status.Files {\n\t\tnewFiles[f.Path] = true\n\t\tif f.Status != \"new\" {\n\t\t\tt.Errorf(\"Expected file %s to be new, got %s\", f.Path, f.Status)\n\t\t}\n\t\t// Verify no ignored files are included\n\t\tfor ignoredPath := range ignoredFiles {\n\t\t\tif f.Path == ignoredPath {\n\t\t\t\tt.Errorf(\"Found ignored file in status: %s\", f.Path)\n\t\t\t}\n\t\t}\n\t}\n\n\t// Check that we found all expected files\n\tfor path := range files {\n\t\tif !newFiles[path] {\n\t\t\tt.Errorf(\"Expected to find %s in status, but it was missing\", path)\n\t\t}\n\t}\n\n\t// Create object files first\n\tobjects := map[string]string{\n\t\t\"id1\": \"content1\",\n\t\t\"id2\": \"content2\",\n\t}\n\n\tfor id, content := range objects {\n\t\tobjPath := filepath.Join(repoPath, \".evo\", \"objects\", id)\n\t\tif err := os.WriteFile(objPath, []byte(content), 0644); err != nil {\n\t\t\tt.Fatal(err)\n\t\t}\n\t}\n\n\t// Create index file after objects\n\tindexContent := map[string]string{\n\t\t\"file1.txt\": \"id1\",\n\t\t\"file2.txt\": \"id2\",\n\t}\n\n\tvar indexLines []string\n\tfor path, id := range indexContent {\n\t\tindexLines = append(indexLines, path+\":\"+id)\n\t}\n\tif err := os.WriteFile(filepath.Join(repoPath, \".evo\", \"index\"), []byte(strings.Join(indexLines, \"\\n\")+\"\\n\"), 0644); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\t// Modify file2.txt\n\tif err := os.WriteFile(filepath.Join(repoPath, \"file2.txt\"), []byte(\"modified content\"), 0644); err != nil 
{\n\t\tt.Fatal(err)\n\t}\n\n\t// Get status again\n\tstatus, err = GetStatus(repoPath)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\t// Verify status\n\texpectedStatuses := map[string]string{\n\t\t\"file2.txt\":     \"modified\",\n\t\t\"dir/file3.txt\": \"new\",\n\t}\n\n\tfoundFiles := make(map[string]bool)\n\tfor _, f := range status.Files {\n\t\tfoundFiles[f.Path] = true\n\t\texpectedStatus, exists := expectedStatuses[f.Path]\n\t\tif !exists {\n\t\t\tt.Errorf(\"Unexpected file in status: %s\", f.Path)\n\t\t\tcontinue\n\t\t}\n\t\tif f.Status != expectedStatus {\n\t\t\tt.Errorf(\"Expected file %s to be %s, got %s\", f.Path, expectedStatus, f.Status)\n\t\t}\n\t}\n\n\t// Check that we found all expected files\n\tfor path := range expectedStatuses {\n\t\tif !foundFiles[path] {\n\t\t\tt.Errorf(\"Expected to find %s in status, but it was missing\", path)\n\t\t}\n\t}\n\n\t// Test rename detection\n\tif err := os.Rename(filepath.Join(repoPath, \"file1.txt\"), filepath.Join(repoPath, \"file1_renamed.txt\")); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tstatus, err = GetStatus(repoPath)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tfoundRename := false\n\tfor _, f := range status.Files {\n\t\tif f.Status == \"renamed\" && f.Path == \"file1_renamed.txt\" && f.OldPath == \"file1.txt\" {\n\t\t\tfoundRename = true\n\t\t\tbreak\n\t\t}\n\t}\n\tif !foundRename {\n\t\tt.Error(\"Failed to detect renamed file\")\n\t}\n\n\t// Test deletion detection\n\tif err := os.Remove(filepath.Join(repoPath, \"file2.txt\")); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tstatus, err = GetStatus(repoPath)\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tfoundDelete := false\n\tfor _, f := range status.Files {\n\t\tif f.Status == \"deleted\" && f.Path == \"file2.txt\" {\n\t\t\tfoundDelete = true\n\t\t\tbreak\n\t\t}\n\t}\n\tif !foundDelete {\n\t\tt.Error(\"Failed to detect deleted file\")\n\t}\n}\n\nfunc TestGetStatusErrors(t *testing.T) {\n\t// Test with non-existent directory\n\t_, err := 
GetStatus(\"/nonexistent/path\")\n\tif err == nil {\n\t\tt.Error(\"Expected error when repository path doesn't exist\")\n\t}\n\n\t// Test with invalid repository (no .evo directory)\n\ttmpDir, err := os.MkdirTemp(\"\", \"invalid-repo\")\n\tif err != nil {\n\t\tt.Fatal(err)\n\t}\n\tdefer os.RemoveAll(tmpDir)\n\n\t_, err = GetStatus(tmpDir)\n\tif err == nil {\n\t\tt.Error(\"Expected error when .evo directory doesn't exist\")\n\t}\n\n\t// Test with invalid HEAD file\n\trepoPath := setupTestRepo(t)\n\tdefer os.RemoveAll(repoPath)\n\n\tif err := os.WriteFile(filepath.Join(repoPath, \".evo\", \"HEAD\"), []byte(\"invalid-stream\\n\"), 0644); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\t_, err = GetStatus(repoPath)\n\tif err == nil {\n\t\tt.Error(\"Expected error when HEAD points to non-existent stream\")\n\t}\n}\n\nfunc TestFormatStatus(t *testing.T) {\n\ttests := []struct {\n\t\tname     string\n\t\tstatus   *RepoStatus\n\t\tcontains []string\n\t\texcludes []string\n\t}{\n\t\t{\n\t\t\tname: \"Empty status\",\n\t\t\tstatus: &RepoStatus{\n\t\t\t\tCurrentStream: \"main\",\n\t\t\t\tFiles:         []FileStatus{},\n\t\t\t},\n\t\t\tcontains: []string{\n\t\t\t\t\"On stream main\",\n\t\t\t\t\"nothing to commit, working tree clean\",\n\t\t\t},\n\t\t\texcludes: []string{\n\t\t\t\t\"Changes not staged\",\n\t\t\t\t\"Untracked files\",\n\t\t\t\t\"Deleted files\",\n\t\t\t\t\"Renamed files\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"Modified files only\",\n\t\t\tstatus: &RepoStatus{\n\t\t\t\tCurrentStream: \"main\",\n\t\t\t\tFiles: []FileStatus{\n\t\t\t\t\t{Path: \"file1.txt\", Status: \"modified\"},\n\t\t\t\t\t{Path: \"dir/file2.txt\", Status: \"modified\"},\n\t\t\t\t},\n\t\t\t},\n\t\t\tcontains: []string{\n\t\t\t\t\"On stream main\",\n\t\t\t\t\"Changes not staged for commit:\",\n\t\t\t\t\"modified: file1.txt\",\n\t\t\t\t\"modified: dir/file2.txt\",\n\t\t\t},\n\t\t\texcludes: []string{\n\t\t\t\t\"nothing to commit\",\n\t\t\t\t\"Untracked files\",\n\t\t\t\t\"Deleted 
files\",\n\t\t\t\t\"Renamed files\",\n\t\t\t},\n\t\t},\n\t\t{\n\t\t\tname: \"All status types\",\n\t\t\tstatus: &RepoStatus{\n\t\t\t\tCurrentStream: \"feature\",\n\t\t\t\tFiles: []FileStatus{\n\t\t\t\t\t{Path: \"file1.txt\", Status: \"modified\"},\n\t\t\t\t\t{Path: \"file2.txt\", Status: \"new\"},\n\t\t\t\t\t{Path: \"file3.txt\", Status: \"deleted\"},\n\t\t\t\t\t{Path: \"new.txt\", Status: \"renamed\", OldPath: \"old.txt\"},\n\t\t\t\t},\n\t\t\t},\n\t\t\tcontains: []string{\n\t\t\t\t\"On stream feature\",\n\t\t\t\t\"Changes not staged for commit:\",\n\t\t\t\t\"modified: file1.txt\",\n\t\t\t\t\"Untracked files:\",\n\t\t\t\t\"file2.txt\",\n\t\t\t\t\"Deleted files:\",\n\t\t\t\t\"file3.txt\",\n\t\t\t\t\"Renamed files:\",\n\t\t\t\t\"old.txt -> new.txt\",\n\t\t\t},\n\t\t\texcludes: []string{\n\t\t\t\t\"nothing to commit\",\n\t\t\t},\n\t\t},\n\t}\n\n\tfor _, tt := range tests {\n\t\tt.Run(tt.name, func(t *testing.T) {\n\t\t\toutput := FormatStatus(tt.status)\n\n\t\t\tfor _, s := range tt.contains {\n\t\t\t\tif !strings.Contains(output, s) {\n\t\t\t\t\tt.Errorf(\"Expected output to contain %q\", s)\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tfor _, s := range tt.excludes {\n\t\t\t\tif strings.Contains(output, s) {\n\t\t\t\t\tt.Errorf(\"Expected output to not contain %q\", s)\n\t\t\t\t}\n\t\t\t}\n\t\t})\n\t}\n}\n\nfunc TestLoadIndex(t *testing.T) {\n\trepoPath := setupTestRepo(t)\n\tdefer os.RemoveAll(repoPath)\n\n\t// Test loading non-existent index\n\tidx, err := loadIndex(repoPath)\n\tif err != nil {\n\t\tt.Errorf(\"Expected no error when index doesn't exist, got %v\", err)\n\t}\n\tif len(idx) != 0 {\n\t\tt.Errorf(\"Expected empty index, got %v\", idx)\n\t}\n\n\t// Test loading valid index\n\tindexContent := \"file1.txt:id1\\nfile2.txt:id2\\n\"\n\tif err := os.WriteFile(filepath.Join(repoPath, \".evo\", \"index\"), []byte(indexContent), 0644); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tidx, err = loadIndex(repoPath)\n\tif err != nil {\n\t\tt.Errorf(\"Failed to load index: %v\", 
err)\n\t}\n\n\texpected := map[string]string{\n\t\t\"file1.txt\": \"id1\",\n\t\t\"file2.txt\": \"id2\",\n\t}\n\n\tif len(idx) != len(expected) {\n\t\tt.Errorf(\"Expected %d entries, got %d\", len(expected), len(idx))\n\t}\n\n\tfor path, id := range expected {\n\t\tif idx[path] != id {\n\t\t\tt.Errorf(\"Expected %s -> %s, got %s -> %s\", path, id, path, idx[path])\n\t\t}\n\t}\n\n\t// Test loading malformed index\n\tmalformedContent := \"file1.txt:id1\\nmalformed-line\\nfile2.txt:id2\\n\"\n\tif err := os.WriteFile(filepath.Join(repoPath, \".evo\", \"index\"), []byte(malformedContent), 0644); err != nil {\n\t\tt.Fatal(err)\n\t}\n\n\tidx, err = loadIndex(repoPath)\n\tif err != nil {\n\t\tt.Errorf(\"Failed to load index with malformed line: %v\", err)\n\t}\n\n\tif len(idx) != 2 {\n\t\tt.Errorf(\"Expected 2 valid entries, got %d\", len(idx))\n\t}\n}\n"
  },
  {
    "path": "internal/streams/partial.go",
    "content": "package streams\n\nimport (\n\t\"evo/internal/commits\"\n\t\"evo/internal/crdt\"\n\t\"evo/internal/repo\"\n\t\"evo/internal/types\"\n\t\"fmt\"\n\t\"path/filepath\"\n)\n\n// MergeFilter defines criteria for selecting operations during a partial merge\ntype MergeFilter struct {\n\tFileIDs []string      // Only merge operations for these files\n\tOpTypes []crdt.OpType // Only merge these operation types\n}\n\n// PartialMerge merges selected operations from source to target stream based on filter criteria\nfunc PartialMerge(repoPath, source, target string, filter MergeFilter) error {\n\tsrcCommits, err := ListCommits(repoPath, source)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\ttgtCommits, err := ListCommits(repoPath, target)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\t// Build map of target commits for quick lookup\n\ttgtMap := make(map[string]bool)\n\tfor _, c := range tgtCommits {\n\t\ttgtMap[c.ID] = true\n\t}\n\n\t// For empty filter, merge all operations into a single commit\n\tif len(filter.FileIDs) == 0 && len(filter.OpTypes) == 0 {\n\t\tvar allOps []commits.ExtendedOp\n\t\tvar lastCommit *types.Commit\n\n\t\tfor _, sc := range srcCommits {\n\t\t\tlastCommit = &sc\n\t\t\tfor _, op := range sc.Operations {\n\t\t\t\tnewOp := op\n\t\t\t\tnewOp.Op.Stream = target\n\t\t\t\tallOps = append(allOps, newOp)\n\t\t\t}\n\t\t}\n\n\t\tif len(allOps) > 0 && lastCommit != nil {\n\t\t\t// Create single commit with all operations\n\t\t\tnewCommit := types.Commit{\n\t\t\t\tID:         lastCommit.ID,\n\t\t\t\tStream:     target,\n\t\t\t\tMessage:    fmt.Sprintf(\"[merge] %s\", lastCommit.Message),\n\t\t\t\tOperations: allOps,\n\t\t\t\tTimestamp:  lastCommit.Timestamp,\n\t\t\t}\n\n\t\t\t// Save the commit\n\t\t\tcommitPath := filepath.Join(repoPath, repo.EvoDir, \"commits\", target)\n\t\t\tif err := commits.SaveCommitFile(commitPath, &newCommit); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\n\t\t\t// Replicate all operations\n\t\t\tif err := replicateOps(repoPath, 
target, allOps); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\n\t\treturn nil\n\t}\n\n\t// Process each source commit for non-empty filters\n\tfor _, sc := range srcCommits {\n\t\t// Filter operations based on criteria\n\t\tvar filteredOps []commits.ExtendedOp\n\t\tfor _, op := range sc.Operations {\n\t\t\tif shouldIncludeOp(op, filter) {\n\t\t\t\tnewOp := op\n\t\t\t\tnewOp.Op.Stream = target\n\t\t\t\tfilteredOps = append(filteredOps, newOp)\n\t\t\t}\n\t\t}\n\n\t\t// Skip if no operations match filter\n\t\tif len(filteredOps) == 0 {\n\t\t\tcontinue\n\t\t}\n\n\t\t// Create new commit with filtered operations\n\t\tnewCommit := types.Commit{\n\t\t\tID:         sc.ID,\n\t\t\tStream:     target,\n\t\t\tMessage:    fmt.Sprintf(\"[merge] %s\", sc.Message),\n\t\t\tOperations: filteredOps,\n\t\t\tTimestamp:  sc.Timestamp,\n\t\t}\n\n\t\t// Save the commit\n\t\tcommitPath := filepath.Join(repoPath, repo.EvoDir, \"commits\", target)\n\t\tif err := commits.SaveCommitFile(commitPath, &newCommit); err != nil {\n\t\t\treturn err\n\t\t}\n\n\t\t// Replicate filtered operations\n\t\tif err := replicateOps(repoPath, target, filteredOps); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// shouldIncludeOp checks if an operation matches the filter criteria\nfunc shouldIncludeOp(op commits.ExtendedOp, filter MergeFilter) bool {\n\t// If no filters specified, include everything\n\tif len(filter.FileIDs) == 0 && len(filter.OpTypes) == 0 {\n\t\treturn true\n\t}\n\n\t// Check file ID filter\n\tif len(filter.FileIDs) > 0 {\n\t\tfileMatch := false\n\t\tfor _, fid := range filter.FileIDs {\n\t\t\tif op.Op.FileID.String() == fid {\n\t\t\t\tfileMatch = true\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tif !fileMatch {\n\t\t\treturn false\n\t\t}\n\t}\n\n\t// Check operation type filter\n\tif len(filter.OpTypes) > 0 {\n\t\ttypeMatch := false\n\t\tfor _, ot := range filter.OpTypes {\n\t\t\tif op.Op.Type == ot {\n\t\t\t\ttypeMatch = true\n\t\t\t\tbreak\n\t\t\t}\n\t\t}\n\t\tif !typeMatch 
{\n\t\t\treturn false\n\t\t}\n\t}\n\n\treturn true\n}\n"
  },
  {
    "path": "internal/streams/partial_test.go",
    "content": "package streams\n\nimport (\n\t\"evo/internal/commits\"\n\t\"evo/internal/crdt\"\n\t\"evo/internal/repo\"\n\t\"evo/internal/types\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/google/uuid\"\n\t\"github.com/stretchr/testify/assert\"\n)\n\nfunc TestPartialMerge(t *testing.T) {\n\t// Create temp repo\n\ttmpDir := t.TempDir()\n\trepoPath := filepath.Join(tmpDir, \"test-repo\")\n\n\t// Initialize repo structure\n\tassert.NoError(t, os.MkdirAll(filepath.Join(repoPath, repo.EvoDir, \"commits\", \"main\"), 0755))\n\tassert.NoError(t, os.MkdirAll(filepath.Join(repoPath, repo.EvoDir, \"ops\", \"main\"), 0755))\n\tassert.NoError(t, CreateStream(repoPath, \"feature\"))\n\tassert.NoError(t, os.MkdirAll(filepath.Join(repoPath, repo.EvoDir, \"commits\", \"feature\"), 0755))\n\tassert.NoError(t, os.MkdirAll(filepath.Join(repoPath, repo.EvoDir, \"ops\", \"feature\"), 0755))\n\n\t// Create test commits with different file IDs and operation types\n\tfile1ID := uuid.New()\n\tfile2ID := uuid.New()\n\ttestCommits := []types.Commit{\n\t\t{\n\t\t\tID:      uuid.New().String(),\n\t\t\tStream:  \"feature\",\n\t\t\tMessage: \"commit 1\",\n\t\t\tOperations: []commits.ExtendedOp{\n\t\t\t\t{\n\t\t\t\t\tOp: crdt.Operation{\n\t\t\t\t\t\tType:      crdt.OpInsert,\n\t\t\t\t\t\tFileID:    file1ID,\n\t\t\t\t\t\tLineID:    uuid.New(),\n\t\t\t\t\t\tContent:   \"file1 line1\",\n\t\t\t\t\t\tStream:    \"feature\",\n\t\t\t\t\t\tTimestamp: time.Now(),\n\t\t\t\t\t\tNodeID:    uuid.New(),\n\t\t\t\t\t\tLamport:   1,\n\t\t\t\t\t\tVector:    []int64{1},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t\t{\n\t\t\t\t\tOp: crdt.Operation{\n\t\t\t\t\t\tType:      crdt.OpDelete,\n\t\t\t\t\t\tFileID:    file2ID,\n\t\t\t\t\t\tLineID:    uuid.New(),\n\t\t\t\t\t\tContent:   \"file2 line1\",\n\t\t\t\t\t\tStream:    \"feature\",\n\t\t\t\t\t\tTimestamp: time.Now(),\n\t\t\t\t\t\tNodeID:    uuid.New(),\n\t\t\t\t\t\tLamport:   2,\n\t\t\t\t\t\tVector:    
[]int64{2},\n\t\t\t\t\t},\n\t\t\t\t},\n\t\t\t},\n\t\t\tTimestamp: time.Now(),\n\t\t},\n\t}\n\n\tfor _, c := range testCommits {\n\t\tassert.NoError(t, commits.SaveCommitFile(filepath.Join(repoPath, repo.EvoDir, \"commits\", \"feature\"), &c))\n\t}\n\n\t// Test partial merge with file ID filter\n\tfileFilter := MergeFilter{\n\t\tFileIDs: []string{file1ID.String()},\n\t}\n\tvar err error\n\tvar mainCommits []types.Commit\n\terr = PartialMerge(repoPath, \"feature\", \"main\", fileFilter)\n\tassert.NoError(t, err)\n\n\t// Verify only file1 operations were merged\n\tmainCommits, err = ListCommits(repoPath, \"main\")\n\tassert.NoError(t, err)\n\tassert.Equal(t, 1, len(mainCommits))\n\tassert.Equal(t, file1ID, mainCommits[0].Operations[0].Op.FileID)\n\tassert.Equal(t, \"file1 line1\", mainCommits[0].Operations[0].Op.Content)\n\n\t// Test partial merge with operation type filter\n\ttypeFilter := MergeFilter{\n\t\tOpTypes: []crdt.OpType{crdt.OpDelete},\n\t}\n\terr = PartialMerge(repoPath, \"feature\", \"main\", typeFilter)\n\tassert.NoError(t, err)\n\n\t// Verify only delete operations were merged\n\tmainCommits, err = ListCommits(repoPath, \"main\")\n\tassert.NoError(t, err)\n\tassert.Equal(t, 1, len(mainCommits))\n\tassert.Equal(t, crdt.OpDelete, mainCommits[0].Operations[0].Op.Type)\n\tassert.Equal(t, file2ID, mainCommits[0].Operations[0].Op.FileID)\n\n\t// Test partial merge with empty filter (should merge all)\n\temptyFilter := MergeFilter{}\n\terr = PartialMerge(repoPath, \"feature\", \"main\", emptyFilter)\n\tassert.NoError(t, err)\n\n\t// Verify all commits were merged\n\tmainCommits, err = ListCommits(repoPath, \"main\")\n\tassert.NoError(t, err)\n\tassert.Equal(t, 1, len(mainCommits))               // Since we're preserving commit IDs, we should have one commit\n\tassert.Equal(t, 2, len(mainCommits[0].Operations)) // But it should contain all operations\n}\n\nfunc TestShouldIncludeOp(t *testing.T) {\n\tfileID := uuid.New()\n\ttestOp := commits.ExtendedOp{\n\t\tOp: 
crdt.Operation{\n\t\t\tType:   crdt.OpInsert,\n\t\t\tFileID: fileID,\n\t\t},\n\t}\n\n\t// Test empty filter\n\tassert.True(t, shouldIncludeOp(testOp, MergeFilter{}))\n\n\t// Test file ID filter match\n\tassert.True(t, shouldIncludeOp(testOp, MergeFilter{FileIDs: []string{fileID.String()}}))\n\n\t// Test file ID filter no match\n\tassert.False(t, shouldIncludeOp(testOp, MergeFilter{FileIDs: []string{uuid.New().String()}}))\n\n\t// Test operation type filter match\n\tassert.True(t, shouldIncludeOp(testOp, MergeFilter{OpTypes: []crdt.OpType{crdt.OpInsert}}))\n\n\t// Test operation type filter no match\n\tassert.False(t, shouldIncludeOp(testOp, MergeFilter{OpTypes: []crdt.OpType{crdt.OpDelete}}))\n\n\t// Test both filters match\n\tassert.True(t, shouldIncludeOp(testOp, MergeFilter{\n\t\tFileIDs: []string{fileID.String()},\n\t\tOpTypes: []crdt.OpType{crdt.OpInsert},\n\t}))\n\n\t// Test both filters, one no match\n\tassert.False(t, shouldIncludeOp(testOp, MergeFilter{\n\t\tFileIDs: []string{fileID.String()},\n\t\tOpTypes: []crdt.OpType{crdt.OpDelete},\n\t}))\n}\n"
  },
  {
    "path": "internal/streams/streams.go",
    "content": "package streams\n\nimport (\n\t\"encoding/binary\"\n\t\"encoding/json\"\n\t\"evo/internal/commits\"\n\t\"evo/internal/ops\"\n\t\"evo/internal/repo\"\n\t\"evo/internal/types\"\n\t\"fmt\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"sort\"\n\t\"strings\"\n\n\t\"github.com/google/uuid\"\n)\n\nfunc CreateStream(repoPath, name string) error {\n\tsdir := filepath.Join(repoPath, repo.EvoDir, \"streams\")\n\tif err := os.MkdirAll(sdir, 0755); err != nil {\n\t\treturn err\n\t}\n\tfpath := filepath.Join(sdir, name)\n\tif _, err := os.Stat(fpath); err == nil {\n\t\treturn fmt.Errorf(\"stream '%s' already exists\", name)\n\t}\n\treturn os.WriteFile(fpath, []byte{}, 0644)\n}\n\nfunc SwitchStream(repoPath, name string) error {\n\tfpath := filepath.Join(repoPath, repo.EvoDir, \"streams\", name)\n\tif _, err := os.Stat(fpath); os.IsNotExist(err) {\n\t\treturn fmt.Errorf(\"stream '%s' does not exist\", name)\n\t}\n\thead := filepath.Join(repoPath, repo.EvoDir, \"HEAD\")\n\treturn os.WriteFile(head, []byte(name), 0644)\n}\n\nfunc ListStreams(repoPath string) ([]string, error) {\n\tdir := filepath.Join(repoPath, repo.EvoDir, \"streams\")\n\tentries, err := os.ReadDir(dir)\n\tif os.IsNotExist(err) {\n\t\treturn []string{}, nil\n\t}\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tvar out []string\n\tfor _, e := range entries {\n\t\tif !e.IsDir() {\n\t\t\tout = append(out, e.Name())\n\t\t}\n\t}\n\treturn out, nil\n}\n\nfunc CurrentStream(repoPath string) (string, error) {\n\thead := filepath.Join(repoPath, repo.EvoDir, \"HEAD\")\n\tb, err := os.ReadFile(head)\n\tif err != nil {\n\t\treturn \"\", err\n\t}\n\treturn strings.TrimSpace(string(b)), nil\n}\n\n// MergeStreams => merges all missing commits from source => target\nfunc MergeStreams(repoPath, source, target string) error {\n\tsrcCommits, err := ListCommits(repoPath, source)\n\tif err != nil {\n\t\treturn err\n\t}\n\ttgtCommits, err := ListCommits(repoPath, target)\n\tif err != nil {\n\t\treturn err\n\t}\n\ttgtMap := 
make(map[string]bool)\n\tfor _, c := range tgtCommits {\n\t\ttgtMap[c.ID] = true\n\t}\n\tvar missing []types.Commit\n\tfor _, sc := range srcCommits {\n\t\tif !tgtMap[sc.ID] {\n\t\t\tmissing = append(missing, sc)\n\t\t}\n\t}\n\tfor _, mc := range missing {\n\t\t// replicate each op into .evo/ops/<target>/<fileID>.bin\n\t\tif err := replicateOps(repoPath, target, mc.Operations); err != nil {\n\t\t\treturn err\n\t\t}\n\t\t// store a commit copy in target\n\t\tc2 := mc\n\t\tc2.Stream = target\n\t\tif err := commits.SaveCommitFile(filepath.Join(repoPath, repo.EvoDir, \"commits\", target), &c2); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc replicateOps(repoPath, stream string, eops []commits.ExtendedOp) error {\n\tfor _, eop := range eops {\n\t\tfileID := eop.Op.FileID.String()\n\t\tbinPath := filepath.Join(repoPath, repo.EvoDir, \"ops\", stream, fileID+\".bin\")\n\t\tif err := ops.AppendOp(binPath, eop.Op); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\treturn nil\n}\n\n// CherryPick => replicate a single commit into the target\nfunc CherryPick(repoPath, commitID, target string) error {\n\tallStreams, err := ListStreams(repoPath)\n\tif err != nil {\n\t\treturn err\n\t}\n\tvar found *types.Commit\nOUTER:\n\tfor _, s := range allStreams {\n\t\tcc, _ := ListCommits(repoPath, s)\n\t\tfor _, c := range cc {\n\t\t\tif c.ID == commitID {\n\t\t\t\tfound = &c\n\t\t\t\tbreak OUTER\n\t\t\t}\n\t\t}\n\t}\n\tif found == nil {\n\t\treturn fmt.Errorf(\"commit %s not found in any stream\", commitID)\n\t}\n\t// replicate ops\n\tif err := replicateOps(repoPath, target, found.Operations); err != nil {\n\t\treturn err\n\t}\n\t// store new commit with new ID\n\tnewID := uuid.New().String()\n\tnc := *found\n\tnc.ID = newID\n\tnc.Stream = target\n\tnc.Message = \"[cherry-pick] \" + found.Message\n\treturn commits.SaveCommitFile(filepath.Join(repoPath, repo.EvoDir, \"commits\", target), &nc)\n}\n\nfunc ListCommits(repoPath, stream string) ([]types.Commit, error) {\n\tdir 
:= filepath.Join(repoPath, repo.EvoDir, \"commits\", stream)\n\tentries, err := os.ReadDir(dir)\n\tif os.IsNotExist(err) {\n\t\treturn []types.Commit{}, nil\n\t}\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tvar out []types.Commit\n\tfor _, e := range entries {\n\t\tif !e.IsDir() && filepath.Ext(e.Name()) == \".bin\" {\n\t\t\tc, err := loadCommit(filepath.Join(dir, e.Name()))\n\t\t\tif err != nil {\n\t\t\t\treturn nil, err\n\t\t\t}\n\t\t\tout = append(out, *c)\n\t\t}\n\t}\n\tsort.Slice(out, func(i, j int) bool {\n\t\treturn out[i].Timestamp.Before(out[j].Timestamp)\n\t})\n\treturn out, nil\n}\n\n// loadCommit reads a length-prefixed JSON commit record; reading the whole\n// file avoids the short-read risk of a bare File.Read\nfunc loadCommit(fp string) (*types.Commit, error) {\n\traw, err := os.ReadFile(fp)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tif len(raw) < 4 {\n\t\treturn nil, fmt.Errorf(\"commit file %s is truncated\", fp)\n\t}\n\tsize := binary.BigEndian.Uint32(raw[:4])\n\tif uint64(size) > uint64(len(raw)-4) {\n\t\treturn nil, fmt.Errorf(\"commit file %s is truncated\", fp)\n\t}\n\tvar c types.Commit\n\tif err := json.Unmarshal(raw[4:4+size], &c); err != nil {\n\t\treturn nil, err\n\t}\n\treturn &c, nil\n}\n\nfunc getCommit(repoPath, stream, commitID string) (*types.Commit, error) {\n\tcc, err := ListCommits(repoPath, stream)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\tfor _, c := range cc {\n\t\tif c.ID == commitID {\n\t\t\treturn &c, nil\n\t\t}\n\t}\n\treturn nil, fmt.Errorf(\"commit %s not found in stream %s\", commitID, stream)\n}\n"
  },
  {
    "path": "internal/streams/streams_test.go",
    "content": "package streams\n\nimport (\n\t\"evo/internal/commits\"\n\t\"evo/internal/crdt\"\n\t\"evo/internal/repo\"\n\t\"evo/internal/types\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/google/uuid\"\n\t\"github.com/stretchr/testify/assert\"\n)\n\nfunc TestCherryPick(t *testing.T) {\n\t// Create temp repo\n\ttmpDir := t.TempDir()\n\trepoPath := filepath.Join(tmpDir, \"test-repo\")\n\n\t// Initialize repo structure\n\tassert.NoError(t, os.MkdirAll(filepath.Join(repoPath, repo.EvoDir, \"commits\", \"main\"), 0755))\n\tassert.NoError(t, os.MkdirAll(filepath.Join(repoPath, repo.EvoDir, \"ops\", \"main\"), 0755))\n\tassert.NoError(t, CreateStream(repoPath, \"feature\"))\n\tassert.NoError(t, os.MkdirAll(filepath.Join(repoPath, repo.EvoDir, \"commits\", \"feature\"), 0755))\n\tassert.NoError(t, os.MkdirAll(filepath.Join(repoPath, repo.EvoDir, \"ops\", \"feature\"), 0755))\n\n\t// Create a test commit in feature stream\n\tfileID := uuid.New()\n\ttestOp := commits.ExtendedOp{\n\t\tOp: crdt.Operation{\n\t\t\tType:      crdt.OpInsert,\n\t\t\tFileID:    fileID,\n\t\t\tLineID:    uuid.New(),\n\t\t\tContent:   \"test line\",\n\t\t\tStream:    \"feature\",\n\t\t\tTimestamp: time.Now(),\n\t\t\tNodeID:    uuid.New(),\n\t\t\tLamport:   1,\n\t\t\tVector:    []int64{1},\n\t\t},\n\t}\n\ttestCommit := types.Commit{\n\t\tID:         uuid.New().String(),\n\t\tStream:     \"feature\",\n\t\tMessage:    \"test commit\",\n\t\tTimestamp:  time.Now(),\n\t\tOperations: []commits.ExtendedOp{testOp},\n\t}\n\tassert.NoError(t, commits.SaveCommitFile(filepath.Join(repoPath, repo.EvoDir, \"commits\", \"feature\"), &testCommit))\n\n\t// Test cherry-pick to main stream\n\terr := CherryPick(repoPath, testCommit.ID, \"main\")\n\tassert.NoError(t, err)\n\n\t// Verify commit was replicated\n\tmainCommits, err := ListCommits(repoPath, \"main\")\n\tassert.NoError(t, err)\n\tassert.Equal(t, 1, len(mainCommits))\n\tassert.Contains(t, mainCommits[0].Message, 
\"[cherry-pick]\")\n\tassert.Equal(t, \"main\", mainCommits[0].Stream)\n\tassert.Equal(t, 1, len(mainCommits[0].Operations))\n\tassert.Equal(t, fileID, mainCommits[0].Operations[0].Op.FileID)\n\tassert.Equal(t, \"test line\", mainCommits[0].Operations[0].Op.Content)\n}\n\nfunc TestMergeStreams(t *testing.T) {\n\t// Create temp repo\n\ttmpDir := t.TempDir()\n\trepoPath := filepath.Join(tmpDir, \"test-repo\")\n\n\t// Initialize repo structure\n\tassert.NoError(t, os.MkdirAll(filepath.Join(repoPath, repo.EvoDir, \"commits\", \"main\"), 0755))\n\tassert.NoError(t, os.MkdirAll(filepath.Join(repoPath, repo.EvoDir, \"ops\", \"main\"), 0755))\n\tassert.NoError(t, CreateStream(repoPath, \"feature\"))\n\tassert.NoError(t, os.MkdirAll(filepath.Join(repoPath, repo.EvoDir, \"commits\", \"feature\"), 0755))\n\tassert.NoError(t, os.MkdirAll(filepath.Join(repoPath, repo.EvoDir, \"ops\", \"feature\"), 0755))\n\n\t// Create multiple test commits in feature stream\n\tfileID := uuid.New()\n\ttestCommits := []types.Commit{\n\t\t{\n\t\t\tID:      uuid.New().String(),\n\t\t\tStream:  \"feature\",\n\t\t\tMessage: \"commit 1\",\n\t\t\tOperations: []commits.ExtendedOp{{\n\t\t\t\tOp: crdt.Operation{\n\t\t\t\t\tType:      crdt.OpInsert,\n\t\t\t\t\tFileID:    fileID,\n\t\t\t\t\tLineID:    uuid.New(),\n\t\t\t\t\tContent:   \"line 1\",\n\t\t\t\t\tStream:    \"feature\",\n\t\t\t\t\tTimestamp: time.Now(),\n\t\t\t\t\tNodeID:    uuid.New(),\n\t\t\t\t\tLamport:   1,\n\t\t\t\t\tVector:    []int64{1},\n\t\t\t\t},\n\t\t\t}},\n\t\t\tTimestamp: time.Now(),\n\t\t},\n\t\t{\n\t\t\tID:      uuid.New().String(),\n\t\t\tStream:  \"feature\",\n\t\t\tMessage: \"commit 2\",\n\t\t\tOperations: []commits.ExtendedOp{{\n\t\t\t\tOp: crdt.Operation{\n\t\t\t\t\tType:      crdt.OpInsert,\n\t\t\t\t\tFileID:    fileID,\n\t\t\t\t\tLineID:    uuid.New(),\n\t\t\t\t\tContent:   \"line 2\",\n\t\t\t\t\tStream:    \"feature\",\n\t\t\t\t\tTimestamp: time.Now(),\n\t\t\t\t\tNodeID:    uuid.New(),\n\t\t\t\t\tLamport:   
2,\n\t\t\t\t\tVector:    []int64{2},\n\t\t\t\t},\n\t\t\t}},\n\t\t\tTimestamp: time.Now().Add(time.Second),\n\t\t},\n\t}\n\n\tfor _, c := range testCommits {\n\t\tassert.NoError(t, commits.SaveCommitFile(filepath.Join(repoPath, repo.EvoDir, \"commits\", \"feature\"), &c))\n\t}\n\n\t// Test merge streams\n\terr := MergeStreams(repoPath, \"feature\", \"main\")\n\tassert.NoError(t, err)\n\n\t// Verify all commits were replicated\n\tmainCommits, err := ListCommits(repoPath, \"main\")\n\tassert.NoError(t, err)\n\tassert.Equal(t, 2, len(mainCommits))\n\tassert.Equal(t, \"main\", mainCommits[0].Stream)\n\tassert.Equal(t, \"main\", mainCommits[1].Stream)\n\tassert.Equal(t, \"line 1\", mainCommits[0].Operations[0].Op.Content)\n\tassert.Equal(t, \"line 2\", mainCommits[1].Operations[0].Op.Content)\n}\n"
  },
  {
    "path": "internal/types/commit.go",
    "content": "package types\n\nimport (\n\t\"crypto/sha256\"\n\t\"encoding/hex\"\n\t\"evo/internal/crdt\"\n\t\"time\"\n)\n\n// ExtendedOp includes oldContent for update ops\ntype ExtendedOp struct {\n\tOp         crdt.Operation `json:\"op\"`\n\tOldContent string         `json:\"oldContent,omitempty\"`\n}\n\n// Commit represents a commit in the repository\ntype Commit struct {\n\tID          string       // Unique identifier\n\tStream      string       // Stream name\n\tMessage     string       // Commit message\n\tAuthorName  string       // Author's name\n\tAuthorEmail string       // Author's email\n\tTimestamp   time.Time    // When the commit was created\n\tOperations  []ExtendedOp // Operations included in this commit\n\tSignature   string       // Optional Ed25519 signature\n}\n\n// CommitHashString generates a stable, hex-encoded digest of a commit's\n// metadata (ID, stream, message, author, timestamp) for signing.\n// Note: the commit's operations are not included in the digest.\nfunc CommitHashString(c *Commit) string {\n\th := sha256.New()\n\th.Write([]byte(c.ID))\n\th.Write([]byte(c.Stream))\n\th.Write([]byte(c.Message))\n\th.Write([]byte(c.AuthorName))\n\th.Write([]byte(c.AuthorEmail))\n\th.Write([]byte(c.Timestamp.UTC().Format(time.RFC3339)))\n\treturn hex.EncodeToString(h.Sum(nil))\n}\n"
  },
  {
    "path": "internal/util/util.go",
    "content": "package util\n\nimport (\n\t\"os\"\n\t\"path/filepath\"\n)\n\n// ListAllFiles returns the paths of all regular files under repoPath,\n// relative to repoPath.\nfunc ListAllFiles(repoPath string) ([]string, error) {\n\tvar out []string\n\terr := filepath.Walk(repoPath, func(path string, info os.FileInfo, e error) error {\n\t\tif e != nil {\n\t\t\treturn e\n\t\t}\n\t\tif !info.IsDir() {\n\t\t\trel, err := filepath.Rel(repoPath, path)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tout = append(out, rel)\n\t\t}\n\t\treturn nil\n\t})\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn out, nil\n}\n"
  },
  {
    "path": "justfile",
    "content": "# just is a handy way to save and run project-specific commands\n# https://just.systems/\n\n# List all recipes\ndefault:\n    @just --list\n\n# Format all Go files\nfmt:\n    go fmt ./...\n\n# Run tests\ntest:\n    go test -v ./...\n\n# Run tests with coverage\ntest-coverage:\n    go test -v -coverprofile=coverage.out ./...\n    go tool cover -html=coverage.out -o coverage.html\n\n# Build the project\nbuild:\n    go build -v ./...\n\n# Build the CLI\nbuild-cli:\n    go build -v -o bin/evo ./cmd/evo\n\n# Install the CLI to $GOPATH/bin\ninstall: build-cli\n    go install ./cmd/evo\n\n# Run the main application\nrun:\n    go run ./cmd/evo\n\n# Install dependencies\ndeps:\n    go mod download\n    go mod tidy\n\n# Verify dependencies\nverify:\n    go mod verify\n\n# Run linter (requires golangci-lint)\nlint:\n    golangci-lint run\n\n# Clean build artifacts\nclean:\n    go clean\n    rm -f coverage.out coverage.html\n\n# Update dependencies to latest versions\nupdate-deps:\n    go get -u ./...\n    go mod tidy\n\n# Run security check (requires gosec)\nsecurity-check:\n    gosec ./...\n\n# Generate documentation\ndocs:\n    godoc -http=:6060\n\n# Create a new release tag\nrelease VERSION:\n    git tag -a {{VERSION}} -m \"Release {{VERSION}}\"\n    git push origin {{VERSION}}\n\n# Install development tools\ninstall-tools:\n    go install golang.org/x/tools/cmd/godoc@latest\n    go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest\n    go install github.com/securego/gosec/v2/cmd/gosec@latest\n"
  }
]