Repository: crazywolf132/evo
Branch: main
Commit: 08c5b8db2c04
Files: 54
Total size: 174.2 KB
Directory structure:
gitextract_ru125rnr/
├── .github/
│   ├── ISSUE_TEMPLATE/
│   │   ├── bug_report.md
│   │   └── feature_request.md
│   └── pull_request_template.md
├── .gitignore
├── DESIGN.md
├── LICENSE
├── README.md
├── cmd/
│   └── evo/
│       ├── commit_cmd.go
│       ├── config_cmd.go
│       ├── init_cmd.go
│       ├── log_cmd.go
│       ├── main.go
│       ├── revert_cmd.go
│       ├── root.go
│       ├── status_cmd.go
│       ├── stream_cmd.go
│       └── sync_cmd.go
├── go.mod
├── go.sum
├── internal/
│   ├── commits/
│   │   ├── commits.go
│   │   └── commits_test.go
│   ├── config/
│   │   └── config.go
│   ├── crdt/
│   │   ├── compact/
│   │   │   ├── compact.go
│   │   │   ├── config.go
│   │   │   ├── service.go
│   │   │   └── service_test.go
│   │   ├── operation.go
│   │   ├── operation_test.go
│   │   ├── rga.go
│   │   └── rga_test.go
│   ├── ignore/
│   │   ├── ignore.go
│   │   └── ignore_test.go
│   ├── index/
│   │   └── index.go
│   ├── lfs/
│   │   ├── diff.go
│   │   ├── diff_test.go
│   │   ├── gc.go
│   │   ├── store.go
│   │   ├── store_test.go
│   │   └── types.go
│   ├── ops/
│   │   ├── binary_log.go
│   │   └── ops.go
│   ├── repo/
│   │   ├── repo.go
│   │   └── repo_test.go
│   ├── signing/
│   │   ├── signing.go
│   │   └── signing_test.go
│   ├── status/
│   │   ├── status.go
│   │   └── status_test.go
│   ├── streams/
│   │   ├── partial.go
│   │   ├── partial_test.go
│   │   ├── streams.go
│   │   └── streams_test.go
│   ├── types/
│   │   └── commit.go
│   └── util/
│       └── util.go
└── justfile
================================================
FILE CONTENTS
================================================
================================================
FILE: .github/ISSUE_TEMPLATE/bug_report.md
================================================
---
name: Bug Report 🐛
about: Create a report to help improve Evo
title: '[BUG] '
labels: bug
assignees: ''
---
## Bug Description 🔍
<!-- A clear and concise description of what the bug is -->
## Steps to Reproduce 🔄
1.
2.
3.
## Expected Behavior 🌿
<!-- What did you expect to happen? -->
## Actual Behavior 🍂
<!-- What actually happened? -->
## Environment 🌍
- Evo Version:
- OS:
- Go Version:
## Additional Context 📝
<!-- Add any other context, screenshots, or error messages about the problem here -->
## Possible Solution 💡
<!-- Optional: If you have any ideas on how to fix this, let us know! -->
================================================
FILE: .github/ISSUE_TEMPLATE/feature_request.md
================================================
---
name: Feature Request 💫
about: Suggest an idea to help evolve Evo
title: '[FEATURE] '
labels: enhancement
assignees: ''
---
## Feature Description 🌱
<!-- A clear and concise description of what you'd like to see -->
## Use Case 🎯
<!-- Describe the problem this feature would solve or how it would improve Evo -->
## Proposed Solution 💡
<!-- If you have a specific solution in mind, describe it here -->
## Alternatives Considered 🤔
<!-- Have you considered any alternative solutions or workarounds? -->
## Additional Context 📝
<!-- Add any other context, mockups, or examples about the feature request here -->
## Impact on Project 🌿
<!-- How would this feature align with Evo's goals of being:
- Conflict-free
- Rename-friendly
- Large-file ready
- Offline-first
-->
================================================
FILE: .github/pull_request_template.md
================================================
## Description 🌿
<!-- Provide a clear and concise description of your changes -->
## Type of Change 🔄
<!-- Mark the relevant option with an [x] -->
- [ ] 🐛 Bug Fix
- [ ] ✨ New Feature
- [ ] 📈 Performance Improvement
- [ ] 🔧 Code Refactoring
- [ ] 📚 Documentation
- [ ] 🧪 Test Enhancement
- [ ] 🛠️ Build/CI Pipeline
## Related Issues 🔗
<!-- Link any related issues using #issue-number -->
Closes #
## Testing Done 🧪
<!-- Describe the tests you've added or run to verify your changes -->
## Checklist ✅
- [ ] My code follows Evo's style guidelines
- [ ] I have added/updated necessary documentation
- [ ] I have added appropriate tests
- [ ] My changes don't introduce new merge conflicts (Evo magic! 🎩)
- [ ] I have tested my changes with large files (if applicable)
- [ ] All new and existing tests pass
## Screenshots 📸
<!-- If applicable, add screenshots to help explain your changes -->
## Additional Notes 📝
<!-- Add any other context about your PR here -->
---
Remember: Evo is all about making version control effortless! Thanks for contributing! 💪
================================================
FILE: .gitignore
================================================
# Binaries and build artifacts
bin/
dist/
build/
*.exe
*.exe~
*.dll
*.so
*.dylib
# Go cache / coverage
*.test
*.out
*.swp
*.swo
*.tmp
*.temp
coverage.out
coverage.html
# OS-specific
.DS_Store
Thumbs.db
# Evo logs & data
.evo/
*.log
# Editor / IDE
.vscode/
.idea/
================================================
FILE: DESIGN.md
================================================
# Evo Design Document
## Overview & Motivation
Evo is a next-generation version control system designed to solve problems that legacy systems (like Git) struggle with—especially around complex merges, large file handling, and rename tracking. By leveraging CRDTs (Conflict-Free Replicated Data Types), Evo can integrate changes from multiple developers without forcing manual merges or conflicts, all while supporting a familiar commit/branch-like workflow.
## Key Goals
1. **Branch-Free, Named Streams**
- Instead of Git branches, Evo uses named streams to isolate sets of changes
- Merging is a matter of replicating CRDT operations from one stream to another
2. **CRDT-Powered Concurrency**
- No more "merge conflicts"
- Evo's line-based RGA (Replicated Growable Array) CRDT automatically merges line insertions, updates, and deletions even when multiple developers modify the same file concurrently
3. **Stable File IDs for Renames**
- Renames no longer break history
- Evo maintains a `.evo/index` that assigns each file a stable, UUID-based ID so renaming a file doesn't lose references to its log
4. **Large File Support**
- Files exceeding a configurable threshold are stored in `.evo/largefiles/<fileID>` with only a stub line in the CRDT logs
- This prevents huge content from bloating the text-based logs
5. **Full Revert & Partial Merges**
- Every commit tracks the old content on updates, allowing truly comprehensive revert
- Partial merges (or "cherry-picks") replicate only a single commit's changes from one stream to another, as opposed to pulling everything
6. **Optional Commit Signing**
- Evo supports Ed25519-based signatures for verifying authenticity
- Commits store a signature field to guard against tampering
## Architecture
Below is a high-level view of Evo's architecture and rationale:
### 1. Named Streams
- Each stream is effectively a separate CRDT operation log stored in `.evo/ops/<stream>`
- Users can create or switch streams (akin to branches)
- Merging means copying missing commits (and their CRDT operations) from one stream's logs to another
**Design Decision:** This approach provides a branch-like user experience but avoids the complexity of Git merges and HEAD pointers. CRDT ensures no merge conflicts.
### 2. RGA-Based CRDT
- We employ an RGA (Replicated Growable Array) for each file, which can handle line insertion, deletion, and reordering
- The RGA logic is stored in `.evo/ops/<stream>/<fileID>.bin` in a custom binary format (no JSON overhead)
- Each operation has `(lamport, nodeID)` for concurrency ordering, plus a `lineID` for each line
**Design Decision:**
- RGA allows lines to be re-inserted anywhere, supporting reordering or partial merges with minimal overhead
- Using a binary format speeds up parsing and reduces disk usage
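The `(lamport, nodeID)` pair gives every operation a total order even when two replicas act concurrently. A minimal sketch of that comparison, using illustrative type names rather than Evo's actual ones:

```go
package main

import "fmt"

// OpID is a hypothetical stand-in for the (lamport, nodeID) pair
// attached to each CRDT operation; field names are illustrative.
type OpID struct {
	Lamport uint64 // logical clock, incremented on every local op
	NodeID  string // tie-breaker so two replicas never produce equal IDs
}

// Less defines a total order: compare Lamport clocks first,
// then fall back to NodeID for concurrent operations.
func (a OpID) Less(b OpID) bool {
	if a.Lamport != b.Lamport {
		return a.Lamport < b.Lamport
	}
	return a.NodeID < b.NodeID
}

func main() {
	x := OpID{Lamport: 3, NodeID: "nodeA"}
	y := OpID{Lamport: 3, NodeID: "nodeB"}
	// Concurrent ops (equal Lamport clocks) are ordered by NodeID,
	// so every replica sorts them identically.
	fmt.Println(x.Less(y)) // true
}
```

Because every replica applies the same comparison, sorting the same set of operations yields the same document on every node.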
### 3. Stable File IDs
- `.evo/index` maps `filePath -> fileID`. If a user renames a file, we only update the index; the CRDT logs still reference the same fileID
- This ensures rename history is never lost, unlike older VCS tools that rely on heuristics to guess renames
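The rename mechanics reduce to re-keying a map while the value (the file ID) stays put. A sketch under the assumption that the index is a simple path-to-ID map (the on-disk `.evo/index` format may differ):

```go
package main

import "fmt"

// Index is a hypothetical in-memory view of .evo/index:
// working-tree path -> stable fileID.
type Index map[string]string

// Rename re-keys the path but keeps the same fileID, so CRDT logs
// that reference the ID never need rewriting.
func (ix Index) Rename(oldPath, newPath string) {
	id, ok := ix[oldPath]
	if !ok {
		return // nothing tracked at oldPath
	}
	delete(ix, oldPath)
	ix[newPath] = id
}

func main() {
	ix := Index{"docs/old.md": "f3a1-uuid"}
	ix.Rename("docs/old.md", "docs/new.md")
	// The fileID survives the rename, so history stays attached.
	fmt.Println(ix["docs/new.md"]) // f3a1-uuid
}
```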
### 4. Commits & Reverts
- A commit is a snapshot of newly added operations since the previous commit, stored in `.evo/commits/<stream>/<commitID>.bin`
- For update operations, we store the `oldContent` so revert can truly restore lines to what they were
- Revert automatically generates inverse operations (e.g., an insert becomes a delete) and re-applies them to the CRDT logs
**Design Decision:**
- By storing old content in commits, we can revert precisely, even for partial updates or line changes, avoiding the simplistic "delete everything" approach
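The inverse-op idea can be sketched as follows; the op kinds and fields here are illustrative stand-ins, not Evo's actual operation types:

```go
package main

import "fmt"

// Op is a hypothetical simplified CRDT operation.
type Op struct {
	Kind       string // "insert", "delete", or "update"
	LineID     string
	Content    string
	OldContent string // captured at commit time for updates/deletes
}

// Invert produces the operation that undoes op: an insert becomes a
// delete, a delete re-inserts the old content, and an update restores it.
func Invert(op Op) Op {
	switch op.Kind {
	case "insert":
		return Op{Kind: "delete", LineID: op.LineID}
	case "delete":
		return Op{Kind: "insert", LineID: op.LineID, Content: op.OldContent}
	default: // "update": swap new and old content
		return Op{Kind: "update", LineID: op.LineID,
			Content: op.OldContent, OldContent: op.Content}
	}
}

func main() {
	inv := Invert(Op{Kind: "update", LineID: "L7",
		Content: "new text", OldContent: "old text"})
	fmt.Println(inv.Kind, inv.Content) // update old text
}
```

Reverting a commit is then just applying the inverted ops as a fresh commit, which is why storing `oldContent` at commit time is essential.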
### 5. Large File Handling
- If a file's size exceeds a configurable threshold (`files.largeThreshold`), Evo writes a CRDT stub line `EVO-LFS:<fileID>` and places the real file content into `.evo/largefiles/<fileID>/`
- This keeps the CRDT logs small and is reminiscent of Git-LFS, but simpler and built-in
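The decision point is a simple size check; this sketch assumes a byte threshold from `files.largeThreshold` and uses the `EVO-LFS:<fileID>` stub convention described above (the helper name is hypothetical):

```go
package main

import "fmt"

// stubOrContent decides what enters the CRDT log for a file: the raw
// content if it is small, or only a stub line if it exceeds threshold.
func stubOrContent(fileID string, content []byte, threshold int) string {
	if len(content) > threshold {
		// The real bytes would be written to .evo/largefiles/<fileID>/;
		// only this stub line enters the text-based CRDT log.
		return "EVO-LFS:" + fileID
	}
	return string(content)
}

func main() {
	big := make([]byte, 1024)
	fmt.Println(stubOrContent("f3a1", big, 512))          // EVO-LFS:f3a1
	fmt.Println(stubOrContent("f3a1", []byte("hi"), 512)) // hi
}
```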
### 6. Partial Merges & Cherry-Pick
- `evo stream merge <src> <target>` merges all missing commits from `<src>` to `<target>`
- `evo stream cherry-pick <commitID> <target>` merges only that single commit
- Because each commit references discrete CRDT operations by file ID, partial merges replicate exactly the needed ops
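At its core, a full stream merge is a set difference over commit IDs: replicate everything the target does not yet have. A simplified sketch (commit IDs stand in for full commits with their ops):

```go
package main

import "fmt"

// mergeMissing appends to target every commit present in src but
// missing from target, preserving src's order. In the real system,
// replicating a commit also replicates its CRDT operations, so no
// conflict resolution step is needed.
func mergeMissing(src, target []string) []string {
	have := make(map[string]bool, len(target))
	for _, id := range target {
		have[id] = true
	}
	for _, id := range src {
		if !have[id] {
			target = append(target, id)
		}
	}
	return target
}

func main() {
	mainStream := []string{"c1"}
	feature := []string{"c1", "c2", "c3"}
	fmt.Println(mergeMissing(feature, mainStream)) // [c1 c2 c3]
}
```

Cherry-pick is the degenerate case: the `src` set contains exactly one commit.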
### 7. Optional Ed25519 Signing
- Users can configure a signing key path (`signing.keyPath` in config)
- On commit, Evo can create a signature by hashing the commit's stable representation (metadata + ops) and sign it
- If `verifySignatures = true`, the CLI warns if the signature fails verification
**Design Decision:**
- This approach is offline-first: no server needed
- The user's private key is local, and signatures are purely a cryptographic measure for authenticity
## CLI Summary
1. **Initialize Repository**
```bash
evo init [dir]
```
- Creates `.evo/` structure, "main" stream, config, etc.
2. **Configuration**
```bash
evo config [get|set] ...
```
- Manage global/repo-level settings (`user.name`, `user.email`, `remote origin`, etc.)
3. **Status**
```bash
evo status
```
- Shows changed files, new files, renames, etc.
- Lists current stream and pending operations
4. **Commit**
```bash
evo commit -m <msg> [--sign]
```
- Groups newly added ops into a commit with a user-provided message, optional signing
5. **Revert**
```bash
evo revert <commit-id>
```
- Generates inverse ops to restore lines from a prior commit
6. **Log**
```bash
evo log
```
- Lists commits in the current stream, optionally verifying signatures
7. **Stream**
```bash
evo stream <create|switch|list|merge|cherry-pick>
```
- Manages named streams (branch-like workflows)
8. **Sync**
```bash
evo sync <remote-url>  # not fully implemented
```
- Stub for pushing/pulling CRDT logs from a future Evo server
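Taken together, a repo-level `.evo/config/config.toml` using the keys above might look like this. The exact section layout is an assumption (go-toml maps `[user] name` to the dotted key `user.name`); all values are illustrative:

```toml
[user]
name = "EvoUser"
email = "user@evo"

[files]
largeThreshold = 10485760  # bytes; larger files go to .evo/largefiles/

[signing]
keyPath = "~/.config/evo/ed25519.key"  # Ed25519 private key

verifySignatures = true
```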
## Config & Auth
- Global config at `~/.config/evo/config.toml`
- Repo config at `.evo/config/config.toml`
- Example keys:
- `user.name`, `user.email`
- `files.largeThreshold`
- `verifySignatures` (true/false)
- `signing.keyPath` (path to Ed25519 private key)
## Why Evo is Different
- **No Merge Conflicts:** CRDT concurrency means each line insertion, update, or deletion merges automatically
- **Renames Are Trivial:** The stable file ID approach eliminates guesswork
- **Partial Merges:** Cherry-pick or revert lines in a simpler manner thanks to the operation-based CRDT approach
- **Offline-First:** No central server required; commits and merges work locally with minimal overhead
- **Extensible:** We can add "pull requests," "server-based merges," or advanced partial file merges without rewriting the entire engine
## Conclusion
Evo aims to simplify version control while enhancing concurrency and rename support. It merges automatically using a robust line-based CRDT, organizes changes into named streams instead of ephemeral branches, and offers optional commit signing plus large file offloading.
The result is a production-ready, innovative VCS that supports both small personal projects and large enterprise codebases—offline or with a future server for collaboration. Evo's design choices reflect the vision of replacing traditional DVCS with something more powerful, more flexible, and less conflict-prone.
================================================
FILE: LICENSE
================================================
MIT License
Copyright (c) 2025 Brayden Moon
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
================================================
FILE: README.md
================================================
# Evo 🌿
> **IMPORTANT**: This project has been discontinued. Due to persistent harassment and hostile behavior from certain members of the community, I have made the difficult decision to cease development of this project. While I'm proud of what was built and the vision it represented, I cannot continue maintaining it under these circumstances. The repository will remain archived for reference, but will no longer receive updates or support. Thank you to those who supported this project constructively.
> ~~**Note**: This is my hobby project in active development! While the core concepts are working, some features are still experimental and under construction. If you like the vision, contributions and feedback are very welcome! 🚧~~
Next-Generation, CRDT-Based Version Control
No Merge Conflicts • Named Streams • Stable File IDs • Large File Support
Evo 🌿 aims to evolve version control by abandoning outdated branch merges and manual conflict resolution. Instead, it leverages CRDT (Conflict-Free Replicated Data Type) magic so that changes from multiple users automatically converge—no fighting with merges or losing work when files are renamed!
## Why Evo? 🌿
1. **Zero Merge Conflicts**
The line-based RGA CRDT merges text changes from different developers seamlessly.
2. **Named Streams Instead of Branches**
Create and switch streams for new features, merge or cherry-pick commits from one stream to another—no more complicated branching.
3. **Renames Made Simple**
Files get stable UUIDs in .evo/index so that renames never lose history.
4. **Large File Support**
Automatic detection moves big files to .evo/largefiles/ and stores only a stub in the CRDT logs.
5. **Offline-First**
Commit, revert, or switch streams locally with no server required.
6. **Commit Signing**
Optional Ed25519 signing for users who need authenticity checks.
## ~~Work in Progress~~ Project Status 🌿
~~While Evo's core is functional, there's active development on:
- Advanced partial merges for even more granular change selection
- Extended tests (unit/integration/E2E)
- Server-based PR flows for code reviews
- Performance (packfiles, caching)
- CLI & UI polish~~
This project has been discontinued and is no longer under development. The code remains as-is for reference purposes, but no further updates or improvements will be made.
~~Your feedback and contributions can help shape Evo's future!~~
## Vision 🌿
The goal is to make version control feel effortless: merges happen automatically, renames never break history, large files don't slow you down, and everything works offline. The future roadmap includes a fully realized server for pull requests, enterprise auth, and real-time collaboration—all powered by CRDT behind the scenes.
## Installing Evo 🛠️
> **Note**: As this was a hobby project, some features might not work as described. Feel free to experiment with the archived code.
1. Clone & Build:
```bash
git clone https://github.com/crazywolf132/evo.git
cd evo
go mod tidy
go build -o evo ./cmd/evo
```
2. (Optional) Install:
```bash
go install ./cmd/evo
```
## Quick Start 🚀
```bash
# Initialize a new Evo repo
evo init
# Check for changed or renamed files
evo status
# Commit changes (optionally sign)
evo commit -m "Initial commit"
# Create a new stream (like a branch)
evo stream create feature-x
evo stream switch feature-x
# Make changes -> evo status -> evo commit ...
# Merge everything back into main when ready
evo stream merge feature-x main
```
## Contributing 💪
This project is no longer accepting contributions as it has been discontinued. The repository is archived for reference purposes only.
## License 📜
Evo 🌿 is released under the MIT License. Hope you find it as fun and liberating to use as it is to build!
---
Thanks for checking out Evo 🌿! It set out to be a conflict-free, rename-friendly, large-file-ready version control system, and the archived code remains here for anyone curious about the approach. ✨
================================================
FILE: cmd/evo/commit_cmd.go
================================================
package main

import (
	"evo/internal/commits"
	"evo/internal/config"
	"evo/internal/index"
	"evo/internal/repo"
	"evo/internal/streams"
	"evo/internal/types"
	"fmt"

	"github.com/spf13/cobra"
)

var (
	commitMsg  string
	commitSign bool
)

func init() {
	var commitCmd = &cobra.Command{
		Use:   "commit",
		Short: "Group new CRDT ops into a commit, optionally signed",
		Long: `Collect newly added CRDT ops (including old content for updates) into a single commit
with a message and optional Ed25519 signature, if configured.`,
		RunE: func(cmd *cobra.Command, args []string) error {
			if commitMsg == "" {
				return fmt.Errorf("use -m to specify a commit message")
			}
			rp, err := repo.FindRepoRoot(".")
			if err != nil {
				return err
			}
			stream, err := streams.CurrentStream(rp)
			if err != nil {
				return err
			}
			// update index
			if err := index.UpdateIndex(rp); err != nil {
				return err
			}
			name, _ := config.GetConfigValue(rp, "user.name")
			email, _ := config.GetConfigValue(rp, "user.email")
			if name == "" {
				name = "EvoUser"
			}
			if email == "" {
				email = "user@evo"
			}
			cid, err := commits.CreateCommit(rp, stream, commitMsg, name, email, []types.ExtendedOp{}, commitSign)
			if err != nil {
				return err
			}
			fmt.Printf("Created commit %s in stream %s\n", cid.ID, stream)
			return nil
		},
	}
	commitCmd.Flags().StringVarP(&commitMsg, "message", "m", "", "Commit message")
	commitCmd.Flags().BoolVar(&commitSign, "sign", false, "Sign commit using Ed25519 if configured")
	rootCmd.AddCommand(commitCmd)
}
================================================
FILE: cmd/evo/config_cmd.go
================================================
package main

import (
	"evo/internal/config"
	"evo/internal/repo"
	"fmt"

	"github.com/spf13/cobra"
)

var cfgGlobal bool

func init() {
	var setCmd = &cobra.Command{
		Use:   "set <key> <value>",
		Short: "Set a config key (repo-level by default, or --global)",
		RunE: func(cmd *cobra.Command, args []string) error {
			if len(args) < 2 {
				return fmt.Errorf("usage: evo config set <key> <value>")
			}
			key, val := args[0], args[1]
			if cfgGlobal {
				return config.SetGlobalConfigValue(key, val)
			}
			rp, err := repo.FindRepoRoot(".")
			if err != nil {
				// fallback to global
				return config.SetGlobalConfigValue(key, val)
			}
			return config.SetRepoConfigValue(rp, key, val)
		},
	}
	setCmd.Flags().BoolVar(&cfgGlobal, "global", false, "Set global config instead of repo-level")

	var getCmd = &cobra.Command{
		Use:   "get <key>",
		Short: "Get a config value (repo-level overrides global)",
		RunE: func(cmd *cobra.Command, args []string) error {
			if len(args) < 1 {
				return fmt.Errorf("usage: evo config get <key>")
			}
			key := args[0]
			rp, err := repo.FindRepoRoot(".")
			var val string
			if err != nil {
				// fallback global
				val, err = config.GetConfigValue("", key)
			} else {
				val, err = config.GetConfigValue(rp, key)
			}
			if err != nil {
				fmt.Println("Error:", err)
				return nil
			}
			if val == "" {
				fmt.Printf("No value found for key: %s\n", key)
			} else {
				fmt.Println(val)
			}
			return nil
		},
	}

	var configCmd = &cobra.Command{
		Use:   "config",
		Short: "Manage Evo configuration",
	}
	configCmd.AddCommand(setCmd, getCmd)
	rootCmd.AddCommand(configCmd)
}
================================================
FILE: cmd/evo/init_cmd.go
================================================
package main

import (
	"evo/internal/repo"
	"fmt"

	"github.com/spf13/cobra"
)

func init() {
	var initCmd = &cobra.Command{
		Use:   "init [path]",
		Short: "Initialize a new Evo repository",
		Long: `Creates a .evo directory with default stream "main", config folder, index for stable file IDs,
and other structures needed for CRDT-based version control.`,
		RunE: func(cmd *cobra.Command, args []string) error {
			path := "."
			if len(args) > 0 {
				path = args[0]
			}
			if err := repo.InitRepo(path); err != nil {
				return err
			}
			fmt.Println("Initialized Evo repository at", path)
			return nil
		},
	}
	rootCmd.AddCommand(initCmd)
}
================================================
FILE: cmd/evo/log_cmd.go
================================================
package main

import (
	"evo/internal/commits"
	"evo/internal/config"
	"evo/internal/repo"
	"evo/internal/signing"
	"evo/internal/streams"
	"fmt"

	"github.com/spf13/cobra"
)

func init() {
	var logCmd = &cobra.Command{
		Use:   "log",
		Short: "Show commit history for the current stream",
		RunE: func(cmd *cobra.Command, args []string) error {
			rp, err := repo.FindRepoRoot(".")
			if err != nil {
				return err
			}
			stream, err := streams.CurrentStream(rp)
			if err != nil {
				return err
			}
			verifyStr, _ := config.GetConfigValue(rp, "verifySignatures")
			doVerify := (verifyStr == "true")
			cc, err := commits.ListCommits(rp, stream)
			if err != nil {
				return err
			}
			if len(cc) == 0 {
				fmt.Println("No commits found in this stream.")
				return nil
			}
			for _, c := range cc {
				ver := ""
				if c.Signature != "" && doVerify {
					valid, err := signing.VerifyCommit(&c, rp)
					if err != nil {
						ver = " (error: " + err.Error() + ")"
					} else if valid {
						ver = " (verified)"
					} else {
						ver = " (INVALID!)"
					}
				}
				fmt.Printf("commit %s%s\nAuthor: %s <%s>\nDate: %s\n\n %s\n\n",
					c.ID, ver, c.AuthorName, c.AuthorEmail, c.Timestamp.Local(), c.Message)
			}
			return nil
		},
	}
	rootCmd.AddCommand(logCmd)
}
================================================
FILE: cmd/evo/main.go
================================================
package main

func main() {
	Execute()
}
================================================
FILE: cmd/evo/revert_cmd.go
================================================
package main

import (
	"evo/internal/commits"
	"evo/internal/repo"
	"evo/internal/streams"
	"fmt"

	"github.com/spf13/cobra"
)

func init() {
	var revertCmd = &cobra.Command{
		Use:   "revert <commit-id>",
		Short: "Revert the specified commit by generating inverse ops",
		Long:  `This properly restores old lines if the commit performed updates, removing inserted lines, etc.`,
		RunE: func(cmd *cobra.Command, args []string) error {
			if len(args) < 1 {
				return fmt.Errorf("usage: evo revert <commit-id>")
			}
			commitID := args[0]
			rp, err := repo.FindRepoRoot(".")
			if err != nil {
				return err
			}
			str, err := streams.CurrentStream(rp)
			if err != nil {
				return err
			}
			newC, err := commits.RevertCommit(rp, str, commitID)
			if err != nil {
				return fmt.Errorf("failed to revert commit: %w", err)
			}
			fmt.Printf("Created revert commit %s\n", newC.ID)
			return nil
		},
	}
	rootCmd.AddCommand(revertCmd)
}
================================================
FILE: cmd/evo/root.go
================================================
package main

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
)

var rootCmd = &cobra.Command{
	Use:   "evo",
	Short: "Evo (🌿) - next-generation CRDT-based version control",
	Long: `Evo is a production-ready version control system that uses named streams,
line-based CRDT (with RGA for reordering), stable file IDs, commit signing, and large file support.`,
}

// Execute runs the CLI
func Execute() {
	if err := rootCmd.Execute(); err != nil {
		fmt.Fprintln(os.Stderr, "Error:", err)
		os.Exit(1)
	}
}
================================================
FILE: cmd/evo/status_cmd.go
================================================
package main

import (
	"evo/internal/repo"
	"evo/internal/status"
	"fmt"

	"github.com/spf13/cobra"
)

func init() {
	var statusCmd = &cobra.Command{
		Use:   "status",
		Short: "Show the working tree status",
		Long: `Shows the status of files in the working directory:
- New (untracked) files
- Modified files
- Deleted files
- Renamed files
Respects .evo-ignore patterns for excluding files.`,
		RunE: func(cmd *cobra.Command, args []string) error {
			rp, err := repo.FindRepoRoot(".")
			if err != nil {
				return err
			}
			st, err := status.GetStatus(rp)
			if err != nil {
				return fmt.Errorf("failed to get status: %w", err)
			}
			fmt.Print(status.FormatStatus(st))
			return nil
		},
	}
	rootCmd.AddCommand(statusCmd)
}
================================================
FILE: cmd/evo/stream_cmd.go
================================================
package main

import (
	"evo/internal/repo"
	"evo/internal/streams"
	"fmt"

	"github.com/spf13/cobra"
)

func init() {
	var streamCmd = &cobra.Command{
		Use:   "stream",
		Short: "Manage named streams (like branches)",
		Long:  "Create, switch, list, merge, or cherry-pick commits in named streams.",
	}

	var createCmd = &cobra.Command{
		Use:   "create <name>",
		Short: "Create a new stream",
		RunE: func(cmd *cobra.Command, args []string) error {
			if len(args) < 1 {
				return fmt.Errorf("usage: evo stream create <name>")
			}
			rp, err := repo.FindRepoRoot(".")
			if err != nil {
				return err
			}
			if err := streams.CreateStream(rp, args[0]); err != nil {
				return err
			}
			fmt.Println("Created stream:", args[0])
			return nil
		},
	}

	var switchCmd = &cobra.Command{
		Use:   "switch <name>",
		Short: "Switch to another stream locally",
		RunE: func(cmd *cobra.Command, args []string) error {
			if len(args) < 1 {
				return fmt.Errorf("usage: evo stream switch <name>")
			}
			rp, err := repo.FindRepoRoot(".")
			if err != nil {
				return err
			}
			if err := streams.SwitchStream(rp, args[0]); err != nil {
				return err
			}
			fmt.Println("Switched to stream:", args[0])
			return nil
		},
	}

	var listCmd = &cobra.Command{
		Use:   "list",
		Short: "List named streams",
		RunE: func(cmd *cobra.Command, args []string) error {
			rp, err := repo.FindRepoRoot(".")
			if err != nil {
				return err
			}
			ss, err := streams.ListStreams(rp)
			if err != nil {
				return err
			}
			cur, _ := streams.CurrentStream(rp)
			for _, s := range ss {
				prefix := " "
				if s == cur {
					prefix = "* "
				}
				fmt.Println(prefix + s)
			}
			return nil
		},
	}

	var mergeCmd = &cobra.Command{
		Use:   "merge <source> <target>",
		Short: "Merge all commits from source stream into target stream",
		RunE: func(cmd *cobra.Command, args []string) error {
			if len(args) < 2 {
				return fmt.Errorf("usage: evo stream merge <source> <target>")
			}
			rp, err := repo.FindRepoRoot(".")
			if err != nil {
				return err
			}
			if err := streams.MergeStreams(rp, args[0], args[1]); err != nil {
				return err
			}
			fmt.Printf("Merged all missing commits from '%s' into '%s'\n", args[0], args[1])
			return nil
		},
	}

	var cherryPickCmd = &cobra.Command{
		Use:   "cherry-pick <commit-id> <target-stream>",
		Short: "Replicate only one commit's ops into the target stream",
		RunE: func(cmd *cobra.Command, args []string) error {
			if len(args) < 2 {
				return fmt.Errorf("usage: evo stream cherry-pick <commit-id> <target-stream>")
			}
			rp, err := repo.FindRepoRoot(".")
			if err != nil {
				return err
			}
			if err := streams.CherryPick(rp, args[0], args[1]); err != nil {
				return err
			}
			fmt.Printf("Cherry-picked commit %s into stream %s\n", args[0], args[1])
			return nil
		},
	}

	streamCmd.AddCommand(createCmd, switchCmd, listCmd, mergeCmd, cherryPickCmd)
	rootCmd.AddCommand(streamCmd)
}
================================================
FILE: cmd/evo/sync_cmd.go
================================================
package main

import (
	"evo/internal/repo"
	"fmt"

	"github.com/spf13/cobra"
)

func init() {
	var syncCmd = &cobra.Command{
		Use:   "sync <remote-url>",
		Short: "Synchronize CRDT logs with remote (not fully implemented)",
		Long: `Pull missing ops from remote for the current stream and push local ops
to the remote. Requires a future Evo server implementation for full functionality.`,
		RunE: func(cmd *cobra.Command, args []string) error {
			if len(args) < 1 {
				return fmt.Errorf("usage: evo sync <remote-url>")
			}
			remote := args[0]
			_, err := repo.FindRepoRoot(".")
			if err != nil {
				return err
			}
			fmt.Printf("Sync with %s is not yet implemented.\n", remote)
			return nil
		},
	}
	rootCmd.AddCommand(syncCmd)
}
================================================
FILE: go.mod
================================================
module evo

go 1.23.4

require (
	github.com/google/uuid v1.6.0
	github.com/pelletier/go-toml v1.9.5
	github.com/spf13/cobra v1.8.1
)

require (
	github.com/bmatcuk/doublestar/v4 v4.8.0 // indirect
	github.com/davecgh/go-spew v1.1.1 // indirect
	github.com/inconshreveable/mousetrap v1.1.0 // indirect
	github.com/pmezard/go-difflib v1.0.0 // indirect
	github.com/spf13/pflag v1.0.5 // indirect
	github.com/stretchr/testify v1.10.0 // indirect
	gopkg.in/yaml.v3 v3.0.1 // indirect
)
================================================
FILE: go.sum
================================================
github.com/bmatcuk/doublestar/v4 v4.8.0 h1:DSXtrypQddoug1459viM9X9D3dp1Z7993fw36I2kNcQ=
github.com/bmatcuk/doublestar/v4 v4.8.0/go.mod h1:xBQ8jztBU6kakFMg+8WGxn0c6z1fTSPVIjEY1Wr7jzc=
github.com/cpuguy83/go-md2man/v2 v2.0.4/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/pelletier/go-toml v1.9.5 h1:4yBQzkHv+7BHq2PQUZF3Mx0IYxG7LsP222s7Agd3ve8=
github.com/pelletier/go-toml v1.9.5/go.mod h1:u1nR/EPcESfeI/szUZKdtJ0xRNbUoANCkoOuaOx1Y+c=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/spf13/cobra v1.8.1 h1:e5/vxKd/rZsfSJMUX1agtjeTDf+qv1/JdBF8gg5k9ZM=
github.com/spf13/cobra v1.8.1/go.mod h1:wHxEcudfqmLYa8iTfL+OuZPbBZkmvliBWKIezN3kD9Y=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
================================================
FILE: internal/commits/commits.go
================================================
package commits

import (
	"crypto/sha256"
	"encoding/binary"
	"encoding/json"
	"evo/internal/crdt"
	"evo/internal/ops"
	"evo/internal/signing"
	"evo/internal/types"
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
	"sort"
	"strings"
	"time"

	"github.com/google/uuid"
)

// ExtendedOp includes oldContent for update ops
type ExtendedOp = types.ExtendedOp

// CreateCommit creates a new commit with the given operations
func CreateCommit(repoPath, stream, message, authorName, authorEmail string, ops []types.ExtendedOp, sign bool) (*types.Commit, error) {
	commit := &types.Commit{
		ID:          uuid.New().String(),
		Stream:      stream,
		Message:     message,
		AuthorName:  authorName,
		AuthorEmail: authorEmail,
		Timestamp:   time.Now().UTC(),
		Operations:  ops,
	}
	// Sign commit if requested
	if sign {
		sig, err := signing.SignCommit(commit, repoPath)
		if err != nil {
			return nil, fmt.Errorf("failed to sign commit: %w", err)
		}
		commit.Signature = sig
		// Verify signature immediately
		valid, err := signing.VerifyCommit(commit, repoPath)
		if err != nil {
			return nil, fmt.Errorf("failed to verify commit signature: %w", err)
		}
		if !valid {
			return nil, fmt.Errorf("commit signature verification failed")
		}
	}
	// Save commit
	if err := SaveCommit(repoPath, commit); err != nil {
		return nil, fmt.Errorf("failed to save commit: %w", err)
	}
	return commit, nil
}

// LoadCommit loads a commit from disk
func LoadCommit(repoPath, stream, commitID string) (*types.Commit, error) {
	commitPath := filepath.Join(repoPath, ".evo", "commits", stream, commitID+".bin")
	data, err := os.ReadFile(commitPath)
	if err != nil {
		return nil, fmt.Errorf("failed to read commit file: %w", err)
	}
	var commit types.Commit
	if err := json.Unmarshal(data, &commit); err != nil {
		return nil, fmt.Errorf("failed to unmarshal commit: %w", err)
	}
	// Verify signature if present
	if commit.Signature != "" {
		valid, err := signing.VerifyCommit(&commit, repoPath)
		if err != nil {
			return nil, fmt.Errorf("failed to verify commit signature: %w", err)
		}
		if !valid {
			return nil, fmt.Errorf("commit signature verification failed")
		}
	}
	return &commit, nil
}

// SaveCommit saves a commit to disk
func SaveCommit(repoPath string, commit *types.Commit) error {
	commitDir := filepath.Join(repoPath, ".evo", "commits", commit.Stream)
	if err := os.MkdirAll(commitDir, 0755); err != nil {
		return fmt.Errorf("failed to create commit directory: %w", err)
	}
	data, err := json.Marshal(commit)
	if err != nil {
		return fmt.Errorf("failed to marshal commit: %w", err)
	}
	commitPath := filepath.Join(commitDir, commit.ID+".bin")
	if err := os.WriteFile(commitPath, data, 0644); err != nil {
return fmt.Errorf("failed to write commit file: %w", err)
}
return nil
}
// gatherNewOps => find ops not in prior commits, augment 'update' ops with oldContent
func gatherNewOps(repoPath, stream string) ([]ExtendedOp, error) {
all, err := ListCommits(repoPath, stream)
if err != nil {
return nil, err
}
known := make(map[string]bool)
for _, cc := range all {
for _, eop := range cc.Operations {
known[opKey(eop.Op)] = true
}
}
// load all current ops
var allOps []ExtendedOp
opsDir := filepath.Join(repoPath, ".evo", "ops", stream)
if err := filepath.WalkDir(opsDir, func(path string, d fs.DirEntry, err error) error {
if err != nil {
return err
}
if !d.IsDir() && filepath.Ext(path) == ".bin" {
f, err := os.Open(path)
if err != nil {
return err
}
// Close eagerly: a deferred Close inside WalkDir would hold every
// file open until the walk finishes.
var size int64
if err := binary.Read(f, binary.LittleEndian, &size); err != nil {
f.Close()
return err
}
data := make([]byte, size)
// binary.Read on a []byte reads the full buffer; a bare f.Read may
// return short without error.
if err := binary.Read(f, binary.LittleEndian, data); err != nil {
f.Close()
return err
}
f.Close()
var op crdt.Operation
if err := json.Unmarshal(data, &op); err != nil {
return err
}
allOps = append(allOps, ExtendedOp{Op: op})
}
return nil
}); err != nil && !os.IsNotExist(err) {
return nil, err
}
// build doc states to find old text
docStates := buildDocStates(repoPath, stream)
var newEops []ExtendedOp
for _, op := range allOps {
k := opKey(op.Op)
if !known[k] {
var old string
if op.Op.Type == crdt.OpUpdate {
old = findOldContent(docStates, op.Op.LineID)
}
newEops = append(newEops, ExtendedOp{
Op: op.Op,
OldContent: old,
})
}
}
sort.Slice(newEops, func(i, j int) bool {
return newEops[i].Op.LessThan(&newEops[j].Op)
})
return newEops, nil
}
func opKey(op crdt.Operation) string {
return fmt.Sprintf("%d_%s_%s", op.Lamport, op.NodeID.String(), op.LineID.String())
}
func buildDocStates(repoPath, stream string) map[uuid.UUID]map[uuid.UUID]string {
res := make(map[uuid.UUID]map[uuid.UUID]string)
root := filepath.Join(repoPath, ".evo", "ops", stream)
if err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
if err != nil {
return err
}
if !d.IsDir() && strings.HasSuffix(path, ".bin") {
fn := filepath.Base(path)
fidStr := strings.TrimSuffix(fn, ".bin")
fid, err := uuid.Parse(fidStr)
if err == nil {
ops2, _ := ops.LoadAllOps(path)
doc := crdt.NewRGA()
for _, op := range ops2 {
doc.Apply(op)
}
res[fid] = doc.LineMap()
}
}
return nil
}); err != nil && !os.IsNotExist(err) {
return nil
}
return res
}
func findOldContent(ds map[uuid.UUID]map[uuid.UUID]string, lineID uuid.UUID) string {
for _, linesMap := range ds {
if txt, ok := linesMap[lineID]; ok {
return txt
}
}
return ""
}
// ListCommits returns all commits in a stream, sorted by timestamp
func ListCommits(repoPath, stream string) ([]types.Commit, error) {
commitDir := filepath.Join(repoPath, ".evo", "commits", stream)
entries, err := os.ReadDir(commitDir)
if err != nil {
if os.IsNotExist(err) {
return nil, nil
}
return nil, fmt.Errorf("failed to read commit directory: %w", err)
}
var commits []types.Commit
for _, entry := range entries {
if !entry.IsDir() && strings.HasSuffix(entry.Name(), ".bin") {
commit, err := LoadCommit(repoPath, stream, strings.TrimSuffix(entry.Name(), ".bin"))
if err != nil {
return nil, fmt.Errorf("failed to load commit %s: %w", entry.Name(), err)
}
commits = append(commits, *commit)
}
}
// Sort by timestamp
sort.Slice(commits, func(i, j int) bool {
return commits[i].Timestamp.Before(commits[j].Timestamp)
})
return commits, nil
}
// saveCommit writes c in the size-prefixed binary format; it delegates to
// SaveCommitFile rather than duplicating the framing logic.
func saveCommit(repoPath string, c *types.Commit) error {
return SaveCommitFile(filepath.Join(repoPath, ".evo", "commits", c.Stream), c)
}
// SaveCommitFile writes c to dir as a 4-byte big-endian size prefix
// followed by the JSON payload.
func SaveCommitFile(dir string, c *types.Commit) error {
if err := os.MkdirAll(dir, 0755); err != nil {
return err
}
fp := filepath.Join(dir, c.ID+".bin")
b, err := json.Marshal(c)
if err != nil {
return err
}
sz := make([]byte, 4)
binary.BigEndian.PutUint32(sz, uint32(len(b)))
f, err := os.Create(fp)
if err != nil {
return err
}
defer f.Close()
if _, err := f.Write(sz); err != nil {
return err
}
if _, err := f.Write(b); err != nil {
return err
}
return nil
}
func loadCommit(fp string) (*types.Commit, error) {
f, err := os.Open(fp)
if err != nil {
return nil, err
}
defer f.Close()
szBuf := make([]byte, 4)
// binary.Read performs a full read; a bare f.Read may return short.
if err := binary.Read(f, binary.BigEndian, szBuf); err != nil {
return nil, err
}
sz := binary.BigEndian.Uint32(szBuf)
data := make([]byte, sz)
if err := binary.Read(f, binary.BigEndian, data); err != nil {
return nil, err
}
var c types.Commit
if err := json.Unmarshal(data, &c); err != nil {
return nil, err
}
return &c, nil
}
// RevertCommit creates a new commit that reverts the changes in the specified commit
func RevertCommit(repoPath, stream, commitID string) (*types.Commit, error) {
target, err := LoadCommit(repoPath, stream, commitID)
if err != nil {
return nil, fmt.Errorf("failed to load commit %s: %w", commitID, err)
}
// Generate inverse operations
inverted, err := invertOps(target.Operations)
if err != nil {
return nil, fmt.Errorf("failed to invert operations: %w", err)
}
// Create revert commit
revert := &types.Commit{
ID: uuid.New().String(),
Stream: stream,
Message: fmt.Sprintf("Revert commit %s", commitID),
AuthorName: target.AuthorName,
AuthorEmail: target.AuthorEmail,
Timestamp: time.Now().UTC(),
Operations: inverted,
}
// Save revert commit
if err := SaveCommit(repoPath, revert); err != nil {
return nil, fmt.Errorf("failed to save revert commit: %w", err)
}
return revert, nil
}
// invertOps generates inverse operations for a commit
func invertOps(ops []types.ExtendedOp) ([]types.ExtendedOp, error) {
var inverted []types.ExtendedOp
// Process operations in reverse order
for i := len(ops) - 1; i >= 0; i-- {
op := ops[i]
switch op.Op.Type {
case crdt.OpInsert:
// Invert insert -> delete, carrying the inserted content so the
// revert commit can itself be reverted.
inverted = append(inverted, types.ExtendedOp{
Op: crdt.Operation{
Type: crdt.OpDelete,
LineID: op.Op.LineID,
Content: op.Op.Content,
Timestamp: time.Now(),
},
})
case crdt.OpDelete:
// Invert delete -> insert with original content
if op.Op.Content == "" {
return nil, fmt.Errorf("cannot revert delete operation: missing original content")
}
inverted = append(inverted, types.ExtendedOp{
Op: crdt.Operation{
Type: crdt.OpInsert,
LineID: op.Op.LineID,
Content: op.Op.Content,
Timestamp: time.Now(),
},
})
case crdt.OpUpdate:
// Invert update -> update with old content
if op.OldContent == "" {
return nil, fmt.Errorf("cannot revert update operation: missing old content")
}
inverted = append(inverted, types.ExtendedOp{
Op: crdt.Operation{
Type: crdt.OpUpdate,
LineID: op.Op.LineID,
Content: op.OldContent,
Timestamp: time.Now(),
},
OldContent: op.Op.Content,
})
}
}
return inverted, nil
}
func newLamport() uint64 {
return uint64(time.Now().UnixNano())
}
func applyOps(repoPath, stream string, eops []ExtendedOp) error {
// for each extended op, append to .evo/ops/<stream>/<fileID>.bin
opsRoot := filepath.Join(repoPath, ".evo", "ops", stream)
if err := os.MkdirAll(opsRoot, 0755); err != nil {
return err
}
for _, eop := range eops {
fid := eop.Op.FileID.String()
binFile := filepath.Join(opsRoot, fid+".bin")
if err := ops.AppendOp(binFile, eop.Op); err != nil {
return err
}
}
return nil
}
// For signing
func CommitHashString(c *types.Commit) string {
// stable representation => ID + stream + message + etc
h := sha256.New()
h.Write([]byte(c.ID))
h.Write([]byte(c.Stream))
h.Write([]byte(c.Message))
h.Write([]byte(c.AuthorName))
h.Write([]byte(c.AuthorEmail))
h.Write([]byte(c.Timestamp.String()))
for _, eop := range c.Operations {
// incorporate type, lamport, node, lineID, content, oldContent;
// omitting the op type would let a signed commit's ops be retyped
// without invalidating the hash
h.Write([]byte(fmt.Sprintf("%d_%d_%s_%s_%s_old=%s",
eop.Op.Type, eop.Op.Lamport, eop.Op.NodeID, eop.Op.LineID, eop.Op.Content, eop.OldContent)))
}
return fmt.Sprintf("%x", h.Sum(nil))
}
================================================
FILE: internal/commits/commits_test.go
================================================
package commits
import (
"evo/internal/config"
"evo/internal/crdt"
"evo/internal/signing"
"evo/internal/types"
"path/filepath"
"testing"
)
func TestRevertCommit(t *testing.T) {
// Create temp directory for test
testDir := t.TempDir()
t.Run("Revert_Insert", func(t *testing.T) {
// Create original commit with insert operation
ops := []types.ExtendedOp{
{Op: crdt.Operation{Type: crdt.OpInsert, Content: "test"}},
}
commit, err := CreateCommit(testDir, "main", "Test commit", "Test User", "test@example.com", ops, false)
if err != nil {
t.Fatalf("Failed to create commit: %v", err)
}
// Revert the commit
revertCommit, err := RevertCommit(testDir, "main", commit.ID)
if err != nil {
t.Fatalf("Failed to revert commit: %v", err)
}
// Verify revert operations
if len(revertCommit.Operations) != len(commit.Operations) {
t.Errorf("Expected %d operations, got %d", len(commit.Operations), len(revertCommit.Operations))
}
// Check that insert was reverted to delete
if revertCommit.Operations[0].Op.Type != crdt.OpDelete {
t.Error("Expected delete operation in revert commit")
}
})
t.Run("Revert_Delete", func(t *testing.T) {
// Create original commit with delete operation
ops := []types.ExtendedOp{
{Op: crdt.Operation{Type: crdt.OpDelete, Content: "test"}},
}
commit, err := CreateCommit(testDir, "main", "Test commit", "Test User", "test@example.com", ops, false)
if err != nil {
t.Fatalf("Failed to create commit: %v", err)
}
// Revert the commit
revertCommit, err := RevertCommit(testDir, "main", commit.ID)
if err != nil {
t.Fatalf("Failed to revert commit: %v", err)
}
// Verify revert operations
if len(revertCommit.Operations) != len(commit.Operations) {
t.Errorf("Expected %d operations, got %d", len(commit.Operations), len(revertCommit.Operations))
}
// Check that delete was reverted to insert with original content
if revertCommit.Operations[0].Op.Type != crdt.OpInsert {
t.Error("Expected insert operation in revert commit")
}
if revertCommit.Operations[0].Op.Content != commit.Operations[0].Op.Content {
t.Error("Content not preserved in revert operation")
}
})
t.Run("Revert_Update", func(t *testing.T) {
// Create original commit with update operation
oldContent := "old"
newContent := "new"
ops := []types.ExtendedOp{
{
Op: crdt.Operation{
Type: crdt.OpUpdate,
Content: newContent,
},
OldContent: oldContent,
},
}
commit, err := CreateCommit(testDir, "main", "Test commit", "Test User", "test@example.com", ops, false)
if err != nil {
t.Fatalf("Failed to create commit: %v", err)
}
// Revert the commit
revertCommit, err := RevertCommit(testDir, "main", commit.ID)
if err != nil {
t.Fatalf("Failed to revert commit: %v", err)
}
// Verify revert operations
if len(revertCommit.Operations) != len(commit.Operations) {
t.Errorf("Expected %d operations, got %d", len(commit.Operations), len(revertCommit.Operations))
}
// Check that update was reverted with old content
if revertCommit.Operations[0].Op.Type != crdt.OpUpdate {
t.Error("Expected update operation in revert commit")
}
if revertCommit.Operations[0].Op.Content != oldContent {
t.Error("Old content not restored in revert operation")
}
})
t.Run("Revert_Multiple_Operations", func(t *testing.T) {
// Create original commit with multiple operations
ops := []types.ExtendedOp{
{Op: crdt.Operation{Type: crdt.OpInsert, Content: "test1"}},
{Op: crdt.Operation{Type: crdt.OpInsert, Content: "test2"}},
}
commit, err := CreateCommit(testDir, "main", "Test commit", "Test User", "test@example.com", ops, false)
if err != nil {
t.Fatalf("Failed to create commit: %v", err)
}
// Revert the commit
revertCommit, err := RevertCommit(testDir, "main", commit.ID)
if err != nil {
t.Fatalf("Failed to revert commit: %v", err)
}
// Verify revert operations
if len(revertCommit.Operations) != len(commit.Operations) {
t.Errorf("Expected %d operations, got %d", len(commit.Operations), len(revertCommit.Operations))
}
// Check that operations were reverted in reverse order
for i := 0; i < len(revertCommit.Operations); i++ {
if revertCommit.Operations[i].Op.Type != crdt.OpDelete {
t.Error("Expected delete operation in revert commit")
}
}
})
}
func TestSignedCommits(t *testing.T) {
// Create temp directory for test
tmpDir := t.TempDir()
keyPath := filepath.Join(tmpDir, "signing_key")
// Set up config for test
err := config.SetConfigValue(tmpDir, "signing.keyPath", keyPath)
if err != nil {
t.Fatalf("Failed to set config value: %v", err)
}
// Generate key pair for signing
err = signing.GenerateKeyPair(tmpDir)
if err != nil {
t.Fatalf("Failed to generate key pair: %v", err)
}
t.Run("Create_Signed_Commit", func(t *testing.T) {
ops := []types.ExtendedOp{
{Op: crdt.Operation{Type: crdt.OpInsert, Content: "test"}},
}
commit, err := CreateCommit(tmpDir, "main", "Test commit", "Test User", "test@example.com", ops, true)
if err != nil {
t.Fatalf("Failed to create signed commit: %v", err)
}
if commit.Signature == "" {
t.Error("Commit not signed")
}
// Load and verify commit
loaded, err := LoadCommit(tmpDir, "main", commit.ID)
if err != nil {
t.Fatalf("Failed to load commit: %v", err)
}
if loaded.Signature != commit.Signature {
t.Error("Loaded commit signature does not match")
}
})
t.Run("Create_Unsigned_Commit", func(t *testing.T) {
ops := []types.ExtendedOp{
{Op: crdt.Operation{Type: crdt.OpInsert, Content: "test"}},
}
commit, err := CreateCommit(tmpDir, "main", "Test commit", "Test User", "test@example.com", ops, false)
if err != nil {
t.Fatalf("Failed to create unsigned commit: %v", err)
}
if commit.Signature != "" {
t.Error("Unsigned commit has signature")
}
// List commits and verify both signed and unsigned are present
commits, err := ListCommits(tmpDir, "main")
if err != nil {
t.Fatalf("Failed to list commits: %v", err)
}
if len(commits) != 2 {
t.Errorf("Expected 2 commits, got %d", len(commits))
}
})
t.Run("Invalid_Signature", func(t *testing.T) {
commit := &types.Commit{
ID: "test",
Stream: "main",
Message: "Test commit",
Signature: "invalid",
}
// Save commit with invalid signature
err := SaveCommit(tmpDir, commit)
if err != nil {
t.Fatalf("Failed to save commit: %v", err)
}
// Try to load commit - should fail verification
_, err = LoadCommit(tmpDir, "main", commit.ID)
if err == nil {
t.Error("Expected error loading commit with invalid signature")
}
})
}
================================================
FILE: internal/config/config.go
================================================
package config
import (
"encoding/json"
"fmt"
"os"
"path/filepath"
"github.com/pelletier/go-toml"
)
// Config keys include: user.name, user.email, signing.keyPath,
// files.largeThreshold, and verifySignatures.
func globalConfigPath() (string, error) {
home, err := os.UserHomeDir()
if err != nil {
return "", err
}
cfgDir := filepath.Join(home, ".config", "evo")
if err := os.MkdirAll(cfgDir, 0755); err != nil {
return "", err
}
return filepath.Join(cfgDir, "config.toml"), nil
}
func repoConfigPath(repoPath string) string {
return filepath.Join(repoPath, ".evo", "config", "config.toml")
}
func loadToml(path string) (*toml.Tree, error) {
if _, err := os.Stat(path); os.IsNotExist(err) {
tree, err := toml.TreeFromMap(map[string]interface{}{})
if err != nil {
return nil, fmt.Errorf("failed to create empty config: %w", err)
}
return tree, nil
}
b, err := os.ReadFile(path)
if err != nil {
return nil, err
}
return toml.LoadBytes(b)
}
func saveToml(tree *toml.Tree, path string) error {
return os.WriteFile(path, []byte(tree.String()), 0644)
}
// SetGlobalConfigValue sets key=val in ~/.config/evo/config.toml
func SetGlobalConfigValue(key, val string) error {
gp, err := globalConfigPath()
if err != nil {
return err
}
tree, err := loadToml(gp)
if err != nil {
return err
}
tree.Set(key, val)
return saveToml(tree, gp)
}
// SetRepoConfigValue sets key=val in .evo/config/config.toml
func SetRepoConfigValue(repoPath, key, val string) error {
rp := repoConfigPath(repoPath)
tree, err := loadToml(rp)
if err != nil {
return err
}
tree.Set(key, val)
return saveToml(tree, rp)
}
// GetConfigValue retrieves a value from the config file
func GetConfigValue(repoPath, key string) (string, error) {
config, err := loadConfig(repoPath)
if err != nil {
return "", err
}
value, ok := config[key]
if !ok {
return "", fmt.Errorf("no config value for %s", key)
}
return value, nil
}
// SetConfigValue stores a value in the config file
func SetConfigValue(repoPath, key, value string) error {
config, err := loadConfig(repoPath)
if err != nil {
// If config doesn't exist, create new map
config = make(map[string]string)
}
config[key] = value
// Ensure .evo directory exists
configDir := filepath.Join(repoPath, ".evo")
if err := os.MkdirAll(configDir, 0755); err != nil {
return fmt.Errorf("failed to create config directory: %w", err)
}
// Write updated config
configPath := filepath.Join(configDir, "config.json")
data, err := json.MarshalIndent(config, "", " ")
if err != nil {
return fmt.Errorf("failed to marshal config: %w", err)
}
if err := os.WriteFile(configPath, data, 0644); err != nil {
return fmt.Errorf("failed to write config file: %w", err)
}
return nil
}
func loadConfig(repoPath string) (map[string]string, error) {
configPath := filepath.Join(repoPath, ".evo", "config.json")
data, err := os.ReadFile(configPath)
if err != nil {
if os.IsNotExist(err) {
return nil, fmt.Errorf("config file does not exist: %s", configPath)
}
return nil, fmt.Errorf("failed to read config file: %w", err)
}
var config map[string]string
if err := json.Unmarshal(data, &config); err != nil {
return nil, fmt.Errorf("failed to parse config file: %w", err)
}
return config, nil
}
================================================
FILE: internal/crdt/compact/compact.go
================================================
package compact
import (
"evo/internal/crdt"
"sort"
"time"
"github.com/google/uuid"
)
// CompactOperations compacts a list of operations by:
// 1. Pruning old tombstones
// 2. Collapsing multiple operations on the same line into a single op
// 3. Removing redundant operations that don't affect the final state
func CompactOperations(ops []crdt.Operation, cfg *Config) []crdt.Operation {
if len(ops) < cfg.MaxOps {
return ops
}
// Build a map of lineID to its operations
lineOps := make(map[uuid.UUID][]crdt.Operation)
for _, op := range ops {
lineOps[op.LineID] = append(lineOps[op.LineID], op)
}
var compacted []crdt.Operation
now := time.Now()
for _, lineHistory := range lineOps {
// Sort operations by lamport timestamp
sortOps(lineHistory)
// Keep only the latest operation for each line
finalOp := lineHistory[len(lineHistory)-1]
// Skip old tombstones
if finalOp.Type == crdt.OpDelete {
age := now.Sub(finalOp.Timestamp)
if age > cfg.TombstoneTTL {
continue
}
}
compacted = append(compacted, finalOp)
}
// Sort compacted operations
sortOps(compacted)
// If compaction would leave fewer than MinOpsToKeep operations, skip
// it and keep the full log: truncating ops here would silently discard
// recent history.
if len(compacted) < cfg.MinOpsToKeep {
return ops
}
return compacted
}
// sortOps sorts operations by lamport timestamp and nodeID
func sortOps(ops []crdt.Operation) {
sort.Slice(ops, func(i, j int) bool {
return ops[i].LessThan(&ops[j])
})
}
// CompactRGA creates a new RGA with compacted operations
func CompactRGA(rga *crdt.RGA, cfg *Config) *crdt.RGA {
ops := rga.GetOperations()
compacted := CompactOperations(ops, cfg)
newRGA := crdt.NewRGA()
for _, op := range compacted {
if err := newRGA.Apply(op); err != nil {
// Log error but continue
continue
}
}
return newRGA
}
================================================
FILE: internal/crdt/compact/config.go
================================================
package compact
import "time"
// Config defines thresholds for when to perform compaction
type Config struct {
// Maximum number of operations before triggering compaction
MaxOps int
// Maximum age of tombstones before pruning
TombstoneTTL time.Duration
// Minimum number of operations to keep after compaction
MinOpsToKeep int
// How often to run compaction
CompactionInterval time.Duration
}
// DefaultConfig returns sensible defaults for compaction
func DefaultConfig() *Config {
return &Config{
MaxOps: 10000, // Compact when we have more than 10k ops
TombstoneTTL: 7 * 24 * time.Hour, // Keep tombstones for 1 week
MinOpsToKeep: 1000, // Keep at least 1k ops after compaction
CompactionInterval: 1 * time.Hour, // Run compaction every hour
}
}
================================================
FILE: internal/crdt/compact/service.go
================================================
package compact
import (
"encoding/binary"
"encoding/json"
"evo/internal/crdt"
"os"
"path/filepath"
"strings"
"sync"
"time"
)
// CompactionService manages operation compaction and tombstone pruning
type CompactionService struct {
repoPath string
config *Config
mu sync.RWMutex
done chan struct{}
}
// NewCompactionService creates a new compaction service
func NewCompactionService(repoPath string, config *Config) *CompactionService {
if config == nil {
config = DefaultConfig()
}
return &CompactionService{
repoPath: repoPath,
config: config,
done: make(chan struct{}),
}
}
// Start begins the compaction service
func (s *CompactionService) Start() error {
s.mu.Lock()
defer s.mu.Unlock()
// Create ticker for periodic compaction
ticker := time.NewTicker(s.config.CompactionInterval)
// Start background goroutine
go func() {
for {
select {
case <-ticker.C:
if err := s.CompactOperations(); err != nil {
// Log error but continue running
continue
}
if err := s.PruneTombstones(); err != nil {
// Log error but continue running
continue
}
case <-s.done:
ticker.Stop()
return
}
}
}()
return nil
}
// Stop stops the compaction service
func (s *CompactionService) Stop() {
close(s.done)
}
// CompactOperations compacts operations by combining sequential operations
func (s *CompactionService) CompactOperations() error {
s.mu.Lock()
defer s.mu.Unlock()
opsDir := filepath.Join(s.repoPath, ".evo", "ops")
streams, err := os.ReadDir(opsDir)
if err != nil {
return err
}
for _, stream := range streams {
if !stream.IsDir() {
continue
}
streamDir := filepath.Join(opsDir, stream.Name())
files, err := os.ReadDir(streamDir)
if err != nil {
continue
}
var ops []crdt.Operation
for _, f := range files {
if !strings.HasSuffix(f.Name(), ".bin") {
continue
}
data, err := os.ReadFile(filepath.Join(streamDir, f.Name()))
if err != nil {
continue
}
// Read size prefix
if len(data) < 4 {
continue
}
size := binary.BigEndian.Uint32(data[:4])
if len(data) < int(4+size) {
continue
}
opData := data[4 : 4+size]
var op crdt.Operation
if err := json.Unmarshal(opData, &op); err != nil {
continue
}
ops = append(ops, op)
}
if len(ops) < s.config.MaxOps {
continue
}
// Combine sequential operations on the same line, keeping the latest
// content, lamport, and timestamp. An explicit index loop is required:
// removing elements inside a range loop skips entries, and decrementing
// the range variable has no effect.
for i := 1; i < len(ops); {
if ops[i-1].LineID == ops[i].LineID {
ops[i-1].Content = ops[i].Content
ops[i-1].Lamport = ops[i].Lamport
ops[i-1].Timestamp = ops[i].Timestamp
ops = append(ops[:i], ops[i+1:]...)
} else {
i++
}
}
// Write compacted operations back
compacted := make([]crdt.Operation, 0, len(ops))
for _, op := range ops {
if op.Type != crdt.OpDelete || time.Since(op.Timestamp) <= s.config.TombstoneTTL {
compacted = append(compacted, op)
}
}
// Save compacted operations
for _, op := range compacted {
data, err := json.Marshal(op)
if err != nil {
continue
}
// Write size prefix followed by data
opPath := filepath.Join(streamDir, op.LineID.String()+".bin")
f, err := os.Create(opPath)
if err != nil {
continue
}
// Write 4-byte size prefix
size := uint32(len(data))
var sizeBuf [4]byte
binary.BigEndian.PutUint32(sizeBuf[:], size)
if _, err := f.Write(sizeBuf[:]); err != nil {
f.Close()
continue
}
// Write operation data
if _, err := f.Write(data); err != nil {
f.Close()
continue
}
f.Close()
}
// Remove old operations
for _, op := range ops {
found := false
for _, c := range compacted {
if c.LineID == op.LineID {
found = true
break
}
}
if !found {
os.Remove(filepath.Join(streamDir, op.LineID.String()+".bin"))
}
}
}
return nil
}
// PruneTombstones removes old tombstones
func (s *CompactionService) PruneTombstones() error {
s.mu.Lock()
defer s.mu.Unlock()
opsDir := filepath.Join(s.repoPath, ".evo", "ops")
streams, err := os.ReadDir(opsDir)
if err != nil {
return err
}
cutoff := time.Now().Add(-s.config.TombstoneTTL)
for _, stream := range streams {
if !stream.IsDir() {
continue
}
streamDir := filepath.Join(opsDir, stream.Name())
files, err := os.ReadDir(streamDir)
if err != nil {
continue
}
var ops []crdt.Operation
var filesToRemove []string
// Read all operations in this stream
for _, f := range files {
if !strings.HasSuffix(f.Name(), ".bin") {
continue
}
data, err := os.ReadFile(filepath.Join(streamDir, f.Name()))
if err != nil {
continue
}
// Read size prefix
if len(data) < 4 {
continue
}
size := binary.BigEndian.Uint32(data[:4])
if len(data) < int(4+size) {
continue
}
opData := data[4 : 4+size]
var op crdt.Operation
if err := json.Unmarshal(opData, &op); err != nil {
continue
}
// Keep non-delete operations and recent tombstones
if op.Type != crdt.OpDelete || op.Timestamp.After(cutoff) {
ops = append(ops, op)
} else {
filesToRemove = append(filesToRemove, f.Name())
}
}
// Remove old tombstones
for _, name := range filesToRemove {
if err := os.Remove(filepath.Join(streamDir, name)); err != nil && !os.IsNotExist(err) {
return err
}
}
// Write remaining operations back
for _, op := range ops {
data, err := json.Marshal(op)
if err != nil {
return err
}
// Write size prefix followed by data
opPath := filepath.Join(streamDir, op.LineID.String()+".bin")
tempPath := opPath + ".tmp"
f, err := os.Create(tempPath)
if err != nil {
return err
}
// Write 4-byte size prefix
size := uint32(len(data))
var sizeBuf [4]byte
binary.BigEndian.PutUint32(sizeBuf[:], size)
if _, err := f.Write(sizeBuf[:]); err != nil {
f.Close()
os.Remove(tempPath)
return err
}
// Write operation data
if _, err := f.Write(data); err != nil {
f.Close()
os.Remove(tempPath)
return err
}
f.Close()
// Atomically replace the old file with the new one
if err := os.Rename(tempPath, opPath); err != nil {
os.Remove(tempPath)
return err
}
}
// Remove any remaining files that weren't rewritten
for _, f := range files {
if !strings.HasSuffix(f.Name(), ".bin") {
continue
}
found := false
for _, op := range ops {
if f.Name() == op.LineID.String()+".bin" {
found = true
break
}
}
if !found {
if err := os.Remove(filepath.Join(streamDir, f.Name())); err != nil && !os.IsNotExist(err) {
return err
}
}
}
}
return nil
}
================================================
FILE: internal/crdt/compact/service_test.go
================================================
package compact
import (
"encoding/binary"
"encoding/json"
"evo/internal/crdt"
"os"
"path/filepath"
"testing"
"time"
"github.com/google/uuid"
)
func TestCompactionService(t *testing.T) {
// Create temp directory for testing
// t.TempDir handles creation and cleanup automatically
tmpDir := t.TempDir()
// Create test repository structure
repoPath := filepath.Join(tmpDir, "test-repo")
if err := os.MkdirAll(filepath.Join(repoPath, ".evo", "ops"), 0755); err != nil {
t.Fatal(err)
}
t.Run("Service Lifecycle", func(t *testing.T) {
config := &Config{
CompactionInterval: 100 * time.Millisecond,
TombstoneTTL: 1 * time.Hour,
MinOpsToKeep: 10,
MaxOps: 100,
}
service := NewCompactionService(repoPath, config)
if err := service.Start(); err != nil {
t.Fatal(err)
}
// Let it run for a bit
time.Sleep(200 * time.Millisecond)
service.Stop()
})
t.Run("Operation Compaction", func(t *testing.T) {
fileID := uuid.New()
lineID := uuid.New()
nodeID := uuid.New()
// Create test operations
ops := []crdt.Operation{
{
Type: crdt.OpUpdate,
Lamport: 1,
NodeID: nodeID,
FileID: fileID,
LineID: lineID,
Content: "value1",
Stream: "stream1",
Timestamp: time.Now().Add(-2 * time.Hour),
Vector: []int64{1, 0, 0},
},
{
Type: crdt.OpUpdate,
Lamport: 2,
NodeID: nodeID,
FileID: fileID,
LineID: lineID,
Content: "value2",
Stream: "stream1",
Timestamp: time.Now().Add(-1 * time.Hour),
Vector: []int64{1, 1, 0},
},
{
Type: crdt.OpDelete,
Lamport: 3,
NodeID: nodeID,
FileID: fileID,
LineID: lineID,
Stream: "stream1",
Timestamp: time.Now(),
Vector: []int64{1, 1, 1},
},
}
// Write operations to per-line files in a stream directory, using the
// 4-byte big-endian size-prefixed format CompactOperations reads.
// (The original fixture wrote unprefixed records to a file outside any
// stream directory, which the service never scans.)
streamDir := filepath.Join(repoPath, ".evo", "ops", "compact-stream")
if err := os.MkdirAll(streamDir, 0755); err != nil {
t.Fatal(err)
}
for _, op := range ops {
data, err := json.Marshal(op)
if err != nil {
t.Fatal(err)
}
var sizeBuf [4]byte
binary.BigEndian.PutUint32(sizeBuf[:], uint32(len(data)))
if err := os.WriteFile(filepath.Join(streamDir, op.LineID.String()+".bin"), append(sizeBuf[:], data...), 0644); err != nil {
t.Fatal(err)
}
}
// Run compaction
config := &Config{
CompactionInterval: 100 * time.Millisecond,
TombstoneTTL: 30 * time.Minute,
MinOpsToKeep: 1,
MaxOps: 2,
}
service := NewCompactionService(repoPath, config)
if err := service.CompactOperations(); err != nil {
t.Fatal(err)
}
// Verify results
// TODO: Add verification logic
})
t.Run("Tombstone Pruning", func(t *testing.T) {
fileID := uuid.New()
lineID := uuid.New()
nodeID := uuid.New()
// Create test operations including a tombstone
ops := []crdt.Operation{
{
Type: crdt.OpUpdate,
Lamport: 1,
NodeID: nodeID,
FileID: fileID,
LineID: lineID,
Content: "value1",
Stream: "stream1",
Timestamp: time.Now(),
Vector: []int64{1, 0, 0},
},
{
Type: crdt.OpDelete,
Lamport: 2,
NodeID: nodeID,
FileID: fileID,
LineID: uuid.New(), // Use a different LineID for the tombstone
Stream: "stream1",
Timestamp: time.Now().Add(-2 * time.Hour), // Old tombstone
Vector: []int64{1, 1, 0},
},
}
// Write operations to disk
opsDir := filepath.Join(repoPath, ".evo", "ops")
streamDir := filepath.Join(opsDir, "stream1")
if err := os.MkdirAll(streamDir, 0755); err != nil {
t.Fatal(err)
}
for _, op := range ops {
data, err := json.Marshal(op)
if err != nil {
t.Fatal(err)
}
// Write size prefix followed by data
opFile := filepath.Join(streamDir, op.LineID.String()+".bin")
f, err := os.Create(opFile)
if err != nil {
t.Fatal(err)
}
// Write 4-byte size prefix
size := uint32(len(data))
var sizeBuf [4]byte
binary.BigEndian.PutUint32(sizeBuf[:], size)
if _, err := f.Write(sizeBuf[:]); err != nil {
f.Close()
t.Fatal(err)
}
// Write operation data
if _, err := f.Write(data); err != nil {
f.Close()
t.Fatal(err)
}
f.Close()
}
// Create and run compaction service
config := &Config{
CompactionInterval: 1 * time.Hour,
TombstoneTTL: 1 * time.Hour,
MinOpsToKeep: 1,
MaxOps: 10,
}
service := NewCompactionService(repoPath, config)
if err := service.PruneTombstones(); err != nil {
t.Fatal(err)
}
// Check that old tombstone was removed
files, err := os.ReadDir(streamDir)
if err != nil {
t.Fatal(err)
}
if len(files) != 1 {
t.Errorf("Expected 1 operation after pruning, got %d", len(files))
}
// The remaining operation should be the update
for _, f := range files {
data, err := os.ReadFile(filepath.Join(streamDir, f.Name()))
if err != nil {
t.Fatal(err)
}
// Read size prefix
if len(data) < 4 {
t.Fatal("Invalid operation file: too short")
}
size := binary.BigEndian.Uint32(data[:4])
if len(data) < int(4+size) {
t.Fatalf("Invalid operation file: expected %d bytes after size prefix, got %d", size, len(data)-4)
}
opData := data[4 : 4+size]
var op crdt.Operation
if err := json.Unmarshal(opData, &op); err != nil {
t.Fatal(err)
}
if op.Type == crdt.OpDelete {
t.Error("Expected tombstone to be pruned")
}
}
})
}
func TestCompactionConfig(t *testing.T) {
t.Run("Default Config", func(t *testing.T) {
cfg := DefaultConfig()
if cfg.MaxOps <= cfg.MinOpsToKeep {
t.Error("MaxOps should be greater than MinOpsToKeep")
}
if cfg.TombstoneTTL <= 0 {
t.Error("TombstoneTTL should be positive")
}
})
t.Run("Custom Config", func(t *testing.T) {
cfg := &Config{
MaxOps: 5000,
MinOpsToKeep: 500,
TombstoneTTL: 48 * time.Hour,
CompactionInterval: time.Hour,
}
service := NewCompactionService("test-path", cfg)
if service.config.MaxOps != 5000 {
t.Error("Failed to set custom MaxOps")
}
if service.config.MinOpsToKeep != 500 {
t.Error("Failed to set custom MinOpsToKeep")
}
if service.config.TombstoneTTL != 48*time.Hour {
t.Error("Failed to set custom TombstoneTTL")
}
})
}
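The tests above hand-roll the same on-disk record layout twice: a 4-byte big-endian length prefix followed by a JSON payload. A stdlib-only sketch of that framing (the names `frame` and `unframe` are illustrative helpers, not part of the codebase):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// frame prepends a 4-byte big-endian length to a payload, matching the
// record layout the compaction tests write to disk.
func frame(payload []byte) []byte {
	buf := make([]byte, 4+len(payload))
	binary.BigEndian.PutUint32(buf[:4], uint32(len(payload)))
	copy(buf[4:], payload)
	return buf
}

// unframe reverses frame, validating the length prefix the same way the
// "Tombstone Pruning" test does when reading records back.
func unframe(data []byte) ([]byte, error) {
	if len(data) < 4 {
		return nil, fmt.Errorf("record too short")
	}
	size := binary.BigEndian.Uint32(data[:4])
	if len(data) < int(4+size) {
		return nil, fmt.Errorf("want %d payload bytes, have %d", size, len(data)-4)
	}
	return data[4 : 4+size], nil
}

func main() {
	rec := frame([]byte(`{"Type":1}`))
	payload, err := unframe(rec)
	fmt.Println(err, string(payload)) // <nil> {"Type":1}
}
```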
================================================
FILE: internal/crdt/operation.go
================================================
package crdt
import (
"time"
"github.com/google/uuid"
)
// OpType represents the type of operation
type OpType int
const (
OpInsert OpType = iota
OpUpdate
OpDelete
)
// Operation represents a CRDT operation
type Operation struct {
Type OpType // Type of operation
Lamport uint64 // Lamport timestamp for ordering
NodeID uuid.UUID // ID of the node that created this operation
FileID uuid.UUID // ID of the file being modified
LineID uuid.UUID // ID of the line being modified
Content string // Content for insert/update operations
Stream string // Stream this operation belongs to
Timestamp time.Time // When the operation occurred
Vector []int64 // Vector clock for causal ordering
}
// CanCombine checks if two operations can be combined
func (o *Operation) CanCombine(other *Operation) bool {
// Can only combine operations in same stream
if o.Stream != other.Stream {
return false
}
// Can only combine operations on same file
if o.FileID != other.FileID {
return false
}
// Can't combine deletes
if o.Type == OpDelete || other.Type == OpDelete {
return false
}
// The other operation must be later in Lamport order (strictly greater,
// not necessarily consecutive)
return o.Lamport < other.Lamport
}
// Combine merges another operation into this one
func (o *Operation) Combine(other *Operation) {
// Take the latest content and Lamport timestamp
o.Content = other.Content
o.Lamport = other.Lamport
o.Timestamp = other.Timestamp
// Extend the vector clock if the other operation's is longer
if len(other.Vector) > len(o.Vector) {
newVec := make([]int64, len(other.Vector))
copy(newVec, o.Vector)
o.Vector = newVec
}
// Adopt the later operation's clock entries wholesale: CanCombine
// guarantees other is later in Lamport order, so its entries supersede
// o's (any trailing entries beyond other's length are kept unchanged)
copy(o.Vector, other.Vector)
}
// LessThan compares operations for ordering
func (o *Operation) LessThan(other *Operation) bool {
if o.Lamport != other.Lamport {
return o.Lamport < other.Lamport
}
return o.NodeID.String() < other.NodeID.String()
}
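The total order defined by `LessThan` (Lamport timestamp first, node ID as tie-break) can be sketched independently of the repository types; `op` and `lessThan` below are illustrative stand-ins, not part of the codebase:

```go
package main

import "fmt"

// op is an illustrative stand-in for crdt.Operation, carrying only the
// fields that LessThan consults.
type op struct {
	lamport uint64
	nodeID  string
}

// lessThan mirrors Operation.LessThan: order by Lamport timestamp,
// breaking ties by comparing node IDs lexicographically.
func lessThan(a, b op) bool {
	if a.lamport != b.lamport {
		return a.lamport < b.lamport
	}
	return a.nodeID < b.nodeID
}

func main() {
	fmt.Println(lessThan(op{1, "node-b"}, op{2, "node-a"})) // lower Lamport wins: true
	fmt.Println(lessThan(op{2, "node-a"}, op{2, "node-b"})) // tie broken by node ID: true
	fmt.Println(lessThan(op{2, "node-b"}, op{2, "node-a"})) // false
}
```

The tie-break makes the order total: two operations with the same Lamport timestamp from different nodes still sort deterministically on every replica.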
================================================
FILE: internal/crdt/operation_test.go
================================================
package crdt
import (
"testing"
"time"
"github.com/google/uuid"
)
func TestOperationCombining(t *testing.T) {
t.Run("Same Stream Operations", func(t *testing.T) {
fileID := uuid.New()
lineID := uuid.New()
nodeID := uuid.New()
op1 := &Operation{
Type: OpUpdate,
Lamport: 1,
NodeID: nodeID,
FileID: fileID,
LineID: lineID,
Content: "value1",
Stream: "stream1",
Timestamp: time.Now(),
Vector: []int64{1, 0, 0},
}
op2 := &Operation{
Type: OpUpdate,
Lamport: 2,
NodeID: nodeID,
FileID: fileID,
LineID: lineID,
Content: "value2",
Stream: "stream1",
Timestamp: op1.Timestamp.Add(time.Second),
Vector: []int64{1, 1, 0},
}
if !op1.CanCombine(op2) {
t.Error("Expected operations to be combinable")
}
op1.Combine(op2)
if op1.Content != "value2" {
t.Errorf("Expected combined content to be 'value2', got '%s'", op1.Content)
}
if op1.Vector[1] != 1 {
t.Errorf("Expected vector clock [1] to be 1, got %d", op1.Vector[1])
}
})
t.Run("Different Stream Operations", func(t *testing.T) {
fileID := uuid.New()
lineID := uuid.New()
nodeID := uuid.New()
op1 := &Operation{
Type: OpUpdate,
Lamport: 1,
NodeID: nodeID,
FileID: fileID,
LineID: lineID,
Content: "value1",
Stream: "stream1",
Timestamp: time.Now(),
Vector: []int64{1, 0, 0},
}
op2 := &Operation{
Type: OpUpdate,
Lamport: 2,
NodeID: nodeID,
FileID: fileID,
LineID: lineID,
Content: "value2",
Stream: "stream2",
Timestamp: op1.Timestamp.Add(time.Second),
Vector: []int64{1, 1, 0},
}
if op1.CanCombine(op2) {
t.Error("Expected operations from different streams to not be combinable")
}
})
t.Run("Delete Operations", func(t *testing.T) {
fileID := uuid.New()
lineID := uuid.New()
nodeID := uuid.New()
op1 := &Operation{
Type: OpUpdate,
Lamport: 1,
NodeID: nodeID,
FileID: fileID,
LineID: lineID,
Content: "value1",
Stream: "stream1",
Timestamp: time.Now(),
Vector: []int64{1, 0, 0},
}
op2 := &Operation{
Type: OpDelete,
Lamport: 2,
NodeID: nodeID,
FileID: fileID,
LineID: lineID,
Stream: "stream1",
Timestamp: op1.Timestamp.Add(time.Second),
Vector: []int64{1, 1, 0},
}
if op1.CanCombine(op2) {
t.Error("Expected delete operations to not be combinable")
}
})
t.Run("Vector Clock Extension", func(t *testing.T) {
fileID := uuid.New()
lineID := uuid.New()
nodeID := uuid.New()
op1 := &Operation{
Type: OpUpdate,
Lamport: 1,
NodeID: nodeID,
FileID: fileID,
LineID: lineID,
Content: "value1",
Stream: "stream1",
Timestamp: time.Now(),
Vector: []int64{1, 0},
}
op2 := &Operation{
Type: OpUpdate,
Lamport: 2,
NodeID: nodeID,
FileID: fileID,
LineID: lineID,
Content: "value2",
Stream: "stream1",
Timestamp: op1.Timestamp.Add(time.Second),
Vector: []int64{1, 1, 1},
}
if !op1.CanCombine(op2) {
t.Error("Expected operations to be combinable")
}
op1.Combine(op2)
if len(op1.Vector) != 3 {
t.Errorf("Expected vector clock length to be 3, got %d", len(op1.Vector))
}
if op1.Vector[2] != 1 {
t.Errorf("Expected vector clock [2] to be 1, got %d", op1.Vector[2])
}
})
t.Run("Non-Sequential Operations", func(t *testing.T) {
fileID := uuid.New()
lineID := uuid.New()
nodeID := uuid.New()
op1 := &Operation{
Type: OpUpdate,
Lamport: 2, // Higher Lamport
NodeID: nodeID,
FileID: fileID,
LineID: lineID,
Content: "value1",
Stream: "stream1",
Timestamp: time.Now().Add(time.Second),
Vector: []int64{1, 0, 0},
}
op2 := &Operation{
Type: OpUpdate,
Lamport: 1, // Lower Lamport
NodeID: nodeID,
FileID: fileID,
LineID: lineID,
Content: "value2",
Stream: "stream1",
Timestamp: time.Now(),
Vector: []int64{1, 1, 0},
}
if op1.CanCombine(op2) {
t.Error("Expected non-sequential operations to not be combinable")
}
})
}
func TestOperationOrdering(t *testing.T) {
now := time.Now()
fileID := uuid.New()
lineID := uuid.New()
nodeID := uuid.New()
ops := []Operation{
{
Type: OpInsert,
Lamport: 1,
NodeID: nodeID,
FileID: fileID,
LineID: lineID,
Content: "value1",
Stream: "stream1",
Timestamp: now,
Vector: []int64{1, 0, 0},
},
{
Type: OpUpdate,
Lamport: 2,
NodeID: nodeID,
FileID: fileID,
LineID: lineID,
Content: "value2",
Stream: "stream1",
Timestamp: now.Add(time.Second),
Vector: []int64{1, 1, 0},
},
{
Type: OpDelete,
Lamport: 3,
NodeID: nodeID,
FileID: fileID,
LineID: lineID,
Stream: "stream1",
Timestamp: now.Add(2 * time.Second),
Vector: []int64{1, 1, 1},
},
}
t.Run("Timestamp Order", func(t *testing.T) {
if !ops[0].Timestamp.Before(ops[1].Timestamp) {
t.Error("Expected op1 timestamp to be before op2")
}
if !ops[1].Timestamp.Before(ops[2].Timestamp) {
t.Error("Expected op2 timestamp to be before op3")
}
})
t.Run("Vector Clock Order", func(t *testing.T) {
// Test that vector clocks are monotonically increasing
for i := 1; i < len(ops); i++ {
prev := ops[i-1].Vector
curr := ops[i].Vector
increasing := false
for j := 0; j < len(prev) && j < len(curr); j++ {
if curr[j] > prev[j] {
increasing = true
break
}
}
if !increasing {
t.Errorf("Expected vector clock to increase between op%d and op%d", i, i+1)
}
}
})
}
================================================
FILE: internal/crdt/rga.go
================================================
package crdt
import (
"fmt"
"sort"
"sync"
"github.com/google/uuid"
)
// RGAOperation wraps an Operation with its index in the op list
type RGAOperation struct {
Operation
Index int
}
// NewRGAOperation creates a new RGAOperation instance
func NewRGAOperation(op Operation, index int) RGAOperation {
return RGAOperation{
Operation: op,
Index: index,
}
}
// RGA represents a Replicated Growable Array CRDT
type RGA struct {
mu sync.RWMutex
ops []RGAOperation
tombstone map[string]bool
}
// NewRGA creates a new RGA instance
func NewRGA() *RGA {
return &RGA{
ops: make([]RGAOperation, 0),
tombstone: make(map[string]bool),
}
}
// Apply applies an operation to the RGA
func (r *RGA) Apply(op Operation) error {
r.mu.Lock()
defer r.mu.Unlock()
rgaOp := NewRGAOperation(op, len(r.ops))
switch op.Type {
case OpInsert:
// Filter out any previous operations for this LineID
newOps := make([]RGAOperation, 0)
for _, existingOp := range r.ops {
if existingOp.LineID != op.LineID {
newOps = append(newOps, existingOp)
}
}
r.ops = append(newOps, rgaOp)
// Clear tombstone status
delete(r.tombstone, op.LineID.String())
sort.Slice(r.ops, func(i, j int) bool {
return r.ops[i].LessThan(&r.ops[j].Operation)
})
case OpDelete:
// Capture the line's current content before marking it deleted,
// so the tombstone operation preserves what was removed
var content string
for _, existing := range r.ops {
if existing.LineID == op.LineID && !r.tombstone[existing.LineID.String()] {
content = existing.Content
break
}
}
rgaOp.Content = content
r.ops = append(r.ops, rgaOp)
r.tombstone[op.LineID.String()] = true
case OpUpdate:
found := false
for i := range r.ops {
if r.ops[i].LineID == op.LineID {
r.ops[i].Content = op.Content
found = true
break
}
}
if !found {
return fmt.Errorf("line not found for update: %s", op.LineID)
}
default:
return fmt.Errorf("unknown operation type: %d", op.Type)
}
return nil
}
// Get returns the current state of the RGA; it is an alias for
// Materialize
func (r *RGA) Get() []string {
return r.Materialize()
}
// GetOperations returns all operations in order
func (r *RGA) GetOperations() []Operation {
r.mu.RLock()
defer r.mu.RUnlock()
result := make([]Operation, len(r.ops))
for i, op := range r.ops {
result[i] = op.Operation
}
return result
}
// Clear removes all operations and resets the RGA
func (r *RGA) Clear() {
r.mu.Lock()
defer r.mu.Unlock()
r.ops = make([]RGAOperation, 0)
r.tombstone = make(map[string]bool)
}
// Materialize returns the current document state as a slice of strings
func (r *RGA) Materialize() []string {
r.mu.RLock()
defer r.mu.RUnlock()
var result []string
for _, op := range r.ops {
if !r.tombstone[op.LineID.String()] {
result = append(result, op.Content)
}
}
return result
}
// GetPositions returns the positions of all active lines
func (r *RGA) GetPositions() []int {
r.mu.RLock()
defer r.mu.RUnlock()
var positions []int
for i, op := range r.ops {
if !r.tombstone[op.LineID.String()] {
positions = append(positions, i)
}
}
return positions
}
// GetLineIDs returns the LineIDs of all active lines in order
func (r *RGA) GetLineIDs() []uuid.UUID {
r.mu.RLock()
defer r.mu.RUnlock()
var lineIDs []uuid.UUID
for _, op := range r.ops {
if !r.tombstone[op.LineID.String()] {
lineIDs = append(lineIDs, op.LineID)
}
}
return lineIDs
}
// LineMap returns a map of LineID to Content for all active lines
func (r *RGA) LineMap() map[uuid.UUID]string {
r.mu.RLock()
defer r.mu.RUnlock()
result := make(map[uuid.UUID]string)
for _, op := range r.ops {
if !r.tombstone[op.LineID.String()] {
result[op.LineID] = op.Content
}
}
return result
}
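The tombstone mechanism above can be illustrated with a minimal, self-contained sketch. `miniRGA` is a hypothetical, reduced model (plain string IDs stand in for UUIDs; locking and Lamport ordering are omitted):

```go
package main

import "fmt"

// line and miniRGA model the RGA's core state: an ordered op list plus
// a tombstone set keyed by line ID.
type line struct{ id, content string }

type miniRGA struct {
	lines     []line
	tombstone map[string]bool
}

func newMiniRGA() *miniRGA {
	return &miniRGA{tombstone: map[string]bool{}}
}

// insert drops any earlier entry for the ID, appends the new line, and
// clears its tombstone, mirroring the OpInsert branch of RGA.Apply.
func (m *miniRGA) insert(id, content string) {
	var kept []line
	for _, l := range m.lines {
		if l.id != id {
			kept = append(kept, l)
		}
	}
	m.lines = append(kept, line{id, content})
	delete(m.tombstone, id)
}

// del marks the line deleted without physically removing it.
func (m *miniRGA) del(id string) {
	m.tombstone[id] = true
}

// materialize returns only non-tombstoned content, like RGA.Materialize.
func (m *miniRGA) materialize() []string {
	var out []string
	for _, l := range m.lines {
		if !m.tombstone[l.id] {
			out = append(out, l.content)
		}
	}
	return out
}

func main() {
	r := newMiniRGA()
	r.insert("l1", "hello")
	r.insert("l2", "world")
	r.del("l1")
	fmt.Println(r.materialize()) // [world]
	r.insert("l1", "hello")      // reinsert revives the tombstoned line
	fmt.Println(r.materialize()) // [world hello]
}
```

Keeping deletes as tombstones rather than removing entries is what lets concurrent replicas converge: a delete arriving before the insert it refers to still has a stable ID to mark.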
================================================
FILE: internal/crdt/rga_test.go
================================================
package crdt
import (
"testing"
"time"
"github.com/google/uuid"
)
func TestRGA(t *testing.T) {
t.Run("Insert Operations", func(t *testing.T) {
rga := NewRGA()
fileID := uuid.New()
lineID1 := uuid.New()
lineID2 := uuid.New()
nodeID := uuid.New()
// Create operations
op1 := Operation{
Type: OpInsert,
Lamport: 1,
NodeID: nodeID,
FileID: fileID,
LineID: lineID1,
Content: "value1",
Stream: "stream1",
Timestamp: time.Now(),
Vector: []int64{1},
}
op2 := Operation{
Type: OpInsert,
Lamport: 2,
NodeID: nodeID,
FileID: fileID,
LineID: lineID2,
Content: "value2",
Stream: "stream1",
Timestamp: time.Now().Add(time.Second),
Vector: []int64{2},
}
// Apply operations
err := rga.Apply(op1)
if err != nil {
t.Errorf("Failed to apply operation 1: %v", err)
}
err = rga.Apply(op2)
if err != nil {
t.Errorf("Failed to apply operation 2: %v", err)
}
// Check state
values := rga.Get()
if len(values) != 2 {
t.Errorf("Expected 2 values, got %d", len(values))
}
if values[0] != "value1" || values[1] != "value2" {
t.Errorf("Values not in expected order: %v", values)
}
})
t.Run("Delete Operations", func(t *testing.T) {
rga := NewRGA()
fileID := uuid.New()
lineID := uuid.New()
nodeID := uuid.New()
// Insert operation
insertOp := Operation{
Type: OpInsert,
Lamport: 1,
NodeID: nodeID,
FileID: fileID,
LineID: lineID,
Content: "value1",
Stream: "stream1",
Timestamp: time.Now(),
Vector: []int64{1},
}
// Apply insert
err := rga.Apply(insertOp)
if err != nil {
t.Errorf("Failed to apply insert operation: %v", err)
}
// Delete operation
deleteOp := Operation{
Type: OpDelete,
Lamport: 2,
NodeID: nodeID,
FileID: fileID,
LineID: lineID,
Stream: "stream1",
Timestamp: time.Now().Add(time.Second),
Vector: []int64{2},
}
// Apply delete
err = rga.Apply(deleteOp)
if err != nil {
t.Errorf("Failed to apply delete operation: %v", err)
}
// Check state
values := rga.Get()
if len(values) != 0 {
t.Errorf("Expected 0 values after delete, got %d", len(values))
}
})
t.Run("Update Operations", func(t *testing.T) {
rga := NewRGA()
fileID := uuid.New()
lineID := uuid.New()
nodeID := uuid.New()
// Insert operation
insertOp := Operation{
Type: OpInsert,
Lamport: 1,
NodeID: nodeID,
FileID: fileID,
LineID: lineID,
Content: "value1",
Stream: "stream1",
Timestamp: time.Now(),
Vector: []int64{1},
}
// Apply insert
err := rga.Apply(insertOp)
if err != nil {
t.Errorf("Failed to apply insert operation: %v", err)
}
// Update operation
updateOp := Operation{
Type: OpUpdate,
Lamport: 2,
NodeID: nodeID,
FileID: fileID,
LineID: lineID,
Content: "updated",
Stream: "stream1",
Timestamp: time.Now().Add(time.Second),
Vector: []int64{2},
}
// Apply update
err = rga.Apply(updateOp)
if err != nil {
t.Errorf("Failed to apply update operation: %v", err)
}
// Check state
values := rga.Get()
if len(values) != 1 {
t.Errorf("Expected 1 value after update, got %d", len(values))
}
if values[0] != "updated" {
t.Errorf("Expected updated value, got %s", values[0])
}
})
t.Run("Invalid Update Operation", func(t *testing.T) {
rga := NewRGA()
fileID := uuid.New()
lineID := uuid.New()
nodeID := uuid.New()
// Update operation without insert
updateOp := Operation{
Type: OpUpdate,
Lamport: 1,
NodeID: nodeID,
FileID: fileID,
LineID: lineID,
Content: "updated",
Stream: "stream1",
Timestamp: time.Now(),
Vector: []int64{1},
}
// Apply update
err := rga.Apply(updateOp)
if err == nil {
t.Error("Expected error when updating non-existent line")
}
})
t.Run("Clear Operations", func(t *testing.T) {
rga := NewRGA()
fileID := uuid.New()
lineID1 := uuid.New()
lineID2 := uuid.New()
nodeID := uuid.New()
// Insert operations
op1 := Operation{
Type: OpInsert,
Lamport: 1,
NodeID: nodeID,
FileID: fileID,
LineID: lineID1,
Content: "value1",
Stream: "stream1",
Timestamp: time.Now(),
Vector: []int64{1},
}
op2 := Operation{
Type: OpInsert,
Lamport: 2,
NodeID: nodeID,
FileID: fileID,
LineID: lineID2,
Content: "value2",
Stream: "stream1",
Timestamp: time.Now().Add(time.Second),
Vector: []int64{2},
}
// Apply operations
err := rga.Apply(op1)
if err != nil {
t.Errorf("Failed to apply operation 1: %v", err)
}
err = rga.Apply(op2)
if err != nil {
t.Errorf("Failed to apply operation 2: %v", err)
}
// Clear RGA
rga.Clear()
// Check state
values := rga.Get()
if len(values) != 0 {
t.Errorf("Expected 0 values after clear, got %d", len(values))
}
ops := rga.GetOperations()
if len(ops) != 0 {
t.Errorf("Expected 0 operations after clear, got %d", len(ops))
}
})
t.Run("Delete Content Preservation", func(t *testing.T) {
rga := NewRGA()
fileID := uuid.New()
lineID := uuid.New()
nodeID := uuid.New()
content := "test content to preserve"
// Insert operation
insertOp := Operation{
Type: OpInsert,
Lamport: 1,
NodeID: nodeID,
FileID: fileID,
LineID: lineID,
Content: content,
Stream: "stream1",
Timestamp: time.Now(),
Vector: []int64{1},
}
// Apply insert
err := rga.Apply(insertOp)
if err != nil {
t.Errorf("Failed to apply insert operation: %v", err)
}
// Delete operation
deleteOp := Operation{
Type: OpDelete,
Lamport: 2,
NodeID: nodeID,
FileID: fileID,
LineID: lineID,
Stream: "stream1",
Timestamp: time.Now().Add(time.Second),
Vector: []int64{2},
}
// Apply delete
err = rga.Apply(deleteOp)
if err != nil {
t.Errorf("Failed to apply delete operation: %v", err)
}
// Get all operations
ops := rga.GetOperations()
var foundDelete bool
for _, op := range ops {
if op.Type == OpDelete && op.LineID == lineID {
foundDelete = true
if op.Content != content {
t.Errorf("Delete operation did not preserve content, expected %q, got %q", content, op.Content)
}
break
}
}
if !foundDelete {
t.Error("Delete operation not found in operations list")
}
})
t.Run("Delete and Reinsert", func(t *testing.T) {
rga := NewRGA()
fileID := uuid.New()
lineID := uuid.New()
nodeID := uuid.New()
content := "test content for reinsert"
// Insert operation
insertOp := Operation{
Type: OpInsert,
Lamport: 1,
NodeID: nodeID,
FileID: fileID,
LineID: lineID,
Content: content,
Stream: "stream1",
Timestamp: time.Now(),
Vector: []int64{1},
}
// Apply insert
err := rga.Apply(insertOp)
if err != nil {
t.Errorf("Failed to apply insert operation: %v", err)
}
// Delete operation
deleteOp := Operation{
Type: OpDelete,
Lamport: 2,
NodeID: nodeID,
FileID: fileID,
LineID: lineID,
Stream: "stream1",
Timestamp: time.Now().Add(time.Second),
Vector: []int64{2},
}
// Apply delete
err = rga.Apply(deleteOp)
if err != nil {
t.Errorf("Failed to apply delete operation: %v", err)
}
// Reinsert operation (simulating revert)
reinsertOp := Operation{
Type: OpInsert,
Lamport: 3,
NodeID: nodeID,
FileID: fileID,
LineID: lineID,
Content: content,
Stream: "stream1",
Timestamp: time.Now().Add(2 * time.Second),
Vector: []int64{3},
}
// Apply reinsert
err = rga.Apply(reinsertOp)
if err != nil {
t.Errorf("Failed to apply reinsert operation: %v", err)
}
// Verify content is restored
values := rga.Get()
if len(values) != 1 {
t.Errorf("Expected 1 value after reinsert, got %d", len(values))
} else if values[0] != content {
t.Errorf("Content mismatch after reinsert, expected %q, got %q", content, values[0])
}
})
}
================================================
FILE: internal/ignore/ignore.go
================================================
package ignore
import (
"bufio"
"os"
"path/filepath"
"strings"
"github.com/bmatcuk/doublestar/v4"
)
// IgnoreList represents a collection of ignore patterns
type IgnoreList struct {
patterns []string
}
// LoadIgnoreFile reads and parses the .evo-ignore file from the given repository path
func LoadIgnoreFile(repoPath string) (*IgnoreList, error) {
ignorePath := filepath.Join(repoPath, ".evo-ignore")
file, err := os.Open(ignorePath)
if os.IsNotExist(err) {
return &IgnoreList{}, nil
}
if err != nil {
return nil, err
}
defer file.Close()
var patterns []string
scanner := bufio.NewScanner(file)
for scanner.Scan() {
pattern := strings.TrimSpace(scanner.Text())
if pattern != "" && !strings.HasPrefix(pattern, "#") {
// Handle directory patterns
if strings.HasSuffix(pattern, "/") {
pattern = strings.TrimSuffix(pattern, "/")
if !strings.Contains(pattern, "**") {
pattern = pattern + "/**"
}
}
patterns = append(patterns, pattern)
}
}
if err := scanner.Err(); err != nil {
return nil, err
}
return &IgnoreList{patterns: patterns}, nil
}
// IsIgnored checks if a given path should be ignored based on the ignore patterns
func (il *IgnoreList) IsIgnored(path string) bool {
// Always ignore the repository's own .evo metadata (a prefix match, so
// .evo/ contents and .evo-ignore itself are also skipped)
if strings.HasPrefix(path, ".evo") {
return true
}
// Clean and normalize the path
path = filepath.ToSlash(filepath.Clean(path))
path = strings.TrimPrefix(path, "./")
path = strings.TrimPrefix(path, "../")
for _, pattern := range il.patterns {
// Handle negation patterns
if strings.HasPrefix(pattern, "!") {
matched, err := doublestar.Match(pattern[1:], path)
if err == nil && matched {
return false
}
continue
}
// For directory patterns ending with /**, try prefix matching first
if strings.HasSuffix(pattern, "/**") {
base := strings.TrimSuffix(pattern, "/**")
if path == base || strings.HasPrefix(path, base+"/") {
return true
}
}
// Try matching the pattern directly
matched, err := doublestar.Match(pattern, path)
if err == nil && matched {
return true
}
// Try matching with **/ prefix
if !strings.HasPrefix(pattern, "**/") {
matched, err := doublestar.Match("**/"+pattern, path)
if err == nil && matched {
return true
}
}
// For directory patterns without /**, try matching with /** suffix
if !strings.HasSuffix(pattern, "/**") {
// Try with /** suffix
matched, err := doublestar.Match(pattern+"/**", path)
if err == nil && matched {
return true
}
// Try with **/ prefix and /** suffix
matched, err = doublestar.Match("**/"+pattern+"/**", path)
if err == nil && matched {
return true
}
// Compare the pattern (with any trailing slash trimmed) against
// each leading path component, so a bare directory name ignores
// everything beneath that directory
dirPattern := strings.TrimSuffix(pattern, "/")
parts := strings.Split(path, "/")
for i := range parts {
prefix := strings.Join(parts[:i+1], "/")
if prefix == pattern || prefix == dirPattern {
return true
}
}
}
}
return false
}
// AddPattern adds a new ignore pattern
func (il *IgnoreList) AddPattern(pattern string) {
// Handle directory patterns
if strings.HasSuffix(pattern, "/") {
pattern = strings.TrimSuffix(pattern, "/")
if !strings.Contains(pattern, "**") {
pattern = pattern + "/**"
}
}
il.patterns = append(il.patterns, pattern)
}
// GetPatterns returns all current ignore patterns
func (il *IgnoreList) GetPatterns() []string {
return append([]string{}, il.patterns...)
}
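The directory fast-path in `IsIgnored` (a `<dir>/**` pattern matching the directory itself or anything beneath it) reduces to a string-prefix check. A stdlib-only sketch (`matchesDirPattern` is an illustrative name; the real code falls back to `doublestar.Match` for glob metacharacters, which is omitted here):

```go
package main

import (
	"fmt"
	"strings"
)

// matchesDirPattern reports whether path is covered by a "<dir>/**"
// pattern: it must equal the directory or live anywhere beneath it.
// The "/" in the prefix check prevents a sibling like "builds" from
// matching "build/**".
func matchesDirPattern(pattern, path string) bool {
	if !strings.HasSuffix(pattern, "/**") {
		return false
	}
	base := strings.TrimSuffix(pattern, "/**")
	return path == base || strings.HasPrefix(path, base+"/")
}

func main() {
	fmt.Println(matchesDirPattern("build/**", "build/output.txt")) // true
	fmt.Println(matchesDirPattern("build/**", "builds/file.txt"))  // false
	fmt.Println(matchesDirPattern("build/**", "build"))            // true
}
```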
================================================
FILE: internal/ignore/ignore_test.go
================================================
package ignore
import (
"os"
"path/filepath"
"testing"
)
func TestLoadIgnoreFile(t *testing.T) {
// Create a temporary directory for testing
tmpDir, err := os.MkdirTemp("", "evo-ignore-test")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(tmpDir)
// Test case 1: No .evo-ignore file
il, err := LoadIgnoreFile(tmpDir)
if err != nil {
t.Errorf("Expected no error when .evo-ignore doesn't exist, got %v", err)
}
if len(il.patterns) != 0 {
t.Errorf("Expected empty patterns list, got %v", il.patterns)
}
// Test case 2: With .evo-ignore file
ignoreContent := `
# Comment line
*.log
build/
**/*.tmp
test/*.txt
node_modules/
*.bak
!important.bak
`
ignorePath := filepath.Join(tmpDir, ".evo-ignore")
if err := os.WriteFile(ignorePath, []byte(ignoreContent), 0644); err != nil {
t.Fatal(err)
}
il, err = LoadIgnoreFile(tmpDir)
if err != nil {
t.Errorf("Failed to load .evo-ignore file: %v", err)
}
expectedPatterns := []string{
"*.log",
"build/**",
"**/*.tmp",
"test/*.txt",
"node_modules/**",
"*.bak",
"!important.bak",
}
patterns := il.GetPatterns()
if len(patterns) != len(expectedPatterns) {
t.Errorf("Expected %d patterns, got %d", len(expectedPatterns), len(patterns))
}
for i, pattern := range patterns {
if pattern != expectedPatterns[i] {
t.Errorf("Pattern %d: expected %s, got %s", i, expectedPatterns[i], pattern)
}
}
// Test case 3: Invalid file permissions (permission bits are not
// enforced for root, so skip there)
if os.Getuid() == 0 {
t.Skip("running as root; file permission bits are not enforced")
}
if err := os.Chmod(ignorePath, 0000); err != nil {
t.Fatal(err)
}
_, err = LoadIgnoreFile(tmpDir)
if err == nil {
t.Error("Expected error when loading file with no permissions")
}
}
func TestIsIgnored(t *testing.T) {
tests := []struct {
name string
patterns []string
paths map[string]bool // path -> should be ignored
}{
{
name: "Empty patterns",
patterns: []string{},
paths: map[string]bool{
"file.txt": false,
".evo/config": true, // .evo is always ignored
".evo/objects": true,
},
},
{
name: "Simple glob patterns",
patterns: []string{
"*.log",
"*.tmp",
},
paths: map[string]bool{
"test.log": true,
"logs/test.log": true,
"test.txt": false,
"test.tmp": true,
},
},
{
name: "Directory patterns",
patterns: []string{
"build/",
"node_modules/",
"test/fixtures/",
},
paths: map[string]bool{
"build/output.txt": true,
"build/temp/file.txt": true,
"src/build/file.txt": false,
"node_modules/package.json": true,
"test/fixtures/data.json": true,
"test/file.txt": false,
},
},
{
name: "Double-star patterns",
patterns: []string{
"**/*.tmp",
"**/vendor/**",
"**/__pycache__/**",
},
paths: map[string]bool{
"file.tmp": true,
"temp/file.tmp": true,
"a/b/c/file.tmp": true,
"vendor/lib.js": true,
"src/vendor/lib.js": true,
"src/__pycache__/module.pyc": true,
"test/__pycache__/cache.json": true,
},
},
{
name: "Complex patterns",
patterns: []string{
"*.{log,tmp}",
"**/{test,mock}_*.go",
"**/.DS_Store",
},
paths: map[string]bool{
"error.log": true,
"temp.tmp": true,
"test_handler.go": true,
"mock_service.go": true,
"internal/test_db.go": true,
".DS_Store": true,
"src/.DS_Store": true,
"handler.go": false,
"service_test.go": false,
},
},
{
name: "Path normalization",
patterns: []string{
"build/",
"**/temp/**",
},
paths: map[string]bool{
"build/file.txt": true,
"./build/file.txt": true,
"build/../build/file": true,
"temp/file.txt": true,
"./temp/file.txt": true,
"a/temp/b/file.txt": true,
"./a/temp/b/file.txt": true,
"../repo/temp/file.txt": true,
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
il := &IgnoreList{patterns: tt.patterns}
for path, shouldIgnore := range tt.paths {
if got := il.IsIgnored(path); got != shouldIgnore {
t.Errorf("IsIgnored(%q) = %v, want %v", path, got, shouldIgnore)
}
}
})
}
}
func TestAddPattern(t *testing.T) {
il := &IgnoreList{}
// Test adding various pattern types
patterns := []struct {
input string
expected string
}{
{"*.log", "*.log"},
{"build/", "build/**"},
{"node_modules/", "node_modules/**"},
{"**/*.tmp", "**/*.tmp"},
{"test/*.txt", "test/*.txt"},
{".env", ".env"},
{"dist/", "dist/**"},
}
for _, p := range patterns {
il.AddPattern(p.input)
found := false
for _, pattern := range il.patterns {
if pattern == p.expected {
found = true
break
}
}
if !found {
t.Errorf("Pattern %q not found in patterns after AddPattern, expected %q", p.input, p.expected)
}
}
// Test pattern order preservation
il = &IgnoreList{}
var expectedPatterns []string
for _, p := range patterns {
il.AddPattern(p.input)
expectedPatterns = append(expectedPatterns, p.expected)
}
actualPatterns := il.GetPatterns()
if len(actualPatterns) != len(expectedPatterns) {
t.Errorf("Expected %d patterns, got %d", len(expectedPatterns), len(actualPatterns))
}
for i, pattern := range actualPatterns {
if pattern != expectedPatterns[i] {
t.Errorf("Pattern at index %d: expected %q, got %q", i, expectedPatterns[i], pattern)
}
}
}
func TestGetPatterns(t *testing.T) {
// Test that GetPatterns returns a copy of the patterns slice
il := &IgnoreList{patterns: []string{"*.log", "build/**", "**/*.tmp"}}
patterns1 := il.GetPatterns()
patterns2 := il.GetPatterns()
// Verify both slices have the same content
if len(patterns1) != len(patterns2) {
t.Errorf("Pattern slices have different lengths: %d vs %d", len(patterns1), len(patterns2))
}
for i := range patterns1 {
if patterns1[i] != patterns2[i] {
t.Errorf("Pattern mismatch at index %d: %q vs %q", i, patterns1[i], patterns2[i])
}
}
// Modify the first slice and verify it doesn't affect the second
patterns1[0] = "modified"
if patterns1[0] == patterns2[0] {
t.Error("Modifying one pattern slice affected the other")
}
// Verify the original patterns are unchanged
originalPatterns := il.GetPatterns()
if originalPatterns[0] != "*.log" {
t.Errorf("Original patterns were modified: expected %q, got %q", "*.log", originalPatterns[0])
}
}
================================================
FILE: internal/index/index.go
================================================
package index
import (
"bufio"
"errors"
"fmt"
"os"
"path/filepath"
"strings"
"github.com/google/uuid"
)
// The .evo/index file stores one mapping per line: "<fileID> <path>"
func LoadIndex(repoPath string) (map[string]string, map[string]string, error) {
// path->fileID, fileID->path
path2id := make(map[string]string)
id2path := make(map[string]string)
idxPath := filepath.Join(repoPath, ".evo", "index")
f, err := os.Open(idxPath)
if os.IsNotExist(err) {
return path2id, id2path, nil
}
if err != nil {
return path2id, id2path, err
}
defer f.Close()
sc := bufio.NewScanner(f)
for sc.Scan() {
line := strings.TrimSpace(sc.Text())
if line == "" {
continue
}
parts := strings.SplitN(line, " ", 2)
if len(parts) == 2 {
fid := parts[0]
p := parts[1]
path2id[p] = fid
id2path[fid] = p
}
}
return path2id, id2path, nil
}
func SaveIndex(repoPath string, path2id map[string]string) error {
idxPath := filepath.Join(repoPath, ".evo", "index")
f, err := os.OpenFile(idxPath, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0644)
if err != nil {
return err
}
defer f.Close()
for p, fid := range path2id {
fmt.Fprintf(f, "%s %s\n", fid, p)
}
return nil
}
// UpdateIndex => scans working dir, assigns stable fileIDs, removes missing files
func UpdateIndex(repoPath string) error {
p2id, id2p, err := LoadIndex(repoPath)
if err != nil {
return err
}
var working []string
if err := filepath.Walk(repoPath, func(path string, info os.FileInfo, e error) error {
if e != nil {
// Skip unreadable entries rather than aborting the scan
return nil
}
if !info.IsDir() {
rel, _ := filepath.Rel(repoPath, path)
if !strings.HasPrefix(rel, ".evo") {
working = append(working, rel)
}
}
return nil
}); err != nil {
return err
}
// detect new files
for _, w := range working {
if _, ok := p2id[w]; !ok {
// assign new fileID
fid := uuid.New().String()
p2id[w] = fid
id2p[fid] = w
}
}
// detect removed
for p, fid := range p2id {
found := false
for _, w := range working {
if w == p {
found = true
break
}
}
if !found {
delete(p2id, p)
delete(id2p, fid)
}
}
return SaveIndex(repoPath, p2id)
}
// LookupFileID => returns stable fileID for a given path
func LookupFileID(repoPath, relPath string) (string, error) {
p2id, _, err := LoadIndex(repoPath)
if err != nil {
return "", err
}
fid, ok := p2id[relPath]
if !ok {
return "", errors.New("file not tracked in index: " + relPath)
}
return fid, nil
}
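The `<fileID> <path>` line format parsed by `LoadIndex` splits on the first space only, which keeps spaces inside paths intact. A minimal stdlib sketch of that parsing (`parseIndexLine` is an illustrative helper, not part of the package):

```go
package main

import (
	"fmt"
	"strings"
)

// parseIndexLine splits one index line into (fileID, path). SplitN with
// n=2 stops after the first space, so paths containing spaces survive
// the round-trip, exactly as in LoadIndex.
func parseIndexLine(line string) (fileID, path string, ok bool) {
	parts := strings.SplitN(strings.TrimSpace(line), " ", 2)
	if len(parts) != 2 {
		return "", "", false
	}
	return parts[0], parts[1], true
}

func main() {
	id, path, ok := parseIndexLine("3f2a docs/my file.txt")
	fmt.Println(ok, id, path) // true 3f2a docs/my file.txt
}
```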
================================================
FILE: internal/lfs/diff.go
================================================
package lfs
import (
"bytes"
"io"
)
const (
// RollingHashWindow is the size of the rolling hash window
RollingHashWindow = 64
// MinMatchSize is the minimum size of a matching block
MinMatchSize = 32
)
// RollingHash implements a simple rolling hash for binary diff
type RollingHash struct {
window []byte
pos int
hash uint32
}
// NewRollingHash creates a new rolling hash
func NewRollingHash() *RollingHash {
return &RollingHash{
window: make([]byte, RollingHashWindow),
}
}
// Update updates the rolling hash with a new byte
func (r *RollingHash) Update(b byte) uint32 {
// Remove old byte's contribution
old := r.window[r.pos]
r.hash = (r.hash - uint32(old)) + uint32(b)
// Add new byte
r.window[r.pos] = b
r.pos = (r.pos + 1) % RollingHashWindow
return r.hash
}
// BinaryDiff generates a binary diff between two readers
func BinaryDiff(old, new io.Reader) ([]DiffEntry, error) {
// Read old content into memory for efficient matching
oldData, err := io.ReadAll(old)
if err != nil {
return nil, err
}
// Read new content into memory for efficient matching
newData, err := io.ReadAll(new)
if err != nil {
return nil, err
}
// Initialize rolling hash
rh := NewRollingHash()
blockIndex := make(map[uint32][]int)
// Build block index for old content
if len(oldData) >= RollingHashWindow {
for i := 0; i <= len(oldData)-RollingHashWindow; i++ {
// Seed the hash on the first window, then slide by one byte
if i == 0 {
for j := 0; j < RollingHashWindow; j++ {
rh.Update(oldData[j])
}
} else {
// i+RollingHashWindow-1 < len(oldData) holds for every loop iteration
rh.Update(oldData[i+RollingHashWindow-1])
}
hash := rh.hash
// Store position for this hash
blockIndex[hash] = append(blockIndex[hash], i)
}
}
// Process new content to find matches
var diff []DiffEntry
newBuf := &bytes.Buffer{}
pos := 0
for pos < len(newData) {
// Calculate rolling hash for current window
rh = NewRollingHash()
windowEnd := pos + RollingHashWindow
if windowEnd > len(newData) {
windowEnd = len(newData)
}
for i := pos; i < windowEnd; i++ {
rh.Update(newData[i])
}
hash := rh.hash
// Look for matches
matched := false
if positions, ok := blockIndex[hash]; ok {
for _, oldPos := range positions {
// Verify full match
matchLen := 0
for i := 0; i < MinMatchSize && pos+i < len(newData) && oldPos+i < len(oldData); i++ {
if oldData[oldPos+i] != newData[pos+i] {
break
}
matchLen++
}
if matchLen >= MinMatchSize {
// Found a match, extend it
for oldPos+matchLen < len(oldData) && pos+matchLen < len(newData) &&
oldData[oldPos+matchLen] == newData[pos+matchLen] {
matchLen++
}
// Add any pending literal data. Copy the bytes: newBuf is reused
// after Reset, so Bytes() would alias storage we are about to overwrite.
if newBuf.Len() > 0 {
diff = append(diff, DiffEntry{
Type: DiffNew,
Data: append([]byte(nil), newBuf.Bytes()...),
})
newBuf.Reset()
}
// Add the match
diff = append(diff, DiffEntry{
Type: DiffCopy,
Offset: int64(oldPos),
Length: int64(matchLen),
})
pos += matchLen
matched = true
break
}
}
}
if !matched && pos < len(newData) {
// No match found, add to new data buffer
newBuf.WriteByte(newData[pos])
pos++
}
}
// Add any remaining new data
if newBuf.Len() > 0 {
diff = append(diff, DiffEntry{
Type: DiffNew,
Data: newBuf.Bytes(),
})
}
return diff, nil
}
// DiffType represents the type of a diff entry
type DiffType byte
const (
DiffCopy DiffType = iota // Copy from old file
DiffNew // New data
)
// DiffEntry represents a single entry in a binary diff
type DiffEntry struct {
Type DiffType // Type of entry
Offset int64 // Offset in old file (for Copy)
Length int64 // Length to copy (for Copy)
Data []byte // New data (for New)
}
// ApplyDiff applies a binary diff to generate new content
func ApplyDiff(old io.Reader, diff []DiffEntry, w io.Writer) error {
// Read old content
oldData, err := io.ReadAll(old)
if err != nil {
return err
}
// Apply diff entries
for _, entry := range diff {
switch entry.Type {
case DiffCopy:
// Copy from old file
if entry.Offset+entry.Length > int64(len(oldData)) {
return io.ErrUnexpectedEOF
}
if _, err := w.Write(oldData[entry.Offset:entry.Offset+entry.Length]); err != nil {
return err
}
case DiffNew:
// Write new data
if _, err := w.Write(entry.Data); err != nil {
return err
}
}
}
return nil
}
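The additive window hash above works but collides easily, since it ignores byte order. A polynomial (Rabin-Karp) rolling hash is a common, stronger alternative; this is a self-contained sketch, not code from this package (polyHash, roll, and the constants are illustrative names):

```go
package main

import "fmt"

const (
	windowSize = 64
	base       = 257 // multiplier; any value > 255 works
)

// polyHash computes the polynomial hash of a full window:
// h = b[0]*base^(n-1) + b[1]*base^(n-2) + ... + b[n-1], mod 2^64.
func polyHash(window []byte) uint64 {
	var h uint64
	for _, b := range window {
		h = h*base + uint64(b)
	}
	return h
}

// roll slides the window one byte: removes old, appends neu.
// pow is base^(windowSize-1), precomputed once.
func roll(h uint64, old, neu byte, pow uint64) uint64 {
	return (h-uint64(old)*pow)*base + uint64(neu)
}

func main() {
	pow := uint64(1)
	for i := 0; i < windowSize-1; i++ {
		pow *= base
	}
	data := make([]byte, 256)
	for i := range data {
		data[i] = byte(i)
	}
	// Verify that sliding the hash agrees with hashing each window directly.
	h := polyHash(data[:windowSize])
	for i := 1; i+windowSize <= len(data); i++ {
		h = roll(h, data[i-1], data[i+windowSize-1], pow)
		if h != polyHash(data[i : i+windowSize]) {
			panic("rolled hash disagrees with direct hash")
		}
	}
	fmt.Println("rolling hash consistent across", len(data)-windowSize, "windows")
}
```

Unsigned overflow wraps identically in both computations, so the equality holds without an explicit modulus.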
================================================
FILE: internal/lfs/diff_test.go
================================================
package lfs
import (
"bytes"
"io"
"testing"
)
func TestBinaryDiff(t *testing.T) {
t.Run("Small Changes", func(t *testing.T) {
// Original content
oldData := []byte("Hello, this is a test file for binary diff!")
// Modified content (changed one word)
newData := []byte("Hello, this is a sample file for binary diff!")
// Generate diff
diff, err := BinaryDiff(bytes.NewReader(oldData), bytes.NewReader(newData))
if err != nil {
t.Fatal(err)
}
// Apply diff
var result bytes.Buffer
if err := ApplyDiff(bytes.NewReader(oldData), diff, &result); err != nil {
t.Fatal(err)
}
// Verify result
if !bytes.Equal(result.Bytes(), newData) {
t.Error("Diff application failed to reproduce new content")
}
})
t.Run("Large Block Changes", func(t *testing.T) {
// Create large test data
oldData := make([]byte, 100*1024) // 100KB
newData := make([]byte, 100*1024)
// Fill with pattern
for i := range oldData {
oldData[i] = byte(i % 256)
newData[i] = byte(i % 256)
}
// Modify a block in the middle
copy(newData[50*1024:], bytes.Repeat([]byte("modified"), 1024))
// Generate and apply diff
diff, err := BinaryDiff(bytes.NewReader(oldData), bytes.NewReader(newData))
if err != nil {
t.Fatal(err)
}
var result bytes.Buffer
if err := ApplyDiff(bytes.NewReader(oldData), diff, &result); err != nil {
t.Fatal(err)
}
if !bytes.Equal(result.Bytes(), newData) {
t.Error("Failed to reproduce large modified content")
}
})
t.Run("Rolling Hash", func(t *testing.T) {
rh := NewRollingHash()
// Use data longer than the hash window so at least one full window exists
data := make([]byte, 2*RollingHashWindow)
for i := range data {
data[i] = byte(i)
}
var hashes []uint32
// Calculate rolling hash for each window
for i := 0; i <= len(data)-RollingHashWindow; i++ {
// Reset hash for new window
rh = NewRollingHash()
for j := 0; j < RollingHashWindow; j++ {
rh.Update(data[i+j])
}
hashes = append(hashes, rh.hash)
}
// Verify we get different hashes for different windows
seen := make(map[uint32]bool)
for _, h := range hashes {
if seen[h] {
t.Error("Hash collision in rolling hash")
}
seen[h] = true
}
})
t.Run("Empty Input", func(t *testing.T) {
diff, err := BinaryDiff(bytes.NewReader([]byte{}), bytes.NewReader([]byte{}))
if err != nil {
t.Fatal(err)
}
if len(diff) != 0 {
t.Error("Expected empty diff for empty input")
}
})
t.Run("Append Content", func(t *testing.T) {
oldData := []byte("Original content")
newData := []byte("Original content with appended text")
diff, err := BinaryDiff(bytes.NewReader(oldData), bytes.NewReader(newData))
if err != nil {
t.Fatal(err)
}
var result bytes.Buffer
if err := ApplyDiff(bytes.NewReader(oldData), diff, &result); err != nil {
t.Fatal(err)
}
if !bytes.Equal(result.Bytes(), newData) {
t.Error("Failed to handle appended content")
}
})
t.Run("Streaming Large Content", func(t *testing.T) {
// Create 10MB of synthetic content; the second half of "new" is modified
oldReader := &infiniteReader{limit: 10 * 1024 * 1024} // 10MB
newReader := &infiniteReader{limit: 10 * 1024 * 1024, modified: true}
// Generate diff
diff, err := BinaryDiff(oldReader, newReader)
if err != nil {
t.Fatal(err)
}
// Verify the number of diff entries is reasonable
if len(diff) > 1024*1024 { // far fewer entries than input bytes
t.Error("Diff has too many entries for similar content")
}
})
}
// infiniteReader generates predictable content for testing
type infiniteReader struct {
pos int64
limit int64
modified bool
}
func (r *infiniteReader) Read(p []byte) (n int, err error) {
if r.pos >= r.limit {
return 0, io.EOF
}
for i := range p {
if r.pos+int64(i) >= r.limit {
r.pos += int64(i) // record how far we actually advanced
return i, io.EOF
}
if r.modified && r.pos+int64(i) >= r.limit/2 {
p[i] = byte((r.pos + int64(i)) % 251) // Different pattern
} else {
p[i] = byte((r.pos + int64(i)) % 250)
}
}
r.pos += int64(len(p))
return len(p), nil
}
================================================
FILE: internal/lfs/gc.go
================================================
package lfs
import (
"fmt"
"os"
"path/filepath"
"sync"
"time"
)
// GarbageCollector manages cleanup of unreferenced chunks
type GarbageCollector struct {
store *Store
mu sync.Mutex
done chan struct{}
}
// NewGarbageCollector creates a new garbage collector
func NewGarbageCollector(store *Store) *GarbageCollector {
return &GarbageCollector{
store: store,
done: make(chan struct{}),
}
}
// Start begins periodic garbage collection
func (gc *GarbageCollector) Start() {
ticker := time.NewTicker(24 * time.Hour) // Run daily
go func() {
for {
select {
case <-ticker.C:
if err := gc.Run(); err != nil {
fmt.Fprintf(os.Stderr, "Error during LFS garbage collection: %v\n", err)
}
case <-gc.done:
ticker.Stop()
return
}
}
}()
}
// Stop stops the garbage collector
func (gc *GarbageCollector) Stop() {
close(gc.done)
}
// Run performs garbage collection
func (gc *GarbageCollector) Run() error {
gc.mu.Lock()
defer gc.mu.Unlock()
// Get all chunks
chunksDir := filepath.Join(gc.store.root, ".evo", "chunks")
chunks, err := os.ReadDir(chunksDir)
if err != nil {
return fmt.Errorf("failed to read chunks directory: %w", err)
}
// Check each chunk
for _, chunk := range chunks {
if chunk.IsDir() {
continue
}
// Delete if not referenced
chunkHash := chunk.Name()
if !gc.store.isChunkReferenced(chunkHash) {
chunkPath := filepath.Join(chunksDir, chunkHash)
if err := os.Remove(chunkPath); err != nil {
return fmt.Errorf("failed to delete unreferenced chunk %s: %w", chunkHash, err)
}
}
}
return nil
}
// PruneTombstones removes old tombstones
func (gc *GarbageCollector) PruneTombstones(maxAge time.Duration) error {
gc.mu.Lock()
defer gc.mu.Unlock()
// Get all files
filesDir := filepath.Join(gc.store.root, ".evo", "lfs")
files, err := os.ReadDir(filesDir)
if err != nil {
return fmt.Errorf("failed to read files directory: %w", err)
}
cutoff := time.Now().Add(-maxAge)
// Check each file
for _, file := range files {
if !file.IsDir() {
continue
}
// Load file info
info, err := gc.store.loadFileInfo(file.Name())
if err != nil {
continue
}
// Delete if it's a tombstone older than maxAge
if info.RefCount == 0 && info.Created.Before(cutoff) {
if err := gc.store.DeleteFile(file.Name()); err != nil {
return fmt.Errorf("failed to delete old tombstone %s: %w", file.Name(), err)
}
}
}
return nil
}
================================================
FILE: internal/lfs/store.go
================================================
package lfs
import (
"encoding/json"
"fmt"
"io"
"os"
"path/filepath"
"sync"
"time"
)
// Store manages large file storage with deduplication
type Store struct {
mu sync.RWMutex
root string
}
// NewStore creates a new LFS store at the given root path
func NewStore(root string) *Store {
// Create necessary directories
os.MkdirAll(filepath.Join(root, ".evo", "lfs"), 0755)
os.MkdirAll(filepath.Join(root, ".evo", "chunks"), 0755)
return &Store{
root: root,
}
}
// StoreFile stores a file in chunks and returns file info
func (s *Store) StoreFile(id string, r io.Reader, size int64) (*FileInfo, error) {
s.mu.Lock()
defer s.mu.Unlock()
// Create file directory
fileDir := filepath.Join(s.root, ".evo", "lfs", id)
if err := os.MkdirAll(fileDir, 0755); err != nil {
return nil, err
}
// Calculate content hash and split into chunks
chunks := make([]ChunkInfo, 0)
contentHash := NewHash()
// Read file in chunks to calculate hash and store chunks
var totalSize int64
buf := make([]byte, ChunkSize)
for totalSize < size {
// Calculate remaining size and read size
remaining := size - totalSize
readSize := ChunkSize
if remaining < ChunkSize {
readSize = int(remaining)
}
// Read chunk
n, err := io.ReadFull(r, buf[:readSize])
if err != nil && err != io.EOF && err != io.ErrUnexpectedEOF {
return nil, err
}
if n == 0 {
break
}
// Calculate content hash for this chunk
contentHash.Write(buf[:n])
// Hash the chunk and store it content-addressed
chunkHash := HashBytes(buf[:n])
// Store chunk if it doesn't exist (WriteFile completes before buf is reused)
chunkPath := filepath.Join(s.root, ".evo", "chunks", chunkHash)
if _, err := os.Stat(chunkPath); os.IsNotExist(err) {
if err := os.WriteFile(chunkPath, buf[:n], 0644); err != nil {
return nil, err
}
}
chunks = append(chunks, ChunkInfo{
Hash: chunkHash,
Size: int64(n),
})
totalSize += int64(n)
// Break if we've read all the data
if totalSize >= size {
break
}
}
// Verify total size matches expected size
if totalSize != size {
return nil, fmt.Errorf("expected size %d, got %d", size, totalSize)
}
hashStr := contentHash.Sum()
// Check for existing file with same content hash
existingFiles, err := os.ReadDir(filepath.Join(s.root, ".evo", "lfs"))
if err == nil {
for _, f := range existingFiles {
if !f.IsDir() {
continue
}
existingInfo, err := s.loadFileInfo(f.Name())
if err != nil {
continue
}
if existingInfo.ContentHash == hashStr {
// Found existing file with same content
existingInfo.RefCount++
if err := s.saveFileInfo(f.Name(), existingInfo); err != nil {
return nil, err
}
// Create new file info pointing to same chunks
newInfo := &FileInfo{
ID: id,
Size: existingInfo.Size,
ContentHash: existingInfo.ContentHash,
NumChunks: existingInfo.NumChunks,
Chunks: existingInfo.Chunks,
RefCount: existingInfo.RefCount, // Use same ref count as existing file
Created: time.Now(),
}
if err := s.saveFileInfo(id, newInfo); err != nil {
return nil, err
}
return newInfo, nil
}
}
}
// Create file info
info := &FileInfo{
ID: id,
Size: size,
ContentHash: hashStr,
NumChunks: len(chunks),
Chunks: chunks,
RefCount: 1,
Created: time.Now(),
}
// Save file info
if err := s.saveFileInfo(id, info); err != nil {
return nil, err
}
return info, nil
}
func (s *Store) saveFileInfo(id string, info *FileInfo) error {
data, err := json.Marshal(info)
if err != nil {
return err
}
return os.WriteFile(filepath.Join(s.root, ".evo", "lfs", id, "info.json"), data, 0644)
}
func (s *Store) loadFileInfo(id string) (*FileInfo, error) {
data, err := os.ReadFile(filepath.Join(s.root, ".evo", "lfs", id, "info.json"))
if err != nil {
return nil, err
}
var info FileInfo
if err := json.Unmarshal(data, &info); err != nil {
return nil, err
}
return &info, nil
}
// ReadFile reads a file from chunks into the writer
func (s *Store) ReadFile(id string, w io.Writer) error {
s.mu.RLock()
defer s.mu.RUnlock()
// Load file info
info, err := s.loadFileInfo(id)
if err != nil {
return err
}
// Read chunks
for _, chunk := range info.Chunks {
data, err := os.ReadFile(filepath.Join(s.root, ".evo", "chunks", chunk.Hash))
if err != nil {
return err
}
if _, err := w.Write(data); err != nil {
return err
}
}
return nil
}
// DeleteFile deletes a file and its chunks if no longer referenced
func (s *Store) DeleteFile(id string) error {
s.mu.Lock()
defer s.mu.Unlock()
// Load file info
info, err := s.loadFileInfo(id)
if err != nil {
return err
}
// Delete file info
fileDir := filepath.Join(s.root, ".evo", "lfs", id)
if err := os.RemoveAll(fileDir); err != nil {
return err
}
// Find other files with same content hash
existingFiles, err := os.ReadDir(filepath.Join(s.root, ".evo", "lfs"))
if err == nil {
for _, f := range existingFiles {
if !f.IsDir() || f.Name() == id {
continue
}
existingInfo, err := s.loadFileInfo(f.Name())
if err != nil {
continue
}
if existingInfo.ContentHash == info.ContentHash {
// Found another file with same content, decrement its ref count
existingInfo.RefCount--
if err := s.saveFileInfo(f.Name(), existingInfo); err != nil {
return err
}
break
}
}
}
// Delete unreferenced chunks
for _, chunk := range info.Chunks {
chunkPath := filepath.Join(s.root, ".evo", "chunks", chunk.Hash)
if s.isChunkReferenced(chunk.Hash) {
continue
}
if err := os.Remove(chunkPath); err != nil {
return err
}
}
return nil
}
// isChunkReferenced checks if a chunk is referenced by any file
func (s *Store) isChunkReferenced(hash string) bool {
files, err := os.ReadDir(filepath.Join(s.root, ".evo", "lfs"))
if err != nil {
return false
}
for _, file := range files {
if !file.IsDir() {
continue
}
info, err := s.loadFileInfo(file.Name())
if err != nil {
continue
}
for _, chunk := range info.Chunks {
if chunk.Hash == hash {
return true
}
}
}
return false
}
func min(a, b int64) int64 {
if a < b {
return a
}
return b
}
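The content-hash deduplication and reference counting that StoreFile/DeleteFile implement on disk can be sketched in memory. This model (dedupStore is a hypothetical name) keeps one copy per distinct content and reclaims it when the last reference goes away:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// dedupStore keeps one copy of each distinct content, counted by
// references, mirroring the on-disk store's content-hash dedup.
type dedupStore struct {
	byHash map[string][]byte // content hash -> stored bytes
	refs   map[string]int    // content hash -> reference count
	ids    map[string]string // file id -> content hash
}

func newDedupStore() *dedupStore {
	return &dedupStore{byHash: map[string][]byte{}, refs: map[string]int{}, ids: map[string]string{}}
}

func (s *dedupStore) put(id string, data []byte) {
	sum := sha256.Sum256(data)
	h := hex.EncodeToString(sum[:])
	if _, ok := s.byHash[h]; !ok {
		s.byHash[h] = append([]byte(nil), data...) // first copy of this content
	}
	s.refs[h]++
	s.ids[id] = h
}

func (s *dedupStore) delete(id string) {
	h, ok := s.ids[id]
	if !ok {
		return
	}
	delete(s.ids, id)
	s.refs[h]--
	if s.refs[h] == 0 {
		delete(s.refs, h)
		delete(s.byHash, h) // last reference gone: reclaim the content
	}
}

func main() {
	s := newDedupStore()
	s.put("a", []byte("same bytes"))
	s.put("b", []byte("same bytes"))
	fmt.Println(len(s.byHash)) // 1: content stored once
	s.delete("a")
	s.delete("b")
	fmt.Println(len(s.byHash)) // 0: reclaimed after last reference
}
```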
================================================
FILE: internal/lfs/store_test.go
================================================
package lfs
import (
"bytes"
"os"
"path/filepath"
"testing"
)
func TestStore(t *testing.T) {
// Create temp dir for testing
tmpDir, err := os.MkdirTemp("", "evo-lfs-test-*")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(tmpDir)
// Create test file
testData := []byte("Hello, this is test data for LFS!")
testFile := filepath.Join(tmpDir, "test.txt")
if err := os.WriteFile(testFile, testData, 0644); err != nil {
t.Fatal(err)
}
// Initialize store
store := NewStore(tmpDir)
t.Run("Store and Read File", func(t *testing.T) {
// Store file
f, err := os.Open(testFile)
if err != nil {
t.Fatal(err)
}
defer f.Close()
info, err := store.StoreFile("test123", f, int64(len(testData)))
if err != nil {
t.Fatal(err)
}
// Verify file info
if info.Size != int64(len(testData)) {
t.Errorf("Expected size %d, got %d", len(testData), info.Size)
}
if info.RefCount != 1 {
t.Errorf("Expected refCount 1, got %d", info.RefCount)
}
// Read file back
var buf bytes.Buffer
if err := store.ReadFile("test123", &buf); err != nil {
t.Fatal(err)
}
// Verify content
if !bytes.Equal(buf.Bytes(), testData) {
t.Error("Read data doesn't match original")
}
})
t.Run("Deduplication", func(t *testing.T) {
// Store same file again
f, err := os.Open(testFile)
if err != nil {
t.Fatal(err)
}
defer f.Close()
info, err := store.StoreFile("test456", f, int64(len(testData)))
if err != nil {
t.Fatal(err)
}
// Verify increased ref count
if info.RefCount != 2 {
t.Errorf("Expected refCount 2, got %d", info.RefCount)
}
// Check chunks directory
chunksDir := filepath.Join(tmpDir, ".evo", "chunks")
entries, err := os.ReadDir(chunksDir)
if err != nil {
t.Fatal(err)
}
// Should only have one chunk since content is identical
if len(entries) != 1 {
t.Errorf("Expected 1 chunk, got %d", len(entries))
}
})
t.Run("Reference Counting", func(t *testing.T) {
// Delete first file
if err := store.DeleteFile("test123"); err != nil {
t.Fatal(err)
}
// Verify file still exists
info, err := store.loadFileInfo("test456")
if err != nil {
t.Fatal(err)
}
if info.RefCount != 1 {
t.Errorf("Expected refCount 1, got %d", info.RefCount)
}
// Delete last reference
if err := store.DeleteFile("test456"); err != nil {
t.Fatal(err)
}
// Verify file is gone
if _, err := store.loadFileInfo("test456"); err == nil {
t.Error("File should not exist")
}
})
}
func TestLargeFileChunking(t *testing.T) {
// Create temp dir
tmpDir, err := os.MkdirTemp("", "evo-lfs-chunks-*")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(tmpDir)
// Create large test data (5MB)
size := 5 * 1024 * 1024
data := make([]byte, size)
// Ensure each 1MB chunk is unique:
for i := 0; i < size; i++ {
chunkIndex := i >> 20 // i / 1 MB
data[i] = byte(chunkIndex)
}
// Write to testFile, then store in LFS, expecting 5 distinct chunks
testFile := filepath.Join(tmpDir, "large.bin")
if err := os.WriteFile(testFile, data, 0644); err != nil {
t.Fatal(err)
}
store := NewStore(tmpDir)
t.Run("Chunk Storage", func(t *testing.T) {
// Store large file
f, err := os.Open(testFile)
if err != nil {
t.Fatal(err)
}
defer f.Close()
info, err := store.StoreFile("large123", f, int64(size))
if err != nil {
t.Fatal(err)
}
// Verify number of chunks
expectedChunks := (size + ChunkSize - 1) / ChunkSize
if info.NumChunks != expectedChunks {
t.Errorf("Expected %d chunks, got %d", expectedChunks, info.NumChunks)
}
// Check chunks directory
chunksDir := filepath.Join(tmpDir, ".evo", "chunks")
entries, err := os.ReadDir(chunksDir)
if err != nil {
t.Fatal(err)
}
if len(entries) != expectedChunks {
t.Errorf("Expected %d chunk files, got %d", expectedChunks, len(entries))
}
})
t.Run("Streaming Read", func(t *testing.T) {
// Read file back in chunks
var buf bytes.Buffer
if err := store.ReadFile("large123", &buf); err != nil {
t.Fatal(err)
}
// Verify content
if !bytes.Equal(buf.Bytes(), data) {
t.Error("Read data doesn't match original")
}
})
}
func TestGarbageCollection(t *testing.T) {
// Create temp dir
tmpDir, err := os.MkdirTemp("", "evo-lfs-gc-*")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(tmpDir)
store := NewStore(tmpDir)
gc := NewGarbageCollector(store)
// Create test files
files := []struct {
name string
content []byte
}{
{"file1", []byte("content1")},
{"file2", []byte("content2")},
{"file3", []byte("content3")},
}
for _, f := range files {
r := bytes.NewReader(f.content)
if _, err := store.StoreFile(f.name, r, int64(len(f.content))); err != nil {
t.Fatal(err)
}
}
t.Run("GC Cleanup", func(t *testing.T) {
// Delete some files
if err := store.DeleteFile("file1"); err != nil {
t.Fatal(err)
}
if err := store.DeleteFile("file2"); err != nil {
t.Fatal(err)
}
// Run GC
if err := gc.Run(); err != nil {
t.Fatal(err)
}
// Verify only file3's chunks remain
chunksDir := filepath.Join(tmpDir, ".evo", "chunks")
entries, err := os.ReadDir(chunksDir)
if err != nil {
t.Fatal(err)
}
expectedChunks := 1 // Only file3's chunk should remain
if len(entries) != expectedChunks {
t.Errorf("Expected %d chunks after GC, got %d", expectedChunks, len(entries))
}
})
}
================================================
FILE: internal/lfs/types.go
================================================
package lfs
import (
"crypto/sha256"
"encoding/hex"
"hash"
"time"
)
const (
// ChunkSize is the size of each chunk in bytes (1MB)
ChunkSize = 1024 * 1024
)
// FileInfo contains metadata about a stored file
type FileInfo struct {
ID string `json:"id"` // Unique file identifier
Size int64 `json:"size"` // Total file size in bytes
ContentHash string `json:"contentHash"` // Hash of entire file content
NumChunks int `json:"numChunks"` // Number of chunks
Chunks []ChunkInfo `json:"chunks"` // List of chunks
RefCount int `json:"refCount"` // Number of references to this file
Created time.Time `json:"created"` // When the file was created
}
// ChunkInfo contains metadata about a file chunk
type ChunkInfo struct {
Hash string `json:"hash"` // Hash of chunk content
Size int64 `json:"size"` // Size of chunk in bytes
}
// Hash represents a content-addressable hash
type Hash struct {
h hash.Hash
}
// NewHash creates a new hash
func NewHash() *Hash {
return &Hash{h: sha256.New()}
}
// Write implements io.Writer
func (h *Hash) Write(p []byte) (n int, err error) {
return h.h.Write(p)
}
// Sum returns the hash as a hex string
func (h *Hash) Sum() string {
return hex.EncodeToString(h.h.Sum(nil))
}
// HashBytes returns the hash of a byte slice
func HashBytes(data []byte) string {
h := sha256.New()
h.Write(data)
return hex.EncodeToString(h.Sum(nil))
}
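Fixed-size chunking plus per-chunk SHA-256 is what makes deduplication work: identical chunks hash to identical addresses. A tiny sketch with an illustrative 4-byte chunk size (the store itself uses ChunkSize = 1 MB):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

const chunkSize = 4 // tiny for demonstration; the store uses 1 MB

// splitChunks splits data into fixed-size chunks and returns the
// hex SHA-256 hash of each chunk, mirroring the store's addressing.
func splitChunks(data []byte) []string {
	var hashes []string
	for off := 0; off < len(data); off += chunkSize {
		end := off + chunkSize
		if end > len(data) {
			end = len(data) // final chunk may be short
		}
		sum := sha256.Sum256(data[off:end])
		hashes = append(hashes, hex.EncodeToString(sum[:]))
	}
	return hashes
}

func main() {
	a := splitChunks([]byte("aaaabbbbcccc"))
	b := splitChunks([]byte("aaaaXXXXcccc"))
	// Identical chunks hash identically, so only the middle chunk differs.
	fmt.Println(a[0] == b[0], a[1] == b[1], a[2] == b[2]) // true false true
}
```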
================================================
FILE: internal/ops/binary_log.go
================================================
package ops
import (
"encoding/binary"
"evo/internal/crdt"
"io"
"os"
"github.com/google/uuid"
)
// WriteOp writes a single CRDT op in binary
func WriteOp(w io.Writer, op crdt.Operation) error {
// Format:
// [1 byte opType]
// [8 bytes lamport]
// [16 bytes nodeID]
// [16 bytes fileID]
// [16 bytes lineID]
// [4 bytes contentLen]
// [content]
// Note: the Stream and Timestamp fields of crdt.Operation are not persisted in this format.
buf := make([]byte, 1+8+16+16+16+4)
buf[0] = byte(op.Type)
binary.BigEndian.PutUint64(buf[1:9], op.Lamport)
copy(buf[9:25], op.NodeID[:])
copy(buf[25:41], op.FileID[:])
copy(buf[41:57], op.LineID[:])
contentBytes := []byte(op.Content)
binary.BigEndian.PutUint32(buf[57:61], uint32(len(contentBytes)))
if _, err := w.Write(buf); err != nil {
return err
}
if len(contentBytes) > 0 {
if _, err := w.Write(contentBytes); err != nil {
return err
}
}
return nil
}
func ReadOp(r io.Reader) (*crdt.Operation, error) {
header := make([]byte, 1+8+16+16+16+4)
_, err := io.ReadFull(r, header)
if err != nil {
return nil, err
}
opType := crdt.OpType(header[0])
lamport := binary.BigEndian.Uint64(header[1:9])
var nodeID, fileID, lineID uuid.UUID
copy(nodeID[:], header[9:25])
copy(fileID[:], header[25:41])
copy(lineID[:], header[41:57])
contentLen := binary.BigEndian.Uint32(header[57:61])
content := make([]byte, contentLen)
if contentLen > 0 {
if _, err := io.ReadFull(r, content); err != nil {
return nil, err
}
}
return &crdt.Operation{
Type: opType,
Lamport: lamport,
NodeID: nodeID,
FileID: fileID,
LineID: lineID,
Content: string(content),
}, nil
}
func LoadAllOps(filename string) ([]crdt.Operation, error) {
var out []crdt.Operation
f, err := os.Open(filename)
if os.IsNotExist(err) {
return out, nil
}
if err != nil {
return nil, err
}
defer f.Close()
for {
op, e := ReadOp(f)
if e == io.EOF || e == io.ErrUnexpectedEOF {
// clean end of log, or a truncated trailing record
break
}
if e != nil {
return out, e
}
out = append(out, *op)
}
return out, nil
}
func AppendOp(filename string, op crdt.Operation) error {
if err := os.MkdirAll(dirOf(filename), 0755); err != nil {
return err
}
f, err := os.OpenFile(filename, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0644)
if err != nil {
return err
}
defer f.Close()
return WriteOp(f, op)
}
func dirOf(fp string) string {
for i := len(fp) - 1; i >= 0; i-- {
if fp[i] == '/' || fp[i] == '\\' {
return fp[:i]
}
}
return "."
}
================================================
FILE: internal/ops/ops.go
================================================
package ops
import (
"evo/internal/crdt"
"evo/internal/index"
"evo/internal/lfs"
"evo/internal/util"
"fmt"
"os"
"path/filepath"
"strings"
"sync"
"time"
"github.com/google/uuid"
)
// IngestLocalChanges checks each file in the working directory, handles large-file threshold, stable fileID, then line CRDT logic.
func IngestLocalChanges(repoPath, stream string) ([]string, error) {
files, err := util.ListAllFiles(repoPath)
if err != nil {
return nil, err
}
var changed []string
var mu sync.Mutex
var wg sync.WaitGroup
chWork := make(chan string, len(files))
chErr := make(chan error, 8)
for _, f := range files {
chWork <- f
}
close(chWork)
for i := 0; i < 8; i++ {
wg.Add(1)
go func() {
defer wg.Done()
for rel := range chWork {
if strings.HasPrefix(rel, ".evo") {
continue
}
abs := filepath.Join(repoPath, rel)
fi, errStat := os.Stat(abs)
if errStat != nil || fi.IsDir() {
continue
}
ok, e2 := processFile(repoPath, stream, rel, abs, fi.Size())
if e2 != nil {
chErr <- e2
return
}
if ok {
mu.Lock()
changed = append(changed, rel)
mu.Unlock()
}
}
}()
}
wg.Wait()
close(chErr)
for e := range chErr {
if e != nil {
return nil, e
}
}
return changed, nil
}
func processFile(repoPath, stream, relPath, absPath string, fsize int64) (bool, error) {
fileID, err := index.LookupFileID(repoPath, relPath)
if err != nil {
// not tracked => skip
return false, nil
}
opsFile := filepath.Join(repoPath, ".evo", "ops", stream, fileID+".bin")
existing, _ := LoadAllOps(opsFile)
// build doc
doc := crdt.NewRGA()
for _, op := range existing {
if err := doc.Apply(op); err != nil {
return false, fmt.Errorf("applying operation: %v", err)
}
}
threshold := readLargeThreshold(repoPath)
if fsize > threshold {
// large file => store stub
return storeLargeFile(repoPath, stream, fileID, relPath, absPath, doc, opsFile)
}
// normal text => read lines
data, err := os.ReadFile(absPath)
if err != nil {
return false, err
}
diskLines := strings.Split(strings.ReplaceAll(string(data), "\r\n", "\n"), "\n")
docLines := doc.Materialize()
if eqLines(docLines, diskLines) {
return false, nil
}
changed := false
var lamport uint64 = uint64(time.Now().UnixNano())
nodeID := uuid.New()
lineIDs := doc.GetLineIDs()
prefix := 0
minLen := len(docLines)
if len(diskLines) < minLen {
minLen = len(diskLines)
}
for prefix < minLen && docLines[prefix] == diskLines[prefix] {
prefix++
}
suffix := 0
for suffix < minLen-prefix && docLines[len(docLines)-1-suffix] == diskLines[len(diskLines)-1-suffix] {
suffix++
}
docMid := docLines[prefix : len(docLines)-suffix]
diskMid := diskLines[prefix : len(diskLines)-suffix]
startPos := prefix
var i int
for i = 0; i < len(docMid) && i < len(diskMid); i++ {
if docMid[i] != diskMid[i] {
op := crdt.Operation{
Type: crdt.OpUpdate,
Lamport: lamport + uint64(i),
NodeID: nodeID,
FileID: parseUUID(fileID),
LineID: lineIDs[startPos+i],
Content: diskMid[i],
Stream: stream,
Timestamp: time.Now(),
}
if err := AppendOp(opsFile, op); err != nil {
return false, err
}
changed = true
}
}
for j := len(diskMid); j < len(docMid); j++ {
op := crdt.Operation{
Type: crdt.OpDelete,
Lamport: lamport + uint64(j),
NodeID: nodeID,
FileID: parseUUID(fileID),
LineID: lineIDs[startPos+j],
Stream: stream,
Timestamp: time.Now(),
}
if err := AppendOp(opsFile, op); err != nil {
return false, err
}
changed = true
}
if i < len(diskMid) {
// disk has extra lines => insert
for j := i; j < len(diskMid); j++ {
insOp := crdt.Operation{
FileID: parseUUID(fileID),
Type: crdt.OpInsert,
Lamport: lamport + uint64(j),
NodeID: nodeID,
LineID: uuid.New(),
Content: diskMid[j],
Stream: stream,
Timestamp: time.Now(),
}
if err := AppendOp(opsFile, insOp); err != nil {
return false, err
}
changed = true
}
}
return changed, nil
}
func storeLargeFile(repoPath, stream, fileID, relPath, absPath string, doc *crdt.RGA, opsFile string) (bool, error) {
// Initialize LFS store
store := lfs.NewStore(repoPath)
// Open file
f, err := os.Open(absPath)
if err != nil {
return false, err
}
defer f.Close()
// Get file info
stat, err := f.Stat()
if err != nil {
return false, err
}
// Store in LFS
info, err := store.StoreFile(fileID, f, stat.Size())
if err != nil {
return false, err
}
// Add LFS stub line
docLines := doc.Materialize()
if len(docLines) == 1 && strings.HasPrefix(docLines[0], "EVO-LFS:") {
// already a stub
return false, nil
}
// Replace content with LFS stub
lop := crdt.Operation{
FileID: parseUUID(fileID),
Type: crdt.OpInsert,
Lamport: uint64(time.Now().UnixNano()),
NodeID: uuid.New(),
LineID: uuid.New(),
Content: fmt.Sprintf("EVO-LFS:%s:%d", fileID, info.Size),
Stream: stream,
Timestamp: time.Now(),
}
if err := AppendOp(opsFile, lop); err != nil {
return false, err
}
return true, nil
}
func copyFile(src, dst string) error {
s, err := os.Open(src)
if err != nil {
return err
}
defer s.Close()
d, err := os.Create(dst)
if err != nil {
return err
}
defer d.Close()
buf := make([]byte, 64*1024)
for {
n, e := s.Read(buf)
if n > 0 {
if _, werr := d.Write(buf[:n]); werr != nil {
return werr
}
}
if e != nil {
// io.EOF or another read error ends the copy
break
}
}
return nil
}
func readLargeThreshold(repoPath string) int64 {
// TODO: read files.largeThreshold from config; fixed at 1MB for now
return 1_000_000
}
func parseUUID(s string) uuid.UUID {
id, _ := uuid.Parse(s)
return id
}
func eqLines(a, b []string) bool {
if len(a) != len(b) {
return false
}
for i := range a {
if a[i] != b[i] {
return false
}
}
return true
}
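processFile narrows the diff by trimming the longest common prefix and suffix before comparing the middles. That step in isolation (trim is a hypothetical helper extracted for illustration):

```go
package main

import "fmt"

// trim computes the longest common prefix and suffix of two line
// slices, the same narrowing step processFile uses before diffing.
// The suffix scan is capped at minLen-prefix so the two never overlap.
func trim(a, b []string) (prefix, suffix int) {
	minLen := len(a)
	if len(b) < minLen {
		minLen = len(b)
	}
	for prefix < minLen && a[prefix] == b[prefix] {
		prefix++
	}
	for suffix < minLen-prefix && a[len(a)-1-suffix] == b[len(b)-1-suffix] {
		suffix++
	}
	return prefix, suffix
}

func main() {
	doc := []string{"a", "b", "c", "d"}
	disk := []string{"a", "x", "y", "d"}
	p, s := trim(doc, disk)
	fmt.Println(p, s) // 1 1: only the middle ("b","c" vs "x","y") needs ops
}
```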
================================================
FILE: internal/repo/repo.go
================================================
package repo
import (
"errors"
"evo/internal/crdt/compact"
"evo/internal/lfs"
"os"
"path/filepath"
"sync"
)
const EvoDir = ".evo"
var (
compactionService *compact.CompactionService
garbageCollector *lfs.GarbageCollector
serviceMutex sync.Mutex
)
// InitRepo creates the .evo folder structure, default stream, config, index, etc.
func InitRepo(path string) error {
serviceMutex.Lock()
defer serviceMutex.Unlock()
evoPath := filepath.Join(path, EvoDir)
if _, err := os.Stat(evoPath); err == nil {
return errors.New("evo repository already exists here")
}
dirs := []string{
filepath.Join(path, EvoDir),
filepath.Join(path, EvoDir, "ops"),
filepath.Join(path, EvoDir, "commits"),
filepath.Join(path, EvoDir, "config"),
filepath.Join(path, EvoDir, "streams"),
filepath.Join(path, EvoDir, "largefiles"),
filepath.Join(path, EvoDir, "cache"),
filepath.Join(path, EvoDir, "chunks"),
filepath.Join(path, EvoDir, "lfs"),
}
for _, d := range dirs {
if err := os.MkdirAll(d, 0755); err != nil {
return err
}
}
// Start compaction service
cs := compact.NewCompactionService(path, compact.DefaultConfig())
if err := cs.Start(); err != nil {
return err
}
compactionService = cs
// Start LFS garbage collector
store := lfs.NewStore(path)
gc := lfs.NewGarbageCollector(store)
gc.Start()
garbageCollector = gc
// HEAD => "main"
if err := os.WriteFile(filepath.Join(evoPath, "HEAD"), []byte("main"), 0644); err != nil {
return err
}
// create stream "main"
if err := os.WriteFile(filepath.Join(evoPath, "streams", "main"), []byte{}, 0644); err != nil {
return err
}
// create empty .evo/index
if err := os.WriteFile(filepath.Join(evoPath, "index"), []byte{}, 0644); err != nil {
return err
}
return nil
}
// Cleanup stops all background services
func Cleanup() {
serviceMutex.Lock()
defer serviceMutex.Unlock()
if compactionService != nil {
compactionService.Stop()
compactionService = nil
}
if garbageCollector != nil {
garbageCollector.Stop()
garbageCollector = nil
}
}
// FindRepoRoot searches for .evo directory walking up from start
func FindRepoRoot(start string) (string, error) {
cur, err := filepath.Abs(start)
if err != nil {
return "", err
}
for {
if _, err := os.Stat(filepath.Join(cur, EvoDir)); err == nil {
return cur, nil
}
parent := filepath.Dir(cur)
if parent == cur {
return "", os.ErrNotExist
}
cur = parent
}
}
================================================
FILE: internal/repo/repo_test.go
================================================
package repo
import (
"os"
"path/filepath"
"testing"
)
func TestRepo(t *testing.T) {
// Create temp dir for testing
tmpDir, err := os.MkdirTemp("", "evo-repo-test-*")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(tmpDir)
t.Run("Init Repository", func(t *testing.T) {
repoPath := filepath.Join(tmpDir, "test-repo")
if err := InitRepo(repoPath); err != nil {
t.Fatal(err)
}
// Verify directory structure
dirs := []string{
".evo",
".evo/ops",
".evo/commits",
".evo/config",
".evo/streams",
".evo/chunks",
".evo/lfs",
}
for _, dir := range dirs {
path := filepath.Join(repoPath, dir)
if _, err := os.Stat(path); os.IsNotExist(err) {
t.Errorf("Directory %s not created", dir)
}
}
// Verify HEAD file
head, err := os.ReadFile(filepath.Join(repoPath, ".evo", "HEAD"))
if err != nil {
t.Fatal(err)
}
if string(head) != "main" {
t.Errorf("Expected HEAD to be 'main', got '%s'", string(head))
}
})
t.Run("Find Repository Root", func(t *testing.T) {
// Create test repository
repoPath := filepath.Join(tmpDir, "find-repo-test")
if err := InitRepo(repoPath); err != nil {
t.Fatal(err)
}
// Create nested directory structure
nestedPath := filepath.Join(repoPath, "dir1", "dir2", "dir3")
if err := os.MkdirAll(nestedPath, 0755); err != nil {
t.Fatal(err)
}
// Test finding root from nested directory
found, err := FindRepoRoot(nestedPath)
if err != nil {
t.Fatal(err)
}
if found != repoPath {
t.Errorf("Expected root %s, got %s", repoPath, found)
}
// Test finding root from repository root
found, err = FindRepoRoot(repoPath)
if err != nil {
t.Fatal(err)
}
if found != repoPath {
t.Errorf("Expected root %s, got %s", repoPath, found)
}
// Test finding root from non-repository directory
nonRepoPath := filepath.Join(tmpDir, "non-repo")
if err := os.MkdirAll(nonRepoPath, 0755); err != nil {
t.Fatal(err)
}
_, err = FindRepoRoot(nonRepoPath)
if err == nil {
t.Error("Expected error when finding root in non-repository")
}
})
t.Run("Multiple Init Prevention", func(t *testing.T) {
repoPath := filepath.Join(tmpDir, "multi-init-test")
// First init should succeed
if err := InitRepo(repoPath); err != nil {
t.Fatal(err)
}
// Second init should fail
if err := InitRepo(repoPath); err == nil {
t.Error("Expected error on second init")
}
})
t.Run("Init with Existing Files", func(t *testing.T) {
repoPath := filepath.Join(tmpDir, "existing-files-test")
// Create some existing files
if err := os.MkdirAll(repoPath, 0755); err != nil {
t.Fatal(err)
}
if err := os.WriteFile(filepath.Join(repoPath, "test.txt"), []byte("test"), 0644); err != nil {
t.Fatal(err)
}
// Init should succeed with existing files
if err := InitRepo(repoPath); err != nil {
t.Fatal(err)
}
// Verify existing files are untouched
if _, err := os.Stat(filepath.Join(repoPath, "test.txt")); os.IsNotExist(err) {
t.Error("Existing file was removed during init")
}
})
t.Run("Init Permission Handling", func(t *testing.T) {
repoPath := filepath.Join(tmpDir, "permission-test")
	// Permission bits are not enforced for root, so the init below would
	// not fail there.
	if os.Geteuid() == 0 {
		t.Skip("running as root; restricted permissions are not enforced")
	}
	// Create directory with restricted permissions
	if err := os.MkdirAll(repoPath, 0444); err != nil {
		t.Fatal(err)
	}
// Init should fail with insufficient permissions
err := InitRepo(repoPath)
if err == nil {
t.Error("Expected error with insufficient permissions")
}
// Reset permissions
if err := os.Chmod(repoPath, 0755); err != nil {
t.Fatal(err)
}
// Init should now succeed
if err := InitRepo(repoPath); err != nil {
t.Fatal(err)
}
})
t.Run("Service Initialization", func(t *testing.T) {
repoPath := filepath.Join(tmpDir, "service-test")
if err := InitRepo(repoPath); err != nil {
t.Fatal(err)
}
// Verify services are running by checking their directories
services := []string{
".evo/chunks", // LFS chunks directory
".evo/lfs", // LFS metadata directory
}
for _, dir := range services {
path := filepath.Join(repoPath, dir)
if _, err := os.Stat(path); os.IsNotExist(err) {
t.Errorf("Service directory %s not created", dir)
}
}
})
}
================================================
FILE: internal/signing/signing.go
================================================
package signing
import (
"crypto/ed25519"
"crypto/rand"
"encoding/hex"
"evo/internal/config"
"evo/internal/types"
"fmt"
"os"
"path/filepath"
"time"
)
type KeyPair struct {
PrivateKey ed25519.PrivateKey
PublicKey ed25519.PublicKey
Created time.Time
}
// GenerateKeyPair creates a new Ed25519 key pair and stores it
func GenerateKeyPair(repoPath string) error {
pub, priv, err := ed25519.GenerateKey(rand.Reader)
if err != nil {
return fmt.Errorf("failed to generate key pair: %w", err)
}
keyPath, err := getKeyPath(repoPath)
if err != nil {
return err
}
// Ensure directory exists
if err := os.MkdirAll(filepath.Dir(keyPath), 0700); err != nil {
return fmt.Errorf("failed to create key directory: %w", err)
}
// Write private key
if err := os.WriteFile(keyPath, priv, 0600); err != nil {
return fmt.Errorf("failed to write private key: %w", err)
}
// Write public key
pubFile := keyPath + ".pub"
if err := os.WriteFile(pubFile, pub, 0644); err != nil {
return fmt.Errorf("failed to write public key: %w", err)
}
fmt.Printf("Generated new Ed25519 key pair:\n")
fmt.Printf("Private key: %s\n", keyPath)
fmt.Printf("Public key: %s\n", pubFile)
return nil
}
// LoadKeyPair loads an existing key pair from disk
func LoadKeyPair(repoPath string) (*KeyPair, error) {
keyPath, err := getKeyPath(repoPath)
if err != nil {
return nil, err
}
priv, err := os.ReadFile(keyPath)
if err != nil {
return nil, fmt.Errorf("failed to read private key: %w", err)
}
// Validate private key
var pk ed25519.PrivateKey
if len(priv) == ed25519.SeedSize {
pk = ed25519.NewKeyFromSeed(priv)
} else if len(priv) == ed25519.PrivateKeySize {
pk = priv
} else {
return nil, fmt.Errorf("invalid Ed25519 key length: %d", len(priv))
}
// Load public key
pub, err := os.ReadFile(keyPath + ".pub")
if err != nil {
return nil, fmt.Errorf("failed to read public key: %w", err)
}
if len(pub) != ed25519.PublicKeySize {
return nil, fmt.Errorf("invalid public key length: %d", len(pub))
}
return &KeyPair{
PrivateKey: pk,
PublicKey: pub,
Created: getFileCreationTime(keyPath),
}, nil
}
// SignCommit signs a commit using the configured key
func SignCommit(c *types.Commit, repoPath string) (string, error) {
kp, err := LoadKeyPair(repoPath)
if err != nil {
return "", fmt.Errorf("failed to load signing key: %w", err)
}
msg := types.CommitHashString(c)
sig := ed25519.Sign(kp.PrivateKey, []byte(msg))
return hex.EncodeToString(sig), nil
}
// VerifyCommit verifies a commit's signature
func VerifyCommit(c *types.Commit, repoPath string) (bool, error) {
if c.Signature == "" {
return false, fmt.Errorf("commit has no signature")
}
kp, err := LoadKeyPair(repoPath)
if err != nil {
return false, fmt.Errorf("failed to load public key: %w", err)
}
sigBytes, err := hex.DecodeString(c.Signature)
if err != nil {
return false, fmt.Errorf("invalid signature format: %w", err)
}
msg := types.CommitHashString(c)
if !ed25519.Verify(kp.PublicKey, []byte(msg), sigBytes) {
return false, fmt.Errorf("signature verification failed")
}
return true, nil
}
func getKeyPath(repoPath string) (string, error) {
keyPath, err := config.GetConfigValue(repoPath, "signing.keyPath")
if err != nil {
return "", fmt.Errorf("failed to get key path from config: %w", err)
}
if keyPath == "" {
home, err := os.UserHomeDir()
if err != nil {
return "", fmt.Errorf("failed to get user home directory: %w", err)
}
keyPath = filepath.Join(home, ".config", "evo", "signing_key")
}
return keyPath, nil
}
// getFileCreationTime returns the file's modification time as a stand-in:
// os.FileInfo exposes no portable creation timestamp.
func getFileCreationTime(path string) time.Time {
	info, err := os.Stat(path)
	if err != nil {
		return time.Time{}
	}
return time.Time{}
}
return info.ModTime()
}
================================================
FILE: internal/signing/signing_test.go
================================================
package signing
import (
"evo/internal/config"
"evo/internal/types"
"os"
"path/filepath"
"testing"
)
func TestSigningKeyPair(t *testing.T) {
// Create temp directory for test
tmpDir := t.TempDir()
keyPath := filepath.Join(tmpDir, "signing_key")
// Set up config for test
err := config.SetConfigValue(tmpDir, "signing.keyPath", keyPath)
if err != nil {
t.Fatalf("Failed to set config value: %v", err)
}
t.Run("Generate_Key_Pair", func(t *testing.T) {
err := GenerateKeyPair(tmpDir)
if err != nil {
t.Fatalf("Failed to generate key pair: %v", err)
}
// Check that key files exist
if _, err := os.Stat(keyPath); err != nil {
t.Errorf("Private key file not found: %v", err)
}
if _, err := os.Stat(keyPath + ".pub"); err != nil {
t.Errorf("Public key file not found: %v", err)
}
})
t.Run("Load_Key_Pair", func(t *testing.T) {
kp, err := LoadKeyPair(tmpDir)
if err != nil {
t.Fatalf("Failed to load key pair: %v", err)
}
if kp.PrivateKey == nil {
t.Error("Private key is nil")
}
if kp.PublicKey == nil {
t.Error("Public key is nil")
}
if kp.Created.IsZero() {
t.Error("Creation time not set")
}
})
t.Run("Sign_and_Verify_Commit", func(t *testing.T) {
commit := &types.Commit{
Message: "Test commit",
}
sig, err := SignCommit(commit, tmpDir)
if err != nil {
t.Fatalf("Failed to sign commit: %v", err)
}
if sig == "" {
t.Error("Empty signature returned")
}
commit.Signature = sig
valid, err := VerifyCommit(commit, tmpDir)
if err != nil {
t.Errorf("Failed to verify commit: %v", err)
}
if !valid {
t.Error("Signature verification failed")
}
})
t.Run("Invalid_Signature", func(t *testing.T) {
commit := &types.Commit{
Message: "Test commit",
Signature: "invalid",
}
valid, err := VerifyCommit(commit, tmpDir)
if err == nil {
t.Error("Expected error for invalid signature")
}
if valid {
t.Error("Invalid signature reported as valid")
}
})
t.Run("Missing_Signature", func(t *testing.T) {
commit := &types.Commit{
Message: "Test commit",
}
valid, err := VerifyCommit(commit, tmpDir)
if err == nil {
t.Error("Expected error for missing signature")
}
if valid {
t.Error("Missing signature reported as valid")
}
})
}
================================================
FILE: internal/status/status.go
================================================
package status
import (
"bufio"
"evo/internal/ignore"
"evo/internal/streams"
"fmt"
"os"
"path/filepath"
"sort"
"strings"
)
type FileStatus struct {
Path string
Status string // "modified", "new", "deleted", "renamed"
OldPath string // only set for renamed files
}
type RepoStatus struct {
CurrentStream string
Files []FileStatus
}
// loadIndex loads the index file directly to avoid dependency cycles
func loadIndex(repoPath string) (map[string]string, error) {
indexPath := filepath.Join(repoPath, ".evo", "index")
file, err := os.Open(indexPath)
if os.IsNotExist(err) {
return make(map[string]string), nil
}
if err != nil {
return nil, err
}
defer file.Close()
idx := make(map[string]string)
scanner := bufio.NewScanner(file)
	for scanner.Scan() {
		line := scanner.Text()
		// Split on the last ':' so paths containing ':' still parse;
		// lines without a separator are skipped.
		if i := strings.LastIndex(line, ":"); i > 0 {
			idx[line[:i]] = line[i+1:]
		}
	}
return idx, scanner.Err()
}
func GetStatus(repoPath string) (*RepoStatus, error) {
// Get current stream
stream, err := streams.CurrentStream(repoPath)
if err != nil {
return nil, fmt.Errorf("failed to get current stream: %w", err)
}
// Verify stream exists
streamPath := filepath.Join(repoPath, ".evo", "streams", stream)
if _, err := os.Stat(streamPath); os.IsNotExist(err) {
return nil, fmt.Errorf("stream %s does not exist", stream)
}
// Load ignore patterns
ignoreList, err := ignore.LoadIgnoreFile(repoPath)
if err != nil {
return nil, fmt.Errorf("failed to load ignore file: %w", err)
}
// Get current index state
idx, err := loadIndex(repoPath)
if err != nil {
return nil, fmt.Errorf("failed to load index: %w", err)
}
status := &RepoStatus{
CurrentStream: stream,
}
// Track processed files and their content hashes
processedFiles := make(map[string]string) // path -> content hash
// Walk the repository to find new and modified files
err = filepath.Walk(repoPath, func(path string, info os.FileInfo, err error) error {
if err != nil {
return err
}
// Get relative path
relPath, err := filepath.Rel(repoPath, path)
if err != nil {
return err
}
// Skip the .evo directory
if strings.HasPrefix(relPath, ".evo") {
if info.IsDir() {
return filepath.SkipDir
}
return nil
}
// Skip directories
if info.IsDir() {
return nil
}
// Skip ignored files
if ignoreList.IsIgnored(relPath) {
return nil
}
// Read current file content
currentContent, err := os.ReadFile(path)
if err != nil {
return err
}
// Store content hash for rename detection
processedFiles[relPath] = string(currentContent)
// Check if file is in index
fileID, exists := idx[relPath]
if !exists {
// Check if this might be a renamed file
var foundRename bool
for oldPath, oldID := range idx {
if oldPath == relPath {
continue
}
storedContent, err := os.ReadFile(filepath.Join(repoPath, ".evo", "objects", oldID))
if err == nil && string(currentContent) == string(storedContent) {
// Found a rename
status.Files = append(status.Files, FileStatus{
Path: relPath,
Status: "renamed",
OldPath: oldPath,
})
foundRename = true
break
}
}
if !foundRename {
// New file
status.Files = append(status.Files, FileStatus{
Path: relPath,
Status: "new",
})
}
return nil
}
// Check if file has been modified
storedContent, err := os.ReadFile(filepath.Join(repoPath, ".evo", "objects", fileID))
if err != nil || string(currentContent) != string(storedContent) {
status.Files = append(status.Files, FileStatus{
Path: relPath,
Status: "modified",
})
}
return nil
})
if err != nil {
return nil, fmt.Errorf("failed to walk repository: %w", err)
}
	// Check for deleted files
	for path, id := range idx {
		// Skip if file was already processed
		if _, exists := processedFiles[path]; exists {
			continue
		}
		// Skip entries the walk above already reported as renames, so a
		// rename is not emitted twice.
		alreadyRenamed := false
		for _, f := range status.Files {
			if f.Status == "renamed" && f.OldPath == path {
				alreadyRenamed = true
				break
			}
		}
		if alreadyRenamed {
			continue
		}
		// Check if the file was renamed by comparing the stored content
		// against the files on disk; read the stored object once, not
		// once per candidate.
		var renamed bool
		if storedContent, err := os.ReadFile(filepath.Join(repoPath, ".evo", "objects", id)); err == nil {
			for newPath, content := range processedFiles {
				if content == string(storedContent) {
					status.Files = append(status.Files, FileStatus{
						Path:    newPath,
						Status:  "renamed",
						OldPath: path,
					})
					renamed = true
					break
				}
			}
		}
		if !renamed {
			status.Files = append(status.Files, FileStatus{
				Path:   path,
				Status: "deleted",
			})
		}
	}
// Sort files by status and path
sort.Slice(status.Files, func(i, j int) bool {
if status.Files[i].Status != status.Files[j].Status {
return status.Files[i].Status < status.Files[j].Status
}
return status.Files[i].Path < status.Files[j].Path
})
return status, nil
}
// FormatStatus returns a formatted string representation of the repository status
func FormatStatus(status *RepoStatus) string {
var sb strings.Builder
sb.WriteString(fmt.Sprintf("On stream %s\n\n", status.CurrentStream))
if len(status.Files) == 0 {
sb.WriteString("nothing to commit, working tree clean\n")
return sb.String()
}
// Group files by status
	var modified, untracked, deleted, renamed []FileStatus
	for _, f := range status.Files {
		switch f.Status {
		case "modified":
			modified = append(modified, f)
		case "new":
			untracked = append(untracked, f)
		case "deleted":
			deleted = append(deleted, f)
		case "renamed":
			renamed = append(renamed, f)
		}
	}
	if len(modified) > 0 {
		sb.WriteString("Changes not staged for commit:\n")
		for _, f := range modified {
			sb.WriteString(fmt.Sprintf(" modified: %s\n", f.Path))
		}
		sb.WriteString("\n")
	}
	if len(untracked) > 0 {
		sb.WriteString("Untracked files:\n")
		for _, f := range untracked {
			sb.WriteString(fmt.Sprintf(" %s\n", f.Path))
		}
		sb.WriteString("\n")
	}
if len(deleted) > 0 {
sb.WriteString("Deleted files:\n")
for _, f := range deleted {
sb.WriteString(fmt.Sprintf(" %s\n", f.Path))
}
sb.WriteString("\n")
}
if len(renamed) > 0 {
sb.WriteString("Renamed files:\n")
for _, f := range renamed {
sb.WriteString(fmt.Sprintf(" %s -> %s\n", f.OldPath, f.Path))
}
sb.WriteString("\n")
}
return sb.String()
}
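The index file read by loadIndex is a shared contract: one `path:id` pair per line, with malformed lines silently skipped. A hypothetical stand-alone parser, `parseIndexLines`, makes the format easy to exercise without a repository on disk:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseIndexLines parses the "path:id" line format used by the .evo/index
// file; lines that do not contain exactly one ':' are skipped.
func parseIndexLines(data string) map[string]string {
	idx := make(map[string]string)
	sc := bufio.NewScanner(strings.NewReader(data))
	for sc.Scan() {
		parts := strings.Split(sc.Text(), ":")
		if len(parts) == 2 {
			idx[parts[0]] = parts[1]
		}
	}
	return idx
}

func main() {
	idx := parseIndexLines("a.txt:id1\nbad-line\nb.txt:id2\n")
	fmt.Println(len(idx), idx["a.txt"], idx["b.txt"])
}
```

Note the caveat this format carries: a path containing `:` splits into more than two parts and is dropped, which is why paths are best kept colon-free.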
================================================
FILE: internal/status/status_test.go
================================================
package status
import (
"os"
"path/filepath"
"strings"
"testing"
)
func setupTestRepo(t *testing.T) string {
// Create a temporary directory for the test repository
tmpDir, err := os.MkdirTemp("", "evo-status-test")
if err != nil {
t.Fatal(err)
}
// Create .evo directory structure
evoDir := filepath.Join(tmpDir, ".evo")
for _, dir := range []string{
"objects",
"streams",
"commits",
} {
if err := os.MkdirAll(filepath.Join(evoDir, dir), 0755); err != nil {
t.Fatal(err)
}
}
// Create main stream
if err := os.WriteFile(filepath.Join(evoDir, "streams", "main"), []byte{}, 0644); err != nil {
t.Fatal(err)
}
// Set current stream
if err := os.WriteFile(filepath.Join(evoDir, "HEAD"), []byte("main"), 0644); err != nil {
t.Fatal(err)
}
return tmpDir
}
func TestGetStatus(t *testing.T) {
repoPath := setupTestRepo(t)
defer os.RemoveAll(repoPath)
// Create .evo-ignore file first
	// Use an explicit string so no source indentation leaks into the patterns
	ignoreContent := "# Test ignore file\n*.log\nbuild/\n**/*.tmp\n"
if err := os.WriteFile(filepath.Join(repoPath, ".evo-ignore"), []byte(ignoreContent), 0644); err != nil {
t.Fatal(err)
}
// Create some test files
files := map[string]string{
"file1.txt": "content1",
"file2.txt": "content2",
"dir/file3.txt": "content3",
}
for path, content := range files {
fullPath := filepath.Join(repoPath, path)
if err := os.MkdirAll(filepath.Dir(fullPath), 0755); err != nil {
t.Fatal(err)
}
if err := os.WriteFile(fullPath, []byte(content), 0644); err != nil {
t.Fatal(err)
}
}
// Create some files that should be ignored
ignoredFiles := map[string]string{
"test.log": "log content",
"build/output.txt": "build output",
"temp.tmp": "temporary file",
}
for path, content := range ignoredFiles {
fullPath := filepath.Join(repoPath, path)
if err := os.MkdirAll(filepath.Dir(fullPath), 0755); err != nil {
t.Fatal(err)
}
if err := os.WriteFile(fullPath, []byte(content), 0644); err != nil {
t.Fatal(err)
}
}
// Get initial status (before index exists)
status, err := GetStatus(repoPath)
if err != nil {
t.Fatal(err)
}
// Verify all non-ignored files are marked as new
newFiles := make(map[string]bool)
for _, f := range status.Files {
newFiles[f.Path] = true
if f.Status != "new" {
t.Errorf("Expected file %s to be new, got %s", f.Path, f.Status)
}
// Verify no ignored files are included
for ignoredPath := range ignoredFiles {
if f.Path == ignoredPath {
t.Errorf("Found ignored file in status: %s", f.Path)
}
}
}
// Check that we found all expected files
for path := range files {
if !newFiles[path] {
t.Errorf("Expected to find %s in status, but it was missing", path)
}
}
// Create object files first
objects := map[string]string{
"id1": "content1",
"id2": "content2",
}
for id, content := range objects {
objPath := filepath.Join(repoPath, ".evo", "objects", id)
if err := os.WriteFile(objPath, []byte(content), 0644); err != nil {
t.Fatal(err)
}
}
// Create index file after objects
indexContent := map[string]string{
"file1.txt": "id1",
"file2.txt": "id2",
}
var indexLines []string
for path, id := range indexContent {
indexLines = append(indexLines, path+":"+id)
}
if err := os.WriteFile(filepath.Join(repoPath, ".evo", "index"), []byte(strings.Join(indexLines, "\n")+"\n"), 0644); err != nil {
t.Fatal(err)
}
// Modify file2.txt
if err := os.WriteFile(filepath.Join(repoPath, "file2.txt"), []byte("modified content"), 0644); err != nil {
t.Fatal(err)
}
// Get status again
status, err = GetStatus(repoPath)
if err != nil {
t.Fatal(err)
}
// Verify status
expectedStatuses := map[string]string{
"file2.txt": "modified",
"dir/file3.txt": "new",
}
foundFiles := make(map[string]bool)
for _, f := range status.Files {
foundFiles[f.Path] = true
expectedStatus, exists := expectedStatuses[f.Path]
if !exists {
t.Errorf("Unexpected file in status: %s", f.Path)
continue
}
if f.Status != expectedStatus {
t.Errorf("Expected file %s to be %s, got %s", f.Path, expectedStatus, f.Status)
}
}
// Check that we found all expected files
for path := range expectedStatuses {
if !foundFiles[path] {
t.Errorf("Expected to find %s in status, but it was missing", path)
}
}
// Test rename detection
if err := os.Rename(filepath.Join(repoPath, "file1.txt"), filepath.Join(repoPath, "file1_renamed.txt")); err != nil {
t.Fatal(err)
}
status, err = GetStatus(repoPath)
if err != nil {
t.Fatal(err)
}
foundRename := false
for _, f := range status.Files {
if f.Status == "renamed" && f.Path == "file1_renamed.txt" && f.OldPath == "file1.txt" {
foundRename = true
break
}
}
if !foundRename {
t.Error("Failed to detect renamed file")
}
// Test deletion detection
if err := os.Remove(filepath.Join(repoPath, "file2.txt")); err != nil {
t.Fatal(err)
}
status, err = GetStatus(repoPath)
if err != nil {
t.Fatal(err)
}
foundDelete := false
for _, f := range status.Files {
if f.Status == "deleted" && f.Path == "file2.txt" {
foundDelete = true
break
}
}
if !foundDelete {
t.Error("Failed to detect deleted file")
}
}
func TestGetStatusErrors(t *testing.T) {
// Test with non-existent directory
_, err := GetStatus("/nonexistent/path")
if err == nil {
t.Error("Expected error when repository path doesn't exist")
}
// Test with invalid repository (no .evo directory)
tmpDir, err := os.MkdirTemp("", "invalid-repo")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(tmpDir)
_, err = GetStatus(tmpDir)
if err == nil {
t.Error("Expected error when .evo directory doesn't exist")
}
// Test with invalid HEAD file
repoPath := setupTestRepo(t)
defer os.RemoveAll(repoPath)
if err := os.WriteFile(filepath.Join(repoPath, ".evo", "HEAD"), []byte("invalid-stream\n"), 0644); err != nil {
t.Fatal(err)
}
_, err = GetStatus(repoPath)
if err == nil {
t.Error("Expected error when HEAD points to non-existent stream")
}
}
func TestFormatStatus(t *testing.T) {
tests := []struct {
name string
status *RepoStatus
contains []string
excludes []string
}{
{
name: "Empty status",
status: &RepoStatus{
CurrentStream: "main",
Files: []FileStatus{},
},
contains: []string{
"On stream main",
"nothing to commit, working tree clean",
},
excludes: []string{
"Changes not staged",
"Untracked files",
"Deleted files",
"Renamed files",
},
},
{
name: "Modified files only",
status: &RepoStatus{
CurrentStream: "main",
Files: []FileStatus{
{Path: "file1.txt", Status: "modified"},
{Path: "dir/file2.txt", Status: "modified"},
},
},
contains: []string{
"On stream main",
"Changes not staged for commit:",
"modified: file1.txt",
"modified: dir/file2.txt",
},
excludes: []string{
"nothing to commit",
"Untracked files",
"Deleted files",
"Renamed files",
},
},
{
name: "All status types",
status: &RepoStatus{
CurrentStream: "feature",
Files: []FileStatus{
{Path: "file1.txt", Status: "modified"},
{Path: "file2.txt", Status: "new"},
{Path: "file3.txt", Status: "deleted"},
{Path: "new.txt", Status: "renamed", OldPath: "old.txt"},
},
},
contains: []string{
"On stream feature",
"Changes not staged for commit:",
"modified: file1.txt",
"Untracked files:",
"file2.txt",
"Deleted files:",
"file3.txt",
"Renamed files:",
"old.txt -> new.txt",
},
excludes: []string{
"nothing to commit",
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
output := FormatStatus(tt.status)
for _, s := range tt.contains {
if !strings.Contains(output, s) {
t.Errorf("Expected output to contain %q", s)
}
}
for _, s := range tt.excludes {
if strings.Contains(output, s) {
t.Errorf("Expected output to not contain %q", s)
}
}
})
}
}
func TestLoadIndex(t *testing.T) {
repoPath := setupTestRepo(t)
defer os.RemoveAll(repoPath)
// Test loading non-existent index
idx, err := loadIndex(repoPath)
if err != nil {
t.Errorf("Expected no error when index doesn't exist, got %v", err)
}
if len(idx) != 0 {
t.Errorf("Expected empty index, got %v", idx)
}
// Test loading valid index
indexContent := "file1.txt:id1\nfile2.txt:id2\n"
if err := os.WriteFile(filepath.Join(repoPath, ".evo", "index"), []byte(indexContent), 0644); err != nil {
t.Fatal(err)
}
idx, err = loadIndex(repoPath)
if err != nil {
t.Errorf("Failed to load index: %v", err)
}
expected := map[string]string{
"file1.txt": "id1",
"file2.txt": "id2",
}
if len(idx) != len(expected) {
t.Errorf("Expected %d entries, got %d", len(expected), len(idx))
}
for path, id := range expected {
if idx[path] != id {
t.Errorf("Expected %s -> %s, got %s -> %s", path, id, path, idx[path])
}
}
// Test loading malformed index
malformedContent := "file1.txt:id1\nmalformed-line\nfile2.txt:id2\n"
if err := os.WriteFile(filepath.Join(repoPath, ".evo", "index"), []byte(malformedContent), 0644); err != nil {
t.Fatal(err)
}
idx, err = loadIndex(repoPath)
if err != nil {
t.Errorf("Failed to load index with malformed line: %v", err)
}
if len(idx) != 2 {
t.Errorf("Expected 2 valid entries, got %d", len(idx))
}
}
================================================
FILE: internal/streams/partial.go
================================================
package streams
import (
"evo/internal/commits"
"evo/internal/crdt"
"evo/internal/repo"
"evo/internal/types"
"fmt"
"path/filepath"
)
// MergeFilter defines criteria for selecting operations during a partial merge
type MergeFilter struct {
FileIDs []string // Only merge operations for these files
OpTypes []crdt.OpType // Only merge these operation types
}
// PartialMerge merges selected operations from source to target stream based on filter criteria
func PartialMerge(repoPath, source, target string, filter MergeFilter) error {
srcCommits, err := ListCommits(repoPath, source)
if err != nil {
return err
}
	// Validate that the target stream's history is readable. Merged commits
	// keep their source IDs, so re-merging a commit overwrites the earlier
	// copy rather than duplicating it.
	if _, err := ListCommits(repoPath, target); err != nil {
		return err
	}
// For empty filter, merge all operations into a single commit
if len(filter.FileIDs) == 0 && len(filter.OpTypes) == 0 {
var allOps []commits.ExtendedOp
var lastCommit *types.Commit
for _, sc := range srcCommits {
lastCommit = &sc
for _, op := range sc.Operations {
newOp := op
newOp.Op.Stream = target
allOps = append(allOps, newOp)
}
}
if len(allOps) > 0 && lastCommit != nil {
// Create single commit with all operations
newCommit := types.Commit{
ID: lastCommit.ID,
Stream: target,
Message: fmt.Sprintf("[merge] %s", lastCommit.Message),
Operations: allOps,
Timestamp: lastCommit.Timestamp,
}
// Save the commit
commitPath := filepath.Join(repoPath, repo.EvoDir, "commits", target)
if err := commits.SaveCommitFile(commitPath, &newCommit); err != nil {
return err
}
// Replicate all operations
if err := replicateOps(repoPath, target, allOps); err != nil {
return err
}
}
return nil
}
// Process each source commit for non-empty filters
for _, sc := range srcCommits {
// Filter operations based on criteria
var filteredOps []commits.ExtendedOp
for _, op := range sc.Operations {
if shouldIncludeOp(op, filter) {
newOp := op
newOp.Op.Stream = target
filteredOps = append(filteredOps, newOp)
}
}
// Skip if no operations match filter
if len(filteredOps) == 0 {
continue
}
// Create new commit with filtered operations
newCommit := types.Commit{
ID: sc.ID,
Stream: target,
Message: fmt.Sprintf("[merge] %s", sc.Message),
Operations: filteredOps,
Timestamp: sc.Timestamp,
}
// Save the commit
commitPath := filepath.Join(repoPath, repo.EvoDir, "commits", target)
if err := commits.SaveCommitFile(commitPath, &newCommit); err != nil {
return err
}
// Replicate filtered operations
if err := replicateOps(repoPath, target, filteredOps); err != nil {
return err
}
}
return nil
}
// shouldIncludeOp checks if an operation matches the filter criteria
func shouldIncludeOp(op commits.ExtendedOp, filter MergeFilter) bool {
// If no filters specified, include everything
if len(filter.FileIDs) == 0 && len(filter.OpTypes) == 0 {
return true
}
// Check file ID filter
if len(filter.FileIDs) > 0 {
fileMatch := false
for _, fid := range filter.FileIDs {
if op.Op.FileID.String() == fid {
fileMatch = true
break
}
}
if !fileMatch {
return false
}
}
// Check operation type filter
if len(filter.OpTypes) > 0 {
typeMatch := false
for _, ot := range filter.OpTypes {
if op.Op.Type == ot {
typeMatch = true
break
}
}
if !typeMatch {
return false
}
}
return true
}
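shouldIncludeOp combines its criteria as an AND of ORs: within one criterion any listed value may match, and every non-empty criterion must be satisfied. The stdlib-only sketch below isolates that rule with plain strings; `matches` is a hypothetical helper, not part of the package.

```go
package main

import "fmt"

// matches reproduces shouldIncludeOp's combination rule: within a
// criterion any listed value may match (OR), and every non-empty
// criterion must be satisfied (AND). Empty criteria match everything.
func matches(fileID, opType string, fileIDs, opTypes []string) bool {
	contains := func(list []string, v string) bool {
		for _, x := range list {
			if x == v {
				return true
			}
		}
		return false
	}
	if len(fileIDs) > 0 && !contains(fileIDs, fileID) {
		return false
	}
	if len(opTypes) > 0 && !contains(opTypes, opType) {
		return false
	}
	return true
}

func main() {
	fmt.Println(matches("f1", "insert", nil, nil))                           // empty filter includes everything
	fmt.Println(matches("f1", "insert", []string{"f1"}, []string{"delete"})) // one failed criterion excludes
}
```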
================================================
FILE: internal/streams/partial_test.go
================================================
package streams
import (
"evo/internal/commits"
"evo/internal/crdt"
"evo/internal/repo"
"evo/internal/types"
"os"
"path/filepath"
"testing"
"time"
"github.com/google/uuid"
"github.com/stretchr/testify/assert"
)
func TestPartialMerge(t *testing.T) {
// Create temp repo
tmpDir := t.TempDir()
repoPath := filepath.Join(tmpDir, "test-repo")
// Initialize repo structure
assert.NoError(t, os.MkdirAll(filepath.Join(repoPath, repo.EvoDir, "commits", "main"), 0755))
assert.NoError(t, os.MkdirAll(filepath.Join(repoPath, repo.EvoDir, "ops", "main"), 0755))
assert.NoError(t, CreateStream(repoPath, "feature"))
assert.NoError(t, os.MkdirAll(filepath.Join(repoPath, repo.EvoDir, "commits", "feature"), 0755))
assert.NoError(t, os.MkdirAll(filepath.Join(repoPath, repo.EvoDir, "ops", "feature"), 0755))
// Create test commits with different file IDs and operation types
file1ID := uuid.New()
file2ID := uuid.New()
testCommits := []types.Commit{
{
ID: uuid.New().String(),
Stream: "feature",
Message: "commit 1",
Operations: []commits.ExtendedOp{
{
Op: crdt.Operation{
Type: crdt.OpInsert,
FileID: file1ID,
LineID: uuid.New(),
Content: "file1 line1",
Stream: "feature",
Timestamp: time.Now(),
NodeID: uuid.New(),
Lamport: 1,
Vector: []int64{1},
},
},
{
Op: crdt.Operation{
Type: crdt.OpDelete,
FileID: file2ID,
LineID: uuid.New(),
Content: "file2 line1",
Stream: "feature",
Timestamp: time.Now(),
NodeID: uuid.New(),
Lamport: 2,
Vector: []int64{2},
},
},
},
Timestamp: time.Now(),
},
}
for _, c := range testCommits {
assert.NoError(t, commits.SaveCommitFile(filepath.Join(repoPath, repo.EvoDir, "commits", "feature"), &c))
}
// Test partial merge with file ID filter
fileFilter := MergeFilter{
FileIDs: []string{file1ID.String()},
}
var err error
var mainCommits []types.Commit
err = PartialMerge(repoPath, "feature", "main", fileFilter)
assert.NoError(t, err)
// Verify only file1 operations were merged
mainCommits, err = ListCommits(repoPath, "main")
assert.NoError(t, err)
assert.Equal(t, 1, len(mainCommits))
assert.Equal(t, file1ID, mainCommits[0].Operations[0].Op.FileID)
assert.Equal(t, "file1 line1", mainCommits[0].Operations[0].Op.Content)
// Test partial merge with operation type filter
typeFilter := MergeFilter{
OpTypes: []crdt.OpType{crdt.OpDelete},
}
err = PartialMerge(repoPath, "feature", "main", typeFilter)
assert.NoError(t, err)
// Verify only delete operations were merged
mainCommits, err = ListCommits(repoPath, "main")
assert.NoError(t, err)
assert.Equal(t, 1, len(mainCommits))
assert.Equal(t, crdt.OpDelete, mainCommits[0].Operations[0].Op.Type)
assert.Equal(t, file2ID, mainCommits[0].Operations[0].Op.FileID)
// Test partial merge with empty filter (should merge all)
emptyFilter := MergeFilter{}
err = PartialMerge(repoPath, "feature", "main", emptyFilter)
assert.NoError(t, err)
// Verify all commits were merged
mainCommits, err = ListCommits(repoPath, "main")
assert.NoError(t, err)
assert.Equal(t, 1, len(mainCommits)) // Since we're preserving commit IDs, we should have one commit
assert.Equal(t, 2, len(mainCommits[0].Operations)) // But it should contain all operations
}
func TestShouldIncludeOp(t *testing.T) {
fileID := uuid.New()
testOp := commits.ExtendedOp{
Op: crdt.Operation{
Type: crdt.OpInsert,
FileID: fileID,
},
}
// Test empty filter
assert.True(t, shouldIncludeOp(testOp, MergeFilter{}))
// Test file ID filter match
assert.True(t, shouldIncludeOp(testOp, MergeFilter{FileIDs: []string{fileID.String()}}))
// Test file ID filter no match
assert.False(t, shouldIncludeOp(testOp, MergeFilter{FileIDs: []string{uuid.New().String()}}))
// Test operation type filter match
assert.True(t, shouldIncludeOp(testOp, MergeFilter{OpTypes: []crdt.OpType{crdt.OpInsert}}))
// Test operation type filter no match
assert.False(t, shouldIncludeOp(testOp, MergeFilter{OpTypes: []crdt.OpType{crdt.OpDelete}}))
// Test both filters match
assert.True(t, shouldIncludeOp(testOp, MergeFilter{
FileIDs: []string{fileID.String()},
OpTypes: []crdt.OpType{crdt.OpInsert},
}))
// Test both filters, one no match
assert.False(t, shouldIncludeOp(testOp, MergeFilter{
FileIDs: []string{fileID.String()},
OpTypes: []crdt.OpType{crdt.OpDelete},
}))
}
================================================
FILE: internal/streams/streams.go
================================================
package streams
import (
"encoding/binary"
"encoding/json"
"evo/internal/commits"
"evo/internal/ops"
"evo/internal/repo"
"evo/internal/types"
"fmt"
"os"
"path/filepath"
"sort"
"strings"
"github.com/google/uuid"
)
func CreateStream(repoPath, name string) error {
sdir := filepath.Join(repoPath, repo.EvoDir, "streams")
if err := os.MkdirAll(sdir, 0755); err != nil {
return err
}
fpath := filepath.Join(sdir, name)
	// O_EXCL makes creation atomic: the open fails if the stream already
	// exists, avoiding the stat-then-write race.
	f, err := os.OpenFile(fpath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0644)
	if err != nil {
		if os.IsExist(err) {
			return fmt.Errorf("stream '%s' already exists", name)
		}
		return err
	}
	return f.Close()
}
func SwitchStream(repoPath, name string) error {
fpath := filepath.Join(repoPath, repo.EvoDir, "streams", name)
if _, err := os.Stat(fpath); os.IsNotExist(err) {
return fmt.Errorf("stream '%s' does not exist", name)
}
head := filepath.Join(repoPath, repo.EvoDir, "HEAD")
return os.WriteFile(head, []byte(name), 0644)
}
func ListStreams(repoPath string) ([]string, error) {
dir := filepath.Join(repoPath, repo.EvoDir, "streams")
entries, err := os.ReadDir(dir)
if os.IsNotExist(err) {
return []string{}, nil
}
if err != nil {
return nil, err
}
var out []string
for _, e := range entries {
if !e.IsDir() {
out = append(out, e.Name())
}
}
return out, nil
}
func CurrentStream(repoPath string) (string, error) {
head := filepath.Join(repoPath, repo.EvoDir, "HEAD")
b, err := os.ReadFile(head)
if err != nil {
return "", err
}
return strings.TrimSpace(string(b)), nil
}
// MergeStreams => merges all missing commits from source => target
func MergeStreams(repoPath, source, target string) error {
srcCommits, err := ListCommits(repoPath, source)
if err != nil {
return err
}
tgtCommits, err := ListCommits(repoPath, target)
if err != nil {
return err
}
tgtMap := make(map[string]bool)
for _, c := range tgtCommits {
tgtMap[c.ID] = true
}
var missing []types.Commit
for _, sc := range srcCommits {
if !tgtMap[sc.ID] {
missing = append(missing, sc)
}
}
for _, mc := range missing {
// replicate each op into .evo/ops/<target>/<fileID>.bin
if err := replicateOps(repoPath, target, mc.Operations); err != nil {
return err
}
// store a commit copy in target
c2 := mc
c2.Stream = target
if err := commits.SaveCommitFile(filepath.Join(repoPath, repo.EvoDir, "commits", target), &c2); err != nil {
return err
}
}
return nil
}
func replicateOps(repoPath, stream string, eops []commits.ExtendedOp) error {
for _, eop := range eops {
fileID := eop.Op.FileID.String()
binPath := filepath.Join(repoPath, repo.EvoDir, "ops", stream, fileID+".bin")
if err := ops.AppendOp(binPath, eop.Op); err != nil {
return err
}
}
return nil
}
// CherryPick => replicate a single commit into the target
func CherryPick(repoPath, commitID, target string) error {
allStreams, err := ListStreams(repoPath)
if err != nil {
return err
}
var found *types.Commit
OUTER:
for _, s := range allStreams {
		cc, err := ListCommits(repoPath, s)
		if err != nil {
			return err
		}
for _, c := range cc {
if c.ID == commitID {
found = &c
break OUTER
}
}
}
if found == nil {
return fmt.Errorf("commit %s not found in any stream", commitID)
}
// replicate ops
if err := replicateOps(repoPath, target, found.Operations); err != nil {
return err
}
// store new commit with new ID
newID := uuid.New().String()
nc := *found
nc.ID = newID
nc.Stream = target
nc.Message = "[cherry-pick] " + found.Message
return commits.SaveCommitFile(filepath.Join(repoPath, repo.EvoDir, "commits", target), &nc)
}
func ListCommits(repoPath, stream string) ([]types.Commit, error) {
dir := filepath.Join(repoPath, repo.EvoDir, "commits", stream)
entries, err := os.ReadDir(dir)
if os.IsNotExist(err) {
return []types.Commit{}, nil
}
if err != nil {
return nil, err
}
var out []types.Commit
for _, e := range entries {
if !e.IsDir() && filepath.Ext(e.Name()) == ".bin" {
c, err := loadCommit(filepath.Join(dir, e.Name()))
if err != nil {
return nil, err
}
out = append(out, *c)
}
}
sort.Slice(out, func(i, j int) bool {
return out[i].Timestamp.Before(out[j].Timestamp)
})
return out, nil
}
func loadCommit(fp string) (*types.Commit, error) {
f, err := os.Open(fp)
if err != nil {
return nil, err
}
defer f.Close()
	// Read the 4-byte big-endian length prefix, then the JSON payload.
	// binary.Read fills the buffer completely; a bare f.Read may return
	// fewer bytes than requested.
	var size uint32
	if err := binary.Read(f, binary.BigEndian, &size); err != nil {
		return nil, err
	}
	data := make([]byte, size)
	if err := binary.Read(f, binary.BigEndian, data); err != nil {
		return nil, err
	}
var c types.Commit
if err := json.Unmarshal(data, &c); err != nil {
return nil, err
}
return &c, nil
}
func getCommit(repoPath, stream, commitID string) (*types.Commit, error) {
cc, err := ListCommits(repoPath, stream)
if err != nil {
return nil, err
}
for _, c := range cc {
if c.ID == commitID {
return &c, nil
}
}
return nil, fmt.Errorf("commit %s not found in stream %s", commitID, stream)
}
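
// On-disk layout sketch (inferred from the functions in this file;
// illustrative, not a normative spec):
//
//	.evo/
//	├── HEAD                       current stream name (SwitchStream/CurrentStream)
//	├── streams/<name>             marker file per stream (CreateStream)
//	├── commits/<stream>/*.bin     4-byte big-endian length prefix + JSON Commit (loadCommit)
//	└── ops/<stream>/<fileID>.bin  append-only operation log (replicateOps)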
================================================
FILE: internal/streams/streams_test.go
================================================
package streams
import (
"evo/internal/commits"
"evo/internal/crdt"
"evo/internal/repo"
"evo/internal/types"
"os"
"path/filepath"
"testing"
"time"
"github.com/google/uuid"
"github.com/stretchr/testify/assert"
)
func TestCherryPick(t *testing.T) {
// Create temp repo
tmpDir := t.TempDir()
repoPath := filepath.Join(tmpDir, "test-repo")
// Initialize repo structure
assert.NoError(t, os.MkdirAll(filepath.Join(repoPath, repo.EvoDir, "commits", "main"), 0755))
assert.NoError(t, os.MkdirAll(filepath.Join(repoPath, repo.EvoDir, "ops", "main"), 0755))
assert.NoError(t, CreateStream(repoPath, "feature"))
assert.NoError(t, os.MkdirAll(filepath.Join(repoPath, repo.EvoDir, "commits", "feature"), 0755))
assert.NoError(t, os.MkdirAll(filepath.Join(repoPath, repo.EvoDir, "ops", "feature"), 0755))
// Create a test commit in feature stream
fileID := uuid.New()
testOp := commits.ExtendedOp{
Op: crdt.Operation{
Type: crdt.OpInsert,
FileID: fileID,
LineID: uuid.New(),
Content: "test line",
Stream: "feature",
Timestamp: time.Now(),
NodeID: uuid.New(),
Lamport: 1,
Vector: []int64{1},
},
}
testCommit := types.Commit{
ID: uuid.New().String(),
Stream: "feature",
Message: "test commit",
Timestamp: time.Now(),
Operations: []commits.ExtendedOp{testOp},
}
assert.NoError(t, commits.SaveCommitFile(filepath.Join(repoPath, repo.EvoDir, "commits", "feature"), &testCommit))
// Test cherry-pick to main stream
err := CherryPick(repoPath, testCommit.ID, "main")
assert.NoError(t, err)
// Verify commit was replicated
mainCommits, err := ListCommits(repoPath, "main")
assert.NoError(t, err)
assert.Equal(t, 1, len(mainCommits))
assert.Contains(t, mainCommits[0].Message, "[cherry-pick]")
assert.Equal(t, "main", mainCommits[0].Stream)
assert.Equal(t, 1, len(mainCommits[0].Operations))
assert.Equal(t, fileID, mainCommits[0].Operations[0].Op.FileID)
assert.Equal(t, "test line", mainCommits[0].Operations[0].Op.Content)
}
func TestMergeStreams(t *testing.T) {
// Create temp repo
tmpDir := t.TempDir()
repoPath := filepath.Join(tmpDir, "test-repo")
// Initialize repo structure
assert.NoError(t, os.MkdirAll(filepath.Join(repoPath, repo.EvoDir, "commits", "main"), 0755))
assert.NoError(t, os.MkdirAll(filepath.Join(repoPath, repo.EvoDir, "ops", "main"), 0755))
assert.NoError(t, CreateStream(repoPath, "feature"))
assert.NoError(t, os.MkdirAll(filepath.Join(repoPath, repo.EvoDir, "commits", "feature"), 0755))
assert.NoError(t, os.MkdirAll(filepath.Join(repoPath, repo.EvoDir, "ops", "feature"), 0755))
// Create multiple test commits in feature stream
fileID := uuid.New()
testCommits := []types.Commit{
{
ID: uuid.New().String(),
Stream: "feature",
Message: "commit 1",
Operations: []commits.ExtendedOp{{
Op: crdt.Operation{
Type: crdt.OpInsert,
FileID: fileID,
LineID: uuid.New(),
Content: "line 1",
Stream: "feature",
Timestamp: time.Now(),
NodeID: uuid.New(),
Lamport: 1,
Vector: []int64{1},
},
}},
Timestamp: time.Now(),
},
{
ID: uuid.New().String(),
Stream: "feature",
Message: "commit 2",
Operations: []commits.ExtendedOp{{
Op: crdt.Operation{
Type: crdt.OpInsert,
FileID: fileID,
LineID: uuid.New(),
Content: "line 2",
Stream: "feature",
Timestamp: time.Now(),
NodeID: uuid.New(),
Lamport: 2,
Vector: []int64{2},
},
}},
Timestamp: time.Now().Add(time.Second),
},
}
for _, c := range testCommits {
assert.NoError(t, commits.SaveCommitFile(filepath.Join(repoPath, repo.EvoDir, "commits", "feature"), &c))
}
// Test merge streams
err := MergeStreams(repoPath, "feature", "main")
assert.NoError(t, err)
// Verify all commits were replicated
mainCommits, err := ListCommits(repoPath, "main")
assert.NoError(t, err)
assert.Equal(t, 2, len(mainCommits))
assert.Equal(t, "main", mainCommits[0].Stream)
assert.Equal(t, "main", mainCommits[1].Stream)
assert.Equal(t, "line 1", mainCommits[0].Operations[0].Op.Content)
assert.Equal(t, "line 2", mainCommits[1].Operations[0].Op.Content)
}
================================================
FILE: internal/types/commit.go
================================================
package types
import (
"crypto/sha256"
"evo/internal/crdt"
"time"
)
// ExtendedOp includes oldContent for update ops
type ExtendedOp struct {
Op crdt.Operation `json:"op"`
OldContent string `json:"oldContent,omitempty"`
}
// Commit represents a commit in the repository
type Commit struct {
ID string // Unique identifier
Stream string // Stream name
Message string // Commit message
AuthorName string // Author's name
AuthorEmail string // Author's email
Timestamp time.Time // When the commit was created
Operations []ExtendedOp // Operations included in this commit
Signature string // Optional Ed25519 signature
}
// CommitHashString produces a stable digest of a commit for signing.
// Note: the result is the raw SHA-256 bytes held in a string, not a
// printable encoding.
func CommitHashString(c *Commit) string {
	// Hash the identity fields: ID, stream, message, author name/email, and UTC timestamp.
h := sha256.New()
h.Write([]byte(c.ID))
h.Write([]byte(c.Stream))
h.Write([]byte(c.Message))
h.Write([]byte(c.AuthorName))
h.Write([]byte(c.AuthorEmail))
h.Write([]byte(c.Timestamp.UTC().Format(time.RFC3339)))
return string(h.Sum(nil))
}
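
// Usage sketch (illustrative; the hex step is only for display and is
// not part of the original API surface):
//
//	c := &Commit{ID: "abc", Stream: "main", Message: "init", Timestamp: time.Now()}
//	digest := CommitHashString(c)  // 32 raw SHA-256 bytes held in a string
//	fmt.Printf("%x\n", digest)     // hex-encode for logging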
================================================
FILE: internal/util/util.go
================================================
package util
import (
"os"
"path/filepath"
)
func ListAllFiles(repoPath string) ([]string, error) {
	var out []string
	// Propagate walk errors instead of silently discarding them.
	err := filepath.Walk(repoPath, func(path string, info os.FileInfo, e error) error {
		if e != nil {
			return e
		}
		if !info.IsDir() {
			rel, rerr := filepath.Rel(repoPath, path)
			if rerr != nil {
				return rerr
			}
			out = append(out, rel)
		}
		return nil
	})
	if err != nil {
		return nil, err
	}
	return out, nil
}
================================================
FILE: justfile
================================================
# just is a handy way to save and run project-specific commands
# https://just.systems/
# List all recipes
default:
@just --list
# Format all Go files
fmt:
go fmt ./...
# Run tests
test:
go test -v ./...
# Run tests with coverage
test-coverage:
go test -v -coverprofile=coverage.out ./...
go tool cover -html=coverage.out -o coverage.html
# Build the project
build:
go build -v ./...
# Build the CLI
build-cli:
go build -v -o bin/evo ./cmd/evo
# Install the CLI to $GOPATH/bin
install: build-cli
go install ./cmd/evo
# Run the main application
run:
go run ./cmd/evo
# Install dependencies
deps:
go mod download
go mod tidy
# Verify dependencies
verify:
go mod verify
# Run linter (requires golangci-lint)
lint:
golangci-lint run
# Clean build artifacts
clean:
go clean
rm -f coverage.out coverage.html
# Update dependencies to latest versions
update-deps:
go get -u ./...
go mod tidy
# Run security check (requires gosec)
security-check:
gosec ./...
# Generate documentation
docs:
godoc -http=:6060
# Create a new release tag
release VERSION:
git tag -a {{VERSION}} -m "Release {{VERSION}}"
git push origin {{VERSION}}
# Install development tools
install-tools:
go install golang.org/x/tools/cmd/godoc@latest
go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest
go install github.com/securego/gosec/v2/cmd/gosec@latest
================================================
SYMBOL INDEX (182 symbols across 44 files)
================================================
FILE: cmd/evo/commit_cmd.go
function init (line 20) | func init() {
FILE: cmd/evo/config_cmd.go
function init (line 13) | func init() {
FILE: cmd/evo/init_cmd.go
function init (line 10) | func init() {
FILE: cmd/evo/log_cmd.go
function init (line 14) | func init() {
FILE: cmd/evo/main.go
function main (line 3) | func main() {
FILE: cmd/evo/revert_cmd.go
function init (line 12) | func init() {
FILE: cmd/evo/root.go
function Execute (line 18) | func Execute() {
FILE: cmd/evo/status_cmd.go
function init (line 11) | func init() {
FILE: cmd/evo/stream_cmd.go
function init (line 11) | func init() {
FILE: cmd/evo/sync_cmd.go
function init (line 10) | func init() {
FILE: internal/commits/commits.go
function CreateCommit (line 26) | func CreateCommit(repoPath, stream, message, authorName, authorEmail str...
function LoadCommit (line 64) | func LoadCommit(repoPath, stream, commitID string) (*types.Commit, error) {
function SaveCommit (line 91) | func SaveCommit(repoPath string, commit *types.Commit) error {
function gatherNewOps (line 111) | func gatherNewOps(repoPath, stream string) ([]ExtendedOp, error) {
function opKey (line 177) | func opKey(op crdt.Operation) string {
function buildDocStates (line 181) | func buildDocStates(repoPath, stream string) map[uuid.UUID]map[uuid.UUID...
function findOldContent (line 208) | func findOldContent(ds map[uuid.UUID]map[uuid.UUID]string, lineID uuid.U...
function ListCommits (line 218) | func ListCommits(repoPath, stream string) ([]types.Commit, error) {
function saveCommit (line 247) | func saveCommit(repoPath string, c *types.Commit) error {
function SaveCommitFile (line 266) | func SaveCommitFile(dir string, c *types.Commit) error {
function loadCommit (line 284) | func loadCommit(fp string) (*types.Commit, error) {
function RevertCommit (line 307) | func RevertCommit(repoPath, stream, commitID string) (*types.Commit, err...
function invertOps (line 339) | func invertOps(ops []types.ExtendedOp) ([]types.ExtendedOp, error) {
function newLamport (line 390) | func newLamport() uint64 {
function applyOps (line 394) | func applyOps(repoPath, stream string, eops []ExtendedOp) error {
function CommitHashString (line 411) | func CommitHashString(c *types.Commit) string {
FILE: internal/commits/commits_test.go
function TestRevertCommit (line 13) | func TestRevertCommit(t *testing.T) {
function TestSignedCommits (line 143) | func TestSignedCommits(t *testing.T) {
FILE: internal/config/config.go
function globalConfigPath (line 14) | func globalConfigPath() (string, error) {
function repoConfigPath (line 26) | func repoConfigPath(repoPath string) string {
function loadToml (line 30) | func loadToml(path string) (*toml.Tree, error) {
function saveToml (line 45) | func saveToml(tree *toml.Tree, path string) error {
function SetGlobalConfigValue (line 50) | func SetGlobalConfigValue(key, val string) error {
function SetRepoConfigValue (line 64) | func SetRepoConfigValue(repoPath, key, val string) error {
function GetConfigValue (line 75) | func GetConfigValue(repoPath, key string) (string, error) {
function SetConfigValue (line 90) | func SetConfigValue(repoPath, key, value string) error {
function loadConfig (line 119) | func loadConfig(repoPath string) (map[string]string, error) {
FILE: internal/crdt/compact/compact.go
function CompactOperations (line 15) | func CompactOperations(ops []crdt.Operation, cfg *Config) []crdt.Operati...
function sortOps (line 59) | func sortOps(ops []crdt.Operation) {
function CompactRGA (line 66) | func CompactRGA(rga *crdt.RGA, cfg *Config) *crdt.RGA {
FILE: internal/crdt/compact/config.go
type Config (line 6) | type Config struct
function DefaultConfig (line 18) | func DefaultConfig() *Config {
FILE: internal/crdt/compact/service.go
type CompactionService (line 15) | type CompactionService struct
method Start (line 35) | func (s *CompactionService) Start() error {
method Stop (line 66) | func (s *CompactionService) Stop() {
method CompactOperations (line 71) | func (s *CompactionService) CompactOperations() error {
method PruneTombstones (line 196) | func (s *CompactionService) PruneTombstones() error {
function NewCompactionService (line 23) | func NewCompactionService(repoPath string, config *Config) *CompactionSe...
FILE: internal/crdt/compact/service_test.go
function TestCompactionService (line 15) | func TestCompactionService(t *testing.T) {
function TestCompactionConfig (line 243) | func TestCompactionConfig(t *testing.T) {
FILE: internal/crdt/operation.go
type OpType (line 10) | type OpType
constant OpInsert (line 13) | OpInsert OpType = iota
constant OpUpdate (line 14) | OpUpdate
constant OpDelete (line 15) | OpDelete
type Operation (line 19) | type Operation struct
method CanCombine (line 32) | func (o *Operation) CanCombine(other *Operation) bool {
method Combine (line 53) | func (o *Operation) Combine(other *Operation) {
method LessThan (line 75) | func (o *Operation) LessThan(other *Operation) bool {
FILE: internal/crdt/operation_test.go
function TestOperationCombining (line 10) | func TestOperationCombining(t *testing.T) {
function TestOperationOrdering (line 201) | func TestOperationOrdering(t *testing.T) {
FILE: internal/crdt/rga.go
type RGAOperation (line 12) | type RGAOperation struct
function NewRGAOperation (line 18) | func NewRGAOperation(op Operation, index int) RGAOperation {
type RGA (line 26) | type RGA struct
method Apply (line 41) | func (r *RGA) Apply(op Operation) error {
method Get (line 94) | func (r *RGA) Get() []string {
method GetOperations (line 108) | func (r *RGA) GetOperations() []Operation {
method Clear (line 120) | func (r *RGA) Clear() {
method Materialize (line 129) | func (r *RGA) Materialize() []string {
method GetPositions (line 143) | func (r *RGA) GetPositions() []int {
method GetLineIDs (line 157) | func (r *RGA) GetLineIDs() []uuid.UUID {
method LineMap (line 171) | func (r *RGA) LineMap() map[uuid.UUID]string {
function NewRGA (line 33) | func NewRGA() *RGA {
FILE: internal/crdt/rga_test.go
function TestRGA (line 10) | func TestRGA(t *testing.T) {
FILE: internal/ignore/ignore.go
type IgnoreList (line 13) | type IgnoreList struct
method IsIgnored (line 53) | func (il *IgnoreList) IsIgnored(path string) bool {
method AddPattern (line 131) | func (il *IgnoreList) AddPattern(pattern string) {
method GetPatterns (line 143) | func (il *IgnoreList) GetPatterns() []string {
function LoadIgnoreFile (line 18) | func LoadIgnoreFile(repoPath string) (*IgnoreList, error) {
FILE: internal/ignore/ignore_test.go
function TestLoadIgnoreFile (line 9) | func TestLoadIgnoreFile(t *testing.T) {
function TestIsIgnored (line 77) | func TestIsIgnored(t *testing.T) {
function TestAddPattern (line 188) | func TestAddPattern(t *testing.T) {
function TestGetPatterns (line 239) | func TestGetPatterns(t *testing.T) {
FILE: internal/index/index.go
function LoadIndex (line 16) | func LoadIndex(repoPath string) (map[string]string, map[string]string, e...
function SaveIndex (line 46) | func SaveIndex(repoPath string, path2id map[string]string) error {
function UpdateIndex (line 60) | func UpdateIndex(repoPath string) error {
function LookupFileID (line 105) | func LookupFileID(repoPath, relPath string) (string, error) {
FILE: internal/lfs/diff.go
constant RollingHashWindow (line 10) | RollingHashWindow = 64
constant MinMatchSize (line 13) | MinMatchSize = 32
type RollingHash (line 17) | type RollingHash struct
method Update (line 31) | func (r *RollingHash) Update(b byte) uint32 {
function NewRollingHash (line 24) | func NewRollingHash() *RollingHash {
function BinaryDiff (line 44) | func BinaryDiff(old, new io.Reader) ([]DiffEntry, error) {
type DiffType (line 158) | type DiffType
constant DiffCopy (line 161) | DiffCopy DiffType = iota
constant DiffNew (line 162) | DiffNew
type DiffEntry (line 166) | type DiffEntry struct
function ApplyDiff (line 174) | func ApplyDiff(old io.Reader, diff []DiffEntry, w io.Writer) error {
FILE: internal/lfs/diff_test.go
function TestBinaryDiff (line 9) | func TestBinaryDiff(t *testing.T) {
type infiniteReader (line 139) | type infiniteReader struct
method Read (line 145) | func (r *infiniteReader) Read(p []byte) (n int, err error) {
FILE: internal/lfs/gc.go
type GarbageCollector (line 12) | type GarbageCollector struct
method Start (line 27) | func (gc *GarbageCollector) Start() {
method Stop (line 45) | func (gc *GarbageCollector) Stop() {
method Run (line 50) | func (gc *GarbageCollector) Run() error {
method PruneTombstones (line 81) | func (gc *GarbageCollector) PruneTombstones(maxAge time.Duration) error {
function NewGarbageCollector (line 19) | func NewGarbageCollector(store *Store) *GarbageCollector {
FILE: internal/lfs/store.go
type Store (line 14) | type Store struct
method StoreFile (line 31) | func (s *Store) StoreFile(id string, r io.Reader, size int64) (*FileIn...
method saveFileInfo (line 159) | func (s *Store) saveFileInfo(id string, info *FileInfo) error {
method loadFileInfo (line 167) | func (s *Store) loadFileInfo(id string) (*FileInfo, error) {
method ReadFile (line 180) | func (s *Store) ReadFile(id string, w io.Writer) error {
method DeleteFile (line 205) | func (s *Store) DeleteFile(id string) error {
method isChunkReferenced (line 258) | func (s *Store) isChunkReferenced(hash string) bool {
function NewStore (line 20) | func NewStore(root string) *Store {
function min (line 284) | func min(a, b int64) int64 {
FILE: internal/lfs/store_test.go
function TestStore (line 10) | func TestStore(t *testing.T) {
function TestLargeFileChunking (line 119) | func TestLargeFileChunking(t *testing.T) {
function TestGarbageCollection (line 189) | func TestGarbageCollection(t *testing.T) {
FILE: internal/lfs/types.go
constant ChunkSize (line 12) | ChunkSize = 1024 * 1024
type FileInfo (line 16) | type FileInfo struct
type ChunkInfo (line 27) | type ChunkInfo struct
type Hash (line 33) | type Hash struct
method Write (line 43) | func (h *Hash) Write(p []byte) (n int, err error) {
method Sum (line 48) | func (h *Hash) Sum() string {
function NewHash (line 38) | func NewHash() *Hash {
function HashBytes (line 53) | func HashBytes(data []byte) string {
FILE: internal/ops/binary_log.go
function WriteOp (line 13) | func WriteOp(w io.Writer, op crdt.Operation) error {
function ReadOp (line 42) | func ReadOp(r io.Reader) (*crdt.Operation, error) {
function LoadAllOps (line 71) | func LoadAllOps(filename string) ([]crdt.Operation, error) {
function AppendOp (line 96) | func AppendOp(filename string, op crdt.Operation) error {
function dirOf (line 108) | func dirOf(fp string) string {
FILE: internal/ops/ops.go
function IngestLocalChanges (line 19) | func IngestLocalChanges(repoPath, stream string) ([]string, error) {
function processFile (line 71) | func processFile(repoPath, stream, relPath, absPath string, fsize int64)...
function storeLargeFile (line 178) | func storeLargeFile(repoPath, stream, fileID, relPath, absPath string, d...
function copyFile (line 224) | func copyFile(src, dst string) error {
function readLargeThreshold (line 248) | func readLargeThreshold(repoPath string) int64 {
function parseUUID (line 254) | func parseUUID(s string) uuid.UUID {
function eqLines (line 259) | func eqLines(a, b []string) bool {
FILE: internal/repo/repo.go
constant EvoDir (line 12) | EvoDir = ".evo"
function InitRepo (line 21) | func InitRepo(path string) error {
function Cleanup (line 78) | func Cleanup() {
function FindRepoRoot (line 94) | func FindRepoRoot(start string) (string, error) {
FILE: internal/repo/repo_test.go
function TestRepo (line 9) | func TestRepo(t *testing.T) {
FILE: internal/signing/signing.go
type KeyPair (line 15) | type KeyPair struct
function GenerateKeyPair (line 22) | func GenerateKeyPair(repoPath string) error {
function LoadKeyPair (line 56) | func LoadKeyPair(repoPath string) (*KeyPair, error) {
function SignCommit (line 95) | func SignCommit(c *types.Commit, repoPath string) (string, error) {
function VerifyCommit (line 107) | func VerifyCommit(c *types.Commit, repoPath string) (bool, error) {
function getKeyPath (line 130) | func getKeyPath(repoPath string) (string, error) {
function getFileCreationTime (line 145) | func getFileCreationTime(path string) time.Time {
FILE: internal/signing/signing_test.go
function TestSigningKeyPair (line 11) | func TestSigningKeyPair(t *testing.T) {
FILE: internal/status/status.go
type FileStatus (line 14) | type FileStatus struct
type RepoStatus (line 20) | type RepoStatus struct
function loadIndex (line 26) | func loadIndex(repoPath string) (map[string]string, error) {
function GetStatus (line 48) | func GetStatus(repoPath string) (*RepoStatus, error) {
function FormatStatus (line 209) | func FormatStatus(status *RepoStatus) string {
FILE: internal/status/status_test.go
function setupTestRepo (line 10) | func setupTestRepo(t *testing.T) string {
function TestGetStatus (line 42) | func TestGetStatus(t *testing.T) {
function TestGetStatusErrors (line 226) | func TestGetStatusErrors(t *testing.T) {
function TestFormatStatus (line 259) | func TestFormatStatus(t *testing.T) {
function TestLoadIndex (line 352) | func TestLoadIndex(t *testing.T) {
FILE: internal/streams/partial.go
type MergeFilter (line 13) | type MergeFilter struct
function PartialMerge (line 19) | func PartialMerge(repoPath, source, target string, filter MergeFilter) e...
function shouldIncludeOp (line 117) | func shouldIncludeOp(op commits.ExtendedOp, filter MergeFilter) bool {
FILE: internal/streams/partial_test.go
function TestPartialMerge (line 17) | func TestPartialMerge(t *testing.T) {
function TestShouldIncludeOp (line 115) | func TestShouldIncludeOp(t *testing.T) {
FILE: internal/streams/streams.go
function CreateStream (line 19) | func CreateStream(repoPath, name string) error {
function SwitchStream (line 31) | func SwitchStream(repoPath, name string) error {
function ListStreams (line 40) | func ListStreams(repoPath string) ([]string, error) {
function CurrentStream (line 58) | func CurrentStream(repoPath string) (string, error) {
function MergeStreams (line 68) | func MergeStreams(repoPath, source, target string) error {
function replicateOps (line 102) | func replicateOps(repoPath, stream string, eops []commits.ExtendedOp) er...
function CherryPick (line 114) | func CherryPick(repoPath, commitID, target string) error {
function ListCommits (line 146) | func ListCommits(repoPath, stream string) ([]types.Commit, error) {
function loadCommit (line 171) | func loadCommit(fp string) (*types.Commit, error) {
function getCommit (line 193) | func getCommit(repoPath, stream, commitID string) (*types.Commit, error) {
FILE: internal/streams/streams_test.go
function TestCherryPick (line 17) | func TestCherryPick(t *testing.T) {
function TestMergeStreams (line 68) | func TestMergeStreams(t *testing.T) {
FILE: internal/types/commit.go
type ExtendedOp (line 10) | type ExtendedOp struct
type Commit (line 16) | type Commit struct
function CommitHashString (line 28) | func CommitHashString(c *Commit) string {
FILE: internal/util/util.go
function ListAllFiles (line 8) | func ListAllFiles(repoPath string) ([]string, error) {
},
{
"path": "internal/crdt/operation.go",
"chars": 1983,
"preview": "package crdt\n\nimport (\n\t\"time\"\n\n\t\"github.com/google/uuid\"\n)\n\n// OpType represents the type of operation\ntype OpType int\n"
},
{
"path": "internal/crdt/operation_test.go",
"chars": 5753,
"preview": "package crdt\n\nimport (\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/google/uuid\"\n)\n\nfunc TestOperationCombining(t *testing.T) {\n\tt.R"
},
{
"path": "internal/crdt/rga.go",
"chars": 3871,
"preview": "package crdt\n\nimport (\n\t\"fmt\"\n\t\"sort\"\n\t\"sync\"\n\n\t\"github.com/google/uuid\"\n)\n\n// RGAOperation extends Operation with addit"
},
{
"path": "internal/crdt/rga_test.go",
"chars": 8199,
"preview": "package crdt\n\nimport (\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/google/uuid\"\n)\n\nfunc TestRGA(t *testing.T) {\n\tt.Run(\"Insert Oper"
},
{
"path": "internal/ignore/ignore.go",
"chars": 3549,
"preview": "package ignore\n\nimport (\n\t\"bufio\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"strings\"\n\n\t\"github.com/bmatcuk/doublestar/v4\"\n)\n\n// IgnoreLis"
},
{
"path": "internal/ignore/ignore_test.go",
"chars": 6465,
"preview": "package ignore\n\nimport (\n\t\"os\"\n\t\"path/filepath\"\n\t\"testing\"\n)\n\nfunc TestLoadIgnoreFile(t *testing.T) {\n\t// Create a tempo"
},
{
"path": "internal/index/index.go",
"chars": 2407,
"preview": "package index\n\nimport (\n\t\"bufio\"\n\t\"errors\"\n\t\"fmt\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"strings\"\n\n\t\"github.com/google/uuid\"\n)\n\n// The"
},
{
"path": "internal/lfs/diff.go",
"chars": 4494,
"preview": "package lfs\n\nimport (\n\t\"bytes\"\n\t\"io\"\n)\n\nconst (\n\t// RollingHashWindow is the size of the rolling hash window\n\tRollingHas"
},
{
"path": "internal/lfs/diff_test.go",
"chars": 3934,
"preview": "package lfs\n\nimport (\n\t\"bytes\"\n\t\"io\"\n\t\"testing\"\n)\n\nfunc TestBinaryDiff(t *testing.T) {\n\tt.Run(\"Small Changes\", func(t *t"
},
{
"path": "internal/lfs/gc.go",
"chars": 2454,
"preview": "package lfs\n\nimport (\n\t\"fmt\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"sync\"\n\t\"time\"\n)\n\n// GarbageCollector manages cleanup of unreferenc"
},
{
"path": "internal/lfs/store.go",
"chars": 6375,
"preview": "package lfs\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"io\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"sync\"\n\t\"time\"\n)\n\n// Store manages large fil"
},
{
"path": "internal/lfs/store_test.go",
"chars": 5391,
"preview": "package lfs\n\nimport (\n\t\"bytes\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"testing\"\n)\n\nfunc TestStore(t *testing.T) {\n\t// Create temp dir f"
},
{
"path": "internal/lfs/types.go",
"chars": 1480,
"preview": "package lfs\n\nimport (\n\t\"crypto/sha256\"\n\t\"encoding/hex\"\n\t\"hash\"\n\t\"time\"\n)\n\nconst (\n\t// ChunkSize is the size of each chun"
},
{
"path": "internal/ops/binary_log.go",
"chars": 2407,
"preview": "package ops\n\nimport (\n\t\"encoding/binary\"\n\t\"evo/internal/crdt\"\n\t\"io\"\n\t\"os\"\n\n\t\"github.com/google/uuid\"\n)\n\n// WriteOp write"
},
{
"path": "internal/ops/ops.go",
"chars": 5699,
"preview": "package ops\n\nimport (\n\t\"evo/internal/crdt\"\n\t\"evo/internal/index\"\n\t\"evo/internal/lfs\"\n\t\"evo/internal/util\"\n\t\"fmt\"\n\t\"os\"\n\t"
},
{
"path": "internal/repo/repo.go",
"chars": 2446,
"preview": "package repo\n\nimport (\n\t\"errors\"\n\t\"evo/internal/crdt/compact\"\n\t\"evo/internal/lfs\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"sync\"\n)\n\ncons"
},
{
"path": "internal/repo/repo_test.go",
"chars": 4211,
"preview": "package repo\n\nimport (\n\t\"os\"\n\t\"path/filepath\"\n\t\"testing\"\n)\n\nfunc TestRepo(t *testing.T) {\n\t// Create temp dir for testin"
},
{
"path": "internal/signing/signing.go",
"chars": 3730,
"preview": "package signing\n\nimport (\n\t\"crypto/ed25519\"\n\t\"crypto/rand\"\n\t\"encoding/hex\"\n\t\"evo/internal/config\"\n\t\"evo/internal/types\"\n"
},
{
"path": "internal/signing/signing_test.go",
"chars": 2287,
"preview": "package signing\n\nimport (\n\t\"evo/internal/config\"\n\t\"evo/internal/types\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"testing\"\n)\n\nfunc TestSig"
},
{
"path": "internal/status/status.go",
"chars": 6201,
"preview": "package status\n\nimport (\n\t\"bufio\"\n\t\"evo/internal/ignore\"\n\t\"evo/internal/streams\"\n\t\"fmt\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"sort\"\n\t"
},
{
"path": "internal/status/status_test.go",
"chars": 9407,
"preview": "package status\n\nimport (\n\t\"os\"\n\t\"path/filepath\"\n\t\"strings\"\n\t\"testing\"\n)\n\nfunc setupTestRepo(t *testing.T) string {\n\t// C"
},
{
"path": "internal/streams/partial.go",
"chars": 3597,
"preview": "package streams\n\nimport (\n\t\"evo/internal/commits\"\n\t\"evo/internal/crdt\"\n\t\"evo/internal/repo\"\n\t\"evo/internal/types\"\n\t\"fmt\""
},
{
"path": "internal/streams/partial_test.go",
"chars": 4521,
"preview": "package streams\n\nimport (\n\t\"evo/internal/commits\"\n\t\"evo/internal/crdt\"\n\t\"evo/internal/repo\"\n\t\"evo/internal/types\"\n\t\"os\"\n"
},
{
"path": "internal/streams/streams.go",
"chars": 4917,
"preview": "package streams\n\nimport (\n\t\"encoding/binary\"\n\t\"encoding/json\"\n\t\"evo/internal/commits\"\n\t\"evo/internal/ops\"\n\t\"evo/internal"
},
{
"path": "internal/streams/streams_test.go",
"chars": 4242,
"preview": "package streams\n\nimport (\n\t\"evo/internal/commits\"\n\t\"evo/internal/crdt\"\n\t\"evo/internal/repo\"\n\t\"evo/internal/types\"\n\t\"os\"\n"
},
{
"path": "internal/types/commit.go",
"chars": 1144,
"preview": "package types\n\nimport (\n\t\"crypto/sha256\"\n\t\"evo/internal/crdt\"\n\t\"time\"\n)\n\n// ExtendedOp includes oldContent for update op"
},
{
"path": "internal/util/util.go",
"chars": 361,
"preview": "package util\n\nimport (\n\t\"os\"\n\t\"path/filepath\"\n)\n\nfunc ListAllFiles(repoPath string) ([]string, error) {\n\tvar out []strin"
},
{
"path": "justfile",
"chars": 1449,
"preview": "# just is a handy way to save and run project-specific commands # https://just.systems/\n\n# List all recipes\ndefault:\n "
}
]
About this extraction
This page contains the full source code of the crazywolf132/evo GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 54 files (174.2 KB), approximately 53.7k tokens, and a symbol index with 182 extracted functions, classes, methods, constants, and types.
Extracted by GitExtract, a GitHub-repo-to-text converter built by Nikandr Surkov.