Full Code of sahib/brig for AI

Repository: sahib/brig
Branch: develop
Commit: 6b7eccf8fcbd
Files: 399
Total size: 9.3 MB

Directory structure:
gitextract_6878u1a7/

├── .github/
│   └── ISSUE_TEMPLATE/
│       ├── bug_report.md
│       └── feature_request.md
├── .gitignore
├── .mailmap
├── .travis.yml
├── CHANGELOG.md
├── Dockerfile
├── LICENSE
├── PULL_REQUEST_TEMPLATE.md
├── README.md
├── Taskfile.yml
├── autocomplete/
│   ├── bash_autocomplete
│   └── zsh_autocomplete
├── backend/
│   ├── backend.go
│   ├── httpipfs/
│   │   ├── gc.go
│   │   ├── gc_test.go
│   │   ├── io.go
│   │   ├── io_test.go
│   │   ├── net.go
│   │   ├── net_test.go
│   │   ├── pin.go
│   │   ├── pin_test.go
│   │   ├── pubsub.go
│   │   ├── pubsub_test.go
│   │   ├── resolve.go
│   │   ├── resolve_test.go
│   │   ├── shell.go
│   │   ├── testing.go
│   │   ├── testing_test.go
│   │   └── version.go
│   └── mock/
│       └── mock.go
├── bench/
│   ├── bench.go
│   ├── inputs.go
│   ├── runner.go
│   └── stats.go
├── brig.go
├── catfs/
│   ├── backend.go
│   ├── capnp/
│   │   ├── pinner.capnp
│   │   └── pinner.capnp.go
│   ├── core/
│   │   ├── coreutils.go
│   │   ├── coreutils_test.go
│   │   ├── gc.go
│   │   ├── gc_test.go
│   │   ├── linker.go
│   │   ├── linker_test.go
│   │   └── testing.go
│   ├── db/
│   │   ├── database.go
│   │   ├── database_badger.go
│   │   ├── database_disk.go
│   │   ├── database_memory.go
│   │   └── database_test.go
│   ├── errors/
│   │   └── errors.go
│   ├── fs.go
│   ├── fs_test.go
│   ├── handle.go
│   ├── handle_test.go
│   ├── mio/
│   │   ├── chunkbuf/
│   │   │   ├── chunkbuf.go
│   │   │   └── chunkbuf_test.go
│   │   ├── compress/
│   │   │   ├── algorithm.go
│   │   │   ├── compress_test.go
│   │   │   ├── header.go
│   │   │   ├── heuristic.go
│   │   │   ├── heuristic_test.go
│   │   │   ├── mime_db.go
│   │   │   ├── reader.go
│   │   │   └── writer.go
│   │   ├── doc.go
│   │   ├── encrypt/
│   │   │   ├── format.go
│   │   │   ├── format_test.go
│   │   │   ├── reader.go
│   │   │   └── writer.go
│   │   ├── pagecache/
│   │   │   ├── cache.go
│   │   │   ├── doc.go
│   │   │   ├── mdcache/
│   │   │   │   ├── l1.go
│   │   │   │   ├── l1_test.go
│   │   │   │   ├── l2.go
│   │   │   │   ├── l2_test.go
│   │   │   │   ├── mdcache.go
│   │   │   │   └── mdcache_test.go
│   │   │   ├── overlay.go
│   │   │   ├── overlay_test.go
│   │   │   ├── page/
│   │   │   │   ├── page.go
│   │   │   │   └── page_test.go
│   │   │   ├── util.go
│   │   │   └── util_test.go
│   │   ├── stream.go
│   │   └── stream_test.go
│   ├── nodes/
│   │   ├── base.go
│   │   ├── capnp/
│   │   │   ├── nodes.capnp
│   │   │   └── nodes.capnp.go
│   │   ├── commit.go
│   │   ├── commit_test.go
│   │   ├── directory.go
│   │   ├── directory_test.go
│   │   ├── doc.go
│   │   ├── file.go
│   │   ├── file_test.go
│   │   ├── ghost.go
│   │   ├── ghost_test.go
│   │   ├── linker.go
│   │   └── node.go
│   ├── pinner.go
│   ├── pinner_test.go
│   ├── repin.go
│   ├── repin_test.go
│   ├── rev.go
│   ├── rev_test.go
│   └── vcs/
│       ├── capnp/
│       │   ├── patch.capnp
│       │   └── patch.capnp.go
│       ├── change.go
│       ├── change_test.go
│       ├── debug.go
│       ├── diff.go
│       ├── diff_test.go
│       ├── history.go
│       ├── history_test.go
│       ├── mapper.go
│       ├── mapper_test.go
│       ├── patch.go
│       ├── patch_test.go
│       ├── reset.go
│       ├── reset_test.go
│       ├── resolve.go
│       ├── resolve_test.go
│       ├── sync.go
│       ├── sync_test.go
│       └── undelete.go
├── client/
│   ├── .gitignore
│   ├── client.go
│   ├── clienttest/
│   │   └── daemon.go
│   ├── fs_cmds.go
│   ├── fs_test.go
│   ├── net_cmds.go
│   ├── net_test.go
│   ├── repo_cmds.go
│   └── vcs_cmds.go
├── cmd/
│   ├── bug.go
│   ├── debug.go
│   ├── exit_codes.go
│   ├── fs_handlers.go
│   ├── help.go
│   ├── init.go
│   ├── inode_other.go
│   ├── inode_unix.go
│   ├── iobench.go
│   ├── log.go
│   ├── net_handlers.go
│   ├── parser.go
│   ├── pwd/
│   │   ├── pwd-util/
│   │   │   └── pwd-util.go
│   │   ├── pwd.go
│   │   └── pwd_test.go
│   ├── repo_handlers.go
│   ├── suggest.go
│   ├── tabwriter/
│   │   ├── example_test.go
│   │   ├── tabwriter.go
│   │   └── tabwriter_test.go
│   ├── tree.go
│   ├── util.go
│   └── vcs_handlers.go
├── defaults/
│   ├── defaults.go
│   └── defaults_v0.go
├── docs/
│   ├── .gitignore
│   ├── Makefile
│   ├── _static/
│   │   └── css/
│   │       └── custom.css
│   ├── asciinema/
│   │   ├── 1_init.json
│   │   ├── 1_init_with_pwm.json
│   │   ├── 2_adding.json
│   │   ├── 3_coreutils.json
│   │   ├── 4_mount.json
│   │   ├── 5_commits.json
│   │   ├── 6_history.json
│   │   ├── 7_remotes.json
│   │   ├── 8_sync.json
│   │   └── 9_pin.json
│   ├── conf.py
│   ├── contributing.rst
│   ├── faq.rst
│   ├── features.rst
│   ├── index.rst
│   ├── installation.rst
│   ├── make.bat
│   ├── quickstart.rst
│   ├── requirements.txt
│   ├── roadmap.rst
│   ├── talk/
│   │   ├── Makefile
│   │   ├── demo.rst
│   │   ├── index.rst
│   │   ├── requirements.txt
│   │   └── style.css
│   └── tutorial/
│       ├── config.rst
│       ├── coreutils.rst
│       ├── gateway.rst
│       ├── init.rst
│       ├── intro.rst
│       ├── mounts.rst
│       ├── pinning.rst
│       ├── remotes.rst
│       └── vcs.rst
├── events/
│   ├── backend/
│   │   └── backend.go
│   ├── capnp/
│   │   ├── events_api.capnp
│   │   └── events_api.capnp.go
│   ├── docs.go
│   ├── event.go
│   ├── listener.go
│   ├── listener_test.go
│   └── mock/
│       └── mock.go
├── fuse/
│   ├── directory.go
│   ├── doc.go
│   ├── file.go
│   ├── fs.go
│   ├── fstab.go
│   ├── fuse_test.go
│   ├── fusetest/
│   │   ├── client.go
│   │   ├── doc.go
│   │   ├── helper.go
│   │   └── server.go
│   ├── handle.go
│   ├── mount.go
│   ├── stub.go
│   └── util.go
├── gateway/
│   ├── db/
│   │   ├── capnp/
│   │   │   ├── user.capnp
│   │   │   └── user.capnp.go
│   │   ├── db.go
│   │   └── db_test.go
│   ├── elm/
│   │   ├── .gitignore
│   │   ├── Makefile
│   │   ├── elm.json
│   │   └── src/
│   │       ├── Clipboard.elm
│   │       ├── Commands.elm
│   │       ├── Main.elm
│   │       ├── Modals/
│   │       │   ├── History.elm
│   │       │   ├── Mkdir.elm
│   │       │   ├── MoveCopy.elm
│   │       │   ├── RemoteAdd.elm
│   │       │   ├── RemoteFolders.elm
│   │       │   ├── RemoteRemove.elm
│   │       │   ├── Remove.elm
│   │       │   ├── Rename.elm
│   │       │   ├── Share.elm
│   │       │   └── Upload.elm
│   │       ├── Pinger.elm
│   │       ├── Routes/
│   │       │   ├── Commits.elm
│   │       │   ├── DeletedFiles.elm
│   │       │   ├── Diff.elm
│   │       │   ├── Ls.elm
│   │       │   └── Remotes.elm
│   │       ├── Scroll.elm
│   │       ├── Util.elm
│   │       └── Websocket.elm
│   ├── endpoints/
│   │   ├── all_dirs.go
│   │   ├── all_dirs_test.go
│   │   ├── copy.go
│   │   ├── copy_test.go
│   │   ├── deleted.go
│   │   ├── deleted_test.go
│   │   ├── events.go
│   │   ├── events_test.go
│   │   ├── get.go
│   │   ├── get_test.go
│   │   ├── history.go
│   │   ├── history_test.go
│   │   ├── index.go
│   │   ├── log.go
│   │   ├── log_test.go
│   │   ├── login.go
│   │   ├── login_test.go
│   │   ├── ls.go
│   │   ├── ls_test.go
│   │   ├── mkdir.go
│   │   ├── mkdir_test.go
│   │   ├── move.go
│   │   ├── move_test.go
│   │   ├── pin.go
│   │   ├── pin_test.go
│   │   ├── ping.go
│   │   ├── ping_test.go
│   │   ├── redirect.go
│   │   ├── remotes_add.go
│   │   ├── remotes_add_test.go
│   │   ├── remotes_diff.go
│   │   ├── remotes_diff_test.go
│   │   ├── remotes_list.go
│   │   ├── remotes_list_test.go
│   │   ├── remotes_remove.go
│   │   ├── remotes_remove_test.go
│   │   ├── remotes_self.go
│   │   ├── remotes_self_test.go
│   │   ├── remotes_sync.go
│   │   ├── remotes_sync_test.go
│   │   ├── remove.go
│   │   ├── remove_test.go
│   │   ├── reset.go
│   │   ├── reset_test.go
│   │   ├── testing.go
│   │   ├── undelete.go
│   │   ├── undelete_test.go
│   │   ├── upload.go
│   │   ├── upload_test.go
│   │   └── util.go
│   ├── remotesapi/
│   │   ├── api.go
│   │   └── mock.go
│   ├── server.go
│   ├── server_test.go
│   ├── static/
│   │   ├── css/
│   │   │   ├── default.css
│   │   │   └── fontawesome.css
│   │   ├── js/
│   │   │   ├── app.js
│   │   │   ├── init.js
│   │   │   └── smoothscroll.js
│   │   ├── package.go
│   │   └── resource.go
│   └── templates/
│       ├── index.html
│       ├── package.go
│       └── resource.go
├── go.mod
├── go.sum
├── net/
│   ├── authrw.go
│   ├── authrw_test.go
│   ├── backend/
│   │   └── backend.go
│   ├── capnp/
│   │   ├── api.capnp
│   │   └── api.capnp.go
│   ├── client.go
│   ├── client_test.go
│   ├── handlers.go
│   ├── mock/
│   │   ├── mock.go
│   │   └── pinger.go
│   ├── peer/
│   │   ├── peer.go
│   │   └── peer_test.go
│   ├── pinger.go
│   ├── pinger_test.go
│   ├── resolve_test.go
│   └── server.go
├── repo/
│   ├── backend.go
│   ├── config.go
│   ├── gc.go
│   ├── hints/
│   │   ├── doc.go
│   │   ├── hints.go
│   │   └── hints_test.go
│   ├── immutables.go
│   ├── init.go
│   ├── keys.go
│   ├── keys_test.go
│   ├── mock/
│   │   └── mock.go
│   ├── readme.go
│   ├── remotes.go
│   ├── remotes_test.go
│   ├── repo.go
│   ├── repo_test.go
│   ├── repopack/
│   │   └── repopack.go
│   └── setup/
│       ├── ipfs.go
│       └── ipfs_test.go
├── scripts/
│   ├── build.sh
│   ├── count-lines-of-code.sh
│   ├── create-release-bundle.sh
│   ├── docker-normal-startup.sh
│   ├── generate.sh
│   ├── install-task.sh
│   ├── install.sh
│   ├── run-linter.sh
│   ├── run-tests.sh
│   └── test-bed.sh
├── server/
│   ├── api_handler.go
│   ├── base.go
│   ├── capnp/
│   │   ├── local_api.capnp
│   │   └── local_api.capnp.go
│   ├── fs_handler.go
│   ├── net_handler.go
│   ├── path.go
│   ├── path_test.go
│   ├── remotes_api.go
│   ├── repo_handler.go
│   ├── rlimit_linux.go
│   ├── rlimit_other.go
│   ├── server.go
│   ├── stream.go
│   ├── transfer.go
│   └── vcs_handler.go
├── tests/
│   ├── test-init-no-pass.sh
│   ├── test-init-pass-helper.sh
│   └── test-init-several.sh
├── util/
│   ├── conductor/
│   │   ├── conductor.go
│   │   └── conductor_test.go
│   ├── hashlib/
│   │   ├── hash.go
│   │   └── hash_test.go
│   ├── key.go
│   ├── log/
│   │   ├── logger.go
│   │   └── logger_test.go
│   ├── pwutil/
│   │   └── pwutil.go
│   ├── server/
│   │   └── server.go
│   ├── std.go
│   ├── std_test.go
│   ├── strings/
│   │   ├── README.md
│   │   └── builder.go
│   ├── testutil/
│   │   └── testutil.go
│   ├── trie/
│   │   ├── buildpath.go
│   │   ├── pathricia.go
│   │   └── pathricia_test.go
│   ├── zipper.go
│   └── zipper_test.go
└── version/
    └── version.go

================================================
FILE CONTENTS
================================================

================================================
FILE: .github/ISSUE_TEMPLATE/bug_report.md
================================================
---
name: Bug report
about: Create a report to help us improve
labels: 

---

**Describe the bug**
A clear and concise description of what the bug is.

**To Reproduce**

Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error

**Please always include the output of the following commands in your report!**

* ``brig bug -s``

**Expected behavior**
A clear and concise description of what you expected to happen.

**Additional context**
Add any other context about the problem here.


================================================
FILE: .github/ISSUE_TEMPLATE/feature_request.md
================================================
---
name: Feature request
about: Suggest an idea for this project
labels: 

---

**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

**Describe the solution you'd like**
A clear and concise description of what you want to happen.

**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.

**Additional context**
Add any other context or screenshots about the feature request here.


**Please keep in mind that new features should be orthogonal (i.e. complement) to the existing features.**


================================================
FILE: .gitignore
================================================
brig
*.coverprofile
coverage.out
_vendor*
TODO
.idea
mage
ipfs
repo/setup/ipfs
.task
tags
cov.out


================================================
FILE: .mailmap
================================================
Christopher Pahl <sahib@online.de>
Christopher Pahl <sahib@online.de> <sahib@peeel.(none)>
Christopher Pahl <sahib@online.de> <chris@peeel.(none)>
Christopher Pahl <sahib@online.de> <chris@modi.localdomain>
Christopher Pahl <sahib@online.de> <b@mmel.localdomain>
Christopher Pahl <sahib@online.de> <sahib@localhost.localdomain>
Christopher Pahl <sahib@online.de> <chris@vbox.(none)>


================================================
FILE: .travis.yml
================================================
language: go
sudo: required
go:
    - "1.15"
notifications:
    email:
      - sahib@online.de
install:
    - sudo apt-get install fuse capnproto
    - mkdir -p ${GOPATH}/bin
    - export GOBIN=${GOPATH}/bin
    - export PATH="${GOPATH}/bin:${PATH}"
    - export GO111MODULE=on
    - go get -u github.com/rakyll/gotest
    - go get -u github.com/phogolabs/parcello
    - go get -u github.com/phogolabs/parcello/cmd/parcello
    - go get -u zombiezen.com/go/capnproto2/...
    - go get -u github.com/go-task/task/v3/cmd/task
    - wget https://dist.ipfs.io/go-ipfs/v0.7.0/go-ipfs_v0.7.0_linux-amd64.tar.gz -O /tmp/ipfs.tgz
    - tar -C /tmp -xvf /tmp/ipfs.tgz
    - cp /tmp/go-ipfs/ipfs $GOBIN
    - export PATH="${GOPATH}/bin:${PATH}"
    - task

script:
    - export PATH="${GOPATH}/bin:${PATH}"
    - travis_wait 30 bash scripts/run-tests.sh


================================================
FILE: CHANGELOG.md
================================================
# Change Log

All notable changes to this project will be documented in this file.

The format follows [keepachangelog.com]. Please stick to it.

## [0.5.3] -- 2020-07-20

Drastic speed-up of the listing and show operations.

In previous versions, a simple IsCached operation on a 500 MB file took more
than 30 seconds. The reason is that ipfs splits a file into chunks of no more
than 256 kB. It took time to establish a connection to ipfs and check the
status of every chunk. The new caching scheme for intermediate results avoids
these unnecessary connections.

There is an additional heuristic: if a reference/hash stores 262158 bytes or
less, then this hash will not have child links (hashes). This seems to hold
for IPFS up to v0.6.0, but there is no guarantee that it stays true in future
versions.

The initial check is now very fast: less than a second for the same 500 MB
file. Also, the recursive check no longer reruns the full check on already
probed files, so it is much faster as well. The downside: cached results can
be stale and misreported for up to 5 minutes (with the current cache
expiration settings).
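The child-link heuristic above might be sketched like this; `maxSingleBlockSize` and `mayHaveChildLinks` are illustrative names for this changelog, not brig's actual API:

```go
package main

import "fmt"

// maxSingleBlockSize mirrors the heuristic from the entry above: hashes
// storing at most 262158 bytes were observed to carry no child links in
// IPFS up to v0.6.0. This is an illustrative sketch, not brig's code.
const maxSingleBlockSize = 262158

// mayHaveChildLinks reports whether a hash of the given size could still
// reference child blocks and therefore needs a per-chunk cache check.
func mayHaveChildLinks(size uint64) bool {
	return size > maxSingleBlockSize
}

func main() {
	fmt.Println(mayHaveChildLinks(1024))            // small file: single block
	fmt.Println(mayHaveChildLinks(500 * 1024 * 1024)) // 500 MB file: many chunks
}
```

With a rule like this, small files can be answered immediately, and only large files trigger the expensive per-chunk connection to ipfs.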

### Changed

Added a caching mechanism for ipfs interaction.

## [0.5.2] -- 2020-07-16

Bug fix release.

### Fixed

- Report the correct cache status for a hash with multiple child links.
  IsCached reported "yes" for large files (>256 kB with the ipfs backend),
  since such files are split into multiple blocks. Strangely, the parent
  node is somehow precached without asking (maybe it happens when brig checks
  for the backend size), but its children are not, unless we pin or read the
  file content.

### Changed

- `brig ls /file.name` now returns a listing for a single file. Before, it
  worked only with directories. Now it behaves similarly to the standard
  file system listing command (ls).

## [0.5.1] -- 2020-07-15

Improvements and bug fixes in the fuse layer. Consecutive reads through the
fuse layer are now about a factor of 20 faster.

### Fixed

- Fix reading files larger than 64 kB. The read came from a limitedStream
  with 64 kB size, which returned EOF when the end of the buffer was hit;
  that is not the same as the true end of the file.

### Changed

- The fuse file handle now keeps track of the seek position, so consecutive
  reads do not have to reseek the stream, which costs a lot of time.
  On my machine the speed went up from about 200 kB/s to 5 MB/s. That is
  still much slower than a direct read from disk (30 MB/s), but probably
  expected due to the ipfs, compression, and encryption layers.

## [0.5.0] -- 2020-07-13

This version mostly contains bug fixes on top of the unreleased version 0.4.2
by Chris Pahl, the original author and maintainer of »brig«. The output of
the diff and sync commands now differs from the behaviour outlined in the old
manual. There are also fixes to make fuse mounts work properly.
Compatibility-wise, the metadata exchange expects the cachedSize field for
proper handling; older versions do not provide it. So, I think this justifies
the bump in the minor version.

TODO: the documentation does not reflect all the changes.

### Fixed

- Fix the behaviour of fast-forward changes (i.e. when the file changed only
  at one location).
- Fix the merge over a deleted (ghost) file.
- Fix handling of entries resurrected at the remote.
- Fix gateway 'get' for authorized users.
- Fix the gateway to block downloading/viewing files outside of permitted
  folders.
- Fix a bug with pinning during sync. The program used the source node info
  for pinning instead of the destination's, so a random node at the
  destination was pinned.
- Fix a bug in the repinner size calculator, which could end up with
  negative numbers in an unsigned integer and thus report absurdly high
  values.
- Fix the capnp template to be compatible with more modern versions of capnp.
- Fix handling of a pin-state border case, where our data reports a pin but
  ipfs does not. We now ask the ipfs backend to repin.
- Fix preservation of the destination pin status during merge.
- Multiple fixes in the file system frontend (fuse):
  - Correct file sizes, which propagate to the directory size.
  - Make sure the attributes of a file are not stale.
  - Redone writing and staging handling. Now, if we open a file for writing,
    we store the modified content in memory. When we flush to the backend,
    we compare the content against this in-memory copy. The old way did not
    catch such changes.
  - Touch can now create new files over ghost nodes.
  - We can move/rename things within a fuse mount via the brig backend.
    TODO: there is a small window (several seconds) when the file info is
    not picked up by fuse after a rename. It might be a bug in the fuse
    library itself. It is not too critical right now.
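The repinner size-calculator bug mentioned above is the classic unsigned-wraparound pitfall; in Go, subtracting past zero on an unsigned integer does not go negative but wraps to a huge value. A minimal illustration (not brig's actual code):

```go
package main

import "fmt"

func main() {
	// Unsigned subtraction wraps around instead of going negative.
	// This is the class of bug behind the "crazy high values" report.
	var totalSize uint64 = 5
	totalSize -= 10
	fmt.Println(totalSize) // 18446744073709551611

	// A safe pattern checks before subtracting:
	var safe uint64 = 5
	dec := uint64(10)
	if dec > safe {
		safe = 0 // clamp instead of wrapping
	} else {
		safe -= dec
	}
	fmt.Println(safe) // 0
}
```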

### Changed

- The repinner can unpin explicitly pinned files if they are scheduled for
  deletion (i.e. beyond the min-max version requirement). Otherwise, we
  would keep stale old versions that take up storage space forever.
- Diff and sync show the direction of the merge. Diff takes precedence
  according to the modification time. TODO: I need to add a new conflict
  resolution strategy (chronological/time) and use it only if required.
  I felt a chronological strategy is the more natural way to sync different
  repos, so I made time-based resolution the default.
- Changes are specific to files, not directories. If you create a directory
  in one repo and do the same in another, it is not a conflict unless there
  are files with different content. Otherwise, it can be easily merged.
- Preserve the source modification time when merging.
- Better display of missing or removed entries.
- When syncing, create patches for every commit. Before, the patch was made
  between the old known commit and the current state. That could break
  possibly non-conflicting merges, since the common parent was lost during
  the commit skip. Sync takes a bit longer now, but I think it is worth it.
- Modules are compatible with go v1.14.
- Shorten the time format for »brig ls«.

### Added

- New option: fs.repin.pin_unpinned. When set to 'false', it saves traffic
  by not pinning files that are currently unpinned. When 'true', it pins
  files within the pinning requirements (the old behavior); this essentially
  pre-caches files at the backend but uses traffic and disk space.
- New option: fs.sync.pin_added. If false, sync does not pin/cache files
  newly added at the remote. This is handy if we want to sync only the
  metadata but not the content itself, i.e. a bandwidth-saving mode. The
  opposite (the old behavior) is also possible if we want to sync the
  content and are not concerned about bandwidth.
- Help for »brig gateway add«.
- Added cached-size information to listings, show, and the internal file or
  directory info. Technically, it is the same as the backend size, since the
  cached size could be zero at a given time (TODO: rename accordingly).
- The cached size is transmitted via capnp.

## [0.4.1 Capricious Clownfish] -- 2019-03-31

A smaller release with some bug fixes and a few new features. Also one bigger
stability and speed improvement. Thanks to everyone that gave feedback!

### Fixed

- Fix two badger-db-related crashes in the daemon. One was caused by nested
  transactions, the other by having an open iterator while committing data
  to the database.
- Fix some dependencies that led to errors for some users (thanks @vasket)
- The gateway code now tries to reconnect the websocket whenever it was closed
  due to bad connectivity or similar issues. This led to a state where files
  were only updated after reloading the page.
- Several smaller fixes in the remotes view, e.g. the owner name was
  displayed wrongly and most of the settings could not be set outside the
  test environment. Also, the diff output differed between the UI and
  »brig diff«.
- We now error out early if e.g. »brig ls« was issued, but there is no repo.
  Before it tried to start a daemon and waited a long time before timing out.
- Made »brig mkdir« always prefix a »/« to the path, which would otherwise
  lead to funny issues.

### Added

- Add a --offline flag to the following subcommands: ``cat``, ``tar``,
  ``mount`` and ``fstab add``. With this flag, only locally cached files are
  output, and therefore no timeouts are caused. Trying other files will
  result in an error.
- »brig show« now outputs if a file/directory is locally cached. This is not
  the same as pinned, since you can pin a file but it might not be cached yet.
- Make the gateway host all of its JavaScript, fonts and CSS code itself by
  baking it into the binary. This will enable people running the gateway in
  environments where no internet connection is available to reach the CDN used
  before.
- Add the possibility to copy the fingerprint in the UI via a button click.
  Before the fingerprint was shown over two lines which made copying tricky.
- A PKGBUILD for ArchLinux was added, which builds ``brig`` from the
  ``develop`` branch. Thanks @vasket!

### Changed

- The ``brig remote ls`` command no longer does active I/O between nodes to check
  if a node is authenticated. Instead it relies on info from the peer server
  which can apply better caching. The peer server is also able to use information
  from dials and requests to/from other peers to update the ping information.
- Switch the internal checksum algorithm to ``blake2s-256`` from ``sha3-256``.
  This change was made for speed reasons and leads to a slightly different looking
  checksum format in the command line output. This change MIGHT lead to incompatibilities.
- Also swap ``scrypt`` with ``argon2`` for key derivation and lower the hashing settings
  until acceptable performance was achieved.
- Replace the Makefile with a magefile, i.e. a build script written in Go only which has
  no dependencies and can bootstrap itself.
- Include IPFS config output in »brig bug«.

### Removed

* The old Makefile was removed and replaced with a Go only solution.

## [0.4.0 Capricious Clownfish] -- 2019-03-19

It's only been a few months since the last release (December 2018), but there
are a ton of new features / general changes that total in about 15k added lines
of code. The biggest changes are definitely refactoring IPFS into its own
process and providing a nice UI written in Elm. But those are just two of the
biggest ones, see the full list below.

As always, ``brig`` is **always looking for contributors.** Anything from
feedback to pull requests is greatly appreciated.

### Fixed

- Many documentation fixes and updates.
- Gateway: Prefer server cipher suites over client's choice.
- Gateway: Make sure to enable timeouts.
- Bugfix in catfs that could lead to truncated file streams.
- Lower the memory hunger of BadgerDB.
- Fix a bug that stopped badger transactions when they got too big.

### Added

* The IPFS daemon does not live in the ``brig`` process itself anymore.
  It can now use any existing / running IPFS daemon. If ``ipfs`` is not installed,
  it will download a local copy and setup a repository in the default place.
  Notice that this is a completely backwards-incompatible change.

* New UI: The Gateway feature was greatly extended and a UI was developed
  that exposes many features in an easily usable way to people who are used
  to a Dropbox-like interface. See
  [here](https://brig.readthedocs.io/en/develop/tutorial/gateway.html) for some
  screenshots of the UI and documentation on how to set it up. The gateway
  supports users with different roles (``admin``, ``editor``, ``collaborator``,
  ``viewer``, ``link-only``) and also supports logging in as an anonymous user
  (not by default!). You can also limit which folders each user can see.

* New event subsystems. This enables users to receive updates in "realtime"
  from other remotes. This is built on top of the experimental pubsub feature
  of IPFS and thus needs a daemon that was started with
  ``--enable-pubsub-experiment``. Users can decide to receive updates from
  a remote by issuing ``brig remote auto-update enable <remote name>``. [More
  details in the documentation](https://brig.readthedocs.io/en/develop/tutorial/remotes.html#automatic-updating).

* Change the way pinning works. ``brig`` will not unpin old versions anymore,
  but leaves that to the [repinning settings](https://brig.readthedocs.io/en/develop/tutorial/pinning.html#repinning).
  This is an automatic process that will make sure to keep at least ``x``
  versions, unpin all versions greater than ``y``, and make sure that only a
  certain filesystem quota is used.

* New ``trash`` subcommand that makes it easy to show deleted files (``brig
  trash ls``) and undelete them again (``brig trash undelete <path>``).

* New ``brig push`` command to ask a remote to sync with us. For this to work
  the remote needs to allow this to us via ``brig remote auto-push enable <remote
  name>``. See also the
  [documentation](https://brig.readthedocs.io/en/develop/tutorial/remotes.html#pushing-changes).

* New way to handle conflicts: ``embrace`` will always pick the version of the remote you are syncing with.
  This is especially useful if you are building an archival node where you can push changes to.
  See also the [documentation](https://brig.readthedocs.io/en/develop/tutorial/remotes.html#conflicts).
  You can configure the conflict strategy now either globally, per remote or for a specific folder.

* Read only folders. Those are folders that can be shared with others, but when
  we synchronize with them, the folder is exempted from any modifications.

* Implement automated invocation of the garbage collector of IPFS. By default
  it is called once per hour and will clean up files that were unpinned. Note
  that this will also unpin files that are not owned by ``brig``! If you don't want this,
  you should use a separate IPFS instance for ``brig``.

* It's now possible to create ``.tar`` files that are filtered by certain patterns.
  This functionality is currently only exposed in the gateway, not in the command line.

* Easier debugging by having a ``pprof`` server open by default (until we
  consider the daemon to be stable enough to disable it by default). You can get
  a performance graph of the last 30s by issuing ``go tool pprof -web
  "http://localhost:$(brig d p)/debug/pprof/profile?seconds=30"``

* One way install script to easily get a ``brig`` binary in seconds on your computer:
  ``bash <(curl -s https://raw.githubusercontent.com/sahib/brig/master/scripts/install.sh)``
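The min/max repinning rule described earlier in this list (keep at least ``x`` versions, unpin everything beyond ``y``) might be sketched like this; ``versionsToUnpin`` and its parameters are hypothetical names, not brig's actual API, and the quota check is left out:

```go
package main

import "fmt"

// versionsToUnpin returns the indices of versions that may be unpinned.
// Versions are assumed to be ordered newest-first: the newest minKeep
// versions are always kept, and everything from index maxKeep onwards
// is a candidate for unpinning. Illustrative sketch only.
func versionsToUnpin(numVersions, minKeep, maxKeep int) []int {
	unpin := []int{}
	for idx := 0; idx < numVersions; idx++ {
		if idx < minKeep {
			continue // always keep the newest minKeep versions
		}
		if idx >= maxKeep {
			unpin = append(unpin, idx)
		}
	}
	return unpin
}

func main() {
	// 6 versions, keep at least 2, unpin everything beyond the 4th:
	fmt.Println(versionsToUnpin(6, 2, 4)) // [4 5]
}
```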

### Changed

* Starting with this release we will provide pre-compiled binaries for the most common platforms on the [release page](https://github.com/sahib/brig/releases).
* Introduce proper linting process (``make lint``)
* ``init`` will now set some IPFS config values that improve connectivity and performance
  of ``brig``. You can disable this via ``--no-ipfs-optimization``.
* Disable pre-caching by default due to extreme slowness.
* Migrate to ``go mod`` since we do not need to deal with ``gx`` packages anymore.
* There is no (broken) ``make install`` target anymore. Simply do ``make`` and
  ``sudo cp brig /usr/local/bin`` or wherever you want to put it.

### Removed

* A lot of old code that was there to support running IPFS inside the daemon process.
  As a side effect, ``brig`` is now much snappier.

## [0.3.0 Galloping Galapagos] -- 2018-12-07

### Fixed

- Compression guessing now uses Go's http.DetectContentType()
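``http.DetectContentType`` is a stdlib function that applies MIME sniffing to at most the first 512 bytes of data. A plausible way a compression guesser might use it is shown below; the ``text/``-prefix decision rule is an assumption for illustration, not brig's actual heuristic:

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

// shouldCompress sniffs the MIME type of a data prefix and opts into
// compression for textual content. The text/-prefix rule here is an
// illustrative assumption, not brig's real compression heuristic.
func shouldCompress(header []byte) bool {
	mime := http.DetectContentType(header)
	return strings.HasPrefix(mime, "text/")
}

func main() {
	fmt.Println(shouldCompress([]byte("<html><body>hi</body></html>"))) // HTML is text
	fmt.Println(shouldCompress([]byte{0x1f, 0x8b, 0x08}))               // gzip magic: already compressed
}
```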

### Added

* New gateway subcommand and feature. Files and directories can now be easily
  shared with non-brig users via a normal webserver. Also includes an easy
  https setup.

### Changed

### Removed

### Deprecated

## [0.2.0 Baffling Buck] -- 2018-11-21

### Fixed

All features mentioned in the documentation should work now.

### Added

Many new features, including password management, partial diffs and partial syncing.

### Changed

Many internal things. Too many to list in this early stage.

### Removed

Nothing substantial.

### Deprecated

Nothing.

## [0.1.0 Analphabetic Antelope] -- 2018-04-21

Initial release on the Linux Info Day 2018 in Augsburg.

[unreleased]: https://github.com/sahib/brig/compare/master...develop
[0.1.0]: https://github.com/sahib/brig/releases/tag/v0.1.0
[keepachangelog.com]: http://keepachangelog.com/


================================================
FILE: Dockerfile
================================================
FROM golang
MAINTAINER sahib@online.de

# Most test cases can use the pre-defined BRIG_PATH.
ENV BRIG_PATH /var/repo
RUN mkdir -p $BRIG_PATH
ENV BRIG_USER="charlie@wald.de/container"

# Build the brig binary:
ENV BRIG_SOURCE /go/src/github.com/sahib/brig
ENV BRIG_BINARY_PATH /usr/bin/brig
COPY . $BRIG_SOURCE
WORKDIR $BRIG_SOURCE
RUN make

# Download IPFS, so the container can startup faster.
# (brig can also download the binary for you, but later)
RUN wget https://dist.ipfs.io/go-ipfs/v0.4.19/go-ipfs_v0.4.19_linux-amd64.tar.gz -O /tmp/ipfs.tar.gz
RUN tar xfv /tmp/ipfs.tar.gz -C /tmp
RUN cp /tmp/go-ipfs/ipfs /usr/bin

EXPOSE 6666
EXPOSE 4001

COPY scripts/docker-normal-startup.sh /bin/run.sh
CMD ["/bin/bash", "/bin/run.sh"]


================================================
FILE: LICENSE
================================================
                    GNU AFFERO GENERAL PUBLIC LICENSE
                       Version 3, 19 November 2007

 Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.

                            Preamble

  The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.

  The licenses for most software and other practical works are designed
to take away your freedom to share and change the works.  By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.

  When we speak of free software, we are referring to freedom, not
price.  Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.

  Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.

  A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate.  Many developers of free software are heartened and
encouraged by the resulting cooperation.  However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.

  The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community.  It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server.  Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.

  An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals.  This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.

  The precise terms and conditions for copying, distribution and
modification follow.

                       TERMS AND CONDITIONS

  0. Definitions.

  "This License" refers to version 3 of the GNU Affero General Public License.

  "Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.

  "The Program" refers to any copyrightable work licensed under this
License.  Each licensee is addressed as "you".  "Licensees" and
"recipients" may be individuals or organizations.

  To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy.  The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.

  A "covered work" means either the unmodified Program or a work based
on the Program.

  To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy.  Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.

  To "convey" a work means any kind of propagation that enables other
parties to make or receive copies.  Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.

  An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License.  If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.

  1. Source Code.

  The "source code" for a work means the preferred form of the work
for making modifications to it.  "Object code" means any non-source
form of a work.

  A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.

  The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form.  A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.

  The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities.  However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work.  For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.

  The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.

  The Corresponding Source for a work in source code form is that
same work.

  2. Basic Permissions.

  All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met.  This License explicitly affirms your unlimited
permission to run the unmodified Program.  The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work.  This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.

  You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force.  You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright.  Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.

  Conveying under any other circumstances is permitted solely under
the conditions stated below.  Sublicensing is not allowed; section 10
makes it unnecessary.

  3. Protecting Users' Legal Rights From Anti-Circumvention Law.

  No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.

  When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.

  4. Conveying Verbatim Copies.

  You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.

  You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.

  5. Conveying Modified Source Versions.

  You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:

    a) The work must carry prominent notices stating that you modified
    it, and giving a relevant date.

    b) The work must carry prominent notices stating that it is
    released under this License and any conditions added under section
    7.  This requirement modifies the requirement in section 4 to
    "keep intact all notices".

    c) You must license the entire work, as a whole, under this
    License to anyone who comes into possession of a copy.  This
    License will therefore apply, along with any applicable section 7
    additional terms, to the whole of the work, and all its parts,
    regardless of how they are packaged.  This License gives no
    permission to license the work in any other way, but it does not
    invalidate such permission if you have separately received it.

    d) If the work has interactive user interfaces, each must display
    Appropriate Legal Notices; however, if the Program has interactive
    interfaces that do not display Appropriate Legal Notices, your
    work need not make them do so.

  A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit.  Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.

  6. Conveying Non-Source Forms.

  You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:

    a) Convey the object code in, or embodied in, a physical product
    (including a physical distribution medium), accompanied by the
    Corresponding Source fixed on a durable physical medium
    customarily used for software interchange.

    b) Convey the object code in, or embodied in, a physical product
    (including a physical distribution medium), accompanied by a
    written offer, valid for at least three years and valid for as
    long as you offer spare parts or customer support for that product
    model, to give anyone who possesses the object code either (1) a
    copy of the Corresponding Source for all the software in the
    product that is covered by this License, on a durable physical
    medium customarily used for software interchange, for a price no
    more than your reasonable cost of physically performing this
    conveying of source, or (2) access to copy the
    Corresponding Source from a network server at no charge.

    c) Convey individual copies of the object code with a copy of the
    written offer to provide the Corresponding Source.  This
    alternative is allowed only occasionally and noncommercially, and
    only if you received the object code with such an offer, in accord
    with subsection 6b.

    d) Convey the object code by offering access from a designated
    place (gratis or for a charge), and offer equivalent access to the
    Corresponding Source in the same way through the same place at no
    further charge.  You need not require recipients to copy the
    Corresponding Source along with the object code.  If the place to
    copy the object code is a network server, the Corresponding Source
    may be on a different server (operated by you or a third party)
    that supports equivalent copying facilities, provided you maintain
    clear directions next to the object code saying where to find the
    Corresponding Source.  Regardless of what server hosts the
    Corresponding Source, you remain obligated to ensure that it is
    available for as long as needed to satisfy these requirements.

    e) Convey the object code using peer-to-peer transmission, provided
    you inform other peers where the object code and Corresponding
    Source of the work are being offered to the general public at no
    charge under subsection 6d.

  A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.

  A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling.  In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage.  For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product.  A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.

  "Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source.  The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.

  If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information.  But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).

  The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed.  Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.

  Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.

  7. Additional Terms.

  "Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law.  If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.

  When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it.  (Additional permissions may be written to require their own
removal in certain cases when you modify the work.)  You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.

  Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:

    a) Disclaiming warranty or limiting liability differently from the
    terms of sections 15 and 16 of this License; or

    b) Requiring preservation of specified reasonable legal notices or
    author attributions in that material or in the Appropriate Legal
    Notices displayed by works containing it; or

    c) Prohibiting misrepresentation of the origin of that material, or
    requiring that modified versions of such material be marked in
    reasonable ways as different from the original version; or

    d) Limiting the use for publicity purposes of names of licensors or
    authors of the material; or

    e) Declining to grant rights under trademark law for use of some
    trade names, trademarks, or service marks; or

    f) Requiring indemnification of licensors and authors of that
    material by anyone who conveys the material (or modified versions of
    it) with contractual assumptions of liability to the recipient, for
    any liability that these contractual assumptions directly impose on
    those licensors and authors.

  All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10.  If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term.  If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.

  If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.

  Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.

  8. Termination.

  You may not propagate or modify a covered work except as expressly
provided under this License.  Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).

  However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.

  Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.

  Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License.  If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.

  9. Acceptance Not Required for Having Copies.

  You are not required to accept this License in order to receive or
run a copy of the Program.  Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance.  However,
nothing other than this License grants you permission to propagate or
modify any covered work.  These actions infringe copyright if you do
not accept this License.  Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.

  10. Automatic Licensing of Downstream Recipients.

  Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License.  You are not responsible
for enforcing compliance by third parties with this License.

  An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations.  If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.

  You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License.  For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.

  11. Patents.

  A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based.  The
work thus licensed is called the contributor's "contributor version".

  A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version.  For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.

  Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.

  In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement).  To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.

  If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients.  "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.

  If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.

  A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License.  You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.

  Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.

  12. No Surrender of Others' Freedom.

  If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License.  If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all.  For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.

  13. Remote Network Interaction; Use with the GNU General Public License.

  Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software.  This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.

  Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work.  The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.

  14. Revised Versions of this License.

  The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time.  Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.

  Each version is given a distinguishing version number.  If the
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation.  If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.

  If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.

  Later license versions may give you additional or different
permissions.  However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.

  15. Disclaimer of Warranty.

  THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW.  EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE.  THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU.  SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

  16. Limitation of Liability.

  IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.

  17. Interpretation of Sections 15 and 16.

  If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.

                     END OF TERMS AND CONDITIONS

            How to Apply These Terms to Your New Programs

  If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.

  To do so, attach the following notices to the program.  It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.

    <one line to give the program's name and a brief idea of what it does.>
    Copyright (C) <year>  <name of author>

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU Affero General Public License as published by
    the Free Software Foundation, either version 3 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU Affero General Public License for more details.

    You should have received a copy of the GNU Affero General Public License
    along with this program.  If not, see <http://www.gnu.org/licenses/>.

Also add information on how to contact you by electronic and paper mail.

  If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source.  For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code.  There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.

  You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<http://www.gnu.org/licenses/>.


================================================
FILE: PULL_REQUEST_TEMPLATE.md
================================================
Here's a small checklist before publishing your pull request:

* Did you ``go fmt`` all code?
* Does your code style fit with the rest of the code base?
* Did you run ``go run mage.go dev:lint``?
* Did you write tests if necessary?
* Did you consider if changes to the docs are necessary?
* Did you check if you need to add something to CHANGELOG.md?


Thank you for your contribution.


================================================
FILE: README.md
================================================
# `brig`: Ship your data around the world

<center>  <!-- I know, that's not how you usually do it :) -->
<img src="https://raw.githubusercontent.com/sahib/brig/master/docs/logo.png" alt="a brig" width="50%">
</center>

[![go reportcard](https://goreportcard.com/badge/github.com/sahib/brig)](https://goreportcard.com/report/github.com/sahib/brig)
[![GoDoc](https://godoc.org/github.com/sahib/brig?status.svg)](https://godoc.org/github.com/sahib/brig)
[![Build Status](https://travis-ci.org/sahib/brig.svg?branch=master)](https://travis-ci.org/sahib/brig)
[![Documentation](https://readthedocs.org/projects/brig/badge/?version=latest)](http://brig.readthedocs.io/en/latest)
[![License: AGPL v3](https://img.shields.io/badge/License-AGPL%20v3-blue.svg)](https://www.gnu.org/licenses/agpl-3.0)
[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/1558/badge)](https://bestpractices.coreinfrastructure.org/en/projects/1558)

![brig gateway in the files tab](docs/_static/gateway-files.png)

## Table of Contents

- [`brig`: Ship your data around the world](#brig-ship-your-data-around-the-world)
  - [Table of Contents](#table-of-contents)
  - [About](#about)
  - [Installation](#installation)
  - [Getting started](#getting-started)
  - [Status](#status)
  - [Documentation](#documentation)
  - [Donations](#donations)
  - [Focus](#focus)

## About

`brig` is a distributed & secure file synchronization tool with version control.
It is based on `IPFS`, written in Go and will feel familiar to `git` users.

**Key feature highlights:**

* Encryption of data at rest and in transport, plus on-the-fly compression.
* Simplified `git` version control.
* A sync algorithm that can handle moved files as well as empty files and directories.
* Your data does not need to be stored on the device you are currently using.
* FUSE filesystem that feels like a normal (sync) folder.
* No central server at all. Still, central architectures can be built with `brig`.
* Simple user identification and discovery, with user names that look like email addresses.

Also take a look [at the documentation](http://brig.readthedocs.io/en/latest/index.html) for more details.

## Installation

You can install the latest release with the following one-liner:

```bash
# Before you execute this, ask yourself if you trust me.
$ bash <(curl -s https://raw.githubusercontent.com/sahib/brig/master/scripts/install.sh)
```

Alternatively, you can simply grab the latest binary from the [release tab](https://github.com/sahib/brig/releases).

Development versions can be installed by compiling the sources yourself. If you
have a recent version of `go` (`>= 1.10`) installed, it should be as easy as this:

```bash
$ go get -d -v -u github.com/sahib/brig  # Download the sources.
$ cd $GOPATH/src/github.com/sahib/brig   # Go to the source directory.
$ git checkout develop                   # Checkout the develop branch.
$ go run mage.go                         # Build the software.
$ $GOPATH/bin/brig help                  # Run the binary.
```

Please refer to the [install docs](https://brig.readthedocs.io/en/latest/installation.html) for more details.

## Getting started

[![asciicast](https://asciinema.org/a/163713.png)](https://asciinema.org/a/163713)

If you want to know what to do afterwards, read the
[Quickstart](http://brig.readthedocs.io/en/latest/quickstart.html).

There is also a ``#brig`` room on ``matrix.org`` you can join with any [Matrix](https://matrix.org) client.
Click [this link](https://riot.im/app/#/room/#brig:matrix.org) to join the room directly via [Riot.im](https://about.riot.im).

## Status

This software is currently in a **beta phase**. All mentioned features should
work. Things might still change rapidly and no guarantees are given before
version `1.0.0`. Do not use `brig` as the only storage for your production
data yet. There are still bugs, but it should be safe enough to toy around
with it quite a bit.

This project started at the end of 2015 and has seen many conceptual changes
since then. It began as a research project. After writing my [master
thesis](https://github.com/disorganizer/brig-thesis) on it, it was put aside
for a few months until I picked it up again; I am currently trying to push it
towards usable software.

If you want to open a bug report, just type `brig bug` to get a pre-filled template.

## Documentation

All documentation can be found on [ReadTheDocs.org](http://brig.readthedocs.io/en/latest/index.html).

## Donations

If you're interested in the development and are thinking about supporting me
financially, please [contact me!](mailto:sahib@online.de) If you'd like to
give me a small & steady donation, you can always use *Liberapay*:

<noscript><a href="https://liberapay.com/sahib/donate"><img alt="Donate using Liberapay" src="https://liberapay.com/assets/widgets/donate.svg"></a></noscript>

*Thank you!*


================================================
FILE: Taskfile.yml
================================================
# This file controls how brig is built.
# It is a nicer-to-use alternative to Makefiles.
# Please read the documentation over at:
#
# https://taskfile.dev
#
# The actual commands that do the work are written in bash.
# See the scripts/ folder for them.
#
# When changing the structure of the repository, please remember
# to update the "sources" list in this file if dependencies
# of a build target were added, removed or changed.
version: '3'

tasks:
  default:
    deps: [build]

  elm:
    desc: "Compile elm sources to Javascript"
    cmds:
      - cd gateway/elm && elm make src/Main.elm --output ../static/js/app.js
    sources:
      - ./gateway/elm/**/*.elm
    generates:
      - ./gateway/static/js/app.js
    method: checksum
    summary: |
        Build the elm frontend.

  generate:
    desc: "Generate build dependencies"
    cmds:
      - scripts/generate.sh
    sources:
      - scripts/generate.sh
      - ./**/*.capnp
      - ./gateway/static/**/**/**/**

  build:
    deps: [generate]
    desc: "Build the brig binary"
    cmds:
      - ./scripts/build.sh
    sources:
      - ./scripts/build.sh
      - go.mod
      - ./*.go
      - ./**/*.go

  test:
    desc: "Run integration & unit tests"
    cmds:
      - bash scripts/run-tests.sh

  lint:
    desc: "Run static linters on the code"
    cmds:
      - bash scripts/run-linter.sh

  sloc:
    desc: "Count the lines of code"
    cmds:
      - bash scripts/count-lines-of-code.sh


================================================
FILE: autocomplete/bash_autocomplete
================================================
#!/bin/bash

# This should be installed to /etc/bash_completion.d/brig and sourced.
# If you want to try out the autocompletion, just source this file.
_cli_bash_autocomplete() {
    local cur opts base
    COMPREPLY=()
    cur="${COMP_WORDS[COMP_CWORD]}"
    opts=$( ${COMP_WORDS[@]:0:$COMP_CWORD} --generate-bash-completion )
    COMPREPLY=( $(compgen -W "${opts}" -- ${cur}) )
    return 0
}

complete -F _cli_bash_autocomplete brig
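
The completion candidates themselves come from the CLI: the binary prints one
candidate per line when invoked with the hidden `--generate-bash-completion`
flag, and `compgen -W` then filters them by the word being typed. A minimal
sketch of just that filtering step, with hypothetical candidates instead of the
real output of `brig`:

```shell
#!/bin/bash
# Sketch of the compgen filtering step used above. The candidate list
# is hypothetical; in the real script it comes from running
# `brig ... --generate-bash-completion`.
opts="init sync diff log"
cur="s"   # the word currently being typed
compgen -W "${opts}" -- "${cur}"   # prints: sync
```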


================================================
FILE: autocomplete/zsh_autocomplete
================================================
_cli_zsh_autocomplete() {
  local -a opts
  opts=("${(@f)$(_CLI_ZSH_AUTOCOMPLETE_HACK=1 ${words[@]:0:#words[@]-1} --generate-bash-completion)}")
  _describe 'values' opts
  return
}

compdef _cli_zsh_autocomplete brig


================================================
FILE: backend/backend.go
================================================
package backend

import (
	"errors"
	"io"
	"os"

	"github.com/sahib/brig/backend/httpipfs"
	"github.com/sahib/brig/backend/mock"
	"github.com/sahib/brig/catfs"
	eventsBackend "github.com/sahib/brig/events/backend"
	netBackend "github.com/sahib/brig/net/backend"
	"github.com/sahib/brig/repo"
	log "github.com/sirupsen/logrus"
)

var (
	// ErrNoSuchBackend is returned when passing an invalid backend name
	ErrNoSuchBackend = errors.New("No such backend")
)

// VersionInfo is a small interface that will return version info about the
// backend.
type VersionInfo interface {
	SemVer() string
	Name() string
	Rev() string
}

// Backend is an amalgamation of all backend interfaces required for brig to work.
type Backend interface {
	repo.Backend
	catfs.FsBackend
	netBackend.Backend
	eventsBackend.Backend
}

// ForwardLogByName will forward the logs of the backend `name` to `w`.
func ForwardLogByName(name string, w io.Writer) error {
	switch name {
	case "httpipfs":
		return nil
	case "mock":
		return nil
	}

	return ErrNoSuchBackend
}

// FromName returns a suitable backend for a human readable name.
// If an invalid name is passed, ErrNoSuchBackend is returned.
func FromName(name, path, fingerprint string) (Backend, error) {
	switch name {
	case "httpipfs":
		return httpipfs.NewNode(path, fingerprint)
	case "mock":
		user := "alice"
		if envUser := os.Getenv("BRIG_MOCK_USER"); envUser != "" {
			user = envUser
		}

		if envNetDbPath := os.Getenv("BRIG_MOCK_NET_DB_PATH"); envNetDbPath != "" {
			path = envNetDbPath
		}

		return mock.NewMockBackend(path, user), nil
	}

	return nil, ErrNoSuchBackend
}

// Version returns version info for the backend `name`.
func Version(name, path string) VersionInfo {
	switch name {
	case "mock":
		return mock.Version()
	case "httpipfs":
		nd, err := httpipfs.NewNode(path, "")
		if err != nil {
			log.Debugf("failed to get version: %v", err)
			return nil
		}

		defer nd.Close()
		return nd.Version()
	default:
		return nil
	}
}


================================================
FILE: backend/httpipfs/gc.go
================================================
package httpipfs

import (
	"bufio"
	"bytes"
	"context"
	"encoding/json"

	e "github.com/pkg/errors"
	h "github.com/sahib/brig/util/hashlib"
	log "github.com/sirupsen/logrus"
)

// GC will trigger the garbage collector of IPFS.
// Cleaned up hashes are returned as a list
// (note that those hashes are not necessarily ours).
func (nd *Node) GC() ([]h.Hash, error) {
	ctx := context.Background()
	resp, err := nd.sh.Request("repo/gc").Send(ctx)

	if err != nil {
		return nil, e.Wrapf(err, "gc request")
	}

	defer resp.Close()

	if resp.Error != nil {
		return nil, e.Wrapf(resp.Error, "gc resp")
	}

	hs := []h.Hash{}
	br := bufio.NewReader(resp.Output)
	for {
		line, err := br.ReadBytes('\n')
		if err != nil {
			break
		}

		raw := struct {
			Key map[string]string
		}{}

		lr := bytes.NewReader(line)
		if err := json.NewDecoder(lr).Decode(&raw); err != nil {
			return nil, e.Wrapf(err, "json decode")
		}

		for _, cid := range raw.Key {
			h, err := h.FromB58String(cid)
			if err != nil {
				return nil, e.Wrapf(err, "gc: hash decode")
			}

			hs = append(hs, h)
		}
	}

	log.Debugf("GC returned %d hashes", len(hs))
	return hs, nil
}


================================================
FILE: backend/httpipfs/gc_test.go
================================================
package httpipfs

import (
	"bytes"
	"testing"

	"github.com/sahib/brig/util/testutil"
	"github.com/stretchr/testify/require"
)

func TestGC(t *testing.T) {
	t.Skipf("will be replaced by bash based e2e tests")

	WithIpfs(t, 1, func(t *testing.T, ipfsPath string) {
		nd, err := NewNode(ipfsPath, "")
		require.Nil(t, err)

		data := testutil.CreateDummyBuf(4096 * 1024)
		hash, err := nd.Add(bytes.NewReader(data))
		require.Nil(t, err)

		require.Nil(t, nd.Unpin(hash))
		hashes, err := nd.GC()
		require.Nil(t, err)
		require.True(t, len(hashes) > 0)
	})
}


================================================
FILE: backend/httpipfs/io.go
================================================
package httpipfs

import (
	"context"
	"encoding/json"
	"fmt"
	"io"
	"io/ioutil"
	"sync"

	shell "github.com/ipfs/go-ipfs-api"
	"github.com/sahib/brig/catfs/mio"
	h "github.com/sahib/brig/util/hashlib"
)

func cat(s *shell.Shell, path string, offset int64) (io.ReadCloser, error) {
	rb := s.Request("cat", path)
	rb.Option("offset", offset)
	resp, err := rb.Send(context.Background())
	if err != nil {
		return nil, err
	}

	if resp.Error != nil {
		return nil, resp.Error
	}

	return resp.Output, nil
}

type streamWrapper struct {
	mu sync.Mutex

	io.ReadCloser
	nd   *Node
	hash h.Hash
	off  int64
	size int64
}

func (sw *streamWrapper) Read(buf []byte) (int, error) {
	sw.mu.Lock()
	defer sw.mu.Unlock()

	n, err := sw.ReadCloser.Read(buf)
	if err != nil {
		return n, err
	}

	sw.off += int64(n)
	return n, err
}

func (sw *streamWrapper) WriteTo(w io.Writer) (int64, error) {
	sw.mu.Lock()
	defer sw.mu.Unlock()

	return io.Copy(w, sw)
}

func (sw *streamWrapper) cachedSize() (int64, error) {
	ctx := context.Background()
	resp, err := sw.nd.sh.Request(
		"files/stat",
		"/ipfs/"+sw.hash.B58String(),
	).Send(ctx)

	if err != nil {
		return -1, err
	}

	defer resp.Close()

	if resp.Error != nil {
		return -1, resp.Error
	}

	raw := struct {
		Size int64
	}{}

	if err := json.NewDecoder(resp.Output).Decode(&raw); err != nil {
		return -1, err
	}

	return raw.Size, nil
}

func (sw *streamWrapper) getAbsOffset(offset int64, whence int) (int64, error) {
	switch whence {
	case io.SeekStart:
		sw.off = offset
		return offset, nil
	case io.SeekCurrent:
		sw.off += offset
		return sw.off, nil
	case io.SeekEnd:
		size, err := sw.cachedSize()
		if err != nil {
			return -1, err
		}

		sw.off = size + offset
		return sw.off, nil
	default:
		return -1, fmt.Errorf("invalid whence: %v", whence)
	}
}

// TODO: Seek is currently freaking expensive.
// Does IPFS maybe offer a better way to do this?
func (sw *streamWrapper) Seek(offset int64, whence int) (int64, error) {
	sw.mu.Lock()
	defer sw.mu.Unlock()

	absOffset, err := sw.getAbsOffset(offset, whence)
	if err != nil {
		return -1, err
	}

	rc, err := cat(sw.nd.sh, sw.hash.B58String(), absOffset)
	if err != nil {
		return -1, err
	}

	if sw.ReadCloser != nil {
		// Not sure if that is even needed...
		// TODO: measure memory consumption and see if we can do
		//       without discarding left over bytes.
		go func(rc io.ReadCloser) {
			io.Copy(ioutil.Discard, rc)
			rc.Close()
		}(sw.ReadCloser)
	}

	sw.off = absOffset
	sw.ReadCloser = rc
	return absOffset, nil
}

// Cat returns a stream associated with `hash`.
func (nd *Node) Cat(hash h.Hash) (mio.Stream, error) {
	rc, err := cat(nd.sh, hash.B58String(), 0)
	if err != nil {
		return nil, err
	}

	return &streamWrapper{
		nd:         nd,
		hash:       hash,
		ReadCloser: rc,
		off:        0,
		size:       -1,
	}, nil
}

// Add puts the contents of `r` into IPFS and returns its hash.
func (nd *Node) Add(r io.Reader) (h.Hash, error) {
	hs, err := nd.sh.Add(r)
	if err != nil {
		return nil, err
	}

	return h.FromB58String(hs)
}


================================================
FILE: backend/httpipfs/io_test.go
================================================
package httpipfs

import (
	"bytes"
	"fmt"
	"io"
	"io/ioutil"
	"testing"

	"github.com/sahib/brig/util/testutil"
	"github.com/stretchr/testify/require"
)

func TestAddCatBasic(t *testing.T) {
	t.Skipf("will be replaced by bash based e2e tests")

	WithIpfs(t, 1, func(t *testing.T, ipfsPath string) {
		nd, err := NewNode(ipfsPath, "")
		require.Nil(t, err)

		data := testutil.CreateDummyBuf(4096 * 1024)
		hash, err := nd.Add(bytes.NewReader(data))
		require.Nil(t, err)

		fmt.Println(hash)

		stream, err := nd.Cat(hash)
		require.Nil(t, err)

		echoData, err := ioutil.ReadAll(stream)
		require.Nil(t, err)
		require.Equal(t, data, echoData)
	})
}

func TestAddCatSize(t *testing.T) {
	t.Skipf("will be replaced by bash based e2e tests")

	WithIpfs(t, 1, func(t *testing.T, ipfsPath string) {
		nd, err := NewNode(ipfsPath, "")
		require.Nil(t, err)

		data := testutil.CreateDummyBuf(4096 * 1024)
		hash, err := nd.Add(bytes.NewReader(data))
		require.Nil(t, err)

		stream, err := nd.Cat(hash)
		require.Nil(t, err)

		size, err := stream.Seek(0, io.SeekEnd)
		require.Nil(t, err)
		require.Equal(t, int64(len(data)), size)

		off, err := stream.Seek(0, io.SeekStart)
		require.Nil(t, err)
		require.Equal(t, int64(0), off)

		echoData, err := ioutil.ReadAll(stream)
		require.Nil(t, err)
		require.Equal(t, data, echoData)
	})
}


================================================
FILE: backend/httpipfs/net.go
================================================
package httpipfs

import (
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"io/ioutil"
	"net"
	"os"
	"path"
	"path/filepath"
	"sync"
	"time"

	shell "github.com/ipfs/go-ipfs-api"
	netBackend "github.com/sahib/brig/net/backend"
	"github.com/sahib/brig/util"
	log "github.com/sirupsen/logrus"
)

type connWrapper struct {
	net.Conn

	peer       string
	protocol   string
	targetAddr string
	sh         *shell.Shell
}

func (cw *connWrapper) LocalAddr() net.Addr {
	return &addrWrapper{
		protocol: cw.protocol,
		peer:     "",
	}
}

func (cw *connWrapper) RemoteAddr() net.Addr {
	return &addrWrapper{
		protocol: cw.protocol,
		peer:     cw.peer,
	}
}

func (cw *connWrapper) Close() error {
	defer cw.Conn.Close()
	return closeStream(cw.sh, cw.protocol, "", cw.targetAddr)
}

// Dial will open a connection to the peer identified by `peerHash`,
// running `protocol` over it.
func (nd *Node) Dial(peerHash, fingerprint, protocol string) (net.Conn, error) {
	if !nd.isOnline() {
		return nil, ErrOffline
	}

	self, err := nd.Identity()
	if err != nil {
		return nil, err
	}

	if self.Addr == peerHash {
		// Special case:
		// When we use the same IPFS daemon for different
		// brig repositories, we still want to be able to dial
		// other brig instances. Since we cannot dial ourselves
		// over IPFS, we simply write the port to /tmp where
		// we can pick it up on Dial().
		addr, err := readLocalAddr(peerHash, fingerprint)
		if err != nil {
			return nil, err
		}

		return net.Dial("tcp", addr)
	}

	protocol = path.Join(protocol, peerHash)

	port := util.FindFreePort()
	addr := fmt.Sprintf("/ip4/127.0.0.1/tcp/%d", port)
	if err := forward(nd.sh, protocol, addr, peerHash); err != nil {
		return nil, err
	}

	tcpAddr := fmt.Sprintf("127.0.0.1:%d", port)
	log.Debugf("dial to »%s« over port %d", peerHash, port)
	conn, err := net.Dial("tcp", tcpAddr)
	if err != nil {
		return nil, err
	}

	return &connWrapper{
		Conn:       conn,
		peer:       peerHash,
		protocol:   protocol,
		targetAddr: addr,
		sh:         nd.sh,
	}, nil
}

//////////////////////////

func forward(sh *shell.Shell, protocol, targetAddr, peerID string) error {
	ctx := context.Background()
	peerID = "/ipfs/" + peerID

	rb := sh.Request("p2p/forward", protocol, targetAddr, peerID)
	rb.Option("allow-custom-protocol", true)
	resp, err := rb.Send(ctx)
	if err != nil {
		return err
	}

	defer resp.Close()
	if resp.Error != nil {
		return resp.Error
	}

	return nil
}

func openListener(sh *shell.Shell, protocol, targetAddr string) error {
	ctx := context.Background()
	rb := sh.Request("p2p/listen", protocol, targetAddr)
	rb.Option("allow-custom-protocol", true)
	resp, err := rb.Send(ctx)
	if err != nil {
		return err
	}

	defer resp.Close()
	if err := resp.Error; err != nil {
		return err
	}

	return nil
}

func closeStream(sh *shell.Shell, protocol, targetAddr, listenAddr string) error {
	ctx := context.Background()
	rb := sh.Request("p2p/close")
	rb.Option("protocol", protocol)

	if targetAddr != "" {
		rb.Option("target-address", targetAddr)
	}

	if listenAddr != "" {
		rb.Option("listen-address", listenAddr)
	}

	resp, err := rb.Send(ctx)
	if err != nil {
		return err
	}

	defer resp.Close()
	if resp.Error != nil {
		return resp.Error
	}

	return nil
}

type addrWrapper struct {
	protocol string
	peer     string
}

func (sa *addrWrapper) Network() string {
	return sa.protocol
}

func (sa *addrWrapper) String() string {
	return sa.peer
}

type listenerWrapper struct {
	lst         net.Listener
	protocol    string
	peer        string
	targetAddr  string
	fingerprint string
	sh          *shell.Shell
}

func (lw *listenerWrapper) Accept() (net.Conn, error) {
	conn, err := lw.lst.Accept()
	if err != nil {
		return nil, err
	}

	return &connWrapper{
		Conn:       conn,
		peer:       lw.peer,
		protocol:   lw.protocol,
		targetAddr: lw.targetAddr,
		sh:         lw.sh,
	}, nil
}

func (lw *listenerWrapper) Addr() net.Addr {
	return &addrWrapper{
		protocol: lw.protocol,
		peer:     lw.peer,
	}
}

func (lw *listenerWrapper) Close() error {
	defer lw.lst.Close()
	defer deleteLocalAddr(lw.peer, lw.fingerprint)
	return closeStream(lw.sh, lw.protocol, lw.targetAddr, "")
}

func buildLocalAddrPath(id, fingerprint string) string {
	return filepath.Join(os.TempDir(), fmt.Sprintf("brig-%s:%s.addr", id, fingerprint))
}

func readLocalAddr(id, fingerprint string) (string, error) {
	path := buildLocalAddrPath(id, fingerprint)
	data, err := ioutil.ReadFile(path)
	if err != nil {
		return "", err
	}

	return string(data), nil
}

func deleteLocalAddr(id, fingerprint string) error {
	path := buildLocalAddrPath(id, fingerprint)
	return os.RemoveAll(path)
}

func writeLocalAddr(id, fingerprint, addr string) error {
	path := buildLocalAddrPath(id, fingerprint)
	return ioutil.WriteFile(path, []byte(addr), 0644)
}

// Listen will listen to the protocol
func (nd *Node) Listen(protocol string) (net.Listener, error) {
	if !nd.isOnline() {
		return nil, ErrOffline
	}

	self, err := nd.Identity()
	if err != nil {
		return nil, err
	}

	// TODO: Is this still needed?
	// Do we want to support more than one brig per IPFS daemon?
	// Append the id to the protocol:
	protocol = path.Join(protocol, self.Addr)

	port := util.FindFreePort()
	addr := fmt.Sprintf("/ip4/127.0.0.1/tcp/%d", port)

	// Prevent errors by closing any previously opened listeners:
	if err := closeStream(nd.sh, protocol, "", ""); err != nil {
		return nil, err
	}

	log.Debugf("backend: listening for %s over port %d", protocol, port)
	if err := openListener(nd.sh, protocol, addr); err != nil {
		return nil, err
	}

	localAddr := fmt.Sprintf("127.0.0.1:%d", port)
	lst, err := net.Listen("tcp", localAddr)
	if err != nil {
		return nil, err
	}

	if err := writeLocalAddr(self.Addr, nd.fingerprint, localAddr); err != nil {
		return nil, err
	}

	return &listenerWrapper{
		lst:         lst,
		protocol:    protocol,
		peer:        self.Addr,
		targetAddr:  addr,
		fingerprint: nd.fingerprint,
		sh:          nd.sh,
	}, nil
}

/////////////////////////////////

type pinger struct {
	lastSeen  time.Time
	roundtrip time.Duration
	err       error

	mu     sync.Mutex
	cancel func()
	nd     *Node
}

// LastSeen returns the time we pinged the remote last time.
func (p *pinger) LastSeen() time.Time {
	p.mu.Lock()
	defer p.mu.Unlock()

	return p.lastSeen
}

// Roundtrip returns the time needed to send a single packet to
// the remote and receive the answer.
func (p *pinger) Roundtrip() time.Duration {
	p.mu.Lock()
	defer p.mu.Unlock()

	return p.roundtrip
}

// Err will return a non-nil error when the current ping did not succeed.
func (p *pinger) Err() error {
	p.mu.Lock()
	defer p.mu.Unlock()

	return p.err
}

// Close will clean up the pinger.
func (p *pinger) Close() error {
	if p.cancel != nil {
		p.cancel()
		p.cancel = nil
	}

	return nil
}

func (p *pinger) update(ctx context.Context, addr, self string) {
	// Edge case: test setups where we ping ourselves.
	if self == addr {
		p.mu.Lock()
		p.err = nil
		p.lastSeen = time.Now()
		p.roundtrip = time.Duration(0)
		p.mu.Unlock()
		return
	}

	// Do the network op without a lock:
	roundtrip, err := ping(p.nd.sh, addr)

	p.mu.Lock()
	if err != nil {
		p.err = err
	} else {
		p.err = nil
		p.lastSeen = time.Now()
		p.roundtrip = roundtrip
	}

	p.mu.Unlock()
}

func (p *pinger) Run(ctx context.Context, addr string) error {
	self, err := p.nd.Identity()
	if err != nil {
		return err
	}

	p.update(ctx, addr, self.Addr)
	tckr := time.NewTicker(10 * time.Second)

	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-tckr.C:
			p.update(ctx, addr, self.Addr)
		}
	}
}

func ping(sh *shell.Shell, peerID string) (time.Duration, error) {
	ctx := context.Background()
	resp, err := sh.Request("ping", peerID).Send(ctx)
	if err != nil {
		return 0, err
	}

	defer resp.Close()

	if resp.Error != nil {
		return 0, resp.Error
	}

	raw := struct {
		Success bool
		Time    int64
	}{}

	if err := json.NewDecoder(resp.Output).Decode(&raw); err != nil {
		return 0, err
	}

	if raw.Success {
		return time.Duration(raw.Time), nil
	}

	return 0, fmt.Errorf("no ping")
}

// ErrWaiting is the initial error state of a pinger.
// The error will be unset once a successful ping was made.
var ErrWaiting = errors.New("waiting for route")

// Ping will return a pinger for `addr`.
func (nd *Node) Ping(addr string) (netBackend.Pinger, error) {
	if !nd.isOnline() {
		return nil, ErrOffline
	}

	log.Debugf("backend: start ping »%s«", addr)
	p := &pinger{
		nd:  nd,
		err: ErrWaiting,
	}

	ctx, cancel := context.WithCancel(context.Background())
	p.cancel = cancel
	go p.Run(ctx, addr)
	return p, nil
}


================================================
FILE: backend/httpipfs/net_test.go
================================================
package httpipfs

import (
	"bytes"
	"io"
	"testing"
	"time"

	"github.com/stretchr/testify/require"
)

const (
	TestProtocol = "/brig/test/1.0"
)

var (
	TestMessage = []byte("Hello World!")
)

func testClientSide(t *testing.T, ipfsPathB string, addr string) {
	nd, err := NewNode(ipfsPathB, "")
	require.Nil(t, err)

	conn, err := nd.Dial(addr, "", TestProtocol)
	require.Nil(t, err)

	defer func() {
		require.Nil(t, conn.Close())
	}()

	_, err = conn.Write(TestMessage)
	require.Nil(t, err)
}

func TestDialAndListen(t *testing.T) {
	t.Skipf("will be replaced by bash based e2e tests")

	WithDoubleIpfs(t, 1, func(t *testing.T, ipfsPathA, ipfsPathB string) {
		nd, err := NewNode(ipfsPathA, "")
		require.Nil(t, err)

		lst, err := nd.Listen(TestProtocol)
		require.Nil(t, err)
		defer func() {
			require.Nil(t, lst.Close())
		}()

		id, err := nd.Identity()
		require.Nil(t, err)

		go testClientSide(t, ipfsPathB, id.Addr)

		conn, err := lst.Accept()
		require.Nil(t, err)

		buf := &bytes.Buffer{}
		_, err = io.Copy(buf, conn)
		require.Nil(t, err)
		require.Equal(t, TestMessage, buf.Bytes())
	})
}

func TestPing(t *testing.T) {
	t.Skipf("will be replaced by bash based e2e tests")

	WithDoubleIpfs(t, 1, func(t *testing.T, ipfsPathA, ipfsPathB string) {
		ndA, err := NewNode(ipfsPathA, "")
		require.NoError(t, err)

		idA, err := ndA.Identity()
		require.NoError(t, err)

		pinger, err := ndA.Ping(idA.Addr)
		require.NoError(t, err)

		defer func() {
			require.NoError(t, pinger.Close())
		}()

		for idx := 0; idx < 60; idx++ {
			if pinger.Err() != ErrWaiting {
				break
			}

			time.Sleep(1 * time.Second)
		}

		require.Nil(t, pinger.Err())
		require.True(t, pinger.Roundtrip() < time.Second)
		require.True(t, time.Since(pinger.LastSeen()) < 2*time.Second)
	})
}

func TestDialAndListenOnSingleNode(t *testing.T) {
	t.Skipf("will be replaced by bash based e2e tests")

	WithIpfs(t, 1, func(t *testing.T, ipfsPath string) {
		nd, err := NewNode(ipfsPath, "")
		require.Nil(t, err)

		lst, err := nd.Listen(TestProtocol)
		require.Nil(t, err)
		defer func() {
			require.Nil(t, lst.Close())
		}()

		id, err := nd.Identity()
		require.Nil(t, err)

		go testClientSide(t, ipfsPath, id.Addr)

		conn, err := lst.Accept()
		require.Nil(t, err)

		buf := &bytes.Buffer{}
		_, err = io.Copy(buf, conn)
		require.Nil(t, err)
		require.Equal(t, TestMessage, buf.Bytes())
	})
}

func TestPingSelf(t *testing.T) {
	t.Skipf("will be replaced by bash based e2e tests")

	WithIpfs(t, 1, func(t *testing.T, ipfsPath string) {
		nd, err := NewNode(ipfsPath, "")
		require.Nil(t, err)

		id, err := nd.Identity()
		require.Nil(t, err)

		pinger, err := nd.Ping(id.Addr)
		require.Nil(t, err)

		defer func() {
			require.Nil(t, pinger.Close())
		}()

		for idx := 0; idx < 60; idx++ {
			if pinger.Err() != ErrWaiting {
				break
			}

			time.Sleep(250 * time.Millisecond)
		}

		require.Nil(t, pinger.Err())
		require.True(t, pinger.Roundtrip() < time.Second)
		require.True(t, time.Since(pinger.LastSeen()) < 2*time.Second)
	})
}


================================================
FILE: backend/httpipfs/pin.go
================================================
package httpipfs

import (
	"context"
	"encoding/json"
	"io"
	"strings"

	"github.com/patrickmn/go-cache"
	h "github.com/sahib/brig/util/hashlib"
)

// IsPinned returns true when `hash` is pinned in some way.
func (nd *Node) IsPinned(hash h.Hash) (bool, error) {
	ctx := context.Background()
	resp, err := nd.sh.Request("pin/ls", hash.B58String()).Send(ctx)
	if err != nil {
		return false, err
	}

	defer resp.Close()

	if resp.Error != nil {
		if strings.HasSuffix(resp.Error.Message, "is not pinned") {
			return false, nil
		}

		return false, resp.Error
	}

	raw := struct {
		Keys map[string]struct {
			Type string
		}
	}{}

	if err := json.NewDecoder(resp.Output).Decode(&raw); err != nil {
		return false, err
	}

	if len(raw.Keys) == 0 {
		return false, nil
	}

	return true, nil
}

// Pin will pin `hash`.
func (nd *Node) Pin(hash h.Hash) error {
	return nd.sh.Pin(hash.B58String())
}

// Unpin will unpin `hash`.
func (nd *Node) Unpin(hash h.Hash) error {
	err := nd.sh.Unpin(hash.B58String())
	if err == nil || err.Error() == "pin/rm: not pinned or pinned indirectly" {
		return nil
	}
	return err
}

type objectRef struct {
	Ref string // hash of the ref
	Err string
}

// Link is a child of a hash.
// Used by IPFS when files get bigger.
type Link struct {
	Name string
	Hash string
	Size uint64
}

// IsCached checks if `hash` and all of its children are cached.
func (nd *Node) IsCached(hash h.Hash) (bool, error) {
	locallyCached := nd.cache.locallyCached
	stat, found := locallyCached.Get(hash.B58String())
	if found {
		return stat.(bool), nil
	}

	// Nothing in the cache, we have to figure it out.
	// We execute the equivalent of:
	//   ipfs refs --offline --recursive <hash>
	// Note the `--recursive` switch: we need to check all child links.
	// If the command fails, at least one child link/hash is missing.
	ctx := context.Background()
	req := nd.sh.Request("refs", hash.B58String())
	req.Option("offline", "true")
	req.Option("recursive", "true")
	resp, err := req.Send(ctx)
	if err != nil {
		return false, err
	}
	defer resp.Close()
	if resp.Error != nil {
		return false, resp.Error
	}

	ref := objectRef{}
	jsonDecoder := json.NewDecoder(resp.Output)
	for {
		if err := jsonDecoder.Decode(&ref); err == io.EOF {
			break
		} else if err != nil {
			return false, err
		}
		if ref.Err != "" {
			// Either the main hash or one of its refs/links is not
			// available locally; consequently the whole hash is not cached.
			locallyCached.Set(hash.B58String(), false, cache.DefaultExpiration)
			return false, nil
		}
	}
	// If we are here, the parent hash and all its child links/hashes are cached.
	locallyCached.Set(hash.B58String(), true, cache.DefaultExpiration)
	return true, nil
}

// CachedSize returns the cached size of the node.
// A negative value indicates that the size is unknown,
// either due to an error or because the hash is not stored locally.
func (nd *Node) CachedSize(hash h.Hash) (int64, error) {
	ctx := context.Background()
	req := nd.sh.Request("object/stat", hash.B58String())
	// provides backend size only for cached objects
	req.Option("offline", "true")
	resp, err := req.Send(ctx)
	if err != nil {
		return -1, err
	}

	defer resp.Close()

	if resp.Error != nil {
		return -1, resp.Error
	}

	raw := struct {
		CumulativeSize int64
		Key            string
	}{}

	if err := json.NewDecoder(resp.Output).Decode(&raw); err != nil {
		return -1, err
	}

	return raw.CumulativeSize, nil
}


================================================
FILE: backend/httpipfs/pin_test.go
================================================
package httpipfs

import (
	"bytes"
	"testing"

	h "github.com/sahib/brig/util/hashlib"
	"github.com/sahib/brig/util/testutil"
	"github.com/stretchr/testify/require"
)

func TestPinUnpin(t *testing.T) {
	t.Skipf("will be replaced by bash based e2e tests")

	WithIpfs(t, 1, func(t *testing.T, ipfsPath string) {
		nd, err := NewNode(ipfsPath, "")
		require.Nil(t, err)

		data := testutil.CreateDummyBuf(4096 * 1024)
		hash, err := nd.Add(bytes.NewReader(data))
		require.Nil(t, err)

		isPinned, err := nd.IsPinned(hash)
		require.Nil(t, err)
		require.True(t, isPinned)

		require.Nil(t, nd.Unpin(hash))

		isPinned, err = nd.IsPinned(hash)
		require.Nil(t, err)
		require.False(t, isPinned)

		require.Nil(t, nd.Pin(hash))

		isPinned, err = nd.IsPinned(hash)
		require.Nil(t, err)
		require.True(t, isPinned)
	})
}

func TestIsCached(t *testing.T) {
	t.Skipf("will be replaced by bash based e2e tests")

	WithIpfs(t, 1, func(t *testing.T, ipfsPath string) {
		nd, err := NewNode(ipfsPath, "")
		require.Nil(t, err)

		hash, err := nd.Add(bytes.NewReader([]byte{1, 2, 3}))
		require.Nil(t, err)

		isCached, err := nd.IsCached(hash)
		require.Nil(t, err)
		require.True(t, isCached)

		// Let's just hope this hash does not exist locally:
		dummyHash, err := h.FromB58String("QmanyEbg6appBzzGaGMZm9NKqPVCbrWaB8ayGDerWh6aMB")
		require.Nil(t, err)

		isCached, err = nd.IsCached(dummyHash)
		require.Nil(t, err)
		require.False(t, isCached)
	})
}


================================================
FILE: backend/httpipfs/pubsub.go
================================================
package httpipfs

import (
	"context"

	shell "github.com/ipfs/go-ipfs-api"
	eventsBackend "github.com/sahib/brig/events/backend"
)

type subWrapper struct {
	sub *shell.PubSubSubscription
}

type msgWrapper struct {
	msg *shell.Message
}

func (msg *msgWrapper) Data() []byte {
	return msg.msg.Data
}

func (msg *msgWrapper) Source() string {
	return string(msg.msg.From)
}

func (s *subWrapper) Next(ctx context.Context) (eventsBackend.Message, error) {
	msg, err := s.sub.Next()
	if err != nil {
		return nil, err
	}

	return &msgWrapper{msg: msg}, nil
}

func (s *subWrapper) Close() error {
	return s.sub.Cancel()
}

// Subscribe will create a subscription for `topic`.
// You can use the subscription to wait for the next incoming message.
// This will only work if the daemon supports (and has enabled) pubsub.
func (nd *Node) Subscribe(ctx context.Context, topic string) (eventsBackend.Subscription, error) {
	if !nd.isOnline() {
		return nil, ErrOffline
	}

	sub, err := nd.sh.PubSubSubscribe(topic)
	if err != nil {
		return nil, err
	}

	return &subWrapper{sub: sub}, nil
}

// PublishEvent will publish `data` on `topic`.
func (nd *Node) PublishEvent(topic string, data []byte) error {
	if !nd.isOnline() {
		return ErrOffline
	}

	return nd.sh.PubSubPublish(topic, string(data))
}


================================================
FILE: backend/httpipfs/pubsub_test.go
================================================
package httpipfs

import (
	"context"
	"testing"
	"time"

	"github.com/stretchr/testify/require"
)

func TestPubSub(t *testing.T) {
	t.Skipf("will be replaced by bash based e2e tests")

	// Only use one ipfs instance, for test performance.
	WithIpfs(t, 1, func(t *testing.T, ipfsPath string) {
		nd, err := NewNode(ipfsPath, "")
		require.Nil(t, err)

		self, err := nd.Identity()
		require.Nil(t, err)

		ctx := context.Background()
		sub, err := nd.Subscribe(ctx, "test-topic")
		require.Nil(t, err)

		defer func() {
			require.Nil(t, sub.Close())
		}()

		time.Sleep(1 * time.Second)
		data := []byte("hello world!")
		go nd.PublishEvent("test-topic", data)

		msg, err := sub.Next(ctx)
		require.Nil(t, err)

		require.Equal(t, data, msg.Data())
		require.Equal(t, self.Addr, msg.Source())
	})
}


================================================
FILE: backend/httpipfs/resolve.go
================================================
package httpipfs

import (
	"bufio"
	"bytes"
	"context"
	"encoding/json"

	shell "github.com/ipfs/go-ipfs-api"
	ipfsutil "github.com/ipfs/go-ipfs-util"
	mh "github.com/multiformats/go-multihash"
	"github.com/sahib/brig/net/peer"
	h "github.com/sahib/brig/util/hashlib"
	log "github.com/sirupsen/logrus"
)

// PublishName will announce `name` to the network
// and make us discoverable.
func (nd *Node) PublishName(name string) error {
	if !nd.isOnline() {
		return ErrOffline
	}

	fullName := "brig:" + name
	key, err := nd.sh.BlockPut([]byte(fullName), "v0", "sha2-256", -1)
	log.Debugf("published name: »%s« (key %s)", name, key)
	return err
}

// Identity returns our own identity.
// It will cache the identity after the first request.
func (nd *Node) Identity() (peer.Info, error) {
	nd.mu.Lock()
	if nd.cachedIdentity != "" {
		defer nd.mu.Unlock()
		return peer.Info{
			Name: "httpipfs",
			Addr: nd.cachedIdentity,
		}, nil
	}

	// Do not hold the lock during net ops:
	nd.mu.Unlock()

	id, err := nd.sh.ID()
	if err != nil {
		return peer.Info{}, err
	}

	nd.mu.Lock()
	nd.cachedIdentity = id.ID
	nd.mu.Unlock()

	return peer.Info{
		Name: "httpipfs",
		Addr: id.ID,
	}, nil
}

func findProvider(ctx context.Context, sh *shell.Shell, hash h.Hash) ([]string, error) {
	resp, err := sh.Request("dht/findprovs", hash.B58String()).Send(ctx)
	if err != nil {
		return nil, err
	}

	defer resp.Output.Close()

	if resp.Error != nil {
		return nil, resp.Error
	}

	ids := make(map[string]bool)
	br := bufio.NewReader(resp.Output)
	interrupted := false

	for len(ids) < 20 && !interrupted {
		line, err := br.ReadBytes('\n')
		if err != nil {
			break
		}

		raw := struct {
			Responses []struct {
				ID string
			}
		}{}

		lr := bytes.NewReader(line)
		if err := json.NewDecoder(lr).Decode(&raw); err != nil {
			return nil, err
		}

		for _, resp := range raw.Responses {
			ids[resp.ID] = true
		}

		select {
		case <-ctx.Done():
			interrupted = true
		default:
		}
	}

	linearIDs := []string{}
	for id := range ids {
		linearIDs = append(linearIDs, id)
	}

	return linearIDs, nil
}

// ResolveName will return all peers that identify themselves as `name`.
// If ctx is canceled it will return early, but return no error.
func (nd *Node) ResolveName(ctx context.Context, name string) ([]peer.Info, error) {
	if !nd.isOnline() {
		return nil, ErrOffline
	}

	name = "brig:" + name
	mhash, err := mh.Sum([]byte(name), ipfsutil.DefaultIpfsHash, -1)
	if err != nil {
		return nil, err
	}

	log.Debugf("backend: resolve »%s« (%s)", name, mhash.B58String())

	ids, err := findProvider(ctx, nd.sh, h.Hash(mhash))
	if err != nil {
		return nil, err
	}

	infos := []peer.Info{}
	for _, id := range ids {
		infos = append(infos, peer.Info{
			Addr: id,
			Name: peer.Name(name),
		})
	}

	return infos, nil
}


================================================
FILE: backend/httpipfs/resolve_test.go
================================================
package httpipfs

import (
	"context"
	"fmt"
	"testing"
	"time"

	"github.com/stretchr/testify/require"
)

func TestPublishResolve(t *testing.T) {
	t.Skipf("will be replaced by bash based e2e tests")

	// Only use one ipfs instance, for test performance.
	WithDoubleIpfs(t, 1, func(t *testing.T, ipfsPathA, ipfsPathB string) {
		ndA, err := NewNode(ipfsPathA, "")
		require.Nil(t, err)

		ndB, err := NewNode(ipfsPathB, "")
		require.Nil(t, err)

		// self, err := ndA.Identity()
		// require.Nil(t, err)

		require.Nil(t, ndA.PublishName("alice"))
		ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
		defer cancel()

		infos, err := ndB.ResolveName(ctx, "alice")
		require.Nil(t, err)

		// TODO: This test doesn't produce results yet,
		// most likely because of time issues (would need to run longer?)
		fmt.Println(infos)
	})
}


================================================
FILE: backend/httpipfs/shell.go
================================================
package httpipfs

import (
	"context"
	"encoding/json"
	"errors"
	"os"
	"path/filepath"
	"sync"
	"time"

	"github.com/blang/semver"
	shell "github.com/ipfs/go-ipfs-api"
	ma "github.com/multiformats/go-multiaddr"
	"github.com/patrickmn/go-cache"
	"github.com/sahib/brig/repo/setup"
	log "github.com/sirupsen/logrus"
)

var (
	// ErrOffline is returned by operations that need online support
	// to work when the backend is in offline mode.
	ErrOffline = errors.New("backend is in offline mode")
)

// IpfsStateCache contains various backend related caches
type IpfsStateCache struct {
	locallyCached *cache.Cache // tracks whether a hash and its children are locally cached by IPFS
}

// Node is the struct that holds the httpipfs backend together.
// It is a shallow type that holds little state of its own and is very lightweight.
type Node struct {
	sh             *shell.Shell
	mu             sync.Mutex
	cachedIdentity string
	allowNetOps    bool
	fingerprint    string
	version        *semver.Version
	cache          *IpfsStateCache
	quiet          bool
}

func getExperimentalFeatures(sh *shell.Shell) (map[string]bool, error) {
	ctx := context.Background()
	resp, err := sh.Request("config/show").Send(ctx)
	if err != nil {
		return nil, err
	}

	defer resp.Close()

	if resp.Error != nil {
		return nil, resp.Error
	}

	raw := struct {
		Experimental map[string]bool
	}{}

	if err := json.NewDecoder(resp.Output).Decode(&raw); err != nil {
		return nil, err
	}

	return raw.Experimental, nil
}

// Option is an option you can pass to NewNode().
// It controls the behavior of the node.
type Option func(nd *Node)

// WithNoLogging will make the node not print log messages.
// Useful for commandline use cases.
func WithNoLogging() Option {
	return func(nd *Node) {
		nd.quiet = true
	}
}

func toMultiAddr(ipfsPathOrMultiaddr string) (ma.Multiaddr, error) {
	if !filepath.IsAbs(ipfsPathOrMultiaddr) {
		// Multiaddrs always start with a slash,
		// so this branch only affects file paths.
		var err error
		ipfsPathOrMultiaddr, err = filepath.Abs(ipfsPathOrMultiaddr)
		if err != nil {
			return nil, err
		}
	}

	if _, err := os.Stat(ipfsPathOrMultiaddr); err == nil {
		return setup.GetAPIAddrForPath(ipfsPathOrMultiaddr)
	}

	return ma.NewMultiaddr(ipfsPathOrMultiaddr)
}

// NewNode returns a new http based IPFS backend.
func NewNode(ipfsPathOrMultiaddr string, fingerprint string, opts ...Option) (*Node, error) {
	nd := &Node{
		allowNetOps: true,
		fingerprint: fingerprint,
		cache: &IpfsStateCache{
			locallyCached: cache.New(5*time.Minute, 10*time.Minute),
		},
	}

	for _, opt := range opts {
		opt(nd)
	}

	m, err := toMultiAddr(ipfsPathOrMultiaddr)
	if err != nil {
		return nil, err
	}

	if !nd.quiet {
		log.Infof("Connecting to IPFS HTTP API at %s", m.String())
	}

	nd.sh = shell.NewShell(m.String())

	versionString, _, err := nd.sh.Version()
	if err != nil && !nd.quiet {
		log.Warningf("failed to get version: %v", err)
	}

	version, err := semver.Parse(versionString)
	if err != nil && !nd.quiet {
		log.Warningf("failed to parse version string of IPFS (»%s«): %v", versionString, err)
	}

	if !nd.quiet {
		log.Infof("The IPFS version is »%s«.", version)
		if version.LT(semver.MustParse("0.4.18")) {
			log.Warningf("This version is quite old. Please update, if possible.\n")
			log.Warningf("We only test on newer versions (>= 0.4.18).\n")
		}
	}

	nd.version = &version

	if !nd.quiet {
		features, err := getExperimentalFeatures(nd.sh)
		if err != nil {
			log.Warningf("Failed to get experimental feature list: %v", err)
		} else {
			if !features["Libp2pStreamMounting"] {
				log.Warningf("Stream mounting does not seem to be enabled.")
				log.Warningf("Please execute the following to change that:")
				log.Warningf("$ ipfs config --json Experimental.Libp2pStreamMounting true")
			}
		}
	}

	return nd, nil
}

// IsOnline returns true if the node is in online mode and the daemon is reachable.
func (nd *Node) IsOnline() bool {
	nd.mu.Lock()
	allowNetOps := nd.allowNetOps
	nd.mu.Unlock()

	return nd.sh.IsUp() && allowNetOps
}

// Connect implements Backend.Connect
func (nd *Node) Connect() error {
	nd.mu.Lock()
	defer nd.mu.Unlock()

	nd.allowNetOps = true
	return nil
}

// Disconnect implements Backend.Disconnect
func (nd *Node) Disconnect() error {
	nd.mu.Lock()
	defer nd.mu.Unlock()

	nd.allowNetOps = false
	return nil
}

func (nd *Node) isOnline() bool {
	nd.mu.Lock()
	defer nd.mu.Unlock()

	return nd.allowNetOps
}

// Close implements Backend.Close
func (nd *Node) Close() error {
	return nil
}

// Name returns "httpipfs" as name of the backend.
func (nd *Node) Name() string {
	return "httpipfs"
}


================================================
FILE: backend/httpipfs/testing.go
================================================
package httpipfs

import (
	"fmt"
	"io/ioutil"
	"os"
	"os/exec"
	"testing"
	"time"

	shell "github.com/ipfs/go-ipfs-api"
	"github.com/stretchr/testify/require"
)

// WithIpfs starts a new IPFS instance and calls `fn` with the path to its repository.
// `portOff` is an offset added to all standard ports.
func WithIpfs(t *testing.T, portOff int, fn func(t *testing.T, ipfsPath string)) {
	ipfsPath, err := ioutil.TempDir("", "brig-httpipfs-test-")
	require.Nil(t, err)
	defer os.RemoveAll(ipfsPath)

	gwtPort := 8081 + portOff
	swmPort := 4001 + portOff
	apiPort := 5011 + portOff

	os.Setenv("IPFS_PATH", ipfsPath)
	script := [][]string{
		{"ipfs", "init"},
		{"ipfs", "config", "--json", "Addresses.Swarm", fmt.Sprintf("[\"/ip4/127.0.0.1/tcp/%d\"]", swmPort)},
		{"ipfs", "config", "--json", "Experimental.Libp2pStreamMounting", "true"},
		{"ipfs", "config", "Addresses.API", fmt.Sprintf("/ip4/127.0.0.1/tcp/%d", apiPort)},
		{"ipfs", "config", "Addresses.Gateway", fmt.Sprintf("/ip4/127.0.0.1/tcp/%d", gwtPort)},
	}

	for _, line := range script {
		cmd := exec.Command(line[0], line[1:]...)
		cmd.Env = append(cmd.Env, fmt.Sprintf("IPFS_PATH=%s", ipfsPath))
		err := cmd.Run()
		require.NoError(t, err)
	}

	daemonCmd := exec.Command("ipfs", "daemon", "--enable-pubsub-experiment")
	// daemonCmd.Stdout = os.Stdout
	// daemonCmd.Stderr = os.Stdout
	daemonCmd.Env = append(daemonCmd.Env, fmt.Sprintf("IPFS_PATH=%s", ipfsPath))
	require.Nil(t, daemonCmd.Start())

	defer func() {
		require.Nil(t, daemonCmd.Process.Kill())
	}()

	// Wait until the daemon actually offers the API interface:
	localAddr := fmt.Sprintf("localhost:%d", apiPort)
	for tries := 0; tries < 200; tries++ {
		if shell.NewShell(localAddr).IsUp() {
			break
		}

		time.Sleep(100 * time.Millisecond)
	}

	// Actually call the test:
	fn(t, ipfsPath)
}

// WithDoubleIpfs starts two IPFS instances in parallel.
func WithDoubleIpfs(t *testing.T, portOff int, fn func(t *testing.T, ipfsPathA, ipfsPathB string)) {
	chPathA := make(chan string)
	chPathB := make(chan string)
	stop := make(chan bool, 2)

	go WithIpfs(t, portOff, func(t *testing.T, ipfsPathA string) {
		chPathA <- ipfsPathA
		<-stop
	})

	go WithIpfs(t, portOff+1, func(t *testing.T, ipfsPathB string) {
		chPathB <- ipfsPathB
		<-stop
	})

	fn(t, <-chPathA, <-chPathB)
	stop <- true
	stop <- true
}


================================================
FILE: backend/httpipfs/testing_test.go
================================================
package httpipfs

import (
	"bytes"
	"fmt"
	"testing"

	"github.com/stretchr/testify/require"
)

func TestIpfsStartup(t *testing.T) {
	t.Skipf("will be replaced by bash based e2e tests")

	WithIpfs(t, 1, func(t *testing.T, ipfsPath string) {
		nd, err := NewNode(ipfsPath, "")
		require.Nil(t, err)

		hash, err := nd.Add(bytes.NewReader([]byte("hello")))
		require.Nil(t, err, fmt.Sprintf("%v", err))
		require.Equal(t, "QmWfVY9y3xjsixTgbd9AorQxH7VtMpzfx2HaWtsoUYecaX", hash.String())
	})
}

func TestDoubleIpfsStartup(t *testing.T) {
	t.Skipf("will be replaced by bash based e2e tests")

	WithDoubleIpfs(t, 1, func(t *testing.T, ipfsPathA, ipfsPathB string) {
		ndA, err := NewNode(ipfsPathA, "")
		require.Nil(t, err)

		ndB, err := NewNode(ipfsPathB, "")
		require.Nil(t, err)

		idA, err := ndA.Identity()
		require.Nil(t, err, fmt.Sprintf("%v", err))

		idB, err := ndB.Identity()
		require.Nil(t, err)

		require.NotEqual(t, idA.Addr, idB.Addr)
	})
}


================================================
FILE: backend/httpipfs/version.go
================================================
package httpipfs

// VersionInfo holds version info (yeah, golint)
type VersionInfo struct {
	semVer, name, rev string
}

// SemVer returns a VersionInfo string complying with semantic versioning
func (v *VersionInfo) SemVer() string { return v.semVer }

// Name returns the name of the backend
func (v *VersionInfo) Name() string { return v.name }

// Rev returns the git revision of the backend
func (v *VersionInfo) Rev() string { return v.rev }

// Version returns detailed VersionInfo info as struct
func (n *Node) Version() *VersionInfo {
	v, rev, err := n.sh.Version()
	if err != nil {
		return nil
	}

	return &VersionInfo{
		semVer: v,
		name:   "go-ipfs",
		rev:    rev,
	}
}


================================================
FILE: backend/mock/mock.go
================================================
package mock

import (
	"github.com/sahib/brig/catfs"
	eventsMock "github.com/sahib/brig/events/mock"
	netMock "github.com/sahib/brig/net/mock"
	repoMock "github.com/sahib/brig/repo/mock"
)

// Backend is used for local testing.
type Backend struct {
	*catfs.MemFsBackend
	*repoMock.RepoBackend
	*netMock.NetBackend
	*eventsMock.EventsBackend
}

// NewMockBackend returns a backend.Backend that operates only in memory
// and does not use any resources that outlive the process itself, except
// the net part, which stores connection info on disk.
func NewMockBackend(path, owner string) *Backend {
	return &Backend{
		MemFsBackend:  catfs.NewMemFsBackend(),
		RepoBackend:   repoMock.NewMockRepoBackend(),
		NetBackend:    netMock.NewNetBackend(path, owner),
		EventsBackend: eventsMock.NewEventsBackend(owner),
	}
}

// VersionInfo holds version info (yeah, golint)
type VersionInfo struct {
	semVer, name, rev string
}

// SemVer returns a version string complying with semantic versioning
func (v *VersionInfo) SemVer() string { return v.semVer }

// Name returns the name of the backend
func (v *VersionInfo) Name() string { return v.name }

// Rev returns the git revision of the backend
func (v *VersionInfo) Rev() string { return v.rev }

// Version returns detailed version info as struct
func Version() *VersionInfo {
	return &VersionInfo{
		semVer: "0.0.1",
		name:   "mock",
		rev:    "HEAD",
	}
}


================================================
FILE: bench/bench.go
================================================
package bench

import (
	"bytes"
	"context"
	"fmt"
	"io"
	"io/ioutil"
	"math/rand"
	"os"
	"path/filepath"
	"runtime"
	"sort"
	"strings"
	"syscall"
	"time"

	"github.com/pkg/xattr"
	"github.com/sahib/brig/backend/httpipfs"
	"github.com/sahib/brig/catfs/mio"
	"github.com/sahib/brig/client"
	"github.com/sahib/brig/client/clienttest"
	"github.com/sahib/brig/fuse/fusetest"
	"github.com/sahib/brig/repo/hints"
	"github.com/sahib/brig/server"
	"github.com/sahib/brig/util/testutil"
)

// Run is a single benchmark run
type Run struct {
	Took             time.Duration
	Allocs           int64
	CompressionRatio float32
}

// Runs is a list of individual runs
type Runs []Run

// Average returns a fictional average run out of all runs
func (runs Runs) Average() Run {
	sum := Run{}
	for _, run := range runs {
		sum.Took += run.Took
		sum.Allocs += run.Allocs
		sum.CompressionRatio += run.CompressionRatio
	}

	return Run{
		Took:             sum.Took / time.Duration(len(runs)),
		Allocs:           sum.Allocs / int64(len(runs)),
		CompressionRatio: sum.CompressionRatio / float32(len(runs)),
	}
}

// Bench is the interface every benchmark needs to implement.
type Bench interface {
	// SupportHints should return true for benchmarks where
	// passing a hint influences the benchmark result.
	SupportHints() bool

	// CanBeVerified should return true when the test
	// can use the verifier (i.e. is a read test)
	CanBeVerified() bool

	// Bench should read the input from `r` and apply `hint` if applicable.
	// The time needed to process all of `r` should be returned.
	Bench(hint hints.Hint, size int64, r io.Reader, w io.Writer) (*Run, error)

	// Close should clean up the benchmark.
	Close() error
}

var (
	dummyKey = make([]byte, 32)
)

func withRunStats(size int64, fn func() (int64, error)) (*Run, error) {
	start := time.Now()

	var memBefore, memAfter runtime.MemStats
	runtime.ReadMemStats(&memBefore)
	written, err := fn()
	runtime.ReadMemStats(&memAfter)
	took := time.Since(start)
	return &Run{
		Took:             took,
		CompressionRatio: float32(written) / float32(size),
		Allocs:           int64(memAfter.Mallocs) - int64(memBefore.Mallocs),
	}, err
}

//////////

type memcpyBench struct{}

func newMemcpyBench(_ string, _ bool) (Bench, error) {
	return memcpyBench{}, nil
}

func (n memcpyBench) SupportHints() bool { return false }

func (n memcpyBench) CanBeVerified() bool { return true }

func (n memcpyBench) Bench(hint hints.Hint, size int64, r io.Reader, verifier io.Writer) (*Run, error) {
	// NOTE: Use DumbCopy, since io.Copy would use the
	// ReadFrom of ioutil.Discard. This is lightning fast.
	// We want to measure actual time to copy in memory.

	return withRunStats(size, func() (int64, error) {
		return testutil.DumbCopy(verifier, r, false, false)
	})
}

func (n memcpyBench) Close() error { return nil }

//////////

type serverCommon struct {
	daemon *server.Server
	client *client.Client
}

func newServerCommon(ipfsPath string) (*serverCommon, error) {
	backendName := "mock"
	if ipfsPath != "" {
		backendName = "httpipfs"
	}

	srv, err := clienttest.StartDaemon("ali", backendName, ipfsPath)
	if err != nil {
		return nil, err
	}

	ctl, err := client.Dial(context.Background(), srv.DaemonURL())
	if err != nil {
		return nil, err
	}

	return &serverCommon{
		daemon: srv,
		client: ctl,
	}, nil
}

func (sc *serverCommon) Close() error {
	sc.daemon.Close()
	sc.client.Close()
	return nil
}

type serverStageBench struct {
	common *serverCommon
}

func newServerStageBench(ipfsPath string, _ bool) (Bench, error) {
	common, err := newServerCommon(ipfsPath)
	if err != nil {
		return nil, err
	}

	return &serverStageBench{common: common}, nil
}

func (s *serverStageBench) SupportHints() bool { return true }

func (s *serverStageBench) CanBeVerified() bool { return false }

func (s *serverStageBench) Bench(hint hints.Hint, size int64, r io.Reader, verifier io.Writer) (*Run, error) {
	path := fmt.Sprintf("/path_%d", rand.Int31())

	c := string(hint.CompressionAlgo)
	e := string(hint.EncryptionAlgo)
	if err := s.common.client.HintSet(path, &c, &e); err != nil {
		return nil, err
	}

	// That's just for cleaning up after each test.
	defer s.common.client.Remove(path)

	return withRunStats(size, func() (int64, error) {
		return size, s.common.client.StageFromReader(path, r)
	})
}

func (s *serverStageBench) Close() error {
	return s.common.Close()
}

type serverCatBench struct {
	common *serverCommon
}

func newServerCatBench(ipfsPath string, _ bool) (Bench, error) {
	common, err := newServerCommon(ipfsPath)
	if err != nil {
		return nil, err
	}

	return &serverCatBench{common: common}, nil
}

func (s *serverCatBench) SupportHints() bool { return true }

func (s *serverCatBench) CanBeVerified() bool { return true }

func (s *serverCatBench) Bench(hint hints.Hint, size int64, r io.Reader, verifier io.Writer) (*Run, error) {
	path := fmt.Sprintf("/path_%d", rand.Int31())
	c := string(hint.CompressionAlgo)
	e := string(hint.EncryptionAlgo)

	if err := s.common.client.HintSet(path, &c, &e); err != nil {
		return nil, err
	}

	if err := s.common.client.StageFromReader(path, r); err != nil {
		return nil, err
	}

	// That's just for cleaning up after each test.
	defer s.common.client.Remove(path)

	return withRunStats(size, func() (int64, error) {
		stream, err := s.common.client.Cat(path, true)
		if err != nil {
			return 0, err
		}

		defer stream.Close()
		return testutil.DumbCopy(verifier, stream, false, false)
	})
}

func (s *serverCatBench) Close() error {
	return s.common.Close()
}

//////////

type mioWriterBench struct{}

func newMioWriterBench(_ string, _ bool) (Bench, error) {
	return &mioWriterBench{}, nil
}

func (m *mioWriterBench) SupportHints() bool { return true }

func (m *mioWriterBench) CanBeVerified() bool { return false }

func (m *mioWriterBench) Bench(hint hints.Hint, size int64, r io.Reader, verifier io.Writer) (*Run, error) {
	stream, _, err := mio.NewInStream(r, "", dummyKey, hint)
	if err != nil {
		return nil, err
	}

	return withRunStats(size, func() (int64, error) {
		defer stream.Close()
		return testutil.DumbCopy(ioutil.Discard, stream, false, false)
	})
}

func (m *mioWriterBench) Close() error {
	return nil
}

//////////

type mioReaderBench struct{}

func newMioReaderBench(_ string, _ bool) (Bench, error) {
	return &mioReaderBench{}, nil
}

func (m *mioReaderBench) SupportHints() bool { return true }

func (m *mioReaderBench) CanBeVerified() bool { return true }

func (m *mioReaderBench) Bench(hint hints.Hint, size int64, r io.Reader, verifier io.Writer) (*Run, error) {
	// Produce a buffer with encoded data in the right size.
	// This is not benched, only the reading of it is.
	inStream, _, err := mio.NewInStream(r, "", dummyKey, hint)
	if err != nil {
		return nil, err
	}

	defer inStream.Close()

	// Read it to memory before measuring.
	// We do not want to count the encoding in the bench time.
	streamData, err := ioutil.ReadAll(inStream)
	if err != nil {
		return nil, err
	}

	return withRunStats(size, func() (int64, error) {
		outStream, err := mio.NewOutStream(
			bytes.NewReader(streamData),
			hint.IsRaw(),
			dummyKey,
		)

		if err != nil {
			return -1, err
		}

		defer outStream.Close()

		return testutil.DumbCopy(verifier, outStream, false, false)
	})
}

func (m *mioReaderBench) Close() error {
	return nil
}

//////////

type ipfsAddOrCatBench struct {
	ipfsPath string
	isAdd    bool
}

func newIPFSAddBench(ipfsPath string, isAdd bool) (Bench, error) {
	return &ipfsAddOrCatBench{ipfsPath: ipfsPath, isAdd: isAdd}, nil
}

func (ia *ipfsAddOrCatBench) SupportHints() bool { return false }

func (ia *ipfsAddOrCatBench) CanBeVerified() bool { return !ia.isAdd }

func (ia *ipfsAddOrCatBench) Bench(hint hints.Hint, size int64, r io.Reader, verifier io.Writer) (*Run, error) {
	nd, err := httpipfs.NewNode(ia.ipfsPath, "")
	if err != nil {
		return nil, err
	}

	defer nd.Close()

	if ia.isAdd {
		return withRunStats(size, func() (int64, error) {
			_, err := nd.Add(r)
			return size, err
		})
	}

	hash, err := nd.Add(r)
	if err != nil {
		return nil, err
	}

	return withRunStats(size, func() (int64, error) {
		stream, err := nd.Cat(hash)
		if err != nil {
			return -1, err
		}

		return testutil.DumbCopy(verifier, stream, false, false)
	})
}

func (ia *ipfsAddOrCatBench) Close() error {
	return nil
}

//////////

type fuseWriteOrReadBench struct {
	ipfsPath string
	isWrite  bool

	tmpDir string
	ctl    *fusetest.Client
	proc   *os.Process
}

func newFuseWriteOrReadBench(ipfsPath string, isWrite bool) (Bench, error) {
	tmpDir, err := ioutil.TempDir("", "brig-fuse-bench-*")
	if err != nil {
		return nil, err
	}

	unixSocket := "unix:" + filepath.Join(tmpDir, "socket")

	proc, err := fusetest.LaunchAsProcess(fusetest.Options{
		MountPath:           filepath.Join(tmpDir, "mount"),
		CatfsPath:           filepath.Join(tmpDir, "catfs"),
		IpfsPathOrMultiaddr: ipfsPath,
		URL:                 unixSocket,
	})

	if err != nil {
		return nil, err
	}

	// Give it a bit of time to start things up:
	time.Sleep(500 * time.Millisecond)

	ctl, err := fusetest.Dial(unixSocket)
	if err != nil {
		return nil, err
	}

	return &fuseWriteOrReadBench{
		ipfsPath: ipfsPath,
		isWrite:  isWrite,
		tmpDir:   tmpDir,
		proc:     proc,
		ctl:      ctl,
	}, nil
}

func (fb *fuseWriteOrReadBench) SupportHints() bool { return true }

func (fb *fuseWriteOrReadBench) CanBeVerified() bool { return !fb.isWrite }

func (fb *fuseWriteOrReadBench) Bench(hint hints.Hint, size int64, r io.Reader, verifier io.Writer) (*Run, error) {
	mountDir := filepath.Join(fb.tmpDir, "mount")
	testPath := filepath.Join(mountDir, fmt.Sprintf("/path_%d", rand.Int31()))

	const (
		xattrEnc = "user.brig.hints.encryption"
		xattrZip = "user.brig.hints.compression"
	)

	// Make sure hints are followed:
	if err := xattr.Set(mountDir, xattrEnc, []byte(hint.EncryptionAlgo)); err != nil {
		return nil, err
	}

	if err := xattr.Set(mountDir, xattrZip, []byte(hint.CompressionAlgo)); err != nil {
		return nil, err
	}

	took, err := withRunStats(size, func() (int64, error) {
		fd, err := os.OpenFile(testPath, os.O_CREATE|os.O_WRONLY, 0600)
		if err != nil {
			return -1, err
		}

		defer fd.Close()

		return testutil.DumbCopy(fd, r, false, false)
	})

	if err != nil {
		return nil, err
	}

	if fb.isWrite {
		// test is done already, no need to read-back.
		return took, nil
	}

	took, err = withRunStats(size, func() (int64, error) {
		// NOTE: We have to use syscall.O_DIRECT here in order to
		//       bypass the kernel page cache. The write above fills it with
		//       data immediately, thus this read can yield 10x higher
		//       results (which you still might get in practice, if lucky)
		fd, err := os.OpenFile(testPath, os.O_RDONLY|syscall.O_DIRECT, 0600)
		if err != nil {
			return -1, err
		}

		defer fd.Close()

		return testutil.DumbCopy(verifier, fd, false, false)
	})

	return took, err
}

func (fb *fuseWriteOrReadBench) Close() error {
	fb.ctl.QuitServer()
	time.Sleep(2 * time.Second)
	fb.proc.Signal(syscall.SIGTERM)

	var lastError error
	for retries := 0; retries < 10; retries++ {
		if err := os.RemoveAll(fb.tmpDir); err != nil {
			time.Sleep(200 * time.Millisecond)
			lastError = err
			continue
		}

		lastError = nil
		break
	}

	return lastError
}

//////////

var (
	// Convention:
	// - If it's using ipfs, put it in the name.
	// - If it's writing things, put that in the name too as "write".
	benchMap = map[string]func(string, bool) (Bench, error){
		"memcpy":          newMemcpyBench,
		"brig-write-mem":  newServerStageBench,
		"brig-read-mem":   newServerCatBench,
		"brig-write-ipfs": newServerStageBench,
		"brig-read-ipfs":  newServerCatBench,
		"mio-write":       newMioWriterBench,
		"mio-read":        newMioReaderBench,
		"ipfs-write":      newIPFSAddBench,
		"ipfs-read":       newIPFSAddBench,
		"fuse-write-mem":  newFuseWriteOrReadBench,
		"fuse-write-ipfs": newFuseWriteOrReadBench,
		"fuse-read-mem":   newFuseWriteOrReadBench,
		"fuse-read-ipfs":  newFuseWriteOrReadBench,
	}
)

// ByName returns the benchmark with this name, or an error
// if none. If IPFS is used, it should be given as `ipfsPath`.
func ByName(name, ipfsPath string) (Bench, error) {
	newBench, ok := benchMap[name]
	if !ok {
		return nil, fmt.Errorf("no such bench: %s", name)
	}

	return newBench(ipfsPath, strings.Contains(name, "write"))
}

// BenchmarkNames returns all possible benchmark names
// in a defined & stable order.
func BenchmarkNames() []string {
	names := []string{}
	for name := range benchMap {
		names = append(names, name)
	}

	sort.Slice(names, func(i, j int) bool {
		if names[i] == names[j] {
			return false
		}

		specials := []string{
			"memcpy",
			"mio",
		}

		for _, special := range specials {
			v := strings.HasSuffix(names[i], special)
			if v || strings.HasSuffix(names[j], special) {
				return v
			}
		}

		return names[i] < names[j]
	})

	return names
}


================================================
FILE: bench/inputs.go
================================================
package bench

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"io"
	"sort"

	"github.com/sahib/brig/util/testutil"
)

// Verifier is an io.Writer that should be used for benchmarks
// that read encoded data. It verifies that the data is actually
// correct in the sense that it is equal to the original input.
type Verifier interface {
	io.Writer

	// MissingBytes returns the diff of bytes to the original input.
	// This number can be negative when too much data was written.
	// Only 0 is a valid value after the benchmark finished.
	MissingBytes() int64
}

// Input generates input for a benchmark. It defines how the data looks that
// is fed to the streaming system.
type Input interface {
	Reader(seed uint64) (io.Reader, error)
	Size() int64
	Verifier() (Verifier, error)
	Close() error
}

func benchData(size uint64, name string) []byte {
	switch name {
	case "random":
		return testutil.CreateRandomDummyBuf(int64(size), 23)
	case "ten":
		return testutil.CreateDummyBuf(int64(size))
	case "mixed":
		return testutil.CreateMixedDummyBuf(int64(size), 42)
	default:
		return nil
	}
}

//////////

type memVerifier struct {
	expect  []byte
	counter int64
}

func (m *memVerifier) Write(buf []byte) (int, error) {
	if int64(len(buf))+m.counter > int64(len(m.expect)) {
		return -1, fmt.Errorf("verify: got too much data")
	}

	slice := m.expect[m.counter : m.counter+int64(len(buf))]
	if !bytes.Equal(slice, buf) {
		return -1, fmt.Errorf("verify: data differs in block at %d", m.counter)
	}

	m.counter += int64(len(buf))

	// Just swallow the data and let the GC do the rest.
	return len(buf), nil
}

func (m *memVerifier) MissingBytes() int64 {
	return int64(len(m.expect)) - m.counter
}

type memInput struct {
	buf []byte
}

func newMemInput(size uint64, name string) Input {
	return &memInput{buf: benchData(size, name)}
}

func (ni *memInput) Reader(seed uint64) (io.Reader, error) {
	// Put a few bytes of difference at the start to make the complete
	// stream differ from the one of the last seed. This avoids subsequent
	// runs of a benchmark getting speed-ups because they can cache inputs.
	binary.LittleEndian.PutUint64(ni.buf, seed)
	return bytes.NewReader(ni.buf), nil
}

func (ni *memInput) Verifier() (Verifier, error) {
	return &memVerifier{
		expect:  ni.buf,
		counter: 0,
	}, nil
}

func (ni *memInput) Size() int64 {
	return int64(len(ni.buf))
}

func (ni *memInput) Close() error {
	return nil
}

//////////

var (
	inputMap = map[string]func(size uint64) (Input, error){
		"ten": func(size uint64) (Input, error) {
			return newMemInput(size, "ten"), nil
		},
		"random": func(size uint64) (Input, error) {
			return newMemInput(size, "random"), nil
		},
		"mixed": func(size uint64) (Input, error) {
			return newMemInput(size, "mixed"), nil
		},
	}
)

// InputByName fetches the input by its name and returns an input
// that will produce data with `size` bytes.
func InputByName(name string, size uint64) (Input, error) {
	newInput, ok := inputMap[name]
	if !ok {
		return nil, fmt.Errorf("no such input: %s", name)
	}

	return newInput(size)
}

// InputNames returns the sorted list of all possible inputs.
func InputNames() []string {
	names := []string{}
	for name := range inputMap {
		names = append(names, name)
	}

	sort.Strings(names)
	return names
}


================================================
FILE: bench/runner.go
================================================
package bench

import (
	"fmt"
	"io/ioutil"
	"os"
	"os/signal"
	"runtime"
	"sort"
	"strings"
	"syscall"
	"time"

	"github.com/sahib/brig/repo/hints"
	"github.com/sahib/brig/repo/setup"
	log "github.com/sirupsen/logrus"
)

// Config define how the benchmarks are run.
type Config struct {
	InputName   string `json:"input_name"`
	BenchName   string `json:"bench_name"`
	Size        uint64 `json:"size"`
	Encryption  string `json:"encryption"`
	Compression string `json:"compression"`
	Samples     int    `json:"samples"`
}

// Result is the result of a single benchmark run.
type Result struct {
	Name            string        `json:"name"`
	Config          Config        `json:"config"`
	Encryption      string        `json:"encryption"`
	Compression     string        `json:"compression"`
	Took            time.Duration `json:"took"`
	Throughput      float64       `json:"throughput"`
	CompressionRate float32       `json:"compression_rate"`
	Allocs          int64         `json:"allocs"`
}

// buildHints handles wildcards for compression and/or encryption.
// If no wildcards are specified, we just take what is set in `cfg`.
func buildHints(cfg Config) []hints.Hint {
	encIsWildcard := cfg.Encryption == "*"
	zipIsWildcard := cfg.Compression == "*"

	if encIsWildcard && zipIsWildcard {
		return hints.AllPossibleHints()
	}

	if encIsWildcard {
		hs := []hints.Hint{}
		for _, encAlgo := range hints.ValidEncryptionHints() {
			hs = append(hs, hints.Hint{
				CompressionAlgo: hints.CompressionHint(cfg.Compression),
				EncryptionAlgo:  hints.EncryptionHint(encAlgo),
			})
		}

		return hs
	}

	if zipIsWildcard {
		hs := []hints.Hint{}
		for _, zipAlgo := range hints.ValidCompressionHints() {
			hs = append(hs, hints.Hint{
				CompressionAlgo: hints.CompressionHint(zipAlgo),
				EncryptionAlgo:  hints.EncryptionHint(cfg.Encryption),
			})
		}

		return hs
	}

	return []hints.Hint{{
		CompressionAlgo: hints.CompressionHint(cfg.Compression),
		EncryptionAlgo:  hints.EncryptionHint(cfg.Encryption),
	}}
}

func sortHints(hs []hints.Hint) []hints.Hint {
	sort.Slice(hs, func(i, j int) bool {
		return hs[i].Less(hs[j])
	})

	// sorts in-place, but also return for ease of use.
	return hs
}

func benchmarkSingle(cfg Config, fn func(result Result), ipfsPath string) error {
	in, err := InputByName(cfg.InputName, cfg.Size)
	if err != nil {
		return err
	}

	defer in.Close()

	out, err := ByName(cfg.BenchName, ipfsPath)
	if err != nil {
		return err
	}

	defer out.Close()

	sigCh := make(chan os.Signal, 1)
	signal.Notify(sigCh, syscall.SIGINT)
	defer signal.Stop(sigCh)

	for _, hint := range sortHints(buildHints(cfg)) {
		select {
		case <-sigCh:
			fmt.Println("Interrupted")
			return nil
		default:
			// just continue
		}

		supportsHints := out.SupportHints()
		if !supportsHints {
			// Indicate in output that nothing was encrypted or compressed.
			hint.CompressionAlgo = hints.CompressionNone
			hint.EncryptionAlgo = hints.EncryptionNone
		}

		if hint.CompressionAlgo == hints.CompressionGuess {
			// NOTE: We do not benchmark guessing here.
			// The simple reason is that we do not know from the output
			// which algorithm was actually used.
			continue
		}

		var runs Runs

		// probably doesn't do much, just to clean any leftover memory.
		runtime.GC()

		for seed := uint64(0); seed < uint64(cfg.Samples); seed++ {
			r, err := in.Reader(seed)
			if err != nil {
				return err
			}

			v, err := in.Verifier()
			if err != nil {
				return err
			}

			run, err := out.Bench(hint, in.Size(), r, v)
			if err != nil {
				return err
			}

			runs = append(runs, *run)

			// Most write-only benchmarks cannot be verified, since
			// we modify the stream and the verifier checks that the stream
			// is equal to the input. Most read tests involve the same logic
			// as writing though, so the write path has to work for them.
			if out.CanBeVerified() {
				if missing := v.MissingBytes(); missing != 0 {
					log.Warnf("not all or too much data received in verify: %d", missing)
				}
			}
		}

		avgRun := runs.Average()
		throughput := (float64(cfg.Size) / 1000 / 1000) / (float64(avgRun.Took) / float64(time.Second))
		fn(Result{
			Name:            fmt.Sprintf("%s:%s_%s", cfg.BenchName, cfg.InputName, hint),
			Encryption:      string(hint.EncryptionAlgo),
			Compression:     string(hint.CompressionAlgo),
			Config:          cfg,
			Took:            avgRun.Took,
			Throughput:      throughput,
			CompressionRate: avgRun.CompressionRatio,
			Allocs:          avgRun.Allocs,
		})

		if !supportsHints {
			// If there are no hints, there is no point
			// in repeating the benchmark several times.
			break
		}
	}

	return nil
}

// IPFS is expensive to set up, so let's do it only once.
func ipfsIsNeeded(cfgs []Config) bool {
	for _, cfg := range cfgs {
		if strings.Contains(strings.ToLower(cfg.BenchName), "ipfs") {
			return true
		}
	}

	return false
}

// Benchmark runs the benchmarks specified by `cfgs` and calls `fn` on each result.
func Benchmark(cfgs []Config, fn func(result Result)) error {
	needsIPFS := ipfsIsNeeded(cfgs)
	var result *setup.Result

	if needsIPFS {
		var err error
		log.Infof("Setting up IPFS for the benchmarks...")

		ipfsPath, err := ioutil.TempDir("", "brig-iobench-ipfs-repo-*")
		if err != nil {
			return err
		}

		result, err = setup.IPFS(setup.Options{
			LogWriter:        ioutil.Discard,
			Setup:            true,
			SetDefaultConfig: true,
			SetExtraConfig:   true,
			IpfsPath:         ipfsPath,
			InitProfile:      "test",
		})

		if err != nil {
			return err
		}
	}

	for _, cfg := range cfgs {
		var ipfsPath string
		if result != nil {
			ipfsPath = result.IpfsPath
		}

		if err := benchmarkSingle(cfg, fn, ipfsPath); err != nil {
			return err
		}
	}

	if needsIPFS {
		if result.IpfsPath != "" {
			os.RemoveAll(result.IpfsPath)
		}

		if result.PID > 0 {
			proc, err := os.FindProcess(result.PID)
			if err != nil {
				log.WithError(err).Warnf("failed to get IPFS PID")
			} else {
				if err := proc.Kill(); err != nil {
					log.WithError(err).Warnf("failed to kill IPFS PID")
				}
			}
		}
	}

	return nil
}


================================================
FILE: bench/stats.go
================================================
package bench

import (
	"time"

	"github.com/klauspost/cpuid/v2"
)

// Stats are system statistics that might influence the benchmark result.
type Stats struct {
	Time         time.Time `json:"time"`
	CPUBrandName string    `json:"cpu_brand_name"`
	LogicalCores int       `json:"logical_cores"`
	HasAESNI     bool      `json:"has_aesni"`
}

// FetchStats returns the current statistics.
func FetchStats() Stats {
	return Stats{
		Time:         time.Now(),
		CPUBrandName: cpuid.CPU.BrandName,
		LogicalCores: cpuid.CPU.LogicalCores,
		HasAESNI:     cpuid.CPU.Supports(cpuid.AESNI),
	}
}


================================================
FILE: brig.go
================================================
package main

import (
	"os"

	"github.com/sahib/brig/cmd"
)

func main() {
	os.Exit(cmd.RunCmdline(os.Args))
}


================================================
FILE: catfs/backend.go
================================================
package catfs

import (
	"fmt"
	"io"
	"io/ioutil"

	"github.com/sahib/brig/catfs/mio"
	"github.com/sahib/brig/catfs/mio/chunkbuf"
	h "github.com/sahib/brig/util/hashlib"
	"github.com/sahib/brig/util/testutil"
)

// ErrNoSuchHash should be returned whenever the backend is unable
// to find an object referenced by this hash.
type ErrNoSuchHash struct {
	what h.Hash
}

func (eh ErrNoSuchHash) Error() string {
	return fmt.Sprintf("No such hash: %s", eh.what.B58String())
}

// FsBackend is the interface that needs to be implemented by the data
// management layer.
type FsBackend interface {
	// Cat should find the object referenced by `hash` and
	// make its data available as mio.Stream.
	Cat(hash h.Hash) (mio.Stream, error)

	// Add should read all data in `r` and return the hash under
	// which it can be accessed on later.
	Add(r io.Reader) (h.Hash, error)

	// Pin gives the object at `hash` a "pin"
	// (i.e. it marks the file to be stored indefinitely in local storage).
	// When pinning an explicitly pinned object with an implicit pin,
	// the explicit pin will stay. Upgrading from implicit to explicit is possible though.
	Pin(hash h.Hash) error

	// Unpin removes a previously added pin.
	// If an object is already unpinned, this is a no-op.
	Unpin(hash h.Hash) error

	// IsPinned checks if the file is pinned.
	IsPinned(hash h.Hash) (bool, error)

	// IsCached checks if the file contents are available locally.
	IsCached(hash h.Hash) (bool, error)

	// CachedSize returns the backend size for a given hash.
	// A negative value indicates that the cached size is unknown.
	CachedSize(hash h.Hash) (int64, error)
}

// MemFsBackend is a mock structure that implements FsBackend.
type MemFsBackend struct {
	data map[string][]byte
	pins map[string]bool
}

// NewMemFsBackend returns a MemFsBackend (useful for writing tests)
func NewMemFsBackend() *MemFsBackend {
	return &MemFsBackend{
		data: make(map[string][]byte),
		pins: make(map[string]bool),
	}
}

// Cat implements FsBackend.Cat by querying memory.
func (mb *MemFsBackend) Cat(hash h.Hash) (mio.Stream, error) {
	data, ok := mb.data[hash.B58String()]
	if !ok {
		return nil, ErrNoSuchHash{hash}
	}

	chunkBuf := chunkbuf.NewChunkBuffer(data)
	randRead := testutil.RandomizeReads(chunkBuf, 512, true)

	return struct {
		io.Reader
		io.Seeker
		io.Closer
		io.WriterTo
	}{
		Reader:   randRead,
		Seeker:   chunkBuf,
		WriterTo: chunkBuf,
		Closer:   ioutil.NopCloser(chunkBuf),
	}, nil
}

// Add implements FsBackend.Add by storing the data in memory.
func (mb *MemFsBackend) Add(r io.Reader) (h.Hash, error) {
	data, err := ioutil.ReadAll(r)
	if err != nil {
		return nil, err
	}

	hash := h.SumWithBackendHash(data)
	mb.data[hash.B58String()] = data
	return hash, nil
}

// Pin implements FsBackend.Pin by storing a marker in memory.
func (mb *MemFsBackend) Pin(hash h.Hash) error {
	mb.pins[hash.B58String()] = true
	return nil
}

// Unpin implements FsBackend.Unpin by removing a marker in memory.
func (mb *MemFsBackend) Unpin(hash h.Hash) error {
	mb.pins[hash.B58String()] = false
	return nil
}

// IsPinned implements FsBackend.IsPinned by querying a marker in memory.
func (mb *MemFsBackend) IsPinned(hash h.Hash) (bool, error) {
	isPinned, ok := mb.pins[hash.B58String()]
	if !ok {
		return false, nil
	}

	return isPinned, nil
}

// IsCached implements FsBackend.IsCached by checking if the file exists.
// If the hash is found, the file is always cached.
func (mb *MemFsBackend) IsCached(hash h.Hash) (bool, error) {
	_, ok := mb.data[hash.B58String()]
	return ok, nil
}

// CachedSize implements FsBackend.CachedSize by returning the data size.
// If the hash is found, the file is always cached.
func (mb *MemFsBackend) CachedSize(hash h.Hash) (int64, error) {
	data, ok := mb.data[hash.B58String()]
	if !ok {
		return -1, nil // negative indicates unknown size
	}
	return int64(len(data)), nil
}


================================================
FILE: catfs/capnp/pinner.capnp
================================================
using Go = import "/go.capnp";

@0xba762188b0a6e4cf;

$Go.package("capnp");
$Go.import("github.com/sahib/brig/catfs/capnp");


struct Pin {
    inode    @0 :UInt64;
    isPinned @1 :Bool;
}

struct PinEntry $Go.doc("A single entry for a certain content node") {
    # Following attributes will be part of the hash:
    pins   @0 :List(Pin);
}


================================================
FILE: catfs/capnp/pinner.capnp.go
================================================
// Code generated by capnpc-go. DO NOT EDIT.

package capnp

import (
	capnp "zombiezen.com/go/capnproto2"
	text "zombiezen.com/go/capnproto2/encoding/text"
	schemas "zombiezen.com/go/capnproto2/schemas"
)

type Pin struct{ capnp.Struct }

// Pin_TypeID is the unique identifier for the type Pin.
const Pin_TypeID = 0x985d53e01674ee95

func NewPin(s *capnp.Segment) (Pin, error) {
	st, err := capnp.NewStruct(s, capnp.ObjectSize{DataSize: 16, PointerCount: 0})
	return Pin{st}, err
}

func NewRootPin(s *capnp.Segment) (Pin, error) {
	st, err := capnp.NewRootStruct(s, capnp.ObjectSize{DataSize: 16, PointerCount: 0})
	return Pin{st}, err
}

func ReadRootPin(msg *capnp.Message) (Pin, error) {
	root, err := msg.RootPtr()
	return Pin{root.Struct()}, err
}

func (s Pin) String() string {
	str, _ := text.Marshal(0x985d53e01674ee95, s.Struct)
	return str
}

func (s Pin) Inode() uint64 {
	return s.Struct.Uint64(0)
}

func (s Pin) SetInode(v uint64) {
	s.Struct.SetUint64(0, v)
}

func (s Pin) IsPinned() bool {
	return s.Struct.Bit(64)
}

func (s Pin) SetIsPinned(v bool) {
	s.Struct.SetBit(64, v)
}

// Pin_List is a list of Pin.
type Pin_List struct{ capnp.List }

// NewPin creates a new list of Pin.
func NewPin_List(s *capnp.Segment, sz int32) (Pin_List, error) {
	l, err := capnp.NewCompositeList(s, capnp.ObjectSize{DataSize: 16, PointerCount: 0}, sz)
	return Pin_List{l}, err
}

func (s Pin_List) At(i int) Pin { return Pin{s.List.Struct(i)} }

func (s Pin_List) Set(i int, v Pin) error { return s.List.SetStruct(i, v.Struct) }

func (s Pin_List) String() string {
	str, _ := text.MarshalList(0x985d53e01674ee95, s.List)
	return str
}

// Pin_Promise is a wrapper for a Pin promised by a client call.
type Pin_Promise struct{ *capnp.Pipeline }

func (p Pin_Promise) Struct() (Pin, error) {
	s, err := p.Pipeline.Struct()
	return Pin{s}, err
}

// A single entry for a certain content node
type PinEntry struct{ capnp.Struct }

// PinEntry_TypeID is the unique identifier for the type PinEntry.
const PinEntry_TypeID = 0xdb74f7cf7bc815c6

func NewPinEntry(s *capnp.Segment) (PinEntry, error) {
	st, err := capnp.NewStruct(s, capnp.ObjectSize{DataSize: 0, PointerCount: 1})
	return PinEntry{st}, err
}

func NewRootPinEntry(s *capnp.Segment) (PinEntry, error) {
	st, err := capnp.NewRootStruct(s, capnp.ObjectSize{DataSize: 0, PointerCount: 1})
	return PinEntry{st}, err
}

func ReadRootPinEntry(msg *capnp.Message) (PinEntry, error) {
	root, err := msg.RootPtr()
	return PinEntry{root.Struct()}, err
}

func (s PinEntry) String() string {
	str, _ := text.Marshal(0xdb74f7cf7bc815c6, s.Struct)
	return str
}

func (s PinEntry) Pins() (Pin_List, error) {
	p, err := s.Struct.Ptr(0)
	return Pin_List{List: p.List()}, err
}

func (s PinEntry) HasPins() bool {
	p, err := s.Struct.Ptr(0)
	return p.IsValid() || err != nil
}

func (s PinEntry) SetPins(v Pin_List) error {
	return s.Struct.SetPtr(0, v.List.ToPtr())
}

// NewPins sets the pins field to a newly
// allocated Pin_List, preferring placement in s's segment.
func (s PinEntry) NewPins(n int32) (Pin_List, error) {
	l, err := NewPin_List(s.Struct.Segment(), n)
	if err != nil {
		return Pin_List{}, err
	}
	err = s.Struct.SetPtr(0, l.List.ToPtr())
	return l, err
}

// PinEntry_List is a list of PinEntry.
type PinEntry_List struct{ capnp.List }

// NewPinEntry creates a new list of PinEntry.
func NewPinEntry_List(s *capnp.Segment, sz int32) (PinEntry_List, error) {
	l, err := capnp.NewCompositeList(s, capnp.ObjectSize{DataSize: 0, PointerCount: 1}, sz)
	return PinEntry_List{l}, err
}

func (s PinEntry_List) At(i int) PinEntry { return PinEntry{s.List.Struct(i)} }

func (s PinEntry_List) Set(i int, v PinEntry) error { return s.List.SetStruct(i, v.Struct) }

func (s PinEntry_List) String() string {
	str, _ := text.MarshalList(0xdb74f7cf7bc815c6, s.List)
	return str
}

// PinEntry_Promise is a wrapper for a PinEntry promised by a client call.
type PinEntry_Promise struct{ *capnp.Pipeline }

func (p PinEntry_Promise) Struct() (PinEntry, error) {
	s, err := p.Pipeline.Struct()
	return PinEntry{s}, err
}

const schema_ba762188b0a6e4cf = "x\xda\\\xd0\xb1K\xebP\x14\x06\xf0\xef\xbbI_[" +
	"x\xef\xb5WT\xe8\xd4\x08]\x14\xb5\xd6I\\l\x07" +
	"\x07\x05!Wg\x85\x90\xa6% 7!\xb9(\xc5\x7f" +
	"@\\Eps\x13\xdc\xdc\xa4\x82\xa3\xe2\xd6\xcd\xc5\xc5" +
	"\xc1I\xd0\xd51\x92\xc5\x8a\xd3\x81\x8f\xc3\xf9\x1d\xbej" +
	"\xbf-d\xe1\x06P\xa5\xc2\x9f\xec\xec\xc3L\xbf\xec\xec" +
	"\x9eC\xd5(\xb2\xd1\xeb\xe5\xf5\xf1\xcc\xc1-\xec\" " +
	";or+\x9f\x1b\x87`\xf60\xf5x4\xfa4\xcf" +
	"\x905\x8e\xf7\x0a,\x02\xad\xab\x09\xcaaQ\x0e\xeb\xf2" +
	"}\x0d\xcc|\xcf\xf4\xd2\xa6\xef\x89X\xc7\xcd8\xd4:" +
	"H\x16}/\xd6qe\xd5\x0d\xb5K\xaa\x92e\x036" +
	"\x019\xbb\x0c\xa8\x86E\xb5$(\xd9\x9ed\x1e.l" +
	"\x02j\xde\xa2Z\x11\xac\x87:\xea\x06,C\xb0\x0cf" +
	"a\xea\xe6\x07\xbb\x00H\x08\xf2\x87g\xfd\xf6rn]" +
	"\x9b\x84\x83\x1c\xb5)\xb2\xbd\xd3\x0bu\xf7tr\x0fe" +
	"\x0bv\x1a\xe4_\xa0\xc5mf\x1d'\x0du\x7f?\xb0" +
	"\x9d@\x9bd\xe0\xf4\xa2\xc4\xf1\x1c?H\x8c\x17j\xc7" +
	"\x8f\xb4\x09\xb4qt\xd4e\x00(\xfb\xfb\xff\x7fsy" +
	"\x91\x16UC\xb0\x12\x87:\xe5\x7f\xd0\xb5\xc8\xea\xb8Z" +
	"0\x0f\xbf\x02\x00\x00\xff\xff\x05\xdde\x03"

func init() {
	schemas.Register(schema_ba762188b0a6e4cf,
		0x985d53e01674ee95,
		0xdb74f7cf7bc815c6)
}


================================================
FILE: catfs/core/coreutils.go
================================================
package core

import (
	"errors"
	"fmt"
	"path"
	"strings"
	"time"

	e "github.com/pkg/errors"
	ie "github.com/sahib/brig/catfs/errors"
	n "github.com/sahib/brig/catfs/nodes"
	h "github.com/sahib/brig/util/hashlib"
	log "github.com/sirupsen/logrus"
)

var (
	// ErrIsGhost is returned by Remove() when calling it on a ghost.
	ErrIsGhost = errors.New("Is a ghost")
)

// mkdirParents takes the dirname of repoPath and makes sure all intermediate
// directories are created. The last directory will be returned.
//
// If any directory exists already, it will not be touched.
// You can also think of it as `mkdir -p`.
func mkdirParents(lkr *Linker, repoPath string) (*n.Directory, error) {
	repoPath = path.Clean(repoPath)

	elems := strings.Split(repoPath, "/")
	for idx := 0; idx < len(elems)-1; idx++ {
		dirname := strings.Join(elems[:idx+1], "/")
		if dirname == "" {
			dirname = "/"
		}

		dir, err := Mkdir(lkr, dirname, false)
		if err != nil {
			return nil, err
		}

		// Return it, if it's the last path component:
		if idx+1 == len(elems)-1 {
			return dir, nil
		}
	}

	return nil, fmt.Errorf("Empty path given")
}

// Mkdir creates the directory at repoPath and any intermediate directories if
// createParents is true. It will fail if there is already a file at `repoPath`
// and it is not a directory.
func Mkdir(lkr *Linker, repoPath string, createParents bool) (dir *n.Directory, err error) {
	dirname, basename := path.Split(repoPath)

	// Take special care of the root node:
	if basename == "" {
		return lkr.Root()
	}

	// Check if the parent exists:
	parent, lerr := lkr.LookupDirectory(dirname)
	if lerr != nil && !ie.IsNoSuchFileError(lerr) {
		err = e.Wrap(lerr, "dirname lookup failed")
		return
	}

	err = lkr.Atomic(func() (bool, error) {
		// If it's nil, we might need to create it:
		if parent == nil {
			if !createParents {
				return false, ie.NoSuchFile(dirname)
			}

			parent, err = mkdirParents(lkr, repoPath)
			if err != nil {
				return true, err
			}
		}

		child, err := parent.Child(lkr, basename)
		if err != nil {
			return true, err
		}

		if child != nil {
			switch child.Type() {
			case n.NodeTypeDirectory:
				// Nothing to do really. Return the old child.
				dir = child.(*n.Directory)
				return false, nil
			case n.NodeTypeFile:
				return true, fmt.Errorf("`%s` exists and is a file", repoPath)
			case n.NodeTypeGhost:
				// Remove the ghost and continue with adding:
				if err := parent.RemoveChild(lkr, child); err != nil {
					return true, err
				}
			default:
				return true, ie.ErrBadNode
			}
		}

		// Create it then!
		dir, err = n.NewEmptyDirectory(lkr, parent, basename, lkr.owner, lkr.NextInode())
		if err != nil {
			return true, err
		}

		if err := lkr.StageNode(dir); err != nil {
			return true, e.Wrapf(err, "stage dir")
		}

		log.Debugf("mkdir: %s", dirname)
		return false, nil
	})

	return
}

// Remove removes a single node from a directory.
// `nd` is the node that shall be removed and may not be root.
// The parent directory is returned.
func Remove(lkr *Linker, nd n.ModNode, createGhost, force bool) (parentDir *n.Directory, ghost *n.Ghost, err error) {
	if !force && nd.Type() == n.NodeTypeGhost {
		err = ErrIsGhost
		return
	}

	parentDir, err = n.ParentDirectory(lkr, nd)
	if err != nil {
		return
	}

	// We shouldn't delete the root directory
	// (only directory with a parent)
	if parentDir == nil {
		err = fmt.Errorf("refusing to delete root")
		return
	}

	err = lkr.Atomic(func() (bool, error) {
		if err := parentDir.RemoveChild(lkr, nd); err != nil {
			return true, fmt.Errorf("failed to remove child: %v", err)
		}

		lkr.MemIndexPurge(nd)

		if err := lkr.StageNode(parentDir); err != nil {
			return true, err
		}

		if createGhost {
			newGhost, err := n.MakeGhost(nd, lkr.NextInode())
			if err != nil {
				return true, err
			}

			if err := parentDir.Add(lkr, newGhost); err != nil {
				return true, err
			}

			if err := lkr.StageNode(newGhost); err != nil {
				return true, err
			}

			ghost = newGhost
			return false, nil
		}

		return false, nil
	})

	return
}

// prepareParent tries to figure out the correct parent directory when attempting
// to move `nd` to `dstPath`. It also removes any nodes that are "in the way" if possible.
func prepareParent(lkr *Linker, nd n.ModNode, dstPath string) (*n.Directory, error) {
	// Check if the destination already exists:
	destNode, err := lkr.LookupModNode(dstPath)
	if err != nil && !ie.IsNoSuchFileError(err) {
		return nil, err
	}

	if destNode == nil {
		// No node at this place yet, attempt to look it up.
		return lkr.LookupDirectory(path.Dir(dstPath))
	}

	switch destNode.Type() {
	case n.NodeTypeDirectory:
		// Move inside of this directory.
		// Check if there is already a file
		destDir, ok := destNode.(*n.Directory)
		if !ok {
			return nil, ie.ErrBadNode
		}

		child, err := destDir.Child(lkr, nd.Name())
		if err != nil {
			return nil, err
		}

		// Oh, something is in there?
		if child != nil {
			if nd.Type() == n.NodeTypeFile {
				return nil, fmt.Errorf(
					"cannot overwrite a directory (%s) with a file (%s)",
					destNode.Path(),
					child.Path(),
				)
			}

			childDir, ok := child.(*n.Directory)
			if !ok {
				return nil, ie.ErrBadNode
			}

			if childDir.Size() > 0 {
				return nil, fmt.Errorf(
					"cannot move over: %s; directory is not empty",
					child.Path(),
				)
			}

			// Okay, there is an empty directory. Let's remove it to
			// replace it with our source node.
			log.Warningf("Remove child dir: %v", childDir)
			if _, _, err := Remove(lkr, childDir, false, false); err != nil {
				return nil, err
			}
		}

		return destDir, nil
	case n.NodeTypeFile:
		log.Infof("Remove file: %v", destNode.Path())
		parentDir, _, err := Remove(lkr, destNode, false, false)
		return parentDir, err
	case n.NodeTypeGhost:
		// It is already a ghost. Overwrite it and do not create a new one.
		log.Infof("Remove ghost: %v", destNode.Path())
		parentDir, _, err := Remove(lkr, destNode, false, true)
		return parentDir, err
	default:
		return nil, ie.ErrBadNode
	}
}

// Copy copies the node `nd` to the path at `dstPath`.
func Copy(lkr *Linker, nd n.ModNode, dstPath string) (newNode n.ModNode, err error) {
	// Forbid copying a node into one of its own subdirectories.
	if nd.Path() == dstPath {
		err = fmt.Errorf("source and dest are the same file: %v", dstPath)
		return
	}

	if strings.HasPrefix(path.Dir(dstPath), nd.Path()) {
		err = fmt.Errorf(
			"cannot copy `%s` into its own subdir `%s`",
			nd.Path(),
			dstPath,
		)
		return
	}

	err = lkr.Atomic(func() (bool, error) {
		parentDir, err := prepareParent(lkr, nd, dstPath)
		if err != nil {
			return true, e.Wrapf(err, "handle parent")
		}

		// We might copy something into a directory.
		// In this case, dstPath specifies the directory we move into,
		// not the file we moved to (which we need here)
		if parentDir.Path() == dstPath {
			dstPath = path.Join(parentDir.Path(), path.Base(nd.Path()))
		}

		// And add it to the right destination dir:
		newNode = nd.Copy(lkr.NextInode())
		newNode.SetName(path.Base(dstPath))
		if err := newNode.SetParent(lkr, parentDir); err != nil {
			return true, e.Wrapf(err, "set parent")
		}

		if err := newNode.NotifyMove(lkr, parentDir, newNode.Path()); err != nil {
			return true, e.Wrapf(err, "notify move")
		}

		return false, lkr.StageNode(newNode)
	})

	return
}

// Move moves the node `nd` to the path at `dstPath` and leaves
// a ghost at the old place.
func Move(lkr *Linker, nd n.ModNode, dstPath string) error {
	// Forbid moving a node into one of its own subdirectories.
	if nd.Type() == n.NodeTypeGhost {
		return errors.New("cannot move ghosts")
	}

	if nd.Path() == dstPath {
		return fmt.Errorf("source and dest are the same file: %v", dstPath)
	}

	if strings.HasPrefix(path.Dir(dstPath), nd.Path()) {
		return fmt.Errorf(
			"cannot move `%s` into its own subdir `%s`",
			nd.Path(),
			dstPath,
		)
	}

	return lkr.Atomic(func() (bool, error) {
		parentDir, err := prepareParent(lkr, nd, dstPath)
		if err != nil {
			return true, err
		}

		// Remove the old node:
		oldPath := nd.Path()
		_, ghost, err := Remove(lkr, nd, true, true)
		if err != nil {
			return true, e.Wrapf(err, "remove old")
		}

		if parentDir.Path() == dstPath {
			dstPath = path.Join(parentDir.Path(), path.Base(oldPath))
		}

		// The node needs to be told that its path changed,
		// since it might need to change its hash value now.
		if err := nd.NotifyMove(lkr, parentDir, dstPath); err != nil {
			return true, e.Wrapf(err, "notify move")
		}

		err = n.Walk(lkr, nd, true, func(child n.Node) error {
			return e.Wrapf(lkr.StageNode(child), "stage node")
		})

		if err != nil {
			return true, err
		}

		if err := lkr.AddMoveMapping(nd.Inode(), ghost.Inode()); err != nil {
			return true, e.Wrapf(err, "add move mapping")
		}

		return false, nil
	})
}

// StageFromFileNode is a convenience helper that will call Stage() with all necessary params from `f`.
func StageFromFileNode(lkr *Linker, f *n.File) (*n.File, error) {
	return Stage(
		lkr,
		f.Path(),
		f.ContentHash(),
		f.BackendHash(),
		f.Size(),
		f.CachedSize(),
		f.Key(),
		f.ModTime(),
		f.IsRaw(),
	)
}

// Stage adds a file to brig's DAG. This is the lesser version, since it does
// not use cachedSize. Avoid it if you can; use StageWithFullInfo a couple of lines below!
// TODO: Rename Stage calls everywhere (especially in tests), then
//       rename Stage -> StageWithoutCacheSize and StageWithFullInfo -> Stage.
func Stage(
	lkr *Linker,
	repoPath string,
	contentHash,
	backendHash h.Hash,
	size uint64,
	cachedSize int64,
	key []byte,
	modTime time.Time,
	isRaw bool,
) (file *n.File, err error) {
	node, lerr := lkr.LookupNode(repoPath)
	if lerr != nil && !ie.IsNoSuchFileError(lerr) {
		err = lerr
		return
	}

	err = lkr.Atomic(func() (bool, error) {
		if node != nil {
			if node.Type() == n.NodeTypeGhost {
				ghostParent, err := n.ParentDirectory(lkr, node)
				if err != nil {
					return true, err
				}

				if ghostParent == nil {
					return true, fmt.Errorf(
						"bug: %s has no parent. Is root a ghost?",
						node.Path(),
					)
				}

				if err := ghostParent.RemoveChild(lkr, node); err != nil {
					return true, err
				}

				// Act like there was no previous node.
				// New node will have a different Inode.
				file = nil
			} else {
				var ok bool
				file, ok = node.(*n.File)
				if !ok {
					return true, ie.ErrBadNode
				}
			}
		}

		needRemove := false
		if file != nil {
			// We know this file already.
			log.WithFields(log.Fields{"file": repoPath}).Info("File exists; modifying.")
			needRemove = true

			if file.BackendHash().Equal(backendHash) {
				log.Debugf("Hash was not modified. Not doing any update.")
				return false, nil
			}
		} else {
			parent, err := mkdirParents(lkr, repoPath)
			if err != nil {
				return true, err
			}

			// Create a new file at specified path:
			file = n.NewEmptyFile(parent, path.Base(repoPath), lkr.owner, lkr.NextInode())
		}

		parentDir, err := n.ParentDirectory(lkr, file)
		if err != nil {
			return true, err
		}

		if parentDir == nil {
			return true, fmt.Errorf("%s has no parent yet (BUG)", repoPath)
		}

		if needRemove {
			// Remove the child before changing the hash:
			if err := parentDir.RemoveChild(lkr, file); err != nil {
				return true, err
			}
		}

		file.SetSize(size)
		file.SetCachedSize(cachedSize)
		file.SetModTime(modTime)
		file.SetContent(lkr, contentHash)
		file.SetBackend(lkr, backendHash)
		file.SetKey(key)
		file.SetUser(lkr.owner)
		file.SetIsRaw(isRaw)

		// Add it again when the hash was changed.
		log.Debugf("adding %s (%v)", file.Path(), file.BackendHash())
		if err := parentDir.Add(lkr, file); err != nil {
			return true, err
		}

		if err := lkr.StageNode(file); err != nil {
			return true, err
		}

		return false, nil
	})

	return
}

// Log will call `fn` on every commit we currently have, starting
// with the most current one (CURR, then HEAD, ...).
// If `fn` returns an error, the iteration is stopped.
func Log(lkr *Linker, start *n.Commit, fn func(cmt *n.Commit) error) error {
	curr := start
	for curr != nil {
		if err := fn(curr); err != nil {
			return err
		}

		parent, err := curr.Parent(lkr)
		if err != nil {
			return err
		}

		if parent == nil {
			break
		}

		parentCmt, ok := parent.(*n.Commit)
		if !ok {
			return ie.ErrBadNode
		}

		curr = parentCmt
	}

	return nil
}


================================================
FILE: catfs/core/coreutils_test.go
================================================
package core

import (
	"path"
	"sort"
	"strings"
	"testing"
	"time"

	ie "github.com/sahib/brig/catfs/errors"
	n "github.com/sahib/brig/catfs/nodes"
	h "github.com/sahib/brig/util/hashlib"
	"github.com/stretchr/testify/require"
)

func TestMkdir(t *testing.T) {
	WithDummyLinker(t, func(lkr *Linker) {
		// Test nested creation without -p like flag:
		dir, err := Mkdir(lkr, "/deep/nested", false)
		if err == nil || dir != nil {
			t.Fatalf("Nested mkdir without -p should have failed: %v", err)
		}

		AssertDir(t, lkr, "/", true)
		AssertDir(t, lkr, "/deep", false)
		AssertDir(t, lkr, "/deep/nested", false)

		// Test mkdir -p like creating of nested dirs:
		dir, err = Mkdir(lkr, "/deep/nested", true)
		if err != nil {
			t.Fatalf("mkdir -p failed: %v", err)
		}

		AssertDir(t, lkr, "/", true)
		AssertDir(t, lkr, "/deep", true)
		AssertDir(t, lkr, "/deep/nested", true)

		// Attempt to mkdir the same directory once more:
		dir, err = Mkdir(lkr, "/deep/nested", true)
		if err != nil {
			t.Fatalf("second mkdir -p failed: %v", err)
		}

		// Also without -p, it should just return the respective dir.
		// (i.e. work like LookupDirectory)
		// Note: This is a difference to the traditional mkdir.
		dir, err = Mkdir(lkr, "/deep/nested", false)
		if err != nil {
			t.Fatalf("second mkdir without -p failed: %v", err)
		}

		// An attempt at creating the root should not fail;
		// it should just work like lkr.LookupDirectory("/").
		dir, err = Mkdir(lkr, "/", false)
		if err != nil {
			t.Fatalf("mkdir root failed (without -p): %v", err)
		}

		root, err := lkr.Root()
		if err != nil {
			t.Fatalf("Failed to retrieve root: %v", err)
		}

		if !dir.TreeHash().Equal(root.TreeHash()) {
			t.Fatal("Root and mkdir('/') differ!")
		}

		// Try to mkdir over a regular file:
		MustTouch(t, lkr, "/cat.png", 1)

		// This should fail, since we cannot create it.
		dir, err = Mkdir(lkr, "/cat.png", false)
		if err == nil {
			t.Fatal("Creating directory on file should have failed!")
		}

		// Same even for -p
		dir, err = Mkdir(lkr, "/cat.png", true)
		if err == nil {
			t.Fatal("Creating directory on file should have failed!")
		}
	})
}

func TestRemove(t *testing.T) {
	WithDummyLinker(t, func(lkr *Linker) {
		dir, err := Mkdir(lkr, "/some/nested/directory", true)
		if err != nil {
			t.Fatalf("Failed to mkdir a nested directory: %v", err)
		}

		AssertDir(t, lkr, "/some/nested/directory", true)

		path := "/some/nested/directory/cat.png"
		MustTouch(t, lkr, path, 1)

		// Check file removal with ghost creation:
		file, err := lkr.LookupFile(path)
		if err != nil {
			t.Fatalf("Failed to lookup nested file: %v", err)
		}

		// Fill in a dummy file hash, so we get a ghost instance
		parentDir, _, err := Remove(lkr, file, true, false)
		if err != nil {
			t.Fatalf("Remove failed: %v", err)
		}

		if !parentDir.TreeHash().Equal(dir.TreeHash()) {
			t.Fatalf("Hashes of %s and %s differ", dir.Path(), parentDir.Path())
		}

		// Check that a ghost was created for the removed file:

		ghost, err := lkr.LookupGhost(path)
		if err != nil {
			t.Fatalf("Looking up ghost failed: %v", err)
		}

		oldFile, err := ghost.OldFile()
		if err != nil {
			t.Fatalf("Failed to retrieve old file from ghost: %v", err)
		}

		if !oldFile.TreeHash().Equal(file.TreeHash()) {
			t.Fatal("Old file and original file hashes differ!")
		}

		// Check directory removal:
		nestedDir, err := lkr.LookupDirectory("/some/nested")
		if err != nil {
			t.Fatalf("Lookup on /some/nested failed: %v", err)
		}

		nestedParentDir, err := nestedDir.Parent(lkr)
		if err != nil {
			t.Fatalf("Getting parent of /some/nested failed: %v", err)
		}

		// Just fill in a dummy moved to ref, to get a ghost.
		parentDir, ghost, err = Remove(lkr, nestedDir, true, false)
		if err != nil {
			t.Fatalf("Directory removal failed: %v", err)
		}

		if ghost == nil || ghost.Type() != n.NodeTypeGhost {
			t.Fatalf("Ghost node does not look like a ghost: %v", ghost)
		}

		if !parentDir.TreeHash().Equal(nestedParentDir.TreeHash()) {
			t.Fatalf("Hashes of %s and %s differ", nestedParentDir.Path(), parentDir.Path())
		}
	})
}

func TestRemoveGhost(t *testing.T) {
	WithDummyLinker(t, func(lkr *Linker) {
		file := MustTouch(t, lkr, "/x", 1)
		par, err := n.ParentDirectory(lkr, file)
		if err != nil {
			t.Fatalf("Failed to get parent directory of /x: %v", err)
		}

		if err := par.RemoveChild(lkr, file); err != nil {
			t.Fatalf("Removing child /x failed: %v", err)
		}

		ghost, err := n.MakeGhost(file, 42)
		if err != nil {
			t.Fatalf("Failed to summon ghost: %v", err)
		}

		if err := par.Add(lkr, ghost); err != nil {
			t.Fatalf("Re-adding ghost failed: %v", err)
		}

		if err := lkr.StageNode(ghost); err != nil {
			t.Fatalf("Staging ghost failed: %v", err)
		}

		// Try to remove a ghost:
		if _, _, err := Remove(lkr, ghost, true, false); err != ErrIsGhost {
			t.Fatalf("Removing ghost failed differently than expected: %v", err)
		}
	})
}

func TestRemoveExistingGhost(t *testing.T) {
	WithDummyLinker(t, func(lkr *Linker) {
		nd := MustTouch(t, lkr, "/x", 1)
		_, ghost, err := Remove(lkr, nd, true, true)
		require.Nil(t, err)

		_, _, err = Remove(lkr, ghost, false, true)
		require.Nil(t, err)

		_, _, err = Remove(lkr, ghost, true, true)
		require.NotNil(t, err)
	})
}

func moveValidCheck(t *testing.T, lkr *Linker, srcPath, dstPath string) {
	nd, err := lkr.LookupNode(srcPath)

	if err == nil {
		if nd.Type() != n.NodeTypeGhost {
			t.Fatalf("Source node still exists! (%v): %v", srcPath, nd.Type())
		}
	} else if !ie.IsNoSuchFileError(err) {
		t.Fatalf("Looking up source node failed: %v", err)
	}

	lkDestNode, err := lkr.LookupNode(dstPath)
	if err != nil {
		t.Fatalf("Looking up dest path failed: %v", err)
	}

	if lkDestNode.Path() != dstPath {
		t.Fatalf("Dest node and dest path differ: %v <-> %v", lkDestNode.Path(), dstPath)
	}
}

func moveInvalidCheck(t *testing.T, lkr *Linker, srcPath, dstPath string) {
	node, err := lkr.LookupNode(srcPath)
	if err != nil {
		t.Fatalf("Source node vanished during erroneous move: %v", err)
	}

	if node.Type() == n.NodeTypeGhost {
		t.Fatalf("Source node was converted to a ghost: %v", node.Path())
	}
}

var moveAndCopyTestCases = []struct {
	name        string
	isErrorCase bool
	setup       func(t *testing.T, lkr *Linker) (n.ModNode, string)
}{
	{
		name:        "basic",
		isErrorCase: false,
		setup: func(t *testing.T, lkr *Linker) (n.ModNode, string) {
			MustMkdir(t, lkr, "/a/b/c")
			return MustTouch(t, lkr, "/a/b/c/x", 1), "/a/b/y"
		},
	}, {
		name:        "basic-directory",
		isErrorCase: false,
		setup: func(t *testing.T, lkr *Linker) (n.ModNode, string) {
			return MustMkdir(t, lkr, "/a/b/short"), "/a/b/looooong"
		},
	}, {
		name:        "basic-same-level",
		isErrorCase: false,
		setup: func(t *testing.T, lkr *Linker) (n.ModNode, string) {
			return MustTouch(t, lkr, "/a", 1), "/b"
		},
	}, {
		name:        "basic-root-to-sub",
		isErrorCase: false,
		setup: func(t *testing.T, lkr *Linker) (n.ModNode, string) {
			MustTouch(t, lkr, "/README.md", 1)
			MustMkdir(t, lkr, "/sub")
			return MustTouch(t, lkr, "/x", 1), "/sub"
		},
	}, {
		name:        "into-directory",
		isErrorCase: false,
		setup: func(t *testing.T, lkr *Linker) (n.ModNode, string) {
			MustMkdir(t, lkr, "/a/b/c")
			MustMkdir(t, lkr, "/a/b/d")
			return MustTouch(t, lkr, "/a/b/c/x", 1), "/a/b/d"
		},
	}, {
		name:        "into-nonempty-directory",
		isErrorCase: false,
		setup: func(t *testing.T, lkr *Linker) (n.ModNode, string) {
			MustMkdir(t, lkr, "/a/b/c")
			MustMkdir(t, lkr, "/a/b/d")
			MustTouch(t, lkr, "/a/b/d/y", 1)
			return MustTouch(t, lkr, "/a/b/c/x", 1), "/a/b/d"
		},
	}, {
		name:        "error-to-directory-contains-file",
		isErrorCase: true,
		setup: func(t *testing.T, lkr *Linker) (n.ModNode, string) {
			MustMkdir(t, lkr, "/src")
			MustMkdir(t, lkr, "/dst")
			MustTouch(t, lkr, "/dst/x", 1)
			return MustTouch(t, lkr, "/src/x", 1), "/dst"
		},
	}, {
		name:        "error-file-over-existing",
		isErrorCase: false,
		setup: func(t *testing.T, lkr *Linker) (n.ModNode, string) {
			MustMkdir(t, lkr, "/src")
			MustMkdir(t, lkr, "/dst")
			MustTouch(t, lkr, "/dst/x", 1)
			return MustTouch(t, lkr, "/src/x", 1), "/dst/x"
		},
	}, {
		name:        "error-file-over-ghost",
		isErrorCase: false,
		setup: func(t *testing.T, lkr *Linker) (n.ModNode, string) {
			MustMkdir(t, lkr, "/src")
			MustMkdir(t, lkr, "/dst")
			destFile := MustTouch(t, lkr, "/dst/x", 1)
			MustRemove(t, lkr, destFile)
			return MustTouch(t, lkr, "/src/x", 1), "/dst/x"
		},
	}, {
		name:        "error-src-equal-dst",
		isErrorCase: true,
		setup: func(t *testing.T, lkr *Linker) (n.ModNode, string) {
			return MustTouch(t, lkr, "/x", 1), "/x"
		},
	}, {
		name:        "error-into-own-subdir",
		isErrorCase: true,
		setup: func(t *testing.T, lkr *Linker) (n.ModNode, string) {
			// We should not be able to move "/dir" into itself.
			dir := MustMkdir(t, lkr, "/dir")
			MustTouch(t, lkr, "/dir/x", 1)
			return dir, "/dir/own"
		},
	},
}

func TestMoveSingle(t *testing.T) {
	// Cases to cover for move():
	// 1.        Dest exists:
	// 1.1.      Is a directory.
	// 1.1.1  E  This directory contains basename(src) and it is a file.
	// 1.1.2  E  This directory contains basename(src) and it is a non-empty dir.
	// 1.1.3  V  This directory contains basename(src) and it is an empty dir.
	// 2.        Dest does not exist.
	// 2.1    V  dirname(dest) exists and is a directory.
	// 2.2    E  dirname(dest) does not exist.
	// 2.3    E  dirname(dest) exists and is not a directory.
	// 3.     E  Overlap of src and dest paths (src in dest)

	// Checks for valid cases (V):
	// 1) src is gone.
	// 2) dest is the same node as before.
	// 3) dest has the correct path.

	// Checks for invalid cases (E):
	// 1) src is not gone.

	for _, tc := range moveAndCopyTestCases {
		t.Run(tc.name, func(t *testing.T) {
			WithDummyLinker(t, func(lkr *Linker) {
				// Setup src and dest dir with a file in it named like src.
				srcNd, dstPath := tc.setup(t, lkr)
				srcPath := srcNd.Path()

				if err := Move(lkr, srcNd, dstPath); err != nil {
					if tc.isErrorCase {
						moveInvalidCheck(t, lkr, srcPath, dstPath)
					} else {
						t.Fatalf("Move failed unexpectedly: %v", err)
					}
				} else {
					moveValidCheck(t, lkr, srcPath, dstPath)
				}
			})
		})
	}
}

func TestMoveDirectoryWithChild(t *testing.T) {
	WithDummyLinker(t, func(lkr *Linker) {
		MustMkdir(t, lkr, "/src")
		oldFile := MustTouch(t, lkr, "/src/x", 1)
		oldFile = oldFile.Copy(oldFile.Inode()).(*n.File)

		MustCommit(t, lkr, "before move")

		dir, err := lkr.LookupDirectory("/src")
		require.Nil(t, err)

		MustMove(t, lkr, dir, "/dst")
		MustCommit(t, lkr, "after move")

		file, err := lkr.LookupFile("/dst/x")
		require.Nil(t, err)
		require.Equal(t, h.TestDummy(t, 1), file.BackendHash())

		_, err = lkr.LookupGhost("/src")
		require.Nil(t, err)

		// This will resolve to the old file:
		_, err = lkr.LookupFile("/src/x")
		require.NotNil(t, err)
	})
}

func TestMoveDirectory(t *testing.T) {
	WithDummyLinker(t, func(lkr *Linker) {
		srcDir := MustMkdir(t, lkr, "/src")
		MustMkdir(t, lkr, "/src/sub")
		MustTouch(t, lkr, "/src/sub/x", 23)
		MustTouch(t, lkr, "/src/y", 23)

		dstDir := MustMove(t, lkr, srcDir, "/dst")

		expect := []string{
			"/dst/sub/x",
			"/dst/sub",
			"/dst/y",
			"/dst",
		}

		got := []string{}
		require.Nil(t, n.Walk(lkr, dstDir, true, func(child n.Node) error {
			got = append(got, child.Path())
			return nil
		}))

		sort.Strings(expect)
		sort.Strings(got)

		require.Equal(t, len(expect), len(got))
		for idx := range expect {
			if got[idx] != expect[idx] {
				t.Errorf(
					"Moved node child `%s` does not match `%s`",
					got[idx],
					expect[idx],
				)
			}
		}
	})
}

func TestMoveDirectoryWithGhosts(t *testing.T) {
	WithDummyLinker(t, func(lkr *Linker) {
		srcDir := MustMkdir(t, lkr, "/src")
		MustMkdir(t, lkr, "/src/sub")
		xFile := MustTouch(t, lkr, "/src/sub/x", 23)
		MustTouch(t, lkr, "/src/y", 23)
		MustMove(t, lkr, xFile, "/src/z")

		dstDir := MustMove(t, lkr, srcDir, "/dst")

		expect := []string{
			"/dst",
			"/dst/sub",
			"/dst/sub/x",
			"/dst/y",
			"/dst/z",
		}

		// Be evil and clear the mem cache in order to check if all changes
		// were checked into the staging area.
		lkr.MemIndexClear()

		got := []string{}
		require.Nil(t, n.Walk(lkr, dstDir, true, func(child n.Node) error {
			got = append(got, child.Path())
			return nil
		}))

		// Check if the moved directory contains the right paths:
		sort.Strings(got)
		for idx, expectPath := range expect {
			if expectPath != got[idx] {
				t.Fatalf("%d: %s != %s", idx, expectPath, got[idx])
			}
		}

		ghost, err := lkr.LookupNode(got[2])
		require.Nil(t, err)

		status, err := lkr.Status()
		require.Nil(t, err)
		require.Equal(t, "/src/sub/x", ghost.(*n.Ghost).OldNode().Path())

		twin, _, err := lkr.MoveMapping(status, ghost)
		require.Nil(t, err)
		require.Equal(t, "/dst/z", twin.Path())
	})
}

func TestStage(t *testing.T) {
	WithDummyLinker(t, func(lkr *Linker) {
		// Initial stage of the file:
		key := make([]byte, 32)

		contentHash1 := h.TestDummy(t, 1)
		backendHash1 := h.TestDummy(t, 1)
		file, err := Stage(
			lkr,
			"/photos/moose.png",
			contentHash1,
			backendHash1,
			2,
			-1,
			key,
			time.Now(),
			false,
		)
		if err != nil {
			t.Fatalf("Adding of /photos/moose.png failed: %v", err)
		}

		contentHash2 := h.TestDummy(t, 2)
		backendHash2 := h.TestDummy(t, 2)
		file, err = Stage(
			lkr,
			"/photos/moose.png",
			contentHash2,
			backendHash2,
			3,
			-1,
			key,
			time.Now(),
			false,
		)
		if err != nil {
			t.Fatalf("Adding of /photos/moose.png failed: %v", err)
		}

		if !file.BackendHash().Equal(h.TestDummy(t, 2)) {
			t.Fatalf(
				"File content after update is not what's advertised: %v",
				file.TreeHash(),
			)
		}
	})
}

func TestStageDirOverGhost(t *testing.T) {
	WithDummyLinker(t, func(lkr *Linker) {
		empty := MustMkdir(t, lkr, "/empty")
		MustMove(t, lkr, empty, "/moved_empty")
		MustMkdir(t, lkr, "/empty")
		dir, err := lkr.LookupDirectory("/empty")
		require.Nil(t, err)

		require.Equal(t, dir.Path(), "/empty")
		if dir.Type() != n.NodeTypeDirectory {
			t.Fatalf("/empty is not a directory")
		}
	})
}

func TestCopy(t *testing.T) {
	for _, tc := range moveAndCopyTestCases {
		t.Run(tc.name, func(t *testing.T) {

			WithDummyLinker(t, func(lkr *Linker) {
				// Setup src and dest dir with a file in it named like src.
				srcNd, dstPath := tc.setup(t, lkr)
				srcPath := srcNd.Path()

				newNd, err := Copy(lkr, srcNd, dstPath)
				if newNd != nil {
					if !strings.HasPrefix(newNd.Path(), dstPath) {
						t.Fatalf(
							"Node was copied to wrong path: %v (want %v)",
							newNd.Path(),
							dstPath,
						)
					}

					// Make sure the new copy is reachable from parent:
					par, err := lkr.LookupDirectory(path.Dir(newNd.Path()))
					if err != nil {
						t.Fatalf("Failed to lookup parent: %v", err)
					}

					newChildNd, err := par.Child(lkr, newNd.Name())
					if err != nil {
						t.Fatalf("Failed to get base path: %v", err)
					}

					newNd = newChildNd.(n.ModNode)
				}

				if oldNd, lookErr := lkr.LookupNode(srcPath); oldNd == nil || lookErr != nil {
					t.Fatalf("source node does not exist or is not accessible: %v %v", err, tc.isErrorCase)
				}

				if err != nil {
					if tc.isErrorCase {
						node, err := lkr.LookupNode(srcPath)
						if err != nil {
							t.Fatalf("Source node vanished during erroneous copy: %v", err)
						}

						if node.Type() == n.NodeTypeGhost {
							t.Fatalf("Source node was converted to a ghost: %v", node.Path())
						}
					} else {
						t.Fatalf("Copy failed unexpectedly: %v", err)
					}

					// No need to test more.
					return
				}

				if tc.isErrorCase {
					t.Fatalf("test should have failed")
				}

				if newNd == nil {
					t.Fatalf("Dest node does not exist after copy: %v", err)
				}

				if !newNd.BackendHash().Equal(srcNd.BackendHash()) {
					t.Logf("Content of src and dst differ after copy")
					t.Logf("WANT: %v", srcNd.BackendHash())
					t.Logf("GOT : %v", newNd.BackendHash())
					t.Fatalf("Check Copy()")
				}

				if newNd.Inode() < srcNd.Inode() {
					t.Fatalf("New node's inode is smaller than the source's")
				}

				// Sanity check: do not rely on Copy() to return a valid, staged node.
				// Check if we can look it up after the Copy too.
				nd, err := lkr.LookupNode(newNd.Path())
				require.Nil(t, err)
				require.NotNil(t, nd)
			})
		})
	}
}


================================================
FILE: catfs/core/gc.go
================================================
package core

import (
	"github.com/sahib/brig/catfs/db"
	ie "github.com/sahib/brig/catfs/errors"
	n "github.com/sahib/brig/catfs/nodes"
	h "github.com/sahib/brig/util/hashlib"
	log "github.com/sirupsen/logrus"
)

// GarbageCollector implements a small mark & sweep garbage collector.
// It exists more for the sake of fault tolerance than being an
// essential part of brig. This is different from the IPFS garbage collector.
type GarbageCollector struct {
	lkr      *Linker
	kv       db.Database
	notifier func(nd n.Node) bool
	markMap  map[string]struct{}
}

// NewGarbageCollector will return a new GC, operating on `lkr` and `kv`.
// It will call `kc` on every node that is about to be collected;
// if `kc` returns false, the node is kept.
func NewGarbageCollector(lkr *Linker, kv db.Database, kc func(nd n.Node) bool) *GarbageCollector {
	return &GarbageCollector{
		lkr:      lkr,
		kv:       kv,
		notifier: kc,
	}
}

func (gc *GarbageCollector) markMoveMap(key []string) error {
	keys, err := gc.kv.Keys(key...)
	if err != nil {
		return err
	}

	for _, key := range keys {
		data, err := gc.kv.Get(key...)
		if err != nil {
			return err
		}

		node, _, err := gc.lkr.parseMoveMappingLine(string(data))
		if err != nil {
			return err
		}

		if node != nil {
			gc.markMap[node.TreeHash().B58String()] = struct{}{}
		}
	}

	return nil
}

func (gc *GarbageCollector) mark(cmt *n.Commit, recursive bool) error {
	if cmt == nil {
		return nil
	}

	root, err := gc.lkr.DirectoryByHash(cmt.Root())
	if err != nil {
		return err
	}

	gc.markMap[cmt.TreeHash().B58String()] = struct{}{}
	err = n.Walk(gc.lkr, root, true, func(child n.Node) error {
		gc.markMap[child.TreeHash().B58String()] = struct{}{}
		return nil
	})

	if err != nil {
		return err
	}

	parent, err := cmt.Parent(gc.lkr)
	if err != nil {
		return err
	}

	if recursive && parent != nil {
		parentCmt, ok := parent.(*n.Commit)
		if !ok {
			return ie.ErrBadNode
		}

		return gc.mark(parentCmt, recursive)
	}

	return nil
}

func (gc *GarbageCollector) sweep(prefix []string) (int, error) {
	removed := 0

	return removed, gc.lkr.AtomicWithBatch(func(batch db.Batch) (bool, error) {
		keys, err := gc.kv.Keys(prefix...)
		if err != nil {
			return hintRollback(err)
		}

		for _, key := range keys {
			b58Hash := key[len(key)-1]
			if _, ok := gc.markMap[b58Hash]; ok {
				continue
			}

			hash, err := h.FromB58String(b58Hash)
			if err != nil {
				return hintRollback(err)
			}

			node, err := gc.lkr.NodeByHash(hash)
			if err != nil {
				return hintRollback(err)
			}

			if node == nil {
				continue
			}

			// Allow the gc caller to check if this
			// node should really be deleted.
			if gc.notifier != nil && !gc.notifier(node) {
				continue
			}

			// Actually get rid of the node:
			gc.lkr.MemIndexPurge(node)

			batch.Erase(key...)
			removed++
		}

		return false, nil
	})
}

func (gc *GarbageCollector) findAllMoveLocations(head *n.Commit) ([][]string, error) {
	locations := [][]string{
		{"stage", "moves"},
	}

	for {
		parent, err := head.Parent(gc.lkr)
		if err != nil {
			return nil, err
		}

		if parent == nil {
			break
		}

		parentCmt, ok := parent.(*n.Commit)
		if !ok {
			return nil, ie.ErrBadNode
		}

		head = parentCmt
		location := []string{"moves", head.TreeHash().B58String()}
		locations = append(locations, location)
	}

	return locations, nil
}

// Run will trigger a GC run. If `allObjects` is false,
// only the staging commit will be checked. Otherwise
// all objects in the key value store are checked.
func (gc *GarbageCollector) Run(allObjects bool) error {
	gc.markMap = make(map[string]struct{})
	head, err := gc.lkr.Status()
	if err != nil {
		return err
	}

	if err := gc.mark(head, allObjects); err != nil {
		return err
	}

	// Staging might contain moved files that are not reachable anymore,
	// but still are referenced by the move mapping.
	// Keep them for now, they will die most likely on MakeCommit()
	moveMapLocations := [][]string{
		{"stage", "moves"},
	}

	if allObjects {
		moveMapLocations, err = gc.findAllMoveLocations(head)
		if err != nil {
			return err
		}
	}

	for _, location := range moveMapLocations {
		if err := gc.markMoveMap(location); err != nil {
			return err
		}
	}

	removed, err := gc.sweep([]string{"stage", "objects"})
	if err != nil {
		return err
	}

	log.Debugf("removed %d unreachable staging objects.", removed)

	if allObjects {
		removed, err = gc.sweep([]string{"objects"})
		if err != nil {
			return err
		}

		if removed > 0 {
			log.Warningf("removed %d unreachable permanent objects.", removed)
			log.Warningf("this might indicate a bug in catfs somewhere.")
		}
	}

	return nil
}
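The control flow of Run above (mark everything reachable, then sweep unmarked keys, with the notifier able to veto a deletion) can be sketched without any brig types. The snippet below is a simplified, hypothetical illustration; `markAndSweep` and the plain map standing in for the key/value store are inventions of the sketch, not brig's API.

```go
package main

import "fmt"

// markAndSweep mirrors GarbageCollector.Run in miniature: mark all
// reachable hashes first, then sweep (delete) every unmarked key.
// The optional veto callback plays the role of the gc notifier: it
// may return false to keep a node that would otherwise be removed.
func markAndSweep(store map[string]string, reachable []string, veto func(key string) bool) int {
	marked := make(map[string]struct{}, len(reachable))
	for _, h := range reachable {
		marked[h] = struct{}{}
	}

	removed := 0
	for key := range store {
		if _, ok := marked[key]; ok {
			continue
		}
		if veto != nil && !veto(key) {
			continue
		}
		// Deleting during range is safe for Go maps.
		delete(store, key)
		removed++
	}
	return removed
}

func main() {
	store := map[string]string{
		"hash-root": "directory node",
		"hash-a":    "file node",
		"hash-old":  "orphaned object",
	}

	removed := markAndSweep(store, []string{"hash-root", "hash-a"}, nil)
	fmt.Println(removed, len(store))
}
```

As in the real GC, only keys outside the marked set are touched, so a second run over the same store removes nothing.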


================================================
FILE: catfs/core/gc_test.go
================================================
package core

import (
	"testing"

	"github.com/sahib/brig/catfs/db"
	n "github.com/sahib/brig/catfs/nodes"
	"github.com/stretchr/testify/require"
)

func assertNodeExists(t *testing.T, kv db.Database, nd n.Node) {
	if _, err := kv.Get("stage", "objects", nd.TreeHash().B58String()); err != nil {
		t.Fatalf("Stage object %v does not exist: %v", nd, err)
	}
}

func TestGC(t *testing.T) {
	mdb := db.NewMemoryDatabase()
	lkr := NewLinker(mdb)

	killExpected := make(map[string]bool)
	killActual := make(map[string]bool)

	gc := NewGarbageCollector(lkr, mdb, func(nd n.Node) bool {
		killActual[nd.TreeHash().B58String()] = true
		return true
	})

	root, err := lkr.Root()
	if err != nil {
		t.Fatalf("Failed to retrieve the root: %v", err)
	}

	killExpected[root.TreeHash().B58String()] = true

	sub1, err := n.NewEmptyDirectory(lkr, root, "a", "u", 3)
	if err != nil {
		t.Fatalf("Creating sub1 failed: %v", err)
	}

	if err := lkr.StageNode(sub1); err != nil {
		t.Fatalf("Staging sub1 failed: %v", err)
	}

	killExpected[root.TreeHash().B58String()] = true
	killExpected[sub1.TreeHash().B58String()] = true

	sub2, err := n.NewEmptyDirectory(lkr, sub1, "b", "u", 4)
	if err != nil {
		t.Fatalf("Creating sub2 failed: %v", err)
	}

	if err := lkr.StageNode(sub2); err != nil {
		t.Fatalf("Staging sub2 failed: %v", err)
	}

	root, err = lkr.Root()
	require.Nil(t, err)

	if err := gc.Run(true); err != nil {
		t.Fatalf("gc run failed: %v", err)
	}

	if len(killExpected) != len(killActual) {
		t.Fatalf(
			"GC killed %d nodes, but should have killed %d",
			len(killActual),
			len(killExpected),
		)
	}

	for killedHash := range killActual {
		if _, ok := killExpected[killedHash]; !ok {
			t.Fatalf("%s was killed, but should not!", killedHash)
		}

		if _, err := mdb.Get("stage", "objects", killedHash); err != db.ErrNoSuchKey {
			t.Fatalf("GC did not wipe key from db: %v", killedHash)
		}
	}

	// Double check that the gc did not delete other stuff from the db:
	assertNodeExists(t, mdb, root)
	assertNodeExists(t, mdb, sub1)
	assertNodeExists(t, mdb, sub2)

	gc = NewGarbageCollector(lkr, mdb, func(nd n.Node) bool {
		t.Fatalf("Second gc run found something, first didn't")
		return true
	})

	if err := gc.Run(true); err != nil {
		t.Fatalf("Second gc run failed: %v", err)
	}

	if err := lkr.MakeCommit(n.AuthorOfStage, "some message"); err != nil {
		t.Fatalf("MakeCommit() failed: %v", err)
	}

	gc = NewGarbageCollector(lkr, mdb, func(nd n.Node) bool {
		t.Fatalf("Third gc run found something, previous runs didn't")
		return true
	})

	if err := gc.Run(true); err != nil {
		t.Fatalf("Third gc run failed: %v", err)
	}
}


================================================
FILE: catfs/core/linker.go
================================================
package core

// Layout of the key/value store:
//
// objects/<NODE_HASH>                   => NODE_METADATA
// tree/<FULL_NODE_PATH>                 => NODE_HASH
// index/<CMT_INDEX>                     => COMMIT_HASH
// inode/<INODE>                         => NODE_HASH
// moves/<INODE>                         => MOVE_INFO
// moves/overlay/<INODE>                 => MOVE_INFO
//
// stage/objects/<NODE_HASH>             => NODE_METADATA
// stage/tree/<FULL_NODE_PATH>           => NODE_HASH
// stage/STATUS                          => COMMIT_METADATA
// stage/moves/<INODE>                   => MOVE_INFO
// stage/moves/overlay/<INODE>           => MOVE_INFO
//
// stats/max-inode                       => UINT64
// refs/<REFNAME>                        => NODE_HASH
//
// Defined by caller:
//
// metadata/                             => BYTES (Caller defined data)
//
// NODE is either a Commit, a Directory or a File.
// FULL_NODE_PATH may contain slashes and in case of directories,
// it will contain a trailing slash.
//
// The following refs are defined by the system:
// HEAD -> Points to the latest finished commit, or nil.
// CURR -> Points to the staging commit.
//
// In git terminology, this file implements the following commands:
//
// - git add:    StageNode(): Create and Update Nodes.
// - git status: Status()
// - git commit: MakeCommit()
//
// All write operations are written in one batch or are rolled back
// on errors.

import (
	"encoding/binary"
	"fmt"
	"path"
	"runtime/debug"
	"strconv"
	"strings"
	"time"

	e "github.com/pkg/errors"
	"github.com/sahib/brig/catfs/db"
	ie "github.com/sahib/brig/catfs/errors"
	n "github.com/sahib/brig/catfs/nodes"
	h "github.com/sahib/brig/util/hashlib"
	"github.com/sahib/brig/util/trie"
	log "github.com/sirupsen/logrus"
	capnp "zombiezen.com/go/capnproto2"
)

// Linker implements the basic logic of brig's data model.
// It uses an underlying key/value database to
// store a Merkle-DAG with versioned metadata,
// similar to what git does internally.
type Linker struct {
	kv db.Database

	// root of the filesystem
	root *n.Directory

	// Path lookup trie
	ptrie *trie.Node

	// B58Hash to node
	index map[string]n.Node

	// UID to node
	inodeIndex map[uint64]n.Node

	// Cache for the linker owner.
	owner string
}

// NewLinker returns a new lkr, ready to use. It assumes the key value store
// is working and does no check on this.
func NewLinker(kv db.Database) *Linker {
	lkr := &Linker{kv: kv}
	lkr.MemIndexClear()
	return lkr
}

// MemIndexAdd adds `nd` to the in memory index.
func (lkr *Linker) MemIndexAdd(nd n.Node, updatePathIndex bool) {
	lkr.index[nd.TreeHash().B58String()] = nd
	lkr.inodeIndex[nd.Inode()] = nd

	if updatePathIndex {
		path := nd.Path()
		if nd.Type() == n.NodeTypeDirectory {
			path = appendDot(path)
		}
		lkr.ptrie.InsertWithData(path, nd)
	}
}

// MemIndexSwap updates an entry of the in memory index, by deleting
// the old entry referenced by oldHash (may be nil). This is necessary
// to ensure that old hashes do not resolve to the new, updated instance.
// If the old instance is needed, it will be loaded as a new instance.
// You should not need to call this function, except when implementing your own Nodes.
func (lkr *Linker) MemIndexSwap(nd n.Node, oldHash h.Hash, updatePathIndex bool) {
	if oldHash != nil {
		delete(lkr.index, oldHash.B58String())
	}

	lkr.MemIndexAdd(nd, updatePathIndex)
}

// MemSetRoot sets the current root, but does not store it yet. It's supposed
// to be called after in-memory modifications. Only implementors of new Nodes
// might need to call this function.
func (lkr *Linker) MemSetRoot(root *n.Directory) {
	if lkr.root != nil {
		lkr.MemIndexSwap(root, lkr.root.TreeHash(), true)
	} else {
		lkr.MemIndexAdd(root, true)
	}

	lkr.root = root
}

// MemIndexPurge removes `nd` from the memory index.
func (lkr *Linker) MemIndexPurge(nd n.Node) {
	delete(lkr.inodeIndex, nd.Inode())
	delete(lkr.index, nd.TreeHash().B58String())
	lkr.ptrie.Lookup(nd.Path()).Remove()
}

// MemIndexClear resets the memory index to zero.
// This should not be called mid-flight in operations,
// but should be okay to call between atomic operations.
func (lkr *Linker) MemIndexClear() {
	lkr.ptrie = trie.NewNode()
	lkr.index = make(map[string]n.Node)
	lkr.inodeIndex = make(map[uint64]n.Node)
	lkr.root = nil
}

//////////////////////////
// COMMON NODE HANDLING //
//////////////////////////

// NextInode returns a unique identifier, used to identify a single node. You
// should not need to call this function, except when implementing your own nodes.
func (lkr *Linker) NextInode() uint64 {
	nodeCount, err := lkr.kv.Get("stats", "max-inode")
	if err != nil && err != db.ErrNoSuchKey {
		return 0
	}

	// nodeCount might be nil on startup:
	cnt := uint64(1)
	if nodeCount != nil {
		cnt = binary.BigEndian.Uint64(nodeCount) + 1
	}

	cntBuf := make([]byte, 8)
	binary.BigEndian.PutUint64(cntBuf, cnt)

	err = lkr.AtomicWithBatch(func(batch db.Batch) (bool, error) {
		batch.Put(cntBuf, "stats", "max-inode")
		return false, nil
	})

	if err != nil {
		return 0
	}

	return cnt
}

// FilesByContents checks what files are associated with the content hashes in
// `contents`. It returns a map of content hash b58 to file. This method is
// quite heavy and should not be used in loops. There is room for optimizations.
func (lkr *Linker) FilesByContents(contents []h.Hash) (map[string]*n.File, error) {
	keys, err := lkr.kv.Keys()
	if err != nil {
		return nil, err
	}

	result := make(map[string]*n.File)
	for _, key := range keys {
		// Filter non-node storage:
		fullKey := strings.Join(key, "/")
		if !strings.HasPrefix(fullKey, "objects") &&
			!strings.HasPrefix(fullKey, "stage/objects") {
			continue
		}

		data, err := lkr.kv.Get(key...)
		if err != nil {
			return nil, err
		}

		nd, err := n.UnmarshalNode(data)
		if err != nil {
			return nil, err
		}

		if nd.Type() != n.NodeTypeFile {
			continue
		}

		file, ok := nd.(*n.File)
		if !ok {
			return nil, ie.ErrBadNode
		}

		for _, content := range contents {
			if content.Equal(file.BackendHash()) {
				result[content.B58String()] = file
			}
		}
	}

	return result, nil
}

// loadNode loads an individual object by its hash from the object store. It
// will return nil if the hash is not there.
func (lkr *Linker) loadNode(hash h.Hash) (n.Node, error) {
	var data []byte
	var err error

	b58hash := hash.B58String()

	// First look in the stage:
	loadableBuckets := [][]string{
		{"stage", "objects", b58hash},
		{"objects", b58hash},
	}

	for _, bucketPath := range loadableBuckets {
		data, err = lkr.kv.Get(bucketPath...)
		if err != nil && err != db.ErrNoSuchKey {
			return nil, err
		}

		if data != nil {
			return n.UnmarshalNode(data)
		}
	}

	// Damn, no hash found:
	return nil, nil
}

// NodeByHash returns the node identified by hash.
// If no such hash could be found, nil is returned.
func (lkr *Linker) NodeByHash(hash h.Hash) (n.Node, error) {
	// Check if we have this node in the memory cache already:
	b58Hash := hash.B58String()
	if cachedNode, ok := lkr.index[b58Hash]; ok {
		return cachedNode, nil
	}

	// Node was not in the cache, load directly from kv.
	nd, err := lkr.loadNode(hash)
	if err != nil {
		return nil, err
	}

	if nd == nil {
		return nil, nil
	}

	lkr.MemIndexAdd(nd, false)
	return nd, nil
}

func appendDot(path string) string {
	// path.Join() calls path.Clean(), which strips a trailing '.' element
	// when joining. Since we use the dot to mark directories,
	// we have to append it manually after joining.
	if strings.HasSuffix(path, "/") {
		return path + "."
	}

	return path + "/."
}

// ResolveNode resolves a path to a hash and resolves the corresponding node by
// calling NodeByHash(). If no node could be resolved, nil is returned.
// It does not matter if the node was deleted in the meantime. If so,
// a Ghost node is returned which stores the last known state.
func (lkr *Linker) ResolveNode(nodePath string) (n.Node, error) {
	// Check if it's cached already:
	trieNode := lkr.ptrie.Lookup(nodePath)
	if trieNode != nil && trieNode.Data != nil {
		return trieNode.Data.(n.Node), nil
	}

	fullPaths := [][]string{
		{"stage", "tree", nodePath},
		{"tree", nodePath},
	}

	for _, fullPath := range fullPaths {
		b58Hash, err := lkr.kv.Get(fullPath...)
		if err != nil && err != db.ErrNoSuchKey {
			return nil, e.Wrapf(err, "db-lookup")
		}

		if err == db.ErrNoSuchKey {
			continue
		}

		bhash, err := h.FromB58String(string(b58Hash))
		if err != nil {
			return nil, err
		}

		if bhash != nil {
			return lkr.NodeByHash(h.Hash(bhash))
		}
	}

	// Return nil if nothing found:
	return nil, nil
}

// StageNode inserts a modified node to the staging area, making sure the
// modification is persistent and part of the staging commit. All parent
// directories of the node in question will be staged automatically. If there
// was no modification it will be a (quite expensive) NOOP.
func (lkr *Linker) StageNode(nd n.Node) error {
	return lkr.AtomicWithBatch(func(batch db.Batch) (bool, error) {
		if err := lkr.stageNodeRecursive(batch, nd); err != nil {
			return true, e.Wrapf(err, "recursive stage")
		}

		// Update the staging commit's root hash:
		status, err := lkr.Status()
		if err != nil {
			return true, fmt.Errorf("failed to retrieve status: %v", err)
		}

		root, err := lkr.Root()
		if err != nil {
			return true, err
		}

		status.SetModTime(time.Now())
		status.SetRoot(root.TreeHash())
		lkr.MemSetRoot(root)
		return hintRollback(lkr.saveStatus(status))
	})
}

// CommitByIndex returns the commit referenced by `index`.
// `0` will return the very first commit. Negative numbers are interpreted
// relative to the status commit: `-1` refers to `curr`, `-2` to its parent,
// and so on. Out-of-range indices yield a NoSuchCommitIndex error.
func (lkr *Linker) CommitByIndex(index int64) (*n.Commit, error) {
	status, err := lkr.Status()
	if err != nil {
		return nil, err
	}

	if index < 0 {
		// Interpret an index of -n as curr-(n-1),
		// so that -1 means "curr".
		index = status.Index() + index + 1
	}

	b58Hash, err := lkr.kv.Get("index", strconv.FormatInt(index, 10))
	if err != nil && err != db.ErrNoSuchKey {
		return nil, err
	}

	// Special case: status is not in the index bucket.
	// Do a separate check for it.
	if err == db.ErrNoSuchKey {
		if status.Index() == index {
			return status, nil
		}

		owner, _ := lkr.Owner()
		errmsg := fmt.Sprintf("No commit with index %v for owner `%v`", index, owner)
		log.Error(errmsg)
		return nil, ie.NoSuchCommitIndex(index)
	}

	hash, err := h.FromB58String(string(b58Hash))
	if err != nil {
		return nil, err
	}

	return lkr.CommitByHash(hash)
}

// NodeByInode resolves a node by its unique inode ID.
// It will return nil if no corresponding node was found.
func (lkr *Linker) NodeByInode(uid uint64) (n.Node, error) {
	b58Hash, err := lkr.kv.Get("inode", strconv.FormatUint(uid, 10))
	if err != nil && err != db.ErrNoSuchKey {
		return nil, err
	}

	if b58Hash == nil {
		return nil, nil
	}

	hash, err := h.FromB58String(string(b58Hash))
	if err != nil {
		return nil, err
	}

	return lkr.NodeByHash(hash)
}

func (lkr *Linker) stageNodeRecursive(batch db.Batch, nd n.Node) error {
	if nd.Type() == n.NodeTypeCommit {
		return fmt.Errorf("bug: commits cannot be staged; use MakeCommit()")
	}

	data, err := n.MarshalNode(nd)
	if err != nil {
		return e.Wrapf(err, "marshal")
	}

	b58Hash := nd.TreeHash().B58String()
	batch.Put(data, "stage", "objects", b58Hash)

	uidKey := strconv.FormatUint(nd.Inode(), 10)
	batch.Put([]byte(b58Hash), "inode", uidKey)

	hashPath := []string{"stage", "tree", nd.Path()}
	if nd.Type() == n.NodeTypeDirectory {
		hashPath = append(hashPath, ".")
	}

	batch.Put([]byte(b58Hash), hashPath...)

	// Remember/Update this node in the cache if it's not yet there:
	lkr.MemIndexAdd(nd, true)

	// We need to save parent directories too, in case the hash changed:
	// Note that this will create many pointless directories in staging.
	// That's okay since we garbage collect it every few seconds
	// on a higher layer.
	if nd.Path() == "/" {
		// Can't go any higher. Save this dir as the new virtual root.
		root, ok := nd.(*n.Directory)
		if !ok {
			return ie.ErrBadNode
		}

		lkr.MemSetRoot(root)
		return nil
	}

	par, err := lkr.ResolveDirectory(path.Dir(nd.Path()))
	if err != nil {
		return e.Wrapf(err, "resolve")
	}

	if par != nil {
		if err := lkr.stageNodeRecursive(batch, par); err != nil {
			return err
		}
	}

	return nil
}

/////////////////////
// COMMIT HANDLING //
/////////////////////

// SetMergeMarker sets the current status to be a merge commit.
// Note that this function will only take effect once MakeCommit() is called afterwards.
// Otherwise, the changes will not be written to disk.
func (lkr *Linker) SetMergeMarker(with string, remoteHead h.Hash) error {
	status, err := lkr.Status()
	if err != nil {
		return err
	}

	status.SetMergeMarker(with, remoteHead)
	return lkr.saveStatus(status)
}

// MakeCommit creates a new full commit in the version history.
// The current staging commit is finalized with `author` and `message`
// and gets saved. A new, identical staging commit is created pointing
// to the root of the now new HEAD.
//
// If nothing changed since the last call to MakeCommit, it will
// return ErrNoChange, which can be reacted upon.
func (lkr *Linker) MakeCommit(author string, message string) error {
	return lkr.AtomicWithBatch(func(batch db.Batch) (bool, error) {
		switch err := lkr.makeCommit(batch, author, message); err {
		case ie.ErrNoChange:
			return false, err
		case nil:
			return false, nil
		default:
			return true, err
		}
	})
}

func (lkr *Linker) makeCommitPutCurrToPersistent(batch db.Batch, rootDir *n.Directory) (map[uint64]bool, error) {
	exportedInodes := make(map[uint64]bool)
	return exportedInodes, n.Walk(lkr, rootDir, true, func(child n.Node) error {
		data, err := n.MarshalNode(child)
		if err != nil {
			return err
		}

		b58Hash := child.TreeHash().B58String()
		batch.Put(data, "objects", b58Hash)
		exportedInodes[child.Inode()] = true

		childPath := child.Path()
		if child.Type() == n.NodeTypeDirectory {
			childPath = appendDot(childPath)
		}

		batch.Put([]byte(b58Hash), "tree", childPath)
		return nil
	})
}

func (lkr *Linker) makeCommit(batch db.Batch, author string, message string) error {
	head, err := lkr.Head()
	if err != nil && !ie.IsErrNoSuchRef(err) {
		return err
	}

	status, err := lkr.Status()
	if err != nil {
		return err
	}

	// Only compare with previous if we have a HEAD yet.
	if head != nil {
		if status.Root().Equal(head.Root()) {
			return ie.ErrNoChange
		}
	}

	rootDir, err := lkr.Root()
	if err != nil {
		return err
	}

	// Go over all files/directories and save them in tree & objects.
	// Note that this will only move nodes that are reachable from the current
	// commit root. Intermediate nodes will not be copied.
	exportedInodes, err := lkr.makeCommitPutCurrToPersistent(batch, rootDir)
	if err != nil {
		return err
	}

	// NOTE: `head` may be nil, if it couldn't be resolved,
	//        or (maybe more likely) if this is the first commit.
	if head != nil {
		if err := status.SetParent(lkr, head); err != nil {
			return err
		}
	}

	if err := status.BoxCommit(author, message); err != nil {
		return err
	}

	statusData, err := n.MarshalNode(status)
	if err != nil {
		return err
	}

	statusB58Hash := status.TreeHash().B58String()
	batch.Put(statusData, "objects", statusB58Hash)

	// Remember this commit under its index:
	batch.Put([]byte(statusB58Hash), "index", strconv.FormatInt(status.Index(), 10))

	if err := lkr.SaveRef("HEAD", status); err != nil {
		return err
	}

	// Check if we have already tagged the initial commit.
	if _, err := lkr.ResolveRef("init"); err != nil {
		if !ie.IsErrNoSuchRef(err) {
			// Some other error happened.
			return err
		}

		// This is probably the first commit. Tag it.
		if err := lkr.SaveRef("INIT", status); err != nil {
			return err
		}
	}

	// Fixate the moved paths in the stage:
	if err := lkr.commitMoveMapping(status, exportedInodes); err != nil {
		return err
	}

	if err := lkr.clearStage(batch); err != nil {
		return err
	}

	newStatus, err := n.NewEmptyCommit(lkr.NextInode(), status.Index()+1)
	if err != nil {
		return err
	}

	newStatus.SetRoot(status.Root())
	if err := newStatus.SetParent(lkr, status); err != nil {
		return err
	}

	return lkr.saveStatus(newStatus)
}

func (lkr *Linker) clearStage(batch db.Batch) error {
	// Clear the staging area.
	toClear := [][]string{
		{"stage", "objects"},
		{"stage", "tree"},
		{"stage", "moves"},
	}

	for _, key := range toClear {
		if err := batch.Clear(key...); err != nil {
			return err
		}
	}

	return nil
}

///////////////////////
// METADATA HANDLING //
///////////////////////

// MetadataPut persistently remembers a value identified by `key`.
// It can be used as a single-level key-value store for user purposes.
func (lkr *Linker) MetadataPut(key string, value []byte) error {
	return lkr.AtomicWithBatch(func(batch db.Batch) (bool, error) {
		batch.Put([]byte(value), "metadata", key)
		return false, nil
	})
}

// MetadataGet retrieves a previously put key-value pair.
// It will return nil if no such value could be retrieved.
func (lkr *Linker) MetadataGet(key string) ([]byte, error) {
	return lkr.kv.Get("metadata", key)
}

////////////////////////
// OWNERSHIP HANDLING //
////////////////////////

// Owner returns the owner of the linker.
func (lkr *Linker) Owner() (string, error) {
	if lkr.owner != "" {
		return lkr.owner, nil
	}

	data, err := lkr.MetadataGet("owner")
	if err != nil {
		return "", err
	}

	// Cache owner, we don't want to reload it again and again.
	// It will usually not change during runtime, except when SetOwner
	// is called (which updates the cached value anyway).
	lkr.owner = string(data)
	return lkr.owner, nil
}

// SetOwner will set the owner to `owner`.
func (lkr *Linker) SetOwner(owner string) error {
	lkr.owner = owner
	return lkr.MetadataPut("owner", []byte(owner))
}

// SetABIVersion will set the ABI version to `version`.
func (lkr *Linker) SetABIVersion(version int) error {
	sv := strconv.Itoa(version)
	return lkr.MetadataPut("version", []byte(sv))
}

////////////////////////
// REFERENCE HANDLING //
////////////////////////

// ResolveRef resolves the hash associated with `refname`. If the ref could not
// be resolved, ErrNoSuchRef is returned. Typically, Node will be a Commit.
// But there are no technical restrictions on the node type to use.
// NOTE: ResolveRef("HEAD") != ResolveRef("head") due to case.
func (lkr *Linker) ResolveRef(refname string) (n.Node, error) {
	origRefname := refname

	nUps := 0
	for idx := len(refname) - 1; idx >= 0; idx-- {
		if refname[idx] == '^' {
			nUps++
		} else {
			break
		}
	}

	// Strip the ^s:
	refname = refname[:len(refname)-nUps]

	// Special case: the status commit is not part of the normal object store.
// Still make it resolvable by its refname "curr".
	if refname == "curr" || refname == "status" {
		return lkr.Status()
	}

	b58Hash, err := lkr.kv.Get("refs", refname)
	if err != nil && err != db.ErrNoSuchKey {
		return nil, err
	}

	if len(b58Hash) == 0 {
		// Try to interpret the refname as b58hash directly.
		// This path will hit when passing a commit hash directly
		// as `refname` to this method.
		b58Hash = []byte(refname)
	}

	hash, err := h.FromB58String(string(b58Hash))
	if err != nil {
		// Could not parse the hash, so it's probably not a valid ref.
		return nil, ie.ErrNoSuchRef(refname)
	}

	status, err := lkr.Status()
	if err != nil {
		return nil, err
	}

	// Special case: Allow the resolving of `curr`
	// by using its status hash and check it explicitly.
	var nd n.Node
	if status.TreeHash().Equal(hash) {
		nd = status
	} else {
		nd, err = lkr.NodeByHash(h.Hash(hash))
		if err != nil {
			return nil, err
		}
	}

	if nd == nil {
		return nil, ie.ErrNoSuchRef(refname)
	}

	// Possibly advance a few commits until we hit the one
	// the user required.
	cmt, ok := nd.(*n.Commit)
	if ok {
		for i := 0; i < nUps; i++ {
			parentNd, err := cmt.Parent(lkr)
			if err != nil {
				return nil, err
			}

			if parentNd == nil {
				log.Warningf("ref `%s` is too far back; stopping at `init`", origRefname)
				break
			}

			parentCmt, ok := parentNd.(*n.Commit)
			if !ok {
				break
			}

			cmt = parentCmt
		}

		nd = cmt
	}

	return nd, nil
}

// SaveRef stores a reference to `nd` persistently. The caller is responsible
// for ensuring that the node is already in the blockstore, otherwise it won't be
// resolvable.
func (lkr *Linker) SaveRef(refname string, nd n.Node) error {
	refname = strings.ToLower(refname)
	return lkr.AtomicWithBatch(func(batch db.Batch) (bool, error) {
		batch.Put([]byte(nd.TreeHash().B58String()), "refs", refname)
		return false, nil
	})
}

// ListRefs lists all currently known refs.
func (lkr *Linker) ListRefs() ([]string, error) {
	refs := []string{}
	keys, err := lkr.kv.Keys("refs")
	if err != nil {
		return nil, err
	}

	for _, key := range keys {
		if len(key) <= 1 {
			continue
		}

		refs = append(refs, key[1])
	}

	return refs, nil
}

// RemoveRef removes the ref named `refname`.
func (lkr *Linker) RemoveRef(refname string) error {
	return lkr.AtomicWithBatch(func(batch db.Batch) (bool, error) {
		batch.Erase("refs", refname)
		return false, nil
	})
}

// Head is just a shortcut for ResolveRef("HEAD").
func (lkr *Linker) Head() (*n.Commit, error) {
	nd, err := lkr.ResolveRef("head")
	if err != nil {
		return nil, err
	}

	cmt, ok := nd.(*n.Commit)
	if !ok {
		return nil, fmt.Errorf("oh-oh, HEAD is not a Commit... %v", nd)
	}

	return cmt, nil
}

// Root returns the current root directory of CURR.
// It is never nil when err is nil.
func (lkr *Linker) Root() (*n.Directory, error) {
	if lkr.root != nil {
		return lkr.root, nil
	}

	status, err := lkr.Status()
	if err != nil {
		return nil, err
	}

	rootNd, err := lkr.DirectoryByHash(status.Root())
	if err != nil {
		return nil, err
	}

	lkr.MemSetRoot(rootNd)
	return rootNd, nil
}

// Status returns the current staging commit.
// It is never nil when err is nil.
func (lkr *Linker) Status() (*n.Commit, error) {
	var cmt *n.Commit
	var err error

	// Run the batch first and return `cmt` afterwards. Returning the batch
	// call and `cmt` in one statement would leave the evaluation order of
	// the two return operands unspecified.
	batchErr := lkr.AtomicWithBatch(func(batch db.Batch) (bool, error) {
		cmt, err = lkr.status(batch)
		return hintRollback(err)
	})

	return cmt, batchErr
}
}

func (lkr *Linker) status(batch db.Batch) (cmt *n.Commit, err error) {
	cmt, err = lkr.loadStatus()
	if err != nil {
		return nil, err
	}

	if cmt != nil {
		return cmt, nil
	}

	// Shoot, no commit exists yet.
	// We need to create an initial one.
	cmt, err = n.NewEmptyCommit(lkr.NextInode(), 0)
	if err != nil {
		return nil, err
	}

	// Setup a new commit and set root from last HEAD or new one.
	head, err := lkr.Head()
	if err != nil && !ie.IsErrNoSuchRef(err) {
		return nil, err
	}

	var rootHash h.Hash

	if ie.IsErrNoSuchRef(err) {
		// There probably wasn't a HEAD yet.
		if root, err := lkr.ResolveDirectory("/"); err == nil && root != nil {
			rootHash = root.TreeHash()
		} else {
			// No root directory then. Create a shiny new one and stage it.
			inode := lkr.NextInode()
			newRoot, err := n.NewEmptyDirectory(lkr, nil, "/", lkr.owner, inode)
			if err != nil {
				return nil, err
			}

			// Can't call StageNode(), since that would call Status(),
			// causing an endless loop of grief and doom.
			if err := lkr.stageNodeRecursive(batch, newRoot); err != nil {
				return nil, err
			}

			rootHash = newRoot.TreeHash()
		}
	} else {
		if err := cmt.SetParent(lkr, head); err != nil {
			return nil, err
		}

		rootHash = head.Root()
	}

	cmt.SetRoot(rootHash)

	if err := lkr.saveStatus(cmt); err != nil {
		return nil, err
	}

	return cmt, nil
}

func (lkr *Linker) loadStatus() (*n.Commit, error) {
	data, err := lkr.kv.Get("stage", "STATUS")
	if err != nil && err != db.ErrNoSuchKey {
		return nil, err
	}

	if data == nil {
		return nil, nil
	}

	msg, err := capnp.Unmarshal(data)
	if err != nil {
		return nil, err
	}

	// It's there already. Just unmarshal it.
	cmt := &n.Commit{}
	if err := cmt.FromCapnp(msg); err != nil {
		return nil, err
	}

	return cmt, nil
}
  function TestDialAndListenOnSingleNode (line 94) | func TestDialAndListenOnSingleNode(t *testing.T) {
  function TestPingSelf (line 122) | func TestPingSelf(t *testing.T) {

FILE: backend/httpipfs/pin.go
  method IsPinned (line 14) | func (nd *Node) IsPinned(hash h.Hash) (bool, error) {
  method Pin (line 49) | func (nd *Node) Pin(hash h.Hash) error {
  method Unpin (line 54) | func (nd *Node) Unpin(hash h.Hash) error {
  type objectRef (line 62) | type objectRef struct
  type Link (line 69) | type Link struct
  method IsCached (line 76) | func (nd *Node) IsCached(hash h.Hash) (bool, error) {
  method CachedSize (line 123) | func (nd *Node) CachedSize(hash h.Hash) (int64, error) {

FILE: backend/httpipfs/pin_test.go
  function TestPinUnpin (line 12) | func TestPinUnpin(t *testing.T) {
  function TestIsCached (line 41) | func TestIsCached(t *testing.T) {

FILE: backend/httpipfs/pubsub.go
  type subWrapper (line 10) | type subWrapper struct
    method Next (line 26) | func (s *subWrapper) Next(ctx context.Context) (eventsBackend.Message,...
    method Close (line 35) | func (s *subWrapper) Close() error {
  type msgWrapper (line 14) | type msgWrapper struct
    method Data (line 18) | func (msg *msgWrapper) Data() []byte {
    method Source (line 22) | func (msg *msgWrapper) Source() string {
  method Subscribe (line 42) | func (nd *Node) Subscribe(ctx context.Context, topic string) (eventsBack...
  method PublishEvent (line 56) | func (nd *Node) PublishEvent(topic string, data []byte) error {

FILE: backend/httpipfs/pubsub_test.go
  function TestPubSub (line 11) | func TestPubSub(t *testing.T) {

FILE: backend/httpipfs/resolve.go
  method PublishName (line 19) | func (nd *Node) PublishName(name string) error {
  method Identity (line 32) | func (nd *Node) Identity() (peer.Info, error) {
  function findProvider (line 60) | func findProvider(ctx context.Context, sh *shell.Shell, hash h.Hash) ([]...
  method ResolveName (line 114) | func (nd *Node) ResolveName(ctx context.Context, name string) ([]peer.In...

FILE: backend/httpipfs/resolve_test.go
  function TestPublishResolve (line 12) | func TestPublishResolve(t *testing.T) {

FILE: backend/httpipfs/shell.go
  type IpfsStateCache (line 27) | type IpfsStateCache struct
  type Node (line 33) | type Node struct
    method IsOnline (line 160) | func (nd *Node) IsOnline() bool {
    method Connect (line 169) | func (nd *Node) Connect() error {
    method Disconnect (line 178) | func (nd *Node) Disconnect() error {
    method isOnline (line 186) | func (nd *Node) isOnline() bool {
    method Close (line 194) | func (nd *Node) Close() error {
    method Name (line 199) | func (nd *Node) Name() string {
  function getExperimentalFeatures (line 44) | func getExperimentalFeatures(sh *shell.Shell) (map[string]bool, error) {
  type Option (line 70) | type Option
  function WithNoLogging (line 74) | func WithNoLogging() Option {
  function toMultiAddr (line 80) | func toMultiAddr(ipfsPathOrMultiaddr string) (ma.Multiaddr, error) {
  function NewNode (line 99) | func NewNode(ipfsPathOrMultiaddr string, fingerprint string, opts ...Opt...

FILE: backend/httpipfs/testing.go
  function WithIpfs (line 17) | func WithIpfs(t *testing.T, portOff int, fn func(t *testing.T, ipfsPath ...
  function WithDoubleIpfs (line 68) | func WithDoubleIpfs(t *testing.T, portOff int, fn func(t *testing.T, ipf...

FILE: backend/httpipfs/testing_test.go
  function TestIpfsStartup (line 11) | func TestIpfsStartup(t *testing.T) {
  function TestDoubleIpfsStartup (line 24) | func TestDoubleIpfsStartup(t *testing.T) {

FILE: backend/httpipfs/version.go
  type VersionInfo (line 4) | type VersionInfo struct
    method SemVer (line 9) | func (v *VersionInfo) SemVer() string { return v.semVer }
    method Name (line 12) | func (v *VersionInfo) Name() string { return v.name }
    method Rev (line 15) | func (v *VersionInfo) Rev() string { return v.rev }
  method Version (line 18) | func (n *Node) Version() *VersionInfo {

FILE: backend/mock/mock.go
  type Backend (line 11) | type Backend struct
  function NewMockBackend (line 21) | func NewMockBackend(path, owner string) *Backend {
  type VersionInfo (line 31) | type VersionInfo struct
    method SemVer (line 36) | func (v *VersionInfo) SemVer() string { return v.semVer }
    method Name (line 39) | func (v *VersionInfo) Name() string { return v.name }
    method Rev (line 42) | func (v *VersionInfo) Rev() string { return v.rev }
  function Version (line 45) | func Version() *VersionInfo {

FILE: bench/bench.go
  type Run (line 30) | type Run struct
  type Runs (line 37) | type Runs
    method Average (line 40) | func (runs Runs) Average() Run {
  type Bench (line 56) | type Bench interface
  function withRunStats (line 77) | func withRunStats(size int64, fn func() (int64, error)) (*Run, error) {
  type memcpyBench (line 94) | type memcpyBench struct
    method SupportHints (line 100) | func (n memcpyBench) SupportHints() bool { return false }
    method CanBeVerified (line 102) | func (n memcpyBench) CanBeVerified() bool { return true }
    method Bench (line 104) | func (n memcpyBench) Bench(hint hints.Hint, size int64, r io.Reader, v...
    method Close (line 114) | func (n memcpyBench) Close() error { return nil }
  function newMemcpyBench (line 96) | func newMemcpyBench(_ string, _ bool) (Bench, error) {
  type serverCommon (line 118) | type serverCommon struct
    method Close (line 145) | func (sc *serverCommon) Close() error {
  function newServerCommon (line 123) | func newServerCommon(ipfsPath string) (*serverCommon, error) {
  type serverStageBench (line 151) | type serverStageBench struct
    method SupportHints (line 164) | func (s *serverStageBench) SupportHints() bool { return true }
    method CanBeVerified (line 166) | func (s *serverStageBench) CanBeVerified() bool { return false }
    method Bench (line 168) | func (s *serverStageBench) Bench(hint hints.Hint, size int64, r io.Rea...
    method Close (line 185) | func (s *serverStageBench) Close() error {
  function newServerStageBench (line 155) | func newServerStageBench(ipfsPath string, _ bool) (Bench, error) {
  type serverCatBench (line 189) | type serverCatBench struct
    method SupportHints (line 202) | func (s *serverCatBench) SupportHints() bool { return true }
    method CanBeVerified (line 204) | func (s *serverCatBench) CanBeVerified() bool { return true }
    method Bench (line 206) | func (s *serverCatBench) Bench(hint hints.Hint, size int64, r io.Reade...
    method Close (line 233) | func (s *serverCatBench) Close() error {
  function newServerCatBench (line 193) | func newServerCatBench(ipfsPath string, _ bool) (Bench, error) {
  type mioWriterBench (line 239) | type mioWriterBench struct
    method SupportHints (line 245) | func (m *mioWriterBench) SupportHints() bool { return true }
    method CanBeVerified (line 247) | func (m *mioWriterBench) CanBeVerified() bool { return false }
    method Bench (line 249) | func (m *mioWriterBench) Bench(hint hints.Hint, size int64, r io.Reade...
    method Close (line 261) | func (m *mioWriterBench) Close() error {
  function newMioWriterBench (line 241) | func newMioWriterBench(_ string, _ bool) (Bench, error) {
  type mioReaderBench (line 267) | type mioReaderBench struct
    method SupportHints (line 273) | func (m *mioReaderBench) SupportHints() bool { return true }
    method CanBeVerified (line 275) | func (m *mioReaderBench) CanBeVerified() bool { return true }
    method Bench (line 277) | func (m *mioReaderBench) Bench(hint hints.Hint, size int64, r io.Reade...
    method Close (line 311) | func (m *mioReaderBench) Close() error {
  function newMioReaderBench (line 269) | func newMioReaderBench(_ string, _ bool) (Bench, error) {
  type ipfsAddOrCatBench (line 317) | type ipfsAddOrCatBench struct
    method SupportHints (line 326) | func (ia *ipfsAddOrCatBench) SupportHints() bool { return false }
    method CanBeVerified (line 328) | func (ia *ipfsAddOrCatBench) CanBeVerified() bool { return !ia.isAdd }
    method Bench (line 330) | func (ia *ipfsAddOrCatBench) Bench(hint hints.Hint, size int64, r io.R...
    method Close (line 360) | func (ia *ipfsAddOrCatBench) Close() error {
  function newIPFSAddBench (line 322) | func newIPFSAddBench(ipfsPath string, isAdd bool) (Bench, error) {
  type fuseWriteOrReadBench (line 366) | type fuseWriteOrReadBench struct
    method SupportHints (line 411) | func (fb *fuseWriteOrReadBench) SupportHints() bool { return true }
    method CanBeVerified (line 413) | func (fb *fuseWriteOrReadBench) CanBeVerified() bool { return !fb.isWr...
    method Bench (line 415) | func (fb *fuseWriteOrReadBench) Bench(hint hints.Hint, size int64, r i...
    method Close (line 471) | func (fb *fuseWriteOrReadBench) Close() error {
  function newFuseWriteOrReadBench (line 375) | func newFuseWriteOrReadBench(ipfsPath string, isWrite bool) (Bench, erro...
  function ByName (line 516) | func ByName(name, ipfsPath string) (Bench, error) {
  function BenchmarkNames (line 527) | func BenchmarkNames() []string {

FILE: bench/inputs.go
  type Verifier (line 16) | type Verifier interface
  type Input (line 27) | type Input interface
  function benchData (line 34) | func benchData(size uint64, name string) []byte {
  type memVerifier (line 49) | type memVerifier struct
    method Write (line 54) | func (m *memVerifier) Write(buf []byte) (int, error) {
    method MissingBytes (line 70) | func (m *memVerifier) MissingBytes() int64 {
  type memInput (line 74) | type memInput struct
    method Reader (line 82) | func (ni *memInput) Reader(seed uint64) (io.Reader, error) {
    method Verifier (line 91) | func (ni *memInput) Verifier() (Verifier, error) {
    method Size (line 98) | func (ni *memInput) Size() int64 {
    method Close (line 102) | func (ni *memInput) Close() error {
  function newMemInput (line 78) | func newMemInput(size uint64, name string) Input {
  function InputByName (line 124) | func InputByName(name string, size uint64) (Input, error) {
  function InputNames (line 134) | func InputNames() []string {

FILE: bench/runner.go
  type Config (line 20) | type Config struct
  type Result (line 30) | type Result struct
  function buildHints (line 43) | func buildHints(cfg Config) []hints.Hint {
  function sortHints (line 81) | func sortHints(hs []hints.Hint) []hints.Hint {
  function benchmarkSingle (line 90) | func benchmarkSingle(cfg Config, fn func(result Result), ipfsPath string...
  function ipfsIsNeeded (line 190) | func ipfsIsNeeded(cfgs []Config) bool {
  function Benchmark (line 201) | func Benchmark(cfgs []Config, fn func(result Result)) error {

FILE: bench/stats.go
  type Stats (line 10) | type Stats struct
  function FetchStats (line 18) | func FetchStats() Stats {

FILE: brig.go
  function main (line 9) | func main() {

FILE: catfs/backend.go
  type ErrNoSuchHash (line 16) | type ErrNoSuchHash struct
    method Error (line 20) | func (eh ErrNoSuchHash) Error() string {
  type FsBackend (line 26) | type FsBackend interface
  type MemFsBackend (line 57) | type MemFsBackend struct
    method Cat (line 71) | func (mb *MemFsBackend) Cat(hash h.Hash) (mio.Stream, error) {
    method Add (line 94) | func (mb *MemFsBackend) Add(r io.Reader) (h.Hash, error) {
    method Pin (line 106) | func (mb *MemFsBackend) Pin(hash h.Hash) error {
    method Unpin (line 112) | func (mb *MemFsBackend) Unpin(hash h.Hash) error {
    method IsPinned (line 118) | func (mb *MemFsBackend) IsPinned(hash h.Hash) (bool, error) {
    method IsCached (line 129) | func (mb *MemFsBackend) IsCached(hash h.Hash) (bool, error) {
    method CachedSize (line 136) | func (mb *MemFsBackend) CachedSize(hash h.Hash) (int64, error) {
  function NewMemFsBackend (line 63) | func NewMemFsBackend() *MemFsBackend {

FILE: catfs/capnp/pinner.capnp.go
  type Pin (line 11) | type Pin struct
    method String (line 31) | func (s Pin) String() string {
    method Inode (line 36) | func (s Pin) Inode() uint64 {
    method SetInode (line 40) | func (s Pin) SetInode(v uint64) {
    method IsPinned (line 44) | func (s Pin) IsPinned() bool {
    method SetIsPinned (line 48) | func (s Pin) SetIsPinned(v bool) {
  constant Pin_TypeID (line 14) | Pin_TypeID = 0x985d53e01674ee95
  function NewPin (line 16) | func NewPin(s *capnp.Segment) (Pin, error) {
  function NewRootPin (line 21) | func NewRootPin(s *capnp.Segment) (Pin, error) {
  function ReadRootPin (line 26) | func ReadRootPin(msg *capnp.Message) (Pin, error) {
  type Pin_List (line 53) | type Pin_List struct
    method At (line 61) | func (s Pin_List) At(i int) Pin { return Pin{s.List.Struct(i)} }
    method Set (line 63) | func (s Pin_List) Set(i int, v Pin) error { return s.List.SetStruct(i,...
    method String (line 65) | func (s Pin_List) String() string {
  function NewPin_List (line 56) | func NewPin_List(s *capnp.Segment, sz int32) (Pin_List, error) {
  type Pin_Promise (line 71) | type Pin_Promise struct
    method Struct (line 73) | func (p Pin_Promise) Struct() (Pin, error) {
  type PinEntry (line 79) | type PinEntry struct
    method String (line 99) | func (s PinEntry) String() string {
    method Pins (line 104) | func (s PinEntry) Pins() (Pin_List, error) {
    method HasPins (line 109) | func (s PinEntry) HasPins() bool {
    method SetPins (line 114) | func (s PinEntry) SetPins(v Pin_List) error {
    method NewPins (line 120) | func (s PinEntry) NewPins(n int32) (Pin_List, error) {
  constant PinEntry_TypeID (line 82) | PinEntry_TypeID = 0xdb74f7cf7bc815c6
  function NewPinEntry (line 84) | func NewPinEntry(s *capnp.Segment) (PinEntry, error) {
  function NewRootPinEntry (line 89) | func NewRootPinEntry(s *capnp.Segment) (PinEntry, error) {
  function ReadRootPinEntry (line 94) | func ReadRootPinEntry(msg *capnp.Message) (PinEntry, error) {
  type PinEntry_List (line 130) | type PinEntry_List struct
    method At (line 138) | func (s PinEntry_List) At(i int) PinEntry { return PinEntry{s.List.Str...
    method Set (line 140) | func (s PinEntry_List) Set(i int, v PinEntry) error { return s.List.Se...
    method String (line 142) | func (s PinEntry_List) String() string {
  function NewPinEntry_List (line 133) | func NewPinEntry_List(s *capnp.Segment, sz int32) (PinEntry_List, error) {
  type PinEntry_Promise (line 148) | type PinEntry_Promise struct
    method Struct (line 150) | func (p PinEntry_Promise) Struct() (PinEntry, error) {
  constant schema_ba762188b0a6e4cf (line 155) | schema_ba762188b0a6e4cf = "x\xda\\\xd0\xb1K\xebP\x14\x06\xf0\xef\xbbI_[" +
  function init (line 176) | func init() {

FILE: catfs/core/coreutils.go
  function mkdirParents (line 27) | func mkdirParents(lkr *Linker, repoPath string) (*n.Directory, error) {
  function Mkdir (line 54) | func Mkdir(lkr *Linker, repoPath string, createParents bool) (dir *n.Dir...
  function Remove (line 125) | func Remove(lkr *Linker, nd n.ModNode, createGhost, force bool) (parentD...
  function prepareParent (line 180) | func prepareParent(lkr *Linker, nd n.ModNode, dstPath string) (*n.Direct...
  function Copy (line 252) | func Copy(lkr *Linker, nd n.ModNode, dstPath string) (newNode n.ModNode,...
  function Move (line 300) | func Move(lkr *Linker, nd n.ModNode, dstPath string) error {
  function StageFromFileNode (line 358) | func StageFromFileNode(lkr *Linker, f *n.File) (*n.File, error) {
  function Stage (line 376) | func Stage(
  function Log (line 488) | func Log(lkr *Linker, start *n.Commit, fn func(cmt *n.Commit) error) err...

FILE: catfs/core/coreutils_test.go
  function TestMkdir (line 16) | func TestMkdir(t *testing.T) {
  function TestRemove (line 85) | func TestRemove(t *testing.T) {
  function TestRemoveGhost (line 156) | func TestRemoveGhost(t *testing.T) {
  function TestRemoveExistingGhost (line 188) | func TestRemoveExistingGhost(t *testing.T) {
  function moveValidCheck (line 202) | func moveValidCheck(t *testing.T, lkr *Linker, srcPath, dstPath string) {
  function moveInvalidCheck (line 223) | func moveInvalidCheck(t *testing.T, lkr *Linker, srcPath, dstPath string) {
  function TestMoveSingle (line 329) | func TestMoveSingle(t *testing.T) {
  function TestMoveDirectoryWithChild (line 371) | func TestMoveDirectoryWithChild(t *testing.T) {
  function TestMoveDirectory (line 398) | func TestMoveDirectory(t *testing.T) {
  function TestMoveDirectoryWithGhosts (line 436) | func TestMoveDirectoryWithGhosts(t *testing.T) {
  function TestStage (line 485) | func TestStage(t *testing.T) {
  function TestStageDirOverGhost (line 533) | func TestStageDirOverGhost(t *testing.T) {
  function TestCopy (line 548) | func TestCopy(t *testing.T) {

FILE: catfs/core/gc.go
  type GarbageCollector (line 14) | type GarbageCollector struct
    method markMoveMap (line 31) | func (gc *GarbageCollector) markMoveMap(key []string) error {
    method mark (line 56) | func (gc *GarbageCollector) mark(cmt *n.Commit, recursive bool) error {
    method sweep (line 93) | func (gc *GarbageCollector) sweep(prefix []string) (int, error) {
    method findAllMoveLocations (line 139) | func (gc *GarbageCollector) findAllMoveLocations(head *n.Commit) ([][]...
    method Run (line 170) | func (gc *GarbageCollector) Run(allObjects bool) error {
  function NewGarbageCollector (line 23) | func NewGarbageCollector(lkr *Linker, kv db.Database, kc func(nd n.Node)...

FILE: catfs/core/gc_test.go
  function assertNodeExists (line 11) | func assertNodeExists(t *testing.T, kv db.Database, nd n.Node) {
  function TestGC (line 17) | func TestGC(t *testing.T) {

FILE: catfs/core/linker.go
  type Linker (line 65) | type Linker struct
    method MemIndexAdd (line 93) | func (lkr *Linker) MemIndexAdd(nd n.Node, updatePathIndex bool) {
    method MemIndexSwap (line 111) | func (lkr *Linker) MemIndexSwap(nd n.Node, oldHash h.Hash, updatePathI...
    method MemSetRoot (line 122) | func (lkr *Linker) MemSetRoot(root *n.Directory) {
    method MemIndexPurge (line 133) | func (lkr *Linker) MemIndexPurge(nd n.Node) {
    method MemIndexClear (line 142) | func (lkr *Linker) MemIndexClear() {
    method NextInode (line 155) | func (lkr *Linker) NextInode() uint64 {
    method FilesByContents (line 185) | func (lkr *Linker) FilesByContents(contents []h.Hash) (map[string]*n.F...
    method loadNode (line 231) | func (lkr *Linker) loadNode(hash h.Hash) (n.Node, error) {
    method NodeByHash (line 260) | func (lkr *Linker) NodeByHash(hash h.Hash) (n.Node, error) {
    method ResolveNode (line 296) | func (lkr *Linker) ResolveNode(nodePath string) (n.Node, error) {
    method StageNode (line 336) | func (lkr *Linker) StageNode(nd n.Node) error {
    method CommitByIndex (line 363) | func (lkr *Linker) CommitByIndex(index int64) (*n.Commit, error) {
    method NodeByInode (line 403) | func (lkr *Linker) NodeByInode(uid uint64) (n.Node, error) {
    method stageNodeRecursive (line 417) | func (lkr *Linker) stageNodeRecursive(batch db.Batch, nd n.Node) error {
    method SetMergeMarker (line 479) | func (lkr *Linker) SetMergeMarker(with string, remoteHead h.Hash) error {
    method MakeCommit (line 496) | func (lkr *Linker) MakeCommit(author string, message string) error {
    method makeCommitPutCurrToPersistent (line 509) | func (lkr *Linker) makeCommitPutCurrToPersistent(batch db.Batch, rootD...
    method makeCommit (line 531) | func (lkr *Linker) makeCommit(batch db.Batch, author string, message s...
    method clearStage (line 624) | func (lkr *Linker) clearStage(batch db.Batch) error {
    method MetadataPut (line 647) | func (lkr *Linker) MetadataPut(key string, value []byte) error {
    method MetadataGet (line 656) | func (lkr *Linker) MetadataGet(key string) ([]byte, error) {
    method Owner (line 665) | func (lkr *Linker) Owner() (string, error) {
    method SetOwner (line 683) | func (lkr *Linker) SetOwner(owner string) error {
    method SetABIVersion (line 689) | func (lkr *Linker) SetABIVersion(version int) error {
    method ResolveRef (line 702) | func (lkr *Linker) ResolveRef(refname string) (n.Node, error) {
    method SaveRef (line 794) | func (lkr *Linker) SaveRef(refname string, nd n.Node) error {
    method ListRefs (line 803) | func (lkr *Linker) ListRefs() ([]string, error) {
    method RemoveRef (line 822) | func (lkr *Linker) RemoveRef(refname string) error {
    method Head (line 830) | func (lkr *Linker) Head() (*n.Commit, error) {
    method Root (line 846) | func (lkr *Linker) Root() (*n.Directory, error) {
    method Status (line 867) | func (lkr *Linker) Status() (*n.Commit, error) {
    method status (line 876) | func (lkr *Linker) status(batch db.Batch) (cmt *n.Commit, err error) {
    method loadStatus (line 938) | func (lkr *Linker) loadStatus() (*n.Commit, error) {
    method saveStatus (line 963) | func (lkr *Linker) saveStatus(cmt *n.Commit) error {
    method LookupNode (line 999) | func (lkr *Linker) LookupNode(repoPath string) (n.Node, error) {
    method LookupNodeAt (line 1009) | func (lkr *Linker) LookupNodeAt(cmt *n.Commit, repoPath string) (n.Nod...
    method LookupModNode (line 1023) | func (lkr *Linker) LookupModNode(repoPath string) (n.ModNode, error) {
    method LookupModNodeAt (line 1042) | func (lkr *Linker) LookupModNodeAt(cmt *n.Commit, repoPath string) (n....
    method DirectoryByHash (line 1062) | func (lkr *Linker) DirectoryByHash(hash h.Hash) (*n.Directory, error) {
    method ResolveDirectory (line 1083) | func (lkr *Linker) ResolveDirectory(dirpath string) (*n.Directory, err...
    method LookupDirectory (line 1102) | func (lkr *Linker) LookupDirectory(repoPath string) (*n.Directory, err...
    method FileByHash (line 1121) | func (lkr *Linker) FileByHash(hash h.Hash) (*n.File, error) {
    method LookupFile (line 1136) | func (lkr *Linker) LookupFile(repoPath string) (*n.File, error) {
    method LookupGhost (line 1155) | func (lkr *Linker) LookupGhost(repoPath string) (*n.Ghost, error) {
    method CommitByHash (line 1175) | func (lkr *Linker) CommitByHash(hash h.Hash) (*n.Commit, error) {
    method HaveStagedChanges (line 1195) | func (lkr *Linker) HaveStagedChanges() (bool, error) {
    method CheckoutCommit (line 1219) | func (lkr *Linker) CheckoutCommit(cmt *n.Commit, force bool) error {
    method AddMoveMapping (line 1256) | func (lkr *Linker) AddMoveMapping(fromInode, toInode uint64) (err erro...
    method parseMoveMappingLine (line 1286) | func (lkr *Linker) parseMoveMappingLine(line string) (n.Node, MoveDir,...
    method commitMoveMappingKey (line 1328) | func (lkr *Linker) commitMoveMappingKey(
    method commitMoveMapping (line 1445) | func (lkr *Linker) commitMoveMapping(status *n.Commit, exported map[ui...
    method MoveEntryPoint (line 1518) | func (lkr *Linker) MoveEntryPoint(nd n.Node) (n.Node, MoveDir, error) {
    method MoveMapping (line 1556) | func (lkr *Linker) MoveMapping(cmt *n.Commit, nd n.Node) (n.Node, Move...
    method ExpandAbbrev (line 1622) | func (lkr *Linker) ExpandAbbrev(abbrev string) (h.Hash, error) {
    method IterAll (line 1664) | func (lkr *Linker) IterAll(from, to *n.Commit, fn func(n.ModNode, *n.C...
    method iterAll (line 1669) | func (lkr *Linker) iterAll(from, to *n.Commit, visited map[string]stru...
    method Atomic (line 1727) | func (lkr *Linker) Atomic(fn func() (bool, error)) (err error) {
    method AtomicWithBatch (line 1735) | func (lkr *Linker) AtomicWithBatch(fn func(batch db.Batch) (bool, erro...
    method KV (line 1788) | func (lkr *Linker) KV() db.Database {
  function NewLinker (line 86) | func NewLinker(kv db.Database) *Linker {
  function appendDot (line 281) | func appendDot(path string) string {
  constant MoveDirUnknown (line 1464) | MoveDirUnknown = iota
  constant MoveDirSrcToDst (line 1467) | MoveDirSrcToDst
  constant MoveDirDstToSrc (line 1470) | MoveDirDstToSrc
  constant MoveDirNone (line 1472) | MoveDirNone
  type MoveDir (line 1476) | type MoveDir
    method String (line 1478) | func (md MoveDir) String() string {
    method Invert (line 1492) | func (md MoveDir) Invert() MoveDir {
  function moveDirFromString (line 1503) | func moveDirFromString(spec string) MoveDir {
  function hintRollback (line 1779) | func hintRollback(err error) (bool, error) {

FILE: catfs/core/linker_test.go
  function TestLinkerInsertRoot (line 23) | func TestLinkerInsertRoot(t *testing.T) {
  function TestLinkerRefs (line 68) | func TestLinkerRefs(t *testing.T) {
  function TestLinkerNested (line 137) | func TestLinkerNested(t *testing.T) {
  function TestLinkerPersistence (line 223) | func TestLinkerPersistence(t *testing.T) {
  function TestCollideSameObjectHash (line 272) | func TestCollideSameObjectHash(t *testing.T) {
  function TestHaveStagedChanges (line 360) | func TestHaveStagedChanges(t *testing.T) {
  function TestFilesByContent (line 392) | func TestFilesByContent(t *testing.T) {
  function TestResolveRef (line 408) | func TestResolveRef(t *testing.T) {
  type iterResult (line 445) | type iterResult struct
  function TestIterAll (line 449) | func TestIterAll(t *testing.T) {
  function TestAtomic (line 496) | func TestAtomic(t *testing.T) {
  function TestCommitByIndex (line 534) | func TestCommitByIndex(t *testing.T) {
  function TestLookupNodeAt (line 582) | func TestLookupNodeAt(t *testing.T) {

FILE: catfs/core/testing.go
  function WithDummyKv (line 19) | func WithDummyKv(t *testing.T, fn func(kv db.Database)) {
  function WithDummyLinker (line 40) | func WithDummyLinker(t *testing.T, fn func(lkr *Linker)) {
  function WithReloadingLinker (line 53) | func WithReloadingLinker(t *testing.T, fn1 func(lkr *Linker), fn2 func(l...
  function WithLinkerPair (line 67) | func WithLinkerPair(t *testing.T, fn func(lkrSrc, lkrDst *Linker)) {
  function AssertDir (line 78) | func AssertDir(t *testing.T, lkr *Linker, path string, shouldExist bool) {
  function MustMkdir (line 96) | func MustMkdir(t *testing.T, lkr *Linker, repoPath string) *n.Directory {
  function MustTouch (line 107) | func MustTouch(t *testing.T, lkr *Linker, touchPath string, seed byte) *...
  function MustMove (line 144) | func MustMove(t *testing.T, lkr *Linker, nd n.ModNode, destPath string) ...
  function MustRemove (line 158) | func MustRemove(t *testing.T, lkr *Linker, nd n.ModNode) n.ModNode {
  function MustCommit (line 172) | func MustCommit(t *testing.T, lkr *Linker, msg string) *n.Commit {
  function MustCommitIfPossible (line 186) | func MustCommitIfPossible(t *testing.T, lkr *Linker, msg string) *n.Comm...
  function MustTouchAndCommit (line 200) | func MustTouchAndCommit(t *testing.T, lkr *Linker, path string, seed byt...
  function MustModify (line 220) | func MustModify(t *testing.T, lkr *Linker, file *n.File, seed int) {
  function MustLookupDirectory (line 245) | func MustLookupDirectory(t *testing.T, lkr *Linker, path string) *n.Dire...

FILE: catfs/db/database.go
  type Batch (line 14) | type Batch interface
  type Database (line 38) | type Database interface
  function CopyKey (line 73) | func CopyKey(db Database, src, dst []string) error {

FILE: catfs/db/database_badger.go
  type BadgerDatabase (line 16) | type BadgerDatabase struct
    method runGC (line 65) | func (bdb *BadgerDatabase) runGC() error {
    method view (line 147) | func (db *BadgerDatabase) view(fn func(txn *badger.Txn) error) error {
    method Get (line 159) | func (db *BadgerDatabase) Get(key ...string) ([]byte, error) {
    method Keys (line 191) | func (db *BadgerDatabase) Keys(prefix ...string) ([][]string, error) {
    method Export (line 223) | func (db *BadgerDatabase) Export(w io.Writer) error {
    method Import (line 232) | func (db *BadgerDatabase) Import(r io.Reader) error {
    method Glob (line 240) | func (db *BadgerDatabase) Glob(prefix []string) ([][]string, error) {
    method Batch (line 271) | func (db *BadgerDatabase) Batch() Batch {
    method batch (line 278) | func (db *BadgerDatabase) batch() Batch {
    method Put (line 288) | func (db *BadgerDatabase) Put(val []byte, key ...string) {
    method withRetry (line 305) | func (db *BadgerDatabase) withRetry(fn func() error) error {
    method Clear (line 324) | func (db *BadgerDatabase) Clear(key ...string) error {
    method Erase (line 365) | func (db *BadgerDatabase) Erase(key ...string) {
    method Flush (line 382) | func (db *BadgerDatabase) Flush() error {
    method Rollback (line 410) | func (db *BadgerDatabase) Rollback() {
    method HaveWrites (line 431) | func (db *BadgerDatabase) HaveWrites() bool {
    method Close (line 439) | func (db *BadgerDatabase) Close() error {
  function NewBadgerDatabase (line 28) | func NewBadgerDatabase(path string) (*BadgerDatabase, error) {

FILE: catfs/db/database_disk.go
  constant debug (line 17) | debug = false
  type DiskDatabase (line 27) | type DiskDatabase struct
    method Flush (line 92) | func (db *DiskDatabase) Flush() error {
    method Rollback (line 128) | func (db *DiskDatabase) Rollback() {
    method Get (line 140) | func (db *DiskDatabase) Get(key ...string) ([]byte, error) {
    method Batch (line 170) | func (db *DiskDatabase) Batch() Batch {
    method Put (line 195) | func (db *DiskDatabase) Put(val []byte, key ...string) {
    method Clear (line 238) | func (db *DiskDatabase) Clear(key ...string) error {
    method Erase (line 296) | func (db *DiskDatabase) Erase(key ...string) {
    method HaveWrites (line 317) | func (db *DiskDatabase) HaveWrites() bool {
    method Keys (line 322) | func (db *DiskDatabase) Keys(prefix ...string) ([][]string, error) {
    method Glob (line 346) | func (db *DiskDatabase) Glob(prefix []string) ([][]string, error) {
    method Export (line 372) | func (db *DiskDatabase) Export(w io.Writer) error {
    method Import (line 378) | func (db *DiskDatabase) Import(r io.Reader) error {
    method Close (line 383) | func (db *DiskDatabase) Close() error {
  function NewDiskDatabase (line 36) | func NewDiskDatabase(basePath string) (*DiskDatabase, error) {
  function fixDirectoryKeys (line 44) | func fixDirectoryKeys(key []string) string {
  function reverseDirectoryKeys (line 75) | func reverseDirectoryKeys(key string) []string {
  function removeNonDirs (line 175) | func removeNonDirs(path string) error {

FILE: catfs/db/database_memory.go
  type MemoryDatabase (line 12) | type MemoryDatabase struct
    method Batch (line 39) | func (mdb *MemoryDatabase) Batch() Batch {
    method Flush (line 49) | func (mdb *MemoryDatabase) Flush() error {
    method Rollback (line 59) | func (mdb *MemoryDatabase) Rollback() {
    method Get (line 69) | func (mdb *MemoryDatabase) Get(key ...string) ([]byte, error) {
    method Put (line 79) | func (mdb *MemoryDatabase) Put(data []byte, key ...string) {
    method Clear (line 85) | func (mdb *MemoryDatabase) Clear(key ...string) error {
    method Erase (line 98) | func (mdb *MemoryDatabase) Erase(key ...string) {
    method Keys (line 105) | func (mdb *MemoryDatabase) Keys(prefix ...string) ([][]string, error) {
    method HaveWrites (line 133) | func (mdb *MemoryDatabase) HaveWrites() bool {
    method Glob (line 138) | func (mdb *MemoryDatabase) Glob(prefix []string) ([][]string, error) {
    method Export (line 168) | func (mdb *MemoryDatabase) Export(w io.Writer) error {
    method Import (line 173) | func (mdb *MemoryDatabase) Import(r io.Reader) error {
    method Close (line 178) | func (mdb *MemoryDatabase) Close() error {
  function shallowCopyMap (line 20) | func shallowCopyMap(src map[string][]byte) map[string][]byte {
  function NewMemoryDatabase (line 32) | func NewMemoryDatabase() *MemoryDatabase {
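The three backends above (disk, badger, memory) share one contract: `Put` stages writes, `Flush` commits them, `Rollback` discards them, and `HaveWrites` reports pending state. A minimal sketch of that staging discipline, with illustrative names rather than brig's actual types:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

var errNoSuchKey = errors.New("no such key")

// memDB sketches the staged-write contract of catfs/db's Database backends.
type memDB struct {
	data   map[string][]byte // committed state
	staged map[string][]byte // writes since the last Flush
}

func newMemDB() *memDB {
	return &memDB{data: map[string][]byte{}, staged: map[string][]byte{}}
}

// Put stages a write; keys are path-like segments, as in brig's backends.
func (db *memDB) Put(val []byte, key ...string) {
	db.staged[strings.Join(key, "/")] = val
}

// Get prefers staged data over committed data.
func (db *memDB) Get(key ...string) ([]byte, error) {
	k := strings.Join(key, "/")
	if v, ok := db.staged[k]; ok {
		return v, nil
	}
	if v, ok := db.data[k]; ok {
		return v, nil
	}
	return nil, errNoSuchKey
}

// HaveWrites reports whether uncommitted writes exist.
func (db *memDB) HaveWrites() bool { return len(db.staged) > 0 }

// Flush commits all staged writes.
func (db *memDB) Flush() error {
	for k, v := range db.staged {
		db.data[k] = v
	}
	db.staged = map[string][]byte{}
	return nil
}

// Rollback throws all staged writes away.
func (db *memDB) Rollback() { db.staged = map[string][]byte{} }

func main() {
	db := newMemDB()
	db.Put([]byte("x"), "stage", "objects", "abc")
	fmt.Println(db.HaveWrites()) // true
	db.Rollback()
	_, err := db.Get("stage", "objects", "abc")
	fmt.Println(err != nil) // true: the write was rolled back
}
```

The real backends layer `Batch()` on top of this so nested batches only commit once the outermost one flushes.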

FILE: catfs/db/database_test.go
  function withDiskDatabase (line 16) | func withDiskDatabase(fn func(db *DiskDatabase)) error {
  function withBadgerDatabase (line 29) | func withBadgerDatabase(fn func(db *BadgerDatabase)) error {
  function withMemDatabase (line 42) | func withMemDatabase(fn func(db *MemoryDatabase)) error {
  function withDbByName (line 48) | func withDbByName(name string, fn func(db Database)) error {
  function withDbsByName (line 67) | func withDbsByName(name string, fn func(db1, db2 Database)) error {
  function TestDatabase (line 77) | func TestDatabase(t *testing.T) {
  function TestGCRace (line 92) | func TestGCRace(t *testing.T) {
  function testDatabaseTwoDbs (line 109) | func testDatabaseTwoDbs(t *testing.T, name string) {
  function testDatabaseOneDb (line 131) | func testDatabaseOneDb(t *testing.T, name string) {
  function testErase (line 179) | func testErase(t *testing.T, db Database) {
  function testKeys (line 196) | func testKeys(t *testing.T, db Database) {
  function testRollback (line 228) | func testRollback(t *testing.T, db Database) {
  function testRecursiveBatch (line 252) | func testRecursiveBatch(t *testing.T, db Database) {
  function testPutAndGet (line 274) | func testPutAndGet(t *testing.T, db Database) {
  function testInvalidAccess (line 296) | func testInvalidAccess(t *testing.T, db Database) {
  function testClear (line 302) | func testClear(t *testing.T, db Database) {
  function testClearPrefix (line 328) | func testClearPrefix(t *testing.T, db Database) {
  function testGlob (line 370) | func testGlob(t *testing.T, db Database) {
  function testExportImport (line 389) | func testExportImport(t *testing.T, db1, db2 Database) {
  function TestLargeBatch (line 424) | func TestLargeBatch(t *testing.T) {
  function BenchmarkDatabase (line 450) | func BenchmarkDatabase(b *testing.B) {
  function benchmarkDatabasePut (line 471) | func benchmarkDatabasePut(b *testing.B, db Database) {
  function benchmarkDatabaseGet (line 480) | func benchmarkDatabaseGet(b *testing.B, db Database) {

FILE: catfs/errors/errors.go
  type ErrNoSuchRef (line 29) | type ErrNoSuchRef
    method Error (line 31) | func (e ErrNoSuchRef) Error() string {
  function IsErrNoSuchRef (line 36) | func IsErrNoSuchRef(err error) bool {
  type ErrNoSuchCommitIndex (line 44) | type ErrNoSuchCommitIndex struct
    method Error (line 48) | func (e ErrNoSuchCommitIndex) Error() string {
  function NoSuchCommitIndex (line 53) | func NoSuchCommitIndex(ind int64) error {
  function IsErrNoSuchCommitIndex (line 58) | func IsErrNoSuchCommitIndex(err error) bool {
  type errNoSuchFile (line 65) | type errNoSuchFile struct
    method Error (line 70) | func (e *errNoSuchFile) Error() string {
  function NoSuchFile (line 77) | func NoSuchFile(path string) error {
  function IsNoSuchFileError (line 82) | func IsNoSuchFileError(err error) bool {

FILE: catfs/fs.go
  constant abiVersion (line 36) | abiVersion                 = 1
  constant defaultEncryptionKeyLength (line 37) | defaultEncryptionKeyLength = 32
  function emptyFileEncryptionKey (line 40) | func emptyFileEncryptionKey() []byte {
  type HintManager (line 45) | type HintManager interface
  type defaultHintManager (line 56) | type defaultHintManager struct
    method Lookup (line 58) | func (dhm defaultHintManager) Lookup(path string) hints.Hint {
    method Set (line 62) | func (dhm defaultHintManager) Set(path string, hint hints.Hint) error {
  type FS (line 74) | type FS struct
    method nodeToStat (line 250) | func (fs *FS) nodeToStat(nd n.Node) *StatInfo {
    method handleGcEvent (line 315) | func (fs *FS) handleGcEvent(nd n.Node) bool {
    method doGcRun (line 342) | func (fs *FS) doGcRun() {
    method gcLoop (line 423) | func (fs *FS) gcLoop() {
    method autoCommitLoop (line 442) | func (fs *FS) autoCommitLoop() {
    method repinLoop (line 469) | func (fs *FS) repinLoop() {
    method Close (line 511) | func (fs *FS) Close() error {
    method Export (line 527) | func (fs *FS) Export(w io.Writer) error {
    method Import (line 535) | func (fs *FS) Import(r io.Reader) error {
    method Move (line 554) | func (fs *FS) Move(src, dst string) error {
    method Copy (line 572) | func (fs *FS) Copy(src, dst string) error {
    method Mkdir (line 591) | func (fs *FS) Mkdir(dir string, createParents bool) error {
    method Remove (line 606) | func (fs *FS) Remove(path string) error {
    method Stat (line 625) | func (fs *FS) Stat(path string) (*StatInfo, error) {
    method Filter (line 643) | func (fs *FS) Filter(root, query string) ([]*StatInfo, error) {
    method List (line 701) | func (fs *FS) List(root string, maxDepth int) ([]*StatInfo, error) {
    method preCache (line 772) | func (fs *FS) preCache(hash h.Hash) error {
    method preCacheInBackground (line 782) | func (fs *FS) preCacheInBackground(hash h.Hash) {
    method Pin (line 795) | func (fs *FS) Pin(path, rev string, explicit bool) error {
    method Unpin (line 800) | func (fs *FS) Unpin(path, rev string, explicit bool) error {
    method doPin (line 804) | func (fs *FS) doPin(path, rev string, op func(nd n.Node, explicit bool...
    method IsPinned (line 842) | func (fs *FS) IsPinned(path string) (bool, bool, error) {
    method Touch (line 868) | func (fs *FS) Touch(path string) error {
    method Truncate (line 913) | func (fs *FS) Truncate(path string, size uint64) error {
    method renewPins (line 934) | func (fs *FS) renewPins(oldFile, newFile *n.File) error {
    method preStageKeyGen (line 963) | func (fs *FS) preStageKeyGen(path string) ([]byte, error) {
    method Stage (line 1012) | func (fs *FS) Stage(path string, r io.Reader) error {
    method stageWithKey (line 1027) | func (fs *FS) stageWithKey(path string, r io.Reader, key []byte) error {
    method getTarableEntries (line 1101) | func (fs *FS) getTarableEntries(root string, filter func(node *StatInf...
    method Tar (line 1161) | func (fs *FS) Tar(root string, w io.Writer, filter func(node *StatInfo...
    method Cat (line 1214) | func (fs *FS) Cat(path string) (mio.Stream, error) {
    method catHash (line 1241) | func (fs *FS) catHash(backendHash h.Hash, key []byte, size uint64, isR...
    method Open (line 1260) | func (fs *FS) Open(path string) (*Handle, error) {
    method MakeCommit (line 1284) | func (fs *FS) MakeCommit(msg string) error {
    method isMove (line 1296) | func (fs *FS) isMove(nd n.ModNode) (bool, error) {
    method DeletedNodes (line 1323) | func (fs *FS) DeletedNodes(root string) ([]*StatInfo, error) {
    method Undelete (line 1365) | func (fs *FS) Undelete(root string) error {
    method Head (line 1386) | func (fs *FS) Head() (string, error) {
    method Curr (line 1399) | func (fs *FS) Curr() (string, error) {
    method History (line 1427) | func (fs *FS) History(path string) ([]Change, error) {
    method buildSyncCfg (line 1480) | func (fs *FS) buildSyncCfg() (*vcs.SyncOptions, error) {
    method Sync (line 1627) | func (fs *FS) Sync(remote *FS, options ...SyncOption) error {
    method MakeDiff (line 1650) | func (fs *FS) MakeDiff(remote *FS, headRevOwn, headRevRemote string) (...
    method buildCommitHashToRefTable (line 1720) | func (fs *FS) buildCommitHashToRefTable() (map[string][]string, error) {
    method Log (line 1746) | func (fs *FS) Log(head string, fn func(c *Commit) error) error {
    method Reset (line 1778) | func (fs *FS) Reset(path, rev string) error {
    method Checkout (line 1822) | func (fs *FS) Checkout(rev string, force bool) error {
    method checkout (line 1829) | func (fs *FS) checkout(rev string, force bool) error {
    method Tag (line 1846) | func (fs *FS) Tag(rev, name string) error {
    method RemoveTag (line 1859) | func (fs *FS) RemoveTag(name string) error {
    method FilesByContent (line 1869) | func (fs *FS) FilesByContent(contents []h.Hash) (map[string]StatInfo, ...
    method ScheduleGCRun (line 1888) | func (fs *FS) ScheduleGCRun() {
    method writeLastPatchIndex (line 1896) | func (fs *FS) writeLastPatchIndex(index int64) error {
    method autoCommitStagedChanges (line 1901) | func (fs *FS) autoCommitStagedChanges(remoteName string) error {
    method MakePatch (line 1936) | func (fs *FS) MakePatch(fromRev string, folders []string, remoteName s...
    method MakePatches (line 1964) | func (fs *FS) MakePatches(fromRev string, folders []string, remoteName...
    method ApplyPatch (line 1991) | func (fs *FS) ApplyPatch(data []byte) error {
    method ApplyPatches (line 2009) | func (fs *FS) ApplyPatches(data []byte) error {
    method applyPatches (line 2026) | func (fs *FS) applyPatches(patches vcs.Patches) error {
    method LastPatchIndex (line 2062) | func (fs *FS) LastPatchIndex() (int64, error) {
    method CommitInfo (line 2081) | func (fs *FS) CommitInfo(rev string) (*Commit, error) {
    method HaveStagedChanges (line 2099) | func (fs *FS) HaveStagedChanges() (bool, error) {
    method IsCached (line 2107) | func (fs *FS) IsCached(path string) (bool, error) {
    method Hints (line 2156) | func (fs *FS) Hints() HintManager {
  type StatInfo (line 122) | type StatInfo struct
  type DiffPair (line 167) | type DiffPair struct
  type Diff (line 173) | type Diff struct
  type Commit (line 197) | type Commit struct
  type Change (line 212) | type Change struct
  type ExplicitPin (line 241) | type ExplicitPin struct
  function lookupFileOrDir (line 297) | func lookupFileOrDir(lkr *c.Linker, path string) (n.ModNode, error) {
  function NewFilesystem (line 360) | func NewFilesystem(
  function prefixSlash (line 858) | func prefixSlash(s string) string {
  type tarEntry (line 1095) | type tarEntry struct
  function commitToExternal (line 1411) | func commitToExternal(cmt *n.Commit, hashToRef map[string][]string) *Com...
  type SyncOption (line 1568) | type SyncOption
  function SyncOptMessage (line 1572) | func SyncOptMessage(msg string) SyncOption {
  function SyncOptConflictStrategy (line 1580) | func SyncOptConflictStrategy(strategy string) SyncOption {
  function SyncOptReadOnlyFolders (line 1592) | func SyncOptReadOnlyFolders(folders []string) SyncOption {
  function SyncOptConflictgStrategyPerFolder (line 1607) | func SyncOptConflictgStrategyPerFolder(strategies map[string]string) Syn...
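`Sync` takes variadic `SyncOption` values (`SyncOptMessage`, `SyncOptConflictStrategy`, ...), which is Go's functional-options pattern. A sketch under assumed field names (brig's actual `vcs.SyncOptions` fields may differ):

```go
package main

import "fmt"

// syncConfig stands in for the options struct Sync builds internally.
type syncConfig struct {
	message          string
	conflictStrategy string
}

// SyncOption mutates the config before the sync runs.
type SyncOption func(*syncConfig)

// SyncOptMessage sets the commit message used for the sync commit.
func SyncOptMessage(msg string) SyncOption {
	return func(c *syncConfig) { c.message = msg }
}

// SyncOptConflictStrategy picks how conflicting files are handled.
func SyncOptConflictStrategy(strategy string) SyncOption {
	return func(c *syncConfig) { c.conflictStrategy = strategy }
}

// Sync applies each option over the defaults.
func Sync(options ...SyncOption) syncConfig {
	cfg := syncConfig{conflictStrategy: "marker"} // an assumed default
	for _, opt := range options {
		opt(&cfg)
	}
	return cfg
}

func main() {
	cfg := Sync(SyncOptMessage("merge with bob"), SyncOptConflictStrategy("ignore"))
	fmt.Println(cfg.message, cfg.conflictStrategy) // merge with bob ignore
}
```

The advantage over a plain options struct is that callers only name what they change, and defaults live in one place.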

FILE: catfs/fs_test.go
  function init (line 30) | func init() {
  function withDummyFSReadOnly (line 34) | func withDummyFSReadOnly(t *testing.T, readOnly bool, fn func(fs *FS)) {
  function withDummyFS (line 79) | func withDummyFS(t *testing.T, fn func(fs *FS)) {
  function TestStat (line 83) | func TestStat(t *testing.T) {
  function TestLogAndTag (line 117) | func TestLogAndTag(t *testing.T) {
  function TestCat (line 172) | func TestCat(t *testing.T) {
  function TestStageBasic (line 218) | func TestStageBasic(t *testing.T) {
  function TestHistory (line 281) | func TestHistory(t *testing.T) {
  function mustReadPath (line 323) | func mustReadPath(t *testing.T, fs *FS, path string) []byte {
  function TestReset (line 333) | func TestReset(t *testing.T) {
  function TestCheckout (line 375) | func TestCheckout(t *testing.T) {
  function TestExportImport (line 421) | func TestExportImport(t *testing.T) {
  function TestSync (line 454) | func TestSync(t *testing.T) {
  function TestMakeDiff (line 487) | func TestMakeDiff(t *testing.T) {
  function TestPin (line 528) | func TestPin(t *testing.T) {
  function TestMkdir (line 577) | func TestMkdir(t *testing.T) {
  function TestMove (line 606) | func TestMove(t *testing.T) {
  function TestTouch (line 623) | func TestTouch(t *testing.T) {
  function TestHead (line 671) | func TestHead(t *testing.T) {
  function TestList (line 690) | func TestList(t *testing.T) {
  function TestTag (line 725) | func TestTag(t *testing.T) {
  function TestStageUnmodified (line 748) | func TestStageUnmodified(t *testing.T) {
  function TestTruncate (line 767) | func TestTruncate(t *testing.T) {
  function TestChangingCompressAlgos (line 796) | func TestChangingCompressAlgos(t *testing.T) {
  function TestPatch (line 819) | func TestPatch(t *testing.T) {
  function TestTar (line 860) | func TestTar(t *testing.T) {
  function TestReadOnly (line 897) | func TestReadOnly(t *testing.T) {
  function TestDeletedNodesDirectory (line 904) | func TestDeletedNodesDirectory(t *testing.T) {
  function TestDeletedNodesFile (line 922) | func TestDeletedNodesFile(t *testing.T) {
  function TestUndeleteFile (line 941) | func TestUndeleteFile(t *testing.T) {
  function TestUndeleteDirectory (line 968) | func TestUndeleteDirectory(t *testing.T) {

FILE: catfs/handle.go
  type Handle (line 23) | type Handle struct
    method initStreamIfNeeded (line 42) | func (hdl *Handle) initStreamIfNeeded() error {
    method Read (line 80) | func (hdl *Handle) Read(buf []byte) (int, error) {
    method ReadAt (line 96) | func (hdl *Handle) ReadAt(buf []byte, off int64) (int, error) {
    method Write (line 113) | func (hdl *Handle) Write(buf []byte) (int, error) {
    method WriteAt (line 135) | func (hdl *Handle) WriteAt(buf []byte, off int64) (n int, err error) {
    method Seek (line 157) | func (hdl *Handle) Seek(offset int64, whence int) (int64, error) {
    method Truncate (line 178) | func (hdl *Handle) Truncate(size uint64) error {
    method flush (line 203) | func (hdl *Handle) flush() error {
    method Flush (line 243) | func (hdl *Handle) Flush() error {
    method Close (line 260) | func (hdl *Handle) Close() error {
    method Path (line 273) | func (hdl *Handle) Path() string {
  function newHandle (line 34) | func newHandle(fs *FS, file *n.File, readOnly bool) *Handle {

FILE: catfs/handle_test.go
  function TestOpenRead (line 15) | func TestOpenRead(t *testing.T) {
  function TestOpenWrite (line 31) | func TestOpenWrite(t *testing.T) {
  function TestOpenTruncate (line 61) | func TestOpenTruncate(t *testing.T) {
  function TestOpenOpAfterClose (line 98) | func TestOpenOpAfterClose(t *testing.T) {
  function TestOpenExtend (line 118) | func TestOpenExtend(t *testing.T) {
  function testOpenExtend (line 130) | func testOpenExtend(t *testing.T, pos int64, whence int) {
  function TestHandleFuseLikeRead (line 164) | func TestHandleFuseLikeRead(t *testing.T) {
  function testHandleFuseLikeRead (line 178) | func testHandleFuseLikeRead(t *testing.T, fileSize, blockSize int) {
  function TestHandleChangeCompression (line 221) | func TestHandleChangeCompression(t *testing.T) {

FILE: catfs/mio/chunkbuf/chunkbuf.go
  type ChunkBuffer (line 10) | type ChunkBuffer struct
    method Write (line 21) | func (c *ChunkBuffer) Write(p []byte) (int, error) {
    method Reset (line 29) | func (c *ChunkBuffer) Reset(data []byte) {
    method Len (line 37) | func (c *ChunkBuffer) Len() int {
    method Read (line 41) | func (c *ChunkBuffer) Read(p []byte) (int, error) {
    method Seek (line 52) | func (c *ChunkBuffer) Seek(offset int64, whence int) (int64, error) {
    method Close (line 67) | func (c *ChunkBuffer) Close() error {
    method WriteTo (line 72) | func (c *ChunkBuffer) WriteTo(w io.Writer) (int64, error) {
  constant maxChunkSize (line 18) | maxChunkSize = 64 * 1024
  function NewChunkBuffer (line 85) | func NewChunkBuffer(data []byte) *ChunkBuffer {

FILE: catfs/mio/chunkbuf/chunkbuf_test.go
  function TestChunkBufBasic (line 13) | func TestChunkBufBasic(t *testing.T) {
  function TestChunkBufEOF (line 22) | func TestChunkBufEOF(t *testing.T) {
  function TestChunkBufWriteTo (line 33) | func TestChunkBufWriteTo(t *testing.T) {
  function TestChunkBufSeek (line 44) | func TestChunkBufSeek(t *testing.T) {
  function TestChunkBufWrite (line 90) | func TestChunkBufWrite(t *testing.T) {

FILE: catfs/mio/compress/algorithm.go
  constant AlgoUnknown (line 19) | AlgoUnknown = AlgorithmType(iota)
  constant AlgoSnappy (line 23) | AlgoSnappy
  constant AlgoLZ4 (line 27) | AlgoLZ4
  constant AlgoZstd (line 31) | AlgoZstd
  type AlgorithmType (line 35) | type AlgorithmType
    method IsValid (line 38) | func (at AlgorithmType) IsValid() bool {
    method String (line 47) | func (at AlgorithmType) String() string {
  type Algorithm (line 57) | type Algorithm interface
  type snappyAlgo (line 74) | type snappyAlgo struct
    method Encode (line 142) | func (a snappyAlgo) Encode(dst, src []byte) ([]byte, error) {
    method Decode (line 146) | func (a snappyAlgo) Decode(dst, src []byte) ([]byte, error) {
    method MaxEncodeBufferSize (line 150) | func (a snappyAlgo) MaxEncodeBufferSize() int {
  type lz4Algo (line 76) | type lz4Algo struct
    method Encode (line 156) | func (a *lz4Algo) Encode(dst, src []byte) ([]byte, error) {
    method Decode (line 168) | func (a *lz4Algo) Decode(dst, src []byte) ([]byte, error) {
    method MaxEncodeBufferSize (line 173) | func (a *lz4Algo) MaxEncodeBufferSize() int {
  type zstdAlgo (line 80) | type zstdAlgo struct
    method Encode (line 179) | func (a zstdAlgo) Encode(dst, src []byte) ([]byte, error) {
    method Decode (line 183) | func (a zstdAlgo) Decode(dst, src []byte) ([]byte, error) {
    method MaxEncodeBufferSize (line 187) | func (a zstdAlgo) MaxEncodeBufferSize() int {
  function init (line 87) | func init() {
  function algorithmFromType (line 192) | func algorithmFromType(a AlgorithmType) (Algorithm, error) {
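The file above registers snappy, lz4 and zstd behind a common `Algorithm` interface, resolved at runtime via `algorithmFromType`. A sketch of that registry pattern using stdlib DEFLATE as a stand-in codec (brig's actual algorithms are third-party libraries):

```go
package main

import (
	"bytes"
	"compress/flate"
	"fmt"
	"io"
)

type AlgorithmType byte

const (
	AlgoUnknown AlgorithmType = iota
	AlgoFlate                 // stand-in for AlgoSnappy/AlgoLZ4/AlgoZstd
)

// Algorithm abstracts one codec; the real interface also exposes
// MaxEncodeBufferSize for preallocation.
type Algorithm interface {
	Encode(src []byte) ([]byte, error)
	Decode(src []byte) ([]byte, error)
}

type flateAlgo struct{}

func (flateAlgo) Encode(src []byte) ([]byte, error) {
	var buf bytes.Buffer
	w, err := flate.NewWriter(&buf, flate.DefaultCompression)
	if err != nil {
		return nil, err
	}
	if _, err := w.Write(src); err != nil {
		return nil, err
	}
	if err := w.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

func (flateAlgo) Decode(src []byte) ([]byte, error) {
	r := flate.NewReader(bytes.NewReader(src))
	defer r.Close()
	return io.ReadAll(r)
}

// algoMap is filled at init time, mirroring the package's init().
var algoMap = map[AlgorithmType]Algorithm{}

func init() { algoMap[AlgoFlate] = flateAlgo{} }

func algorithmFromType(t AlgorithmType) (Algorithm, error) {
	if a, ok := algoMap[t]; ok {
		return a, nil
	}
	return nil, fmt.Errorf("unknown algorithm type: %d", t)
}

func main() {
	algo, _ := algorithmFromType(AlgoFlate)
	packed, _ := algo.Encode([]byte("hello hello hello"))
	plain, _ := algo.Decode(packed)
	fmt.Println(string(plain)) // hello hello hello
}
```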

FILE: catfs/mio/compress/compress_test.go
  function openDest (line 22) | func openDest(t *testing.T, dest string) *os.File {
  function openSrc (line 33) | func openSrc(t *testing.T, src string) *os.File {
  function createTempFile (line 41) | func createTempFile(t *testing.T) string {
  constant C64K (line 50) | C64K = 64 * 1024
  constant C32K (line 51) | C32K = 32 * 1024
  function TestCompressDecompress (line 54) | func TestCompressDecompress(t *testing.T) {
  function testCompressDecompress (line 75) | func testCompressDecompress(t *testing.T, size int64, algo AlgorithmType...
  function TestSeek (line 132) | func TestSeek(t *testing.T) {
  function testSeek (line 155) | func testSeek(t *testing.T, size, offset int64, algo AlgorithmType, useR...
  function TestReadItAllTwice (line 231) | func TestReadItAllTwice(t *testing.T) {
  function TestReadFuseLike (line 258) | func TestReadFuseLike(t *testing.T) {
  function TestCheckSize (line 300) | func TestCheckSize(t *testing.T) {

FILE: catfs/mio/compress/header.go
  constant maxChunkSize (line 35) | maxChunkSize   = 64 * 1024
  constant indexChunkSize (line 36) | indexChunkSize = 16
  constant trailerSize (line 37) | trailerSize    = 12
  constant headerSize (line 38) | headerSize     = 12
  constant currentVersion (line 39) | currentVersion = 1
  type record (line 45) | type record struct
    method marshal (line 66) | func (rc *record) marshal(buf []byte) {
    method unmarshal (line 71) | func (rc *record) unmarshal(buf []byte) {
  type trailer (line 51) | type trailer struct
    method marshal (line 56) | func (t *trailer) marshal(buf []byte) {
    method unmarshal (line 61) | func (t *trailer) unmarshal(buf []byte) {
  type header (line 76) | type header struct
  function makeHeader (line 81) | func makeHeader(algo AlgorithmType, version byte) []byte {
  function readHeader (line 92) | func readHeader(bheader []byte) (*header, error) {
  function Pack (line 124) | func Pack(data []byte, algo AlgorithmType) ([]byte, error) {
  function Unpack (line 145) | func Unpack(data []byte) ([]byte, error) {
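`makeHeader`/`readHeader` above write and validate a fixed 12-byte stream header (`headerSize = 12`) carrying the algorithm and format version. A sketch of such a fixed-size header; the magic bytes and field layout here are invented for illustration, not brig's actual wire format:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"errors"
	"fmt"
)

const headerSize = 12

// magic identifies the stream; illustrative value, not brig's.
var magic = []byte{0xde, 0xad, 0xbe, 0xef}

type header struct {
	version byte
	algo    byte
}

// makeHeader lays the fields into a fixed 12-byte buffer.
func makeHeader(algo, version byte) []byte {
	buf := make([]byte, headerSize)
	copy(buf, magic)
	buf[4] = version
	buf[5] = algo
	// remaining bytes reserved; binary.LittleEndian shown for wider fields:
	binary.LittleEndian.PutUint32(buf[8:], 0)
	return buf
}

// readHeader validates the magic and pulls the fields back out.
func readHeader(buf []byte) (*header, error) {
	if len(buf) < headerSize || !bytes.Equal(buf[:4], magic) {
		return nil, errors.New("invalid header")
	}
	return &header{version: buf[4], algo: buf[5]}, nil
}

func main() {
	hdr, err := readHeader(makeHeader(2, 1))
	fmt.Println(err == nil, hdr.version, hdr.algo) // true 1 2
}
```

A fixed-size header keeps `readHeader` a single bounded read, which matters when the reader must stay seekable.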

FILE: catfs/mio/compress/heuristic.go
  constant HeaderSizeThreshold (line 25) | HeaderSizeThreshold = 2048
  function guessMime (line 28) | func guessMime(path string, buf []byte) string {
  function isCompressible (line 49) | func isCompressible(mimetype string) bool {
  function GuessAlgorithm (line 59) | func GuessAlgorithm(path string, header []byte) (AlgorithmType, error) {
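`GuessAlgorithm` sniffs the mime type from the first bytes of a file (`guessMime`) and skips compression for content that is already compressed (`isCompressible`). A sketch of that heuristic using stdlib content sniffing; the mime lists here are illustrative, not the package's actual tables:

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

// guessMime sniffs the content type from a leading byte sample.
func guessMime(buf []byte) string {
	return http.DetectContentType(buf)
}

// isCompressible: text-ish content compresses well; media formats
// (png, jpeg, mp4, ...) are compressed already and not worth the CPU.
func isCompressible(mimetype string) bool {
	if strings.HasPrefix(mimetype, "text/") {
		return true
	}
	switch mimetype {
	case "application/json", "application/xml":
		return true
	}
	return false
}

func main() {
	buf := []byte("hello, this looks like plain text")
	mime := guessMime(buf)
	// DetectContentType appends a charset, e.g. "text/plain; charset=utf-8"
	fmt.Println(isCompressible(mime)) // true
}
```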

FILE: catfs/mio/compress/heuristic_test.go
  type testCase (line 9) | type testCase struct
  function TestChooseCompressAlgo (line 43) | func TestChooseCompressAlgo(t *testing.T) {

FILE: catfs/mio/compress/reader.go
  type Reader (line 12) | type Reader struct
    method Seek (line 45) | func (r *Reader) Seek(destOff int64, whence int) (int64, error) {
    method chunkLookup (line 94) | func (r *Reader) chunkLookup(currOff int64, isRawOff bool) (*record, *...
    method parseTrailerIfNeeded (line 115) | func (r *Reader) parseTrailerIfNeeded() error {
    method WriteTo (line 204) | func (r *Reader) WriteTo(w io.Writer) (int64, error) {
    method Read (line 236) | func (r *Reader) Read(p []byte) (int, error) {
    method fixZipChunk (line 270) | func (r *Reader) fixZipChunk() (int64, error) {
    method readZipChunk (line 294) | func (r *Reader) readZipChunk() ([]byte, error) {
  function NewReader (line 328) | func NewReader(r io.ReadSeeker) *Reader {

FILE: catfs/mio/compress/writer.go
  type Writer (line 11) | type Writer struct
    method addRecordToIndex (line 42) | func (w *Writer) addRecordToIndex() {
    method flushBuffer (line 46) | func (w *Writer) flushBuffer(data []byte) error {
    method writeHeaderIfNeeded (line 72) | func (w *Writer) writeHeaderIfNeeded() error {
    method ReadFrom (line 87) | func (w *Writer) ReadFrom(r io.Reader) (n int64, err error) {
    method Write (line 115) | func (w *Writer) Write(p []byte) (n int, err error) {
    method Close (line 156) | func (w *Writer) Close() error {
  function NewWriter (line 138) | func NewWriter(w io.Writer, algoType AlgorithmType) (*Writer, error) {
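The compress `Writer` appends an index record per chunk (`addRecordToIndex`), and the `Reader`'s `chunkLookup` uses that index to translate a raw-offset `Seek` into "which compressed chunk do I decode". A sketch of that lookup with a binary search; record fields are assumed names:

```go
package main

import (
	"fmt"
	"sort"
)

// record maps an uncompressed offset to the compressed chunk holding it.
type record struct {
	rawOff int64 // offset in the uncompressed stream
	zipOff int64 // offset of the chunk in the compressed stream
}

// chunkLookup returns the record of the chunk containing rawOff,
// assuming the index is sorted by rawOff.
func chunkLookup(index []record, rawOff int64) record {
	i := sort.Search(len(index), func(i int) bool {
		return index[i].rawOff > rawOff
	})
	if i == 0 {
		return index[0]
	}
	return index[i-1]
}

func main() {
	// Pretend three 64K raw chunks compressed to varying sizes:
	index := []record{
		{rawOff: 0, zipOff: 0},
		{rawOff: 65536, zipOff: 31000},
		{rawOff: 131072, zipOff: 60500},
	}
	rec := chunkLookup(index, 70000) // falls inside the second chunk
	fmt.Println(rec.zipOff)          // 31000
}
```

This is why the format carries a trailer: the index is only complete once the last chunk is written, so the reader parses it lazily (`parseTrailerIfNeeded`) from the end of the stream.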

FILE: catfs/mio/encrypt/format.go
  type Flags (line 45) | type Flags
  constant FlagEmpty (line 50) | FlagEmpty = Flags(0)
  constant FlagEncryptAES256GCM (line 54) | FlagEncryptAES256GCM = Flags(1) << iota
  constant FlagEncryptChaCha20 (line 58) | FlagEncryptChaCha20
  constant flagReserved1 (line 62) | flagReserved1
  constant flagReserved2 (line 63) | flagReserved2
  constant flagReserved3 (line 64) | flagReserved3
  constant flagReserved4 (line 65) | flagReserved4
  constant flagReserved5 (line 66) | flagReserved5
  constant flagReserved6 (line 67) | flagReserved6
  constant FlagCompressedInside (line 71) | FlagCompressedInside
  constant macSize (line 77) | macSize = 16
  constant version (line 80) | version = 1
  constant headerSize (line 83) | headerSize = 20 + macSize
  constant defaultMaxBlockSize (line 86) | defaultMaxBlockSize = 64 * 1024
  constant defaultDecBufferSize (line 88) | defaultDecBufferSize = defaultMaxBlockSize
  constant defaultEncBufferSize (line 89) | defaultEncBufferSize = defaultMaxBlockSize + 40
  function GenerateHeader (line 106) | func GenerateHeader(key []byte, maxBlockSize int64, flags Flags) []byte {
  type HeaderInfo (line 146) | type HeaderInfo struct
  function cipherTypeBitFromFlags (line 182) | func cipherTypeBitFromFlags(flags Flags) (Flags, error) {
  function ParseHeader (line 212) | func ParseHeader(header, key []byte) (*HeaderInfo, error) {
  function createAEADWorker (line 259) | func createAEADWorker(cipherType Flags, key []byte) (cipher.AEAD, error) {
  type aeadCommon (line 275) | type aeadCommon struct
    method initAeadCommon (line 291) | func (c *aeadCommon) initAeadCommon(key []byte, cipherBit Flags, maxBl...
  function Encrypt (line 306) | func Encrypt(key []byte, source io.Reader, dest io.Writer, flags Flags) ...
  function Decrypt (line 322) | func Decrypt(key []byte, source io.Reader, dest io.Writer) (int64, error) {
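`FlagEncryptAES256GCM` points at the AEAD core of this format: a 32-byte key and AES-GCM sealing each 64 KiB block with its own nonce. A sketch of one block seal/open with the stdlib; brig additionally frames many such blocks behind the versioned header above, which this sketch omits:

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// sealBlock encrypts and authenticates one block with AES-256-GCM.
func sealBlock(key, plaintext []byte) (nonce, ciphertext []byte, err error) {
	block, err := aes.NewCipher(key) // 32-byte key selects AES-256
	if err != nil {
		return nil, nil, err
	}
	aead, err := cipher.NewGCM(block)
	if err != nil {
		return nil, nil, err
	}
	nonce = make([]byte, aead.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, nil, err
	}
	return nonce, aead.Seal(nil, nonce, plaintext, nil), nil
}

// openBlock decrypts one block, failing if the ciphertext was tampered with.
func openBlock(key, nonce, ciphertext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	aead, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	return aead.Open(nil, nonce, ciphertext, nil)
}

func main() {
	key := make([]byte, 32) // all-zero key for demonstration only
	nonce, ct, _ := sealBlock(key, []byte("secret"))
	pt, _ := openBlock(key, nonce, ct)
	fmt.Println(string(pt)) // secret
}
```

Per-block sealing is what makes the encrypted stream seekable: the reader can jump to any block boundary and decrypt just that block.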

FILE: catfs/mio/encrypt/format_test.go
  constant ExtraDebug (line 19) | ExtraDebug = false
  function openFiles (line 21) | func openFiles(from, to string) (*os.File, *os.File, error) {
  function encryptFile (line 36) | func encryptFile(key []byte, from, to string) (n int64, outErr error) {
  function decryptFile (line 55) | func decryptFile(key []byte, from, to string) (n int64, outErr error) {
  function remover (line 76) | func remover(t *testing.T, path string) {
  function testSimpleEncDec (line 82) | func testSimpleEncDec(t *testing.T, size int64) {
  function TestSimpleEncDec (line 117) | func TestSimpleEncDec(t *testing.T) {
  type seekTest (line 147) | type seekTest struct
  function BenchmarkEncDec (line 172) | func BenchmarkEncDec(b *testing.B) {
  function TestSeek (line 178) | func TestSeek(t *testing.T) {
  function testSeek (line 191) | func testSeek(t *testing.T, N int64, readFrom, writeTo bool) {
  function testSeekOneWhence (line 235) | func testSeekOneWhence(
  function TestEmptyFile (line 336) | func TestEmptyFile(t *testing.T) {
  function TestEncryptedTheSame (line 385) | func TestEncryptedTheSame(t *testing.T) {
  function TestEncryptedByteSwaps (line 426) | func TestEncryptedByteSwaps(t *testing.T) {

FILE: catfs/mio/encrypt/reader.go
  type Reader (line 11) | type Reader struct
    method readHeaderIfNotDone (line 46) | func (r *Reader) readHeaderIfNotDone() error {
    method Flags (line 96) | func (r *Reader) Flags() (Flags, error) {
    method Read (line 110) | func (r *Reader) Read(dest []byte) (int, error) {
    method readBlock (line 139) | func (r *Reader) readBlock() (int, error) {
    method Seek (line 191) | func (r *Reader) Seek(offset int64, whence int) (int64, error) {
    method WriteTo (line 313) | func (r *Reader) WriteTo(w io.Writer) (int64, error) {
  function NewReader (line 361) | func NewReader(r io.Reader, key []byte) (*Reader, error) {

FILE: catfs/mio/encrypt/writer.go
  type Writer (line 19) | type Writer struct
    method GoodDecBufferSize (line 44) | func (w *Writer) GoodDecBufferSize() int64 {
    method GoodEncBufferSize (line 49) | func (w *Writer) GoodEncBufferSize() int64 {
    method emitHeaderIfNeeded (line 53) | func (w *Writer) emitHeaderIfNeeded() error {
    method Write (line 64) | func (w *Writer) Write(p []byte) (int, error) {
    method flushPack (line 84) | func (w *Writer) flushPack(pack []byte) (int, error) {
    method Close (line 104) | func (w *Writer) Close() error {
    method ReadFrom (line 129) | func (w *Writer) ReadFrom(r io.Reader) (int64, error) {
  function NewWriter (line 176) | func NewWriter(w io.Writer, key []byte, flags Flags) (*Writer, error) {
  function NewWriterWithBlockSize (line 183) | func NewWriterWithBlockSize(w io.Writer, key []byte, flags Flags, maxBlo...

FILE: catfs/mio/pagecache/cache.go
  type Cache (line 9) | type Cache interface

FILE: catfs/mio/pagecache/mdcache/l1.go
  type l1item (line 31) | type l1item struct
  type l1cache (line 36) | type l1cache struct
    method Set (line 52) | func (c *l1cache) Set(pk pageKey, p *page.Page) error {
    method Get (line 101) | func (c *l1cache) Get(pk pageKey) (*page.Page, error) {
    method Del (line 112) | func (c *l1cache) Del(pks []pageKey) {
    method Close (line 122) | func (c *l1cache) Close() error {
  function newL1Cache (line 43) | func newL1Cache(l2 cacheLayer, maxMemory int64) (*l1cache, error) {

FILE: catfs/mio/pagecache/mdcache/l1_test.go
  function withL1Cache (line 10) | func withL1Cache(t *testing.T, fn func(l1, backing *l1cache)) {
  function TestL1GetSetDel (line 24) | func TestL1GetSetDel(t *testing.T) {
  function TestL1SwapPriority (line 46) | func TestL1SwapPriority(t *testing.T) {

FILE: catfs/mio/pagecache/mdcache/l2.go
  type l2cache (line 19) | type l2cache struct
    method Set (line 45) | func (c *l2cache) Set(pk pageKey, p *page.Page) error {
    method Get (line 62) | func (c *l2cache) Get(pk pageKey) (*page.Page, error) {
    method Del (line 86) | func (c *l2cache) Del(pks []pageKey) {
    method Close (line 103) | func (c *l2cache) Close() error {
  function newL2Cache (line 28) | func newL2Cache(dir string, compress bool) (*l2cache, error) {

FILE: catfs/mio/pagecache/mdcache/l2_test.go
  function dummyPage (line 13) | func dummyPage(off, length uint32) *page.Page {
  function withL2Cache (line 18) | func withL2Cache(t *testing.T, fn func(l2 *l2cache)) {
  function TestL2GetSetDel (line 44) | func TestL2GetSetDel(t *testing.T) {
  function TestL2Nil (line 65) | func TestL2Nil(t *testing.T) {

FILE: catfs/mio/pagecache/mdcache/mdcache.go
  type Options (line 13) | type Options struct
  type cacheLayer (line 34) | type cacheLayer interface
  type MDCache (line 42) | type MDCache struct
    method Lookup (line 87) | func (dc *MDCache) Lookup(inode int64, pageIdx uint32) (*page.Page, er...
    method get (line 94) | func (dc *MDCache) get(pk pageKey) (*page.Page, error) {
    method Merge (line 119) | func (dc *MDCache) Merge(inode int64, pageIdx, off uint32, write []byt...
    method Evict (line 149) | func (dc *MDCache) Evict(inode, size int64) error {
    method Close (line 170) | func (dc *MDCache) Close() error {
  type pageKey (line 49) | type pageKey struct
    method String (line 54) | func (pk pageKey) String() string {
  function New (line 59) | func New(opts Options) (*MDCache, error) {
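`MDCache` layers a memory-bounded `l1cache` over a disk-backed `l2cache`, both behind the `cacheLayer` interface. A sketch of the lookup path with L2 hits promoted into L1; the types here are illustrative stand-ins, not mdcache's actual structs:

```go
package main

import "fmt"

// pageKey identifies a page, as in mdcache: file inode plus page index.
type pageKey struct {
	inode   int64
	pageIdx uint32
}

// cacheLayer is the shared interface of the L1 and L2 layers.
type cacheLayer interface {
	Get(pk pageKey) ([]byte, bool)
	Set(pk pageKey, data []byte)
}

type mapLayer struct{ m map[pageKey][]byte }

func newMapLayer() *mapLayer { return &mapLayer{m: map[pageKey][]byte{}} }

func (l *mapLayer) Get(pk pageKey) ([]byte, bool) { v, ok := l.m[pk]; return v, ok }
func (l *mapLayer) Set(pk pageKey, data []byte)   { l.m[pk] = data }

type tieredCache struct{ l1, l2 cacheLayer }

// Get checks the fast layer first; L2 hits are promoted back into L1.
func (c *tieredCache) Get(pk pageKey) ([]byte, bool) {
	if v, ok := c.l1.Get(pk); ok {
		return v, true
	}
	if v, ok := c.l2.Get(pk); ok {
		c.l1.Set(pk, v) // promote
		return v, true
	}
	return nil, false
}

func main() {
	c := &tieredCache{l1: newMapLayer(), l2: newMapLayer()}
	pk := pageKey{inode: 7, pageIdx: 0}
	c.l2.Set(pk, []byte("page-data")) // pretend L1 evicted this to disk
	v, ok := c.Get(pk)
	fmt.Println(ok, string(v)) // true page-data
	_, inL1 := c.l1.Get(pk)
	fmt.Println(inL1) // true: promoted on hit
}
```

In the real cache, L1 `Set` is also where eviction to L2 happens once the memory budget (`maxMemory`) is exceeded.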

FILE: catfs/mio/pagecache/mdcache/mdcache_test.go
  function withMDCache (line 13) | func withMDCache(t *testing.T, fn func(mdc *MDCache)) {
  function TestMDBasic (line 31) | func TestMDBasic(t *testing.T) {

FILE: catfs/mio/pagecache/overlay.go
  type Layer (line 14) | type Layer struct
    method ensureOffset (line 70) | func (l *Layer) ensureOffset(zpr *zeroPadReader) error {
    method WriteAt (line 91) | func (l *Layer) WriteAt(buf []byte, off int64) (n int, err error) {
    method ReadAt (line 164) | func (l *Layer) ReadAt(buf []byte, off int64) (int, error) {
    method Truncate (line 280) | func (l *Layer) Truncate(size int64) {
    method Length (line 288) | func (l *Layer) Length() int64 {
    method Read (line 301) | func (l *Layer) Read(buf []byte) (int, error) {
    method Write (line 307) | func (l *Layer) Write(buf []byte) (int, error) {
    method Seek (line 316) | func (l *Layer) Seek(off int64, whence int) (int64, error) {
    method Close (line 333) | func (l *Layer) Close() error {
    method WriteTo (line 338) | func (l *Layer) WriteTo(w io.Writer) (int64, error) {
  function NewLayer (line 56) | func NewLayer(rs io.ReadSeeker, cache Cache, inode, size int64) (*Layer,...

FILE: catfs/mio/pagecache/overlay_test.go
  function withLayer (line 17) | func withLayer(t *testing.T, size int64, fn func(expected []byte, p *Lay...
  function TestReadOnly (line 49) | func TestReadOnly(t *testing.T) {
  function padOrCutToLength (line 63) | func padOrCutToLength(buf []byte, length int64) []byte {
  function TestReadOnlyTruncate (line 73) | func TestReadOnlyTruncate(t *testing.T) {
  function TestWriteSingle (line 124) | func TestWriteSingle(t *testing.T) {
  function TestWriteRandomOffset (line 152) | func TestWriteRandomOffset(t *testing.T) {
  function TestReadRandomOffset (line 219) | func TestReadRandomOffset(t *testing.T) {

FILE: catfs/mio/pagecache/page/page.go
  constant Size (line 20) | Size = 64 * 1024
  constant Meta (line 25) | Meta = 4 * 1024
  constant ExtentSize (line 28) | ExtentSize = 8
  type Extent (line 49) | type Extent struct
    method String (line 53) | func (e Extent) String() string {
  type Page (line 58) | type Page struct
    method String (line 69) | func (p *Page) String() string {
    method AsBytes (line 137) | func (p *Page) AsBytes() []byte {
    method affectedExtentIdxs (line 191) | func (p *Page) affectedExtentIdxs(lo, hi uint32) (int, int) {
    method OccludesStream (line 212) | func (p *Page) OccludesStream(pageOff, length uint32) bool {
    method Overlay (line 239) | func (p *Page) Overlay(off uint32, write []byte) {
    method updateExtents (line 256) | func (p *Page) updateExtents(off, offPlusWrite uint32) {
    method Underlay (line 339) | func (p *Page) Underlay(pageOff uint32, write []byte) {
  function New (line 83) | func New(off uint32, write []byte) *Page {
  function FromBytes (line 95) | func FromBytes(data []byte) (*Page, error) {
  function minUint32 (line 326) | func minUint32(a, b uint32) uint32 {
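Judging from `Extent`, `updateExtents` and `OccludesStream`, a page tracks which byte ranges hold overlay data, merges overlapping writes, and can answer whether a read is fully covered (so the underlying stream is never needed). A small sketch of that bookkeeping under those assumptions, with simplified names:

```go
package main

import (
	"fmt"
	"sort"
)

// extent marks a half-open byte range [lo, hi) of a page that holds
// overlay data.
type extent struct{ lo, hi uint32 }

type page struct{ extents []extent }

// addWrite records a write and folds overlapping or touching extents
// into one, keeping the slice sorted and non-overlapping.
func (p *page) addWrite(lo, hi uint32) {
	merged := []extent{}
	for _, e := range p.extents {
		if e.hi < lo || e.lo > hi { // disjoint: keep as-is
			merged = append(merged, e)
			continue
		}
		if e.lo < lo {
			lo = e.lo
		}
		if e.hi > hi {
			hi = e.hi
		}
	}
	merged = append(merged, extent{lo, hi})
	sort.Slice(merged, func(i, j int) bool { return merged[i].lo < merged[j].lo })
	p.extents = merged
}

// occludes reports whether [off, off+length) is fully covered by a
// single extent, i.e. a read there can skip the underlying stream.
func (p *page) occludes(off, length uint32) bool {
	for _, e := range p.extents {
		if e.lo <= off && off+length <= e.hi {
			return true
		}
	}
	return false
}

func main() {
	p := &page{}
	p.addWrite(0, 10)
	p.addWrite(8, 20) // overlaps [0,10), merges into [0,20)
	fmt.Println(len(p.extents), p.occludes(5, 10), p.occludes(15, 10))
}
```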

FILE: catfs/mio/pagecache/page/page_test.go
  function TestPageAffectedIndices (line 10) | func TestPageAffectedIndices(t *testing.T) {
  function TestPageSerializeDeserialize (line 64) | func TestPageSerializeDeserialize(t *testing.T) {
  function TestPageSerializeWithManyWrites (line 75) | func TestPageSerializeWithManyWrites(t *testing.T) {
  function TestPageOccludeStreamBasic (line 96) | func TestPageOccludeStreamBasic(t *testing.T) {
  function TestPageOccludeStreamInExtent (line 113) | func TestPageOccludeStreamInExtent(t *testing.T) {
  function TestPageAddExtent (line 123) | func TestPageAddExtent(t *testing.T) {
  function TestPageAddExtentRegression (line 168) | func TestPageAddExtentRegression(t *testing.T) {
  function TestPageUnderlayFull (line 177) | func TestPageUnderlayFull(t *testing.T) {
  function TestPageUnderlayPartial (line 189) | func TestPageUnderlayPartial(t *testing.T) {
  function TestPageUnderlayLeftover (line 213) | func TestPageUnderlayLeftover(t *testing.T) {

FILE: catfs/mio/pagecache/util.go
  type iobuf (line 9) | type iobuf struct
    method Write (line 14) | func (ib *iobuf) Write(src []byte) (int, error) {
    method Len (line 20) | func (ib *iobuf) Len() int {
    method Left (line 24) | func (ib *iobuf) Left() int {
  type zeroPadReader (line 31) | type zeroPadReader struct
    method Read (line 46) | func (zpr *zeroPadReader) Read(buf []byte) (int, error) {
  function memzero (line 36) | func memzero(buf []byte) {
  function copyNBuffer (line 83) | func copyNBuffer(dst io.Writer, src io.Reader, n int64, buf []byte) (wri...

FILE: catfs/mio/pagecache/util_test.go
  function TestZeroPaddedReader (line 13) | func TestZeroPaddedReader(t *testing.T) {
  function TestIOBuf (line 69) | func TestIOBuf(t *testing.T) {
  function BenchmarkZeroing (line 115) | func BenchmarkZeroing(b *testing.B) {

FILE: catfs/mio/stream.go
  type Stream (line 16) | type Stream interface
  type stream (line 23) | type stream struct
  type dumbWriterTo (line 30) | type dumbWriterTo struct
    method WriteTo (line 34) | func (d dumbWriterTo) WriteTo(w io.Writer) (n int64, err error) {
  function NewOutStream (line 41) | func NewOutStream(r io.ReadSeeker, isRaw bool, key []byte) (Stream, erro...
  function guessCompression (line 119) | func guessCompression(path string, r io.Reader, hint *hints.Hint) (io.Re...
  function NewInStream (line 150) | func NewInStream(r io.Reader, path string, key []byte, hint hints.Hint) ...
  type limitedStream (line 213) | type limitedStream struct
    method Read (line 219) | func (ls *limitedStream) Read(buf []byte) (int, error) {
    method Seek (line 238) | func (ls *limitedStream) Seek(offset int64, whence int) (int64, error) {
    method WriteTo (line 262) | func (ls *limitedStream) WriteTo(w io.Writer) (int64, error) {
    method Close (line 268) | func (ls *limitedStream) Close() error {
  function LimitStream (line 274) | func LimitStream(stream Stream, size uint64) Stream {

FILE: catfs/mio/stream_test.go
  function testWriteAndRead (line 19) | func testWriteAndRead(
  function TestWriteAndRead (line 77) | func TestWriteAndRead(t *testing.T) {
  function TestLimitedStream (line 110) | func TestLimitedStream(t *testing.T) {
  function TestLimitStreamSize (line 185) | func TestLimitStreamSize(t *testing.T) {
  function TestStreamSizeBySeek (line 221) | func TestStreamSizeBySeek(t *testing.T) {

FILE: catfs/nodes/base.go
  type Base (line 17) | type Base struct
    method copyBase (line 44) | func (b *Base) copyBase(inode uint64) Base {
    method User (line 58) | func (b *Base) User() string {
    method Name (line 64) | func (b *Base) Name() string {
    method TreeHash (line 69) | func (b *Base) TreeHash() h.Hash {
    method ContentHash (line 74) | func (b *Base) ContentHash() h.Hash {
    method BackendHash (line 79) | func (b *Base) BackendHash() h.Hash {
    method Type (line 84) | func (b *Base) Type() NodeType {
    method ModTime (line 90) | func (b *Base) ModTime() time.Time {
    method Inode (line 95) | func (b *Base) Inode() uint64 {
    method setBaseAttrsToNode (line 101) | func (b *Base) setBaseAttrsToNode(capnode capnp_model.Node) error {
    method parseBaseAttrsFromNode (line 130) | func (b *Base) parseBaseAttrsFromNode(capnode capnp_model.Node) error {
  function prefixSlash (line 184) | func prefixSlash(s string) string {
  function MarshalNode (line 198) | func MarshalNode(nd Node) ([]byte, error) {
  function UnmarshalNode (line 208) | func UnmarshalNode(data []byte) (Node, error) {
  function CapNodeToNode (line 223) | func CapNodeToNode(capNd capnp_model.Node) (Node, error) {
  function Depth (line 254) | func Depth(nd Node) int {
  function RemoveNode (line 272) | func RemoveNode(lkr Linker, nd Node) error {
  function ParentDirectory (line 288) | func ParentDirectory(lkr Linker, nd Node) (*Directory, error) {
  function ContentHash (line 309) | func ContentHash(nd Node) (h.Hash, error) {
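`prefixSlash` and `Depth` point at path normalization and computing a node's depth in the tree from its slash-separated path. A sketch of both helpers; the exact depth convention (what the root counts as) is a guess from the names, not taken from the source:

```go
package main

import (
	"fmt"
	"strings"
)

// prefixSlash ensures a repo path starts with "/" (matching the
// helper's name in base.go; the real implementation may differ).
func prefixSlash(s string) string {
	if !strings.HasPrefix(s, "/") {
		return "/" + s
	}
	return s
}

// depth counts path components; the root "/" is depth 1 here.
// This convention is an assumption for illustration.
func depth(path string) int {
	path = prefixSlash(path)
	if path == "/" {
		return 1
	}
	return 1 + strings.Count(path, "/")
}

func main() {
	fmt.Println(prefixSlash("photos/a.png"))
	fmt.Println(depth("/"), depth("/photos"), depth("/photos/a.png"))
}
```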

FILE: catfs/nodes/capnp/nodes.capnp.go
  type Commit (line 13) | type Commit struct
    method String (line 34) | func (s Commit) String() string {
    method Message (line 39) | func (s Commit) Message() (string, error) {
    method HasMessage (line 44) | func (s Commit) HasMessage() bool {
    method MessageBytes (line 49) | func (s Commit) MessageBytes() ([]byte, error) {
    method SetMessage (line 54) | func (s Commit) SetMessage(v string) error {
    method Author (line 58) | func (s Commit) Author() (string, error) {
    method HasAuthor (line 63) | func (s Commit) HasAuthor() bool {
    method AuthorBytes (line 68) | func (s Commit) AuthorBytes() ([]byte, error) {
    method SetAuthor (line 73) | func (s Commit) SetAuthor(v string) error {
    method Parent (line 77) | func (s Commit) Parent() ([]byte, error) {
    method HasParent (line 82) | func (s Commit) HasParent() bool {
    method SetParent (line 87) | func (s Commit) SetParent(v []byte) error {
    method Root (line 91) | func (s Commit) Root() ([]byte, error) {
    method HasRoot (line 96) | func (s Commit) HasRoot() bool {
    method SetRoot (line 101) | func (s Commit) SetRoot(v []byte) error {
    method Index (line 105) | func (s Commit) Index() int64 {
    method SetIndex (line 109) | func (s Commit) SetIndex(v int64) {
    method Merge (line 113) | func (s Commit) Merge() Commit_merge { return Commit_merge(s) }
  type Commit_merge (line 14) | type Commit_merge
    method With (line 115) | func (s Commit_merge) With() (string, error) {
    method HasWith (line 120) | func (s Commit_merge) HasWith() bool {
    method WithBytes (line 125) | func (s Commit_merge) WithBytes() ([]byte, error) {
    method SetWith (line 130) | func (s Commit_merge) SetWith(v string) error {
    method Head (line 134) | func (s Commit_merge) Head() ([]byte, error) {
    method HasHead (line 139) | func (s Commit_merge) HasHead() bool {
    method SetHead (line 144) | func (s Commit_merge) SetHead(v []byte) error {
  constant Commit_TypeID (line 17) | Commit_TypeID = 0x8da013c66e545daf
  function NewCommit (line 19) | func NewCommit(s *capnp.Segment) (Commit, error) {
  function NewRootCommit (line 24) | func NewRootCommit(s *capnp.Segment) (Commit, error) {
  function ReadRootCommit (line 29) | func ReadRootCommit(msg *capnp.Message) (Commit, error) {
  type Commit_List (line 149) | type Commit_List struct
    method At (line 157) | func (s Commit_List) At(i int) Commit { return Commit{s.List.Struct(i)} }
    method Set (line 159) | func (s Commit_List) Set(i int, v Commit) error { return s.List.SetStr...
    method String (line 161) | func (s Commit_List) String() string {
  function NewCommit_List (line 152) | func NewCommit_List(s *capnp.Segment, sz int32) (Commit_List, error) {
  type Commit_Promise (line 167) | type Commit_Promise struct
    method Struct (line 169) | func (p Commit_Promise) Struct() (Commit, error) {
    method Merge (line 174) | func (p Commit_Promise) Merge() Commit_merge_Promise { return Commit_m...
  type Commit_merge_Promise (line 177) | type Commit_merge_Promise struct
    method Struct (line 179) | func (p Commit_merge_Promise) Struct() (Commit_merge, error) {
  type DirEntry (line 185) | type DirEntry struct
    method String (line 205) | func (s DirEntry) String() string {
    method Name (line 210) | func (s DirEntry) Name() (string, error) {
    method HasName (line 215) | func (s DirEntry) HasName() bool {
    method NameBytes (line 220) | func (s DirEntry) NameBytes() ([]byte, error) {
    method SetName (line 225) | func (s DirEntry) SetName(v string) error {
    method Hash (line 229) | func (s DirEntry) Hash() ([]byte, error) {
    method HasHash (line 234) | func (s DirEntry) HasHash() bool {
    method SetHash (line 239) | func (s DirEntry) SetHash(v []byte) error {
  constant DirEntry_TypeID (line 188) | DirEntry_TypeID = 0x8b15ee76774b1f9d
  function NewDirEntry (line 190) | func NewDirEntry(s *capnp.Segment) (DirEntry, error) {
  function NewRootDirEntry (line 195) | func NewRootDirEntry(s *capnp.Segment) (DirEntry, error) {
  function ReadRootDirEntry (line 200) | func ReadRootDirEntry(msg *capnp.Message) (DirEntry, error) {
  type DirEntry_List (line 244) | type DirEntry_List struct
    method At (line 252) | func (s DirEntry_List) At(i int) DirEntry { return DirEntry{s.List.Str...
    method Set (line 254) | func (s DirEntry_List) Set(i int, v DirEntry) error { return s.List.Se...
    method String (line 256) | func (s DirEntry_List) String() string {
  function NewDirEntry_List (line 247) | func NewDirEntry_List(s *capnp.Segment, sz int32) (DirEntry_List, error) {
  type DirEntry_Promise (line 262) | type DirEntry_Promise struct
    method Struct (line 264) | func (p DirEntry_Promise) Struct() (DirEntry, error) {
  type Directory (line 270) | type Directory struct
    method String (line 290) | func (s Directory) String() string {
    method Size (line 295) | func (s Directory) Size() uint64 {
    method SetSize (line 299) | func (s Directory) SetSize(v uint64) {
    method CachedSize (line 303) | func (s Directory) CachedSize() int64 {
    method SetCachedSize (line 307) | func (s Directory) SetCachedSize(v int64) {
    method Parent (line 311) | func (s Directory) Parent() (string, error) {
    method HasParent (line 316) | func (s Directory) HasParent() bool {
    method ParentBytes (line 321) | func (s Directory) ParentBytes() ([]byte, error) {
    method SetParent (line 326) | func (s Directory) SetParent(v string) error {
    method Children (line 330) | func (s Directory) Children() (DirEntry_List, error) {
    method HasChildren (line 335) | func (s Directory) HasChildren() bool {
    method SetChildren (line 340) | func (s Directory) SetChildren(v DirEntry_List) error {
    method NewChildren (line 346) | func (s Directory) NewChildren(n int32) (DirEntry_List, error) {
    method Contents (line 355) | func (s Directory) Contents() (DirEntry_List, error) {
    method HasContents (line 360) | func (s Directory) HasContents() bool {
    method SetContents (line 365) | func (s Directory) SetContents(v DirEntry_List) error {
    method NewContents (line 371) | func (s Directory) NewContents(n int32) (DirEntry_List, error) {
  constant Directory_TypeID (line 273) | Directory_TypeID = 0xe24c59306c829c01
  function NewDirectory (line 275) | func NewDirectory(s *capnp.Segment) (Directory, error) {
  function NewRootDirectory (line 280) | func NewRootDirectory(s *capnp.Segment) (Directory, error) {
  function ReadRootDirectory (line 285) | func ReadRootDirectory(msg *capnp.Message) (Directory, error) {
  type Directory_List (line 381) | type Directory_List struct
    method At (line 389) | func (s Directory_List) At(i int) Directory { return Directory{s.List....
    method Set (line 391) | func (s Directory_List) Set(i int, v Directory) error { return s.List....
    method String (line 393) | func (s Directory_List) String() string {
  function NewDirectory_List (line 384) | func NewDirectory_List(s *capnp.Segment, sz int32) (Directory_List, erro...
  type Directory_Promise (line 399) | type Directory_Promise struct
    method Struct (line 401) | func (p Directory_Promise) Struct() (Directory, error) {
  type File (line 407) | type File struct
    method String (line 427) | func (s File) String() string {
    method Size (line 432) | func (s File) Size() uint64 {
    method SetSize (line 436) | func (s File) SetSize(v uint64) {
    method CachedSize (line 440) | func (s File) CachedSize() int64 {
    method SetCachedSize (line 444) | func (s File) SetCachedSize(v int64) {
    method Parent (line 448) | func (s File) Parent() (string, error) {
    method HasParent (line 453) | func (s File) HasParent() bool {
    method ParentBytes (line 458) | func (s File) ParentBytes() ([]byte, error) {
    method SetParent (line 463) | func (s File) SetParent(v string) error {
    method Key (line 467) | func (s File) Key() ([]byte, error) {
    method HasKey (line 472) | func (s File) HasKey() bool {
    method SetKey (line 477) | func (s File) SetKey(v []byte) error {
    method IsRaw (line 481) | func (s File) IsRaw() bool {
    method SetIsRaw (line 485) | func (s File) SetIsRaw(v bool) {
  constant File_TypeID (line 410) | File_TypeID = 0x8ea7393d37893155
  function NewFile (line 412) | func NewFile(s *capnp.Segment) (File, error) {
  function NewRootFile (line 417) | func NewRootFile(s *capnp.Segment) (File, error) {
  function ReadRootFile (line 422) | func ReadRootFile(msg *capnp.Message) (File, error) {
  type File_List (line 490) | type File_List struct
    method At (line 498) | func (s File_List) At(i int) File { return File{s.List.Struct(i)} }
    method Set (line 500) | func (s File_List) Set(i int, v File) error { return s.List.SetStruct(...
    method String (line 502) | func (s File_List) String() string {
  function NewFile_List (line 493) | func NewFile_List(s *capnp.Segment, sz int32) (File_List, error) {
  type File_Promise (line 508) | type File_Promise struct
    method Struct (line 510) | func (p File_Promise) Struct() (File, error) {
  type Ghost (line 516) | type Ghost struct
    method String (line 557) | func (s Ghost) String() string {
    method Which (line 562) | func (s Ghost) Which() Ghost_Which {
    method GhostInode (line 565) | func (s Ghost) GhostInode() uint64 {
    method SetGhostInode (line 569) | func (s Ghost) SetGhostInode(v uint64) {
    method GhostPath (line 573) | func (s Ghost) GhostPath() (string, error) {
    method HasGhostPath (line 578) | func (s Ghost) HasGhostPath() bool {
    method GhostPathBytes (line 583) | func (s Ghost) GhostPathBytes() ([]byte, error) {
    method SetGhostPath (line 588) | func (s Ghost) SetGhostPath(v string) error {
    method Commit (line 592) | func (s Ghost) Commit() (Commit, error) {
    method HasCommit (line 600) | func (s Ghost) HasCommit() bool {
    method SetCommit (line 608) | func (s Ghost) SetCommit(v Commit) error {
    method NewCommit (line 615) | func (s Ghost) NewCommit() (Commit, error) {
    method Directory (line 625) | func (s Ghost) Directory() (Directory, error) {
    method HasDirectory (line 633) | func (s Ghost) HasDirectory() bool {
    method SetDirectory (line 641) | func (s Ghost) SetDirectory(v Directory) error {
    method NewDirectory (line 648) | func (s Ghost) NewDirectory() (Directory, error) {
    method File (line 658) | func (s Ghost) File() (File, error) {
    method HasFile (line 666) | func (s Ghost) HasFile() bool {
    method SetFile (line 674) | func (s Ghost) SetFile(v File) error {
    method NewFile (line 681) | func (s Ghost) NewFile() (File, error) {
  type Ghost_Which (line 517) | type Ghost_Which
    method String (line 525) | func (w Ghost_Which) String() string {
  constant Ghost_Which_commit (line 520) | Ghost_Which_commit    Ghost_Which = 0
  constant Ghost_Which_directory (line 521) | Ghost_Which_directory Ghost_Which = 1
  constant Ghost_Which_file (line 522) | Ghost_Which_file      Ghost_Which = 2
  constant Ghost_TypeID (line 540) | Ghost_TypeID = 0x80c828d7e89c12ea
  function NewGhost (line 542) | func NewGhost(s *capnp.Segment) (Ghost, error) {
  function NewRootGhost (line 547) | func NewRootGhost(s *capnp.Segment) (Ghost, error) {
  function ReadRootGhost (line 552) | func ReadRootGhost(msg *capnp.Message) (Ghost, error) {
  type Ghost_List (line 692) | type Ghost_List struct
    method At (line 700) | func (s Ghost_List) At(i int) Ghost { return Ghost{s.List.Struct(i)} }
    method Set (line 702) | func (s Ghost_List) Set(i int, v Ghost) error { return s.List.SetStruc...
    method String (line 704) | func (s Ghost_List) String() string {
  function NewGhost_List (line 695) | func NewGhost_List(s *capnp.Segment, sz int32) (Ghost_List, error) {
  type Ghost_Promise (line 710) | type Ghost_Promise struct
    method Struct (line 712) | func (p Ghost_Promise) Struct() (Ghost, error) {
    method Commit (line 717) | func (p Ghost_Promise) Commit() Commit_Promise {
    method Directory (line 721) | func (p Ghost_Promise) Directory() Directory_Promise {
    method File (line 725) | func (p Ghost_Promise) File() File_Promise {
  type Node (line 730) | type Node struct
    method String (line 774) | func (s Node) String() string {
    method Which (line 779) | func (s Node) Which() Node_Which {
    method Name (line 782) | func (s Node) Name() (string, error) {
    method HasName (line 787) | func (s Node) HasName() bool {
    method NameBytes (line 792) | func (s Node) NameBytes() ([]byte, error) {
    method SetName (line 797) | func (s Node) SetName(v string) error {
    method TreeHash (line 801) | func (s Node) TreeHash() ([]byte, error) {
    method HasTreeHash (line 806) | func (s Node) HasTreeHash() bool {
    method SetTreeHash (line 811) | func (s Node) SetTreeHash(v []byte) error {
    method ModTime (line 815) | func (s Node) ModTime() (string, error) {
    method HasModTime (line 820) | func (s Node) HasModTime() bool {
    method ModTimeBytes (line 825) | func (s Node) ModTimeBytes() ([]byte, error) {
    method SetModTime (line 830) | func (s Node) SetModTime(v string) error {
    method Inode (line 834) | func (s Node) Inode() uint64 {
    method SetInode (line 838) | func (s Node) SetInode(v uint64) {
    method ContentHash (line 842) | func (s Node) ContentHash() ([]byte, error) {
    method HasContentHash (line 847) | func (s Node) HasContentHash() bool {
    method SetContentHash (line 852) | func (s Node) SetContentHash(v []byte) error {
    method User (line 856) | func (s Node) User() (string, error) {
    method HasUser (line 861) | func (s Node) HasUser() bool {
    method UserBytes (line 866) | func (s Node) UserBytes() ([]byte, error) {
    method SetUser (line 871) | func (s Node) SetUser(v string) error {
    method Commit (line 875) | func (s Node) Commit() (Commit, error) {
    method HasCommit (line 883) | func (s Node) HasCommit() bool {
    method SetCommit (line 891) | func (s Node) SetCommit(v Commit) error {
    method NewCommit (line 898) | func (s Node) NewCommit() (Commit, error) {
    method Directory (line 908) | func (s Node) Directory() (Directory, error) {
    method HasDirectory (line 916) | func (s Node) HasDirectory() bool {
    method SetDirectory (line 924) | func (s Node) SetDirectory(v Directory) error {
    method NewDirectory (line 931) | func (s Node) NewDirectory() (Directory, error) {
    method File (line 941) | func (s Node) File() (File, error) {
    method HasFile (line 949) | func (s Node) HasFile() bool {
    method SetFile (line 957) | func (s Node) SetFile(v File) error {
    method NewFile (line 964) | func (s Node) NewFile() (File, error) {
    method Ghost (line 974) | func (s Node) Ghost() (Ghost, error) {
    method HasGhost (line 982) | func (s Node) HasGhost() bool {
    method SetGhost (line 990) | func (s Node) SetGhost(v Ghost) error {
    method NewGhost (line 997) | func (s Node) NewGhost() (Ghost, error) {
    method BackendHash (line 1007) | func (s Node) BackendHash() ([]byte, error) {
    method HasBackendHash (line 1012) | func (s Node) HasBackendHash() bool {
    method SetBackendHash (line 1017) | func (s Node) SetBackendHash(v []byte) error {
  type Node_Which (line 731) | type Node_Which
    method String (line 740) | func (w Node_Which) String() string {
  constant Node_Which_commit (line 734) | Node_Which_commit    Node_Which = 0
  constant Node_Which_directory (line 735) | Node_Which_directory Node_Which = 1
  constant Node_Which_file (line 736) | Node_Which_file      Node_Which = 2
  constant Node_Which_ghost (line 737) | Node_Which_ghost     Node_Which = 3
  constant Node_TypeID (line 757) | Node_TypeID = 0xa629eb7f7066fae3
  function NewNode (line 759) | func NewNode(s *capnp.Segment) (Node, error) {
  function NewRootNode (line 764) | func NewRootNode(s *capnp.Segment) (Node, error) {
  function ReadRootNode (line 769) | func ReadRootNode(msg *capnp.Message) (Node, error) {
  type Node_List (line 1022) | type Node_List struct
    method At (line 1030) | func (s Node_List) At(i int) Node { return Node{s.List.Struct(i)} }
    method Set (line 1032) | func (s Node_List) Set(i int, v Node) error { return s.List.SetStruct(...
    method String (line 1034) | func (s Node_List) String() string {
  function NewNode_List (line 1025) | func NewNode_List(s *capnp.Segment, sz int32) (Node_List, error) {
  type Node_Promise (line 1040) | type Node_Promise struct
    method Struct (line 1042) | func (p Node_Promise) Struct() (Node, error) {
    method Commit (line 1047) | func (p Node_Promise) Commit() Commit_Promise {
    method Directory (line 1051) | func (p Node_Promise) Directory() Directory_Promise {
    method File (line 1055) | func (p Node_Promise) File() File_Promise {
    method Ghost (line 1059) | func (p Node_Promise) Ghost() Ghost_Promise {
  constant schema_9195d073cb5c5953 (line 1063) | schema_9195d073cb5c5953 = "x\xda\xb4\x96\xefk\x14\xd7\x1a\xc7\x9f\xef9\x...
  function init (line 1146) | func init() {

FILE: catfs/nodes/commit.go
  constant AuthorOfStage (line 17) | AuthorOfStage = "unknown"
  type Commit (line 21) | type Commit struct
    method ToCapnp (line 64) | func (c *Commit) ToCapnp() (*capnp.Message, error) {
    method ToCapnpNode (line 79) | func (c *Commit) ToCapnpNode(seg *capnp.Segment, capNd capnp_model.Nod...
    method setCommitAttrs (line 92) | func (c *Commit) setCommitAttrs(seg *capnp.Segment) (*capnp_model.Comm...
    method FromCapnp (line 129) | func (c *Commit) FromCapnp(msg *capnp.Message) error {
    method FromCapnpNode (line 139) | func (c *Commit) FromCapnpNode(capNd capnp_model.Node) error {
    method readCommitAttrs (line 153) | func (c *Commit) readCommitAttrs(capCmt capnp_model.Commit) error {
    method IsBoxed (line 190) | func (c *Commit) IsBoxed() bool {
    method Root (line 205) | func (c *Commit) Root() h.Hash {
    method SetRoot (line 210) | func (c *Commit) SetRoot(hash h.Hash) {
    method BoxCommit (line 217) | func (c *Commit) BoxCommit(author string, message string) error {
    method String (line 245) | func (c *Commit) String() string {
    method SetMergeMarker (line 255) | func (c *Commit) SetMergeMarker(with string, remoteHead h.Hash) {
    method MergeMarker (line 261) | func (c *Commit) MergeMarker() (string, h.Hash) {
    method Name (line 268) | func (c *Commit) Name() string {
    method Message (line 273) | func (c *Commit) Message() string {
    method Path (line 278) | func (c *Commit) Path() string {
    method Size (line 285) | func (c *Commit) Size() uint64 {
    method CachedSize (line 291) | func (c *Commit) CachedSize() int64 { return 0 }
    method Index (line 295) | func (c *Commit) Index() int64 {
    method NChildren (line 303) | func (c *Commit) NChildren() int {
    method Child (line 308) | func (c *Commit) Child(lkr Linker, _ string) (Node, error) {
    method Parent (line 315) | func (c *Commit) Parent(lkr Linker) (Node, error) {
    method SetParent (line 324) | func (c *Commit) SetParent(lkr Linker, nd Node) error {
    method SetModTime (line 331) | func (c *Commit) SetModTime(t time.Time) {
  function NewEmptyCommit (line 51) | func NewEmptyCommit(inode uint64, index int64) (*Commit, error) {
  function padHash (line 197) | func padHash(hash h.Hash) []byte {

FILE: catfs/nodes/commit_test.go
  function TestCommit (line 11) | func TestCommit(t *testing.T) {

FILE: catfs/nodes/directory.go
  type Directory (line 19) | type Directory struct
    method String (line 65) | func (d *Directory) String() string {
    method ToCapnp (line 70) | func (d *Directory) ToCapnp() (*capnp.Message, error) {
    method ToCapnpNode (line 85) | func (d *Directory) ToCapnpNode(seg *capnp.Segment, capNd capnp_model....
    method setDirectoryAttrs (line 98) | func (d *Directory) setDirectoryAttrs(seg *capnp.Segment) (*capnp_mode...
    method FromCapnp (line 171) | func (d *Directory) FromCapnp(msg *capnp.Message) error {
    method FromCapnpNode (line 181) | func (d *Directory) FromCapnpNode(capNd capnp_model.Node) error {
    method readDirectoryAttr (line 194) | func (d *Directory) readDirectoryAttr(capDir capnp_model.Directory) er...
    method Name (line 255) | func (d *Directory) Name() string {
    method Size (line 261) | func (d *Directory) Size() uint64 {
    method CachedSize (line 266) | func (d *Directory) CachedSize() int64 {
    method Path (line 271) | func (d *Directory) Path() string {
    method NChildren (line 276) | func (d *Directory) NChildren() int {
    method Child (line 281) | func (d *Directory) Child(lkr Linker, name string) (Node, error) {
    method Parent (line 292) | func (d *Directory) Parent(lkr Linker) (Node, error) {
    method SetParent (line 301) | func (d *Directory) SetParent(lkr Linker, nd Node) error {
    method VisitChildren (line 319) | func (d *Directory) VisitChildren(lkr Linker, fn func(nd Node) error) ...
    method ChildrenSorted (line 342) | func (d *Directory) ChildrenSorted(lkr Linker) ([]Node, error) {
    method Up (line 358) | func (d *Directory) Up(lkr Linker, visit func(par *Directory) error) e...
    method IsRoot (line 404) | func (d *Directory) IsRoot() bool {
    method Lookup (line 479) | func (d *Directory) Lookup(lkr Linker, repoPath string) (Node, error) {
    method SetSize (line 517) | func (d *Directory) SetSize(size uint64) { d.size = size }
    method SetCachedSize (line 520) | func (d *Directory) SetCachedSize(cachedSize int64) { d.cachedSize = c...
    method SetName (line 523) | func (d *Directory) SetName(name string) {
    method SetModTime (line 528) | func (d *Directory) SetModTime(modTime time.Time) {
    method Copy (line 533) | func (d *Directory) Copy(inode uint64) ModNode {
    method rehash (line 558) | func (d *Directory) rehash(lkr Linker, updateContentHash bool) error {
    method Add (line 583) | func (d *Directory) Add(lkr Linker, nd Node) error {
    method RemoveChild (line 655) | func (d *Directory) RemoveChild(lkr Linker, nd Node) error {
    method rebuildOrderCache (line 700) | func (d *Directory) rebuildOrderCache() {
    method NotifyMove (line 709) | func (d *Directory) NotifyMove(lkr Linker, newParent *Directory, newPa...
    method SetUser (line 783) | func (d *Directory) SetUser(user string) {
  function NewEmptyDirectory (line 31) | func NewEmptyDirectory(
  function Walk (line 420) | func Walk(lkr Linker, node Node, dfs bool, visit func(child Node) error)...
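`Walk(lkr, node, dfs, visit)` traverses the node hierarchy with a visitor callback. A minimal stand-in sketch of such a traversal; the stand-in `node` type and the meaning of `dfs` (children before parent when true) are assumptions for illustration, not the confirmed semantics:

```go
package main

import "fmt"

// node is a minimal stand-in for the Node/Directory hierarchy.
type node struct {
	name     string
	children []*node
}

// walk visits children before the parent when dfs is true (post-order),
// parent first otherwise. Any error from visit aborts the walk.
func walk(n *node, dfs bool, visit func(*node) error) error {
	if !dfs {
		if err := visit(n); err != nil {
			return err
		}
	}
	for _, c := range n.children {
		if err := walk(c, dfs, visit); err != nil {
			return err
		}
	}
	if dfs {
		return visit(n)
	}
	return nil
}

func main() {
	root := &node{name: "/", children: []*node{
		{name: "a", children: []*node{{name: "x"}}},
		{name: "b"},
	}}
	walk(root, true, func(n *node) error {
		fmt.Print(n.name, " ") // post-order: leaves print before their parents
		return nil
	})
	fmt.Println()
}
```

Post-order is the natural fit for operations like `RemoveNode` or size aggregation, where children must be handled before their parent directory.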

FILE: catfs/nodes/directory_test.go
  function TestDirectoryBasics (line 11) | func TestDirectoryBasics(t *testing.T) {

FILE: catfs/nodes/file.go
  type File (line 15) | type File struct
    method ToCapnp (line 40) | func (f *File) ToCapnp() (*capnp.Message, error) {
    method ToCapnpNode (line 55) | func (f *File) ToCapnpNode(seg *capnp.Segment, capNd capnp_model.Node)...
    method setFileAttrs (line 68) | func (f *File) setFileAttrs(seg *capnp.Segment) (*capnp_model.File, er...
    method FromCapnp (line 89) | func (f *File) FromCapnp(msg *capnp.Message) error {
    method FromCapnpNode (line 99) | func (f *File) FromCapnpNode(capNd capnp_model.Node) error {
    method readFileAttrs (line 112) | func (f *File) readFileAttrs(capFile capnp_model.File) error {
    method Size (line 131) | func (f *File) Size() uint64 { return f.size }
    method CachedSize (line 134) | func (f *File) CachedSize() int64 { return f.cachedSize }
    method SetModTime (line 139) | func (f *File) SetModTime(t time.Time) {
    method SetName (line 144) | func (f *File) SetName(n string) { f.name = n }
    method SetIsRaw (line 147) | func (f *File) SetIsRaw(isRaw bool) { f.isRaw = isRaw }
    method SetKey (line 150) | func (f *File) SetKey(k []byte) { f.key = k }
    method SetSize (line 153) | func (f *File) SetSize(s uint64) {
    method SetCachedSize (line 159) | func (f *File) SetCachedSize(s int64) {
    method Copy (line 165) | func (f *File) Copy(inode uint64) ModNode {
    method rehash (line 185) | func (f *File) rehash(lkr Linker, newPath string) {
    method NotifyMove (line 199) | func (f *File) NotifyMove(lkr Linker, newParent *Directory, newPath st...
    method SetContent (line 217) | func (f *File) SetContent(lkr Linker, content h.Hash) {
    method SetBackend (line 224) | func (f *File) SetBackend(lkr Linker, backend h.Hash) {
    method String (line 229) | func (f *File) String() string {
    method Path (line 240) | func (f *File) Path() string {
    method IsRaw (line 246) | func (f *File) IsRaw() bool { return f.isRaw }
    method NChildren (line 251) | func (f *File) NChildren() int {
    method Child (line 256) | func (f *File) Child(_ Linker, name string) (Node, error) {
    method Parent (line 263) | func (f *File) Parent(lkr Linker) (Node, error) {
    method SetParent (line 268) | func (f *File) SetParent(_ Linker, parent Node) error {
    method Key (line 278) | func (f *File) Key() []byte {
    method SetUser (line 283) | func (f *File) SetUser(user string) {
  function NewEmptyFile (line 26) | func NewEmptyFile(parent *Directory, name string, user string, inode uin...

FILE: catfs/nodes/file_test.go
  function TestFile (line 12) | func TestFile(t *testing.T) {

FILE: catfs/nodes/ghost.go
  type Ghost (line 16) | type Ghost struct
    method Type (line 42) | func (g *Ghost) Type() NodeType {
    method OldNode (line 47) | func (g *Ghost) OldNode() ModNode {
    method OldFile (line 53) | func (g *Ghost) OldFile() (*File, error) {
    method OldDirectory (line 64) | func (g *Ghost) OldDirectory() (*Directory, error) {
    method String (line 73) | func (g *Ghost) String() string {
    method Path (line 78) | func (g *Ghost) Path() string {
    method TreeHash (line 83) | func (g *Ghost) TreeHash() h.Hash {
    method Inode (line 88) | func (g *Ghost) Inode() uint64 {
    method SetGhostPath (line 93) | func (g *Ghost) SetGhostPath(newPath string) {
    method ToCapnp (line 98) | func (g *Ghost) ToCapnp() (*capnp.Message, error) {
    method ToCapnpNode (line 113) | func (g *Ghost) ToCapnpNode(seg *capnp.Segment, capNd capnp_model.Node...
    method FromCapnp (line 174) | func (g *Ghost) FromCapnp(msg *capnp.Message) error {
    method FromCapnpNode (line 184) | func (g *Ghost) FromCapnpNode(capNd capnp_model.Node) error {
  function MakeGhost (line 28) | func MakeGhost(nd ModNode, inode uint64) (*Ghost, error) {

FILE: catfs/nodes/ghost_test.go
  function TestGhost (line 11) | func TestGhost(t *testing.T) {

FILE: catfs/nodes/linker.go
  type Linker (line 14) | type Linker interface
  type MockLinker (line 41) | type MockLinker struct
    method Root (line 57) | func (ml *MockLinker) Root() (*Directory, error) {
    method LookupNode (line 72) | func (ml *MockLinker) LookupNode(path string) (Node, error) {
    method NodeByHash (line 81) | func (ml *MockLinker) NodeByHash(hash h.Hash) (Node, error) {
    method MemSetRoot (line 90) | func (ml *MockLinker) MemSetRoot(root *Directory) {
    method MemIndexSwap (line 96) | func (ml *MockLinker) MemIndexSwap(nd Node, oldHash h.Hash, updatePath...
    method AddNode (line 103) | func (ml *MockLinker) AddNode(nd Node, updatePathIndex bool) {
  function NewMockLinker (line 48) | func NewMockLinker() *MockLinker {

FILE: catfs/nodes/node.go
  type NodeType (line 12) | type NodeType
    method String (line 34) | func (n NodeType) String() string {
  constant NodeTypeUnknown (line 16) | NodeTypeUnknown = NodeType(iota)
  constant NodeTypeFile (line 18) | NodeTypeFile
  constant NodeTypeDirectory (line 20) | NodeTypeDirectory
  constant NodeTypeCommit (line 22) | NodeTypeCommit
  constant NodeTypeGhost (line 24) | NodeTypeGhost
  type Metadatable (line 43) | type Metadatable interface
  type Serializable (line 89) | type Serializable interface
  type HierarchyEntry (line 99) | type HierarchyEntry interface
  type Streamable (line 116) | type Streamable interface
  type Node (line 122) | type Node interface
  type ModNode (line 131) | type ModNode interface

FILE: catfs/pinner.go
  type pinCacheEntry (line 19) | type pinCacheEntry struct
  function capnpToPinCacheEntry (line 23) | func capnpToPinCacheEntry(data []byte) (*pinCacheEntry, error) {
  function pinEnryToCapnpData (line 51) | func pinEnryToCapnpData(entry *pinCacheEntry) ([]byte, error) {
  type Pinner (line 94) | type Pinner struct
    method Close (line 106) | func (pc *Pinner) Close() error {
    method remember (line 127) | func (pc *Pinner) remember(inode uint64, hash h.Hash, isPinned, isExpl...
    method IsPinned (line 164) | func (pc *Pinner) IsPinned(inode uint64, hash h.Hash) (bool, bool, err...
    method Pin (line 201) | func (pc *Pinner) Pin(inode uint64, hash h.Hash, explicit bool) error {
    method Unpin (line 223) | func (pc *Pinner) Unpin(inode uint64, hash h.Hash, explicit bool) error {
    method doPinOp (line 245) | func (pc *Pinner) doPinOp(op func(uint64, h.Hash, bool) error, nd n.No...
    method PinNode (line 267) | func (pc *Pinner) PinNode(nd n.Node, explicit bool) error {
    method UnpinNode (line 272) | func (pc *Pinner) UnpinNode(nd n.Node, explicit bool) error {
    method IsNodePinned (line 279) | func (pc *Pinner) IsNodePinned(nd n.Node) (bool, bool, error) {
  function NewPinner (line 101) | func NewPinner(lkr *c.Linker, bk FsBackend) (*Pinner, error) {
  function getEntry (line 111) | func getEntry(kv db.Database, hash h.Hash) (*pinCacheEntry, error) {
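The Pinner API above tracks two flags per (inode, hash) pair: `IsPinned` returns both an is-pinned and an is-explicit boolean, and `Pin`/`Unpin` take an `explicit` argument. A minimal in-memory sketch of that two-flag bookkeeping follows; it is an illustration only (the real `Pinner` persists entries via Cap'n Proto in a key-value store), and the rule that an implicit unpin cannot clear an explicit pin is an assumption made for the example, not a statement about brig's exact semantics.

```go
package main

import "fmt"

// pinEntry mirrors the two pin flags tracked per (inode, hash) pair.
type pinEntry struct {
	isPinned   bool
	isExplicit bool
}

// memPinner is a hypothetical in-memory stand-in for catfs.Pinner.
type memPinner struct {
	entries map[string]pinEntry
}

func key(inode uint64, hash string) string {
	return fmt.Sprintf("%d:%s", inode, hash)
}

// Pin marks the content as pinned, remembering whether the user asked
// for it explicitly.
func (p *memPinner) Pin(inode uint64, hash string, explicit bool) {
	p.entries[key(inode, hash)] = pinEntry{isPinned: true, isExplicit: explicit}
}

// Unpin clears the pin. In this sketch an implicit unpin may not
// override an explicit pin (assumed behavior for illustration).
func (p *memPinner) Unpin(inode uint64, hash string, explicit bool) {
	e := p.entries[key(inode, hash)]
	if e.isExplicit && !explicit {
		return
	}
	p.entries[key(inode, hash)] = pinEntry{}
}

// IsPinned reports (isPinned, isExplicit), matching the real API's shape.
func (p *memPinner) IsPinned(inode uint64, hash string) (bool, bool) {
	e := p.entries[key(inode, hash)]
	return e.isPinned, e.isExplicit
}

func main() {
	p := &memPinner{entries: map[string]pinEntry{}}
	p.Pin(1, "QmAbc", true)
	p.Unpin(1, "QmAbc", false) // ignored here: the pin was explicit
	pinned, explicit := p.IsPinned(1, "QmAbc")
	fmt.Println(pinned, explicit) // true true
}
```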

FILE: catfs/pinner_test.go
  function TestPinMemCache (line 12) | func TestPinMemCache(t *testing.T) {
  function TestPinRememberHashTwice (line 37) | func TestPinRememberHashTwice(t *testing.T) {
  function TestPinNode (line 72) | func TestPinNode(t *testing.T) {
  function TestPinEntryMarshal (line 105) | func TestPinEntryMarshal(t *testing.T) {
  function TestPinEmptyDir (line 123) | func TestPinEmptyDir(t *testing.T) {

FILE: catfs/repin.go
  type partition (line 15) | type partition struct
  method partitionNodeHashes (line 33) | func (fs *FS) partitionNodeHashes(nd n.ModNode, minDepth, maxDepth int64...
  method ensurePin (line 94) | func (fs *FS) ensurePin(entries []n.ModNode) (uint64, error) {
  method ensureUnpin (line 134) | func (fs *FS) ensureUnpin(entries []n.ModNode) (uint64, error) {
  function findLastPinnedIdx (line 157) | func findLastPinnedIdx(pinner *Pinner, nds []n.ModNode) (int, error) {
  method balanceQuota (line 172) | func (fs *FS) balanceQuota(ps []*partition, totalStorage, quota uint64) ...
  method repin (line 217) | func (fs *FS) repin(root string) error {
  method Repin (line 312) | func (fs *FS) Repin(root string) error {

FILE: catfs/repin_test.go
  function TestRepinDepthOnly (line 12) | func TestRepinDepthOnly(t *testing.T) {
  function TestRepinNoMaxDepth (line 23) | func TestRepinNoMaxDepth(t *testing.T) {
  function TestRepinDisabled (line 34) | func TestRepinDisabled(t *testing.T) {
  function TestRepinQuota (line 41) | func TestRepinQuota(t *testing.T) {
  function TestRepinKillAll (line 52) | func TestRepinKillAll(t *testing.T) {
  function TestRepinOldBehaviour (line 63) | func TestRepinOldBehaviour(t *testing.T) {
  function testRun (line 74) | func testRun(t *testing.T, fs *FS, split, n int) {

FILE: catfs/rev.go
  function validateRev (line 28) | func validateRev(rev string) error {
  function parseRev (line 55) | func parseRev(lkr *c.Linker, rev string) (*n.Commit, error) {

FILE: catfs/rev_test.go
  function TestRevParse (line 10) | func TestRevParse(t *testing.T) {

FILE: catfs/vcs/capnp/patch.capnp.go
  type Change (line 13) | type Change struct
    method String (line 33) | func (s Change) String() string {
    method Mask (line 38) | func (s Change) Mask() uint64 {
    method SetMask (line 42) | func (s Change) SetMask(v uint64) {
    method Head (line 46) | func (s Change) Head() (capnp2.Node, error) {
    method HasHead (line 51) | func (s Change) HasHead() bool {
    method SetHead (line 56) | func (s Change) SetHead(v capnp2.Node) error {
    method NewHead (line 62) | func (s Change) NewHead() (capnp2.Node, error) {
    method Next (line 71) | func (s Change) Next() (capnp2.Node, error) {
    method HasNext (line 76) | func (s Change) HasNext() bool {
    method SetNext (line 81) | func (s Change) SetNext(v capnp2.Node) error {
    method NewNext (line 87) | func (s Change) NewNext() (capnp2.Node, error) {
    method Curr (line 96) | func (s Change) Curr() (capnp2.Node, error) {
    method HasCurr (line 101) | func (s Change) HasCurr() bool {
    method SetCurr (line 106) | func (s Change) SetCurr(v capnp2.Node) error {
    method NewCurr (line 112) | func (s Change) NewCurr() (capnp2.Node, error) {
    method MovedTo (line 121) | func (s Change) MovedTo() (string, error) {
    method HasMovedTo (line 126) | func (s Change) HasMovedTo() bool {
    method MovedToBytes (line 131) | func (s Change) MovedToBytes() ([]byte, error) {
    method SetMovedTo (line 136) | func (s Change) SetMovedTo(v string) error {
    method WasPreviouslyAt (line 140) | func (s Change) WasPreviouslyAt() (string, error) {
    method HasWasPreviouslyAt (line 145) | func (s Change) HasWasPreviouslyAt() bool {
    method WasPreviouslyAtBytes (line 150) | func (s Change) WasPreviouslyAtBytes() ([]byte, error) {
    method SetWasPreviouslyAt (line 155) | func (s Change) SetWasPreviouslyAt(v string) error {
  constant Change_TypeID (line 16) | Change_TypeID = 0x9592300df48789af
  function NewChange (line 18) | func NewChange(s *capnp.Segment) (Change, error) {
  function NewRootChange (line 23) | func NewRootChange(s *capnp.Segment) (Change, error) {
  function ReadRootChange (line 28) | func ReadRootChange(msg *capnp.Message) (Change, error) {
  type Change_List (line 160) | type Change_List struct
    method At (line 168) | func (s Change_List) At(i int) Change { return Change{s.List.Struct(i)} }
    method Set (line 170) | func (s Change_List) Set(i int, v Change) error { return s.List.SetStr...
    method String (line 172) | func (s Change_List) String() string {
  function NewChange_List (line 163) | func NewChange_List(s *capnp.Segment, sz int32) (Change_List, error) {
  type Change_Promise (line 178) | type Change_Promise struct
    method Struct (line 180) | func (p Change_Promise) Struct() (Change, error) {
    method Head (line 185) | func (p Change_Promise) Head() capnp2.Node_Promise {
    method Next (line 189) | func (p Change_Promise) Next() capnp2.Node_Promise {
    method Curr (line 193) | func (p Change_Promise) Curr() capnp2.Node_Promise {
  type Patch (line 198) | type Patch struct
    method String (line 218) | func (s Patch) String() string {
    method FromIndex (line 223) | func (s Patch) FromIndex() int64 {
    method SetFromIndex (line 227) | func (s Patch) SetFromIndex(v int64) {
    method CurrIndex (line 231) | func (s Patch) CurrIndex() int64 {
    method SetCurrIndex (line 235) | func (s Patch) SetCurrIndex(v int64) {
    method Changes (line 239) | func (s Patch) Changes() (Change_List, error) {
    method HasChanges (line 244) | func (s Patch) HasChanges() bool {
    method SetChanges (line 249) | func (s Patch) SetChanges(v Change_List) error {
    method NewChanges (line 255) | func (s Patch) NewChanges(n int32) (Change_List, error) {
  constant Patch_TypeID (line 201) | Patch_TypeID = 0x927c7336e3054805
  function NewPatch (line 203) | func NewPatch(s *capnp.Segment) (Patch, error) {
  function NewRootPatch (line 208) | func NewRootPatch(s *capnp.Segment) (Patch, error) {
  function ReadRootPatch (line 213) | func ReadRootPatch(msg *capnp.Message) (Patch, error) {
  type Patch_List (line 265) | type Patch_List struct
    method At (line 273) | func (s Patch_List) At(i int) Patch { return Patch{s.List.Struct(i)} }
    method Set (line 275) | func (s Patch_List) Set(i int, v Patch) error { return s.List.SetStruc...
    method String (line 277) | func (s Patch_List) String() string {
  function NewPatch_List (line 268) | func NewPatch_List(s *capnp.Segment, sz int32) (Patch_List, error) {
  type Patch_Promise (line 283) | type Patch_Promise struct
    method Struct (line 285) | func (p Patch_Promise) Struct() (Patch, error) {
  type Patches (line 291) | type Patches struct
    method String (line 311) | func (s Patches) String() string {
    method Patches (line 316) | func (s Patches) Patches() (Patch_List, error) {
    method HasPatches (line 321) | func (s Patches) HasPatches() bool {
    method SetPatches (line 326) | func (s Patches) SetPatches(v Patch_List) error {
    method NewPatches (line 332) | func (s Patches) NewPatches(n int32) (Patch_List, error) {
  constant Patches_TypeID (line 294) | Patches_TypeID = 0xc2984f083ea5351d
  function NewPatches (line 296) | func NewPatches(s *capnp.Segment) (Patches, error) {
  function NewRootPatches (line 301) | func NewRootPatches(s *capnp.Segment) (Patches, error) {
  function ReadRootPatches (line 306) | func ReadRootPatches(msg *capnp.Message) (Patches, error) {
  type Patches_List (line 342) | type Patches_List struct
    method At (line 350) | func (s Patches_List) At(i int) Patches { return Patches{s.List.Struct...
    method Set (line 352) | func (s Patches_List) Set(i int, v Patches) error { return s.List.SetS...
    method String (line 354) | func (s Patches_List) String() string {
  function NewPatches_List (line 345) | func NewPatches_List(s *capnp.Segment, sz int32) (Patches_List, error) {
  type Patches_Promise (line 360) | type Patches_Promise struct
    method Struct (line 362) | func (p Patches_Promise) Struct() (Patches, error) {
  constant schema_b943b54bf1683782 (line 367) | schema_b943b54bf1683782 = "x\xda\x84\x93\xc1k\x13O\x1c\xc5\xbf\xef;\xbb\...
  function init (line 404) | func init() {

FILE: catfs/vcs/change.go
  constant ChangeTypeNone (line 20) | ChangeTypeNone = ChangeType(0)
  constant ChangeTypeAdd (line 22) | ChangeTypeAdd = ChangeType(1 << iota)
  constant ChangeTypeModify (line 24) | ChangeTypeModify
  constant ChangeTypeMove (line 27) | ChangeTypeMove
  constant ChangeTypeRemove (line 29) | ChangeTypeRemove
  type ChangeType (line 33) | type ChangeType
    method String (line 36) | func (ct ChangeType) String() string {
    method IsCompatible (line 64) | func (ct ChangeType) IsCompatible(ot ChangeType) bool {
  type Change (line 72) | type Change struct
    method String (line 97) | func (ch *Change) String() string {
    method Replay (line 252) | func (ch *Change) Replay(lkr *c.Linker) error {
    method toCapnpChange (line 280) | func (ch *Change) toCapnpChange(seg *capnp.Segment, capCh *capnp_patch...
    method ToCapnp (line 334) | func (ch *Change) ToCapnp() (*capnp.Message, error) {
    method fromCapnpChange (line 352) | func (ch *Change) fromCapnpChange(capCh capnp_patch.Change) error {
    method FromCapnp (line 407) | func (ch *Change) FromCapnp(msg *capnp.Message) error {
  function replayAddWithUnpacking (line 111) | func replayAddWithUnpacking(lkr *c.Linker, ch *Change) error {
  function replayAdd (line 146) | func replayAdd(lkr *c.Linker, currNd n.ModNode) error {
  function replayMove (line 167) | func replayMove(lkr *c.Linker, ch *Change) error {
  function replayAddMoveMapping (line 215) | func replayAddMoveMapping(lkr *c.Linker, oldPath, newPath string) error {
  function replayRemove (line 234) | func replayRemove(lkr *c.Linker, ch *Change) error {
  function CombineChanges (line 418) | func CombineChanges(changes []*Change) *Change {
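The constants in change.go show that `ChangeType` is a bitmask (`ChangeTypeAdd = ChangeType(1 << iota)`), so several change kinds can apply to one node and `IsCompatible` can decide whether two masks may be merged by `CombineChanges`. A self-contained sketch of that bitmask style is below; the concrete bit values, string names, and the compatibility rule are illustrative assumptions, not the ones brig assigns.

```go
package main

import (
	"fmt"
	"strings"
)

// ChangeType is a bitmask; several change kinds can combine on one node.
type ChangeType uint8

const (
	ChangeTypeNone   ChangeType = 0
	ChangeTypeAdd    ChangeType = 1 << iota // values are illustrative
	ChangeTypeModify
	ChangeTypeMove
	ChangeTypeRemove
)

// String renders a combined mask as "added|moved" style output.
func (ct ChangeType) String() string {
	if ct == ChangeTypeNone {
		return "none"
	}
	var parts []string
	for _, p := range []struct {
		bit  ChangeType
		name string
	}{
		{ChangeTypeAdd, "added"},
		{ChangeTypeModify, "modified"},
		{ChangeTypeMove, "moved"},
		{ChangeTypeRemove, "removed"},
	} {
		if ct&p.bit != 0 {
			parts = append(parts, p.name)
		}
	}
	return strings.Join(parts, "|")
}

// IsCompatible sketches one plausible rule: once a node is removed,
// no further change may be folded in (assumption, not brig's rule).
func (ct ChangeType) IsCompatible(ot ChangeType) bool {
	return ct&ChangeTypeRemove == 0
}

func main() {
	mask := ChangeTypeAdd | ChangeTypeMove
	fmt.Println(mask)                              // added|moved
	fmt.Println(mask.IsCompatible(ChangeTypeModify)) // true
}
```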

FILE: catfs/vcs/change_test.go
  function TestChangeMarshalling (line 11) | func TestChangeMarshalling(t *testing.T) {
  function TestChangeCombine (line 44) | func TestChangeCombine(t *testing.T) {
  function TestChangeCombineMoveBackAndForth (line 78) | func TestChangeCombineMoveBackAndForth(t *testing.T) {
  function TestChangeRemoveAndReadd (line 105) | func TestChangeRemoveAndReadd(t *testing.T) {
  function TestChangeReplay (line 131) | func TestChangeReplay(t *testing.T) {

FILE: catfs/vcs/debug.go
  constant printDebug (line 8) | printDebug = false
  function debug (line 11) | func debug(args ...interface{}) {
  function debugf (line 17) | func debugf(spec string, args ...interface{}) {

FILE: catfs/vcs/diff.go
  type DiffPair (line 10) | type DiffPair struct
  type Diff (line 18) | type Diff struct
    method handleAdd (line 45) | func (df *Diff) handleAdd(src n.ModNode) error {
    method handleRemove (line 49) | func (df *Diff) handleRemove(dst n.ModNode) error {
    method handleMissing (line 59) | func (df *Diff) handleMissing(dst n.ModNode) error {
    method handleTypeConflict (line 65) | func (df *Diff) handleTypeConflict(src, dst n.ModNode) error {
    method handleConflictNode (line 70) | func (df *Diff) handleConflictNode(nd n.ModNode) error {
    method handleMove (line 75) | func (df *Diff) handleMove(src, dst n.ModNode) error {
    method handleConflict (line 86) | func (df *Diff) handleConflict(src, dst n.ModNode, srcMask, dstMask Ch...
    method handleMerge (line 97) | func (df *Diff) handleMerge(src, dst n.ModNode, srcMask, dstMask Chang...
  function MakeDiff (line 112) | func MakeDiff(lkrSrc, lkrDst *c.Linker, headSrc, headDst *n.Commit, cfg ...

FILE: catfs/vcs/diff_test.go
  function setupDiffBasicSrcFile (line 10) | func setupDiffBasicSrcFile(t *testing.T, lkrSrc, lkrDst *c.Linker) {
  function checkDiffBasicSrcFileForward (line 15) | func checkDiffBasicSrcFileForward(t *testing.T, lkrSrc, lkrDst *c.Linker...
  function checkDiffBasicSrcFileBackward (line 28) | func checkDiffBasicSrcFileBackward(t *testing.T, lkrSrc, lkrDst *c.Linke...
  function assertDiffIsEmpty (line 43) | func assertDiffIsEmpty(t *testing.T, diff *Diff) {
  function TestDiff (line 53) | func TestDiff(t *testing.T) {
  function TestDiffWithSameLinker (line 120) | func TestDiffWithSameLinker(t *testing.T) {

FILE: catfs/vcs/history.go
  type HistoryWalker (line 30) | type HistoryWalker struct
    method maskFromState (line 51) | func (hw *HistoryWalker) maskFromState(curr, next n.ModNode) ChangeType {
    method findReferToPath (line 228) | func (hw *HistoryWalker) findReferToPath(prevHeadCommit *n.Commit, pre...
    method findDirectPrev (line 265) | func (hw *HistoryWalker) findDirectPrev(prevHeadCommit *n.Commit) (n.N...
    method Next (line 306) | func (hw *HistoryWalker) Next() bool {
    method State (line 418) | func (hw *HistoryWalker) State() *Change {
    method Err (line 423) | func (hw *HistoryWalker) Err() error {
  function NewHistoryWalker (line 42) | func NewHistoryWalker(lkr *c.Linker, cmt *n.Commit, node n.ModNode) *His...
  function parentDirectoryForCommit (line 98) | func parentDirectoryForCommit(lkr *c.Linker, cmt *n.Commit, curr n.Node)...
  function findMovePartner (line 141) | func findMovePartner(lkr *c.Linker, head *n.Commit, curr n.Node) (n.Node...
  function getRealType (line 220) | func getRealType(nd n.Node) n.NodeType {
  function History (line 431) | func History(lkr *c.Linker, nd n.ModNode, start, stop *n.Commit) ([]*Cha...
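The `Next`/`State`/`Err` trio on `HistoryWalker` follows Go's scanner idiom (compare `bufio.Scanner`): call `Next()` until it returns false, read the current `*Change` via `State()`, then check `Err()` once to distinguish exhaustion from failure. A self-contained sketch of that loop shape, with a toy walker standing in for the real one:

```go
package main

import "fmt"

// toyWalker mimics HistoryWalker's Next/State/Err surface.
type toyWalker struct {
	states []string
	idx    int
	err    error
}

// Next advances to the next state, returning false on exhaustion or error.
func (w *toyWalker) Next() bool {
	if w.err != nil || w.idx >= len(w.states) {
		return false
	}
	w.idx++
	return true
}

// State returns the change reached by the last successful Next call.
func (w *toyWalker) State() string { return w.states[w.idx-1] }

// Err reports what terminated the walk, if anything.
func (w *toyWalker) Err() error { return w.err }

func main() {
	w := &toyWalker{states: []string{"added", "modified", "moved"}}
	for w.Next() {
		fmt.Println(w.State())
	}
	if err := w.Err(); err != nil {
		fmt.Println("walk failed:", err)
	}
}
```

The payoff of this idiom is that error handling is hoisted out of the loop body: callers iterate unconditionally and inspect `Err()` exactly once afterwards.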

FILE: catfs/vcs/history_test.go
  function init (line 15) | func init() {
  type historySetup (line 19) | type historySetup struct
  function setupHistoryBasic (line 29) | func setupHistoryBasic(t *testing.T, lkr *c.Linker) *historySetup {
  function setupHistoryBasicHole (line 58) | func setupHistoryBasicHole(t *testing.T, lkr *c.Linker) *historySetup {
  function setupHistoryRemoveImmediately (line 90) | func setupHistoryRemoveImmediately(t *testing.T, lkr *c.Linker) *history...
  function setupHistoryRemoved (line 115) | func setupHistoryRemoved(t *testing.T, lkr *c.Linker) *historySetup {
  function setupHistoryMoved (line 152) | func setupHistoryMoved(t *testing.T, lkr *c.Linker) *historySetup {
  function setupHistoryMoveStaging (line 175) | func setupHistoryMoveStaging(t *testing.T, lkr *c.Linker) *historySetup {
  function setupMoveInitial (line 203) | func setupMoveInitial(t *testing.T, lkr *c.Linker) *historySetup {
  function setupHistoryMoveAndModify (line 227) | func setupHistoryMoveAndModify(t *testing.T, lkr *c.Linker) *historySetup {
  function setupHistoryMoveAndModifyStage (line 252) | func setupHistoryMoveAndModifyStage(t *testing.T, lkr *c.Linker) *histor...
  function setupHistoryRemoveReadd (line 280) | func setupHistoryRemoveReadd(t *testing.T, lkr *c.Linker) *historySetup {
  function setupHistoryRemoveReaddModify (line 306) | func setupHistoryRemoveReaddModify(t *testing.T, lkr *c.Linker) *history...
  function setupHistoryRemoveReaddNoModify (line 332) | func setupHistoryRemoveReaddNoModify(t *testing.T, lkr *c.Linker) *histo...
  function setupHistoryMoveCircle (line 358) | func setupHistoryMoveCircle(t *testing.T, lkr *c.Linker) *historySetup {
  function setupHistoryMoveSamePlaceLeft (line 385) | func setupHistoryMoveSamePlaceLeft(t *testing.T, lkr *c.Linker) *history...
  function setupHistoryTypeChange (line 415) | func setupHistoryTypeChange(t *testing.T, lkr *c.Linker) *historySetup {
  function setupHistoryMoveSamePlaceRight (line 439) | func setupHistoryMoveSamePlaceRight(t *testing.T, lkr *c.Linker) *histor...
  function setupHistoryMoveAndReaddFromMoved (line 466) | func setupHistoryMoveAndReaddFromMoved(t *testing.T, lkr *c.Linker) *his...
  function setupHistoryMultipleMovesPerCommit (line 490) | func setupHistoryMultipleMovesPerCommit(t *testing.T, lkr *c.Linker) *hi...
  function setupHistoryMultipleMovesInStage (line 526) | func setupHistoryMultipleMovesInStage(t *testing.T, lkr *c.Linker) *hist...
  function setupHistoryMoveAndReaddFromAdded (line 552) | func setupHistoryMoveAndReaddFromAdded(t *testing.T, lkr *c.Linker) *his...
  function setupMoveDirectoryWithChild (line 580) | func setupMoveDirectoryWithChild(t *testing.T, lkr *c.Linker) *historySe...
  function setupDirectoryHistory (line 612) | func setupDirectoryHistory(t *testing.T, lkr *c.Linker) *historySetup {
  function setupGhostHistory (line 644) | func setupGhostHistory(t *testing.T, lkr *c.Linker) *historySetup {
  function setupEdgeRoot (line 679) | func setupEdgeRoot(t *testing.T, lkr *c.Linker) *historySetup {
  type setupFunc (line 720) | type setupFunc
  function TestHistoryWalker (line 723) | func TestHistoryWalker(t *testing.T) {
  function testHistoryRunner (line 814) | func testHistoryRunner(t *testing.T, lkr *c.Linker, setup *historySetup) {
  function TestHistoryUtil (line 862) | func TestHistoryUtil(t *testing.T) {
  function TestHistoryWithNoParent (line 911) | func TestHistoryWithNoParent(t *testing.T) {
  function TestHistoryMovedDirsWithReloadedLinker (line 928) | func TestHistoryMovedDirsWithReloadedLinker(t *testing.T) {
  function TestHistoryOfMovedNestedDir (line 962) | func TestHistoryOfMovedNestedDir(t *testing.T) {

FILE: catfs/vcs/mapper.go
  type MapPair (line 38) | type MapPair struct
  type flags (line 49) | type flags struct
  type Mapper (line 68) | type Mapper struct
    method getFlags (line 77) | func (ma *Mapper) getFlags(path string) *flags {
    method setSrcVisited (line 90) | func (ma *Mapper) setSrcVisited(nd n.Node) {
    method setSrcHandled (line 94) | func (ma *Mapper) setSrcHandled(nd n.Node) {
    method setDstHandled (line 98) | func (ma *Mapper) setDstHandled(nd n.Node) {
    method setSrcComplete (line 102) | func (ma *Mapper) setSrcComplete(nd n.Node) {
    method setDstComplete (line 106) | func (ma *Mapper) setDstComplete(nd n.Node) {
    method isSrcVisited (line 110) | func (ma *Mapper) isSrcVisited(nd n.Node) bool {
    method isSrcHandled (line 114) | func (ma *Mapper) isSrcHandled(nd n.Node) bool {
    method isDstHandled (line 118) | func (ma *Mapper) isDstHandled(nd n.Node) bool {
    method isSrcComplete (line 122) | func (ma *Mapper) isSrcComplete(nd n.Node) bool {
    method isDstComplete (line 126) | func (ma *Mapper) isDstComplete(nd n.Node) bool {
    method report (line 132) | func (ma *Mapper) report(src, dst n.ModNode, typeMismatch, isRemove, i...
    method reportByType (line 151) | func (ma *Mapper) reportByType(src, dst n.ModNode) error {
    method mapFile (line 178) | func (ma *Mapper) mapFile(srcCurr *n.File, dstFilePath string) error {
    method mapDirectoryContents (line 241) | func (ma *Mapper) mapDirectoryContents(srcCurr *n.Directory, dstPath s...
    method mapDirectory (line 284) | func (ma *Mapper) mapDirectory(srcCurr *n.Directory, dstPath string, f...
    method ghostToAlive (line 367) | func (ma *Mapper) ghostToAlive(lkr *c.Linker, head *n.Commit, nd n.Nod...
    method handleGhostsWithoutAliveNd (line 435) | func (ma *Mapper) handleGhostsWithoutAliveNd(srcNd n.Node) error {
    method extractGhostDirs (line 456) | func (ma *Mapper) extractGhostDirs() ([]ghostDir, error) {
    method handleGhosts (line 549) | func (ma *Mapper) handleGhosts() error {
    method nodeIsHandled (line 596) | func (ma *Mapper) nodeIsHandled(nd n.Node, srcToDst bool) bool {
    method isComplete (line 604) | func (ma *Mapper) isComplete(lkr *c.Linker, root n.Node, srcToDst bool...
    method extractLeftovers (line 656) | func (ma *Mapper) extractLeftovers(lkr *c.Linker, root *n.Directory, s...
    method Map (line 759) | func (ma *Mapper) Map(fn func(pair MapPair) error) error {
  type ghostDir (line 427) | type ghostDir struct
  function NewMapper (line 569) | func NewMapper(lkrSrc, lkrDst *c.Linker, srcHead, dstHead *n.Commit, src...

FILE: catfs/vcs/mapper_test.go
  function mapperSetupBasicSame (line 12) | func mapperSetupBasicSame(t *testing.T, lkrSrc, lkrDst *c.Linker) []MapP...
  function mapperSetupBasicDiff (line 18) | func mapperSetupBasicDiff(t *testing.T, lkrSrc, lkrDst *c.Linker) []MapP...
  function mapperSetupBasicSrcTypeMismatch (line 30) | func mapperSetupBasicSrcTypeMismatch(t *testing.T, lkrSrc, lkrDst *c.Lin...
  function mapperSetupBasicDstTypeMismatch (line 45) | func mapperSetupBasicDstTypeMismatch(t *testing.T, lkrSrc, lkrDst *c.Lin...
  function mapperSetupBasicSrcAddFile (line 59) | func mapperSetupBasicSrcAddFile(t *testing.T, lkrSrc, lkrDst *c.Linker) ...
  function mapperSetupBasicDstAddFile (line 71) | func mapperSetupBasicDstAddFile(t *testing.T, lkrSrc, lkrDst *c.Linker) ...
  function mapperSetupBasicSrcAddDir (line 82) | func mapperSetupBasicSrcAddDir(t *testing.T, lkrSrc, lkrDst *c.Linker) [...
  function mapperSetupBasicDstAddDir (line 94) | func mapperSetupBasicDstAddDir(t *testing.T, lkrSrc, lkrDst *c.Linker) [...
  function mapperSetupSrcMoveFile (line 99) | func mapperSetupSrcMoveFile(t *testing.T, lkrSrc, lkrDst *c.Linker) []Ma...
  function mapperSetupDstMoveFile (line 114) | func mapperSetupDstMoveFile(t *testing.T, lkrSrc, lkrDst *c.Linker) []Ma...
  function mapperMoveNestedDir (line 129) | func mapperMoveNestedDir(t *testing.T, lkrSrc, lkrDst *c.Linker) []MapPa...
  function mapperSetupDstMoveDirEmpty (line 153) | func mapperSetupDstMoveDirEmpty(t *testing.T, lkrSrc, lkrDst *c.Linker) ...
  function mapperSetupDstMoveDir (line 170) | func mapperSetupDstMoveDir(t *testing.T, lkrSrc, lkrDst *c.Linker) []Map...
  function mapperSetupSrcMoveDir (line 189) | func mapperSetupSrcMoveDir(t *testing.T, lkrSrc, lkrDst *c.Linker) []Map...
  function mapperSetupMoveDirWithChild (line 208) | func mapperSetupMoveDirWithChild(t *testing.T, lkrSrc, lkrDst *c.Linker)...
  function mapperSetupSrcMoveWithExisting (line 227) | func mapperSetupSrcMoveWithExisting(t *testing.T, lkrSrc, lkrDst *c.Link...
  function mapperSetupSrcFileMoveToExistingEmptyDir (line 254) | func mapperSetupSrcFileMoveToExistingEmptyDir(t *testing.T, lkrSrc, lkrD...
  function mapperSetupDstMoveWithExisting (line 274) | func mapperSetupDstMoveWithExisting(t *testing.T, lkrSrc, lkrDst *c.Link...
  function mapperSetupNested (line 299) | func mapperSetupNested(t *testing.T, lkrSrc, lkrDst *c.Linker) []MapPair {
  function mapperSetupSrcRemove (line 339) | func mapperSetupSrcRemove(t *testing.T, lkrSrc, lkrDst *c.Linker) []MapP...
  function mapperSetupDstRemove (line 358) | func mapperSetupDstRemove(t *testing.T, lkrSrc, lkrDst *c.Linker) []MapP...
  function mapperSetupMoveOnBothSides (line 380) | func mapperSetupMoveOnBothSides(t *testing.T, lkrSrc, lkrDst *c.Linker) ...
  function TestMapper (line 400) | func TestMapper(t *testing.T) {

FILE: catfs/vcs/patch.go
  type Patch (line 20) | type Patch struct
    method Len (line 30) | func (p *Patch) Len() int {
    method Swap (line 34) | func (p *Patch) Swap(i, j int) {
    method Less (line 38) | func (p *Patch) Less(i, j int) bool {
    method toCapnpPatch (line 75) | func (p *Patch) toCapnpPatch(seg *capnp.Segment, capPatch capnp_patch....
    method ToCapnp (line 107) | func (p *Patch) ToCapnp() (*capnp.Message, error) {
    method fromCapnpPatch (line 160) | func (p *Patch) fromCapnpPatch(capPatch capnp_patch.Patch) error {
    method FromCapnp (line 182) | func (p *Patch) FromCapnp(msg *capnp.Message) error {
  type Patches (line 27) | type Patches
    method ToCapnp (line 122) | func (ps Patches) ToCapnp() (*capnp.Message, error) {
    method FromCapnp (line 192) | func (ps *Patches) FromCapnp(msg *capnp.Message) error {
  function buildPrefixTrie (line 218) | func buildPrefixTrie(prefixes []string) *trie.Node {
  function hasValidPrefix (line 231) | func hasValidPrefix(root *trie.Node, path string) bool {
  function filterInvalidMoveGhost (line 254) | func filterInvalidMoveGhost(lkr *c.Linker, child n.Node, combCh *Change,...
  function MakePatch (line 281) | func MakePatch(lkr *c.Linker, from *n.Commit, prefixes []string) (*Patch...
  function MakePatches (line 291) | func MakePatches(lkr *c.Linker, from *n.Commit, prefixes []string) (Patc...
  function MakePatchFromTo (line 334) | func MakePatchFromTo(lkr *c.Linker, from, to *n.Commit, prefixes []strin...
  function ApplyPatch (line 444) | func ApplyPatch(lkr *c.Linker, p *Patch) error {
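`MakePatch` restricts a patch to a set of path prefixes via a trie (`buildPrefixTrie` / `hasValidPrefix`). The observable effect is prefix matching on whole path components; a dependency-free sketch of an equivalent check is below (the real code walks a `trie.Node` instead of looping, and its exact edge-case handling is not shown here).

```go
package main

import (
	"fmt"
	"strings"
)

// hasValidPrefix reports whether path lies under any of the given
// directory prefixes, matching whole path components only, so that
// "/photos" covers "/photos/cat.png" but not "/photos-old/a.txt".
func hasValidPrefix(prefixes []string, path string) bool {
	if len(prefixes) == 0 {
		return true // no filter configured: everything matches
	}
	for _, pre := range prefixes {
		pre = strings.TrimSuffix(pre, "/")
		if path == pre || strings.HasPrefix(path, pre+"/") {
			return true
		}
	}
	return false
}

func main() {
	prefixes := []string{"/photos", "/docs/work"}
	fmt.Println(hasValidPrefix(prefixes, "/photos/cat.png"))   // true
	fmt.Println(hasValidPrefix(prefixes, "/photos-old/a.txt")) // false
	fmt.Println(hasValidPrefix(prefixes, "/docs/work"))        // true
}
```

Matching on component boundaries (the appended `"/"`) is the important detail; a naive `strings.HasPrefix(path, pre)` would wrongly accept sibling directories that merely share a name prefix.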

FILE: catfs/vcs/patch_test.go
  function TestPatchMarshalling (line 12) | func TestPatchMarshalling(t *testing.T) {
  function TestPrefixTrie (line 54) | func TestPrefixTrie(t *testing.T) {
  function TestMakePatch (line 73) | func TestMakePatch(t *testing.T) {
  function TestMakePatchWithOrderConflict (line 133) | func TestMakePatchWithOrderConflict(t *testing.T) {
  function TestMakePatchDirMoveAllChildren (line 182) | func TestMakePatchDirMoveAllChildren(t *testing.T) {
  function TestMakePatchDirMoveCompletely (line 229) | func TestMakePatchDirMoveCompletely(t *testing.T) {
  function TestSyncPartialTwiceWithMovedFile (line 273) | func TestSyncPartialTwiceWithMovedFile(t *testing.T) {

FILE: catfs/vcs/reset.go
  function findPathAt (line 14) | func findPathAt(lkr *c.Linker, cmt *n.Commit, path string) (string, erro...
  function clearPath (line 47) | func clearPath(lkr *c.Linker, ndPath string) (*n.Directory, error) {
  function ResetNode (line 102) | func ResetNode(lkr *c.Linker, cmt *n.Commit, currPath string) (n.Node, e...

FILE: catfs/vcs/reset_test.go
  function TestResetFile (line 14) | func TestResetFile(t *testing.T) {
  function TestFindPathAt (line 104) | func TestFindPathAt(t *testing.T) {
  function TestResetMovedFile (line 118) | func TestResetMovedFile(t *testing.T) {

FILE: catfs/vcs/resolve.go
  type executor (line 69) | type executor interface
  type resolver (line 84) | type resolver struct
    method resolve (line 125) | func (rv *resolver) resolve() error {
    method cacheLastCommonMerge (line 159) | func (rv *resolver) cacheLastCommonMerge() error {
    method hasConflictFile (line 207) | func (rv *resolver) hasConflictFile(dstNd n.ModNode) (bool, error) {
    method hasConflicts (line 238) | func (rv *resolver) hasConflicts(src, dst n.ModNode) (bool, ChangeType...
    method decide (line 319) | func (rv *resolver) decide(pair MapPair) error {
  function newResolver (line 100) | func newResolver(lkrSrc, lkrDst *c.Linker, srcHead, dstHead *n.Commit, e...
  function isConflictPath (line 202) | func isConflictPath(path string) bool {
  function pathOrNil (line 311) | func pathOrNil(nd n.Node) string {

FILE: catfs/vcs/resolve_test.go
  type expect (line 11) | type expect struct
  function setupResolveBasicNoConflict (line 22) | func setupResolveBasicNoConflict(t *testing.T, lkrSrc, lkrDst *c.Linker)...
  function TestHasConflicts (line 36) | func TestHasConflicts(t *testing.T) {

FILE: catfs/vcs/sync.go
  constant ConflictStragetyMarker (line 16) | ConflictStragetyMarker = iota
  constant ConflictStragetyIgnore (line 19) | ConflictStragetyIgnore
  constant ConflictStragetyEmbrace (line 22) | ConflictStragetyEmbrace
  constant ConflictStragetyUnknown (line 25) | ConflictStragetyUnknown
  type ConflictStrategy (line 30) | type ConflictStrategy
    method String (line 32) | func (cs ConflictStrategy) String() string {
  function ConflictStrategyFromString (line 47) | func ConflictStrategyFromString(spec string) ConflictStrategy {
  type PinStats (line 61) | type PinStats struct
  type SyncOptions (line 66) | type SyncOptions struct
  type syncer (line 84) | type syncer struct
    method add (line 90) | func (sy *syncer) add(src n.ModNode, srcParent, srcName string) error {
    method handleAdd (line 196) | func (sy *syncer) handleAdd(src n.ModNode) error {
    method handleMove (line 205) | func (sy *syncer) handleMove(src, dst n.ModNode) error {
    method handleMissing (line 223) | func (sy *syncer) handleMissing(dst n.ModNode) error {
    method handleRemove (line 230) | func (sy *syncer) handleRemove(dst n.ModNode) error {
    method getConflictStrategy (line 251) | func (sy *syncer) getConflictStrategy(nd n.ModNode) ConflictStrategy {
    method handleConflict (line 280) | func (sy *syncer) handleConflict(src, dst n.ModNode, srcMask, dstMask ...
    method handleMerge (line 326) | func (sy *syncer) handleMerge(src, dst n.ModNode, srcMask, dstMask Cha...
    method handleTypeConflict (line 398) | func (sy *syncer) handleTypeConflict(src, dst n.ModNode) error {
    method handleConflictNode (line 405) | func (sy *syncer) handleConflictNode(src n.ModNode) error {
  function isReadOnly (line 177) | func isReadOnly(folders map[string]bool, nodePaths ...string) bool {
  function Sync (line 416) | func Sync(lkrSrc, lkrDst *c.Linker, cfg *SyncOptions) error {
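The symbols above show an iota enum with `String()` and a `FromString` parser for conflict strategies (the `Stragety` spelling is verbatim from the source). A minimal standalone sketch of that pattern follows; the names, string values, and fallback behavior are illustrative assumptions, not copied from brig:

```go
package main

import "fmt"

// ConflictStrategy mirrors the iota-based enum seen in catfs/vcs/sync.go.
// The semantics in the comments are assumptions for illustration.
type ConflictStrategy int

const (
	ConflictStrategyMarker  ConflictStrategy = iota // keep both versions, mark the conflict
	ConflictStrategyIgnore                          // keep the local version
	ConflictStrategyEmbrace                         // take the remote version
	ConflictStrategyUnknown                         // sentinel for bad input
)

// String maps each strategy to its spec name.
func (cs ConflictStrategy) String() string {
	switch cs {
	case ConflictStrategyMarker:
		return "marker"
	case ConflictStrategyIgnore:
		return "ignore"
	case ConflictStrategyEmbrace:
		return "embrace"
	}
	return "unknown"
}

// ConflictStrategyFromString is the inverse mapping; unrecognized
// specs fall back to the Unknown sentinel instead of erroring.
func ConflictStrategyFromString(spec string) ConflictStrategy {
	switch spec {
	case "marker":
		return ConflictStrategyMarker
	case "ignore":
		return ConflictStrategyIgnore
	case "embrace":
		return ConflictStrategyEmbrace
	}
	return ConflictStrategyUnknown
}

func main() {
	for _, spec := range []string{"marker", "embrace", "bogus"} {
		fmt.Println(spec, "->", ConflictStrategyFromString(spec))
	}
}
```

The Unknown sentinel lets callers validate a config value with a single comparison instead of handling an error.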

FILE: catfs/vcs/sync_test.go
  function setupBasicSrcFile (line 13) | func setupBasicSrcFile(t *testing.T, lkrSrc, lkrDst *c.Linker) {
  function checkBasicSrcFile (line 17) | func checkBasicSrcFile(t *testing.T, lkrSrc, lkrDst *c.Linker) {
  function setupBasicDstFile (line 28) | func setupBasicDstFile(t *testing.T, lkrSrc, lkrDst *c.Linker) {
  function checkBasicDstFile (line 32) | func checkBasicDstFile(t *testing.T, lkrSrc, lkrDst *c.Linker) {
  function setupBasicBothNoConflict (line 42) | func setupBasicBothNoConflict(t *testing.T, lkrSrc, lkrDst *c.Linker) {
  function checkBasicBothNoConflict (line 47) | func checkBasicBothNoConflict(t *testing.T, lkrSrc, lkrDst *c.Linker) {
  function setupBasicBothConflict (line 63) | func setupBasicBothConflict(t *testing.T, lkrSrc, lkrDst *c.Linker) {
  function checkBasicBothConflict (line 68) | func checkBasicBothConflict(t *testing.T, lkrSrc, lkrDst *c.Linker) {
  function setupBasicRemove (line 87) | func setupBasicRemove(t *testing.T, lkrSrc, lkrDst *c.Linker) {
  function checkBasicRemove (line 97) | func checkBasicRemove(t *testing.T, lkrSrc, lkrDst *c.Linker) {
  function setupBasicSrcMove (line 105) | func setupBasicSrcMove(t *testing.T, lkrSrc, lkrDst *c.Linker) {
  function checkBasicSrcMove (line 115) | func checkBasicSrcMove(t *testing.T, lkrSrc, lkrDst *c.Linker) {
  function setupEdgeMoveDirAndModifyChild (line 131) | func setupEdgeMoveDirAndModifyChild(t *testing.T, lkrSrc, lkrDst *c.Link...
  function setupEdgeEmptyDir (line 142) | func setupEdgeEmptyDir(t *testing.T, lkrSrc, lkrDst *c.Linker) {
  function checkEdgeEmptyDir (line 148) | func checkEdgeEmptyDir(t *testing.T, lkrSrc, lkrDst *c.Linker) {
  function TestSync (line 154) | func TestSync(t *testing.T) {
  function TestSyncMergeMarker (line 213) | func TestSyncMergeMarker(t *testing.T) {
  function TestSyncConflictMergeMarker (line 252) | func TestSyncConflictMergeMarker(t *testing.T) {
  function TestSyncTwiceWithMovedFile (line 295) | func TestSyncTwiceWithMovedFile(t *testing.T) {
  function TestSyncConflictStrategyEmbrace (line 316) | func TestSyncConflictStrategyEmbrace(t *testing.T) {
  function TestSyncReadOnlyFolders (line 347) | func TestSyncReadOnlyFolders(t *testing.T) {

FILE: catfs/vcs/undelete.go
  function Undelete (line 13) | func Undelete(lkr *c.Linker, root string) error {

FILE: client/client.go
  type Client (line 15) | type Client struct
    method LocalAddr (line 55) | func (cl *Client) LocalAddr() net.Addr {
    method RemoteAddr (line 60) | func (cl *Client) RemoteAddr() net.Addr {
    method Close (line 65) | func (cl *Client) Close() error {
  function connFromURL (line 22) | func connFromURL(s string) (net.Conn, error) {
  function Dial (line 32) | func Dial(ctx context.Context, daemonURL string) (*Client, error) {
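`connFromURL` plus `Dial(ctx, daemonURL)` above suggest the client resolves a daemon URL into a `net.Conn` by scheme. A hedged sketch of that shape; the supported schemes (`unix`, `tcp`) are assumptions, not read from brig's source:

```go
package main

import (
	"fmt"
	"net"
	"net/url"
)

// connFromURL sketches one plausible shape for client.connFromURL:
// parse the daemon URL and dial the matching network. Scheme names
// here are hypothetical.
func connFromURL(s string) (net.Conn, error) {
	u, err := url.Parse(s)
	if err != nil {
		return nil, err
	}
	switch u.Scheme {
	case "unix":
		return net.Dial("unix", u.Path)
	case "tcp":
		return net.Dial("tcp", u.Host)
	}
	return nil, fmt.Errorf("unsupported scheme: %q", u.Scheme)
}

func main() {
	// Spin up a throwaway TCP listener to exercise the happy path.
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	defer ln.Close()

	conn, err := connFromURL("tcp://" + ln.Addr().String())
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	fmt.Println("connected to", conn.RemoteAddr())
}
```

Keeping URL parsing in one helper means `Dial` only has to wrap the resulting `net.Conn` in a `Client`.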

FILE: client/clienttest/daemon.go
  function StartDaemon (line 20) | func StartDaemon(name, backendName, ipfsPath string) (*server.Server, er...
  function WithDaemon (line 63) | func WithDaemon(name string, fn func(ctl *client.Client) error) error {
  function WithDaemonPair (line 95) | func WithDaemonPair(nameA, nameB string, fn func(ctlA, ctlB *client.Clie...

FILE: client/fs_cmds.go
  type StatInfo (line 18) | type StatInfo struct
  function convertHash (line 37) | func convertHash(hashBytes []byte, err error) (h.Hash, error) {
  function convertCapStatInfo (line 45) | func convertCapStatInfo(capInfo *capnp.StatInfo) (*StatInfo, error) {
  method List (line 117) | func (cl *Client) List(root string, maxDepth int) ([]StatInfo, error) {
  method Stage (line 148) | func (cl *Client) Stage(localPath, repoPath string) error {
  method StageFromReader (line 162) | func (cl *Client) StageFromReader(repoPath string, r io.Reader) error {
  method Cat (line 221) | func (cl *Client) Cat(path string, offline bool) (io.ReadCloser, error) {
  method CatOnClient (line 243) | func (cl *Client) CatOnClient(path string, offline bool, w io.Writer) er...
  method Tar (line 298) | func (cl *Client) Tar(path string, offline bool) (io.ReadCloser, error) {
  method Mkdir (line 320) | func (cl *Client) Mkdir(path string, createParents bool) error {
  method Remove (line 332) | func (cl *Client) Remove(path string) error {
  method Move (line 342) | func (cl *Client) Move(srcPath, dstPath string) error {
  method Copy (line 356) | func (cl *Client) Copy(srcPath, dstPath string) error {
  method Pin (line 370) | func (cl *Client) Pin(path string) error {
  method Unpin (line 380) | func (cl *Client) Unpin(path string) error {
  method Repin (line 390) | func (cl *Client) Repin(root string) error {
  method Stat (line 400) | func (cl *Client) Stat(path string) (*StatInfo, error) {
  method Touch (line 419) | func (cl *Client) Touch(path string) error {
  method Exists (line 429) | func (cl *Client) Exists(path string) (bool, error) {
  method Undelete (line 443) | func (cl *Client) Undelete(path string) error {
  method DeletedNodes (line 453) | func (cl *Client) DeletedNodes(root string) ([]StatInfo, error) {
  method IsCached (line 483) | func (cl *Client) IsCached(path string) (bool, error) {
  method RecodeStream (line 498) | func (cl *Client) RecodeStream(path string) error {

FILE: client/fs_test.go
  function init (line 20) | func init() {
  function stringify (line 27) | func stringify(err error) string {
  function withDaemon (line 35) | func withDaemon(t *testing.T, name string, fn func(ctl *client.Client)) {
  function withDaemonPair (line 45) | func withDaemonPair(t *testing.T, nameA, nameB string, fn func(ctlA, ctl...
  function TestStageAndCat (line 55) | func TestStageAndCat(t *testing.T) {
  function TestStageAndCatStream (line 79) | func TestStageAndCatStream(t *testing.T) {
  function TestMkdir (line 97) | func TestMkdir(t *testing.T) {
  function TestSyncBasic (line 134) | func TestSyncBasic(t *testing.T) {
  function pathsFromListing (line 160) | func pathsFromListing(l []client.StatInfo) []string {
  function TestSyncConflict (line 169) | func TestSyncConflict(t *testing.T) {
  function TestSyncSeveralTimes (line 221) | func TestSyncSeveralTimes(t *testing.T) {
  function TestSyncPartial (line 268) | func TestSyncPartial(t *testing.T) {
  function TestSyncMovedFile (line 362) | func TestSyncMovedFile(t *testing.T) {
  function TestSyncRemovedFile (line 389) | func TestSyncRemovedFile(t *testing.T) {
  function TestHints (line 423) | func TestHints(t *testing.T) {

FILE: client/net_cmds.go
  type RemoteFolder (line 17) | type RemoteFolder struct
  type Remote (line 24) | type Remote struct
  function capRemoteToRemote (line 33) | func capRemoteToRemote(capRemote capnp.Remote) (*Remote, error) {
  function remoteToCapRemote (line 84) | func remoteToCapRemote(remote Remote, seg *capnplib.Segment) (*capnp.Rem...
  method RemoteAddOrUpdate (line 138) | func (cl *Client) RemoteAddOrUpdate(remote Remote) error {
  method RemoteByName (line 154) | func (cl *Client) RemoteByName(name string) (Remote, error) {
  method RemoteUpdate (line 178) | func (cl *Client) RemoteUpdate(remote Remote) error {
  method RemoteRm (line 193) | func (cl *Client) RemoteRm(name string) error {
  method RemoteClear (line 203) | func (cl *Client) RemoteClear() error {
  method RemoteLs (line 213) | func (cl *Client) RemoteLs() ([]Remote, error) {
  method RemoteSave (line 243) | func (cl *Client) RemoteSave(remotes []Remote) error {
  type LocateResult (line 270) | type LocateResult struct
  function capLrToLr (line 277) | func capLrToLr(capLr capnp.LocateResult) (*LocateResult, error) {
  method NetLocate (line 309) | func (cl *Client) NetLocate(who, mask string, timeoutSec float64) (chan ...
  method RemotePing (line 364) | func (cl *Client) RemotePing(who string) (float64, error) {
  type Whoami (line 378) | type Whoami struct
  method Whoami (line 386) | func (cl *Client) Whoami() (*Whoami, error) {
  method NetConnect (line 422) | func (cl *Client) NetConnect() error {
  method NetDisconnect (line 430) | func (cl *Client) NetDisconnect() error {
  type RemoteStatus (line 439) | type RemoteStatus struct
  function capRemoteStatusToRemoteStatus (line 447) | func capRemoteStatusToRemoteStatus(capStatus capnp.RemoteStatus) (*Remot...
  method RemoteOnlineList (line 493) | func (cl *Client) RemoteOnlineList() ([]RemoteStatus, error) {
  method Push (line 524) | func (cl *Client) Push(remoteName string, dryRun bool) error {

FILE: client/net_test.go
  function TestPush (line 13) | func TestPush(t *testing.T) {

FILE: client/repo_cmds.go
  method Quit (line 13) | func (ctl *Client) Quit() error {
  method Ping (line 23) | func (ctl *Client) Ping() error {
  type MountOptions (line 38) | type MountOptions struct
  function mountOptionsToCapnp (line 44) | func mountOptionsToCapnp(opts MountOptions, seg *capnplib.Segment) (*cap...
  method Mount (line 60) | func (ctl *Client) Mount(mountPath string, opts MountOptions) error {
  method Unmount (line 79) | func (ctl *Client) Unmount(mountPath string) error {
  method ConfigGet (line 89) | func (ctl *Client) ConfigGet(key string) (string, error) {
  method ConfigSet (line 103) | func (ctl *Client) ConfigSet(key, value string) error {
  type ConfigEntry (line 117) | type ConfigEntry struct
  function configEntryFromCapnp (line 125) | func configEntryFromCapnp(capEntry capnp.ConfigEntry) (*ConfigEntry, err...
  method ConfigAll (line 156) | func (ctl *Client) ConfigAll() ([]ConfigEntry, error) {
  method ConfigDoc (line 186) | func (ctl *Client) ConfigDoc(key string) (ConfigEntry, error) {
  type VersionInfo (line 210) | type VersionInfo struct
  method Version (line 218) | func (ctl *Client) Version() (*VersionInfo, error) {
  method FstabAdd (line 259) | func (ctl *Client) FstabAdd(mountName, mountPath string, opts MountOptio...
  method FstabRemove (line 282) | func (ctl *Client) FstabRemove(mountName string) error {
  method FstabApply (line 293) | func (ctl *Client) FstabApply() error {
  method FstabUnmountAll (line 303) | func (ctl *Client) FstabUnmountAll() error {
  type FsTabEntry (line 313) | type FsTabEntry struct
  function capMountToMount (line 322) | func capMountToMount(capEntry capnp.FsTabEntry) (*FsTabEntry, error) {
  method FsTabList (line 349) | func (ctl *Client) FsTabList() ([]FsTabEntry, error) {
  type GarbageItem (line 379) | type GarbageItem struct
  method GarbageCollect (line 386) | func (ctl *Client) GarbageCollect(aggressive bool) ([]*GarbageItem, erro...
  method Become (line 435) | func (ctl *Client) Become(who string) error {
  type GatewayUser (line 445) | type GatewayUser struct
  method GatewayUserAdd (line 456) | func (ctl *Client) GatewayUserAdd(name, password string, folders, rights...
  method GatewayUserRemove (line 506) | func (ctl *Client) GatewayUserRemove(name string) error {
  method GatewayUserList (line 516) | func (ctl *Client) GatewayUserList() ([]GatewayUser, error) {
  method DebugProfilePort (line 553) | func (ctl *Client) DebugProfilePort() (int, error) {
  type Hint (line 567) | type Hint struct
  method HintSet (line 580) | func (ctl *Client) HintSet(path string, compressionAlgo, encryptionAlgo ...
  method HintRemove (line 611) | func (ctl *Client) HintRemove(path string) error {
  function convertCapHint (line 620) | func convertCapHint(capHint capnp.Hint) (*Hint, error) {
  method HintList (line 644) | func (ctl *Client) HintList() ([]Hint, error) {

FILE: client/vcs_cmds.go
  method MakeCommit (line 13) | func (ctl *Client) MakeCommit(msg string) error {
  type Commit (line 23) | type Commit struct
  function convertCapCommit (line 30) | func convertCapCommit(capEntry *capnp.Commit) (*Commit, error) {
  method Log (line 71) | func (ctl *Client) Log() ([]Commit, error) {
  method Tag (line 101) | func (ctl *Client) Tag(rev, name string) error {
  method Untag (line 115) | func (ctl *Client) Untag(name string) error {
  method Reset (line 126) | func (ctl *Client) Reset(path, rev string, force bool) error {
  type Change (line 141) | type Change struct
  method History (line 155) | func (ctl *Client) History(path string) ([]*Change, error) {
  type DiffPair (line 232) | type DiffPair struct
  type Diff (line 238) | type Diff struct
    method IsEmpty (line 250) | func (df *Diff) IsEmpty() bool {
  function convertDiffList (line 261) | func convertDiffList(lst capnp.StatInfo_List) ([]StatInfo, error) {
  function convertDiffPairList (line 277) | func convertDiffPairList(lst capnp.DiffPair_List) ([]DiffPair, error) {
  function convertCapDiffToDiff (line 310) | func convertCapDiffToDiff(capDiff capnp.Diff) (*Diff, error) {
  method MakeDiff (line 388) | func (ctl *Client) MakeDiff(local, remote, localRev, remoteRev string, n...
  method Fetch (line 420) | func (ctl *Client) Fetch(remote string) error {
  method Sync (line 431) | func (ctl *Client) Sync(remote string, needFetch bool) (*Diff, error) {
  method CommitInfo (line 451) | func (ctl *Client) CommitInfo(rev string) (bool, *Commit, error) {

FILE: cmd/bug.go
  constant reportURL (line 20) | reportURL = "https://github.com/sahib/brig/issues/new?"
  function printError (line 24) | func printError(msg string) {
  function cmdOutput (line 30) | func cmdOutput(path string, args ...string) string {
  function handleBugReport (line 42) | func handleBugReport(ctx *cli.Context) error {

FILE: cmd/debug.go
  function handleDebugPprofPort (line 19) | func handleDebugPprofPort(ctx *cli.Context, ctl *client.Client) error {
  function readDebugKey (line 35) | func readDebugKey(ctx *cli.Context) ([]byte, error) {
  function handleDebugDecodeStream (line 45) | func handleDebugDecodeStream(ctx *cli.Context) error {
  function handleDebugEncodeStream (line 78) | func handleDebugEncodeStream(ctx *cli.Context) error {
  function readStreamSized (line 102) | func readStreamSized(ctx *cli.Context) (uint64, error) {
  function handleDebugTenSource (line 106) | func handleDebugTenSource(ctx *cli.Context) error {
  function handleDebugTenSink (line 117) | func handleDebugTenSink(ctx *cli.Context) error {
  function handleDebugFuseMock (line 136) | func handleDebugFuseMock(ctx *cli.Context) error {

FILE: cmd/exit_codes.go
  constant Success (line 5) | Success = iota
  constant BadArgs (line 8) | BadArgs
  constant BadPassword (line 11) | BadPassword
  constant DaemonNotResponding (line 15) | DaemonNotResponding
  constant UnknownError (line 18) | UnknownError

FILE: cmd/fs_handlers.go
  function handleStage (line 28) | func handleStage(ctx *cli.Context, ctl *client.Client) error {
  type twins (line 59) | type twins struct
  type walkOptions (line 64) | type walkOptions struct
  function walk (line 69) | func walk(root, repoRoot string, depth int, opt walkOptions) (map[string...
  function makeParentDirIfNeeded (line 155) | func makeParentDirIfNeeded(ctx *cli.Context, ctl *client.Client, path st...
  function handleStageDirectory (line 171) | func handleStageDirectory(ctx *cli.Context, ctl *client.Client, root, re...
  function handleCat (line 287) | func handleCat(ctx *cli.Context, ctl *client.Client) error {
  function handleRm (line 324) | func handleRm(ctx *cli.Context, ctl *client.Client) error {
  function handleMv (line 337) | func handleMv(ctx *cli.Context, ctl *client.Client) error {
  function handleCp (line 343) | func handleCp(ctx *cli.Context, ctl *client.Client) error {
  function colorForSize (line 349) | func colorForSize(size uint64) func(f string, a ...interface{}) string {
  function userPrefixMap (line 366) | func userPrefixMap(users []string) map[string]string {
  function formatHint (line 399) | func formatHint(hint client.Hint) string {
  function handleList (line 403) | func handleList(ctx *cli.Context, ctl *client.Client) error {
  function handleTree (line 493) | func handleTree(ctx *cli.Context, ctl *client.Client) error {
  function handleMkdir (line 513) | func handleMkdir(ctx *cli.Context, ctl *client.Client) error {
  function handleShow (line 524) | func handleShow(ctx *cli.Context, ctl *client.Client) error {
  function handleShowCommit (line 538) | func handleShowCommit(ctx *cli.Context, ctl *client.Client, cmt *client....
  function handleShowFileOrDir (line 586) | func handleShowFileOrDir(ctx *cli.Context, ctl *client.Client, path stri...
  function handleEdit (line 653) | func handleEdit(ctx *cli.Context, ctl *client.Client) error {
  function handleTouch (line 690) | func handleTouch(ctx *cli.Context, ctl *client.Client) error {
  function handleTrashList (line 695) | func handleTrashList(ctx *cli.Context, ctl *client.Client) error {
  function handleTrashRemove (line 713) | func handleTrashRemove(ctx *cli.Context, ctl *client.Client) error {

FILE: cmd/help.go
  type helpEntry (line 12) | type helpEntry struct
  function die (line 20) | func die(msg string) {
  function compressionHintsToBullets (line 26) | func compressionHintsToBullets() string {
  function encryptionHintsToBullets (line 42) | func encryptionHintsToBullets() string {
  function injectHelp (line 1697) | func injectHelp(cmd *cli.Command, path string) {
  function translateHelp (line 1710) | func translateHelp(cmds []cli.Command, prefix []string) {
  function TranslateHelp (line 1721) | func TranslateHelp(cmds []cli.Command) []cli.Command {
  function handleOpenHelp (line 1727) | func handleOpenHelp(ctx *cli.Context) error {

FILE: cmd/init.go
  function Init (line 10) | func Init(ctx *cli.Context, ipfsPathOrMultiaddr string, opts repo.InitOp...

FILE: cmd/inode_other.go
  function inodeString (line 8) | func inodeString(path string) (string, error) {

FILE: cmd/inode_unix.go
  function inodeString (line 11) | func inodeString(path string) (string, error) {

FILE: cmd/iobench.go
  function allBenchmarks (line 16) | func allBenchmarks() []string {
  function printStats (line 28) | func printStats(s bench.Stats) {
  type benchmarkRun (line 37) | type benchmarkRun struct
  function handleIOBench (line 42) | func handleIOBench(ctx *cli.Context) error {
  function drawHeading (line 130) | func drawHeading(heading string) {
  function drawBench (line 137) | func drawBench(result bench.Result, ref time.Duration, inputSize uint64) {
  function handleIOBenchList (line 153) | func handleIOBenchList(ctx *cli.Context) error {

FILE: cmd/log.go
  function logVerbose (line 11) | func logVerbose(ctx *cli.Context, format string, args ...interface{}) {

FILE: cmd/net_handlers.go
  function handleOffline (line 18) | func handleOffline(ctx *cli.Context, ctl *client.Client) error {
  function handleOnline (line 22) | func handleOnline(ctx *cli.Context, ctl *client.Client) error {
  function handleIsOnline (line 26) | func handleIsOnline(ctx *cli.Context, ctl *client.Client) error {
  function handleRemoteList (line 41) | func handleRemoteList(ctx *cli.Context, ctl *client.Client) error {
  function nFoldersToIcon (line 49) | func nFoldersToIcon(nFolders int) string {
  function handleRemoteListOffline (line 57) | func handleRemoteListOffline(ctx *cli.Context, ctl *client.Client) error {
  function handleRemoteListOnline (line 111) | func handleRemoteListOnline(ctx *cli.Context, ctl *client.Client) error {
  constant remoteHelpText (line 210) | remoteHelpText = `# No remotes yet. Uncomment the next lines for an exam...
  function remoteListToYml (line 216) | func remoteListToYml(remotes []client.Remote) ([]byte, error) {
  function ymlToRemoteList (line 225) | func ymlToRemoteList(data []byte) ([]client.Remote, error) {
  function handleRemoteAdd (line 235) | func handleRemoteAdd(ctx *cli.Context, ctl *client.Client) error {
  function handleRemoteAutoUpdate (line 264) | func handleRemoteAutoUpdate(ctx *cli.Context, ctl *client.Client) error {
  function handleRemoteAcceptPush (line 297) | func handleRemoteAcceptPush(ctx *cli.Context, ctl *client.Client) error {
  function handleRemoteConflictStrategy (line 324) | func handleRemoteConflictStrategy(ctx *cli.Context, ctl *client.Client) ...
  function handleRemoteRemove (line 340) | func handleRemoteRemove(ctx *cli.Context, ctl *client.Client) error {
  function handleRemoteClear (line 349) | func handleRemoteClear(ctx *cli.Context, ctl *client.Client) error {
  function handleRemoteEdit (line 353) | func handleRemoteEdit(ctx *cli.Context, ctl *client.Client) error {
  function findRemoteForName (line 388) | func findRemoteForName(ctl *client.Client, name string) (*client.Remote,...
  function handleRemoteFolderAdd (line 403) | func handleRemoteFolderAdd(ctx *cli.Context, ctl *client.Client) error {
  function handleRemoteFolderSet (line 407) | func handleRemoteFolderSet(ctx *cli.Context, ctl *client.Client) error {
  function handleRemoteFolderAddOrReplace (line 411) | func handleRemoteFolderAddOrReplace(ctx *cli.Context, ctl *client.Client...
  function handleRemoteFolderRemove (line 468) | func handleRemoteFolderRemove(ctx *cli.Context, ctl *client.Client) error {
  function handleRemoteFolderClear (line 488) | func handleRemoteFolderClear(ctx *cli.Context, ctl *client.Client) error {
  function handleRemoteFolderList (line 498) | func handleRemoteFolderList(ctx *cli.Context, ctl *client.Client) error {
  function handleRemoteFolderListAll (line 525) | func handleRemoteFolderListAll(ctx *cli.Context, ctl *client.Client) err...
  function handleNetLocate (line 550) | func handleNetLocate(ctx *cli.Context, ctl *client.Client) error {
  function handleRemotePing (line 606) | func handleRemotePing(ctx *cli.Context, ctl *client.Client) error {
  function handlePin (line 623) | func handlePin(ctx *cli.Context, ctl *client.Client) error {
  function handleUnpin (line 628) | func handleUnpin(ctx *cli.Context, ctl *client.Client) error {
  function handleRepin (line 633) | func handleRepin(ctx *cli.Context, ctl *client.Client) error {
  function handleWhoami (line 642) | func handleWhoami(ctx *cli.Context, ctl *client.Client) error {
  function handlePush (line 707) | func handlePush(ctx *cli.Context, ctl *client.Client) error {

FILE: cmd/parser.go
  function init (line 20) | func init() {
  function formatGroup (line 43) | func formatGroup(category string) string {
  function memProfile (line 47) | func memProfile() {
  function startCPUProfile (line 66) | func startCPUProfile() *os.File {
  function stopCPUProfile (line 85) | func stopCPUProfile(fd *os.File) {
  function RunCmdline (line 99) | func RunCmdline(args []string) int {

FILE: cmd/pwd/pwd-util/pwd-util.go
  function main (line 11) | func main() {

FILE: cmd/pwd/pwd.go
  constant msgLowEntropy (line 15) | msgLowEntropy  = "Please enter a password with at least %g bits entropy."
  constant msgReEnter (line 16) | msgReEnter     = "Well done! Please re-type your password now for safety:"
  constant msgBadPassword (line 17) | msgBadPassword = "This did not seem to match. Please retype it again."
  constant msgMaxTriesHit (line 18) | msgMaxTriesHit = "Maximum number of password tries exceeded: %d"
  function doPromptLine (line 21) | func doPromptLine(rl *readline.Instance, prompt string, hide bool) ([]by...
  function createStrengthPrompt (line 40) | func createStrengthPrompt(password []rune, prefix string) string {
  function PromptNewPassword (line 71) | func PromptNewPassword(minEntropy float64) ([]byte, error) {
  function promptPassword (line 131) | func promptPassword(prompt string) ([]byte, error) {
  function PromptPassword (line 144) | func PromptPassword() ([]byte, error) {

FILE: cmd/pwd/pwd_test.go
  function TestLongPassword (line 11) | func TestLongPassword(t *testing.T) {
  function BenchmarkLongPassword (line 17) | func BenchmarkLongPassword(b *testing.B) {

FILE: cmd/repo_handlers.go
  constant brigLogo (line 34) | brigLogo = `
  constant initBanner (line 48) | initBanner = `
  function createInitialReadme (line 55) | func createInitialReadme(ctl *client.Client, folder string) error {
  function isMultiAddr (line 102) | func isMultiAddr(ipfsPathOrMultiaddr string) bool {
  function handleInit (line 107) | func handleInit(ctx *cli.Context) error {
  function handleInitPost (line 212) | func handleInitPost(ctx *cli.Context, ctl *client.Client, folder string)...
  function printConfigDocEntry (line 230) | func printConfigDocEntry(entry client.ConfigEntry) {
  function handleConfigList (line 254) | func handleConfigList(cli *cli.Context, ctl *client.Client) error {
  function handleConfigGet (line 267) | func handleConfigGet(ctx *cli.Context, ctl *client.Client) error {
  function handleConfigSet (line 280) | func handleConfigSet(ctx *cli.Context, ctl *client.Client) error {
  function handleConfigDoc (line 304) | func handleConfigDoc(ctx *cli.Context, ctl *client.Client) error {
  function handleDaemonPing (line 315) | func handleDaemonPing(ctx *cli.Context, ctl *client.Client) error {
  function handleDaemonQuit (line 340) | func handleDaemonQuit(ctx *cli.Context, ctl *client.Client) error {
  function switchToSyslog (line 351) | func switchToSyslog() {
  function handleDaemonLaunch (line 381) | func handleDaemonLaunch(ctx *cli.Context) error {
  function handleMount (line 459) | func handleMount(ctx *cli.Context, ctl *client.Client) error {
  function handleUnmount (line 494) | func handleUnmount(ctx *cli.Context, ctl *client.Client) error {
  function handleVersion (line 511) | func handleVersion(ctx *cli.Context, ctl *client.Client) error {
  function handleGc (line 532) | func handleGc(ctx *cli.Context, ctl *client.Client) error {
  function handleFstabAdd (line 564) | func handleFstabAdd(ctx *cli.Context, ctl *client.Client) error {
  function handleFstabRemove (line 577) | func handleFstabRemove(ctx *cli.Context, ctl *client.Client) error {
  function handleFstabApply (line 582) | func handleFstabApply(ctx *cli.Context, ctl *client.Client) error {
  function handleFstabUnmounetAll (line 590) | func handleFstabUnmounetAll(ctx *cli.Context, ctl *client.Client) error {
  function handleFstabList (line 594) | func handleFstabList(ctx *cli.Context, ctl *client.Client) error {
  function handleGatewayStart (line 642) | func handleGatewayStart(ctx *cli.Context, ctl *client.Client) error {
  function handleGatewayStatus (line 668) | func handleGatewayStatus(ctx *cli.Context, ctl *client.Client) error {
  function handleGatewayStop (line 729) | func handleGatewayStop(ctx *cli.Context, ctl *client.Client) error {
  function handleGatewayURL (line 748) | func handleGatewayURL(ctx *cli.Context, ctl *client.Client) error {
  function handleGatewayUserAdd (line 772) | func handleGatewayUserAdd(ctx *cli.Context, ctl *client.Client) error {
  function handleGatewayUserRemove (line 829) | func handleGatewayUserRemove(ctx *cli.Context, ctl *client.Client) error {
  function handleGatewayUserList (line 839) | func handleGatewayUserList(ctx *cli.Context, ctl *client.Client) error {
  function readPassword (line 884) | func readPassword(ctx *cli.Context, isNew bool) ([]byte, error) {
  function handleRepoPack (line 918) | func handleRepoPack(ctx *cli.Context) error {
  function handleRepoUnpack (line 954) | func handleRepoUnpack(ctx *cli.Context) error {
  function optionalStringParamAsPtr (line 991) | func optionalStringParamAsPtr(ctx *cli.Context, name string) *string {
  function handleRepoHintsSet (line 999) | func handleRepoHintsSet(ctx *cli.Context, ctl *client.Client) error {
  function handleRepoHintsList (line 1030) | func handleRepoHintsList(ctx *cli.Context, ctl *client.Client) error {
  function handleRepoHintsRemove (line 1060) | func handleRepoHintsRemove(ctx *cli.Context, ctl *client.Client) error {
  function handleRepoHintsRecode (line 1064) | func handleRepoHintsRecode(ctx *cli.Context, ctl *client.Client) error {

FILE: cmd/suggest.go
  type suggestion (line 17) | type suggestion struct
  function levenshteinRatio (line 22) | func levenshteinRatio(s, t string) float64 {
  function findLastGoodCommands (line 32) | func findLastGoodCommands(ctx *cli.Context) ([]string, []cli.Command) {
  function findSimilarCommands (line 64) | func findSimilarCommands(cmdName string, cmds []cli.Command) []suggestion {
  function findCurrentCommand (line 106) | func findCurrentCommand(ctx *cli.Context) *cli.Command {
  function completeLocalPath (line 128) | func completeLocalPath(ctx *cli.Context) {
  function completeBrigPath (line 139) | func completeBrigPath(allowFiles, allowDirs bool) func(ctx *cli.Context) {
  function completeArgsUsage (line 173) | func completeArgsUsage(ctx *cli.Context) {
  function completeLocalFile (line 189) | func completeLocalFile(ctx *cli.Context) {
  function completeSubcommands (line 224) | func completeSubcommands(ctx *cli.Context) {
  function commandNotFound (line 232) | func commandNotFound(ctx *cli.Context, cmdName string) {

FILE: cmd/tabwriter/example_test.go
  function ExampleWriter_Init (line 13) | func ExampleWriter_Init() {
  function Example_elastic (line 40) | func Example_elastic() {
  function Example_trailingTab (line 57) | func Example_trailingTab() {

FILE: cmd/tabwriter/tabwriter.go
  type cell (line 28) | type cell struct
  type Writer (line 92) | type Writer struct
    method addLine (line 110) | func (b *Writer) addLine() { b.lines = append(b.lines, []cell{}) }
    method reset (line 113) | func (b *Writer) reset() {
    method Init (line 187) | func (b *Writer) Init(output io.Writer, minwidth, tabwidth, padding in...
    method dump (line 210) | func (b *Writer) dump() {
    method write0 (line 229) | func (b *Writer) write0(buf []byte) {
    method writeN (line 239) | func (b *Writer) writeN(src []byte, n int) {
    method writePadding (line 252) | func (b *Writer) writePadding(textw, cellw int, useTabs bool) {
    method writeLines (line 274) | func (b *Writer) writeLines(pos0 int, line0, line1 int) (pos int) {
    method format (line 330) | func (b *Writer) format(pos0 int, line0, line1 int) (pos int) {
    method append (line 389) | func (b *Writer) append(text []byte) {
    method updateWidth (line 395) | func (b *Writer) updateWidth() {
    method startEscape (line 413) | func (b *Writer) startEscape(ch byte) {
    method endEscape (line 431) | func (b *Writer) endEscape() {
    method terminateCell (line 450) | func (b *Writer) terminateCell(htab bool) int {
    method Flush (line 472) | func (b *Writer) Flush() error {
    method flush (line 476) | func (b *Writer) flush() (err error) {
    method Write (line 500) | func (b *Writer) Write(buf []byte) (n int, err error) {
  constant FilterHTML (line 150) | FilterHTML uint = 1 << iota
  constant StripEscape (line 154) | StripEscape
  constant AlignRight (line 158) | AlignRight
  constant DiscardEmptyColumns (line 162) | DiscardEmptyColumns
  constant TabIndent (line 166) | TabIndent
  constant Debug (line 170) | Debug
  type osError (line 225) | type osError struct
  constant Escape (line 407) | Escape = '\xff'
  constant ColorStart (line 410) | ColorStart = '\x1B'
  function handlePanic (line 458) | func handlePanic(err *error, op string) {
  function NewWriter (line 584) | func NewWriter(output io.Writer, minwidth, tabwidth, padding int, padcha...
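The symbols above match the standard library's `text/tabwriter` (`NewWriter`, `Init`, `Flush`, the same flag constants), with an added `ColorStart = '\x1B'` constant, presumably so ANSI color escapes do not distort cell widths. Since the public API appears identical, a stdlib example illustrates usage; whether brig's fork behaves identically in every detail is an assumption:

```go
package main

import (
	"bytes"
	"fmt"
	"text/tabwriter"
)

// renderTable aligns tab-separated rows into padded columns:
// minwidth=0, tabwidth=8, padding=2, padchar=' ', no flags.
func renderTable() string {
	var buf bytes.Buffer
	w := tabwriter.NewWriter(&buf, 0, 8, 2, ' ', 0)
	fmt.Fprintln(w, "NAME\tSIZE\tPINNED")
	fmt.Fprintln(w, "/photos/a.jpg\t2.1MB\tyes")
	fmt.Fprintln(w, "/docs/readme.md\t4KB\tno")
	w.Flush() // Flush computes column widths and emits the buffered lines
	return buf.String()
}

func main() {
	fmt.Print(renderTable())
}
```

Nothing is written until `Flush`, because column widths depend on every cell in the column.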

FILE: cmd/tabwriter/tabwriter_test.go
  type buffer (line 13) | type buffer struct
    method init (line 17) | func (b *buffer) init(n int) { b.a = make([]byte, 0, n) }
    method clear (line 19) | func (b *buffer) clear() { b.a = b.a[0:0] }
    method Write (line 21) | func (b *buffer) Write(buf []byte) (written int, err error) {
    method String (line 35) | func (b *buffer) String() string { return string(b.a) }
  function write (line 37) | func write(t *testing.T, testname string, w *Writer, src string) {
  function verify (line 47) | func verify(t *testing.T, testname string, w *Writer, b *buffer, src, ex...
  function check (line 59) | func check(t *testing.T, testname string, minwidth, tabwidth, padding in...
  function Test (line 611) | func Test(t *testing.T) {
  type panicWriter (line 617) | type panicWriter struct
    method Write (line 619) | func (panicWriter) Write([]byte) (int, error) {
  function wantPanicString (line 623) | func wantPanicString(t *testing.T, want string) {
  function TestPanicDuringFlush (line 635) | func TestPanicDuringFlush(t *testing.T) {
  function TestPanicDuringWrite (line 645) | func TestPanicDuringWrite(t *testing.T) {

FILE: cmd/tree.go
  type treeNode (line 20) | type treeNode struct
    method Insert (line 37) | func (n *treeNode) Insert(entry client.StatInfo) {
    method Len (line 84) | func (n *treeNode) Len() int {
    method Swap (line 88) | func (n *treeNode) Swap(i, j int) {
    method Less (line 96) | func (n *treeNode) Less(i, j int) bool {
    method Print (line 120) | func (n *treeNode) Print(cfg *treeCfg) {
  type treeCfg (line 30) | type treeCfg struct
  function showTree (line 182) | func showTree(entries []client.StatInfo, cfg *treeCfg) {

FILE: cmd/util.go
  type ExitCode (line 37) | type ExitCode struct
    method Error (line 42) | func (err ExitCode) Error() string {
  function mustAbsPath (line 46) | func mustAbsPath(path string) string {
  function yesify (line 56) | func yesify(val bool) string {
  function checkmarkify (line 64) | func checkmarkify(val bool) string {
  function guessRepoFolder (line 74) | func guessRepoFolder(ctx *cli.Context) (string, error) {
  function openConfig (line 99) | func openConfig(folder string) (*config.Config, error) {
  function guessDaemonURL (line 109) | func guessDaemonURL(ctx *cli.Context) (string, error) {
  function guessFreeDaemonURL (line 133) | func guessFreeDaemonURL(ctx *cli.Context, owner string) (string, error) {
  function prefixSlash (line 165) | func prefixSlash(s string) string {
  type cmdHandlerWithClient (line 173) | type cmdHandlerWithClient
  function getExecutablePath (line 175) | func getExecutablePath() (string, error) {
  function startDaemon (line 186) | func startDaemon(ctx *cli.Context, repoPath, daemonURL string) (*client....
  function isDaemonRunning (line 248) | func isDaemonRunning(ctx *cli.Context) (bool, error) {
  function withDaemon (line 266) | func withDaemon(handler cmdHandlerWithClient, startNew bool) cli.ActionF...
  type checkFunc (line 313) | type checkFunc
  function withArgCheck (line 315) | func withArgCheck(checker checkFunc, handler cli.ActionFunc) cli.ActionF...
  function prettyPrintError (line 325) | func prettyPrintError(err error) string {
  function needAtLeast (line 329) | func needAtLeast(min int) checkFunc {
  function isNonEmptyDir (line 349) | func isNonEmptyDir(dir string) (bool, error) {
  function tempFileWithSuffix (line 374) | func tempFileWithSuffix(dir, prefix, suffix string) (f *os.File, err err...
  function editToPath (line 393) | func editToPath(data []byte, suffix string) (string, error) {
  function edit (line 445) | func edit(data []byte, suffix string) ([]byte, error) {
  function parseDuration (line 470) | func parseDuration(s string) (float64, error) {
  function readFormatTemplate (line 484) | func readFormatTemplate(ctx *cli.Context) (*template.Template, error) {
  function pinStateToSymbol (line 499) | func pinStateToSymbol(isPinned, isExplicit bool) string {
  function yesOrNo (line 512) | func yesOrNo(v bool) string {
  type logWriter (line 520) | type logWriter struct
    method Write (line 522) | func (lw *logWriter) Write(buf []byte) (int, error) {

FILE: cmd/vcs_handlers.go
  function handleReset (line 19) | func handleReset(ctx *cli.Context, ctl *client.Client) error {
  function commitName (line 35) | func commitName(cmt *client.Commit) string {
  function handleHistory (line 47) | func handleHistory(ctx *cli.Context, ctl *client.Client) error {
  function makePathAbbrev (line 141) | func makePathAbbrev(srcNd, dstNd client.StatInfo) string {
  function suffixIfDir (line 173) | func suffixIfDir(nd *treeNode) string {
  constant diffTypeNone (line 182) | diffTypeNone = iota
  constant diffTypeAdded (line 183) | diffTypeAdded
  constant diffTypeRemoved (line 184) | diffTypeRemoved
  constant diffTypeMissing (line 185) | diffTypeMissing
  constant diffTypeMoved (line 186) | diffTypeMoved
  constant diffTypeIgnored (line 187) | diffTypeIgnored
  constant diffTypeConflict (line 188) | diffTypeConflict
  constant diffTypeMerged (line 189) | diffTypeMerged
  type diffEntry (line 192) | type diffEntry struct
  function printDiffTreeLineFormatter (line 198) | func printDiffTreeLineFormatter(types map[string]diffEntry, n *treeNode)...
  function printDiffTree (line 253) | func printDiffTree(diff *client.Diff, printMissing bool) {
  function isEmptyDiff (line 320) | func isEmptyDiff(diff *client.Diff) bool {
  function printDiff (line 331) | func printDiff(diff *client.Diff, printMissing bool) {
  function handleDiff (line 426) | func handleDiff(ctx *cli.Context, ctl *client.Client) error {
  function handleFetch (line 489) | func handleFetch(ctx *cli.Context, ctl *client.Client) error {
  function handleSync (line 494) | func handleSync(ctx *cli.Context, ctl *client.Client) error {
  function handleSyncSingle (line 520) | func handleSyncSingle(ctx *cli.Context, ctl *client.Client, remoteName s...
  function handleStatus (line 544) | func handleStatus(ctx *cli.Context, ctl *client.Client) error {
  function handleBecome (line 565) | func handleBecome(ctx *cli.Context, ctl *client.Client) error {
  function handleCommit (line 602) | func handleCommit(ctx *cli.Context, ctl *client.Client) error {
  function handleTag (line 622) | func handleTag(ctx *cli.Context, ctl *client.Client) error {
  function handleLog (line 651) | func handleLog(ctx *cli.Context, ctl *client.Client) error {

FILE: defaults/defaults.go
  constant CurrentVersion (line 11) | CurrentVersion = 0
  function OpenMigratedConfig (line 19) | func OpenMigratedConfig(path string) (*config.Config, error) {

FILE: defaults/defaults_v0.go
  function DaemonDefaultURL (line 12) | func DaemonDefaultURL() string {
  function urlValidator (line 28) | func urlValidator(val interface{}) error {

FILE: docs/conf.py
  function get_version_from_git (line 66) | def get_version_from_git():
  function setup (line 286) | def setup(app):

FILE: events/backend/backend.go
  type Message (line 11) | type Message interface
  type Subscription (line 20) | type Subscription interface
  type Backend (line 29) | type Backend interface

FILE: events/capnp/events_api.capnp.go
  type Event (line 11) | type Event struct
    method String (line 31) | func (s Event) String() string {
    method Type (line 36) | func (s Event) Type() (string, error) {
    method HasType (line 41) | func (s Event) HasType() bool {
    method TypeBytes (line 46) | func (s Event) TypeBytes() ([]byte, error) {
    method SetType (line 51) | func (s Event) SetType(v string) error {
  constant Event_TypeID (line 14) | Event_TypeID = 0x9c032508b61d1d09
  function NewEvent (line 16) | func NewEvent(s *capnp.Segment) (Event, error) {
  function NewRootEvent (line 21) | func NewRootEvent(s *capnp.Segment) (Event, error) {
  function ReadRootEvent (line 26) | func ReadRootEvent(msg *capnp.Message) (Event, error) {
  type Event_List (line 56) | type Event_List struct
    method At (line 64) | func (s Event_List) At(i int) Event { return Event{s.List.Struct(i)} }
    method Set (line 66) | func (s Event_List) Set(i int, v Event) error { return s.List.SetStruc...
    method String (line 68) | func (s Event_List) String() string {
  function NewEvent_List (line 59) | func NewEvent_List(s *capnp.Segment, sz int32) (Event_List, error) {
  type Event_Promise (line 74) | type Event_Promise struct
    method Struct (line 76) | func (p Event_Promise) Struct() (Event, error) {
  constant schema_fc8938b535319bfe (line 81) | schema_fc8938b535319bfe = "x\xda\x12\xd0s`\x12d\x8dg`\x08dae\xfb" +
  function init (line 93) | func init() {

FILE: events/event.go
  constant UnknownEvent (line 12) | UnknownEvent = EventType(1 << iota)
  constant FsEvent (line 14) | FsEvent
  constant NetEvent (line 16) | NetEvent
  type EventType (line 20) | type EventType
    method String (line 23) | func (ev EventType) String() string {
  function EventFromString (line 36) | func EventFromString(ev string) (EventType, error) {
  type Event (line 48) | type Event struct
    method encode (line 53) | func (msg *Event) encode() ([]byte, error) {
  function decodeMessage (line 71) | func decodeMessage(data []byte) (*Event, error) {
  function dedupeEvents (line 99) | func dedupeEvents(evs []Event) []Event {

FILE: events/listener.go
  constant brigEventTopicPrefix (line 17) | brigEventTopicPrefix = "brig/events/"
  constant maxBurstSize (line 18) | maxBurstSize         = 100
  type Listener (line 24) | type Listener struct
    method Close (line 63) | func (lst *Listener) Close() error {
    method RegisterEventHandler (line 86) | func (lst *Listener) RegisterEventHandler(ev EventType, notifyOnOwn bo...
    method eventRecvLoop (line 153) | func (lst *Listener) eventRecvLoop() {
    method eventSendLoop (line 170) | func (lst *Listener) eventSendLoop() {
    method publishToSelf (line 190) | func (lst *Listener) publishToSelf(ev Event) {
    method PublishEvent (line 203) | func (lst *Listener) PublishEvent(ev Event) error {
    method SetupListeners (line 234) | func (lst *Listener) SetupListeners(ctx context.Context, addrs []strin...
    method listenSingle (line 264) | func (lst *Listener) listenSingle(ctx context.Context, topic string) e...
  type callback (line 37) | type callback struct
  function NewListener (line 46) | func NewListener(cfg *config.Config, bk backend.Backend, ownAddr string)...
  function eventLoop (line 100) | func eventLoop(evCh chan Event, interval time.Duration, rps float64, fn ...

FILE: events/listener_test.go
  function withEventListener (line 15) | func withEventListener(t *testing.T, ownAddr string, fn func(lst *Listen...
  function withEventListenerPair (line 31) | func withEventListenerPair(t *testing.T, addrA, addrB string, fn func(ls...
  function TestBasicRun (line 39) | func TestBasicRun(t *testing.T) {

FILE: events/mock/mock.go
  function init (line 13) | func init() {
  type EventsBackend (line 19) | type EventsBackend struct
    method Subscribe (line 61) | func (mb *EventsBackend) Subscribe(ctx context.Context, topic string) ...
    method PublishEvent (line 74) | func (mb *EventsBackend) PublishEvent(topic string, data []byte) error {
  function NewEventsBackend (line 24) | func NewEventsBackend(ownAddr string) *EventsBackend {
  type mockMessage (line 30) | type mockMessage struct
    method Data (line 35) | func (mm mockMessage) Data() []byte {
    method Source (line 39) | func (mm mockMessage) Source() string {
  type mockSubscription (line 43) | type mockSubscription struct
    method Next (line 47) | func (ms *mockSubscription) Next(ctx context.Context) (eventsBackend.M...
    method Close (line 56) | func (ms *mockSubscription) Close() error {

FILE: fuse/directory.go
  type Directory (line 18) | type Directory struct
    method Attr (line 24) | func (dir *Directory) Attr(ctx context.Context, attr *fuse.Attr) error {
    method Lookup (line 45) | func (dir *Directory) Lookup(ctx context.Context, name string) (fs.Nod...
    method Mkdir (line 75) | func (dir *Directory) Mkdir(ctx context.Context, req *fuse.MkdirReques...
    method Create (line 95) | func (dir *Directory) Create(ctx context.Context, req *fuse.CreateRequ...
    method Remove (line 128) | func (dir *Directory) Remove(ctx context.Context, req *fuse.RemoveRequ...
    method ReadDirAll (line 142) | func (dir *Directory) ReadDirAll(ctx context.Context) ([]fuse.Dirent, ...
    method Rename (line 211) | func (dir *Directory) Rename(ctx context.Context, req *fuse.RenameRequ...
    method Getxattr (line 231) | func (dir *Directory) Getxattr(ctx context.Context, req *fuse.Getxattr...
    method Setxattr (line 246) | func (dir *Directory) Setxattr(ctx context.Context, req *fuse.Setxattr...
    method Listxattr (line 253) | func (dir *Directory) Listxattr(ctx context.Context, req *fuse.Listxat...

FILE: fuse/file.go
  type File (line 27) | type File struct
    method Attr (line 35) | func (fi *File) Attr(ctx context.Context, attr *fuse.Attr) error {
    method Open (line 80) | func (fi *File) Open(ctx context.Context, req *fuse.OpenRequest, resp ...
    method Setattr (line 117) | func (fi *File) Setattr(ctx context.Context, req *fuse.SetattrRequest,...
    method Fsync (line 140) | func (fi *File) Fsync(ctx context.Context, req *fuse.FsyncRequest) err...
    method Getxattr (line 148) | func (fi *File) Getxattr(ctx context.Context, req *fuse.GetxattrReques...
    method Setxattr (line 163) | func (fi *File) Setxattr(ctx context.Context, req *fuse.SetxattrReques...
    method Listxattr (line 170) | func (fi *File) Listxattr(ctx context.Context, req *fuse.ListxattrRequ...
    method Readlink (line 181) | func (fi *File) Readlink(ctx context.Context, req *fuse.ReadlinkReques...

FILE: fuse/fs.go
  constant enableDebugLogs (line 11) | enableDebugLogs = false
  function debugLog (line 14) | func debugLog(format string, args ...interface{}) {
  type Filesystem (line 21) | type Filesystem struct
    method Root (line 29) | func (fs *Filesystem) Root() (fs.Node, error) {

FILE: fuse/fstab.go
  function FsTabAdd (line 18) | func FsTabAdd(cfg *config.Config, name, path string, opts MountOptions) ...
  function FsTabRemove (line 52) | func FsTabRemove(cfg *config.Config, name string) error {
  function FsTabUnmountAll (line 61) | func FsTabUnmountAll(cfg *config.Config, mounts *MountTable) error {
  function FsTabApply (line 84) | func FsTabApply(cfg *config.Config, mounts *MountTable) error {
  type FsTabEntry (line 138) | type FsTabEntry struct
  function FsTabList (line 148) | func FsTabList(cfg *config.Config, mounts *MountTable) ([]FsTabEntry, er...

FILE: fuse/fuse_test.go
  function init (line 44) | func init() {
  function TestMain (line 48) | func TestMain(m *testing.M) {
  type fuseCatFSHelp (line 55) | type fuseCatFSHelp struct
    method ServeHTTP (line 59) | func (fch *fuseCatFSHelp) ServeHTTP(w http.ResponseWriter, req *http.R...
    method makeCatfsAndFuseMount (line 134) | func (fch *fuseCatFSHelp) makeCatfsAndFuseMount(ctx context.Context, r...
    method makeFuseReMount (line 151) | func (fch *fuseCatFSHelp) makeFuseReMount(ctx context.Context, req mou...
    method unmountFuseAndCloseDummyCatFS (line 161) | func (fch *fuseCatFSHelp) unmountFuseAndCloseDummyCatFS(ctx context.Co...
    method catfsStage (line 210) | func (fch *fuseCatFSHelp) catfsStage(ctx context.Context, req catfsPay...
    method catfsGetData (line 216) | func (fch *fuseCatFSHelp) catfsGetData(ctx context.Context, req catfsP...
  function makeDummyCatFS (line 76) | func makeDummyCatFS(dbPath string) (catfsFuseInfo, error) {
  type nothing (line 116) | type nothing struct
  type catfsFuseInfo (line 118) | type catfsFuseInfo struct
  type mountingRequest (line 128) | type mountingRequest struct
  function makeFuseMount (line 181) | func makeFuseMount(cfs *catfs.FS, mntPath string, opts MountOptions) (*M...
  type catfsPayload (line 205) | type catfsPayload struct
  type mountInfo (line 239) | type mountInfo struct
  function callUnMount (line 245) | func callUnMount(ctx context.Context, t testing.TB, control *spawntest.C...
  function withMount (line 252) | func withMount(t testing.TB, opts MountOptions, f func(ctx context.Conte...
  function checkFuseFileMatchToCatFS (line 283) | func checkFuseFileMatchToCatFS(ctx context.Context, t *testing.T, contro...
  function checkCatfsFileContent (line 292) | func checkCatfsFileContent(ctx context.Context, t *testing.T, control *s...
  function TestCatfsStage (line 309) | func TestCatfsStage(t *testing.T) {
  function TestCatfsGetData (line 318) | func TestCatfsGetData(t *testing.T) {
  function TestRead (line 342) | func TestRead(t *testing.T) {
  function TestFileXattr (line 361) | func TestFileXattr(t *testing.T) {
  function TestWrite (line 410) | func TestWrite(t *testing.T) {
  function TestTouchWrite (line 431) | func TestTouchWrite(t *testing.T) {
  function TestTouchWriteSubdir (line 457) | func TestTouchWriteSubdir(t *testing.T) {
  function TestReadOnlyFs (line 475) | func TestReadOnlyFs(t *testing.T) {
  function TestWithRoot (line 503) | func TestWithRoot(t *testing.T) {
  function stageAndRead (line 568) | func stageAndRead(ctx context.Context, b *testing.B, control *spawntest....
  function BenchmarkRead (line 584) | func BenchmarkRead(b *testing.B) {
  function writeDataNtimes (line 598) | func writeDataNtimes(b *testing.B, data []byte, ntimes int) {
  function BenchmarkWrite (line 637) | func BenchmarkWrite(b *testing.B) {

FILE: fuse/fusetest/client.go
  type Client (line 12) | type Client struct
    method QuitServer (line 37) | func (ctl *Client) QuitServer() error {
  function Dial (line 17) | func Dial(url string) (*Client, error) {

FILE: fuse/fusetest/helper.go
  function LaunchAsProcess (line 19) | func LaunchAsProcess(opts Options) (*os.Process, error) {

FILE: fuse/fusetest/server.go
  function makeFS (line 27) | func makeFS(dbPath string, backend catfs.FsBackend) (*catfs.FS, error) {
  function mount (line 65) | func mount(cfs *catfs.FS, mountPath string, opts Options) (*fuse.Mount, ...
  function serveHTTPServer (line 77) | func serveHTTPServer(opts Options) error {
  type Options (line 126) | type Options struct
  function Launch (line 148) | func Launch(opts Options) error {

FILE: fuse/handle.go
  type Handle (line 20) | type Handle struct
    method Read (line 28) | func (hd *Handle) Read(ctx context.Context, req *fuse.ReadRequest, res...
    method Write (line 54) | func (hd *Handle) Write(ctx context.Context, req *fuse.WriteRequest, r...
    method writeAt (line 86) | func (hd *Handle) writeAt(buf []byte, off int64) (n int, err error) {
    method Flush (line 95) | func (hd *Handle) Flush(ctx context.Context, req *fuse.FlushRequest) e...
    method flush (line 100) | func (hd *Handle) flush() error {
    method Release (line 122) | func (hd *Handle) Release(ctx context.Context, req *fuse.ReleaseReques...
    method truncate (line 140) | func (hd *Handle) truncate(size uint64) error {
    method Poll (line 149) | func (hd *Handle) Poll(ctx context.Context, req *fuse.PollRequest, res...

FILE: fuse/mount.go
  type Notifier (line 25) | type Notifier interface
  type MountOptions (line 32) | type MountOptions struct
  type Mount (line 49) | type Mount struct
    method EqualOptions (line 150) | func (m *Mount) EqualOptions(opts MountOptions) bool {
    method Close (line 160) | func (m *Mount) Close() error {
  function NewMount (line 64) | func NewMount(cfs *catfs.FS, mountpoint string, notifier Notifier, opts ...
  function lazyUnmount (line 134) | func lazyUnmount(dir string) error {
  type MountTable (line 227) | type MountTable struct
    method AddMount (line 244) | func (t *MountTable) AddMount(path string, opts MountOptions) (*Mount,...
    method addMount (line 265) | func (t *MountTable) addMount(path string, opts MountOptions) (*Mount,...
    method Unmount (line 284) | func (t *MountTable) Unmount(path string) error {
    method unmount (line 291) | func (t *MountTable) unmount(path string) error {
    method Close (line 302) | func (t *MountTable) Close() error {
  function NewMountTable (line 235) | func NewMountTable(fs *catfs.FS, notifier Notifier) *MountTable {
  function checkMountPath (line 251) | func checkMountPath(path string) error {

FILE: fuse/stub.go
  type Notifier (line 18) | type Notifier interface
  type MountOptions (line 24) | type MountOptions struct
  type Mount (line 29) | type Mount struct
    method EqualOptions (line 37) | func (m *Mount) EqualOptions(opts MountOptions) bool {
    method Close (line 41) | func (m *Mount) Close() error {
  function NewMount (line 33) | func NewMount(cfs *catfs.FS, mountpoint string, opts MountOptions) (*Mou...
  type MountTable (line 45) | type MountTable struct
    method AddMount (line 51) | func (t *MountTable) AddMount(path string, opts MountOptions) (*Mount,...
    method Unmount (line 55) | func (t *MountTable) Unmount(path string) error {
    method Close (line 59) | func (t *MountTable) Close() error {
  function NewMountTable (line 47) | func NewMountTable(fs *catfs.FS, notifier Notifier) *MountTable {
  type FsTabEntry (line 63) | type FsTabEntry struct
  function FsTabAdd (line 71) | func FsTabAdd(cfg *config.Config, name, path string, opts MountOptions) ...
  function FsTabRemove (line 75) | func FsTabRemove(cfg *config.Config, name string) error {
  function FsTabUnmountAll (line 79) | func FsTabUnmountAll(cfg *config.Config, mounts *MountTable) error {
  function FsTabApply (line 83) | func FsTabApply(cfg *config.Config, mounts *MountTable) error {
  function FsTabList (line 87) | func FsTabList(cfg *config.Config, mounts *MountTable) ([]FsTabEntry, er...

FILE: fuse/util.go
  function errorize (line 16) | func errorize(name string, err error) error {
  function logPanic (line 32) | func logPanic(name string) {
  type xattrHandler (line 38) | type xattrHandler struct
  function listXattr (line 105) | func listXattr() []byte {
  function getXattr (line 115) | func getXattr(cfs *catfs.FS, name, path string) ([]byte, error) {
  function setXattr (line 129) | func setXattr(cfs *catfs.FS, name, path string, val []byte) error {
  function notifyChange (line 142) | func notifyChange(m *Mount, d time.Duration) {

FILE: gateway/db/capnp/user.capnp.go
  type User (line 11) | type User struct
    method String (line 31) | func (s User) String() string {
    method Name (line 36) | func (s User) Name() (string, error) {
    method HasName (line 41) | func (s User) HasName() bool {
    method NameBytes (line 46) | func (s User) NameBytes() ([]byte, error) {
    method SetName (line 51) | func (s User) SetName(v string) error {
    method PasswordHash (line 55) | func (s User) PasswordHash() (string, error) {
    method HasPasswordHash (line 60) | func (s User) HasPasswordHash() bool {
    method PasswordHashBytes (line 65) | func (s User) PasswordHashBytes() ([]byte, error) {
    method SetPasswordHash (line 70) | func (s User) SetPasswordHash(v string) error {
    method Salt (line 74) | func (s User) Salt() (string, error) {
    method HasSalt (line 79) | func (s User) HasSalt() bool {
    method SaltBytes (line 84) | func (s User) SaltBytes() ([]byte, error) {
    method SetSalt (line 89) | func (s User) SetSalt(v string) error {
    method Folders (line 93) | func (s User) Folders() (capnp.TextList, error) {
    method HasFolders (line 98) | func (s User) HasFolders() bool {
    method SetFolders (line 103) | func (s User) SetFolders(v capnp.TextList) error {
    method NewFolders (line 109) | func (s User) NewFolders(n int32) (capnp.TextList, error) {
    method Rights (line 118) | func (s User) Rights() (capnp.TextList, error) {
    method HasRights (line 123) | func (s User) HasRights() bool {
    method SetRights (line 128) | func (s User) SetRights(v capnp.TextList) error {
    method NewRights (line 134) | func (s User) NewRights(n int32) (capnp.TextList, error) {
  constant User_TypeID (line 14) | User_TypeID = 0x861de4463c5a4a22
  function NewUser (line 16) | func NewUser(s *capnp.Segment) (User, error) {
  function NewRootUser (line 21) | func NewRootUser(s *capnp.Segment) (User, error) {
  function ReadRootUser (line 26) | func ReadRootUser(msg *capnp.Message) (User, error) {
  type User_List (line 144) | type User_List struct
    method At (line 152) | func (s User_List) At(i int) User { return User{s.List.Struct(i)} }
    method Set (line 154) | func (s User_List) Set(i int, v User) error { return s.List.SetStruct(...
    method String (line 156) | func (s User_List) String() string {
  function NewUser_List (line 147) | func NewUser_List(s *capnp.Segment, sz int32) (User_List, error) {
  type User_Promise (line 162) | type User_Promise struct
    method Struct (line 164) | func (p User_Promise) Struct() (User, error) {
  constant schema_a0b1c18bd0f965c4 (line 169) | schema_a0b1c18bd0f965c4 = "x\xda\\\xca1J\x03A\x18\xc5\xf1\xf7ff\x15$" +
  function init (line 187) | func init() {

FILE: gateway/db/db.go
  constant RightDownload (line 21) | RightDownload = "fs.download"
  constant RightFsView (line 24) | RightFsView = "fs.view"
  constant RightFsEdit (line 27) | RightFsEdit = "fs.edit"
  constant RightRemotesView (line 29) | RightRemotesView = "remotes.view"
  constant RightRemotesEdit (line 31) | RightRemotesEdit = "remotes.edit"
  type UserDatabase (line 59) | type UserDatabase struct
    method Close (line 102) | func (ub *UserDatabase) Close() error {
    method Add (line 297) | func (ub *UserDatabase) Add(name, password string, folders []string, r...
    method Get (line 340) | func (ub *UserDatabase) Get(name string) (User, error) {
    method Remove (line 364) | func (ub *UserDatabase) Remove(name string) error {
    method List (line 379) | func (ub *UserDatabase) List() ([]User, error) {
  function NewUserDatabase (line 68) | func NewUserDatabase(path string) (*UserDatabase, error) {
  function unmarshalUser (line 117) | func unmarshalUser(data []byte) (*User, error) {
  function UserFromCapnp (line 132) | func UserFromCapnp(capUser capnp.User) (*User, error) {
  function marshalUser (line 187) | func marshalUser(user *User) ([]byte, error) {
  function UserToCapnp (line 201) | func UserToCapnp(user *User, seg *capnp_lib.Segment) (*capnp.User, error) {
  type User (line 254) | type User struct
    method CheckPassword (line 263) | func (u User) CheckPassword(password string) (bool, error) {
  function HashPassword (line 279) | func HashPassword(password string) (string, string, error) {

FILE: gateway/db/db_test.go
  function withDummyDb (line 11) | func withDummyDb(t *testing.T, fn func(db *UserDatabase)) {
  function TestAddGet (line 24) | func TestAddGet(t *testing.T) {

FILE: gateway/endpoints/all_dirs.go
  type AllDirsHandler (line 15) | type AllDirsHandler struct
    method ServeHTTP (line 30) | func (ah *AllDirsHandler) ServeHTTP(w http.ResponseWriter, r *http.Req...
  function NewAllDirsHandler (line 20) | func NewAllDirsHandler(s *State) *AllDirsHandler {
  type AllDirsResponse (line 25) | type AllDirsResponse struct

FILE: gateway/endpoints/all_dirs_test.go
  function TestAllDirsSuccess (line 10) | func TestAllDirsSuccess(t *testing.T) {

FILE: gateway/endpoints/copy.go
  type CopyHandler (line 13) | type CopyHandler struct
    method ServeHTTP (line 30) | func (ch *CopyHandler) ServeHTTP(w http.ResponseWriter, r *http.Reques...
  function NewCopyHandler (line 18) | func NewCopyHandler(s *State) *CopyHandler {
  type CopyRequest (line 23) | type CopyRequest struct

FILE: gateway/endpoints/copy_test.go
  type copyResponse (line 10) | type copyResponse struct
  function TestCopySuccess (line 14) | func TestCopySuccess(t *testing.T) {
  function TestCopyDisallowedSource (line 44) | func TestCopyDisallowedSource(t *testing.T) {
  function TestCopyDisallowedDest (line 64) | func TestCopyDisallowedDest(t *testing.T) {

FILE: gateway/endpoints/deleted.go
  type DeletedPathsHandler (line 18) | type DeletedPathsHandler struct
    method ServeHTTP (line 48) | func (dh *DeletedPathsHandler) ServeHTTP(w http.ResponseWriter, r *htt...
  function NewDeletedPathsHandler (line 23) | func NewDeletedPathsHandler(s *State) *DeletedPathsHandler {
  type DeletedPathsResponse (line 28) | type DeletedPathsResponse struct
  type DeletedRequest (line 34) | type DeletedRequest struct
  function matchEntry (line 40) | func matchEntry(info *catfs.StatInfo, filter string) bool {

FILE: gateway/endpoints/deleted_test.go
  function TestDeletedPathSuccess (line 10) | func TestDeletedPathSuccess(t *testing.T) {

FILE: gateway/endpoints/events.go
  type EventsHandler (line 22) | type EventsHandler struct
    method Notify (line 67) | func (eh *EventsHandler) Notify(ctx context.Context, msg string) error {
    method notify (line 71) | func (eh *EventsHandler) notify(ctx context.Context, msg string, isOwn...
    method Shutdown (line 101) | func (eh *EventsHandler) Shutdown() {
    method ServeHTTP (line 110) | func (eh *EventsHandler) ServeHTTP(w http.ResponseWriter, r *http.Requ...
  function NewEventsHandler (line 37) | func NewEventsHandler(rapi remotesapi.RemotesAPI, ev *events.Listener) *...

FILE: gateway/endpoints/events_test.go
  function TestEvents (line 15) | func TestEvents(t *testing.T) {

FILE: gateway/endpoints/get.go
  type GetHandler (line 21) | type GetHandler struct
    method checkBasicAuth (line 62) | func (gh *GetHandler) checkBasicAuth(nodePath string, w http.ResponseW...
    method checkDownloadRight (line 107) | func (gh *GetHandler) checkDownloadRight(w http.ResponseWriter, r *htt...
    method checkDownloadRightByName (line 116) | func (gh *GetHandler) checkDownloadRightByName(name string, w http.Res...
    method ServeHTTP (line 131) | func (gh *GetHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
  function NewGetHandler (line 26) | func NewGetHandler(s *State) *GetHandler {
  function mimeTypeFromStream (line 30) | func mimeTypeFromStream(stream mio.Stream) (io.ReadSeeker, string) {
  function setContentDisposition (line 42) | func setContentDisposition(info *catfs.StatInfo, hdr http.Header, dispoT...

FILE: gateway/endpoints/get_test.go
  function TestGetEndpointSuccess (line 12) | func TestGetEndpointSuccess(t *testing.T) {
  function TestGetEndpointDisallowed (line 31) | func TestGetEndpointDisallowed(t *testing.T) {

FILE: gateway/endpoints/history.go
  type HistoryHandler (line 13) | type HistoryHandler struct
    method ServeHTTP (line 79) | func (hh *HistoryHandler) ServeHTTP(w http.ResponseWriter, r *http.Req...
  function NewHistoryHandler (line 18) | func NewHistoryHandler(s *State) *HistoryHandler {
  type HistoryRequest (line 23) | type HistoryRequest struct
  type Commit (line 29) | type Commit struct
  type HistoryEntry (line 38) | type HistoryEntry struct
  type HistoryResponse (line 47) | type HistoryResponse struct
  function toExternalCommit (line 52) | func toExternalCommit(cmt *catfs.Commit) Commit {
  function toExternalChange (line 69) | func toExternalChange(c catfs.Change) HistoryEntry {

FILE: gateway/endpoints/history_test.go
  function TestHistoryEndpointSuccess (line 11) | func TestHistoryEndpointSuccess(t *testing.T) {
  function TestHistoryEndpointForbidden (line 56) | func TestHistoryEndpointForbidden(t *testing.T) {

FILE: gateway/endpoints/index.go
  type IndexHandler (line 20) | type IndexHandler struct
    method loadTemplateData (line 29) | func (ih *IndexHandler) loadTemplateData() (io.ReadCloser, error) {
    method ServeHTTP (line 38) | func (ih *IndexHandler) ServeHTTP(w http.ResponseWriter, r *http.Reque...
  function NewIndexHandler (line 25) | func NewIndexHandler(s *State) *IndexHandler {

FILE: gateway/endpoints/log.go
  type LogHandler (line 15) | type LogHandler struct
    method ServeHTTP (line 42) | func (lh *LogHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
  function NewLogHandler (line 20) | func NewLogHandler(s *State) *LogHandler {
  type LogRequest (line 25) | type LogRequest struct
  type LogResponse (line 32) | type LogResponse struct
  function matchCommit (line 38) | func matchCommit(cmt *catfs.Commit, filter string) bool {

FILE: gateway/endpoints/log_test.go
  function TestLogEndpointSuccess (line 11) | func TestLogEndpointSuccess(t *testing.T) {

FILE: gateway/endpoints/login.go
  function getUserName (line 13) | func getUserName(store *sessions.CookieStore, w http.ResponseWriter, r *...
  function setSession (line 36) | func setSession(store *sessions.CookieStore, userName string, w http.Res...
  function clearSession (line 58) | func clearSession(store *sessions.CookieStore, w http.ResponseWriter, r ...
  type LoginHandler (line 77) | type LoginHandler struct
    method ServeHTTP (line 101) | func (lih *LoginHandler) ServeHTTP(w http.ResponseWriter, r *http.Requ...
  function NewLoginHandler (line 82) | func NewLoginHandler(s *State) *LoginHandler {
  type LoginRequest (line 87) | type LoginRequest struct
  type LoginResponse (line 93) | type LoginResponse struct
  type LogoutHandler (line 152) | type LogoutHandler struct
    method ServeHTTP (line 161) | func (loh *LogoutHandler) ServeHTTP(w http.ResponseWriter, r *http.Req...
  function NewLogoutHandler (line 157) | func NewLogoutHandler(s *State) *LogoutHandler {
  type WhoamiHandler (line 176) | type WhoamiHandler struct
    method ServeHTTP (line 194) | func (wh *WhoamiHandler) ServeHTTP(w http.ResponseWriter, r *http.Requ...
  function NewWhoamiHandler (line 181) | func NewWhoamiHandler(s *State) *WhoamiHandler {
  type WhoamiResponse (line 186) | type WhoamiResponse struct
  type authMiddleware (line 227) | type authMiddleware struct
    method ServeHTTP (line 234) | func (am *authMiddleware) ServeHTTP(w http.ResponseWriter, r *http.Req...
  type dbUserKey (line 232) | type dbUserKey
  function checkRights (line 265) | func checkRights(w http.ResponseWriter, r *http.Request, rights ...strin...
  function AuthMiddleware (line 291) | func AuthMiddleware(s *State) func(http.Handler) http.Handler {

FILE: gateway/endpoints/login_test.go
  type loginResponse (line 10) | type loginResponse struct
  function TestLoginEndpointSuccess (line 14) | func TestLoginEndpointSuccess(t *testing.T) {

FILE: gateway/endpoints/ls.go
  type LsHandler (line 14) | type LsHandler struct
    method ServeHTTP (line 74) | func (lh *LsHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
  function NewLsHandler (line 19) | func NewLsHandler(s *State) *LsHandler {
  type LsRequest (line 24) | type LsRequest struct
  type StatInfo (line 32) | type StatInfo struct
  function toExternalStatInfo (line 44) | func toExternalStatInfo(i *catfs.StatInfo) *StatInfo {
  type LsResponse (line 59) | type LsResponse struct
  function doQuery (line 66) | func doQuery(fs *catfs.FS, root, filter string) ([]*catfs.StatInfo, erro...

FILE: gateway/endpoints/ls_test.go
  function TestLsEndpoint (line 11) | func TestLsEndpoint(t *testing.T) {

FILE: gateway/endpoints/mkdir.go
  type MkdirHandler (line 13) | type MkdirHandler struct
    method ServeHTTP (line 28) | func (mh *MkdirHandler) ServeHTTP(w http.ResponseWriter, r *http.Reque...
  function NewMkdirHandler (line 18) | func NewMkdirHandler(s *State) *MkdirHandler {
  type MkdirRequest (line 23) | type MkdirRequest struct

FILE: gateway/endpoints/mkdir_test.go
  type mkdirResponse (line 10) | type mkdirResponse struct
  function TestMkdirEndpointSuccess (line 15) | func TestMkdirEndpointSuccess(t *testing.T) {
  function TestMkdirEndpointInvalidPath (line 39) | func TestMkdirEndpointInvalidPath(t *testing.T) {

FILE: gateway/endpoints/move.go
  type MoveHandler (line 13) | type MoveHandler struct
    method ServeHTTP (line 30) | func (mh *MoveHandler) ServeHTTP(w http.ResponseWriter, r *http.Reques...
  function NewMoveHandler (line 18) | func NewMoveHandler(s *State) *MoveHandler {
  type MoveRequest (line 23) | type MoveRequest struct

FILE: gateway/endpoints/move_test.go
  type moveResponse (line 10) | type moveResponse struct
  function TestMoveSuccess (line 14) | func TestMoveSuccess(t *testing.T) {
  function TestMoveDisallowedSource (line 44) | func TestMoveDisallowedSource(t *testing.T) {
  function TestMoveDisallowedDest (line 64) | func TestMoveDisallowedDest(t *testing.T) {

FILE: gateway/endpoints/pin.go
  type PinHandler (line 14) | type PinHandler struct
    method ServeHTTP (line 36) | func (ph *PinHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
  function NewPinHandler (line 20) | func NewPinHandler(s *State) *PinHandler {
  function NewUnpinHandler (line 25) | func NewUnpinHandler(s *State) *PinHandler {
  type PinRequest (line 30) | type PinRequest struct

FILE: gateway/endpoints/pin_test.go
  type pinResponse (line 10) | type pinResponse struct
  function TestPinEndpointSuccess (line 15) | func TestPinEndpointSuccess(t *testing.T) {
  function TestPinEndpointForbidden (line 49) | func TestPinEndpointForbidden(t *testing.T) {

FILE: gateway/endpoints/ping.go
  type PingHandler (line 9) | type PingHandler struct
    method ServeHTTP (line 23) | func (wh *PingHandler) ServeHTTP(w http.ResponseWriter, r *http.Reques...
  function NewPingHandler (line 14) | func NewPingHandler(s *State) *PingHandler {
  type PingResponse (line 19) | type PingResponse struct

FILE: gateway/endpoints/ping_test.go
  function TestPingEndpointSuccess (line 10) | func TestPingEndpointSuccess(t *testing.T) {

FILE: gateway/endpoints/redirect.go
  type RedirHandler (line 13) | type RedirHandler struct
    method ServeHTTP (line 24) | func (rh *RedirHandler) ServeHTTP(w http.ResponseWriter, req *http.Req...
  function NewHTTPRedirectHandler (line 18) | func NewHTTPRedirectHandler(redirPort int64) *RedirHandler {

FILE: gateway/endpoints/remotes_add.go
  type RemotesAddHandler (line 16) | type RemotesAddHandler struct
    method ServeHTTP (line 91) | func (rh *RemotesAddHandler) ServeHTTP(w http.ResponseWriter, r *http....
  function NewRemotesAddHandler (line 21) | func NewRemotesAddHandler(s *State) *RemotesAddHandler {
  type RemoteAddRequest (line 26) | type RemoteAddRequest struct
  function dedupeFolders (line 35) | func dedupeFolders(folders []remotesapi.Folder) []remotesapi.Folder {
  function validateFingerprint (line 56) | func validateFing
Condensed preview — 399 files, each showing path, character count, and a content snippet. Download the .json file for the full structured content (10,361K chars).
[
  {
    "path": ".github/ISSUE_TEMPLATE/bug_report.md",
    "chars": 531,
    "preview": "---\nname: Bug report\nabout: Create a report to help us improve\nlabels: \n\n---\n\n**Describe the bug**\nA clear and concise d"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/feature_request.md",
    "chars": 678,
    "preview": "---\nname: Feature request\nabout: Suggest an idea for this project\nlabels: \n\n---\n\n**Is your feature request related to a "
  },
  {
    "path": ".gitignore",
    "chars": 98,
    "preview": "brig\n*.coverprofile\ncoverage.out\n_vendor*\nTODO\n.idea\nmage\nipfs\nrepo/setup/ipfs\n.task\ntags\ncov.out\n"
  },
  {
    "path": ".mailmap",
    "chars": 383,
    "preview": "Christopher Pahl <sahib@online.de>\nChristopher Pahl <sahib@online.de> <sahib@peeel.(none)>\nChristopher Pahl <sahib@onlin"
  },
  {
    "path": ".travis.yml",
    "chars": 844,
    "preview": "language: go\nsudo: required\ngo:\n    - \"1.15\"\nnotifications:\n    email:\n      - sahib@online.de\ninstall:\n    - sudo apt-g"
  },
  {
    "path": "CHANGELOG.md",
    "chars": 16223,
    "preview": "# Change Log\n\nAll notable changes to this project will be documented in this file.\n\nThe format follows [keepachangelog.c"
  },
  {
    "path": "Dockerfile",
    "chars": 733,
    "preview": "FROM golang\nMAINTAINER sahib@online.de\n\n# Most test cases can use the pre-defined BRIG_PATH.\nENV BRIG_PATH /var/repo\nRUN"
  },
  {
    "path": "LICENSE",
    "chars": 34520,
    "preview": "                    GNU AFFERO GENERAL PUBLIC LICENSE\n                       Version 3, 19 November 2007\n\n Copyright (C)"
  },
  {
    "path": "PULL_REQUEST_TEMPLATE.md",
    "chars": 379,
    "preview": "Here's a small checklist before publishing your pull request:\n\n* Did you ``go fmt`` all code?\n* Does your code style fit"
  },
  {
    "path": "README.md",
    "chars": 4897,
    "preview": "# `brig`: Ship your data around the world\n\n<center>  <!-- I know, that's not how you usually do it :) -->\n<img src=\"http"
  },
  {
    "path": "Taskfile.yml",
    "chars": 1455,
    "preview": "# This file controls how brig is build.\n# It is a nicer to use alternative to Makefiles.\n# Please read the documentation"
  },
  {
    "path": "autocomplete/bash_autocomplete",
    "chars": 436,
    "preview": "#!/bin/bash\n\n# This should be installed to /etc/bash_completion.d/brig and sourced.\n# If you want to try out the autocom"
  },
  {
    "path": "autocomplete/zsh_autocomplete",
    "chars": 218,
    "preview": "_cli_zsh_autocomplete() {\n  local -a opts\n  opts=(\"${(@f)$(_CLI_ZSH_AUTOCOMPLETE_HACK=1 ${words[@]:0:#words[@]-1} --gene"
  },
  {
    "path": "backend/backend.go",
    "chars": 1963,
    "preview": "package backend\n\nimport (\n\t\"errors\"\n\t\"io\"\n\t\"os\"\n\n\t\"github.com/sahib/brig/backend/httpipfs\"\n\t\"github.com/sahib/brig/backe"
  },
  {
    "path": "backend/httpipfs/gc.go",
    "chars": 1154,
    "preview": "package httpipfs\n\nimport (\n\t\"bufio\"\n\t\"bytes\"\n\t\"context\"\n\t\"encoding/json\"\n\n\te \"github.com/pkg/errors\"\n\th \"github.com/sahi"
  },
  {
    "path": "backend/httpipfs/gc_test.go",
    "chars": 559,
    "preview": "package httpipfs\n\nimport (\n\t\"bytes\"\n\t\"testing\"\n\n\t\"github.com/sahib/brig/util/testutil\"\n\t\"github.com/stretchr/testify/req"
  },
  {
    "path": "backend/httpipfs/io.go",
    "chars": 3059,
    "preview": "package httpipfs\n\nimport (\n\t\"context\"\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"io\"\n\t\"io/ioutil\"\n\t\"sync\"\n\n\tshell \"github.com/ipfs/go-ipf"
  },
  {
    "path": "backend/httpipfs/io_test.go",
    "chars": 1336,
    "preview": "package httpipfs\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n\t\"io\"\n\t\"io/ioutil\"\n\t\"testing\"\n\n\t\"github.com/sahib/brig/util/testutil\"\n\t\"githu"
  },
  {
    "path": "backend/httpipfs/net.go",
    "chars": 8648,
    "preview": "package httpipfs\n\nimport (\n\t\"context\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io/ioutil\"\n\t\"net\"\n\t\"os\"\n\t\"path\"\n\t\"path/filepat"
  },
  {
    "path": "backend/httpipfs/net_test.go",
    "chars": 3040,
    "preview": "package httpipfs\n\nimport (\n\t\"bytes\"\n\t\"io\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/stretchr/testify/require\"\n)\n\nconst (\n\tTestPro"
  },
  {
    "path": "backend/httpipfs/pin.go",
    "chars": 3382,
    "preview": "package httpipfs\n\nimport (\n\t\"context\"\n\t\"encoding/json\"\n\t\"io\"\n\t\"strings\"\n\n\t\"github.com/patrickmn/go-cache\"\n\th \"github.com"
  },
  {
    "path": "backend/httpipfs/pin_test.go",
    "chars": 1448,
    "preview": "package httpipfs\n\nimport (\n\t\"bytes\"\n\t\"testing\"\n\n\th \"github.com/sahib/brig/util/hashlib\"\n\t\"github.com/sahib/brig/util/tes"
  },
  {
    "path": "backend/httpipfs/pubsub.go",
    "chars": 1289,
    "preview": "package httpipfs\n\nimport (\n\t\"context\"\n\n\tshell \"github.com/ipfs/go-ipfs-api\"\n\teventsBackend \"github.com/sahib/brig/events"
  },
  {
    "path": "backend/httpipfs/pubsub_test.go",
    "chars": 801,
    "preview": "package httpipfs\n\nimport (\n\t\"context\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/stretchr/testify/require\"\n)\n\nfunc TestPubSub(t *t"
  },
  {
    "path": "backend/httpipfs/resolve.go",
    "chars": 2815,
    "preview": "package httpipfs\n\nimport (\n\t\"bufio\"\n\t\"bytes\"\n\t\"context\"\n\t\"encoding/json\"\n\n\tshell \"github.com/ipfs/go-ipfs-api\"\n\tipfsutil"
  },
  {
    "path": "backend/httpipfs/resolve_test.go",
    "chars": 856,
    "preview": "package httpipfs\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/stretchr/testify/require\"\n)\n\nfunc TestPubl"
  },
  {
    "path": "backend/httpipfs/shell.go",
    "chars": 4635,
    "preview": "package httpipfs\n\nimport (\n\t\"context\"\n\t\"encoding/json\"\n\t\"errors\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"sync\"\n\t\"time\"\n\n\t\"github.com/bl"
  },
  {
    "path": "backend/httpipfs/testing.go",
    "chars": 2331,
    "preview": "package httpipfs\n\nimport (\n\t\"fmt\"\n\t\"io/ioutil\"\n\t\"os\"\n\t\"os/exec\"\n\t\"testing\"\n\t\"time\"\n\n\tshell \"github.com/ipfs/go-ipfs-api\""
  },
  {
    "path": "backend/httpipfs/testing_test.go",
    "chars": 958,
    "preview": "package httpipfs\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/require\"\n)\n\nfunc TestIpfsStartup(t "
  },
  {
    "path": "backend/httpipfs/version.go",
    "chars": 681,
    "preview": "package httpipfs\n\n// VersionInfo holds version info (yeah, golint)\ntype VersionInfo struct {\n\tsemVer, name, rev string\n}"
  },
  {
    "path": "backend/mock/mock.go",
    "chars": 1398,
    "preview": "package mock\n\nimport (\n\t\"github.com/sahib/brig/catfs\"\n\teventsMock \"github.com/sahib/brig/events/mock\"\n\tnetMock \"github.c"
  },
  {
    "path": "bench/bench.go",
    "chars": 12953,
    "preview": "package bench\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"fmt\"\n\t\"io\"\n\t\"io/ioutil\"\n\t\"math/rand\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"runtime\"\n\t\"s"
  },
  {
    "path": "bench/inputs.go",
    "chars": 3292,
    "preview": "package bench\n\nimport (\n\t\"bytes\"\n\t\"encoding/binary\"\n\t\"fmt\"\n\t\"io\"\n\t\"sort\"\n\n\t\"github.com/sahib/brig/util/testutil\"\n)\n\n// V"
  },
  {
    "path": "bench/runner.go",
    "chars": 6111,
    "preview": "package bench\n\nimport (\n\t\"fmt\"\n\t\"io/ioutil\"\n\t\"os\"\n\t\"os/signal\"\n\t\"runtime\"\n\t\"sort\"\n\t\"strings\"\n\t\"syscall\"\n\t\"time\"\n\n\t\"githu"
  },
  {
    "path": "bench/stats.go",
    "chars": 588,
    "preview": "package bench\n\nimport (\n\t\"time\"\n\n\t\"github.com/klauspost/cpuid/v2\"\n)\n\n// Stats are system statistics that might influence"
  },
  {
    "path": "brig.go",
    "chars": 112,
    "preview": "package main\n\nimport (\n\t\"os\"\n\n\t\"github.com/sahib/brig/cmd\"\n)\n\nfunc main() {\n\tos.Exit(cmd.RunCmdline(os.Args))\n}\n"
  },
  {
    "path": "catfs/backend.go",
    "chars": 3847,
    "preview": "package catfs\n\nimport (\n\t\"fmt\"\n\t\"io\"\n\t\"io/ioutil\"\n\n\t\"github.com/sahib/brig/catfs/mio\"\n\t\"github.com/sahib/brig/catfs/mio/"
  },
  {
    "path": "catfs/capnp/pinner.capnp",
    "chars": 343,
    "preview": "using Go = import \"/go.capnp\";\n\n@0xba762188b0a6e4cf;\n\n$Go.package(\"capnp\");\n$Go.import(\"github.com/sahib/brig/catfs/capn"
  },
  {
    "path": "catfs/capnp/pinner.capnp.go",
    "chars": 5259,
    "preview": "// Code generated by capnpc-go. DO NOT EDIT.\n\npackage capnp\n\nimport (\n\tcapnp \"zombiezen.com/go/capnproto2\"\n\ttext \"zombie"
  },
  {
    "path": "catfs/core/coreutils.go",
    "chars": 12443,
    "preview": "package core\n\nimport (\n\t\"errors\"\n\t\"fmt\"\n\t\"path\"\n\t\"strings\"\n\t\"time\"\n\n\te \"github.com/pkg/errors\"\n\tie \"github.com/sahib/bri"
  },
  {
    "path": "catfs/core/coreutils_test.go",
    "chars": 16526,
    "preview": "package core\n\nimport (\n\t\"path\"\n\t\"sort\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n\n\tie \"github.com/sahib/brig/catfs/errors\"\n\tn \"githu"
  },
  {
    "path": "catfs/core/gc.go",
    "chars": 4582,
    "preview": "package core\n\nimport (\n\t\"github.com/sahib/brig/catfs/db\"\n\tie \"github.com/sahib/brig/catfs/errors\"\n\tn \"github.com/sahib/b"
  },
  {
    "path": "catfs/core/gc_test.go",
    "chars": 2633,
    "preview": "package core\n\nimport (\n\t\"testing\"\n\n\t\"github.com/sahib/brig/catfs/db\"\n\tn \"github.com/sahib/brig/catfs/nodes\"\n\t\"github.com"
  },
  {
    "path": "catfs/core/linker.go",
    "chars": 44331,
    "preview": "package core\n\n// Layout of the key/value store:\n//\n// objects/<NODE_HASH>                   => NODE_METADATA\n// tree/<FU"
  },
  {
    "path": "catfs/core/linker_test.go",
    "chars": 15629,
    "preview": "package core\n\nimport (\n\t\"errors\"\n\t\"fmt\"\n\t\"io/ioutil\"\n\t\"os\"\n\t\"sort\"\n\t\"strings\"\n\t\"testing\"\n\t\"unsafe\"\n\n\t\"github.com/sahib/b"
  },
  {
    "path": "catfs/core/testing.go",
    "chars": 6739,
    "preview": "package core\n\nimport (\n\t\"fmt\"\n\t\"io/ioutil\"\n\t\"os\"\n\t\"path\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/sahib/brig/catfs/db\"\n\tie \"gith"
  },
  {
    "path": "catfs/db/database.go",
    "chars": 2457,
    "preview": "package db\n\nimport (\n\t\"errors\"\n\t\"io\"\n)\n\nvar (\n\t// ErrNoSuchKey is returned when Get() was passed a non-existent key\n\tErr"
  },
  {
    "path": "catfs/db/database_badger.go",
    "chars": 10423,
    "preview": "package db\n\nimport (\n\t\"io\"\n\t\"strings\"\n\t\"sync\"\n\t\"sync/atomic\"\n\t\"time\"\n\n\tbadger \"github.com/dgraph-io/badger/v3\"\n\n\tlog \"gi"
  },
  {
    "path": "catfs/db/database_disk.go",
    "chars": 8897,
    "preview": "package db\n\nimport (\n\t\"fmt\"\n\t\"io\"\n\t\"io/ioutil\"\n\t\"os\"\n\t\"path\"\n\t\"path/filepath\"\n\t\"strings\"\n\t\"time\"\n\n\t\"github.com/sahib/bri"
  },
  {
    "path": "catfs/db/database_memory.go",
    "chars": 3865,
    "preview": "package db\n\nimport (\n\t\"encoding/gob\"\n\t\"io\"\n\t\"path\"\n\t\"sort\"\n\t\"strings\"\n)\n\n// MemoryDatabase is a purely in memory databas"
  },
  {
    "path": "catfs/db/database_test.go",
    "chars": 10465,
    "preview": "package db\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n\t\"io/ioutil\"\n\t\"os\"\n\t\"sort\"\n\t\"strings\"\n\t\"testing\"\n\n\t\"github.com/sahib/brig/util/test"
  },
  {
    "path": "catfs/errors/errors.go",
    "chars": 2282,
    "preview": "package catfs\n\nimport (\n\t\"errors\"\n\t\"fmt\"\n)\n\nvar (\n\t// ErrStageNotEmpty is returned by Reset() when it was called without"
  },
  {
    "path": "catfs/fs.go",
    "chars": 50913,
    "preview": "package catfs\n\nimport (\n\t\"archive/tar\"\n\t\"bytes\"\n\t\"crypto/rand\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"io/ioutil\"\n\t\"path\"\n\t\"sort\"\n\t\"str"
  },
  {
    "path": "catfs/fs_test.go",
    "chars": 25058,
    "preview": "package catfs\n\nimport (\n\t\"archive/tar\"\n\t\"bytes\"\n\t\"fmt\"\n\t\"io\"\n\t\"io/ioutil\"\n\t\"os\"\n\t\"sort\"\n\t\"testing\"\n\t\"time\"\n\n\tc \"github.c"
  },
  {
    "path": "catfs/handle.go",
    "chars": 5610,
    "preview": "package catfs\n\nimport (\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"sync\"\n\n\t\"github.com/sahib/brig/catfs/mio\"\n\t\"github.com/sahib/brig/catfs"
  },
  {
    "path": "catfs/handle_test.go",
    "chars": 6297,
    "preview": "package catfs\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n\t\"io\"\n\t\"io/ioutil\"\n\t\"testing\"\n\n\t\"github.com/sahib/brig/catfs/mio/compress\"\n\t\"git"
  },
  {
    "path": "catfs/mio/chunkbuf/chunkbuf.go",
    "chars": 1942,
    "preview": "package chunkbuf\n\nimport (\n\t\"io\"\n\n\t\"github.com/sahib/brig/util\"\n)\n\n// ChunkBuffer represents a custom buffer struct with"
  },
  {
    "path": "catfs/mio/chunkbuf/chunkbuf_test.go",
    "chars": 2804,
    "preview": "package chunkbuf\n\nimport (\n\t\"bytes\"\n\t\"io\"\n\t\"io/ioutil\"\n\t\"testing\"\n\n\t\"github.com/sahib/brig/util/testutil\"\n\t\"github.com/s"
  },
  {
    "path": "catfs/mio/compress/algorithm.go",
    "chars": 4749,
    "preview": "package compress\n\nimport (\n\t\"errors\"\n\n\t\"github.com/golang/snappy\"\n\t\"github.com/klauspost/compress/zstd\"\n\t\"github.com/pie"
  },
  {
    "path": "catfs/mio/compress/compress_test.go",
    "chars": 7931,
    "preview": "package compress\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n\t\"io\"\n\t\"io/ioutil\"\n\t\"os\"\n\t\"testing\"\n\n\t\"github.com/sahib/brig/util\"\n\t\"github.c"
  },
  {
    "path": "catfs/mio/compress/header.go",
    "chars": 4087,
    "preview": "package compress\n\n// TODO: rename to header.go\nimport (\n\t\"bytes\"\n\t\"encoding/binary\"\n\t\"errors\"\n)\n\nvar (\n\t// ErrBadIndex i"
  },
  {
    "path": "catfs/mio/compress/heuristic.go",
    "chars": 1732,
    "preview": "package compress\n\nimport (\n\t\"mime\"\n\t\"net/http\"\n\t\"path/filepath\"\n\t\"strings\"\n\n\t\"github.com/sdemontfort/go-mimemagic\"\n)\n\nva"
  },
  {
    "path": "catfs/mio/compress/heuristic_test.go",
    "chars": 1088,
    "preview": "package compress\n\nimport (\n\t\"testing\"\n\n\t\"github.com/sahib/brig/util/testutil\"\n)\n\ntype testCase struct {\n\tpath         st"
  },
  {
    "path": "catfs/mio/compress/mime_db.go",
    "chars": 56637,
    "preview": "package compress\n\n// CompressibleMapping maps between mime types and a bool indicating\n// if they're compressible. Choic"
  },
  {
    "path": "catfs/mio/compress/reader.go",
    "chars": 8097,
    "preview": "package compress\n\nimport (\n\t\"fmt\"\n\t\"io\"\n\t\"sort\"\n\n\t\"github.com/sahib/brig/catfs/mio/chunkbuf\"\n)\n\n// Reader implements an "
  },
  {
    "path": "catfs/mio/compress/writer.go",
    "chars": 4144,
    "preview": "package compress\n\nimport (\n\t\"bytes\"\n\t\"io\"\n\n\t\"github.com/sahib/brig/util\"\n)\n\n// Writer implements a compression writer.\nt"
  },
  {
    "path": "catfs/mio/doc.go",
    "chars": 494,
    "preview": "// Package mio (short for memory input/output) implements the layered io stack\n// of brig. This includes currently three"
  },
  {
    "path": "catfs/mio/encrypt/format.go",
    "chars": 8913,
    "preview": "// Package encrypt implements the encryption layer of brig.\n// The file format used looks something like this:\n//\n// [HE"
  },
  {
    "path": "catfs/mio/encrypt/format_test.go",
    "chars": 10727,
    "preview": "package encrypt\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n\t\"io\"\n\t\"io/ioutil\"\n\t\"math\"\n\t\"os\"\n\t\"testing\"\n\n\t\"github.com/sahib/brig/util\"\n\t\"g"
  },
  {
    "path": "catfs/mio/encrypt/reader.go",
    "chars": 8923,
    "preview": "package encrypt\n\nimport (\n\t\"bytes\"\n\t\"encoding/binary\"\n\t\"fmt\"\n\t\"io\"\n)\n\n// Reader decrypts and encrypted stream from Reade"
  },
  {
    "path": "catfs/mio/encrypt/writer.go",
    "chars": 4861,
    "preview": "package encrypt\n\nimport (\n\t\"bytes\"\n\t\"encoding/binary\"\n\t\"errors\"\n\t\"io\"\n)\n\nvar (\n\t// ErrBadBlockSize is returned when the "
  },
  {
    "path": "catfs/mio/pagecache/cache.go",
    "chars": 775,
    "preview": "package pagecache\n\nimport (\n\t\"github.com/sahib/brig/catfs/mio/pagecache/page\"\n)\n\n// Cache is the backing layer that stor"
  },
  {
    "path": "catfs/mio/pagecache/doc.go",
    "chars": 1803,
    "preview": "// Package pagecache implements a io.ReaderAt and io.WriterAt that is similar in\n// function to the OverlayFS of Linux. "
  },
  {
    "path": "catfs/mio/pagecache/mdcache/l1.go",
    "chars": 2972,
    "preview": "package mdcache\n\nimport (\n\t\"container/list\"\n\t\"fmt\"\n\n\t\"github.com/sahib/brig/catfs/mio/pagecache/page\"\n)\n\n// L1 is a pure"
  },
  {
    "path": "catfs/mio/pagecache/mdcache/l1_test.go",
    "chars": 1823,
    "preview": "package mdcache\n\nimport (\n\t\"testing\"\n\n\t\"github.com/sahib/brig/catfs/mio/pagecache/page\"\n\t\"github.com/stretchr/testify/re"
  },
  {
    "path": "catfs/mio/pagecache/mdcache/l2.go",
    "chars": 2065,
    "preview": "package mdcache\n\nimport (\n\t\"io/ioutil\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"sync\"\n\n\t\"github.com/golang/snappy\"\n\t\"github.com/sahib/br"
  },
  {
    "path": "catfs/mio/pagecache/mdcache/l2_test.go",
    "chars": 1630,
    "preview": "package mdcache\n\nimport (\n\t\"io/ioutil\"\n\t\"os\"\n\t\"testing\"\n\n\t\"github.com/sahib/brig/catfs/mio/pagecache/page\"\n\t\"github.com/"
  },
  {
    "path": "catfs/mio/pagecache/mdcache/mdcache.go",
    "chars": 3908,
    "preview": "// Package mdcache implements a leveled memory/disk cache combination.\npackage mdcache\n\nimport (\n\t\"fmt\"\n\t\"sync\"\n\n\t\"githu"
  },
  {
    "path": "catfs/mio/pagecache/mdcache/mdcache_test.go",
    "chars": 1065,
    "preview": "package mdcache\n\nimport (\n\t\"io/ioutil\"\n\t\"os\"\n\t\"testing\"\n\n\t\"github.com/sahib/brig/catfs/mio/pagecache/page\"\n\t\"github.com/"
  },
  {
    "path": "catfs/mio/pagecache/overlay.go",
    "chars": 10278,
    "preview": "package pagecache\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n\t\"io\"\n\t\"sync\"\n\n\t\"github.com/sahib/brig/catfs/mio/pagecache/page\"\n\t\"github.co"
  },
  {
    "path": "catfs/mio/pagecache/overlay_test.go",
    "chars": 7665,
    "preview": "package pagecache\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n\t\"io\"\n\t\"math/rand\"\n\t\"testing\"\n\n\t\"github.com/sahib/brig/catfs/mio/pagecache/m"
  },
  {
    "path": "catfs/mio/pagecache/page/page.go",
    "chars": 10530,
    "preview": "package page\n\n// NOTE: I had quite often brain freeze while figuring out the indexing.\n// If you do too, take a piece of"
  },
  {
    "path": "catfs/mio/pagecache/page/page_test.go",
    "chars": 6454,
    "preview": "package page\n\nimport (\n\t\"testing\"\n\n\t\"github.com/sahib/brig/util/testutil\"\n\t\"github.com/stretchr/testify/require\"\n)\n\nfunc"
  },
  {
    "path": "catfs/mio/pagecache/util.go",
    "chars": 2009,
    "preview": "package pagecache\n\nimport (\n\t\"io\"\n)\n\n// small util to wrap a buffer we want to write to. Tells you easily how much\n// da"
  },
  {
    "path": "catfs/mio/pagecache/util_test.go",
    "chars": 2309,
    "preview": "package pagecache\n\nimport (\n\t\"bytes\"\n\t\"io\"\n\t\"testing\"\n\n\t\"github.com/sahib/brig/util\"\n\t\"github.com/sahib/brig/util/testut"
  },
  {
    "path": "catfs/mio/stream.go",
    "chars": 7116,
    "preview": "package mio\n\nimport (\n\t\"fmt\"\n\t\"io\"\n\t\"io/ioutil\"\n\n\t\"github.com/sahib/brig/catfs/mio/compress\"\n\t\"github.com/sahib/brig/cat"
  },
  {
    "path": "catfs/mio/stream_test.go",
    "chars": 5378,
    "preview": "package mio\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n\t\"io\"\n\t\"io/ioutil\"\n\t\"testing\"\n\n\t\"github.com/brianvoe/gofakeit/v6\"\n\t\"github.com/sah"
  },
  {
    "path": "catfs/nodes/base.go",
    "chars": 7083,
    "preview": "package nodes\n\nimport (\n\t\"fmt\"\n\t\"strings\"\n\t\"time\"\n\n\te \"github.com/pkg/errors\"\n\tie \"github.com/sahib/brig/catfs/errors\"\n\t"
  },
  {
    "path": "catfs/nodes/capnp/nodes.capnp",
    "chars": 1884,
    "preview": "using Go = import \"/go.capnp\";\n\n@0x9195d073cb5c5953;\n\n$Go.package(\"capnp\");\n$Go.import(\"github.com/sahib/brig/catfs/node"
  },
  {
    "path": "catfs/nodes/capnp/nodes.capnp.go",
    "chars": 30122,
    "preview": "// Code generated by capnpc-go. DO NOT EDIT.\n\npackage capnp\n\nimport (\n\tstrconv \"strconv\"\n\tcapnp \"zombiezen.com/go/capnpr"
  },
  {
    "path": "catfs/nodes/commit.go",
    "chars": 7800,
    "preview": "package nodes\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n\t\"path\"\n\t\"time\"\n\n\tcapnp_model \"github.com/sahib/brig/catfs/nodes/capnp\"\n\th \"gith"
  },
  {
    "path": "catfs/nodes/commit_test.go",
    "chars": 1792,
    "preview": "package nodes\n\nimport (\n\t\"testing\"\n\n\th \"github.com/sahib/brig/util/hashlib\"\n\t\"github.com/stretchr/testify/require\"\n\tcapn"
  },
  {
    "path": "catfs/nodes/directory.go",
    "chars": 18012,
    "preview": "package nodes\n\nimport (\n\t\"errors\"\n\t\"fmt\"\n\t\"path\"\n\t\"sort\"\n\t\"strings\"\n\t\"time\"\n\n\tie \"github.com/sahib/brig/catfs/errors\"\n\tc"
  },
  {
    "path": "catfs/nodes/directory_test.go",
    "chars": 1748,
    "preview": "package nodes\n\nimport (\n\t\"testing\"\n\n\tie \"github.com/sahib/brig/catfs/errors\"\n\t\"github.com/stretchr/testify/require\"\n\tcap"
  },
  {
    "path": "catfs/nodes/doc.go",
    "chars": 458,
    "preview": "// Package nodes implements all nodes and defines basic operations on it.\n//\n// It however does not implement any specif"
  },
  {
    "path": "catfs/nodes/file.go",
    "chars": 6810,
    "preview": "package nodes\n\nimport (\n\t\"fmt\"\n\t\"path\"\n\t\"time\"\n\n\tcapnp_model \"github.com/sahib/brig/catfs/nodes/capnp\"\n\th \"github.com/sa"
  },
  {
    "path": "catfs/nodes/file_test.go",
    "chars": 2074,
    "preview": "package nodes\n\nimport (\n\t\"bytes\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/stretchr/testify/require\"\n\tcapnp \"zombiezen.com/go/cap"
  },
  {
    "path": "catfs/nodes/ghost.go",
    "chars": 5203,
    "preview": "package nodes\n\nimport (\n\t\"fmt\"\n\n\tie \"github.com/sahib/brig/catfs/errors\"\n\tcapnp_model \"github.com/sahib/brig/catfs/nodes"
  },
  {
    "path": "catfs/nodes/ghost_test.go",
    "chars": 2430,
    "preview": "package nodes\n\nimport (\n\t\"bytes\"\n\t\"testing\"\n\n\th \"github.com/sahib/brig/util/hashlib\"\n\tcapnp \"zombiezen.com/go/capnproto2"
  },
  {
    "path": "catfs/nodes/linker.go",
    "chars": 3008,
    "preview": "package nodes\n\nimport (\n\t\"fmt\"\n\n\tie \"github.com/sahib/brig/catfs/errors\"\n\th \"github.com/sahib/brig/util/hashlib\"\n)\n\n// L"
  },
  {
    "path": "catfs/nodes/node.go",
    "chars": 4196,
    "preview": "package nodes\n\nimport (\n\t\"time\"\n\n\tcapnp_model \"github.com/sahib/brig/catfs/nodes/capnp\"\n\th \"github.com/sahib/brig/util/h"
  },
  {
    "path": "catfs/pinner.go",
    "chars": 7963,
    "preview": "package catfs\n\nimport (\n\t\"errors\"\n\n\tcapnp \"github.com/sahib/brig/catfs/capnp\"\n\tc \"github.com/sahib/brig/catfs/core\"\n\t\"gi"
  },
  {
    "path": "catfs/pinner_test.go",
    "chars": 3488,
    "preview": "package catfs\n\nimport (\n\t\"bytes\"\n\t\"testing\"\n\n\tc \"github.com/sahib/brig/catfs/core\"\n\th \"github.com/sahib/brig/util/hashli"
  },
  {
    "path": "catfs/repin.go",
    "chars": 7669,
    "preview": "package catfs\n\nimport (\n\t\"sort\"\n\n\t\"github.com/dustin/go-humanize\"\n\te \"github.com/pkg/errors\"\n\tie \"github.com/sahib/brig/"
  },
  {
    "path": "catfs/repin_test.go",
    "chars": 2324,
    "preview": "package catfs\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n\t\"strings\"\n\t\"testing\"\n\n\t\"github.com/stretchr/testify/require\"\n)\n\nfunc TestRepinD"
  },
  {
    "path": "catfs/rev.go",
    "chars": 2249,
    "preview": "package catfs\n\nimport (\n\t\"fmt\"\n\t\"regexp\"\n\t\"strconv\"\n\t\"strings\"\n\t\"unicode\"\n\n\te \"github.com/pkg/errors\"\n\tc \"github.com/sah"
  },
  {
    "path": "catfs/rev_test.go",
    "chars": 307,
    "preview": "package catfs\n\nimport (\n\t\"testing\"\n\n\tc \"github.com/sahib/brig/catfs/core\"\n\t\"github.com/stretchr/testify/require\"\n)\n\nfunc"
  },
  {
    "path": "catfs/vcs/capnp/patch.capnp",
    "chars": 681,
    "preview": "using Go = import \"/go.capnp\";\nusing Nodes = import \"../../nodes/capnp/nodes.capnp\";\n\n@0xb943b54bf1683782;\n\n$Go.package("
  },
  {
    "path": "catfs/vcs/capnp/patch.capnp.go",
    "chars": 11613,
    "preview": "// Code generated by capnpc-go. DO NOT EDIT.\n\npackage capnp\n\nimport (\n\tcapnp2 \"github.com/sahib/brig/catfs/nodes/capnp\"\n"
  },
  {
    "path": "catfs/vcs/change.go",
    "chars": 11448,
    "preview": "package vcs\n\nimport (\n\t\"fmt\"\n\t\"path\"\n\t\"strings\"\n\n\te \"github.com/pkg/errors\"\n\tc \"github.com/sahib/brig/catfs/core\"\n\tie \"g"
  },
  {
    "path": "catfs/vcs/change_test.go",
    "chars": 9053,
    "preview": "package vcs\n\nimport (\n\t\"testing\"\n\n\tc \"github.com/sahib/brig/catfs/core\"\n\tn \"github.com/sahib/brig/catfs/nodes\"\n\t\"github."
  },
  {
    "path": "catfs/vcs/debug.go",
    "chars": 241,
    "preview": "package vcs\n\nimport (\n\t\"fmt\"\n)\n\nconst (\n\tprintDebug = false\n)\n\nfunc debug(args ...interface{}) {\n\tif printDebug {\n\t\tfmt."
  },
  {
    "path": "catfs/vcs/diff.go",
    "chars": 2860,
    "preview": "package vcs\n\nimport (\n\tc \"github.com/sahib/brig/catfs/core\"\n\tn \"github.com/sahib/brig/catfs/nodes\"\n)\n\n// DiffPair is a p"
  },
  {
    "path": "catfs/vcs/diff_test.go",
    "chars": 4847,
    "preview": "package vcs\n\nimport (\n\t\"testing\"\n\n\tc \"github.com/sahib/brig/catfs/core\"\n\t\"github.com/stretchr/testify/require\"\n)\n\nfunc s"
  },
  {
    "path": "catfs/vcs/history.go",
    "chars": 11516,
    "preview": "package vcs\n\nimport (\n\t\"fmt\"\n\t\"path\"\n\t\"strings\"\n\n\te \"github.com/pkg/errors\"\n\tc \"github.com/sahib/brig/catfs/core\"\n\tie \"g"
  },
  {
    "path": "catfs/vcs/history_test.go",
    "chars": 23033,
    "preview": "package vcs\n\nimport (\n\t\"testing\"\n\t\"time\"\n\n\tc \"github.com/sahib/brig/catfs/core\"\n\t\"github.com/sahib/brig/catfs/db\"\n\tn \"gi"
  },
  {
    "path": "catfs/vcs/mapper.go",
    "chars": 21026,
    "preview": "package vcs\n\n// NOTE ON CODING STYLE:\n// If you modify something in here, make sure to always\n// incude \"src\" or \"dst\" i"
  },
  {
    "path": "catfs/vcs/mapper_test.go",
    "chars": 13538,
    "preview": "package vcs\n\nimport (\n\t\"fmt\"\n\t\"testing\"\n\n\tc \"github.com/sahib/brig/catfs/core\"\n\tn \"github.com/sahib/brig/catfs/nodes\"\n\t\""
  },
  {
    "path": "catfs/vcs/patch.go",
    "chars": 10568,
    "preview": "package vcs\n\nimport (\n\t\"errors\"\n\t\"path\"\n\t\"sort\"\n\n\te \"github.com/pkg/errors\"\n\tc \"github.com/sahib/brig/catfs/core\"\n\tie \"g"
  },
  {
    "path": "catfs/vcs/patch_test.go",
    "chars": 8448,
    "preview": "package vcs\n\nimport (\n\t\"testing\"\n\n\tc \"github.com/sahib/brig/catfs/core\"\n\tn \"github.com/sahib/brig/catfs/nodes\"\n\th \"githu"
  },
  {
    "path": "catfs/vcs/reset.go",
    "chars": 3772,
    "preview": "package vcs\n\nimport (\n\t\"errors\"\n\t\"fmt\"\n\t\"path\"\n\n\te \"github.com/pkg/errors\"\n\tc \"github.com/sahib/brig/catfs/core\"\n\tie \"gi"
  },
  {
    "path": "catfs/vcs/reset_test.go",
    "chars": 3988,
    "preview": "package vcs\n\nimport (\n\t\"testing\"\n\n\tc \"github.com/sahib/brig/catfs/core\"\n\t\"github.com/sahib/brig/catfs/db\"\n\tie \"github.co"
  },
  {
    "path": "catfs/vcs/resolve.go",
    "chars": 10365,
    "preview": "package vcs\n\n// This package implements brig's sync algorithm which I called, in a burst of\n// modesty, \"bright\". (Not b"
  },
  {
    "path": "catfs/vcs/resolve_test.go",
    "chars": 1867,
    "preview": "package vcs\n\nimport (\n\t\"testing\"\n\n\tc \"github.com/sahib/brig/catfs/core\"\n\tn \"github.com/sahib/brig/catfs/nodes\"\n\t\"github."
  },
  {
    "path": "catfs/vcs/sync.go",
    "chars": 10820,
    "preview": "package vcs\n\nimport (\n\t\"fmt\"\n\t\"path\"\n\n\te \"github.com/pkg/errors\"\n\tc \"github.com/sahib/brig/catfs/core\"\n\tie \"github.com/s"
  },
  {
    "path": "catfs/vcs/sync_test.go",
    "chars": 10878,
    "preview": "package vcs\n\nimport (\n\t\"testing\"\n\n\tc \"github.com/sahib/brig/catfs/core\"\n\th \"github.com/sahib/brig/util/hashlib\"\n\t\"github"
  },
  {
    "path": "catfs/vcs/undelete.go",
    "chars": 2180,
    "preview": "package vcs\n\nimport (\n\t\"fmt\"\n\n\tc \"github.com/sahib/brig/catfs/core\"\n\tie \"github.com/sahib/brig/catfs/errors\"\n\tn \"github."
  },
  {
    "path": "client/.gitignore",
    "chars": 11,
    "preview": "discovery/\n"
  },
  {
    "path": "client/client.go",
    "chars": 1468,
    "preview": "package client\n\nimport (\n\t\"context\"\n\t\"net\"\n\n\t\"github.com/sahib/brig/server/capnp\"\n\t\"github.com/sahib/brig/util\"\n\t\"zombie"
  },
  {
    "path": "client/clienttest/daemon.go",
    "chars": 3113,
    "preview": "package clienttest\n\nimport (\n\t\"context\"\n\t\"io/ioutil\"\n\t\"os\"\n\t\"path/filepath\"\n\t\"time\"\n\n\t\"github.com/sahib/brig/client\"\n\t\"g"
  },
  {
    "path": "client/fs_cmds.go",
    "chars": 11170,
    "preview": "package client\n\nimport (\n\t\"fmt\"\n\t\"io\"\n\t\"net\"\n\t\"os\"\n\t\"time\"\n\n\t\"github.com/sahib/brig/backend/httpipfs\"\n\t\"github.com/sahib"
  },
  {
    "path": "client/fs_test.go",
    "chars": 12333,
    "preview": "package client_test\n\nimport (\n\t\"bytes\"\n\t\"io\"\n\t\"io/ioutil\"\n\t\"os\"\n\t\"sort\"\n\t\"testing\"\n\n\t\"github.com/sahib/brig/client\"\n\t\"gi"
  },
  {
    "path": "client/net_cmds.go",
    "chars": 11699,
    "preview": "package client\n\nimport (\n\t\"errors\"\n\t\"strings\"\n\t\"time\"\n\n\t\"github.com/sahib/brig/server/capnp\"\n\tcapnplib \"zombiezen.com/go"
  },
  {
    "path": "client/net_test.go",
    "chars": 1106,
    "preview": "package client_test\n\nimport (\n\t\"bytes\"\n\t\"strings\"\n\t\"testing\"\n\t\"time\"\n\n\t\"github.com/sahib/brig/client\"\n\t\"github.com/stret"
  },
  {
    "path": "client/repo_cmds.go",
    "chars": 14550,
    "preview": "package client\n\nimport (\n\t\"sort\"\n\n\tgwdb \"github.com/sahib/brig/gateway/db\"\n\t\"github.com/sahib/brig/server/capnp\"\n\th \"git"
  },
  {
    "path": "client/vcs_cmds.go",
    "chars": 9248,
    "preview": "package client\n\nimport (\n\t\"strings\"\n\t\"time\"\n\n\t\"github.com/sahib/brig/server/capnp\"\n\th \"github.com/sahib/brig/util/hashli"
  },
  {
    "path": "cmd/bug.go",
    "chars": 3147,
    "preview": "package cmd\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"fmt\"\n\t\"net/url\"\n\t\"os\"\n\t\"os/exec\"\n\t\"strings\"\n\n\t\"github.com/fatih/color\"\n\t\"git"
  },
  {
    "path": "cmd/debug.go",
    "chars": 2872,
    "preview": "package cmd\n\nimport (\n\t\"fmt\"\n\t\"io\"\n\t\"io/ioutil\"\n\t\"os\"\n\n\t\"github.com/dustin/go-humanize\"\n\t\"github.com/mr-tron/base58\"\n\t\"g"
  },
  {
    "path": "cmd/exit_codes.go",
    "chars": 411,
    "preview": "package cmd\n\nconst (\n\t// Success is the same as EXIT_SUCCESS in C\n\tSuccess = iota\n\n\t// BadArgs passed to cli; not our fa"
  },
  {
    "path": "cmd/fs_handlers.go",
    "chars": 16444,
    "preview": "package cmd\n\nimport (\n\t\"fmt\"\n\t\"io\"\n\t\"io/ioutil\"\n\t\"os\"\n\t\"path\"\n\t\"path/filepath\"\n\t\"regexp\"\n\t\"strconv\"\n\t\"strings\"\n\t\"time\"\n\n"
  },
  {
    "path": "cmd/help.go",
    "chars": 56929,
    "preview": "package cmd\n\nimport (\n\t\"fmt\"\n\t\"strings\"\n\n\t\"github.com/sahib/brig/repo/hints\"\n\t\"github.com/toqueteos/webbrowser\"\n\t\"github"
  },
  {
    "path": "cmd/init.go",
    "chars": 531,
    "preview": "package cmd\n\nimport (\n\te \"github.com/pkg/errors\"\n\t\"github.com/sahib/brig/repo\"\n\t\"github.com/urfave/cli\"\n)\n\n// Init creat"
  },
  {
    "path": "cmd/inode_other.go",
    "chars": 271,
    "preview": "// +build windows\n\npackage cmd\n\n// indodeString convert file path a hardware dependent string\n// unfortunately on non un"
  },
  {
    "path": "cmd/inode_unix.go",
    "chars": 371,
    "preview": "// +build !windows\n\npackage cmd\n\nimport (\n\t\"fmt\"\n\t\"golang.org/x/sys/unix\"\n)\n\n// indodeString convert file path a hardwar"
  },
  {
    "path": "cmd/iobench.go",
    "chars": 3165,
    "preview": "package cmd\n\nimport (\n\t\"encoding/json\"\n\t\"fmt\"\n\t\"os\"\n\t\"strings\"\n\t\"time\"\n\n\t\"github.com/dustin/go-humanize\"\n\t\"github.com/sa"
  },
  {
    "path": "cmd/log.go",
    "chars": 310,
    "preview": "package cmd\n\nimport (\n\t\"fmt\"\n\t\"os\"\n\t\"strings\"\n\n\t\"github.com/urfave/cli\"\n)\n\nfunc logVerbose(ctx *cli.Context, format stri"
  },
  {
    "path": "cmd/net_handlers.go",
    "chars": 15442,
    "preview": "package cmd\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n\t\"os\"\n\t\"strings\"\n\t\"time\"\n\n\t\"github.com/fatih/color\"\n\t\"github.com/sahib/brig/cmd/ta"
  },
  {
    "path": "cmd/parser.go",
    "chars": 15378,
    "preview": "package cmd\n\nimport (\n\t\"fmt\"\n\t\"os\"\n\t\"runtime\"\n\t\"runtime/debug\"\n\t\"runtime/pprof\"\n\t\"strings\"\n\n\t\"github.com/fatih/color\"\n\ti"
  },
  {
    "path": "cmd/pwd/pwd-util/pwd-util.go",
    "chars": 473,
    "preview": "package main\n\nimport (\n\t\"crypto/rand\"\n\t\"fmt\"\n\n\t\"github.com/sahib/brig/cmd/pwd\"\n\t\"github.com/sahib/brig/util\"\n)\n\nfunc mai"
  },
  {
    "path": "cmd/pwd/pwd.go",
    "chars": 3408,
    "preview": "package pwd\n\nimport (\n\t\"bytes\"\n\t\"fmt\"\n\n\t\"github.com/chzyer/readline\"\n\t\"github.com/fatih/color\"\n\n\tzxcvbn \"github.com/nbut"
  },
  {
    "path": "cmd/pwd/pwd_test.go",
    "chars": 411,
    "preview": "package pwd\n\nimport (\n\t\"fmt\"\n\t\"testing\"\n\t\"time\"\n\n\tzxcvbn \"github.com/nbutton23/zxcvbn-go\"\n)\n\nfunc TestLongPassword(t *te"
  },
  {
    "path": "cmd/repo_handlers.go",
    "chars": 23876,
    "preview": "package cmd\n\nimport (\n\t\"bytes\"\n\t\"errors\"\n\t\"fmt\"\n\t\"io\"\n\t\"io/ioutil\"\n\t\"log/syslog\"\n\t\"net/url\"\n\t\"os\"\n\t\"os/exec\"\n\t\"path/file"
  },
  {
    "path": "cmd/suggest.go",
    "chars": 5583,
    "preview": "package cmd\n\nimport (\n\t\"context\"\n\t\"fmt\"\n\t\"io/ioutil\"\n\t\"os\"\n\t\"sort\"\n\t\"strings\"\n\n\t\"github.com/fatih/color\"\n\t\"github.com/sa"
  },
  {
    "path": "cmd/tabwriter/example_test.go",
    "chars": 2014,
    "preview": "// Copyright 2012 The Go Authors. All rights reserved.\n// Use of this source code is governed by a BSD-style\n// license "
  },
  {
    "path": "cmd/tabwriter/tabwriter.go",
    "chars": 17332,
    "preview": "// Copyright 2009 The Go Authors. All rights reserved.\n// Use of this source code is governed by a BSD-style\n// license "
  },
  {
    "path": "cmd/tabwriter/tabwriter_test.go",
    "chars": 11558,
    "preview": "// Copyright 2009 The Go Authors. All rights reserved.\n// Use of this source code is governed by a BSD-style\n// license "
  },
  {
    "path": "cmd/tree.go",
    "chars": 4053,
    "preview": "package cmd\n\nimport (\n\t\"fmt\"\n\t\"sort\"\n\t\"strings\"\n\t\"unicode\"\n\n\t\"github.com/fatih/color\"\n\t\"github.com/sahib/brig/client\"\n)\n"
  },
  {
    "path": "cmd/util.go",
    "chars": 12028,
    "preview": "package cmd\n\nimport (\n\t\"bytes\"\n\t\"context\"\n\t\"fmt\"\n\t\"io\"\n\t\"io/ioutil\"\n\t\"math/rand\"\n\t\"net/url\"\n\t\"os\"\n\t\"os/exec\"\n\t\"path/file"
  },
  {
    "path": "cmd/vcs_handlers.go",
    "chars": 14920,
    "preview": "package cmd\n\nimport (\n\t\"fmt\"\n\t\"os\"\n\t\"path\"\n\t\"path/filepath\"\n\t\"sort\"\n\t\"strings\"\n\t\"time\"\n\n\t\"github.com/sahib/brig/cmd/tabw"
  },
  {
    "path": "defaults/defaults.go",
    "chars": 964,
    "preview": "package defaults\n\nimport (\n\t\"os\"\n\n\te \"github.com/pkg/errors\"\n\t\"github.com/sahib/config\"\n)\n\n// CurrentVersion is the curr"
  },
  {
    "path": "defaults/defaults_v0.go",
    "chars": 8912,
    "preview": "package defaults\n\nimport (\n\t\"errors\"\n\t\"net/url\"\n\t\"runtime\"\n\n\t\"github.com/sahib/config\"\n)\n\n// DaemonDefaultURL returns th"
  },
  {
    "path": "docs/.gitignore",
    "chars": 8,
    "preview": "_build/\n"
  },
  {
    "path": "docs/Makefile",
    "chars": 601,
    "preview": "# Minimal makefile for Sphinx documentation\n#\n\n# You can set these variables from the command line.\nSPHINXOPTS    =\nSPHI"
  },
  {
    "path": "docs/_static/css/custom.css",
    "chars": 837,
    "preview": "span.strikethrough { text-decoration: line-through; }\n\na {\n    color: #1866bc !important;\n}\n\n#navbar a {\n    color: #18b"
  },
  {
    "path": "docs/asciinema/1_init.json",
    "chars": 15343,
    "preview": "{\n  \"version\": 1,\n  \"width\": 119,\n  \"height\": 29,\n  \"duration\": 27.933128,\n  \"command\": null,\n  \"title\": null,\n  \"env\": "
  },
  {
    "path": "docs/asciinema/1_init_with_pwm.json",
    "chars": 9909,
    "preview": "{\"version\": 2, \"width\": 172, \"height\": 42, \"timestamp\": 1542382930, \"env\": {\"SHELL\": \"/bin/zsh\", \"TERM\": \"xterm-256color"
  },
  {
    "path": "docs/asciinema/2_adding.json",
    "chars": 10585,
    "preview": "{\n  \"version\": 1,\n  \"width\": 119,\n  \"height\": 29,\n  \"duration\": 21.32594,\n  \"command\": null,\n  \"title\": null,\n  \"env\": {"
  },
  {
    "path": "docs/asciinema/3_coreutils.json",
    "chars": 27308,
    "preview": "{\n  \"version\": 1,\n  \"width\": 119,\n  \"height\": 29,\n  \"duration\": 44.439839,\n  \"command\": null,\n  \"title\": null,\n  \"env\": "
  },
  {
    "path": "docs/asciinema/4_mount.json",
    "chars": 47784,
    "preview": "{\"version\": 2, \"width\": 119, \"height\": 29, \"timestamp\": 1519811299, \"env\": {\"SHELL\": \"/bin/zsh\", \"TERM\": \"xterm-256color"
  },
  {
    "path": "docs/asciinema/5_commits.json",
    "chars": 19170,
    "preview": "{\"version\": 2, \"width\": 119, \"height\": 29, \"timestamp\": 1519811965, \"env\": {\"SHELL\": \"/bin/zsh\", \"TERM\": \"xterm-256color"
  },
  {
    "path": "docs/asciinema/6_history.json",
    "chars": 25265,
    "preview": "{\"version\": 2, \"width\": 119, \"height\": 29, \"timestamp\": 1519815271, \"env\": {\"SHELL\": \"/bin/zsh\", \"TERM\": \"xterm-256color"
  },
  {
    "path": "docs/asciinema/7_remotes.json",
    "chars": 26764,
    "preview": "{\"version\": 2, \"width\": 119, \"height\": 29, \"timestamp\": 1519833254, \"env\": {\"SHELL\": \"/bin/zsh\", \"TERM\": \"xterm-256color"
  },
  {
    "path": "docs/asciinema/8_sync.json",
    "chars": 9573,
    "preview": "{\"version\": 2, \"width\": 119, \"height\": 29, \"timestamp\": 1519834569, \"env\": {\"SHELL\": \"/bin/zsh\", \"TERM\": \"xterm-256color"
  },
  {
    "path": "docs/asciinema/9_pin.json",
    "chars": 29982,
    "preview": "{\"version\": 2, \"width\": 211, \"height\": 54, \"timestamp\": 1523915438, \"env\": {\"SHELL\": \"/bin/zsh\", \"TERM\": \"xterm-256color"
  },
  {
    "path": "docs/conf.py",
    "chars": 8941,
    "preview": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n#\n# brig documentation build configuration file, created by\n# sphinx-quic"
  },
  {
    "path": "docs/contributing.rst",
    "chars": 1631,
    "preview": "How to contribute\n=================\n\nSomething we would be especially interested in are *experience reports:* We\nwant yo"
  },
  {
    "path": "docs/faq.rst",
    "chars": 4951,
    "preview": "Frequently Asked Questions\n==========================\n\nGeneral questions\n-----------------\n\n1. Why is the software named"
  },
  {
    "path": "docs/features.rst",
    "chars": 6548,
    "preview": ".. _features-page:\n\nFeatures\n========\n\n.. note::\n\n    The featuers below are actually available, but before version 1.0."
  },
  {
    "path": "docs/index.rst",
    "chars": 3670,
    "preview": "``brig`` - decentralized & secure synchronization\n=================================================\n\n.. image:: _static/"
  },
  {
    "path": "docs/installation.rst",
    "chars": 4093,
    "preview": "Installation\n------------\n\nWe provide pre-compiled binaries on every release. ``brig`` comes to your computer\nas a singl"
  },
  {
    "path": "docs/make.bat",
    "chars": 808,
    "preview": "@ECHO OFF\r\n\r\npushd %~dp0\r\n\r\nREM Command file for Sphinx documentation\r\n\r\nif \"%SPHINXBUILD%\" == \"\" (\r\n\tset SPHINXBUILD=sp"
  },
  {
    "path": "docs/quickstart.rst",
    "chars": 4221,
    "preview": ".. warning::\n\n    The examples below are slightly outdated and will be revisited at some point.\n    All commands should "
  },
  {
    "path": "docs/requirements.txt",
    "chars": 59,
    "preview": "sphinx_bootstrap_theme==0.6.5\nsphinxcontrib-fulltoc==1.2.0\n"
  },
  {
    "path": "docs/roadmap.rst",
    "chars": 4952,
    "preview": "Roadmap\n=======\n\nThis document lists the improvements that can be done to ``brig`` and (if\npossible) when. All features "
  },
  {
    "path": "docs/talk/Makefile",
    "chars": 33,
    "preview": "all:\n\thovercraft -N -s index.rst\n"
  },
  {
    "path": "docs/talk/demo.rst",
    "chars": 3168,
    "preview": "0. Preparation\n==============\n\n- Windows: Chrome Incognito (slides, presenter console), Monitor Settings, Terminal (dock"
  },
  {
    "path": "docs/talk/index.rst",
    "chars": 11981,
    "preview": ":title: brig\n:author: Chris Pahl\n:css: style.css\n:data-transition-duration: 350\n:data-perspective: 5000\n\n.. role:: white"
  },
  {
    "path": "docs/talk/requirements.txt",
    "chars": 133,
    "preview": "argh==0.26.2\ndocutils==0.14\nhovercraft==2.5\nlxml==4.6.2\npathtools==0.1.2\nPygments==2.2.0\nPyYAML>=4.2b1\nsvg.path==2.2\nwat"
  },
  {
    "path": "docs/talk/style.css",
    "chars": 3892,
    "preview": "@import url(http://fonts.googleapis.com/css?family=Vollkorn);\n\nbody {\n    background-image: url(images/noise.png);\n    b"
  },
  {
    "path": "docs/tutorial/config.rst",
    "chars": 1157,
    "preview": ".. _configurations:\n\nConfiguration\n-------------\n\nAs mentioned earlier, we can use the built-in configuration system to "
  },
  {
    "path": "docs/tutorial/coreutils.rst",
    "chars": 8953,
    "preview": "Adding & Viewing files\n----------------------\n\nNow let's add some files to ``brig``. We do this by using ``brig stage``."
  },
  {
    "path": "docs/tutorial/gateway.rst",
    "chars": 7350,
    "preview": "Using the gateway / UI\n----------------------\n\nGateway Screenshots\n~~~~~~~~~~~~~~~~~~~\n\nThe gateway UI consists of sever"
  },
  {
    "path": "docs/tutorial/init.rst",
    "chars": 7221,
    "preview": "Creating a repository\n---------------------\n\nYou need a central place where ``brig`` stores its metadata. This place is\n"
  },
  {
    "path": "docs/tutorial/intro.rst",
    "chars": 3310,
    "preview": ".. _getting_started:\n\nGetting started\n================\n\nThis guide will walk you through the steps of synchronizing your"
  },
  {
    "path": "docs/tutorial/mounts.rst",
    "chars": 4667,
    "preview": "Mounting repositories\n---------------------\n\nUsing commands like ``brig cp`` might not feel very seamless, especially wh"
  },
  {
    "path": "docs/tutorial/pinning.rst",
    "chars": 3400,
    "preview": ".. _pinning-section:\n\nPinning\n-------\n\nHow can we control what files are stored locally and which should be retrieved\nfr"
  },
  {
    "path": "docs/tutorial/remotes.rst",
    "chars": 15246,
    "preview": "Remotes\n-------\n\nUntil now, all our operations were tied only to our local computer. But\n``brig`` is a synchronization t"
  },
  {
    "path": "docs/tutorial/vcs.rst",
    "chars": 9122,
    "preview": "Version control\n---------------\n\nOne key feature of ``brig`` over other synchronisation tools is the built-in\nand quite "
  },
  {
    "path": "events/backend/backend.go",
    "chars": 971,
    "preview": "package backend\n\nimport (\n\t\"context\"\n\t\"io\"\n)\n\n// Message is returned by Subscribe.\n// It encapsulates a single event mes"
  },
  {
    "path": "events/capnp/events_api.capnp",
    "chars": 174,
    "preview": "using Go = import \"/go.capnp\";\n\n@0xfc8938b535319bfe;\n$Go.package(\"capnp\");\n$Go.import(\"github.com/sahib/brig/events/capn"
  }
]

// ... and 199 more files (download for full content)

About this extraction

This page contains the full source code of the sahib/brig GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction covers 399 files (9.3 MB, approximately 2.5M tokens) and includes a symbol index of 6397 extracted functions, classes, methods, constants, and types. It can be used with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input; the full output can be copied to the clipboard or downloaded as a .txt file.
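The file listing above is a JSON array of objects with `path`, `chars`, and `preview` fields. As a minimal sketch of working with that manifest shape, the snippet below parses a small inline sample (the sample data and variable names are illustrative, not part of the extraction itself) using Python's standard `json` module:

```python
import json

# A small sample in the same shape as the manifest entries above
# (path / chars / preview); real data would come from the downloaded file.
sample = '''[
  {"path": "catfs/vcs/diff.go", "chars": 2860, "preview": "package vcs"},
  {"path": "catfs/vcs/sync.go", "chars": 10820, "preview": "package vcs"},
  {"path": "client/.gitignore", "chars": 11, "preview": "discovery/"}
]'''

entries = json.loads(sample)

# Total size in characters across the listed files.
total_chars = sum(e["chars"] for e in entries)

# Largest file by character count.
largest = max(entries, key=lambda e: e["chars"])

print(total_chars)      # 13691
print(largest["path"])  # catfs/vcs/sync.go
```

The same two lines of aggregation work unchanged on the full 399-entry array, e.g. to pick out the biggest files before feeding a token-limited model.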

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.