[
  {
    "path": ".gitignore",
    "content": "node_modules\ndb\ntmp\nbench\n2\n"
  },
  {
    "path": ".travis.yml",
    "content": "language: node_js\nnode_js:\n  - \"0.10\"\n  - \"0.12\"\n  - \"4\"\n  - \"6\"\n"
  },
  {
    "path": "LICENSE",
    "content": "The MIT License (MIT)\n\nCopyright (c) 2015 Mathias Buus\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in\nall copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\nTHE SOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "# hyperlog\n\n[Merkle DAG](https://github.com/jbenet/random-ideas/issues/20) that replicates based on scuttlebutt logs and causal linking\n\n```\nnpm install hyperlog\n```\n\n[![build status](http://img.shields.io/travis/mafintosh/hyperlog.svg?style=flat)](http://travis-ci.org/mafintosh/hyperlog)\n![dat](http://img.shields.io/badge/Development%20sponsored%20by-dat-green.svg?style=flat)\n\n``` js\nvar hyperlog = require('hyperlog')\n\nvar log = hyperlog(db) // where db is a levelup instance\n\n// add a node with value 'hello' and no links\nlog.add(null, 'hello', function(err, node) {\n  console.log('inserted node', node)\n\n  // insert 'world' with a link back to the above node\n  log.add([node.key], 'world', function(err, node) {\n    console.log('inserted new node', node)\n  })\n})\n```\n\n## Replicate graph\n\nTo replicate this log with another one simply use `log.replicate()` and pipe it together with a replication stream from another log.\n\n``` js\nvar l1 = hyperlog(db1)\nvar l2 = hyperlog(db2)\n\nvar s1 = l1.replicate()\nvar s2 = l2.replicate()\n\ns1.pipe(s2).pipe(s1)\n\ns1.on('end', function() {\n  console.log('replication ended')\n})\n```\n\nA detailed write-up on how this replication protocol works will be added to this repo in the near\nfuture. For now see the source code.\n\n## API\n\n#### `log = hyperlog(db, opts={})`\n\nCreate a new log instance. Valid keys for `opts` include:\n\n- `id` - some (ideally globally unique) string identifier for the log.\n- `valueEncoding` - a [levelup-style](https://github.com/Level/levelup#options)\n  encoding string or object (e.g. `\"json\"`)\n- `hash(links, value)` - a hash function that runs synchronously. Defaults to a\n  SHA-256 implementation.\n- `asyncHash(links, value, cb)` - an asynchronous hash function with node-style\n  callback (`cb(err, hash)`).\n- `identity`, `sign`, `verify` - values for creating a cryptographically signed\n  feed. 
See below.\n\n\nYou can also pass in an `identity` along with `sign`/`verify` functions, which can be\nused to create a signed log:\n\n``` js\n{\n  identity: aPublicKeyBuffer, // will be added to all nodes you insert\n  sign: function (node, cb) {\n    // will be called with all nodes you add\n    var signatureBuffer = someCrypto.sign(node.key, mySecretKey)\n    cb(null, signatureBuffer)\n  },\n  verify: function (node, cb) {\n    // will be called with all nodes you receive\n    if (!node.signature) return cb(null, false)\n    cb(null, someCrypto.verify(node.key, node.signature, node.identity))\n  }\n}\n```\n\n#### `log.add(links, value, opts={}, [cb])`\n\nAdd a new node to the graph. `links` should be an array of node keys that this node links to.\nIf it doesn't link to any nodes, use `null` or an empty array. `value` is the value that you want to store\nin the node. This should be a string or a buffer. The callback is called with the inserted node:\n\n``` js\nlog.add([link], value, function(err, node) {\n  // node looks like this\n  {\n    change: ... // the change number for this node in the local log\n    key:    ... // the hash of the node. this is also the key of the node\n    value:  ... // the value (as the valueEncoding type, default buffer) you inserted\n    log:    ... // the peer log this node was appended to\n    seq:    ... // the peer log seq number\n    links: ['hash-of-link-1', ...]\n  }\n})\n```\n\nOptionally supply an `opts.valueEncoding`.\n\n#### `log.append(value, opts={}, [cb])`\n\nAdd a value that links all the current heads.\n\nOptionally supply an `opts.valueEncoding`.\n\n#### `log.batch(docs, opts={}, [cb])`\n\nAdd many documents atomically to the log at once: either all the docs are\ninserted successfully or nothing is inserted.\n\n`docs` is an array of objects where each object looks like:\n\n``` js\n{\n  links: [...] // array of ancestor node keys\n  value: ... 
// the value to insert\n}\n```\n\nThe callback `cb(err, nodes)` is called with an array of `nodes`. Each `node` is\nof the form described in the `log.add()` section.\n\nYou may specify an `opts.valueEncoding`.\n\n#### `log.get(hash, opts={}, cb)`\n\nLook up a node by its hash. Returns a node similar to `.add` above.\n\nOptionally supply an `opts.valueEncoding`.\n\n#### `log.heads(opts={}, cb)`\n\nGet the heads of the graph as a list. A head is a node that no other node\nlinks to.\n\n``` js\nlog.heads(function(err, heads) {\n  console.log(heads) // prints an array of nodes\n})\n```\n\nThe method also returns a stream of heads, which is useful\nif, for some reason, your graph has A LOT of heads:\n\n``` js\nvar headsStream = log.heads()\n\nheadsStream.on('data', function(node) {\n  console.log('head:', node)\n})\n\nheadsStream.on('end', function() {\n  console.log('(no more heads)')\n})\n```\n\nOptionally supply an `opts.valueEncoding`.\n\n#### `changesStream = log.createReadStream([options])`\n\nTail the changes feed from the log. 
Every time you add a node to the graph,\nthe changes feed is updated with that node.\n\n``` js\nvar changesStream = log.createReadStream({live:true})\n\nchangesStream.on('data', function(node) {\n  console.log('change:', node)\n})\n```\n\nOptions include:\n\n``` js\n{\n  since: changeNumber     // only returns changes AFTER the number\n  live: false             // never close the change stream\n  tail: false             // since = lastChange\n  limit: number           // only return up to `limit` changes\n  until: number           // (for non-live streams) only returns changes BEFORE the number\n  valueEncoding: 'binary'\n}\n```\n\n#### `replicationStream = log.replicate([options])`\n\nReplicate the log to another one using a replication stream.\nSimply pipe your replication stream together with another log's replication stream.\n\n``` js\nvar l1 = hyperlog(db1)\nvar l2 = hyperlog(db2)\n\nvar s1 = l1.createReplicationStream()\nvar s2 = l2.createReplicationStream()\n\ns1.pipe(s2).pipe(s1)\n\ns1.on('end', function() {\n  console.log('replication ended')\n})\n```\n\nOptions include:\n\n``` js\n{\n  mode: 'push' | 'pull' | 'sync', // set replication mode. defaults to sync\n  live: true, // keep the replication stream open. defaults to false\n  metadata: someBuffer, // send optional metadata as part of the handshake\n  frame: true // frame the data with length prefixes. defaults to true\n}\n```\n\nIf you send `metadata` it will be emitted as a `metadata` event on the stream.\nA detailed write-up on how the graph replicates will be added later.\n\n#### log.on('preadd', function (node) {})\n\nOn the same tick that `log.add()` is called, this event fires with the `node`\nabout to be inserted into the log. 
At this stage of the add process, the node has\nthese properties:\n\n* `node.log`\n* `node.key`\n* `node.value`\n* `node.links`\n\n#### log.on('add', function (node) {})\n\nAfter a node has been successfully added to the log, this event fires with the\nfull `node` object that the callback to `.add()` gets.\n\n#### log.on('reject', function (node) {})\n\nWhen a node is rejected (for example, because its signature fails verification),\nthis event fires instead of the `add` event.\n\nYou can count `preadd` events against the combined total of `add` and `reject`\nevents to know when the log is completely caught up.\n\n## Hyperlog Hygiene\n\nA hyperlog will refer to potentially *many* different logs as it replicates with\nothers, each with its own ID. Bear in mind that each hyperlog's underlying\nleveldb contains a notion of what its *own* local ID is. If you make a copy of a\nhyperlog's leveldb and write different data to each copy, the results are\nunpredictable and likely disastrous. Only ever use the included replication\nmechanism for making copies of a hyperlog!\n\n\n## License\n\nMIT\n"
  },
  {
    "path": "example/log.js",
    "content": "var hyperlog = require('../')\nvar memdb = require('memdb')\n\nvar log = hyperlog(memdb())\nvar clone = hyperlog(memdb())\n\nvar sync = function (a, b) {\n  a = a.createReplicationStream({mode: 'push'})\n  b = b.createReplicationStream({mode: 'pull'})\n\n  a.on('push', function () {\n    console.log('a pushed')\n  })\n\n  a.on('pull', function () {\n    console.log('a pulled')\n  })\n\n  a.on('end', function () {\n    console.log('a ended')\n  })\n\n  b.on('push', function () {\n    console.log('b pushed')\n  })\n\n  b.on('pull', function () {\n    console.log('b pulled')\n  })\n\n  b.on('end', function () {\n    console.log('b ended')\n  })\n\n  a.pipe(b).pipe(a)\n}\n\nclone.createReadStream({live: true}).on('data', function (data) {\n  console.log('change: (%d) %s', data.change, data.key)\n})\n\nlog.add(null, 'hello', function (err, node) {\n  if (err) throw err\n  log.add(node, 'world', function (err, node) {\n    if (err) throw err\n    sync(log, clone)\n    log.add(null, 'meh')\n  })\n})\n"
  },
  {
    "path": "example/signed.js",
    "content": "var hyperlog = require('../')\nvar memdb = require('memdb')\nvar sodium = require('sodium').api\nvar eq = require('buffer-equals')\n\nvar keys = sodium.crypto_sign_keypair()\nvar log = hyperlog(memdb(), {\n  identity: keys.publicKey,\n  sign: function (node, cb) {\n    var bkey = Buffer(node.key, 'hex')\n    cb(null, sodium.crypto_sign(bkey, keys.secretKey))\n  }\n})\nvar clone = hyperlog(memdb(), {\n  verify: function (node, cb) {\n    if (!node.signature) return cb(null, false)\n    if (!eq(node.identity, keys.publicKey)) return cb(null, false)\n    var bkey = Buffer(node.key, 'hex')\n    var m = sodium.crypto_sign_open(node.signature, node.identity)\n    cb(null, eq(m, bkey))\n  }\n})\n\nvar sync = function (a, b) {\n  a = a.createReplicationStream({mode: 'push'})\n  b = b.createReplicationStream({mode: 'pull'})\n\n  a.on('push', function () {\n    console.log('a pushed')\n  })\n\n  a.on('pull', function () {\n    console.log('a pulled')\n  })\n\n  a.on('end', function () {\n    console.log('a ended')\n  })\n\n  b.on('push', function () {\n    console.log('b pushed')\n  })\n\n  b.on('pull', function () {\n    console.log('b pulled')\n  })\n\n  b.on('end', function () {\n    console.log('b ended')\n  })\n\n  a.pipe(b).pipe(a)\n}\n\nclone.createReadStream({live: true}).on('data', function (data) {\n  console.log('change: (%d) %s', data.change, data.key)\n})\n\nlog.add(null, 'hello', function (err, node) {\n  if (err) throw err\n  log.add(node, 'world', function (err, node) {\n    if (err) throw err\n    sync(log, clone)\n    log.add(null, 'meh')\n  })\n})\n"
  },
  {
    "path": "index.js",
    "content": "var after = require('after-all')\nvar lexint = require('lexicographic-integer')\nvar collect = require('stream-collector')\nvar through = require('through2')\nvar pump = require('pump')\nvar from = require('from2')\nvar mutexify = require('mutexify')\nvar cuid = require('cuid')\nvar logs = require('level-logs')\nvar events = require('events')\nvar util = require('util')\nvar enumerate = require('level-enumerate')\nvar replicate = require('./lib/replicate')\nvar messages = require('./lib/messages')\nvar hash = require('./lib/hash')\nvar encoder = require('./lib/encode')\nvar defined = require('defined')\nvar parallel = require('run-parallel')\nvar waterfall = require('run-waterfall')\n\nvar ID = '!!id'\nvar CHANGES = '!changes!'\nvar NODES = '!nodes!'\nvar HEADS = '!heads!'\n\nvar INVALID_SIGNATURE = new Error('Invalid signature')\nvar CHECKSUM_MISMATCH = new Error('Checksum mismatch')\nvar INVALID_LOG = new Error('Invalid log sequence')\n\nINVALID_LOG.notFound = true\nINVALID_LOG.status = 404\n\nvar noop = function () {}\n\nvar Hyperlog = function (db, opts) {\n  if (!(this instanceof Hyperlog)) return new Hyperlog(db, opts)\n  if (!opts) opts = {}\n\n  events.EventEmitter.call(this)\n\n  this.id = defined(opts.id, null)\n  this.enumerate = enumerate(db, {prefix: 'enum'})\n  this.db = db\n  this.logs = logs(db, {prefix: 'logs', valueEncoding: messages.Entry})\n  this.lock = defined(opts.lock, mutexify())\n  this.changes = 0\n  this.setMaxListeners(0)\n  this.valueEncoding = defined(opts.valueEncoding, opts.encoding, 'binary')\n  this.identity = defined(opts.identity, null)\n  this.verify = defined(opts.verify, null)\n  this.sign = defined(opts.sign, null)\n  this.hash = defined(opts.hash, hash)\n  this.asyncHash = defined(opts.asyncHash, null)\n\n  // Retrieve this hyperlog instance's unique ID.\n  var self = this\n  var getId = defined(opts.getId, function (cb) {\n    db.get(ID, {valueEncoding: 'utf-8'}, function (_, id) {\n      if (id) return 
cb(null, id)\n      id = cuid()\n      db.put(ID, id, function () {\n        cb(null, id)\n      })\n    })\n  })\n\n  // Startup logic to:\n  // 1. Determine & record the largest change # in the db.\n  // 2. Determine this hyperlog db's local ID.\n  //\n  // This is behind a lock in order to ensure that no hyperlog operations\n  // can be performed -- these two values MUST be known before any\n  // hyperlog usage may occur.\n  this.lock(function (release) {\n    collect(db.createKeyStream({gt: CHANGES, lt: CHANGES + '~', reverse: true, limit: 1}), function (_, keys) {\n      self.changes = Math.max(self.changes, keys && keys.length ? lexint.unpack(keys[0].split('!').pop(), 'hex') : 0)\n      if (self.id) return release()\n      getId(function (_, id) {\n        self.id = id || cuid()\n        release()\n      })\n    })\n  })\n}\n\nutil.inherits(Hyperlog, events.EventEmitter)\n\n// Call callback 'cb' once the hyperlog is ready for use (knows some\n// fundamental properties about itself from the leveldb). If it's already\n// ready, cb is called immediately.\nHyperlog.prototype.ready = function (cb) {\n  if (this.id) return cb()\n  this.lock(function (release) {\n    release()\n    cb()\n  })\n}\n\n// Returns a readable stream of all hyperlog heads. 
That is, all nodes that no\n// other nodes link to.\nHyperlog.prototype.heads = function (opts, cb) {\n  var self = this\n  if (!opts) opts = {}\n  if (typeof opts === 'function') {\n    cb = opts\n    opts = {}\n  }\n\n  var rs = this.db.createValueStream({\n    gt: HEADS,\n    lt: HEADS + '~',\n    valueEncoding: 'utf-8'\n  })\n\n  var format = through.obj(function (key, enc, cb) {\n    self.get(key, opts, cb)\n  })\n\n  return collect(pump(rs, format), cb)\n}\n\n// Retrieve a single, specific node, given its key.\nHyperlog.prototype.get = function (key, opts, cb) {\n  if (!opts) opts = {}\n  if (typeof opts === 'function') {\n    cb = opts\n    opts = {}\n  }\n  var self = this\n  this.db.get(NODES + key, {valueEncoding: 'binary'}, function (err, buf) {\n    if (err) return cb(err)\n    var node = messages.Node.decode(buf)\n    node.value = encoder.decode(node.value, opts.valueEncoding || self.valueEncoding)\n    cb(null, node)\n  })\n}\n\n// Utility function to be used in a nodes.reduce() to determine the largest\n// change # present.\nvar maxChange = function (max, cur) {\n  return Math.max(max, cur.change)\n}\n\n// Consumes either a string or a hyperlog node and returns its key.\nvar toKey = function (link) {\n  return typeof link !== 'string' ? 
link.key : link\n}\n\n// Adds a new hyperlog node to an existing array of leveldb batch insertions.\n// This includes performing crypto signing and verification.\n// Performs deduplication; returns the existing node if already present in the hyperlog.\nvar addBatchAndDedupe = function (dag, node, logLinks, batch, opts, cb) {\n  if (opts.hash && node.key !== opts.hash) return cb(CHECKSUM_MISMATCH)\n  if (opts.seq && node.seq !== opts.seq) return cb(INVALID_LOG)\n\n  var log = {\n    change: node.change,\n    node: node.key,\n    links: logLinks\n  }\n\n  var onclone = function (clone) {\n    if (!opts.log) return cb(null, clone, [])\n    batch.push({type: 'put', key: dag.logs.key(node.log, node.seq), value: messages.Entry.encode(log)})\n    cb(null, clone)\n  }\n\n  var done = function () {\n    dag.get(node.key, { valueEncoding: 'binary' }, function (_, clone) {\n      // This node already exists somewhere in the hyperlog; add it to the\n      // log's append-only log, but don't insert it again.\n      if (clone) return onclone(clone)\n\n      var links = node.links\n      for (var i = 0; i < links.length; i++) batch.push({type: 'del', key: HEADS + links[i]})\n      batch.push({type: 'put', key: CHANGES + lexint.pack(node.change, 'hex'), value: node.key})\n      batch.push({type: 'put', key: NODES + node.key, value: messages.Node.encode(node)})\n      batch.push({type: 'put', key: HEADS + node.key, value: node.key})\n      batch.push({type: 'put', key: dag.logs.key(node.log, node.seq), value: messages.Entry.encode(log)})\n\n      cb(null, node)\n    })\n  }\n\n  // Local node; sign it.\n  if (node.log === dag.id) {\n    if (!dag.sign || node.signature) return done()\n    dag.sign(node, function (err, sig) {\n      if (err) return cb(err)\n      if (!node.identity) node.identity = dag.identity\n      node.signature = sig\n      done()\n    })\n  // Remote node; verify it.\n  } else {\n    if (!dag.verify) return done()\n    dag.verify(node, function (err, valid) {\n  
    if (err) return cb(err)\n      if (!valid) return cb(INVALID_SIGNATURE)\n      done()\n    })\n  }\n}\n\nvar getLinks = function (dag, id, links, cb) {\n  var logLinks = []\n  var nextLink = function () {\n    var cb = next()\n    return function (err, link) {\n      if (err) return cb(err)\n      if (link.log !== id && logLinks.indexOf(link.log) === -1) logLinks.push(link.log)\n      cb(null)\n    }\n  }\n  var next = after(function (err) {\n    if (err) cb(err)\n    else cb(null, logLinks)\n  })\n\n  for (var i = 0; i < links.length; i++) {\n    dag.get(links[i], nextLink())\n  }\n}\n\n// Produce a readable stream of all nodes added from this point onward, in\n// topological order.\nvar createLiveStream = function (dag, opts) {\n  var since = opts.since || 0\n  var limit = opts.limit || -1\n  var wait = null\n\n  var read = function (size, cb) {\n    if (dag.changes <= since) {\n      wait = cb\n      return\n    }\n\n    if (!limit) return cb(null, null)\n\n    dag.db.get(CHANGES + lexint.pack(since + 1, 'hex'), function (err, hash) {\n      if (err) return cb(err)\n      dag.get(hash, opts, function (err, node) {\n        if (err) return cb(err)\n        since = node.change\n        if (limit !== -1) limit--\n        cb(null, node)\n      })\n    })\n  }\n\n  var kick = function () {\n    if (!wait) return\n    var cb = wait\n    wait = null\n    read(0, cb)\n  }\n\n  dag.on('add', kick)\n  dag.ready(kick)\n\n  var rs = from.obj(read)\n\n  rs.once('close', function () {\n    dag.removeListener('add', kick)\n  })\n\n  return rs\n}\n\n// Produce a readable stream of nodes in the hyperlog, in topological order.\nHyperlog.prototype.createReadStream = function (opts) {\n  if (!opts) opts = {}\n  if (opts.tail) {\n    opts.since = this.changes\n  }\n  if (opts.live) return createLiveStream(this, opts)\n\n  var self = this\n  var since = opts.since || 0\n  var until = opts.until || 0\n\n  var keys = this.db.createValueStream({\n    gt: CHANGES + lexint.pack(since, 
'hex'),\n    lt: CHANGES + (until ? lexint.pack(until, 'hex') : '~'),\n    valueEncoding: 'utf-8',\n    reverse: opts.reverse,\n    limit: opts.limit\n  })\n\n  var get = function (key, enc, cb) {\n    self.get(key, opts, cb)\n  }\n\n  return pump(keys, through.obj(get))\n}\n\nHyperlog.prototype.replicate =\nHyperlog.prototype.createReplicationStream = function (opts) {\n  return replicate(this, opts)\n}\n\nHyperlog.prototype.add = function (links, value, opts, cb) {\n  if (typeof opts === 'function') {\n    cb = opts\n    opts = {}\n  }\n  if (!cb) cb = noop\n  this.batch([{links: links, value: value}], opts, function (err, nodes) {\n    if (err) cb(err)\n    else cb(null, nodes[0])\n  })\n}\n\nHyperlog.prototype.batch = function (docs, opts, cb) {\n  // 0. preamble\n  if (typeof opts === 'function') {\n    cb = opts\n    opts = {}\n  }\n  if (!cb) cb = noop\n  if (!opts) opts = {}\n\n  // Bail asynchronously; nothing to add.\n  if (docs.length === 0) return process.nextTick(function () { cb(null, []) })\n\n  var self = this\n  var id = opts.log || self.id\n  opts.log = id\n\n  var logLinks = {}\n  var lockRelease = null\n  var latestSeq\n\n  // Bubble up errors on non-batch (1 element) calls.\n  var bubbleUpErrors = false\n  if (docs.length === 1) {\n    bubbleUpErrors = true\n  }\n\n  // 1. construct initial hyperlog \"node\" for each of \"docs\"\n  var nodes = docs.map(function (doc) {\n    return constructInitialNode(doc, opts)\n  })\n\n  // 2. emit all preadd events\n  nodes.forEach(function (node) {\n    self.emit('preadd', node)\n  })\n\n  waterfall([\n    // 3. lock the hyperlog (if needed)\n    // 4. wait until the hyperlog is 'ready'\n    // 5. retrieve the seq# of this hyperlog's head\n    lockAndGetSeqNumber,\n\n    // 6. hash (async/sync) all nodes\n    // 7. retrieve + set 'getLinks' for each node\n    function (seq, release, done) {\n      lockRelease = release\n      latestSeq = seq\n\n      hashNodesAndFindLinks(nodes, done)\n    },\n\n    // 8. 
dedupe the node against the params AND the hyperlog (in sequence)\n    function (nodes, done) {\n      dedupeNodes(nodes, latestSeq, done)\n    },\n\n    // 9. create each node's leveldb batch operation object\n    function (nodes, done) {\n      computeBatchNodeOperations(nodes, done)\n    },\n\n    // 10. perform the leveldb batch op\n    function (nodes, batchOps, done) {\n      self.db.batch(batchOps, function (err) {\n        if (err) {\n          nodes.forEach(rejectNode)\n          return done(err)\n        }\n        done(null, nodes)\n      })\n    },\n\n    // 11. update the hyperlog's change#\n    // 12. emit all add/reject events\n    function (nodes, done) {\n      self.changes = nodes.reduce(maxChange, self.changes)\n      done(null, nodes)\n    }\n  ], function (err, nodes) {\n    // release lock, if necessary\n    if (lockRelease) return lockRelease(onUnlocked, err)\n    onUnlocked(err)\n\n    function onUnlocked (err) {\n      // Error; all nodes were rejected.\n      if (err) return cb(err)\n\n      // Emit add events.\n      nodes.forEach(function (node) {\n        self.emit('add', node)\n      })\n\n      cb(null, nodes)\n    }\n  })\n\n  function rejectNode (node) {\n    self.emit('reject', node)\n  }\n\n  // Hashes and finds links for the given nodes. If some nodes fail to hash or to\n  // have their links found, they are rejected and not returned in the results.\n  function hashNodesAndFindLinks (nodes, done) {\n    var goodNodes = []\n\n    parallel(\n      nodes.map(function (node) {\n        return function (done) {\n          hashNode(node, function (err, key) {\n            if (err) {\n              rejectNode(node)\n              return done(bubbleUpErrors ? err : null)\n            }\n            node.key = key\n\n            getLinks(self, id, node.links, function (err, links) {\n              if (err) {\n                rejectNode(node)\n                return done(bubbleUpErrors ? 
err : null)\n              }\n              logLinks[node.key] = links\n\n              if (!node.log) node.log = self.id\n\n              goodNodes.push(node)\n              done()\n            })\n          })\n        }\n      }),\n      function (err) {\n        done(err, goodNodes)\n      }\n    )\n  }\n\n  function lockAndGetSeqNumber (done) {\n    if (opts.release) onlocked(opts.release)\n    else self.lock(onlocked)\n\n    function onlocked (release) {\n      self.ready(function () {\n        self.logs.head(id, function (err, seq) {\n          if (err) return release(cb, err)\n          done(null, seq, release)\n        })\n      })\n    }\n  }\n\n  function dedupeNodes (nodes, seq, done) {\n    var goodNodes = []\n\n    var added = nodes.length > 1 ? {} : null\n    var seqIdx = 1\n    var changeIdx = 1\n\n    waterfall(\n      nodes.map(function (node) {\n        return function (done) {\n          dedupeNode(node, done)\n        }\n      }),\n      function (err) {\n        done(err, goodNodes)\n      }\n    )\n\n    function dedupeNode (node, done) {\n      // Check if the to-be-added node already exists in the hyperlog.\n      self.get(node.key, function (_, clone) {\n        // It already exists\n        if (clone) {\n          node.seq = seq + (seqIdx++)\n          node.change = clone.change\n        // It already exists; it was added in this batch op earlier on.\n        } else if (added && added[node.key]) {\n          node.seq = added[node.key].seq\n          node.change = added[node.key].change\n          rejectNode(node)\n          return done()\n        } else {\n          // new node across all logs\n          node.seq = seq + (seqIdx++)\n          node.change = self.changes + (changeIdx++)\n        }\n\n        if (added) added[node.key] = node\n\n        goodNodes.push(node)\n\n        done()\n      })\n    }\n  }\n\n  function computeBatchNodeOperations (nodes, done) {\n    var batch = []\n    var goodNodes = []\n\n    waterfall(\n      
nodes.map(function (node) {\n        return function (done) {\n          computeNodeBatchOp(node, function (err, ops) {\n            if (err) {\n              rejectNode(node)\n              return done(bubbleUpErrors ? err : null)\n            }\n            batch = batch.concat(ops)\n            goodNodes.push(node)\n            done()\n          })\n        }\n      }),\n      function (err) {\n        if (err) return done(err)\n        done(null, nodes, batch)\n      }\n    )\n\n    // Create a new leveldb batch operation for this node.\n    function computeNodeBatchOp (node, done) {\n      var batch = []\n      var links = logLinks[node.key]\n      addBatchAndDedupe(self, node, links, batch, opts, function (err, newNode) {\n        if (err) return done(err)\n        newNode.value = encoder.decode(newNode.value, opts.valueEncoding || self.valueEncoding)\n        done(null, batch)\n      })\n    }\n  }\n\n  function constructInitialNode (doc, opts) {\n    var links = doc.links || []\n    if (!Array.isArray(links)) links = [links]\n    links = links.map(toKey)\n\n    var encodedValue = encoder.encode(doc.value, opts.valueEncoding || self.valueEncoding)\n    return {\n      log: opts.log || self.id,\n      key: null,\n      identity: doc.identity || opts.identity || null,\n      signature: opts.signature || null,\n      value: encodedValue,\n      links: links\n    }\n  }\n\n  function hashNode (node, done) {\n    if (self.asyncHash) {\n      self.asyncHash(node.links, node.value, done)\n    } else {\n      var key = self.hash(node.links, node.value)\n      done(null, key)\n    }\n  }\n}\n\nHyperlog.prototype.append = function (value, opts, cb) {\n  if (typeof opts === 'function') {\n    cb = opts\n    opts = {}\n  }\n  if (!cb) cb = noop\n  if (!opts) opts = {}\n  var self = this\n\n  this.lock(function (release) {\n    self.heads(function (err, heads) {\n      if (err) return release(cb, err)\n      opts.release = release\n      self.add(heads, value, opts, 
cb)\n    })\n  })\n}\n\nmodule.exports = Hyperlog\n"
  },
  {
    "path": "lib/encode.js",
    "content": "exports.encode = function (value, enc) {\n  if (typeof enc === 'object' && enc.encode) {\n    value = enc.encode(value)\n  } else if (enc === 'json') {\n    value = new Buffer(JSON.stringify(value))\n  }\n  if (typeof value === 'string') value = new Buffer(value)\n  return value\n}\n\nexports.decode = function (value, enc) {\n  if (typeof enc === 'object' && enc.decode) {\n    return enc.decode(value)\n  } else if (enc === 'json') {\n    return JSON.parse(value.toString())\n  } else if (enc === 'utf-8' || enc === 'utf8') {\n    return value.toString()\n  }\n  return value\n}\n"
  },
  {
    "path": "lib/hash.js",
    "content": "var framedHash = require('framed-hash')\nvar empty = new Buffer(0)\n\nmodule.exports = function (links, value) {\n  var hash = framedHash('sha256')\n  for (var i = 0; i < links.length; i++) hash.update(links[i])\n  hash.update(value || empty)\n  return hash.digest('hex')\n}\n"
  },
  {
    "path": "lib/messages.js",
    "content": "var protobuf = require('protocol-buffers')\nvar fs = require('fs')\nvar path = require('path')\n\nmodule.exports = protobuf(fs.readFileSync(path.join(__dirname, '..', 'schema.proto'), 'utf-8'))\n"
  },
  {
    "path": "lib/protocol.js",
    "content": "var Duplexify = require('duplexify')\nvar util = require('util')\nvar lpstream = require('length-prefixed-stream')\nvar through = require('through2')\nvar debug = require('debug')('hyperlog-replicate')\nvar messages = require('./messages')\n\nvar empty = {\n  encodingLength: function () {\n    return 0\n  },\n  encode: function (data, buf, offset) {\n    return buf\n  }\n}\n\nvar Protocol = function (opts) {\n  if (!(this instanceof Protocol)) return new Protocol(opts)\n  if (!opts) opts = {}\n\n  var frame = opts.frame !== false\n\n  this._encoder = frame ? lpstream.encode() : through.obj()\n  this._decoder = frame ? lpstream.decode() : through.obj()\n  this._finalize = opts.finalize ? opts.finalize : function (cb) { cb() }\n  this._process = opts.process || null\n\n  var self = this\n  var parse = through.obj(function (data, enc, cb) {\n    self._decode(data, cb)\n  })\n\n  parse.on('error', function (err) {\n    self.destroy(err)\n  })\n\n  this.on('end', function () {\n    debug('ended')\n    self.end()\n  })\n\n  this.on('finish', function () {\n    debug('finished')\n    self.finalize()\n  })\n\n  this._decoder.pipe(parse)\n\n  if (this._process) {\n    this._process.pipe(through.obj(function (node, enc, cb) {\n      self.emit('node', node, cb) || cb()\n    }))\n  }\n\n  var hwm = opts.highWaterMark || 16\n  Duplexify.call(this, this._decoder, this._encoder, frame ? 
{} : {objectMode: true, highWaterMark: hwm})\n}\n\nutil.inherits(Protocol, Duplexify)\n\nProtocol.prototype.handshake = function (handshake, cb) {\n  debug('sending handshake')\n  this._encode(0, messages.Handshake, handshake, cb)\n}\n\nProtocol.prototype.have = function (have, cb) {\n  debug('sending have')\n  this._encode(1, messages.Log, have, cb)\n}\n\nProtocol.prototype.want = function (want, cb) {\n  debug('sending want')\n  this._encode(2, messages.Log, want, cb)\n}\n\nProtocol.prototype.node = function (node, cb) {\n  debug('sending node')\n  this._encode(3, messages.Node, node, cb)\n}\n\nProtocol.prototype.sentHeads = function (cb) {\n  debug('sending sentHeads')\n  this._encode(4, empty, null, cb)\n}\n\nProtocol.prototype.sentWants = function (cb) {\n  debug('sending sentWants')\n  this._encode(5, empty, null, cb)\n}\n\nProtocol.prototype.finalize = function (cb) {\n  var self = this\n  this._finalize(function (err) {\n    debug('ending')\n    if (err) return self.destroy(err)\n    self._encoder.end(cb)\n  })\n}\n\nProtocol.prototype._encode = function (type, enc, data, cb) {\n  var buf = new Buffer(enc.encodingLength(data) + 1)\n  buf[0] = type\n  enc.encode(data, buf, 1)\n  this._encoder.write(buf, cb)\n}\n\nvar decodeMessage = function (data) {\n  switch (data[0]) {\n    case 0: return messages.Handshake.decode(data, 1)\n    case 1: return messages.Log.decode(data, 1)\n    case 2: return messages.Log.decode(data, 1)\n    case 3: return messages.Node.decode(data, 1)\n  }\n  return null\n}\n\nProtocol.prototype._decode = function (data, cb) {\n  try {\n    var msg = decodeMessage(data)\n  } catch (err) {\n    return cb(err)\n  }\n\n  switch (data[0]) {\n    case 0:\n      debug('receiving handshake')\n      return this.emit('handshake', msg, cb) || cb()\n\n    case 1:\n      debug('receiving have')\n      return this.emit('have', msg, cb) || cb()\n\n    case 2:\n      debug('receiving want')\n      return this.emit('want', msg, cb) || cb()\n\n    case 
3:\n      debug('receiving node')\n      return this._process ? this._process.write(msg, cb) : (this.emit('node', msg, cb) || cb())\n\n    case 4:\n      debug('receiving sentHeads')\n      return this.emit('sentHeads', cb) || cb()\n\n    case 5:\n      debug('receiving sentWants')\n      return this.emit('sentWants', cb) || cb()\n  }\n\n  cb()\n}\n\nmodule.exports = Protocol\n"
  },
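  {
    "path": "example/protocol-demo.js",
    "content": "// Hypothetical demo (not part of the original repo): a minimal sketch of the\n// wire protocol in lib/protocol.js. It wires two protocol instances together\n// and exchanges a handshake plus a single have message. File name and values\n// here are made up for illustration; run from the repo root.\nvar protocol = require('../lib/protocol')\n\nvar a = protocol({})\nvar b = protocol({})\n\n// the protocol stream is duplex: pipe the two ends into each other\na.pipe(b).pipe(a)\n\nb.on('handshake', function (handshake, cb) {\n  console.log('b got handshake, mode:', handshake.mode) // defaults to 'sync'\n  cb() // every handler must call cb to keep the stream flowing\n})\n\nb.on('have', function (have, cb) {\n  console.log('b was told a has', have.log, 'up to seq', have.seq)\n  cb()\n})\n\na.handshake({version: 1, mode: 'sync'}, function () {\n  a.have({log: 'some-log-id', seq: 3})\n})\n"
  },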
  {
    "path": "lib/replicate.js",
    "content": "var through = require('through2')\nvar pump = require('pump')\nvar bitfield = require('bitfield')\nvar protocol = require('./protocol')\nvar sortedQueue = require('./sorted-queue')\nvar encoder = require('./encode.js')\n\nvar noop = function () {}\nvar noarr = []\n\nvar MAX_BITFIELD = 10 * 1024 * 1024 // arbitrary high number\n\nmodule.exports = function (dag, opts) {\n  if (!opts) opts = {}\n\n  var stream = protocol(opts)\n  var mode = opts.mode || 'sync'\n\n  // Bitfield to ensure that the nodes of each log in the hyperlog is only sent\n  // once.\n  var pushing = bitfield(1024, {grow: MAX_BITFIELD})\n\n  // The largest change # known to this log when replication begins.\n  var changes = 0\n\n  var missing = 0\n\n  var done = false\n  var remoteSentWants = false\n  var remoteSentHeads = false\n  var localSentWants = false\n  var localSentHeads = false\n\n  var live = opts.live\n\n  // Local nodes yet to be sent.\n  var outgoing = sortedQueue()\n  // Remote nodes yet to be added to this hyperlog.\n  var incoming = sortedQueue()\n\n  // Asynchronous loop to continue sending nodes from a log in sequence from\n  // low seq # to its highest seq #.\n  outgoing.pull(function loop (entry) {\n    dag.get(entry.node, {valueEncoding: 'binary'}, function (err, node) {\n      if (err) return stream.destroy(err)\n\n      if (entry.log && (node.log !== entry.log || node.seq !== entry.seq)) { // deduplicated\n        node.log = entry.log\n        node.seq = entry.seq\n      }\n\n      stream.emit('push')\n      stream.node(node, function (err) {\n        if (err) return stream.destroy(err)\n        sendNode(node.log, node.seq + 1, function (err) {\n          if (err) return stream.destroy(err)\n          outgoing.pull(loop)\n        })\n      })\n    })\n  })\n\n  var pipe = function (a, b, cb) {\n    var destroy = function () {\n      a.destroy()\n    }\n\n    stream.on('close', destroy)\n    stream.on('finish', destroy)\n\n    a.on('end', function () {\n      
stream.removeListener('close', destroy)\n      stream.removeListener('finish', destroy)\n    })\n\n    return pump(a, b, cb)\n  }\n\n  // For live replication. Reads live from the local hyperlog and continues to\n  // send new nodes to the other end.\n  var sendChanges = function () {\n    var write = function (node, enc, cb) {\n      node.value = encoder.encode(node.value, dag.valueEncoding)\n      stream.node(node, cb)\n    }\n\n    stream.emit('live')\n    pipe(dag.createReadStream({since: changes, live: true}), through.obj(write))\n  }\n\n  // Check if replication is finished.\n  var update = function (cb) {\n    if (done || !localSentWants || !localSentHeads || !remoteSentWants || !remoteSentHeads) return cb()\n    done = true\n    if (!live) return stream.finalize(cb)\n    sendChanges()\n    cb()\n  }\n\n  // Inform the other side that we've requested all of the nodes we want.\n  var sentWants = function (cb) {\n    localSentWants = true\n    stream.sentWants()\n    update(cb)\n  }\n\n  // Inform the other side that we've sent all of the heads we have.\n  var sentHeads = function (cb) {\n    localSentHeads = true\n    stream.sentHeads()\n    update(cb)\n  }\n\n  // Send a specific entry in a specific log to the other side.\n  // If the node links to other nodes, inform the other side we have those,\n  // too.\n  var sendNode = function (log, seq, cb) {\n    dag.logs.get(log, seq, function (err, entry) {\n      if (err && err.notFound) return cb()\n      if (err) return cb(err)\n      if (entry.change > changes) return cb() // ensure snapshot\n\n      entry.log = log\n      entry.seq = seq\n\n      var i = 0\n      var loop = function () {\n        if (i < entry.links.length) return sendHave(entry.links[i++], loop)\n        entry.links = noarr // premature opt: less mem yo\n        outgoing.push(entry, cb)\n      }\n\n      loop()\n    })\n  }\n\n  // Add a received remote node to our local hyperlog.\n  // It is normal for the insertion to sometimes fail: we 
may have received a\n  // node that depends on another node from a log we haven't yet received. If\n  // so, enqueue it into 'incoming' and continue trying to re-insert it until\n  // its dependencies are also present.\n  var receiveNode = function (node, cb) {\n    var opts = {\n      hash: node.key,\n      log: node.log,\n      seq: node.seq,\n      identity: node.identity,\n      signature: node.signature,\n      valueEncoding: 'binary'\n    }\n    dag.add(node.links, node.value, opts, function (err) {\n      if (!err) return afterAdd(cb)\n      if (!err.notFound) return cb(err)\n      incoming.push(node, cb)\n    })\n  }\n\n  var afterAdd = function (cb) {\n    stream.emit('pull')\n    if (!localSentWants && !--missing) return sentWants(cb)\n    if (!incoming.length) return cb()\n    incoming.pull(function (node) {\n      receiveNode(node, cb)\n    })\n  }\n\n  var sendHave = function (log, cb) {\n    dag.enumerate(log, function (err, idx) {\n      if (err) return cb(err)\n\n      // Don't send the same log twice.\n      if (pushing.get(idx)) return cb()\n      pushing.set(idx, true)\n\n      dag.logs.head(log, function (err, seq) {\n        if (err) return cb(err)\n        dag.logs.get(log, seq, function loop (err, entry) { // ensure snapshot\n          if (err && err.notFound) return cb()\n          if (err) return cb(err)\n          if (entry.change > changes) return dag.logs.get(log, seq - 1, loop)\n          stream.have({log: log, seq: seq}, cb)\n        })\n      })\n    })\n  }\n\n  stream.once('sentHeads', function (cb) {\n    if (!localSentWants && !missing) sentWants(noop)\n    remoteSentHeads = true\n    update(cb)\n  })\n\n  stream.once('sentWants', function (cb) {\n    remoteSentWants = true\n    update(cb)\n  })\n\n  stream.on('want', function (head, cb) {\n    sendNode(head.log, head.seq + 1, cb)\n  })\n\n  stream.on('have', function (head, cb) {\n    dag.logs.head(head.log, function (err, seq) {\n      if (err) return cb(err)\n      if (seq >= 
head.seq) return cb()\n      missing += (head.seq - seq)\n      stream.want({log: head.log, seq: seq}, cb)\n    })\n  })\n\n  stream.on('node', receiveNode)\n\n  // start the handshake\n\n  stream.on('handshake', function (handshake, cb) {\n    var remoteMode = handshake.mode\n\n    if (remoteMode !== 'pull' && remoteMode !== 'push' && remoteMode !== 'sync') return cb(new Error('Remote uses invalid mode: ' + remoteMode))\n    if (remoteMode === 'pull' && mode === 'pull') return cb(new Error('Remote and local are both pulling'))\n    if (remoteMode === 'push' && mode === 'push') return cb(new Error('Remote and local are both pushing'))\n\n    remoteSentWants = remoteMode === 'push'\n    remoteSentHeads = remoteMode === 'pull'\n    localSentWants = mode === 'push' || remoteMode === 'pull'\n    localSentHeads = mode === 'pull' || remoteMode === 'push'\n\n    if (handshake.metadata) stream.emit('metadata', handshake.metadata)\n    if (!live) live = handshake.live\n    if (localSentHeads) return update(cb)\n\n    var write = function (node, enc, cb) {\n      sendHave(node.log, cb)\n    }\n\n    dag.lock(function (release) { // TODO: don't lock here. figure out how to snapshot the heads to a change instead\n      changes = dag.changes\n      pipe(dag.heads(), through.obj(write), function (err) {\n        release()\n        if (err) return cb(err)\n        sentHeads(cb)\n      })\n    })\n  })\n\n  stream.handshake({version: 1, mode: opts.mode, metadata: opts.metadata, live: live})\n\n  return stream\n}\n"
  },
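  {
    "path": "example/replicate-modes.js",
    "content": "// Hypothetical example (not part of the original repo): one-way replication.\n// The handshake in lib/replicate.js accepts 'push', 'pull' and 'sync' modes;\n// this sketch assumes replicate() forwards its options to that module. A\n// 'push' stream only sends nodes; the 'pull' side only receives. memdb is a\n// devDependency of this repo.\nvar hyperlog = require('../')\nvar memdb = require('memdb')\n\nvar source = hyperlog(memdb())\nvar target = hyperlog(memdb())\n\nsource.add(null, 'hello', function (err) {\n  if (err) throw err\n\n  var push = source.replicate({mode: 'push'})\n  var pull = target.replicate({mode: 'pull'})\n\n  push.pipe(pull).pipe(push)\n\n  // as in the README, 'end' fires once (non-live) replication finishes\n  push.on('end', function () {\n    target.heads(function (err, heads) {\n      if (err) throw err\n      console.log('target now has', heads.length, 'head(s)')\n    })\n  })\n})\n"
  },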
  {
    "path": "lib/sorted-queue.js",
    "content": "// A queue of hyperlog nodes that is sorted by the nodes' change #. The node\n// with the lowest change # will be the first dequeued.\n//\n// TODO: buffer to leveldb if the queue becomes large\nvar SortedQueue = function () {\n  if (!(this instanceof SortedQueue)) return new SortedQueue()\n  this.list = []\n  this.wait = null\n  this.length = 0\n}\n\nSortedQueue.prototype.push = function (entry, cb) {\n  var i = indexOf(this.list, entry.change)\n  if (i === this.list.length) this.list.push(entry)\n  else this.list.splice(i, 0, entry)\n  this.length++\n\n  if (this.wait) this.pull(this.wait)\n  if (cb) cb()\n}\n\nSortedQueue.prototype.pull = function (cb) {\n  if (!this.list.length) {\n    this.wait = cb\n    return\n  }\n\n  this.wait = null\n\n  var next = this.list.shift()\n  this.length--\n\n  cb(next)\n}\n\nfunction indexOf (list, change) {\n  var low = 0\n  var high = list.length\n  var mid = 0\n\n  while (low < high) {\n    mid = (low + high) >> 1\n    if (change < list[mid].change) high = mid\n    else low = mid + 1\n  }\n\n  return low\n}\n\nmodule.exports = SortedQueue\n"
  },
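  {
    "path": "example/sorted-queue-demo.js",
    "content": "// Hypothetical demo (not part of the original repo): lib/sorted-queue.js\n// dequeues entries by ascending change #, and pull() parks its callback\n// until an entry arrives when the queue is empty.\nvar sortedQueue = require('../lib/sorted-queue')\n\nvar q = sortedQueue()\n\n// push out of order; the binary search in push() keeps the list sorted\nq.push({change: 3, node: 'c'})\nq.push({change: 1, node: 'a'})\nq.push({change: 2, node: 'b'})\n\nq.pull(function (entry) {\n  console.log(entry.change) // 1 -- lowest change # comes out first\n})\nq.pull(function (entry) {\n  console.log(entry.change) // 2\n})\n"
  },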
  {
    "path": "package.json",
    "content": "{\n  \"name\": \"hyperlog\",\n  \"version\": \"4.12.1\",\n  \"description\": \"Merkle DAG that replicates based on scuttlebutt logs and causal linking\",\n  \"main\": \"index.js\",\n  \"dependencies\": {\n    \"after-all\": \"^2.0.2\",\n    \"bitfield\": \"^1.1.2\",\n    \"brfs\": \"^1.4.0\",\n    \"cuid\": \"^1.2.5\",\n    \"debug\": \"^2.2.0\",\n    \"defined\": \"^1.0.0\",\n    \"duplexify\": \"^3.4.2\",\n    \"framed-hash\": \"^1.1.0\",\n    \"from2\": \"^2.1.0\",\n    \"length-prefixed-stream\": \"^1.3.0\",\n    \"level-enumerate\": \"^1.0.1\",\n    \"level-logs\": \"^1.1.0\",\n    \"lexicographic-integer\": \"^1.1.0\",\n    \"mutexify\": \"^1.1.0\",\n    \"protocol-buffers\": \"^3.1.2\",\n    \"pump\": \"^1.0.0\",\n    \"run-parallel\": \"^1.1.6\",\n    \"run-waterfall\": \"^1.1.3\",\n    \"stream-collector\": \"^1.0.1\",\n    \"through2\": \"^2.0.0\"\n  },\n  \"devDependencies\": {\n    \"bs58\": \"^3.0.0\",\n    \"memdb\": \"^1.0.1\",\n    \"standard\": \"^5.0.0\",\n    \"multihashing\": \"^0.2.0\",\n    \"tape\": \"^4.0.0\"\n  },\n  \"browserify\": {\n    \"transform\": [\n      \"brfs\"\n    ]\n  },\n  \"scripts\": {\n    \"test\": \"standard && tape test/*\"\n  },\n  \"repository\": {\n    \"type\": \"git\",\n    \"url\": \"https://github.com/mafintosh/hyperlog.git\"\n  },\n  \"author\": \"Mathias Buus (@mafintosh)\",\n  \"license\": \"MIT\",\n  \"bugs\": {\n    \"url\": \"https://github.com/mafintosh/hyperlog/issues\"\n  },\n  \"homepage\": \"https://github.com/mafintosh/hyperlog\"\n}\n"
  },
  {
    "path": "schema.proto",
    "content": "message Node {\n  required uint32 change   = 1;\n  required string key      = 2;\n  required string log      = 3;\n  optional uint32 seq      = 4;\n  optional bytes identity  = 7;\n  optional bytes signature = 8;\n  required bytes value     = 5;\n  repeated string links    = 6;\n}\n\nmessage Entry {\n  required uint32 change = 1;\n  required string node   = 2;\n  repeated string links  = 3;\n  optional string log = 4;\n  optional uint32 seq = 5;\n}\n\nmessage Log {\n  required string log = 1;\n  required uint32 seq = 2;\n}\n\nmessage Handshake {\n  required uint32 version = 1;\n  optional string mode    = 2 [default = \"sync\"];\n  optional bytes metadata = 3;\n  optional bool live      = 4;\n}\n"
  },
  {
    "path": "test/basic.js",
    "content": "var hyperlog = require('../')\nvar tape = require('tape')\nvar memdb = require('memdb')\nvar collect = require('stream-collector')\n\ntape('add node', function (t) {\n  var hyper = hyperlog(memdb())\n\n  hyper.add(null, 'hello world', function (err, node) {\n    t.error(err)\n    t.ok(node.key, 'has key')\n    t.same(node.links, [])\n    t.same(node.value, new Buffer('hello world'))\n    t.end()\n  })\n})\n\ntape('add node with links', function (t) {\n  var hyper = hyperlog(memdb())\n\n  hyper.add(null, 'hello', function (err, node) {\n    t.error(err)\n    hyper.add(node, 'world', function (err, node2) {\n      t.error(err)\n      t.ok(node2.key, 'has key')\n      t.same(node2.links, [node.key], 'has links')\n      t.same(node2.value, new Buffer('world'))\n      t.end()\n    })\n  })\n})\n\ntape('cannot add node with bad links', function (t) {\n  var hyper = hyperlog(memdb())\n\n  hyper.add('i-do-not-exist', 'hello world', function (err) {\n    t.ok(err, 'had error')\n    t.ok(err.notFound, 'not found error')\n    t.end()\n  })\n})\n\ntape('heads', function (t) {\n  var hyper = hyperlog(memdb())\n\n  hyper.heads(function (err, heads) {\n    t.error(err)\n    t.same(heads, [], 'no heads yet')\n    hyper.add(null, 'a', function (err, node) {\n      t.error(err)\n      hyper.heads(function (err, heads) {\n        t.error(err)\n        t.same(heads, [node], 'has head')\n        hyper.add(node, 'b', function (err, node2) {\n          t.error(err)\n          hyper.heads(function (err, heads) {\n            t.error(err)\n            t.same(heads, [node2], 'new heads')\n            t.end()\n          })\n        })\n      })\n    })\n  })\n})\n\ntape('deduplicates', function (t) {\n  var hyper = hyperlog(memdb())\n\n  hyper.add(null, 'hello world', function (err, node) {\n    t.error(err)\n    hyper.add(null, 'hello world', function (err, node) {\n      t.error(err)\n      collect(hyper.createReadStream(), function (err, changes) {\n        t.error(err)\n  
      t.same(changes.length, 1, 'only one change')\n        t.end()\n      })\n    })\n  })\n})\n\ntape('deduplicates -- same batch', function (t) {\n  var hyper = hyperlog(memdb())\n\n  var doc = { links: [], value: 'hello world' }\n\n  hyper.batch([doc, doc], function (err, nodes) {\n    t.error(err)\n    collect(hyper.createReadStream(), function (err, changes) {\n      t.error(err)\n      t.same(changes.length, 1, 'only one change')\n      t.same(hyper.changes, 1, 'only one change')\n      t.end()\n    })\n  })\n})\n\ntape('bug repro: bad insert links results in correct preadd/add/reject counts', function (t) {\n  var hyper = hyperlog(memdb())\n\n  var pending = 0\n  hyper.on('preadd', function (node) { pending++ })\n  hyper.on('add', function (node) { pending-- })\n  hyper.on('reject', function (node) { pending-- })\n\n  hyper.add(['123'], 'hello', function (err, node) {\n    t.ok(err)\n\n    t.equal(pending, 0)\n    t.end()\n  })\n})\n"
  },
  {
    "path": "test/batch.js",
    "content": "var hyperlog = require('../')\nvar tape = require('tape')\nvar memdb = require('memdb')\n\ntape('batch', function (t) {\n  t.plan(10)\n  var log = hyperlog(memdb(), { valueEncoding: 'utf8' })\n  log.add(null, 'A', function (err, node) {\n    t.error(err)\n    var ops = [\n      { links: [node.key], value: 'B' },\n      { links: [node.key], value: 'C' },\n      { links: [node.key], value: 'D' }\n    ]\n    log.batch(ops, function (err, nodes) {\n      t.error(err)\n      log.get(node.key, function (err, doc) {\n        t.error(err)\n        t.equal(doc.value, 'A')\n      })\n      log.get(nodes[0].key, function (err, doc) {\n        t.error(err)\n        t.equal(doc.value, 'B')\n      })\n      log.get(nodes[1].key, function (err, doc) {\n        t.error(err)\n        t.equal(doc.value, 'C')\n      })\n      log.get(nodes[2].key, function (err, doc) {\n        t.error(err)\n        t.equal(doc.value, 'D')\n      })\n    })\n  })\n})\n\ntape('batch dedupe', function (t) {\n  t.plan(6)\n\n  var doc1 = { links: [], value: 'hello world' }\n  var doc2 = { links: [], value: 'hello world 2' }\n\n  var hyper = hyperlog(memdb(), { valueEncoding: 'utf8' })\n\n  hyper.batch([doc1], function (err) {\n    t.error(err)\n    hyper.batch([doc2], function (err) {\n      t.error(err)\n      hyper.batch([doc1], function (err, nodes) {\n        t.error(err)\n        t.equal(hyper.changes, 2)\n        t.equal(nodes.length, 1)\n        t.equal(nodes[0].change, 1)\n      })\n    })\n  })\n})\n\ntape('batch dedupe 2', function (t) {\n  t.plan(4)\n\n  var doc1 = { links: [], value: 'hello world' }\n  var doc2 = { links: [], value: 'hello world 2' }\n\n  var hyper = hyperlog(memdb(), { valueEncoding: 'utf8' })\n\n  hyper.batch([doc1], function (err) {\n    t.error(err)\n    hyper.batch([doc2], function (err) {\n      t.error(err)\n      hyper.batch([doc2, doc1, doc2], function (err) {\n        t.error(err)\n        t.equal(hyper.changes, 2)\n      })\n    })\n  })\n})\n"
  },
  {
    "path": "test/changes.js",
    "content": "var hyperlog = require('../')\nvar tape = require('tape')\nvar memdb = require('memdb')\nvar collect = require('stream-collector')\n\ntape('changes', function (t) {\n  var hyper = hyperlog(memdb())\n\n  hyper.add(null, 'a', function (err, a) {\n    t.error(err)\n    hyper.add(null, 'b', function (err, b) {\n      t.error(err)\n      hyper.add(null, 'c', function (err, c) {\n        t.error(err)\n        collect(hyper.createReadStream(), function (err, changes) {\n          t.error(err)\n          t.same(changes, [a, b, c], 'has 3 changes')\n          t.end()\n        })\n      })\n    })\n  })\n})\n\ntape('changes since', function (t) {\n  var hyper = hyperlog(memdb())\n\n  hyper.add(null, 'a', function (err, a) {\n    t.error(err)\n    hyper.add(null, 'b', function (err, b) {\n      t.error(err)\n      hyper.add(null, 'c', function (err, c) {\n        t.error(err)\n        collect(hyper.createReadStream({since: 2}), function (err, changes) {\n          t.error(err)\n          t.same(changes, [c], 'has 1 change')\n          t.end()\n        })\n      })\n    })\n  })\n})\n\ntape('live changes', function (t) {\n  var hyper = hyperlog(memdb())\n  var expects = ['a', 'b', 'c']\n\n  hyper.createReadStream({live: true})\n    .on('data', function (data) {\n      var next = expects.shift()\n      t.same(data.value.toString(), next, 'was expected value')\n      if (!expects.length) t.end()\n    })\n\n  hyper.add(null, 'a', function () {\n    hyper.add(null, 'b', function () {\n      hyper.add(null, 'c')\n    })\n  })\n})\n\ntape('parallel add orders changes', function (t) {\n  var hyper = hyperlog(memdb())\n\n  var missing = 3\n  var values = {}\n  var done = function () {\n    if (--missing) return\n    collect(hyper.createReadStream(), function (err, changes) {\n      t.error(err)\n      changes.forEach(function (c, i) {\n        t.same(c.change, i + 1, 'correct change number')\n        values[c.value.toString()] = true\n      })\n      t.same(values, {a: 
true, b: true, c: true}, 'contains all values')\n      t.end()\n    })\n  }\n\n  hyper.add(null, 'a', done)\n  hyper.add(null, 'b', done)\n  hyper.add(null, 'c', done)\n})\n"
  },
  {
    "path": "test/encoding.js",
    "content": "var hyperlog = require('../')\nvar tape = require('tape')\nvar memdb = require('memdb')\nvar collect = require('stream-collector')\n\ntape('add node', function (t) {\n  var hyper = hyperlog(memdb(), { valueEncoding: 'json' })\n\n  hyper.add(null, { msg: 'hello world' }, function (err, node) {\n    t.error(err)\n    t.ok(node.key, 'has key')\n    t.same(node.links, [])\n    t.same(node.value, { msg: 'hello world' })\n    t.end()\n  })\n})\n\ntape('add node with encoding option', function (t) {\n  var hyper = hyperlog(memdb())\n\n  hyper.add(null, { msg: 'hello world' }, { valueEncoding: 'json' },\n  function (err, node) {\n    t.error(err)\n    t.ok(node.key, 'has key')\n    t.same(node.links, [])\n    t.same(node.value, { msg: 'hello world' })\n    t.end()\n  })\n})\n\ntape('append node', function (t) {\n  var hyper = hyperlog(memdb(), { valueEncoding: 'json' })\n\n  hyper.append({ msg: 'hello world' }, function (err, node) {\n    t.error(err)\n    t.ok(node.key, 'has key')\n    t.same(node.links, [])\n    t.same(node.value, { msg: 'hello world' })\n    t.end()\n  })\n})\n\ntape('append node with encoding option', function (t) {\n  var hyper = hyperlog(memdb())\n\n  hyper.append({ msg: 'hello world' }, { valueEncoding: 'json' }, function (err, node) {\n    t.error(err)\n    t.ok(node.key, 'has key')\n    t.same(node.links, [])\n    t.same(node.value, { msg: 'hello world' })\n    t.end()\n  })\n})\n\ntape('add node with links', function (t) {\n  var hyper = hyperlog(memdb(), { valueEncoding: 'json' })\n\n  hyper.add(null, { msg: 'hello' }, function (err, node) {\n    t.error(err)\n    hyper.add(node, { msg: 'world' }, function (err, node2) {\n      t.error(err)\n      t.ok(node2.key, 'has key')\n      t.same(node2.links, [node.key], 'has links')\n      t.same(node2.value, { msg: 'world' })\n      t.end()\n    })\n  })\n})\n\ntape('cannot add node with bad links', function (t) {\n  var hyper = hyperlog(memdb(), { valueEncoding: 'json' })\n\n  
hyper.add('i-do-not-exist', { msg: 'hello world' }, function (err) {\n    t.ok(err, 'had error')\n    t.ok(err.notFound, 'not found error')\n    t.end()\n  })\n})\n\ntape('heads', function (t) {\n  var hyper = hyperlog(memdb(), { valueEncoding: 'json' })\n\n  hyper.heads(function (err, heads) {\n    t.error(err)\n    t.same(heads, [], 'no heads yet')\n    hyper.add(null, 'a', function (err, node) {\n      t.error(err)\n      hyper.heads(function (err, heads) {\n        t.error(err)\n        t.same(heads, [node], 'has head')\n        hyper.add(node, 'b', function (err, node2) {\n          t.error(err)\n          hyper.heads(function (err, heads) {\n            t.error(err)\n            t.same(heads, [node2], 'new heads')\n            t.end()\n          })\n        })\n      })\n    })\n  })\n})\n\ntape('heads with encoding option', function (t) {\n  var hyper = hyperlog(memdb())\n\n  hyper.heads({ valueEncoding: 'json' }, function (err, heads) {\n    t.error(err)\n    t.same(heads, [], 'no heads yet')\n    hyper.add(null, 'a', { valueEncoding: 'json' }, function (err, node) {\n      t.error(err)\n      hyper.heads({ valueEncoding: 'json' }, function (err, heads) {\n        t.error(err)\n        t.same(heads, [node], 'has head')\n        hyper.add(node, 'b', { valueEncoding: 'json' }, function (err, node2) {\n          t.error(err)\n          hyper.heads({ valueEncoding: 'json' }, function (err, heads) {\n            t.error(err)\n            t.same(heads, [node2], 'new heads')\n            t.end()\n          })\n        })\n      })\n    })\n  })\n})\n\ntape('get', function (t) {\n  var hyper = hyperlog(memdb(), { valueEncoding: 'json' })\n\n  hyper.add(null, { msg: 'hello world' }, function (err, node) {\n    t.error(err)\n    t.ok(node.key, 'has key')\n    t.same(node.links, [])\n    t.same(node.value, { msg: 'hello world' })\n    hyper.get(node.key, function (err, node2) {\n      t.ifError(err)\n      t.same(node2.value, { msg: 'hello world' })\n      t.end()\n   
 })\n  })\n})\n\ntape('get with encoding option', function (t) {\n  var hyper = hyperlog(memdb())\n\n  hyper.add(null, { msg: 'hello world' }, { valueEncoding: 'json' }, function (err, node) {\n    t.error(err)\n    t.ok(node.key, 'has key')\n    t.same(node.links, [])\n    t.same(node.value, { msg: 'hello world' })\n    hyper.get(node.key, { valueEncoding: 'json' }, function (err, node2) {\n      t.ifError(err)\n      t.same(node2.value, { msg: 'hello world' })\n      t.end()\n    })\n  })\n})\n\ntape('deduplicates', function (t) {\n  var hyper = hyperlog(memdb(), { valueEncoding: 'json' })\n\n  hyper.add(null, { msg: 'hello world' }, function (err, node) {\n    t.error(err)\n    hyper.add(null, { msg: 'hello world' }, function (err, node) {\n      t.error(err)\n      collect(hyper.createReadStream(), function (err, changes) {\n        t.error(err)\n        t.same(changes.length, 1, 'only one change')\n        t.end()\n      })\n    })\n  })\n})\n\ntape('live replication encoding', function (t) {\n  t.plan(2)\n  var h0 = hyperlog(memdb(), { valueEncoding: 'json' })\n  var h1 = hyperlog(memdb(), { valueEncoding: 'json' })\n  h1.createReadStream({ live: true })\n    .on('data', function (data) {\n      t.deepEqual(data.value, { msg: 'hello world' })\n    })\n\n  var r0 = h0.replicate({ live: true })\n  var r1 = h1.replicate({ live: true })\n\n  h0.add(null, { msg: 'hello world' }, function (err, node) {\n    t.error(err)\n    r0.pipe(r1).pipe(r0)\n  })\n})\n"
  },
  {
    "path": "test/events.js",
    "content": "var hyperlog = require('../')\nvar tape = require('tape')\nvar memdb = require('memdb')\n\ntape('add and preadd events', function (t) {\n  t.plan(13)\n  var hyper = hyperlog(memdb())\n  var expected = [ 'hello', 'world' ]\n  var expectedPre = [ 'hello', 'world' ]\n  var order = []\n\n  hyper.on('add', function (node) {\n    // at this point, the event has already been added\n    t.equal(node.value.toString(), expected.shift())\n    order.push('add ' + node.value)\n  })\n  hyper.on('preadd', function (node) {\n    t.equal(node.value.toString(), expectedPre.shift())\n    order.push('preadd ' + node.value)\n    hyper.get(node.key, function (err) {\n      t.ok(err.notFound)\n    })\n  })\n  hyper.add(null, 'hello', function (err, node) {\n    t.error(err)\n    hyper.add(node, 'world', function (err, node2) {\n      t.error(err)\n      t.ok(node2.key, 'has key')\n      t.same(node2.links, [node.key], 'has links')\n      t.same(node2.value, new Buffer('world'))\n      t.deepEqual(order, [\n        'preadd hello',\n        'add hello',\n        'preadd world',\n        'add world'\n      ], 'order')\n    })\n  })\n  t.deepEqual(order, ['preadd hello'])\n})\n"
  },
  {
    "path": "test/hash.js",
    "content": "var hyperlog = require('../')\nvar tape = require('tape')\nvar memdb = require('memdb')\nvar framedHash = require('framed-hash')\nvar multihashing = require('multihashing')\nvar base58 = require('bs58')\n\nvar sha1 = function (links, value) {\n  var hash = framedHash('sha1')\n  for (var i = 0; i < links.length; i++) hash.update(links[i])\n  hash.update(value)\n  return hash.digest('hex')\n}\n\nvar asyncSha2 = function (links, value, cb) {\n  process.nextTick(function () {\n    var prevalue = value.toString()\n    links.forEach(function (link) { prevalue += link })\n    var result = base58.encode(multihashing(prevalue, 'sha2-256'))\n    cb(null, result)\n  })\n}\n\ntape('add node using sha1', function (t) {\n  var hyper = hyperlog(memdb(), {\n    hash: sha1\n  })\n\n  hyper.add(null, 'hello world', function (err, node) {\n    t.error(err)\n    t.same(node.key, '99cf70777a24b574b8fb5b3173cd4073f02098b0')\n    t.end()\n  })\n})\n\ntape('add node with links using sha1', function (t) {\n  var hyper = hyperlog(memdb(), {\n    hash: sha1\n  })\n\n  hyper.add(null, 'hello', function (err, node) {\n    t.error(err)\n    t.same(node.key, '445198669b880239a7e64247ed303066b398678b')\n    hyper.add(node, 'world', function (err, node2) {\n      t.error(err)\n      t.same(node2.key, '1d95837842db3995fb3e77ed070457eb4f9875bc')\n      t.end()\n    })\n  })\n})\n\ntape('add node using async multihash', function (t) {\n  var hyper = hyperlog(memdb(), {\n    asyncHash: asyncSha2\n  })\n\n  hyper.add(null, 'hello world', function (err, node) {\n    t.error(err)\n    t.same(node.key, 'QmaozNR7DZHQK1ZcU9p7QdrshMvXqWK6gpu5rmrkPdT3L4')\n    t.end()\n  })\n})\n\ntape('add node with links using async multihash', function (t) {\n  var hyper = hyperlog(memdb(), {\n    asyncHash: asyncSha2\n  })\n\n  hyper.add(null, 'hello', function (err, node) {\n    t.error(err)\n    t.same(node.key, 'QmRN6wdp1S2A5EtjW9A3M1vKSBuQQGcgvuhoMUoEz4iiT5')\n    hyper.add(node, 'world', function 
(err, node2) {\n      t.error(err)\n      t.same(node2.key, 'QmVeZeqV6sbzeDyzhxFHwBLddaQzUELCxLjrQVzfBuDrt8')\n      hyper.add([node, node2], '!!!', function (err, node3) {\n        t.error(err)\n        t.same(node3.key, 'QmNs89mwydjboQGpvcK2F3hyKjSmdqQTqDWmRMsAQnL4ZU')\n        t.end()\n      })\n    })\n  })\n})\n\ntape('preadd event with async hash', function (t) {\n  var hyper = hyperlog(memdb(), {\n    asyncHash: asyncSha2\n  })\n\n  var prenode = null\n  hyper.on('preadd', function (node) {\n    prenode = node\n  })\n\n  hyper.add(null, 'hello world', function (err, node) {\n    t.error(err)\n    t.same(node.key, 'QmaozNR7DZHQK1ZcU9p7QdrshMvXqWK6gpu5rmrkPdT3L4')\n    t.end()\n  })\n  t.equal(prenode.key, null)\n})\n"
  },
  {
    "path": "test/replicate.js",
    "content": "var hyperlog = require('../')\nvar tape = require('tape')\nvar memdb = require('memdb')\nvar pump = require('pump')\nvar through = require('through2')\n\nvar sync = function (a, b, cb) {\n  var stream = a.replicate()\n  pump(stream, b.replicate(), stream, cb)\n}\n\nvar toJSON = function (log, cb) {\n  var map = {}\n  log.createReadStream()\n    .on('data', function (node) {\n      map[node.key] = {value: node.value, links: node.links}\n    })\n    .on('end', function () {\n      cb(null, map)\n    })\n}\n\ntape('clones', function (t) {\n  var hyper = hyperlog(memdb())\n  var clone = hyperlog(memdb())\n\n  hyper.add(null, 'a', function () {\n    hyper.add(null, 'b', function () {\n      hyper.add(null, 'c', function () {\n        sync(hyper, clone, function (err) {\n          t.error(err)\n          toJSON(clone, function (err, map1) {\n            t.error(err)\n            toJSON(hyper, function (err, map2) {\n              t.error(err)\n              t.same(map1, map2, 'logs are synced')\n              t.end()\n            })\n          })\n        })\n      })\n    })\n  })\n})\n\ntape('clones with valueEncoding', function (t) {\n  var hyper = hyperlog(memdb(), {valueEncoding: 'json'})\n  var clone = hyperlog(memdb(), {valueEncoding: 'json'})\n\n  hyper.add(null, 'a', function () {\n    hyper.add(null, 'b', function () {\n      hyper.add(null, 'c', function () {\n        sync(hyper, clone, function (err) {\n          t.error(err)\n          toJSON(clone, function (err, map1) {\n            t.error(err)\n            toJSON(hyper, function (err, map2) {\n              t.error(err)\n              t.same(map1, map2, 'logs are synced')\n              t.end()\n            })\n          })\n        })\n      })\n    })\n  })\n})\n\ntape('syncs with initial subset', function (t) {\n  var hyper = hyperlog(memdb())\n  var clone = hyperlog(memdb())\n\n  clone.add(null, 'a', function () {\n    hyper.add(null, 'a', function () {\n      hyper.add(null, 'b', 
function () {\n        hyper.add(null, 'c', function () {\n          sync(hyper, clone, function (err) {\n            t.error(err)\n            toJSON(clone, function (err, map1) {\n              t.error(err)\n              toJSON(hyper, function (err, map2) {\n                t.error(err)\n                t.same(map1, map2, 'logs are synced')\n                t.end()\n              })\n            })\n          })\n        })\n      })\n    })\n  })\n})\n\ntape('syncs with initial superset', function (t) {\n  var hyper = hyperlog(memdb())\n  var clone = hyperlog(memdb())\n\n  clone.add(null, 'd', function () {\n    hyper.add(null, 'a', function () {\n      hyper.add(null, 'b', function () {\n        hyper.add(null, 'c', function () {\n          sync(hyper, clone, function (err) {\n            t.error(err)\n            toJSON(clone, function (err, map1) {\n              t.error(err)\n              toJSON(hyper, function (err, map2) {\n                t.error(err)\n                t.same(map1, map2, 'logs are synced')\n                t.end()\n              })\n            })\n          })\n        })\n      })\n    })\n  })\n})\n\ntape('process', function (t) {\n  var hyper = hyperlog(memdb())\n  var clone = hyperlog(memdb())\n\n  var process = function (node, enc, cb) {\n    setImmediate(function () {\n      cb(null, node)\n    })\n  }\n\n  hyper.add(null, 'a', function () {\n    hyper.add(null, 'b', function () {\n      hyper.add(null, 'c', function () {\n        var stream = hyper.replicate()\n        pump(stream, clone.replicate({process: through.obj(process)}), stream, function () {\n          toJSON(clone, function (err, map1) {\n            t.error(err)\n            toJSON(hyper, function (err, map2) {\n              t.error(err)\n              t.same(map1, map2, 'logs are synced')\n              t.end()\n            })\n          })\n        })\n      })\n    })\n  })\n})\n\n// bugfix: previously replication would not terminate\ntape('shared history with 
duplicates', function (t) {\n  var hyper1 = hyperlog(memdb())\n  var hyper2 = hyperlog(memdb())\n\n  var doc1 = { links: [], value: 'a' }\n  var doc2 = { links: [], value: 'b' }\n\n  hyper1.batch([doc1], function (err) {\n    t.error(err)\n    sync(hyper1, hyper2, function (err) {\n      t.error(err)\n      hyper2.batch([doc1, doc2], function (err, nodes) {\n        t.error(err)\n        t.equals(nodes[0].change, 1)\n        t.equals(nodes[1].change, 2)\n        sync(hyper1, hyper2, function (err) {\n          t.error(err)\n          t.end()\n        })\n      })\n    })\n  })\n})\n"
  },
  {
    "path": "test/signatures.js",
    "content": "var hyperlog = require('../')\nvar tape = require('tape')\nvar memdb = require('memdb')\n\ntape('sign', function (t) {\n  t.plan(4)\n\n  var log = hyperlog(memdb(), {\n    identity: new Buffer('i-am-a-public-key'),\n    sign: function (node, cb) {\n      t.same(node.value, new Buffer('hello'), 'sign is called')\n      cb(null, new Buffer('i-am-a-signature'))\n    }\n  })\n\n  log.add(null, 'hello', function (err, node) {\n    t.error(err, 'no err')\n    t.same(node.signature, new Buffer('i-am-a-signature'), 'has signature')\n    t.same(node.identity, new Buffer('i-am-a-public-key'), 'has public key')\n    t.end()\n  })\n})\n\ntape('sign fails', function (t) {\n  t.plan(2)\n\n  var log = hyperlog(memdb(), {\n    identity: new Buffer('i-am-a-public-key'),\n    sign: function (node, cb) {\n      cb(new Error('lol'))\n    }\n  })\n\n  log.on('reject', function (node) {\n    t.ok(node)\n  })\n\n  log.add(null, 'hello', function (err) {\n    t.same(err && err.message, 'lol', 'had error')\n  })\n})\n\ntape('verify', function (t) {\n  t.plan(3)\n\n  var log1 = hyperlog(memdb(), {\n    identity: new Buffer('i-am-a-public-key'),\n    sign: function (node, cb) {\n      cb(null, new Buffer('i-am-a-signature'))\n    }\n  })\n\n  var log2 = hyperlog(memdb(), {\n    verify: function (node, cb) {\n      t.same(node.signature, new Buffer('i-am-a-signature'), 'verify called with signature')\n      t.same(node.identity, new Buffer('i-am-a-public-key'), 'verify called with public key')\n      cb(null, true)\n    }\n  })\n\n  log1.add(null, 'hello', function (err, node) {\n    t.error(err, 'no err')\n    var stream = log2.replicate()\n    stream.pipe(log1.replicate()).pipe(stream)\n  })\n})\n\ntape('verify fails', function (t) {\n  t.plan(2)\n\n  var log1 = hyperlog(memdb(), {\n    identity: new Buffer('i-am-a-public-key'),\n    sign: function (node, cb) {\n      cb(null, new Buffer('i-am-a-signature'))\n    }\n  })\n\n  var log2 = hyperlog(memdb(), {\n    verify: 
function (node, cb) {\n      cb(null, false)\n    }\n  })\n\n  log1.add(null, 'hello', function (err, node) {\n    t.error(err, 'no err')\n\n    var stream = log2.replicate()\n\n    stream.on('error', function (err) {\n      t.same(err.message, 'Invalid signature', 'stream had error')\n      t.end()\n    })\n    stream.pipe(log1.replicate()).pipe(stream)\n  })\n})\n\ntape('per-document identity (add)', function (t) {\n  t.plan(3)\n\n  var log1 = hyperlog(memdb(), {\n    sign: function (node, cb) {\n      cb(null, new Buffer('i-am-a-signature'))\n    }\n  })\n\n  var log2 = hyperlog(memdb(), {\n    verify: function (node, cb) {\n      t.same(node.signature, new Buffer('i-am-a-signature'), 'verify called with signature')\n      t.same(node.identity, new Buffer('i-am-a-public-key'), 'verify called with public key')\n      cb(null, true)\n    }\n  })\n\n  var opts = { identity: new Buffer('i-am-a-public-key') }\n  log1.add(null, 'hello', opts, function (err, node) {\n    t.error(err, 'no err')\n    var stream = log2.replicate()\n    stream.pipe(log1.replicate()).pipe(stream)\n  })\n})\n\ntape('per-document identity (batch)', function (t) {\n  t.plan(5)\n\n  var log1 = hyperlog(memdb(), {\n    sign: function (node, cb) {\n      cb(null, new Buffer('i-am-a-signature'))\n    }\n  })\n\n  var expectedpk = [ new Buffer('hello id'), new Buffer('whatever id') ]\n  var log2 = hyperlog(memdb(), {\n    verify: function (node, cb) {\n      t.same(node.signature, new Buffer('i-am-a-signature'), 'verify called with signature')\n      t.same(node.identity, expectedpk.shift(), 'verify called with public key')\n      cb(null, true)\n    }\n  })\n\n  log1.batch([\n    {\n      value: 'hello',\n      identity: new Buffer('hello id')\n    },\n    {\n      value: 'whatever',\n      identity: new Buffer('whatever id')\n    }\n  ], function (err, nodes) {\n    t.error(err, 'no err')\n    var stream = log2.replicate()\n    stream.pipe(log1.replicate()).pipe(stream)\n  })\n})\n"
  }
]