Repository: mafintosh/hyperlog Branch: master Commit: df15dc61e913 Files: 23 Total size: 64.3 KB Directory structure: gitextract_k5ec7sld/ ├── .gitignore ├── .travis.yml ├── LICENSE ├── README.md ├── example/ │ ├── log.js │ └── signed.js ├── index.js ├── lib/ │ ├── encode.js │ ├── hash.js │ ├── messages.js │ ├── protocol.js │ ├── replicate.js │ └── sorted-queue.js ├── package.json ├── schema.proto └── test/ ├── basic.js ├── batch.js ├── changes.js ├── encoding.js ├── events.js ├── hash.js ├── replicate.js └── signatures.js ================================================ FILE CONTENTS ================================================ ================================================ FILE: .gitignore ================================================ node_modules db tmp bench 2 ================================================ FILE: .travis.yml ================================================ language: node_js node_js: - "0.10" - "0.12" - "4" - "6" ================================================ FILE: LICENSE ================================================ The MIT License (MIT) Copyright (c) 2015 Mathias Buus Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ================================================ FILE: README.md ================================================ # hyperlog [Merkle DAG](https://github.com/jbenet/random-ideas/issues/20) that replicates based on scuttlebutt logs and causal linking ``` npm install hyperlog ``` [![build status](http://img.shields.io/travis/mafintosh/hyperlog.svg?style=flat)](http://travis-ci.org/mafintosh/hyperlog) ![dat](http://img.shields.io/badge/Development%20sponsored%20by-dat-green.svg?style=flat) ``` js var hyperlog = require('hyperlog') var log = hyperlog(db) // where db is a levelup instance // add a node with value 'hello' and no links log.add(null, 'hello', function(err, node) { console.log('inserted node', node) // insert 'world' with a link back to the above node log.add([node.key], 'world', function(err, node) { console.log('inserted new node', node) }) }) ``` ## Replicate graph To replicate this log with another one simply use `log.replicate()` and pipe it together with a replication stream from another log. ``` js var l1 = hyperlog(db1) var l2 = hyperlog(db2) var s1 = l1.replicate() var s2 = l2.replicate() s1.pipe(s2).pipe(s1) s1.on('end', function() { console.log('replication ended') }) ``` A detailed write-up on how this replication protocol works will be added to this repo in the near future. For now see the source code. ## API #### `log = hyperlog(db, opts={})` Create a new log instance. Valid keys for `opts` include: - `id` - some (ideally globally unique) string identifier for the log. - `valueEncoding` - a [levelup-style](https://github.com/Level/levelup#options) encoding string or object (e.g. `"json"`) - `hash(links, value)` - a hash function that runs synchronously. Defaults to a SHA-256 implementation. 
- `asyncHash(links, value, cb)` - an asynchronous hash function with node-style callback (`cb(err, hash)`). - `identity`, `sign`, `verify` - values for creating a cryptographically signed feed. See below. You can also pass in an `identity` and `sign`/`verify` functions which can be used to create a signed log: ``` js { identity: aPublicKeyBuffer, // will be added to all nodes you insert sign: function (node, cb) { // will be called with all nodes you add var signatureBuffer = someCrypto.sign(node.key, mySecretKey) cb(null, signatureBuffer) }, verify: function (node, cb) { // will be called with all nodes you receive if (!node.signature) return cb(null, false) cb(null, someCrypto.verify(node.key, node.signature, node.identity)) } } ``` #### `log.add(links, value, opts={}, [cb])` Add a new node to the graph. `links` should be an array of node keys that this node links to. If it doesn't link to any nodes, use `null` or an empty array. `value` is the value that you want to store in the node. This should be a string or a buffer. The callback is called with the inserted node: ``` js log.add([link], value, function(err, node) { // node looks like this { change: ... // the change number for this node in the local log key: ... // the hash of the node. this is also the key of the node value: ... // the value (as the valueEncoding type, default buffer) you inserted log: ... // the peer log this node was appended to seq: ... // the peer log seq number links: ['hash-of-link-1', ...] } }) ``` Optionally supply an `opts.valueEncoding`. #### `log.append(value, opts={}, [cb])` Add a value that links all the current heads. Optionally supply an `opts.valueEncoding`. #### `log.batch(docs, opts={}, [cb])` Add many documents atomically to the log at once: either all the docs are inserted successfully or nothing is inserted. `docs` is an array of objects where each object looks like: ``` js { links: [...] // array of ancestor node keys value: ...
// the value to insert } ``` The callback `cb(err, nodes)` is called with an array of `nodes`. Each `node` is of the form described in the `log.add()` section. You may specify an `opts.valueEncoding`. #### `log.get(hash, opts={}, cb)` Look up a node by its hash. Returns a node similar to `.add` above. Optionally supply an `opts.valueEncoding`. #### `log.heads(opts={}, cb)` Get the heads of the graph as a list. A head is a node that no other node links to. ``` js log.heads(function(err, heads) { console.log(heads) // prints an array of nodes }) ``` The method also returns a stream of heads, which is useful if, for some reason, your graph has A LOT of heads: ``` js var headsStream = log.heads() headsStream.on('data', function(node) { console.log('head:', node) }) headsStream.on('end', function() { console.log('(no more heads)') }) ``` Optionally supply an `opts.valueEncoding`. #### `changesStream = log.createReadStream([options])` Tail the changes feed from the log. Every time you add a node to the graph, the changes feed is updated with that node. ``` js var changesStream = log.createReadStream({live:true}) changesStream.on('data', function(node) { console.log('change:', node) }) ``` Options include: ``` js { since: changeNumber // only returns changes AFTER the number live: false // never close the change stream tail: false // since = lastChange limit: number // only return up to `limit` changes until: number // (for non-live streams) only returns changes BEFORE the number valueEncoding: 'binary' } ``` #### `replicationStream = log.replicate([options])` Replicate the log to another one using a replication stream. Simply pipe your replication stream together with another log's replication stream.
``` js var l1 = hyperlog(db1) var l2 = hyperlog(db2) var s1 = l1.createReplicationStream() var s2 = l2.createReplicationStream() s1.pipe(s2).pipe(s1) s1.on('end', function() { console.log('replication ended') }) ``` Options include: ``` js { mode: 'push' | 'pull' | 'sync', // set replication mode. defaults to sync live: true, // keep the replication stream open. defaults to false metadata: someBuffer, // send optional metadata as part of the handshake frame: true // frame the data with length prefixes. defaults to true } ``` If you send `metadata` it will be emitted as a `metadata` event on the stream. A detailed write-up on how the graph replicates will be added later. #### log.on('preadd', function (node) {}) On the same tick as `log.add()` is called, this event fires with the `node` about to be inserted into the log. At this stage of the add process, `node` has these properties: * `node.log` * `node.key` * `node.value` * `node.links` #### log.on('add', function (node) {}) After a node has been successfully added to the log, this event fires with the full `node` object that the callback to `.add()` gets. #### log.on('reject', function (node) {}) When a node is rejected, this event fires. Otherwise the `add` event will fire. You can track `preadd` events against both `add` and `reject` events in combination to know when the log is completely caught up. ## Hyperlog Hygiene A hyperlog will refer to potentially *many* different logs as it replicates with others, each with its own ID. Bear in mind that each hyperlog's underlying leveldb contains a notion of what its *own* local ID is. If you make a copy of a hyperlog's leveldb and write different data to each copy, the results are unpredictable and likely disastrous. Only ever use the included replication mechanism for making hyperlog copies!
## License MIT ================================================ FILE: example/log.js ================================================ var hyperlog = require('../') var memdb = require('memdb') var log = hyperlog(memdb()) var clone = hyperlog(memdb()) var sync = function (a, b) { a = a.createReplicationStream({mode: 'push'}) b = b.createReplicationStream({mode: 'pull'}) a.on('push', function () { console.log('a pushed') }) a.on('pull', function () { console.log('a pulled') }) a.on('end', function () { console.log('a ended') }) b.on('push', function () { console.log('b pushed') }) b.on('pull', function () { console.log('b pulled') }) b.on('end', function () { console.log('b ended') }) a.pipe(b).pipe(a) } clone.createReadStream({live: true}).on('data', function (data) { console.log('change: (%d) %s', data.change, data.key) }) log.add(null, 'hello', function (err, node) { if (err) throw err log.add(node, 'world', function (err, node) { if (err) throw err sync(log, clone) log.add(null, 'meh') }) }) ================================================ FILE: example/signed.js ================================================ var hyperlog = require('../') var memdb = require('memdb') var sodium = require('sodium').api var eq = require('buffer-equals') var keys = sodium.crypto_sign_keypair() var log = hyperlog(memdb(), { identity: keys.publicKey, sign: function (node, cb) { var bkey = Buffer(node.key, 'hex') cb(null, sodium.crypto_sign(bkey, keys.secretKey)) } }) var clone = hyperlog(memdb(), { verify: function (node, cb) { if (!node.signature) return cb(null, false) if (!eq(node.identity, keys.publicKey)) return cb(null, false) var bkey = Buffer(node.key, 'hex') var m = sodium.crypto_sign_open(node.signature, node.identity) cb(null, eq(m, bkey)) } }) var sync = function (a, b) { a = a.createReplicationStream({mode: 'push'}) b = b.createReplicationStream({mode: 'pull'}) a.on('push', function () { console.log('a pushed') }) a.on('pull', function () { console.log('a pulled') }) 
a.on('end', function () { console.log('a ended') }) b.on('push', function () { console.log('b pushed') }) b.on('pull', function () { console.log('b pulled') }) b.on('end', function () { console.log('b ended') }) a.pipe(b).pipe(a) } clone.createReadStream({live: true}).on('data', function (data) { console.log('change: (%d) %s', data.change, data.key) }) log.add(null, 'hello', function (err, node) { if (err) throw err log.add(node, 'world', function (err, node) { if (err) throw err sync(log, clone) log.add(null, 'meh') }) }) ================================================ FILE: index.js ================================================ var after = require('after-all') var lexint = require('lexicographic-integer') var collect = require('stream-collector') var through = require('through2') var pump = require('pump') var from = require('from2') var mutexify = require('mutexify') var cuid = require('cuid') var logs = require('level-logs') var events = require('events') var util = require('util') var enumerate = require('level-enumerate') var replicate = require('./lib/replicate') var messages = require('./lib/messages') var hash = require('./lib/hash') var encoder = require('./lib/encode') var defined = require('defined') var parallel = require('run-parallel') var waterfall = require('run-waterfall') var ID = '!!id' var CHANGES = '!changes!' var NODES = '!nodes!' var HEADS = '!heads!' 
var INVALID_SIGNATURE = new Error('Invalid signature') var CHECKSUM_MISMATCH = new Error('Checksum mismatch') var INVALID_LOG = new Error('Invalid log sequence') INVALID_LOG.notFound = true INVALID_LOG.status = 404 var noop = function () {} var Hyperlog = function (db, opts) { if (!(this instanceof Hyperlog)) return new Hyperlog(db, opts) if (!opts) opts = {} events.EventEmitter.call(this) this.id = defined(opts.id, null) this.enumerate = enumerate(db, {prefix: 'enum'}) this.db = db this.logs = logs(db, {prefix: 'logs', valueEncoding: messages.Entry}) this.lock = defined(opts.lock, mutexify()) this.changes = 0 this.setMaxListeners(0) this.valueEncoding = defined(opts.valueEncoding, opts.encoding, 'binary') this.identity = defined(opts.identity, null) this.verify = defined(opts.verify, null) this.sign = defined(opts.sign, null) this.hash = defined(opts.hash, hash) this.asyncHash = defined(opts.asyncHash, null) // Retrieve this hyperlog instance's unique ID. var self = this var getId = defined(opts.getId, function (cb) { db.get(ID, {valueEncoding: 'utf-8'}, function (_, id) { if (id) return cb(null, id) id = cuid() db.put(ID, id, function () { cb(null, id) }) }) }) // Startup logic to.. // 1. Determine & record the largest change # in the db. // 2. Determine this hyperlog db's local ID. // // This is behind a lock in order to ensure that no hyperlog operations // can be performed -- these two values MUST be known before any // hyperlog usage may occur. this.lock(function (release) { collect(db.createKeyStream({gt: CHANGES, lt: CHANGES + '~', reverse: true, limit: 1}), function (_, keys) { self.changes = Math.max(self.changes, keys && keys.length ? lexint.unpack(keys[0].split('!').pop(), 'hex') : 0) if (self.id) return release() getId(function (_, id) { self.id = id || cuid() release() }) }) }) } util.inherits(Hyperlog, events.EventEmitter) // Call callback 'cb' once the hyperlog is ready for use (knows some // fundamental properties about itself from the leveldb). 
If it's already // ready, cb is called immediately. Hyperlog.prototype.ready = function (cb) { if (this.id) return cb() this.lock(function (release) { release() cb() }) } // Returns a readable stream of all hyperlog heads. That is, all nodes that no // other nodes link to. Hyperlog.prototype.heads = function (opts, cb) { var self = this if (!opts) opts = {} if (typeof opts === 'function') { cb = opts opts = {} } var rs = this.db.createValueStream({ gt: HEADS, lt: HEADS + '~', valueEncoding: 'utf-8' }) var format = through.obj(function (key, enc, cb) { self.get(key, opts, cb) }) return collect(pump(rs, format), cb) } // Retrieve a single, specific node, given its key. Hyperlog.prototype.get = function (key, opts, cb) { if (!opts) opts = {} if (typeof opts === 'function') { cb = opts opts = {} } var self = this this.db.get(NODES + key, {valueEncoding: 'binary'}, function (err, buf) { if (err) return cb(err) var node = messages.Node.decode(buf) node.value = encoder.decode(node.value, opts.valueEncoding || self.valueEncoding) cb(null, node) }) } // Utility function to be used in a nodes.reduce() to determine the largest // change # present. var maxChange = function (max, cur) { return Math.max(max, cur.change) } // Consumes either a string or a hyperlog node and returns its key. var toKey = function (link) { return typeof link !== 'string' ? link.key : link } // Adds a new hyperlog node to an existing array of leveldb batch insertions. // This includes performing crypto signing and verification. // Performs deduplication; returns the existing node if already present in the hyperlog.
var addBatchAndDedupe = function (dag, node, logLinks, batch, opts, cb) { if (opts.hash && node.key !== opts.hash) return cb(CHECKSUM_MISMATCH) if (opts.seq && node.seq !== opts.seq) return cb(INVALID_LOG) var log = { change: node.change, node: node.key, links: logLinks } var onclone = function (clone) { if (!opts.log) return cb(null, clone, []) batch.push({type: 'put', key: dag.logs.key(node.log, node.seq), value: messages.Entry.encode(log)}) cb(null, clone) } var done = function () { dag.get(node.key, { valueEncoding: 'binary' }, function (_, clone) { // This node already exists somewhere in the hyperlog; add it to the // log's append-only log, but don't insert it again. if (clone) return onclone(clone) var links = node.links for (var i = 0; i < links.length; i++) batch.push({type: 'del', key: HEADS + links[i]}) batch.push({type: 'put', key: CHANGES + lexint.pack(node.change, 'hex'), value: node.key}) batch.push({type: 'put', key: NODES + node.key, value: messages.Node.encode(node)}) batch.push({type: 'put', key: HEADS + node.key, value: node.key}) batch.push({type: 'put', key: dag.logs.key(node.log, node.seq), value: messages.Entry.encode(log)}) cb(null, node) }) } // Local node; sign it. if (node.log === dag.id) { if (!dag.sign || node.signature) return done() dag.sign(node, function (err, sig) { if (err) return cb(err) if (!node.identity) node.identity = dag.identity node.signature = sig done() }) // Remote node; verify it. 
} else { if (!dag.verify) return done() dag.verify(node, function (err, valid) { if (err) return cb(err) if (!valid) return cb(INVALID_SIGNATURE) done() }) } } var getLinks = function (dag, id, links, cb) { var logLinks = [] var nextLink = function () { var cb = next() return function (err, link) { if (err) return cb(err) if (link.log !== id && logLinks.indexOf(link.log) === -1) logLinks.push(link.log) cb(null) } } var next = after(function (err) { if (err) cb(err) else cb(null, logLinks) }) for (var i = 0; i < links.length; i++) { dag.get(links[i], nextLink()) } } // Produce a readable stream of all nodes added from this point onward, in // topological order. var createLiveStream = function (dag, opts) { var since = opts.since || 0 var limit = opts.limit || -1 var wait = null var read = function (size, cb) { if (dag.changes <= since) { wait = cb return } if (!limit) return cb(null, null) dag.db.get(CHANGES + lexint.pack(since + 1, 'hex'), function (err, hash) { if (err) return cb(err) dag.get(hash, opts, function (err, node) { if (err) return cb(err) since = node.change if (limit !== -1) limit-- cb(null, node) }) }) } var kick = function () { if (!wait) return var cb = wait wait = null read(0, cb) } dag.on('add', kick) dag.ready(kick) var rs = from.obj(read) rs.once('close', function () { dag.removeListener('add', kick) }) return rs } // Produce a readable stream of nodes in the hyperlog, in topological order. Hyperlog.prototype.createReadStream = function (opts) { if (!opts) opts = {} if (opts.tail) { opts.since = this.changes } if (opts.live) return createLiveStream(this, opts) var self = this var since = opts.since || 0 var until = opts.until || 0 var keys = this.db.createValueStream({ gt: CHANGES + lexint.pack(since, 'hex'), lt: CHANGES + (until ?
lexint.pack(until, 'hex') : '~'), valueEncoding: 'utf-8', reverse: opts.reverse, limit: opts.limit }) var get = function (key, enc, cb) { self.get(key, opts, cb) } return pump(keys, through.obj(get)) } Hyperlog.prototype.replicate = Hyperlog.prototype.createReplicationStream = function (opts) { return replicate(this, opts) } Hyperlog.prototype.add = function (links, value, opts, cb) { if (typeof opts === 'function') { cb = opts opts = {} } if (!cb) cb = noop this.batch([{links: links, value: value}], opts, function (err, nodes) { if (err) cb(err) else cb(null, nodes[0]) }) } Hyperlog.prototype.batch = function (docs, opts, cb) { // 0. preamble if (typeof opts === 'function') { cb = opts opts = {} } if (!cb) cb = noop if (!opts) opts = {} // Bail asynchronously; nothing to add. if (docs.length === 0) return process.nextTick(function () { cb(null, []) }) var self = this var id = opts.log || self.id opts.log = id var logLinks = {} var lockRelease = null var latestSeq // Bubble up errors on non-batch (1 element) calls. var bubbleUpErrors = false if (docs.length === 1) { bubbleUpErrors = true } // 1. construct initial hyperlog "node" for each of "docs" var nodes = docs.map(function (doc) { return constructInitialNode(doc, opts) }) // 2. emit all preadd events nodes.forEach(function (node) { self.emit('preadd', node) }) waterfall([ // 3. lock the hyperlog (if needed) // 4. wait until the hyperlog is 'ready' // 5. retrieve the seq# of this hyperlog's head lockAndGetSeqNumber, // 3. hash (async/sync) all nodes // 4. retrieve + set 'getLinks' for each node function (seq, release, done) { lockRelease = release latestSeq = seq hashNodesAndFindLinks(nodes, done) }, // 8. dedupe the node against the params AND the hyperlog (in sequence) function (nodes, done) { dedupeNodes(nodes, latestSeq, done) }, // 9. create each node's leveldb batch operation object function (nodes, done) { computeBatchNodeOperations(nodes, done) }, // 10. 
perform the leveldb batch op function (nodes, batchOps, done) { self.db.batch(batchOps, function (err) { if (err) { nodes.forEach(rejectNode) return done(err) } done(null, nodes) }) }, // 11. update the hyperlog's change# // 12. emit all add/reject events function (nodes, done) { self.changes = nodes.reduce(maxChange, self.changes) done(null, nodes) } ], function (err, nodes) { // release lock, if necessary if (lockRelease) return lockRelease(onUnlocked, err) onUnlocked(err) function onUnlocked (err) { // Error; all nodes were rejected. if (err) return cb(err) // Emit add events. nodes.forEach(function (node) { self.emit('add', node) }) cb(null, nodes) } }) function rejectNode (node) { self.emit('reject', node) } // Hashes and finds links for the given nodes. If some nodes fail to hash to // have their links found, they are rejected and not returned in the results. function hashNodesAndFindLinks (nodes, done) { var goodNodes = [] parallel( nodes.map(function (node) { return function (done) { hashNode(node, function (err, key) { if (err) { rejectNode(node) return done(bubbleUpErrors ? err : null) } node.key = key getLinks(self, id, node.links, function (err, links) { if (err) { rejectNode(node) return done(bubbleUpErrors ? err : null) } logLinks[node.key] = links if (!node.log) node.log = self.id goodNodes.push(node) done() }) }) } }), function (err) { done(err, goodNodes) } ) } function lockAndGetSeqNumber (done) { if (opts.release) onlocked(opts.release) else self.lock(onlocked) function onlocked (release) { self.ready(function () { self.logs.head(id, function (err, seq) { if (err) return release(cb, err) done(null, seq, release) }) }) } } function dedupeNodes (nodes, seq, done) { var goodNodes = [] var added = nodes.length > 1 ? 
{} : null var seqIdx = 1 var changeIdx = 1 waterfall( nodes.map(function (node) { return function (done) { dedupeNode(node, done) } }), function (err) { done(err, goodNodes) } ) function dedupeNode (node, done) { // Check if the to-be-added node already exists in the hyperlog. self.get(node.key, function (_, clone) { // It already exists if (clone) { node.seq = seq + (seqIdx++) node.change = clone.change // It already exists; it was added in this batch op earlier on. } else if (added && added[node.key]) { node.seq = added[node.key].seq node.change = added[node.key].change rejectNode(node) return done() } else { // new node across all logs node.seq = seq + (seqIdx++) node.change = self.changes + (changeIdx++) } if (added) added[node.key] = node goodNodes.push(node) done() }) } } function computeBatchNodeOperations (nodes, done) { var batch = [] var goodNodes = [] waterfall( nodes.map(function (node) { return function (done) { computeNodeBatchOp(node, function (err, ops) { if (err) { rejectNode(node) return done(bubbleUpErrors ? err : null) } batch = batch.concat(ops) goodNodes.push(node) done() }) } }), function (err) { if (err) return done(err) done(null, nodes, batch) } ) // Create a new leveldb batch operation for this node. 
function computeNodeBatchOp (node, done) { var batch = [] var links = logLinks[node.key] addBatchAndDedupe(self, node, links, batch, opts, function (err, newNode) { if (err) return done(err) newNode.value = encoder.decode(newNode.value, opts.valueEncoding || self.valueEncoding) done(null, batch) }) } } function constructInitialNode (doc, opts) { var links = doc.links || [] if (!Array.isArray(links)) links = [links] links = links.map(toKey) var encodedValue = encoder.encode(doc.value, opts.valueEncoding || self.valueEncoding) return { log: opts.log || self.id, key: null, identity: doc.identity || opts.identity || null, signature: opts.signature || null, value: encodedValue, links: links } } function hashNode (node, done) { if (self.asyncHash) { self.asyncHash(node.links, node.value, done) } else { var key = self.hash(node.links, node.value) done(null, key) } } } Hyperlog.prototype.append = function (value, opts, cb) { if (typeof opts === 'function') { cb = opts opts = {} } if (!cb) cb = noop if (!opts) opts = {} var self = this this.lock(function (release) { self.heads(function (err, heads) { if (err) return release(cb, err) opts.release = release self.add(heads, value, opts, cb) }) }) } module.exports = Hyperlog ================================================ FILE: lib/encode.js ================================================ exports.encode = function (value, enc) { if (typeof enc === 'object' && enc.encode) { value = enc.encode(value) } else if (enc === 'json') { value = Buffer(JSON.stringify(value)) } if (typeof value === 'string') value = new Buffer(value) return value } exports.decode = function (value, enc) { if (typeof enc === 'object' && enc.decode) { return enc.decode(value) } else if (enc === 'json') { return JSON.parse(value.toString()) } else if (enc === 'utf-8' || enc === 'utf8') { return value.toString() } return value } ================================================ FILE: lib/hash.js ================================================ var framedHash 
= require('framed-hash') var empty = new Buffer(0) module.exports = function (links, value) { var hash = framedHash('sha256') for (var i = 0; i < links.length; i++) hash.update(links[i]) hash.update(value || empty) return hash.digest('hex') } ================================================ FILE: lib/messages.js ================================================ var protobuf = require('protocol-buffers') var fs = require('fs') var path = require('path') module.exports = protobuf(fs.readFileSync(path.join(__dirname, '..', 'schema.proto'), 'utf-8')) ================================================ FILE: lib/protocol.js ================================================ var Duplexify = require('duplexify') var util = require('util') var lpstream = require('length-prefixed-stream') var through = require('through2') var debug = require('debug')('hyperlog-replicate') var messages = require('./messages') var empty = { encodingLength: function () { return 0 }, encode: function (data, buf, offset) { return buf } } var Protocol = function (opts) { if (!(this instanceof Protocol)) return new Protocol(opts) var frame = !opts || opts.frame !== false this._encoder = frame ? lpstream.encode() : through.obj() this._decoder = frame ? lpstream.decode() : through.obj() this._finalize = opts.finalize ? opts.finalize : function (cb) { cb() } this._process = opts.process || null var self = this var parse = through.obj(function (data, enc, cb) { self._decode(data, cb) }) parse.on('error', function (err) { self.destroy(err) }) this.on('end', function () { debug('ended') self.end() }) this.on('finish', function () { debug('finished') self.finalize() }) this._decoder.pipe(parse) if (this._process) { this._process.pipe(through.obj(function (node, enc, cb) { self.emit('node', node, cb) || cb() })) } var hwm = opts.highWaterMark || 16 Duplexify.call(this, this._decoder, this._encoder, frame ? 
{} : {objectMode: true, highWaterMark: hwm}) } util.inherits(Protocol, Duplexify) Protocol.prototype.handshake = function (handshake, cb) { debug('sending handshake') this._encode(0, messages.Handshake, handshake, cb) } Protocol.prototype.have = function (have, cb) { debug('sending have') this._encode(1, messages.Log, have, cb) } Protocol.prototype.want = function (want, cb) { debug('sending want') this._encode(2, messages.Log, want, cb) } Protocol.prototype.node = function (node, cb) { debug('sending node') this._encode(3, messages.Node, node, cb) } Protocol.prototype.sentHeads = function (cb) { debug('sending sentHeads') this._encode(4, empty, null, cb) } Protocol.prototype.sentWants = function (cb) { debug('sending sentWants') this._encode(5, empty, null, cb) } Protocol.prototype.finalize = function (cb) { var self = this this._finalize(function (err) { debug('ending') if (err) return self.destroy(err) self._encoder.end(cb) }) } Protocol.prototype._encode = function (type, enc, data, cb) { var buf = new Buffer(enc.encodingLength(data) + 1) buf[0] = type enc.encode(data, buf, 1) this._encoder.write(buf, cb) } var decodeMessage = function (data) { switch (data[0]) { case 0: return messages.Handshake.decode(data, 1) case 1: return messages.Log.decode(data, 1) case 2: return messages.Log.decode(data, 1) case 3: return messages.Node.decode(data, 1) } return null } Protocol.prototype._decode = function (data, cb) { try { var msg = decodeMessage(data) } catch (err) { return cb(err) } switch (data[0]) { case 0: debug('receiving handshake') return this.emit('handshake', msg, cb) || cb() case 1: debug('receiving have') return this.emit('have', msg, cb) || cb() case 2: debug('receiving want') return this.emit('want', msg, cb) || cb() case 3: debug('receiving node') return this._process ? 
this._process.write(msg, cb) : (this.emit('node', msg, cb) || cb()) case 4: debug('receiving sentHeads') return this.emit('sentHeads', cb) || cb() case 5: debug('receiving sentWants') return this.emit('sentWants', cb) || cb() } cb() } module.exports = Protocol ================================================ FILE: lib/replicate.js ================================================ var through = require('through2') var pump = require('pump') var bitfield = require('bitfield') var protocol = require('./protocol') var sortedQueue = require('./sorted-queue') var encoder = require('./encode.js') var noop = function () {} var noarr = [] var MAX_BITFIELD = 10 * 1024 * 1024 // arbitrary high number module.exports = function (dag, opts) { if (!opts) opts = {} var stream = protocol(opts) var mode = opts.mode || 'sync' // Bitfield to ensure that the nodes of each log in the hyperlog is only sent // once. var pushing = bitfield(1024, {grow: MAX_BITFIELD}) // The largest change # known to this log when replication begins. var changes = 0 var missing = 0 var done = false var remoteSentWants = false var remoteSentHeads = false var localSentWants = false var localSentHeads = false var live = opts.live // Local nodes yet to be sent. var outgoing = sortedQueue() // Remote nodes yet to be added to this hyperlog. var incoming = sortedQueue() // Asynchronous loop to continue sending nodes from a log in sequence from // low seq # to its highest seq #. 
outgoing.pull(function loop (entry) { dag.get(entry.node, {valueEncoding: 'binary'}, function (err, node) { if (err) return stream.destroy(err) if (entry.log && (node.log !== entry.log || node.seq !== entry.seq)) { // deduplicated node.log = entry.log node.seq = entry.seq } stream.emit('push') stream.node(node, function (err) { if (err) return stream.destroy(err) sendNode(node.log, node.seq + 1, function (err) { if (err) return stream.destroy(err) outgoing.pull(loop) }) }) }) }) var pipe = function (a, b, cb) { var destroy = function () { a.destroy() } stream.on('close', destroy) stream.on('finish', destroy) a.on('end', function () { stream.removeListener('close', destroy) stream.removeListener('finish', destroy) }) return pump(a, b, cb) } // For live replication. Reads live from the local hyperlog and continues to // send new nodes to the other end. var sendChanges = function () { var write = function (node, enc, cb) { node.value = encoder.encode(node.value, dag.valueEncoding) stream.node(node, cb) } stream.emit('live') pipe(dag.createReadStream({since: changes, live: true}), through.obj(write)) } // Check if replication is finished. var update = function (cb) { if (done || !localSentWants || !localSentHeads || !remoteSentWants || !remoteSentHeads) return cb() done = true if (!live) return stream.finalize(cb) sendChanges() cb() } // Inform the other side that we've requested all of the nodes we want. var sentWants = function (cb) { localSentWants = true stream.sentWants() update(cb) } // Inform the other side that we've sent all of the heads we have. var sentHeads = function (cb) { localSentHeads = true stream.sentHeads() update(cb) } // Send a specific entry in a specific log to the other side. // If the node links to other nodes, inform the other side we have those, // too. 
  var sendNode = function (log, seq, cb) {
    dag.logs.get(log, seq, function (err, entry) {
      if (err && err.notFound) return cb()
      if (err) return cb(err)
      if (entry.change > changes) return cb() // ensure snapshot

      entry.log = log
      entry.seq = seq

      var i = 0
      var loop = function () {
        if (i < entry.links.length) return sendHave(entry.links[i++], loop)
        entry.links = noarr // premature opt: less mem yo
        outgoing.push(entry, cb)
      }

      loop()
    })
  }

  // Add a received remote node to our local hyperlog.
  // It is normal for the insertion to sometimes fail: we may have received a
  // node that depends on another node from a log we haven't yet received. If
  // so, enqueue it into 'incoming' and continue trying to re-insert it until
  // its dependencies are also present.
  var receiveNode = function (node, cb) {
    var opts = {
      hash: node.key,
      log: node.log,
      seq: node.seq,
      identity: node.identity,
      signature: node.signature,
      valueEncoding: 'binary'
    }
    dag.add(node.links, node.value, opts, function (err) {
      if (!err) return afterAdd(cb)
      if (!err.notFound) return cb(err)
      incoming.push(node, cb)
    })
  }

  var afterAdd = function (cb) {
    stream.emit('pull')
    if (!localSentWants && !--missing) return sentWants(cb)
    if (!incoming.length) return cb()
    incoming.pull(function (node) {
      receiveNode(node, cb)
    })
  }

  var sendHave = function (log, cb) {
    dag.enumerate(log, function (err, idx) {
      if (err) return cb(err)

      // Don't send the same log twice.
      if (pushing.get(idx)) return cb()
      pushing.set(idx, true)

      dag.logs.head(log, function (err, seq) {
        if (err) return cb(err)
        dag.logs.get(log, seq, function loop (err, entry) { // ensure snapshot
          if (err && err.notFound) return cb()
          if (err) return cb(err)
          if (entry.change > changes) return dag.logs.get(log, seq - 1, loop)
          stream.have({log: log, seq: seq}, cb)
        })
      })
    })
  }

  stream.once('sentHeads', function (cb) {
    if (!localSentWants && !missing) sentWants(noop)
    remoteSentHeads = true
    update(cb)
  })

  stream.once('sentWants', function (cb) {
    remoteSentWants = true
    update(cb)
  })

  stream.on('want', function (head, cb) {
    sendNode(head.log, head.seq + 1, cb)
  })

  stream.on('have', function (head, cb) {
    dag.logs.head(head.log, function (err, seq) {
      if (err) return cb(err)
      if (seq >= head.seq) return cb()
      missing += (head.seq - seq)
      stream.want({log: head.log, seq: seq}, cb)
    })
  })

  stream.on('node', receiveNode)

  // start the handshake
  stream.on('handshake', function (handshake, cb) {
    var remoteMode = handshake.mode

    if (remoteMode !== 'pull' && remoteMode !== 'push' && remoteMode !== 'sync') return cb(new Error('Remote uses invalid mode: ' + remoteMode))
    if (remoteMode === 'pull' && mode === 'pull') return cb(new Error('Remote and local are both pulling'))
    if (remoteMode === 'push' && mode === 'push') return cb(new Error('Remote and local are both pushing'))

    remoteSentWants = remoteMode === 'push'
    remoteSentHeads = remoteMode === 'pull'
    localSentWants = mode === 'push' || remoteMode === 'pull'
    localSentHeads = mode === 'pull' || remoteMode === 'push'

    if (handshake.metadata) stream.emit('metadata', handshake.metadata)
    if (!live) live = handshake.live
    if (localSentHeads) return update(cb)

    var write = function (node, enc, cb) {
      sendHave(node.log, cb)
    }

    dag.lock(function (release) {
      // TODO: don't lock here.
      // figure out how to snapshot the heads to a change instead
      changes = dag.changes
      pipe(dag.heads(), through.obj(write), function (err) {
        release()
        if (err) return cb(err)
        sentHeads(cb)
      })
    })
  })

  stream.handshake({version: 1, mode: opts.mode, metadata: opts.metadata, live: live})

  return stream
}

================================================
FILE: lib/sorted-queue.js
================================================
// A queue of hyperlog nodes that is sorted by the nodes' change #. The node
// with the lowest change # will be the first dequeued.
//
// TODO: buffer to leveldb if the queue becomes large
var SortedQueue = function () {
  if (!(this instanceof SortedQueue)) return new SortedQueue()
  this.list = []
  this.wait = null
  this.length = 0
}

SortedQueue.prototype.push = function (entry, cb) {
  var i = indexOf(this.list, entry.change)
  if (i === this.list.length) this.list.push(entry)
  else this.list.splice(i, 0, entry)
  this.length++
  if (this.wait) this.pull(this.wait)
  if (cb) cb()
}

SortedQueue.prototype.pull = function (cb) {
  if (!this.list.length) {
    this.wait = cb
    return
  }
  this.wait = null
  var next = this.list.shift()
  this.length--
  cb(next)
}

function indexOf (list, change) {
  var low = 0
  var high = list.length
  var mid = 0
  while (low < high) {
    mid = (low + high) >> 1
    if (change < list[mid].change) high = mid
    else low = mid + 1
  }
  return low
}

module.exports = SortedQueue

================================================
FILE: package.json
================================================
{
  "name": "hyperlog",
  "version": "4.12.1",
  "description": "Merkle DAG that replicates based on scuttlebutt logs and causal linking",
  "main": "index.js",
  "dependencies": {
    "after-all": "^2.0.2",
    "bitfield": "^1.1.2",
    "brfs": "^1.4.0",
    "cuid": "^1.2.5",
    "debug": "^2.2.0",
    "defined": "^1.0.0",
    "duplexify": "^3.4.2",
    "framed-hash": "^1.1.0",
    "from2": "^2.1.0",
    "length-prefixed-stream": "^1.3.0",
    "level-enumerate": "^1.0.1",
    "level-logs": "^1.1.0",
    "lexicographic-integer": "^1.1.0",
    "mutexify": "^1.1.0",
    "protocol-buffers": "^3.1.2",
    "pump": "^1.0.0",
    "run-parallel": "^1.1.6",
    "run-waterfall": "^1.1.3",
    "stream-collector": "^1.0.1",
    "through2": "^2.0.0"
  },
  "devDependencies": {
    "bs58": "^3.0.0",
    "memdb": "^1.0.1",
    "standard": "^5.0.0",
    "multihashing": "^0.2.0",
    "tape": "^4.0.0"
  },
  "browserify": {
    "transform": [
      "brfs"
    ]
  },
  "scripts": {
    "test": "standard && tape test/*"
  },
  "repository": {
    "type": "git",
    "url": "https://github.com/mafintosh/hyperlog.git"
  },
  "author": "Mathias Buus (@mafintosh)",
  "license": "MIT",
  "bugs": {
    "url": "https://github.com/mafintosh/hyperlog/issues"
  },
  "homepage": "https://github.com/mafintosh/hyperlog"
}

================================================
FILE: schema.proto
================================================
message Node {
  required uint32 change = 1;
  required string key = 2;
  required string log = 3;
  optional uint32 seq = 4;
  optional bytes identity = 7;
  optional bytes signature = 8;
  required bytes value = 5;
  repeated string links = 6;
}

message Entry {
  required uint32 change = 1;
  required string node = 2;
  repeated string links = 3;
  optional string log = 4;
  optional uint32 seq = 5;
}

message Log {
  required string log = 1;
  required uint32 seq = 2;
}

message Handshake {
  required uint32 version = 1;
  optional string mode = 2 [default = "sync"];
  optional bytes metadata = 3;
  optional bool live = 4;
}

================================================
FILE: test/basic.js
================================================
var hyperlog = require('../')
var tape = require('tape')
var memdb = require('memdb')
var collect = require('stream-collector')

tape('add node', function (t) {
  var hyper = hyperlog(memdb())
  hyper.add(null, 'hello world', function (err, node) {
    t.error(err)
    t.ok(node.key, 'has key')
    t.same(node.links, [])
    t.same(node.value, new Buffer('hello world'))
    t.end()
  })
})

tape('add node with links', function (t) {
  var hyper = hyperlog(memdb())
  hyper.add(null, 'hello', function (err, node) {
    t.error(err)
    hyper.add(node, 'world', function (err, node2) {
      t.error(err)
      t.ok(node2.key, 'has key')
      t.same(node2.links, [node.key], 'has links')
      t.same(node2.value, new Buffer('world'))
      t.end()
    })
  })
})

tape('cannot add node with bad links', function (t) {
  var hyper = hyperlog(memdb())
  hyper.add('i-do-not-exist', 'hello world', function (err) {
    t.ok(err, 'had error')
    t.ok(err.notFound, 'not found error')
    t.end()
  })
})

tape('heads', function (t) {
  var hyper = hyperlog(memdb())
  hyper.heads(function (err, heads) {
    t.error(err)
    t.same(heads, [], 'no heads yet')
    hyper.add(null, 'a', function (err, node) {
      t.error(err)
      hyper.heads(function (err, heads) {
        t.error(err)
        t.same(heads, [node], 'has head')
        hyper.add(node, 'b', function (err, node2) {
          t.error(err)
          hyper.heads(function (err, heads) {
            t.error(err)
            t.same(heads, [node2], 'new heads')
            t.end()
          })
        })
      })
    })
  })
})

tape('deduplicates', function (t) {
  var hyper = hyperlog(memdb())
  hyper.add(null, 'hello world', function (err, node) {
    t.error(err)
    hyper.add(null, 'hello world', function (err, node) {
      t.error(err)
      collect(hyper.createReadStream(), function (err, changes) {
        t.error(err)
        t.same(changes.length, 1, 'only one change')
        t.end()
      })
    })
  })
})

tape('deduplicates -- same batch', function (t) {
  var hyper = hyperlog(memdb())
  var doc = { links: [], value: 'hello world' }
  hyper.batch([doc, doc], function (err, nodes) {
    t.error(err)
    collect(hyper.createReadStream(), function (err, changes) {
      t.error(err)
      t.same(changes.length, 1, 'only one change')
      t.same(hyper.changes, 1, 'only one change')
      t.end()
    })
  })
})

tape('bug repro: bad insert links results in correct preadd/add/reject counts', function (t) {
  var hyper = hyperlog(memdb())
  var pending = 0
  hyper.on('preadd', function (node) { pending++ })
  hyper.on('add', function (node) { pending-- })
  hyper.on('reject', function (node) { pending-- })
  hyper.add(['123'], 'hello', function (err, node) {
    t.ok(err)
    t.equal(pending, 0)
    t.end()
  })
})

================================================
FILE: test/batch.js
================================================
var hyperlog = require('../')
var tape = require('tape')
var memdb = require('memdb')

tape('batch', function (t) {
  t.plan(10)
  var log = hyperlog(memdb(), { valueEncoding: 'utf8' })
  log.add(null, 'A', function (err, node) {
    t.error(err)
    var ops = [
      { links: [node.key], value: 'B' },
      { links: [node.key], value: 'C' },
      { links: [node.key], value: 'D' }
    ]
    log.batch(ops, function (err, nodes) {
      t.error(err)
      log.get(node.key, function (err, doc) {
        t.error(err)
        t.equal(doc.value, 'A')
      })
      log.get(nodes[0].key, function (err, doc) {
        t.error(err)
        t.equal(doc.value, 'B')
      })
      log.get(nodes[1].key, function (err, doc) {
        t.error(err)
        t.equal(doc.value, 'C')
      })
      log.get(nodes[2].key, function (err, doc) {
        t.error(err)
        t.equal(doc.value, 'D')
      })
    })
  })
})

tape('batch dedupe', function (t) {
  t.plan(6)
  var doc1 = { links: [], value: 'hello world' }
  var doc2 = { links: [], value: 'hello world 2' }
  var hyper = hyperlog(memdb(), { valueEncoding: 'utf8' })
  hyper.batch([doc1], function (err) {
    t.error(err)
    hyper.batch([doc2], function (err) {
      t.error(err)
      hyper.batch([doc1], function (err, nodes) {
        t.error(err)
        t.equal(hyper.changes, 2)
        t.equal(nodes.length, 1)
        t.equal(nodes[0].change, 1)
      })
    })
  })
})

tape('batch dedupe 2', function (t) {
  t.plan(4)
  var doc1 = { links: [], value: 'hello world' }
  var doc2 = { links: [], value: 'hello world 2' }
  var hyper = hyperlog(memdb(), { valueEncoding: 'utf8' })
  hyper.batch([doc1], function (err) {
    t.error(err)
    hyper.batch([doc2], function (err) {
      t.error(err)
      hyper.batch([doc2, doc1, doc2], function (err) {
        t.error(err)
        t.equal(hyper.changes, 2)
      })
    })
  })
})

================================================
FILE: test/changes.js
================================================
var hyperlog = require('../')
var tape = require('tape')
var memdb = require('memdb')
var collect = require('stream-collector')

tape('changes', function (t) {
  var hyper = hyperlog(memdb())
  hyper.add(null, 'a', function (err, a) {
    t.error(err)
    hyper.add(null, 'b', function (err, b) {
      t.error(err)
      hyper.add(null, 'c', function (err, c) {
        t.error(err)
        collect(hyper.createReadStream(), function (err, changes) {
          t.error(err)
          t.same(changes, [a, b, c], 'has 3 changes')
          t.end()
        })
      })
    })
  })
})

tape('changes since', function (t) {
  var hyper = hyperlog(memdb())
  hyper.add(null, 'a', function (err, a) {
    t.error(err)
    hyper.add(null, 'b', function (err, b) {
      t.error(err)
      hyper.add(null, 'c', function (err, c) {
        t.error(err)
        collect(hyper.createReadStream({since: 2}), function (err, changes) {
          t.error(err)
          t.same(changes, [c], 'has 1 change')
          t.end()
        })
      })
    })
  })
})

tape('live changes', function (t) {
  var hyper = hyperlog(memdb())
  var expects = ['a', 'b', 'c']
  hyper.createReadStream({live: true})
    .on('data', function (data) {
      var next = expects.shift()
      t.same(data.value.toString(), next, 'was expected value')
      if (!expects.length) t.end()
    })
  hyper.add(null, 'a', function () {
    hyper.add(null, 'b', function () {
      hyper.add(null, 'c')
    })
  })
})

tape('parallel add orders changes', function (t) {
  var hyper = hyperlog(memdb())
  var missing = 3
  var values = {}
  var done = function () {
    if (--missing) return
    collect(hyper.createReadStream(), function (err, changes) {
      t.error(err)
      changes.forEach(function (c, i) {
        t.same(c.change, i + 1, 'correct change number')
        values[c.value.toString()] = true
      })
      t.same(values, {a: true, b: true, c: true}, 'contains all values')
      t.end()
    })
  }
  hyper.add(null, 'a', done)
  hyper.add(null, 'b', done)
  hyper.add(null, 'c', done)
})

================================================
FILE: test/encoding.js
================================================
var hyperlog = require('../')
var tape = require('tape')
var memdb = require('memdb')
var collect = require('stream-collector')

tape('add node', function (t) {
  var hyper = hyperlog(memdb(), { valueEncoding: 'json' })
  hyper.add(null, { msg: 'hello world' }, function (err, node) {
    t.error(err)
    t.ok(node.key, 'has key')
    t.same(node.links, [])
    t.same(node.value, { msg: 'hello world' })
    t.end()
  })
})

tape('add node with encoding option', function (t) {
  var hyper = hyperlog(memdb())
  hyper.add(null, { msg: 'hello world' }, { valueEncoding: 'json' }, function (err, node) {
    t.error(err)
    t.ok(node.key, 'has key')
    t.same(node.links, [])
    t.same(node.value, { msg: 'hello world' })
    t.end()
  })
})

tape('append node', function (t) {
  var hyper = hyperlog(memdb(), { valueEncoding: 'json' })
  hyper.append({ msg: 'hello world' }, function (err, node) {
    t.error(err)
    t.ok(node.key, 'has key')
    t.same(node.links, [])
    t.same(node.value, { msg: 'hello world' })
    t.end()
  })
})

tape('append node with encoding option', function (t) {
  var hyper = hyperlog(memdb())
  hyper.append({ msg: 'hello world' }, { valueEncoding: 'json' }, function (err, node) {
    t.error(err)
    t.ok(node.key, 'has key')
    t.same(node.links, [])
    t.same(node.value, { msg: 'hello world' })
    t.end()
  })
})

tape('add node with links', function (t) {
  var hyper = hyperlog(memdb(), { valueEncoding: 'json' })
  hyper.add(null, { msg: 'hello' }, function (err, node) {
    t.error(err)
    hyper.add(node, { msg: 'world' }, function (err, node2) {
      t.error(err)
      t.ok(node2.key, 'has key')
      t.same(node2.links, [node.key], 'has links')
      t.same(node2.value, { msg: 'world' })
      t.end()
    })
  })
})

tape('cannot add node with bad links', function (t) {
  var hyper = hyperlog(memdb(), { valueEncoding: 'json' })
  hyper.add('i-do-not-exist', { msg: 'hello world' }, function (err) {
    t.ok(err, 'had error')
    t.ok(err.notFound, 'not found error')
    t.end()
  })
})

tape('heads', function (t) {
  var hyper = hyperlog(memdb(), { valueEncoding: 'json' })
  hyper.heads(function (err, heads) {
    t.error(err)
    t.same(heads, [], 'no heads yet')
    hyper.add(null, 'a', function (err, node) {
      t.error(err)
      hyper.heads(function (err, heads) {
        t.error(err)
        t.same(heads, [node], 'has head')
        hyper.add(node, 'b', function (err, node2) {
          t.error(err)
          hyper.heads(function (err, heads) {
            t.error(err)
            t.same(heads, [node2], 'new heads')
            t.end()
          })
        })
      })
    })
  })
})

tape('heads with encoding option', function (t) {
  var hyper = hyperlog(memdb())
  hyper.heads({ valueEncoding: 'json' }, function (err, heads) {
    t.error(err)
    t.same(heads, [], 'no heads yet')
    hyper.add(null, 'a', { valueEncoding: 'json' }, function (err, node) {
      t.error(err)
      hyper.heads({ valueEncoding: 'json' }, function (err, heads) {
        t.error(err)
        t.same(heads, [node], 'has head')
        hyper.add(node, 'b', { valueEncoding: 'json' }, function (err, node2) {
          t.error(err)
          hyper.heads({ valueEncoding: 'json' }, function (err, heads) {
            t.error(err)
            t.same(heads, [node2], 'new heads')
            t.end()
          })
        })
      })
    })
  })
})

tape('get', function (t) {
  var hyper = hyperlog(memdb(), { valueEncoding: 'json' })
  hyper.add(null, { msg: 'hello world' }, function (err, node) {
    t.error(err)
    t.ok(node.key, 'has key')
    t.same(node.links, [])
    t.same(node.value, { msg: 'hello world' })
    hyper.get(node.key, function (err, node2) {
      t.ifError(err)
      t.same(node2.value, { msg: 'hello world' })
      t.end()
    })
  })
})

tape('get with encoding option', function (t) {
  var hyper = hyperlog(memdb())
  hyper.add(null, { msg: 'hello world' }, { valueEncoding: 'json' }, function (err, node) {
    t.error(err)
    t.ok(node.key, 'has key')
    t.same(node.links, [])
    t.same(node.value, { msg: 'hello world' })
    hyper.get(node.key, { valueEncoding: 'json' }, function (err, node2) {
      t.ifError(err)
      t.same(node2.value, { msg: 'hello world' })
      t.end()
    })
  })
})

tape('deduplicates', function (t) {
  var hyper = hyperlog(memdb(), { valueEncoding: 'json' })
  hyper.add(null, { msg: 'hello world' }, function (err, node) {
    t.error(err)
    hyper.add(null, { msg: 'hello world' }, function (err, node) {
      t.error(err)
      collect(hyper.createReadStream(), function (err, changes) {
        t.error(err)
        t.same(changes.length, 1, 'only one change')
        t.end()
      })
    })
  })
})

tape('live replication encoding', function (t) {
  t.plan(2)
  var h0 = hyperlog(memdb(), { valueEncoding: 'json' })
  var h1 = hyperlog(memdb(), { valueEncoding: 'json' })
  h1.createReadStream({ live: true })
    .on('data', function (data) {
      t.deepEqual(data.value, { msg: 'hello world' })
    })
  var r0 = h0.replicate({ live: true })
  var r1 = h1.replicate({ live: true })
  h0.add(null, { msg: 'hello world' }, function (err, node) {
    t.error(err)
    r0.pipe(r1).pipe(r0)
  })
})

================================================
FILE: test/events.js
================================================
var hyperlog = require('../')
var tape = require('tape')
var memdb = require('memdb')

tape('add and preadd events', function (t) {
  t.plan(13)
  var hyper = hyperlog(memdb())
  var expected = ['hello', 'world']
  var expectedPre = ['hello', 'world']
  var order = []
  hyper.on('add', function (node) {
    // at this point, the event has already been added
    t.equal(node.value.toString(), expected.shift())
    order.push('add ' + node.value)
  })
  hyper.on('preadd', function (node) {
    t.equal(node.value.toString(), expectedPre.shift())
    order.push('preadd ' + node.value)
    hyper.get(node.key, function (err) {
      t.ok(err.notFound)
    })
  })
  hyper.add(null, 'hello', function (err, node) {
    t.error(err)
    hyper.add(node, 'world', function (err, node2) {
      t.error(err)
      t.ok(node2.key, 'has key')
      t.same(node2.links, [node.key], 'has links')
      t.same(node2.value, new Buffer('world'))
      t.deepEqual(order, [
        'preadd hello', 'add hello',
        'preadd world', 'add world'
      ], 'order')
    })
  })
  t.deepEqual(order, ['preadd hello'])
})

================================================
FILE: test/hash.js
================================================
var hyperlog = require('../')
var tape = require('tape')
var memdb = require('memdb')
var framedHash = require('framed-hash')
var multihashing = require('multihashing')
var base58 = require('bs58')

var sha1 = function (links, value) {
  var hash = framedHash('sha1')
  for (var i = 0; i < links.length; i++) hash.update(links[i])
  hash.update(value)
  return hash.digest('hex')
}

var asyncSha2 = function (links, value, cb) {
  process.nextTick(function () {
    var prevalue = value.toString()
    links.forEach(function (link) {
      prevalue += link
    })
    var result = base58.encode(multihashing(prevalue, 'sha2-256'))
    cb(null, result)
  })
}

tape('add node using sha1', function (t) {
  var hyper = hyperlog(memdb(), { hash: sha1 })
  hyper.add(null, 'hello world', function (err, node) {
    t.error(err)
    t.same(node.key, '99cf70777a24b574b8fb5b3173cd4073f02098b0')
    t.end()
  })
})

tape('add node with links using sha1', function (t) {
  var hyper = hyperlog(memdb(), { hash: sha1 })
  hyper.add(null, 'hello', function (err, node) {
    t.error(err)
    t.same(node.key, '445198669b880239a7e64247ed303066b398678b')
    hyper.add(node, 'world', function (err, node2) {
      t.error(err)
      t.same(node2.key, '1d95837842db3995fb3e77ed070457eb4f9875bc')
      t.end()
    })
  })
})

tape('add node using async multihash', function (t) {
  var hyper = hyperlog(memdb(), { asyncHash: asyncSha2 })
  hyper.add(null, 'hello world', function (err, node) {
    t.error(err)
    t.same(node.key, 'QmaozNR7DZHQK1ZcU9p7QdrshMvXqWK6gpu5rmrkPdT3L4')
    t.end()
  })
})

tape('add node with links using async multihash', function (t) {
  var hyper = hyperlog(memdb(), { asyncHash: asyncSha2 })
  hyper.add(null, 'hello', function (err, node) {
    t.error(err)
    t.same(node.key, 'QmRN6wdp1S2A5EtjW9A3M1vKSBuQQGcgvuhoMUoEz4iiT5')
    hyper.add(node, 'world', function (err, node2) {
      t.error(err)
      t.same(node2.key, 'QmVeZeqV6sbzeDyzhxFHwBLddaQzUELCxLjrQVzfBuDrt8')
      hyper.add([node, node2], '!!!', function (err, node3) {
        t.error(err)
        t.same(node3.key, 'QmNs89mwydjboQGpvcK2F3hyKjSmdqQTqDWmRMsAQnL4ZU')
        t.end()
      })
    })
  })
})

tape('preadd event with async hash', function (t) {
  var hyper = hyperlog(memdb(), { asyncHash: asyncSha2 })
  var prenode = null
  hyper.on('preadd', function (node) {
    prenode = node
  })
  hyper.add(null, 'hello world', function (err, node) {
    t.error(err)
    t.same(node.key, 'QmaozNR7DZHQK1ZcU9p7QdrshMvXqWK6gpu5rmrkPdT3L4')
    t.end()
  })
  t.equal(prenode.key, null)
})

================================================
FILE: test/replicate.js
================================================
var hyperlog = require('../')
var tape = require('tape')
var memdb = require('memdb')
var pump = require('pump')
var through = require('through2')

var sync = function (a, b, cb) {
  var stream = a.replicate()
  pump(stream, b.replicate(), stream, cb)
}

var toJSON = function (log, cb) {
  var map = {}
  log.createReadStream()
    .on('data', function (node) {
      map[node.key] = {value: node.value, links: node.links}
    })
    .on('end', function () {
      cb(null, map)
    })
}

tape('clones', function (t) {
  var hyper = hyperlog(memdb())
  var clone = hyperlog(memdb())
  hyper.add(null, 'a', function () {
    hyper.add(null, 'b', function () {
      hyper.add(null, 'c', function () {
        sync(hyper, clone, function (err) {
          t.error(err)
          toJSON(clone, function (err, map1) {
            t.error(err)
            toJSON(hyper, function (err, map2) {
              t.error(err)
              t.same(map1, map2, 'logs are synced')
              t.end()
            })
          })
        })
      })
    })
  })
})

tape('clones with valueEncoding', function (t) {
  var hyper = hyperlog(memdb(), {valueEncoding: 'json'})
  var clone = hyperlog(memdb(), {valueEncoding: 'json'})
  hyper.add(null, 'a', function () {
    hyper.add(null, 'b', function () {
      hyper.add(null, 'c', function () {
        sync(hyper, clone, function (err) {
          t.error(err)
          toJSON(clone, function (err, map1) {
            t.error(err)
            toJSON(hyper, function (err, map2) {
              t.error(err)
              t.same(map1, map2, 'logs are synced')
              t.end()
            })
          })
        })
      })
    })
  })
})

tape('syncs with initial subset', function (t) {
  var hyper = hyperlog(memdb())
  var clone = hyperlog(memdb())
  clone.add(null, 'a', function () {
    hyper.add(null, 'a', function () {
      hyper.add(null, 'b', function () {
        hyper.add(null, 'c', function () {
          sync(hyper, clone, function (err) {
            t.error(err)
            toJSON(clone, function (err, map1) {
              t.error(err)
              toJSON(hyper, function (err, map2) {
                t.error(err)
                t.same(map1, map2, 'logs are synced')
                t.end()
              })
            })
          })
        })
      })
    })
  })
})

tape('syncs with initial superset', function (t) {
  var hyper = hyperlog(memdb())
  var clone = hyperlog(memdb())
  clone.add(null, 'd', function () {
    hyper.add(null, 'a', function () {
      hyper.add(null, 'b', function () {
        hyper.add(null, 'c', function () {
          sync(hyper, clone, function (err) {
            t.error(err)
            toJSON(clone, function (err, map1) {
              t.error(err)
              toJSON(hyper, function (err, map2) {
                t.error(err)
                t.same(map1, map2, 'logs are synced')
                t.end()
              })
            })
          })
        })
      })
    })
  })
})

tape('process', function (t) {
  var hyper = hyperlog(memdb())
  var clone = hyperlog(memdb())
  var process = function (node, enc, cb) {
    setImmediate(function () {
      cb(null, node)
    })
  }
  hyper.add(null, 'a', function () {
    hyper.add(null, 'b', function () {
      hyper.add(null, 'c', function () {
        var stream = hyper.replicate()
        pump(stream, clone.replicate({process: through.obj(process)}), stream, function () {
          toJSON(clone, function (err, map1) {
            t.error(err)
            toJSON(hyper, function (err, map2) {
              t.error(err)
              t.same(map1, map2, 'logs are synced')
              t.end()
            })
          })
        })
      })
    })
  })
})

// bugfix: previously replication would not terminate
tape('shared history with duplicates', function (t) {
  var hyper1 = hyperlog(memdb())
  var hyper2 = hyperlog(memdb())
  var doc1 = { links: [], value: 'a' }
  var doc2 = { links: [], value: 'b' }
  hyper1.batch([doc1], function (err) {
    t.error(err)
    sync(hyper1, hyper2, function (err) {
      t.error(err)
      hyper2.batch([doc1, doc2], function (err, nodes) {
        t.error(err)
        t.equals(nodes[0].change, 1)
        t.equals(nodes[1].change, 2)
        sync(hyper1, hyper2, function (err) {
          t.error(err)
          t.end()
        })
      })
    })
  })
})

================================================
FILE: test/signatures.js
================================================
var hyperlog = require('../')
var tape = require('tape')
var memdb = require('memdb')

tape('sign', function (t) {
  t.plan(4)
  var log = hyperlog(memdb(), {
    identity: new Buffer('i-am-a-public-key'),
    sign: function (node, cb) {
      t.same(node.value, new Buffer('hello'), 'sign is called')
      cb(null, new Buffer('i-am-a-signature'))
    }
  })
  log.add(null, 'hello', function (err, node) {
    t.error(err, 'no err')
    t.same(node.signature, new Buffer('i-am-a-signature'), 'has signature')
    t.same(node.identity, new Buffer('i-am-a-public-key'), 'has public key')
    t.end()
  })
})

tape('sign fails', function (t) {
  t.plan(2)
  var log = hyperlog(memdb(), {
    identity: new Buffer('i-am-a-public-key'),
    sign: function (node, cb) {
      cb(new Error('lol'))
    }
  })
  log.on('reject', function (node) {
    t.ok(node)
  })
  log.add(null, 'hello', function (err) {
    t.same(err && err.message, 'lol', 'had error')
  })
})

tape('verify', function (t) {
  t.plan(3)
  var log1 = hyperlog(memdb(), {
    identity: new Buffer('i-am-a-public-key'),
    sign: function (node, cb) {
      cb(null, new Buffer('i-am-a-signature'))
    }
  })
  var log2 = hyperlog(memdb(), {
    verify: function (node, cb) {
      t.same(node.signature, new Buffer('i-am-a-signature'), 'verify called with signature')
      t.same(node.identity, new Buffer('i-am-a-public-key'), 'verify called with public key')
      cb(null, true)
    }
  })
  log1.add(null, 'hello', function (err, node) {
    t.error(err, 'no err')
    var stream = log2.replicate()
    stream.pipe(log1.replicate()).pipe(stream)
  })
})

tape('verify fails', function (t) {
  t.plan(2)
  var log1 = hyperlog(memdb(), {
    identity: new Buffer('i-am-a-public-key'),
    sign: function (node, cb) {
      cb(null, new Buffer('i-am-a-signature'))
    }
  })
  var log2 = hyperlog(memdb(), {
    verify: function (node, cb) {
      cb(null, false)
    }
  })
  log1.add(null, 'hello', function (err, node) {
    t.error(err, 'no err')
    var stream = log2.replicate()
    stream.on('error', function (err) {
      t.same(err.message, 'Invalid signature', 'stream had error')
      t.end()
    })
    stream.pipe(log1.replicate()).pipe(stream)
  })
})

tape('per-document identity (add)', function (t) {
  t.plan(3)
  var log1 = hyperlog(memdb(), {
    sign: function (node, cb) {
      cb(null, new Buffer('i-am-a-signature'))
    }
  })
  var log2 = hyperlog(memdb(), {
    verify: function (node, cb) {
      t.same(node.signature, new Buffer('i-am-a-signature'), 'verify called with signature')
      t.same(node.identity, new Buffer('i-am-a-public-key'), 'verify called with public key')
      cb(null, true)
    }
  })
  var opts = { identity: new Buffer('i-am-a-public-key') }
  log1.add(null, 'hello', opts, function (err, node) {
    t.error(err, 'no err')
    var stream = log2.replicate()
    stream.pipe(log1.replicate()).pipe(stream)
  })
})

tape('per-document identity (batch)', function (t) {
  t.plan(5)
  var log1 = hyperlog(memdb(), {
    sign: function (node, cb) {
      cb(null, new Buffer('i-am-a-signature'))
    }
  })
  var expectedpk = [Buffer('hello id'), Buffer('whatever id')]
  var log2 = hyperlog(memdb(), {
    verify: function (node, cb) {
      t.same(node.signature, new Buffer('i-am-a-signature'), 'verify called with signature')
      t.same(node.identity, expectedpk.shift(), 'verify called with public key')
      cb(null, true)
    }
  })
  log1.batch([
    { value: 'hello', identity: Buffer('hello id') },
    { value: 'whatever', identity: Buffer('whatever id') }
  ], function (err, nodes) {
    t.error(err, 'no err')
    var stream = log2.replicate()
    stream.pipe(log1.replicate()).pipe(stream)
  })
})