[
  {
    "path": ".gitignore",
    "content": "*~\nnode_modules\nbrowser_example/jot.js\njson_editor_example/img/\njson_editor_example/jot.js\njson_editor_example/jsoneditor.css\njson_editor_example/jsoneditor.js\n"
  },
  {
    "path": "README.md",
    "content": "JSON Operational Transformation (JOT)\n=====================================\n\nBy Joshua Tauberer <https://razor.occams.info>.\n\nAugust 2013.\n\nLicense: GPL v3 <http://choosealicense.com/licenses/gpl-v3/>\n\nThis module implements operational transformation (OT) on a JSON data model,\nwritten in JavaScript for use either in node.js or browsers.\n\nWhile most collaborative editing models operate on plain text documents with\noperations like insert and delete on strings, the document model in JOT is JSON\n--- i.e. the value space of null, booleans, numbers, strings, arrays, and\nobjects (key-value pairs with string keys). JOT includes the basic insert/delete\noperations on strings but adds many other operations that make JOT useful\nfor tracking changes to any sort of data that can be encoded in JSON.\n\nBasically, this is the core of real time simultaneous editing, like Etherpad,\nbut for structured data rather than just plain text. Since everything can\nbe represented in JSON, this provides plain text collaboration functionality\nand much more.\n\nThis is a work in progress. There is no UI or collaboration framework here.\n\nWhy JOT?\n--------\n\n### Introduction\n\nThe core problem addressed by operational transformation libraries like JOT\nis merging edits made simultaneously, i.e. asynchronously, by two or more\nusers, and the handling of potential conflicts that arise when multiple\nusers edit the same part of the document.\n\nTo illustrate the problem, imagine two users open the following JSON document:\n\n\t{ \"title\": \"Hello World!\", \"count\": 10 }\n\nEach user now has a copy of this document in their local memory. 
The first user\nmodifies their copy by changing the title and incrementing the count:\n\n\t{ \"title\": \"It's a Small World!\", \"count\": 20 }\n\nAt the same time, the second user changes their copy of the document by changing\n`Hello World!` to `Hello, Small World!` and incrementing the count by 5, yielding:\n\n\t{ \"title\": \"Hello, Small World!\", \"count\": 15 }\n\n### Structured representation of changes\n\nIn order to merge these changes, there needs to be a structured representation of the\nchanges. In the flat land of plain text, you are probably used to diffs and patches\nas structured representations of changes --- e.g. at lines 5 through 10, replace with\nnew content. In JOT, it is up to the library user to form structured representations\nof changes using JOT's classes. The changes in the example above are constructed as:\n\n\tvar user1 = new jot.LIST([\n\t\tnew jot.APPLY(\"title\", new jot.SPLICE(0, 5, \"It's a Small\")),\n\t\tnew jot.APPLY(\"count\", new jot.MATH(\"add\", 10))\n\t]);\n\n\tvar user2 = new jot.LIST([\n\t\tnew jot.APPLY(\"title\", new jot.SPLICE(5, 1, \", Small \")),\n\t\tnew jot.APPLY(\"count\", new jot.MATH('add', 5))\n\t]);\n\nIn other words, user 1 makes a change to the `title` property by replacing the 5 characters\nstarting at position 0 with `It's a Small` and increments the `count` property by 10.\nUser 2 makes a change to the `title` property by replacing the 1 character at position 5,\ni.e. the first space, with `, Small ` and increments the `count` property by 5.\n\nThese changes cannot yet be combined. If they were applied in order, we would get a corrupted\ndocument because the character positions that user 2's operation referred to are shifted once\nuser 1's changes are applied. 
After applying user 1's changes, we have the document:\n\n\t{ title: \"It's a Small World!\", count: 20 }\n\nBut then if we apply user 2's changes, which say to replace the character at position 5, we\nwould get:\n\n\t{ title: \"It's , Small  Small World!\", count: 25 }\n\nThat's not what user 2 intended. **The second user's changes must be \"transformed\" to take into\naccount the changes to the document made by the first user before they can be applied.**\n\n### Transformation\n\nJOT provides an algorithm to transform the structured representation of changes so \nthat simultaneous changes can be combined sequentially.\n\nContinuing the example, we desire to transform the second user's changes so that\nthey can be applied in sequence after the first user's changes.\n\nInstead of\n\n\t... new jot.APPLY(\"title\", new jot.SPLICE(5, 1, \", Small \"))  ...\n\nwe want the second user's changes to look like\n\n\t... new jot.APPLY(\"title\", new jot.SPLICE(12, 1, \", Small \"))  ...\n\nNote how the character index has changed. These changes now _can_ be applied after the first\nuser's changes and achieve the _intent_ of user 2's change.\n\nJOT provides a `rebase` function on operation objects that can make this\ntransformation. (The transformation is named after [git's rebase](https://git-scm.com/book/en/v2/Git-Branching-Rebasing).) 
The `rebase` function transforms the operation and yields a new operation that should be applied instead, taking as an argument the operations executed concurrently by another user that have already been applied to the document:\n\n\tuser2 = user2.rebase(user1)\n\nThese changes can now be merged using `compose`:\n\n\tvar all_changes = user1.compose(user2);\n\nand then applied to the base document:\n\n\tdocument = all_changes.apply(document)\n\nafter which the base document will include both users' changes:\n\n\t{ title: \"It's a Small, Small World!\", count: 25 }\n\nIt would also have been possible to rebase `user1` and then compose the operations in the other order, for the exact same result.\n\nSee [example.js](example.js) for the complete example.\n\n### Compared to other OT libraries\n\nOperational transformation libraries often operate only over strings. JOT has\nthose operations too. For instance, start with the document:\n\n\tHello world!\n\nTwo simultaneous changes might be:\n\n\tUser 1: REPLACE CHARS 0-4 WITH \"Brave new\"\n\n\tUser 2: REPLACE CHARS 11-11 WITH \".\"\n\nTo merge these changes, the second user's changes must be rebased to:\n\n\tUser 2: REPLACE CHARS 15-15 WITH \".\"\n\nJOT's rebase algorithm can handle this case too:\n\n\t// Construct operations\n\tvar document = \"Hello world!\";\n\tvar user1 = new jot.SPLICE(0, 5, \"Brave new\");\n\tvar user2 = new jot.SPLICE(11, 1, \".\");\n\n\t// Rebase user 2\n\tuser2 = user2.rebase(user1, { document: document })\n\n\t// user2 now holds:\n\t// new jot.SPLICE(15, 1, \".\")\n\n\t// Merge\n\tuser1.compose(user2).apply(document);\n\t> 'Brave new world.'\n\nUnlike most collaborative editing models where there are only operations like insert and delete that apply to strings, the document model in JOT is JSON --- i.e. the value space of null, booleans, numbers, strings, arrays, and objects (key-value pairs with string keys). Operations are provided that manipulate all of these data types. 
This makes JOT useful when tracking changes to data, rather than simply to plain text.\n\n\nInstallation\n------------\n\nThe code is written for the node.js platform and can also be used client-side in modern browsers.\n\nFirst install node, then install this package:\n\n\tnpm install git+https://github.com/joshdata/jot.git\n\nIn a node script, import the library:\n\n\tvar jot = require(\"jot\");\n\nTo build the library for browsers, run:\n\n\tnpm install -g browserify\n\tbrowserify browser_example/browserfy_root.js -d -o dist/jot.js\n\nThen use the library in your HTML page (see [the example](browser_example/example.html) for details):\n\n\t<html>\n\t\t<body>\n\t\t\t<script src=\"jot.js\"></script>\n\t\t\t<script>\n\t\t\t\t// see the example below, but skip the 'require' line\n\t\t\t</script>\n\t\t</body>\n\t</html>\n\n\nOperations\n----------\n\nThe operations in JOT are instantiated as `new jot.OPERATION(arguments)`. The available operations are...\n\n### General operations\n\n* `SET(new_value)`: Replaces any value with any other JSON-able value. `new_value` is the new value after the operation applies.\n* `LIST([op1, op2, op3, ...])`: Executes a series of operations in order. `op1`, `op2`, `op3`, ... are other JOT operations. Equivalent to `op1.compose(op2).compose(op3)...`.\n\n### Operations on booleans and numbers\n\n* `MATH(op, value)`: Applies an arithmetic or boolean operation to a value. `op` is one of \"add\", \"mult\" (multiply), \"rot\" (increment w/ modulus), \"and\" (boolean or bitwise and), \"or\" (boolean or bitwise or), \"xor\" (boolean or bitwise exclusive-or), \"not\" (boolean or bitwise negation). For `rot`, `value` is given as an array of `[increment, modulus]`. For `not`, `value` is ignored and should be `null`. `add` and `mult` apply to any number, `rot` applies to integers only, and the boolean/bitwise operations only apply to integers and booleans. 
Because of rounding, operations on floating-point numbers or with floating-point operands could result in inconsistent state depending on the order of execution of the operations.\n\n### Operations on strings and arrays\n\nThe same operation is used for both strings and arrays:\n\n* `SPLICE(index, length, new_value)`: Replaces text in a string or array elements in an array at the given index and length in the original. To delete, `new_value` should be an empty string or zero-length array. To insert, `length` should be zero.\n* `ATINDEX(index, operation)`: Apply any operation to a particular array element at `index`. `operation` is any operation. Operations at multiple indexes can be applied simultaneously using `ATINDEX({ index1: op1, index2: op2, ... })`.\n* `MAP(operation)`: Apply any operation to all elements of an array (or all characters in a string). `operation` is any operation created by these constructors.\n\n`SPLICE` is the only operation you need for basic plain text concurrent\nediting. JOT includes the entire text editing model in the `SPLICE`\noperations plus it adds new operations for non-string data structures!\n\n(Note that internally `SPLICE` and `ATINDEX` are sub-cases of an internal PATCH operation that maintains an ordered list of edits to a string or array.)\n\n### Operations on objects\n\n* `PUT(key, value)`: Adds a new property to an object. `key` is any valid JSON key (a string) and `value` is any valid JSON object. Equivalent to `APPLY(key, SET(value))`.\n* `REM(key)`: Remove a property from an object. Equivalent to `APPLY(key, SET(~))` where `~` is a special internal value.\n* `APPLY(key, operation)`: Apply any operation to a particular property named `key`. `operation` is any operation. The operation can also take a mapping from keys to operations, as `APPLY({key: operation, ...})`.\n\n### Operations that affect document structure\n\n* `COPY([ [source1, target1], [source2, target2], ... 
])`: Copies a value from one location in the document to another. The source and target parameters are [JSON Pointer](https://tools.ietf.org/html/rfc6901) strings (but `/-` is not allowed). Use in combination with other operations to move parts of the document, e.g. a `COPY` plus a `REM` can be used to rename an object property.\n\nMethods\n-------\n\n### Instance Methods\n\nEach operation object provides the following instance methods:\n\n* `op.inspect()` returns a human-readable string representation of the operation. (A helper method so you can do `console.log(op)`.)\n* `op.isNoOp()` returns a boolean indicating whether the operation does nothing.\n* `op.apply(document)` applies the operation to the document and returns the new value of the document. Does not modify `document`.\n* `op.simplify()` attempts to simplify complex operations. Returns a new operation or the operation unchanged. Useful primarily for `LIST`s.\n* `op.drilldown(index_or_key)` looks inside an operation on a string, array, or object and returns the operation that represents the effect of this operation on a particular index or key.\n* `op.inverse(document)` returns the inverse operation, given the document value *before* the operation applied.\n* `op.compose(other)` composes two operations into a single operation instance, sometimes a `LIST` operation.\n* `op.rebase(other)` rebases an operation. Returns null if the operations conflict, otherwise a new operation instance.\n* `op.rebase(other, { document: ... })` rebases an operation in conflictless mode. The document value provided is the value of the document *before* either operation applied. Returns a new operation instance. See further documentation below.\n* `op.toJSON()` turns the operation into a JSON-able data structure (made up of objects, arrays, strings, etc). See `jot.opFromJSON()`. (A helper method so you can do `JSON.stringify(op)`.)\n* `op.serialize()` serializes the operation to a string. 
See `jot.deserialize()`.\n\n### Global Methods\n\nThe `jot` library itself offers several global methods:\n\n* `jot.diff(a, b, options)` compares two documents, `a` and `b`, and returns a JOT operation that when applied to `a` gives `b`.  `options`, if given, is an object that controls how the diff is performed. Any data type that can be a JOT document (i.e. any JSON-like data type) can be compared, and the comparison follows the document structure recursively. If any of the keys `words`, `lines`, or `sentences` is set to a truthy value, then strings are compared word-by-word, line-by-line, or sentence-by-sentence, instead of character-by-character. There is no general-purpose structured diff algorithm that works well on all documents --- this one probably works fine on relatively small structured data.\n* `jot.opFromJSON(opdata)` is the inverse of `op.toJSON()`.\n* `jot.deserialize(string)` is the inverse of `op.serialize()`.\n\nConflictless Rebase\n-------------------\n\nWhat makes JOT useful is that each operation knows how to \"rebase\" itself against\nevery other operation. This is the \"transformation\" part of operational transformation,\nand it's what you do when you have two concurrent edits that need to be merged.\n\nThe rebase operation guarantees that any two operations can be combined in any order\nand result in the same document. In other words, rebase satisfies the constraints\n`A ○ (B/A) == B ○ (A/B)` and `C / (A○B) == (C/A) / B`, where `○` is `compose`\nand `/` is rebase.\n\n### Rebase conflicts\n\nIn general, not all rebases are possible in a way that preserves the logical intent\nof each change. This is what results in a merge conflict in source code control\nsoftware like git. The conflict indicates where two operations could not be merged\nwithout losing the logical intent of the changes, and intervention by a human is\nnecessary. 
`rebase` will return `null` in these cases.\n\nFor example, two `MATH` operations with different operators will conflict because\nthe order that these operations apply is significant:\n\n\t> new jot.MATH(\"add\", 1)\n\t    .rebase( new jot.MATH(\"mult\", 2) )\n\tnull\n\n(10 + 1) * 2 = 22 but (10 * 2) + 1 == 21. A vanilla `rebase` will return `null` in this case\nto signal that human intervention is needed to choose which operation should apply\nfirst.\n\n### Using conflictless rebase\n\nHowever, JOT provides a way to guarantee that `rebase` will return *some* operation,\nso that a merge conflict cannot occur. We call this \"conflictless\" rebase. The result\nof a conflictless rebase comes *close* to preserving the logical intent of the\noperations by choosing one operation over the other *or* choosing an order that\nthe operations will apply in.\n\nTo get a conflictless rebase, pass a second options argument to `rebase` with the\n`document` option set to the content of the document prior to both operations applying:\n\n\t> new jot.MATH(\"add\", 1)\n\t    .rebase( new jot.MATH(\"mult\", 2),\n\t             { document: 10 } )\n\t<values.SET 22>\n\nThe rebase returns a valid operation now, in this case telling us that to add 1 *after the multiplication has applied*, we should simply set the\nresult to 22 instead of adding 1. In other words, the rebase has chosen the order\nwhere multiplication goes second.\n\nRebasing the other way around yields a consistent operation:\n\n\t> new jot.MATH(\"mult\", 2)\n\t    .rebase( new jot.MATH(\"add\", 1),\n\t             { document: 10 } )\n\t<values.MATH mult:2>\n\nIn other words, if we're multing by 2 *after the addition has applied*, we should\ncontinue to multiply by 2. 
That's the same order as rebase chose above.\n\nDevelopment and Testing\n-----------------------\n\nRun code coverage tests with `npm test` or:\n\n\tnpm test -- --coverage-report=html\n\nNotes\n-----\n\nThanks to @konklone for some inspiration and the first pull request.\n\nSimilar work: [ShareDB](https://github.com/share/sharedb), [ottypes/json0](https://github.com/ottypes/json0), [Apache Wave](http://incubator.apache.org/wave/) (formerly Google Wave), [Substance Operator](https://github.com/substance/operator) (defunct)."
  },
  {
    "path": "TODO.md",
    "content": "* PATCH inside PATCH must go away in simplify in case a PATCH ends up inside a PATCH from rebasing a MAP\n* Document diff.\n* Versioning of serialized objects.\n* Structured output of conflicts from failed rebases would make it easier to implement a git merge driver.\n* Developer documentation.\n* Floating point operations yield inconsistent results due to rounding. The random.js tests fail because of this.\n* Better/more tests.\n* Test coverage.\n"
  },
  {
    "path": "browser_example/browserfy_root.js",
    "content": "// This is for building jot for use in browsers. Expose\n// the library in a global 'jot' object.\nglobal.jot = require(\"../jot\")"
  },
  {
    "path": "browser_example/example.html",
    "content": "<html>\n\t<head>\n\t\t<script src=\"jot.js\"> </script>\n\t</head>\n\t<body>\n\t\t<textarea id=\"mybox\" style=\"width: 100%; height: 40%;\" placeholder=\"type here!\"></textarea>\n\t\t<textarea id=\"eventlog\" style=\"width: 100%; height: 40%;\" readonly placeholder=\"event log\"></textarea>\n\n\t\t<script>\n\t\tfunction parseText(text) {\n\t\t\tif (text === \"\")\n\t\t\t\treturn \"\";\n\t\t\treturn text.split(/\\n/) // split up lines\n\t\t\t\t.map(function(line) { // split lines into words\n\t\t\t\t\treturn line.split(/(\\W+)/)\n\t\t\t\t});\n\t\t}\n\n\t\tvar last_text = \"\";\n\t\tfunction watch_for_changes() {\n\t\t\t// Get the current text in the box.\n\t\t\tvar textarea = document.getElementById(\"mybox\");\n\t\t\tvar text = textarea.value;\n\n\t\t\t// While we can do a diff on the raw text, it's more interesting\n\t\t\t// to parse it into a data structure and do a structured diff.\n\t\t\ttext = parseText(text);\n\n\t\t\t// Compare it with the last text.\n\t\t\tvar op = jot.diff(last_text, text);\n\t\t\tif (!op.isNoOp()) {\n\t\t\t\t// If there was a change, show a structured representation\n\t\t\t\t// of the change in the eventlog textarea.\n\t\t\t\tvar eventlog = document.getElementById(\"eventlog\");\n\t\t\t\teventlog.value = op.inspect() + \"\\n\" + eventlog.value\n\t\t\t}\n\n\t\t\t// Update state, schedule next check.\n\t\t\tlast_text = text;\n\t\t\tsetTimeout(watch_for_changes, 1000);\n\t\t}\n\t\tsetTimeout(watch_for_changes, 0);\n\t\t</script>\n\t</body>\n</html>"
  },
  {
    "path": "example.js",
    "content": "/* load libraries (test if 'jot' is defined already, so we can use this in the browser where 'require' is not available) */\nvar jot = jot || require(\"./jot\");\n\n/* The Base Document */\n\nvar doc = {\n\ttitle: \"Hello World!\",\n\tcount: 10,\n};\n\nconsole.log(\"Original Document\")\nconsole.log(doc); // { title: 'Hello World!', count: 10 }\nconsole.log(\"\")\n\n/* User 1 revises the title and increments the document's count.\n *\n * { title: 'It\\'s a Small World!', count: 20 }\n *\n */\n\nvar user1 = new jot.LIST([\n\tnew jot.APPLY(\"title\", new jot.SPLICE(0, 5, \"It's a Small\")),\n\tnew jot.APPLY(\"count\", new jot.MATH(\"add\", 10))\n]);\n\nconsole.log(\"User 1\")\nconsole.log(user1.apply(doc)); // { title: \"It's a Small World!\", count: 20 }\nconsole.log(\"\")\n\n/* User 2 makes changes to the original document's values so\n * that the document becomes:\n *\n * { title: 'Hello, Small World!', count: 15 }\n *\n */\n\nvar user2 = new jot.LIST([\n\tnew jot.APPLY(\"title\", new jot.SPLICE(5, 1, \", Small \")),\n\tnew jot.APPLY(\"count\", new jot.MATH('add', 5))\n]);\n\nconsole.log(\"User 2\")\nconsole.log(user2.apply(doc)); // { title: 'Hello, Small World!', count: 15 }\nconsole.log(\"\")\n\n/* Don't do this! */\n\nconsole.log(\"The Wrong Way\")\nconsole.log(user1.compose(user2).apply(doc));\nconsole.log(\"\")\n\n/* You must rebase user2's operations before composing them. */\n\nuser2 = user2.rebase(user1);\nif (user2 == null) throw new Error(\"hmm\");\nconsole.log(\"Rebased User 2\")\nconsole.log(user2);\nconsole.log(\"\")\n\nconsole.log(\"Merged\")\nconsole.log(user1.compose(user2).apply(doc)); // { title: \"It's a Small, Small World!\", count: 25 }\nconsole.log(\"\")\n\n"
  },
  {
    "path": "index.js",
    "content": "module.exports = require(\"./jot\")\n"
  },
  {
    "path": "jot/copies.js",
    "content": "/*  This module defines one operation:\n\t\n\tCOPY([[source, target], ...])\n\t\n\tClones values from source to target for each source-target pair.\n\tSource and target are strings that are paths in the JSON document\n\tto a value following the JSONPointer specification (RFC 6901).\n\tThe paths must exist --- a final dash in a path to refer to the\n\tnonexistent element after the end of an array is not valid.\n\t*/\n\t\nvar util = require(\"util\");\n\nvar jot = require(\"./index.js\");\n\nvar JSONPointer = require('jsonpatch').JSONPointer;\n\nexports.module_name = 'copies'; // for serialization/deserialization\n\nexports.COPY = function (pathpairs) {\n\tif (!Array.isArray(pathpairs)) throw new Error(\"argument must be a list\");\n\tthis.pathpairs = pathpairs.map(function(pathpair) {\n\t\tif (!Array.isArray(pathpair) || pathpair.length != 2)\n\t\t\tthrow new Error(\"each element in pathpairs must be an array of two string elements\")\n\t\tif (pathpair[0] instanceof JSONPointer && pathpair[1] instanceof JSONPointer) {\n\t\t\t// for internal calls only\n\t\t\treturn pathpair;\n\t\t} else {\n\t\t\tif (typeof pathpair[0] != \"string\" || typeof pathpair[1] != \"string\")\n\t\t\t\tthrow new Error(\"each element in pathpairs must be an array of two strings\")\n\t\t\tif (pathpair[0] == pathpair[1])\n\t\t\t\tthrow new Error(\"can't copy a path to itself\")\n\t\t\treturn [\n\t\t\t\tnew JSONPointer(pathpair[0]),\n\t\t\t\tnew JSONPointer(pathpair[1])\n\t\t\t]\n\t\t}\n\t});\n\tObject.freeze(this);\n}\nexports.COPY.prototype = Object.create(jot.Operation.prototype); // inherit\njot.add_op(exports.COPY, exports, 'COPY');\n\nfunction serialize_pointer(jp) {\n    return (jp.path.map(function(part) { return \"/\" + part.replace(/~/g,'~0').replace(/\\//g,'~1') })\n    \t.join(\"\"));\n}\n\nexports.COPY.prototype.inspect = function(depth) {\n\treturn util.format(\"<COPY %s>\", this.pathpairs.map(function(pathpair) {\n\t\treturn serialize_pointer(pathpair[0]) + \" 
=> \" + serialize_pointer(pathpair[1]);\n\t}).join(\", \"));\n}\n\nexports.COPY.prototype.visit = function(visitor) {\n\t// A simple visitor paradigm. Replace this operation instance itself\n\t// and any operation within it with the value returned by calling\n\t// visitor on itself, or if the visitor returns anything falsey\n\t// (probably undefined) then return the operation unchanged.\n\treturn visitor(this) || this;\n}\n\nexports.COPY.prototype.internalToJSON = function(json, protocol_version) {\n\tjson.pathpairs = this.pathpairs.map(function(pathpair) {\n\t\treturn [serialize_pointer(pathpair[0]), serialize_pointer(pathpair[1])];\n\t});\n}\n\nexports.COPY.internalFromJSON = function(json, protocol_version, op_map) {\n\treturn new exports.COPY(json.pathpairs);\n}\n\nexports.COPY.prototype.apply = function (document) {\n\t/* Applies the operation to a document.*/\n\tthis.pathpairs.forEach(function(pathpair) {\n\t\tvar val = pathpair[0].get(document);\n\t\tdocument = pathpair[1].replace(document, val);\n\t});\n\treturn document;\n}\n\nexports.COPY.prototype.simplify = function (aggressive) {\n\t// Simplifies the operation. 
Later targets in pathpairs overwrite\n\t// earlier ones at the same location or a descendant of the\n\t// location.\n\t// TODO.\n\treturn this;\n}\n\nfunction parse_path(jp, document) {\n\tvar path = [];\n\tfor (var i = 0; i < jp.path.length; i++) {\n\t\tvar p = jp.path[i];\n\t\tif (Array.isArray(document))\n\t\t\tp = parseInt(p)\n\t\tpath.push(p);\n\t\tdocument = document[p];\n\t}\n\treturn path;\n}\n\nfunction drilldown_op(jp, document, op) {\n\tvar path = parse_path(jp, document);\n\tfor (var i = 0; i < path.length; i++)\n\t\top = op.drilldown(path[i]);\n\treturn op;\n}\n\nfunction wrap_op_in_path(jp, document, op) {\n\tvar path = parse_path(jp, document);\n\tvar i = path.length-1;\n\twhile (i >= 0) {\n\t\tif (typeof path[i] == \"string\")\n\t\t\top = new jot.APPLY(path[i], op)\n\t\telse\n\t\t\top = new jot.ATINDEX(path[i], op)\n\t\ti--;\n\t}\n\treturn op;\n}\n\nexports.COPY.prototype.inverse = function (document) {\n\t// Create a SET operation for every target.\n\treturn new jot.LIST(this.pathpairs.map(function(pathpair) {\n\t\treturn wrap_op_in_path(pathpair[1], document, new jot.SET(pathpair[1].get(document)));\n\t}))\n}\n\nexports.COPY.prototype.atomic_compose = function (other) {\n\t// Return a single COPY that combines the effect of this\n\t// and other. 
Concatenate the pathpairs lists, then\n\t// run simplify().\n\tif (other instanceof exports.COPY)\n\t\treturn new exports.COPY(this.pathpairs.concat(other.pathpairs)).simplify();\n}\n\nexports.rebase = function(base, ops, conflictless, debug) {\n\t\n}\n\nexports.COPY.prototype.clone_operation = function(op, document) {\n\t// Return a list of operations that includes op and,\n\t// for each way that op affects a copied source path,\n\t// an identical operation at the corresponding target path.\n\tvar ret = [op];\n\tthis.pathpairs.forEach(function(pathpair) {\n\t\tvar src_op = drilldown_op(pathpair[0], document, op);\n\t\tif (src_op.isNoOp()) return;\n\t\tret.push(wrap_op_in_path(pathpair[1], document, src_op));\n\t});\n\treturn new jot.LIST(ret).simplify();\n}\n\nexports.COPY.prototype.drilldown = function(index_or_key) {\n\t// This method is supposed to return an operation that\n\t// has the same effect as this but is relative to index_or_key.\n\t// Can we do that? If a target is at or in index_or_key,\n\t// then we affect that location. If source is also at or\n\t// in index_or_key, we can drill-down both. 
But if source\n\t// is somewhere else in the document, we can't really do\n\t// this.\n\tthrow \"hmm\";\n}\n\nfunction make_random_path(doc) {\n\tvar path = [];\n\tif (typeof doc === \"string\" || Array.isArray(doc)) {\n\t\tif (doc.length == 0) return path;\n\t\tvar idx = Math.floor(Math.random() * doc.length);\n\t\tpath.push(\"\"+idx);\n\t\ttry {\n\t\t\tif (Math.random() < .5 && typeof doc !== \"string\")\n\t\t\t\tpath = path.concat(make_random_path(doc[idx]));\n\t\t} catch (e) {\n\t\t\t// ignore - can't create path on inner value\n\t\t}\n\t} else if (typeof doc === \"object\" && doc !== null) {\n\t\tvar keys = Object.keys(doc);\n\t\tif (keys.length == 0) return path;\n\t\tvar key = keys[Math.floor(Math.random() * keys.length)];\n\t\tpath.push(key);\n\t\ttry {\n\t\t\tif (Math.random() < .5)\n\t\t\t\tpath = path.concat(make_random_path(doc[key]));\n\t\t} catch (e) {\n\t\t\t// ignore - can't create path on inner value\n\t\t}\n\t} else {\n\t\tthrow new Error(\"COPY cannot apply to this document type: \" + doc);\n\t}\n\treturn path;\n}\n\nexports.createRandomOp = function(doc, context) {\n\t// Create a random COPY that could apply to doc. Choose\n\t// a random path for a source and a target.\n\tvar pathpairs = [];\n\twhile (1) {\n\t\tvar pp = [ serialize_pointer({ path: make_random_path(doc) }),\n\t\t           serialize_pointer({ path: make_random_path(doc) }) ];\n\t\tif (pp[0] != pp[1])\n\t\t\tpathpairs.push(pp);\n\t\tif (Math.random() < .5)\n\t\t\tbreak;\n\t}\n\treturn new exports.COPY(pathpairs);\n}\n"
  },
  {
    "path": "jot/diff.js",
    "content": "// Construct JOT operations by performing a diff on\n// standard data types.\n\nvar deepEqual = require(\"deep-equal\");\n\nvar jot = require(\"./index.js\");\n\nfunction diff(a, b, options) {\n\t// Compares two JSON-able data instances and returns\n\t// information about the difference:\n\t//\n\t// {\n\t//   op:   a JOT operation representing the change from a to b\n\t//   pct:  a number from 0 to 1 representing the proportion\n\t//         of content that is different\n\t//   size: an integer representing the approximate size of the\n\t//         content in characters, which is used for weighting\n\t// }\n\n\n\t// Run the diff method appropriate for the pair of data types.\n\t// Do a type-check for valid types early, before deepEqual is called.\n\t// We can't call JSON.stringify below if we get a non-JSONable\n\t// data type.\n\n\tfunction typename(val) {\n\t\tif (typeof val == \"undefined\")\n\t\t\tthrow new Error(\"Illegal argument: undefined passed to diff\");\n\t\tif (val === null)\n\t\t\treturn \"null\";\n\t\tif (typeof val == \"string\" || typeof val == \"number\" || typeof val == \"boolean\")\n\t\t\treturn typeof val;\n\t\tif (Array.isArray(val))\n\t\t\treturn \"array\";\n\t\tif (typeof val != \"object\")\n\t\t\tthrow new Error(\"Illegal argument: \" + typeof val + \" passed to diff\");\n\t\treturn \"object\";\n\t}\n\n\tvar ta = typename(a);\n\tvar tb = typename(b);\n\n\t// Return fast if the objects are equal. 
This is muuuuuch\n\t// faster than doing our stuff recursively.\n\n\tif (deepEqual(a, b, { strict: true })) {\n\t\treturn {\n\t\t\top: new jot.NO_OP(),\n\t\t\tpct: 0.0,\n\t\t\tsize: JSON.stringify(a).length\n\t\t};\n\t}\n\t\n\tif (ta == \"string\" && tb == \"string\")\n\t\treturn diff_strings(a, b, options);\n\n\tif (ta == \"array\" && tb == \"array\")\n\t\treturn diff_arrays(a, b, options);\n\t\n\tif (ta == \"object\" && tb == \"object\")\n\t\treturn diff_objects(a, b, options);\n\n\t// If the data types of the two values are different,\n\t// or if we don't recognize the data type (which is\n\t// not good), then only an atomic SET operation is possible.\n\treturn {\n\t\top: new jot.SET(b),\n\t\tpct: 1.0,\n\t\tsize: (JSON.stringify(a)+JSON.stringify(b)).length / 2\n\t}\n}\n\nexports.diff = function(a, b, options) {\n\t// Ensure options are defined.\n\toptions = options || { };\n\n\t// Call diff() and just return the operation.\n\treturn diff(a, b, options).op;\n}\n\nfunction diff_strings(a, b, options) {\n\t// Use the 'diff' package to compare two strings and convert\n\t// the output to a jot.LIST.\n\tvar diff = require(\"diff\");\n\t\n\tvar method = \"Chars\";\n\tif (options.words)\n\t\tmethod = \"Words\";\n\tif (options.lines)\n\t\tmethod = \"Lines\";\n\tif (options.sentences)\n\t\tmethod = \"Sentences\";\n\t\n\tvar total_content = 0;\n\tvar changed_content = 0;\n\n\tvar offset = 0;\n\tvar hunks = diff[\"diff\" + method](a, b)\n\t\t.map(function(change) {\n\t\t\t// Increment counter of total characters encountered.\n\t\t\ttotal_content += change.value.length;\n\t\t\t\n\t\t\tif (change.added || change.removed) {\n\t\t\t\t// Increment counter of changed characters.\n\t\t\t\tchanged_content += change.value.length;\n\n\t\t\t\t// Create a hunk for this change.\n\t\t\t\tvar length = 0, new_value = \"\";\n\t\t\t\tif (change.removed) length = change.value.length;\n\t\t\t\tif (change.added) new_value = change.value;\n\t\t\t\tvar ret = { offset: offset, length: length, op: 
new jot.SET(new_value) };\n\t\t\t\toffset = 0;\n\t\t\t\treturn ret;\n\t\t\t} else {\n\t\t\t\t// Advance character position index. Don't generate a hunk here.\n\t\t\t\toffset += change.value.length;\n\t\t\t\treturn null;\n\t\t\t}\n\t\t})\n\t\t.filter(function(item) { return item != null; });\n\n\t// Form the PATCH operation.\n\tvar op = new jot.PATCH(hunks).simplify();\n\treturn {\n\t\top: op,\n\t\tpct: (changed_content+1)/(total_content+1), // avoid division by zero\n\t\tsize: total_content\n\t};\n}\n\nfunction diff_arrays(a, b, options) {\n\t// Use the 'generic-diff' package to compare two arrays,\n\t// but using a custom equality function. This gives us\n\t// a relation between the elements in the arrays. Then\n\t// we can compute the operations for the diffs for the\n\t// elements that are lined up (and INS/DEL operations\n\t// for elements that are added/removed).\n\t\n\tvar generic_diff = require(\"generic-diff\");\n\n\t// We'll run generic_diff over an array of indices\n\t// into a and b, rather than on the elements themselves.\n\tvar ai = a.map(function(item, i) { return i });\n\tvar bi = b.map(function(item, i) { return i });\n\n\tvar ops = [ ];\n\tvar total_content = 0;\n\tvar changed_content = 0;\n\tvar pos = 0;\n\n\tfunction do_diff(ai, bi, level) {\n\t\t// Run generic-diff using a custom equality function that\n\t\t// treats two things as equal if their difference percent\n\t\t// is less than or equal to level.\n\t\t//\n\t\t// We get back a sequence of add/remove/equal operations.\n\t\t// Merge these into changed/same hunks.\n\n\t\tvar hunks = [];\n\t\tvar a_index = 0;\n\t\tvar b_index = 0;\n\t\tgeneric_diff(\n\t\t\tai, bi,\n\t\t\tfunction(ai, bi) { return diff(a[ai], b[bi], options).pct <= level; }\n\t\t\t).forEach(function(change) {\n\t\t\t\tif (!change.removed && !change.added) {\n\t\t\t\t\t// Same.\n\t\t\t\t\tif (a_index+change.items.length > ai.length) throw new Error(\"out of range\");\n\t\t\t\t\tif (b_index+change.items.length > bi.length) throw 
new Error(\"out of range\");\n\t\t\t\t\thunks.push({ type: 'equal', ai: ai.slice(a_index, a_index+change.items.length), bi: bi.slice(b_index, b_index+change.items.length) })\n\t\t\t\t\ta_index += change.items.length;\n\t\t\t\t\tb_index += change.items.length;\n\t\t\t\t} else {\n\t\t\t\t\tif (hunks.length == 0 || hunks[hunks.length-1].type == 'equal')\n\t\t\t\t\t\thunks.push({ type: 'unequal', ai: [], bi: [] })\n\t\t\t\t\tif (change.added) {\n\t\t\t\t\t\t// Added.\n\t\t\t\t\t\thunks[hunks.length-1].bi = hunks[hunks.length-1].bi.concat(change.items);\n\t\t\t\t\t\tb_index += change.items.length;\n\t\t\t\t\t} else if (change.removed) {\n\t\t\t\t\t\t// Removed.\n\t\t\t\t\t\thunks[hunks.length-1].ai = hunks[hunks.length-1].ai.concat(change.items);\n\t\t\t\t\t\ta_index += change.items.length;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t});\n\n\t\t// Process each hunk.\n\t\thunks.forEach(function(hunk) {\n\t\t\t//console.log(level, hunk.type, hunk.ai.map(function(i) { return a[i]; }), hunk.bi.map(function(i) { return b[i]; }));\n\n\t\t\tif (level < 1 && hunk.ai.length > 0 && hunk.bi.length > 0\n\t\t\t\t&& (level > 0 || hunk.type == \"unequal\")) {\n\t\t\t\t// Recurse at a less strict comparison level to\n\t\t\t\t// tease out more correspondences. 
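(Each recursion moves the level halfway toward 1.1: starting\n\t\t\t\t// from 0, the levels are (0+1.1)/2 = 0.55, then 0.825 and 0.9625,\n\t\t\t\t// so the level < 1 test above guarantees termination.)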
We do this both\n\t\t\t\t// for 'equal' and 'unequal' hunks because even for\n\t\t\t\t// equal the pairs may not really correspond when\n\t\t\t\t// level > 0.\n\t\t\t\tdo_diff(\n\t\t\t\t\thunk.ai,\n\t\t\t\t\thunk.bi,\n\t\t\t\t\t(level+1.1)/2);\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\tif (hunk.ai.length != hunk.bi.length) {\n\t\t\t\t// The items aren't in correspondence, so we'll just return\n\t\t\t\t// a whole SPLICE from the left subsequence to the right\n\t\t\t\t// subsequence.\n\t\t\t\tvar op = new jot.SPLICE(\n\t\t\t\t\tpos,\n\t\t\t\t\thunk.ai.length,\n\t\t\t\t\thunk.bi.map(function(i) { return b[i]; }));\n\t\t\t\tops.push(op);\n\t\t\t\t//console.log(op);\n\n\t\t\t\t// Increment counters.\n\t\t\t\tvar dd = (JSON.stringify(hunk.ai.map(function(i) { return a[i]; }))\n\t\t\t\t         + JSON.stringify(hunk.bi.map(function(i) { return b[i]; })));\n\t\t\t\tdd = dd.length/2;\n\t\t\t\ttotal_content += dd;\n\t\t\t\tchanged_content += dd;\n\n\t\t\t} else {\n\t\t\t\t// The items in the arrays are in correspondence.\n\t\t\t\t// They may not be identical, however, if level > 0.\n\t\t\t\tfor (var i = 0; i < hunk.ai.length; i++) {\n\t\t\t\t\tvar d = diff(a[hunk.ai[i]], b[hunk.bi[i]], options);\n\n\t\t\t\t\t// Add an operation.\n\t\t\t\t\tif (!d.op.isNoOp())\n\t\t\t\t\t\tops.push(new jot.ATINDEX(hunk.bi[i], d.op));\n\n\t\t\t\t\t// Increment counters.\n\t\t\t\t\ttotal_content += d.size;\n\t\t\t\t\tchanged_content += d.size*d.pct;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tpos += hunk.bi.length;\n\t\t});\n\t}\n\n\t// Go.\n\n\tdo_diff(ai, bi, 0);\n\n\treturn {\n\t\top: new jot.LIST(ops).simplify(),\n\t\tpct: (changed_content+1)/(total_content+1), // avoid division by zero\n\t\tsize: total_content\n\t};\n}\n\nfunction diff_objects(a, b, options) {\n\t// Compare two objects.\n\n\tvar ops = [ ];\n\tvar total_content = 0;\n\tvar changed_content = 0;\n\t\n\t// If a key exists in both objects, then assume the key\n\t// has not been renamed.\n\tfor (var key in a) {\n\t\tif (key in b) {\n\t\t\t// 
Compute diff.\n\t\t\tvar d = diff(a[key], b[key], options);\n\n\t\t\t// Add operation if there were any changes.\n\t\t\tif (!d.op.isNoOp())\n\t\t\t\tops.push(new jot.APPLY(key, d.op));\n\n\t\t\t// Increment counters.\n\t\t\ttotal_content += d.size;\n\t\t\tchanged_content += d.size*d.pct;\n\t\t}\n\t}\n\n\t// Do comparisons between all pairs of unmatched\n\t// keys to see what best lines up with what. Don't\n\t// store pairs with nothing in common.\n\tvar pairs = [ ];\n\t/*\n\tfor (var key1 in a) {\n\t\tif (key1 in b) continue;\n\t\tfor (var key2 in b) {\n\t\t\tif (key2 in a) continue;\n\t\t\tvar d = diff(a[key1], b[key2], options);\n\t\t\tif (d.pct == 1) continue;\n\t\t\tpairs.push({\n\t\t\t\ta_key: key1,\n\t\t\t\tb_key: key2,\n\t\t\t\tdiff: d\n\t\t\t});\n\t\t}\n\t}\n\t*/\n\n\t// Sort the pairs to choose the best matches first.\n\t// (This is a greedy approach. May not be optimal.)\n\tvar used_a = { };\n\tvar used_b = { };\n\tpairs.sort(function(a,b) { return ((a.diff.pct*a.diff.size) - (b.diff.pct*b.diff.size)); })\n\tpairs.forEach(function(item) {\n\t\t// Have we already generated an operation renaming\n\t\t// the key in a or renaming something to the key in b?\n\t\t// If so, this pair can't be used.\n\t\tif (item.a_key in used_a) return;\n\t\tif (item.b_key in used_b) return;\n\t\tused_a[item.a_key] = 1;\n\t\tused_b[item.b_key] = 1;\n\n\t\t// Use this pair.\n\t\tops.push(new jot.REN(item.a_key, item.b_key));\n\t\tif (!item.diff.op.isNoOp())\n\t\t\tops.push(new jot.APPLY(item.b_key, item.diff.op));\n\n\t\t// Increment counters.\n\t\ttotal_content += item.diff.size;\n\t\tchanged_content += item.diff.size*item.diff.pct;\n\t})\n\n\t// Delete/create any keys that didn't match up.\n\tfor (var key in a) {\n\t\tif (key in b || key in used_a) continue;\n\t\tops.push(new jot.REM(key));\n\t}\n\tfor (var key in b) {\n\t\tif (key in a || key in used_b) continue;\n\t\tops.push(new jot.PUT(key, b[key]));\n\t}\n\n\treturn {\n\t\top: new jot.LIST(ops).simplify(),\n\t\tpct: 
(changed_content+1)/(total_content+1), // avoid division by zero\n\t\tsize: total_content\n\t};\n}\n\n"
  },
  {
    "path": "jot/index.js",
    "content": "/* Base functions for the operational transformation library. */\n\nvar util = require('util');\nvar shallow_clone = require('shallow-clone');\n\n// Must define this ahead of any imports below so that this constructor\n// is available to the operation classes.\nexports.Operation = function() {\n}\nexports.add_op = function(constructor, module, opname) {\n\t// utility.\n\tconstructor.prototype.type = [module.module_name, opname];\n\tif (!('op_map' in module))\n\t\tmodule['op_map'] = { };\n\tmodule['op_map'][opname] = constructor;\n}\n\n\n// Expose the operation classes through the jot library.\nvar values = require(\"./values.js\");\nvar sequences = require(\"./sequences.js\");\nvar objects = require(\"./objects.js\");\nvar lists = require(\"./lists.js\");\nvar copies = require(\"./copies.js\");\n\nexports.NO_OP = values.NO_OP;\nexports.SET = values.SET;\nexports.MATH = values.MATH;\nexports.PATCH = sequences.PATCH;\nexports.SPLICE = sequences.SPLICE;\nexports.ATINDEX = sequences.ATINDEX;\nexports.MAP = sequences.MAP;\nexports.PUT = objects.PUT;\nexports.REM = objects.REM;\nexports.APPLY = objects.APPLY;\nexports.LIST = lists.LIST;\nexports.COPY = copies.COPY;\n\n// Expose the diff function too.\nexports.diff = require('./diff.js').diff;\n\n/////////////////////////////////////////////////////////////////////\n\nexports.Operation.prototype.isNoOp = function() {\n\treturn this instanceof values.NO_OP;\n}\n\nexports.Operation.prototype.visit = function(visitor) {\n\t// A simple visitor paradigm. Replace this operation instance itself\n\t// and any operation within it with the value returned by calling\n\t// visitor on itself, or if the visitor returns anything falsey\n\t// (probably undefined) then return the operation unchanged.\n\treturn visitor(this) || this;\n}\n\nexports.Operation.prototype.toJSON = function(__key__, protocol_version) {\n\t// The first argument __key__ is used when this function is called by\n\t// JSON.stringify. 
JSON.stringify passes in the name of the property\n\t// that this object is stored under in its parent. We don't use it, but\n\t// we leave a slot so that this function can be called correctly by\n\t// JSON.stringify.\n\n\t// The return value.\n\tvar repr = { };\n\n\t// If protocol_version is unspecified, then this is a top-level call.\n\t// Choose the latest (and only) protocol version and write it into\n\t// the output data structure, and pass it down recursively.\n\t//\n\t// If protocol_version was specified, this is a recursive call and\n\t// we don't need to write it out. Sanity check it's a valid value.\n\tif (typeof protocol_version == \"undefined\") {\n\t\tprotocol_version = 1;\n\t\trepr[\"_ver\"] = protocol_version;\n\t} else {\n\t\tif (protocol_version !== 1) throw new Error(\"Invalid protocol version: \" + protocol_version);\n\t}\n\n\t// Set the module and operation name.\n\trepr['_type'] = this.type[0] + \".\" + this.type[1];\n\n\t// Call the operation's toJSON function.\n\tthis.internalToJSON(repr, protocol_version);\n\n\t// Return.\n\treturn repr;\n}\n\nexports.opFromJSON = function(obj, protocol_version, op_map) {\n\t// Sanity check.\n\tif (typeof obj !== \"object\") throw new Error(\"Not an operation.\");\n\n\t// If protocol_version is unspecified, then this is a top-level call.\n\t// The version must be encoded in the object, and we pass it down\n\t// recursively.\n\t//\n\t// If protocol_version is specified, this is a recursive call and\n\t// we don't need to read it from the object.\n\tif (typeof protocol_version === \"undefined\") {\n\t\tprotocol_version = obj['_ver'];\n\t\tif (protocol_version !== 1)\n\t\t\tthrow new Error(\"JOT serialized data structure is missing a valid protocol version and one wasn't provided as an argument.\");\n\t} else {\n\t\tif (protocol_version !== 1)\n\t\t\tthrow new Error(\"Invalid protocol version provided: \" + protocol_version)\n\t\tif (\"_ver\" in obj)\n\t\t\tthrow new Error(\"JOT serialized data 
structure should not have protocol version because it was provided as an argument.\");\n\t}\n\n\t// Create a default mapping from encoded types to constructors\n\t// allowing all operations to be deserialized.\n\tif (!op_map) {\n\t\top_map = { };\n\n\t\tfunction extend_op_map(module) {\n\t\t\top_map[module.module_name] = { };\n\t\t\tfor (var key in module.op_map)\n\t\t\t\top_map[module.module_name][key] = module.op_map[key];\n\t\t}\n\n\t\textend_op_map(values);\n\t\textend_op_map(sequences);\n\t\textend_op_map(objects);\n\t\textend_op_map(lists);\n\t\textend_op_map(copies);\n\t}\n\n\t// Get the operation class.\n\tif (typeof obj['_type'] !== \"string\") throw new Error(\"Not an operation.\");\n\tvar dottedclassparts = obj._type.split(/\\./g, 2);\n\tif (dottedclassparts.length != 2) throw new Error(\"Not an operation.\");\n\tvar clazz = op_map[dottedclassparts[0]][dottedclassparts[1]];\n\n\t// Call the deserializer function on the class.\n\treturn clazz.internalFromJSON(obj, protocol_version, op_map);\n}\n\nexports.Operation.prototype.serialize = function() {\n\t// JSON.stringify will use the object's toJSON method\n\t// implicitly.\n\treturn JSON.stringify(this);\n}\nexports.deserialize = function(op_json) {\n\treturn exports.opFromJSON(JSON.parse(op_json));\n}\n\nexports.Operation.prototype.compose = function(other, no_list) {\n\tif (!(other instanceof exports.Operation))\n\t\tthrow new Error(\"Argument must be an operation.\");\n\n\t// A NO_OP composed with anything just gives the other thing.\n\tif (this instanceof values.NO_OP)\n\t\treturn other;\n\n\t// Composing with a NO_OP does nothing.\n\tif (other instanceof values.NO_OP)\n\t\treturn this;\n\n\t// Composing with a SET obliterates this operation.\n\tif (other instanceof values.SET)\n\t\treturn other;\n\n\t// Attempt an atomic composition if this defines the method.\n\tif (this.atomic_compose) {\n\t\tvar op = this.atomic_compose(other);\n\t\tif (op != null)\n\t\t\treturn op;\n\t}\n\n\tif 
(no_list)\n\t\treturn null;\n\n\t// Fall back to creating a LIST. Call simplify() to weed out\n\t// anything equivalent to a NO_OP.\n\treturn new lists.LIST([this, other]).simplify();\n}\n\nexports.Operation.prototype.rebase = function(other, conflictless, debug) {\n\t/* Transforms this operation so that it can be composed *after* the other\n\t   operation to yield the same logical effect as if it had been executed\n\t   in parallel (rather than in sequence). Returns null on conflict.\n\t   If conflictless is true, tries extra hard to resolve a conflict in a\n\t   sensible way but possibly by killing one operation or the other.\n\t   Returns the rebased version of this. */\n\n\t// Run the rebase function registered on this operation's prototype for\n\t// other's operation type. If this doesn't define one, we check other's\n\t// prototype below. If neither defines one, there is a conflict.\n\tfor (var i = 0; i < ((this.rebase_functions!=null) ? this.rebase_functions.length : 0); i++) {\n\t\tif (other instanceof this.rebase_functions[i][0]) {\n\t\t\tvar r = this.rebase_functions[i][1].call(this, other, conflictless);\n\t\t\tif (r != null && r[0] != null) {\n\t\t\t\tif (debug) debug(\"rebase\", this, \"on\", other, (conflictless ? \"conflictless\" : \"\"), ((conflictless && \"document\" in conflictless) ? JSON.stringify(conflictless.document) : \"\"), \"=>\", r[0]);\n\t\t\t\treturn r[0];\n\t\t\t}\n\t\t}\n\t}\n\n\t// Either this didn't define a rebase function for other's data type, or\n\t// else it returned null above. We can try running the same logic\n\t// backwards on other.\n\tfor (var i = 0; i < ((other.rebase_functions!=null) ? other.rebase_functions.length : 0); i++) {\n\t\tif (this instanceof other.rebase_functions[i][0]) {\n\t\t\tvar r = other.rebase_functions[i][1].call(other, this, conflictless);\n\t\t\tif (r != null && r[1] != null) {\n\t\t\t\tif (debug) debug(\"rebase\", this, \"on\", other, (conflictless ? \"conflictless\" : \"\"), ((conflictless && \"document\" in conflictless) ? 
JSON.stringify(conflictless.document) : \"\"), \"=>\", r[1]);\n\t\t\t\treturn r[1];\n\t\t\t}\n\t\t}\n\t}\n\n\t// Everything can rebase against a LIST and vice versa.\n\t// This has higher precedence than the this instanceof SET fallback.\n\tif (this instanceof lists.LIST || other instanceof lists.LIST) {\n\t\tvar ret = lists.rebase(other, this, conflictless, debug);\n\t\tif (debug) debug(\"rebase\", this, \"on\", other, \"=>\", ret);\n\t\treturn ret;\n\t}\n\n\tif (conflictless) {\n\t\t// Everything can rebase against a COPY in conflictless mode when\n\t\t// a previous document content is given --- the document is needed\n\t\t// to parse a JSONPointer and know whether the path components are\n\t\t// for objects or arrays. If this operation affects a path that\n\t\t// is copied, the operation is cloned to the target path.\n\t\t// This has higher precedence than the this instanceof SET fallback.\n\t\tif (other instanceof copies.COPY && typeof conflictless.document != \"undefined\")\n\t\t\treturn other.clone_operation(this, conflictless.document);\n\n\t\t// Everything can rebase against a SET in a conflictless way.\n\t\t// Note that to resolve ties, SET rebased against SET is handled\n\t\t// in SET's rebase_functions.\n\n\t\t// The SET always wins!\n\t\tif (this instanceof values.SET) {\n\t\t\tif (debug) debug(\"rebase\", this, \"on\", other, \"=>\", this);\n\t\t\treturn this;\n\t\t}\n\t\tif (other instanceof values.SET) {\n\t\t\tif (debug) debug(\"rebase\", this, \"on\", other, \"=>\", new values.NO_OP());\n\t\t\treturn new values.NO_OP();\n\t\t}\n\n\t\t// If conflictless rebase would fail, raise an error.\n\t\tthrow new Error(\"Rebase failed between \" + this.inspect() + \" and \" + other.inspect() + \".\");\n\t}\n\n\treturn null;\n}\n\nexports.createRandomValue = function(depth) {\n\tvar values = [];\n\n\t// null\n\tvalues.push(null);\n\n\t// boolean\n\tvalues.push(false);\n\tvalues.push(true);\n\n\t// number (integer, float)\n\tvalues.push(1000 * 
Math.floor(Math.random() - .5));\n\tvalues.push(Math.random() - .5);\n\tvalues.push(1000 * (Math.random() - .5));\n\n\t// string\n\tvalues.push(Math.random().toString(36).substring(7));\n\n\t// array (make nesting exponentially less likely at each level of recursion)\n\tif (Math.random() < Math.exp(-(depth||0))) {\n\t\tvar n = Math.floor(Math.exp(3*Math.random()))-1;\n\t\tvar array = [];\n\t\twhile (array.length < n)\n\t\t\tarray.push(exports.createRandomValue((depth||0)+1));\n\t\tvalues.push(array);\n\t}\n\n\t// object (make nesting exponentially less likely at each level of recursion)\n\tif (Math.random() < Math.exp(-(depth||0))) {\n\t\tvar n = Math.floor(Math.exp(2.5*Math.random()))-1;\n\t\tvar obj = { };\n\t\twhile (Object.keys(obj).length < n)\n\t\t\tobj[Math.random().toString(36).substring(7)] = exports.createRandomValue((depth||0)+1);\n\t\tvalues.push(obj);\n\t}\n\n\treturn values[Math.floor(Math.random() * values.length)];\n}\n\nexports.createRandomOp = function(doc, context) {\n\t// Creates a random operation that could apply to doc. 
Just\n\t// chain off to the modules that can handle the data type.\n\n\tvar modules = [];\n\n\t// The values module can handle any data type.\n\tmodules.push(values);\n\n\t// sequences applies to strings and arrays.\n\tif (typeof doc === \"string\" || Array.isArray(doc)) {\n\t\tmodules.push(sequences);\n\t\t//modules.push(copies);\n\t}\n\n\t// objects applies to objects (but not Array objects or null)\n\telse if (typeof doc === \"object\" && doc !== null) {\n\t\tmodules.push(objects);\n\t\t//modules.push(copies);\n\t}\n\n\t// the lists module only defines LIST which can also\n\t// be applied to any data type but gives us stack\n\t// overflows\n\t//modules.push(lists);\n\n\treturn modules[Math.floor(Math.random() * modules.length)]\n\t\t.createRandomOp(doc, context);\n}\n\nexports.createRandomOpSequence = function(value, count) {\n\t// Create a random sequence of operations starting with a given value.\n\tvar ops = [];\n\twhile (ops.length < count) {\n\t\t// Create random operation.\n\t\tvar op = exports.createRandomOp(value);\n\n\t\t// Make the result of applying the op the initial value\n\t\t// for the next operation. 
createRandomOp sometimes returns\n\t\t// invalid operations, in which case we'll try again.\n\t\t// TODO: Make createRandomOp always return a valid operation\n\t\t// and remove the try block.\n\t\ttry {\n\t\t\tvalue = op.apply(value);\n\t\t} catch (e) {\n\t\t\tcontinue; // retry\n\t\t}\n\n\t\tops.push(op);\n\t}\n\treturn new lists.LIST(ops);\n}\n\nexports.type_name = function(x) {\n\tif (typeof x == 'object') {\n\t\tif (Array.isArray(x))\n\t\t\treturn 'array';\n\t\treturn 'object';\n\t}\n\treturn typeof x;\n}\n\n// Utility function to compare values for the purposes of\n// setting sort orders that resolve conflicts.\nexports.cmp = function(a, b) {\n\t// For objects.MISSING, make sure we try object identity.\n\tif (a === b)\n\t\treturn 0;\n\n\t// objects.MISSING has a lower sort order so that it tends to get clobbered.\n\tif (a === objects.MISSING)\n\t\treturn -1;\n\tif (b === objects.MISSING)\n\t\treturn 1;\n\n\t// Comparing strings to numbers, numbers to objects, etc.\n\t// just sort based on the type name.\n\tif (exports.type_name(a) != exports.type_name(b)) {\n\t\treturn exports.cmp(exports.type_name(a), exports.type_name(b));\n\t\n\t} else if (typeof a == \"number\") {\n\t\tif (a < b)\n\t\t\treturn -1;\n\t\tif (a > b)\n\t\t\treturn 1;\n\t\treturn 0;\n\t\t\n\t} else if (typeof a == \"string\") {\n\t\treturn a.localeCompare(b);\n\t\n\t} else if (Array.isArray(a)) {\n\t\t// First compare on length.\n\t\tvar x = exports.cmp(a.length, b.length);\n\t\tif (x != 0) return x;\n\n\t\t// Same length, compare on values.\n\t\tfor (var i = 0; i < a.length; i++) {\n\t\t\tx = exports.cmp(a[i], b[i]);\n\t\t\tif (x != 0) return x;\n\t\t}\n\n\t\treturn 0;\n\t}\n\n\t// Compare on strings.\n\t// TODO: Find a better way to sort objects.\n\treturn JSON.stringify(a).localeCompare(JSON.stringify(b));\n}\n\n"
  },
  {
    "path": "jot/lists.js",
    "content": "/*  This module defines one operation:\n\t\n\tLIST([op1, op2, ...])\n\t\n\tA composition of zero or more operations, given as an array.\n\n\t*/\n\t\nvar util = require(\"util\");\n\nvar shallow_clone = require('shallow-clone');\n\nvar jot = require(\"./index.js\");\nvar values = require('./values.js');\n\nexports.module_name = 'lists'; // for serialization/deserialization\n\nexports.LIST = function (ops) {\n\tif (!Array.isArray(ops)) throw new Error(\"Argument must be an array.\");\n\tops.forEach(function(op) {\n\t\tif (!(op instanceof jot.Operation))\n\t\t\tthrow new Error(\"Argument must be an array containing operations (found \" + op + \").\");\n\t})\n\tthis.ops = ops; // TODO: How to ensure this array is immutable?\n\tObject.freeze(this);\n}\nexports.LIST.prototype = Object.create(jot.Operation.prototype); // inherit\njot.add_op(exports.LIST, exports, 'LIST');\n\nexports.LIST.prototype.inspect = function(depth) {\n\treturn util.format(\"<LIST [%s]>\",\n\t\tthis.ops.map(function(item) { return item.inspect(depth-1) }).join(\", \"));\n}\n\nexports.LIST.prototype.visit = function(visitor) {\n\t// A simple visitor paradigm. 
Replace this operation instance itself\n\t// and any operation within it with the value returned by calling\n\t// visitor on itself, or if the visitor returns anything falsey\n\t// (probably undefined) then return the operation unchanged.\n\tvar ret = new exports.LIST(this.ops.map(function(op) { return op.visit(visitor); }));\n\treturn visitor(ret) || ret;\n}\n\nexports.LIST.prototype.internalToJSON = function(json, protocol_version) {\n\tjson.ops = this.ops.map(function(op) {\n\t\treturn op.toJSON(undefined, protocol_version);\n\t});\n}\n\nexports.LIST.internalFromJSON = function(json, protocol_version, op_map) {\n\tvar ops = json.ops.map(function(op) {\n\t\treturn jot.opFromJSON(op, protocol_version, op_map);\n\t});\n\treturn new exports.LIST(ops);\n}\n\nexports.LIST.prototype.apply = function (document) {\n\t/* Applies the operation to a document.*/\n\tfor (var i = 0; i < this.ops.length; i++)\n\t\tdocument = this.ops[i].apply(document);\n\treturn document;\n}\n\nexports.LIST.prototype.simplify = function (aggressive) {\n\t/* Returns a new operation that is a simpler version\n\t   of this operation. Composes consecutive operations where\n\t   possible and removes no-ops. Returns NO_OP if the result\n\t   would be an empty list of operations. Returns an\n\t   atomic (non-LIST) operation if possible. */\n\tvar new_ops = [];\n\tfor (var i = 0; i < this.ops.length; i++) {\n\t\tvar op = this.ops[i];\n\n\t\t// simplify the inner op\n\t\top = op.simplify();\n\n\t\t// if this isn't the first operation, try to atomic_compose the operation\n\t\t// with the previous one.\n\t\twhile (new_ops.length > 0) {\n\t\t\t// Don't bother with atomic_compose if the op to add is a no-op.\n\t\t\tif (op.isNoOp())\n\t\t\t\tbreak;\n\n\t\t\tvar c = new_ops[new_ops.length-1].compose(op, true);\n\n\t\t\t// If there is no atomic composition, there's nothing more we can do.\n\t\t\tif (!c)\n\t\t\t\tbreak;\n\n\t\t\t// The atomic composition was successful. 
Remove the old previous operation.\n\t\t\tnew_ops.pop();\n\n\t\t\t// Use the atomic composition as the next op to add. On the next iteration\n\t\t\t// try composing it with the new last element of new_ops.\n\t\t\top = c;\n\t\t}\n\n\t\t// Don't add to the new list if it is a no-op.\n\t\tif (op.isNoOp())\n\t\t\tcontinue;\n\n\t\t// if it's the first operation, or atomic_compose failed, add it to the new_ops list\n\t\tnew_ops.push(op);\n\t}\n\n\tif (new_ops.length == 0)\n\t\treturn new values.NO_OP();\n\tif (new_ops.length == 1)\n\t\treturn new_ops[0];\n\n\treturn new exports.LIST(new_ops);\n}\n\nexports.LIST.prototype.inverse = function (document) {\n\t/* Returns a new LIST operation that is the inverse of this operation:\n\t   the inverse of each operation in reverse order. */\n\tvar new_ops = [];\n\tthis.ops.forEach(function(op) {\n\t\tnew_ops.push(op.inverse(document));\n\t\tdocument = op.apply(document);\n\t})\n\tnew_ops.reverse();\n\treturn new exports.LIST(new_ops);\n}\n\nexports.LIST.prototype.atomic_compose = function (other) {\n\t/* Returns a LIST operation that has the same result as this\n\t   and other applied in sequence (this first, other after). 
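For example, LIST([PUT(\"x\", 1)]) composed with\n\t   REM(\"x\") yields LIST([PUT(\"x\", 1), REM(\"x\")]).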
*/\n\n\t// If this list is empty, the composition is just the other operation.\n\tif (this.ops.length == 0)\n\t\treturn other;\n\n\t// If the other operation is a LIST, the composition is this when other\n\t// is empty, and otherwise the concatenation of the two lists.\n\tif (other instanceof exports.LIST) {\n\t\tif (other.ops.length == 0)\n\t\t\treturn this;\n\t\t\n\t\t// concatenate\n\t\treturn new exports.LIST(this.ops.concat(other.ops));\n\t}\n\n\t// append\n\tvar new_ops = this.ops.slice(); // clone\n\tnew_ops.push(other);\n\treturn new exports.LIST(new_ops);\n}\n\nexports.rebase = function(base, ops, conflictless, debug) {\n\t// Ensure the operations are simplified, since rebase\n\t// is much more expensive than simplify.\n\n\tbase = base.simplify();\n\tops = ops.simplify();\n\n\t// Turn each argument into an array of operations.\n\t// If an argument is a LIST, unwrap it.\n\n\tif (base instanceof exports.LIST)\n\t\tbase = base.ops;\n\telse\n\t\tbase = [base];\n\n\tif (ops instanceof exports.LIST)\n\t\tops = ops.ops;\n\telse\n\t\tops = [ops];\n\n\t// Run the rebase algorithm.\n\n\tvar ret = rebase_array(base, ops, conflictless, debug);\n\n\t// The rebase may have failed.\n\tif (ret == null) return null;\n\n\t// ...or yielded no operations --- turn it into a NO_OP operation.\n\tif (ret.length == 0) return new values.NO_OP();\n\n\t// ...or yielded a single operation --- return it.\n\tif (ret.length == 1) return ret[0];\n\n\t// ...or yielded a list of operations --- re-wrap it in a LIST operation.\n\treturn new exports.LIST(ret).simplify();\n}\n\nfunction rebase_array(base, ops, conflictless, debug) {\n\t/* This is one of the core functions of the library: rebasing a sequence\n\t   of operations against another sequence of operations. 
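Both arguments are\n\t   plain arrays of operations (the exported rebase function above\n\t   unwraps LISTs); the return value is an array of operations, or\n\t   null on conflict.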
*/\n\n\t/*\n\t* To see the logic, it will help to put this in a symbolic form.\n\t*\n\t*   Let a + b == compose(a, b)\n\t*   and a / b == rebase(b, a)\n\t*\n\t* The contract of rebase has two parts:\n\t*\n\t* \t1) a + (b/a) == b + (a/b)\n\t* \t2) x/(a + b) == (x/a)/b\n\t*\n\t* Also note that the compose operator is associative, so\n\t*\n\t*\ta + (b+c) == (a+b) + c\n\t*\n\t* Our return value here in symbolic form is:\n\t*\n\t*   (op1/base) + (op2/(base/op1))\n\t*   where ops = op1 + op2\n\t*\n\t* To see that we've implemented rebase correctly, let's look\n\t* at what happens when we compose our result with base as per\n\t* the rebase rule:\n\t*\n\t*   base + (ops/base)\n\t*\n\t* And then do some algebraic manipulations:\n\t*\n\t*   base + [ (op1/base) + (op2/(base/op1)) ]   (substituting our hypothesis for ops/base)\n\t*   [ base + (op1/base) ] + (op2/(base/op1))   (associativity)\n\t*   [ op1 + (base/op1) ] + (op2/(base/op1))    (rebase's contract on the left side)\n\t*   op1 + [ (base/op1)  + (op2/(base/op1)) ]   (associativity)\n\t*   op1 + [ op2 + ((base/op1)/op2) ]           (rebase's contract on the right side)\n\t*   (op1 + op2) + ((base/op1)/op2)             (associativity)\n\t*   ops + [(base/op1)/op2]                     (substituting ops for (op1+op2))\n\t*   ops + [base/(op1+op2)]                     (rebase's second contract)\n\t*   ops + (base/ops)                           (substitution)\n\t*\n\t* Thus we've proved that the rebase contract holds for our return value.\n\t*/\n\n\tif (ops.length == 0 || base.length == 0)\n\t\treturn ops;\n\n\tif (ops.length == 1 && base.length == 1) {\n\t\t// This is the recursive base case: Rebasing a single operation against a single\n\t\t// operation. 
Wrap the result in an array.\n\t\tvar op = ops[0].rebase(base[0], conflictless, debug);\n\t\tif (!op) return null; // conflict\n\t\tif (op instanceof jot.NO_OP) return [];\n\t\tif (op instanceof jot.LIST) return op.ops;\n\t\treturn [op];\n\t}\n\t\n\tif (debug) {\n\t\t// Wrap the debug function to emit an extra argument to show depth.\n\t\tdebug(\"rebasing\", ops, \"on\", base, conflictless ? \"conflictless\" : \"\", (conflictless && \"document\" in conflictless) ? JSON.stringify(conflictless.document) : \"\", \"...\");\n\t\tvar original_debug = debug;\n\t\tdebug = function() { var args = [\">\"].concat(Array.from(arguments)); original_debug.apply(null, args); }\n\t}\n\t\n\tif (base.length == 1) {\n\t\t// Rebase more than one operation (ops) against a single operation (base[0]).\n\n\t\t// Nothing to do if it is a no-op.\n\t\tif (base[0] instanceof values.NO_OP)\n\t\t\treturn ops;\n\n\t\t// The result is the first operation in ops rebased against the base concatenated with\n\t\t// the remainder of ops rebased against the-base-rebased-against-the-first-operation:\n\t\t// (op1/base) + (op2/(base/op1))\n\n\t\tvar op1 = ops.slice(0, 1); // first operation\n\t\tvar op2 = ops.slice(1); // remaining operations\n\t\t\n\t\tvar r1 = rebase_array(base, op1, conflictless, debug);\n\t\tif (r1 == null) return null; // rebase failed\n\t\t\n\t\tvar r2 = rebase_array(op1, base, conflictless, debug);\n\t\tif (r2 == null) return null; // rebase failed (if r1 succeeded, this should have too, so this test should never succeed)\n\t\t\n\t\t// For the remainder operations, we have to adjust the 'conflictless' object.\n\t\t// If it provides the base document state, then we have to advance the document\n\t\t// for the application of op1.\n\t\tvar conflictless2 = null;\n\t\tif (conflictless) {\n\t\t\tconflictless2 = shallow_clone(conflictless);\n\t\t\tif (\"document\" in conflictless2)\n\t\t\t\tconflictless2.document = op1[0].apply(conflictless2.document);\n\t\t}\n\n\t\tvar r3 = rebase_array(r2, op2, conflictless2, 
debug);\n\t\tif (r3 == null) return null; // rebase failed\n\t\t\n\t\t// returns a new array\n\t\treturn r1.concat(r3);\n\n\t} else {\n\t\t// Rebase one or more operations (ops) against more than one operation (base).\n\t\t//\n\t\t// From the second part of the rebase contract, we can rebase ops\n\t\t// against each operation in the base sequentially (base[0], base[1], ...).\n\t\t\n\t\t// shallow clone\n\t\tconflictless = !conflictless ? null : shallow_clone(conflictless);\n\n\t\tfor (var i = 0; i < base.length; i++) {\n\t\t\tops = rebase_array([base[i]], ops, conflictless, debug);\n\t\t\tif (ops == null) return null; // conflict\n\n\t\t\t// Adjust the 'conflictless' object if it provides the base document state\n\t\t\t// since for later operations we're assuming base[i] has now been applied.\n\t\t\tif (conflictless && \"document\" in conflictless)\n\t\t\t\tconflictless.document = base[i].apply(conflictless.document);\n\t\t}\n\n\t\treturn ops;\n\t}\n}\n\nexports.LIST.prototype.drilldown = function(index_or_key) {\n\treturn new exports.LIST(this.ops.map(function(op) {\n\t\treturn op.drilldown(index_or_key)\n\t})).simplify();\n}\n\nexports.createRandomOp = function(doc, context) {\n\t// Create a random LIST that could apply to doc.\n\tvar ops = [];\n\twhile (ops.length == 0 || Math.random() < .75) {\n\t\tops.push(jot.createRandomOp(doc, context));\n\t\tdoc = ops[ops.length-1].apply(doc);\n\t}\n\treturn new exports.LIST(ops);\n}\n"
  },
  {
    "path": "jot/merge.js",
    "content": "// Performs a merge, i.e. given a directed graph of operations with in-degree\n// of 1 or 2, and two nodes 'source' and 'target' on the graph which have a\n// common ancestor, compute the operation that applies the changes in source,\n// relative to the common ancestor, to target.\n//\n// In the simplest case of a merge where the source and target share a common\n// immediate parent, then the merge is simply the rebase of the source on the\n// target. In more complex operation graphs, we find the least common ancestor\n// (i.e. the nearest common ancestor) and then rebase around that.\n\nconst jot = require(\"./index.js\");\n\nfunction keys(obj) {\n  // Get all of the own-keys of obj including string keys and symbols.\n  return Object.keys(obj).concat(Object.getOwnPropertySymbols(obj));\n}\n\nfunction node_name_str(node) {\n  if (typeof node === \"Symbol\" && node.description)\n    return node.description;\n  return node.toString();\n}\n\nexports.merge = function(branch1, branch2, graph) {\n  // 'graph' is an object whose keys are identifiers of nodes\n  // 'branch1' and 'branch2' are keys in 'graph'\n  // The values of 'graph' are objects of the form:\n  // {\n  //   parents: an array of keys in graph\n  //   op: an array of JOT operations, each giving the difference between\n  //       the document at this node and the *corresponding* parent\n  //   document: the document content at this node, required at least for\n  //       root nodes, but can be set on any node\n  // }\n  // \n  // This method returns an array of two operations: The first merges\n  // branch2 into branch1 and the second merges branch1 into branch2.\n\n  // Find the lowest common ancestor(s) of branch1 and branch2.\n  var lca = lowest_common_ancestors(branch1, branch2, graph);\n  if (lca.length == 0)\n    throw \"NO_COMMON_ANCESTOR\";\n\n function get_document_content_at(n) {\n    // Node n may have a 'document' key set. 
If not, get the content at its first parent\n    // and apply its operations.\n    let path = [];\n    while (typeof graph[n].document === \"undefined\") {\n      if (!graph[n].parents)\n        throw \"ROOT_MISSING_DOCUMENT\";\n      path.unshift(graph[n].op[0]);\n      n = graph[n].parents[0];\n    }\n    let document = graph[n].document;\n    path.forEach(op => document = op.apply(document));\n    return document;\n  }\n\n  /*\n  console.log(\"Merging\", branch1, \"and\", branch2);\n  keys(graph).forEach(node =>\n    console.log(\" \", node, \"=>\", (graph[node].parents || []).map((p, i) =>\n      (p.toString() + \" + \" + graph[node].op[i].inspect())).join(\" X \"))\n  )\n  */\n\n  // If there is more than one common ancestor, then use the git merge\n  // \"recursive\" strategy, which successively merges pairs of common\n  // ancestors until there is just one left.\n  while (lca.length > 1) {\n    // Take the top two ancestors off the stack and compute the operations\n    // to merge them.\n    let lca1 = lca.pop();\n    let lca2 = lca.pop();\n    //console.log(\"Recursive merging...\");\n    let merge_ops = exports.merge(lca1.node, lca2.node, graph);\n\n    // Form a new virtual node in the graph that represents this merge and put\n    // this node back onto the stack. Along with the node, we also store the\n    // path through nodes to branch1 and branch2. 
Of course there is no path\n    // from this virtual node to anywhere, so we make one by rebasing the\n    // path from one of the ancestor nodes (it doesn't matter which) to the\n    // target onto the merge operation from that ancestor to the new node.\n    // (Maybe choosing the shortest paths will produce better merges?)\n    var node = Symbol(/*node_name_str(lca1.node) + \"|\" + node_name_str(lca2.node)*/);\n    graph[node] = {\n      parents: [lca1.node, lca2.node],\n      op: merge_ops\n    }\n    \n    let document1 = get_document_content_at(lca1.node);\n\n    /*\n    console.log(\"Rebasing\")\n    keys(lca1.paths).map(target => console.log(\" \", target, lca1.paths[target].inspect()));\n    console.log(\"against the merge commit's first operation\");\n    console.log(\"which is at\", lca1.node, \"document:\", document1);\n    */\n  \n    lca.push({\n      node: node,\n      paths: {\n        [branch1]: lca1.paths[branch1].rebase(merge_ops[0], { document: document1 }),\n        [branch2]: lca1.paths[branch2].rebase(merge_ops[0], { document: document1 })\n      }\n    });\n\n    /*\n    console.log(\"Making virtual node (\", lca1.node, lca2.node, \") with paths to targets:\");\n    keys(lca[lca.length-1].paths).map(target => console.log(\" \", target, lca[lca.length-1].paths[target].inspect()));\n    */\n  }\n\n  // Take the one common ancestors.\n  lca = lca.pop();\n\n  /*\n  console.log(\"Common ancestor:\", lca.node, \"Paths to targets:\");\n  keys(lca.paths).map(target => console.log(\" \", target, lca.paths[target].inspect()));\n  */\n\n  // Get the JOT operations from the ancestor to branch1 and branch2.\n  var branch1_op = lca.paths[branch1];\n  var branch2_op = lca.paths[branch2];\n\n  // Get the document content at the common ancestor.\n  let document = get_document_content_at(lca.node);\n  //console.log(\"document:\", document);\n\n  // Rebase and return the operations that would 1) merge branch2 into branch1\n  // and 2) branch1 into branch2.\n  
let merge = [\n    branch2_op.rebase(branch1_op, { document: document }),\n    branch1_op.rebase(branch2_op, { document: document })\n  ];\n  //console.log(\"Merge:\", merge[0].inspect(), merge[1].inspect());\n  return merge;\n}\n\nfunction lowest_common_ancestors(a, b, graph) {\n  // Find the lowest common ancestors of a and b (i.e. an\n  // ancestor of both that is not the ancestor of another common\n  // ancestor). There may be more than one equally-near ancestors.\n  //\n  // For each such ancestor, return the path(s) to a and b. There\n  // may be multiple paths to each of a and b.\n  //\n  // We do this by traversing the graph starting first at a and then at b.\n  let reach_paths = { };\n  let descendants = { };\n  let queue = [ // node  paths to a & b  descendant nodes\n                 [ a,    { [a]: [[]] },  {} ],\n                 [ b,    { [b]: [[]] },  {} ]];\n  while (queue.length > 0) {\n    // Pop an item from the queue.\n    let [node, paths, descs] = queue.shift();\n    if (!graph.hasOwnProperty(node)) throw \"Invalid node: \" + node;\n    \n    // Update its reach_paths flags.\n    if (!reach_paths.hasOwnProperty(node)) reach_paths[node] = { };\n    keys(paths).forEach(target => {\n      if (!reach_paths[node].hasOwnProperty(target)) reach_paths[node][target] = [ ];\n      paths[target].forEach(path =>\n        reach_paths[node][target].push(path))\n    });\n\n    // Update its descendants.\n    if (!descendants.hasOwnProperty(node)) descendants[node] = { };\n    Object.assign(descendants[node], descs);\n\n    // Queue its parents, passing the reachability paths and descendants.\n    // Add this node to the descendants and path. 
To the path, add the\n    // index too.\n    let de = { [node]: true };\n    Object.assign(de, descs);\n    (graph[node].parents || []).forEach((p, i) => {\n      let pa = { };\n      keys(paths).forEach(pkey => {\n        pa[pkey] = paths[pkey].map(path => [[node, i]].concat(path));\n      })\n\n      queue.push([p, pa, de])\n    });\n  }\n\n  // Take the common ancestors.\n  var common_ancestors = keys(reach_paths).filter(\n    n => reach_paths[n].hasOwnProperty(a) && reach_paths[n].hasOwnProperty(b)\n  );\n\n  // Remove the common ancestors that have other common ancestors as descendants.\n  function object_contains_any(obj, elems) {\n    for (let i = 0; i < elems.length; i++)\n      if (obj.hasOwnProperty(elems[i]))\n        return true;\n    return false;\n  }\n  var lowest_common_ancestors = common_ancestors.filter(\n    n => !object_contains_any(descendants[n], common_ancestors)\n  );\n\n  // Return the lowest common ancestors and, for each, return an\n  // object that holds the node id plus the operations from the\n  // ancestor to the target nodes a and b. Each item on the computed\n  // path is a node ID and the index of the incoming edge to the node\n  // on the path. For simple nodes, the edge index is always zero. For\n  // merges, there are multiple parents, and we need to get the operation\n  // that represents the change from the *corresponding* parent.\n  function get_paths_op(n) {\n    let paths = { };\n    keys(reach_paths[n]).forEach(target => {\n      // There may be multiple paths from the ancestor to the target, but\n      // we only need one. Take the first one. 
Maybe taking the shortest\n      // will make better diffs in very complex scenarios?\n      let path = reach_paths[n][target][0];\n      paths[target] = new jot.LIST(path.map(path_item => graph[path_item[0]].op[path_item[1]])).simplify();\n    });\n    return paths;\n  }\n  return lowest_common_ancestors.map(n => { return {\n    'node': n,\n    'paths': get_paths_op(n)\n  }});\n}\n\nexports.Revision = class {\n  constructor(ops, parents, document) {\n    this.ops = ops;\n    this.parents = parents || [];\n    this.document = document;\n    this.id = Symbol(/*Math.floor(Math.random() * 10000)*/);\n  }\n\n  add_to_graph(graph) {\n    // Add the parents.\n    this.parents.forEach(p => p.add_to_graph(graph));\n\n    // Add this node.\n    graph[this.id] = {\n      parents: this.parents.map(p => p.id),\n      op: this.ops,\n      document: this.document\n    };\n  }\n}\n\nexports.Document = class {\n  constructor(name) {\n    this.name = name || \"\";\n    this.history = [new exports.Revision(null, null, null, \"singularity\")];\n    this.commit_count = 0;\n    this.branch_count = 0;\n  }\n\n  commit(op) {\n    if (!(op instanceof jot.Operation)) {\n      // If 'op' is not an operation, it is document content. Do a diff\n      // against the last content to make an operation.\n      op = jot.diff(this.content(), op);\n    }\n\n    let rev = new exports.Revision(\n      [op],\n      [this.head()],\n      op.apply(this.content()),\n      (this.name||\"\") + \"+\" + (++this.commit_count)\n    );\n    this.history.push(rev);\n    this.branch_count = 0;\n    return rev;\n  }\n  \n  branch(name) {\n    let br = new exports.Document(\n        (this.name ? 
(this.name + \"_\") : \"\")\n      + (name || (++this.branch_count)));\n    br.history = [].concat(this.history); // shallow clone\n    return br;\n  }\n\n  head() {\n    return this.history[this.history.length-1];\n  }\n\n  content() {\n    return this.head().document;\n  }\n\n  merge(b) {\n    // Form a graph of the complete known history of the two branches.\n    let graph = { };\n    this.head().add_to_graph(graph);\n    b.head().add_to_graph(graph);\n\n    // Perform the merge. Two operations are returned. The first merges\n    // b into this, and the second merges this into b.\n    let op = exports.merge(this.head().id, b.head().id, graph);\n\n    // Make a commit on this branch that merges b into this.\n    let rev = this.commit(op[0])\n\n    // Add the second parent and the second merge operation.\n    rev.parents.push(b.head());\n    rev.ops.push(op[1]);\n    return rev;\n  }\n};\n\n"
  },
  {
    "path": "jot/objects.js",
    "content": "/* A library of operations for objects (i.e. JSON objects/Javascript associative arrays).\n\n   new objects.PUT(key, value)\n    \n    Creates a property with the given value. This is an alias for\n    new objects.APPLY(key, new values.SET(value)).\n\n   new objects.REM(key)\n    \n    Removes a property from an object. This is an alias for\n    new objects.APPLY(key, new values.SET(objects.MISSING)).\n\n   new objects.APPLY(key, operation)\n   new objects.APPLY({key: operation, ...})\n\n    Applies any operation to a property, or multiple operations to various\n    properties, on the object.\n\n    Use any operation defined in any of the modules depending on the data type\n    of the property. For instance, the operations in values.js can be\n    applied to any property. The operations in sequences.js can be used\n    if the property's value is a string or array. And the operations in\n    this module can be used if the value is another object.\n\n    Supports a conflictless rebase with itself with the inner operations\n    themselves support a conflictless rebase. 
It does not generate conflicts\n    with any other operations in this module.\n\n    Example:\n    \n    To replace the value of a property with a new value:\n    \n      new objects.APPLY(\"key1\", new values.SET(\"value\"))\n\n\tor\n\n      new objects.APPLY({ key1: new values.SET(\"value\") })\n\n   */\n   \nvar util = require('util');\n\nvar deepEqual = require(\"deep-equal\");\nvar shallow_clone = require('shallow-clone');\n\nvar jot = require(\"./index.js\");\nvar values = require(\"./values.js\");\nvar LIST = require(\"./lists.js\").LIST;\n\n//////////////////////////////////////////////////////////////////////////////\n\nexports.module_name = 'objects'; // for serialization/deserialization\n\nexports.APPLY = function () {\n\tif (arguments.length == 1 && typeof arguments[0] == \"object\") {\n\t\t// Dict form.\n\t\tthis.ops = arguments[0];\n\t} else if (arguments.length == 2 && typeof arguments[0] == \"string\") {\n\t\t// key & operation form.\n\t\tthis.ops = { };\n\t\tthis.ops[arguments[0]] = arguments[1];\n\t} else {\n\t\tthrow new Error(\"invalid arguments\");\n\t}\n\tObject.freeze(this);\n\tObject.freeze(this.ops);\n}\nexports.APPLY.prototype = Object.create(jot.Operation.prototype); // inherit\njot.add_op(exports.APPLY, exports, 'APPLY');\n\n// The MISSING object is a sentinel to signal the state of an Object property\n// that does not exist. 
It is the old_value to SET when adding a new property\n// and the value when removing a property.\nexports.MISSING = new Object();\nObject.freeze(exports.MISSING);\n\nexports.PUT = function (key, value) {\n\texports.APPLY.apply(this, [key, new values.SET(value)]);\n}\nexports.PUT.prototype = Object.create(exports.APPLY.prototype); // inherit prototype\n\nexports.REM = function (key) {\n\texports.APPLY.apply(this, [key, new values.SET(exports.MISSING)]);\n}\nexports.REM.prototype = Object.create(exports.APPLY.prototype); // inherit prototype\n\n//////////////////////////////////////////////////////////////////////////////\n\nexports.APPLY.prototype.inspect = function(depth) {\n\tvar inner = [];\n\tvar ops = this.ops;\n\tObject.keys(ops).forEach(function(key) {\n\t\tinner.push(util.format(\"%j:%s\", key, ops[key].inspect(depth-1)));\n\t});\n\treturn util.format(\"<APPLY %s>\", inner.join(\", \"));\n}\n\nexports.APPLY.prototype.visit = function(visitor) {\n\t// A simple visitor paradigm. Replace this operation instance itself\n\t// and any operation within it with the value returned by calling\n\t// visitor on itself, or if the visitor returns anything falsey\n\t// (probably undefined) then return the operation unchanged.\n\tvar ops = { };\n\tfor (var key in this.ops)\n\t\tops[key] = this.ops[key].visit(visitor);\n\tvar ret = new exports.APPLY(ops);\n\treturn visitor(ret) || ret;\n}\n\nexports.APPLY.prototype.internalToJSON = function(json, protocol_version) {\n\tjson.ops = { };\n\tfor (var key in this.ops)\n\t\tjson.ops[key] = this.ops[key].toJSON(undefined, protocol_version);\n}\n\nexports.APPLY.internalFromJSON = function(json, protocol_version, op_map) {\n\tvar ops = { };\n\tfor (var key in json.ops)\n\t\tops[key] = jot.opFromJSON(json.ops[key], protocol_version, op_map);\n\treturn new exports.APPLY(ops);\n}\n\nexports.APPLY.prototype.apply = function (document) {\n\t/* Applies the operation to a document. 
Returns a new object that is\n\t   the same type as document but with the change made. */\n\n\t// Clone first.\n\tvar d = { };\n\tfor (var k in document)\n\t\td[k] = document[k];\n\n\t// Apply. Pass the object and key down in the second argument\n\t// to apply so that values.SET can handle the special MISSING\n\t// value.\n\tfor (var key in this.ops) {\n\t\tvar value = this.ops[key].apply(d[key], [d, key]);\n\t\tif (value === exports.MISSING)\n\t\t\tdelete d[key]; // key was removed\n\t\telse\n\t\t\td[key] = value;\n\t}\n\treturn d;\n}\n\nexports.APPLY.prototype.simplify = function () {\n\t/* Returns a new atomic operation that is a simpler version\n\t   of this operation. If there is no sub-operation that is\n\t   not a NO_OP, then return a NO_OP. Otherwise, simplify all\n\t   of the sub-operations. */\n\tvar new_ops = { };\n\tvar had_non_noop = false;\n\tfor (var key in this.ops) {\n\t\tnew_ops[key] = this.ops[key].simplify();\n\t\tif (!(new_ops[key] instanceof values.NO_OP))\n\t\t\t// Remember that we have a substantive operation.\n\t\t\thad_non_noop = true;\n\t\telse\n\t\t\t// Drop internal NO_OPs.\n\t\t\tdelete new_ops[key];\n\t}\n\tif (!had_non_noop)\n\t\treturn new values.NO_OP();\n\treturn new exports.APPLY(new_ops);\n}\n\nexports.APPLY.prototype.inverse = function (document) {\n\t/* Returns a new atomic operation that is the inverse of this operation,\n\t   given the state of the document before this operation applies. */\n\tvar new_ops = { };\n\tfor (var key in this.ops) {\n\t\tnew_ops[key] = this.ops[key].inverse(key in document ? document[key] : exports.MISSING);\n\t}\n\treturn new exports.APPLY(new_ops);\n}\n\nexports.APPLY.prototype.atomic_compose = function (other) {\n\t/* Creates a new atomic operation that has the same result as this\n\t   and other applied in sequence (this first, other after). Returns\n\t   null if no atomic operation is possible. 
*/\n\n\t// two APPLYs\n\tif (other instanceof exports.APPLY) {\n\t\t// Start with a clone of this operation's suboperations.\n\t\tvar new_ops = shallow_clone(this.ops);\n\n\t\t// Now compose with other.\n\t\tfor (var key in other.ops) {\n\t\t\tif (!(key in new_ops)) {\n\t\t\t\t// Operation in other applies to a key not present\n\t\t\t\t// in this, so we can just merge - the operations\n\t\t\t\t// happen in parallel and don't affect each other.\n\t\t\t\tnew_ops[key] = other.ops[key];\n\t\t\t} else {\n\t\t\t\t// Compose.\n\t\t\t\tvar op2 = new_ops[key].compose(other.ops[key]);\n\n\t\t\t\t// They composed to a no-op, so delete the\n\t\t\t\t// first operation.\n\t\t\t\tif (op2 instanceof values.NO_OP)\n\t\t\t\t\tdelete new_ops[key];\n\n\t\t\t\telse\n\t\t\t\t\tnew_ops[key] = op2;\n\t\t\t}\n\t\t}\n\n\t\treturn new exports.APPLY(new_ops).simplify();\n\t}\n\n\t// No composition possible.\n\treturn null;\n}\n\nexports.APPLY.prototype.rebase_functions = [\n\t[exports.APPLY, function(other, conflictless) {\n\t\t// Rebase the sub-operations on corresponding keys.\n\t\t// If any rebase fails, the whole rebase fails.\n\n\t\t// When conflictless is supplied with a prior document state,\n\t\t// the state represents the object, so before we call rebase\n\t\t// on inner operations, we have to go in a level on the prior\n\t\t// document.\n\t\tfunction build_conflictless(key) {\n\t\t\tif (!conflictless || !(\"document\" in conflictless))\n\t\t\t\treturn conflictless;\n\t\t\tvar ret = shallow_clone(conflictless);\n\t\t\tif (!(key in conflictless.document))\n\t\t\t\t// The key being modified isn't present yet.\n\t\t\t\tret.document = exports.MISSING;\n\t\t\telse\n\t\t\t\tret.document = conflictless.document[key];\n\t\t\treturn ret;\n\t\t}\n\n\t\tvar new_ops_left = { };\n\t\tfor (var key in this.ops) {\n\t\t\tnew_ops_left[key] = this.ops[key];\n\t\t\tif (key in other.ops)\n\t\t\t\tnew_ops_left[key] = new_ops_left[key].rebase(other.ops[key], build_conflictless(key));\n\t\t\tif 
(new_ops_left[key] === null)\n\t\t\t\treturn null;\n\t\t}\n\n\t\tvar new_ops_right = { };\n\t\tfor (var key in other.ops) {\n\t\t\tnew_ops_right[key] = other.ops[key];\n\t\t\tif (key in this.ops)\n\t\t\t\tnew_ops_right[key] = new_ops_right[key].rebase(this.ops[key], build_conflictless(key));\n\t\t\tif (new_ops_right[key] === null)\n\t\t\t\treturn null;\n\t\t}\n\n\t\treturn [\n\t\t\tnew exports.APPLY(new_ops_left).simplify(),\n\t\t\tnew exports.APPLY(new_ops_right).simplify()\n\t\t];\n\t}]\n]\n\nexports.APPLY.prototype.drilldown = function(index_or_key) {\n\tif (typeof index_or_key == \"string\" && index_or_key in this.ops)\n\t\treturn this.ops[index_or_key];\n\treturn new values.NO_OP();\n}\n\nexports.createRandomOp = function(doc, context) {\n\t// Create a random operation that could apply to doc.\n\t// Choose uniformly across various options.\n\tvar ops = [];\n\n\t// Add a random key with a random value.\n\tops.push(function() { return new exports.PUT(\"k\"+Math.floor(1000*Math.random()), jot.createRandomValue()); });\n\n\t// Apply random operations to individual keys. Wrap the inner operation\n\t// in an APPLY so that it targets the key's value within doc.\n\tObject.keys(doc).forEach(function(key) {\n\t\tops.push(function() { return new exports.APPLY(key, jot.createRandomOp(doc[key], \"object\")) });\n\t});\n\n\t// Select randomly.\n\treturn ops[Math.floor(Math.random() * ops.length)]();\n}\n"
  },
  {
    "path": "jot/sequences.js",
    "content": "/* An operational transformation library for sequence-like objects,\n   i.e. strings and arrays.\n\n   The main operation provided by this library is PATCH, which represents\n   a set of non-overlapping changes to a string or array. Each change,\n   called a hunk, applies an operation to a subsequence -- i.e. a sub-string\n   or a slice of the array. The operation's .apply method yields a new\n   sub-sequence, and they are stitched together (along with unchanged elements)\n   to form the new document that results from the PATCH operation.\n\n   The internal structure of the PATCH operation is an array of hunks as\n   follows:\n\n   new sequences.PATCH(\n     [\n       { offset: ..., # unchanged elements to skip before this hunk\n         length: ..., # length of subsequence modified by this hunk\n         op: ...      # jot operation to apply to the subsequence\n       },\n       ...\n     ]\n    )\n\n   The inner operation must be one of NO_OP, SET, and MAP (or any\n   operation that defines \"get_length_change\" and \"decompose\" functions\n   and whose rebase always yields an operation that also satisfies these\n   same constraints.)\n\n\n   This library also defines the MAP operation, which applies a jot\n   operation to every element of a sequence. The MAP operation is\n   also used with length-one hunks to apply an operation to a single\n   element. On strings, the MAP operation only accepts inner operations\n   that yield back single characters so that a MAP on a string does\n   not change the string's length.\n\n   The internal structure of the MAP operation is:\n\n   new sequences.MAP(op)\n \n   Shortcuts for constructing useful PATCH operations are provided:\n\n\t\tnew sequences.SPLICE(pos, length, value)\n\n\t\t\t Equivalent to:\n\n\t\t\t PATCH([{\n\t\t\t\t offset: pos,\n\t\t\t\t length: length,\n\t\t\t\t op: new values.SET(value)\n\t\t\t\t }])\n\t\t\t \n\t\t\t i.e. 
replace elements with other elements\n\t\t\n\t\tnew sequences.ATINDEX(pos, op)\n\n\t\t\t Equivalent to:\n\n\t\t\t PATCH([{\n\t\t\t\t offset: pos,\n\t\t\t\t length: 1,\n\t\t\t\t op: new sequences.MAP(op)\n\t\t\t\t }])\n\t\t\t \n\t\t\t i.e. apply the operation to the single element at pos\n\n\t\tnew sequences.ATINDEX({ pos: op, ... })\n\n\t\t\t Similar to the above but for multiple operations at once.\n\n\t\tSupports a conflictless rebase with other PATCH operations.\n\n\t */\n\t \nvar util = require('util');\n\nvar deepEqual = require(\"deep-equal\");\nvar shallow_clone = require('shallow-clone');\n\nvar jot = require(\"./index.js\");\nvar values = require(\"./values.js\");\nvar LIST = require(\"./lists.js\").LIST;\n\n// utilities\n\nfunction elem(seq, pos) {\n\t// get an element of the sequence\n\tif (typeof seq == \"string\")\n\t\treturn seq.charAt(pos);\n\telse // is an array\n\t\treturn seq[pos];\n}\nfunction concat2(item1, item2) {\n\t// primitive strings are not String instances, so test with typeof\n\tif (typeof item1 == \"string\")\n\t\treturn item1 + item2;\n\treturn item1.concat(item2);\n}\nfunction concat3(item1, item2, item3) {\n\tif (typeof item1 == \"string\")\n\t\treturn item1 + item2 + item3;\n\treturn item1.concat(item2).concat(item3);\n}\nfunction map_index(pos, move_op) {\n\tif (pos >= move_op.pos && pos < move_op.pos+move_op.count) return (pos-move_op.pos) + move_op.new_pos; // within the move\n\tif (pos < move_op.pos && pos < move_op.new_pos) return pos; // before the move\n\tif (pos < move_op.pos) return pos + move_op.count; // shifted right: the block moved from after this position to before it\n\tif (pos > move_op.pos && pos >= move_op.new_pos) return pos; // after the move\n\tif (pos > move_op.pos) return pos - move_op.count; // shifted left: the block moved from before this position to after it\n\tthrow new Error(\"unhandled problem\");\n}\n\n//////////////////////////////////////////////////////////////////////////////\n\nexports.module_name = 'sequences'; // for serialization/deserialization\n\nexports.PATCH = function () {\n\t/* An operation that replaces a subrange of the 
sequence with new elements. */\n\tif (arguments[0] === \"__hmm__\") return; // used for subclassing\n\tif (arguments.length != 1)\n\t\tthrow new Error(\"Invalid Argument\");\n\n\tthis.hunks = arguments[0];\n\n\t// Sanity check & freeze hunks.\n\tif (!Array.isArray(this.hunks))\n\t\tthrow new Error(\"Invalid Argument\");\n\tthis.hunks.forEach(function(hunk) {\n\t\tif (typeof hunk.offset != \"number\")\n\t\t\tthrow new Error(\"Invalid Argument (hunk offset not a number)\");\n\t\tif (hunk.offset < 0)\n\t\t\tthrow new Error(\"Invalid Argument (hunk offset is negative)\");\n\t\tif (typeof hunk.length != \"number\")\n\t\t\tthrow new Error(\"Invalid Argument (hunk length is not a number)\");\n\t\tif (hunk.length < 0)\n\t\t\tthrow new Error(\"Invalid Argument (hunk length is negative)\");\n\t\tif (!(hunk.op instanceof jot.Operation))\n\t\t\tthrow new Error(\"Invalid Argument (hunk operation is not an operation)\");\n\t\tif (typeof hunk.op.get_length_change != \"function\")\n\t\t\tthrow new Error(\"Invalid Argument (hunk operation \" + hunk.op.inspect() + \" does not support get_length_change)\");\n\t\tif (typeof hunk.op.decompose != \"function\")\n\t\t\tthrow new Error(\"Invalid Argument (hunk operation \" + hunk.op.inspect() + \" does not support decompose)\");\n\t\tObject.freeze(hunk);\n\t});\n\n\tObject.freeze(this);\n}\nexports.PATCH.prototype = Object.create(jot.Operation.prototype); // inherit\njot.add_op(exports.PATCH, exports, 'PATCH');\n\n\t// shortcuts\n\n\texports.SPLICE = function (pos, length, value) {\n\t\t// value.slice(0,0) is a shorthand for constructing an empty string or empty list, generically\n\t\texports.PATCH.apply(this, [[{\n\t\t\toffset: pos,\n\t\t\tlength: length,\n\t\t\top: new values.SET(value)\n\t\t}]]);\n\t}\n\texports.SPLICE.prototype = new exports.PATCH(\"__hmm__\"); // inherit prototype\n\n\texports.ATINDEX = function () {\n\t\tvar indexes;\n\t\tvar op_map;\n\t\tif (arguments.length == 1) {\n\t\t\t// The argument is a mapping from indexes to 
operations to apply\n\t\t\t// at those indexes. Collect all of the integer indexes in sorted\n\t\t\t// order.\n\t\t\top_map = arguments[0];\n\t\t\tindexes = [];\n\t\t\tObject.keys(op_map).forEach(function(index) { indexes.push(parseInt(index)); });\n\t\t\tindexes.sort(function(a, b) { return a - b; }); // numeric sort, not lexicographic\n\t\t} else if (arguments.length == 2) {\n\t\t\t// The arguments are just a single position and operation.\n\t\t\tindexes = [arguments[0]];\n\t\t\top_map = { };\n\t\t\top_map[arguments[0]] = arguments[1];\n\t\t} else {\n\t\t\tthrow new Error(\"Invalid Argument\")\n\t\t}\n\n\t\t// Form hunks.\n\t\tvar hunks = [];\n\t\tvar offset = 0;\n\t\tindexes.forEach(function(index) {\n\t\t\thunks.push({\n\t\t\t\toffset: index-offset,\n\t\t\t\tlength: 1,\n\t\t\t\top: new exports.MAP(op_map[index])\n\t\t\t})\n\t\t\toffset = index+1;\n\t\t});\n\t\texports.PATCH.apply(this, [hunks]);\n\t}\n\texports.ATINDEX.prototype = new exports.PATCH(\"__hmm__\"); // inherit prototype\n\nexports.MAP = function (op) {\n\tif (op == null) throw new Error(\"Invalid Argument\");\n\tthis.op = op;\n\tObject.freeze(this);\n}\nexports.MAP.prototype = Object.create(jot.Operation.prototype); // inherit\njot.add_op(exports.MAP, exports, 'MAP');\n\n//////////////////////////////////////////////////////////////////////////////\n\nexports.PATCH.prototype.inspect = function(depth) {\n\treturn util.format(\"<PATCH%s>\",\n\t\tthis.hunks.map(function(hunk) {\n\t\t\tif ((hunk.length == 1) && (hunk.op instanceof exports.MAP))\n\t\t\t\t// special format\n\t\t\t\treturn util.format(\" +%d %s\",\n\t\t\t\t\thunk.offset,\n\t\t\t\t\thunk.op.op.inspect(depth-1));\n\n\t\t\treturn util.format(\" +%dx%d %s\",\n\t\t\t\thunk.offset,\n\t\t\t\thunk.length,\n\t\t\t\thunk.op instanceof values.SET\n\t\t\t\t\t? util.format(\"%j\", hunk.op.value)\n\t\t\t\t\t: hunk.op.inspect(depth-1))\n\t\t}).join(\",\"));\n}\n\nexports.PATCH.prototype.visit = function(visitor) {\n\t// A simple visitor paradigm. 
Replace this operation instance itself\n\t// and any operation within it with the value returned by calling\n\t// visitor on itself, or if the visitor returns anything falsey\n\t// (probably undefined) then return the operation unchanged.\n\tvar ret = new exports.PATCH(this.hunks.map(function(hunk) {\n\t\tvar ret = shallow_clone(hunk);\n\t\tret.op = ret.op.visit(visitor);\n\t\treturn ret;\n\t}));\n\treturn visitor(ret) || ret;\n}\n\nexports.PATCH.prototype.internalToJSON = function(json, protocol_version) {\n\tjson.hunks = this.hunks.map(function(hunk) {\n\t\tvar ret = shallow_clone(hunk);\n\t\tret.op = ret.op.toJSON(undefined, protocol_version);\n\t\treturn ret;\n\t});\n}\n\nexports.PATCH.internalFromJSON = function(json, protocol_version, op_map) {\n\tvar hunks = json.hunks.map(function(hunk) {\n\t\tvar ret = shallow_clone(hunk);\n\t\tret.op = jot.opFromJSON(hunk.op, protocol_version, op_map);\n\t\treturn ret;\n\t});\n\treturn new exports.PATCH(hunks);\n}\n\nexports.PATCH.prototype.apply = function (document) {\n\t/* Applies the operation to a document. Returns a new sequence that is\n\t\t the same type as document but with the hunks applied. 
*/\n\t\n\tvar index = 0;\n\tvar ret = document.slice(0,0); // start with an empty document\n\t\n\tthis.hunks.forEach(function(hunk) {\n\t\tif (index + hunk.offset + hunk.length > document.length)\n\t\t\tthrow new Error(\"offset past end of document\");\n\n\t\t// Append unchanged content before this hunk.\n\t\tret = concat2(ret, document.slice(index, index+hunk.offset));\n\t\tindex += hunk.offset;\n\n\t\t// Append new content.\n\t\tvar new_value = hunk.op.apply(document.slice(index, index+hunk.length));\n\n\t\tif (typeof document == \"string\" && typeof new_value != \"string\")\n\t\t\tthrow new Error(\"operation yielded invalid substring\");\n\t\tif (Array.isArray(document) && !Array.isArray(new_value))\n\t\t\tthrow new Error(\"operation yielded invalid subarray\");\n\n\t\tret = concat2(ret, new_value);\n\n\t\t// Advance counter.\n\t\tindex += hunk.length;\n\t});\n\t\n\t// Append unchanged content after the last hunk.\n\tret = concat2(ret, document.slice(index));\n\t\n\treturn ret;\n}\n\nexports.PATCH.prototype.simplify = function () {\n\t/* Returns a new atomic operation that is a simpler version\n\t\t of this operation.*/\n\n\t// Simplify the hunks by removing any that don't make changes.\n\t// Adjust offsets.\n\n\t// Some of the composition methods require knowing if these operations\n\t// are operating on a string or an array. 
We might not know if the PATCH\n\tonly has sub-operations where we can't tell, like a MAP.\n\tvar doctype = null;\n\tthis.hunks.forEach(function (hunk) {\n\t\tif (hunk.op instanceof values.SET) {\n\t\t\tif (typeof hunk.op.value == \"string\")\n\t\t\t\tdoctype = \"string\";\n\t\t\telse if (Array.isArray(hunk.op.value))\n\t\t\t\tdoctype = \"array\";\n\t\t}\n\t});\n\n\t// Form a new set of merged hunks.\n\n\tvar hunks = [];\n\tvar doffset = 0;\n\n\tfunction handle_hunk(hunk) {\n\t\tvar op = hunk.op.simplify();\n\t\t\n\t\tif (op.isNoOp()) {\n\t\t\t// Drop it, but adjust future offsets.\n\t\t\tdoffset += hunk.offset + hunk.length;\n\t\t\treturn;\n\n\t\t} else if (hunk.length == 0 && hunk.op.get_length_change(hunk.length) == 0) {\n\t\t\t// The hunk does nothing. Drop it, but adjust future offsets.\n\t\t\tdoffset += hunk.offset;\n\t\t\treturn;\n\n\t\t} else if (hunks.length > 0\n\t\t\t&& hunk.offset == 0\n\t\t\t&& doffset == 0\n\t\t\t) {\n\t\t\t\n\t\t\t// The hunks are adjacent. We can combine them\n\t\t\t// if one of the operations is a SET and the other\n\t\t\t// is a SET or a MAP containing a SET.\n\t\t\t// We can't combine two adjacent MAP->SETs because\n\t\t\t// we wouldn't know whether the combined value (in\n\t\t\t// a SET) should be a string or an array.\n\t\t\tif ((hunks[hunks.length-1].op instanceof values.SET\n\t\t\t\t|| (hunks[hunks.length-1].op instanceof exports.MAP && hunks[hunks.length-1].op.op instanceof values.SET))\n\t\t\t && (hunk.op instanceof values.SET || \n\t\t\t \t  (hunk.op instanceof exports.MAP && hunk.op.op instanceof values.SET) )\n\t\t\t && doctype != null) {\n\n\t\t\t\tfunction get_value(hunk) {\n\t\t\t\t \tif (hunk.op instanceof values.SET) {\n\t\t\t\t \t\t// The value is just the SET's value.\n\t\t\t\t \t\treturn hunk.op.value;\n\t\t\t\t \t} else {\n\t\t\t\t \t\t// The value is a sequence of the hunk's length\n\t\t\t\t \t\t// where each element is the inner SET's value.\n\t\t\t\t\t \tvar value = 
[];\n\t\t\t\t\t \tfor (var i = 0; i < hunk.length; i++)\n\t\t\t\t\t \t\tvalue.push(hunk.op.op.value);\n\n\t\t\t\t\t \t// If the outer value is a string, reform it as\n\t\t\t\t\t \t// a string.\n\t\t\t\t\t \tif (doctype == \"string\")\n\t\t\t\t\t \t\tvalue = value.join(\"\");\n\t\t\t\t\t \treturn value;\n\t\t\t\t \t}\n\t\t\t\t}\n\n\t\t\t\thunks[hunks.length-1] = {\n\t\t\t\t\toffset: hunks[hunks.length-1].offset,\n\t\t\t\t\tlength: hunks[hunks.length-1].length + hunk.length,\n\t\t\t\t\top: new values.SET(\n\t\t\t\t\t\tconcat2(\n\t\t\t\t\t\t\tget_value(hunks[hunks.length-1]),\n\t\t\t\t\t\t\tget_value(hunk))\n\t\t\t\t\t\t)\n\t\t\t\t};\n\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t}\n\n\t\t// Preserve but adjust offset.\n\t\thunks.push({\n\t\t\toffset: hunk.offset+doffset,\n\t\t\tlength: hunk.length,\n\t\t\top: op\n\t\t});\n\t\tdoffset = 0;\n\t}\n\t\n\tthis.hunks.forEach(handle_hunk);\n\tif (hunks.length == 0)\n\t\treturn new values.NO_OP();\n\t\n\treturn new exports.PATCH(hunks);\n}\n\nexports.PATCH.prototype.drilldown = function(index_or_key) {\n\tif (!Number.isInteger(index_or_key) || index_or_key < 0)\n\t\treturn new values.NO_OP();\n\tvar index = 0;\n\tvar ret = null;\n\tthis.hunks.forEach(function(hunk) {\n\t\tindex += hunk.offset;\n\t\tif (index <= index_or_key && index_or_key < index+hunk.length)\n\t\t\tret = hunk.op.drilldown(index_or_key-index);\n\t\tindex += hunk.length;\n\t})\n\treturn ret ? ret : new values.NO_OP();\n}\n\nexports.PATCH.prototype.inverse = function (document) {\n\t/* Returns a new atomic operation that is the inverse of this operation,\n\t   given the state of the document before this operation applies.\n\t   The inverse simply inverts the operations on the hunks, but the\n\t   lengths have to be fixed. 
*/\n\tvar offset = 0;\n\treturn new exports.PATCH(this.hunks.map(function(hunk) {\n\t\tvar newhunk = {\n\t\t\toffset: hunk.offset,\n\t\t\tlength: hunk.length + hunk.op.get_length_change(hunk.length),\n\t\t\top: hunk.op.inverse(document.slice(offset+hunk.offset, offset+hunk.offset+hunk.length))\n\t\t}\n\t\toffset += hunk.offset + hunk.length;\n\t\treturn newhunk;\n\t}));\n}\n\nfunction compose_patches(a, b) {\n\t// Compose two patches. We do this as if we are zipping up two sequences,\n\t// where the index into the (hypothetical) sequence that results *after*\n\t// a is applied lines up with the index into the (hypothetical) sequence\n\t// before b is applied.\n\t\n\tvar hunks = [];\n\tvar index = 0;\n\n\tfunction make_state(op, side) {\n\t\treturn {\n\t\t\tindex: 0,\n\t\t\thunks: op.hunks.slice(), // clone\n\t\t\tempty: function() { return this.hunks.length == 0; },\n\t\t\ttake: function() {\n\t\t\t\tvar curend = this.end();\n\t\t\t\tvar h = this.hunks.shift();\n\t\t\t\thunks.push({\n\t\t\t\t\toffset: this.index + h.offset - index,\n\t\t\t\t\tlength: h.length,\n\t\t\t\t\top: h.op\n\t\t\t\t});\n\t\t\t\tthis.index = curend;\n\t\t\t\tindex = this.index;\n\t\t\t},\n\t\t\tskip: function() {\n\t\t\t\tthis.index = this.end();\n\t\t\t\tthis.hunks.shift();\n\t\t\t},\n\t\t\tstart: function() {\n\t\t\t\treturn this.index + this.hunks[0].offset;\n\t\t\t},\n\t\t\tend: function() {\n\t\t\t\tvar h = this.hunks[0];\n\t\t\t\tvar ret = this.index + h.offset + h.length;\n\t\t\t\tif (side == 0)\n\t\t\t\t\tret += h.op.get_length_change(h.length);\n\t\t\t\treturn ret;\n\t\t\t}\n\t\t}\n\t}\n\t\n\tvar a_state = make_state(a, 0),\n\t    b_state = make_state(b, 1);\n\t\n\twhile (!a_state.empty() || !b_state.empty()) {\n\t\t// Only operations in 'a' are remaining.\n\t\tif (b_state.empty()) {\n\t\t\ta_state.take();\n\t\t\tcontinue;\n\t\t}\n\n\t\t// Only operations in 'b' are remaining.\n\t\tif (a_state.empty()) {\n\t\t\tb_state.take();\n\t\t\tcontinue;\n\t\t}\n\n\t\t// The next hunk in 'a' 
precedes the next hunk in 'b'.\n\t\tif (a_state.end() <= b_state.start()) {\n\t\t\ta_state.take();\n\t\t\tcontinue;\n\t\t}\n\n\t\t// The next hunk in 'b' precedes the next hunk in 'a'.\n\t\tif (b_state.end() <= a_state.start()) {\n\t\t\tb_state.take();\n\t\t\tcontinue;\n\t\t}\n\n\t\t// There's overlap.\n\n\t\tvar dx_start = b_state.start() - a_state.start();\n\t\tvar dx_end = b_state.end() - a_state.end();\n\t\tif (dx_start >= 0 && dx_end <= 0) {\n\t\t\t// 'a' wholly encompasses 'b', including the case where they\n\t\t\t// changed the exact same elements.\n\n\t\t\t// Compose a's and b's suboperations using\n\t\t\t// atomic_compose. If the two hunks changed the exact same\n\t\t\t// elements, then we can compose the two operations directly.\n\t\t\tvar b_op = b_state.hunks[0].op;\n\t\t\tvar dx = b_op.get_length_change(b_state.hunks[0].length);\n\t\t\tif (dx_start != 0 || dx_end != 0) {\n\t\t\t\t// If a starts before b, wrap b_op in a PATCH operation\n\t\t\t\t// so that they can be considered to start at the same\n\t\t\t\t// location.\n\t\t\t\tb_op = new exports.PATCH([{ offset: dx_start, length: b_state.hunks[0].length, op: b_op }]);\n\t\t\t}\n\n\t\t\t// Try an atomic composition.\n\t\t\tvar ab = a_state.hunks[0].op.atomic_compose(b_op);\n\t\t\tif (!ab && dx_start == 0 && dx_end == 0 && b_op instanceof exports.MAP && b_op.op instanceof values.SET)\n\t\t\t\tab = b_op;\n\n\t\t\tif (ab) {\n\t\t\t\t// Replace the 'a' operation with itself composed with b's operation.\n\t\t\t\t// Don't take it yet because there could be more coming on b's\n\t\t\t\t// side that is within the range of 'a'.\n\t\t\t\ta_state.hunks[0] = {\n\t\t\t\t\toffset: a_state.hunks[0].offset,\n\t\t\t\t\tlength: a_state.hunks[0].length,\n\t\t\t\t\top: ab\n\t\t\t\t};\n\n\t\t\t\t// Since the a_state hunks have been rewritten, the indexing needs\n\t\t\t\t// to be adjusted.\n\t\t\t\tb_state.index += dx;\n\n\t\t\t\t// Drop b.\n\t\t\t\tb_state.skip();\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\t// If no atomic 
composition is possible, another case may work below\n\t\t\t// by decomposing the operations.\n\t\t}\n\n\t\t// There is some sort of other overlap. We can handle this by attempting\n\t\t// to decompose the operations.\n\t\tif (dx_start > 0) {\n\t\t\t// 'a' begins first. Attempt to decompose it into two operations.\n\t\t\t// Indexing of dx_start is based on the value *after* 'a' applies,\n\t\t\t// so we have to decompose it based on new-value indexes.\n\t\t\tvar decomp = a_state.hunks[0].op.decompose(true, dx_start);\n\n\t\t\t// But we need to know the length of the original hunk so that\n\t\t\t// the operation causes its final length to be dx_start.\n\t\t\tvar alen0;\n\t\t\tif (a_state.hunks[0].op.get_length_change(a_state.hunks[0].length) == 0)\n\t\t\t\t// This is probably a MAP. If the hunk's length is dx_start\n\t\t\t\t// and the operation causes no length change, then that's\n\t\t\t\t// the right length!\n\t\t\t\talen0 = dx_start;\n\t\t\telse\n\t\t\t\treturn null;\n\n\t\t\t// Take the left part of the decomposition.\n\t\t\thunks.push({\n\t\t\t\toffset: a_state.index + a_state.hunks[0].offset - index,\n\t\t\t\tlength: alen0,\n\t\t\t\top: decomp[0]\n\t\t\t});\n\t\t\ta_state.index = a_state.start() + dx_start;\n\t\t\tindex = a_state.index;\n\n\t\t\t// Return the right part of the decomposition to the hunks array.\n\t\t\ta_state.hunks[0] = {\n\t\t\t\toffset: 0,\n\t\t\t\tlength: a_state.hunks[0].length - alen0,\n\t\t\t\top: decomp[1]\n\t\t\t};\n\t\t\tcontinue;\n\t\t}\n\n\t\tif (dx_start < 0) {\n\t\t\t// 'b' begins first. 
Attempt to decompose it into two operations.\n\t\t\tvar decomp = b_state.hunks[0].op.decompose(false, -dx_start);\n\n\t\t\t// Take the left part of the decomposition.\n\t\t\thunks.push({\n\t\t\t\toffset: b_state.index + b_state.hunks[0].offset - index,\n\t\t\t\tlength: (-dx_start),\n\t\t\t\top: decomp[0]\n\t\t\t});\n\t\t\tb_state.index = b_state.start() + (-dx_start);\n\t\t\tindex = b_state.index;\n\n\t\t\t// Return the right part of the decomposition to the hunks array.\n\t\t\tb_state.hunks[0] = {\n\t\t\t\toffset: 0,\n\t\t\t\tlength: b_state.hunks[0].length - (-dx_start),\n\t\t\t\top: decomp[1]\n\t\t\t};\n\t\t\tcontinue;\n\t\t}\n\n\t\t// The two hunks start at the same location but have different\n\t\t// lengths.\n\t\tif (dx_end > 0) {\n\t\t\t// 'b' wholly encompasses 'a'.\n\t\t\tif (b_state.hunks[0].op instanceof values.SET) {\n\t\t\t\t// 'b' is replacing everything 'a' touched with\n\t\t\t\t// new elements, so the changes in 'a' can be\n\t\t\t\t// dropped. But b's length has to be updated\n\t\t\t\t// if 'a' changed the length of its subsequence.\n\t\t\t\tvar dx = a_state.hunks[0].op.get_length_change(a_state.hunks[0].length);\n\t\t\t\tb_state.hunks[0] = {\n\t\t\t\t\toffset: b_state.hunks[0].offset,\n\t\t\t\t\tlength: b_state.hunks[0].length - dx,\n\t\t\t\t\top: b_state.hunks[0].op\n\t\t\t\t};\n\t\t\t\ta_state.skip();\n\t\t\t\ta_state.index -= dx;\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t}\n\n\t\t// TODO.\n\n\t\t// There is no atomic composition.\n\t\treturn null;\n\t}\n\n\treturn new exports.PATCH(hunks).simplify();\n}\n\nexports.PATCH.prototype.atomic_compose = function (other) {\n\t/* Creates a new atomic operation that has the same result as this\n\t\t and other applied in sequence (this first, other after). Returns\n\t\t null if no atomic operation is possible. 
*/\n\n\t// a PATCH composes with a PATCH\n\tif (other instanceof exports.PATCH)\n\t\treturn compose_patches(this, other);\n\n\t// No composition possible.\n\treturn null;\n}\n\nfunction rebase_patches(a, b, conflictless) {\n\t// Rebasing two PATCHes works like compose, except that we are aligning\n\t// 'a' and 'b' both on the state of the document before each has applied.\n\t//\n\t// We do this as if we are zipping up two sequences, where the index into\n\t// the (hypothetical) sequence, before either operation applies, lines\n\t// up across the two operations.\n\t\n\tfunction make_state(op) {\n\t\treturn {\n\t\t\told_index: 0,\n\t\t\told_hunks: op.hunks.slice(),\n\t\t\tdx_index: 0,\n\t\t\tnew_hunks: [],\n\t\t\tempty: function() { return this.old_hunks.length == 0; },\n\t\t\ttake: function(other, hold_dx_index) {\n\t\t\t\tvar h = this.old_hunks.shift();\n\t\t\t\tthis.new_hunks.push({\n\t\t\t\t\toffset: h.offset + this.dx_index,\n\t\t\t\t\tlength: h.length+(h.dlength||0),\n\t\t\t\t\top: h.op\n\t\t\t\t});\n\t\t\t\tthis.dx_index = 0;\n\t\t\t\tthis.old_index += h.offset + h.length;\n\t\t\t\tif (!hold_dx_index) other.dx_index += h.op.get_length_change(h.length);\n\t\t\t},\n\t\t\tskip: function() {\n\t\t\t\tthis.old_index = this.end();\n\t\t\t\tthis.old_hunks.shift();\n\t\t\t},\n\t\t\tstart: function() {\n\t\t\t\treturn this.old_index + this.old_hunks[0].offset;\n\t\t\t},\n\t\t\tend: function() {\n\t\t\t\tvar h = this.old_hunks[0];\n\t\t\t\treturn this.old_index + h.offset + h.length;\n\t\t\t}\n\t\t}\n\t}\n\t\n\tvar a_state = make_state(a),\n\t    b_state = make_state(b);\n\t\n\twhile (!a_state.empty() || !b_state.empty()) {\n\t\t// Only operations in 'a' are remaining.\n\t\tif (b_state.empty()) {\n\t\t\ta_state.take(b_state);\n\t\t\tcontinue;\n\t\t}\n\n\t\t// Only operations in 'b' are remaining.\n\t\tif (a_state.empty()) {\n\t\t\tb_state.take(a_state);\n\t\t\tcontinue;\n\t\t}\n\n\t\t// Two insertions at the same location.\n\t\tif (a_state.start() == 
b_state.start()\n\t\t\t&& a_state.old_hunks[0].length == 0\n\t\t\t&& b_state.old_hunks[0].length == 0) {\n\t\t\t\n\t\t\t// This is a conflict because we don't know which side\n\t\t\t// gets inserted first.\n\t\t\tif (!conflictless)\n\t\t\t\treturn null;\n\n\t\t\t// Or we can resolve the conflict.\n\t\t\tif (jot.cmp(a_state.old_hunks[0].op, b_state.old_hunks[0].op) == 0) {\n\t\t\t\t// If the inserted values are identical, we can't make a decision\n\t\t\t\t// about which goes first, so we only take one. Which one we take\n\t\t\t\t// doesn't matter because the document comes out the same either way.\n\t\t\t\t// This logic is actually required to get complex merges to work.\n\t\t\t\tb_state.take(b_state);\n\t\t\t\ta_state.skip();\n\t\t\t} else if (jot.cmp(a_state.old_hunks[0].op, b_state.old_hunks[0].op) < 0) {\n\t\t\t\ta_state.take(b_state);\n\t\t\t} else {\n\t\t\t\tb_state.take(a_state);\n\t\t\t}\n\t\t\tcontinue;\n\t\t}\n\n\n\t\t// The next hunk in 'a' precedes the next hunk in 'b'.\n\t\t// Take 'a' and adjust b's next offset.\n\t\tif (a_state.end() <= b_state.start()) {\n\t\t\ta_state.take(b_state);\n\t\t\tcontinue;\n\t\t}\n\n\t\t// The next hunk in 'b' precedes the next hunk in 'a'.\n\t\t// Take 'b' and adjust a's next offset.\n\t\tif (b_state.end() <= a_state.start()) {\n\t\t\tb_state.take(a_state);\n\t\t\tcontinue;\n\t\t}\n\n\t\t// There's overlap.\n\n\t\tvar dx_start = b_state.start() - a_state.start();\n\t\tvar dx_end = b_state.end() - a_state.end();\n\n\t\t// They both affected the exact same region, so just rebase the\n\t\t// inner operations and update lengths.\n\t\tif (dx_start == 0 && dx_end == 0) {\n\t\t\t// When conflictless is supplied with a prior document state,\n\t\t\t// the state represents the sequence, so we have to dig into\n\t\t\t// it and pass an inner value\n\t\t\tvar conflictless2 = !conflictless ? 
null : shallow_clone(conflictless);\n\t\t\tif (conflictless2 && \"document\" in conflictless2)\n\t\t\t\tconflictless2.document = conflictless2.document.slice(a_state.start(), a_state.end());\n\n\t\t\tvar ar = a_state.old_hunks[0].op.rebase(b_state.old_hunks[0].op, conflictless2);\n\t\t\tvar br = b_state.old_hunks[0].op.rebase(a_state.old_hunks[0].op, conflictless2);\n\t\t\tif (ar == null || br == null)\n\t\t\t\treturn null;\n\n\t\t\ta_state.old_hunks[0] = {\n\t\t\t\toffset: a_state.old_hunks[0].offset,\n\t\t\t\tlength: a_state.old_hunks[0].length,\n\t\t\t\tdlength: b_state.old_hunks[0].op.get_length_change(b_state.old_hunks[0].length),\n\t\t\t\top: ar\n\t\t\t}\n\t\t\tb_state.old_hunks[0] = {\n\t\t\t\toffset: b_state.old_hunks[0].offset,\n\t\t\t\tlength: b_state.old_hunks[0].length,\n\t\t\t\tdlength: a_state.old_hunks[0].op.get_length_change(a_state.old_hunks[0].length),\n\t\t\t\top: br\n\t\t\t}\n\t\t\ta_state.take(b_state, true);\n\t\t\tb_state.take(a_state, true);\n\t\t\tcontinue;\n\t\t}\n\n\t\t// Other overlaps generate conflicts.\n\t\tif (!conflictless)\n\t\t\treturn null;\n\n\t\t// Decompose whichever one starts first into two operations.\n\t\tif (dx_start > 0) {\n\t\t\t// a starts first.\n\t\t\tvar hunk = a_state.old_hunks.shift();\n\t\t\tvar decomp = hunk.op.decompose(false, dx_start);\n\n\t\t\t// Unshift the right half of the decomposition.\n\t\t\ta_state.old_hunks.unshift({\n\t\t\t\toffset: 0,\n\t\t\t\tlength: hunk.length-dx_start,\n\t\t\t\top: decomp[1]\n\t\t\t});\n\n\t\t\t// Unshift the left half of the decomposition.\n\t\t\ta_state.old_hunks.unshift({\n\t\t\t\toffset: hunk.offset,\n\t\t\t\tlength: dx_start,\n\t\t\t\top: decomp[0]\n\t\t\t});\n\n\t\t\t// Since we know the left half occurs first, take it.\n\t\t\ta_state.take(b_state)\n\n\t\t\t// Start the iteration over -- we should end up at the block\n\t\t\t// for two hunks that modify the exact same range.\n\t\t\tcontinue;\n\n\t\t} else if (dx_start < 0) {\n\t\t\t// b starts first.\n\t\t\tvar hunk = 
b_state.old_hunks.shift();\n\t\t\tvar decomp = hunk.op.decompose(false, -dx_start);\n\n\t\t\t// Unshift the right half of the decomposition.\n\t\t\tb_state.old_hunks.unshift({\n\t\t\t\toffset: 0,\n\t\t\t\tlength: hunk.length+dx_start,\n\t\t\t\top: decomp[1]\n\t\t\t});\n\n\t\t\t// Unshift the left half of the decomposition.\n\t\t\tb_state.old_hunks.unshift({\n\t\t\t\toffset: hunk.offset,\n\t\t\t\tlength: -dx_start,\n\t\t\t\top: decomp[0]\n\t\t\t});\n\n\t\t\t// Since we know the left half occurs first, take it.\n\t\t\tb_state.take(a_state)\n\n\t\t\t// Start the iteration over -- we should end up at the block\n\t\t\t// for two hunks that modify the exact same range.\n\t\t\tcontinue;\n\t\t}\n\n\t\t// They start at the same point, but don't end at the same\n\t\t// point. Decompose the longer one.\n\t\telse if (dx_end < 0) {\n\t\t\t// a is longer.\n\t\t\tvar hunk = a_state.old_hunks.shift();\n\t\t\tvar decomp = hunk.op.decompose(false, hunk.length+dx_end);\n\n\t\t\t// Unshift the right half of the decomposition.\n\t\t\ta_state.old_hunks.unshift({\n\t\t\t\toffset: 0,\n\t\t\t\tlength: -dx_end,\n\t\t\t\top: decomp[1]\n\t\t\t});\n\n\t\t\t// Unshift the left half of the decomposition.\n\t\t\ta_state.old_hunks.unshift({\n\t\t\t\toffset: hunk.offset,\n\t\t\t\tlength: hunk.length+dx_end,\n\t\t\t\top: decomp[0]\n\t\t\t});\n\n\t\t\t// Start the iteration over -- we should end up at the block\n\t\t\t// for two hunks that modify the exact same range.\n\t\t\tcontinue;\n\t\t} else if (dx_end > 0) {\n\t\t\t// b is longer.\n\t\t\tvar hunk = b_state.old_hunks.shift();\n\t\t\tvar decomp = hunk.op.decompose(false, hunk.length-dx_end);\n\n\t\t\t// Unshift the right half of the decomposition.\n\t\t\tb_state.old_hunks.unshift({\n\t\t\t\toffset: 0,\n\t\t\t\tlength: dx_end,\n\t\t\t\top: decomp[1]\n\t\t\t});\n\n\t\t\t// Unshift the left half of the decomposition.\n\t\t\tb_state.old_hunks.unshift({\n\t\t\t\toffset: hunk.offset,\n\t\t\t\tlength: hunk.length-dx_end,\n\t\t\t\top: 
decomp[0]\n\t\t\t});\n\n\t\t\t// Start the iteration over -- we should end up at the block\n\t\t\t// for two hunks that modify the exact same range.\n\t\t\tcontinue;\n\t\t}\n\n\t\tthrow new Error(\"We thought this line was not reachable.\");\n\t}\n\n\treturn [\n\t\tnew exports.PATCH(a_state.new_hunks).simplify(),\n\t\tnew exports.PATCH(b_state.new_hunks).simplify() ];\n}\n\nexports.PATCH.prototype.rebase_functions = [\n\t/* Transforms this operation so that it can be composed *after* the other\n\t\t operation to yield the same logical effect. Returns null on conflict. */\n\n\t[exports.PATCH, function(other, conflictless) {\n\t\t// Return the new operations.\n\t\treturn rebase_patches(this, other, conflictless);\n\t}]\n];\n\n\nexports.PATCH.prototype.get_length_change = function (old_length) {\n\t// Support routine for PATCH that returns the change in\n\t// length to a sequence if this operation is applied to it.\n\tvar dlen = 0;\n\tthis.hunks.forEach(function(hunk) {\n\t\tdlen += hunk.op.get_length_change(hunk.length);\n\t});\n\treturn dlen;\n}\n\n//////////////////////////////////////////////////////////////////////////////\n\nexports.MAP.prototype.inspect = function(depth) {\n\treturn util.format(\"<MAP %s>\", this.op.inspect(depth-1));\n}\n\nexports.MAP.prototype.visit = function(visitor) {\n\t// A simple visitor paradigm. 
Replace this operation instance itself\n\t// and any operation within it with the value returned by calling\n\t// visitor on itself, or if the visitor returns anything falsey\n\t// (probably undefined) then return the operation unchanged.\n\tvar ret = new exports.MAP(this.op.visit(visitor));\n\treturn visitor(ret) || ret;\n}\n\nexports.MAP.prototype.internalToJSON = function(json, protocol_version) {\n\tjson.op = this.op.toJSON(undefined, protocol_version);\n}\n\nexports.MAP.internalFromJSON = function(json, protocol_version, op_map) {\n\treturn new exports.MAP(jot.opFromJSON(json.op, protocol_version, op_map));\n}\n\nexports.MAP.prototype.apply = function (document) {\n\t/* Applies the operation to a document. Returns a new sequence that is\n\t\t the same type as document but with each element modified. */\n\n \t// Turn string into array of characters.\n\tvar d;\n\tif (typeof document == 'string')\n\t\td = document.split(\"\");\n\n\t// Clone array.\n\telse\n\t\td = document.slice(); // clone\n\t\n\t// Apply operation to each element.\n\tfor (var i = 0; i < d.length; i++) {\n\t\td[i] = this.op.apply(d[i]);\n\n\t\t// An operation on strings must return a single character.\n\t\tif (typeof document == 'string' && (typeof d[i] != 'string' || d[i].length != 1))\n\t\t\tthrow new Error(\"Invalid operation: String type or length changed.\")\n\t}\n\n\t// Turn the array of characters back into a string.\n\tif (typeof document == 'string')\n\t\treturn d.join(\"\");\n\n\treturn d;\n}\n\nexports.MAP.prototype.simplify = function () {\n\t/* Returns a new atomic operation that is a simpler version\n\t\t of this operation. */\n\tvar op = this.op.simplify();\n\tif (op instanceof values.NO_OP)\n\t\treturn new values.NO_OP();\n\treturn this;\n}\n\nexports.MAP.prototype.drilldown = function(index_or_key) {\n\tif (!Number.isInteger(index_or_key) || index_or_key < 0)\n\t\treturn new values.NO_OP();\n\treturn this.op;\n}\n\nexports.MAP.prototype.inverse = function (document) {\n\t/* Returns a new atomic operation that is the inverse of this operation. */\n\n\tif (document.length == 0)\n\t\treturn new values.NO_OP();\n\tif (document.length == 1)\n\t\treturn new exports.MAP(this.op.inverse(document[0]));\n\n\t// Since the inverse depends on the value of the document and the\n\t// elements of document may not all be the same, we have to explode\n\t// this out into individual operations.\n\tvar hunks = [];\n\tvar _this = this;\n\tif (typeof document == 'string')\n\t\tdocument = document.split(\"\");\n\tdocument.forEach(function(element) {\n\t\thunks.push({\n\t\t\toffset: 0,\n\t\t\tlength: 1,\n\t\t\top: _this.op.inverse(element)\n\t\t});\n\t});\n\treturn new exports.PATCH(hunks);\n}\n\nexports.MAP.prototype.atomic_compose = function (other) {\n\t/* Creates a new atomic operation that has the same result as this\n\t\t and other applied in sequence (this first, other after). Returns\n\t\t null if no atomic operation is possible. */\n\n\t// two MAPs with atomically composable sub-operations\n\tif (other instanceof exports.MAP) {\n\t\tvar op2 = this.op.atomic_compose(other.op);\n\t\tif (op2)\n\t\t\treturn new exports.MAP(op2);\n\t}\n\n\t// No composition possible.\n\treturn null;\n}\n\nexports.MAP.prototype.rebase_functions = [\n\t[exports.MAP, function(other, conflictless) {\n\t\t// Two MAPs. 
The rebase succeeds only if a rebase on the\n\t\t// inner operations succeeds.\n\t\tvar opa;\n\t\tvar opb;\n\n\t\t// If conflictless is null or there is no prior document\n\t\t// state, then it's safe to pass conflictless into the\n\t\t// inner operations.\n\t\tif (!conflictless || !(\"document\" in conflictless)) {\n\t\t\topa = this.op.rebase(other.op, conflictless);\n\t\t\topb = other.op.rebase(this.op, conflictless);\n\n\t\t// If there is a single element in the prior document\n\t\t// state, then unwrap it for the inner operations.\n\t\t} else if (conflictless.document.length == 1) {\n\t\t\tvar conflictless2 = shallow_clone(conflictless);\n\t\t\tconflictless2.document = conflictless2.document[0];\n\n\t\t\topa = this.op.rebase(other.op, conflictless2);\n\t\t\topb = other.op.rebase(this.op, conflictless2);\n\n\t\t// If the prior document state is an empty array, then\n\t\t// we know these operations are NO_OPs anyway.\n\t\t} else if (conflictless.document.length == 0) {\n\t\t\treturn [\n\t\t\t\tnew jot.NO_OP(),\n\t\t\t\tnew jot.NO_OP()\n\t\t\t];\n\n\t\t// The prior document state is an array of more than one\n\t\t// element. In order to pass the prior document state into\n\t\t// the inner operations, we have to try it for each element\n\t\t// of the prior document state. If they all yield the same\n\t\t// operation, then we can use that operation. 
Otherwise the\n\t\t// rebases are too sensitive on prior document state and\n\t\t// we can't rebase.\n\t\t} else {\n\t\t\tvar ok = true;\n\t\t\tfor (var i = 0; i < conflictless.document.length; i++) {\n\t\t\t\tvar conflictless2 = shallow_clone(conflictless);\n\t\t\t\tconflictless2.document = conflictless.document[i];\n\n\t\t\t\tvar a = this.op.rebase(other.op, conflictless2);\n\t\t\t\tvar b = other.op.rebase(this.op, conflictless2);\n\t\t\t\tif (i == 0) {\n\t\t\t\t\topa = a;\n\t\t\t\t\topb = b;\n\t\t\t\t} else {\n\t\t\t\t\tif (!deepEqual(opa, a, { strict: true }))\n\t\t\t\t\t\tok = false;\n\t\t\t\t\tif (!deepEqual(opb, b, { strict: true }))\n\t\t\t\t\t\tok = false;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (!ok) {\n\t\t\t\t// The rebases were not the same for all elements. Decompose\n\t\t\t\t// the MAPs into PATCHes with individual hunks for each index,\n\t\t\t\t// and then rebase those.\n\t\t\t\tvar _this = this;\n\t\t\t\topa = new exports.PATCH(\n\t\t\t\t\tconflictless.document.map(function(item) {\n\t\t\t\t\t\treturn {\n\t\t\t\t\t\t\toffset: 0,\n\t\t\t\t\t\t\tlength: 1,\n\t\t\t\t\t\t\top: _this\n\t\t\t\t\t\t}\n\t\t\t\t\t}));\n\t\t\t\topb = new exports.PATCH(\n\t\t\t\t\tconflictless.document.map(function(item) {\n\t\t\t\t\t\treturn {\n\t\t\t\t\t\t\toffset: 0,\n\t\t\t\t\t\t\tlength: 1,\n\t\t\t\t\t\t\top: other\n\t\t\t\t\t\t}\n\t\t\t\t\t}));\n\t\t\t\treturn rebase_patches(opa, opb, conflictless);\n\t\t\t}\n\t\t}\n\n\n\t\tif (opa && opb)\n\t\t\treturn [\n\t\t\t\t(opa instanceof values.NO_OP) ? new values.NO_OP() : new exports.MAP(opa),\n\t\t\t\t(opb instanceof values.NO_OP) ? 
new values.NO_OP() : new exports.MAP(opb)\n\t\t\t];\n\t}],\n\n\t[exports.PATCH, function(other, conflictless) {\n\t\t// Rebase MAP and PATCH.\n\n\t\t// If the PATCH has no hunks, then the rebase is trivial.\n\t\tif (other.hunks.length == 0)\n\t\t\treturn [this, other];\n\n\t\t// If the PATCH hunks are all MAP operations and the rebases\n\t\t// between this and the hunk operations are all the same,\n\t\t// *and* the rebase of this is the same as this, then we can\n\t\t// use that. If the rebase is different from this operation,\n\t\t// then we can't use it because it wouldn't have the same\n\t\t// effect on parts of the sequence that the PATCH does not\n\t\t// affect.\n\t\tvar _this = this;\n\t\tvar rebase_result;\n\t\tother.hunks.forEach(function(hunk) {\n\t\t\t// Once the rebase has been flagged as not possible,\n\t\t\t// skip the remaining hunks.\n\t\t\tif (rebase_result === null)\n\t\t\t\treturn;\n\n\t\t\tif (!(hunk.op instanceof exports.MAP)) {\n\t\t\t\t// Rebase is not possible. Flag that it is not possible.\n\t\t\t\trebase_result = null;\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\tvar r = _this.rebase_functions[0][1].call(_this, hunk.op);\n\t\t\tif (!r) {\n\t\t\t\t// Rebase failed. Flag it.\n\t\t\t\trebase_result = null;\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\tif (typeof rebase_result == \"undefined\") {\n\t\t\t\t// This is the first one.\n\t\t\t\trebase_result = r;\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\t// Check that it is equal to the last one. 
If not, flag.\n\t\t\tif (!deepEqual(rebase_result[0], r[0], { strict: true })\n\t\t\t\t|| !deepEqual(rebase_result[1], r[1], { strict: true }))\n\t\t\t\trebase_result = null;\n\t\t})\n\t\tif (rebase_result != null && deepEqual(rebase_result[0], this, { strict: true })) {\n\t\t\t// Rebase was possible and the same for every operation.\n\t\t\treturn [\n\t\t\t\trebase_result[0],\n\t\t\t\tnew exports.PATCH(other.hunks.map(function(hunk) {\n\t\t\t\t\thunk = shallow_clone(hunk);\n\t\t\t\t\thunk.op = rebase_result[1];\n\t\t\t\t\treturn hunk;\n\t\t\t\t})),\n\t\t\t]\n\t\t}\n\n\t\t// Only a conflictless rebase is possible in other cases,\n\t\t// and prior document state is required.\n\t\tif (conflictless && \"document\" in conflictless) {\n\t\t\t// Wrap MAP in a PATCH that spans the whole sequence, and then\n\t\t\t// use rebase_patches. This will jump ahead to comparing the\n\t\t\t// MAP to the PATCH's inner operations.\n\t\t\t//\n\t\t\t// NOTE: Operations that are allowed inside PATCH (including MAP)\n\t\t\t// normally must not rebase to an operation that is not allowed\n\t\t\t// inside PATCH. Returning a PATCH here would therefore normally\n\t\t\t// not be valid. 
We've partially satisfied the contract for PATCH\n\t\t\t// by defining PATCH.get_length_change, but not PATCH.decompose.\n\t\t\t// That seems to be enough.\n\t\t\treturn rebase_patches(\n\t\t\t\tnew exports.PATCH([{ offset: 0, length: conflictless.document.length, op: this}]),\n\t\t\t\tother,\n\t\t\t\tconflictless);\n\n\t\t\t/*\n\t\t\t// Alternatively:\n\t\t\t// Since the MAP doesn't change the number of elements in the sequence,\n\t\t\t// it makes sense to have the MAP go first.\n\t\t\t// But we don't do this because we have to return a SET so that LIST.rebase\n\t\t\t// doesn't go into infinite recursion by returning a LIST from a rebase,\n\t\t\t// and SET loses logical structure.\n\t\t\treturn [\n\t\t\t\t// MAP is coming second, so create an operation that undoes\n\t\t\t\t// the patch, applies the map, and then applies the patch.\n\t\t\t\t// See values.MATH.rebase for why we return a SET.\n\t\t\t\tnew jot.SET(this.compose(other).apply(conflictless.document)),\n\t\t\t\t//other.inverse(conflictless.document).compose(this).compose(other),\n\n\t\t\t\t// PATCH is coming second, which is right\n\t\t\t\tother\n\t\t\t];\n\t\t\t*/\n\t\t}\n\t}]\n];\n\nexports.MAP.prototype.get_length_change = function (old_length) {\n\t// Support routine for PATCH that returns the change in\n\t// length to a sequence if this operation is applied to it.\n\treturn 0;\n}\n\nexports.MAP.prototype.decompose = function (in_out, at_index) {\n\t// Support routine for when this operation is used as a hunk's\n\t// op in sequences.PATCH (i.e. its document is a string or array\n\t// sub-sequence) that returns a decomposition of the operation\n\t// into two operations, one that applies on the left of the\n\t// sequence and one on the right of the sequence, such that\n\t// the length of the input (if !in_out) or output (if in_out)\n\t// of the left operation is at_index, i.e. 
the split point\n\t// at_index is relative to the document either before (if\n\t// !in_out) or after (if in_out) this operation applies.\n\t//\n\t// Since MAP applies to all elements, the decomposition\n\t// is trivial.\n\treturn [this, this];\n}\n\n////\n\nexports.createRandomOp = function(doc, context) {\n\t// Not all inner operations are valid for PATCH and MAP. When they\n\t// apply to arrays, any inner operation is valid. But when they\n\t// apply to strings, the inner operations must yield a string\n\t// and the inner operation of a MAP must yield a length-one string.\n\tcontext = (typeof doc == \"string\") ? \"string\" : \"array\";\n\n\t// Create a random operation that could apply to doc.\n\t// Choose uniformly across various options.\n\tvar ops = [];\n\n\t// Construct a PATCH.\n\tops.push(function() {\n\t\tvar hunks = [];\n\t\tvar dx = 0;\n\n\t\twhile (dx < doc.length) {\n\t\t\t// Construct a random hunk. First select a range in the\n\t\t\t// document to modify. We can start at any element index,\n\t\t\t// or one past the end to insert at the end.\n\t\t\tvar start = dx + Math.floor(Math.random() * (doc.length+1-dx));\n\t\t\tvar old_length = (start < doc.length) ? Math.floor(Math.random() * (doc.length - start + 1)) : 0;\n\t\t\tvar old_value = doc.slice(start, start+old_length);\n\n\t\t\t// Choose an inner operation. Only ops in values can be used\n\t\t\t// because ops within PATCH must support get_length_change.\n\t\t\tvar op = values.createRandomOp(old_value, context);\n\n\t\t\t// Push the hunk.\n\t\t\thunks.push({\n\t\t\t\toffset: start-dx,\n\t\t\t\tlength: old_length,\n\t\t\t\top: op\n\t\t\t});\n\n\t\t\tdx = start + old_length;\n\n\t\t\t// Create another hunk?\n\t\t\tif (Math.random() < .25)\n\t\t\t\tbreak;\n\t\t}\n\n\t\treturn new exports.PATCH(hunks);\n\t});\n\n\t// Construct a MAP.\n\tops.push(function() {\n\t\twhile (true) {\n\t\t\t// Choose a random element to use as the template for the\n\t\t\t// random operation. 
If the sequence is empty, use \"?\" or null.\n\t\t\t// Don't use an empty string because we can't replace an\n\t\t\t// element of the string with an empty string.\n\t\t\tvar random_elem;\n\t\t\tif (doc.length == 0) {\n\t\t\t\tif (typeof doc === \"string\")\n\t\t\t\t\trandom_elem = \"?\";\n\t\t\t\telse if (Array.isArray(doc))\n\t\t\t\t\trandom_elem = null;\n\t\t\t} else {\n\t\t\t\trandom_elem = elem(doc, Math.floor(Math.random() * doc.length));\n\t\t\t}\n\n\t\t\t// Construct a random operation.\n\t\t\tvar op = values.createRandomOp(random_elem, context+\"-elem\");\n\n\t\t\t// Test that it is valid on all elements of doc.\n\t\t\ttry {\n\t\t\t\tif (typeof doc === \"string\") doc = doc.split(''); // convert to array\n\t\t\t\tdoc.forEach(function(item) {\n\t\t\t\t\top.apply(item);\n\t\t\t\t});\n\t\t\t\treturn new exports.MAP(op);\n\t\t\t} catch (e) {\n\t\t\t\t// It's invalid. Try again to find a valid operation\n\t\t\t\t// that can apply to all elements, looping indefinitely\n\t\t\t\t// until one can be found. SET is always valid and is\n\t\t\t\t// highly probable to be selected so this shouldn't\n\t\t\t\t// take long.\n\t\t\t}\n\t\t}\n\t});\n\n\t// Select randomly.\n\treturn ops[Math.floor(Math.random() * ops.length)]();\n}\n"
  },
  {
    "path": "jot/values.js",
"content": "/*  An operational transformation library for atomic values.\n\n\tThis library provides three operations: NO_OP (an operation\n\tthat leaves the value unchanged), SET (replaces the value\n\twith a new value), and MATH (applies one of several mathematical\n\tfunctions to the value). These functions are generic over\n\tvarious sorts of atomic data types that they may apply to.\n\n\n\tnew values.NO_OP()\n\n\tThis operation does nothing. It is the return value of various\n\tfunctions throughout the library, e.g. when operations cancel\n\tout. NO_OP is conflictless: It never creates a conflict when\n\trebased against other operations or when other operations are\n\trebased against it.\n\t\n\n\tnew values.SET(value)\n\t\n\tThe atomic replacement of one value with another. Works for\n\tany data type. The SET operation supports a conflictless\n\trebase with all other operations.\n\t\n\n\tnew values.MATH(operator, operand)\n\t\n\tApplies a commutative arithmetic function to a number or boolean.\n\t\n\t\"add\": addition (use a negative number to decrement) (over numbers only)\n\t\n\t\"mult\": multiplication (use the reciprocal to divide) (over numbers only)\n\t\n\t\"rot\": addition followed by modulus (the operand is given\n\t       as a tuple of the increment and the modulus). The document\n\t       object must be a non-negative integer and less than the modulus.\n\n\t\"and\": bitwise and (over integers and booleans only)\n\n\t\"or\": bitwise or (over integers and booleans only)\n\t\n\t\"xor\": bitwise exclusive-or (over integers and booleans\n\t       only)\n\n\t\"not\": bitwise not (over integers and booleans only; the operand\n\t       is ignored)\n\t\n\tNote that by commutative we mean that the operation is commutative\n\tunder composition, i.e. add(1)+add(2) == add(2)+add(1).\n\n\tThe operators are also guaranteed to not change the data type of the\n\tdocument. 
Numbers remain numbers and booleans remain booleans.\n\n\tMATH supports a conflictless rebase with all other operations if\n\tprior document state is provided in the conflictless argument object.\n\t\n\t*/\n\t\nvar util = require('util');\nvar deepEqual = require(\"deep-equal\");\nvar jot = require(\"./index.js\");\nvar MISSING = require(\"./objects.js\").MISSING;\n\n//////////////////////////////////////////////////////////////////////////////\n\nexports.module_name = 'values'; // for serialization/deserialization\n\nexports.NO_OP = function() {\n\t/* An operation that makes no change to the document. */\n\tObject.freeze(this);\n}\nexports.NO_OP.prototype = Object.create(jot.Operation.prototype); // inherit\njot.add_op(exports.NO_OP, exports, 'NO_OP');\n\nexports.SET = function(value) {\n\t/* An operation that replaces the document with a new (atomic) value. */\n\tthis.value = value;\n\tObject.freeze(this);\n}\nexports.SET.prototype = Object.create(jot.Operation.prototype); // inherit\njot.add_op(exports.SET, exports, 'SET');\n\nexports.MATH = function(operator, operand) {\n\t/* An operation that applies addition, multiplication, or rotation (modulus addition)\n\t   to a numeric document, or a bitwise operation to an integer or\n\t   boolean document. 
*/\n\tthis.operator = operator;\n\tthis.operand = operand;\n\n\tif (this.operator == \"add\" || this.operator == \"mult\") {\n\t\tif (typeof this.operand != \"number\")\n\t\t\tthrow new Error(\"MATH[add] and MATH[mult]'s operand must be a number.\")\n\t}\n\n\tif (this.operator == \"and\" || this.operator == \"or\" || this.operator == \"xor\") {\n\t\tif (!Number.isInteger(this.operand) && typeof this.operand != \"boolean\")\n\t\t\tthrow new Error(\"MATH[and] and MATH[or] and MATH[xor]'s operand must be a boolean or integer.\")\n\t}\n\n\tif (this.operator == \"not\") {\n\t\tif (this.operand !== null)\n\t\t\tthrow new Error(\"MATH[not]'s operand must be null --- it is not used.\")\n\t}\n\n\tif (this.operator == \"rot\") {\n\t\tif (   !Array.isArray(this.operand)\n\t\t\t|| this.operand.length != 2\n\t\t\t|| !Number.isInteger(this.operand[0])\n\t\t\t|| !Number.isInteger(this.operand[1]))\n\t\t\tthrow new Error(\"MATH[rot] operand must be an array with two integer elements.\")\n\t\tif (this.operand[1] <= 1)\n\t\t\tthrow new Error(\"MATH[rot]'s second operand, the modulus, must be greater than one.\")\n\t\tif (this.operand[0] >= Math.abs(this.operand[1]))\n\t\t\tthrow new Error(\"MATH[rot]'s first operand, the increment, must be less than its second operand, the modulus.\")\n\t}\n\n\tObject.freeze(this);\n}\nexports.MATH.prototype = Object.create(jot.Operation.prototype); // inherit\njot.add_op(exports.MATH, exports, 'MATH');\n\n\n//////////////////////////////////////////////////////////////////////////////\n\nexports.NO_OP.prototype.inspect = function(depth) {\n\treturn \"<NO_OP>\"\n}\n\nexports.NO_OP.prototype.internalToJSON = function(json, protocol_version) {\n\t// Nothing to set.\n}\n\nexports.NO_OP.internalFromJSON = function(json, protocol_version, op_map) {\n\treturn new exports.NO_OP();\n}\n\nexports.NO_OP.prototype.apply = function (document) {\n\t/* Applies the operation to a document. Returns the document\n\t   unchanged. 
*/\n\treturn document;\n}\n\nexports.NO_OP.prototype.simplify = function () {\n\t/* Returns a new atomic operation that is a simpler version\n\t   of this operation.*/\n\treturn this;\n}\n\nexports.NO_OP.prototype.drilldown = function(index_or_key) {\n\treturn new exports.NO_OP();\n};\n\nexports.NO_OP.prototype.inverse = function (document) {\n\t/* Returns a new atomic operation that is the inverse of this operation,\n\tgiven the state of the document before the operation applies. */\n\treturn this;\n}\n\nexports.NO_OP.prototype.atomic_compose = function (other) {\n\t/* Creates a new atomic operation that has the same result as this\n\t   and other applied in sequence (this first, other after). Returns\n\t   null if no atomic operation is possible. */\n\treturn other;\n}\n\nexports.NO_OP.prototype.rebase_functions = [\n\t[jot.Operation, function(other, conflictless) {\n\t\t// NO_OP operations do not affect any other operation.\n\t\treturn [this, other];\n\t}]\n];\n\nexports.NO_OP.prototype.get_length_change = function (old_length) {\n\t// Support routine for sequences.PATCH that returns the change in\n\t// length to a sequence if this operation is applied to it.\n\treturn 0;\n}\n\nexports.NO_OP.prototype.decompose = function (in_out, at_index) {\n\t// Support routine for when this operation is used as a hunk's\n\t// op in sequences.PATCH (i.e. its document is a string or array\n\t// sub-sequence) that returns a decomposition of the operation\n\t// into two operations, one that applies on the left of the\n\t// sequence and one on the right of the sequence, such that\n\t// the length of the input (if !in_out) or output (if in_out)\n\t// of the left operation is at_index, i.e. 
the split point\n\t// at_index is relative to the document either before (if\n\t// !in_out) or after (if in_out) this operation applies.\n\t//\n\t// Since NO_OP has no effect, its decomposition is trivial.\n\treturn [this, this];\n}\n\n//////////////////////////////////////////////////////////////////////////////\n\nexports.SET.prototype.inspect = function(depth) {\n\tfunction str(v) {\n\t\t// Render the special MISSING value from objects.js\n\t\t// not as a JSON object.\n\t\tif (v === MISSING)\n\t\t\treturn \"~\";\n\n\t\t// Render any other value as a JSON string.\n\t\treturn util.format(\"%j\", v);\n\t}\n\treturn util.format(\"<SET %s>\", str(this.value));\n}\n\nexports.SET.prototype.internalToJSON = function(json, protocol_version) {\n\tif (this.value === MISSING)\n\t\tjson.value_missing = true;\n\telse\n\t\tjson.value = this.value;\n}\n\nexports.SET.internalFromJSON = function(json, protocol_version, op_map) {\n\tif (json.value_missing)\n\t\treturn new exports.SET(MISSING);\n\telse\n\t\treturn new exports.SET(json.value);\n}\n\nexports.SET.prototype.apply = function (document) {\n\t/* Applies the operation to a document. Returns the new\n\t   value, regardless of the document. */\n\treturn this.value;\n}\n\nexports.SET.prototype.simplify = function () {\n\t/* Returns a new atomic operation that is a simpler version\n\t   of another operation. There is nothing to simplify for\n\t   a SET. 
*/\n\treturn this;\n}\n\nexports.SET.prototype.drilldown = function(index_or_key) {\n\t// If the SET sets an array or object value, then drilling down\n\t// sets the inner value to the element or property value.\n\tif (typeof this.value == \"object\" && Array.isArray(this.value))\n\t\tif (Number.isInteger(index_or_key) && index_or_key < this.value.length)\n\t\t\treturn new exports.SET(this.value[index_or_key]);\n\tif (typeof this.value == \"object\" && !Array.isArray(this.value) && this.value !== null)\n\t\tif (typeof index_or_key == \"string\" && index_or_key in this.value)\n\t\t\treturn new exports.SET(this.value[index_or_key]);\n\n\t// Signal that anything that used to be an array element or\n\t// object property is now nonexistent.\n\treturn new exports.SET(MISSING);\n};\n\nexports.SET.prototype.inverse = function (document) {\n\t/* Returns a new atomic operation that is the inverse of this operation,\n\t   given the state of the document before this operation applies. */\n\treturn new exports.SET(document);\n}\n\nexports.SET.prototype.atomic_compose = function (other) {\n\t/* Creates a new atomic operation that has the same result as this\n\t   and other applied in sequence (this first, other after). Returns\n\t   null if no atomic operation is possible.\n\t   Returns a new SET operation that simply sets the value to what\n\t   the value would be when the two operations are composed. */\n\treturn new exports.SET(other.apply(this.value)).simplify();\n}\n\nexports.SET.prototype.rebase_functions = [\n\t// Rebase this against other and other against this.\n\n\t[exports.SET, function(other, conflictless) {\n\t\t// SET and SET.\n\n\t\t// If they both set the document to the same value, then the one\n\t\t// applied second (the one being rebased) becomes a no-op. 
Since the\n\t\t// two parts of the return value are each operation rebased against the\n\t\t// other, both are returned as no-ops.\n\t\tif (deepEqual(this.value, other.value, { strict: true }))\n\t\t\treturn [new exports.NO_OP(), new exports.NO_OP()];\n\t\t\n\t\t// If they set the document to different values and conflictless is\n\t\t// true, then we clobber the one whose value has a lower sort order.\n\t\tif (conflictless && jot.cmp(this.value, other.value) < 0)\n\t\t\treturn [new exports.NO_OP(), new exports.SET(other.value)];\n\n\t\t// cmp > 0 is handled by a call to this function with the arguments\n\t\t// reversed, so we don't need to explicitly code that logic.\n\n\t\t// If conflictless is false, then we can't rebase the operations\n\t\t// because we can't preserve the meaning of both. Return null to\n\t\t// signal conflict.\n\t\treturn null;\n\t}],\n\n\t[exports.MATH, function(other, conflictless) {\n\t\t// SET (this) and MATH (other). To get a consistent effect no matter\n\t\t// which order the operations are applied in, we say the SET comes\n\t\t// second: if the SET is already applied, the MATH becomes a\n\t\t// no-op. If the MATH is already applied, the SET is applied unchanged.\n\t\treturn [\n\t\t\tthis,\n\t\t\tnew exports.NO_OP()\n\t\t\t];\n\t}]\n];\n\nexports.SET.prototype.get_length_change = function (old_length) {\n\t// Support routine for sequences.PATCH that returns the change in\n\t// length to a sequence if this operation is applied to it.\n\tif (typeof this.value == \"string\" || Array.isArray(this.value))\n\t\treturn this.value.length - old_length;\n\tthrow new Error(\"not applicable: new value is of type \" + typeof this.value);\n}\n\nexports.SET.prototype.decompose = function (in_out, at_index) {\n\t// Support routine for when this operation is used as a hunk's\n\t// op in sequences.PATCH (i.e. 
its document is a string or array\n\t// sub-sequence) that returns a decomposition of the operation\n\t// into two operations, one that applies on the left of the\n\t// sequence and one on the right of the sequence, such that\n\t// the length of the input (if !in_out) or output (if in_out)\n\t// of the left operation is at_index, i.e. the split point\n\t// at_index is relative to the document either before (if\n\t// !in_out) or after (if in_out) this operation applies.\n\tif (typeof this.value != \"string\" && !Array.isArray(this.value))\n\t\tthrow new Error(\"invalid value type for call\");\n\tif (!in_out) {\n\t\t// Decompose into a delete and a replace with the value\n\t\t// lumped on the right.\n\t\treturn [\n\t\t\tnew exports.SET(this.value.slice(0,0)), // create empty string or array\n\t\t\tthis\n\t\t];\n\t} else {\n\t\t// Split the new value at the given index.\n\t\treturn [\n\t\t\tnew exports.SET(this.value.slice(0, at_index)),\n\t\t\tnew exports.SET(this.value.slice(at_index))\n\t\t];\n\t}\n}\n\n//////////////////////////////////////////////////////////////////////////////\n\nexports.MATH.prototype.inspect = function(depth) {\n\treturn util.format(\"<MATH %s:%s>\",\n\t\tthis.operator,\n\t\t\t(typeof this.operand == \"number\" && (this.operator == \"and\" || this.operator == \"or\" || this.operator == \"xor\"))\n\t\t\t?\n\t\t\t\t(\"0x\" + this.operand.toString(16))\n\t\t\t:\n\t\t\t\tutil.format(\"%j\", this.operand)\n\t\t);\n}\n\nexports.MATH.prototype.internalToJSON = function(json, protocol_version) {\n\tjson.operator = this.operator;\n\tjson.operand = this.operand;\n}\n\nexports.MATH.internalFromJSON = function(json, protocol_version, op_map) {\n\treturn new exports.MATH(json.operator, json.operand);\n}\n\nexports.MATH.prototype.apply = function (document) {\n\t/* Applies the operation to a document. Applies the operator and\n\t   operand as a function to the document and returns the result. 
*/\n\tif (typeof document == \"number\") {\n\t\tif (this.operator == \"add\")\n\t\t\treturn document + this.operand;\n\t\tif (this.operator == \"mult\")\n\t\t\treturn document * this.operand;\n\t\tif (Number.isInteger(document)) {\n\t\t\tif (this.operator == \"rot\")\n\t\t\t\treturn (document + this.operand[0]) % this.operand[1];\n\t\t\tif (this.operator == \"and\")\n\t\t\t\treturn document & this.operand;\n\t\t\tif (this.operator == \"or\")\n\t\t\t\treturn document | this.operand;\n\t\t\tif (this.operator == \"xor\")\n\t\t\t\treturn document ^ this.operand;\n\t\t\tif (this.operator == \"not\")\n\t\t\t\treturn ~document;\n\t\t}\n\t\tthrow new Error(\"MATH operator \" + this.operator + \" cannot apply to \" + document + \".\");\n\t\n\t} else if (typeof document == \"boolean\") {\n\t\tif (this.operator == \"and\")\n\t\t\treturn document && this.operand;\n\t\tif (this.operator == \"or\")\n\t\t\treturn document || this.operand;\n\t\tif (this.operator == \"xor\")\n\t\t\treturn !!(document ^ this.operand); // convert arithmetic result to boolean\n\t\tif (this.operator == \"not\")\n\t\t\treturn !document;\n\t\tthrow new Error(\"MATH operator \" + this.operator + \" does not apply to boolean values.\")\n\t\n\t} else {\n\t\tthrow new Error(\"MATH operations only apply to number and boolean values, not \" + jot.type_name(document) + \".\")\n\t}\n}\n\nexports.MATH.prototype.simplify = function () {\n\t/* Returns a new atomic operation that is a simpler version\n\t   of another operation. If the operation is a degenerate case,\n\t   return NO_OP. 
*/\n\tif (this.operator == \"add\" && this.operand == 0)\n\t\treturn new exports.NO_OP();\n\tif (this.operator == \"rot\" && this.operand[0] == 0)\n\t\treturn new exports.NO_OP();\n\tif (this.operator == \"mult\" && this.operand == 1)\n\t\treturn new exports.NO_OP();\n\tif (this.operator == \"and\" && this.operand === 0)\n\t\treturn new exports.SET(0);\n\tif (this.operator == \"and\" && this.operand === false)\n\t\treturn new exports.SET(false);\n\tif (this.operator == \"or\" && this.operand === 0)\n\t\treturn new exports.NO_OP();\n\tif (this.operator == \"or\" && this.operand === false)\n\t\treturn new exports.NO_OP();\n\tif (this.operator == \"xor\" && this.operand == 0)\n\t\treturn new exports.NO_OP();\n\treturn this;\n}\n\nexports.MATH.prototype.drilldown = function(index_or_key) {\n\t// MATH operations only apply to scalars, so drilling down\n\t// doesn't make any sense. But we can say a MATH operation\n\t// doesn't affect any sub-components of the value.\n\treturn new exports.NO_OP();\n};\n\nexports.MATH.prototype.inverse = function (document) {\n\t/* Returns a new atomic operation that is the inverse of this operation,\n\tgiven the state of the document before the operation applies.\n\tFor most of these operations the value of document doesn't\n\tmatter. 
*/\n\tif (this.operator == \"add\")\n\t\treturn new exports.MATH(\"add\", -this.operand);\n\tif (this.operator == \"rot\")\n\t\treturn new exports.MATH(\"rot\", [-this.operand[0], this.operand[1]]);\n\tif (this.operator == \"mult\")\n\t\treturn new exports.MATH(\"mult\", 1.0/this.operand);\n\tif (this.operator == \"and\")\n\t\treturn new exports.MATH(\"or\", document & (~this.operand));\n\tif (this.operator == \"or\")\n\t\treturn new exports.MATH(\"xor\", ~document & this.operand);\n\tif (this.operator == \"xor\")\n\t\treturn this; // is its own inverse\n\tif (this.operator == \"not\")\n\t\treturn this; // is its own inverse\n}\n\nexports.MATH.prototype.atomic_compose = function (other) {\n\t/* Creates a new atomic operation that has the same result as this\n\t   and other applied in sequence (this first, other after). Returns\n\t   null if no atomic operation is possible. */\n\n\tif (other instanceof exports.MATH) {\n\t\t// two adds just add the operands\n\t\tif (this.operator == other.operator && this.operator == \"add\")\n\t\t\treturn new exports.MATH(\"add\", this.operand + other.operand).simplify();\n\n\t\t// two rots with the same modulus add the operands\n\t\tif (this.operator == other.operator && this.operator == \"rot\" && this.operand[1] == other.operand[1])\n\t\t\treturn new exports.MATH(\"rot\", [this.operand[0] + other.operand[0], this.operand[1]]).simplify();\n\n\t\t// two multiplications multiply the operands\n\t\tif (this.operator == other.operator && this.operator == \"mult\")\n\t\t\treturn new exports.MATH(\"mult\", this.operand * other.operand).simplify();\n\n\t\t// two and's and the operands\n\t\tif (this.operator == other.operator && this.operator == \"and\" && typeof this.operand == typeof other.operand && typeof this.operand == \"number\")\n\t\t\treturn new exports.MATH(\"and\", this.operand & other.operand).simplify();\n\t\tif (this.operator == other.operator && this.operator == \"and\" && typeof this.operand == typeof other.operand && 
typeof this.operand == \"boolean\")\n\t\t\treturn new exports.MATH(\"and\", this.operand && other.operand).simplify();\n\n\t\t// two or's or the operands\n\t\tif (this.operator == other.operator && this.operator == \"or\" && typeof this.operand == typeof other.operand && typeof this.operand == \"number\")\n\t\t\treturn new exports.MATH(\"or\", this.operand | other.operand).simplify();\n\t\tif (this.operator == other.operator && this.operator == \"or\" && typeof this.operand == typeof other.operand && typeof this.operand == \"boolean\")\n\t\t\treturn new exports.MATH(\"or\", this.operand || other.operand).simplify();\n\n\t\t// two xor's xor the operands\n\t\tif (this.operator == other.operator && this.operator == \"xor\" && typeof this.operand == typeof other.operand && typeof this.operand == \"number\")\n\t\t\treturn new exports.MATH(\"xor\", this.operand ^ other.operand).simplify();\n\t\tif (this.operator == other.operator && this.operator == \"xor\" && typeof this.operand == typeof other.operand && typeof this.operand == \"boolean\")\n\t\t\treturn new exports.MATH(\"xor\", !!(this.operand ^ other.operand)).simplify();\n\n\t\t// two not's cancel each other out\n\t\tif (this.operator == other.operator && this.operator == \"not\")\n\t\t\treturn new exports.NO_OP();\n\n\t\t// and+or with the same operand is SET(operand)\n\t\tif (this.operator == \"and\" && other.operator == \"or\" && this.operand === other.operand)\n\t\t\treturn new exports.SET(this.operand);\n\n\t\t// or+xor with the same operand is AND(~operand)\n\t\tif (this.operator == \"or\" && other.operator == \"xor\" && this.operand === other.operand && typeof this.operand == \"number\")\n\t\t\treturn new exports.MATH(\"and\", ~this.operand);\n\t\tif (this.operator == \"or\" && other.operator == \"xor\" && this.operand === other.operand && typeof this.operand == \"boolean\")\n\t\t\treturn new exports.MATH(\"and\", !this.operand);\n\n\t}\n\t\n\treturn null; // no composition is 
possible\n}\n\nexports.MATH.prototype.rebase_functions = [\n\t// Rebase this against other and other against this.\n\n\t[exports.MATH, function(other, conflictless) {\n\t\t// If this and other are MATH operations with the same operator (i.e. two\n\t\t// add's; two rot's with the same modulus), then since they are commutative\n\t\t// their order does not matter and the rebase returns each operation\n\t\t// unchanged.\n\t\tif (this.operator == other.operator\n\t\t\t&& (this.operator != \"rot\" || this.operand[1] == other.operand[1]))\n\t\t\t\treturn [this, other];\n\n\t\t// When two different operators occur simultaneously, the order matters.\n\t\t// Since operators preserve the data type of the document, we know that both\n\t\t// orders are valid. Choose an order based on the operations: We'll put this\n\t\t// first and other second.\n\t\tif (conflictless && \"document\" in conflictless) {\n\t\t\tif (jot.cmp([this.operator, this.operand], [other.operator, other.operand]) < 0) {\n\t\t\t\treturn [\n\t\t\t\t\t// this came second, so replace it with an operation that\n\t\t\t\t\t// inverts the existing other operation, then applies this,\n\t\t\t\t\t// then re-applies other. 
Although a composition of operations\n\t\t\t\t\t// is logically sensible, returning a LIST will cause LIST.rebase\n\t\t\t\t\t// to go into an infinite regress in some cases.\n\t\t\t\t\tnew exports.SET(this.compose(other).apply(conflictless.document)),\n\t\t\t\t\t//other.inverse(conflictless.document).compose(this).compose(other),\n\n\t\t\t\t\t// no need to rewrite other because it's supposed to come second\n\t\t\t\t\tother\n\t\t\t\t]\n\t\t\t}\n\t\t}\n\n\t\t// The other order is handled by the converse call made by jot.rebase.\n\t\treturn null;\n\t}]\n];\n\nexports.createRandomOp = function(doc, context) {\n\t// Create a random operation that could apply to doc.\n\t// Choose uniformly across various options depending on\n\t// the data type of doc.\n\tvar ops = [];\n\n\t// NO_OP is always a possibility.\n\tops.push(function() { return new exports.NO_OP() });\n\n\t// An identity SET is always a possibility.\n\tops.push(function() { return new exports.SET(doc) });\n\n\t// Set to another random value of a different type.\n\t// Can't do this in a context where changing the type is not valid,\n\t// i.e. 
when in a PATCH or MAP operation on a string.\n\tif (context != \"string-elem\" && context != \"string\")\n\t\tops.push(function() { return new exports.SET(jot.createRandomValue()) });\n\n\t// Clear the key, if we're in an object.\n\tif (context == \"object\")\n\t\tops.push(function() { return new exports.SET(MISSING) });\n\n\t// Set to another value of the same type.\n\tif (typeof doc === \"boolean\")\n\t\tops.push(function() { return new exports.SET(!doc) });\n\tif (typeof doc === \"number\") {\n\t\tif (Number.isInteger(doc)) {\n\t\t\tops.push(function() { return new exports.SET(doc + Math.floor((Math.random()+.5) * 100)) });\n\t\t} else {\n\t\t\tops.push(function() { return new exports.SET(doc * (Math.random()+.5)) });\n\t\t}\n\t}\n\n\tif ((typeof doc === \"string\" || Array.isArray(doc)) && context != \"string-elem\") {\n\t\t// Delete (if not already empty).\n\t\tif (doc.length > 0)\n\t\t\tops.push(function() { return new exports.SET(doc.slice(0, 0)) });\n\n\t\tif (doc.length >= 1) {\n\t\t\t// shorten at start\n\t\t\tops.push(function() { return new exports.SET(doc.slice(Math.floor(Math.random()*(doc.length-1)), doc.length)) });\n\n\t\t\t// shorten at end\n\t\t\tops.push(function() { return new exports.SET(doc.slice(0, Math.floor(Math.random()*(doc.length-1)))) });\n\t\t}\n\n\t\tif (doc.length >= 2) {\n\t\t\t// shorten on both sides\n\t\t\tvar a = Math.floor(Math.random()*(doc.length-1));\n\t\t\tvar b = Math.floor(Math.random()*(doc.length-a));\n\t\t\tops.push(function() { return new exports.SET(doc.slice(a, a+b)) });\n\t\t}\n\n\t\tif (doc.length > 0) {\n\t\t\t// expand by copying existing elements from document\n\n\t\t\tfunction concat2(item1, item2) {\n\t\t\t\tif (item1 instanceof String)\n\t\t\t\t\treturn item1 + item2;\n\t\t\t\treturn item1.concat(item2);\n\t\t\t}\n\t\t\tfunction concat3(item1, item2, item3) {\n\t\t\t\tif (item1 instanceof String)\n\t\t\t\t\treturn item1 + item2 + item3;\n\t\t\t\treturn 
item1.concat(item2).concat(item3);\n\t\t\t}\n\t\t\n\t\t\t// expand by elements at start\n\t\t\tops.push(function() { return new exports.SET(concat2(doc.slice(0, 1+Math.floor(Math.random()*(doc.length-1))), doc)) });\n\t\t\t// expand by elements at end\n\t\t\tops.push(function() { return new exports.SET(concat2(doc, doc.slice(0, 1+Math.floor(Math.random()*(doc.length-1))))); });\n\t\t\t// expand by elements on both sides\n\t\t\tops.push(function() { return new exports.SET(concat3(doc.slice(0, 1+Math.floor(Math.random()*(doc.length-1))), doc, doc.slice(0, 1+Math.floor(Math.random()*(doc.length-1))))); });\n\t\t} else {\n\t\t\t// expand by generating new elements\n\t\t\tif (typeof doc === \"string\")\n\t\t\t\tops.push(function() { return new exports.SET((Math.random()+\"\").slice(2)); });\n\t\t\telse if (Array.isArray(doc))\n\t\t\t\tops.push(function() { return new exports.SET([null,null,null].map(function() { return Math.random() })); });\n\t\t}\n\t}\n\n\tif (typeof doc === \"string\") {\n\t\t// reverse\n\t\tif (doc != doc.split(\"\").reverse().join(\"\"))\n\t\t\tops.push(function() { return new exports.SET(doc.split(\"\").reverse().join(\"\")); });\n\n\t\t// replace with new elements of the same length\n\t\tif (doc.length > 0) {\n\t\t\tvar newvalue = \"\";\n\t\t\tfor (var i = 0; i < doc.length; i++)\n\t\t\t\tnewvalue += (Math.random()+\"\").slice(2, 3);\n\t\t\tops.push(function() { return new exports.SET(newvalue); });\n\t\t}\n\t}\n\n\t// Math\n\tif (typeof doc === \"number\") {\n\t\tif (Number.isInteger(doc)) {\n\t\t\tops.push(function() { return new exports.MATH(\"add\", Math.floor(100 * (Math.random() - .25))); })\n\t\t\tops.push(function() { return new exports.MATH(\"mult\", Math.floor(Math.exp(Math.random()+.5))); })\n\t\t\tif (doc > 1)\n\t\t\t\tops.push(function() { return new exports.MATH(\"rot\", [1, Math.min(13, doc)]); })\n\t\t\tops.push(function() { return new exports.MATH(\"and\", 0xF1); })\n\t\t\tops.push(function() { return new exports.MATH(\"or\", 
0xF1); })\n\t\t\tops.push(function() { return new exports.MATH(\"xor\", 0xF1); })\n\t\t\tops.push(function() { return new exports.MATH(\"not\", null); })\n\t\t} else {\n\t\t\t// floating point math yields inexact/inconsistent results if operation\n\t\t\t// order changes, so you may want to disable these in testing\n\t\t\tops.push(function() { return new exports.MATH(\"add\", 100 * (Math.random() - .25)); })\n\t\t\tops.push(function() { return new exports.MATH(\"mult\", Math.exp(Math.random()+.5)); })\n\t\t}\n\t}\n\tif (typeof doc === \"boolean\") {\n\t\tops.push(function() { return new exports.MATH(\"and\", true); })\n\t\tops.push(function() { return new exports.MATH(\"and\", false); })\n\t\tops.push(function() { return new exports.MATH(\"or\", true); })\n\t\tops.push(function() { return new exports.MATH(\"or\", false); })\n\t\tops.push(function() { return new exports.MATH(\"xor\", true); })\n\t\tops.push(function() { return new exports.MATH(\"xor\", false); })\n\t\tops.push(function() { return new exports.MATH(\"not\", null); })\n\t}\n\n\t// Select randomly.\n\treturn ops[Math.floor(Math.random() * ops.length)]();\n}\n"
  },
  {
    "path": "package.json",
    "content": "{\n  \"name\": \"jot\",\n  \"version\": \"0.0.1\",\n  \"description\": \"JSON Operational Transformation\",\n  \"dependencies\": {\n    \"deep-equal\": \"^2.0.3\",\n    \"diff\": \"^4.0.2\",\n    \"generic-diff\": \"*\",\n    \"jsonpatch\": \"*\",\n    \"shallow-clone\": \"^3.0.1\"\n  },\n  \"repository\": {\n    \"type\": \"git\",\n    \"url\": \"git://github.com/joshdata/jot.git\"\n  },\n  \"author\": \"Joshua Tauberer\",\n  \"scripts\": {\n    \"test\": \"tap --cov tests/*.js\"\n  },\n  \"devDependencies\": {\n    \"connect\": \"*\",\n    \"tap\": \"^14.10.7\"\n  }\n}\n"
  },
  {
    "path": "test.js",
    "content": "var jot = require(\"./jot\");\nvar deepEqual = require(\"deep-equal\");\nvar createRandomOp = jot.createRandomOp; // require('./jot/values.js').createRandomOp;\n\n// Create two complex simultaneous edits.\nvar opsize = 1;\n\nwhile (true) {\n\tvar initial_value = jot.createRandomValue();\n\tvar op1 = jot.createRandomOpSequence(initial_value, opsize);\n\tvar op2 = jot.createRandomOpSequence(initial_value, opsize);\n\n\t/*\n\tvar initial_value = false;\n\tvar op1 = jot.opFromJSON({\"_type\":\"meta.LIST\",\"ops\":[{\"_type\":\"values.SET\",\"new_value\":-226.62491332471424},{\"_type\":\"values.NO_OP\"}]});\n\tvar op2 = jot.opFromJSON({\"_type\":\"meta.LIST\",\"ops\":[{\"_type\":\"values.MATH\",\"operator\":\"and\",\"operand\":true},{\"_type\":\"values.MATH\",\"operator\":\"and\",\"operand\":false}]});\n\t*/\n\t//console.log(initial_value)\n\t//console.log(op1)\n\t//console.log(op2)\n\n\ttry {\n\n\t\t// Compute the end results.\n\t\tvar val1 = op1.apply(initial_value);\n\t\tvar val2 = op2.apply(initial_value);\n\n\t\t// Check that the parallel rebases match.\n\t\tvar op2r = op2.rebase(op1, { document: initial_value });\n\t\tvar val1b = op2r ? op2r.apply(val1) : null;\n\n\t\tvar op1r = op1.rebase(op2, { document: initial_value });\n\t\tvar val2b = op1r ? op1r.apply(val2) : null;\n\n\t\t// Check that they also match using composition.\n\t\tvar val1c = op2r ? op1.compose(op2r).apply(initial_value) : null;\n\t\tvar val2c = op1r ? 
op2.compose(op1r).apply(initial_value) : null;\n\n\t\t// Check that we can compute a diff.\n\t\tvar d = jot.diff(initial_value, val1b);\n\n\t\tif (op2r === null || op1r === null\n\t\t || !deepEqual(val1b, val2b, { strict: true })\n\t\t ) {\n\t\t  console.log(\"rebase failed or did not have same result\");\n\t\t  console.log(\"init\", initial_value)\n\t\t  console.log();\n\t\t  console.log(\"op1\", JSON.stringify(op1.toJSON()));\n\t\t  console.log(\"val\", val1);\n\t\t  console.log(\"op2r\", op2r);\n\t\t  console.log(\"val\", val1b);\n\t\t  console.log();\n\t\t  console.log(\"op2\", JSON.stringify(op2.toJSON()));\n\t\t  console.log(\"val\", val2);\n\t\t  console.log(\"op1r\", op1r);\n\t\t  console.log(\"val\", val2b);\n\t\t  break;\n\t\t} else if (!deepEqual(val1b, val1c, { strict: true }) || !deepEqual(val1c, val2c, { strict: true })) {\n\t\t  console.log(\"composition did not have same result\");\n\t\t  console.log(\"init\", initial_value)\n\t\t  console.log();\n\t\t  console.log(\"op1\", JSON.stringify(op1.toJSON()));\n\t\t  console.log(\"val\", val1);\n\t\t  console.log(\"op2r\", op2r);\n\t\t  console.log(\"val\", val1c);\n\t\t  console.log();\n\t\t  console.log(\"op2\", JSON.stringify(op2.toJSON()));\n\t\t  console.log(\"val\", val2);\n\t\t  console.log(\"op1r\", op1r);\n\t\t  console.log(\"val\", val2c);\n\t\t  break;\n\t\t}\n\t} catch (e) {\n\t\tconsole.error(e);\n\t\tconsole.log(\"init\", initial_value)\n\t\tconsole.log(\"op1\", JSON.stringify(op1.toJSON()));\n\t\tconsole.log(\"op2\", JSON.stringify(op2.toJSON()));\n\t\tbreak;\n\t}\n}"
  },
  {
    "path": "tests/diff.js",
    "content": "var test = require('tap').test;\nvar jot = require('../jot')\nvar diff = require(\"../jot/diff.js\");\n\ntest('diff', function(t) {\n\n\tfunction test(a, b, options) {\n\t\tvar op = diff.diff(a, b, options);\n\t\tt.deepEqual(op.apply(a), b);\n\t\tt.deepEqual(op.inverse(a).apply(b), a);\n\t}\n\n\t// values (these just turn into SET operations)\n\n\ttest(5, { });\n\n\t// strings\n\n\ttest(\"This is a test.\", \"That is not a test of string comparison.\");\n\ttest(\"This is a test.\", \"I know. This is a test.\");\n\ttest(\"This is a test.\", \"This is a test. Yes, I know.\");\n\n\t// arrays\n\n\ttest([1, 2, 3], [1, 2, 3])\n\ttest([1, 2, 3], [0.5, 1, 1.5, 2, 2.5, 3, 3.5])\n\ttest([0.5, 1, 1.5, 1.75, 2, 2.5, 3, 3.5], [1, 2, 3])\n\n\t// objects\n\n\ttest({ \"a\": \"Hello!\" }, { \"a\": \"Goodbye!\" });\n\ttest({ \"a\": \"Hello!\" }, { \"b\": \"Hello!\" });\n\n\t// recursive\n\n\ttest({ \"a\": [\"Hello!\", [\"Goodbye!\"]] }, { \"b\": [\"Hola!\", [\"Adios!\"]] }, { words: true });\n\n    t.end();\n});\n"
  },
  {
    "path": "tests/merge.js",
    "content": "const test = require('tap').test;\nconst jot = require('../jot')\nconst merge = require(\"../jot/merge.js\");\n\ntest('merge', function(t) {\n  t.deepEqual(\n    merge.merge(\n      'source',\n      'target',\n      {\n        source: { parents: ['root'], op: [new jot.SET('Hello')] },\n        target: { parents: ['root'], op: [new jot.SET('World')] },\n        root: { document: null },\n      }\n    )[1],\n    new jot.NO_OP()\n  );\n\n  t.deepEqual(\n    merge.merge(\n      'source',\n      'target',\n      {\n        source: { parents: ['root'], op: [new jot.SET('World')] },\n        target: { parents: ['root'], op: [new jot.SET('Hello')] },\n        root: { document: null },\n      }\n    )[1],\n    new jot.SET(\"World\")\n  );\n\n  t.deepEqual(\n    merge.merge(\n      'source',\n      'target',\n      {\n        source: { parents: ['a'], op: [new jot.MATH('add', 1)] },\n        target: { parents: ['a'], op: [new jot.MATH('add', 2)] },\n        a: { parents: ['root'], op: [new jot.MATH('add', 3)] },\n        root: { document: 0 },\n      }\n    )[1],\n    new jot.MATH('add', 1)\n  );\n\n  t.deepEqual(\n    merge.merge(\n      'source',\n      'target',\n      {\n        source: { parents: ['a'], op: [new jot.SPLICE(0, 5, \"Goodbye\")] },\n        target: { parents: ['a'], op: [new jot.SPLICE(12, 5, \"universe\")] },\n        a: { parents: ['root'], op: [new jot.SPLICE(6, 0, \"cruel \")] },\n        root: { document: \"Hello world.\" },\n      }\n    )[1],\n    new jot.SPLICE(0, 5, \"Goodbye\")\n  );\n\n\n  t.deepEqual(\n    merge.merge(\n      'source',\n      'target',\n      {\n        source: { parents: ['a'], op: [new jot.SPLICE(12, 5, \"universe\")] },\n        target: { parents: ['a'], op: [new jot.SPLICE(0, 5, \"Goodbye\")] },\n        a: { parents: ['root'], op: [new jot.SPLICE(6, 0, \"cruel \")] },\n        root: { document: \"Hello world.\" },\n      }\n    )[1],\n    new jot.SPLICE(14, 5, \"universe\")\n  );\n\n  t.deepEqual(\n    
merge.merge(\n      'source',\n      'target',\n      {\n        target: { parents: ['a', 'source'], op: [new jot.SPLICE(8, 5, \"universe\"), new jot.SPLICE(0, 5, \"Goodbye\")] },\n        source: { parents: ['root'], op: [new jot.SPLICE(6, 5, \"universe\")] },\n        a: { parents: ['root'], op: [new jot.SPLICE(0, 5, \"Goodbye\")] },\n        root: { document: \"Hello world.\" },\n      }\n    )[1],\n    new jot.NO_OP()\n  );\n\n  t.deepEqual(\n    merge.merge(\n      'source',\n      'target',\n      {\n        target: { parents: ['a', 'b'], op: [new jot.SPLICE(8, 5, \"universe\"), new jot.SPLICE(0, 5, \"Goodbye\")] },\n        source: { parents: ['b'], op: [new jot.SPLICE(14, 1, \"!\")] },\n        b: { parents: ['root'], op: [new jot.SPLICE(6, 5, \"universe\")] },\n        a: { parents: ['root'], op: [new jot.SPLICE(0, 5, \"Goodbye\")] },\n        root: { document: \"Hello world.\" },\n      }\n    )[1],\n    new jot.SPLICE(16, 1, \"!\")\n  );\n\n\n  function test_merge(branch_a, branch_b, expected_content) {\n    // Merge branch_b into branch_a. 
Merges aren't necessarily symmetric\n    // in complex multi-common-ancestor scenarios, so we don't check\n    // that merging branch_a into branch_b yields the same result.\n    branch_a.merge(branch_b);\n    t.deepEqual(branch_a.content(), expected_content);\n  }\n\n  {\n    let root = new merge.Document();\n\n    let a = root.branch();\n    a.commit(\"Hello world.\");\n\n    let b = root.branch();\n    b.commit(\"Goodbye world.\");\n    \n    test_merge(a, b, \"Hello world.\")\n  }\n\n  {\n    let root = new merge.Document();\n    root.commit(\"Hello world.\");\n\n    let a = root.branch();\n    a.commit(\"Hello cruel world.\");\n\n    let b = root.branch();\n    b.commit(\"Goodbye world.\");\n    b.commit(\"Goodbye world of mine.\");\n    \n    test_merge(a, b, \"Goodbye cruel world of mine.\")\n    test_merge(a, b, \"Goodbye cruel world of mine.\") // testing that a repeated merge does nothing\n\n    b.commit(\"Goodbye love of yesterday.\");\n    test_merge(a, b, \"Goodbye cruel love of yesterday.\")\n    test_merge(b, a, \"Goodbye cruel love of yesterday.\")\n    test_merge(a, b, \"Goodbye cruel love of yesterday.\")\n\n    a.commit(\"Farewell, my love of chocolate.\");\n    b.commit(\"He said, 'Goodbye cruel love of yesterday,' before leaving.\");\n    test_merge(a, b, \"He said, 'Farewell, my love of chocolate,' before leaving.\")\n  }\n\n\n  {\n    // This is the \"When is merge recursive needed?\" example\n    // in http://blog.plasticscm.com/2011/09/merge-recursive-strategy.html.\n\n    let a = new merge.Document();\n    \n    a.commit(\"10\");\n\n    let b = a.branch();\n\n    a.commit(\"10 11\")\n    let a11 = a.branch();\n    a.commit(\"10 11 13\")\n    b.commit(\"10 12\")\n    let b12 = b.branch();\n    b.commit(\"10 12 14\")\n\n    test_merge(a, b12, \"10 11 13 12\");\n    test_merge(b, a11, \"10 11 12 14\");\n\n    //test_merge(a, b, \"10 11 12 14 13\");\n  }\n\n  {\n    // This is the \"Why merge recursive is better: a step by step example\"\n    // in 
http://blog.plasticscm.com/2011/09/merge-recursive-strategy.html.\n\n    let a = new merge.Document();\n    \n    a.commit(\"bcd\");\n\n    let b = a.branch();\n    b.commit(\"bcde\");\n    b.commit(\"bcdE\");\n\n    a.commit(\"bCd\");\n\n    let c = a.branch();\n    c.commit(\"abCd\");\n\n    a.commit(\"bcd\");\n\n    test_merge(c, b, \"abCdE\");\n    test_merge(a, b, \"bcdE\");\n    test_merge(c, a, \"abcdE\");\n  }\n\n  t.end();\n});\n"
  },
  {
    "path": "tests/meta.js",
    "content": "var test = require('tap').test;\nvar jot = require(\"../jot\");\nvar lists = require(\"../jot/lists.js\");\n\ntest('lists', function(t) {\n    // inspect\n\n    t.deepEqual(\n        new lists.LIST([]).inspect(),\n        \"<LIST []>\");\n\n    t.deepEqual(\n        new lists.LIST([new jot.SET(\"X\")]).inspect(),\n        \"<LIST [<SET \\\"X\\\">]>\");\n\n    t.deepEqual(\n        new lists.LIST([new jot.SET(\"X\"), new jot.SET(\"Y\")]).inspect(),\n        \"<LIST [<SET \\\"X\\\">, <SET \\\"Y\\\">]>\");\n\n\t// simplify\n\n\tt.deepEqual(\n\t\tnew lists.LIST([]).simplify(),\n\t\tnew jot.NO_OP());\n\n\tt.deepEqual(\n\t\tnew lists.LIST([ new jot.SET(\"X\"), new jot.SET(\"Y\") ]).simplify(),\n\t\tnew jot.SET(\"Y\") );\n\n\tt.deepEqual(\n\t\tnew lists.LIST([ new jot.MATH(\"add\", 1), new jot.MATH(\"add\", 2) ]).simplify(),\n\t\tnew jot.MATH(\"add\", 3) );\n\n\tt.deepEqual(\n\t\tnew lists.LIST([ new jot.MATH(\"add\", 1), new jot.MATH(\"mult\", 2) ]).simplify(),\n\t\tnew lists.LIST([ new jot.MATH(\"add\", 1), new jot.MATH(\"mult\", 2) ]));\n\n\tt.deepEqual(\n\t\tnew lists.LIST([ new jot.MATH(\"add\", 1), new jot.MATH(\"mult\", 2), new jot.MATH(\"xor\", 1) ]).simplify(),\n\t\tnew lists.LIST([ new jot.MATH(\"add\", 1), new jot.MATH(\"mult\", 2), new jot.MATH(\"xor\", 1) ]));\n\n    // drilldown\n\n    t.deepEqual(\n        new lists.LIST([ new jot.PUT(\"key1\", \"value1\"), new jot.PUT(\"key2\", \"value2\") ]).drilldown(\"key\"),\n        new jot.NO_OP());\n    t.deepEqual(\n        new lists.LIST([ new jot.PUT(\"key1\", \"value1\"), new jot.PUT(\"key2\", \"value2\") ]).drilldown(\"key1\"),\n        new jot.SET(\"value1\"));\n    t.deepEqual(\n        new lists.LIST([ new jot.APPLY(\"key1\", new jot.MATH(\"add\", 1)), new jot.APPLY(\"key1\", new jot.MATH(\"mult\", 2)) ]).drilldown(\"key1\"),\n        new lists.LIST([ new jot.MATH(\"add\", 1), new jot.MATH(\"mult\", 2) ]));\n\n    // compose\n\n    t.deepEqual(\n        new lists.LIST([ ])\n            
.compose(\n                new jot.PUT('x', 'y')\n            ),\n        new jot.PUT('x', 'y')\n    )\n\n    t.deepEqual(\n        new lists.LIST([ new jot.PUT('x', 'y') ])\n            .compose(\n                new lists.LIST([ ])\n            ),\n        new lists.LIST([ new jot.PUT('x', 'y') ])\n    )\n\n    t.deepEqual(\n        new lists.LIST([ new jot.PUT('x', 'y') ])\n            .compose(\n                new jot.PUT('x', 'z')\n            ),\n        new lists.LIST([ new jot.PUT('x', 'y'), new jot.PUT('x', 'z') ])\n    )\n\n    t.deepEqual(\n        new lists.LIST([ new jot.PUT('x', 'y') ])\n            .compose(\n                new lists.LIST([ new jot.PUT('x', 'z') ])\n            ),\n        new lists.LIST([ new jot.PUT('x', 'y'), new jot.PUT('x', 'z') ])\n    )\n\n    // (de)serialization\n\n    t.deepEqual(\n        new jot.deserialize(\n            new lists.LIST([\n                new jot.APPLY(\n                    'foo', new jot.PUT(\n                        'x', 'y'\n                    )\n                ),\n                new jot.APPLY(\n                    'bar', new jot.SPLICE(\n                        0, 0, [{baz: 'quux'}]\n                    )\n                )\n            ]).serialize()\n        ),\n        new lists.LIST([\n            new jot.APPLY(\n                'foo', new jot.PUT(\n                    'x', 'y'\n                )\n            ),\n            new jot.APPLY(\n                'bar', new jot.SPLICE(\n                    0, 0, [{baz: 'quux'}]\n                )\n            )\n        ])\n    );\n\n    // rebase\n    t.deepEqual( // empty list\n        new lists.LIST([ ])\n            .rebase(\n                new jot.PUT('x', 'y')\n            ),\n        new jot.NO_OP()\n    )\n\n    t.deepEqual( // unrelated changes, unwrapping of the list\n        new lists.LIST([ new jot.PUT('x', 'y') ])\n            .rebase(\n                new jot.PUT('a', 'b')\n            ),\n        new jot.PUT('x', 'y')\n    )\n\n    
t.deepEqual( // conflictless (A)\n        new lists.LIST([\n            new jot.SET('a')\n        ])\n            .rebase(\n                new lists.LIST([\n                    new jot.SET('b')\n                ]),\n                true\n            ),\n        new jot.NO_OP()\n    )\n\n    t.deepEqual( // conflictless (B)\n        new lists.LIST([\n            new jot.SET('b')\n        ])\n            .rebase(\n                new lists.LIST([\n                    new jot.SET('a')\n                ]),\n                true\n            ),\n        new jot.SET('b')\n    )\n\n    t.end();\n});\n"
  },
  {
    "path": "tests/objects.js",
    "content": "var test = require('tap').test;\nvar jot = require(\"../jot\");\nvar values = require(\"../jot/values.js\");\nvar seqs = require(\"../jot/sequences.js\");\nvar objs = require(\"../jot/objects.js\");\nvar lists = require(\"../jot/lists.js\");\n\ntest('objects', function(t) {\n\n// inspect\n\nt.equal(\n\tnew objs.PUT(\"0\", \"1\").inspect(),\n\t'<APPLY \"0\":<SET \"1\">>');\nt.equal(\n\tnew objs.REM(\"0\", \"1\").inspect(),\n\t'<APPLY \"0\":<SET ~>>');\nt.equal(\n\tnew objs.APPLY(\"0\", new values.SET(2)).inspect(),\n\t'<APPLY \"0\":<SET 2>>');\n\n// serialization\n\nt.deepEqual(\n\tjot.opFromJSON(new objs.PUT(\"0\", \"1\").toJSON()),\n\tnew objs.PUT(\"0\", \"1\"));\nt.deepEqual(\n\tjot.opFromJSON(new objs.REM(\"0\", \"1\").toJSON()),\n\tnew objs.REM(\"0\", \"1\"));\nt.deepEqual(\n\tjot.opFromJSON(new objs.APPLY(\"0\", new values.SET(2)).toJSON()),\n\tnew objs.APPLY(\"0\", new values.SET(2)));\n\n// apply\n\nt.deepEqual(\n\tnew objs.PUT(\"a\", \"b\")\n\t\t.apply({}),\n\t{ \"a\": \"b\" });\nt.deepEqual(\n\tnew objs.REM(\"a\", \"b\")\n\t\t.apply({\"a\": \"b\"}),\n\t{});\nt.deepEqual(\n\tnew objs.APPLY(\n\t\t\"a\",\n\t\tnew values.SET(\"c\"))\n\t\t.apply({\"a\": \"b\"}),\n\t{ \"a\": \"c\" });\nt.deepEqual(\n\tnew objs.APPLY(\n\t\t\"a\",\n\t\tnew seqs.SPLICE(0, 1, \"Hello\"))\n\t\t.apply({\"a\": \"b\"}),\n\t{ \"a\": \"Hello\" });\nt.deepEqual(\n\tnew objs.APPLY(\n\t\t\"a\",\n\t\tnew seqs.ATINDEX(\n\t\t\t1,\n\t\t\tnew values.MATH(\"add\", 1)\n\t\t\t))\n\t\t.apply({\"a\": [0, 0]}),\n\t{ \"a\": [0, 1] });\n\n// drilldown\n\nt.deepEqual(\n\tnew objs.PUT(\"key\", \"value\").drilldown(\"other\"),\n\tnew values.NO_OP());\nt.deepEqual(\n\tnew objs.PUT(\"key\", \"value\").drilldown(\"key\"),\n\tnew values.SET(\"value\"));\n\n// invert\n\n// ...\n\n// compose\n\nt.deepEqual(\n\tnew objs.PUT(\"key\", \"value\").compose(new values.SET(\"123\")),\n\tnew values.SET(\"123\"));\nt.deepEqual(\n\tnew objs.REM(\"key\", \"oldvalue\").compose(new values.SET(\"123\")),\n\tnew 
values.SET(\"123\"));\nt.deepEqual(\n\tnew objs.APPLY(\"key\", new values.MATH('add', 1)).compose(new values.SET(\"123\")),\n\tnew values.SET(\"123\"));\nt.deepEqual(\n\tnew objs.APPLY(\"key\", new values.MATH('add', 1)).compose(new objs.APPLY(\"key\", new values.MATH('add', 1))),\n\tnew objs.APPLY(\"key\", new values.MATH('add', 2)));\nt.deepEqual(\n\tnew objs.APPLY(\"key\", new values.MATH('add', 1)).compose(new objs.APPLY(\"key\", new values.MATH('add', -1))),\n\tnew values.NO_OP());\nt.deepEqual(\n\tnew objs.APPLY(\"key\", new values.MATH('add', 1)).compose(new objs.APPLY(\"key\", new values.MATH('mult', 1))),\n\tnew objs.APPLY(\"key\", new values.MATH('add', 1)));\nt.deepEqual(\n\tnew objs.APPLY(\"key1\", new values.MATH('add', 1)).compose(new objs.APPLY(\"key2\", new values.MATH('mult', 2))),\n\tnew objs.APPLY({ \"key1\": new values.MATH('add', 1), \"key2\": new values.MATH('mult', 2)}));\n\n\n// rebase\n\nt.deepEqual(\n\tnew objs.PUT(\"key\", \"value\").rebase(\n\t\tnew objs.PUT(\"key\", \"value\")),\n\tnew values.NO_OP()\n\t)\nt.notOk(\n\tnew objs.PUT(\"key\", \"value1\").rebase(\n\t\tnew objs.PUT(\"key\", \"value2\")))\nt.deepEqual(\n\tnew objs.PUT(\"key\", \"value1\").rebase(\n\t\tnew objs.PUT(\"key\", \"value2\"), {}),\n\tnew values.NO_OP())\nt.deepEqual(\n\tnew objs.PUT(\"key\", \"value2\").rebase(\n\t\tnew objs.PUT(\"key\", \"value1\"), {}),\n\tnew objs.APPLY(\"key\", new values.SET(\"value2\")))\n\nt.deepEqual(\n\tnew objs.REM(\"key\", \"value\").rebase(\n\t\tnew objs.REM(\"key\", \"value\")),\n\tnew values.NO_OP()\n\t)\nt.deepEqual(\n\tnew objs.REM(\"key1\", \"value1\").rebase(\n\t\tnew objs.REM(\"key2\", \"value2\")),\n\tnew objs.REM(\"key1\", \"value1\")\n\t)\n\nt.deepEqual(\n\tnew objs.REM(\"key\", \"old_value\").rebase(\n\t\tnew objs.APPLY(\"key\", new values.SET(\"new_value\")), {}),\n\tnew values.NO_OP()\n\t)\nt.deepEqual(\n\tnew objs.APPLY(\"key\", new values.SET(\"new_value\")).rebase(\n\t\tnew objs.REM(\"key\", \"old_value\"), {}),\n\tnew 
objs.PUT(\"key\", \"new_value\")\n\t)\n\nt.deepEqual(\n\tnew objs.APPLY('key', new values.MATH(\"add\", 3)).rebase(\n\t\tnew objs.APPLY('key', new values.MATH(\"add\", 1))),\n\tnew objs.APPLY('key', new values.MATH(\"add\", 3)));\nt.notOk(\n\tnew objs.APPLY('key', new values.SET(\"y\")).rebase(\n\t\tnew objs.APPLY('key', new values.SET(\"z\"))))\nt.deepEqual(\n\tnew objs.APPLY('key', new values.SET(\"y\")).rebase(\n\t\tnew objs.APPLY('key', new values.SET(\"z\")), {}),\n\tnew values.NO_OP()\n\t)\nt.deepEqual(\n\tnew objs.APPLY('key', new values.SET(\"z\")).rebase(\n\t\tnew objs.APPLY('key', new values.SET(\"y\")), {}),\n\tnew objs.APPLY('key', new values.SET(\"z\"))\n\t)\n\n// lists\n\nt.deepEqual(\n\tnew objs.APPLY(\n\t\t\"a\",\n\t\tnew lists.LIST([\n\t\t\tnew seqs.ATINDEX(\n\t\t\t\t1,\n\t\t\t\tnew values.MATH(\"add\", 1)\n\t\t\t\t),\n\t\t\tnew seqs.ATINDEX(\n\t\t\t\t2,\n\t\t\t\tnew values.MATH(\"add\", -1)\n\t\t\t\t)\n\t\t])\n\t).apply({\"a\": [0, 0, 0]}),\n\t{ \"a\": [0, 1, -1] });\n\n// serialization\n\nfunction test_serialization(op) {\n\tt.deepEqual(op.toJSON(), jot.opFromJSON(op.toJSON()).toJSON());\n}\n\ntest_serialization(new objs.PUT(\"key\", \"value\"))\ntest_serialization(new objs.REM(\"key\", \"old_value\"))\n\n//\n\n    t.end();\n});\n\n"
  },
  {
    "path": "tests/random.js",
    "content": "var test = require('tap').test;\nvar jot = require('../jot')\n\nfunction assertEqual(t, v1, v2) {\n\t// After applying math operators, floating point\n\t// results can come out a little different. Round\n\t// numbers to about as much precision as floating\n\t// point numbers have anyway.\n\tif (typeof v1 == \"number\" && typeof v2 == \"number\") {\n\t\tv1 = v1.toPrecision(8);\n\t\tv2 = v2.toPrecision(8);\n\t}\n\tt.deepEqual(v1, v2);\n}\n\ntest('random', function(tt) {\nfor (var i = 0; i < 1000; i++) {\n\t\t// Start with a random initial document.\n\t\tvar initial_value = jot.createRandomValue();\n\t\t\n\t\t// Create two operations on the initial document.\n\t\tvar op1 = jot.createRandomOpSequence(initial_value, Math.floor(10*Math.random())+1)\n\t\tvar op2 = jot.createRandomOpSequence(initial_value, Math.floor(10*Math.random())+1)\n\n\t\ttt.test(\n\t\t\tJSON.stringify({\n\t\t\t\t\"value\": initial_value,\n\t\t\t\t\"op1\": op1.toJSON(),\n\t\t\t\t\"op2\": op2.toJSON(),\n\t\t\t}),\n\t\t\tfunction(t) {\n\n\t\t// Apply each to get the intermediate values.\n\t\tvar val1 = op1.apply(initial_value);\n\t\tvar val2 = op2.apply(initial_value);\n\n\t\t// Check that the parallel rebases match.\n\t\tvar val1b = op2.rebase(op1, { document: initial_value }).apply(val1);\n\t\tvar val2b = op1.rebase(op2, { document: initial_value }).apply(val2);\n\t\tassertEqual(t, val1b, val2b);\n\n\t\t// Check that they also match using composition.\n\t\tvar val1c = op1.compose(op2.rebase(op1, { document: initial_value })).apply(initial_value);\n\t\tvar val2c = op2.compose(op1.rebase(op2, { document: initial_value })).apply(initial_value);\n\t\tassertEqual(t, val1c, val2c);\n\n\t\t// Check that we can compute a diff.\n\t\tvar d = jot.diff(initial_value, val1b);\n\n\t    t.end();\n\n\t\t});\n}\ntt.end();\n});\n"
  },
  {
    "path": "tests/sequences.js",
    "content": "var test = require('tap').test;\nvar values = require(\"../jot/values.js\");\nvar seqs = require(\"../jot/sequences.js\");\nvar jot = require(\"../jot\");\n\ntest('sequences', function(t) {\n\n// inspect\n\nt.equal(\n\tnew seqs.SPLICE(0, 1, \"4\").inspect(),\n\t'<PATCH +0x1 \"4\">');\nt.equal(\n\tnew seqs.ATINDEX(0, new values.SET(2)).inspect(),\n\t'<PATCH +0 <SET 2>>');\nt.equal(\n\tnew seqs.ATINDEX({ 0: new values.SET(2), 4: new values.SET(10) }).inspect(),\n\t'<PATCH +0 <SET 2>, +3 <SET 10>>');\nt.equal(\n\tnew seqs.MAP(new values.MATH('add', 1)).inspect(),\n\t'<MAP <MATH add:1>>');\n\n// serialization\n\nt.deepEqual(\n\tjot.opFromJSON(new seqs.SPLICE(0, 1, \"4\").toJSON()),\n\tnew seqs.SPLICE(0, 1, \"4\"));\nt.deepEqual(\n\tjot.opFromJSON(new seqs.ATINDEX(0, new values.SET(2)).toJSON()),\n\tnew seqs.ATINDEX(0, new values.SET(2)));\nt.deepEqual(\n\tjot.opFromJSON(new seqs.ATINDEX({ 0: new values.SET(2), 4: new values.SET(10) }).toJSON()),\n\tnew seqs.ATINDEX({ 0: new values.SET(2), 4: new values.SET(10) }));\nt.deepEqual(\n\tjot.opFromJSON(new seqs.MAP(new values.MATH('add', 1)).toJSON()),\n\tnew seqs.MAP(new values.MATH('add', 1)));\n\n// apply\n\nt.equal(\n\tnew seqs.SPLICE(0, 1, \"4\").apply(\"123\"),\n\t\"423\");\nt.equal(\n\tnew seqs.SPLICE(0, 1, \"\").apply(\"123\"),\n\t\"23\");\nt.equal(\n\tnew seqs.SPLICE(0, 1, \"44\").apply(\"123\"),\n\t\"4423\");\nt.equal(\n\tnew seqs.SPLICE(3, 0, \"44\").apply(\"123\"),\n\t\"12344\");\n\nt.deepEqual(\n\tnew seqs.ATINDEX(0, new values.SET(4)).apply([1, 2, 3]),\n\t[4, 2, 3]);\nt.deepEqual(\n\tnew seqs.ATINDEX(1, new values.SET(4)).apply([1, 2, 3]),\n\t[1, 4, 3]);\nt.deepEqual(\n\tnew seqs.ATINDEX(2, new values.SET(4)).apply([1, 2, 3]),\n\t[1, 2, 4]);\nt.deepEqual(\n\tnew seqs.ATINDEX({ 0: new values.SET(4), 1: new values.SET(5) })\n\t.apply([1, 2, 3]),\n\t[4, 5, 3]);\n\nt.deepEqual(\n\tnew seqs.ATINDEX(0, new values.SET(\"d\")).apply(\"abc\"),\n\t\"dbc\");\nt.deepEqual(\n\tnew seqs.ATINDEX(1, new 
values.SET(\"d\")).apply(\"abc\"),\n\t\"adc\");\nt.deepEqual(\n\tnew seqs.ATINDEX(2, new values.SET(\"d\")).apply(\"abc\"),\n\t\"abd\");\nt.deepEqual(\n\tnew seqs.ATINDEX({ 0: new values.SET(\"d\"), 1: new values.SET(\"e\") })\n\t.apply(\"abc\"),\n\t\"dec\");\n\n// simplify\n\nt.deepEqual(\n\tnew seqs.SPLICE(3, 0, \"\").simplify(),\n\tnew values.NO_OP());\nt.deepEqual(\n\tnew seqs.SPLICE(3, 3, \"456\").simplify(),\n\tnew seqs.SPLICE(3, 3, \"456\"));\nt.deepEqual(\n\tnew seqs.ATINDEX(0, new values.SET(2)).simplify(),\n\tnew seqs.ATINDEX(0, new values.SET(2)));\nt.deepEqual(\n\tnew seqs.ATINDEX({\n\t\t0: new values.SET(1),\n\t\t1: new jot.LIST([]) }).simplify(),\n\tnew seqs.ATINDEX(0, new values.SET(1)));\n\n// drilldown\n\nt.deepEqual(\n\tnew seqs.ATINDEX(1, new values.SET(-1)).drilldown(0),\n\tnew values.NO_OP());\nt.deepEqual(\n\tnew seqs.ATINDEX(1, new values.SET(-1)).drilldown(1),\n\tnew values.SET(-1));\nt.deepEqual(\n\tnew seqs.MAP(new values.SET(-1)).drilldown(1),\n\tnew values.SET(-1));\n\n// invert\n\nt.deepEqual(\n\tnew seqs.SPLICE(3, 3, \"456\").inverse(\"xxx123\"),\n\tnew seqs.SPLICE(3, 3, \"123\"));\nt.deepEqual(\n\tnew seqs.ATINDEX(0, new values.SET(2)).inverse([1]),\n\tnew seqs.ATINDEX(0, new values.SET(1)));\nt.deepEqual(\n\tnew seqs.ATINDEX({ 0: new values.SET(\"d\"), 1: new values.SET(\"e\") }).inverse(['a', 'b']),\n\tnew seqs.ATINDEX({ 0: new values.SET(\"a\"), 1: new values.SET(\"b\") }));\n\n\n// atomic_compose\n\nt.deepEqual(\n\tnew seqs.SPLICE(0, 4, \"1234\").atomic_compose(new seqs.SPLICE(5, 4, \"FGHI\")),\n\tnew seqs.PATCH([{offset: 0, length: 4, op: new values.SET(\"1234\")}, {offset: 1, length: 4, op: new values.SET(\"FGHI\")}]));\nt.deepEqual(\n\tnew seqs.SPLICE(0, 4, \"1234\").atomic_compose(new seqs.SPLICE(4, 4, \"EFGH\")),\n\tnew seqs.SPLICE(0, 8, \"1234EFGH\"));\nt.deepEqual(\n\tnew seqs.SPLICE(0, 4, \"1234\").atomic_compose(new seqs.SPLICE(2, 4, \"CDEF\")),\n\tnew seqs.SPLICE(0, 6, \"12CDEF\"));  // This isn't 
good.\nt.deepEqual(\n\tnew seqs.SPLICE(0, 4, \"1234\").atomic_compose(new seqs.SPLICE(2, 2, \"CD\")),\n\tnew seqs.SPLICE(0, 4, \"12CD\"));\nt.deepEqual(\n\tnew seqs.SPLICE(0, 4, \"1234\").atomic_compose(new seqs.SPLICE(0, 4, \"ABCD\")),\n\tnew seqs.SPLICE(0, 4, \"ABCD\"));\nt.deepEqual(\n\tnew seqs.SPLICE(0, 4, \"1234\").atomic_compose(new seqs.SPLICE(1, 2, \"BC\")),\n\tnew seqs.SPLICE(0, 4, \"1BC4\"));\nt.deepEqual(\n\tnew seqs.SPLICE(0, 4, \"1234\").atomic_compose(new seqs.SPLICE(0, 2, \"AB\")),\n\tnew seqs.SPLICE(0, 4, \"AB34\"));\nt.deepEqual(\n\tnew seqs.SPLICE(0, 4, \"1234\").atomic_compose(new seqs.SPLICE(0, 6, \"ABCDEF\")),\n\tnew seqs.SPLICE(0, 6, \"ABCDEF\"));\nt.deepEqual(\n\tnew seqs.SPLICE(2, 4, \"1234\").atomic_compose(new seqs.SPLICE(0, 8, \"YZABCDEF\")),\n\tnew seqs.SPLICE(0, 8, \"YZABCDEF\"));\nt.deepEqual(\n\tnew seqs.SPLICE(2, 2, \"1234\").atomic_compose(new seqs.SPLICE(0, 8, \"YZABCDEF\")),\n\tnew seqs.SPLICE(0, 6, \"YZABCDEF\"));\nt.deepEqual(\n\tnew seqs.SPLICE(2, 4, \"1234\").atomic_compose(new seqs.SPLICE(0, 6, \"YZABCD\")),\n\tnew seqs.SPLICE(0, 6, \"YZABCD\"));\nt.deepEqual(\n\tnew seqs.SPLICE(2, 4, \"1234\").atomic_compose(new seqs.SPLICE(0, 4, \"YZAB\")),\n\tnew seqs.SPLICE(0, 6, \"YZAB34\"));\nt.deepEqual(\n\tnew seqs.SPLICE(2, 4, \"1234\").atomic_compose(new seqs.SPLICE(0, 2, \"YZ\")),\n\tnew seqs.SPLICE(0, 6, \"YZ1234\"));\nt.deepEqual(\n\tnew seqs.SPLICE(2, 4, \"1234\").atomic_compose(new seqs.SPLICE(0, 1, \"Y\")),\n\tnew seqs.PATCH([{offset: 0, length: 1, op: new values.SET(\"Y\")}, {offset: 1, length: 4, op: new values.SET(\"1234\")}]));\nt.deepEqual(\n\tnew seqs.SPLICE(0, 4, \"1234\").atomic_compose(new seqs.PATCH([{offset: 0, length: 1, op: new values.SET(\"A\")}, {offset: 2, length: 1, op: new values.SET(\"D\")}])),\n\tnew seqs.SPLICE(0, 4, \"A23D\"));\n\nt.deepEqual(\n\tnew seqs.PATCH([{offset: 0, length: 4, op: new values.SET(\"ab\")}, {offset: 1, length: 4, op: new values.SET(\"defg\")}])\n\t\t.atomic_compose(new 
seqs.ATINDEX(4, new values.SET(\"E\"))),\n\tnew seqs.PATCH([{offset: 0, length: 4, op: new values.SET(\"ab\")}, {offset: 1, length: 4, op: new values.SET(\"dEfg\")}]));\nt.deepEqual(\n\tnew seqs.SPLICE(0, 4, \"5678\").atomic_compose(new seqs.ATINDEX(1, new values.SET(\"0\"))),\n\tnew seqs.SPLICE(0, 4, \"5078\"));\nt.deepEqual(\n\tnew seqs.SPLICE(0, 4, \"5678\").atomic_compose(new seqs.ATINDEX(4, new values.SET(\"0\"))),\n\tnew seqs.SPLICE(0, 5, \"56780\"));\n\nt.deepEqual(\n\tnew seqs.ATINDEX(0, new values.SET(\"0\")).atomic_compose(new seqs.SPLICE(0, 4, \"5678\")),\n\tnew seqs.SPLICE(0, 4, \"5678\"));\nt.deepEqual(\n\tnew seqs.ATINDEX(555, new values.SET(\"B\"))\n\t\t.atomic_compose(new seqs.ATINDEX(555, new values.SET(\"C\"))),\n\tnew seqs.ATINDEX(555, new values.SET(\"C\")));\nt.deepEqual(\n\tnew seqs.ATINDEX(555, new values.MATH(\"add\", 1))\n\t\t.atomic_compose(new seqs.ATINDEX(555, new values.MATH(\"mult\", 2))),\n\tnull);\nt.deepEqual(\n\tnew seqs.ATINDEX(555, new values.MATH(\"add\", 1))\n\t\t.atomic_compose(new seqs.ATINDEX(555, new values.SET(2))),\n\tnew seqs.ATINDEX(555, new values.SET(2)));\nt.deepEqual(\n\tnew seqs.ATINDEX(0, new values.SET(\"d\")).atomic_compose(new seqs.ATINDEX(1, new values.SET(\"e\"))),\n\tnew seqs.ATINDEX({ 0: new values.SET(\"d\"), 1: new values.SET(\"e\") }));\nt.deepEqual(\n\tnew seqs.ATINDEX({ 0: new values.SET(\"d\"), 1: new values.SET(\"e\") }).atomic_compose(new seqs.ATINDEX(0, new values.SET(\"f\"))),\n\tnew seqs.ATINDEX({ 0: new values.SET(\"f\"), 1: new values.SET(\"e\") }));\n\n// rebase\n\nt.deepEqual(\n\tnew seqs.SPLICE(0, 3, \"456\").rebase(\n\t\tnew seqs.SPLICE(0, 3, \"456\")),\n\tnew values.NO_OP());\nt.notOk(\n\tnew seqs.SPLICE(0, 0, \"123\").rebase(\n\t\tnew seqs.SPLICE(0, 0, \"456\")));\nt.deepEqual(\n\tnew seqs.SPLICE(0, 0, \"123\").rebase(\n\t\tnew seqs.SPLICE(0, 0, \"456\"), { }),\n\tnew seqs.SPLICE(0, 0, \"123\"));\nt.deepEqual(\n\tnew seqs.SPLICE(0, 0, \"456\").rebase(\n\t\tnew seqs.SPLICE(0, 0, \"123\"), 
{ }),\n\tnew seqs.SPLICE(3, 0, \"456\"));\nt.notOk(\n\tnew seqs.SPLICE(0, 3, \"456\").rebase(\n\t\tnew seqs.SPLICE(0, 3, \"789\")));\nt.deepEqual(\n\tnew seqs.SPLICE(0, 3, \"456\").rebase(\n\t\tnew seqs.SPLICE(0, 3, \"789\"), { }),\n\tnew values.NO_OP());\nt.deepEqual(\n\tnew seqs.SPLICE(0, 3, \"789\").rebase(\n\t\tnew seqs.SPLICE(0, 3, \"456\"), { }),\n\tnew seqs.SPLICE(0, 3, \"789\"));\nt.deepEqual(\n\tnew seqs.SPLICE(0, 3, \"456\").rebase(\n\t\tnew seqs.SPLICE(3, 3, \"\")),\n\tnew seqs.SPLICE(0, 3, \"456\"));\nt.deepEqual(\n\tnew seqs.SPLICE(3, 3, \"456\").rebase(\n\t\tnew seqs.SPLICE(0, 3, \"AC\")),\n\tnew seqs.SPLICE(2, 3, \"456\"));\n\n// one encompasses the other\nt.deepEqual(\n\tnew seqs.SPLICE(3, 3, \"456\").rebase(\n\t\tnew seqs.SPLICE(3, 1, \"ABC\"), { }),\n\tnew seqs.SPLICE(6, 2, \"456\"));\nt.deepEqual(\n\tnew seqs.SPLICE(3, 3, \"456\").rebase(\n\t\tnew seqs.SPLICE(4, 1, \"ABC\"), { }),\n\tnew seqs.SPLICE(3, 1, \"\").compose(new seqs.SPLICE(6, 1, \"456\")));\nt.deepEqual(\n\tnew seqs.SPLICE(3, 3, \"456\").rebase(\n\t\tnew seqs.SPLICE(5, 1, \"ABC\"), { }),\n\tnew seqs.SPLICE(3, 2, \"\"));\nt.deepEqual(\n\tnew seqs.SPLICE(3, 1, \"ABC\").rebase(\n\t\tnew seqs.SPLICE(3, 3, \"456\"), { }),\n\tnew seqs.SPLICE(3, 0, \"ABC\"));\nt.deepEqual(\n\tnew seqs.SPLICE(4, 1, \"ABC\").rebase(\n\t\tnew seqs.SPLICE(3, 3, \"456\"), { }),\n\tnew seqs.SPLICE(3, 0, \"ABC\"));\nt.deepEqual(\n\tnew seqs.SPLICE(5, 1, \"ABC\").rebase(\n\t\tnew seqs.SPLICE(3, 3, \"456\"), { }),\n\tnew seqs.SPLICE(3, 3, \"ABC\"));\nt.deepEqual(\n\tnew seqs.SPLICE(3, 3, \"456\").rebase(\n\t\tnew seqs.SPLICE(2, 4, \"ABC\"), { }),\n\tnew values.NO_OP());\nt.deepEqual(\n\tnew seqs.SPLICE(3, 3, \"456\").rebase(\n\t\tnew seqs.SPLICE(2, 5, \"ABC\"), { }),\n\tnew seqs.SPLICE(2, 0, \"456\"));\n\n// partial overlap\nt.deepEqual(\n\tnew seqs.SPLICE(3, 3, \"456\").rebase(\n\t\tnew seqs.SPLICE(2, 2, \"ABC\"), { }),\n\tnew seqs.SPLICE(5, 2, \"456\"));\nt.deepEqual(\n\tnew seqs.SPLICE(3, 3, 
\"456\").rebase(\n\t\tnew seqs.SPLICE(5, 2, \"ABC\"), { }),\n\tnew seqs.SPLICE(3, 2, \"456\"));\nt.deepEqual(\n\tnew seqs.SPLICE(3, 3, \"456\").rebase(\n\t\tnew seqs.SPLICE(4, 3, \"AB\"), { }),\n\tnew seqs.SPLICE(3, 1, \"456\"));\nt.deepEqual(\n\tnew seqs.SPLICE(2, 2, \"ABC\").rebase(\n\t\tnew seqs.SPLICE(3, 3, \"46\"), { }),\n\tnew seqs.SPLICE(2, 1, \"ABC\"));\nt.deepEqual(\n\tnew seqs.SPLICE(5, 2, \"ABC\").rebase(\n\t\tnew seqs.SPLICE(3, 3, \"46\"), { }),\n\tnew seqs.SPLICE(5, 1, \"ABC\"));\nt.deepEqual(\n\tnew seqs.SPLICE(4, 3, \"ABC\").rebase(\n\t\tnew seqs.SPLICE(3, 3, \"46\"), { }),\n\tnew seqs.SPLICE(5, 1, \"ABC\"));\n\n// splice vs apply\n\nt.deepEqual(\n\tnew seqs.SPLICE(0, 3, [4,5,6]).rebase(\n\t\tnew seqs.ATINDEX(0, new values.MATH(\"add\", 1)), { }),\n\tnew seqs.SPLICE(0, 3, [4,5,6]));\n\n// splice vs map\n\nt.notOk(\n\tnew seqs.SPLICE(1, 3, [4,5]).rebase(\n\t\tnew seqs.MAP(new values.MATH(\"add\", 1))));\nt.notOk(\n\tnew seqs.MAP(new values.MATH(\"add\", 1)).rebase(\n\t\tnew seqs.SPLICE(1, 3, [4,5])));\n\n// apply vs splice\n\nt.deepEqual(\n\tnew seqs.ATINDEX(555, new values.MATH(\"add\", 3)).rebase(\n\t\tnew seqs.SPLICE(555, 0, [5])),\n\tnew seqs.ATINDEX(556, new values.MATH(\"add\", 3)));\n\n// apply vs apply\n\nt.deepEqual(\n\tnew seqs.ATINDEX(555, new values.MATH(\"add\", 3)).rebase(\n\t\tnew seqs.ATINDEX(555, new values.MATH(\"add\", 1))),\n\tnew seqs.ATINDEX(555, new values.MATH(\"add\", 3)));\nt.notOk(\n\tnew seqs.ATINDEX(555, new values.SET(\"y\")).rebase(\n\t\tnew seqs.ATINDEX(555, new values.SET(\"z\"))))\nt.deepEqual(\n\tnew seqs.ATINDEX(555, new values.SET(\"y\")).rebase(\n\t\tnew seqs.ATINDEX(555, new values.SET(\"z\")), { }),\n\tnew values.NO_OP()\n\t)\nt.deepEqual(\n\tnew seqs.ATINDEX(555, new values.SET(\"z\")).rebase(\n\t\tnew seqs.ATINDEX(555, new values.SET(\"y\")), { }),\n\tnew seqs.ATINDEX(555, new values.SET(\"z\"))\n\t)\nt.deepEqual(\n\tnew seqs.ATINDEX({0: new values.SET(\"z\"), 1: new values.SET(\"b\"), 2: new 
values.SET(\"N\")}).rebase(\n\t\tnew seqs.ATINDEX({0: new values.SET(\"y\"), 1: new values.SET(\" \")}), { }),\n\tnew seqs.ATINDEX({0: new values.SET(\"z\"), 1: new values.SET(\"b\"), 2: new values.SET(\"N\")})\n\t)\n\n// apply vs map\n\nt.deepEqual(\n\tnew seqs.ATINDEX(555, new values.MATH(\"add\", 3)).rebase(\n\t\tnew seqs.MAP(new values.MATH(\"add\", 1))),\n\tnew seqs.ATINDEX(555, new values.MATH(\"add\", 3)));\n\n// map vs apply\n\nt.deepEqual(\n\tnew seqs.MAP(new values.MATH(\"add\", 1)).rebase(\n\t\tnew seqs.ATINDEX(555, new values.MATH(\"add\", 3))),\n\tnew seqs.MAP(new values.MATH(\"add\", 1)));\nt.notOk(\n\tnew seqs.MAP(new values.MATH(\"add\", 1)).rebase(\n\t\tnew seqs.ATINDEX(555, new values.MATH(\"mult\", 2))));\n\n// map vs map\n\nt.deepEqual(\n\tnew seqs.MAP(new values.MATH(\"add\", 1)).rebase(\n\t\tnew seqs.MAP(new values.MATH(\"add\", 3))),\n\tnew seqs.MAP(new values.MATH(\"add\", 1)));\nt.notOk(\n\tnew seqs.MAP(new values.MATH(\"add\", 1)).rebase(\n\t\tnew seqs.MAP(new values.MATH(\"mult\", 3))));\n\nt.end();\n\n});\n"
  },
  {
    "path": "tests/values.js",
    "content": "var test = require('tap').test;\nvar jot = require('../jot')\nvar values = require(\"../jot/values.js\");\nvar MISSING = require(\"../jot/objects.js\").MISSING;\nvar LIST = require(\"../jot/lists.js\").LIST;\n\ntest('values', function(t) {\n\n// inspect\n\nt.equal(\n\tnew values.NO_OP().inspect(),\n\t\"<NO_OP>\");\nt.equal(\n\tnew values.SET(4).inspect(),\n\t'<SET 4>');\nt.equal(\n\tnew values.MATH('add', 4).inspect(),\n\t'<MATH add:4>');\n\n// serialization\n\nt.deepEqual(\n\tjot.deserialize(new values.NO_OP().serialize()),\n\tnew values.NO_OP());\nt.deepEqual(\n\tjot.opFromJSON(new values.NO_OP().toJSON()),\n\tnew values.NO_OP());\nt.deepEqual(\n\tjot.opFromJSON(new values.SET(4).toJSON()),\n\tnew values.SET(4));\nt.deepEqual(\n\tjot.opFromJSON(new values.MATH('add', 4).toJSON()),\n\tnew values.MATH('add', 4));\n\n// apply\n\nt.equal(\n\tnew values.NO_OP().apply(\"1\"),\n\t\"1\");\n\nt.equal(\n\tnew values.SET(\"2\").apply(\"1\"),\n\t\"2\");\n\nt.equal(\n\tnew values.MATH(\"add\", 5).apply(1),\n\t6);\nt.equal(\n\tnew values.MATH(\"rot\", [2, 3]).apply(1),\n\t0);\nt.equal(\n\tnew values.MATH(\"mult\", 5).apply(2),\n\t10);\nt.equal(\n\tnew values.MATH(\"and\", 0xF0).apply(0xF1),\n\t0xF0);\nt.equal(\n\tnew values.MATH(\"or\", 0xF0).apply(0x1),\n\t0xF1);\nt.equal(\n\tnew values.MATH(\"xor\", 12).apply(25),\n\t21);\nt.equal(\n\tnew values.MATH(\"xor\", 1).apply(true),\n\tfalse);\nt.equal(\n\tnew values.MATH(\"xor\", 1).apply(false),\n\ttrue);\nt.equal(\n\tnew values.MATH(\"not\", null).apply(0xF1),\n\t~0xF1);\n\n// simplify\n\nt.deepEqual(\n\tnew values.NO_OP().simplify(),\n\tnew values.NO_OP());\n\nt.deepEqual(\n\tnew values.SET(1).simplify(),\n\tnew values.SET(1));\n\nt.deepEqual(\n\tnew values.MATH(\"add\", 5).simplify(),\n\tnew values.MATH(\"add\", 5));\nt.deepEqual(\n\tnew values.MATH(\"add\", 0).simplify(),\n\tnew values.NO_OP());\nt.deepEqual(\n\tnew values.MATH(\"rot\", [0, 999]).simplify(),\n\tnew values.NO_OP());\nt.deepEqual(\n\tnew 
values.MATH(\"mult\", 0).simplify(),\n\tnew values.MATH(\"mult\", 0));\nt.deepEqual(\n\tnew values.MATH(\"mult\", 1).simplify(),\n\tnew values.NO_OP());\nt.deepEqual(\n\tnew values.MATH(\"and\", 0).simplify(),\n\tnew values.SET(0));\nt.deepEqual(\n\tnew values.MATH(\"or\", 0).simplify(),\n\tnew values.NO_OP());\nt.deepEqual(\n\tnew values.MATH(\"xor\", 0).simplify(),\n\tnew values.NO_OP());\n\n// invert\n\nt.deepEqual(\n\tnew values.NO_OP().inverse('anything here'),\n\tnew values.NO_OP());\n\nt.deepEqual(\n\tnew values.SET(1).inverse(0),\n\tnew values.SET(0));\n\nt.deepEqual(\n\tnew values.MATH(\"add\", 5).inverse(0),\n\tnew values.MATH(\"add\", -5));\nt.deepEqual(\n\tnew values.MATH(\"rot\", [5, 20]).inverse(0),\n\tnew values.MATH(\"rot\", [-5, 20]));\nt.deepEqual(\n\tnew values.MATH(\"mult\", 5).inverse(0),\n\tnew values.MATH(\"mult\", 1/5));\nt.deepEqual(\n\tnew values.MATH(\"and\", 0xF0).inverse(0xF),\n\tnew values.MATH(\"or\", 0xF));\nt.deepEqual(\n\tnew values.MATH(\"or\", 0xF0).inverse(0xF),\n\tnew values.MATH(\"xor\", 0xF0));\nt.deepEqual(\n\tnew values.MATH(\"xor\", 5).inverse(0),\n\tnew values.MATH(\"xor\", 5));\n\n// drilldown\n\nt.deepEqual(\n\tnew values.SET({ 5: \"A\" }).drilldown('4'),\n\tnew values.SET(MISSING));\nt.deepEqual(\n\tnew values.SET({ 5: \"A\" }).drilldown('5'),\n\tnew values.SET(\"A\"));\nt.deepEqual(\n\tnew values.SET([0, -1, -2]).drilldown(3),\n\tnew values.SET(MISSING));\nt.deepEqual(\n\tnew values.SET([0, -1, -2]).drilldown(2),\n\tnew values.SET(-2));\n\n// compose\n\nt.deepEqual(\n\tnew values.NO_OP().compose(\n\t\tnew values.SET(2) ),\n\tnew values.SET(2));\nt.deepEqual(\n\tnew values.SET(2).compose(\n\t\tnew values.NO_OP() ),\n\tnew values.SET(2));\n\nt.deepEqual(\n\tnew values.SET(1).compose(\n\t\tnew values.SET(2) ),\n\tnew values.SET(2));\n\nt.deepEqual(\n\tnew values.MATH(\"add\", 1).compose(\n\t\tnew values.MATH(\"add\", 1) ),\n\tnew values.MATH(\"add\", 2));\nt.deepEqual(\n\tnew values.MATH(\"rot\", [3, 
13]).compose(\n\t\tnew values.MATH(\"rot\", [4, 13]) ),\n\tnew values.MATH(\"rot\", [7, 13]));\nt.deepEqual(\n\tnew values.MATH(\"mult\", 2).compose(\n\t\tnew values.MATH(\"mult\", 3) ),\n\tnew values.MATH(\"mult\", 6));\nt.deepEqual(\n\tnew values.MATH(\"add\", 1).atomic_compose(\n\t\tnew values.MATH(\"mult\", 2) ),\n\tnull);\nt.deepEqual(\n\tnew values.MATH(\"and\", 0x1).atomic_compose(\n\t\tnew values.MATH(\"and\", 0x2) ),\n\tnew values.SET(0));\nt.deepEqual(\n\tnew values.MATH(\"or\", 0x1).atomic_compose(\n\t\tnew values.MATH(\"or\", 0x2) ),\n\tnew values.MATH(\"or\", 0x3));\nt.deepEqual(\n\tnew values.MATH(\"xor\", 12).compose(\n\t\tnew values.MATH(\"xor\", 3) ),\n\tnew values.MATH(\"xor\", 15));\nt.deepEqual(\n\tnew values.MATH(\"not\", null).compose(\n\t\tnew values.MATH(\"not\", null) ),\n\tnew values.NO_OP());\n\nt.deepEqual(\n\tnew values.MATH(\"add\", 1).compose(\n\t\tnew values.SET(3) ),\n\tnew values.SET(3));\n\n// rebase\n\nt.deepEqual(\n\tnew values.NO_OP().rebase(new values.NO_OP() ),\n\tnew values.NO_OP());\nt.deepEqual(\n\tnew values.NO_OP().rebase(new values.MATH(\"add\", 1) ),\n\tnew values.NO_OP());\n\nt.deepEqual(\n\tnew values.SET(1).rebase(new values.SET(1) ),\n\tnew values.NO_OP());\nt.deepEqual(\n\tnew values.SET(2).rebase(new values.SET(1) ),\n\tnull);\nt.deepEqual(\n\tnew values.SET(2).rebase(new values.SET(1), true),\n\tnew values.SET(2));\nt.deepEqual(\n\tnew values.SET(1).rebase(new values.SET(2), true),\n\tnew values.NO_OP());\n\nt.deepEqual(\n\tnew values.SET(2).rebase(new values.MATH(\"add\", 3)),\n\tnew values.SET(2));\nt.deepEqual(\n\tnew values.SET(\"2\").rebase(new values.MATH(\"add\", 3), true),\n\tnew values.SET(\"2\"));\n\nt.deepEqual(\n\tnew values.MATH(\"add\", 1).rebase(new values.NO_OP() ),\n\tnew values.MATH(\"add\", 1));\nt.deepEqual(\n\tnew values.MATH(\"add\", 2).rebase(new values.MATH(\"add\", 1) ),\n\tnew values.MATH(\"add\", 2));\nt.deepEqual(\n\tnew values.MATH(\"rot\", [1, 3]).rebase(new values.MATH(\"rot\", [2, 
3]) ),\n\tnew values.MATH(\"rot\", [1, 3]));\nt.notOk(\n\tnew values.MATH(\"mult\", 2).rebase(new values.MATH(\"add\", 1) )\n\t);\nt.deepEqual(\n\tnew values.MATH(\"xor\", 3).rebase(new values.MATH(\"xor\", 12) ),\n\tnew values.MATH(\"xor\", 3));\n\nt.deepEqual(\n\tnew values.MATH(\"add\", 3).rebase(new values.SET(2)),\n\tnew values.NO_OP());\nt.deepEqual(\n\tnew values.MATH(\"add\", 3).rebase(new values.SET(\"2\"), true),\n\tnew values.NO_OP());\n\nt.end();\n});\n"
  }
]