[
  {
    "path": ".dockerignore",
    "content": "__pycache__"
  },
  {
    "path": ".gitignore",
    "content": "keys/vk.json\n__pycache__"
  },
  {
    "path": ".gitmodules",
    "content": "[submodule \"depends/baby_jubjub_ecc\"]\n\tpath = depends/baby_jubjub_ecc\n\turl = https://github.com/barrywhitehat/baby_jubjub_ecc\n[submodule \"depends/libsnark\"]\n\tpath = depends/libsnark\n\turl = https://github.com/scipr-lab/libsnark.git\n[submodule \"src/sha256_ethereum\"]\n\tpath = src/sha256_ethereum\n\turl = https://github.com/kobigurk/sha256_ethereum\n"
  },
  {
    "path": "CMakeLists.txt",
    "content": "cmake_minimum_required(VERSION 2.8)\n\nproject(roll_up)\n\nset(\n  CURVE\n  \"ALT_BN128\"\n  CACHE\n  STRING\n  \"Default curve: one of ALT_BN128, BN128, EDWARDS, MNT4, MNT6\"\n)\n\nset(\n  DEPENDS_DIR\n  \"${CMAKE_CURRENT_SOURCE_DIR}/depends\"\n  CACHE\n  STRING\n  \"Optionally specify the dependency installation directory relative to the source directory (default: inside dependency folder)\"\n)\n\nset(\n  OPT_FLAGS\n  \"\"\n  CACHE\n  STRING\n  \"Override C++ compiler optimization flags\"\n)\n\noption(\n  MULTICORE\n  \"Enable parallelized execution, using OpenMP\"\n  ON\n)\n\noption(\n  WITH_PROCPS\n  \"Use procps for memory profiling\"\n  ON\n)\n\noption(\n  VERBOSE\n  \"Print internal messages\"\n  ON\n)\n\noption(\n  DEBUG\n  \"Enable debugging mode\"\n  OFF\n)\n\noption(\n  CPPDEBUG\n  \"Enable debugging of C++ STL (does not imply DEBUG)\"\n  OFF\n)\n\nif(CMAKE_COMPILER_IS_GNUCXX OR \"${CMAKE_CXX_COMPILER_ID}\" STREQUAL \"Clang\")\n  # Common compilation flags and warning configuration\n  set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -std=c++11 -Wall -Wextra -Wfatal-errors -pthread\")\n\n  if(\"${MULTICORE}\")\n    set(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} -fopenmp\")\n  endif()\n\n   # Default optimizations flags (to override, use -DOPT_FLAGS=...)\n  if(\"${OPT_FLAGS}\" STREQUAL \"\")\n    set(OPT_FLAGS \"-ggdb3 -O2 -march=native -mtune=native\")\n  endif()\nendif()\n\nadd_definitions(-DCURVE_${CURVE})\n\nif(${CURVE} STREQUAL \"BN128\")\n  add_definitions(-DBN_SUPPORT_SNARK=1)\nendif()\n\nif(\"${VERBOSE}\")\n  add_definitions(-DVERBOSE=1)\nendif()\n\nif(\"${MULTICORE}\")\n  add_definitions(-DMULTICORE=1)\nendif()\n\n\n\nadd_compile_options(-fPIC)\n\nif(\"${CPPDEBUG}\")\n  add_definitions(-D_GLIBCXX_DEBUG -D_GLIBCXX_DEBUG_PEDANTIC)\nendif()\n\nset(CMAKE_CXX_FLAGS \"${CMAKE_CXX_FLAGS} ${OPT_FLAGS}\")\n\ninclude(FindPkgConfig)\nif(\"${WITH_PROCPS}\")\n  pkg_check_modules(PROCPS REQUIRED libprocps)\nelse()\n  
add_definitions(-DNO_PROCPS)\nendif()\n\ninclude_directories(.)\n\nadd_subdirectory(depends)\nadd_subdirectory(src)\n"
  },
  {
    "path": "Dockerfile",
    "content": "FROM ubuntu:18.04\n\nRUN apt-get update && \\\n    apt-get install software-properties-common -y && \\\n    add-apt-repository ppa:ethereum/ethereum -y && \\\n    apt-get update && \\\n    apt-get install -y \\\n    wget unzip curl \\\n    build-essential cmake git libgmp3-dev libprocps-dev python-markdown libboost-all-dev libssl-dev pkg-config python3-pip solc\n\nWORKDIR /root/roll_up\n\nCOPY . .\n\nRUN pip3 install -r requirements.txt\n\nRUN cd build \\\n    && cmake .. \\\n    && make \\\n    && DESTDIR=/usr/local make install \\\n        NO_PROCPS=1 \\\n        NO_GTEST=1 \\\n        NO_DOCS=1 \\\n        CURVE=ALT_BN128 \\\n        FEATUREFLAGS=\"-DBINARY_OUTPUT=1 -DMONTGOMERY_OUTPUT=1 -DNO_PT_COMPRESSION=1\"\n\nENV LD_LIBRARY_PATH $LD_LIBRARY_PATH:/usr/local/lib\n"
  },
  {
    "path": "README.md",
    "content": "# roll_up \n\n[![Join the chat at https://gitter.im/barrywhitehat/roll_up](https://badges.gitter.im/barrywhitehat/roll_up.png)](https://gitter.im/barrywhitehat/roll_up?utm_source=share-link&utm_medium=link&utm_campaign=share-link)\n\nRoll_up aggregates transactions so that they only require a single onchain transactions required to validate multiple other transactions. The snark checks the signature and applies the transaction to the the leaf that the signer owns.\n\nMultiple users create signatures. Provers aggregates these signatures into a snark and use it to update a smart contract on the ethereum blockchain. A malicious prover who does not also have that leafs private key cannot change a leaf. Only the person who controls the private key can. \n\nThis is intended to be the database layer of snark-dapp (snapps) where the layers above define more rules about changing and updating the leaves\n\n`roll_up` does not make any rules about what happens in a leaf, what kind of leaves can be created and destroyed. This is the purview of \nhigher level snapps. Who can add their constraints in `src/roll_up.tcc` in the function `generate_r1cs_constraints()`\n\n## In Depth\n\nThe system is base use eddsa signatures defined in  [baby_jubjub_ecc](https://github.com/barryWhiteHat/baby_jubjub_ecc) base upon [baby_jubjub](https://github.com/barryWhiteHat/baby_jubjub). It uses sha256 padded with 512 bits input. \n\nThe leaf is defined as follows \n```\n\n                                        LEAF\n                        +----------------^----------------+\n                       LHS                               RHS\n               +----------------+                \n           Public_key_x    public_key_y         \n```\n\nThe leaf is then injected into a merkle tree. \n\nA transaction updates a single leaf in the merkle tree. A transaction takes the following form. \n\n```\n1. Public key x and y point\n2. 
The message which is defined as the hash of the old leaf and the new leaf. \n\n                                      MESSAGE\n                        +----------------^----------------+\n                     OLD_LEAF                          NEW_LEAF\n\n3. the point R and the integer S. \n```\n\n\nIn order to update the merkle tree the prover needs to aggregate together X transactions. For each transaction they check \n```\n1. Takes the merkel root as input from the smart contract (if it is the first iteration) or from the merkle root from the previous \ntransaction. \n2. Find the leaf that matches the message in the merkle tree. \nNOTE: If there are two messages that match, both can be updated as their is no replay protection this should be solved on the next layer\nthis is simply the read and write layer, we do not check what is being written here. \n3. Check that the proving key matches the owner of that leaf. \n4. Confirm that the signature is correct.\n5. Confirm that that leaf is in the merkle tree. \n6. Replace is with the new leaf and calculate the new merkle root. \n7. Continue until all transactions have been included in a snark\n```\nThe snark can then be included in a transaction to update the merkle root tracked by a smart contract. \n\n\n## Data availabilty guarrentees\n\nIt is important that each prover is able to make merkle proofs for all leaves.\nIf they cannot these leaves are essentially locked until that information becomes available.\n\nIn order to ensure this, we pass every updated leaf to the smart contract so\nthat data will always be available. \n\nThus the system has the same data availability guarrentees as ethereum.\n\n## Scalability\n\nGas cost of function call: 23368\nGas cost of throwing an event with a single leaf update : 1840\n\nAlthough we don't use groth16 currently. This is the cheapest proving system to our knowledge. 
\n\ngroth16 confirm:  560000 including tx cost and input data is ~600000.\n\nThe gas limit is 8,000,000 per block. So we can use the rest of the gas to maintain data availability. \n\n8000000 - 600000  =  7400000\n\nWe find that 7400000 is the remaining gas in the block. \n\nSo we calculate how much we can spend on data availability\n\n7400000 / 1840 ~= 4021.73913043478\n\n4021.73913043478 / 15 = 268 transactions per second\n\n\n## Proving time\n\nOn a laptop with 7 GB of ram and 20 GB of swap space it struggles to aggragate 20 transactions per second. This is a\ncombination of my hardware limits and cpp code that needs to be improved. \n\n[Wu et al](https://eprint.iacr.org/2018/691) showed that is is possible to distribute\nthese computations that scales to billions of constaints. \n\nIn order to reach the tps described above three approaches exist. \n\n1. Improve the cpp code similar to https://github.com/HarryR/ethsnarks/issues/3 and run it on enterprise hardware.\n2. Implmenting the full distributed system described by Wu et al.\n3. Specialized hardware to create these proofs. \n\n\n## Distribution\n\nThe role of prover can be distributed but it means that each will have to purchase/rent hardware in order to be able to keep up with the longest chain. \n\nThere are a few attacks where the fastest prover is able censor all other provers by constantly updating so the all competing provers proofs are constantly out of date. \n\nThese problem should be mitigated or solved at the consensus level. \n\n\n## Running tests \n\nIf you want to run at noTx greater than 10 you will need more than 7GB\nto add a bunch of swap space https://www.digitalocean.com/community/tutorials/how-to-add-swap-space-on-ubuntu-16-04\n\n### Build everything \n\n```\nmkdir keys\ngit submodule update --init --recursive\nmkdir build\ncd build\ncmake .. 
&& make\n```\n\n### Run the tests\n\nNOTE: Make sure you have a node running so the smart contract would be deployed and validate the transaction, you can use \n`testrpc` or `ganache-cli`\n\n```\ncd ../tests/\npython3 test.py\n```\n\n### Change the merkle tree depth and number of transactions to be aggregated\n\nYou'd need to update two files, and re-build the prover.\n\nIn `pythonWrapper/helper.py`\n\n```\ntree_depth = 2\nnoTx = 4\n```\n\nIn `src/roll_up_wrapper.hpp`\n\n```\nconst int tree_depth = 2;\n```\n"
  },
  {
    "path": "build/.gitkeep",
    "content": ""
  },
  {
    "path": "contracts/Miximus.sol",
    "content": "/*    \n    copyright 2018 to the roll_up Authors\n\n    This file is part of roll_up.\n\n    roll_up is free software: you can redistribute it and/or modify\n    it under the terms of the GNU General Public License as published by\n    the Free Software Foundation, either version 3 of the License, or\n    (at your option) any later version.\n\n    roll_up is distributed in the hope that it will be useful,\n    but WITHOUT ANY WARRANTY; without even the implied warranty of\n    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n    GNU General Public License for more details.\n\n    You should have received a copy of the GNU General Public License\n    along with roll_up.  If not, see <https://www.gnu.org/licenses/>.\n*/\n\npragma solidity ^0.4.19;\n\nimport \"./Verifier.sol\";\n\ncontract roll_up{\n    bytes32 root;\n    mapping (bytes32 => bool) nullifiers;\n    event Withdraw (address); \n    Verifier public zksnark_verify;\n    function roll_up (address _zksnark_verify, bytes32 _root) {\n        zksnark_verify = Verifier(_zksnark_verify);\n        root = _root;\n    }\n\n    function isTrue (\n            uint[2] a,\n            uint[2] a_p,\n            uint[2][2] b,\n            uint[2] b_p,\n            uint[2] c,\n            uint[2] c_p,\n            uint[2] h,\n            uint[2] k,\n            uint[] input\n        ) returns (bool) {\n\n        bytes32 _root = padZero(reverse(bytes32(input[0]))); //)merge253bitWords(input[0], input[1]);\n        require(_root == padZero(root));\n        require(zksnark_verify.verifyTx(a,a_p,b,b_p,c,c_p,h,k,input));      \n        root = padZero(reverse(bytes32(input[2])));\n        return(true);\n    }\n\n    function getRoot() constant returns(bytes32) {\n        return(root);\n    } \n\n    // libshark only allows 253 bit chunks in its output\n    // to overcome this we merge the first 253 bits (left) with the remaining 3 bits\n    // in the next variable (right)\n\n    function 
merge253bitWords(uint left, uint right) returns(bytes32) {\n        right = pad3bit(right);\n        uint left_msb = uint(padZero(reverse(bytes32(left))));\n        uint left_lsb = uint(getZero(reverse(bytes32(left))));\n        right = right + left_lsb;\n        uint res = left_msb + right; \n        return(bytes32(res));\n    }\n\n\n    // bit-reverse a 3-bit value, e.g. reverse(0b001) = 0b100 = 4\n    function pad3bit(uint input) constant returns(uint) {\n        if (input == 0) \n            return 0;\n        if (input == 1)\n            return 4;\n        if (input == 2)\n            return 2;\n        if (input == 3)\n            return 6;\n        return(input);\n    }\n\n    // keep only the lowest 4 bits\n    function getZero(bytes32 x) returns(bytes32) {\n        return(x & 0x000000000000000000000000000000000000000000000000000000000000000F);\n    }\n\n    // zero out the lowest 4 bits\n    function padZero(bytes32 x) returns(bytes32) {\n        return(x & 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF0);\n    }\n\n    function reverseByte(uint a) public pure returns (uint) {\n        uint c = 0xf070b030d0509010e060a020c0408000;\n\n        return (( c >> ((a & 0xF)*8)) & 0xF0)   +  \n               (( c >> (((a >> 4)&0xF)*8) + 4) & 0xF);\n    }\n    // flip endianness\n    function reverse(bytes32 a) public pure returns(bytes32) {\n        uint r;\n        uint i;\n        uint b;\n        for (i=0; i<32; i++) {\n            b = (uint(a) >> ((31-i)*8)) & 0xff;\n            b = reverseByte(b);\n            r += b << (i*8);\n        }\n        return bytes32(r);\n    }\n\n}\n"
  },
  {
    "path": "contracts/Pairing.sol",
    "content": "// This code is taken from https://github.com/JacobEberhardt/ZoKrates\n\npragma solidity ^0.4.19;\n\nlibrary Pairing {\n    struct G1Point {\n        uint X;\n        uint Y;\n    }\n    // Encoding of field elements is: X[0] * z + X[1]\n    struct G2Point {\n        uint[2] X;\n        uint[2] Y;\n    }\n    /// @return the generator of G1\n    function P1() internal returns (G1Point) {\n        return G1Point(1, 2);\n    }\n    /// @return the generator of G2\n    function P2() internal returns (G2Point) {\n        return G2Point(\n            [11559732032986387107991004021392285783925812861821192530917403151452391805634,\n             10857046999023057135944570762232829481370756359578518086990519993285655852781],\n            [4082367875863433681332203403145435568316851327593401208105741076214120093531,\n             8495653923123431417604973247489272438418190587263600148770280649306958101930]\n        );\n    }\n    /// @return the negation of p, i.e. p.add(p.negate()) should be zero.\n    function negate(G1Point p) internal returns (G1Point) {\n        // The prime q in the base field F_q for G1\n        uint q = 21888242871839275222246405745257275088696311157297823662689037894645226208583;\n        if (p.X == 0 && p.Y == 0)\n            return G1Point(0, 0);\n        return G1Point(p.X, q - (p.Y % q));\n    }\n    /// @return the sum of two points of G1\n    function add(G1Point p1, G1Point p2) internal returns (G1Point r) {\n        uint[4] memory input;\n        input[0] = p1.X;\n        input[1] = p1.Y;\n        input[2] = p2.X;\n        input[3] = p2.Y;\n        bool success;\n        assembly {\n            success := call(sub(gas, 2000), 6, 0, input, 0xc0, r, 0x60)\n            // Use \"invalid\" to make gas estimation work\n            switch success case 0 { invalid }\n        }\n        require(success);\n    }\n    /// @return the product of a point on G1 and a scalar, i.e.\n    /// p == p.mul(1) and p.add(p) == p.mul(2) for all 
points p.\n    function mul(G1Point p, uint s) internal returns (G1Point r) {\n        uint[3] memory input;\n        input[0] = p.X;\n        input[1] = p.Y;\n        input[2] = s;\n        bool success;\n        assembly {\n            success := call(sub(gas, 2000), 7, 0, input, 0x80, r, 0x60)\n            // Use \"invalid\" to make gas estimation work\n            switch success case 0 { invalid }\n        }\n        require (success);\n    }\n    /// @return the result of computing the pairing check\n    /// e(p1[0], p2[0]) *  .... * e(p1[n], p2[n]) == 1\n    /// For example pairing([P1(), P1().negate()], [P2(), P2()]) should\n    /// return true.\n    function pairing(G1Point[] p1, G2Point[] p2) internal returns (bool) {\n        require(p1.length == p2.length);\n        uint elements = p1.length;\n        uint inputSize = elements * 6;\n        uint[] memory input = new uint[](inputSize);\n        for (uint i = 0; i < elements; i++)\n        {\n            input[i * 6 + 0] = p1[i].X;\n            input[i * 6 + 1] = p1[i].Y;\n            input[i * 6 + 2] = p2[i].X[0];\n            input[i * 6 + 3] = p2[i].X[1];\n            input[i * 6 + 4] = p2[i].Y[0];\n            input[i * 6 + 5] = p2[i].Y[1];\n        }\n        uint[1] memory out;\n        bool success;\n        assembly {\n            success := call(sub(gas, 2000), 8, 0, add(input, 0x20), mul(inputSize, 0x20), out, 0x20)\n            // Use \"invalid\" to make gas estimation work\n            switch success case 0 { invalid }\n        }\n        require(success);\n        return out[0] != 0;\n    }\n    /// Convenience method for a pairing check for two pairs.\n    function pairingProd2(G1Point a1, G2Point a2, G1Point b1, G2Point b2) internal returns (bool) {\n        G1Point[] memory p1 = new G1Point[](2);\n        G2Point[] memory p2 = new G2Point[](2);\n        p1[0] = a1;\n        p1[1] = b1;\n        p2[0] = a2;\n        p2[1] = b2;\n        return pairing(p1, p2);\n    }\n    /// Convenience 
method for a pairing check for three pairs.\n    function pairingProd3(\n            G1Point a1, G2Point a2,\n            G1Point b1, G2Point b2,\n            G1Point c1, G2Point c2\n    ) internal returns (bool) {\n        G1Point[] memory p1 = new G1Point[](3);\n        G2Point[] memory p2 = new G2Point[](3);\n        p1[0] = a1;\n        p1[1] = b1;\n        p1[2] = c1;\n        p2[0] = a2;\n        p2[1] = b2;\n        p2[2] = c2;\n        return pairing(p1, p2);\n    }\n    /// Convenience method for a pairing check for four pairs.\n    function pairingProd4(\n            G1Point a1, G2Point a2,\n            G1Point b1, G2Point b2,\n            G1Point c1, G2Point c2,\n            G1Point d1, G2Point d2\n    ) internal returns (bool) {\n        G1Point[] memory p1 = new G1Point[](4);\n        G2Point[] memory p2 = new G2Point[](4);\n        p1[0] = a1;\n        p1[1] = b1;\n        p1[2] = c1;\n        p1[3] = d1;\n        p2[0] = a2;\n        p2[1] = b2;\n        p2[2] = c2;\n        p2[3] = d2;\n        return pairing(p1, p2);\n    }\n}\n\n"
  },
  {
    "path": "contracts/Verifier.sol",
    "content": "// this code is taken from https://github.com/JacobEberhardt/ZoKrates \n\npragma solidity ^0.4.19;\n\nimport \"../contracts/Pairing.sol\";\n\ncontract Verifier {\n    using Pairing for *;\n    uint sealed = 0; //IC parameater add counter.\n    uint i = 0;\n    struct VerifyingKey {\n        Pairing.G2Point A;\n        Pairing.G1Point B;\n        Pairing.G2Point C;\n        Pairing.G2Point gamma;\n        Pairing.G1Point gammaBeta1;\n        Pairing.G2Point gammaBeta2;\n        Pairing.G2Point Z;\n        Pairing.G1Point[] IC;\n    }\n    struct Proof {\n        Pairing.G1Point A;\n        Pairing.G1Point A_p;\n        Pairing.G2Point B;\n        Pairing.G1Point B_p;\n        Pairing.G1Point C;\n        Pairing.G1Point C_p;\n        Pairing.G1Point K;\n        Pairing.G1Point H;\n    }\n    VerifyingKey verifyKey;\n    function Verifier (uint[2] A1, uint[2] A2, uint[2] B, uint[2] C1, uint[2] C2, \n                       uint[2] gamma1, uint[2] gamma2, uint[2] gammaBeta1, \n                       uint[2] gammaBeta2_1, uint[2] gammaBeta2_2, uint[2] Z1, uint[2] Z2,\n                       uint[] input) {\n        verifyKey.A = Pairing.G2Point(A1,A2);\n        verifyKey.B = Pairing.G1Point(B[0], B[1]);\n        verifyKey.C = Pairing.G2Point(C1, C2);\n        verifyKey.gamma = Pairing.G2Point(gamma1, gamma2);\n\n        verifyKey.gammaBeta1 = Pairing.G1Point(gammaBeta1[0], gammaBeta1[1]);\n        verifyKey.gammaBeta2 = Pairing.G2Point(gammaBeta2_1, gammaBeta2_2);\n        verifyKey.Z = Pairing.G2Point(Z1,Z2);\n\n        /*while (verifyKey.IC.length != input.length/2) {\n            verifyKey.IC.push(Pairing.G1Point(input[i], input[i+1]));\n            i += 2;\n        }*/\n\n    }\n\n\n   function addIC(uint[] input)  {  \n        require(sealed ==0);\n        while (verifyKey.IC.length != input.length/2 && msg.gas > 200000) {\n            verifyKey.IC.push(Pairing.G1Point(input[i], input[i+1]));\n            i += 2;\n        } \n       if( 
verifyKey.IC.length == input.length/2) {\n            sealed = 1;\n       }\n   } \n\n\n   function getIC(uint i) returns(uint, uint) {\n       return(verifyKey.IC[i].X, verifyKey.IC[i].Y);\n   }\n\n   function getICLen () returns (uint) { \n        return(verifyKey.IC.length);\n   } \n\n   function verify(uint[] input, Proof proof) internal returns (uint) {\n        VerifyingKey memory vk = verifyKey;\n        require(input.length + 1 == vk.IC.length);\n\n\n        // Compute the linear combination vk_x\n        Pairing.G1Point memory vk_x = Pairing.G1Point(0, 0);\n        for (uint i = 0; i < input.length; i++)\n            vk_x = Pairing.add(vk_x, Pairing.mul(vk.IC[i + 1], input[i]));\n        vk_x = Pairing.add(vk_x, vk.IC[0]);\n\n        if (!Pairing.pairingProd2(proof.A, vk.A, Pairing.negate(proof.A_p), Pairing.P2())) return 1;\n        if (!Pairing.pairingProd2(vk.B, proof.B, Pairing.negate(proof.B_p), Pairing.P2())) return 2;\n        if (!Pairing.pairingProd2(proof.C, vk.C, Pairing.negate(proof.C_p), Pairing.P2())) return 3;\n        if (!Pairing.pairingProd3(\n            proof.K, vk.gamma,\n            Pairing.negate(Pairing.add(vk_x, Pairing.add(proof.A, proof.C))), vk.gammaBeta2,\n            Pairing.negate(vk.gammaBeta1), proof.B\n        )) return 4;\n        if (!Pairing.pairingProd3(\n                Pairing.add(vk_x, proof.A), proof.B,\n                Pairing.negate(proof.H), vk.Z,\n                Pairing.negate(proof.C), Pairing.P2()\n        )) return 5; \n        return 0;\n    }\n    event Verified(string);\n    function verifyTx(\n            uint[2] a,\n            uint[2] a_p,\n            uint[2][2] b,\n            uint[2] b_p,\n            uint[2] c,\n            uint[2] c_p,\n            uint[2] h,\n            uint[2] k,\n            uint[] input\n        ) returns (bool) {\n        Proof memory proof;\n        proof.A = Pairing.G1Point(a[0], a[1]);\n        proof.A_p = Pairing.G1Point(a_p[0], a_p[1]);\n        proof.B = 
Pairing.G2Point([b[0][0], b[0][1]], [b[1][0], b[1][1]]);\n        proof.B_p = Pairing.G1Point(b_p[0], b_p[1]);\n        proof.C = Pairing.G1Point(c[0], c[1]);\n        proof.C_p = Pairing.G1Point(c_p[0], c_p[1]);\n        proof.H = Pairing.G1Point(h[0], h[1]);\n        proof.K = Pairing.G1Point(k[0], k[1]);\n        uint[] memory inputValues = new uint[](input.length);\n        for(uint i = 0; i < input.length; i++){\n            inputValues[i] = input[i];\n        }\n\n        if (verify(inputValues, proof) == 0) {\n            Verified(\"Transaction successfully verified.\");\n            return true;\n        } else {\n            return false;\n        }\n\n    } \n}\n"
  },
  {
    "path": "contracts/contract_deploy.py",
    "content": "'''\n    copyright 2018 to the roll_up Authors\n\n    This file is part of roll_up.\n\n    roll_up is free software: you can redistribute it and/or modify\n    it under the terms of the GNU General Public License as published by\n    the Free Software Foundation, either version 3 of the License, or\n    (at your option) any later version.\n\n    roll_up is distributed in the hope that it will be useful,\n    but WITHOUT ANY WARRANTY without even the implied warranty of\n    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n    GNU General Public License for more details.\n\n    You should have received a copy of the GNU General Public License\n    along with roll_up.  If not, see <https://www.gnu.org/licenses/>.\n'''\n\nimport json\nimport web3\n\nfrom web3 import Web3, HTTPProvider, TestRPCProvider\nfrom solc import compile_source, compile_standard, compile_files\nfrom solc import compile_source, compile_files, link_code\nfrom web3.contract import ConciseContract\n\nfrom utils import hex2int\n\ndef compile(tree_depth):\n    rollup = \"../contracts/roll_up.sol\"\n    Pairing =  \"../contracts/Pairing.sol\"\n    Verifier = \"../contracts/Verifier.sol\"\n\n    compiled_sol =  compile_files([Pairing, Verifier, rollup], allow_paths=\"./contracts\")\n\n    rollup_interface = compiled_sol[rollup + ':roll_up']\n    verifier_interface = compiled_sol[Verifier + ':Verifier']\n\n    return(rollup_interface, verifier_interface)\n   \n\ndef contract_deploy(tree_depth, vk_dir, merkle_root, host=\"localhost\"):\n    w3 = Web3(HTTPProvider(\"http://\" + host + \":8545\"))\n\n    rollup_interface , verifier_interface  = compile(tree_depth)\n    with open(vk_dir) as json_data:\n        vk = json.load(json_data)\n\n\n    vk  = [hex2int(vk[\"a\"][0]),\n           hex2int(vk[\"a\"][1]),\n           hex2int(vk[\"b\"]),\n           hex2int(vk[\"c\"][0]),\n           hex2int(vk[\"c\"][1]),\n           hex2int(vk[\"g\"][0]),\n           hex2int(vk[\"g\"][1]),\n 
          hex2int(vk[\"gb1\"]),\n           hex2int(vk[\"gb2\"][0]),\n           hex2int(vk[\"gb2\"][1]),\n           hex2int(vk[\"z\"][0]),\n           hex2int(vk[\"z\"][1]),\n           hex2int(sum(vk[\"IC\"], []))\n    ]\n\n     # Instantiate and deploy contract\n    rollup = w3.eth.contract(abi=rollup_interface['abi'], bytecode=rollup_interface['bin'])\n    verifier = w3.eth.contract(abi=verifier_interface['abi'], bytecode=verifier_interface['bin'])\n\n    # Get transaction hash from deployed contract\n    tx_hash = verifier.deploy(args=vk, transaction={'from': w3.eth.accounts[0], 'gas': 4000000})\n    # Get tx receipt to get contract address\n\n    tx_receipt = w3.eth.waitForTransactionReceipt(tx_hash, 10000)\n    verifier_address = tx_receipt['contractAddress']\n\n\n    # add IC \n    verifier = w3.eth.contract(address=verifier_address, abi=verifier_interface['abi'],ContractFactoryClass=ConciseContract)\n    while verifier.getICLen() != (len(vk[-1]))//2:\n        tx_hash = verifier.addIC(vk[-1] , transact={'from': w3.eth.accounts[0], 'gas': 4000000})\n        tx_receipt = w3.eth.waitForTransactionReceipt(tx_hash, 100000)\n\n    tx_hash = rollup.deploy(transaction={'from': w3.eth.accounts[0], 'gas': 4000000}, args=[verifier_address, merkle_root])\n\n    # Get tx receipt to get contract address\n    tx_receipt = w3.eth.waitForTransactionReceipt(tx_hash, 10000)\n    rollup_address = tx_receipt['contractAddress']\n\n    # Contract instance in concise mode\n    abi = rollup_interface['abi']\n    rollup = w3.eth.contract(address=rollup_address, abi=abi,ContractFactoryClass=ConciseContract)\n    return(rollup)\n\ndef verify(contract, proof, host=\"localhost\"):\n    w3 = Web3(HTTPProvider(\"http://\" + host + \":8545\"))\n\n    tx_hash = contract.isTrue(proof[\"a\"] , proof[\"a_p\"], proof[\"b\"], proof[\"b_p\"] , proof[\"c\"], proof[\"c_p\"] , proof[\"h\"] , proof[\"k\"], proof[\"input\"] , transact={'from': w3.eth.accounts[0], 'gas': 4000000})\n    tx_receipt = 
w3.eth.waitForTransactionReceipt(tx_hash, 10000)\n\n    return(tx_receipt)\n"
  },
  {
    "path": "contracts/roll_up.sol",
    "content": "/*    \n    copyright 2018 to the roll_up Authors\n\n    This file is part of roll_up.\n\n    roll_up is free software: you can redistribute it and/or modify\n    it under the terms of the GNU General Public License as published by\n    the Free Software Foundation, either version 3 of the License, or\n    (at your option) any later version.\n\n    roll_up is distributed in the hope that it will be useful,\n    but WITHOUT ANY WARRANTY; without even the implied warranty of\n    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n    GNU General Public License for more details.\n\n    You should have received a copy of the GNU General Public License\n    along with roll_up.  If not, see <https://www.gnu.org/licenses/>.\n*/\n\npragma solidity ^0.4.19;\n\nimport \"../contracts/Verifier.sol\";\n\ncontract roll_up{\n    bytes32 root;\n    mapping (bytes32 => bool) nullifiers;\n    event Withdraw (address); \n    Verifier public zksnark_verify;\n    function roll_up (address _zksnark_verify, bytes32 _root) {\n        zksnark_verify = Verifier(_zksnark_verify);\n        root = _root;\n    }\n\n    function isTrue (\n            uint[2] a,\n            uint[2] a_p,\n            uint[2][2] b,\n            uint[2] b_p,\n            uint[2] c,\n            uint[2] c_p,\n            uint[2] h,\n            uint[2] k,\n            uint[] input\n        ) returns (bool) {\n\n        bytes32 _root = padZero(reverse(bytes32(input[0]))); //)merge253bitWords(input[0], input[1]);\n        require(_root == padZero(root));\n        require(zksnark_verify.verifyTx(a,a_p,b,b_p,c,c_p,h,k,input));      \n        root = padZero(reverse(bytes32(input[2])));\n        return(true);\n    }\n\n    function getRoot() constant returns(bytes32) {\n        return(root);\n    } \n\n    // libshark only allows 253 bit chunks in its output\n    // to overcome this we merge the first 253 bits (left) with the remaining 3 bits\n    // in the next variable (right)\n\n    function 
merge253bitWords(uint left, uint right) returns(bytes32) {\n        right = pad3bit(right);\n        uint left_msb = uint(padZero(reverse(bytes32(left))));\n        uint left_lsb = uint(getZero(reverse(bytes32(left))));\n        right = right + left_lsb;\n        uint res = left_msb + right; \n        return(bytes32(res));\n    }\n\n\n    // bit-reverse a 3-bit value, e.g. reverse(0b001) = 0b100 = 4\n    function pad3bit(uint input) constant returns(uint) {\n        if (input == 0) \n            return 0;\n        if (input == 1)\n            return 4;\n        if (input == 2)\n            return 2;\n        if (input == 3)\n            return 6;\n        return(input);\n    }\n\n    // keep only the lowest 4 bits\n    function getZero(bytes32 x) returns(bytes32) {\n        return(x & 0x000000000000000000000000000000000000000000000000000000000000000F);\n    }\n\n    // zero out the lowest 4 bits\n    function padZero(bytes32 x) returns(bytes32) {\n        return(x & 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF0);\n    }\n\n    function reverseByte(uint a) public pure returns (uint) {\n        uint c = 0xf070b030d0509010e060a020c0408000;\n\n        return (( c >> ((a & 0xF)*8)) & 0xF0)   +  \n               (( c >> (((a >> 4)&0xF)*8) + 4) & 0xF);\n    }\n    // flip endianness\n    function reverse(bytes32 a) public pure returns(bytes32) {\n        uint r;\n        uint i;\n        uint b;\n        for (i=0; i<32; i++) {\n            b = (uint(a) >> ((31-i)*8)) & 0xff;\n            b = reverseByte(b);\n            r += b << (i*8);\n        }\n        return bytes32(r);\n    }\n\n}\n"
  },
  {
    "path": "depends/CMakeLists.txt",
    "content": "add_subdirectory(baby_jubjub_ecc)\n\n\n"
  },
  {
    "path": "docker-compose.yml",
    "content": "version: \"3\"\n\nservices:\n\n  testrpc:\n    image: trufflesuite/ganache-cli:v6.1.8\n    ports:\n      - 8545\n    networks:\n      - blockchain\n\n  test:\n    build: .\n    working_dir: /root/roll_up/tests\n    command: python3 test.py testrpc\n    depends_on:\n      - testrpc\n    networks:\n      - blockchain\n    volumes:\n      - ./tests:/root/roll_up/tests\n      - ./pythonWrapper:/root/roll_up/pythonWrapper\n      - ./keys:/root/roll_up/keys\n      - ./contracts/contract_deploy.py:/root/roll_up/contracts/contract_deploy.py\n\nnetworks:\n  blockchain:"
  },
  {
    "path": "keys/.gitkeep",
    "content": ""
  },
  {
    "path": "pythonWrapper/helper.py",
    "content": "\n'''\n    copyright 2018 to the roll_up Authors\n\n    This file is part of roll_up.\n\n    roll_up is free software: you can redistribute it and/or modify\n    it under the terms of the GNU General Public License as published by\n    the Free Software Foundation, either version 3 of the License, or\n    (at your option) any later version.\n\n    roll_up is distributed in the hope that it will be useful,\n    but WITHOUT ANY WARRANTY; without even the implied warranty of\n    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n    GNU General Public License for more details.\n\n    You should have received a copy of the GNU General Public License\n    along with roll_up.  If not, see <https://www.gnu.org/licenses/>.\n'''\n\nimport pdb\nimport json\nfrom solc import compile_source, compile_files, link_code\nfrom bitstring import BitArray\nimport random \n\nfrom ctypes import cdll\nimport ctypes as c\n\nimport sys\nsys.path.insert(0, '../pythonWrapper')\nimport utils \nfrom utils import libsnark2python\n\ntree_depth = 2\nnoTx = 4\nlib = cdll.LoadLibrary('../build/src/libroll_up_wrapper.so')\n\n\nprove = lib.prove\nprove.argtypes = [((c.c_bool*256)*(tree_depth)*(noTx)), (c.c_bool*256 * noTx), (c.c_bool*256 * noTx), (c.c_bool*256* noTx), \n                  (((c.c_bool*tree_depth) * noTx)), (c.c_bool*256 * noTx), (c.c_bool*256 * noTx), (c.c_bool*256 * noTx),\n                  (c.c_bool*256 * noTx) , (c.c_bool*256* noTx),c.c_int, c.c_int] \n\nprove.restype = c.c_char_p\n\ngenKeys = lib.genKeys\ngenKeys.argtypes = [c.c_int, c.c_char_p, c.c_char_p]\n\n\n#verify = lib.verify\n#verify.argtypes = [c.c_char_p, c.c_char_p , c.c_char_p , c.c_char_p, c.c_char_p, c.c_char_p, c.c_char_p, c.c_char_p, c.c_char_p, c.c_char_p, c.c_char_p, c.c_char_p, c.c_char_p, c.c_char_p, c.c_char_p, c.c_char_p, c.c_char_p, c.c_char_p, c.c_char_p, c.c_char_p, c.c_char_p, c.c_char_p, c.c_char_p, c.c_char_p, c.c_char_p , c.c_char_p, c.c_char_p, c.c_char_p,  c.c_char_p, 
c.c_char_p, c.c_char_p, c.c_char_p, c.c_char_p ]\n#verify.restype = c.c_bool\n\n\ndef binary2ctypes(out):\n    return((c.c_bool*256)(*out))\n\ndef hexToBinary(hexString):\n    \n    out = [ int(x) for x in bin(int(hexString, 16))[2:].zfill(256)]\n\n    return(out)\n  \ndef genWitness(leaves, public_key_x, public_key_y, address, tree_depth, _rhs_leaf, _new_leaf,r_x, r_y, s):\n\n    path = []\n    fee = 0 \n    address_bits = []\n    pub_key_x = []\n    pub_key_y = [] \n    roots = []\n    paths = []\n\n    old_leaf = [] \n    new_leaf = []\n    r_x_bin_array = []\n    r_y_bin_array = []\n    s_bin_array = []\n    for i in range(noTx): \n\n        root , merkle_tree = utils.genMerkelTree(tree_depth, leaves[i])\n        path , address_bit = utils.getMerkelProof(leaves[i], address[i], tree_depth)\n\n        path = [binary2ctypes(hexToBinary(x)) for x in path] \n\n        address_bit = address_bit[::-1]\n        path = path[::-1]\n        paths.append(((c.c_bool*256)*(tree_depth))(*path))\n\n\n        pub_key_x.append(binary2ctypes(hexToBinary(public_key_x[i])))\n        pub_key_y.append(binary2ctypes(hexToBinary(public_key_y[i])))\n\n        roots.append(binary2ctypes(hexToBinary(root)))\n\n\n        address_bits.append((c.c_bool*tree_depth)(*address_bit))\n\n   \n        old_leaf.append(binary2ctypes(hexToBinary(_rhs_leaf[i])))\n        new_leaf.append(binary2ctypes(hexToBinary(_new_leaf[i])))\n\n        r_x_bin_array.append(binary2ctypes(hexToBinary(r_x[i])))\n        r_y_bin_array.append(binary2ctypes(hexToBinary(r_y[i])))\n        s_bin_array.append(binary2ctypes(hexToBinary(hex(s[i]))))\n\n\n\n    pub_key_x_array = ((c.c_bool*256)*(noTx))(*pub_key_x)\n    pub_key_y_array = ((c.c_bool*256)*(noTx))(*pub_key_y)\n    merkle_roots = ((c.c_bool*256)*(noTx))(*roots)\n    old_leaf = ((c.c_bool*256)*(noTx))(*old_leaf)\n    new_leaf = ((c.c_bool*256)*(noTx))(*new_leaf)\n    r_x_bin = ((c.c_bool*256)*(noTx))(*r_x_bin_array)\n    r_y_bin = 
((c.c_bool*256)*(noTx))(*r_y_bin_array)\n    s_bin = ((c.c_bool*256)*(noTx))(*s_bin_array)\n    paths = ((c.c_bool*256)*(tree_depth) * noTx)(*paths)\n    address_bits = ((c.c_bool)*(tree_depth) * noTx)(*address_bits)\n\n    proof = prove(paths, pub_key_x_array, pub_key_y_array, merkle_roots,  address_bits, old_leaf, new_leaf, r_x_bin, r_y_bin, s_bin, tree_depth, noTx)\n\n\n    proof = json.loads(proof.decode(\"utf-8\"))\n    root , merkle_tree = utils.genMerkelTree(tree_depth, leaves[0])\n\n    return(proof, root)\n\ndef genSalt(i):\n    salt = [random.choice(\"0123456789abcdef\") for x in range(0,i)]\n    out = \"\".join(salt)\n    return(out)\n\ndef genNullifier(recvAddress):\n    salt = genSalt(24)\n    return(recvAddress + salt)   \n"
  },
  {
    "path": "pythonWrapper/utils.py",
    "content": "'''\n    copyright 2018 to the roll_up Authors\n\n    This file is part of roll_up.\n\n    roll_up is free software: you can redistribute it and/or modify\n    it under the terms of the GNU General Public License as published by\n    the Free Software Foundation, either version 3 of the License, or\n    (at your option) any later version.\n\n    roll_up is distributed in the hope that it will be useful,\n    but WITHOUT ANY WARRANTY; without even the implied warranty of\n    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n    GNU General Public License for more details.\n\n    You should have received a copy of the GNU General Public License\n    along with roll_up.  If not, see <https://www.gnu.org/licenses/>.\n'''\n\n\nimport pdb\nimport hashlib \n\nimport sys\nsys.path.insert(0, \"../depends/baby_jubjub_ecc/tests\")\n\nimport ed25519 as ed\n\ndef hex2int(elements):\n    ints = []\n    for el in elements:\n        ints.append(int(el, 16))\n    return(ints)\n\ndef normalize_proof(proof):\n    proof[\"a\"] = hex2int(proof[\"a\"])\n    proof[\"a_p\"] = hex2int(proof[\"a_p\"])\n    proof[\"b\"] = [hex2int(proof[\"b\"][0]), hex2int(proof[\"b\"][1])]\n    proof[\"b_p\"] = hex2int(proof[\"b_p\"])\n    proof[\"c\"] = hex2int(proof[\"c\"])\n    proof[\"c_p\"] = hex2int(proof[\"c_p\"])\n    proof[\"h\"] = hex2int(proof[\"h\"])\n    proof[\"k\"] = hex2int(proof[\"k\"])\n    proof[\"input\"] = hex2int(proof[\"input\"]) \n    \n    return proof\n\ndef getSignature(m,sk,pk):\n\n   R,S = ed.signature(m,sk,pk)\n   return(R,S) \n\n\ndef createLeaf(public_key , message):\n    pk = ed.encodepoint(public_key)\n    leaf = hashPadded(pk, message)\n\n    return(leaf[2:])\n\ndef libsnark2python (inputs):   \n    #flip the inputs\n\n    bin_inputs = []\n    for x in inputs:\n        binary = bin(x)[2:][::-1]\n\n        if len(binary) > 100:\n            binary = binary.ljust(253, \"0\")          \n        bin_inputs.append(binary)\n    raw = 
\"\".join(bin_inputs)\n\n    raw += \"0\" * (256 * 5 - len(raw))\n\n    output = []\n    i = 0\n    while i < len(raw):\n        hexnum = hex(int(raw[i:i+256], 2))\n        # pad leading zeros\n        padding = 66 - len(hexnum)\n        hexnum = hexnum[:2] + \"0\"*padding + hexnum[2:]\n\n        output.append(hexnum)\n        i += 256\n    return(output)\n\ndef hashPadded(left, right):\n    x1 = int(left, 16).to_bytes(32, \"big\")\n    x2 = int(right, 16).to_bytes(32, \"big\")\n    data = x1 + x2\n    answer = hashlib.sha256(data).hexdigest()\n    return(\"0x\" + answer)\n\ndef sha256(data):\n    data = str(data).encode()\n    return(\"0x\" + hashlib.sha256(data).hexdigest())\n\ndef getUniqueLeaf(depth):\n    inputHash = \"0x0000000000000000000000000000000000000000000000000000000000000000\"\n    for i in range(0, depth):\n        inputHash = hashPadded(inputHash, inputHash)\n    return(inputHash)\n\ndef genMerkelTree(tree_depth, leaves):\n    # one list of nodes per tree level, from the leaves up to the root\n    tree_layers = [leaves] + [[] for _ in range(tree_depth)]\n\n    for i in range(0, tree_depth):\n        if len(tree_layers[i]) % 2 != 0:\n            tree_layers[i].append(getUniqueLeaf(i))\n        for j in range(0, len(tree_layers[i]), 2):\n            tree_layers[i+1].append(hashPadded(tree_layers[i][j], tree_layers[i][j+1]))\n\n    return(tree_layers[tree_depth][0], tree_layers)\n\ndef getMerkelRoot(tree_depth, leaves):\n    root, tree = genMerkelTree(tree_depth, leaves)\n    return(root)\n\ndef getMerkelProof(leaves, index, tree_depth):\n    address_bits = []\n    merkelProof = []\n    mr, tree = genMerkelTree(tree_depth, leaves)\n    for i in range(0, tree_depth):\n        address_bits.append(index % 2)\n        if (index % 2 == 0):\n            merkelProof.append(tree[i][index + 1])\n        else:\n            merkelProof.append(tree[i][index - 1])\n        index = index // 2\n    return(merkelProof, address_bits)\n\ndef testHashPadded():\n    left = 
\"0x0000000000000000000000000000000000000000000000000000000000000000\"\n    right = \"0x0000000000000000000000000000000000000000000000000000000000000000\"\n    res = hashPadded(left , right)\n    assert (res == \"0xf5a5fd42d16a20302798ef6ed309979b43003d2320d9f0e8ea9831a92759fb4b\")\n\ndef testGenMerkelTree():\n    mr1, tree = genMerkelTree(1, [\"0x0000000000000000000000000000000000000000000000000000000000000000\", \"0x0000000000000000000000000000000000000000000000000000000000000000\"]) \n    mr2, tree = genMerkelTree(2, [\"0x0000000000000000000000000000000000000000000000000000000000000000\", \"0x0000000000000000000000000000000000000000000000000000000000000000\", \n                      \"0x0000000000000000000000000000000000000000000000000000000000000000\", \"0x0000000000000000000000000000000000000000000000000000000000000000\"])\n    mr3, tree = genMerkelTree(29, [\"0x0000000000000000000000000000000000000000000000000000000000000000\", \"0x0000000000000000000000000000000000000000000000000000000000000000\"])\n    assert(mr1 == \"0xf5a5fd42d16a20302798ef6ed309979b43003d2320d9f0e8ea9831a92759fb4b\") \n    assert(mr2 == \"0xdb56114e00fdd4c1f85c892bf35ac9a89289aaecb1ebd0a96cde606a748b5d71\")\n\ndef testlibsnarkTopython():\n    inputs = [12981351829201453377820191526040524295325907810881751591725375521336092323040, \n              2225095499654173609649711272123535458680077283826030252600915820706026312895, \n              10509931637877506470161905650895697133838017786875388895008260393592381807236, \n              11784807906137262651861317232543524609532737193375988426511007536308407308209, 17]\n\n    inputs = [9782619478414927069440250629401329418138703122237912437975467993246167708418,\n              2077680306600520305813581592038078188768881965413185699798221798985779874888,\n              4414150718664423886727710960459764220828063162079089958392546463165678021703,\n              7513790795222206681892855620762680219484336729153939269867138100414707910106,\n        
      902]\n\n    output = libsnark2python(inputs)\n    print(output)\n    assert(output[0] == \"0x40cde80490e78bc7d1035cbc78d3e6be3e41b2fdfad473782e02e226cc2305a8\")\n    assert(output[1] == \"0x918e88a16d0624cd5ca4695bd84e23e4a6c8a202ce85560d3c66d4ed39bf4938\")\n    assert(output[2] == \"0x8dd3ea28fe8d04f3e15b787fec7e805e152fe7d3302d0122c8522bee1290e4b7\")\n    assert(output[3] == \"0x47a6bbcf8fa3667431e895f08cbd8ec2869a31698d9cf91e5bfd94cbca72161c\")\n\ndef testgetUniqueLeaf():\n    assert (getUniqueLeaf(0) == \"0x0000000000000000000000000000000000000000000000000000000000000000\")\n    assert (getUniqueLeaf(1) == \"0xf5a5fd42d16a20302798ef6ed309979b43003d2320d9f0e8ea9831a92759fb4b\")\n    assert (getUniqueLeaf(2) == \"0xdb56114e00fdd4c1f85c892bf35ac9a89289aaecb1ebd0a96cde606a748b5d71\")\n    assert (getUniqueLeaf(3) == \"0xc78009fdf07fc56a11f122370658a353aaa542ed63e44c4bc15ff4cd105ab33c\")\n    assert (getUniqueLeaf(4) == \"0x536d98837f2dd165a55d5eeae91485954472d56f246df256bf3cae19352a123c\")\n\ndef testgetMerkelProof():\n    proof1, address1 = getMerkelProof([\"0x0000000000000000000000000000000000000000000000000000000000000000\", \"0x0000000000000000000000000000000000000000000000000000000000000000\",\n                      \"0x0000000000000000000000000000000000000000000000000000000000000000\", \"0x0000000000000000000000000000000000000000000000000000000000000000\"] , 0 , 2)\n    assert ( proof1[0] == \"0x0000000000000000000000000000000000000000000000000000000000000000\")\n    assert ( proof1[1] == \"0xf5a5fd42d16a20302798ef6ed309979b43003d2320d9f0e8ea9831a92759fb4b\")\n    assert ( address1[0] == 0)\n    assert ( address1[1] == 0)\n"
  },
  {
    "path": "requirements.txt",
    "content": "web3==4.6.0\npy-solc==3.1.0\nbitstring==3.1.5"
  },
  {
    "path": "src/CMakeLists.txt",
    "content": "include_directories(.)\n\n\nadd_library(\n  roll_up_wrapper\n  SHARED\n  roll_up_wrapper.cpp\n)\n\ntarget_link_libraries(\n  roll_up_wrapper\n  snark\n  baby_jubjub_ecc  \n)\n\ntarget_include_directories(roll_up_wrapper PUBLIC ../depends/baby_jubjub_ecc/src)\n\nset_property(TARGET roll_up_wrapper PROPERTY POSITION_INDEPENDENT_CODE ON)\n\ntarget_include_directories(\n  roll_up_wrapper\n  PUBLIC\n  ${DEPENDS_DIR}/baby_jubjub_ecc\n  ${DEPENDS_DIR}/baby_jubjub_ecc/baby_jubjub_ecc\n  ${DEPENDS_DIR}/baby_jubjub_ecc/depends/libsnark\n  ${DEPENDS_DIR}/baby_jubjub_ecc/depends/libsnark/depends/libff\n  ${DEPENDS_DIR}/baby_jubjub_ecc/depends/libsnark/depends/libfqfft\n)\n"
  },
  {
    "path": "src/ZoKrates/wraplibsnark.cpp",
"content": "/**\n * @file wraplibsnark.cpp\n * @author Jacob Eberhardt <jacob.eberhardt@tu-berlin.de>\n * @author Dennis Kuhnert <dennis.kuhnert@campus.tu-berlin.de>\n * @date 2017\n */\n\n#include \"wraplibsnark.hpp\"\n#include <fstream>\n#include <iostream>\n#include <cassert>\n#include <iomanip>\n\n// contains definition of alt_bn128 ec public parameters\n//#include \"libsnark/libsnark/algebra/curves/alt_bn128/alt_bn128_pp.hpp\"\n#include \"libff/algebra/curves/alt_bn128/alt_bn128_pp.hpp\"\n// contains required interfaces and types (keypair, proof, generator, prover, verifier)\n#include <libsnark/zk_proof_systems/ppzksnark/r1cs_ppzksnark/r1cs_ppzksnark.hpp>\n\ntypedef long integer_coeff_t;\n\nusing namespace std;\nusing namespace libsnark;\n\n// conversion byte[32] <-> libsnark bigint.\nlibff::bigint<libff::alt_bn128_r_limbs> libsnarkBigintFromBytes(const uint8_t* _x)\n{\n  libff::bigint<libff::alt_bn128_r_limbs> x;\n\n  for (unsigned i = 0; i < 4; i++) {\n    for (unsigned j = 0; j < 8; j++) {\n      x.data[3 - i] |= uint64_t(_x[i * 8 + j]) << (8 * (7-j));\n    }\n  }\n  return x;\n}\n\nstd::string HexStringFromLibsnarkBigint(libff::bigint<libff::alt_bn128_r_limbs> _x){\n    uint8_t x[32];\n    for (unsigned i = 0; i < 4; i++)\n        for (unsigned j = 0; j < 8; j++)\n            x[i * 8 + j] = uint8_t(uint64_t(_x.data[3 - i]) >> (8 * (7 - j)));\n\n    std::stringstream ss;\n    ss << std::setfill('0');\n    for (unsigned i = 0; i < 32; i++) {\n        ss << std::hex << std::setw(2) << (int)x[i];\n    }\n\n    // strip leading zeros, keeping at least one character\n    std::string str = ss.str();\n    return str.erase(0, min(str.find_first_not_of('0'), str.size()-1));\n}\n\nstd::string outputPointG1AffineAsHex(libff::alt_bn128_G1 _p)\n{\n        libff::alt_bn128_G1 aff = _p;\n        aff.to_affine_coordinates();\n
return \"\\\"0x\" +\n               HexStringFromLibsnarkBigint(aff.X.as_bigint()) +\n                \"\\\", \\\"0x\" +\n                HexStringFromLibsnarkBigint(aff.Y.as_bigint()) +\n                \"\\\"\";\n}\n\nstd::string outputPointG1AffineAsInt(libff::alt_bn128_G1 _p)\n{\n        libff::alt_bn128_G1 aff = _p;\n        aff.to_affine_coordinates();\n        std::stringstream ss;\n        ss << aff.X.as_bigint() << \",\" << aff.Y.as_bigint() << \",\" << aff.Z.as_bigint();\n        return ss.str();\n}\n\n\nstd::string outputPointG2AffineAsHex(libff::alt_bn128_G2 _p)\n{\n        libff::alt_bn128_G2 aff = _p;\n\n        // normalize unless Z == 0, i.e. the point at infinity\n        if (aff.Z.c0.as_bigint() != \"0\" || aff.Z.c1.as_bigint() != \"0\" ) {\n            aff.to_affine_coordinates();\n        }\n        return \"[\\\"0x\" +\n                HexStringFromLibsnarkBigint(aff.X.c1.as_bigint()) + \"\\\", \\\"0x\" +\n                HexStringFromLibsnarkBigint(aff.X.c0.as_bigint()) + \"\\\"],\\n [\\\"0x\" + \n                HexStringFromLibsnarkBigint(aff.Y.c1.as_bigint()) + \"\\\", \\\"0x\" +\n                HexStringFromLibsnarkBigint(aff.Y.c0.as_bigint()) + \"\\\"]\"; \n}\nstd::string outputPointG2AffineAsInt(libff::alt_bn128_G2 _p)\n{\n        libff::alt_bn128_G2 aff = _p;\n        // normalize unless Z == 0, i.e. the point at infinity\n        if (aff.Z.c0.as_bigint() != \"0\" || aff.Z.c1.as_bigint() != \"0\" ) {\n            aff.to_affine_coordinates();\n        }\n        std::stringstream ss;\n        ss << aff.X.c1.as_bigint() << \",\" << aff.X.c0.as_bigint() << \",\" << aff.Y.c1.as_bigint() << \",\" << aff.Y.c0.as_bigint() << \",\" << aff.Z.c1.as_bigint() << \",\" << aff.Z.c0.as_bigint();\n\n        return ss.str();\n}\n\n\n//takes input and puts it into constraint system\nr1cs_ppzksnark_constraint_system<libff::alt_bn128_pp> createConstraintSystem(const uint8_t* A, const uint8_t* B, const uint8_t* C, int constraints, int variables, int inputs)\n{\n  r1cs_ppzksnark_constraint_system<libff::alt_bn128_pp> cs;\n  cs.primary_input_size = inputs;\n
  cs.auxiliary_input_size = variables - inputs - 1; // ~one not included\n\n  cout << \"num variables: \" << variables << endl;\n  cout << \"num constraints: \" << constraints << endl;\n  cout << \"num inputs: \" << inputs << endl;\n\n  // initialize curve parameters once, before processing any values\n  libff::alt_bn128_pp::init_public_params();\n\n  for (int row = 0; row < constraints; row++) {\n    linear_combination<libff::alt_bn128_pp> lin_comb_A, lin_comb_B, lin_comb_C;\n\n    for (int idx=0; idx<variables; idx++) {\n      libff::bigint<libff::alt_bn128_r_limbs> value = libsnarkBigintFromBytes(A+row*variables*32 + idx*32);\n      cout << \"A entry \" << idx << \" in row \" << row << \": \" << value << endl;\n      if (!value.is_zero()) {\n        //cout << \"A(\" << idx << \", \" << value << \")\" << endl;\n        //lin_comb_A.add_term(idx,value);\n        //linear_term<libff::alt_bn128_pp>(0);\n      }\n    }\n    for (int idx=0; idx<variables; idx++) {\n      libff::bigint<libff::alt_bn128_r_limbs> value = libsnarkBigintFromBytes(B+row*variables*32 + idx*32);\n      cout << \"B entry \" << idx << \" in row \" << row << \": \" << value << endl;\n      if (!value.is_zero()) {\n        cout << \"B(\" << idx << \", \" << value << \")\" << endl;\n        //lin_comb_B.add_term(idx, value);\n      }\n    }\n    for (int idx=0; idx<variables; idx++) {\n      libff::bigint<libff::alt_bn128_r_limbs> value = libsnarkBigintFromBytes(C+row*variables*32 + idx*32);\n      // cout << \"C entry \" << idx << \" in row \" << row << \": \" << value << endl;\n      if (!value.is_zero()) {\n        // cout << \"C(\" << idx << \", \" << value << \")\" << endl;\n        //lin_comb_C.add_term(idx, value);\n      }\n    }\n    //cs.add_constraint(r1cs_constraint<libff::alt_bn128_pp>(lin_comb_A, lin_comb_B, lin_comb_C));\n  }\n  return cs;\n}\n\n// keypair generateKeypair(constraints)\nr1cs_ppzksnark_keypair<libff::alt_bn128_pp> generateKeypair(const r1cs_ppzksnark_constraint_system<libff::alt_bn128_pp> &cs){\n  // from r1cs_ppzksnark.hpp\n  return\n      
r1cs_ppzksnark_generator<libff::alt_bn128_pp>(cs);\n}\n\ntemplate<typename T>\nvoid writeToFile(std::string path, T& obj) {\n    std::stringstream ss;\n    ss << obj;\n    std::ofstream fh;\n    fh.open(path, std::ios::binary);\n    ss.rdbuf()->pubseekpos(0, std::ios_base::out);\n    fh << ss.rdbuf();\n    fh.flush();\n    fh.close();\n}\n\ntemplate<typename T>\nT loadFromFile(std::string path) {\n    std::stringstream ss;\n    std::ifstream fh(path, std::ios::binary);\n\n    assert(fh.is_open());\n\n    ss << fh.rdbuf();\n    fh.close();\n\n    ss.rdbuf()->pubseekpos(0, std::ios_base::in);\n\n    T obj;\n    ss >> obj;\n\n    return obj;\n}\n\nvoid serializeProvingKeyToFile(r1cs_ppzksnark_proving_key<libff::alt_bn128_pp> pk, const char* pk_path){\n  writeToFile(pk_path, pk);\n}\n\nr1cs_ppzksnark_proving_key<libff::alt_bn128_pp> deserializeProvingKeyFromFile(const char* pk_path){\n  return loadFromFile<r1cs_ppzksnark_proving_key<libff::alt_bn128_pp>>(pk_path);\n}\n\nvoid serializeVerificationKeyToFile(r1cs_ppzksnark_verification_key<libff::alt_bn128_pp> vk, const char* vk_path){\n  std::stringstream ss;\n\n  unsigned icLength = vk.encoded_IC_query.rest.indices.size() + 1;\n\n  ss << \"\\t\\tvk.A = \" << outputPointG2AffineAsHex(vk.alphaA_g2) << endl;\n  ss << \"\\t\\tvk.B = \" << outputPointG1AffineAsHex(vk.alphaB_g1) << endl;\n  ss << \"\\t\\tvk.C = \" << outputPointG2AffineAsHex(vk.alphaC_g2) << endl;\n  ss << \"\\t\\tvk.gamma = \" << outputPointG2AffineAsHex(vk.gamma_g2) << endl;\n  ss << \"\\t\\tvk.gammaBeta1 = \" << outputPointG1AffineAsHex(vk.gamma_beta_g1) << endl;\n  ss << \"\\t\\tvk.gammaBeta2 = \" << outputPointG2AffineAsHex(vk.gamma_beta_g2) << endl;\n  ss << \"\\t\\tvk.Z = \" << outputPointG2AffineAsHex(vk.rC_Z_g2) << endl;\n  ss << \"\\t\\tvk.IC.len() = \" << icLength << endl;\n  ss << \"\\t\\tvk.IC[0] = \" << outputPointG1AffineAsHex(vk.encoded_IC_query.first) << endl;\n  for (size_t i = 1; i < icLength; ++i)\n  {\n                  auto vkICi = 
outputPointG1AffineAsHex(vk.encoded_IC_query.rest.values[i - 1]);\n                  ss << \"\\t\\tvk.IC[\" << i << \"] = \" << vkICi << endl;\n  }\n\n  std::ofstream fh;\n  fh.open(vk_path, std::ios::binary);\n  ss.rdbuf()->pubseekpos(0, std::ios_base::out);\n  fh << ss.rdbuf();\n  fh.flush();\n  fh.close();\n}\n\n// compliant with the Solidity verification example\nvoid exportVerificationKey(r1cs_ppzksnark_keypair<libff::alt_bn128_pp> keypair){\n        unsigned icLength = keypair.vk.encoded_IC_query.rest.indices.size() + 1;\n\n        cout << \"\\tVerification key in Solidity compliant format:{\" << endl;\n        cout << \"\\t\\tvk.A = Pairing.G2Point(\" << outputPointG2AffineAsHex(keypair.vk.alphaA_g2) << \");\" << endl;\n        cout << \"\\t\\tvk.B = Pairing.G1Point(\" << outputPointG1AffineAsHex(keypair.vk.alphaB_g1) << \");\" << endl;\n        cout << \"\\t\\tvk.C = Pairing.G2Point(\" << outputPointG2AffineAsHex(keypair.vk.alphaC_g2) << \");\" << endl;\n        cout << \"\\t\\tvk.gamma = Pairing.G2Point(\" << outputPointG2AffineAsHex(keypair.vk.gamma_g2) << \");\" << endl;\n        cout << \"\\t\\tvk.gammaBeta1 = Pairing.G1Point(\" << outputPointG1AffineAsHex(keypair.vk.gamma_beta_g1) << \");\" << endl;\n        cout << \"\\t\\tvk.gammaBeta2 = Pairing.G2Point(\" << outputPointG2AffineAsHex(keypair.vk.gamma_beta_g2) << \");\" << endl;\n        cout << \"\\t\\tvk.Z = Pairing.G2Point(\" << outputPointG2AffineAsHex(keypair.vk.rC_Z_g2) << \");\" << endl;\n        cout << \"\\t\\tvk.IC = new Pairing.G1Point[](\" << icLength << \");\" << endl;\n        cout << \"\\t\\tvk.IC[0] = Pairing.G1Point(\" << outputPointG1AffineAsHex(keypair.vk.encoded_IC_query.first) << \");\" << endl;\n        for (size_t i = 1; i < icLength; ++i)\n        {\n                auto vkICi = outputPointG1AffineAsHex(keypair.vk.encoded_IC_query.rest.values[i - 1]);\n                cout << \"\\t\\tvk.IC[\" << i << \"] = Pairing.G1Point(\" << vkICi << \");\" << endl;\n        }\n        cout << 
\"\\t\\t}\" << endl;\n\n}\n\n// compliant with the Solidity verification example\n/*\nvoid exportInput(r1cs_primary_input<libff::alt_bn128_pp> input){\n        cout << \"\\tInput in Solidity compliant format:{\" << endl;\n        for (size_t i = 0; i < input.size(); ++i)\n        {\n                cout << \"\\t\\tinput[\" << i << \"] = \" << HexStringFromLibsnarkBigint(input[i].as_bigint()) << \";\" << endl;\n        }\n        cout << \"\\t\\t}\" << endl;\n} */\n\n\nvoid printProof(r1cs_ppzksnark_proof<libff::alt_bn128_pp> proof){\n                cout << \"Proof:\"<< endl;\n                cout << \"proof.A = Pairing.G1Point(\" << outputPointG1AffineAsHex(proof.g_A.g)<< \");\" << endl;\n                cout << \"proof.A_p = Pairing.G1Point(\" << outputPointG1AffineAsHex(proof.g_A.h)<< \");\" << endl;\n                cout << \"proof.B = Pairing.G2Point(\" << outputPointG2AffineAsHex(proof.g_B.g)<< \");\" << endl;\n                cout << \"proof.B_p = Pairing.G1Point(\" << outputPointG1AffineAsHex(proof.g_B.h)<<\");\" << endl;\n                cout << \"proof.C = Pairing.G1Point(\" << outputPointG1AffineAsHex(proof.g_C.g)<< \");\" << endl;\n                cout << \"proof.C_p = Pairing.G1Point(\" << outputPointG1AffineAsHex(proof.g_C.h)<<\");\" << endl;\n                cout << \"proof.H = Pairing.G1Point(\" << outputPointG1AffineAsHex(proof.g_H)<<\");\"<< endl;\n                cout << \"proof.K = Pairing.G1Point(\" << outputPointG1AffineAsHex(proof.g_K)<<\");\"<< endl;\n}\n\n/*bool _setup(const uint8_t* A, const uint8_t* B, const uint8_t* C, int constraints, int variables, int inputs, const char* pk_path, const char* vk_path)\n{\n  //libsnark::inhibit_profiling_info = true;\n  //libsnark::inhibit_profiling_counters = true;\n\n  //initialize curve parameters\n  libff::alt_bn128_pp::init_public_params();\n\n  r1cs_constraint_system<libff::alt_bn128_pp> cs;\n  cs = createConstraintSystem(A, B ,C , constraints, variables, inputs);\n\n  assert(cs.num_variables() >= 
inputs);\n  assert(cs.num_inputs() == inputs);\n  assert(cs.num_constraints() == constraints);\n\n  // create keypair\n  r1cs_ppzksnark_keypair<alt_bn128_pp> keypair = r1cs_ppzksnark_generator<alt_bn128_pp>(cs);\n\n  // Export vk and pk to files\n  serializeProvingKeyToFile(keypair.pk, pk_path);\n  serializeVerificationKeyToFile(keypair.vk, vk_path);\n\n  // Print VerificationKey in Solidity compatible format\n  exportVerificationKey(keypair);\n\n  return true;\n}*/\n/*\nbool _generate_proof(const char* pk_path, const uint8_t* public_inputs, int public_inputs_length, const uint8_t* private_inputs, int private_inputs_length)\n{\n//  libsnark::inhibit_profiling_info = true;\n//  libsnark::inhibit_profiling_counters = true;\n\n  //initialize curve parameters\n  libff::alt_bn128_pp::init_public_params();\n  r1cs_ppzksnark_proving_key<libff::alt_bn128_pp> pk = deserializeProvingKeyFromFile(pk_path);\n\n  // assign variables based on witness values, excludes ~one\n  r1cs_variable_assignment<libff::alt_bn128_pp> full_variable_assignment;\n  for (int i = 1; i < public_inputs_length; i++) {\n    full_variable_assignment.push_back(libff::alt_bn128_pp(libsnarkBigintFromBytes(public_inputs + i*32)));\n  }\n  for (int i = 0; i < private_inputs_length; i++) {\n    full_variable_assignment.push_back(libff::alt_bn128_pp(libsnarkBigintFromBytes(private_inputs + i*32)));\n  }\n\n  // split up variables into primary and auxiliary inputs. 
Does *NOT* include the constant 1\n  // Public variables belong to primary input, private variables are auxiliary input.\n  r1cs_primary_input<libff::alt_bn128_pp> primary_input(full_variable_assignment.begin(), full_variable_assignment.begin() + public_inputs_length-1);\n  r1cs_auxiliary_input<libff::alt_bn128_pp> auxiliary_input(full_variable_assignment.begin() + public_inputs_length-1, full_variable_assignment.end());\n\n  // for debugging\n  // cout << \"full variable assignment:\"<< endl << full_variable_assignment;\n  // cout << \"primary input:\"<< endl << primary_input;\n  // cout << \"auxiliary input:\"<< endl << auxiliary_input;\n\n  // Proof Generation\n  r1cs_ppzksnark_proof<alt_bn128_pp> proof = r1cs_ppzksnark_prover<alt_bn128_pp>(pk, primary_input, auxiliary_input);\n\n  // print proof\n  printProof(proof);\n  // TODO? print inputs\n\n  return true;\n} */\n"
  },
  {
    "path": "src/ZoKrates/wraplibsnark.hpp",
"content": "/**\n * @file wraplibsnark.hpp\n * @author Jacob Eberhardt <jacob.eberhardt@tu-berlin.de>\n * @author Dennis Kuhnert <dennis.kuhnert@campus.tu-berlin.de>\n * @date 2017\n */\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#include <stdbool.h>\n#include <stdint.h>\n\nbool _setup(const uint8_t* A,\n            const uint8_t* B,\n            const uint8_t* C,\n            int constraints,\n            int variables,\n            int inputs,\n            const char* pk_path,\n            const char* vk_path\n          );\n\nbool _generate_proof(const char* pk_path,\n            const uint8_t* public_inputs,\n            int public_inputs_length,\n            const uint8_t* private_inputs,\n            int private_inputs_length\n          );\n\n#ifdef __cplusplus\n} // extern \"C\"\n#endif\n"
  },
  {
    "path": "src/export.cpp",
"content": "/*    \n    copyright 2018 to the roll_up Authors\n\n    This file is part of roll_up.\n\n    roll_up is free software: you can redistribute it and/or modify\n    it under the terms of the GNU General Public License as published by\n    the Free Software Foundation, either version 3 of the License, or\n    (at your option) any later version.\n\n    roll_up is distributed in the hope that it will be useful,\n    but WITHOUT ANY WARRANTY; without even the implied warranty of\n    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n    GNU General Public License for more details.\n\n    You should have received a copy of the GNU General Public License\n    along with roll_up.  If not, see <https://www.gnu.org/licenses/>.\n*/\n\n#include <fstream>\n#include <iostream>\n#include <cassert>\n#include <iomanip>\n\n#include <libsnark/zk_proof_systems/ppzksnark/r1cs_ppzksnark/r1cs_ppzksnark.hpp>\n\n// ZoKrates\n#include <ZoKrates/wraplibsnark.cpp>\n\n// key generation: curve public parameters\n#include \"libff/algebra/curves/alt_bn128/alt_bn128_pp.hpp\"\n#include <libff/algebra/curves/bn128/bn128_pp.hpp>\n#include <libff/algebra/curves/edwards/edwards_pp.hpp>\n\n#include <libsnark/common/data_structures/merkle_tree.hpp>\n#include <libsnark/gadgetlib1/gadget.hpp>\n#include <libsnark/gadgetlib1/gadgets/hashes/crh_gadget.hpp>\n#include <libsnark/gadgetlib1/gadgets/hashes/digest_selector_gadget.hpp>\n#include <libsnark/gadgetlib1/gadgets/hashes/hash_io.hpp>\n#include <libsnark/gadgetlib1/gadgets/merkle_tree/merkle_authentication_path_variable.hpp>\n#include <libsnark/gadgetlib1/gadgets/merkle_tree/merkle_tree_check_read_gadget.hpp>\n\n// tmp\n//#include <libsnark/gadgetlib1/gadgets/hashes/sha256/sha256_gadget.hpp>\n#include 
<libsnark/gadgetlib1/gadgets/merkle_tree/merkle_tree_check_update_gadget.hpp>\n\nusing namespace libsnark;\nusing namespace libff;\n\n\ntemplate<typename FieldT>\nvoid constraint_to_json(linear_combination<FieldT> constraints, std::stringstream &ss)\n{\n    ss << \"{\";\n    uint count = 0;\n    for (const linear_term<FieldT>& lt : constraints.terms)\n    {\n        if (count != 0) {\n            ss << \",\";\n        }\n        // NOTE: any coefficient other than 0 or 1 is emitted as -1 here\n        if (lt.coeff != 0 && lt.coeff != 1) {\n            ss << '\"' << lt.index << '\"' << \":\" << \"-1\";\n        }\n        else {\n            ss << '\"' << lt.index << '\"' << \":\" << lt.coeff;\n        }\n        count++;\n    }\n    ss << \"}\";\n}\n\ntemplate <typename FieldT>\nvoid array_to_json(protoboard<FieldT> pb, uint input_variables,  std::string path)\n{\n\n    std::stringstream ss;\n    std::ofstream fh;\n    fh.open(path, std::ios::binary);\n\n    r1cs_variable_assignment<FieldT> values = pb.full_variable_assignment();\n    ss << \"\\n{\\\"TestVariables\\\":[\";\n\n    for (size_t i = 0; i < values.size(); ++i)\n    {\n        ss << values[i].as_bigint();\n        if (i <  values.size() - 1) { ss << \",\";}\n    }\n\n    ss << \"]}\\n\";\n    ss.rdbuf()->pubseekpos(0, std::ios_base::out);\n\n    fh << ss.rdbuf();\n    fh.flush();\n    fh.close();\n}\n\ntemplate<typename FieldT>\nvoid r1cs_to_json(protoboard<FieldT> pb, uint input_variables, std::string path)\n    {\n    // output inputs, right now need to compile with debug flag so that the `variable_annotations`\n    // exists. 
Having trouble setting that up so will leave for now.\n    r1cs_constraint_system<FieldT> constraints = pb.get_constraint_system();\n    std::stringstream ss;\n    std::ofstream fh;\n    fh.open(path, std::ios::binary);\n\n    ss << \"\\n{\\\"variables\\\":[\";\n    \n    for (size_t i = 0; i < input_variables + 1; ++i) \n    {   \n        ss << '\"' << constraints.variable_annotations[i].c_str() << '\"';\n        if (i < input_variables ) {\n            ss << \", \";\n        }\n    }\n    ss << \"],\\n\";\n    ss << \"\\\"constraints\\\":[\";\n     \n    for (size_t c = 0; c < constraints.num_constraints(); ++c)\n    {\n        ss << \"[\";// << \"\\\"A\\\"=\";\n        constraint_to_json(constraints.constraints[c].a, ss);\n        ss << \",\";// << \"\\\"B\\\"=\";\n        constraint_to_json(constraints.constraints[c].b, ss);\n        ss << \",\";// << \"\\\"A\\\"=\";;\n        constraint_to_json(constraints.constraints[c].c, ss);\n        if (c == constraints.num_constraints()-1 ) {\n            ss << \"]\\n\";\n        } else {\n            ss << \"],\\n\";\n        }\n    }\n    ss << \"]}\";\n    ss.rdbuf()->pubseekpos(0, std::ios_base::out);\n    fh << ss.rdbuf();\n    fh.flush();\n    fh.close();\n}\n\ntemplate<typename FieldT>\nstring proof_to_json(r1cs_ppzksnark_proof<libff::alt_bn128_pp> proof, r1cs_primary_input<FieldT> input, bool isInt) {\n    std::cout << \"proof.A = Pairing.G1Point(\" << outputPointG1AffineAsHex(proof.g_A.g)<< \");\" << endl;\n    std::cout << \"proof.A_p = Pairing.G1Point(\" << outputPointG1AffineAsHex(proof.g_A.h)<< \");\" << endl;\n    std::cout << \"proof.B = Pairing.G2Point(\" << outputPointG2AffineAsHex(proof.g_B.g)<< \");\" << endl;\n    std::cout << \"proof.B_p = Pairing.G1Point(\" << outputPointG1AffineAsHex(proof.g_B.h)<<\");\" << endl;\n    std::cout << \"proof.C = Pairing.G1Point(\" << outputPointG1AffineAsHex(proof.g_C.g)<< \");\" << endl;\n    std::cout << \"proof.C_p = Pairing.G1Point(\" << 
outputPointG1AffineAsHex(proof.g_C.h)<<\");\" << endl;\n    std::cout << \"proof.H = Pairing.G1Point(\" << outputPointG1AffineAsHex(proof.g_H)<<\");\"<< endl;\n    std::cout << \"proof.K = Pairing.G1Point(\" << outputPointG1AffineAsHex(proof.g_K)<<\");\"<< endl;\n\n    std::string path = \"../zksnark_element/proof.json\";\n    std::stringstream ss;\n    std::ofstream fh;\n    fh.open(path, std::ios::binary);\n    if(isInt) {\n        ss << \"{\\n\";\n        ss << \" \\\"a\\\" :[\" << outputPointG1AffineAsInt(proof.g_A.g) << \"],\\n\";\n        ss << \" \\\"a_p\\\" :[\" << outputPointG1AffineAsInt(proof.g_A.h)<< \"],\\n\";\n        ss << \" \\\"b\\\" :[\" << outputPointG2AffineAsInt(proof.g_B.g)<< \"],\\n\";\n        ss << \" \\\"b_p\\\" :[\" << outputPointG1AffineAsInt(proof.g_B.h)<< \"],\\n\";\n        ss << \" \\\"c\\\" :[\" << outputPointG1AffineAsInt(proof.g_C.g)<< \"],\\n\";\n        ss << \" \\\"c_p\\\" :[\" << outputPointG1AffineAsInt(proof.g_C.h)<< \"],\\n\";\n        ss << \" \\\"h\\\" :[\" << outputPointG1AffineAsInt(proof.g_H)<< \"],\\n\";\n        ss << \" \\\"k\\\" :[\" << outputPointG1AffineAsInt(proof.g_K)<< \"],\\n\";\n        ss << \" \\\"input\\\" :\" << \"[\"; // 1 should always be the first variable passed\n\n        for (size_t i = 0; i < input.size(); ++i)\n        {\n            ss << input[i].as_bigint();\n            if ( i < input.size() - 1 ) {\n                ss << \", \";\n            }\n        }\n        ss << \"]\\n\";\n        ss << \"}\";\n    }\n    else {\n        ss << \"{\\n\";\n        ss << \" \\\"a\\\" :[\" << outputPointG1AffineAsHex(proof.g_A.g) << \"],\\n\";\n        ss << \" \\\"a_p\\\" :[\" << outputPointG1AffineAsHex(proof.g_A.h)<< \"],\\n\";\n        ss << \" \\\"b\\\" :[\" << outputPointG2AffineAsHex(proof.g_B.g)<< \"],\\n\";\n        ss << \" \\\"b_p\\\" :[\" << outputPointG1AffineAsHex(proof.g_B.h)<< \"],\\n\";\n        ss << \" \\\"c\\\" :[\" << outputPointG1AffineAsHex(proof.g_C.g)<< \"],\\n\";\n  
    ss << \" \\\"c_p\\\" :[\" << outputPointG1AffineAsHex(proof.g_C.h)<< \"],\\n\";\n        ss << \" \\\"h\\\" :[\" << outputPointG1AffineAsHex(proof.g_H)<< \"],\\n\";\n        ss << \" \\\"k\\\" :[\" << outputPointG1AffineAsHex(proof.g_K)<< \"],\\n\";\n        ss << \" \\\"input\\\" :\" << \"[\"; // 1 should always be the first variable passed\n\n        for (size_t i = 0; i < input.size(); ++i)\n        {\n            ss << \"\\\"0x\" << HexStringFromLibsnarkBigint(input[i].as_bigint()) << \"\\\"\";\n            if ( i < input.size() - 1 ) {\n                ss << \", \";\n            }\n        }\n        ss << \"]\\n\";\n        ss << \"}\";\n    }\n\n    ss.rdbuf()->pubseekpos(0, std::ios_base::out);\n    fh << ss.rdbuf();\n    fh.flush();\n    fh.close();\n    return(ss.str());\n}\n\nvoid vk2json(r1cs_ppzksnark_keypair<libff::alt_bn128_pp> keypair, std::string path ) {\n\n    std::stringstream ss;\n    std::ofstream fh;\n    fh.open(path, std::ios::binary);\n    unsigned icLength = keypair.vk.encoded_IC_query.rest.indices.size() + 1;\n\n    ss << \"{\\n\";\n    ss << \" \\\"a\\\" :[\" << outputPointG2AffineAsHex(keypair.vk.alphaA_g2) << \"],\\n\";\n    ss << \" \\\"b\\\" :[\" << outputPointG1AffineAsHex(keypair.vk.alphaB_g1) << \"],\\n\";\n    ss << \" \\\"c\\\" :[\" << outputPointG2AffineAsHex(keypair.vk.alphaC_g2) << \"],\\n\";\n    ss << \" \\\"g\\\" :[\" << outputPointG2AffineAsHex(keypair.vk.gamma_g2)<< \"],\\n\";\n    ss << \" \\\"gb1\\\" :[\" << outputPointG1AffineAsHex(keypair.vk.gamma_beta_g1)<< \"],\\n\";\n    ss << \" \\\"gb2\\\" :[\" << outputPointG2AffineAsHex(keypair.vk.gamma_beta_g2)<< \"],\\n\";\n    ss << \" \\\"z\\\" :[\" << outputPointG2AffineAsHex(keypair.vk.rC_Z_g2)<< \"],\\n\";\n\n    ss << \"\\\"IC\\\" :[[\" << outputPointG1AffineAsHex(keypair.vk.encoded_IC_query.first) << \"]\";\n\n    for (size_t i = 1; i < icLength; ++i)\n    {\n        auto vkICi = 
outputPointG1AffineAsHex(keypair.vk.encoded_IC_query.rest.values[i - 1]);\n        ss << \",[\" << vkICi << \"]\";\n    }\n    ss << \"]\";\n    ss << \"}\";\n    ss.rdbuf()->pubseekpos(0, std::ios_base::out);\n    fh << ss.rdbuf();\n    fh.flush();\n    fh.close();\n}\n\ntemplate<typename FieldT>\nchar* dump_key(protoboard<FieldT> pb, std::string path)\n{\n    r1cs_constraint_system<FieldT> constraints = pb.get_constraint_system();\n    std::stringstream ss;\n    std::ofstream fh;\n    fh.open(path, std::ios::binary);\n\n    r1cs_ppzksnark_keypair<libff::alt_bn128_pp> keypair = generateKeypair(pb.get_constraint_system());\n\n    //save keys\n    vk2json(keypair, \"vk.json\");\n    writeToFile(\"../zksnark_element/pk.raw\", keypair.pk);\n    writeToFile(\"../zksnark_element/vk.raw\", keypair.vk);\n\n    r1cs_primary_input <FieldT> primary_input = pb.primary_input();\n    r1cs_auxiliary_input <FieldT> auxiliary_input = pb.auxiliary_input();\n    ss << \"primary inputs \" << primary_input;\n    ss << \"aux input \" << auxiliary_input;\n\n    r1cs_ppzksnark_proof<libff::alt_bn128_pp> proof = r1cs_ppzksnark_prover<libff::alt_bn128_pp>(keypair.pk, primary_input, auxiliary_input);\n\n    // proof_to_json requires the isInt flag; use hex output here\n    auto json = proof_to_json(proof, primary_input, false);\n\n    ss.rdbuf()->pubseekpos(0, std::ios_base::out);\n    fh << ss.rdbuf();\n    fh.flush();\n    fh.close();\n\n    // +1 so the terminating NUL copied by memcpy fits in the buffer\n    auto result = new char[json.size() + 1];\n    memcpy(result, json.c_str(), json.size() + 1);\n\n    return result;\n}\n"
  },
  {
    "path": "src/roll_up.hpp",
    "content": "/*    \n    copyright 2018 to the roll_up Authors\n\n    This file is part of roll_up.\n\n    roll_up is free software: you can redistribute it and/or modify\n    it under the terms of the GNU General Public License as published by\n    the Free Software Foundation, either version 3 of the License, or\n    (at your option) any later version.\n\n    roll_up is distributed in the hope that it will be useful,\n    but WITHOUT ANY WARRANTY; without even the implied warranty of\n    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n    GNU General Public License for more details.\n\n    You should have received a copy of the GNU General Public License\n    along with roll_up.  If not, see <https://www.gnu.org/licenses/>.\n*/\n\n\n#include <cassert>\n#include <memory>\n\n#include <libsnark/gadgetlib1/gadget.hpp>\n#include <tx.hpp>\n\n\ntypedef sha256_ethereum HashT;\n\n\n\nnamespace libsnark {\n\ntemplate<typename FieldT>\nclass roll_up: public gadget<FieldT> {\n//greater than gadget\nprivate:\n    /* no internal variables */\npublic:\n    std::shared_ptr<multipacking_gadget<FieldT>> unpacker_old_root;\n    std::shared_ptr<multipacking_gadget<FieldT>> unpacker_new_root;\n    std::shared_ptr<multipacking_gadget<FieldT>> unpacker_leaf_addresses;\n    std::shared_ptr<multipacking_gadget<FieldT>> unpacker_leaf_hashes;\n\n\n\n    pb_variable<FieldT> a;\n    pb_variable<FieldT> d;\n\n\n    pb_variable_array<FieldT> unpacked_addresses;\n    pb_variable_array<FieldT> unpacked_leaves;\n\n    std::string annotation_prefix = \"roll up\";\n\n\n\n    int noTx;\n    std::vector<std::shared_ptr<tx<FieldT, HashT>>> transactions;\n\n\n    roll_up(protoboard<FieldT> &pb,\n                   std::vector<pb_variable_array<FieldT>> &pub_key_x_bin, \n                   std::vector<pb_variable_array<FieldT>> &pub_key_y_bin,\n                   int tree_depth, std::vector<pb_variable_array<FieldT>> address_bits_va, \n                   
std::vector<std::shared_ptr<digest_variable<FieldT>>> root_digest_old, \n                   std::vector<std::shared_ptr<digest_variable<FieldT>>> root_digest_new,\n                   std::vector<std::vector<merkle_authentication_node>> path_old, std::vector<std::vector<merkle_authentication_node>> path_new,\n                   std::vector<pb_variable_array<FieldT>> rhs_leaf,\n                   std::vector<pb_variable_array<FieldT>> S, std::vector<std::shared_ptr<digest_variable<FieldT>>> new_leaf, \n                   std::vector<pb_variable_array<FieldT>> r_x_bin, std::vector<pb_variable_array<FieldT>> r_y_bin,\n                   pb_variable_array<FieldT> old_root , pb_variable_array<FieldT> new_root, pb_variable_array<FieldT> leaves_data_availability,\n                   pb_variable_array<FieldT> leaves_addresses_data_availability, \n                   int noTx,\n                   const std::string &annotation_prefix);\n\n    void generate_r1cs_constraints();\n    void generate_r1cs_witness();\n};\n\n} // libsnark\n#include <roll_up.tcc>\n\n"
  },
  {
    "path": "src/roll_up.tcc",
    "content": "/*    \n    copyright 2018 to the roll_up Authors\n\n    This file is part of roll_up.\n\n    roll_up is free software: you can redistribute it and/or modify\n    it under the terms of the GNU General Public License as published by\n    the Free Software Foundation, either version 3 of the License, or\n    (at your option) any later version.\n\n    roll_up is distributed in the hope that it will be useful,\n    but WITHOUT ANY WARRANTY; without even the implied warranty of\n    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n    GNU General Public License for more details.\n\n    You should have received a copy of the GNU General Public License\n    along with roll_up.  If not, see <https://www.gnu.org/licenses/>.\n*/\n\n\nnamespace libsnark {\n    template<typename FieldT>\n    roll_up<FieldT>::roll_up(protoboard<FieldT> &pb,\n                   std::vector<pb_variable_array<FieldT>> &pub_key_x_bin, \n                   std::vector<pb_variable_array<FieldT>> &pub_key_y_bin,\n                   int tree_depth, std::vector<pb_variable_array<FieldT>> address_bits_va, std::vector<std::shared_ptr<digest_variable<FieldT>>> root_digest_old,  \n                   std::vector<std::shared_ptr<digest_variable<FieldT>>> root_digest_new, std::vector<std::vector<merkle_authentication_node>> path_old, \n                   std::vector<std::vector<merkle_authentication_node>> path_new, std::vector<pb_variable_array<FieldT>> rhs_leaf,\n                   std::vector<pb_variable_array<FieldT>> S, std::vector<std::shared_ptr<digest_variable<FieldT>>> new_leaf, \n                   std::vector<pb_variable_array<FieldT>> r_x_bin, std::vector<pb_variable_array<FieldT>> r_y_bin, \n                   pb_variable_array<FieldT> old_root , pb_variable_array<FieldT> new_root, \n                   pb_variable_array<FieldT> leaves_data_availability, pb_variable_array<FieldT> leaves_addresses_data_availability,\n                   int noTx,\n                   
const std::string &annotation_prefix): gadget<FieldT>(pb, annotation_prefix) , noTx(noTx) {\n\n    for (int i = 0; i < noTx-1; i++) {\n        unpacked_addresses.insert(unpacked_addresses.end(), address_bits_va[i].begin(), address_bits_va[i].end());\n        unpacked_leaves.insert(unpacked_leaves.end(), new_leaf[i]->bits.begin(), new_leaf[i]->bits.end());\n    }\n\n    unpacker_old_root.reset(new multipacking_gadget<FieldT>(\n        pb,\n        root_digest_old[0]->bits,\n        old_root,\n        FieldT::capacity(),\n        \"old root\"\n    ));\n\n    unpacker_new_root.reset(new multipacking_gadget<FieldT>(\n        pb,\n        root_digest_new[noTx-2]->bits,\n        new_root,\n        FieldT::capacity(),\n        \"new_root\"\n    ));\n\n    unpacker_leaf_addresses.reset(new multipacking_gadget<FieldT>(\n        pb,\n        unpacked_addresses,\n        leaves_addresses_data_availability,\n        FieldT::capacity(),\n        \"leaf_addresses\"\n    ));\n\n    unpacker_leaf_hashes.reset(new multipacking_gadget<FieldT>(\n        pb,\n        unpacked_leaves,\n        leaves_data_availability,\n        FieldT::capacity(),\n        \"leaf_hashes\"\n    ));\n\n    // 5 for the old root, new root\n    // noTx*2 for address, leaf\n    // noTx*2*253/256 for the left over bits\n    // that do not fit in a 253 bit field element.\n    pb.set_input_sizes(6);\n\n    transactions.resize(noTx);\n    transactions[0].reset(new tx<FieldT, HashT>(pb,\n           pub_key_x_bin[0], pub_key_y_bin[0], tree_depth, address_bits_va[0], root_digest_old[0],\n           root_digest_new[0], path_old[0], path_new[0], rhs_leaf[0], S[0], new_leaf[0], r_x_bin[0], r_y_bin[0],\n           \"tx 0\"\n       ));\n\n    for (int i = 1; i < noTx; i++) {\n        transactions[i].reset(new tx<FieldT, HashT>(pb,\n               pub_key_x_bin[i], pub_key_y_bin[i], tree_depth, address_bits_va[i], root_digest_new[i-1], \n           
root_digest_new[i], path_old[i], path_new[i], rhs_leaf[i], S[i], new_leaf[i], r_x_bin[i], r_y_bin[i],\n               FMT(annotation_prefix, \" tx_%d\", i)\n           ));\n        }\n\n    }\n\n    template<typename FieldT>\n    void roll_up<FieldT>::generate_r1cs_constraints() {\n        for (int i = 0; i < noTx; i++) {\n            transactions[i]->generate_r1cs_constraints();\n        }\n        unpacker_old_root->generate_r1cs_constraints(true);\n        unpacker_new_root->generate_r1cs_constraints(true);\n        unpacker_leaf_addresses->generate_r1cs_constraints(true);\n        unpacker_leaf_hashes->generate_r1cs_constraints(true);\n    }\n\n    template<typename FieldT>\n    void roll_up<FieldT>::generate_r1cs_witness() {\n        for (int i = 0; i < noTx; i++) {\n            transactions[i]->generate_r1cs_witness();\n        }\n        unpacker_old_root->generate_r1cs_witness_from_bits();\n        unpacker_new_root->generate_r1cs_witness_from_bits();\n        unpacker_leaf_addresses->generate_r1cs_witness_from_bits();\n        unpacker_leaf_hashes->generate_r1cs_witness_from_bits();\n    }\n}\n"
  },
  {
    "path": "src/roll_up_wrapper.cpp",
    "content": "/*    \n    copyright 2018 to the roll_up Authors\n\n    This file is part of roll_up.\n\n    roll_up is free software: you can redistribute it and/or modify\n    it under the terms of the GNU General Public License as published by\n    the Free Software Foundation, either version 3 of the License, or\n    (at your option) any later version.\n\n    roll_up is distributed in the hope that it will be useful,\n    but WITHOUT ANY WARRANTY; without even the implied warranty of\n    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n    GNU General Public License for more details.\n\n    You should have received a copy of the GNU General Public License\n    along with roll_up.  If not, see <https://www.gnu.org/licenses/>.\n*/\n\n\n//hash\n#include \"roll_up_wrapper.hpp\"\n#include <export.cpp>\n#include <roll_up.hpp>\n#include <iostream>\n\nusing namespace libsnark;\nusing namespace libff;\n\ntypedef sha256_ethereum HashT;\n\nvoid genKeys(int noTx, char* pkOutput, char* vkOutput) {\n    libff::alt_bn128_pp::init_public_params();\n    protoboard<FieldT> pb;\n\n    pb_variable<FieldT> ZERO;\n    ZERO.allocate(pb, \"ZERO\");\n    pb.val(ZERO) = 0;\n    // make sure we constrain to zero.\n\n    std::shared_ptr<roll_up<FieldT>> transactions;\n\n    std::vector<std::vector<merkle_authentication_node>> path(noTx);\n\n    std::vector<std::shared_ptr<digest_variable<FieldT>>> root_digest_old(noTx);\n    std::vector<std::shared_ptr<digest_variable<FieldT>>> root_digest_new(noTx);\n    std::vector<std::shared_ptr<digest_variable<FieldT>>> new_leaf(noTx);\n\n    std::vector<pb_variable_array<FieldT>> pub_key_x_bin(noTx);\n    std::vector<pb_variable_array<FieldT>> pub_key_y_bin(noTx);\n    std::vector<pb_variable_array<FieldT>> address_bits_va(noTx);\n    std::vector<pb_variable_array<FieldT>> rhs_leaf(noTx);\n\n    //signatures setup\n    std::vector<pb_variable_array<FieldT>> S(noTx);\n    
std::vector<pb_variable_array<FieldT>> pk_x_bin(noTx);\n    std::vector<pb_variable_array<FieldT>> pk_y_bin(noTx);\n    std::vector<pb_variable_array<FieldT>> r_x_bin(noTx);\n    std::vector<pb_variable_array<FieldT>> r_y_bin(noTx);\n\n    for (int k = 0; k < noTx; k++) {\n        root_digest_old[k].reset(new digest_variable<FieldT>(pb, 256, \"root_digest_old\"));\n        root_digest_new[k].reset(new digest_variable<FieldT>(pb, 256, \"root_digest_new\"));\n        new_leaf[k].reset(new digest_variable<FieldT>(pb, 256, \"new leaf\"));\n\n        pub_key_x_bin[k].allocate(pb, 256, \"pub_key_x_bin\");\n        pub_key_y_bin[k].allocate(pb, 256, \"pub_key_y_bin\");\n        address_bits_va[k].allocate(pb, 256, \"address_bits\");\n        rhs_leaf[k].allocate(pb, 256, \"rhs_leaf\");\n\n        S[k].allocate(pb, 256, FMT(\"annotation_prefix\", \" scalar to multiply by\"));\n        pk_x_bin[k].allocate(pb, 256, FMT(\"annotation_prefix\", \" scalar to multiply by\"));\n        pk_y_bin[k].allocate(pb, 256, FMT(\"annotation_prefix\", \" scalar to multiply by\"));\n        r_x_bin[k].allocate(pb, 256, FMT(\"annotation_prefix\", \" scalar to multiply by\"));\n        r_y_bin[k].allocate(pb, 256, FMT(\"annotation_prefix\", \" scalar to multiply by\"));\n    }\n\n/*    transactions.reset( new roll_up <FieldT> (pb, pub_key_x_bin, pub_key_y_bin, tree_depth,\n                                              address_bits_va, root_digest_old, root_digest_new,\n                                              path, path, rhs_leaf, S, new_leaf, r_x_bin, r_y_bin, noTx ,\"Confirm tx\"));\n\n    transactions->generate_r1cs_constraints();\n\n\n    r1cs_ppzksnark_keypair<libff::alt_bn128_pp> keypair = generateKeypair(pb.get_constraint_system());\n\n    //save keys\n    vk2json(keypair, \"../keys/vk.json\");\n\n    writeToFile(\"../keys/pk.raw\", keypair.pk);\n    writeToFile(\"../keys/vk.raw\", keypair.vk); */\n}\n\nchar* prove(bool _path[][tree_depth][256], bool _pub_key_x[][256], bool\n
_pub_key_y[][256], bool _root[][256],\n            bool _address_bits[][tree_depth], bool _rhs_leaf[][256],\n            bool _new_leaf[][256], bool _r_x[][256], bool _r_y[][256], bool _S[][256], int _tree_depth, int noTx) {\n\n    libff::alt_bn128_pp::init_public_params();\n    // a reusable all-zero 256 bit vector\n    libff::bit_vector init(256, false);\n    std::vector<libff::bit_vector> pub_key_x(noTx);\n    std::vector<libff::bit_vector> pub_key_y(noTx);\n    std::vector<libff::bit_vector> root(noTx);\n\n    std::vector<libff::bit_vector> rhs_leaf_bits(noTx);\n    std::vector<libff::bit_vector> new_leaf_bits(noTx);\n    std::vector<libff::bit_vector> r_x_bits(noTx);\n    std::vector<libff::bit_vector> r_y_bits(noTx);\n    std::vector<libff::bit_vector> S_bits(noTx);\n\n    std::vector<libff::bit_vector> address_bits(noTx);\n\n    std::vector<std::vector<merkle_authentication_node>> path(noTx);\n\n    for (int k = 0; k < noTx; k++) {\n        pub_key_x[k].resize(256);\n        pub_key_y[k].resize(256);\n        root[k].resize(256);\n        rhs_leaf_bits[k].resize(256);\n        new_leaf_bits[k].resize(256);\n\n        r_x_bits[k].resize(256);\n        r_y_bits[k].resize(256);\n        S_bits[k].resize(256);\n\n        path[k].resize(tree_depth);\n        for (int i = tree_depth - 1; i >= 0; i--) {\n            path[k][i] = init;\n            // each authentication path node holds 256 bits\n            for (int j = 0; j < 256; j++) {\n                path[k][i][j] = _path[k][i][j];\n            }\n        }\n\n        for (int j = 0; j < 256; j++) {\n            pub_key_x[k][j] = _pub_key_x[k][j];\n            pub_key_y[k][j] = _pub_key_y[k][j];\n            root[k][j] = _root[k][j];\n            rhs_leaf_bits[k][j] = _rhs_leaf[k][j];\n            
new_leaf_bits[k][j] = _new_leaf[k][j];\n            r_x_bits[k][j] = _r_x[k][j];\n            r_y_bits[k][j] = _r_y[k][j];\n            S_bits[k][j] = _S[k][j];\n        }\n\n        // record the address bits from the root down to the leaf\n        for (long level = tree_depth - 1; level >= 0; level--) {\n            address_bits[k].push_back(_address_bits[k][level]);\n        }\n    }\n\n    protoboard<FieldT> pb;\n\n    pb_variable_array<FieldT> old_root;\n    pb_variable_array<FieldT> new_root;\n\n    pb_variable_array<FieldT> leaves_data_availability;\n    pb_variable_array<FieldT> leaves_addresses_data_availability;\n\n    old_root.allocate(pb, 2, \"old_root\");\n    new_root.allocate(pb, 2, \"new_root\");\n\n    leaves_data_availability.allocate(pb, noTx*256, \"packed\");\n    leaves_addresses_data_availability.allocate(pb, noTx*256, \"packed\");\n\n    pb_variable<FieldT> ZERO;\n    ZERO.allocate(pb, \"ZERO\");\n    pb.val(ZERO) = 0;\n    // make sure we constrain to zero.\n\n    std::shared_ptr<roll_up<FieldT>> transactions;\n\n    std::vector<std::shared_ptr<digest_variable<FieldT>>> root_digest_old(noTx);\n    std::vector<std::shared_ptr<digest_variable<FieldT>>> root_digest_new(noTx);\n    std::vector<std::shared_ptr<digest_variable<FieldT>>> new_leaf(noTx);\n\n    std::vector<pb_variable_array<FieldT>> pub_key_x_bin(noTx);\n    std::vector<pb_variable_array<FieldT>> pub_key_y_bin(noTx);\n    std::vector<pb_variable_array<FieldT>> address_bits_va(noTx);\n    std::vector<pb_variable_array<FieldT>> rhs_leaf(noTx);\n\n    //signatures setup\n    std::vector<pb_variable_array<FieldT>> S(noTx);\n    std::vector<pb_variable_array<FieldT>> pk_x_bin(noTx);\n    std::vector<pb_variable_array<FieldT>> pk_y_bin(noTx);\n    std::vector<pb_variable_array<FieldT>> r_x_bin(noTx);\n    std::vector<pb_variable_array<FieldT>> r_y_bin(noTx);\n\n    
for (int k = 0; k < noTx; k++) {\n        root_digest_old[k].reset(new digest_variable<FieldT>(pb, 256, \"root_digest_old\"));\n        root_digest_new[k].reset(new digest_variable<FieldT>(pb, 256, \"root_digest_new\"));\n        new_leaf[k].reset(new digest_variable<FieldT>(pb, 256, \"new leaf\"));\n\n        pub_key_x_bin[k].allocate(pb, 256, \"pub_key_x_bin\");\n        pub_key_y_bin[k].allocate(pb, 256, \"pub_key_y_bin\");\n        address_bits_va[k].allocate(pb, 256, \"address_bits\");\n        rhs_leaf[k].allocate(pb, 256, \"rhs_leaf\");\n\n        S[k].allocate(pb, 256, FMT(\"annotation_prefix\", \" scalar to multiply by\"));\n        pk_x_bin[k].allocate(pb, 256, FMT(\"annotation_prefix\", \" scalar to multiply by\"));\n        pk_y_bin[k].allocate(pb, 256, FMT(\"annotation_prefix\", \" scalar to multiply by\"));\n        r_x_bin[k].allocate(pb, 256, FMT(\"annotation_prefix\", \" scalar to multiply by\"));\n        r_y_bin[k].allocate(pb, 256, FMT(\"annotation_prefix\", \" scalar to multiply by\"));\n\n        S[k].fill_with_bits(pb, S_bits[k]);\n\n        r_x_bin[k].fill_with_bits(pb, r_x_bits[k]);\n        r_y_bin[k].fill_with_bits(pb, r_y_bits[k]);\n\n        root_digest_old[k]->bits.fill_with_bits(pb, root[k]);\n        pub_key_x_bin[k].fill_with_bits(pb, pub_key_x[k]);\n        pub_key_y_bin[k].fill_with_bits(pb, pub_key_y[k]);\n        address_bits_va[k] = from_bits(address_bits[k], ZERO);\n        rhs_leaf[k].fill_with_bits(pb, rhs_leaf_bits[k]);\n        new_leaf[k]->bits.fill_with_bits(pb, new_leaf_bits[k]);\n    }\n\n    transactions.reset(new roll_up<FieldT>(pb, pub_key_x_bin, pub_key_y_bin, tree_depth,\n                                              address_bits_va, root_digest_old, root_digest_new,\n                                              path, path, rhs_leaf, S, new_leaf, r_x_bin, r_y_bin, old_root, new_root, leaves_data_availability, leaves_addresses_data_availability, noTx, \"Confirm tx\"));\n\n    
transactions->generate_r1cs_constraints();\n\n    transactions->generate_r1cs_witness();\n\n    std::cout << \"is satisfied: \" << pb.is_satisfied() << std::endl;\n\n    r1cs_ppzksnark_keypair<libff::alt_bn128_pp> keypair = generateKeypair(pb.get_constraint_system());\n\n    //save keys\n    vk2json(keypair, \"../keys/vk.json\");\n\n    r1cs_primary_input <FieldT> primary_input = pb.primary_input();\n    std::cout << \"primary_input \" << primary_input;\n    r1cs_auxiliary_input <FieldT> auxiliary_input = pb.auxiliary_input();\n    r1cs_ppzksnark_proof<libff::alt_bn128_pp> proof = r1cs_ppzksnark_prover<libff::alt_bn128_pp>(keypair.pk, primary_input, auxiliary_input);\n\n    auto json = proof_to_json(proof, primary_input, false);\n\n    // +1 so the terminating NUL copied by memcpy fits in the buffer\n    auto result = new char[json.size() + 1];\n    memcpy(result, json.c_str(), json.size() + 1);\n\n    return result;\n}\n"
  },
  {
    "path": "src/roll_up_wrapper.hpp",
    "content": "/*    \n    copyright 2018 to the roll_up Authors\n\n    This file is part of roll_up.\n\n    roll_up is free software: you can redistribute it and/or modify\n    it under the terms of the GNU General Public License as published by\n    the Free Software Foundation, either version 3 of the License, or\n    (at your option) any later version.\n\n    roll_up is distributed in the hope that it will be useful,\n    but WITHOUT ANY WARRANTY; without even the implied warranty of\n    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n    GNU General Public License for more details.\n\n    You should have received a copy of the GNU General Public License\n    along with roll_up.  If not, see <https://www.gnu.org/licenses/>.\n*/\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#include <stdbool.h>\n#include <stdint.h>\nconst int tree_depth = 2;\nchar* _sha256Constraints();\nchar* _sha256Witness();\nchar* prove(bool _path[][tree_depth][256], bool _pub_key_x[][256], bool _pub_key_y[][256], bool _root[][256],\n            bool _address_bits[][tree_depth], bool _rhs_leaf[][256],\n            bool _new_leaf[][256], bool _r_x[][256], bool _r_y[][256], bool _S[][256], int _tree_depth, int noTx);\nvoid genKeys(int noTx, char* pkOutput, char* vkOutput);\n\n\nbool verify( char* vk, char* _g_A_0, char* _g_A_1, char* _g_A_2 ,  char* _g_A_P_0, char* _g_A_P_1, char* _g_A_P_2,\n             char* _g_B_1, char* _g_B_0, char* _g_B_3, char* _g_B_2, char* _g_B_5 , char* _g_B_4, char* _g_B_P_0, char* _g_B_P_1, char* _g_B_P_2,\n             char* _g_C_0, char* _g_C_1, char* _g_C_2, char* _g_C_P_0, char* _g_C_P_1, char* _g_C_P_2,\n             char* _g_H_0, char* _g_H_1, char* _g_H_2, char* _g_K_0, char* _g_K_1, char* _g_K_2, char* _input0 , char* _input1 , char* _input2, char* _input3,\n             char* _input4, char* _input5\n             ) ;\n\n\n\n#ifdef __cplusplus\n} // extern \"C\"\n#endif\n"
  },
  {
    "path": "src/sha256/sha256_ethereum.cpp",
    "content": "/*    \n    copyright 2018 kobigurk\n    https://github.com/kobigurk/sha256_ethereum\n    MIT license\n*/\n\n\n#include <iostream>\n\n#include \"libsnark/gadgetlib1/gadget.hpp\"\n#include \"libsnark/gadgetlib1/protoboard.hpp\"\n#include \"libff/common/default_types/ec_pp.hpp\"\n\n\n\n#include <libsnark/common/data_structures/merkle_tree.hpp>\n#include <libsnark/gadgetlib1/gadgets/basic_gadgets.hpp>\n#include <libsnark/gadgetlib1/gadgets/hashes/hash_io.hpp>\n#include <libsnark/gadgetlib1/gadgets/hashes/sha256/sha256_components.hpp>\n#include <libsnark/gadgetlib1/gadgets/hashes/sha256/sha256_gadget.hpp>\n\nusing namespace libsnark;\nusing namespace libff;\n\nusing std::vector;\n\n//typedef libff::Fr<libff::default_ec_pp> FieldT;\ntypedef libff::Fr<alt_bn128_pp> FieldT;\n\npb_variable_array<FieldT> from_bits(std::vector<bool> bits, pb_variable<FieldT>& ZERO) {\n    pb_variable_array<FieldT> acc;\n\n\t\tfor (size_t i = 0; i < bits.size(); i++) {\n\t\t\tbool bit = bits[i];\n\t\t\tacc.emplace_back(bit ? 
ONE : ZERO);\n\t\t}\n\n    return acc;\n}\n\nclass sha256_ethereum : gadget<FieldT> {\nprivate:\n    std::shared_ptr<block_variable<FieldT>> block1;\n    std::shared_ptr<block_variable<FieldT>> block2;\n    std::shared_ptr<sha256_compression_function_gadget<FieldT>> hasher1;\n    std::shared_ptr<digest_variable<FieldT>> intermediate_hash;\n    std::shared_ptr<sha256_compression_function_gadget<FieldT>> hasher2;\n\npublic:\n\n   sha256_ethereum(protoboard<FieldT> &pb,\n                                  const size_t block_length,\n                                  const block_variable<FieldT> &input_block,\n                                  const digest_variable<FieldT> &output,\n                                  const std::string &annotation_prefix) : gadget<FieldT>(pb, \"sha256_ethereum\") {\n\n         intermediate_hash.reset(new digest_variable<FieldT>(pb, 256, \"intermediate\"));\n         pb_variable<FieldT> ZERO;\n       \n         ZERO.allocate(pb, \"ZERO\");\n         pb.val(ZERO) = 0;\n\n        // final padding\n         pb_variable_array<FieldT> length_padding =\n            from_bits({\n                // padding\n                1,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n       
         0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n\n                // length of message (512 bits)\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,0,0,\n                0,0,0,0,0,0,1,0,\n                0,0,0,0,0,0,0,0\n            }, ZERO);\n\n/*        block2.reset(new block_variable<FieldT>(pb, {\n            length_padding\n        }, \"block2\"));\n*/\n        pb_linear_combination_array<FieldT> IV = SHA256_default_IV(pb);\n\n        hasher1.reset(new sha256_compression_function_gadget<FieldT>(\n            pb,\n            IV,\n            input_block.bits,\n            *intermediate_hash,\n        \"hasher1\"));\n\n        pb_linear_combination_array<FieldT> IV2(intermediate_hash->bits);\n  //      std::cout << block2->bits;\n//        std::cout << intermediate_hash;\n\n        hasher2.reset(new 
sha256_compression_function_gadget<FieldT>(\n            pb,\n            IV2,\n            length_padding,\n            output,\n            \"hasher2\"));\n    }\n\n    void generate_r1cs_constraints(const bool ensure_output_bitness) {\n        libff::UNUSED(ensure_output_bitness);\n        hasher1->generate_r1cs_constraints();\n        hasher2->generate_r1cs_constraints();\n    }\n\n    void generate_r1cs_witness() {\n        hasher1->generate_r1cs_witness();\n        hasher2->generate_r1cs_witness();\n    }\n\n    static size_t get_digest_len()\n    {\n        return 256;\n    }\n\n    static libff::bit_vector get_hash(const libff::bit_vector &input)\n    {\n        protoboard<FieldT> pb;\n\n        block_variable<FieldT> input_variable(pb, SHA256_block_size, \"input\");\n        digest_variable<FieldT> output_variable(pb, SHA256_digest_size, \"output\");\n        sha256_ethereum f(pb, SHA256_block_size, input_variable, output_variable, \"f\");\n\n        input_variable.generate_r1cs_witness(input);\n        f.generate_r1cs_witness();\n\n        return output_variable.get_digest();\n    }\n\n    static size_t expected_constraints(const bool ensure_output_bitness)\n    {\n        libff::UNUSED(ensure_output_bitness);\n        return 54560; /* hardcoded for now */\n    }\n};\n\n// Pack a list of bits into big-endian words of the given size\nvector<unsigned long> bit_list_to_ints(vector<bool> bit_list, const size_t wordsize) {\n    vector<unsigned long> res;\n    size_t iterations = bit_list.size() / wordsize + 1;\n    for (size_t i = 0; i < iterations; ++i) {\n        unsigned long current = 0;\n        for (size_t j = 0; j < wordsize; ++j) {\n            if (bit_list.size() == (i*wordsize + j)) break;\n            current += (bit_list[i*wordsize + j] * (1ul << (wordsize-1-j)));\n        }\n        res.push_back(current);\n    }\n    return res;\n}\n"
  },
  {
    "path": "src/tx.hpp",
    "content": "/*    \n    copyright 2018 to the roll_up Authors\n\n    This file is part of roll_up.\n\n    roll_up is free software: you can redistribute it and/or modify\n    it under the terms of the GNU General Public License as published by\n    the Free Software Foundation, either version 3 of the License, or\n    (at your option) any later version.\n\n    roll_up is distributed in the hope that it will be useful,\n    but WITHOUT ANY WARRANTY; without even the implied warranty of\n    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n    GNU General Public License for more details.\n\n    You should have received a copy of the GNU General Public License\n    along with roll_up.  If not, see <https://www.gnu.org/licenses/>.\n*/\n\n\n#include <cassert>\n#include <memory>\n\n#include <libsnark/gadgetlib1/gadget.hpp>\n#include \"baby_jubjub_ecc/main.cpp\"\n\n\n\nnamespace libsnark {\n\ntemplate<typename FieldT, typename HashT>\nclass tx: public gadget<FieldT> {\n// transaction gadget\nprivate:\n    /* no internal variables */\npublic:\n    pb_variable<FieldT> a;\n    pb_variable<FieldT> d;\n\n    int tree_depth;\n    // intermediate variables\n\n\n    pb_variable_array<FieldT> pub_key_x_bin;\n    pb_variable_array<FieldT> pub_key_y_bin;\n    std::string annotation_prefix = \"roll up\";\n\n    // internal\n    std::shared_ptr<HashT> public_key_hash;\n    std::shared_ptr<HashT> leaf_hash;\n    std::shared_ptr<HashT> message_hash;\n\n\n\n    std::shared_ptr<digest_variable<FieldT>> lhs_leaf;\n    pb_variable_array<FieldT> rhs_leaf;\n\n    std::shared_ptr<digest_variable<FieldT>> leaf;\n    std::shared_ptr<digest_variable<FieldT>> root_digest_old;\n    std::shared_ptr<digest_variable<FieldT>> root_digest_calculated;\n    std::shared_ptr<digest_variable<FieldT>> root_digest_new;\n    std::shared_ptr<digest_variable<FieldT>> message;\n\n\n    std::shared_ptr<merkle_authentication_path_variable<FieldT, HashT>> path_var_old;\n    
std::shared_ptr<merkle_authentication_path_variable<FieldT, HashT>> path_var_new;\n\n    std::shared_ptr<merkle_tree_check_update_gadget<FieldT, HashT>> ml;\n    std::shared_ptr<merkle_tree_check_read_gadget<FieldT, HashT>> ml_update;\n\n    std::vector<merkle_authentication_node> path_old;\n    std::vector<merkle_authentication_node> path_new;\n    pb_variable_array<FieldT> address_bits_va;\n\n    std::shared_ptr<eddsa<FieldT, HashT>> jubjub_eddsa;\n    std::shared_ptr<multipacking_gadget<FieldT>> unpacker_pub_key_x;\n    std::shared_ptr<multipacking_gadget<FieldT>> unpacker_pub_key_y;\n\n    std::shared_ptr <block_variable<FieldT>> input_variable;\n    std::shared_ptr <block_variable<FieldT>> input_variable2;\n    std::shared_ptr <block_variable<FieldT>> input_variable3;\n\n\n    pb_variable_array<FieldT> pub_key_x;\n    pb_variable_array<FieldT> pub_key_y;\n    std::shared_ptr<digest_variable<FieldT>> new_leaf;\n\n \n    pb_variable<FieldT> ZERO;\n    pb_variable<FieldT> ONE_test;\n\n\n\n    tx(protoboard<FieldT> &pb,\n                   pb_variable_array<FieldT> &pub_key_x_bin, \n                   pb_variable_array<FieldT> &pub_key_y_bin,\n                   int tree_depth, pb_variable_array<FieldT> address_bits_va, std::shared_ptr<digest_variable<FieldT>> root_digest_old,\n                   std::shared_ptr<digest_variable<FieldT>> root_digest_new,\n                   std::vector<merkle_authentication_node> path_old, std::vector<merkle_authentication_node> path_new, pb_variable_array<FieldT> rhs_leaf,\n                   pb_variable_array<FieldT> S, std::shared_ptr<digest_variable<FieldT>> new_leaf, pb_variable_array<FieldT> r_x_bin, pb_variable_array<FieldT> r_y_bin,\n                   const std::string &annotation_prefix);\n\n    void generate_r1cs_constraints();\n    void generate_r1cs_witness();\n};\n\n} // libsnark\n#include <tx.tcc>\n\n"
  },
  {
    "path": "src/tx.tcc",
    "content": "/*    \n    copyright 2018 to the roll_up Authors\n\n    This file is part of roll_up.\n\n    roll_up is free software: you can redistribute it and/or modify\n    it under the terms of the GNU General Public License as published by\n    the Free Software Foundation, either version 3 of the License, or\n    (at your option) any later version.\n\n    roll_up is distributed in the hope that it will be useful,\n    but WITHOUT ANY WARRANTY; without even the implied warranty of\n    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n    GNU General Public License for more details.\n\n    You should have received a copy of the GNU General Public License\n    along with roll_up.  If not, see <https://www.gnu.org/licenses/>.\n*/\n\n\n\nnamespace libsnark {\n    template<typename FieldT, typename HashT>\n    tx<FieldT,HashT>::tx(protoboard<FieldT> &pb,\n                   pb_variable_array<FieldT> &pub_key_x_bin, \n                   pb_variable_array<FieldT> &pub_key_y_bin,\n                   int tree_depth, pb_variable_array<FieldT> address_bits_va, std::shared_ptr<digest_variable<FieldT>> root_digest_old, \n                   std::shared_ptr<digest_variable<FieldT>> root_digest_new,\n                   std::vector<merkle_authentication_node> path_old, std::vector<merkle_authentication_node> path_new, pb_variable_array<FieldT> rhs_leaf,\n                   pb_variable_array<FieldT> S, std::shared_ptr<digest_variable<FieldT>> new_leaf, pb_variable_array<FieldT> r_x_bin, pb_variable_array<FieldT> r_y_bin,\n                   const std::string &annotation_prefix): gadget<FieldT>(pb, annotation_prefix) ,\n                   pub_key_x_bin(pub_key_x_bin), \n                   pub_key_y_bin(pub_key_y_bin) , tree_depth(tree_depth), path_old(path_old), \n                   path_new(path_new), address_bits_va(address_bits_va), rhs_leaf(rhs_leaf), \n                   root_digest_old(root_digest_old), root_digest_new(root_digest_new), new_leaf(new_leaf) 
{\n\n\n\n        pb_variable<FieldT> base_x;\n        pb_variable<FieldT> base_y;\n\n        pb_variable<FieldT> a;\n        pb_variable<FieldT> d;\n\n        // public key\n        pb_variable<FieldT> pub_x;\n        pb_variable<FieldT> pub_y;\n\n\n        base_x.allocate(pb, \"base x\");\n        base_y.allocate(pb, \"base y\");\n\n        pub_x.allocate(pb, \"pub_x\");\n        pub_y.allocate(pb, \"pub_y\");\n\n\n        a.allocate(pb, \"a\");\n        d.allocate(pb, \"d\");\n\n        pb.val(base_x) = FieldT(\"17777552123799933955779906779655732241715742912184938656739573121738514868268\");\n        pb.val(base_y) = FieldT(\"2626589144620713026669568689430873010625803728049924121243784502389097019475\");\n\n        pb.val(a) = FieldT(\"168700\");\n        pb.val(d) = FieldT(\"168696\");\n\n\n        pub_key_x.allocate(pb, 2, \"pub_key_x\");\n        pub_key_y.allocate(pb, 2, \"pub_key_y\");\n\n        ZERO.allocate(pb, \"ZERO\");\n        pb.val(ZERO) = 0;\n\n\n        lhs_leaf.reset(new digest_variable<FieldT>(pb, 256, \"lhs_leaf\"));\n        leaf.reset(new digest_variable<FieldT>(pb, 256, \"leaf\"));\n\n        message.reset(new digest_variable<FieldT>(pb, 256, \"message digest\"));\n\n        input_variable.reset(new block_variable<FieldT>(pb, {pub_key_x_bin, pub_key_y_bin}, \"input_variable\"));\n        input_variable2.reset(new block_variable<FieldT>(pb, {lhs_leaf->bits, rhs_leaf}, \"input_variable2\"));\n\n\n        public_key_hash.reset(new sha256_ethereum(pb, 256, *input_variable, *lhs_leaf, \"public key hash\"));\n        leaf_hash.reset(new sha256_ethereum(pb, 256, *input_variable2, *leaf, \"leaf hash\"));\n        input_variable3.reset(new block_variable<FieldT>(pb, {leaf->bits, new_leaf->bits}, \"input_variable3\"));\n        message_hash.reset(new sha256_ethereum(pb, 256, *input_variable3, *message, \"message hash\"));\n\n\n        unpacker_pub_key_x.reset(new multipacking_gadget<FieldT>(\n            pb,\n            pub_key_x_bin, 
//pb_linear_combination_array<FieldT>(cm->bits.begin(), cm->bits.begin() , cm->bits.size()),\n            pub_key_x,\n            FieldT::capacity() + 1,\n            \"pack pub key x into var\"\n        ));\n\n        unpacker_pub_key_y.reset(new multipacking_gadget<FieldT>(\n            pb,\n            pub_key_y_bin, //pb_linear_combination_array<FieldT>(cm->bits.begin(), cm->bits.begin() , cm->bits.size()),\n            pub_key_y,\n            FieldT::capacity() + 1,\n            \"pack pub key y into var\"\n        ));\n\n        path_var_old.reset(new merkle_authentication_path_variable<FieldT, HashT>(pb, tree_depth, \"path_var_old\"));\n        path_var_new.reset(new merkle_authentication_path_variable<FieldT, HashT>(pb, tree_depth, \"path_var_new\"));\n\n        ml.reset(new merkle_tree_check_update_gadget<FieldT, HashT>(pb, tree_depth, address_bits_va, *leaf, *root_digest_old, *path_var_old, *new_leaf, *root_digest_new, *path_var_new, ONE, \"ml\"));\n        jubjub_eddsa.reset(new eddsa<FieldT, HashT>(pb, a, d, pub_key_x_bin, pub_key_y_bin, base_x, base_y, r_x_bin, r_y_bin, message->bits, S));\n    }\n\n    template<typename FieldT, typename HashT>\n    void tx<FieldT, HashT>::generate_r1cs_constraints() {\n        jubjub_eddsa->generate_r1cs_constraints();\n\n        public_key_hash->generate_r1cs_constraints(true);\n        leaf_hash->generate_r1cs_constraints(true);\n        message_hash->generate_r1cs_constraints(true);\n\n        unpacker_pub_key_x->generate_r1cs_constraints(true);\n        unpacker_pub_key_y->generate_r1cs_constraints(true);\n\n        path_var_old->generate_r1cs_constraints();\n        path_var_new->generate_r1cs_constraints();\n\n        root_digest_old->generate_r1cs_constraints();\n        root_digest_new->generate_r1cs_constraints();\n        ml->generate_r1cs_constraints();\n\n        // make sure the target root matches the calculated root\n        //for(int i = 0 ; i < 255; i++) {\n        //    
this->pb.add_r1cs_constraint(r1cs_constraint<FieldT>(1, root_digest_old->bits[i], root_digest_calculated->bits[i]),\n        //                   FMT(annotation_prefix, \" root digests equal\"));\n        //}\n    }\n\n\n    template<typename FieldT, typename HashT>\n    void tx<FieldT, HashT>::generate_r1cs_witness() {\n        public_key_hash->generate_r1cs_witness();\n        leaf_hash->generate_r1cs_witness();\n        message_hash->generate_r1cs_witness();\n\n        unpacker_pub_key_x->generate_r1cs_witness_from_bits();\n        unpacker_pub_key_y->generate_r1cs_witness_from_bits();\n\n        auto address = address_bits_va.get_field_element_from_bits(this->pb);\n\n        path_var_old->generate_r1cs_witness(address.as_ulong(), path_old);\n        path_var_new->generate_r1cs_witness(address.as_ulong(), path_new);\n\n        ml->generate_r1cs_witness();\n        jubjub_eddsa->generate_r1cs_witness();\n\n        // debug output, disabled by default\n        /*\n        std::cout << \" leaf \" ;\n        for(uint i = 0; i < 256; i++) {\n             std::cout << \" , \" << this->pb.lc_val(leaf->bits[i]);\n        }\n\n        std::cout << \"new leaf \" ;\n        for(uint i = 0; i < 256; i++) {\n             std::cout << \" , \" << this->pb.lc_val(new_leaf->bits[i]);\n        }\n\n        std::cout << \"message \" ;\n        for(uint i = 0; i < 256; i++) {\n             std::cout << \" , \" << this->pb.lc_val(message->bits[i]);\n        }\n\n        std::cout << \" pub_key_x \" << this->pb.lc_val(pub_key_x[0]) << \" \" << this->pb.lc_val(pub_key_x[1]) << std::endl;\n        std::cout << \" pub_key_y \" << this->pb.lc_val(pub_key_y[0]) << \" \" << this->pb.lc_val(pub_key_y[1]) << std::endl;\n\n        std::cout << \"pub_key_x \" ;\n        for(uint i = 0; i < 256; i++) {\n             std::cout << \" , \" << this->pb.lc_val(pub_key_x_bin[i]);\n        }\n        */\n    }\n}\n"
  },
  {
    "path": "tests/test.py",
    "content": "'''   \n    copyright 2018 to the roll_up Authors\n\n    This file is part of roll_up.\n\n    roll_up is free software: you can redistribute it and/or modify\n    it under the terms of the GNU General Public License as published by\n    the Free Software Foundation, either version 3 of the License, or\n    (at your option) any later version.\n\n    roll_up is distributed in the hope that it will be useful,\n    but WITHOUT ANY WARRANTY; without even the implied warranty of\n    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n    GNU General Public License for more details.\n\n    You should have received a copy of the GNU General Public License\n    along with roll_up.  If not, see <https://www.gnu.org/licenses/>.\n'''\n\n\nimport sys\nimport pdb\nsys.path.insert(0, '../pythonWrapper')\nsys.path.insert(0, \"../depends/baby_jubjub_ecc/tests\")\n\nsys.path.insert(0, '../contracts')\nfrom contract_deploy import contract_deploy, verify\n\nfrom helper import *\nimport utils\nfrom utils import getSignature, createLeaf, hashPadded, libsnark2python, normalize_proof, hex2int\nimport ed25519 as ed\n\nfrom web3 import Web3, HTTPProvider, TestRPCProvider\n\nhost = sys.argv[1] if len(sys.argv) > 1 else \"localhost\"\nw3 = Web3(HTTPProvider(\"http://\" + host + \":8545\"))\n\n\nif __name__ == \"__main__\":\n\n    pk_output = \"../zksnark_element/pk.raw\"  # Prover key\n    vk_output = \"../zksnark_element/vk.json\" # Verifier key\n\n    #genKeys(c.c_int(noTx), c.c_char_p(pk_output.encode()) , c.c_char_p(vk_output.encode()))\n\n    pub_x = []\n    pub_y = []\n    leaves = []\n    R_x = []\n    R_y = []\n    S = []\n    old_leaf = []\n    new_leaf = []\n    rhs_leaf = []   # Message\n    address = []\n    public_key = []\n    sk = []\n    fee = 0\n\n    # Generate random private key\n    sk.append(genSalt(64))\n\n    # Public key from private key\n    public_key.append(ed.publickey(sk[0]))\n\n    # Empty right-hand side of the first leaf\n    
rhs_leaf.append(hashPadded(\"0\"*64, \"0\"*64)[2:])\n\n    # Iterate over the transactions in the merkle tree\n    for j in range(1, noTx + 1):\n\n        leaves.append([])\n\n        # Create a new key pair from a random private key\n        sk.append(genSalt(64))\n        public_key.append(ed.publickey(sk[j]))\n\n        # Create a new leaf message\n        # This is just a filler message for test purposes (e.g. 11111111..., 2222222...)\n        rhs_leaf.append(hashPadded(hex(j)[2]*64, \"1\"*64)[2:])\n\n        # The old leaf is the previous pubkey with the previous message\n        old_leaf.append(createLeaf(public_key[j-1], rhs_leaf[j-1]))\n\n        # The new leaf is the current pubkey with the current message\n        new_leaf.append(createLeaf(public_key[j], rhs_leaf[j]))\n\n        # The message to sign is the hash of the old leaf with the new leaf\n        message = hashPadded(old_leaf[j-1], new_leaf[j-1])\n\n        # Remove the '0x' prefix\n        message = message[2:]\n\n        # Obtain the signature\n        r, s = getSignature(message, sk[j-1], public_key[j-1])\n\n        # Check the signature is correct\n        ed.checkvalid(r, s, message, public_key[j-1])\n\n        # Reverse the bits of the public key (little-endian) so that\n        # the unpacker in libsnark returns the correct field element\n        pub_key_x = hex(int(''.join(str(e) for e in hexToBinary(hex(public_key[j-1][0]))[::-1]), 2))\n        pub_key_y = hex(int(''.join(str(e) for e in hexToBinary(hex(public_key[j-1][1]))[::-1]), 2))\n\n        r[0] = hex(int(''.join(str(e) for e in hexToBinary(hex(r[0]))[::-1]), 2))\n        r[1] = hex(int(''.join(str(e) for e in hexToBinary(hex(r[1]))[::-1]), 2))\n\n        # R on the x and y axes of the curve\n        R_x.append(r[0])\n        R_y.append(r[1])\n\n        # Store s\n        S.append(s)\n\n        # 
Store public key\n        pub_x.append(pub_key_x)\n        pub_y.append(pub_key_y)\n\n        leaves[j-1].append(old_leaf[j-1])\n\n        address.append(0)\n\n    # Get the zk proof and the merkle root\n    proof, root = genWitness(leaves, pub_x, pub_y, address, tree_depth,\n                             rhs_leaf, new_leaf, R_x, R_y, S)\n\n    proof = normalize_proof(proof)\n\n    #root , merkle_tree = utils.genMerkelTree(tree_depth, leaves[0])\n\n    try:\n        inputs = libsnark2python(proof[\"input\"])\n\n        proof_input_root = libsnark2python(proof[\"input\"][:2])[0]\n        assert proof_input_root == root, \"Proof input root {} not matching the root {}\".format(proof_input_root, root)\n\n        # Calculate the final root\n        root_final, merkle_tree = utils.genMerkelTree(tree_depth, leaves[-1])\n\n        proof_input_root_final = libsnark2python(proof[\"input\"][2:4])[0]\n        assert proof_input_root_final == root_final, \"Proof input final root {} not matching the final root {}\".format(proof_input_root_final, root_final)\n\n        first_leaf = libsnark2python(proof[\"input\"][4:6])[0]\n        assert first_leaf == \"0x\" + leaves[1][0], \"First leaf {} not matching the leaf {}\".format(first_leaf, leaves[1][0])\n\n        contract = contract_deploy(1, \"../keys/vk.json\", root, host)\n\n        result = verify(contract, proof, host)\n\n        print(result)\n        assert result[\"status\"] == 1, \"Result status of the verify function is {}, expected 1\".format(result[\"status\"])\n\n        contract_root = w3.toHex(contract.getRoot())[:65]\n        assert contract_root == root_final[:65], \"Contract root {} does not equal root_final {}\".format(contract_root, root_final)\n    except Exception:\n        pdb.set_trace()\n        raise\n"
  }
]