[
  {
    "path": ".bumpversion.cfg",
    "content": "[bumpversion]\ncurrent_version = 0.0.2-alpha.2\ncommit = True\ntag = True\nparse = (?P<major>\\d+)\\.(?P<minor>\\d+)\\.(?P<patch>\\d+)(-(?P<stage>[^.]*)\\.(?P<devnum>\\d+))?\nserialize = \n\t{major}.{minor}.{patch}-{stage}.{devnum}\n\t{major}.{minor}.{patch}\n\n[bumpversion:part:stage]\noptional_value = stable\nfirst_value = stable\nvalues = \n\talpha\n\tbeta\n\tstable\n\n[bumpversion:part:devnum]\n\n[bumpversion:file:setup.py]\nsearch = version='{current_version}',\nreplace = version='{new_version}',\n\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE.md",
    "content": "* OS: osx/linux/win\n* Environment (output of `pip freeze`):\n    * Python version\n    * Vyper version\n    * py-evm version\n\n### What is wrong?\n\nPlease include information like:\n\n* full output of the error you received\n* what command you ran\n* the code that caused the failure (see [this link](https://help.github.com/articles/basic-writing-and-formatting-syntax/) for help with formatting code)\n\n\n### How can it be fixed\n\nFill this in if you know how to fix it.\n"
  },
  {
    "path": ".github/PULL_REQUEST_TEMPLATE.md",
    "content": "### What was wrong?\n\n\n\n### How was it fixed?\n\n\n\n#### Cute Animal Picture\n\n![put a cute animal picture link inside the parentheses]()\n"
  },
  {
    "path": ".gitignore",
    "content": "*.py[co]\n__pycache__/\n*~\n[#]*[#]\n.*.swp\n.*.swo\n.*.swn\n.~\n.DS_Store\n/tmp/\n/.venv/\n/dist/\n/*.egg-info/\n/.tox/\n/bin/\n/develop-eggs/\n/eggs/\n.installed.cfg\nlogging.conf\n*.log\n.coverage\n.eggs\n.cache\n.env\n.idea\n.venv*\n.build"
  },
  {
    "path": ".travis.yml",
    "content": "sudo: false\nlanguage: python\ndist: trusty\nenv:\n  global:\n    - PYTEST_ADDOPTS=\"-n 2 --durations 50 --maxfail 50\"\nmatrix:\n  include:\n    #\n    # Linting and Static Analysis\n    #\n    - python: \"3.5\"\n      env: TOX_POSARGS=\"-e lint35\"\n    - python: \"3.6\"\n      env: TOX_POSARGS=\"-e lint36\"\n    #\n    # Python 3.5\n    #\n    - python: \"3.5\"\n      env: TOX_POSARGS=\"-e py35-handler\"\n    #\n    # Python 3.6\n    #\n    - python: \"3.6\"\n      env: TOX_POSARGS=\"-e py36-contract\"\n    - python: \"3.6\"\n      env: TOX_POSARGS=\"-e py36-handler\"\ncache:\n  pip: true\ninstall:\n  - \"travis_retry pip install pip setuptools --upgrade\"\n  - \"travis_retry pip install tox\"\nbefore_script:\n  - pip freeze\nscript:\n  - tox $TOX_POSARGS\nafter_script:\n  - cat .tox/$TOX_POSARGS/log/*.log\n"
  },
  {
    "path": "MANIFEST.in",
    "content": "include README.md\ninclude requirements.txt\ninclude requirements-dev.txt\n\ninclude sharding/contracts/sharding_manager.json\n"
  },
  {
    "path": "Makefile",
    "content": "# Variables\n# compile-smc parameters\ncompile_script = tools/vyper_compile_script.py\ncontract = sharding/contracts/sharding_manager.v.py\ncontract_json = sharding/contracts/sharding_manager.json\n\n# Using target:prerequisites to avoid redundant compilation.\n$(contract_json): $(contract)\n\tpython $(compile_script) $(contract)\n\n# Commands\nhelp:\n\t@echo \"compile-smc - compile sharding manager contract\"\n\t@echo \"clean - remove build and Python file artifacts\"\n\t@echo \"clean-build - remove build artifacts\"\n\t@echo \"clean-pyc - remove Python file artifacts\"\n\t@echo \"lint - check style with flake8 and mypy\"\n\t@echo \"test - run tests quickly with the default Python\"\n\t@echo \"test-all - run tox\"\n\t@echo \"release - package and upload a release\"\n\t@echo \"dist - package\"\n\ncompile-smc: $(contract_json)\n\nclean: clean-build clean-pyc\n\nclean-build:\n\trm -fr build/\n\trm -fr dist/\n\trm -fr *.egg-info\n\nclean-pyc:\n\tfind . -name '*.pyc' -exec rm -f {} +\n\tfind . -name '*.pyo' -exec rm -f {} +\n\tfind . -name '*~' -exec rm -f {} +\n\nlint:\n\ttox -elint3{5,6}\n\ntest:\n\tpy.test --tb native tests\n\ntest-all:\n\ttox\n\nrelease: clean\n\tCURRENT_SIGN_SETTING=$(git config commit.gpgSign)\n\tgit config commit.gpgSign true\n\tbumpversion $(bump)\n\tgit push upstream && git push upstream --tags\n\tpython setup.py sdist bdist_wheel upload\n\tgit config commit.gpgSign \"$(CURRENT_SIGN_SETTING)\"\n\nsdist: clean\n\tpython setup.py sdist bdist_wheel\n\tls -l dist\n"
  },
  {
    "path": "README.md",
    "content": "# Sharding\n\n### Sharding Implementation\nRefer [Py-EVM](https://github.com/ethereum/py-evm) for the latest implementation progress.\n\n### Specification and Documentation\nSee the \"docs\" directory for documentation and EIPs.\n\n### Ethereum Research Forum\nPlease visit [ethresear.ch](https://ethresear.ch/c/sharding).\n"
  },
  {
    "path": "docs/doc.html",
    "content": "<h3>Preliminaries</h3>\n\n<p>We assume that at address <code>VALIDATOR_MANAGER_ADDRESS</code> (on the existing \"main shard\") there exists a contract that manages an active \"validator set\", and supports the following functions:</p>\n\n<ul>\n<li><code>deposit(address validationCodeAddr, address returnAddr) returns uint256</code>: adds a validator to the validator set, with the validator's size being the <code>msg.value</code> (ie. amount of ETH deposited) in the function call. Returns the validator index. <code>validationCodeAddr</code> stores the address of the validation code; the function fails if this address's code has not been purity-verified.</li>\n<li><code>withdraw(uint256 validatorIndex, bytes sig) returns bool</code>: verifies that the signature is correct (ie. a call with 200000 gas, <code>validationCodeAddr</code> as destination, 0 value and <code>sha3(\"withdraw\") + sig</code> as data returns 1), and if it is removes the validator from the validator set and refunds the deposited ETH.</li>\n<li><code>sample(uint256 shardId) returns uint256</code>: uses a recent block hash as a seed to pseudorandomly select a signer from the validator set. 
The chance of being selected should be proportional to the validator's deposit.</li>\n<li><code>addHeader(bytes header) returns bool</code>: attempts to process a collation header, returns True on success, reverts on failure.</li>\n<li><code>get_shard_head(uint256 shardId) returns bytes32</code>: returns the header hash that is the head of a given shard as perceived by the manager contract.</li>\n<li><code>getAncestor(bytes32 hash)</code>: returns the 10000th ancestor of this hash.</li>\n<li><code>getAncestorDistance(bytes32 hash)</code>: returns the difference between the block number of this hash and the block number of the 10000th ancestor of this hash.</li>\n<li><code>getCollationGasLimit()</code>: returns the gas limit that collations can currently have (by default make this function always answer 10 million).</li>\n<li><code>txToShard(address to, uint256 shardId, bytes data) returns uint256</code>: records a request to deposit <code>msg.value</code> ETH to address <code>to</code> in shard <code>shardId</code> during a future collation. 
Saves a receipt ID for this request, also saving <code>msg.value</code>, <code>to</code>, <code>shardId</code>, <code>data</code> and <code>msg.sender</code>.</li>\n</ul>\n\n<h3>Parameters</h3>\n\n<ul>\n<li><code>SERENITY_FORK_BLKNUM</code>: ????</li>\n<li><code>SHARD_COUNT</code>: 100</li>\n<li><code>VALIDATOR_MANAGER_ADDRESS</code>: ????</li>\n<li><code>USED_RECEIPT_STORE_ADDRESS</code>: ????</li>\n<li><code>SIG_GASLIMIT</code>: 40000</li>\n<li><code>COLLATOR_REWARD</code>: 0.002</li>\n<li><code>PERIOD_LENGTH</code>: 5 blocks</li>\n<li><code>SHUFFLING_CYCLE</code>: 2500 blocks</li>\n</ul>\n\n<h3>Specification</h3>\n\n<p>We first define a \"collation header\" as an RLP list with the following values:</p>\n\n<pre><code>[\n    shard_id: uint256,\n    expected_period_number: uint256,\n    period_start_prevhash: bytes32,\n    parent_collation_hash: bytes32,\n    tx_list_root: bytes32,\n    coinbase: address,\n    post_state_root: bytes32,\n    receipts_root: bytes32,\n    sig: bytes\n]\n</code></pre>\n\n<p>Where:</p>\n\n<ul>\n<li><code>shard_id</code> is the shard ID of the shard</li>\n<li><code>expected_period_number</code> is the period number in which this collation expects to be included. A period is an interval of <code>PERIOD_LENGTH</code> blocks.</li>\n<li><code>period_start_prevhash</code> is the block hash of block <code>PERIOD_LENGTH * expected_period_number - 1</code> (ie. the last block before the expected period starts). Opcodes in the shard that refer to block data (eg. 
NUMBER, DIFFICULTY) will refer to the data of this block, with the exception of COINBASE, which will refer to the shard coinbase.</li>\n<li><code>parent_collation_hash</code> is the hash of the parent collation</li>\n<li><code>tx_list_root</code> is the root hash of the trie holding the transactions included in this collation</li>\n<li><code>post_state_root</code> is the new state root of the shard after this collation</li>\n<li><code>receipts_root</code> is the root hash of the receipt trie</li>\n<li><code>sig</code> is a signature</li>\n</ul>\n\n<p>For blocks where <code>block.number &gt;= SERENITY_FORK_BLKNUM</code>, the block header's extra data must contain a hash which points to an RLP list of <code>SHARD_COUNT</code> objects, where each object is either the empty string or a valid collation header for a shard.</p>\n\n<p>A <strong>collation header</strong> is valid if calling <code>addHeader(header)</code> returns true. The validator manager contract should do this if:</p>\n\n<ul>\n<li>The <code>shard_id</code> is at least 0, and less than <code>SHARD_COUNT</code></li>\n<li>The <code>expected_period_number</code> equals <code>floor(block.number / PERIOD_LENGTH)</code></li>\n<li>A collation with the hash <code>parent_collation_hash</code> has already been accepted</li>\n<li>The <code>sig</code> is a valid signature. 
That is, if we calculate <code>validation_code_addr = sample(shard_id)</code>, then call <code>validation_code_addr</code> with the calldata being <code>sha3(shortened_header) ++ sig</code> (where <code>shortened_header</code> is the RLP encoded form of the collation header <em>without</em> the sig), the result of the call should be 1</li>\n</ul>\n\n<p>A <strong>collation</strong> is valid if (i) its collation header is valid, (ii) executing the collation on top of the <code>parent_collation_hash</code>'s <code>post_state_root</code> results in the given <code>post_state_root</code> and <code>receipts_root</code>, and (iii) the total gas used is less than or equal to the output of calling <code>getCollationGasLimit()</code> on the main shard.</p>\n\n<h3>Collation state transition function</h3>\n\n<p>The state transition process for executing a collation is as follows:</p>\n\n<ul>\n<li>Execute each transaction in the tree pointed to by <code>tx_list_root</code> in order</li>\n<li>Assign a reward of <code>COLLATOR_REWARD</code> to the coinbase</li>\n</ul>\n\n<h3>Receipt-consuming transactions</h3>\n\n<p>A transaction in a shard can use a receipt ID as its signature (that is, (v, r, s) = (1, receiptID, 0)). Let <code>(to, value, shard_id, sender, data)</code> be the values that were saved by the <code>txToShard</code> call that created this receipt. 
For such a transaction to be valid:</p>\n\n<ul>\n<li>Such a receipt <em>must</em> have in fact been created by a <code>txToShard</code> call in the main chain.</li>\n<li>The <code>to</code> and <code>value</code> of the transaction <em>must</em> match the <code>to</code> and <code>value</code> of this receipt.</li>\n<li>The shard Id <em>must</em> match <code>shard_id</code>.</li>\n<li>The contract at address <code>USED_RECEIPT_STORE_ADDRESS</code> <em>must NOT</em> have a record saved saying that the given receipt ID was already consumed.</li>\n</ul>\n\n<p>The transaction has an additional side effect of saving a record in <code>USED_RECEIPT_STORE_ADDRESS</code> saying that the given receipt ID has been consumed. Such a transaction effects a message whose:</p>\n\n<ul>\n<li><code>sender</code> is <code>USED_RECEIPT_STORE_ADDRESS</code></li>\n<li><code>to</code> is the <code>to</code> from the receipt</li>\n<li><code>value</code> is the <code>value</code> from the receipt, minus <code>gasprice * gaslimit</code></li>\n<li><code>data</code> is twelve zero bytes concatenated with the <code>sender</code> from the receipt concatenated with the <code>data</code> from the receipt</li>\n<li>Gas refunds go to the <code>to</code> address</li>\n</ul>\n\n<h3>Details of <code>sample</code></h3>\n\n<p>The <code>sample</code> function should be coded in such a way that any given validator randomly gets allocated to some number of shards every <code>SHUFFLING_CYCLE</code>, where the expected number of shards is proportional to the validator's balance. During that cycle, <code>sample(shard_id)</code> can only return that validator if the <code>shard_id</code> is one of the shards that they were assigned to. 
The purpose of this is to give validators time to download the state of the specific shards that they are allocated to.</p>\n\n<p>Here is one possible implementation of <code>sample</code>, assuming for simplicity of illustration that all validators have the same deposit size:</p>\n\n<pre><code>def sample(shard_id: num) -&gt; address:\n    cycle = floor(block.number / SHUFFLING_CYCLE)\n    cycle_seed = blockhash(cycle * SHUFFLING_CYCLE)\n    seed = blockhash(block.number - (block.number % PERIOD_LENGTH))\n    index_in_subset = num256_mod(as_num256(sha3(concat(seed, as_bytes32(shard_id)))),\n                                 100)\n    validator_index = num256_mod(as_num256(sha3(concat(cycle_seed, as_bytes32(shard_id), as_bytes32(index_in_subset)))),\n                                 as_num256(self.validator_set_size))\n    return self.validators[validator_index]\n</code></pre>\n\n<p>This picks out 100 validators for each shard during each cycle, and then during each block one out of those 100 validators is picked by choosing a distinct <code>index_in_subset</code> for each block.</p>\n\n<h3>Collation Header Production and Propagation</h3>\n\n<p>We generally expect collation headers to be produced and propagated as follows.</p>\n\n<ul>\n<li>Every time a new <code>SHUFFLING_CYCLE</code> starts, every validator computes the set of 100 validators for every shard that they were assigned to, and sees which shards they are eligible to validate in. The validator then downloads the state for that shard (using fast sync)</li>\n<li>The validator keeps track of the head of the chain for all shards they are currently assigned to. 
It is each validator's responsibility to reject invalid or unavailable collations, and refuse to build on such blocks, even if those blocks get accepted by the main chain validator manager contract.</li>\n<li>If a validator is currently eligible to validate in some shard <code>i</code>, they download the full collation associated with any collation header that is included in block headers for shard <code>i</code>.</li>\n<li>When a new period starts on the current global main chain, the validator calls <code>sample(i)</code> to determine if they are eligible to create a collation; if they are, then they do so.</li>\n</ul>\n\n<h3>Rationale</h3>\n\n<p>This allows for a quick and dirty form of medium-security proof of stake sharding in a way that achieves quadratic scaling through separation of concerns between block proposers and collators, and thereby increases throughput by ~100x without too many changes to the protocol or software architecture. This is intended to serve as the first phase in a multi-phase plan to fully roll out quadratic sharding, the latter phases of which are described below.</p>\n\n<h3>Subsequent phases</h3>\n\n<ul>\n<li><strong>Phase 2, option a</strong>: require collation headers to be added in as uncles instead of as transactions</li>\n<li><strong>Phase 2, option b</strong>: require collation headers to be added in an array, where item <code>i</code> in the array must be either a collation header of shard <code>i</code> or the empty string, and where the extra data must be the hash of this array (soft fork)</li>\n<li><strong>Phase 3 (two-way pegging)</strong>: add to the <code>USED_RECEIPT_STORE_ADDRESS</code> contract a function that allows receipts to be created in shards. Add to the main chain's <code>VALIDATOR_MANAGER_ADDRESS</code> a function for submitting Merkle proofs of unspent receipts that have confirmed (ie. 
they point to some hash <code>h</code> such that some hash <code>h2</code> exists such that <code>getAncestor(h2) = h</code> and <code>getAncestorDistance(h2) &lt; 10000 * PERIOD_LENGTH * 1.33</code>), which has similar behavior to the <code>USED_RECEIPT_STORE_ADDRESS</code> contract in the shards.</li>\n<li><strong>Phase 4 (tight coupling)</strong>: blocks are no longer valid if they point to invalid or unavailable collations. Add data availability proofs.</li>\n</ul>\n"
  },
  {
    "path": "docs/doc.md",
    "content": "## Introduction\n\nThe purpose of this document is to provide a reasonably complete specification and introduction for anyone looking to understand the details of the sharding proposal, as well as to implement it. This document as written describes only \"phase 1\" of quadratic sharding; [phases 2, 3 and 4](https://github.com/ethereum/sharding/blob/develop/docs/doc.md#subsequent-phases) are at this point out of scope, and super-quadratic sharding (\"Ethereum 3.0\") is also out of scope.\n\nSuppose that the variable `c` denotes the level of computational power available to one node. In a simple blockchain, the transaction capacity is bounded by O(c), as every node must process every transaction. The goal of quadratic sharding is to increase the capacity with a two-layer design. Stage 1 requires no hard forks; the main chain stays exactly as is. However, a contract is published to the main chain called the **validator manager contract** (VMC), which maintains the sharding system. There are O(c) **shards** (currently, 100), where each shard is like a separate \"galaxy\": it has its own account space, transactions need to specify which shard they are to be published inside, and communication between shards is very limited (in fact, in phase 1, it is nonexistent).\n\nThe shards are run on a simple longest-chain-rule proof of stake system, where the stake is on the main chain (specifically, inside the VMC). All shards share a common validator pool; this also means that anyone who signs up with the VMC as a validator could theoretically at any time be assigned the right to create a block on any shard. 
Each shard has a block size/gas limit of O(c), and so the total capacity of the system is O(c^2).\n\nMost users of the sharding system will run both (i) either a full (O(c) resource requirements) or light (O(log(c)) resource requirements) node on the main chain, and (ii) a \"shard client\" which talks to the main chain node via RPC (this client is assumed to be trusted because it's also running on the user's computer) and which can also be used as a light client for any shard, as a full client for any specific shard (the user would have to specify that they are \"watching\" a specific shard) or as a validator node. In all cases, the storage and computation requirements for a shard client will also not exceed O(c) (unless the user chooses to specify that they are watching _every_ shard; block explorers and large exchanges may want to do this).\n\nIn this document, the term `Collation` is used to differentiate from `Block` because (i) they are different RLP objects: transactions are level 0 objects, collations are level 1 objects that package transactions, and blocks are level 2 objects that package collation (headers); (ii) it’s clearer in the context of sharding. Basically, `Collation` must consist of `CollationHeader` and `TransactionList`; `Witness` and the detailed format of `Collation` will be defined in the **Stateless clients** section. 
`Collator` is the collation proposer sampled by the `getEligibleProposer` function of the **Validator Manager Contract** in the main chain; the mechanism will be introduced in the following sections.\n\n| Main Chain                                 | Shard Chain            |\n|--------------------------------------------|------------------------|\n| Block                                      | Collation              |\n| BlockHeader                                | CollationHeader        |\n| Block Proposer (or `Miner` in PoW chain)   | Collator               |\n\n## Quadratic sharding\n\n### Constants\n\n* `LOOKAHEAD_LENGTH`: 4\n* `PERIOD_LENGTH`: 5\n* `COLLATION_GASLIMIT`: 10,000,000 gas\n* `SHARD_COUNT`: 100\n* `SIG_GASLIMIT`: 40000 gas\n* `COLLATOR_REWARD`: 0.001 ETH\n\n### Validator Manager Contract (VMC)\n\nWe assume that at address `VALIDATOR_MANAGER_ADDRESS` (on the existing \"main shard\") there exists the VMC, which supports the following functions:\n\n-   `deposit() returns uint256`: adds a validator to the validator set, with the validator's size being the `msg.value` (i.e., the amount of ETH deposited) in the function call. This function returns the validator index.\n-   `withdraw(uint256 validator_index) returns bool`: verifies that `msg.sender == validators[validator_index].addr`; if so, it removes the validator from the validator set and refunds the deposited ETH.\n-   `get_eligible_proposer(uint256 shard_id, uint256 period) returns address`: uses a block hash as a seed to pseudorandomly select a signer from the validator set. The chance of being selected should be proportional to the validator's deposit. 
The function should be able to return a value for the current period or any future period up to `LOOKAHEAD_LENGTH` periods ahead.\n-   `add_header(uint256 shard_id, uint256 expected_period_number, bytes32 period_start_prevhash, bytes32 parent_hash, bytes32 transaction_root, address coinbase, bytes32 state_root, bytes32 receipt_root, uint256 number) returns bool`: attempts to process a collation header, returns True on success, reverts on failure.\n-   `get_shard_head(uint256 shard_id) returns bytes32`: returns the header hash that is the head of a given shard as perceived by the manager contract.\n\nThere is also one log type:\n\n-   `CollationAdded(indexed uint256 shard_id, bytes collation_header_bytes, bool is_new_head, uint256 score)`\n\nwhere `collation_header_bytes` can be constructed in Vyper by\n\n```python\n    collation_header_bytes = concat(\n        as_bytes32(shard_id),\n        as_bytes32(expected_period_number),\n        period_start_prevhash,\n        parent_hash,\n        transaction_root,\n        as_bytes32(collation_coinbase),\n        state_root,\n        receipt_root,\n        as_bytes32(collation_number),\n    )\n```\n\nNote: `coinbase` and `number` are renamed to `collation_coinbase` and `collation_number`, because they are reserved keywords in Vyper.\n\n### Collation header\n\nWe first define a \"collation header\" as an RLP list with the following values:\n\n    [\n        shard_id: uint256,\n        expected_period_number: uint256,\n        period_start_prevhash: bytes32,\n        parent_hash: bytes32,\n        transaction_root: bytes32,\n        coinbase: address,\n        state_root: bytes32,\n        receipt_root: bytes32,\n        number: uint256,\n    ]\n\nWhere:\n\n-   `shard_id` is the shard ID of the shard;\n-   `expected_period_number` is the period number in which this collation expects to be included; this is calculated as `period_number = floor(block.number / PERIOD_LENGTH)`;\n-   `period_start_prevhash` is the block hash 
of block `PERIOD_LENGTH * expected_period_number - 1` (i.e., it is the hash of the last block before the expected period starts). Opcodes in the shard that refer to block data (e.g. NUMBER and DIFFICULTY) will refer to the data of this block, with the exception of COINBASE, which will refer to the shard coinbase;\n-   `parent_hash` is the hash of the parent collation;\n-   `transaction_root` is the root hash of the trie holding the transactions included in this collation;\n-   `state_root` is the new state root of the shard after this collation;\n-   `receipt_root` is the root hash of the receipt trie;\n-   `number` is the collation number, which also serves as the score for the fork choice rule.\n\nA **collation header** is valid if calling `add_header(shard_id, expected_period_number, period_start_prevhash, parent_hash, transaction_root, coinbase, state_root, receipt_root, number)` returns true. The validator manager contract should do this if:\n\n-   the `shard_id` is at least 0, and less than `SHARD_COUNT`;\n-   the `expected_period_number` equals the actual current period number (i.e., `floor(block.number / PERIOD_LENGTH)`);\n-   a collation with the hash `parent_hash` for the same shard has already been accepted;\n-   a collation for the same shard has not yet been submitted during the current period;\n-   the address of the sender of `add_header` is equal to the address returned by `get_eligible_proposer(shard_id, expected_period_number)`.\n\nA **collation** is valid if: (i) its collation header is valid; (ii) executing the collation on top of the `parent_hash`'s `state_root` results in the given `state_root` and `receipt_root`; and (iii) the total gas used is less than or equal to `COLLATION_GASLIMIT`.\n\n### Collation state transition function\n\nThe state transition process for executing a collation is as follows:\n\n* execute each transaction in the tree pointed to by `transaction_root` in order; and\n* assign a reward of `COLLATOR_REWARD` to the 
coinbase.\n\n### Details of `getEligibleProposer`\n\nHere is one simple implementation in Vyper:\n\n```python\ndef getEligibleProposer(shardId: num, period: num) -> address:\n    assert period >= LOOKAHEAD_LENGTH\n    assert (period - LOOKAHEAD_LENGTH) * PERIOD_LENGTH < block.number\n    assert self.num_validators > 0\n\n    h = as_num256(\n        sha3(\n            concat(\n                blockhash((period - LOOKAHEAD_LENGTH) * PERIOD_LENGTH),\n                as_bytes32(shardId)\n            )\n        )\n    )\n    return self.validators[\n        as_num128(\n            num256_mod(\n                h,\n                as_num256(self.num_validators)\n            )\n        )\n    ].addr\n```\n\n## Stateless clients\n\nA validator is only given a few minutes' notice (precisely, `LOOKAHEAD_LENGTH * PERIOD_LENGTH` blocks worth of notice) when they are asked to create a block on a given shard. In Ethereum 1.0, creating a block requires having access to the entire state in order to validate transactions. Here, our goal is to avoid requiring validators to store the state of the entire system (as that would be an O(c^2) computational resource requirement). Instead, we allow validators to create collations knowing only the state root, pushing the responsibility onto transaction senders to provide \"witness data\" (i.e., Merkle branches), to prove the pre-state of the accounts that the transaction affects, and to provide enough information to calculate the post-state root after executing the transaction.\n\n(Note that it's theoretically possible to implement sharding in a non-stateless paradigm; however, this requires: (i) storage rent to keep storage size bounded; and (ii) validators to be assigned to create blocks in a single shard for O(c) time. 
This scheme avoids the need for these sacrifices.)\n\n### Data format\n\nWe modify the format of a transaction so that the transaction must specify an **access list** enumerating the parts of the state that it can access (we describe this more precisely later; for now consider this informally as a list of addresses). Any attempt to read or write to any state outside of a transaction's specified access list during VM execution returns an error. This prevents attacks where someone sends a transaction that spends 5 million cycles of gas on random execution, then attempts to access a random account for which the transaction sender and the collator do not have a witness, preventing the collator from including the transaction and thereby wasting the collator's time.\n\n_Outside_ of the signed body of the transaction, but packaged along with the transaction, the transaction sender must specify a \"witness\", an RLP-encoded list of Merkle tree nodes that provides the portions of the state that the transaction specifies in its access list. This allows the collator to process the transaction with only the state root. When publishing the collation, the collator also sends a witness for the entire collation.\n\n#### Transaction package format\n\n```python\n    [\n        [nonce, acct, data....],    # transaction body (see below for specification)\n        [node1, node2, node3....]   # witness\n    ]\n```\n\n#### Collation format\n\n```python\n    [\n        [shard_id, ... , sig],   # header\n        [tx1, tx2 ...],          # transaction list\n        [node1, node2, node3...] # witness\n    ]\n```\n\nSee also ethresearch thread on [The Stateless Client Concept](https://ethresear.ch/t/the-stateless-client-concept/172).\n\n### Stateless client state transition function\n\nIn general, we can describe a traditional \"stateful\" client as executing a state transition function `stf(state, tx) -> state'` (or `stf(state, block) -> state'`). 
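As a toy illustration of the stateful form (a plain balance map stands in for the real state trie here; this is purely illustrative, not the spec's data format):

```python
def stf(state, tx):
    # Toy stateful transition: the full balance map must be available up front.
    sender, to, value = tx
    assert state.get(sender, 0) >= value, "insufficient balance"
    new_state = dict(state)  # keep the function pure: copy, don't mutate
    new_state[sender] -= value
    new_state[to] = new_state.get(to, 0) + value
    return new_state
```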
In a stateless client model, nodes do not store the state. The functions `apply_transaction` and `apply_block` can be rewritten as follows:\n\n```python\napply_block(state_obj, witness, block) -> state_obj', reads, writes\n```\n\nWhere `state_obj` is a tuple containing the state root and other O(1)-sized state data (gas used, receipts, bloom filter, etc); `witness` is a witness; and `block` is the rest of the block. The returned output is:\n\n* a new `state_obj` containing the new state root and other variables;\n* the set of objects from the witness that have been read (which is useful for block creation); and\n* the set of new state objects that have been created to form the new state trie.\n\nThis allows the functions to be \"pure\", as well as only dealing with small-sized objects (as opposed to the state in existing Ethereum, which is currently [hundreds of gigabytes](https://etherscan.io/chart/chaindatasizefull)), making them convenient to use for sharding.\n\n### Client logic\n\nA client would have a config of the following form:\n\n```python\n{\n    validator_address: \"0x...\" OR null,\n    watching: [list of shard IDs],\n    ...\n}\n```\n\nIf a validator address is provided, then it checks (on the main chain) if the address is an active validator. If it is, then every time a new period on the main chain starts (i.e., when `floor(block.number / PERIOD_LENGTH)` changes), then it should call `getEligibleProposer` for all shards for period `floor(block.number / PERIOD_LENGTH) + LOOKAHEAD_LENGTH`. If it returns the validator's address for some shard `i`, then it runs the algorithm `CREATE_COLLATION(i)` (see below).\n\nFor every shard `i` in the `watching` list, every time a new collation header appears in the main chain, it downloads the full collation from the shard network, and verifies it. 
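The period-boundary eligibility check described above can be sketched as follows (`get_eligible_proposer` is passed in as a callback; the helper name and shape are illustrative, not the client's actual API):

```python
LOOKAHEAD_LENGTH = 4
PERIOD_LENGTH = 5
SHARD_COUNT = 100

def shards_to_propose(block_number, validator_address, get_eligible_proposer):
    # A new period starts whenever floor(block.number / PERIOD_LENGTH) changes,
    # i.e. at every multiple of PERIOD_LENGTH.
    if block_number % PERIOD_LENGTH != 0:
        return []
    # Check LOOKAHEAD_LENGTH periods ahead, matching the lookahead the VMC supports.
    period = block_number // PERIOD_LENGTH + LOOKAHEAD_LENGTH
    return [shard_id for shard_id in range(SHARD_COUNT)
            if get_eligible_proposer(shard_id, period) == validator_address]
```

For each shard ID returned, the client would then run `CREATE_COLLATION` on that shard.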
It locally keeps track of all valid headers (where validity is defined recursively, i.e., for a header to be valid its parent must also be valid), and accepts as the main shard chain the shard chain whose head has the highest score, and where all collations from the genesis collation to the head are valid and available. Note that this implies the reorgs of the main chain *and* reorgs of the shard chain may both influence the shard head.\n\n### Fetch candidate heads in reverse sorted order\n\nTo implement the algorithms for watching a shard, and for creating a collation, the first primitive that we need is the following algorithm for fetching candidate heads in highest-to-lowest order. First, suppose the existence of an (impure, stateful) method `getNextLog()`, which gets the most recent `CollationAdded` log in some given shard that has not yet been fetched. This would work by fetching all the logs in recent blocks backwards, starting from the head, and within each block looking in reverse order through the receipts. We define an impure method `fetch_candidate_head` as follows:\n\n```python\nunchecked_logs = []\ncurrent_checking_score = None\n\ndef fetch_candidate_head():\n    # Try to return a log that has the score that we are checking for,\n    # checking in order of oldest to most recent.\n    for i in range(len(unchecked_logs)-1, -1, -1):\n        if unchecked_logs[i].score == current_checking_score:\n            return unchecked_logs.pop(i)\n    # If no further recorded but unchecked logs exist, go to the next\n    # isNewHead = true log\n    while 1:\n        unchecked_logs.append(getNextLog())\n        if unchecked_logs[-1].isNewHead is True:\n            break\n    o = unchecked_logs.pop()\n    current_checking_score = o.score\n    return o\n```\n\nTo re-express in plain language, the idea is to scan backwards through `CollationAdded` logs (for the correct shard), and wait until you get to one where `isNewHead = True`. 
Return that log first, then return all more recent logs with the same score and with `isNewHead = False`, in order of oldest to most recent. Then go to the previous log with `isNewHead = True` (its score is guaranteed to be exactly 1 lower than that of the previous new head), then go to all more recent collations after it with that score, and so forth.\n\nThis algorithm is guaranteed to check potential head candidates in highest-to-lowest sorted order of score, with the second priority being oldest to most recent.\n\nFor example, suppose that the `CollationAdded` logs for a shard have scores as follows:\n\n    ... 10 11 12 11 13   14 15 11 12 13   14 12 13 14 15   16 17 18 19 16\n\nThen, `isNewHead` would be assigned as:\n\n    ... T  T  T  F  T    T  T  F  F  F    F  F  F  F  F    T  T  T  T  F\n\nIf we number the collations A1..A5, B1..B5, C1..C5 and D1..D5, the precise order in which they are returned is:\n\n    D4 D3 D2 D1 D5 B2 C5 B1 C1 C4 A5 B5 C3 A3 B4 C2 A2 A4 B3 A1\n\n### Watching a shard\n\nIf a client is watching a shard, it should attempt to download and verify any collations in that shard that it can (checking any given collation only if its parent has already been verified). To get the head at any time, keep calling `fetch_candidate_head()` until it returns a collation that has been verified; that collation is the head. Under normal circumstances this returns a valid collation immediately, or at most after a few tries due to latency or a small-scale attack that creates a few invalid or unavailable collations. Only in the case of a true long-running 51% attack will this algorithm degrade to O(N) time.\n\n### CREATE_COLLATION\n\nThis process has three parts. 
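Before detailing those parts, the candidate-head ordering claimed in the example above can be checked with a self-contained simulation; the `Log` tuple and the list-backed `get_next_log` iterator are stand-ins for real log objects and main-chain scanning:

```python
from collections import namedtuple

Log = namedtuple("Log", ["name", "score", "isNewHead"])

scores = [10, 11, 12, 11, 13,  14, 15, 11, 12, 13,
          14, 12, 13, 14, 15,  16, 17, 18, 19, 16]
names = [letter + str(i + 1) for letter in "ABCD" for i in range(5)]

# isNewHead is True exactly when a collation's score beats every earlier score
logs, best = [], 0
for name, score in zip(names, scores):
    logs.append(Log(name, score, score > best))
    best = max(best, score)

log_iter = iter(reversed(logs))  # getNextLog() scans newest-first

unchecked_logs = []
current_checking_score = None

def fetch_candidate_head():
    global current_checking_score
    # Return recorded-but-unchecked logs with the score being checked,
    # oldest first
    for i in range(len(unchecked_logs) - 1, -1, -1):
        if unchecked_logs[i].score == current_checking_score:
            return unchecked_logs.pop(i)
    # Otherwise scan backwards to the next isNewHead = True log
    while True:
        unchecked_logs.append(next(log_iter))
        if unchecked_logs[-1].isNewHead:
            break
    o = unchecked_logs.pop()
    current_checking_score = o.score
    return o

order = [fetch_candidate_head().name for _ in range(len(logs))]
print(" ".join(order))
# D4 D3 D2 D1 D5 B2 C5 B1 C1 C4 A5 B5 C3 A3 B4 C2 A2 A4 B3 A1
```

The simulation emits candidates in descending score order, matching the sequence listed above.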
The first part can be called `GUESS_HEAD(shard_id)`, with pseudocode as follows:\n\n```python\n# Download a single collation and check if it is valid or invalid (memoized)\nvalidity_cache = {}\ndef memoized_fetch_and_verify_collation(c):\n    if c.hash not in validity_cache:\n        validity_cache[c.hash] = fetch_and_verify_collation(c)\n    return validity_cache[c.hash]\n\n\ndef main(shard_id):\n    head = None\n    while 1:\n        # Take the next-best candidate head...\n        head = fetch_candidate_head(shard_id)\n        c = head\n        # ...and verify its chain as far back as possible\n        while 1:\n            if not memoized_fetch_and_verify_collation(c):\n                break\n            c = get_parent(c)\n```\n\n`fetch_and_verify_collation(c)` involves fetching the full data of `c` (including witnesses) from the shard network and verifying it. The above algorithm is equivalent to \"pick the longest valid chain, check validity as far as possible, and if you find it's invalid then switch to the next-highest-scoring valid chain you know about\". The algorithm should stop only when the validator runs out of time and it is time to create the collation. Every execution of `fetch_and_verify_collation` should also return a \"write set\" (see the stateless client section above). Save all of these write sets and combine them together; this is the `recent_trie_nodes_db`.\n\nWe can now define `UPDATE_WITNESS(tx, recent_trie_nodes_db)`. While running `GUESS_HEAD`, a node will have received some transactions. When it comes time to (attempt to) include a transaction in a collation, this algorithm needs to be run on the transaction first. Suppose that the transaction has an access list `[A1 ... An]` and a witness `W`. For each `Ai`, use the current state tree root and get the Merkle branch for `Ai`, using the union of `recent_trie_nodes_db` and `W` as a database. If the original `W` was correct, and the transaction was not sent before the earliest block that the client checked back to, then getting this Merkle branch will always succeed. 
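A toy illustration of this union lookup, using a fixed-depth binary Merkle tree in place of the real state trie (`build_tree`, `get_branch` and the node layout are hypothetical simplifications):

```python
import hashlib

def h(*parts):
    return hashlib.sha256(b"".join(parts)).digest()

def build_tree(leaves):
    """Toy fixed-depth binary Merkle tree; nodes stored as hash -> (left, right)."""
    db, level = {}, [h(leaf) for leaf in leaves]
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            parent = h(level[i], level[i + 1])
            db[parent] = (level[i], level[i + 1])
            nxt.append(parent)
        level = nxt
    return level[0], db

def get_branch(root, path, db):
    # Sibling hashes from root to leaf; `path` is a list of 0/1 bits
    branch, node = [], root
    for bit in path:
        left, right = db[node]
        branch.append(right if bit == 0 else left)
        node = left if bit == 0 else right
    return branch

def update_witness(path, witness_db, recent_trie_nodes_db, new_root):
    # Re-derive the branch against the new root from the union of the
    # transaction's original witness and recently written nodes
    combined = {**witness_db, **recent_trie_nodes_db}
    return get_branch(new_root, path, combined)
```

Here `witness_db` plays the role of `W`, and the write sets collected during `GUESS_HEAD` play the role of `recent_trie_nodes_db`: a branch that is stale under the new root can still be re-derived, because every node that changed since the witness was built is present in the write sets.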
After including the transaction in a collation, the \"write set\" from the state change should then also be added to the `recent_trie_nodes_db`.\n\nNext, we have `CREATE_COLLATION`. For illustration, here is full pseudocode for a possible transaction-gathering part of this method.\n\n```python\n# Sort by descending order of gasprice (sorted() makes a copy)\ntxpool = sorted(available_transactions, key=lambda tx: -tx.gasprice)\ncollation = Collation(...)\nwhile len(txpool) > 0:\n    # Remove txs that ask for too much gas\n    i = 0\n    while i < len(txpool):\n        if txpool[i].startgas > GASLIMIT - collation.gasused:\n            txpool.pop(i)\n        else:\n            i += 1\n    if len(txpool) == 0:\n        break\n    tx = copy.deepcopy(txpool[0])\n    tx.witness = UPDATE_WITNESS(tx.witness, recent_trie_nodes_db)\n    # Try to add the transaction, discard if it fails\n    success, reads, writes = add_transaction(collation, tx)\n    if success:\n        recent_trie_nodes_db = union(recent_trie_nodes_db, writes)\n    txpool.pop(0)\n```\n\nAt the end, there is an additional step: finalizing the collation (to give the collator the reward, which is `COLLATOR_REWARD` ETH). This requires asking the network for a Merkle branch for the collator's account. When the network replies with this, the post-state root after applying the reward, as well as the fees, can be calculated. 
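Stripped of witnesses and state handling, the gathering loop above reduces to a greedy selection by gasprice; a runnable sketch with plain dict transactions (the `GASLIMIT` value is arbitrary, and the whole `startgas` is assumed consumed for simplicity):

```python
GASLIMIT = 100  # arbitrary for illustration

def gather_transactions(available_transactions):
    # Sort by descending order of gasprice; sorted() already makes a copy
    txpool = sorted(available_transactions, key=lambda tx: -tx["gasprice"])
    included, gas_used = [], 0
    while txpool:
        # Remove txs that ask for more gas than remains in the collation
        txpool = [tx for tx in txpool if tx["startgas"] <= GASLIMIT - gas_used]
        if not txpool:
            break
        tx = txpool.pop(0)
        included.append(tx)
        gas_used += tx["startgas"]  # assume the whole startgas is consumed
    return included, gas_used
```

Note that the selection is greedy rather than optimal: a high-gasprice transaction that fits is always taken first, even if skipping it would leave room for a more profitable combination of smaller transactions.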
The collator can then package up the collation, which has the form `(header, txs, witness)`, where the witness is the union of the witnesses of all the transactions and the branch for the collator's account.\n\n## Protocol changes\n\n### Transaction format\n\nThe format of a transaction now becomes (note that this includes [account abstraction](https://ethresear.ch/t/tradeoffs-in-account-abstraction-proposals/263/20) and [read/write lists](https://ethresear.ch/t/account-read-write-lists/285/3)):\n\n```\n    [\n        chain_id,      # 1 on mainnet\n        shard_id,      # the shard the transaction goes onto\n        target,        # account the tx goes to\n        data,          # transaction data\n        start_gas,     # starting gas\n        gasprice,      # gasprice\n        access_list,   # access list (see below for specification)\n        code           # initcode of the target (for account creation)\n    ]\n```\n\nThe process for applying a transaction is now as follows:\n\n* Verify that the `chain_id` and `shard_id` are correct\n* Subtract `start_gas * gasprice` wei from the `target` account\n* Check if the `target` account has code. 
If not, verify that `sha3(code)[12:] == target`\n* If the target account is empty, execute a contract creation at the `target` with `code` as init code; otherwise skip this step\n* Execute a message with the remaining gas as startgas, `target` as the destination address, 0xff...ff as the sender, 0 value, and the transaction `data` as data\n* If either of the two executions fails, and <= 200000 gas has been consumed (i.e., `start_gas - remaining_gas <= 200000`), the transaction is invalid\n* Otherwise, `remaining_gas * gasprice` is refunded, and the fee paid is added to a fee counter (note: fees are NOT immediately added to the coinbase balance; instead, fees are added all at once during block finalization)\n\n### Two-layer trie redesign\n\nThe existing account model is replaced with one where there is a single-layer trie, and all account balances, code and storage are incorporated into the trie. Specifically, the mapping is:\n\n* Balance of account X: `sha3(X) ++ 0x00`\n* Code of account X: `sha3(X) ++ 0x01`\n* Storage key K of account X: `sha3(X) ++ 0x02 ++ K`\n\nSee also the ethresear.ch thread on [A two-layer account trie inside a single-layer trie](https://ethresear.ch/t/a-two-layer-account-trie-inside-a-single-layer-trie/210).\n\nAdditionally, the trie now uses a new binary trie design: https://github.com/ethereum/research/tree/master/trie_research\n\n### Access list\n\nThe access list for an account looks as follows:\n\n    [[address, prefix1, prefix2...], [address, prefix1, prefix2...], ...]\n\nThis basically means \"the transaction can access the balance and code for the given accounts, as well as any storage key, provided that at least one of the prefixes listed with the account is a prefix of the storage key\". 
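For illustration, that membership rule can be written out directly (`access_allowed` is a hypothetical helper, not part of the spec):

```python
def access_allowed(access_list, address, storage_key=None):
    """Check whether an access list permits reading the balance/code of
    `address` (storage_key=None) or one of its storage keys."""
    for entry in access_list:
        addr, prefixes = entry[0], entry[1:]
        if addr != address:
            continue
        # Balance and code are always accessible for a listed account;
        # storage additionally needs a matching prefix
        if storage_key is None or any(storage_key.startswith(p) for p in prefixes):
            return True
    return False
```

An empty prefix would act as a wildcard here, since it is a prefix of every storage key.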
One can translate it into \"prefix list form\", which is essentially a list of prefixes of the underlying storage trie (see the section above):\n\n```python\ndef to_prefix_list_form(access_list):\n    o = []\n    for obj in access_list:\n        addr, storage_prefixes = obj[0], obj[1:]\n        o.append(sha3(addr) + b'\\x00')\n        o.append(sha3(addr) + b'\\x01')\n        for prefix in storage_prefixes:\n            o.append(sha3(addr) + b'\\x02' + prefix)\n    return o\n```\n\nOne can compute the witness for a transaction by taking the transaction's access list, converting it into prefix list form, running the algorithm `get_witness_for_prefix` for each item in the prefix list form, and taking the union of these results.\n\n`get_witness_for_prefix` returns a minimal set of trie nodes that are sufficient to access any key which starts with the given prefix. See the implementation here: https://github.com/ethereum/research/blob/b0de8d352f6236c9fa2244fed871546fabb016d1/trie_research/new_bintrie.py#L250\n\nIn the EVM, any attempt to access an account outside the access list (whether by calling it, SLOAD'ing from it, or via an opcode such as `BALANCE` or `EXTCODECOPY`) causes the EVM instance that made the attempt to immediately throw an exception.\n\nSee also the ethresear.ch thread on [Account read/write lists](https://ethresear.ch/t/account-read-write-lists/285).\n\n### Gas costs\n\nTo be finalized.\n\n## Subsequent phases\n\nThis allows for a quick-and-dirty form of medium-security proof-of-stake sharding that achieves quadratic scaling through separation of concerns between block proposers and collators, and thereby increases throughput by ~100x without too many changes to the protocol or software architecture. 
This is intended to serve as the first phase in a multi-phase plan to fully roll out quadratic sharding, the latter phases of which are described below.\n\n* **Phase 2 (two-way pegging)**: see section on `USED_RECEIPT_STORE`, still to be written\n* **Phase 3, option a**: require collation headers to be added in as uncles instead of as transactions\n* **Phase 3, option b**: require collation headers to be added in an array, where item `i` in the array must be either a collation header of shard `i` or the empty string, and where the extra data must be the hash of this array (soft fork)\n* **Phase 4 (tight coupling)**: blocks are no longer valid if they point to invalid or unavailable collations. Add data availability proofs.\n"
  },
  {
    "path": "requirements-dev.txt",
    "content": "bumpversion>=0.5.3,<1\nflake8==3.5.0\nmypy==0.600\nhypothesis==3.44.26\npytest==3.6.0\npytest-asyncio==0.8.0\npytest-cov==2.5.1\npytest-logging>=0.3.0\npytest-xdist==1.22.2\ntox==3.0.0\neth-tester[py-evm]==0.1.0-beta.26\ngit+git://github.com/ethereum/vyper.git@08ba8ed7c3c84d44edda85ff28c96bd1e2d867fe\n"
  },
  {
    "path": "requirements.txt",
    "content": "cytoolz>=0.9.0,<1.0.0\neth-utils>=1.0.3,<2.0.0\nrlp>=1.0.0,<2.0.0\nweb3>=4.1.0,<5.0.0\npy-evm==0.2.0a18\neth-typing==1.0.0\n"
  },
  {
    "path": "setup.py",
    "content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nfrom setuptools import setup, find_packages\n\n\n# requirements\nINSTALL_REQUIRES = list()\nwith open('requirements.txt') as f:\n    INSTALL_REQUIRES = f.read().splitlines()\n\nsetup(\n    name='sharding',\n    # *IMPORTANT*: Don't manually change the version here. Use the 'bumpversion' utility.\n    version='0.0.2-alpha.2',\n    description='Ethereum Sharding Manager Contract',\n    url='https://github.com/ethereum/sharding',\n    packages=find_packages(\n        exclude=[\n            \"tests\",\n            \"tests.*\",\n            \"tools\",\n            \"docs\",\n        ]\n    ),\n    python_requires='>=3.5, <4',\n    py_modules=['sharding'],\n    setup_requires=['setuptools-markdown'],\n    long_description_markdown_filename='README.md',\n    include_package_data=True,\n    zip_safe=False,\n    classifiers=[\n        'Intended Audience :: Developers',\n        'Natural Language :: English',\n        'Programming Language :: Python :: 3.5',\n        'Programming Language :: Python :: 3.6',\n    ],\n    install_requires=INSTALL_REQUIRES,\n)\n"
  },
  {
    "path": "sharding/__init__.py",
    "content": "import pkg_resources\n\nfrom sharding.contracts.utils.smc_utils import (  # noqa: F401\n    get_smc_source_code,\n    get_smc_json,\n)\n\nfrom sharding.handler.log_handler import (  # noqa: F401\n    LogHandler,\n)\nfrom sharding.handler.shard_tracker import (  # noqa: F401\n    ShardTracker,\n)\nfrom sharding.handler.smc_handler import (  # noqa: F401\n    SMC,\n)\n\n\n__version__ = pkg_resources.get_distribution(\"sharding\").version\n"
  },
  {
    "path": "sharding/contracts/__init__.py",
    "content": ""
  },
  {
    "path": "sharding/contracts/sharding_manager.json",
    "content": "{\"abi\": [{\"name\": \"RegisterNotary\", \"inputs\": [{\"type\": \"int128\", \"name\": \"index_in_notary_pool\", \"indexed\": false}, {\"type\": \"address\", \"name\": \"notary\", \"indexed\": true}], \"anonymous\": false, \"type\": \"event\"}, {\"name\": \"DeregisterNotary\", \"inputs\": [{\"type\": \"int128\", \"name\": \"index_in_notary_pool\", \"indexed\": false}, {\"type\": \"address\", \"name\": \"notary\", \"indexed\": true}, {\"type\": \"int128\", \"name\": \"deregistered_period\", \"indexed\": false}], \"anonymous\": false, \"type\": \"event\"}, {\"name\": \"ReleaseNotary\", \"inputs\": [{\"type\": \"int128\", \"name\": \"index_in_notary_pool\", \"indexed\": false}, {\"type\": \"address\", \"name\": \"notary\", \"indexed\": true}], \"anonymous\": false, \"type\": \"event\"}, {\"name\": \"AddHeader\", \"inputs\": [{\"type\": \"int128\", \"name\": \"period\", \"indexed\": false}, {\"type\": \"int128\", \"name\": \"shard_id\", \"indexed\": true}, {\"type\": \"bytes32\", \"name\": \"chunk_root\", \"indexed\": false}], \"anonymous\": false, \"type\": \"event\"}, {\"name\": \"SubmitVote\", \"inputs\": [{\"type\": \"int128\", \"name\": \"period\", \"indexed\": false}, {\"type\": \"int128\", \"name\": \"shard_id\", \"indexed\": true}, {\"type\": \"bytes32\", \"name\": \"chunk_root\", \"indexed\": false}, {\"type\": \"address\", \"name\": \"notary\", \"indexed\": false}], \"anonymous\": false, \"type\": \"event\"}, {\"name\": \"__init__\", \"outputs\": [], \"inputs\": [{\"type\": \"int128\", \"name\": \"_SHARD_COUNT\"}, {\"type\": \"int128\", \"name\": \"_PERIOD_LENGTH\"}, {\"type\": \"int128\", \"name\": \"_LOOKAHEAD_LENGTH\"}, {\"type\": \"int128\", \"name\": \"_COMMITTEE_SIZE\"}, {\"type\": \"int128\", \"name\": \"_QUORUM_SIZE\"}, {\"type\": \"int128\", \"name\": \"_NOTARY_DEPOSIT\"}, {\"type\": \"int128\", \"name\": \"_NOTARY_LOCKUP_LENGTH\"}], \"constant\": false, \"payable\": false, \"type\": \"constructor\"}, {\"name\": \"get_notary_info\", 
\"outputs\": [{\"type\": \"int128\", \"name\": \"out\"}, {\"type\": \"int128\", \"name\": \"out\"}], \"inputs\": [{\"type\": \"address\", \"name\": \"notary_address\"}], \"constant\": true, \"payable\": false, \"type\": \"function\", \"gas\": 1288}, {\"name\": \"update_notary_sample_size\", \"outputs\": [{\"type\": \"bool\", \"name\": \"out\"}], \"inputs\": [], \"constant\": false, \"payable\": false, \"type\": \"function\", \"gas\": 71573}, {\"name\": \"register_notary\", \"outputs\": [{\"type\": \"bool\", \"name\": \"out\"}], \"inputs\": [], \"constant\": false, \"payable\": true, \"type\": \"function\", \"gas\": 347585}, {\"name\": \"deregister_notary\", \"outputs\": [{\"type\": \"bool\", \"name\": \"out\"}], \"inputs\": [], \"constant\": false, \"payable\": false, \"type\": \"function\", \"gas\": 239744}, {\"name\": \"release_notary\", \"outputs\": [{\"type\": \"bool\", \"name\": \"out\"}], \"inputs\": [], \"constant\": false, \"payable\": false, \"type\": \"function\", \"gas\": 120521}, {\"name\": \"get_member_of_committee\", \"outputs\": [{\"type\": \"address\", \"name\": \"out\"}], \"inputs\": [{\"type\": \"int128\", \"name\": \"shard_id\"}, {\"type\": \"int128\", \"name\": \"index\"}], \"constant\": true, \"payable\": false, \"type\": \"function\", \"gas\": 3704}, {\"name\": \"add_header\", \"outputs\": [{\"type\": \"bool\", \"name\": \"out\"}], \"inputs\": [{\"type\": \"int128\", \"name\": \"shard_id\"}, {\"type\": \"int128\", \"name\": \"period\"}, {\"type\": \"bytes32\", \"name\": \"chunk_root\"}], \"constant\": false, \"payable\": false, \"type\": \"function\", \"gas\": 222697}, {\"name\": \"get_vote_count\", \"outputs\": [{\"type\": \"int128\", \"name\": \"out\"}], \"inputs\": [{\"type\": \"int128\", \"name\": \"shard_id\"}], \"constant\": true, \"payable\": false, \"type\": \"function\", \"gas\": 1229}, {\"name\": \"has_notary_voted\", \"outputs\": [{\"type\": \"bool\", \"name\": \"out\"}], \"inputs\": [{\"type\": \"int128\", \"name\": \"shard_id\"}, 
{\"type\": \"int128\", \"name\": \"index\"}], \"constant\": true, \"payable\": false, \"type\": \"function\", \"gas\": 1321}, {\"name\": \"submit_vote\", \"outputs\": [{\"type\": \"bool\", \"name\": \"out\"}], \"inputs\": [{\"type\": \"int128\", \"name\": \"shard_id\"}, {\"type\": \"int128\", \"name\": \"period\"}, {\"type\": \"bytes32\", \"name\": \"chunk_root\"}, {\"type\": \"int128\", \"name\": \"index\"}], \"constant\": false, \"payable\": false, \"type\": \"function\", \"gas\": 128234}, {\"name\": \"notary_pool\", \"outputs\": [{\"type\": \"address\", \"name\": \"out\"}], \"inputs\": [{\"type\": \"int128\", \"name\": \"arg0\"}], \"constant\": true, \"payable\": false, \"type\": \"function\", \"gas\": 1126}, {\"name\": \"notary_pool_len\", \"outputs\": [{\"type\": \"int128\", \"name\": \"out\"}], \"inputs\": [], \"constant\": true, \"payable\": false, \"type\": \"function\", \"gas\": 933}, {\"name\": \"empty_slots_stack\", \"outputs\": [{\"type\": \"int128\", \"name\": \"out\"}], \"inputs\": [{\"type\": \"int128\", \"name\": \"arg0\"}], \"constant\": true, \"payable\": false, \"type\": \"function\", \"gas\": 1186}, {\"name\": \"empty_slots_stack_top\", \"outputs\": [{\"type\": \"int128\", \"name\": \"out\"}], \"inputs\": [], \"constant\": true, \"payable\": false, \"type\": \"function\", \"gas\": 993}, {\"name\": \"does_notary_exist\", \"outputs\": [{\"type\": \"bool\", \"name\": \"out\"}], \"inputs\": [{\"type\": \"address\", \"name\": \"arg0\"}], \"constant\": true, \"payable\": false, \"type\": \"function\", \"gas\": 1195}, {\"name\": \"current_period_notary_sample_size\", \"outputs\": [{\"type\": \"int128\", \"name\": \"out\"}], \"inputs\": [], \"constant\": true, \"payable\": false, \"type\": \"function\", \"gas\": 1053}, {\"name\": \"next_period_notary_sample_size\", \"outputs\": [{\"type\": \"int128\", \"name\": \"out\"}], \"inputs\": [], \"constant\": true, \"payable\": false, \"type\": \"function\", \"gas\": 1083}, {\"name\": 
\"notary_sample_size_updated_period\", \"outputs\": [{\"type\": \"int128\", \"name\": \"out\"}], \"inputs\": [], \"constant\": true, \"payable\": false, \"type\": \"function\", \"gas\": 1113}, {\"name\": \"collation_records__chunk_root\", \"outputs\": [{\"type\": \"bytes32\", \"name\": \"out\"}], \"inputs\": [{\"type\": \"int128\", \"name\": \"arg0\"}, {\"type\": \"int128\", \"name\": \"arg1\"}], \"constant\": true, \"payable\": false, \"type\": \"function\", \"gas\": 1649}, {\"name\": \"collation_records__proposer\", \"outputs\": [{\"type\": \"address\", \"name\": \"out\"}], \"inputs\": [{\"type\": \"int128\", \"name\": \"arg0\"}, {\"type\": \"int128\", \"name\": \"arg1\"}], \"constant\": true, \"payable\": false, \"type\": \"function\", \"gas\": 1685}, {\"name\": \"collation_records__is_elected\", \"outputs\": [{\"type\": \"bool\", \"name\": \"out\"}], \"inputs\": [{\"type\": \"int128\", \"name\": \"arg0\"}, {\"type\": \"int128\", \"name\": \"arg1\"}], \"constant\": true, \"payable\": false, \"type\": \"function\", \"gas\": 1715}, {\"name\": \"records_updated_period\", \"outputs\": [{\"type\": \"int128\", \"name\": \"out\"}], \"inputs\": [{\"type\": \"int128\", \"name\": \"arg0\"}], \"constant\": true, \"payable\": false, \"type\": \"function\", \"gas\": 1456}, {\"name\": \"head_collation_period\", \"outputs\": [{\"type\": \"int128\", \"name\": \"out\"}], \"inputs\": [{\"type\": \"int128\", \"name\": \"arg0\"}], \"constant\": true, \"payable\": false, \"type\": \"function\", \"gas\": 1486}, {\"name\": \"current_vote\", \"outputs\": [{\"type\": \"bytes32\", \"name\": \"out\"}], \"inputs\": [{\"type\": \"int128\", \"name\": \"arg0\"}], \"constant\": true, \"payable\": false, \"type\": \"function\", \"gas\": 1516}], \"bytecode\": 
\"0x600035601c52740100000000000000000000000000000000000000006020526f7fffffffffffffffffffffffffffffff6040527fffffffffffffffffffffffffffffffff8000000000000000000000000000000060605274012a05f1fffffffffffffffffffffffffdabf41c006080527ffffffffffffffffffffffffed5fa0e000000000000000000000000000000000060a05260e0611c526101403934156100a757600080fd5b6060516020611c5260c03960c051806040519013156100c557600080fd5b80919012156100d357600080fd5b5060605160206020611c520160c03960c051806040519013156100f557600080fd5b809190121561010357600080fd5b5060605160206040611c520160c03960c0518060405190131561012557600080fd5b809190121561013357600080fd5b5060605160206060611c520160c03960c0518060405190131561015557600080fd5b809190121561016357600080fd5b5060605160206080611c520160c03960c0518060405190131561018557600080fd5b809190121561019357600080fd5b50606051602060a0611c520160c03960c051806040519013156101b557600080fd5b80919012156101c357600080fd5b50606051602060c0611c520160c03960c051806040519013156101e557600080fd5b80919012156101f357600080fd5b5061014051600d5561016051600e5561018051600f556101a0516010556101c0516011556101e05160125561020051601355611c3a56600035601c52740100000000000000000000000000000000000000006020526f7fffffffffffffffffffffffffffffff6040527fffffffffffffffffffffffffffffffff8000000000000000000000000000000060605274012a05f1fffffffffffffffffffffffffdabf41c006080527ffffffffffffffffffffffffed5fa0e000000000000000000000000000000000060a052634343d1b860005114156100c85734156100ac57600080fd5b3033146100b857600080fd5b60006003541460005260206000f3005b6384eb85c3600051141561015c57602060046101403734156100e957600080fd5b3033146100f557600080fd5b6060516004358060405190131561010b57600080fd5b809190121561011957600080fd5b5061014051600260c05260035460e052604060c02055600360605160018254018060405190131561014957600080fd5b809190121561015757600080fd5b815550005b6314de97d2600051141561021b57341561017557600080fd5b30331461018157600080fd5b60206101a06004634343d1b86101405261015c6000305af16101a257600080fd5b6101a051156101d5577fffffffffffffffffffffffffffffff
ffffffffffffffffffffffffffffffffff60005260206000f35b60036060516001825403806040519013156101ef57600080fd5b80919012156101fd57600080fd5b815550600260c05260035460e052604060c0205460005260206000f3005b631b16485060005114156102a3576020600461014037341561023c57600080fd5b600435602051811061024d57600080fd5b506040610160526101806001600460c0526101405160e052604060c02060c052602060c020015481526002600460c0526101405160e052604060c02060c052602060c020015481602001525061016051610180f3005b636ea86a1f60005114156103ba5734156102bc57600080fd5b600060a051600e54806102ce57600080fd5b6402540be400430205806080519013156102e757600080fd5b80919012156102f557600080fd5b1215610345576402540be4006402540be3ff60a051600e548061031757600080fd5b6402540be4004302058060805190131561033057600080fd5b809190121561033e57600080fd5b0305610384565b6402540be40060a051600e548061035b57600080fd5b6402540be4004302058060805190131561037457600080fd5b809190121561038257600080fd5b055b61014052610140516008541215156103a157600060005260206000f35b60075460065561014051600855600160005260206000f3005b639d34be88600051141561055b576012543412156103d757600080fd5b600560c0523360e052604060c02054156103f057600080fd5b600060006004636ea86a1f6101405261015c6000305af161041057600080fd5b6001546101a05260206102206004634343d1b86101c0526101dc6000305af161043857600080fd5b61022051151561046c5760206102a060046314de97d26102405261025c6000305af161046357600080fd5b6102a0516101a0525b33600060c0526101a05160e052604060c02055600160605160018254018060405190131561049957600080fd5b80919012156104a757600080fd5b8155506007546101a0511215156104e45760605160016101a05101806040519013156104d257600080fd5b80919012156104e057600080fd5b6007555b600460c0523360e052604060c02060c052602060c020348155600060018201556101a0516002820155506001600560c0523360e052604060c020556101a0516102c052337f42cc700f5b78a74c6520ec5341d7c49eeaa8f89015e714b4d7207c947c2d19ec60206102c0a2600160005260206000f3005b63664f158e600051141561077057341561057457600080fd5b6001600560c0523360e052604060c020541461058f57600080fd5b600060006004636ea86a1f6101405261015c6000
305af16105af57600080fd5b6002600460c0523360e052604060c02060c052602060c02001546101a0526000600060246384eb85c36101c0526101a0516101e0526101dc6000305af16105f557600080fd5b6000600060c0526101a05160e052604060c02055600160605160018254038060405190131561062357600080fd5b809190121561063157600080fd5b815550600060a051600e548061064657600080fd5b6402540be4004302058060805190131561065f57600080fd5b809190121561066d57600080fd5b12156106bd576402540be4006402540be3ff60a051600e548061068f57600080fd5b6402540be400430205806080519013156106a857600080fd5b80919012156106b657600080fd5b03056106fc565b6402540be40060a051600e54806106d357600080fd5b6402540be400430205806080519013156106ec57600080fd5b80919012156106fa57600080fd5b055b6001600460c0523360e052604060c02060c052602060c02001556101a051610240526001600460c0523360e052604060c02060c052602060c020015461026052337fa528ff03c83165bca6de116822fb727543effc08e4e22a2447925ffe5e1364626040610240a2600160005260206000f3005b6358821dd760005114156109a457341561078957600080fd5b6001600560c0523360e052604060c02054146107a457600080fd5b60006001600460c0523360e052604060c02060c052602060c020015414156107cb57600080fd5b6060516013546001600460c0523360e052604060c02060c052602060c020015401806040519013156107fc57600080fd5b809190121561080a57600080fd5b600060a051600e548061081c57600080fd5b6402540be4004302058060805190131561083557600080fd5b809190121561084357600080fd5b1215610893576402540be4006402540be3ff60a051600e548061086557600080fd5b6402540be4004302058060805190131561087e57600080fd5b809190121561088c57600080fd5b03056108d2565b6402540be40060a051600e54806108a957600080fd5b6402540be400430205806080519013156108c257600080fd5b80919012156108d057600080fd5b055b136108dc57600080fd5b6002600460c0523360e052604060c02060c052602060c020015461014052600460c0523360e052604060c02060c052602060c0205461016052600460c0523360e052604060c02060c052602060c020600081556000600182015560006002820155506000600560c0523360e052604060c02055600060006000600061016051336000f161096857600080fd5b6101405161018052337f2443ae687d261a634cadc8eba71424fe46a8663d8c30011d2b
ebca3a4c999c906020610180a2600160005260206000f3005b6315de738d6000511415610c5057604060046101403734156109c557600080fd5b606051600435806040519013156109db57600080fd5b80919012156109e957600080fd5b5060605160243580604051901315610a0057600080fd5b8091901215610a0e57600080fd5b50600d546101405112600061014051121516610a2957600080fd5b600060a051600e5480610a3b57600080fd5b6402540be40043020580608051901315610a5457600080fd5b8091901215610a6257600080fd5b1215610ab2576402540be4006402540be3ff60a051600e5480610a8457600080fd5b6402540be40043020580608051901315610a9d57600080fd5b8091901215610aab57600080fd5b0305610af1565b6402540be40060a051600e5480610ac857600080fd5b6402540be40043020580608051901315610ae157600080fd5b8091901215610aef57600080fd5b055b61018052610180516008541215610b0e576007546101a052610b24565b610180516008541415610b23576006546101a0525b5b6060516001606051600e54610180510280604051901315610b4457600080fd5b8091901215610b5257600080fd5b0380604051901315610b6357600080fd5b8091901215610b7157600080fd5b6101c0526060516101a0516000811215610b8a57600080fd5b610b9357600080fd5b6101a0516000811215610ba557600080fd5b60006101c0516101004303811215610bbc57600080fd5b438110610bc857600080fd5b406020826102000101526020810190506101405160208261020001015260208101905061016051602082610200010152602081019050806102005261020090508051602082012090500680604051901315610c2257600080fd5b8091901215610c3057600080fd5b6101e052600060c0526101e05160e052604060c0205460005260206000f3005b63bbeff1ce6000511415610ea05760606004610140373415610c7157600080fd5b60605160043580604051901315610c8757600080fd5b8091901215610c9557600080fd5b5060605160243580604051901315610cac57600080fd5b8091901215610cba57600080fd5b50600d546101405112600061014051121516610cd557600080fd5b600060a051600e5480610ce757600080fd5b6402540be40043020580608051901315610d0057600080fd5b8091901215610d0e57600080fd5b1215610d5e576402540be4006402540be3ff60a051600e5480610d3057600080fd5b6402540be40043020580608051901315610d4957600080fd5b8091901215610d5757600080fd5b0305610d9d565b6402540be40060a051600e5480610d7457600080fd
5b6402540be40043020580608051901315610d8d57600080fd5b8091901215610d9b57600080fd5b055b6101a052610160516101a05114610db357600080fd5b61016051600a60c0526101405160e052604060c0205412610dd357600080fd5b600060006004636ea86a1f6101c0526101dc6000305af1610df357600080fd5b600960c0526101405160e052604060c02060c0526101605160e052604060c02060c052602060c02061018051815560006001820155336002820155506101a051600a60c0526101405160e052604060c020556000600c60c0526101405160e052604060c0205561016051610220526101805161024052610140517f24a51436697045b93a79a2bda900b05055f1e1e91b021b4c2fb6f67cbb0b2e956040610220a2600160005260206000f3005b63f86fda376000511415610f3b5760206004610140373415610ec157600080fd5b60605160043580604051901315610ed757600080fd5b8091901215610ee557600080fd5b50600c60c0526101405160e052604060c0205461016052606051610100610f0b57600080fd5b610100610160510680604051901315610f2357600080fd5b8091901215610f3157600080fd5b60005260206000f3005b638f8cf17560005114156110195760406004610140373415610f5c57600080fd5b60605160043580604051901315610f7257600080fd5b8091901215610f8057600080fd5b5060605160243580604051901315610f9757600080fd5b8091901215610fa557600080fd5b50600c60c0526101405160e052604060c020546101805260016060516101605160ff0380604051901315610fd857600080fd5b8091901215610fe657600080fd5b6000811215610ffd578060000360020a8204611004565b8060020a82025b905090506101805116151560005260206000f3005b63e91737d16000511415611184576040600461014037341561103a57600080fd5b30331461104657600080fd5b6060516004358060405190131561105c57600080fd5b809190121561106a57600080fd5b506060516024358060405190131561108157600080fd5b809190121561108f57600080fd5b50600c60c0526101405160e052604060c020546101805260016060516101605160ff03806040519013156110c257600080fd5b80919012156110d057600080fd5b60008112156110e7578060000360020a82046110ee565b8060020a82025b905090506101a0526020610260602463f86fda376101e05261014051610200526101fc6000305af161111f57600080fd5b610260516101c0526101a05161018051176101805260ff6101c0511215611162576101805160016101805101101561115657600080fd5b6001610180
5101610180525b61018051600c60c0526101405160e052604060c02055600160005260206000f3005b6371742fa5600051141561153857608060046101403734156111a557600080fd5b606051600435806040519013156111bb57600080fd5b80919012156111c957600080fd5b50606051602435806040519013156111e057600080fd5b80919012156111ee57600080fd5b506060516064358060405190131561120557600080fd5b809190121561121357600080fd5b50600d54610140511260006101405112151661122e57600080fd5b600060a051600e548061124057600080fd5b6402540be4004302058060805190131561125957600080fd5b809190121561126757600080fd5b12156112b7576402540be4006402540be3ff60a051600e548061128957600080fd5b6402540be400430205806080519013156112a257600080fd5b80919012156112b057600080fd5b03056112f6565b6402540be40060a051600e54806112cd57600080fd5b6402540be400430205806080519013156112e657600080fd5b80919012156112f457600080fd5b055b6101c052610160516101c0511461130c57600080fd5b6010546101a0511260006101a05112151661132657600080fd5b33602061028060446315de738d6101e05261014051610200526101a051610220526101fc6000305af161135857600080fd5b610280511461136657600080fd5b61016051600a60c0526101405160e052604060c020541461138657600080fd5b61018051600960c0526101405160e052604060c02060c0526101605160e052604060c02060c052602060c02054146113bd57600080fd5b60206103406044638f8cf1756102a052610140516102c0526101a0516102e0526102bc6000305af16113ee57600080fd5b61034051156113fc57600080fd5b6020610400604463e91737d16103605261014051610380526101a0516103a05261037c6000305af161142d57600080fd5b6104005161143a57600080fd5b60206104c0602463f86fda3761044052610140516104605261045c6000305af161146357600080fd5b6104c051610420526001600960c0526101405160e052604060c02060c0526101605160e052604060c02060c052602060c02001541560115461042051121516156114ec5761016051600b60c0526101405160e052604060c0205560016001600960c0526101405160e052604060c02060c0526101605160e052604060c02060c052602060c02001555b610160516104e05261018051610500523361052052610140517f30070ae8079c39b04eac5372c1e108238a47bc888ecab315a9f53171d5e4c30160606104e0a2600160005260206000f3005b632901077a600051141561
159a576020600461014037341561155957600080fd5b6060516004358060405190131561156f57600080fd5b809190121561157d57600080fd5b50600060c0526101405160e052604060c0205460005260206000f3005b63cdd8d52c60005114156115c05734156115b357600080fd5b60015460005260206000f3005b634b443aa4600051141561162257602060046101403734156115e157600080fd5b606051600435806040519013156115f757600080fd5b809190121561160557600080fd5b50600260c0526101405160e052604060c0205460005260206000f3005b631824181c600051141561164857341561163b57600080fd5b60035460005260206000f3005b6377ff3abe6000511415611697576020600461014037341561166957600080fd5b600435602051811061167a57600080fd5b50600560c0526101405160e052604060c0205460005260206000f3005b63c069707d60005114156116bd5734156116b057600080fd5b60065460005260206000f3005b63ab580c6a60005114156116e35734156116d657600080fd5b60075460005260206000f3005b6394250c0560005114156117095734156116fc57600080fd5b60085460005260206000f3005b63aa59419360005114156117a7576040600461014037341561172a57600080fd5b6060516004358060405190131561174057600080fd5b809190121561174e57600080fd5b506060516024358060405190131561176557600080fd5b809190121561177357600080fd5b50600960c0526101405160e052604060c02060c0526101605160e052604060c02060c052602060c0205460005260206000f3005b63f3c687dd600051141561184857604060046101403734156117c857600080fd5b606051600435806040519013156117de57600080fd5b80919012156117ec57600080fd5b506060516024358060405190131561180357600080fd5b809190121561181157600080fd5b506002600960c0526101405160e052604060c02060c0526101605160e052604060c02060c052602060c020015460005260206000f3005b63fe21918f60005114156118e9576040600461014037341561186957600080fd5b6060516004358060405190131561187f57600080fd5b809190121561188d57600080fd5b50606051602435806040519013156118a457600080fd5b80919012156118b257600080fd5b506001600960c0526101405160e052604060c02060c0526101605160e052604060c02060c052602060c020015460005260206000f3005b63e3ad6147600051141561194b576020600461014037341561190a57600080fd5b6060516004358060405190131561192057600080fd5b809190121561192e576000
80fd5b50600a60c0526101405160e052604060c0205460005260206000f3005b6347ecf00d60005114156119ad576020600461014037341561196c57600080fd5b6060516004358060405190131561198257600080fd5b809190121561199057600080fd5b50600b60c0526101405160e052604060c0205460005260206000f3005b634830ad8f6000511415611a0f57602060046101403734156119ce57600080fd5b606051600435806040519013156119e457600080fd5b80919012156119f257600080fd5b50600c60c0526101405160e052604060c0205460005260206000f3005b5b61022a611c3a0361022a60003961022a611c3a036000f3\"}"
  },
  {
    "path": "sharding/contracts/sharding_manager.v.py",
    "content": "# NOTE: Some variables are set as public variables for testing. They should be reset\n# to private variables in an official deployment of the contract. \n\n#\n# Events\n#\n\nRegisterNotary: event({index_in_notary_pool: int128, notary: indexed(address)})\nDeregisterNotary: event({index_in_notary_pool: int128, notary: indexed(address), deregistered_period: int128})\nReleaseNotary: event({index_in_notary_pool: int128, notary: indexed(address)})\nAddHeader: event({period: int128, shard_id: indexed(int128), chunk_root: bytes32})\nSubmitVote: event({period: int128, shard_id: indexed(int128), chunk_root: bytes32, notary: address})\n\n\n#\n# State Variables\n#\n\n# Notary pool\n# - notary_pool: array of active notary addresses\n# - notary_pool_len: size of the notary pool\n# - empty_slots_stack: stack of empty notary slot indices\n# - empty_slots_stack_top: top index of the stack\nnotary_pool: public(address[int128])\nnotary_pool_len: public(int128)\nempty_slots_stack: public(int128[int128])\nempty_slots_stack_top: public(int128)\n\n# Notary registry\n# - deregistered: the period when the notary deregister. It defaults to 0 for not yet deregistered notarys\n# - pool_index: indicates notary's index in the notary pool\n# - deposit: notary's deposit value\nnotary_registry: {\n    deregistered: int128,\n    pool_index: int128,\n    deposit: wei_value\n}[address]\n# - does_notary_exist: returns true if notary's record exist in notary registry\ndoes_notary_exist: public(bool[address])\n\n# Notary sampling info\n# In order to keep sample size unchanged through out entire period, we keep track of pool size change\n# resulted from notary regitration/deregistration in current period and apply the change until next period. 
\n# - current_period_notary_sample_size: \n# - next_period_notary_sample_size: \n# - notary_sample_size_updated_period: latest period when current_period_notary_sample_size is updated\ncurrent_period_notary_sample_size: public(int128)\nnext_period_notary_sample_size: public(int128)\nnotary_sample_size_updated_period: public(int128)\n\n# Collation\n# - collation_records: the collation records that have been appended by the proposer.\n# Mapping [period][shard_id] to chunk_root and proposer. is_elected is used to indicate if\n# this collation has received enough votes.\n# - records_updated_period: the latest period in which new collation header has been\n# submitted for the given shard.\n# - head_collation_period: period number of the head collation in the given shard, e.g., if\n# a collation which is added in period P in shard 3 receives enough votes, then\n# head_collation_period[3] is set to P.\ncollation_records: public({\n    chunk_root: bytes32,\n    proposer: address,\n    is_elected: bool\n}[int128][int128])\nrecords_updated_period: public(int128[int128])\nhead_collation_period: public(int128[int128])\n\n# Notarization\n# - current_vote: vote count of collation in current period in each shard.\n# First 31 bytes: bitfield of which notary has voted and which has not. 
First bit\n# represents notary's vote(notary with index 0 in get_committee_member) and second\n# bit represents next notary's vote(notary with index 1) and so on.\ncurrent_vote: public(bytes32[int128])\n\n\n#\n# Configuration Parameters\n# \n\n# The total number of shards within a network.\n# Provisionally SHARD_COUNT := 100 for the phase 1 testnet.\nSHARD_COUNT: int128\n\n# The period of time, denominated in main chain block times, during which\n# a collation tree can be extended by one collation.\n# Provisionally PERIOD_LENGTH := 5, approximately 75 seconds.\nPERIOD_LENGTH: int128\n\n# The lookahead time, denominated in periods, for eligible collators to\n# perform windback and select proposals.\n# Provisionally LOOKAHEAD_LENGTH := 4, approximately 5 minutes.\nLOOKAHEAD_LENGTH: int128\n\n# The number of notaries to select from notary pool for each shard in each period.\nCOMMITTEE_SIZE: int128\n\n# The threshold(number of notaries in committee) for a proposal to be deem accepted\nQUORUM_SIZE: int128\n\n# The fixed-size deposit, denominated in ETH, required for registration.\n# Provisionally COLLATOR_DEPOSIT := 1000 and PROPOSER_DEPOSIT := 1.\nNOTARY_DEPOSIT: wei_value\n\n# The amount of time, denominated in periods, a deposit is locked up from the\n# time of deregistration.\n# Provisionally COLLATOR_LOCKUP_LENGTH := 16128, approximately two weeks, and\n# PROPOSER_LOCKUP_LENGTH := 48, approximately one hour.\nNOTARY_LOCKUP_LENGTH: int128\n\n\n@public\ndef __init__(\n        _SHARD_COUNT: int128,\n        _PERIOD_LENGTH: int128,\n        _LOOKAHEAD_LENGTH: int128,\n        _COMMITTEE_SIZE: int128,\n        _QUORUM_SIZE: int128,\n        _NOTARY_DEPOSIT: wei_value,\n        _NOTARY_LOCKUP_LENGTH: int128,\n    ):\n    self.SHARD_COUNT = _SHARD_COUNT\n    self.PERIOD_LENGTH = _PERIOD_LENGTH\n    self.LOOKAHEAD_LENGTH = _LOOKAHEAD_LENGTH\n    self.COMMITTEE_SIZE = _COMMITTEE_SIZE\n    self.QUORUM_SIZE = _QUORUM_SIZE\n    self.NOTARY_DEPOSIT = _NOTARY_DEPOSIT\n    
self.NOTARY_LOCKUP_LENGTH = _NOTARY_LOCKUP_LENGTH\n\n\n# Checks if empty_slots_stack_top is empty\n@private\ndef is_empty_slots_stack_empty() -> bool:\n    return (self.empty_slots_stack_top == 0)\n\n\n# Pushes one int128 to empty_slots_stack\n@private\ndef empty_slots_stack_push(index: int128):\n    self.empty_slots_stack[self.empty_slots_stack_top] = index\n    self.empty_slots_stack_top += 1\n\n\n# Pops one int128 out of empty_slots_stack\n@private\ndef empty_slots_stack_pop() -> int128:\n    if self.is_empty_slots_stack_empty():\n        return -1\n    self.empty_slots_stack_top -= 1\n    return self.empty_slots_stack[self.empty_slots_stack_top]\n\n\n# Helper functions to get notary info in notary_registry\n@public\n@constant\ndef get_notary_info(notary_address: address) -> (int128, int128):\n    return (self.notary_registry[notary_address].deregistered, self.notary_registry[notary_address].pool_index)\n\n\n# Update notary_sample_size\n@public\ndef update_notary_sample_size() -> bool:\n    current_period: int128 = floor(block.number / self.PERIOD_LENGTH)\n    if self.notary_sample_size_updated_period >= current_period:\n        return False\n\n    self.current_period_notary_sample_size = self.next_period_notary_sample_size\n    self.notary_sample_size_updated_period = current_period\n\n    return True\n\n\n# Adds an entry to notary_registry, updates the notary pool (notary_pool, notary_pool_len, etc.),\n# locks a deposit of size NOTARY_DEPOSIT, and returns True on success.\n@public\n@payable\ndef register_notary() -> bool:\n    assert msg.value >= self.NOTARY_DEPOSIT\n    assert not self.does_notary_exist[msg.sender]\n\n    # Update notary_sample_size\n    self.update_notary_sample_size()\n\n    # Add the notary to the notary pool\n    pool_index: int128 = self.notary_pool_len\n    if not self.is_empty_slots_stack_empty():\n        pool_index = self.empty_slots_stack_pop()        \n    self.notary_pool[pool_index] = msg.sender\n    self.notary_pool_len += 1\n\n 
   # If index is larger than notary_sample_size, expand notary_sample_size in next period.\n    if pool_index >= self.next_period_notary_sample_size:\n        self.next_period_notary_sample_size = pool_index + 1\n\n    # Add the notary to the notary registry\n    self.notary_registry[msg.sender] = {\n        deregistered: 0,\n        pool_index: pool_index,\n        deposit: msg.value,\n    }\n    self.does_notary_exist[msg.sender] = True\n\n    log.RegisterNotary(pool_index, msg.sender)\n\n    return True\n\n\n# Sets the deregistered period in the notary_registry entry, updates the notary pool (notary_pool, notary_pool_len, etc.),\n# and returns True on success.\n@public\ndef deregister_notary() -> bool:\n    assert self.does_notary_exist[msg.sender] == True\n\n    # Update notary_sample_size\n    self.update_notary_sample_size()\n\n    # Delete entry in notary pool\n    index_in_notary_pool: int128 = self.notary_registry[msg.sender].pool_index \n    self.empty_slots_stack_push(index_in_notary_pool)\n    self.notary_pool[index_in_notary_pool] = None\n    self.notary_pool_len -= 1\n\n    # Set deregistered period to current period\n    self.notary_registry[msg.sender].deregistered = floor(block.number / self.PERIOD_LENGTH)\n\n    log.DeregisterNotary(index_in_notary_pool, msg.sender, self.notary_registry[msg.sender].deregistered)\n\n    return True\n\n\n# Removes an entry from notary_registry, releases the notary deposit, and returns True on success.\n@public\ndef release_notary() -> bool:\n    assert self.does_notary_exist[msg.sender] == True\n    assert self.notary_registry[msg.sender].deregistered != 0\n    assert floor(block.number / self.PERIOD_LENGTH) > self.notary_registry[msg.sender].deregistered + self.NOTARY_LOCKUP_LENGTH\n\n    pool_index: int128 = self.notary_registry[msg.sender].pool_index\n    deposit: wei_value = self.notary_registry[msg.sender].deposit\n    # Delete entry in notary registry\n    self.notary_registry[msg.sender] = {\n        
deregistered: 0,\n        pool_index: 0,\n        deposit: 0,\n    }\n    self.does_notary_exist[msg.sender] = False\n\n    send(msg.sender, deposit)\n\n    log.ReleaseNotary(pool_index, msg.sender)\n\n    return True\n\n\n# Given shard_id and index, return the chosen notary in the current period\n@public\n@constant\ndef get_member_of_committee(\n        shard_id: int128,\n        index: int128,\n    ) -> address:\n    # Check that shard_id is valid\n    assert shard_id >= 0 and shard_id < self.SHARD_COUNT\n    period: int128 = floor(block.number / self.PERIOD_LENGTH)\n\n    # Decide notary pool length based on if notary sample size is updated\n    sample_size: int128\n    if self.notary_sample_size_updated_period < period:\n        sample_size = self.next_period_notary_sample_size\n    elif self.notary_sample_size_updated_period == period:\n        sample_size = self.current_period_notary_sample_size\n\n    # Block hash used as entropy is the latest block of previous period  \n    entropy_block_number: int128 = period * self.PERIOD_LENGTH - 1\n\n    sampled_index: int128 = convert(\n        convert(\n            sha3(\n                concat(\n                    blockhash(entropy_block_number),\n                    convert(shard_id, \"bytes32\"),\n                    convert(index, \"bytes32\"),\n                )\n            ),\n            \"uint256\",\n        ) % convert(sample_size, \"uint256\"),\n        'int128',\n    )\n    return self.notary_pool[sampled_index]\n\n\n# Attempts to process a collation header, returns True on success, reverts on failure.\n@public\ndef add_header(\n        shard_id: int128,\n        period: int128,\n        chunk_root: bytes32\n    ) -> bool:\n\n    # Check that shard_id is valid\n    assert shard_id >= 0 and shard_id < self.SHARD_COUNT\n    # Check that it's current period\n    current_period: int128 = floor(block.number / self.PERIOD_LENGTH)\n    assert current_period == period\n    # Check that no header is added yet in 
this period in this shard\n    assert self.records_updated_period[shard_id] < period\n\n    # Update notary_sample_size\n    self.update_notary_sample_size()\n\n    # Add header\n    self.collation_records[shard_id][period] = {\n        chunk_root: chunk_root,\n        proposer: msg.sender,\n        is_elected: False,\n    }\n\n    # Update records_updated_period\n    self.records_updated_period[shard_id] = current_period\n\n    # Clear previous vote count\n    self.current_vote[shard_id] = None\n\n    # Emit log\n    log.AddHeader(\n        period,\n        shard_id,\n        chunk_root,\n    )\n\n    return True\n\n\n# Helper function to get the vote count\n@public\n@constant\ndef get_vote_count(shard_id: int128) -> int128:\n    current_vote_in_uint: uint256 = convert(self.current_vote[shard_id], 'uint256')\n\n    # Extract the current vote count (the last byte of current_vote)\n    return convert(\n        current_vote_in_uint % convert(\n            2**8,\n            'uint256'\n        ),\n        'int128'\n    )\n\n\n# Helper function to check whether the notary at the given index has voted\n@public\n@constant\ndef has_notary_voted(shard_id: int128, index: int128) -> bool:\n    # AND (bitwise) current_vote with a mask that has a single 1 bit at the\n    # notary's position in the bitfield to see if the notary has voted.\n    current_vote_in_uint: uint256 = convert(self.current_vote[shard_id], 'uint256')\n    # Convert the result from integer to bool\n    return not not bitwise_and(\n        current_vote_in_uint,\n        shift(convert(1, 'uint256'), 255 - index),\n    )\n\n\n@private\ndef update_vote(shard_id: int128, index: int128) -> bool:\n    current_vote_in_uint: uint256 = convert(self.current_vote[shard_id], 'uint256')\n    index_in_bitfield: uint256 = shift(convert(1, 'uint256'), 255 - index)\n    old_vote_count: int128 = self.get_vote_count(shard_id)\n\n    # Update bitfield\n    current_vote_in_uint = bitwise_or(current_vote_in_uint, index_in_bitfield)\n    # Update vote count\n    # Add an upper bound check to
prevent 1-byte vote count overflow\n    if old_vote_count < 255:\n        current_vote_in_uint = current_vote_in_uint + convert(1, 'uint256')\n    self.current_vote[shard_id] = convert(current_vote_in_uint, 'bytes32')\n\n    return True\n\n\n# Notary submit a vote\n@public\ndef submit_vote(\n        shard_id: int128,\n        period: int128,\n        chunk_root: bytes32,\n        index: int128,\n    ) -> bool:\n\n    # Check that shard_id is valid\n    assert shard_id >= 0 and shard_id < self.SHARD_COUNT\n    # Check that it's current period\n    current_period: int128 = floor(block.number / self.PERIOD_LENGTH)\n    assert current_period == period\n    # Check that index is valid\n    assert index >= 0 and index < self.COMMITTEE_SIZE\n    # Check that notary is eligible to cast a vote\n    assert self.get_member_of_committee(shard_id, index) == msg.sender\n    # Check that collation record exists and matches\n    assert self.records_updated_period[shard_id] == period\n    assert self.collation_records[shard_id][period].chunk_root == chunk_root\n    # Check that notary has not yet voted\n    assert not self.has_notary_voted(shard_id, index)\n\n    # Update bitfield and vote count\n    assert self.update_vote(shard_id, index)\n\n    # Check if we have enough vote and make update accordingly\n    current_vote_count: int128 = self.get_vote_count(shard_id)\n    if current_vote_count >= self.QUORUM_SIZE and \\\n        not self.collation_records[shard_id][period].is_elected:\n        self.head_collation_period[shard_id] = period\n        self.collation_records[shard_id][period].is_elected = True\n\n    # Emit log\n    log.SubmitVote(\n        period,\n        shard_id,\n        chunk_root,\n        msg.sender,\n    )\n\n    return True\n"
  },
  {
    "path": "sharding/contracts/utils/__init__.py",
    "content": ""
  },
  {
    "path": "sharding/contracts/utils/config.py",
    "content": "from typing import (\n    Any,\n    Dict,\n)\nfrom eth_utils import (\n    to_wei,\n)\n\nfrom evm.utils import (\n    env,\n)\n\n\ndef get_sharding_config() -> Dict[str, Any]:\n    return {\n        'SHARD_COUNT': env.get('SHARDING_SHARD_COUNT', type=int, default=100),\n        'PERIOD_LENGTH': env.get('SHARDING_PERIOD_LENGTH', type=int, default=100),\n        'LOOKAHEAD_LENGTH': env.get('SHARDING_LOOKAHEAD_LENGTH', type=int, default=4),\n        'COMMITTEE_SIZE': env.get('SHARDING_COMMITTEE_SIZE', type=int, default=135),\n        'QUORUM_SIZE': env.get('SHARDING_QUORUM_SIZE', type=int, default=90),\n        'NOTARY_DEPOSIT': env.get(\n            'SHARDING_NOTARY_DEPOSIT',\n            type=int,\n            default=to_wei('1000', 'ether'),\n        ),\n        'NOTARY_LOCKUP_LENGTH': env.get(\n            'SHARDING_NOTARY_LOCKUP_LENGTH',\n            type=int,\n            default=16128,\n        ),\n        'NOTARY_REWARD': env.get(\n            'SHARDING_NOTARY_REWARD',\n            type=int,\n            default=to_wei('0.001', 'ether'),\n        ),\n        'GAS_PRICE': env.get('SHARDING_GAS_PRICE', type=int, default=1),\n    }\n"
  },
  {
    "path": "sharding/contracts/utils/smc_utils.py",
    "content": "import json\nimport os\n\nfrom typing import (\n    Any,\n    Dict,\n)\n\n\nDIR = os.path.dirname(__file__)\n\n\ndef get_smc_source_code() -> str:\n    file_path = os.path.join(DIR, '../sharding_manager.v.py')\n    smc_source_code = open(file_path).read()\n    return smc_source_code\n\n\ndef get_smc_json() -> Dict[str, Any]:\n    file_path = os.path.join(DIR, '../sharding_manager.json')\n    smc_json_str = open(file_path).read()\n    return json.loads(smc_json_str)\n"
  },
  {
    "path": "sharding/handler/__init__.py",
    "content": ""
  },
  {
    "path": "sharding/handler/exceptions.py",
    "content": "class LogParsingError(Exception):\n    pass\n"
  },
  {
    "path": "sharding/handler/log_handler.py",
    "content": "import logging\nfrom typing import (\n    Any,\n    Dict,\n    List,\n    Union,\n)\n\nfrom evm.exceptions import BlockNotFound\n\nfrom web3 import Web3\n\nfrom eth_typing import (\n    Address,\n)\n\n\nclass LogHandler:\n\n    logger = logging.getLogger(\"sharding.handler.LogHandler\")\n\n    def __init__(self, w3: Web3, period_length: int) -> None:\n        self.w3 = w3\n        self.period_length = period_length\n\n    def get_logs(self,\n                 address: Address=None,\n                 topics: List[Union[str, None]]=None,\n                 from_block: Union[int, str]=None,\n                 to_block: Union[int, str]=None) -> List[Dict[str, Any]]:\n        filter_params = {\n            'address': address,\n            'topics': topics,\n        }  # type: Dict[str, Any]\n\n        current_block_number = self.w3.eth.blockNumber\n        if from_block is None:\n            # Search from the start of current period if from_block is not given\n            filter_params['fromBlock'] = current_block_number - \\\n                current_block_number % self.period_length\n        else:\n            if from_block > current_block_number:\n                raise BlockNotFound(\n                    \"Try to search from block number {} while current block number is {}\".format(\n                        from_block,\n                        current_block_number\n                    )\n                )\n            filter_params['fromBlock'] = from_block\n\n        if to_block is None:\n            filter_params['toBlock'] = 'latest'\n        else:\n            filter_params['toBlock'] = min(current_block_number, to_block)\n\n        return self.w3.eth.getLogs(filter_params)\n"
  },
  {
    "path": "sharding/handler/shard_tracker.py",
    "content": "from web3 import Web3\n\nfrom typing import (\n    Any,\n    Dict,\n    Generator,\n    List,\n    Optional,\n    Union,\n    Tuple,\n)\n\nfrom eth_utils import (\n    encode_hex,\n    to_list,\n    is_address,\n)\nfrom eth_typing import (\n    Address,\n)\n\nfrom sharding.contracts.utils.config import (\n    get_sharding_config,\n)\nfrom sharding.handler.log_handler import (\n    LogHandler,\n)\nfrom sharding.handler.utils.log_parser import LogParser\nfrom sharding.handler.utils.shard_tracker_utils import (\n    to_log_topic_address,\n    get_event_signature_from_abi,\n)\n\n\nclass ShardTracker:\n    \"\"\"Track emitted logs of specific shard.\n    \"\"\"\n\n    def __init__(self,\n                 w3: Web3,\n                 config: Optional[Dict[str, Any]],\n                 shard_id: int,\n                 smc_handler_address: Address) -> None:\n        if config is None:\n            self.config = get_sharding_config()\n        else:\n            self.config = config\n        self.shard_id = shard_id\n        self.log_handler = LogHandler(w3, self.config['PERIOD_LENGTH'])\n        self.smc_handler_address = smc_handler_address\n\n    def _get_logs_by_shard_id(self,\n                              event_name: str,\n                              from_block: Union[int, str]=None,\n                              to_block: Union[int, str]=None) -> List[Dict[str, Any]]:\n        \"\"\"Search logs by the shard id.\n        \"\"\"\n        return self.log_handler.get_logs(\n            address=self.smc_handler_address,\n            topics=[\n                encode_hex(get_event_signature_from_abi(event_name)),\n                encode_hex(self.shard_id.to_bytes(32, byteorder='big')),\n            ],\n            from_block=from_block,\n            to_block=to_block,\n        )\n\n    def _get_logs_by_notary(self,\n                            event_name: str,\n                            notary: Union[str, None],\n                            from_block: 
Union[int, str]=None,\n                            to_block: Union[int, str]=None) -> List[Dict[str, Any]]:\n        \"\"\"Search logs by notary address.\n\n        NOTE: The notary address provided must be padded to 32 bytes\n        and also hex-encoded. If notary address provided\n        is `None`, it will return all logs related to the event.\n        \"\"\"\n        return self.log_handler.get_logs(\n            address=self.smc_handler_address,\n            topics=[\n                encode_hex(get_event_signature_from_abi(event_name)),\n                notary,\n            ],\n            from_block=from_block,\n            to_block=to_block,\n        )\n\n    def _decide_period_block_number(self,\n                                    from_period: Union[int, None],\n                                    to_period: Union[int, None]\n                                    ) -> Tuple[Union[int, None], Union[int, None]]:\n        if from_period is None:\n            from_block = None\n        else:\n            from_block = from_period * self.config['PERIOD_LENGTH']\n\n        if to_period is None:\n            to_block = None\n        else:\n            to_block = (to_period + 1) * self.config['PERIOD_LENGTH'] - 1\n\n        return from_block, to_block\n\n    #\n    # Basic functions to get emitted logs\n    #\n    @to_list\n    def get_register_notary_logs(self,\n                                 from_period: int=None,\n                                 to_period: int=None) -> Generator[LogParser, None, None]:\n        from_block, to_block = self._decide_period_block_number(from_period, to_period)\n        logs = self._get_logs_by_notary(\n            'RegisterNotary',\n            notary=None,\n            from_block=from_block,\n            to_block=to_block,\n        )\n        for log in logs:\n            yield LogParser(event_name='RegisterNotary', log=log)\n\n    @to_list\n    def get_deregister_notary_logs(self,\n                                   from_period: 
int=None,\n                                   to_period: int=None\n                                   ) -> Generator[LogParser, None, None]:\n        from_block, to_block = self._decide_period_block_number(from_period, to_period)\n        logs = self._get_logs_by_notary(\n            'DeregisterNotary',\n            notary=None,\n            from_block=from_block,\n            to_block=to_block,\n        )\n        for log in logs:\n            yield LogParser(event_name='DeregisterNotary', log=log)\n\n    @to_list\n    def get_release_notary_logs(self,\n                                from_period: int=None,\n                                to_period: int=None\n                                ) -> Generator[LogParser, None, None]:\n        from_block, to_block = self._decide_period_block_number(from_period, to_period)\n        logs = self._get_logs_by_notary(\n            'ReleaseNotary',\n            notary=None,\n            from_block=from_block,\n            to_block=to_block,\n        )\n        for log in logs:\n            yield LogParser(event_name='ReleaseNotary', log=log)\n\n    @to_list\n    def get_add_header_logs(self,\n                            from_period: int=None,\n                            to_period: int=None\n                            ) -> Generator[LogParser, None, None]:\n        from_block, to_block = self._decide_period_block_number(from_period, to_period)\n        logs = self._get_logs_by_shard_id(\n            'AddHeader',\n            from_block=from_block,\n            to_block=to_block,\n        )\n        for log in logs:\n            yield LogParser(event_name='AddHeader', log=log)\n\n    @to_list\n    def get_submit_vote_logs(self,\n                             from_period: int=None,\n                             to_period: int=None\n                             ) -> Generator[LogParser, None, None]:\n        from_block, to_block = self._decide_period_block_number(from_period, to_period)\n        logs = 
self._get_logs_by_shard_id(\n            'SubmitVote',\n            from_block=from_block,\n            to_block=to_block,\n        )\n        for log in logs:\n            yield LogParser(event_name='SubmitVote', log=log)\n\n    #\n    # Functions for user to check the status of registration or votes\n    #\n    def is_notary_registered(self, notary: str, from_period: int=None) -> bool:\n        assert is_address(notary)\n        from_block, _ = self._decide_period_block_number(from_period, None)\n        log = self._get_logs_by_notary(\n            'RegisterNotary',\n            notary=to_log_topic_address(notary),\n            from_block=from_block,\n        )\n        return False if not log else True\n\n    def is_notary_deregistered(self, notary: str, from_period: int=None) -> bool:\n        assert is_address(notary)\n        from_block, _ = self._decide_period_block_number(from_period, None)\n        log = self._get_logs_by_notary(\n            'DeregisterNotary',\n            notary=to_log_topic_address(notary),\n            from_block=from_block,\n        )\n        return False if not log else True\n\n    def is_notary_released(self, notary: str, from_period: int=None) -> bool:\n        assert is_address(notary)\n        from_block, _ = self._decide_period_block_number(from_period, None)\n        log = self._get_logs_by_notary(\n            'ReleaseNotary',\n            notary=to_log_topic_address(notary),\n            from_block=from_block,\n        )\n        return False if not log else True\n\n    def is_new_header_added(self, period: int) -> bool:\n        # Get the header added in the specified period\n        log = self._get_logs_by_shard_id(\n            'AddHeader',\n            from_block=period * self.config['PERIOD_LENGTH'],\n            to_block=(period + 1) * self.config['PERIOD_LENGTH'] - 1,\n        )\n        return False if not log else True\n\n    def has_enough_vote(self, period: int) -> bool:\n        # Get the votes submitted in the 
specified period\n        logs = self._get_logs_by_shard_id(\n            'SubmitVote',\n            from_block=period * self.config['PERIOD_LENGTH'],\n            to_block=(period + 1) * self.config['PERIOD_LENGTH'] - 1,\n        )\n        return False if not logs else len(logs) >= self.config['QUORUM_SIZE']\n"
  },
  {
    "path": "sharding/handler/smc_handler.py",
    "content": "import logging\nfrom typing import (\n    Any,\n    Dict,\n    Iterable,\n    List,\n    Tuple,\n)\n\nfrom web3.contract import (\n    Contract,\n)\nfrom eth_utils import (\n    decode_hex,\n    to_canonical_address,\n)\n\nfrom sharding.handler.utils.smc_handler_utils import (\n    make_call_context,\n    make_transaction_context,\n)\nfrom sharding.contracts.utils.smc_utils import (\n    get_smc_json,\n)\n\nfrom eth_keys import (\n    datatypes,\n)\nfrom eth_typing import (\n    Address,\n    Hash32,\n)\n\n\nsmc_json = get_smc_json()\n\n\nclass SMC(Contract):\n\n    logger = logging.getLogger(\"sharding.SMC\")\n    abi = smc_json[\"abi\"]\n    bytecode = decode_hex(smc_json[\"bytecode\"])\n\n    default_priv_key = None  # type: datatypes.PrivateKey\n    default_sender_address = None  # type: Address\n    config = None  # type: Dict[str, Any]\n\n    _estimate_gas_dict = {\n        entry['name']: entry['gas']\n        for entry in smc_json[\"abi\"]\n        if entry['type'] == 'function'\n    }  # type: Dict[str, int]\n\n    def __init__(self,\n                 *args: Any,\n                 default_priv_key: datatypes.PrivateKey,\n                 config: Dict[str, Any],\n                 **kwargs: Any) -> None:\n        self.default_priv_key = default_priv_key\n        self.default_sender_address = self.default_priv_key.public_key.to_canonical_address()\n        self.config = config\n\n        super().__init__(*args, **kwargs)\n\n    #\n    # property\n    #\n    @property\n    def basic_call_context(self) -> Dict[str, Any]:\n        return make_call_context(\n            sender_address=self.default_sender_address,\n        )\n\n    #\n    # Public variable getter functions\n    #\n    def does_notary_exist(self, notary_address: Address) -> bool:\n        return self.functions.does_notary_exist(notary_address).call(self.basic_call_context)\n\n    def get_notary_info(self, notary_address: Address) -> Tuple[int, int]:\n        return 
self.functions.get_notary_info(notary_address).call(self.basic_call_context)\n\n    def notary_pool_len(self) -> int:\n        return self.functions.notary_pool_len().call(self.basic_call_context)\n\n    def notary_pool(self, pool_index: int) -> Address:\n        notary_address = self.functions.notary_pool(pool_index).call(self.basic_call_context)\n        return to_canonical_address(notary_address)\n\n    def empty_slots_stack_top(self) -> int:\n        return self.functions.empty_slots_stack_top().call(self.basic_call_context)\n\n    def empty_slots_stack(self, stack_index: int) -> int:\n        return self.functions.empty_slots_stack(stack_index).call(self.basic_call_context)\n\n    def current_period_notary_sample_size(self) -> int:\n        return self.functions.current_period_notary_sample_size().call(self.basic_call_context)\n\n    def next_period_notary_sample_size(self) -> int:\n        return self.functions.next_period_notary_sample_size().call(self.basic_call_context)\n\n    def notary_sample_size_updated_period(self) -> int:\n        return self.functions.notary_sample_size_updated_period().call(self.basic_call_context)\n\n    def records_updated_period(self, shard_id: int) -> int:\n        return self.functions.records_updated_period(shard_id).call(self.basic_call_context)\n\n    def head_collation_period(self, shard_id: int) -> int:\n        return self.functions.head_collation_period(shard_id).call(self.basic_call_context)\n\n    def get_member_of_committee(self, shard_id: int, index: int) -> Address:\n        notary_address = self.functions.get_member_of_committee(\n            shard_id,\n            index,\n        ).call(self.basic_call_context)\n        return to_canonical_address(notary_address)\n\n    def get_collation_chunk_root(self, shard_id: int, period: int) -> Hash32:\n        return self.functions.collation_records__chunk_root(\n            shard_id,\n            period,\n        ).call(self.basic_call_context)\n\n    def 
get_collation_proposer(self, shard_id: int, period: int) -> Address:\n        proposer_address = self.functions.collation_records__proposer(\n            shard_id,\n            period,\n        ).call(self.basic_call_context)\n        return to_canonical_address(proposer_address)\n\n    def get_collation_is_elected(self, shard_id: int, period: int) -> bool:\n        return self.functions.collation_records__is_elected(\n            shard_id,\n            period,\n        ).call(self.basic_call_context)\n\n    def current_vote(self, shard_id: int) -> bytes:\n        return self.functions.current_vote(\n            shard_id,\n        ).call(self.basic_call_context)\n\n    def get_vote_count(self, shard_id: int) -> int:\n        return self.functions.get_vote_count(\n            shard_id,\n        ).call(self.basic_call_context)\n\n    def has_notary_voted(self, shard_id: int, index: int) -> bool:\n        return self.functions.has_notary_voted(\n            shard_id,\n            index,\n        ).call(self.basic_call_context)\n\n    def _send_transaction(self,\n                          *,\n                          func_name: str,\n                          args: Iterable[Any],\n                          private_key: datatypes.PrivateKey=None,\n                          nonce: int=None,\n                          chain_id: int=None,\n                          gas: int=None,\n                          value: int=0,\n                          gas_price: int=None,\n                          data: bytes=None) -> Hash32:\n        if gas_price is None:\n            gas_price = self.config['GAS_PRICE']\n        if private_key is None:\n            private_key = self.default_priv_key\n        if nonce is None:\n            nonce = self.web3.eth.getTransactionCount(private_key.public_key.to_checksum_address())\n        build_transaction_detail = make_transaction_context(\n            nonce=nonce,\n            gas=gas,\n            chain_id=chain_id,\n            value=value,\n    
        gas_price=gas_price,\n            data=data,\n        )\n        func_instance = getattr(self.functions, func_name)\n        unsigned_transaction = func_instance(*args).buildTransaction(\n            transaction=build_transaction_detail,\n        )\n        signed_transaction_dict = self.web3.eth.account.signTransaction(\n            unsigned_transaction,\n            private_key.to_hex(),\n        )\n        tx_hash = self.web3.eth.sendRawTransaction(signed_transaction_dict['rawTransaction'])\n        return tx_hash\n\n    #\n    # Transactions\n    #\n    def register_notary(self,\n                        private_key: datatypes.PrivateKey=None,\n                        gas_price: int=None) -> Hash32:\n        gas = self._estimate_gas_dict['register_notary']\n        tx_hash = self._send_transaction(\n            func_name='register_notary',\n            args=[],\n            private_key=private_key,\n            value=self.config['NOTARY_DEPOSIT'],\n            gas=gas,\n            gas_price=gas_price,\n        )\n        return tx_hash\n\n    def deregister_notary(self,\n                          private_key: datatypes.PrivateKey=None,\n                          gas_price: int=None) -> Hash32:\n        gas = self._estimate_gas_dict['deregister_notary']\n        tx_hash = self._send_transaction(\n            func_name='deregister_notary',\n            args=[],\n            private_key=private_key,\n            gas=gas,\n            gas_price=gas_price,\n        )\n        return tx_hash\n\n    def release_notary(self,\n                       private_key: datatypes.PrivateKey=None,\n                       gas_price: int=None) -> Hash32:\n        gas = self._estimate_gas_dict['release_notary']\n        tx_hash = self._send_transaction(\n            func_name='release_notary',\n            args=[],\n            private_key=private_key,\n            gas=gas,\n            gas_price=gas_price,\n        )\n        return tx_hash\n\n    def add_header(self,\n    
               *,\n                   shard_id: int,\n                   period: int,\n                   chunk_root: Hash32,\n                   private_key: datatypes.PrivateKey=None,\n                   gas_price: int=None) -> Hash32:\n        gas = self._estimate_gas_dict['add_header']\n        tx_hash = self._send_transaction(\n            func_name='add_header',\n            args=[\n                shard_id,\n                period,\n                chunk_root,\n            ],\n            private_key=private_key,\n            gas=gas,\n            gas_price=gas_price,\n        )\n        return tx_hash\n\n    def submit_vote(self,\n                    *,\n                    shard_id: int,\n                    period: int,\n                    chunk_root: Hash32,\n                    index: int,\n                    private_key: datatypes.PrivateKey=None,\n                    gas_price: int=None) -> Hash32:\n        gas = self._estimate_gas_dict['submit_vote']\n        tx_hash = self._send_transaction(\n            func_name='submit_vote',\n            args=[\n                shard_id,\n                period,\n                chunk_root,\n                index,\n            ],\n            private_key=private_key,\n            gas=gas,\n            gas_price=gas_price,\n        )\n        return tx_hash\n"
  },
  {
    "path": "sharding/handler/utils/__init__.py",
    "content": ""
  },
  {
    "path": "sharding/handler/utils/log_parser.py",
    "content": "from typing import (\n    Any,\n    Dict,\n    List,\n    Tuple,\n    Union,\n)\n\nfrom eth_utils import (\n    to_canonical_address,\n    decode_hex,\n    big_endian_to_int,\n)\nfrom eth_typing import (\n    Address,\n)\n\nfrom sharding.contracts.utils.smc_utils import (\n    get_smc_json,\n)\nfrom sharding.handler.exceptions import (\n    LogParsingError,\n)\n\n\nclass LogParser(object):\n    def __init__(self, *, event_name: str, log: Dict[str, Any]) -> None:\n        event_abi = self._extract_event_abi(event_name=event_name)\n\n        topics = []\n        data = []\n        for item in event_abi[\"inputs\"]:\n            if item['indexed'] is True:\n                topics.append((item['name'], item['type']))\n            else:\n                data.append((item['name'], item['type']))\n\n        self._set_topic_value(topics=topics, log=log)\n        self._set_data_value(data=data, log=log)\n\n    def _extract_event_abi(self, *, event_name: str) -> Dict[str, Any]:\n        for func in get_smc_json()['abi']:\n            if func['name'] == event_name and func['type'] == 'event':\n                return func\n        raise LogParsingError(\"Cannot find event {}\".format(event_name))\n\n    def _set_topic_value(self, *, topics: List[Tuple[str, Any]], log: Dict[str, Any]) -> None:\n        if len(topics) != len(log['topics'][1:]):\n            raise LogParsingError(\n                \"Error parsing log topics, expected \"\n                \"{} topics but got {}.\".format(len(topics), len(log['topics'][1:]))\n            )\n        for (i, topic) in enumerate(topics):\n            val = self._parse_value(val_type=topic[1], val=log['topics'][i + 1])\n            setattr(self, topic[0], val)\n\n    def _set_data_value(self, *, data: List[Tuple[str, Any]], log: Dict[str, Any]) -> None:\n        data_bytes = decode_hex(log['data'])\n        if len(data) * 32 != len(data_bytes):\n            raise LogParsingError(\n                \"Error parsing log data, 
expected \"\n                \"{} data entries but got {} bytes.\".format(len(data), len(data_bytes))\n            )\n        for (i, (name, type_)) in enumerate(data):\n            val = self._parse_value(val_type=type_, val=data_bytes[i * 32: (i + 1) * 32])\n            setattr(self, name, val)\n\n    def _parse_value(self, *, val_type: str, val: bytes) -> Union[bool, Address, bytes, int]:\n        if val_type == 'bool':\n            return bool(big_endian_to_int(val))\n        elif val_type == 'address':\n            return to_canonical_address(val[-20:])\n        elif val_type == 'bytes32':\n            return val\n        elif 'int' in val_type:\n            return big_endian_to_int(val)\n        else:\n            raise LogParsingError(\n                \"Error parsing the type of given value. Expected bool/address/bytes32/int* \"\n                \"but got {}.\".format(val_type)\n            )\n"
  },
  {
    "path": "sharding/handler/utils/shard_tracker_utils.py",
    "content": "from typing import (\n    Union,\n)\n\nfrom eth_utils import (\n    event_abi_to_log_topic,\n    to_checksum_address,\n)\nfrom eth_typing import (\n    Address,\n)\n\nfrom sharding.contracts.utils.smc_utils import (\n    get_smc_json,\n)\n\n\ndef to_log_topic_address(address: Union[Address, str]) -> str:\n    return '0x' + to_checksum_address(address)[2:].rjust(64, '0')\n\n\ndef get_event_signature_from_abi(event_name: str) -> bytes:\n    for function in get_smc_json()['abi']:\n        if function['name'] == event_name and function['type'] == 'event':\n            return event_abi_to_log_topic(function)\n    raise ValueError(\"Event with name {} not found\".format(event_name))\n"
  },
  {
    "path": "sharding/handler/utils/smc_handler_utils.py",
    "content": "from typing import (\n    Any,\n    Generator,\n    Tuple,\n)\n\nfrom eth_utils import (\n    is_address,\n    to_checksum_address,\n    to_dict,\n)\nfrom eth_typing import (\n    Address,\n)\n\n\n@to_dict\ndef make_call_context(sender_address: Address,\n                      gas: int=None,\n                      value: int=None,\n                      gas_price: int=None,\n                      data: bytes=None) -> Generator[Tuple[str, Any], None, None]:\n    \"\"\"\n    Make the context for a message call.\n    \"\"\"\n    if not is_address(sender_address):\n        raise ValueError('Message call sender provided is not an address')\n    # 'from' is required in eth_tester\n    yield 'from', to_checksum_address(sender_address)\n    if gas is not None:\n        yield 'gas', gas\n    if value is not None:\n        yield 'value', value\n    if gas_price is not None:\n        yield 'gasPrice', gas_price\n    if data is not None:\n        yield 'data', data\n\n\n@to_dict\ndef make_transaction_context(nonce: int,\n                             gas: int,\n                             chain_id: int=None,\n                             value: int=None,\n                             gas_price: int=None,\n                             data: bytes=None) -> Generator[Tuple[str, Any], None, None]:\n    \"\"\"\n    Make the context for a transaction.\n    \"\"\"\n\n    if not (isinstance(nonce, int) and nonce >= 0):\n        raise ValueError('nonce should be provided as a non-negative integer')\n    if not (isinstance(gas, int) and gas >= 0):\n        raise ValueError('gas should be provided as a non-negative integer')\n    yield 'nonce', nonce\n    yield 'gas', gas\n    yield 'chainId', chain_id\n    if value is not None:\n        yield 'value', value\n    if gas_price is not None:\n        yield 'gasPrice', gas_price\n    if data is not None:\n        yield 'data', data\n"
  },
  {
    "path": "sharding/handler/utils/web3_utils.py",
    "content": "import rlp\n\nfrom evm.rlp.transactions import (\n    BaseTransaction,\n)\nfrom web3 import (\n    Web3,\n)\n\nfrom eth_utils import (\n    to_checksum_address,\n)\n\nfrom typing import (\n    List,\n    Tuple,\n)\nfrom eth_typing import (\n    Address,\n    Hash32,\n)\n\n\ndef get_code(w3: Web3, address: Address) -> bytes:\n    return w3.eth.getCode(to_checksum_address(address))\n\n\ndef get_nonce(w3: Web3, address: Address) -> int:\n    return w3.eth.getTransactionCount(to_checksum_address(address))\n\n\ndef take_snapshot(w3: Web3) -> int:\n    return w3.testing.snapshot()\n\n\ndef revert_to_snapshot(w3: Web3, snapshot_id: int) -> None:\n    w3.testing.revert(snapshot_id)\n\n\ndef mine(w3: Web3, num_blocks: int) -> None:\n    w3.testing.mine(num_blocks)\n\n\ndef send_raw_transaction(w3: Web3, raw_transaction: BaseTransaction) -> Hash32:\n    raw_transaction_bytes = rlp.encode(raw_transaction)\n    raw_transaction_hex = w3.toHex(raw_transaction_bytes)\n    transaction_hash = w3.eth.sendRawTransaction(raw_transaction_hex)\n    return transaction_hash\n\n\ndef get_recent_block_hashes(w3: Web3, history_size: int) -> Tuple[Hash32, ...]:\n    block = w3.eth.getBlock('latest')\n    recent_hashes = []\n\n    for _ in range(history_size):\n        recent_hashes.append(block['hash'])\n        # break the loop if we hit the genesis block.\n        if block['number'] == 0:\n            break\n        block = w3.eth.getBlock(block['parentHash'])\n\n    return tuple(reversed(recent_hashes))\n\n\ndef get_canonical_chain(w3: Web3,\n                        recent_block_hashes: List[Hash32],\n                        history_size: int) -> Tuple[List[Hash32], Tuple[Hash32, ...]]:\n    block = w3.eth.getBlock('latest')\n\n    new_block_hashes = []\n\n    for _ in range(history_size):\n        if block['hash'] in recent_block_hashes:\n            break\n        new_block_hashes.append(block['hash'])\n        block = w3.eth.getBlock(block['parentHash'])\n    else:\n     
   raise Exception('No common ancestor found')\n\n    first_common_ancestor_idx = recent_block_hashes.index(block['hash'])\n\n    revoked_hashes = recent_block_hashes[first_common_ancestor_idx + 1:]\n\n    # reverse it to comply with the order of `self.recent_block_hashes`\n    reversed_new_block_hashes = tuple(reversed(new_block_hashes))\n\n    return revoked_hashes, reversed_new_block_hashes\n"
  },
  {
    "path": "tests/__init__.py",
    "content": ""
  },
  {
    "path": "tests/conftest.py",
    "content": "import pytest\n\nfrom web3 import (\n    Web3,\n)\n\nfrom web3.providers.eth_tester import (\n    EthereumTesterProvider,\n)\n\nfrom eth_tester import (\n    EthereumTester,\n    PyEVMBackend,\n)\n\nfrom eth_tester.backends.pyevm.main import (\n    get_default_account_keys,\n)\nfrom sharding.handler.smc_handler import (\n    SMC as SMCFactory,\n)\nfrom sharding.handler.utils.web3_utils import (\n    get_code,\n)\nfrom tests.handler.utils.config import (\n    get_sharding_testing_config,\n)\n\n\n@pytest.fixture(scope=\"session\")\ndef smc_testing_config():\n    return get_sharding_testing_config()\n\n\n@pytest.fixture\ndef smc_handler(smc_testing_config):\n    eth_tester = EthereumTester(\n        backend=PyEVMBackend(),\n        auto_mine_transactions=False,\n    )\n    provider = EthereumTesterProvider(eth_tester)\n    w3 = Web3(provider)\n    if hasattr(w3.eth, \"enable_unaudited_features\"):\n        w3.eth.enable_unaudited_features()\n\n    private_key = get_default_account_keys()[0]\n\n    # deploy smc contract\n    SMC = w3.eth.contract(ContractFactoryClass=SMCFactory)\n    constructor_kwargs = {\n        \"_SHARD_COUNT\": smc_testing_config[\"SHARD_COUNT\"],\n        \"_PERIOD_LENGTH\": smc_testing_config[\"PERIOD_LENGTH\"],\n        \"_LOOKAHEAD_LENGTH\": smc_testing_config[\"LOOKAHEAD_LENGTH\"],\n        \"_COMMITTEE_SIZE\": smc_testing_config[\"COMMITTEE_SIZE\"],\n        \"_QUORUM_SIZE\": smc_testing_config[\"QUORUM_SIZE\"],\n        \"_NOTARY_DEPOSIT\": smc_testing_config[\"NOTARY_DEPOSIT\"],\n        \"_NOTARY_LOCKUP_LENGTH\": smc_testing_config[\"NOTARY_LOCKUP_LENGTH\"],\n    }\n    eth_tester.enable_auto_mine_transactions()\n    deployment_tx_hash = SMC.constructor(**constructor_kwargs).transact()\n    deployment_receipt = w3.eth.waitForTransactionReceipt(deployment_tx_hash, timeout=0)\n    eth_tester.disable_auto_mine_transactions()\n\n    assert get_code(w3, deployment_receipt.contractAddress) != b''\n    smc_handler = SMC(\n        
address=deployment_receipt.contractAddress,\n        default_priv_key=private_key,\n        config=smc_testing_config,\n    )\n\n    return smc_handler\n"
  },
  {
    "path": "tests/contract/__init__.py",
    "content": ""
  },
  {
    "path": "tests/contract/test_add_header.py",
    "content": "from sharding.handler.utils.web3_utils import (\n    mine,\n)\n\nfrom tests.contract.utils.common_utils import (\n    batch_register,\n    fast_forward,\n)\nfrom tests.contract.utils.notary_account import (\n    NotaryAccount,\n)\n\n\ndef test_normal_add_header(smc_handler):  # noqa: F811\n    w3 = smc_handler.web3\n\n    # Register notary 0~2 and fast forward to next period\n    batch_register(smc_handler, 0, 2)\n    fast_forward(smc_handler, 1)\n    current_period = w3.eth.blockNumber // smc_handler.config['PERIOD_LENGTH']\n    assert current_period == 1\n    # Check that collation records of shard 0 and shard 1 have not been updated before\n    assert smc_handler.records_updated_period(0) == 0\n    assert smc_handler.records_updated_period(1) == 0\n\n    CHUNK_ROOT_1_0 = b'\\x10' * 32\n    smc_handler.add_header(\n        shard_id=0,\n        period=1,\n        chunk_root=CHUNK_ROOT_1_0,\n        private_key=NotaryAccount(index=0).private_key,\n    )\n    mine(w3=w3, num_blocks=1)\n    # Check that collation record of shard 0 has been updated\n    assert smc_handler.records_updated_period(0) == 1\n    assert smc_handler.get_collation_chunk_root(shard_id=0, period=1) == CHUNK_ROOT_1_0\n\n    fast_forward(smc_handler, 1)\n    current_period = w3.eth.blockNumber // smc_handler.config['PERIOD_LENGTH']\n    assert current_period == 2\n\n    CHUNK_ROOT_2_0 = b'\\x20' * 32\n    smc_handler.add_header(\n        shard_id=0,\n        period=2,\n        chunk_root=CHUNK_ROOT_2_0,\n        private_key=NotaryAccount(index=0).private_key,\n    )\n    mine(w3=w3, num_blocks=1)\n    # Check that collation record of shard 0 has been updated\n    assert smc_handler.records_updated_period(0) == 2\n    assert smc_handler.get_collation_chunk_root(shard_id=0, period=2) == CHUNK_ROOT_2_0\n    # Check that collation record of shard 1 has never been updated\n    assert smc_handler.records_updated_period(1) == 0\n\n    CHUNK_ROOT_2_1 = b'\\x21' * 32\n    
smc_handler.add_header(\n        shard_id=1,\n        period=2,\n        chunk_root=CHUNK_ROOT_2_1,\n        private_key=NotaryAccount(index=0).private_key,\n    )\n    mine(w3=w3, num_blocks=1)\n    # Check that collation record of shard 1 has been updated\n    assert smc_handler.records_updated_period(1) == 2\n    assert smc_handler.get_collation_chunk_root(shard_id=1, period=2) == CHUNK_ROOT_2_1\n\n\ndef test_add_header_wrong_period(smc_handler):  # noqa: F811\n    w3 = smc_handler.web3\n\n    # Register notary 0~2 and fast forward to next period\n    batch_register(smc_handler, 0, 2)\n    fast_forward(smc_handler, 1)\n    current_period = w3.eth.blockNumber // smc_handler.config['PERIOD_LENGTH']\n    assert current_period == 1\n\n    BLANK_CHUNK_ROOT = b'\\x00' * 32\n    CHUNK_ROOT_1_0 = b'\\x10' * 32\n    # Attempt to add collation record with wrong period specified\n    tx_hash = smc_handler.add_header(\n        shard_id=0,\n        period=0,\n        chunk_root=CHUNK_ROOT_1_0,\n        private_key=NotaryAccount(index=0).private_key,\n    )\n    mine(w3, 1)\n    # Check that collation record of shard 0 has not been updated and the transaction consumed all gas\n    # and no logs have been emitted\n    assert smc_handler.records_updated_period(0) == 0\n    assert smc_handler.get_collation_chunk_root(shard_id=0, period=1) == BLANK_CHUNK_ROOT\n    assert len(w3.eth.getTransactionReceipt(tx_hash)['logs']) == 0\n\n    # Second attempt to add collation record with wrong period specified\n    tx_hash = smc_handler.add_header(\n        shard_id=0,\n        period=2,\n        chunk_root=CHUNK_ROOT_1_0,\n        private_key=NotaryAccount(index=0).private_key,\n    )\n    mine(w3, 1)\n    # Check that collation record of shard 0 has not been updated and the transaction consumed all gas\n    # and no logs have been emitted\n    assert smc_handler.records_updated_period(0) == 0\n    assert smc_handler.get_collation_chunk_root(shard_id=0, period=1) == BLANK_CHUNK_ROOT\n    assert 
len(w3.eth.getTransactionReceipt(tx_hash)['logs']) == 0\n\n    # Add correct collation record\n    smc_handler.add_header(\n        shard_id=0,\n        period=1,\n        chunk_root=CHUNK_ROOT_1_0,\n        private_key=NotaryAccount(index=0).private_key,\n    )\n    mine(w3=w3, num_blocks=1)\n    # Check that collation record of shard 0 has been updated\n    assert smc_handler.records_updated_period(0) == 1\n    assert smc_handler.get_collation_chunk_root(shard_id=0, period=1) == CHUNK_ROOT_1_0\n\n\ndef test_add_header_wrong_shard(smc_handler):  # noqa: F811\n    w3 = smc_handler.web3\n    shard_count = smc_handler.config['SHARD_COUNT']\n\n    # Register notary 0~2 and fast forward to next period\n    batch_register(smc_handler, 0, 2)\n    fast_forward(smc_handler, 1)\n    current_period = w3.eth.blockNumber // smc_handler.config['PERIOD_LENGTH']\n    assert current_period == 1\n\n    BLANK_CHUNK_ROOT = b'\\x00' * 32\n    CHUNK_ROOT_1_0 = b'\\x10' * 32\n    # Attempt to add collation record with illegal shard_id specified\n    tx_hash = smc_handler.add_header(\n        shard_id=shard_count + 1,\n        period=1,\n        chunk_root=CHUNK_ROOT_1_0,\n        private_key=NotaryAccount(index=0).private_key,\n    )\n    mine(w3, 1)\n    # Check that collation record of shard 0 has not been updated and the transaction consumed all gas\n    # and no logs have been emitted\n    assert smc_handler.records_updated_period(0) == 0\n    assert smc_handler.get_collation_chunk_root(shard_id=0, period=1) == BLANK_CHUNK_ROOT\n    assert len(w3.eth.getTransactionReceipt(tx_hash)['logs']) == 0\n\n    # Second attempt to add collation record with illegal shard_id specified\n    tx_hash = smc_handler.add_header(\n        shard_id=-1,\n        period=1,\n        chunk_root=CHUNK_ROOT_1_0,\n        private_key=NotaryAccount(index=0).private_key,\n    )\n    mine(w3, 1)\n    # Check that collation record of shard 0 has not been updated and the transaction consumed all gas\n    # and no logs have 
been emitted\n    assert smc_handler.records_updated_period(0) == 0\n    assert smc_handler.get_collation_chunk_root(shard_id=0, period=1) == BLANK_CHUNK_ROOT\n    assert len(w3.eth.getTransactionReceipt(tx_hash)['logs']) == 0\n\n    # Add correct collation record\n    smc_handler.add_header(\n        shard_id=0,\n        period=1,\n        chunk_root=CHUNK_ROOT_1_0,\n        private_key=NotaryAccount(index=0).private_key,\n    )\n    mine(w3=w3, num_blocks=1)\n    # Check that collation record of shard 0 has been updated\n    assert smc_handler.records_updated_period(0) == 1\n    assert smc_handler.get_collation_chunk_root(shard_id=0, period=1) == CHUNK_ROOT_1_0\n\n\ndef test_double_add_header(smc_handler):  # noqa: F811\n    w3 = smc_handler.web3\n\n    # Register notary 0~2 and fast forward to next period\n    batch_register(smc_handler, 0, 2)\n    fast_forward(smc_handler, 1)\n    current_period = w3.eth.blockNumber // smc_handler.config['PERIOD_LENGTH']\n    assert current_period == 1\n\n    CHUNK_ROOT_1_0 = b'\\x10' * 32\n    smc_handler.add_header(\n        shard_id=0,\n        period=1,\n        chunk_root=CHUNK_ROOT_1_0,\n        private_key=NotaryAccount(index=0).private_key,\n    )\n    mine(w3=w3, num_blocks=1)\n    # Check that collation record of shard 0 has been updated\n    assert smc_handler.records_updated_period(0) == 1\n    assert smc_handler.get_collation_chunk_root(shard_id=0, period=1) == CHUNK_ROOT_1_0\n\n    # Attempt to add collation record again with same collation record\n    tx_hash = smc_handler.add_header(\n        shard_id=0,\n        period=1,\n        chunk_root=CHUNK_ROOT_1_0,\n        private_key=NotaryAccount(index=0).private_key,\n    )\n    mine(w3, 1)\n    # Check that the transaction consumed all gas and no logs have been emitted\n    assert len(w3.eth.getTransactionReceipt(tx_hash)['logs']) == 0\n\n    # Attempt to add collation record again with different chunk root\n    tx_hash = smc_handler.add_header(\n        shard_id=0,\n   
     period=1,\n        chunk_root=b'\\x56' * 32,\n        private_key=NotaryAccount(index=0).private_key,\n    )\n    mine(w3, 1)\n    # Check that collation record of shard 0 remains the same and the transaction consumed all gas\n    # and no logs have been emitted\n    assert smc_handler.records_updated_period(0) == 1\n    assert smc_handler.get_collation_chunk_root(shard_id=0, period=1) == CHUNK_ROOT_1_0\n    assert len(w3.eth.getTransactionReceipt(tx_hash)['logs']) == 0\n"
  },
  {
    "path": "tests/contract/test_compile.py",
    "content": "from vyper import compiler\n\nfrom sharding.contracts.utils.smc_utils import (\n    get_smc_json,\n    get_smc_source_code,\n)\n\n\ndef test_compile_smc():\n    compiled_smc_json = get_smc_json()\n\n    vmc_code = get_smc_source_code()\n    abi = compiler.mk_full_signature(vmc_code)\n    bytecode = compiler.compile(vmc_code)\n    bytecode_hex = '0x' + bytecode.hex()\n\n    assert abi == compiled_smc_json[\"abi\"]\n    assert bytecode_hex == compiled_smc_json[\"bytecode\"]\n"
  },
  {
    "path": "tests/contract/test_log_emission.py",
    "content": "from sharding.handler.shard_tracker import (  # noqa: F401\n    ShardTracker,\n)\nfrom sharding.handler.utils.web3_utils import (\n    mine,\n)\n\nfrom tests.contract.utils.common_utils import (\n    fast_forward,\n)\nfrom tests.contract.utils.notary_account import (\n    NotaryAccount,\n)\nfrom tests.contract.utils.sample_helper import (\n    sampling,\n)\n\n\ndef test_log_emission(smc_handler):  # noqa: F811\n    w3 = smc_handler.web3\n    shard_tracker = ShardTracker(\n        w3=w3,\n        config=smc_handler.config,\n        shard_id=0,\n        smc_handler_address=smc_handler.address,\n    )\n    notary = NotaryAccount(0)\n\n    # Register\n    smc_handler.register_notary(private_key=notary.private_key)\n    mine(w3, 1)\n    # Check that log was successfully emitted\n    log = shard_tracker.get_register_notary_logs()[0]\n    assert log.index_in_notary_pool == 0 and log.notary == notary.canonical_address\n    fast_forward(smc_handler, 1)\n\n    # Add header\n    CHUNK_ROOT_1_0 = b'\\x10' * 32\n    smc_handler.add_header(\n        shard_id=0,\n        period=1,\n        chunk_root=CHUNK_ROOT_1_0,\n        private_key=notary.private_key,\n    )\n    mine(w3, 1)\n    # Check that log was successfully emitted\n    log = shard_tracker.get_add_header_logs()[0]\n    assert log.period == 1 and log.shard_id == 0 and log.chunk_root == CHUNK_ROOT_1_0\n\n    # Submit vote\n    sample_index = 0\n    pool_index = sampling(smc_handler, 0)[sample_index]\n    smc_handler.submit_vote(\n        shard_id=0,\n        period=1,\n        chunk_root=CHUNK_ROOT_1_0,\n        index=sample_index,\n        private_key=NotaryAccount(pool_index).private_key,\n    )\n    mine(w3, 1)\n    # Check that log was successfully emitted\n    log = shard_tracker.get_submit_vote_logs()[0]\n    assert log.period == 1 and log.shard_id == 0 and log.chunk_root == CHUNK_ROOT_1_0 and \\\n        log.notary == NotaryAccount(pool_index).canonical_address\n    fast_forward(smc_handler, 1)\n\n 
   # Deregister\n    smc_handler.deregister_notary(private_key=notary.private_key)\n    mine(w3, 1)\n    # Check that log was successfully emitted\n    log = shard_tracker.get_deregister_notary_logs()[0]\n    assert log.index_in_notary_pool == 0 and log.notary == notary.canonical_address and \\\n        log.deregistered_period == 2\n    # Fast forward to the end of the lock-up period\n    fast_forward(smc_handler, smc_handler.config['NOTARY_LOCKUP_LENGTH'] + 1)\n\n    # Release\n    smc_handler.release_notary(private_key=notary.private_key)\n    mine(w3, 1)\n    # Check that log was successfully emitted\n    log = shard_tracker.get_release_notary_logs()[0]\n    assert log.index_in_notary_pool == 0 and log.notary == notary.canonical_address\n\n    # Test fetching logs in past periods\n    assert shard_tracker.get_register_notary_logs(from_period=0, to_period=0)\n    assert shard_tracker.get_add_header_logs(from_period=1, to_period=1)\n    assert shard_tracker.get_submit_vote_logs(from_period=1, to_period=1)\n    assert shard_tracker.get_deregister_notary_logs(from_period=2, to_period=2)\n    assert shard_tracker.get_release_notary_logs(\n        from_period=(3 + smc_handler.config['NOTARY_LOCKUP_LENGTH']),\n        to_period=(3 + smc_handler.config['NOTARY_LOCKUP_LENGTH'])\n    )\n"
  },
  {
    "path": "tests/contract/test_notary_sample.py",
    "content": "from sharding.handler.utils.web3_utils import (\n    mine,\n)\n\nfrom tests.contract.utils.common_utils import (\n    update_notary_sample_size,\n    batch_register,\n    fast_forward,\n)\nfrom tests.contract.utils.notary_account import (\n    NotaryAccount,\n)\nfrom tests.contract.utils.sample_helper import (\n    get_notary_pool_list,\n    get_committee_list,\n    get_sample_result,\n)\n\n\ndef test_normal_update_notary_sample_size(smc_handler):  # noqa: F811\n    w3 = smc_handler.web3\n\n    notary_0 = NotaryAccount(0)\n\n    # Register notary 0\n    smc_handler.register_notary(private_key=notary_0.private_key)\n    mine(w3, 1)\n    _, notary_0_pool_index = smc_handler.get_notary_info(\n        notary_0.checksum_address\n    )\n    assert notary_0_pool_index == 0\n    next_period_notary_sample_size = smc_handler.next_period_notary_sample_size()\n    assert (notary_0_pool_index + 1) == next_period_notary_sample_size\n\n    notary_1 = NotaryAccount(1)\n\n    # Register notary 1\n    smc_handler.register_notary(private_key=notary_1.private_key)\n    mine(w3, 1)\n\n    _, notary_1_pool_index = smc_handler.get_notary_info(\n        notary_1.checksum_address\n    )\n    assert notary_1_pool_index == 1\n    next_period_notary_sample_size = smc_handler.next_period_notary_sample_size()\n    assert (notary_1_pool_index + 1) == next_period_notary_sample_size\n\n    # Check that it's not yet the time to update notary sample size,\n    # i.e., current period is the same as latest period the notary sample size was updated.\n    current_period = w3.eth.blockNumber // smc_handler.config['PERIOD_LENGTH']\n    notary_sample_size_updated_period = smc_handler.notary_sample_size_updated_period()\n    assert current_period == notary_sample_size_updated_period\n\n    # Check that current_period_notary_sample_size has not been updated before\n    current_period_notary_sample_size = smc_handler.current_period_notary_sample_size()\n    assert 0 == 
current_period_notary_sample_size\n\n    # Try updating notary sample size\n    update_notary_sample_size(smc_handler)\n    # Check that current_period_notary_sample_size is not updated,\n    # i.e., updating notary sample size failed.\n    current_period_notary_sample_size = smc_handler.current_period_notary_sample_size()\n    assert 0 == current_period_notary_sample_size\n\n    # Fast forward to next period\n    fast_forward(smc_handler, 1)\n\n    # Register notary 2\n    # NOTE: Registration would also invoke update_notary_sample_size function\n    notary_2 = NotaryAccount(2)\n    smc_handler.register_notary(private_key=notary_2.private_key)\n    mine(w3, 1)\n\n    # Check that current_period_notary_sample_size is updated,\n    # i.e., it is assigned the value of next_period_notary_sample_size.\n    current_period_notary_sample_size = smc_handler.current_period_notary_sample_size()\n    assert next_period_notary_sample_size == current_period_notary_sample_size\n\n    # Check that notary sample size is updated in this period\n    current_period = w3.eth.blockNumber // smc_handler.config['PERIOD_LENGTH']\n    notary_sample_size_updated_period = smc_handler.notary_sample_size_updated_period()\n    assert current_period == notary_sample_size_updated_period\n\n\ndef test_register_then_deregister(smc_handler):  # noqa: F811\n    w3 = smc_handler.web3\n\n    notary_0 = NotaryAccount(0)\n\n    # Register notary 0 first\n    smc_handler.register_notary(private_key=notary_0.private_key)\n    mine(w3, 1)\n    _, notary_0_pool_index = smc_handler.get_notary_info(\n        notary_0.checksum_address\n    )\n    assert notary_0_pool_index == 0\n    next_period_notary_sample_size = smc_handler.next_period_notary_sample_size()\n    assert (notary_0_pool_index + 1) == next_period_notary_sample_size\n\n    # Then deregister notary 0\n    smc_handler.deregister_notary(private_key=notary_0.private_key)\n    mine(w3, 1)\n    # Check that next_period_notary_sample_size remains the same\n    next_period_notary_sample_size = smc_handler.next_period_notary_sample_size()\n    
assert (notary_0_pool_index + 1) == next_period_notary_sample_size\n\n\ndef test_deregister_then_register(smc_handler):  # noqa: F811\n    w3 = smc_handler.web3\n\n    notary_0 = NotaryAccount(0)\n\n    # Register notary 0 and fast forward to next period\n    smc_handler.register_notary(private_key=notary_0.private_key)\n    fast_forward(smc_handler, 1)\n\n    # Deregister notary 0 first\n    # NOTE: Deregistration would also invoke update_notary_sample_size function\n    smc_handler.deregister_notary(private_key=notary_0.private_key)\n    mine(w3, 1)\n    # Check that current_period_notary_sample_size is updated\n    current_period_notary_sample_size = smc_handler.current_period_notary_sample_size()\n    assert current_period_notary_sample_size == 1\n\n    notary_1 = NotaryAccount(1)\n\n    # Then register notary 1\n    smc_handler.register_notary(private_key=notary_1.private_key)\n    mine(w3, 1)\n\n    _, notary_1_pool_index = smc_handler.get_notary_info(\n        notary_1.checksum_address\n    )\n    assert notary_1_pool_index == 0\n    # Check that next_period_notary_sample_size remains the same\n    next_period_notary_sample_size = smc_handler.next_period_notary_sample_size()\n    assert (notary_1_pool_index + 1) == next_period_notary_sample_size\n\n\ndef test_series_of_deregister_starting_from_top_of_the_stack(smc_handler):  # noqa: F811\n    w3 = smc_handler.web3\n\n    notary_0 = NotaryAccount(0)\n    notary_1 = NotaryAccount(1)\n    notary_2 = NotaryAccount(2)\n\n    # Register notary 0~2\n    batch_register(smc_handler, 0, 2)\n    next_period_notary_sample_size = smc_handler.next_period_notary_sample_size()\n    assert next_period_notary_sample_size == 3\n\n    # Fast forward to next period\n    fast_forward(smc_handler, 1)\n\n    # Deregister from notary 2 to notary 0\n    # Deregister notary 2\n    smc_handler.deregister_notary(private_key=notary_2.private_key)\n    mine(w3, 1)\n    # Check that current_period_notary_sample_size is updated\n    
current_period_notary_sample_size = smc_handler.current_period_notary_sample_size()\n    assert current_period_notary_sample_size == 3\n    # Check that next_period_notary_sample_size remains the same\n    next_period_notary_sample_size = smc_handler.next_period_notary_sample_size()\n    assert next_period_notary_sample_size == 3\n    # Deregister notary 1\n    smc_handler.deregister_notary(private_key=notary_1.private_key)\n    mine(w3, 1)\n    # Check that next_period_notary_sample_size remains the same\n    next_period_notary_sample_size = smc_handler.next_period_notary_sample_size()\n    assert next_period_notary_sample_size == 3\n    # Deregister notary 0\n    smc_handler.deregister_notary(private_key=notary_0.private_key)\n    mine(w3, 1)\n    # Check that next_period_notary_sample_size remains the same\n    next_period_notary_sample_size = smc_handler.next_period_notary_sample_size()\n    assert next_period_notary_sample_size == 3\n\n    # Fast forward to next period\n    fast_forward(smc_handler, 1)\n\n    # Update notary sample size\n    update_notary_sample_size(smc_handler)\n    current_period_notary_sample_size = smc_handler.current_period_notary_sample_size()\n    assert current_period_notary_sample_size == next_period_notary_sample_size\n\n\ndef test_series_of_deregister_starting_from_bottom_of_the_stack(smc_handler):  # noqa: F811\n    w3 = smc_handler.web3\n\n    notary_0 = NotaryAccount(0)\n    notary_1 = NotaryAccount(1)\n    notary_2 = NotaryAccount(2)\n\n    # Register notary 0~2\n    batch_register(smc_handler, 0, 2)\n\n    # Fast forward to next period\n    fast_forward(smc_handler, 1)\n\n    # Deregister from notary 0 to notary 2\n    # Deregister notary 0\n    smc_handler.deregister_notary(private_key=notary_0.private_key)\n    mine(w3, 1)\n    _, notary_0_pool_index = smc_handler.get_notary_info(\n        notary_0.checksum_address\n    )\n    next_period_notary_sample_size = smc_handler.next_period_notary_sample_size()\n    # Check that 
next_period_notary_sample_size remains the same\n    assert next_period_notary_sample_size == 3\n    # Deregister notary 1\n    smc_handler.deregister_notary(private_key=notary_1.private_key)\n    mine(w3, 1)\n    _, notary_1_pool_index = smc_handler.get_notary_info(\n        notary_1.checksum_address\n    )\n    next_period_notary_sample_size = smc_handler.next_period_notary_sample_size()\n    # Check that next_period_notary_sample_size remains the same\n    assert next_period_notary_sample_size == 3\n    # Deregister notary 2\n    smc_handler.deregister_notary(private_key=notary_2.private_key)\n    mine(w3, 1)\n    # Check that current_period_notary_sample_size is updated\n    current_period_notary_sample_size = smc_handler.current_period_notary_sample_size()\n    assert current_period_notary_sample_size == 3\n    _, notary_2_pool_index = smc_handler.get_notary_info(\n        notary_2.checksum_address\n    )\n    next_period_notary_sample_size = smc_handler.next_period_notary_sample_size()\n    assert next_period_notary_sample_size == 3\n\n    # Fast forward to next period\n    fast_forward(smc_handler, 1)\n\n    # Update notary sample size\n    update_notary_sample_size(smc_handler)\n    current_period_notary_sample_size = smc_handler.current_period_notary_sample_size()\n    assert current_period_notary_sample_size == next_period_notary_sample_size\n\n\ndef test_get_member_of_committee_without_updating_sample_size(smc_handler):  # noqa: F811\n    w3 = smc_handler.web3\n\n    # Register notary 0~5 and fast forward to next period\n    batch_register(smc_handler, 0, 5)\n    fast_forward(smc_handler, 1)\n\n    # Register notary 6~8\n    batch_register(smc_handler, 6, 8)\n\n    # Check that sample-size-related values match\n    current_period = w3.eth.blockNumber // smc_handler.config['PERIOD_LENGTH']\n    notary_sample_size_updated_period = smc_handler.notary_sample_size_updated_period()\n    assert notary_sample_size_updated_period == current_period\n    
current_period_notary_sample_size = smc_handler.current_period_notary_sample_size()\n    assert current_period_notary_sample_size == 6\n    next_period_notary_sample_size = smc_handler.next_period_notary_sample_size()\n    assert next_period_notary_sample_size == 9\n\n    # Fast forward to next period\n    fast_forward(smc_handler, 1)\n    current_period = w3.eth.blockNumber // smc_handler.config['PERIOD_LENGTH']\n    notary_sample_size_updated_period = smc_handler.notary_sample_size_updated_period()\n    assert notary_sample_size_updated_period == current_period - 1\n\n    shard_0_committee_list = get_committee_list(smc_handler, 0)\n    # Check that get_committee_list did generate a committee list\n    assert len(shard_0_committee_list) > 0\n    for (i, notary) in enumerate(shard_0_committee_list):\n        assert smc_handler.get_member_of_committee(0, i) == notary\n\n\ndef test_get_member_of_committee_with_updated_sample_size(smc_handler):  # noqa: F811\n    w3 = smc_handler.web3\n\n    # Register notary 0~8 and fast forward to next period\n    batch_register(smc_handler, 0, 8)\n    fast_forward(smc_handler, 1)\n\n    # Update notary sample size\n    update_notary_sample_size(smc_handler)\n    # Check that sample-size-related values match\n    current_period = w3.eth.blockNumber // smc_handler.config['PERIOD_LENGTH']\n    notary_sample_size_updated_period = smc_handler.notary_sample_size_updated_period()\n    assert notary_sample_size_updated_period == current_period\n    current_period_notary_sample_size = smc_handler.current_period_notary_sample_size()\n    assert current_period_notary_sample_size == 9\n    next_period_notary_sample_size = smc_handler.next_period_notary_sample_size()\n    assert next_period_notary_sample_size == 9\n\n    shard_0_committee_list = get_committee_list(smc_handler, 0)\n    for (i, notary) in enumerate(shard_0_committee_list):\n        assert smc_handler.get_member_of_committee(0, i) == notary\n\n\ndef 
test_committee_lists_generated_are_different(smc_handler):  # noqa: F811\n    # Register notary 0~8 and fast forward to next period\n    batch_register(smc_handler, 0, 8)\n    fast_forward(smc_handler, 1)\n\n    # Update notary sample size\n    update_notary_sample_size(smc_handler)\n\n    shard_0_committee_list = get_committee_list(smc_handler, 0)\n    shard_1_committee_list = get_committee_list(smc_handler, 1)\n    assert shard_0_committee_list != shard_1_committee_list\n\n    # Fast forward to next period\n    fast_forward(smc_handler, 1)\n\n    # Update notary sample size\n    update_notary_sample_size(smc_handler)\n\n    new_shard_0_committee_list = get_committee_list(smc_handler, 0)\n    assert new_shard_0_committee_list != shard_0_committee_list\n\n\ndef test_get_member_of_committee_with_non_member(smc_handler):  # noqa: F811\n    # Register notary 0~8 and fast forward to next period\n    batch_register(smc_handler, 0, 8)\n    fast_forward(smc_handler, 1)\n\n    # Update notary sample size\n    update_notary_sample_size(smc_handler)\n\n    notary_pool_list = get_notary_pool_list(smc_handler)\n    shard_0_committee_list = get_committee_list(smc_handler, 0)\n    for (i, notary) in enumerate(shard_0_committee_list):\n        notary_index = notary_pool_list.index(notary)\n        next_notary_index = notary_index + 1 \\\n            if notary_index < len(notary_pool_list) - 1 else 0\n        next_notary = notary_pool_list[next_notary_index]\n        assert smc_handler.get_member_of_committee(0, i) != next_notary\n\n\ndef test_committee_change_with_deregister_then_register(smc_handler):  # noqa: F811\n    w3 = smc_handler.web3\n\n    # Register notary 0~8 and fast forward to next period\n    batch_register(smc_handler, 0, 8)\n    fast_forward(smc_handler, 1)\n\n    # Update notary sample size\n    update_notary_sample_size(smc_handler)\n\n    notary_pool_list = get_notary_pool_list(smc_handler)\n    # Choose the first sampled notary and deregister it\n    
notary = get_committee_list(smc_handler, 0)[0]\n    notary_index = notary_pool_list.index(notary)\n    smc_handler.deregister_notary(private_key=NotaryAccount(notary_index).private_key)\n    mine(w3, 1)\n    # Check that first slot in committee is now empty\n    assert smc_handler.get_member_of_committee(0, 0) == b'\\x00' * 20\n\n    # Register notary 9\n    smc_handler.register_notary(private_key=NotaryAccount(9).private_key)\n    mine(w3, 1)\n    # Check that first slot in committee is now filled by notary 9\n    assert smc_handler.get_member_of_committee(0, 0) == NotaryAccount(9).canonical_address\n\n\ndef test_get_sample_result(smc_handler):  # noqa: F811\n    w3 = smc_handler.web3\n\n    # Register notary 0~8 and fast forward to next period\n    batch_register(smc_handler, 0, 8)\n    fast_forward(smc_handler, 1)\n\n    # Update notary sample size\n    current_period = w3.eth.blockNumber // smc_handler.config['PERIOD_LENGTH']\n    update_notary_sample_size(smc_handler)\n\n    # Get all committees of the current period\n    committee_group = []\n    for shard_id in range(smc_handler.config['SHARD_COUNT']):\n        committee_group.append(get_committee_list(smc_handler, shard_id))\n\n    # Get sampling result for notary 0\n    notary_0 = NotaryAccount(0)\n    _, notary_0_pool_index = smc_handler.get_notary_info(\n        notary_0.checksum_address\n    )\n    notary_0_sampling_result = get_sample_result(smc_handler, notary_0_pool_index)\n\n    for (period, shard_id, sampling_index) in notary_0_sampling_result:\n        assert period == current_period\n        # Check that notary is correctly sampled in get_committee_list\n        assert committee_group[shard_id][sampling_index] == notary_0.canonical_address\n        # Check that notary is correctly sampled in SMC\n        assert smc_handler.get_member_of_committee(shard_id, sampling_index) \\\n            == notary_0.canonical_address\n"
  },
  {
    "path": "tests/contract/test_registry_management.py",
    "content": "from sharding.handler.utils.web3_utils import (\n    mine,\n)\n\nfrom tests.contract.utils.common_utils import (\n    batch_register,\n    fast_forward,\n)\nfrom tests.contract.utils.notary_account import (\n    NotaryAccount,\n)\n\n\ndef test_normal_register(smc_handler):  # noqa: F811\n    w3 = smc_handler.web3\n\n    notary_0 = NotaryAccount(0)\n\n    does_notary_exist = smc_handler.does_notary_exist(notary_0.checksum_address)\n    assert not does_notary_exist\n    # Register notary 0\n    smc_handler.register_notary(private_key=notary_0.private_key)\n    mine(w3, 1)\n    does_notary_exist = smc_handler.does_notary_exist(notary_0.checksum_address)\n    assert does_notary_exist\n    notary_deregistered_period, notary_pool_index = smc_handler.get_notary_info(\n        notary_0.checksum_address\n    )\n    assert notary_deregistered_period == 0 and notary_pool_index == 0\n    notary_pool_length = smc_handler.notary_pool_len()\n    assert notary_pool_length == 1\n\n    notary_1 = NotaryAccount(1)\n\n    notary_2 = NotaryAccount(2)\n\n    # Register notary 1 and notary 2\n    batch_register(smc_handler, 1, 2)\n\n    does_notary_exist = smc_handler.does_notary_exist(notary_1.checksum_address)\n    assert does_notary_exist\n    notary_deregistered_period, notary_pool_index = smc_handler.get_notary_info(\n        notary_1.checksum_address\n    )\n    assert notary_deregistered_period == 0 and notary_pool_index == 1\n\n    does_notary_exist = smc_handler.does_notary_exist(notary_2.checksum_address)\n    assert does_notary_exist\n    notary_deregistered_period, notary_pool_index = smc_handler.get_notary_info(\n        notary_2.checksum_address\n    )\n    assert notary_deregistered_period == 0 and notary_pool_index == 2\n\n    notary_pool_length = smc_handler.notary_pool_len()\n    assert notary_pool_length == 3\n\n\ndef test_register_without_enough_ether(smc_handler):  # noqa: F811\n    w3 = smc_handler.web3\n\n    notary_0 = NotaryAccount(0)\n\n    
does_notary_exist = smc_handler.does_notary_exist(notary_0.checksum_address)\n    assert not does_notary_exist\n\n    # Register without enough ether\n    smc_handler._send_transaction(\n        func_name='register_notary',\n        args=[],\n        private_key=notary_0.private_key,\n        value=smc_handler.config['NOTARY_DEPOSIT'] // 10000,\n        gas=smc_handler._estimate_gas_dict['register_notary'],\n    )\n    mine(w3, 1)\n\n    # Check that the registration failed\n    does_notary_exist = smc_handler.does_notary_exist(notary_0.checksum_address)\n    assert not does_notary_exist\n    notary_pool_length = smc_handler.notary_pool_len()\n    assert notary_pool_length == 0\n\n\ndef test_double_register(smc_handler):  # noqa: F811\n    w3 = smc_handler.web3\n\n    notary_0 = NotaryAccount(0)\n\n    # Register notary 0\n    smc_handler.register_notary(private_key=notary_0.private_key)\n    mine(w3, 1)\n    does_notary_exist = smc_handler.does_notary_exist(notary_0.checksum_address)\n    assert does_notary_exist\n    notary_pool_length = smc_handler.notary_pool_len()\n    assert notary_pool_length == 1\n\n    # Try to register notary 0 again\n    tx_hash = smc_handler.register_notary(private_key=notary_0.private_key)\n    mine(w3, 1)\n    # Check that the pool remains the same, the transaction consumed all gas,\n    # and no logs were emitted\n    notary_pool_length = smc_handler.notary_pool_len()\n    assert notary_pool_length == 1\n    assert len(w3.eth.getTransactionReceipt(tx_hash)['logs']) == 0\n\n\ndef test_normal_deregister(smc_handler):  # noqa: F811\n    w3 = smc_handler.web3\n\n    notary_0 = NotaryAccount(0)\n\n    # Register notary 0\n    smc_handler.register_notary(private_key=notary_0.private_key)\n    mine(w3, 1)\n    does_notary_exist = smc_handler.does_notary_exist(notary_0.checksum_address)\n    assert does_notary_exist\n    notary_pool_length = smc_handler.notary_pool_len()\n    assert notary_pool_length == 1\n\n    # Fast forward\n    
fast_forward(smc_handler, 1)\n\n    # Deregister notary 0\n    smc_handler.deregister_notary(private_key=notary_0.private_key)\n    current_period = w3.eth.blockNumber // smc_handler.config['PERIOD_LENGTH']\n    mine(w3, 1)\n    does_notary_exist = smc_handler.does_notary_exist(notary_0.checksum_address)\n    assert does_notary_exist\n    notary_deregistered_period, notary_pool_index = smc_handler.get_notary_info(\n        notary_0.checksum_address\n    )\n    assert notary_deregistered_period == current_period\n    notary_pool_length = smc_handler.notary_pool_len()\n    assert notary_pool_length == 0\n\n\ndef test_deregister_then_register(smc_handler):  # noqa: F811\n    w3 = smc_handler.web3\n\n    notary_0 = NotaryAccount(0)\n\n    # Register notary 0\n    smc_handler.register_notary(private_key=notary_0.private_key)\n    mine(w3, 1)\n    does_notary_exist = smc_handler.does_notary_exist(notary_0.checksum_address)\n    assert does_notary_exist\n    notary_pool_length = smc_handler.notary_pool_len()\n    assert notary_pool_length == 1\n\n    # Fast forward\n    fast_forward(smc_handler, 1)\n\n    # Deregister notary 0\n    smc_handler.deregister_notary(private_key=notary_0.private_key)\n    current_period = w3.eth.blockNumber // smc_handler.config['PERIOD_LENGTH']\n    mine(w3, 1)\n    does_notary_exist = smc_handler.does_notary_exist(notary_0.checksum_address)\n    assert does_notary_exist\n    notary_deregistered_period, notary_pool_index = smc_handler.get_notary_info(\n        notary_0.checksum_address\n    )\n    assert notary_deregistered_period == current_period\n    notary_pool_length = smc_handler.notary_pool_len()\n    assert notary_pool_length == 0\n\n    # Register again right away\n    tx_hash = smc_handler.register_notary(private_key=notary_0.private_key)\n    mine(w3, 1)\n    # Check that the pool remains the same, the transaction consumed all gas,\n    # and no logs were emitted\n    notary_pool_length = smc_handler.notary_pool_len()\n    assert 
notary_pool_length == 0\n    assert len(w3.eth.getTransactionReceipt(tx_hash)['logs']) == 0\n\n\ndef test_normal_release_notary(smc_handler):  # noqa: F811\n    w3 = smc_handler.web3\n\n    notary_0 = NotaryAccount(0)\n\n    # Register notary 0\n    smc_handler.register_notary(private_key=notary_0.private_key)\n    mine(w3, 1)\n    does_notary_exist = smc_handler.does_notary_exist(notary_0.checksum_address)\n    assert does_notary_exist\n    notary_pool_length = smc_handler.notary_pool_len()\n    assert notary_pool_length == 1\n\n    # Fast forward\n    fast_forward(smc_handler, 1)\n\n    # Deregister notary 0\n    smc_handler.deregister_notary(private_key=notary_0.private_key)\n    mine(w3, 1)\n    notary_pool_length = smc_handler.notary_pool_len()\n    assert notary_pool_length == 0\n\n    # Fast forward to end of lock up\n    fast_forward(smc_handler, smc_handler.config['NOTARY_LOCKUP_LENGTH'] + 1)\n\n    # Release notary 0\n    smc_handler.release_notary(private_key=notary_0.private_key)\n    mine(w3, 1)\n    does_notary_exist = smc_handler.does_notary_exist(notary_0.checksum_address)\n    assert not does_notary_exist\n\n\ndef test_instant_release_notary(smc_handler):  # noqa: F811\n    w3 = smc_handler.web3\n\n    notary_0 = NotaryAccount(0)\n\n    # Register notary 0\n    smc_handler.register_notary(private_key=notary_0.private_key)\n    mine(w3, 1)\n    does_notary_exist = smc_handler.does_notary_exist(notary_0.checksum_address)\n    assert does_notary_exist\n    notary_pool_length = smc_handler.notary_pool_len()\n    assert notary_pool_length == 1\n\n    # Fast forward\n    fast_forward(smc_handler, 1)\n\n    # Deregister notary 0\n    smc_handler.deregister_notary(private_key=notary_0.private_key)\n    mine(w3, 1)\n    notary_pool_length = smc_handler.notary_pool_len()\n    assert notary_pool_length == 0\n\n    # Instant release notary 0\n    tx_hash = smc_handler.release_notary(private_key=notary_0.private_key)\n    mine(w3, 1)\n    # Check that the registry remains 
the same, the transaction consumed all gas,\n    # and no logs were emitted\n    does_notary_exist = smc_handler.does_notary_exist(notary_0.checksum_address)\n    assert does_notary_exist\n    assert len(w3.eth.getTransactionReceipt(tx_hash)['logs']) == 0\n\n\ndef test_deregister_and_new_notary_register(smc_handler):  # noqa: F811\n    w3 = smc_handler.web3\n\n    notary_0 = NotaryAccount(0)\n\n    # Register notary 0\n    smc_handler.register_notary(private_key=notary_0.private_key)\n    mine(w3, 1)\n    notary_pool_length = smc_handler.notary_pool_len()\n    assert notary_pool_length == 1\n\n    notary_2 = NotaryAccount(2)\n\n    # Register notary 1~3\n    batch_register(smc_handler, 1, 3)\n\n    notary_pool_length = smc_handler.notary_pool_len()\n    assert notary_pool_length == 4\n    # Check that empty_slots_stack is empty\n    empty_slots_stack_top = smc_handler.empty_slots_stack_top()\n    assert empty_slots_stack_top == 0\n\n    # Fast forward\n    fast_forward(smc_handler, 1)\n\n    # Deregister notary 2\n    smc_handler.deregister_notary(private_key=notary_2.private_key)\n    mine(w3, 1)\n    notary_pool_length = smc_handler.notary_pool_len()\n    assert notary_pool_length == 3\n\n    # Check that empty_slots_stack is not empty\n    empty_slots_stack_top = smc_handler.empty_slots_stack_top()\n    assert empty_slots_stack_top == 1\n    _, notary_2_pool_index = smc_handler.get_notary_info(notary_2.checksum_address)\n    empty_slots = smc_handler.empty_slots_stack(0)\n    # Check that the top empty_slots entry points to notary 2\n    assert empty_slots == notary_2_pool_index\n\n    notary_4 = NotaryAccount(4)\n\n    # Register notary 4\n    smc_handler.register_notary(private_key=notary_4.private_key)\n    mine(w3, 1)\n\n    notary_pool_length = smc_handler.notary_pool_len()\n    assert notary_pool_length == 4\n    # Check that empty_slots_stack is empty\n    empty_slots_stack_top = smc_handler.empty_slots_stack_top()\n    assert empty_slots_stack_top == 
0\n    _, notary_4_pool_index = smc_handler.get_notary_info(notary_4.checksum_address)\n    # Check that notary 4 fills notary 2's spot\n    assert notary_4_pool_index == notary_2_pool_index\n"
  },
  {
    "path": "tests/contract/test_submit_vote.py",
    "content": "import pytest\n\nfrom sharding.handler.utils.web3_utils import (\n    mine,\n)\n\nfrom tests.contract.utils.common_utils import (\n    batch_register,\n    fast_forward,\n)\nfrom tests.contract.utils.notary_account import (\n    NotaryAccount,\n)\nfrom tests.contract.utils.sample_helper import (\n    sampling,\n    get_sample_result,\n)\n\n\ndef test_normal_submit_vote(smc_handler):  # noqa: F811\n    w3 = smc_handler.web3\n    # We only vote in shard 0 for ease of testing\n    shard_id = 0\n\n    # Register notary 0~8 and fast forward to next period\n    batch_register(smc_handler, 0, 8)\n    fast_forward(smc_handler, 1)\n    current_period = w3.eth.blockNumber // smc_handler.config['PERIOD_LENGTH']\n    assert current_period == 1\n\n    # Add collation record\n    CHUNK_ROOT_1_0 = b'\\x10' * 32\n    smc_handler.add_header(\n        shard_id=shard_id,\n        period=current_period,\n        chunk_root=CHUNK_ROOT_1_0,\n        private_key=NotaryAccount(index=0).private_key,\n    )\n    mine(w3, 1)\n\n    # Get the first notary in the sample list in this period\n    sample_index = 0\n    pool_index = sampling(smc_handler, shard_id)[sample_index]\n    # Check that voting record does not exist prior to voting\n    assert smc_handler.get_vote_count(shard_id) == 0\n    assert not smc_handler.has_notary_voted(shard_id, sample_index)\n    # First notary vote\n    smc_handler.submit_vote(\n        shard_id=shard_id,\n        period=current_period,\n        chunk_root=CHUNK_ROOT_1_0,\n        index=sample_index,\n        private_key=NotaryAccount(index=pool_index).private_key,\n    )\n    mine(w3, 1)\n    # Check that vote has been cast successfully\n    assert smc_handler.get_vote_count(shard_id) == 1\n    assert smc_handler.has_notary_voted(shard_id, sample_index)\n\n    # Check that collation is not elected and forward to next period\n    assert not smc_handler.get_collation_is_elected(shard_id=shard_id, period=current_period)\n    
fast_forward(smc_handler, 1)\n    current_period = w3.eth.blockNumber // smc_handler.config['PERIOD_LENGTH']\n    assert current_period == 2\n\n    # Add collation record\n    CHUNK_ROOT_2_0 = b'\\x20' * 32\n    smc_handler.add_header(\n        shard_id=shard_id,\n        period=current_period,\n        chunk_root=CHUNK_ROOT_2_0,\n        private_key=NotaryAccount(index=0).private_key,\n    )\n    mine(w3, 1)\n\n    # Check that vote count is zero\n    assert smc_handler.get_vote_count(shard_id) == 0\n    # Keep voting until the collation is elected.\n    for (sample_index, pool_index) in enumerate(sampling(smc_handler, shard_id)):\n        if smc_handler.get_collation_is_elected(shard_id=shard_id, period=current_period):\n            assert smc_handler.get_vote_count(shard_id) == smc_handler.config['QUORUM_SIZE']\n            break\n        # Check that voting record does not exist prior to voting\n        assert not smc_handler.has_notary_voted(shard_id, sample_index)\n        # Vote\n        smc_handler.submit_vote(\n            shard_id=shard_id,\n            period=current_period,\n            chunk_root=CHUNK_ROOT_2_0,\n            index=sample_index,\n            private_key=NotaryAccount(index=pool_index).private_key,\n        )\n        mine(w3, 1)\n        # Check that vote has been cast successfully\n        assert smc_handler.has_notary_voted(shard_id, sample_index)\n    # Check that the collation is indeed elected.\n    assert smc_handler.get_collation_is_elected(shard_id=shard_id, period=current_period)\n\n\ndef test_double_submit_vote(smc_handler):  # noqa: F811\n    w3 = smc_handler.web3\n    # We only vote in shard 0 for ease of testing\n    shard_id = 0\n\n    # Register notary 0~8 and fast forward to next period\n    batch_register(smc_handler, 0, 8)\n    fast_forward(smc_handler, 1)\n    current_period = w3.eth.blockNumber // smc_handler.config['PERIOD_LENGTH']\n    assert current_period == 1\n\n    # Add collation record\n    CHUNK_ROOT_1_0 = 
b'\\x10' * 32\n    smc_handler.add_header(\n        shard_id=shard_id,\n        period=current_period,\n        chunk_root=CHUNK_ROOT_1_0,\n        private_key=NotaryAccount(index=0).private_key,\n    )\n    mine(w3, 1)\n\n    # Get the first notary in the sample list in this period and vote\n    sample_index = 0\n    pool_index = sampling(smc_handler, shard_id)[sample_index]\n    smc_handler.submit_vote(\n        shard_id=shard_id,\n        period=current_period,\n        chunk_root=CHUNK_ROOT_1_0,\n        index=sample_index,\n        private_key=NotaryAccount(index=pool_index).private_key,\n    )\n    mine(w3, 1)\n    # Check that vote has been cast successfully\n    assert smc_handler.get_vote_count(shard_id) == 1\n    assert smc_handler.has_notary_voted(shard_id, sample_index)\n\n    # Attempt to double vote\n    tx_hash = smc_handler.submit_vote(\n        shard_id=shard_id,\n        period=current_period,\n        chunk_root=CHUNK_ROOT_1_0,\n        index=sample_index,\n        private_key=NotaryAccount(index=pool_index).private_key,\n    )\n    mine(w3, 1)\n    # Check that the transaction failed, the vote count remains the same,\n    # and no logs were emitted\n    assert len(w3.eth.getTransactionReceipt(tx_hash)['logs']) == 0\n    assert smc_handler.get_vote_count(shard_id) == 1\n\n\ndef test_submit_vote_by_notary_sampled_multiple_times(smc_handler):  # noqa: F811\n    w3 = smc_handler.web3\n    # We only vote in shard 0 for ease of testing\n    shard_id = 0\n\n    # Here we only register 5 notaries so it's guaranteed that at least\n    # one notary is going to be sampled twice.\n    # Register notary 0~4 and fast forward to next period\n    batch_register(smc_handler, 0, 4)\n    fast_forward(smc_handler, 1)\n    current_period = w3.eth.blockNumber // smc_handler.config['PERIOD_LENGTH']\n    assert current_period == 1\n\n    # Add collation record\n    CHUNK_ROOT_1_0 = b'\\x10' * 32\n    smc_handler.add_header(\n        shard_id=shard_id,\n        
period=current_period,\n        chunk_root=CHUNK_ROOT_1_0,\n        private_key=NotaryAccount(index=0).private_key,\n    )\n    mine(w3, 1)\n\n    # Find a notary that is sampled more than once\n    for pool_index in range(5):\n        sample_index_list = [\n            sample_index\n            for (_, _shard_id, sample_index) in get_sample_result(smc_handler, pool_index)\n            if _shard_id == shard_id\n        ]\n        if len(sample_index_list) > 1:\n            vote_count = len(sample_index_list)\n            for sample_index in sample_index_list:\n                smc_handler.submit_vote(\n                    shard_id=shard_id,\n                    period=current_period,\n                    chunk_root=CHUNK_ROOT_1_0,\n                    index=sample_index,\n                    private_key=NotaryAccount(index=pool_index).private_key,\n                )\n                mine(w3, 1)\n            # Check that every vote is successfully cast even by the same notary\n            assert smc_handler.get_vote_count(shard_id) == vote_count\n            break\n\n\ndef test_submit_vote_by_non_eligible_notary(smc_handler):  # noqa: F811\n    w3 = smc_handler.web3\n    # We only vote in shard 0 for ease of testing\n    shard_id = 0\n\n    # Register notary 0~8 and fast forward to next period\n    batch_register(smc_handler, 0, 8)\n    fast_forward(smc_handler, 1)\n    current_period = w3.eth.blockNumber // smc_handler.config['PERIOD_LENGTH']\n    assert current_period == 1\n\n    # Add collation record\n    CHUNK_ROOT_1_0 = b'\\x10' * 32\n    smc_handler.add_header(\n        shard_id=shard_id,\n        period=current_period,\n        chunk_root=CHUNK_ROOT_1_0,\n        private_key=NotaryAccount(index=0).private_key,\n    )\n    mine(w3, 1)\n\n    sample_index = 0\n    pool_index = sampling(smc_handler, shard_id)[sample_index]\n    wrong_pool_index = 0 if pool_index != 0 else 1\n    tx_hash = smc_handler.submit_vote(\n        shard_id=shard_id,\n        
period=current_period,\n        chunk_root=CHUNK_ROOT_1_0,\n        index=sample_index,\n        # Vote by non-eligible notary\n        private_key=NotaryAccount(wrong_pool_index).private_key,\n    )\n    mine(w3, 1)\n    # Check that the transaction failed, the vote count remains the same,\n    # and no logs were emitted\n    assert len(w3.eth.getTransactionReceipt(tx_hash)['logs']) == 0\n    assert smc_handler.get_vote_count(shard_id) == 0\n    assert not smc_handler.has_notary_voted(shard_id, sample_index)\n\n\ndef test_submit_vote_without_add_header_first(smc_handler):  # noqa: F811\n    w3 = smc_handler.web3\n    # We only vote in shard 0 for ease of testing\n    shard_id = 0\n\n    # Register notary 0~8 and fast forward to next period\n    batch_register(smc_handler, 0, 8)\n    fast_forward(smc_handler, 1)\n    current_period = w3.eth.blockNumber // smc_handler.config['PERIOD_LENGTH']\n    assert current_period == 1\n\n    CHUNK_ROOT_1_0 = b'\\x10' * 32\n    # Get the first notary in the sample list in this period and vote\n    sample_index = 0\n    pool_index = sampling(smc_handler, shard_id)[sample_index]\n    tx_hash = smc_handler.submit_vote(\n        shard_id=shard_id,\n        period=current_period,\n        chunk_root=CHUNK_ROOT_1_0,\n        index=sample_index,\n        private_key=NotaryAccount(index=pool_index).private_key,\n    )\n    mine(w3, 1)\n    # Check that the transaction failed, the vote count remains the same,\n    # and no logs were emitted\n    assert len(w3.eth.getTransactionReceipt(tx_hash)['logs']) == 0\n    assert smc_handler.get_vote_count(shard_id) == 0\n    assert not smc_handler.has_notary_voted(shard_id, sample_index)\n\n\n@pytest.mark.parametrize(  # noqa: F811\n    'period, shard_id, chunk_root, sample_index',\n    (\n        (-1, 0, b'\\x10' * 32, 0),\n        (999, 0, b'\\x10' * 32, 0),\n        (1, -1, b'\\x10' * 32, 0),\n        (1, 999, b'\\x10' * 32, 0),\n        (1, 0, b'\\xff' * 32, 0),\n        (1, 0, b'\\x10' * 32, 
-1),\n        (1, 0, b'\\x10' * 32, 999),\n    )\n)\ndef test_submit_vote_with_invalid_args(smc_handler, period, shard_id, chunk_root, sample_index):\n    w3 = smc_handler.web3\n\n    # Register notary 0~8 and fast forward to next period\n    batch_register(smc_handler, 0, 8)\n    fast_forward(smc_handler, 1)\n    current_period = w3.eth.blockNumber // smc_handler.config['PERIOD_LENGTH']\n    assert current_period == 1\n\n    # Add correct collation record\n    smc_handler.add_header(\n        shard_id=0,\n        period=current_period,\n        chunk_root=b'\\x10' * 32,\n        private_key=NotaryAccount(index=0).private_key,\n    )\n    mine(w3, 1)\n\n    pool_index = sampling(smc_handler, 0)[0]\n    # Vote with the provided incorrect arguments\n    tx_hash = smc_handler.submit_vote(\n        shard_id=shard_id,\n        period=period,\n        chunk_root=chunk_root,\n        index=sample_index,\n        private_key=NotaryAccount(index=pool_index).private_key,\n    )\n    mine(w3, 1)\n    # Check that transaction failed and vote count remains the same\n    # and no logs have been emitted\n    assert len(w3.eth.getTransactionReceipt(tx_hash)['logs']) == 0\n    assert smc_handler.get_vote_count(shard_id) == 0\n    assert not smc_handler.has_notary_voted(shard_id, sample_index)\n\n\ndef test_submit_vote_then_deregister(smc_handler):  # noqa: F811\n    w3 = smc_handler.web3\n    # We only vote in shard 0 for ease of testing\n    shard_id = 0\n\n    # Register notary 0~8 and fast forward to next period\n    batch_register(smc_handler, 0, 8)\n    fast_forward(smc_handler, 1)\n    current_period = w3.eth.blockNumber // smc_handler.config['PERIOD_LENGTH']\n    assert current_period == 1\n\n    # Add collation record\n    CHUNK_ROOT_1_0 = b'\\x10' * 32\n    smc_handler.add_header(\n        shard_id=shard_id,\n        period=current_period,\n        chunk_root=CHUNK_ROOT_1_0,\n        private_key=NotaryAccount(index=0).private_key,\n    )\n    mine(w3, 1)\n\n    sample_index = 
0\n    pool_index = sampling(smc_handler, shard_id)[sample_index]\n    smc_handler.submit_vote(\n        shard_id=shard_id,\n        period=current_period,\n        chunk_root=CHUNK_ROOT_1_0,\n        index=sample_index,\n        private_key=NotaryAccount(index=pool_index).private_key,\n    )\n    mine(w3, 1)\n\n    # Check that vote has been cast successfully\n    assert smc_handler.get_vote_count(shard_id) == 1\n    assert smc_handler.has_notary_voted(shard_id, sample_index)\n\n    # The notary deregisters\n    smc_handler.deregister_notary(private_key=NotaryAccount(pool_index).private_key)\n    mine(w3, 1)\n    # Check that vote was not affected by deregistration\n    assert smc_handler.get_vote_count(shard_id) == 1\n    assert smc_handler.has_notary_voted(shard_id, sample_index)\n\n    # Notary 9 registers and takes retired notary's place in pool\n    smc_handler.register_notary(private_key=NotaryAccount(9).private_key)\n    # Attempt to vote\n    tx_hash = smc_handler.submit_vote(\n        shard_id=shard_id,\n        period=current_period,\n        chunk_root=CHUNK_ROOT_1_0,\n        index=sample_index,\n        private_key=NotaryAccount(index=9).private_key,\n    )\n    mine(w3, 1)\n\n    # Check that transaction failed and vote count remains the same\n    # and no logs have been emitted\n    assert len(w3.eth.getTransactionReceipt(tx_hash)['logs']) == 0\n    assert smc_handler.get_vote_count(shard_id) == 1\n"
  },
  {
    "path": "tests/contract/utils/common_utils.py",
    "content": "from sharding.handler.utils.web3_utils import (\n    mine,\n)\nfrom tests.contract.utils.notary_account import (\n    NotaryAccount,\n)\n\n\ndef update_notary_sample_size(smc_handler):\n    smc_handler._send_transaction(\n        func_name='update_notary_sample_size',\n        args=[],\n        private_key=NotaryAccount(0).private_key,\n        gas=smc_handler._estimate_gas_dict['update_notary_sample_size'],\n    )\n    mine(smc_handler.web3, 1)\n\n\ndef batch_register(smc_handler, start, end):\n    assert start <= end\n    for i in range(start, end + 1):\n        notary = NotaryAccount(i)\n        smc_handler.register_notary(private_key=notary.private_key)\n    mine(smc_handler.web3, 1)\n\n\ndef fast_forward(smc_handler, num_of_periods):\n    assert num_of_periods > 0\n    period_length = smc_handler.config['PERIOD_LENGTH']\n    block_number = smc_handler.web3.eth.blockNumber\n    current_period = block_number // period_length\n    blocks_to_the_period = (current_period + num_of_periods) * period_length \\\n        - block_number\n    mine(smc_handler.web3, blocks_to_the_period)\n"
  },
  {
    "path": "tests/contract/utils/notary_account.py",
    "content": "from eth_tester.backends.pyevm.main import (\n    get_default_account_keys,\n)\n\n\nclass NotaryAccount:\n    index = None\n\n    def __init__(self, index):\n        self.index = index\n\n    @property\n    def private_key(self):\n        return get_default_account_keys()[self.index]\n\n    @property\n    def checksum_address(self):\n        return self.private_key.public_key.to_checksum_address()\n\n    @property\n    def canonical_address(self):\n        return self.private_key.public_key.to_canonical_address()\n"
  },
  {
    "path": "tests/contract/utils/sample_helper.py",
    "content": "from eth_utils import (\n    to_list,\n    keccak,\n    big_endian_to_int,\n)\n\nfrom evm.utils.numeric import (\n    int_to_bytes32,\n)\n\n\n@to_list\ndef get_notary_pool_list(smc_handler):\n    \"\"\"Get the full list of notaries that's currently in notary pool.\n    \"\"\"\n    pool_len = smc_handler.notary_pool_len()\n    for i in range(pool_len):\n        yield smc_handler.notary_pool(i)\n\n\n@to_list\ndef sampling(smc_handler, shard_id):\n    \"\"\"The sampling process is the same as the one in SMC(inside the\n    `get_member_of_committee` function). It is used to avoid the overhead\n    of making contrac call to SMC. The overhead could be quite significant\n    if you want to get the complete sampling result since you have to make\n    a total of `SHARD_COUNT`*`COMMITTEE_SIZE` times of contract calls.\n    \"\"\"\n    w3 = smc_handler.web3\n    current_period = w3.eth.blockNumber // smc_handler.config['PERIOD_LENGTH']\n\n    # Determine sample size\n    if smc_handler.notary_sample_size_updated_period() < current_period:\n        sample_size = smc_handler.next_period_notary_sample_size()\n    elif smc_handler.notary_sample_size_updated_period() == current_period:\n        sample_size = smc_handler.current_period_notary_sample_size()\n    else:\n        raise Exception(\"notary_sample_size_updated_period is larger than current period\")\n\n    # Get source for pseudo random number generation\n    bytes32_shard_id = int_to_bytes32(shard_id)\n    entropy_block_number = current_period * smc_handler.config['PERIOD_LENGTH'] - 1\n    entropy_block_hash = w3.eth.getBlock(entropy_block_number)['hash']\n\n    for i in range(smc_handler.config['COMMITTEE_SIZE']):\n        yield big_endian_to_int(\n            keccak(\n                entropy_block_hash + bytes32_shard_id + int_to_bytes32(i)\n            )\n        ) % sample_size\n\n\n@to_list\ndef get_committee_list(smc_handler, shard_id):\n    \"\"\"Get committee list in specified shard in current 
period.\n    Returns the list of sampled notaries.\n    \"\"\"\n    for notary_pool_index in sampling(smc_handler, shard_id):\n        yield smc_handler.notary_pool(notary_pool_index)\n\n\n@to_list\ndef get_sample_result(smc_handler, notary_index):\n    \"\"\"Get the sampling result for the specified notary.\n    Pass in the notary's index in the notary pool.\n    Returns a list of tuples (period, shard_id, index) indicating in which period, on which\n    shard, and at which sampling index the notary is sampled.\n    Note that the sampling index here is not the same as the notary pool index.\n    \"\"\"\n    current_period = smc_handler.web3.eth.blockNumber // smc_handler.config['PERIOD_LENGTH']\n\n    for shard_id in range(smc_handler.config['SHARD_COUNT']):\n        for (index, notary_pool_index) in enumerate(sampling(smc_handler, shard_id)):\n            if notary_pool_index == notary_index:\n                yield (current_period, shard_id, index)\n"
  },
  {
    "path": "tests/handler/__init__.py",
    "content": ""
  },
  {
    "path": "tests/handler/test_log_handler.py",
    "content": "import itertools\n\nimport pytest\n\nfrom cytoolz.dicttoolz import (\n    assoc,\n)\n\nfrom web3 import (\n    Web3,\n)\n\nfrom web3.providers.eth_tester import (\n    EthereumTesterProvider,\n)\n\nfrom eth_utils import (\n    event_signature_to_log_topic,\n)\n\nfrom eth_tester import (\n    EthereumTester,\n    PyEVMBackend,\n)\nfrom eth_tester.backends.pyevm.main import (\n    get_default_account_keys,\n)\n\nfrom sharding.handler.log_handler import (\n    LogHandler,\n)\nfrom sharding.handler.utils.web3_utils import (\n    mine,\n    take_snapshot,\n    revert_to_snapshot,\n)\n\n\ncode = \"\"\"\nTest: __log__({amount1: num})\n\n@public\ndef emit_log(log_number: num):\n    log.Test(log_number)\n\"\"\"\nabi = [{'name': 'Test', 'inputs': [{'type': 'int128', 'name': 'amount1', 'indexed': False}], 'anonymous': False, 'type': 'event'}, {'name': 'emit_log', 'outputs': [], 'inputs': [{'type': 'int128', 'name': 'log_number'}], 'constant': False, 'payable': False, 'type': 'function'}]  # noqa: E501\nbytecode = b'a\\x00\\xf9V`\\x005`\\x1cRt\\x01\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00` Ro\\x7f\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff`@R\\x7f\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\x80\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00``Rt\\x01*\\x05\\xf1\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xfd\\xab\\xf4\\x1c\\x00`\\x80R\\x7f\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xfe\\xd5\\xfa\\x0e\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00`\\xa0Rc\\xd0(}7`\\x00Q\\x14\\x15a\\x00\\xf4W` `\\x04a\\x01@74\\x15\\x15XW``Q`\\x045\\x80`@Q\\x90\\x13XW\\x80\\x91\\x90\\x12XWPa\\x01@Qa\\x01`R\\x7f\\xaeh\\x04lU;\\x85\\xd0\\x8bolL6\\x92S)\\x06\\xf3M\\x1d\\xa6\\xcb\\x032\\x1e\\xd6\\x96\\xca\\x0b\\xdcL\\xad` 
a\\x01`\\xa1\\x00[[a\\x00\\x04a\\x00\\xf9\\x03a\\x00\\x04`\\x009a\\x00\\x04a\\x00\\xf9\\x03`\\x00\\xf3'  # noqa: E501\n\ntest_keys = get_default_account_keys()\nprivkey = test_keys[0]\ndefault_tx_detail = {\n    'from': privkey.public_key.to_checksum_address(),\n    'gas': 500000,\n}\ntest_event_signature = event_signature_to_log_topic(\"Test(int128)\")\n\nHISTORY_SIZE = 256\n\n\n@pytest.fixture\ndef contract():\n    eth_tester = EthereumTester(\n        backend=PyEVMBackend(),\n        auto_mine_transactions=False,\n    )\n    provider = EthereumTesterProvider(eth_tester)\n    w3 = Web3(provider)\n    tx_hash = w3.eth.sendTransaction(assoc(default_tx_detail, 'data', bytecode))\n    mine(w3, 1)\n    receipt = w3.eth.getTransactionReceipt(tx_hash)\n    contract_address = receipt['contractAddress']\n    return w3.eth.contract(contract_address, abi=abi, bytecode=bytecode)\n\n\ndef test_get_logs_without_forks(contract, smc_testing_config):\n    period_length = smc_testing_config['PERIOD_LENGTH']\n    w3 = contract.web3\n    log_handler = LogHandler(w3, period_length)\n    counter = itertools.count()\n\n    contract.functions.emit_log(next(counter)).transact(default_tx_detail)\n    mine(w3, 1)\n    logs_block2 = log_handler.get_logs(address=contract.address)\n    assert len(logs_block2) == 1\n    assert int(logs_block2[0]['data'], 16) == 0\n    mine(w3, period_length - 1)\n\n    contract.functions.emit_log(next(counter)).transact(default_tx_detail)\n    mine(w3, 1)\n    logs_block3 = log_handler.get_logs(address=contract.address)\n    assert len(logs_block3) == 1\n    assert int(logs_block3[0]['data'], 16) == 1\n    mine(w3, period_length - 1)\n\n    contract.functions.emit_log(next(counter)).transact(default_tx_detail)\n    mine(w3, 1)\n    contract.functions.emit_log(next(counter)).transact(default_tx_detail)\n    mine(w3, 1)\n    logs_block4_5 = log_handler.get_logs(address=contract.address)\n    assert len(logs_block4_5) == 2\n    assert 
int(logs_block4_5[0]['data'], 16) == 2\n    assert int(logs_block4_5[1]['data'], 16) == 3\n\n\ndef test_get_logs_with_forks(contract, smc_testing_config):\n    w3 = contract.web3\n    log_handler = LogHandler(w3, smc_testing_config['PERIOD_LENGTH'])\n    counter = itertools.count()\n    snapshot_id = take_snapshot(w3)\n    current_block_number = w3.eth.blockNumber\n\n    contract.functions.emit_log(next(counter)).transact(default_tx_detail)\n    mine(w3, 1)\n    revert_to_snapshot(w3, snapshot_id)\n    assert w3.eth.blockNumber == current_block_number\n    contract.functions.emit_log(next(counter)).transact(default_tx_detail)\n    mine(w3, 1)\n    contract.functions.emit_log(next(counter)).transact(default_tx_detail)\n    mine(w3, 1)\n    logs = log_handler.get_logs()\n    # assert len(logs) == 2\n    assert int(logs[0]['data'], 16) == 1\n    assert int(logs[1]['data'], 16) == 2\n"
  },
  {
    "path": "tests/handler/test_shard_tracker.py",
    "content": "import logging\n\nimport pytest\n\nfrom sharding.handler.exceptions import (\n    LogParsingError,\n)\nfrom sharding.handler.utils.log_parser import (\n    LogParser,\n)\nfrom sharding.handler.shard_tracker import (  # noqa: F401\n    ShardTracker,\n)\nfrom sharding.handler.utils.web3_utils import (\n    mine,\n)\n\nfrom tests.contract.utils.common_utils import (\n    batch_register,\n    fast_forward,\n)\nfrom tests.contract.utils.notary_account import (\n    NotaryAccount,\n)\nfrom tests.contract.utils.sample_helper import (\n    sampling,\n)\n\n\nlogger = logging.getLogger('sharding.handler.ShardTracker')\n\n\n@pytest.mark.parametrize(\n    'raw_log, event_name, attr_tuples',\n    (\n        (\n            {'type': 'mined', 'logIndex': 0, 'transactionIndex': 0, 'transactionHash': b'\\xda\\xb8:\\xe5\\x86\\xe9Q\\xf2\\x9c\\xc6<g\\x9bl\\x84\\x85\\xf4\\x1dh\\xce\\x8d\\xe6\\xc0D\\xa0*E\\xd8m\\xd4\\x01\\xcf', 'blockHash': b'\\x13\\xa97d\\r\\x90t\\xe5;\\x84\\xf9\\xe0\\xb8\\xf2c\\x1c}\\x88\\xbf\\x84DN\\xa0\\x16Q\\xd9|\\xa1\\x00\\x91\\xc0\\xbd', 'blockNumber': 25, 'address': '0xf4F1600B0a65995833854738764b50A4DA8d6BE1', 'data': '0x0000000000000000000000000000000000000000000000000000000000000000', 'topics': [b'B\\xccp\\x0f[x\\xa7Le \\xecSA\\xd7\\xc4\\x9e\\xea\\xa8\\xf8\\x90\\x15\\xe7\\x14\\xb4\\xd7 |\\x94|-\\x19\\xec', b'\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00~_ER\\t\\x1ai\\x12]]\\xfc\\xb7\\xb8\\xc2e\\x90)9[\\xdf']},  # noqa: E501\n            'RegisterNotary',\n            [\n                ('index_in_notary_pool', 0),\n                ('notary', b'~_ER\\t\\x1ai\\x12]]\\xfc\\xb7\\xb8\\xc2e\\x90)9[\\xdf'),\n            ]\n        ),\n        (\n            {'type': 'mined', 'logIndex': 0, 'transactionIndex': 0, 'transactionHash': b'\\x16\\xc2\\x0b\\xadZ|\\x92l@@\\xb1\\x15\\x93nh\\xd6]p\\x16\\xae\\xd5\\xe7\\x9crKl\\x8c\\xcf\\x06\\x9a\\xd4\\x05', 'blockHash': 
b'\\x94\\\\\\xce\\x19\\x01:j\\xbb\\xf8\\xba\\x19\\xcfv\\xc3z3}^\\xb6>\\xa0\\x0e\\xf74\\xe8A\\t\\x12p\\x9a\\xf6V', 'blockNumber': 30, 'address': '0xf4F1600B0a65995833854738764b50A4DA8d6BE1', 'data': '0x0000000000000000000000000000000000000000000000000000000000000003', 'topics': [b'B\\xccp\\x0f[x\\xa7Le \\xecSA\\xd7\\xc4\\x9e\\xea\\xa8\\xf8\\x90\\x15\\xe7\\x14\\xb4\\xd7 |\\x94|-\\x19\\xec', b'\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x1e\\xffG\\xbc:\\x10\\xa4]K#\\x0b]\\x10\\xe3wQ\\xfej\\xa7\\x18']},  # noqa: E501\n            'RegisterNotary',\n            [\n                ('index_in_notary_pool', 3),\n                ('notary', b'\\x1e\\xffG\\xbc:\\x10\\xa4]K#\\x0b]\\x10\\xe3wQ\\xfej\\xa7\\x18'),\n            ]\n        ),\n        (\n            {'type': 'mined', 'logIndex': 0, 'transactionIndex': 0, 'transactionHash': b'\\xda\\xb8:\\xe5\\x86\\xe9Q\\xf2\\x9c\\xc6<g\\x9bl\\x84\\x85\\xf4\\x1dh\\xce\\x8d\\xe6\\xc0D\\xa0*E\\xd8m\\xd4\\x01\\xcf', 'blockHash': b'\\x13\\xa97d\\r\\x90t\\xe5;\\x84\\xf9\\xe0\\xb8\\xf2c\\x1c}\\x88\\xbf\\x84DN\\xa0\\x16Q\\xd9|\\xa1\\x00\\x91\\xc0\\xbd', 'blockNumber': 25, 'address': '0xf4F1600B0a65995833854738764b50A4DA8d6BE1', 'data': '0x0000000000000000000000000000000000000000000000000000000000000005000000000000000000000000000000000000000000000000000000000000000a', 'topics': [b'B\\xccp\\x0f[x\\xa7Le \\xecSA\\xd7\\xc4\\x9e\\xea\\xa8\\xf8\\x90\\x15\\xe7\\x14\\xb4\\xd7 |\\x94|-\\x19\\xec', b'\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00~_ER\\t\\x1ai\\x12]]\\xfc\\xb7\\xb8\\xc2e\\x90)9[\\xdf']},  # noqa: E501\n            'DeregisterNotary',\n            [\n                ('index_in_notary_pool', 5),\n                ('notary', b'~_ER\\t\\x1ai\\x12]]\\xfc\\xb7\\xb8\\xc2e\\x90)9[\\xdf'),\n                ('deregistered_period', 10),\n            ]\n        ),\n        (\n            {'type': 'mined', 'logIndex': 0, 'transactionIndex': 0, 'transactionHash': 
b'\\x16\\xc2\\x0b\\xadZ|\\x92l@@\\xb1\\x15\\x93nh\\xd6]p\\x16\\xae\\xd5\\xe7\\x9crKl\\x8c\\xcf\\x06\\x9a\\xd4\\x05', 'blockHash': b'\\x94\\\\\\xce\\x19\\x01:j\\xbb\\xf8\\xba\\x19\\xcfv\\xc3z3}^\\xb6>\\xa0\\x0e\\xf74\\xe8A\\t\\x12p\\x9a\\xf6V', 'blockNumber': 30, 'address': '0xf4F1600B0a65995833854738764b50A4DA8d6BE1', 'data': '0x00000000000000000000000000000000000000000000000000000000000000100000000000000000000000000000000000000000000000000000000000000005', 'topics': [b'B\\xccp\\x0f[x\\xa7Le \\xecSA\\xd7\\xc4\\x9e\\xea\\xa8\\xf8\\x90\\x15\\xe7\\x14\\xb4\\xd7 |\\x94|-\\x19\\xec', b'\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x1e\\xffG\\xbc:\\x10\\xa4]K#\\x0b]\\x10\\xe3wQ\\xfej\\xa7\\x18']},  # noqa: E501\n            'DeregisterNotary',\n            [\n                ('index_in_notary_pool', 16),\n                ('notary', b'\\x1e\\xffG\\xbc:\\x10\\xa4]K#\\x0b]\\x10\\xe3wQ\\xfej\\xa7\\x18'),\n                ('deregistered_period', 5),\n            ]\n        ),\n        (\n            {'type': 'mined', 'logIndex': 0, 'transactionIndex': 0, 'transactionHash': b'\\xda\\xb8:\\xe5\\x86\\xe9Q\\xf2\\x9c\\xc6<g\\x9bl\\x84\\x85\\xf4\\x1dh\\xce\\x8d\\xe6\\xc0D\\xa0*E\\xd8m\\xd4\\x01\\xcf', 'blockHash': b'\\x13\\xa97d\\r\\x90t\\xe5;\\x84\\xf9\\xe0\\xb8\\xf2c\\x1c}\\x88\\xbf\\x84DN\\xa0\\x16Q\\xd9|\\xa1\\x00\\x91\\xc0\\xbd', 'blockNumber': 25, 'address': '0xf4F1600B0a65995833854738764b50A4DA8d6BE1', 'data': '0x00000000000000000000000000000000000000000000000000000000000000011010101010101010101010101010101010101010101010101010101010101010', 'topics': [b'$\\xa5\\x146ipE\\xb9:y\\xa2\\xbd\\xa9\\x00\\xb0PU\\xf1\\xe1\\xe9\\x1b\\x02\\x1bL/\\xb6\\xf6|\\xbb\\x0b.\\x95', b'\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00']},  # noqa: E501\n            'AddHeader',\n            [\n                ('period', 1),\n                ('shard_id', 0),\n             
   ('chunk_root', b'\\x10\\x10\\x10\\x10\\x10\\x10\\x10\\x10\\x10\\x10\\x10\\x10\\x10\\x10\\x10\\x10\\x10\\x10\\x10\\x10\\x10\\x10\\x10\\x10\\x10\\x10\\x10\\x10\\x10\\x10\\x10\\x10'),  # noqa: E501\n            ]\n        ),\n        (\n            {'type': 'mined', 'logIndex': 0, 'transactionIndex': 0, 'transactionHash': b'\\x16\\xc2\\x0b\\xadZ|\\x92l@@\\xb1\\x15\\x93nh\\xd6]p\\x16\\xae\\xd5\\xe7\\x9crKl\\x8c\\xcf\\x06\\x9a\\xd4\\x05', 'blockHash': b'\\x94\\\\\\xce\\x19\\x01:j\\xbb\\xf8\\xba\\x19\\xcfv\\xc3z3}^\\xb6>\\xa0\\x0e\\xf74\\xe8A\\t\\x12p\\x9a\\xf6V', 'blockNumber': 30, 'address': '0xf4F1600B0a65995833854738764b50A4DA8d6BE1', 'data': '0x00000000000000000000000000000000000000000000000000000000000000077373737373737373737373737373737373737373737373737373737373737373', 'topics': [b'$\\xa5\\x146ipE\\xb9:y\\xa2\\xbd\\xa9\\x00\\xb0PU\\xf1\\xe1\\xe9\\x1b\\x02\\x1bL/\\xb6\\xf6|\\xbb\\x0b.\\x95', b'\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x03']},  # noqa: E501\n            'AddHeader',\n            [\n                ('period', 7),\n                ('shard_id', 3),\n                ('chunk_root', b'ssssssssssssssssssssssssssssssss'),\n            ]\n        ),\n        (\n            {'type': 'mined', 'logIndex': 0, 'transactionIndex': 0, 'transactionHash': b'\\xda\\xb8:\\xe5\\x86\\xe9Q\\xf2\\x9c\\xc6<g\\x9bl\\x84\\x85\\xf4\\x1dh\\xce\\x8d\\xe6\\xc0D\\xa0*E\\xd8m\\xd4\\x01\\xcf', 'blockHash': b'\\x13\\xa97d\\r\\x90t\\xe5;\\x84\\xf9\\xe0\\xb8\\xf2c\\x1c}\\x88\\xbf\\x84DN\\xa0\\x16Q\\xd9|\\xa1\\x00\\x91\\xc0\\xbd', 'blockNumber': 25, 'address': '0xf4F1600B0a65995833854738764b50A4DA8d6BE1', 'data': '0x000000000000000000000000000000000000000000000000000000000000001010011001100110011001100110011001100110011001100110011001100110010000000000000000000000001eff47bc3a10a45d4b230b5d10e37751fe6aa718', 'topics': 
[b'$\\xa5\\x146ipE\\xb9:y\\xa2\\xbd\\xa9\\x00\\xb0PU\\xf1\\xe1\\xe9\\x1b\\x02\\x1bL/\\xb6\\xf6|\\xbb\\x0b.\\x95', b'\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x01']},  # noqa: E501\n            'SubmitVote',\n            [\n                ('period', 16),\n                ('shard_id', 1),\n                ('chunk_root', b'\\x10\\x01\\x10\\x01\\x10\\x01\\x10\\x01\\x10\\x01\\x10\\x01\\x10\\x01\\x10\\x01\\x10\\x01\\x10\\x01\\x10\\x01\\x10\\x01\\x10\\x01\\x10\\x01\\x10\\x01\\x10\\x01'),  # noqa: E501\n                ('notary', b'\\x1e\\xffG\\xbc:\\x10\\xa4]K#\\x0b]\\x10\\xe3wQ\\xfej\\xa7\\x18'),\n            ]\n        ),\n        (\n            {'type': 'mined', 'logIndex': 0, 'transactionIndex': 0, 'transactionHash': b'\\x16\\xc2\\x0b\\xadZ|\\x92l@@\\xb1\\x15\\x93nh\\xd6]p\\x16\\xae\\xd5\\xe7\\x9crKl\\x8c\\xcf\\x06\\x9a\\xd4\\x05', 'blockHash': b'\\x94\\\\\\xce\\x19\\x01:j\\xbb\\xf8\\xba\\x19\\xcfv\\xc3z3}^\\xb6>\\xa0\\x0e\\xf74\\xe8A\\t\\x12p\\x9a\\xf6V', 'blockNumber': 30, 'address': '0xf4F1600B0a65995833854738764b50A4DA8d6BE1', 'data': '0x000000000000000000000000000000000000000000000000000000000000002121632163216321632163216321632163216321632163216321632163216321630000000000000000000000007e5f4552091a69125d5dfcb7b8c2659029395bdf', 'topics': [b'$\\xa5\\x146ipE\\xb9:y\\xa2\\xbd\\xa9\\x00\\xb0PU\\xf1\\xe1\\xe9\\x1b\\x02\\x1bL/\\xb6\\xf6|\\xbb\\x0b.\\x95', b'\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x63']},  # noqa: E501\n            'SubmitVote',\n            [\n                ('period', 33),\n                ('shard_id', 99),\n                ('chunk_root', b'!c!c!c!c!c!c!c!c!c!c!c!c!c!c!c!c'),\n                ('notary', b'~_ER\\t\\x1ai\\x12]]\\xfc\\xb7\\xb8\\xc2e\\x90)9[\\xdf'),\n            ]\n        ),\n    )\n)\ndef 
test_normal_log_parser(raw_log, event_name, attr_tuples):\n    parsed_log = LogParser(event_name=event_name, log=raw_log)\n    for attr in attr_tuples:\n        assert getattr(parsed_log, attr[0]) == attr[1]\n\n\n@pytest.mark.parametrize(\n    'raw_log, event_name',\n    (\n        (\n            # Wrong event name\n            {'type': 'mined', 'logIndex': 0, 'transactionIndex': 0, 'transactionHash': b'\\xda\\xb8:\\xe5\\x86\\xe9Q\\xf2\\x9c\\xc6<g\\x9bl\\x84\\x85\\xf4\\x1dh\\xce\\x8d\\xe6\\xc0D\\xa0*E\\xd8m\\xd4\\x01\\xcf', 'blockHash': b'\\x13\\xa97d\\r\\x90t\\xe5;\\x84\\xf9\\xe0\\xb8\\xf2c\\x1c}\\x88\\xbf\\x84DN\\xa0\\x16Q\\xd9|\\xa1\\x00\\x91\\xc0\\xbd', 'blockNumber': 25, 'address': '0xf4F1600B0a65995833854738764b50A4DA8d6BE1', 'data': '0x0000000000000000000000000000000000000000000000000000000000000005000000000000000000000000000000000000000000000000000000000000000a', 'topics': [b'B\\xccp\\x0f[x\\xa7Le \\xecSA\\xd7\\xc4\\x9e\\xea\\xa8\\xf8\\x90\\x15\\xe7\\x14\\xb4\\xd7 |\\x94|-\\x19\\xec', b'\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00~_ER\\t\\x1ai\\x12]]\\xfc\\xb7\\xb8\\xc2e\\x90)9[\\xdf']},  # noqa: E501\n            'WrongEventName',\n        ),\n        (\n            # Too many topics in log\n            {'type': 'mined', 'logIndex': 0, 'transactionIndex': 0, 'transactionHash': b'\\xda\\xb8:\\xe5\\x86\\xe9Q\\xf2\\x9c\\xc6<g\\x9bl\\x84\\x85\\xf4\\x1dh\\xce\\x8d\\xe6\\xc0D\\xa0*E\\xd8m\\xd4\\x01\\xcf', 'blockHash': b'\\x13\\xa97d\\r\\x90t\\xe5;\\x84\\xf9\\xe0\\xb8\\xf2c\\x1c}\\x88\\xbf\\x84DN\\xa0\\x16Q\\xd9|\\xa1\\x00\\x91\\xc0\\xbd', 'blockNumber': 25, 'address': '0xf4F1600B0a65995833854738764b50A4DA8d6BE1', 'data': '0x0000000000000000000000000000000000000000000000000000000000000000', 'topics': [b'B\\xccp\\x0f[x\\xa7Le \\xecSA\\xd7\\xc4\\x9e\\xea\\xa8\\xf8\\x90\\x15\\xe7\\x14\\xb4\\xd7 |\\x94|-\\x19\\xec', b'$\\xa5\\x146ipE\\xb9:y\\xa2\\xbd\\xa9\\x00\\xb0PU\\xf1\\xe1\\xe9\\x1b\\x02\\x1bL/\\xb6\\xf6|\\xbb\\x0b.\\x95', 
b'\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00~_ER\\t\\x1ai\\x12]]\\xfc\\xb7\\xb8\\xc2e\\x90)9[\\xdf']},  # noqa: E501\n            'RegisterNotary',\n        ),\n        (\n            # Too few topics in log\n            {'type': 'mined', 'logIndex': 0, 'transactionIndex': 0, 'transactionHash': b'\\xda\\xb8:\\xe5\\x86\\xe9Q\\xf2\\x9c\\xc6<g\\x9bl\\x84\\x85\\xf4\\x1dh\\xce\\x8d\\xe6\\xc0D\\xa0*E\\xd8m\\xd4\\x01\\xcf', 'blockHash': b'\\x13\\xa97d\\r\\x90t\\xe5;\\x84\\xf9\\xe0\\xb8\\xf2c\\x1c}\\x88\\xbf\\x84DN\\xa0\\x16Q\\xd9|\\xa1\\x00\\x91\\xc0\\xbd', 'blockNumber': 25, 'address': '0xf4F1600B0a65995833854738764b50A4DA8d6BE1', 'data': '0x0000000000000000000000000000000000000000000000000000000000000000', 'topics': [b'B\\xccp\\x0f[x\\xa7Le \\xecSA\\xd7\\xc4\\x9e\\xea\\xa8\\xf8\\x90\\x15\\xe7\\x14\\xb4\\xd7 |\\x94|-\\x19\\xec']},  # noqa: E501\n            'RegisterNotary',\n        ),\n        (\n            # Too many data in log\n            {'type': 'mined', 'logIndex': 0, 'transactionIndex': 0, 'transactionHash': b'\\xda\\xb8:\\xe5\\x86\\xe9Q\\xf2\\x9c\\xc6<g\\x9bl\\x84\\x85\\xf4\\x1dh\\xce\\x8d\\xe6\\xc0D\\xa0*E\\xd8m\\xd4\\x01\\xcf', 'blockHash': b'\\x13\\xa97d\\r\\x90t\\xe5;\\x84\\xf9\\xe0\\xb8\\xf2c\\x1c}\\x88\\xbf\\x84DN\\xa0\\x16Q\\xd9|\\xa1\\x00\\x91\\xc0\\xbd', 'blockNumber': 25, 'address': '0xf4F1600B0a65995833854738764b50A4DA8d6BE1', 'data': '0x0000000000000000000000000000000000000000000000000000000000000005000000000000000000000000000000000000000000000000000000000000000a00000000000000000000000000000000000000000000000000000000000000ff', 'topics': [b'B\\xccp\\x0f[x\\xa7Le \\xecSA\\xd7\\xc4\\x9e\\xea\\xa8\\xf8\\x90\\x15\\xe7\\x14\\xb4\\xd7 |\\x94|-\\x19\\xec', b'\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00~_ER\\t\\x1ai\\x12]]\\xfc\\xb7\\xb8\\xc2e\\x90)9[\\xdf']},  # noqa: E501\n            'DeregisterNotary',\n        ),\n        (\n            # Too few data in log\n            {'type': 'mined', 'logIndex': 0, 
'transactionIndex': 0, 'transactionHash': b'\\xda\\xb8:\\xe5\\x86\\xe9Q\\xf2\\x9c\\xc6<g\\x9bl\\x84\\x85\\xf4\\x1dh\\xce\\x8d\\xe6\\xc0D\\xa0*E\\xd8m\\xd4\\x01\\xcf', 'blockHash': b'\\x13\\xa97d\\r\\x90t\\xe5;\\x84\\xf9\\xe0\\xb8\\xf2c\\x1c}\\x88\\xbf\\x84DN\\xa0\\x16Q\\xd9|\\xa1\\x00\\x91\\xc0\\xbd', 'blockNumber': 25, 'address': '0xf4F1600B0a65995833854738764b50A4DA8d6BE1', 'data': '0x0000000000000000000000000000000000000000000000000000000000000001', 'topics': [b'$\\xa5\\x146ipE\\xb9:y\\xa2\\xbd\\xa9\\x00\\xb0PU\\xf1\\xe1\\xe9\\x1b\\x02\\x1bL/\\xb6\\xf6|\\xbb\\x0b.\\x95', b'\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00']},  # noqa: E501\n            'AddHeader',\n        ),\n    )\n)\ndef test_log_parser_with_wrong_log_content(raw_log, event_name):\n    with pytest.raises(LogParsingError):\n        LogParser(event_name=event_name, log=raw_log)\n\n\ndef test_status_checking_functions(smc_handler, smc_testing_config):  # noqa: F811\n    w3 = smc_handler.web3\n    config = smc_testing_config\n    shard_tracker = ShardTracker(\n        w3=w3,\n        config=config,\n        shard_id=0,\n        smc_handler_address=smc_handler.address,\n    )\n\n    # Register nine notaries\n    batch_register(smc_handler, 0, 8)\n    # Check that registration log was/was not emitted accordingly\n    assert shard_tracker.is_notary_registered(notary=NotaryAccount(0).checksum_address)\n    assert shard_tracker.is_notary_registered(notary=NotaryAccount(5).checksum_address)\n    assert not shard_tracker.is_notary_registered(notary=NotaryAccount(9).checksum_address)\n    fast_forward(smc_handler, 1)\n\n    # Check that add header log has not been emitted yet\n    current_period = w3.eth.blockNumber // config['PERIOD_LENGTH']\n    assert not shard_tracker.is_new_header_added(period=current_period)\n    # Add header in multiple shards\n    CHUNK_ROOT_1_0 = b'\\x10' * 32\n    
smc_handler.add_header(\n        shard_id=0,\n        period=current_period,\n        chunk_root=CHUNK_ROOT_1_0,\n        private_key=NotaryAccount(0).private_key,\n    )\n    CHUNK_ROOT_1_7 = b'\\x17' * 32\n    smc_handler.add_header(\n        shard_id=7,\n        period=current_period,\n        chunk_root=CHUNK_ROOT_1_7,\n        private_key=NotaryAccount(7).private_key,\n    )\n    CHUNK_ROOT_1_3 = b'\\x13' * 32\n    smc_handler.add_header(\n        shard_id=3,\n        period=current_period,\n        chunk_root=CHUNK_ROOT_1_3,\n        private_key=NotaryAccount(3).private_key,\n    )\n    mine(w3, 1)\n    # Check that add header log was successfully emitted\n    assert shard_tracker.is_new_header_added(period=current_period)\n\n    # Check that there have not been enough votes yet in shard 0\n    assert not shard_tracker.has_enough_vote(period=current_period)\n    # Submit three votes in shard 0 and one vote in shard 7\n    for sample_index in range(3):\n        pool_index = sampling(smc_handler, 0)[sample_index]\n        smc_handler.submit_vote(\n            shard_id=0,\n            period=current_period,\n            chunk_root=CHUNK_ROOT_1_0,\n            index=sample_index,\n            private_key=NotaryAccount(pool_index).private_key,\n        )\n        mine(w3, 1)\n    sample_index = 0\n    pool_index = sampling(smc_handler, 7)[sample_index]\n    smc_handler.submit_vote(\n        shard_id=7,\n        period=current_period,\n        chunk_root=CHUNK_ROOT_1_7,\n        index=sample_index,\n        private_key=NotaryAccount(pool_index).private_key,\n    )\n    mine(w3, 1)\n    # Check that there have not been enough votes yet in shard 0\n    # Only three votes in shard 0 while four are required\n    assert not shard_tracker.has_enough_vote(period=current_period)\n    # Cast the fourth vote\n    sample_index = 3\n    pool_index = sampling(smc_handler, 0)[sample_index]\n    smc_handler.submit_vote(\n        shard_id=0,\n        period=current_period,\n        
chunk_root=CHUNK_ROOT_1_0,\n        index=sample_index,\n        private_key=NotaryAccount(pool_index).private_key,\n    )\n    mine(w3, 1)\n    # Check that there are enough votes now in shard 0\n    assert shard_tracker.has_enough_vote(period=current_period)\n    # Proceed to next period\n    fast_forward(smc_handler, 1)\n\n    # Go back and check the status of header and vote counts in the last period\n    current_period = w3.eth.blockNumber // config['PERIOD_LENGTH']\n    assert shard_tracker.is_new_header_added(period=(current_period - 1))\n    assert shard_tracker.has_enough_vote(period=(current_period - 1))\n\n    # Deregister\n    smc_handler.deregister_notary(private_key=NotaryAccount(0).private_key)\n    mine(w3, 1)\n    # Check that deregistration log was/was not emitted accordingly\n    assert shard_tracker.is_notary_deregistered(NotaryAccount(0).checksum_address)\n    assert not shard_tracker.is_notary_deregistered(NotaryAccount(5).checksum_address)\n\n    # Fast forward to the end of lockup\n    fast_forward(smc_handler, smc_handler.config['NOTARY_LOCKUP_LENGTH'] + 1)\n    # Release\n    smc_handler.release_notary(private_key=NotaryAccount(0).private_key)\n    mine(w3, 1)\n    # Check that log was successfully emitted\n    assert shard_tracker.is_notary_released(NotaryAccount(0).checksum_address)\n"
  },
  {
    "path": "tests/handler/test_smc_handler.py",
    "content": "import logging\n\nimport pytest\n\nfrom sharding.handler.utils.smc_handler_utils import (\n    make_call_context,\n    make_transaction_context,\n)\n\n\nZERO_ADDR = b'\\x00' * 20\n\nlogger = logging.getLogger('evm.chain.sharding.mainchain_handler.SMC')\n\n\ndef test_make_transaction_context():\n    transaction_context = make_transaction_context(\n        nonce=0,\n        gas=10000,\n    )\n    assert 'nonce' in transaction_context\n    assert 'gas' in transaction_context\n    assert 'chainId' in transaction_context\n    with pytest.raises(ValueError):\n        make_transaction_context(\n            nonce=None,\n            gas=10000,\n        )\n    with pytest.raises(ValueError):\n        make_transaction_context(\n            nonce=0,\n            gas=None,\n        )\n\n\ndef test_make_call_context():\n    call_context = make_call_context(\n        sender_address=ZERO_ADDR,\n        gas=1000,\n    )\n    assert 'from' in call_context\n    assert 'gas' in call_context\n    with pytest.raises(ValueError):\n        make_call_context(\n            sender_address=None,\n            gas=1000,\n        )\n"
  },
  {
    "path": "tests/handler/utils/__init__.py",
    "content": ""
  },
  {
    "path": "tests/handler/utils/config.py",
    "content": "from cytoolz import (\n    merge,\n)\n\nfrom sharding.contracts.utils.config import (\n    get_sharding_config,\n)\n\n\ndef get_sharding_testing_config():\n    REPLACED_PARAMETERS = {\n        'SHARD_COUNT': 10,\n        'PERIOD_LENGTH': 10,\n        'COMMITTEE_SIZE': 6,\n        'QUORUM_SIZE': 4,\n        'NOTARY_LOCKUP_LENGTH': 30,\n    }\n    return merge(\n        get_sharding_config(),\n        REPLACED_PARAMETERS,\n    )\n"
  },
  {
    "path": "tools/vyper_compile_script.py",
    "content": "import argparse\nimport json\nimport os\n\nfrom vyper import compiler\n\n\ndef generate_compiled_json(file_path: str) -> None:\n    with open(file_path) as f:\n        vmc_code = f.read()\n    abi = compiler.mk_full_signature(vmc_code)\n    bytecode = compiler.compile(vmc_code)\n    bytecode_hex = '0x' + bytecode.hex()\n    contract_json = {\n        'abi': abi,\n        'bytecode': bytecode_hex,\n    }\n    # Write the ABI and bytecode next to the contract source as <name>.json\n    basename = os.path.basename(file_path)\n    dirname = os.path.dirname(file_path)\n    contract_name = basename.split('.')[0]\n    with open(os.path.join(dirname, '{}.json'.format(contract_name)), 'w') as f_write:\n        json.dump(contract_json, f_write)\n\n\ndef main() -> None:\n    parser = argparse.ArgumentParser()\n    parser.add_argument(\"path\", type=str, help=\"the path of the contract\")\n    args = parser.parse_args()\n    path = args.path\n    generate_compiled_json(path)\n\n\nif __name__ == '__main__':\n    main()\n"
  },
  {
    "path": "tox.ini",
    "content": "[tox]\nenvlist=\n    py{35,36}-{contract,handler}\n    lint{35,36}\n\n[flake8]\nmax-line-length= 100\nexclude=\nignore=\n\n[testenv]\nusedevelop=True\npassenv =\n    PYTEST_ADDOPTS\n    TRAVIS_EVENT_TYPE\ncommands=\n    contract: py.test {posargs:tests/contract/}\n    handler: py.test {posargs:tests/handler/}\nextras =\n    coincurve\ndeps = -r{toxinidir}/requirements-dev.txt\nbasepython =\n    py35: python3.5\n    py36: python3.6\n\n[testenv:lint35]\nbasepython=python3.5\nsetenv=MYPYPATH={toxinidir}:{toxinidir}/stubs\ncommands=\n    flake8 {toxinidir}/sharding --exclude=\"{toxinidir}/sharding/contracts/*.v.py\"\n    flake8 {toxinidir}/tests\n    mypy --follow-imports=silent --ignore-missing-imports --disallow-incomplete-defs --disallow-untyped-defs sharding tools\n\n[testenv:lint36]\nbasepython=python3.6\nsetenv=MYPYPATH={toxinidir}:{toxinidir}/stubs\ncommands=\n    flake8 {toxinidir}/sharding --exclude=\"{toxinidir}/sharding/contracts/*.v.py\"\n    flake8 {toxinidir}/tests\n    mypy --follow-imports=silent --ignore-missing-imports --disallow-incomplete-defs --disallow-untyped-defs sharding tools\n"
  }
]