[
  {
    "path": ".dockerignore",
    "content": "devtools/chain/data\ndevtools/dex\n.github\ndocs\ntests/e2e/node_modules\n"
  },
  {
    "path": ".github/CODEOWNERS",
    "content": "/ @nervosnetwork/muta-dev-team"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/bug-report.md",
    "content": "---\nname: Bug Report\nabout: Report a bug\nlabels: t:bug\n---\n\n<!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks!\n-->\n\n**What happened**:\n\n**What you expected to happen**:\n\n**How to reproduce it (as minimally and precisely as possible)**:\n\n**Anything else we need to know?**:\n\n**Environment**:\n\n- MutaChain version or commit hash (`MutaChain -V`):\n- OS (e.g: `cat /etc/os-release`):\n- Kernel (e.g. `uname -a`):\n- Others:\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/feature.md",
    "content": "---\nname: Feature Request\nabout: Suggest a feature to the Muta-Chain project\nlabels: t:feature\n---\n\n<!-- Please only use this template for submitting enhancement requests -->\n\n**What would you like to be added**:\n\n**Why is this needed**:\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/help.md",
    "content": "---\nname: Help me\nabout: What kind of help do you want?\nlabels: t:help\n---\n"
  },
  {
    "path": ".github/PULL_REQUEST_TEMPLATE.md",
    "content": "<!--  Thanks for sending a pull request! -->\n<!--  Have I run `make ci`? -->\n\n**What this PR does / why we need it**:\n\n\n**Which issue(s) this PR fixes**:\n<!--\n*Automatically closes linked issue when PR is merged.\nUsage: `Fixes #<issue number>`, or `Fixes (paste link of issue)`.\n-->\nFixes #\n\n\n**Which docs this PR relation**:\n\nRef #\n\n\n**Which toolchain this PR adaption**:\n\nNo Breaking Change\n\n\n**Special notes for your reviewer**:\n"
  },
  {
    "path": ".github/semantic.yml",
    "content": "# By default types specified in commitizen/conventional-commit-types is used.\n# See: https://github.com/commitizen/conventional-commit-types/blob/v3.0.0/index.json\n# You can override the valid types\n\n# Angular\ntypes:\n    - build # Changes that affect the build system or external dependencies (example scopes: gulp, broccoli, npm)\n    - ci # Changes to our CI configuration files and scripts (example scopes: Travis, Circle, BrowserStack, SauceLabs)\n    - docs # Documentation only changes\n    - feat # A new feature\n    - fix # A bug fix\n    - perf # A code change that improves performance\n    - refactor # A code change that neither fixes a bug nor adds a feature\n    - style # Changes that do not affect the meaning of the code (white-space, formatting, missing semi-colons, etc)\n    - test # Adding missing tests or correcting existing tests\n"
  },
  {
    "path": ".gitignore",
    "content": "# Generated by Cargo\n# will have compiled files and executables\n/target/\n\n# These are backup files generated by rustfmt\n**/*.rs.bk\n\n# Added by cargo\n#\n# already existing elements are commented out\n\n/target\n#**/*.rs.bk\n\n# OS\n.DS_Store\n\n# IDE\n.idea/\n.vscode/\n\n# dev\ndevtools/chain/data\n\ntests/e2e/node_modules\ntests/e2e/yarn-error.log\n\n# rocksdb\n**/rocksdb/\nlogs/\n\n# cargo.lock\nCargo.lock\n\n# free space, you can store anything you want here\nfree-space\n\nbyzantine/tests/node_modules\n"
  },
  {
    "path": ".helmignore",
    "content": "# Patterns to ignore when building packages.\n# This supports shell glob matching, relative path matching, and\n# negation (prefixed with !). Only one pattern per line.\n.DS_Store\n# Common VCS dirs\n.git/\n.gitignore\n.bzr/\n.bzrignore\n.hg/\n.hgignore\n.svn/\n# Common backup files\n*.swp\n*.bak\n*.tmp\n*~\n# Various IDEs\n.project\n.idea/\n*.tmproj\n*.png\n\n# known compile time folders\ntarget/\nnode_modules/\nvendor/"
  },
  {
    "path": "CHANGELOG/CHANGELOG-0.1.md",
    "content": "\n\n## [0.1.2-beta](https://github.com/nervosnetwork/muta/compare/v0.1.2-beta2...v0.1.2-beta) (2020-06-04)\n\n\n\n## [0.1.2-beta2](https://github.com/nervosnetwork/muta/compare/v0.1.2-beta1...v0.1.2-beta2) (2020-06-03)\n\n\n### Features\n\n* supported storage metrics ([#307](https://github.com/nervosnetwork/muta/issues/307)) ([2531b8d](https://github.com/nervosnetwork/muta/commit/2531b8da8e8f2a839484adef62dd93f1deff12dd))\n\n\n\n## [0.1.2-beta1](https://github.com/nervosnetwork/muta/compare/v0.1.0-rc.2-huobi...v0.1.2-beta1) (2020-06-01)\n\n\n### Bug Fixes\n\n* **ci:** Increase timeout in ci ([#262](https://github.com/nervosnetwork/muta/issues/262)) ([a12124a](https://github.com/nervosnetwork/muta/commit/a12124a115512196894a7ca88fc42555db927666))\n* **mempool:** check exsit before insert a transaction ([#257](https://github.com/nervosnetwork/muta/issues/257)) ([be3c139](https://github.com/nervosnetwork/muta/commit/be3c13929d2a59f21655b040aa6738c3d43db611))\n* **network:** broken users_cast ([#261](https://github.com/nervosnetwork/muta/issues/261)) ([f36eabd](https://github.com/nervosnetwork/muta/commit/f36eabdc5040bc5cbf0d2011c942867150534a41))\n* **network:** reconnection fialure ([#273](https://github.com/nervosnetwork/muta/issues/273)) ([9f594b8](https://github.com/nervosnetwork/muta/commit/9f594b8af12e1810bd0cbf23f20ca718d96f6e3a))\n* reboot when the diff between height and exec_height more than one ([#267](https://github.com/nervosnetwork/muta/issues/267)) ([e8f8595](https://github.com/nervosnetwork/muta/commit/e8f85958d85e3363fccbfde3971684ebf2fceb4d))\n* **sync:** Avoid requesting redundant transactions ([#259](https://github.com/nervosnetwork/muta/issues/259)) ([8ece029](https://github.com/nervosnetwork/muta/commit/8ece0299fe185667ac23fed92d8c2f156c0e2c5b))\n* binding store type should return Option None instead of panic when get none ([#238](https://github.com/nervosnetwork/muta/issues/238)) 
([54bdbb9](https://github.com/nervosnetwork/muta/commit/54bdbb93df1a1a85a83814dcb29461acf3645d10))\n* **config:** use serde(default) for rocksdb conf ([#229](https://github.com/nervosnetwork/muta/issues/229)) ([2a03e73](https://github.com/nervosnetwork/muta/commit/2a03e73c77807e80020c50bb287adf4d428632e5))\n* **storage:** fix rocksdb too many open files error ([#228](https://github.com/nervosnetwork/muta/issues/228)) ([96c32cd](https://github.com/nervosnetwork/muta/commit/96c32cd7956220beddca33b22d4663a675573ba9))\n* **sync:** set crypto info when synchronization ([#235](https://github.com/nervosnetwork/muta/issues/235)) ([84ccfc1](https://github.com/nervosnetwork/muta/commit/84ccfc1d8422265028ad7a0b460b4e297d161fe3))\n* docker compose configs ([#210](https://github.com/nervosnetwork/muta/issues/210)) ([acc5265](https://github.com/nervosnetwork/muta/commit/acc52653d304ac5cd25a9d643b263a2f462f7d43))\n* hang when kill it ([#225](https://github.com/nervosnetwork/muta/issues/225)) ([dc51240](https://github.com/nervosnetwork/muta/commit/dc512405f32854f165f3145c01d022bca4fff93b))\n* panic when start ([#214](https://github.com/nervosnetwork/muta/issues/214)) ([d2da69b](https://github.com/nervosnetwork/muta/commit/d2da69b5941a88376b64453f7d3c10eca3f67d81))\n* **muta:** hangs up on one cpu core ([#203](https://github.com/nervosnetwork/muta/issues/203)) ([555dd9e](https://github.com/nervosnetwork/muta/commit/555dd9e694fda043be01f90c91396efd7fe0ace5))\n\n\n### Features\n\n* split monitor network url  ([#300](https://github.com/nervosnetwork/muta/issues/300)) ([1237354](https://github.com/nervosnetwork/muta/commit/12373544598d0dae852321cbe3b4e8dab5c70e54))\n* supported mempool monitor ([#298](https://github.com/nervosnetwork/muta/issues/298)) ([cc7fdfa](https://github.com/nervosnetwork/muta/commit/cc7fdfa7a7c99466d76d4fe9c1a3537ab8754837))\n* supported new metrics ([#294](https://github.com/nervosnetwork/muta/issues/294)) 
([e59364a](https://github.com/nervosnetwork/muta/commit/e59364a7759960d8a3279dc78844965f54f4bf62))\n* **apm:** add api get_block metrics ([#276](https://github.com/nervosnetwork/muta/issues/276)) ([6ea21e3](https://github.com/nervosnetwork/muta/commit/6ea21e3e0fe08898264f13938cf849c197531afa))\n* **apm:** Add opentracing ([#270](https://github.com/nervosnetwork/muta/issues/270)) ([cece21d](https://github.com/nervosnetwork/muta/commit/cece21d8e865223c8679e54d0253ced70dab4c0a))\n* **apm:** tracing height and round in OverlordMsg ([#287](https://github.com/nervosnetwork/muta/issues/287)) ([a8c09ff](https://github.com/nervosnetwork/muta/commit/a8c09ff363e8caac9c0977db2fc6cffb782961d7))\n* **ci:** add e2e ([#236](https://github.com/nervosnetwork/muta/issues/236)) ([3058722](https://github.com/nervosnetwork/muta/commit/3058722081084b7cb8f423c26eba9e88707fca18))\n* **consensus:** add proof check logic for sync and consensus ([#224](https://github.com/nervosnetwork/muta/issues/224)) ([b19502f](https://github.com/nervosnetwork/muta/commit/b19502f48e6d314717a8a2286ada58f6097c6f31))\n* **consensus:** change validator list ([#211](https://github.com/nervosnetwork/muta/issues/211)) ([bb04d2c](https://github.com/nervosnetwork/muta/commit/bb04d2c961110276d38cf0e07239d5e72e8125a8))\n* **consensus:** integrate trust metric to consensus ([#244](https://github.com/nervosnetwork/muta/issues/244)) ([3dd6bc1](https://github.com/nervosnetwork/muta/commit/3dd6bc1796ca3e6c76cb99beefd5911d35a5e8ee))\n* **mempool:** integrate trust metric ([#245](https://github.com/nervosnetwork/muta/issues/245)) ([49474fd](https://github.com/nervosnetwork/muta/commit/49474fddde3ffc45d564544bb5887bb09a37da1d))\n* **metric:** introduce metric using prometheus ([#271](https://github.com/nervosnetwork/muta/issues/271)) ([3d1dc4f](https://github.com/nervosnetwork/muta/commit/3d1dc4fcf196b8616f41dc4cd2a5ba0c0a5ab422))\n* **metrics:** mempool, consensus and sync 
([#275](https://github.com/nervosnetwork/muta/issues/275)) ([12e4918](https://github.com/nervosnetwork/muta/commit/12e4918d9925868407f854af29410d8ecafe4d48))\n* **network:** add metrics ([#274](https://github.com/nervosnetwork/muta/issues/274)) ([56a9b62](https://github.com/nervosnetwork/muta/commit/56a9b62251106d44df33c43d4590575df25df61a))\n* **network:** add trace header to network msg ([#281](https://github.com/nervosnetwork/muta/issues/281)) ([6509cbe](https://github.com/nervosnetwork/muta/commit/6509cbec2f700238b2259943212e0968b58404ce))\n* **network:** peer trust metric ([#231](https://github.com/nervosnetwork/muta/issues/231)) ([5abefeb](https://github.com/nervosnetwork/muta/commit/5abefebddacfb58415f2a319098bb164ceaa8c81))\n* add tx hook in framework ([#218](https://github.com/nervosnetwork/muta/issues/218)) ([cdeb9fd](https://github.com/nervosnetwork/muta/commit/cdeb9fd1e18e198636fa59d91aead85d65cf9852))\n* re-execute blocks to recover current status ([#222](https://github.com/nervosnetwork/muta/issues/222)) ([1cd7cb6](https://github.com/nervosnetwork/muta/commit/1cd7cb6d4fbc599bac65bd2c36b507088a3fa041))\n* **network:** rpc remote server error response ([#205](https://github.com/nervosnetwork/muta/issues/205)) ([bb993ac](https://github.com/nervosnetwork/muta/commit/bb993ac1f5fe44a2f6a72c8718572accacb27dc3))\n* **sync:** Split a transaction in a block into multiple requests ([#221](https://github.com/nervosnetwork/muta/issues/221)) ([0bbf43c](https://github.com/nervosnetwork/muta/commit/0bbf43c49d2df49d70b4bc816ac24c3bc3603a1a))\n* add actix payload size limit config ([#204](https://github.com/nervosnetwork/muta/issues/204)) ([97319d6](https://github.com/nervosnetwork/muta/commit/97319d6d22c8143ba35c3fe42d56f2cfbc131e37))\n\n\n### BREAKING CHANGES\n\n* **network:** change rpc response\n\n* change(network): bump transmitter protocol version\n\n\n\n# [0.1.0-rc.2-huobi](https://github.com/nervosnetwork/muta/compare/v0.0.1-rc1-huobi...v0.1.0-rc.2-huobi) 
(2020-02-24)\n\n\n### Bug Fixes\n\n* **mempool:** fix repeat txs, add flush_incumbent_queue ([#189](https://github.com/nervosnetwork/muta/issues/189)) ([e0db745](https://github.com/nervosnetwork/muta/commit/e0db745419c5ada3d6e9dc4416945a0775a8f18b))\n* **muta:** hangs up running on single core environment ([#201](https://github.com/nervosnetwork/muta/issues/201)) ([09f5b4e](https://github.com/nervosnetwork/muta/commit/09f5b4ed70a519155933f7fd4c2015ff512dfdb1))\n* block hash from bytes ([#192](https://github.com/nervosnetwork/muta/issues/192)) ([7ca0af4](https://github.com/nervosnetwork/muta/commit/7ca0af46edbd00e4ba43e8646e77fa41aba781cf))\n\n\n### Features\n\n* check size and cycle limit when insert tx into mempool ([#195](https://github.com/nervosnetwork/muta/issues/195)) ([92bdf2d](https://github.com/nervosnetwork/muta/commit/92bdf2d5147502e1d250fdae47b8ae2c2cfce23f))\n* remove redundant wal transactions when commit ([#197](https://github.com/nervosnetwork/muta/issues/197)) ([3aff1db](https://github.com/nervosnetwork/muta/commit/3aff1dbb2dcdabaaf9cbecb9c3e9757a2c737354))\n* Supports actix in tokio ([#200](https://github.com/nervosnetwork/muta/issues/200)) ([266c1cb](https://github.com/nervosnetwork/muta/commit/266c1cb2cf6223759eba4ca9771ee21b244db3a4))\n* **api:** Supports configuring the max number of connections. 
([#194](https://github.com/nervosnetwork/muta/issues/194)) ([6cbdd26](https://github.com/nervosnetwork/muta/commit/6cbdd267b7ff56eefbe23bffc8e4dc589272111d))\n* **service:** upgrade asset service ([#150](https://github.com/nervosnetwork/muta/issues/150)) ([8925390](https://github.com/nervosnetwork/muta/commit/8925390b59353d853dd1266cdcfe6db1258a8296))\n\n\n### Reverts\n\n* Revert \"fix(muta): hangs up running on single core environment (#201)\" (#202) ([28e685a](https://github.com/nervosnetwork/muta/commit/28e685a62b82c1a91699b4495d430b0757e5438d)), closes [#201](https://github.com/nervosnetwork/muta/issues/201) [#202](https://github.com/nervosnetwork/muta/issues/202)\n\n\n\n## [0.0.1-rc1-huobi](https://github.com/nervosnetwork/muta/compare/v0.0.1-rc.1-huobi...v0.0.1-rc1-huobi) (2020-02-15)\n\n\n### Bug Fixes\n\n* **ci:** fail to install sccache after new rust-toolchain ([#68](https://github.com/nervosnetwork/muta/issues/68)) ([f961415](https://github.com/nervosnetwork/muta/commit/f961415803ae6d38b70e97a810f33a1b60639d43))\n* **consensus:** check logs bloom when check block ([#168](https://github.com/nervosnetwork/muta/issues/168)) ([0984989](https://github.com/nervosnetwork/muta/commit/09849893270cc0908e2ee965e7e8b7c46ada0f16))\n* **consensus:** empty block receipts root ([#61](https://github.com/nervosnetwork/muta/issues/61)) ([89ed4d2](https://github.com/nervosnetwork/muta/commit/89ed4d2c4a708f278e7cd777c562f1f1fb5a9755))\n* **consensus:** encode overlord message and verify signature ([#39](https://github.com/nervosnetwork/muta/issues/39)) ([b11e69e](https://github.com/nervosnetwork/muta/commit/b11e69e49ed195d0d23f22b6abf1387f4a4c0c94))\n* **consensus:** fix check state roots ([#107](https://github.com/nervosnetwork/muta/issues/107)) ([cf45c3b](https://github.com/nervosnetwork/muta/commit/cf45c3ba39eb65bdb012165e232352a9187a6f0d))\n* **consensus:** Get authority list returns none. 
([#4](https://github.com/nervosnetwork/muta/issues/4)) ([2a7eb3c](https://github.com/nervosnetwork/muta/commit/2a7eb3c26fade5a065ec2435b4ba46b6c16f223a))\n* **consensus:** state root can not be clear ([#140](https://github.com/nervosnetwork/muta/issues/140)) ([4ea1df4](https://github.com/nervosnetwork/muta/commit/4ea1df425620482f36daf61b4b50edb83807efdd))\n* **consensus:** sync txs context no session id ([#167](https://github.com/nervosnetwork/muta/issues/167)) ([53136c3](https://github.com/nervosnetwork/muta/commit/53136c3dfdf0e7b29762cd72f51eeb35d52804c2))\n* **doc:** fix graphql_api doc link and doc-api build sh ([#161](https://github.com/nervosnetwork/muta/issues/161)) ([e67e2b2](https://github.com/nervosnetwork/muta/commit/e67e2b24bf0609c263f59381a83fcf04d2227583))\n* **executor:** wrong hook logic ([#127](https://github.com/nervosnetwork/muta/issues/127)) ([8c6a246](https://github.com/nervosnetwork/muta/commit/8c6a246a1b64a197371305856148b034320f1fa0))\n* **framework/executor:** Catch any errors in the call. ([#92](https://github.com/nervosnetwork/muta/issues/92)) ([739a126](https://github.com/nervosnetwork/muta/commit/739a126c86643b28e1c47aef87d8bd803b9fe8d9))\n* **keypair:** Use hex encoding common_ref. ([#79](https://github.com/nervosnetwork/muta/issues/79)) ([abbce4c](https://github.com/nervosnetwork/muta/commit/abbce4c15919f45f824bd4967ea64f8234548765))\n* **makefile:** Docker push to the correct image ([#146](https://github.com/nervosnetwork/muta/issues/146)) ([05f6396](https://github.com/nervosnetwork/muta/commit/05f6396f1786b46b4cf9c41e3f700b37ebaddb68))\n* **mempool:** Always get the latest epoch id when `package`. 
([#30](https://github.com/nervosnetwork/muta/issues/30)) ([9a77ebf](https://github.com/nervosnetwork/muta/commit/9a77ebf9ecba6323cc81cd094774e32fd28b946e))\n* **mempool:** broadcast new transactions ([#32](https://github.com/nervosnetwork/muta/issues/32)) ([086ec7e](https://github.com/nervosnetwork/muta/commit/086ec7eb6ca2c8f6afc14767d51efdb91533f932))\n* **mempool:** Fix concurrent insert bug of mempool ([#19](https://github.com/nervosnetwork/muta/issues/19)) ([515eec2](https://github.com/nervosnetwork/muta/commit/515eec2ab65a2d57a5ca742c774daeb9cef99354))\n* **mempool:** Resize the queue to ensure correct switching. ([#18](https://github.com/nervosnetwork/muta/issues/18)) ([ebf1ae3](https://github.com/nervosnetwork/muta/commit/ebf1ae34861fc48297813cdc465e4d9c99e059d4))\n* **mempool:** sync proposal txs doesn't insert txs at all ([#179](https://github.com/nervosnetwork/muta/issues/179)) ([33f39c5](https://github.com/nervosnetwork/muta/commit/33f39c5bac0235a8261c53327c558864a6149c8a))\n* **network:** dead lock in peer manager ([#24](https://github.com/nervosnetwork/muta/issues/24)) ([a74017a](https://github.com/nervosnetwork/muta/commit/a74017aa9d84b6b862683860e63c000b4048e459))\n* **network:** default rpc timeout to 4 seconds ([#115](https://github.com/nervosnetwork/muta/issues/115)) ([666049c](https://github.com/nervosnetwork/muta/commit/666049c54c8eee8291cc173230caccb35de137ca))\n* **network:** fail to bootstrap if bootstrap isn't start already ([#46](https://github.com/nervosnetwork/muta/issues/46)) ([9dd515a](https://github.com/nervosnetwork/muta/commit/9dd515a3e09f1c158dff6536ed38eb5116f4317f))\n* **network:** give up retry ([#152](https://github.com/nervosnetwork/muta/issues/152)) ([34d052a](https://github.com/nervosnetwork/muta/commit/34d052aaba1684333fdd49f86e54c103064fa2f6))\n* **network:** never reconnect bootstrap again after failure ([#22](https://github.com/nervosnetwork/muta/issues/22)) 
([79d66bd](https://github.com/nervosnetwork/muta/commit/79d66bd06e61ff6ef41c12ada91cf6485482aa43))\n* **network:** NoSessionId Error ([#33](https://github.com/nervosnetwork/muta/issues/33)) ([4761d79](https://github.com/nervosnetwork/muta/commit/4761d797dded9534e0c0b5e43c6e519055542c2c))\n* **network:** rpc memory leak if rpc call future is dropped ([#166](https://github.com/nervosnetwork/muta/issues/166)) ([8476a4b](https://github.com/nervosnetwork/muta/commit/8476a4b85bf3cf923adcd7555cef04ae73a225f1))\n* **sync:** Check the height again after get the lock ([#171](https://github.com/nervosnetwork/muta/issues/171)) ([68164f3](https://github.com/nervosnetwork/muta/commit/68164f3f75d83b9507ee68a099fb712492339edb))\n* **sync:** Flush the memory pool when the storage success ([#165](https://github.com/nervosnetwork/muta/issues/165)) ([3b9cbd5](https://github.com/nervosnetwork/muta/commit/3b9cbd55310993c783b0a5794237df75accf118e))\n* fix overlord not found error ([#95](https://github.com/nervosnetwork/muta/issues/95)) ([0754c64](https://github.com/nervosnetwork/muta/commit/0754c64973f7fca92e49080c3a03a869b43a4c46))\n* Ignore bootstraps when empty. 
([#41](https://github.com/nervosnetwork/muta/issues/41)) ([2b3566b](https://github.com/nervosnetwork/muta/commit/2b3566b4acb91f6086b9cca2b1ea4d2883a75be9))\n\n\n### Features\n\n* **config:** move bls_pub_key config to genesis.toml ([#162](https://github.com/nervosnetwork/muta/issues/162)) ([337b01f](https://github.com/nervosnetwork/muta/commit/337b01fda21fc33f4d4817d93a27d86af9e2b164))\n* **network:** interval report pending data size ([#160](https://github.com/nervosnetwork/muta/issues/160)) ([3c46aca](https://github.com/nervosnetwork/muta/commit/3c46aca4873abf9b8afd01d5f464df57bb1b9b9a))\n* **sync:** Trigger sync after waiting for consensus interval ([#169](https://github.com/nervosnetwork/muta/issues/169)) ([fe355f1](https://github.com/nervosnetwork/muta/commit/fe355f1d7d6359dfa97809f1bc603cb99975ba46))\n* add api schema ([#90](https://github.com/nervosnetwork/muta/issues/90)) ([3f8adfa](https://github.com/nervosnetwork/muta/commit/3f8adfa0a717b055a4455fd102de68003f835bf2))\n* add common_ref argument for keypair tool ([#154](https://github.com/nervosnetwork/muta/issues/154)) ([2651346](https://github.com/nervosnetwork/muta/commit/26513469206aa8a4480c5fffad9d134d5d0e8ded))\n* add panic hook to logger ([#156](https://github.com/nervosnetwork/muta/issues/156)) ([93b65fe](https://github.com/nervosnetwork/muta/commit/93b65feb89502b7d7836d7f4c423db37fbd1ef4f))\n* Extract muta as crate. 
([1b62fe7](https://github.com/nervosnetwork/muta/commit/1b62fe786fbd576b67ea28df3d304d235ae3e94e))\n* Metadata service ([#133](https://github.com/nervosnetwork/muta/issues/133)) ([a588b12](https://github.com/nervosnetwork/muta/commit/a588b12de4f3c0de666b66e2a5dea65d71977f5f))\n* spawn sync txs in check epoch ([6dca1dd](https://github.com/nervosnetwork/muta/commit/6dca1ddcd9256a3061f132a5abc5d784d466c168))\n* support specify module log level via config ([#105](https://github.com/nervosnetwork/muta/issues/105)) ([c06061b](https://github.com/nervosnetwork/muta/commit/c06061b4ccd755177385dfee000783e2b11b0dcd))\n* Update juniper, supports async ([#149](https://github.com/nervosnetwork/muta/issues/149)) ([cbabf50](https://github.com/nervosnetwork/muta/commit/cbabf507c25ee8feb8a57de408bc97efc8a4a4ab))\n* update overlord with brake engine ([#159](https://github.com/nervosnetwork/muta/issues/159)) ([8cd886a](https://github.com/nervosnetwork/muta/commit/8cd886a79fec934a53d409a27de941f16166c176)), closes [#156](https://github.com/nervosnetwork/muta/issues/156) [#158](https://github.com/nervosnetwork/muta/issues/158)\n* **api:** Add the exec_height field to the block ([#138](https://github.com/nervosnetwork/muta/issues/138)) ([417153c](https://github.com/nervosnetwork/muta/commit/417153c632793c7ac4e7bc3ffa5b2832dd2dbe66))\n* **binding-macro:** service method supports none payload and none response ([#103](https://github.com/nervosnetwork/muta/issues/103)) ([3a5783e](https://github.com/nervosnetwork/muta/commit/3a5783eadd1090cf739d4fdbe94f049115eb65f0))\n* **consensus:** develop aggregate crypto with overlord ([#60](https://github.com/nervosnetwork/muta/issues/60)) ([2bc0869](https://github.com/nervosnetwork/muta/commit/2bc0869e928b35c674b4cafdf48540298752b5b5))\n* **core/binding:** Implementation of service state. 
([#48](https://github.com/nervosnetwork/muta/issues/48)) ([301be6f](https://github.com/nervosnetwork/muta/commit/301be6f39379bd3826b5f605c999ce107f7404e4))\n* **core/binding-macro:** Add `read` and `write` proc-macro. ([#49](https://github.com/nervosnetwork/muta/issues/49)) ([687b6e1](https://github.com/nervosnetwork/muta/commit/687b6e1e1a960f679394843c42b861981828d8aa))\n* **core/binding-macro:** Add cycles proc-marco. ([#52](https://github.com/nervosnetwork/muta/issues/52)) ([e2289a2](https://github.com/nervosnetwork/muta/commit/e2289a2481510b59c18e37d0fc8bedd9f5d4537e))\n* **core/binding-macro:** Support for returning a struct. ([#70](https://github.com/nervosnetwork/muta/issues/70)) ([e13b1ff](https://github.com/nervosnetwork/muta/commit/e13b1ff7834279de9c2df5a0df6967035b7fb8b3))\n* **framework:** add ExecutorParams into hook method ([#116](https://github.com/nervosnetwork/muta/issues/116)) ([8036bd6](https://github.com/nervosnetwork/muta/commit/8036bd6f9be1f49eedbc40bbc260ad82952c2e71))\n* **framework:** add extra: Option<Bytes> to ServiceContext ([#118](https://github.com/nervosnetwork/muta/issues/118)) ([694c4a3](https://github.com/nervosnetwork/muta/commit/694c4a34f32dc1ba4940db19e304de7a927e1531))\n* **framework:** add tx_hash, nonce to ServiceContext ([#111](https://github.com/nervosnetwork/muta/issues/111)) ([352f71f](https://github.com/nervosnetwork/muta/commit/352f71fb3b8b024d533d26c7a344fad801b7a91c))\n* **framework/executor:** create service genesis from config ([#104](https://github.com/nervosnetwork/muta/issues/104)) ([8988ccb](https://github.com/nervosnetwork/muta/commit/8988ccb3e5cb2a25bfeabe93c5a63ac1600290a2))\n* **graphql:** Modify the API to fit the framework data structure. 
([#74](https://github.com/nervosnetwork/muta/issues/74)) ([a1ca2b0](https://github.com/nervosnetwork/muta/commit/a1ca2b0d68e32e335d8d388b70bca83137519f5a))\n* **muta:** flush metadata while commit  ([#137](https://github.com/nervosnetwork/muta/issues/137)) ([383a481](https://github.com/nervosnetwork/muta/commit/383a481c348efdf73fd690b42b2430fca6d9a0db))\n* **muta:** link up metadata service with muta ([#136](https://github.com/nervosnetwork/muta/issues/136)) ([ba65b80](https://github.com/nervosnetwork/muta/commit/ba65b80dffd128f12336b44d4e80ed40cced8e75))\n* **protocol/traits:** Add traits of binding. ([#47](https://github.com/nervosnetwork/muta/issues/47)) ([c6b85ee](https://github.com/nervosnetwork/muta/commit/c6b85ee7bee5b14c5da1676ff44d743c031a0fa6))\n* **protocol/types:** Add cycles_price for raw_transaction. ([#46](https://github.com/nervosnetwork/muta/issues/46)) ([55f64a4](https://github.com/nervosnetwork/muta/commit/55f64a49634061ca05c75cbf5923f183fc83936d))\n* **sync:** Wait for the execution queue. ([#132](https://github.com/nervosnetwork/muta/issues/132)) ([a8d2013](https://github.com/nervosnetwork/muta/commit/a8d2013991cc6b5b579429954c8411c7954b1da4))\n* add end to end test ([#42](https://github.com/nervosnetwork/muta/issues/42)) ([e84756d](https://github.com/nervosnetwork/muta/commit/e84756d1734ad58943309c3c2299393f5a2022e4))\n* Extract muta as crate. 
([#75](https://github.com/nervosnetwork/muta/issues/75)) ([fc576ea](https://github.com/nervosnetwork/muta/commit/fc576eaa67a3b4b4fa459b0ab970251d63b06b4f)), closes [#46](https://github.com/nervosnetwork/muta/issues/46) [#47](https://github.com/nervosnetwork/muta/issues/47) [#48](https://github.com/nervosnetwork/muta/issues/48) [#49](https://github.com/nervosnetwork/muta/issues/49) [#52](https://github.com/nervosnetwork/muta/issues/52) [#51](https://github.com/nervosnetwork/muta/issues/51) [#55](https://github.com/nervosnetwork/muta/issues/55) [#58](https://github.com/nervosnetwork/muta/issues/58) [#56](https://github.com/nervosnetwork/muta/issues/56) [#64](https://github.com/nervosnetwork/muta/issues/64) [#65](https://github.com/nervosnetwork/muta/issues/65) [#70](https://github.com/nervosnetwork/muta/issues/70) [#71](https://github.com/nervosnetwork/muta/issues/71) [#72](https://github.com/nervosnetwork/muta/issues/72) [#73](https://github.com/nervosnetwork/muta/issues/73) [#43](https://github.com/nervosnetwork/muta/issues/43) [#54](https://github.com/nervosnetwork/muta/issues/54) [#53](https://github.com/nervosnetwork/muta/issues/53) [#57](https://github.com/nervosnetwork/muta/issues/57) [#45](https://github.com/nervosnetwork/muta/issues/45) [#62](https://github.com/nervosnetwork/muta/issues/62) [#63](https://github.com/nervosnetwork/muta/issues/63) [#66](https://github.com/nervosnetwork/muta/issues/66) [#61](https://github.com/nervosnetwork/muta/issues/61) [#67](https://github.com/nervosnetwork/muta/issues/67) [#68](https://github.com/nervosnetwork/muta/issues/68) [#60](https://github.com/nervosnetwork/muta/issues/60) [#46](https://github.com/nervosnetwork/muta/issues/46) [#47](https://github.com/nervosnetwork/muta/issues/47) [#48](https://github.com/nervosnetwork/muta/issues/48) [#49](https://github.com/nervosnetwork/muta/issues/49) [#52](https://github.com/nervosnetwork/muta/issues/52) [#51](https://github.com/nervosnetwork/muta/issues/51) 
[#55](https://github.com/nervosnetwork/muta/issues/55) [#58](https://github.com/nervosnetwork/muta/issues/58) [#56](https://github.com/nervosnetwork/muta/issues/56) [#64](https://github.com/nervosnetwork/muta/issues/64) [#65](https://github.com/nervosnetwork/muta/issues/65) [#70](https://github.com/nervosnetwork/muta/issues/70) [#72](https://github.com/nervosnetwork/muta/issues/72) [#74](https://github.com/nervosnetwork/muta/issues/74)\n* metrics logger ([#43](https://github.com/nervosnetwork/muta/issues/43)) ([d633309](https://github.com/nervosnetwork/muta/commit/d6333091959da6ab0a12630282f6ea783d509319))\n* support consensus tracing ([#53](https://github.com/nervosnetwork/muta/issues/53)) ([03942f0](https://github.com/nervosnetwork/muta/commit/03942f08cfdcc573d7feef3a1111e59f63d077f1))\n* **api:** make API more user-friendly ([#38](https://github.com/nervosnetwork/muta/issues/38)) ([ba33467](https://github.com/nervosnetwork/muta/commit/ba33467e52c114576b82850e11662d168ede293a))\n* **mempool:** implement cached batch txs broadcast ([#20](https://github.com/nervosnetwork/muta/issues/20)) ([d2af811](https://github.com/nervosnetwork/muta/commit/d2af811bb99becc9600d784ce19e021fec11627d))\n* **sync:** synchronization epoch ([#9](https://github.com/nervosnetwork/muta/issues/9)) ([fb4bf0d](https://github.com/nervosnetwork/muta/commit/fb4bf0d7c4bde7c86d1b09f469037ff1219f15fa)), closes [#17](https://github.com/nervosnetwork/muta/issues/17) [#18](https://github.com/nervosnetwork/muta/issues/18)\n* add compile and run in README ([#11](https://github.com/nervosnetwork/muta/issues/11)) ([1058322](https://github.com/nervosnetwork/muta/commit/10583224053ab91c32dbec815cd0a5af6b0dbeb3))\n* add docker ([#31](https://github.com/nervosnetwork/muta/issues/31)) ([8a4386a](https://github.com/nervosnetwork/muta/commit/8a4386ad4c1f66783cada885db9851609b6f5f8d))\n* change rlp in executor to fixed-codec ([#29](https://github.com/nervosnetwork/muta/issues/29)) 
([7f737cd](https://github.com/nervosnetwork/muta/commit/7f737cdfc9353148b945ad52dd5ab3fd46e2c4db))\n* Get balance. ([#28](https://github.com/nervosnetwork/muta/issues/28)) ([8c4a3f9](https://github.com/nervosnetwork/muta/commit/8c4a3f9af8b9e1e8f19cc50b280b66b5d8e270bb))\n* **codec:** Add codec tests and benchmarks ([#22](https://github.com/nervosnetwork/muta/issues/22)) ([dcbe522](https://github.com/nervosnetwork/muta/commit/dcbe522be22596059280f6ef845a6d6f4e798551))\n* **consensus:** develop consensus interfaces ([#21](https://github.com/nervosnetwork/muta/issues/21)) ([62e3c06](https://github.com/nervosnetwork/muta/commit/62e3c063cd4f82efda43ca5c87c042db5adb9abb))\n* **consensus:** develop consensus provider and engine ([#28](https://github.com/nervosnetwork/muta/issues/28)) ([b2ccf9c](https://github.com/nervosnetwork/muta/commit/b2ccf9c84502a6dd476b1737aa9cbb2a283ced32))\n* **consensus:** Execute the transactions on commit. ([#7](https://github.com/nervosnetwork/muta/issues/7)) ([b54e7d2](https://github.com/nervosnetwork/muta/commit/b54e7d2bbd5d0ac45ef0d4c728e398b87a1f5450))\n* **consensus:** joint overlord and chain ([#32](https://github.com/nervosnetwork/muta/issues/32)) ([72cec41](https://github.com/nervosnetwork/muta/commit/72cec41c86824455ad35cfb1da8a246c50731568))\n* **consensus:** mutex lock and timer config ([#45](https://github.com/nervosnetwork/muta/issues/45)) ([cf09687](https://github.com/nervosnetwork/muta/commit/cf09687299b5be39a9c40f13d4b88a496ec7c943))\n* **consensus:** Support transaction executor. ([#6](https://github.com/nervosnetwork/muta/issues/6)) ([e1188f9](https://github.com/nervosnetwork/muta/commit/e1188f9296b3947f833d6bc9a9beff22ebbbf4e7))\n* **executor:** Create genesis. 
([#1](https://github.com/nervosnetwork/muta/issues/1)) ([a1111d8](https://github.com/nervosnetwork/muta/commit/a1111d8db709c62d119edf3238a22dd656e8035f))\n* **graphql:** Support transfer and contract deployment ([#44](https://github.com/nervosnetwork/muta/issues/44)) ([bfcb520](https://github.com/nervosnetwork/muta/commit/bfcb5203fe245e364922d5d8966197a8a8f8d91c))\n* **mempool:** fix fixed_codec ([#25](https://github.com/nervosnetwork/muta/issues/25)) ([c1ac607](https://github.com/nervosnetwork/muta/commit/c1ac607ac9b61f4867c17f69c50dad9797dc1c2b))\n* **mempool:** Remove cycle_limit ([#23](https://github.com/nervosnetwork/muta/issues/23)) ([8a19ae8](https://github.com/nervosnetwork/muta/commit/8a19ae867fd5b82c4fd56a1f8b59a83e24ca5bc0))\n* **native-contract:** Support for asset creation and transfer. ([#37](https://github.com/nervosnetwork/muta/issues/37)) ([1c505fb](https://github.com/nervosnetwork/muta/commit/1c505fbdd57fcb2ef3df3e8b19c65599d77c9bf1))\n* **network:** log connected peer ips ([#23](https://github.com/nervosnetwork/muta/issues/23)) ([1691bfa](https://github.com/nervosnetwork/muta/commit/1691bfa47ac561a2f27243e21b1b2fad2fb64be9))\n* develop merkle root ([#17](https://github.com/nervosnetwork/muta/issues/17)) ([03cec31](https://github.com/nervosnetwork/muta/commit/03cec318645ee49158f09ec59e356210a80f8bbf))\n* Fill in the main function ([#36](https://github.com/nervosnetwork/muta/issues/36)) ([d783f3b](https://github.com/nervosnetwork/muta/commit/d783f3b2d36507a695abd47b303b6c0108e2030b))\n* **mempool:** Develop mempool's tests and benches  ([#9](https://github.com/nervosnetwork/muta/issues/9)) ([5ddd5f4](https://github.com/nervosnetwork/muta/commit/5ddd5f4d0c1fa9630971ade538dcf954b6aa8f54))\n* **mempool:** Implement MemPool interfaces ([#8](https://github.com/nervosnetwork/muta/issues/8)) ([934ce58](https://github.com/nervosnetwork/muta/commit/934ce58b7a7a6b89b65ff931ce5487e553dd927d))\n* **native_contract:** Add an adapter that provides access to the 
world state. ([#27](https://github.com/nervosnetwork/muta/issues/27)) ([3281bea](https://github.com/nervosnetwork/muta/commit/3281beab2d054470b5edf330515df933cc713bb8))\n* **protocol:** Add the mempool traits ([#7](https://github.com/nervosnetwork/muta/issues/7)) ([9f6c19b](https://github.com/nervosnetwork/muta/commit/9f6c19bbfbff6c8f82bb732c3503d757833f837e))\n* **protocol:** Add the underlying data structure. ([#5](https://github.com/nervosnetwork/muta/issues/5)) ([5dae189](https://github.com/nervosnetwork/muta/commit/5dae189104c986348adddd43fbaa47af01781828))\n* **protocol:** Protobuf serialize ([#6](https://github.com/nervosnetwork/muta/issues/6)) ([ff00595](https://github.com/nervosnetwork/muta/commit/ff00595d100e44148b1cc243437798db8233ca2b))\n* **storage:** add storage test ([#18](https://github.com/nervosnetwork/muta/issues/18)) ([f78df5b](https://github.com/nervosnetwork/muta/commit/f78df5b0357eade7855152eee9c79070866477ac))\n* **storage:** Implement memory adapter API ([#11](https://github.com/nervosnetwork/muta/issues/11)) ([b0a8090](https://github.com/nervosnetwork/muta/commit/b0a80901229f85e8cf89bd806dcb32c95ae059b8))\n* **storage:** Implement storage ([#17](https://github.com/nervosnetwork/muta/issues/17)) ([7728b5b](https://github.com/nervosnetwork/muta/commit/7728b5b0307bd58b11671f123f37e3e365b14b97))\n* **types:** Add account structure. 
([#24](https://github.com/nervosnetwork/muta/issues/24)) ([f6b93f0](https://github.com/nervosnetwork/muta/commit/f6b93f0f08b03a20761aef47f08343eb5d8e6a85))\n\n\n### Performance Improvements\n\n* **storage:** cache latest epoch ([#128](https://github.com/nervosnetwork/muta/issues/128)) ([da4d7a9](https://github.com/nervosnetwork/muta/commit/da4d7a92363596b7339518e24c64ab49648749dd))\n\n\n### Reverts\n\n* Revert \"[ᚬdebug-muta] feat(service): Upgrade asset (#181)\" (#182) ([dad3f99](https://github.com/nervosnetwork/muta/commit/dad3f99f7c694eea57b546c6b2169950c5692ea1)), closes [#181](https://github.com/nervosnetwork/muta/issues/181) [#182](https://github.com/nervosnetwork/muta/issues/182)\n* Revert \"feat: Extract muta as crate. (#75)\" (#77) ([3baacc5](https://github.com/nervosnetwork/muta/commit/3baacc5c781615377e9a6ba50cfc7b17dcb0ec6e)), closes [#75](https://github.com/nervosnetwork/muta/issues/75) [#77](https://github.com/nervosnetwork/muta/issues/77)\n\n\n\n# [0.1.0](https://github.com/nervosnetwork/muta/compare/733ee8e6be7649c9aa2d772bb1dc661bd0879917...v0.1.0) (2019-09-22)\n\n\n### Bug Fixes\n\n* **ci:** build on push and pull request ([d28aa55](https://github.com/nervosnetwork/muta/commit/d28aa55f5df240277e2b75e87aa948cdcf11ea7f))\n* **ci:** temporarily amend code to pass lint ([9441236](https://github.com/nervosnetwork/muta/commit/9441236a5107e0042753915ed943b487cd02d6a5))\n* **consensus:** Clear cache of last proposal. ([#199](https://github.com/nervosnetwork/muta/issues/199)) ([f548653](https://github.com/nervosnetwork/muta/commit/f5486531f43fa720171941ad4be5ec7646a269c2))\n* **consensus:** fix lock free too early problem and add state root check ([#277](https://github.com/nervosnetwork/muta/issues/277)) ([7238c5b](https://github.com/nervosnetwork/muta/commit/7238c5bc057bd6c6f31773fa4bd3e06aaea72255))\n* **consensus:** Makes sure that proposer is this node. 
([#281](https://github.com/nervosnetwork/muta/issues/281)) ([d7f4e50](https://github.com/nervosnetwork/muta/commit/d7f4e5081f00a04aee934d0ce700cd107f4f345f))\n* **core-network:** CallbackItemNotFound ([#243](https://github.com/nervosnetwork/muta/issues/243)) ([47365fa](https://github.com/nervosnetwork/muta/commit/47365faf5fa7171dde8951661fa095a6c43bcb1f))\n* **core-network:** false bootstrapped connections ([#275](https://github.com/nervosnetwork/muta/issues/275)) ([26e76f0](https://github.com/nervosnetwork/muta/commit/26e76f0a2879aed3da745529f64ba3828a1cc30e))\n* **core-types:** compilation failure ([#269](https://github.com/nervosnetwork/muta/issues/269)) ([56d8649](https://github.com/nervosnetwork/muta/commit/56d86491f69ab16fd2c76b66b28ad76df78c6ca7))\n* **core/crypto:** pubkey_to_address() consistent with cita ([acb5e63](https://github.com/nervosnetwork/muta/commit/acb5e63ea577429bc94c16a3430035ea139aaf15))\n* **executor:** Save the full node data. ([b57a1c5](https://github.com/nervosnetwork/muta/commit/b57a1c5fa775479b85d1531f7d2dced817de4729))\n* **jsonrpc:** give default value for newFilter ([#289](https://github.com/nervosnetwork/muta/issues/289)) ([17069b4](https://github.com/nervosnetwork/muta/commit/17069b49067dd7335f243d248e3c8d633e455a73))\n* **jsonrpc:** logic error in getTransactionCount ([#290](https://github.com/nervosnetwork/muta/issues/290)) ([464bfdf](https://github.com/nervosnetwork/muta/commit/464bfdf08a9954206bb595b3861c52208fc9630d))\n* **jsonrpc:** make the response compatible with jsonrpc 2.0 spec ([1db5190](https://github.com/nervosnetwork/muta/commit/1db5190bc91d431bacce6bb44a1185b19520c1a2))\n* **jsonrpc:** prefix with 0x by API getTransactionProof ([#295](https://github.com/nervosnetwork/muta/issues/295)) ([b1c0160](https://github.com/nervosnetwork/muta/commit/b1c0160b65fc91e8a2bcfd908943fb238d1101c1))\n* **jsonrpc:** raise error when key not found in state ([#294](https://github.com/nervosnetwork/muta/issues/294)) 
([7a7c294](https://github.com/nervosnetwork/muta/commit/7a7c294df5ae75f50ec0fe3620634c7280f837e7))\n* **jsonrpc:** returns the correct block hash ([#280](https://github.com/nervosnetwork/muta/issues/280)) ([f6a58d0](https://github.com/nervosnetwork/muta/commit/f6a58d0cfc743d1fa84fe5de99798157ba5f25a6))\n* Call header.hash ([#94](https://github.com/nervosnetwork/muta/issues/94)) ([636aa54](https://github.com/nervosnetwork/muta/commit/636aa549c21a04611b6f4575dfc7e78fa47d768e))\n* change the blocking thread from rayon to std::thread ([5b80476](https://github.com/nervosnetwork/muta/commit/5b804765d0a76055e6e730560a6d7ecd576703be))\n* return err if tx not found in get_batch to avoid forking ([#279](https://github.com/nervosnetwork/muta/issues/279)) ([6aed2fe](https://github.com/nervosnetwork/muta/commit/6aed2fe5ffcd0eb6a699cff00d92e9dd3ab7d7b3))\n* **sync:** proof and proposal_hash hash not match. ([#239](https://github.com/nervosnetwork/muta/issues/239)) ([51f332e](https://github.com/nervosnetwork/muta/commit/51f332ee8c4a10b88844a272bc51a116b4d25dd2))\n* tokio::spawn panic. ([#238](https://github.com/nervosnetwork/muta/issues/238)) ([12d8d01](https://github.com/nervosnetwork/muta/commit/12d8d01ed42f9cc5d9cc341edfd76a6076aa37e1))\n* **common/logger:** cargo fmt ([e3a7f5a](https://github.com/nervosnetwork/muta/commit/e3a7f5a2217956b86191881caeb3ca6cea7ec2fc))\n* **components/transaction-pool:** Use the latest crypto API. ([#86](https://github.com/nervosnetwork/muta/issues/86)) ([f6c94d3](https://github.com/nervosnetwork/muta/commit/f6c94d307d6e89afba75ed8b83b99088fc7ca9de))\n* **components/transaction-pool:** Check if the transaction is repeated in historical blocks. 
([dba25fe](https://github.com/nervosnetwork/muta/commit/dba25fe09d8e82f0e396415055ce08efbf1fe159))\n* **core-p2p:** transmission example: a clippy warning ([6d2f42a](https://github.com/nervosnetwork/muta/commit/6d2f42ae97194333a823581406fc75d2c47536b2))\n* **core-p2p:** transmission example: remove unreachable match branch ([0082bd6](https://github.com/nervosnetwork/muta/commit/0082bd6a3fb956f9ee17a9eba6ada77fc91f3dfe))\n* **core-p2p:** transmission: future task starvation ([ba14db0](https://github.com/nervosnetwork/muta/commit/ba14db035413220ed7eba5e5543b8a6496267641))\n* **devchain:** correct addresses matched with privkey ([#114](https://github.com/nervosnetwork/muta/issues/114)) ([f56744e](https://github.com/nervosnetwork/muta/commit/f56744e7809b39da79434a3fbcf3deb127fded27))\n* **network:** RepeatedConnection and ConnectSelf errors ([#196](https://github.com/nervosnetwork/muta/issues/196)) ([2e5e888](https://github.com/nervosnetwork/muta/commit/2e5e888cdb0869e7622639919b12e62eca06f137))\n* **p2p:** Make sure the \"poll\" is triggered. ([#182](https://github.com/nervosnetwork/muta/issues/182)) ([88daed1](https://github.com/nervosnetwork/muta/commit/88daed1e3e175c21e7923ddd5f1b4eb4ef4d6286))\n* **p2p-identify:** empty local listen addresses ([#198](https://github.com/nervosnetwork/muta/issues/198)) ([c40ad8a](https://github.com/nervosnetwork/muta/commit/c40ad8a8dedd999efd17a88b9c30b198d4a0035a))\n* **synchronizer:** add a pull_txs_sync method to sync txs from block ([#207](https://github.com/nervosnetwork/muta/issues/207)) ([317fca8](https://github.com/nervosnetwork/muta/commit/317fca8b8d2f270e5d140a94bb1a9227c4b7271b))\n* **transaction-pool:** duplicate insertion transactions from network ([#191](https://github.com/nervosnetwork/muta/issues/191)) ([2c095bb](https://github.com/nervosnetwork/muta/commit/2c095bbe5649454abf2663df7355c0a56f54a71f))\n* **tx-pool:** \"get_count\" returns the repeat transaction. 
([f5612d0](https://github.com/nervosnetwork/muta/commit/f5612d09d02e9183b702f0233aecc14c31779945))\n* **tx-pool:** `ensure` method always pull all txs from remote peer ([#194](https://github.com/nervosnetwork/muta/issues/194)) ([9ff300e](https://github.com/nervosnetwork/muta/commit/9ff300e191aa39b6301e481f8f287287b645ba39))\n* **tx-pool:** Ensure the number of transactions meets expectations ([dcbf0dd](https://github.com/nervosnetwork/muta/commit/dcbf0dd8cf548ddfe3afb3226d7596637ae615dd))\n* **tx-pool:** replace chashmap ([#211](https://github.com/nervosnetwork/muta/issues/211)) ([717f55e](https://github.com/nervosnetwork/muta/commit/717f55e4772c5818ab17e2b1c320b0b98f174122))\n* Avoid drop ([4d0f986](https://github.com/nervosnetwork/muta/commit/4d0f986741c392489893f036989db7218db54743))\n* build failure ([18ce8e4](https://github.com/nervosnetwork/muta/commit/18ce8e4642d8d27892fee53b9695e4ced7921055))\n* jsonrpc call return value ([#104](https://github.com/nervosnetwork/muta/issues/104)) ([1fe41eb](https://github.com/nervosnetwork/muta/commit/1fe41eb491a16588019218144985eec143613c65))\n* logic error of bloom filter ([#176](https://github.com/nervosnetwork/muta/issues/176)) ([70269cb](https://github.com/nervosnetwork/muta/commit/70269cb5cefd82f1a14eb5e85df419c1587d19c8))\n* merkle typo ([4f63585](https://github.com/nervosnetwork/muta/commit/4f6358565ee8d486be18ac8ff6069b95b597ea4d))\n* rlp encode ([b852ac1](https://github.com/nervosnetwork/muta/commit/b852ac147db818cf289b972f054028d293218a19))\n* rlp hash ([837055a](https://github.com/nervosnetwork/muta/commit/837055a4eb78ba941004dbc0466955895de8bcab))\n* Set quota limit for the genesis. 
([#106](https://github.com/nervosnetwork/muta/issues/106)) ([931fe40](https://github.com/nervosnetwork/muta/commit/931fe404453a6f936cbd27bf37d0e326a03e4484))\n* write lock ([de80439](https://github.com/nervosnetwork/muta/commit/de80439cb4e7889c1220fc7821604f9ef792422e))\n\n\n### Features\n\n* add business model support for executor ([#308](https://github.com/nervosnetwork/muta/issues/308)) ([e03396b](https://github.com/nervosnetwork/muta/commit/e03396bb6b964a0c93f43c2684a0e76a55db5540))\n* add Deserialize for Hash and Address ([#259](https://github.com/nervosnetwork/muta/issues/259)) ([fef188c](https://github.com/nervosnetwork/muta/commit/fef188c5950fb7f64a92312894efdb4955201a93))\n* add docker config for dev ([#197](https://github.com/nervosnetwork/muta/issues/197)) ([6e74aec](https://github.com/nervosnetwork/muta/commit/6e74aec0b51c2bf80c1d1b893130ea74f4a1a8f0))\n* add fabric devops scripts ([fcdc25c](https://github.com/nervosnetwork/muta/commit/fcdc25c05b5c30ba38bf6af57885c2f45233d3fc))\n* add height to the end of proposal msg ([#255](https://github.com/nervosnetwork/muta/issues/255)) ([c5cbc5e](https://github.com/nervosnetwork/muta/commit/c5cbc5ec70f1dc0fb46ef0bb87c3b994596b4571))\n* add more info to version ([#298](https://github.com/nervosnetwork/muta/issues/298)) ([fd02a17](https://github.com/nervosnetwork/muta/commit/fd02a17a68bb6ef59bbd4cded13d69da221237ee))\n* peerCount RPC API ([#257](https://github.com/nervosnetwork/muta/issues/257)) ([736ae8c](https://github.com/nervosnetwork/muta/commit/736ae8c7f537a56b01d648cf066f220e47108820))\n* **components/cita-jsonrpc:** impl executor related apis ([#80](https://github.com/nervosnetwork/muta/issues/80)) ([bc8f340](https://github.com/nervosnetwork/muta/commit/bc8f34015617e1a01fb2fbb30d9709cdd806daea))\n* **components/cita-jsonrpc:** impl get_code and finish some todo ([#87](https://github.com/nervosnetwork/muta/issues/87)) 
([e1b0b9d](https://github.com/nervosnetwork/muta/commit/e1b0b9dc8c39965366c5b572905e63cacecdc958))\n* **components/database:** Implement RocksDB ([#72](https://github.com/nervosnetwork/muta/issues/72)) ([3516fbc](https://github.com/nervosnetwork/muta/commit/3516fbc41338a2f423e0ba56eb96c7fa697a6c77))\n* **components/executor:** Add trie db for executor. ([#85](https://github.com/nervosnetwork/muta/issues/85)) ([fd7dc1d](https://github.com/nervosnetwork/muta/commit/fd7dc1da97a4b7dafb1ecbc2813c9506423689a5))\n* **components/executor:** Implement EVM executor. ([#68](https://github.com/nervosnetwork/muta/issues/68)) ([021893d](https://github.com/nervosnetwork/muta/commit/021893db432f1ddadc89da9c9251bdb6fb79d925))\n* **components/jsonrpc:** implement getStateProof ([#178](https://github.com/nervosnetwork/muta/issues/178)) ([69499fb](https://github.com/nervosnetwork/muta/commit/69499fbb98cbe7f23d426c15ebe67de552dd5d2b))\n* **components/jsonrpc:** implement getTransactionProof ([0db8785](https://github.com/nervosnetwork/muta/commit/0db8785475e9d9c098fa123b9c23b4f0eab286dc))\n* **components/jsonrpc:** running on microscope ([#200](https://github.com/nervosnetwork/muta/issues/200)) ([1c63a0e](https://github.com/nervosnetwork/muta/commit/1c63a0e3db751b7b7be6f053bed2b66245b105cd))\n* **components/jsonrpc:** Try to convert tx to cita::tx ([#221](https://github.com/nervosnetwork/muta/issues/221)) ([b8ab16b](https://github.com/nervosnetwork/muta/commit/b8ab16b05ad01a0c6ef5a7b8d7ad76961e7749ff))\n* **core-network:** expose send_buffer_size and recv_buffer_size ([#248](https://github.com/nervosnetwork/muta/issues/248)) ([e5120ad](https://github.com/nervosnetwork/muta/commit/e5120ad646c9d206b43b0d50911303507bdfe381))\n* **core-network:** implement peer count feature ([#256](https://github.com/nervosnetwork/muta/issues/256)) ([8f7e7eb](https://github.com/nervosnetwork/muta/commit/8f7e7eb51cdeebfb9c679d88626ac2ec3fa651a4))\n* add performance test lua script 
([#244](https://github.com/nervosnetwork/muta/issues/244)) ([c727b73](https://github.com/nervosnetwork/muta/commit/c727b733340029f72d9280a57e07522f635eff44))\n* **core-network:** implement concurrent reactor and real chained reactor ([#175](https://github.com/nervosnetwork/muta/issues/175)) ([dc9f897](https://github.com/nervosnetwork/muta/commit/dc9f897f08801d7b8a418750ed516a8acac057ca))\n* **core-p2p:** implement datagram transport protocol ([fee2d45](https://github.com/nervosnetwork/muta/commit/fee2d4546552bd6c46376309eb399126219c55fb))\n* **core-p2p:** transmission: use `poll` func to do broadcast ([b376cbe](https://github.com/nervosnetwork/muta/commit/b376cbef9211e55f809f16bb9bab1360dd4b3523))\n* **core/consensus:** Implement solo mode for consensus ([e071b15](https://github.com/nervosnetwork/muta/commit/e071b1533b1107f65eb0f97563f011f644d73be6))\n* **core/crypto:** Add secp256k1 ([8349eaa](https://github.com/nervosnetwork/muta/commit/8349eaa2817ee8c27e9e8367c89f3469e52b6f8a))\n* **core/crypto:** Modify the return type to result. ([9f2424c](https://github.com/nervosnetwork/muta/commit/9f2424ca11fa300f7269f7a32195ec8bbde096e0))\n* **core/network:** Support broadcast message ([#185](https://github.com/nervosnetwork/muta/issues/185)) ([992c55f](https://github.com/nervosnetwork/muta/commit/992c55f87458a38629944fb78ee69982d8329b2b))\n* **core/types:** Add hash function for the header and receipts ([c982a52](https://github.com/nervosnetwork/muta/commit/c982a52ce29da7f0e783b2a7a52f1d541c15ea10))\n* **executor:** Add flush for trie db. ([#240](https://github.com/nervosnetwork/muta/issues/240)) ([23fd538](https://github.com/nervosnetwork/muta/commit/23fd53849ac626cdeaabb165c0534bb90651aa90))\n* **jsonrpc:** Implement filter APIs ([#190](https://github.com/nervosnetwork/muta/issues/190)) ([c97ed22](https://github.com/nervosnetwork/muta/commit/c97ed2273b6ddb2385d6d0285f2d5b4d267b130b))\n* **tx-pool:** Batch broadcast transactions. 
([#234](https://github.com/nervosnetwork/muta/issues/234)) ([d297b1a](https://github.com/nervosnetwork/muta/commit/d297b1a4d655fdfac25f7f5630253f7e8f6f70ea))\n* add synchronizer ([#167](https://github.com/nervosnetwork/muta/issues/167)) ([38db7aa](https://github.com/nervosnetwork/muta/commit/38db7aa3f83e4a35417440e4787c5249b9eace63))\n* Implement many JSONRPC APIs ([#166](https://github.com/nervosnetwork/muta/issues/166)) ([807b6a7](https://github.com/nervosnetwork/muta/commit/807b6a73cb098087179d9b086fa0070b6ced74d0))\n* Implement RPC getTransactionCount ([#169](https://github.com/nervosnetwork/muta/issues/169)) ([dbf0c51](https://github.com/nervosnetwork/muta/commit/dbf0c51a17f3e285e1146eee3b5e9def08d16d50))\n* rewrite network component ([#230](https://github.com/nervosnetwork/muta/issues/230)) ([585dabb](https://github.com/nervosnetwork/muta/commit/585dabb2d52dd70de7ebc26eee59345596301c1a))\n* **components/jsonrpc:** Implements sendRawTransaction ([#159](https://github.com/nervosnetwork/muta/issues/159)) ([112d345](https://github.com/nervosnetwork/muta/commit/112d34582c00bea3c05d1663cf07d79aefbfa6a9))\n* **core-context:** add `CommonValue` trait and `p2p_session_id` method ([#165](https://github.com/nervosnetwork/muta/issues/165)) ([216b743](https://github.com/nervosnetwork/muta/commit/216b74381c00b15ba61444cf462528ee170fcc41))\n* **core/consensus:** Implements BFT ([#158](https://github.com/nervosnetwork/muta/issues/158)) ([e7a3bfd](https://github.com/nervosnetwork/muta/commit/e7a3bfd2f667c9bb8d6b9deb29a57c837ae296b9))\n* **core/notify:** add notify as message-bus between components ([b53c50d](https://github.com/nervosnetwork/muta/commit/b53c50dc04090b6b0d5b6725b5c32697446aa5f8))\n* **core/serialization:** Add proto file ([0bf7c59](https://github.com/nervosnetwork/muta/commit/0bf7c59200ad4a4cc7994efecaec5d8c683f175a))\n* **core/storage:** Add the storage trait ([ffc8776](https://github.com/nervosnetwork/muta/commit/ffc8776b02bc0a4cf785c7c5c47a88266f186b49))\n* 
**core/types:** Add the transactions hash calculation function. ([67d8170](https://github.com/nervosnetwork/muta/commit/67d817072c4c03b2fc2eaae5d1dc99d2d41240e0))\n* **core/types:** Define serialization and deserialization methods ([f28c63d](https://github.com/nervosnetwork/muta/commit/f28c63d2b4c7b77dbe24e2b50e70cf649a6c714c))\n* **database:** Add memory db ([d21a5a2](https://github.com/nervosnetwork/muta/commit/d21a5a29bd20e02f3ddd29f77c3df2963f8f3b4b))\n* **jsonrpc:** support batch ([0a0c680](https://github.com/nervosnetwork/muta/commit/0a0c680993ff9be231f1ae8e583171e1f304f79b))\n* **main:** add init command for genesis ([#96](https://github.com/nervosnetwork/muta/issues/96)) ([ec752b0](https://github.com/nervosnetwork/muta/commit/ec752b0602800055990fbfcc54bd2c2ab0b2cb60))\n* **p2p:** Update to tentacle0.2.0-alpha.5 ([#177](https://github.com/nervosnetwork/muta/issues/177)) ([f6f83b6](https://github.com/nervosnetwork/muta/commit/f6f83b6b263579d66160cfab29b83bd5a709eeb4))\n* **pubsub:** Implement pubsub components ([#143](https://github.com/nervosnetwork/muta/issues/143)) ([a079770](https://github.com/nervosnetwork/muta/commit/a079770b0e66e22552bd8cf504a9e1ba0c520d0e))\n* **runtime:** add `Context` struct ([#155](https://github.com/nervosnetwork/muta/issues/155)) ([27e5aa7](https://github.com/nervosnetwork/muta/commit/27e5aa7f01f3559d2a9dd17346595c9161a9c0f6))\n* Add project framework ([#24](https://github.com/nervosnetwork/muta/issues/24)) ([733ee8e](https://github.com/nervosnetwork/muta/commit/733ee8e6be7649c9aa2d772bb1dc661bd0879917))\n* Add transaction pool component. 
([360c935](https://github.com/nervosnetwork/muta/commit/360c93540ea77dc51551a3739e17682600d2b1b7))\n* Fill main.rs ([#102](https://github.com/nervosnetwork/muta/issues/102)) ([b5b4c72](https://github.com/nervosnetwork/muta/commit/b5b4c7233efcd1c35e92248b7726ca20644800e9))\n* impl cita-jsonrpc ([49e2a2d](https://github.com/nervosnetwork/muta/commit/49e2a2d22d094b2b6a2f71bc5201ccfe28308797))\n* update db interface and storage interface ([#137](https://github.com/nervosnetwork/muta/issues/137)) ([36b3d07](https://github.com/nervosnetwork/muta/commit/36b3d07f23e2c7ada870cb699bf138cdd66c2860))\n\n\n### Reverts\n\n* Revert \"chore: Update bft-rs (#203)\" (#204) ([cc15ba9](https://github.com/nervosnetwork/muta/commit/cc15ba9ed302ab1389838a4a6c745675106179e9)), closes [#203](https://github.com/nervosnetwork/muta/issues/203) [#204](https://github.com/nervosnetwork/muta/issues/204)\n\n\n\n# [](https://github.com/nervosnetwork/muta/compare/v0.2.0-alpha.1...v) (2020-08-03)\n\n\n### Bug Fixes\n\n* **consensus:** return an error when committing an outdated block ([#371](https://github.com/nervosnetwork/muta/issues/371)) ([b3d518b](https://github.com/nervosnetwork/muta/commit/b3d518b52658b40746ef708fa8cde5c96a39a539))\n* **mempool:** Ensure that there are no duplicate transactions in the order transaction ([#379](https://github.com/nervosnetwork/muta/issues/379)) ([97708ac](https://github.com/nervosnetwork/muta/commit/97708ac385be2243344d700a0d7c928f18fd51b3))\n* **storage:** test batch receipts get panic ([#373](https://github.com/nervosnetwork/muta/issues/373)) ([300a3c6](https://github.com/nervosnetwork/muta/commit/300a3c65cf0399c2ba37a3bd655e06719b660330))\n\n\n### Features\n\n* **network:** tag consensus peer ([#364](https://github.com/nervosnetwork/muta/issues/364)) ([9b27df1](https://github.com/nervosnetwork/muta/commit/9b27df1015a25792cc210c5aa0dd473a45ae885d)), closes [#354](https://github.com/nervosnetwork/muta/issues/354) 
[#2](https://github.com/nervosnetwork/muta/issues/2) [#3](https://github.com/nervosnetwork/muta/issues/3) [#4](https://github.com/nervosnetwork/muta/issues/4) [#5](https://github.com/nervosnetwork/muta/issues/5) [#6](https://github.com/nervosnetwork/muta/issues/6) [#7](https://github.com/nervosnetwork/muta/issues/7)\n* Add global panic hook ([#376](https://github.com/nervosnetwork/muta/issues/376)) ([7382279](https://github.com/nervosnetwork/muta/commit/738227962771a6a66b85f2fd199df2e699b43adc))\n\n\n### Performance Improvements\n\n* **executor:** use inner call instead of service dispatcher ([#365](https://github.com/nervosnetwork/muta/issues/365)) ([7b1d2a3](https://github.com/nervosnetwork/muta/commit/7b1d2a32d5c20306af3868e5265bd2530dd9493b))\n\n\n### BREAKING CHANGES\n\n* **network:** - replace Validator address bytes with pubkey bytes\n\n* change(consensus): log validator address instead of its public key\n\nBlock proposer is address instead public key\n\n* fix: compilation failed\n* **network:** - change users_cast to multicast, take peer_ids bytes instead of Address\n- network bootstrap configuration now takes peer id instead of pubkey hex\n\n* refactor(network): PeerId api\n\n\n\n# [0.2.0-alpha.1](https://github.com/nervosnetwork/muta/compare/v0.1.2-beta...v0.2.0-alpha.1) (2020-07-22)\n\n\n### Bug Fixes\n\n* **executor:** The logic to deal with tx_hook and tx_body ([#367](https://github.com/nervosnetwork/muta/issues/367)) ([749d558](https://github.com/nervosnetwork/muta/commit/749d558b8b58a1943bfa2842dcedcc45218c0f78))\n* **executor:** tx events aren't cleared on execution error ([#313](https://github.com/nervosnetwork/muta/issues/313)) ([1605cf5](https://github.com/nervosnetwork/muta/commit/1605cf59b558b97889bb431da7f81fd424b90a89))\n* **proof:** Verify aggregated signature in checking proof ([#308](https://github.com/nervosnetwork/muta/issues/308)) ([d2a98b0](https://github.com/nervosnetwork/muta/commit/d2a98b06e44449ca756f135c1b235ff0d80eaf67))\n* 
**trust_metric_test:** unreliable full node exit check ([#327](https://github.com/nervosnetwork/muta/issues/327)) ([a4ab4a6](https://github.com/nervosnetwork/muta/commit/a4ab4a6209e0978148983e88447ac2d9178fa42a))\n* **WAL:** Ignore path already exist ([#304](https://github.com/nervosnetwork/muta/issues/304)) ([02df937](https://github.com/nervosnetwork/muta/commit/02df937fb6449c9b3b0b50e790e0ecf6bfc1ee3d))\n\n\n### Performance Improvements\n\n* **mempool:** parallel verifying signatures in mempool ([#359](https://github.com/nervosnetwork/muta/issues/359)) ([2ccdf1a](https://github.com/nervosnetwork/muta/commit/2ccdf1a67a40cd483749a98a1a68c37bcf1d473c))\n\n\n### Reverts\n\n* Revert \"refactor(consensus)!: replace Validator address bytes with pubkey bytes (#354)\" (#361) ([4dabfa2](https://github.com/nervosnetwork/muta/commit/4dabfa231961d1ec8be1ba42bf05781f55395aed)), closes [#354](https://github.com/nervosnetwork/muta/issues/354) [#361](https://github.com/nervosnetwork/muta/issues/361)\n\n\n* refactor(consensus)!: replace Validator address bytes with pubkey bytes (#354) ([e4433d7](https://github.com/nervosnetwork/muta/commit/e4433d793e8a63788ec682880afc93474e0d2414)), closes [#354](https://github.com/nervosnetwork/muta/issues/354)\n\n\n### Features\n\n* **executor:** allow cancel execution units through context ([#317](https://github.com/nervosnetwork/muta/issues/317)) ([eafb489](https://github.com/nervosnetwork/muta/commit/eafb489f78f7521487c6b2d25dd9912e43f76500))\n* **executor:** independent tx hook states commit ([#316](https://github.com/nervosnetwork/muta/issues/316)) ([fde6450](https://github.com/nervosnetwork/muta/commit/fde645010363a4664033370e4109e4d1f08b13bc))\n* **protocol:** Remove the logs bloom from block header ([#312](https://github.com/nervosnetwork/muta/issues/312)) ([ff1e0df](https://github.com/nervosnetwork/muta/commit/ff1e0df1e8a65cc480825a49eed9495cc31ecee0))\n"
  },
  {
    "path": "CHANGELOG/CHANGELOG-0.2.md",
    "content": "# [](https://github.com/nervosnetwork/muta/compare/v0.2.0-rc.2.1...v) (2020-09-15)\n\n\n### Bug Fixes\n\n* **cli:** expose version, author and app_name to be customized ([#456](https://github.com/nervosnetwork/muta/issues/456)) ([93c551e](https://github.com/nervosnetwork/muta/commit/93c551e09ae0d79e5d1e3a03f3882c3ddc883da0))\n* **logger:** add structured api ([#450](https://github.com/nervosnetwork/muta/issues/450)) ([4ef3d93](https://github.com/nervosnetwork/muta/commit/4ef3d93f2ff466d69dd22805c91812a8b74605b6))\n* **metric:** network broadcast all data size ([#452](https://github.com/nervosnetwork/muta/issues/452)) ([5a8999a](https://github.com/nervosnetwork/muta/commit/5a8999ade29ad54e72caf85115c424361caaf379))\n* **network:** wrong connected consensus peer count ([#451](https://github.com/nervosnetwork/muta/issues/451)) ([43357fa](https://github.com/nervosnetwork/muta/commit/43357fa29339d4540b5d86ed51f42277fe657a7d))\n* **state:** If value is an empty byte it needs to return none ([#448](https://github.com/nervosnetwork/muta/issues/448)) ([5e1e4b6](https://github.com/nervosnetwork/muta/commit/5e1e4b631d692b2673d5fb039925cafafb8fcd06))\n\n\n### Features\n\n* **logger:** add a json macro to generate json object ([#455](https://github.com/nervosnetwork/muta/issues/455)) ([ffb1b45](https://github.com/nervosnetwork/muta/commit/ffb1b45159bad2d444f81b44ab57fae0dca16550))\n* cli for maintenance ([#436](https://github.com/nervosnetwork/muta/issues/436)) ([aebd85f](https://github.com/nervosnetwork/muta/commit/aebd85fd99424ddb50afcf434045bd0b78bcd53e))\n* **api:** dump profile data through http request ([#446](https://github.com/nervosnetwork/muta/issues/446)) ([31d66ab](https://github.com/nervosnetwork/muta/commit/31d66ab5928f046af46630609c82e91eb916afc5))\n* **metric:** add accumulated network message size count ([#449](https://github.com/nervosnetwork/muta/issues/449)) 
([eda8f75](https://github.com/nervosnetwork/muta/commit/eda8f756a5de72601d6dc2bc1ac0abdae065467c))\n\n\n\n# [0.2.0-rc.2.1](https://github.com/nervosnetwork/muta/compare/v0.2.0-rc...v0.2.0-rc.2.1) (2020-09-04)\n\n\n### Bug Fixes\n\n* update example configs, fix send transaction in byzantine ([#442](https://github.com/nervosnetwork/muta/issues/442)) ([d6a1a85](https://github.com/nervosnetwork/muta/commit/d6a1a8513e9fdf9166839f5c6aaccd0b5dc9cee3))\n* **consensus:** recover and insert tx to mempool to avoid inactivation ([#414](https://github.com/nervosnetwork/muta/issues/414)) ([fd9716e](https://github.com/nervosnetwork/muta/commit/fd9716e078289453b70dd0e378a4a94a6531d9b7))\n* **network:** identify protocol: possible dead lock in identification ([#439](https://github.com/nervosnetwork/muta/issues/439)) ([b676c4c](https://github.com/nervosnetwork/muta/commit/b676c4ca3deb98d76cb5c2f6d771e69174cef632))\n* fix framework to deal with state while tx runs fail ([#440](https://github.com/nervosnetwork/muta/issues/440)) ([d186505](https://github.com/nervosnetwork/muta/commit/d186505da89afe62840d406052125244bee357c7))\n* **network:** cannot process message after reactor exit ([#412](https://github.com/nervosnetwork/muta/issues/412)) ([36af704](https://github.com/nervosnetwork/muta/commit/36af7047544628dd098d6cb34cbe2b5d3c0b1770))\n* **network:** double decrease connecting gauge ([#424](https://github.com/nervosnetwork/muta/issues/424)) ([0a1cfcf](https://github.com/nervosnetwork/muta/commit/0a1cfcfa7ddedcc236243f9dc3e317610742ca5c))\n* **network:** give up a peer without log a reason ([#423](https://github.com/nervosnetwork/muta/issues/423)) ([7151cd4](https://github.com/nervosnetwork/muta/commit/7151cd435e6bec2a961eac67cb779708c0ab0fd0))\n* **network:** give up peer because of handshake timeout ([#418](https://github.com/nervosnetwork/muta/issues/418)) ([2627c00](https://github.com/nervosnetwork/muta/commit/2627c005485466373d632a60fb41d897db63fedc))\n* **network:** give up 
peer due to secio io error ([#425](https://github.com/nervosnetwork/muta/issues/425)) ([27a8e8b](https://github.com/nervosnetwork/muta/commit/27a8e8ba5ce644f316d1cedb48230cab398a31da))\n* **network:** negative connecting metric number ([#430](https://github.com/nervosnetwork/muta/issues/430)) ([dae62ae](https://github.com/nervosnetwork/muta/commit/dae62aeb760c3acb18b14be7f03da032dd495e9b))\n* update to latest overlord ([#421](https://github.com/nervosnetwork/muta/issues/421)) ([c8f018c](https://github.com/nervosnetwork/muta/commit/c8f018c89eb9b7bf64c5525768c66f8d5f5038da))\n\n\n### Features\n\n* **logger:** add structured log api ([#434](https://github.com/nervosnetwork/muta/issues/434)) ([2e4de12](https://github.com/nervosnetwork/muta/commit/2e4de12f1d386af90f2fbb19d57d3832cd5d2e2a))\n* **logger:** split log file by size ([#435](https://github.com/nervosnetwork/muta/issues/435)) ([5c4f075](https://github.com/nervosnetwork/muta/commit/5c4f075da31231a92100e8ba85438bde4e5c65b6))\n* add byzantine test script ([#433](https://github.com/nervosnetwork/muta/issues/433)) ([b7ceda0](https://github.com/nervosnetwork/muta/commit/b7ceda00a65ebe87b500e5b0c489e5325e22747a))\n* log the overlord view change reason ([#432](https://github.com/nervosnetwork/muta/issues/432)) ([8b25191](https://github.com/nervosnetwork/muta/commit/8b251917f28bc0762fa91e15127f659fe8f4685b))\n* **apm:** add executing block num to apm ([#429](https://github.com/nervosnetwork/muta/issues/429)) ([b27ac99](https://github.com/nervosnetwork/muta/commit/b27ac99486f376075fb393fa0f80db6ecfb7b955))\n* **network:** add more metrics ([#416](https://github.com/nervosnetwork/muta/issues/416)) ([d03ddde](https://github.com/nervosnetwork/muta/commit/d03ddde2763b43e77cced2ff8552910c5fcff1eb))\n* **network:** add tentacle_metrics feature ([#417](https://github.com/nervosnetwork/muta/issues/417)) ([5181562](https://github.com/nervosnetwork/muta/commit/5181562c947a34d3c344e766171b60ba161dff29))\n\n\n\n# 
[0.2.0-rc](https://github.com/nervosnetwork/muta/compare/v0.2.0-beta.4...v0.2.0-rc) (2020-08-12)\n\n\n### Features\n\n* **network:** split transmitter data ([#380](https://github.com/nervosnetwork/muta/issues/380)) ([0322cd6](https://github.com/nervosnetwork/muta/commit/0322cd690cb118f56153e424e9a6bf4b2a11d8b4))\n* **network:** verify chain id during protocol handshake ([#406](https://github.com/nervosnetwork/muta/issues/406)) ([e678e92](https://github.com/nervosnetwork/muta/commit/e678e92bf01bc4bc914e74b6fed22c8b55b3cdc7))\n\n\n\n# [0.2.0-beta.4](https://github.com/nervosnetwork/muta/compare/v0.2.0-beta.3...v0.2.0-beta.4) (2020-08-10)\n\n\n### Bug Fixes\n\n* load hrp before deserializing genesis payload to take hrp effect ([#405](https://github.com/nervosnetwork/muta/issues/405)) ([828e6d5](https://github.com/nervosnetwork/muta/commit/828e6d539cf4da9cf042c450418e75a944315014))\n\n\n### Features\n\n* **api:** Support enabled TLS ([#402](https://github.com/nervosnetwork/muta/issues/402)) ([c2908a3](https://github.com/nervosnetwork/muta/commit/c2908a3ba6a5ab1219ddc9b14ff6d7320cf70228))\n\n\n### Performance Improvements\n\n* **state:** add state cache for trieDB ([#404](https://github.com/nervosnetwork/muta/issues/404)) ([2a08c14](https://github.com/nervosnetwork/muta/commit/2a08c147571707507b72882788fd51f7a799f3ec))\n\n\n\n# [0.2.0-beta.3](https://github.com/nervosnetwork/muta/compare/v0.2.0-beta.2...v0.2.0-beta.3) (2020-08-07)\n\n\n### Bug Fixes\n\n* **apm:** Return the correct time ([#400](https://github.com/nervosnetwork/muta/issues/400)) ([fd6549a](https://github.com/nervosnetwork/muta/commit/fd6549a6352633cee7b5b747448129df7a0532ca))\n\n\n### Features\n\n* **network:** limit connections from same ip ([#388](https://github.com/nervosnetwork/muta/issues/388)) ([dc78c13](https://github.com/nervosnetwork/muta/commit/dc78c13b8aa25f3e4535e588149042f6345e4d25))\n* **network:** limit inbound and outbound connections 
([#393](https://github.com/nervosnetwork/muta/issues/393)) ([3a3111e](https://github.com/nervosnetwork/muta/commit/3a3111e1e332529bc8636c54526920c292c04f8a))\n* **sync:** Limit the maximum height of once sync ([#390](https://github.com/nervosnetwork/muta/issues/390)) ([f951a95](https://github.com/nervosnetwork/muta/commit/f951a953daf307ffc98b4df0fe1a77a6a810ac71))\n\n\n\n# [0.2.0-beta.2](https://github.com/nervosnetwork/muta/compare/v0.2.0-beta.1...v0.2.0-beta.2) (2020-08-04)\n\n\n### Bug Fixes\n\n* **consensus:** Add timestamp checking ([#377](https://github.com/nervosnetwork/muta/issues/377)) ([382ede9](https://github.com/nervosnetwork/muta/commit/382ede9367b910a06b59f3562ecd28ab8100d39e))\n\n\n### Features\n\n* **benchmark:** add a perf benchmark macro ([#391](https://github.com/nervosnetwork/muta/issues/391)) ([eb24311](https://github.com/nervosnetwork/muta/commit/eb2431149b6865a82d0e4286536f65319a5e1d1f))\n* **Cargo:** add random leader feature for muta ([#385](https://github.com/nervosnetwork/muta/issues/385)) ([43da977](https://github.com/nervosnetwork/muta/commit/43da9772b22b97ab4797b80ce5161f1a49827543))\n\n\n### Performance Improvements\n\n* **metrics:** Add metrics of state ([#397](https://github.com/nervosnetwork/muta/issues/397)) ([5822764](https://github.com/nervosnetwork/muta/commit/5822764240f8b4e8cfeca4bccf7d399a0bf71897))\n\n\n\n# [0.2.0-beta.1](https://github.com/nervosnetwork/muta/compare/v0.2.0-alpha.1...v0.2.0-beta.1) (2020-08-03)\n\n\n### Bug Fixes\n\n* **consensus:** return an error when committing an outdated block ([#371](https://github.com/nervosnetwork/muta/issues/371)) ([b3d518b](https://github.com/nervosnetwork/muta/commit/b3d518b52658b40746ef708fa8cde5c96a39a539))\n* **mempool:** Ensure that there are no duplicate transactions in the order transaction ([#379](https://github.com/nervosnetwork/muta/issues/379)) ([97708ac](https://github.com/nervosnetwork/muta/commit/97708ac385be2243344d700a0d7c928f18fd51b3))\n* **storage:** test batch 
receipts get panic ([#373](https://github.com/nervosnetwork/muta/issues/373)) ([300a3c6](https://github.com/nervosnetwork/muta/commit/300a3c65cf0399c2ba37a3bd655e06719b660330))\n\n\n### Features\n\n* **network:** tag consensus peer ([#364](https://github.com/nervosnetwork/muta/issues/364)) ([9b27df1](https://github.com/nervosnetwork/muta/commit/9b27df1015a25792cc210c5aa0dd473a45ae885d)), closes [#354](https://github.com/nervosnetwork/muta/issues/354) [#2](https://github.com/nervosnetwork/muta/issues/2) [#3](https://github.com/nervosnetwork/muta/issues/3) [#4](https://github.com/nervosnetwork/muta/issues/4) [#5](https://github.com/nervosnetwork/muta/issues/5) [#6](https://github.com/nervosnetwork/muta/issues/6) [#7](https://github.com/nervosnetwork/muta/issues/7)\n* Add global panic hook ([#376](https://github.com/nervosnetwork/muta/issues/376)) ([7382279](https://github.com/nervosnetwork/muta/commit/738227962771a6a66b85f2fd199df2e699b43adc))\n\n\n### Performance Improvements\n\n* **executor:** use inner call instead of service dispatcher ([#365](https://github.com/nervosnetwork/muta/issues/365)) ([7b1d2a3](https://github.com/nervosnetwork/muta/commit/7b1d2a32d5c20306af3868e5265bd2530dd9493b))\n\n\n### BREAKING CHANGES\n\n* **network:** - replace Validator address bytes with pubkey bytes\n\n* change(consensus): log validator address instead of its public key\n\nBlock proposer is address instead public key\n\n* fix: compilation failed\n* **network:** - change users_cast to multicast, take peer_ids bytes instead of Address\n- network bootstrap configuration now takes peer id instead of pubkey hex\n\n* refactor(network): PeerId api\n\n\n\n# [0.2.0-alpha.1](https://github.com/nervosnetwork/muta/compare/v0.2.0-dev.2...v0.2.0-alpha.1) (2020-07-22)\n\n\n### Bug Fixes\n\n* **executor:** The logic to deal with tx_hook and tx_body ([#367](https://github.com/nervosnetwork/muta/issues/367)) 
([749d558](https://github.com/nervosnetwork/muta/commit/749d558b8b58a1943bfa2842dcedcc45218c0f78))\n\n\n### Performance Improvements\n\n* **mempool:** parallel verifying signatures in mempool ([#359](https://github.com/nervosnetwork/muta/issues/359)) ([2ccdf1a](https://github.com/nervosnetwork/muta/commit/2ccdf1a67a40cd483749a98a1a68c37bcf1d473c))\n\n\n### Reverts\n\n* Revert \"refactor(consensus)!: replace Validator address bytes with pubkey bytes (#354)\" (#361) ([4dabfa2](https://github.com/nervosnetwork/muta/commit/4dabfa231961d1ec8be1ba42bf05781f55395aed)), closes [#354](https://github.com/nervosnetwork/muta/issues/354) [#361](https://github.com/nervosnetwork/muta/issues/361)\n\n\n* refactor(consensus)!: replace Validator address bytes with pubkey bytes (#354) ([e4433d7](https://github.com/nervosnetwork/muta/commit/e4433d793e8a63788ec682880afc93474e0d2414)), closes [#354](https://github.com/nervosnetwork/muta/issues/354)\n\n\n### BREAKING CHANGES\n\n* - replace Validator address bytes with pubkey bytes\n\n* change(consensus): log validator address instead of its public key\n\nBlock proposer is address instead public key\n\n* fix: compilation failed\n\n\n\n# [0.2.0-dev.2](https://github.com/nervosnetwork/muta/compare/v0.2.0-dev.1...v0.2.0-dev.2) (2020-07-14)\n\n\n\n# [0.2.0-dev.1](https://github.com/nervosnetwork/muta/compare/v0.2.0-dev.0...v0.2.0-dev.1) (2020-07-09)\n\n\n### Bug Fixes\n\n* **trust_metric_test:** unreliable full node exit check ([#327](https://github.com/nervosnetwork/muta/issues/327)) ([a4ab4a6](https://github.com/nervosnetwork/muta/commit/a4ab4a6209e0978148983e88447ac2d9178fa42a))\n\n\n\n# [0.2.0-dev.0](https://github.com/nervosnetwork/muta/compare/v0.1.2-beta...v0.2.0-dev.0) (2020-07-01)\n\n\n### Bug Fixes\n\n* **executor:** tx events aren't cleared on execution error ([#313](https://github.com/nervosnetwork/muta/issues/313)) ([1605cf5](https://github.com/nervosnetwork/muta/commit/1605cf59b558b97889bb431da7f81fd424b90a89))\n* **proof:** 
Verify aggregated signature in checking proof ([#308](https://github.com/nervosnetwork/muta/issues/308)) ([d2a98b0](https://github.com/nervosnetwork/muta/commit/d2a98b06e44449ca756f135c1b235ff0d80eaf67))\n* **WAL:** Ignore path already exist ([#304](https://github.com/nervosnetwork/muta/issues/304)) ([02df937](https://github.com/nervosnetwork/muta/commit/02df937fb6449c9b3b0b50e790e0ecf6bfc1ee3d))\n\n\n### Features\n\n* **executor:** allow cancel execution units through context ([#317](https://github.com/nervosnetwork/muta/issues/317)) ([eafb489](https://github.com/nervosnetwork/muta/commit/eafb489f78f7521487c6b2d25dd9912e43f76500))\n* **executor:** indenpendent tx hook states commit ([#316](https://github.com/nervosnetwork/muta/issues/316)) ([fde6450](https://github.com/nervosnetwork/muta/commit/fde645010363a4664033370e4109e4d1f08b13bc))\n* **protocol:** Remove the logs bloom from block header ([#312](https://github.com/nervosnetwork/muta/issues/312)) ([ff1e0df](https://github.com/nervosnetwork/muta/commit/ff1e0df1e8a65cc480825a49eed9495cc31ecee0))\n\n\n\n## [0.1.2-beta](https://github.com/nervosnetwork/muta/compare/v0.1.2-beta2...v0.1.2-beta) (2020-06-04)\n\n\n\n## [0.1.2-beta2](https://github.com/nervosnetwork/muta/compare/v0.1.2-beta1...v0.1.2-beta2) (2020-06-03)\n\n\n### Features\n\n* supported storage metrics ([#307](https://github.com/nervosnetwork/muta/issues/307)) ([2531b8d](https://github.com/nervosnetwork/muta/commit/2531b8da8e8f2a839484adef62dd93f1deff12dd))\n\n\n\n## [0.1.2-beta1](https://github.com/nervosnetwork/muta/compare/v0.1.0-rc.2-huobi...v0.1.2-beta1) (2020-06-01)\n\n\n### Bug Fixes\n\n* **ci:** Increase timeout in ci ([#262](https://github.com/nervosnetwork/muta/issues/262)) ([a12124a](https://github.com/nervosnetwork/muta/commit/a12124a115512196894a7ca88fc42555db927666))\n* **mempool:** check exsit before insert a transaction ([#257](https://github.com/nervosnetwork/muta/issues/257)) 
([be3c139](https://github.com/nervosnetwork/muta/commit/be3c13929d2a59f21655b040aa6738c3d43db611))\n* **network:** broken users_cast ([#261](https://github.com/nervosnetwork/muta/issues/261)) ([f36eabd](https://github.com/nervosnetwork/muta/commit/f36eabdc5040bc5cbf0d2011c942867150534a41))\n* **network:** reconnection fialure ([#273](https://github.com/nervosnetwork/muta/issues/273)) ([9f594b8](https://github.com/nervosnetwork/muta/commit/9f594b8af12e1810bd0cbf23f20ca718d96f6e3a))\n* reboot when the diff between height and exec_height more than one ([#267](https://github.com/nervosnetwork/muta/issues/267)) ([e8f8595](https://github.com/nervosnetwork/muta/commit/e8f85958d85e3363fccbfde3971684ebf2fceb4d))\n* **sync:** Avoid requesting redundant transactions ([#259](https://github.com/nervosnetwork/muta/issues/259)) ([8ece029](https://github.com/nervosnetwork/muta/commit/8ece0299fe185667ac23fed92d8c2f156c0e2c5b))\n* binding store type should return Option None instead of panic when get none ([#238](https://github.com/nervosnetwork/muta/issues/238)) ([54bdbb9](https://github.com/nervosnetwork/muta/commit/54bdbb93df1a1a85a83814dcb29461acf3645d10))\n* **config:** use serde(default) for rocksdb conf ([#229](https://github.com/nervosnetwork/muta/issues/229)) ([2a03e73](https://github.com/nervosnetwork/muta/commit/2a03e73c77807e80020c50bb287adf4d428632e5))\n* **storage:** fix rocksdb too many open files error ([#228](https://github.com/nervosnetwork/muta/issues/228)) ([96c32cd](https://github.com/nervosnetwork/muta/commit/96c32cd7956220beddca33b22d4663a675573ba9))\n* **sync:** set crypto info when synchronization ([#235](https://github.com/nervosnetwork/muta/issues/235)) ([84ccfc1](https://github.com/nervosnetwork/muta/commit/84ccfc1d8422265028ad7a0b460b4e297d161fe3))\n* docker compose configs ([#210](https://github.com/nervosnetwork/muta/issues/210)) ([acc5265](https://github.com/nervosnetwork/muta/commit/acc52653d304ac5cd25a9d643b263a2f462f7d43))\n* hang when kill it 
([#225](https://github.com/nervosnetwork/muta/issues/225)) ([dc51240](https://github.com/nervosnetwork/muta/commit/dc512405f32854f165f3145c01d022bca4fff93b))\n* panic when start ([#214](https://github.com/nervosnetwork/muta/issues/214)) ([d2da69b](https://github.com/nervosnetwork/muta/commit/d2da69b5941a88376b64453f7d3c10eca3f67d81))\n* **muta:** hangs up on one cpu core ([#203](https://github.com/nervosnetwork/muta/issues/203)) ([555dd9e](https://github.com/nervosnetwork/muta/commit/555dd9e694fda043be01f90c91396efd7fe0ace5))\n\n\n### Features\n\n* split monitor network url  ([#300](https://github.com/nervosnetwork/muta/issues/300)) ([1237354](https://github.com/nervosnetwork/muta/commit/12373544598d0dae852321cbe3b4e8dab5c70e54))\n* supported mempool monitor ([#298](https://github.com/nervosnetwork/muta/issues/298)) ([cc7fdfa](https://github.com/nervosnetwork/muta/commit/cc7fdfa7a7c99466d76d4fe9c1a3537ab8754837))\n* supported new metrics ([#294](https://github.com/nervosnetwork/muta/issues/294)) ([e59364a](https://github.com/nervosnetwork/muta/commit/e59364a7759960d8a3279dc78844965f54f4bf62))\n* **apm:** add api get_block metrics ([#276](https://github.com/nervosnetwork/muta/issues/276)) ([6ea21e3](https://github.com/nervosnetwork/muta/commit/6ea21e3e0fe08898264f13938cf849c197531afa))\n* **apm:** Add opentracing ([#270](https://github.com/nervosnetwork/muta/issues/270)) ([cece21d](https://github.com/nervosnetwork/muta/commit/cece21d8e865223c8679e54d0253ced70dab4c0a))\n* **apm:** tracing height and round in OverlordMsg ([#287](https://github.com/nervosnetwork/muta/issues/287)) ([a8c09ff](https://github.com/nervosnetwork/muta/commit/a8c09ff363e8caac9c0977db2fc6cffb782961d7))\n* **ci:** add e2e ([#236](https://github.com/nervosnetwork/muta/issues/236)) ([3058722](https://github.com/nervosnetwork/muta/commit/3058722081084b7cb8f423c26eba9e88707fca18))\n* **consensus:** add proof check logic for sync and consensus 
([#224](https://github.com/nervosnetwork/muta/issues/224)) ([b19502f](https://github.com/nervosnetwork/muta/commit/b19502f48e6d314717a8a2286ada58f6097c6f31))\n* **consensus:** change validator list ([#211](https://github.com/nervosnetwork/muta/issues/211)) ([bb04d2c](https://github.com/nervosnetwork/muta/commit/bb04d2c961110276d38cf0e07239d5e72e8125a8))\n* **consensus:** integrate trust metric to consensus ([#244](https://github.com/nervosnetwork/muta/issues/244)) ([3dd6bc1](https://github.com/nervosnetwork/muta/commit/3dd6bc1796ca3e6c76cb99beefd5911d35a5e8ee))\n* **mempool:** integrate trust metric ([#245](https://github.com/nervosnetwork/muta/issues/245)) ([49474fd](https://github.com/nervosnetwork/muta/commit/49474fddde3ffc45d564544bb5887bb09a37da1d))\n* **metric:** introduce metric using prometheus ([#271](https://github.com/nervosnetwork/muta/issues/271)) ([3d1dc4f](https://github.com/nervosnetwork/muta/commit/3d1dc4fcf196b8616f41dc4cd2a5ba0c0a5ab422))\n* **metrics:** mempool, consensus and sync ([#275](https://github.com/nervosnetwork/muta/issues/275)) ([12e4918](https://github.com/nervosnetwork/muta/commit/12e4918d9925868407f854af29410d8ecafe4d48))\n* **network:** add metrics ([#274](https://github.com/nervosnetwork/muta/issues/274)) ([56a9b62](https://github.com/nervosnetwork/muta/commit/56a9b62251106d44df33c43d4590575df25df61a))\n* **network:** add trace header to network msg ([#281](https://github.com/nervosnetwork/muta/issues/281)) ([6509cbe](https://github.com/nervosnetwork/muta/commit/6509cbec2f700238b2259943212e0968b58404ce))\n* **network:** peer trust metric ([#231](https://github.com/nervosnetwork/muta/issues/231)) ([5abefeb](https://github.com/nervosnetwork/muta/commit/5abefebddacfb58415f2a319098bb164ceaa8c81))\n* add tx hook in framework ([#218](https://github.com/nervosnetwork/muta/issues/218)) ([cdeb9fd](https://github.com/nervosnetwork/muta/commit/cdeb9fd1e18e198636fa59d91aead85d65cf9852))\n* re-execute blocks to recover current status 
([#222](https://github.com/nervosnetwork/muta/issues/222)) ([1cd7cb6](https://github.com/nervosnetwork/muta/commit/1cd7cb6d4fbc599bac65bd2c36b507088a3fa041))\n* **network:** rpc remote server error response ([#205](https://github.com/nervosnetwork/muta/issues/205)) ([bb993ac](https://github.com/nervosnetwork/muta/commit/bb993ac1f5fe44a2f6a72c8718572accacb27dc3))\n* **sync:** Split a transaction in a block into multiple requests ([#221](https://github.com/nervosnetwork/muta/issues/221)) ([0bbf43c](https://github.com/nervosnetwork/muta/commit/0bbf43c49d2df49d70b4bc816ac24c3bc3603a1a))\n* add actix payload size limit config ([#204](https://github.com/nervosnetwork/muta/issues/204)) ([97319d6](https://github.com/nervosnetwork/muta/commit/97319d6d22c8143ba35c3fe42d56f2cfbc131e37))\n\n\n### BREAKING CHANGES\n\n* **network:** change rpc response\n\n* change(network): bump transmitter protocol version\n\n\n\n# [0.1.0-rc.2-huobi](https://github.com/nervosnetwork/muta/compare/v0.0.1-rc1-huobi...v0.1.0-rc.2-huobi) (2020-02-24)\n\n\n### Bug Fixes\n\n* **mempool:** fix repeat txs, add flush_incumbent_queue ([#189](https://github.com/nervosnetwork/muta/issues/189)) ([e0db745](https://github.com/nervosnetwork/muta/commit/e0db745419c5ada3d6e9dc4416945a0775a8f18b))\n* **muta:** hangs up running on single core environment ([#201](https://github.com/nervosnetwork/muta/issues/201)) ([09f5b4e](https://github.com/nervosnetwork/muta/commit/09f5b4ed70a519155933f7fd4c2015ff512dfdb1))\n* block hash from bytes ([#192](https://github.com/nervosnetwork/muta/issues/192)) ([7ca0af4](https://github.com/nervosnetwork/muta/commit/7ca0af46edbd00e4ba43e8646e77fa41aba781cf))\n\n\n### Features\n\n* check size and cycle limit when insert tx into mempool ([#195](https://github.com/nervosnetwork/muta/issues/195)) ([92bdf2d](https://github.com/nervosnetwork/muta/commit/92bdf2d5147502e1d250fdae47b8ae2c2cfce23f))\n* remove redundant wal transactions when commit 
([#197](https://github.com/nervosnetwork/muta/issues/197)) ([3aff1db](https://github.com/nervosnetwork/muta/commit/3aff1dbb2dcdabaaf9cbecb9c3e9757a2c737354))\n* Supports actix in tokio ([#200](https://github.com/nervosnetwork/muta/issues/200)) ([266c1cb](https://github.com/nervosnetwork/muta/commit/266c1cb2cf6223759eba4ca9771ee21b244db3a4))\n* **api:** Supports configuring the max number of connections. ([#194](https://github.com/nervosnetwork/muta/issues/194)) ([6cbdd26](https://github.com/nervosnetwork/muta/commit/6cbdd267b7ff56eefbe23bffc8e4dc589272111d))\n* **service:** upgrade asset service ([#150](https://github.com/nervosnetwork/muta/issues/150)) ([8925390](https://github.com/nervosnetwork/muta/commit/8925390b59353d853dd1266cdcfe6db1258a8296))\n\n\n### Reverts\n\n* Revert \"fix(muta): hangs up running on single core environment (#201)\" (#202) ([28e685a](https://github.com/nervosnetwork/muta/commit/28e685a62b82c1a91699b4495d430b0757e5438d)), closes [#201](https://github.com/nervosnetwork/muta/issues/201) [#202](https://github.com/nervosnetwork/muta/issues/202)\n\n\n\n## [0.0.1-rc1-huobi](https://github.com/nervosnetwork/muta/compare/v0.0.1-rc.1-huobi...v0.0.1-rc1-huobi) (2020-02-15)\n\n\n### Bug Fixes\n\n* **ci:** fail to install sccache after new rust-toolchain ([#68](https://github.com/nervosnetwork/muta/issues/68)) ([f961415](https://github.com/nervosnetwork/muta/commit/f961415803ae6d38b70e97a810f33a1b60639d43))\n* **consensus:** check logs bloom when check block ([#168](https://github.com/nervosnetwork/muta/issues/168)) ([0984989](https://github.com/nervosnetwork/muta/commit/09849893270cc0908e2ee965e7e8b7c46ada0f16))\n* **consensus:** empty block receipts root ([#61](https://github.com/nervosnetwork/muta/issues/61)) ([89ed4d2](https://github.com/nervosnetwork/muta/commit/89ed4d2c4a708f278e7cd777c562f1f1fb5a9755))\n* **consensus:** encode overlord message and verify signature ([#39](https://github.com/nervosnetwork/muta/issues/39)) 
([b11e69e](https://github.com/nervosnetwork/muta/commit/b11e69e49ed195d0d23f22b6abf1387f4a4c0c94))\n* **consensus:** fix check state roots ([#107](https://github.com/nervosnetwork/muta/issues/107)) ([cf45c3b](https://github.com/nervosnetwork/muta/commit/cf45c3ba39eb65bdb012165e232352a9187a6f0d))\n* **consensus:** Get authority list returns none. ([#4](https://github.com/nervosnetwork/muta/issues/4)) ([2a7eb3c](https://github.com/nervosnetwork/muta/commit/2a7eb3c26fade5a065ec2435b4ba46b6c16f223a))\n* **consensus:** state root can not be clear ([#140](https://github.com/nervosnetwork/muta/issues/140)) ([4ea1df4](https://github.com/nervosnetwork/muta/commit/4ea1df425620482f36daf61b4b50edb83807efdd))\n* **consensus:** sync txs context no session id ([#167](https://github.com/nervosnetwork/muta/issues/167)) ([53136c3](https://github.com/nervosnetwork/muta/commit/53136c3dfdf0e7b29762cd72f51eeb35d52804c2))\n* **doc:** fix graphql_api doc link and doc-api build sh ([#161](https://github.com/nervosnetwork/muta/issues/161)) ([e67e2b2](https://github.com/nervosnetwork/muta/commit/e67e2b24bf0609c263f59381a83fcf04d2227583))\n* **executor:** wrong hook logic ([#127](https://github.com/nervosnetwork/muta/issues/127)) ([8c6a246](https://github.com/nervosnetwork/muta/commit/8c6a246a1b64a197371305856148b034320f1fa0))\n* **framework/executor:** Catch any errors in the call. ([#92](https://github.com/nervosnetwork/muta/issues/92)) ([739a126](https://github.com/nervosnetwork/muta/commit/739a126c86643b28e1c47aef87d8bd803b9fe8d9))\n* **keypair:** Use hex encoding common_ref. 
([#79](https://github.com/nervosnetwork/muta/issues/79)) ([abbce4c](https://github.com/nervosnetwork/muta/commit/abbce4c15919f45f824bd4967ea64f8234548765))\n* **makefile:** Docker push to the correct image ([#146](https://github.com/nervosnetwork/muta/issues/146)) ([05f6396](https://github.com/nervosnetwork/muta/commit/05f6396f1786b46b4cf9c41e3f700b37ebaddb68))\n* **mempool:** Always get the latest epoch id when `package`. ([#30](https://github.com/nervosnetwork/muta/issues/30)) ([9a77ebf](https://github.com/nervosnetwork/muta/commit/9a77ebf9ecba6323cc81cd094774e32fd28b946e))\n* **mempool:** broadcast new transactions ([#32](https://github.com/nervosnetwork/muta/issues/32)) ([086ec7e](https://github.com/nervosnetwork/muta/commit/086ec7eb6ca2c8f6afc14767d51efdb91533f932))\n* **mempool:** Fix concurrent insert bug of mempool ([#19](https://github.com/nervosnetwork/muta/issues/19)) ([515eec2](https://github.com/nervosnetwork/muta/commit/515eec2ab65a2d57a5ca742c774daeb9cef99354))\n* **mempool:** Resize the queue to ensure correct switching. 
([#18](https://github.com/nervosnetwork/muta/issues/18)) ([ebf1ae3](https://github.com/nervosnetwork/muta/commit/ebf1ae34861fc48297813cdc465e4d9c99e059d4))\n* **mempool:** sync proposal txs doesn't insert txs at all ([#179](https://github.com/nervosnetwork/muta/issues/179)) ([33f39c5](https://github.com/nervosnetwork/muta/commit/33f39c5bac0235a8261c53327c558864a6149c8a))\n* **network:** dead lock in peer manager ([#24](https://github.com/nervosnetwork/muta/issues/24)) ([a74017a](https://github.com/nervosnetwork/muta/commit/a74017aa9d84b6b862683860e63c000b4048e459))\n* **network:** default rpc timeout to 4 seconds ([#115](https://github.com/nervosnetwork/muta/issues/115)) ([666049c](https://github.com/nervosnetwork/muta/commit/666049c54c8eee8291cc173230caccb35de137ca))\n* **network:** fail to bootstrap if bootstrap isn't start already ([#46](https://github.com/nervosnetwork/muta/issues/46)) ([9dd515a](https://github.com/nervosnetwork/muta/commit/9dd515a3e09f1c158dff6536ed38eb5116f4317f))\n* **network:** give up retry ([#152](https://github.com/nervosnetwork/muta/issues/152)) ([34d052a](https://github.com/nervosnetwork/muta/commit/34d052aaba1684333fdd49f86e54c103064fa2f6))\n* **network:** never reconnect bootstrap again after failure ([#22](https://github.com/nervosnetwork/muta/issues/22)) ([79d66bd](https://github.com/nervosnetwork/muta/commit/79d66bd06e61ff6ef41c12ada91cf6485482aa43))\n* **network:** NoSessionId Error ([#33](https://github.com/nervosnetwork/muta/issues/33)) ([4761d79](https://github.com/nervosnetwork/muta/commit/4761d797dded9534e0c0b5e43c6e519055542c2c))\n* **network:** rpc memory leak if rpc call future is dropped ([#166](https://github.com/nervosnetwork/muta/issues/166)) ([8476a4b](https://github.com/nervosnetwork/muta/commit/8476a4b85bf3cf923adcd7555cef04ae73a225f1))\n* **sync:** Check the height again after get the lock ([#171](https://github.com/nervosnetwork/muta/issues/171)) 
([68164f3](https://github.com/nervosnetwork/muta/commit/68164f3f75d83b9507ee68a099fb712492339edb))\n* **sync:** Flush the memory pool when the storage success ([#165](https://github.com/nervosnetwork/muta/issues/165)) ([3b9cbd5](https://github.com/nervosnetwork/muta/commit/3b9cbd55310993c783b0a5794237df75accf118e))\n* fix overlord not found error ([#95](https://github.com/nervosnetwork/muta/issues/95)) ([0754c64](https://github.com/nervosnetwork/muta/commit/0754c64973f7fca92e49080c3a03a869b43a4c46))\n* Ignore bootstraps when empty. ([#41](https://github.com/nervosnetwork/muta/issues/41)) ([2b3566b](https://github.com/nervosnetwork/muta/commit/2b3566b4acb91f6086b9cca2b1ea4d2883a75be9))\n\n\n### Features\n\n* **config:** move bls_pub_key config to genesis.toml ([#162](https://github.com/nervosnetwork/muta/issues/162)) ([337b01f](https://github.com/nervosnetwork/muta/commit/337b01fda21fc33f4d4817d93a27d86af9e2b164))\n* **network:** interval report pending data size ([#160](https://github.com/nervosnetwork/muta/issues/160)) ([3c46aca](https://github.com/nervosnetwork/muta/commit/3c46aca4873abf9b8afd01d5f464df57bb1b9b9a))\n* **sync:** Trigger sync after waiting for consensus interval ([#169](https://github.com/nervosnetwork/muta/issues/169)) ([fe355f1](https://github.com/nervosnetwork/muta/commit/fe355f1d7d6359dfa97809f1bc603cb99975ba46))\n* add api schema ([#90](https://github.com/nervosnetwork/muta/issues/90)) ([3f8adfa](https://github.com/nervosnetwork/muta/commit/3f8adfa0a717b055a4455fd102de68003f835bf2))\n* add common_ref argument for keypair tool ([#154](https://github.com/nervosnetwork/muta/issues/154)) ([2651346](https://github.com/nervosnetwork/muta/commit/26513469206aa8a4480c5fffad9d134d5d0e8ded))\n* add panic hook to logger ([#156](https://github.com/nervosnetwork/muta/issues/156)) ([93b65fe](https://github.com/nervosnetwork/muta/commit/93b65feb89502b7d7836d7f4c423db37fbd1ef4f))\n* Extract muta as crate. 
([1b62fe7](https://github.com/nervosnetwork/muta/commit/1b62fe786fbd576b67ea28df3d304d235ae3e94e))\n* Metadata service ([#133](https://github.com/nervosnetwork/muta/issues/133)) ([a588b12](https://github.com/nervosnetwork/muta/commit/a588b12de4f3c0de666b66e2a5dea65d71977f5f))\n* spawn sync txs in check epoch ([6dca1dd](https://github.com/nervosnetwork/muta/commit/6dca1ddcd9256a3061f132a5abc5d784d466c168))\n* support specify module log level via config ([#105](https://github.com/nervosnetwork/muta/issues/105)) ([c06061b](https://github.com/nervosnetwork/muta/commit/c06061b4ccd755177385dfee000783e2b11b0dcd))\n* Update juniper, supports async ([#149](https://github.com/nervosnetwork/muta/issues/149)) ([cbabf50](https://github.com/nervosnetwork/muta/commit/cbabf507c25ee8feb8a57de408bc97efc8a4a4ab))\n* update overlord with brake engine ([#159](https://github.com/nervosnetwork/muta/issues/159)) ([8cd886a](https://github.com/nervosnetwork/muta/commit/8cd886a79fec934a53d409a27de941f16166c176)), closes [#156](https://github.com/nervosnetwork/muta/issues/156) [#158](https://github.com/nervosnetwork/muta/issues/158)\n* **api:** Add the exec_height field to the block ([#138](https://github.com/nervosnetwork/muta/issues/138)) ([417153c](https://github.com/nervosnetwork/muta/commit/417153c632793c7ac4e7bc3ffa5b2832dd2dbe66))\n* **binding-macro:** service method supports none payload and none response ([#103](https://github.com/nervosnetwork/muta/issues/103)) ([3a5783e](https://github.com/nervosnetwork/muta/commit/3a5783eadd1090cf739d4fdbe94f049115eb65f0))\n* **consensus:** develop aggregate crypto with overlord ([#60](https://github.com/nervosnetwork/muta/issues/60)) ([2bc0869](https://github.com/nervosnetwork/muta/commit/2bc0869e928b35c674b4cafdf48540298752b5b5))\n* **core/binding:** Implementation of service state. 
([#48](https://github.com/nervosnetwork/muta/issues/48)) ([301be6f](https://github.com/nervosnetwork/muta/commit/301be6f39379bd3826b5f605c999ce107f7404e4))\n* **core/binding-macro:** Add `read` and `write` proc-macro. ([#49](https://github.com/nervosnetwork/muta/issues/49)) ([687b6e1](https://github.com/nervosnetwork/muta/commit/687b6e1e1a960f679394843c42b861981828d8aa))\n* **core/binding-macro:** Add cycles proc-marco. ([#52](https://github.com/nervosnetwork/muta/issues/52)) ([e2289a2](https://github.com/nervosnetwork/muta/commit/e2289a2481510b59c18e37d0fc8bedd9f5d4537e))\n* **core/binding-macro:** Support for returning a struct. ([#70](https://github.com/nervosnetwork/muta/issues/70)) ([e13b1ff](https://github.com/nervosnetwork/muta/commit/e13b1ff7834279de9c2df5a0df6967035b7fb8b3))\n* **framework:** add ExecutorParams into hook method ([#116](https://github.com/nervosnetwork/muta/issues/116)) ([8036bd6](https://github.com/nervosnetwork/muta/commit/8036bd6f9be1f49eedbc40bbc260ad82952c2e71))\n* **framework:** add extra: Option<Bytes> to ServiceContext ([#118](https://github.com/nervosnetwork/muta/issues/118)) ([694c4a3](https://github.com/nervosnetwork/muta/commit/694c4a34f32dc1ba4940db19e304de7a927e1531))\n* **framework:** add tx_hash, nonce to ServiceContext ([#111](https://github.com/nervosnetwork/muta/issues/111)) ([352f71f](https://github.com/nervosnetwork/muta/commit/352f71fb3b8b024d533d26c7a344fad801b7a91c))\n* **framework/executor:** create service genesis from config ([#104](https://github.com/nervosnetwork/muta/issues/104)) ([8988ccb](https://github.com/nervosnetwork/muta/commit/8988ccb3e5cb2a25bfeabe93c5a63ac1600290a2))\n* **graphql:** Modify the API to fit the framework data structure. 
([#74](https://github.com/nervosnetwork/muta/issues/74)) ([a1ca2b0](https://github.com/nervosnetwork/muta/commit/a1ca2b0d68e32e335d8d388b70bca83137519f5a))\n* **muta:** flush metadata while commit  ([#137](https://github.com/nervosnetwork/muta/issues/137)) ([383a481](https://github.com/nervosnetwork/muta/commit/383a481c348efdf73fd690b42b2430fca6d9a0db))\n* **muta:** link up metadata service with muta ([#136](https://github.com/nervosnetwork/muta/issues/136)) ([ba65b80](https://github.com/nervosnetwork/muta/commit/ba65b80dffd128f12336b44d4e80ed40cced8e75))\n* **protocol/traits:** Add traits of binding. ([#47](https://github.com/nervosnetwork/muta/issues/47)) ([c6b85ee](https://github.com/nervosnetwork/muta/commit/c6b85ee7bee5b14c5da1676ff44d743c031a0fa6))\n* **protocol/types:** Add cycles_price for raw_transaction. ([#46](https://github.com/nervosnetwork/muta/issues/46)) ([55f64a4](https://github.com/nervosnetwork/muta/commit/55f64a49634061ca05c75cbf5923f183fc83936d))\n* **sync:** Wait for the execution queue. ([#132](https://github.com/nervosnetwork/muta/issues/132)) ([a8d2013](https://github.com/nervosnetwork/muta/commit/a8d2013991cc6b5b579429954c8411c7954b1da4))\n* add end to end test ([#42](https://github.com/nervosnetwork/muta/issues/42)) ([e84756d](https://github.com/nervosnetwork/muta/commit/e84756d1734ad58943309c3c2299393f5a2022e4))\n* Extract muta as crate. 
([#75](https://github.com/nervosnetwork/muta/issues/75)) ([fc576ea](https://github.com/nervosnetwork/muta/commit/fc576eaa67a3b4b4fa459b0ab970251d63b06b4f)), closes [#46](https://github.com/nervosnetwork/muta/issues/46) [#47](https://github.com/nervosnetwork/muta/issues/47) [#48](https://github.com/nervosnetwork/muta/issues/48) [#49](https://github.com/nervosnetwork/muta/issues/49) [#52](https://github.com/nervosnetwork/muta/issues/52) [#51](https://github.com/nervosnetwork/muta/issues/51) [#55](https://github.com/nervosnetwork/muta/issues/55) [#58](https://github.com/nervosnetwork/muta/issues/58) [#56](https://github.com/nervosnetwork/muta/issues/56) [#64](https://github.com/nervosnetwork/muta/issues/64) [#65](https://github.com/nervosnetwork/muta/issues/65) [#70](https://github.com/nervosnetwork/muta/issues/70) [#71](https://github.com/nervosnetwork/muta/issues/71) [#72](https://github.com/nervosnetwork/muta/issues/72) [#73](https://github.com/nervosnetwork/muta/issues/73) [#43](https://github.com/nervosnetwork/muta/issues/43) [#54](https://github.com/nervosnetwork/muta/issues/54) [#53](https://github.com/nervosnetwork/muta/issues/53) [#57](https://github.com/nervosnetwork/muta/issues/57) [#45](https://github.com/nervosnetwork/muta/issues/45) [#62](https://github.com/nervosnetwork/muta/issues/62) [#63](https://github.com/nervosnetwork/muta/issues/63) [#66](https://github.com/nervosnetwork/muta/issues/66) [#61](https://github.com/nervosnetwork/muta/issues/61) [#67](https://github.com/nervosnetwork/muta/issues/67) [#68](https://github.com/nervosnetwork/muta/issues/68) [#60](https://github.com/nervosnetwork/muta/issues/60) [#46](https://github.com/nervosnetwork/muta/issues/46) [#47](https://github.com/nervosnetwork/muta/issues/47) [#48](https://github.com/nervosnetwork/muta/issues/48) [#49](https://github.com/nervosnetwork/muta/issues/49) [#52](https://github.com/nervosnetwork/muta/issues/52) [#51](https://github.com/nervosnetwork/muta/issues/51) 
[#55](https://github.com/nervosnetwork/muta/issues/55) [#58](https://github.com/nervosnetwork/muta/issues/58) [#56](https://github.com/nervosnetwork/muta/issues/56) [#64](https://github.com/nervosnetwork/muta/issues/64) [#65](https://github.com/nervosnetwork/muta/issues/65) [#70](https://github.com/nervosnetwork/muta/issues/70) [#72](https://github.com/nervosnetwork/muta/issues/72) [#74](https://github.com/nervosnetwork/muta/issues/74)\n* metrics logger ([#43](https://github.com/nervosnetwork/muta/issues/43)) ([d633309](https://github.com/nervosnetwork/muta/commit/d6333091959da6ab0a12630282f6ea783d509319))\n* support consensus tracing ([#53](https://github.com/nervosnetwork/muta/issues/53)) ([03942f0](https://github.com/nervosnetwork/muta/commit/03942f08cfdcc573d7feef3a1111e59f63d077f1))\n* **api:** make API more user-friendly ([#38](https://github.com/nervosnetwork/muta/issues/38)) ([ba33467](https://github.com/nervosnetwork/muta/commit/ba33467e52c114576b82850e11662d168ede293a))\n* **mempool:** implement cached batch txs broadcast ([#20](https://github.com/nervosnetwork/muta/issues/20)) ([d2af811](https://github.com/nervosnetwork/muta/commit/d2af811bb99becc9600d784ce19e021fec11627d))\n* **sync:** synchronization epoch ([#9](https://github.com/nervosnetwork/muta/issues/9)) ([fb4bf0d](https://github.com/nervosnetwork/muta/commit/fb4bf0d7c4bde7c86d1b09f469037ff1219f15fa)), closes [#17](https://github.com/nervosnetwork/muta/issues/17) [#18](https://github.com/nervosnetwork/muta/issues/18)\n* add compile and run in README ([#11](https://github.com/nervosnetwork/muta/issues/11)) ([1058322](https://github.com/nervosnetwork/muta/commit/10583224053ab91c32dbec815cd0a5af6b0dbeb3))\n* add docker ([#31](https://github.com/nervosnetwork/muta/issues/31)) ([8a4386a](https://github.com/nervosnetwork/muta/commit/8a4386ad4c1f66783cada885db9851609b6f5f8d))\n* change rlp in executor to fixed-codec ([#29](https://github.com/nervosnetwork/muta/issues/29)) 
([7f737cd](https://github.com/nervosnetwork/muta/commit/7f737cdfc9353148b945ad52dd5ab3fd46e2c4db))\n* Get balance. ([#28](https://github.com/nervosnetwork/muta/issues/28)) ([8c4a3f9](https://github.com/nervosnetwork/muta/commit/8c4a3f9af8b9e1e8f19cc50b280b66b5d8e270bb))\n* **codec:** Add codec tests and benchmarks ([#22](https://github.com/nervosnetwork/muta/issues/22)) ([dcbe522](https://github.com/nervosnetwork/muta/commit/dcbe522be22596059280f6ef845a6d6f4e798551))\n* **consensus:** develop consensus interfaces ([#21](https://github.com/nervosnetwork/muta/issues/21)) ([62e3c06](https://github.com/nervosnetwork/muta/commit/62e3c063cd4f82efda43ca5c87c042db5adb9abb))\n* **consensus:** develop consensus provider and engine ([#28](https://github.com/nervosnetwork/muta/issues/28)) ([b2ccf9c](https://github.com/nervosnetwork/muta/commit/b2ccf9c84502a6dd476b1737aa9cbb2a283ced32))\n* **consensus:** Execute the transactions on commit. ([#7](https://github.com/nervosnetwork/muta/issues/7)) ([b54e7d2](https://github.com/nervosnetwork/muta/commit/b54e7d2bbd5d0ac45ef0d4c728e398b87a1f5450))\n* **consensus:** joint overlord and chain ([#32](https://github.com/nervosnetwork/muta/issues/32)) ([72cec41](https://github.com/nervosnetwork/muta/commit/72cec41c86824455ad35cfb1da8a246c50731568))\n* **consensus:** mutex lock and timer config ([#45](https://github.com/nervosnetwork/muta/issues/45)) ([cf09687](https://github.com/nervosnetwork/muta/commit/cf09687299b5be39a9c40f13d4b88a496ec7c943))\n* **consensus:** Support transaction executor. ([#6](https://github.com/nervosnetwork/muta/issues/6)) ([e1188f9](https://github.com/nervosnetwork/muta/commit/e1188f9296b3947f833d6bc9a9beff22ebbbf4e7))\n* **executor:** Create genesis. 
([#1](https://github.com/nervosnetwork/muta/issues/1)) ([a1111d8](https://github.com/nervosnetwork/muta/commit/a1111d8db709c62d119edf3238a22dd656e8035f))\n* **graphql:** Support transfer and contract deployment ([#44](https://github.com/nervosnetwork/muta/issues/44)) ([bfcb520](https://github.com/nervosnetwork/muta/commit/bfcb5203fe245e364922d5d8966197a8a8f8d91c))\n* **mempool:** fix fixed_codec ([#25](https://github.com/nervosnetwork/muta/issues/25)) ([c1ac607](https://github.com/nervosnetwork/muta/commit/c1ac607ac9b61f4867c17f69c50dad9797dc1c2b))\n* **mempool:** Remove cycle_limit ([#23](https://github.com/nervosnetwork/muta/issues/23)) ([8a19ae8](https://github.com/nervosnetwork/muta/commit/8a19ae867fd5b82c4fd56a1f8b59a83e24ca5bc0))\n* **native-contract:** Support for asset creation and transfer. ([#37](https://github.com/nervosnetwork/muta/issues/37)) ([1c505fb](https://github.com/nervosnetwork/muta/commit/1c505fbdd57fcb2ef3df3e8b19c65599d77c9bf1))\n* **network:** log connected peer ips ([#23](https://github.com/nervosnetwork/muta/issues/23)) ([1691bfa](https://github.com/nervosnetwork/muta/commit/1691bfa47ac561a2f27243e21b1b2fad2fb64be9))\n* develop merkle root ([#17](https://github.com/nervosnetwork/muta/issues/17)) ([03cec31](https://github.com/nervosnetwork/muta/commit/03cec318645ee49158f09ec59e356210a80f8bbf))\n* Fill in the main function ([#36](https://github.com/nervosnetwork/muta/issues/36)) ([d783f3b](https://github.com/nervosnetwork/muta/commit/d783f3b2d36507a695abd47b303b6c0108e2030b))\n* **mempool:** Develop mempool's tests and benches  ([#9](https://github.com/nervosnetwork/muta/issues/9)) ([5ddd5f4](https://github.com/nervosnetwork/muta/commit/5ddd5f4d0c1fa9630971ade538dcf954b6aa8f54))\n* **mempool:** Implement MemPool interfaces ([#8](https://github.com/nervosnetwork/muta/issues/8)) ([934ce58](https://github.com/nervosnetwork/muta/commit/934ce58b7a7a6b89b65ff931ce5487e553dd927d))\n* **native_contract:** Add an adapter that provides access to the 
world state. ([#27](https://github.com/nervosnetwork/muta/issues/27)) ([3281bea](https://github.com/nervosnetwork/muta/commit/3281beab2d054470b5edf330515df933cc713bb8))\n* **protocol:** Add the mempool traits ([#7](https://github.com/nervosnetwork/muta/issues/7)) ([9f6c19b](https://github.com/nervosnetwork/muta/commit/9f6c19bbfbff6c8f82bb732c3503d757833f837e))\n* **protocol:** Add the underlying data structure. ([#5](https://github.com/nervosnetwork/muta/issues/5)) ([5dae189](https://github.com/nervosnetwork/muta/commit/5dae189104c986348adddd43fbaa47af01781828))\n* **protocol:** Protobuf serialize ([#6](https://github.com/nervosnetwork/muta/issues/6)) ([ff00595](https://github.com/nervosnetwork/muta/commit/ff00595d100e44148b1cc243437798db8233ca2b))\n* **storage:** add storage test ([#18](https://github.com/nervosnetwork/muta/issues/18)) ([f78df5b](https://github.com/nervosnetwork/muta/commit/f78df5b0357eade7855152eee9c79070866477ac))\n* **storage:** Implement memory adapter API ([#11](https://github.com/nervosnetwork/muta/issues/11)) ([b0a8090](https://github.com/nervosnetwork/muta/commit/b0a80901229f85e8cf89bd806dcb32c95ae059b8))\n* **storage:** Implement storage ([#17](https://github.com/nervosnetwork/muta/issues/17)) ([7728b5b](https://github.com/nervosnetwork/muta/commit/7728b5b0307bd58b11671f123f37e3e365b14b97))\n* **types:** Add account structure. 
([#24](https://github.com/nervosnetwork/muta/issues/24)) ([f6b93f0](https://github.com/nervosnetwork/muta/commit/f6b93f0f08b03a20761aef47f08343eb5d8e6a85))\n\n\n### Performance Improvements\n\n* **storage:** cache latest epoch ([#128](https://github.com/nervosnetwork/muta/issues/128)) ([da4d7a9](https://github.com/nervosnetwork/muta/commit/da4d7a92363596b7339518e24c64ab49648749dd))\n\n\n### Reverts\n\n* Revert \"[ᚬdebug-muta] feat(service): Upgrade asset (#181)\" (#182) ([dad3f99](https://github.com/nervosnetwork/muta/commit/dad3f99f7c694eea57b546c6b2169950c5692ea1)), closes [#181](https://github.com/nervosnetwork/muta/issues/181) [#182](https://github.com/nervosnetwork/muta/issues/182)\n* Revert \"feat: Extract muta as crate. (#75)\" (#77) ([3baacc5](https://github.com/nervosnetwork/muta/commit/3baacc5c781615377e9a6ba50cfc7b17dcb0ec6e)), closes [#75](https://github.com/nervosnetwork/muta/issues/75) [#77](https://github.com/nervosnetwork/muta/issues/77)\n\n\n\n# [0.1.0](https://github.com/nervosnetwork/muta/compare/733ee8e6be7649c9aa2d772bb1dc661bd0879917...v0.1.0) (2019-09-22)\n\n\n### Bug Fixes\n\n* **ci:** build on push and pull request ([d28aa55](https://github.com/nervosnetwork/muta/commit/d28aa55f5df240277e2b75e87aa948cdcf11ea7f))\n* **ci:** temporarily amend code to pass lint ([9441236](https://github.com/nervosnetwork/muta/commit/9441236a5107e0042753915ed943b487cd02d6a5))\n* **consensus:** Clear cache of last proposal. ([#199](https://github.com/nervosnetwork/muta/issues/199)) ([f548653](https://github.com/nervosnetwork/muta/commit/f5486531f43fa720171941ad4be5ec7646a269c2))\n* **consensus:** fix lock free too early problem and add state root check ([#277](https://github.com/nervosnetwork/muta/issues/277)) ([7238c5b](https://github.com/nervosnetwork/muta/commit/7238c5bc057bd6c6f31773fa4bd3e06aaea72255))\n* **consensus:** Makes sure that proposer is this node. 
([#281](https://github.com/nervosnetwork/muta/issues/281)) ([d7f4e50](https://github.com/nervosnetwork/muta/commit/d7f4e5081f00a04aee934d0ce700cd107f4f345f))\n* **core-network:** CallbackItemNotFound ([#243](https://github.com/nervosnetwork/muta/issues/243)) ([47365fa](https://github.com/nervosnetwork/muta/commit/47365faf5fa7171dde8951661fa095a6c43bcb1f))\n* **core-network:** false bootstrapped connections ([#275](https://github.com/nervosnetwork/muta/issues/275)) ([26e76f0](https://github.com/nervosnetwork/muta/commit/26e76f0a2879aed3da745529f64ba3828a1cc30e))\n* **core-types:** compilation failure ([#269](https://github.com/nervosnetwork/muta/issues/269)) ([56d8649](https://github.com/nervosnetwork/muta/commit/56d86491f69ab16fd2c76b66b28ad76df78c6ca7))\n* **core/crypto:** pubkey_to_address() consistent with cita ([acb5e63](https://github.com/nervosnetwork/muta/commit/acb5e63ea577429bc94c16a3430035ea139aaf15))\n* **executor:** Save the full node data. ([b57a1c5](https://github.com/nervosnetwork/muta/commit/b57a1c5fa775479b85d1531f7d2dced817de4729))\n* **jsonrpc:** give default value for newFilter ([#289](https://github.com/nervosnetwork/muta/issues/289)) ([17069b4](https://github.com/nervosnetwork/muta/commit/17069b49067dd7335f243d248e3c8d633e455a73))\n* **jsonrpc:** logic error in getTransactionCount ([#290](https://github.com/nervosnetwork/muta/issues/290)) ([464bfdf](https://github.com/nervosnetwork/muta/commit/464bfdf08a9954206bb595b3861c52208fc9630d))\n* **jsonrpc:** make the response compatible with jsonrpc 2.0 spec ([1db5190](https://github.com/nervosnetwork/muta/commit/1db5190bc91d431bacce6bb44a1185b19520c1a2))\n* **jsonrpc:** prefix with 0x by API getTransactionProof ([#295](https://github.com/nervosnetwork/muta/issues/295)) ([b1c0160](https://github.com/nervosnetwork/muta/commit/b1c0160b65fc91e8a2bcfd908943fb238d1101c1))\n* **jsonrpc:** raise error when key not found in state ([#294](https://github.com/nervosnetwork/muta/issues/294)) 
([7a7c294](https://github.com/nervosnetwork/muta/commit/7a7c294df5ae75f50ec0fe3620634c7280f837e7))\n* **jsonrpc:** returns the correct block hash ([#280](https://github.com/nervosnetwork/muta/issues/280)) ([f6a58d0](https://github.com/nervosnetwork/muta/commit/f6a58d0cfc743d1fa84fe5de99798157ba5f25a6))\n* Call header.hash ([#94](https://github.com/nervosnetwork/muta/issues/94)) ([636aa54](https://github.com/nervosnetwork/muta/commit/636aa549c21a04611b6f4575dfc7e78fa47d768e))\n* change the blocking thread from rayon to std::thread ([5b80476](https://github.com/nervosnetwork/muta/commit/5b804765d0a76055e6e730560a6d7ecd576703be))\n* return err if tx not found in get_batch to avoid forking ([#279](https://github.com/nervosnetwork/muta/issues/279)) ([6aed2fe](https://github.com/nervosnetwork/muta/commit/6aed2fe5ffcd0eb6a699cff00d92e9dd3ab7d7b3))\n* **sync:** proof and proposal_hash do not match. ([#239](https://github.com/nervosnetwork/muta/issues/239)) ([51f332e](https://github.com/nervosnetwork/muta/commit/51f332ee8c4a10b88844a272bc51a116b4d25dd2))\n* tokio::spawn panic. ([#238](https://github.com/nervosnetwork/muta/issues/238)) ([12d8d01](https://github.com/nervosnetwork/muta/commit/12d8d01ed42f9cc5d9cc341edfd76a6076aa37e1))\n* **common/logger:** cargo fmt ([e3a7f5a](https://github.com/nervosnetwork/muta/commit/e3a7f5a2217956b86191881caeb3ca6cea7ec2fc))\n* **components/transaction-pool:** Use the latest crypto API. ([#86](https://github.com/nervosnetwork/muta/issues/86)) ([f6c94d3](https://github.com/nervosnetwork/muta/commit/f6c94d307d6e89afba75ed8b83b99088fc7ca9de))\n* **components/transaction-pool:** Check if the transaction is repeated in historical blocks. 
([dba25fe](https://github.com/nervosnetwork/muta/commit/dba25fe09d8e82f0e396415055ce08efbf1fe159))\n* **core-p2p:** transmission example: a clippy warning ([6d2f42a](https://github.com/nervosnetwork/muta/commit/6d2f42ae97194333a823581406fc75d2c47536b2))\n* **core-p2p:** transmission example: remove unreachable match branch ([0082bd6](https://github.com/nervosnetwork/muta/commit/0082bd6a3fb956f9ee17a9eba6ada77fc91f3dfe))\n* **core-p2p:** transmission: future task starvation ([ba14db0](https://github.com/nervosnetwork/muta/commit/ba14db035413220ed7eba5e5543b8a6496267641))\n* **devchain:** correct addresses matched with privkey ([#114](https://github.com/nervosnetwork/muta/issues/114)) ([f56744e](https://github.com/nervosnetwork/muta/commit/f56744e7809b39da79434a3fbcf3deb127fded27))\n* **network:** RepeatedConnection and ConnectSelf errors ([#196](https://github.com/nervosnetwork/muta/issues/196)) ([2e5e888](https://github.com/nervosnetwork/muta/commit/2e5e888cdb0869e7622639919b12e62eca06f137))\n* **p2p:** Make sure the \"poll\" is triggered. ([#182](https://github.com/nervosnetwork/muta/issues/182)) ([88daed1](https://github.com/nervosnetwork/muta/commit/88daed1e3e175c21e7923ddd5f1b4eb4ef4d6286))\n* **p2p-identify:** empty local listen addresses ([#198](https://github.com/nervosnetwork/muta/issues/198)) ([c40ad8a](https://github.com/nervosnetwork/muta/commit/c40ad8a8dedd999efd17a88b9c30b198d4a0035a))\n* **synchronizer:** add a pull_txs_sync method to sync txs from block ([#207](https://github.com/nervosnetwork/muta/issues/207)) ([317fca8](https://github.com/nervosnetwork/muta/commit/317fca8b8d2f270e5d140a94bb1a9227c4b7271b))\n* **transaction-pool:** duplicate insertion transactions from network ([#191](https://github.com/nervosnetwork/muta/issues/191)) ([2c095bb](https://github.com/nervosnetwork/muta/commit/2c095bbe5649454abf2663df7355c0a56f54a71f))\n* **tx-pool:** \"get_count\" returns the repeat transaction. 
([f5612d0](https://github.com/nervosnetwork/muta/commit/f5612d09d02e9183b702f0233aecc14c31779945))\n* **tx-pool:** `ensure` method always pulls all txs from remote peer ([#194](https://github.com/nervosnetwork/muta/issues/194)) ([9ff300e](https://github.com/nervosnetwork/muta/commit/9ff300e191aa39b6301e481f8f287287b645ba39))\n* **tx-pool:** Ensure the number of transactions meets expectations ([dcbf0dd](https://github.com/nervosnetwork/muta/commit/dcbf0dd8cf548ddfe3afb3226d7596637ae615dd))\n* **tx-pool:** replace chashmap ([#211](https://github.com/nervosnetwork/muta/issues/211)) ([717f55e](https://github.com/nervosnetwork/muta/commit/717f55e4772c5818ab17e2b1c320b0b98f174122))\n* Avoid drop ([4d0f986](https://github.com/nervosnetwork/muta/commit/4d0f986741c392489893f036989db7218db54743))\n* build failure ([18ce8e4](https://github.com/nervosnetwork/muta/commit/18ce8e4642d8d27892fee53b9695e4ced7921055))\n* jsonrpc call return value ([#104](https://github.com/nervosnetwork/muta/issues/104)) ([1fe41eb](https://github.com/nervosnetwork/muta/commit/1fe41eb491a16588019218144985eec143613c65))\n* logic error of bloom filter ([#176](https://github.com/nervosnetwork/muta/issues/176)) ([70269cb](https://github.com/nervosnetwork/muta/commit/70269cb5cefd82f1a14eb5e85df419c1587d19c8))\n* merkle typo ([4f63585](https://github.com/nervosnetwork/muta/commit/4f6358565ee8d486be18ac8ff6069b95b597ea4d))\n* rlp encode ([b852ac1](https://github.com/nervosnetwork/muta/commit/b852ac147db818cf289b972f054028d293218a19))\n* rlp hash ([837055a](https://github.com/nervosnetwork/muta/commit/837055a4eb78ba941004dbc0466955895de8bcab))\n* Set quota limit for the genesis. 
([#106](https://github.com/nervosnetwork/muta/issues/106)) ([931fe40](https://github.com/nervosnetwork/muta/commit/931fe404453a6f936cbd27bf37d0e326a03e4484))\n* write lock ([de80439](https://github.com/nervosnetwork/muta/commit/de80439cb4e7889c1220fc7821604f9ef792422e))\n\n\n### Features\n\n* add business model support for executor ([#308](https://github.com/nervosnetwork/muta/issues/308)) ([e03396b](https://github.com/nervosnetwork/muta/commit/e03396bb6b964a0c93f43c2684a0e76a55db5540))\n* add Deserialize for Hash and Address ([#259](https://github.com/nervosnetwork/muta/issues/259)) ([fef188c](https://github.com/nervosnetwork/muta/commit/fef188c5950fb7f64a92312894efdb4955201a93))\n* add docker config for dev ([#197](https://github.com/nervosnetwork/muta/issues/197)) ([6e74aec](https://github.com/nervosnetwork/muta/commit/6e74aec0b51c2bf80c1d1b893130ea74f4a1a8f0))\n* add fabric devops scripts ([fcdc25c](https://github.com/nervosnetwork/muta/commit/fcdc25c05b5c30ba38bf6af57885c2f45233d3fc))\n* add height to the end of proposal msg ([#255](https://github.com/nervosnetwork/muta/issues/255)) ([c5cbc5e](https://github.com/nervosnetwork/muta/commit/c5cbc5ec70f1dc0fb46ef0bb87c3b994596b4571))\n* add more info to version ([#298](https://github.com/nervosnetwork/muta/issues/298)) ([fd02a17](https://github.com/nervosnetwork/muta/commit/fd02a17a68bb6ef59bbd4cded13d69da221237ee))\n* peerCount RPC API ([#257](https://github.com/nervosnetwork/muta/issues/257)) ([736ae8c](https://github.com/nervosnetwork/muta/commit/736ae8c7f537a56b01d648cf066f220e47108820))\n* **components/cita-jsonrpc:** impl executor related apis ([#80](https://github.com/nervosnetwork/muta/issues/80)) ([bc8f340](https://github.com/nervosnetwork/muta/commit/bc8f34015617e1a01fb2fbb30d9709cdd806daea))\n* **components/cita-jsonrpc:** impl get_code and finish some todo ([#87](https://github.com/nervosnetwork/muta/issues/87)) 
([e1b0b9d](https://github.com/nervosnetwork/muta/commit/e1b0b9dc8c39965366c5b572905e63cacecdc958))\n* **components/database:** Implement RocksDB ([#72](https://github.com/nervosnetwork/muta/issues/72)) ([3516fbc](https://github.com/nervosnetwork/muta/commit/3516fbc41338a2f423e0ba56eb96c7fa697a6c77))\n* **components/executor:** Add trie db for executor. ([#85](https://github.com/nervosnetwork/muta/issues/85)) ([fd7dc1d](https://github.com/nervosnetwork/muta/commit/fd7dc1da97a4b7dafb1ecbc2813c9506423689a5))\n* **components/executor:** Implement EVM executor. ([#68](https://github.com/nervosnetwork/muta/issues/68)) ([021893d](https://github.com/nervosnetwork/muta/commit/021893db432f1ddadc89da9c9251bdb6fb79d925))\n* **components/jsonrpc:** implement getStateProof ([#178](https://github.com/nervosnetwork/muta/issues/178)) ([69499fb](https://github.com/nervosnetwork/muta/commit/69499fbb98cbe7f23d426c15ebe67de552dd5d2b))\n* **components/jsonrpc:** implement getTransactionProof ([0db8785](https://github.com/nervosnetwork/muta/commit/0db8785475e9d9c098fa123b9c23b4f0eab286dc))\n* **components/jsonrpc:** running on microscope ([#200](https://github.com/nervosnetwork/muta/issues/200)) ([1c63a0e](https://github.com/nervosnetwork/muta/commit/1c63a0e3db751b7b7be6f053bed2b66245b105cd))\n* **components/jsonrpc:** Try to convert tx to cita::tx ([#221](https://github.com/nervosnetwork/muta/issues/221)) ([b8ab16b](https://github.com/nervosnetwork/muta/commit/b8ab16b05ad01a0c6ef5a7b8d7ad76961e7749ff))\n* **core-network:** expose send_buffer_size and recv_buffer_size ([#248](https://github.com/nervosnetwork/muta/issues/248)) ([e5120ad](https://github.com/nervosnetwork/muta/commit/e5120ad646c9d206b43b0d50911303507bdfe381))\n* **core-network:** implement peer count feature ([#256](https://github.com/nervosnetwork/muta/issues/256)) ([8f7e7eb](https://github.com/nervosnetwork/muta/commit/8f7e7eb51cdeebfb9c679d88626ac2ec3fa651a4))\n* add performance test lua script 
([#244](https://github.com/nervosnetwork/muta/issues/244)) ([c727b73](https://github.com/nervosnetwork/muta/commit/c727b733340029f72d9280a57e07522f635eff44))\n* **core-network:** implement concurrent reactor and real chained reactor ([#175](https://github.com/nervosnetwork/muta/issues/175)) ([dc9f897](https://github.com/nervosnetwork/muta/commit/dc9f897f08801d7b8a418750ed516a8acac057ca))\n* **core-p2p:** implement datagram transport protocol ([fee2d45](https://github.com/nervosnetwork/muta/commit/fee2d4546552bd6c46376309eb399126219c55fb))\n* **core-p2p:** transmission: use `poll` func to do broadcast ([b376cbe](https://github.com/nervosnetwork/muta/commit/b376cbef9211e55f809f16bb9bab1360dd4b3523))\n* **core/consensus:** Implement solo mode for consensus ([e071b15](https://github.com/nervosnetwork/muta/commit/e071b1533b1107f65eb0f97563f011f644d73be6))\n* **core/crypto:** Add secp256k1 ([8349eaa](https://github.com/nervosnetwork/muta/commit/8349eaa2817ee8c27e9e8367c89f3469e52b6f8a))\n* **core/crypto:** Modify the return type to result. ([9f2424c](https://github.com/nervosnetwork/muta/commit/9f2424ca11fa300f7269f7a32195ec8bbde096e0))\n* **core/network:** Support broadcast message ([#185](https://github.com/nervosnetwork/muta/issues/185)) ([992c55f](https://github.com/nervosnetwork/muta/commit/992c55f87458a38629944fb78ee69982d8329b2b))\n* **core/types:** Add hash function for the header and receipts ([c982a52](https://github.com/nervosnetwork/muta/commit/c982a52ce29da7f0e783b2a7a52f1d541c15ea10))\n* **executor:** Add flush for trie db. ([#240](https://github.com/nervosnetwork/muta/issues/240)) ([23fd538](https://github.com/nervosnetwork/muta/commit/23fd53849ac626cdeaabb165c0534bb90651aa90))\n* **jsonrpc:** Implement filter APIs ([#190](https://github.com/nervosnetwork/muta/issues/190)) ([c97ed22](https://github.com/nervosnetwork/muta/commit/c97ed2273b6ddb2385d6d0285f2d5b4d267b130b))\n* **tx-pool:** Batch broadcast transactions. 
([#234](https://github.com/nervosnetwork/muta/issues/234)) ([d297b1a](https://github.com/nervosnetwork/muta/commit/d297b1a4d655fdfac25f7f5630253f7e8f6f70ea))\n* add synchronizer ([#167](https://github.com/nervosnetwork/muta/issues/167)) ([38db7aa](https://github.com/nervosnetwork/muta/commit/38db7aa3f83e4a35417440e4787c5249b9eace63))\n* Implement many JSONRPC APIs ([#166](https://github.com/nervosnetwork/muta/issues/166)) ([807b6a7](https://github.com/nervosnetwork/muta/commit/807b6a73cb098087179d9b086fa0070b6ced74d0))\n* Implement RPC getTransactionCount ([#169](https://github.com/nervosnetwork/muta/issues/169)) ([dbf0c51](https://github.com/nervosnetwork/muta/commit/dbf0c51a17f3e285e1146eee3b5e9def08d16d50))\n* rewrite network component ([#230](https://github.com/nervosnetwork/muta/issues/230)) ([585dabb](https://github.com/nervosnetwork/muta/commit/585dabb2d52dd70de7ebc26eee59345596301c1a))\n* **components/jsonrpc:** Implements sendRawTransaction ([#159](https://github.com/nervosnetwork/muta/issues/159)) ([112d345](https://github.com/nervosnetwork/muta/commit/112d34582c00bea3c05d1663cf07d79aefbfa6a9))\n* **core-context:** add `CommonValue` trait and `p2p_session_id` method ([#165](https://github.com/nervosnetwork/muta/issues/165)) ([216b743](https://github.com/nervosnetwork/muta/commit/216b74381c00b15ba61444cf462528ee170fcc41))\n* **core/consensus:** Implements BFT ([#158](https://github.com/nervosnetwork/muta/issues/158)) ([e7a3bfd](https://github.com/nervosnetwork/muta/commit/e7a3bfd2f667c9bb8d6b9deb29a57c837ae296b9))\n* **core/notify:** add notify as message-bus between components ([b53c50d](https://github.com/nervosnetwork/muta/commit/b53c50dc04090b6b0d5b6725b5c32697446aa5f8))\n* **core/serialization:** Add proto file ([0bf7c59](https://github.com/nervosnetwork/muta/commit/0bf7c59200ad4a4cc7994efecaec5d8c683f175a))\n* **core/storage:** Add the storage trait ([ffc8776](https://github.com/nervosnetwork/muta/commit/ffc8776b02bc0a4cf785c7c5c47a88266f186b49))\n* 
**core/types:** Add the transactions hash calculation function. ([67d8170](https://github.com/nervosnetwork/muta/commit/67d817072c4c03b2fc2eaae5d1dc99d2d41240e0))\n* **core/types:** Define serialization and deserialization methods ([f28c63d](https://github.com/nervosnetwork/muta/commit/f28c63d2b4c7b77dbe24e2b50e70cf649a6c714c))\n* **database:** Add memory db ([d21a5a2](https://github.com/nervosnetwork/muta/commit/d21a5a29bd20e02f3ddd29f77c3df2963f8f3b4b))\n* **jsonrpc:** support batch ([0a0c680](https://github.com/nervosnetwork/muta/commit/0a0c680993ff9be231f1ae8e583171e1f304f79b))\n* **main:** add init command for genesis ([#96](https://github.com/nervosnetwork/muta/issues/96)) ([ec752b0](https://github.com/nervosnetwork/muta/commit/ec752b0602800055990fbfcc54bd2c2ab0b2cb60))\n* **p2p:** Update to tentacle0.2.0-alpha.5 ([#177](https://github.com/nervosnetwork/muta/issues/177)) ([f6f83b6](https://github.com/nervosnetwork/muta/commit/f6f83b6b263579d66160cfab29b83bd5a709eeb4))\n* **pubsub:** Implement pubsub components ([#143](https://github.com/nervosnetwork/muta/issues/143)) ([a079770](https://github.com/nervosnetwork/muta/commit/a079770b0e66e22552bd8cf504a9e1ba0c520d0e))\n* **runtime:** add `Context` struct ([#155](https://github.com/nervosnetwork/muta/issues/155)) ([27e5aa7](https://github.com/nervosnetwork/muta/commit/27e5aa7f01f3559d2a9dd17346595c9161a9c0f6))\n* Add project framework ([#24](https://github.com/nervosnetwork/muta/issues/24)) ([733ee8e](https://github.com/nervosnetwork/muta/commit/733ee8e6be7649c9aa2d772bb1dc661bd0879917))\n* Add transaction pool component. 
([360c935](https://github.com/nervosnetwork/muta/commit/360c93540ea77dc51551a3739e17682600d2b1b7))\n* Fill main.rs ([#102](https://github.com/nervosnetwork/muta/issues/102)) ([b5b4c72](https://github.com/nervosnetwork/muta/commit/b5b4c7233efcd1c35e92248b7726ca20644800e9))\n* impl cita-jsonrpc ([49e2a2d](https://github.com/nervosnetwork/muta/commit/49e2a2d22d094b2b6a2f71bc5201ccfe28308797))\n* update db interface and storage interface ([#137](https://github.com/nervosnetwork/muta/issues/137)) ([36b3d07](https://github.com/nervosnetwork/muta/commit/36b3d07f23e2c7ada870cb699bf138cdd66c2860))\n\n\n### Reverts\n\n* Revert \"chore: Update bft-rs (#203)\" (#204) ([cc15ba9](https://github.com/nervosnetwork/muta/commit/cc15ba9ed302ab1389838a4a6c745675106179e9)), closes [#203](https://github.com/nervosnetwork/muta/issues/203) [#204](https://github.com/nervosnetwork/muta/issues/204)\n\n\n\n# [](https://github.com/nervosnetwork/muta/compare/v0.2.0-beta.4...v) (2020-08-12)\n\n\n### Features\n\n* **network:** split transmitter data ([#380](https://github.com/nervosnetwork/muta/issues/380)) ([0322cd6](https://github.com/nervosnetwork/muta/commit/0322cd690cb118f56153e424e9a6bf4b2a11d8b4))\n* **network:** verify chain id during protocol handshake ([#406](https://github.com/nervosnetwork/muta/issues/406)) ([e678e92](https://github.com/nervosnetwork/muta/commit/e678e92bf01bc4bc914e74b6fed22c8b55b3cdc7))\n\n\n\n# [0.2.0-beta.4](https://github.com/nervosnetwork/muta/compare/v0.2.0-beta.3...v0.2.0-beta.4) (2020-08-10)\n\n\n### Bug Fixes\n\n* load hrp before deserializing genesis payload to take hrp effect ([#405](https://github.com/nervosnetwork/muta/issues/405)) ([828e6d5](https://github.com/nervosnetwork/muta/commit/828e6d539cf4da9cf042c450418e75a944315014))\n\n\n### Features\n\n* **api:** Support enabled TLS ([#402](https://github.com/nervosnetwork/muta/issues/402)) ([c2908a3](https://github.com/nervosnetwork/muta/commit/c2908a3ba6a5ab1219ddc9b14ff6d7320cf70228))\n\n\n### Performance 
Improvements\n\n* **state:** add state cache for trieDB ([#404](https://github.com/nervosnetwork/muta/issues/404)) ([2a08c14](https://github.com/nervosnetwork/muta/commit/2a08c147571707507b72882788fd51f7a799f3ec))\n\n\n\n# [0.2.0-beta.3](https://github.com/nervosnetwork/muta/compare/v0.2.0-beta.2...v0.2.0-beta.3) (2020-08-07)\n\n\n### Bug Fixes\n\n* **apm:** Return the correct time ([#400](https://github.com/nervosnetwork/muta/issues/400)) ([fd6549a](https://github.com/nervosnetwork/muta/commit/fd6549a6352633cee7b5b747448129df7a0532ca))\n\n\n### Features\n\n* **network:** limit connections from same ip ([#388](https://github.com/nervosnetwork/muta/issues/388)) ([dc78c13](https://github.com/nervosnetwork/muta/commit/dc78c13b8aa25f3e4535e588149042f6345e4d25))\n* **network:** limit inbound and outbound connections ([#393](https://github.com/nervosnetwork/muta/issues/393)) ([3a3111e](https://github.com/nervosnetwork/muta/commit/3a3111e1e332529bc8636c54526920c292c04f8a))\n* **sync:** Limit the maximum height of once sync ([#390](https://github.com/nervosnetwork/muta/issues/390)) ([f951a95](https://github.com/nervosnetwork/muta/commit/f951a953daf307ffc98b4df0fe1a77a6a810ac71))\n\n\n\n# [0.2.0-beta.2](https://github.com/nervosnetwork/muta/compare/v0.2.0-beta.1...v0.2.0-beta.2) (2020-08-04)\n\n\n### Bug Fixes\n\n* **consensus:** Add timestamp checking ([#377](https://github.com/nervosnetwork/muta/issues/377)) ([382ede9](https://github.com/nervosnetwork/muta/commit/382ede9367b910a06b59f3562ecd28ab8100d39e))\n\n\n### Features\n\n* **benchmark:** add a perf benchmark macro ([#391](https://github.com/nervosnetwork/muta/issues/391)) ([eb24311](https://github.com/nervosnetwork/muta/commit/eb2431149b6865a82d0e4286536f65319a5e1d1f))\n* **Cargo:** add random leader feature for muta ([#385](https://github.com/nervosnetwork/muta/issues/385)) ([43da977](https://github.com/nervosnetwork/muta/commit/43da9772b22b97ab4797b80ce5161f1a49827543))\n\n\n### Performance Improvements\n\n* 
**metrics:** Add metrics of state ([#397](https://github.com/nervosnetwork/muta/issues/397)) ([5822764](https://github.com/nervosnetwork/muta/commit/5822764240f8b4e8cfeca4bccf7d399a0bf71897))\n\n\n\n# [0.2.0-beta.1](https://github.com/nervosnetwork/muta/compare/v0.2.0-alpha.1...v0.2.0-beta.1) (2020-08-03)\n\n\n### Bug Fixes\n\n* **consensus:** return an error when committing an outdated block ([#371](https://github.com/nervosnetwork/muta/issues/371)) ([b3d518b](https://github.com/nervosnetwork/muta/commit/b3d518b52658b40746ef708fa8cde5c96a39a539))\n* **mempool:** Ensure that there are no duplicate transactions in the ordered transactions ([#379](https://github.com/nervosnetwork/muta/issues/379)) ([97708ac](https://github.com/nervosnetwork/muta/commit/97708ac385be2243344d700a0d7c928f18fd51b3))\n* **storage:** test batch receipts get panic ([#373](https://github.com/nervosnetwork/muta/issues/373)) ([300a3c6](https://github.com/nervosnetwork/muta/commit/300a3c65cf0399c2ba37a3bd655e06719b660330))\n\n\n### Features\n\n* **network:** tag consensus peer ([#364](https://github.com/nervosnetwork/muta/issues/364)) ([9b27df1](https://github.com/nervosnetwork/muta/commit/9b27df1015a25792cc210c5aa0dd473a45ae885d)), closes [#354](https://github.com/nervosnetwork/muta/issues/354) [#2](https://github.com/nervosnetwork/muta/issues/2) [#3](https://github.com/nervosnetwork/muta/issues/3) [#4](https://github.com/nervosnetwork/muta/issues/4) [#5](https://github.com/nervosnetwork/muta/issues/5) [#6](https://github.com/nervosnetwork/muta/issues/6) [#7](https://github.com/nervosnetwork/muta/issues/7)\n* Add global panic hook ([#376](https://github.com/nervosnetwork/muta/issues/376)) ([7382279](https://github.com/nervosnetwork/muta/commit/738227962771a6a66b85f2fd199df2e699b43adc))\n\n\n### Performance Improvements\n\n* **executor:** use inner call instead of service dispatcher ([#365](https://github.com/nervosnetwork/muta/issues/365)) 
([7b1d2a3](https://github.com/nervosnetwork/muta/commit/7b1d2a32d5c20306af3868e5265bd2530dd9493b))\n\n\n### BREAKING CHANGES\n\n* **network:** - replace Validator address bytes with pubkey bytes\n\n* change(consensus): log validator address instead of its public key\n\nThe block proposer is an address instead of a public key\n\n* fix: compilation failed\n* **network:** - change users_cast to multicast, take peer_ids bytes instead of Address\n- network bootstrap configuration now takes peer id instead of pubkey hex\n\n* refactor(network): PeerId api\n\n\n\n# [0.2.0-alpha.1](https://github.com/nervosnetwork/muta/compare/v0.1.2-beta...v0.2.0-alpha.1) (2020-07-22)\n\n\n### Bug Fixes\n\n* **executor:** The logic to deal with tx_hook and tx_body ([#367](https://github.com/nervosnetwork/muta/issues/367)) ([749d558](https://github.com/nervosnetwork/muta/commit/749d558b8b58a1943bfa2842dcedcc45218c0f78))\n* **executor:** tx events aren't cleared on execution error ([#313](https://github.com/nervosnetwork/muta/issues/313)) ([1605cf5](https://github.com/nervosnetwork/muta/commit/1605cf59b558b97889bb431da7f81fd424b90a89))\n* **proof:** Verify aggregated signature in checking proof ([#308](https://github.com/nervosnetwork/muta/issues/308)) ([d2a98b0](https://github.com/nervosnetwork/muta/commit/d2a98b06e44449ca756f135c1b235ff0d80eaf67))\n* **trust_metric_test:** unreliable full node exit check ([#327](https://github.com/nervosnetwork/muta/issues/327)) ([a4ab4a6](https://github.com/nervosnetwork/muta/commit/a4ab4a6209e0978148983e88447ac2d9178fa42a))\n* **WAL:** Ignore path already exist ([#304](https://github.com/nervosnetwork/muta/issues/304)) ([02df937](https://github.com/nervosnetwork/muta/commit/02df937fb6449c9b3b0b50e790e0ecf6bfc1ee3d))\n\n\n### Performance Improvements\n\n* **mempool:** parallel verifying signatures in mempool ([#359](https://github.com/nervosnetwork/muta/issues/359)) 
([2ccdf1a](https://github.com/nervosnetwork/muta/commit/2ccdf1a67a40cd483749a98a1a68c37bcf1d473c))\n\n\n### Reverts\n\n* Revert \"refactor(consensus)!: replace Validator address bytes with pubkey bytes (#354)\" (#361) ([4dabfa2](https://github.com/nervosnetwork/muta/commit/4dabfa231961d1ec8be1ba42bf05781f55395aed)), closes [#354](https://github.com/nervosnetwork/muta/issues/354) [#361](https://github.com/nervosnetwork/muta/issues/361)\n\n\n* refactor(consensus)!: replace Validator address bytes with pubkey bytes (#354) ([e4433d7](https://github.com/nervosnetwork/muta/commit/e4433d793e8a63788ec682880afc93474e0d2414)), closes [#354](https://github.com/nervosnetwork/muta/issues/354)\n\n\n### Features\n\n* **executor:** allow cancelling execution units through context ([#317](https://github.com/nervosnetwork/muta/issues/317)) ([eafb489](https://github.com/nervosnetwork/muta/commit/eafb489f78f7521487c6b2d25dd9912e43f76500))\n* **executor:** independent tx hook states commit ([#316](https://github.com/nervosnetwork/muta/issues/316)) ([fde6450](https://github.com/nervosnetwork/muta/commit/fde645010363a4664033370e4109e4d1f08b13bc))\n* **protocol:** Remove the logs bloom from block header ([#312](https://github.com/nervosnetwork/muta/issues/312)) ([ff1e0df](https://github.com/nervosnetwork/muta/commit/ff1e0df1e8a65cc480825a49eed9495cc31ecee0))\n\n\n### BREAKING CHANGES\n\n* - replace Validator address bytes with pubkey bytes\n\n* change(consensus): log validator address instead of its public key\n\nThe block proposer is an address instead of a public key\n\n* fix: compilation failed\n\n"
  },
  {
    "path": "CHANGELOG/README.md",
    "content": "# CHANGELOGs\n> use: conventional-changelog\n>\n> example command: conventional-changelog -p angular -i CHANGELOG-0.2.md -s -r 0.2\n\n- [CHANGELOG-0.1.md](./CHANGELOG-0.1.md)\n- [CHANGELOG-0.2.md](./CHANGELOG-0.2.md)\n"
  },
  {
    "path": "CONTRIBUTING.md",
    "content": "# Contributing\n\nOur goal is to make contributing to the `muta` project easy and transparent.\n\nWhen contributing to this repository, please first discuss the change you wish to make via issue, or any other method with the community before making a change. \n\n### Report Issue\n\n* Read known issues to see whether the issue is already addressed there.\n\n* Search existing issues to see whether others had already posted a similar issue.\n  \n* **Do not open up a GitHub issue to report security vulnerabilities**. Instead,\n  refer to the [security policy](SECURITY.md).\n\n* When creating a new issue, be sure to include a title and clear description. It is appreciated that if you can also attach as much relevant information as possible, such as version, environment, reproducing steps, samples.\n\n### Send PR\n\n* See [Code Standards]() for code guidelines.\n  \n* See [Conventional Commits](https://conventionalcommits.org) for commit guidelines.\n\n1. Fork the `muta` repo and create your branch from master.\n2. If you have added code that should be tested, add unit tests.\n3. Verify and ensure that the test suite passes.\n4. Run `make ci` to lint and test the code before commit.\n5. Make sure your code passes CI.\n6. Ensure the PR description clearly describes the problem and solution. Include the relevant issue number if applicable.\n7. 
Submit your pull request.\n\n\n## Code of Conduct\n\n### Our Pledge\n\nIn the interest of fostering an open and welcoming environment, we as\ncontributors and maintainers pledge to making participation in our project and\nour community a harassment-free experience for everyone, regardless of age, body\nsize, disability, ethnicity, sex characteristics, gender identity and expression,\nlevel of experience, education, socio-economic status, nationality, personal\nappearance, race, religion, or sexual identity and orientation.\n\n### Our Standards\n\nExamples of behavior that contributes to creating a positive environment\n\ninclude:\n\n* Using welcoming and inclusive language\n* Being respectful of differing viewpoints and experiences\n* Gracefully accepting constructive criticism\n* Focusing on what is best for the community\n* Showing empathy towards other community members\n\nExamples of unacceptable behavior by participants include:\n\n* The use of sexualized language or imagery and unwelcome sexual attention or\n advances\n* Trolling, insulting/derogatory comments, and personal or political attacks\n* Public or private harassment\n* Publishing others' private information, such as a physical or electronic\n address, without explicit permission\n* Other conduct which could reasonably be considered inappropriate in a\n professional setting\n\n### Our Responsibilities\n\nProject maintainers are responsible for clarifying the standards of acceptable\nbehavior and are expected to take appropriate and fair corrective action in\nresponse to any instances of unacceptable behavior.\n\nProject maintainers have the right and responsibility to remove, edit, or\nreject comments, commits, code, wiki edits, issues, and other contributions\nthat are not aligned to this Code of Conduct, or to ban temporarily or\npermanently any contributor for other behaviors that they deem inappropriate,\nthreatening, offensive, or harmful.\n\n### Scope\n\nThis Code of Conduct applies both within 
project spaces and in public spaces\nwhen an individual is representing the project or its community. Examples of\nrepresenting a project or community include using an official project e-mail\naddress, posting via an official social media account, or acting as an appointed\nrepresentative at an online or offline event. Representation of a project may be\nfurther defined and clarified by project maintainers.\n\n### Enforcement\n\nInstances of abusive, harassing, or otherwise unacceptable behavior may be\nreported by contacting the project team at hello@nervos.org. All\ncomplaints will be reviewed and investigated and will result in a response that\nis deemed necessary and appropriate to the circumstances. The project team is\nobligated to maintain confidentiality with regard to the reporter of an incident.\nFurther details of specific enforcement policies may be posted separately.\n\nProject maintainers who do not follow or enforce the Code of Conduct in good\nfaith may face temporary or permanent repercussions as determined by other\nmembers of the project's leadership.\n\n### Attribution\n\nThis Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,\navailable at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html\n\n[homepage]: https://www.contributor-covenant.org\n\nFor answers to common questions about this code of conduct, see\nhttps://www.contributor-covenant.org/faq"
  },
  {
    "path": "Cargo.toml",
    "content": "[package]\nname = \"muta\"\nversion = \"0.2.1\"\nauthors = [\"Muta Dev <muta@nervos.org>\"]\nedition = \"2018\"\nrepository = \"https://github.com/nervosnetwork/muta\"\n\n[dependencies]\ncli = { path = \"./core/cli\"}\nbyzantine = { path = \"./byzantine\" }\ncommon-apm = { path = \"./common/apm\" }\ncommon-config-parser = { path = \"./common/config-parser\" }\ncommon-crypto = { path = \"./common/crypto\" }\ncommon-logger = { path = \"./common/logger\" }\nprotocol = { path = \"./protocol\", package = \"muta-protocol\" }\ncore-api = { path = \"./core/api\" }\ncore-storage = { path = \"./core/storage\" }\ncore-mempool = { path = \"./core/mempool\" }\ncore-network = { path = \"./core/network\" }\ncore-consensus = { path = \"./core/consensus\" }\n\nbinding-macro = { path = \"./binding-macro\" }\nframework = { path = \"./framework\" }\n\nbacktrace = \"0.3\"\nactix-rt = \"1.0\"\nderive_more = \"0.99\"\nfutures = \"0.3\"\nparking_lot = \"0.11\"\nserde = \"1.0\"\nserde_derive = \"1.0\"\nserde_json = \"1.0\"\nlog = \"0.4\"\nclap = \"2.33\"\nbytes = \"0.5\"\nhex = \"0.4\"\nrlp = \"0.4\"\ntoml = \"0.5\"\ntokio = { version = \"0.2\", features = [\"macros\", \"sync\", \"rt-core\", \"rt-util\", \"signal\", \"time\"] }\nmuta-apm = \"0.1.0-alpha.7\"\nfutures-timer=\"3.0\"\ncita_trie = \"2.0\"\nfs_extra = \"1.2.0\"\n\n[dev-dependencies]\ncita_trie = \"2.0\"\nasync-trait = \"0.1\"\ntoml = \"0.5\"\nlazy_static = \"1.4\"\nmuta-codec-derive = \"0.2\"\nasset = { path = \"built-in-services/asset\" }\nmulti-signature = { path = \"built-in-services/multi-signature\" }\nauthorization = { path = \"built-in-services/authorization\" }\nmetadata = { path = \"built-in-services/metadata\"}\nutil = { path = \"built-in-services/util\"}\nrand = \"0.7\"\ncore-network = { path = \"./core/network\", features = [\"diagnostic\"] }\ntokio = { version = \"0.2\", features = [\"full\"] }\n\n[workspace]\nmembers = [\n  \"devtools/keypair\",\n\n  \"common/channel\",\n  
\"common/config-parser\",\n  \"common/crypto\",\n  \"common/logger\",\n  \"common/merkle\",\n  \"common/pubsub\",\n\n  \"core/api\",\n  \"core/consensus\",\n  \"core/mempool\",\n  \"core/network\",\n  \"core/storage\",\n  \"core/cli\",\n  \"core/run\",\n\n  \"binding-macro\",\n  \"framework\",\n  \"built-in-services/asset\",\n  \"built-in-services/metadata\",\n  \"built-in-services/multi-signature\",\n  \"built-in-services/authorization\",\n\n  \"protocol\",\n\n  \"byzantine\",\n]\n\n[features]\ndefault = []\nrandom_leader = [\"core-consensus/random_leader\"]\ntentacle_metrics = [\"core-network/tentacle_metrics\"]\n\n[[example]]\nname = \"muta-chain\"\ncrate-type = [\"bin\"]\n\n[[test]]\nname = \"trust_metric\"\npath = \"tests/trust_metric.rs\"\nrequired-features = [ \"core-network/diagnostic\" ]\n\n[[test]]\nname = \"verify_chain_id\"\npath = \"tests/verify_chain_id.rs\"\nrequired-features = [ \"core-network/diagnostic\" ]\n\n[[bench]]\nname = \"bench_execute\"\npath = \"benchmark/mod.rs\"\n"
  },
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2019 Nervos Foundation\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "Makefile",
    "content": "ERBOSE := $(if ${CI},--verbose,)\n\nCOMMIT := $(shell git rev-parse --short HEAD)\n\nifneq (\"$(wildcard /usr/lib/librocksdb.so)\",\"\")\n\tSYS_LIB_DIR := /usr/lib\nelse ifneq (\"$(wildcard /usr/lib64/librocksdb.so)\",\"\")\n\tSYS_LIB_DIR := /usr/lib64\nelse\n\tUSE_SYS_ROCKSDB :=\nendif\n\nUSE_SYS_ROCKSDB :=\nSYS_ROCKSDB := $(if ${USE_SYS_ROCKSDB},ROCKSDB_LIB_DIR=${SYS_LIB_DIR},)\n\nCARGO := env ${SYS_ROCKSDB} cargo\n\ntest:\n\t${CARGO} test ${VERBOSE} --all -- --skip trust_metric --nocapture\n\ndoc:\n\tcargo doc --all --no-deps\n\ndoc-deps:\n\tcargo doc --all\n\n# generate GraphQL API documentation\ndoc-api:\n\tbash docs/build/gql_api.sh\n\ncheck:\n\t${CARGO} check ${VERBOSE} --all\n\nbuild:\n\t${CARGO} build ${VERBOSE} --release\n\nprod-muta-chain:\n\t${CARGO} build ${VERBOSE} --release --example muta-chain\n\nfmt:\n\tcargo fmt ${VERBOSE} --all -- --check\n\nclippy:\n\t${CARGO} clippy ${VERBOSE} --all --all-targets --all-features -- \\\n\t\t-D warnings -D clippy::clone_on_ref_ptr -D clippy::enum_glob_use\n\n\nci: fmt clippy test\n\ninfo:\n\tdate\n\tpwd\n\tenv\n\ne2e-test:\n\tcargo build --example muta-chain\n\trm -rf ./devtools/chain/data\n\t./target/debug/examples/muta-chain -c ./devtools/chain/config.toml -g ./devtools/chain/genesis.toml > /tmp/log 2>&1 &\n\tcd tests/e2e && yarn && ./wait-for-it.sh -t 300 localhost:8000 -- yarn run test\n\tpkill -2 muta-chain\n\nbyz-test:\n\tcargo build --example muta-chain\n\tcargo build --example byzantine_node\n\trm -rf ./devtools/chain/data\n\tCONFIG=./examples/config-1.toml GENESIS=./examples/genesis.toml ./target/debug/examples/muta-chain > /tmp/log 2>&1 &\n\tCONFIG=./examples/config-2.toml GENESIS=./examples/genesis.toml ./target/debug/examples/muta-chain > /tmp/log 2>&1 &\n\tCONFIG=./examples/config-3.toml GENESIS=./examples/genesis.toml ./target/debug/examples/muta-chain > /tmp/log 2>&1 &\n\tCONFIG=./examples/config-4.toml GENESIS=./examples/genesis.toml ./target/debug/examples/byzantine_node > /tmp/log 
2>&1 &\n\tcd byzantine/tests && yarn && ../../tests/e2e/wait-for-it.sh -t 300 localhost:8000 -- yarn run test\n\tpkill -2 muta-chain byzantine_node\n\ne2e-test-via-docker:\n\tdocker-compose -f tests/e2e/docker-compose-e2e-test.yaml up --exit-code-from e2e-test --force-recreate\n\n# For counting lines of code\nstats:\n\t@cargo count --version || cargo +nightly install --git https://github.com/kbknapp/cargo-count\n\t@cargo count --separator , --unsafe-statistics\n\n# Use cargo-audit to audit Cargo.lock for crates with security vulnerabilities\n# expecting to see \"Success No vulnerable packages found\"\nsecurity-audit:\n\t@cargo audit --version || cargo install cargo-audit\n\t@cargo audit\n\n.PHONY: build prod prod-test\n.PHONY: fmt test clippy doc doc-deps doc-api check stats\n.PHONY: ci info security-audit\n"
  },
  {
    "path": "OWNERS",
    "content": "# See the OWNERS docs at https://go.k8s.io/owners\n\napprovers:\n- yejiayu\n- zeroqn\n- KaoImin \n- LycrusHamster \n- rev-chaos \n- homura \n- zhouyun-zoe \nreviewers:\n- yejiayu\n- zeroqn\n- KaoImin \n- LycrusHamster \n- rev-chaos \n- homura \n- zhouyun-zoe \n"
  },
  {
    "path": "OWNERS_ALIASES",
    "content": "aliases:\n- yejiayu\n- zeroqn\n- KaoImin \n- LycrusHamster \n- rev-chaos \n- homura \nbest-approvers:\n- yejiayu\n- zeroqn\n- KaoImin \n- LycrusHamster \n- rev-chaos \n- homura \nbest-reviewers:\n- yejiayu\n- zeroqn\n- KaoImin \n- LycrusHamster \n- rev-chaos \n- homura \n"
  },
  {
    "path": "README.md",
    "content": "<p align=\"center\">\n  <a href=\"https://github.com/nervosnetwork/muta\">\n    <img src=\"https://github.com/nervosnetwork/muta-docs/blob/master/static/docs-img/muta-logo1.png\" width=\"270\">\n  </a>\n  <h3 align=\"center\">Build your own blockchain,today</h3>\n  <p align=\"center\">\n    <a href=\"https://opensource.org/licenses/MIT\"><img src=\"https://img.shields.io/badge/License-MIT-green.svg\"></a>\n    <a href=\"https://github.com/nervosnetwork/muta/blob/master/rust-toolchain\"><img src=\"https://img.shields.io/badge/rustc-nightly-informational.svg\"></a>\n    <a href=\"https://travis-ci.com/nervosnetwork/muta\"><img src=\"https://travis-ci.com/nervosnetwork/muta.svg?branch=master\"></a>\n     <a href=\"https://discord.gg/QXkFT88\"><img src=\"https://img.shields.io/discord/674846745607536651?logo=discord\"\n    alt=\"chat on Discord\"></a>\n    <a href=\"https://github.com/nervosnetwork/muta\"><img src=\"https://img.shields.io/github/stars/nervosnetwork/muta.svg?style=social\"></a>\n    <a href=\"https://github.com/nervosnetwork/muta\"><img src=\"https://img.shields.io/github/forks/nervosnetwork/muta.svg?style=social\"></a>\n  </p>\n  <p align=\"center\">\n     Developed by Nervos<br>\n  </p>\n</p>\n\nEnglish | [简体中文](./README_CN.md)\n\n## What is Muta？\n\nMuta is a highly customizable high-performance blockchain framework. It has a built-in BFT-like consensus algorithm \"Overlord\" with high throughput and low latency, and it can also support different virtual machines, including CKB-VM, EVM, and WASM. Muta has interoperability across VMs. Different virtual machines can be used in a Muta-based blockchain at the same time. Developed by the Nervos team, Muta is designed to allow anyone in the world to build their own blockchain while enjoying the security and finality brought by Nervos CKB.\n\nDevelopers can customize PoA, PoS or DPoS chains based on Muta, and use different economic models and governance models. 
Developers can also develop different application chains (such as DEX chains) based on Muta to implement a specific business logic.\n\nMuta's core design philosophy is to make the development of a blockchain state transition as flexible and simple as possible, which means that while reducing the obstacles to build high-performance blockchains, it still maximizes its flexibility to facilitate developers to customize their business logic. Therefore, as a highly customizable high-performance blockchain framework, Muta provides a basic core component that a blockchain system needs, and developers can customize the functional parts of the chain freely.\n\n## Getting Started!\n\n[Muta Documentation](https://nervosnetwork.github.io/muta-docs/)\n\nQuickly build a simple chain and try some simple interaction, please refer to [Quick Start](https://nervosnetwork.github.io/muta-docs/#/en-us/getting_started.md)。\n\n## The basic core component Muta provided\n \nMuta provided all the core components needed to build a blockchain:\n\n* [Transaction Pool](https://nervosnetwork.github.io/muta-docs/#/en-us/transaction_pool.md)\n* [P2P Network](https://nervosnetwork.github.io/muta-docs/#/en-us/network.md)\n* [Consensus](https://nervosnetwork.github.io/muta-docs/#/en-us/overlord.md)\n* [Storage](https://nervosnetwork.github.io/muta-docs/#/en-us/storage.md)\n\n## Customizable Part\n\nDevelopers can customize the functional parts of the chain by developing Services.\n\nService is an abstraction layer for extension in Muta framework. Users can define block management, add VMs, etc. based on Service. Each Service, as a relatively independent logical component, can implement its specific function, and at the same time, different services can directly interact with each other, so that more complex functional logic can be constructed. 
More flexible is that services from different chains can also be reused, which makes it easier for developers to build their own functional modules.\n\nWe provide detailed service development guides and some service examples.\n\n* [Service Development Guide](https://nervosnetwork.github.io/muta-docs/#/en-us/service_dev.md)\n* [Service Examples](https://nervosnetwork.github.io/muta-docs/#/en-us/service_eg.md)\n* [Develop a DEX Chain](https://nervosnetwork.github.io/muta-docs/#/en-us/dex.md)\n\n## Developer Resources\n\nDeveloper resources can be found [here](./docs/resources.md)\n\n## Who is using Muta？\n\nMuta powers some open source projects.\n\n<p align=\"left\">\n  <a href=\"https://www.huobichain.com/\">\n    <img src=\"https://github.com/nervosnetwork/muta-docs/blob/master/static/docs-img/user/s_huobichain.jpg\" width=\"150\">\n  </a>\n</p>\n\nIs your project using Muta? Edit this page with a Pull Request to add your logo.:tada:\n\n## How to Contribute\n\nThe contribution workflow is described in [CONTRIBUTING.md](CONTRIBUTING.md), and security policy is described in [SECURITY.md](SECURITY.md).\n"
  },
  {
    "path": "README_CN.md",
    "content": "<p align=\"center\">\n  <a href=\"https://github.com/nervosnetwork/muta\">\n    <img src=\"https://github.com/nervosnetwork/muta-docs/blob/master/static/docs-img/muta-logo1.png\" width=\"270\">\n  </a>\n  <h3 align=\"center\">让世界上任何一个人都可以搭建属于他们自己的区块链</h3>\n  <p align=\"center\">\n    <a href=\"https://opensource.org/licenses/MIT\"><img src=\"https://img.shields.io/badge/License-MIT-green.svg\"></a>\n    <a href=\"https://github.com/nervosnetwork/muta/blob/master/rust-toolchain\"><img src=\"https://img.shields.io/badge/rustc-nightly-informational.svg\"></a>\n    <a href=\"https://travis-ci.com/nervosnetwork/muta\"><img src=\"https://travis-ci.com/nervosnetwork/muta.svg?branch=master\"></a>\n     <a href=\"https://discord.gg/QXkFT88\"><img src=\"https://img.shields.io/discord/674846745607536651?logo=discord\"\n    alt=\"chat on Discord\"></a>\n    <a href=\"https://github.com/nervosnetwork/muta\"><img src=\"https://img.shields.io/github/stars/nervosnetwork/muta.svg?style=social\"></a>\n    <a href=\"https://github.com/nervosnetwork/muta\"><img src=\"https://img.shields.io/github/forks/nervosnetwork/muta.svg?style=social\"></a>\n  </p>\n  <p align=\"center\">\n     由 Nervos 团队开发<br>\n  </p>\n</p>\n\n[English](./README.md) | 简体中文\n\n## 什么是 Muta？\n\nMuta 是一个高度可定制的高性能区块链框架。它内置了具有高吞吐量和低延迟特性的类 BFT 共识算法「Overlord」，并且可以支持不同的虚拟机，包括 CKB-VM、EVM 和 WASM。Muta 具有跨 VM 的互操作性，不同的虚拟机可以同时在一条基于 Muta 搭建的区块链中使用。Muta 由 Nervos 团队开发，旨在让世界上任何一个人都可以搭建属于他们自己的区块链，同时享受 Nervos CKB 所带来的安全性和最终性。\n\n开发者可以基于 Muta 定制开发 PoA、PoS 或者 DPoS 链，并且可以使用不同的经济模型和治理模型进行部署。开发者也可以基于 Muta 来开发不同的应用链（例如 DEX 链），以实现某种特定的业务逻辑。\n\nMuta 的核心理念是使一个区块链状态转换的开发尽可能的灵活和简便，也就是说在降低开发者搭建高性能区块链障碍的同时，仍然最大限度地保证其灵活性以方便开发者可以自由定制他们的协议。因此，作为一个高度可定制的高性能区块链框架，Muta 提供了一个区块链系统需要有的基础核心组件，开发者可以自由定制链的功能部分。\n\n## 快速开始！\n\n[Muta 文档网站](https://nervosnetwork.github.io/muta-docs/)\n\n快速搭建一条简单的链并尝试简单的交互，请参考[快速开始](https://nervosnetwork.github.io/muta-docs/#/getting_started.md)。\n\n## Muta 提供哪些基础核心组件？\n\nMuta 框架提供了搭建一个分布式区块链网络所需的全部核心组件：\n\n* 
[交易池](https://nervosnetwork.github.io/muta-docs/#/transaction_pool.md)\n* [P2P 网络](https://nervosnetwork.github.io/muta-docs/#/network.md)\n* [共识](https://nervosnetwork.github.io/muta-docs/#/overlord.md)\n* [存储](https://nervosnetwork.github.io/muta-docs/#/storage.md)\n\n## 开发者需要自己实现哪些部分？\n\n开发者可以通过开发 Service 来定制链的功能部分。\n\nService 是 Muta 框架中用于扩展的抽象层，用户可以基于 Service 定义区块治理、添加 VM 等等。每一个 Service 作为一个相对独立的逻辑化组件，可以实现其特定的功能，同时，不同的 Service 之间又可以直接进行交互，从而可以构建更为复杂的功能逻辑。更为灵活的是，不同链的 Service 还可以复用，这使得开发者们可以更为轻松的搭建自己的功能模块。\n\n我们提供了详细的 Service 开发指南，以及一些 Service 示例。\n\n* [Service 开发指南](https://nervosnetwork.github.io/muta-docs/#service_dev.md)\n* [Service 示例](https://nervosnetwork.github.io/muta-docs/#service_eg.md)\n\n## 开发资源\n\n可以在[这边](./docs/resources.md)找到相关的开发资源\n\n## 谁在使用 Muta？\n\n<p align=\"left\">\n  <a href=\"https://www.huobichain.com/\">\n    <img src=\"https://github.com/nervosnetwork/muta-docs/blob/master/static/docs-img/user/s_huobichain.jpg\" width=\"150\">\n  </a>\n</p>\n\n您的项目使用的是 Muta 吗？欢迎在这里添加您项目的 logo 和链接，请点击顶部的 `Edit Document` ，修改本文档的相关内容，并提交 Pull Request 即可:tada:\n\n## 贡献 ![PRs](https://img.shields.io/badge/PRs-welcome-brightgreen.svg)\n\n如何贡献请参考 [CONTRIBUTING.md](CONTRIBUTING.md)，Security Policy 请参考 [SECURITY.md](SECURITY.md)。\n"
  },
  {
    "path": "SECURITY.md",
    "content": "# Security Policy\n\nThis project is still under development, the primary goal at this stage is to implement features but we also take security very seriously. This document defines the policy on how to report vulnerabilities and receive updates when patches to those are released.\n\n\n## Reporting a vulnerability\n\nAll security bugs should be reported by sending email to [Nervos Security Team](mailto:security@nervos.org). This will deliver a message to Nervos Security Team who handle security issues. Your report will be acknowledged within 24 hours, and you'll receive a more detailed response to your email within 72 hours indicating the next steps in handling your report.\n\nAfter the initial reply to your report the security team will endeavor to keep you informed of the progress being made towards a fix and full announcement.\n\n## Disclosure process\n\n1. Security report received and is assigned a primary handler. This person will coordinate the fix and release process. Problem is confirmed and all affected versions is determinted. Code is audited to find any potential similar problems.\n2. Fixes are prepared for all supported releases. These fixes are not committed to the public repository but rather held locally pending the announcement.\n3. A suggested embargo date for this vulnerability is chosen. This notification will include patches for all supported versions.\n4. On the embargo date, the [Nervos security mailing list](#TBD) is sent a copy of the announcement. The changes are pushed to the public repository. At least 6 hours after the mailing list is notified, a copy of the advisory will be published on Nervos community channels.\n\nThis process can take some time, especially when coordination is required with maintainers of other projects. 
Every effort will be made to handle the bug in as timely a manner as possible, however it's important that we follow the release process above to ensure that the disclosure is handled in a consistent manner.\n\n## Receiving disclosures\n\nIf you require prior notification of vulnerabilities please subscribe to the [Nervos Security mailing list](#TBD). The mailing list is very low traffic, and it receives the public notifications the moment the embargo is lifted.\n\nIf you have any suggestions to improve this policy, please send an email to [Nervos Security Team](security@nervos.org).\n"
  },
  {
    "path": "benchmark/bench_executor.rs",
    "content": "#![allow(clippy::needless_collect)]\n\nuse asset::types::TransferPayload;\n\nuse super::*;\n\n#[rustfmt::skip]\n/// Bench in Intel(R) Core(TM) i7-4770HQ CPU @ 2.20GHz (8 x 2200)\n/// 100 txs bench_execute ... bench:  11,299,912 ns/iter (+/- 3,402,276)\n/// 1000 txs bench::bench_execute ... bench: 101,187,934 ns/iter (+/- 26,000,469)\n#[bench]\nfn bench_execute(b: &mut Bencher) {\n    let mut bench_adapter = BenchmarkAdapter::new();\n\n    let payload = TransferPayload {\n        asset_id: NATIVE_ASSET_ID.clone(),\n        to:       FEE_INLET_ACCOUNT.clone(),\n        value:    1u64,\n    };\n\n    let req = (0..1000).map(|_| TransactionRequest {\n        service_name: \"asset\".to_string(),\n        method:       \"transfer\".to_string(),\n        payload:      serde_json::to_string(&payload).unwrap(),\n    }).collect::<Vec<_>>();\n\n    perf_exec!(bench_adapter, req, b);\n}\n\n#[rustfmt::skip]\n/// 10 assets bench::perf_execute  ... bench: 109,202,563 ns/iter (+/- 6,378,009)\n/// 100 assets bench::perf_execute  ... bench: 108,859,512 ns/iter (+/- 2,977,622)\n/// 1000 assets bench::bench_execute ... bench: 108,037,404 ns/iter (+/- 4,539,634)\n/// 10000 assets test bench::perf_execute  ... 
bench: 100,244,123 ns/iter (+/- 18,935,087)\n#[bench]\nfn bench_execute_with_assets(b: &mut Bencher) {\n    let mut bench_adapter = BenchmarkAdapter::new();\n    create_assets(&mut bench_adapter, 10000);\n\n    let payload = TransferPayload {\n        asset_id: NATIVE_ASSET_ID.clone(),\n        to:       FEE_INLET_ACCOUNT.clone(),\n        value:    1u64,\n    };\n\n    let req = (0..1000).map(|_| TransactionRequest {\n        service_name: \"asset\".to_string(),\n        method:       \"transfer\".to_string(),\n        payload:      serde_json::to_string(&payload).unwrap(),\n    }).collect::<Vec<_>>();\n\n    perf_exec!(bench_adapter, req, b);\n}\n\nfn create_assets(bench_adapter: &mut BenchmarkAdapter, num: u64) {\n    let create_assets = (0..num)\n        .map(|n| {\n            let payload = asset::types::CreateAssetPayload {\n                name:   \"muta_\".to_string() + n.to_string().as_str(),\n                symbol: \"muta_\".to_string() + n.to_string().as_str(),\n                supply: 100_000,\n            };\n\n            TransactionRequest {\n                service_name: \"asset\".to_string(),\n                method:       \"create_asset\".to_string(),\n                payload:      serde_json::to_string(&payload).unwrap(),\n            }\n        })\n        .collect::<Vec<_>>();\n\n    exec!(bench_adapter, create_assets);\n}\n"
  },
  {
    "path": "benchmark/bench_mempool.rs",
    "content": "\n"
  },
  {
    "path": "benchmark/benchmark_genesis.toml",
    "content": "timestamp = 0\nprevhash = \"0x44915be5b6c20b0678cf05fcddbbaa832e25d7e6ac538784cd5c24de00d47472\"\n\n[[services]]\nname = \"asset\"\npayload = '''\n{\n   \"id\": \"0xf56924db538e77bb5951eb5ff0d02b88983c49c45eea30e8ae3e7234b311436c\",\n   \"name\": \"MutaToken\",\n   \"symbol\": \"MT\",\n   \"supply\": 320000011,\n   \"issuer\": \"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\"\n}\n'''\n\n# private key of this admin:\n# 5ec982173d54d830b6789cbbbe43eaa2853a5ff752d1ebc1b266cf9790314f8a\n[[services]]\nname = \"governance\"\npayload = '''\n{\n   \"info\": {\n       \"admin\": \"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\",\n       \"tx_failure_fee\": 10,\n       \"tx_floor_fee\": 20,\n       \"profit_deduct_rate_per_million\": 3,\n        \"tx_fee_discount\": [\n            {\n                \"threshold\": 1000,\n                \"discount_percent\": 90\n            },\n            {\n                \"threshold\": 10000,\n                \"discount_percent\": 70\n            },\n            {\n                \"threshold\": 100000,\n                \"discount_percent\": 50\n            }\n        ],\n       \"miner_benefit\": 10\n   },\n   \"tx_fee_inlet_address\": \"muta15a8a9ksxe3hhjpw3l7wz7ry778qg8h9wz8y35p\",\n   \"miner_profit_outlet_address\": \"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\",\n   \"miner_charge_map\": []\n}\n'''\n"
  },
  {
    "path": "benchmark/governance/mod.rs",
    "content": "mod types;\n\nuse std::cell::RefCell;\nuse std::convert::From;\nuse std::rc::Rc;\n\nuse bytes::Bytes;\nuse derive_more::{Display, From};\n\nuse binding_macro::{genesis, hook_after, service, tx_hook_after, tx_hook_before};\nuse protocol::traits::{ExecutorParams, ServiceResponse, ServiceSDK, StoreMap};\nuse protocol::try_service_response;\nuse protocol::types::{Address, Hash, ServiceContext, ServiceContextParams};\n\nuse asset::types::TransferPayload;\nuse asset::Assets;\nuse types::{GovernanceInfo, InitGenesisPayload};\n\nconst INFO_KEY: &str = \"admin\";\nconst TX_FEE_INLET_KEY: &str = \"fee_address\";\nconst MINER_PROFIT_OUTLET_KEY: &str = \"miner_address\";\nstatic ADMISSION_TOKEN: Bytes = Bytes::from_static(b\"governance\");\n\nlazy_static::lazy_static! {\n    pub static ref NATIVE_ASSET_ID: Hash = Hash::from_hex(\"0xf56924db538e77bb5951eb5ff0d02b88983c49c45eea30e8ae3e7234b311436c\").unwrap();\n}\n\npub struct GovernanceService<A, SDK> {\n    sdk:     SDK,\n    profits: Box<dyn StoreMap<Address, u64>>,\n    miners:  Box<dyn StoreMap<Address, Address>>,\n    asset:   A,\n}\n\n#[service]\nimpl<A: Assets, SDK: ServiceSDK> GovernanceService<A, SDK> {\n    pub fn new(mut sdk: SDK, asset: A) -> Self {\n        let profits: Box<dyn StoreMap<Address, u64>> = sdk.alloc_or_recover_map(\"profit\");\n        let miners: Box<dyn StoreMap<Address, Address>> = sdk.alloc_or_recover_map(\"miner_address\");\n        Self {\n            sdk,\n            profits,\n            miners,\n            asset,\n        }\n    }\n\n    #[genesis]\n    fn init_genesis(&mut self, payload: InitGenesisPayload) {\n        assert!(self.profits.is_empty());\n\n        let mut info = payload.info;\n        info.tx_fee_discount.sort();\n        self.sdk.set_value(INFO_KEY.to_string(), info);\n        self.sdk\n            .set_value(TX_FEE_INLET_KEY.to_string(), payload.tx_fee_inlet_address);\n        self.sdk.set_value(\n            MINER_PROFIT_OUTLET_KEY.to_string(),\n           
 payload.miner_profit_outlet_address,\n        );\n\n        for miner in payload.miner_charge_map.into_iter() {\n            self.miners\n                .insert(miner.address, miner.miner_charge_address);\n        }\n    }\n\n    #[tx_hook_before]\n    fn pledge_fee(&mut self, ctx: ServiceContext) -> ServiceResponse<String> {\n        let info = self\n            .sdk\n            .get_value::<_, GovernanceInfo>(&INFO_KEY.to_owned());\n        let tx_fee_inlet_address = self\n            .sdk\n            .get_value::<_, Address>(&TX_FEE_INLET_KEY.to_owned());\n\n        if info.is_none() || tx_fee_inlet_address.is_none() {\n            return ServiceError::MissingInfo.into();\n        }\n\n        let info = info.unwrap();\n        let tx_fee_inlet_address = tx_fee_inlet_address.unwrap();\n        let payload = TransferPayload {\n            asset_id: NATIVE_ASSET_ID.clone(),\n            to:       tx_fee_inlet_address,\n            value:    info.tx_failure_fee,\n        };\n\n        // Pledge the tx failure fee before executed the transaction.\n        let res = self.asset.transfer_(&ctx, payload);\n        try_service_response!(res);\n        ServiceResponse::from_succeed(String::new())\n    }\n\n    #[tx_hook_after]\n    fn deduct_fee(&mut self, ctx: ServiceContext) -> ServiceResponse<String> {\n        let tx_fee_inlet_address = self\n            .sdk\n            .get_value::<_, Address>(&TX_FEE_INLET_KEY.to_owned());\n        if tx_fee_inlet_address.is_none() {\n            return ServiceError::MissingInfo.into();\n        }\n\n        let tx_fee_inlet_address = tx_fee_inlet_address.unwrap();\n        let payload = TransferPayload {\n            asset_id: NATIVE_ASSET_ID.clone(),\n            to:       tx_fee_inlet_address,\n            value:    1,\n        };\n\n        let res = self.asset.transfer_(&ctx, payload);\n        try_service_response!(res);\n        ServiceResponse::from_succeed(String::new())\n    }\n\n    #[hook_after]\n    fn 
handle_miner_profit(&mut self, params: &ExecutorParams) {\n        let info = self\n            .sdk\n            .get_value::<_, GovernanceInfo>(&INFO_KEY.to_owned());\n\n        let sender_address = self\n            .sdk\n            .get_value::<_, Address>(&MINER_PROFIT_OUTLET_KEY.to_owned());\n\n        if info.is_none() || sender_address.is_none() {\n            return;\n        }\n\n        let info = info.unwrap();\n        let sender_address = sender_address.unwrap();\n\n        let ctx_params = ServiceContextParams {\n            tx_hash:         None,\n            nonce:           None,\n            cycles_limit:    params.cycles_limit,\n            cycles_price:    1,\n            cycles_used:     Rc::new(RefCell::new(0)),\n            caller:          sender_address,\n            height:          params.height,\n            service_name:    String::new(),\n            service_method:  String::new(),\n            service_payload: String::new(),\n            extra:           Some(ADMISSION_TOKEN.clone()),\n            timestamp:       params.timestamp,\n            events:          Rc::new(RefCell::new(vec![])),\n        };\n\n        let recipient_addr = if let Some(addr) = self.miners.get(&params.proposer) {\n            addr\n        } else {\n            params.proposer.clone()\n        };\n\n        let payload = TransferPayload {\n            asset_id: NATIVE_ASSET_ID.clone(),\n            to:       recipient_addr,\n            value:    info.miner_benefit,\n        };\n\n        let _ = self\n            .asset\n            .transfer_(&ServiceContext::new(ctx_params), payload);\n    }\n}\n\n#[derive(Debug, Display, From)]\npub enum ServiceError {\n    NonAuthorized,\n\n    #[display(fmt = \"Can not get governance info\")]\n    MissingInfo,\n\n    #[display(fmt = \"calc overflow\")]\n    Overflow,\n\n    #[display(fmt = \"query balance failed\")]\n    QueryBalance,\n\n    #[display(fmt = \"Parsing payload to json failed {:?}\", _0)]\n    
JsonParse(serde_json::Error),\n}\n\nimpl ServiceError {\n    fn code(&self) -> u64 {\n        match self {\n            ServiceError::NonAuthorized => 101,\n            ServiceError::JsonParse(_) => 102,\n            ServiceError::MissingInfo => 103,\n            ServiceError::Overflow => 104,\n            ServiceError::QueryBalance => 105,\n        }\n    }\n}\n\nimpl<T: Default> From<ServiceError> for ServiceResponse<T> {\n    fn from(err: ServiceError) -> ServiceResponse<T> {\n        ServiceResponse::from_error(err.code(), err.to_string())\n    }\n}\n"
  },
  {
    "path": "benchmark/governance/types.rs",
    "content": "use std::cmp::Ordering;\n\nuse muta_codec_derive::RlpFixedCodec;\nuse serde::{Deserialize, Serialize};\n\nuse protocol::fixed_codec::{FixedCodec, FixedCodecError};\nuse protocol::types::{Address, Bytes};\nuse protocol::ProtocolResult;\n\n#[derive(RlpFixedCodec, Deserialize, Serialize, Clone, Debug)]\npub struct InitGenesisPayload {\n    pub info:                        GovernanceInfo,\n    pub tx_fee_inlet_address:        Address,\n    pub miner_profit_outlet_address: Address,\n    pub miner_charge_map:            Vec<MinerChargeConfig>,\n}\n\n#[derive(RlpFixedCodec, Deserialize, Serialize, Clone, Debug)]\npub struct MinerChargeConfig {\n    pub address:              Address,\n    pub miner_charge_address: Address,\n}\n\n#[derive(RlpFixedCodec, Deserialize, Serialize, Clone, Debug, Default)]\npub struct GovernanceInfo {\n    pub admin:                          Address,\n    pub tx_failure_fee:                 u64,\n    pub tx_floor_fee:                   u64,\n    pub profit_deduct_rate_per_million: u64,\n    pub tx_fee_discount:                Vec<DiscountLevel>,\n    pub miner_benefit:                  u64,\n}\n\n#[derive(RlpFixedCodec, Deserialize, Serialize, Clone, Debug, Default, PartialEq, Eq)]\npub struct DiscountLevel {\n    pub threshold:        u64,\n    pub discount_percent: u64,\n}\n\nimpl PartialOrd for DiscountLevel {\n    fn partial_cmp(&self, other: &DiscountLevel) -> Option<Ordering> {\n        self.threshold.partial_cmp(&other.threshold)\n    }\n}\n\nimpl Ord for DiscountLevel {\n    fn cmp(&self, other: &DiscountLevel) -> Ordering {\n        self.threshold.cmp(&other.threshold)\n    }\n}\n\n#[derive(Deserialize, Serialize, Clone, Debug)]\npub struct RecordProfitEvent {\n    pub owner:  Address,\n    pub amount: u64,\n}\n\n#[derive(Deserialize, Serialize, Clone, Debug)]\npub struct AccumulateProfitPayload {\n    pub address:            Address,\n    pub accumulated_profit: u64,\n}\n\n#[derive(Deserialize, Serialize, Clone, 
Debug)]\npub struct HookTransferFromPayload {\n    pub sender:    Address,\n    pub recipient: Address,\n    pub value:     u64,\n    pub memo:      String,\n}\n"
  },
  {
    "path": "benchmark/mod.rs",
    "content": "#![allow(clippy::needless_collect)]\n#![feature(test)]\nextern crate test;\n\nuse std::str::FromStr;\nuse std::sync::Arc;\n\nuse common_crypto::{Crypto, Secp256k1, Signature};\nuse core_mempool::DefaultMemPoolAdapter;\nuse core_network::{NetworkConfig, NetworkService, NetworkServiceHandle};\nuse core_storage::{adapter::rocks::RocksAdapter, ImplStorage};\nuse framework::binding::state::RocksTrieDB;\nuse framework::executor::{ServiceExecutor, ServiceExecutorFactory};\nuse protocol::fixed_codec::FixedCodec;\nuse protocol::traits::{\n    CommonStorage, Context, Executor, ExecutorParams, SDKFactory, Service, ServiceMapping,\n    ServiceSDK, Storage,\n};\nuse protocol::types::{\n    Address, Block, BlockHeader, Bytes, Genesis, Hash, Hex, MerkleRoot, Proof, RawTransaction,\n    SignedTransaction, TransactionRequest,\n};\nuse protocol::ProtocolResult;\nuse test::Bencher;\n\nuse asset::AssetService;\nuse governance::GovernanceService;\nuse multi_signature::MultiSignatureService;\n\nconst TRIE_PATH: &str = \"./free-space/state\";\nconst STORAGE_PATH: &str = \"./free-space/block\";\n\nlazy_static::lazy_static! 
{\n    pub static ref FEE_ACCOUNT: Address = Address::from_str(\"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\").unwrap();\n    pub static ref FEE_INLET_ACCOUNT: Address = Address::from_str(\"muta15a8a9ksxe3hhjpw3l7wz7ry778qg8h9wz8y35p\").unwrap();\n    pub static ref PROPOSER_ACCOUNT: Address = Address::from_str(\"muta1h99h6f54vytatam3ckftrmvcdpn4jlmnwm6hl0\").unwrap();\n    pub static ref NATIVE_ASSET_ID: Hash = Hash::from_hex(\"0xf56924db538e77bb5951eb5ff0d02b88983c49c45eea30e8ae3e7234b311436c\").unwrap();\n    pub static ref PRIV_KEY: Bytes = Hex::from_string(\"0x5ec982173d54d830b6789cbbbe43eaa2853a5ff752d1ebc1b266cf9790314f8a\".to_string()).unwrap().decode();\n    pub static ref PUB_KEY: Bytes = Hex::from_string(\n        \"0x02ef0cb0d7bc6c18b4bea1f5908d9106522b35ab3c399369605d4242525bda7e60\".to_string(),\n    )\n    .unwrap()\n    .decode();\n}\n\nmacro_rules! exec {\n    ($adapter: expr, $payloads: expr) => {{\n        let stxs = $payloads.into_iter().map(construct_stx).collect::<Vec<_>>();\n\n        let mut executor = $adapter.create_executor();\n        let params = $adapter.create_params();\n\n        executor.exec(Context::new(), &params, &stxs).unwrap();\n        $adapter.next_height();\n    }};\n}\n\nmacro_rules! 
perf_exec {\n    ($adapter: expr, $payloads: expr, $bencher: expr) => {{\n        let stxs = $payloads.into_iter().map(construct_stx).collect::<Vec<_>>();\n\n        let mut executor = $adapter.create_executor();\n        let params = $adapter.create_params();\n\n        $bencher.iter(|| {\n            let txs = stxs.clone();\n            executor.exec(Context::new(), &params, &txs).unwrap();\n        });\n    }};\n}\n\nmod bench_executor;\nmod bench_mempool;\n// This is a test service that provides transaction hooks.\nmod governance;\n\npub struct BenchmarkAdapter {\n    trie_db:    Arc<RocksTrieDB>,\n    storage:    Arc<ImplStorage<RocksAdapter>>,\n    height:     u64,\n    timestamp:  u64,\n    state_root: MerkleRoot,\n}\n\nimpl Default for BenchmarkAdapter {\n    fn default() -> Self {\n        BenchmarkAdapter::new()\n    }\n}\n\nimpl BenchmarkAdapter {\n    pub fn new() -> Self {\n        let mut rt = tokio::runtime::Builder::new()\n            .core_threads(4)\n            .build()\n            .unwrap();\n        let rocks_adapter = Arc::new(RocksAdapter::new(STORAGE_PATH, 1024).unwrap());\n        let toml_str = include_str!(\"./benchmark_genesis.toml\");\n        let genesis: Genesis = toml::from_str(toml_str).unwrap();\n\n        let mut ret = BenchmarkAdapter {\n            trie_db:    Arc::new(RocksTrieDB::new(TRIE_PATH, false, 1024, 2000).unwrap()),\n            storage:    Arc::new(ImplStorage::new(Arc::clone(&rocks_adapter))),\n            height:     1,\n            timestamp:  1,\n            state_root: Hash::default(),\n        };\n\n        let root = ServiceExecutor::create_genesis(\n            genesis.services,\n            Arc::clone(&ret.trie_db),\n            Arc::clone(&ret.storage),\n            Arc::new(MockServiceMapping {}),\n        )\n        .unwrap();\n\n        let genesis_block = BenchmarkAdapter::create_genesis_block(root.clone());\n\n        rt.block_on(async {\n            ret.storage\n                
.update_latest_proof(Context::new(), genesis_block.header.proof.clone())\n                .await\n                .expect(\"save proof\");\n            ret.storage\n                .insert_block(Context::new(), genesis_block)\n                .await\n                .expect(\"save genesis\");\n        });\n\n        ret.state_root = root;\n        ret\n    }\n\n    pub fn create_executor(\n        &mut self,\n    ) -> ServiceExecutor<ImplStorage<RocksAdapter>, RocksTrieDB, MockServiceMapping> {\n        ServiceExecutor::with_root(\n            self.state_root.clone(),\n            Arc::clone(&self.trie_db),\n            Arc::clone(&self.storage),\n            Arc::new(MockServiceMapping {}),\n        )\n        .unwrap()\n    }\n\n    pub fn create_params(&mut self) -> ExecutorParams {\n        ExecutorParams {\n            state_root:   self.state_root.clone(),\n            height:       self.height,\n            timestamp:    self.timestamp,\n            cycles_limit: u64::max_value(),\n            proposer:     PROPOSER_ACCOUNT.clone(),\n        }\n    }\n\n    pub fn create_mempool_adapter(\n        &mut self,\n    ) -> DefaultMemPoolAdapter<\n        ServiceExecutorFactory,\n        Secp256k1,\n        NetworkServiceHandle,\n        ImplStorage<RocksAdapter>,\n        RocksTrieDB,\n        MockServiceMapping,\n    > {\n        DefaultMemPoolAdapter::new(\n            NetworkService::new(NetworkConfig::new()).handle(),\n            Arc::clone(&self.storage),\n            Arc::clone(&self.trie_db),\n            Arc::new(MockServiceMapping {}),\n            3000,\n            100,\n        )\n    }\n\n    pub fn next_height(&mut self) {\n        self.height += 1;\n        self.timestamp += 2;\n    }\n\n    fn create_genesis_block(state_root: MerkleRoot) -> Block {\n        let genesis_block_header = BlockHeader {\n            chain_id: Hash::default(),\n            height: 0,\n            exec_height: 0,\n            prev_hash: Hash::from_empty(),\n            
timestamp: 0,\n            order_root: Hash::from_empty(),\n            order_signed_transactions_hash: Hash::from_empty(),\n            confirm_root: vec![],\n            state_root,\n            receipt_root: vec![],\n            cycles_used: vec![],\n            proposer: PROPOSER_ACCOUNT.clone(),\n            proof: Proof {\n                height:     0,\n                round:      0,\n                block_hash: Hash::from_empty(),\n                signature:  Bytes::new(),\n                bitmap:     Bytes::new(),\n            },\n            validator_version: 0,\n            validators: vec![],\n        };\n\n        Block {\n            header:            genesis_block_header,\n            ordered_tx_hashes: vec![],\n        }\n    }\n}\n\npub fn construct_stx(req: TransactionRequest) -> SignedTransaction {\n    let raw_tx = RawTransaction {\n        chain_id:     Hash::default(),\n        nonce:        Hash::from_empty(),\n        timeout:      300,\n        cycles_price: 1,\n        cycles_limit: u64::max_value(),\n        request:      req,\n        sender:       FEE_ACCOUNT.clone(),\n    };\n\n    let hash = Hash::digest(raw_tx.encode_fixed().unwrap());\n    let sig = Secp256k1::sign_message(&hash.as_bytes(), &PRIV_KEY).unwrap();\n\n    SignedTransaction {\n        raw:       raw_tx,\n        tx_hash:   hash,\n        pubkey:    Bytes::from(rlp::encode_list::<Vec<u8>, _>(&[PUB_KEY.clone().to_vec()])),\n        signature: Bytes::from(rlp::encode_list::<Vec<u8>, _>(&[sig.to_bytes().to_vec()])),\n    }\n}\n\npub struct MockServiceMapping;\n\nimpl ServiceMapping for MockServiceMapping {\n    fn get_service<SDK: 'static + ServiceSDK, Factory: SDKFactory<SDK>>(\n        &self,\n        name: &str,\n        factory: &Factory,\n    ) -> ProtocolResult<Box<dyn Service>> {\n        let asset_sdk = factory.get_sdk(\"asset\")?;\n        let governance_sdk = factory.get_sdk(\"governance\")?;\n        let multi_sig_sdk = factory.get_sdk(\"multi_signature\")?;\n\n 
       let service = match name {\n            \"asset\" => Box::new(AssetService::new(asset_sdk)) as Box<dyn Service>,\n\n            \"governance\" => Box::new(GovernanceService::new(\n                governance_sdk,\n                AssetService::new(asset_sdk),\n            )) as Box<dyn Service>,\n\n            \"multi_signature\" => {\n                Box::new(MultiSignatureService::new(multi_sig_sdk)) as Box<dyn Service>\n            }\n\n            _ => panic!(\"service {:?} not found\", name),\n        };\n\n        Ok(service)\n    }\n\n    fn list_service_name(&self) -> Vec<String> {\n        vec![\n            \"asset\".to_owned(),\n            \"governance\".to_owned(),\n            \"multi_signature\".to_owned(),\n        ]\n    }\n}\n"
  },
  {
    "path": "binding-macro/Cargo.toml",
    "content": "[package]\nname = \"binding-macro\"\nversion = \"0.2.1\"\nauthors = [\"Muta Dev <muta@nervos.org>\"]\nedition = \"2018\"\nrepository = \"https://github.com/nervosnetwork/muta\"\n\n# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html\n\n[lib]\nproc-macro = true\ndoctest = false\n\n[dependencies]\nprotocol = { path = \"../protocol\", package = \"muta-protocol\" }\n\nsyn = { version = \"1.0\", features = [\"full\"] }\nproc-macro2 = \"1.0\"\nquote = \"1.0\"\nderive_more = \"0.15\"\nserde_json = \"1.0\"\n\n[dev-dependencies]\nframework = { path = \"../framework\" }\nbytes = \"0.5\"\nserde = { version = \"1.0\", features = [\"derive\"] }\n"
  },
  {
    "path": "binding-macro/src/common.rs",
    "content": "use syn::{FnArg, Pat, Path, Type};\n\npub fn get_request_context_pat(bound_name: &str, fn_arg: &FnArg) -> Option<Pat> {\n    if let FnArg::Typed(pat_type) = &*fn_arg {\n        if let Type::Path(type_path) = &*pat_type.ty {\n            if path_is_request_context(&type_path.path, &bound_name) {\n                return Some(*pat_type.pat.clone());\n            }\n        }\n    }\n\n    None\n}\n\nfn path_is_request_context(path: &Path, bound_name: &str) -> bool {\n    // ::<a>::<b>\n    if path.leading_colon.is_some() {\n        return false;\n    }\n\n    // RequestContext\n    path.segments.len() == 1 && path.segments[0].ident == bound_name\n}\n\npub fn assert_type(ty: &Type, ty_str: &str) {\n    match ty {\n        Type::Path(ty_path) => {\n            let path = &ty_path.path;\n            assert!(path.leading_colon.is_none());\n            assert_eq!(path.segments.len(), 1);\n            assert_eq!(path.segments[0].ident, ty_str)\n        }\n        _ => panic!(\"assert type failed\"),\n    }\n}\n\npub fn assert_reference_type(ty: &Type, ty_str: &str) {\n    match ty {\n        Type::Reference(ref_ty) => {\n            let ty_ref = &ref_ty.elem.as_ref();\n            assert_type(ty_ref, ty_str)\n        }\n        _ => panic!(\"assert reference type failed\"),\n    }\n}\n\n// expect &mut self\npub fn arg_is_mutable_receiver(fn_arg: &FnArg) -> bool {\n    match fn_arg {\n        FnArg::Receiver(receiver) => receiver.reference.is_some() && receiver.mutability.is_some(),\n        _ => false,\n    }\n}\n\n// expect &self\npub fn arg_is_immutable_receiver(fn_arg: &FnArg) -> bool {\n    match fn_arg {\n        FnArg::Receiver(receiver) => receiver.reference.is_some() && receiver.mutability.is_none(),\n        _ => false,\n    }\n}\n"
  },
  {
    "path": "binding-macro/src/cycles.rs",
    "content": "use proc_macro::TokenStream;\nuse quote::quote;\nuse syn::parse::{Parse, ParseStream, Result};\nuse syn::punctuated::Punctuated;\nuse syn::{\n    parse_macro_input, Block, FnArg, Generics, Ident, ImplItemMethod, ItemFn, LitInt, Pat,\n    ReturnType, Token, Visibility,\n};\n\nuse crate::common::get_request_context_pat;\n\n#[derive(Debug)]\nstruct Cycles {\n    value: u64,\n}\n\nimpl Parse for Cycles {\n    fn parse(input: ParseStream) -> Result<Self> {\n        let lit: LitInt = input.parse()?;\n        let value = lit.base10_parse::<u64>()?;\n        Ok(Self { value })\n    }\n}\n\nstruct CyclesFnItem {\n    pub func_name: Ident,\n    pub func_vis:  Visibility,\n    pub inputs:    Punctuated<FnArg, Token![,]>,\n    pub ret:       ReturnType,\n    pub body:      Block,\n    pub generics:  Generics,\n}\n\nimpl Parse for CyclesFnItem {\n    fn parse(input: ParseStream) -> Result<Self> {\n        match input.parse::<ImplItemMethod>() {\n            Ok(method_item) => Ok(CyclesFnItem {\n                func_name: method_item.sig.ident.clone(),\n                func_vis:  method_item.vis.clone(),\n                inputs:    method_item.sig.inputs.clone(),\n                ret:       method_item.sig.output.clone(),\n                body:      method_item.block.clone(),\n                generics:  method_item.sig.generics,\n            }),\n            Err(_) => {\n                let item = input.parse::<ItemFn>()?;\n                Ok(CyclesFnItem {\n                    func_name: item.sig.ident.clone(),\n                    func_vis:  item.vis.clone(),\n                    inputs:    item.sig.inputs.clone(),\n                    ret:       item.sig.output.clone(),\n                    body:      *item.block.clone(),\n                    generics:  item.sig.generics,\n                })\n            }\n        }\n    }\n}\n\npub fn gen_cycles_code(attr: TokenStream, item: TokenStream) -> TokenStream {\n    let cycles = parse_macro_input!(attr as 
Cycles);\n    let fn_item = parse_macro_input!(item as CyclesFnItem);\n\n    let func_name = &fn_item.func_name;\n    let func_vis = &fn_item.func_vis;\n    let inputs = &fn_item.inputs;\n    let ret = &fn_item.ret;\n    let body = &fn_item.body;\n    let generics = &fn_item.generics;\n\n    let request_pat = find_request_ident(\"ServiceContext\", inputs)\n        .expect(\"The first parameter to read/write must be ServiceContext\");\n\n    // Extract the variable name of the RequestContext.\n    let request_ident = match request_pat {\n        Pat::Ident(pat_ident) => pat_ident.ident,\n        _ => panic!(\"Make sure the RequestContext declaration is ctx: ServiceContext.\"),\n    };\n\n    let cycles_value = cycles.value;\n\n    TokenStream::from(quote! {\n        #func_vis fn #func_name#generics(#inputs) #ret {\n            if !#request_ident.sub_cycles(#cycles_value) {\n                return ServiceResponse::<_>::from_error(3, \"cycles macro consume cycles failed: out of cycles\".to_owned());\n            }\n            #body\n        }\n    })\n}\n\nfn find_request_ident(bound_name: &str, inputs: &Punctuated<FnArg, Token![,]>) -> Option<Pat> {\n    for fn_arg in inputs {\n        let opt_request_pat = get_request_context_pat(bound_name, &fn_arg);\n        if opt_request_pat.is_some() {\n            return opt_request_pat;\n        }\n    }\n\n    None\n}\n"
  },
  {
    "path": "binding-macro/src/hooks.rs",
    "content": "use proc_macro::TokenStream;\nuse quote::quote;\nuse syn::{parse_macro_input, FnArg, ImplItemMethod};\n\nuse crate::common::{arg_is_mutable_receiver, assert_reference_type};\n\npub fn verify_hook(item: TokenStream) -> TokenStream {\n    let method_item = parse_macro_input!(item as ImplItemMethod);\n\n    let inputs = &method_item.sig.inputs;\n    assert_eq!(inputs.len(), 2);\n\n    assert!(arg_is_mutable_receiver(&inputs[0]));\n\n    match &inputs[1] {\n        FnArg::Typed(pt) => {\n            let ty = pt.ty.as_ref();\n            assert_reference_type(ty, \"ExecutorParams\")\n        }\n        _ => panic!(\"The second parameter type should be `&ExecutorParams`.\"),\n    }\n\n    TokenStream::from(quote! {#method_item})\n}\n"
  },
  {
    "path": "binding-macro/src/lib.rs",
    "content": "extern crate proc_macro;\n\nmod common;\nmod cycles;\nmod hooks;\nmod read_write;\nmod service;\n\nuse proc_macro::TokenStream;\n\nuse crate::cycles::gen_cycles_code;\nuse crate::hooks::verify_hook;\nuse crate::read_write::verify_read_or_write;\nuse crate::service::gen_service_code;\n\n#[rustfmt::skip]\n/// `#[genesis]` marks a service method to generate genesis states when the chain starts up\n///\n/// Method input params should be `(&mut self)` or `(&mut self, payload: PayloadType)`\n///\n/// # Example:\n///\n/// ```rust\n/// struct Service;\n/// #[service]\n/// impl Service {\n///     #[genesis]\n///     fn init_genesis(\n///         &mut self,\n///     ) {\n///         do_work();\n///     }\n/// }\n/// ```\n///\n/// Or\n///\n/// ```rust\n/// struct Service;\n/// #[service]\n/// impl Service {\n///     #[genesis]\n///     fn init_genesis(\n///         &mut self,\n///         payload: PayloadType,\n///     ) {\n///         do_work(payload);\n///     }\n/// }\n/// ```\n#[proc_macro_attribute]\npub fn genesis(_: TokenStream, item: TokenStream) -> TokenStream {\n    item\n}\n\n#[proc_macro_attribute]\npub fn tx_hook_before(_: TokenStream, item: TokenStream) -> TokenStream {\n    item\n}\n\n#[proc_macro_attribute]\npub fn tx_hook_after(_: TokenStream, item: TokenStream) -> TokenStream {\n    item\n}\n\n#[rustfmt::skip]\n/// `#[read]` marks a service method as readable.\n///\n/// Methods marked with this macro will have:\n/// - Accessibility\n///  Methods with this macro allow access (readable) from outside (RPC or other services).\n///\n/// - Verification\n///  1. Is it a struct method marked with #[service]?\n///  2. Is visibility private?\n///  3. Parameter signature contains `&self and ctx: ServiceContext`?\n///  4. 
Is the return value `ServiceResponse<T>`?\n///\n/// # Example:\n///\n/// ```rust\n/// struct Service;\n/// #[service]\n/// impl Service {\n///     #[read]\n///     fn test_read_fn(\n///         &self,\n///         _ctx: ServiceContext,\n///     ) -> ServiceResponse<String> {\n///         ServiceResponse::<String>::from_succeed(\"ok\".to_owned())\n///     }\n/// }\n/// ```\n#[proc_macro_attribute]\npub fn read(_: TokenStream, item: TokenStream) -> TokenStream {\n    verify_read_or_write(item, false)\n}\n\n#[rustfmt::skip]\n/// `#[write]` marks a service method as writable.\n///\n/// Methods marked with this macro will have:\n/// - Accessibility\n///  Methods with this macro allow access (writeable) from outside (RPC or other services).\n///\n/// - Verification\n///  1. Is it a struct method marked with #[service]?\n///  2. Is visibility private?\n///  3. Parameter signature contains `&mut self and ctx: ServiceContext`?\n///  4. Is the return value `ServiceResponse<T>`?\n///\n/// # Example:\n///\n/// ```rust\n/// struct Service;\n/// #[service]\n/// impl Service {\n///     #[write]\n///     fn test_write_fn(\n///         &mut self,\n///         _ctx: ServiceContext,\n///     ) -> ServiceResponse<String> {\n///         ServiceResponse::<String>::from_succeed(\"ok\".to_owned())\n///     }\n/// }\n/// ```\n#[proc_macro_attribute]\npub fn write(_: TokenStream, item: TokenStream) -> TokenStream {\n    verify_read_or_write(item, true)\n}\n\n#[rustfmt::skip]\n/// `#[cycles]` marks an `ImplFn` or `fn`; it will automatically generate code\n/// to perform the cycles deduction.\n///\n/// ```rust\n/// // Source Code\n/// impl Tests {\n///     #[cycles(100)]\n///     fn test_cycles(&self, ctx: ServiceContext) -> ServiceResponse<()> {\n///         ServiceResponse::<()>::from_succeed(())\n///     }\n/// }\n///\n/// // Generated code.\n/// impl Tests {\n///     fn test_cycles(&self, ctx: ServiceContext) -> ServiceResponse<()> {\n///         if !ctx.sub_cycles(100) {\n///             return ServiceResponse::<_>::from_error(3, \"cycles macro consume cycles failed: out of cycles\".to_owned());\n///         }\n///         
ServiceResponse::<()>::from_succeed(())\n///     }\n/// }\n/// ```\n#[proc_macro_attribute]\npub fn cycles(attr: TokenStream, item: TokenStream) -> TokenStream {\n    gen_cycles_code(attr, item)\n}\n\n/// Marks a method so that it executes after the entire block executes.\n// TODO(@yejiayu): Verify the function signature.\n#[proc_macro_attribute]\npub fn hook_after(_: TokenStream, item: TokenStream) -> TokenStream {\n    verify_hook(item)\n}\n\n/// Marks a method so that it executes before the entire block executes.\n// TODO(@yejiayu): Verify the function signature.\n#[proc_macro_attribute]\npub fn hook_before(_: TokenStream, item: TokenStream) -> TokenStream {\n    verify_hook(item)\n}\n\n#[rustfmt::skip]\n/// Marking an `ItemImpl` as a service will automatically implement the trait\n/// `protocol::traits::Service` for it.\n///\n/// # Example\n///\n/// ```rust\n/// // Source code\n///\n/// use serde::{Deserialize, Serialize};\n/// use protocol::traits::ServiceSDK;\n/// use protocol::types::ServiceContext;\n/// use protocol::ProtocolResult;\n///\n/// // serde::Deserialize and serde::Serialize are required.\n/// #[derive(Serialize, Deserialize)]\n/// struct CreateKittyPayload {\n///     // fields\n/// }\n///\n/// // serde::Deserialize and serde::Serialize are required.\n/// #[derive(Serialize, Deserialize)]\n/// struct GetKittyPayload {\n///     // fields\n/// }\n///\n/// #[service]\n/// impl<SDK: ServiceSDK> KittyService<SDK> {\n///     #[hook_before]\n///     fn custom_hook_before(&mut self, params: &ExecutorParams) {\n///         // Do something\n///     }\n///\n///     #[hook_after]\n///     fn custom_hook_after(&mut self, params: &ExecutorParams) {\n///         // Do something\n///     }\n///\n///     #[read]\n///     fn get_kitty(\n///         &self,\n///         ctx: ServiceContext,\n///         payload: GetKittyPayload,\n///     ) -> ServiceResponse<String> {\n///         // Do something\n///     }\n///\n///     #[write]\n///     fn create_kitty(\n///         &mut self,\n///         ctx: ServiceContext,\n///    
     payload: CreateKittyPayload,\n///     ) -> ServiceResponse<String> {\n///         // Do something\n///     }\n/// }\n///\n/// // Generated code.\n/// impl<SDK: ServiceSDK> Service for KittyService<SDK> {\n///     fn hook_before_(&mut self, params: &ExecutorParams) {\n///         self.custom_hook_before(params)\n///     }\n///\n///     fn hook_after_(&mut self, params: &ExecutorParams) {\n///         self.custom_hook_after(params)\n///     }\n///\n///     fn write_(&mut self, ctx: ServiceContext) -> ServiceResponse<String> {\n///         let service = ctx.get_service_name();\n///         let method = ctx.get_service_method();\n///\n///         match method {\n///             \"create_kitty\" => {\n///                 let payload_res: Result<CreateKittyPayload, _> = serde_json::from_str(ctx.get_payload());\n///                 if payload_res.is_err() {\n///                      return ServiceResponse::<String>::from_error(1, \"service macro decode payload failed\".to_owned());\n///                 };\n///                 let payload = payload_res.unwrap();\n///                 let res = self.create_kitty(ctx, payload);\n///                 if !res.is_error() {\n///                     let mut data_json = serde_json::to_string(&res.succeed_data).unwrap_or_else(|e| panic!(\"service macro encode payload failed: {:?}\", e));\n///                     if data_json == \"null\" {\n///                         data_json = \"\".to_owned();\n///                     }\n///                     ServiceResponse::<String>::from_succeed(data_json)\n///                 } else {\n///                     ServiceResponse::<String>::from_error(res.code, res.error_message.clone())\n///                 }\n///             }\n///             _ => panic!(\"service macro not found method:{:?} of service:{:?}\", method, service),\n///         }\n///     }\n///\n///     fn read_(&self, ctx: ServiceContext) -> ServiceResponse<String> {\n///         let service = ctx.get_service_name();\n///         let method = ctx.get_service_method();\n///\n///         match method {\n///             \"get_kitty\" => {\n///                 let 
payload_res: Result<GetKittyPayload, _> = serde_json::from_str(ctx.get_payload());\n///                 if payload_res.is_err() {\n///                      return ServiceResponse::<String>::from_error(1, \"service macro decode payload failed\".to_owned());\n///                 };\n///                 let payload = payload_res.unwrap();\n///                 let res = self.get_kitty(ctx, payload);\n///                 if !res.is_error() {\n///                     let mut data_json = serde_json::to_string(&res.succeed_data).unwrap_or_else(|e| panic!(\"service macro encode payload failed: {:?}\", e));\n///                     if data_json == \"null\" {\n///                         data_json = \"\".to_owned();\n///                     }\n///                     ServiceResponse::<String>::from_succeed(data_json)\n///                 } else {\n///                     ServiceResponse::<String>::from_error(res.code, res.error_message.clone())\n///                 }\n///             }\n///             _ => panic!(\"service macro not found method:{:?} of service:{:?}\", method, service),\n///         }\n///     }\n/// }\n/// ```\n#[proc_macro_attribute]\npub fn service(attr: TokenStream, item: TokenStream) -> TokenStream {\n    gen_service_code(attr, item)\n}\n"
  },
  {
    "path": "binding-macro/src/read_write.rs",
    "content": "use proc_macro::TokenStream;\nuse quote::quote;\nuse syn::punctuated::Punctuated;\nuse syn::{parse_macro_input, FnArg, ImplItemMethod, ReturnType, Token, Visibility};\n\nuse crate::common::{arg_is_immutable_receiver, arg_is_mutable_receiver, assert_type};\n\npub fn verify_read_or_write(item: TokenStream, mutable: bool) -> TokenStream {\n    let method_item = parse_macro_input!(item as ImplItemMethod);\n\n    let visibility = &method_item.vis;\n    let inputs = &method_item.sig.inputs;\n    let ret_type = &method_item.sig.output;\n\n    verify_visibility(visibility);\n\n    verify_inputs(inputs, mutable);\n\n    verify_ret_type(ret_type);\n\n    TokenStream::from(quote! {#method_item})\n}\n\nfn verify_visibility(visibility: &Visibility) {\n    match visibility {\n        Visibility::Inherited => {}\n        _ => panic!(\"The visibility of read/write method must be private\"),\n    };\n}\n\nfn verify_inputs(inputs: &Punctuated<FnArg, Token![,]>, mutable: bool) {\n    if inputs.len() < 2 || inputs.len() > 3 {\n        panic!(\"The input parameters should be `(&self/&mut self, ctx: ServiceContext)` or `(&self/&mut self, ctx: ServiceContext, payload: PayloadType)`\")\n    }\n\n    if mutable {\n        if !arg_is_mutable_receiver(&inputs[0]) {\n            panic!(\"The receiver must be `&mut self`.\")\n        }\n    } else if !arg_is_immutable_receiver(&inputs[0]) {\n        panic!(\"The receiver must be `&self`.\")\n    }\n\n    match &inputs[1] {\n        FnArg::Typed(pt) => {\n            let ty = pt.ty.as_ref();\n            assert_type(ty, \"ServiceContext\")\n        }\n        _ => panic!(\"The second parameter type should be `ServiceContext`.\"),\n    }\n}\n\nfn verify_ret_type(ret_type: &ReturnType) {\n    let real_ret_type = match ret_type {\n        ReturnType::Type(_, t) => t.as_ref(),\n        _ => panic!(\"The return type of read/write method must be ServiceResponse\"),\n    };\n\n    assert_type(real_ret_type, 
\"ServiceResponse\");\n}\n"
  },
  {
    "path": "binding-macro/src/service.rs",
    "content": "use proc_macro::TokenStream;\nuse quote::quote;\nuse syn::{parse_macro_input, FnArg, Ident, ImplItem, ImplItemMethod, ItemImpl, Type};\n\nconst READ_ATTRIBUTE: &str = \"read\";\nconst WRITE_ATTRIBUTE: &str = \"write\";\nconst GENESIS_ATTRIBUTE: &str = \"genesis\";\nconst HOOK_BEFORE_ATTRIBUTE: &str = \"hook_before\";\nconst HOOK_AFTER_ATTRIBUTE: &str = \"hook_after\";\nconst TX_HOOK_BEFORE_ATTRIBUTE: &str = \"tx_hook_before\";\nconst TX_HOOK_AFTER_ATTRIBUTE: &str = \"tx_hook_after\";\n\nenum ServiceMethod {\n    Read(ImplItemMethod),\n    Write(ImplItemMethod),\n}\n\nstruct Hooks {\n    before:    Option<Ident>,\n    after:     Option<Ident>,\n    tx_before: Option<Ident>,\n    tx_after:  Option<Ident>,\n}\n\nstruct MethodMeta {\n    method_ident:  Ident,\n    payload_ident: Option<Ident>,\n    readonly:      bool,\n}\n\npub fn gen_service_code(_: TokenStream, item: TokenStream) -> TokenStream {\n    let impl_item = parse_macro_input!(item as ItemImpl);\n\n    let service_ident = get_service_ident(&impl_item);\n    let items = &impl_item.items;\n    let (impl_generics, ty_generics, where_clause) = impl_item.generics.split_for_impl();\n\n    let mut methods: Vec<ServiceMethod> = vec![];\n\n    for item in items {\n        if let ImplItem::Method(method) = item {\n            if let Some(service_method) = find_service_method(method) {\n                methods.push(service_method)\n            }\n        }\n    }\n\n    let genesis_method = find_genesis(items);\n    let genesis_body = match genesis_method {\n        Some(genesis_method) => get_genesis_body(&genesis_method),\n        None => quote! {()},\n    };\n\n    let hooks = extract_hooks(items);\n    let hook_before = &hooks.before;\n    let hook_before_body = match hook_before {\n        Some(hook_before) => quote! { self.#hook_before(_params) },\n        None => quote! 
{()},\n    };\n    let hook_after = &hooks.after;\n    let hook_after_body = match hook_after {\n        Some(hook_after) => quote! { self.#hook_after(_params) },\n        None => quote! {()},\n    };\n    let tx_hook_before = &hooks.tx_before;\n    let tx_hook_before_body = match tx_hook_before {\n        Some(tx_hook_before) => quote! {\n            let res = self.#tx_hook_before(_ctx);\n            if !res.is_error() {\n                let mut data_json = serde_json::to_string(&res.succeed_data).unwrap_or_else(|e| panic!(\"encode succeed_data of ServiceResponse failed: {:?}\", e));\n                if data_json == \"null\" {\n                    data_json = \"\".to_owned();\n                }\n                ServiceResponse::<String>::from_succeed(data_json)\n            } else {\n                ServiceResponse::<String>::from_error(res.code, res.error_message.clone())\n            }\n        },\n        None => quote! {ServiceResponse::<String>::from_succeed(\"\".to_owned())},\n    };\n    let tx_hook_after = &hooks.tx_after;\n    let tx_hook_after_body = match tx_hook_after {\n        Some(tx_hook_after) => {\n            quote! {\n                let res = self.#tx_hook_after(_ctx);\n                if !res.is_error() {\n                    let mut data_json = serde_json::to_string(&res.succeed_data).unwrap_or_else(|e| panic!(\"encode succeed_data of ServiceResponse failed: {:?}\", e));\n                    if data_json == \"null\" {\n                        data_json = \"\".to_owned();\n                    }\n                    ServiceResponse::<String>::from_succeed(data_json)\n                } else {\n                    ServiceResponse::<String>::from_error(res.code, res.error_message.clone())\n                }\n            }\n        }\n        None => quote! 
{ServiceResponse::<String>::from_succeed(\"\".to_owned())},\n    };\n\n    let list_method_meta: Vec<MethodMeta> = methods.into_iter().map(extract_method_meta).collect();\n\n    let (list_read_name, list_read_ident, list_read_payload) =\n        split_list_for_metadata(&list_method_meta, true);\n    let (list_write_name, list_write_ident, list_write_payload) =\n        split_list_for_metadata(&list_method_meta, false);\n\n    let (list_read_name_nonepayload, list_read_ident_nonepayload) =\n        split_list_for_metadata_nonepayload(&list_method_meta, true);\n    let (list_write_name_nonepayload, list_write_ident_nonepayload) =\n        split_list_for_metadata_nonepayload(&list_method_meta, false);\n\n    TokenStream::from(quote! {\n        impl #impl_generics protocol::traits::Service for #service_ident #ty_generics #where_clause {\n            fn genesis_(&mut self, _payload: String) {\n                #genesis_body\n            }\n\n            fn hook_before_(&mut self, _params: &ExecutorParams) {\n                #hook_before_body\n            }\n\n            fn hook_after_(&mut self, _params: &ExecutorParams) {\n                #hook_after_body\n            }\n\n            fn tx_hook_before_(&mut self, _ctx: ServiceContext) -> ServiceResponse<String> {\n                #tx_hook_before_body\n            }\n\n            fn tx_hook_after_(&mut self, _ctx: ServiceContext) -> ServiceResponse<String> {\n                 #tx_hook_after_body\n            }\n\n            fn read_(&self, ctx: protocol::types::ServiceContext) -> ServiceResponse<String> {\n                let service = ctx.get_service_name();\n                let method = ctx.get_service_method();\n\n                match method {\n                    #(#list_read_name => {\n                        let payload_res: Result<#list_read_payload, _> = serde_json::from_str(ctx.get_payload());\n                        if payload_res.is_err() {\n                            return 
ServiceResponse::<String>::from_error(1, \"decode service payload failed\".to_owned());\n                        };\n                        let payload = payload_res.unwrap();\n                        let res = self.#list_read_ident(ctx, payload);\n                        if !res.is_error() {\n                            let mut data_json = serde_json::to_string(&res.succeed_data).unwrap_or_else(|e| panic!(\"encode succeed_data of ServiceResponse failed: {:?}\", e));\n                            if data_json == \"null\" {\n                                data_json = \"\".to_owned();\n                            }\n                            ServiceResponse::<String>::from_succeed(data_json)\n                        } else {\n                            ServiceResponse::<String>::from_error(res.code, res.error_message.clone())\n                        }\n                    },)*\n                    #(#list_read_name_nonepayload => {\n                        let res = self.#list_read_ident_nonepayload(ctx);\n                        if !res.is_error() {\n                            let mut data_json = serde_json::to_string(&res.succeed_data).unwrap_or_else(|e| panic!(\"encode succeed_data of ServiceResponse failed: {:?}\", e));\n                            if data_json == \"null\" {\n                                data_json = \"\".to_owned();\n                            }\n                            ServiceResponse::<String>::from_succeed(data_json)\n                        } else {\n                            ServiceResponse::<String>::from_error(res.code, res.error_message.clone())\n                        }\n                    },)*\n                    _ => ServiceResponse::<String>::from_error(2, format!(\"method {:?} not found in service {:?}\", method, service))\n                }\n            }\n\n            fn write_(&mut self, ctx: protocol::types::ServiceContext) -> ServiceResponse<String> {\n                let service = ctx.get_service_name();\n   
             let method = ctx.get_service_method();\n\n                match method {\n                    #(#list_write_name => {\n                        let payload_res: Result<#list_write_payload, _> = serde_json::from_str(ctx.get_payload());\n                        if payload_res.is_err() {\n                            return ServiceResponse::<String>::from_error(1, \"decode service payload failed\".to_owned());\n                        };\n                        let payload = payload_res.unwrap();\n                        let res = self.#list_write_ident(ctx, payload);\n                        if !res.is_error() {\n                            let mut data_json = serde_json::to_string(&res.succeed_data).unwrap_or_else(|e| panic!(\"encode succeed_data of ServiceResponse failed: {:?}\", e));\n                            if data_json == \"null\" {\n                                data_json = \"\".to_owned();\n                            }\n                            ServiceResponse::<String>::from_succeed(data_json)\n                        } else {\n                            ServiceResponse::<String>::from_error(res.code, res.error_message.clone())\n                        }\n                    },)*\n                    #(#list_write_name_nonepayload => {\n                        let res = self.#list_write_ident_nonepayload(ctx);\n                        if !res.is_error() {\n                            let mut data_json = serde_json::to_string(&res.succeed_data).unwrap_or_else(|e| panic!(\"encode succeed_data of ServiceResponse failed: {:?}\", e));\n                            if data_json == \"null\" {\n                                data_json = \"\".to_owned();\n                            }\n                            ServiceResponse::<String>::from_succeed(data_json)\n                        } else {\n                            ServiceResponse::<String>::from_error(res.code, res.error_message.clone())\n                        }\n                    
},)*\n                    _ => ServiceResponse::<String>::from_error(2, format!(\"method {:?} not found in service {:?}\", method, service))\n                }\n            }\n        }\n\n        #impl_item\n    })\n}\n\nfn split_list_for_metadata(\n    list: &[MethodMeta],\n    readonly: bool,\n) -> (Vec<String>, Vec<Ident>, Vec<Ident>) {\n    let mut methods = vec![];\n    let mut method_idents = vec![];\n    let mut payload_idents = vec![];\n\n    list.iter()\n        .filter(|meta| meta.readonly == readonly && meta.payload_ident.is_some())\n        .for_each(|meta| {\n            methods.push(meta.method_ident.to_string());\n            method_idents.push(meta.method_ident.clone());\n            payload_idents.push(\n                meta.payload_ident\n                    .as_ref()\n                    .expect(\"MethodMeta should have payload ident\")\n                    .clone(),\n            );\n        });\n    (methods, method_idents, payload_idents)\n}\n\nfn split_list_for_metadata_nonepayload(\n    list: &[MethodMeta],\n    readonly: bool,\n) -> (Vec<String>, Vec<Ident>) {\n    let mut methods = vec![];\n    let mut method_idents = vec![];\n\n    list.iter()\n        .filter(|meta| meta.readonly == readonly && meta.payload_ident.is_none())\n        .for_each(|meta| {\n            methods.push(meta.method_ident.to_string());\n            method_idents.push(meta.method_ident.clone());\n        });\n    (methods, method_idents)\n}\n\nfn get_service_ident(impl_item: &ItemImpl) -> Ident {\n    match &*impl_item.self_ty {\n        Type::Path(type_path) => type_path.path.segments[0].ident.clone(),\n        _ => panic!(\"Service identifier not found.\"),\n    }\n}\n\nfn find_service_method(method: &ImplItemMethod) -> Option<ServiceMethod> {\n    let attrs = &method.attrs;\n\n    for attr in attrs {\n        for segment in &attr.path.segments {\n            if segment.ident == READ_ATTRIBUTE {\n                return 
Some(ServiceMethod::Read(method.clone()));\n            } else if segment.ident == WRITE_ATTRIBUTE {\n                return Some(ServiceMethod::Write(method.clone()));\n            }\n        }\n    }\n\n    None\n}\n\nfn find_genesis(items: &[ImplItem]) -> Option<ImplItemMethod> {\n    let methods: Vec<ImplItemMethod> = find_list_for_item_method(items);\n\n    let mut count = 0;\n    let mut genesis: Option<ImplItemMethod> = None;\n\n    for method in methods {\n        for attr in &method.attrs {\n            for segment in &attr.path.segments {\n                if segment.ident == GENESIS_ATTRIBUTE {\n                    if count == 0 {\n                        genesis = Some(method.clone());\n                        count = 1;\n                    } else {\n                        panic!(\"Only one #[genesis] method is allowed\")\n                    }\n                }\n            }\n        }\n    }\n\n    genesis\n}\n\nfn get_genesis_body(item: &ImplItemMethod) -> proc_macro2::TokenStream {\n    let method_name = item.sig.ident.clone();\n    match item.sig.inputs.len() {\n        1 => quote!{ self.#method_name()},\n        2 => {\n                let payload_arg = &item.sig.inputs[1];\n                let pat_type = match payload_arg {\n                    FnArg::Typed(pat_type) => pat_type,\n                    _ => unreachable!(),\n                };\n\n                let payload_ident = if let Type::Path(path) = &*pat_type.ty {\n                    Some(path.path.get_ident().expect(\"No payload type found.\").clone())\n                } else {\n                    panic!(\"No payload type found.\")\n                };\n\n                quote!{\n                    let payload: #payload_ident = serde_json::from_str(&_payload)\n                    .unwrap_or_else(|e| panic!(\"decode genesis payload failed: {:?}\", e));\n                    self.#method_name(payload)\n                }\n        },\n        _ => panic!(\"genesis method input params should 
be `(&mut self)` or `(&mut self, payload: PayloadType)`\")\n    }\n}\n\nfn extract_hooks(items: &[ImplItem]) -> Hooks {\n    let methods: Vec<ImplItemMethod> = find_list_for_item_method(items);\n\n    let mut hooks = Hooks {\n        before:    None,\n        after:     None,\n        tx_before: None,\n        tx_after:  None,\n    };\n\n    let mut before_count = 0;\n    let mut after_count = 0;\n    let mut tx_before_count = 0;\n    let mut tx_after_count = 0;\n\n    for method in methods {\n        for attr in &method.attrs {\n            for segment in &attr.path.segments {\n                if segment.ident == HOOK_BEFORE_ATTRIBUTE {\n                    if before_count == 0 {\n                        hooks.before = Some(method.sig.ident.clone());\n                        before_count = 1;\n                    } else {\n                        panic!(\"Only one #[hook_before] method is allowed\")\n                    }\n                } else if segment.ident == HOOK_AFTER_ATTRIBUTE {\n                    if after_count == 0 {\n                        hooks.after = Some(method.sig.ident.clone());\n                        after_count = 1;\n                    } else {\n                        panic!(\"Only one #[hook_after] method is allowed\")\n                    }\n                } else if segment.ident == TX_HOOK_BEFORE_ATTRIBUTE {\n                    if tx_before_count == 0 {\n                        hooks.tx_before = Some(method.sig.ident.clone());\n                        tx_before_count = 1;\n                    } else {\n                        panic!(\"Only one #[tx_hook_before] method is allowed\")\n                    }\n                } else if segment.ident == TX_HOOK_AFTER_ATTRIBUTE {\n                    if tx_after_count == 0 {\n                        hooks.tx_after = Some(method.sig.ident.clone());\n                        tx_after_count = 1;\n                    } else {\n                        panic!(\"Only one #[tx_hook_after] method is allowed\")\n                    
}\n                }\n            }\n        }\n    }\n\n    hooks\n}\n\nfn find_list_for_item_method(items: &[ImplItem]) -> Vec<ImplItemMethod> {\n    items\n        .iter()\n        .filter(|item| matches!(item, ImplItem::Method(_)))\n        .map(|item| {\n            if let ImplItem::Method(method) = item {\n                method.clone()\n            } else {\n                unreachable!()\n            }\n        })\n        .collect()\n}\n\nfn extract_method_meta(method: ServiceMethod) -> MethodMeta {\n    let (impl_method, readonly) = match method {\n        ServiceMethod::Read(impl_method) => (impl_method, true),\n        ServiceMethod::Write(impl_method) => (impl_method, false),\n    };\n\n    match &impl_method.sig.inputs.len() {\n        // Method input params: `(&self/&mut self, ctx: ServiceContext)`\n        2 => {\n            MethodMeta {\n                method_ident: impl_method.sig.ident,\n                payload_ident: None,\n                readonly,\n            }\n        },\n        // Method input params: `(&self/&mut self, ctx: ServiceContext, payload: PayloadType)`\n        3 => {\n            let payload_arg = &impl_method.sig.inputs[2];\n            let pat_type = match payload_arg {\n                FnArg::Typed(pat_type) => pat_type,\n                _ => unreachable!(),\n            };\n\n            let payload_ident = if let Type::Path(path) = &*pat_type.ty {\n                Some(path.path.get_ident().expect(\"No payload type found.\").clone())\n            } else {\n                panic!(\"No payload type found.\")\n            };\n\n            MethodMeta {\n                method_ident: impl_method.sig.ident,\n                payload_ident,\n                readonly,\n            }\n        },\n        _ => panic!(\"Method input params should be `(&self/&mut self, ctx: ServiceContext)` or `(&self/&mut self, ctx: ServiceContext, payload: PayloadType)`\")\n    }\n}\n"
  },
  {
    "path": "binding-macro/tests/mod.rs",
    "content": "#![allow(clippy::unit_cmp)]\n#[macro_use]\nextern crate binding_macro;\n\nuse std::cell::RefCell;\nuse std::panic::{self, AssertUnwindSafe};\nuse std::rc::Rc;\n\nuse serde::{Deserialize, Serialize};\n\nuse protocol::fixed_codec::FixedCodec;\nuse protocol::traits::{\n    ExecutorParams, Service, ServiceResponse, ServiceSDK, StoreArray, StoreBool, StoreMap,\n    StoreString, StoreUint64,\n};\nuse protocol::types::{\n    Address, Block, Hash, Receipt, ServiceContext, ServiceContextParams, SignedTransaction,\n};\n\n#[test]\nfn test_read_and_write() {\n    struct Tests;\n\n    #[service]\n    impl Tests {\n        #[read]\n        fn test_read_fn(&self, _ctx: ServiceContext) -> ServiceResponse<String> {\n            ServiceResponse::<String>::from_succeed(\"read\".to_owned())\n        }\n\n        #[write]\n        fn test_write_fn(&mut self, _ctx: ServiceContext) -> ServiceResponse<String> {\n            ServiceResponse::<String>::from_succeed(\"write\".to_owned())\n        }\n    }\n\n    let context = get_context(1000, \"\", \"\", \"\");\n\n    let mut t = Tests {};\n    assert_eq!(\n        t.test_read_fn(context.clone()).succeed_data,\n        \"read\".to_owned()\n    );\n    assert_eq!(t.test_write_fn(context).succeed_data, \"write\".to_owned());\n}\n\n#[test]\nfn test_hooks() {\n    struct Tests {\n        pub height: u64,\n    };\n\n    #[service]\n    impl Tests {\n        #[hook_after]\n        fn hook_after(&mut self, params: &ExecutorParams) {\n            self.height = params.height;\n        }\n\n        #[hook_before]\n        fn hook_before(&mut self, params: &ExecutorParams) {\n            self.height = params.height;\n        }\n    }\n\n    let mut t = Tests { height: 0 };\n    t.hook_after(&mock_executor_params());\n    assert_eq!(t.height, 9);\n    t.hook_before(&mock_executor_params());\n    assert_eq!(t.height, 9);\n}\n\n#[test]\nfn test_tx_hooks() {\n    struct Tests {\n        pub height: u64,\n    };\n\n    #[service]\n    impl 
Tests {\n        #[tx_hook_after]\n        fn tx_hook_after(&mut self, _ctx: ServiceContext) -> ServiceResponse<()> {\n            self.height = 9;\n            ServiceResponse::from_succeed(())\n        }\n\n        #[tx_hook_before]\n        fn tx_hook_before(&mut self, _ctx: ServiceContext) -> ServiceResponse<()> {\n            self.height = 10;\n            ServiceResponse::from_succeed(())\n        }\n    }\n\n    let mut t = Tests { height: 0 };\n    let context = get_context(1000, \"\", \"\", \"\");\n\n    t.tx_hook_after(context.clone());\n    assert_eq!(t.height, 9);\n    t.tx_hook_before(context);\n    assert_eq!(t.height, 10);\n}\n\n#[test]\nfn test_read_and_write_with_noneparams() {\n    struct Tests;\n\n    #[service]\n    impl Tests {\n        #[read]\n        fn test_read_fn(&self, _ctx: ServiceContext) -> ServiceResponse<()> {\n            ServiceResponse::<()>::from_succeed(())\n        }\n\n        #[write]\n        fn test_write_fn(&mut self, _ctx: ServiceContext) -> ServiceResponse<()> {\n            ServiceResponse::<()>::from_succeed(())\n        }\n    }\n\n    let context = get_context(1000, \"\", \"\", \"\");\n\n    let mut t = Tests {};\n    assert_eq!(t.test_read_fn(context.clone()).succeed_data, ());\n    assert_eq!(t.test_write_fn(context).succeed_data, ());\n}\n\n#[test]\nfn test_cycles() {\n    struct Tests;\n\n    #[service]\n    impl Tests {\n        #[cycles(100)]\n        fn test_cycles(&self, ctx: ServiceContext) -> ServiceResponse<()> {\n            ServiceResponse::<()>::from_succeed(())\n        }\n\n        #[cycles(500)]\n        fn test_cycles2(&self, ctx: ServiceContext) -> ServiceResponse<()> {\n            ServiceResponse::<()>::from_succeed(())\n        }\n    }\n\n    #[cycles(200)]\n    fn test_sub_cycles_fn1(ctx: ServiceContext) -> ServiceResponse<()> {\n        ServiceResponse::<()>::from_succeed(())\n    }\n\n    #[cycles(200)]\n    fn test_sub_cycles_fn2(_foo: u64, ctx: ServiceContext) -> ServiceResponse<()> {\n   
     ServiceResponse::<()>::from_succeed(())\n    }\n\n    let t = Tests {};\n    let context = get_context(1000, \"\", \"\", \"\");\n    t.test_cycles(context.clone());\n    assert_eq!(context.get_cycles_used(), 100);\n\n    t.test_cycles2(context.clone());\n    assert_eq!(context.get_cycles_used(), 600);\n\n    test_sub_cycles_fn1(context.clone());\n    assert_eq!(context.get_cycles_used(), 800);\n\n    test_sub_cycles_fn2(1, context.clone());\n    assert_eq!(context.get_cycles_used(), 1000);\n}\n\n#[test]\nfn test_service() {\n    #[derive(Serialize, Deserialize, Debug)]\n    struct TestServicePayload {\n        name: String,\n        age:  u64,\n        sex:  bool,\n    }\n    #[derive(Serialize, Deserialize, Debug, Default)]\n    struct TestServiceResponse {\n        pub message: String,\n    }\n\n    struct Tests<SDK: ServiceSDK> {\n        _sdk:         SDK,\n        genesis_data: String,\n        hook_before:  bool,\n        hook_after:   bool,\n    }\n\n    #[service]\n    impl<SDK: ServiceSDK> Tests<SDK> {\n        #[genesis]\n        fn init_genesis(&mut self) {\n            self.genesis_data = \"genesis\".to_owned();\n        }\n\n        #[hook_before]\n        fn custom_hook_before(&mut self, _params: &ExecutorParams) {\n            self.hook_before = true;\n        }\n\n        #[hook_after]\n        fn custom_hook_after(&mut self, _params: &ExecutorParams) {\n            self.hook_after = true;\n        }\n\n        #[read]\n        fn test_read(\n            &self,\n            _ctx: ServiceContext,\n            _payload: TestServicePayload,\n        ) -> ServiceResponse<TestServiceResponse> {\n            let res = TestServiceResponse {\n                message: \"read ok\".to_owned(),\n            };\n\n            ServiceResponse::<TestServiceResponse>::from_succeed(res)\n        }\n\n        #[write]\n        fn test_write(\n            &mut self,\n            _ctx: ServiceContext,\n            _payload: TestServicePayload,\n        ) -> 
ServiceResponse<TestServiceResponse> {\n            let res = TestServiceResponse {\n                message: \"write ok\".to_owned(),\n            };\n\n            ServiceResponse::<TestServiceResponse>::from_succeed(res)\n        }\n    }\n\n    let payload = TestServicePayload {\n        name: \"test\".to_owned(),\n        age:  10,\n        sex:  false,\n    };\n    let payload_str = serde_json::to_string(&payload).unwrap();\n\n    let sdk = MockServiceSDK {};\n    let mut test_service = Tests {\n        _sdk:         sdk,\n        genesis_data: \"\".to_owned(),\n        hook_after:   false,\n        hook_before:  false,\n    };\n\n    test_service.genesis_(\"\".to_owned());\n    assert_eq!(test_service.genesis_data, \"genesis\");\n\n    let context = get_context(1024 * 1024, \"\", \"test_write\", &payload_str);\n    let write_res = test_service.write_(context).succeed_data;\n    assert_eq!(write_res, r#\"{\"message\":\"write ok\"}\"#);\n\n    let context = get_context(1024 * 1024, \"\", \"test_read\", &payload_str);\n    let read_res = test_service.read_(context).succeed_data;\n    assert_eq!(read_res, r#\"{\"message\":\"read ok\"}\"#);\n\n    let context = get_context(1024 * 1024, \"\", \"test_notfound\", &payload_str);\n    let read_res = panic::catch_unwind(AssertUnwindSafe(|| test_service.read_(context.clone())));\n    assert_eq!(read_res.unwrap().is_error(), true);\n    let write_res = panic::catch_unwind(AssertUnwindSafe(|| test_service.write_(context)));\n    assert_eq!(write_res.unwrap().is_error(), true);\n\n    test_service.hook_before_(&mock_executor_params());\n    assert_eq!(test_service.hook_before, true);\n\n    test_service.hook_after_(&mock_executor_params());\n    assert_eq!(test_service.hook_after, true);\n}\n\n#[test]\nfn test_service_none_payload() {\n    #[derive(Serialize, Deserialize, Debug, Default)]\n    struct TestServiceResponse {\n        pub message: String,\n    }\n\n    struct Tests<SDK: ServiceSDK> {\n        _sdk:         
SDK,\n        genesis_data: String,\n        hook_before:  bool,\n        hook_after:   bool,\n    }\n\n    #[service]\n    impl<SDK: ServiceSDK> Tests<SDK> {\n        #[genesis]\n        fn init_genesis(&mut self) {\n            self.genesis_data = \"genesis\".to_owned();\n        }\n\n        #[hook_before]\n        fn custom_hook_before(&mut self, _params: &ExecutorParams) {\n            self.hook_before = true;\n        }\n\n        #[hook_after]\n        fn custom_hook_after(&mut self, _params: &ExecutorParams) {\n            self.hook_after = true;\n        }\n\n        #[read]\n        fn test_read(&self, _ctx: ServiceContext) -> ServiceResponse<TestServiceResponse> {\n            let res = TestServiceResponse {\n                message: \"read ok\".to_owned(),\n            };\n\n            ServiceResponse::<TestServiceResponse>::from_succeed(res)\n        }\n\n        #[write]\n        fn test_write(&mut self, _ctx: ServiceContext) -> ServiceResponse<TestServiceResponse> {\n            let res = TestServiceResponse {\n                message: \"write ok\".to_owned(),\n            };\n\n            ServiceResponse::<TestServiceResponse>::from_succeed(res)\n        }\n    }\n\n    let sdk = MockServiceSDK {};\n    let mut test_service = Tests {\n        _sdk:         sdk,\n        genesis_data: \"\".to_owned(),\n        hook_after:   false,\n        hook_before:  false,\n    };\n\n    test_service.genesis_(\"\".to_owned());\n    assert_eq!(test_service.genesis_data, \"genesis\");\n\n    let context = get_context(1024 * 1024, \"\", \"test_write\", \"\");\n    let write_res = test_service.write_(context).succeed_data;\n    assert_eq!(write_res, r#\"{\"message\":\"write ok\"}\"#);\n\n    let context = get_context(1024 * 1024, \"\", \"test_read\", \"\");\n    let read_res = test_service.read_(context).succeed_data;\n    assert_eq!(read_res, r#\"{\"message\":\"read ok\"}\"#);\n\n    let context = get_context(1024 * 1024, \"\", \"test_notfound\", \"\");\n    let 
read_res = panic::catch_unwind(AssertUnwindSafe(|| test_service.read_(context.clone())));\n    assert_eq!(read_res.unwrap().is_error(), true);\n    let write_res = panic::catch_unwind(AssertUnwindSafe(|| test_service.write_(context)));\n    assert_eq!(write_res.unwrap().is_error(), true);\n\n    test_service.hook_before_(&mock_executor_params());\n    assert_eq!(test_service.hook_before, true);\n\n    test_service.hook_after_(&mock_executor_params());\n    assert_eq!(test_service.hook_after, true);\n}\n\n#[test]\nfn test_service_none_response() {\n    struct Tests<SDK: ServiceSDK> {\n        _sdk:         SDK,\n        genesis_data: String,\n        hook_before:  bool,\n        hook_after:   bool,\n    }\n\n    #[service]\n    impl<SDK: ServiceSDK> Tests<SDK> {\n        #[genesis]\n        fn init_genesis(&mut self) {\n            self.genesis_data = \"genesis\".to_owned();\n        }\n\n        #[hook_before]\n        fn custom_hook_before(&mut self, _params: &ExecutorParams) {\n            self.hook_before = true;\n        }\n\n        #[hook_after]\n        fn custom_hook_after(&mut self, _params: &ExecutorParams) {\n            self.hook_after = true;\n        }\n\n        #[read]\n        fn test_read(&self, _ctx: ServiceContext) -> ServiceResponse<()> {\n            ServiceResponse::<()>::from_succeed(())\n        }\n\n        #[write]\n        fn test_write(&mut self, _ctx: ServiceContext) -> ServiceResponse<()> {\n            ServiceResponse::<()>::from_succeed(())\n        }\n    }\n\n    let sdk = MockServiceSDK {};\n    let mut test_service = Tests {\n        _sdk:         sdk,\n        genesis_data: \"\".to_owned(),\n        hook_after:   false,\n        hook_before:  false,\n    };\n\n    test_service.genesis_(\"\".to_owned());\n    assert_eq!(test_service.genesis_data, \"genesis\");\n\n    let context = get_context(1024 * 1024, \"\", \"test_write\", \"\");\n    let write_res = test_service.write_(context).succeed_data;\n    assert_eq!(write_res, 
\"\");\n\n    let context = get_context(1024 * 1024, \"\", \"test_read\", \"\");\n    let read_res = test_service.read_(context).succeed_data;\n    assert_eq!(read_res, \"\");\n\n    let context = get_context(1024 * 1024, \"\", \"test_notfound\", \"\");\n    let read_res = panic::catch_unwind(AssertUnwindSafe(|| test_service.read_(context.clone())));\n    assert_eq!(read_res.unwrap().is_error(), true);\n    let write_res = panic::catch_unwind(AssertUnwindSafe(|| test_service.write_(context)));\n    assert_eq!(write_res.unwrap().is_error(), true);\n\n    test_service.hook_before_(&mock_executor_params());\n    assert_eq!(test_service.hook_before, true);\n\n    test_service.hook_after_(&mock_executor_params());\n    assert_eq!(test_service.hook_after, true);\n}\n\nfn get_context(cycles_limit: u64, service: &str, method: &str, payload: &str) -> ServiceContext {\n    let params = ServiceContextParams {\n        tx_hash: None,\n        nonce: None,\n        cycles_limit,\n        cycles_price: 1,\n        cycles_used: Rc::new(RefCell::new(0)),\n        caller: Address::from_hash(Hash::from_empty()).unwrap(),\n        height: 1,\n        timestamp: 0,\n        service_name: service.to_owned(),\n        service_method: method.to_owned(),\n        service_payload: payload.to_owned(),\n        extra: None,\n        events: Rc::new(RefCell::new(vec![])),\n    };\n\n    ServiceContext::new(params)\n}\n\nfn mock_executor_params() -> ExecutorParams {\n    ExecutorParams {\n        state_root:   Hash::default(),\n        height:       9,\n        timestamp:    99,\n        cycles_limit: 99999,\n        proposer:     Address::from_hash(Hash::from_empty()).unwrap(),\n    }\n}\n\nstruct MockServiceSDK;\n\nimpl ServiceSDK for MockServiceSDK {\n    // Alloc or recover a `Map` by `var_name`\n    fn alloc_or_recover_map<Key: 'static + FixedCodec + PartialEq, Val: 'static + FixedCodec>(\n        &mut self,\n        _var_name: &str,\n    ) -> Box<dyn StoreMap<Key, Val>> {\n        
unimplemented!()\n    }\n\n    // Alloc or recover an `Array` by `var_name`\n    fn alloc_or_recover_array<Elm: 'static + FixedCodec>(\n        &mut self,\n        _var_name: &str,\n    ) -> Box<dyn StoreArray<Elm>> {\n        unimplemented!()\n    }\n\n    // Alloc or recover a `Uint64` by `var_name`\n    fn alloc_or_recover_uint64(&mut self, _var_name: &str) -> Box<dyn StoreUint64> {\n        unimplemented!()\n    }\n\n    // Alloc or recover a `String` by `var_name`\n    fn alloc_or_recover_string(&mut self, _var_name: &str) -> Box<dyn StoreString> {\n        unimplemented!()\n    }\n\n    // Alloc or recover a `Bool` by `var_name`\n    fn alloc_or_recover_bool(&mut self, _var_name: &str) -> Box<dyn StoreBool> {\n        unimplemented!()\n    }\n\n    // Get a value from the service state by key\n    fn get_value<Key: FixedCodec, Ret: FixedCodec>(&self, _key: &Key) -> Option<Ret> {\n        unimplemented!()\n    }\n\n    // Set a value to the service state by key\n    fn set_value<Key: FixedCodec, Val: FixedCodec>(&mut self, _key: Key, _val: Val) {\n        unimplemented!()\n    }\n\n    // Get a value from the specified address by key\n    fn get_account_value<Key: FixedCodec, Ret: FixedCodec>(\n        &self,\n        _address: &Address,\n        _key: &Key,\n    ) -> Option<Ret> {\n        unimplemented!()\n    }\n\n    // Insert a key/value pair for the specified address\n    fn set_account_value<Key: FixedCodec, Val: FixedCodec>(\n        &mut self,\n        _address: &Address,\n        _key: Key,\n        _val: Val,\n    ) {\n        unimplemented!()\n    }\n\n    // Get a signed transaction by `tx_hash`\n    // if not found on the chain, return None\n    fn get_transaction_by_hash(&self, _tx_hash: &Hash) -> Option<SignedTransaction> {\n        unimplemented!()\n    }\n\n    // Get a block by `height`\n    // if not found on the chain, return None\n    // When the parameter `height` is None, get the latest (executing) `block`\n    fn 
get_block_by_height(&self, _height: Option<u64>) -> Option<Block> {\n        unimplemented!()\n    }\n\n    // Get a receipt by `tx_hash`\n    // if not found on the chain, return None\n    fn get_receipt_by_hash(&self, _tx_hash: &Hash) -> Option<Receipt> {\n        unimplemented!()\n    }\n}\n"
  },
  {
    "path": "built-in-services/asset/Cargo.toml",
    "content": "[package]\nname = \"asset\"\nversion = \"0.2.1\"\nauthors = [\"Muta Dev <muta@nervos.org>\"]\nedition = \"2018\"\nrepository = \"https://github.com/nervosnetwork/muta\"\n\n# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html\n\n[dependencies]\nbinding-macro = { path = \"../../binding-macro\" }\nprotocol = { path = \"../../protocol\", package = \"muta-protocol\" }\n\nserde = { version = \"1.0\", features = [\"derive\"] }\nserde_json = \"1.0\"\nrlp = \"0.4\"\nbytes = \"0.5\"\nderive_more = \"0.99\"\nbyteorder = \"1.3\"\nmuta-codec-derive = \"0.2\"\n\n[dev-dependencies]\ncita_trie = \"2.0\"\nasync-trait = \"0.1\"\nframework = { path = \"../../framework\" }\n"
  },
  {
    "path": "built-in-services/asset/src/lib.rs",
    "content": "#![allow(clippy::mutable_key_type)]\n\n#[cfg(test)]\nmod tests;\npub mod types;\n\nuse std::collections::BTreeMap;\n\nuse binding_macro::{cycles, genesis, service};\nuse protocol::traits::{ExecutorParams, ServiceResponse, ServiceSDK, StoreMap};\nuse protocol::try_service_response;\nuse protocol::types::{Address, Bytes, Hash, ServiceContext};\n\nuse crate::types::{\n    ApproveEvent, ApprovePayload, Asset, AssetBalance, CreateAssetPayload, GetAllowancePayload,\n    GetAllowanceResponse, GetAssetPayload, GetBalancePayload, GetBalanceResponse,\n    InitGenesisPayload, TransferEvent, TransferFromEvent, TransferFromPayload, TransferPayload,\n};\n\npub const ASSET_SERVICE_NAME: &str = \"asset\";\n\npub trait Assets {\n    fn create_(&mut self, ctx: &ServiceContext, payload: CreateAssetPayload)\n        -> ServiceResponse<()>;\n\n    fn balance_(\n        &self,\n        ctx: &ServiceContext,\n        payload: GetBalancePayload,\n    ) -> ServiceResponse<GetBalanceResponse>;\n\n    fn transfer_(&mut self, ctx: &ServiceContext, payload: TransferPayload) -> ServiceResponse<()>;\n\n    fn transfer_from_(\n        &mut self,\n        ctx: &ServiceContext,\n        payload: TransferFromPayload,\n    ) -> ServiceResponse<()>;\n\n    fn allowance_(\n        &self,\n        ctx: &ServiceContext,\n        payload: GetAllowancePayload,\n    ) -> ServiceResponse<GetAllowanceResponse>;\n}\n\npub struct AssetService<SDK> {\n    sdk:    SDK,\n    assets: Box<dyn StoreMap<Hash, Asset>>,\n}\n\nimpl<SDK: ServiceSDK> Assets for AssetService<SDK> {\n    fn create_(\n        &mut self,\n        ctx: &ServiceContext,\n        payload: CreateAssetPayload,\n    ) -> ServiceResponse<()> {\n        let res = self.create_asset(ctx.clone(), payload);\n        try_service_response!(res);\n        ServiceResponse::from_succeed(())\n    }\n\n    fn balance_(\n        &self,\n        ctx: &ServiceContext,\n        payload: GetBalancePayload,\n    ) -> ServiceResponse<GetBalanceResponse> 
{\n        self.get_balance(ctx.clone(), payload)\n    }\n\n    fn transfer_(&mut self, ctx: &ServiceContext, payload: TransferPayload) -> ServiceResponse<()> {\n        self.transfer(ctx.clone(), payload)\n    }\n\n    fn transfer_from_(\n        &mut self,\n        ctx: &ServiceContext,\n        payload: TransferFromPayload,\n    ) -> ServiceResponse<()> {\n        self.transfer_from(ctx.clone(), payload)\n    }\n\n    fn allowance_(\n        &self,\n        ctx: &ServiceContext,\n        payload: GetAllowancePayload,\n    ) -> ServiceResponse<GetAllowanceResponse> {\n        self.get_allowance(ctx.clone(), payload)\n    }\n}\n\n#[service]\nimpl<SDK: ServiceSDK> AssetService<SDK> {\n    pub fn new(mut sdk: SDK) -> Self {\n        let assets: Box<dyn StoreMap<Hash, Asset>> = sdk.alloc_or_recover_map(\"assets\");\n\n        Self { sdk, assets }\n    }\n\n    #[genesis]\n    fn init_genesis(&mut self, payload: InitGenesisPayload) {\n        let asset = Asset {\n            id:     payload.id,\n            name:   payload.name,\n            symbol: payload.symbol,\n            supply: payload.supply,\n            issuer: payload.issuer.clone(),\n        };\n\n        self.assets.insert(asset.id.clone(), asset.clone());\n\n        let asset_balance = AssetBalance {\n            value:     payload.supply,\n            allowance: BTreeMap::new(),\n        };\n\n        self.sdk\n            .set_account_value(&asset.issuer, asset.id, asset_balance)\n    }\n\n    #[cycles(10_000)]\n    #[read]\n    fn get_asset(&self, ctx: ServiceContext, payload: GetAssetPayload) -> ServiceResponse<Asset> {\n        if let Some(asset) = self.assets.get(&payload.id) {\n            ServiceResponse::<Asset>::from_succeed(asset)\n        } else {\n            ServiceResponse::<Asset>::from_error(101, \"asset id not found\".to_owned())\n        }\n    }\n\n    #[cycles(10_000)]\n    #[read]\n    fn get_balance(\n        &self,\n        ctx: ServiceContext,\n        payload: GetBalancePayload,\n
    ) -> ServiceResponse<GetBalanceResponse> {\n        if !self.assets.contains(&payload.asset_id) {\n            return ServiceResponse::<GetBalanceResponse>::from_error(\n                101,\n                \"asset id not found\".to_owned(),\n            );\n        }\n\n        let asset_balance = self\n            .sdk\n            .get_account_value(&payload.user, &payload.asset_id)\n            .unwrap_or(AssetBalance {\n                value:     0,\n                allowance: BTreeMap::new(),\n            });\n\n        let res = GetBalanceResponse {\n            asset_id: payload.asset_id,\n            user:     payload.user,\n            balance:  asset_balance.value,\n        };\n\n        ServiceResponse::<GetBalanceResponse>::from_succeed(res)\n    }\n\n    #[cycles(10_000)]\n    #[read]\n    fn get_allowance(\n        &self,\n        ctx: ServiceContext,\n        payload: GetAllowancePayload,\n    ) -> ServiceResponse<GetAllowanceResponse> {\n        if !self.assets.contains(&payload.asset_id) {\n            return ServiceResponse::<GetAllowanceResponse>::from_error(\n                101,\n                \"asset id not found\".to_owned(),\n            );\n        }\n\n        let opt_asset_balance: Option<AssetBalance> = self\n            .sdk\n            .get_account_value(&payload.grantor, &payload.asset_id);\n\n        if let Some(v) = opt_asset_balance {\n            let allowance = v.allowance.get(&payload.grantee).unwrap_or(&0);\n\n            let res = GetAllowanceResponse {\n                asset_id: payload.asset_id,\n                grantor:  payload.grantor,\n                grantee:  payload.grantee,\n                value:    *allowance,\n            };\n            ServiceResponse::<GetAllowanceResponse>::from_succeed(res)\n        } else {\n            let res = GetAllowanceResponse {\n                asset_id: payload.asset_id,\n                grantor:  payload.grantor,\n                grantee:  payload.grantee,\n
                value:    0,\n            };\n            ServiceResponse::<GetAllowanceResponse>::from_succeed(res)\n        }\n    }\n\n    #[cycles(21_000)]\n    #[write]\n    fn create_asset(\n        &mut self,\n        ctx: ServiceContext,\n        payload: CreateAssetPayload,\n    ) -> ServiceResponse<Asset> {\n        let caller = ctx.get_caller();\n        let payload_res = serde_json::to_string(&payload);\n\n        if let Err(e) = payload_res {\n            return ServiceResponse::<Asset>::from_error(103, format!(\"{:?}\", e));\n        }\n        let payload_str = payload_res.unwrap();\n\n        let id = Hash::digest(Bytes::from(payload_str + &caller.to_string()));\n\n        if self.assets.contains(&id) {\n            return ServiceResponse::<Asset>::from_error(102, \"asset id already exists\".to_owned());\n        }\n        let asset = Asset {\n            id:     id.clone(),\n            name:   payload.name,\n            symbol: payload.symbol,\n            supply: payload.supply,\n            issuer: caller,\n        };\n        self.assets.insert(id, asset.clone());\n\n        let asset_balance = AssetBalance {\n            value:     payload.supply,\n            allowance: BTreeMap::new(),\n        };\n\n        self.sdk\n            .set_account_value(&asset.issuer, asset.id.clone(), asset_balance);\n\n        let event_res = serde_json::to_string(&asset);\n\n        if let Err(e) = event_res {\n            return ServiceResponse::<Asset>::from_error(103, format!(\"{:?}\", e));\n        }\n        let event_str = event_res.unwrap();\n        ctx.emit_event(\n            ASSET_SERVICE_NAME.to_owned(),\n            \"CreateAsset\".to_owned(),\n            event_str,\n        );\n\n        ServiceResponse::<Asset>::from_succeed(asset)\n    }\n\n    #[cycles(21_000)]\n    #[write]\n    pub fn transfer(\n        &mut self,\n        ctx: ServiceContext,\n        payload: TransferPayload,\n    ) -> ServiceResponse<()> {\n        let caller = ctx.get_caller();\n
        let asset_id = payload.asset_id.clone();\n        let value = payload.value;\n        let to = payload.to;\n\n        if !self.assets.contains(&payload.asset_id) {\n            return ServiceResponse::<()>::from_error(101, \"asset id not found\".to_owned());\n        }\n\n        if let Err(e) = self._transfer(caller.clone(), to.clone(), asset_id.clone(), value) {\n            return ServiceResponse::<()>::from_error(106, format!(\"{:?}\", e));\n        };\n\n        let event = TransferEvent {\n            asset_id,\n            from: caller,\n            to,\n            value,\n        };\n        let event_res = serde_json::to_string(&event);\n\n        if let Err(e) = event_res {\n            return ServiceResponse::<()>::from_error(103, format!(\"{:?}\", e));\n        };\n        let event_str = event_res.unwrap();\n        ctx.emit_event(\n            ASSET_SERVICE_NAME.to_owned(),\n            \"TransferAsset\".to_owned(),\n            event_str,\n        );\n\n        ServiceResponse::<()>::from_succeed(())\n    }\n\n    #[cycles(21_000)]\n    #[write]\n    fn approve(&mut self, ctx: ServiceContext, payload: ApprovePayload) -> ServiceResponse<()> {\n        let caller = ctx.get_caller();\n        let asset_id = payload.asset_id.clone();\n        let value = payload.value;\n        let to = payload.to;\n\n        if caller == to {\n            return ServiceResponse::<()>::from_error(104, \"can't approve to yourself\".to_owned());\n        }\n\n        if !self.assets.contains(&payload.asset_id) {\n            return ServiceResponse::<()>::from_error(101, \"asset id not found\".to_owned());\n        }\n\n        let mut caller_asset_balance: AssetBalance = self\n            .sdk\n            .get_account_value(&caller, &asset_id)\n            .unwrap_or(AssetBalance {\n                value:     0,\n                allowance: BTreeMap::new(),\n            });\n        caller_asset_balance\n            .allowance\n            .entry(to.clone())\n
            .and_modify(|e| *e = value)\n            .or_insert(value);\n\n        self.sdk\n            .set_account_value(&caller, asset_id.clone(), caller_asset_balance);\n\n        let event = ApproveEvent {\n            asset_id,\n            grantor: caller,\n            grantee: to,\n            value,\n        };\n        let event_res = serde_json::to_string(&event);\n\n        if let Err(e) = event_res {\n            return ServiceResponse::<()>::from_error(103, format!(\"{:?}\", e));\n        };\n        let event_str = event_res.unwrap();\n        ctx.emit_event(\n            ASSET_SERVICE_NAME.to_owned(),\n            \"ApproveAsset\".to_owned(),\n            event_str,\n        );\n\n        ServiceResponse::<()>::from_succeed(())\n    }\n\n    #[cycles(21_000)]\n    #[write]\n    pub fn transfer_from(\n        &mut self,\n        ctx: ServiceContext,\n        payload: TransferFromPayload,\n    ) -> ServiceResponse<()> {\n        let caller = ctx.get_caller();\n        let sender = payload.sender;\n        let recipient = payload.recipient;\n        let asset_id = payload.asset_id;\n        let value = payload.value;\n\n        if !self.assets.contains(&asset_id) {\n            return ServiceResponse::<()>::from_error(101, \"asset id not found\".to_owned());\n        }\n\n        let mut sender_asset_balance: AssetBalance = self\n            .sdk\n            .get_account_value(&sender, &asset_id)\n            .unwrap_or(AssetBalance {\n                value:     0,\n                allowance: BTreeMap::new(),\n            });\n        let sender_allowance = sender_asset_balance\n            .allowance\n            .entry(caller.clone())\n            .or_insert(0);\n        if *sender_allowance < value {\n            return ServiceResponse::<()>::from_error(105, \"insufficient allowance\".to_owned());\n        }\n        let after_sender_allowance = *sender_allowance - value;\n        sender_asset_balance\n            .allowance\n            .entry(caller.clone())\n
            .and_modify(|e| *e = after_sender_allowance)\n            .or_insert(after_sender_allowance);\n        self.sdk\n            .set_account_value(&sender, asset_id.clone(), sender_asset_balance);\n\n        if let Err(e) = self._transfer(sender.clone(), recipient.clone(), asset_id.clone(), value) {\n            return ServiceResponse::<()>::from_error(106, format!(\"{:?}\", e));\n        };\n\n        let event = TransferFromEvent {\n            asset_id,\n            caller,\n            sender,\n            recipient,\n            value,\n        };\n        let event_res = serde_json::to_string(&event);\n\n        if let Err(e) = event_res {\n            return ServiceResponse::<()>::from_error(103, format!(\"{:?}\", e));\n        };\n        let event_str = event_res.unwrap();\n        ctx.emit_event(\n            ASSET_SERVICE_NAME.to_owned(),\n            \"TransferFrom\".to_owned(),\n            event_str,\n        );\n\n        ServiceResponse::<()>::from_succeed(())\n    }\n\n    fn _transfer(\n        &mut self,\n        sender: Address,\n        recipient: Address,\n        asset_id: Hash,\n        value: u64,\n    ) -> Result<(), String> {\n        if recipient == sender {\n            return Err(\"can't send value to yourself\".to_owned());\n        }\n\n        let mut sender_asset_balance: AssetBalance = self\n            .sdk\n            .get_account_value(&sender, &asset_id)\n            .unwrap_or(AssetBalance {\n                value:     0,\n                allowance: BTreeMap::new(),\n            });\n        let sender_balance = sender_asset_balance.value;\n\n        if sender_balance < value {\n            return Err(\"insufficient balance\".to_owned());\n        }\n\n        let mut to_asset_balance: AssetBalance = self\n            .sdk\n            .get_account_value(&recipient, &asset_id)\n            .unwrap_or(AssetBalance {\n                value:     0,\n                allowance: BTreeMap::new(),\n
            });\n\n        let (v, overflow) = to_asset_balance.value.overflowing_add(value);\n        if overflow {\n            return Err(\"u64 overflow\".to_owned());\n        }\n        to_asset_balance.value = v;\n\n        self.sdk\n            .set_account_value(&recipient, asset_id.clone(), to_asset_balance);\n\n        let (v, overflow) = sender_balance.overflowing_sub(value);\n        if overflow {\n            return Err(\"u64 underflow\".to_owned());\n        }\n        sender_asset_balance.value = v;\n        self.sdk\n            .set_account_value(&sender, asset_id, sender_asset_balance);\n\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "built-in-services/asset/src/tests/mod.rs",
    "content": "use std::cell::RefCell;\nuse std::rc::Rc;\nuse std::str::FromStr;\nuse std::sync::Arc;\n\nuse async_trait::async_trait;\nuse cita_trie::MemoryDB;\n\nuse framework::binding::sdk::{DefaultChainQuerier, DefaultServiceSDK};\nuse framework::binding::state::{GeneralServiceState, MPTTrie};\nuse protocol::traits::{CommonStorage, Context, Storage};\nuse protocol::types::{\n    Address, Block, BlockHeader, Hash, Proof, Receipt, ServiceContext, ServiceContextParams,\n    SignedTransaction,\n};\nuse protocol::ProtocolResult;\n\nuse crate::types::{\n    ApprovePayload, CreateAssetPayload, GetAllowancePayload, GetAssetPayload, GetBalancePayload,\n    TransferFromPayload, TransferPayload,\n};\nuse crate::AssetService;\n\n#[test]\nfn test_create_asset() {\n    let cycles_limit = 1024 * 1024 * 1024; // 1073741824\n    let caller = Address::from_str(\"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\").unwrap();\n    let context = mock_context(cycles_limit, caller.clone());\n\n    let mut service = new_asset_service();\n\n    let supply = 1024 * 1024;\n    // test create_asset\n    let asset = service\n        .create_asset(context.clone(), CreateAssetPayload {\n            name: \"test\".to_owned(),\n            symbol: \"test\".to_owned(),\n            supply,\n        })\n        .succeed_data;\n\n    let new_asset = service\n        .get_asset(context.clone(), GetAssetPayload {\n            id: asset.id.clone(),\n        })\n        .succeed_data;\n    assert_eq!(asset, new_asset);\n\n    let balance_res = service\n        .get_balance(context, GetBalancePayload {\n            asset_id: asset.id.clone(),\n            user:     caller,\n        })\n        .succeed_data;\n    assert_eq!(balance_res.balance, supply);\n    assert_eq!(balance_res.asset_id, asset.id);\n}\n\n#[test]\nfn test_transfer() {\n    let cycles_limit = 1024 * 1024 * 1024; // 1073741824\n    let caller = Address::from_str(\"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\").unwrap();\n    let context = 
mock_context(cycles_limit, caller.clone());\n\n    let mut service = new_asset_service();\n\n    let supply = 1024 * 1024;\n    // create an asset to transfer\n    let asset = service\n        .create_asset(context.clone(), CreateAssetPayload {\n            name: \"test\".to_owned(),\n            symbol: \"test\".to_owned(),\n            supply,\n        })\n        .succeed_data;\n\n    let to_address = Address::from_str(\"muta15a8a9ksxe3hhjpw3l7wz7ry778qg8h9wz8y35p\").unwrap();\n    let transfer_res = service.transfer(context.clone(), TransferPayload {\n        asset_id: asset.id.clone(),\n        to:       to_address.clone(),\n        value:    1024,\n    });\n    assert!(!transfer_res.is_error());\n\n    let balance_res = service\n        .get_balance(context, GetBalancePayload {\n            asset_id: asset.id.clone(),\n            user:     caller,\n        })\n        .succeed_data;\n    assert_eq!(balance_res.balance, supply - 1024);\n\n    let context = mock_context(cycles_limit, to_address.clone());\n    let balance_res = service\n        .get_balance(context, GetBalancePayload {\n            asset_id: asset.id,\n            user:     to_address,\n        })\n        .succeed_data;\n    assert_eq!(balance_res.balance, 1024);\n}\n\n#[test]\nfn test_approve() {\n    let cycles_limit = 1024 * 1024 * 1024; // 1073741824\n    let caller = Address::from_str(\"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\").unwrap();\n    let context = mock_context(cycles_limit, caller.clone());\n\n    let mut service = new_asset_service();\n\n    let supply = 1024 * 1024;\n    let asset = service\n        .create_asset(context.clone(), CreateAssetPayload {\n            name: \"test\".to_owned(),\n            symbol: \"test\".to_owned(),\n            supply,\n        })\n        .succeed_data;\n\n    let to_address = Address::from_str(\"muta15a8a9ksxe3hhjpw3l7wz7ry778qg8h9wz8y35p\").unwrap();\n    let approve_res = service.approve(context.clone(), ApprovePayload {\n        asset_id: asset.id.clone(),\n        to:       to_address.clone(),\n        value:    1024,\n    });\n    assert!(!approve_res.is_error());\n
\n    let allowance_res = service\n        .get_allowance(context, GetAllowancePayload {\n            asset_id: asset.id.clone(),\n            grantor:  caller,\n            grantee:  to_address.clone(),\n        })\n        .succeed_data;\n    assert_eq!(allowance_res.asset_id, asset.id);\n    assert_eq!(allowance_res.grantee, to_address);\n    assert_eq!(allowance_res.value, 1024);\n}\n\n#[test]\nfn test_transfer_from() {\n    let cycles_limit = 1024 * 1024 * 1024; // 1073741824\n    let caller = Address::from_str(\"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\").unwrap();\n    let context = mock_context(cycles_limit, caller.clone());\n\n    let mut service = new_asset_service();\n\n    let supply = 1024 * 1024;\n    let asset = service\n        .create_asset(context.clone(), CreateAssetPayload {\n            name: \"test\".to_owned(),\n            symbol: \"test\".to_owned(),\n            supply,\n        })\n        .succeed_data;\n\n    let to_address = Address::from_str(\"muta15a8a9ksxe3hhjpw3l7wz7ry778qg8h9wz8y35p\").unwrap();\n    let approve_res = service.approve(context.clone(), ApprovePayload {\n        asset_id: asset.id.clone(),\n        to:       to_address.clone(),\n        value:    1024,\n    });\n    assert!(!approve_res.is_error());\n\n    let to_context = mock_context(cycles_limit, to_address.clone());\n\n    let transfer_from_res = service.transfer_from(to_context.clone(), TransferFromPayload {\n        asset_id:  asset.id.clone(),\n        sender:    caller.clone(),\n        recipient: to_address.clone(),\n        value:     24,\n    });\n    assert!(!transfer_from_res.is_error());\n\n    let allowance_res = service\n        .get_allowance(context.clone(), GetAllowancePayload {\n            asset_id: asset.id.clone(),\n            grantor:  caller.clone(),\n            grantee:  to_address.clone(),\n        })\n        .succeed_data;\n    assert_eq!(allowance_res.asset_id, asset.id);\n    assert_eq!(allowance_res.grantee, to_address);\n    assert_eq!(allowance_res.value, 1000);\n\n    let balance_res = service\n        .get_balance(context, GetBalancePayload 
{\n            asset_id: asset.id.clone(),\n            user:     caller,\n        })\n        .succeed_data;\n    assert_eq!(balance_res.balance, supply - 24);\n\n    let balance_res = service\n        .get_balance(to_context, GetBalancePayload {\n            asset_id: asset.id,\n            user:     to_address,\n        })\n        .succeed_data;\n    assert_eq!(balance_res.balance, 24);\n}\n\nfn new_asset_service(\n) -> AssetService<DefaultServiceSDK<GeneralServiceState<MemoryDB>, DefaultChainQuerier<MockStorage>>>\n{\n    let chain_db = DefaultChainQuerier::new(Arc::new(MockStorage {}));\n    let trie = MPTTrie::new(Arc::new(MemoryDB::new(false)));\n    let state = GeneralServiceState::new(trie);\n\n    let sdk = DefaultServiceSDK::new(Rc::new(RefCell::new(state)), Rc::new(chain_db));\n\n    AssetService::new(sdk)\n}\n\nfn mock_context(cycles_limit: u64, caller: Address) -> ServiceContext {\n    let params = ServiceContextParams {\n        tx_hash: None,\n        nonce: None,\n        cycles_limit,\n        cycles_price: 1,\n        cycles_used: Rc::new(RefCell::new(0)),\n        caller,\n        height: 1,\n        timestamp: 0,\n        service_name: \"service_name\".to_owned(),\n        service_method: \"service_method\".to_owned(),\n        service_payload: \"service_payload\".to_owned(),\n        extra: None,\n        events: Rc::new(RefCell::new(vec![])),\n    };\n\n    ServiceContext::new(params)\n}\n\nstruct MockStorage;\n\n#[async_trait]\nimpl CommonStorage for MockStorage {\n    async fn insert_block(&self, _ctx: Context, _block: Block) -> ProtocolResult<()> {\n        unimplemented!()\n    }\n\n    async fn get_block(&self, _ctx: Context, _height: u64) -> ProtocolResult<Option<Block>> {\n        unimplemented!()\n    }\n\n    async fn set_block(&self, _ctx: Context, _block: Block) -> ProtocolResult<()> {\n        unimplemented!()\n    }\n\n    async fn remove_block(&self, _ctx: Context, _height: u64) -> ProtocolResult<()> {\n        
unimplemented!()\n    }\n\n    async fn get_latest_block(&self, _ctx: Context) -> ProtocolResult<Block> {\n        unimplemented!()\n    }\n\n    async fn set_latest_block(&self, _ctx: Context, _block: Block) -> ProtocolResult<()> {\n        unimplemented!()\n    }\n\n    async fn get_latest_block_header(&self, _ctx: Context) -> ProtocolResult<BlockHeader> {\n        unimplemented!()\n    }\n\n    async fn get_block_header(\n        &self,\n        _ctx: Context,\n        _height: u64,\n    ) -> ProtocolResult<Option<BlockHeader>> {\n        unimplemented!()\n    }\n}\n\n#[async_trait]\nimpl Storage for MockStorage {\n    async fn insert_transactions(\n        &self,\n        _ctx: Context,\n        _: u64,\n        _: Vec<SignedTransaction>,\n    ) -> ProtocolResult<()> {\n        unimplemented!()\n    }\n\n    async fn insert_receipts(&self, _ctx: Context, _: u64, _: Vec<Receipt>) -> ProtocolResult<()> {\n        unimplemented!()\n    }\n\n    async fn update_latest_proof(&self, _ctx: Context, _: Proof) -> ProtocolResult<()> {\n        unimplemented!()\n    }\n\n    async fn get_transaction_by_hash(\n        &self,\n        _ctx: Context,\n        _: &Hash,\n    ) -> ProtocolResult<Option<SignedTransaction>> {\n        unimplemented!()\n    }\n\n    async fn get_transactions(\n        &self,\n        _ctx: Context,\n        _: u64,\n        _: &[Hash],\n    ) -> ProtocolResult<Vec<Option<SignedTransaction>>> {\n        unimplemented!()\n    }\n\n    async fn get_receipt_by_hash(&self, _ctx: Context, _: Hash) -> ProtocolResult<Option<Receipt>> {\n        unimplemented!()\n    }\n\n    async fn get_receipts(\n        &self,\n        _ctx: Context,\n        _: u64,\n        _: Vec<Hash>,\n    ) -> ProtocolResult<Vec<Option<Receipt>>> {\n        unimplemented!()\n    }\n\n    async fn get_latest_proof(&self, _ctx: Context) -> ProtocolResult<Proof> {\n        unimplemented!()\n    }\n}\n"
  },
  {
    "path": "built-in-services/asset/src/types.rs",
    "content": "use std::collections::BTreeMap;\n\nuse muta_codec_derive::RlpFixedCodec;\nuse serde::{Deserialize, Serialize};\n\nuse protocol::fixed_codec::{FixedCodec, FixedCodecError};\nuse protocol::types::{Address, Bytes, Hash};\nuse protocol::ProtocolResult;\n\n/// Payload\n#[derive(RlpFixedCodec, Deserialize, Serialize, Clone, Debug)]\npub struct InitGenesisPayload {\n    pub id:     Hash,\n    pub name:   String,\n    pub symbol: String,\n    pub supply: u64,\n    pub issuer: Address,\n}\n\n#[derive(RlpFixedCodec, Deserialize, Serialize, Clone, Debug)]\npub struct CreateAssetPayload {\n    pub name:   String,\n    pub symbol: String,\n    pub supply: u64,\n}\n\n#[derive(RlpFixedCodec, Deserialize, Serialize, Clone, Debug)]\npub struct GetAssetPayload {\n    pub id: Hash,\n}\n\n#[derive(RlpFixedCodec, Deserialize, Serialize, Clone, Debug)]\npub struct TransferPayload {\n    pub asset_id: Hash,\n    pub to:       Address,\n    pub value:    u64,\n}\n\n#[derive(RlpFixedCodec, Deserialize, Serialize, Clone, Debug)]\npub struct TransferEvent {\n    pub asset_id: Hash,\n    pub from:     Address,\n    pub to:       Address,\n    pub value:    u64,\n}\n\npub type ApprovePayload = TransferPayload;\n\n#[derive(RlpFixedCodec, Deserialize, Serialize, Clone, Debug)]\npub struct ApproveEvent {\n    pub asset_id: Hash,\n    pub grantor:  Address,\n    pub grantee:  Address,\n    pub value:    u64,\n}\n\n#[derive(RlpFixedCodec, Deserialize, Serialize, Clone, Debug)]\npub struct TransferFromPayload {\n    pub asset_id:  Hash,\n    pub sender:    Address,\n    pub recipient: Address,\n    pub value:     u64,\n}\n\n#[derive(RlpFixedCodec, Deserialize, Serialize, Clone, Debug)]\npub struct TransferFromEvent {\n    pub asset_id:  Hash,\n    pub caller:    Address,\n    pub sender:    Address,\n    pub recipient: Address,\n    pub value:     u64,\n}\n\n#[derive(RlpFixedCodec, Deserialize, Serialize, Clone, Debug)]\npub struct GetBalancePayload {\n    pub asset_id: Hash,\n    
pub user:     Address,\n}\n\n#[derive(RlpFixedCodec, Deserialize, Serialize, Clone, Debug, Default)]\npub struct GetBalanceResponse {\n    pub asset_id: Hash,\n    pub user:     Address,\n    pub balance:  u64,\n}\n\n#[derive(RlpFixedCodec, Deserialize, Serialize, Clone, Debug)]\npub struct GetAllowancePayload {\n    pub asset_id: Hash,\n    pub grantor:  Address,\n    pub grantee:  Address,\n}\n\n#[derive(RlpFixedCodec, Deserialize, Serialize, Clone, Debug, Default)]\npub struct GetAllowanceResponse {\n    pub asset_id: Hash,\n    pub grantor:  Address,\n    pub grantee:  Address,\n    pub value:    u64,\n}\n\n#[derive(RlpFixedCodec, Deserialize, Serialize, Clone, Debug, PartialEq, Default)]\npub struct Asset {\n    pub id:     Hash,\n    pub name:   String,\n    pub symbol: String,\n    pub supply: u64,\n    pub issuer: Address,\n}\n\npub struct AssetBalance {\n    pub value:     u64,\n    pub allowance: BTreeMap<Address, u64>,\n}\n\n#[derive(RlpFixedCodec)]\nstruct AllowanceCodec {\n    pub addr:  Address,\n    pub total: u64,\n}\n\nimpl rlp::Decodable for AssetBalance {\n    fn decode(rlp: &rlp::Rlp) -> Result<Self, rlp::DecoderError> {\n        let value = rlp.at(0)?.as_val()?;\n        let codec_list: Vec<AllowanceCodec> = rlp::decode_list(rlp.at(1)?.as_raw());\n        let mut allowance = BTreeMap::new();\n        for v in codec_list {\n            allowance.insert(v.addr, v.total);\n        }\n\n        Ok(AssetBalance { value, allowance })\n    }\n}\n\nimpl rlp::Encodable for AssetBalance {\n    fn rlp_append(&self, s: &mut rlp::RlpStream) {\n        s.begin_list(2);\n        s.append(&self.value);\n\n        let mut codec_list = Vec::with_capacity(self.allowance.len());\n\n        for (address, allowance) in self.allowance.iter() {\n            let fixed_codec = AllowanceCodec {\n                addr:  address.clone(),\n                total: *allowance,\n            };\n\n            codec_list.push(fixed_codec);\n        }\n\n        
s.append_list(&codec_list);\n    }\n}\n\nimpl FixedCodec for AssetBalance {\n    fn encode_fixed(&self) -> ProtocolResult<Bytes> {\n        Ok(Bytes::from(rlp::encode(self)))\n    }\n\n    fn decode_fixed(bytes: Bytes) -> ProtocolResult<Self> {\n        Ok(rlp::decode(bytes.as_ref()).map_err(FixedCodecError::from)?)\n    }\n}\n"
  },
  {
    "path": "built-in-services/authorization/Cargo.toml",
    "content": "[package]\nname = \"authorization\"\nversion = \"0.2.1\"\nauthors = [\"Muta Dev <muta@nervos.org>\"]\nedition = \"2018\"\nrepository = \"https://github.com/nervosnetwork/muta\"\n\n# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html\n\n[dependencies]\nbinding-macro = { path = \"../../binding-macro\" }\nprotocol = { path = \"../../protocol\", package = \"muta-protocol\" }\n\nmulti-signature = { path = \"../multi-signature\" }\n\nserde = { version = \"1.0\", features = [\"derive\"] }\nserde_json = \"1.0\"\nlazy_static = \"1.4\"\nrlp = \"0.4\"\nbytes = \"0.5\"\nderive_more = \"0.99\"\nbyteorder = \"1.3\"\nmuta-codec-derive = \"0.2\"\n\n[dev-dependencies]\ncita_trie = \"2.0\"\nasync-trait = \"0.1\"\nframework = { path = \"../../framework\" }\n"
  },
  {
    "path": "built-in-services/authorization/src/lib.rs",
    "content": "use binding_macro::{cycles, service};\nuse protocol::traits::{ExecutorParams, ServiceResponse, ServiceSDK};\nuse protocol::types::{ServiceContext, SignedTransaction};\nuse serde::Deserialize;\n\nuse multi_signature::MultiSignatureService;\n\npub const AUTHORIZATION_SERVICE_NAME: &str = \"authorization\";\n\n#[derive(Deserialize)]\npub struct PtrSignedTransaction {\n    ptr: usize,\n}\n\npub struct AuthorizationService<SDK> {\n    _sdk:      SDK,\n    multi_sig: MultiSignatureService<SDK>,\n}\n\n#[service]\nimpl<SDK: ServiceSDK> AuthorizationService<SDK> {\n    pub fn new(_sdk: SDK, multi_sig: MultiSignatureService<SDK>) -> Self {\n        Self { _sdk, multi_sig }\n    }\n\n    #[cycles(21_000)]\n    #[read]\n    fn check_authorization_by_ptr(\n        &self,\n        ctx: ServiceContext,\n        payload: PtrSignedTransaction,\n    ) -> ServiceResponse<()> {\n        let stx: SignedTransaction = {\n            let boxed = unsafe { Box::from_raw(payload.ptr as *mut SignedTransaction) };\n            *boxed\n        };\n\n        self.check_authorization(ctx, stx)\n    }\n\n    #[cycles(21_000)]\n    #[read]\n    fn check_authorization(\n        &self,\n        ctx: ServiceContext,\n        payload: SignedTransaction,\n    ) -> ServiceResponse<()> {\n        let resp = self.multi_sig.verify_signature(ctx, payload);\n        if resp.is_error() {\n            return ServiceResponse::<()>::from_error(\n                102,\n                format!(\n                    \"verify transaction signature error {:?}\",\n                    resp.error_message\n                ),\n            );\n        }\n\n        ServiceResponse::from_succeed(())\n    }\n}\n"
  },
  {
    "path": "built-in-services/metadata/Cargo.toml",
    "content": "[package]\nname = \"metadata\"\nversion = \"0.2.1\"\nauthors = [\"Muta Dev <muta@nervos.org>\"]\nedition = \"2018\"\nrepository = \"https://github.com/nervosnetwork/muta\"\n\n# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html\n\n[dependencies]\nbinding-macro = { path = \"../../binding-macro\" }\nprotocol = { path = \"../../protocol\", package = \"muta-protocol\" }\n\nserde = { version = \"1.0\", features = [\"derive\"] }\nserde_json = \"1.0\"\nrlp = \"0.4\"\nbytes = \"0.5\"\nderive_more = \"0.99\"\nbyteorder = \"1.3\"\n\n[dev-dependencies]\nhex = \"0.4\"\ncita_trie = \"2.0\"\nasync-trait = \"0.1\"\nframework = { path = \"../../framework\" }\n"
  },
  {
    "path": "built-in-services/metadata/src/lib.rs",
    "content": "#[cfg(test)]\nmod tests;\n\nuse binding_macro::{cycles, genesis, service};\nuse protocol::traits::{ExecutorParams, ServiceResponse, ServiceSDK};\nuse protocol::types::{Metadata, ServiceContext, METADATA_KEY};\n\npub const METADATA_SERVICE_NAME: &str = \"metadata\";\n\npub trait MetaData {\n    fn get_(&self, ctx: &ServiceContext) -> ServiceResponse<Metadata>;\n}\n\npub struct MetadataService<SDK> {\n    sdk: SDK,\n}\n\nimpl<SDK: ServiceSDK> MetaData for MetadataService<SDK> {\n    fn get_(&self, ctx: &ServiceContext) -> ServiceResponse<Metadata> {\n        self.get_metadata(ctx.clone())\n    }\n}\n\n#[service]\nimpl<SDK: ServiceSDK> MetadataService<SDK> {\n    pub fn new(sdk: SDK) -> Self {\n        Self { sdk }\n    }\n\n    #[genesis]\n    fn init_genesis(&mut self, metadata: Metadata) {\n        self.sdk.set_value(METADATA_KEY.to_string(), metadata)\n    }\n\n    #[cycles(21_000)]\n    #[read]\n    fn get_metadata(&self, ctx: ServiceContext) -> ServiceResponse<Metadata> {\n        let metadata: Metadata = self\n            .sdk\n            .get_value(&METADATA_KEY.to_owned())\n            .expect(\"metadata should not be none\");\n        ServiceResponse::<Metadata>::from_succeed(metadata)\n    }\n}\n"
  },
  {
    "path": "built-in-services/metadata/src/tests/mod.rs",
    "content": "use std::cell::RefCell;\nuse std::rc::Rc;\nuse std::str::FromStr;\nuse std::sync::Arc;\n\nuse async_trait::async_trait;\nuse cita_trie::MemoryDB;\n\nuse framework::binding::sdk::{DefaultChainQuerier, DefaultServiceSDK};\nuse framework::binding::state::{GeneralServiceState, MPTTrie};\nuse protocol::traits::{CommonStorage, Context, ServiceSDK, Storage};\nuse protocol::types::{\n    Address, Block, BlockHeader, Hash, Hex, Metadata, Proof, Receipt, ServiceContext,\n    ServiceContextParams, SignedTransaction, ValidatorExtend, METADATA_KEY,\n};\nuse protocol::{types::Bytes, ProtocolResult};\n\nuse crate::MetadataService;\n\n#[test]\nfn test_get_metadata() {\n    let cycles_limit = 1024 * 1024 * 1024; // 1073741824\n    let caller = Address::from_str(\"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\").unwrap();\n    let context = mock_context(cycles_limit, caller);\n\n    let init_metadata = mock_metadata();\n\n    let service = new_metadata_service_with_metadata(init_metadata.clone());\n    let metadata = service.get_metadata(context).succeed_data;\n\n    assert_eq!(metadata, init_metadata);\n}\n\nfn new_metadata_service_with_metadata(\n    metadata: Metadata,\n) -> MetadataService<\n    DefaultServiceSDK<GeneralServiceState<MemoryDB>, DefaultChainQuerier<MockStorage>>,\n> {\n    let chain_db = DefaultChainQuerier::new(Arc::new(MockStorage {}));\n    let trie = MPTTrie::new(Arc::new(MemoryDB::new(false)));\n    let state = GeneralServiceState::new(trie);\n\n    let mut sdk = DefaultServiceSDK::new(Rc::new(RefCell::new(state)), Rc::new(chain_db));\n\n    sdk.set_value(METADATA_KEY.to_string(), metadata);\n\n    MetadataService::new(sdk)\n}\n\nfn mock_metadata() -> Metadata {\n    Metadata {\n        chain_id:        Hash::digest(Bytes::from(\"test\")),\n        bech32_address_hrp: \"muta\".to_owned(),\n        common_ref:      Hex::from_string(\"0x703873635a6b51513451\".to_string()).unwrap(),\n        timeout_gap:     20,\n        cycles_limit:    
99_999_999,\n        cycles_price:    1,\n        interval:        3000,\n        verifier_list:   [ValidatorExtend {\n            bls_pub_key: Hex::from_string(\"0x04102947214862a503c73904deb5818298a186d68c7907bb609583192a7de6331493835e5b8281f4d9ee705537c0e765580e06f86ddce5867812fceb42eecefd209f0eddd0389d6b7b0100f00fb119ef9ab23826c6ea09aadcc76fa6cea6a32724\".to_owned()).unwrap(),\n            pub_key:     Hex::from_string(\"0x02ef0cb0d7bc6c18b4bea1f5908d9106522b35ab3c399369605d4242525bda7e60\".to_owned()).unwrap(),\n            address:     Address::from_str(\"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\").unwrap(),\n            propose_weight: 1,\n            vote_weight:    1,\n        }]\n        .to_vec(),\n        propose_ratio:   10,\n        prevote_ratio:   10,\n        precommit_ratio: 10,\n        brake_ratio:     7,\n        tx_num_limit:    20000,\n        max_tx_size:     1_073_741_824,\n    }\n}\n\nfn mock_context(cycles_limit: u64, caller: Address) -> ServiceContext {\n    let params = ServiceContextParams {\n        tx_hash: None,\n        nonce: None,\n        cycles_limit,\n        cycles_price: 1,\n        cycles_used: Rc::new(RefCell::new(0)),\n        caller,\n        height: 1,\n        timestamp: 0,\n        service_name: \"service_name\".to_owned(),\n        service_method: \"service_method\".to_owned(),\n        service_payload: \"service_payload\".to_owned(),\n        extra: None,\n        events: Rc::new(RefCell::new(vec![])),\n    };\n\n    ServiceContext::new(params)\n}\n\nstruct MockStorage;\n\n#[async_trait]\nimpl CommonStorage for MockStorage {\n    async fn insert_block(&self, _ctx: Context, _block: Block) -> ProtocolResult<()> {\n        unimplemented!()\n    }\n\n    async fn get_block(&self, _ctx: Context, _height: u64) -> ProtocolResult<Option<Block>> {\n        unimplemented!()\n    }\n\n    async fn set_block(&self, _ctx: Context, _block: Block) -> ProtocolResult<()> {\n        unimplemented!()\n    }\n\n    async fn remove_block(&self, 
_ctx: Context, _height: u64) -> ProtocolResult<()> {\n        unimplemented!()\n    }\n\n    async fn get_latest_block(&self, _ctx: Context) -> ProtocolResult<Block> {\n        unimplemented!()\n    }\n\n    async fn set_latest_block(&self, _ctx: Context, _block: Block) -> ProtocolResult<()> {\n        unimplemented!()\n    }\n\n    async fn get_latest_block_header(&self, _ctx: Context) -> ProtocolResult<BlockHeader> {\n        unimplemented!()\n    }\n\n    async fn get_block_header(\n        &self,\n        _ctx: Context,\n        _height: u64,\n    ) -> ProtocolResult<Option<BlockHeader>> {\n        unimplemented!()\n    }\n}\n\n#[async_trait]\nimpl Storage for MockStorage {\n    async fn insert_transactions(\n        &self,\n        _ctx: Context,\n        _: u64,\n        _: Vec<SignedTransaction>,\n    ) -> ProtocolResult<()> {\n        unimplemented!()\n    }\n\n    async fn insert_receipts(&self, _ctx: Context, _: u64, _: Vec<Receipt>) -> ProtocolResult<()> {\n        unimplemented!()\n    }\n\n    async fn update_latest_proof(&self, _ctx: Context, _: Proof) -> ProtocolResult<()> {\n        unimplemented!()\n    }\n\n    async fn get_transaction_by_hash(\n        &self,\n        _ctx: Context,\n        _: &Hash,\n    ) -> ProtocolResult<Option<SignedTransaction>> {\n        unimplemented!()\n    }\n\n    async fn get_transactions(\n        &self,\n        _ctx: Context,\n        _: u64,\n        _: &[Hash],\n    ) -> ProtocolResult<Vec<Option<SignedTransaction>>> {\n        unimplemented!()\n    }\n\n    async fn get_receipt_by_hash(&self, _ctx: Context, _: Hash) -> ProtocolResult<Option<Receipt>> {\n        unimplemented!()\n    }\n\n    async fn get_receipts(\n        &self,\n        _ctx: Context,\n        _: u64,\n        _: Vec<Hash>,\n    ) -> ProtocolResult<Vec<Option<Receipt>>> {\n        unimplemented!()\n    }\n\n    async fn get_latest_proof(&self, _ctx: Context) -> ProtocolResult<Proof> {\n        unimplemented!()\n    }\n}\n"
  },
  {
    "path": "built-in-services/multi-signature/Cargo.toml",
    "content": "[package]\nname = \"multi-signature\"\nversion = \"0.2.1\"\nauthors = [\"Muta Dev <muta@nervos.org>\"]\nedition = \"2018\"\nrepository = \"https://github.com/nervosnetwork/muta\"\n\n\n# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html\n\n[dependencies]\nbinding-macro = { path = \"../../binding-macro\" }\nbyteorder = \"1.3\"\ncommon-crypto = { path = \"../../common/crypto\" }\nderive_more = \"0.99\"\nhasher = { version=\"0.1\", features = [\"hash-keccak\"] }\nhex = \"0.4\"\nlazy_static = \"1.4\"\nmuta-codec-derive = \"0.2\"\nprotocol = { path = \"../../protocol\", package = \"muta-protocol\" }\nrand = \"0.7\"\nrlp = \"0.4\"\nserde = { version = \"1.0\", features = [\"derive\"] }\nserde_json = \"1.0\"\n\n[dev-dependencies]\nasync-trait = \"0.1\"\ncita_trie = \"2.0\"\nframework = { path = \"../../framework\" }\n"
  },
  {
    "path": "built-in-services/multi-signature/src/lib.rs",
    "content": "#![allow(clippy::suspicious_else_formatting, clippy::mutable_key_type)]\n\n#[cfg(test)]\nmod tests;\npub mod types;\n\nuse std::collections::HashMap;\n\nuse binding_macro::{cycles, genesis, service};\nuse derive_more::Display;\nuse rlp::{Decodable, Rlp};\n\nuse common_crypto::{Crypto, Secp256k1};\nuse protocol::traits::{ExecutorParams, ServiceResponse, ServiceSDK};\nuse protocol::types::{Address, Bytes, Hash, ServiceContext, SignedTransaction};\n\nuse crate::types::{\n    Account, AddAccountPayload, ChangeMemoPayload, ChangeOwnerPayload,\n    GenerateMultiSigAccountPayload, GenerateMultiSigAccountResponse, GetMultiSigAccountPayload,\n    GetMultiSigAccountResponse, InitGenesisPayload, MultiSigPermission, RemoveAccountPayload,\n    RemoveAccountResult, SetAccountWeightPayload, SetThresholdPayload, SetWeightResult,\n    UpdateAccountPayload, VerifySignaturePayload, Witness,\n};\n\npub const MULTI_SIG_SERVICE_NAME: &str = \"multi_signature\";\nconst MAX_MULTI_SIGNATURE_RECURSION_DEPTH: u8 = 8;\nconst MAX_PERMISSION_ACCOUNTS: u8 = 16;\n\npub trait MultiSignature {\n    fn verify_signature_(\n        &self,\n        ctx: &ServiceContext,\n        payload: SignedTransaction,\n    ) -> ServiceResponse<()>;\n\n    fn generate_account_(\n        &mut self,\n        ctx: &ServiceContext,\n        payload: GenerateMultiSigAccountPayload,\n    ) -> ServiceResponse<GenerateMultiSigAccountResponse>;\n}\n\npub struct MultiSignatureService<SDK> {\n    sdk: SDK,\n}\n\nimpl<SDK: ServiceSDK> MultiSignature for MultiSignatureService<SDK> {\n    fn verify_signature_(\n        &self,\n        ctx: &ServiceContext,\n        payload: SignedTransaction,\n    ) -> ServiceResponse<()> {\n        self.verify_signature(ctx.clone(), payload)\n    }\n\n    fn generate_account_(\n        &mut self,\n        ctx: &ServiceContext,\n        payload: GenerateMultiSigAccountPayload,\n    ) -> ServiceResponse<GenerateMultiSigAccountResponse> {\n        self.generate_account(ctx.clone(), 
payload)\n    }\n}\n\n#[service]\nimpl<SDK: ServiceSDK> MultiSignatureService<SDK> {\n    pub fn new(sdk: SDK) -> Self {\n        MultiSignatureService { sdk }\n    }\n\n    #[genesis]\n    fn init_genesis(&mut self, payload: InitGenesisPayload) {\n        if payload.addr_with_weight.is_empty()\n            || payload.addr_with_weight.len() > MAX_PERMISSION_ACCOUNTS as usize\n        {\n            panic!(\"Invalid account number\");\n        }\n\n        let weight_sum = payload\n            .addr_with_weight\n            .iter()\n            .map(|item| item.weight as u32)\n            .sum::<u32>();\n\n        if payload.threshold == 0 || weight_sum < payload.threshold {\n            panic!(\"Invalid threshold or weights\");\n        }\n\n        let address = payload.address.clone();\n        let accounts = payload\n            .addr_with_weight\n            .iter()\n            .map(|item| Account {\n                address:     item.address.clone(),\n                weight:      item.weight,\n                is_multiple: false,\n            })\n            .collect::<Vec<_>>();\n\n        let permission = MultiSigPermission {\n            accounts,\n            owner: payload.owner,\n            threshold: payload.threshold,\n            memo: payload.memo,\n        };\n\n        self.sdk.set_account_value(&address, 0u8, permission);\n    }\n\n    #[cycles(21_000)]\n    #[write]\n    fn generate_account(\n        &mut self,\n        ctx: ServiceContext,\n        payload: GenerateMultiSigAccountPayload,\n    ) -> ServiceResponse<GenerateMultiSigAccountResponse> {\n        if payload.addr_with_weight.is_empty()\n            || payload.addr_with_weight.len() > MAX_PERMISSION_ACCOUNTS as usize\n        {\n            return ServiceError::InvalidAccountLength.into();\n        }\n\n        let weight_sum = payload\n            .addr_with_weight\n            .iter()\n            .map(|item| item.weight as u32)\n            .sum::<u32>();\n\n        if 
payload.threshold == 0 || weight_sum < payload.threshold {\n            return ServiceError::InvalidAccountWeights.into();\n        }\n\n        // check the recursion depth\n        if payload\n            .addr_with_weight\n            .iter()\n            .map(|s| self._is_recursion_depth_overflow(&s.address, 0))\n            .any(|res| res)\n        {\n            return ServiceError::AboveMaxRecursionDepth.into();\n        }\n\n        let tx_hash = match ctx.get_tx_hash() {\n            Some(hash) => hash,\n            None => return ServiceError::CtxMissingTxHash.into(),\n        };\n\n        if let Ok(address) = Address::from_hash(Hash::digest(tx_hash.as_bytes())) {\n            let accounts = payload\n                .addr_with_weight\n                .iter()\n                .map(|item| Account {\n                    address:     item.address.clone(),\n                    weight:      item.weight,\n                    is_multiple: !self\n                        .get_account_from_address(ctx.clone(), GetMultiSigAccountPayload {\n                            multi_sig_address: item.address.clone(),\n                        })\n                        .is_error(),\n                })\n                .collect::<Vec<_>>();\n\n            let owner = if payload.autonomy {\n                address.clone()\n            } else {\n                payload.owner.clone()\n            };\n\n            let permission = MultiSigPermission {\n                accounts,\n                owner,\n                threshold: payload.threshold,\n                memo: payload.memo,\n            };\n\n            self.sdk.set_account_value(&address, 0u8, permission);\n            ServiceResponse::<GenerateMultiSigAccountResponse>::from_succeed(\n                GenerateMultiSigAccountResponse { address },\n            )\n        } else {\n            ServiceError::GenerateAddressFailed.into()\n        }\n    }\n\n    #[cycles(10_000)]\n    #[read]\n    fn 
get_account_from_address(\n        &self,\n        _ctx: ServiceContext,\n        payload: GetMultiSigAccountPayload,\n    ) -> ServiceResponse<GetMultiSigAccountResponse> {\n        if let Some(permission) = self.sdk.get_account_value(&payload.multi_sig_address, &0u8) {\n            ServiceResponse::<GetMultiSigAccountResponse>::from_succeed(\n                GetMultiSigAccountResponse { permission },\n            )\n        } else {\n            ServiceError::AccountNotExsit.into()\n        }\n    }\n\n    #[cycles(21_000)]\n    #[read]\n    pub fn verify_signature(\n        &self,\n        ctx: ServiceContext,\n        payload: SignedTransaction,\n    ) -> ServiceResponse<()> {\n        let pubkeys = match decode_list::<Vec<u8>>(&payload.pubkey, \"public key\") {\n            Ok(pks) => pks,\n            Err(err) => return err.into(),\n        };\n\n        let sigs = match decode_list::<Vec<u8>>(&payload.signature, \"signature\") {\n            Ok(sig) => sig,\n            Err(err) => return err.into(),\n        };\n\n        self._inner_verify_signature(VerifySignaturePayload {\n            tx_hash:    payload.tx_hash,\n            pubkeys:    pubkeys.into_iter().map(Bytes::from).collect::<Vec<_>>(),\n            signatures: sigs.into_iter().map(Bytes::from).collect::<Vec<_>>(),\n            sender:     payload.raw.sender,\n        })\n    }\n\n    #[cycles(21_000)]\n    #[write]\n    fn update_account(\n        &mut self,\n        ctx: ServiceContext,\n        payload: UpdateAccountPayload,\n    ) -> ServiceResponse<()> {\n        if let Some(permission) = self\n            .sdk\n            .get_account_value::<_, MultiSigPermission>(&payload.account_address, &0u8)\n        {\n            // check owner address\n            if ctx.get_caller() != permission.owner {\n                return ServiceError::InvalidOwner.into();\n            }\n\n            // check if account contains itself\n            if payload\n                .addr_with_weight\n            
    .iter()\n                .map(|a| a.address.clone())\n                .any(|addr| addr == payload.account_address)\n            {\n                return ServiceError::AccountSelfContained.into();\n            }\n\n            // check the account count\n            if payload.addr_with_weight.is_empty()\n                || payload.addr_with_weight.len() > MAX_PERMISSION_ACCOUNTS as usize\n            {\n                return ServiceError::InvalidAccountLength.into();\n            }\n\n            let weight_sum = payload\n                .addr_with_weight\n                .iter()\n                .map(|item| item.weight as u32)\n                .sum::<u32>();\n\n            // check that the threshold is nonzero and not above the sum of the weights\n            if payload.threshold == 0 || weight_sum < payload.threshold {\n                return ServiceError::InvalidAccountWeights.into();\n            }\n\n            // check the recursion depth\n            if payload\n                .addr_with_weight\n                .iter()\n                .map(|s| self._is_recursion_depth_overflow(&s.address, 0))\n                .any(|res| res)\n            {\n                return ServiceError::AboveMaxRecursionDepth.into();\n            }\n\n            let accounts = payload\n                .addr_with_weight\n                .iter()\n                .map(|item| Account {\n                    address:     item.address.clone(),\n                    weight:      item.weight,\n                    is_multiple: !self\n                        .get_account_from_address(ctx.clone(), GetMultiSigAccountPayload {\n                            multi_sig_address: item.address.clone(),\n                        })\n                        .is_error(),\n                })\n                .collect::<Vec<_>>();\n\n            self.sdk\n                .set_account_value(&payload.account_address, 0u8, MultiSigPermission {\n                    accounts,\n                    owner: payload.owner,\n                 
   threshold: payload.threshold,\n                    memo: payload.memo,\n                });\n            return ServiceResponse::<()>::from_succeed(());\n        }\n\n        ServiceError::AccountNotExsit.into()\n    }\n\n    #[cycles(21_000)]\n    #[write]\n    fn change_owner(\n        &mut self,\n        ctx: ServiceContext,\n        payload: ChangeOwnerPayload,\n    ) -> ServiceResponse<()> {\n        if let Some(mut permission) = self\n            .sdk\n            .get_account_value::<_, MultiSigPermission>(&payload.multi_sig_address, &0u8)\n        {\n            // check owner address\n            if ctx.get_caller() != permission.owner {\n                return ServiceError::InvalidOwner.into();\n            }\n\n            // check new owner's recursion depth\n            if self._is_recursion_depth_overflow(&payload.new_owner, 0) {\n                return ServiceError::AboveMaxRecursionDepth.into();\n            }\n\n            permission.set_owner(payload.new_owner);\n            self.sdk\n                .set_account_value(&payload.multi_sig_address, 0u8, permission);\n            ServiceResponse::<()>::from_succeed(())\n        } else {\n            ServiceError::AccountNotExsit.into()\n        }\n    }\n\n    #[cycles(21_000)]\n    #[write]\n    fn change_memo(\n        &mut self,\n        ctx: ServiceContext,\n        payload: ChangeMemoPayload,\n    ) -> ServiceResponse<()> {\n        if let Some(mut permission) = self\n            .sdk\n            .get_account_value::<_, MultiSigPermission>(&payload.multi_sig_address, &0u8)\n        {\n            // check owner address\n            if ctx.get_caller() != permission.owner {\n                return ServiceError::InvalidOwner.into();\n            }\n\n            permission.set_memo(payload.new_memo);\n            self.sdk\n                .set_account_value(&payload.multi_sig_address, 0u8, permission);\n            ServiceResponse::<()>::from_succeed(())\n        } else {\n            
ServiceError::AccountNotExsit.into()\n        }\n    }\n\n    #[cycles(21_000)]\n    #[write]\n    fn add_account(\n        &mut self,\n        ctx: ServiceContext,\n        payload: AddAccountPayload,\n    ) -> ServiceResponse<()> {\n        if let Some(mut permission) = self\n            .sdk\n            .get_account_value::<_, MultiSigPermission>(&payload.multi_sig_address, &0u8)\n        {\n            // check owner address\n            if ctx.get_caller() != permission.owner {\n                return ServiceError::InvalidOwner.into();\n            }\n\n            // check whether the account count has reached the max value\n            if permission.accounts.len() == MAX_PERMISSION_ACCOUNTS as usize {\n                return ServiceError::AccountCountReachMaxValue.into();\n            }\n\n            // check whether the new account exceeds the max recursion depth\n            if self._is_recursion_depth_overflow(&payload.new_account.address, 1) {\n                return ServiceError::AboveMaxRecursionDepth.into();\n            }\n\n            permission.add_account(payload.new_account.clone());\n            self.sdk\n                .set_account_value(&payload.multi_sig_address, 0u8, permission);\n\n            ServiceResponse::<()>::from_succeed(())\n        } else {\n            ServiceError::AccountNotExsit.into()\n        }\n    }\n\n    #[cycles(21_000)]\n    #[write]\n    fn remove_account(\n        &mut self,\n        ctx: ServiceContext,\n        payload: RemoveAccountPayload,\n    ) -> ServiceResponse<Account> {\n        if let Some(mut permission) = self\n            .sdk\n            .get_account_value::<_, MultiSigPermission>(&payload.multi_sig_address, &0u8)\n        {\n            // check owner address\n            if ctx.get_caller() != permission.owner {\n                return ServiceError::InvalidOwner.into();\n            }\n\n            match permission.remove_account(&payload.account_address) {\n                RemoveAccountResult::Success(ret) => {\n           
         self.sdk\n                        .set_account_value(&payload.multi_sig_address, 0u8, permission);\n                    return ServiceResponse::<Account>::from_succeed(ret);\n                }\n                RemoveAccountResult::BelowThreshold => {\n                    return ServiceError::InvalidAccountWeights.into();\n                }\n                _ => (),\n            }\n        }\n        ServiceError::AccountNotExsit.into()\n    }\n\n    #[cycles(21_000)]\n    #[write]\n    fn set_account_weight(\n        &mut self,\n        ctx: ServiceContext,\n        payload: SetAccountWeightPayload,\n    ) -> ServiceResponse<()> {\n        if let Some(mut permission) = self\n            .sdk\n            .get_account_value::<_, MultiSigPermission>(&payload.multi_sig_address, &0u8)\n        {\n            // check owner address\n            if ctx.get_caller() != permission.owner {\n                return ServiceError::InvalidOwner.into();\n            }\n\n            match permission.set_account_weight(&payload.account_address, payload.new_weight) {\n                SetWeightResult::Success => {\n                    self.sdk\n                        .set_account_value(&payload.multi_sig_address, 0u8, permission);\n                    return ServiceResponse::<()>::from_succeed(());\n                }\n                SetWeightResult::InvalidNewWeight => {\n                    return ServiceError::InvalidAccountWeights.into();\n                }\n                _ => (),\n            }\n        }\n        ServiceError::AccountNotExsit.into()\n    }\n\n    #[cycles(21_000)]\n    #[write]\n    fn set_threshold(\n        &mut self,\n        ctx: ServiceContext,\n        payload: SetThresholdPayload,\n    ) -> ServiceResponse<()> {\n        if let Some(mut permission) = self\n            .sdk\n            .get_account_value::<_, MultiSigPermission>(&payload.multi_sig_address, &0u8)\n        {\n            // check owner address\n            if ctx.get_caller() 
!= permission.owner {\n                return ServiceError::InvalidOwner.into();\n            }\n\n            // check new threshold\n            if permission\n                .accounts\n                .iter()\n                .map(|account| account.weight as u32)\n                .sum::<u32>()\n                < payload.new_threshold\n            {\n                return ServiceError::InvalidAccountWeights.into();\n            }\n\n            permission.set_threshold(payload.new_threshold);\n            self.sdk\n                .set_account_value(&payload.multi_sig_address, 0u8, permission);\n            ServiceResponse::<()>::from_succeed(())\n        } else {\n            ServiceError::AccountNotExsit.into()\n        }\n    }\n\n    fn _inner_verify_signature(&self, payload: VerifySignaturePayload) -> ServiceResponse<()> {\n        if payload.pubkeys.len() != payload.signatures.len() {\n            return ServiceError::PubkeyAndSignatureMismatch.into();\n        }\n\n        if payload.pubkeys.len() == 1 {\n            if let Ok(addr) = Address::from_pubkey_bytes(&payload.pubkeys[0]) {\n                if addr == payload.sender {\n                    return self._verify_single_signature(\n                        &payload.tx_hash,\n                        &payload.signatures[0],\n                        &payload.pubkeys[0],\n                    );\n                }\n            } else {\n                return ServiceError::InvalidPublicKey.into();\n            }\n        }\n\n        self._verify_multi_signature(\n            &payload.tx_hash,\n            &Witness::new(payload.pubkeys, payload.signatures).into_addr_map(),\n            &payload.sender,\n            0u8,\n        )\n    }\n\n    fn _verify_multi_signature(\n        &self,\n        tx_hash: &Hash,\n        wit_map: &HashMap<Address, (Bytes, Bytes)>,\n        sender: &Address,\n        recursion_depth: u8,\n    ) -> ServiceResponse<()> {\n        // use local variable to do DFS\n        let 
depth_clone = recursion_depth + 1;\n\n        // check recursion depth\n        if depth_clone >= MAX_MULTI_SIGNATURE_RECURSION_DEPTH {\n            return ServiceError::AboveMaxRecursionDepth.into();\n        }\n\n        let mut weight_acc = 0u32;\n\n        let permission = self\n            .sdk\n            .get_account_value::<_, MultiSigPermission>(sender, &0u8);\n        if permission.is_none() {\n            return ServiceError::AccountNotExsit.into();\n        }\n        let permission = permission.unwrap();\n\n        for account in permission.accounts.iter() {\n            if !account.is_multiple {\n                if let Some((pk, sig)) = wit_map.get(&account.address) {\n                    if !self._verify_single_signature(tx_hash, sig, pk).is_error() {\n                        weight_acc += account.weight as u32;\n                    }\n                }\n            } else if !self\n                ._verify_multi_signature(tx_hash, wit_map, &account.address, depth_clone)\n                .is_error()\n            {\n                weight_acc += account.weight as u32;\n            }\n\n            if weight_acc >= permission.threshold {\n                return ServiceResponse::<()>::from_succeed(());\n            }\n        }\n\n        ServiceError::VerifyMultiSignatureFailed.into()\n    }\n\n    fn _verify_single_signature(\n        &self,\n        tx_hash: &Hash,\n        sig: &Bytes,\n        pubkey: &Bytes,\n    ) -> ServiceResponse<()> {\n        if Secp256k1::verify_signature(tx_hash.as_slice(), sig.as_ref(), pubkey.as_ref()).is_ok() {\n            ServiceResponse::<()>::from_succeed(())\n        } else {\n            ServiceError::VerifyMultiSignatureFailed.into()\n        }\n    }\n\n    fn _is_recursion_depth_overflow(&self, address: &Address, recursion_depth: u8) -> bool {\n        let depth_clone = recursion_depth + 1;\n        if depth_clone >= MAX_MULTI_SIGNATURE_RECURSION_DEPTH {\n            return true;\n        }\n\n        if let 
Some(permission) = self\n            .sdk\n            .get_account_value::<_, MultiSigPermission>(address, &0u8)\n        {\n            permission\n                .accounts\n                .iter()\n                .filter(|account| account.is_multiple)\n                .map(|account| self._is_recursion_depth_overflow(&account.address, depth_clone))\n                .any(|overflow| overflow)\n        } else {\n            false\n        }\n    }\n}\n\n#[derive(Debug, Display)]\npub enum ServiceError {\n    #[display(fmt = \"Decode {:?} error\", _0)]\n    DecodeErr(String),\n\n    #[display(fmt = \"accounts length must be [1,16]\")]\n    InvalidAccountLength,\n\n    #[display(fmt = \"accounts weight or threshold not valid\")]\n    InvalidAccountWeights,\n\n    #[display(fmt = \"above max recursion depth\")]\n    AboveMaxRecursionDepth,\n\n    #[display(fmt = \"Cannot get tx hash from service context\")]\n    CtxMissingTxHash,\n\n    #[display(fmt = \"generate address from tx_hash failed\")]\n    GenerateAddressFailed,\n\n    #[display(fmt = \"account does not exist\")]\n    AccountNotExsit,\n\n    #[display(fmt = \"invalid owner\")]\n    InvalidOwner,\n\n    #[display(fmt = \"account cannot contain itself\")]\n    AccountSelfContained,\n\n    #[display(fmt = \"the account count reach max value\")]\n    AccountCountReachMaxValue,\n\n    #[display(fmt = \"pubkeys len is not equal to signatures len\")]\n    PubkeyAndSignatureMismatch,\n\n    #[display(fmt = \"invalid public key\")]\n    InvalidPublicKey,\n\n    #[display(fmt = \"multi signature verification failed\")]\n    VerifyMultiSignatureFailed,\n}\n\nimpl ServiceError {\n    fn code(&self) -> u64 {\n        match self {\n            ServiceError::DecodeErr(_) => 101,\n            ServiceError::InvalidAccountLength => 102,\n            ServiceError::InvalidAccountWeights => 103,\n            ServiceError::AboveMaxRecursionDepth => 104,\n            ServiceError::CtxMissingTxHash => 105,\n            
ServiceError::GenerateAddressFailed => 106,\n            ServiceError::AccountNotExsit => 107,\n            ServiceError::InvalidOwner => 108,\n            ServiceError::AccountSelfContained => 109,\n            ServiceError::AccountCountReachMaxValue => 110,\n            ServiceError::PubkeyAndSignatureMismatch => 111,\n            ServiceError::InvalidPublicKey => 112,\n            ServiceError::VerifyMultiSignatureFailed => 113,\n        }\n    }\n}\n\nimpl<T: Default> From<ServiceError> for ServiceResponse<T> {\n    fn from(err: ServiceError) -> ServiceResponse<T> {\n        ServiceResponse::from_error(err.code(), err.to_string())\n    }\n}\n\nfn decode_list<T: Decodable>(bytes: &[u8], ty: &str) -> Result<Vec<T>, ServiceError> {\n    Rlp::new(bytes)\n        .as_list()\n        .map_err(|_| ServiceError::DecodeErr(ty.to_string()))\n}\n"
  },
  {
    "path": "built-in-services/multi-signature/src/tests/curd_test.rs",
    "content": "use std::str::FromStr;\n\nuse crate::types::{\n    AddAccountPayload, GenerateMultiSigAccountPayload, GetMultiSigAccountPayload,\n    MultiSigPermission, RemoveAccountPayload, SetAccountWeightPayload, SetThresholdPayload,\n    UpdateAccountPayload,\n};\n\nuse super::*;\n\n#[test]\nfn test_generate_multi_signature() {\n    let cycles_limit = 1024 * 1024 * 1024; // 1073741824\n    let caller = Address::from_str(\"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\").unwrap();\n    let context = mock_context(cycles_limit, caller);\n\n    let mut service = new_multi_signature_service();\n    let owner = Address::from_pubkey_bytes(gen_one_keypair().1).unwrap();\n\n    // test permission accounts above the max value\n    let accounts = gen_keypairs(17)\n        .iter()\n        .map(|pair| to_multi_sig_account(pair.1.clone()))\n        .collect::<Vec<_>>();\n    let multi_sig_address =\n        service.generate_account(context.clone(), GenerateMultiSigAccountPayload {\n            owner:            owner.clone(),\n            autonomy:         false,\n            addr_with_weight: accounts,\n            threshold:        12,\n            memo:             String::new(),\n        });\n    assert!(multi_sig_address.is_error());\n\n    // test the threshold larger than the sum of weights\n    let accounts = gen_keypairs(4)\n        .iter()\n        .map(|pair| to_multi_sig_account(pair.1.clone()))\n        .collect::<Vec<_>>();\n    let multi_sig_address =\n        service.generate_account(context.clone(), GenerateMultiSigAccountPayload {\n            owner:            owner.clone(),\n            autonomy:         false,\n            addr_with_weight: accounts,\n            threshold:        12,\n            memo:             String::new(),\n        });\n    assert!(multi_sig_address.is_error());\n\n    // test generate a multi-signature address\n    let accounts = gen_keypairs(4)\n        .iter()\n        .map(|pair| to_multi_sig_account(pair.1.clone()))\n        
.collect::<Vec<_>>();\n    let multi_sig_address =\n        service.generate_account(context.clone(), GenerateMultiSigAccountPayload {\n            owner:            owner.clone(),\n            autonomy:         false,\n            addr_with_weight: accounts.clone(),\n            threshold:        3,\n            memo:             String::new(),\n        });\n    assert!(!multi_sig_address.is_error());\n\n    // test get permission by multi-signature address\n    let addr = multi_sig_address.succeed_data;\n    let permission = service.get_account_from_address(context, GetMultiSigAccountPayload {\n        multi_sig_address: addr.address,\n    });\n    assert!(!permission.is_error());\n    assert_eq!(permission.succeed_data.permission, MultiSigPermission {\n        owner,\n        accounts: to_accounts_list(accounts),\n        threshold: 3,\n        memo: String::new(),\n    });\n}\n\n#[test]\nfn test_set_threshold() {\n    let cycles_limit = 1024 * 1024 * 1024; // 1073741824\n    let mut service = new_multi_signature_service();\n    let owner = gen_one_keypair();\n    let owner_address = Address::from_pubkey_bytes(owner.1).unwrap();\n    let context = mock_context(cycles_limit, owner_address.clone());\n    let keypairs = gen_keypairs(4);\n    let account_pubkeys = keypairs\n        .iter()\n        .map(|pair| to_multi_sig_account(pair.1.clone()))\n        .collect::<Vec<_>>();\n    let multi_sig_address = service\n        .generate_account(context.clone(), GenerateMultiSigAccountPayload {\n            owner:            owner_address,\n            autonomy:         false,\n            addr_with_weight: account_pubkeys,\n            threshold:        3,\n            memo:             String::new(),\n        })\n        .succeed_data\n        .address;\n\n    // test new threshold above sum of the weights\n    let res = service.set_threshold(context.clone(), SetThresholdPayload {\n        multi_sig_address: multi_sig_address.clone(),\n        new_threshold:     5,\n   
 });\n    assert_eq!(\n        res.error_message,\n        \"accounts weight or threshold not valid\".to_owned()\n    );\n\n    // test set new threshold success\n    let res = service.set_threshold(context, SetThresholdPayload {\n        multi_sig_address,\n        new_threshold: 2,\n    });\n    assert_eq!(res.error_message, \"\".to_owned());\n}\n\n#[test]\nfn test_autonomy_address() {\n    let cycles_limit = 1024 * 1024 * 1024; // 1073741824\n    let mut service = new_multi_signature_service();\n    let owner = gen_one_keypair();\n    let owner_address = Address::from_pubkey_bytes(owner.1).unwrap();\n    let context = mock_context(cycles_limit, owner_address);\n    let keypairs = gen_keypairs(15);\n    let account_pubkeys = keypairs\n        .iter()\n        .map(|pair| to_multi_sig_account(pair.1.clone()))\n        .collect::<Vec<_>>();\n    let multi_sig_address = service\n        .generate_account(context.clone(), GenerateMultiSigAccountPayload {\n            owner:            Address::default(),\n            autonomy:         true,\n            addr_with_weight: account_pubkeys,\n            threshold:        3,\n            memo:             String::new(),\n        })\n        .succeed_data\n        .address;\n\n    let permission = service.get_account_from_address(context, GetMultiSigAccountPayload {\n        multi_sig_address: multi_sig_address.clone(),\n    });\n    assert_eq!(multi_sig_address, permission.succeed_data.permission.owner);\n}\n\n#[test]\nfn test_add_account() {\n    let cycles_limit = 1024 * 1024 * 1024; // 1073741824\n    let mut service = new_multi_signature_service();\n    let owner = gen_one_keypair();\n    let owner_address = Address::from_pubkey_bytes(owner.1).unwrap();\n    let context = mock_context(cycles_limit, owner_address.clone());\n    let keypairs = gen_keypairs(15);\n    let mut account_pubkeys = keypairs\n        .iter()\n        .map(|pair| to_multi_sig_account(pair.1.clone()))\n        .collect::<Vec<_>>();\n    let 
multi_sig_address = service\n        .generate_account(context.clone(), GenerateMultiSigAccountPayload {\n            owner:            owner_address.clone(),\n            autonomy:         false,\n            addr_with_weight: account_pubkeys.clone(),\n            threshold:        3,\n            memo:             String::new(),\n        })\n        .succeed_data\n        .address;\n\n    // test add new account success\n    let new_keypair = gen_one_keypair();\n    account_pubkeys.push(to_multi_sig_account(new_keypair.1.clone()));\n    let res = service.add_account(context.clone(), AddAccountPayload {\n        multi_sig_address: multi_sig_address.clone(),\n        new_account:       to_multi_sig_account(new_keypair.1).into_signle_account(),\n    });\n    assert_eq!(res.error_message, \"\".to_owned());\n\n    // test that adding an account beyond the max count fails\n    let new_keypair = gen_one_keypair();\n    let res = service.add_account(context.clone(), AddAccountPayload {\n        multi_sig_address: multi_sig_address.clone(),\n        new_account:       to_multi_sig_account(new_keypair.1).into_signle_account(),\n    });\n    assert_eq!(\n        res.error_message,\n        \"the account count reach max value\".to_owned()\n    );\n\n    // test get permission after adding a new account\n    let permission =\n        service.get_account_from_address(context, GetMultiSigAccountPayload { multi_sig_address });\n    assert_eq!(permission.succeed_data.permission, MultiSigPermission {\n        owner:     owner_address,\n        accounts:  to_accounts_list(account_pubkeys),\n        threshold: 3,\n        memo:      String::new(),\n    });\n}\n\n#[test]\nfn test_update_account() {\n    let cycles_limit = 1024 * 1024 * 1024; // 1073741824\n    let mut service = new_multi_signature_service();\n    let owner = gen_one_keypair();\n    let owner_address = Address::from_pubkey_bytes(owner.1).unwrap();\n    let context = mock_context(cycles_limit, owner_address.clone());\n    let 
keypairs = gen_keypairs(4);\n    let account_pubkeys = keypairs\n        .iter()\n        .map(|pair| to_multi_sig_account(pair.1.clone()))\n        .collect::<Vec<_>>();\n    let multi_sig_address = service\n        .generate_account(context, GenerateMultiSigAccountPayload {\n            owner:            owner_address.clone(),\n            autonomy:         false,\n            addr_with_weight: account_pubkeys,\n            threshold:        4,\n            memo:             String::new(),\n        })\n        .succeed_data\n        .address;\n\n    let new_owner = gen_one_keypair();\n    let new_owner_address = Address::from_pubkey_bytes(new_owner.1).unwrap();\n    let context = mock_context(cycles_limit, owner_address);\n    let account_pubkeys = vec![AddressWithWeight {\n        address: multi_sig_address.clone(),\n        weight:  1u8,\n    }];\n    // test update with an invalid account list (the multi-sig address itself) fails\n    let res = service.update_account(context.clone(), UpdateAccountPayload {\n        account_address:  multi_sig_address.clone(),\n        owner:            new_owner_address.clone(),\n        addr_with_weight: account_pubkeys,\n        threshold:        1,\n        memo:             String::new(),\n    });\n    assert!(res.is_error());\n\n    // test update with a fresh account list succeeds\n    let keypairs = gen_keypairs(4);\n    let account_pubkeys = keypairs\n        .iter()\n        .map(|pair| to_multi_sig_account(pair.1.clone()))\n        .collect::<Vec<_>>();\n    let res = service.update_account(context, UpdateAccountPayload {\n        account_address:  multi_sig_address,\n        owner:            new_owner_address,\n        addr_with_weight: account_pubkeys,\n        threshold:        1,\n        memo:             String::new(),\n    });\n    assert!(!res.is_error());\n}\n\n#[test]\nfn test_set_weight() {\n    let cycles_limit = 1024 * 1024 * 1024; // 1073741824\n    let mut service = new_multi_signature_service();\n    let owner = gen_one_keypair();\n    let owner_address = Address::from_pubkey_bytes(owner.1).unwrap();\n    let context = 
mock_context(cycles_limit, owner_address.clone());\n    let keypairs = gen_keypairs(4);\n    let mut account_pubkeys = keypairs\n        .iter()\n        .map(|pair| to_multi_sig_account(pair.1.clone()))\n        .collect::<Vec<_>>();\n    let multi_sig_address = service\n        .generate_account(context.clone(), GenerateMultiSigAccountPayload {\n            owner:            owner_address.clone(),\n            autonomy:         false,\n            addr_with_weight: account_pubkeys.clone(),\n            threshold:        4,\n            memo:             String::new(),\n        })\n        .succeed_data\n        .address;\n    let to_be_changed_address = Address::from_pubkey_bytes(keypairs[0].1.clone()).unwrap();\n\n    // test set weight success\n    let res = service.set_account_weight(context.clone(), SetAccountWeightPayload {\n        multi_sig_address: multi_sig_address.clone(),\n        account_address:   to_be_changed_address.clone(),\n        new_weight:        2,\n    });\n    assert_eq!(res.error_message, \"\".to_owned());\n\n    // test set an invalid weight\n    let res = service.set_account_weight(context.clone(), SetAccountWeightPayload {\n        multi_sig_address: multi_sig_address.clone(),\n        account_address:   to_be_changed_address,\n        new_weight:        0,\n    });\n    assert_eq!(\n        res.error_message,\n        \"accounts weight or threshold not valid\".to_owned()\n    );\n\n    // test get permission after setting the account weight\n    let permission =\n        service.get_account_from_address(context, GetMultiSigAccountPayload { multi_sig_address });\n    account_pubkeys[0].weight = 2;\n    assert_eq!(permission.succeed_data.permission, MultiSigPermission {\n        owner:     owner_address,\n        accounts:  to_accounts_list(account_pubkeys),\n        threshold: 4,\n        memo:      String::new(),\n    });\n}\n\n#[test]\nfn test_remove_account() {\n    let cycles_limit = 1024 * 1024 * 1024; // 1073741824\n    let mut service = 
new_multi_signature_service();\n    let owner = gen_one_keypair();\n    let owner_address = Address::from_pubkey_bytes(owner.1).unwrap();\n    let context = mock_context(cycles_limit, owner_address.clone());\n    let keypairs = gen_keypairs(4);\n    let mut account_pubkeys = keypairs\n        .iter()\n        .map(|pair| to_multi_sig_account(pair.1.clone()))\n        .collect::<Vec<_>>();\n    let multi_sig_address = service\n        .generate_account(context.clone(), GenerateMultiSigAccountPayload {\n            owner:            owner_address.clone(),\n            autonomy:         false,\n            addr_with_weight: account_pubkeys.clone(),\n            threshold:        3,\n            memo:             String::new(),\n        })\n        .succeed_data\n        .address;\n    let to_be_removed_address = Address::from_pubkey_bytes(keypairs[3].1.clone()).unwrap();\n\n    let res = service.remove_account(context.clone(), RemoveAccountPayload {\n        multi_sig_address: multi_sig_address.clone(),\n        account_address:   to_be_removed_address,\n    });\n    account_pubkeys.pop();\n    assert!(!res.is_error());\n\n    let to_be_removed_address = Address::from_pubkey_bytes(keypairs[2].1.clone()).unwrap();\n    let res = service.remove_account(context.clone(), RemoveAccountPayload {\n        multi_sig_address: multi_sig_address.clone(),\n        account_address:   to_be_removed_address,\n    });\n\n    assert_eq!(\n        res.error_message,\n        \"accounts weight or threshold not valid\".to_owned()\n    );\n\n    let permission =\n        service.get_account_from_address(context, GetMultiSigAccountPayload { multi_sig_address });\n    assert_eq!(permission.succeed_data.permission, MultiSigPermission {\n        owner:     owner_address,\n        accounts:  to_accounts_list(account_pubkeys),\n        threshold: 3,\n        memo:      String::new(),\n    });\n}\n"
  },
  {
    "path": "built-in-services/multi-signature/src/tests/mod.rs",
    "content": "mod curd_test;\nmod recursion_test;\n\nuse std::cell::RefCell;\nuse std::convert::TryFrom;\nuse std::rc::Rc;\nuse std::sync::Arc;\n\nuse async_trait::async_trait;\nuse cita_trie::MemoryDB;\nuse rand::{random, thread_rng};\n\nuse common_crypto::{\n    HashValue, PrivateKey, PublicKey, Secp256k1PrivateKey, Signature, ToPublicKey,\n};\nuse framework::binding::sdk::{DefaultChainQuerier, DefaultServiceSDK};\nuse framework::binding::state::{GeneralServiceState, MPTTrie};\nuse protocol::traits::{CommonStorage, Context, Storage};\nuse protocol::types::{\n    Address, Block, BlockHeader, Hash, Proof, Receipt, ServiceContext, ServiceContextParams,\n    SignedTransaction,\n};\nuse protocol::{types::Bytes, ProtocolResult};\n\nuse crate::types::{Account, AddressWithWeight, VerifySignaturePayload};\nuse crate::MultiSignatureService;\n\nstruct MockStorage;\n\n#[async_trait]\nimpl CommonStorage for MockStorage {\n    async fn insert_block(&self, _ctx: Context, _block: Block) -> ProtocolResult<()> {\n        unimplemented!()\n    }\n\n    async fn get_block(&self, _ctx: Context, _height: u64) -> ProtocolResult<Option<Block>> {\n        unimplemented!()\n    }\n\n    async fn set_block(&self, _ctx: Context, _block: Block) -> ProtocolResult<()> {\n        unimplemented!()\n    }\n\n    async fn get_block_header(\n        &self,\n        _ctx: Context,\n        _height: u64,\n    ) -> ProtocolResult<Option<BlockHeader>> {\n        unimplemented!()\n    }\n\n    async fn remove_block(&self, _ctx: Context, _height: u64) -> ProtocolResult<()> {\n        unimplemented!()\n    }\n\n    async fn get_latest_block(&self, _ctx: Context) -> ProtocolResult<Block> {\n        unimplemented!()\n    }\n\n    async fn set_latest_block(&self, _ctx: Context, _block: Block) -> ProtocolResult<()> {\n        unimplemented!()\n    }\n\n    async fn get_latest_block_header(&self, _ctx: Context) -> ProtocolResult<BlockHeader> {\n        unimplemented!()\n    }\n}\n\n#[async_trait]\nimpl 
Storage for MockStorage {\n    async fn insert_transactions(\n        &self,\n        _ctx: Context,\n        _: u64,\n        _: Vec<SignedTransaction>,\n    ) -> ProtocolResult<()> {\n        unimplemented!()\n    }\n\n    async fn insert_receipts(&self, _ctx: Context, _: u64, _: Vec<Receipt>) -> ProtocolResult<()> {\n        unimplemented!()\n    }\n\n    async fn update_latest_proof(&self, _ctx: Context, _: Proof) -> ProtocolResult<()> {\n        unimplemented!()\n    }\n\n    async fn get_transaction_by_hash(\n        &self,\n        _ctx: Context,\n        _: &Hash,\n    ) -> ProtocolResult<Option<SignedTransaction>> {\n        unimplemented!()\n    }\n\n    async fn get_transactions(\n        &self,\n        _ctx: Context,\n        _: u64,\n        _: &[Hash],\n    ) -> ProtocolResult<Vec<Option<SignedTransaction>>> {\n        unimplemented!()\n    }\n\n    async fn get_receipt_by_hash(&self, _ctx: Context, _: Hash) -> ProtocolResult<Option<Receipt>> {\n        unimplemented!()\n    }\n\n    async fn get_receipts(\n        &self,\n        _ctx: Context,\n        _: u64,\n        _: Vec<Hash>,\n    ) -> ProtocolResult<Vec<Option<Receipt>>> {\n        unimplemented!()\n    }\n\n    async fn get_latest_proof(&self, _ctx: Context) -> ProtocolResult<Proof> {\n        unimplemented!()\n    }\n}\n\nfn new_multi_signature_service() -> MultiSignatureService<\n    DefaultServiceSDK<GeneralServiceState<MemoryDB>, DefaultChainQuerier<MockStorage>>,\n> {\n    let chain_db = DefaultChainQuerier::new(Arc::new(MockStorage {}));\n    let trie = MPTTrie::new(Arc::new(MemoryDB::new(false)));\n    let state = GeneralServiceState::new(trie);\n\n    let sdk = DefaultServiceSDK::new(Rc::new(RefCell::new(state)), Rc::new(chain_db));\n\n    MultiSignatureService::new(sdk)\n}\n\nfn mock_context(cycles_limit: u64, caller: Address) -> ServiceContext {\n    let params = ServiceContextParams {\n        tx_hash: Some(mock_hash()),\n        nonce: None,\n        cycles_limit,\n        
cycles_price: 1,\n        cycles_used: Rc::new(RefCell::new(0)),\n        caller,\n        height: 1,\n        timestamp: 0,\n        service_name: \"service_name\".to_owned(),\n        service_method: \"service_method\".to_owned(),\n        service_payload: \"service_payload\".to_owned(),\n        extra: None,\n        events: Rc::new(RefCell::new(vec![])),\n    };\n\n    ServiceContext::new(params)\n}\n\nfn mock_hash() -> Hash {\n    Hash::digest(get_random_bytes(10))\n}\n\nfn get_random_bytes(len: usize) -> Bytes {\n    let vec: Vec<u8> = (0..len).map(|_| random::<u8>()).collect();\n    Bytes::from(vec)\n}\n\nfn gen_one_keypair() -> (Bytes, Bytes) {\n    let sk = Secp256k1PrivateKey::generate(&mut thread_rng());\n    let pk = sk.pub_key();\n    (sk.to_bytes(), pk.to_bytes())\n}\n\nfn gen_keypairs(num: usize) -> Vec<(Bytes, Bytes)> {\n    (0..num).map(|_| gen_one_keypair()).collect::<Vec<_>>()\n}\n\nfn to_multi_sig_account(pk: Bytes) -> AddressWithWeight {\n    AddressWithWeight {\n        address: Address::from_pubkey_bytes(pk).unwrap(),\n        weight:  1u8,\n    }\n}\n\nfn sign(privkey: &Bytes, hash: &Hash) -> Bytes {\n    Secp256k1PrivateKey::try_from(privkey.as_ref())\n        .unwrap()\n        .sign_message(&HashValue::try_from(hash.as_bytes().as_ref()).unwrap())\n        .to_bytes()\n}\n\nfn _gen_single_witness(privkey: &Bytes, hash: &Hash) -> VerifySignaturePayload {\n    let privkey = Secp256k1PrivateKey::try_from(privkey.as_ref()).unwrap();\n    let pk = privkey.pub_key().to_bytes();\n    let sig = privkey\n        .sign_message(&HashValue::try_from(hash.as_bytes().as_ref()).unwrap())\n        .to_bytes();\n\n    VerifySignaturePayload {\n        pubkeys:    vec![pk.clone()],\n        signatures: vec![sig],\n        sender:     Address::from_pubkey_bytes(pk).unwrap(),\n        tx_hash:    hash.clone(),\n    }\n}\n\nfn to_accounts_list(input: Vec<AddressWithWeight>) -> Vec<Account> {\n    input\n        .into_iter()\n        .map(|item| 
item.into_signle_account())\n        .collect::<Vec<_>>()\n}\n"
  },
  {
    "path": "built-in-services/multi-signature/src/tests/recursion_test.rs",
    "content": "use std::str::FromStr;\n\nuse crate::types::{GenerateMultiSigAccountPayload, VerifySignaturePayload};\n\nuse super::*;\n\n#[test]\nfn test_recursion_verify_signature() {\n    let cycles_limit = 1024 * 1024 * 1024; // 1073741824\n    let caller = Address::from_str(\"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\").unwrap();\n    let mut service = new_multi_signature_service();\n    let owner = Address::from_pubkey_bytes(gen_one_keypair().1).unwrap();\n\n    let init_keypairs = gen_keypairs(4);\n    let init_multi_sig_account = init_keypairs\n        .iter()\n        .map(|pair| to_multi_sig_account(pair.1.clone()))\n        .collect::<Vec<_>>();\n\n    let sender = service\n        .generate_account(\n            mock_context(cycles_limit, caller.clone()),\n            GenerateMultiSigAccountPayload {\n                owner:            owner.clone(),\n                autonomy:         false,\n                addr_with_weight: init_multi_sig_account,\n                threshold:        4,\n                memo:             String::new(),\n            },\n        )\n        .succeed_data\n        .address;\n\n    let keypairs = gen_keypairs(3);\n    let mut multi_sig_account = keypairs\n        .iter()\n        .map(|pair| to_multi_sig_account(pair.1.clone()))\n        .collect::<Vec<_>>();\n\n    multi_sig_account.push(AddressWithWeight {\n        address: sender,\n        weight:  1u8,\n    });\n\n    let sender_new = service\n        .generate_account(\n            mock_context(cycles_limit, caller.clone()),\n            GenerateMultiSigAccountPayload {\n                owner,\n                autonomy: false,\n                addr_with_weight: multi_sig_account,\n                threshold: 4,\n                memo: String::new(),\n            },\n        )\n        .succeed_data\n        .address;\n\n    let ctx = mock_context(cycles_limit, caller);\n    let tx_hash = ctx.get_tx_hash().unwrap();\n\n    let mut pks = Vec::new();\n    let mut sigs = 
Vec::new();\n\n    for pair in init_keypairs.iter().chain(keypairs.iter()) {\n        pks.push(pair.1.clone());\n        sigs.push(sign(&pair.0, &tx_hash));\n    }\n\n    assert_eq!(pks.len(), sigs.len());\n\n    let res = service._inner_verify_signature(VerifySignaturePayload {\n        pubkeys: pks,\n        signatures: sigs,\n        sender: sender_new,\n        tx_hash,\n    });\n\n    assert_eq!(res.is_error(), false);\n}\n\n#[test]\nfn test_recursion_depth() {\n    let cycles_limit = 1024 * 1024 * 1024; // 1073741824\n    let caller = Address::from_str(\"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\").unwrap();\n    let mut service = new_multi_signature_service();\n    let owner_keypair = gen_one_keypair();\n    let owner = Address::from_pubkey_bytes(owner_keypair.1).unwrap();\n    let mut all_keypairs = Vec::new();\n\n    let init_keypairs = gen_keypairs(4);\n    let mut init_keypairs_clone = init_keypairs.clone();\n    all_keypairs.append(&mut init_keypairs_clone);\n\n    let init_multi_sig_account = init_keypairs\n        .iter()\n        .map(|pair| to_multi_sig_account(pair.1.clone()))\n        .collect::<Vec<_>>();\n\n    let mut sender = service\n        .generate_account(\n            mock_context(cycles_limit, caller.clone()),\n            GenerateMultiSigAccountPayload {\n                owner:            owner.clone(),\n                autonomy:         false,\n                addr_with_weight: init_multi_sig_account,\n                threshold:        4,\n                memo:             String::new(),\n            },\n        )\n        .succeed_data\n        .address;\n\n    for _i in 0..7 {\n        let new_keypairs = gen_keypairs(3);\n        let mut new_keypairs_clone = new_keypairs.clone();\n        all_keypairs.append(&mut new_keypairs_clone);\n\n        let mut multi_sig_account = new_keypairs\n            .iter()\n            .map(|pair| to_multi_sig_account(pair.1.clone()))\n            .collect::<Vec<_>>();\n        
multi_sig_account.push(AddressWithWeight {\n            address: sender.clone(),\n            weight:  1u8,\n        });\n        let res = service.generate_account(\n            mock_context(cycles_limit, caller.clone()),\n            GenerateMultiSigAccountPayload {\n                owner:            owner.clone(),\n                autonomy:         false,\n                addr_with_weight: multi_sig_account,\n                threshold:        4,\n                memo:             String::new(),\n            },\n        );\n\n        assert_eq!(res.is_error(), false);\n        sender = res.succeed_data.address;\n    }\n\n    let res = service.generate_account(\n        mock_context(cycles_limit, caller),\n        GenerateMultiSigAccountPayload {\n            owner,\n            autonomy: false,\n            addr_with_weight: vec![AddressWithWeight {\n                address: sender,\n                weight:  4u8,\n            }],\n            threshold: 1,\n            memo: String::new(),\n        },\n    );\n    assert!(res.is_error());\n}\n"
  },
  {
    "path": "built-in-services/multi-signature/src/types.rs",
    "content": "use std::collections::HashMap;\n\nuse muta_codec_derive::RlpFixedCodec;\nuse serde::{Deserialize, Serialize};\n\nuse protocol::fixed_codec::{FixedCodec, FixedCodecError};\nuse protocol::types::{Address, Bytes, Hash};\nuse protocol::ProtocolResult;\n\n#[derive(Clone, Debug)]\npub enum SetWeightResult {\n    Success,\n    NoAccount,\n    InvalidNewWeight,\n}\n\n#[derive(Clone, Debug)]\npub enum RemoveAccountResult {\n    Success(Account),\n    NoAccount,\n    BelowThreshold,\n}\n\n#[derive(RlpFixedCodec, Deserialize, Serialize, Clone, Debug)]\npub struct InitGenesisPayload {\n    pub address:          Address,\n    pub owner:            Address,\n    pub addr_with_weight: Vec<AddressWithWeight>,\n    pub threshold:        u32,\n    pub memo:             String,\n}\n\n#[derive(RlpFixedCodec, Deserialize, Serialize, Clone, Debug, PartialEq, Eq)]\npub struct GenerateMultiSigAccountPayload {\n    pub owner:            Address,\n    pub autonomy:         bool,\n    pub addr_with_weight: Vec<AddressWithWeight>,\n    pub threshold:        u32,\n    pub memo:             String,\n}\n\n#[derive(RlpFixedCodec, Deserialize, Serialize, Clone, Debug, Default)]\npub struct GenerateMultiSigAccountResponse {\n    pub address: Address,\n}\n\n#[derive(RlpFixedCodec, Deserialize, Serialize, Clone, Debug)]\npub struct VerifySignaturePayload {\n    pub tx_hash:    Hash,\n    pub pubkeys:    Vec<Bytes>,\n    pub signatures: Vec<Bytes>,\n    pub sender:     Address,\n}\n\n#[derive(RlpFixedCodec, Deserialize, Serialize, Clone, Debug)]\npub struct GetMultiSigAccountPayload {\n    pub multi_sig_address: Address,\n}\n\n#[derive(RlpFixedCodec, Deserialize, Serialize, Clone, Debug, Default)]\npub struct GetMultiSigAccountResponse {\n    pub permission: MultiSigPermission,\n}\n\n#[derive(RlpFixedCodec, Deserialize, Serialize, Clone, Debug)]\npub struct ChangeOwnerPayload {\n    pub multi_sig_address: Address,\n    pub new_owner:         Address,\n}\n\n#[derive(RlpFixedCodec, 
Deserialize, Serialize, Clone, Debug)]\npub struct ChangeMemoPayload {\n    pub multi_sig_address: Address,\n    pub new_memo:          String,\n}\n\n#[derive(RlpFixedCodec, Deserialize, Serialize, Clone, Debug)]\npub struct AddAccountPayload {\n    pub multi_sig_address: Address,\n    pub new_account:       Account,\n}\n\n#[derive(RlpFixedCodec, Deserialize, Serialize, Clone, Debug)]\npub struct RemoveAccountPayload {\n    pub multi_sig_address: Address,\n    pub account_address:   Address,\n}\n\n#[derive(RlpFixedCodec, Deserialize, Serialize, Clone, Debug)]\npub struct SetAccountWeightPayload {\n    pub multi_sig_address: Address,\n    pub account_address:   Address,\n    pub new_weight:        u8,\n}\n\n#[derive(RlpFixedCodec, Deserialize, Serialize, Clone, Debug)]\npub struct SetThresholdPayload {\n    pub multi_sig_address: Address,\n    pub new_threshold:     u32,\n}\n\n#[derive(RlpFixedCodec, Deserialize, Serialize, Clone, Debug)]\npub struct UpdateAccountPayload {\n    pub account_address:  Address,\n    pub owner:            Address,\n    pub addr_with_weight: Vec<AddressWithWeight>,\n    pub threshold:        u32,\n    pub memo:             String,\n}\n\n#[derive(RlpFixedCodec, Deserialize, Serialize, Clone, Debug, Default, PartialEq, Eq)]\npub struct MultiSigPermission {\n    pub owner:     Address,\n    pub accounts:  Vec<Account>,\n    pub threshold: u32,\n    pub memo:      String,\n}\n\nimpl MultiSigPermission {\n    pub fn get_account(&self, addr: &Address) -> Option<Account> {\n        for account in self.accounts.iter() {\n            if &account.address == addr {\n                return Some(account.clone());\n            }\n        }\n        None\n    }\n\n    pub fn set_owner(&mut self, new_owner: Address) {\n        self.owner = new_owner;\n    }\n\n    pub fn set_memo(&mut self, new_memo: String) {\n        self.memo = new_memo;\n    }\n\n    pub fn add_account(&mut self, new_account: Account) {\n        self.accounts.push(new_account);\n    
}\n\n    pub fn remove_account(&mut self, address: &Address) -> RemoveAccountResult {\n        let mut idx = self.accounts.len();\n        let weight_sum = self\n            .accounts\n            .iter()\n            .map(|account| account.weight as u32)\n            .sum::<u32>();\n\n        for (index, account) in self.accounts.iter().enumerate() {\n            if &account.address == address {\n                idx = index;\n                break;\n            }\n        }\n\n        if idx != self.accounts.len() {\n            if (weight_sum - self.accounts[idx].weight as u32) < self.threshold {\n                RemoveAccountResult::BelowThreshold\n            } else {\n                let ret = self.accounts.remove(idx);\n                RemoveAccountResult::Success(ret)\n            }\n        } else {\n            RemoveAccountResult::NoAccount\n        }\n    }\n\n    pub fn set_threshold(&mut self, new_threshold: u32) {\n        self.threshold = new_threshold;\n    }\n\n    pub fn set_account_weight(\n        &mut self,\n        account_address: &Address,\n        new_weight: u8,\n    ) -> SetWeightResult {\n        let weight_sum = self\n            .accounts\n            .iter()\n            .map(|account| account.weight as u32)\n            .sum::<u32>();\n\n        for account in self.accounts.iter_mut() {\n            if &account.address == account_address {\n                if weight_sum + (new_weight as u32) - (account.weight as u32) < self.threshold {\n                    return SetWeightResult::InvalidNewWeight;\n                } else {\n                    account.weight = new_weight;\n                    return SetWeightResult::Success;\n                }\n            }\n        }\n        SetWeightResult::NoAccount\n    }\n}\n\n#[derive(RlpFixedCodec, Deserialize, Serialize, Clone, Debug, Default, PartialEq, Eq)]\npub struct Account {\n    pub address:     Address,\n    pub weight:      u8,\n    pub is_multiple: 
bool,\n}\n\n#[derive(RlpFixedCodec, Deserialize, Serialize, Clone, Debug, PartialEq, Eq)]\npub struct AddressWithWeight {\n    pub address: Address,\n    pub weight:  u8,\n}\n\n#[derive(RlpFixedCodec, Deserialize, Serialize, Clone, Debug)]\npub struct Witness {\n    pub pubkeys:    Vec<Bytes>,\n    pub signatures: Vec<Bytes>,\n}\n\nimpl Witness {\n    pub fn new(pubkeys: Vec<Bytes>, signatures: Vec<Bytes>) -> Self {\n        Witness {\n            pubkeys,\n            signatures,\n        }\n    }\n\n    pub fn into_addr_map(self) -> HashMap<Address, (Bytes, Bytes)> {\n        let mut ret = HashMap::new();\n        for (pk, sig) in self.pubkeys.into_iter().zip(self.signatures.into_iter()) {\n            if let Ok(addr) = Address::from_pubkey_bytes(&pk) {\n                ret.insert(addr, (pk, sig));\n            }\n        }\n        ret\n    }\n}\n\n#[cfg(test)]\nimpl AddressWithWeight {\n    pub fn into_signle_account(self) -> Account {\n        Account {\n            address:     self.address,\n            weight:      self.weight,\n            is_multiple: false,\n        }\n    }\n}\n"
  },
  {
    "path": "built-in-services/util/Cargo.toml",
    "content": "[package]\nname = \"util\"\nversion = \"0.2.1\"\nauthors = [\"Muta Dev <muta@nervos.org>\"]\nedition = \"2018\"\nrepository = \"https://github.com/nervosnetwork/muta\"\n\n# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html\n\n[dependencies]\nbinding-macro = { path = \"../../binding-macro\" }\nprotocol = { path = \"../../protocol\", package = \"muta-protocol\" }\nhasher = { version=\"0.1\", features = [\"hash-keccak\"] }\nserde = { version = \"1.0\", features = [\"derive\"] }\nserde_json = \"1.0\"\nrlp = \"0.4\"\nbytes = \"0.5\"\nderive_more = \"0.15\"\nbyteorder = \"1.3\"\ncommon-crypto = { path = \"../../common/crypto\" }\nhex = \"0.4\"\nrand = \"0.7\"\n\n[dev-dependencies]\ncita_trie = \"2.0\"\nasync-trait = \"0.1\"\nframework = { path = \"../../framework\" }\n"
  },
  {
    "path": "built-in-services/util/src/lib.rs",
    "content": "use bytes::Bytes;\nuse hasher::{Hasher, HasherKeccak};\n\nuse binding_macro::{cycles, service};\nuse common_crypto::{Crypto, Secp256k1};\nuse protocol::traits::{ExecutorParams, ServiceResponse, ServiceSDK};\nuse protocol::types::{Hash, ServiceContext};\n\nuse crate::types::{KeccakPayload, KeccakResponse, SigVerifyPayload, SigVerifyResponse};\n\n#[cfg(test)]\nmod tests;\npub mod types;\n\npub const UTIL_SERVICE_NAME: &str = \"util\";\n\npub struct UtilService<SDK> {\n    _sdk: SDK,\n}\n\n#[service]\nimpl<SDK: ServiceSDK> UtilService<SDK> {\n    pub fn new(_sdk: SDK) -> Self {\n        Self { _sdk }\n    }\n\n    #[cycles(10_000)]\n    #[read]\n    fn keccak256(\n        &self,\n        ctx: ServiceContext,\n        payload: KeccakPayload,\n    ) -> ServiceResponse<KeccakResponse> {\n        let keccak = HasherKeccak::new();\n        let data = hex::decode(payload.hex_str.as_string_trim0x());\n        if data.is_err() {\n            return ServiceResponse::<KeccakResponse>::from_error(107, \"data not valid\".to_owned());\n        }\n\n        let hash_res = keccak.digest(data.unwrap().as_slice());\n        let response = KeccakResponse {\n            result: Hash::from_bytes(Bytes::from(hash_res)).unwrap(),\n        };\n        ServiceResponse::<KeccakResponse>::from_succeed(response)\n    }\n\n    #[cycles(10_000)]\n    #[read]\n    fn verify(\n        &self,\n        ctx: ServiceContext,\n        payload: SigVerifyPayload,\n    ) -> ServiceResponse<SigVerifyResponse> {\n        let data_sig = hex::decode(payload.sig.as_string_trim0x());\n        if data_sig.is_err() {\n            return ServiceResponse::<SigVerifyResponse>::from_error(\n                108,\n                \"signature not valid\".to_owned(),\n            );\n        };\n\n        let data_pk = hex::decode(payload.pub_key.as_string_trim0x());\n        if data_pk.is_err() {\n            return ServiceResponse::<SigVerifyResponse>::from_error(\n                109,\n                
\"public key not valid\".to_owned(),\n            );\n        };\n\n        let data_hash = payload.hash.as_bytes();\n\n        let response = SigVerifyResponse {\n            is_ok: Secp256k1::verify_signature(\n                data_hash.as_ref(),\n                data_sig.unwrap().as_slice(),\n                data_pk.unwrap().as_slice(),\n            )\n            .is_ok(),\n        };\n\n        ServiceResponse::<SigVerifyResponse>::from_succeed(response)\n    }\n}\n"
  },
  {
    "path": "built-in-services/util/src/tests/mod.rs",
    "content": "use std::cell::RefCell;\nuse std::rc::Rc;\nuse std::str::FromStr;\nuse std::sync::Arc;\n\nuse cita_trie::MemoryDB;\nuse rand::rngs::OsRng;\n\nuse async_trait::async_trait;\nuse common_crypto::{\n    Crypto, PrivateKey, PublicKey, Secp256k1, Secp256k1PrivateKey, Signature, ToPublicKey,\n};\nuse framework::binding::sdk::{DefaultChainQuerier, DefaultServiceSDK};\nuse framework::binding::state::{GeneralServiceState, MPTTrie};\nuse protocol::traits::{CommonStorage, Context, Storage};\nuse protocol::types::{\n    Address, Block, BlockHeader, Hash, Hex, Proof, Receipt, ServiceContext, ServiceContextParams,\n    SignedTransaction,\n};\nuse protocol::ProtocolResult;\n\nuse crate::types::{KeccakPayload, SigVerifyPayload};\nuse crate::UtilService;\n\n#[test]\nfn test_hash() {\n    let cycles_limit = 1024 * 1024 * 1024; // 1073741824\n    let caller = Address::from_str(\"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\").unwrap();\n    let context = mock_context(cycles_limit, caller);\n\n    let service = new_util_service();\n\n    let res = service\n        .keccak256(context, KeccakPayload {\n            hex_str: Hex::from_string(\"0x1234\".to_string()).unwrap(),\n        })\n        .succeed_data;\n\n    assert_eq!(\n        res.result.as_hex(),\n        \"0x56570de287d73cd1cb6092bb8fdee6173974955fdef345ae579ee9f475ea7432\".to_string()\n    )\n}\n\n#[test]\nfn test_verify() {\n    let cycles_limit = 1024 * 1024 * 1024; // 1073741824\n    let caller = Address::from_str(\"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\").unwrap();\n    let context = mock_context(cycles_limit, caller);\n\n    let service = new_util_service();\n\n    let priv_key = Secp256k1PrivateKey::generate(&mut OsRng);\n    let pub_key = priv_key.pub_key();\n\n    let mut input_pk: String = \"0x\".to_string();\n    input_pk.push_str(hex::encode(pub_key.to_bytes()).as_str());\n\n    let pub_key_data = Hex::from_string(input_pk).unwrap();\n    let hash = 
Hash::from_hex(\"0x56570de287d73cd1cb6092bb8fdee6173974955fdef345ae579ee9f475ea7432\")\n        .unwrap();\n\n    let sig = Secp256k1::sign_message(&hash.as_bytes(), &priv_key.to_bytes()).unwrap();\n    let mut input_sig: String = \"0x\".to_string();\n    input_sig.push_str(hex::encode(sig.to_bytes()).as_str());\n    let sig_data = Hex::from_string(input_sig).unwrap();\n\n    println!(\n        \"pubkey: {}\\r\\nsig: {}\",\n        pub_key_data.as_string(),\n        sig_data.as_string()\n    );\n\n    let res = service\n        .verify(context, SigVerifyPayload {\n            hash,\n            sig: sig_data,\n            pub_key: pub_key_data,\n        })\n        .succeed_data;\n\n    assert_eq!(res.is_ok, true)\n}\n\nfn new_util_service(\n) -> UtilService<DefaultServiceSDK<GeneralServiceState<MemoryDB>, DefaultChainQuerier<MockStorage>>>\n{\n    let chain_db = DefaultChainQuerier::new(Arc::new(MockStorage {}));\n    let trie = MPTTrie::new(Arc::new(MemoryDB::new(false)));\n    let state = GeneralServiceState::new(trie);\n\n    let sdk = DefaultServiceSDK::new(Rc::new(RefCell::new(state)), Rc::new(chain_db));\n\n    UtilService::new(sdk)\n}\n\nfn mock_context(cycles_limit: u64, caller: Address) -> ServiceContext {\n    let params = ServiceContextParams {\n        tx_hash: None,\n        nonce: None,\n        cycles_limit,\n        cycles_price: 1,\n        cycles_used: Rc::new(RefCell::new(0)),\n        caller,\n        height: 1,\n        timestamp: 0,\n        service_name: \"service_name\".to_owned(),\n        service_method: \"service_method\".to_owned(),\n        service_payload: \"service_payload\".to_owned(),\n        extra: None,\n        events: Rc::new(RefCell::new(vec![])),\n    };\n\n    ServiceContext::new(params)\n}\n\nstruct MockStorage;\n\n#[async_trait]\nimpl CommonStorage for MockStorage {\n    async fn insert_block(&self, _ctx: Context, _block: Block) -> ProtocolResult<()> {\n        unimplemented!()\n    }\n\n    async fn get_block(&self, 
_ctx: Context, _height: u64) -> ProtocolResult<Option<Block>> {\n        unimplemented!()\n    }\n\n    async fn get_block_header(\n        &self,\n        _ctx: Context,\n        _height: u64,\n    ) -> ProtocolResult<Option<BlockHeader>> {\n        unimplemented!()\n    }\n\n    async fn set_block(&self, _ctx: Context, _block: Block) -> ProtocolResult<()> {\n        unimplemented!()\n    }\n\n    async fn remove_block(&self, _ctx: Context, _height: u64) -> ProtocolResult<()> {\n        unimplemented!()\n    }\n\n    async fn get_latest_block(&self, _ctx: Context) -> ProtocolResult<Block> {\n        unimplemented!()\n    }\n\n    async fn set_latest_block(&self, _ctx: Context, _block: Block) -> ProtocolResult<()> {\n        unimplemented!()\n    }\n\n    async fn get_latest_block_header(&self, _ctx: Context) -> ProtocolResult<BlockHeader> {\n        unimplemented!()\n    }\n}\n\n#[async_trait]\nimpl Storage for MockStorage {\n    async fn insert_transactions(\n        &self,\n        _: Context,\n        _: u64,\n        _: Vec<SignedTransaction>,\n    ) -> ProtocolResult<()> {\n        unimplemented!()\n    }\n\n    async fn get_transactions(\n        &self,\n        _: Context,\n        _height: u64,\n        _: &[Hash],\n    ) -> ProtocolResult<Vec<Option<SignedTransaction>>> {\n        unimplemented!()\n    }\n\n    async fn get_transaction_by_hash(\n        &self,\n        _: Context,\n        _: &Hash,\n    ) -> ProtocolResult<Option<SignedTransaction>> {\n        unimplemented!()\n    }\n\n    async fn insert_receipts(\n        &self,\n        _: Context,\n        _height: u64,\n        _: Vec<Receipt>,\n    ) -> ProtocolResult<()> {\n        unimplemented!()\n    }\n\n    async fn get_receipt_by_hash(&self, _: Context, _: Hash) -> ProtocolResult<Option<Receipt>> {\n        unimplemented!()\n    }\n\n    async fn get_receipts(\n        &self,\n        _: Context,\n        _height: u64,\n        _: Vec<Hash>,\n    ) -> ProtocolResult<Vec<Option<Receipt>>> 
{\n        unimplemented!()\n    }\n\n    async fn update_latest_proof(&self, _: Context, _: Proof) -> ProtocolResult<()> {\n        unimplemented!()\n    }\n\n    async fn get_latest_proof(&self, _: Context) -> ProtocolResult<Proof> {\n        unimplemented!()\n    }\n}\n"
  },
  {
    "path": "built-in-services/util/src/types.rs",
    "content": "use protocol::types::{Hash, Hex};\nuse serde::{Deserialize, Serialize};\n\n#[derive(Deserialize, Serialize, Clone, Debug)]\npub struct KeccakPayload {\n    pub hex_str: Hex,\n}\n\n#[derive(Deserialize, Serialize, Clone, Debug, Default)]\npub struct KeccakResponse {\n    pub result: Hash,\n}\n\n#[derive(Deserialize, Serialize, Clone, Debug)]\npub struct SigVerifyPayload {\n    pub hash:    Hash,\n    pub sig:     Hex,\n    pub pub_key: Hex,\n}\n\n#[derive(Deserialize, Serialize, Clone, Debug, Default)]\npub struct SigVerifyResponse {\n    pub is_ok: bool,\n}\n"
  },
  {
    "path": "byzantine/Cargo.toml",
    "content": "[package]\nname = \"byzantine\"\nversion = \"0.2.1\"\nauthors = [\"Muta Dev <muta@nervos.org>\"]\nedition = \"2018\"\nrepository = \"https://github.com/nervosnetwork/muta\"\n\n# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html\n\n[dependencies]\ncommon-apm = { path = \"../common/apm\" }\ncommon-config-parser = { path = \"../common/config-parser\" }\ncommon-crypto = { path = \"../common/crypto\" }\ncommon-logger = { path = \"../common/logger\" }\ncommon-merkle = { path = \"../common/merkle\" }\nprotocol = { path = \"../protocol\", package = \"muta-protocol\" }\ncore-api = { path = \"../core/api\" }\ncore-storage = { path = \"../core/storage\" }\ncore-mempool = { path = \"../core/mempool\" }\ncore-network = { path = \"../core/network\" }\ncore-consensus = { path = \"../core/consensus\" }\noverlord = \"0.2\"\n\nbinding-macro = { path = \"../binding-macro\" }\nframework = { path = \"../framework\" }\n\nactix-rt = \"1.0\"\nasync-trait = \"0.1\"\nderive_more = \"0.99\"\nlazy_static = \"1.4\"\nfutures = \"0.3\"\nparking_lot = \"0.11\"\nserde = \"1.0\"\nserde_derive = \"1.0\"\nserde_json = \"1.0\"\nlog = \"0.4\"\nclap = \"2.33\"\nbytes = \"0.5\"\nhex = \"0.4\"\nrlp = \"0.4\"\nrand = \"0.7\"\ntoml = \"0.5\"\ntokio = { version = \"0.2\", features = [\"macros\", \"rt-core\", \"rt-util\", \"signal\", \"time\"] }\nmuta-apm = \"0.1.0-alpha.7\"\nfutures-timer=\"3.0\"\n"
  },
  {
    "path": "byzantine/README.md",
    "content": "# 1. Overview\n\nByzantine testing checks the security and stability of a system by simulating malicious behavior.\nIn a distributed system such as a blockchain, a node influences the system by sending, or withholding, messages.\nBy controlling the timing, content, quantity, and recipients of messages, we can therefore simulate arbitrary malicious behavior.\nSince these factors can in theory be combined into infinitely many malicious behaviors, which we cannot enumerate exhaustively,\nthe implementation takes two approaches: it covers as many cases as possible through random combination,\nand it makes it easy to add new test cases at any time.\nThe latter is explained in detail in the [architecture design](#3-architecture-design) section.\n\nIn the `muta` system, the messages a node sends can be divided into `active messages` and `passive messages`.\nActive messages are messages a node can initiate at any moment, such as new transactions, proposals, and votes.\nPassive messages are messages a node sends only when triggered by another message;\nfor example, a node sends `push_txs` only after it receives `pull_txs`.\nThe core idea of this project is to carefully construct many different kinds of malicious message types\nand combine them randomly with the other factors, simulating malicious behavior that covers most scenarios.\n\n# 2. Running the tests\n\nFirst, edit the configuration file `byzantine/generators.toml` to add or remove test cases as needed.\n```\ninterval = 500  # trigger interval for active messages, in ms\n\n[[list]]\nreq_end = \"/gossip/consensus/signed_proposal\"    # omitted for active messages; for passive messages, the end of the triggering message\nmsg_type = { RecvProposal = \"InvalidHeight\" }    # type of the message content\nprobability = 0.2         # probability of generating this message type on each trigger, up to 1.0 (100%)\nnum_range = [1, 10]       # range for the number of messages; the actual number is drawn randomly from this range\npriority = \"Normal\"       # send priority; currently only Normal and High are available\n```\nTest commands\n```\n# Start three normal nodes\nmuta$ CONFIG=examples/config-1.toml GENESIS=examples/genesis.toml cargo run --release --example muta-chain\nmuta$ CONFIG=examples/config-2.toml GENESIS=examples/genesis.toml cargo run --release --example muta-chain\nmuta$ CONFIG=examples/config-3.toml GENESIS=examples/genesis.toml cargo run --release --example muta-chain\n# Start one byzantine node\nmuta$ CONFIG=examples/config-4.toml GENESIS=examples/genesis.toml cargo run --release --example byzantine_node\n```\n\n# 3. Architecture design\n\nThere are two core requirements: 1. make it easy to add new test cases at any time; 2. allow malicious attacks to be controlled and orchestrated through external interaction in the future.\n\nTo meet them, the generation of malicious messages is abstracted into the following three stages:\n\n1. configuration file -> behavior generator, implemented by the `strategy` module\n```rust\npub struct BehaviorGenerator {\n    pub req_end:     Option<String>,\n    pub msg_type:    MessageType,\n    pub probability: f64,\n    pub num_range:   (u64, u64),\n    pub priority:    Priority,\n}\n```\n\n2. behavior generator -> behavior, implemented by the `commander` module\n```rust\npub struct Behavior {\n    pub msg_type: MessageType,\n    pub msg_num:  u64,\n    pub request:  Option<Request>,  // None for active messages; for passive messages, the content of the triggering message\n    pub send_to:  Vec<Bytes>,\n    pub priority: Priority,\n}\n```\n\n3. behavior -> message, implemented by the `worker` module\n\nBecause messages generated from the configuration are highly random and cannot be manipulated from the outside,\nan interactive mode can be added if more targeted tests are needed in the future,\nin which commands direct the `commander` module to trigger specific `Behavior`s.\n\n![image](./resource/structure.png)\n\n# 4. File list\n```\nbyzantine\n├── src\n│   ├── behaviors.rs           # defines Behavior and the various message types\n│   ├── commander.rs           # generates Behavior from BehaviorGenerator\n│   ├── config.rs              # data structures for the configuration file\n│   ├── default_start.rs       # startup logic\n│   ├── invalid_types.rs       # implementations that generate the various malicious messages\n│   ├── lib.rs                 \n│   ├── message.rs             # forwards the triggering message of a passive message to commander\n│   ├── strategy.rs            # generates BehaviorGenerator from the configuration file\n│   ├── utils.rs               # utility functions\n└── └── worker.rs              # generates and sends messages according to Behavior\n```\n"
  },
  {
    "path": "byzantine/generators.toml",
    "content": "interval = 500\n\n#################\n##### NewTx #####\n#################\n[[list]]\nmsg_type = { NewTxs = \"Valid\" }\nprobability = 1.0\nnum_range = [10, 100]\npriority = \"Normal\"\n\n[[list]]\nmsg_type = { NewTxs = \"InvalidStruct\" }\nprobability = 0.01\nnum_range = [1, 10]\npriority = \"Normal\"\n\n[[list]]\nmsg_type = { NewTxs = \"InvalidHash\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"Normal\"\n\n[[list]]\nmsg_type = { NewTxs = \"InvalidSig\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"Normal\"\n\n[[list]]\nmsg_type = { NewTxs = \"InvalidChainID\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"Normal\"\n\n[[list]]\nmsg_type = { NewTxs = \"InvalidCyclesPrice\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"Normal\"\n\n[[list]]\nmsg_type = { NewTxs = \"InvalidCyclesLimit\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"Normal\"\n\n[[list]]\nmsg_type = { NewTxs = \"InvalidNonceOfRandLen\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"Normal\"\n\n[[list]]\nmsg_type = { NewTxs = \"InvalidNonceDup\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"Normal\"\n\n[[list]]\nmsg_type = { NewTxs = \"InvalidRequest\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"Normal\"\n\n[[list]]\nmsg_type = { NewTxs = \"InvalidTimeout\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"Normal\"\n\n[[list]]\nmsg_type = { NewTxs = \"InvalidSender\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"Normal\"\n\n#########################\n###### RecvProposal #####\n#########################\n[[list]]\nreq_end = \"/gossip/consensus/signed_proposal\"\nmsg_type = { RecvProposal = \"InvalidHeight\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n[[list]]\nreq_end = \"/gossip/consensus/signed_proposal\"\nmsg_type = { RecvProposal = \"InvalidHash\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n[[list]]\nreq_end = 
\"/gossip/consensus/signed_proposal\"\nmsg_type = { RecvProposal = \"NotExistTxs\" }\nprobability = 1.0\nnum_range = [1, 1000]\npriority = \"High\"\n\n[[list]]\nreq_end = \"/gossip/consensus/signed_proposal\"\nmsg_type = { RecvProposal = \"InvalidStruct\" }\nprobability = 1.0\nnum_range = [1, 2]\npriority = \"High\"\n\n##########################\n####### SendProposal #####\n##########################\n[[list]]\nmsg_type = { SendProposal = \"InvalidStruct\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n[[list]]\nmsg_type = { SendProposal = \"InvalidChainId\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n[[list]]\nmsg_type = { SendProposal = \"InvalidPrevHash\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n[[list]]\nmsg_type = { SendProposal = \"InvalidHeight\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n[[list]]\nmsg_type = { SendProposal = \"InvalidExecHeight\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n[[list]]\nmsg_type = { SendProposal = \"InvalidTimestamp\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n[[list]]\nmsg_type = { SendProposal = \"InvalidOrderRoot\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n[[list]]\nmsg_type = { SendProposal = \"InvalidSignedTxsHash\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n[[list]]\nmsg_type = { SendProposal = \"InvalidConfirmRoot\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n[[list]]\nmsg_type = { SendProposal = \"InvalidStateRoot\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n[[list]]\nmsg_type = { SendProposal = \"InvalidReceiptRoot\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n[[list]]\nmsg_type = { SendProposal = \"InvalidCyclesUsed\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n#\n[[list]]\nmsg_type = { SendProposal = \"InvalidBlockProposer\" }\nprobability = 
1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n[[list]]\nmsg_type = { SendProposal = \"InvalidProof\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n[[list]]\nmsg_type = { SendProposal = \"InvalidVersion\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n[[list]]\nmsg_type = { SendProposal = \"InvalidValidators\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n[[list]]\nmsg_type = { SendProposal = \"InvalidTxHash\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n[[list]]\nmsg_type = { SendProposal = \"InvalidSig\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n[[list]]\nmsg_type = { SendProposal = \"InvalidProposalHeight\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n[[list]]\nmsg_type = { SendProposal = \"InvalidRound\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n#\n##[[list]]\n##msg_type = { SendProposal = \"InvalidContentStruct\" }     ## Cause panic\n##probability = 1.0\n##num_range = [1, 10]\n##priority = \"High\"\n#\n##[[list]]\n##msg_type = { SendProposal = \"InvalidBlockHash\" }     ## Cause panic\n##probability = 1.0\n##num_range = [1, 10]\n##priority = \"High\"\n#\n[[list]]\nmsg_type = { SendProposal = \"InvalidLock\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n[[list]]\nmsg_type = { SendProposal = \"InvalidProposalProposer\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n####################\n##### SendVote #####\n####################\n[[list]]\nmsg_type = { SendVote = \"InvalidStruct\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n[[list]]\nmsg_type = { SendVote = \"InvalidHeight\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n[[list]]\nmsg_type = { SendVote = \"InvalidRound\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n[[list]]\nmsg_type = { SendVote = \"InvalidBlockHash\" }\nprobability = 1.0\nnum_range = [1, 
10]\npriority = \"High\"\n\n[[list]]\nmsg_type = { SendVote = \"InvalidSig\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n[[list]]\nmsg_type = { SendVote = \"InvalidVoter\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n##################\n##### SendQC #####\n##################\n[[list]]\nmsg_type = { SendQC = \"InvalidStruct\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n[[list]]\nmsg_type = { SendQC = \"InvalidHeight\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n[[list]]\nmsg_type = { SendQC = \"InvalidRound\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n[[list]]\nmsg_type = { SendQC = \"InvalidBlockHash\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n[[list]]\nmsg_type = { SendQC = \"InvalidSig\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n[[list]]\nmsg_type = { SendQC = \"InvalidLeader\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n#####################\n##### SendChoke #####\n#####################\n\n[[list]]\nmsg_type = { SendChoke = \"InvalidStruct\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n[[list]]\nmsg_type = { SendChoke = \"InvalidHeight\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n[[list]]\nmsg_type = { SendChoke = \"InvalidRound\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n##[[list]]\n##msg_type = { SendChoke = \"InvalidFrom\" }     ## Break liveness\n##probability = 1.0\n##num_range = [1, 10]\n##priority = \"High\"\n#\n[[list]]\nmsg_type = { SendChoke = \"InvalidSig\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n[[list]]\nmsg_type = { SendChoke = \"InvalidAddress\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n#####################\n#### SendHeight #####\n#####################\n[[list]]\nmsg_type = \"SendHeight\"\nprobability = 1.0\nnum_range = [1, 10]\npriority = 
\"High\"\n\n#####################\n#### PullTxs #####\n#####################\n\n[[list]]\nreq_end = \"/rpc_call/mempool/pull_txs\"\nmsg_type = { PullTxs = \"InvalidStruct\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n[[list]]\nreq_end = \"/rpc_call/mempool/pull_txs\"\nmsg_type = { PullTxs = \"InvalidHash\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n[[list]]\nreq_end = \"/rpc_call/mempool/pull_txs\"\nmsg_type = { PullTxs = \"InvalidSig\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n[[list]]\nreq_end = \"/rpc_call/mempool/pull_txs\"\nmsg_type = { PullTxs = \"InvalidChainID\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n[[list]]\nreq_end = \"/rpc_call/mempool/pull_txs\"\nmsg_type = { PullTxs = \"InvalidCyclesPrice\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n[[list]]\nreq_end = \"/rpc_call/mempool/pull_txs\"\nmsg_type = { PullTxs = \"InvalidCyclesLimit\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n[[list]]\nreq_end = \"/rpc_call/mempool/pull_txs\"\nmsg_type = { PullTxs = \"InvalidNonceOfRandLen\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n[[list]]\nreq_end = \"/rpc_call/mempool/pull_txs\"\nmsg_type = { PullTxs = \"InvalidRequest\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n[[list]]\nreq_end = \"/rpc_call/mempool/pull_txs\"\nmsg_type = { PullTxs = \"InvalidTimeout\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n\n[[list]]\nreq_end = \"/rpc_call/mempool/pull_txs\"\nmsg_type = { PullTxs = \"InvalidSender\" }\nprobability = 1.0\nnum_range = [1, 10]\npriority = \"High\"\n"
  },
  {
    "path": "byzantine/src/behaviors.rs",
    "content": "use bytes::Bytes;\nuse derive_more::Constructor;\nuse serde_derive::Deserialize;\n\nuse core_consensus::message::{\n    Choke, Proposal, Vote, BROADCAST_HEIGHT, END_GOSSIP_AGGREGATED_VOTE, END_GOSSIP_SIGNED_CHOKE,\n    END_GOSSIP_SIGNED_PROPOSAL, END_GOSSIP_SIGNED_VOTE, QC,\n};\nuse core_mempool::{MsgNewTxs, MsgPullTxs, END_GOSSIP_NEW_TXS, RPC_PULL_TXS};\nuse protocol::traits::Priority;\n\n#[derive(Constructor, Clone, Debug)]\npub struct Behavior {\n    pub msg_type: MessageType,\n    pub msg_num:  u64,\n    pub request:  Option<Request>,\n    pub send_to:  Vec<Bytes>,\n    pub priority: Priority,\n}\n\n#[allow(dead_code)]\n#[derive(Clone, Debug)]\npub enum Request {\n    NewTx(MsgNewTxs),\n    PullTxs(MsgPullTxs),\n    RecvProposal(Proposal),\n    RecvVote(Vote),\n    RecvQC(QC),\n    RecvChoke(Choke),\n    RecvHeight(u64),\n}\n\nimpl Request {\n    pub fn to_end(&self) -> &str {\n        match self {\n            Request::NewTx(_) => END_GOSSIP_NEW_TXS,\n            Request::PullTxs(_) => RPC_PULL_TXS,\n            Request::RecvProposal(_) => END_GOSSIP_SIGNED_PROPOSAL,\n            Request::RecvVote(_) => END_GOSSIP_SIGNED_VOTE,\n            Request::RecvQC(_) => END_GOSSIP_AGGREGATED_VOTE,\n            Request::RecvChoke(_) => END_GOSSIP_SIGNED_CHOKE,\n            Request::RecvHeight(_) => BROADCAST_HEIGHT,\n        }\n    }\n}\n\n#[allow(dead_code)]\n#[derive(Clone, Debug, Deserialize)]\npub enum MessageType {\n    NewTxs(NewTx),\n    SendProposal(NewProposal),\n    RecvProposal(PullTxs),\n    SendVote(NewVote),\n    RecvVote,\n    SendQC(NewQC),\n    RecvQC,\n    SendChoke(NewChoke),\n    RecvChoke,\n    SendHeight,\n    RecvHeight,\n    PullTxs(NewTx),\n}\n\n#[derive(Clone, Debug, Deserialize)]\npub enum NewTx {\n    InvalidStruct,\n    InvalidHash,\n    InvalidSig,\n    InvalidChainID,\n    InvalidCyclesPrice,\n    InvalidCyclesLimit,\n    InvalidNonceOfRandLen,\n    InvalidNonceDup,\n    InvalidRequest,\n    InvalidTimeout,\n    
InvalidSender,\n    Valid,\n}\n\n#[derive(Clone, Debug, Deserialize)]\npub enum PullTxs {\n    Valid,\n    InvalidStruct,\n    InvalidHeight,\n    InvalidHash,\n    NotExistTxs,\n}\n\n#[derive(Clone, Debug, Deserialize)]\npub enum NewProposal {\n    Valid,\n    InvalidStruct,\n    InvalidChainId,\n    InvalidPrevHash,\n    InvalidHeight,\n    InvalidExecHeight,\n    InvalidTimestamp,\n    InvalidOrderRoot,\n    InvalidSignedTxsHash,\n    InvalidConfirmRoot,\n    InvalidStateRoot,\n    InvalidReceiptRoot,\n    InvalidCyclesUsed,\n    InvalidBlockProposer,\n    InvalidProof,\n    InvalidVersion,\n    InvalidValidators,\n    InvalidTxHash,\n    InvalidSig,\n    InvalidProposalHeight,\n    InvalidRound,\n    InvalidContentStruct,\n    InvalidBlockHash,\n    InvalidLock,\n    InvalidProposalProposer,\n}\n\n#[derive(Clone, Debug, Deserialize)]\npub enum NewVote {\n    InvalidStruct,\n    InvalidHeight,\n    InvalidRound,\n    InvalidBlockHash,\n    InvalidSig,\n    InvalidVoter,\n}\n\n#[derive(Clone, Debug, Deserialize)]\npub enum NewQC {\n    InvalidStruct,\n    InvalidHeight,\n    InvalidRound,\n    InvalidBlockHash,\n    InvalidSig,\n    InvalidLeader,\n}\n\n#[derive(Clone, Debug, Deserialize)]\npub enum NewChoke {\n    InvalidStruct,\n    InvalidHeight,\n    InvalidRound,\n    InvalidFrom,\n    InvalidSig,\n    InvalidAddress,\n}\n\n#[derive(Clone, Debug, Deserialize)]\npub enum SyncPullBlock {\n    Valid,\n}\n"
  },
  {
    "path": "byzantine/src/commander.rs",
    "content": "use bytes::Bytes;\nuse futures::{\n    channel::mpsc::{UnboundedReceiver, UnboundedSender},\n    stream::StreamExt,\n};\nuse tokio::time::{self, Duration};\n\nuse core_consensus::message::{\n    BROADCAST_HEIGHT, END_GOSSIP_AGGREGATED_VOTE, END_GOSSIP_SIGNED_CHOKE,\n    END_GOSSIP_SIGNED_PROPOSAL, END_GOSSIP_SIGNED_VOTE,\n};\nuse protocol::traits::{Context, Priority};\n\nuse crate::behaviors::{Behavior, MessageType, PullTxs, Request};\nuse crate::config::Generators;\nuse crate::strategy::{BehaviorGenerator, DefaultStrategy, Strategy};\n\npub struct Commander {\n    generators:   Generators,\n    pub_key_list: Vec<Bytes>,\n    to_worker:    UnboundedSender<(Context, Vec<Behavior>)>,\n    from_network: UnboundedReceiver<(Context, Request)>,\n}\n\nimpl Commander {\n    pub fn new(\n        generators: Generators,\n        pub_key_list: Vec<Bytes>,\n        to_worker: UnboundedSender<(Context, Vec<Behavior>)>,\n        from_network: UnboundedReceiver<(Context, Request)>,\n    ) -> Self {\n        Commander {\n            generators,\n            pub_key_list,\n            to_worker,\n            from_network,\n        }\n    }\n\n    pub async fn run(mut self) {\n        let mut list = self.generators.list.clone();\n        add_primitive_generator(&mut list);\n        let strategy = DefaultStrategy::new(self.pub_key_list.clone(), list);\n        let interval = self.generators.interval;\n\n        let mut cnt = 0;\n        loop {\n            let mut delay = time::delay_for(Duration::from_millis(interval));\n            tokio::select! 
{\n                _ = &mut delay => {\n                    let behaviors = strategy.get_behaviors(None);\n                    cnt += behaviors.len();\n                    println!(\"commander is working, accumulative gen {} behaviors\", cnt);\n                    let _ = self.to_worker.unbounded_send((Context::default(), behaviors));\n                }\n\n                Some((ctx, request)) = self.from_network.next() => {\n                    let behaviors = strategy.get_behaviors(Some(request));\n                    cnt += behaviors.len();\n                    println!(\"commander receive message from network, accumulative gen {} behaviors\", cnt);\n                    let _ = self.to_worker.unbounded_send((ctx, behaviors));\n                }\n            }\n        }\n    }\n}\n\nfn add_primitive_generator(list: &mut Vec<BehaviorGenerator>) {\n    let valid_recv_proposal_generator = BehaviorGenerator {\n        req_end:     Some(END_GOSSIP_SIGNED_PROPOSAL.to_string()),\n        msg_type:    MessageType::RecvProposal(PullTxs::Valid),\n        probability: 1.0,\n        num_range:   (1, 2),\n        priority:    Priority::High,\n    };\n    list.push(valid_recv_proposal_generator);\n\n    let valid_recv_vote_generator = BehaviorGenerator {\n        req_end:     Some(END_GOSSIP_SIGNED_VOTE.to_string()),\n        msg_type:    MessageType::RecvVote,\n        probability: 1.0,\n        num_range:   (1, 2),\n        priority:    Priority::High,\n    };\n    list.push(valid_recv_vote_generator);\n\n    let valid_recv_qc_generator = BehaviorGenerator {\n        req_end:     Some(END_GOSSIP_AGGREGATED_VOTE.to_string()),\n        msg_type:    MessageType::RecvQC,\n        probability: 1.0,\n        num_range:   (1, 2),\n        priority:    Priority::High,\n    };\n    list.push(valid_recv_qc_generator);\n\n    let valid_recv_choke_generator = BehaviorGenerator {\n        req_end:     Some(END_GOSSIP_SIGNED_CHOKE.to_string()),\n        msg_type:    
MessageType::RecvChoke,\n        probability: 1.0,\n        num_range:   (1, 2),\n        priority:    Priority::High,\n    };\n    list.push(valid_recv_choke_generator);\n\n    let valid_recv_height_generator = BehaviorGenerator {\n        req_end:     Some(BROADCAST_HEIGHT.to_string()),\n        msg_type:    MessageType::RecvHeight,\n        probability: 1.0,\n        num_range:   (1, 2),\n        priority:    Priority::High,\n    };\n    list.push(valid_recv_height_generator);\n}\n"
  },
  {
    "path": "byzantine/src/config.rs",
    "content": "use std::collections::HashMap;\nuse std::net::SocketAddr;\nuse std::path::PathBuf;\n\nuse serde_derive::Deserialize;\n\nuse core_mempool::{DEFAULT_BROADCAST_TXS_INTERVAL, DEFAULT_BROADCAST_TXS_SIZE};\nuse protocol::types::Hex;\n\nuse crate::strategy::BehaviorGenerator;\n\n#[derive(Debug, Deserialize)]\npub struct ConfigGraphQL {\n    pub listening_address: SocketAddr,\n    pub graphql_uri:       String,\n    pub graphiql_uri:      String,\n    #[serde(default)]\n    pub workers:           usize,\n    #[serde(default)]\n    pub maxconn:           usize,\n    #[serde(default)]\n    pub max_payload_size:  usize,\n    pub tls:               Option<ConfigGraphQLTLS>,\n}\n\n#[derive(Debug, Deserialize)]\npub struct ConfigGraphQLTLS {\n    pub private_key_file_path:       PathBuf,\n    pub certificate_chain_file_path: PathBuf,\n}\n\n#[derive(Debug, Deserialize)]\npub struct ConfigNetwork {\n    pub bootstraps:                 Option<Vec<ConfigNetworkBootstrap>>,\n    pub allowlist:                  Option<Vec<String>>,\n    pub allowlist_only:             Option<bool>,\n    pub trust_interval_duration:    Option<u64>,\n    pub trust_max_history_duration: Option<u64>,\n    pub fatal_ban_duration:         Option<u64>,\n    pub soft_ban_duration:          Option<u64>,\n    pub max_connected_peers:        Option<usize>,\n    pub same_ip_conn_limit:         Option<usize>,\n    pub inbound_conn_limit:         Option<usize>,\n    pub listening_address:          SocketAddr,\n    pub rpc_timeout:                Option<u64>,\n    pub selfcheck_interval:         Option<u64>,\n    pub send_buffer_size:           Option<usize>,\n    pub write_timeout:              Option<u64>,\n    pub recv_buffer_size:           Option<usize>,\n    pub max_frame_length:           Option<usize>,\n    pub max_wait_streams:           Option<usize>,\n    pub ping_interval:              Option<u64>,\n}\n\n#[derive(Debug, Deserialize)]\npub struct ConfigNetworkBootstrap {\n    pub peer_id: 
String,\n    pub address: String,\n}\n\n#[derive(Debug, Deserialize)]\npub struct ConfigConsensus {\n    pub sync_txs_chunk_size: usize,\n}\n\nimpl Default for ConfigConsensus {\n    fn default() -> Self {\n        Self {\n            sync_txs_chunk_size: 5000,\n        }\n    }\n}\n\nfn default_broadcast_txs_size() -> usize {\n    DEFAULT_BROADCAST_TXS_SIZE\n}\n\nfn default_broadcast_txs_interval() -> u64 {\n    DEFAULT_BROADCAST_TXS_INTERVAL\n}\n\n#[derive(Debug, Deserialize)]\npub struct ConfigMempool {\n    pub pool_size: u64,\n\n    #[serde(default = \"default_broadcast_txs_size\")]\n    pub broadcast_txs_size:     usize,\n    #[serde(default = \"default_broadcast_txs_interval\")]\n    pub broadcast_txs_interval: u64,\n}\n\n#[derive(Debug, Deserialize)]\npub struct ConfigExecutor {\n    pub light:             bool,\n    pub triedb_cache_size: usize,\n}\n\n#[derive(Debug, Deserialize)]\npub struct ConfigRocksDB {\n    pub max_open_files: i32,\n}\n\nimpl Default for ConfigRocksDB {\n    fn default() -> Self {\n        Self { max_open_files: 64 }\n    }\n}\n\n#[derive(Debug, Deserialize)]\npub struct ConfigLogger {\n    pub filter:                     String,\n    pub log_to_console:             bool,\n    pub console_show_file_and_line: bool,\n    pub log_to_file:                bool,\n    pub metrics:                    bool,\n    pub log_path:                   PathBuf,\n    #[serde(default)]\n    pub modules_level:              HashMap<String, String>,\n}\n\nimpl Default for ConfigLogger {\n    fn default() -> Self {\n        Self {\n            filter:                     \"info\".into(),\n            log_to_console:             true,\n            console_show_file_and_line: false,\n            log_to_file:                true,\n            metrics:                    true,\n            log_path:                   \"logs/\".into(),\n            modules_level:              HashMap::new(),\n        }\n    }\n}\n\n#[derive(Debug, Deserialize)]\npub struct 
ConfigAPM {\n    pub service_name:       String,\n    pub tracing_address:    SocketAddr,\n    pub tracing_batch_size: Option<usize>,\n}\n\n#[derive(Debug, Deserialize)]\npub struct Config {\n    // crypto\n    pub privkey:   Hex,\n    // db config\n    pub data_path: PathBuf,\n\n    pub graphql:   ConfigGraphQL,\n    pub network:   ConfigNetwork,\n    pub mempool:   ConfigMempool,\n    pub executor:  ConfigExecutor,\n    #[serde(default)]\n    pub consensus: ConfigConsensus,\n    #[serde(default)]\n    pub logger:    ConfigLogger,\n    #[serde(default)]\n    pub rocksdb:   ConfigRocksDB,\n    pub apm:       Option<ConfigAPM>,\n}\n\nimpl Config {\n    pub fn data_path_for_state(&self) -> PathBuf {\n        let mut path_state = self.data_path.clone();\n        path_state.push(\"rocksdb\");\n        path_state.push(\"state_data\");\n        path_state\n    }\n\n    pub fn data_path_for_block(&self) -> PathBuf {\n        let mut path_state = self.data_path.clone();\n        path_state.push(\"rocksdb\");\n        path_state.push(\"block_data\");\n        path_state\n    }\n\n    pub fn data_path_for_txs_wal(&self) -> PathBuf {\n        let mut path_state = self.data_path.clone();\n        path_state.push(\"txs_wal\");\n        path_state\n    }\n}\n\n#[derive(Debug, Deserialize)]\npub struct Generators {\n    pub interval: u64, // ms\n    pub list:     Vec<BehaviorGenerator>,\n}\n"
  },
  {
    "path": "byzantine/src/default_start.rs",
    "content": "use std::collections::HashMap;\nuse std::convert::TryFrom;\nuse std::sync::Arc;\n\nuse bytes::Bytes;\nuse common_crypto::{\n    BlsCommonReference, BlsPrivateKey, BlsPublicKey, PublicKey, Secp256k1PrivateKey, ToPublicKey,\n    UncompressedPublicKey,\n};\nuse futures::channel::mpsc::unbounded;\nuse futures::future;\n#[cfg(unix)]\nuse tokio::signal::unix::{self as os_impl};\n\nuse core_consensus::message::{\n    BROADCAST_HEIGHT, END_GOSSIP_AGGREGATED_VOTE, END_GOSSIP_SIGNED_CHOKE,\n    END_GOSSIP_SIGNED_PROPOSAL, END_GOSSIP_SIGNED_VOTE,\n};\nuse core_consensus::util::OverlordCrypto;\nuse core_mempool::{MsgPushTxs, END_GOSSIP_NEW_TXS, RPC_PULL_TXS, RPC_RESP_PULL_TXS};\nuse core_network::{NetworkConfig, NetworkService, PeerId, PeerIdExt};\nuse protocol::traits::{Context, Network};\nuse protocol::types::{Address, Genesis, Metadata, Validator};\nuse protocol::ProtocolResult;\n\nuse crate::commander::Commander;\nuse crate::config::{Config, Generators};\nuse crate::message::{\n    ChokeMessageHandler, NewTxsHandler, ProposalMessageHandler, PullTxsHandler, QCMessageHandler,\n    RemoteHeightMessageHandler, VoteMessageHandler,\n};\nuse crate::worker::Worker;\n\npub async fn start(config: Config, genesis: Genesis, generators: Generators) -> ProtocolResult<()> {\n    log::info!(\"byzantine node starts\");\n\n    // Init network\n    let network_config = NetworkConfig::new()\n        .max_connections(config.network.max_connected_peers)?\n        .same_ip_conn_limit(config.network.same_ip_conn_limit)\n        .inbound_conn_limit(config.network.inbound_conn_limit)?\n        .allowlist_only(config.network.allowlist_only)\n        .peer_trust_metric(\n            config.network.trust_interval_duration,\n            config.network.trust_max_history_duration,\n        )?\n        .peer_soft_ban(config.network.soft_ban_duration)\n        .peer_fatal_ban(config.network.fatal_ban_duration)\n        .rpc_timeout(config.network.rpc_timeout)\n        
.ping_interval(config.network.ping_interval)\n        .selfcheck_interval(config.network.selfcheck_interval)\n        .max_wait_streams(config.network.max_wait_streams)\n        .max_frame_length(config.network.max_frame_length)\n        .send_buffer_size(config.network.send_buffer_size)\n        .write_timeout(config.network.write_timeout)\n        .recv_buffer_size(config.network.recv_buffer_size);\n\n    let network_privkey = config.privkey.as_string_trim0x();\n\n    let mut bootstrap_pairs = vec![];\n    if let Some(bootstrap) = &config.network.bootstraps {\n        for bootstrap in bootstrap.iter() {\n            bootstrap_pairs.push((bootstrap.peer_id.to_owned(), bootstrap.address.to_owned()));\n        }\n    }\n\n    let allowlist = config.network.allowlist.clone().unwrap_or_default();\n    let network_config = network_config\n        .bootstraps(bootstrap_pairs)?\n        .allowlist(allowlist)?\n        .secio_keypair(network_privkey)?;\n\n    let mut network_service = NetworkService::new(network_config);\n    network_service\n        .listen(config.network.listening_address)\n        .await?;\n\n    // self private key\n    let hex_privkey =\n        hex::decode(config.privkey.as_string_trim0x()).expect(\"decode privkey error!\");\n    let my_privkey =\n        Secp256k1PrivateKey::try_from(hex_privkey.as_ref()).expect(\"get privkey failed!\");\n    let my_pubkey = my_privkey.pub_key();\n    let my_address = Address::from_pubkey_bytes(my_pubkey.to_uncompressed_bytes())?;\n\n    // get pub_key_list\n    let metadata: Metadata =\n        serde_json::from_str(genesis.get_payload(\"metadata\")).expect(\"Decode metadata failed!\");\n    let pub_key_list: Vec<Bytes> = metadata\n        .verifier_list\n        .iter()\n        .map(|v| v.pub_key.decode())\n        .filter(|addr| addr != &my_pubkey.to_bytes())\n        .collect();\n    let validators: Vec<Validator> = metadata\n        .verifier_list\n        .iter()\n        .map(|v| Validator {\n            
pub_key:        v.pub_key.decode(),\n            propose_weight: v.propose_weight,\n            vote_weight:    v.vote_weight,\n        })\n        .collect();\n\n    assert_ne!(\n        pub_key_list.len(),\n        0,\n        \"It's meaningless to test a system that contains only one node, which is the byzantine node itself\"\n    );\n\n    // get crypto\n    let mut bls_pub_keys = HashMap::new();\n    for validator_extend in metadata.verifier_list.iter() {\n        let address = validator_extend.pub_key.decode();\n        let hex_pubkey = hex::decode(validator_extend.bls_pub_key.as_string_trim0x())\n            .expect(\"decode pubkey failed\");\n        let pub_key =\n            BlsPublicKey::try_from(hex_pubkey.as_ref()).expect(\"try into BlsPublicKey failed\");\n        bls_pub_keys.insert(address, pub_key);\n    }\n\n    let mut priv_key = Vec::new();\n    priv_key.extend_from_slice(&[0u8; 16]);\n    let mut tmp = hex::decode(config.privkey.as_string_trim0x()).unwrap();\n    priv_key.append(&mut tmp);\n    let bls_priv_key =\n        BlsPrivateKey::try_from(priv_key.as_ref()).expect(\"try into BlsPrivateKey failed\");\n\n    let hex_common_ref =\n        hex::decode(metadata.common_ref.as_string_trim0x()).expect(\"decode common ref failed\");\n    let common_ref: BlsCommonReference = std::str::from_utf8(hex_common_ref.as_ref())\n        .expect(\"transfer common_ref failed\")\n        .into();\n    let crypto = OverlordCrypto::new(bls_priv_key, bls_pub_keys, common_ref);\n\n    let (network_tx, network_rx) = unbounded();\n    let (worker_tx, worker_rx) = unbounded();\n\n    // set chain id in network\n    network_service.set_chain_id(metadata.chain_id.clone());\n\n    let peer_ids = metadata\n        .verifier_list\n        .iter()\n        .map(|v| PeerId::from_pubkey_bytes(v.pub_key.decode()).map(PeerIdExt::into_bytes_ext))\n        .collect::<Result<Vec<_>, _>>()?;\n\n    network_service\n        .handle()\n        .tag_consensus(Context::new(), peer_ids)?;\n\n    // 
register broadcast new transaction\n    network_service\n        .register_endpoint_handler(END_GOSSIP_NEW_TXS, NewTxsHandler::new(network_tx.clone()))?;\n\n    // register pull txs from other node\n    network_service\n        .register_endpoint_handler(RPC_PULL_TXS, PullTxsHandler::new(network_tx.clone()))?;\n    network_service.register_rpc_response::<MsgPushTxs>(RPC_RESP_PULL_TXS)?;\n\n    network_service.register_endpoint_handler(\n        END_GOSSIP_SIGNED_PROPOSAL,\n        ProposalMessageHandler::new(network_tx.clone()),\n    )?;\n\n    network_service.register_endpoint_handler(\n        END_GOSSIP_SIGNED_VOTE,\n        VoteMessageHandler::new(network_tx.clone()),\n    )?;\n\n    network_service.register_endpoint_handler(\n        END_GOSSIP_AGGREGATED_VOTE,\n        QCMessageHandler::new(network_tx.clone()),\n    )?;\n\n    network_service.register_endpoint_handler(\n        END_GOSSIP_SIGNED_CHOKE,\n        ChokeMessageHandler::new(network_tx.clone()),\n    )?;\n\n    network_service.register_endpoint_handler(\n        BROADCAST_HEIGHT,\n        RemoteHeightMessageHandler::new(network_tx.clone()),\n    )?;\n\n    let commander = Commander::new(generators, pub_key_list, worker_tx, network_rx);\n    let worker = Worker::new(\n        my_address,\n        my_pubkey.to_bytes(),\n        metadata,\n        validators,\n        crypto,\n        Arc::new(network_service.handle()),\n        worker_rx,\n    );\n\n    // Run network\n    tokio::spawn(network_service);\n\n    // Run worker\n    tokio::spawn(async move {\n        worker.run().await;\n    });\n\n    // run commander\n    let (abortable_demon, abort_handle) = future::abortable(commander.run());\n    let exec_handler = tokio::task::spawn_local(abortable_demon);\n    let ctrl_c_handler = tokio::task::spawn_local(async {\n        #[cfg(windows)]\n        let _ = tokio::signal::ctrl_c().await;\n        #[cfg(unix)]\n        {\n            let mut sigtun_int = 
os_impl::signal(os_impl::SignalKind::interrupt()).unwrap();\n            let mut sigtun_term = os_impl::signal(os_impl::SignalKind::terminate()).unwrap();\n            tokio::select! {\n                _ = sigtun_int.recv() => {}\n                _ = sigtun_term.recv() => {}\n            };\n        }\n    });\n\n    tokio::select! {\n        _ = exec_handler =>{log::error!(\"exec_daemon is down, quit.\")},\n        _ = ctrl_c_handler =>{log::info!(\"ctrl + c is pressed, quit.\")},\n    };\n    abort_handle.abort();\n\n    Ok(())\n}\n"
  },
  {
    "path": "byzantine/src/invalid_types.rs",
    "content": "use std::error::Error;\nuse std::sync::Arc;\n\nuse bytes::Bytes;\nuse derive_more::Constructor;\nuse overlord::types::SignedProposal;\nuse overlord::{Codec, Crypto};\nuse rlp::Encodable;\n\nuse common_crypto::Secp256k1PrivateKey;\nuse core_consensus::util::OverlordCrypto;\nuse core_mempool::MsgPullTxs;\nuse protocol::traits::MessageCodec;\nuse protocol::types::{Address, Hash, Metadata, SignedTransaction, Validator};\nuse protocol::ProtocolResult;\n\nuse crate::utils::{\n    gen_invalid_address, gen_invalid_aggregate_sig, gen_invalid_chain_id,\n    gen_invalid_content_struct_proposal, gen_invalid_from, gen_invalid_hash, gen_invalid_lock,\n    gen_invalid_proof, gen_invalid_request, gen_invalid_sig, gen_invalid_validators,\n    gen_positive_range, gen_random_bytes, gen_range, gen_signed_proposal_from_header,\n    gen_signed_tx, gen_valid_block, gen_valid_block_header, gen_valid_choke, gen_valid_hash,\n    gen_valid_proposal, gen_valid_qc, gen_valid_raw_tx, gen_valid_signed_choke,\n    gen_valid_signed_proposal, gen_valid_signed_tx, gen_valid_signed_vote, gen_valid_vote,\n};\nuse crate::worker::State;\n\n#[derive(Constructor, Clone, Debug, Eq, PartialEq)]\npub struct InvalidStruct {\n    pub inner: Bytes,\n}\n\nimpl InvalidStruct {\n    pub fn gen(len: usize) -> Self {\n        InvalidStruct {\n            inner: gen_random_bytes(len),\n        }\n    }\n}\n\nimpl MessageCodec for InvalidStruct {\n    fn encode(&mut self) -> ProtocolResult<Bytes> {\n        Ok(self.inner.clone())\n    }\n\n    fn decode(bytes: Bytes) -> ProtocolResult<Self> {\n        Ok(InvalidStruct::new(bytes))\n    }\n}\n\nimpl Codec for InvalidStruct {\n    fn encode(&self) -> Result<Bytes, Box<dyn Error + Send>> {\n        let bytes = self.inner.clone();\n        Ok(bytes)\n    }\n\n    fn decode(data: Bytes) -> Result<Self, Box<dyn Error + Send>> {\n        Ok(InvalidStruct::new(data))\n    }\n}\n\n//################################\n//##########  NewChoke  
##########\n//########## ######################\npub fn gen_invalid_struct_new_choke(\n    _state: &State,\n    _crypto: &Arc<OverlordCrypto>,\n    _my_pub_key: &Bytes,\n) -> Vec<u8> {\n    gen_random_bytes(100).to_vec()\n}\n\npub fn gen_invalid_height_new_choke(\n    state: &State,\n    crypto: &Arc<OverlordCrypto>,\n    my_pub_key: &Bytes,\n) -> Vec<u8> {\n    let mut choke = gen_valid_choke(state, my_pub_key);\n    choke.height = gen_positive_range(state.height, 20);\n    let signed_choke = gen_valid_signed_choke(choke, crypto, my_pub_key);\n    signed_choke.rlp_bytes()\n}\n\npub fn gen_invalid_round_new_choke(\n    state: &State,\n    crypto: &Arc<OverlordCrypto>,\n    my_pub_key: &Bytes,\n) -> Vec<u8> {\n    let mut choke = gen_valid_choke(state, my_pub_key);\n    choke.round = gen_positive_range(state.round, 20);\n    let signed_choke = gen_valid_signed_choke(choke, crypto, my_pub_key);\n    signed_choke.rlp_bytes()\n}\n\npub fn gen_invalid_from_new_vote(\n    state: &State,\n    crypto: &Arc<OverlordCrypto>,\n    my_pub_key: &Bytes,\n) -> Vec<u8> {\n    let mut choke = gen_valid_choke(state, my_pub_key);\n    choke.from = gen_invalid_from();\n    let signed_choke = gen_valid_signed_choke(choke, crypto, my_pub_key);\n    signed_choke.rlp_bytes()\n}\n\npub fn gen_invalid_sig_new_choke(\n    state: &State,\n    crypto: &Arc<OverlordCrypto>,\n    my_pub_key: &Bytes,\n) -> Vec<u8> {\n    let choke = gen_valid_choke(state, my_pub_key);\n    let mut signed_choke = gen_valid_signed_choke(choke, crypto, my_pub_key);\n    signed_choke.signature = gen_invalid_sig();\n    signed_choke.rlp_bytes()\n}\n\npub fn gen_invalid_address_new_choke(\n    state: &State,\n    crypto: &Arc<OverlordCrypto>,\n    my_pub_key: &Bytes,\n) -> Vec<u8> {\n    let choke = gen_valid_choke(state, my_pub_key);\n    let mut signed_choke = gen_valid_signed_choke(choke, crypto, my_pub_key);\n    signed_choke.address = gen_invalid_address().as_bytes();\n    
signed_choke.rlp_bytes()\n}\n\n//#############################\n//##########  NewQC  ##########\n//########## ###################\npub fn gen_invalid_struct_new_qc(_state: &State, _my_pub_key: &Bytes) -> Vec<u8> {\n    gen_random_bytes(100).to_vec()\n}\n\npub fn gen_invalid_height_new_qc(state: &State, my_pub_key: &Bytes) -> Vec<u8> {\n    let mut qc = gen_valid_qc(state, my_pub_key);\n    qc.height = gen_positive_range(state.height, 20);\n    qc.rlp_bytes()\n}\n\npub fn gen_invalid_round_new_qc(state: &State, my_pub_key: &Bytes) -> Vec<u8> {\n    let mut qc = gen_valid_qc(state, my_pub_key);\n    qc.round = gen_positive_range(state.round, 20);\n    qc.rlp_bytes()\n}\n\npub fn gen_invalid_block_hash_new_qc(state: &State, my_pub_key: &Bytes) -> Vec<u8> {\n    let mut qc = gen_valid_qc(state, my_pub_key);\n    qc.block_hash = gen_invalid_hash().as_bytes();\n    qc.rlp_bytes()\n}\n\npub fn gen_invalid_sig_new_qc(state: &State, my_pub_key: &Bytes) -> Vec<u8> {\n    let mut qc = gen_valid_qc(state, my_pub_key);\n    qc.signature = gen_invalid_aggregate_sig();\n    qc.rlp_bytes()\n}\n\npub fn gen_invalid_leader_new_qc(state: &State, my_pub_key: &Bytes) -> Vec<u8> {\n    let mut qc = gen_valid_qc(state, my_pub_key);\n    qc.leader = gen_invalid_address().as_bytes();\n    qc.rlp_bytes()\n}\n\n//###############################\n//##########  NewVote  ##########\n//########## #####################\npub fn gen_invalid_struct_new_vote(\n    _state: &State,\n    _crypto: &Arc<OverlordCrypto>,\n    _my_pub_key: &Bytes,\n) -> Vec<u8> {\n    gen_random_bytes(100).to_vec()\n}\n\npub fn gen_invalid_height_new_vote(\n    state: &State,\n    crypto: &Arc<OverlordCrypto>,\n    my_pub_key: &Bytes,\n) -> Vec<u8> {\n    let mut vote = gen_valid_vote(state);\n    vote.height = gen_positive_range(state.height, 20);\n    let signed_vote = gen_valid_signed_vote(vote, crypto, my_pub_key);\n    signed_vote.rlp_bytes()\n}\n\npub fn gen_invalid_round_new_vote(\n    state: &State,\n    crypto: 
&Arc<OverlordCrypto>,\n    my_pub_key: &Bytes,\n) -> Vec<u8> {\n    let mut vote = gen_valid_vote(state);\n    vote.round = gen_positive_range(state.round, 20);\n    let signed_vote = gen_valid_signed_vote(vote, crypto, my_pub_key);\n    signed_vote.rlp_bytes()\n}\n\npub fn gen_invalid_block_hash_new_vote(\n    state: &State,\n    crypto: &Arc<OverlordCrypto>,\n    my_pub_key: &Bytes,\n) -> Vec<u8> {\n    let mut vote = gen_valid_vote(state);\n    vote.block_hash = gen_invalid_hash().as_bytes();\n    let signed_vote = gen_valid_signed_vote(vote, crypto, my_pub_key);\n    signed_vote.rlp_bytes()\n}\n\npub fn gen_invalid_sig_new_vote(\n    state: &State,\n    crypto: &Arc<OverlordCrypto>,\n    my_pub_key: &Bytes,\n) -> Vec<u8> {\n    let vote = gen_valid_vote(state);\n    let mut signed_vote = gen_valid_signed_vote(vote, crypto, my_pub_key);\n    signed_vote.signature = gen_invalid_sig();\n    signed_vote.rlp_bytes()\n}\n\npub fn gen_invalid_voter_new_vote(\n    state: &State,\n    crypto: &Arc<OverlordCrypto>,\n    my_pub_key: &Bytes,\n) -> Vec<u8> {\n    let vote = gen_valid_vote(state);\n    let mut signed_vote = gen_valid_signed_vote(vote, crypto, my_pub_key);\n    signed_vote.voter = gen_random_bytes(100);\n    signed_vote.rlp_bytes()\n}\n\n//###################################\n//##########  NewProposal  ##########\n//########## #########################\npub fn gen_valid_new_proposal(\n    state: &State,\n    metadata: &Metadata,\n    crypto: &Arc<OverlordCrypto>,\n    my_address: &Address,\n    my_pub_key: &Bytes,\n    validators: &[Validator],\n) -> Vec<u8> {\n    let order_tx_hashes: Vec<Hash> = (0..gen_range(0, 1000)).map(|_| gen_valid_hash()).collect();\n    let propose_tx_hashes: Vec<Hash> = (0..gen_range(0, 1000)).map(|_| gen_valid_hash()).collect();\n    let header = gen_valid_block_header(\n        state,\n        metadata,\n        my_address,\n        validators,\n        order_tx_hashes.clone(),\n    );\n\n    let block = gen_valid_block(header, 
order_tx_hashes);\n    let proposal = gen_valid_proposal(block, state, my_pub_key, propose_tx_hashes);\n    let signed_proposal = gen_valid_signed_proposal(proposal, crypto);\n    signed_proposal.rlp_bytes()\n}\n\npub fn gen_invalid_prop_proposer_new_proposal(\n    state: &State,\n    metadata: &Metadata,\n    crypto: &Arc<OverlordCrypto>,\n    my_address: &Address,\n    my_pub_key: &Bytes,\n    validators: &[Validator],\n) -> Vec<u8> {\n    let header = gen_valid_block_header(state, metadata, my_address, validators, vec![]);\n    let block = gen_valid_block(header, vec![]);\n    let mut proposal = gen_valid_proposal(block, state, my_pub_key, vec![]);\n    proposal.proposer = gen_invalid_address().as_bytes();\n    let signed_proposal = gen_valid_signed_proposal(proposal, crypto);\n    signed_proposal.rlp_bytes()\n}\n\npub fn gen_invalid_lock_new_proposal(\n    state: &State,\n    metadata: &Metadata,\n    crypto: &Arc<OverlordCrypto>,\n    my_address: &Address,\n    my_pub_key: &Bytes,\n    validators: &[Validator],\n) -> Vec<u8> {\n    let header = gen_valid_block_header(state, metadata, my_address, validators, vec![]);\n    let block = gen_valid_block(header, vec![]);\n    let mut proposal = gen_valid_proposal(block, state, my_pub_key, vec![]);\n    proposal.lock = Some(gen_invalid_lock());\n    let signed_proposal = gen_valid_signed_proposal(proposal, crypto);\n    signed_proposal.rlp_bytes()\n}\n\npub fn gen_invalid_block_hash_new_proposal(\n    state: &State,\n    metadata: &Metadata,\n    crypto: &Arc<OverlordCrypto>,\n    my_address: &Address,\n    my_pub_key: &Bytes,\n    validators: &[Validator],\n) -> Vec<u8> {\n    let header = gen_valid_block_header(state, metadata, my_address, validators, vec![]);\n    let block = gen_valid_block(header, vec![]);\n    let mut proposal = gen_valid_proposal(block, state, my_pub_key, vec![]);\n    proposal.block_hash = gen_invalid_hash().as_bytes();\n    let signed_proposal = gen_valid_signed_proposal(proposal, crypto);\n 
   signed_proposal.rlp_bytes()\n}\n\npub fn gen_invalid_content_struct_new_proposal(\n    state: &State,\n    _metadata: &Metadata,\n    crypto: &Arc<OverlordCrypto>,\n    _my_address: &Address,\n    my_pub_key: &Bytes,\n    _validators: &[Validator],\n) -> Vec<u8> {\n    let proposal = gen_invalid_content_struct_proposal(state, my_pub_key);\n    let signature = crypto\n        .sign(crypto.hash(proposal.content.inner.clone()))\n        .expect(\"sign proposal failed\");\n\n    let signed_proposal = SignedProposal {\n        signature,\n        proposal,\n    };\n    signed_proposal.rlp_bytes()\n}\n\npub fn gen_invalid_round_new_proposal(\n    state: &State,\n    metadata: &Metadata,\n    crypto: &Arc<OverlordCrypto>,\n    my_address: &Address,\n    my_pub_key: &Bytes,\n    validators: &[Validator],\n) -> Vec<u8> {\n    let header = gen_valid_block_header(state, metadata, my_address, validators, vec![]);\n    let block = gen_valid_block(header, vec![]);\n    let mut proposal = gen_valid_proposal(block, state, my_pub_key, vec![]);\n    proposal.round = gen_positive_range(state.round, 20);\n    let signed_proposal = gen_valid_signed_proposal(proposal, crypto);\n    signed_proposal.rlp_bytes()\n}\n\npub fn gen_invalid_prop_height_new_proposal(\n    state: &State,\n    metadata: &Metadata,\n    crypto: &Arc<OverlordCrypto>,\n    my_address: &Address,\n    my_pub_key: &Bytes,\n    validators: &[Validator],\n) -> Vec<u8> {\n    let header = gen_valid_block_header(state, metadata, my_address, validators, vec![]);\n    let block = gen_valid_block(header, vec![]);\n    let mut proposal = gen_valid_proposal(block, state, my_pub_key, vec![]);\n    proposal.height = gen_positive_range(state.height, 20);\n    let signed_proposal = gen_valid_signed_proposal(proposal, crypto);\n    signed_proposal.rlp_bytes()\n}\n\npub fn gen_invalid_sig_new_proposal(\n    state: &State,\n    metadata: &Metadata,\n    _crypto: &Arc<OverlordCrypto>,\n    my_address: &Address,\n    my_pub_key: 
&Bytes,\n    validators: &[Validator],\n) -> Vec<u8> {\n    let header = gen_valid_block_header(state, metadata, my_address, validators, vec![]);\n    let block = gen_valid_block(header, vec![]);\n    let proposal = gen_valid_proposal(block, state, my_pub_key, vec![]);\n    let signed_proposal = SignedProposal {\n        proposal,\n        signature: gen_invalid_sig(),\n    };\n    signed_proposal.rlp_bytes()\n}\n\npub fn gen_invalid_tx_hash_new_proposal(\n    state: &State,\n    metadata: &Metadata,\n    crypto: &Arc<OverlordCrypto>,\n    my_address: &Address,\n    my_pub_key: &Bytes,\n    validators: &[Validator],\n) -> Vec<u8> {\n    let order_tx_hashes: Vec<Hash> = (0..gen_range(0, 1000))\n        .map(|_| gen_invalid_hash())\n        .collect();\n    let propose_tx_hashes: Vec<Hash> = (0..gen_range(0, 1000))\n        .map(|_| gen_invalid_hash())\n        .collect();\n    let header = gen_valid_block_header(\n        state,\n        metadata,\n        my_address,\n        validators,\n        order_tx_hashes.clone(),\n    );\n\n    let block = gen_valid_block(header, order_tx_hashes);\n    let proposal = gen_valid_proposal(block, state, my_pub_key, propose_tx_hashes);\n    let signed_proposal = gen_valid_signed_proposal(proposal, crypto);\n    signed_proposal.rlp_bytes()\n}\n\npub fn gen_invalid_validators_new_proposal(\n    state: &State,\n    metadata: &Metadata,\n    crypto: &Arc<OverlordCrypto>,\n    my_address: &Address,\n    my_pub_key: &Bytes,\n    validators: &[Validator],\n) -> Vec<u8> {\n    let mut header = gen_valid_block_header(state, metadata, my_address, validators, vec![]);\n    header.validators = gen_invalid_validators();\n    gen_signed_proposal_from_header(header, state, crypto, my_pub_key)\n}\n\npub fn gen_invalid_version_new_proposal(\n    state: &State,\n    metadata: &Metadata,\n    crypto: &Arc<OverlordCrypto>,\n    my_address: &Address,\n    my_pub_key: &Bytes,\n    validators: &[Validator],\n) -> Vec<u8> {\n    let mut header = 
gen_valid_block_header(state, metadata, my_address, validators, vec![]);\n    header.validator_version = gen_range(u64::MIN, u64::MAX);\n    gen_signed_proposal_from_header(header, state, crypto, my_pub_key)\n}\n\npub fn gen_invalid_proof_new_proposal(\n    state: &State,\n    metadata: &Metadata,\n    crypto: &Arc<OverlordCrypto>,\n    my_address: &Address,\n    my_pub_key: &Bytes,\n    validators: &[Validator],\n) -> Vec<u8> {\n    let mut header = gen_valid_block_header(state, metadata, my_address, validators, vec![]);\n    header.proof = gen_invalid_proof();\n    gen_signed_proposal_from_header(header, state, crypto, my_pub_key)\n}\n\npub fn gen_invalid_block_proposer_new_proposal(\n    state: &State,\n    metadata: &Metadata,\n    crypto: &Arc<OverlordCrypto>,\n    my_address: &Address,\n    my_pub_key: &Bytes,\n    validators: &[Validator],\n) -> Vec<u8> {\n    let mut header = gen_valid_block_header(state, metadata, my_address, validators, vec![]);\n    header.proposer = gen_invalid_address();\n    gen_signed_proposal_from_header(header, state, crypto, my_pub_key)\n}\n\npub fn gen_invalid_cycle_used_new_proposal(\n    state: &State,\n    metadata: &Metadata,\n    crypto: &Arc<OverlordCrypto>,\n    my_address: &Address,\n    my_pub_key: &Bytes,\n    validators: &[Validator],\n) -> Vec<u8> {\n    let mut header = gen_valid_block_header(state, metadata, my_address, validators, vec![]);\n    header.cycles_used = vec![gen_range(u64::MIN, u64::MAX)];\n    gen_signed_proposal_from_header(header, state, crypto, my_pub_key)\n}\n\npub fn gen_invalid_receipt_root_new_proposal(\n    state: &State,\n    metadata: &Metadata,\n    crypto: &Arc<OverlordCrypto>,\n    my_address: &Address,\n    my_pub_key: &Bytes,\n    validators: &[Validator],\n) -> Vec<u8> {\n    let mut header = gen_valid_block_header(state, metadata, my_address, validators, vec![]);\n    header.receipt_root = vec![gen_invalid_hash()];\n    gen_signed_proposal_from_header(header, state, crypto, 
my_pub_key)\n}\n\npub fn gen_invalid_state_root_new_proposal(\n    state: &State,\n    metadata: &Metadata,\n    crypto: &Arc<OverlordCrypto>,\n    my_address: &Address,\n    my_pub_key: &Bytes,\n    validators: &[Validator],\n) -> Vec<u8> {\n    let mut header = gen_valid_block_header(state, metadata, my_address, validators, vec![]);\n    header.state_root = gen_invalid_hash();\n    gen_signed_proposal_from_header(header, state, crypto, my_pub_key)\n}\n\npub fn gen_invalid_confirm_root_new_proposal(\n    state: &State,\n    metadata: &Metadata,\n    crypto: &Arc<OverlordCrypto>,\n    my_address: &Address,\n    my_pub_key: &Bytes,\n    validators: &[Validator],\n) -> Vec<u8> {\n    let mut header = gen_valid_block_header(state, metadata, my_address, validators, vec![]);\n    header.confirm_root = vec![gen_invalid_hash()];\n    gen_signed_proposal_from_header(header, state, crypto, my_pub_key)\n}\n\npub fn gen_invalid_signed_tx_hash_new_proposal(\n    state: &State,\n    metadata: &Metadata,\n    crypto: &Arc<OverlordCrypto>,\n    my_address: &Address,\n    my_pub_key: &Bytes,\n    validators: &[Validator],\n) -> Vec<u8> {\n    let mut header = gen_valid_block_header(state, metadata, my_address, validators, vec![]);\n    header.order_signed_transactions_hash = gen_invalid_hash();\n    gen_signed_proposal_from_header(header, state, crypto, my_pub_key)\n}\n\npub fn gen_invalid_order_root_new_proposal(\n    state: &State,\n    metadata: &Metadata,\n    crypto: &Arc<OverlordCrypto>,\n    my_address: &Address,\n    my_pub_key: &Bytes,\n    validators: &[Validator],\n) -> Vec<u8> {\n    let mut header = gen_valid_block_header(state, metadata, my_address, validators, vec![]);\n    header.order_root = gen_invalid_hash();\n    gen_signed_proposal_from_header(header, state, crypto, my_pub_key)\n}\n\npub fn gen_invalid_timestamp_new_proposal(\n    state: &State,\n    metadata: &Metadata,\n    crypto: &Arc<OverlordCrypto>,\n    my_address: &Address,\n    my_pub_key: &Bytes,\n   
 validators: &[Validator],\n) -> Vec<u8> {\n    let mut header = gen_valid_block_header(state, metadata, my_address, validators, vec![]);\n    header.timestamp = gen_positive_range(state.prev_timestamp, 1_000_000);\n    gen_signed_proposal_from_header(header, state, crypto, my_pub_key)\n}\n\npub fn gen_invalid_exec_height_new_proposal(\n    state: &State,\n    metadata: &Metadata,\n    crypto: &Arc<OverlordCrypto>,\n    my_address: &Address,\n    my_pub_key: &Bytes,\n    validators: &[Validator],\n) -> Vec<u8> {\n    let mut header = gen_valid_block_header(state, metadata, my_address, validators, vec![]);\n    header.exec_height = gen_positive_range(state.exec_height, 20);\n    gen_signed_proposal_from_header(header, state, crypto, my_pub_key)\n}\n\npub fn gen_invalid_height_new_proposal(\n    state: &State,\n    metadata: &Metadata,\n    crypto: &Arc<OverlordCrypto>,\n    my_address: &Address,\n    my_pub_key: &Bytes,\n    validators: &[Validator],\n) -> Vec<u8> {\n    let mut header = gen_valid_block_header(state, metadata, my_address, validators, vec![]);\n    header.height = gen_positive_range(state.height, 20);\n    gen_signed_proposal_from_header(header, state, crypto, my_pub_key)\n}\n\npub fn gen_invalid_prev_hash_new_proposal(\n    state: &State,\n    metadata: &Metadata,\n    crypto: &Arc<OverlordCrypto>,\n    my_address: &Address,\n    my_pub_key: &Bytes,\n    validators: &[Validator],\n) -> Vec<u8> {\n    let mut header = gen_valid_block_header(state, metadata, my_address, validators, vec![]);\n    header.prev_hash = gen_invalid_hash();\n    gen_signed_proposal_from_header(header, state, crypto, my_pub_key)\n}\n\npub fn gen_invalid_chain_id_new_proposal(\n    state: &State,\n    metadata: &Metadata,\n    crypto: &Arc<OverlordCrypto>,\n    my_address: &Address,\n    my_pub_key: &Bytes,\n    validators: &[Validator],\n) -> Vec<u8> {\n    let mut header = gen_valid_block_header(state, metadata, my_address, validators, vec![]);\n    header.chain_id = 
gen_invalid_chain_id();\n    gen_signed_proposal_from_header(header, state, crypto, my_pub_key)\n}\n\npub fn gen_invalid_struct_new_proposal(\n    _state: &State,\n    _metadata: &Metadata,\n    _crypto: &Arc<OverlordCrypto>,\n    _my_address: &Address,\n    _my_pub_key: &Bytes,\n    _validators: &[Validator],\n) -> Vec<u8> {\n    gen_random_bytes(1000).to_vec()\n}\n\n//###############################\n//##########  PullTxs  ##########\n//########## #####################\npub fn gen_invalid_height_pull_txs(height: u64) -> MsgPullTxs {\n    let tx_num = gen_positive_range(100, 300);\n    let tx_hashes: Vec<Hash> = (0..tx_num).map(|_| gen_valid_hash()).collect();\n    MsgPullTxs {\n        height: Some(gen_positive_range(height, 100)),\n        hashes: tx_hashes,\n    }\n}\n\npub fn gen_invalid_hash_pull_txs(_height: u64) -> MsgPullTxs {\n    let tx_num = gen_positive_range(100, 300);\n    let tx_hashes: Vec<Hash> = (0..tx_num).map(|_| gen_invalid_hash()).collect();\n    MsgPullTxs {\n        height: None,\n        hashes: tx_hashes,\n    }\n}\n\npub fn gen_not_exists_txs_pull_txs(_height: u64) -> MsgPullTxs {\n    let tx_num = gen_positive_range(100, 300);\n    let tx_hashes: Vec<Hash> = (0..tx_num).map(|_| gen_valid_hash()).collect();\n    MsgPullTxs {\n        height: None,\n        hashes: tx_hashes,\n    }\n}\n\n//#############################\n//##########  NewTx  ##########\n//########## ###################\npub fn gen_invalid_hash_signed_tx(\n    pri_key: &Secp256k1PrivateKey,\n    height: u64,\n    metadata: &Metadata,\n) -> SignedTransaction {\n    let raw = gen_valid_raw_tx(pri_key, height, metadata);\n    gen_signed_tx(raw, pri_key, Some(gen_random_bytes(100)), None)\n}\n\npub fn gen_invalid_sig_signed_tx(\n    pri_key: &Secp256k1PrivateKey,\n    height: u64,\n    metadata: &Metadata,\n) -> SignedTransaction {\n    let raw = gen_valid_raw_tx(pri_key, height, metadata);\n    gen_signed_tx(raw, pri_key, None, Some(gen_random_bytes(100)))\n}\n\npub fn 
gen_invalid_chain_id_signed_tx(\n    pri_key: &Secp256k1PrivateKey,\n    height: u64,\n    metadata: &Metadata,\n) -> SignedTransaction {\n    let mut raw = gen_valid_raw_tx(pri_key, height, metadata);\n    raw.chain_id = gen_invalid_chain_id();\n    gen_valid_signed_tx(raw, pri_key)\n}\n\npub fn gen_invalid_cycles_price_signed_tx(\n    pri_key: &Secp256k1PrivateKey,\n    height: u64,\n    metadata: &Metadata,\n) -> SignedTransaction {\n    let mut raw = gen_valid_raw_tx(pri_key, height, metadata);\n    raw.cycles_price = gen_range(metadata.cycles_price + 1, u64::MAX);\n    gen_valid_signed_tx(raw, pri_key)\n}\n\npub fn gen_invalid_cycles_limit_signed_tx(\n    pri_key: &Secp256k1PrivateKey,\n    height: u64,\n    metadata: &Metadata,\n) -> SignedTransaction {\n    let mut raw = gen_valid_raw_tx(pri_key, height, metadata);\n    raw.cycles_limit = gen_range(metadata.cycles_limit + 1, u64::MAX);\n    gen_valid_signed_tx(raw, pri_key)\n}\n\npub fn gen_invalid_nonce_of_rand_len_signed_tx(\n    pri_key: &Secp256k1PrivateKey,\n    height: u64,\n    metadata: &Metadata,\n) -> SignedTransaction {\n    let mut raw = gen_valid_raw_tx(pri_key, height, metadata);\n    raw.nonce = gen_invalid_hash();\n    gen_valid_signed_tx(raw, pri_key)\n}\n\npub fn gen_invalid_nonce_dup_signed_tx(\n    pri_key: &Secp256k1PrivateKey,\n    height: u64,\n    metadata: &Metadata,\n    nonce: Hash,\n) -> SignedTransaction {\n    let mut raw = gen_valid_raw_tx(pri_key, height, metadata);\n    raw.nonce = nonce;\n    gen_valid_signed_tx(raw, pri_key)\n}\n\npub fn gen_invalid_request_signed_tx(\n    pri_key: &Secp256k1PrivateKey,\n    height: u64,\n    metadata: &Metadata,\n) -> SignedTransaction {\n    let mut raw = gen_valid_raw_tx(pri_key, height, metadata);\n    raw.request = gen_invalid_request();\n    gen_valid_signed_tx(raw, pri_key)\n}\n\npub fn gen_invalid_timeout_signed_tx(\n    pri_key: &Secp256k1PrivateKey,\n    height: u64,\n    metadata: &Metadata,\n) -> SignedTransaction {\n    let mut 
raw = gen_valid_raw_tx(pri_key, height, metadata);\n    raw.timeout = gen_positive_range(height + metadata.timeout_gap, 100);\n    gen_valid_signed_tx(raw, pri_key)\n}\n\npub fn gen_invalid_sender_signed_tx(\n    pri_key: &Secp256k1PrivateKey,\n    height: u64,\n    metadata: &Metadata,\n) -> SignedTransaction {\n    let mut raw = gen_valid_raw_tx(pri_key, height, metadata);\n    raw.sender = gen_invalid_address();\n    gen_valid_signed_tx(raw, pri_key)\n}\n\npub fn gen_valid_tx(\n    pri_key: &Secp256k1PrivateKey,\n    height: u64,\n    metadata: &Metadata,\n) -> SignedTransaction {\n    let raw = gen_valid_raw_tx(pri_key, height, metadata);\n    gen_valid_signed_tx(raw, pri_key)\n}\n"
  },
  {
    "path": "byzantine/src/lib.rs",
    "content": "#![allow(clippy::mutable_key_type)]\n\npub mod config;\npub mod default_start;\n\nmod behaviors;\nmod commander;\nmod invalid_types;\nmod message;\nmod strategy;\nmod utils;\nmod worker;\n"
  },
  {
    "path": "byzantine/src/message.rs",
    "content": "use async_trait::async_trait;\nuse derive_more::Constructor;\nuse futures::channel::mpsc::UnboundedSender;\n\nuse core_consensus::message::{Choke, Proposal, Vote, QC};\nuse core_mempool::{MsgNewTxs, MsgPullTxs};\nuse protocol::traits::{Context, MessageHandler, TrustFeedback};\n\nuse crate::behaviors::Request;\n\n#[derive(Constructor)]\npub struct NewTxsHandler {\n    to_commander: UnboundedSender<(Context, Request)>,\n}\n\n#[async_trait]\nimpl MessageHandler for NewTxsHandler {\n    type Message = MsgNewTxs;\n\n    async fn process(&self, ctx: Context, msg: Self::Message) -> TrustFeedback {\n        self.to_commander\n            .unbounded_send((ctx, Request::NewTx(msg)))\n            .unwrap();\n\n        TrustFeedback::Neutral\n    }\n}\n\n#[derive(Constructor)]\npub struct PullTxsHandler {\n    to_commander: UnboundedSender<(Context, Request)>,\n}\n\n#[async_trait]\nimpl MessageHandler for PullTxsHandler {\n    type Message = MsgPullTxs;\n\n    async fn process(&self, ctx: Context, msg: Self::Message) -> TrustFeedback {\n        self.to_commander\n            .unbounded_send((ctx, Request::PullTxs(msg)))\n            .unwrap();\n\n        TrustFeedback::Neutral\n    }\n}\n\n#[derive(Constructor)]\npub struct ProposalMessageHandler {\n    to_commander: UnboundedSender<(Context, Request)>,\n}\n\n#[async_trait]\nimpl MessageHandler for ProposalMessageHandler {\n    type Message = Proposal;\n\n    #[muta_apm::derive::tracing_span(name = \"handle_proposal\", kind = \"consensus.message\")]\n    async fn process(&self, ctx: Context, msg: Self::Message) -> TrustFeedback {\n        self.to_commander\n            .unbounded_send((ctx, Request::RecvProposal(msg)))\n            .unwrap();\n\n        TrustFeedback::Good\n    }\n}\n\n#[derive(Constructor)]\npub struct VoteMessageHandler {\n    to_commander: UnboundedSender<(Context, Request)>,\n}\n\n#[async_trait]\nimpl MessageHandler for VoteMessageHandler {\n    type Message = Vote;\n\n    
#[muta_apm::derive::tracing_span(name = \"handle_vote\", kind = \"consensus.message\")]\n    async fn process(&self, ctx: Context, msg: Self::Message) -> TrustFeedback {\n        self.to_commander\n            .unbounded_send((ctx, Request::RecvVote(msg)))\n            .unwrap();\n\n        TrustFeedback::Good\n    }\n}\n\n#[derive(Constructor)]\npub struct QCMessageHandler {\n    to_commander: UnboundedSender<(Context, Request)>,\n}\n\n#[async_trait]\nimpl MessageHandler for QCMessageHandler {\n    type Message = QC;\n\n    #[muta_apm::derive::tracing_span(name = \"handle_qc\", kind = \"consensus.message\")]\n    async fn process(&self, ctx: Context, msg: Self::Message) -> TrustFeedback {\n        self.to_commander\n            .unbounded_send((ctx, Request::RecvQC(msg)))\n            .unwrap();\n\n        TrustFeedback::Good\n    }\n}\n\n#[derive(Constructor)]\npub struct ChokeMessageHandler {\n    to_commander: UnboundedSender<(Context, Request)>,\n}\n\n#[async_trait]\nimpl MessageHandler for ChokeMessageHandler {\n    type Message = Choke;\n\n    #[muta_apm::derive::tracing_span(name = \"handle_choke\", kind = \"consensus.message\")]\n    async fn process(&self, ctx: Context, msg: Self::Message) -> TrustFeedback {\n        self.to_commander\n            .unbounded_send((ctx, Request::RecvChoke(msg)))\n            .unwrap();\n\n        TrustFeedback::Good\n    }\n}\n\n#[derive(Constructor)]\npub struct RemoteHeightMessageHandler {\n    to_commander: UnboundedSender<(Context, Request)>,\n}\n\n#[async_trait]\nimpl MessageHandler for RemoteHeightMessageHandler {\n    type Message = u64;\n\n    #[muta_apm::derive::tracing_span(name = \"handle_remote_height\", kind = \"consensus.message\")]\n    async fn process(&self, ctx: Context, msg: Self::Message) -> TrustFeedback {\n        self.to_commander\n            .unbounded_send((ctx, Request::RecvHeight(msg)))\n            .unwrap();\n\n        TrustFeedback::Good\n    }\n}\n"
  },
  {
    "path": "byzantine/src/strategy.rs",
"content": "use bytes::Bytes;\nuse derive_more::Constructor;\nuse rand::seq::SliceRandom;\nuse serde_derive::Deserialize;\n\nuse protocol::traits::Priority;\n\nuse crate::behaviors::{Behavior, MessageType, Request};\nuse crate::utils::{gen_bool, gen_range};\n\npub trait Strategy {\n    fn get_behaviors(&self, request: Option<Request>) -> Vec<Behavior>;\n}\n\n#[derive(Constructor, Clone, Debug, Deserialize)]\npub struct BehaviorGenerator {\n    pub req_end:     Option<String>,\n    pub msg_type:    MessageType,\n    pub probability: f64,\n    pub num_range:   (u64, u64),\n    pub priority:    Priority,\n}\n\nimpl BehaviorGenerator {\n    fn gen_behavior(\n        &self,\n        pub_key_list: &mut Vec<Bytes>,\n        req: Option<Request>,\n    ) -> Option<Behavior> {\n        if gen_bool(self.probability) {\n            let msg_num = gen_range(self.num_range.0, self.num_range.1);\n            let send_to = gen_rand_pub_key_list(pub_key_list);\n            let behavior =\n                Behavior::new(self.msg_type.clone(), msg_num, req, send_to, self.priority);\n            Some(behavior)\n        } else {\n            None\n        }\n    }\n}\n\n#[derive(Constructor, Clone, Debug)]\npub struct DefaultStrategy {\n    pub_key_list: Vec<Bytes>,\n    generators:   Vec<BehaviorGenerator>,\n}\n\nimpl Strategy for DefaultStrategy {\n    fn get_behaviors(&self, request: Option<Request>) -> Vec<Behavior> {\n        let mut pub_key_list = self.pub_key_list.to_vec();\n        self.generators\n            .iter()\n            .filter(|gen| {\n                if request.is_none() {\n                    gen.req_end.is_none()\n                } else {\n                    gen.req_end.is_some()\n                        && gen.req_end.as_ref().unwrap() == request.as_ref().unwrap().to_end()\n                }\n            })\n            .filter_map(|gen| gen.gen_behavior(&mut pub_key_list, request.clone()))\n            .collect()\n    }\n}\n\npub 
fn gen_rand_pub_key_list(pub_key_list: &mut Vec<Bytes>) -> Vec<Bytes> {\n    let mut rng = rand::thread_rng();\n    pub_key_list.shuffle(&mut rng);\n\n    let mut new_list = pub_key_list.to_vec();\n    let cut_num = gen_range(0, new_list.len());\n    new_list.split_off(cut_num)\n}\n"
  },
  {
    "path": "byzantine/src/utils.rs",
    "content": "use std::convert::TryFrom;\nuse std::sync::Arc;\nuse std::time::{SystemTime, UNIX_EPOCH};\n\nuse bytes::Bytes;\nuse overlord::types::{\n    AggregatedChoke, AggregatedSignature, AggregatedVote, Choke, PoLC, Proposal, SignedChoke,\n    SignedProposal, SignedVote, UpdateFrom, Vote, VoteType,\n};\nuse overlord::Crypto;\nuse rand::distributions::uniform::{SampleBorrow, SampleUniform};\nuse rand::distributions::Alphanumeric;\nuse rand::{random, Rng};\nuse rlp::{self, Encodable, RlpStream};\n\nuse common_crypto::{\n    HashValue, PrivateKey, PublicKey, Secp256k1PrivateKey, Signature, ToPublicKey,\n    UncompressedPublicKey,\n};\nuse common_merkle::Merkle;\nuse core_consensus::fixed_types::FixedPill;\nuse core_consensus::util::OverlordCrypto;\nuse protocol::fixed_codec::FixedCodec;\nuse protocol::types::{\n    Address, Block, BlockHeader, Hash, Metadata, Pill, Proof, RawTransaction, SignedTransaction,\n    TransactionRequest, Validator,\n};\n\nuse crate::invalid_types::InvalidStruct;\nuse crate::worker::State;\n\nconst VALIDATOR_VERSION: u64 = 0;\nconst HASH_LEN: u64 = 32;\nconst ADDRESS_LEN: u64 = 20;\nconst SIGNATURE_LEN: u64 = 192;\nconst BITMAP_LEN: u64 = 1;\n\npub fn time_now() -> u64 {\n    SystemTime::now()\n        .duration_since(UNIX_EPOCH)\n        .unwrap()\n        .as_millis() as u64\n}\n\npub fn gen_random_bytes(len: usize) -> Bytes {\n    let vec = (0..len).map(|_| random::<u8>()).collect::<Vec<_>>();\n    Bytes::from(vec)\n}\n\npub fn gen_random_string(len: usize) -> String {\n    rand::thread_rng()\n        .sample_iter(&Alphanumeric)\n        .take(len)\n        .collect()\n}\n\npub fn gen_range<T: SampleUniform, B1, B2>(low: B1, high: B2) -> T\nwhere\n    B1: SampleBorrow<T> + Sized,\n    B2: SampleBorrow<T> + Sized,\n{\n    let mut rng = rand::thread_rng();\n    rng.gen_range(low, high)\n}\n\npub fn gen_bool(p: f64) -> bool {\n    let mut rng = rand::thread_rng();\n    if p >= 1.0 {\n        true\n    } else {\n        
rng.gen_bool(p)\n    }\n}\n\npub fn gen_valid_raw_tx(\n    pri_key: &Secp256k1PrivateKey,\n    height: u64,\n    metadata: &Metadata,\n) -> RawTransaction {\n    RawTransaction {\n        chain_id:     metadata.chain_id.clone(),\n        cycles_price: 100,\n        cycles_limit: 1_000_000,\n        nonce:        gen_valid_hash(),\n        request:      gen_transfer_tx_request(),\n        timeout:      gen_range(height, height + metadata.timeout_gap),\n        sender:       gen_address_bytes(pri_key),\n    }\n}\n\npub fn gen_invalid_request() -> TransactionRequest {\n    TransactionRequest {\n        method:       gen_random_string(10),\n        service_name: gen_random_string(10),\n        payload:      gen_random_string(100),\n    }\n}\n\npub fn gen_transfer_tx_request() -> TransactionRequest {\n    TransactionRequest {\n        method: \"transfer\".to_string(),\n        service_name: \"asset\".to_string(),\n        payload: \"{ \\\"asset_id\\\": \\\"0xf56924db538e77bb5951eb5ff0d02b88983c49c45eea30e8ae3e7234b311436c\\\", \\\"to\\\":\\\"0x0000000000000000000000000000000000000001\\\", \\\"value\\\": 100 }\".to_string(),\n    }\n}\n\npub fn gen_address_bytes(pri_key: &Secp256k1PrivateKey) -> Address {\n    let pubkey = pri_key.pub_key();\n    Address::from_pubkey_bytes(pubkey.to_uncompressed_bytes()).expect(\"get address failed\")\n}\n\npub fn gen_valid_hash() -> Hash {\n    Hash::digest(gen_random_bytes(20))\n}\n\npub fn gen_invalid_hash() -> Hash {\n    let rand_len = gen_positive_range(HASH_LEN, 1);\n    Hash::from_invalid_bytes(gen_random_bytes(rand_len as usize))\n}\n\npub fn gen_invalid_address() -> Address {\n    let rand_len = gen_positive_range(ADDRESS_LEN, 1);\n    Address::from_invalid_bytes(gen_random_bytes(rand_len as usize))\n}\n\npub fn gen_valid_signed_tx(\n    raw: RawTransaction,\n    pri_key: &Secp256k1PrivateKey,\n) -> SignedTransaction {\n    gen_signed_tx(raw, pri_key, None, None)\n}\n\npub fn gen_signed_tx(\n    raw: RawTransaction,\n    
pri_key: &Secp256k1PrivateKey,\n    fixed_bytes: Option<Bytes>,\n    sig: Option<Bytes>,\n) -> SignedTransaction {\n    let fixed_bytes =\n        fixed_bytes.unwrap_or_else(|| raw.encode_fixed().expect(\"get bytes from raw_tx failed!\"));\n    let tx_hash = Hash::digest(fixed_bytes);\n    let hash_value = HashValue::try_from(tx_hash.as_bytes().as_ref()).unwrap();\n    let signature = sig.unwrap_or_else(|| pri_key.sign_message(&hash_value).to_bytes());\n    let pubkey = pri_key.pub_key().to_bytes();\n    let signature = Bytes::from(rlp::encode_list::<Vec<u8>, _>(&[signature.to_vec()]));\n    let pubkey = Bytes::from(rlp::encode_list::<Vec<u8>, _>(&[pubkey.to_vec()]));\n    SignedTransaction {\n        raw,\n        tx_hash,\n        pubkey,\n        signature,\n    }\n}\n\npub fn gen_valid_block_header(\n    state: &State,\n    metadata: &Metadata,\n    my_address: &Address,\n    validators: &[Validator],\n    ordered_tx_hashes: Vec<Hash>,\n) -> BlockHeader {\n    let order_root = Merkle::from_hashes(ordered_tx_hashes).get_root_hash();\n    BlockHeader {\n        chain_id:                       metadata.chain_id.clone(),\n        height:                         state.height,\n        exec_height:                    state.exec_height,\n        prev_hash:                      state.prev_hash.clone(),\n        timestamp:                      time_now(),\n        order_root:                     order_root.unwrap_or_else(Hash::from_empty),\n        order_signed_transactions_hash: Hash::from_empty(),\n        confirm_root:                   state.confirm_root.clone(),\n        state_root:                     state.state_root.clone(),\n        receipt_root:                   state.receipt_root.clone(),\n        cycles_used:                    state.cycles_used.clone(),\n        proposer:                       my_address.clone(),\n        proof:                          state.proof.clone(),\n        validator_version:              VALIDATOR_VERSION,\n        validators:    
                 validators.to_vec(),\n    }\n}\n\npub fn gen_valid_block(header: BlockHeader, ordered_tx_hashes: Vec<Hash>) -> Block {\n    Block {\n        header,\n        ordered_tx_hashes,\n    }\n}\n\npub fn gen_invalid_content_struct_proposal(\n    state: &State,\n    my_pub_key: &Bytes,\n) -> Proposal<InvalidStruct> {\n    let content = InvalidStruct::gen(1000);\n    let hash = Hash::digest(content.inner.clone()).as_bytes();\n    Proposal {\n        height: state.height,\n        round: state.round,\n        content,\n        block_hash: hash,\n        lock: state.lock.clone(),\n        proposer: my_pub_key.clone(),\n    }\n}\n\npub fn gen_valid_proposal(\n    block: Block,\n    state: &State,\n    my_pub_key: &Bytes,\n    propose_hashes: Vec<Hash>,\n) -> Proposal<FixedPill> {\n    let pill = Pill {\n        block,\n        propose_hashes,\n    };\n    let fixed_pill = FixedPill {\n        inner: pill.clone(),\n    };\n    let hash = Hash::digest(\n        pill.block\n            .header\n            .encode_fixed()\n            .expect(\"encode block header failed\"),\n    )\n    .as_bytes();\n    Proposal {\n        height:     state.height,\n        round:      state.round,\n        content:    fixed_pill,\n        block_hash: hash,\n        lock:       state.lock.clone(),\n        proposer:   my_pub_key.clone(),\n    }\n}\n\npub fn gen_valid_signed_proposal(\n    proposal: Proposal<FixedPill>,\n    crypto: &Arc<OverlordCrypto>,\n) -> SignedProposal<FixedPill> {\n    let signature = crypto\n        .sign(crypto.hash(Bytes::from(rlp::encode(&proposal))))\n        .expect(\"sign proposal failed\");\n\n    SignedProposal {\n        signature,\n        proposal,\n    }\n}\n\npub fn gen_signed_proposal_from_header(\n    header: BlockHeader,\n    state: &State,\n    crypto: &Arc<OverlordCrypto>,\n    my_pub_key: &Bytes,\n) -> Vec<u8> {\n    let block = gen_valid_block(header, vec![]);\n    let proposal = gen_valid_proposal(block, state, my_pub_key, vec![]);\n  
  let signed_proposal = gen_valid_signed_proposal(proposal, crypto);\n    signed_proposal.rlp_bytes()\n}\n\npub fn gen_invalid_chain_id() -> Hash {\n    Hash::digest(gen_random_bytes(20))\n}\n\npub fn gen_positive_range(base: u64, range: u64) -> u64 {\n    let low = if base < range { 0 } else { base - range };\n    let high = if u64::MAX - base < range {\n        u64::MAX\n    } else {\n        base + range\n    };\n    gen_range(low, high)\n}\n\npub fn gen_invalid_sig() -> Bytes {\n    gen_random_bytes(gen_positive_range(SIGNATURE_LEN, 1) as usize)\n}\n\npub fn gen_invalid_proof() -> Proof {\n    Proof {\n        height:     gen_range(u64::MIN, u64::MAX),\n        round:      gen_range(u64::MIN, u64::MAX),\n        block_hash: gen_invalid_hash(),\n        signature:  gen_invalid_sig(),\n        bitmap:     gen_invalid_bitmap(),\n    }\n}\n\npub fn gen_invalid_bitmap() -> Bytes {\n    gen_random_bytes(gen_positive_range(BITMAP_LEN, 1) as usize)\n}\n\npub fn gen_invalid_validators() -> Vec<Validator> {\n    (0..gen_range(0, 100))\n        .map(|_| Validator {\n            pub_key:        gen_random_bytes(32),\n            propose_weight: gen_range(u32::MIN, u32::MAX),\n            vote_weight:    gen_range(u32::MIN, u32::MAX),\n        })\n        .collect()\n}\n\npub fn gen_invalid_lock() -> PoLC {\n    PoLC {\n        lock_round: gen_range(u64::MIN, u64::MAX),\n        lock_votes: gen_invalid_qc(),\n    }\n}\n\npub fn gen_invalid_qc() -> AggregatedVote {\n    AggregatedVote {\n        signature:  gen_invalid_aggregate_sig(),\n        vote_type:  gen_vote_type(),\n        height:     gen_range(u64::MIN, u64::MAX),\n        round:      gen_range(u64::MIN, u64::MAX),\n        block_hash: gen_invalid_hash().as_bytes(),\n        leader:     gen_invalid_address().as_bytes(),\n    }\n}\n\npub fn gen_invalid_aggregate_sig() -> AggregatedSignature {\n    AggregatedSignature {\n        signature:      gen_invalid_sig(),\n        address_bitmap: gen_invalid_bitmap(),\n    
}\n}\n\npub fn gen_valid_qc(state: &State, my_pub_key: &Bytes) -> AggregatedVote {\n    AggregatedVote {\n        signature:  gen_valid_aggregate_sig(),\n        vote_type:  gen_vote_type(),\n        height:     state.height,\n        round:      state.round,\n        block_hash: gen_invalid_hash().as_bytes(),\n        leader:     my_pub_key.clone(),\n    }\n}\n\npub fn gen_valid_aggregate_sig() -> AggregatedSignature {\n    AggregatedSignature {\n        signature:      gen_random_bytes(SIGNATURE_LEN as usize),\n        address_bitmap: gen_random_bytes(BITMAP_LEN as usize),\n    }\n}\n\npub fn gen_valid_signed_vote(\n    vote: Vote,\n    crypto: &Arc<OverlordCrypto>,\n    my_pub_key: &Bytes,\n) -> SignedVote {\n    let signature = crypto\n        .sign(crypto.hash(Bytes::from(rlp::encode(&vote))))\n        .expect(\"sign vote failed\");\n\n    SignedVote {\n        signature,\n        vote,\n        voter: my_pub_key.clone(),\n    }\n}\n\npub fn gen_valid_vote(state: &State) -> Vote {\n    Vote {\n        height:     state.height,\n        round:      state.round,\n        vote_type:  gen_vote_type(),\n        block_hash: gen_valid_hash().as_bytes(),\n    }\n}\n\npub fn gen_valid_choke(state: &State, my_pub_key: &Bytes) -> Choke {\n    Choke {\n        height: state.height,\n        round:  state.round,\n        from:   UpdateFrom::PrevoteQC(gen_valid_qc(state, my_pub_key)),\n    }\n}\n\npub fn gen_invalid_from() -> UpdateFrom {\n    match gen_range(0, 100) % 3 {\n        0 => UpdateFrom::PrevoteQC(gen_invalid_qc()),\n        1 => UpdateFrom::PrecommitQC(gen_invalid_qc()),\n        2 => UpdateFrom::ChokeQC(gen_invalid_aggregated_choke()),\n        _ => panic!(\"unreachable!\"),\n    }\n}\n\npub fn gen_valid_signed_choke(\n    choke: Choke,\n    crypto: &Arc<OverlordCrypto>,\n    my_pub_key: &Bytes,\n) -> SignedChoke {\n    let signature = crypto\n        .sign(crypto.hash(Bytes::from(rlp::encode(&choke_to_hash(&choke)))))\n        .expect(\"sign choke 
failed\");\n    SignedChoke {\n        signature,\n        choke,\n        address: my_pub_key.clone(),\n    }\n}\n\n#[derive(Clone, Debug)]\nstruct HashChoke {\n    height: u64,\n    round:  u64,\n}\n\nimpl Encodable for HashChoke {\n    fn rlp_append(&self, s: &mut RlpStream) {\n        s.begin_list(2).append(&self.height).append(&self.round);\n    }\n}\n\nfn choke_to_hash(choke: &Choke) -> HashChoke {\n    HashChoke {\n        height: choke.height,\n        round:  choke.round,\n    }\n}\n\npub fn gen_invalid_aggregated_choke() -> AggregatedChoke {\n    AggregatedChoke {\n        height:    gen_range(u64::MIN, u64::MAX),\n        round:     gen_range(u64::MIN, u64::MAX),\n        signature: gen_invalid_sig(),\n        voters:    vec![gen_invalid_address().as_bytes()],\n    }\n}\n\nfn gen_vote_type() -> VoteType {\n    match gen_range(0, 100) % 2 {\n        0 => VoteType::Prevote,\n        1 => VoteType::Precommit,\n        _ => panic!(\"unreachable!\"),\n    }\n}\n"
  },
  {
    "path": "byzantine/src/worker.rs",
    "content": "use std::convert::TryFrom;\nuse std::sync::Arc;\n\nuse bytes::Bytes;\nuse futures::{channel::mpsc::UnboundedReceiver, stream::StreamExt};\nuse lazy_static::lazy_static;\nuse overlord::types::{\n    AggregatedVote, Choke, PoLC, Proposal, SignedChoke, SignedProposal, SignedVote, Vote, VoteType,\n};\nuse rlp::Encodable;\n\nuse common_crypto::Secp256k1PrivateKey;\nuse core_consensus::fixed_types::FixedPill;\nuse core_consensus::message::{\n    BROADCAST_HEIGHT, END_GOSSIP_AGGREGATED_VOTE, END_GOSSIP_SIGNED_CHOKE,\n    END_GOSSIP_SIGNED_PROPOSAL, END_GOSSIP_SIGNED_VOTE,\n};\nuse core_consensus::util::OverlordCrypto;\nuse core_mempool::{\n    MsgNewTxs, MsgPullTxs, MsgPushTxs, END_GOSSIP_NEW_TXS, RPC_PULL_TXS, RPC_RESP_PULL_TXS,\n};\nuse core_network::{PeerId, PeerIdExt};\nuse protocol::traits::{Context, Gossip, MessageCodec, PeerTrust, Priority, Rpc};\nuse protocol::types::{\n    Address, Hash, Hex, MerkleRoot, Metadata, Proof, SignedTransaction, Validator,\n};\n\nuse crate::behaviors::{\n    Behavior, MessageType, NewChoke, NewProposal, NewQC, NewTx, NewVote, PullTxs, Request,\n};\nuse crate::invalid_types::{\n    gen_invalid_address_new_choke, gen_invalid_block_hash_new_proposal,\n    gen_invalid_block_hash_new_qc, gen_invalid_block_hash_new_vote,\n    gen_invalid_block_proposer_new_proposal, gen_invalid_chain_id_new_proposal,\n    gen_invalid_chain_id_signed_tx, gen_invalid_confirm_root_new_proposal,\n    gen_invalid_content_struct_new_proposal, gen_invalid_cycle_used_new_proposal,\n    gen_invalid_cycles_limit_signed_tx, gen_invalid_cycles_price_signed_tx,\n    gen_invalid_exec_height_new_proposal, gen_invalid_from_new_vote, gen_invalid_hash_pull_txs,\n    gen_invalid_hash_signed_tx, gen_invalid_height_new_choke, gen_invalid_height_new_proposal,\n    gen_invalid_height_new_qc, gen_invalid_height_new_vote, gen_invalid_height_pull_txs,\n    gen_invalid_leader_new_qc, gen_invalid_lock_new_proposal, gen_invalid_nonce_dup_signed_tx,\n    
gen_invalid_nonce_of_rand_len_signed_tx, gen_invalid_order_root_new_proposal,\n    gen_invalid_prev_hash_new_proposal, gen_invalid_proof_new_proposal,\n    gen_invalid_prop_height_new_proposal, gen_invalid_prop_proposer_new_proposal,\n    gen_invalid_receipt_root_new_proposal, gen_invalid_request_signed_tx,\n    gen_invalid_round_new_choke, gen_invalid_round_new_proposal, gen_invalid_round_new_qc,\n    gen_invalid_round_new_vote, gen_invalid_sender_signed_tx, gen_invalid_sig_new_choke,\n    gen_invalid_sig_new_proposal, gen_invalid_sig_new_qc, gen_invalid_sig_new_vote,\n    gen_invalid_sig_signed_tx, gen_invalid_signed_tx_hash_new_proposal,\n    gen_invalid_state_root_new_proposal, gen_invalid_struct_new_choke,\n    gen_invalid_struct_new_proposal, gen_invalid_struct_new_qc, gen_invalid_struct_new_vote,\n    gen_invalid_timeout_signed_tx, gen_invalid_timestamp_new_proposal,\n    gen_invalid_tx_hash_new_proposal, gen_invalid_validators_new_proposal,\n    gen_invalid_version_new_proposal, gen_invalid_voter_new_vote, gen_not_exists_txs_pull_txs,\n    gen_valid_new_proposal, gen_valid_tx, InvalidStruct,\n};\nuse crate::utils::{\n    gen_positive_range, gen_random_bytes, gen_valid_signed_choke, gen_valid_signed_vote, time_now,\n};\n\nlazy_static! {\n    static ref TEST_PRI_KEY: Secp256k1PrivateKey = {\n        let hex_prikey = Hex::from_string(\n            \"0x5ec982173d54d830b6789cbbbe43eaa2853a5ff752d1ebc1b266cf9790314f8a\".to_string(),\n        )\n        .unwrap();\n        Secp256k1PrivateKey::try_from(hex_prikey.decode().as_ref())\n            .expect(\"get test pri_key failed\")\n    };\n}\n\nmacro_rules! 
send_new_tx {\n    ($self_: ident, $ctx: ident, $behavior: ident, $func: ident) => {{\n        let behavior = $behavior.clone();\n        let metadata = $self_.metadata.clone();\n        let height = $self_.state.height;\n        let network = Arc::<_>::clone(&$self_.network);\n        tokio::spawn(async move {\n            let batch_stxs: Vec<SignedTransaction> = (0..behavior.msg_num)\n                .map(|_| $func(&TEST_PRI_KEY, height, &metadata))\n                .collect();\n            let gossip_txs = MsgNewTxs { batch_stxs };\n            send(&network, gossip_txs, $ctx, END_GOSSIP_NEW_TXS, &behavior).await;\n        });\n    }};\n}\n\nmacro_rules! send_push_txs {\n    ($self_: ident, $ctx: ident, $behavior: ident, $func: ident) => {{\n        let behavior = $behavior.clone();\n        let metadata = $self_.metadata.clone();\n        let height = $self_.state.height;\n        let network = Arc::<_>::clone(&$self_.network);\n        tokio::spawn(async move {\n            let batch_stxs: Vec<SignedTransaction> = (0..behavior.msg_num)\n                .map(|_| $func(&TEST_PRI_KEY, height, &metadata))\n                .collect();\n            let push_txs = MsgPushTxs {\n                sig_txs: batch_stxs,\n            };\n            let _ = network\n                .response::<MsgPushTxs>($ctx, RPC_RESP_PULL_TXS, Ok(push_txs), behavior.priority)\n                .await;\n        });\n    }};\n}\n\nmacro_rules! 
send_pull_txs {\n    ($self_: ident, $ctx: ident, $behavior: ident, $func: ident) => {{\n        let behavior = $behavior.clone();\n        let height = $self_.state.height;\n        let network = Arc::<_>::clone(&$self_.network);\n        tokio::spawn(async move {\n            for _ in (0..behavior.msg_num) {\n                let pull_msg = $func(height);\n                let _ = network\n                    .call::<MsgPullTxs, MsgPushTxs>(\n                        $ctx.clone(),\n                        RPC_PULL_TXS,\n                        pull_msg,\n                        behavior.priority.clone(),\n                    )\n                    .await;\n            }\n        });\n    }};\n}\n\nmacro_rules! send_new_proposal {\n    ($self_: ident, $ctx: ident, $behavior: ident, $func: ident) => {{\n        let behavior = $behavior.clone();\n        let state = $self_.state.clone();\n        let metadata = $self_.metadata.clone();\n        let crypto = $self_.crypto.clone();\n        let address = $self_.address.clone();\n        let pub_key = $self_.pub_key.clone();\n        let validators = $self_.validators.clone();\n        let network = Arc::<_>::clone(&$self_.network);\n        tokio::spawn(async move {\n            let messages: Vec<Vec<u8>> = (0..behavior.msg_num)\n                .map(|_| $func(&state, &metadata, &crypto, &address, &pub_key, &validators))\n                .collect();\n            for msg in messages {\n                send(\n                    &network,\n                    msg,\n                    $ctx.clone(),\n                    END_GOSSIP_SIGNED_PROPOSAL,\n                    &behavior,\n                )\n                .await;\n            }\n        });\n    }};\n}\n\nmacro_rules! 
send_new_vote_or_choke {\n    ($self_: ident, $ctx: ident, $behavior: ident, $func: ident, $end: ident) => {{\n        let behavior = $behavior.clone();\n        let state = $self_.state.clone();\n        let crypto = $self_.crypto.clone();\n        let pub_key = $self_.pub_key.clone();\n        let network = Arc::<_>::clone(&$self_.network);\n        tokio::spawn(async move {\n            let messages: Vec<Vec<u8>> = (0..behavior.msg_num)\n                .map(|_| $func(&state, &crypto, &pub_key))\n                .collect();\n            for msg in messages {\n                send(&network, msg, $ctx.clone(), $end, &behavior).await;\n            }\n        });\n    }};\n}\n\nmacro_rules! send_new_vote {\n    ($self_: ident, $ctx: ident, $behavior: ident, $func: ident) => {\n        send_new_vote_or_choke!($self_, $ctx, $behavior, $func, END_GOSSIP_SIGNED_VOTE);\n    };\n}\n\nmacro_rules! send_new_choke {\n    ($self_: ident, $ctx: ident, $behavior: ident, $func: ident) => {\n        send_new_vote_or_choke!($self_, $ctx, $behavior, $func, END_GOSSIP_SIGNED_CHOKE);\n    };\n}\n\nmacro_rules! 
send_new_qc {\n    ($self_: ident, $ctx: ident, $behavior: ident, $func: ident) => {{\n        let behavior = $behavior.clone();\n        let state = $self_.state.clone();\n        let pub_key = $self_.pub_key.clone();\n        let network = Arc::<_>::clone(&$self_.network);\n        tokio::spawn(async move {\n            let messages: Vec<Vec<u8>> = (0..behavior.msg_num)\n                .map(|_| $func(&state, &pub_key))\n                .collect();\n            for msg in messages {\n                send(\n                    &network,\n                    msg,\n                    $ctx.clone(),\n                    END_GOSSIP_AGGREGATED_VOTE,\n                    &behavior,\n                )\n                .await;\n            }\n        });\n    }};\n}\n\n#[derive(Clone, Debug)]\npub struct State {\n    pub height:         u64,\n    pub round:          u64,\n    pub exec_height:    u64,\n    pub prev_hash:      Hash,\n    pub prev_timestamp: u64,\n    pub state_root:     MerkleRoot,\n    pub confirm_root:   Vec<MerkleRoot>,\n    pub receipt_root:   Vec<MerkleRoot>,\n    pub cycles_used:    Vec<u64>,\n    pub lock:           Option<PoLC>,\n    pub proof:          Proof,\n}\n\nimpl Default for State {\n    fn default() -> Self {\n        State {\n            height:         0,\n            round:          0,\n            exec_height:    0,\n            prev_hash:      Hash::from_empty(),\n            prev_timestamp: time_now(),\n            state_root:     MerkleRoot::from_empty(),\n            confirm_root:   vec![],\n            receipt_root:   vec![],\n            cycles_used:    vec![],\n            lock:           None,\n            proof:          Proof {\n                height:     0,\n                round:      0,\n                block_hash: Hash::from_empty(),\n                signature:  Bytes::new(),\n                bitmap:     Bytes::new(),\n            },\n        }\n    }\n}\n\npub struct Worker<N: Rpc + PeerTrust + Gossip + 'static> {\n    
state:      State,\n    address:    Address,\n    pub_key:    Bytes,\n    metadata:   Metadata,\n    validators: Vec<Validator>,\n    crypto:     Arc<OverlordCrypto>,\n    network:    Arc<N>,\n\n    from_timeout: UnboundedReceiver<(Context, Vec<Behavior>)>,\n}\n\nimpl<N> Worker<N>\nwhere\n    N: Rpc + PeerTrust + Gossip + 'static,\n{\n    pub fn new(\n        address: Address,\n        pub_key: Bytes,\n        metadata: Metadata,\n        validators: Vec<Validator>,\n        crypto: OverlordCrypto,\n        network: Arc<N>,\n        from_timeout: UnboundedReceiver<(Context, Vec<Behavior>)>,\n    ) -> Worker<N> {\n        Worker {\n            state: State::default(),\n            address,\n            pub_key,\n            crypto: Arc::new(crypto),\n            metadata,\n            validators,\n            network,\n            from_timeout,\n        }\n    }\n\n    pub async fn run(mut self) {\n        let mut cnt = 0;\n        loop {\n            let (ctx, behaviors) = self.from_timeout.next().await.expect(\"Channel is down!\");\n            for behavior in behaviors {\n                cnt += 1;\n                println!(\n                    \"[h: {}, r: {}] worker process {:?}, accumulative process {} behaviors\",\n                    self.state.height, self.state.round, behavior.msg_type, cnt\n                );\n                self.process(ctx.clone(), &behavior).await;\n            }\n        }\n    }\n\n    pub async fn process(&mut self, ctx: Context, behavior: &Behavior) {\n        match &behavior.msg_type {\n            MessageType::NewTxs(new_tx) => match new_tx {\n                NewTx::InvalidStruct => self.send_invalid_struct_of_new_tx(ctx, behavior).await,\n                NewTx::InvalidHash => send_new_tx!(self, ctx, behavior, gen_invalid_hash_signed_tx),\n                NewTx::InvalidSig => send_new_tx!(self, ctx, behavior, gen_invalid_sig_signed_tx),\n                NewTx::InvalidChainID => {\n                    send_new_tx!(self, ctx, 
behavior, gen_invalid_chain_id_signed_tx)\n                }\n                NewTx::InvalidCyclesPrice => {\n                    send_new_tx!(self, ctx, behavior, gen_invalid_cycles_price_signed_tx)\n                }\n                NewTx::InvalidCyclesLimit => {\n                    send_new_tx!(self, ctx, behavior, gen_invalid_cycles_limit_signed_tx)\n                }\n                NewTx::InvalidNonceOfRandLen => {\n                    send_new_tx!(self, ctx, behavior, gen_invalid_nonce_of_rand_len_signed_tx)\n                }\n                NewTx::InvalidNonceDup => {\n                    self.send_invalid_nonce_dup_of_new_tx(ctx, behavior).await\n                }\n                NewTx::InvalidRequest => {\n                    send_new_tx!(self, ctx, behavior, gen_invalid_request_signed_tx)\n                }\n                NewTx::InvalidTimeout => {\n                    send_new_tx!(self, ctx, behavior, gen_invalid_timeout_signed_tx)\n                }\n                NewTx::InvalidSender => {\n                    send_new_tx!(self, ctx, behavior, gen_invalid_sender_signed_tx)\n                }\n                NewTx::Valid => send_new_tx!(self, ctx, behavior, gen_valid_tx),\n            },\n            MessageType::RecvProposal(pull_txs) => match pull_txs {\n                PullTxs::Valid => self.set_state(behavior.request.as_ref()).await,\n                PullTxs::InvalidHeight => {\n                    send_pull_txs!(self, ctx, behavior, gen_invalid_height_pull_txs)\n                }\n                PullTxs::InvalidHash => {\n                    send_pull_txs!(self, ctx, behavior, gen_invalid_hash_pull_txs)\n                }\n                PullTxs::NotExistTxs => {\n                    send_pull_txs!(self, ctx, behavior, gen_not_exists_txs_pull_txs)\n                }\n                PullTxs::InvalidStruct => self.send_invalid_struct_of_pull_txs(ctx, behavior).await,\n            },\n            MessageType::SendProposal(new_proposal) 
=> match new_proposal {\n                NewProposal::Valid => {\n                    send_new_proposal!(self, ctx, behavior, gen_valid_new_proposal)\n                }\n                NewProposal::InvalidStruct => {\n                    send_new_proposal!(self, ctx, behavior, gen_invalid_struct_new_proposal)\n                }\n                NewProposal::InvalidChainId => {\n                    send_new_proposal!(self, ctx, behavior, gen_invalid_chain_id_new_proposal)\n                }\n                NewProposal::InvalidPrevHash => {\n                    send_new_proposal!(self, ctx, behavior, gen_invalid_prev_hash_new_proposal)\n                }\n                NewProposal::InvalidHeight => {\n                    send_new_proposal!(self, ctx, behavior, gen_invalid_height_new_proposal)\n                }\n                NewProposal::InvalidExecHeight => {\n                    send_new_proposal!(self, ctx, behavior, gen_invalid_exec_height_new_proposal)\n                }\n                NewProposal::InvalidTimestamp => {\n                    send_new_proposal!(self, ctx, behavior, gen_invalid_timestamp_new_proposal)\n                }\n                NewProposal::InvalidOrderRoot => {\n                    send_new_proposal!(self, ctx, behavior, gen_invalid_order_root_new_proposal)\n                }\n                NewProposal::InvalidSignedTxsHash => {\n                    send_new_proposal!(self, ctx, behavior, gen_invalid_signed_tx_hash_new_proposal)\n                }\n                NewProposal::InvalidConfirmRoot => {\n                    send_new_proposal!(self, ctx, behavior, gen_invalid_confirm_root_new_proposal)\n                }\n                NewProposal::InvalidStateRoot => {\n                    send_new_proposal!(self, ctx, behavior, gen_invalid_state_root_new_proposal)\n                }\n                NewProposal::InvalidReceiptRoot => {\n                    send_new_proposal!(self, ctx, behavior, 
gen_invalid_receipt_root_new_proposal)\n                }\n                NewProposal::InvalidCyclesUsed => {\n                    send_new_proposal!(self, ctx, behavior, gen_invalid_cycle_used_new_proposal)\n                }\n                NewProposal::InvalidBlockProposer => {\n                    send_new_proposal!(self, ctx, behavior, gen_invalid_block_proposer_new_proposal)\n                }\n                NewProposal::InvalidProof => {\n                    send_new_proposal!(self, ctx, behavior, gen_invalid_proof_new_proposal)\n                }\n                NewProposal::InvalidVersion => {\n                    send_new_proposal!(self, ctx, behavior, gen_invalid_version_new_proposal)\n                }\n                NewProposal::InvalidValidators => {\n                    send_new_proposal!(self, ctx, behavior, gen_invalid_validators_new_proposal)\n                }\n                NewProposal::InvalidTxHash => {\n                    send_new_proposal!(self, ctx, behavior, gen_invalid_tx_hash_new_proposal)\n                }\n                NewProposal::InvalidSig => {\n                    send_new_proposal!(self, ctx, behavior, gen_invalid_sig_new_proposal)\n                }\n                NewProposal::InvalidProposalHeight => {\n                    send_new_proposal!(self, ctx, behavior, gen_invalid_prop_height_new_proposal)\n                }\n                NewProposal::InvalidRound => {\n                    send_new_proposal!(self, ctx, behavior, gen_invalid_round_new_proposal)\n                }\n                NewProposal::InvalidContentStruct => {\n                    send_new_proposal!(self, ctx, behavior, gen_invalid_content_struct_new_proposal)\n                }\n                NewProposal::InvalidBlockHash => {\n                    send_new_proposal!(self, ctx, behavior, gen_invalid_block_hash_new_proposal)\n                }\n                NewProposal::InvalidLock => {\n                    send_new_proposal!(self, ctx, 
behavior, gen_invalid_lock_new_proposal)\n                }\n                NewProposal::InvalidProposalProposer => {\n                    send_new_proposal!(self, ctx, behavior, gen_invalid_prop_proposer_new_proposal)\n                }\n            },\n            MessageType::SendVote(new_vote) => match new_vote {\n                NewVote::InvalidStruct => {\n                    send_new_vote!(self, ctx, behavior, gen_invalid_struct_new_vote)\n                }\n                NewVote::InvalidHeight => {\n                    send_new_vote!(self, ctx, behavior, gen_invalid_height_new_vote)\n                }\n                NewVote::InvalidRound => {\n                    send_new_vote!(self, ctx, behavior, gen_invalid_round_new_vote)\n                }\n                NewVote::InvalidBlockHash => {\n                    send_new_vote!(self, ctx, behavior, gen_invalid_block_hash_new_vote)\n                }\n                NewVote::InvalidSig => {\n                    send_new_vote!(self, ctx, behavior, gen_invalid_sig_new_vote)\n                }\n                NewVote::InvalidVoter => {\n                    send_new_vote!(self, ctx, behavior, gen_invalid_voter_new_vote)\n                }\n            },\n            MessageType::SendQC(new_qc) => match new_qc {\n                NewQC::InvalidStruct => {\n                    send_new_qc!(self, ctx, behavior, gen_invalid_struct_new_qc)\n                }\n                NewQC::InvalidHeight => {\n                    send_new_qc!(self, ctx, behavior, gen_invalid_height_new_qc)\n                }\n                NewQC::InvalidRound => send_new_qc!(self, ctx, behavior, gen_invalid_round_new_qc),\n                NewQC::InvalidBlockHash => {\n                    send_new_qc!(self, ctx, behavior, gen_invalid_block_hash_new_qc)\n                }\n                NewQC::InvalidSig => send_new_qc!(self, ctx, behavior, gen_invalid_sig_new_qc),\n                NewQC::InvalidLeader => {\n                    
send_new_qc!(self, ctx, behavior, gen_invalid_leader_new_qc)\n                }\n            },\n            MessageType::SendChoke(new_choke) => match new_choke {\n                NewChoke::InvalidStruct => {\n                    send_new_choke!(self, ctx, behavior, gen_invalid_struct_new_choke)\n                }\n                NewChoke::InvalidHeight => {\n                    send_new_choke!(self, ctx, behavior, gen_invalid_height_new_choke)\n                }\n                NewChoke::InvalidRound => {\n                    send_new_choke!(self, ctx, behavior, gen_invalid_round_new_choke)\n                }\n                NewChoke::InvalidFrom => {\n                    send_new_choke!(self, ctx, behavior, gen_invalid_from_new_vote)\n                }\n                NewChoke::InvalidSig => {\n                    send_new_choke!(self, ctx, behavior, gen_invalid_sig_new_choke)\n                }\n                NewChoke::InvalidAddress => {\n                    send_new_choke!(self, ctx, behavior, gen_invalid_address_new_choke)\n                }\n            },\n            MessageType::SendHeight => self.send_invalid_new_height(ctx, behavior).await,\n            MessageType::PullTxs(new_tx) => match new_tx {\n                NewTx::InvalidStruct => self.send_invalid_struct_of_push_txs(ctx, behavior).await,\n                NewTx::InvalidHash => {\n                    send_push_txs!(self, ctx, behavior, gen_invalid_hash_signed_tx)\n                }\n                NewTx::InvalidSig => send_push_txs!(self, ctx, behavior, gen_invalid_sig_signed_tx),\n                NewTx::InvalidChainID => {\n                    send_push_txs!(self, ctx, behavior, gen_invalid_chain_id_signed_tx)\n                }\n                NewTx::InvalidCyclesPrice => {\n                    send_push_txs!(self, ctx, behavior, gen_invalid_cycles_price_signed_tx)\n                }\n                NewTx::InvalidCyclesLimit => {\n                    send_push_txs!(self, ctx, 
behavior, gen_invalid_cycles_limit_signed_tx)\n                }\n                NewTx::InvalidNonceOfRandLen => {\n                    send_push_txs!(self, ctx, behavior, gen_invalid_nonce_of_rand_len_signed_tx)\n                }\n                NewTx::InvalidRequest => {\n                    send_push_txs!(self, ctx, behavior, gen_invalid_request_signed_tx)\n                }\n                NewTx::InvalidTimeout => {\n                    send_push_txs!(self, ctx, behavior, gen_invalid_timeout_signed_tx)\n                }\n                NewTx::InvalidSender => {\n                    send_push_txs!(self, ctx, behavior, gen_invalid_sender_signed_tx)\n                }\n                _ => panic!(\"not supported yet!\"),\n            },\n            MessageType::RecvQC\n            | MessageType::RecvVote\n            | MessageType::RecvChoke\n            | MessageType::RecvHeight => self.set_state(behavior.request.as_ref()).await,\n        }\n    }\n\n    pub async fn send_invalid_new_height(&mut self, ctx: Context, behavior: &Behavior) {\n        let behavior = behavior.clone();\n        let height = self.state.height;\n        let network = Arc::<_>::clone(&self.network);\n        tokio::spawn(async move {\n            let messages: Vec<u64> = (0..behavior.msg_num)\n                .map(|_| gen_positive_range(height, 20))\n                .collect();\n            for msg in messages {\n                send(&network, msg, ctx.clone(), BROADCAST_HEIGHT, &behavior).await;\n            }\n        });\n    }\n\n    pub async fn send_invalid_struct_of_pull_txs(&mut self, ctx: Context, behavior: &Behavior) {\n        let behavior = behavior.clone();\n        let network = Arc::<_>::clone(&self.network);\n        tokio::spawn(async move {\n            for _ in 0..behavior.msg_num {\n                let pull_msg = InvalidStruct::gen(100);\n                let _ = network\n                    .call::<InvalidStruct, MsgPushTxs>(\n                        ctx.clone(),\n 
                       RPC_PULL_TXS,\n                        pull_msg,\n                        behavior.priority,\n                    )\n                    .await;\n            }\n        });\n    }\n\n    pub async fn send_invalid_struct_of_new_tx(&self, ctx: Context, behavior: &Behavior) {\n        let behavior = behavior.clone();\n        let network = Arc::<_>::clone(&self.network);\n        tokio::spawn(async move {\n            let messages: Vec<InvalidStruct> = (0..behavior.msg_num)\n                .map(|_| InvalidStruct::gen(1000))\n                .collect();\n            for msg in messages {\n                send(&network, msg, ctx.clone(), END_GOSSIP_NEW_TXS, &behavior).await;\n            }\n        });\n    }\n\n    pub async fn send_invalid_struct_of_push_txs(&self, ctx: Context, behavior: &Behavior) {\n        let behavior = behavior.clone();\n        let network = Arc::<_>::clone(&self.network);\n        tokio::spawn(async move {\n            let messages: Vec<InvalidStruct> = (0..behavior.msg_num)\n                .map(|_| InvalidStruct::gen(1000))\n                .collect();\n            for msg in messages {\n                let _ = network\n                    .response::<InvalidStruct>(\n                        ctx.clone(),\n                        RPC_RESP_PULL_TXS,\n                        Ok(msg),\n                        behavior.priority,\n                    )\n                    .await;\n            }\n        });\n    }\n\n    pub async fn send_invalid_nonce_dup_of_new_tx(&self, ctx: Context, behavior: &Behavior) {\n        let nonce = Hash::digest(gen_random_bytes(20));\n        let behavior = behavior.clone();\n        let metadata = self.metadata.clone();\n        let height = self.state.height;\n        let network = Arc::<_>::clone(&self.network);\n        tokio::spawn(async move {\n            let batch_stxs: Vec<SignedTransaction> = (0..behavior.msg_num)\n                .map(|_| {\n                    
gen_invalid_nonce_dup_signed_tx(&TEST_PRI_KEY, height, &metadata, nonce.clone())\n                })\n                .collect();\n            let gossip_txs = MsgNewTxs { batch_stxs };\n            send(&network, gossip_txs, ctx, END_GOSSIP_NEW_TXS, &behavior).await;\n        });\n    }\n\n    async fn set_state(&mut self, req_opt: Option<&Request>) {\n        if let Some(req) = req_opt {\n            match req {\n                Request::RecvProposal(proposal) => {\n                    let signed_proposal: SignedProposal<FixedPill> =\n                        rlp::decode(&proposal.0).expect(\"decode signed_proposal failed\");\n                    let proposal = signed_proposal.proposal;\n                    if proposal.height > self.state.height\n                        || (proposal.height == self.state.height\n                            && proposal.round >= self.state.round)\n                    {\n                        let header = proposal.content.inner.block.header.clone();\n                        self.state.height = proposal.height;\n                        self.state.round = proposal.round;\n                        self.state.prev_hash = header.prev_hash;\n                        self.state.proof = header.proof;\n                        self.state.state_root = header.state_root;\n                        self.state.exec_height = header.exec_height;\n                        self.state.confirm_root = header.confirm_root;\n                        self.state.receipt_root = header.receipt_root;\n                        self.state.cycles_used = header.cycles_used;\n                        self.state.lock = proposal.lock.clone();\n                    }\n                    self.send_prevote(&proposal).await;\n                }\n                Request::RecvQC(qc) => {\n                    let qc: AggregatedVote = rlp::decode(&qc.0).expect(\"decode qc failed\");\n                    if !qc.is_prevote_qc() && qc.height >= self.state.height {\n                      
  if !qc.block_hash.is_empty() {\n                            self.state.height = qc.height + 1;\n                            self.state.round = 0;\n                            self.state.prev_hash = Hash::from_bytes(qc.block_hash.clone()).unwrap();\n                            self.state.proof = Proof {\n                                height:     qc.height,\n                                round:      qc.round,\n                                block_hash: Hash::from_bytes(qc.block_hash.clone()).unwrap(),\n                                signature:  qc.signature.signature.clone(),\n                                bitmap:     qc.signature.address_bitmap,\n                            };\n                            self.state.confirm_root = vec![];\n                            self.state.receipt_root = vec![];\n                            self.state.cycles_used = vec![];\n                            self.state.lock = None;\n                            self.state.prev_timestamp = time_now();\n                        } else if qc.round >= self.state.round {\n                            self.state.height = qc.height;\n                            self.state.round = qc.round + 1;\n                        }\n                    }\n                }\n                Request::RecvVote(vote) => {\n                    let vote: SignedVote = rlp::decode(&vote.0).expect(\"decode vote failed\");\n                    if vote.vote.height > self.state.height\n                        || (vote.vote.height == self.state.height\n                            && vote.vote.round > self.state.round)\n                    {\n                        self.state.height = vote.vote.height;\n                        self.state.round = vote.vote.round;\n                    }\n                }\n                Request::RecvChoke(choke) => {\n                    let choke: SignedChoke = rlp::decode(&choke.0).expect(\"decode choke failed\");\n                    if choke.choke.height > 
self.state.height\n                        || (choke.choke.height == self.state.height\n                            && choke.choke.round > self.state.round)\n                    {\n                        self.state.height = choke.choke.height;\n                        self.state.round = choke.choke.round;\n                    }\n                    self.send_choke(choke.choke.clone(), choke.address.clone())\n                        .await;\n                }\n                Request::RecvHeight(height) => {\n                    if *height > self.state.height {\n                        self.state.height = *height;\n                        self.state.round = 0;\n                    }\n                }\n                _ => panic!(\"not supported yet\"),\n            }\n        }\n        self.check_liveness();\n    }\n\n    fn check_liveness(&self) {\n        let current_time = time_now();\n        let gap = current_time - self.state.prev_timestamp;\n        if gap > 10 * 60 * 1000 {\n            panic!(\"liveness is seemingly broken! do not reach consensus in past 10 min\");\n        } else if gap > 5 * 60 * 1000 {\n            println!(\"strong warning! do not reach consensus in past 5 min\");\n        } else if gap > 60 * 1000 {\n            println!(\"warning! 
do not reach consensus in past 60 s\");\n        }\n    }\n\n    async fn send_prevote(&self, proposal: &Proposal<FixedPill>) {\n        let pre_vote = Vote {\n            height:     proposal.height,\n            round:      proposal.round,\n            vote_type:  VoteType::Prevote,\n            block_hash: proposal.block_hash.clone(),\n        };\n        let signed_vote = gen_valid_signed_vote(pre_vote, &self.crypto, &self.pub_key);\n        let msg = signed_vote.rlp_bytes();\n        let peer_id = PeerId::from_pubkey_bytes(&proposal.proposer)\n            .unwrap()\n            .into_bytes_ext();\n        let _ = self\n            .network\n            .multicast(\n                Context::default(),\n                END_GOSSIP_SIGNED_VOTE,\n                [peer_id],\n                msg,\n                Priority::High,\n            )\n            .await;\n    }\n\n    async fn send_choke(&self, choke: Choke, sender: Bytes) {\n        let signed_choke = gen_valid_signed_choke(choke, &self.crypto, &self.pub_key);\n        let msg = signed_choke.rlp_bytes();\n        let peer_id = PeerId::from_pubkey_bytes(&sender).unwrap().into_bytes_ext();\n        let _ = self\n            .network\n            .multicast(\n                Context::default(),\n                END_GOSSIP_SIGNED_CHOKE,\n                [peer_id],\n                msg,\n                Priority::High,\n            )\n            .await;\n    }\n}\n\nasync fn send<M, N>(network: &Arc<N>, message: M, ctx: Context, end: &str, behavior: &Behavior)\nwhere\n    M: MessageCodec,\n    N: Rpc + PeerTrust + Gossip + 'static,\n{\n    let peer_ids: Vec<_> = behavior\n        .send_to\n        .iter()\n        .map(|pub_key| PeerId::from_pubkey_bytes(pub_key).unwrap().into_bytes_ext())\n        .collect();\n    let _ = network\n        .multicast(ctx.clone(), end, peer_ids, message, behavior.priority)\n        .await;\n}\n"
  },
  {
    "path": "byzantine/tests/byz.test.ts",
    "content": "import { parse } from 'toml';\nimport { find } from 'lodash';\nimport { readFileSync } from 'fs';\nimport { Muta } from \"@mutadev/muta-sdk\";\n\nconst genesis = parse(readFileSync('../../examples/genesis.toml', 'utf-8'));\nconst metadata = JSON.parse(\n  find(genesis.services, (s) => s.name === 'metadata').payload,\n);\nconst chain_id = metadata.chain_id;\nconst client_0 = get_client('../../examples/config-1.toml', chain_id);\nconst client_1 = get_client('../../examples/config-2.toml', chain_id);\nconst client_2 = get_client('../../examples/config-3.toml', chain_id);\n\ndescribe(\"Byzantine test via @mutadev/muta-sdk-js\", () => {\n  test(\"getLatestBlock\", async () => {\n    const timeoutLoopTimes = Number(process.env.TIMEOUT) || 600;  // seconds\n    let last_height = 0;\n    let cnt = 0;\n    for (let i = 0; i < timeoutLoopTimes; i++) {\n      let height_0 = await client_0.getLatestBlockHeight();\n      let height_1 = await client_1.getLatestBlockHeight();\n      let height_2 = await client_2.getLatestBlockHeight();\n      let max_height = Math.max(height_0, height_1, height_2);\n      console.log(max_height);\n      if (max_height > last_height) {\n        last_height = max_height;\n        cnt = 0;\n      } else if (max_height === last_height) {\n        cnt += 1;\n        if (cnt > 600) {\n          throw new Error('break liveness');\n        }\n      } else {\n        throw new Error('break safety');\n      }\n      await sleep(1000);\n    }\n  });\n});\n\nfunction sleep(ms: number) {\n  return new Promise((resolve) => setTimeout(resolve, ms));\n}\n\nfunction get_client(file_path: string, chain_id: string) {\n  const config = parse(readFileSync(file_path, 'utf-8'));\n  const graphql_port = config.graphql.listening_address.split(':')[1];\n  const muta = new Muta({\n    endpoint: 'http://localhost:' + graphql_port + '/graphql',\n    
chainId: chain_id\n  });\n  return muta.client();\n}\n"
  },
  {
    "path": "byzantine/tests/jest.config.js",
    "content": "module.exports = {\n  displayName: \"Unit Tests\",\n  testRegex: \"(/.*\\\\.(test|spec))\\\\.(ts|js)$\",\n  transform: {\n    \"^.+\\\\.tsx?$\": \"ts-jest\"\n  },\n  moduleFileExtensions: [\"ts\", \"js\", \"json\"],\n  testTimeout: 10000000\n};\n"
  },
  {
    "path": "byzantine/tests/package.json",
    "content": "{\n  \"name\": \"muta-e2e-tests\",\n  \"version\": \"1.0.0\",\n  \"description\": \"\",\n  \"author\": \"wancencen\",\n  \"license\": \"MIT\",\n  \"scripts\": {\n    \"test\": \"jest --color\",\n    \"lint\": \"eslint --fix '{src,test}/**/*.{js,ts}'\",\n    \"prettier\": \"prettier --write **/*.{js,ts,graphql}\"\n  },\n  \"dependencies\": {\n    \"@mutadev/muta-sdk\": \"0.2.0-rc.0\",\n    \"@mutadev/service\": \"0.2.0-rc.0\",\n    \"graphql\": \"^15.2.0\",\n    \"graphql-tag\": \"^2.10.1\",\n    \"toml\": \"^3.0.0\",\n    \"lodash\": \"^4.17.15\",\n    \"ts-node\": \"^8.3.0\",\n    \"typescript\": \"^3.5.3\"\n  },\n  \"devDependencies\": {\n    \"@types/jest\": \"^24.0.23\",\n    \"jest\": \"^24.9.0\",\n    \"prettier\": \"^1.19.1\",\n    \"ts-jest\": \"^26.0.0\"\n  }\n}\n"
  },
  {
    "path": "charts/deploy-chaos/.helmignore",
    "content": "# Patterns to ignore when building packages.\n# This supports shell glob matching, relative path matching, and\n# negation (prefixed with !). Only one pattern per line.\n.DS_Store\n# Common VCS dirs\n.git/\n.gitignore\n.bzr/\n.bzrignore\n.hg/\n.hgignore\n.svn/\n# Common backup files\n*.swp\n*.bak\n*.tmp\n*~\n# Various IDEs\n.project\n.idea/\n*.tmproj\n.vscode/\n"
  },
  {
    "path": "charts/deploy-chaos/Chart.yaml",
    "content": "apiVersion: v2\nname: deploy-chaos\ndescription: A Helm chart for Kubernetes\n\n# A chart can be either an 'application' or a 'library' chart.\n#\n# Application charts are a collection of templates that can be packaged into versioned archives\n# to be deployed.\n#\n# Library charts provide useful utilities or functions for the chart developer. They're included as\n# a dependency of application charts to inject those utilities and functions into the rendering\n# pipeline. Library charts do not define any templates and therefore cannot be deployed.\ntype: application\n\n# This is the chart version. This version number should be incremented each time you make changes\n# to the chart and its templates, including the app version.\nversion: 0.1.0\n\n# This is the version number of the application being deployed. This version number should be\n# incremented each time you make changes to the application.\nappVersion: 1.16.0\n"
  },
  {
    "path": "charts/deploy-chaos/templates/_helpers.tpl",
    "content": "{{/* vim: set filetype=mustache: */}}\n{{/*\nExpand the name of the chart.\n*/}}\n{{- define \"deploy-chaos.name\" -}}\n{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix \"-\" -}}\n{{- end -}}\n\n{{/*\nCreate a default fully qualified app name.\nWe truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).\nIf release name contains chart name it will be used as a full name.\n*/}}\n{{- define \"deploy-chaos.fullname\" -}}\n{{- if .Values.fullnameOverride -}}\n{{- .Values.fullnameOverride | trunc 63 | trimSuffix \"-\" -}}\n{{- else -}}\n{{- $name := default .Chart.Name .Values.nameOverride -}}\n{{- if contains $name .Release.Name -}}\n{{- .Release.Name | trunc 63 | trimSuffix \"-\" -}}\n{{- else -}}\n{{- printf \"%s-%s\" .Release.Name $name | trunc 63 | trimSuffix \"-\" -}}\n{{- end -}}\n{{- end -}}\n{{- end -}}\n\n{{/*\nCreate chart name and version as used by the chart label.\n*/}}\n{{- define \"deploy-chaos.chart\" -}}\n{{- printf \"%s-%s\" .Chart.Name .Chart.Version | replace \"+\" \"_\" | trunc 63 | trimSuffix \"-\" -}}\n{{- end -}}\n\n{{/*\nCommon labels\n*/}}\n{{- define \"deploy-chaos.labels\" -}}\nhelm.sh/chart: {{ include \"deploy-chaos.chart\" . }}\n{{ include \"deploy-chaos.selectorLabels\" . }}\n{{- if .Chart.AppVersion }}\napp.kubernetes.io/version: {{ .Chart.AppVersion | quote }}\n{{- end }}\napp.kubernetes.io/managed-by: {{ .Release.Service }}\n{{- end -}}\n\n{{/*\nSelector labels\n*/}}\n{{- define \"deploy-chaos.selectorLabels\" -}}\napp.kubernetes.io/name: {{ include \"deploy-chaos.name\" . }}\napp.kubernetes.io/instance: {{ .Release.Name }}\n{{- end -}}\n\n{{/*\nCreate the name of the service account to use\n*/}}\n{{- define \"deploy-chaos.serviceAccountName\" -}}\n{{- if .Values.serviceAccount.create -}}\n    {{ default (include \"deploy-chaos.fullname\" .) 
.Values.serviceAccount.name }}\n{{- else -}}\n    {{ default \"default\" .Values.serviceAccount.name }}\n{{- end -}}\n{{- end -}}\n"
  },
  {
    "path": "charts/deploy-chaos/templates/muta-benchmark.yaml",
    "content": "{{- $chainName  := (printf \"chaos-%s-%s\" .Values.repo_name .Values.version) -}}\napiVersion: batch/v1beta1\nkind: CronJob\nmetadata:\n  name: benchmark-{{ .Values.repo_name }}-{{ .Values.version }}\n  namespace: {{ .Values.namespace }} # Only supports deployment to the mutadev namespace\nspec:\n  concurrencyPolicy: Replace\n  schedule: {{ .Values.benchmark.schedule | quote }}\n  jobTemplate:\n    spec:\n      template:\n        spec:\n          containers:\n          - name: benchmark\n            image: {{ .Values.benchmark.image }}\n            args:\n            {{- range .Values.benchmark.args }}\n            - {{ . | quote }}\n            {{- end }}\n            - --chain-id\n            - {{ .Values.chain_genesis.metadata.chain_id }}\n            {{- range $i, $e := until (.Values.size | int) }}\n            - {{ printf \"http://%s-%d:8000/graphql\" $chainName $i }}\n            {{- end }}\n          restartPolicy: OnFailure\n"
  },
  {
    "path": "charts/deploy-chaos/templates/muta-chaos-crd.yaml",
    "content": "apiVersion: nervos.org/v1alpha1\nkind: Muta\nmetadata:\n  name: chaos-{{ .Values.repo_name }}-{{ .Values.version }}\n  namespace: {{ .Values.namespace }} # Only supports deployment to the mutadev namespace\nspec:\n  image: mutadev/{{ .Values.repo_name }}:{{ .Values.version }} # docker image\n  resources:\n    limits:\n      cpu: {{ .Values.resources.cpu }}\n      memory: {{ .Values.resources.memory }}\n      ephemeral-storage: {{ .Values.resources.storage }}\n    requests:\n      cpu: {{ .Values.resources.cpu }}\n      memory: {{ .Values.resources.memory }}\n      ephemeral-storage: {{ .Values.resources.storage }}\n  chaos: # all / stable-network-corrupt / stable-network-delay / stable-network-duplicate / stable-network-loss / stable-network-partition / stable-node-failure / stable-node-kill\n  {{- range .Values.chaos }}\n    - {{ . }}\n  {{- end }}\n  size: {{ .Values.size }} # Node numbers\n  persistent: {{ .Values.resources.persistent }} # Persistent data\n  config: # see https://github.com/nervosnetwork/muta/blob/master/devtools/chain/config.toml\n    data_path: \"/muta-data\"\n    graphql:\n      listening_address: \"0.0.0.0:8000\"\n      graphql_uri: \"/graphql\"\n      graphiql_uri: \"/\"\n      workers: 0 # if 0, uses number of available logical cpu as threads count.\n      maxconn: 25000\n    network:\n      listening_address: \"0.0.0.0:1337\"\n      rpc_timeout: {{ .Values.chain_config.network.rpc_timeout }}\n    mempool:\n      pool_size: {{ .Values.chain_config.mempool.pool_size }}\n      broadcast_txs_size: {{ .Values.chain_config.mempool.broadcast_txs_size }}\n      broadcast_txs_interval: {{ .Values.chain_config.mempool.broadcast_txs_interval }}\n    executor:\n      light: false\n    logger:\n      filter: \"info\"\n      log_to_console: true\n      console_show_file_and_line: false\n      log_path: \"/muta-data/logs/\"\n      log_to_file: true\n      metrics: true\n      modules_level:\n        # \"overlord::state::process\": 
\"debug\"\n        # \"core_consensus\": \"error\"\n  genesis: # https://github.com/nervosnetwork/muta/blob/master/devtools/chain/genesis.toml\n    prevhash: {{ .Values.chain_genesis.prevhash }}\n    metadata:\n      chain_id: {{ .Values.chain_genesis.metadata.chain_id }}\n      bech32_address_hrp: {{ .Values.chain_genesis.metadata.bech32_address_hrp }}\n      timeout_gap: {{ .Values.chain_genesis.metadata.timeout_gap }}\n      cycles_limit: {{ .Values.chain_genesis.metadata.cycles_limit }}\n      cycles_price: {{ .Values.chain_genesis.metadata.cycles_price }}\n      interval: {{ .Values.chain_genesis.metadata.interval }}\n      propose_ratio: {{ .Values.chain_genesis.metadata.propose_ratio }}\n      prevote_ratio: {{ .Values.chain_genesis.metadata.prevote_ratio }}\n      precommit_ratio: {{ .Values.chain_genesis.metadata.precommit_ratio }}\n      brake_ratio: {{ .Values.chain_genesis.metadata.brake_ratio }}\n      tx_num_limit: {{ .Values.chain_genesis.metadata.tx_num_limit }}\n      max_tx_size: {{ .Values.chain_genesis.metadata.max_tx_size }}\n    services:\n    {{- range $service := .Values.chain_genesis.services }}\n      - name: {{ $service.name }}\n        payload: {{ $service.payload | toJson | quote }}\n    {{- end }}\n"
  },
  {
    "path": "charts/deploy-chaos/values.yaml",
    "content": "# Default values for deploy-chaos.\n# This is a YAML-formatted file.\n# Declare variables to be passed into your templates.\n\nbenchmark:\n  schedule: \"*/6 * * * *\"\n  image: mutadev/muta-benchmark:v0.1.12\n  args:\n    - -d\n    - 300s\n    - -c\n    - 16\n    - -g\n    - 9999\n    - --cpu\n    - 3\n\nnamespace: mutadev\n\nrepo_name: muta\nversion: latest\n\nresources:\n  cpu: 1100m\n  memory: 4Gi\n  storage: 6Gi\n  persistent: true\n\nchaos:\n  - all\n\nsize: 4\n\nchain_config:\n  network:\n    rpc_timeout: 10\n  mempool:\n    pool_size: 20000\n    broadcast_txs_size: 200\n    broadcast_txs_interval: 200\n\nchain_genesis:\n  prevhash: 0x44915be5b6c20b0678cf05fcddbbaa832e25d7e6ac538784cd5c24de00d47472\n  metadata:\n    chain_id: 0xb6a4d7da21443f5e816e8700eea87610e6d769657d6b8ec73028457bf2ca4036\n    bech32_address_hrp: muta\n    timeout_gap: 9999\n    cycles_limit: 99999999\n    cycles_price: 1\n    interval: 3000\n    propose_ratio: 15\n    prevote_ratio: 15\n    precommit_ratio: 10\n    brake_ratio: 3\n    tx_num_limit: 10000\n    max_tx_size: 1073741824\n  services:\n    - name: asset\n      payload: { \"id\": \"0xf56924db538e77bb5951eb5ff0d02b88983c49c45eea30e8ae3e7234b311436c\", \"name\": \"MutaToken\", \"symbol\": \"MT\", \"supply\": 320000011, \"issuer\": \"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\" }\n"
  },
  {
    "path": "charts/muta/.helmignore",
    "content": "# Patterns to ignore when building packages.\n# This supports shell glob matching, relative path matching, and\n# negation (prefixed with !). Only one pattern per line.\n.DS_Store\n# Common VCS dirs\n.git/\n.gitignore\n.bzr/\n.bzrignore\n.hg/\n.hgignore\n.svn/\n# Common backup files\n*.swp\n*.bak\n*.tmp\n*~\n# Various IDEs\n.project\n.idea/\n*.tmproj\n"
  },
  {
    "path": "charts/muta/Chart.yaml",
    "content": "apiVersion: v1\ndescription: A Helm chart for Kubernetes\nicon: https://raw.githubusercontent.com/jenkins-x/jenkins-x-platform/d965bfa/images/rust.png\nname: muta\nversion: 0.1.0-SNAPSHOT\n"
  },
  {
    "path": "charts/muta/Makefile",
    "content": "CHART_REPO := http://jenkins-x-chartmuseum:8080\nCURRENT=$(pwd)\nNAME := muta\nOS := $(shell uname)\nRELEASE_VERSION := $(shell cat ../../VERSION)\n\nbuild: clean\n\trm -rf requirements.lock\n\thelm dependency build\n\thelm lint\n\ninstall: clean build\n\thelm install . --name ${NAME}\n\nupgrade: clean build\n\thelm upgrade ${NAME} .\n\ndelete:\n\thelm delete --purge ${NAME}\n\nclean:\n\trm -rf charts\n\trm -rf ${NAME}*.tgz\n\nrelease: clean\n\thelm dependency build\n\thelm lint\n\thelm init --client-only\n\thelm package .\n\tcurl --fail -u $(CHARTMUSEUM_CREDS_USR):$(CHARTMUSEUM_CREDS_PSW) --data-binary \"@$(NAME)-$(shell sed -n 's/^version: //p' Chart.yaml).tgz\" $(CHART_REPO)/api/charts\n\trm -rf ${NAME}*.tgz\n\ntag:\nifeq ($(OS),Darwin)\n\tsed -i \"\" -e \"s/version:.*/version: $(RELEASE_VERSION)/\" Chart.yaml\n\tsed -i \"\" -e \"s/tag:.*/tag: $(RELEASE_VERSION)/\" values.yaml\nelse ifeq ($(OS),Linux)\n\tsed -i -e \"s/version:.*/version: $(RELEASE_VERSION)/\" Chart.yaml\n\tsed -i -e \"s|repository:.*|repository: $(DOCKER_REGISTRY)\\/nervosnetwork\\/muta|\" values.yaml\n\tsed -i -e \"s/tag:.*/tag: $(RELEASE_VERSION)/\" values.yaml\nelse\n\techo \"platform $(OS) not supported to release from\"\n\texit -1\nendif\n\tgit add --all\n\tgit commit -m \"release $(RELEASE_VERSION)\" --allow-empty # if first release then no version update is performed\n\tgit tag -fa v$(RELEASE_VERSION) -m \"Release version $(RELEASE_VERSION)\"\n\tgit push origin v$(RELEASE_VERSION)\n"
  },
  {
    "path": "charts/muta/README.md",
    "content": "# Rust application"
  },
  {
    "path": "charts/muta/templates/NOTES.txt",
    "content": "\nGet the application URL by running these commands:\n\nkubectl get ingress {{ template \"fullname\" . }}\n"
  },
  {
    "path": "charts/muta/templates/_helpers.tpl",
    "content": "{{/* vim: set filetype=mustache: */}}\n{{/*\nExpand the name of the chart.\n*/}}\n{{- define \"name\" -}}\n{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix \"-\" -}}\n{{- end -}}\n\n{{/*\nCreate a default fully qualified app name.\nWe truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).\n*/}}\n{{- define \"fullname\" -}}\n{{- $name := default .Chart.Name .Values.nameOverride -}}\n{{- printf \"%s-%s\" .Release.Name $name | trunc 63 | trimSuffix \"-\" -}}\n{{- end -}}\n"
  },
  {
    "path": "charts/muta/templates/canary.yaml",
    "content": "{{- if .Values.canary.enabled }}\napiVersion: flagger.app/v1beta1\nkind: Canary\nmetadata:\n  name: {{ template \"fullname\" . }}\n  labels:\n    draft: {{ default \"draft-app\" .Values.draft }}\n    chart: \"{{ .Chart.Name }}-{{ .Chart.Version | replace \"+\" \"_\" }}\"\nspec:\n  provider: istio\n  targetRef:\n    apiVersion: apps/v1\n    kind: Deployment\n    name: {{ template \"fullname\" . }}\n  progressDeadlineSeconds: {{ .Values.canary.progressDeadlineSeconds }}\n  {{- if .Values.hpa.enabled }}\n  autoscalerRef:\n    apiVersion: autoscaling/v2beta1\n    kind: HorizontalPodAutoscaler\n    name: {{ template \"fullname\" . }}\n  {{- end }}\n  service:\n    port: {{ .Values.service.externalPort }}\n    targetPort: {{ .Values.service.internalPort }}\n    gateways:\n    - {{ template \"fullname\" . }}\n    hosts:\n    - {{ .Values.canary.host }}\n  analysis:\n    interval: {{ .Values.canary.canaryAnalysis.interval }}\n    threshold: {{ .Values.canary.canaryAnalysis.threshold }}\n    maxWeight: {{ .Values.canary.canaryAnalysis.maxWeight }}\n    stepWeight: {{ .Values.canary.canaryAnalysis.stepWeight }}\n    metrics:\n    - name: request-success-rate\n      threshold: {{ .Values.canary.canaryAnalysis.metrics.requestSuccessRate.threshold }}\n      interval: {{ .Values.canary.canaryAnalysis.metrics.requestSuccessRate.interval }}\n    - name: latency\n      templateRef:\n        name: latency\n      thresholdRange:\n        max: {{ .Values.canary.canaryAnalysis.metrics.requestDuration.threshold }}\n      interval: {{ .Values.canary.canaryAnalysis.metrics.requestDuration.interval }}\n\n---\n\napiVersion: flagger.app/v1beta1\nkind: MetricTemplate\nmetadata:\n  name: latency\nspec:\n  provider:\n    type: prometheus\n    address: http://prometheus.istio-system:9090\n  query: |\n    histogram_quantile(\n        0.99,\n        sum(\n            rate(\n                istio_request_duration_milliseconds_bucket{\n                    reporter=\"destination\",\n  
                  destination_workload_namespace=\"{{ \"{{\" }} namespace {{ \"}}\" }}\",\n                    destination_workload=~\"{{ \"{{\" }} target {{ \"}}\" }}\"\n                }[{{ \"{{\" }} interval {{ \"}}\" }}]\n            )\n        ) by (le)\n    )\n\n---\n\napiVersion: networking.istio.io/v1alpha3\nkind: Gateway\nmetadata:\n  name: {{ template \"fullname\" . }}\nspec:\n  selector:\n    istio: ingressgateway\n  servers:\n  - port:\n      number: {{ .Values.service.externalPort }}\n      name: http\n      protocol: HTTP\n    hosts:\n    - {{ .Values.canary.host }}\n{{- end }}\n"
  },
  {
    "path": "charts/muta/templates/deployment.yaml",
    "content": "{{- if .Values.knativeDeploy }}\n{{- else }}\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: {{ template \"fullname\" . }}\n  labels:\n    draft: {{ default \"draft-app\" .Values.draft }}\n    chart: \"{{ .Chart.Name }}-{{ .Chart.Version | replace \"+\" \"_\" }}\"\nspec:\n  selector:\n    matchLabels:\n      app: {{ template \"fullname\" . }}\n{{- if .Values.hpa.enabled }}\n{{- else }}\n  replicas: {{ .Values.replicaCount }}\n{{- end }}\n  template:\n    metadata:\n      labels:\n        draft: {{ default \"draft-app\" .Values.draft }}\n        app: {{ template \"fullname\" . }}\n{{- if .Values.podAnnotations }}\n      annotations:\n{{ toYaml .Values.podAnnotations | indent 8 }}\n{{- end }}\n    spec:\n      containers:\n      - name: {{ .Chart.Name }}\n        image: \"{{ .Values.image.repository }}:{{ .Values.image.tag }}\"\n        imagePullPolicy: {{ .Values.image.pullPolicy }}\n        env:\n{{- range $pkey, $pval := .Values.env }}\n        - name: {{ $pkey }}\n          value: {{ quote $pval }}\n{{- end }}\n        envFrom:\n{{ toYaml .Values.envFrom | indent 10 }}\n        ports:\n        - containerPort: {{ .Values.service.internalPort }}\n        livenessProbe:\n          httpGet:\n            path: {{ .Values.probePath }}\n            port: {{ .Values.service.internalPort }}\n          initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}\n          periodSeconds: {{ .Values.livenessProbe.periodSeconds }}\n          successThreshold: {{ .Values.livenessProbe.successThreshold }}\n          timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}\n        readinessProbe:\n          httpGet:\n            path: {{ .Values.probePath }}\n            port: {{ .Values.service.internalPort }}\n          periodSeconds: {{ .Values.readinessProbe.periodSeconds }}\n          successThreshold: {{ .Values.readinessProbe.successThreshold }}\n          timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}\n        
resources:\n{{ toYaml .Values.resources | indent 12 }}\n      terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }}\n{{- end }}\n"
  },
  {
    "path": "charts/muta/templates/hpa.yaml",
    "content": "{{- if .Values.hpa.enabled }}\napiVersion: autoscaling/v2beta1\nkind: HorizontalPodAutoscaler\nmetadata:\n  name: {{ template \"fullname\" . }}\n  labels:\n    draft: {{ default \"draft-app\" .Values.draft }}\n    chart: \"{{ .Chart.Name }}-{{ .Chart.Version | replace \"+\" \"_\" }}\"\nspec:\n  scaleTargetRef:\n    apiVersion: apps/v1\n    kind: Deployment\n    name: {{ template \"fullname\" . }}\n  minReplicas: {{ .Values.hpa.minReplicas }}\n  maxReplicas: {{ .Values.hpa.maxReplicas }}\n  metrics:\n  - type: Resource\n    resource:\n      name: cpu\n      targetAverageUtilization: {{ .Values.hpa.cpuTargetAverageUtilization }}\n  - type: Resource\n    resource:\n      name: memory\n      targetAverageUtilization: {{ .Values.hpa.memoryTargetAverageUtilization }}\n{{- end }}"
  },
  {
    "path": "charts/muta/templates/ingress.yaml",
    "content": "{{- if and (.Values.jxRequirements.ingress.domain) (not .Values.knativeDeploy) }}\napiVersion: {{ .Values.jxRequirements.ingress.apiVersion }}\nkind: Ingress\nmetadata:\n  annotations:\n    kubernetes.io/ingress.class: nginx\n{{- if .Values.ingress.annotations }}\n{{ toYaml .Values.ingress.annotations | indent 4 }}\n{{- end }}\n{{- if .Values.jxRequirements.ingress.annotations }}\n{{ toYaml .Values.jxRequirements.ingress.annotations | indent 4 }}\n{{- end }}\n  name: {{ .Values.service.name }}\nspec:\n  rules:\n  - host: {{ .Values.service.name }}{{ .Values.jxRequirements.ingress.namespaceSubDomain }}{{ .Values.jxRequirements.ingress.domain }}\n    http:\n      paths:\n      - backend:\n          serviceName: {{ .Values.service.name }}\n          servicePort: 80\n{{- if .Values.jxRequirements.ingress.tls.enabled }}\n  tls:\n  - hosts:\n    - {{ .Values.service.name }}{{ .Values.jxRequirements.ingress.namespaceSubDomain }}{{ .Values.jxRequirements.ingress.domain }}\n{{- if .Values.jxRequirements.ingress.tls.production }}\n    secretName: \"tls-{{ .Values.jxRequirements.ingress.domain | replace \".\" \"-\" }}-p\"\n{{- else }}\n    secretName: \"tls-{{ .Values.jxRequirements.ingress.domain | replace \".\" \"-\" }}-s\"\n{{- end }}\n{{- end }}\n{{- end }}\n"
  },
  {
    "path": "charts/muta/templates/ksvc.yaml",
    "content": "{{- if .Values.knativeDeploy }}\napiVersion: serving.knative.dev/v1alpha1\nkind: Service\nmetadata:\n{{- if .Values.service.name }}\n  name: {{ .Values.service.name }}\n{{- else }}\n  name: {{ template \"fullname\" . }}\n{{- end }}\n  labels:\n    chart: \"{{ .Chart.Name }}-{{ .Chart.Version | replace \"+\" \"_\" }}\"\nspec:\n  runLatest:\n    configuration:\n      revisionTemplate:\n        spec:\n          container:\n            image: \"{{ .Values.image.repository }}:{{ .Values.image.tag }}\"\n            imagePullPolicy: {{ .Values.image.pullPolicy }}\n            env:\n{{- range $pkey, $pval := .Values.env }}\n            - name: {{ $pkey }}\n              value: {{ quote $pval }}\n{{- end }}\n            livenessProbe:\n              httpGet:\n                path: {{ .Values.probePath }}\n              initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}\n              periodSeconds: {{ .Values.livenessProbe.periodSeconds }}\n              successThreshold: {{ .Values.livenessProbe.successThreshold }}\n              timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}\n            readinessProbe:\n              failureThreshold: {{ .Values.readinessProbe.failureThreshold }}\n              httpGet:\n                path: {{ .Values.probePath }}\n              periodSeconds: {{ .Values.readinessProbe.periodSeconds }}\n              successThreshold: {{ .Values.readinessProbe.successThreshold }}\n              timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}\n            resources:\n{{ toYaml .Values.resources | indent 14 }}\n{{- end }}\n"
  },
  {
    "path": "charts/muta/templates/service.yaml",
    "content": "{{- if or .Values.knativeDeploy .Values.canary.enabled }}\n{{- else }}\napiVersion: v1\nkind: Service\nmetadata:\n{{- if .Values.service.name }}\n  name: {{ .Values.service.name }}\n{{- else }}\n  name: {{ template \"fullname\" . }}\n{{- end }}\n  labels:\n    chart: \"{{ .Chart.Name }}-{{ .Chart.Version | replace \"+\" \"_\" }}\"\n{{- if .Values.service.annotations }}\n  annotations:\n{{ toYaml .Values.service.annotations | indent 4 }}\n{{- end }}\nspec:\n  type: {{ .Values.service.type }}\n  ports:\n  - port: {{ .Values.service.externalPort }}\n    targetPort: {{ .Values.service.internalPort }}\n    protocol: TCP\n    name: http\n  selector:\n    app: {{ template \"fullname\" . }}\n{{- end }}\n"
  },
  {
    "path": "charts/muta/values.yaml",
    "content": "# Default values for Rust projects.\n# This is a YAML-formatted file.\n# Declare variables to be passed into your templates.\nreplicaCount: 1\nimage:\n  repository: draft\n  tag: dev\n  pullPolicy: IfNotPresent\n\n# define environment variables here as a map of key: value\nenv:\n\n# enable this flag to use knative serve to deploy the app\nknativeDeploy: false\n\n# HorizontalPodAutoscaler\nhpa:\n  enabled: false\n  minReplicas: 2\n  maxReplicas: 6\n  cpuTargetAverageUtilization: 80\n  memoryTargetAverageUtilization: 80\n\n# Canary deployments\n# If enabled, Istio v1.5+ and Flagger need to be installed in the cluster\ncanary:\n  enabled: false\n  progressDeadlineSeconds: 60\n  canaryAnalysis:\n    interval: \"1m\"\n    threshold: 5\n    maxWeight: 60\n    stepWeight: 20\n    # WARNING: Canary deployments will fail and rollback if there is no traffic that will generate the below specified metrics.\n    metrics:\n      requestSuccessRate:\n        threshold: 99\n        interval: \"1m\"\n      requestDuration:\n        threshold: 1000\n        interval: \"1m\"\n  # The host is using Istio Gateway and is currently not auto-generated\n  # Please overwrite the `canary.host` in `values.yaml` in each environment repository (e.g., staging, production)\n  host: acme.com\n\nservice:\n  name: muta\n  type: ClusterIP\n  externalPort: 80\n  internalPort: 8080\n  annotations:\n    fabric8.io/expose: \"true\"\n    fabric8.io/ingress.annotations: \"kubernetes.io/ingress.class: nginx\"\nresources:\n  limits:\n    cpu: 100m\n    memory: 256Mi\n  requests:\n    cpu: 80m\n    memory: 128Mi\nprobePath: /\nlivenessProbe:\n  initialDelaySeconds: 60\n  periodSeconds: 10\n  successThreshold: 1\n  timeoutSeconds: 1\nreadinessProbe:\n  failureThreshold: 1\n  periodSeconds: 10\n  successThreshold: 1\n  timeoutSeconds: 1\n\n\n# custom ingress annotations on this service\ningress:\n  annotations:\n#      kubernetes.io/ingress.class: nginx\n\n# values we use from the 
`jx-requirements.yml` file if we are using helmfile and helm 3\njxRequirements:\n  ingress:\n    domain: \"\"\n    externalDNS: false\n    namespaceSubDomain: -jx.\n    tls:\n      email: \"\"\n      enabled: false\n      production: false\n\n    # For Kubernetes v1.14+, use 'networking.k8s.io/v1beta1'\n    apiVersion: \"extensions/v1beta1\"\n\n    # shared ingress annotations on all services\n    annotations:\n    #  kubernetes.io/ingress.class: nginx\n"
  },
  {
    "path": "charts/preview/Chart.yaml",
    "content": "apiVersion: v1\ndescription: A Helm chart for Kubernetes\nicon: https://raw.githubusercontent.com/jenkins-x/jenkins-x-platform/d965bfa/images/rust.png\nname: preview\nversion: 0.1.0-SNAPSHOT\n"
  },
  {
    "path": "charts/preview/Makefile",
    "content": "OS := $(shell uname)\n\npreview: \nifeq ($(OS),Darwin)\n\tsed -i \"\" -e \"s/version:.*/version: $(PREVIEW_VERSION)/\" Chart.yaml\n\tsed -i \"\" -e \"s/version:.*/version: $(PREVIEW_VERSION)/\" ../*/Chart.yaml\n\tsed -i \"\" -e \"s/tag:.*/tag: $(PREVIEW_VERSION)/\" values.yaml\nelse ifeq ($(OS),Linux)\n\tsed -i -e \"s/version:.*/version: $(PREVIEW_VERSION)/\" Chart.yaml\n\tsed -i -e \"s/version:.*/version: $(PREVIEW_VERSION)/\" ../*/Chart.yaml\n\tsed -i -e \"s|repository:.*|repository: $(DOCKER_REGISTRY)\\/nervosnetwork\\/muta|\" values.yaml\n\tsed -i -e \"s/tag:.*/tag: $(PREVIEW_VERSION)/\" values.yaml\nelse\n\techo \"platform $(OS) not supported to release from\"\n\texit -1\nendif\n\techo \"  version: $(PREVIEW_VERSION)\" >> requirements.yaml\n\tjx step helm build\n"
  },
  {
    "path": "charts/preview/requirements.yaml",
    "content": "# !! File must end with empty line !!\ndependencies:\n- alias: expose\n  name: exposecontroller\n  repository: http://chartmuseum.jenkins-x.io\n  version: 2.3.92\n- alias: cleanup\n  name: exposecontroller\n  repository: http://chartmuseum.jenkins-x.io\n  version: 2.3.92\n\n  # !! \"alias: preview\" must be last entry in dependencies array !!\n  # !! Place custom dependencies above !!\n- alias: preview\n  name: muta\n  repository: file://../muta\n"
  },
  {
    "path": "charts/preview/values.yaml",
    "content": "cleanup:\n  Annotations:\n    helm.sh/hook: pre-delete\n    helm.sh/hook-delete-policy: hook-succeeded\n  Args:\n  - --cleanup\nexpose:\n  Annotations:\n    helm.sh/hook: post-install,post-upgrade\n    helm.sh/hook-delete-policy: hook-succeeded\n  config:\n    exposer: Ingress\n    http: true\n    tlsacme: false\npreview:\n  image:\n    pullPolicy: IfNotPresent\n    repository: null\n    tag: null\n  namespace: jx-previews\n"
  },
  {
    "path": "clippy.toml",
    "content": "too-many-arguments-threshold = 12"
  },
  {
    "path": "common/apm/Cargo.toml",
    "content": "[package]\nname = \"common-apm\"\nversion = \"0.2.1\"\nauthors = [\"Muta Dev <muta@nervos.org>\"]\nedition = \"2018\"\nrepository = \"https://github.com/nervosnetwork/muta\"\n\n# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html\n\n[dependencies]\nprotocol = { path = \"../../protocol\", package = \"muta-protocol\" }\n\nmuta-apm = \"0.1.0-alpha.15\"\nprometheus = \"0.10\"\nprometheus-static-metric = \"0.5\"\nderive_more = \"0.99\"\nlazy_static = \"1.4\"\n"
  },
  {
    "path": "common/apm/README.md",
    "content": "# Metrics documentation for prometheus\nAll current metrics and their usage\n## API\n\n| Metric name | Metric types | Related Grafana panel |\n|---|---|---|\n| muta_api_request_total             | counter      |                          |\n| muta_api_request_result_total      | counter      | processed_tx_request     |\n| muta_api_request_time_cost_seconds | histogram    |                          |\n\n\n## Consensus\n<table>\n<thead>\n  <tr>\n    <th colspan=\"3\">Consensus</th>\n  </tr>\n</thead>\n<tbody>\n  <tr>\n    <td>Metric name</td>\n    <td>Metric types</td>\n    <td>Related Grafana panel</td>\n  </tr>\n  <tr>\n    <td>muta_concensus_result</td>\n    <td>counter</td>\n    <td></td>\n  </tr>\n  <tr>\n    <td>muta_consensus_time_cost_seconds</td>\n    <td>histogram</td>\n    <td>exec_p90</td>\n  </tr>\n  <tr>\n    <td>muta_consensus_round</td>\n    <td>gauge</td>\n    <td>consensus_round_cost</td>\n  </tr>\n  <tr>\n    <td>muta_executing_block_count</td>\n    <td>gauge</td>\n    <td>executing_block_size</td>\n  </tr>\n  <tr>\n    <td rowspan=\"3\">muta_consensus_height</td>\n    <td rowspan=\"3\">gauge</td>\n    <td>get_cf_each_block_time_usage</td>\n  </tr>\n  <tr>\n    <td>put_cf_each_block_time_usage</td>\n  </tr>\n  <tr>\n    <td>current_height</td>\n  </tr>\n  <tr>\n    <td>muta_consensus_committed_tx_total</td>\n    <td>counter</td>\n    <td>TPS</td>\n  </tr>\n  <tr>\n    <td>muta_consensus_sync_block_duration</td>\n    <td>histogram</td>\n    <td>synced_block</td>\n  </tr>\n  <tr>\n    <td>muta_consensus_duration_seconds</td>\n    <td>histogram</td>\n    <td>consensus_p90</td>\n  </tr>\n</tbody>\n</table>\n\n## Mempool\n<table>\n<thead>\n  <tr>\n    <th>Metric name</th>\n    <th>Metric types</th>\n    <th>Related Grafana panel</th>\n  </tr>\n</thead>\n<tbody>\n  <tr>\n    <td>muta_mempool_counter</td>\n    <td>counter</td>\n    <td></td>\n  </tr>\n  
<tr>\n    <td>muta_mempool_cost_seconds</td>\n    <td>histogram</td>\n    <td></td>\n  </tr>\n  <tr>\n    <td>muta_mempool_package_size_vec</td>\n    <td>histogram</td>\n    <td></td>\n  </tr>\n  <tr>\n    <td>muta_mempool_current_size_vec</td>\n    <td>histogram</td>\n    <td></td>\n  </tr>\n  <tr>\n    <td>muta_mempool_tx_count</td>\n    <td>gauge</td>\n    <td>mempool_cached_tx</td>\n  </tr>\n</tbody>\n</table>\n\n## Network\n<table>\n<thead>\n  <tr>\n    <th>Metric name</th>\n    <th>Metric types</th>\n    <th>Related Grafana panel</th>\n  </tr>\n</thead>\n<tbody>\n  <tr>\n    <td>muta_network_message_total</td>\n    <td>counter</td>\n    <td>network_message_arrival_rate</td>\n  </tr>\n  <tr>\n    <td>muta_network_rpc_result_total</td>\n    <td>counter</td>\n    <td></td>\n  </tr>\n  <tr>\n    <td>muta_network_protocol_time_cost_seconds</td>\n    <td>histogram</td>\n    <td></td>\n  </tr>\n  <tr>\n    <td>muta_network_total_pending_data_size</td>\n    <td>gauge</td>\n    <td></td>\n  </tr>\n  <tr>\n    <td>muta_network_ip_pending_data_size</td>\n    <td>gauge</td>\n    <td></td>\n  </tr>\n  <tr>\n    <td>muta_network_received_message_in_processing_guage</td>\n    <td>gauge</td>\n    <td>Received messages in processing</td>\n  </tr>\n  <tr>\n    <td>muta_network_received_ip_message_in_processing_guage</td>\n    <td>gauge</td>\n    <td>Received messages in processing by ip</td>\n  </tr>\n  <tr>\n    <td>muta_network_connected_peers</td>\n    <td>gauge</td>\n    <td>Connected Peers</td>\n  </tr>\n  <tr>\n    <td rowspan=\"2\">muta_network_ping_in_ms</td>\n    <td rowspan=\"2\">histogram</td>\n    <td>Ping (ms)</td>\n  </tr>\n  <tr>\n    <td>Ping by ip</td>\n  </tr>\n  <tr>\n    <td>muta_network_ip_disconnected_count</td>\n    <td>counter</td>\n    <td>Disconnected count (to other peers)</td>\n  </tr>\n  <tr>\n    <td>muta_network_outbound_connecting_peers</td>\n    <td>gauge</td>\n    <td>Connecting Peers</td>\n  </tr>\n  <tr>\n    
<td>muta_network_unidentified_connections</td>\n    <td>gauge</td>\n    <td>Received messages in processing</td>\n  </tr>\n  <tr>\n    <td>muta_network_saved_peer_count</td>\n    <td>counter</td>\n    <td>Saved peers</td>\n  </tr>\n  <tr>\n    <td>muta_network_tagged_consensus_peers</td>\n    <td>gauge</td>\n    <td>Consensus peers</td>\n  </tr>\n  <tr>\n    <td>muta_network_connected_consensus_peers</td>\n    <td>gauge</td>\n    <td>Connected Consensus Peers (Minus itself)</td>\n  </tr>\n</tbody>\n</table>\n\n## Storage\n<table>\n<thead>\n  <tr>\n    <th>Metric name</th>\n    <th>Metric types</th>\n    <th>Related Grafana panel</th>\n  </tr>\n</thead>\n<tbody>\n  <tr>\n    <td>muta_storage_put_cf_seconds</td>\n    <td>counter</td>\n    <td>put_cf_each_block_time_usage</td>\n  </tr>\n  <tr>\n    <td>muta_storage_put_cf_bytes</td>\n    <td>counter</td>\n    <td></td>\n  </tr>\n  <tr>\n    <td>muta_storage_get_cf_seconds</td>\n    <td>counter</td>\n    <td>get_cf_each_block_time_usage</td>\n  </tr>\n  <tr>\n    <td>muta_storage_get_cf_total</td>\n    <td>counter</td>\n    <td></td>\n  </tr>\n</tbody>\n</table>"
  },
  {
    "path": "common/apm/src/lib.rs",
    "content": "// https://rust-lang.github.io/rust-clippy/master/index.html#float_cmp\n#![allow(clippy::float_cmp)]\n\npub mod metrics;\n\npub use muta_apm;\n\npub use lazy_static;\npub use prometheus;\npub use prometheus_static_metric;\n"
  },
  {
    "path": "common/apm/src/metrics/api.rs",
    "content": "use crate::metrics::{\n    auto_flush_from, exponential_buckets, make_auto_flush_static_metric, register_histogram_vec,\n    register_int_counter_vec, HistogramVec, IntCounterVec,\n};\n\nuse lazy_static::lazy_static;\n\nmake_auto_flush_static_metric! {\n    pub label_enum RequestKind {\n        send_transaction,\n        get_block,\n    }\n\n    pub label_enum SendTransactionResult {\n        success,\n        failure,\n    }\n\n    pub struct RequestCounterVec: LocalIntCounter {\n        \"type\" => RequestKind,\n    }\n\n    pub struct RequestResultCounterVec: LocalIntCounter {\n        \"type\" => RequestKind,\n        \"result\" => SendTransactionResult,\n    }\n\n    pub struct RequestTimeHistogramVec: LocalHistogram {\n        \"type\" => RequestKind,\n    }\n}\n\nlazy_static! {\n    pub static ref API_REQUEST_COUNTER_VEC: IntCounterVec =\n        register_int_counter_vec!(\"muta_api_request_total\", \"Total number of requests\", &[\n            \"type\"\n        ])\n        .expect(\"request total\");\n    pub static ref API_REQUEST_RESULT_COUNTER_VEC: IntCounterVec = register_int_counter_vec!(\n        \"muta_api_request_result_total\",\n        \"Total number of request results\",\n        &[\"type\", \"result\"]\n    )\n    .expect(\"request result total\");\n    pub static ref API_REQUEST_TIME_HISTOGRAM_VEC: HistogramVec = register_histogram_vec!(\n        \"muta_api_request_time_cost_seconds\",\n        \"Request process time cost\",\n        &[\"type\"],\n        exponential_buckets(0.001, 2.0, 20).expect(\"api req time exponential\")\n    )\n    .expect(\"request time cost\");\n}\n\nlazy_static! 
{\n    pub static ref API_REQUEST_COUNTER_VEC_STATIC: RequestCounterVec =\n        auto_flush_from!(API_REQUEST_COUNTER_VEC, RequestCounterVec);\n    pub static ref API_REQUEST_RESULT_COUNTER_VEC_STATIC: RequestResultCounterVec =\n        auto_flush_from!(API_REQUEST_RESULT_COUNTER_VEC, RequestResultCounterVec);\n    pub static ref API_REQUEST_TIME_HISTOGRAM_STATIC: RequestTimeHistogramVec =\n        auto_flush_from!(API_REQUEST_TIME_HISTOGRAM_VEC, RequestTimeHistogramVec);\n}\n"
  },
  {
    "path": "common/apm/src/metrics/consensus.rs",
    "content": "use crate::metrics::{\n    auto_flush_from, exponential_buckets, make_auto_flush_static_metric, register_histogram,\n    register_histogram_vec, register_int_counter, register_int_counter_vec, register_int_gauge,\n    Histogram, HistogramVec, IntCounter, IntCounterVec, IntGauge,\n};\n\nuse lazy_static::lazy_static;\n\nmake_auto_flush_static_metric! {\n    pub label_enum ConsensusResultKind {\n        get_block_from_remote,\n    }\n\n    pub label_enum ConsensusResult {\n        success,\n        failure,\n    }\n\n    pub struct ConsensusResultCounterVec: LocalIntCounter {\n        \"type\" => ConsensusResultKind,\n        \"result\" => ConsensusResult,\n    }\n\n    pub label_enum ConsensusTimeKind {\n        commit,\n        exec,\n        block\n    }\n\n    pub struct ConsensusTimeHistogramVec: LocalHistogram {\n        \"type\" => ConsensusTimeKind,\n    }\n\n    pub label_enum ConsensusRoundKind {\n        round\n    }\n\n    pub struct ConsensusRoundHistogramVec: LocalHistogram {\n        \"type\" => ConsensusRoundKind,\n    }\n}\n\nlazy_static! {\n    pub static ref CONSENSUS_RESULT_COUNTER_VEC: IntCounterVec = register_int_counter_vec!(\n        \"muta_concensus_result\",\n        \"Total number of consensus result\",\n        &[\"type\", \"result\"]\n    )\n    .unwrap();\n    pub static ref CONSENSUS_TIME_HISTOGRAM_VEC: HistogramVec = register_histogram_vec!(\n        \"muta_consensus_time_cost_seconds\",\n        \"Consensus process time cost\",\n        &[\"type\"],\n        exponential_buckets(0.05, 1.2, 30).unwrap()\n    )\n    .unwrap();\n}\n\nlazy_static! 
{\n    pub static ref CONSENSUS_RESULT_COUNTER_VEC_STATIC: ConsensusResultCounterVec =\n        auto_flush_from!(CONSENSUS_RESULT_COUNTER_VEC, ConsensusResultCounterVec);\n    pub static ref CONSENSUS_TIME_HISTOGRAM_VEC_STATIC: ConsensusTimeHistogramVec =\n        auto_flush_from!(CONSENSUS_TIME_HISTOGRAM_VEC, ConsensusTimeHistogramVec);\n    pub static ref ENGINE_ROUND_GAUGE: IntGauge =\n        register_int_gauge!(\"muta_consensus_round\", \"Round count of consensus\").unwrap();\n    pub static ref ENGINE_HEIGHT_GAUGE: IntGauge =\n        register_int_gauge!(\"muta_consensus_height\", \"Height of muta\").unwrap();\n    pub static ref ENGINE_EXECUTING_BLOCK_GAUGE: IntGauge =\n        register_int_gauge!(\"muta_executing_block_count\", \"The executing blocks\").unwrap();\n    pub static ref ENGINE_COMMITED_TX_COUNTER: IntCounter = register_int_counter!(\n        \"muta_consensus_committed_tx_total\",\n        \"The committed transactions\"\n    )\n    .unwrap();\n    pub static ref ENGINE_ORDER_TX_GAUGE: IntGauge =\n        register_int_gauge!(\"muta_proposal_order_tx_len\", \"The ordered transactions len\").unwrap();\n    pub static ref ENGINE_SYNC_TX_GAUGE: IntGauge =\n        register_int_gauge!(\"muta_proposal_sync_tx_len\", \"The sync transactions len\").unwrap();\n    pub static ref ENGINE_SYNC_BLOCK_COUNTER: IntCounter = register_int_counter!(\n        \"muta_consensus_sync_block_total\",\n        \"The counter for sync blocks from remote\"\n    )\n    .unwrap();\n    pub static ref ENGINE_SYNC_BLOCK_HISTOGRAM: Histogram = register_histogram!(\n        \"muta_consensus_sync_block_duration\",\n        \"Histogram of consensus sync duration\",\n        exponential_buckets(0.5, 1.2, 20).expect(\"consensus duration time exponential\")\n    )\n    .unwrap();\n    pub static ref ENGINE_CONSENSUS_COST_TIME: Histogram = register_histogram!(\n        \"muta_consensus_duration_seconds\",\n        \"Histogram of consensus duration from last block\",\n        
exponential_buckets(1.0, 1.2, 15).expect(\"consensus duration time exponential\")\n    )\n    .unwrap();\n}\n"
  },
  {
    "path": "common/apm/src/metrics/mempool.rs",
    "content": "use crate::metrics::{\n    auto_flush_from, exponential_buckets, make_auto_flush_static_metric, register_histogram_vec,\n    register_int_counter_vec, register_int_gauge, HistogramVec, IntCounterVec, IntGauge,\n};\n\nuse lazy_static::lazy_static;\n\nmake_auto_flush_static_metric! {\n    pub label_enum MempoolKind {\n        insert_tx_from_p2p,\n        package,\n        current_size,\n    }\n\n    pub label_enum MempoolOpResult {\n        success,\n        failure,\n    }\n\n    pub struct MempoolCounterVec: LocalIntCounter {\n        \"type\" => MempoolKind,\n    }\n\n    pub struct MempoolResultCounterVec: LocalIntCounter {\n        \"type\" => MempoolKind,\n        \"result\" => MempoolOpResult,\n    }\n\n    pub struct MempoolTimeHistogramVec: LocalHistogram {\n        \"type\" => MempoolKind,\n    }\n\n    pub struct MempoolPackageSizeVec: LocalHistogram {\n        \"type\" => MempoolKind,\n    }\n\n    pub struct MempoolCurrentSizeVec: LocalHistogram {\n        \"type\" => MempoolKind,\n    }\n}\n\nlazy_static! 
{\n    pub static ref MEMPOOL_COUNTER_VEC: IntCounterVec =\n        register_int_counter_vec!(\"muta_mempool_counter\", \"Counter in mempool\", &[\"type\"])\n            .expect(\"failed init mempool counter vec\");\n    pub static ref MEMPOOL_RESULT_COUNTER_VEC: IntCounterVec = register_int_counter_vec!(\n        \"muta_mempool_result_counter\",\n        \"Result counter in mempool\",\n        &[\"type\", \"result\"]\n    )\n    .expect(\"mempool result counter\");\n    pub static ref MEMPOOL_TIME_HISTOGRAM_VEC: HistogramVec = register_histogram_vec!(\n        \"muta_mempool_cost_seconds\",\n        \"Time cost in mempool\",\n        &[\"type\"],\n        exponential_buckets(0.05, 2.0, 10).expect(\"mempool time exponential\")\n    )\n    .expect(\"mempool time cost\");\n    pub static ref MEMPOOL_PACKAGE_SIZE_VEC: HistogramVec = register_histogram_vec!(\n        \"muta_mempool_package_size_vec\",\n        \"Package size\",\n        &[\"type\"],\n        exponential_buckets(0.05, 2.0, 10).expect(\"mempool package size exponential\")\n    )\n    .expect(\"mempool package size\");\n    pub static ref MEMPOOL_CURRENT_SIZE_VEC: HistogramVec = register_histogram_vec!(\n        \"muta_mempool_current_size_vec\",\n        \"Current size\",\n        // the \"type\" label is required by the MempoolCurrentSizeVec static metric\n        &[\"type\"],\n        exponential_buckets(0.05, 2.0, 10).expect(\"mempool current size exponential\")\n    )\n    .expect(\"mempool current size\");\n    pub static ref MEMPOOL_LEN_GAUGE: IntGauge =\n        register_int_gauge!(\"muta_mempool_tx_count\", \"Tx len in mempool\").unwrap();\n}\n\nlazy_static! 
{\n    pub static ref MEMPOOL_COUNTER_STATIC: MempoolCounterVec =\n        auto_flush_from!(MEMPOOL_COUNTER_VEC, MempoolCounterVec);\n    pub static ref MEMPOOL_RESULT_COUNTER_STATIC: MempoolResultCounterVec =\n        auto_flush_from!(MEMPOOL_RESULT_COUNTER_VEC, MempoolResultCounterVec);\n    pub static ref MEMPOOL_TIME_STATIC: MempoolTimeHistogramVec =\n        auto_flush_from!(MEMPOOL_TIME_HISTOGRAM_VEC, MempoolTimeHistogramVec);\n    pub static ref MEMPOOL_PACKAGE_SIZE_VEC_STATIC: MempoolPackageSizeVec =\n        auto_flush_from!(MEMPOOL_PACKAGE_SIZE_VEC, MempoolPackageSizeVec);\n    pub static ref MEMPOOL_CURRENT_SIZE_VEC_STATIC: MempoolCurrentSizeVec =\n        auto_flush_from!(MEMPOOL_CURRENT_SIZE_VEC, MempoolCurrentSizeVec);\n}\n"
  },
  {
    "path": "common/apm/src/metrics/network.rs",
    "content": "use lazy_static::lazy_static;\n\nuse crate::metrics::{\n    auto_flush_from, exponential_buckets, linear_buckets, make_auto_flush_static_metric,\n    register_histogram_vec, register_int_counter, register_int_counter_vec, register_int_gauge,\n    register_int_gauge_vec, HistogramVec, IntCounter, IntCounterVec, IntGauge, IntGaugeVec,\n};\n\nmake_auto_flush_static_metric! {\n    pub label_enum MessageDirection {\n        sent,\n        received,\n    }\n\n    pub label_enum ProtocolKind {\n        rpc,\n    }\n\n    pub label_enum RPCResult {\n        success,\n        timeout,\n    }\n\n    pub label_enum MessageTarget {\n        single,\n        multi,\n        all,\n    }\n\n    pub struct MessageCounterVec: LocalIntCounter {\n        \"direction\" => MessageDirection,\n    }\n\n    pub struct RPCResultCounterVec: LocalIntCounter {\n        \"result\" => RPCResult,\n    }\n\n    pub struct ProtocolTimeHistogramVec: LocalHistogram {\n        \"type\" => ProtocolKind,\n    }\n}\n\nlazy_static! 
{\n    pub static ref NETWORK_MESSAGE_COUNT_VEC: IntCounterVec = register_int_counter_vec!(\n        \"muta_network_message_total\",\n        \"Total number of network messages\",\n        &[\"direction\", \"target\", \"type\", \"module\", \"action\"]\n    )\n    .expect(\"network message total\");\n    pub static ref NETWORK_MESSAGE_SIZE_COUNT_VEC: IntCounterVec = register_int_counter_vec!(\n        \"muta_network_message_size\",\n        \"Accumulated compressed network message size\",\n        &[\"direction\", \"url\"]\n    )\n    .expect(\"network message size\");\n    pub static ref NETWORK_RPC_RESULT_COUNT_VEC: IntCounterVec = register_int_counter_vec!(\n        \"muta_network_rpc_result_total\",\n        \"Total number of network rpc results\",\n        &[\"result\"]\n    )\n    .expect(\"network rpc result total\");\n    pub static ref NETWORK_PROTOCOL_TIME_HISTOGRAM_VEC: HistogramVec = register_histogram_vec!(\n        \"muta_network_protocol_time_cost_seconds\",\n        \"Network protocol time cost\",\n        &[\"type\"],\n        exponential_buckets(0.01, 2.0, 20).expect(\"network protocol time exponential\")\n    )\n    .expect(\"network protocol time cost\");\n    pub static ref NETWORK_PING_HISTOGRAM_VEC: HistogramVec = register_histogram_vec!(\n        \"muta_network_ping_in_ms\",\n        \"Network peer ping time\",\n        &[\"ip\"],\n        linear_buckets(100.0, 200.0, 5).expect(\"network ping time linear buckets\")\n    )\n    .expect(\"network ping time\");\n}\n\nlazy_static! {\n    pub static ref NETWORK_RPC_RESULT_COUNT_VEC_STATIC: RPCResultCounterVec =\n        auto_flush_from!(NETWORK_RPC_RESULT_COUNT_VEC, RPCResultCounterVec);\n    pub static ref NETWORK_PROTOCOL_TIME_HISTOGRAM_VEC_STATIC: ProtocolTimeHistogramVec = auto_flush_from!(\n        NETWORK_PROTOCOL_TIME_HISTOGRAM_VEC,\n        ProtocolTimeHistogramVec\n    );\n}\n\nlazy_static! 
{\n    pub static ref NETWORK_TOTAL_PENDING_DATA_SIZE: IntGauge = register_int_gauge!(\n        \"muta_network_total_pending_data_size\",\n        \"Total pending data size\"\n    )\n    .expect(\"network total pending data size\");\n    pub static ref NETWORK_IP_PENDING_DATA_SIZE_VEC: IntGaugeVec = register_int_gauge_vec!(\n        \"muta_network_ip_pending_data_size\",\n        \"IP pending data size\",\n        &[\"ip\"]\n    )\n    .expect(\"network ip pending data size\");\n    pub static ref NETWORK_RECEIVED_MESSAGE_IN_PROCESSING_GUAGE: IntGauge = register_int_gauge!(\n        \"muta_network_received_message_in_processing_guage\",\n        \"Total number of received network messages currently in processing\"\n    )\n    .expect(\"network received message in processing\");\n    pub static ref NETWORK_RECEIVED_IP_MESSAGE_IN_PROCESSING_GUAGE_VEC: IntGaugeVec =\n        register_int_gauge_vec!(\n            \"muta_network_received_ip_message_in_processing_guage\",\n            \"Number of network messages received from an ip currently in processing\",\n            &[\"ip\"]\n        )\n        .expect(\"network received ip message in processing\");\n    pub static ref NETWORK_CONNECTED_PEERS: IntGauge =\n        register_int_gauge!(\"muta_network_connected_peers\", \"Total connected peer count\")\n            .expect(\"network total connected peers\");\n    pub static ref NETWORK_IP_DISCONNECTED_COUNT_VEC: IntCounterVec = register_int_counter_vec!(\n        \"muta_network_ip_disconnected_count\",\n        \"Total number of disconnections by ip\",\n        &[\"ip\"]\n    )\n    .expect(\"network disconnect ip count\");\n    pub static ref NETWORK_OUTBOUND_CONNECTING_PEERS: IntGauge = register_int_gauge!(\n        \"muta_network_outbound_connecting_peers\",\n        \"Total number of network outbound connecting peers\"\n    )\n    .expect(\"network outbound connecting peer count\");\n    pub static ref NETWORK_UNIDENTIFIED_CONNECTIONS: IntGauge = register_int_gauge!(\n        
\"muta_network_unidentified_connections\",\n        \"Total number of network unidentified connections\"\n    )\n    .expect(\"network unidentified connections\");\n    pub static ref NETWORK_SAVED_PEER_COUNT: IntCounter = register_int_counter!(\n        \"muta_network_saved_peer_count\",\n        \"Total number of saved peers\"\n    )\n    .expect(\"network saved peer count\");\n    pub static ref NETWORK_TAGGED_CONSENSUS_PEERS: IntGauge = register_int_gauge!(\n        \"muta_network_tagged_consensus_peers\",\n        \"Total number of consensus peers\"\n    )\n    .expect(\"network tagged consensus peers\");\n    pub static ref NETWORK_CONNECTED_CONSENSUS_PEERS: IntGauge = register_int_gauge!(\n        \"muta_network_connected_consensus_peers\",\n        \"Total number of connected consensus peers\"\n    )\n    .expect(\"network connected consensus peers\");\n}\n\nfn on_network_message(direction: &str, target: &str, url: &str, inc: i64) {\n    let spliced: Vec<&str> = url.split('/').collect();\n    if spliced.len() < 4 {\n        return;\n    }\n\n    let network_type = spliced[1];\n    let module = spliced[2];\n    let action = spliced[3];\n\n    NETWORK_MESSAGE_COUNT_VEC\n        .with_label_values(&[direction, target, network_type, module, action])\n        .inc_by(inc);\n}\n\npub fn on_network_message_sent_all_target(url: &str) {\n    on_network_message(\"sent\", \"all\", url, 1)\n}\n\npub fn on_network_message_sent_multi_target(url: &str, target_count: i64) {\n    on_network_message(\"sent\", \"single\", url, target_count);\n}\n\npub fn on_network_message_sent(url: &str) {\n    on_network_message(\"sent\", \"single\", url, 1);\n}\n\npub fn on_network_message_received(url: &str) {\n    on_network_message(\"received\", \"single\", url, 1);\n}\n"
  },
  {
    "path": "common/apm/src/metrics/storage.rs",
"content": "use std::time::Duration;\n\nuse lazy_static::lazy_static;\nuse protocol::traits::StorageCategory;\n\nuse crate::metrics::{\n    auto_flush_from, duration_to_sec, make_auto_flush_static_metric, register_counter_vec,\n    register_int_counter_vec, CounterVec, IntCounterVec,\n};\n\nmake_auto_flush_static_metric! {\n  pub label_enum COLUMN_FAMILY_TYPES {\n    block,\n    block_header,\n    receipt,\n    signed_tx,\n    wal,\n    hash_height,\n    state,\n  }\n\n  pub struct StoragePutCfTimeUsageVec: LocalCounter {\n    \"cf\" => COLUMN_FAMILY_TYPES\n  }\n\n  pub struct StoragePutCfBytesVec: LocalIntCounter {\n    \"cf\" => COLUMN_FAMILY_TYPES\n  }\n\n  pub struct StorageGetCfTimeUsageVec: LocalCounter {\n    \"cf\" => COLUMN_FAMILY_TYPES\n  }\n\n  pub struct StorageGetCfTotalVec: LocalIntCounter {\n    \"cf\" => COLUMN_FAMILY_TYPES\n  }\n}\n\nlazy_static! {\n    pub static ref STORAGE_PUT_CF_TIME_USAGE_VEC: CounterVec = register_counter_vec!(\n        \"muta_storage_put_cf_seconds\",\n        \"Storage put_cf time usage\",\n        &[\"cf\"]\n    )\n    .unwrap();\n    pub static ref STORAGE_PUT_CF_BYTES_COUNTER_VEC: IntCounterVec = register_int_counter_vec!(\n        \"muta_storage_put_cf_bytes\",\n        \"Total bytes inserted via put_cf\",\n        &[\"cf\"]\n    )\n    .unwrap();\n    pub static ref STORAGE_GET_CF_TIME_USAGE_VEC: CounterVec = register_counter_vec!(\n        \"muta_storage_get_cf_seconds\",\n        \"Storage get_cf time usage\",\n        &[\"cf\"]\n    )\n    .unwrap();\n    pub static ref STORAGE_GET_CF_COUNTER_VEC: IntCounterVec = register_int_counter_vec!(\n        \"muta_storage_get_cf_total\",\n        \"Total number of keys read via get_cf\",\n        &[\"cf\"]\n    )\n    .unwrap();\n}\n\nlazy_static! 
{\n    pub static ref STORAGE_PUT_CF_TIME_USAGE: StoragePutCfTimeUsageVec =\n        auto_flush_from!(STORAGE_PUT_CF_TIME_USAGE_VEC, StoragePutCfTimeUsageVec);\n    pub static ref STORAGE_PUT_CF_BYTES_COUNTER: StoragePutCfBytesVec =\n        auto_flush_from!(STORAGE_PUT_CF_BYTES_COUNTER_VEC, StoragePutCfBytesVec);\n    pub static ref STORAGE_GET_CF_TIME_USAGE: StorageGetCfTimeUsageVec =\n        auto_flush_from!(STORAGE_GET_CF_TIME_USAGE_VEC, StorageGetCfTimeUsageVec);\n    pub static ref STORAGE_GET_CF_COUNTER: StorageGetCfTotalVec =\n        auto_flush_from!(STORAGE_GET_CF_COUNTER_VEC, StorageGetCfTotalVec);\n}\n\npub fn on_storage_get_state(duration: Duration, keys: i64) {\n    let seconds = duration_to_sec(duration);\n\n    STORAGE_GET_CF_TIME_USAGE.state.inc_by(seconds);\n    STORAGE_GET_CF_COUNTER.state.inc_by(keys);\n}\n\npub fn on_storage_put_state(duration: Duration, size: i64) {\n    let seconds = duration_to_sec(duration);\n\n    STORAGE_PUT_CF_TIME_USAGE.state.inc_by(seconds);\n    STORAGE_PUT_CF_BYTES_COUNTER.state.inc_by(size);\n}\n\npub fn on_storage_get_cf(sc: StorageCategory, duration: Duration, keys: i64) {\n    let seconds = duration_to_sec(duration);\n\n    match sc {\n        StorageCategory::Block => {\n            STORAGE_GET_CF_TIME_USAGE.block.inc_by(seconds);\n            STORAGE_GET_CF_COUNTER.block.inc_by(keys);\n        }\n        StorageCategory::BlockHeader => {\n            STORAGE_GET_CF_TIME_USAGE.block_header.inc_by(seconds);\n            STORAGE_GET_CF_COUNTER.block_header.inc_by(keys);\n        }\n        StorageCategory::Receipt => {\n            STORAGE_GET_CF_TIME_USAGE.receipt.inc_by(seconds);\n            STORAGE_GET_CF_COUNTER.receipt.inc_by(keys);\n        }\n        StorageCategory::Wal => {\n            STORAGE_GET_CF_TIME_USAGE.wal.inc_by(seconds);\n            STORAGE_GET_CF_COUNTER.wal.inc_by(keys);\n        }\n        StorageCategory::SignedTransaction => {\n            
STORAGE_GET_CF_TIME_USAGE.signed_tx.inc_by(seconds);\n            STORAGE_GET_CF_COUNTER.signed_tx.inc_by(keys);\n        }\n        StorageCategory::HashHeight => {\n            STORAGE_GET_CF_TIME_USAGE.hash_height.inc_by(seconds);\n            STORAGE_GET_CF_COUNTER.hash_height.inc_by(keys);\n        }\n    }\n}\n\npub fn on_storage_put_cf(sc: StorageCategory, duration: Duration, size: i64) {\n    let seconds = duration_to_sec(duration);\n\n    match sc {\n        StorageCategory::Block => {\n            STORAGE_PUT_CF_TIME_USAGE.block.inc_by(seconds);\n            STORAGE_PUT_CF_BYTES_COUNTER.block.inc_by(size);\n        }\n        StorageCategory::BlockHeader => {\n            STORAGE_PUT_CF_TIME_USAGE.block_header.inc_by(seconds);\n            STORAGE_PUT_CF_BYTES_COUNTER.block_header.inc_by(size);\n        }\n        StorageCategory::Receipt => {\n            STORAGE_PUT_CF_TIME_USAGE.receipt.inc_by(seconds);\n            STORAGE_PUT_CF_BYTES_COUNTER.receipt.inc_by(size);\n        }\n        StorageCategory::Wal => {\n            STORAGE_PUT_CF_TIME_USAGE.wal.inc_by(seconds);\n            STORAGE_PUT_CF_BYTES_COUNTER.wal.inc_by(size);\n        }\n        StorageCategory::SignedTransaction => {\n            STORAGE_PUT_CF_TIME_USAGE.signed_tx.inc_by(seconds);\n            STORAGE_PUT_CF_BYTES_COUNTER.signed_tx.inc_by(size);\n        }\n        StorageCategory::HashHeight => {\n            STORAGE_PUT_CF_TIME_USAGE.hash_height.inc_by(seconds);\n            STORAGE_PUT_CF_BYTES_COUNTER.hash_height.inc_by(size);\n        }\n    }\n}\n"
  },
  {
    "path": "common/apm/src/metrics.rs",
"content": "pub mod api;\npub mod consensus;\npub mod mempool;\npub mod network;\npub mod storage;\n\npub use prometheus::{\n    CounterVec, Histogram, HistogramVec, IntCounter, IntCounterVec, IntGauge, IntGaugeVec,\n};\n\nuse derive_more::Display;\nuse prometheus::{\n    exponential_buckets, linear_buckets, register_counter_vec, register_histogram,\n    register_histogram_vec, register_int_counter, register_int_counter_vec, register_int_gauge,\n    register_int_gauge_vec, Encoder, TextEncoder,\n};\nuse prometheus_static_metric::{auto_flush_from, make_auto_flush_static_metric};\nuse protocol::{ProtocolError, ProtocolErrorKind, ProtocolResult};\n\nuse std::time::Duration;\n\n#[derive(Debug, Display)]\nenum Error {\n    #[display(fmt = \"prometheus {}\", _0)]\n    Prometheus(prometheus::Error),\n}\n\nimpl From<prometheus::Error> for Error {\n    fn from(err: prometheus::Error) -> Error {\n        Error::Prometheus(err)\n    }\n}\n\nimpl From<Error> for ProtocolError {\n    fn from(err: Error) -> ProtocolError {\n        ProtocolError::new(ProtocolErrorKind::Metric, Box::new(err))\n    }\n}\n\nimpl std::error::Error for Error {}\n\npub fn duration_to_sec(d: Duration) -> f64 {\n    d.as_secs_f64()\n}\n\npub fn all_metrics() -> ProtocolResult<Vec<u8>> {\n    let metric_families = prometheus::gather();\n    let encoder = TextEncoder::new();\n\n    let mut encoded_metrics = vec![];\n    encoder\n        .encode(&metric_families, &mut encoded_metrics)\n        .map_err(Error::Prometheus)?;\n\n    Ok(encoded_metrics)\n}\n\n#[cfg(test)]\nmod tests {\n    use super::duration_to_sec;\n    use std::time::Duration;\n\n    #[test]\n    fn test_duration_to_sec() {\n        let d = Duration::from_millis(1110);\n        let sec = duration_to_sec(d);\n\n        assert_eq!(sec, 1.11);\n    }\n}\n"
  },
  {
    "path": "common/channel/Cargo.toml",
    "content": "[package]\nname = \"common-channel\"\nversion = \"0.2.1\"\nauthors = [\"Muta Dev <muta@nervos.org>\"]\nedition = \"2018\"\nrepository = \"https://github.com/nervosnetwork/muta\"\n\n# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html\n\n[dependencies]\n"
  },
  {
    "path": "common/channel/src/lib.rs",
    "content": "#[cfg(test)]\nmod tests {\n    #[test]\n    fn it_works() {\n        assert_eq!(2 + 2, 4);\n    }\n}\n"
  },
  {
    "path": "common/config-parser/Cargo.toml",
    "content": "[package]\nname = \"common-config-parser\"\nversion = \"0.2.1\"\nauthors = [\"Muta Dev <muta@nervos.org>\"]\nedition = \"2018\"\nrepository = \"https://github.com/nervosnetwork/muta\"\n\n# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html\n\n[dependencies]\nreqwest = \"0.9\"\nserde = \"1.0\"\nserde_derive = \"1.0\"\nstringreader = \"0.1\"\ntoml = \"0.4\"\n\ncore-consensus = { path = \"../../core/consensus\" }\ncore-mempool = { path = \"../../core/mempool\" }\nprotocol = { path = \"../../protocol\", package = \"muta-protocol\" }\n"
  },
  {
    "path": "common/config-parser/src/lib.rs",
"content": "pub mod types;\n\nuse serde::de;\n\nuse std::error;\nuse std::fmt;\nuse std::fs;\nuse std::io;\nuse std::path::Path;\n\n/// Parse a config from reader.\npub fn parse_reader<R: io::Read, T: de::DeserializeOwned>(r: &mut R) -> Result<T, ParseError> {\n    let mut buf = Vec::new();\n    r.read_to_end(&mut buf)?;\n    Ok(toml::from_slice(&buf)?)\n}\n\n/// Parse a config from file.\n///\n/// Note: In most cases, function `parse` is better.\npub fn parse_file<T: de::DeserializeOwned>(name: impl AsRef<Path>) -> Result<T, ParseError> {\n    let mut f = fs::File::open(name)?;\n    parse_reader(&mut f)\n}\n\n// FIXME: http is insecure, support https only\n/// Parse a config via HTTP GET.\n///\n/// Note: In most cases, function `parse` is better.\npub fn parse_http<T: de::DeserializeOwned>(name: &str) -> Result<T, ParseError> {\n    let mut r = reqwest::get(name)?;\n    parse_reader(&mut r)\n}\n\n/// If name starts with \"http\", parse it with `parse_http`; otherwise\n/// use `parse_file`.\npub fn parse<T: de::DeserializeOwned>(name: &str) -> Result<T, ParseError> {\n    if name.starts_with(\"http\") {\n        parse_http(name)\n    } else {\n        parse_file(name)\n    }\n}\n\n#[derive(Debug)]\npub enum ParseError {\n    IO(io::Error),\n    Deserialize(toml::de::Error),\n    Reqwest(reqwest::Error),\n}\n\nimpl error::Error for ParseError {}\n\nimpl fmt::Display for ParseError {\n    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {\n        match self {\n            ParseError::IO(e) => return write!(f, \"{}\", e),\n            ParseError::Deserialize(e) => return write!(f, \"{}\", e),\n            ParseError::Reqwest(e) => return write!(f, \"{}\", e),\n        }\n    }\n}\n\nimpl From<io::Error> for ParseError {\n    fn from(error: io::Error) -> ParseError {\n        ParseError::IO(error)\n    }\n}\n\nimpl From<toml::de::Error> for ParseError {\n    fn from(error: toml::de::Error) -> ParseError {\n        
ParseError::Deserialize(error)\n    }\n}\n\nimpl From<reqwest::Error> for ParseError {\n    fn from(error: reqwest::Error) -> ParseError {\n        ParseError::Reqwest(error)\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::{parse, parse_file, parse_http, parse_reader};\n    use serde_derive::Deserialize;\n    use stringreader::StringReader;\n\n    #[derive(Debug, Deserialize)]\n    struct Config {\n        global_string: Option<String>,\n        global_int:    Option<u64>,\n    }\n\n    #[test]\n    fn test_parse_reader() {\n        let toml_str = r#\"\n        global_string = \"Best Food\"\n        global_int = 42\n    \"#;\n        let mut toml_r = StringReader::new(toml_str);\n        let config: Config = parse_reader(&mut toml_r).unwrap();\n        assert_eq!(config.global_string, Some(String::from(\"Best Food\")));\n        assert_eq!(config.global_int, Some(42));\n    }\n\n    #[ignore]\n    #[test]\n    fn test_parse_file() {\n        let config: Config = parse_file(\"/tmp/config.toml\").unwrap();\n        assert_eq!(config.global_string, Some(String::from(\"Best Food\")));\n        assert_eq!(config.global_int, Some(42));\n    }\n\n    #[ignore]\n    #[test]\n    fn test_parse_http() {\n        let config: Config = parse_http(\"http://127.0.0.1:8080/config.toml\").unwrap();\n        assert_eq!(config.global_string, Some(String::from(\"Best Food\")));\n        assert_eq!(config.global_int, Some(42));\n    }\n\n    #[ignore]\n    #[test]\n    fn test_parse() {\n        let config: Config = parse(\"http://127.0.0.1:8080/config.toml\").unwrap();\n        assert_eq!(config.global_string, Some(String::from(\"Best Food\")));\n        assert_eq!(config.global_int, Some(42));\n        let config: Config = parse(\"/tmp/config.toml\").unwrap();\n        assert_eq!(config.global_string, Some(String::from(\"Best Food\")));\n        assert_eq!(config.global_int, Some(42));\n    }\n}\n"
  },
  {
    "path": "common/config-parser/src/types.rs",
    "content": "use std::collections::HashMap;\nuse std::net::SocketAddr;\nuse std::path::PathBuf;\n\nuse serde_derive::Deserialize;\n\nuse core_consensus::{DEFAULT_OVERLORD_GAP, DEFAULT_SYNC_TXS_CHUNK_SIZE};\nuse core_mempool::{DEFAULT_BROADCAST_TXS_INTERVAL, DEFAULT_BROADCAST_TXS_SIZE};\nuse protocol::types::Hex;\n\n#[derive(Debug, Deserialize)]\npub struct ConfigGraphQL {\n    pub listening_address:   SocketAddr,\n    pub graphql_uri:         String,\n    pub graphiql_uri:        String,\n    #[serde(default)]\n    pub workers:             usize,\n    #[serde(default)]\n    pub maxconn:             usize,\n    #[serde(default)]\n    pub max_payload_size:    usize,\n    pub tls:                 Option<ConfigGraphQLTLS>,\n    pub enable_dump_profile: Option<bool>,\n}\n\n#[derive(Debug, Deserialize)]\npub struct ConfigGraphQLTLS {\n    pub private_key_file_path:       PathBuf,\n    pub certificate_chain_file_path: PathBuf,\n}\n\n#[derive(Debug, Deserialize)]\npub struct ConfigNetwork {\n    pub bootstraps:                 Option<Vec<ConfigNetworkBootstrap>>,\n    pub allowlist:                  Option<Vec<String>>,\n    pub allowlist_only:             Option<bool>,\n    pub trust_interval_duration:    Option<u64>,\n    pub trust_max_history_duration: Option<u64>,\n    pub fatal_ban_duration:         Option<u64>,\n    pub soft_ban_duration:          Option<u64>,\n    pub max_connected_peers:        Option<usize>,\n    pub same_ip_conn_limit:         Option<usize>,\n    pub inbound_conn_limit:         Option<usize>,\n    pub listening_address:          SocketAddr,\n    pub rpc_timeout:                Option<u64>,\n    pub selfcheck_interval:         Option<u64>,\n    pub send_buffer_size:           Option<usize>,\n    pub write_timeout:              Option<u64>,\n    pub recv_buffer_size:           Option<usize>,\n    pub max_frame_length:           Option<usize>,\n    pub max_wait_streams:           Option<usize>,\n    pub ping_interval:              
Option<u64>,\n}\n\n#[derive(Debug, Deserialize)]\npub struct ConfigNetworkBootstrap {\n    pub peer_id: String,\n    pub address: String,\n}\n\nfn default_overlord_gap() -> usize {\n    DEFAULT_OVERLORD_GAP\n}\n\nfn default_sync_txs_chunk_size() -> usize {\n    DEFAULT_SYNC_TXS_CHUNK_SIZE\n}\n\n#[derive(Debug, Deserialize)]\npub struct ConfigConsensus {\n    #[serde(default = \"default_overlord_gap\")]\n    pub overlord_gap:        usize,\n    #[serde(default = \"default_sync_txs_chunk_size\")]\n    pub sync_txs_chunk_size: usize,\n}\n\nfn default_broadcast_txs_size() -> usize {\n    DEFAULT_BROADCAST_TXS_SIZE\n}\n\nfn default_broadcast_txs_interval() -> u64 {\n    DEFAULT_BROADCAST_TXS_INTERVAL\n}\n\n#[derive(Debug, Deserialize)]\npub struct ConfigMempool {\n    pub pool_size: u64,\n\n    #[serde(default = \"default_broadcast_txs_size\")]\n    pub broadcast_txs_size:     usize,\n    #[serde(default = \"default_broadcast_txs_interval\")]\n    pub broadcast_txs_interval: u64,\n}\n\n#[derive(Debug, Deserialize)]\npub struct ConfigExecutor {\n    pub light:             bool,\n    pub triedb_cache_size: usize,\n}\n\n#[derive(Debug, Deserialize)]\npub struct ConfigRocksDB {\n    pub max_open_files: i32,\n}\n\nimpl Default for ConfigRocksDB {\n    fn default() -> Self {\n        Self { max_open_files: 64 }\n    }\n}\n\n#[derive(Debug, Deserialize)]\npub struct ConfigLogger {\n    pub filter:                     String,\n    pub log_to_console:             bool,\n    pub console_show_file_and_line: bool,\n    pub log_to_file:                bool,\n    pub metrics:                    bool,\n    pub log_path:                   PathBuf,\n    pub file_size_limit:            u64,\n    #[serde(default)]\n    pub modules_level:              HashMap<String, String>,\n}\n\nimpl Default for ConfigLogger {\n    fn default() -> Self {\n        Self {\n            filter:                     \"info\".into(),\n            log_to_console:             true,\n            
console_show_file_and_line: false,\n            log_to_file:                true,\n            metrics:                    true,\n            log_path:                   \"logs/\".into(),\n            file_size_limit:            1024 * 1024 * 1024, // GiB\n            modules_level:              HashMap::new(),\n        }\n    }\n}\n\n#[derive(Debug, Deserialize)]\npub struct ConfigAPM {\n    pub service_name:       String,\n    pub tracing_address:    SocketAddr,\n    pub tracing_batch_size: Option<usize>,\n}\n\n#[derive(Debug, Deserialize)]\npub struct Config {\n    // crypto\n    pub privkey:   Hex,\n    // db config\n    pub data_path: PathBuf,\n\n    pub graphql:   ConfigGraphQL,\n    pub network:   ConfigNetwork,\n    pub mempool:   ConfigMempool,\n    pub executor:  ConfigExecutor,\n    pub consensus: ConfigConsensus,\n    #[serde(default)]\n    pub logger:    ConfigLogger,\n    #[serde(default)]\n    pub rocksdb:   ConfigRocksDB,\n    pub apm:       Option<ConfigAPM>,\n}\n\nimpl Config {\n    pub fn data_path_for_state(&self) -> PathBuf {\n        let mut path_state = self.data_path.clone();\n        path_state.push(\"rocksdb\");\n        path_state.push(\"state_data\");\n        path_state\n    }\n\n    pub fn data_path_for_block(&self) -> PathBuf {\n        let mut path_state = self.data_path.clone();\n        path_state.push(\"rocksdb\");\n        path_state.push(\"block_data\");\n        path_state\n    }\n\n    pub fn data_path_for_txs_wal(&self) -> PathBuf {\n        let mut path_state = self.data_path.clone();\n        path_state.push(\"txs_wal\");\n        path_state\n    }\n\n    pub fn data_path_for_consensus_wal(&self) -> PathBuf {\n        let mut path_state = self.data_path.clone();\n        path_state.push(\"consensus_wal\");\n        path_state\n    }\n}\n"
  },
  {
    "path": "common/crypto/Cargo.toml",
    "content": "[package]\nname = \"common-crypto\"\nversion = \"0.2.1\"\nauthors = [\"Muta Dev <muta@nervos.org>\"]\nedition = \"2018\"\nrepository = \"https://github.com/nervosnetwork/muta\"\n\n# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html\n\n[dependencies]\nophelia-bls-amcl = \"0.3\"\nophelia-secp256k1 = \"0.3\"\nophelia = \"0.3\"\n\n[dev-dependencies]\noverlord = \"0.2.0-alpha.11\"\nprotocol = { path = \"../../protocol\", package = \"muta-protocol\"}\nrand = \"0.7\"\nrlp = \"0.4\"\n"
  },
  {
    "path": "common/crypto/src/lib.rs",
    "content": "#![feature(test)]\n\npub use ophelia::HashValue;\npub use ophelia::{\n    BlsSignatureVerify, Crypto, Error, PrivateKey, PublicKey, Signature, ToBlsPublicKey,\n    ToPublicKey, UncompressedPublicKey,\n};\npub use ophelia_bls_amcl::{BlsCommonReference, BlsPrivateKey, BlsPublicKey, BlsSignature};\npub use ophelia_secp256k1::{\n    Secp256k1, Secp256k1PrivateKey, Secp256k1PublicKey, Secp256k1Signature,\n};\n\n#[rustfmt::skip]\n/// Bench in Intel(R) Core(TM) i7-4770HQ CPU @ 2.20GHz (8 x 2200)\n/// test benches::bench_4_aggregated_sig         ... bench:      20,325 ns/iter (+/- 1,251)\n/// test benches::bench_8_aggregated_sig         ... bench:      40,178 ns/iter (+/- 4,191)\n/// test benches::bench_16_aggregated_sig        ... bench:      78,256 ns/iter (+/- 5,680)\n/// test benches::bench_32_aggregated_sig        ... bench:     156,514 ns/iter (+/- 14,312)\n/// test benches::bench_64_aggregated_sig        ... bench:     313,124 ns/iter (+/- 16,774)\n/// test benches::bench_4_aggregated_sig_verify  ... bench:   4,451,726 ns/iter (+/- 341,019)\n/// test benches::bench_8_aggregated_sig_verify  ... bench:   4,347,873 ns/iter (+/- 247,429)\n/// test benches::bench_16_aggregated_sig_verify ... bench:   5,034,893 ns/iter (+/- 1,552,969)\n/// test benches::bench_32_aggregated_sig_verify ... bench:   4,439,291 ns/iter (+/- 452,905)\n/// test benches::bench_64_aggregated_sig_verify ... 
bench:   4,404,453 ns/iter (+/- 224,377)\n\n#[cfg(test)]\nmod benches {\n    extern crate test;\n\n    use std::convert::TryFrom;\n\n    use overlord::types::{Vote, VoteType};\n    use rand::distributions::Alphanumeric;\n    use rand::{random, Rng, RngCore};\n    use test::Bencher;\n\n    use protocol::types::Hash;\n    use protocol::{Bytes, BytesMut};\n\n    use super::*;\n\n    fn gen_common_ref() -> String {\n        rand::thread_rng()\n            .sample_iter(&Alphanumeric)\n            .take(10)\n            .collect::<String>()\n    }\n\n    fn mock_block_hash() -> Hash {\n        let temp = (0..10).map(|_| random::<u8>()).collect::<Vec<_>>();\n        Hash::digest(Bytes::from(temp))\n    }\n\n    fn mock_vote() -> Vote {\n        Vote {\n            height:     0u64,\n            round:      0u64,\n            vote_type:  VoteType::Prevote,\n            block_hash: mock_block_hash().as_bytes(),\n        }\n    }\n\n    fn gen_key_pair_sigs(\n        size: usize,\n        keypairs: &mut Vec<(BlsPrivateKey, BlsPublicKey)>,\n        sigs: &mut Vec<BlsSignature>,\n        hash: &HashValue,\n        common_ref: &BlsCommonReference,\n    ) {\n        for _i in 0..size {\n            let seckey = {\n                let mut seed = [0u8; 32];\n                rand::rngs::OsRng.fill_bytes(&mut seed);\n                Hash::digest(BytesMut::from(seed.as_ref()).freeze()).as_bytes()\n            };\n\n            let bls_priv_key =\n                BlsPrivateKey::try_from([&[0u8; 16], seckey.as_ref()].concat().as_ref()).unwrap();\n            let bls_pub_key = bls_priv_key.pub_key(common_ref);\n\n            let sig = bls_priv_key.sign_message(&hash);\n            keypairs.push((bls_priv_key, bls_pub_key));\n            sigs.push(sig);\n        }\n    }\n\n    #[bench]\n    fn bench_4_aggregated_sig(b: &mut Bencher) {\n        let common_ref: BlsCommonReference = gen_common_ref().as_str().into();\n        let vote_msg = HashValue::try_from(\n            
Hash::digest(Bytes::from(rlp::encode(&mock_vote())))\n                .as_bytes()\n                .as_ref(),\n        )\n        .unwrap();\n\n        let mut priv_pub_keys = Vec::new();\n        let mut signatures = Vec::new();\n        gen_key_pair_sigs(\n            4,\n            &mut priv_pub_keys,\n            &mut signatures,\n            &vote_msg,\n            &common_ref,\n        );\n\n        let sigs_pubkeys = signatures\n            .iter()\n            .zip(priv_pub_keys.iter())\n            .map(|(sig, key_pair)| (sig.clone(), key_pair.1.clone()))\n            .collect::<Vec<_>>();\n        \n\n        b.iter(move || {\n            let _ = BlsSignature::combine(sigs_pubkeys.clone());\n        })\n    }\n\n    #[bench]\n    fn bench_8_aggregated_sig(b: &mut Bencher) {\n        let common_ref: BlsCommonReference = gen_common_ref().as_str().into();\n        let vote_msg = HashValue::try_from(\n            Hash::digest(Bytes::from(rlp::encode(&mock_vote())))\n                .as_bytes()\n                .as_ref(),\n        )\n        .unwrap();\n\n        let mut priv_pub_keys = Vec::new();\n        let mut signatures = Vec::new();\n        gen_key_pair_sigs(\n            8,\n            &mut priv_pub_keys,\n            &mut signatures,\n            &vote_msg,\n            &common_ref,\n        );\n\n        let sigs_pubkeys = signatures\n            .iter()\n            .zip(priv_pub_keys.iter())\n            .map(|(sig, key_pair)| (sig.clone(), key_pair.1.clone()))\n            .collect::<Vec<_>>();\n        \n\n        b.iter(move || {\n            let _ = BlsSignature::combine(sigs_pubkeys.clone());\n        })\n    }\n\n    #[bench]\n    fn bench_16_aggregated_sig(b: &mut Bencher) {\n        let common_ref: BlsCommonReference = gen_common_ref().as_str().into();\n        let vote_msg = HashValue::try_from(\n            Hash::digest(Bytes::from(rlp::encode(&mock_vote())))\n                .as_bytes()\n                .as_ref(),\n        )\n        
.unwrap();\n\n        let mut priv_pub_keys = Vec::new();\n        let mut signatures = Vec::new();\n        gen_key_pair_sigs(\n            16,\n            &mut priv_pub_keys,\n            &mut signatures,\n            &vote_msg,\n            &common_ref,\n        );\n\n        let sigs_pubkeys = signatures\n            .iter()\n            .zip(priv_pub_keys.iter())\n            .map(|(sig, key_pair)| (sig.clone(), key_pair.1.clone()))\n            .collect::<Vec<_>>();\n        \n\n        b.iter(move || {\n            let _ = BlsSignature::combine(sigs_pubkeys.clone());\n        })\n    }\n\n    #[bench]\n    fn bench_32_aggregated_sig(b: &mut Bencher) {\n        let common_ref: BlsCommonReference = gen_common_ref().as_str().into();\n        let vote_msg = HashValue::try_from(\n            Hash::digest(Bytes::from(rlp::encode(&mock_vote())))\n                .as_bytes()\n                .as_ref(),\n        )\n        .unwrap();\n\n        let mut priv_pub_keys = Vec::new();\n        let mut signatures = Vec::new();\n        gen_key_pair_sigs(\n            32,\n            &mut priv_pub_keys,\n            &mut signatures,\n            &vote_msg,\n            &common_ref,\n        );\n\n        let sigs_pubkeys = signatures\n            .iter()\n            .zip(priv_pub_keys.iter())\n            .map(|(sig, key_pair)| (sig.clone(), key_pair.1.clone()))\n            .collect::<Vec<_>>();\n        \n\n        b.iter(move || {\n            let _ = BlsSignature::combine(sigs_pubkeys.clone());\n        })\n    }\n\n    #[bench]\n    fn bench_64_aggregated_sig(b: &mut Bencher) {\n        let common_ref: BlsCommonReference = gen_common_ref().as_str().into();\n        let vote_msg = HashValue::try_from(\n            Hash::digest(Bytes::from(rlp::encode(&mock_vote())))\n                .as_bytes()\n                .as_ref(),\n        )\n        .unwrap();\n\n        let mut priv_pub_keys = Vec::new();\n        let mut signatures = Vec::new();\n        
gen_key_pair_sigs(\n            64,\n            &mut priv_pub_keys,\n            &mut signatures,\n            &vote_msg,\n            &common_ref,\n        );\n\n        let sigs_pubkeys = signatures\n            .iter()\n            .zip(priv_pub_keys.iter())\n            .map(|(sig, key_pair)| (sig.clone(), key_pair.1.clone()))\n            .collect::<Vec<_>>();\n        \n\n        b.iter(move || {\n            let _ = BlsSignature::combine(sigs_pubkeys.clone());\n        })\n    }\n\n    #[bench]\n    fn bench_4_aggregated_sig_verify(b: &mut Bencher) {\n        let common_ref: BlsCommonReference = gen_common_ref().as_str().into();\n        let vote_msg = HashValue::try_from(\n            Hash::digest(Bytes::from(rlp::encode(&mock_vote())))\n                .as_bytes()\n                .as_ref(),\n        )\n        .unwrap();\n\n        let mut priv_pub_keys = Vec::new();\n        let mut signatures = Vec::new();\n        gen_key_pair_sigs(\n            4,\n            &mut priv_pub_keys,\n            &mut signatures,\n            &vote_msg,\n            &common_ref,\n        );\n\n        let sigs_pubkeys = signatures\n            .iter()\n            .zip(priv_pub_keys.iter())\n            .map(|(sig, key_pair)| (sig.clone(), key_pair.1.clone()))\n            .collect::<Vec<_>>();\n        let aggragated_sig = BlsSignature::combine(sigs_pubkeys);\n        let aggregated_key = BlsPublicKey::aggregate(\n            priv_pub_keys\n                .iter()\n                .map(|key_pair| key_pair.1.clone())\n                .collect::<Vec<_>>(),\n        );\n\n        b.iter(move || {\n            aggragated_sig\n                .clone()\n                .verify(&vote_msg, &aggregated_key, &common_ref)\n                .unwrap();\n        })\n    }\n\n    #[bench]\n    fn bench_8_aggregated_sig_verify(b: &mut Bencher) {\n        let common_ref: BlsCommonReference = gen_common_ref().as_str().into();\n        let vote_msg = HashValue::try_from(\n            
Hash::digest(Bytes::from(rlp::encode(&mock_vote())))\n                .as_bytes()\n                .as_ref(),\n        )\n        .unwrap();\n\n        let mut priv_pub_keys = Vec::new();\n        let mut signatures = Vec::new();\n        gen_key_pair_sigs(\n            8,\n            &mut priv_pub_keys,\n            &mut signatures,\n            &vote_msg,\n            &common_ref,\n        );\n\n        let sigs_pubkeys = signatures\n            .iter()\n            .zip(priv_pub_keys.iter())\n            .map(|(sig, key_pair)| (sig.clone(), key_pair.1.clone()))\n            .collect::<Vec<_>>();\n        let aggragated_sig = BlsSignature::combine(sigs_pubkeys);\n        let aggregated_key = BlsPublicKey::aggregate(\n            priv_pub_keys\n                .iter()\n                .map(|key_pair| key_pair.1.clone())\n                .collect::<Vec<_>>(),\n        );\n\n        b.iter(move || {\n            aggragated_sig\n                .clone()\n                .verify(&vote_msg, &aggregated_key, &common_ref)\n                .unwrap();\n        })\n    }\n\n    #[bench]\n    fn bench_16_aggregated_sig_verify(b: &mut Bencher) {\n        let common_ref: BlsCommonReference = gen_common_ref().as_str().into();\n        let vote_msg = HashValue::try_from(\n            Hash::digest(Bytes::from(rlp::encode(&mock_vote())))\n                .as_bytes()\n                .as_ref(),\n        )\n        .unwrap();\n\n        let mut priv_pub_keys = Vec::new();\n        let mut signatures = Vec::new();\n        gen_key_pair_sigs(\n            16,\n            &mut priv_pub_keys,\n            &mut signatures,\n            &vote_msg,\n            &common_ref,\n        );\n\n        let sigs_pubkeys = signatures\n            .iter()\n            .zip(priv_pub_keys.iter())\n            .map(|(sig, key_pair)| (sig.clone(), key_pair.1.clone()))\n            .collect::<Vec<_>>();\n        let aggragated_sig = BlsSignature::combine(sigs_pubkeys);\n        let aggregated_key = 
BlsPublicKey::aggregate(\n            priv_pub_keys\n                .iter()\n                .map(|key_pair| key_pair.1.clone())\n                .collect::<Vec<_>>(),\n        );\n\n        b.iter(move || {\n            aggragated_sig\n                .clone()\n                .verify(&vote_msg, &aggregated_key, &common_ref)\n                .unwrap();\n        })\n    }\n\n    #[bench]\n    fn bench_32_aggregated_sig_verify(b: &mut Bencher) {\n        let common_ref: BlsCommonReference = gen_common_ref().as_str().into();\n        let vote_msg = HashValue::try_from(\n            Hash::digest(Bytes::from(rlp::encode(&mock_vote())))\n                .as_bytes()\n                .as_ref(),\n        )\n        .unwrap();\n\n        let mut priv_pub_keys = Vec::new();\n        let mut signatures = Vec::new();\n        gen_key_pair_sigs(\n            32,\n            &mut priv_pub_keys,\n            &mut signatures,\n            &vote_msg,\n            &common_ref,\n        );\n\n        let sigs_pubkeys = signatures\n            .iter()\n            .zip(priv_pub_keys.iter())\n            .map(|(sig, key_pair)| (sig.clone(), key_pair.1.clone()))\n            .collect::<Vec<_>>();\n        let aggragated_sig = BlsSignature::combine(sigs_pubkeys);\n        let aggregated_key = BlsPublicKey::aggregate(\n            priv_pub_keys\n                .iter()\n                .map(|key_pair| key_pair.1.clone())\n                .collect::<Vec<_>>(),\n        );\n\n        b.iter(move || {\n            aggragated_sig\n                .clone()\n                .verify(&vote_msg, &aggregated_key, &common_ref)\n                .unwrap();\n        })\n    }\n\n    #[bench]\n    fn bench_64_aggregated_sig_verify(b: &mut Bencher) {\n        let common_ref: BlsCommonReference = gen_common_ref().as_str().into();\n        let vote_msg = HashValue::try_from(\n            Hash::digest(Bytes::from(rlp::encode(&mock_vote())))\n                .as_bytes()\n                .as_ref(),\n        
)\n        .unwrap();\n\n        let mut priv_pub_keys = Vec::new();\n        let mut signatures = Vec::new();\n        gen_key_pair_sigs(\n            64,\n            &mut priv_pub_keys,\n            &mut signatures,\n            &vote_msg,\n            &common_ref,\n        );\n\n        let sigs_pubkeys = signatures\n            .iter()\n            .zip(priv_pub_keys.iter())\n            .map(|(sig, key_pair)| (sig.clone(), key_pair.1.clone()))\n            .collect::<Vec<_>>();\n        let aggragated_sig = BlsSignature::combine(sigs_pubkeys);\n        let aggregated_key = BlsPublicKey::aggregate(\n            priv_pub_keys\n                .iter()\n                .map(|key_pair| key_pair.1.clone())\n                .collect::<Vec<_>>(),\n        );\n\n        b.iter(move || {\n            aggragated_sig\n                .clone()\n                .verify(&vote_msg, &aggregated_key, &common_ref)\n                .unwrap();\n        })\n    }\n}\n"
  },
  {
    "path": "common/logger/Cargo.toml",
    "content": "[package]\nname = \"common-logger\"\nversion = \"0.2.1\"\nauthors = [\"Muta Dev <muta@nervos.org>\"]\nedition = \"2018\"\nrepository = \"https://github.com/nervosnetwork/muta\"\n\n# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html\n\n[dependencies]\nenv_logger = \"0.7\"\nlog = \"0.4\"\n# Turn off gzip feature, it hurts performance. For more information, reference\n# log4rs document.\nlog4rs = { version = \"0.13\", features = [\"all_components\", \"file\", \"yaml_format\"] }\njson = \"0.12\"\ncreep = \"0.2\"\nrustracing_jaeger = \"0.5\"\nserde = \"1.0\"\nserde_derive = \"1.0\"\nchrono = \"0.4\""
  },
  {
    "path": "common/logger/README.md",
    "content": "# Logger Module Instruction\n\n## Logger Config\n\nThe logger config in `config.toml` is listed below with default values.\n\n```toml\n[logger]\nfilter = \"info\"\nlog_to_console = true\nconsole_show_file_and_line = false\nlog_path = \"logs/\"\nlog_to_file = true\nmetrics = true\n```\n\n`filter` is the root logger filter, must be one of `off`, `trace`, `debug`, `info`, `warn` and `error`.\n\nIf `log_to_console` is `true`, logs like below will be logged to console.\n\n```\n[2019-12-02T10:02:45.779337+08:00 INFO overlord::state::process] Overlord: state receive commit event height 11220, round 0\n```\n\nIf `console_show_file_and_line` is `true`, log file and line number will also be logged to console, pretty useful for debugging.\n\n```\n[2019-12-02T10:05:28.343228+08:00 INFO core_network::peer_manager core/network/src/peer_manager/mod.rs:1035] network: PeerId(QmYSZUy3G5Mf5GSTKfH7LXJeFJrVW59rX1qPPfapuH7AUw): connected peer_ip(s): []\n```\n\nIf `log_to_file` is true, logs like below will be logged to `{log_path}/muta.log`.\nIt is json format, good for machine understanding.\n\n```\n{\"time\":\"2019-12-01T22:01:57.839042+08:00\",\"message\":\"network: PeerId(QmYSZUy3G5Mf5GSTKfH7LXJeFJrVW59rX1qPPfapuH7AUw): connect addrs [\\\"/ip4/0.0.0.0/tcp/1888\\\"]\",\"module_path\":\"core_network::peer_manager\",\"file\":\"core/network/src/peer_manager/mod.rs\",\"line\":591,\"level\":\"INFO\",\"target\":\"core_network::peer_manager\",\"thread\":\"tokio-runtime-worker-0\",\"thread_id\":123145432756224,\"mdc\":{}}\n```\n\nThis crate uses `log4rs` to init the logger, but you don't need to add dependency for that. 
After invoking the `init` function in this crate, you can use the `log` crate to log.\n\n## Metrics\n\nMetrics is an independent logger. If `metrics` is `true`, metrics will be logged to `{log_path}/metrics.log`.\n\n```\n{\"time\":\"2019-12-01T22:02:49.035084+08:00\",\"message\":\"{\\\"height\\\":7943,\\\"name\\\":\\\"save_block\\\",\\\"ordered_tx_num\\\":0}\",\"module_path\":\"common_logger\",\"file\":\"common/logger/src/lib.rs\",\"line\":83,\"level\":\"TRACE\",\"target\":\"metrics\",\"thread\":\"tokio-runtime-worker-3\",\"thread_id\":123145445486592,\"mdc\":{}}\n```\n\nIf you want to log metrics in a module, you need to add this crate as a dependency and use the code below to add a metric. The `name` field is reserved; please avoid using it as a key in your metrics.\n\n```rust\ncommon_logger::metrics(\"save_block\", common_logger::object! {\n    \"height\" => block.header.height,\n    \"ordered_tx_num\" => block.ordered_tx_hashes.len(),\n});\n```\n\nThe signature of the function is shown below. `JsonValue` is an `enum` from the [`json` crate](https://docs.rs/json/0.12.0/json/enum.JsonValue.html).\n\n```rust\npub fn metrics(name: &str, mut content: JsonValue)\n```\n\n## Structured Event Log With TraceId Included\n\nThe structured event log API provides a convenient way to log structured JSON data. Its signature is shown below:\n\n```rust\npub fn log(level: Level, module: &str, event: &str, ctx: &Context, mut msg: JsonValue)\n```\n\n`module` should be your component name, and `event` is the event name; it is recommended to use 4 characters followed by 4 digits (e.g. `netw0001`)\nto identify the event. `Context` is used to extract the trace id. 
`msg` is a `JsonValue`, the same as in `metrics`.\n\nUsage example:\n\n```rust\ncommon_logger::log(Level::Info, \"network\", \"netw0001\", &ctx, common_logger::json!({\"music\", \"beautiful world\"; \"movie\", \"fury\"}));\n```\n\n## Yaml File\n\nThe `log.yml` in this crate is a YAML-style log4rs config containing the default logger settings.\n\nIf you need more customized configuration, you can copy the file to some config path, edit it, and replace the `init` call with `log4rs::init_file(\"/path/to/log.yml\", Default::default()).unwrap();`.\n"
  },
  {
    "path": "common/logger/log.yml",
    "content": "# This file is yaml style config, can make testing the logger more easily.\n# When you need to do some test, Add the code below to the `init` function.\n# log4rs::init_file(\"common/logger/log.yml\", Default::default()).unwrap();\n# reference: <https://docs.rs/log4rs/0.13.0/log4rs/>\nappenders:\n  console:\n    kind: console\n    encoder:\n      # this pattern below contains file name and line, usefule for debugging\n      # pattern: \"[{d} {h({l})} {t} {f}:{L}] {m}{n}\"\n      pattern: \"[{d} {h({l})} {t}] {m}{n}\"\n\n  file:\n    kind: file\n    path: logs/muta.log\n    encoder:\n      kind: json\n\n  metrics:\n    kind: file\n    path: logs/metrics.log\n    encoder:\n      kind: json\n\nroot:\n  level: info\n  appenders:\n  - console\n  - file\n\nloggers:\n  metrics:\n    level: trace\n    appenders:\n    - metrics\n    additive: false\n"
  },
  {
    "path": "common/logger/src/date_fixed_roller.rs",
    "content": "use std::error::Error;\nuse std::fs;\nuse std::path::Path;\n\nuse chrono::prelude::Utc;\nuse log4rs::append::rolling_file::policy::compound::roll::Roll;\nuse log4rs::file::{Deserialize, Deserializers};\n\n#[derive(serde_derive::Deserialize, Clone)]\n#[serde(deny_unknown_fields)]\npub struct DateFixedWindowRollerConfig {\n    pattern: String,\n}\n\npub struct DateFixedWindowRollerBuilder;\n\nimpl DateFixedWindowRollerBuilder {\n    pub fn build(\n        self,\n        pattern: &str,\n    ) -> Result<DateFixedWindowRoller, Box<dyn Error + Sync + Send>> {\n        if !pattern.contains(\"{date}\") || !pattern.contains(\"{timestamp}\") {\n            return Err(\"pattern doesn't contain `{date}` or `{timestamp}`\".into());\n        }\n\n        let roller = DateFixedWindowRoller {\n            pattern: pattern.into(),\n        };\n\n        Ok(roller)\n    }\n}\n\n/// The pattern takes two interpolation arguments, {date} and {timestamp}.\n/// {date} and {timestamp} will be replaced with actual date and timestamp\n/// value.\n///\n/// For example:\n/// For pattern `log/{date}.muta.{timestamp}.log`, it will generate\n/// `log/2020-08-27.muta.83748392743.log`.\n#[derive(Debug)]\npub struct DateFixedWindowRoller {\n    pattern: String,\n}\n\nimpl DateFixedWindowRoller {\n    pub fn builder() -> DateFixedWindowRollerBuilder {\n        DateFixedWindowRollerBuilder\n    }\n\n    fn roll_file(\n        &self,\n        cur_log: &Path,\n        date: &str,\n        timestamp: &str,\n    ) -> Result<(), Box<dyn Error + Sync + Send>> {\n        let archived_log = {\n            let pattern = self.pattern.clone();\n            let partial_log = pattern.replace(\"{date}\", date);\n            partial_log.replace(\"{timestamp}\", &timestamp)\n        };\n\n        if let Some(parent) = Path::new(&archived_log).parent() {\n            fs::create_dir_all(parent)?;\n        }\n\n        match fs::rename(cur_log, &archived_log) {\n            Ok(()) => return Ok(()),\n    
        Err(ref e) if e.kind() == std::io::ErrorKind::NotFound => return Ok(()),\n            Err(_) => {}\n        }\n\n        // fall back to a copy\n        fs::copy(cur_log, &archived_log).and_then(|_| fs::remove_file(cur_log))?;\n        Ok(())\n    }\n}\n\nimpl Roll for DateFixedWindowRoller {\n    fn roll(&self, cur_log: &Path) -> Result<(), Box<dyn Error + Sync + Send>> {\n        let now = Utc::now();\n        self.roll_file(\n            cur_log,\n            &now.format(\"%Y-%m-%d\").to_string(),\n            &now.timestamp().to_string(),\n        )\n    }\n}\n\npub struct DateFixedWindowRollerDeserializer;\n\nimpl Deserialize for DateFixedWindowRollerDeserializer {\n    type Config = DateFixedWindowRollerConfig;\n    type Trait = dyn Roll;\n\n    fn deserialize(\n        &self,\n        config: Self::Config,\n        _: &Deserializers,\n    ) -> Result<Box<Self::Trait>, Box<dyn Error + Sync + Send>> {\n        let roll = DateFixedWindowRoller {\n            pattern: config.pattern,\n        };\n\n        Ok(Box::new(roll))\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use std::fs::File;\n    use std::io::{Read, Write};\n\n    use chrono::prelude::Utc;\n\n    use super::DateFixedWindowRoller;\n\n    #[test]\n    fn test_rotation() {\n        let temp_dir = std::env::temp_dir();\n        let pattern = format!(\n            \"{}/{{date}}.muta.{{timestamp}}.log\",\n            temp_dir.as_path().to_string_lossy()\n        );\n        let roller = DateFixedWindowRoller::builder().build(&pattern).unwrap();\n\n        let test_log = {\n            let mut temp_file = temp_dir.clone();\n            temp_file.push(\"logger_test.log\");\n            temp_file\n        };\n        File::create(&test_log).unwrap().write_all(b\"test\").unwrap();\n\n        let now = Utc::now();\n        let date = &now.format(\"%Y-%m-%d\").to_string();\n        let timestamp = &now.timestamp().to_string();\n\n        roller.roll_file(&test_log, &date, &timestamp).unwrap();\n        
assert!(!test_log.exists());\n\n        let mut log_data = vec![];\n        let archived_log = {\n            let mut temp_file = temp_dir;\n            temp_file.push(&format!(\"{}.muta.{}.log\", &date, &timestamp));\n            temp_file\n        };\n\n        File::open(archived_log)\n            .unwrap()\n            .read_to_end(&mut log_data)\n            .unwrap();\n\n        assert_eq!(log_data, b\"test\");\n    }\n}\n"
  },
  {
    "path": "common/logger/src/lib.rs",
    "content": "mod date_fixed_roller;\n\nuse std::collections::HashMap;\nuse std::path::PathBuf;\n\nuse creep::Context;\nuse json::JsonValue;\nuse log::{Level, LevelFilter};\nuse log4rs::append::console::ConsoleAppender;\nuse log4rs::append::rolling_file::policy::compound::trigger::size::SizeTrigger;\nuse log4rs::append::rolling_file::policy::compound::CompoundPolicy;\nuse log4rs::append::rolling_file::RollingFileAppender;\nuse log4rs::config::{Appender, Config, Logger, Root};\nuse log4rs::encode::json::JsonEncoder;\nuse log4rs::encode::pattern::PatternEncoder;\nuse rustracing_jaeger::span::{SpanContext, TraceId};\n\nuse date_fixed_roller::DateFixedWindowRoller;\n\npub use json::array;\npub use json::object;\nuse log4rs::append::file::FileAppender;\n\n// Example\n// ```rust\n//     let json_obj = json!({\n//         \"key_01\", value_01;\n//         \"key_02\", value_02;\n//    });\n// ```\n#[macro_export]\nmacro_rules! json {\n    ({$($key: expr, $value: expr); *}) => {{\n        let mut evt = JsonValue::new_object();\n        $(evt[$key] = $value.into();)*\n        evt\n    }};\n}\n\npub fn init<S: ::std::hash::BuildHasher>(\n    filter: String,\n    log_to_console: bool,\n    console_show_file_and_line: bool,\n    log_to_file: bool,\n    metrics: bool,\n    log_path: PathBuf,\n    file_size_limit: u64, // bytes\n    modules_level: HashMap<String, String, S>,\n) {\n    let console_appender = ConsoleAppender::builder()\n        .encoder(Box::new(PatternEncoder::new(\n            if console_show_file_and_line {\n                \"[{d} {h({l})} {t} {f}:{L}] {m}{n}\"\n            } else {\n                \"[{d} {h({l})} {t}] {m}{n}\"\n            },\n        )))\n        .build();\n\n    let muta_roller_pat = log_path.join(\"{date}.muta.{timestamp}.log\");\n    let metrics_roller_pat = log_path.join(\"{date}.metrics.{timestamp}.log\");\n\n    let file_appender = {\n        let size_trigger = SizeTrigger::new(file_size_limit);\n        let roller = 
DateFixedWindowRoller::builder()\n            .build(&muta_roller_pat.to_string_lossy())\n            .unwrap();\n        let policy = CompoundPolicy::new(Box::new(size_trigger), Box::new(roller));\n\n        RollingFileAppender::builder()\n            .encoder(Box::new(JsonEncoder::new()))\n            .build(log_path.join(\"muta.log\"), Box::new(policy))\n            .unwrap()\n    };\n\n    let cli_file_appender = FileAppender::builder()\n        .encoder(Box::new(JsonEncoder::new()))\n        .build(log_path.join(\"cli.log\"))\n        .unwrap();\n\n    let metrics_appender = {\n        let size_trigger = SizeTrigger::new(file_size_limit);\n        let roller = DateFixedWindowRoller::builder()\n            .build(&metrics_roller_pat.to_string_lossy())\n            .unwrap();\n        let policy = CompoundPolicy::new(Box::new(size_trigger), Box::new(roller));\n\n        RollingFileAppender::builder()\n            .encoder(Box::new(JsonEncoder::new()))\n            .build(log_path.join(\"metrics.log\"), Box::new(policy))\n            .unwrap()\n    };\n\n    let mut root_builder = Root::builder();\n    if log_to_console {\n        root_builder = root_builder.appender(\"console\");\n    }\n    if log_to_file {\n        root_builder = root_builder.appender(\"file\");\n    }\n\n    let level_filter = convert_level(filter.as_ref());\n    let root = root_builder.build(level_filter);\n\n    let metrics_logger = Logger::builder().additive(false).appender(\"metrics\").build(\n        \"metrics\",\n        if metrics {\n            LevelFilter::Trace\n        } else {\n            LevelFilter::Off\n        },\n    );\n\n    let cli_logger = Logger::builder()\n        .additive(false)\n        .appender(\"cli\")\n        .appender(\"console\")\n        .build(\"cli\", LevelFilter::Trace);\n\n    let mut config_builder = Config::builder()\n        .appender(Appender::builder().build(\"console\", Box::new(console_appender)))\n        
.appender(Appender::builder().build(\"file\", Box::new(file_appender)))\n        .appender(Appender::builder().build(\"metrics\", Box::new(metrics_appender)))\n        .appender(Appender::builder().build(\"cli\", Box::new(cli_file_appender)))\n        .logger(metrics_logger)\n        .logger(cli_logger);\n\n    for (module, level) in &modules_level {\n        let module_logger = Logger::builder()\n            .additive(false)\n            .appender(\"console\")\n            .appender(\"file\")\n            .build(module, convert_level(&level));\n        config_builder = config_builder.logger(module_logger);\n    }\n    let config = config_builder.build(root).unwrap();\n\n    log4rs::init_config(config).expect(\"\");\n}\n\nfn convert_level(level: &str) -> LevelFilter {\n    match level {\n        \"off\" => LevelFilter::Off,\n        \"error\" => LevelFilter::Error,\n        \"info\" => LevelFilter::Info,\n        \"warn\" => LevelFilter::Warn,\n        \"debug\" => LevelFilter::Debug,\n        \"trace\" => LevelFilter::Trace,\n        f => {\n            println!(\"invalid logger.filter {}, use info\", f);\n            LevelFilter::Info\n        }\n    }\n}\n\npub fn metrics(name: &str, mut content: JsonValue) {\n    log::trace!(target: \"metrics\", \"{}\", {\n        content[\"name\"] = name.into();\n        content\n    });\n}\n\n// Usage:\n// log(Level::Info, \"network\", \"netw0001\", &ctx, common_logger::object!{\"music\"\n// : \"beautiful world\"})\npub fn log(level: Level, module: &str, event: &str, ctx: &Context, mut msg: JsonValue) {\n    if let Some(trace_ctx) = trace_context(ctx) {\n        msg[\"trace_id\"] = trace_ctx.trace_id.to_string().into();\n        msg[\"span_id\"] = trace_ctx.span_id.into();\n    }\n\n    log::log!(target: module, level, \"{}\", {\n        msg[\"event\"] = event.into();\n        msg\n    });\n}\n\n#[derive(Debug, Clone, Copy)]\nstruct TraceContext {\n    trace_id: TraceId,\n    span_id:  u64,\n}\n\n// NOTE: Reference 
muta_apm::MutaTracer::span_state.\n// The code is copied here to avoid depending on the muta_apm crate.\nfn trace_context(ctx: &Context) -> Option<TraceContext> {\n    match ctx.get::<Option<SpanContext>>(\"parent_span_ctx\") {\n        Some(Some(parent_ctx)) => {\n            let state = parent_ctx.state();\n            let trace_ctx = TraceContext {\n                trace_id: state.trace_id(),\n                span_id:  state.span_id(),\n            };\n\n            Some(trace_ctx)\n        }\n        _ => None,\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_json() {\n        env_logger::init();\n        let json = json!({\"height\", 1; \"msg\", \"asset_01\"; \"is_connected\", true});\n        log(\n            Level::Warn,\n            \"logger\",\n            \"logg_001\",\n            &Context::new(),\n            json.clone(),\n        );\n        assert_eq!(json[\"height\"], 1);\n        assert_eq!(json[\"msg\"], \"asset_01\");\n        assert_eq!(json[\"is_connected\"], true);\n    }\n}\n"
  },
  {
    "path": "common/merkle/Cargo.toml",
    "content": "[package]\nname = \"common-merkle\"\nversion = \"0.2.1\"\nauthors = [\"Muta Dev <muta@nervos.org>\"]\nedition = \"2018\"\nrepository = \"https://github.com/nervosnetwork/muta\"\n\n# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html\n\n[dependencies]\nprotocol = { path = \"../../protocol\", package = \"muta-protocol\" }\nrayon = \"1.3\"\nstatic_merkle_tree = \"1.1.0\"\n\n[dev-dependencies]\nrand = \"0.7\"\n"
  },
  {
    "path": "common/merkle/src/lib.rs",
    "content": "#![feature(test)]\n\nuse static_merkle_tree::Tree;\n\nuse protocol::{types::Hash, Bytes};\n\n#[derive(Debug, Clone)]\npub struct ProofNode {\n    pub is_right: bool,\n    pub hash:     Hash,\n}\n\npub struct Merkle {\n    tree: Tree<Hash>,\n}\n\nimpl Merkle {\n    pub fn from_hashes(hashes: Vec<Hash>) -> Self {\n        let tree = Tree::from_hashes(hashes, merge);\n        Merkle { tree }\n    }\n\n    pub fn get_root_hash(&self) -> Option<Hash> {\n        match self.tree.get_root_hash() {\n            Some(hash) => Some(hash.clone()),\n            None => None,\n        }\n    }\n\n    pub fn get_proof_by_input_index(&self, input_index: usize) -> Option<Vec<ProofNode>> {\n        self.tree\n            .get_proof_by_input_index(input_index)\n            .map(|proof| {\n                proof\n                    .0\n                    .into_iter()\n                    .map(|node| ProofNode {\n                        is_right: node.is_right,\n                        hash:     node.hash,\n                    })\n                    .collect()\n            })\n    }\n}\n\nfn merge(left: &Hash, right: &Hash) -> Hash {\n    let left = left.as_bytes();\n    let right = right.as_bytes();\n\n    let mut root = Vec::with_capacity(left.len() + right.len());\n    root.extend_from_slice(&left);\n    root.extend_from_slice(&right);\n    Hash::digest(Bytes::from(root))\n}\n\n#[rustfmt::skip]\n/// Bench in Intel(R) Core(TM) i7-4770HQ CPU @2.20GHz (8 x 2200):\n/// test benches::bench_merkle_1000_hashes  ... bench:   1,167,080 ns/iter (+/- 108,462)\n/// test benches::bench_merkle_2000_hashes  ... bench:   2,338,504 ns/iter (+/- 137,184)\n/// test benches::bench_merkle_4000_hashes  ... bench:   4,662,601 ns/iter (+/- 231,500)\n/// test benches::bench_merkle_8000_hashes  ... bench:   9,336,278 ns/iter (+/- 900,731)\n/// test benches::bench_merkle_16000_hashes ... 
bench:  18,697,547 ns/iter (+/- 1,103,828)\n\n#[cfg(test)]\nmod benches {\n    extern crate test;\n\n    use rand::random;\n    use test::Bencher;\n\n    use super::*;\n\n    fn mock_hash() -> Hash {\n        Hash::digest(Bytes::from(\n            (0..10).map(|_| random::<u8>()).collect::<Vec<_>>(),\n        ))\n    }\n\n    fn rand_hashes(size: usize) -> Vec<Hash> {\n        (0..size).map(|_| mock_hash()).collect::<Vec<_>>()\n    }\n\n    #[bench]\n    fn bench_merkle_1000_hashes(b: &mut Bencher) {\n        let case = rand_hashes(1000);\n\n        b.iter(|| {\n            let _ = Merkle::from_hashes(case.clone());\n        });\n    }\n\n    #[bench]\n    fn bench_merkle_2000_hashes(b: &mut Bencher) {\n        let case = rand_hashes(2000);\n\n        b.iter(|| {\n            let _ = Merkle::from_hashes(case.clone());\n        });\n    }\n\n    #[bench]\n    fn bench_merkle_4000_hashes(b: &mut Bencher) {\n        let case = rand_hashes(4000);\n\n        b.iter(|| {\n            let _ = Merkle::from_hashes(case.clone());\n        });\n    }\n\n    #[bench]\n    fn bench_merkle_8000_hashes(b: &mut Bencher) {\n        let case = rand_hashes(8000);\n\n        b.iter(|| {\n            let _ = Merkle::from_hashes(case.clone());\n        });\n    }\n\n    #[bench]\n    fn bench_merkle_16000_hashes(b: &mut Bencher) {\n        let case = rand_hashes(16000);\n\n        b.iter(|| {\n            let _ = Merkle::from_hashes(case.clone());\n        });\n    }\n}\n"
  },
  {
    "path": "common/pubsub/Cargo.toml",
    "content": "[package]\nname = \"common-pubsub\"\nversion = \"0.2.1\"\nauthors = [\"Muta Dev <muta@nervos.org>\"]\nedition = \"2018\"\nrepository = \"https://github.com/nervosnetwork/muta\"\n\n# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html\n\n[dependencies]\n"
  },
  {
    "path": "common/pubsub/src/lib.rs",
    "content": "#[cfg(test)]\nmod tests {\n    #[test]\n    fn it_works() {\n        assert_eq!(2 + 2, 4);\n    }\n}\n"
  },
  {
    "path": "core/api/Cargo.toml",
    "content": "[package]\nname = \"core-api\"\nversion = \"0.2.1\"\nauthors = [\"Muta Dev <muta@nervos.org>\"]\nedition = \"2018\"\nrepository = \"https://github.com/nervosnetwork/muta\"\n\n# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html\n\n[dependencies]\nprotocol = { path = \"../../protocol\", package = \"muta-protocol\" }\ncommon-apm = { path = \"../../common/apm\" }\ncommon-crypto = { path = \"../../common/crypto\" }\n\njuniper = { git = \"https://github.com/graphql-rust/juniper\", rev = \"eff086a\", features = [\"async\"] }\njuniper_codegen = \"0.14\"\nasync-trait = \"0.1\"\nhex = \"0.4\"\nfutures = \"0.3\"\nderive_more = \"0.15\"\ncita_trie = \"2.0\"\nbytes = \"0.5\"\nactix-web = { version = \"2.0.0\", features = [\"openssl\"] }\nserde_json = \"1.0\"\nlazy_static = \"1.4\"\nnum_cpus = \"1.12\"\nlog = \"0.4\"\nopenssl = \"0.10\"\npprof = { version = \"0.3\", features = [\"flamegraph\", \"protobuf\"] }\nurl = { version = \"2.1\" }\ntokio = { version = \"0.2\", features = [ \"time\" ] }\n"
  },
  {
    "path": "core/api/source/graphiql.html",
    "content": "<!DOCTYPE html>\n<html>\n\n<head>\n  <meta charset=utf-8 />\n  <meta name=\"viewport\" content=\"user-scalable=no, initial-scale=1.0, minimum-scale=1.0, maximum-scale=1.0, minimal-ui\">\n  <link href=\"https://fonts.googleapis.com/css?family=Open+Sans:300,400,600,700|Source+Code+Pro:400,700\"\n    rel=\"stylesheet\">\n  <title>GraphQL Playground</title>\n\n  <link rel=\"stylesheet\" href=\"//cdn.jsdelivr.net/npm/graphql-playground-react/build/static/css/index.css\" />\n\n  <link rel=\"shortcut icon\" href=\"//cdn.jsdelivr.net/npm/graphql-playground-react/build/favicon.png\" />\n  <script src=\"//cdn.jsdelivr.net/npm/graphql-playground-react/build/static/js/middleware.js\"></script>\n\n</head>\n\n<body>\n  <style type=\"text/css\">\n    html {\n      font-family: \"Open Sans\", sans-serif;\n      overflow: hidden;\n    }\n\n    body {\n      margin: 0;\n      background: #172a3a;\n    }\n\n    .playgroundIn {\n      -webkit-animation: playgroundIn 0.5s ease-out forwards;\n      animation: playgroundIn 0.5s ease-out forwards;\n    }\n\n    @-webkit-keyframes playgroundIn {\n      from {\n        opacity: 0;\n        -webkit-transform: translateY(10px);\n        -ms-transform: translateY(10px);\n        transform: translateY(10px);\n      }\n\n      to {\n        opacity: 1;\n        -webkit-transform: translateY(0);\n        -ms-transform: translateY(0);\n        transform: translateY(0);\n      }\n    }\n\n    @keyframes playgroundIn {\n      from {\n        opacity: 0;\n        -webkit-transform: translateY(10px);\n        -ms-transform: translateY(10px);\n        transform: translateY(10px);\n      }\n\n      to {\n        opacity: 1;\n        -webkit-transform: translateY(0);\n        -ms-transform: translateY(0);\n        transform: translateY(0);\n      }\n    }\n  </style>\n\n  <style type=\"text/css\">\n    .fadeOut {\n      -webkit-animation: fadeOut 0.5s ease-out forwards;\n      animation: fadeOut 0.5s ease-out forwards;\n    }\n\n    
@-webkit-keyframes fadeIn {\n      from {\n        opacity: 0;\n        -webkit-transform: translateY(-10px);\n        -ms-transform: translateY(-10px);\n        transform: translateY(-10px);\n      }\n\n      to {\n        opacity: 1;\n        -webkit-transform: translateY(0);\n        -ms-transform: translateY(0);\n        transform: translateY(0);\n      }\n    }\n\n    @keyframes fadeIn {\n      from {\n        opacity: 0;\n        -webkit-transform: translateY(-10px);\n        -ms-transform: translateY(-10px);\n        transform: translateY(-10px);\n      }\n\n      to {\n        opacity: 1;\n        -webkit-transform: translateY(0);\n        -ms-transform: translateY(0);\n        transform: translateY(0);\n      }\n    }\n\n    @-webkit-keyframes fadeOut {\n      from {\n        opacity: 1;\n        -webkit-transform: translateY(0);\n        -ms-transform: translateY(0);\n        transform: translateY(0);\n      }\n\n      to {\n        opacity: 0;\n        -webkit-transform: translateY(-10px);\n        -ms-transform: translateY(-10px);\n        transform: translateY(-10px);\n      }\n    }\n\n    @keyframes fadeOut {\n      from {\n        opacity: 1;\n        -webkit-transform: translateY(0);\n        -ms-transform: translateY(0);\n        transform: translateY(0);\n      }\n\n      to {\n        opacity: 0;\n        -webkit-transform: translateY(-10px);\n        -ms-transform: translateY(-10px);\n        transform: translateY(-10px);\n      }\n    }\n\n    @-webkit-keyframes appearIn {\n      from {\n        opacity: 0;\n        -webkit-transform: translateY(0px);\n        -ms-transform: translateY(0px);\n        transform: translateY(0px);\n      }\n\n      to {\n        opacity: 1;\n        -webkit-transform: translateY(0);\n        -ms-transform: translateY(0);\n        transform: translateY(0);\n      }\n    }\n\n    @keyframes appearIn {\n      from {\n        opacity: 0;\n        -webkit-transform: translateY(0px);\n        -ms-transform: 
translateY(0px);\n        transform: translateY(0px);\n      }\n\n      to {\n        opacity: 1;\n        -webkit-transform: translateY(0);\n        -ms-transform: translateY(0);\n        transform: translateY(0);\n      }\n    }\n\n    @-webkit-keyframes scaleIn {\n      from {\n        -webkit-transform: scale(0);\n        -ms-transform: scale(0);\n        transform: scale(0);\n      }\n\n      to {\n        -webkit-transform: scale(1);\n        -ms-transform: scale(1);\n        transform: scale(1);\n      }\n    }\n\n    @keyframes scaleIn {\n      from {\n        -webkit-transform: scale(0);\n        -ms-transform: scale(0);\n        transform: scale(0);\n      }\n\n      to {\n        -webkit-transform: scale(1);\n        -ms-transform: scale(1);\n        transform: scale(1);\n      }\n    }\n\n    @-webkit-keyframes innerDrawIn {\n      0% {\n        stroke-dashoffset: 70;\n      }\n\n      50% {\n        stroke-dashoffset: 140;\n      }\n\n      100% {\n        stroke-dashoffset: 210;\n      }\n    }\n\n    @keyframes innerDrawIn {\n      0% {\n        stroke-dashoffset: 70;\n      }\n\n      50% {\n        stroke-dashoffset: 140;\n      }\n\n      100% {\n        stroke-dashoffset: 210;\n      }\n    }\n\n    @-webkit-keyframes outerDrawIn {\n      0% {\n        stroke-dashoffset: 76;\n      }\n\n      100% {\n        stroke-dashoffset: 152;\n      }\n    }\n\n    @keyframes outerDrawIn {\n      0% {\n        stroke-dashoffset: 76;\n      }\n\n      100% {\n        stroke-dashoffset: 152;\n      }\n    }\n\n    .hHWjkv {\n      -webkit-transform-origin: 0px 0px;\n      -ms-transform-origin: 0px 0px;\n      transform-origin: 0px 0px;\n      -webkit-transform: scale(0);\n      -ms-transform: scale(0);\n      transform: scale(0);\n      -webkit-animation: scaleIn 0.25s linear forwards 0.2222222222222222s;\n      animation: scaleIn 0.25s linear forwards 0.2222222222222222s;\n    }\n\n    .gCDOzd {\n      -webkit-transform-origin: 0px 0px;\n      
-ms-transform-origin: 0px 0px;\n      transform-origin: 0px 0px;\n      -webkit-transform: scale(0);\n      -ms-transform: scale(0);\n      transform: scale(0);\n      -webkit-animation: scaleIn 0.25s linear forwards 0.4222222222222222s;\n      animation: scaleIn 0.25s linear forwards 0.4222222222222222s;\n    }\n\n    .hmCcxi {\n      -webkit-transform-origin: 0px 0px;\n      -ms-transform-origin: 0px 0px;\n      transform-origin: 0px 0px;\n      -webkit-transform: scale(0);\n      -ms-transform: scale(0);\n      transform: scale(0);\n      -webkit-animation: scaleIn 0.25s linear forwards 0.6222222222222222s;\n      animation: scaleIn 0.25s linear forwards 0.6222222222222222s;\n    }\n\n    .eHamQi {\n      -webkit-transform-origin: 0px 0px;\n      -ms-transform-origin: 0px 0px;\n      transform-origin: 0px 0px;\n      -webkit-transform: scale(0);\n      -ms-transform: scale(0);\n      transform: scale(0);\n      -webkit-animation: scaleIn 0.25s linear forwards 0.8222222222222223s;\n      animation: scaleIn 0.25s linear forwards 0.8222222222222223s;\n    }\n\n    .byhgGu {\n      -webkit-transform-origin: 0px 0px;\n      -ms-transform-origin: 0px 0px;\n      transform-origin: 0px 0px;\n      -webkit-transform: scale(0);\n      -ms-transform: scale(0);\n      transform: scale(0);\n      -webkit-animation: scaleIn 0.25s linear forwards 1.0222222222222221s;\n      animation: scaleIn 0.25s linear forwards 1.0222222222222221s;\n    }\n\n    .llAKP {\n      -webkit-transform-origin: 0px 0px;\n      -ms-transform-origin: 0px 0px;\n      transform-origin: 0px 0px;\n      -webkit-transform: scale(0);\n      -ms-transform: scale(0);\n      transform: scale(0);\n      -webkit-animation: scaleIn 0.25s linear forwards 1.2222222222222223s;\n      animation: scaleIn 0.25s linear forwards 1.2222222222222223s;\n    }\n\n    .bglIGM {\n      -webkit-transform-origin: 64px 28px;\n      -ms-transform-origin: 64px 28px;\n      transform-origin: 64px 28px;\n      -webkit-transform: 
scale(0);\n      -ms-transform: scale(0);\n      transform: scale(0);\n      -webkit-animation: scaleIn 0.25s linear forwards 0.2222222222222222s;\n      animation: scaleIn 0.25s linear forwards 0.2222222222222222s;\n    }\n\n    .ksxRII {\n      -webkit-transform-origin: 95.98500061035156px 46.510000228881836px;\n      -ms-transform-origin: 95.98500061035156px 46.510000228881836px;\n      transform-origin: 95.98500061035156px 46.510000228881836px;\n      -webkit-transform: scale(0);\n      -ms-transform: scale(0);\n      transform: scale(0);\n      -webkit-animation: scaleIn 0.25s linear forwards 0.4222222222222222s;\n      animation: scaleIn 0.25s linear forwards 0.4222222222222222s;\n    }\n\n    .cWrBmb {\n      -webkit-transform-origin: 95.97162628173828px 83.4900016784668px;\n      -ms-transform-origin: 95.97162628173828px 83.4900016784668px;\n      transform-origin: 95.97162628173828px 83.4900016784668px;\n      -webkit-transform: scale(0);\n      -ms-transform: scale(0);\n      transform: scale(0);\n      -webkit-animation: scaleIn 0.25s linear forwards 0.6222222222222222s;\n      animation: scaleIn 0.25s linear forwards 0.6222222222222222s;\n    }\n\n    .Wnusb {\n      -webkit-transform-origin: 64px 101.97999572753906px;\n      -ms-transform-origin: 64px 101.97999572753906px;\n      transform-origin: 64px 101.97999572753906px;\n      -webkit-transform: scale(0);\n      -ms-transform: scale(0);\n      transform: scale(0);\n      -webkit-animation: scaleIn 0.25s linear forwards 0.8222222222222223s;\n      animation: scaleIn 0.25s linear forwards 0.8222222222222223s;\n    }\n\n    .bfPqf {\n      -webkit-transform-origin: 32.03982162475586px 83.4900016784668px;\n      -ms-transform-origin: 32.03982162475586px 83.4900016784668px;\n      transform-origin: 32.03982162475586px 83.4900016784668px;\n      -webkit-transform: scale(0);\n      -ms-transform: scale(0);\n      transform: scale(0);\n      -webkit-animation: scaleIn 0.25s linear forwards 
1.0222222222222221s;\n      animation: scaleIn 0.25s linear forwards 1.0222222222222221s;\n    }\n\n    .edRCTN {\n      -webkit-transform-origin: 32.033552169799805px 46.510000228881836px;\n      -ms-transform-origin: 32.033552169799805px 46.510000228881836px;\n      transform-origin: 32.033552169799805px 46.510000228881836px;\n      -webkit-transform: scale(0);\n      -ms-transform: scale(0);\n      transform: scale(0);\n      -webkit-animation: scaleIn 0.25s linear forwards 1.2222222222222223s;\n      animation: scaleIn 0.25s linear forwards 1.2222222222222223s;\n    }\n\n    .iEGVWn {\n      opacity: 0;\n      stroke-dasharray: 76;\n      -webkit-animation: outerDrawIn 0.5s ease-out forwards 0.3333333333333333s, appearIn 0.1s ease-out forwards 0.3333333333333333s;\n      animation: outerDrawIn 0.5s ease-out forwards 0.3333333333333333s, appearIn 0.1s ease-out forwards 0.3333333333333333s;\n      -webkit-animation-iteration-count: 1, 1;\n      animation-iteration-count: 1, 1;\n    }\n\n    .bsocdx {\n      opacity: 0;\n      stroke-dasharray: 76;\n      -webkit-animation: outerDrawIn 0.5s ease-out forwards 0.5333333333333333s, appearIn 0.1s ease-out forwards 0.5333333333333333s;\n      animation: outerDrawIn 0.5s ease-out forwards 0.5333333333333333s, appearIn 0.1s ease-out forwards 0.5333333333333333s;\n      -webkit-animation-iteration-count: 1, 1;\n      animation-iteration-count: 1, 1;\n    }\n\n    .jAZXmP {\n      opacity: 0;\n      stroke-dasharray: 76;\n      -webkit-animation: outerDrawIn 0.5s ease-out forwards 0.7333333333333334s, appearIn 0.1s ease-out forwards 0.7333333333333334s;\n      animation: outerDrawIn 0.5s ease-out forwards 0.7333333333333334s, appearIn 0.1s ease-out forwards 0.7333333333333334s;\n      -webkit-animation-iteration-count: 1, 1;\n      animation-iteration-count: 1, 1;\n    }\n\n    .hSeArx {\n      opacity: 0;\n      stroke-dasharray: 76;\n      -webkit-animation: outerDrawIn 0.5s ease-out forwards 0.9333333333333333s, 
appearIn 0.1s ease-out forwards 0.9333333333333333s;\n      animation: outerDrawIn 0.5s ease-out forwards 0.9333333333333333s, appearIn 0.1s ease-out forwards 0.9333333333333333s;\n      -webkit-animation-iteration-count: 1, 1;\n      animation-iteration-count: 1, 1;\n    }\n\n    .bVgqGk {\n      opacity: 0;\n      stroke-dasharray: 76;\n      -webkit-animation: outerDrawIn 0.5s ease-out forwards 1.1333333333333333s, appearIn 0.1s ease-out forwards 1.1333333333333333s;\n      animation: outerDrawIn 0.5s ease-out forwards 1.1333333333333333s, appearIn 0.1s ease-out forwards 1.1333333333333333s;\n      -webkit-animation-iteration-count: 1, 1;\n      animation-iteration-count: 1, 1;\n    }\n\n    .hEFqBt {\n      opacity: 0;\n      stroke-dasharray: 76;\n      -webkit-animation: outerDrawIn 0.5s ease-out forwards 1.3333333333333333s, appearIn 0.1s ease-out forwards 1.3333333333333333s;\n      animation: outerDrawIn 0.5s ease-out forwards 1.3333333333333333s, appearIn 0.1s ease-out forwards 1.3333333333333333s;\n      -webkit-animation-iteration-count: 1, 1;\n      animation-iteration-count: 1, 1;\n    }\n\n    .dzEKCM {\n      opacity: 0;\n      stroke-dasharray: 70;\n      -webkit-animation: innerDrawIn 1s ease-in-out forwards 1.3666666666666667s, appearIn 0.1s linear forwards 1.3666666666666667s;\n      animation: innerDrawIn 1s ease-in-out forwards 1.3666666666666667s, appearIn 0.1s linear forwards 1.3666666666666667s;\n      -webkit-animation-iteration-count: infinite, 1;\n      animation-iteration-count: infinite, 1;\n    }\n\n    .DYnPx {\n      opacity: 0;\n      stroke-dasharray: 70;\n      -webkit-animation: innerDrawIn 1s ease-in-out forwards 1.5333333333333332s, appearIn 0.1s linear forwards 1.5333333333333332s;\n      animation: innerDrawIn 1s ease-in-out forwards 1.5333333333333332s, appearIn 0.1s linear forwards 1.5333333333333332s;\n      -webkit-animation-iteration-count: infinite, 1;\n      animation-iteration-count: infinite, 1;\n    }\n\n    
.hjPEAQ {\n      opacity: 0;\n      stroke-dasharray: 70;\n      -webkit-animation: innerDrawIn 1s ease-in-out forwards 1.7000000000000002s, appearIn 0.1s linear forwards 1.7000000000000002s;\n      animation: innerDrawIn 1s ease-in-out forwards 1.7000000000000002s, appearIn 0.1s linear forwards 1.7000000000000002s;\n      -webkit-animation-iteration-count: infinite, 1;\n      animation-iteration-count: infinite, 1;\n    }\n\n    #loading-wrapper {\n      position: absolute;\n      width: 100vw;\n      height: 100vh;\n      display: -webkit-box;\n      display: -webkit-flex;\n      display: -ms-flexbox;\n      display: flex;\n      -webkit-align-items: center;\n      -webkit-box-align: center;\n      -ms-flex-align: center;\n      align-items: center;\n      -webkit-box-pack: center;\n      -webkit-justify-content: center;\n      -ms-flex-pack: center;\n      justify-content: center;\n      -webkit-flex-direction: column;\n      -ms-flex-direction: column;\n      flex-direction: column;\n    }\n\n    .logo {\n      width: 75px;\n      height: 75px;\n      margin-bottom: 20px;\n      opacity: 0;\n      -webkit-animation: fadeIn 0.5s ease-out forwards;\n      animation: fadeIn 0.5s ease-out forwards;\n    }\n\n    .text {\n      font-size: 32px;\n      font-weight: 200;\n      text-align: center;\n      color: rgba(255, 255, 255, 0.6);\n      opacity: 0;\n      -webkit-animation: fadeIn 0.5s ease-out forwards;\n      animation: fadeIn 0.5s ease-out forwards;\n    }\n\n    .dGfHfc {\n      font-weight: 400;\n    }\n  </style>\n  <div id=\"loading-wrapper\">\n    <svg class=\"logo\" viewBox=\"0 0 128 128\" xmlns:xlink=\"http://www.w3.org/1999/xlink\">\n      <title>GraphQL Playground Logo</title>\n      <defs>\n        <linearGradient id=\"linearGradient-1\" x1=\"4.86%\" x2=\"96.21%\" y1=\"0%\" y2=\"99.66%\">\n          <stop stop-color=\"#E00082\" stop-opacity=\".8\" offset=\"0%\"></stop>\n          <stop stop-color=\"#E00082\" offset=\"100%\"></stop>\n        
</linearGradient>\n      </defs>\n      <g>\n        <rect id=\"Gradient\" width=\"127.96\" height=\"127.96\" y=\"1\" fill=\"url(#linearGradient-1)\" rx=\"4\"></rect>\n        <path id=\"Border\" fill=\"#E00082\" fill-rule=\"nonzero\"\n          d=\"M4.7 2.84c-1.58 0-2.86 1.28-2.86 2.85v116.57c0 1.57 1.28 2.84 2.85 2.84h116.57c1.57 0 2.84-1.26 2.84-2.83V5.67c0-1.55-1.26-2.83-2.83-2.83H4.67zM4.7 0h116.58c3.14 0 5.68 2.55 5.68 5.7v116.58c0 3.14-2.54 5.68-5.68 5.68H4.68c-3.13 0-5.68-2.54-5.68-5.68V5.68C-1 2.56 1.55 0 4.7 0z\">\n        </path>\n        <path class=\"bglIGM\" x=\"64\" y=\"28\" fill=\"#fff\" d=\"M64 36c-4.42 0-8-3.58-8-8s3.58-8 8-8 8 3.58 8 8-3.58 8-8 8\"\n          style=\"transform: translate(100px, 100px);\"></path>\n        <path class=\"ksxRII\" x=\"95.98500061035156\" y=\"46.510000228881836\" fill=\"#fff\"\n          d=\"M89.04 50.52c-2.2-3.84-.9-8.73 2.94-10.96 3.83-2.2 8.72-.9 10.95 2.94 2.2 3.84.9 8.73-2.94 10.96-3.85 2.2-8.76.9-10.97-2.94\"\n          style=\"transform: translate(100px, 100px);\"></path>\n        <path class=\"cWrBmb\" x=\"95.97162628173828\" y=\"83.4900016784668\" fill=\"#fff\"\n          d=\"M102.9 87.5c-2.2 3.84-7.1 5.15-10.94 2.94-3.84-2.2-5.14-7.12-2.94-10.96 2.2-3.84 7.12-5.15 10.95-2.94 3.86 2.23 5.16 7.12 2.94 10.96\"\n          style=\"transform: translate(100px, 100px);\"></path>\n        <path class=\"Wnusb\" x=\"64\" y=\"101.97999572753906\" fill=\"#fff\"\n          d=\"M64 110c-4.43 0-8-3.6-8-8.02 0-4.44 3.57-8.02 8-8.02s8 3.58 8 8.02c0 4.4-3.57 8.02-8 8.02\"\n          style=\"transform: translate(100px, 100px);\"></path>\n        <path class=\"bfPqf\" x=\"32.03982162475586\" y=\"83.4900016784668\" fill=\"#fff\"\n          d=\"M25.1 87.5c-2.2-3.84-.9-8.73 2.93-10.96 3.83-2.2 8.72-.9 10.95 2.94 2.2 3.84.9 8.73-2.94 10.96-3.85 2.2-8.74.9-10.95-2.94\"\n          style=\"transform: translate(100px, 100px);\"></path>\n        <path class=\"edRCTN\" x=\"32.033552169799805\" y=\"46.510000228881836\" fill=\"#fff\"\n      
    d=\"M38.96 50.52c-2.2 3.84-7.12 5.15-10.95 2.94-3.82-2.2-5.12-7.12-2.92-10.96 2.2-3.84 7.12-5.15 10.95-2.94 3.83 2.23 5.14 7.12 2.94 10.96\"\n          style=\"transform: translate(100px, 100px);\"></path>\n        <path class=\"iEGVWn\" stroke=\"#fff\" stroke-width=\"4\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\n          d=\"M63.55 27.5l32.9 19-32.9-19z\"></path>\n        <path class=\"bsocdx\" stroke=\"#fff\" stroke-width=\"4\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\n          d=\"M96 46v38-38z\"></path>\n        <path class=\"jAZXmP\" stroke=\"#fff\" stroke-width=\"4\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\n          d=\"M96.45 84.5l-32.9 19 32.9-19z\"></path>\n        <path class=\"hSeArx\" stroke=\"#fff\" stroke-width=\"4\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\n          d=\"M64.45 103.5l-32.9-19 32.9 19z\"></path>\n        <path class=\"bVgqGk\" stroke=\"#fff\" stroke-width=\"4\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\n          d=\"M32 84V46v38z\"></path>\n        <path class=\"hEFqBt\" stroke=\"#fff\" stroke-width=\"4\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\n          d=\"M31.55 46.5l32.9-19-32.9 19z\"></path>\n        <path class=\"dzEKCM\" id=\"Triangle-Bottom\" stroke=\"#fff\" stroke-width=\"4\" d=\"M30 84h70\" stroke-linecap=\"round\">\n        </path>\n        <path class=\"DYnPx\" id=\"Triangle-Left\" stroke=\"#fff\" stroke-width=\"4\" d=\"M65 26L30 87\" stroke-linecap=\"round\">\n        </path>\n        <path class=\"hjPEAQ\" id=\"Triangle-Right\" stroke=\"#fff\" stroke-width=\"4\" d=\"M98 87L63 26\" stroke-linecap=\"round\">\n        </path>\n      </g>\n    </svg>\n    <div class=\"text\">Loading\n      <span class=\"dGfHfc\">GraphQL Playground</span>\n    </div>\n  </div>\n\n  <div id=\"root\" />\n  <script type=\"text/javascript\">\n    window.addEventListener('load', function (event) {\n\n      const loadingWrapper = document.getElementById('loading-wrapper');\n      
if (loadingWrapper) {\n        loadingWrapper.classList.add('fadeOut');\n      }\n\n\n      const root = document.getElementById('root');\n      root.classList.add('playgroundIn');\n\n      GraphQLPlayground.init(root, {\n        \"endpoint\": \"/graphql\",\n        \"subscriptionsEndpoint\": \"/graphql\",\n        \"canSaveConfig\": false,\n        \"subscriptionEndpoint\": \"/graphql\"\n      })\n    })\n  </script>\n</body>\n\n</html>\n"
  },
  {
    "path": "core/api/src/adapter/mod.rs",
    "content": "use std::marker::PhantomData;\nuse std::sync::Arc;\n\nuse async_trait::async_trait;\nuse derive_more::Display;\n\nuse protocol::traits::{\n    APIAdapter, Context, ExecutorFactory, ExecutorParams, MemPool, ServiceMapping, ServiceResponse,\n    Storage,\n};\nuse protocol::types::{\n    Address, Block, BlockHeader, Hash, Receipt, SignedTransaction, TransactionRequest,\n};\nuse protocol::{ProtocolError, ProtocolErrorKind, ProtocolResult};\n\n#[derive(Debug, Display)]\npub enum APIError {\n    #[display(\n        fmt = \"Unexecuted block,try to {:?}, but now only reached {:?}\",\n        real,\n        expect\n    )]\n    UnExecedError { expect: u64, real: u64 },\n\n    #[display(fmt = \"not found\")]\n    NotFound,\n}\n\nimpl std::error::Error for APIError {}\n\nimpl From<APIError> for ProtocolError {\n    fn from(api_err: APIError) -> ProtocolError {\n        ProtocolError::new(ProtocolErrorKind::API, Box::new(api_err))\n    }\n}\n\npub struct DefaultAPIAdapter<EF, M, S, DB, Mapping> {\n    mempool:         Arc<M>,\n    storage:         Arc<S>,\n    trie_db:         Arc<DB>,\n    service_mapping: Arc<Mapping>,\n\n    pin_ef: PhantomData<EF>,\n}\n\nimpl<\n        EF: ExecutorFactory<DB, S, Mapping>,\n        M: MemPool,\n        S: Storage,\n        DB: cita_trie::DB,\n        Mapping: ServiceMapping,\n    > DefaultAPIAdapter<EF, M, S, DB, Mapping>\n{\n    pub fn new(\n        mempool: Arc<M>,\n        storage: Arc<S>,\n        trie_db: Arc<DB>,\n        service_mapping: Arc<Mapping>,\n    ) -> Self {\n        Self {\n            mempool,\n            storage,\n            trie_db,\n            service_mapping,\n            pin_ef: PhantomData,\n        }\n    }\n}\n\n#[async_trait]\nimpl<\n        EF: ExecutorFactory<DB, S, Mapping>,\n        M: MemPool,\n        S: Storage,\n        DB: cita_trie::DB,\n        Mapping: ServiceMapping,\n    > APIAdapter for DefaultAPIAdapter<EF, M, S, DB, Mapping>\n{\n    async fn insert_signed_txs(\n        &self,\n  
      ctx: Context,\n        signed_tx: SignedTransaction,\n    ) -> ProtocolResult<()> {\n        self.mempool.insert(ctx, signed_tx).await\n    }\n\n    async fn get_block_by_height(\n        &self,\n        ctx: Context,\n        height: Option<u64>,\n    ) -> ProtocolResult<Option<Block>> {\n        match height {\n            Some(id) => self.storage.get_block(ctx.clone(), id).await,\n            None => Ok(Some(self.storage.get_latest_block(ctx).await?)),\n        }\n    }\n\n    async fn get_block_header_by_height(\n        &self,\n        ctx: Context,\n        height: Option<u64>,\n    ) -> ProtocolResult<Option<BlockHeader>> {\n        match height {\n            Some(id) => self.storage.get_block_header(ctx.clone(), id).await,\n            None => Ok(Some(self.storage.get_latest_block_header(ctx).await?)),\n        }\n    }\n\n    async fn get_receipt_by_tx_hash(\n        &self,\n        ctx: Context,\n        tx_hash: Hash,\n    ) -> ProtocolResult<Option<Receipt>> {\n        let opt_receipt = self\n            .storage\n            .get_receipt_by_hash(ctx.clone(), tx_hash)\n            .await?;\n\n        let exec_height = self.storage.get_latest_block_header(ctx).await?.exec_height;\n\n        match opt_receipt {\n            Some(receipt) => {\n                let height = receipt.height;\n                if exec_height >= height {\n                    Ok(Some(receipt))\n                } else {\n                    Ok(None)\n                }\n            }\n            None => Ok(None),\n        }\n    }\n\n    async fn get_transaction_by_hash(\n        &self,\n        ctx: Context,\n        tx_hash: Hash,\n    ) -> ProtocolResult<Option<SignedTransaction>> {\n        self.storage.get_transaction_by_hash(ctx, &tx_hash).await\n    }\n\n    async fn query_service(\n        &self,\n        ctx: Context,\n        height: u64,\n        cycles_limit: u64,\n        cycles_price: u64,\n        caller: Address,\n        service_name: String,\n        
method: String,\n        payload: String,\n    ) -> ProtocolResult<ServiceResponse<String>> {\n        let header = self\n            .get_block_header_by_height(ctx.clone(), Some(height))\n            .await?\n            .ok_or(APIError::NotFound)?;\n\n        let executor = EF::from_root(\n            header.state_root.clone(),\n            Arc::clone(&self.trie_db),\n            Arc::clone(&self.storage),\n            Arc::clone(&self.service_mapping),\n        )?;\n\n        let params = ExecutorParams {\n            state_root: header.state_root,\n            height,\n            timestamp: header.timestamp,\n            cycles_limit,\n            proposer: header.proposer,\n        };\n        executor.read(&params, &caller, cycles_price, &TransactionRequest {\n            service_name,\n            method,\n            payload,\n        })\n    }\n}\n"
  },
  {
    "path": "core/api/src/config.rs",
    "content": "use std::net::SocketAddr;\nuse std::path::PathBuf;\n\n#[derive(Debug, Clone)]\npub struct GraphQLConfig {\n    pub listening_address: SocketAddr,\n\n    pub graphql_uri:  String,\n    pub graphiql_uri: String,\n\n    // Set number of workers to start.\n    // By default http server uses number of available logical cpu as threads count.\n    pub workers: usize,\n\n    // Sets the maximum number of all concurrent connections.\n    pub maxconn: usize,\n\n    // Set the max payload size of graphql interface.\n    // It is used to prevent DOS attacking through memory exhaustion.\n    // The default value is 1024 * 1024, which is 1MB.\n    pub max_payload_size: usize,\n\n    pub tls: Option<GraphQLTLS>,\n\n    pub enable_dump_profile: bool,\n}\n\n#[derive(Debug, Clone)]\npub struct GraphQLTLS {\n    pub private_key_file_path:       PathBuf,\n    pub certificate_chain_file_path: PathBuf,\n}\n\nimpl Default for GraphQLConfig {\n    fn default() -> Self {\n        Self {\n            listening_address: \"127.0.0.1:8080\"\n                .parse()\n                .expect(\"Unable to parse socket address\"),\n\n            graphql_uri:         \"/graphql\".to_owned(),\n            graphiql_uri:        \"/graphiql\".to_owned(),\n            workers:             num_cpus::get(),\n            maxconn:             25000,\n            max_payload_size:    1024 * 1024, // 1MB\n            tls:                 None,\n            enable_dump_profile: false,\n        }\n    }\n}\n"
  },
  {
    "path": "core/api/src/lib.rs",
    "content": "pub mod adapter;\npub mod config;\nmod schema;\n\nuse std::cmp;\nuse std::convert::TryFrom;\nuse std::sync::Arc;\nuse std::time::Instant;\n\nuse actix_web::{web, App, Error, FromRequest, HttpResponse, HttpServer};\nuse futures::executor::block_on;\nuse juniper::http::GraphQLRequest;\nuse juniper::FieldResult;\nuse lazy_static::lazy_static;\nuse openssl::ssl::{SslAcceptor, SslFiletype, SslMethod};\n\nuse common_crypto::{\n    HashValue, PrivateKey, PublicKey, Secp256k1PrivateKey, Signature, ToPublicKey,\n};\n\nuse protocol::fixed_codec::FixedCodec;\nuse protocol::traits::{APIAdapter, Context};\n\nuse crate::config::GraphQLConfig;\nuse crate::schema::{\n    to_signed_transaction, to_transaction, Address, Block, Bytes, Hash, InputRawTransaction,\n    InputTransactionEncryption, Receipt, ServiceResponse, SignedTransaction, Uint64,\n};\n\nlazy_static! {\n    static ref GRAPHIQL_HTML: &'static str = include_str!(\"../source/graphiql.html\");\n}\n\n// This is accessible as state in Tide, and as executor context in Juniper.\n#[derive(Clone)]\nstruct State {\n    adapter: Arc<Box<dyn APIAdapter>>,\n    schema:  Arc<Schema>,\n}\n\n// We define `Query` unit struct here. GraphQL queries will refer to this\n// struct. 
The struct itself doesn't have any associated state (and there's no\n// need to do so), but instead it exposes the adapter state from the\n// context.\nstruct Query;\n// Switch to async/await fn https://github.com/graphql-rust/juniper/issues/2\n#[juniper::graphql_object(Context = State)]\nimpl Query {\n    #[graphql(name = \"getBlock\", description = \"Get the block\")]\n    async fn get_block(state_ctx: &State, height: Option<Uint64>) -> FieldResult<Option<Block>> {\n        let ctx = Context::new();\n        let inst = Instant::now();\n        common_apm::metrics::api::API_REQUEST_COUNTER_VEC_STATIC\n            .get_block\n            .inc();\n\n        let height = match height {\n            Some(id) => match id.try_into_u64() {\n                Ok(id) => Some(id),\n                Err(err) => {\n                    common_apm::metrics::api::API_REQUEST_RESULT_COUNTER_VEC_STATIC\n                        .get_block\n                        .failure\n                        .inc();\n\n                    return Err(err.into());\n                }\n            },\n            None => None,\n        };\n\n        let opt_block = match state_ctx\n            .adapter\n            .get_block_by_height(ctx.clone(), height)\n            .await\n        {\n            Ok(opt_block) => opt_block,\n            Err(err) => {\n                common_apm::metrics::api::API_REQUEST_RESULT_COUNTER_VEC_STATIC\n                    .get_block\n                    .failure\n                    .inc();\n\n                return Err(err.into());\n            }\n        };\n\n        common_apm::metrics::api::API_REQUEST_RESULT_COUNTER_VEC_STATIC\n            .get_block\n            .success\n            .inc();\n        common_apm::metrics::api::API_REQUEST_TIME_HISTOGRAM_STATIC\n            .get_block\n            .observe(common_apm::metrics::duration_to_sec(inst.elapsed()));\n\n        Ok(opt_block.map(Block::from))\n    }\n\n    #[graphql(name = \"getTransaction\", 
description = \"Get the transaction by hash\")]\n    async fn get_transaction(\n        state_ctx: &State,\n        tx_hash: Hash,\n    ) -> FieldResult<Option<SignedTransaction>> {\n        let ctx = Context::new();\n\n        let hash = protocol::types::Hash::from_hex(&tx_hash.as_hex())?;\n\n        let opt_stx = state_ctx\n            .adapter\n            .get_transaction_by_hash(ctx.clone(), hash)\n            .await?;\n\n        Ok(opt_stx.map(SignedTransaction::from))\n    }\n\n    #[graphql(\n        name = \"getReceipt\",\n        description = \"Get the receipt by transaction hash\"\n    )]\n    async fn get_receipt(state_ctx: &State, tx_hash: Hash) -> FieldResult<Option<Receipt>> {\n        let ctx = Context::new();\n\n        let hash = protocol::types::Hash::from_hex(&tx_hash.as_hex())?;\n\n        let opt_receipt = state_ctx\n            .adapter\n            .get_receipt_by_tx_hash(ctx.clone(), hash)\n            .await?;\n\n        Ok(opt_receipt.map(Receipt::from))\n    }\n\n    #[graphql(name = \"queryService\", description = \"query service\")]\n    async fn query_service(\n        state_ctx: &State,\n        height: Option<Uint64>,\n        cycles_limit: Option<Uint64>,\n        cycles_price: Option<Uint64>,\n        caller: Address,\n        service_name: String,\n        method: String,\n        payload: String,\n    ) -> FieldResult<ServiceResponse> {\n        let ctx = Context::new();\n\n        let height = match height {\n            Some(id) => id.try_into_u64()?,\n            None => {\n                block_on(state_ctx.adapter.get_block_by_height(Context::new(), None))?\n                    .expect(\"Always not none\")\n                    .header\n                    .height\n            }\n        };\n        let cycles_limit = match cycles_limit {\n            Some(cycles_limit) => cycles_limit.try_into_u64()?,\n            None => std::u64::MAX,\n        };\n\n        let cycles_price = match cycles_price {\n            
Some(cycles_price) => cycles_price.try_into_u64()?,\n            None => 1,\n        };\n\n        let address: protocol::types::Address = caller.to_str().parse()?;\n\n        let exec_resp = state_ctx\n            .adapter\n            .query_service(\n                ctx.clone(),\n                height,\n                cycles_limit,\n                cycles_price,\n                address,\n                service_name,\n                method,\n                payload,\n            )\n            .await?;\n        Ok(ServiceResponse::from(exec_resp))\n    }\n}\n\nstruct Mutation;\n// Switch to async/await fn https://github.com/graphql-rust/juniper/issues/2\n#[juniper::graphql_object(Context = State)]\nimpl Mutation {\n    #[graphql(name = \"sendTransaction\", description = \"send transaction\")]\n    async fn send_transaction(\n        state_ctx: &State,\n        input_raw: InputRawTransaction,\n        input_encryption: InputTransactionEncryption,\n    ) -> FieldResult<Hash> {\n        let ctx = Context::new();\n\n        let inst = Instant::now();\n        common_apm::metrics::api::API_REQUEST_COUNTER_VEC_STATIC\n            .send_transaction\n            .inc();\n\n        let stx = to_signed_transaction(input_raw, input_encryption)?;\n        let tx_hash = stx.tx_hash.clone();\n\n        if let Err(err) = state_ctx.adapter.insert_signed_txs(ctx.clone(), stx).await {\n            common_apm::metrics::api::API_REQUEST_RESULT_COUNTER_VEC_STATIC\n                .send_transaction\n                .failure\n                .inc();\n            return Err(err.into());\n        }\n\n        common_apm::metrics::api::API_REQUEST_RESULT_COUNTER_VEC_STATIC\n            .send_transaction\n            .success\n            .inc();\n        common_apm::metrics::api::API_REQUEST_TIME_HISTOGRAM_STATIC\n            .send_transaction\n            .observe(common_apm::metrics::duration_to_sec(inst.elapsed()));\n\n        Ok(Hash::from(tx_hash))\n    }\n\n    #[graphql(\n     
   name = \"unsafeSendTransaction\",\n        deprecated = \"DON'T use it in production! This is just for development.\"\n    )]\n    async fn unsafe_send_transaction(\n        state_ctx: &State,\n        input_raw: InputRawTransaction,\n        input_privkey: Bytes,\n    ) -> FieldResult<Hash> {\n        let ctx = Context::new();\n\n        let raw_tx = to_transaction(input_raw)?;\n        let tx_hash = protocol::types::Hash::digest(raw_tx.encode_fixed()?);\n\n        let privkey = Secp256k1PrivateKey::try_from(input_privkey.to_vec()?.as_ref())?;\n        let pubkey = privkey.pub_key();\n        let hash_value = HashValue::try_from(tx_hash.as_bytes().as_ref())?;\n        let signature = privkey.sign_message(&hash_value);\n\n        let stx = protocol::types::SignedTransaction {\n            raw:       raw_tx,\n            tx_hash:   tx_hash.clone(),\n            signature: signature.to_bytes(),\n            pubkey:    pubkey.to_bytes(),\n        };\n        state_ctx\n            .adapter\n            .insert_signed_txs(ctx.clone(), stx)\n            .await?;\n\n        Ok(Hash::from(tx_hash))\n    }\n}\n\n// Adding `Query` and `Mutation` together we get `Schema`, which describes,\n// well, the whole GraphQL schema.\ntype Schema = juniper::RootNode<'static, Query, Mutation>;\n\nasync fn graphiql() -> HttpResponse {\n    HttpResponse::Ok()\n        .content_type(\"text/html; charset=utf-8\")\n        .body(GRAPHIQL_HTML.to_owned())\n}\n\nasync fn graphql(\n    st: web::Data<State>,\n    data: web::Json<GraphQLRequest>,\n) -> Result<HttpResponse, Error> {\n    let result = data.execute_async(&st.schema, &st).await;\n    let res = Ok::<_, serde_json::error::Error>(serde_json::to_string(&result)?)?;\n\n    Ok(HttpResponse::Ok()\n        .content_type(\"application/json\")\n        .body(res))\n}\n\nasync fn metrics() -> HttpResponse {\n    let metrics_data = match common_apm::metrics::all_metrics() {\n        Ok(data) => data,\n        Err(e) => 
e.to_string().into_bytes(),\n    };\n\n    HttpResponse::Ok()\n        .content_type(\"text/plain; charset=utf-8\")\n        .body(metrics_data)\n}\n\nmod profile {\n    use std::collections::HashMap;\n    use std::str::FromStr;\n    use std::time::Duration;\n\n    use actix_web::error::{ErrorBadRequest, ErrorInternalServerError};\n    use actix_web::{dev, FromRequest, HttpRequest, HttpResponse};\n    use futures::future;\n    use pprof::protos::Message;\n\n    pub enum ProfileReport {\n        /// Perf flamegraph\n        FlameGraph,\n        /// Go pprof\n        PProf,\n    }\n\n    impl FromStr for ProfileReport {\n        type Err = &'static str;\n\n        fn from_str(report: &str) -> Result<Self, Self::Err> {\n            match report {\n                \"flamegraph\" => Ok(ProfileReport::FlameGraph),\n                \"pprof\" => Ok(ProfileReport::PProf),\n                _ => Err(\"invalid report type, only support flamegraph and pprof\"),\n            }\n        }\n    }\n\n    pub struct ProfileConfig {\n        duration:  Duration,\n        frequency: i32,\n        report:    ProfileReport,\n    }\n\n    impl Default for ProfileConfig {\n        fn default() -> Self {\n            ProfileConfig {\n                duration:  Duration::from_secs(10),\n                frequency: 99,\n                report:    ProfileReport::FlameGraph,\n            }\n        }\n    }\n\n    impl FromRequest for ProfileConfig {\n        type Config = ();\n        type Error = actix_web::Error;\n        type Future = future::Ready<Result<Self, Self::Error>>;\n\n        fn from_request(req: &HttpRequest, _: &mut dev::Payload) -> Self::Future {\n            let query = req.query_string();\n            let query_pairs: HashMap<_, _> =\n                url::form_urlencoded::parse(query.as_bytes()).collect();\n\n            let duration: Duration = match query_pairs.get(\"duration\").map(|val| val.parse()) {\n                Some(Ok(val)) => Duration::from_secs(val),\n          
      Some(Err(e)) => return future::err(ErrorBadRequest(e)),\n                None => ProfileConfig::default().duration,\n            };\n\n            let frequency: i32 = match query_pairs.get(\"frequency\").map(|val| val.parse()) {\n                Some(Ok(val)) => val,\n                Some(Err(e)) => return future::err(ErrorBadRequest(e)),\n                None => ProfileConfig::default().frequency,\n            };\n\n            let report: ProfileReport = match query_pairs.get(\"report\").map(|val| val.parse()) {\n                Some(Ok(val)) => val,\n                Some(Err(e)) => return future::err(ErrorBadRequest(e)),\n                None => ProfileConfig::default().report,\n            };\n\n            future::ok(ProfileConfig {\n                duration,\n                frequency,\n                report,\n            })\n        }\n    }\n\n    pub async fn dump_profile(maybe_config: actix_web::Result<ProfileConfig>) -> HttpResponse {\n        let config = match maybe_config {\n            Ok(config) => config,\n            Err(e) => return e.into(),\n        };\n\n        let guard = match pprof::ProfilerGuard::new(config.frequency) {\n            Ok(guard) => guard,\n            Err(e) => return ErrorInternalServerError(e).into(),\n        };\n\n        tokio::time::delay_for(config.duration).await;\n        let report = match guard.report().build() {\n            Ok(report) => report,\n            Err(e) => return ErrorInternalServerError(e).into(),\n        };\n        drop(guard);\n\n        let mut body = Vec::new();\n        match config.report {\n            ProfileReport::FlameGraph => match report.flamegraph(&mut body) {\n                Ok(_) => {\n                    log::info!(\"dump flamegraph successfully\");\n                    HttpResponse::Ok().body(body)\n                }\n                Err(err) => HttpResponse::InternalServerError().body(err.to_string()),\n            },\n            ProfileReport::PProf => match 
report.pprof().map(|p| p.encode(&mut body)) {\n                Ok(Ok(())) => {\n                    log::info!(\"dump pprof successfully\");\n                    HttpResponse::Ok().body(body)\n                }\n                Err(err) => HttpResponse::InternalServerError().body(err.to_string()),\n                Ok(Err(err)) => HttpResponse::InternalServerError().body(err.to_string()),\n            },\n        }\n    }\n}\n\npub async fn start_graphql<Adapter: APIAdapter + 'static>(cfg: GraphQLConfig, adapter: Adapter) {\n    let schema = Schema::new(Query, Mutation);\n\n    let state = State {\n        adapter: Arc::new(Box::new(adapter)),\n        schema:  Arc::new(schema),\n    };\n\n    let path_graphql_uri = cfg.graphql_uri.to_owned();\n    let path_graphiql_uri = cfg.graphiql_uri.to_owned();\n    let workers = cfg.workers;\n    let maxconn = cfg.maxconn;\n    let add_listening_address = cfg.listening_address;\n    let max_payload_size = cfg.max_payload_size;\n    let enable_dump_profile = cfg.enable_dump_profile;\n\n    // Start http server\n    let server = HttpServer::new(move || {\n        let app = App::new()\n            .data(state.clone())\n            .service(\n                web::resource(&path_graphql_uri)\n                    .app_data(web::Json::<GraphQLRequest>::configure(|cfg| {\n                        cfg.limit(max_payload_size)\n                    }))\n                    .route(web::post().to(graphql)),\n            )\n            .service(web::resource(&path_graphiql_uri).route(web::get().to(graphiql)))\n            .service(web::resource(\"/metrics\").route(web::get().to(metrics)));\n\n        if enable_dump_profile {\n            app.service(web::resource(\"/dump_profile\").route(web::get().to(profile::dump_profile)))\n        } else {\n            app\n        }\n    })\n    .workers(workers)\n    .maxconn(cmp::max(maxconn / workers, 1));\n\n    if let Some(tls) = cfg.tls {\n        // load ssl keys\n        let mut builder = 
SslAcceptor::mozilla_intermediate(SslMethod::tls()).unwrap();\n        builder\n            .set_private_key_file(tls.private_key_file_path, SslFiletype::PEM)\n            .unwrap();\n        builder\n            .set_certificate_chain_file(tls.certificate_chain_file_path)\n            .unwrap();\n\n        server\n            .bind_openssl(add_listening_address, builder)\n            .unwrap()\n            .run()\n            .await\n            .unwrap()\n    } else {\n        server\n            .bind(add_listening_address)\n            .unwrap()\n            .run()\n            .await\n            .unwrap()\n    }\n}\n"
  },
  {
    "path": "core/api/src/schema/block.rs",
    "content": "use protocol::fixed_codec::FixedCodec;\nuse protocol::types::Hash as PHash;\n\nuse crate::schema::{Address, Bytes, Hash, MerkleRoot, Uint64};\n\n#[derive(juniper::GraphQLObject, Clone)]\n#[graphql(\n    description = \"Block is a single digital record created within a blockchain. \\\n                   Each block contains a record of the previous Block, \\\n                   and when linked together these become the “chain”.\\\n                   A block is always composed of header and body.\"\n)]\npub struct Block {\n    #[graphql(description = \"The header section of a block\")]\n    header:            BlockHeader,\n    #[graphql(description = \"The body section of a block\")]\n    ordered_tx_hashes: Vec<Hash>,\n    #[graphql(description = \"Hash of the block\")]\n    hash:              Hash,\n}\n\n#[derive(juniper::GraphQLObject, Clone)]\n#[graphql(description = \"A block header is like the metadata of a block.\")]\npub struct BlockHeader {\n    #[graphql(\n        description = \"Identifier of a chain in order to prevent replay attacks across channels \"\n    )]\n    pub chain_id:                       Hash,\n    #[graphql(description = \"block height\")]\n    pub height:                         Uint64,\n    #[graphql(description = \"The height to which the block has been executed\")]\n    pub exec_height:                    Uint64,\n    #[graphql(description = \"The hash of the serialized previous block\")]\n    pub prev_hash:                      Hash,\n    #[graphql(description = \"A timestamp that records when the block was created\")]\n    pub timestamp:                      Uint64,\n    #[graphql(description = \"The merkle root of ordered transactions\")]\n    pub order_root:                     MerkleRoot,\n    #[graphql(description = \"The hash of ordered signed transactions\")]\n    pub order_signed_transactions_hash: Hash,\n    #[graphql(description = \"The merkle roots of all the confirms\")]\n    pub confirm_root:                   
Vec<MerkleRoot>,\n    #[graphql(description = \"The merkle root of the state\")]\n    pub state_root:                     MerkleRoot,\n    #[graphql(description = \"The merkle roots of receipts\")]\n    pub receipt_root:                   Vec<MerkleRoot>,\n    #[graphql(description = \"The sum of all transaction costs\")]\n    pub cycles_used:                    Vec<Uint64>,\n    #[graphql(description = \"The address of the proposer who packed the block\")]\n    pub proposer:                       Address,\n    pub proof:                          Proof,\n    #[graphql(description = \"The version of the validator set, designed for cross-chain use\")]\n    pub validator_version:              Uint64,\n    pub validators:                     Vec<Validator>,\n}\n\n#[derive(juniper::GraphQLObject, Clone)]\n#[graphql(description = \"The proof that verifies the block header\")]\npub struct Proof {\n    pub height:     Uint64,\n    pub round:      Uint64,\n    pub block_hash: Hash,\n    pub signature:  Bytes,\n    pub bitmap:     Bytes,\n}\n\n#[derive(juniper::GraphQLObject, Clone)]\n#[graphql(description = \"A validator in the consensus set\")]\npub struct Validator {\n    pub pubkey:         Bytes,\n    pub propose_weight: i32,\n    pub vote_weight:    i32,\n}\n\nimpl From<protocol::types::BlockHeader> for BlockHeader {\n    fn from(block_header: protocol::types::BlockHeader) -> Self {\n        BlockHeader {\n            chain_id:                       Hash::from(block_header.chain_id),\n            height:                         Uint64::from(block_header.height),\n            exec_height:                    Uint64::from(block_header.exec_height),\n            prev_hash:                      Hash::from(block_header.prev_hash),\n            timestamp:                      Uint64::from(block_header.timestamp),\n            order_root:                     MerkleRoot::from(block_header.order_root),\n            order_signed_transactions_hash: Hash::from(block_header.order_signed_transactions_hash),\n 
           state_root:                     MerkleRoot::from(block_header.state_root),\n            confirm_root:                   block_header\n                .confirm_root\n                .into_iter()\n                .map(MerkleRoot::from)\n                .collect(),\n            receipt_root:                   block_header\n                .receipt_root\n                .into_iter()\n                .map(MerkleRoot::from)\n                .collect(),\n            cycles_used:                    block_header\n                .cycles_used\n                .into_iter()\n                .map(Uint64::from)\n                .collect(),\n            proposer:                       Address::from(block_header.proposer),\n            proof:                          Proof::from(block_header.proof),\n            validator_version:              Uint64::from(block_header.validator_version),\n            validators:                     block_header\n                .validators\n                .into_iter()\n                .map(Validator::from)\n                .collect(),\n        }\n    }\n}\n\nimpl From<protocol::types::Block> for Block {\n    fn from(block: protocol::types::Block) -> Self {\n        Block {\n            header:            BlockHeader::from(block.header.clone()),\n            ordered_tx_hashes: block\n                .ordered_tx_hashes\n                .clone()\n                .into_iter()\n                .map(MerkleRoot::from)\n                .collect(),\n            hash:              Hash::from(PHash::digest(\n                block.header.encode_fixed().expect(\"rlp encode never fail\"),\n            )),\n        }\n    }\n}\n\nimpl From<protocol::types::Proof> for Proof {\n    fn from(proof: protocol::types::Proof) -> Self {\n        Proof {\n            height:     Uint64::from(proof.height),\n            round:      Uint64::from(proof.round),\n            block_hash: Hash::from(proof.block_hash),\n            signature:  
Bytes::from(proof.signature),\n            bitmap:     Bytes::from(proof.bitmap),\n        }\n    }\n}\n\nimpl From<protocol::types::Validator> for Validator {\n    fn from(validator: protocol::types::Validator) -> Self {\n        Validator {\n            pubkey:         Bytes::from(validator.pub_key),\n            propose_weight: validator.propose_weight as i32,\n            vote_weight:    validator.vote_weight as i32,\n        }\n    }\n}\n"
  },
  {
    "path": "core/api/src/schema/mod.rs",
    "content": "mod block;\nmod receipt;\nmod transaction;\n\nuse std::convert::From;\n\nuse derive_more::{Display, From};\nuse std::num::ParseIntError;\n\nuse protocol::{ProtocolError, ProtocolErrorKind, ProtocolResult};\n\npub use block::{Block, BlockHeader};\npub use receipt::{Event, Receipt, ReceiptResponse};\npub use transaction::{\n    to_signed_transaction, to_transaction, InputRawTransaction, InputTransactionEncryption,\n    SignedTransaction,\n};\n\n#[derive(juniper::GraphQLObject, Clone)]\npub struct ServiceResponse {\n    pub code:          Uint64,\n    pub succeed_data:  String,\n    pub error_message: String,\n}\n\nimpl From<protocol::traits::ServiceResponse<String>> for ServiceResponse {\n    fn from(resp: protocol::traits::ServiceResponse<String>) -> Self {\n        Self {\n            code:          Uint64::from(resp.code),\n            succeed_data:  resp.succeed_data,\n            error_message: resp.error_message,\n        }\n    }\n}\n\n#[derive(juniper::GraphQLScalarValue, Clone)]\n#[graphql(description = \"The output digest of Keccak hash function\")]\npub struct Hash(String);\npub type MerkleRoot = Hash;\n\n#[derive(juniper::GraphQLScalarValue, Clone)]\n#[graphql(description = \"20 bytes of account address\")]\npub struct Address(String);\n\n#[derive(juniper::GraphQLScalarValue, Clone)]\n#[graphql(description = \"Uint64\")]\npub struct Uint64(String);\n\n#[derive(juniper::GraphQLScalarValue, Clone)]\n#[graphql(description = \"Bytes corresponding hex string.\")]\npub struct Bytes(String);\n\nimpl Hash {\n    pub fn as_hex(&self) -> String {\n        self.0.to_uppercase()\n    }\n}\n\nimpl Address {\n    pub fn to_str(&self) -> &str {\n        &self.0\n    }\n}\n\nimpl Uint64 {\n    pub fn as_hex(&self) -> ProtocolResult<String> {\n        Ok(clean_0x(&self.0)?.to_uppercase())\n    }\n\n    pub fn try_into_u64(self) -> ProtocolResult<u64> {\n        let n = u64::from_str_radix(&self.as_hex()?, 16).map_err(SchemaError::IntoU64)?;\n        
Ok(n)\n    }\n}\n\nimpl Bytes {\n    pub fn as_hex(&self) -> ProtocolResult<String> {\n        Ok(clean_0x(&self.0)?.to_uppercase())\n    }\n\n    pub fn to_vec(&self) -> ProtocolResult<Vec<u8>> {\n        let v = hex::decode(self.as_hex()?).map_err(SchemaError::FromHex)?;\n        Ok(v)\n    }\n}\n\nimpl From<protocol::types::Hash> for Hash {\n    fn from(hash: protocol::types::Hash) -> Self {\n        Hash(hash.as_hex())\n    }\n}\n\nimpl From<protocol::types::Address> for Address {\n    fn from(address: protocol::types::Address) -> Self {\n        Address(address.to_string())\n    }\n}\n\nimpl From<u64> for Uint64 {\n    fn from(n: u64) -> Self {\n        Uint64(\"0x\".to_owned() + &hex::encode(n.to_be_bytes().to_vec()))\n    }\n}\n\nimpl From<protocol::Bytes> for Bytes {\n    fn from(bytes: protocol::Bytes) -> Self {\n        Bytes(\"0x\".to_owned() + &hex::encode(bytes))\n    }\n}\n\nfn clean_0x(s: &str) -> ProtocolResult<String> {\n    if s.starts_with(\"0x\") || s.starts_with(\"0X\") {\n        Ok(s[2..].to_owned())\n    } else {\n        Err(SchemaError::HexPrefix.into())\n    }\n}\n\n#[derive(Debug, Display, From)]\npub enum SchemaError {\n    #[display(fmt = \"into u64 {:?}\", _0)]\n    IntoU64(ParseIntError),\n\n    #[display(fmt = \"from hex {:?}\", _0)]\n    FromHex(hex::FromHexError),\n\n    #[display(fmt = \"hex should start with 0x\")]\n    HexPrefix,\n}\n\nimpl std::error::Error for SchemaError {}\n\nimpl From<SchemaError> for ProtocolError {\n    fn from(err: SchemaError) -> ProtocolError {\n        ProtocolError::new(ProtocolErrorKind::API, Box::new(err))\n    }\n}\n"
  },
  {
    "path": "core/api/src/schema/receipt.rs",
    "content": "use crate::schema::{Hash, MerkleRoot, ServiceResponse, Uint64};\n\n#[derive(juniper::GraphQLObject, Clone)]\npub struct Receipt {\n    pub state_root:  MerkleRoot,\n    pub height:      Uint64,\n    pub tx_hash:     Hash,\n    pub cycles_used: Uint64,\n    pub events:      Vec<Event>,\n    pub response:    ReceiptResponse,\n}\n\n#[derive(juniper::GraphQLObject, Clone)]\npub struct Event {\n    pub service: String,\n    pub name:    String,\n    pub data:    String,\n}\n\n#[derive(juniper::GraphQLObject, Clone)]\npub struct ReceiptResponse {\n    pub service_name: String,\n    pub method:       String,\n    pub response:     ServiceResponse,\n}\n\nimpl From<protocol::types::Receipt> for Receipt {\n    fn from(receipt: protocol::types::Receipt) -> Self {\n        Self {\n            state_root:  MerkleRoot::from(receipt.state_root),\n            height:      Uint64::from(receipt.height),\n            tx_hash:     Hash::from(receipt.tx_hash),\n            cycles_used: Uint64::from(receipt.cycles_used),\n            events:      receipt.events.into_iter().map(Event::from).collect(),\n            response:    ReceiptResponse::from(receipt.response),\n        }\n    }\n}\n\nimpl From<protocol::types::Event> for Event {\n    fn from(event: protocol::types::Event) -> Self {\n        Self {\n            service: event.service,\n            name:    event.name,\n            data:    event.data,\n        }\n    }\n}\n\nimpl From<protocol::types::ReceiptResponse> for ReceiptResponse {\n    fn from(response: protocol::types::ReceiptResponse) -> Self {\n        Self {\n            service_name: response.service_name,\n            method:       response.method,\n            response:     ServiceResponse::from(response.response),\n        }\n    }\n}\n"
  },
  {
    "path": "core/api/src/schema/transaction.rs",
"content": "use protocol::ProtocolResult;\n\nuse crate::schema::{Address, Bytes, Hash, SchemaError, Uint64};\n\n#[derive(juniper::GraphQLObject, Clone)]\npub struct SignedTransaction {\n    pub chain_id:     Hash,\n    pub cycles_limit: Uint64,\n    pub cycles_price: Uint64,\n    pub nonce:        Hash,\n    pub timeout:      Uint64,\n    pub sender:       Address,\n    pub service_name: String,\n    pub method:       String,\n    pub payload:      String,\n    pub tx_hash:      Hash,\n    pub pubkey:       Bytes,\n    pub signature:    Bytes,\n}\n\nimpl From<protocol::types::SignedTransaction> for SignedTransaction {\n    fn from(stx: protocol::types::SignedTransaction) -> Self {\n        Self {\n            chain_id:     Hash::from(stx.raw.chain_id),\n            cycles_limit: Uint64::from(stx.raw.cycles_limit),\n            cycles_price: Uint64::from(stx.raw.cycles_price),\n            nonce:        Hash::from(stx.raw.nonce),\n            timeout:      Uint64::from(stx.raw.timeout),\n            sender:       Address::from(stx.raw.sender),\n            service_name: stx.raw.request.service_name,\n            method:       stx.raw.request.method,\n            payload:      stx.raw.request.payload,\n            tx_hash:      Hash::from(stx.tx_hash),\n            pubkey:       Bytes::from(stx.pubkey),\n            signature:    Bytes::from(stx.signature),\n        }\n    }\n}\n\n// #####################\n// GraphQLInputObject\n// #####################\n\n#[derive(juniper::GraphQLInputObject, Clone)]\n#[graphql(description = \"There are many types of transactions in Muta. \\\n                         A transaction often requires computing resources or writes data to the chain; \\\n                         these resources are valuable, so we need to pay some tokens for them. \\\n                         InputRawTransaction describes the information above.\")]\npub struct InputRawTransaction {\n    #[graphql(description = \"Identifier of the chain.\")]\n    pub chain_id:     
Hash,\n    #[graphql(\n        description = \"Much like the gas limit in Ethereum, this describes the highest \\\n                       fee you are willing to pay for the transaction\"\n    )]\n    pub cycles_limit: Uint64,\n    pub cycles_price: Uint64,\n    #[graphql(\n        description = \"Every transaction has its own id; unlike Ethereum's nonce, \\\n                       the nonce in Muta is a hash\"\n    )]\n    pub nonce:        Hash,\n    #[graphql(description = \"For security and performance reasons, \\\n    Muta only accepts a transaction within a period of time: \\\n    the `timeout` should satisfy `timeout > current_block_height` and `timeout < current_block_height + timeout_gap`, \\\n    where `timeout_gap` generally equals 20.\")]\n    pub timeout:      Uint64,\n    pub service_name: String,\n    pub method:       String,\n    pub payload:      String,\n    pub sender:       Address,\n}\n\n#[derive(juniper::GraphQLInputObject, Clone)]\n#[graphql(description = \"Signature of the transaction\")]\npub struct InputTransactionEncryption {\n    #[graphql(description = \"The digest of the transaction\")]\n    pub tx_hash:   Hash,\n    #[graphql(description = \"The public key of the signer\")]\n    pub pubkey:    Bytes,\n    #[graphql(description = \"The signature of the transaction\")]\n    pub signature: Bytes,\n}\n\npub fn to_signed_transaction(\n    raw: InputRawTransaction,\n    encryption: InputTransactionEncryption,\n) -> ProtocolResult<protocol::types::SignedTransaction> {\n    let pubkey: &[u8] = &hex::decode(encryption.pubkey.as_hex()?).map_err(SchemaError::from)?;\n    let signature: &[u8] =\n        &hex::decode(encryption.signature.as_hex()?).map_err(SchemaError::from)?;\n\n    Ok(protocol::types::SignedTransaction {\n        raw:       to_transaction(raw)?,\n        tx_hash:   protocol::types::Hash::from_hex(&encryption.tx_hash.as_hex())?,\n        pubkey:    bytes::BytesMut::from(pubkey).freeze(),\n        signature: 
bytes::BytesMut::from(signature).freeze(),\n    })\n}\n\npub fn to_transaction(raw: InputRawTransaction) -> ProtocolResult<protocol::types::RawTransaction> {\n    Ok(protocol::types::RawTransaction {\n        chain_id:     protocol::types::Hash::from_hex(&raw.chain_id.as_hex())?,\n        nonce:        protocol::types::Hash::from_hex(&raw.nonce.as_hex())?,\n        timeout:      raw.timeout.try_into_u64()?,\n        cycles_price: raw.cycles_price.try_into_u64()?,\n        cycles_limit: raw.cycles_limit.try_into_u64()?,\n        request:      protocol::types::TransactionRequest {\n            service_name: raw.service_name.to_owned(),\n            method:       raw.method.to_owned(),\n            payload:      raw.payload.to_owned(),\n        },\n        sender:       raw.sender.to_str().parse()?,\n    })\n}\n"
  },
  {
    "path": "core/cli/Cargo.toml",
    "content": "[package]\nname = \"cli\"\nversion = \"0.2.1\"\nauthors = [\"Muta Dev <muta@nervos.org>\"]\nedition = \"2018\"\n\n# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html\n\n[dependencies]\nbacktrace = \"0.3\"\nactix-rt = \"1.0\"\nderive_more = \"0.99\"\nfutures = \"0.3\"\nparking_lot = \"0.11\"\nserde = \"1.0\"\nserde_derive = \"1.0\"\nserde_json = \"1.0\"\nlog = \"0.4\"\nclap = \"2.33\"\nbytes = \"0.5\"\nhex = \"0.4\"\nrlp = \"0.4\"\ntoml = \"0.5\"\ntokio = { version = \"0.2\", features = [\"macros\", \"sync\", \"rt-core\", \"rt-util\", \"signal\", \"time\"] }\nmuta-apm = \"0.1.0-alpha.7\"\nfutures-timer=\"3.0\"\ncita_trie = \"2.0\"\nfs_extra = \"1.2.0\"\n\nbyzantine = { path = \"../../byzantine\" }\ncommon-apm = { path = \"../../common/apm\" }\ncommon-config-parser = { path = \"../../common/config-parser\" }\ncommon-crypto = { path = \"../../common/crypto\" }\ncommon-logger = { path = \"../../common/logger\" }\nprotocol = { path = \"../../protocol\", package = \"muta-protocol\" }\ncore-api = { path = \"../../core/api\" }\ncore-storage = { path = \"../../core/storage\" }\ncore-mempool = { path = \"../../core/mempool\" }\ncore-network = { path = \"../../core/network\" }\ncore-consensus = { path = \"../../core/consensus\" }\n\nbinding-macro = { path = \"../../binding-macro\" }\nframework = { path = \"../../framework\" }\nrun = {path = \"../run\"}\n\n[dev-dependencies]\ncita_trie = \"2.0\"\nasync-trait = \"0.1\"\ntoml = \"0.5\"\nlazy_static = \"1.4\"\nmuta-codec-derive = \"0.2\"\nasset = { path = \"../../built-in-services/asset\" }\nmulti-signature = { path = \"../../built-in-services/multi-signature\" }\nauthorization = { path = \"../../built-in-services/authorization\" }\nmetadata = { path = \"../../built-in-services/metadata\"}\nutil = { path = \"../../built-in-services/util\"}\nrand = \"0.7\"\ncore-network = { path = \"../../core/network\", features = [\"diagnostic\"] }\ntokio = { version = \"0.2\", 
features = [\"full\"] }"
  },
  {
    "path": "core/cli/src/error.rs",
"content": "use std::error::Error;\n\nuse derive_more::{Display, From};\n\nuse protocol::{ProtocolError, ProtocolErrorKind};\n\n#[derive(Debug, Display, From)]\npub enum CliError {\n    #[display(fmt = \"input is not valid JSON for the target, {:?}\", _0)]\n    JSONFormat(serde_json::error::Error),\n\n    #[display(fmt = \"grammar error\")]\n    Grammar,\n\n    #[display(fmt = \"path not found: {}\", _0)]\n    Path(String),\n\n    #[display(fmt = \"io operation fails: {:?}\", _0)]\n    IO(std::io::Error),\n\n    #[display(fmt = \"fs_extra operation fails: {:?}\", _0)]\n    IO2(fs_extra::error::Error),\n\n    #[display(fmt = \"block for height {} not found\", _0)]\n    BlockNotFound(u64),\n\n    #[display(fmt = \"parsing error\")]\n    Parse,\n\n    #[display(fmt = \"unsupported command\")]\n    UnsupportedCommand,\n\n    #[display(fmt = \"genesis.toml is missing\")]\n    MissingGenesis,\n}\n\nimpl Error for CliError {}\n\nimpl From<CliError> for ProtocolError {\n    fn from(err: CliError) -> ProtocolError {\n        ProtocolError::new(ProtocolErrorKind::Storage, Box::new(err))\n    }\n}\n"
  },
  {
    "path": "core/cli/src/lib.rs",
    "content": "mod error;\n\n#[cfg(test)]\nmod tests;\n\nuse std::fs;\nuse std::ops::RangeInclusive;\nuse std::path::{Path, PathBuf};\nuse std::str::FromStr;\nuse std::sync::Arc;\n\nuse clap::ArgMatches;\nuse common_config_parser::types::Config;\nuse core_consensus::wal::ConsensusWal;\nuse core_consensus::SignedTxsWAL;\nuse core_storage::adapter::rocks::RocksAdapter;\nuse core_storage::ImplStorage;\nuse protocol::traits::{Context, MaintenanceStorage, ServiceMapping};\nuse protocol::types::{Block, Genesis, SignedTransaction};\nuse protocol::ProtocolResult;\n\nuse crate::error::CliError;\n\nconst PLEASE_CONFIRM: &str =\n    \"Please use -y to confirm modification and DO BACK UP YOUR DB DATA AND WAL\";\n\npub struct CliConfig {\n    pub app_name:      &'static str,\n    pub version:       &'static str,\n    pub author:        &'static str,\n    pub config_path:   &'static str,\n    pub genesis_patch: &'static str,\n}\n\npub struct Cli<'a, Mapping>\nwhere\n    Mapping: 'static + ServiceMapping,\n{\n    pub matches:         ArgMatches<'a>,\n    pub config:          Config,\n    pub genesis:         Option<Genesis>,\n    pub service_mapping: Arc<Mapping>,\n}\n\nimpl<'a, Mapping> Cli<'a, Mapping>\nwhere\n    Mapping: 'static + ServiceMapping,\n{\n    pub fn run(\n        service_mapping: Mapping,\n        cli_config: CliConfig,\n        target_commands: Option<Vec<&str>>,\n    ) {\n        let cli = Self::new(service_mapping, cli_config, target_commands);\n        if let Err(e) = cli.start() {\n            log::error!(\"{:?}\", e)\n        }\n    }\n\n    pub fn new(\n        service_mapping: Mapping,\n        cli_config: CliConfig,\n        target_commands: Option<Vec<&str>>,\n    ) -> Self {\n        let matches = Self::generate_matches(cli_config, target_commands);\n\n        let config_path = matches.value_of(\"config\").expect(\"missing config path\");\n\n        let genesis_path = matches.value_of(\"genesis\").expect(\"missing genesis path\");\n\n        let 
config: Config =\n            common_config_parser::parse(&config_path.trim()).expect(\"config path is not set\");\n\n        if !cfg!(test) {\n            Self::register_log(&config)\n        };\n\n        // genesis may be absent for now\n        let genesis = match fs::read_to_string(&genesis_path.trim()) {\n            Ok(genesis_content) => match toml::from_str::<Genesis>(&genesis_content) {\n                Ok(genesis) => Some(genesis),\n                Err(_) => None,\n            },\n            Err(_) => None,\n        };\n\n        Self {\n            matches,\n            config,\n            genesis,\n            service_mapping: Arc::new(service_mapping),\n        }\n    }\n\n    fn register_log(config: &Config) {\n        common_logger::init(\n            config.logger.filter.clone(),\n            config.logger.log_to_console,\n            config.logger.console_show_file_and_line,\n            config.logger.log_to_file,\n            config.logger.metrics,\n            config.logger.log_path.clone(),\n            config.logger.file_size_limit,\n            config.logger.modules_level.clone(),\n        );\n    }\n\n    pub fn start(self) -> ProtocolResult<()> {\n        match self.matches.subcommand() {\n            (\"run\", Some(_sub_cmd)) => {\n                log::info!(\"run subcommand run\");\n                if let Some(genesis) = self.genesis {\n                    let muta = run::Muta::new(self.config, genesis, self.service_mapping);\n                    muta.run()\n                } else {\n                    log::error!(\"genesis.toml is missing\");\n                    Err(CliError::MissingGenesis.into())\n                }\n            }\n            (\"latest_block\", Some(_sub_cmd)) => {\n                log::info!(\"run subcommand latest_block\");\n                let maintenance_cli = self.generate_maintenance_cli();\n                maintenance_cli.start()\n            }\n            (\"block\", Some(_sub_cmd)) => {\n                
log::info!(\"run subcommand block\");\n                let maintenance_cli = self.generate_maintenance_cli();\n                maintenance_cli.start()\n            }\n\n            (\"wal\", Some(_sub_cmd)) => {\n                log::info!(\"run subcommand wal\");\n                let maintenance_cli = self.generate_maintenance_cli();\n                maintenance_cli.start()\n            }\n\n            (\"backup\", Some(_sub_cmd)) => {\n                log::info!(\"run subcommand backup\");\n                let maintenance_cli = self.generate_maintenance_cli();\n                maintenance_cli.start()\n            }\n            _ => {\n                log::info!(\"run without any subcommand, default to run\");\n                if let Some(genesis) = self.genesis {\n                    let muta = run::Muta::new(self.config, genesis, self.service_mapping);\n                    muta.run()\n                } else {\n                    log::error!(\"genesis.toml is missing\");\n                    Err(CliError::MissingGenesis.into())\n                }\n            }\n        }\n    }\n\n    pub fn generate_matches(cli_config: CliConfig, cmds: Option<Vec<&str>>) -> ArgMatches<'a> {\n        let app = clap::App::new(cli_config.app_name)\n            .version(cli_config.version)\n            .author(cli_config.author)\n            .arg(\n                clap::Arg::with_name(\"config\")\n                    .short(\"c\")\n                    .long(\"config\")\n                    .value_name(\"FILE\")\n                    .help(\"a required file for the configuration\")\n                    .env(\"CONFIG\")\n                    .default_value(cli_config.config_path),\n            )\n            .arg(\n                clap::Arg::with_name(\"genesis\")\n                    .short(\"g\")\n                    .long(\"genesis\")\n                    .value_name(\"FILE\")\n                    .help(\"a required file for the genesis\")\n                    .env(\"GENESIS\")\n 
                   .default_value(cli_config.genesis_patch),\n            )\n            .subcommand(clap::SubCommand::with_name(\"run\").about(\"run the muta-chain\"))\n            .subcommand(\n                clap::SubCommand::with_name(\"latest_block\")\n                    //.help(\"latest block\")\n                    .about(\"APIs for latest block operation\")\n                    .subcommand(\n                        clap::SubCommand::with_name(\"set\")\n                            .arg(clap::Arg::with_name(\"BLOCK_HEIGHT\").required(true))\n                            .arg(clap::Arg::with_name(\"confirm\").short(\"y\").help(\"confirm to take effect\"))\n                            .about(\"set the latest block\")\n                    )\n                    .subcommand(\n                        clap::SubCommand::with_name(\"get\")\n                            .help(\"latest_block get\")),\n            )\n            .subcommand(\n                clap::SubCommand::with_name(\"block\")\n                    .about(\"APIs for block manipulation\")\n                    .subcommand(\n                        clap::SubCommand::with_name(\"get\")\n                            .arg(clap::Arg::with_name(\"BLOCK_HEIGHT\").required(true))\n                            .about(\"get block of [BLOCK_HEIGHT]\"),\n                    )\n                    .subcommand(\n                        clap::SubCommand::with_name(\"set\")\n                            .arg(clap::Arg::with_name(\"BLOCK\").required(true))\n                            .arg(clap::Arg::with_name(\"confirm\").short(\"y\").help(\"confirm to take effect\"))\n                            .about(\"upsert target block by [BLOCK], [BLOCK] is in JSON format\"),\n                    ),\n            )\n            .subcommand(\n                clap::SubCommand::with_name(\"wal\")\n                    .about(\"APIs for Write Ahead Log operation\")\n                    .subcommand(\n                        
clap::SubCommand::with_name(\"clear\")\n                            .arg(clap::Arg::with_name(\"confirm\")\n                                .short(\"y\").help(\"confirm to take effect\"))\n                            .about(\"clear all wals, include mempool wal and consensus txs\"),\n                    )\n                    .subcommand(\n                        clap::SubCommand::with_name(\"mempool\")\n                            .about(\"handle mempool wal\")\n                            .subcommand(\n                                clap::SubCommand::with_name(\"clear\")\n                                    .arg(clap::Arg::with_name(\"confirm\").short(\"y\").help(\"confirm to take effect\"))\n                                    .about(\"clear mempool wal\"),\n                            )\n                            .subcommand(\n                                clap::SubCommand::with_name(\"list\").about(\"list mempool wal\"),\n                            )\n                            .subcommand(\n                                clap::SubCommand::with_name(\"get\")\n                                    .about(\"get mempool wal\")\n                                    .arg(clap::Arg::with_name(\"BLOCK_HEIGHT\").required(true)),\n                            ),\n                    )\n                    .subcommand(\n                        clap::SubCommand::with_name(\"consensus\")\n                            .about(\"handle consensus wal\")\n                            .subcommand(\n                                clap::SubCommand::with_name(\"clear\")\n                                    .arg(clap::Arg::with_name(\"confirm\").short(\"y\").help(\"confirm to take effect\"))\n                                    .about(\"clear consensus wal\"),\n                            ),\n                    ),\n            )\n            .subcommand(\n                clap::SubCommand::with_name(\"backup\")\n                    .about(\"APIs for backup operation\")\n         
           .subcommand(\n                        clap::SubCommand::with_name(\"save\")\n                            .about(\"save db to [TO] place\")\n                            .arg(clap::Arg::with_name(\"TO\").required(true).help(\"path\")),\n                    )\n                    .subcommand(\n                        clap::SubCommand::with_name(\"restore\")\n                            .about(\"restore db from [FROM] place\")\n                            .arg(clap::Arg::with_name(\"FROM\").required(true).help(\"path\")),\n                    ),\n            );\n        match cmds {\n            Some(cmds) => app.get_matches_from(cmds),\n            None => app.get_matches(),\n        }\n    }\n\n    fn generate_maintenance_cli(self) -> MaintenanceCli<'a, Mapping, ImplStorage<RocksAdapter>> {\n        let path_block = self.config.data_path_for_block();\n        let rocks_adapter = match RocksAdapter::new(path_block, self.config.rocksdb.max_open_files)\n        {\n            Ok(adapter) => Arc::new(adapter),\n            Err(e) => {\n                log::error!(\"{:?}\", e);\n                panic!(\"rocks_adapter init fails\")\n            }\n        };\n        let storage = ImplStorage::new(rocks_adapter);\n\n        // Init full transactions wal\n        let txs_wal_path = self\n            .config\n            .data_path_for_txs_wal()\n            .to_str()\n            .unwrap()\n            .to_string();\n        let txs_wal = SignedTxsWAL::new(txs_wal_path);\n\n        // Init consensus wal\n        let consensus_wal_path = self\n            .config\n            .data_path_for_consensus_wal()\n            .to_str()\n            .unwrap()\n            .to_string();\n        let consensus_wal = ConsensusWal::new(consensus_wal_path);\n\n        MaintenanceCli::new(\n            self.matches,\n            self.config,\n            self.service_mapping,\n            storage,\n            txs_wal,\n            consensus_wal,\n        )\n    }\n}\n\npub 
struct MaintenanceCli<'a, Mapping, S>\nwhere\n    Mapping: 'static + ServiceMapping,\n    S: 'static + MaintenanceStorage,\n{\n    pub matches:         ArgMatches<'a>,\n    pub config:          Config,\n    pub service_mapping: Arc<Mapping>,\n    pub storage:         Arc<S>,\n    pub txs_wal:         Arc<SignedTxsWAL>,\n    pub consensus_wal:   Arc<ConsensusWal>,\n}\n\nimpl<'a, Mapping, S> MaintenanceCli<'a, Mapping, S>\nwhere\n    Mapping: 'static + ServiceMapping,\n    S: 'static + MaintenanceStorage,\n{\n    pub fn new(\n        matches: ArgMatches<'a>,\n        config: Config,\n        service_mapping: Arc<Mapping>,\n        storage: S,\n        txs_wal: SignedTxsWAL,\n        consensus_wal: ConsensusWal,\n    ) -> Self {\n        Self {\n            matches,\n            config,\n            service_mapping,\n            storage: Arc::new(storage),\n            txs_wal: Arc::new(txs_wal),\n            consensus_wal: Arc::new(consensus_wal),\n        }\n    }\n\n    pub fn start(&self) -> ProtocolResult<()> {\n        match self.matches.subcommand() {\n            (\"latest_block\", Some(sub_cmd)) => self.latest_block(sub_cmd),\n            (\"block\", Some(sub_cmd)) => self.block(sub_cmd),\n            (\"wal\", Some(sub_cmd)) => self.wal(sub_cmd),\n            (\"backup\", Some(sub_cmd)) => self.backup(sub_cmd),\n            _ => Err(CliError::UnsupportedCommand.into()),\n        }\n    }\n\n    pub fn latest_block(&self, sub_cmd: &ArgMatches) -> ProtocolResult<()> {\n        let mut rt = tokio::runtime::Runtime::new().expect(\"new tokio runtime\");\n\n        match sub_cmd.subcommand() {\n            (\"set\", Some(cmd)) => {\n                let height = cmd\n                    .value_of(\"BLOCK_HEIGHT\")\n                    .expect(\"missing [BLOCK_HEIGHT]\");\n\n                let confirm = cmd.is_present(\"confirm\");\n                if !confirm {\n                    log::info!(\"{}\", PLEASE_CONFIRM);\n                    return Ok(());\n           
     }\n\n                match height.parse::<u64>() {\n                    Ok(height) => rt.block_on(async move { self.latest_block_set(height).await }),\n                    Err(_e) => Err(CliError::Parse.into()),\n                }\n            }\n\n            (\"get\", Some(_cmd)) => {\n                let block = rt.block_on(async move { self.latest_block_get().await })?;\n                log::info!(\n                    \"latest_block get {}\",\n                    serde_json::to_string(&block).unwrap()\n                );\n                Ok(())\n            }\n\n            _ => Err(CliError::Grammar.into()),\n        }\n    }\n\n    pub async fn latest_block_set(&self, height: u64) -> ProtocolResult<()> {\n        let last = self.storage.get_latest_block(Context::new()).await?;\n\n        let block = self.block_get(height).await?;\n        let block = match block {\n            Some(blk) => blk,\n            None => return Err(CliError::BlockNotFound(height).into()),\n        };\n\n        self.storage\n            .insert_block(Context::new(), block.clone())\n            .await?;\n        log::info!(\n            \"latest_block set successfully: {}\",\n            serde_json::to_string(&block).unwrap()\n        );\n\n        // now remove 'future' blocks\n        for idx in RangeInclusive::new(height + 1, last.header.height) {\n            self.storage.remove_block(Context::new(), idx).await?\n        }\n        log::info!(\n            \"latest_block set, removed blocks from {} to {}\",\n            height + 1,\n            last.header.height\n        );\n        Ok(())\n    }\n\n    pub async fn latest_block_get(&self) -> ProtocolResult<Block> {\n        self.storage.get_latest_block(Context::new()).await\n    }\n\n    pub fn block(&self, sub_cmd: &ArgMatches) -> ProtocolResult<()> {\n        let mut rt = tokio::runtime::Runtime::new().expect(\"new tokio runtime\");\n\n        match sub_cmd.subcommand() {\n            (\"set\", Some(cmd)) => 
{\n                let confirm = cmd.is_present(\"confirm\");\n                if !confirm {\n                    log::info!(\"{}\", PLEASE_CONFIRM);\n                    return Ok(());\n                }\n\n                let block_json = cmd.value_of(\"BLOCK\").expect(\"missing [BLOCK]\");\n                rt.block_on(async move { self.block_set(block_json).await })?;\n\n                Ok(())\n            }\n\n            (\"get\", Some(cmd)) => {\n                let height = match cmd\n                    .value_of(\"BLOCK_HEIGHT\")\n                    .expect(\"missing [BLOCK_HEIGHT]\")\n                    .parse::<u64>()\n                {\n                    Ok(height) => height,\n                    Err(_e) => return Err(CliError::Parse.into()),\n                };\n\n                let res = rt.block_on(async move { self.block_get(height).await })?;\n                match res {\n                    Some(block) => {\n                        log::info!(\"block_get: {}\", serde_json::to_string(&block).unwrap());\n                    }\n                    None => {\n                        log::info!(\"block not found for height {}\", height);\n                    }\n                }\n                Ok(())\n            }\n\n            _ => Err(CliError::Grammar.into()),\n        }\n    }\n\n    pub async fn block_get(&self, height: u64) -> ProtocolResult<Option<Block>> {\n        self.storage.get_block(Context::new(), height).await\n    }\n\n    pub async fn block_set(&self, block_json: &str) -> ProtocolResult<()> {\n        let block = serde_json::from_str::<Block>(block_json).map_err(|e| {\n            log::info!(\"use 'block get 0' to get an example block JSON output\");\n            CliError::JSONFormat(e)\n        })?;\n\n        self.storage\n            .remove_block(Context::new(), block.header.height)\n            .await?;\n        self.storage\n            .set_block(Context::new(), block.clone())\n            .await?;\n        log::info!(\n            \"block set successfully: {}\",\n            serde_json::to_string(&block).unwrap()\n        );\n        Ok(())\n    }\n\n    
pub fn wal(&self, sub_cmd: &ArgMatches) -> ProtocolResult<()> {\n        match sub_cmd.subcommand() {\n            (\"mempool\", Some(cmd)) => match cmd.subcommand() {\n                (\"clear\", Some(cmd)) => {\n                    let confirm = cmd.is_present(\"confirm\");\n                    if !confirm {\n                        log::info!(\"{}\", PLEASE_CONFIRM);\n                        return Ok(());\n                    };\n                    self.wal_txs_clear()\n                }\n                (\"list\", Some(_cmd)) => {\n                    self.wal_txs_list()?;\n                    Ok(())\n                }\n                (\"get\", Some(cmd)) => {\n                    let height = match cmd\n                        .value_of(\"BLOCK_HEIGHT\")\n                        .expect(\"missing [BLOCK_HEIGHT]\")\n                        .parse::<u64>()\n                    {\n                        Ok(height) => height,\n                        Err(_e) => return Err(CliError::Parse.into()),\n                    };\n                    self.wal_txs_get(height)?;\n                    Ok(())\n                }\n                _ => Err(CliError::Grammar.into()),\n            },\n\n            (\"consensus\", Some(cmd)) => match cmd.subcommand() {\n                (\"clear\", Some(cmd)) => {\n                    let confirm = cmd.is_present(\"confirm\");\n                    if !confirm {\n                        log::info!(\"{}\", PLEASE_CONFIRM);\n                        return Ok(());\n                    };\n                    self.wal_consensus_clear()\n                }\n                _ => Err(CliError::Grammar.into()),\n            },\n\n            (\"clear\", Some(cmd)) => {\n                let confirm = cmd.is_present(\"confirm\");\n\n                if !confirm {\n                    log::info!(\"{}\", PLEASE_CONFIRM);\n                    return Ok(());\n                };\n\n                self.wal_consensus_clear()?;\n                self.wal_txs_clear()?;\n                log::info!(\"wal cleared successfully\");\n                Ok(())\n            }\n\n            _ => 
Err(CliError::Grammar.into()),\n        }\n    }\n\n    pub fn wal_txs_clear(&self) -> ProtocolResult<()> {\n        let res = self.txs_wal.remove_all();\n        log::info!(\"wal_txs_clear: {:?}\", res);\n        res\n    }\n\n    pub fn wal_txs_list(&self) -> ProtocolResult<Vec<u64>> {\n        let res = self.txs_wal.available_height();\n        log::info!(\"wal_txs_list: {:?}\", res);\n        res\n    }\n\n    pub fn wal_txs_get(&self, height: u64) -> ProtocolResult<Vec<SignedTransaction>> {\n        let res = self.txs_wal.load_by_height(height);\n        log::info!(\"wal_txs_get: {:?}\", res);\n        Ok(res)\n    }\n\n    pub fn wal_consensus_clear(&self) -> ProtocolResult<()> {\n        let res = self.consensus_wal.clear();\n        log::info!(\"wal_consensus_clear: {:?}\", res);\n        res\n    }\n\n    pub fn backup(&self, sub_cmd: &ArgMatches) -> ProtocolResult<()> {\n        match sub_cmd.subcommand() {\n            (\"save\", Some(cmd)) => {\n                let to = cmd.value_of(\"TO\").expect(\"missing [TO]\");\n\n                self.backup_save(PathBuf::from_str(to).map_err(|e| CliError::Path(e.to_string()))?)\n            }\n\n            (\"restore\", Some(cmd)) => {\n                let from = cmd.value_of(\"FROM\").expect(\"missing [FROM]\");\n                self.backup_restore(\n                    PathBuf::from_str(from).map_err(|e| CliError::Path(e.to_string()))?,\n                )\n            }\n\n            _ => Err(CliError::Grammar.into()),\n        }\n    }\n\n    pub fn backup_save<P: AsRef<Path>>(&self, to: P) -> ProtocolResult<()> {\n        let to = to.as_ref();\n        let data_path = self.config.data_path.as_path();\n        fs_extra::dir::remove(to).map_err(CliError::IO2)?;\n        fs_extra::dir::copy(data_path, to, &fs_extra::dir::CopyOptions {\n            overwrite:    true,\n            skip_exist:   false,\n            buffer_size:  64000, // 64kb\n            copy_inside:  true,\n            content_only: false,\n   
         depth:        0,\n        })\n        .map_err(CliError::IO2)?;\n\n        log::info!(\"backup_save successfully to: {:?}\", to.to_str());\n        Ok(())\n    }\n\n    pub fn backup_restore<P: AsRef<Path>>(&self, from: P) -> ProtocolResult<()> {\n        let from = from.as_ref();\n        let data_path = self.config.data_path.as_path();\n        fs_extra::dir::remove(data_path).map_err(CliError::IO2)?;\n        fs_extra::dir::copy(from, data_path, &fs_extra::dir::CopyOptions {\n            overwrite:    true,\n            skip_exist:   false,\n            buffer_size:  64000, // 64kb\n            copy_inside:  true,\n            content_only: false,\n            depth:        0,\n        })\n        .map_err(CliError::IO2)?;\n        log::info!(\"backup_restore successfully from: {:?}\", from.to_str());\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "core/cli/src/tests/config.toml",
    "content": "# crypto\nprivkey = \"0x5ec982173d54d830b6789cbbbe43eaa2853a5ff752d1ebc1b266cf9790314f8a\"\n\n# db config\ndata_path = \"./free-space/data\"\n\n[graphql]\nlistening_address = \"127.0.0.1:8000\"\ngraphql_uri = \"/graphql\"\ngraphiql_uri = \"/graphiql\"\nworkers = 0 # if 0, uses number of available logical cpu as threads count.\nmaxconn = 25000\nmax_payload_size = 1048576\n# [graphql.tls]\n# private_key_file_path = \"key.pem\"\n# certificate_chain_file_path = \"cert.pem\"\n\n\n[network]\nlistening_address = \"0.0.0.0:1337\"\nrpc_timeout = 10\n\n[consensus]\noverlord_gap = 5\nsync_txs_chunk_size = 5000\n\n[[network.bootstraps]]\npeer_id = \"QmTEJkB5QKWsEq37huryZZfVvqBKb54sHnKn9TQcA6j3n9\"\naddress = \"0.0.0.0:1888\"\n\n[mempool]\npool_size = 20000\nbroadcast_txs_size = 200\nbroadcast_txs_interval = 200\n\n[executor]\nlight = false\ntriedb_cache_size = 2000\n\n[logger]\nfilter = \"info\"\nlog_to_console = true\nconsole_show_file_and_line = false\nlog_path = \"./free-space/logs\"\nlog_to_file = true\nfile_size_limit = 1073741824 # 1 GiB\nmetrics = true\n# you can specify log level for modules with config below\n# modules_level = { \"overlord::state::process\" = \"debug\", core_consensus = \"error\" }\n\n[rocksdb]\nmax_open_files = 64\n\n# [apm]\n# service_name = \"muta\"\n# tracing_address = \"127.0.0.1:6831\"\n# tracing_batch_size = 50\n"
  },
  {
    "path": "core/cli/src/tests/genesis.toml",
    "content": "timestamp = 0\nprevhash = \"0x44915be5b6c20b0678cf05fcddbbaa832e25d7e6ac538784cd5c24de00d47472\"\n\n[[services]]\nname = \"asset\"\npayload = '''\n{\n   \"id\": \"0xf56924db538e77bb5951eb5ff0d02b88983c49c45eea30e8ae3e7234b311436c\",\n   \"name\": \"MutaToken\",\n   \"symbol\": \"MT\",\n   \"supply\": 320000011,\n   \"issuer\": \"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\"\n}\n'''\n\n[[services]]\nname = \"metadata\"\npayload = '''\n{\n    \"chain_id\": \"0xb6a4d7da21443f5e816e8700eea87610e6d769657d6b8ec73028457bf2ca4036\",\n    \"bech32_address_hrp\": \"muta\",\n    \"common_ref\": \"0x6c747758636859487038\",\n    \"timeout_gap\": 20,\n    \"cycles_limit\": 4294967295,\n    \"cycles_price\": 1,\n    \"interval\": 3000,\n    \"verifier_list\": [\n       {\n           \"bls_pub_key\": \"0x04102947214862a503c73904deb5818298a186d68c7907bb609583192a7de6331493835e5b8281f4d9ee705537c0e765580e06f86ddce5867812fceb42eecefd209f0eddd0389d6b7b0100f00fb119ef9ab23826c6ea09aadcc76fa6cea6a32724\",\n           \"pub_key\": \"0x02ef0cb0d7bc6c18b4bea1f5908d9106522b35ab3c399369605d4242525bda7e60\",\n           \"address\": \"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\",\n           \"propose_weight\": 1,\n           \"vote_weight\": 1\n       }\n    ],\n    \"propose_ratio\": 15,\n    \"prevote_ratio\": 10,\n    \"precommit_ratio\": 10,\n    \"brake_ratio\": 7,\n    \"tx_num_limit\": 20000,\n    \"max_tx_size\": 1024\n}\n'''\n"
  },
  {
    "path": "core/cli/src/tests/mod.rs",
    "content": "mod service_mapping;\n\nuse std::path::PathBuf;\nuse std::str::FromStr;\n\nuse protocol::traits::{CommonStorage, Context};\nuse protocol::types::{Block, BlockHeader, Bytes, Hash, Proof};\nuse protocol::ProtocolResult;\n\nuse crate::{Cli, CliConfig};\n\nuse service_mapping::DefaultServiceMapping;\n\nconst SAVE_DIR: &str = \"./free-space/save\";\nconst DATA_DIR: &str = \"./free-space/data\";\nconst CONFIG_PATH: &str = \"./src/tests/config.toml\";\nconst GENESIS_PATH: &str = \"./src/tests/genesis.toml\";\n\n#[test]\nfn test_linearly() {\n    clean();\n\n    prepare();\n    save_restore();\n    clean();\n\n    // run the \"latest\" tests before the \"block\" tests because storage caches the latest block\n    prepare();\n    latest_get(23);\n    clean();\n\n    prepare();\n    latest_set();\n    clean();\n\n    prepare();\n    block_get();\n    clean();\n\n    prepare();\n    block_set();\n    clean();\n}\n\nfn save_restore() {\n    println!(\"test save_restore\");\n    let save = PathBuf::from_str(SAVE_DIR).expect(\"save_restore, path fails\");\n    fs_extra::dir::remove(save.clone()).expect(\"save_restore, remove save_restore fails\");\n\n    run(vec![\n        \"muta-chain\",\n        \"--config\",\n        CONFIG_PATH,\n        \"--genesis\",\n        GENESIS_PATH,\n        \"backup\",\n        \"save\",\n        SAVE_DIR,\n    ])\n    .expect(\"save_restore, run save fails\");\n\n    assert!(save.exists());\n    // now the data directory is gone\n    clean();\n\n    run(vec![\n        \"muta-chain\",\n        \"--config\",\n        CONFIG_PATH,\n        \"--genesis\",\n        GENESIS_PATH,\n        \"backup\",\n        \"restore\",\n        SAVE_DIR,\n    ])\n    .expect(\"save_restore, run restore fails\");\n\n    let data = PathBuf::from_str(DATA_DIR).expect(\"save_restore, path fails\");\n    assert!(data.exists());\n\n    fs_extra::dir::remove(save).expect(\"save_restore, remove save files fails\");\n    println!(\"tested save_restore\");\n}\n\nfn block_get() -> 
Block {\n    println!(\"test block_get\");\n    let cmd = vec![\n        \"muta-chain\",\n        \"--config\",\n        CONFIG_PATH,\n        \"--genesis\",\n        GENESIS_PATH,\n        \"block\",\n        \"get\",\n        \"11\",\n    ];\n\n    let maintenance_cli = Cli::new(\n        DefaultServiceMapping {},\n        CliConfig {\n            app_name:      \"Rodents\",\n            version:       \"Big Cheek\",\n            author:        \"Hamsters\",\n            config_path:   \"./config.toml\",\n            genesis_patch: \"./genesis.toml\",\n        },\n        Some(cmd),\n    )\n    .generate_maintenance_cli();\n\n    let block = if let (\"block\", Some(sub_cmd)) = maintenance_cli.matches.subcommand() {\n        let mut rt = tokio::runtime::Runtime::new().expect(\"new tokio runtime\");\n\n        if let (\"get\", Some(_cmd)) = sub_cmd.subcommand() {\n            let res = rt.block_on(async move { maintenance_cli.block_get(11).await });\n            let block = res\n                .expect(\"block_get, block_get fails\")\n                .expect(\"block_get, block_get block not found\");\n            assert_eq!(block.header.height, 11);\n            block\n        } else {\n            panic!()\n        }\n    } else {\n        panic!()\n    };\n    println!(\"tested block_get\");\n    block\n}\n\nfn block_set() {\n    println!(\"test block_set\");\n    // we change the exec height from 10 to 9 at height 11\n    let cmd = vec![\n        \"muta-chain\",\n        \"--config\",\n        CONFIG_PATH,\n        \"--genesis\",\n        GENESIS_PATH,\n        \"block\",\n        \"set\",\n        \"-y\",\n        r#\"\n        
{\"header\":{\"chain_id\":\"0xb6a4d7da21443f5e816e8700eea87610e6d769657d6b8ec73028457bf2ca4036\",\"height\":11,\"exec_height\":9,\"prev_hash\":\"0xc60d9652e5a7d18d34272ac4f8350086439520923d812b4cc4428a9b04d2dd01\",\"timestamp\":1598632570280,\"order_root\":\"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421\",\"order_signed_transactions_hash\":\"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421\",\"confirm_root\":[\"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421\"],\"state_root\":\"0xd26475337965236ee6bfb4db3f02ed8d21b710f4194e7de5a379fdde0f48c681\",\"receipt_root\":[\"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421\"],\"cycles_used\":[0],\"proposer\":\"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\",\"proof\":{\"height\":10,\"round\":0,\"block_hash\":\"0xc60d9652e5a7d18d34272ac4f8350086439520923d812b4cc4428a9b04d2dd01\",\"signature\":[7,23,172,129,210,37,136,144,12,57,227,78,29,103,134,41,243,30,237,76,239,6,104,140,72,255,52,0,245,178,160,99,83,172,226,68,115,200,56,126,97,78,80,58,101,70,84,162,8,230,26,25,30,82,91,62,107,140,126,30,95,148,17,78,243,149,82,90,103,206,13,32,42,83,41,233,22,248,127,89,83,246,37,8,152,236,11,120,55,77,110,93,222,191,246,59,11,217,193,133,230,91,73,115,76,124,147,244,154,146,179,147,242,89,239,124,135,95,62,70,190,42,220,245,155,74,210,75,166,138,78,42,247,71,229,134,245,53,10,57,65,253,178,238,14,108,79,191,45,140,142,134,251,157,255,148,122,78,167,127,204,79,176,71,188,253,42,167,34,61,234,242,248,86,0,62,225,11,207,15,254,235,189,202,94,10,185,176,223,127,62,127],\"bitmap\":[128]},\"validator_version\":0,\"validators\":[{\"pub_key\":[2,239,12,176,215,188,108,24,180,190,161,245,144,141,145,6,82,43,53,171,60,57,147,105,96,93,66,66,82,91,218,126,96],\"propose_weight\":1,\"vote_weight\":1}]},\"ordered_tx_hashes\":[]}\n        \"#,\n    ];\n\n    let maintenance_cli = Cli::new(\n        DefaultServiceMapping {},\n        CliConfig {\n            app_name:      
\"Rodents\",\n            version:       \"Big Cheek\",\n            author:        \"Hamsters\",\n            config_path:   \"./config.toml\",\n            genesis_patch: \"./genesis.toml\",\n        },\n        Some(cmd),\n    )\n    .generate_maintenance_cli();\n    let mut rt = tokio::runtime::Runtime::new().expect(\"new tokio runtime\");\n\n    rt.block_on(async move {\n        if let (\"block\", Some(sub_cmd)) = maintenance_cli.matches.subcommand() {\n            if let (\"set\", Some(cmd)) = sub_cmd.subcommand() {\n                let block_json = cmd.value_of(\"BLOCK\").expect(\"missing [BLOCK]\");\n\n                let res = maintenance_cli.block_set(block_json).await;\n                assert!(res.is_ok());\n            } else {\n                panic!()\n            }\n        } else {\n            panic!()\n        }\n    });\n\n    let changed = block_get();\n    assert_eq!(changed.header.exec_height, 9);\n    println!(\"tested block_set\");\n}\n\nfn latest_get(expect: u64) -> Block {\n    println!(\"test latest_get\");\n\n    let cmd = vec![\n        \"muta-chain\",\n        \"--config\",\n        CONFIG_PATH,\n        \"--genesis\",\n        GENESIS_PATH,\n        \"latest_block\",\n        \"get\",\n    ];\n\n    let maintenance_cli = Cli::new(\n        DefaultServiceMapping {},\n        CliConfig {\n            app_name:      \"Rodents\",\n            version:       \"Big Cheek\",\n            author:        \"Hamsters\",\n            config_path:   \"./config.toml\",\n            genesis_patch: \"./genesis.toml\",\n        },\n        Some(cmd),\n    )\n    .generate_maintenance_cli();\n\n    let block = if let (\"latest_block\", Some(sub_cmd)) = maintenance_cli.matches.subcommand() {\n        if let (\"get\", Some(_cmd)) = sub_cmd.subcommand() {\n            let mut rt = tokio::runtime::Runtime::new().expect(\"new tokio runtime\");\n            let res = rt.block_on(async move { 
maintenance_cli.latest_block_get().await });\n            let block = res.expect(\"latest_get, latest_block_get fails\");\n            assert_eq!(block.header.height, expect);\n            block\n        } else {\n            panic!()\n        }\n    } else {\n        panic!()\n    };\n    println!(\"tested latest_get\");\n    block\n}\n\nfn latest_set() {\n    println!(\"test latest_set\");\n\n    // we roll the latest block back to height 10\n    let cmd = vec![\n        \"muta-chain\",\n        \"--config\",\n        CONFIG_PATH,\n        \"--genesis\",\n        GENESIS_PATH,\n        \"latest_block\",\n        \"set\",\n        \"-y\",\n        \"10\",\n    ];\n\n    let maintenance_cli = Cli::new(\n        DefaultServiceMapping {},\n        CliConfig {\n            app_name:      \"Rodents\",\n            version:       \"Big Cheek\",\n            author:        \"Hamsters\",\n            config_path:   \"./config.toml\",\n            genesis_patch: \"./genesis.toml\",\n        },\n        Some(cmd),\n    )\n    .generate_maintenance_cli();\n\n    if let (\"latest_block\", Some(sub_cmd)) = maintenance_cli.matches.subcommand() {\n        if let (\"set\", Some(_cmd)) = sub_cmd.subcommand() {\n            let mut rt = tokio::runtime::Runtime::new().expect(\"new tokio runtime\");\n            let res = rt.block_on(async move { maintenance_cli.latest_block_set(10).await });\n            assert!(res.is_ok());\n        } else {\n            panic!()\n        }\n    } else {\n        panic!()\n    }\n\n    let changed = latest_get(10);\n    assert_eq!(changed.header.height, 10);\n    println!(\"tested latest_set\");\n}\n\n// helper functions for the tests are listed below\n\nfn prepare() {\n    let to = PathBuf::from_str(DATA_DIR).expect(\"prepare, data dir fails\");\n\n    if to.exists() {\n        fs_extra::dir::remove(to.as_path()).expect(\"prepare, remove to fails\");\n    }\n\n    // we pass a valid command so parsing succeeds, but we don't use the matches here\n    let cmd = 
vec![\n        \"muta-chain\",\n        \"--config\",\n        CONFIG_PATH,\n        \"--genesis\",\n        GENESIS_PATH,\n        \"latest_block\",\n        \"get\",\n    ];\n\n    let maintenance_cli = Cli::new(\n        DefaultServiceMapping {},\n        CliConfig {\n            app_name:      \"Rodents\",\n            version:       \"Big Cheek\",\n            author:        \"Hamsters\",\n            config_path:   \"./config.toml\",\n            genesis_patch: \"./genesis.toml\",\n        },\n        Some(cmd),\n    )\n    .generate_maintenance_cli();\n\n    let storage = maintenance_cli.storage;\n\n    // now we add fake blocks\n    let mut rt = tokio::runtime::Runtime::new().expect(\"new tokio runtime\");\n\n    for idx in 0..=23 {\n        if let Err(e) = rt.block_on(storage.insert_block(Context::new(), Block {\n            header:            BlockHeader {\n                chain_id:                       Default::default(),\n                height:                         idx,\n                exec_height:                    match idx {\n                    i if i > 0 => i - 1,\n                    _ => 0,\n                },\n                prev_hash:                      Default::default(),\n                timestamp:                      0,\n                order_root:                     Default::default(),\n                order_signed_transactions_hash: Default::default(),\n                confirm_root:                   vec![],\n                state_root:                     Default::default(),\n                receipt_root:                   vec![],\n                cycles_used:                    vec![],\n                proposer:                       Default::default(),\n                proof:                          Proof {\n                    height:     0,\n                    round:      0,\n                    block_hash: Default::default(),\n                    signature:  Default::default(),\n                    bitmap:     
Default::default(),\n                },\n                validator_version:              0,\n                validators:                     vec![],\n            },\n            ordered_tx_hashes: vec![],\n        })) {\n            println!(\"{:?}\", e);\n            panic!(\"muta cli test prepare(), prepare rocksdb fails\")\n        };\n    }\n\n    let tx_wal = maintenance_cli.txs_wal;\n    if tx_wal\n        .save(\n            23,\n            Hash::from_hex(\"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421\")\n                .unwrap(),\n            vec![],\n        )\n        .is_err()\n    {\n        panic!(\"muta cli test prepare(), prepare tx_wal fails\")\n    };\n\n    let consensus_wal = maintenance_cli.consensus_wal;\n\n    if consensus_wal\n        .update_overlord_wal(\n            Context::new(),\n            Bytes::from_static(b\"1234567,doremifasolati\"),\n        )\n        .is_err()\n    {\n        panic!(\"muta cli test prepare(), prepare consensus_wal fails\")\n    };\n}\n\nfn clean() {\n    let to = PathBuf::from_str(DATA_DIR).expect(\"clean, data dir fails\");\n    if to.exists() {\n        fs_extra::dir::remove(to.as_path()).expect(\"clean, remove to\");\n    }\n}\n\nfn run(cmd: Vec<&str>) -> ProtocolResult<()> {\n    Cli::new(\n        service_mapping::DefaultServiceMapping {},\n        CliConfig {\n            app_name:      \"Rodents\",\n            version:       \"Big Cheek\",\n            author:        \"Hamsters\",\n            config_path:   \"./config.toml\",\n            genesis_patch: \"./genesis.toml\",\n        },\n        Some(cmd),\n    )\n    .start()\n}\n"
  },
  {
    "path": "core/cli/src/tests/service_mapping.rs",
    "content": "// This file is copied directly from example/muta-chain\n\nuse derive_more::{Display, From};\nuse protocol::traits::{SDKFactory, Service, ServiceMapping, ServiceSDK};\nuse protocol::{ProtocolError, ProtocolErrorKind, ProtocolResult};\n\nuse asset::{AssetService, ASSET_SERVICE_NAME};\nuse authorization::{AuthorizationService, AUTHORIZATION_SERVICE_NAME};\nuse metadata::{MetadataService, METADATA_SERVICE_NAME};\nuse multi_signature::{MultiSignatureService, MULTI_SIG_SERVICE_NAME};\nuse util::{UtilService, UTIL_SERVICE_NAME};\n\npub struct DefaultServiceMapping;\n\nimpl ServiceMapping for DefaultServiceMapping {\n    fn get_service<SDK: 'static + ServiceSDK, Factory: SDKFactory<SDK>>(\n        &self,\n        name: &str,\n        factory: &Factory,\n    ) -> ProtocolResult<Box<dyn Service>> {\n        let sdk = factory.get_sdk(name)?;\n        let service = match name {\n            AUTHORIZATION_SERVICE_NAME => {\n                let multi_sig_sdk = factory.get_sdk(\"multi_signature\")?;\n                Box::new(AuthorizationService::new(\n                    sdk,\n                    MultiSignatureService::new(multi_sig_sdk),\n                )) as Box<dyn Service>\n            }\n            ASSET_SERVICE_NAME => Box::new(AssetService::new(sdk)) as Box<dyn Service>,\n            METADATA_SERVICE_NAME => Box::new(MetadataService::new(sdk)) as Box<dyn Service>,\n            MULTI_SIG_SERVICE_NAME => Box::new(MultiSignatureService::new(sdk)) as Box<dyn Service>,\n            UTIL_SERVICE_NAME => Box::new(UtilService::new(sdk)) as Box<dyn Service>,\n            _ => {\n                return Err(MappingError::NotFoundService {\n                    service: name.to_owned(),\n                }\n                .into());\n            }\n        };\n\n        Ok(service)\n    }\n\n    fn list_service_name(&self) -> Vec<String> {\n        vec![\n            ASSET_SERVICE_NAME.to_owned(),\n            AUTHORIZATION_SERVICE_NAME.to_owned(),\n            
METADATA_SERVICE_NAME.to_owned(),\n            MULTI_SIG_SERVICE_NAME.to_owned(),\n            UTIL_SERVICE_NAME.to_owned(),\n        ]\n    }\n}\n\n#[derive(Debug, Display, From)]\npub enum MappingError {\n    #[display(fmt = \"service {:?} was not found\", service)]\n    NotFoundService { service: String },\n}\n\nimpl std::error::Error for MappingError {}\n\nimpl From<MappingError> for ProtocolError {\n    fn from(err: MappingError) -> ProtocolError {\n        ProtocolError::new(ProtocolErrorKind::Service, Box::new(err))\n    }\n}\n"
  },
  {
    "path": "core/consensus/Cargo.toml",
    "content": "[package]\nname = \"core-consensus\"\nversion = \"0.2.1\"\nauthors = [\"Muta Dev <muta@nervos.org>\"]\nedition = \"2018\"\nrepository = \"https://github.com/nervosnetwork/muta\"\n\n# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html\n\n[dependencies]\nasync-trait = \"0.1\"\nbincode = \"1.3\"\ncita_trie = \"2.0\"\njson = \"0.12\"\ncreep = \"0.2\"\nderive_more = \"0.99\"\nfutures = { version = \"0.3\", features = [\"async-await\"] }\nfutures-timer = \"3.0\"\nhex = \"0.4\"\nlog = \"0.4\"\noverlord = \"0.2\"\nparking_lot = \"0.11\"\nprost = \"0.6\"\nrlp = \"0.4\"\nserde = { version = \"1.0\", features = [\"derive\"] }\nserde_json = \"1.0\"\ntokio = { version = \"0.2\", features = [\"macros\", \"sync\", \"rt-core\", \"rt-threaded\"] }\nbytes = { version = \"0.5\", features = [\"serde\"] }\nlazy_static = \"1.4\"\n\ncommon-apm = { path = \"../../common/apm\" }\ncommon-crypto = { path = \"../../common/crypto\" }\ncommon-logger = { path = \"../../common/logger\" }\ncommon-merkle = { path = \"../../common/merkle\" }\ncore-mempool = { path = \"../../core/mempool\" }\ncore-storage = { path = \"../../core/storage\" }\ncore-network = { path = \"../../core/network\" }\nprotocol = { path = \"../../protocol\", package = \"muta-protocol\" }\n\n[dev-dependencies]\nbit-vec = \"0.6\"\nnum-traits = \"0.2\"\nrand = \"0.7\"\n\n[features]\ndefault = []\nrandom_leader = [\"overlord/random_leader\"]\n"
  },
  {
    "path": "core/consensus/src/adapter.rs",
    "content": "use std::boxed::Box;\nuse std::collections::HashMap;\nuse std::marker::PhantomData;\nuse std::sync::Arc;\nuse std::time::Instant;\n\nuse async_trait::async_trait;\nuse overlord::types::{Node, OverlordMsg, Vote, VoteType};\nuse overlord::{extract_voters, Crypto, OverlordHandler};\nuse parking_lot::RwLock;\nuse tokio::sync::mpsc::error::TrySendError;\nuse tokio::sync::mpsc::{channel, Receiver, Sender};\n\nuse common_apm::muta_apm;\nuse common_merkle::Merkle;\n\nuse core_network::{PeerId, PeerIdExt};\n\nuse protocol::traits::{\n    CommonConsensusAdapter, ConsensusAdapter, Context, ExecutorFactory, ExecutorParams,\n    ExecutorResp, Gossip, MemPool, MessageTarget, MixedTxHashes, Network, PeerTrust, Priority, Rpc,\n    ServiceMapping, Storage, SynchronizationAdapter, TrustFeedback,\n};\nuse protocol::types::{\n    Address, Block, BlockHeader, Bytes, Hash, Hex, MerkleRoot, Metadata, Proof, Receipt,\n    SignedTransaction, TransactionRequest, Validator,\n};\nuse protocol::{fixed_codec::FixedCodec, ProtocolResult};\n\nuse crate::consensus::gen_overlord_status;\nuse crate::fixed_types::{\n    FixedBlock, FixedHeight, FixedPill, FixedProof, FixedSignedTxs, PullTxsRequest,\n};\nuse crate::message::{\n    BROADCAST_HEIGHT, RPC_SYNC_PULL_BLOCK, RPC_SYNC_PULL_PROOF, RPC_SYNC_PULL_TXS,\n};\nuse crate::status::{ExecutedInfo, StatusAgent};\nuse crate::util::{convert_hex_to_bls_pubkeys, ExecuteInfo, OverlordCrypto};\nuse crate::BlockHeaderField::{PreviousBlockHash, ProofHash, Proposer};\nuse crate::BlockProofField::{BitMap, HashMismatch, HeightMismatch, Signature, WeightNotFound};\nuse crate::{BlockHeaderField, BlockProofField, ConsensusError};\n\npub struct OverlordConsensusAdapter<\n    EF: ExecutorFactory<DB, S, Mapping>,\n    M: MemPool,\n    N: Rpc + PeerTrust + Gossip + Network + 'static,\n    S: Storage,\n    DB: cita_trie::DB,\n    Mapping: ServiceMapping,\n> {\n    network:          Arc<N>,\n    mempool:          Arc<M>,\n    storage:          Arc<S>,\n    
trie_db:          Arc<DB>,\n    service_mapping:  Arc<Mapping>,\n    overlord_handler: RwLock<Option<OverlordHandler<FixedPill>>>,\n\n    exec_queue:  Sender<ExecuteInfo>,\n    exec_demons: Option<ExecDemons<S, DB, EF, Mapping>>,\n    crypto:      Arc<OverlordCrypto>,\n}\n\n#[async_trait]\nimpl<EF, M, N, S, DB, Mapping> ConsensusAdapter\n    for OverlordConsensusAdapter<EF, M, N, S, DB, Mapping>\nwhere\n    EF: ExecutorFactory<DB, S, Mapping>,\n    M: MemPool + 'static,\n    N: Rpc + PeerTrust + Gossip + Network + 'static,\n    S: Storage + 'static,\n    DB: cita_trie::DB + 'static,\n    Mapping: ServiceMapping + 'static,\n{\n    #[muta_apm::derive::tracing_span(kind = \"consensus.adapter\")]\n    async fn get_txs_from_mempool(\n        &self,\n        ctx: Context,\n        _height: u64,\n        cycle_limit: u64,\n        tx_num_limit: u64,\n    ) -> ProtocolResult<MixedTxHashes> {\n        self.mempool.package(ctx, cycle_limit, tx_num_limit).await\n    }\n\n    #[muta_apm::derive::tracing_span(kind = \"consensus.adapter\")]\n    async fn sync_txs(&self, ctx: Context, txs: Vec<Hash>) -> ProtocolResult<()> {\n        self.mempool.sync_propose_txs(ctx, txs).await\n    }\n\n    #[muta_apm::derive::tracing_span(kind = \"consensus.adapter\", logs = \"{'txs_len': 'txs.len()'}\")]\n    async fn get_full_txs(\n        &self,\n        ctx: Context,\n        txs: &[Hash],\n    ) -> ProtocolResult<Vec<SignedTransaction>> {\n        self.mempool.get_full_txs(ctx, None, txs).await\n    }\n\n    #[muta_apm::derive::tracing_span(kind = \"consensus.adapter\")]\n    async fn transmit(\n        &self,\n        ctx: Context,\n        msg: Vec<u8>,\n        end: &str,\n        target: MessageTarget,\n    ) -> ProtocolResult<()> {\n        match target {\n            MessageTarget::Broadcast => {\n                self.network\n                    .broadcast(ctx.clone(), end, msg, Priority::High)\n                    .await\n            }\n\n            
MessageTarget::Specified(pub_key) => {\n                let peer_id_bytes = PeerId::from_pubkey_bytes(pub_key)?.into_bytes_ext();\n\n                self.network\n                    .multicast(ctx, end, [peer_id_bytes], msg, Priority::High)\n                    .await\n            }\n        }\n    }\n\n    #[muta_apm::derive::tracing_span(kind = \"consensus.adapter\")]\n    async fn execute(\n        &self,\n        ctx: Context,\n        chain_id: Hash,\n        order_root: MerkleRoot,\n        height: u64,\n        cycles_price: u64,\n        proposer: Address,\n        block_hash: Hash,\n        signed_txs: Vec<SignedTransaction>,\n        cycles_limit: u64,\n        timestamp: u64,\n    ) -> ProtocolResult<()> {\n        let exec_info = ExecuteInfo {\n            ctx,\n            height,\n            chain_id,\n            cycles_price,\n            block_hash,\n            signed_txs,\n            order_root,\n            proposer,\n            cycles_limit,\n            timestamp,\n        };\n\n        let mut tx = self.exec_queue.clone();\n        tx.try_send(exec_info).map_err(|e| match e {\n            TrySendError::Closed(_) => panic!(\"exec queue dropped!\"),\n            _ => ConsensusError::ExecuteErr(e.to_string()),\n        })?;\n        Ok(())\n    }\n\n    #[muta_apm::derive::tracing_span(kind = \"consensus.adapter\")]\n    async fn get_last_validators(\n        &self,\n        ctx: Context,\n        height: u64,\n    ) -> ProtocolResult<Vec<Validator>> {\n        let header = self\n            .storage\n            .get_block_header(ctx, height)\n            .await?\n            .ok_or(ConsensusError::StorageItemNotFound)?;\n        Ok(header.validators)\n    }\n\n    /// Get the current height from storage.\n    #[muta_apm::derive::tracing_span(kind = \"consensus.adapter\")]\n    async fn get_current_height(&self, ctx: Context) -> ProtocolResult<u64> {\n        let header = self.storage.get_latest_block_header(ctx).await?;\n        
Ok(header.height)\n    }\n\n    #[muta_apm::derive::tracing_span(kind = \"consensus.adapter\")]\n    async fn pull_block(&self, ctx: Context, height: u64, end: &str) -> ProtocolResult<Block> {\n        log::debug!(\"consensus: send rpc pull block {}\", height);\n        let res = self\n            .network\n            .call::<FixedHeight, FixedBlock>(ctx, end, FixedHeight::new(height), Priority::High)\n            .await?;\n        Ok(res.inner)\n    }\n\n    #[muta_apm::derive::tracing_span(kind = \"consensus.adapter\", logs = \"{'txs_len': 'txs.len()'}\")]\n    async fn verify_txs(&self, ctx: Context, height: u64, txs: &[Hash]) -> ProtocolResult<()> {\n        if let Err(e) = self\n            .mempool\n            .ensure_order_txs(ctx.clone(), Some(height), txs)\n            .await\n        {\n            log::error!(\"verify_txs error {:?}\", e);\n            return Err(ConsensusError::VerifyTransaction(height).into());\n        }\n\n        Ok(())\n    }\n}\n\n#[async_trait]\nimpl<EF, M, N, S, DB, Mapping> SynchronizationAdapter\n    for OverlordConsensusAdapter<EF, M, N, S, DB, Mapping>\nwhere\n    EF: ExecutorFactory<DB, S, Mapping>,\n    M: MemPool + 'static,\n    N: Rpc + PeerTrust + Gossip + Network + 'static,\n    S: Storage + 'static,\n    DB: cita_trie::DB + 'static,\n    Mapping: ServiceMapping + 'static,\n{\n    #[muta_apm::derive::tracing_span(kind = \"consensus.adapter\")]\n    fn update_status(\n        &self,\n        ctx: Context,\n        height: u64,\n        consensus_interval: u64,\n        propose_ratio: u64,\n        prevote_ratio: u64,\n        precommit_ratio: u64,\n        brake_ratio: u64,\n        validators: Vec<Validator>,\n    ) -> ProtocolResult<()> {\n        self.overlord_handler\n            .read()\n            .as_ref()\n            .expect(\"Please set the overlord handle first\")\n            .send_msg(\n                ctx,\n                OverlordMsg::RichStatus(gen_overlord_status(\n                    height + 1,\n   
                 consensus_interval,\n                    propose_ratio,\n                    prevote_ratio,\n                    precommit_ratio,\n                    brake_ratio,\n                    validators,\n                )),\n            )\n            .map_err(|e| ConsensusError::OverlordErr(Box::new(e)))?;\n        Ok(())\n    }\n\n    #[muta_apm::derive::tracing_span(kind = \"consensus.adapter\", logs = \"{'txs_len': 'txs.len()'}\")]\n    fn sync_exec(\n        &self,\n        ctx: Context,\n        params: &ExecutorParams,\n        txs: &[SignedTransaction],\n    ) -> ProtocolResult<ExecutorResp> {\n        let mut executor = EF::from_root(\n            params.state_root.clone(),\n            Arc::clone(&self.trie_db),\n            Arc::clone(&self.storage),\n            Arc::clone(&self.service_mapping),\n        )?;\n        let inst = Instant::now();\n        let resp = executor.exec(ctx, params, txs)?;\n        common_apm::metrics::consensus::CONSENSUS_TIME_HISTOGRAM_VEC_STATIC\n            .exec\n            .observe(common_apm::metrics::duration_to_sec(inst.elapsed()));\n        Ok(resp)\n    }\n\n    /// Pull the block at the given height from other nodes.\n    #[muta_apm::derive::tracing_span(kind = \"consensus.adapter\")]\n    async fn get_block_from_remote(&self, ctx: Context, height: u64) -> ProtocolResult<Block> {\n        let res = self\n            .network\n            .call::<FixedHeight, FixedBlock>(\n                ctx,\n                RPC_SYNC_PULL_BLOCK,\n                FixedHeight::new(height),\n                Priority::High,\n            )\n            .await;\n        match res {\n            Ok(data) => {\n                common_apm::metrics::consensus::CONSENSUS_RESULT_COUNTER_VEC_STATIC\n                    .get_block_from_remote\n                    .success\n                    .inc();\n                Ok(data.inner)\n            }\n            Err(err) => {\n                
common_apm::metrics::consensus::CONSENSUS_RESULT_COUNTER_VEC_STATIC\n                    .get_block_from_remote\n                    .failure\n                    .inc();\n                Err(err)\n            }\n        }\n    }\n\n    /// Pull signed transactions corresponding to the given hashes from other\n    /// nodes.\n    #[muta_apm::derive::tracing_span(\n        kind = \"consensus.adapter\",\n        logs = \"{'txs_len': 'hashes.len()'}\"\n    )]\n    async fn get_txs_from_remote(\n        &self,\n        ctx: Context,\n        height: u64,\n        hashes: &[Hash],\n    ) -> ProtocolResult<Vec<SignedTransaction>> {\n        let res = self\n            .network\n            .call::<PullTxsRequest, FixedSignedTxs>(\n                ctx,\n                RPC_SYNC_PULL_TXS,\n                PullTxsRequest::new(height, hashes.to_vec()),\n                Priority::High,\n            )\n            .await?;\n        Ok(res.inner)\n    }\n\n    /// Pull the proof of a certain block from other nodes.\n    #[muta_apm::derive::tracing_span(kind = \"consensus.adapter\")]\n    async fn get_proof_from_remote(&self, ctx: Context, height: u64) -> ProtocolResult<Proof> {\n        let ret = self\n            .network\n            .call::<FixedHeight, FixedProof>(\n                ctx.clone(),\n                RPC_SYNC_PULL_PROOF,\n                FixedHeight::new(height),\n                Priority::High,\n            )\n            .await?;\n        Ok(ret.inner)\n    }\n}\n\n#[async_trait]\nimpl<EF, M, N, S, DB, Mapping> CommonConsensusAdapter\n    for OverlordConsensusAdapter<EF, M, N, S, DB, Mapping>\nwhere\n    EF: ExecutorFactory<DB, S, Mapping>,\n    M: MemPool + 'static,\n    N: Rpc + PeerTrust + Gossip + Network + 'static,\n    S: Storage + 'static,\n    DB: cita_trie::DB + 'static,\n    Mapping: ServiceMapping + 'static,\n{\n    /// Save a block to the database.\n    #[muta_apm::derive::tracing_span(\n        kind = \"consensus.adapter\",\n        logs = \"{'txs_len': 
'block.ordered_tx_hashes.len()'}\"\n    )]\n    async fn save_block(&self, ctx: Context, block: Block) -> ProtocolResult<()> {\n        self.storage.insert_block(ctx, block).await\n    }\n\n    #[muta_apm::derive::tracing_span(kind = \"consensus.adapter\")]\n    async fn save_proof(&self, ctx: Context, proof: Proof) -> ProtocolResult<()> {\n        self.storage.update_latest_proof(ctx, proof).await\n    }\n\n    /// Save some signed transactions to the database.\n    #[muta_apm::derive::tracing_span(\n        kind = \"consensus.adapter\",\n        logs = \"{'txs_len': 'signed_txs.len()'}\"\n    )]\n    async fn save_signed_txs(\n        &self,\n        ctx: Context,\n        block_height: u64,\n        signed_txs: Vec<SignedTransaction>,\n    ) -> ProtocolResult<()> {\n        self.storage\n            .insert_transactions(ctx, block_height, signed_txs)\n            .await\n    }\n\n    #[muta_apm::derive::tracing_span(\n        kind = \"consensus.adapter\",\n        logs = \"{'receipts_len': 'receipts.len()'}\"\n    )]\n    async fn save_receipts(\n        &self,\n        ctx: Context,\n        height: u64,\n        receipts: Vec<Receipt>,\n    ) -> ProtocolResult<()> {\n        self.storage.insert_receipts(ctx, height, receipts).await\n    }\n\n    /// Flush the given transactions in the mempool.\n    #[muta_apm::derive::tracing_span(\n        kind = \"consensus.adapter\",\n        logs = \"{'flush_txs_len': 'ordered_tx_hashes.len()'}\"\n    )]\n    async fn flush_mempool(&self, ctx: Context, ordered_tx_hashes: &[Hash]) -> ProtocolResult<()> {\n        self.mempool.flush(ctx, ordered_tx_hashes).await\n    }\n\n    /// Get a block corresponding to the given height.\n    #[muta_apm::derive::tracing_span(kind = \"consensus.adapter\")]\n    async fn get_block_by_height(&self, ctx: Context, height: u64) -> ProtocolResult<Block> {\n        self.storage\n            .get_block(ctx, height)\n            .await?\n            .ok_or_else(|| 
ConsensusError::StorageItemNotFound.into())\n    }\n\n    async fn get_block_header_by_height(\n        &self,\n        ctx: Context,\n        height: u64,\n    ) -> ProtocolResult<BlockHeader> {\n        self.storage\n            .get_block_header(ctx, height)\n            .await?\n            .ok_or_else(|| ConsensusError::StorageItemNotFound.into())\n    }\n\n    /// Get the current height from storage.\n    #[muta_apm::derive::tracing_span(kind = \"consensus.adapter\")]\n    async fn get_current_height(&self, ctx: Context) -> ProtocolResult<u64> {\n        let header = self.storage.get_latest_block_header(ctx).await?;\n        Ok(header.height)\n    }\n\n    #[muta_apm::derive::tracing_span(\n        kind = \"consensus.adapter\",\n        logs = \"{'txs_len': 'tx_hashes.len()'}\"\n    )]\n    async fn get_txs_from_storage(\n        &self,\n        ctx: Context,\n        tx_hashes: &[Hash],\n    ) -> ProtocolResult<Vec<SignedTransaction>> {\n        let futs = tx_hashes\n            .iter()\n            .map(|tx_hash| self.storage.get_transaction_by_hash(ctx.clone(), tx_hash))\n            .collect::<Vec<_>>();\n        futures::future::try_join_all(futs).await.map(|txs| {\n            txs.into_iter()\n                .filter_map(|opt_tx| opt_tx)\n                .collect::<Vec<_>>()\n        })\n    }\n\n    #[muta_apm::derive::tracing_span(kind = \"consensus.adapter\")]\n    async fn broadcast_height(&self, ctx: Context, height: u64) -> ProtocolResult<()> {\n        self.network\n            .broadcast(ctx.clone(), BROADCAST_HEIGHT, height, Priority::High)\n            .await\n    }\n\n    /// Get metadata at the given height.\n    #[muta_apm::derive::tracing_span(kind = \"consensus.adapter\")]\n    fn get_metadata(\n        &self,\n        ctx: Context,\n        state_root: MerkleRoot,\n        height: u64,\n        timestamp: u64,\n        proposer: Address,\n    ) -> ProtocolResult<Metadata> {\n        let executor = EF::from_root(\n            
state_root.clone(),\n            Arc::clone(&self.trie_db),\n            Arc::clone(&self.storage),\n            Arc::clone(&self.service_mapping),\n        )?;\n\n        let caller = Address::from_hash(Hash::digest(protocol::address_hrp().as_str()))?;\n\n        let params = ExecutorParams {\n            state_root,\n            height,\n            timestamp,\n            cycles_limit: u64::max_value(),\n            proposer,\n        };\n        let exec_resp = executor.read(&params, &caller, 1, &TransactionRequest {\n            service_name: \"metadata\".to_string(),\n            method:       \"get_metadata\".to_string(),\n            payload:      \"\".to_string(),\n        })?;\n\n        Ok(serde_json::from_str(&exec_resp.succeed_data).expect(\"Decode metadata failed!\"))\n    }\n\n    fn tag_consensus(&self, ctx: Context, pub_keys: Vec<Bytes>) -> ProtocolResult<()> {\n        let peer_ids_bytes = pub_keys\n            .iter()\n            .map(|pk| PeerId::from_pubkey_bytes(pk).map(PeerIdExt::into_bytes_ext))\n            .collect::<Result<_, _>>()?;\n\n        self.network.tag_consensus(ctx, peer_ids_bytes)\n    }\n\n    #[muta_apm::derive::tracing_span(kind = \"consensus.adapter\")]\n    fn report_bad(&self, ctx: Context, feedback: TrustFeedback) {\n        self.network.report(ctx, feedback);\n    }\n\n    fn set_args(&self, _context: Context, timeout_gap: u64, cycles_limit: u64, max_tx_size: u64) {\n        self.mempool\n            .set_args(timeout_gap, cycles_limit, max_tx_size);\n    }\n\n    /// Verify every field in the header except the proof and the roots.\n    #[muta_apm::derive::tracing_span(kind = \"consensus.adapter\")]\n    async fn verify_block_header(&self, ctx: Context, block: &Block) -> ProtocolResult<()> {\n        let previous_block_header = self\n            .get_block_header_by_height(ctx.clone(), block.header.height - 1)\n            .await\n            .map_err(|e| {\n                log::error!(\n                    \"[consensus] 
verify_block_header, previous_block_header {} fails\",\n                    block.header.height - 1,\n                );\n                e\n            })?;\n\n        let previous_block_hash = Hash::digest(previous_block_header.encode_fixed()?);\n\n        if previous_block_hash != block.header.prev_hash {\n            log::error!(\n                \"[consensus] verify_block_header, previous_block_hash: {:?}, block.header.prev_hash: {:?}\",\n                previous_block_hash,\n                block.header.prev_hash\n            );\n            return Err(\n                ConsensusError::VerifyBlockHeader(block.header.height, PreviousBlockHash).into(),\n            );\n        }\n\n        // the proofs of blocks 0 and 1 are agreed upon by the community, so skip them here\n        if block.header.height > 1u64 && block.header.prev_hash != block.header.proof.block_hash {\n            log::error!(\n                \"[consensus] verify_block_header, verifying_block header : {:?}\",\n                block.header\n            );\n            return Err(ConsensusError::VerifyBlockHeader(block.header.height, ProofHash).into());\n        }\n\n        // verify proposer and validators\n        let previous_metadata = self.get_metadata(\n            ctx,\n            previous_block_header.state_root.clone(),\n            previous_block_header.height,\n            previous_block_header.timestamp,\n            previous_block_header.proposer,\n        )?;\n\n        let authority_map = previous_metadata\n            .verifier_list\n            .iter()\n            .map(|v| {\n                let address = v.pub_key.decode();\n                let node = Node {\n                    address:        v.pub_key.decode(),\n                    propose_weight: v.propose_weight,\n                    vote_weight:    v.vote_weight,\n                };\n                (address, node)\n            })\n            .collect::<HashMap<_, _>>();\n\n        // TODO: useless check\n        // check proposer\n        
if block.header.height != 0\n            && !previous_metadata\n                .verifier_list\n                .iter()\n                .any(|v| v.address == block.header.proposer)\n        {\n            log::error!(\n                \"[consensus] verify_block_header, block.header.proposer: {:?}, authority_map: {:?}\",\n                block.header.proposer,\n                authority_map\n            );\n            return Err(ConsensusError::VerifyBlockHeader(block.header.height, Proposer).into());\n        }\n\n        // check validators\n        for validator in block.header.validators.iter() {\n            let validator_address = Address::from_pubkey_bytes(validator.pub_key.clone());\n\n            if !authority_map.contains_key(&validator.pub_key) {\n                log::error!(\n                    \"[consensus] verify_block_header, validator.address: {:?}, authority_map: {:?}\",\n                    validator_address,\n                    authority_map\n                );\n                return Err(ConsensusError::VerifyBlockHeader(\n                    block.header.height,\n                    BlockHeaderField::Validator,\n                )\n                .into());\n            } else {\n                let node = authority_map.get(&validator.pub_key).unwrap();\n\n                // compare each weight against its counterpart in the authority map\n                if node.vote_weight != validator.vote_weight\n                    || node.propose_weight != validator.propose_weight\n                {\n                    log::error!(\n                        \"[consensus] verify_block_header, validator.address: {:?}, authority_map: {:?}\",\n                        validator_address,\n                        authority_map\n                    );\n                    return Err(ConsensusError::VerifyBlockHeader(\n                        block.header.height,\n                        BlockHeaderField::Weight,\n                    )\n                    .into());\n                }\n            }\n        }\n\n        Ok(())\n    }\n\n    
#[muta_apm::derive::tracing_span(kind = \"consensus.adapter\")]\n    async fn verify_proof(\n        &self,\n        ctx: Context,\n        block_header: &BlockHeader,\n        proof: &Proof,\n    ) -> ProtocolResult<()> {\n        // block 0 has no proof; it is agreed upon by the community, not produced by the chain\n\n        if block_header.height == 0 {\n            return Ok(());\n        };\n\n        if block_header.height != proof.height {\n            log::error!(\n                \"[consensus] verify_proof, block_header.height: {}, proof.height: {}\",\n                block_header.height,\n                proof.height\n            );\n            return Err(ConsensusError::VerifyProof(\n                block_header.height,\n                HeightMismatch(block_header.height, proof.height),\n            )\n            .into());\n        }\n\n        let blockhash = Hash::digest(block_header.encode_fixed()?);\n\n        if blockhash != proof.block_hash {\n            log::error!(\n                \"[consensus] verify_proof, blockhash: {:?}, proof.block_hash: {:?}\",\n                blockhash,\n                proof.block_hash\n            );\n            return Err(ConsensusError::VerifyProof(block_header.height, HashMismatch).into());\n        }\n\n        let previous_block_header = self\n            .get_block_header_by_height(ctx.clone(), block_header.height - 1)\n            .await\n            .map_err(|e| {\n                log::error!(\n                    \"[consensus] verify_proof, previous_block {} fails\",\n                    block_header.height - 1,\n                );\n                e\n            })?;\n        // the auth_list for the target block comes from the previous height\n        let metadata = self.get_metadata(\n            ctx.clone(),\n            previous_block_header.state_root.clone(),\n            previous_block_header.height,\n            previous_block_header.timestamp,\n            previous_block_header.proposer,\n        )?;\n\n    
    let mut authority_list = metadata\n            .verifier_list\n            .iter()\n            .map(|v| Node {\n                address:        v.pub_key.decode(),\n                propose_weight: v.propose_weight,\n                vote_weight:    v.vote_weight,\n            })\n            .collect::<Vec<Node>>();\n\n        let signed_voters = extract_voters(&mut authority_list, &proof.bitmap).map_err(|_| {\n            log::error!(\"[consensus] extract_voters fails, bitmap error\");\n            ConsensusError::VerifyProof(block_header.height, BitMap)\n        })?;\n\n        let vote = Vote {\n            height:     proof.height,\n            round:      proof.round,\n            vote_type:  VoteType::Precommit,\n            block_hash: proof.block_hash.as_bytes(),\n        };\n\n        let weight_map = authority_list\n            .iter()\n            .map(|node| (node.address.clone(), node.vote_weight))\n            .collect::<HashMap<overlord::types::Address, u32>>();\n        self.verify_proof_weight(\n            ctx.clone(),\n            block_header.height,\n            weight_map,\n            signed_voters.clone(),\n        )?;\n\n        let vote_hash = self.crypto.hash(Bytes::from(rlp::encode(&vote)));\n        let hex_pubkeys = metadata\n            .verifier_list\n            .iter()\n            .filter_map(|v| {\n                if signed_voters.contains(&v.pub_key.decode()) {\n                    Some(v.bls_pub_key.clone())\n                } else {\n                    None\n                }\n            })\n            .collect::<Vec<_>>();\n\n        self.verify_proof_signature(\n            ctx.clone(),\n            block_header.height,\n            vote_hash.clone(),\n            proof.signature.clone(),\n            hex_pubkeys,\n        ).map_err(|e| {\n            log::error!(\"[consensus] verify_proof_signature error, height {}, vote: {:?}, vote_hash:{:?}, sig:{:?}, signed_voter:{:?}\",\n            block_header.height,\n         
   vote,\n            vote_hash,\n            proof.signature,\n            signed_voters,\n            );\n            e\n        })?;\n        Ok(())\n    }\n\n    #[muta_apm::derive::tracing_span(kind = \"consensus.adapter\")]\n    fn verify_proof_signature(\n        &self,\n        ctx: Context,\n        block_height: u64,\n        vote_hash: Bytes,\n        aggregated_signature_bytes: Bytes,\n        vote_keys: Vec<Hex>,\n    ) -> ProtocolResult<()> {\n        let mut pub_keys = Vec::new();\n        for hex in vote_keys.into_iter() {\n            pub_keys.push(convert_hex_to_bls_pubkeys(hex)?)\n        }\n\n        self.crypto\n            .inner_verify_aggregated_signature(vote_hash, pub_keys, aggregated_signature_bytes)\n            .map_err(|e| {\n                log::error!(\"[consensus] verify_proof_signature error: {}\", e);\n                ConsensusError::VerifyProof(block_height, Signature).into()\n            })\n    }\n\n    #[muta_apm::derive::tracing_span(kind = \"consensus.adapter\")]\n    fn verify_proof_weight(\n        &self,\n        ctx: Context,\n        block_height: u64,\n        weight_map: HashMap<Bytes, u32>,\n        signed_voters: Vec<Bytes>,\n    ) -> ProtocolResult<()> {\n        let total_validator_weight: u64 = weight_map.iter().map(|pair| u64::from(*pair.1)).sum();\n\n        let mut accumulator = 0u64;\n        for signed_voter_address in signed_voters {\n            if weight_map.contains_key(signed_voter_address.as_ref()) {\n                let weight = weight_map\n                    .get(signed_voter_address.as_ref())\n                    .ok_or(ConsensusError::VerifyProof(block_height, WeightNotFound))\n                    .map_err(|e| {\n                        log::error!(\n                            \"[consensus] verify_proof_weight,signed_voter_address: {:?}\",\n                            signed_voter_address\n                        );\n                        e\n                    })?;\n                accumulator 
+= u64::from(*(weight));\n            } else {\n                log::error!(\n                    \"[consensus] verify_proof_weight, weight not found, signed_voter_address: {:?}\",\n                    signed_voter_address\n                );\n                return Err(\n                    ConsensusError::VerifyProof(block_height, BlockProofField::Validator).into(),\n                );\n            }\n        }\n\n        if 3 * accumulator <= 2 * total_validator_weight {\n            log::error!(\n                \"[consensus] verify_proof_weight, accumulator: {}, total: {}\",\n                accumulator,\n                total_validator_weight\n            );\n\n            return Err(ConsensusError::VerifyProof(block_height, BlockProofField::Weight).into());\n        }\n        Ok(())\n    }\n}\n\nimpl<EF, M, N, S, DB, Mapping> OverlordConsensusAdapter<EF, M, N, S, DB, Mapping>\nwhere\n    EF: ExecutorFactory<DB, S, Mapping>,\n    M: MemPool + 'static,\n    N: Rpc + PeerTrust + Gossip + Network + 'static,\n    S: Storage + 'static,\n    DB: cita_trie::DB + 'static,\n    Mapping: ServiceMapping + 'static,\n{\n    pub fn new(\n        network: Arc<N>,\n        mempool: Arc<M>,\n        storage: Arc<S>,\n        trie_db: Arc<DB>,\n        service_mapping: Arc<Mapping>,\n        status_agent: StatusAgent,\n        crypto: Arc<OverlordCrypto>,\n        gap: usize,\n    ) -> ProtocolResult<Self> {\n        let (exec_queue, rx) = channel(gap);\n        let exec_demons = Some(ExecDemons::new(\n            Arc::clone(&storage),\n            Arc::clone(&trie_db),\n            Arc::clone(&service_mapping),\n            rx,\n            status_agent,\n        ));\n\n        let adapter = OverlordConsensusAdapter {\n            network,\n            mempool,\n            storage,\n            trie_db,\n            service_mapping,\n            overlord_handler: RwLock::new(None),\n            exec_queue,\n            exec_demons,\n            crypto,\n        };\n\n       
 Ok(adapter)\n    }\n\n    pub fn take_exec_demon(&mut self) -> ExecDemons<S, DB, EF, Mapping> {\n        assert!(self.exec_demons.is_some());\n        self.exec_demons.take().unwrap()\n    }\n\n    pub fn set_overlord_handler(&self, handler: OverlordHandler<FixedPill>) {\n        *self.overlord_handler.write() = Some(handler)\n    }\n}\n\n#[derive(Debug)]\npub struct ExecDemons<S, DB, EF, Mapping> {\n    storage:         Arc<S>,\n    trie_db:         Arc<DB>,\n    service_mapping: Arc<Mapping>,\n\n    pin_ef: PhantomData<EF>,\n    queue:  Receiver<ExecuteInfo>,\n    status: StatusAgent,\n}\n\nimpl<S, DB, EF, Mapping> ExecDemons<S, DB, EF, Mapping>\nwhere\n    S: Storage,\n    DB: cita_trie::DB,\n    EF: ExecutorFactory<DB, S, Mapping>,\n    Mapping: ServiceMapping,\n{\n    fn new(\n        storage: Arc<S>,\n        trie_db: Arc<DB>,\n        service_mapping: Arc<Mapping>,\n        rx: Receiver<ExecuteInfo>,\n        status_agent: StatusAgent,\n    ) -> Self {\n        ExecDemons {\n            storage,\n            trie_db,\n            service_mapping,\n            queue: rx,\n            pin_ef: PhantomData,\n            status: status_agent,\n        }\n    }\n\n    pub async fn run(mut self) {\n        loop {\n            let inst = Instant::now();\n            if let Err(e) = self.process().await {\n                log::error!(\"muta-consensus: executor demons error {:?}\", e);\n            }\n            common_apm::metrics::consensus::CONSENSUS_TIME_HISTOGRAM_VEC_STATIC\n                .block\n                .observe(common_apm::metrics::duration_to_sec(inst.elapsed()));\n        }\n    }\n\n    async fn process(&mut self) -> ProtocolResult<()> {\n        if let Some(info) = self.queue.recv().await {\n            self.exec(info.ctx.clone(), info).await\n        } else {\n            Err(ConsensusError::Other(\"Queue disconnect\".to_string()).into())\n        }\n    }\n\n    #[muta_apm::derive::tracing_span(\n        kind = \"consensus.adapter\",\n        
logs = \"{'height': 'info.height', 'txs_len': 'info.signed_txs.len()'}\"\n    )]\n    async fn exec(&self, ctx: Context, info: ExecuteInfo) -> ProtocolResult<()> {\n        let height = info.height;\n        let txs = info.signed_txs;\n        let order_root = info.order_root;\n        let state_root = self.status.to_inner().get_latest_state_root();\n\n        let now = Instant::now();\n        let mut executor = EF::from_root(\n            state_root.clone(),\n            Arc::clone(&self.trie_db),\n            Arc::clone(&self.storage),\n            Arc::clone(&self.service_mapping),\n        )?;\n        let exec_params = ExecutorParams {\n            state_root: state_root.clone(),\n            height,\n            timestamp: info.timestamp,\n            cycles_limit: info.cycles_limit,\n            proposer: info.proposer,\n        };\n        let resp = executor.exec(ctx.clone(), &exec_params, &txs)?;\n        common_apm::metrics::consensus::CONSENSUS_TIME_HISTOGRAM_VEC_STATIC\n            .exec\n            .observe(common_apm::metrics::duration_to_sec(now.elapsed()));\n        log::info!(\n            \"[consensus-adapter]: exec transactions cost {:?} transactions len {:?}\",\n            now.elapsed(),\n            txs.len(),\n        );\n\n        let now = Instant::now();\n        self.save_receipts(info.ctx.clone(), height, resp.receipts.clone())\n            .await?;\n        log::info!(\n            \"[consensus-adapter]: save receipts cost {:?} receipts len {:?}\",\n            now.elapsed(),\n            resp.receipts.len(),\n        );\n        self.status.update_by_executed(gen_executed_info(\n            info.ctx.clone(),\n            resp,\n            height,\n            order_root,\n        ));\n\n        Ok(())\n    }\n\n    #[muta_apm::derive::tracing_span(\n        kind = \"consensus.adapter\",\n        logs = \"{'receipts_len': 'receipts.len()'}\"\n    )]\n    async fn save_receipts(\n        &self,\n        ctx: Context,\n        height: 
u64,\n        receipts: Vec<Receipt>,\n    ) -> ProtocolResult<()> {\n        self.storage.insert_receipts(ctx, height, receipts).await\n    }\n}\n\nfn gen_executed_info(\n    ctx: Context,\n    exec_resp: ExecutorResp,\n    height: u64,\n    order_root: MerkleRoot,\n) -> ExecutedInfo {\n    let cycles = exec_resp.all_cycles_used;\n\n    let receipt = Merkle::from_hashes(\n        exec_resp\n            .receipts\n            .iter()\n            .map(|r| Hash::digest(r.to_owned().encode_fixed().unwrap()))\n            .collect::<Vec<_>>(),\n    )\n    .get_root_hash()\n    .unwrap_or_else(Hash::from_empty);\n\n    ExecutedInfo {\n        ctx,\n        exec_height: height,\n        cycles_used: cycles,\n        receipt_root: receipt,\n        confirm_root: order_root,\n        state_root: exec_resp.state_root,\n    }\n}\n"
  },
  {
    "path": "core/consensus/src/consensus.rs",
    "content": "use std::sync::Arc;\n\nuse async_trait::async_trait;\nuse creep::Context;\nuse futures::lock::Mutex;\nuse overlord::types::{\n    AggregatedVote, Node, OverlordMsg, SignedChoke, SignedProposal, SignedVote, Status,\n};\nuse overlord::{DurationConfig, Overlord, OverlordHandler};\n\nuse common_apm::muta_apm;\n\nuse protocol::traits::{Consensus, ConsensusAdapter, NodeInfo};\nuse protocol::types::Validator;\nuse protocol::ProtocolResult;\n\nuse crate::engine::ConsensusEngine;\nuse crate::fixed_types::FixedPill;\nuse crate::status::StatusAgent;\nuse crate::util::OverlordCrypto;\nuse crate::wal::{ConsensusWal, SignedTxsWAL};\nuse crate::{ConsensusError, ConsensusType};\n\n/// Provide consensus\npub struct OverlordConsensus<Adapter: ConsensusAdapter + 'static> {\n    /// Overlord consensus protocol instance.\n    inner: Arc<\n        Overlord<FixedPill, ConsensusEngine<Adapter>, OverlordCrypto, ConsensusEngine<Adapter>>,\n    >,\n    /// An overlord consensus protocol handler.\n    handler: OverlordHandler<FixedPill>,\n}\n\n#[async_trait]\nimpl<Adapter: ConsensusAdapter + 'static> Consensus for OverlordConsensus<Adapter> {\n    #[muta_apm::derive::tracing_span(kind = \"consensus\")]\n    async fn set_proposal(&self, ctx: Context, proposal: Vec<u8>) -> ProtocolResult<()> {\n        let signed_proposal: SignedProposal<FixedPill> = rlp::decode(&proposal)\n            .map_err(|_| ConsensusError::DecodeErr(ConsensusType::SignedProposal))?;\n\n        let msg = OverlordMsg::SignedProposal(signed_proposal);\n        tracing_overlord_message(ctx.clone(), &msg);\n\n        self.handler\n            .send_msg(ctx, msg)\n            .expect(\"Overlord handler disconnect\");\n        Ok(())\n    }\n\n    async fn set_vote(&self, ctx: Context, vote: Vec<u8>) -> ProtocolResult<()> {\n        let ctx = match muta_apm::MUTA_TRACER.span(\"consensus.set_vote\", vec![\n            muta_apm::rustracing::tag::Tag::new(\"kind\", \"consensus\"),\n        ]) {\n            
Some(mut span) => {\n                span.log(|log| {\n                    log.time(std::time::SystemTime::now());\n                });\n                ctx.with_value(\"parent_span_ctx\", span.context().cloned())\n            }\n            None => ctx,\n        };\n\n        let signed_vote: SignedVote =\n            rlp::decode(&vote).map_err(|_| ConsensusError::DecodeErr(ConsensusType::SignedVote))?;\n\n        let msg = OverlordMsg::SignedVote(signed_vote);\n        tracing_overlord_message(ctx.clone(), &msg);\n\n        self.handler\n            .send_msg(ctx, msg)\n            .expect(\"Overlord handler disconnect\");\n        Ok(())\n    }\n\n    #[muta_apm::derive::tracing_span(kind = \"consensus\")]\n    async fn set_qc(&self, ctx: Context, qc: Vec<u8>) -> ProtocolResult<()> {\n        let aggregated_vote: AggregatedVote = rlp::decode(&qc)\n            .map_err(|_| ConsensusError::DecodeErr(ConsensusType::AggregateVote))?;\n\n        let msg = OverlordMsg::AggregatedVote(aggregated_vote);\n        tracing_overlord_message(ctx.clone(), &msg);\n\n        self.handler\n            .send_msg(ctx, msg)\n            .expect(\"Overlord handler disconnect\");\n        Ok(())\n    }\n\n    #[muta_apm::derive::tracing_span(kind = \"consensus\")]\n    async fn set_choke(&self, ctx: Context, choke: Vec<u8>) -> ProtocolResult<()> {\n        let signed_choke: SignedChoke = rlp::decode(&choke)\n            .map_err(|_| ConsensusError::DecodeErr(ConsensusType::SignedChoke))?;\n\n        let msg = OverlordMsg::SignedChoke(signed_choke);\n        tracing_overlord_message(ctx.clone(), &msg);\n\n        self.handler\n            .send_msg(ctx, msg)\n            .expect(\"Overlord handler disconnect\");\n        Ok(())\n    }\n}\n\nimpl<Adapter: ConsensusAdapter + 'static> OverlordConsensus<Adapter> {\n    pub fn new(\n        status_agent: StatusAgent,\n        node_info: NodeInfo,\n        crypto: Arc<OverlordCrypto>,\n        txs_wal: Arc<SignedTxsWAL>,\n        adapter: 
Arc<Adapter>,\n        lock: Arc<Mutex<()>>,\n        consensus_wal: Arc<ConsensusWal>,\n    ) -> Self {\n        let engine = Arc::new(ConsensusEngine::new(\n            status_agent.clone(),\n            node_info.clone(),\n            txs_wal,\n            Arc::clone(&adapter),\n            Arc::clone(&crypto),\n            lock,\n            consensus_wal,\n        ));\n\n        let overlord = Overlord::new(node_info.self_pub_key, Arc::clone(&engine), crypto, engine);\n        let overlord_handler = overlord.get_handler();\n        let status = status_agent.to_inner();\n\n        if status.latest_committed_height == 0 {\n            overlord_handler\n                .send_msg(\n                    Context::new(),\n                    OverlordMsg::RichStatus(gen_overlord_status(\n                        status.latest_committed_height + 1,\n                        status.consensus_interval,\n                        status.propose_ratio,\n                        status.prevote_ratio,\n                        status.precommit_ratio,\n                        status.brake_ratio,\n                        status.validators,\n                    )),\n                )\n                .unwrap();\n        }\n\n        Self {\n            inner:   Arc::new(overlord),\n            handler: overlord_handler,\n        }\n    }\n\n    pub fn get_overlord_handler(&self) -> OverlordHandler<FixedPill> {\n        self.handler.clone()\n    }\n\n    pub async fn run(\n        &self,\n        init_height: u64,\n        interval: u64,\n        authority_list: Vec<Node>,\n        timer_config: Option<DurationConfig>,\n    ) -> ProtocolResult<()> {\n        self.inner\n            .run(init_height, interval, authority_list, timer_config)\n            .await\n            .map_err(|e| ConsensusError::OverlordErr(Box::new(e)))?;\n\n        Ok(())\n    }\n}\n\npub fn gen_overlord_status(\n    height: u64,\n    interval: u64,\n    propose_ratio: u64,\n    prevote_ratio: u64,\n    
precommit_ratio: u64,\n    brake_ratio: u64,\n    validators: Vec<Validator>,\n) -> Status {\n    let mut authority_list = validators\n        .into_iter()\n        .map(|v| Node {\n            address:        v.pub_key.clone(),\n            propose_weight: v.propose_weight,\n            vote_weight:    v.vote_weight,\n        })\n        .collect::<Vec<_>>();\n\n    authority_list.sort();\n\n    Status {\n        height,\n        interval: Some(interval),\n        timer_config: Some(DurationConfig {\n            propose_ratio,\n            prevote_ratio,\n            precommit_ratio,\n            brake_ratio,\n        }),\n        authority_list,\n    }\n}\n\ntrait OverlordMsgExt {\n    fn get_height(&self) -> String;\n    fn get_round(&self) -> String;\n}\n\nimpl<T: overlord::Codec> OverlordMsgExt for OverlordMsg<T> {\n    fn get_height(&self) -> String {\n        match self {\n            OverlordMsg::SignedProposal(sp) => sp.proposal.height.to_string(),\n            OverlordMsg::SignedVote(sv) => sv.get_height().to_string(),\n            OverlordMsg::AggregatedVote(av) => av.get_height().to_string(),\n            OverlordMsg::RichStatus(s) => s.height.to_string(),\n            OverlordMsg::SignedChoke(sc) => sc.choke.height.to_string(),\n            _ => \"\".to_owned(),\n        }\n    }\n\n    fn get_round(&self) -> String {\n        match self {\n            OverlordMsg::SignedProposal(sp) => sp.proposal.round.to_string(),\n            OverlordMsg::SignedVote(sv) => sv.get_round().to_string(),\n            OverlordMsg::AggregatedVote(av) => av.get_round().to_string(),\n            OverlordMsg::SignedChoke(sc) => sc.choke.round.to_string(),\n            _ => \"\".to_owned(),\n        }\n    }\n}\n\n#[muta_apm::derive::tracing_span(\n    kind = \"consensus\",\n    logs = \"{\n    'height': 'msg.get_height()',\n    'round': 'msg.get_round()'\n}\"\n)]\npub fn tracing_overlord_message<T: overlord::Codec>(ctx: Context, msg: &OverlordMsg<T>) {\n    let _ = 
msg;\n}\n"
  },
  {
    "path": "core/consensus/src/engine.rs",
    "content": "use std::collections::{HashMap, HashSet};\nuse std::convert::TryFrom;\nuse std::error::Error;\nuse std::sync::Arc;\nuse std::time::{Duration, Instant};\n\nuse async_trait::async_trait;\nuse futures::lock::Mutex;\nuse futures_timer::Delay;\nuse json::JsonValue;\nuse log::{error, info, warn};\nuse overlord::error::ConsensusError as OverlordError;\nuse overlord::types::{Commit, Node, OverlordMsg, Status, ViewChangeReason};\nuse overlord::{Consensus as Engine, DurationConfig, Wal};\nuse parking_lot::RwLock;\nuse rlp::Encodable;\n\nuse common_apm::muta_apm;\nuse common_crypto::BlsPublicKey;\nuse common_logger::{json, log};\nuse common_merkle::Merkle;\n\nuse protocol::fixed_codec::FixedCodec;\nuse protocol::traits::{ConsensusAdapter, Context, MessageTarget, NodeInfo, TrustFeedback};\nuse protocol::types::{\n    Address, Block, BlockHeader, Hash, MerkleRoot, Metadata, Pill, Proof, SignedTransaction,\n    Validator,\n};\nuse protocol::{Bytes, ProtocolError, ProtocolResult};\n\nuse crate::fixed_types::FixedPill;\nuse crate::message::{\n    END_GOSSIP_AGGREGATED_VOTE, END_GOSSIP_SIGNED_CHOKE, END_GOSSIP_SIGNED_PROPOSAL,\n    END_GOSSIP_SIGNED_VOTE,\n};\nuse crate::status::StatusAgent;\nuse crate::util::{check_list_roots, digest_signed_transactions, time_now, OverlordCrypto};\nuse crate::wal::{ConsensusWal, SignedTxsWAL};\nuse crate::ConsensusError;\n\nconst RETRY_COMMIT_INTERVAL: u64 = 1000; // 1s\nconst RETRY_CHECK_ROOT_LIMIT: u8 = 15;\nconst RETRY_CHECK_ROOT_INTERVAL: u64 = 100; // 100ms\n\n/// validator is for create new block, and authority is for build overlord\n/// status.\npub struct ConsensusEngine<Adapter> {\n    status_agent:   StatusAgent,\n    node_info:      NodeInfo,\n    exemption_hash: RwLock<HashSet<Bytes>>,\n\n    adapter: Arc<Adapter>,\n    txs_wal: Arc<SignedTxsWAL>,\n    crypto:  Arc<OverlordCrypto>,\n    lock:    Arc<Mutex<()>>,\n\n    last_commit_time:             RwLock<u64>,\n    consensus_wal:                Arc<ConsensusWal>,\n    
last_check_block_fail_reason: RwLock<String>,\n}\n\n#[async_trait]\nimpl<Adapter: ConsensusAdapter + 'static> Engine<FixedPill> for ConsensusEngine<Adapter> {\n    #[muta_apm::derive::tracing_span(\n        kind = \"consensus.engine\",\n        logs = \"{'next_height': 'next_height'}\"\n    )]\n    async fn get_block(\n        &self,\n        ctx: Context,\n        next_height: u64,\n    ) -> Result<(FixedPill, Bytes), Box<dyn Error + Send>> {\n        let current_consensus_status = self.status_agent.to_inner();\n\n        if current_consensus_status.latest_committed_height\n            != current_consensus_status.current_proof.height\n        {\n            error!(\"[consensus] get_block for {}, error, current_consensus_status.current_height {} != current_consensus_status.current_proof.height, proof :{:?}\",\n            current_consensus_status.latest_committed_height,\n             current_consensus_status.current_proof.height,\n            current_consensus_status.current_proof)\n        }\n\n        let (ordered_tx_hashes, propose_hashes) = self\n            .adapter\n            .get_txs_from_mempool(\n                ctx.clone(),\n                next_height,\n                current_consensus_status.cycles_limit,\n                current_consensus_status.tx_num_limit,\n            )\n            .await?\n            .clap();\n        let signed_txs = self\n            .adapter\n            .get_full_txs(ctx.clone(), &ordered_tx_hashes)\n            .await?;\n        let order_signed_transactions_hash = digest_signed_transactions(&signed_txs)?;\n\n        if current_consensus_status.latest_committed_height != next_height - 1 {\n            return Err(ProtocolError::from(ConsensusError::MissingBlockHeader(\n                current_consensus_status.latest_committed_height,\n            ))\n            .into());\n        }\n\n        let order_root = Merkle::from_hashes(ordered_tx_hashes.clone()).get_root_hash();\n        let state_root = 
current_consensus_status.get_latest_state_root();\n\n        let header = BlockHeader {\n            chain_id: self.node_info.chain_id.clone(),\n            prev_hash: current_consensus_status.current_hash,\n            height: next_height,\n            exec_height: current_consensus_status.exec_height,\n            timestamp: time_now(),\n            order_root: order_root.unwrap_or_else(Hash::from_empty),\n            order_signed_transactions_hash,\n            confirm_root: current_consensus_status.list_confirm_root,\n            state_root,\n            receipt_root: current_consensus_status.list_receipt_root.clone(),\n            cycles_used: current_consensus_status.list_cycles_used,\n            proposer: self.node_info.self_address.clone(),\n            proof: current_consensus_status.current_proof.clone(),\n            validator_version: 0u64,\n            validators: current_consensus_status.validators.clone(),\n        };\n\n        if header.height != header.proof.height + 1 {\n            error!(\n                \"[consensus] get_block for {}, proof error, proof height {} mismatch\",\n                header.height, header.proof.height,\n            );\n        }\n\n        let block = Block {\n            header,\n            ordered_tx_hashes,\n        };\n\n        let pill = Pill {\n            block,\n            propose_hashes,\n        };\n        let fixed_pill = FixedPill {\n            inner: pill.clone(),\n        };\n        let hash = Hash::digest(pill.block.header.encode_fixed()?).as_bytes();\n        let mut set = self.exemption_hash.write();\n        set.insert(hash.clone());\n\n        Ok((fixed_pill, hash))\n    }\n\n    #[muta_apm::derive::tracing_span(\n        kind = \"consensus.engine\",\n        logs = \"{'next_height': 'next_height', 'hash':\n    'Hash::from_bytes(hash.clone()).unwrap().as_hex()', 'txs_len':\n    'block.inner.block.ordered_tx_hashes.len()'}\"\n    )]\n    async fn check_block(\n        &self,\n        ctx: 
Context,\n        next_height: u64,\n        hash: Bytes,\n        block: FixedPill,\n    ) -> Result<(), Box<dyn Error + Send>> {\n        let time = Instant::now();\n\n        if block.inner.block.header.height != block.inner.block.header.proof.height + 1 {\n            error!(\"[consensus-engine]: check_block for overlord receives a proposal, error, block height {}, block {:?}\", block.inner.block.header.height,block.inner.block);\n        }\n\n        let order_hashes = block.get_ordered_hashes();\n        let order_hashes_len = order_hashes.len();\n        let exemption = { self.exemption_hash.read().contains(&hash) };\n        let sync_tx_hashes = block.get_propose_hashes();\n        let pill = block.inner;\n\n        gauge_txs_len(&pill);\n\n        // If the block was proposed by this node, it does not need to be checked. Get\n        // the full signed transactions directly.\n        if !exemption {\n            if let Err(e) = self.inner_check_block(ctx.clone(), &pill.block).await {\n                let mut reason = self.last_check_block_fail_reason.write();\n                *reason = e.to_string();\n                return Err(e.into());\n            }\n\n            let adapter = Arc::clone(&self.adapter);\n            let ctx_clone = ctx.clone();\n            tokio::spawn(async move {\n                if let Err(e) = sync_txs(ctx_clone, adapter, sync_tx_hashes).await {\n                    error!(\"Consensus sync block error {}\", e);\n                }\n            });\n        }\n\n        info!(\n            \"[consensus-engine]: check block cost {:?}\",\n            Instant::now() - time\n        );\n        let time = Instant::now();\n        let txs = self.adapter.get_full_txs(ctx, &order_hashes).await?;\n\n        info!(\n            \"[consensus-engine]: get txs cost {:?}\",\n            Instant::now() - time\n        );\n        let time = Instant::now();\n        self.txs_wal.save(\n            next_height,\n            
pill.block.header.order_signed_transactions_hash,\n            txs,\n        )?;\n\n        info!(\n            \"[consensus-engine]: write wal cost {:?} order_hashes_len {:?}\",\n            time.elapsed(),\n            order_hashes_len\n        );\n        Ok(())\n    }\n\n    /// **TODO:** the overlord interface and process need to be changed.\n    /// Get the `FixedSignedTxs` from the argument rather than getting it from\n    /// the mempool.\n    #[muta_apm::derive::tracing_span(\n        kind = \"consensus.engine\",\n        logs = \"{'current_height': 'current_height', 'txs_len':\n    'commit.content.inner.block.ordered_tx_hashes.len()'}\"\n    )]\n    async fn commit(\n        &self,\n        ctx: Context,\n        current_height: u64,\n        commit: Commit<FixedPill>,\n    ) -> Result<Status, Box<dyn Error + Send>> {\n        let lock = self.lock.try_lock();\n        if lock.is_none() {\n            return Err(ProtocolError::from(ConsensusError::LockInSync).into());\n        }\n\n        let current_consensus_status = self.status_agent.to_inner();\n        if current_consensus_status.exec_height == current_height {\n            let status = Status {\n                height:         current_height + 1,\n                interval:       Some(current_consensus_status.consensus_interval),\n                timer_config:   Some(DurationConfig {\n                    propose_ratio:   current_consensus_status.propose_ratio,\n                    prevote_ratio:   current_consensus_status.prevote_ratio,\n                    precommit_ratio: current_consensus_status.precommit_ratio,\n                    brake_ratio:     current_consensus_status.brake_ratio,\n                }),\n                authority_list: covert_to_overlord_authority(&current_consensus_status.validators),\n            };\n            return Ok(status);\n        }\n\n        if current_height != current_consensus_status.latest_committed_height + 1 {\n            return 
Err(ProtocolError::from(ConsensusError::OutdatedCommit(\n                current_height,\n                current_consensus_status.latest_committed_height,\n            ))\n            .into());\n        }\n\n        let pill = commit.content.inner;\n        let block_hash = Hash::from_bytes(commit.proof.block_hash.clone())?;\n        let signature = commit.proof.signature.signature.clone();\n        let bitmap = commit.proof.signature.address_bitmap.clone();\n        let txs_len = pill.block.ordered_tx_hashes.len();\n\n        // Save the latest proof to storage.\n        let proof = Proof {\n            height: commit.proof.height,\n            round: commit.proof.round,\n            block_hash: block_hash.clone(),\n            signature,\n            bitmap,\n        };\n        common_apm::metrics::consensus::ENGINE_ROUND_GAUGE.set(commit.proof.round as i64);\n\n        self.adapter.save_proof(ctx.clone(), proof.clone()).await?;\n\n        // Get the full transactions from the mempool. If that fails, try to load them\n        // from the WAL.\n        let ordered_tx_hashes = pill.block.ordered_tx_hashes.clone();\n        let signed_txs = match self\n            .adapter\n            .get_full_txs(ctx.clone(), &ordered_tx_hashes)\n            .await\n        {\n            Ok(txs) => txs,\n            Err(_) => self.txs_wal.load(\n                current_height,\n                pill.block.header.order_signed_transactions_hash.clone(),\n            )?,\n        };\n\n        // Execute the transactions.\n        loop {\n            if self\n                .exec(\n                    ctx.clone(),\n                    pill.block.header.order_root.clone(),\n                    current_height,\n                    pill.block.header.proposer.clone(),\n                    pill.block.header.timestamp,\n                    Hash::digest(pill.block.header.encode_fixed()?),\n                    signed_txs.clone(),\n                )\n                .await\n                .is_ok()\n            {\n                
break;\n            } else {\n                Delay::new(Duration::from_millis(RETRY_COMMIT_INTERVAL)).await;\n            }\n        }\n\n        let block_exec_height = pill.block.header.exec_height;\n        let metadata = self.adapter.get_metadata(\n            ctx.clone(),\n            pill.block.header.state_root.clone(),\n            pill.block.header.height,\n            pill.block.header.timestamp,\n            pill.block.header.proposer.clone(),\n        )?;\n        info!(\n            \"[consensus]: validator of height {} is {:?}\",\n            current_height + 1,\n            metadata.verifier_list\n        );\n\n        self.update_status(metadata, pill.block, proof, signed_txs)\n            .await?;\n\n        self.adapter\n            .flush_mempool(ctx.clone(), &ordered_tx_hashes)\n            .await?;\n\n        self.adapter\n            .broadcast_height(ctx.clone(), current_height)\n            .await?;\n        self.txs_wal.remove(block_exec_height)?;\n\n        let mut set = self.exemption_hash.write();\n        set.clear();\n\n        let current_consensus_status = self.status_agent.to_inner();\n        let status = Status {\n            height:         current_height + 1,\n            interval:       Some(current_consensus_status.consensus_interval),\n            timer_config:   Some(DurationConfig {\n                propose_ratio:   current_consensus_status.propose_ratio,\n                prevote_ratio:   current_consensus_status.prevote_ratio,\n                precommit_ratio: current_consensus_status.precommit_ratio,\n                brake_ratio:     current_consensus_status.brake_ratio,\n            }),\n            authority_list: covert_to_overlord_authority(&current_consensus_status.validators),\n        };\n\n        self.metric_commit(current_height, txs_len);\n\n        Ok(status)\n    }\n\n    /// Only signed proposal and aggregated vote will be broadcast to others.\n    #[muta_apm::derive::tracing_span(kind = 
\"consensus.engine\")]\n    async fn broadcast_to_other(\n        &self,\n        ctx: Context,\n        msg: OverlordMsg<FixedPill>,\n    ) -> Result<(), Box<dyn Error + Send>> {\n        let (end, msg) = match msg {\n            OverlordMsg::SignedProposal(sp) => {\n                let bytes = sp.rlp_bytes();\n                (END_GOSSIP_SIGNED_PROPOSAL, bytes)\n            }\n\n            OverlordMsg::AggregatedVote(av) => {\n                let bytes = av.rlp_bytes();\n                (END_GOSSIP_AGGREGATED_VOTE, bytes)\n            }\n\n            OverlordMsg::SignedChoke(sc) => {\n                let bytes = sc.rlp_bytes();\n                (END_GOSSIP_SIGNED_CHOKE, bytes)\n            }\n\n            _ => unreachable!(),\n        };\n\n        self.adapter\n            .transmit(ctx, msg, end, MessageTarget::Broadcast)\n            .await?;\n        Ok(())\n    }\n\n    /// Only signed vote will be transmit to the relayer.\n    #[muta_apm::derive::tracing_span(\n        kind = \"consensus.engine\",\n        logs = \"{'pub_key': 'hex::encode(pub_key.clone())'}\"\n    )]\n    async fn transmit_to_relayer(\n        &self,\n        ctx: Context,\n        pub_key: Bytes,\n        msg: OverlordMsg<FixedPill>,\n    ) -> Result<(), Box<dyn Error + Send>> {\n        match msg {\n            OverlordMsg::SignedVote(sv) => {\n                let msg = sv.rlp_bytes();\n                self.adapter\n                    .transmit(\n                        ctx,\n                        msg,\n                        END_GOSSIP_SIGNED_VOTE,\n                        MessageTarget::Specified(pub_key),\n                    )\n                    .await?;\n            }\n            OverlordMsg::AggregatedVote(av) => {\n                let msg = av.rlp_bytes();\n                self.adapter\n                    .transmit(\n                        ctx,\n                        msg,\n                        END_GOSSIP_AGGREGATED_VOTE,\n                        
MessageTarget::Specified(pub_key),\n                    )\n                    .await?;\n            }\n            _ => unreachable!(),\n        };\n        Ok(())\n    }\n\n    /// This function is rarely used, so the authority list is fetched from\n    /// RocksDB.\n    #[muta_apm::derive::tracing_span(\n        kind = \"consensus.engine\",\n        logs = \"{'next_height': 'next_height'}\"\n    )]\n    async fn get_authority_list(\n        &self,\n        ctx: Context,\n        next_height: u64,\n    ) -> Result<Vec<Node>, Box<dyn Error + Send>> {\n        if next_height == 0 {\n            return Ok(vec![]);\n        }\n\n        let old_block_header = self\n            .adapter\n            .get_block_header_by_height(ctx.clone(), next_height - 1)\n            .await?;\n        let old_metadata = self.adapter.get_metadata(\n            ctx.clone(),\n            old_block_header.state_root.clone(),\n            old_block_header.height,\n            old_block_header.timestamp,\n            old_block_header.proposer,\n        )?;\n        let mut old_validators = old_metadata\n            .verifier_list\n            .into_iter()\n            .map(|v| Node {\n                address:        v.pub_key.decode(),\n                propose_weight: v.propose_weight,\n                vote_weight:    v.vote_weight,\n            })\n            .collect::<Vec<_>>();\n        old_validators.sort();\n        Ok(old_validators)\n    }\n\n    fn report_error(&self, ctx: Context, err: OverlordError) {\n        match err {\n            OverlordError::CryptoErr(_) | OverlordError::AggregatedSignatureErr(_) => self\n                .adapter\n                .report_bad(ctx, TrustFeedback::Worse(err.to_string())),\n            _ => (),\n        }\n    }\n\n    fn report_view_change(&self, cx: Context, height: u64, round: u64, reason: ViewChangeReason) {\n        let view_change_reason = match reason {\n            ViewChangeReason::CheckBlockNotPass => {\n                let e = 
self.last_check_block_fail_reason.read();\n                reason.to_string() + \" \" + e.as_str()\n            }\n            _ => reason.to_string(),\n        };\n\n        log(\n            log::Level::Warn,\n            \"consensus\",\n            \"cons000\",\n            &cx,\n            json!({\"height\", height; \"round\", round; \"reason\", view_change_reason}),\n        );\n    }\n}\n\n#[async_trait]\nimpl<Adapter: ConsensusAdapter + 'static> Wal for ConsensusEngine<Adapter> {\n    async fn save(&self, info: Bytes) -> Result<(), Box<dyn Error + Send>> {\n        self.consensus_wal\n            .update_overlord_wal(Context::new(), info)\n            .map_err(|e| ProtocolError::from(ConsensusError::Other(e.to_string())))?;\n        Ok(())\n    }\n\n    async fn load(&self) -> Result<Option<Bytes>, Box<dyn Error + Send>> {\n        let res = self.consensus_wal.load_overlord_wal(Context::new()).ok();\n        Ok(res)\n    }\n}\n\nimpl<Adapter: ConsensusAdapter + 'static> ConsensusEngine<Adapter> {\n    pub fn new(\n        status_agent: StatusAgent,\n        node_info: NodeInfo,\n        wal: Arc<SignedTxsWAL>,\n        adapter: Arc<Adapter>,\n        crypto: Arc<OverlordCrypto>,\n        lock: Arc<Mutex<()>>,\n        consensus_wal: Arc<ConsensusWal>,\n    ) -> Self {\n        Self {\n            status_agent,\n            node_info,\n            exemption_hash: RwLock::new(HashSet::new()),\n            txs_wal: wal,\n            adapter,\n            crypto,\n            lock,\n            last_commit_time: RwLock::new(time_now()),\n            consensus_wal,\n            last_check_block_fail_reason: RwLock::new(String::new()),\n        }\n    }\n\n    #[muta_apm::derive::tracing_span(kind = \"consensus.engine\")]\n    pub async fn exec(\n        &self,\n        ctx: Context,\n        order_root: MerkleRoot,\n        height: u64,\n        proposer: Address,\n        timestamp: u64,\n        block_hash: Hash,\n        txs: Vec<SignedTransaction>,\n    ) -> 
ProtocolResult<()> {\n        let status = self.status_agent.to_inner();\n\n        self.adapter\n            .execute(\n                ctx,\n                self.node_info.chain_id.clone(),\n                order_root,\n                height,\n                status.cycles_price,\n                proposer,\n                block_hash,\n                txs,\n                status.cycles_limit,\n                timestamp,\n            )\n            .await\n    }\n\n    async fn inner_check_block(&self, ctx: Context, block: &Block) -> ProtocolResult<()> {\n        let current_timestamp = time_now();\n\n        self.adapter\n            .verify_block_header(ctx.clone(), &block)\n            .await\n            .map_err(|e| {\n                error!(\n                    \"[consensus] check_block, verify_block_header error, block header: {:?}\",\n                    block.header\n                );\n                e\n            })?;\n\n        // Verify the proof in the block, which is the proof for the previous block.\n        // Skip fetching the stored previous proof for comparison, since the node may\n        // have just come from sync and the extra read would waste time.\n        let previous_block_header = self\n            .adapter\n            .get_block_header_by_height(ctx.clone(), block.header.height - 1)\n            .await?;\n\n        // Verify the block timestamp.\n        if !validate_timestamp(\n            current_timestamp,\n            block.header.timestamp,\n            previous_block_header.timestamp,\n        ) {\n            return Err(ProtocolError::from(ConsensusError::InvalidTimestamp));\n        }\n\n        self.adapter\n                .verify_proof(\n                    ctx.clone(),\n                    &previous_block_header,\n                    &block.header.proof,\n                )\n                .await\n                .map_err(|e| {\n                    error!(\n                        \"[consensus] check_block, verify_proof error, previous block header: {:?}, proof: {:?}\",\n     
                   previous_block_header,\n                        block.header.proof\n                    );\n                    e\n                })?;\n\n        self.adapter\n            .verify_txs(ctx.clone(), block.header.height, &block.ordered_tx_hashes)\n            .await\n            .map_err(|e| {\n                error!(\"[consensus] check_block, verify_txs error\",);\n                e\n            })?;\n\n        // If it is inconsistent with the state of the proposal, we will wait for a\n        // period of time.\n        let mut check_retry = 0;\n        loop {\n            match self.check_block_roots(ctx.clone(), &block.header) {\n                Ok(()) => break,\n                Err(e) => {\n                    if check_retry >= RETRY_CHECK_ROOT_LIMIT {\n                        return Err(e);\n                    }\n\n                    check_retry += 1;\n                }\n            }\n            Delay::new(Duration::from_millis(RETRY_CHECK_ROOT_INTERVAL)).await;\n        }\n\n        let signed_txs = self\n            .adapter\n            .get_full_txs(ctx.clone(), &block.ordered_tx_hashes)\n            .await?;\n        self.check_order_transactions(ctx.clone(), &block, &signed_txs)\n    }\n\n    #[muta_apm::derive::tracing_span(kind = \"consensus.engine\")]\n    fn check_block_roots(&self, ctx: Context, block: &BlockHeader) -> ProtocolResult<()> {\n        let status = self.status_agent.to_inner();\n\n        // check previous hash\n        if status.current_hash != block.prev_hash {\n            return Err(ConsensusError::InvalidPrevhash {\n                expect: status.current_hash,\n                actual: block.prev_hash.clone(),\n            }\n            .into());\n        }\n\n        // check state root\n        if status.latest_committed_state_root != block.state_root\n            && !status.list_state_root.contains(&block.state_root)\n        {\n            warn!(\n                \"invalid status list_state_root, latest 
{:?}, current list {:?}, block {:?}\",\n                status.latest_committed_state_root, status.list_state_root, block.state_root\n            );\n            return Err(ConsensusError::InvalidStatusVec.into());\n        }\n\n        // check confirm root\n        if !check_list_roots(&status.list_confirm_root, &block.confirm_root) {\n            error!(\n                \"current list confirm root {:?}, block confirm root {:?}\",\n                status.list_confirm_root, block.confirm_root\n            );\n            return Err(ConsensusError::InvalidStatusVec.into());\n        }\n\n        // check receipt root\n        if !check_list_roots(&status.list_receipt_root, &block.receipt_root) {\n            error!(\n                \"current list receipt root {:?}, block receipt root {:?}\",\n                status.list_receipt_root, block.receipt_root\n            );\n            return Err(ConsensusError::InvalidStatusVec.into());\n        }\n\n        // check cycles used\n        if !check_list_roots(&status.list_cycles_used, &block.cycles_used) {\n            error!(\n                \"current list cycles used {:?}, block cycles used {:?}\",\n                status.list_cycles_used, block.cycles_used\n            );\n            return Err(ConsensusError::InvalidStatusVec.into());\n        }\n\n        Ok(())\n    }\n\n    #[muta_apm::derive::tracing_span(\n        kind = \"consensus.engine\",\n        logs = \"{'txs_len': 'signed_txs.len()'}\"\n    )]\n    fn check_order_transactions(\n        &self,\n        ctx: Context,\n        block: &Block,\n        signed_txs: &[SignedTransaction],\n    ) -> ProtocolResult<()> {\n        let order_root = Merkle::from_hashes(block.ordered_tx_hashes.clone())\n            .get_root_hash()\n            .unwrap_or_else(Hash::from_empty);\n        if order_root != block.header.order_root {\n            return Err(ConsensusError::InvalidOrderRoot {\n                expect: order_root,\n                actual: 
block.header.order_root.clone(),\n            }\n            .into());\n        }\n\n        let order_signed_transactions_hash = digest_signed_transactions(signed_txs)?;\n        if order_signed_transactions_hash != block.header.order_signed_transactions_hash {\n            return Err(ConsensusError::InvalidOrderSignedTransactionsHash {\n                expect: order_signed_transactions_hash,\n                actual: block.header.order_signed_transactions_hash.clone(),\n            }\n            .into());\n        }\n\n        Ok(())\n    }\n\n    /// After getting the signed transactions:\n    /// 1. Save the signed transactions.\n    /// 2. Save the new block.\n    /// 3. Update the mempool arguments from the metadata.\n    /// 4. Tag the consensus public keys.\n    /// 5. Update the status with the committed block and proof.\n    pub async fn update_status(\n        &self,\n        metadata: Metadata,\n        block: Block,\n        proof: Proof,\n        txs: Vec<SignedTransaction>,\n    ) -> ProtocolResult<()> {\n        // Save signed transactions\n        self.adapter\n            .save_signed_txs(Context::new(), block.header.height, txs)\n            .await?;\n\n        // Save the block.\n        self.adapter\n            .save_block(Context::new(), block.clone())\n            .await?;\n\n        // update timeout_gap of mempool\n        self.adapter.set_args(\n            Context::new(),\n            metadata.timeout_gap,\n            metadata.cycles_limit,\n            metadata.max_tx_size,\n        );\n\n        let pub_keys = metadata\n            .verifier_list\n            .iter()\n            .map(|v| v.pub_key.decode())\n            .collect();\n        self.adapter.tag_consensus(Context::new(), pub_keys)?;\n\n        let block_hash = Hash::digest(block.header.encode_fixed()?);\n\n        if block.header.height != proof.height {\n            info!(\"[consensus] update_status for handle_commit, error, before update, block height {}, proof height:{}, proof : {:?}\",\n            block.header.height,\n          
  proof.height,\n            proof.clone());\n        }\n\n        self.status_agent\n            .update_by_committed(metadata.clone(), block, block_hash, proof);\n\n        let committed_status_agent = self.status_agent.to_inner();\n\n        if committed_status_agent.latest_committed_height\n            != committed_status_agent.current_proof.height\n        {\n            error!(\"[consensus] update_status for handle_commit, error, current_height {} != current_proof.height {}, proof :{:?}\",\n            committed_status_agent.latest_committed_height,\n            committed_status_agent.current_proof.height,\n            committed_status_agent.current_proof)\n        }\n\n        self.update_overlord_crypto(metadata)?;\n        Ok(())\n    }\n\n    fn update_overlord_crypto(&self, metadata: Metadata) -> ProtocolResult<()> {\n        self.crypto.update(generate_new_crypto_map(metadata)?);\n        Ok(())\n    }\n\n    fn metric_commit(&self, current_height: u64, txs_len: usize) {\n        common_apm::metrics::consensus::ENGINE_HEIGHT_GAUGE.set((current_height + 1) as i64);\n        common_apm::metrics::consensus::ENGINE_COMMITED_TX_COUNTER.inc_by(txs_len as i64);\n\n        let now = time_now();\n        let last_commit_time = *(self.last_commit_time.read());\n        let elapsed = (now - last_commit_time) as f64;\n        common_apm::metrics::consensus::ENGINE_CONSENSUS_COST_TIME.observe(elapsed / 1e3);\n        let mut last_commit_time = self.last_commit_time.write();\n        *last_commit_time = now;\n    }\n\n    #[cfg(test)]\n    pub fn get_current_status(&self) -> crate::status::CurrentConsensusStatus {\n        self.status_agent.to_inner()\n    }\n}\n\npub fn generate_new_crypto_map(metadata: Metadata) -> ProtocolResult<HashMap<Bytes, BlsPublicKey>> {\n    let mut new_addr_pubkey_map = HashMap::new();\n    for validator in metadata.verifier_list.into_iter() {\n        let addr = validator.pub_key.decode();\n        let hex_pubkey = 
hex::decode(validator.bls_pub_key.as_string_trim0x()).map_err(|err| {\n            ConsensusError::Other(format!(\"hex decode metadata bls pubkey error {:?}\", err))\n        })?;\n        let pubkey = BlsPublicKey::try_from(hex_pubkey.as_ref())\n            .map_err(|err| ConsensusError::Other(format!(\"try from bls pubkey error {:?}\", err)))?;\n        new_addr_pubkey_map.insert(addr, pubkey);\n    }\n    Ok(new_addr_pubkey_map)\n}\n\nfn covert_to_overlord_authority(validators: &[Validator]) -> Vec<Node> {\n    let mut authority = validators\n        .iter()\n        .map(|v| Node {\n            address:        v.pub_key.clone(),\n            propose_weight: v.propose_weight,\n            vote_weight:    v.vote_weight,\n        })\n        .collect::<Vec<_>>();\n    authority.sort();\n    authority\n}\n\nasync fn sync_txs<CA: ConsensusAdapter>(\n    ctx: Context,\n    adapter: Arc<CA>,\n    propose_hashes: Vec<Hash>,\n) -> ProtocolResult<()> {\n    adapter.sync_txs(ctx, propose_hashes).await\n}\n\nfn validate_timestamp(\n    current_timestamp: u64,\n    proposal_timestamp: u64,\n    previous_timestamp: u64,\n) -> bool {\n    if proposal_timestamp < previous_timestamp {\n        return false;\n    }\n\n    if proposal_timestamp > current_timestamp {\n        return false;\n    }\n\n    true\n}\n\nfn gauge_txs_len(pill: &Pill) {\n    common_apm::metrics::consensus::ENGINE_ORDER_TX_GAUGE\n        .set(pill.block.ordered_tx_hashes.len() as i64);\n    common_apm::metrics::consensus::ENGINE_SYNC_TX_GAUGE.set(pill.propose_hashes.len() as i64);\n}\n\n#[cfg(test)]\nmod tests {\n    use super::validate_timestamp;\n\n    #[test]\n    fn test_validate_timestamp() {\n        // current 10, proposal 9, previous 8. true\n        assert_eq!(validate_timestamp(10, 9, 8), true);\n\n        // current 10, proposal 11, previous 8. false\n        assert_eq!(validate_timestamp(10, 11, 8), false);\n\n        // current 10, proposal 9, previous 11. 
false\n        assert_eq!(validate_timestamp(10, 9, 11), false);\n    }\n}\n"
  },
  {
    "path": "core/consensus/src/fixed_types.rs",
    "content": "use std::error::Error;\n\nuse overlord::Codec;\n\nuse protocol::codec::{Deserialize, ProtocolCodecSync, Serialize};\nuse protocol::fixed_codec::FixedCodec;\nuse protocol::types::{Block, Hash, Pill, Proof, SignedTransaction};\nuse protocol::{traits::MessageCodec, Bytes, BytesMut, ProtocolResult};\n\nuse crate::{ConsensusError, ConsensusType};\n\n#[derive(Serialize, Deserialize, Clone, Debug)]\npub enum ConsensusRpcRequest {\n    PullBlocks(u64),\n    PullTxs(PullTxsRequest),\n}\n\n#[derive(Clone, Debug, PartialEq, Eq)]\npub enum ConsensusRpcResponse {\n    PullBlocks(Box<Block>),\n    PullTxs(Box<FixedSignedTxs>),\n}\n\nimpl MessageCodec for ConsensusRpcResponse {\n    fn encode(&mut self) -> ProtocolResult<Bytes> {\n        let bytes = match self {\n            ConsensusRpcResponse::PullBlocks(ep) => {\n                let mut tmp = BytesMut::from(ep.encode_fixed()?.as_ref());\n                tmp.extend_from_slice(b\"a\");\n                tmp\n            }\n\n            ConsensusRpcResponse::PullTxs(txs) => {\n                let mut tmp = BytesMut::from(\n                    bincode::serialize(&txs)\n                        .map_err(|_| ConsensusError::EncodeErr(ConsensusType::RpcPullTxs))?\n                        .as_slice(),\n                );\n                tmp.extend_from_slice(b\"b\");\n                tmp\n            }\n        };\n        Ok(bytes.freeze())\n    }\n\n    fn decode(mut bytes: Bytes) -> ProtocolResult<Self> {\n        let len = bytes.len();\n        let flag = bytes.split_off(len - 1);\n\n        match flag.as_ref() {\n            b\"a\" => {\n                let res: Block = FixedCodec::decode_fixed(bytes)?;\n                Ok(ConsensusRpcResponse::PullBlocks(Box::new(res)))\n            }\n\n            b\"b\" => {\n                let res: FixedSignedTxs = bincode::deserialize(&bytes)\n                    .map_err(|_| ConsensusError::DecodeErr(ConsensusType::RpcPullTxs))?;\n                
Ok(ConsensusRpcResponse::PullTxs(Box::new(res)))\n            }\n            _ => unreachable!(),\n        }\n    }\n}\n\n#[derive(Clone, Debug, PartialEq, Eq)]\npub struct FixedPill {\n    pub inner: Pill,\n}\n\nimpl Codec for FixedPill {\n    fn encode(&self) -> Result<Bytes, Box<dyn Error + Send>> {\n        let bytes = self.inner.encode_fixed()?;\n        Ok(bytes)\n    }\n\n    fn decode(data: Bytes) -> Result<Self, Box<dyn Error + Send>> {\n        let inner: Pill = FixedCodec::decode_fixed(data)?;\n        Ok(FixedPill { inner })\n    }\n}\n\nimpl FixedPill {\n    pub fn get_ordered_hashes(&self) -> Vec<Hash> {\n        self.inner.block.ordered_tx_hashes.clone()\n    }\n\n    pub fn get_propose_hashes(&self) -> Vec<Hash> {\n        self.inner.propose_hashes.clone()\n    }\n}\n\n#[derive(Clone, Debug, PartialEq, Eq)]\npub struct FixedBlock {\n    pub inner: Block,\n}\n\nimpl MessageCodec for FixedBlock {\n    fn encode(&mut self) -> ProtocolResult<Bytes> {\n        self.inner.encode_sync()\n    }\n\n    fn decode(bytes: Bytes) -> ProtocolResult<Self> {\n        let inner: Block = ProtocolCodecSync::decode_sync(bytes)?;\n        Ok(FixedBlock::new(inner))\n    }\n}\n\nimpl FixedBlock {\n    pub fn new(inner: Block) -> Self {\n        FixedBlock { inner }\n    }\n}\n\n#[derive(Clone, Debug, PartialEq, Eq)]\npub struct FixedProof {\n    pub inner: Proof,\n}\n\nimpl MessageCodec for FixedProof {\n    fn encode(&mut self) -> ProtocolResult<Bytes> {\n        self.inner.encode_sync()\n    }\n\n    fn decode(bytes: Bytes) -> ProtocolResult<Self> {\n        let inner: Proof = ProtocolCodecSync::decode_sync(bytes)?;\n        Ok(FixedProof::new(inner))\n    }\n}\n\nimpl FixedProof {\n    pub fn new(inner: Proof) -> Self {\n        FixedProof { inner }\n    }\n}\n\n#[derive(Serialize, Deserialize, Clone, Debug)]\npub struct FixedHeight {\n    pub inner: u64,\n}\n\nimpl FixedHeight {\n    pub fn new(inner: u64) -> Self {\n        FixedHeight { inner }\n    
}\n}\n\n#[derive(Serialize, Deserialize, Clone, Debug)]\npub struct PullTxsRequest {\n    pub height: u64,\n    #[serde(with = \"core_network::serde_multi\")]\n    pub inner:  Vec<Hash>,\n}\n\nimpl PullTxsRequest {\n    pub fn new(height: u64, inner: Vec<Hash>) -> Self {\n        PullTxsRequest { height, inner }\n    }\n}\n\n#[derive(Serialize, Deserialize, Clone, Debug, PartialEq, Eq)]\npub struct FixedSignedTxs {\n    #[serde(with = \"core_network::serde_multi\")]\n    pub inner: Vec<SignedTransaction>,\n}\n\nimpl FixedSignedTxs {\n    pub fn new(inner: Vec<SignedTransaction>) -> Self {\n        FixedSignedTxs { inner }\n    }\n}\n\n#[cfg(test)]\nmod test {\n    use std::convert::From;\n    use std::str::FromStr;\n\n    use futures::executor;\n    use rand::random;\n\n    use protocol::types::{\n        Address, Block, BlockHeader, Hash, Proof, RawTransaction, SignedTransaction,\n        TransactionRequest,\n    };\n    use protocol::Bytes;\n\n    use super::{FixedBlock, FixedSignedTxs};\n\n    const PUB_KEY_STR: &str = \"02ee34d1ce8270cd236e9455d4ab9e756c4478779b1a20d7ce1c247af61ec2be3b\";\n\n    fn gen_block(height: u64, block_hash: Hash) -> Block {\n        let nonce = Hash::digest(Bytes::from(\"XXXX\"));\n        let addr_str = \"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\";\n        let header = BlockHeader {\n            chain_id: nonce.clone(),\n            height,\n            exec_height: height - 1,\n            prev_hash: nonce.clone(),\n            timestamp: 1000,\n            order_root: nonce.clone(),\n            order_signed_transactions_hash: nonce.clone(),\n            confirm_root: Vec::new(),\n            state_root: nonce,\n            receipt_root: Vec::new(),\n            cycles_used: vec![999_999],\n            proposer: Address::from_str(addr_str).unwrap(),\n            proof: mock_proof(block_hash),\n            validator_version: 1,\n            validators: Vec::new(),\n        };\n\n        Block {\n            header,\n            
ordered_tx_hashes: Vec::new(),\n        }\n    }\n\n    fn mock_proof(block_hash: Hash) -> Proof {\n        Proof {\n            height: 0,\n            round: 0,\n            block_hash,\n            signature: Default::default(),\n            bitmap: Default::default(),\n        }\n    }\n\n    fn gen_random_bytes(len: usize) -> Vec<u8> {\n        (0..len).map(|_| random::<u8>()).collect::<Vec<_>>()\n    }\n\n    fn gen_signed_tx() -> SignedTransaction {\n        use protocol::codec::ProtocolCodec;\n\n        let nonce = Hash::digest(Bytes::from(gen_random_bytes(10)));\n\n        let request = TransactionRequest {\n            service_name: \"test\".to_owned(),\n            method:       \"test\".to_owned(),\n            payload:      \"test\".to_owned(),\n        };\n        let mut raw = RawTransaction {\n            chain_id: nonce.clone(),\n            nonce,\n            timeout: random::<u64>(),\n            cycles_price: 1,\n            cycles_limit: random::<u64>(),\n            request,\n            sender: Address::from_pubkey_bytes(Bytes::from(hex::decode(PUB_KEY_STR).unwrap()))\n                .unwrap(),\n        };\n\n        let raw_bytes = executor::block_on(async { raw.encode().await.unwrap() });\n        let tx_hash = Hash::digest(raw_bytes);\n\n        SignedTransaction {\n            raw,\n            tx_hash,\n            pubkey: Bytes::from(hex::decode(PUB_KEY_STR).unwrap()),\n            signature: Bytes::from(gen_random_bytes(64)),\n        }\n    }\n\n    #[test]\n    fn test_txs_codec() {\n        use super::ProtocolCodecSync;\n\n        for _ in 0..10 {\n            let fixed_txs = FixedSignedTxs {\n                inner: (0..1000).map(|_| gen_signed_tx()).collect::<Vec<_>>(),\n            };\n\n            let bytes = fixed_txs.encode_sync().unwrap();\n            assert_eq!(fixed_txs, FixedSignedTxs::decode_sync(bytes).unwrap());\n        }\n    }\n\n    #[tokio::test]\n    async fn test_block_codec() {\n        use 
super::MessageCodec;\n\n        let block = gen_block(random::<u64>(), Hash::from_empty());\n        let mut origin = FixedBlock::new(block.clone());\n        let bytes = origin.encode().unwrap();\n        let res: FixedBlock = MessageCodec::decode(bytes).unwrap();\n        assert_eq!(res.inner, block);\n    }\n}\n"
  },
  {
    "path": "core/consensus/src/lib.rs",
    "content": "#![feature(test)]\n#![allow(\n    clippy::type_complexity,\n    clippy::suspicious_else_formatting,\n    clippy::mutable_key_type\n)]\n\npub mod adapter;\npub mod consensus;\nmod engine;\npub mod fixed_types;\npub mod message;\npub mod status;\npub mod synchronization;\n#[cfg(test)]\nmod tests;\npub mod util;\npub mod wal;\nmod wal_proto;\n\nuse std::error::Error;\n\nuse derive_more::Display;\n\nuse common_crypto::Error as CryptoError;\n\nuse protocol::types::{Hash, MerkleRoot};\nuse protocol::{ProtocolError, ProtocolErrorKind};\n\npub use crate::adapter::OverlordConsensusAdapter;\npub use crate::consensus::OverlordConsensus;\npub use crate::synchronization::{OverlordSynchronization, RichBlock};\npub use crate::wal::{ConsensusWal, SignedTxsWAL};\npub use overlord::{types::Node, DurationConfig};\n\npub const DEFAULT_OVERLORD_GAP: usize = 5;\npub const DEFAULT_SYNC_TXS_CHUNK_SIZE: usize = 5000;\n\n#[derive(Clone, Debug, Display, PartialEq, Eq)]\npub enum ConsensusType {\n    #[display(fmt = \"Signed Proposal\")]\n    SignedProposal,\n\n    #[display(fmt = \"Signed Vote\")]\n    SignedVote,\n\n    #[display(fmt = \"Aggregated Vote\")]\n    AggregateVote,\n\n    #[display(fmt = \"Rich Height\")]\n    RichHeight,\n\n    #[display(fmt = \"Rpc Pull Blocks\")]\n    RpcPullBlocks,\n\n    #[display(fmt = \"Rpc Pull Transactions\")]\n    RpcPullTxs,\n\n    #[display(fmt = \"Signed Choke\")]\n    SignedChoke,\n\n    #[display(fmt = \"WAL Signed Transactions\")]\n    WALSignedTxs,\n}\n\n/// Consensus errors defines here.\n#[derive(Debug, Display)]\npub enum ConsensusError {\n    /// Check block error.\n    #[display(fmt = \"Check invalid prev_hash, expect {:?} get {:?}\", expect, actual)]\n    InvalidPrevhash { expect: Hash, actual: Hash },\n\n    #[display(fmt = \"Check invalid order root, expect {:?} get {:?}\", expect, actual)]\n    InvalidOrderRoot {\n        expect: MerkleRoot,\n        actual: MerkleRoot,\n    },\n\n    #[display(\n        fmt = \"Check 
invalid order signed transactions hash, expect {:?} get {:?}\",\n        expect,\n        actual\n    )]\n    InvalidOrderSignedTransactionsHash { expect: Hash, actual: Hash },\n\n    #[display(fmt = \"Check invalid status vec\")]\n    InvalidStatusVec,\n\n    /// Decode consensus message error.\n    #[display(fmt = \"Decode {:?} message failed\", _0)]\n    DecodeErr(ConsensusType),\n\n    /// Encode consensus message error.\n    #[display(fmt = \"Encode {:?} message failed\", _0)]\n    EncodeErr(ConsensusType),\n\n    /// Overlord consensus protocol error.\n    #[display(fmt = \"Overlord error {:?}\", _0)]\n    OverlordErr(Box<dyn Error + Send>),\n\n    /// Consensus missed last block proof.\n    #[display(fmt = \"Consensus missed proof of {} block\", _0)]\n    MissingProof(u64),\n\n    /// Consensus missed the pill.\n    #[display(fmt = \"Consensus missed pill corresponding {:?}\", _0)]\n    MissingPill(Hash),\n\n    /// Invalid timestamp\n    #[display(fmt = \"Consensus invalid timestamp\")]\n    InvalidTimestamp,\n\n    /// Consensus missed the block header.\n    #[display(fmt = \"Consensus missed block header of {} block\", _0)]\n    MissingBlockHeader(u64),\n\n    /// This boxed error should be a `CryptoError`.\n    #[display(fmt = \"Crypto error {:?}\", _0)]\n    CryptoErr(Box<CryptoError>),\n\n    #[display(fmt = \"Synchronization {} block error\", _0)]\n    VerifyTransaction(u64),\n\n    #[display(fmt = \"Synchronization/Consensus {} block error : {}\", _0, _1)]\n    VerifyBlockHeader(u64, BlockHeaderField),\n\n    #[display(fmt = \"Synchronization/Consensus {} block error : {}\", _0, _1)]\n    VerifyProof(u64, BlockProofField),\n\n    /// Execute transactions error.\n    #[display(fmt = \"Execute transactions error {:?}\", _0)]\n    ExecuteErr(String),\n\n    /// Write-ahead log IO error.\n    WALErr(std::io::Error),\n\n    #[display(fmt = \"Storage item not found\")]\n    StorageItemNotFound,\n\n    #[display(fmt = \"Lock in sync\")]\n    LockInSync,\n\n    #[display(fmt = \"Wal transactions mismatch, 
height {}\", _0)]\n    WalTxsMismatch(u64),\n\n    #[display(\n        fmt = \"Commit an outdated block, block_height {}, last_committed_height {}\",\n        _0,\n        _1\n    )]\n    OutdatedCommit(u64, u64),\n\n    /// Other error used for very few errors.\n    #[display(fmt = \"{:?}\", _0)]\n    Other(String),\n\n    #[display(fmt = \"{:?}\", _0)]\n    SystemTime(std::time::SystemTimeError),\n\n    #[display(fmt = \"parse file name into timestamp error\")]\n    FileNameTimestamp,\n\n    #[display(fmt = \"consensus wal dir doesn't exist\")]\n    ConsensusWalDirNotExist,\n\n    #[display(fmt = \"no consensus wal file available\")]\n    ConsensusWalNoWalFile,\n}\n\n#[derive(Debug, Display)]\npub enum BlockHeaderField {\n    #[display(fmt = \"The prev_hash mismatches the previous block\")]\n    PreviousBlockHash,\n\n    #[display(fmt = \"The prev_hash mismatches the hash in the proof field\")]\n    ProofHash,\n\n    #[display(fmt = \"The proposer is not in the committee\")]\n    Proposer,\n\n    #[display(fmt = \"There is at least one validator not in the committee\")]\n    Validator,\n\n    #[display(fmt = \"There is at least one validator's weight mismatch\")]\n    Weight,\n}\n\n#[derive(Debug, Display)]\npub enum BlockProofField {\n    #[display(fmt = \"The bit_map mismatches the committers, can't get the signed voters\")]\n    BitMap,\n\n    #[display(fmt = \"The proof signature is fraudulent or invalid\")]\n    Signature,\n\n    #[display(fmt = \"Heights of block and proof diverge, block {}, proof {}\", _0, _1)]\n    HeightMismatch(u64, u64),\n\n    #[display(fmt = \"Hashes of block and proof diverge\")]\n    HashMismatch,\n\n    #[display(fmt = \"There is at least one validator not in the committee\")]\n    Validator,\n\n    #[display(fmt = \"There is at least one validator's weight mismatch\")]\n    Weight,\n\n    #[display(fmt = \"There is at least one validator's weight missing\")]\n    WeightNotFound,\n}\n\nimpl Error for ConsensusError {}\n\nimpl From<ConsensusError> 
for ProtocolError {\n    fn from(err: ConsensusError) -> ProtocolError {\n        ProtocolError::new(ProtocolErrorKind::Consensus, Box::new(err))\n    }\n}\n"
  },
  {
    "path": "core/consensus/src/message.rs",
    "content": "use std::sync::Arc;\n\nuse async_trait::async_trait;\nuse bincode::serialize;\nuse futures::TryFutureExt;\nuse log::warn;\nuse overlord::types::{AggregatedVote, SignedChoke, SignedProposal, SignedVote};\nuse overlord::Codec;\nuse rlp::Encodable;\nuse serde::{Deserialize, Serialize};\n\nuse common_apm::muta_apm;\n\nuse protocol::traits::{\n    Consensus, Context, MessageHandler, Priority, Rpc, Storage, Synchronization, TrustFeedback,\n};\nuse protocol::ProtocolError;\n\nuse core_storage::StorageError;\n\npub use crate::fixed_types::{FixedBlock, FixedHeight, FixedProof, FixedSignedTxs, PullTxsRequest};\n\npub const END_GOSSIP_SIGNED_PROPOSAL: &str = \"/gossip/consensus/signed_proposal\";\npub const END_GOSSIP_SIGNED_VOTE: &str = \"/gossip/consensus/signed_vote\";\npub const END_GOSSIP_AGGREGATED_VOTE: &str = \"/gossip/consensus/qc\";\npub const END_GOSSIP_SIGNED_CHOKE: &str = \"/gossip/consensus/signed_choke\";\npub const RPC_SYNC_PULL_BLOCK: &str = \"/rpc_call/consensus/sync_pull_block\";\npub const RPC_RESP_SYNC_PULL_BLOCK: &str = \"/rpc_resp/consensus/sync_pull_block\";\npub const RPC_SYNC_PULL_TXS: &str = \"/rpc_call/consensus/sync_pull_txs\";\npub const RPC_RESP_SYNC_PULL_TXS: &str = \"/rpc_resp/consensus/sync_pull_txs\";\npub const BROADCAST_HEIGHT: &str = \"/gossip/consensus/broadcast_height\";\npub const RPC_SYNC_PULL_PROOF: &str = \"/rpc_call/consensus/sync_pull_proof\";\npub const RPC_RESP_SYNC_PULL_PROOF: &str = \"/rpc_resp/consensus/sync_pull_proof\";\n\n#[derive(Serialize, Deserialize, Clone, Debug, PartialEq, Eq)]\npub struct Proposal(pub Vec<u8>);\n\nimpl<C: Codec> From<SignedProposal<C>> for Proposal {\n    fn from(proposal: SignedProposal<C>) -> Self {\n        Proposal(proposal.rlp_bytes())\n    }\n}\n\n#[derive(Serialize, Deserialize, Clone, Debug, PartialEq, Eq)]\npub struct Vote(pub Vec<u8>);\n\nimpl From<SignedVote> for Vote {\n    fn from(vote: SignedVote) -> Self {\n        Vote(vote.rlp_bytes())\n    
}\n}\n\n#[derive(Serialize, Deserialize, Clone, Debug, PartialEq, Eq)]\npub struct QC(pub Vec<u8>);\n\nimpl From<AggregatedVote> for QC {\n    fn from(aggregated_vote: AggregatedVote) -> Self {\n        QC(aggregated_vote.rlp_bytes())\n    }\n}\n\n#[derive(Serialize, Deserialize, Clone, Debug, PartialEq, Eq)]\npub struct RichHeight(pub Vec<u8>);\n\nimpl From<FixedHeight> for RichHeight {\n    fn from(id: FixedHeight) -> Self {\n        RichHeight(serialize(&id).unwrap())\n    }\n}\n\n#[derive(Serialize, Deserialize, Clone, Debug, PartialEq, Eq)]\npub struct Choke(pub Vec<u8>);\n\nimpl From<SignedChoke> for Choke {\n    fn from(signed_choke: SignedChoke) -> Self {\n        Choke(signed_choke.rlp_bytes())\n    }\n}\n\npub struct ProposalMessageHandler<C> {\n    consensus: Arc<C>,\n}\n\nimpl<C: Consensus + 'static> ProposalMessageHandler<C> {\n    pub fn new(consensus: Arc<C>) -> Self {\n        Self { consensus }\n    }\n}\n\n#[async_trait]\nimpl<C: Consensus + 'static> MessageHandler for ProposalMessageHandler<C> {\n    type Message = Proposal;\n\n    #[muta_apm::derive::tracing_span(name = \"handle_proposal\", kind = \"consensus.message\")]\n    async fn process(&self, ctx: Context, msg: Self::Message) -> TrustFeedback {\n        if let Err(e) = self.consensus.set_proposal(ctx, msg.0).await {\n            warn!(\"set proposal {}\", e);\n            return TrustFeedback::Worse(e.to_string());\n        }\n\n        TrustFeedback::Good\n    }\n}\n\npub struct VoteMessageHandler<C> {\n    consensus: Arc<C>,\n}\n\nimpl<C: Consensus + 'static> VoteMessageHandler<C> {\n    pub fn new(consensus: Arc<C>) -> Self {\n        Self { consensus }\n    }\n}\n\n#[async_trait]\nimpl<C: Consensus + 'static> MessageHandler for VoteMessageHandler<C> {\n    type Message = Vote;\n\n    #[muta_apm::derive::tracing_span(name = \"handle_vote\", kind = \"consensus.message\")]\n    async fn process(&self, ctx: Context, msg: Self::Message) -> TrustFeedback {\n        if let Err(e) = 
self.consensus.set_vote(ctx, msg.0).await {\n            warn!(\"set vote {}\", e);\n            return TrustFeedback::Worse(e.to_string());\n        }\n\n        TrustFeedback::Good\n    }\n}\n\npub struct QCMessageHandler<C> {\n    consensus: Arc<C>,\n}\n\nimpl<C: Consensus + 'static> QCMessageHandler<C> {\n    pub fn new(consensus: Arc<C>) -> Self {\n        Self { consensus }\n    }\n}\n\n#[async_trait]\nimpl<C: Consensus + 'static> MessageHandler for QCMessageHandler<C> {\n    type Message = QC;\n\n    #[muta_apm::derive::tracing_span(name = \"handle_qc\", kind = \"consensus.message\")]\n    async fn process(&self, ctx: Context, msg: Self::Message) -> TrustFeedback {\n        if let Err(e) = self.consensus.set_qc(ctx, msg.0).await {\n            warn!(\"set qc {}\", e);\n            return TrustFeedback::Worse(e.to_string());\n        }\n\n        TrustFeedback::Good\n    }\n}\n\npub struct ChokeMessageHandler<C> {\n    consensus: Arc<C>,\n}\n\nimpl<C: Consensus + 'static> ChokeMessageHandler<C> {\n    pub fn new(consensus: Arc<C>) -> Self {\n        Self { consensus }\n    }\n}\n\n#[async_trait]\nimpl<C: Consensus + 'static> MessageHandler for ChokeMessageHandler<C> {\n    type Message = Choke;\n\n    #[muta_apm::derive::tracing_span(name = \"handle_choke\", kind = \"consensus.message\")]\n    async fn process(&self, ctx: Context, msg: Self::Message) -> TrustFeedback {\n        if let Err(e) = self.consensus.set_choke(ctx, msg.0).await {\n            warn!(\"set choke {}\", e);\n            return TrustFeedback::Worse(e.to_string());\n        }\n\n        TrustFeedback::Good\n    }\n}\n\npub struct RemoteHeightMessageHandler<Sy> {\n    synchronization: Arc<Sy>,\n}\n\nimpl<Sy: Synchronization + 'static> RemoteHeightMessageHandler<Sy> {\n    pub fn new(synchronization: Arc<Sy>) -> Self {\n        Self { synchronization }\n    }\n}\n\n#[async_trait]\nimpl<Sy: Synchronization + 'static> MessageHandler for RemoteHeightMessageHandler<Sy> {\n    type Message = 
u64;\n\n    #[muta_apm::derive::tracing_span(name = \"handle_remote_height\", kind = \"consensus.message\")]\n    async fn process(&self, ctx: Context, remote_height: Self::Message) -> TrustFeedback {\n        if let Err(e) = self\n            .synchronization\n            .receive_remote_block(ctx, remote_height)\n            .await\n        {\n            warn!(\"sync: receive remote block {}\", e);\n            if e.to_string().contains(\"timeout\") {\n                return TrustFeedback::Bad(\"sync block timeout\".to_owned());\n            } else {\n                // Just in case, don't use worse here\n                return TrustFeedback::Bad(e.to_string());\n            }\n        }\n\n        TrustFeedback::Good\n    }\n}\n\n#[derive(Debug)]\npub struct PullBlockRpcHandler<R, S> {\n    rpc:     Arc<R>,\n    storage: Arc<S>,\n}\n\nimpl<R, S> PullBlockRpcHandler<R, S>\nwhere\n    R: Rpc + 'static,\n    S: Storage + 'static,\n{\n    pub fn new(rpc: Arc<R>, storage: Arc<S>) -> Self {\n        PullBlockRpcHandler { rpc, storage }\n    }\n}\n\n#[async_trait]\nimpl<R: Rpc + 'static, S: Storage + 'static> MessageHandler for PullBlockRpcHandler<R, S> {\n    type Message = FixedHeight;\n\n    #[muta_apm::derive::tracing_span(name = \"pull_block_rpc\", kind = \"consensus.message\")]\n    async fn process(&self, ctx: Context, msg: FixedHeight) -> TrustFeedback {\n        let id = msg.inner;\n        let ret = match self.storage.get_block(ctx.clone(), id).await {\n            Ok(Some(block)) => Ok(FixedBlock::new(block)),\n            Ok(None) => Err(StorageError::GetNone.into()),\n            Err(e) => Err(e),\n        };\n        self.rpc\n            .response(ctx, RPC_RESP_SYNC_PULL_BLOCK, ret, Priority::High)\n            .unwrap_or_else(move |e: ProtocolError| warn!(\"[core_consensus] push block {}\", e))\n            .await;\n\n        TrustFeedback::Neutral\n    }\n}\n\n#[derive(Debug)]\npub struct PullProofRpcHandler<R, S> {\n    rpc:     Arc<R>,\n    storage: 
Arc<S>,\n}\n\nimpl<R, S> PullProofRpcHandler<R, S>\nwhere\n    R: Rpc + 'static,\n    S: Storage + 'static,\n{\n    pub fn new(rpc: Arc<R>, storage: Arc<S>) -> Self {\n        PullProofRpcHandler { rpc, storage }\n    }\n}\n\n#[async_trait]\nimpl<R: Rpc + 'static, S: Storage + 'static> MessageHandler for PullProofRpcHandler<R, S> {\n    type Message = FixedHeight;\n\n    #[muta_apm::derive::tracing_span(name = \"pull_proof_rpc\", kind = \"consensus.message\")]\n    async fn process(&self, ctx: Context, height: FixedHeight) -> TrustFeedback {\n        let height = height.inner;\n        let latest_proof = self.storage.get_latest_proof(ctx.clone()).await;\n\n        let ret = match latest_proof {\n            Ok(latest_proof) => match height {\n                height if height < latest_proof.height => {\n                    match self.storage.get_block_header(ctx.clone(), height + 1).await {\n                        Ok(Some(next_header)) => Ok(next_header.proof),\n                        Ok(None) => Err(StorageError::GetNone.into()),\n                        Err(_) => Err(StorageError::GetNone.into()),\n                    }\n                }\n                height if height == latest_proof.height => Ok(latest_proof),\n                _ => Err(StorageError::GetNone.into()),\n            },\n            Err(_) => Err(StorageError::GetNone.into()),\n        };\n\n        self.rpc\n            .response(\n                ctx,\n                RPC_RESP_SYNC_PULL_PROOF,\n                ret.map(FixedProof::new),\n                Priority::High,\n            )\n            .unwrap_or_else(move |e: ProtocolError| warn!(\"[core_consensus] push proof {}\", e))\n            .await;\n\n        TrustFeedback::Neutral\n    }\n}\n\n#[derive(Debug)]\npub struct PullTxsRpcHandler<R, S> {\n    rpc:     Arc<R>,\n    storage: Arc<S>,\n}\n\nimpl<R, S> PullTxsRpcHandler<R, S>\nwhere\n    R: Rpc + 'static,\n    S: Storage + 'static,\n{\n    pub fn new(rpc: Arc<R>, storage: Arc<S>) -> 
Self {\n        PullTxsRpcHandler { rpc, storage }\n    }\n}\n\n#[async_trait]\nimpl<R: Rpc + 'static, S: Storage + 'static> MessageHandler for PullTxsRpcHandler<R, S> {\n    type Message = PullTxsRequest;\n\n    #[muta_apm::derive::tracing_span(name = \"pull_txs_rpc\", kind = \"consensus.message\")]\n    async fn process(&self, ctx: Context, msg: PullTxsRequest) -> TrustFeedback {\n        let PullTxsRequest { height, inner } = msg;\n\n        let ret = self\n            .storage\n            .get_transactions(ctx.clone(), height, &inner)\n            .await\n            .map(|txs| {\n                txs.into_iter()\n                    .filter_map(|opt_tx| opt_tx)\n                    .collect::<Vec<_>>()\n            })\n            .map(FixedSignedTxs::new);\n\n        self.rpc\n            .response(ctx, RPC_RESP_SYNC_PULL_TXS, ret, Priority::High)\n            .unwrap_or_else(move |e: ProtocolError| warn!(\"[core_consensus] push txs {}\", e))\n            .await;\n\n        TrustFeedback::Neutral\n    }\n}\n"
  },
  {
    "path": "core/consensus/src/status.rs",
    "content": "use std::sync::Arc;\n\nuse derive_more::Display;\nuse parking_lot::RwLock;\nuse serde::{Deserialize, Serialize};\n\nuse common_merkle::Merkle;\nuse protocol::fixed_codec::FixedCodec;\nuse protocol::traits::{Context, ExecutorResp};\nuse protocol::types::{Block, Hash, MerkleRoot, Metadata, Proof, Validator};\n\nuse crate::util::check_list_roots;\n\n#[derive(Clone, Debug)]\npub struct StatusAgent {\n    status: Arc<RwLock<CurrentConsensusStatus>>,\n}\n\nimpl StatusAgent {\n    pub fn new(status: CurrentConsensusStatus) -> Self {\n        Self {\n            status: Arc::new(RwLock::new(status)),\n        }\n    }\n\n    pub fn update_by_executed(&self, info: ExecutedInfo) {\n        self.status.write().update_by_executed(info);\n    }\n\n    pub fn update_by_committed(\n        &self,\n        metadata: Metadata,\n        block: Block,\n        block_hash: Hash,\n        current_proof: Proof,\n    ) {\n        self.status\n            .write()\n            .update_by_committed(metadata, block, block_hash, current_proof)\n    }\n\n    // TODO(yejiayu): Is there a better way to write it?\n    pub fn replace(&self, new_status: CurrentConsensusStatus) {\n        let mut status = self.status.write();\n        status.cycles_price = new_status.cycles_price;\n        status.cycles_limit = new_status.cycles_limit;\n        status.latest_committed_height = new_status.latest_committed_height;\n        status.exec_height = new_status.exec_height;\n        status.current_hash = new_status.current_hash;\n        status.latest_committed_state_root = new_status.latest_committed_state_root;\n        status.list_confirm_root = new_status.list_confirm_root;\n        status.list_state_root = new_status.list_state_root;\n        status.list_receipt_root = new_status.list_receipt_root;\n        status.list_cycles_used = new_status.list_cycles_used;\n        status.current_proof = new_status.current_proof;\n        status.validators = new_status.validators;\n        
status.consensus_interval = new_status.consensus_interval;\n    }\n\n    pub fn to_inner(&self) -> CurrentConsensusStatus {\n        self.status.read().clone()\n    }\n}\n\n#[derive(Serialize, Deserialize, Clone, Debug, Display, PartialEq, Eq)]\n#[display(\n    fmt = \"latest_committed_height {}, exec height {}, current_hash {:?}, latest_committed_state_root {:?}, list state root {:?}, list receipt root {:?}, list confirm root {:?}, list cycles used {:?}\",\n    latest_committed_height,\n    exec_height,\n    current_hash,\n    latest_committed_state_root,\n    list_state_root,\n    list_receipt_root,\n    list_confirm_root,\n    list_cycles_used\n)]\npub struct CurrentConsensusStatus {\n    pub cycles_price:                u64, // metadata\n    pub cycles_limit:                u64, // metadata\n    pub latest_committed_height:     u64, // latest consented height\n    pub exec_height:                 u64,\n    pub current_hash:                Hash, // hash of the block at the current height\n    pub latest_committed_state_root: MerkleRoot, // latest consented height\n    pub list_confirm_root:           Vec<MerkleRoot>,\n    pub list_state_root:             Vec<MerkleRoot>,\n    pub list_receipt_root:           Vec<MerkleRoot>,\n    pub list_cycles_used:            Vec<u64>,\n    pub current_proof:               Proof, // proof of the latest consented block, not the previous block\n    pub validators:                  Vec<Validator>, // metadata\n    pub consensus_interval:          u64,   // metadata\n    pub propose_ratio:               u64,   // metadata\n    pub prevote_ratio:               u64,   // metadata\n    pub precommit_ratio:             u64,   // metadata\n    pub brake_ratio:                 u64,\n    pub tx_num_limit:                u64,\n    pub max_tx_size:                 u64,\n} // the metadata fields match the metadata at the latest consented height\n\nimpl CurrentConsensusStatus {\n    pub fn get_latest_state_root(&self) -> MerkleRoot {\n        self.list_state_root\n            
.last()\n            .unwrap_or(&self.latest_committed_state_root)\n            .clone()\n    }\n\n    pub(crate) fn update_by_executed(&mut self, info: ExecutedInfo) {\n        if info.exec_height <= self.exec_height {\n            return;\n        }\n        log::info!(\"update_by_executed: info {}\", info,);\n        log::info!(\"update_by_executed: current status {}\", self);\n\n        assert!(info.exec_height == self.exec_height + 1);\n        self.exec_height += 1;\n        self.list_cycles_used.push(info.cycles_used);\n        self.list_confirm_root.push(info.confirm_root.clone());\n        self.list_receipt_root.push(info.receipt_root.clone());\n        self.list_state_root.push(info.state_root);\n\n        common_apm::metrics::consensus::ENGINE_EXECUTING_BLOCK_GAUGE\n            .set(self.latest_committed_height as i64 - self.exec_height as i64);\n    }\n\n    pub(crate) fn update_by_committed(\n        &mut self,\n        metadata: Metadata,\n        block: Block,\n        block_hash: Hash,\n        current_proof: Proof,\n    ) {\n        self.set_metadata(metadata);\n\n        assert!(block.header.height == self.latest_committed_height + 1);\n\n        self.latest_committed_height = block.header.height;\n        self.current_hash = block_hash;\n        self.current_proof = current_proof;\n        self.latest_committed_state_root = block.header.state_root.clone();\n\n        self.split_off(&block);\n\n        common_apm::metrics::consensus::ENGINE_EXECUTING_BLOCK_GAUGE\n            .set((self.latest_committed_height - self.exec_height) as i64);\n    }\n\n    pub(crate) fn set_metadata(&mut self, metadata: Metadata) {\n        self.cycles_limit = metadata.cycles_limit;\n        self.cycles_price = metadata.cycles_price;\n        self.consensus_interval = metadata.interval;\n        let validators: Vec<Validator> = metadata\n            .verifier_list\n            .iter()\n            .map(|v| Validator {\n                pub_key:        
v.pub_key.decode(),\n                propose_weight: v.propose_weight,\n                vote_weight:    v.vote_weight,\n            })\n            .collect();\n        self.validators = validators;\n        self.propose_ratio = metadata.propose_ratio;\n        self.prevote_ratio = metadata.prevote_ratio;\n        self.precommit_ratio = metadata.precommit_ratio;\n        self.brake_ratio = metadata.brake_ratio;\n        self.max_tx_size = metadata.max_tx_size;\n        self.tx_num_limit = metadata.tx_num_limit;\n    }\n\n    fn split_off(&mut self, block: &Block) {\n        let len = block.header.confirm_root.len();\n        if len != block.header.cycles_used.len() || len != block.header.receipt_root.len() {\n            panic!(\"vec lengths do not match. {:?}\", block);\n        }\n\n        if !check_list_roots(&self.list_cycles_used, &block.header.cycles_used) {\n            panic!(\n                \"check list_cycles_used error current_roots: {:?}, committed_roots roots {:?}\",\n                self.list_cycles_used, block.header.cycles_used\n            );\n        }\n        if !check_list_roots(&self.list_confirm_root, &block.header.confirm_root) {\n            panic!(\n                \"check list_confirm_root error current_roots: {:?}, committed_roots roots {:?}\",\n                self.list_confirm_root, block.header.confirm_root\n            );\n        }\n        if !check_list_roots(&self.list_receipt_root, &block.header.receipt_root) {\n            panic!(\n                \"check list_receipt_root error current_roots: {:?}, committed_roots roots {:?}\",\n                self.list_receipt_root, block.header.receipt_root\n            );\n        }\n\n        self.list_cycles_used = self.list_cycles_used.split_off(len);\n        self.list_confirm_root = self.list_confirm_root.split_off(len);\n        self.list_receipt_root = self.list_receipt_root.split_off(len);\n        self.list_state_root = self.list_state_root.split_off(len);\n    
}\n}\n\n#[derive(Clone, Debug, Display)]\n#[display(\n    fmt = \"exec height {}, cycles used {}, state root {:?}, receipt root {:?}, confirm root {:?}\",\n    exec_height,\n    cycles_used,\n    state_root,\n    receipt_root,\n    confirm_root\n)]\npub struct ExecutedInfo {\n    pub ctx:          Context,\n    pub exec_height:  u64,\n    pub cycles_used:  u64,\n    pub state_root:   MerkleRoot,\n    pub receipt_root: MerkleRoot,\n    pub confirm_root: MerkleRoot,\n}\n\nimpl ExecutedInfo {\n    pub fn new(ctx: Context, height: u64, order_root: MerkleRoot, resp: ExecutorResp) -> Self {\n        let cycles = resp.all_cycles_used;\n\n        let receipt = Merkle::from_hashes(\n            resp.receipts\n                .iter()\n                .map(|r| Hash::digest(r.to_owned().encode_fixed().unwrap()))\n                .collect::<Vec<_>>(),\n        )\n        .get_root_hash()\n        .unwrap_or_else(Hash::from_empty);\n\n        Self {\n            ctx,\n            exec_height: height,\n            cycles_used: cycles,\n            receipt_root: receipt,\n            confirm_root: order_root,\n            state_root: resp.state_root,\n        }\n    }\n}\n"
  },
  {
    "path": "core/consensus/src/synchronization.rs",
    "content": "use std::sync::Arc;\nuse std::time::{Duration, Instant};\n\nuse async_trait::async_trait;\nuse futures::lock::Mutex;\nuse futures_timer::Delay;\n\nuse common_apm::muta_apm;\n\nuse protocol::fixed_codec::FixedCodec;\nuse protocol::traits::{\n    Context, ExecutorParams, ExecutorResp, Synchronization, SynchronizationAdapter,\n};\nuse protocol::types::{Block, Hash, Proof, Receipt, SignedTransaction};\nuse protocol::ProtocolResult;\n\nuse crate::engine::generate_new_crypto_map;\nuse crate::status::{ExecutedInfo, StatusAgent};\nuse crate::util::{digest_signed_transactions, OverlordCrypto};\nuse crate::ConsensusError;\n\nconst POLLING_BROADCAST: u64 = 2000;\nconst WAIT_EXECUTION: u64 = 1000;\nconst ONCE_SYNC_BLOCK_LIMIT: u64 = 50;\n\n#[derive(Clone, Debug)]\npub struct RichBlock {\n    pub block: Block,\n    pub txs:   Vec<SignedTransaction>,\n}\n\npub struct OverlordSynchronization<Adapter: SynchronizationAdapter> {\n    adapter: Arc<Adapter>,\n    status:  StatusAgent,\n    crypto:  Arc<OverlordCrypto>,\n    lock:    Arc<Mutex<()>>,\n    syncing: Mutex<()>,\n\n    sync_txs_chunk_size: usize,\n}\n\n#[async_trait]\nimpl<Adapter: SynchronizationAdapter> Synchronization for OverlordSynchronization<Adapter> {\n    #[muta_apm::derive::tracing_span(\n        kind = \"consensus.sync\",\n        logs = \"{'remote_height': 'remote_height'}\"\n    )]\n    async fn receive_remote_block(&self, ctx: Context, remote_height: u64) -> ProtocolResult<()> {\n        let syncing_lock = self.syncing.try_lock();\n        if syncing_lock.is_none() {\n            return Ok(());\n        }\n        if !self.need_sync(ctx.clone(), remote_height).await? 
{\n            return Ok(());\n        }\n\n        // Lock the consensus engine, block commit process.\n        let commit_lock = self.lock.try_lock();\n        if commit_lock.is_none() {\n            return Ok(());\n        }\n\n        let current_height = self.status.to_inner().latest_committed_height;\n\n        if remote_height <= current_height {\n            return Ok(());\n        }\n\n        log::info!(\n            \"[synchronization]: sync start, remote block height {:?} current block height {:?}\",\n            remote_height,\n            current_height,\n        );\n\n        let sync_status_agent = self.init_status_agent().await?;\n        let sync_resp = self\n            .start_sync(\n                ctx.clone(),\n                sync_status_agent.clone(),\n                current_height,\n                remote_height,\n            )\n            .await;\n        let sync_status = sync_status_agent.to_inner();\n\n        if let Err(e) = sync_resp {\n            log::error!(\n                \"[synchronization]: err, current_height {:?} err_msg: {:?}\",\n                sync_status.latest_committed_height,\n                e\n            );\n            return Err(e);\n        }\n\n        log::info!(\n            \"[synchronization]: sync end, remote block height {:?} current block height {:?} current exec height {:?} current proof height {:?}\",\n            remote_height,\n            sync_status.latest_committed_height,\n            sync_status.exec_height,\n            sync_status.current_proof.height,\n        );\n\n        Ok(())\n    }\n}\n\nimpl<Adapter: SynchronizationAdapter> OverlordSynchronization<Adapter> {\n    pub fn new(\n        sync_txs_chunk_size: usize,\n        adapter: Arc<Adapter>,\n        status: StatusAgent,\n        crypto: Arc<OverlordCrypto>,\n        lock: Arc<Mutex<()>>,\n    ) -> Self {\n        let syncing = Mutex::new(());\n\n        Self {\n            adapter,\n            status,\n            crypto,\n         
   lock,\n            syncing,\n\n            sync_txs_chunk_size,\n        }\n    }\n\n    pub async fn polling_broadcast(&self) -> ProtocolResult<()> {\n        loop {\n            let current_height = self.status.to_inner().latest_committed_height;\n            if current_height != 0 {\n                self.adapter\n                    .broadcast_height(Context::new(), current_height)\n                    .await?;\n            }\n            Delay::new(Duration::from_millis(POLLING_BROADCAST)).await;\n        }\n    }\n\n    #[muta_apm::derive::tracing_span(\n        kind = \"consensus.sync\",\n        logs = \"{'current_height': 'current_height', 'remote_height': 'remote_height'}\"\n    )]\n    async fn start_sync(\n        &self,\n        ctx: Context,\n        sync_status_agent: StatusAgent,\n        current_height: u64,\n        remote_height: u64,\n    ) -> ProtocolResult<()> {\n        let remote_height = if current_height + ONCE_SYNC_BLOCK_LIMIT > remote_height {\n            remote_height\n        } else {\n            current_height + ONCE_SYNC_BLOCK_LIMIT\n        };\n\n        let mut current_consented_height = current_height;\n\n        while current_consented_height < remote_height {\n            let inst = Instant::now();\n\n            let consenting_height = current_consented_height + 1;\n            log::info!(\n                \"[synchronization]: try syncing block, current_consented_height:{},syncing_height:{}\",\n                current_consented_height,\n                consenting_height\n            );\n\n            let consenting_rich_block: RichBlock = self\n                .get_rich_block_from_remote(ctx.clone(), consenting_height)\n                .await\n                .map_err(|e| {\n                    log::error!(\n                        \"[synchronization]: get_rich_block_from_remote error, height: {:?}\",\n                        consenting_height\n                    );\n                    e\n                })?;\n\n          
  let consenting_proof: Proof = self\n                .adapter\n                .get_proof_from_remote(ctx.clone(), consenting_height)\n                .await\n                .map_err(|e| {\n                    log::error!(\n                        \"[synchronization]: get_proof_from_remote error, height: {:?}\",\n                        consenting_height\n                    );\n                    e\n                })?;\n\n            self.adapter\n                .verify_block_header(ctx.clone(), &consenting_rich_block.block)\n                .await\n                .map_err(|e| {\n                    log::error!(\n                        \"[synchronization]: verify_block_header error, block header: {:?}\",\n                        consenting_rich_block.block.header\n                    );\n                    e\n                })?;\n\n            // verify syncing proof\n            self.adapter\n                .verify_proof(\n                    ctx.clone(),\n                    &consenting_rich_block.block.header,\n                    &consenting_proof,\n                )\n                .await\n                .map_err(|e| {\n                    log::error!(\n                        \"[synchronization]: verify_proof error, syncing block header: {:?}, proof: {:?}\",\n                        consenting_rich_block.block.header,\n                        consenting_proof,\n                    );\n                    e\n                })?;\n\n            // verify previous proof\n            let previous_block_header = self\n                .adapter\n                .get_block_header_by_height(\n                    ctx.clone(),\n                    consenting_rich_block.block.header.height - 1,\n                )\n                .await\n                .map_err(|e| {\n                    log::error!(\n                        \"[synchronization] get previous block {} error\",\n                        consenting_rich_block.block.header.height - 1\n            
        );\n                    e\n                })?;\n\n            self.adapter\n                .verify_proof(\n                    ctx.clone(),\n                    &previous_block_header,\n                    &consenting_rich_block.block.header.proof,\n                )\n                .await\n                .map_err(|e| {\n                    log::error!(\n                        \"[synchronization]: verify_proof error, previous block header: {:?}, proof: {:?}\",\n                        previous_block_header,\n                        consenting_rich_block.block.header.proof\n                    );\n                    e\n                })?;\n\n            let order_signed_transactions_hash =\n                digest_signed_transactions(&consenting_rich_block.txs)?;\n            if order_signed_transactions_hash\n                != consenting_rich_block\n                    .block\n                    .header\n                    .order_signed_transactions_hash\n            {\n                return Err(ConsensusError::InvalidOrderSignedTransactionsHash {\n                    expect: order_signed_transactions_hash,\n                    actual: consenting_rich_block\n                        .block\n                        .header\n                        .order_signed_transactions_hash\n                        .clone(),\n                }\n                .into());\n            }\n\n            let inst = Instant::now();\n            self.commit_block(\n                ctx.clone(),\n                consenting_rich_block.clone(),\n                consenting_proof,\n                sync_status_agent.clone(),\n            )\n            .await\n            .map_err(|e| {\n                log::error!(\n                    \"[synchronization]: commit block {} error\",\n                    consenting_rich_block.block.header.height\n                );\n                e\n            })?;\n\n            self.update_status(ctx.clone(), 
sync_status_agent.clone())?;\n            current_consented_height += 1;\n\n            common_apm::metrics::consensus::ENGINE_SYNC_BLOCK_COUNTER.inc_by(1 as i64);\n            common_apm::metrics::consensus::ENGINE_SYNC_BLOCK_HISTOGRAM\n                .observe(common_apm::metrics::duration_to_sec(inst.elapsed()));\n        }\n        Ok(())\n    }\n\n    #[muta_apm::derive::tracing_span(kind = \"consensus.sync\")]\n    async fn commit_block(\n        &self,\n        ctx: Context,\n        rich_block: RichBlock,\n        proof: Proof,\n        status_agent: StatusAgent,\n    ) -> ProtocolResult<()> {\n        let executor_resp = self\n            .exec_block(ctx.clone(), rich_block.clone(), status_agent.clone())\n            .await?;\n        let block = &rich_block.block;\n        let block_hash = Hash::digest(block.header.encode_fixed()?);\n\n        let metadata = self.adapter.get_metadata(\n            ctx.clone(),\n            block.header.state_root.clone(),\n            block.header.height,\n            block.header.timestamp,\n            block.header.proposer.clone(),\n        )?;\n\n        self.crypto\n            .update(generate_new_crypto_map(metadata.clone())?);\n\n        self.adapter.set_args(\n            ctx.clone(),\n            metadata.timeout_gap,\n            metadata.cycles_limit,\n            metadata.max_tx_size,\n        );\n\n        let pub_keys = metadata\n            .verifier_list\n            .iter()\n            .map(|v| v.pub_key.decode())\n            .collect();\n        self.adapter.tag_consensus(ctx.clone(), pub_keys)?;\n\n        log::info!(\n            \"[synchronization]: commit_block, committing block header: {}, committing proof:{:?}\",\n            block.header.clone(),\n            proof.clone()\n        );\n\n        status_agent.update_by_committed(metadata, block.clone(), block_hash, proof);\n\n        self.save_chain_data(\n            ctx.clone(),\n            rich_block.txs.clone(),\n            
executor_resp.receipts.clone(),\n            rich_block.block.clone(),\n        )\n        .await?;\n\n        // If any transactions in the transaction pool have been put on chain\n        // by this execution, make sure they are cleaned up.\n        self.adapter\n            .flush_mempool(ctx.clone(), &rich_block.block.ordered_tx_hashes)\n            .await?;\n\n        Ok(())\n    }\n\n    #[muta_apm::derive::tracing_span(kind = \"consensus.sync\", logs = \"{'height': 'height'}\")]\n    async fn get_rich_block_from_remote(\n        &self,\n        ctx: Context,\n        height: u64,\n    ) -> ProtocolResult<RichBlock> {\n        let block = self.get_block_from_remote(ctx.clone(), height).await?;\n\n        let mut txs = Vec::with_capacity(block.ordered_tx_hashes.len());\n\n        for tx_hashes in block.ordered_tx_hashes.chunks(self.sync_txs_chunk_size) {\n            let remote_txs = self\n                .adapter\n                .get_txs_from_remote(ctx.clone(), height, &tx_hashes)\n                .await?;\n\n            txs.extend(remote_txs);\n        }\n\n        Ok(RichBlock { block, txs })\n    }\n\n    #[muta_apm::derive::tracing_span(kind = \"consensus.sync\", logs = \"{'height': 'height'}\")]\n    async fn get_block_from_remote(&self, ctx: Context, height: u64) -> ProtocolResult<Block> {\n        self.adapter\n            .get_block_from_remote(ctx.clone(), height)\n            .await\n    }\n\n    #[muta_apm::derive::tracing_span(kind = \"consensus.sync\", logs = \"{'txs_len': 'txs.len()'}\")]\n    async fn save_chain_data(\n        &self,\n        ctx: Context,\n        txs: Vec<SignedTransaction>,\n        receipts: Vec<Receipt>,\n        block: Block,\n    ) -> ProtocolResult<()> {\n        self.adapter\n            .save_signed_txs(ctx.clone(), block.header.height, txs)\n            .await?;\n        self.adapter\n            .save_receipts(ctx.clone(), block.header.height, receipts)\n            .await?;\n        self.adapter\n       
     .save_proof(ctx.clone(), block.header.proof.clone())\n            .await?;\n        self.adapter.save_block(ctx.clone(), block).await?;\n        Ok(())\n    }\n\n    #[muta_apm::derive::tracing_span(kind = \"consensus.sync\")]\n    pub async fn exec_block(\n        &self,\n        ctx: Context,\n        rich_block: RichBlock,\n        status_agent: StatusAgent,\n    ) -> ProtocolResult<ExecutorResp> {\n        let current_status = status_agent.to_inner();\n        let cycles_limit = current_status.cycles_limit;\n\n        let exec_params = ExecutorParams {\n            state_root: current_status.get_latest_state_root(),\n            height: rich_block.block.header.height,\n            timestamp: rich_block.block.header.timestamp,\n            cycles_limit,\n            proposer: rich_block.block.header.proposer,\n        };\n        let resp = self\n            .adapter\n            .sync_exec(ctx.clone(), &exec_params, &rich_block.txs)?;\n\n        status_agent.update_by_executed(ExecutedInfo::new(\n            ctx,\n            rich_block.block.header.height,\n            rich_block.block.header.order_root,\n            resp.clone(),\n        ));\n\n        Ok(resp)\n    }\n\n    async fn init_status_agent(&self) -> ProtocolResult<StatusAgent> {\n        loop {\n            let current_status = self.status.to_inner();\n\n            if current_status.exec_height != current_status.latest_committed_height {\n                Delay::new(Duration::from_millis(WAIT_EXECUTION)).await;\n            } else {\n                break;\n            }\n        }\n        let current_status = self.status.to_inner();\n        Ok(StatusAgent::new(current_status))\n    }\n\n    #[muta_apm::derive::tracing_span(\n        kind = \"consensus.sync\",\n        logs = \"{'remote_height': 'remote_height'}\"\n    )]\n    async fn need_sync(&self, ctx: Context, remote_height: u64) -> ProtocolResult<bool> {\n        let mut current_height = 
self.status.to_inner().latest_committed_height;\n        if remote_height == 0 {\n            return Ok(false);\n        }\n\n        if remote_height <= current_height {\n            return Ok(false);\n        }\n\n        if current_height == remote_height - 1 {\n            let status = self.status.to_inner();\n            Delay::new(Duration::from_millis(status.consensus_interval)).await;\n\n            current_height = self.status.to_inner().latest_committed_height;\n            if current_height == remote_height {\n                return Ok(false);\n            }\n        }\n\n        let block = self\n            .get_block_from_remote(ctx.clone(), remote_height)\n            .await?;\n\n        log::debug!(\n            \"[synchronization] get block from remote success {:?} \",\n            remote_height\n        );\n\n        if block.header.height != remote_height {\n            log::error!(\"[synchronization]: remote block height does not match the requested height\");\n            return Ok(false);\n        }\n\n        Ok(true)\n    }\n\n    fn update_status(&self, ctx: Context, sync_status_agent: StatusAgent) -> ProtocolResult<()> {\n        let sync_status = sync_status_agent.to_inner();\n\n        self.status.replace(sync_status.clone());\n        self.adapter.update_status(\n            ctx,\n            sync_status.latest_committed_height,\n            sync_status.consensus_interval,\n            sync_status.propose_ratio,\n            sync_status.prevote_ratio,\n            sync_status.precommit_ratio,\n            sync_status.brake_ratio,\n            sync_status.validators,\n        )?;\n\n        log::info!(\n            \"[synchronization]: synced block, status: height:{}, exec_height:{}, proof_height:{}\",\n            sync_status.latest_committed_height,\n            sync_status.exec_height,\n            sync_status.current_proof.height\n        );\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "core/consensus/src/tests/engine.rs",
    "content": "use std::collections::HashMap;\nuse std::convert::TryFrom;\nuse std::sync::Arc;\n\nuse async_trait::async_trait;\nuse futures::lock::Mutex;\nuse overlord::types::{AggregatedSignature, Commit, Proof as OverlordProof};\nuse overlord::Consensus;\n\nuse common_crypto::BlsPrivateKey;\nuse protocol::fixed_codec::FixedCodec;\nuse protocol::traits::{\n    CommonConsensusAdapter, ConsensusAdapter, Context, MessageTarget, MixedTxHashes, NodeInfo,\n    TrustFeedback,\n};\nuse protocol::types::{\n    Address, Block, BlockHeader, Hash, Hex, MerkleRoot, Metadata, Pill, Proof, Receipt,\n    SignedTransaction, Validator,\n};\nuse protocol::{Bytes, ProtocolResult};\n\nuse crate::engine::ConsensusEngine;\nuse crate::fixed_types::FixedPill;\nuse crate::status::StatusAgent;\nuse crate::util::OverlordCrypto;\nuse crate::wal::{ConsensusWal, SignedTxsWAL};\n\nuse super::*;\n\nstatic FULL_TXS_PATH: &str = \"./free-space/engine/txs\";\nstatic FULL_CONSENSUS_PATH: &str = \"./free-space/engine/consensus\";\n\n#[tokio::test]\nasync fn test_repetitive_commit() {\n    let init_status = mock_current_status(1);\n    let engine = init_engine(init_status.clone());\n\n    let block = mock_block_from_status(&init_status);\n\n    let res = engine\n        .commit(Context::new(), 11, mock_commit(block.clone()))\n        .await;\n    assert!(res.is_ok());\n\n    let status = engine.get_current_status();\n\n    let res = engine\n        .commit(Context::new(), 11, mock_commit(block.clone()))\n        .await;\n    assert!(res.is_err());\n\n    assert_eq!(status, engine.get_current_status());\n}\n\nfn mock_commit(block: Block) -> Commit<FixedPill> {\n    let pill = Pill {\n        block:          block.clone(),\n        propose_hashes: vec![],\n    };\n    Commit {\n        height:  11,\n        content: FixedPill { inner: pill },\n        proof:   OverlordProof {\n            height:     11,\n            round:      0,\n            block_hash: 
Hash::digest(block.header.encode_fixed().unwrap()).as_bytes(),\n            signature:  AggregatedSignature {\n                signature:      get_random_bytes(32),\n                address_bitmap: get_random_bytes(10),\n            },\n        },\n    }\n}\n\nfn init_engine(init_status: CurrentConsensusStatus) -> ConsensusEngine<MockConsensusAdapter> {\n    ConsensusEngine::new(\n        StatusAgent::new(init_status),\n        mock_node_info(),\n        Arc::new(SignedTxsWAL::new(FULL_TXS_PATH)),\n        Arc::new(MockConsensusAdapter {}),\n        Arc::new(init_crypto()),\n        Arc::new(Mutex::new(())),\n        Arc::new(ConsensusWal::new(FULL_CONSENSUS_PATH)),\n    )\n}\n\nfn init_crypto() -> OverlordCrypto {\n    let mut priv_key = Vec::new();\n    priv_key.extend_from_slice(&[0u8; 16]);\n    let mut tmp =\n        hex::decode(\"45c56be699dca666191ad3446897e0f480da234da896270202514a0e1a587c3f\").unwrap();\n    priv_key.append(&mut tmp);\n\n    OverlordCrypto::new(\n        BlsPrivateKey::try_from(priv_key.as_ref()).unwrap(),\n        HashMap::new(),\n        std::str::from_utf8(hex::decode(\"\").unwrap().as_ref())\n            .unwrap()\n            .into(),\n    )\n}\n\nfn mock_node_info() -> NodeInfo {\n    NodeInfo {\n        self_pub_key: mock_pub_key().decode(),\n        chain_id:     mock_hash(),\n        self_address: mock_address(),\n    }\n}\n\nfn mock_metadata() -> Metadata {\n    Metadata {\n        chain_id:           mock_hash(),\n        bech32_address_hrp: \"muta\".to_owned(),\n        common_ref:         Hex::from_string(\"0x703873635a6b51513451\".to_string()).unwrap(),\n        timeout_gap:        20,\n        cycles_limit:       600000,\n        cycles_price:       1,\n        interval:           3000,\n        verifier_list:      vec![],\n        propose_ratio:      3,\n        prevote_ratio:      3,\n        precommit_ratio:    3,\n        brake_ratio:        3,\n        tx_num_limit:       3,\n        max_tx_size:        3000,\n    
}\n}\n\npub struct MockConsensusAdapter;\n\n#[async_trait]\nimpl CommonConsensusAdapter for MockConsensusAdapter {\n    async fn save_block(&self, _ctx: Context, _block: Block) -> ProtocolResult<()> {\n        Ok(())\n    }\n\n    async fn save_proof(&self, _ctx: Context, _proof: Proof) -> ProtocolResult<()> {\n        Ok(())\n    }\n\n    async fn save_signed_txs(\n        &self,\n        _ctx: Context,\n        _block_height: u64,\n        _signed_txs: Vec<SignedTransaction>,\n    ) -> ProtocolResult<()> {\n        Ok(())\n    }\n\n    async fn save_receipts(\n        &self,\n        _ctx: Context,\n        _height: u64,\n        _receipts: Vec<Receipt>,\n    ) -> ProtocolResult<()> {\n        Ok(())\n    }\n\n    async fn flush_mempool(\n        &self,\n        _ctx: Context,\n        _ordered_tx_hashes: &[Hash],\n    ) -> ProtocolResult<()> {\n        Ok(())\n    }\n\n    async fn get_block_by_height(&self, _ctx: Context, _height: u64) -> ProtocolResult<Block> {\n        unimplemented!()\n    }\n\n    async fn get_block_header_by_height(\n        &self,\n        _ctx: Context,\n        _height: u64,\n    ) -> ProtocolResult<BlockHeader> {\n        unimplemented!()\n    }\n\n    async fn get_current_height(&self, _ctx: Context) -> ProtocolResult<u64> {\n        Ok(10)\n    }\n\n    async fn get_txs_from_storage(\n        &self,\n        _ctx: Context,\n        _tx_hashes: &[Hash],\n    ) -> ProtocolResult<Vec<SignedTransaction>> {\n        unimplemented!()\n    }\n\n    async fn verify_block_header(&self, _ctx: Context, _block: &Block) -> ProtocolResult<()> {\n        unimplemented!()\n    }\n\n    async fn verify_proof(\n        &self,\n        _ctx: Context,\n        _block_header: &BlockHeader,\n        _proof: &Proof,\n    ) -> ProtocolResult<()> {\n        Ok(())\n    }\n\n    async fn broadcast_height(&self, _ctx: Context, _height: u64) -> ProtocolResult<()> {\n        Ok(())\n    }\n\n    fn get_metadata(\n        &self,\n        _context: Context,\n      
  _state_root: MerkleRoot,\n        _height: u64,\n        _timestamp: u64,\n        _proposer: Address,\n    ) -> ProtocolResult<Metadata> {\n        Ok(mock_metadata())\n    }\n\n    fn report_bad(&self, _ctx: Context, _feedback: TrustFeedback) {}\n\n    fn set_args(\n        &self,\n        _context: Context,\n        _timeout_gap: u64,\n        _cycles_limit: u64,\n        _max_tx_size: u64,\n    ) {\n    }\n\n    fn tag_consensus(&self, _ctx: Context, _peer_ids: Vec<Bytes>) -> ProtocolResult<()> {\n        Ok(())\n    }\n\n    fn verify_proof_signature(\n        &self,\n        _ctx: Context,\n        _block_height: u64,\n        _vote_hash: Bytes,\n        _aggregated_signature_bytes: Bytes,\n        _vote_pubkeys: Vec<Hex>,\n    ) -> ProtocolResult<()> {\n        Ok(())\n    }\n\n    fn verify_proof_weight(\n        &self,\n        _ctx: Context,\n        _block_height: u64,\n        _weight_map: HashMap<Bytes, u32>,\n        _signed_voters: Vec<Bytes>,\n    ) -> ProtocolResult<()> {\n        Ok(())\n    }\n}\n\n#[async_trait]\nimpl ConsensusAdapter for MockConsensusAdapter {\n    async fn get_txs_from_mempool(\n        &self,\n        _ctx: Context,\n        _height: u64,\n        _cycles_limit: u64,\n        _tx_num_limit: u64,\n    ) -> ProtocolResult<MixedTxHashes> {\n        unimplemented!()\n    }\n\n    async fn sync_txs(&self, _ctx: Context, _txs: Vec<Hash>) -> ProtocolResult<()> {\n        Ok(())\n    }\n\n    async fn get_full_txs(\n        &self,\n        _ctx: Context,\n        _txs: &[Hash],\n    ) -> ProtocolResult<Vec<SignedTransaction>> {\n        Ok(vec![])\n    }\n\n    async fn transmit(\n        &self,\n        _ctx: Context,\n        _msg: Vec<u8>,\n        _end: &str,\n        _target: MessageTarget,\n    ) -> ProtocolResult<()> {\n        Ok(())\n    }\n\n    async fn execute(\n        &self,\n        _ctx: Context,\n        _chain_id: Hash,\n        _order_root: MerkleRoot,\n        _height: u64,\n        _cycles_price: u64,\n        
_proposer: Address,\n        _block_hash: Hash,\n        _signed_txs: Vec<SignedTransaction>,\n        _cycles_limit: u64,\n        _timestamp: u64,\n    ) -> ProtocolResult<()> {\n        Ok(())\n    }\n\n    async fn get_last_validators(\n        &self,\n        _ctx: Context,\n        _height: u64,\n    ) -> ProtocolResult<Vec<Validator>> {\n        unimplemented!()\n    }\n\n    async fn pull_block(&self, _ctx: Context, _height: u64, _end: &str) -> ProtocolResult<Block> {\n        unimplemented!()\n    }\n\n    async fn get_current_height(&self, _ctx: Context) -> ProtocolResult<u64> {\n        Ok(10)\n    }\n\n    async fn verify_txs(&self, _ctx: Context, _height: u64, _txs: &[Hash]) -> ProtocolResult<()> {\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "core/consensus/src/tests/mod.rs",
    "content": "mod engine;\nmod status;\nmod synchronization;\n\nuse rand::random;\n\nuse protocol::types::{Address, Block, BlockHeader, Hash, Hex, MerkleRoot, Proof, Validator};\nuse protocol::Bytes;\n\nuse crate::status::CurrentConsensusStatus;\n\nconst HEIGHT_TEN: u64 = 10;\n\nfn mock_block_from_status(status: &CurrentConsensusStatus) -> Block {\n    let block_header = BlockHeader {\n        chain_id:                       mock_hash(),\n        height:                         status.latest_committed_height + 1,\n        exec_height:                    status.exec_height + 1,\n        prev_hash:                      status.current_hash.clone(),\n        timestamp:                      random::<u64>(),\n        order_root:                     mock_hash(),\n        order_signed_transactions_hash: mock_hash(),\n        confirm_root:                   vec![status.list_confirm_root.first().cloned().unwrap()],\n        state_root:                     status.list_state_root.first().cloned().unwrap(),\n        receipt_root:                   vec![status.list_receipt_root.first().cloned().unwrap()],\n        cycles_used:                    vec![*status.list_cycles_used.first().unwrap()],\n        proposer:                       mock_address(),\n        proof:                          mock_proof(status.latest_committed_height),\n        validator_version:              1,\n        validators:                     mock_validators(4),\n    };\n\n    Block {\n        header:            block_header,\n        ordered_tx_hashes: vec![],\n    }\n}\n\nfn mock_current_status(exec_lag: u64) -> CurrentConsensusStatus {\n    let state_roots = mock_roots(exec_lag);\n\n    CurrentConsensusStatus {\n        cycles_price:                random::<u64>(),\n        cycles_limit:                random::<u64>(),\n        latest_committed_height:     HEIGHT_TEN,\n        exec_height:                 HEIGHT_TEN - exec_lag,\n        current_hash:                mock_hash(),\n        
latest_committed_state_root: state_roots.last().cloned().unwrap_or_else(mock_hash),\n        list_confirm_root:           mock_roots(exec_lag),\n        list_state_root:             state_roots,\n        list_receipt_root:           mock_roots(exec_lag),\n        list_cycles_used:            (0..exec_lag).map(|_| random::<u64>()).collect::<Vec<_>>(),\n        current_proof:               mock_proof(HEIGHT_TEN + exec_lag),\n        validators:                  mock_validators(4),\n        consensus_interval:          random::<u64>(),\n        propose_ratio:               random::<u64>(),\n        prevote_ratio:               random::<u64>(),\n        precommit_ratio:             random::<u64>(),\n        brake_ratio:                 random::<u64>(),\n        tx_num_limit:                random::<u64>(),\n        max_tx_size:                 random::<u64>(),\n    }\n}\n\nfn mock_proof(proof_height: u64) -> Proof {\n    Proof {\n        height:     proof_height,\n        round:      random::<u64>(),\n        signature:  get_random_bytes(64),\n        bitmap:     get_random_bytes(20),\n        block_hash: mock_hash(),\n    }\n}\n\nfn mock_roots(len: u64) -> Vec<MerkleRoot> {\n    (0..len).map(|_| mock_hash()).collect::<Vec<_>>()\n}\n\nfn mock_hash() -> Hash {\n    Hash::digest(get_random_bytes(10))\n}\n\nfn mock_address() -> Address {\n    let hash = mock_hash();\n    Address::from_hash(hash).unwrap()\n}\n\nfn get_random_bytes(len: usize) -> Bytes {\n    let vec: Vec<u8> = (0..len).map(|_| random::<u8>()).collect();\n    Bytes::from(vec)\n}\n\nfn mock_pub_key() -> Hex {\n    Hex::from_string(\n        \"0x026c184a9016f6f71a234c86b141621f38b68c78602ab06768db4d83682c616004\".to_owned(),\n    )\n    .unwrap()\n}\n\nfn mock_validators(len: usize) -> Vec<Validator> {\n    (0..len).map(|_| mock_validator()).collect::<Vec<_>>()\n}\n\nfn mock_validator() -> Validator {\n    Validator {\n        pub_key:        mock_pub_key().decode(),\n        propose_weight: 
random::<u32>(),\n        vote_weight:    random::<u32>(),\n    }\n}\n"
  },
  {
    "path": "core/consensus/src/tests/status.rs",
    "content": "use creep::Context;\nuse rand::random;\n\nuse protocol::fixed_codec::FixedCodec;\nuse protocol::types::{Hash, Hex, Metadata, ValidatorExtend};\n\nuse crate::status::{CurrentConsensusStatus, ExecutedInfo};\n\nuse super::*;\n\n#[test]\n#[should_panic]\nfn test_update_by_executed() {\n    let mut status = mock_current_status(2);\n    let mut status_clone = status.clone();\n    let info = mock_executed_info(9);\n\n    status.update_by_executed(info.clone());\n    status_clone.exec_height = 9;\n    status_clone.list_cycles_used.push(info.cycles_used);\n    status_clone\n        .list_confirm_root\n        .push(info.confirm_root.clone());\n    status_clone.list_state_root.push(info.state_root.clone());\n    status_clone.list_receipt_root.push(info.receipt_root);\n    assert_eq!(status, status_clone);\n\n    let info = mock_executed_info(9);\n    status.update_by_executed(info);\n    assert_eq!(status, status_clone);\n\n    let info = mock_executed_info(11);\n    status.update_by_executed(info);\n}\n\n#[test]\n#[should_panic]\nfn test_update_by_committed() {\n    let mut status = mock_current_status(2);\n    let status_clone = status.clone();\n    let block = mock_block_from_status(&status);\n    let metadata = mock_metadata();\n    let block_hash = Hash::digest(block.encode_fixed().unwrap());\n\n    status.update_by_committed(\n        metadata.clone(),\n        block.clone(),\n        block_hash.clone(),\n        block.header.proof.clone(),\n    );\n\n    assert_eq!(status.latest_committed_height, block.header.height);\n    assert_eq!(status.current_hash, block_hash);\n    assert_eq!(status.latest_committed_state_root, block.header.state_root);\n    check_metadata(&status, &metadata);\n    check_vec(&status_clone, &status);\n\n    let mut block = mock_block_from_status(&status);\n    block.header.height += 1;\n    status.update_by_committed(\n        metadata,\n        block.clone(),\n        Hash::digest(block.encode_fixed().unwrap()),\n        
block.header.proof,\n    );\n}\n\nfn check_metadata(status: &CurrentConsensusStatus, metadata: &Metadata) {\n    assert_eq!(status.consensus_interval, metadata.interval);\n    assert_eq!(status.propose_ratio, metadata.propose_ratio);\n    assert_eq!(status.prevote_ratio, metadata.prevote_ratio);\n    assert_eq!(status.precommit_ratio, metadata.precommit_ratio);\n    assert_eq!(status.brake_ratio, metadata.brake_ratio);\n    assert_eq!(status.tx_num_limit, metadata.tx_num_limit);\n    assert_eq!(status.max_tx_size, metadata.max_tx_size);\n}\n\nfn check_vec(status_before: &CurrentConsensusStatus, status_after: &CurrentConsensusStatus) {\n    assert!(status_after.list_cycles_used.len() == 1);\n    assert!(status_after.list_confirm_root.len() == 1);\n    assert!(status_after.list_receipt_root.len() == 1);\n    assert!(status_after.list_state_root.len() == 1);\n\n    assert!(status_before\n        .list_cycles_used\n        .ends_with(&status_after.list_cycles_used));\n    assert!(status_before\n        .list_confirm_root\n        .ends_with(&status_after.list_confirm_root));\n    assert!(status_before\n        .list_receipt_root\n        .ends_with(&status_after.list_receipt_root));\n    assert!(status_before\n        .list_state_root\n        .ends_with(&status_after.list_state_root));\n}\n\nfn mock_metadata() -> Metadata {\n    Metadata {\n        chain_id:           mock_hash(),\n        bech32_address_hrp: \"muta\".to_owned(),\n        common_ref:         Hex::from_string(\n            \"0xd654c7a6747fc2e34808c1ebb1510bfb19b443d639f2fab6dc41fce9f634de37\".to_string(),\n        )\n        .unwrap(),\n        timeout_gap:        random::<u64>(),\n        cycles_limit:       random::<u64>(),\n        cycles_price:       random::<u64>(),\n        verifier_list:      mock_validators_extend(4),\n        interval:           random::<u64>(),\n        propose_ratio:      random::<u64>(),\n        prevote_ratio:      random::<u64>(),\n        precommit_ratio:    
random::<u64>(),\n        brake_ratio:        random::<u64>(),\n        tx_num_limit:       random::<u64>(),\n        max_tx_size:        random::<u64>(),\n    }\n}\n\nfn mock_validators_extend(len: usize) -> Vec<ValidatorExtend> {\n    (0..len)\n        .map(|_| ValidatorExtend {\n            bls_pub_key:    Hex::from_string(\n                \"0xd654c7a6747fc2e34808c1ebb1510bfb19b443d639f2fab6dc41fce9f634de37\".to_string(),\n            )\n            .unwrap(),\n            pub_key:        mock_pub_key(),\n            address:        mock_address(),\n            propose_weight: random::<u32>(),\n            vote_weight:    random::<u32>(),\n        })\n        .collect::<Vec<_>>()\n}\n\nfn mock_executed_info(height: u64) -> ExecutedInfo {\n    ExecutedInfo {\n        ctx:          Context::new(),\n        exec_height:  height,\n        cycles_used:  random::<u64>(),\n        state_root:   mock_hash(),\n        receipt_root: mock_hash(),\n        confirm_root: mock_hash(),\n    }\n}\n"
  },
  {
    "path": "core/consensus/src/tests/synchronization.rs",
    "content": "use std::collections::{HashMap, HashSet};\nuse std::convert::TryFrom;\nuse std::str::FromStr;\nuse std::sync::Arc;\n\nuse async_trait::async_trait;\nuse bit_vec::BitVec;\nuse futures::executor::block_on;\nuse futures::lock::Mutex;\nuse overlord::types::{AggregatedSignature, AggregatedVote, Node, SignedVote, Vote, VoteType};\nuse overlord::{extract_voters, Crypto};\nuse parking_lot::RwLock;\n\nuse common_crypto::{\n    BlsCommonReference, BlsPrivateKey, BlsPublicKey, HashValue, PrivateKey, PublicKey,\n    Secp256k1PrivateKey, Secp256k1PublicKey, Signature, ToPublicKey,\n};\nuse common_merkle::Merkle;\n\nuse protocol::fixed_codec::FixedCodec;\nuse protocol::traits::{CommonConsensusAdapter, Synchronization, SynchronizationAdapter};\nuse protocol::traits::{Context, ExecutorParams, ExecutorResp, ServiceResponse, TrustFeedback};\nuse protocol::types::{\n    Address, Block, BlockHeader, Bytes, Hash, Hex, MerkleRoot, Metadata, Proof, RawTransaction,\n    Receipt, ReceiptResponse, SignedTransaction, TransactionRequest, Validator, ValidatorExtend,\n};\nuse protocol::ProtocolResult;\n\nuse crate::status::{CurrentConsensusStatus, StatusAgent};\nuse crate::synchronization::{OverlordSynchronization, RichBlock};\nuse crate::util::{convert_hex_to_bls_pubkeys, digest_signed_transactions, OverlordCrypto};\nuse crate::BlockHeaderField::{PreviousBlockHash, ProofHash, Proposer};\nuse crate::BlockProofField::{BitMap, HashMismatch, HeightMismatch, WeightNotFound};\nuse crate::{BlockHeaderField, BlockProofField, ConsensusError};\n\nconst PUB_KEY_STR: &str = \"02ef0cb0d7bc6c18b4bea1f5908d9106522b35ab3c399369605d4242525bda7e60\";\n\n// Test the blocks gap from 1 to 4.\n#[test]\nfn sync_gap_test() {\n    for gap in [1, 2, 3, 4].iter() {\n        let key_tool = get_mock_key_tool();\n\n        let max_height = 10 * *gap;\n\n        let list_rich_block = mock_chained_rich_block(max_height, *gap, &key_tool);\n\n        let remote_blocks = 
gen_remote_block_hashmap(list_rich_block.0.clone());\n        let remote_proofs = gen_remote_proof_hashmap(list_rich_block.1.clone());\n        let genesis_block = remote_blocks.read().get(&0).unwrap().clone();\n\n        let local_blocks = Arc::new(RwLock::new(HashMap::new()));\n        local_blocks\n            .write()\n            .insert(genesis_block.header.height, genesis_block.clone());\n\n        let local_transactions = Arc::new(RwLock::new(HashMap::new()));\n        let remote_transactions = gen_remote_tx_hashmap(list_rich_block.0.clone());\n\n        let adapter = Arc::new(MockCommonConsensusAdapter::new(\n            0,\n            local_blocks,\n            remote_blocks,\n            remote_proofs,\n            local_transactions,\n            remote_transactions,\n            Arc::clone(&key_tool.overlord_crypto),\n        ));\n        let block_hash = Hash::digest(genesis_block.header.encode_fixed().unwrap());\n        let status = CurrentConsensusStatus {\n            cycles_price:                1,\n            cycles_limit:                300_000_000,\n            latest_committed_height:     genesis_block.header.height,\n            exec_height:                 genesis_block.header.exec_height,\n            current_hash:                block_hash,\n            list_confirm_root:           vec![],\n            latest_committed_state_root: genesis_block.header.state_root.clone(),\n            list_state_root:             vec![],\n            list_receipt_root:           vec![],\n            list_cycles_used:            vec![],\n            current_proof:               genesis_block.header.proof,\n            validators:                  genesis_block.header.validators,\n            consensus_interval:          3000,\n            propose_ratio:               15,\n            prevote_ratio:               10,\n            precommit_ratio:             10,\n            brake_ratio:                 3,\n            tx_num_limit:                20000,\n 
           max_tx_size:                 1_073_741_824,\n        };\n        let status_agent = StatusAgent::new(status);\n        let lock = Arc::new(Mutex::new(()));\n        let sync = OverlordSynchronization::<_>::new(\n            5000,\n            Arc::clone(&adapter),\n            status_agent.clone(),\n            Arc::new(mock_crypto()),\n            lock,\n        );\n\n        // simulate receiving a remote block at half of the max height\n        block_on(sync.receive_remote_block(Context::new(), max_height / 2)).unwrap();\n\n        // check the current consensus status to confirm the sync succeeded\n        let status = status_agent.to_inner();\n        let block =\n            block_on(adapter.get_block_by_height(Context::new(), status.latest_committed_height))\n                .unwrap();\n        assert_sync(status, block);\n\n        block_on(sync.receive_remote_block(Context::new(), max_height)).unwrap();\n        let status = status_agent.to_inner();\n        let block =\n            block_on(adapter.get_block_by_height(Context::new(), status.latest_committed_height))\n                .unwrap();\n        assert_sync(status, block);\n\n        let status = status_agent.to_inner();\n        // latest_committed_height is the height agreed on by consensus\n        assert_eq!(status.latest_committed_height, max_height);\n    }\n}\n\npub type SafeHashMap<K, V> = Arc<RwLock<HashMap<K, V>>>;\n\npub struct MockCommonConsensusAdapter {\n    latest_height:       RwLock<u64>,\n    local_blocks:        SafeHashMap<u64, Block>,\n    remote_blocks:       SafeHashMap<u64, Block>,\n    remote_proofs:       SafeHashMap<u64, Proof>,\n    local_transactions:  SafeHashMap<Hash, SignedTransaction>,\n    remote_transactions: SafeHashMap<Hash, SignedTransaction>,\n    crypto:              Arc<OverlordCrypto>,\n}\n\nimpl MockCommonConsensusAdapter {\n    pub fn new(\n        latest_height: u64,\n        local_blocks: SafeHashMap<u64, Block>,\n        remote_blocks: SafeHashMap<u64, Block>,\n        remote_proofs: 
SafeHashMap<u64, Proof>,\n        local_transactions: SafeHashMap<Hash, SignedTransaction>,\n        remote_transactions: SafeHashMap<Hash, SignedTransaction>,\n        crypto: Arc<OverlordCrypto>,\n    ) -> Self {\n        Self {\n            latest_height: RwLock::new(latest_height),\n            local_blocks,\n            remote_blocks,\n            remote_proofs,\n            local_transactions,\n            remote_transactions,\n            crypto,\n        }\n    }\n}\n\n#[async_trait]\nimpl SynchronizationAdapter for MockCommonConsensusAdapter {\n    fn update_status(\n        &self,\n        _: Context,\n        _: u64,\n        _: u64,\n        _: u64,\n        _: u64,\n        _: u64,\n        _: u64,\n        _: Vec<Validator>,\n    ) -> ProtocolResult<()> {\n        Ok(())\n    }\n\n    fn sync_exec(\n        &self,\n        _: Context,\n        params: &ExecutorParams,\n        txs: &[SignedTransaction],\n    ) -> ProtocolResult<ExecutorResp> {\n        Ok(exec_txs(params.height, txs).0)\n    }\n\n    /// Pull the block at the given height from other nodes.\n    async fn get_block_from_remote(&self, _: Context, height: u64) -> ProtocolResult<Block> {\n        Ok(self.remote_blocks.read().get(&height).unwrap().clone())\n    }\n\n    /// Pull signed transactions corresponding to the given hashes from other\n    /// nodes.\n    async fn get_txs_from_remote(\n        &self,\n        _: Context,\n        _: u64,\n        tx_hashes: &[Hash],\n    ) -> ProtocolResult<Vec<SignedTransaction>> {\n        let map = self.remote_transactions.read();\n        let mut txs = vec![];\n\n        for hash in tx_hashes.iter() {\n            let tx = map.get(hash).unwrap();\n            txs.push(tx.clone())\n        }\n\n        Ok(txs)\n    }\n\n    async fn get_proof_from_remote(&self, _: Context, height: u64) -> ProtocolResult<Proof> {\n        Ok(self.remote_proofs.read().get(&height).unwrap().clone())\n    }\n}\n\n#[async_trait]\nimpl CommonConsensusAdapter for 
MockCommonConsensusAdapter {\n    /// Save a block to the database.\n    async fn save_block(&self, _: Context, block: Block) -> ProtocolResult<()> {\n        self.local_blocks.write().insert(block.header.height, block);\n        let mut height = self.latest_height.write();\n        *height += 1;\n        Ok(())\n    }\n\n    async fn save_proof(&self, _: Context, _: Proof) -> ProtocolResult<()> {\n        Ok(())\n    }\n\n    /// Save some signed transactions to the database.\n    async fn save_signed_txs(\n        &self,\n        _: Context,\n        _block_height: u64,\n        signed_txs: Vec<SignedTransaction>,\n    ) -> ProtocolResult<()> {\n        let mut map = self.local_transactions.write();\n        for tx in signed_txs.into_iter() {\n            map.insert(tx.tx_hash.clone(), tx);\n        }\n        Ok(())\n    }\n\n    async fn save_receipts(&self, _: Context, _: u64, _: Vec<Receipt>) -> ProtocolResult<()> {\n        Ok(())\n    }\n\n    /// Flush the given transactions in the mempool.\n    async fn flush_mempool(&self, _: Context, _: &[Hash]) -> ProtocolResult<()> {\n        Ok(())\n    }\n\n    /// Get a block corresponding to the given height.\n    async fn get_block_by_height(&self, _: Context, height: u64) -> ProtocolResult<Block> {\n        Ok(self.local_blocks.read().get(&height).unwrap().clone())\n    }\n\n    async fn get_block_header_by_height(\n        &self,\n        _ctx: Context,\n        height: u64,\n    ) -> ProtocolResult<BlockHeader> {\n        Ok(self\n            .local_blocks\n            .read()\n            .get(&height)\n            .unwrap()\n            .header\n            .clone())\n    }\n\n    /// Get the current height from storage.\n    async fn get_current_height(&self, _: Context) -> ProtocolResult<u64> {\n        Ok(*self.latest_height.read())\n    }\n\n    async fn get_txs_from_storage(\n        &self,\n        _: Context,\n        tx_hashes: &[Hash],\n    ) -> ProtocolResult<Vec<SignedTransaction>> {\n        let 
map = self.local_transactions.read();\n        let mut txs = vec![];\n\n        for hash in tx_hashes.iter() {\n            let tx = map.get(hash).unwrap();\n            txs.push(tx.clone())\n        }\n\n        Ok(txs)\n    }\n\n    async fn broadcast_height(&self, _: Context, _: u64) -> ProtocolResult<()> {\n        Ok(())\n    }\n\n    fn get_metadata(\n        &self,\n        _context: Context,\n        _state_root: MerkleRoot,\n        _height: u64,\n        _timestamp: u64,\n        _proposer: Address,\n    ) -> ProtocolResult<Metadata> {\n        Ok(Metadata {\n            chain_id:           Hash::from_empty(),\n            bech32_address_hrp: \"muta\".to_owned(),\n            common_ref:         Hex::from_string(\"0x6c747758636859487038\".to_string()).unwrap(),\n            timeout_gap:        20,\n            cycles_limit:       9999,\n            cycles_price:       1,\n            interval:           3000,\n            verifier_list:      mock_verifier_list(),\n            propose_ratio:      10,\n            prevote_ratio:      10,\n            precommit_ratio:    10,\n            brake_ratio:        10,\n            tx_num_limit:       20000,\n            max_tx_size:        1_073_741_824,\n        })\n    }\n\n    fn tag_consensus(&self, _: Context, _: Vec<Bytes>) -> ProtocolResult<()> {\n        Ok(())\n    }\n\n    fn report_bad(&self, _ctx: Context, _feedback: TrustFeedback) {}\n\n    fn set_args(\n        &self,\n        _context: Context,\n        _timeout_gap: u64,\n        _cycles_limit: u64,\n        _max_tx_size: u64,\n    ) {\n    }\n\n    /// Verify all header fields except the proof and the roots.\n    async fn verify_block_header(&self, ctx: Context, block: &Block) -> ProtocolResult<()> {\n        let previous_block = self\n            .get_block_by_height(ctx.clone(), block.header.height - 1)\n            .await?;\n\n        let previous_block_hash = Hash::digest(previous_block.header.encode_fixed()?);\n\n        if previous_block_hash != block.header.prev_hash {\n            log::error!(\n                \"[consensus] verify_block_header, previous_block_hash: {:?}, block.header.prev_hash: {:?}\",\n                previous_block_hash,\n                block.header.prev_hash\n            );\n            return Err(\n                ConsensusError::VerifyBlockHeader(block.header.height, PreviousBlockHash).into(),\n            );\n        }\n\n        // the proofs of blocks 0 and 1 are agreed on by the community\n        if block.header.height > 1u64 && block.header.prev_hash != block.header.proof.block_hash {\n            log::error!(\n                \"[consensus] verify_block_header, verifying_block: {:?}\",\n                block\n            );\n            return Err(ConsensusError::VerifyBlockHeader(block.header.height, ProofHash).into());\n        }\n\n        // verify proposer and validators\n        let previous_metadata = self.get_metadata(\n            ctx,\n            previous_block.header.state_root.clone(),\n            previous_block.header.height,\n            previous_block.header.timestamp,\n            previous_block.header.proposer,\n        )?;\n\n        let authority_map = previous_metadata\n            .verifier_list\n            .iter()\n            .map(|v| {\n                let address = v.pub_key.decode();\n                let node = Node {\n                    address:        v.pub_key.decode(),\n                    propose_weight: v.propose_weight,\n                    vote_weight:    v.vote_weight,\n                };\n                (address, node)\n            })\n            .collect::<HashMap<_, _>>();\n\n        // check proposer\n        if block.header.height != 0\n            && !previous_metadata\n                .verifier_list\n                .iter()\n                .any(|v| v.address == block.header.proposer)\n        {\n            log::error!(\n                \"[consensus] verify_block_header, block.header.proposer: {:?}, 
authority_map: {:?}\",\n                block.header.proposer,\n                authority_map\n            );\n            return Err(ConsensusError::VerifyBlockHeader(block.header.height, Proposer).into());\n        }\n\n        // check validators\n        for validator in block.header.validators.iter() {\n            let validator_address = Address::from_pubkey_bytes(validator.pub_key.clone());\n\n            if !authority_map.contains_key(&validator.pub_key) {\n                log::error!(\n                    \"[consensus] verify_block_header, validator.address: {:?}, authority_map: {:?}\",\n                    validator_address,\n                    authority_map\n                );\n                return Err(ConsensusError::VerifyBlockHeader(\n                    block.header.height,\n                    BlockHeaderField::Validator,\n                )\n                .into());\n            } else {\n                let node = authority_map.get(&validator.pub_key).unwrap();\n\n                if node.vote_weight != validator.vote_weight\n                    || node.propose_weight != validator.propose_weight\n                {\n                    log::error!(\n                        \"[consensus] verify_block_header, validator.address: {:?}, authority_map: {:?}\",\n                        validator_address,\n                        authority_map\n                    );\n                    return Err(ConsensusError::VerifyBlockHeader(\n                        block.header.height,\n                        BlockHeaderField::Weight,\n                    )\n                    .into());\n                }\n            }\n        }\n\n        Ok(())\n    }\n\n    async fn verify_proof(\n        &self,\n        ctx: Context,\n        block_header: &BlockHeader,\n        proof: &Proof,\n    ) -> ProtocolResult<()> {\n        // block 0 has no proof; it is agreed on by the community, not produced by the chain\n\n        if block_header.height == 0 {\n            return Ok(());\n        };\n\n        if block_header.height != proof.height {\n            log::error!(\n                \"[consensus] verify_proof, block_header.height: {}, proof.height: {}\",\n                block_header.height,\n                proof.height\n            );\n            return Err(ConsensusError::VerifyProof(\n                block_header.height,\n                HeightMismatch(block_header.height, proof.height),\n            )\n            .into());\n        }\n\n        let blockhash = Hash::digest(block_header.clone().encode_fixed()?);\n\n        if blockhash != proof.block_hash {\n            log::error!(\n                \"[consensus] verify_proof, blockhash: {:?}, proof.block_hash: {:?}\",\n                blockhash,\n                proof.block_hash\n            );\n            return Err(ConsensusError::VerifyProof(block_header.height, HashMismatch).into());\n        }\n\n        let previous_block = self\n            .get_block_by_height(ctx.clone(), block_header.height - 1)\n            .await?;\n        // the authority list for the target height comes from the previous height\n        let metadata = self.get_metadata(\n            ctx.clone(),\n            previous_block.header.state_root.clone(),\n            previous_block.header.height,\n            previous_block.header.timestamp,\n            previous_block.header.proposer,\n        )?;\n\n        let mut authority_list = metadata\n            .verifier_list\n            .iter()\n            .map(|v| Node {\n                address:        v.pub_key.decode(),\n                propose_weight: v.propose_weight,\n                vote_weight:    v.vote_weight,\n            })\n            .collect::<Vec<Node>>();\n\n        let signed_voters = extract_voters(&mut authority_list, &proof.bitmap).map_err(|_| {\n            log::error!(\"[consensus] extract_voters fails, bitmap error\");\n            ConsensusError::VerifyProof(block_header.height, BitMap)\n        })?;\n\n        let vote = Vote {\n  
          height:     proof.height,\n            round:      proof.round,\n            vote_type:  VoteType::Precommit,\n            block_hash: proof.block_hash.as_bytes(),\n        };\n\n        let vote_hash = self.crypto.hash(protocol::Bytes::from(rlp::encode(&vote)));\n        let hex_pubkeys = metadata\n            .verifier_list\n            .iter()\n            .filter_map(|v| {\n                if signed_voters.contains(&v.pub_key.decode()) {\n                    Some(v.bls_pub_key.clone())\n                } else {\n                    None\n                }\n            })\n            .collect::<Vec<_>>();\n\n        self.verify_proof_signature(\n            ctx.clone(),\n            block_header.height,\n            vote_hash.clone(),\n            proof.signature.clone(),\n            hex_pubkeys,\n        ).map_err(|e| {\n            log::error!(\"[consensus] verify_proof_signature error, height {}, vote: {:?}, vote_hash:{:?}, sig:{:?}, signed_voter:{:?}\",\n            block_header.height,\n            vote,\n            vote_hash,\n            proof.signature,\n            signed_voters,\n            );\n            e\n        })?;\n\n        let weight_map = authority_list\n            .iter()\n            .map(|node| (node.address.clone(), node.vote_weight))\n            .collect::<HashMap<_, _>>();\n\n        self.verify_proof_weight(ctx.clone(), block_header.height, weight_map, signed_voters)?;\n\n        Ok(())\n    }\n\n    fn verify_proof_signature(\n        &self,\n        _ctx: Context,\n        block_height: u64,\n        vote_hash: Bytes,\n        aggregated_signature_bytes: Bytes,\n        vote_keys: Vec<Hex>,\n    ) -> ProtocolResult<()> {\n        let mut pub_keys = Vec::new();\n        for hex in vote_keys.into_iter() {\n            pub_keys.push(convert_hex_to_bls_pubkeys(hex)?)\n        }\n\n        self.crypto\n            .inner_verify_aggregated_signature(vote_hash, pub_keys, aggregated_signature_bytes)\n            .map_err(|e| 
{\n                log::error!(\"[consensus] verify_proof_signature error: {}\", e);\n                ConsensusError::VerifyProof(block_height, BlockProofField::Signature).into()\n            })\n    }\n\n    fn verify_proof_weight(\n        &self,\n        _ctx: Context,\n        block_height: u64,\n        weight_map: HashMap<Bytes, u32>,\n        signed_voters: Vec<Bytes>,\n    ) -> ProtocolResult<()> {\n        let total_validator_weight: u64 = weight_map.iter().map(|pair| u64::from(*pair.1)).sum();\n\n        let mut accumulator = 0u64;\n        for signed_voter_address in signed_voters.iter() {\n            if weight_map.contains_key(signed_voter_address) {\n                let weight = weight_map.get(signed_voter_address).ok_or_else(|| {\n                    log::error!(\n                        \"[consensus] verify_proof_weight, signed_voter_address: {:?}\",\n                        hex::encode(signed_voter_address)\n                    );\n                    ConsensusError::VerifyProof(block_height, WeightNotFound)\n                })?;\n                accumulator += u64::from(*(weight));\n            } else {\n                log::error!(\n                    \"[consensus] verify_proof_weight,signed_voter_address: {:?}\",\n                    hex::encode(signed_voter_address)\n                );\n\n                return Err(\n                    ConsensusError::VerifyProof(block_height, BlockProofField::Validator).into(),\n                );\n            }\n        }\n\n        if 3 * accumulator <= 2 * total_validator_weight {\n            log::error!(\n                \"[consensus] verify_proof_weight, accumulator: {}, total: {}\",\n                accumulator,\n                total_validator_weight\n            );\n\n            return Err(ConsensusError::VerifyProof(block_height, BlockProofField::Weight).into());\n        }\n        Ok(())\n    }\n}\n\nfn mock_crypto() -> OverlordCrypto {\n    let priv_key = 
BlsPrivateKey::try_from(hex::decode(\"00000000000000000000000000000000d654c7a6747fc2e34808c1ebb1510bfb19b443d639f2fab6dc41fce9f634de37\").unwrap().as_ref()).unwrap();\n    OverlordCrypto::new(priv_key, HashMap::new(), \"muta\".into())\n}\n\nfn gen_remote_tx_hashmap(list: Vec<RichBlock>) -> SafeHashMap<Hash, SignedTransaction> {\n    let mut remote_txs = HashMap::new();\n\n    for rich_block in list.into_iter() {\n        for tx in rich_block.txs {\n            remote_txs.insert(tx.tx_hash.clone(), tx);\n        }\n    }\n\n    Arc::new(RwLock::new(remote_txs))\n}\n\nfn gen_remote_block_hashmap(list: Vec<RichBlock>) -> SafeHashMap<u64, Block> {\n    let mut remote_blocks = HashMap::new();\n    for rich_block in list.into_iter() {\n        remote_blocks.insert(rich_block.block.header.height, rich_block.block.clone());\n    }\n\n    Arc::new(RwLock::new(remote_blocks))\n}\n\nfn gen_remote_proof_hashmap(list: Vec<Proof>) -> SafeHashMap<u64, Proof> {\n    let mut remote_proof = HashMap::new();\n    for proof in list.into_iter() {\n        remote_proof.insert(proof.height, proof.clone());\n    }\n\n    Arc::new(RwLock::new(remote_proof))\n}\n\nfn mock_chained_rich_block(len: u64, gap: u64, key_tool: &KeyTool) -> (Vec<RichBlock>, Vec<Proof>) {\n    let mut list_rich_block = vec![];\n    let mut list_proof = vec![];\n\n    let genesis_rich_block = mock_genesis_rich_block();\n    list_rich_block.push(genesis_rich_block.clone());\n    // the proof of block 0 is n/a, we just stuff something here\n    list_proof.push(genesis_rich_block.clone().block.header.proof);\n    let mut last_rich_block = genesis_rich_block;\n\n    let mut current_height = 1;\n\n    let mut temp_rich_block: Vec<RichBlock> = vec![];\n\n    let mut last_proof: Proof = Proof {\n        height:     0,\n        round:      0,\n        block_hash: Hash::from_hex(\n            \"0x1122334455667788990011223344556677889900112233445566778899001122\",\n        )\n        .unwrap(),\n        signature:  
Default::default(),\n        bitmap:     Default::default(),\n    };\n\n    loop {\n        let last_block_hash = Hash::digest(last_rich_block.block.header.encode_fixed().unwrap());\n        let last_header = &last_rich_block.block.header;\n\n        let txs = mock_tx_list(3, current_height);\n        let tx_hashes: Vec<Hash> = txs.iter().map(|tx| tx.tx_hash.clone()).collect();\n        let order_root = Merkle::from_hashes(tx_hashes.clone())\n            .get_root_hash()\n            .unwrap();\n        let order_signed_transactions_hash = digest_signed_transactions(&txs).unwrap();\n\n        let mut header = BlockHeader {\n            chain_id: last_header.chain_id.clone(),\n            height: current_height,\n            exec_height: current_height,\n            prev_hash: last_block_hash.clone(),\n            timestamp: 0,\n            order_root,\n            order_signed_transactions_hash,\n            confirm_root: vec![],\n            state_root: Hash::from_empty(),\n            receipt_root: vec![],\n            cycles_used: vec![],\n            proposer: Address::from_str(\"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\").unwrap(),\n            proof: last_proof,\n            validator_version: 0,\n            validators: vec![Validator {\n                pub_key:        Hex::from_string(\n                    \"0x02ef0cb0d7bc6c18b4bea1f5908d9106522b35ab3c399369605d4242525bda7e60\"\n                        .to_owned(),\n                )\n                .unwrap()\n                .decode(),\n                propose_weight: 5,\n                vote_weight:    5,\n            }],\n        };\n\n        if last_header.height != 0 && current_height % gap == 0 {\n            temp_rich_block.iter().for_each(|rich_block| {\n                let height = rich_block.block.header.height;\n                let confirm_root = rich_block.block.header.order_root.clone();\n                let (exec_resp, receipt_root) = exec_txs(height, &rich_block.txs);\n\n                
header.exec_height = height;\n                header.confirm_root.push(confirm_root);\n                header.state_root = exec_resp.state_root;\n                header.receipt_root.push(receipt_root);\n                header.cycles_used.push(exec_resp.all_cycles_used);\n            });\n\n            temp_rich_block.clear();\n        } else if last_header.height != 0 && header.height != 1 {\n            header.exec_height -= temp_rich_block.len() as u64 + 1;\n        } else if header.height == 1 {\n            header.exec_height -= 1;\n        }\n\n        let block = Block {\n            header,\n            ordered_tx_hashes: tx_hashes,\n        };\n\n        let rich_block = RichBlock { block, txs };\n\n        list_rich_block.push(rich_block.clone());\n        temp_rich_block.push(rich_block.clone());\n        last_rich_block = rich_block.clone();\n\n        let current_block_hash = Hash::digest(rich_block.block.header.encode_fixed().unwrap());\n\n        // generate proof for current height and for next block use\n        last_proof = mock_proof(current_block_hash.clone(), current_height, 0, &key_tool);\n\n        list_proof.push(last_proof.clone());\n\n        current_height += 1;\n\n        if current_height > len {\n            break;\n        }\n    }\n\n    (list_rich_block, list_proof)\n}\n\nfn mock_genesis_rich_block() -> RichBlock {\n    let header = BlockHeader {\n        chain_id:                       Hash::from_empty(),\n        height:                         0,\n        exec_height:                    0,\n        prev_hash:                      Hash::from_empty(),\n        timestamp:                      0,\n        order_root:                     Hash::from_empty(),\n        order_signed_transactions_hash: Hash::from_empty(),\n        confirm_root:                   vec![],\n        state_root:                     Hash::from_empty(),\n        receipt_root:                   vec![],\n        cycles_used:                    vec![],\n        
proposer:                       \"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\"\n            .parse()\n            .unwrap(),\n        proof:                          Proof {\n            height:     0,\n            round:      0,\n            block_hash: Hash::from_empty(),\n            signature:  Bytes::new(),\n            bitmap:     Bytes::new(),\n        },\n        validator_version:              0,\n        validators:                     vec![Validator {\n            pub_key:        Hex::from_string(\n                \"0x02ef0cb0d7bc6c18b4bea1f5908d9106522b35ab3c399369605d4242525bda7e60\".to_owned(),\n            )\n            .unwrap()\n            .decode(),\n            propose_weight: 0,\n            vote_weight:    0,\n        }],\n    };\n    let genesis_block = Block {\n        header,\n        ordered_tx_hashes: vec![],\n    };\n\n    RichBlock {\n        block: genesis_block,\n        txs:   vec![],\n    }\n}\n\nfn get_receipt(tx: &SignedTransaction, height: u64) -> Receipt {\n    Receipt {\n        state_root: MerkleRoot::from_empty(),\n        height,\n        tx_hash: tx.tx_hash.clone(),\n        cycles_used: tx.raw.cycles_limit,\n        events: vec![],\n        response: ReceiptResponse {\n            service_name: \"sync\".to_owned(),\n            method:       \"sync_exec\".to_owned(),\n            response:     ServiceResponse::<String> {\n                code:          0,\n                succeed_data:  \"ok\".to_owned(),\n                error_message: \"\".to_owned(),\n            },\n        },\n    }\n}\n\n// gen a lot of txs\nfn mock_tx_list(num: usize, height: u64) -> Vec<SignedTransaction> {\n    let mut txs = vec![];\n\n    for i in 0..num {\n        let raw = RawTransaction {\n            chain_id:     Hash::from_empty(),\n            nonce:        Hash::digest(Bytes::from(format!(\"{}\", i))),\n            timeout:      height,\n            cycles_price: 1,\n            cycles_limit: 1,\n            request:      
TransactionRequest {\n                service_name: \"test_service\".to_owned(),\n                method:       \"test_method\".to_owned(),\n                payload:      \"test_payload\".to_owned(),\n            },\n            sender:       Address::from_pubkey_bytes(Bytes::from(\n                hex::decode(PUB_KEY_STR).unwrap(),\n            ))\n            .unwrap(),\n        };\n\n        let bytes = raw.encode_fixed().unwrap();\n\n        // sign the raw transaction with a fixed secp256k1 test key\n        let hex_privkey =\n            hex::decode(\"5ec982173d54d830b6789cbbbe43eaa2853a5ff752d1ebc1b266cf9790314f8a\")\n                .unwrap();\n        let test_privkey = Secp256k1PrivateKey::try_from(hex_privkey.as_ref()).unwrap();\n        let test_pubkey = test_privkey.pub_key();\n        let _test_address = Address::from_pubkey_bytes(test_pubkey.to_bytes()).unwrap();\n\n        let tx_hash = Hash::digest(bytes);\n        let hash_value = HashValue::try_from(tx_hash.as_bytes().as_ref())\n            .ok()\n            .unwrap();\n        let signature = test_privkey.sign_message(&hash_value);\n\n        let signed_tx = SignedTransaction {\n            raw,\n            tx_hash,\n            pubkey: test_pubkey.to_bytes(),\n            signature: signature.to_bytes(),\n        };\n\n        txs.push(signed_tx)\n    }\n\n    txs\n}\n\n// note that only the BLS private key held in KeyTool.overlord_crypto signs the\n// vote\nfn mock_proof(block_hash: Hash, height: u64, round: u64, key_tool: &KeyTool) -> Proof {\n    let vote = Vote {\n        height,\n        round,\n        vote_type: VoteType::Precommit,\n        block_hash: block_hash.as_bytes(),\n    };\n\n    let vote_hash = key_tool\n        .overlord_crypto\n        .hash(Bytes::from(rlp::encode(&vote)));\n    let bls_signature = key_tool.overlord_crypto.sign(vote_hash).unwrap();\n    let signed_vote = SignedVote {\n        voter:     key_tool.signer_node.secp_public_key.to_bytes(),\n        signature: bls_signature,\n        vote:      
vote.clone(),\n    };\n\n    let signed_voter = vec![key_tool.signer_node.secp_public_key.to_bytes()]\n        .into_iter()\n        .collect::<HashSet<Bytes>>();\n    let mut bit_map = BitVec::from_elem(3, false);\n\n    let mut authority_list: Vec<Node> = key_tool\n        .verifier_list\n        .clone()\n        .iter()\n        .map(|v| Node {\n            address:        v.pub_key.decode(),\n            propose_weight: v.propose_weight,\n            vote_weight:    v.vote_weight,\n        })\n        .collect::<Vec<_>>();\n    authority_list.sort();\n\n    for (index, node) in authority_list.iter().enumerate() {\n        if signed_voter.contains(&node.address) {\n            bit_map.set(index, true);\n        }\n    }\n\n    let aggregated_signature = AggregatedSignature {\n        signature:      key_tool\n            .overlord_crypto\n            .aggregate_signatures(vec![signed_vote.signature], vec![signed_vote.voter])\n            .unwrap(),\n        address_bitmap: Bytes::from(bit_map.to_bytes()),\n    };\n\n    let aggregated_vote = AggregatedVote {\n        signature: aggregated_signature,\n\n        vote_type: vote.vote_type,\n        height,\n        round,\n        block_hash: block_hash.as_bytes(),\n        leader: key_tool.signer_node.secp_public_key.to_bytes(),\n    };\n\n    Proof {\n        height:     aggregated_vote.height,\n        round:      0,\n        block_hash: Hash::from_bytes(aggregated_vote.block_hash).unwrap(),\n        signature:  aggregated_vote.signature.signature.clone(),\n        bitmap:     aggregated_vote.signature.address_bitmap,\n    }\n}\n\nfn exec_txs(height: u64, txs: &[SignedTransaction]) -> (ExecutorResp, MerkleRoot) {\n    let mut receipts = vec![];\n    let mut all_cycles_used = 0;\n\n    for tx in txs.iter() {\n        let receipt = get_receipt(tx, height);\n        all_cycles_used += receipt.cycles_used;\n        receipts.push(receipt);\n    }\n    let receipt_root = Merkle::from_hashes(\n        
receipts\n            .iter()\n            .map(|r| Hash::digest(r.to_owned().encode_fixed().unwrap()))\n            .collect::<Vec<_>>(),\n    )\n    .get_root_hash()\n    .unwrap_or_else(Hash::from_empty);\n\n    (\n        ExecutorResp {\n            receipts,\n            all_cycles_used,\n            state_root: MerkleRoot::from_empty(),\n        },\n        receipt_root,\n    )\n}\n\n#[derive(Clone)]\nstruct SignerNode {\n    secp_private_key: Secp256k1PrivateKey,\n    secp_public_key:  Secp256k1PublicKey,\n}\n\nimpl SignerNode {\n    pub fn new(secp_private_key: Secp256k1PrivateKey, secp_public_key: Secp256k1PublicKey) -> Self {\n        SignerNode {\n            secp_private_key,\n            secp_public_key,\n        }\n    }\n}\n\nstruct KeyTool {\n    signer_node:     SignerNode,\n    overlord_crypto: Arc<OverlordCrypto>,\n    verifier_list:   Vec<ValidatorExtend>,\n}\n\nimpl KeyTool {\n    pub fn new(\n        signer_node: SignerNode,\n        overlord_crypto: Arc<OverlordCrypto>,\n        verifier_list: Vec<ValidatorExtend>,\n    ) -> Self {\n        KeyTool {\n            signer_node,\n            overlord_crypto,\n            verifier_list,\n        }\n    }\n}\n\nfn get_mock_key_tool() -> KeyTool {\n    let hex_privkey =\n        hex::decode(\"5ec982173d54d830b6789cbbbe43eaa2853a5ff752d1ebc1b266cf9790314f8a\").unwrap();\n    let secp_privkey = Secp256k1PrivateKey::try_from(hex_privkey.as_ref()).unwrap();\n    let secp_pubkey: Secp256k1PublicKey = secp_privkey.pub_key();\n    let signer_node = SignerNode::new(secp_privkey, secp_pubkey);\n\n    // generate BLS/OverlordCrypto\n    let mut bls_priv_key = Vec::new();\n    bls_priv_key.extend_from_slice(&[0u8; 16]);\n    let mut tmp =\n        hex::decode(\"5ec982173d54d830b6789cbbbe43eaa2853a5ff752d1ebc1b266cf9790314f8a\").unwrap();\n    bls_priv_key.append(&mut tmp);\n    let bls_priv_key = BlsPrivateKey::try_from(bls_priv_key.as_ref()).unwrap();\n\n    let (bls_pub_keys, common_ref) = 
get_mock_public_keys_and_common_ref();\n\n    let mock_crypto = OverlordCrypto::new(bls_priv_key, bls_pub_keys, common_ref);\n\n    KeyTool::new(signer_node, Arc::new(mock_crypto), mock_verifier_list())\n}\n\nfn get_mock_public_keys_and_common_ref() -> (HashMap<Bytes, BlsPublicKey>, BlsCommonReference) {\n    let mut bls_pub_keys: HashMap<Bytes, BlsPublicKey> = HashMap::new();\n\n    // weight = 5\n    let bls_hex = Hex::from_string(\"0x04102947214862a503c73904deb5818298a186d68c7907bb609583192a7de6331493835e5b8281f4d9ee705537c0e765580e06f86ddce5867812fceb42eecefd209f0eddd0389d6b7b0100f00fb119ef9ab23826c6ea09aadcc76fa6cea6a32724\".to_string()\n    ).unwrap();\n    let bls_hex = hex::decode(bls_hex.as_string_trim0x()).unwrap();\n    bls_pub_keys.insert(\n        Hex::from_string(\n            \"0x02ef0cb0d7bc6c18b4bea1f5908d9106522b35ab3c399369605d4242525bda7e60\".to_owned(),\n        )\n        .unwrap()\n        .decode(),\n        BlsPublicKey::try_from(bls_hex.as_ref()).unwrap(),\n    );\n\n    // weight = 1\n    let bls_hex = Hex::from_string(\"0x0418e16bd67ce0b58a575f506967706be733c96feef19a06bb37d510000d89905f2f61b7da4d831cb1bb01e2f99833362602a0a252dfd1e95c75c1eadb0db220e3722c9a077b730e7f6cec5f4a55bfc9a4d88db3e6c27684aa8335456824070501\".to_string()\n    ).unwrap();\n    let bls_hex = hex::decode(bls_hex.as_string_trim0x()).unwrap();\n    bls_pub_keys.insert(\n        Hex::from_string(\n            \"0x03dbd1dbf3835efb4ec34a360ee671ee1d22425425368edfc5b9ffafc812e86200\".to_owned(),\n        )\n        .unwrap()\n        .decode(),\n        BlsPublicKey::try_from(bls_hex.as_ref()).unwrap(),\n    );\n\n    // weight = 1\n    let bls_hex = Hex::from_string(\"0x040944276f414c46330227f2c0c5a998aba3d400ed19cfc2d31d3e7fcc442ce9f91ea86e172dc3c1b6cedc364bd52ba1cf074529e52337cd80ab32a196a3d42ab46eee25120b44fdd2b5c4268bf3b84c72d068ea83d0530a5461dc30b6a63a60e9\".to_string()\n    ).unwrap();\n    let bls_hex = hex::decode(bls_hex.as_string_trim0x()).unwrap();\n    
bls_pub_keys.insert(\n        Hex::from_string(\n            \"0x03cba4ae147eb24891d78c9527798577419b7db913b4b03ba548c28f40c5841166\".to_owned(),\n        )\n        .unwrap()\n        .decode(),\n        BlsPublicKey::try_from(bls_hex.as_ref()).unwrap(),\n    );\n\n    let hex_common_ref = hex::decode(\"6c747758636859487038\").unwrap();\n    let common_ref: BlsCommonReference =\n        std::str::from_utf8(hex_common_ref.as_ref()).unwrap().into();\n\n    (bls_pub_keys, common_ref)\n}\n\nfn mock_verifier_list() -> Vec<ValidatorExtend> {\n    vec![\n        ValidatorExtend {\n            bls_pub_key: Hex::from_string(\"0x04102947214862a503c73904deb5818298a186d68c7907bb609583192a7de6331493835e5b8281f4d9ee705537c0e765580e06f86ddce5867812fceb42eecefd209f0eddd0389d6b7b0100f00fb119ef9ab23826c6ea09aadcc76fa6cea6a32724\".to_owned()).unwrap(),\n            pub_key: Hex::from_string(\"0x02ef0cb0d7bc6c18b4bea1f5908d9106522b35ab3c399369605d4242525bda7e60\".to_owned()).unwrap(),\n            address: Address::from_str(\"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\").unwrap(),\n            propose_weight: 5,\n            vote_weight:    5,\n        },\n        ValidatorExtend {\n            bls_pub_key: Hex::from_string(\"0x0418e16bd67ce0b58a575f506967706be733c96feef19a06bb37d510000d89905f2f61b7da4d831cb1bb01e2f99833362602a0a252dfd1e95c75c1eadb0db220e3722c9a077b730e7f6cec5f4a55bfc9a4d88db3e6c27684aa8335456824070501\".to_owned()).unwrap(),\n            pub_key: Hex::from_string(\"0x03dbd1dbf3835efb4ec34a360ee671ee1d22425425368edfc5b9ffafc812e86200\".to_owned()).unwrap(),\n            address: Address::from_str(\"muta15a8a9ksxe3hhjpw3l7wz7ry778qg8h9wz8y35p\").unwrap(),\n            propose_weight: 1,\n            vote_weight:    1,\n        },\n        ValidatorExtend {\n            bls_pub_key: 
Hex::from_string(\"0x040944276f414c46330227f2c0c5a998aba3d400ed19cfc2d31d3e7fcc442ce9f91ea86e172dc3c1b6cedc364bd52ba1cf074529e52337cd80ab32a196a3d42ab46eee25120b44fdd2b5c4268bf3b84c72d068ea83d0530a5461dc30b6a63a60e9\".to_owned()).unwrap(),\n            pub_key: Hex::from_string(\"0x03cba4ae147eb24891d78c9527798577419b7db913b4b03ba548c28f40c5841166\".to_owned()).unwrap(),\n            address: Address::from_str(\"muta1h99h6f54vytatam3ckftrmvcdpn4jlmnwm6hl0\").unwrap(),\n            propose_weight: 1,\n            vote_weight:    1,\n        },\n    ]\n}\n\n#[rustfmt::skip]\n// {\n//   \"common_ref\": \"0x6c747758636859487038\",\n//   \"keypairs\": [\n//     {\n//       \"index\": 1,\n//       \"private_key\": \"0x5ec982173d54d830b6789cbbbe43eaa2853a5ff752d1ebc1b266cf9790314f8a\",\n//       \"public_key\": \"0x02ef0cb0d7bc6c18b4bea1f5908d9106522b35ab3c399369605d4242525bda7e60\",\n//       \"address\": \"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\",\n//       \"peer_id\": \"QmTEJkB5QKWsEq37huryZZfVvqBKb54sHnKn9TQcA6j3n9\",\n//       \"bls_public_key\": \"0x04102947214862a503c73904deb5818298a186d68c7907bb609583192a7de6331493835e5b8281f4d9ee705537c0e765580e06f86ddce5867812fceb42eecefd209f0eddd0389d6b7b0100f00fb119ef9ab23826c6ea09aadcc76fa6cea6a32724\"\n//     },\n//     {\n//       \"index\": 2,\n//       \"private_key\": \"0x8dfbd3c689308d29c058cce163984a2ae8d5fc5191ce6b1e18bd1d7b95a8c632\",\n//       \"public_key\": \"0x03dbd1dbf3835efb4ec34a360ee671ee1d22425425368edfc5b9ffafc812e86200\",\n//       \"address\": \"muta15a8a9ksxe3hhjpw3l7wz7ry778qg8h9wz8y35p\",\n//       \"peer_id\": \"QmaEX2TxiC2YJufqcHRigVpnoxahX3hdR1gsFjD5Yf7K1Z\",\n//       \"bls_public_key\": \"0x0418e16bd67ce0b58a575f506967706be733c96feef19a06bb37d510000d89905f2f61b7da4d831cb1bb01e2f99833362602a0a252dfd1e95c75c1eadb0db220e3722c9a077b730e7f6cec5f4a55bfc9a4d88db3e6c27684aa8335456824070501\"\n//     },\n//     {\n//       \"index\": 3,\n//       \"private_key\": 
\"0xfc659f0ed09a4ba0d2d1836af7520d1a050a7739d598dc98517bbbe7a2e38124\",\n//       \"public_key\": \"0x03cba4ae147eb24891d78c9527798577419b7db913b4b03ba548c28f40c5841166\",\n//       \"address\": \"muta1h99h6f54vytatam3ckftrmvcdpn4jlmnwm6hl0\",\n//       \"peer_id\": \"QmbRmcYD3j2zMr27C6Ga2Bo5xB9t37NyAt36cSvUGYXE2B\",\n//       \"bls_public_key\": \"0x040944276f414c46330227f2c0c5a998aba3d400ed19cfc2d31d3e7fcc442ce9f91ea86e172dc3c1b6cedc364bd52ba1cf074529e52337cd80ab32a196a3d42ab46eee25120b44fdd2b5c4268bf3b84c72d068ea83d0530a5461dc30b6a63a60e9\"\n//     },\n//     {\n//       \"index\": 4,\n//       \"private_key\": \"0x7c01d6539419cffc78ab0779dabe88fad3f70c20ef47a562ac4ba5b7bd704b8e\",\n//       \"public_key\": \"0x0245a0c291f56c2c5751db1c0bf1ed986e703d29a0fe023df770fe92c7c2347316\",\n//       \"address\": \"muta16xukzz73l5r6vulk9q697tave8c5mfu33mwud6\",\n//       \"peer_id\": \"QmeqYprgrXwxzLP7qAFiiJ3Kfi3F6H9PPH2qPCEHr9cRYW\",\n//       \"bls_public_key\": \"0x041342e9a35278b298a67006cd98d663053e3f7eb72a08ffe9835074e430b2112a866c1c8d981edcd793cb16d459fc952b0464007d876355eea671e74727588bae69740c6a0b49d8142b7b0821a78acd34b4d8012b9ef69444a476e03d5fea5330\"\n//     }\n//   ]\n// }\n\nfn assert_sync(status: CurrentConsensusStatus, latest_block: Block) {\n    let exec_gap = latest_block.header.height - latest_block.header.exec_height;\n\n    assert_eq!(status.latest_committed_height, latest_block.header.height);\n    assert_eq!(status.exec_height, latest_block.header.height);\n    assert_eq!(status.current_proof.height, status.latest_committed_height);\n    assert_eq!(status.list_confirm_root.len(), exec_gap as usize);\n    assert_eq!(status.list_cycles_used.len(), exec_gap as usize);\n    assert_eq!(status.list_receipt_root.len(), exec_gap as usize);\n}\n"
  },
  {
    "path": "core/consensus/src/util.rs",
    "content": "use std::collections::HashMap;\nuse std::convert::TryFrom;\nuse std::error::Error;\nuse std::time::{SystemTime, UNIX_EPOCH};\n\nuse bytes::buf::BufMut;\nuse bytes::BytesMut;\nuse overlord::Crypto;\nuse parking_lot::RwLock;\n\nuse crate::ConsensusError;\nuse common_crypto::{\n    BlsCommonReference, BlsPrivateKey, BlsPublicKey, BlsSignature, BlsSignatureVerify, HashValue,\n    PrivateKey, Signature,\n};\nuse protocol::fixed_codec::FixedCodec;\nuse protocol::traits::Context;\nuse protocol::types::{Address, Hash, Hex, MerkleRoot, SignedTransaction};\nuse protocol::{Bytes, ProtocolError, ProtocolResult};\n\npub fn time_now() -> u64 {\n    SystemTime::now()\n        .duration_since(UNIX_EPOCH)\n        .unwrap()\n        .as_millis() as u64\n}\n\npub struct OverlordCrypto {\n    private_key: BlsPrivateKey,\n    addr_pubkey: RwLock<HashMap<Bytes, BlsPublicKey>>,\n    common_ref:  BlsCommonReference,\n}\n\nimpl Crypto for OverlordCrypto {\n    fn hash(&self, msg: Bytes) -> Bytes {\n        Hash::digest(msg).as_bytes()\n    }\n\n    fn sign(&self, hash: Bytes) -> Result<Bytes, Box<dyn Error + Send>> {\n        let hash = HashValue::try_from(hash.as_ref()).map_err(|_| {\n            ProtocolError::from(ConsensusError::Other(\n                \"failed to convert hash value\".to_string(),\n            ))\n        })?;\n        let sig = self.private_key.sign_message(&hash);\n        Ok(sig.to_bytes())\n    }\n\n    fn verify_signature(\n        &self,\n        signature: Bytes,\n        hash: Bytes,\n        voter: Bytes,\n    ) -> Result<(), Box<dyn Error + Send>> {\n        let map = self.addr_pubkey.read();\n        let hash = HashValue::try_from(hash.as_ref()).map_err(|_| {\n            ProtocolError::from(ConsensusError::Other(\n                \"failed to convert hash value\".to_string(),\n            ))\n        })?;\n        let pub_key = map.get(&voter).ok_or_else(|| {\n            ProtocolError::from(ConsensusError::Other(\"lose public 
key\".to_string()))\n        })?;\n        let signature = BlsSignature::try_from(signature.as_ref())\n            .map_err(|e| ProtocolError::from(ConsensusError::CryptoErr(Box::new(e))))?;\n\n        signature\n            .verify(&hash, &pub_key, &self.common_ref)\n            .map_err(|e| ProtocolError::from(ConsensusError::CryptoErr(Box::new(e))))?;\n        Ok(())\n    }\n\n    fn aggregate_signatures(\n        &self,\n        signatures: Vec<Bytes>,\n        voters: Vec<Bytes>,\n    ) -> Result<Bytes, Box<dyn Error + Send>> {\n        if signatures.len() != voters.len() {\n            return Err(ProtocolError::from(ConsensusError::Other(\n                \"signatures length does not match voters length\".to_string(),\n            ))\n            .into());\n        }\n\n        let map = self.addr_pubkey.read();\n        let mut sigs_pubkeys = Vec::with_capacity(signatures.len());\n        for (sig, addr) in signatures.iter().zip(voters.iter()) {\n            let signature = BlsSignature::try_from(sig.as_ref())\n                .map_err(|e| ProtocolError::from(ConsensusError::CryptoErr(Box::new(e))))?;\n\n            let pub_key = map.get(addr).ok_or_else(|| {\n                ProtocolError::from(ConsensusError::Other(\"missing public key\".to_string()))\n            })?;\n\n            sigs_pubkeys.push((signature, pub_key.to_owned()));\n        }\n\n        let sig = BlsSignature::combine(sigs_pubkeys);\n        Ok(sig.to_bytes())\n    }\n\n    fn verify_aggregated_signature(\n        &self,\n        aggregated_signature: Bytes,\n        hash: Bytes,\n        voters: Vec<Bytes>,\n    ) -> Result<(), Box<dyn Error + Send>> {\n        let map = self.addr_pubkey.read();\n        let mut pub_keys = Vec::new();\n        for addr in voters.iter() {\n            let pub_key = map.get(addr).ok_or_else(|| {\n                ProtocolError::from(ConsensusError::Other(\"missing public key\".to_string()))\n            })?;\n            pub_keys.push(pub_key.clone());\n        
}\n\n        self.inner_verify_aggregated_signature(hash, pub_keys, aggregated_signature)?;\n        Ok(())\n    }\n}\n\nimpl OverlordCrypto {\n    pub fn new(\n        private_key: BlsPrivateKey,\n        pubkey_to_bls_pubkey: HashMap<Bytes, BlsPublicKey>,\n        common_ref: BlsCommonReference,\n    ) -> Self {\n        OverlordCrypto {\n            addr_pubkey: RwLock::new(pubkey_to_bls_pubkey),\n            private_key,\n            common_ref,\n        }\n    }\n\n    pub fn update(&self, new_addr_pubkey: HashMap<Bytes, BlsPublicKey>) {\n        let mut map = self.addr_pubkey.write();\n\n        *map = new_addr_pubkey;\n    }\n\n    pub fn inner_verify_aggregated_signature(\n        &self,\n        hash: Bytes,\n        pub_keys: Vec<BlsPublicKey>,\n        signature: Bytes,\n    ) -> ProtocolResult<()> {\n        let aggregate_key = BlsPublicKey::aggregate(pub_keys);\n        let aggregated_signature = BlsSignature::try_from(signature.as_ref())\n            .map_err(|e| ProtocolError::from(ConsensusError::CryptoErr(Box::new(e))))?;\n        let hash = HashValue::try_from(hash.as_ref()).map_err(|_| {\n            ProtocolError::from(ConsensusError::Other(\n                \"failed to convert hash value\".to_string(),\n            ))\n        })?;\n\n        aggregated_signature\n            .verify(&hash, &aggregate_key, &self.common_ref)\n            .map_err(|e| ProtocolError::from(ConsensusError::CryptoErr(Box::new(e))))?;\n        Ok(())\n    }\n}\n\n#[derive(Clone, Debug)]\npub struct ExecuteInfo {\n    pub ctx:          Context,\n    pub height:       u64,\n    pub chain_id:     Hash,\n    pub block_hash:   Hash,\n    pub signed_txs:   Vec<SignedTransaction>,\n    pub order_root:   MerkleRoot,\n    pub cycles_price: u64,\n    pub proposer:     Address,\n    pub timestamp:    u64,\n    pub cycles_limit: u64,\n}\n\npub fn check_list_roots<T: Eq>(cache_roots: &[T], block_roots: &[T]) -> bool {\n    block_roots.len() <= cache_roots.len()\n        && 
cache_roots\n            .iter()\n            .zip(block_roots.iter())\n            .all(|(c_root, e_root)| c_root == e_root)\n}\n\npub fn digest_signed_transactions(signed_txs: &[SignedTransaction]) -> ProtocolResult<Hash> {\n    if signed_txs.is_empty() {\n        return Ok(Hash::from_empty());\n    }\n\n    let mut list_bytes = BytesMut::new();\n\n    for signed_tx in signed_txs.iter() {\n        let bytes = signed_tx.encode_fixed()?;\n        list_bytes.put(bytes);\n    }\n\n    Ok(Hash::digest(list_bytes.freeze()))\n}\n\npub fn convert_hex_to_bls_pubkeys(hex: Hex) -> ProtocolResult<BlsPublicKey> {\n    let hex_pubkey = hex::decode(hex.as_string_trim0x())\n        .map_err(|e| ConsensusError::Other(format!(\"from hex error {:?}\", e)))?;\n    let ret = BlsPublicKey::try_from(hex_pubkey.as_ref())\n        .map_err(|e| ConsensusError::CryptoErr(Box::new(e)))?;\n    Ok(ret)\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_bls_amcl() {\n        let private_keys = vec![\n            hex::decode(\"000000000000000000000000000000001abd6ffdb44427d9e1fcb6f84e7fe7d98f2b5b205b30a94992ec24d94bb0c970\").unwrap(),\n            hex::decode(\"00000000000000000000000000000000320b11d7c1ae66fdad1b4a75221244ae2d84903d3548c581d7d30dc135aac817\").unwrap(),\n            hex::decode(\"000000000000000000000000000000006a41e900d0426e615ca9d9393e6792baf9bda4398d5d407e59f77cb6c6f393cc\").unwrap(),\n            hex::decode(\"00000000000000000000000000000000125d81e0eb0a9c3746d868bf3b4f07760fdd430daded41d92f53b4e484ef3415\").unwrap(),\n        ];\n\n        let public_keys = vec![\n            hex::decode(\"041054fe9a65be0891094ed37fb3655e3ffb12353bc0a1b4f8673b52ad65d1ca481780cf7e988eb8dcdc05d8352f03605b0d11afb2525b3f1b55ec694509248bcfead39cbb292725d710e2a509c77ed051d1d49e15e429cf6d12b9be7c02179612\").unwrap(),\n            
hex::decode(\"040c15c82ed07dc866ab7c3af3a070eb4340ac0439bf12bb49cbed5797d52707e009f7c17414777b0213b9a55c8a5c08290ce40c366d59322db418b7ff41277090bd25614174763c9fd725ede1f65f3e61ca9acdb35f59e33d556e738add14d536\").unwrap(),\n            hex::decode(\"040b3118acefdfbb11ded262a7f3c90dfca4fbc0200a92b4f6bb80210ab85e39f79458f7d47f7cb06864df0571e7591a4e0858df0b52a4c3ae19ae3adc32e1da0ec4cbdca108365ee433becdb1ccebb1b339647788dfad94ebae1cbd770fcfa4e5\").unwrap(),\n            hex::decode(\"040709f204e3ec5b8bdd9f2bb6edc9cb1704fc1e4952661ba7532ea8e37f3b159b8d41987ee6707d32bdf494e2deb00b7f049a4670a5ce1ad8e429fcacc5bbc69cb03b71a7f1d831d0b47dda5e62642d420ff0a545950cb1db19d42fe04e2c91d2\").unwrap(),\n        ];\n\n        let msg = Hash::digest(Bytes::from(\"muta-consensus\"));\n        let hash = HashValue::try_from(msg.as_bytes().as_ref()).unwrap();\n        let mut sigs_and_pub_keys = Vec::new();\n        for i in 0..3 {\n            let sig = BlsPrivateKey::try_from(private_keys[i].as_ref())\n                .unwrap()\n                .sign_message(&hash);\n            let pub_key = BlsPublicKey::try_from(public_keys[i].as_ref()).unwrap();\n            sigs_and_pub_keys.push((sig, pub_key));\n        }\n\n        let signature = BlsSignature::combine(sigs_and_pub_keys.clone());\n        let aggregate_key = BlsPublicKey::aggregate(\n            sigs_and_pub_keys\n                .iter()\n                .map(|s| s.1.clone())\n                .collect::<Vec<_>>(),\n        );\n\n        let res = signature.verify(&hash, &aggregate_key, &\"muta\".into());\n        println!(\"{:?}\", res);\n        assert!(res.is_ok());\n    }\n\n    #[test]\n    fn test_aggregate_pubkeys_order() {\n        let public_keys = vec![\n            hex::decode(\"041054fe9a65be0891094ed37fb3655e3ffb12353bc0a1b4f8673b52ad65d1ca481780cf7e988eb8dcdc05d8352f03605b0d11afb2525b3f1b55ec694509248bcfead39cbb292725d710e2a509c77ed051d1d49e15e429cf6d12b9be7c02179612\").unwrap(),\n            
hex::decode(\"040c15c82ed07dc866ab7c3af3a070eb4340ac0439bf12bb49cbed5797d52707e009f7c17414777b0213b9a55c8a5c08290ce40c366d59322db418b7ff41277090bd25614174763c9fd725ede1f65f3e61ca9acdb35f59e33d556e738add14d536\").unwrap(),\n            hex::decode(\"040b3118acefdfbb11ded262a7f3c90dfca4fbc0200a92b4f6bb80210ab85e39f79458f7d47f7cb06864df0571e7591a4e0858df0b52a4c3ae19ae3adc32e1da0ec4cbdca108365ee433becdb1ccebb1b339647788dfad94ebae1cbd770fcfa4e5\").unwrap(),\n            hex::decode(\"040709f204e3ec5b8bdd9f2bb6edc9cb1704fc1e4952661ba7532ea8e37f3b159b8d41987ee6707d32bdf494e2deb00b7f049a4670a5ce1ad8e429fcacc5bbc69cb03b71a7f1d831d0b47dda5e62642d420ff0a545950cb1db19d42fe04e2c91d2\").unwrap(),\n        ];\n        let mut pub_keys = public_keys\n            .into_iter()\n            .map(|pk| BlsPublicKey::try_from(pk.as_ref()).unwrap())\n            .collect::<Vec<_>>();\n        let pk_1 = BlsPublicKey::aggregate(pub_keys.clone());\n        pub_keys.reverse();\n        let pk_2 = BlsPublicKey::aggregate(pub_keys);\n        assert_eq!(pk_1, pk_2);\n    }\n\n    #[test]\n    fn test_zip_roots() {\n        let roots_1 = vec![1, 2, 3, 4, 5];\n        let roots_2 = vec![1, 2, 3];\n        let roots_3 = vec![];\n        let roots_4 = vec![1, 2];\n        let roots_5 = vec![3, 4, 5, 6, 8];\n\n        assert!(check_list_roots(&roots_1, &roots_2));\n        assert!(!check_list_roots(&roots_3, &roots_2));\n        assert!(!check_list_roots(&roots_4, &roots_2));\n        assert!(!check_list_roots(&roots_5, &roots_2));\n    }\n\n    #[test]\n    fn test_convert_from_hex() {\n        let hex_str = \"0x04188ef9488c19458a963cc57b567adde7db8f8b6bec392d5cb7b67b0abc1ed6cd966edc451f6ac2ef38079460eb965e890d1f576e4039a20467820237cda753f07a8b8febae1ec052190973a1bcf00690ea8fc0168b3fbbccd1c4e402eda5ef22\";\n        assert!(\n            convert_hex_to_bls_pubkeys(Hex::from_string(String::from(hex_str)).unwrap()).is_ok()\n        );\n    }\n}\n"
  },
  {
    "path": "core/consensus/src/wal.rs",
    "content": "use std::fs;\nuse std::io::{ErrorKind, Read, Write};\nuse std::path::{Path, PathBuf};\n\nuse common_apm::muta_apm;\nuse protocol::codec::ProtocolCodecSync;\nuse protocol::types::{Bytes, Hash, SignedTransaction};\nuse protocol::ProtocolResult;\n\nuse crate::fixed_types::FixedSignedTxs;\nuse crate::ConsensusError;\nuse bytes::{BufMut, BytesMut};\nuse creep::Context;\nuse std::str::FromStr;\nuse std::time::SystemTime;\n\n#[derive(Debug)]\npub struct SignedTxsWAL {\n    path: PathBuf,\n}\n\nimpl SignedTxsWAL {\n    pub fn new<P: AsRef<Path>>(path: P) -> Self {\n        if !path.as_ref().exists() {\n            fs::create_dir_all(&path).expect(\"Failed to create wal directory\");\n        }\n\n        SignedTxsWAL {\n            path: path.as_ref().to_path_buf(),\n        }\n    }\n\n    pub fn save(\n        &self,\n        height: u64,\n        ordered_signed_transactions_hash: Hash,\n        txs: Vec<SignedTransaction>,\n    ) -> ProtocolResult<()> {\n        let mut wal_path = self.path.clone();\n        wal_path.push(height.to_string());\n        if !wal_path.exists() {\n            fs::create_dir(&wal_path).map_err(ConsensusError::WALErr)?;\n        }\n\n        wal_path.push(ordered_signed_transactions_hash.as_hex());\n        wal_path.set_extension(\"txt\");\n\n        let mut wal_file = match fs::OpenOptions::new()\n            .read(true)\n            .write(true)\n            .create(true)\n            .open(wal_path)\n        {\n            Ok(file) => file,\n            Err(err) => {\n                if err.kind() == ErrorKind::AlreadyExists {\n                    return Ok(());\n                } else {\n                    return Err(ConsensusError::WALErr(err).into());\n                }\n            }\n        };\n\n        let data = FixedSignedTxs::new(txs).encode_sync()?;\n        wal_file\n            .write_all(data.as_ref())\n            .map_err(ConsensusError::WALErr)?;\n        Ok(())\n    }\n\n    pub fn available_height(&self) 
-> ProtocolResult<Vec<u64>> {\n        let dir_path = self.path.clone();\n        let mut availables = vec![];\n        for item in fs::read_dir(dir_path).map_err(ConsensusError::WALErr)? {\n            let item = item.map_err(ConsensusError::WALErr)?;\n\n            if item.path().is_dir() {\n                availables.push(item.file_name().to_str().unwrap().parse().unwrap())\n            }\n        }\n        Ok(availables)\n    }\n\n    pub fn remove_all(&self) -> ProtocolResult<()> {\n        for height in self.available_height()? {\n            self.remove(height)?\n        }\n        Ok(())\n    }\n\n    pub fn load(\n        &self,\n        height: u64,\n        ordered_signed_transactions_hash: Hash,\n    ) -> ProtocolResult<Vec<SignedTransaction>> {\n        let mut file_path = self.path.clone();\n        file_path.push(height.to_string());\n        file_path.push(ordered_signed_transactions_hash.as_hex());\n        file_path.set_extension(\"txt\");\n\n        self.recover_stxs(file_path)\n    }\n\n    pub fn load_by_height(&self, height: u64) -> Vec<SignedTransaction> {\n        let mut dir = self.path.clone();\n        dir.push(height.to_string());\n        let dir = if let Ok(res) = fs::read_dir(dir) {\n            res\n        } else {\n            return Vec::new();\n        };\n\n        let mut ret = Vec::new();\n        for entry in dir {\n            if let Ok(file_dir) = entry {\n                if let Ok(mut stxs) = self.recover_stxs(file_dir.path()) {\n                    ret.append(&mut stxs);\n                }\n            }\n        }\n        ret\n    }\n\n    pub fn remove(&self, committed_height: u64) -> ProtocolResult<()> {\n        for entry in fs::read_dir(&self.path).map_err(ConsensusError::WALErr)? 
{\n            let folder = entry.map_err(ConsensusError::WALErr)?.path();\n            let folder_name = folder\n                .file_stem()\n                .ok_or_else(|| ConsensusError::Other(\"file stem error\".to_string()))?\n                .to_os_string()\n                .clone();\n            let folder_name = folder_name.into_string().map_err(|err| {\n                ConsensusError::Other(format!(\"transfer os string to string error {:?}\", err))\n            })?;\n            let height = folder_name.parse::<u64>().map_err(|err| {\n                ConsensusError::Other(format!(\"parse folder name {:?} error {:?}\", folder, err))\n            })?;\n\n            if height <= committed_height {\n                fs::remove_dir_all(folder).map_err(ConsensusError::WALErr)?;\n            }\n        }\n        Ok(())\n    }\n\n    fn recover_stxs(&self, file_path: PathBuf) -> ProtocolResult<Vec<SignedTransaction>> {\n        let mut read_buf = Vec::new();\n        let mut file = fs::File::open(&file_path).map_err(ConsensusError::WALErr)?;\n        let _ = file\n            .read_to_end(&mut read_buf)\n            .map_err(ConsensusError::WALErr)?;\n        let txs = FixedSignedTxs::decode_sync(Bytes::from(read_buf))?;\n        Ok(txs.inner)\n    }\n}\n\n#[derive(Debug)]\npub struct ConsensusWal {\n    path: PathBuf,\n}\n\nimpl ConsensusWal {\n    pub fn new<P: AsRef<Path>>(path: P) -> Self {\n        if !path.as_ref().exists() {\n            fs::create_dir_all(&path).expect(\"Failed to create wal directory\");\n        }\n\n        ConsensusWal {\n            path: path.as_ref().to_path_buf(),\n        }\n    }\n\n    #[muta_apm::derive::tracing_span(kind = \"consensus_wal\")]\n    pub fn update_overlord_wal(&self, ctx: Context, info: Bytes) -> ProtocolResult<()> {\n        // 1st, make sure the dir exists\n        let dir_path = self.path.clone();\n        if !dir_path.exists() {\n            fs::create_dir(&dir_path).map_err(ConsensusError::WALErr)?;\n      
  }\n\n        // 2nd, write info into file\n        let check_sum = Hash::digest(info.clone());\n\n        let mut content = BytesMut::new();\n        content.put(check_sum.as_bytes());\n        content.put(info);\n\n        let (data_path, timestamp) = {\n            loop {\n                let timestamp = SystemTime::now()\n                    .duration_since(SystemTime::UNIX_EPOCH)\n                    .map_err(ConsensusError::SystemTime)?;\n\n                let timestamp = timestamp.as_millis();\n\n                let mut data_path = dir_path.clone();\n\n                data_path.push(timestamp.to_string());\n\n                if !data_path.exists() {\n                    break (data_path, timestamp);\n                }\n            }\n        };\n\n        let mut data_file = match fs::OpenOptions::new()\n            .read(true)\n            .write(true)\n            .create(true)\n            .open(data_path)\n        {\n            Ok(file) => file,\n            Err(err) => {\n                if err.kind() == ErrorKind::AlreadyExists {\n                    return Ok(());\n                } else {\n                    return Err(ConsensusError::WALErr(err).into());\n                }\n            }\n        };\n\n        data_file\n            .write_all(content.as_ref())\n            .map_err(ConsensusError::WALErr)?;\n\n        // 3rd, we can safely clean other old wal files\n        for item in fs::read_dir(dir_path).map_err(ConsensusError::WALErr)? 
{\n            let item = item.map_err(ConsensusError::WALErr)?;\n\n            let file_name = item\n                .file_name()\n                .to_str()\n                .ok_or(ConsensusError::FileNameTimestamp)?\n                .to_owned();\n\n            let file_name_timestamp = u128::from_str(file_name.as_str())\n                .map_err(|_| ConsensusError::FileNameTimestamp)?;\n\n            if file_name_timestamp < timestamp {\n                fs::remove_file(item.path()).map_err(ConsensusError::WALErr)?;\n            }\n        }\n\n        Ok(())\n    }\n\n    #[muta_apm::derive::tracing_span(kind = \"consensus_wal\")]\n    pub fn load_overlord_wal(&self, ctx: Context) -> ProtocolResult<Bytes> {\n        // 1st, make sure the wal directory exists\n        let dir_path = self.path.clone();\n        if !dir_path.exists() {\n            return Err(ConsensusError::ConsensusWalDirNotExist.into());\n        }\n\n        // 2nd, read all log files and sort by the timestamp in their names\n        let files = fs::read_dir(dir_path.clone()).map_err(ConsensusError::WALErr)?;\n\n        let mut file_names_timestamps = files\n            .filter_map(|item| {\n                let item = item.ok()?;\n                let file_name = item.file_name();\n                let file_name = file_name.to_str()?;\n\n                let file_name_timestamp = u128::from_str(file_name).ok()?;\n\n                Some(file_name_timestamp)\n            })\n            .collect::<Vec<_>>();\n\n        file_names_timestamps.sort_by_key(|&b| std::cmp::Reverse(b));\n\n        // 3rd, get a latest and valid wal if possible\n        let mut index = 0;\n        let content = loop {\n            if index >= file_names_timestamps.len() {\n                break None;\n            }\n\n            let file_name_timestamp = file_names_timestamps[index];\n\n            let mut log_path = dir_path.clone();\n            log_path.push(file_name_timestamp.to_string());\n\n            let mut read_buf = Vec::new();\n            let mut file 
= fs::File::open(&log_path).map_err(ConsensusError::WALErr)?;\n            let res = file.read_to_end(&mut read_buf);\n            if res.is_err() {\n                // Skip unreadable files instead of looping on them forever.\n                index += 1;\n                continue;\n            }\n\n            let mut info = Bytes::from(read_buf);\n\n            if info.len() < Hash::default().as_bytes().len() {\n                // Too short to hold a checksum; try the next file.\n                index += 1;\n                continue;\n            }\n\n            let content = info.split_off(Hash::default().as_bytes().len());\n\n            if info == Hash::digest(content.clone()).as_bytes() {\n                break Some(content);\n            } else {\n                index += 1;\n            }\n        };\n\n        content.ok_or_else(|| ConsensusError::ConsensusWalNoWalFile.into())\n    }\n\n    pub fn clear(&self) -> ProtocolResult<()> {\n        let dir_path = self.path.clone();\n        if !dir_path.exists() {\n            return Ok(());\n        }\n\n        for item in fs::read_dir(dir_path).map_err(ConsensusError::WALErr)? {\n            let item = item.map_err(ConsensusError::WALErr)?;\n\n            fs::remove_file(item.path()).map_err(ConsensusError::WALErr)?;\n        }\n        Ok(())\n    }\n}\n\n#[rustfmt::skip]\n/// Bench in Intel(R) Core(TM) i7-4770HQ CPU @ 2.20GHz (8 x 2200):\n/// test wal::test::bench_save_wal_1000_txs  ... bench:   2,346,611 ns/iter (+/- 754,074)\n/// test wal::test::bench_save_wal_2000_txs  ... bench:   4,759,015 ns/iter (+/- 460,748)\n/// test wal::test::bench_save_wal_4000_txs  ... bench:   9,725,284 ns/iter (+/- 452,143)\n/// test wal::test::bench_save_wal_8000_txs  ... bench:  19,971,012 ns/iter (+/- 1,620,755)\n/// test wal::test::bench_save_wal_16000_txs ... bench:  41,576,328 ns/iter (+/- 2,547,323)\n/// test wal::test::bench_txs_prost_encode   ... bench:  40,020,365 ns/iter (+/- 2,800,361)\n/// test wal::test::bench_txs_rlp_encode     ... 
bench:  40,792,370 ns/iter (+/- 1,908,695)\n\n#[cfg(test)]\nmod tests {\n    extern crate test;\n\n    use rand::random;\n    use test::Bencher;\n\n    use protocol::types::{Address, Hash, RawTransaction, TransactionRequest};\n    use protocol::Bytes;\n\n    use super::*;\n\n    static FULL_TXS_PATH: &str = \"./free-space/wal/txs\";\n\n    static FULL_CONSENSUS_PATH: &str = \"./free-space/wal/consensus\";\n\n    fn mock_hash() -> Hash {\n        Hash::digest(get_random_bytes(10))\n    }\n\n    fn mock_address() -> Address {\n        let hash = mock_hash();\n        Address::from_hash(hash).unwrap()\n    }\n\n    fn mock_raw_tx() -> RawTransaction {\n        RawTransaction {\n            chain_id:     mock_hash(),\n            nonce:        mock_hash(),\n            timeout:      100,\n            cycles_price: 1,\n            cycles_limit: 100,\n            request:      mock_transaction_request(),\n            sender:       mock_address(),\n        }\n    }\n\n    pub fn mock_transaction_request() -> TransactionRequest {\n        TransactionRequest {\n            service_name: \"mock-service\".to_owned(),\n            method:       \"mock-method\".to_owned(),\n            payload:      \"mock-payload\".to_owned(),\n        }\n    }\n\n    pub fn mock_sign_tx() -> SignedTransaction {\n        SignedTransaction {\n            raw:       mock_raw_tx(),\n            tx_hash:   mock_hash(),\n            pubkey:    Default::default(),\n            signature: Default::default(),\n        }\n    }\n\n    pub fn mock_wal_txs(size: usize) -> Vec<SignedTransaction> {\n        (0..size).map(|_| mock_sign_tx()).collect::<Vec<_>>()\n    }\n\n    pub fn get_random_bytes(len: usize) -> Bytes {\n        let vec: Vec<u8> = (0..len).map(|_| random::<u8>()).collect();\n        Bytes::from(vec)\n    }\n\n    #[test]\n    fn test_txs_wal() {\n        // Ignore the error: the directory may not exist on a fresh checkout.\n        let _ = fs::remove_dir_all(PathBuf::from_str(FULL_TXS_PATH).unwrap());\n\n        let wal = SignedTxsWAL::new(FULL_TXS_PATH.to_string());\n    
    let txs_01 = mock_wal_txs(100);\n        let hash_01 = Hash::digest(Bytes::from(rlp::encode_list(&txs_01)));\n        wal.save(1u64, hash_01.clone(), txs_01.clone()).unwrap();\n        let txs_02 = mock_wal_txs(100);\n        let hash_02 = Hash::digest(Bytes::from(rlp::encode_list(&txs_02)));\n        wal.save(3u64, hash_02.clone(), txs_02.clone()).unwrap();\n\n        let txs_03 = mock_wal_txs(100);\n        let hash_03 = Hash::digest(Bytes::from(rlp::encode_list(&txs_03)));\n        wal.save(3u64, hash_03, txs_03.clone()).unwrap();\n\n        let res = wal.load_by_height(3);\n        assert_eq!(res.len(), 200);\n\n        for tx in res.iter() {\n            assert!(txs_02.contains(tx) || txs_03.contains(tx));\n        }\n\n        assert_eq!(wal.load(1u64, hash_01.clone()).unwrap(), txs_01);\n        assert_eq!(wal.load(3u64, hash_02.clone()).unwrap(), txs_02);\n\n        wal.remove(2u64).unwrap();\n        assert!(wal.load(1u64, hash_01).is_err());\n        assert!(wal.load(2u64, hash_02).is_err());\n\n        wal.remove(1u64).unwrap();\n        wal.remove(3u64).unwrap();\n    }\n\n    #[test]\n    fn test_consensus_wal() {\n        // write one, read one\n        let wal = ConsensusWal::new(FULL_CONSENSUS_PATH.to_string());\n        let info = get_random_bytes(1000);\n        wal.update_overlord_wal(Context::new(), info.clone()).unwrap();\n\n        let load = wal.load_overlord_wal(Context::new()).unwrap();\n        assert_eq!(load, info);\n\n        // write three, read latest\n        fs::remove_dir_all(PathBuf::from_str(FULL_CONSENSUS_PATH).unwrap()).unwrap();\n\n        let info = get_random_bytes(1000);\n        wal.update_overlord_wal(Context::new(), get_random_bytes(1000)).unwrap();\n        wal.update_overlord_wal(Context::new(), get_random_bytes(1000)).unwrap();\n        wal.update_overlord_wal(Context::new(), info.clone()).unwrap();\n\n        let load = wal.load_overlord_wal(Context::new()).unwrap();\n        assert_eq!(load, info);\n\n        // 
remove all, read nothing\n        fs::remove_dir_all(PathBuf::from_str(FULL_CONSENSUS_PATH).unwrap()).unwrap();\n\n        let load = wal.load_overlord_wal(Context::new());\n        assert!(load.is_err());\n\n        // write an old correct one and a new wrong one, read the old one\n\n        // old one\n        //fs::remove_dir_all(PathBuf::from_str(FULL_CONSENSUS_PATH).unwrap()).unwrap();\n\n        let info = get_random_bytes(1000);\n        wal.update_overlord_wal(Context::new(), info.clone()).unwrap();\n\n        // -> copy and modify to a new fake one\n\n        let mut files = fs::read_dir(FULL_CONSENSUS_PATH).unwrap();\n\n        let file = files.next().unwrap().unwrap();\n\n        let from = u128::from_str(file.file_name().to_str().unwrap()).unwrap();\n\n        let to = file.path().parent().unwrap().join((from + 1).to_string());\n\n        let mut new_file = fs::OpenOptions::new()\n            .read(true)\n            .write(true)\n            .create(true)\n            .open(to)\n            .unwrap();\n\n        new_file\n            .write_all(get_random_bytes(1000).as_ref())\n            .unwrap();\n\n        let load = wal.load_overlord_wal(Context::new()).unwrap();\n        assert_eq!(load, info);\n\n        fs::remove_dir_all(PathBuf::from_str(FULL_CONSENSUS_PATH).unwrap()).unwrap();\n    }\n\n    #[test]\n    fn test_wal_txs_codec() {\n        for _ in 0..10 {\n            let txs = FixedSignedTxs::new(mock_wal_txs(100));\n            assert_eq!(\n                FixedSignedTxs::decode_sync(txs.encode_sync().unwrap()).unwrap(),\n                txs\n            );\n        }\n    }\n\n    #[bench]\n    fn bench_txs_rlp_encode(b: &mut Bencher) {\n        let txs = mock_wal_txs(20000);\n\n        b.iter(move || {\n            let _ = rlp::encode_list(&txs);\n        });\n    }\n\n    #[bench]\n    fn bench_txs_prost_encode(b: &mut Bencher) {\n        let txs = FixedSignedTxs::new(mock_wal_txs(20000));\n\n        b.iter(move || {\n            let _ = txs.encode_sync();\n        });\n    
}\n\n    #[bench]\n    fn bench_save_wal_1000_txs(b: &mut Bencher) {\n        let wal = SignedTxsWAL::new(FULL_TXS_PATH.to_string());\n        let txs = mock_wal_txs(1000);\n        let txs_hash = Hash::digest(Bytes::from(rlp::encode_list(&txs)));\n\n        b.iter(move || {\n            wal.save(1u64, txs_hash.clone(), txs.clone()).unwrap();\n        })\n    }\n\n    #[bench]\n    fn bench_save_wal_2000_txs(b: &mut Bencher) {\n        let wal = SignedTxsWAL::new(FULL_TXS_PATH.to_string());\n        let txs = mock_wal_txs(2000);\n        let txs_hash = Hash::digest(Bytes::from(rlp::encode_list(&txs)));\n\n        b.iter(move || {\n            wal.save(1u64, txs_hash.clone(), txs.clone()).unwrap();\n        })\n    }\n\n    #[bench]\n    fn bench_save_wal_4000_txs(b: &mut Bencher) {\n        let wal = SignedTxsWAL::new(FULL_TXS_PATH.to_string());\n        let txs = mock_wal_txs(4000);\n        let txs_hash = Hash::digest(Bytes::from(rlp::encode_list(&txs)));\n\n        b.iter(move || {\n            wal.save(1u64, txs_hash.clone(), txs.clone()).unwrap();\n        })\n    }\n\n    #[bench]\n    fn bench_save_wal_8000_txs(b: &mut Bencher) {\n        let wal = SignedTxsWAL::new(FULL_TXS_PATH.to_string());\n        let txs = mock_wal_txs(8000);\n        let txs_hash = Hash::digest(Bytes::from(rlp::encode_list(&txs)));\n\n        b.iter(move || {\n            wal.save(1u64, txs_hash.clone(), txs.clone()).unwrap();\n        })\n    }\n\n    #[bench]\n    fn bench_save_wal_16000_txs(b: &mut Bencher) {\n        let wal = SignedTxsWAL::new(FULL_TXS_PATH.to_string());\n        let txs = mock_wal_txs(16000);\n        let txs_hash = Hash::digest(Bytes::from(rlp::encode_list(&txs)));\n\n        b.iter(move || {\n            wal.save(1u64, txs_hash.clone(), txs.clone()).unwrap();\n        })\n    }\n}\n"
  },
  {
    "path": "core/consensus/src/wal_proto.rs",
    "content": "use std::convert::TryFrom;\n\nuse prost::Message;\n\nuse protocol::codec::{transaction, ProtocolCodecSync};\nuse protocol::types::SignedTransaction;\nuse protocol::{Bytes, ProtocolError, ProtocolResult};\n\nuse crate::{fixed_types, ConsensusError, ConsensusType};\n\n#[derive(Clone, Message)]\npub struct FixedSignedTxs {\n    #[prost(message, repeated, tag = \"1\")]\n    pub inner: Vec<transaction::SignedTransaction>,\n}\n\nimpl From<fixed_types::FixedSignedTxs> for FixedSignedTxs {\n    fn from(txs: fixed_types::FixedSignedTxs) -> FixedSignedTxs {\n        let inner = txs\n            .inner\n            .into_iter()\n            .map(transaction::SignedTransaction::from)\n            .collect::<Vec<_>>();\n        FixedSignedTxs { inner }\n    }\n}\n\nimpl TryFrom<FixedSignedTxs> for fixed_types::FixedSignedTxs {\n    type Error = ProtocolError;\n\n    fn try_from(txs: FixedSignedTxs) -> Result<fixed_types::FixedSignedTxs, Self::Error> {\n        let mut inner = Vec::new();\n        for tx in txs.inner.into_iter() {\n            let tmp = SignedTransaction::try_from(tx)?;\n            inner.push(tmp);\n        }\n\n        Ok(fixed_types::FixedSignedTxs { inner })\n    }\n}\n\nimpl ProtocolCodecSync for fixed_types::FixedSignedTxs {\n    fn encode_sync(&self) -> ProtocolResult<Bytes> {\n        let ser_type = FixedSignedTxs::from(self.clone());\n        let mut buf = Vec::with_capacity(ser_type.encoded_len());\n\n        ser_type\n            .encode(&mut buf)\n            .map_err(|_| ConsensusError::EncodeErr(ConsensusType::WALSignedTxs))?;\n        Ok(Bytes::from(buf))\n    }\n\n    fn decode_sync(data: Bytes) -> ProtocolResult<Self> {\n        let ser_type = FixedSignedTxs::decode(data)\n            .map_err(|_| ConsensusError::DecodeErr(ConsensusType::WALSignedTxs))?;\n\n        fixed_types::FixedSignedTxs::try_from(ser_type)\n    }\n}\n"
  },
  {
    "path": "core/mempool/Cargo.toml",
    "content": "[package]\nname = \"core-mempool\"\nversion = \"0.2.1\"\nauthors = [\"Muta Dev <muta@nervos.org>\"]\nedition = \"2018\"\nrepository = \"https://github.com/nervosnetwork/muta\"\n\n# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html\n\n[dependencies]\nprotocol = { path = \"../../protocol\", package = \"muta-protocol\" }\ncommon-apm = { path = \"../../common/apm\" }\ncommon-crypto = { path = \"../../common/crypto\" }\ncore-network = { path = \"../network\" }\n\n\nfutures = { version = \"0.3\", features = [ \"async-await\" ] }\ncrossbeam-queue = \"0.2\"\nderive_more = \"0.99\"\nasync-trait = \"0.1\"\nnum-traits = \"0.2\"\nbytes = \"0.5\"\nrand = \"0.7\"\nhex = \"0.4\"\nserde_derive = \"1.0\"\nserde_json = \"1.0\"\nserde = \"1.0\"\nfutures-timer = \"3.0\"\nlog = \"0.4\"\ntokio = { version = \"0.2\", features = [\"macros\", \"rt-core\", \"sync\", \"blocking\"]}\nmuta-apm = \"0.1.0-alpha.7\"\ncita_trie = \"2.0\"\n\n[dev-dependencies]\nchashmap = \"2.2\"\nparking_lot = \"0.11\"\n"
  },
  {
    "path": "core/mempool/src/adapter/message.rs",
    "content": "use std::sync::Arc;\nuse std::time::Instant;\n\nuse async_trait::async_trait;\nuse futures::future::{try_join_all, TryFutureExt};\nuse protocol::{\n    traits::{Context, MemPool, MessageHandler, Priority, Rpc, TrustFeedback},\n    types::{Hash, SignedTransaction},\n};\nuse serde_derive::{Deserialize, Serialize};\n\nuse crate::context::TxContext;\n\npub const END_GOSSIP_NEW_TXS: &str = \"/gossip/mempool/new_txs\";\npub const RPC_PULL_TXS: &str = \"/rpc_call/mempool/pull_txs\";\npub const RPC_RESP_PULL_TXS: &str = \"/rpc_resp/mempool/pull_txs\";\npub const RPC_RESP_PULL_TXS_SYNC: &str = \"/rpc_resp/mempool/pull_txs_sync\";\n\n#[derive(Clone, Debug, Serialize, Deserialize)]\npub struct MsgNewTxs {\n    #[serde(with = \"core_network::serde_multi\")]\n    pub batch_stxs: Vec<SignedTransaction>,\n}\n\npub struct NewTxsHandler<M> {\n    mem_pool: Arc<M>,\n}\n\nimpl<M> NewTxsHandler<M>\nwhere\n    M: MemPool,\n{\n    pub fn new(mem_pool: Arc<M>) -> Self {\n        NewTxsHandler { mem_pool }\n    }\n}\n\n#[async_trait]\nimpl<M> MessageHandler for NewTxsHandler<M>\nwhere\n    M: MemPool + 'static,\n{\n    type Message = MsgNewTxs;\n\n    async fn process(&self, ctx: Context, msg: Self::Message) -> TrustFeedback {\n        let ctx = ctx.mark_network_origin_new_txs();\n\n        let insert_stx = |stx| -> _ {\n            let mem_pool = Arc::clone(&self.mem_pool);\n            let ctx = ctx.clone();\n\n            tokio::spawn(async move {\n                let inst = Instant::now();\n                common_apm::metrics::mempool::MEMPOOL_COUNTER_STATIC\n                    .insert_tx_from_p2p\n                    .inc();\n                if mem_pool.insert(ctx, stx).await.is_err() {\n                    common_apm::metrics::mempool::MEMPOOL_RESULT_COUNTER_STATIC\n                        .insert_tx_from_p2p\n                        .failure\n                        .inc();\n                }\n                
common_apm::metrics::mempool::MEMPOOL_RESULT_COUNTER_STATIC\n                    .insert_tx_from_p2p\n                    .success\n                    .inc();\n                common_apm::metrics::mempool::MEMPOOL_TIME_STATIC\n                    .insert_tx_from_p2p\n                    .observe(common_apm::metrics::duration_to_sec(inst.elapsed()));\n            })\n        };\n\n        // Concurrently insert them\n        if try_join_all(\n            msg.batch_stxs\n                .into_iter()\n                .map(insert_stx)\n                .collect::<Vec<_>>(),\n        )\n        .await\n        .map(|_| ())\n        .is_err()\n        {\n            log::error!(\"[core_mempool] mempool batch insert error\");\n        }\n\n        TrustFeedback::Neutral\n    }\n}\n\n#[derive(Clone, Debug, Serialize, Deserialize)]\npub struct MsgPullTxs {\n    pub height: Option<u64>,\n    #[serde(with = \"core_network::serde_multi\")]\n    pub hashes: Vec<Hash>,\n}\n\n#[derive(Debug, Serialize, Deserialize)]\npub struct MsgPushTxs {\n    #[serde(with = \"core_network::serde_multi\")]\n    pub sig_txs: Vec<SignedTransaction>,\n}\n\npub struct PullTxsHandler<N, M> {\n    network:  Arc<N>,\n    mem_pool: Arc<M>,\n}\n\nimpl<N, M> PullTxsHandler<N, M>\nwhere\n    N: Rpc + 'static,\n    M: MemPool + 'static,\n{\n    pub fn new(network: Arc<N>, mem_pool: Arc<M>) -> Self {\n        PullTxsHandler { network, mem_pool }\n    }\n}\n\n#[async_trait]\nimpl<N, M> MessageHandler for PullTxsHandler<N, M>\nwhere\n    N: Rpc + 'static,\n    M: MemPool + 'static,\n{\n    type Message = MsgPullTxs;\n\n    async fn process(&self, ctx: Context, msg: Self::Message) -> TrustFeedback {\n        let push_txs = async move {\n            let ret = self\n                .mem_pool\n                .get_full_txs(ctx.clone(), msg.height, &msg.hashes)\n                .await\n                .map(|sig_txs| MsgPushTxs { sig_txs });\n\n            self.network\n                .response::<MsgPushTxs>(ctx, 
RPC_RESP_PULL_TXS, ret, Priority::High)\n                .await\n        };\n\n        push_txs\n            .unwrap_or_else(move |err| log::warn!(\"[core_mempool] push txs {}\", err))\n            .await;\n\n        TrustFeedback::Neutral\n    }\n}\n"
  },
  {
    "path": "core/mempool/src/adapter/mod.rs",
    "content": "use super::TxContext;\n\npub mod message;\n\nuse std::{\n    error::Error,\n    marker::PhantomData,\n    sync::atomic::{AtomicU64, Ordering},\n    sync::Arc,\n    time::Duration,\n};\n\nuse async_trait::async_trait;\nuse derive_more::Display;\nuse futures::{\n    channel::mpsc::{\n        channel, unbounded, Receiver, Sender, TrySendError, UnboundedReceiver, UnboundedSender,\n    },\n    lock::Mutex,\n    select,\n    stream::StreamExt,\n};\nuse futures_timer::Delay;\nuse log::{debug, error};\n\nuse common_crypto::Crypto;\nuse protocol::{\n    fixed_codec::FixedCodec,\n    traits::{\n        Context, ExecutorFactory, ExecutorParams, Gossip, MemPoolAdapter, PeerTrust, Priority, Rpc,\n        ServiceMapping, ServiceResponse, Storage, TrustFeedback,\n    },\n    types::{Address, Hash, SignedTransaction, TransactionRequest},\n    ProtocolError, ProtocolErrorKind, ProtocolResult,\n};\n\nuse crate::adapter::message::{\n    MsgNewTxs, MsgPullTxs, MsgPushTxs, END_GOSSIP_NEW_TXS, RPC_PULL_TXS,\n};\nuse crate::MemPoolError;\n\npub const DEFAULT_BROADCAST_TXS_SIZE: usize = 200;\npub const DEFAULT_BROADCAST_TXS_INTERVAL: u64 = 200; // milliseconds\n\nstruct IntervalTxsBroadcaster;\n\nimpl IntervalTxsBroadcaster {\n    pub async fn broadcast<G>(\n        stx_rx: UnboundedReceiver<SignedTransaction>,\n        interval_reached: Receiver<()>,\n        tx_size: usize,\n        gossip: G,\n        err_tx: UnboundedSender<ProtocolError>,\n    ) where\n        G: Gossip + Clone + Unpin + 'static,\n    {\n        let mut stx_rx = stx_rx.fuse();\n        let mut interval_rx = interval_reached.fuse();\n\n        let mut txs_cache = Vec::with_capacity(tx_size);\n\n        loop {\n            select! 
{\n                opt_stx = stx_rx.next() => {\n                    if let Some(stx) = opt_stx {\n                        txs_cache.push(stx);\n\n                        if txs_cache.len() == tx_size {\n                            Self::do_broadcast(&mut txs_cache, &gossip, err_tx.clone()).await\n                        }\n                    } else {\n                        debug!(\"mempool: default mempool adapter dropped\")\n                    }\n                },\n                signal = interval_rx.next() => {\n                    if signal.is_some() {\n                        Self::do_broadcast(&mut txs_cache, &gossip, err_tx.clone()).await\n                    }\n                },\n                complete => break,\n            };\n        }\n    }\n\n    pub async fn timer(mut signal_tx: Sender<()>, interval: u64) {\n        let interval = Duration::from_millis(interval);\n\n        loop {\n            Delay::new(interval).await;\n\n            if let Err(err) = signal_tx.try_send(()) {\n                // This means previous interval signal hasn't processed\n                // yet, simply drop this one.\n                if err.is_full() {\n                    debug!(\"mempool: interval signal channel full\");\n                }\n\n                if err.is_disconnected() {\n                    error!(\"mempool: interval broadcaster dropped\");\n                }\n            }\n        }\n    }\n\n    async fn do_broadcast<G>(\n        txs_cache: &mut Vec<SignedTransaction>,\n        gossip: &G,\n        err_tx: UnboundedSender<ProtocolError>,\n    ) where\n        G: Gossip + Unpin,\n    {\n        if txs_cache.is_empty() {\n            return;\n        }\n\n        let batch_stxs = txs_cache.drain(..).collect::<Vec<_>>();\n        let gossip_msg = MsgNewTxs { batch_stxs };\n\n        let ctx = Context::new();\n        let end = END_GOSSIP_NEW_TXS;\n\n        let report_if_err = move |ret: ProtocolResult<()>| {\n            if let Err(err) = ret 
{\n                if err_tx.unbounded_send(err).is_err() {\n                    error!(\"mempool: default mempool adapter dropped\");\n                }\n            }\n        };\n\n        report_if_err(\n            gossip\n                .broadcast(ctx, end, gossip_msg, Priority::Normal)\n                .await,\n        )\n    }\n}\n\npub struct DefaultMemPoolAdapter<EF, C, N, S, DB, Mapping> {\n    network:         N,\n    storage:         Arc<S>,\n    trie_db:         Arc<DB>,\n    service_mapping: Arc<Mapping>,\n\n    timeout_gap:  AtomicU64,\n    cycles_limit: AtomicU64,\n    max_tx_size:  AtomicU64,\n\n    stx_tx: UnboundedSender<SignedTransaction>,\n    err_rx: Mutex<UnboundedReceiver<ProtocolError>>,\n\n    pin_c:  PhantomData<C>,\n    pin_ef: PhantomData<EF>,\n}\n\nimpl<EF, C, N, S, DB, Mapping> DefaultMemPoolAdapter<EF, C, N, S, DB, Mapping>\nwhere\n    EF: ExecutorFactory<DB, S, Mapping>,\n    C: Crypto,\n    N: Rpc + PeerTrust + Gossip + Clone + Unpin + 'static,\n    S: Storage,\n    DB: cita_trie::DB + 'static,\n    Mapping: ServiceMapping + 'static,\n{\n    pub fn new(\n        network: N,\n        storage: Arc<S>,\n        trie_db: Arc<DB>,\n        service_mapping: Arc<Mapping>,\n        broadcast_txs_size: usize,\n        broadcast_txs_interval: u64,\n    ) -> Self {\n        let (stx_tx, stx_rx) = unbounded();\n        let (err_tx, err_rx) = unbounded();\n        let (signal_tx, interval_reached) = channel(1);\n\n        tokio::spawn(IntervalTxsBroadcaster::timer(\n            signal_tx,\n            broadcast_txs_interval,\n        ));\n\n        tokio::spawn(IntervalTxsBroadcaster::broadcast(\n            stx_rx,\n            interval_reached,\n            broadcast_txs_size,\n            network.clone(),\n            err_tx,\n        ));\n\n        DefaultMemPoolAdapter {\n            network,\n            storage,\n            trie_db,\n            service_mapping,\n\n            timeout_gap: AtomicU64::new(0),\n            cycles_limit: 
AtomicU64::new(0),\n            max_tx_size: AtomicU64::new(0),\n\n            stx_tx,\n            err_rx: Mutex::new(err_rx),\n\n            pin_c: PhantomData,\n            pin_ef: PhantomData,\n        }\n    }\n}\n\n#[async_trait]\nimpl<EF, C, N, S, DB, Mapping> MemPoolAdapter for DefaultMemPoolAdapter<EF, C, N, S, DB, Mapping>\nwhere\n    EF: ExecutorFactory<DB, S, Mapping>,\n    C: Crypto + Send + Sync + 'static,\n    N: Rpc + PeerTrust + Gossip + Clone + Unpin + 'static,\n    S: Storage + 'static,\n    DB: cita_trie::DB + 'static,\n    Mapping: ServiceMapping + 'static,\n{\n    #[muta_apm::derive::tracing_span(\n        kind = \"mempool.adapter\",\n        logs = \"{'txs_len': 'tx_hashes.len()'}\"\n    )]\n    async fn pull_txs(\n        &self,\n        ctx: Context,\n        height: Option<u64>,\n        tx_hashes: Vec<Hash>,\n    ) -> ProtocolResult<Vec<SignedTransaction>> {\n        let pull_msg = MsgPullTxs {\n            height,\n            hashes: tx_hashes,\n        };\n\n        let resp_msg = self\n            .network\n            .call::<MsgPullTxs, MsgPushTxs>(ctx, RPC_PULL_TXS, pull_msg, Priority::High)\n            .await?;\n\n        Ok(resp_msg.sig_txs)\n    }\n\n    async fn broadcast_tx(&self, _ctx: Context, stx: SignedTransaction) -> ProtocolResult<()> {\n        self.stx_tx\n            .unbounded_send(stx)\n            .map_err(AdapterError::from)?;\n\n        if let Some(mut err_rx) = self.err_rx.try_lock() {\n            match err_rx.try_next() {\n                Ok(Some(err)) => return Err(err),\n                // Error means receiver channel is empty, is ok here\n                Ok(None) | Err(_) => return Ok(()),\n            }\n        }\n\n        Ok(())\n    }\n\n    async fn check_authorization(\n        &self,\n        ctx: Context,\n        tx: Box<SignedTransaction>,\n    ) -> ProtocolResult<()> {\n        let network = self.network.clone();\n        let ctx_clone = ctx.clone();\n        let header = 
self.storage.get_latest_block_header(ctx.clone()).await?;\n        let trie_db_clone = Arc::clone(&self.trie_db);\n        let storage_clone = Arc::clone(&self.storage);\n        let service_mapping_clone = Arc::clone(&self.service_mapping);\n        let tx_hash = tx.tx_hash.clone();\n\n        let blocking_res: ProtocolResult<ServiceResponse<String>> =\n            tokio::task::spawn_blocking(move || {\n                // Verify transaction hash\n                let fixed_bytes = tx.raw.encode_fixed()?;\n                let tx_hash = Hash::digest(fixed_bytes);\n\n                if tx_hash != tx.tx_hash {\n                    if ctx_clone.is_network_origin_txs() {\n                        network.report(\n                            ctx_clone,\n                            TrustFeedback::Worse(format!(\n                                \"Mempool wrong tx_hash of tx {:?}\",\n                                tx.tx_hash\n                            )),\n                        );\n                    }\n\n                    return Err(MemPoolError::CheckHash {\n                        expect: tx.tx_hash,\n                        actual: tx_hash,\n                    }\n                    .into());\n                }\n\n                // Verify transaction signatures\n                let caller = Address::from_hash(Hash::digest(protocol::address_hrp().as_str()))?;\n                let executor = EF::from_root(\n                    header.state_root.clone(),\n                    Arc::clone(&trie_db_clone),\n                    Arc::clone(&storage_clone),\n                    Arc::clone(&service_mapping_clone),\n                )?;\n                let params = ExecutorParams {\n                    state_root:   header.state_root,\n                    height:       header.height,\n                    timestamp:    header.timestamp,\n                    cycles_limit: 99999,\n                    proposer:     header.proposer,\n                };\n\n                let 
stx_ptr_json = format!(\"{{ \\\"ptr\\\": {} }}\", Box::into_raw(tx) as usize);\n                let check_resp = executor.read(&params, &caller, 1, &TransactionRequest {\n                    service_name: \"authorization\".to_string(),\n                    method:       \"check_authorization_by_ptr\".to_string(),\n                    payload:      stx_ptr_json,\n                })?;\n\n                Ok(check_resp)\n            })\n            .await\n            .map_err(|_| AdapterError::Internal)?;\n\n        let check_resp = blocking_res?;\n        if check_resp.is_error() {\n            if ctx.is_network_origin_txs() {\n                self.network.report(\n                    ctx,\n                    TrustFeedback::Worse(format!(\n                        \"Mempool check authorization failed tx hash {:?}\",\n                        tx_hash\n                    )),\n                )\n            }\n\n            return Err(MemPoolError::CheckAuthorization {\n                tx_hash,\n                err_info: check_resp.error_message,\n            }\n            .into());\n        }\n        Ok(())\n    }\n\n    async fn check_transaction(&self, ctx: Context, stx: &SignedTransaction) -> ProtocolResult<()> {\n        let fixed_bytes = stx.raw.encode_fixed()?;\n        let size = fixed_bytes.len() as u64;\n        let tx_hash = stx.tx_hash.clone();\n\n        // check tx size\n        let max_tx_size = self.max_tx_size.load(Ordering::SeqCst);\n        if size > max_tx_size {\n            if ctx.is_network_origin_txs() {\n                self.network.report(\n                    ctx.clone(),\n                    TrustFeedback::Bad(format!(\n                        \"Mempool exceed size limit of tx {:?}\",\n                        stx.tx_hash\n                    )),\n                );\n            }\n            return Err(MemPoolError::ExceedSizeLimit {\n                tx_hash,\n                max_tx_size,\n                size,\n            }\n            
.into());\n        }\n\n        // check cycle limit\n        let cycles_limit_config = self.cycles_limit.load(Ordering::SeqCst);\n        let cycles_limit_tx = stx.raw.cycles_limit;\n        if cycles_limit_tx > cycles_limit_config {\n            if ctx.is_network_origin_txs() {\n                self.network.report(\n                    ctx.clone(),\n                    TrustFeedback::Bad(format!(\n                        \"Mempool exceed cycle limit of tx {:?}\",\n                        stx.tx_hash\n                    )),\n                );\n            }\n            return Err(MemPoolError::ExceedCyclesLimit {\n                tx_hash,\n                cycles_limit_tx,\n                cycles_limit_config,\n            }\n            .into());\n        }\n\n        // Verify chain id\n        let latest_header = self.storage.get_latest_block_header(ctx.clone()).await?;\n        if latest_header.chain_id != stx.raw.chain_id {\n            if ctx.is_network_origin_txs() {\n                self.network.report(\n                    ctx.clone(),\n                    TrustFeedback::Worse(format!(\"Mempool wrong chain of tx {:?}\", stx.tx_hash)),\n                );\n            }\n            let wrong_chain_id = MemPoolError::WrongChain {\n                tx_hash: stx.tx_hash.clone(),\n            };\n\n            return Err(wrong_chain_id.into());\n        }\n\n        // Verify timeout\n        let latest_height = latest_header.height;\n        let timeout_gap = self.timeout_gap.load(Ordering::SeqCst);\n\n        if stx.raw.timeout > latest_height + timeout_gap {\n            let invalid_timeout = MemPoolError::InvalidTimeout {\n                tx_hash: stx.tx_hash.clone(),\n            };\n\n            return Err(invalid_timeout.into());\n        }\n\n        if stx.raw.timeout < latest_height {\n            let timeout = MemPoolError::Timeout {\n                tx_hash: stx.tx_hash.clone(),\n                timeout: stx.raw.timeout,\n            };\n\n      
      return Err(timeout.into());\n        }\n\n        Ok(())\n    }\n\n    async fn check_storage_exist(&self, ctx: Context, tx_hash: &Hash) -> ProtocolResult<()> {\n        match self.storage.get_transaction_by_hash(ctx, tx_hash).await {\n            Ok(Some(_)) => Err(MemPoolError::CommittedTx {\n                tx_hash: tx_hash.clone(),\n            }\n            .into()),\n            Ok(None) => Ok(()),\n            Err(err) => Err(err),\n        }\n    }\n\n    async fn get_latest_height(&self, ctx: Context) -> ProtocolResult<u64> {\n        let height = self.storage.get_latest_block_header(ctx).await?.height;\n        Ok(height)\n    }\n\n    async fn get_transactions_from_storage(\n        &self,\n        ctx: Context,\n        block_height: Option<u64>,\n        tx_hashes: &[Hash],\n    ) -> ProtocolResult<Vec<Option<SignedTransaction>>> {\n        if let Some(height) = block_height {\n            self.storage.get_transactions(ctx, height, tx_hashes).await\n        } else {\n            let futs = tx_hashes\n                .iter()\n                .map(|tx_hash| self.storage.get_transaction_by_hash(ctx.clone(), tx_hash))\n                .collect::<Vec<_>>();\n            futures::future::try_join_all(futs).await\n        }\n    }\n\n    fn report_good(&self, ctx: Context) {\n        if ctx.is_network_origin_txs() {\n            self.network.report(ctx, TrustFeedback::Good);\n        }\n    }\n\n    fn set_args(&self, timeout_gap: u64, cycles_limit: u64, max_tx_size: u64) {\n        self.timeout_gap.store(timeout_gap, Ordering::Relaxed);\n        self.cycles_limit.store(cycles_limit, Ordering::Relaxed);\n        self.max_tx_size.store(max_tx_size, Ordering::Relaxed);\n    }\n}\n\n#[derive(Debug, Display)]\npub enum AdapterError {\n    #[display(fmt = \"adapter: interval broadcaster drop\")]\n    IntervalBroadcasterDrop,\n\n    #[display(fmt = \"adapter: internal error\")]\n    Internal,\n}\n\nimpl Error for AdapterError {}\n\nimpl<T> 
From<TrySendError<T>> for AdapterError {\n    fn from(_error: TrySendError<T>) -> AdapterError {\n        AdapterError::IntervalBroadcasterDrop\n    }\n}\n\nimpl From<AdapterError> for ProtocolError {\n    fn from(error: AdapterError) -> ProtocolError {\n        ProtocolError::new(ProtocolErrorKind::Mempool, Box::new(error))\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::IntervalTxsBroadcaster;\n\n    use crate::{adapter::message::MsgNewTxs, tests::default_mock_txs};\n\n    use protocol::{\n        traits::{Context, Gossip, MessageCodec, Priority},\n        Bytes, ProtocolResult,\n    };\n\n    use async_trait::async_trait;\n    use futures::{\n        channel::mpsc::{channel, unbounded, UnboundedSender},\n        stream::StreamExt,\n    };\n    use parking_lot::Mutex;\n\n    use std::{\n        ops::Sub,\n        sync::Arc,\n        time::{Duration, Instant},\n    };\n\n    #[derive(Clone)]\n    struct MockGossip {\n        msgs:      Arc<Mutex<Vec<Bytes>>>,\n        signal_tx: UnboundedSender<()>,\n    }\n\n    impl MockGossip {\n        pub fn new(signal_tx: UnboundedSender<()>) -> Self {\n            MockGossip {\n                msgs: Default::default(),\n                signal_tx,\n            }\n        }\n    }\n\n    #[async_trait]\n    impl Gossip for MockGossip {\n        async fn broadcast<M>(\n            &self,\n            _: Context,\n            _: &str,\n            mut msg: M,\n            _: Priority,\n        ) -> ProtocolResult<()>\n        where\n            M: MessageCodec,\n        {\n            let bytes = msg.encode().expect(\"encode message fail\");\n            self.msgs.lock().push(bytes);\n\n            self.signal_tx\n                .unbounded_send(())\n                .expect(\"send broadcast signal fail\");\n\n            Ok(())\n        }\n\n        async fn multicast<'a, M, P>(\n            &self,\n            _: Context,\n            _: &str,\n            _: P,\n            _: M,\n            _: Priority,\n        ) -> 
ProtocolResult<()>\n        where\n            M: MessageCodec,\n            P: AsRef<[Bytes]> + Send + 'a,\n        {\n            unreachable!()\n        }\n    }\n\n    macro_rules! pop_msg {\n        ($msgs:expr) => {{\n            let msg = $msgs.pop().expect(\"should have one message\");\n            MsgNewTxs::decode(msg).expect(\"decode MsgNewTxs fail\")\n        }};\n    }\n\n    #[tokio::test]\n    async fn test_interval_timer() {\n        let (tx, mut rx) = channel(1);\n        let interval = Duration::from_millis(200);\n        let now = Instant::now();\n\n        tokio::spawn(IntervalTxsBroadcaster::timer(tx, 200));\n        rx.next().await.expect(\"await interval signal fail\");\n\n        assert!(now.elapsed().sub(interval).as_millis() < 100u128);\n    }\n\n    #[tokio::test]\n    async fn test_interval_broadcast_reach_cache_size() {\n        let (stx_tx, stx_rx) = unbounded();\n        let (err_tx, _err_rx) = unbounded();\n        let (_signal_tx, interval_reached) = channel(1);\n        let tx_size = 10;\n        let (broadcast_signal_tx, mut broadcast_signal_rx) = unbounded();\n        let gossip = MockGossip::new(broadcast_signal_tx);\n\n        tokio::spawn(IntervalTxsBroadcaster::broadcast(\n            stx_rx,\n            interval_reached,\n            tx_size,\n            gossip.clone(),\n            err_tx,\n        ));\n\n        for stx in default_mock_txs(11).into_iter() {\n            stx_tx.unbounded_send(stx).expect(\"send stx fail\");\n        }\n\n        broadcast_signal_rx.next().await;\n        let mut msgs = gossip.msgs.lock().drain(..).collect::<Vec<_>>();\n        assert_eq!(msgs.len(), 1, \"should only have one message\");\n\n        let msg = pop_msg!(msgs);\n        assert_eq!(msg.batch_stxs.len(), 10, \"should only have 10 stx\");\n    }\n\n    #[tokio::test]\n    async fn test_interval_broadcast_reach_interval() {\n        let (stx_tx, stx_rx) = unbounded();\n        let (err_tx, _err_rx) = unbounded();\n        let 
(signal_tx, interval_reached) = channel(1);\n        let tx_size = 10;\n        let (broadcast_signal_tx, mut broadcast_signal_rx) = unbounded();\n        let gossip = MockGossip::new(broadcast_signal_tx);\n\n        tokio::spawn(IntervalTxsBroadcaster::timer(signal_tx, 200));\n        tokio::spawn(IntervalTxsBroadcaster::broadcast(\n            stx_rx,\n            interval_reached,\n            tx_size,\n            gossip.clone(),\n            err_tx,\n        ));\n\n        for stx in default_mock_txs(9).into_iter() {\n            stx_tx.unbounded_send(stx).expect(\"send stx fail\");\n        }\n\n        broadcast_signal_rx.next().await;\n        let mut msgs = gossip.msgs.lock().drain(..).collect::<Vec<_>>();\n        assert_eq!(msgs.len(), 1, \"should only have one message\");\n\n        let msg = pop_msg!(msgs);\n        assert_eq!(msg.batch_stxs.len(), 9, \"should only have 9 stx\");\n    }\n\n    #[tokio::test]\n    async fn test_interval_broadcast() {\n        let (stx_tx, stx_rx) = unbounded();\n        let (err_tx, _err_rx) = unbounded();\n        let (signal_tx, interval_reached) = channel(1);\n        let tx_size = 10;\n        let (broadcast_signal_tx, mut broadcast_signal_rx) = unbounded();\n        let gossip = MockGossip::new(broadcast_signal_tx);\n\n        tokio::spawn(IntervalTxsBroadcaster::timer(signal_tx, 200));\n        tokio::spawn(IntervalTxsBroadcaster::broadcast(\n            stx_rx,\n            interval_reached,\n            tx_size,\n            gossip.clone(),\n            err_tx,\n        ));\n\n        for stx in default_mock_txs(19).into_iter() {\n            stx_tx.unbounded_send(stx).expect(\"send stx fail\");\n        }\n\n        // Should get two broadcasts\n        broadcast_signal_rx.next().await;\n        broadcast_signal_rx.next().await;\n\n        let mut msgs = gossip.msgs.lock().drain(..).collect::<Vec<_>>();\n        assert_eq!(msgs.len(), 2, \"should only have two messages\");\n\n        let msg = pop_msg!(msgs);\n  
      assert_eq!(\n            msg.batch_stxs.len(),\n            9,\n            \"last message should only have 9 stx\"\n        );\n\n        let msg = pop_msg!(msgs);\n        assert_eq!(\n            msg.batch_stxs.len(),\n            10,\n            \"first message should only have 10 stx\"\n        );\n    }\n}\n"
  },
  {
    "path": "core/mempool/src/context.rs",
    "content": "use protocol::traits::Context;\n\nconst TXS_ORIGINAL_KEY: &str = \"txs_original\";\nconst NETWORK_TXS: usize = 1;\n\npub(crate) trait TxContext {\n    fn mark_network_origin_new_txs(&self) -> Self;\n\n    fn is_network_origin_txs(&self) -> bool;\n}\n\nimpl TxContext for Context {\n    fn mark_network_origin_new_txs(&self) -> Self {\n        self.with_value::<usize>(TXS_ORIGINAL_KEY, NETWORK_TXS)\n    }\n\n    fn is_network_origin_txs(&self) -> bool {\n        self.get::<usize>(TXS_ORIGINAL_KEY) == Some(&NETWORK_TXS)\n    }\n}\n"
  },
  {
    "path": "core/mempool/src/lib.rs",
    "content": "#![feature(async_closure, test)]\n#![allow(clippy::suspicious_else_formatting, clippy::mutable_key_type)]\n\nmod adapter;\nmod context;\nmod map;\n#[cfg(test)]\nmod tests;\nmod tx_cache;\n\npub use adapter::message::{\n    MsgNewTxs, MsgPullTxs, MsgPushTxs, NewTxsHandler, PullTxsHandler, END_GOSSIP_NEW_TXS,\n    RPC_PULL_TXS, RPC_RESP_PULL_TXS, RPC_RESP_PULL_TXS_SYNC,\n};\npub use adapter::DefaultMemPoolAdapter;\npub use adapter::{DEFAULT_BROADCAST_TXS_INTERVAL, DEFAULT_BROADCAST_TXS_SIZE};\n\nuse std::collections::HashSet;\nuse std::error::Error;\nuse std::sync::atomic::{AtomicU64, Ordering};\nuse std::sync::Arc;\nuse std::time::Instant;\n\nuse async_trait::async_trait;\nuse derive_more::Display;\nuse futures::future::try_join_all;\nuse tokio::sync::RwLock;\n\nuse protocol::traits::{Context, MemPool, MemPoolAdapter, MixedTxHashes};\nuse protocol::types::{Hash, SignedTransaction};\nuse protocol::{ProtocolError, ProtocolErrorKind, ProtocolResult};\n\nuse crate::context::TxContext;\nuse crate::map::Map;\nuse crate::tx_cache::TxCache;\n\n/// Memory pool for caching transactions.\npub struct HashMemPool<Adapter: MemPoolAdapter> {\n    /// Pool size limit.\n    pool_size:      usize,\n    /// A system param limits the life time of an off-chain transaction.\n    timeout_gap:    AtomicU64,\n    /// A structure for caching new transactions and responsible transactions of\n    /// propose-sync.\n    tx_cache:       TxCache,\n    /// A structure for caching fresh transactions in order transaction hashes.\n    callback_cache: Arc<Map<SignedTransaction>>,\n    /// Supply necessary functions from outer modules.\n    adapter:        Arc<Adapter>,\n    /// exclusive flush_memory and insert_tx to avoid repeat txs insertion.\n    flush_lock:     RwLock<()>,\n}\n\nimpl<Adapter: 'static> HashMemPool<Adapter>\nwhere\n    Adapter: MemPoolAdapter,\n{\n    pub async fn new(\n        pool_size: usize,\n        adapter: Adapter,\n        initial_txs: 
Vec<SignedTransaction>,\n    ) -> Self {\n        let mempool = HashMemPool {\n            pool_size,\n            timeout_gap: AtomicU64::new(0),\n            tx_cache: TxCache::new(pool_size * 2),\n            callback_cache: Arc::new(Map::new(pool_size)),\n            adapter: Arc::new(adapter),\n            flush_lock: RwLock::new(()),\n        };\n\n        for tx in initial_txs.into_iter() {\n            if let Err(e) = mempool.initial_insert(Context::new(), tx).await {\n                log::warn!(\"[mempool]: initial insert tx failed {:?}\", e);\n            }\n        }\n\n        mempool\n    }\n\n    pub fn get_tx_cache(&self) -> &TxCache {\n        &self.tx_cache\n    }\n\n    pub fn get_callback_cache(&self) -> &Map<SignedTransaction> {\n        &self.callback_cache\n    }\n\n    pub fn get_adapter(&self) -> &Adapter {\n        &self.adapter\n    }\n\n    async fn show_unknown_txs(&self, tx_hashes: &[Hash]) -> Vec<Hash> {\n        let tx_hashes = self.tx_cache.show_unknown(tx_hashes).await;\n        let mut unknown_hashes = vec![];\n\n        for tx_hash in tx_hashes.into_iter() {\n            if !self.callback_cache.contains_key(&tx_hash).await {\n                unknown_hashes.push(tx_hash)\n            }\n        }\n\n        unknown_hashes\n    }\n\n    async fn initial_insert(&self, ctx: Context, tx: SignedTransaction) -> ProtocolResult<()> {\n        let _lock = self.flush_lock.read().await;\n\n        self.tx_cache.check_exist(&tx.tx_hash).await?;\n        self.adapter\n            .check_storage_exist(ctx.clone(), &tx.tx_hash)\n            .await?;\n        self.tx_cache.insert_propose_tx(tx).await\n    }\n\n    async fn insert_tx(\n        &self,\n        ctx: Context,\n        tx: SignedTransaction,\n        tx_type: TxType,\n    ) -> ProtocolResult<()> {\n        let _lock = self.flush_lock.read().await;\n\n        let tx = Box::new(tx);\n        let tx_hash = &tx.tx_hash;\n        self.tx_cache.check_reach_limit(self.pool_size).await?;\n     
   self.tx_cache.check_exist(tx_hash).await?;\n        self.adapter\n            .check_authorization(ctx.clone(), tx.clone())\n            .await?;\n        self.adapter.check_transaction(ctx.clone(), &tx).await?;\n        self.adapter\n            .check_storage_exist(ctx.clone(), tx_hash)\n            .await?;\n\n        match tx_type {\n            TxType::NewTx => self.tx_cache.insert_new_tx(*tx.clone()).await?,\n            TxType::ProposeTx => self.tx_cache.insert_propose_tx(*tx.clone()).await?,\n        }\n\n        if !ctx.is_network_origin_txs() {\n            self.adapter.broadcast_tx(ctx, *tx).await?;\n        } else {\n            self.adapter.report_good(ctx);\n        }\n\n        Ok(())\n    }\n\n    async fn verify_tx_in_parallel(&self, ctx: Context, tx_ptrs: Vec<usize>) -> ProtocolResult<()> {\n        let now = Instant::now();\n        let len = tx_ptrs.len();\n\n        let futs = tx_ptrs\n            .into_iter()\n            .map(|ptr| {\n                let adapter = Arc::clone(&self.adapter);\n                let ctx = ctx.clone();\n\n                tokio::spawn(async move {\n                    let boxed_stx = unsafe { Box::from_raw(ptr as *mut SignedTransaction) };\n                    let signed_tx = *(boxed_stx.clone());\n\n                    adapter.check_authorization(ctx.clone(), boxed_stx).await?;\n                    adapter.check_transaction(ctx.clone(), &signed_tx).await?;\n                    adapter\n                        .check_storage_exist(ctx.clone(), &signed_tx.tx_hash)\n                        .await\n                })\n            })\n            .collect::<Vec<_>>();\n\n        try_join_all(futs).await.map_err(|e| {\n            log::error!(\"[mempool] verify batch txs error {:?}\", e);\n            MemPoolError::VerifyBatchTransactions\n        })?;\n\n        log::info!(\n            \"[mempool] verify txs done, size {:?} cost {:?}\",\n            len,\n            now.elapsed()\n        );\n        Ok(())\n    
}\n}\n\n#[async_trait]\nimpl<Adapter: 'static> MemPool for HashMemPool<Adapter>\nwhere\n    Adapter: MemPoolAdapter,\n{\n    async fn insert(&self, ctx: Context, tx: SignedTransaction) -> ProtocolResult<()> {\n        self.insert_tx(ctx, tx, TxType::NewTx).await\n    }\n\n    async fn package(\n        &self,\n        ctx: Context,\n        cycles_limit: u64,\n        tx_num_limit: u64,\n    ) -> ProtocolResult<MixedTxHashes> {\n        let current_height = self.adapter.get_latest_height(ctx.clone()).await?;\n        log::info!(\n            \"[core_mempool]: {:?} txs in map and {:?} txs in queue while package\",\n            self.tx_cache.len().await,\n            self.tx_cache.queue_len(),\n        );\n        let inst = Instant::now();\n        let result = self\n            .tx_cache\n            .package(\n                cycles_limit,\n                tx_num_limit,\n                current_height,\n                current_height + self.timeout_gap.load(Ordering::Relaxed),\n            )\n            .await;\n        match result {\n            Ok(txs) => {\n                common_apm::metrics::mempool::MEMPOOL_PACKAGE_SIZE_VEC_STATIC\n                    .package\n                    .observe((txs.order_tx_hashes.len()) as f64);\n                common_apm::metrics::mempool::MEMPOOL_TIME_STATIC\n                    .package\n                    .observe(common_apm::metrics::duration_to_sec(inst.elapsed()));\n                Ok(txs)\n            }\n            Err(e) => {\n                common_apm::metrics::mempool::MEMPOOL_RESULT_COUNTER_STATIC\n                    .package\n                    .failure\n                    .inc();\n                Err(e)\n            }\n        }\n    }\n\n    async fn flush(&self, ctx: Context, tx_hashes: &[Hash]) -> ProtocolResult<()> {\n        let _lock = self.flush_lock.write().await;\n\n        let current_height = self.adapter.get_latest_height(ctx.clone()).await?;\n        log::info!(\n            \"[core_mempool]: 
flush mempool with {:?} tx_hashes\",\n            tx_hashes.len(),\n        );\n        self.tx_cache\n            .flush(\n                &tx_hashes,\n                current_height,\n                current_height + self.timeout_gap.load(Ordering::Relaxed),\n            )\n            .await;\n        self.callback_cache.clear().await;\n\n        Ok(())\n    }\n\n    async fn get_full_txs(\n        &self,\n        ctx: Context,\n        height: Option<u64>,\n        tx_hashes: &[Hash],\n    ) -> ProtocolResult<Vec<SignedTransaction>> {\n        let len = tx_hashes.len();\n        let mut missing_hashes = vec![];\n        let mut full_txs = Vec::with_capacity(len);\n\n        for tx_hash in tx_hashes.iter() {\n            if let Some(tx) = self.tx_cache.get(tx_hash).await {\n                full_txs.push(tx);\n            } else if let Some(tx) = self.callback_cache.get(tx_hash).await {\n                full_txs.push(tx);\n            } else {\n                missing_hashes.push(tx_hash.clone());\n            }\n        }\n\n        // for push txs when local mempool is flushed, but the remote node still fetch\n        // full block\n        if !missing_hashes.is_empty() {\n            let txs = self\n                .adapter\n                .get_transactions_from_storage(ctx, height, &missing_hashes)\n                .await?;\n            let txs = txs\n                .into_iter()\n                .filter_map(|opt_tx| opt_tx)\n                .collect::<Vec<_>>();\n\n            full_txs.extend(txs);\n        }\n\n        if full_txs.len() != len {\n            Err(MemPoolError::MisMatch {\n                require:  len,\n                response: full_txs.len(),\n            }\n            .into())\n        } else {\n            Ok(full_txs)\n        }\n    }\n\n    async fn ensure_order_txs(\n        &self,\n        ctx: Context,\n        height: Option<u64>,\n        order_tx_hashes: &[Hash],\n    ) -> ProtocolResult<()> {\n        
check_dup_order_hashes(order_tx_hashes)?;\n\n        let unknown_hashes = self.show_unknown_txs(order_tx_hashes).await;\n        if !unknown_hashes.is_empty() {\n            let unknown_len = unknown_hashes.len();\n            let txs = self\n                .adapter\n                .pull_txs(ctx.clone(), height, unknown_hashes)\n                .await?;\n\n            // Make sure the response signed_txs are the same size as the request hashes.\n            if txs.len() != unknown_len {\n                return Err(MemPoolError::EnsureBreak {\n                    require:  unknown_len,\n                    response: txs.len(),\n                }\n                .into());\n            }\n\n            let (tx_ptrs, txs): (Vec<_>, Vec<_>) = txs\n                .into_iter()\n                .map(|tx| {\n                    let boxed = Box::new(tx);\n                    (Box::into_raw(boxed.clone()) as usize, boxed)\n                })\n                .unzip();\n\n            self.verify_tx_in_parallel(ctx.clone(), tx_ptrs).await?;\n\n            for signed_tx in txs.into_iter() {\n                self.callback_cache\n                    .insert(signed_tx.tx_hash.clone(), *signed_tx)\n                    .await;\n            }\n\n            self.adapter.report_good(ctx);\n        }\n\n        Ok(())\n    }\n\n    async fn sync_propose_txs(\n        &self,\n        ctx: Context,\n        propose_tx_hashes: Vec<Hash>,\n    ) -> ProtocolResult<()> {\n        let unknown_hashes = self.show_unknown_txs(&propose_tx_hashes).await;\n        if !unknown_hashes.is_empty() {\n            let txs = self\n                .adapter\n                .pull_txs(ctx.clone(), None, unknown_hashes)\n                .await?;\n            // TODO: concurrently insert\n            for tx in txs.into_iter() {\n                // Do not handle errors here; it is normal for transactions\n                // returned here to already exist in the pool.\n                let _ = self.insert_tx(ctx.clone(), tx, 
TxType::ProposeTx).await;\n            }\n        }\n        Ok(())\n    }\n\n    fn set_args(&self, timeout_gap: u64, cycles_limit: u64, max_tx_size: u64) {\n        self.adapter\n            .set_args(timeout_gap, cycles_limit, max_tx_size);\n        self.timeout_gap.store(timeout_gap, Ordering::Relaxed);\n    }\n}\n\nfn check_dup_order_hashes(order_tx_hashes: &[Hash]) -> ProtocolResult<()> {\n    let mut dup_set = HashSet::with_capacity(order_tx_hashes.len());\n\n    for hash in order_tx_hashes.iter() {\n        if dup_set.contains(hash) {\n            return Err(MemPoolError::EnsureDup { hash: hash.clone() }.into());\n        }\n\n        dup_set.insert(hash.clone());\n    }\n\n    Ok(())\n}\n\npub enum TxType {\n    NewTx,\n    ProposeTx,\n}\n\n#[derive(Debug, Display)]\npub enum MemPoolError {\n    #[display(\n        fmt = \"Tx: {:?} exceeds size limit, now: {}, limit: {} Bytes\",\n        tx_hash,\n        size,\n        max_tx_size\n    )]\n    ExceedSizeLimit {\n        tx_hash:     Hash,\n        max_tx_size: u64,\n        size:        u64,\n    },\n\n    #[display(\n        fmt = \"Tx: {:?} exceeds cycle limit, tx: {}, config: {}\",\n        tx_hash,\n        cycles_limit_tx,\n        cycles_limit_config\n    )]\n    ExceedCyclesLimit {\n        tx_hash:             Hash,\n        cycles_limit_config: u64,\n        cycles_limit_tx:     u64,\n    },\n\n    #[display(fmt = \"Tx: {:?} inserts failed\", tx_hash)]\n    Insert { tx_hash: Hash },\n\n    #[display(fmt = \"Mempool reaches limit: {}\", pool_size)]\n    ReachLimit { pool_size: usize },\n\n    #[display(fmt = \"Tx: {:?} exists in pool\", tx_hash)]\n    Dup { tx_hash: Hash },\n\n    #[display(fmt = \"Pull txs, require: {}, response: {}\", require, response)]\n    EnsureBreak { require: usize, response: usize },\n\n    #[display(\n        fmt = \"There is duplication in order transactions. 
duplication tx_hash {:?}\",\n        hash\n    )]\n    EnsureDup { hash: Hash },\n\n    #[display(fmt = \"Fetch full txs, require: {}, response: {}\", require, response)]\n    MisMatch { require: usize, response: usize },\n\n    #[display(fmt = \"Tx inserts candidate_queue failed, len: {}\", len)]\n    InsertCandidate { len: usize },\n\n    #[display(fmt = \"Tx: {:?} check authorization error {:?}\", tx_hash, err_info)]\n    CheckAuthorization { tx_hash: Hash, err_info: String },\n\n    #[display(fmt = \"Check_hash failed, expect: {:?}, get: {:?}\", expect, actual)]\n    CheckHash { expect: Hash, actual: Hash },\n\n    #[display(fmt = \"Tx: {:?} already commit\", tx_hash)]\n    CommittedTx { tx_hash: Hash },\n\n    #[display(fmt = \"Tx: {:?} doesn't match our chain id\", tx_hash)]\n    WrongChain { tx_hash: Hash },\n\n    #[display(fmt = \"Tx: {:?} timeout {}\", tx_hash, timeout)]\n    Timeout { tx_hash: Hash, timeout: u64 },\n\n    #[display(fmt = \"Tx: {:?} invalid timeout\", tx_hash)]\n    InvalidTimeout { tx_hash: Hash },\n\n    #[display(fmt = \"Batch transaction validation failed\")]\n    VerifyBatchTransactions,\n\n    #[display(fmt = \"Encode transaction to JSON failed\")]\n    EncodeJson,\n}\n\nimpl Error for MemPoolError {}\n\nimpl From<MemPoolError> for ProtocolError {\n    fn from(error: MemPoolError) -> ProtocolError {\n        ProtocolError::new(ProtocolErrorKind::Mempool, Box::new(error))\n    }\n}\n"
  },
  {
    "path": "core/mempool/src/map.rs",
    "content": "use std::collections::HashMap;\nuse std::sync::Arc;\n\nuse futures::future::try_join_all;\nuse tokio::sync::RwLock;\n\nuse protocol::types::Hash;\n\n/// The \"Map\" is a concurrent HashMap that uses 16 buckets to\n/// spread stored transactions.\n/// Why use 16 buckets? We take byte 0 of each \"tx_hash\" and shift it 4 bits to\n/// the right to get a number in the range 0~15, which selects one of the 16 buckets.\npub struct Map<V> {\n    buckets: Vec<Arc<Bucket<V>>>,\n}\n\nimpl<V> Map<V>\nwhere\n    V: Send + Sync + Clone + 'static,\n{\n    pub fn new(cache_size: usize) -> Self {\n        let mut buckets = Vec::with_capacity(16);\n        for _ in 0..16 {\n            buckets.push(Arc::new(Bucket {\n                // Allocate enough space to avoid triggering resize.\n                store: RwLock::new(HashMap::with_capacity(cache_size)),\n            }));\n        }\n        Self { buckets }\n    }\n\n    pub async fn insert(&self, hash: Hash, value: V) -> Option<V> {\n        let bucket = self.get_bucket(&hash);\n        bucket.insert(hash, value).await\n    }\n\n    pub async fn contains_key(&self, hash: &Hash) -> bool {\n        let bucket = self.get_bucket(hash);\n        bucket.contains_key(hash).await\n    }\n\n    pub async fn get(&self, hash: &Hash) -> Option<V> {\n        let bucket = self.get_bucket(hash);\n        bucket.get(hash).await\n    }\n\n    pub async fn remove(&self, hash: &Hash) {\n        let bucket = self.get_bucket(hash);\n        bucket.remove(hash).await\n    }\n\n    pub async fn remove_batch(&self, hashes: &[Hash]) {\n        let mut h: HashMap<usize, Vec<Hash>> = HashMap::new();\n\n        for hash in hashes.iter() {\n            let index = get_index(hash);\n            h.entry(index).or_insert_with(Vec::new).push(hash.clone());\n        }\n\n        let futs = h\n            .into_iter()\n            .map(|(index, hashes)| {\n                let bucket = Arc::clone(&self.buckets[index]);\n                
tokio::spawn(async move { bucket.remove_batch(hashes).await })\n            })\n            .collect::<Vec<_>>();\n        try_join_all(futs)\n            .await\n            .expect(\"[mempool]: the runtime panics.\");\n    }\n\n    pub async fn len(&self) -> usize {\n        let mut len = 0;\n        for bucket in self.buckets.iter() {\n            len += bucket.len().await;\n        }\n        len\n    }\n\n    pub async fn clear(&self) {\n        let futs = self\n            .buckets\n            .iter()\n            .map(|bucket| {\n                let bucket = Arc::clone(bucket);\n                tokio::spawn(async move { bucket.clear().await })\n            })\n            .collect::<Vec<_>>();\n\n        try_join_all(futs)\n            .await\n            .expect(\"[mempool]: the runtime panics.\");\n    }\n\n    fn get_bucket(&self, hash: &Hash) -> &Bucket<V> {\n        &self.buckets[get_index(hash)]\n    }\n}\n\nfn get_index(hash: &Hash) -> usize {\n    (hash.as_bytes()[0] >> 4) as usize\n}\n\nstruct Bucket<V> {\n    store: RwLock<HashMap<Hash, V>>,\n}\n\nimpl<V> Bucket<V>\nwhere\n    V: Send + Sync + Clone,\n{\n    /// Before inserting a transaction into the bucket, you must check whether\n    /// the transaction is in the bucket first. 
Never use the insert function to\n    /// check this.\n    async fn insert(&self, hash: Hash, value: V) -> Option<V> {\n        let mut lock_data = self.store.write().await;\n        if lock_data.contains_key(&hash) {\n            Some(value)\n        } else {\n            lock_data.insert(hash, value)\n        }\n    }\n\n    async fn contains_key(&self, hash: &Hash) -> bool {\n        self.store.read().await.contains_key(hash)\n    }\n\n    async fn get(&self, hash: &Hash) -> Option<V> {\n        self.store.read().await.get(hash).map(Clone::clone)\n    }\n\n    async fn remove(&self, hash: &Hash) {\n        let mut store = self.store.write().await;\n        store.remove(hash);\n    }\n\n    async fn remove_batch(&self, hashes: Vec<Hash>) {\n        let mut store = self.store.write().await;\n        for hash in hashes {\n            store.remove(&hash);\n        }\n    }\n\n    async fn len(&self) -> usize {\n        self.store.read().await.len()\n    }\n\n    async fn clear(&self) {\n        self.store.write().await.clear();\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    extern crate test;\n\n    use std::collections::HashMap;\n    use std::sync::{Arc, RwLock};\n\n    use chashmap::CHashMap;\n    use rand::random;\n    use test::Bencher;\n\n    use protocol::{types::Hash, Bytes};\n\n    use crate::map::Map;\n\n    const GEN_TX_SIZE: usize = 1000;\n\n    #[bench]\n    fn bench_map_insert(b: &mut Bencher) {\n        let mut runtime = tokio::runtime::Runtime::new().unwrap();\n\n        let txs = mock_txs(GEN_TX_SIZE);\n\n        b.iter(move || {\n            let cache = Map::new(GEN_TX_SIZE);\n            txs.iter().for_each(|(hash, tx)| {\n                runtime.block_on(cache.insert(hash.clone(), tx.clone()));\n            });\n        });\n    }\n\n    #[bench]\n    fn bench_std_map_insert(b: &mut Bencher) {\n        let txs = mock_txs(GEN_TX_SIZE);\n\n        b.iter(move || {\n            let cache = Arc::new(RwLock::new(HashMap::new()));\n            
txs.iter().for_each(|(hash, tx)| {\n                cache.write().unwrap().insert(hash, tx);\n            });\n        });\n    }\n\n    #[bench]\n    fn bench_chashmap_insert(b: &mut Bencher) {\n        let txs = mock_txs(GEN_TX_SIZE);\n\n        b.iter(move || {\n            let cache = CHashMap::new();\n            txs.iter().for_each(|(hash, tx)| {\n                cache.insert(hash, tx);\n            });\n        });\n    }\n\n    fn mock_txs(size: usize) -> Vec<(Hash, Hash)> {\n        let mut txs = Vec::with_capacity(size);\n        for _ in 0..size {\n            let tx: Vec<u8> = (0..10).map(|_| random::<u8>()).collect();\n            let tx = Hash::digest(Bytes::from(tx));\n            txs.push((tx.clone(), tx));\n        }\n        txs\n    }\n}\n"
  },
  {
    "path": "core/mempool/src/tests/mempool.rs",
    "content": "use std::sync::Arc;\n\nuse test::Bencher;\n\nuse protocol::types::Hash;\n\nuse super::*;\n\nmacro_rules! insert {\n    (normal($pool_size: expr, $input: expr, $output: expr)) => {\n        insert!(inner($pool_size, 1, $input, 0, $output));\n    };\n    (repeat($repeat: expr, $input: expr, $output: expr)) => {\n        insert!(inner($input * 10, $repeat, $input, 0, $output));\n    };\n    (invalid($valid: expr, $invalid: expr, $output: expr)) => {\n        insert!(inner($valid * 10, 1, $valid, $invalid, $output));\n    };\n    (inner($pool_size: expr, $repeat: expr, $valid: expr, $invalid: expr, $output: expr)) => {\n        let mempool =\n            Arc::new(new_mempool($pool_size, TIMEOUT_GAP, CYCLE_LIMIT, MAX_TX_SIZE).await);\n        let txs = mock_txs($valid, $invalid, TIMEOUT);\n        for _ in 0..$repeat {\n            concurrent_insert(txs.clone(), Arc::clone(&mempool)).await;\n        }\n        assert_eq!(mempool.get_tx_cache().len().await, $output);\n    };\n}\n\n#[test]\nfn test_dup_order_hashes() {\n    let hashes = vec![\n        Hash::digest(Bytes::from(\"test1\")),\n        Hash::digest(Bytes::from(\"test2\")),\n        Hash::digest(Bytes::from(\"test3\")),\n        Hash::digest(Bytes::from(\"test4\")),\n        Hash::digest(Bytes::from(\"test2\")),\n    ];\n    assert!(check_dup_order_hashes(&hashes).is_err());\n\n    let hashes = vec![\n        Hash::digest(Bytes::from(\"test1\")),\n        Hash::digest(Bytes::from(\"test2\")),\n        Hash::digest(Bytes::from(\"test3\")),\n        Hash::digest(Bytes::from(\"test4\")),\n    ];\n    assert!(check_dup_order_hashes(&hashes).is_ok());\n}\n\n#[tokio::test]\nasync fn test_insert() {\n    // 1. insertion under pool size.\n    insert!(normal(100, 100, 100));\n\n    // 2. invalid insertion\n    insert!(invalid(80, 10, 80));\n}\n\nmacro_rules! 
package {\n    (normal($tx_num_limit: expr, $insert: expr, $expect_order: expr, $expect_propose: expr)) => {\n        package!(inner(\n            $tx_num_limit,\n            TIMEOUT_GAP,\n            TIMEOUT,\n            $insert,\n            $expect_order,\n            $expect_propose\n        ));\n    };\n    (timeout($timeout_gap: expr, $timeout: expr, $insert: expr, $expect: expr)) => {\n        package!(inner($insert, $timeout_gap, $timeout, $insert, $expect, 0));\n    };\n    (inner($tx_num_limit: expr, $timeout_gap: expr, $timeout: expr, $insert: expr, $expect_order: expr, $expect_propose: expr)) => {\n        let mempool =\n            &Arc::new(new_mempool($insert * 10, $timeout_gap, CYCLE_LIMIT, MAX_TX_SIZE).await);\n        let txs = mock_txs($insert, 0, $timeout);\n        concurrent_insert(txs.clone(), Arc::clone(mempool)).await;\n        let mixed_tx_hashes = exec_package(Arc::clone(mempool), CYCLE_LIMIT, $tx_num_limit).await;\n        assert_eq!(mixed_tx_hashes.order_tx_hashes.len(), $expect_order);\n        assert_eq!(mixed_tx_hashes.propose_tx_hashes.len(), $expect_propose);\n    };\n}\n\n#[tokio::test]\nasync fn test_package() {\n    // 1. pool_size <= tx_num_limit\n    package!(normal(100, 50, 50, 0));\n    package!(normal(100, 100, 100, 0));\n\n    // 2. tx_num_limit < pool_size <= 2 * tx_num_limit\n    package!(normal(100, 101, 100, 1));\n    package!(normal(100, 200, 100, 100));\n\n    // 3. 2 * tx_num_limit < pool_size\n    package!(normal(100, 201, 100, 100));\n\n    // 4. current_height >= tx.timeout\n    package!(timeout(50, CURRENT_HEIGHT, 10, 0));\n    package!(timeout(50, CURRENT_HEIGHT - 10, 10, 0));\n\n    // 5. current_height + timeout_gap < tx.timeout\n    package!(timeout(50, CURRENT_HEIGHT + 51, 10, 0));\n    package!(timeout(50, CURRENT_HEIGHT + 60, 10, 0));\n\n    // 6. 
tx.timeout - timeout_gap <= current_height < tx.timeout\n    package!(timeout(50, CURRENT_HEIGHT + 50, 10, 10));\n    package!(timeout(50, CURRENT_HEIGHT + 1, 10, 10));\n}\n\n#[tokio::test]\nasync fn test_package_order_consistent_with_insert_order() {\n    let mempool = &Arc::new(default_mempool().await);\n\n    let txs = default_mock_txs(100);\n    for tx in txs.iter() {\n        exec_insert(tx.clone(), Arc::clone(mempool)).await;\n    }\n    let mixed_tx_hashes = exec_package(Arc::clone(mempool), CYCLE_LIMIT, TX_NUM_LIMIT).await;\n    assert!(check_order_consistant(&mixed_tx_hashes, &txs));\n\n    // flush partial txs and test order consistency\n    let (remove_txs, reserve_txs) = txs.split_at(50);\n    let remove_hashes: Vec<Hash> = remove_txs.iter().map(|tx| tx.tx_hash.clone()).collect();\n    exec_flush(remove_hashes, Arc::clone(mempool)).await;\n    let mixed_tx_hashes = exec_package(Arc::clone(mempool), CYCLE_LIMIT, TX_NUM_LIMIT).await;\n    assert!(check_order_consistant(&mixed_tx_hashes, reserve_txs));\n}\n\n#[tokio::test]\nasync fn test_flush() {\n    let mempool = Arc::new(default_mempool().await);\n\n    // insert txs\n    let txs = default_mock_txs(555);\n    concurrent_insert(txs.clone(), Arc::clone(&mempool)).await;\n    assert_eq!(mempool.get_tx_cache().len().await, 555);\n\n    let callback_cache = mempool.get_callback_cache();\n    for tx in txs.iter() {\n        callback_cache.insert(tx.tx_hash.clone(), tx.clone()).await;\n    }\n    assert_eq!(callback_cache.len().await, 555);\n\n    // flush existing txs\n    let (remove_txs, _) = txs.split_at(123);\n    let remove_hashes: Vec<Hash> = remove_txs.iter().map(|tx| tx.tx_hash.clone()).collect();\n    exec_flush(remove_hashes, Arc::clone(&mempool)).await;\n    assert_eq!(mempool.get_tx_cache().len().await, 432);\n    assert_eq!(mempool.get_tx_cache().queue_len(), 432);\n    exec_package(Arc::clone(&mempool), CYCLE_LIMIT, TX_NUM_LIMIT).await;\n    assert_eq!(mempool.get_tx_cache().queue_len(), 432);\n   
 assert_eq!(callback_cache.len().await, 0);\n\n    // flush absent txs\n    let txs = default_mock_txs(222);\n    let remove_hashes: Vec<Hash> = txs.iter().map(|tx| tx.tx_hash.clone()).collect();\n    exec_flush(remove_hashes, Arc::clone(&mempool)).await;\n    assert_eq!(mempool.get_tx_cache().len().await, 432);\n    assert_eq!(mempool.get_tx_cache().queue_len(), 432);\n}\n\nmacro_rules! ensure_order_txs {\n    ($in_pool: expr, $out_pool: expr) => {\n        let mempool = &Arc::new(default_mempool().await);\n\n        let txs = &default_mock_txs($in_pool + $out_pool);\n        let (in_pool_txs, out_pool_txs) = txs.split_at($in_pool);\n        concurrent_insert(in_pool_txs.to_vec(), Arc::clone(mempool)).await;\n        concurrent_broadcast(out_pool_txs.to_vec(), Arc::clone(mempool)).await;\n\n        let tx_hashes: Vec<Hash> = txs.iter().map(|tx| tx.tx_hash.clone()).collect();\n        exec_ensure_order_txs(tx_hashes.clone(), Arc::clone(mempool)).await;\n\n        assert_eq!(mempool.get_callback_cache().len().await, $out_pool);\n\n        let fetch_txs = exec_get_full_txs(tx_hashes, Arc::clone(mempool)).await;\n        assert_eq!(fetch_txs.len(), txs.len());\n    };\n}\n\n#[tokio::test]\nasync fn test_ensure_order_txs() {\n    // all txs are in pool\n    ensure_order_txs!(100, 0);\n    // 50 txs are not in pool\n    ensure_order_txs!(50, 50);\n    // all txs are not in pool\n    ensure_order_txs!(0, 100);\n}\n\n#[tokio::test]\nasync fn test_sync_propose_txs() {\n    let mempool = &Arc::new(default_mempool().await);\n\n    let txs = &default_mock_txs(50);\n    let (exist_txs, need_sync_txs) = txs.split_at(20);\n    concurrent_insert(exist_txs.to_vec(), Arc::clone(mempool)).await;\n    concurrent_broadcast(need_sync_txs.to_vec(), Arc::clone(mempool)).await;\n\n    let tx_hashes: Vec<Hash> = txs.iter().map(|tx| tx.tx_hash.clone()).collect();\n    exec_sync_propose_txs(tx_hashes, Arc::clone(mempool)).await;\n\n    assert_eq!(mempool.get_tx_cache().len().await, 
50);\n}\n\n#[rustfmt::skip]\n/// Bench in Intel(R) Core(TM) i7-4770HQ CPU @ 2.20GHz (8 x 2200):\n/// test tests::mempool::bench_check_sig             ... bench:   2,881,140 ns/iter (+/- 907,215)\n/// test tests::mempool::bench_check_sig_serial_1    ... bench:      94,666 ns/iter (+/- 11,070)\n/// test tests::mempool::bench_check_sig_serial_10   ... bench:     966,800 ns/iter (+/- 97,227)\n/// test tests::mempool::bench_check_sig_serial_100  ... bench:  10,098,216 ns/iter (+/- 1,289,584)\n/// test tests::mempool::bench_check_sig_serial_1000 ... bench: 100,396,727 ns/iter (+/- 10,665,143)\n/// test tests::mempool::bench_flush                 ... bench:   3,504,193 ns/iter (+/- 1,096,699)\n/// test tests::mempool::bench_get_10000_full_txs    ... bench:  14,997,762 ns/iter (+/- 2,697,725)\n/// test tests::mempool::bench_get_20000_full_txs    ... bench:  31,858,720 ns/iter (+/- 3,822,648)\n/// test tests::mempool::bench_get_40000_full_txs    ... bench:  65,027,639 ns/iter (+/- 3,926,768)\n/// test tests::mempool::bench_get_80000_full_txs    ... bench: 131,066,149 ns/iter (+/- 11,457,417)\n/// test tests::mempool::bench_insert                ... bench:   9,320,879 ns/iter (+/- 710,246)\n/// test tests::mempool::bench_insert_serial_1       ... bench:       4,588 ns/iter (+/- 349)\n/// test tests::mempool::bench_insert_serial_10      ... bench:      44,027 ns/iter (+/- 4,168)\n/// test tests::mempool::bench_insert_serial_100     ... bench:     432,974 ns/iter (+/- 43,058)\n/// test tests::mempool::bench_insert_serial_1000    ... bench:   4,449,648 ns/iter (+/- 560,818)\n/// test tests::mempool::bench_mock_txs              ... bench:   5,890,752 ns/iter (+/- 583,029)\n/// test tests::mempool::bench_package               ... bench:   3,684,431 ns/iter (+/- 278,575)\n/// test tx_cache::tests::bench_flush                ... bench:   3,034,868 ns/iter (+/- 371,514)\n/// test tx_cache::tests::bench_flush_insert         ... 
bench:   2,954,223 ns/iter (+/- 389,002)\n/// test tx_cache::tests::bench_gen_txs              ... bench:   2,479,226 ns/iter (+/- 399,728)\n/// test tx_cache::tests::bench_insert               ... bench:   2,742,422 ns/iter (+/- 641,587)\n/// test tx_cache::tests::bench_package              ... bench:      70,563 ns/iter (+/- 16,723)\n/// test tx_cache::tests::bench_package_insert       ... bench:   2,654,196 ns/iter (+/- 285,460)\n\n#[bench]\nfn bench_insert(b: &mut Bencher) {\n    let mut runtime = tokio::runtime::Runtime::new().unwrap();\n    let mempool = &Arc::new(default_mempool_sync());\n\n    b.iter(|| {\n        let txs = default_mock_txs(100);\n        runtime.block_on(concurrent_insert(txs, Arc::clone(mempool)));\n    });\n}\n\n#[bench]\nfn bench_insert_serial_1(b: &mut Bencher) {\n    let mempool = &Arc::new(default_mempool_sync());\n    let txs = default_mock_txs(1);\n\n    b.iter(move || {\n        futures::executor::block_on(async {\n            for tx in txs.clone().into_iter() {\n                let _ = mempool.insert(Context::new(), tx).await;\n            }\n        });\n    })\n}\n\n#[bench]\nfn bench_insert_serial_10(b: &mut Bencher) {\n    let mempool = &Arc::new(default_mempool_sync());\n    let txs = default_mock_txs(10);\n\n    b.iter(move || {\n        futures::executor::block_on(async {\n            for tx in txs.clone().into_iter() {\n                let _ = mempool.insert(Context::new(), tx).await;\n            }\n        });\n    })\n}\n\n#[bench]\nfn bench_insert_serial_100(b: &mut Bencher) {\n    let mempool = &Arc::new(default_mempool_sync());\n    let txs = default_mock_txs(100);\n\n    b.iter(move || {\n        futures::executor::block_on(async {\n            for tx in txs.clone().into_iter() {\n                let _ = mempool.insert(Context::new(), tx).await;\n            }\n        });\n    })\n}\n\n#[bench]\nfn bench_insert_serial_1000(b: &mut Bencher) {\n    let mempool = &Arc::new(default_mempool_sync());\n    let txs = 
default_mock_txs(1000);\n\n    b.iter(move || {\n        futures::executor::block_on(async {\n            for tx in txs.clone().into_iter() {\n                let _ = mempool.insert(Context::new(), tx).await;\n            }\n        });\n    })\n}\n\n#[bench]\nfn bench_package(b: &mut Bencher) {\n    let mut runtime = tokio::runtime::Runtime::new().unwrap();\n\n    let mempool = Arc::new(default_mempool_sync());\n    let txs = default_mock_txs(50_000);\n    runtime.block_on(concurrent_insert(txs, Arc::clone(&mempool)));\n    b.iter(|| {\n        runtime.block_on(exec_package(\n            Arc::clone(&mempool),\n            CYCLE_LIMIT,\n            TX_NUM_LIMIT,\n        ));\n    });\n}\n\n#[bench]\nfn bench_get_10000_full_txs(b: &mut Bencher) {\n    let mut runtime = tokio::runtime::Runtime::new().unwrap();\n\n    let mempool = Arc::new(default_mempool_sync());\n    let txs = default_mock_txs(10_000);\n    let tx_hashes = txs.iter().map(|tx| tx.tx_hash.clone()).collect::<Vec<_>>();\n    runtime.block_on(concurrent_insert(txs, Arc::clone(&mempool)));\n    b.iter(|| {\n        runtime.block_on(exec_get_full_txs(tx_hashes.clone(), Arc::clone(&mempool)));\n    });\n}\n\n#[bench]\nfn bench_get_20000_full_txs(b: &mut Bencher) {\n    let mut runtime = tokio::runtime::Runtime::new().unwrap();\n\n    let mempool = Arc::new(default_mempool_sync());\n    let txs = default_mock_txs(20_000);\n    let tx_hashes = txs.iter().map(|tx| tx.tx_hash.clone()).collect::<Vec<_>>();\n    runtime.block_on(concurrent_insert(txs, Arc::clone(&mempool)));\n    b.iter(|| {\n        runtime.block_on(exec_get_full_txs(tx_hashes.clone(), Arc::clone(&mempool)));\n    });\n}\n\n#[bench]\nfn bench_get_40000_full_txs(b: &mut Bencher) {\n    let mut runtime = tokio::runtime::Runtime::new().unwrap();\n\n    let mempool = Arc::new(default_mempool_sync());\n    let txs = default_mock_txs(40_000);\n    let tx_hashes = txs.iter().map(|tx| tx.tx_hash.clone()).collect::<Vec<_>>();\n    
runtime.block_on(concurrent_insert(txs, Arc::clone(&mempool)));\n    b.iter(|| {\n        runtime.block_on(exec_get_full_txs(tx_hashes.clone(), Arc::clone(&mempool)));\n    });\n}\n\n#[bench]\nfn bench_get_80000_full_txs(b: &mut Bencher) {\n    let mut runtime = tokio::runtime::Runtime::new().unwrap();\n\n    let mempool = Arc::new(default_mempool_sync());\n    let txs = default_mock_txs(80_000);\n    let tx_hashes = txs.iter().map(|tx| tx.tx_hash.clone()).collect::<Vec<_>>();\n    runtime.block_on(concurrent_insert(txs, Arc::clone(&mempool)));\n    b.iter(|| {\n        runtime.block_on(exec_get_full_txs(tx_hashes.clone(), Arc::clone(&mempool)));\n    });\n}\n\n#[bench]\nfn bench_flush(b: &mut Bencher) {\n    let mut runtime = tokio::runtime::Runtime::new().unwrap();\n\n    let mempool = &Arc::new(default_mempool_sync());\n    let txs = &default_mock_txs(100);\n    let remove_hashes: &Vec<Hash> = &txs.iter().map(|tx| tx.tx_hash.clone()).collect();\n    b.iter(|| {\n        runtime.block_on(concurrent_insert(txs.clone(), Arc::clone(mempool)));\n        runtime.block_on(exec_flush(remove_hashes.clone(), Arc::clone(mempool)));\n        runtime.block_on(exec_package(Arc::clone(mempool), CYCLE_LIMIT, TX_NUM_LIMIT));\n    });\n}\n\n#[tokio::test]\nasync fn bench_sign_with_spawn_list() {\n    let adapter = Arc::new(HashMemPoolAdapter::new());\n    let txs = default_mock_txs(30000);\n    let len = txs.len();\n    let now = std::time::Instant::now();\n\n    let futs = txs\n        .into_iter()\n        .map(|tx| {\n            let adapter = Arc::clone(&adapter);\n            tokio::spawn(async move {\n                adapter\n                    .check_authorization(Context::new(), Box::new(tx))\n                    .await\n                    .unwrap();\n            })\n        })\n        .collect::<Vec<_>>();\n    futures::future::try_join_all(futs).await.unwrap();\n\n    println!(\n        \"bench_sign_with_spawn_list size {:?} cost {:?}\",\n        len,\n        
now.elapsed()\n    );\n}\n\n#[tokio::test]\nasync fn bench_sign() {\n    let adapter = HashMemPoolAdapter::new();\n    let txs = default_mock_txs(30000)\n        .into_iter()\n        .map(Box::new)\n        .collect::<Vec<_>>();\n    let now = std::time::Instant::now();\n\n    for tx in txs.iter() {\n        adapter\n            .check_authorization(Context::new(), tx.clone())\n            .await\n            .unwrap();\n    }\n\n    println!(\"bench_sign size {:?} cost {:?}\", txs.len(), now.elapsed());\n}\n\n#[bench]\nfn bench_mock_txs(b: &mut Bencher) {\n    b.iter(|| {\n        default_mock_txs(100);\n    });\n}\n\n#[bench]\nfn bench_check_sig(b: &mut Bencher) {\n    let mut runtime = tokio::runtime::Runtime::new().unwrap();\n\n    let txs = &default_mock_txs(100);\n\n    b.iter(|| {\n        runtime.block_on(concurrent_check_sig(txs.clone()));\n    });\n}\n\n#[bench]\nfn bench_check_sig_serial_1(b: &mut Bencher) {\n    let txs = default_mock_txs(1);\n\n    b.iter(|| {\n        for tx in txs.iter() {\n            let _ = check_sig(&tx);\n        }\n    })\n}\n\n#[bench]\nfn bench_check_sig_serial_10(b: &mut Bencher) {\n    let txs = default_mock_txs(10);\n\n    b.iter(|| {\n        for tx in txs.iter() {\n            let _ = check_sig(&tx);\n        }\n    })\n}\n\n#[bench]\nfn bench_check_sig_serial_100(b: &mut Bencher) {\n    let txs = default_mock_txs(100);\n\n    b.iter(|| {\n        for tx in txs.iter() {\n            let _ = check_sig(&tx);\n        }\n    })\n}\n\n#[bench]\nfn bench_check_sig_serial_1000(b: &mut Bencher) {\n    let txs = default_mock_txs(1000);\n\n    b.iter(|| {\n        for tx in txs.iter() {\n            let _ = check_sig(&tx);\n        }\n    })\n}\n"
  },
  {
    "path": "core/mempool/src/tests/mod.rs",
    "content": "extern crate test;\n\nmod mempool;\n\nuse std::convert::{From, TryFrom};\nuse std::sync::Arc;\n\nuse async_trait::async_trait;\nuse chashmap::CHashMap;\nuse futures::executor;\nuse rand::random;\nuse rand::rngs::OsRng;\n\nuse common_crypto::{\n    Crypto, PrivateKey, PublicKey, Secp256k1, Secp256k1PrivateKey, Secp256k1PublicKey,\n    Secp256k1Signature, Signature, ToPublicKey,\n};\nuse protocol::codec::ProtocolCodec;\nuse protocol::traits::{Context, MemPool, MemPoolAdapter, MixedTxHashes};\nuse protocol::types::{Address, Hash, RawTransaction, SignedTransaction, TransactionRequest};\nuse protocol::{Bytes, ProtocolResult};\n\nuse crate::{check_dup_order_hashes, HashMemPool, MemPoolError};\n\nconst CYCLE_LIMIT: u64 = 1_000_000;\nconst TX_NUM_LIMIT: u64 = 10_000;\nconst CURRENT_HEIGHT: u64 = 999;\nconst POOL_SIZE: usize = 100_000;\nconst MAX_TX_SIZE: u64 = 1024; // 1KB\nconst TIMEOUT: u64 = 1000;\nconst TIMEOUT_GAP: u64 = 100;\nconst TX_CYCLE: u64 = 1;\n\npub struct HashMemPoolAdapter {\n    network_txs: CHashMap<Hash, SignedTransaction>,\n}\n\nimpl HashMemPoolAdapter {\n    fn new() -> HashMemPoolAdapter {\n        HashMemPoolAdapter {\n            network_txs: CHashMap::new(),\n        }\n    }\n}\n\n#[async_trait]\nimpl MemPoolAdapter for HashMemPoolAdapter {\n    async fn pull_txs(\n        &self,\n        _ctx: Context,\n        _height: Option<u64>,\n        tx_hashes: Vec<Hash>,\n    ) -> ProtocolResult<Vec<SignedTransaction>> {\n        let mut vec = Vec::new();\n        for hash in tx_hashes {\n            if let Some(tx) = self.network_txs.get(&hash) {\n                vec.push(tx.clone());\n            }\n        }\n        Ok(vec)\n    }\n\n    async fn broadcast_tx(&self, _ctx: Context, tx: SignedTransaction) -> ProtocolResult<()> {\n        self.network_txs.insert(tx.tx_hash.clone(), tx);\n        Ok(())\n    }\n\n    async fn check_authorization(\n        &self,\n        _ctx: Context,\n        tx: Box<SignedTransaction>,\n    ) -> 
ProtocolResult<()> {\n        check_hash(&tx.clone()).await?;\n        check_sig(&tx)\n    }\n\n    async fn check_transaction(\n        &self,\n        _ctx: Context,\n        _tx: &SignedTransaction,\n    ) -> ProtocolResult<()> {\n        Ok(())\n    }\n\n    async fn check_storage_exist(&self, _ctx: Context, _tx_hash: &Hash) -> ProtocolResult<()> {\n        Ok(())\n    }\n\n    async fn get_latest_height(&self, _ctx: Context) -> ProtocolResult<u64> {\n        Ok(CURRENT_HEIGHT)\n    }\n\n    async fn get_transactions_from_storage(\n        &self,\n        _ctx: Context,\n        _height: Option<u64>,\n        _tx_hashes: &[Hash],\n    ) -> ProtocolResult<Vec<Option<SignedTransaction>>> {\n        Ok(vec![])\n    }\n\n    fn report_good(&self, _ctx: Context) {}\n\n    fn set_args(&self, _timeout_gap: u64, _cycles_limit: u64, _max_tx_size: u64) {}\n}\n\npub fn default_mock_txs(size: usize) -> Vec<SignedTransaction> {\n    mock_txs(size, 0, TIMEOUT)\n}\n\nfn mock_txs(valid_size: usize, invalid_size: usize, timeout: u64) -> Vec<SignedTransaction> {\n    let mut vec = Vec::new();\n    let priv_key = Secp256k1PrivateKey::generate(&mut OsRng);\n    let pub_key = priv_key.pub_key();\n    for i in 0..valid_size + invalid_size {\n        vec.push(mock_signed_tx(&priv_key, &pub_key, timeout, i < valid_size));\n    }\n    vec\n}\n\nfn default_mempool_sync() -> HashMemPool<HashMemPoolAdapter> {\n    let mut rt = tokio::runtime::Runtime::new().unwrap();\n    rt.block_on(default_mempool())\n}\n\nasync fn default_mempool() -> HashMemPool<HashMemPoolAdapter> {\n    new_mempool(POOL_SIZE, TIMEOUT_GAP, CYCLE_LIMIT, MAX_TX_SIZE).await\n}\n\nasync fn new_mempool(\n    pool_size: usize,\n    timeout_gap: u64,\n    cycles_limit: u64,\n    max_tx_size: u64,\n) -> HashMemPool<HashMemPoolAdapter> {\n    let adapter = HashMemPoolAdapter::new();\n    let mempool = HashMemPool::new(pool_size, adapter, vec![]).await;\n    mempool.set_args(timeout_gap, cycles_limit, max_tx_size);\n    
mempool\n}\n\nasync fn check_hash(tx: &SignedTransaction) -> ProtocolResult<()> {\n    let mut raw = tx.raw.clone();\n    let raw_bytes = raw.encode().await?;\n    let tx_hash = Hash::digest(raw_bytes);\n    if tx_hash != tx.tx_hash {\n        return Err(MemPoolError::CheckHash {\n            expect: tx.tx_hash.clone(),\n            actual: tx_hash,\n        }\n        .into());\n    }\n    Ok(())\n}\n\nfn check_sig(tx: &SignedTransaction) -> ProtocolResult<()> {\n    if Secp256k1::verify_signature(&tx.tx_hash.as_bytes(), &tx.signature, &tx.pubkey).is_err() {\n        return Err(MemPoolError::CheckAuthorization {\n            tx_hash:  tx.tx_hash.clone(),\n            err_info: \"\".to_string(),\n        }\n        .into());\n    }\n    Ok(())\n}\n\nasync fn concurrent_check_sig(txs: Vec<SignedTransaction>) {\n    let futs = txs\n        .into_iter()\n        .map(|tx| tokio::task::spawn_blocking(move || check_sig(&tx).unwrap()))\n        .collect::<Vec<_>>();\n\n    futures::future::try_join_all(futs).await.unwrap();\n}\n\nasync fn concurrent_insert(\n    txs: Vec<SignedTransaction>,\n    mempool: Arc<HashMemPool<HashMemPoolAdapter>>,\n) {\n    let futs = txs\n        .into_iter()\n        .map(|tx| {\n            let mempool = Arc::clone(&mempool);\n            tokio::spawn(async { exec_insert(tx, mempool).await })\n        })\n        .collect::<Vec<_>>();\n\n    futures::future::try_join_all(futs).await.unwrap();\n}\n\nasync fn concurrent_broadcast(\n    txs: Vec<SignedTransaction>,\n    mempool: Arc<HashMemPool<HashMemPoolAdapter>>,\n) {\n    let futs = txs\n        .into_iter()\n        .map(|tx| {\n            let mempool = Arc::clone(&mempool);\n            tokio::spawn(async move {\n                mempool\n                    .get_adapter()\n                    .broadcast_tx(Context::new(), tx)\n                    .await\n                    .unwrap()\n            })\n        })\n        .collect::<Vec<_>>();\n\n    
futures::future::try_join_all(futs).await.unwrap();\n}\n\nasync fn exec_insert(signed_tx: SignedTransaction, mempool: Arc<HashMemPool<HashMemPoolAdapter>>) {\n    let _ = mempool.insert(Context::new(), signed_tx).await.is_ok();\n}\n\nasync fn exec_flush(remove_hashes: Vec<Hash>, mempool: Arc<HashMemPool<HashMemPoolAdapter>>) {\n    mempool.flush(Context::new(), &remove_hashes).await.unwrap()\n}\n\nasync fn exec_package(\n    mempool: Arc<HashMemPool<HashMemPoolAdapter>>,\n    cycle_limit: u64,\n    tx_num_limit: u64,\n) -> MixedTxHashes {\n    mempool\n        .package(Context::new(), cycle_limit, tx_num_limit)\n        .await\n        .unwrap()\n}\n\nasync fn exec_ensure_order_txs(\n    require_hashes: Vec<Hash>,\n    mempool: Arc<HashMemPool<HashMemPoolAdapter>>,\n) {\n    mempool\n        .ensure_order_txs(Context::new(), None, &require_hashes)\n        .await\n        .unwrap();\n}\n\nasync fn exec_sync_propose_txs(\n    require_hashes: Vec<Hash>,\n    mempool: Arc<HashMemPool<HashMemPoolAdapter>>,\n) {\n    mempool\n        .sync_propose_txs(Context::new(), require_hashes)\n        .await\n        .unwrap();\n}\n\nasync fn exec_get_full_txs(\n    require_hashes: Vec<Hash>,\n    mempool: Arc<HashMemPool<HashMemPoolAdapter>>,\n) -> Vec<SignedTransaction> {\n    mempool\n        .get_full_txs(Context::new(), None, &require_hashes)\n        .await\n        .unwrap()\n}\n\nfn mock_signed_tx(\n    priv_key: &Secp256k1PrivateKey,\n    pub_key: &Secp256k1PublicKey,\n    timeout: u64,\n    valid: bool,\n) -> SignedTransaction {\n    let nonce = Hash::digest(Bytes::from(get_random_bytes(10)));\n\n    let request = TransactionRequest {\n        service_name: \"test\".to_owned(),\n        method:       \"test\".to_owned(),\n        payload:      \"test\".to_owned(),\n    };\n    let mut raw = RawTransaction {\n        chain_id: nonce.clone(),\n        nonce,\n        timeout,\n        cycles_limit: TX_CYCLE,\n        cycles_price: 1,\n        request,\n        sender: 
Address::from_pubkey_bytes(pub_key.to_bytes()).unwrap(),\n    };\n\n    let raw_bytes = executor::block_on(async { raw.encode().await.unwrap() });\n    let tx_hash = Hash::digest(raw_bytes);\n\n    let signature = if valid {\n        Secp256k1::sign_message(&tx_hash.as_bytes(), &priv_key.to_bytes()).unwrap()\n    } else {\n        Secp256k1Signature::try_from([0u8; 64].as_ref()).unwrap()\n    };\n\n    SignedTransaction {\n        raw,\n        tx_hash,\n        pubkey: pub_key.to_bytes(),\n        signature: signature.to_bytes(),\n    }\n}\n\nfn get_random_bytes(len: usize) -> Vec<u8> {\n    (0..len).map(|_| random::<u8>()).collect()\n}\n\nfn check_order_consistant(mixed_tx_hashes: &MixedTxHashes, txs: &[SignedTransaction]) -> bool {\n    // Order is consistent only if every packaged hash matches its insertion\n    // position, so `all` is required here, not `any`.\n    mixed_tx_hashes\n        .order_tx_hashes\n        .iter()\n        .enumerate()\n        .all(|(i, hash)| hash == &txs.get(i).unwrap().tx_hash)\n}\n"
  },
  {
    "path": "core/mempool/src/tx_cache.rs",
    "content": "use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};\nuse std::sync::Arc;\n\nuse crossbeam_queue::ArrayQueue;\n\nuse protocol::traits::MixedTxHashes;\nuse protocol::types::{Hash, SignedTransaction};\nuse protocol::ProtocolResult;\n\nuse crate::map::Map;\nuse crate::MemPoolError;\n\n/// Wrap `SignedTransaction` with two marks for mempool management.\n///\n/// Each new transaction inserted into the mempool sets `removed` false,\n/// while a transaction from propose-transaction-sync additionally sets\n/// `proposed` true. When a shared transaction in `TxCache` is removed from\n/// the map, its `removed` mark is set true. The `removed` and `proposed`\n/// marks tell the queue in `TxCache` how to process elements while packaging\n/// transaction hashes for consensus.\npub struct TxWrapper {\n    /// Content.\n    tx:       SignedTransaction,\n    /// When the map removes a `shared_tx` during flush, it marks `removed`\n    /// true. Afterwards, during package the queue drops transactions whose\n    /// `removed` mark is true.\n    removed:  AtomicBool,\n    /// Response transactions from propose-syncing are inserted into `TxCache`\n    /// with `proposed` marked true.\n    /// While collecting propose_tx_hashes during package,\n    /// transactions whose `proposed` mark is true are skipped.\n    proposed: AtomicBool,\n}\n\nimpl TxWrapper {\n    #[allow(dead_code)]\n    pub(crate) fn new(tx: SignedTransaction) -> Self {\n        TxWrapper {\n            tx,\n            removed: AtomicBool::new(false),\n            proposed: AtomicBool::new(false),\n        }\n    }\n\n    pub(crate) fn propose(tx: SignedTransaction) -> Self {\n        TxWrapper {\n            tx,\n            removed: AtomicBool::new(false),\n            proposed: AtomicBool::new(true),\n        }\n    }\n\n    pub(crate) fn set_removed(&self) {\n        self.removed.store(true, Ordering::SeqCst);\n    }\n\n    #[inline]\n    pub(crate) fn is_removed(&self) -> bool {\n        
self.removed.load(Ordering::SeqCst)\n    }\n\n    #[inline]\n    fn is_proposed(&self) -> bool {\n        self.proposed.load(Ordering::SeqCst)\n    }\n\n    #[inline]\n    fn is_timeout(&self, current_height: u64, timeout: u64) -> bool {\n        let tx_timeout = self.tx.raw.timeout;\n        tx_timeout <= current_height || tx_timeout > timeout\n    }\n}\n\n/// Shared `TxWrapper` for collections in `TxCache`.\npub type SharedTx = Arc<TxWrapper>;\n\n/// An enum representing the package stage\n#[derive(PartialEq, Eq)]\nenum Stage {\n    /// Packing order_tx_hashes\n    OrderTxs,\n    /// Packing propose_tx_hashes\n    ProposeTxs,\n    /// Packing finished. Only push transactions into the candidate queue.\n    Finished,\n}\n\nimpl Stage {\n    fn next(&self) -> Self {\n        match self {\n            Stage::OrderTxs => Stage::ProposeTxs,\n            Stage::ProposeTxs => Stage::Finished,\n            Stage::Finished => panic!(\"There is no next stage after finished stage!\"),\n        }\n    }\n}\n\n/// Queue roles. The incumbent queue serves insertion and package.\nstruct QueueRole {\n    incumbent: Arc<ArrayQueue<SharedTx>>,\n    candidate: Arc<ArrayQueue<SharedTx>>,\n}\n\n/// This is the core structure for caching new transactions and\n/// feeding transactions in batch to consensus.\n///\n/// The queues serve packaging a batch of transactions in insertion\n/// order. The `map` serves random search and removal.\n/// All these collections must support concurrent insertion.\n/// We keep two queues, `queue_0` and `queue_1`, to make package concurrent with\n/// insertion. When `queue_0` serves insertion and package begins,\n/// transactions are popped from `queue_0` and pushed into `queue_1` while new\n/// transactions are still inserted into `queue_0` concurrently. 
When `queue_0` is popped\n/// empty, `queue_1` switches to be the insertion queue.\npub struct TxCache {\n    /// One queue.\n    queue_0:          Arc<ArrayQueue<SharedTx>>,\n    /// Another queue.\n    queue_1:          Arc<ArrayQueue<SharedTx>>,\n    /// A map for random search and removal.\n    map:              Map<SharedTx>,\n    /// This is used to pick a queue for insertion:\n    /// if true select `queue_0`, else `queue_1`.\n    is_zero:          AtomicBool,\n    /// This is an atomic state to solve the concurrent insertion problem during\n    /// package. While switching insertion queues, some transactions may\n    /// still be inserted into the old queue. We use this state to make sure\n    /// such insertions *happen-before* the old queue is re-popped.\n    concurrent_count: AtomicUsize,\n}\n\nimpl TxCache {\n    pub fn new(pool_size: usize) -> Self {\n        TxCache {\n            queue_0:          Arc::new(ArrayQueue::new(pool_size * 2)),\n            queue_1:          Arc::new(ArrayQueue::new(pool_size * 2)),\n            map:              Map::new(pool_size * 2),\n            is_zero:          AtomicBool::new(true),\n            concurrent_count: AtomicUsize::new(0),\n        }\n    }\n\n    pub async fn len(&self) -> usize {\n        self.map.len().await\n    }\n\n    pub async fn insert_new_tx(&self, signed_tx: SignedTransaction) -> ProtocolResult<()> {\n        let tx_hash = signed_tx.tx_hash.clone();\n        let tx_wrapper = TxWrapper::new(signed_tx);\n        let shared_tx = Arc::new(tx_wrapper);\n        self.insert(tx_hash, shared_tx).await\n    }\n\n    pub async fn insert_propose_tx(&self, signed_tx: SignedTransaction) -> ProtocolResult<()> {\n        let tx_hash = signed_tx.tx_hash.clone();\n        let tx_wrapper = TxWrapper::propose(signed_tx);\n        let shared_tx = Arc::new(tx_wrapper);\n        self.insert(tx_hash, shared_tx).await\n    }\n\n    pub async fn show_unknown(&self, tx_hashes: &[Hash]) -> Vec<Hash> {\n        let mut unknown_hashes = vec![];\n\n     
   for tx_hash in tx_hashes.iter() {\n            if !self.contain(&tx_hash).await {\n                unknown_hashes.push(tx_hash.clone());\n            }\n        }\n\n        unknown_hashes\n    }\n\n    pub async fn flush(&self, tx_hashes: &[Hash], current_height: u64, timeout: u64) {\n        for tx_hash in tx_hashes {\n            let opt = self.map.get(tx_hash).await;\n            if let Some(shared_tx) = opt {\n                shared_tx.set_removed();\n            }\n        }\n        // Setting `removed` and the actual removal are split into two loops to avoid lock contention.\n        self.map.remove_batch(tx_hashes).await;\n        self.flush_incumbent_queue(current_height, timeout).await;\n    }\n\n    pub async fn package(\n        &self,\n        _cycles_limit: u64,\n        tx_num_limit: u64,\n        current_height: u64,\n        timeout: u64,\n    ) -> ProtocolResult<MixedTxHashes> {\n        let queue_role = self.get_queue_role();\n\n        let mut order_tx_hashes = Vec::new();\n        let mut propose_tx_hashes = Vec::new();\n        let mut timeout_tx_hashes = Vec::new();\n\n        let mut tx_count: u64 = 0;\n        let mut stage = Stage::OrderTxs;\n\n        loop {\n            if let Ok(shared_tx) = queue_role.incumbent.pop() {\n                let tx_hash = &shared_tx.tx.tx_hash;\n\n                if shared_tx.is_removed() {\n                    continue;\n                }\n                if shared_tx.is_timeout(current_height, timeout) {\n                    timeout_tx_hashes.push(tx_hash.clone());\n                    continue;\n                }\n                // Transactions passing the filters above are valid and are cached in the candidate queue.\n                if queue_role\n                    .candidate\n                    .push(Arc::<TxWrapper>::clone(&shared_tx))\n                    .is_err()\n                {\n                    log::error!(\n                        \"[core_mempool]: candidate queue is full while package, delete {:?}\",\n                
        &shared_tx.tx.tx_hash\n                    );\n                    self.map.remove(&shared_tx.tx.tx_hash).await;\n                }\n\n                if stage == Stage::Finished\n                    || (stage == Stage::ProposeTxs && shared_tx.is_proposed())\n                {\n                    continue;\n                }\n                tx_count += 1;\n                if tx_count > tx_num_limit {\n                    stage = stage.next();\n                    tx_count = 1;\n                }\n\n                match stage {\n                    Stage::OrderTxs => order_tx_hashes.push(tx_hash.clone()),\n                    Stage::ProposeTxs => propose_tx_hashes.push(tx_hash.clone()),\n                    Stage::Finished => {}\n                }\n            } else {\n                // Switch queue_roles\n                let new_role = self.switch_queue_role();\n                // Transactions may insert into previous incumbent queue during role switch.\n                self.process_omission_txs(new_role).await;\n                break;\n            }\n        }\n        // Remove timeout tx in map\n        self.map.remove_batch(&timeout_tx_hashes).await;\n\n        Ok(MixedTxHashes {\n            order_tx_hashes,\n            propose_tx_hashes,\n        })\n    }\n\n    pub async fn check_exist(&self, tx_hash: &Hash) -> ProtocolResult<()> {\n        if self.contain(tx_hash).await {\n            return Err(MemPoolError::Dup {\n                tx_hash: tx_hash.clone(),\n            }\n            .into());\n        }\n        Ok(())\n    }\n\n    pub async fn check_reach_limit(&self, pool_size: usize) -> ProtocolResult<()> {\n        if self.len().await >= pool_size {\n            return Err(MemPoolError::ReachLimit { pool_size }.into());\n        }\n        Ok(())\n    }\n\n    pub async fn contain(&self, tx_hash: &Hash) -> bool {\n        self.map.contains_key(tx_hash).await\n    }\n\n    pub async fn get(&self, tx_hash: &Hash) -> 
Option<SignedTransaction> {\n        self.map\n            .get(tx_hash)\n            .await\n            .map(|shared_tx| shared_tx.tx.clone())\n    }\n\n    pub fn queue_len(&self) -> usize {\n        if self.is_zero.load(Ordering::Relaxed) {\n            self.queue_0.len()\n        } else {\n            self.queue_1.len()\n        }\n    }\n\n    async fn insert(&self, tx_hash: Hash, shared_tx: SharedTx) -> ProtocolResult<()> {\n        // If multiple identical transactions are inserted concurrently,\n        // this prevents more than one of them from being inserted into the queue.\n        if self\n            .map\n            .insert(tx_hash.clone(), Arc::<TxWrapper>::clone(&shared_tx))\n            .await\n            .is_some()\n        {\n            return Err(MemPoolError::Dup { tx_hash }.into());\n        }\n\n        self.concurrent_count.fetch_add(1, Ordering::SeqCst);\n        let rst = self\n            .get_queue_role()\n            .incumbent\n            .push(Arc::<TxWrapper>::clone(&shared_tx));\n        self.concurrent_count.fetch_sub(1, Ordering::SeqCst);\n\n        // If pushing into the queue failed, remove the transaction from the map.\n        if rst.is_err() {\n            // If the tx_hash already exists, it will panic. 
So the duplicate check must be done before insertion.\n            self.map.remove(&tx_hash).await;\n            Err(MemPoolError::Insert { tx_hash }.into())\n        } else {\n            Ok(())\n        }\n    }\n\n    // Process transactions inserted into the previous incumbent queue during the role switch.\n    async fn process_omission_txs(&self, queue_role: QueueRole) {\n        'outer: loop {\n            // When no transaction insertions are in flight,\n            // pop the previous incumbent queue and push its transactions into the current incumbent queue.\n            if self.concurrent_count.load(Ordering::SeqCst) == 0 {\n                while let Ok(shared_tx) = queue_role.candidate.pop() {\n                    if queue_role\n                        .incumbent\n                        .push(Arc::<TxWrapper>::clone(&shared_tx))\n                        .is_err()\n                    {\n                        log::error!(\n                            \"[core_mempool]: incumbent queue is full while process_omission_txs, delete {:?}\",\n                            &shared_tx.tx.tx_hash\n                        );\n                        self.map.remove(&shared_tx.tx.tx_hash).await;\n                    }\n                }\n                break 'outer;\n            }\n        }\n    }\n\n    async fn flush_incumbent_queue(&self, current_height: u64, timeout: u64) {\n        let queue_role = self.get_queue_role();\n        let mut timeout_tx_hashes = Vec::new();\n\n        loop {\n            if let Ok(shared_tx) = queue_role.incumbent.pop() {\n                let tx_hash = &shared_tx.tx.tx_hash;\n\n                if shared_tx.is_removed() {\n                    continue;\n                }\n                if shared_tx.is_timeout(current_height, timeout) {\n                    timeout_tx_hashes.push(tx_hash.clone());\n                    continue;\n                }\n                // Transactions passing the filters above are valid and are cached in the candidate queue.\n                if 
queue_role\n                    .candidate\n                    .push(Arc::<TxWrapper>::clone(&shared_tx))\n                    .is_err()\n                {\n                    log::error!(\n                        \"[core_mempool]: candidate queue is full while flush_incumbent_queue, delete {:?}\",\n                        &shared_tx.tx.tx_hash\n                    );\n                    self.map.remove(&shared_tx.tx.tx_hash).await;\n                }\n            } else {\n                // Switch queue_roles\n                let new_role = self.switch_queue_role();\n                // Transactions may insert into previous incumbent queue during role switch.\n                self.process_omission_txs(new_role).await;\n                break;\n            }\n        }\n        // Remove timeout tx in map\n        self.map.remove_batch(&timeout_tx_hashes).await;\n    }\n\n    fn switch_queue_role(&self) -> QueueRole {\n        self.is_zero.fetch_xor(true, Ordering::SeqCst);\n        self.get_queue_role()\n    }\n\n    fn get_queue_role(&self) -> QueueRole {\n        let (incumbent, candidate) = if self.is_zero.load(Ordering::SeqCst) {\n            (&self.queue_0, &self.queue_1)\n        } else {\n            (&self.queue_1, &self.queue_0)\n        };\n        QueueRole {\n            incumbent: Arc::clone(incumbent),\n            candidate: Arc::clone(candidate),\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    extern crate test;\n\n    use std::sync::Arc;\n\n    use rand::random;\n    use test::Bencher;\n\n    use protocol::types::{\n        Address, Bytes, Hash, RawTransaction, SignedTransaction, TransactionRequest,\n    };\n\n    use crate::map::Map;\n    use crate::tx_cache::{TxCache, TxWrapper};\n\n    const POOL_SIZE: usize = 1000;\n    const BYTES_LEN: usize = 10;\n    const TX_NUM: usize = 1000;\n    const TX_CYCLE: u64 = 1;\n    const TX_NUM_LIMIT: u64 = 20000;\n    const CYCLE_LIMIT: u64 = 500;\n    const CURRENT_H: u64 = 100;\n    const TIMEOUT: 
u64 = 150;\n\n    fn gen_bytes() -> Vec<u8> {\n        (0..BYTES_LEN).map(|_| random::<u8>()).collect()\n    }\n\n    fn gen_signed_txs(n: usize) -> Vec<SignedTransaction> {\n        let mut vec = Vec::new();\n        for _ in 0..n {\n            vec.push(mock_signed_tx(gen_bytes()));\n        }\n        vec\n    }\n\n    fn mock_signed_tx(bytes: Vec<u8>) -> SignedTransaction {\n        let rand_hash = Hash::digest(Bytes::from(bytes));\n        let chain_id = rand_hash.clone();\n        let nonce = rand_hash.clone();\n        let tx_hash = rand_hash;\n        let pubkey = {\n            let hex_str = \"03380295981e77dcd0a3f50c1d58867e590f2837f03daf639d683ec5e995c02984\";\n            Bytes::from(hex::decode(hex_str).unwrap())\n        };\n        let fake_sig = Hash::digest(pubkey.clone()).as_bytes();\n\n        let request = TransactionRequest {\n            service_name: \"test\".to_owned(),\n            method:       \"test\".to_owned(),\n            payload:      \"test\".to_owned(),\n        };\n\n        let raw = RawTransaction {\n            chain_id,\n            nonce,\n            timeout: TIMEOUT,\n            cycles_limit: TX_CYCLE,\n            cycles_price: 1,\n            request,\n            sender: Address::from_pubkey_bytes(pubkey.clone()).unwrap(),\n        };\n        SignedTransaction {\n            raw,\n            tx_hash,\n            pubkey,\n            signature: fake_sig,\n        }\n    }\n\n    async fn concurrent_insert(txs: Vec<SignedTransaction>, tx_cache: Arc<TxCache>) {\n        let futs = txs\n            .into_iter()\n            .map(|tx| {\n                let tx_cache = Arc::clone(&tx_cache);\n                tokio::spawn(async move { tx_cache.insert_new_tx(tx.clone()).await })\n            })\n            .collect::<Vec<_>>();\n\n        futures::future::try_join_all(futs).await.unwrap();\n    }\n\n    async fn concurrent_flush(tx_cache: Arc<TxCache>, tx_hashes: Vec<Hash>, height: u64) {\n        tokio::spawn(async move 
{\n            tx_cache.flush(&tx_hashes, height, height + TIMEOUT).await;\n        })\n        .await\n        .unwrap();\n    }\n\n    async fn concurrent_package(tx_cache: Arc<TxCache>) {\n        tokio::spawn(async move {\n            tx_cache\n                .package(CYCLE_LIMIT, TX_NUM_LIMIT, CURRENT_H, TIMEOUT)\n                .await\n                .unwrap();\n        })\n        .await\n        .unwrap();\n    }\n\n    #[tokio::test]\n    async fn test_concurrent_insert() {\n        let txs = gen_signed_txs(POOL_SIZE / 2);\n        let txs: Vec<SignedTransaction> = txs\n            .iter()\n            .flat_map(|tx| {\n                (0..5)\n                    .map(|_| tx.clone())\n                    .collect::<Vec<SignedTransaction>>()\n            })\n            .collect();\n        let tx_cache = Arc::new(TxCache::new(POOL_SIZE));\n        concurrent_insert(txs, Arc::clone(&tx_cache)).await;\n        assert_eq!(tx_cache.len().await, POOL_SIZE / 2);\n    }\n\n    #[tokio::test]\n    async fn test_insert_overlap() {\n        let txs = gen_signed_txs(1);\n        let tx = txs.get(0).unwrap();\n        let map = Map::new(POOL_SIZE);\n\n        let tx_wrapper_0 = TxWrapper::new(tx.clone());\n        tx_wrapper_0.set_removed();\n        map.insert(tx.tx_hash.clone(), Arc::new(tx_wrapper_0)).await;\n        let shared_tx_0 = map.get(&tx.tx_hash).await.unwrap();\n        assert!(shared_tx_0.is_removed());\n\n        let tx_wrapper_1 = TxWrapper::new(tx.clone());\n        map.insert(tx.tx_hash.clone(), Arc::new(tx_wrapper_1)).await;\n        let shared_tx_1 = map.get(&tx.tx_hash).await.unwrap();\n        assert!(shared_tx_1.is_removed());\n    }\n\n    #[bench]\n    fn bench_gen_txs(b: &mut Bencher) {\n        b.iter(|| {\n            gen_signed_txs(TX_NUM);\n        });\n    }\n\n    #[bench]\n    fn bench_insert(b: &mut Bencher) {\n        let mut runtime = tokio::runtime::Runtime::new().unwrap();\n\n        let txs = gen_signed_txs(TX_NUM);\n        
b.iter(|| {\n            let tx_cache = Arc::new(TxCache::new(POOL_SIZE));\n            runtime.block_on(concurrent_insert(txs.clone(), Arc::clone(&tx_cache)));\n            assert_eq!(runtime.block_on(tx_cache.len()), TX_NUM);\n            assert_eq!(tx_cache.queue_len(), TX_NUM);\n        });\n    }\n\n    #[bench]\n    fn bench_flush(b: &mut Bencher) {\n        let mut runtime = tokio::runtime::Runtime::new().unwrap();\n\n        let txs = gen_signed_txs(TX_NUM);\n        let tx_hashes: Vec<Hash> = txs\n            .iter()\n            .map(|signed_tx| signed_tx.tx_hash.clone())\n            .collect();\n        b.iter(|| {\n            let tx_cache = Arc::new(TxCache::new(POOL_SIZE));\n            runtime.block_on(concurrent_insert(txs.clone(), Arc::clone(&tx_cache)));\n            assert_eq!(runtime.block_on(tx_cache.len()), TX_NUM);\n            assert_eq!(tx_cache.queue_len(), TX_NUM);\n            runtime.block_on(tx_cache.flush(tx_hashes.as_slice(), CURRENT_H, CURRENT_H + TIMEOUT));\n            assert_eq!(runtime.block_on(tx_cache.len()), 0);\n            assert_eq!(tx_cache.queue_len(), 0);\n        });\n    }\n\n    #[bench]\n    fn bench_flush_insert(b: &mut Bencher) {\n        let mut runtime = tokio::runtime::Runtime::new().unwrap();\n\n        let txs_base = gen_signed_txs(TX_NUM / 2);\n        let txs_insert = gen_signed_txs(TX_NUM / 2);\n        let txs_flush: Vec<Hash> = txs_base\n            .iter()\n            .map(|signed_tx| signed_tx.tx_hash.clone())\n            .collect();\n        b.iter(|| {\n            let tx_cache = Arc::new(TxCache::new(POOL_SIZE));\n            runtime.block_on(concurrent_insert(txs_base.clone(), Arc::clone(&tx_cache)));\n            runtime.block_on(concurrent_flush(\n                Arc::clone(&tx_cache),\n                txs_flush.clone(),\n                CURRENT_H,\n            ));\n            runtime.block_on(concurrent_insert(txs_insert.clone(), Arc::clone(&tx_cache)));\n            
assert_eq!(runtime.block_on(tx_cache.len()), TX_NUM / 2);\n            assert_eq!(tx_cache.queue_len(), TX_NUM / 2);\n        });\n    }\n\n    #[bench]\n    fn bench_package(b: &mut Bencher) {\n        let mut runtime = tokio::runtime::Runtime::new().unwrap();\n\n        let txs = gen_signed_txs(TX_NUM);\n        let tx_cache = Arc::new(TxCache::new(POOL_SIZE));\n        runtime.block_on(concurrent_insert(txs, Arc::clone(&tx_cache)));\n        b.iter(|| {\n            let mixed_tx_hashes = runtime\n                .block_on(tx_cache.package(TX_NUM_LIMIT, CYCLE_LIMIT, CURRENT_H, TIMEOUT))\n                .unwrap();\n            assert_eq!(\n                mixed_tx_hashes.order_tx_hashes.len(),\n                (CYCLE_LIMIT / TX_CYCLE) as usize\n            );\n        });\n    }\n\n    #[bench]\n    fn bench_package_insert(b: &mut Bencher) {\n        let mut runtime = tokio::runtime::Runtime::new().unwrap();\n\n        let txs = gen_signed_txs(TX_NUM / 2);\n        let txs_insert = gen_signed_txs(TX_NUM / 2);\n        b.iter(|| {\n            let tx_cache = Arc::new(TxCache::new(POOL_SIZE));\n            runtime.block_on(concurrent_insert(txs.clone(), Arc::clone(&tx_cache)));\n            runtime.block_on(concurrent_package(Arc::clone(&tx_cache)));\n            runtime.block_on(concurrent_insert(txs_insert.clone(), Arc::clone(&tx_cache)));\n            assert_eq!(runtime.block_on(tx_cache.len()), TX_NUM);\n            assert_eq!(tx_cache.queue_len(), TX_NUM);\n        });\n    }\n}\n"
  },
  {
    "path": "core/network/Cargo.toml",
    "content": "[package]\nname = \"core-network\"\nversion = \"0.2.1\"\nauthors = [\"Muta Dev <muta@nervos.org>\"]\nedition = \"2018\"\nrepository = \"https://github.com/nervosnetwork/muta\"\n\n# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html\n\n[dependencies]\nprotocol = { path = \"../../protocol\", package = \"muta-protocol\" }\ncommon-apm = { path = \"../../common/apm\" }\n\nasync-trait = \"0.1\"\nbincode = \"1.2\"\nderive_more = \"0.99\"\nfutures-timer = \"2.0\"\nfutures= { version = \"0.3\", features = [ \"compat\" ] }\nhex = \"0.4\"\nlog = \"0.4\"\nparking_lot = \"0.11\"\nprost = \"0.6\"\nbytes = \"0.5\"\nrand = \"0.7\"\nserde = \"1.0\"\nserde_derive = \"1.0\"\nsnap = \"0.2\"\ntentacle = { git = \"http://github.com/zeroqn/p2p\", rev = \"b2682d2\", features = [\"molc\"]}\ntokio = { version = \"0.2\", features = [\"macros\", \"rt-core\"]}\ntokio-util = { version = \"0.2\", features = [\"codec\"] }\nhostname = \"0.3\"\nlazy_static = \"1.4\"\nbs58 = \"0.3\"\narc-swap = \"0.4\"\n\n[dev-dependencies]\nenv_logger = \"0.6\"\nquickcheck = \"0.9\"\nquickcheck_macros = \"0.8\"\ntokio = { version = \"0.2\", features = [\"macros\", \"rt-core\"]}\n\n[features]\ndefault = []\nglobal_ip_only = []\ndiagnostic = []\ntentacle_metrics = [\"tentacle/metrics\"]\n\n[[test]]\nname = \"broadcast\"\npath = \"tests/gossip_test.rs\"\n"
  },
  {
    "path": "core/network/examples/buycopy.rs",
    "content": "use std::{\n    net::{IpAddr, Ipv4Addr, SocketAddr},\n    thread,\n    time::Duration,\n};\n\nuse async_trait::async_trait;\nuse bytes::Bytes;\nuse log::{info, warn};\nuse serde_derive::{Deserialize, Serialize};\nuse tentacle::secio::SecioKeyPair;\n\nuse core_network::{NetworkConfig, NetworkService};\nuse protocol::{\n    traits::{Context, Gossip, MessageHandler, Priority, Rpc, TrustFeedback},\n    types::Hash,\n    ProtocolError,\n};\n\nconst IP_ADDR: IpAddr = IpAddr::V4(Ipv4Addr::new(0, 0, 0, 0));\n\nconst RELEASE_CHANNEL: &str = \"/gossip/cprd/cyperpunk7702_released\";\nconst SHOP_CASH_CHANNEL: &str = \"/rpc_call/v3/steam\";\nconst SHOP_CHANNEL: &str = \"/rpc_resp/v3/steam\";\n\n// Gossip message\n#[derive(Debug, Serialize, Deserialize)]\nstruct Cyber7702Released {\n    pub shop: String,\n    #[serde(with = \"core_network::serde\")]\n    pub hash: Hash,\n}\n\n// Gossip message handler\nstruct TakeMyMoney<N: Rpc> {\n    pub shop: N,\n}\n\n#[async_trait]\nimpl<N: Rpc + Send + Sync + 'static> MessageHandler for TakeMyMoney<N> {\n    type Message = Cyber7702Released;\n\n    async fn process(&self, ctx: Context, msg: Self::Message) -> TrustFeedback {\n        let sell = async move {\n            println!(\"Rush to {}. 
Shut up, take my money\", msg.shop);\n\n            let copy: ACopy = self\n                .shop\n                .call(ctx, SHOP_CASH_CHANNEL, BuyACopy, Priority::High)\n                .await?;\n            println!(\"Got my copy {:?}\", copy);\n\n            Ok::<(), ProtocolError>(())\n        };\n\n        match sell.await {\n            Ok(_) => TrustFeedback::Good,\n            Err(e) => {\n                warn!(\"sell {}\", e);\n                TrustFeedback::Bad(\"sell failed\".to_owned())\n            }\n        }\n    }\n}\n\n// Rpc message\n#[derive(Debug, Serialize, Deserialize)]\nstruct BuyACopy;\n\n#[derive(Debug, Serialize, Deserialize)]\nstruct ACopy {\n    #[serde(with = \"core_network::serde\")]\n    pub hash: Hash,\n\n    #[serde(with = \"core_network::serde_multi\")]\n    pub gifs: Vec<Hash>,\n}\n\n// Rpc call message handler\nstruct Checkout<N: Rpc> {\n    dealer: N,\n}\n\n#[async_trait]\nimpl<N: Rpc + Send + Sync + 'static> MessageHandler for Checkout<N> {\n    type Message = BuyACopy;\n\n    async fn process(&self, ctx: Context, _msg: Self::Message) -> TrustFeedback {\n        let acopy = ACopy {\n            hash: Hash::digest(Bytes::new()),\n            gifs: vec![\n                Hash::digest(\"jacket\"),\n                Hash::digest(\"map\"),\n                Hash::digest(\"book\"),\n            ],\n        };\n\n        match self\n            .dealer\n            .response(ctx, SHOP_CHANNEL, Ok(acopy), Priority::High)\n            .await\n        {\n            Ok(_) => TrustFeedback::Good,\n            Err(e) => TrustFeedback::Bad(format!(\"send copy {}\", e.to_string())),\n        }\n    }\n}\n\n#[tokio::main]\npub async fn main() {\n    env_logger::init();\n\n    let bt_seckey_bytes = \"8\".repeat(32);\n    let bt_seckey = hex::encode(&bt_seckey_bytes);\n    let bt_keypair = SecioKeyPair::secp256k1_raw_key(bt_seckey_bytes).expect(\"keypair\");\n    let bt_pubkey = hex::encode(bt_keypair.public_key().inner());\n    let bt_addr = 
SocketAddr::new(IP_ADDR, 1337);\n\n    if std::env::args().nth(1) == Some(\"server\".to_string()) {\n        info!(\"Starting server\");\n\n        let bt_conf = NetworkConfig::new()\n            .secio_keypair(bt_seckey)\n            .expect(\"set keypair\");\n\n        let mut bootstrap = NetworkService::new(bt_conf);\n        let handle = bootstrap.handle();\n        bootstrap.listen(bt_addr).await.unwrap();\n\n        let check_out = Checkout {\n            dealer: handle.clone(),\n        };\n        bootstrap\n            .register_endpoint_handler(SHOP_CASH_CHANNEL, check_out)\n            .unwrap();\n\n        tokio::spawn(bootstrap);\n        thread::sleep(Duration::from_secs(10));\n\n        let released = Cyber7702Released {\n            shop: \"steam\".to_owned(),\n            hash: Hash::digest(Bytes::from(\"buy\".repeat(3))),\n        };\n\n        let ctx = Context::default();\n        handle\n            .broadcast(ctx.clone(), RELEASE_CHANNEL, released, Priority::High)\n            .await\n            .unwrap();\n\n        thread::sleep(Duration::from_secs(10));\n    } else {\n        info!(\"Starting client\");\n\n        let port = std::env::args().nth(1).unwrap().parse::<u16>().unwrap();\n        let peer_addr = SocketAddr::new(IP_ADDR, port);\n        let peer_conf = NetworkConfig::new()\n            .bootstraps(vec![(bt_pubkey, bt_addr.to_string())])\n            .unwrap();\n\n        let mut peer = NetworkService::new(peer_conf);\n        let handle = peer.handle();\n        peer.listen(peer_addr).await.unwrap();\n\n        let take_my_money = TakeMyMoney {\n            shop: handle.clone(),\n        };\n        peer.register_endpoint_handler(RELEASE_CHANNEL, take_my_money)\n            .unwrap();\n        peer.register_rpc_response::<ACopy>(SHOP_CHANNEL).unwrap();\n\n        peer.await;\n    }\n}\n"
  },
  {
    "path": "core/network/src/common.rs",
    "content": "use crate::traits::MultiaddrExt;\n\nuse derive_more::Display;\nuse futures::{pin_mut, task::AtomicWaker};\nuse futures_timer::Delay;\nuse serde_derive::{Deserialize, Serialize};\nuse tentacle::{\n    multiaddr::{Multiaddr, Protocol},\n    secio::PeerId,\n};\n\nuse std::{\n    borrow::Cow,\n    future::Future,\n    net::{IpAddr, SocketAddr, ToSocketAddrs},\n    ops::Add,\n    pin::Pin,\n    sync::Arc,\n    task::{Context, Poll},\n    time::{Duration, Instant},\n    vec::IntoIter,\n};\n\n#[macro_export]\nmacro_rules! loop_ready {\n    ($poll:expr) => {\n        match $poll {\n            Poll::Pending => break,\n            Poll::Ready(v) => v,\n        }\n    };\n}\n\n#[macro_export]\nmacro_rules! service_ready {\n    ($service:expr, $poll:expr) => {\n        match crate::loop_ready!($poll) {\n            Some(v) => v,\n            None => {\n                log::info!(\"network: {} exit\", $service);\n                return Poll::Ready(());\n            }\n        }\n    };\n}\n\npub fn socket_to_multi_addr(socket_addr: SocketAddr) -> Multiaddr {\n    let mut multi_addr = Multiaddr::from(socket_addr.ip());\n    multi_addr.push(Protocol::TCP(socket_addr.port()));\n\n    multi_addr\n}\n\npub fn multiaddr_to_socket(multiaddr: &Multiaddr) -> Option<SocketAddr> {\n    let mut extract_ip = None;\n    let mut extract_port = 0u16;\n\n    for proto in multiaddr.iter() {\n        match proto {\n            Protocol::IP4(ip) => extract_ip = Some(IpAddr::V4(ip)),\n            Protocol::IP6(ip) => extract_ip = Some(IpAddr::V6(ip)),\n            Protocol::TCP(port) => extract_port = port,\n            _ => (),\n        }\n    }\n\n    if let Some(ip) = extract_ip {\n        Some(SocketAddr::new(ip, extract_port))\n    } else {\n        None\n    }\n}\n\npub fn resolve_if_unspecified(multiaddr: &Multiaddr) -> Result<Multiaddr, ()> {\n    let match_socket = |iter: IntoIter<SocketAddr>, be_v4: bool| -> Option<SocketAddr> {\n        for socket in iter {\n            
match socket {\n                SocketAddr::V4(_) if be_v4 => {\n                    return Some(socket);\n                }\n                SocketAddr::V6(_) if !be_v4 => {\n                    return Some(socket);\n                }\n                _ => (),\n            }\n        }\n        None\n    };\n\n    let sock = multiaddr_to_socket(&multiaddr).ok_or(())?;\n    if !sock.ip().is_unspecified() {\n        return Err(());\n    }\n\n    let peer_id = multiaddr.id_bytes().clone().ok_or(())?;\n    let hs = hostname::get().map_err(|_| ())?;\n\n    let hostname_port = hs\n        .to_str()\n        .map(|s| format!(\"{}:{}\", s, sock.port()))\n        .ok_or(())?;\n\n    let socks_iter = hostname_port.to_socket_addrs().map_err(|_| ())?;\n    let socket = match_socket(socks_iter, sock.ip().is_ipv4()).ok_or_else(|| ())?;\n\n    let mut resolved_addr = socket_to_multi_addr(socket);\n    resolved_addr.push(Protocol::P2P(peer_id));\n    Ok(resolved_addr)\n}\n\nimpl MultiaddrExt for Multiaddr {\n    fn id_bytes(&self) -> Option<Cow<'_, [u8]>> {\n        for proto in self.iter() {\n            if let Protocol::P2P(bytes) = proto {\n                return Some(bytes);\n            }\n        }\n\n        None\n    }\n\n    fn has_id(&self) -> bool {\n        self.iter().any(|proto| matches!(proto, Protocol::P2P(_)))\n    }\n\n    fn push_id(&mut self, peer_id: PeerId) {\n        self.push(Protocol::P2P(Cow::Owned(peer_id.as_bytes().to_vec())))\n    }\n}\n\npub struct HeartBeat {\n    waker:    Arc<AtomicWaker>,\n    interval: Duration,\n    delay:    Delay,\n}\n\nimpl HeartBeat {\n    pub fn new(waker: Arc<AtomicWaker>, interval: Duration) -> Self {\n        let delay = Delay::new(interval);\n\n        HeartBeat {\n            waker,\n            interval,\n            delay,\n        }\n    }\n}\n\n// # Note\n//\n// Delay returns an error after default global timer gone away.\nimpl Future for HeartBeat {\n    type Output = <Delay as Future>::Output;\n\n    fn poll(mut 
self: Pin<&mut Self>, ctx: &mut Context<'_>) -> Poll<Self::Output> {\n        let ecg = &mut self.as_mut();\n\n        loop {\n            let interval = ecg.interval;\n            let delay = &mut ecg.delay;\n            pin_mut!(delay);\n\n            crate::loop_ready!(delay.poll(ctx));\n\n            let next_time = Instant::now().add(interval);\n            ecg.delay.reset(next_time);\n            ecg.waker.wake();\n        }\n\n        Poll::Pending\n    }\n}\n\n#[derive(Debug, Display, PartialEq, Eq, Serialize, Deserialize, Clone, Hash)]\n#[display(fmt = \"{}:{}\", host, port)]\npub struct ConnectedAddr {\n    pub host: String,\n    pub port: u16,\n}\n\nimpl From<&Multiaddr> for ConnectedAddr {\n    fn from(multiaddr: &Multiaddr) -> Self {\n        use tentacle::multiaddr::Protocol::{DNS4, DNS6, IP4, IP6, TCP, TLS};\n\n        let mut host = None;\n        let mut port = 0u16;\n\n        for comp in multiaddr.iter() {\n            match comp {\n                IP4(ip_addr) => host = Some(ip_addr.to_string()),\n                IP6(ip_addr) => host = Some(ip_addr.to_string()),\n                DNS4(dns_addr) | DNS6(dns_addr) => host = Some(dns_addr.to_string()),\n                TLS(tls_addr) => host = Some(tls_addr.to_string()),\n                TCP(p) => port = p,\n                _ => (),\n            }\n        }\n\n        let host = host.unwrap_or_else(|| multiaddr.to_string());\n        ConnectedAddr { host, port }\n    }\n}\n"
  },
  {
    "path": "core/network/src/compression/mod.rs",
    "content": "mod snappy;\npub use snappy::Snappy;\n"
  },
  {
    "path": "core/network/src/compression/snappy.rs",
    "content": "use std::io;\n\nuse protocol::Bytes;\n\nuse crate::{error::NetworkError, traits::Compression};\n\n#[derive(Clone)]\npub struct Snappy;\n\nimpl Compression for Snappy {\n    fn compress(&self, bytes: Bytes) -> Result<Bytes, NetworkError> {\n        let mut vec_bytes = Vec::with_capacity(bytes.len());\n\n        {\n            let mut writer = snap::Writer::new(&mut vec_bytes);\n            let n = io::copy(&mut bytes.as_ref(), &mut writer)?;\n\n            if n as usize != bytes.len() {\n                let kind = io::ErrorKind::Other;\n\n                return Err(io::Error::new(kind, \"snappy: fail to compress\").into());\n            }\n        }\n\n        Ok(Bytes::from(vec_bytes))\n    }\n\n    fn decompress(&self, bytes: Bytes) -> Result<Bytes, NetworkError> {\n        let mut vec_bytes = vec![];\n        let mut reader = snap::Reader::new(bytes.as_ref());\n\n        let _ = io::copy(&mut reader, &mut vec_bytes)? as usize;\n\n        Ok(Bytes::from(vec_bytes))\n    }\n}\n"
  },
  {
    "path": "core/network/src/config.rs",
    "content": "use std::{\n    default::Default,\n    net::{IpAddr, Ipv4Addr, SocketAddr},\n    path::{Path, PathBuf},\n    str::FromStr,\n    sync::Arc,\n    time::Duration,\n};\n\nuse log::error;\nuse protocol::ProtocolResult;\nuse tentacle::{\n    multiaddr::{multiaddr, Multiaddr, Protocol},\n    secio::{PeerId, SecioKeyPair},\n};\n\nuse crate::{\n    common::socket_to_multi_addr,\n    connection::ConnectionConfig,\n    error::NetworkError,\n    peer_manager::{ArcPeer, PeerManagerConfig, TrustMetricConfig},\n    selfcheck::SelfCheckConfig,\n    traits::MultiaddrExt,\n    PeerIdExt,\n};\n\n// TODO: 0.0.0.0 expose? 127.0.0.1 doesn't work because of tentacle-discovery.\n// Default listen address: 0.0.0.0:2337\npub const DEFAULT_LISTEN_IP_ADDR: IpAddr = IpAddr::V4(Ipv4Addr::new(0, 0, 0, 0));\npub const DEFAULT_LISTEN_PORT: u16 = 2337;\n// Default max connections\npub const DEFAULT_MAX_CONNECTIONS: usize = 40;\n// Default connection stream frame window length\npub const DEFAULT_MAX_FRAME_LENGTH: usize = 4 * 1024 * 1024; // 4 MiB\npub const DEFAULT_BUFFER_SIZE: usize = 24 * 1024 * 1024; // same as tentacle\n\n// Default max wait streams for accept\npub const DEFAULT_MAX_WAIT_STREAMS: usize = 256;\n// Default write timeout\npub const DEFAULT_WRITE_TIMEOUT: u64 = 10; // seconds\n\npub const DEFAULT_SAME_IP_CONN_LIMIT: usize = 1;\npub const DEFAULT_INBOUND_CONN_LIMIT: usize = 20;\n\n// Default peer trust metric\npub const DEFAULT_PEER_TRUST_INTERVAL_DURATION: Duration = Duration::from_secs(60);\npub const DEFAULT_PEER_TRUST_MAX_HISTORY_DURATION: Duration =\n    Duration::from_secs(24 * 60 * 60 * 10); // 10 days\nconst DEFAULT_PEER_FATAL_BAN_DURATION: Duration = Duration::from_secs(60 * 60); // 1 hour\nconst DEFAULT_PEER_SOFT_BAN_DURATION: Duration = Duration::from_secs(60 * 10); // 10 minutes\n\n// Default peer data persistence path\npub const DEFAULT_PEER_FILE_NAME: &str = \"peers\";\npub const DEFAULT_PEER_FILE_EXT: &str = \"dat\";\npub const DEFAULT_PEER_DAT_FILE: &str 
= \"./peers.dat\";\n\npub const DEFAULT_PING_INTERVAL: u64 = 15;\npub const DEFAULT_PING_TIMEOUT: u64 = 30;\npub const DEFAULT_DISCOVERY_SYNC_INTERVAL: u64 = 60 * 60; // 1 hour\n\npub const DEFAULT_PEER_MANAGER_HEART_BEAT_INTERVAL: u64 = 30;\npub const DEFAULT_SELF_HEART_BEAT_INTERVAL: u64 = 35;\n\npub const DEFAULT_RPC_TIMEOUT: u64 = 10;\n\n// Selfcheck\npub const DEFAULT_SELF_CHECK_INTERVAL: u64 = 30;\n\npub type PrivateKeyHexStr = String;\npub type PeerAddrStr = String;\npub type PeerIdBase58Str = String;\n\n// Example:\n//  example.com:2077\nstruct DnsAddr {\n    host: String,\n    port: u16,\n}\n\nimpl FromStr for DnsAddr {\n    type Err = NetworkError;\n\n    fn from_str(s: &str) -> Result<Self, Self::Err> {\n        use NetworkError::UnexpectedPeerAddr;\n\n        let comps = s.split(':').collect::<Vec<_>>();\n        if comps.len() != 2 {\n            return Err(UnexpectedPeerAddr(s.to_owned()));\n        }\n\n        let port = comps[1]\n            .parse::<u16>()\n            .map_err(|_| UnexpectedPeerAddr(s.to_owned()))?;\n\n        Ok(DnsAddr {\n            host: comps[0].to_owned(),\n            port,\n        })\n    }\n}\n\n// TODO: support Dns6\nimpl From<DnsAddr> for Multiaddr {\n    fn from(addr: DnsAddr) -> Self {\n        multiaddr!(DNS4(&addr.host), TCP(addr.port))\n    }\n}\n\n#[derive(Debug)]\npub struct NetworkConfig {\n    // connection\n    pub default_listen:   Multiaddr,\n    pub max_connections:  usize,\n    pub max_frame_length: usize,\n    pub send_buffer_size: usize,\n    pub recv_buffer_size: usize,\n    pub max_wait_streams: usize,\n    pub write_timeout:    u64,\n\n    // peer manager\n    pub bootstraps:             Vec<ArcPeer>,\n    pub allowlist:              Vec<PeerId>,\n    pub allowlist_only:         bool,\n    pub enable_save_restore:    bool,\n    pub peer_dat_file:          PathBuf,\n    pub peer_trust_interval:    Duration,\n    pub peer_trust_max_history: Duration,\n    pub peer_fatal_ban:         Duration,\n    pub 
peer_soft_ban:          Duration,\n    pub same_ip_conn_limit:     usize,\n    pub inbound_conn_limit:     usize,\n\n    // identity and encryption\n    pub secio_keypair: SecioKeyPair,\n\n    // protocol\n    pub ping_interval:           Duration,\n    pub ping_timeout:            Duration,\n    pub discovery_sync_interval: Duration,\n\n    // routine\n    pub peer_manager_heart_beat_interval: Duration,\n    pub heart_beat_interval:              Duration,\n\n    // rpc\n    pub rpc_timeout: Duration,\n\n    // self check\n    pub selfcheck_interval: Duration,\n}\n\nimpl NetworkConfig {\n    pub fn new() -> Self {\n        let mut listen_addr = Multiaddr::from(DEFAULT_LISTEN_IP_ADDR);\n        listen_addr.push(Protocol::TCP(DEFAULT_LISTEN_PORT));\n\n        let peer_manager_hb_interval =\n            Duration::from_secs(DEFAULT_PEER_MANAGER_HEART_BEAT_INTERVAL);\n\n        NetworkConfig {\n            default_listen:   listen_addr,\n            max_connections:  DEFAULT_MAX_CONNECTIONS,\n            max_frame_length: DEFAULT_MAX_FRAME_LENGTH,\n            send_buffer_size: DEFAULT_BUFFER_SIZE,\n            recv_buffer_size: DEFAULT_BUFFER_SIZE,\n            max_wait_streams: DEFAULT_MAX_WAIT_STREAMS,\n            write_timeout:    DEFAULT_WRITE_TIMEOUT,\n\n            bootstraps:             Default::default(),\n            allowlist:              Default::default(),\n            allowlist_only:         false,\n            enable_save_restore:    false,\n            peer_dat_file:          PathBuf::from(DEFAULT_PEER_DAT_FILE.to_owned()),\n            peer_trust_interval:    DEFAULT_PEER_TRUST_INTERVAL_DURATION,\n            peer_trust_max_history: DEFAULT_PEER_TRUST_MAX_HISTORY_DURATION,\n            peer_fatal_ban:         DEFAULT_PEER_FATAL_BAN_DURATION,\n            peer_soft_ban:          DEFAULT_PEER_SOFT_BAN_DURATION,\n            same_ip_conn_limit:     DEFAULT_SAME_IP_CONN_LIMIT,\n            inbound_conn_limit:     DEFAULT_INBOUND_CONN_LIMIT,\n\n           
 secio_keypair: SecioKeyPair::secp256k1_generated(),\n\n            ping_interval:           Duration::from_secs(DEFAULT_PING_INTERVAL),\n            ping_timeout:            Duration::from_secs(DEFAULT_PING_TIMEOUT),\n            discovery_sync_interval: Duration::from_secs(DEFAULT_DISCOVERY_SYNC_INTERVAL),\n\n            peer_manager_heart_beat_interval: peer_manager_hb_interval,\n            heart_beat_interval:              Duration::from_secs(DEFAULT_SELF_HEART_BEAT_INTERVAL),\n\n            rpc_timeout: Duration::from_secs(DEFAULT_RPC_TIMEOUT),\n\n            selfcheck_interval: Duration::from_secs(DEFAULT_SELF_CHECK_INTERVAL),\n        }\n    }\n\n    pub fn max_connections(mut self, max: Option<usize>) -> ProtocolResult<Self> {\n        if let Some(max) = max {\n            if max <= self.inbound_conn_limit {\n                return Err(NetworkError::InboundLimitEqualOrSmallerThanMaxConn.into());\n            }\n\n            self.max_connections = max;\n        }\n\n        Ok(self)\n    }\n\n    pub fn same_ip_conn_limit(mut self, limit: Option<usize>) -> Self {\n        if let Some(limit) = limit {\n            self.same_ip_conn_limit = limit;\n        }\n\n        self\n    }\n\n    pub fn inbound_conn_limit(mut self, limit: Option<usize>) -> ProtocolResult<Self> {\n        if let Some(limit) = limit {\n            if self.max_connections <= limit {\n                return Err(NetworkError::InboundLimitEqualOrSmallerThanMaxConn.into());\n            }\n\n            self.inbound_conn_limit = limit;\n        }\n\n        Ok(self)\n    }\n\n    pub fn max_frame_length(mut self, max: Option<usize>) -> Self {\n        if let Some(max) = max {\n            self.max_frame_length = max;\n        }\n\n        self\n    }\n\n    pub fn send_buffer_size(mut self, size: Option<usize>) -> Self {\n        if let Some(size) = size {\n            self.send_buffer_size = size;\n        }\n\n        self\n    }\n\n    pub fn recv_buffer_size(mut self, size: 
Option<usize>) -> Self {\n        if let Some(size) = size {\n            self.recv_buffer_size = size;\n        }\n\n        self\n    }\n\n    pub fn max_wait_streams(mut self, max: Option<usize>) -> Self {\n        if let Some(max) = max {\n            self.max_wait_streams = max;\n        }\n\n        self\n    }\n\n    pub fn write_timeout(mut self, timeout: Option<u64>) -> Self {\n        if let Some(timeout) = timeout {\n            self.write_timeout = timeout;\n        }\n\n        self\n    }\n\n    pub fn bootstraps(\n        mut self,\n        pairs: Vec<(PeerIdBase58Str, PeerAddrStr)>,\n    ) -> ProtocolResult<Self> {\n        let to_peer = |(pid_str, peer_addr): (PeerIdBase58Str, PeerAddrStr)| -> _ {\n            let peer_id = PeerId::from_str_ext(&pid_str)?;\n            let mut multiaddr = Self::parse_peer_addr(peer_addr)?;\n\n            let peer = ArcPeer::new(peer_id.clone());\n\n            if let Some(id_bytes) = multiaddr.id_bytes() {\n                if id_bytes != peer_id.as_bytes() {\n                    error!(\"network: pubkey doesn't match peer id in {}\", multiaddr);\n                    return Ok(peer);\n                }\n            }\n            if !multiaddr.has_id() {\n                multiaddr.push_id(peer_id);\n            }\n\n            peer.multiaddrs.insert_raw(multiaddr);\n            Ok(peer)\n        };\n\n        let bootstrap_peers = pairs\n            .into_iter()\n            .map(to_peer)\n            .collect::<ProtocolResult<Vec<_>>>()?;\n\n        self.bootstraps = bootstrap_peers;\n        Ok(self)\n    }\n\n    pub fn allowlist<S: AsRef<[String]>>(mut self, peer_id_strs: S) -> ProtocolResult<Self> {\n        let peer_ids = {\n            let str_iter = peer_id_strs.as_ref().iter();\n            let to_peer_ids = str_iter.map(PeerId::from_str_ext);\n            to_peer_ids.collect::<Result<Vec<_>, _>>()?\n        };\n\n        self.allowlist = peer_ids;\n        Ok(self)\n    }\n\n    pub fn allowlist_only(mut 
self, flag: Option<bool>) -> Self {\n        if let Some(flag) = flag {\n            self.allowlist_only = flag;\n        }\n        self\n    }\n\n    pub fn peer_dat_file<P: AsRef<Path>>(mut self, path: P) -> Self {\n        let mut path = path.as_ref().to_owned();\n        path.push(DEFAULT_PEER_FILE_NAME);\n        path.set_extension(DEFAULT_PEER_FILE_EXT);\n\n        self.peer_dat_file = path;\n\n        self\n    }\n\n    pub fn peer_trust_metric(\n        mut self,\n        interval: Option<u64>,\n        max_history: Option<u64>,\n    ) -> ProtocolResult<Self> {\n        if let Some(interval) = interval {\n            self.peer_trust_interval = Duration::from_secs(interval);\n        }\n        if let Some(max_hist) = max_history {\n            self.peer_trust_max_history = Duration::from_secs(max_hist);\n        }\n\n        if self.peer_trust_max_history < self.peer_trust_interval * 20 {\n            let interval = self.peer_trust_interval.as_secs();\n            Err(NetworkError::SmallTrustMaxHistory(interval * 20).into())\n        } else {\n            Ok(self)\n        }\n    }\n\n    pub fn peer_fatal_ban(mut self, duration: Option<u64>) -> Self {\n        if let Some(duration) = duration {\n            self.peer_fatal_ban = Duration::from_secs(duration);\n        }\n\n        self\n    }\n\n    pub fn peer_soft_ban(mut self, duration: Option<u64>) -> Self {\n        if let Some(duration) = duration {\n            self.peer_soft_ban = Duration::from_secs(duration);\n        }\n\n        self\n    }\n\n    pub fn secio_keypair(mut self, sk_hex: PrivateKeyHexStr) -> ProtocolResult<Self> {\n        let maybe_skp = hex::decode(sk_hex).map(SecioKeyPair::secp256k1_raw_key);\n\n        if let Ok(Ok(skp)) = maybe_skp {\n            self.secio_keypair = skp;\n\n            Ok(self)\n        } else {\n            Err(NetworkError::InvalidPrivateKey.into())\n        }\n    }\n\n    pub fn ping_interval(mut self, interval: Option<u64>) -> Self {\n        if let 
Some(interval) = interval {\n            self.ping_interval = Duration::from_secs(interval);\n        }\n\n        self\n    }\n\n    pub fn ping_timeout(mut self, timeout: u64) -> Self {\n        self.ping_timeout = Duration::from_secs(timeout);\n\n        self\n    }\n\n    pub fn discovery_sync_interval(mut self, interval: u64) -> Self {\n        self.discovery_sync_interval = Duration::from_secs(interval);\n\n        self\n    }\n\n    pub fn peer_manager_heart_beat_interval(mut self, interval: u64) -> Self {\n        self.peer_manager_heart_beat_interval = Duration::from_secs(interval);\n\n        self\n    }\n\n    pub fn heart_beat_interval(mut self, interval: u64) -> Self {\n        self.heart_beat_interval = Duration::from_secs(interval);\n\n        self\n    }\n\n    pub fn rpc_timeout(mut self, timeout: Option<u64>) -> Self {\n        if let Some(timeout) = timeout {\n            self.rpc_timeout = Duration::from_secs(timeout);\n        }\n\n        self\n    }\n\n    pub fn selfcheck_interval(mut self, interval: Option<u64>) -> Self {\n        if let Some(interval) = interval {\n            self.selfcheck_interval = Duration::from_secs(interval);\n        }\n\n        self\n    }\n\n    fn parse_peer_addr(addr: PeerAddrStr) -> ProtocolResult<Multiaddr> {\n        if let Ok(socket_addr) = addr.parse::<SocketAddr>() {\n            Ok(socket_to_multi_addr(socket_addr))\n        } else if let Ok(dns_addr) = addr.parse::<DnsAddr>() {\n            Ok(Multiaddr::from(dns_addr))\n        } else {\n            Err(NetworkError::UnexpectedPeerAddr(addr).into())\n        }\n    }\n}\n\nimpl Default for NetworkConfig {\n    fn default() -> Self {\n        NetworkConfig::new()\n    }\n}\n\nimpl From<&NetworkConfig> for ConnectionConfig {\n    fn from(config: &NetworkConfig) -> ConnectionConfig {\n        ConnectionConfig {\n            secio_keypair:    config.secio_keypair.clone(),\n            max_frame_length: Some(config.max_frame_length),\n            
send_buffer_size: Some(config.send_buffer_size),\n            recv_buffer_size: Some(config.recv_buffer_size),\n            max_wait_streams: Some(config.max_wait_streams),\n            write_timeout:    Some(config.write_timeout),\n        }\n    }\n}\n\nimpl From<&NetworkConfig> for PeerManagerConfig {\n    fn from(config: &NetworkConfig) -> PeerManagerConfig {\n        let peer_trust_config =\n            TrustMetricConfig::new(config.peer_trust_interval, config.peer_trust_max_history);\n\n        PeerManagerConfig {\n            our_id:              config.secio_keypair.peer_id(),\n            pubkey:              config.secio_keypair.public_key(),\n            bootstraps:          config.bootstraps.clone(),\n            allowlist:           config.allowlist.clone(),\n            allowlist_only:      config.allowlist_only,\n            peer_trust_config:   Arc::new(peer_trust_config),\n            peer_fatal_ban:      config.peer_fatal_ban,\n            peer_soft_ban:       config.peer_soft_ban,\n            max_connections:     config.max_connections,\n            same_ip_conn_limit:  config.same_ip_conn_limit,\n            inbound_conn_limit:  config.inbound_conn_limit,\n            outbound_conn_limit: config.max_connections - config.inbound_conn_limit,\n            routine_interval:    config.peer_manager_heart_beat_interval,\n            peer_dat_file:       config.peer_dat_file.clone(),\n        }\n    }\n}\n\n#[derive(Debug, Clone, Copy)]\npub struct TimeoutConfig {\n    pub rpc: Duration,\n}\n\nimpl From<&NetworkConfig> for TimeoutConfig {\n    fn from(config: &NetworkConfig) -> TimeoutConfig {\n        TimeoutConfig {\n            rpc: config.rpc_timeout,\n        }\n    }\n}\n\nimpl From<&NetworkConfig> for SelfCheckConfig {\n    fn from(config: &NetworkConfig) -> SelfCheckConfig {\n        SelfCheckConfig {\n            interval: config.selfcheck_interval,\n        }\n    }\n}\n"
  },
  {
    "path": "core/network/src/connection/control.rs",
    "content": "use tentacle::error::SendErrorKind;\nuse tentacle::service::{ServiceControl, TargetSession};\nuse tentacle::ProtocolId;\n\nuse protocol::traits::Priority;\nuse protocol::Bytes;\n\npub struct ProtocolMessage {\n    pub protocol_id: ProtocolId,\n    pub target:      TargetSession,\n    pub data:        Bytes,\n    pub priority:    Priority,\n}\n\n#[derive(Clone)]\npub struct ConnectionServiceControl {\n    inner: ServiceControl,\n}\n\nimpl ConnectionServiceControl {\n    pub fn new(control: ServiceControl) -> Self {\n        ConnectionServiceControl { inner: control }\n    }\n\n    pub fn send(&self, message: ProtocolMessage) -> Result<(), SendErrorKind> {\n        let ProtocolMessage {\n            target,\n            protocol_id,\n            data,\n            priority,\n        } = message;\n\n        match priority {\n            Priority::High => self.inner.quick_filter_broadcast(target, protocol_id, data),\n            Priority::Normal => self.inner.filter_broadcast(target, protocol_id, data),\n        }\n    }\n}\n"
  },
  {
    "path": "core/network/src/connection/keeper.rs",
    "content": "use std::sync::atomic::{AtomicBool, Ordering};\n\nuse futures::channel::mpsc::UnboundedSender;\nuse log::{debug, error};\nuse tentacle::secio::error::SecioError;\nuse tentacle::{\n    context::ServiceContext,\n    error::{DialerErrorKind, HandshakeErrorKind, ListenErrorKind},\n    multiaddr::Multiaddr,\n    service::{ServiceError, ServiceEvent},\n    traits::ServiceHandle,\n};\n\nuse crate::{\n    error::{ErrorKind, NetworkError},\n    event::{\n        ConnectionErrorKind, ConnectionType, PeerManagerEvent, ProtocolIdentity, SessionErrorKind,\n    },\n};\n\n#[cfg(test)]\nuse crate::test::mock::SessionContext;\n\n// This macro tries to extract the PublicKey from a SessionContext; it's optional.\n// If it gets None, it simply `return`s to exit the caller function. Otherwise, it\n// returns a PublicKey reference.\nmacro_rules! peer_pubkey {\n    ($session_context:expr) => {{\n        let opt_pk = $session_context.remote_pubkey.as_ref();\n        debug_assert!(opt_pk.is_some(), \"secio is enforced, no way it's None here\");\n\n        if let Some(pubkey) = opt_pk {\n            pubkey\n        } else {\n            return;\n        }\n    }};\n}\n\npub struct ConnectionServiceKeeper {\n    peer_mgr:     UnboundedSender<PeerManagerEvent>,\n    sys_reporter: UnboundedSender<NetworkError>,\n\n    sys_shutdown: AtomicBool,\n}\n\nimpl ConnectionServiceKeeper {\n    pub fn new(\n        peer_mgr: UnboundedSender<PeerManagerEvent>,\n        sys_reporter: UnboundedSender<NetworkError>,\n    ) -> Self {\n        ConnectionServiceKeeper {\n            peer_mgr,\n            sys_reporter,\n\n            sys_shutdown: AtomicBool::new(false),\n        }\n    }\n\n    fn is_sys_shutdown(&self) -> bool {\n        self.sys_shutdown.load(Ordering::SeqCst)\n    }\n\n    fn sys_shutdown(&self) {\n        self.sys_shutdown.store(true, Ordering::SeqCst);\n    }\n\n    fn report_error(&self, kind: ErrorKind) {\n        debug!(\"network: connection error: {}\", kind);\n\n        if 
!self.is_sys_shutdown() {\n            let error = NetworkError::from(kind);\n\n            if self.sys_reporter.unbounded_send(error).is_err() {\n                error!(\"network: connection: error report channel dropped\");\n\n                self.sys_shutdown();\n            }\n        }\n    }\n\n    fn report_peer(&self, event: PeerManagerEvent) {\n        if self.peer_mgr.unbounded_send(event).is_err() {\n            self.report_error(ErrorKind::Offline(\"peer manager\"));\n        }\n    }\n\n    fn process_dailer_error(&self, addr: Multiaddr, error: DialerErrorKind) {\n        use DialerErrorKind::{\n            HandshakeError, IoError, PeerIdNotMatch, RepeatedConnection, TransportError,\n        };\n\n        let kind = match error {\n            IoError(err) => ConnectionErrorKind::Io(err),\n            PeerIdNotMatch => ConnectionErrorKind::PeerIdNotMatch,\n            RepeatedConnection(sid) => {\n                let ty = ConnectionType::Outbound;\n                let repeated_connection = PeerManagerEvent::RepeatedConnection { ty, sid, addr };\n                return self.report_peer(repeated_connection);\n            }\n            HandshakeError(HandshakeErrorKind::Timeout(reason)) => {\n                ConnectionErrorKind::TimeOut(reason)\n            }\n            HandshakeError(HandshakeErrorKind::SecioError(SecioError::IoError(err))) => {\n                ConnectionErrorKind::Io(err)\n            }\n            HandshakeError(err) => ConnectionErrorKind::SecioHandshake(Box::new(err)),\n            TransportError(err) => ConnectionErrorKind::from(err),\n        };\n\n        let dail_failed = PeerManagerEvent::ConnectFailed { addr, kind };\n        self.report_peer(dail_failed);\n    }\n\n    fn process_listen_error(&self, addr: Multiaddr, error: ListenErrorKind) {\n        use ListenErrorKind::{IoError, RepeatedConnection, TransportError};\n\n        let kind = match error {\n            IoError(err) => ConnectionErrorKind::Io(err),\n            
RepeatedConnection(sid) => {\n                let ty = ConnectionType::Inbound;\n                let repeated_connection = PeerManagerEvent::RepeatedConnection { ty, sid, addr };\n                return self.report_peer(repeated_connection);\n            }\n            TransportError(err) => ConnectionErrorKind::from(err),\n        };\n\n        let listen_failed = PeerManagerEvent::ConnectFailed { addr, kind };\n        self.report_peer(listen_failed);\n    }\n}\n\n#[rustfmt::skip]\nimpl ServiceHandle for ConnectionServiceKeeper {\n    fn handle_error(&mut self, _ctx: &mut ServiceContext, err: ServiceError) {\n        match err {\n            ServiceError::DialerError { error, address } => {\n                self.process_dailer_error(address, error)\n            }\n            ServiceError::ListenError { error, address } => {\n                self.process_listen_error(address, error)\n            }\n            ServiceError::ProtocolSelectError { session_context, proto_name } => {\n                let protocol_identity = proto_name.map(ProtocolIdentity::Name);\n\n                let kind = SessionErrorKind::Protocol {\n                    identity: protocol_identity,\n                    cause: None,\n                };\n\n                let protocol_select_failure = PeerManagerEvent::SessionFailed {\n                    sid: session_context.id,\n                    kind,\n                };\n\n                self.report_peer(protocol_select_failure);\n            }\n\n            ServiceError::ProtocolError { id, error, proto_id } => {\n                let kind = SessionErrorKind::Protocol {\n                    identity: Some(ProtocolIdentity::Id(proto_id)),\n                    cause: Some(Box::new(error)),\n                };\n                let broken_protocol = PeerManagerEvent::SessionFailed { sid: id, kind };\n\n          
      self.report_peer(broken_protocol);\n            }\n\n            ServiceError::SessionTimeout { session_context } => {\n                let kind = SessionErrorKind::Io(std::io::ErrorKind::TimedOut.into());\n                let session_timeout = PeerManagerEvent::SessionFailed {\n                    sid: session_context.id,\n                    kind,\n                };\n\n                self.report_peer(session_timeout);\n            }\n\n            ServiceError::MuxerError { session_context, error } => {\n                let muxer_broken = PeerManagerEvent::SessionFailed {\n                    sid: session_context.id,\n                    kind: SessionErrorKind::Io(error)\n                };\n\n                self.report_peer(muxer_broken);\n            }\n\n            // Bad protocol code; it will cause memory leaks/abnormal CPU usage\n            ServiceError::ProtocolHandleError { error, proto_id } => {\n                error!(\"network: bad protocol {} implementation: {}\", proto_id, error);\n\n                let kind = ErrorKind::BadProtocolHandle { proto_id, cause: Box::new(error) };\n                self.report_error(kind);\n            }\n\n            // Part of the protocol task logic takes a long time to process, which\n            // usually indicates a bad protocol implementation.\n            ServiceError::SessionBlocked { session_context } => {\n                #[cfg(test)]\n                let session_context = SessionContext::from(session_context).arced();\n\n                let session_blocked = PeerManagerEvent::SessionBlocked {\n                    ctx: session_context\n                };\n                self.report_peer(session_blocked);\n            }\n        }\n    }\n\n    fn handle_event(&mut self, ctx: &mut ServiceContext, evt: ServiceEvent) {\n        match evt {\n            ServiceEvent::SessionOpen { session_context } => {\n                if session_context.remote_pubkey.is_none() {\n                    // Peer without encryption will not be able to 
connect to us\n                    error!(\"impossible, got connection from/to {:?} without public key, disconnect it\", session_context.address);\n\n                    // Just in case\n                    if let Err(e) = ctx.disconnect(session_context.id) {\n                        error!(\"disconnect session {} {}\", session_context.id, e);\n                    }\n                    return;\n                }\n\n                let pubkey = peer_pubkey!(&session_context).clone();\n                let pid = pubkey.peer_id();\n                #[cfg(test)]\n                let session_context = SessionContext::from(session_context).arced();\n                let new_unidentified_session = PeerManagerEvent::UnidentifiedSession { pid, pubkey, ctx: session_context };\n\n                self.report_peer(new_unidentified_session);\n            }\n            ServiceEvent::SessionClose { session_context } => {\n                let pid = peer_pubkey!(&session_context).peer_id();\n                let sid = session_context.id;\n\n                let peer_session_closed = PeerManagerEvent::SessionClosed { pid, sid };\n\n                self.report_peer(peer_session_closed);\n            }\n            ServiceEvent::ListenStarted { address } => {\n                let start_listen = PeerManagerEvent::AddNewListenAddr { addr: address };\n\n                self.report_peer(start_listen);\n            }\n            ServiceEvent::ListenClose { address } => {\n                let close_listen = PeerManagerEvent::RemoveListenAddr { addr: address };\n\n                self.report_peer(close_listen);\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "core/network/src/connection/mod.rs",
    "content": "mod control;\nmod keeper;\n\npub use control::{ConnectionServiceControl, ProtocolMessage};\npub use keeper::ConnectionServiceKeeper;\n\nuse std::collections::VecDeque;\nuse std::future::Future;\nuse std::marker::PhantomData;\nuse std::pin::Pin;\nuse std::task::{Context, Poll};\nuse std::time::Duration;\n\nuse futures::channel::mpsc::UnboundedReceiver;\nuse futures::stream::Stream;\nuse log::debug;\nuse tentacle::builder::ServiceBuilder;\nuse tentacle::error::SendErrorKind;\nuse tentacle::multiaddr::Multiaddr;\nuse tentacle::secio::SecioKeyPair;\nuse tentacle::service::Service;\n\nuse crate::error::NetworkError;\nuse crate::event::ConnectionEvent;\nuse crate::traits::NetworkProtocol;\n\npub struct ConnectionConfig {\n    /// Secio keypair for stream encryption and peer identity\n    pub secio_keypair: SecioKeyPair,\n\n    /// Max stream window size\n    pub max_frame_length: Option<usize>,\n\n    /// Send buffer size\n    pub send_buffer_size: Option<usize>,\n\n    /// Receive buffer size\n    pub recv_buffer_size: Option<usize>,\n\n    /// Max wait streams\n    pub max_wait_streams: Option<usize>,\n\n    /// Write timeout\n    pub write_timeout: Option<u64>,\n}\n\npub struct ConnectionService<P: NetworkProtocol> {\n    inner: Service<ConnectionServiceKeeper>,\n\n    event_rx:       UnboundedReceiver<ConnectionEvent>,\n    // Temporarily store events for later processing under high load\n    pending_events: VecDeque<ConnectionEvent>,\n\n    // Indicates which protocol this connection service tries to open\n    pin_protocol: PhantomData<P>,\n}\n\nimpl<P: NetworkProtocol> ConnectionService<P> {\n    pub fn new(\n        protocol: P,\n        config: ConnectionConfig,\n        keeper: ConnectionServiceKeeper,\n        event_rx: UnboundedReceiver<ConnectionEvent>,\n    ) -> Self {\n        let mut builder = ServiceBuilder::default()\n            .key_pair(config.secio_keypair)\n            .forever(true);\n\n        let mut yamux_config = 
tentacle::yamux::Config::default();\n\n        if let Some(max) = config.max_wait_streams {\n            yamux_config.accept_backlog = max;\n        }\n\n        if let Some(timeout) = config.write_timeout {\n            yamux_config.connection_write_timeout = Duration::from_secs(timeout);\n        }\n\n        builder = builder.yamux_config(yamux_config);\n\n        if let Some(max) = config.max_frame_length {\n            builder = builder.max_frame_length(max);\n        }\n\n        if let Some(size) = config.send_buffer_size {\n            builder = builder.set_send_buffer_size(size);\n        }\n\n        if let Some(size) = config.recv_buffer_size {\n            builder = builder.set_recv_buffer_size(size);\n        }\n\n        for proto_meta in protocol.metas().into_iter() {\n            debug!(\"network: connection: insert protocol {}\", proto_meta.name());\n            builder = builder.insert_protocol(proto_meta);\n        }\n\n        ConnectionService {\n            inner: builder.build(keeper),\n\n            event_rx,\n            pending_events: Default::default(),\n\n            pin_protocol: PhantomData,\n        }\n    }\n\n    pub async fn listen(&mut self, address: Multiaddr) -> Result<(), NetworkError> {\n        self.inner.listen(address).await?;\n\n        Ok(())\n    }\n\n    pub fn control(&self) -> ConnectionServiceControl {\n        ConnectionServiceControl::new(self.inner.control().clone())\n    }\n\n    // BrokenPipe means the service is closed.\n    // WouldBlock means the service is temporarily unavailable.\n    //\n    // If WouldBlock is returned, we should try again later.\n    pub fn process_event(&mut self, event: ConnectionEvent) {\n        enum State {\n            Closed,\n            Busy, // limit to 2048 in tentacle\n        }\n\n        macro_rules! 
try_do {\n            ($ctrl_op:expr) => {{\n                let ret = $ctrl_op.map_err(|err| match &err {\n                    SendErrorKind::BrokenPipe => State::Closed,\n                    SendErrorKind::WouldBlock => State::Busy,\n                });\n\n                match ret {\n                    Ok(_) => Ok(()),\n                    Err(state) => match state {\n                        State::Closed => return, // Early abort func\n                        State::Busy => Err::<(), ()>(()),\n                    },\n                }\n            }};\n        }\n\n        let control = self.inner.control();\n\n        match event {\n            ConnectionEvent::Connect { addrs, .. } => {\n                let mut pending_addrs = Vec::new();\n                let target_protocol = P::target();\n\n                for addr in addrs.into_iter() {\n                    if let Err(()) = try_do!(control.dial(addr.clone(), target_protocol.clone())) {\n                        pending_addrs.push(addr);\n                    }\n                }\n\n                if !pending_addrs.is_empty() {\n                    let pending_connect = ConnectionEvent::Connect {\n                        addrs: pending_addrs,\n                        proto: target_protocol,\n                    };\n\n                    self.pending_events.push_back(pending_connect);\n                }\n            }\n\n            ConnectionEvent::Disconnect(sid) => {\n                if let Err(()) = try_do!(control.disconnect(sid)) {\n                    let pending_disconnect = ConnectionEvent::Disconnect(sid);\n\n                    self.pending_events.push_back(pending_disconnect);\n                }\n            }\n        }\n    }\n}\n\nimpl<P: NetworkProtocol + Unpin> Future for ConnectionService<P> {\n    type Output = ();\n\n    fn poll(mut self: Pin<&mut Self>, ctx: &mut Context<'_>) -> Poll<Self::Output> {\n        let serv_mut = &mut self.as_mut();\n\n        // Process commands\n\n        // 
Pending commands first\n        let mut pending_events = std::mem::replace(&mut serv_mut.pending_events, VecDeque::new());\n        for event in pending_events.drain(..) {\n            debug!(\"network: pending event {}\", event);\n\n            serv_mut.process_event(event);\n        }\n\n        // Now received events\n        // Non-empty means the service is temporarily unavailable, try later\n        while serv_mut.pending_events.is_empty() {\n            let event_rx = &mut serv_mut.event_rx;\n            futures::pin_mut!(event_rx);\n\n            let event = crate::service_ready!(\"connection service\", event_rx.poll_next(ctx));\n            debug!(\"network: event [{}]\", event);\n\n            serv_mut.process_event(event);\n        }\n\n        // Advance service state\n        loop {\n            let inner = &mut serv_mut.inner;\n            futures::pin_mut!(inner);\n\n            crate::service_ready!(\"connection service\", inner.poll_next(ctx));\n        }\n\n        Poll::Pending\n    }\n}\n"
  },
  {
    "path": "core/network/src/endpoint.rs",
    "content": "use std::{\n    cmp::PartialEq,\n    convert::TryFrom,\n    hash::{Hash, Hasher},\n    str::FromStr,\n};\n\nuse derive_more::{Display, From};\n\nuse crate::error::{ErrorKind, NetworkError};\n\npub const GOSSIP_SCHEME: &str = \"/gossip\";\npub const RPC_CALL_SCHEME: &str = \"/rpc_call\";\npub const RPC_RESPONSE_SCHEME: &str = \"/rpc_resp\";\n\npub const MAX_ENDPOINT_LENGTH: usize = 120;\n\n#[derive(Debug, Display, PartialEq, Eq)]\npub enum EndpointScheme {\n    #[display(fmt = \"{}\", GOSSIP_SCHEME)]\n    Gossip,\n\n    #[display(fmt = \"{}\", RPC_CALL_SCHEME)]\n    RpcCall,\n\n    #[display(fmt = \"{}\", RPC_RESPONSE_SCHEME)]\n    RpcResponse,\n}\n\n// For example\n//\n// gossip: /gossip/cprd/7702_cnpukpeyr_release_date\n// rpc: /rpc_call/cykppeunr_7702/create_a_character/{rpc_id}\n//\n// NOTE: Endpoint only care about first three url comps. So\n// as its PartialEq, Eq and Hash implement.\n#[derive(Debug, Clone, Display)]\n#[display(fmt = \"{}\", _0)]\npub struct Endpoint(String);\n\nimpl Endpoint {\n    pub fn starts_with(&self, pat: &str) -> bool {\n        self.0.starts_with(pat)\n    }\n\n    pub fn scheme(&self) -> EndpointScheme {\n        if self.starts_with(GOSSIP_SCHEME) {\n            EndpointScheme::Gossip\n        } else if self.starts_with(RPC_CALL_SCHEME) {\n            EndpointScheme::RpcCall\n        } else if self.starts_with(RPC_RESPONSE_SCHEME) {\n            EndpointScheme::RpcResponse\n        } else {\n            unreachable!()\n        }\n    }\n\n    // Root part, the first three comps\n    pub fn root(&self) -> String {\n        let url = &self.0;\n\n        let comps = url\n            .split('/')\n            .filter(|comp| !comp.is_empty())\n            .collect::<Vec<&str>>();\n\n        format!(\"/{}/{}/{}\", comps[0], comps[1], comps[2])\n    }\n\n    pub fn full_url(&self) -> &str {\n        &self.0\n    }\n\n    pub fn extend(&self, comp: &str) -> Result<Self, NetworkError> {\n        let comp = 
comp.trim_start_matches('/');\n\n        format!(\"{}/{}\", self.0, comp).parse::<Endpoint>()\n    }\n}\n\nimpl PartialEq for Endpoint {\n    fn eq(&self, other: &Self) -> bool {\n        self.root() == other.root()\n    }\n}\n\nimpl Eq for Endpoint {}\n\nimpl Hash for Endpoint {\n    fn hash<H: Hasher>(&self, state: &mut H) {\n        self.root().hash(state)\n    }\n}\n\nimpl FromStr for Endpoint {\n    type Err = NetworkError;\n\n    fn from_str(end: &str) -> Result<Self, Self::Err> {\n        if end.is_empty() || end.len() > MAX_ENDPOINT_LENGTH {\n            return Err(NetworkError::NotEndpoint);\n        }\n\n        // Check scheme\n        if !end.starts_with(GOSSIP_SCHEME)\n            && !end.starts_with(RPC_CALL_SCHEME)\n            && !end.starts_with(RPC_RESPONSE_SCHEME)\n        {\n            return Err(NetworkError::UnexpectedScheme(end.to_owned()));\n        }\n\n        // Count components\n        let comps = end\n            .split('/')\n            .filter(|comp| !comp.is_empty())\n            .collect::<Vec<&str>>();\n\n        // Right now, gossip takes 3 comps and rpc has 4 comps\n        if comps.len() < 3 || comps.len() > 4 {\n            return Err(NetworkError::NotEndpoint);\n        }\n\n        Ok(Endpoint(end.to_owned()))\n    }\n}\n\n#[derive(Debug, PartialEq, Eq, From, Display, Hash, Clone, Copy)]\n#[display(fmt = \"{}\", _0)]\npub struct RpcId(u64);\n\nimpl RpcId {\n    pub fn value(self) -> u64 {\n        self.0\n    }\n}\n\n#[derive(Debug, Clone, From, Display)]\n#[display(fmt = \"{}/{}\", end, rid)]\npub struct RpcEndpoint {\n    end: Endpoint,\n    rid: RpcId,\n}\n\nimpl RpcEndpoint {\n    pub fn endpoint(&self) -> &Endpoint {\n        &self.end\n    }\n\n    pub fn rpc_id(&self) -> RpcId {\n        self.rid\n    }\n\n    fn extract_rpc_id_from(end: &Endpoint) -> Result<RpcId, NetworkError> {\n        let end = end.full_url();\n\n        // Rpc id should be the last comp\n        let r_sep_idx = 
end.rfind('/').ok_or(NetworkError::NotEndpoint)?;\n        if end.len() == (r_sep_idx + 1) {\n            // Last separator '/' should not be the last char\n            return Err(NetworkError::NotEndpoint);\n        }\n\n        // Extract rid\n        let rid = &end[(r_sep_idx + 1)..];\n\n        // Parse it\n        let rid = rid.parse::<u64>().map_err(ErrorKind::NotIdString)?;\n\n        Ok(rid.into())\n    }\n}\n\nimpl TryFrom<Endpoint> for RpcEndpoint {\n    type Error = NetworkError;\n\n    fn try_from(end: Endpoint) -> Result<Self, Self::Error> {\n        let rid = Self::extract_rpc_id_from(&end)?;\n\n        Ok(RpcEndpoint { end, rid })\n    }\n}\n\nimpl FromStr for RpcEndpoint {\n    type Err = NetworkError;\n\n    fn from_str(end: &str) -> Result<Self, Self::Err> {\n        let end = end.parse::<Endpoint>()?;\n\n        if !end.starts_with(RPC_CALL_SCHEME) && !end.starts_with(RPC_RESPONSE_SCHEME) {\n            return Err(NetworkError::UnexpectedScheme(end.root()));\n        }\n\n        let rid = Self::extract_rpc_id_from(&end)?;\n\n        Ok(RpcEndpoint { end, rid })\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::Endpoint;\n\n    #[test]\n    fn should_able_parse_valid_endpoint_url() {\n        let end = \"/gossip/crpd/watch_cpunpyker7702\";\n        let expect = Endpoint(end.to_owned());\n\n        let endpoint = end.parse::<Endpoint>().unwrap();\n        assert_eq!(endpoint, expect);\n    }\n}\n"
  },
  {
    "path": "core/network/src/error.rs",
    "content": "use std::{error::Error, num::ParseIntError};\n\nuse derive_more::Display;\nuse tentacle::{\n    multiaddr::Multiaddr,\n    secio::{PeerId, PublicKey},\n    ProtocolId, SessionId,\n};\n\nuse protocol::{types::Address, Bytes, ProtocolError, ProtocolErrorKind};\n\nuse crate::common::ConnectedAddr;\n\n#[derive(Debug, Display)]\npub enum ErrorKind {\n    #[display(fmt = \"{} offline\", _0)]\n    Offline(&'static str),\n\n    #[display(fmt = \"protocol {} missing\", _0)]\n    MissingProtocol(&'static str),\n\n    #[display(fmt = \"kind: bad protocl logic code\")]\n    BadProtocolHandle {\n        proto_id: ProtocolId,\n        cause:    Box<dyn Error + Send>,\n    },\n\n    #[display(fmt = \"kind: given string isn't an id: {}\", _0)]\n    NotIdString(ParseIntError),\n\n    #[display(fmt = \"kind: unable to encode or decode: {}\", _0)]\n    BadMessage(Box<dyn Error + Send>),\n\n    #[display(fmt = \"kind: unknown rid {} from session {}\", rid, sid)]\n    UnknownRpc { sid: SessionId, rid: u64 },\n\n    #[display(fmt = \"kind: unexpected rpc sender, wrong type\")]\n    UnexpectedRpcSender,\n\n    #[display(fmt = \"kind: more than one arc rpc sender, cannot unwrap it\")]\n    MoreArcRpcSender,\n\n    #[display(fmt = \"kind: session id not found in context\")]\n    NoSessionId,\n\n    #[display(fmt = \"kind: remote peer id not found in context\")]\n    NoRemotePeerId,\n\n    #[display(fmt = \"kind: rpc id not found in context\")]\n    NoRpcId,\n\n    #[display(fmt = \"kind: rpc future dropped {:?}\", _0)]\n    RpcDropped(Option<ConnectedAddr>),\n\n    #[display(fmt = \"kind: rpc timeout {:?}\", _0)]\n    RpcTimeout(Option<ConnectedAddr>),\n\n    #[display(fmt = \"kind: not reactor register for {}\", _0)]\n    NoReactor(String),\n\n    #[display(\n        fmt = \"kind: cannot create chain address from bytes {:?} {}\",\n        pubkey,\n        cause\n    )]\n    NoChainAddress {\n        pubkey: Bytes,\n        cause:  Box<dyn Error + Send>,\n    },\n\n    
#[display(fmt = \"kind: public key {:?} not match {:?}\", pubkey, id)]\n    PublicKeyNotMatchId { pubkey: PublicKey, id: PeerId },\n\n    #[display(fmt = \"kind: untaggable {}\", _0)]\n    Untaggable(String),\n\n    #[display(fmt = \"kind: internal {}\", _0)]\n    Internal(String),\n}\n\nimpl Error for ErrorKind {}\n\n#[derive(Debug, Display)]\n#[display(fmt = \"peer id not found in {}\", _0)]\npub struct PeerIdNotFound(pub(crate) Multiaddr);\n\nimpl Error for PeerIdNotFound {}\n\n#[derive(Debug, Display)]\npub enum NetworkError {\n    #[display(fmt = \"io error: {}\", _0)]\n    IoError(std::io::Error),\n\n    #[display(fmt = \"temporary unavailable, try again later\")]\n    Busy,\n\n    #[display(fmt = \"send incompletely, blocked {:?}, other {:?}\", blocked, other)]\n    Send {\n        blocked: Option<Vec<SessionId>>,\n        other:   Option<Box<dyn Error + Send>>,\n    },\n\n    #[display(\n        fmt = \"send incompletely, unconnected {:?}, other {:?}\",\n        unconnected,\n        other\n    )]\n    MultiCast {\n        unconnected: Option<Vec<PeerId>>,\n        other:       Option<Box<dyn Error + Send>>,\n    },\n\n    #[display(fmt = \"shutdown\")]\n    Shutdown,\n\n    #[display(fmt = \"unexected error: {}\", _0)]\n    UnexpectedError(Box<dyn Error + Send>),\n\n    #[display(fmt = \"cannot decode public key bytes\")]\n    InvalidPublicKey,\n\n    #[display(fmt = \"cannot decode private key bytes\")]\n    InvalidPrivateKey,\n\n    #[display(fmt = \"cannot decode peer id\")]\n    InvalidPeerId,\n\n    #[display(fmt = \"unsupported peer address {}\", _0)]\n    UnexpectedPeerAddr(String),\n\n    #[display(fmt = \"unknown endpoint scheme {}\", _0)]\n    UnexpectedScheme(String),\n\n    #[display(fmt = \"cannot serde encode or decode: {}\", _0)]\n    SerdeError(Box<dyn Error + Send>),\n\n    #[display(fmt = \"malformat or exceed maximum length, /[scheme]/[name]/[method] etc\")]\n    NotEndpoint,\n\n    #[display(fmt = \"{:?} account addrs aren't connecting, 
try connect them\", miss)]\n    PartialRouteMessage { miss: Vec<Address> },\n\n    #[display(fmt = \"remote response {}\", _0)]\n    RemoteResponse(Box<dyn Error + Send>),\n\n    #[display(fmt = \"trust max history should be longer than {} secs\", _0)]\n    SmallTrustMaxHistory(u64),\n\n    #[display(fmt = \"transport {}\", _0)]\n    Transport(tentacle::error::TransportErrorKind),\n\n    #[display(fmt = \"inbound connection limit is equal or smaller than max connections\")]\n    InboundLimitEqualOrSmallerThanMaxConn,\n\n    #[display(fmt = \"internal error: {}\", _0)]\n    Internal(Box<dyn Error + Send>),\n}\n\nimpl Error for NetworkError {}\n\nimpl From<PeerIdNotFound> for NetworkError {\n    fn from(err: PeerIdNotFound) -> NetworkError {\n        NetworkError::Internal(Box::new(err))\n    }\n}\n\nimpl From<ErrorKind> for NetworkError {\n    fn from(kind: ErrorKind) -> NetworkError {\n        NetworkError::Internal(Box::new(kind))\n    }\n}\n\nimpl From<Box<bincode::ErrorKind>> for NetworkError {\n    fn from(kind: Box<bincode::ErrorKind>) -> NetworkError {\n        NetworkError::SerdeError(Box::new(kind))\n    }\n}\n\nimpl From<NetworkError> for ProtocolError {\n    fn from(err: NetworkError) -> ProtocolError {\n        ProtocolError::new(ProtocolErrorKind::Network, Box::new(err))\n    }\n}\n\nimpl From<std::io::Error> for NetworkError {\n    fn from(err: std::io::Error) -> NetworkError {\n        NetworkError::IoError(err)\n    }\n}\n\nimpl From<tentacle::error::TransportErrorKind> for NetworkError {\n    fn from(err: tentacle::error::TransportErrorKind) -> NetworkError {\n        NetworkError::Transport(err)\n    }\n}\n\nimpl From<NetworkError> for Box<dyn Error + Send> {\n    fn from(err: NetworkError) -> Box<dyn Error + Send> {\n        err.boxed()\n    }\n}\n\nimpl NetworkError {\n    pub fn boxed(self) -> Box<dyn Error + Send> {\n        Box::new(self) as Box<dyn Error + Send>\n    }\n}\n"
  },
  {
    "path": "core/network/src/event.rs",
    "content": "use std::{error::Error, sync::Arc};\n\nuse derive_more::Display;\nuse protocol::traits::TrustFeedback;\n#[cfg(not(test))]\nuse tentacle::context::SessionContext;\nuse tentacle::{\n    error::TransportErrorKind,\n    multiaddr::Multiaddr,\n    secio::{PeerId, PublicKey},\n    service::TargetProtocol,\n    ProtocolId, SessionId,\n};\n\n#[cfg(test)]\nuse crate::test::mock::SessionContext;\n\n#[derive(Debug, Display)]\npub enum ConnectionEvent {\n    #[display(fmt = \"connect addrs {:?}, proto: {:?}\", addrs, proto)]\n    Connect {\n        addrs: Vec<Multiaddr>,\n        proto: TargetProtocol,\n    },\n\n    #[display(fmt = \"disconnect session {}\", _0)]\n    Disconnect(SessionId),\n}\n\n#[derive(Debug, Display)]\npub enum ProtocolIdentity {\n    #[display(fmt = \"protocol id {}\", _0)]\n    Id(ProtocolId),\n    #[display(fmt = \"protocol name {}\", _0)]\n    Name(String),\n}\n\n#[derive(Debug, Display)]\npub enum ConnectionErrorKind {\n    #[display(fmt = \"io {:?}\", _0)]\n    Io(std::io::Error),\n\n    #[display(fmt = \"dns resolver {}\", _0)]\n    DNSResolver(Box<dyn Error + Send>),\n\n    #[display(fmt = \"multiaddr {} is not supported\", _0)]\n    MultiaddrNotSuppored(Multiaddr),\n\n    #[display(fmt = \"handshake {}\", _0)]\n    SecioHandshake(Box<dyn Error + Send>),\n\n    #[display(fmt = \"timeout {}\", _0)]\n    TimeOut(String),\n\n    #[display(fmt = \"remote peer doesn't match one in multiaddr\")]\n    PeerIdNotMatch,\n\n    #[display(fmt = \"protocol handle block or abnormally closed\")]\n    ProtocolHandle,\n}\n\nimpl From<TransportErrorKind> for ConnectionErrorKind {\n    fn from(err: TransportErrorKind) -> ConnectionErrorKind {\n        match err {\n            TransportErrorKind::Io(err) => ConnectionErrorKind::Io(err),\n            TransportErrorKind::NotSupported(addr) => {\n                ConnectionErrorKind::MultiaddrNotSuppored(addr)\n            }\n            TransportErrorKind::DNSResolverError(_, _) => {\n                
ConnectionErrorKind::DNSResolver(Box::new(err))\n            }\n        }\n    }\n}\n\n#[derive(Debug, Display)]\npub enum SessionErrorKind {\n    #[display(fmt = \"io {:?}\", _0)]\n    Io(std::io::Error),\n\n    // Maybe unknown protocol, protocol version incompatible, protocol codec\n    // error\n    #[display(fmt = \"protocol identity {:?} {:?}\", identity, cause)]\n    Protocol {\n        identity: Option<ProtocolIdentity>,\n        cause:    Option<Box<dyn Error + Send>>,\n    },\n\n    #[display(fmt = \"unexpected {}\", _0)]\n    #[allow(dead_code)]\n    Unexpected(Box<dyn Error + Send>),\n}\n\n#[derive(Debug, Display)]\npub enum MisbehaviorKind {\n    #[display(fmt = \"discovery\")]\n    Discovery,\n\n    #[display(fmt = \"ping time out\")]\n    PingTimeout,\n\n    // Maybe message codec or nonce incorrect\n    #[display(fmt = \"ping unexpected\")]\n    PingUnexpect,\n}\n\n#[derive(Debug, Display, PartialEq, Eq)]\npub enum ConnectionType {\n    #[allow(dead_code)]\n    #[display(fmt = \"Received a repeated connection\")]\n    Inbound,\n    #[display(fmt = \"Dialed a repeated connection\")]\n    Outbound,\n}\n\n#[derive(Debug, Display)]\npub enum PeerManagerEvent {\n    // Peer\n    #[display(fmt = \"connect peers {:?} now\", pids)]\n    ConnectPeersNow { pids: Vec<PeerId> },\n\n    #[display(fmt = \"connect to {} failed, kind: {}\", addr, kind)]\n    ConnectFailed {\n        addr: Multiaddr,\n        kind: ConnectionErrorKind,\n    },\n\n    #[display(\n        fmt = \"new session {} peer {:?} addr {} ty {:?}\",\n        \"ctx.id\",\n        pid,\n        \"ctx.address\",\n        \"ctx.ty\"\n    )]\n    NewSession {\n        pid:    PeerId,\n        pubkey: PublicKey,\n        ctx:    Arc<SessionContext>,\n    },\n\n    #[display(\n        fmt = \"unidentified session {} peer {:?} addr {} ty {:?}\",\n        \"ctx.id\",\n        pid,\n        \"ctx.address\",\n        \"ctx.ty\"\n    )]\n    UnidentifiedSession {\n
        pid:    PeerId,\n        pubkey: PublicKey,\n        ctx:    Arc<SessionContext>,\n    },\n\n    #[display(fmt = \"repeated connection type {} session {} addr {}\", ty, sid, addr)]\n    RepeatedConnection {\n        ty:   ConnectionType,\n        sid:  SessionId,\n        addr: Multiaddr,\n    },\n\n    #[display(\n        fmt = \"session {} blocked, pending data size {}\",\n        \"ctx.id\",\n        \"ctx.pending_data_size()\"\n    )]\n    SessionBlocked { ctx: Arc<SessionContext> },\n\n    #[display(fmt = \"peer {:?} session {} closed\", pid, sid)]\n    SessionClosed { pid: PeerId, sid: SessionId },\n\n    #[display(fmt = \"session {} failed, kind: {}\", sid, kind)]\n    SessionFailed {\n        sid:  SessionId,\n        kind: SessionErrorKind,\n    },\n\n    #[display(fmt = \"peer {:?} alive\", pid)]\n    PeerAlive { pid: PeerId },\n\n    #[display(fmt = \"peer {:?} misbehave {}\", pid, kind)]\n    Misbehave { pid: PeerId, kind: MisbehaviorKind },\n\n    #[display(fmt = \"peer {:?} trust metric feedback {}\", pid, feedback)]\n    TrustMetric {\n        pid:      PeerId,\n        feedback: TrustFeedback,\n    },\n\n    // Address\n    #[display(fmt = \"discover multi addrs {:?}\", addrs)]\n    DiscoverMultiAddrs { addrs: Vec<Multiaddr> },\n\n    #[display(fmt = \"identify pid {:?} addrs {:?}\", pid, addrs)]\n    IdentifiedAddrs {\n        pid:   PeerId,\n        addrs: Vec<Multiaddr>,\n    },\n\n    // Self\n    #[display(fmt = \"add listen addr {}\", addr)]\n    AddNewListenAddr { addr: Multiaddr },\n\n    #[display(fmt = \"remove listen addr {}\", addr)]\n    RemoveListenAddr { addr: Multiaddr },\n}\n"
  },
  {
    "path": "core/network/src/lib.rs",
    "content": "mod common;\nmod compression;\nmod config;\nmod connection;\nmod endpoint;\nmod error;\nmod event;\nmod message;\nmod metrics;\nmod outbound;\nmod peer_manager;\nmod protocols;\nmod reactor;\nmod rpc;\nmod selfcheck;\nmod service;\n#[cfg(test)]\nmod test;\nmod traits;\n\npub use config::NetworkConfig;\npub use error::NetworkError;\npub use message::{serde, serde_multi};\npub use service::{NetworkService, NetworkServiceHandle};\n\n#[cfg(feature = \"diagnostic\")]\npub use peer_manager::diagnostic::{DiagnosticEvent, TrustReport};\n\npub use tentacle::secio::PeerId;\n\nuse protocol::Bytes;\nuse tentacle::secio::PublicKey;\n\npub trait PeerIdExt {\n    fn from_pubkey_bytes<'a, B: AsRef<[u8]> + 'a>(bytes: B) -> Result<PeerId, NetworkError> {\n        let pubkey = PublicKey::secp256k1_raw_key(bytes.as_ref())\n            .map_err(|_| NetworkError::InvalidPublicKey)?;\n\n        Ok(PeerId::from_public_key(&pubkey))\n    }\n\n    fn from_bytes<'a, B: AsRef<[u8]> + 'a>(bytes: B) -> Result<PeerId, NetworkError> {\n        PeerId::from_bytes(bytes.as_ref().to_vec()).map_err(|_| NetworkError::InvalidPeerId)\n    }\n\n    fn to_string(&self) -> String;\n\n    fn into_bytes_ext(self) -> Bytes;\n\n    fn from_str_ext<'a, S: AsRef<str> + 'a>(s: S) -> Result<PeerId, NetworkError> {\n        s.as_ref().parse().map_err(|_| NetworkError::InvalidPeerId)\n    }\n}\n\nimpl PeerIdExt for PeerId {\n    fn into_bytes_ext(self) -> Bytes {\n        Bytes::from(self.into_bytes())\n    }\n\n    fn to_string(&self) -> String {\n        self.to_base58()\n    }\n}\n"
  },
  {
    "path": "core/network/src/message/mod.rs",
    "content": "pub mod serde;\npub mod serde_multi;\n\nuse std::{collections::HashMap, str::FromStr};\n\nuse common_apm::muta_apm::rustracing_jaeger::span::TraceId;\nuse prost::Message;\nuse protocol::Bytes;\n\nuse crate::endpoint::Endpoint;\nuse crate::error::{ErrorKind, NetworkError};\n\npub struct Headers(HashMap<String, Vec<u8>>);\n\nimpl Default for Headers {\n    fn default() -> Self {\n        Headers(Default::default())\n    }\n}\n\nimpl Headers {\n    pub fn set_trace_id(&mut self, id: TraceId) {\n        self.0\n            .insert(\"trace_id\".to_owned(), id.to_string().into_bytes());\n    }\n\n    pub fn set_span_id(&mut self, id: u64) {\n        self.0\n            .insert(\"span_id\".to_owned(), id.to_be_bytes().to_vec());\n    }\n}\n\n#[derive(Message)]\npub struct NetworkMessage {\n    #[prost(map = \"string, bytes\", tag = \"1\")]\n    pub headers: HashMap<String, Vec<u8>>,\n\n    #[prost(string, tag = \"2\")]\n    pub url: String,\n\n    #[prost(bytes, tag = \"3\")]\n    pub content: Vec<u8>,\n}\n\nimpl NetworkMessage {\n    pub fn new(endpoint: Endpoint, content: Bytes, headers: Headers) -> Self {\n        NetworkMessage {\n            headers: headers.0,\n            url:     endpoint.full_url().to_owned(),\n            content: content.to_vec(),\n        }\n    }\n\n    pub fn trace_id(&self) -> Option<TraceId> {\n        self.headers\n            .get(\"trace_id\")\n            .map(|id| {\n                String::from_utf8(id.to_owned())\n                    .ok()\n                    .map(|s| TraceId::from_str(&s).ok())\n                    .flatten()\n            })\n            .flatten()\n    }\n\n    pub fn span_id(&self) -> Option<u64> {\n        self.headers.get(\"span_id\").map(|id| {\n            let mut buf = [0u8; 8];\n            buf.copy_from_slice(&id[..8]);\n            u64::from_be_bytes(buf)\n        })\n    }\n\n    pub fn encode(self) -> Result<Bytes, NetworkError> {\n        let mut buf = 
Vec::with_capacity(self.encoded_len());\n\n        <Self as Message>::encode(&self, &mut buf)\n            .map_err(|e| ErrorKind::BadMessage(Box::new(e)))?;\n\n        Ok(Bytes::from(buf))\n    }\n\n    pub fn decode(bytes: Bytes) -> Result<Self, NetworkError> {\n        <Self as Message>::decode(bytes).map_err(|e| ErrorKind::BadMessage(Box::new(e)).into())\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use protocol::{types::Hash, Bytes};\n    use quickcheck_macros::quickcheck;\n    use serde_derive::{Deserialize, Serialize};\n\n    #[derive(Debug, Serialize, Deserialize)]\n    struct Hashes {\n        #[serde(with = \"super::serde_multi\")]\n        hashes: Vec<Hash>,\n    }\n\n    #[derive(Debug, Clone, Serialize, Deserialize)]\n    struct QHash {\n        #[serde(with = \"super::serde\")]\n        hash: Hash,\n    }\n\n    impl quickcheck::Arbitrary for QHash {\n        fn arbitrary<G: quickcheck::Gen>(g: &mut G) -> QHash {\n            let msg = Bytes::from(String::arbitrary(g));\n            let hash_val = Hash::digest(msg);\n\n            QHash { hash: hash_val }\n        }\n    }\n\n    impl From<Vec<QHash>> for Hashes {\n        fn from(q_hashes: Vec<QHash>) -> Hashes {\n            let hashes = q_hashes\n                .into_iter()\n                .map(|qhash| qhash.hash)\n                .collect::<Vec<_>>();\n\n            Hashes { hashes }\n        }\n    }\n\n    #[quickcheck]\n    fn prop_protocol_type_serialization(hash: QHash) -> bool {\n        bincode::deserialize::<QHash>(&bincode::serialize(&hash).unwrap()).is_ok()\n    }\n\n    #[quickcheck]\n    fn prop_vec_protocol_type_serialization(hashes: Vec<QHash>) -> bool {\n        let hashes = Hashes::from(hashes);\n\n        bincode::deserialize::<Hashes>(&bincode::serialize(&hashes).unwrap()).is_ok()\n    }\n}\n"
  },
  {
    "path": "core/network/src/message/serde.rs",
    "content": "use std::fmt;\n\nuse protocol::codec::ProtocolCodecSync;\nuse protocol::Bytes;\nuse serde::{de, ser, Deserializer, Serializer};\n\npub fn serialize<T, S>(val: &T, s: S) -> Result<S::Ok, S::Error>\nwhere\n    S: Serializer,\n    T: ProtocolCodecSync,\n{\n    let bytes = val.encode_sync().map_err(ser::Error::custom)?;\n\n    s.serialize_bytes(&bytes.to_vec())\n}\n\nstruct BytesVisit;\n\npub fn deserialize<'de, T, D>(deserializer: D) -> Result<T, D::Error>\nwhere\n    D: Deserializer<'de>,\n    T: ProtocolCodecSync,\n{\n    let bytes = deserializer.deserialize_byte_buf(BytesVisit)?;\n\n    <T as ProtocolCodecSync>::decode_sync(bytes).map_err(de::Error::custom)\n}\n\nimpl<'de> de::Visitor<'de> for BytesVisit {\n    type Value = Bytes;\n\n    fn expecting(&self, formatter: &mut fmt::Formatter<'_>) -> fmt::Result {\n        formatter.write_str(\"byte array\")\n    }\n\n    #[inline]\n    fn visit_byte_buf<E>(self, v: Vec<u8>) -> Result<Self::Value, E>\n    where\n        E: de::Error,\n    {\n        Ok(Bytes::from(v))\n    }\n}\n"
  },
  {
    "path": "core/network/src/message/serde_multi.rs",
    "content": "use std::{fmt, iter::FromIterator, marker::PhantomData};\n\nuse derive_more::Constructor;\nuse protocol::codec::ProtocolCodecSync;\nuse serde::{de, ser::SerializeStruct, Deserializer, Serializer};\nuse serde_derive::{Deserialize, Serialize};\n\n#[derive(Debug, Serialize, Deserialize)]\nstruct TWrapper<T: ProtocolCodecSync> {\n    #[serde(with = \"super::serde\")]\n    inner: T,\n}\n\n#[derive(Constructor, Serialize)]\nstruct VecT<T: ProtocolCodecSync> {\n    inner: Vec<TWrapper<T>>,\n}\n\npub fn serialize<'se, V, T, S>(val: &'se V, s: S) -> Result<S::Ok, S::Error>\nwhere\n    S: Serializer,\n    V: IntoIterator<Item = T> + Clone,\n    T: ProtocolCodecSync + 'se + Clone,\n{\n    let val_cloned = val.clone().into_iter();\n    let inner = val_cloned\n        .map(|t| TWrapper { inner: t })\n        .collect::<Vec<_>>();\n\n    let vec_t = VecT { inner };\n\n    let mut state = s.serialize_struct(\"VecT\", 1)?;\n    state.serialize_field(\"inner\", &vec_t.inner)?;\n    state.end()\n}\n\npub fn deserialize<'de, T, V, D>(deserializer: D) -> Result<V, D::Error>\nwhere\n    D: Deserializer<'de>,\n    V: FromIterator<T>,\n    T: ProtocolCodecSync,\n{\n    #[derive(Deserialize)]\n    #[serde(field_identifier, rename_all = \"lowercase\")]\n    enum Field {\n        Inner,\n    }\n\n    struct VecTVisitor<T> {\n        pin_t: PhantomData<T>,\n    }\n\n    impl<T> VecTVisitor<T> {\n        pub fn new() -> Self {\n            VecTVisitor { pin_t: PhantomData }\n        }\n    }\n\n    impl<'de, T> de::Visitor<'de> for VecTVisitor<T>\n    where\n        T: ProtocolCodecSync,\n    {\n        type Value = VecT<T>;\n\n        fn expecting(&self, formatter: &mut fmt::Formatter) -> fmt::Result {\n            formatter.write_str(\"serde multi\")\n        }\n\n        fn visit_seq<V>(self, mut seq: V) -> Result<Self::Value, V::Error>\n        where\n            V: de::SeqAccess<'de>,\n        {\n            let inner = seq\n                .next_element()?\n              
  .ok_or_else(|| de::Error::invalid_length(0, &self))?;\n\n            Ok(VecT::new(inner))\n        }\n\n        fn visit_map<V>(self, mut map: V) -> Result<Self::Value, V::Error>\n        where\n            V: de::MapAccess<'de>,\n        {\n            let mut inner = None;\n\n            while let Some(key) = map.next_key()? {\n                match key {\n                    Field::Inner => {\n                        if inner.is_some() {\n                            return Err(de::Error::duplicate_field(\"inner\"));\n                        }\n                        inner = Some(map.next_value()?);\n                    }\n                }\n            }\n\n            let inner = inner.ok_or_else(|| de::Error::missing_field(\"inner\"))?;\n            Ok(VecT::new(inner))\n        }\n    }\n\n    const FIELDS: &[&str] = &[\"inner\"];\n    let vec_t = deserializer.deserialize_struct(\"VecT\", FIELDS, VecTVisitor::new())?;\n\n    Ok(V::from_iter(\n        vec_t.inner.into_iter().map(|wrap_t| wrap_t.inner),\n    ))\n}\n"
  },
  {
    "path": "core/network/src/metrics.rs",
    "content": "use std::{\n    future::Future,\n    pin::Pin,\n    sync::Arc,\n    task::{Context, Poll},\n    time::Duration,\n};\n\nuse futures::task::AtomicWaker;\n\nuse crate::{\n    common::{ConnectedAddr, HeartBeat},\n    traits::SharedSessionBook,\n};\n\nconst METRICS_INTERVAL: Duration = Duration::from_secs(1);\n\npub(crate) struct Metrics<S> {\n    sessions:   S,\n    heart_beat: Option<HeartBeat>,\n    hb_waker:   Arc<AtomicWaker>,\n}\n\nimpl<S> Metrics<S>\nwhere\n    S: SharedSessionBook + Send + Unpin + 'static,\n{\n    pub fn new(sessions: S) -> Self {\n        let waker = Arc::new(AtomicWaker::new());\n        let heart_beat = HeartBeat::new(Arc::clone(&waker), METRICS_INTERVAL);\n\n        Metrics {\n            sessions,\n            heart_beat: Some(heart_beat),\n            hb_waker: waker,\n        }\n    }\n\n    fn report_pending_data(&self) {\n        let sids = self.sessions.all();\n\n        let total_size: usize = sids\n            .iter()\n            .map(|sid| {\n                let data_size = self.sessions.pending_data_size(*sid);\n\n                if let Some(ConnectedAddr { host, .. 
}) = self.sessions.connected_addr(*sid) {\n                    let gauge = common_apm::metrics::network::NETWORK_IP_PENDING_DATA_SIZE_VEC\n                        .with_label_values(&[&host]);\n                    gauge.set(data_size as i64);\n                }\n\n                data_size\n            })\n            .sum();\n\n        common_apm::metrics::network::NETWORK_TOTAL_PENDING_DATA_SIZE.set(total_size as i64);\n    }\n}\n\nimpl<S> Future for Metrics<S>\nwhere\n    S: SharedSessionBook + Send + Unpin + 'static,\n{\n    type Output = ();\n\n    fn poll(mut self: Pin<&mut Self>, ctx: &mut Context<'_>) -> Poll<Self::Output> {\n        self.hb_waker.register(ctx.waker());\n\n        // Spawn heart beat\n        if let Some(heart_beat) = self.heart_beat.take() {\n            tokio::spawn(heart_beat);\n\n            // Not needed on the first run\n            return Poll::Pending;\n        }\n\n        self.as_ref().report_pending_data();\n\n        Poll::Pending\n    }\n}\n"
  },
  {
    "path": "core/network/src/outbound/gossip.rs",
    "content": "use async_trait::async_trait;\nuse protocol::traits::{Context, Gossip, MessageCodec, Priority};\nuse protocol::{Bytes, ProtocolResult};\nuse tentacle::secio::PeerId;\nuse tentacle::service::TargetSession;\n\nuse crate::endpoint::Endpoint;\nuse crate::error::NetworkError;\nuse crate::message::{Headers, NetworkMessage};\nuse crate::protocols::{Recipient, Transmitter, TransmitterMessage};\nuse crate::traits::{Compression, NetworkContext};\nuse crate::PeerIdExt;\n\n#[derive(Clone)]\npub struct NetworkGossip {\n    transmitter: Transmitter,\n}\n\nimpl NetworkGossip {\n    pub fn new(transmitter: Transmitter) -> Self {\n        NetworkGossip { transmitter }\n    }\n\n    async fn package_message<M>(\n        &self,\n        ctx: Context,\n        endpoint: &str,\n        mut msg: M,\n    ) -> ProtocolResult<Bytes>\n    where\n        M: MessageCodec,\n    {\n        let endpoint = endpoint.parse::<Endpoint>()?;\n        let data = msg.encode()?;\n        let mut headers = Headers::default();\n        if let Some(state) = common_apm::muta_apm::MutaTracer::span_state(&ctx) {\n            headers.set_trace_id(state.trace_id());\n            headers.set_span_id(state.span_id());\n            log::info!(\"no trace id found for gossip {}\", endpoint.full_url());\n        }\n        let net_msg = NetworkMessage::new(endpoint, data, headers).encode()?;\n        let msg = self.transmitter.compressor().compress(net_msg)?;\n\n        Ok(msg)\n    }\n\n    async fn send_to_sessions(\n        &self,\n        ctx: Context,\n        target_session: TargetSession,\n        data: Bytes,\n        priority: Priority,\n    ) -> Result<(), NetworkError> {\n        let msg = TransmitterMessage {\n            recipient: Recipient::Session(target_session),\n            priority,\n            data,\n            ctx,\n        };\n\n        self.transmitter.behaviour.send(msg).await\n    }\n\n    async fn send_to_peers<'a, P: AsRef<[Bytes]> + 'a>(\n        &self,\n        ctx: 
Context,\n        peer_ids: P,\n        data: Bytes,\n        priority: Priority,\n    ) -> Result<(), NetworkError> {\n        let peer_ids = {\n            let byteses = peer_ids.as_ref().iter();\n            let maybe_ids = byteses.map(<PeerId as PeerIdExt>::from_bytes);\n\n            maybe_ids.collect::<Result<Vec<_>, _>>()?\n        };\n\n        let msg = TransmitterMessage {\n            recipient: Recipient::PeerId(peer_ids),\n            priority,\n            data,\n            ctx,\n        };\n\n        self.transmitter.behaviour.send(msg).await\n    }\n}\n\n#[async_trait]\nimpl Gossip for NetworkGossip {\n    async fn broadcast<M>(\n        &self,\n        mut cx: Context,\n        endpoint: &str,\n        msg: M,\n        priority: Priority,\n    ) -> ProtocolResult<()>\n    where\n        M: MessageCodec,\n    {\n        let msg = self.package_message(cx.clone(), endpoint, msg).await?;\n        let ctx = cx.set_url(endpoint.to_owned());\n        self.send_to_sessions(ctx, TargetSession::All, msg, priority)\n            .await?;\n        common_apm::metrics::network::on_network_message_sent_all_target(endpoint);\n        Ok(())\n    }\n\n    async fn multicast<'a, M, P>(\n        &self,\n        mut cx: Context,\n        endpoint: &str,\n        peer_ids: P,\n        msg: M,\n        priority: Priority,\n    ) -> ProtocolResult<()>\n    where\n        M: MessageCodec,\n        P: AsRef<[Bytes]> + Send + 'a,\n    {\n        let msg = self.package_message(cx.clone(), endpoint, msg).await?;\n        let multicast_count = peer_ids.as_ref().len();\n\n        let ctx = cx.set_url(endpoint.to_owned());\n        self.send_to_peers(ctx, peer_ids, msg, priority).await?;\n        common_apm::metrics::network::on_network_message_sent_multi_target(\n            endpoint,\n            multicast_count as i64,\n        );\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "core/network/src/outbound/mod.rs",
    "content": "mod gossip;\nmod rpc;\npub use gossip::NetworkGossip;\npub use rpc::NetworkRpc;\n"
  },
  {
    "path": "core/network/src/outbound/rpc.rs",
    "content": "use std::time::Instant;\n\nuse async_trait::async_trait;\nuse futures::future::{self, Either};\nuse futures_timer::Delay;\nuse protocol::traits::{Context, MessageCodec, Priority, Rpc};\nuse protocol::{Bytes, ProtocolResult};\nuse tentacle::service::TargetSession;\nuse tentacle::SessionId;\n\nuse crate::config::TimeoutConfig;\nuse crate::endpoint::Endpoint;\nuse crate::error::{ErrorKind, NetworkError};\nuse crate::message::{Headers, NetworkMessage};\nuse crate::protocols::{Recipient, Transmitter, TransmitterMessage};\nuse crate::rpc::{RpcErrorMessage, RpcResponse, RpcResponseCode};\nuse crate::traits::{Compression, NetworkContext};\n\n#[derive(Clone)]\npub struct NetworkRpc {\n    transmitter: Transmitter,\n    timeout:     TimeoutConfig,\n}\n\nimpl NetworkRpc {\n    pub fn new(transmitter: Transmitter, timeout: TimeoutConfig) -> Self {\n        NetworkRpc {\n            transmitter,\n            timeout,\n        }\n    }\n\n    async fn send(\n        &self,\n        ctx: Context,\n        session_id: SessionId,\n        data: Bytes,\n        priority: Priority,\n    ) -> Result<(), NetworkError> {\n        let compressed_data = self.transmitter.compressor().compress(data)?;\n\n        let msg = TransmitterMessage {\n            recipient: Recipient::Session(TargetSession::Single(session_id)),\n            priority,\n            data: compressed_data,\n            ctx,\n        };\n\n        self.transmitter.behaviour.send(msg).await\n    }\n}\n\n#[async_trait]\nimpl Rpc for NetworkRpc {\n    async fn call<M, R>(\n        &self,\n        mut cx: Context,\n        endpoint: &str,\n        mut msg: M,\n        priority: Priority,\n    ) -> ProtocolResult<R>\n    where\n        M: MessageCodec,\n        R: MessageCodec,\n    {\n        let endpoint = endpoint.parse::<Endpoint>()?;\n        let sid = cx.session_id()?;\n        let rpc_map = &self.transmitter.router.rpc_map;\n        let rid = rpc_map.next_rpc_id();\n        let connected_addr = 
cx.remote_connected_addr();\n        let done_rx = rpc_map.insert::<RpcResponse>(sid, rid);\n        let inst = Instant::now();\n\n        struct _Guard {\n            transmitter: Transmitter,\n            sid:         SessionId,\n            rid:         u64,\n        }\n\n        impl Drop for _Guard {\n            fn drop(&mut self) {\n                // Simply take the pending response slot, then drop it if there is one\n                let rpc_map = &self.transmitter.router.rpc_map;\n                let _ = rpc_map.take::<RpcResponse>(self.sid, self.rid);\n            }\n        }\n\n        let _guard = _Guard {\n            transmitter: self.transmitter.clone(),\n            sid,\n            rid,\n        };\n\n        let data = msg.encode()?;\n        let endpoint = endpoint.extend(&rid.to_string())?;\n        let mut headers = Headers::default();\n        if let Some(state) = common_apm::muta_apm::MutaTracer::span_state(&cx) {\n            headers.set_trace_id(state.trace_id());\n            headers.set_span_id(state.span_id());\n        } else {\n            log::info!(\"no trace id found for rpc {}\", endpoint.full_url());\n        }\n        common_apm::metrics::network::on_network_message_sent(endpoint.full_url());\n\n        let ctx = cx.set_url(endpoint.root());\n        let net_msg = NetworkMessage::new(endpoint, data, headers).encode()?;\n        self.send(ctx, sid, net_msg, priority).await?;\n\n        let timeout = Delay::new(self.timeout.rpc);\n        let ret = match future::select(done_rx, timeout).await {\n            Either::Left((ret, _timeout)) => {\n                ret.map_err(|_| NetworkError::from(ErrorKind::RpcDropped(connected_addr)))?\n            }\n            Either::Right((_unresolved, _timeout)) => {\n                common_apm::metrics::network::NETWORK_RPC_RESULT_COUNT_VEC_STATIC\n                    .timeout\n                    .inc();\n\n                return Err(NetworkError::from(ErrorKind::RpcTimeout(connected_addr)).into());\n            }\n        };\n\n        
match ret {\n            RpcResponse::Success(v) => {\n                common_apm::metrics::network::NETWORK_RPC_RESULT_COUNT_VEC_STATIC\n                    .success\n                    .inc();\n                common_apm::metrics::network::NETWORK_PROTOCOL_TIME_HISTOGRAM_VEC_STATIC\n                    .rpc\n                    .observe(common_apm::metrics::duration_to_sec(inst.elapsed()));\n\n                Ok(R::decode(v)?)\n            }\n            RpcResponse::Error(e) => Err(NetworkError::RemoteResponse(Box::new(e)).into()),\n        }\n    }\n\n    async fn response<M>(\n        &self,\n        mut cx: Context,\n        endpoint: &str,\n        ret: ProtocolResult<M>,\n        priority: Priority,\n    ) -> ProtocolResult<()>\n    where\n        M: MessageCodec,\n    {\n        let endpoint = endpoint.parse::<Endpoint>()?;\n        let sid = cx.session_id()?;\n        let rid = cx.rpc_id()?;\n\n        let mut resp = match ret.map_err(|e| e.to_string()) {\n            Ok(mut m) => RpcResponse::Success(m.encode()?),\n            Err(err_msg) => RpcResponse::Error(RpcErrorMessage {\n                code: RpcResponseCode::ServerError,\n                msg:  err_msg,\n            }),\n        };\n\n        let encoded_resp = resp.encode()?;\n        let endpoint = endpoint.extend(&rid.to_string())?;\n        let mut headers = Headers::default();\n        if let Some(state) = common_apm::muta_apm::MutaTracer::span_state(&cx) {\n            headers.set_trace_id(state.trace_id());\n            headers.set_span_id(state.span_id());\n        } else {\n            log::info!(\"no trace id found for rpc {}\", endpoint.full_url());\n        }\n        common_apm::metrics::network::on_network_message_sent(endpoint.full_url());\n\n        let ctx = cx.set_url(endpoint.root());\n        let net_msg = NetworkMessage::new(endpoint, encoded_resp, headers).encode()?;\n        self.send(ctx, sid, net_msg, priority).await?;\n\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "core/network/src/peer_manager/addr_set.rs",
    "content": "use super::{PeerMultiaddr, MAX_RETRY_COUNT};\n\nuse std::{\n    borrow::{Borrow, Cow},\n    collections::HashSet,\n    hash::{Hash, Hasher},\n    ops::Deref,\n    sync::atomic::{AtomicUsize, Ordering},\n};\n\nuse parking_lot::RwLock;\nuse tentacle::{multiaddr::Multiaddr, secio::PeerId};\n\nuse crate::traits::MultiaddrExt;\n\nconst MAX_ADDR_FAILURE: u8 = MAX_RETRY_COUNT;\n\n#[derive(Debug)]\nstruct AddrInfo {\n    addr:    PeerMultiaddr,\n    failure: AtomicUsize,\n}\n\nimpl AddrInfo {\n    pub fn owned_addr(&self) -> PeerMultiaddr {\n        self.addr.to_owned()\n    }\n\n    pub fn owned_raw_addr(&self) -> Multiaddr {\n        (*self.addr).to_owned()\n    }\n\n    #[cfg(test)]\n    pub fn failure(&self) -> usize {\n        self.failure.load(Ordering::SeqCst)\n    }\n\n    pub fn inc_failure(&self) {\n        self.failure.fetch_add(1, Ordering::SeqCst);\n    }\n\n    pub fn give_up(&self) {\n        self.failure\n            .store(MAX_ADDR_FAILURE as usize + 1, Ordering::SeqCst);\n    }\n\n    pub fn reset_failure(&self) {\n        self.failure.store(0, Ordering::SeqCst);\n    }\n\n    pub fn connectable(&self) -> bool {\n        self.failure.load(Ordering::SeqCst) <= MAX_ADDR_FAILURE as usize\n    }\n}\n\nimpl Deref for AddrInfo {\n    type Target = PeerMultiaddr;\n\n    fn deref(&self) -> &Self::Target {\n        &self.addr\n    }\n}\n\nimpl From<PeerMultiaddr> for AddrInfo {\n    fn from(pma: PeerMultiaddr) -> AddrInfo {\n        AddrInfo {\n            addr:    pma,\n            failure: AtomicUsize::new(0),\n        }\n    }\n}\n\nimpl Borrow<PeerMultiaddr> for AddrInfo {\n    fn borrow(&self) -> &PeerMultiaddr {\n        &self.addr\n    }\n}\n\nimpl PartialEq for AddrInfo {\n    fn eq(&self, other: &Self) -> bool {\n        self.addr == other.addr\n    }\n}\n\nimpl Eq for AddrInfo {}\n\nimpl Hash for AddrInfo {\n    fn hash<H: Hasher>(&self, state: &mut H) {\n        self.addr.hash(state)\n    }\n}\n\n#[derive(Debug)]\npub struct PeerAddrSet 
{\n    peer_id: PeerId,\n    inner:   RwLock<HashSet<AddrInfo>>,\n}\n\nimpl PeerAddrSet {\n    pub fn new(peer_id: PeerId) -> Self {\n        PeerAddrSet {\n            peer_id,\n            inner: Default::default(),\n        }\n    }\n\n    pub fn insert(&self, multiaddrs: Vec<PeerMultiaddr>) {\n        let multiaddrs = {\n            let set = self.inner.read();\n\n            // Filter out multiaddrs that already exist; we don't reset their failure count.\n            multiaddrs\n                .into_iter()\n                .filter(|pma| self.match_peer_id(&pma) && !set.contains(pma))\n                .map(Into::into)\n                .collect::<HashSet<_>>()\n        };\n\n        self.inner.write().extend(multiaddrs);\n    }\n\n    pub fn set(&self, multiaddrs: Vec<PeerMultiaddr>) {\n        let multiaddrs = multiaddrs\n            .into_iter()\n            .filter(|pma| self.match_peer_id(&pma))\n            .map(Into::into)\n            .collect::<HashSet<_>>();\n\n        *self.inner.write() = multiaddrs;\n    }\n\n    pub(crate) fn insert_raw(&self, multiaddr: Multiaddr) {\n        if let Some(id_bytes) = multiaddr.id_bytes() {\n            if id_bytes != self.peer_id.as_bytes() {\n                return;\n            }\n        }\n\n        self.insert(vec![PeerMultiaddr::new(multiaddr, &self.peer_id)]);\n    }\n\n    pub fn remove(&self, multiaddr: &PeerMultiaddr) {\n        self.inner.write().remove(multiaddr);\n    }\n\n    pub fn contains(&self, multiaddr: &PeerMultiaddr) -> bool {\n        self.inner.read().contains(multiaddr)\n    }\n\n    pub fn all(&self) -> Vec<PeerMultiaddr> {\n        self.inner.read().iter().map(AddrInfo::owned_addr).collect()\n    }\n\n    pub fn all_raw(&self) -> Vec<Multiaddr> {\n        self.inner\n            .read()\n            .iter()\n            .map(AddrInfo::owned_raw_addr)\n            .collect()\n    }\n\n    pub fn connectable(&self) -> Vec<PeerMultiaddr> {\n        let to_pma = |a: &'_ AddrInfo| -> Option<PeerMultiaddr> {\n       
     if a.connectable() {\n                Some(a.owned_addr())\n            } else {\n                None\n            }\n        };\n\n        self.inner.read().iter().filter_map(to_pma).collect()\n    }\n\n    pub fn len(&self) -> usize {\n        self.inner.read().len()\n    }\n\n    pub fn connectable_len(&self) -> usize {\n        self.inner.read().iter().filter(|a| a.connectable()).count()\n    }\n\n    #[cfg(test)]\n    pub fn failure(&self, pma: &PeerMultiaddr) -> Option<usize> {\n        self.inner.read().get(pma).map(|a| a.failure())\n    }\n\n    pub fn inc_failure(&self, pma: &PeerMultiaddr) {\n        if let Some(info) = self.inner.read().get(pma) {\n            info.inc_failure();\n        }\n    }\n\n    pub fn give_up(&self, pma: &PeerMultiaddr) {\n        if let Some(info) = self.inner.read().get(pma) {\n            info.give_up();\n        }\n    }\n\n    pub fn reset_failure(&self, pma: &PeerMultiaddr) {\n        if let Some(info) = self.inner.read().get(pma) {\n            info.reset_failure();\n        }\n    }\n\n    fn match_peer_id(&self, pma: &PeerMultiaddr) -> bool {\n        pma.has_id() && pma.id_bytes() == Some(Cow::Borrowed(self.peer_id.as_bytes()))\n    }\n}\n"
  },
  {
    "path": "core/network/src/peer_manager/diagnostic.rs",
    "content": "use super::{Inner, WORSE_TRUST_SCALAR_RATIO};\nuse crate::event::PeerManagerEvent;\n\nuse derive_more::Display;\nuse protocol::traits::TrustFeedback;\nuse tentacle::{secio::PeerId, SessionId};\n\nuse std::sync::Arc;\n\n#[derive(Debug, Display)]\n#[display(fmt = \"not found\")]\npub struct NotFound {}\nimpl std::error::Error for NotFound {}\n\n#[derive(Debug, Display, Clone)]\npub enum DiagnosticEvent {\n    #[display(fmt = \"new session\")]\n    NewSession,\n\n    #[display(fmt = \"session closed\")]\n    SessionClosed,\n\n    #[display(fmt = \"trust metric feedback {}\", feedback)]\n    TrustMetric { feedback: TrustFeedback },\n\n    #[display(fmt = \"trust new interval report {}\", report)]\n    TrustNewInterval { report: TrustReport },\n\n    #[display(fmt = \"remote height {}\", height)]\n    RemoteHeight { height: u64 },\n}\n\nimpl From<&PeerManagerEvent> for Option<DiagnosticEvent> {\n    fn from(event: &PeerManagerEvent) -> Self {\n        use PeerManagerEvent::{NewSession, SessionClosed, TrustMetric};\n\n        match event {\n            NewSession { .. } => Some(DiagnosticEvent::NewSession),\n            SessionClosed { .. } => Some(DiagnosticEvent::SessionClosed),\n            TrustMetric { feedback, .. 
} => Some(DiagnosticEvent::TrustMetric {\n                feedback: feedback.to_owned(),\n            }),\n            _ => None,\n        }\n    }\n}\n\npub type DiagnosticHookFn = Box<dyn Fn(DiagnosticEvent) + Send + 'static>;\n\n#[derive(Debug, Display, Clone, Copy)]\n#[display(\n    fmt = \"score {}, good {}, bad {}, worse scalar ratio {}\",\n    score,\n    good_events,\n    bad_events,\n    worse_scalar_ratio\n)]\npub struct TrustReport {\n    pub score:              u8,\n    pub bad_events:         usize,\n    pub good_events:        usize,\n    pub worse_scalar_ratio: usize,\n}\n\n#[derive(Clone)]\npub struct Diagnostic(Arc<Inner>);\n\nimpl Diagnostic {\n    pub(super) fn new(inner: Arc<Inner>) -> Self {\n        Diagnostic(inner)\n    }\n\n    pub fn session(&self, peer_id: &PeerId) -> Option<SessionId> {\n        match self.0.peer(peer_id).map(|p| p.session_id()) {\n            Some(sid) if sid != SessionId::new(0) => Some(sid),\n            _ => None,\n        }\n    }\n\n    pub fn new_trust_interval(&self, sid: SessionId) -> Result<TrustReport, NotFound> {\n        let session = self.0.session(sid).ok_or_else(|| NotFound {})?;\n        let metric = session.peer.trust_metric().ok_or_else(|| NotFound {})?;\n\n        let score = metric.trust_score();\n        let (good_events, bad_events) = metric.events();\n        let report = TrustReport {\n            score,\n            good_events,\n            bad_events,\n            worse_scalar_ratio: WORSE_TRUST_SCALAR_RATIO,\n        };\n\n        metric.enter_new_interval();\n        Ok(report)\n    }\n}\n"
  },
  {
    "path": "core/network/src/peer_manager/mod.rs",
    "content": "#![allow(clippy::mutable_key_type)]\n\nmod addr_set;\nmod peer;\nmod retry;\nmod save_restore;\nmod session_book;\nmod shared;\nmod tags;\nmod time;\nmod trust_metric;\n\n#[cfg(feature = \"diagnostic\")]\npub mod diagnostic;\n\n#[cfg(test)]\nmod test_manager;\n\nuse std::borrow::Borrow;\nuse std::cmp::PartialEq;\nuse std::collections::HashSet;\nuse std::convert::{TryFrom, TryInto};\nuse std::future::Future;\nuse std::hash::{Hash, Hasher};\nuse std::iter::FromIterator;\nuse std::ops::Deref;\nuse std::path::PathBuf;\nuse std::pin::Pin;\nuse std::sync::Arc;\nuse std::task::{Context, Poll};\nuse std::time::{Duration, Instant};\n\nuse arc_swap::ArcSwap;\nuse derive_more::Display;\nuse futures::channel::mpsc::{UnboundedReceiver, UnboundedSender};\nuse futures::stream::Stream;\nuse futures::task::AtomicWaker;\nuse log::{debug, error, info, warn};\nuse parking_lot::RwLock;\nuse protocol::traits::{PeerTag, TrustFeedback};\nuse rand::seq::IteratorRandom;\nuse serde_derive::{Deserialize, Serialize};\nuse tentacle::multiaddr::Multiaddr;\nuse tentacle::secio::{PeerId, PublicKey};\nuse tentacle::service::SessionType;\nuse tentacle::SessionId;\n\nuse crate::common::{resolve_if_unspecified, HeartBeat};\nuse crate::error::{NetworkError, PeerIdNotFound};\nuse crate::event::{\n    ConnectionErrorKind, ConnectionEvent, ConnectionType, MisbehaviorKind, PeerManagerEvent,\n    SessionErrorKind,\n};\nuse crate::protocols::identify::{self, Identify, WaitIdentification};\nuse crate::protocols::CoreProtocol;\nuse crate::traits::{MultiaddrExt, NetworkProtocol};\n\nuse addr_set::PeerAddrSet;\nuse retry::Retry;\nuse save_restore::{NoPeerDatFile, PeerDatFile, SaveRestore};\nuse session_book::{AcceptableSession, ArcSession, SessionContext};\nuse tags::Tags;\n\npub use peer::{ArcPeer, Connectedness};\npub use session_book::SessionBook;\npub use shared::SharedSessions;\npub use trust_metric::{TrustMetric, TrustMetricConfig};\n\nconst SAME_IP_LIMIT_BAN: Duration = 
Duration::from_secs(5 * 60);\nconst REPEATED_CONNECTION_TIMEOUT: u64 = 30; // seconds\nconst BACKOFF_BASE: u64 = 2;\nconst MAX_RETRY_INTERVAL: u64 = 512; // seconds\nconst MAX_RETRY_COUNT: u8 = 30;\nconst SHORT_ALIVE_SESSION: u64 = 3; // seconds\nconst MAX_CONNECTING_MARGIN: usize = 10;\nconst MAX_RANDOM_NEXT_RETRY: u64 = 10;\nconst MAX_CONNECTING_TIMEOUT: Duration = Duration::from_secs(30);\n\nconst GOOD_TRUST_SCORE: u8 = 80u8;\nconst WORSE_TRUST_SCALAR_RATIO: usize = 10;\n\n#[derive(Debug, Display)]\npub enum NewSessionPreCheckError {\n    #[display(fmt = \"peer banned\")]\n    PeerBanned,\n\n    #[display(fmt = \"allow list peer only\")]\n    AllowListOnly,\n\n    #[display(fmt = \"reach max connection\")]\n    ReachMaxConnection,\n\n    #[display(fmt = \"peer already connected, only allow one connection per peer\")]\n    PeerAlreadyConnected,\n\n    #[display(fmt = \"{}\", _0)]\n    ReachSessionLimit(session_book::Error),\n}\n\n#[derive(Debug, Clone, Display, Serialize, Deserialize)]\n#[display(fmt = \"{}\", _0)]\npub struct PeerMultiaddr(Multiaddr);\n\nimpl PeerMultiaddr {\n    pub fn new(mut ma: Multiaddr, peer_id: &PeerId) -> Self {\n        if !ma.has_id() {\n            ma.push_id(peer_id.to_owned());\n        }\n\n        PeerMultiaddr(ma)\n    }\n\n    pub fn peer_id(&self) -> PeerId {\n        Self::extract_id(&self.0).expect(\"impossible, should be verified already\")\n    }\n\n    fn extract_id(ma: &Multiaddr) -> Option<PeerId> {\n        if let Some(Ok(peer_id)) = ma\n            .id_bytes()\n            .map(|bytes| PeerId::from_bytes(bytes.to_vec()))\n        {\n            Some(peer_id)\n        } else {\n            None\n        }\n    }\n}\n\nimpl Borrow<Multiaddr> for PeerMultiaddr {\n    fn borrow(&self) -> &Multiaddr {\n        &self.0\n    }\n}\n\nimpl PartialEq for PeerMultiaddr {\n    fn eq(&self, other: &PeerMultiaddr) -> bool {\n        self.0 == other.0\n    }\n}\n\nimpl Eq for PeerMultiaddr {}\n\nimpl Hash for PeerMultiaddr {\n    fn 
hash<H: Hasher>(&self, state: &mut H) {\n        self.0.hash(state)\n    }\n}\n\nimpl Deref for PeerMultiaddr {\n    type Target = Multiaddr;\n\n    fn deref(&self) -> &Self::Target {\n        &self.0\n    }\n}\n\nimpl TryFrom<Multiaddr> for PeerMultiaddr {\n    type Error = PeerIdNotFound;\n\n    fn try_from(ma: Multiaddr) -> Result<PeerMultiaddr, Self::Error> {\n        if Self::extract_id(&ma).is_some() {\n            Ok(PeerMultiaddr(ma))\n        } else {\n            Err(PeerIdNotFound(ma))\n        }\n    }\n}\n\nimpl Into<Multiaddr> for PeerMultiaddr {\n    fn into(self) -> Multiaddr {\n        self.0\n    }\n}\n\n#[derive(Debug)]\nstruct ConnectingAttempt {\n    peer:       ArcPeer,\n    multiaddrs: HashSet<PeerMultiaddr>,\n    at:         Instant,\n}\n\nimpl ConnectingAttempt {\n    fn new(peer: ArcPeer) -> Self {\n        let multiaddrs = HashSet::from_iter(peer.multiaddrs.connectable());\n        let at = Instant::now();\n\n        ConnectingAttempt {\n            peer,\n            multiaddrs,\n            at,\n        }\n    }\n\n    fn multiaddrs(&self) -> usize {\n        self.multiaddrs.len()\n    }\n\n    fn complete_one_multiaddr(&mut self, multiaddr: &PeerMultiaddr) {\n        self.multiaddrs.remove(multiaddr);\n    }\n\n    fn is_timeout(&self) -> bool {\n        self.at.elapsed() >= MAX_CONNECTING_TIMEOUT\n    }\n\n    #[cfg(test)]\n    fn set_at(&mut self, duration: Duration) {\n        self.at = self.at.checked_sub(duration).unwrap();\n    }\n}\n\nimpl Borrow<PeerId> for ConnectingAttempt {\n    fn borrow(&self) -> &PeerId {\n        &self.peer.id\n    }\n}\n\nimpl PartialEq for ConnectingAttempt {\n    fn eq(&self, other: &ConnectingAttempt) -> bool {\n        self.peer.id == other.peer.id\n    }\n}\n\nimpl Eq for ConnectingAttempt {}\n\nimpl Hash for ConnectingAttempt {\n    fn hash<H: Hasher>(&self, state: &mut H) {\n        self.peer.id.hash(state)\n    }\n}\n\nstruct Inner {\n    our_id:   Arc<PeerId>,\n    chain_id: 
ArcSwap<protocol::types::Hash>,\n\n    sessions:  SessionBook,\n    consensus: RwLock<HashSet<PeerId>>,\n    peers:     RwLock<HashSet<ArcPeer>>,\n\n    listen: RwLock<HashSet<PeerMultiaddr>>,\n}\n\nimpl Inner {\n    pub fn new(our_id: PeerId, sessions: SessionBook) -> Self {\n        Inner {\n            our_id: Arc::new(our_id),\n            chain_id: ArcSwap::new(Arc::new(protocol::types::Hash::from_empty())),\n\n            sessions,\n            consensus: Default::default(),\n            peers: Default::default(),\n\n            listen: Default::default(),\n        }\n    }\n\n    pub fn add_listen(&self, multiaddr: PeerMultiaddr) {\n        self.listen.write().insert(multiaddr);\n    }\n\n    pub fn listen(&self) -> HashSet<PeerMultiaddr> {\n        self.listen.read().clone()\n    }\n\n    pub fn remove_listen(&self, multiaddr: &PeerMultiaddr) {\n        self.listen.write().remove(multiaddr);\n    }\n\n    pub fn set_chain_id(&self, chain_id: protocol::types::Hash) {\n        self.chain_id.store(Arc::new(chain_id));\n    }\n\n    pub fn chain_id(&self) -> Arc<protocol::types::Hash> {\n        self.chain_id.load_full()\n    }\n\n    pub fn connected(&self) -> usize {\n        self.sessions.len()\n    }\n\n    /// If peer exists, return false\n    pub fn add_peer(&self, peer: ArcPeer) -> bool {\n        common_apm::metrics::network::NETWORK_SAVED_PEER_COUNT.inc();\n        self.peers.write().insert(peer)\n    }\n\n    pub fn peer_count(&self) -> usize {\n        self.peers.read().len()\n    }\n\n    pub fn peer(&self, peer_id: &PeerId) -> Option<ArcPeer> {\n        self.peers.read().get(peer_id).cloned()\n    }\n\n    pub fn contains(&self, peer_id: &PeerId) -> bool {\n        self.peers.read().contains(peer_id)\n    }\n\n    pub fn connectable_peers<F>(&self, max: usize, addition_filter: F) -> Vec<ArcPeer>\n    where\n        F: Fn(&ArcPeer) -> bool + 'static,\n    {\n        let connectable = |p: &'_ &ArcPeer| -> bool {\n            (p.connectedness() == 
Connectedness::NotConnected\n                || p.connectedness() == Connectedness::CanConnect)\n                && p.retry.ready()\n                && p.multiaddrs.connectable_len() > 0\n                && !p.banned()\n                && addition_filter(p)\n        };\n\n        let mut rng = rand::thread_rng();\n        let book = self.peers.read();\n        let qualified_peers = book.iter().filter(connectable).map(ArcPeer::to_owned);\n\n        qualified_peers.choose_multiple(&mut rng, max)\n    }\n\n    pub fn session(&self, sid: SessionId) -> Option<ArcSession> {\n        self.sessions.get(&sid)\n    }\n\n    pub fn share_sessions(&self) -> Vec<ArcSession> {\n        self.sessions.all()\n    }\n\n    pub fn remove_session(&self, sid: SessionId) -> Option<ArcSession> {\n        self.sessions.remove(&sid)\n    }\n\n    pub fn package_peers(&self) -> Vec<ArcPeer> {\n        self.peers.read().iter().cloned().collect()\n    }\n\n    fn restore(&self, peers: Vec<ArcPeer>) {\n        self.peers.write().extend(peers);\n    }\n\n    fn outbound_count(&self) -> usize {\n        self.sessions.outbound_count()\n    }\n}\n\nstruct UnidentifiedSessionEvent {\n    pubkey: PublicKey,\n    ctx:    Arc<SessionContext>,\n}\n\nstruct UnidentifiedSession {\n    event:        UnidentifiedSessionEvent,\n    ident_fut:    WaitIdentification,\n    connected_at: Instant,\n}\n\nimpl UnidentifiedSession {\n    fn new(event: UnidentifiedSessionEvent, ident_fut: WaitIdentification) -> Self {\n        UnidentifiedSession {\n            event,\n            ident_fut,\n            connected_at: Instant::now(),\n        }\n    }\n\n    fn peer_id(&self) -> PeerId {\n        self.event.pubkey.peer_id()\n    }\n}\n\nimpl Borrow<SessionId> for UnidentifiedSession {\n    fn borrow(&self) -> &SessionId {\n        &self.event.ctx.id\n    }\n}\n\nimpl PartialEq for UnidentifiedSession {\n    fn eq(&self, other: &UnidentifiedSession) -> bool {\n        self.event.ctx.id == other.event.ctx.id\n    
}\n}\n\nimpl Eq for UnidentifiedSession {}\n\nimpl Hash for UnidentifiedSession {\n    fn hash<H: Hasher>(&self, state: &mut H) {\n        self.event.ctx.id.hash(state)\n    }\n}\n\n#[derive(Debug, Clone)]\npub struct PeerManagerConfig {\n    /// Our Peer ID\n    pub our_id: PeerId,\n\n    /// Our public key\n    pub pubkey: PublicKey,\n\n    /// Bootstrap peers\n    pub bootstraps: Vec<ArcPeer>,\n\n    /// Always accept/connect peers in list\n    pub allowlist:      Vec<PeerId>,\n    /// Only accept/connect peers in allowlist\n    pub allowlist_only: bool,\n\n    /// Limit connections from the same IP\n    pub same_ip_conn_limit: usize,\n\n    /// Limit inbound connections\n    pub inbound_conn_limit: usize,\n\n    /// Limit outbound connections\n    pub outbound_conn_limit: usize,\n\n    /// Trust metric config\n    pub peer_trust_config: Arc<TrustMetricConfig>,\n    pub peer_fatal_ban:    Duration,\n    pub peer_soft_ban:     Duration,\n\n    /// Max connections\n    pub max_connections: usize,\n\n    /// Routine job interval\n    pub routine_interval: Duration,\n\n    /// Peer dat file path\n    pub peer_dat_file: PathBuf,\n}\n\n#[derive(Clone)]\npub struct PeerManagerHandle {\n    inner: Arc<Inner>,\n}\n\nimpl PeerManagerHandle {\n    pub fn peer_id(&self, sid: SessionId) -> Option<PeerId> {\n        self.inner.session(sid).map(|s| s.peer.owned_id())\n    }\n\n    pub fn set_chain_id(&self, chain_id: protocol::types::Hash) {\n        self.inner.set_chain_id(chain_id);\n    }\n\n    pub fn chain_id(&self) -> Arc<protocol::types::Hash> {\n        self.inner.chain_id()\n    }\n\n    pub fn contains_session(&self, session_id: SessionId) -> bool {\n        self.inner.session(session_id).is_some()\n    }\n\n    pub fn random_addrs(&self, max: usize, sid: SessionId) -> Vec<Multiaddr> {\n        let mut rng = rand::thread_rng();\n        let book = self.inner.peers.read();\n        let peers = book.iter().choose_multiple(&mut rng, max);\n\n        let is_self_consensus = 
self\n            .inner\n            .peer(&self.inner.our_id)\n            .map(|p| p.tags.contains(&PeerTag::Consensus))\n            .unwrap_or_else(|| false);\n\n        let is_remote_consensus = self\n            .inner\n            .session(sid)\n            .map(|s| s.peer.tags.contains(&PeerTag::Consensus))\n            .unwrap_or_else(|| false);\n\n        let candidates = peers\n            .into_iter()\n            .filter_map(|p| {\n                if !is_remote_consensus && p.tags.contains(&PeerTag::Consensus) {\n                    None\n                } else {\n                    Some(p.multiaddrs.all_raw())\n                }\n            })\n            .flatten();\n\n        if !is_self_consensus {\n            // Should always include ourselves\n            let our_self = self.listen_addrs();\n            our_self.into_iter().chain(candidates).take(max).collect()\n        } else {\n            candidates.take(max).collect()\n        }\n    }\n\n    pub fn listen_addrs(&self) -> Vec<Multiaddr> {\n        let listen = self.inner.listen();\n        debug_assert!(!listen.is_empty(), \"listen should always be set\");\n\n        let sanitize = |pma: PeerMultiaddr| -> Multiaddr {\n            let ma: Multiaddr = pma.into();\n            match resolve_if_unspecified(&ma) {\n                Ok(resolved) => resolved,\n                Err(_) => ma,\n            }\n        };\n\n        listen.into_iter().map(sanitize).collect()\n    }\n\n    pub fn tag(&self, peer_id: &PeerId, tag: PeerTag) -> Result<(), NetworkError> {\n        let consensus_tag = tag == PeerTag::Consensus;\n\n        if let Some(peer) = self.inner.peer(peer_id) {\n            peer.tags.insert(tag)?;\n        } else {\n            let peer = ArcPeer::new(peer_id.to_owned());\n            peer.tags.insert(tag)?;\n            self.inner.add_peer(peer);\n        }\n\n        if consensus_tag {\n            self.inner.consensus.write().insert(peer_id.to_owned());\n        }\n\n        
Ok(())\n    }\n\n    pub fn untag(&self, peer_id: &PeerId, tag: &PeerTag) {\n        if let Some(peer) = self.inner.peer(peer_id) {\n            peer.tags.remove(tag);\n        }\n\n        if tag == &PeerTag::Consensus {\n            self.inner.consensus.write().remove(peer_id);\n        }\n    }\n\n    pub fn tag_consensus(&self, peer_ids: Vec<PeerId>) {\n        common_apm::metrics::network::NETWORK_TAGGED_CONSENSUS_PEERS.set(peer_ids.len() as i64);\n\n        {\n            for peer_id in self.inner.consensus.read().iter() {\n                if let Some(peer) = self.inner.peer(peer_id) {\n                    peer.tags.remove(&PeerTag::Consensus)\n                }\n            }\n        }\n\n        for peer_id in peer_ids.iter() {\n            let _ = self.tag(peer_id, PeerTag::Consensus);\n        }\n\n        {\n            let id_set = HashSet::from_iter(peer_ids);\n            *self.inner.consensus.write() = id_set;\n        }\n    }\n}\n\npub struct PeerManager {\n    // core peer pool\n    inner:      Arc<Inner>,\n    config:     PeerManagerConfig,\n    peer_id:    PeerId,\n    bootstraps: HashSet<ArcPeer>,\n\n    // peers currently connecting\n    connecting: HashSet<ConnectingAttempt>,\n\n    // unidentified session backlog\n    unidentified_backlog: HashSet<UnidentifiedSession>,\n\n    event_rx: UnboundedReceiver<PeerManagerEvent>,\n    conn_tx:  UnboundedSender<ConnectionEvent>,\n\n    // heart beat, for current connections check, etc\n    heart_beat: Option<HeartBeat>,\n    hb_waker:   Arc<AtomicWaker>,\n\n    // save restore\n    peer_dat_file: Box<dyn SaveRestore>,\n\n    // diagnostic event hook\n    #[cfg(feature = \"diagnostic\")]\n    diagnostic_hook: Option<diagnostic::DiagnosticHookFn>,\n}\n\nimpl PeerManager {\n    pub fn new(\n        config: PeerManagerConfig,\n        event_rx: UnboundedReceiver<PeerManagerEvent>,\n        conn_tx: UnboundedSender<ConnectionEvent>,\n    ) -> Self {\n        let peer_id = config.our_id.clone();\n        
let session_config = session_book::Config::from(&config);\n        let session_book = SessionBook::new(session_config);\n\n        let inner = Arc::new(Inner::new(config.our_id.clone(), session_book));\n        let bootstraps = HashSet::from_iter(config.bootstraps.clone());\n        let waker = Arc::new(AtomicWaker::new());\n        let heart_beat = HeartBeat::new(Arc::clone(&waker), config.routine_interval);\n        let peer_dat_file = Box::new(NoPeerDatFile);\n\n        for peer_id in config.allowlist.iter().cloned() {\n            assert_eq!(inner.peer_count(), 0, \"should be empty before bootstrapped\");\n\n            let peer = ArcPeer::new(peer_id);\n            let _ = peer.tags.insert(PeerTag::AlwaysAllow);\n\n            inner.add_peer(peer);\n        }\n\n        PeerManager {\n            inner,\n            config,\n            peer_id,\n            bootstraps,\n\n            connecting: Default::default(),\n            unidentified_backlog: Default::default(),\n\n            event_rx,\n            conn_tx,\n\n            heart_beat: Some(heart_beat),\n            hb_waker: waker,\n\n            peer_dat_file,\n\n            #[cfg(feature = \"diagnostic\")]\n            diagnostic_hook: None,\n        }\n    }\n\n    pub fn handle(&self) -> PeerManagerHandle {\n        PeerManagerHandle {\n            inner: Arc::clone(&self.inner),\n        }\n    }\n\n    pub fn share_session_book(&self, config: shared::Config) -> SharedSessions {\n        SharedSessions::new(Arc::clone(&self.inner), config)\n    }\n\n    #[cfg(feature = \"diagnostic\")]\n    pub fn register_diagnostic_hook(&mut self, f: diagnostic::DiagnosticHookFn) {\n        self.diagnostic_hook = Some(f);\n    }\n\n    #[cfg(feature = \"diagnostic\")]\n    pub fn diagnostic(&self) -> diagnostic::Diagnostic {\n        diagnostic::Diagnostic::new(Arc::clone(&self.inner))\n    }\n\n    pub fn enable_save_restore(&mut self) {\n        let peer_dat_file = 
PeerDatFile::new(&self.config.peer_dat_file);\n\n        self.peer_dat_file = Box::new(peer_dat_file);\n    }\n\n    pub fn restore_peers(&self) -> Result<(), NetworkError> {\n        let peers = self.peer_dat_file.restore()?;\n        self.inner.restore(peers);\n        Ok(())\n    }\n\n    pub fn bootstrap(&mut self) {\n        // Insert bootstrap peers\n        for peer in self.bootstraps.iter() {\n            info!(\"network: {:?}: bootstrap peer: {}\", self.peer_id, peer);\n\n            if let Some(peer_exist) = self.inner.peer(&peer.id) {\n                info!(\"restored peer {:?} found, insert multiaddr only\", peer.id);\n                peer_exist.multiaddrs.insert(peer.multiaddrs.all());\n            } else {\n                self.inner.add_peer(peer.clone());\n            }\n        }\n\n        self.connect_peers(self.bootstraps.iter().cloned().collect());\n    }\n\n    pub fn disconnect_session(&self, sid: SessionId) {\n        let disconnect_peer = ConnectionEvent::Disconnect(sid);\n        if self.conn_tx.unbounded_send(disconnect_peer).is_err() {\n            error!(\"network: connection service exit\");\n        }\n    }\n\n    #[cfg(test)]\n    fn inner(&self) -> Arc<Inner> {\n        Arc::clone(&self.inner)\n    }\n\n    #[cfg(test)]\n    fn config(&self) -> PeerManagerConfig {\n        self.config.clone()\n    }\n\n    #[cfg(test)]\n    fn set_connecting(&mut self, peers: Vec<ArcPeer>) {\n        for peer in peers.into_iter() {\n            self.connecting.insert(ConnectingAttempt::new(peer));\n        }\n    }\n\n    fn new_session_pre_check(\n        &mut self,\n        pubkey: &PublicKey,\n        ctx: &Arc<SessionContext>,\n    ) -> Result<ArcSession, NewSessionPreCheckError> {\n        let remote_peer_id = pubkey.peer_id();\n        let remote_multiaddr = PeerMultiaddr::new(ctx.address.to_owned(), &remote_peer_id);\n\n        // Remove from connecting if we dial this peer or create new one\n        
self.connecting.remove(&remote_peer_id);\n        let opt_peer = self.inner.peer(&remote_peer_id);\n        let remote_peer = opt_peer.unwrap_or_else(|| ArcPeer::new(remote_peer_id.clone()));\n\n        // An inbound address is the client's ephemeral address, useless for dialing\n        match ctx.ty {\n            SessionType::Inbound => remote_peer.multiaddrs.remove(&remote_multiaddr),\n            SessionType::Outbound => {\n                if remote_peer.multiaddrs.contains(&remote_multiaddr) {\n                    remote_peer.multiaddrs.reset_failure(&remote_multiaddr);\n                } else {\n                    remote_peer.multiaddrs.insert(vec![remote_multiaddr]);\n                }\n            }\n        }\n\n        if remote_peer.banned() {\n            info!(\"banned peer {:?} incoming\", remote_peer_id);\n            remote_peer.mark_disconnected();\n            self.disconnect_session(ctx.id);\n            return Err(NewSessionPreCheckError::PeerBanned);\n        }\n\n        if self.config.allowlist_only\n            && !remote_peer.tags.contains(&PeerTag::AlwaysAllow)\n            && !remote_peer.tags.contains(&PeerTag::Consensus)\n        {\n            debug!(\"allowlist_only enabled, reject peer {:?}\", remote_peer.id);\n            remote_peer.mark_disconnected();\n            self.disconnect_session(ctx.id);\n            return Err(NewSessionPreCheckError::AllowListOnly);\n        }\n\n        if self.inner.connected() >= self.config.max_connections {\n            let found_replacement = || -> bool {\n                let incoming_trust_score = match remote_peer.trust_metric() {\n                    Some(trust_metric) => trust_metric.trust_score(),\n                    None => return false,\n                };\n\n                for session in self.inner.share_sessions() {\n                    let session_trust_score = match session.peer.trust_metric() {\n                        Some(trust_metric) => trust_metric.trust_score(),\n                        None => {\n   
                          // Impossible\n                            error!(\"session peer {:?} trust metric not found\", session.peer.id);\n                            return false;\n                        }\n                    };\n\n                    // Ensure that the session being replaced has lived through\n                    // enough trust intervals\n                    if incoming_trust_score > session_trust_score\n                        && !session.peer.tags.contains(&PeerTag::AlwaysAllow)\n                        && !session.peer.tags.contains(&PeerTag::Consensus)\n                        && session.peer.alive()\n                            > self.config.peer_trust_config.interval().as_secs() * 20\n                    {\n                        info!(\n                            \"session peer {:?} is being replaced by peer {:?}\",\n                            session.peer.id, remote_peer.id\n                        );\n                        self.disconnect_session(session.id);\n                        return true;\n                    }\n                }\n\n                false\n            };\n\n            if !remote_peer.tags.contains(&PeerTag::AlwaysAllow)\n                && !remote_peer.tags.contains(&PeerTag::Consensus)\n                && !found_replacement()\n            {\n                info!(\"reject peer {:?} due to max conn limit\", remote_peer.id);\n\n                remote_peer.mark_disconnected();\n                self.disconnect_session(ctx.id);\n                return Err(NewSessionPreCheckError::ReachMaxConnection);\n            }\n        }\n\n        let connectedness = remote_peer.connectedness();\n        if connectedness == Connectedness::Connected {\n            // This should not happen; repeated connections emit their own event\n            error!(\"got new session event on same peer {:?}\", remote_peer.id);\n\n            let exist_sid = remote_peer.session_id();\n            if exist_sid != ctx.id && 
self.inner.session(exist_sid).is_some() {\n                // We don't support multiple connections, disconnect new one\n                self.disconnect_session(ctx.id);\n                return Err(NewSessionPreCheckError::PeerAlreadyConnected);\n            }\n\n            if self.inner.session(exist_sid).is_none() {\n                // We keep new session, outdated will be updated after we insert\n                // it.\n                error!(\"network: impossible, peer session {} outdated\", exist_sid);\n            }\n        }\n\n        let session = ArcSession::new(remote_peer.clone(), Arc::clone(&ctx));\n        info!(\"check new session from {}\", session.connected_addr);\n\n        // Always allow peer in allowlist and consensus peer\n        if !remote_peer.tags.contains(&PeerTag::AlwaysAllow)\n            && !remote_peer.tags.contains(&PeerTag::Consensus)\n        {\n            if let Err(err) = self.inner.sessions.acceptable(&session) {\n                warn!(\"session {} unacceptable {}\", ctx.id, err);\n\n                // Ban this peer for a while so we won't choose it again\n                // NOTE: Always allowed and consensus peer cannot be banned.\n                if let Err(err) = remote_peer.tags.insert_ban(SAME_IP_LIMIT_BAN) {\n                    warn!(\"ban same ip peer {:?} failed: {}\", remote_peer.id, err);\n                }\n\n                remote_peer.mark_disconnected();\n                self.disconnect_session(ctx.id);\n                return Err(NewSessionPreCheckError::ReachSessionLimit(err));\n            }\n        }\n\n        Ok(session)\n    }\n\n    fn new_unidentified_session(&mut self, pubkey: PublicKey, ctx: Arc<SessionContext>) {\n        let peer_id = pubkey.peer_id();\n        if let Err(err) = self.new_session_pre_check(&pubkey, &ctx) {\n            log::info!(\"reject unidentified session due to {}\", err);\n\n            Identify::wait_failed(&peer_id, err.to_string());\n            return;\n        }\n        
common_apm::metrics::network::NETWORK_UNIDENTIFIED_CONNECTIONS.inc();\n\n        let event = UnidentifiedSessionEvent { pubkey, ctx };\n        let ident_fut = Identify::wait_identified(peer_id);\n        let unidentified_session = UnidentifiedSession::new(event, ident_fut);\n\n        self.unidentified_backlog.insert(unidentified_session);\n    }\n\n    fn new_session(&mut self, pubkey: PublicKey, ctx: Arc<SessionContext>) {\n        let session = match self.new_session_pre_check(&pubkey, &ctx) {\n            Ok(session) => session,\n            Err(err) => {\n                log::info!(\"reject new session due to {}\", err);\n                return;\n            }\n        };\n        info!(\"new session from {}\", session.connected_addr);\n\n        if !session.peer.has_pubkey() {\n            if let Err(e) = session.peer.set_pubkey(pubkey) {\n                error!(\"impossible, set public key failed {}\", e);\n            }\n        }\n\n        // Currently we only save accepted peer.\n        // TODO: save to database\n        if !self.inner.contains(&session.peer.id) {\n            self.inner.add_peer(session.peer.clone());\n        }\n\n        let remote_peer = session.peer.clone();\n        self.inner.sessions.insert(AcceptableSession(session));\n        remote_peer.mark_connected(ctx.id);\n\n        common_apm::metrics::network::NETWORK_CONNECTED_PEERS.inc();\n        if remote_peer.tags.contains(&PeerTag::Consensus) {\n            common_apm::metrics::network::NETWORK_CONNECTED_CONSENSUS_PEERS.inc();\n        }\n\n        match remote_peer.trust_metric() {\n            Some(trust_metric) => trust_metric.start(),\n            None => {\n                let trust_metric = TrustMetric::new(Arc::clone(&self.config.peer_trust_config));\n                trust_metric.start();\n\n                remote_peer.set_trust_metric(trust_metric);\n            }\n        }\n    }\n\n    fn session_closed(&mut self, pid: PeerId, sid: SessionId) {\n        debug!(\"peer 
{:?} session {} closed\", pid, sid);\n\n        // Check unidentified session\n        let opt_unidentified_session = self.unidentified_backlog.take(&sid);\n        if opt_unidentified_session.is_none() {\n            common_apm::metrics::network::NETWORK_CONNECTED_PEERS.dec();\n        }\n\n        if self.connecting.take(&pid).is_some() {\n            log::info!(\"connecting peer {:?} session closed\", pid);\n        }\n\n        // Session may be removed by other event or rejected\n        let opt_session = self.inner.remove_session(sid);\n        if let Some(ref session) = opt_session {\n            common_apm::metrics::network::NETWORK_IP_DISCONNECTED_COUNT_VEC\n                .with_label_values(&[&session.connected_addr.host])\n                .inc();\n\n            log::info!(\"{} session closed\", session.connected_addr);\n        }\n\n        let remote_peer = {\n            match opt_session.map_or_else(|| self.inner.peer(&pid), |s| Some(s.peer.to_owned())) {\n                Some(peer) => peer,\n                None => {\n                    log::info!(\"close unsaved peer session, peer {:?}\", pid);\n                    return;\n                }\n            }\n        };\n\n        remote_peer.mark_disconnected();\n        if remote_peer.tags.contains(&PeerTag::Consensus) && opt_unidentified_session.is_none() {\n            common_apm::metrics::network::NETWORK_CONNECTED_CONSENSUS_PEERS.dec();\n        }\n\n        match remote_peer.trust_metric() {\n            Some(trust_metric) => trust_metric.pause(),\n            None => {\n                warn!(\"session peer {:?} trust metric not found\", remote_peer.id);\n\n                let trust_metric = TrustMetric::new(Arc::clone(&self.config.peer_trust_config));\n                remote_peer.set_trust_metric(trust_metric);\n            }\n        }\n\n        if remote_peer.alive() < SHORT_ALIVE_SESSION {\n            // NOTE: the peer may have abnormally disconnected from others. 
When we try\n            // to reconnect, other peers may treat this as repeated connection,\n            // then disconnect. We have to wait for timeout.\n            warn!(\n                \"increase peer {:?} retry due to repeated short live session\",\n                remote_peer.id\n            );\n\n            while remote_peer.retry.eta() < REPEATED_CONNECTION_TIMEOUT {\n                remote_peer.retry.inc();\n            }\n        } else {\n            // Set up a short ban, so we won't retry this peer immediately\n            if remote_peer.tags.contains(&PeerTag::Consensus)\n                || remote_peer.tags.contains(&PeerTag::AlwaysAllow)\n            {\n                return;\n            }\n\n            let rand_next_retry = {\n                let mut duration = rand::random::<u64>() % MAX_RANDOM_NEXT_RETRY;\n                if duration < 2 {\n                    duration = 2; // At least 2 seconds\n                }\n                Duration::from_secs(duration)\n            };\n\n            if let Err(err) = remote_peer.tags.insert_ban(rand_next_retry) {\n                log::info!(\"random retry for peer {:?} failed: {}\", remote_peer.id, err);\n            }\n        }\n    }\n\n    fn connect_failed(&mut self, addr: Multiaddr, error_kind: ConnectionErrorKind) {\n        use ConnectionErrorKind::{\n            DNSResolver, Io, MultiaddrNotSuppored, PeerIdNotMatch, ProtocolHandle, SecioHandshake,\n            TimeOut,\n        };\n        log::info!(\"connect to {:?} failed: {}\", addr, error_kind);\n\n        let peer_addr: PeerMultiaddr = match addr.clone().try_into() {\n            Ok(pma) => pma,\n            Err(e) => {\n                // All multiaddrs we dial have peer id included\n                error!(\"unconnectable multiaddr {} without peer id {}\", addr, e);\n                return;\n            }\n        };\n\n        let peer_id = peer_addr.peer_id();\n        let peer = match self.inner.peer(&peer_id) {\n            
Some(p) => p,\n            None => {\n                // Impossible\n                error!(\"outbound connecting peer not found {:?}\", peer_id);\n                return;\n            }\n        };\n\n        match error_kind {\n            Io(_) | DNSResolver(_) => peer.multiaddrs.inc_failure(&peer_addr),\n            MultiaddrNotSuppored(_) => {\n                info!(\"give up unsupported multiaddr {}\", addr);\n                peer.multiaddrs.give_up(&peer_addr);\n            }\n            PeerIdNotMatch => {\n                warn!(\"give up multiaddr {} because peer id does not match\", peer_addr);\n                peer.multiaddrs.give_up(&peer_addr);\n            }\n            TimeOut(reason) => {\n                info!(\"connect timeout {}\", reason);\n                peer.multiaddrs.inc_failure(&peer_addr);\n            }\n            SecioHandshake(_) | ProtocolHandle => {\n                warn!(\"give up peer {:?} because {}\", peer.id, error_kind);\n                peer.set_connectedness(Connectedness::Unconnectable);\n            }\n        }\n\n        if let Some(mut attempt) = self.connecting.take(&peer_id) {\n            if attempt.peer.connectedness() == Connectedness::Unconnectable {\n                // We already gave up on this peer\n                return;\n            }\n\n            attempt.complete_one_multiaddr(&peer_addr);\n            // No connecting multiaddrs left for this peer,\n            // which means every multiaddr failed\n            if attempt.multiaddrs() == 0 {\n                log::info!(\"peer {:?} increase retry\", attempt.peer.id);\n\n                attempt.peer.retry.inc();\n                attempt.peer.set_connectedness(Connectedness::CanConnect);\n\n                if attempt.peer.retry.run_out() {\n                    warn!(\"give up peer {:?} due to retry run out\", attempt.peer.id);\n                    attempt.peer.set_connectedness(Connectedness::Unconnectable);\n                }\n\n            // FIXME\n            // if 
let Some(trust_metric) = attempt.peer.trust_metric() {\n            //     trust_metric.bad_events(1);\n            // }\n            } else {\n                // Wait for other connecting multiaddrs result\n                self.connecting.insert(attempt);\n            }\n        }\n    }\n\n    fn session_failed(&self, sid: SessionId, error_kind: SessionErrorKind) {\n        warn!(\"session {} failed {}\", sid, error_kind);\n        use SessionErrorKind::{Io, Protocol, Unexpected};\n\n        let session = match self.inner.remove_session(sid) {\n            Some(s) => s,\n            None => return, /* Session may be removed by other event or rejected\n                             * due to max connections before insert */\n        };\n        // Ensure we disconnect this peer\n        self.disconnect_session(sid);\n        session.peer.mark_disconnected();\n\n        match session.peer.trust_metric() {\n            Some(trust_metric) => trust_metric.bad_events(1),\n            None => {\n                warn!(\"session peer {:?} trust metric not found\", session.peer.id);\n\n                let trust_metric = TrustMetric::new(Arc::clone(&self.config.peer_trust_config));\n                trust_metric.bad_events(1);\n\n                session.peer.set_trust_metric(trust_metric);\n            }\n        }\n\n        match error_kind {\n            Io(_) => {\n                info!(\"peer {:?} session failed, increase retry\", session.peer.id);\n                session.peer.retry.inc();\n            }\n            Protocol { .. 
} | Unexpected(_) => {\n                let pid = &session.peer.id;\n                let remote_addr = &session.connected_addr;\n\n                warn!(\"give up peer {:?} from {} {}\", pid, remote_addr, error_kind);\n                session.peer.set_connectedness(Connectedness::Unconnectable);\n            }\n        }\n    }\n\n    fn update_peer_alive(&self, pid: &PeerId) {\n        if let Some(peer) = self.inner.peer(pid) {\n            let sid = peer.session_id();\n            if sid != 0.into() {\n                if let Some(session) = self.inner.session(sid) {\n                    info!(\"peer {:?} {} alive\", pid, session.connected_addr);\n                }\n            }\n\n            peer.retry.reset(); // Just in case\n            peer.update_alive();\n        }\n    }\n\n    fn peer_misbehave(&self, pid: PeerId, kind: MisbehaviorKind) {\n        warn!(\"peer {:?} misbehave {}\", pid, kind);\n        use MisbehaviorKind::{Discovery, PingTimeout, PingUnexpect};\n\n        let peer = match self.inner.peer(&pid) {\n            Some(p) => p,\n            None => {\n                error!(\"misbehave peer {:?} not found\", pid);\n                return;\n            }\n        };\n\n        match peer.trust_metric() {\n            Some(trust_metric) => trust_metric.bad_events(1),\n            None => {\n                warn!(\"session peer {:?} trust metric not found\", peer.id);\n\n                let trust_metric = TrustMetric::new(Arc::clone(&self.config.peer_trust_config));\n                trust_metric.start();\n                trust_metric.bad_events(1);\n\n                peer.set_trust_metric(trust_metric);\n            }\n        }\n\n        let sid = peer.session_id();\n        if sid == SessionId::new(0) {\n            // Impossible, connected session always bigger than 0\n            error!(\"misbehave peer with session id 0\");\n            return;\n        }\n\n        self.inner.remove_session(sid);\n        peer.mark_disconnected();\n       
 // Ensure we disconnect from this peer\n        self.disconnect_session(sid);\n\n        match kind {\n            PingTimeout => peer.retry.inc(),\n            PingUnexpect | Discovery => {\n                warn!(\"give up peer {:?} because of {}\", peer.id, kind);\n                peer.set_connectedness(Connectedness::Unconnectable)\n            }\n        }\n    }\n\n    fn trust_metric_feedback(&self, pid: PeerId, feedback: TrustFeedback) {\n        use TrustFeedback::{Bad, Fatal, Good, Neutral, Worse};\n\n        let peer = match self.inner.peer(&pid) {\n            Some(p) => p,\n            None => {\n                error!(\"fatal peer {:?} not found\", pid);\n                return;\n            }\n        };\n\n        let peer_trust_metric = match peer.trust_metric() {\n            Some(t) => t,\n            None => {\n                warn!(\"session peer {:?} trust metric not found\", peer.id);\n\n                let trust_metric = TrustMetric::new(Arc::clone(&self.config.peer_trust_config));\n                trust_metric.start();\n\n                peer.set_trust_metric(trust_metric.clone());\n                trust_metric\n            }\n        };\n\n        match &feedback {\n            Fatal(reason) => {\n                warn!(\"peer {:?} trust feedback fatal {}\", pid, reason);\n                if peer.tags.contains(&PeerTag::AlwaysAllow)\n                    || peer.tags.contains(&PeerTag::Consensus)\n                {\n                    return;\n                }\n\n                let fatal_ban = self.config.peer_fatal_ban;\n                info!(\"peer {:?} ban {} seconds\", pid, fatal_ban.as_secs());\n                peer_trust_metric.pause();\n                if let Err(e) = peer.tags.insert_ban(fatal_ban) {\n                    warn!(\"ban peer {}\", e);\n                    debug!(\"impossible, we already make sure peer isn't in allowlist\");\n                }\n\n                if let Some(session) = 
self.inner.remove_session(peer.session_id()) {\n                    self.disconnect_session(session.id);\n                }\n                peer.mark_disconnected();\n            }\n            Bad(_) | Worse(_) => {\n                match &feedback {\n                    Bad(reason) => {\n                        info!(\"peer {:?} trust feedback bad {}\", pid, reason);\n                        peer_trust_metric.bad_events(1);\n                    }\n                    Worse(reason) => {\n                        warn!(\"peer {:?} trust feedback worse {}\", pid, reason);\n                        peer_trust_metric.bad_events(WORSE_TRUST_SCALAR_RATIO);\n                    }\n                    _ => unreachable!(),\n                };\n\n                if peer_trust_metric.knock_out()\n                    && !peer.tags.contains(&PeerTag::AlwaysAllow)\n                    && !peer.tags.contains(&PeerTag::Consensus)\n                {\n                    let soft_ban = self.config.peer_soft_ban.as_secs();\n                    info!(\"peer {:?} knocked out, soft ban {} seconds\", pid, soft_ban);\n\n                    peer_trust_metric.pause();\n                    if let Err(e) = peer.tags.insert_ban(Duration::from_secs(soft_ban)) {\n                        warn!(\"ban peer {}\", e);\n                        debug!(\"impossible, we already make sure peer isn't in allowlist\");\n                    }\n\n                    if let Some(session) = self.inner.remove_session(peer.session_id()) {\n                        self.disconnect_session(session.id);\n                    }\n                    peer.mark_disconnected();\n                }\n            }\n            Neutral => (),\n            Good => peer_trust_metric.good_events(1),\n        }\n    }\n\n    fn session_blocked(&self, ctx: Arc<SessionContext>) {\n        warn!(\n            \"session {} blocked, pending data size {}\",\n            ctx.id,\n            ctx.pending_data_size()\n        );\n\n        
if let Some(session) = self.inner.session(ctx.id) {\n            session.block();\n\n            match session.peer.trust_metric() {\n                Some(trust_metric) => trust_metric.bad_events(1),\n                None => {\n                    warn!(\"session peer {:?} trust metric not found\", session.peer.id);\n\n                    let trust_metric = TrustMetric::new(Arc::clone(&self.config.peer_trust_config));\n                    trust_metric.start();\n                    trust_metric.bad_events(1);\n\n                    session.peer.set_trust_metric(trust_metric);\n                }\n            };\n        }\n    }\n\n    fn connect_peers_now(&mut self, peers: Vec<ArcPeer>) {\n        let peer_addrs = peers.into_iter().map(|peer| {\n            peer.set_connectedness(Connectedness::Connecting);\n\n            let addrs = peer.multiaddrs.all_raw();\n            self.connecting.insert(ConnectingAttempt::new(peer));\n\n            addrs\n        });\n\n        let addrs = peer_addrs.flatten().collect();\n        info!(\"connect addrs {:?}\", addrs);\n\n        let connect_attempt = ConnectionEvent::Connect {\n            addrs,\n            proto: CoreProtocol::target(),\n        };\n\n        if self.conn_tx.unbounded_send(connect_attempt).is_err() {\n            error!(\"network: connection service exit\");\n        }\n    }\n\n    fn connect_peers(&mut self, peers: Vec<ArcPeer>) {\n        let connectable = |p: ArcPeer| -> Option<ArcPeer> {\n            if p.multiaddrs.len() == 0 {\n                log::info!(\"peer {:?} has no multiaddress\", p.id);\n\n                return None;\n            }\n\n            if self.config.allowlist_only\n                && !p.tags.contains(&PeerTag::AlwaysAllow)\n                && !p.tags.contains(&PeerTag::Consensus)\n            {\n                debug!(\"filter peer {:?} not in allowlist\", p.id);\n                return None;\n            }\n\n            let connectedness = p.connectedness();\n            if 
connectedness != Connectedness::CanConnect\n                && connectedness != Connectedness::NotConnected\n            {\n                if connectedness == Connectedness::Unconnectable\n                    && p.tags.contains(&PeerTag::Consensus)\n                {\n                    // For consensus peer, just try again.\n                    Some(p)\n                } else {\n                    log::info!(\"peer {:?} connectedness {}\", p.id, connectedness);\n                    None\n                }\n            } else {\n                Some(p)\n            }\n        };\n\n        let connectable_peers: Vec<_> = peers.into_iter().filter_map(connectable).collect();\n\n        if !connectable_peers.is_empty() {\n            self.connect_peers_now(connectable_peers);\n        }\n    }\n\n    fn connect_peers_by_id(&mut self, pids: Vec<PeerId>) {\n        let peers_to_connect = {\n            let book = self.inner.peers.read();\n            pids.iter()\n                .filter_map(|pid| book.get(pid).cloned())\n                .collect()\n        };\n\n        log::info!(\"connect to peers {:?} found {:?}\", pids, peers_to_connect);\n        self.connect_peers(peers_to_connect);\n    }\n\n    fn discover_multiaddr(&mut self, addr: Multiaddr) {\n        let peer_addr: PeerMultiaddr = match addr.try_into() {\n            Ok(pma) => pma,\n            _ => return, // Ignore multiaddr without peer id\n        };\n\n        // Ignore our self\n        if peer_addr.peer_id() == self.peer_id {\n            return;\n        }\n\n        let peer_id = peer_addr.peer_id();\n        if let Some(peer) = self.inner.peer(&peer_id) {\n            peer.multiaddrs.insert(vec![peer_addr]);\n        } else {\n            let new_peer = ArcPeer::new(peer_addr.peer_id());\n            new_peer.multiaddrs.insert(vec![peer_addr]);\n\n            self.inner.add_peer(new_peer);\n        }\n    }\n\n    fn dicover_multi_multiaddrs(&mut self, addrs: Vec<Multiaddr>) {\n        for addr 
in addrs.into_iter() {\n            self.discover_multiaddr(addr);\n        }\n    }\n\n    fn identified_addrs(&self, pid: &PeerId, addrs: Vec<Multiaddr>) {\n        info!(\"peer {:?} identified multiaddrs {:?}\", pid, addrs);\n\n        if let Some(peer) = self.inner.peer(pid) {\n            // Make sure all addresses include peer id\n            let peer_addrs = addrs\n                .into_iter()\n                .map(|a| PeerMultiaddr::new(a, pid))\n                .collect();\n\n            peer.multiaddrs.insert(peer_addrs);\n        }\n    }\n\n    fn repeated_connection(&mut self, ty: ConnectionType, sid: SessionId, addr: Multiaddr) {\n        info!(\n            \"repeated session {:?}, ty {}, remote addr {:?}\",\n            sid, ty, addr\n        );\n\n        let peer_id = {\n            let opt_unidentified_session = self.unidentified_backlog.get(&sid);\n            let opt_pid = opt_unidentified_session.map_or_else(\n                || self.inner.session(sid).map(|s| s.peer.owned_id()),\n                |unidentified_session| Some(unidentified_session.peer_id()),\n            );\n\n            match opt_pid {\n                Some(pid) => pid,\n                None => {\n                    // Impossible\n                    error!(\"repeated connection but session {} not found\", sid);\n\n                    return;\n                }\n            }\n        };\n\n        if let Some(peer) = self.inner.peer(&peer_id) {\n            let peer_addr = PeerMultiaddr::new(addr, &peer_id);\n\n            match ty {\n                ConnectionType::Inbound => peer.multiaddrs.remove(&peer_addr),\n                ConnectionType::Outbound => peer.multiaddrs.reset_failure(&peer_addr),\n            }\n        }\n    }\n\n    fn process_event(&mut self, event: PeerManagerEvent) {\n        match event {\n            PeerManagerEvent::ConnectPeersNow { pids } => self.connect_peers_by_id(pids),\n            PeerManagerEvent::ConnectFailed { addr, kind } => 
self.connect_failed(addr, kind),\n            PeerManagerEvent::UnidentifiedSession { pubkey, ctx, .. } => {\n                self.new_unidentified_session(pubkey, ctx)\n            }\n            PeerManagerEvent::NewSession { pubkey, ctx, .. } => self.new_session(pubkey, ctx),\n            // NOTE: Alice may disconnect to Bob, but bob didn't know\n            // that, so the next time, Alice try to connect to Bob will\n            // cause repeated connection. The only way to fix this right\n            // now is wait for time out.\n            PeerManagerEvent::RepeatedConnection { ty, sid, addr } => {\n                self.repeated_connection(ty, sid, addr)\n            }\n            PeerManagerEvent::SessionBlocked { ctx, .. } => self.session_blocked(ctx),\n            PeerManagerEvent::SessionClosed { sid, pid } => self.session_closed(pid, sid),\n            PeerManagerEvent::SessionFailed { sid, kind } => self.session_failed(sid, kind),\n            PeerManagerEvent::PeerAlive { pid } => self.update_peer_alive(&pid),\n            PeerManagerEvent::Misbehave { pid, kind } => self.peer_misbehave(pid, kind),\n            PeerManagerEvent::TrustMetric { pid, feedback } => {\n                self.trust_metric_feedback(pid, feedback)\n            }\n            PeerManagerEvent::DiscoverMultiAddrs { addrs } => self.dicover_multi_multiaddrs(addrs),\n            PeerManagerEvent::IdentifiedAddrs { pid, addrs } => self.identified_addrs(&pid, addrs),\n            PeerManagerEvent::AddNewListenAddr { addr } => {\n                let peer_addr = PeerMultiaddr::new(addr, &self.peer_id);\n                self.inner.add_listen(peer_addr);\n            }\n            PeerManagerEvent::RemoveListenAddr { addr } => {\n                self.inner\n                    .remove_listen(&PeerMultiaddr::new(addr, &self.peer_id));\n            }\n        }\n    }\n}\n\n// Save peers during shutdown\nimpl Drop for PeerManager {\n    fn drop(&mut self) {\n        let peers = 
self.inner.package_peers();\n\n        if let Err(err) = self.peer_dat_file.save(peers) {\n            error!(\"network: peer dat file: {}\", err);\n        }\n    }\n}\n\nimpl Future for PeerManager {\n    type Output = ();\n\n    fn poll(mut self: Pin<&mut Self>, ctx: &mut Context<'_>) -> Poll<Self::Output> {\n        self.hb_waker.register(ctx.waker());\n\n        // Spawn heart beat\n        if let Some(heart_beat) = self.heart_beat.take() {\n            tokio::spawn(heart_beat);\n        }\n\n        // Process unidentified sessions\n        let unidentified_sessions = self.unidentified_backlog.drain().collect::<Vec<_>>();\n        for mut session in unidentified_sessions {\n            let peer_id = session.event.pubkey.peer_id();\n            let ident_fut = &mut session.ident_fut;\n            futures::pin_mut!(ident_fut);\n\n            match ident_fut.poll(ctx) {\n                Poll::Pending => {\n                    if session.connected_at.elapsed() >= identify::DEFAULT_TIMEOUT {\n                        warn!(\"reject peer {:?} due to identification timeout\", peer_id);\n\n                        self.disconnect_session(session.event.ctx.id);\n                        if let Some(peer) = self.inner.peer(&peer_id) {\n                            peer.mark_disconnected();\n                        }\n                    } else {\n                        self.unidentified_backlog.insert(session);\n                    }\n                }\n                Poll::Ready(ret) => match ret {\n                    Ok(()) => {\n                        let UnidentifiedSession { event, .. 
} = session;\n                        let new_session_event = PeerManagerEvent::NewSession {\n                            pid:    event.pubkey.peer_id(),\n                            pubkey: event.pubkey,\n                            ctx:    event.ctx,\n                        };\n\n                        // TODO: Remove duplicate diag code\n                        #[cfg(feature = \"diagnostic\")]\n                        let diag_event: Option<\n                            diagnostic::DiagnosticEvent,\n                        > = From::from(&new_session_event);\n\n                        self.process_event(new_session_event);\n\n                        #[cfg(feature = \"diagnostic\")]\n                        if let (Some(hook), Some(event)) =\n                            (self.diagnostic_hook.as_ref(), diag_event)\n                        {\n                            hook(event)\n                        }\n                    }\n                    Err(err) => {\n                        warn!(\n                            \"reject peer {:?} due to identification failure: {}\",\n                            peer_id, err\n                        );\n\n                        self.disconnect_session(session.event.ctx.id);\n                        if let Some(peer) = self.inner.peer(&peer_id) {\n                            peer.mark_disconnected();\n                        }\n                    }\n                },\n            }\n        }\n        common_apm::metrics::network::NETWORK_UNIDENTIFIED_CONNECTIONS\n            .set(self.unidentified_backlog.len() as i64);\n\n        // Process manager events\n        loop {\n            let event_rx = &mut self.as_mut().event_rx;\n            futures::pin_mut!(event_rx);\n\n            // service ready in common\n            let event = crate::service_ready!(\"peer manager\", event_rx.poll_next(ctx));\n            log::debug!(\"network: {:?}: event {}\", self.peer_id, event);\n\n            #[cfg(feature = 
\"diagnostic\")]\n            let diag_event: Option<diagnostic::DiagnosticEvent> = From::from(&event);\n\n            self.process_event(event);\n\n            #[cfg(feature = \"diagnostic\")]\n            if let (Some(hook), Some(event)) = (self.diagnostic_hook.as_ref(), diag_event) {\n                hook(event)\n            }\n        }\n\n        // Check connecting timeout\n        let timeout_reason = format!(\"exceeded {} seconds\", MAX_CONNECTING_TIMEOUT.as_secs());\n        let timed_out_multiaddrs = {\n            let connecting_attempts = self.connecting.iter();\n            let timed_out_attempts = connecting_attempts.filter_map(|attempt| {\n                if !attempt.is_timeout() {\n                    return None;\n                }\n\n                Some(attempt.multiaddrs.iter().cloned().collect::<Vec<_>>())\n            });\n            timed_out_attempts.flatten().collect::<Vec<_>>()\n        };\n        if !timed_out_multiaddrs.is_empty() {\n            log::info!(\"timed out connecting attempts found: {:?}\", timed_out_multiaddrs);\n        }\n        for peer_multiaddr in timed_out_multiaddrs {\n            self.connect_failed(\n                Into::<Multiaddr>::into(peer_multiaddr),\n                ConnectionErrorKind::TimeOut(timeout_reason.clone()),\n            )\n        }\n        common_apm::metrics::network::NETWORK_OUTBOUND_CONNECTING_PEERS\n            .set(self.connecting.len() as i64);\n\n        // Check connecting count\n        let connected_count = self.inner.connected();\n        let outbound_count = self.inner.outbound_count();\n        let connection_attempts = outbound_count + self.connecting.len();\n        let max_connection_attempts = self.config.outbound_conn_limit + MAX_CONNECTING_MARGIN;\n\n        if connected_count < self.config.max_connections\n            && outbound_count < self.config.outbound_conn_limit\n            && connection_attempts < max_connection_attempts\n        {\n            let filter_good_peer = |peer: 
&ArcPeer| -> bool {\n                if let Some(trust_metric) = peer.trust_metric() {\n                    trust_metric.trust_score() > GOOD_TRUST_SCORE\n                } else {\n                    false\n                }\n            };\n            let just_enough = |_: &ArcPeer| -> bool { true };\n\n            let remain_count = max_connection_attempts - connection_attempts;\n            let mut connectable_peers =\n                self.inner.connectable_peers(remain_count, filter_good_peer);\n            if connectable_peers.is_empty() {\n                connectable_peers = self.inner.connectable_peers(remain_count, just_enough);\n            }\n            let candidate_count = connectable_peers.len();\n\n            debug!(\n                \"network: {:?}: connections not fulfilled, {} candidate peers found\",\n                self.peer_id, candidate_count\n            );\n\n            if !connectable_peers.is_empty() {\n                self.connect_peers(connectable_peers);\n\n                common_apm::metrics::network::NETWORK_OUTBOUND_CONNECTING_PEERS\n                    .set(self.connecting.len() as i64);\n            }\n        }\n\n        Poll::Pending\n    }\n}\n"
  },
  {
    "path": "core/network/src/peer_manager/peer.rs",
    "content": "use super::{time, PeerAddrSet, Retry, Tags, TrustMetric, MAX_RETRY_COUNT};\n\nuse std::{\n    borrow::Borrow,\n    fmt,\n    hash::{Hash, Hasher},\n    ops::Deref,\n    sync::{\n        atomic::{AtomicU64, AtomicUsize, Ordering},\n        Arc,\n    },\n    time::{Duration, SystemTime, UNIX_EPOCH},\n};\n\nuse derive_more::Display;\nuse parking_lot::RwLock;\nuse protocol::traits::PeerTag;\nuse tentacle::{\n    secio::{PeerId, PublicKey},\n    SessionId,\n};\n\nuse crate::error::ErrorKind;\n\n#[derive(Debug, Eq, PartialEq, Ord, PartialOrd, Clone, Copy, Display)]\n#[repr(usize)]\npub enum Connectedness {\n    #[display(fmt = \"not connected\")]\n    NotConnected = 0,\n\n    #[display(fmt = \"can connect\")]\n    CanConnect = 1,\n\n    #[display(fmt = \"connected\")]\n    Connected = 2,\n\n    #[display(fmt = \"unconnectable\")]\n    Unconnectable = 3,\n\n    #[display(fmt = \"connecting\")]\n    Connecting = 4,\n}\n\nimpl From<usize> for Connectedness {\n    fn from(src: usize) -> Connectedness {\n        use self::Connectedness::{CanConnect, Connected, Connecting, NotConnected, Unconnectable};\n\n        match src {\n            0 => NotConnected,\n            1 => CanConnect,\n            2 => Connected,\n            3 => Unconnectable,\n            4 => Connecting,\n            _ => NotConnected,\n        }\n    }\n}\n\nimpl From<Connectedness> for usize {\n    fn from(src: Connectedness) -> usize {\n        src as usize\n    }\n}\n\n#[derive(Debug)]\npub struct Peer {\n    pub id:          PeerId,\n    pub multiaddrs:  PeerAddrSet,\n    pub retry:       Retry,\n    pub tags:        Tags,\n    pubkey:          RwLock<Option<PublicKey>>,\n    trust_metric:    RwLock<Option<TrustMetric>>,\n    connectedness:   AtomicUsize,\n    session_id:      AtomicUsize,\n    connected_at:    AtomicU64,\n    disconnected_at: AtomicU64,\n    alive:           AtomicU64,\n}\n\nimpl Peer {\n    pub fn new(peer_id: PeerId) -> Self {\n        Peer {\n            id:       
       peer_id.clone(),\n            multiaddrs:      PeerAddrSet::new(peer_id),\n            retry:           Retry::new(MAX_RETRY_COUNT),\n            tags:            Tags::default(),\n            pubkey:          RwLock::new(None),\n            trust_metric:    RwLock::new(None),\n            connectedness:   AtomicUsize::new(Connectedness::NotConnected as usize),\n            session_id:      AtomicUsize::new(0),\n            connected_at:    AtomicU64::new(0),\n            disconnected_at: AtomicU64::new(0),\n            alive:           AtomicU64::new(0),\n        }\n    }\n\n    pub fn from_pubkey(pubkey: PublicKey) -> Result<Self, ErrorKind> {\n        let peer = Peer::new(pubkey.peer_id());\n        peer.set_pubkey(pubkey)?;\n\n        Ok(peer)\n    }\n\n    pub fn owned_id(&self) -> PeerId {\n        self.id.to_owned()\n    }\n\n    pub fn has_pubkey(&self) -> bool {\n        self.pubkey.read().is_some()\n    }\n\n    pub fn owned_pubkey(&self) -> Option<PublicKey> {\n        self.pubkey.read().clone()\n    }\n\n    pub fn set_pubkey(&self, pubkey: PublicKey) -> Result<(), ErrorKind> {\n        if pubkey.peer_id() != self.id {\n            Err(ErrorKind::PublicKeyNotMatchId {\n                pubkey,\n                id: self.id.clone(),\n            })\n        } else {\n            *self.pubkey.write() = Some(pubkey);\n            Ok(())\n        }\n    }\n\n    pub fn trust_metric(&self) -> Option<TrustMetric> {\n        self.trust_metric.read().clone()\n    }\n\n    pub fn set_trust_metric(&self, metric: TrustMetric) {\n        *self.trust_metric.write() = Some(metric);\n    }\n\n    #[cfg(test)]\n    pub fn remove_trust_metric(&self) {\n        *self.trust_metric.write() = None;\n    }\n\n    pub fn connectedness(&self) -> Connectedness {\n        Connectedness::from(self.connectedness.load(Ordering::SeqCst))\n    }\n\n    pub fn set_connectedness(&self, flag: Connectedness) {\n        self.connectedness\n            .store(usize::from(flag), 
Ordering::SeqCst);\n    }\n\n    pub fn set_session_id(&self, sid: SessionId) {\n        self.session_id.store(sid.value(), Ordering::SeqCst);\n    }\n\n    pub fn session_id(&self) -> SessionId {\n        self.session_id.load(Ordering::SeqCst).into()\n    }\n\n    pub fn connected_at(&self) -> u64 {\n        self.connected_at.load(Ordering::SeqCst)\n    }\n\n    pub(super) fn set_connected_at(&self, at: u64) {\n        self.connected_at.store(at, Ordering::SeqCst);\n    }\n\n    pub fn disconnected_at(&self) -> u64 {\n        self.disconnected_at.load(Ordering::SeqCst)\n    }\n\n    pub(super) fn set_disconnected_at(&self, at: u64) {\n        self.disconnected_at.store(at, Ordering::SeqCst);\n    }\n\n    pub fn alive(&self) -> u64 {\n        self.alive.load(Ordering::SeqCst)\n    }\n\n    pub fn update_alive(&self) {\n        let connected_at =\n            UNIX_EPOCH + Duration::from_secs(self.connected_at.load(Ordering::SeqCst));\n        let alive = time::duration_since(SystemTime::now(), connected_at).as_secs();\n\n        self.alive.store(alive, Ordering::SeqCst);\n    }\n\n    pub(super) fn set_alive(&self, live: u64) {\n        self.alive.store(live, Ordering::SeqCst);\n    }\n\n    pub fn mark_connected(&self, sid: SessionId) {\n        self.set_connectedness(Connectedness::Connected);\n        self.set_session_id(sid);\n        self.retry.reset();\n        self.update_connected();\n    }\n\n    pub fn mark_disconnected(&self) {\n        self.set_connectedness(Connectedness::CanConnect);\n        self.set_session_id(0.into());\n        self.update_disconnected();\n        self.update_alive();\n    }\n\n    pub fn banned(&self) -> bool {\n        if let Some(until) = self.tags.get_banned_until() {\n            if time::now() < until {\n                return true;\n            }\n\n            self.tags.remove(&PeerTag::ban_key());\n            if let Some(trust_metric) = self.trust_metric() {\n                // TODO: Reset just in case, may remove in\n   
             // the future.\n                trust_metric.reset_history();\n            }\n        }\n\n        false\n    }\n\n    fn update_connected(&self) {\n        self.connected_at.store(time::now(), Ordering::SeqCst);\n    }\n\n    fn update_disconnected(&self) {\n        self.disconnected_at.store(time::now(), Ordering::SeqCst);\n    }\n}\n\nimpl fmt::Display for Peer {\n    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {\n        write!(\n            f,\n            \"{:?} multiaddr {:?} tags {:?} last connected at {} alive {} retry {} current {}\",\n            self.id,\n            self.multiaddrs.all(),\n            self.tags,\n            self.connected_at.load(Ordering::SeqCst),\n            self.alive.load(Ordering::SeqCst),\n            self.retry.count(),\n            Connectedness::from(self.connectedness.load(Ordering::SeqCst))\n        )\n    }\n}\n\n#[derive(Debug, Display, Clone)]\n#[display(fmt = \"{}\", _0)]\npub struct ArcPeer(Arc<Peer>);\n\nimpl ArcPeer {\n    pub fn new(peer_id: PeerId) -> Self {\n        ArcPeer(Arc::new(Peer::new(peer_id)))\n    }\n\n    pub fn from_pubkey(pubkey: PublicKey) -> Result<Self, ErrorKind> {\n        Ok(ArcPeer(Arc::new(Peer::from_pubkey(pubkey)?)))\n    }\n}\n\nimpl Deref for ArcPeer {\n    type Target = Peer;\n\n    fn deref(&self) -> &Self::Target {\n        &self.0\n    }\n}\n\nimpl Borrow<PeerId> for ArcPeer {\n    fn borrow(&self) -> &PeerId {\n        &self.id\n    }\n}\n\nimpl PartialEq for ArcPeer {\n    fn eq(&self, other: &ArcPeer) -> bool {\n        self.id == other.id\n    }\n}\n\nimpl Eq for ArcPeer {}\n\nimpl Hash for ArcPeer {\n    fn hash<H: Hasher>(&self, state: &mut H) {\n        self.id.hash(state)\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::{ArcPeer, Connectedness};\n    use crate::peer_manager::{time, TrustMetric, TrustMetricConfig};\n\n    use tentacle::secio::SecioKeyPair;\n\n    use std::sync::Arc;\n\n    #[test]\n    fn 
should_reset_trust_metric_history_after_unban() {\n        let keypair = SecioKeyPair::secp256k1_generated();\n        let pubkey = keypair.public_key();\n        let peer = ArcPeer::from_pubkey(pubkey).expect(\"make peer\");\n        let peer_trust_config = Arc::new(TrustMetricConfig::default());\n\n        let trust_metric = TrustMetric::new(Arc::clone(&peer_trust_config));\n        peer.set_trust_metric(trust_metric.clone());\n        for _ in 0..2 {\n            trust_metric.bad_events(10);\n            trust_metric.enter_new_interval();\n        }\n        assert!(trust_metric.trust_score() < 40, \"should lower score\");\n\n        peer.tags.set_ban_until(time::now() - 20);\n        assert!(!peer.banned(), \"should unban\");\n\n        assert_eq!(\n            trust_metric.intervals(),\n            0,\n            \"should reset peer trust history\"\n        );\n    }\n\n    #[test]\n    fn should_be_able_to_convert_between_connectedness_and_usize() {\n        assert_eq!(usize::from(Connectedness::NotConnected), 0usize);\n        assert_eq!(usize::from(Connectedness::CanConnect), 1usize);\n        assert_eq!(usize::from(Connectedness::Connected), 2usize);\n        assert_eq!(usize::from(Connectedness::Unconnectable), 3usize);\n        assert_eq!(usize::from(Connectedness::Connecting), 4usize);\n\n        assert_eq!(Connectedness::from(0usize), Connectedness::NotConnected);\n        assert_eq!(Connectedness::from(1usize), Connectedness::CanConnect);\n        assert_eq!(Connectedness::from(2usize), Connectedness::Connected);\n        assert_eq!(Connectedness::from(3usize), Connectedness::Unconnectable);\n        assert_eq!(Connectedness::from(4usize), Connectedness::Connecting);\n        assert_eq!(Connectedness::from(5usize), Connectedness::NotConnected);\n    }\n}\n"
  },
  {
    "path": "core/network/src/peer_manager/retry.rs",
    "content": "use super::{time, BACKOFF_BASE, MAX_RETRY_INTERVAL};\n\nuse std::sync::{\n    atomic::{AtomicU64, AtomicU8, Ordering},\n    Arc,\n};\nuse std::time::Duration;\n\n#[derive(Debug, Clone)]\npub struct Retry {\n    max:             u8,\n    count:           Arc<AtomicU8>,\n    next_attempt_at: Arc<AtomicU64>,\n}\n\nimpl Retry {\n    pub fn new(max: u8) -> Self {\n        Retry {\n            max,\n            count: Arc::new(AtomicU8::new(0)),\n            next_attempt_at: Arc::new(AtomicU64::new(0)),\n        }\n    }\n\n    pub fn inc(&self) {\n        let count = self.count.fetch_add(1, Ordering::SeqCst).saturating_add(1);\n\n        let mut secs = BACKOFF_BASE.pow(count as u32);\n        if secs > MAX_RETRY_INTERVAL {\n            secs = MAX_RETRY_INTERVAL;\n        }\n\n        let at = time::now().saturating_add(secs);\n        self.next_attempt_at.store(at, Ordering::SeqCst);\n    }\n\n    pub fn eta(&self) -> u64 {\n        let next_attempt_at = self.next_attempt_at.load(Ordering::SeqCst);\n        next_attempt_at.saturating_sub(time::now())\n    }\n\n    pub fn reset(&self) {\n        self.count.store(0, Ordering::SeqCst);\n    }\n\n    pub fn ready(&self) -> bool {\n        let next_attempt_at = Duration::from_secs(self.next_attempt_at.load(Ordering::SeqCst));\n\n        time::now() > next_attempt_at.as_secs()\n    }\n\n    pub fn count(&self) -> u8 {\n        self.count.load(Ordering::SeqCst)\n    }\n\n    pub fn next_attempt_at(&self) -> u64 {\n        self.next_attempt_at.load(Ordering::SeqCst)\n    }\n\n    pub fn run_out(&self) -> bool {\n        self.count() > self.max\n    }\n\n    // For test and save_restore\n    pub(crate) fn set_next_attempt_at(&self, at: u64) {\n        self.next_attempt_at.store(at, Ordering::SeqCst);\n    }\n\n    // For test and save_restore\n    pub(crate) fn set(&self, n: u8) {\n        self.count.store(n, Ordering::SeqCst);\n    }\n}\n"
  },
  {
    "path": "core/network/src/peer_manager/save_restore.rs",
    "content": "use super::{ArcPeer, Connectedness, PeerMultiaddr};\n\nuse std::{\n    convert::TryFrom,\n    fmt,\n    fs::File,\n    io::{BufReader, Read, Write},\n    path::{Path, PathBuf},\n};\n\nuse serde::{de, ser};\nuse serde_derive::{Deserialize, Serialize};\nuse tentacle::{\n    multiaddr::Multiaddr,\n    secio::{PeerId, PublicKey},\n};\n\nuse crate::error::NetworkError;\n\n// TODO: remove skip tag on retry and next_attempt_at\n// TODO: save multiaddr failure count\n#[derive(Debug, Serialize, Deserialize)]\nstruct SerdePeer {\n    id:              SerdePeerId,\n    pubkey:          Option<SerdePubKey>,\n    multiaddrs:      Vec<PeerMultiaddr>,\n    connectedness:   usize,\n    #[serde(skip)]\n    retry:           u8,\n    #[serde(skip)]\n    next_attempt_at: u64,\n    connected_at:    u64,\n    disconnected_at: u64,\n    alive:           u64,\n}\n\nimpl From<ArcPeer> for SerdePeer {\n    fn from(peer: ArcPeer) -> SerdePeer {\n        let connectedness = match peer.connectedness() {\n            Connectedness::Unconnectable => Connectedness::Unconnectable,\n            _ => Connectedness::CanConnect,\n        };\n\n        SerdePeer {\n            id:              SerdePeerId(peer.owned_id()),\n            pubkey:          peer.owned_pubkey().map(SerdePubKey),\n            multiaddrs:      peer.multiaddrs.all(),\n            connectedness:   connectedness as usize,\n            retry:           peer.retry.count(),\n            next_attempt_at: peer.retry.next_attempt_at(),\n            connected_at:    peer.connected_at(),\n            disconnected_at: peer.disconnected_at(),\n            alive:           peer.alive(),\n        }\n    }\n}\n\nimpl TryFrom<SerdePeer> for ArcPeer {\n    type Error = NetworkError;\n\n    fn try_from(serde_peer: SerdePeer) -> Result<Self, Self::Error> {\n        let peer_id = serde_peer.id.0;\n\n        let peer = ArcPeer::new(peer_id.clone());\n        if let Some(pubkey) = serde_peer.pubkey {\n            
peer.set_pubkey(pubkey.0)?;\n        }\n\n        let multiaddrs = serde_peer\n            .multiaddrs\n            .into_iter()\n            .map(|ma| {\n                // Just ensure that our recovered multiaddr has an id\n                let ma: Multiaddr = ma.into();\n                PeerMultiaddr::new(ma, &peer_id)\n            })\n            .collect();\n        peer.multiaddrs.set(multiaddrs);\n\n        peer.set_connectedness(Connectedness::from(serde_peer.connectedness));\n        peer.retry.set(serde_peer.retry);\n        peer.retry.set_next_attempt_at(serde_peer.next_attempt_at);\n        peer.set_connected_at(serde_peer.connected_at);\n        peer.set_disconnected_at(serde_peer.disconnected_at);\n        peer.set_alive(serde_peer.alive);\n\n        Ok(peer)\n    }\n}\n\n// TODO: Async support. Right now it's ok since we only restore/save data once.\npub(super) trait SaveRestore: Send + Sync {\n    fn save(&self, peers: Vec<ArcPeer>) -> Result<(), NetworkError>;\n    fn restore(&self) -> Result<Vec<ArcPeer>, NetworkError>;\n}\n\n#[derive(Clone)]\npub(super) struct PeerDatFile {\n    path: PathBuf,\n}\n\nimpl PeerDatFile {\n    pub fn new<P: AsRef<Path>>(path: P) -> Self {\n        PeerDatFile {\n            path: path.as_ref().to_owned(),\n        }\n    }\n}\n\nimpl SaveRestore for PeerDatFile {\n    fn save(&self, peers: Vec<ArcPeer>) -> Result<(), NetworkError> {\n        let mut file = File::create(&self.path)?;\n        let peers_to_save = peers.into_iter().map(SerdePeer::from).collect::<Vec<_>>();\n        let data = bincode::serialize(&peers_to_save)?;\n\n        file.write_all(data.as_slice())?;\n        Ok(())\n    }\n\n    // restoring data only happens once, during network service startup\n    fn restore(&self) -> Result<Vec<ArcPeer>, NetworkError> {\n        let file = File::open(&self.path)?;\n        let mut buf_reader = BufReader::new(file);\n        let mut data = Vec::new();\n\n        buf_reader.read_to_end(&mut data)?;\n        let 
peers_to_restore: Vec<SerdePeer> = bincode::deserialize(&data)?;\n\n        let mut peers = Vec::with_capacity(peers_to_restore.len());\n        for p in peers_to_restore {\n            if let Ok(p) = ArcPeer::try_from(p) {\n                peers.push(p);\n            }\n        }\n\n        Ok(peers)\n    }\n}\n\n#[derive(Clone)]\npub(super) struct NoPeerDatFile;\n\nimpl SaveRestore for NoPeerDatFile {\n    fn save(&self, _peers: Vec<ArcPeer>) -> Result<(), NetworkError> {\n        Ok(())\n    }\n\n    fn restore(&self) -> Result<Vec<ArcPeer>, NetworkError> {\n        Ok(vec![])\n    }\n}\n\n#[derive(Debug, PartialEq, Eq, Hash)]\npub struct SerdePubKey(PublicKey);\n\nimpl ser::Serialize for SerdePubKey {\n    fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>\n    where\n        S: ser::Serializer,\n    {\n        serializer.serialize_bytes(self.0.clone().encode().as_ref())\n    }\n}\n\nimpl<'de> de::Deserialize<'de> for SerdePubKey {\n    fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>\n    where\n        D: de::Deserializer<'de>,\n    {\n        struct Visitor;\n\n        impl<'de> de::Visitor<'de> for Visitor {\n            type Value = SerdePubKey;\n\n            fn expecting(&self, formatter: &mut fmt::Formatter<'_>) -> fmt::Result {\n                formatter.write_str(\"peer pubkey\")\n            }\n\n            fn visit_seq<A: de::SeqAccess<'de>>(self, mut seq: A) -> Result<Self::Value, A::Error> {\n                let mut buf: Vec<u8> = Vec::with_capacity(seq.size_hint().unwrap_or(0));\n\n                while let Some(val) = seq.next_element()? 
{\n                    buf.push(val);\n                }\n\n                self.visit_byte_buf(buf)\n            }\n\n            fn visit_byte_buf<E: de::Error>(self, v: Vec<u8>) -> Result<Self::Value, E> {\n                self.visit_bytes(v.as_slice())\n            }\n\n            fn visit_bytes<E: de::Error>(self, v: &[u8]) -> Result<Self::Value, E> {\n                PublicKey::decode(v)\n                    .ok_or_else(|| de::Error::custom(\"not valid public key\"))\n                    .map(SerdePubKey)\n            }\n        }\n\n        deserializer.deserialize_bytes(Visitor)\n    }\n}\n\n#[derive(Debug, PartialEq, Eq, Hash)]\npub struct SerdePeerId(PeerId);\n\nimpl ser::Serialize for SerdePeerId {\n    fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>\n    where\n        S: ser::Serializer,\n    {\n        serializer.serialize_bytes(self.0.as_bytes())\n    }\n}\n\nimpl<'de> de::Deserialize<'de> for SerdePeerId {\n    fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>\n    where\n        D: de::Deserializer<'de>,\n    {\n        struct Visitor;\n\n        impl<'de> de::Visitor<'de> for Visitor {\n            type Value = SerdePeerId;\n\n            fn expecting(&self, formatter: &mut fmt::Formatter<'_>) -> fmt::Result {\n                formatter.write_str(\"peer id\")\n            }\n\n            fn visit_seq<A: de::SeqAccess<'de>>(self, mut seq: A) -> Result<Self::Value, A::Error> {\n                let mut buf: Vec<u8> = Vec::with_capacity(seq.size_hint().unwrap_or(0));\n\n                while let Some(val) = seq.next_element()? 
{\n                    buf.push(val);\n                }\n\n                self.visit_byte_buf(buf)\n            }\n\n            fn visit_byte_buf<E: de::Error>(self, v: Vec<u8>) -> Result<Self::Value, E> {\n                self.visit_bytes(v.as_slice())\n            }\n\n            fn visit_bytes<E: de::Error>(self, v: &[u8]) -> Result<Self::Value, E> {\n                PeerId::from_bytes(v.to_vec())\n                    .map_err(|_| de::Error::custom(\"not valid peer id\"))\n                    .map(SerdePeerId)\n            }\n        }\n\n        deserializer.deserialize_bytes(Visitor)\n    }\n}\n"
  },
  {
    "path": "core/network/src/peer_manager/session_book.rs",
    "content": "use std::borrow::Borrow;\nuse std::collections::{HashMap, HashSet};\nuse std::hash::{Hash, Hasher};\nuse std::ops::Deref;\nuse std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};\nuse std::sync::Arc;\n\nuse derive_more::Display;\nuse parking_lot::RwLock;\nuse tentacle::service::SessionType;\nuse tentacle::SessionId;\n\nuse super::{ArcPeer, PeerManagerConfig};\nuse crate::common::ConnectedAddr;\nuse crate::config::{\n    DEFAULT_INBOUND_CONN_LIMIT, DEFAULT_MAX_CONNECTIONS, DEFAULT_SAME_IP_CONN_LIMIT,\n};\n\n#[cfg(test)]\npub use crate::test::mock::SessionContext;\n#[cfg(not(test))]\npub use tentacle::context::SessionContext;\n\ntype Host = String;\ntype Count = usize;\n\n#[derive(Debug, Display, PartialEq, Eq)]\npub enum Error {\n    #[display(fmt = \"reach same ip connections limit\")]\n    ReachSameIPConnLimit,\n\n    #[display(fmt = \"reach inbound connections limit\")]\n    ReachInboundConnLimit,\n\n    #[display(fmt = \"reach outbound connections limit\")]\n    ReachOutboundConnLimit,\n}\n\n#[derive(Debug)]\npub struct Config {\n    same_ip_conn_limit:  usize,\n    inbound_conn_limit:  usize,\n    outbound_conn_limit: usize,\n}\n\nimpl Default for Config {\n    fn default() -> Self {\n        Config {\n            same_ip_conn_limit:  DEFAULT_SAME_IP_CONN_LIMIT,\n            inbound_conn_limit:  DEFAULT_INBOUND_CONN_LIMIT,\n            outbound_conn_limit: DEFAULT_MAX_CONNECTIONS - DEFAULT_INBOUND_CONN_LIMIT,\n        }\n    }\n}\n\nimpl From<&PeerManagerConfig> for Config {\n    fn from(config: &PeerManagerConfig) -> Config {\n        Config {\n            same_ip_conn_limit:  config.same_ip_conn_limit,\n            inbound_conn_limit:  config.inbound_conn_limit,\n            outbound_conn_limit: config.outbound_conn_limit,\n        }\n    }\n}\n\n#[derive(Debug)]\npub struct Session {\n    pub(crate) id:             SessionId,\n    pub(crate) ctx:            Arc<SessionContext>,\n    pub(crate) peer:           ArcPeer,\n    blocked:        
           AtomicBool,\n    pub(crate) connected_addr: ConnectedAddr,\n}\n\n#[derive(Debug, Clone)]\npub struct ArcSession(Arc<Session>);\n\nimpl ArcSession {\n    pub fn new(peer: ArcPeer, ctx: Arc<SessionContext>) -> Self {\n        let connected_addr = ConnectedAddr::from(&ctx.address);\n        let session = Session {\n            id: ctx.id,\n            ctx,\n            peer,\n            blocked: AtomicBool::new(false),\n            connected_addr,\n        };\n\n        ArcSession(Arc::new(session))\n    }\n\n    pub fn ty(&self) -> SessionType {\n        self.ctx.ty\n    }\n\n    pub fn block(&self) {\n        self.blocked.store(true, Ordering::SeqCst);\n    }\n\n    pub fn is_blocked(&self) -> bool {\n        self.blocked.load(Ordering::SeqCst)\n    }\n\n    pub fn unblock(&self) {\n        self.blocked.store(false, Ordering::SeqCst);\n    }\n}\n\nimpl Borrow<SessionId> for ArcSession {\n    fn borrow(&self) -> &SessionId {\n        &self.id\n    }\n}\n\nimpl PartialEq for ArcSession {\n    fn eq(&self, other: &ArcSession) -> bool {\n        self.id == other.id\n    }\n}\n\nimpl Eq for ArcSession {}\n\nimpl Hash for ArcSession {\n    fn hash<H: Hasher>(&self, state: &mut H) {\n        self.id.hash(state)\n    }\n}\n\nimpl Deref for ArcSession {\n    type Target = Session;\n\n    fn deref(&self) -> &Self::Target {\n        &self.0\n    }\n}\n\npub struct AcceptableSession(pub ArcSession);\n\npub struct SessionBook {\n    config: Config,\n\n    hosts:    RwLock<HashMap<Host, Count>>,\n    sessions: RwLock<HashSet<ArcSession>>,\n\n    inbound_count:  AtomicUsize,\n    outbound_count: AtomicUsize,\n}\n\nimpl Default for SessionBook {\n    fn default() -> SessionBook {\n        let config = Config::default();\n\n        SessionBook::new(config)\n    }\n}\n\nimpl SessionBook {\n    pub fn new(config: Config) -> Self {\n        SessionBook {\n            config,\n            hosts: Default::default(),\n            sessions: Default::default(),\n            
inbound_count: AtomicUsize::new(0),\n            outbound_count: AtomicUsize::new(0),\n        }\n    }\n\n    pub fn len(&self) -> usize {\n        self.sessions.read().len()\n    }\n\n    pub fn get(&self, sid: &SessionId) -> Option<ArcSession> {\n        self.sessions.read().get(sid).cloned()\n    }\n\n    pub fn all(&self) -> Vec<ArcSession> {\n        self.sessions.read().iter().cloned().collect()\n    }\n\n    pub fn iter_fn<R, F>(&self, f: F) -> R\n    where\n        F: for<'a> FnOnce(&mut dyn Iterator<Item = &'a ArcSession>) -> R,\n    {\n        let sessions = self.sessions.read();\n        f(&mut sessions.iter())\n    }\n\n    pub fn inbound_count(&self) -> usize {\n        self.inbound_count.load(Ordering::SeqCst)\n    }\n\n    pub fn outbound_count(&self) -> usize {\n        self.outbound_count.load(Ordering::SeqCst)\n    }\n\n    pub fn acceptable(&self, session: &ArcSession) -> Result<(), self::Error> {\n        let session_host = &session.connected_addr.host;\n        let host_count = {\n            let hosts = self.hosts.read();\n            hosts.get(session_host).cloned().unwrap_or(0)\n        };\n\n        if host_count == usize::MAX || host_count + 1 > self.config.same_ip_conn_limit {\n            return Err(self::Error::ReachSameIPConnLimit);\n        }\n\n        match session.ty() {\n            SessionType::Inbound if self.inbound_count() >= self.config.inbound_conn_limit => {\n                Err(self::Error::ReachInboundConnLimit)\n            }\n            SessionType::Outbound if self.outbound_count() >= self.config.outbound_conn_limit => {\n                Err(self::Error::ReachOutboundConnLimit)\n            }\n            _ => Ok(()),\n        }\n    }\n\n    pub fn insert(&self, AcceptableSession(session): AcceptableSession) {\n        let session_host = &session.connected_addr.host;\n\n        let mut hosts = self.hosts.write();\n        hosts\n            .entry(session_host.to_owned())\n            .and_modify(|c| *c += 1)\n      
      .or_insert(1);\n\n        match session.ty() {\n            SessionType::Inbound => self.inbound_count.fetch_add(1, Ordering::SeqCst),\n            SessionType::Outbound => self.outbound_count.fetch_add(1, Ordering::SeqCst),\n        };\n\n        self.sessions.write().insert(session);\n    }\n\n    pub fn remove(&self, sid: &SessionId) -> Option<ArcSession> {\n        let session = self.sessions.write().take(sid);\n\n        if let Some(connected_addr) = session.as_ref().map(|s| &s.connected_addr) {\n            let session_host = &connected_addr.host;\n            let mut hosts = self.hosts.write();\n\n            if hosts.get(session_host) == Some(&1) {\n                hosts.remove(session_host);\n            } else if let Some(count) = hosts.get_mut(session_host) {\n                *count -= 1;\n            }\n        }\n\n        if let Some(ty) = session.as_ref().map(|s| s.ty()) {\n            match ty {\n                SessionType::Inbound => self.inbound_count.fetch_sub(1, Ordering::SeqCst),\n                SessionType::Outbound => self.outbound_count.fetch_sub(1, Ordering::SeqCst),\n            };\n        }\n\n        session\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use std::convert::TryInto;\n    use std::sync::Arc;\n\n    use tentacle::multiaddr::Multiaddr;\n    use tentacle::secio::{PeerId, SecioKeyPair};\n    use tentacle::service::SessionType;\n    use tentacle::SessionId;\n\n    use super::{AcceptableSession, ArcSession, Config, Error, SessionBook};\n    use crate::peer_manager::{ArcPeer, PeerMultiaddr};\n    use crate::test::mock::SessionContext;\n    use crate::traits::MultiaddrExt;\n\n    fn make_multiaddr(port: u16, id: Option<PeerId>) -> Multiaddr {\n        let mut multiaddr = format!(\"/ip4/127.0.0.1/tcp/{}\", port)\n            .parse::<Multiaddr>()\n            .expect(\"peer multiaddr\");\n\n        if let Some(id) = id {\n            multiaddr.push_id(id);\n        }\n\n        multiaddr\n    }\n\n    fn 
make_peer_multiaddr(port: u16, id: PeerId) -> PeerMultiaddr {\n        make_multiaddr(port, Some(id))\n            .try_into()\n            .expect(\"try into peer multiaddr\")\n    }\n\n    fn make_peer(port: u16) -> ArcPeer {\n        let keypair = SecioKeyPair::secp256k1_generated();\n        let pubkey = keypair.public_key();\n        let peer_id = pubkey.peer_id();\n        let peer = ArcPeer::from_pubkey(pubkey).expect(\"make peer\");\n        let multiaddr = make_peer_multiaddr(port, peer_id);\n\n        peer.multiaddrs.set(vec![multiaddr]);\n        peer\n    }\n\n    fn make_session(port: u16, sid: SessionId, ty: SessionType) -> ArcSession {\n        let peer = make_peer(port);\n        let multiaddr = peer.multiaddrs.all_raw().pop().unwrap();\n        let ctx = SessionContext::make(sid, multiaddr, ty, peer.owned_pubkey().unwrap());\n\n        ArcSession::new(peer, Arc::new(ctx))\n    }\n\n    #[test]\n    fn should_reject_session_when_reach_same_ip_conn_limit() {\n        let config = Config {\n            same_ip_conn_limit:  1,\n            inbound_conn_limit:  20,\n            outbound_conn_limit: 20,\n        };\n        let book = SessionBook::new(config);\n\n        let session = make_session(100, 1.into(), SessionType::Inbound);\n        assert!(book.acceptable(&session).is_ok());\n\n        book.insert(AcceptableSession(session.clone()));\n        assert_eq!(\n            book.hosts.read().get(&session.connected_addr.host),\n            Some(&1)\n        );\n\n        let same_ip_session = make_session(101, 2.into(), SessionType::Inbound);\n        assert_eq!(\n            book.acceptable(&same_ip_session),\n            Err(Error::ReachSameIPConnLimit)\n        );\n    }\n\n    #[test]\n    fn should_reduce_host_count() {\n        let config = Config {\n            same_ip_conn_limit:  5,\n            inbound_conn_limit:  20,\n            outbound_conn_limit: 20,\n        };\n        let book = SessionBook::new(config);\n\n        let session = 
make_session(100, 1.into(), SessionType::Inbound);\n        assert!(book.acceptable(&session).is_ok());\n\n        book.insert(AcceptableSession(session.clone()));\n        assert_eq!(\n            book.hosts.read().get(&session.connected_addr.host),\n            Some(&1)\n        );\n\n        book.remove(&(1.into()));\n        assert_eq!(book.hosts.read().get(&session.connected_addr.host), None);\n    }\n\n    #[test]\n    fn should_reject_inbound_session_when_reach_inbound_limit() {\n        let config = Config {\n            same_ip_conn_limit:  5,\n            inbound_conn_limit:  1,\n            outbound_conn_limit: 20,\n        };\n        let book = SessionBook::new(config);\n\n        let session = make_session(100, 1.into(), SessionType::Inbound);\n        assert!(book.acceptable(&session).is_ok());\n\n        book.insert(AcceptableSession(session.clone()));\n        assert_eq!(\n            book.hosts.read().get(&session.connected_addr.host),\n            Some(&1)\n        );\n        assert_eq!(book.inbound_count(), 1);\n\n        let same_ip_session = make_session(101, 2.into(), SessionType::Inbound);\n        assert_eq!(\n            book.acceptable(&same_ip_session),\n            Err(Error::ReachInboundConnLimit)\n        );\n    }\n\n    #[test]\n    fn should_reject_outbound_session_when_reach_outbound_limit() {\n        let config = Config {\n            same_ip_conn_limit:  5,\n            inbound_conn_limit:  10,\n            outbound_conn_limit: 1,\n        };\n        let book = SessionBook::new(config);\n\n        let session = make_session(100, 1.into(), SessionType::Outbound);\n        assert!(book.acceptable(&session).is_ok());\n\n        book.insert(AcceptableSession(session.clone()));\n        assert_eq!(\n            book.hosts.read().get(&session.connected_addr.host),\n            Some(&1)\n        );\n        assert_eq!(book.outbound_count(), 1);\n\n        let same_ip_session = make_session(101, 2.into(), SessionType::Outbound);\n    
    assert_eq!(\n            book.acceptable(&same_ip_session),\n            Err(Error::ReachOutboundConnLimit)\n        );\n    }\n}\n"
  },
  {
    "path": "core/network/src/peer_manager/shared.rs",
    "content": "use std::sync::Arc;\n\nuse log::debug;\nuse protocol::traits::PeerTag;\nuse tentacle::secio::PeerId;\nuse tentacle::SessionId;\n\nuse super::{Connectedness, Inner};\nuse crate::common::ConnectedAddr;\nuse crate::peer_manager::SessionBook;\nuse crate::traits::SharedSessionBook;\nuse crate::NetworkConfig;\n\npub struct Config {\n    pub max_stream_window_size: usize,\n    pub write_timeout:          u64,\n}\n\n// TODO: checkout max_frame_length\nimpl From<&NetworkConfig> for Config {\n    fn from(config: &NetworkConfig) -> Self {\n        Config {\n            write_timeout:          config.write_timeout,\n            max_stream_window_size: config.max_frame_length,\n        }\n    }\n}\n\n#[derive(Clone)]\npub struct SharedSessions {\n    inner:  Arc<Inner>,\n    config: Arc<Config>,\n}\n\nimpl SharedSessions {\n    pub(super) fn new(inner: Arc<Inner>, config: Config) -> Self {\n        SharedSessions {\n            inner,\n            config: Arc::new(config),\n        }\n    }\n\n    fn sessions(&self) -> &SessionBook {\n        &self.inner.sessions\n    }\n}\n\nimpl SharedSessionBook for SharedSessions {\n    fn all_sendable(&self) -> Vec<SessionId> {\n        self.sessions().iter_fn(|iter| {\n            iter.filter_map(|s| if !s.is_blocked() { Some(s.id) } else { None })\n                .collect()\n        })\n    }\n\n    fn all_blocked(&self) -> Vec<SessionId> {\n        self.sessions().iter_fn(|iter| {\n            iter.filter_map(|s| if s.is_blocked() { Some(s.id) } else { None })\n                .collect()\n        })\n    }\n\n    fn refresh_blocked(&self) {\n        let all_blocked = self\n            .sessions()\n            .iter_fn(|iter| iter.filter(|s| s.is_blocked()).cloned().collect::<Vec<_>>());\n\n        for session in all_blocked {\n            let pending_data_size = session.ctx.pending_data_size();\n            // FIXME: multi streams\n            let estimated_time = (pending_data_size / self.config.max_stream_window_size) 
as u64;\n\n            if estimated_time < self.config.write_timeout {\n                debug!(\"unblock session {}\", session.id);\n                session.unblock()\n            }\n        }\n    }\n\n    fn peers(&self, pids: Vec<PeerId>) -> (Vec<SessionId>, Vec<PeerId>) {\n        let mut connected = Vec::new();\n        let mut unconnected = Vec::new();\n\n        for peer_id in pids {\n            match self.inner.peer(&peer_id) {\n                Some(peer) if peer.connectedness() == Connectedness::Connected => {\n                    connected.push(peer.session_id())\n                }\n                _ => unconnected.push(peer_id),\n            }\n        }\n\n        (connected, unconnected)\n    }\n\n    fn all(&self) -> Vec<SessionId> {\n        self.sessions().iter_fn(|iter| iter.map(|s| s.id).collect())\n    }\n\n    fn connected_addr(&self, sid: SessionId) -> Option<ConnectedAddr> {\n        self.sessions()\n            .get(&sid)\n            .map(|s| s.connected_addr.to_owned())\n    }\n\n    fn pending_data_size(&self, sid: SessionId) -> usize {\n        self.sessions()\n            .get(&sid)\n            .map(|s| s.ctx.pending_data_size())\n            .unwrap_or_else(|| 0)\n    }\n\n    fn allowlist(&self) -> Vec<PeerId> {\n        self.sessions().iter_fn(|iter| {\n            iter.filter_map(|s| {\n                if s.peer.tags.contains(&PeerTag::AlwaysAllow) {\n                    Some(s.peer.id.to_owned())\n                } else {\n                    None\n                }\n            })\n            .collect()\n        })\n    }\n\n    fn len(&self) -> usize {\n        self.sessions().len()\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::{Config, SharedSessionBook, SharedSessions};\n    use crate::peer_manager::{Inner, SessionBook};\n\n    use tentacle::secio::SecioKeyPair;\n\n    use std::sync::Arc;\n\n    #[test]\n    fn should_return_unconnected_peer_ids() {\n        let sess_conf = Config {\n            
max_stream_window_size: 10,\n            write_timeout:          10,\n        };\n\n        let keypair = SecioKeyPair::secp256k1_generated();\n        let pubkey = keypair.public_key();\n        let self_peer_id = pubkey.peer_id();\n\n        let inner = Arc::new(Inner::new(self_peer_id, SessionBook::default()));\n        let sessions = SharedSessions::new(Arc::clone(&inner), sess_conf);\n\n        let keypair = SecioKeyPair::secp256k1_generated();\n        let pubkey = keypair.public_key();\n        let peer_id = pubkey.peer_id();\n        assert!(inner.peer(&peer_id).is_none(), \"should not be registered\");\n\n        let (_, unconnected) = sessions.peers(vec![peer_id.clone()]);\n        assert!(unconnected.contains(&peer_id));\n    }\n}\n"
  },
  {
    "path": "core/network/src/peer_manager/tags.rs",
    "content": "use super::time;\nuse crate::error::NetworkError;\n\nuse derive_more::Display;\nuse parking_lot::RwLock;\nuse protocol::traits::PeerTag;\n\nuse std::{collections::HashSet, time::Duration};\n\n#[derive(Debug, Display, PartialEq, Eq)]\npub enum TagError {\n    #[display(fmt = \"cannot ban always allowed or consensus peer\")]\n    AlwaysAllow,\n}\n\nimpl std::error::Error for TagError {}\n\nimpl From<TagError> for NetworkError {\n    fn from(err: TagError) -> NetworkError {\n        NetworkError::Internal(Box::new(err))\n    }\n}\n\n#[derive(Debug)]\npub struct Tags(RwLock<HashSet<PeerTag>>);\n\nimpl Default for Tags {\n    fn default() -> Self {\n        Tags(Default::default())\n    }\n}\n\nimpl Tags {\n    pub fn get_banned_until(&self) -> Option<u64> {\n        let opt_banned = { self.0.read().get(&PeerTag::ban_key()).cloned() };\n\n        if let Some(PeerTag::Ban { until }) = opt_banned {\n            Some(until)\n        } else {\n            None\n        }\n    }\n\n    pub fn insert_ban(&self, timeout: Duration) -> Result<(), TagError> {\n        let until = Duration::from_secs(time::now()) + timeout;\n        self.insert(PeerTag::ban(until.as_secs()))\n    }\n\n    #[cfg(test)]\n    pub fn set_ban_until(&self, until: u64) {\n        self.0.write().insert(PeerTag::ban(until));\n    }\n\n    pub fn insert(&self, tag: PeerTag) -> Result<(), TagError> {\n        if let PeerTag::Ban { .. } = tag {\n            if self.contains(&PeerTag::Consensus) || self.contains(&PeerTag::AlwaysAllow) {\n                return Err(TagError::AlwaysAllow);\n            }\n        }\n\n        self.0.write().insert(tag);\n        Ok(())\n    }\n\n    pub fn remove(&self, tag: &PeerTag) {\n        self.0.write().remove(&tag);\n    }\n\n    pub fn contains(&self, tag: &PeerTag) -> bool {\n        self.0.read().contains(tag)\n    }\n}\n"
  },
  {
    "path": "core/network/src/peer_manager/test_manager.rs",
    "content": "#![allow(clippy::needless_collect)]\n\nuse super::{\n    time, ArcPeer, Connectedness, ConnectingAttempt, Inner, MisbehaviorKind, PeerManager,\n    PeerManagerConfig, PeerMultiaddr, TrustMetric, TrustMetricConfig, GOOD_TRUST_SCORE,\n    MAX_CONNECTING_MARGIN, MAX_CONNECTING_TIMEOUT, MAX_RANDOM_NEXT_RETRY, MAX_RETRY_COUNT,\n    REPEATED_CONNECTION_TIMEOUT, SAME_IP_LIMIT_BAN, SHORT_ALIVE_SESSION,\n};\nuse crate::{\n    common::ConnectedAddr,\n    event::{\n        ConnectionErrorKind, ConnectionEvent, ConnectionType, PeerManagerEvent, SessionErrorKind,\n    },\n    test::mock::SessionContext,\n    traits::MultiaddrExt,\n};\n\nuse futures::{\n    channel::mpsc::{unbounded, UnboundedReceiver, UnboundedSender},\n    StreamExt,\n};\nuse protocol::traits::{PeerTag, TrustFeedback};\nuse tentacle::{\n    multiaddr::Multiaddr,\n    secio::{PeerId, PublicKey, SecioKeyPair},\n    service::SessionType,\n    SessionId,\n};\n\nuse std::{\n    borrow::Cow,\n    collections::HashSet,\n    convert::TryInto,\n    future::Future,\n    pin::Pin,\n    sync::Arc,\n    task::{Context, Poll},\n    time::Duration,\n};\n\nfn make_multiaddr(port: u16, id: Option<PeerId>) -> Multiaddr {\n    let mut multiaddr = format!(\"/ip4/127.0.0.1/tcp/{}\", port)\n        .parse::<Multiaddr>()\n        .expect(\"peer multiaddr\");\n\n    if let Some(id) = id {\n        multiaddr.push_id(id);\n    }\n\n    multiaddr\n}\n\nfn make_peer_multiaddr(port: u16, id: PeerId) -> PeerMultiaddr {\n    make_multiaddr(port, Some(id))\n        .try_into()\n        .expect(\"try into peer multiaddr\")\n}\n\nfn make_peer(port: u16) -> ArcPeer {\n    let keypair = SecioKeyPair::secp256k1_generated();\n    let pubkey = keypair.public_key();\n    let peer_id = pubkey.peer_id();\n    let peer = ArcPeer::from_pubkey(pubkey).expect(\"make peer\");\n    let multiaddr = make_peer_multiaddr(port, peer_id);\n\n    peer.multiaddrs.set(vec![multiaddr]);\n    peer\n}\n\nfn make_bootstraps(num: usize) -> Vec<ArcPeer> 
{\n    let mut init_port = 5000;\n\n    (0..num)\n        .map(|_| {\n            let peer = make_peer(init_port);\n            init_port += 1;\n            peer\n        })\n        .collect()\n}\n\nstruct MockManager {\n    event_tx: UnboundedSender<PeerManagerEvent>,\n    inner:    PeerManager,\n}\n\nimpl MockManager {\n    pub fn new(inner: PeerManager, event_tx: UnboundedSender<PeerManagerEvent>) -> Self {\n        MockManager { event_tx, inner }\n    }\n\n    pub async fn poll_event(&mut self, event: PeerManagerEvent) {\n        self.event_tx.unbounded_send(event).expect(\"send event\");\n        self.await\n    }\n\n    pub async fn poll(&mut self) {\n        self.await\n    }\n\n    pub fn config(&self) -> PeerManagerConfig {\n        self.inner.config()\n    }\n\n    pub fn connecting(&self) -> &HashSet<ConnectingAttempt> {\n        &self.inner.connecting\n    }\n\n    pub fn connecting_mut(&mut self) -> &mut HashSet<ConnectingAttempt> {\n        &mut self.inner.connecting\n    }\n\n    pub fn core_inner(&self) -> Arc<Inner> {\n        self.inner.inner()\n    }\n}\n\nimpl Future for MockManager {\n    type Output = ();\n\n    fn poll(mut self: Pin<&mut Self>, ctx: &mut Context<'_>) -> Poll<Self::Output> {\n        let _ = Future::poll(Pin::new(&mut self.as_mut().inner), ctx);\n        Poll::Ready(())\n    }\n}\n\nfn make_manager(\n    bootstrap_num: usize,\n    max_connections: usize,\n) -> (MockManager, UnboundedReceiver<ConnectionEvent>) {\n    let manager_pubkey = make_pubkey();\n    let manager_id = manager_pubkey.peer_id();\n    let bootstraps = make_bootstraps(bootstrap_num);\n    let mut peer_dat_file = std::env::temp_dir();\n    peer_dat_file.push(\"peer.dat\");\n    let peer_trust_config = Arc::new(TrustMetricConfig::default());\n    let peer_fatal_ban = Duration::from_secs(50);\n    let peer_soft_ban = Duration::from_secs(10);\n    let inbound_conn_limit = max_connections / 2;\n\n    let config = PeerManagerConfig {\n        our_id: manager_id,\n 
       pubkey: manager_pubkey,\n        bootstraps,\n        allowlist: Default::default(),\n        allowlist_only: false,\n        peer_trust_config,\n        peer_fatal_ban,\n        peer_soft_ban,\n        max_connections,\n        same_ip_conn_limit: max_connections,\n        inbound_conn_limit,\n        outbound_conn_limit: max_connections - inbound_conn_limit,\n        routine_interval: Duration::from_secs(10),\n        peer_dat_file,\n    };\n\n    let (conn_tx, conn_rx) = unbounded();\n    let (mgr_tx, mgr_rx) = unbounded();\n    let manager = PeerManager::new(config, mgr_rx, conn_tx);\n\n    (MockManager::new(manager, mgr_tx), conn_rx)\n}\n\nfn make_pubkey() -> PublicKey {\n    let keypair = SecioKeyPair::secp256k1_generated();\n    keypair.public_key()\n}\n\nasync fn make_sessions(\n    mgr: &mut MockManager,\n    num: u16,\n    init_port: u16,\n    sess_ty: SessionType,\n) -> Vec<ArcPeer> {\n    let mut next_sid = 1;\n    let mut peers = Vec::with_capacity(num as usize);\n    let inbound_limit = mgr.config().inbound_conn_limit;\n    let outbound_limit = mgr.config().max_connections - inbound_limit;\n    let inner = mgr.core_inner();\n\n    for n in (0..num).into_iter() {\n        let remote_pubkey = make_pubkey();\n        let remote_pid = remote_pubkey.peer_id();\n        let remote_addr = make_multiaddr(init_port + n, Some(remote_pid.clone()));\n\n        let ty = if sess_ty == SessionType::Outbound && inner.outbound_count() == outbound_limit {\n            // Switch to create inbound session\n            SessionType::Inbound\n        } else {\n            sess_ty\n        };\n\n        let sess_ctx = SessionContext::make(\n            SessionId::new(next_sid),\n            remote_addr.clone(),\n            ty,\n            remote_pubkey.clone(),\n        );\n        next_sid += 1;\n\n        let new_session = PeerManagerEvent::NewSession {\n            pid:    remote_pid.clone(),\n            pubkey: remote_pubkey,\n            ctx:    
sess_ctx.arced(),\n        };\n        mgr.poll_event(new_session).await;\n\n        peers.push(inner.peer(&remote_pid).expect(\"make peer session\"));\n    }\n\n    assert_eq!(inner.connected(), num as usize, \"make some sessions\");\n    peers\n}\n\n#[tokio::test]\nasync fn should_accept_new_peer_inbound_connection_on_new_session() {\n    let (mut mgr, _conn_rx) = make_manager(0, 20);\n\n    let remote_pubkey = make_pubkey();\n    let remote_peer_id = remote_pubkey.peer_id();\n    let remote_addr = make_multiaddr(6000, Some(remote_pubkey.peer_id()));\n\n    let sess_ctx = SessionContext::make(\n        SessionId::new(1),\n        remote_addr.clone(),\n        SessionType::Inbound,\n        remote_pubkey.clone(),\n    );\n    let new_session = PeerManagerEvent::NewSession {\n        pid:    remote_peer_id.clone(),\n        pubkey: remote_pubkey.clone(),\n        ctx:    sess_ctx.arced(),\n    };\n    mgr.poll_event(new_session).await;\n\n    let inner = mgr.core_inner();\n    assert_eq!(inner.connected(), 1, \"should have one without bootstrap\");\n\n    let saved_peer = inner.peer(&remote_peer_id).expect(\"should save peer\");\n    assert_eq!(saved_peer.session_id(), 1.into());\n    assert!(saved_peer.has_pubkey(), \"should have public key\");\n    assert_eq!(saved_peer.connectedness(), Connectedness::Connected);\n    assert_eq!(saved_peer.retry.count(), 0, \"should reset retry\");\n\n    let saved_session = inner.session(1.into()).expect(\"should save session\");\n    assert_eq!(saved_session.peer.id, remote_pubkey.peer_id());\n    assert!(!saved_session.is_blocked());\n    assert_eq!(\n        saved_session.connected_addr,\n        ConnectedAddr::from(&remote_addr)\n    );\n}\n\n#[tokio::test]\nasync fn should_accept_outbound_connection_and_remove_matched_connecting_on_new_session() {\n    let (mut mgr, _conn_rx) = make_manager(0, 20);\n\n    let test_peer = make_peer(9527);\n    let test_multiaddr = 
test_peer.multiaddrs.all_raw().pop().expect(\"get multiaddr\");\n    let target_attempt = ConnectingAttempt::new(test_peer.clone());\n\n    let inner = mgr.core_inner();\n    assert_eq!(inner.connected(), 0, \"should have zero connected\");\n\n    mgr.connecting_mut().insert(target_attempt);\n    assert_eq!(\n        mgr.connecting().len(),\n        1,\n        \"should have one connecting attempt\"\n    );\n\n    let sess_ctx = SessionContext::make(\n        SessionId::new(1),\n        test_multiaddr.clone(),\n        SessionType::Outbound,\n        test_peer.owned_pubkey().expect(\"pubkey\"),\n    );\n    let new_session = PeerManagerEvent::NewSession {\n        pid:    test_peer.owned_id(),\n        pubkey: test_peer.owned_pubkey().expect(\"pubkey\"),\n        ctx:    sess_ctx.arced(),\n    };\n    mgr.poll_event(new_session).await;\n\n    assert_eq!(\n        mgr.connecting().len(),\n        0,\n        \"should have 0 connecting attempts\"\n    );\n    assert_eq!(inner.connected(), 1, \"should have 1 connected\");\n    assert!(inner.peer(&test_peer.id).is_some(), \"should match peer\");\n}\n\n#[tokio::test]\nasync fn should_set_matched_peer_pubkey_on_new_session() {\n    let (mut mgr, _conn_rx) = make_manager(0, 2);\n\n    let inner = mgr.core_inner();\n    let test_pubkey = make_pubkey();\n    let test_peer = ArcPeer::new(test_pubkey.peer_id());\n    inner.add_peer(test_peer.clone());\n\n    let sess_ctx = SessionContext::make(\n        SessionId::new(1),\n        make_multiaddr(9527, None),\n        SessionType::Outbound,\n        test_pubkey.clone(),\n    );\n    let new_session = PeerManagerEvent::NewSession {\n        pid:    test_pubkey.peer_id(),\n        pubkey: test_pubkey.clone(),\n        ctx:    sess_ctx.arced(),\n    };\n    mgr.poll_event(new_session).await;\n\n    let inner = mgr.core_inner();\n    assert_eq!(inner.connected(), 1, \"should have one connection\");\n    assert_eq!(\n        test_peer.owned_pubkey(),\n        Some(test_pubkey),\n        
\"should set peer pubkey\"\n    );\n}\n\n#[tokio::test]\nasync fn should_reset_outbound_peer_multiaddr_failure_count_on_new_session() {\n    let (mut mgr, _conn_rx) = make_manager(0, 2);\n\n    let inner = mgr.core_inner();\n    let test_peer = make_peer(9527);\n    inner.add_peer(test_peer.clone());\n\n    let test_multiaddr = test_peer.multiaddrs.all().pop().expect(\"test multiaddr\");\n    test_peer.multiaddrs.inc_failure(&test_multiaddr);\n    assert_eq!(\n        test_peer.multiaddrs.failure(&test_multiaddr),\n        Some(1),\n        \"should have one failure\"\n    );\n\n    let sess_ctx = SessionContext::make(\n        SessionId::new(1),\n        make_multiaddr(9527, None),\n        SessionType::Outbound,\n        test_peer.owned_pubkey().expect(\"pubkey\"),\n    );\n    let new_session = PeerManagerEvent::NewSession {\n        pid:    test_peer.owned_id(),\n        pubkey: test_peer.owned_pubkey().expect(\"pubkey\"),\n        ctx:    sess_ctx.arced(),\n    };\n    mgr.poll_event(new_session).await;\n\n    let inner = mgr.core_inner();\n    assert_eq!(inner.connected(), 1, \"should have one connection\");\n    assert_eq!(\n        test_peer.multiaddrs.failure(&test_multiaddr),\n        Some(0),\n        \"should reset matched outbound multiaddr's failure\"\n    );\n}\n\n#[tokio::test]\nasync fn should_ignore_inbound_address_on_new_session() {\n    let (mut mgr, _conn_rx) = make_manager(2, 20);\n\n    let remote_pubkey = make_pubkey();\n    let remote_peer_id = remote_pubkey.peer_id();\n    let remote_addr = make_multiaddr(6000, Some(remote_pubkey.peer_id()));\n\n    let sess_ctx = SessionContext::make(\n        SessionId::new(1),\n        remote_addr.clone(),\n        SessionType::Inbound,\n        remote_pubkey.clone(),\n    );\n    let new_session = PeerManagerEvent::NewSession {\n        pid:    remote_peer_id.clone(),\n        pubkey: remote_pubkey.clone(),\n        ctx:    sess_ctx.arced(),\n    };\n    mgr.poll_event(new_session).await;\n\n    let inner = 
mgr.core_inner();\n    assert_eq!(inner.connected(), 1, \"should have one without bootstrap\");\n\n    let saved_peer = inner.peer(&remote_peer_id).expect(\"should save peer\");\n    assert_eq!(\n        saved_peer.multiaddrs.len(),\n        0,\n        \"should not save inbound multiaddr\"\n    );\n}\n\n#[tokio::test]\nasync fn should_enforce_id_in_multiaddr_on_new_session() {\n    let (mut mgr, _conn_rx) = make_manager(2, 20);\n\n    let remote_pubkey = make_pubkey();\n    let remote_peer_id = remote_pubkey.peer_id();\n    let remote_addr = make_multiaddr(6000, None);\n\n    let sess_ctx = SessionContext::make(\n        SessionId::new(1),\n        remote_addr.clone(),\n        SessionType::Outbound,\n        remote_pubkey.clone(),\n    );\n    let new_session = PeerManagerEvent::NewSession {\n        pid:    remote_pubkey.peer_id(),\n        pubkey: remote_pubkey.clone(),\n        ctx:    sess_ctx.arced(),\n    };\n    mgr.poll_event(new_session).await;\n\n    let inner = mgr.core_inner();\n    assert_eq!(inner.connected(), 1, \"should have one without bootstrap\");\n\n    let saved_peer = inner.peer(&remote_peer_id).expect(\"should save peer\");\n    let saved_addrs = saved_peer.multiaddrs.all_raw();\n    assert_eq!(saved_addrs.len(), 1, \"should save outbound multiaddr\");\n\n    let remote_addr = saved_addrs.first().expect(\"get first multiaddr\");\n    assert!(remote_addr.has_id());\n    assert_eq!(\n        remote_addr.id_bytes(),\n        Some(Cow::Borrowed(remote_pubkey.peer_id().as_bytes())),\n        \"id should match\"\n    );\n}\n\n#[tokio::test]\nasync fn should_add_new_outbound_multiaddr_to_peer_on_new_session() {\n    let (mut mgr, _conn_rx) = make_manager(2, 20);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n\n    let inner = mgr.core_inner();\n    assert_eq!(inner.connected(), 1, \"should have one without bootstrap\");\n\n    let test_peer = remote_peers.first().expect(\"get first\");\n    let 
session_closed = PeerManagerEvent::SessionClosed {\n        pid: test_peer.owned_id(),\n        sid: test_peer.session_id(),\n    };\n    mgr.poll_event(session_closed).await;\n\n    let new_multiaddr = make_multiaddr(9999, None);\n    let sess_ctx = SessionContext::make(\n        SessionId::new(2),\n        new_multiaddr,\n        SessionType::Outbound,\n        test_peer.owned_pubkey().expect(\"pubkey\"),\n    );\n    let new_session = PeerManagerEvent::NewSession {\n        pid:    test_peer.owned_id(),\n        pubkey: test_peer.owned_pubkey().expect(\"pubkey\"),\n        ctx:    sess_ctx.arced(),\n    };\n    mgr.poll_event(new_session).await;\n\n    assert_eq!(test_peer.multiaddrs.len(), 2, \"should have 2 addrs\");\n\n    let test_peer_multiaddr = make_peer_multiaddr(9999, test_peer.owned_id());\n    assert!(\n        test_peer.multiaddrs.contains(&test_peer_multiaddr),\n        \"should have this new multiaddr\"\n    );\n}\n\n#[tokio::test]\nasync fn should_always_remove_inbound_multiaddr_even_if_we_reach_max_connections_on_new_session() {\n    let (mut mgr, _conn_rx) = make_manager(0, 2);\n    let _remote_peers = make_sessions(&mut mgr, 2, 5000, SessionType::Outbound).await;\n\n    let inner = mgr.core_inner();\n    let test_peer = make_peer(9527);\n    inner.add_peer(test_peer.clone());\n    assert_eq!(\n        test_peer.multiaddrs.len(),\n        1,\n        \"should have one inbound address\"\n    );\n\n    let sess_ctx = SessionContext::make(\n        SessionId::new(1),\n        make_multiaddr(9527, Some(test_peer.owned_id())),\n        SessionType::Inbound,\n        test_peer.owned_pubkey().expect(\"pubkey\"),\n    );\n    let new_session = PeerManagerEvent::NewSession {\n        pid:    test_peer.owned_id(),\n        pubkey: test_peer.owned_pubkey().expect(\"pubkey\"),\n        ctx:    sess_ctx.arced(),\n    };\n    mgr.poll_event(new_session).await;\n\n    let inner = mgr.core_inner();\n    assert_eq!(inner.connected(), 2, \"should not increase conn 
count\");\n\n    assert_eq!(\n        test_peer.multiaddrs.len(),\n        0,\n        \"should remove inbound address\"\n    );\n}\n\n#[tokio::test]\nasync fn should_remove_matched_peer_inbound_address_from_ctx_even_if_it_doesnt_have_id_on_new_session(\n) {\n    let (mut mgr, _conn_rx) = make_manager(0, 2);\n\n    let inner = mgr.core_inner();\n    let test_peer = make_peer(9527);\n    inner.add_peer(test_peer.clone());\n    assert_eq!(\n        test_peer.multiaddrs.len(),\n        1,\n        \"should have one inbound address\"\n    );\n\n    let sess_ctx = SessionContext::make(\n        SessionId::new(1),\n        make_multiaddr(9527, None),\n        SessionType::Inbound,\n        test_peer.owned_pubkey().expect(\"pubkey\"),\n    );\n    let new_session = PeerManagerEvent::NewSession {\n        pid:    test_peer.owned_id(),\n        pubkey: test_peer.owned_pubkey().expect(\"pubkey\"),\n        ctx:    sess_ctx.arced(),\n    };\n    mgr.poll_event(new_session).await;\n\n    let inner = mgr.core_inner();\n    assert_eq!(inner.connected(), 1, \"should have one connection\");\n    assert_eq!(\n        test_peer.multiaddrs.len(),\n        0,\n        \"should remove inbound address\"\n    );\n}\n\n#[tokio::test]\nasync fn should_reject_new_connection_for_same_peer_on_new_session() {\n    let (mut mgr, mut conn_rx) = make_manager(0, 20);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n\n    let test_peer = remote_peers.first().expect(\"get first peer\");\n    let expect_sid = test_peer.session_id();\n\n    let sess_ctx = SessionContext::make(\n        SessionId::new(99),\n        test_peer.multiaddrs.all_raw().pop().expect(\"get multiaddr\"),\n        SessionType::Outbound,\n        test_peer.owned_pubkey().expect(\"pubkey\"),\n    );\n    let new_session = PeerManagerEvent::NewSession {\n        pid:    test_peer.owned_id(),\n        pubkey: test_peer.owned_pubkey().expect(\"pubkey\"),\n        ctx:    sess_ctx.arced(),\n    
};\n    mgr.poll_event(new_session).await;\n\n    let inner = mgr.core_inner();\n    assert_eq!(inner.connected(), 1, \"should not increase conn count\");\n    assert_eq!(\n        test_peer.session_id(),\n        expect_sid,\n        \"should not change peer session id\"\n    );\n\n    let conn_event = conn_rx.next().await.expect(\"should have disconnect event\");\n    match conn_event {\n        ConnectionEvent::Disconnect(sid) => assert_eq!(sid, 99.into(), \"should be new session id\"),\n        _ => panic!(\"should be disconnect event\"),\n    }\n}\n\n#[tokio::test]\nasync fn should_keep_new_connection_for_error_outdated_peer_session_on_new_session() {\n    let (mut mgr, mut conn_rx) = make_manager(0, 20);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n\n    let inner = mgr.core_inner();\n    let test_peer = remote_peers.first().expect(\"get first peer\");\n    inner.remove_session(test_peer.session_id());\n\n    let sess_ctx = SessionContext::make(\n        SessionId::new(99),\n        test_peer.multiaddrs.all_raw().pop().expect(\"get multiaddr\"),\n        SessionType::Outbound,\n        test_peer.owned_pubkey().expect(\"pubkey\"),\n    );\n    let new_session = PeerManagerEvent::NewSession {\n        pid:    test_peer.owned_id(),\n        pubkey: test_peer.owned_pubkey().expect(\"pubkey\"),\n        ctx:    sess_ctx.arced(),\n    };\n    mgr.poll_event(new_session).await;\n\n    assert_eq!(inner.connected(), 1, \"should not increase conn count\");\n    assert_eq!(\n        test_peer.session_id(),\n        99.into(),\n        \"should update session id\"\n    );\n\n    match conn_rx.try_next() {\n        Err(_) => (), // Err means channel is empty, it's expected\n        _ => panic!(\"should not have any connection event\"),\n    }\n}\n\n#[tokio::test]\nasync fn should_reject_new_connections_when_we_reach_max_connections_on_new_session() {\n    let (mut mgr, mut conn_rx) = make_manager(0, 10); // set max to 10\n    let 
_remote_peers = make_sessions(&mut mgr, 10, 7000, SessionType::Outbound).await;\n\n    let remote_pubkey = make_pubkey();\n    let remote_addr = make_multiaddr(2077, Some(remote_pubkey.peer_id()));\n\n    let sess_ctx = SessionContext::make(\n        SessionId::new(99),\n        remote_addr,\n        SessionType::Outbound,\n        remote_pubkey.clone(),\n    );\n    let new_session = PeerManagerEvent::NewSession {\n        pid:    remote_pubkey.peer_id(),\n        pubkey: remote_pubkey.clone(),\n        ctx:    sess_ctx.arced(),\n    };\n    mgr.poll_event(new_session).await;\n\n    let inner = mgr.core_inner();\n    assert_eq!(inner.connected(), 10, \"should not increase conn count\");\n\n    let conn_event = conn_rx.next().await.expect(\"should have disconnect event\");\n    match conn_event {\n        ConnectionEvent::Disconnect(sid) => assert_eq!(sid, 99.into(), \"should be new session id\"),\n        _ => panic!(\"should be disconnect event\"),\n    }\n}\n\n#[tokio::test]\nasync fn should_remove_connecting_even_if_session_is_reject_due_to_reach_max_connections_on_new_session(\n) {\n    let (mut mgr, mut conn_rx) = make_manager(0, 5); // set max to 5\n    let _remote_peers = make_sessions(&mut mgr, 5, 7000, SessionType::Outbound).await;\n\n    let test_peer = make_peer(2020);\n    let inner = mgr.core_inner();\n    inner.add_peer(test_peer.clone());\n    mgr.connecting_mut()\n        .insert(ConnectingAttempt::new(test_peer.clone()));\n    assert_eq!(mgr.connecting().len(), 1, \"should have one attempt\");\n\n    let sess_ctx = SessionContext::make(\n        SessionId::new(99),\n        test_peer.multiaddrs.all_raw().pop().expect(\"multiaddr\"),\n        SessionType::Outbound,\n        test_peer.owned_pubkey().expect(\"pubkey\"),\n    );\n    let new_session = PeerManagerEvent::NewSession {\n        pid:    test_peer.owned_id(),\n        pubkey: test_peer.owned_pubkey().expect(\"pubkey\"),\n        ctx:    sess_ctx.arced(),\n    };\n    
mgr.poll_event(new_session).await;\n\n    assert_eq!(inner.connected(), 5, \"should not increase conn count\");\n    assert_eq!(\n        mgr.connecting().len(),\n        0,\n        \"should remove connecting attempt\"\n    );\n\n    let conn_event = conn_rx.next().await.expect(\"should have disconnect event\");\n    match conn_event {\n        ConnectionEvent::Disconnect(sid) => assert_eq!(sid, 99.into(), \"should be new session id\"),\n        _ => panic!(\"should be disconnect event\"),\n    }\n}\n\n#[tokio::test]\nasync fn should_reject_banned_peer_on_new_session() {\n    let (mut mgr, mut conn_rx) = make_manager(0, 10);\n    let test_peer = make_peer(2077);\n\n    let inner = mgr.core_inner();\n    inner.add_peer(test_peer.clone());\n\n    test_peer\n        .tags\n        .insert_ban(Duration::from_secs(10))\n        .expect(\"insert ban tag\");\n    assert!(test_peer.banned(), \"should be banned\");\n\n    let sess_ctx = SessionContext::make(\n        SessionId::new(99),\n        test_peer.multiaddrs.all_raw().pop().expect(\"multiaddr\"),\n        SessionType::Outbound,\n        test_peer.owned_pubkey().expect(\"pubkey\"),\n    );\n    let new_session = PeerManagerEvent::NewSession {\n        pid:    test_peer.owned_id(),\n        pubkey: test_peer.owned_pubkey().expect(\"pubkey\"),\n        ctx:    sess_ctx.arced(),\n    };\n    mgr.poll_event(new_session).await;\n\n    assert_eq!(inner.connected(), 0, \"should not increase conn count\");\n    let conn_event = conn_rx.next().await.expect(\"should have disconnect event\");\n    match conn_event {\n        ConnectionEvent::Disconnect(sid) => assert_eq!(sid, 99.into(), \"should be new session id\"),\n        _ => panic!(\"should be disconnect event\"),\n    }\n}\n\n#[tokio::test]\nasync fn should_start_trust_metric_on_connected_peer_on_new_session() {\n    let (mut mgr, _conn_rx) = make_manager(0, 10);\n    let test_peer = make_peer(2077);\n\n    let inner = mgr.core_inner();\n    
inner.add_peer(test_peer.clone());\n\n    let sess_ctx = SessionContext::make(\n        SessionId::new(99),\n        test_peer.multiaddrs.all_raw().pop().expect(\"multiaddr\"),\n        SessionType::Outbound,\n        test_peer.owned_pubkey().expect(\"pubkey\"),\n    );\n    let new_session = PeerManagerEvent::NewSession {\n        pid:    test_peer.owned_id(),\n        pubkey: test_peer.owned_pubkey().expect(\"pubkey\"),\n        ctx:    sess_ctx.arced(),\n    };\n    mgr.poll_event(new_session).await;\n\n    let trust_metric = test_peer.trust_metric().expect(\"trust metric\");\n    assert!(trust_metric.is_started(), \"should start trust metric\");\n}\n\n#[tokio::test]\nasync fn should_replace_low_quality_peer_with_better_one_due_to_max_connections_on_new_session() {\n    let (mut mgr, mut conn_rx) = make_manager(0, 10);\n    let remote_peers = make_sessions(&mut mgr, 10, 5000, SessionType::Outbound).await;\n    let target_peer = remote_peers.first().expect(\"get first peer\");\n    let peer_trust_config = Arc::new(TrustMetricConfig::default());\n\n    let inner = mgr.core_inner();\n    assert_eq!(inner.connected(), 10, \"should reach max connections\");\n\n    if let Some(metric) = target_peer.trust_metric() {\n        for _ in 0..30 {\n            metric.good_events(1);\n            metric.bad_events(1);\n            metric.enter_new_interval();\n        }\n\n        assert!(metric.trust_score() < 80, \"should be less than 80\");\n    }\n\n    // Update alive, only old enough peer can be replaced\n    target_peer.set_alive(peer_trust_config.interval().as_secs() * 20 + 20);\n\n    let test_peer = make_peer(2077);\n    inner.add_peer(test_peer.clone());\n    let trust_metric = TrustMetric::new(Arc::clone(&peer_trust_config));\n    test_peer.set_trust_metric(trust_metric.clone());\n    for _ in 0..10 {\n        trust_metric.good_events(1);\n        trust_metric.enter_new_interval();\n    }\n    assert!(trust_metric.trust_score() > 90, \"should have better 
score\");\n\n    let sess_ctx = SessionContext::make(\n        SessionId::new(99),\n        test_peer.multiaddrs.all_raw().pop().expect(\"multiaddr\"),\n        SessionType::Outbound,\n        test_peer.owned_pubkey().expect(\"pubkey\"),\n    );\n    let new_session = PeerManagerEvent::NewSession {\n        pid:    test_peer.owned_id(),\n        pubkey: test_peer.owned_pubkey().expect(\"pubkey\"),\n        ctx:    sess_ctx.arced(),\n    };\n    mgr.poll_event(new_session).await;\n\n    let target_sid = target_peer.session_id();\n    let conn_event = conn_rx.next().await.expect(\"should have disconnect event\");\n    match conn_event {\n        ConnectionEvent::Disconnect(sid) => {\n            assert_eq!(sid, target_sid, \"should be replaced session id\")\n        }\n        _ => panic!(\"should be disconnect event\"),\n    }\n}\n\n#[tokio::test]\nasync fn should_not_replace_any_peer_if_incoming_hasnt_trust_score_due_to_max_connections_on_new_session(\n) {\n    let (mut mgr, mut conn_rx) = make_manager(0, 10);\n    let remote_peers = make_sessions(&mut mgr, 10, 5000, SessionType::Outbound).await;\n    let target_peer = remote_peers.first().expect(\"get first peer\");\n    let peer_trust_config = Arc::new(TrustMetricConfig::default());\n\n    let inner = mgr.core_inner();\n    assert_eq!(inner.connected(), 10, \"should reach max connections\");\n\n    if let Some(metric) = target_peer.trust_metric() {\n        for _ in 0..30 {\n            metric.good_events(1);\n            metric.bad_events(1);\n            metric.enter_new_interval();\n        }\n\n        assert!(metric.trust_score() < 80, \"should less than 80\");\n    }\n\n    // Update alive, only old enough peer can be replaced\n    target_peer.set_alive(peer_trust_config.interval().as_secs() * 20 + 20);\n\n    let test_peer = make_peer(2077);\n    inner.add_peer(test_peer.clone());\n\n    let sess_ctx = SessionContext::make(\n        SessionId::new(99),\n        
test_peer.multiaddrs.all_raw().pop().expect(\"multiaddr\"),\n        SessionType::Outbound,\n        test_peer.owned_pubkey().expect(\"pubkey\"),\n    );\n    let new_session = PeerManagerEvent::NewSession {\n        pid:    test_peer.owned_id(),\n        pubkey: test_peer.owned_pubkey().expect(\"pubkey\"),\n        ctx:    sess_ctx.arced(),\n    };\n    mgr.poll_event(new_session).await;\n\n    let conn_event = conn_rx.next().await.expect(\"should have disconnect event\");\n    match conn_event {\n        ConnectionEvent::Disconnect(sid) => {\n            assert_eq!(sid, 99.into(), \"should be new session id\")\n        }\n        _ => panic!(\"should be disconnect event\"),\n    }\n}\n\n#[tokio::test]\nasync fn should_not_replace_any_higher_score_peer_due_to_max_connections_on_new_session() {\n    let (mut mgr, mut conn_rx) = make_manager(0, 10);\n    let remote_peers = make_sessions(&mut mgr, 10, 5000, SessionType::Outbound).await;\n    let target_peer = remote_peers.first().expect(\"get first peer\");\n    let peer_trust_config = Arc::new(TrustMetricConfig::default());\n\n    let inner = mgr.core_inner();\n    assert_eq!(inner.connected(), 10, \"should reach max connections\");\n\n    if let Some(metric) = target_peer.trust_metric() {\n        for _ in 0..30 {\n            metric.good_events(1);\n            metric.enter_new_interval();\n        }\n\n        assert!(metric.trust_score() > 90, \"should have better score\");\n    }\n\n    // Update alive, only old enough peer can be replaced\n    target_peer.set_alive(peer_trust_config.interval().as_secs() * 20 + 20);\n\n    let test_peer = make_peer(2077);\n    inner.add_peer(test_peer.clone());\n    let trust_metric = TrustMetric::new(Arc::clone(&peer_trust_config));\n    test_peer.set_trust_metric(trust_metric.clone());\n    for _ in 0..30 {\n        trust_metric.good_events(1);\n        trust_metric.bad_events(1);\n        trust_metric.enter_new_interval();\n    }\n    assert!(trust_metric.trust_score() 
< 90, \"should have lower score\");\n\n    let sess_ctx = SessionContext::make(\n        SessionId::new(99),\n        test_peer.multiaddrs.all_raw().pop().expect(\"multiaddr\"),\n        SessionType::Outbound,\n        test_peer.owned_pubkey().expect(\"pubkey\"),\n    );\n    let new_session = PeerManagerEvent::NewSession {\n        pid:    test_peer.owned_id(),\n        pubkey: test_peer.owned_pubkey().expect(\"pubkey\"),\n        ctx:    sess_ctx.arced(),\n    };\n    mgr.poll_event(new_session).await;\n\n    let conn_event = conn_rx.next().await.expect(\"should have disconnect event\");\n    match conn_event {\n        ConnectionEvent::Disconnect(sid) => {\n            assert_eq!(sid, 99.into(), \"should be new session id\")\n        }\n        _ => panic!(\"should be disconnect event\"),\n    }\n}\n\n#[tokio::test]\nasync fn should_not_replace_peer_in_allowlist_with_better_score_peer_due_to_max_connections_on_new_session(\n) {\n    let (mut mgr, mut conn_rx) = make_manager(0, 1);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n    let target_peer = remote_peers.first().expect(\"get first peer\");\n    let peer_trust_config = Arc::new(TrustMetricConfig::default());\n\n    let inner = mgr.core_inner();\n    assert_eq!(inner.connected(), 1, \"should reach max connections\");\n\n    if let Some(metric) = target_peer.trust_metric() {\n        for _ in 0..30 {\n            metric.good_events(1);\n            metric.bad_events(1);\n            metric.enter_new_interval();\n        }\n\n        assert!(metric.trust_score() < 80, \"should be less than 80\");\n    }\n\n    // Update alive, only old enough peer can be replaced\n    target_peer.set_alive(peer_trust_config.interval().as_secs() * 20 + 20);\n    // Add always allow tag\n    target_peer.tags.insert(PeerTag::AlwaysAllow).unwrap();\n\n    let test_peer = make_peer(2077);\n    inner.add_peer(test_peer.clone());\n    let trust_metric = 
TrustMetric::new(Arc::clone(&peer_trust_config));\n    test_peer.set_trust_metric(trust_metric.clone());\n    for _ in 0..10 {\n        trust_metric.good_events(1);\n        trust_metric.enter_new_interval();\n    }\n    assert!(trust_metric.trust_score() > 90, \"should have better score\");\n\n    let sess_ctx = SessionContext::make(\n        SessionId::new(99),\n        test_peer.multiaddrs.all_raw().pop().expect(\"multiaddr\"),\n        SessionType::Outbound,\n        test_peer.owned_pubkey().expect(\"pubkey\"),\n    );\n    let new_session = PeerManagerEvent::NewSession {\n        pid:    test_peer.owned_id(),\n        pubkey: test_peer.owned_pubkey().expect(\"pubkey\"),\n        ctx:    sess_ctx.arced(),\n    };\n    mgr.poll_event(new_session).await;\n\n    let conn_event = conn_rx.next().await.expect(\"should have disconnect event\");\n    match conn_event {\n        ConnectionEvent::Disconnect(sid) => {\n            assert_eq!(sid, 99.into(), \"should be new session id\")\n        }\n        _ => panic!(\"should be disconnect event\"),\n    }\n}\n\n#[tokio::test]\nasync fn should_not_replace_peer_not_old_enough_due_to_max_connections_on_new_session() {\n    let (mut mgr, mut conn_rx) = make_manager(0, 10);\n    let remote_peers = make_sessions(&mut mgr, 10, 5000, SessionType::Outbound).await;\n    let target_peer = remote_peers.first().expect(\"get first peer\");\n    let peer_trust_config = Arc::new(TrustMetricConfig::default());\n\n    let inner = mgr.core_inner();\n    assert_eq!(inner.connected(), 10, \"should reach max connections\");\n\n    if let Some(metric) = target_peer.trust_metric() {\n        for _ in 0..30 {\n            metric.good_events(1);\n            metric.bad_events(1);\n            metric.enter_new_interval();\n        }\n\n        assert!(metric.trust_score() < 80, \"should be less than 80\");\n    }\n\n    let test_peer = make_peer(2077);\n    inner.add_peer(test_peer.clone());\n    let trust_metric = 
TrustMetric::new(Arc::clone(&peer_trust_config));\n    test_peer.set_trust_metric(trust_metric.clone());\n    for _ in 0..10 {\n        trust_metric.good_events(1);\n        trust_metric.enter_new_interval();\n    }\n    assert!(trust_metric.trust_score() > 90, \"should have better score\");\n\n    let sess_ctx = SessionContext::make(\n        SessionId::new(99),\n        test_peer.multiaddrs.all_raw().pop().expect(\"multiaddr\"),\n        SessionType::Outbound,\n        test_peer.owned_pubkey().expect(\"pubkey\"),\n    );\n    let new_session = PeerManagerEvent::NewSession {\n        pid:    test_peer.owned_id(),\n        pubkey: test_peer.owned_pubkey().expect(\"pubkey\"),\n        ctx:    sess_ctx.arced(),\n    };\n    mgr.poll_event(new_session).await;\n\n    let conn_event = conn_rx.next().await.expect(\"should have disconnect event\");\n    match conn_event {\n        ConnectionEvent::Disconnect(sid) => {\n            assert_eq!(sid, 99.into(), \"should be new session id\")\n        }\n        _ => panic!(\"should be disconnect event\"),\n    }\n}\n\n#[tokio::test]\nasync fn should_remove_session_on_session_closed() {\n    let (mut mgr, _conn_rx) = make_manager(2, 20);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n\n    let test_peer = remote_peers.first().expect(\"get first peer\");\n    assert_eq!(\n        test_peer.retry.count(),\n        0,\n        \"should reset retry after connect\"\n    );\n    // Set connected at to older timestamp to increase peer alive\n    test_peer.set_connected_at(time::now() - SHORT_ALIVE_SESSION - 1);\n\n    let session_closed = PeerManagerEvent::SessionClosed {\n        pid: test_peer.owned_id(),\n        sid: test_peer.session_id(),\n    };\n    mgr.poll_event(session_closed).await;\n\n    let inner = mgr.core_inner();\n    assert_eq!(inner.connected(), 0, \"should have zero connected\");\n    assert_eq!(inner.share_sessions().len(), 0, \"should have no session\");\n    
assert_eq!(\n        test_peer.connectedness(),\n        Connectedness::CanConnect,\n        \"should set peer connectedness to CanConnect since we haven't reached max connections\"\n    );\n    assert_eq!(test_peer.retry.count(), 0, \"should keep retry at 0\");\n}\n\n#[tokio::test]\nasync fn should_not_reconnect_to_closed_session_immediately_after_session_closed() {\n    let (mut mgr, _conn_rx) = make_manager(2, 20);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n\n    let test_peer = remote_peers.first().expect(\"get first peer\");\n    assert_eq!(\n        test_peer.retry.count(),\n        0,\n        \"should reset retry after connect\"\n    );\n    // Set connected at to older timestamp to increase peer alive\n    test_peer.set_connected_at(time::now() - SHORT_ALIVE_SESSION - 1);\n\n    let session_closed = PeerManagerEvent::SessionClosed {\n        pid: test_peer.owned_id(),\n        sid: test_peer.session_id(),\n    };\n    mgr.poll_event(session_closed).await;\n\n    let inner = mgr.core_inner();\n    let random_short_ban = {\n        let opt_banned = test_peer.tags.get_banned_until();\n        opt_banned.expect(\"should have a random short ban\")\n    };\n    assert_eq!(inner.connected(), 0, \"should have zero connected\");\n    assert_eq!(inner.share_sessions().len(), 0, \"should have no session\");\n    assert!(\n        random_short_ban <= (time::now() + MAX_RANDOM_NEXT_RETRY),\n        \"should have a random short ban, so we don't reconnect to this peer immediately\"\n    );\n    assert_eq!(\n        mgr.connecting().len(),\n        0,\n        \"should not reconnect immediately\"\n    );\n    assert_eq!(\n        test_peer.connectedness(),\n        Connectedness::CanConnect,\n        \"should set peer connectedness to CanConnect since we haven't reached max connections\"\n    );\n    assert_eq!(test_peer.retry.count(), 0, \"should keep retry at 0\");\n}\n\n#[tokio::test]\nasync fn 
should_pause_trust_metric_on_session_closed() {\n    let (mut mgr, _conn_rx) = make_manager(2, 20);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n    let test_peer = remote_peers.first().expect(\"get first\");\n\n    let session_closed = PeerManagerEvent::SessionClosed {\n        pid: test_peer.owned_id(),\n        sid: test_peer.session_id(),\n    };\n    mgr.poll_event(session_closed).await;\n\n    let trust_metric = test_peer.trust_metric().expect(\"get trust metric\");\n    assert!(!trust_metric.is_started(), \"should pause trust metric\");\n}\n\n#[tokio::test]\nasync fn should_increase_retry_for_short_alive_session_on_session_closed() {\n    let (mut mgr, _conn_rx) = make_manager(2, 20);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n    let test_peer = remote_peers.first().expect(\"get first peer\");\n    assert_eq!(\n        test_peer.retry.count(),\n        0,\n        \"should reset retry after connect\"\n    );\n\n    let session_closed = PeerManagerEvent::SessionClosed {\n        pid: test_peer.owned_id(),\n        sid: test_peer.session_id(),\n    };\n    mgr.poll_event(session_closed).await;\n\n    let inner = mgr.core_inner();\n    assert_eq!(\n        inner.connected(),\n        0,\n        \"should have no session because of retry\"\n    );\n    assert_eq!(inner.share_sessions().len(), 0, \"should have no session\");\n    assert_eq!(test_peer.connectedness(), Connectedness::CanConnect);\n    assert!(\n        test_peer.retry.eta() > REPEATED_CONNECTION_TIMEOUT,\n        \"should increase retry count enough to cover repeated connection timeout\"\n    );\n}\n\n#[tokio::test]\nasync fn should_properly_update_peer_state_even_if_session_not_found_on_session_closed() {\n    let (mut mgr, _conn_rx) = make_manager(2, 20);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n    let test_peer = remote_peers.first().expect(\"get first 
peer\");\n\n    let inner = mgr.core_inner();\n    assert_eq!(inner.connected(), 1, \"should have one session\");\n\n    inner.remove_session(test_peer.session_id());\n    assert_eq!(inner.connected(), 0, \"should have no session\");\n\n    let session_closed = PeerManagerEvent::SessionClosed {\n        pid: test_peer.owned_id(),\n        sid: test_peer.session_id(),\n    };\n    mgr.poll_event(session_closed).await;\n\n    assert_eq!(test_peer.connectedness(), Connectedness::CanConnect);\n}\n\n#[tokio::test]\nasync fn should_inc_peer_multiaddr_failure_count_for_io_error_on_connect_failed() {\n    let (mut mgr, _conn_rx) = make_manager(1, 20);\n\n    let inner = mgr.core_inner();\n    let test_peer = make_peer(2077);\n    let test_multiaddr = test_peer.multiaddrs.all().pop().expect(\"multiaddr\");\n\n    inner.add_peer(test_peer.clone());\n    mgr.connecting_mut()\n        .insert(ConnectingAttempt::new(test_peer.clone()));\n\n    let connect_failed = PeerManagerEvent::ConnectFailed {\n        addr: (*test_multiaddr).to_owned(),\n        kind: ConnectionErrorKind::Io(std::io::ErrorKind::Other.into()),\n    };\n    mgr.poll_event(connect_failed).await;\n\n    assert_eq!(\n        test_peer.multiaddrs.failure(&test_multiaddr),\n        Some(1),\n        \"should increase failure count\"\n    );\n}\n\n#[tokio::test]\nasync fn should_inc_peer_multiaddr_failure_count_for_dns_error_on_connect_failed() {\n    let (mut mgr, _conn_rx) = make_manager(1, 20);\n\n    let inner = mgr.core_inner();\n    let test_peer = make_peer(2077);\n    let test_multiaddr = test_peer.multiaddrs.all().pop().expect(\"multiaddr\");\n\n    inner.add_peer(test_peer.clone());\n    mgr.connecting_mut()\n        .insert(ConnectingAttempt::new(test_peer.clone()));\n\n    let connect_failed = PeerManagerEvent::ConnectFailed {\n        addr: (*test_multiaddr).to_owned(),\n        kind: ConnectionErrorKind::DNSResolver(Box::new(std::io::Error::from(\n            std::io::ErrorKind::Other,\n        )) as 
Box<dyn std::error::Error + Send>),\n    };\n    mgr.poll_event(connect_failed).await;\n\n    assert_eq!(\n        test_peer.multiaddrs.failure(&test_multiaddr),\n        Some(1),\n        \"should increase failure count\"\n    );\n}\n\n#[tokio::test]\nasync fn should_give_up_peer_multiaddr_if_peer_id_not_match_on_connect_failed() {\n    let (mut mgr, _conn_rx) = make_manager(1, 20);\n\n    let inner = mgr.core_inner();\n    let test_peer = make_peer(2077);\n    let test_multiaddr = test_peer.multiaddrs.all().pop().expect(\"multiaddr\");\n    assert_eq!(\n        test_peer.multiaddrs.connectable_len(),\n        1,\n        \"should have one connectable multiaddr\"\n    );\n\n    inner.add_peer(test_peer.clone());\n    mgr.connecting_mut()\n        .insert(ConnectingAttempt::new(test_peer.clone()));\n\n    let connect_failed = PeerManagerEvent::ConnectFailed {\n        addr: (*test_multiaddr).to_owned(),\n        kind: ConnectionErrorKind::PeerIdNotMatch,\n    };\n    mgr.poll_event(connect_failed).await;\n\n    assert_eq!(\n        test_peer.multiaddrs.connectable_len(),\n        0,\n        \"should not have any connectable multiaddr\"\n    );\n}\n\n#[tokio::test]\nasync fn should_give_up_peer_itself_if_secio_handshake_error_on_connect_failed() {\n    let (mut mgr, _conn_rx) = make_manager(1, 20);\n\n    let inner = mgr.core_inner();\n    let test_peer = make_peer(2077);\n    let test_multiaddr = test_peer.multiaddrs.all().pop().expect(\"multiaddr\");\n\n    inner.add_peer(test_peer.clone());\n    mgr.connecting_mut()\n        .insert(ConnectingAttempt::new(test_peer.clone()));\n\n    let connect_failed = PeerManagerEvent::ConnectFailed {\n        addr: (*test_multiaddr).to_owned(),\n        kind: ConnectionErrorKind::SecioHandshake(Box::new(std::io::Error::from(\n            std::io::ErrorKind::Other,\n        )) as Box<dyn std::error::Error + Send>),\n    };\n    mgr.poll_event(connect_failed).await;\n\n    assert_eq!(test_peer.connectedness(), 
Connectedness::Unconnectable);\n}\n\n#[tokio::test]\nasync fn should_give_up_peer_itself_if_protocol_handle_error_on_connect_failed() {\n    let (mut mgr, _conn_rx) = make_manager(1, 20);\n\n    let inner = mgr.core_inner();\n    let test_peer = make_peer(2077);\n    let test_multiaddr = test_peer.multiaddrs.all().pop().expect(\"multiaddr\");\n\n    inner.add_peer(test_peer.clone());\n    mgr.connecting_mut()\n        .insert(ConnectingAttempt::new(test_peer.clone()));\n\n    let connect_failed = PeerManagerEvent::ConnectFailed {\n        addr: (*test_multiaddr).to_owned(),\n        kind: ConnectionErrorKind::ProtocolHandle,\n    };\n    mgr.poll_event(connect_failed).await;\n\n    assert_eq!(test_peer.connectedness(), Connectedness::Unconnectable);\n}\n\n#[tokio::test]\nasync fn should_increase_peer_retry_if_all_multiaddrs_failed_on_connect_failed() {\n    let (mut mgr, _conn_rx) = make_manager(0, 20);\n\n    let inner = mgr.core_inner();\n    let test_peer = make_peer(2077);\n    let test_multiaddr = test_peer.multiaddrs.all().pop().expect(\"multiaddr\");\n\n    inner.add_peer(test_peer.clone());\n    mgr.connecting_mut()\n        .insert(ConnectingAttempt::new(test_peer.clone()));\n\n    let connect_failed = PeerManagerEvent::ConnectFailed {\n        addr: (*test_multiaddr).to_owned(),\n        kind: ConnectionErrorKind::Io(std::io::ErrorKind::Other.into()),\n    };\n    mgr.poll_event(connect_failed).await;\n\n    assert_eq!(mgr.connecting().len(), 0, \"should not have any connecting\");\n    assert_eq!(test_peer.retry.count(), 1, \"should have 1 retry\");\n    assert_eq!(test_peer.connectedness(), Connectedness::CanConnect);\n}\n\n#[tokio::test]\nasync fn should_give_up_peer_if_run_out_retry_on_connect_failed() {\n    let (mut mgr, _conn_rx) = make_manager(0, 20);\n\n    let inner = mgr.core_inner();\n    let test_peer = make_peer(2077);\n    let test_multiaddr = test_peer.multiaddrs.all().pop().expect(\"multiaddr\");\n\n    inner.add_peer(test_peer.clone());\n 
   mgr.connecting_mut()\n        .insert(ConnectingAttempt::new(test_peer.clone()));\n\n    test_peer.retry.set(MAX_RETRY_COUNT);\n    let connect_failed = PeerManagerEvent::ConnectFailed {\n        addr: (*test_multiaddr).to_owned(),\n        kind: ConnectionErrorKind::Io(std::io::ErrorKind::Other.into()),\n    };\n    mgr.poll_event(connect_failed).await;\n\n    assert_eq!(mgr.connecting().len(), 0, \"should not have any connecting\");\n    assert_eq!(\n        test_peer.retry.count(),\n        MAX_RETRY_COUNT + 1,\n        \"should exceed max retry\"\n    );\n    assert_eq!(test_peer.connectedness(), Connectedness::Unconnectable);\n}\n\n#[tokio::test]\nasync fn should_return_early_if_we_already_give_up_peer_on_connect_failed() {\n    let (mut mgr, _conn_rx) = make_manager(0, 20);\n\n    let inner = mgr.core_inner();\n    let test_peer = make_peer(2077);\n    let test_multiaddr = test_peer.multiaddrs.all().pop().expect(\"multiaddr\");\n\n    inner.add_peer(test_peer.clone());\n    mgr.connecting_mut()\n        .insert(ConnectingAttempt::new(test_peer.clone()));\n\n    let connect_failed = PeerManagerEvent::ConnectFailed {\n        addr: (*test_multiaddr).to_owned(),\n        kind: ConnectionErrorKind::ProtocolHandle,\n    };\n    mgr.poll_event(connect_failed).await;\n\n    assert_eq!(mgr.connecting().len(), 0, \"should not have any connecting\");\n    assert_eq!(test_peer.connectedness(), Connectedness::Unconnectable);\n    assert_eq!(test_peer.retry.count(), 0, \"should not touch peer retry\");\n}\n\n#[tokio::test]\nasync fn should_wait_for_other_connecting_multiaddrs_if_we_dont_give_up_peer_on_connect_failed() {\n    let (mut mgr, _conn_rx) = make_manager(0, 20);\n\n    let inner = mgr.core_inner();\n    let test_peer = make_peer(2077);\n    let test_multiaddr = test_peer.multiaddrs.all().pop().expect(\"multiaddr\");\n    test_peer\n        .multiaddrs\n        .insert(vec![make_peer_multiaddr(2020, test_peer.owned_id())]);\n    assert_eq!(\n        
test_peer.multiaddrs.connectable_len(),\n        2,\n        \"should have two connectable multiaddrs\"\n    );\n\n    inner.add_peer(test_peer.clone());\n    mgr.connecting_mut()\n        .insert(ConnectingAttempt::new(test_peer.clone()));\n\n    let attempt = mgr.connecting().iter().next().expect(\"attempt\");\n    assert_eq!(\n        attempt.multiaddrs(),\n        2,\n        \"should still have two connecting multiaddrs\"\n    );\n\n    let connect_failed = PeerManagerEvent::ConnectFailed {\n        addr: (*test_multiaddr).to_owned(),\n        kind: ConnectionErrorKind::Io(std::io::ErrorKind::Other.into()),\n    };\n    mgr.poll_event(connect_failed).await;\n\n    assert_eq!(mgr.connecting().len(), 1, \"should still have one connecting attempt\");\n\n    let attempt = mgr.connecting().iter().next().expect(\"attempt\");\n    assert_eq!(\n        attempt.multiaddrs(),\n        1,\n        \"should still have one connecting multiaddr\"\n    );\n}\n\n#[tokio::test]\nasync fn should_ensure_disconnect_session_on_session_failed() {\n    let (mut mgr, mut conn_rx) = make_manager(0, 20);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n\n    let test_peer = remote_peers.first().expect(\"get first peer\");\n    let expect_sid = test_peer.session_id();\n    let session_failed = PeerManagerEvent::SessionFailed {\n        sid:  expect_sid,\n        kind: SessionErrorKind::Io(std::io::ErrorKind::Other.into()),\n    };\n    mgr.poll_event(session_failed).await;\n\n    let inner = mgr.core_inner();\n    assert_eq!(inner.share_sessions().len(), 0, \"should disconnect session\");\n    assert_eq!(inner.connected(), 0, \"should disconnect session\");\n    assert_eq!(\n        test_peer.connectedness(),\n        Connectedness::CanConnect,\n        \"should disconnect peer\"\n    );\n\n    let conn_event = conn_rx.next().await.expect(\"should have disconnect event\");\n    match conn_event {\n        ConnectionEvent::Disconnect(sid) => {\n            
assert_eq!(sid, expect_sid, \"should disconnect session\")\n        }\n        _ => panic!(\"should be disconnect event\"),\n    }\n}\n\n#[tokio::test]\nasync fn should_increase_retry_for_io_error_on_session_failed() {\n    let (mut mgr, _conn_rx) = make_manager(0, 20);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n\n    let test_peer = remote_peers.first().expect(\"get first peer\");\n    let expect_sid = test_peer.session_id();\n    let session_failed = PeerManagerEvent::SessionFailed {\n        sid:  expect_sid,\n        kind: SessionErrorKind::Io(std::io::ErrorKind::Other.into()),\n    };\n    mgr.poll_event(session_failed).await;\n\n    let inner = mgr.core_inner();\n    assert_eq!(inner.connected(), 0, \"should disconnect session\");\n    assert_eq!(test_peer.retry.count(), 1, \"should increase retry by one\");\n}\n\n#[tokio::test]\nasync fn should_give_up_peer_for_protocol_error_on_session_failed() {\n    let (mut mgr, _conn_rx) = make_manager(0, 20);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n\n    let test_peer = remote_peers.first().expect(\"get first peer\");\n    let expect_sid = test_peer.session_id();\n    let session_failed = PeerManagerEvent::SessionFailed {\n        sid:  expect_sid,\n        kind: SessionErrorKind::Protocol {\n            identity: None,\n            cause:    None,\n        },\n    };\n    mgr.poll_event(session_failed).await;\n\n    let inner = mgr.core_inner();\n    assert_eq!(inner.connected(), 0, \"should disconnect session\");\n    assert_eq!(\n        test_peer.connectedness(),\n        Connectedness::Unconnectable,\n        \"should give up peer\"\n    );\n}\n\n#[tokio::test]\nasync fn should_give_up_peer_for_unexpected_error_on_session_failed() {\n    let (mut mgr, _conn_rx) = make_manager(0, 20);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n\n    let test_peer = remote_peers.first().expect(\"get 
first peer\");\n    let expect_sid = test_peer.session_id();\n    let session_failed = PeerManagerEvent::SessionFailed {\n        sid:  expect_sid,\n        kind: SessionErrorKind::Unexpected(\n            Box::new(std::io::Error::from(std::io::ErrorKind::Other))\n                as Box<dyn std::error::Error + Send>,\n        ),\n    };\n    mgr.poll_event(session_failed).await;\n\n    let inner = mgr.core_inner();\n    assert_eq!(inner.connected(), 0, \"should disconnect session\");\n    assert_eq!(\n        test_peer.connectedness(),\n        Connectedness::Unconnectable,\n        \"should give up peer\"\n    );\n}\n\n#[tokio::test]\nasync fn should_reduce_trust_score_on_session_failed() {\n    let (mut mgr, _conn_rx) = make_manager(0, 20);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n    let test_peer = remote_peers.first().expect(\"get first peer\");\n\n    let trust_metric = test_peer.trust_metric().expect(\"get trust metric\");\n    for _ in 0..10 {\n        trust_metric.good_events(1);\n        trust_metric.enter_new_interval();\n    }\n    let before_failed_score = trust_metric.trust_score();\n    assert!(before_failed_score > 90, \"should have high trust score\");\n\n    let expect_sid = test_peer.session_id();\n    let session_failed = PeerManagerEvent::SessionFailed {\n        sid:  expect_sid,\n        kind: SessionErrorKind::Io(std::io::ErrorKind::Other.into()),\n    };\n    mgr.poll_event(session_failed).await;\n\n    assert!(\n        trust_metric.trust_score() < before_failed_score,\n        \"should reduce trust score\"\n    )\n}\n\n#[tokio::test]\nasync fn should_update_peer_alive_on_peer_alive() {\n    let (mut mgr, _conn_rx) = make_manager(0, 20);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n\n    let test_peer = remote_peers.first().expect(\"get first peer\");\n    let old_alive = test_peer.alive();\n\n    // Set connected at to older timestamp to increase peer alive\n  
  test_peer.set_connected_at(time::now() - SHORT_ALIVE_SESSION - 1);\n\n    let peer_alive = PeerManagerEvent::PeerAlive {\n        pid: test_peer.owned_id(),\n    };\n    mgr.poll_event(peer_alive).await;\n\n    assert_eq!(\n        test_peer.alive(),\n        old_alive + SHORT_ALIVE_SESSION + 1,\n        \"should update peer alive\"\n    );\n}\n\n#[tokio::test]\nasync fn should_reset_peer_retry_on_peer_alive() {\n    let (mut mgr, _conn_rx) = make_manager(0, 20);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n\n    let test_peer = remote_peers.first().expect(\"get first peer\");\n    assert_eq!(test_peer.retry.count(), 0, \"should have 0 retry\");\n\n    test_peer.retry.inc();\n    assert_eq!(test_peer.retry.count(), 1, \"should now have 1 retry\");\n\n    let peer_alive = PeerManagerEvent::PeerAlive {\n        pid: test_peer.owned_id(),\n    };\n    mgr.poll_event(peer_alive).await;\n\n    assert_eq!(test_peer.retry.count(), 0, \"should reset retry\");\n}\n\n#[tokio::test]\nasync fn should_disconnect_peer_on_misbehave() {\n    let (mut mgr, mut conn_rx) = make_manager(0, 20);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n\n    let test_peer = remote_peers.first().expect(\"get first peer\");\n    let expect_sid = test_peer.session_id();\n    let peer_misbehave = PeerManagerEvent::Misbehave {\n        pid:  test_peer.owned_id(),\n        kind: MisbehaviorKind::PingTimeout,\n    };\n    mgr.poll_event(peer_misbehave).await;\n\n    let inner = mgr.core_inner();\n    assert_eq!(inner.connected(), 0, \"should disconnect session\");\n    assert_eq!(inner.share_sessions().len(), 0, \"should disconnect session\");\n\n    let conn_event = conn_rx.next().await.expect(\"should have disconnect event\");\n    match conn_event {\n        ConnectionEvent::Disconnect(sid) => {\n            assert_eq!(sid, expect_sid, \"should disconnect session\")\n        }\n        _ => panic!(\"should be 
disconnect event\"),\n    }\n}\n\n#[tokio::test]\nasync fn should_reduce_trust_score_on_misbehave() {\n    let (mut mgr, _conn_rx) = make_manager(0, 20);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n    let test_peer = remote_peers.first().expect(\"get first peer\");\n\n    let trust_metric = test_peer.trust_metric().expect(\"get trust metric\");\n    for _ in 0..10 {\n        trust_metric.good_events(1);\n        trust_metric.enter_new_interval();\n    }\n    let before_failed_score = trust_metric.trust_score();\n    assert!(before_failed_score > 90, \"should have a high trust score\");\n\n    let peer_misbehave = PeerManagerEvent::Misbehave {\n        pid:  test_peer.owned_id(),\n        kind: MisbehaviorKind::PingTimeout,\n    };\n    mgr.poll_event(peer_misbehave).await;\n\n    assert!(\n        trust_metric.trust_score() < before_failed_score,\n        \"should reduce trust score\"\n    )\n}\n\n#[tokio::test]\nasync fn should_increase_retry_for_ping_timeout_on_misbehave() {\n    let (mut mgr, _conn_rx) = make_manager(0, 20);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n\n    let test_peer = remote_peers.first().expect(\"get first peer\");\n    let peer_misbehave = PeerManagerEvent::Misbehave {\n        pid:  test_peer.owned_id(),\n        kind: MisbehaviorKind::PingTimeout,\n    };\n    mgr.poll_event(peer_misbehave).await;\n\n    let inner = mgr.core_inner();\n    assert_eq!(inner.connected(), 0, \"should disconnect session\");\n    assert_eq!(test_peer.retry.count(), 1, \"should increase retry\");\n}\n\n#[tokio::test]\nasync fn should_give_up_peer_for_ping_unexpect_on_misbehave() {\n    let (mut mgr, _conn_rx) = make_manager(0, 20);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n\n    let test_peer = remote_peers.first().expect(\"get first peer\");\n    let peer_misbehave = PeerManagerEvent::Misbehave {\n        pid:  test_peer.owned_id(),\n     
   kind: MisbehaviorKind::PingUnexpect,\n    };\n    mgr.poll_event(peer_misbehave).await;\n\n    let inner = mgr.core_inner();\n    assert_eq!(inner.connected(), 0, \"should disconnect session\");\n    assert_eq!(\n        test_peer.connectedness(),\n        Connectedness::Unconnectable,\n        \"should give up peer\"\n    );\n}\n\n#[tokio::test]\nasync fn should_give_up_peer_for_discovery_on_misbehave() {\n    let (mut mgr, _conn_rx) = make_manager(0, 20);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n\n    let test_peer = remote_peers.first().expect(\"get first peer\");\n    let peer_misbehave = PeerManagerEvent::Misbehave {\n        pid:  test_peer.owned_id(),\n        kind: MisbehaviorKind::Discovery,\n    };\n    mgr.poll_event(peer_misbehave).await;\n\n    let inner = mgr.core_inner();\n    assert_eq!(inner.connected(), 0, \"should disconnect session\");\n    assert_eq!(\n        test_peer.connectedness(),\n        Connectedness::Unconnectable,\n        \"should give up peer\"\n    );\n}\n\n#[tokio::test]\nasync fn should_mark_session_blocked_on_session_blocked() {\n    let (mut mgr, _conn_rx) = make_manager(0, 20);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n\n    let test_peer = remote_peers.first().expect(\"get first peer\");\n    let sess_ctx = SessionContext::make(\n        test_peer.session_id(),\n        test_peer.multiaddrs.all_raw().pop().expect(\"get multiaddr\"),\n        SessionType::Outbound,\n        test_peer.owned_pubkey().expect(\"pubkey\"),\n    );\n    let session_blocked = PeerManagerEvent::SessionBlocked {\n        ctx: sess_ctx.arced(),\n    };\n    mgr.poll_event(session_blocked).await;\n\n    let inner = mgr.core_inner();\n    let session = inner\n        .session(test_peer.session_id())\n        .expect(\"should have a session\");\n    assert!(session.is_blocked(), \"should be blocked\");\n}\n\n#[tokio::test]\nasync fn 
should_add_one_bad_event_on_session_blocked() {\n    let (mut mgr, _conn_rx) = make_manager(0, 20);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n\n    let test_peer = remote_peers.first().expect(\"get first peer\");\n    let trust_metric = test_peer.trust_metric().expect(\"get trust metric\");\n\n    let sess_ctx = SessionContext::make(\n        test_peer.session_id(),\n        test_peer.multiaddrs.all_raw().pop().expect(\"get multiaddr\"),\n        SessionType::Outbound,\n        test_peer.owned_pubkey().expect(\"pubkey\"),\n    );\n    let session_blocked = PeerManagerEvent::SessionBlocked {\n        ctx: sess_ctx.arced(),\n    };\n    mgr.poll_event(session_blocked).await;\n\n    assert_eq!(\n        trust_metric.bad_events_count(),\n        1,\n        \"should add one bad event\"\n    );\n}\n\n#[tokio::test]\nasync fn should_try_all_peer_multiaddrs_on_connect_peers_now() {\n    let (mut mgr, mut conn_rx) = make_manager(0, 20);\n    let peers = (0..10)\n        .map(|port| {\n            // Every peer has two multiaddrs\n            let p = make_peer(port + 7000);\n            p.multiaddrs\n                .insert(vec![make_peer_multiaddr(port + 8000, p.owned_id())]);\n            p\n        })\n        .collect::<Vec<_>>();\n\n    let inner = mgr.core_inner();\n    for peer in peers.iter() {\n        inner.add_peer(peer.clone());\n    }\n\n    assert_eq!(\n        mgr.connecting().len(),\n        0,\n        \"should have 0 connecting attempts\"\n    );\n\n    let connect_peers = PeerManagerEvent::ConnectPeersNow {\n        pids: peers.iter().map(|p| p.owned_id()).collect(),\n    };\n    mgr.poll_event(connect_peers).await;\n\n    assert_eq!(\n        mgr.connecting().len(),\n        10,\n        \"should have all peers in connecting attempts\"\n    );\n\n    let conn_event = conn_rx.next().await.expect(\"should have connect event\");\n    let multiaddrs_in_event = match conn_event {\n        ConnectionEvent::Connect { addrs, .. } => addrs,\n        _ => panic!(\"should be connect event\"),\n    };\n\n    let expect_multiaddrs = peers\n        .into_iter()\n        .flat_map(|p| p.multiaddrs.all_raw())\n        .collect::<Vec<_>>();\n\n    assert_eq!(\n        multiaddrs_in_event.len(),\n        expect_multiaddrs.len(),\n        \"should have same number of multiaddrs\"\n    );\n\n    assert!(\n        !multiaddrs_in_event\n            .iter()\n            .any(|ma| !expect_multiaddrs.contains(ma)),\n        \"all multiaddrs should be included\"\n    );\n}\n\n#[tokio::test]\nasync fn should_skip_peers_not_in_can_connect_or_not_connected_connectedness_on_connect_peers_now()\n{\n    let (mut mgr, mut conn_rx) = make_manager(0, 20);\n    let peer_in_connected = make_peer(2020);\n    let peer_in_unconnectable = make_peer(2059);\n\n    peer_in_unconnectable.set_connectedness(Connectedness::Unconnectable);\n    peer_in_connected.set_connectedness(Connectedness::Connected);\n\n    let inner = mgr.core_inner();\n    inner.add_peer(peer_in_connected.clone());\n    inner.add_peer(peer_in_unconnectable.clone());\n\n    let connect_peers = PeerManagerEvent::ConnectPeersNow {\n        pids: vec![\n            peer_in_unconnectable.owned_id(),\n            peer_in_connected.owned_id(),\n        ],\n    };\n    mgr.poll_event(connect_peers).await;\n\n    match conn_rx.try_next() {\n        Err(_) => (), // Err means channel is empty, it's expected\n        _ => panic!(\"should not have any connection event\"),\n    }\n}\n\n#[tokio::test]\nasync fn should_connect_peers_even_if_they_are_not_retry_ready_on_connect_peers_now() {\n    let (mut mgr, mut conn_rx) = make_manager(0, 20);\n    let not_ready_peer = make_peer(2077);\n    not_ready_peer.retry.inc();\n\n    let inner = mgr.core_inner();\n    inner.add_peer(not_ready_peer.clone());\n\n    let connect_peers = PeerManagerEvent::ConnectPeersNow {\n        pids: vec![not_ready_peer.owned_id()],\n    };\n    
mgr.poll_event(connect_peers).await;\n\n    let conn_event = conn_rx.next().await.expect(\"should have connect event\");\n    let multiaddrs_in_event = match conn_event {\n        ConnectionEvent::Connect { addrs, .. } => addrs,\n        _ => panic!(\"should be connect event\"),\n    };\n\n    let expect_multiaddrs = not_ready_peer.multiaddrs.all_raw();\n    assert_eq!(\n        multiaddrs_in_event.len(),\n        expect_multiaddrs.len(),\n        \"should have same number of multiaddrs\"\n    );\n    assert!(\n        !multiaddrs_in_event\n            .iter()\n            .any(|ma| !expect_multiaddrs.contains(ma)),\n        \"all multiaddrs should be included\"\n    );\n}\n\n#[tokio::test]\nasync fn should_insert_peers_on_discover_multi_addrs() {\n    let (mut mgr, _conn_rx) = make_manager(0, 20);\n    let peers = (0..10)\n        .map(|port| make_peer(port + 7000))\n        .collect::<Vec<_>>();\n\n    let peer_ids = peers\n        .clone()\n        .into_iter()\n        .map(|p| p.owned_id())\n        .collect::<Vec<_>>();\n    let test_multiaddrs = peers\n        .into_iter()\n        .map(|p| p.multiaddrs.all_raw().pop().expect(\"multiaddr\"))\n        .collect::<Vec<_>>();\n\n    let discover_multi_addrs = PeerManagerEvent::DiscoverMultiAddrs {\n        addrs: test_multiaddrs,\n    };\n    mgr.poll_event(discover_multi_addrs).await;\n\n    let inner = mgr.core_inner();\n    assert!(\n        !peer_ids.iter().any(|pid| !inner.contains(pid)),\n        \"should insert all discovered peers\"\n    );\n}\n\n#[tokio::test]\nasync fn should_not_reset_exist_multiaddr_failure_count_on_discover_multi_addrs() {\n    let (mut mgr, _conn_rx) = make_manager(0, 20);\n    let test_peer = make_peer(2077);\n    let test_multiaddr = test_peer.multiaddrs.all().pop().expect(\"multiaddr\");\n\n    test_peer.multiaddrs.inc_failure(&test_multiaddr);\n    assert_eq!(\n        test_peer.multiaddrs.failure(&test_multiaddr),\n        Some(1),\n        \"should have one failure\"\n    
);\n\n    let discover_multi_addrs = PeerManagerEvent::DiscoverMultiAddrs {\n        addrs: vec![test_multiaddr.clone().into()],\n    };\n    mgr.poll_event(discover_multi_addrs).await;\n\n    assert_eq!(\n        test_peer.multiaddrs.failure(&test_multiaddr),\n        Some(1),\n        \"should have one failure\"\n    );\n}\n\n#[tokio::test]\nasync fn should_skip_our_listen_multiaddrs_on_discover_multi_addrs() {\n    let (mut mgr, _conn_rx) = make_manager(0, 20);\n    let self_id = mgr.inner.peer_id.to_owned();\n\n    let inner = mgr.core_inner();\n    let listen_multiaddr = make_peer_multiaddr(2020, self_id.clone());\n\n    inner.add_listen(listen_multiaddr.clone());\n    assert!(\n        inner.listen().contains(&listen_multiaddr),\n        \"should contain listen addr\"\n    );\n\n    let discover_multi_addrs = PeerManagerEvent::DiscoverMultiAddrs {\n        addrs: vec![make_multiaddr(2020, Some(self_id.clone()))],\n    };\n    mgr.poll_event(discover_multi_addrs).await;\n\n    assert!(!inner.contains(&self_id), \"should not add our own peer id\");\n}\n\n#[tokio::test]\nasync fn should_add_multiaddrs_to_peer_on_identified_addrs() {\n    let (mut mgr, _conn_rx) = make_manager(0, 20);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n    let test_peer = remote_peers.first().expect(\"get first\");\n    let old_multiaddrs_len = test_peer.multiaddrs.len();\n\n    let test_multiaddrs: Vec<_> = (0..2)\n        .map(|port| make_multiaddr(port + 9000, Some(test_peer.owned_id())))\n        .collect();\n\n    let identified_addrs = PeerManagerEvent::IdentifiedAddrs {\n        pid:   test_peer.owned_id(),\n        addrs: test_multiaddrs.clone(),\n    };\n    mgr.poll_event(identified_addrs).await;\n\n    assert_eq!(\n        test_peer.multiaddrs.len(),\n        old_multiaddrs_len + 2,\n        \"should have correct multiaddrs len\"\n    );\n    assert!(\n        !test_multiaddrs\n            .iter()\n            .any(|ma| 
!test_peer.multiaddrs.all_raw().contains(ma)),\n        \"should add all multiaddrs to peer\"\n    );\n}\n\n#[tokio::test]\nasync fn should_push_id_to_multiaddrs_if_not_included_on_identified_addrs() {\n    let (mut mgr, _conn_rx) = make_manager(0, 20);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n    let test_peer = remote_peers.first().expect(\"get first\");\n    let test_multiaddr = make_multiaddr(2077, None);\n\n    let identified_addrs = PeerManagerEvent::IdentifiedAddrs {\n        pid:   test_peer.owned_id(),\n        addrs: vec![test_multiaddr.clone()],\n    };\n    mgr.poll_event(identified_addrs).await;\n\n    assert!(\n        !test_peer.multiaddrs.all_raw().contains(&test_multiaddr),\n        \"should not contain multiaddr without id included\"\n    );\n\n    let with_id = make_peer_multiaddr(2077, test_peer.owned_id());\n    assert!(\n        test_peer.multiaddrs.contains(&with_id),\n        \"should push id to multiaddr when adding it to peer\"\n    );\n}\n\n#[tokio::test]\nasync fn should_not_reset_exist_multiaddr_failure_count_on_identified_addrs() {\n    let (mut mgr, _conn_rx) = make_manager(0, 20);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n    let test_peer = remote_peers.first().expect(\"get first\");\n    let test_multiaddr = test_peer.multiaddrs.all().pop().expect(\"multiaddr\");\n\n    test_peer.multiaddrs.inc_failure(&test_multiaddr);\n    assert_eq!(\n        test_peer.multiaddrs.failure(&test_multiaddr),\n        Some(1),\n        \"should have one failure\"\n    );\n\n    let identified_addrs = PeerManagerEvent::IdentifiedAddrs {\n        pid:   test_peer.owned_id(),\n        addrs: vec![test_multiaddr.clone().into()],\n    };\n    mgr.poll_event(identified_addrs).await;\n\n    assert_eq!(\n        test_peer.multiaddrs.failure(&test_multiaddr),\n        Some(1),\n        \"should have one failure\"\n    );\n}\n\n#[tokio::test]\nasync fn 
should_reset_peer_failure_for_outbound_multiaddr_on_repeated_connection() {\n    let (mut mgr, _conn_rx) = make_manager(0, 20);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n    let test_peer = remote_peers.first().expect(\"get first\");\n    let test_multiaddr = test_peer.multiaddrs.all().pop().expect(\"multiaddr\");\n\n    test_peer.multiaddrs.inc_failure(&test_multiaddr);\n    assert_eq!(\n        test_peer.multiaddrs.failure(&test_multiaddr),\n        Some(1),\n        \"should have one failure\"\n    );\n\n    let repeated_connection = PeerManagerEvent::RepeatedConnection {\n        ty:   ConnectionType::Outbound,\n        sid:  test_peer.session_id(),\n        addr: test_multiaddr.clone().into(),\n    };\n    mgr.poll_event(repeated_connection).await;\n\n    assert_eq!(\n        test_peer.multiaddrs.failure(&test_multiaddr),\n        Some(0),\n        \"should reset failure count\"\n    );\n}\n\n#[tokio::test]\nasync fn should_remove_inbound_multiaddr_on_repeated_connection() {\n    let (mut mgr, _conn_rx) = make_manager(0, 20);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n    let test_peer = remote_peers.first().expect(\"get first\");\n\n    let test_multiaddr = make_peer_multiaddr(2077, test_peer.owned_id());\n    test_peer.multiaddrs.insert(vec![test_multiaddr.clone()]);\n\n    let repeated_connection = PeerManagerEvent::RepeatedConnection {\n        ty:   ConnectionType::Inbound,\n        sid:  test_peer.session_id(),\n        addr: test_multiaddr.clone().into(),\n    };\n    mgr.poll_event(repeated_connection).await;\n\n    assert!(\n        !test_peer.multiaddrs.contains(&test_multiaddr),\n        \"should remove inbound multiaddr\"\n    );\n}\n\n#[tokio::test]\nasync fn should_enforce_id_if_not_included_on_repeated_connection() {\n    let (mut mgr, _conn_rx) = make_manager(0, 20);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n    let test_peer = remote_peers.first().expect(\"get first\");\n    let test_multiaddr = test_peer.multiaddrs.all().pop().expect(\"multiaddr\");\n\n    test_peer.multiaddrs.inc_failure(&test_multiaddr);\n    assert_eq!(\n        test_peer.multiaddrs.failure(&test_multiaddr),\n        Some(1),\n        \"should have one failure\"\n    );\n\n    let repeated_connection = PeerManagerEvent::RepeatedConnection {\n        ty:   ConnectionType::Outbound,\n        sid:  test_peer.session_id(),\n        addr: test_multiaddr.clone().into(),\n    };\n    mgr.poll_event(repeated_connection).await;\n\n    assert_eq!(\n        test_peer.multiaddrs.failure(&test_multiaddr),\n        Some(0),\n        \"should reset failure count\"\n    );\n}\n\n#[tokio::test]\nasync fn should_add_new_listen_on_add_new_listen_addr() {\n    let (mut mgr, _conn_rx) = make_manager(0, 20);\n    let self_id = mgr.inner.peer_id.to_owned();\n\n    let inner = mgr.core_inner();\n    let listen_multiaddr = make_peer_multiaddr(2020, self_id.clone());\n    inner.add_listen(listen_multiaddr.clone());\n    assert!(!inner.listen().is_empty(), \"should have listen address\");\n\n    let test_multiaddr = make_multiaddr(2077, Some(self_id));\n    assert!(test_multiaddr != *listen_multiaddr);\n\n    let add_listen_addr = PeerManagerEvent::AddNewListenAddr {\n        addr: test_multiaddr.clone(),\n    };\n    mgr.poll_event(add_listen_addr).await;\n\n    assert_eq!(inner.listen().len(), 2, \"should have 2 listen addrs\");\n    assert!(\n        inner.listen().contains(&test_multiaddr),\n        \"should add new listen multiaddr\"\n    );\n}\n\n#[tokio::test]\nasync fn should_push_id_to_listen_multiaddr_if_not_included_on_add_new_listen_addr() {\n    let (mut mgr, _conn_rx) = make_manager(0, 20);\n    let self_id = mgr.inner.peer_id.to_owned();\n\n    let inner = mgr.core_inner();\n    let test_multiaddr = make_multiaddr(2077, None);\n    assert!(inner.listen().is_empty(), \"should not have any listen addr\");\n\n    let 
add_listen_addr = PeerManagerEvent::AddNewListenAddr {\n        addr: test_multiaddr.clone(),\n    };\n    mgr.poll_event(add_listen_addr).await;\n\n    let with_id = make_multiaddr(2077, Some(self_id));\n    assert_eq!(inner.listen().len(), 1, \"should have one listen addr\");\n    assert!(\n        inner.listen().contains(&with_id),\n        \"should add new listen multiaddr\"\n    );\n}\n\n#[tokio::test]\nasync fn should_remove_listen_on_remove_listen_addr() {\n    let (mut mgr, _conn_rx) = make_manager(0, 20);\n    let self_id = mgr.inner.peer_id.to_owned();\n\n    let inner = mgr.core_inner();\n    let listen_multiaddr = make_peer_multiaddr(2020, self_id.clone());\n\n    inner.add_listen(listen_multiaddr.clone());\n    assert!(\n        inner.listen().contains(&listen_multiaddr),\n        \"should contain listen addr\"\n    );\n\n    let remove_listen_addr = PeerManagerEvent::RemoveListenAddr {\n        addr: make_multiaddr(2020, Some(self_id)),\n    };\n    mgr.poll_event(remove_listen_addr).await;\n\n    assert_eq!(inner.listen().len(), 0, \"should have 0 listen addrs\");\n}\n\n#[tokio::test]\nasync fn should_remove_listen_even_if_no_peer_id_included_on_remove_listen_addr() {\n    let (mut mgr, _conn_rx) = make_manager(0, 20);\n    let self_id = mgr.inner.peer_id.to_owned();\n\n    let inner = mgr.core_inner();\n    let listen_multiaddr = make_peer_multiaddr(2020, self_id.clone());\n\n    inner.add_listen(listen_multiaddr.clone());\n    assert!(\n        inner.listen().contains(&listen_multiaddr),\n        \"should contain listen addr\"\n    );\n\n    let remove_listen_addr = PeerManagerEvent::RemoveListenAddr {\n        addr: make_multiaddr(2020, None),\n    };\n    mgr.poll_event(remove_listen_addr).await;\n\n    assert_eq!(inner.listen().len(), 0, \"should have 0 listen addrs\");\n}\n\n#[tokio::test]\nasync fn should_always_include_our_listen_addrs_in_return_from_manager_handle_random_addrs() {\n    let (mgr, _conn_rx) = make_manager(0, 20);\n    let 
self_id = mgr.inner.peer_id.to_owned();\n\n    let inner = mgr.core_inner();\n    let listen_multiaddrs = (0..5)\n        .map(|port| make_peer_multiaddr(port + 9000, self_id.clone()))\n        .collect::<Vec<_>>();\n\n    for ma in listen_multiaddrs.iter() {\n        inner.add_listen(ma.clone());\n    }\n\n    let handle = mgr.inner.handle();\n    let addrs = handle.random_addrs(100, 1.into());\n\n    assert!(\n        !listen_multiaddrs.iter().any(|lma| !addrs.contains(&*lma)),\n        \"should include our listen addresses\"\n    );\n}\n\n#[tokio::test]\nasync fn should_accept_always_allow_peer_even_if_we_reach_max_connections_on_new_session() {\n    let (mut mgr, _conn_rx) = make_manager(0, 10);\n    let _remote_peers = make_sessions(&mut mgr, 10, 5000, SessionType::Outbound).await;\n\n    let peer = make_peer(2019);\n    let always_allow_peer = make_peer(2077);\n    always_allow_peer.tags.insert(PeerTag::AlwaysAllow).unwrap();\n\n    let inner = mgr.core_inner();\n    inner.add_peer(always_allow_peer.clone());\n\n    assert_eq!(inner.connected(), 10, \"should have 10 connections\");\n\n    // First one without AlwaysAllow tag\n    let sess_ctx = SessionContext::make(\n        SessionId::new(233),\n        peer.multiaddrs.all_raw().pop().expect(\"peer multiaddr\"),\n        SessionType::Inbound,\n        peer.owned_pubkey().expect(\"pubkey\"),\n    );\n    let new_session = PeerManagerEvent::NewSession {\n        pid:    peer.owned_id(),\n        pubkey: peer.owned_pubkey().expect(\"pubkey\"),\n        ctx:    sess_ctx.arced(),\n    };\n    mgr.poll_event(new_session).await;\n    assert_eq!(inner.connected(), 10, \"should remain 10 connections\");\n\n    // Now peer has AlwaysAllow tag\n    let sess_ctx = SessionContext::make(\n        SessionId::new(666),\n        always_allow_peer\n            .multiaddrs\n            .all_raw()\n            .pop()\n            .expect(\"peer multiaddr\"),\n        SessionType::Inbound,\n        always_allow_peer\n            
.owned_pubkey()\n            .expect(\"always allow peer's pubkey\"),\n    );\n    let new_session = PeerManagerEvent::NewSession {\n        pid:    always_allow_peer.owned_id(),\n        pubkey: always_allow_peer\n            .owned_pubkey()\n            .expect(\"always allow peer's pubkey\"),\n        ctx:    sess_ctx.arced(),\n    };\n    mgr.poll_event(new_session).await;\n\n    assert_eq!(inner.connected(), 11, \"should have 11 connections\");\n    let session = inner.session(666.into()).expect(\"should have session\");\n    assert_eq!(\n        session.peer.id, always_allow_peer.id,\n        \"should be always allow peer\"\n    );\n}\n\n#[tokio::test]\nasync fn should_only_connect_peers_in_allowlist_if_enable_allowlist_only() {\n    let manager_pubkey = make_pubkey();\n    let manager_id = manager_pubkey.peer_id();\n    let mut peer_dat_file = std::env::temp_dir();\n    peer_dat_file.push(\"peer.dat\");\n    let peer_trust_config = Arc::new(TrustMetricConfig::default());\n    let peer_fatal_ban = Duration::from_secs(50);\n    let peer_soft_ban = Duration::from_secs(10);\n\n    let test_peer = make_peer(2077);\n    let another_peer = make_peer(2020);\n\n    let config = PeerManagerConfig {\n        our_id: manager_id,\n        pubkey: manager_pubkey,\n        bootstraps: Default::default(),\n        allowlist: vec![test_peer.id.to_owned()],\n        allowlist_only: true,\n        peer_trust_config,\n        peer_fatal_ban,\n        peer_soft_ban,\n        max_connections: 10,\n        same_ip_conn_limit: 99,\n        inbound_conn_limit: 5,\n        outbound_conn_limit: 5,\n        routine_interval: Duration::from_secs(10),\n        peer_dat_file,\n    };\n\n    let (conn_tx, mut conn_rx) = unbounded();\n    let (mgr_tx, mgr_rx) = unbounded();\n    let manager = PeerManager::new(config, mgr_rx, conn_tx);\n\n    let inner = manager.inner();\n    inner.add_peer(another_peer);\n\n    let allowed_peer = inner\n        .peer(&test_peer.id)\n        .expect(\"should be inserted through config\");\n    // Add multiaddrs to peer inserted by allowlist\n    allowed_peer.multiaddrs.insert(test_peer.multiaddrs.all());\n    assert!(allowed_peer.tags.contains(&PeerTag::AlwaysAllow));\n\n    let mut manager = MockManager::new(manager, mgr_tx);\n    manager.poll().await;\n\n    let conn_event = conn_rx.next().await.expect(\"should have connect event\");\n    let multiaddrs_in_event = match conn_event {\n        ConnectionEvent::Connect { addrs, .. } => addrs,\n        _ => panic!(\"should be connect event\"),\n    };\n\n    assert_eq!(\n        multiaddrs_in_event.len(),\n        1,\n        \"should have one multiaddr to connect\"\n    );\n\n    let test_peer_multiaddr = test_peer.multiaddrs.all_raw().pop().expect(\"get multiaddr\");\n    assert_eq!(\n        multiaddrs_in_event[0], test_peer_multiaddr,\n        \"should be always allow peer\"\n    );\n}\n\n#[tokio::test]\nasync fn should_only_accept_incoming_from_peer_in_allowlist_if_enable_allowlist_only() {\n    let manager_pubkey = make_pubkey();\n    let manager_id = manager_pubkey.peer_id();\n    let mut peer_dat_file = std::env::temp_dir();\n    peer_dat_file.push(\"peer.dat\");\n    let peer_trust_config = Arc::new(TrustMetricConfig::default());\n    let peer_fatal_ban = Duration::from_secs(50);\n    let peer_soft_ban = Duration::from_secs(10);\n\n    let test_peer = make_peer(2077);\n    let another_peer = make_peer(2020);\n\n    let config = PeerManagerConfig {\n        our_id: manager_id,\n        pubkey: manager_pubkey,\n        bootstraps: Default::default(),\n        allowlist: vec![test_peer.id.to_owned()],\n        allowlist_only: true,\n        peer_trust_config,\n        peer_fatal_ban,\n        peer_soft_ban,\n        max_connections: 10,\n        same_ip_conn_limit: 9,\n        inbound_conn_limit: 5,\n        outbound_conn_limit: 5,\n        routine_interval: Duration::from_secs(10),\n        peer_dat_file,\n    };\n\n    let (conn_tx, _conn_rx) = 
unbounded();\n    let (mgr_tx, mgr_rx) = unbounded();\n    let manager = PeerManager::new(config, mgr_rx, conn_tx);\n\n    let inner = manager.inner();\n    inner.add_peer(another_peer.clone());\n\n    let allowed_peer = inner\n        .peer(&test_peer.id)\n        .expect(\"should be inserted through config\");\n    assert!(allowed_peer.tags.contains(&PeerTag::AlwaysAllow));\n\n    let mut manager = MockManager::new(manager, mgr_tx);\n    assert_eq!(inner.connected(), 0, \"should have zero connections\");\n\n    // First one without AlwaysAllow tag\n    let sess_ctx = SessionContext::make(\n        SessionId::new(233),\n        another_peer\n            .multiaddrs\n            .all_raw()\n            .pop()\n            .expect(\"peer multiaddr\"),\n        SessionType::Inbound,\n        another_peer.owned_pubkey().expect(\"pubkey\"),\n    );\n    let new_session = PeerManagerEvent::NewSession {\n        pid:    another_peer.owned_id(),\n        pubkey: another_peer.owned_pubkey().expect(\"pubkey\"),\n        ctx:    sess_ctx.arced(),\n    };\n    manager.poll_event(new_session).await;\n\n    assert_eq!(inner.connected(), 0, \"should remain 0 connections\");\n\n    // Now with AlwaysAllow tag\n    let sess_ctx = SessionContext::make(\n        SessionId::new(666),\n        test_peer\n            .multiaddrs\n            .all_raw()\n            .pop()\n            .expect(\"peer multiaddr\"),\n        SessionType::Inbound,\n        test_peer\n            .owned_pubkey()\n            .expect(\"always allow peer's pubkey\"),\n    );\n    let new_session = PeerManagerEvent::NewSession {\n        pid:    test_peer.owned_id(),\n        pubkey: test_peer\n            .owned_pubkey()\n            .expect(\"always allow peer's pubkey\"),\n        ctx:    sess_ctx.arced(),\n    };\n    manager.poll_event(new_session).await;\n\n    assert_eq!(inner.connected(), 1, \"should have 1 connection\");\n}\n\n#[tokio::test]\nasync fn 
should_disconnect_and_ban_peer_for_fatal_feedback_on_trust_metric() {\n    let (mut mgr, mut conn_rx) = make_manager(0, 20);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n    let test_peer = remote_peers.first().expect(\"get first peer\");\n    let target_sid = test_peer.session_id();\n\n    let feedback = PeerManagerEvent::TrustMetric {\n        pid:      test_peer.owned_id(),\n        feedback: TrustFeedback::Fatal(\"fatal\".to_owned()),\n    };\n    mgr.poll_event(feedback).await;\n\n    assert!(test_peer.banned(), \"should be banned\");\n    assert_eq!(\n        test_peer.tags.get_banned_until().expect(\"get banned until\"),\n        time::now() + mgr.config().peer_fatal_ban.as_secs(),\n        \"should use fatal ban duration\"\n    );\n\n    let trust_metric = test_peer.trust_metric().expect(\"get trust metric\");\n    assert!(!trust_metric.is_started(), \"should stop trust metric\");\n\n    let conn_event = conn_rx.next().await.expect(\"should have disconnect event\");\n    match conn_event {\n        ConnectionEvent::Disconnect(sid) => {\n            assert_eq!(sid, target_sid, \"should be disconnected session id\")\n        }\n        _ => panic!(\"should be disconnect event\"),\n    }\n}\n\n#[tokio::test]\nasync fn should_exclude_always_allow_peer_from_fatal_feedback_ban_on_trust_metric() {\n    let (mut mgr, _conn_rx) = make_manager(0, 20);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n    let test_peer = remote_peers.first().expect(\"get first peer\");\n\n    let inner = mgr.core_inner();\n    test_peer.tags.insert(PeerTag::AlwaysAllow).unwrap();\n    inner.add_peer(test_peer.clone());\n\n    let feedback = PeerManagerEvent::TrustMetric {\n        pid:      test_peer.owned_id(),\n        feedback: TrustFeedback::Fatal(\"fatal\".to_owned()),\n    };\n    mgr.poll_event(feedback).await;\n\n    assert!(!test_peer.banned(), \"should not ban\");\n    let trust_metric = 
test_peer.trust_metric().expect(\"get trust metric\");\n    assert!(trust_metric.is_started(), \"should continue trust metric\");\n    assert_eq!(inner.connected(), 1, \"should not disconnect peer\");\n}\n\n#[tokio::test]\nasync fn should_add_one_bad_event_for_bad_feedback_on_trust_metric() {\n    let (mut mgr, _conn_rx) = make_manager(0, 20);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n    let test_peer = remote_peers.first().expect(\"get first peer\");\n    let trust_metric = test_peer.trust_metric().expect(\"trust metric\");\n\n    let feedback = PeerManagerEvent::TrustMetric {\n        pid:      test_peer.owned_id(),\n        feedback: TrustFeedback::Bad(\"bad\".to_owned()),\n    };\n    mgr.poll_event(feedback).await;\n\n    assert_eq!(\n        trust_metric.bad_events_count(),\n        1,\n        \"should have one bad event count\"\n    );\n}\n\n#[tokio::test]\nasync fn should_add_ten_bad_events_for_worse_feedback_on_trust_metric() {\n    let (mut mgr, _conn_rx) = make_manager(0, 20);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n    let test_peer = remote_peers.first().expect(\"get first peer\");\n    let trust_metric = test_peer.trust_metric().expect(\"trust metric\");\n\n    let feedback = PeerManagerEvent::TrustMetric {\n        pid:      test_peer.owned_id(),\n        feedback: TrustFeedback::Worse(\"worse\".to_owned()),\n    };\n    mgr.poll_event(feedback).await;\n\n    assert_eq!(\n        trust_metric.bad_events_count(),\n        10,\n        \"should have ten bad events count\"\n    );\n}\n\n#[tokio::test]\nasync fn should_disconnect_and_soft_ban_peer_if_below_fourty_score_on_worse_feedback_on_trust_metric(\n) {\n    let (mut mgr, mut conn_rx) = make_manager(0, 20);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n    let test_peer = remote_peers.first().expect(\"get first peer\");\n    let trust_metric = 
test_peer.trust_metric().expect(\"trust metric\");\n    let test_sid = test_peer.session_id();\n\n    for _ in 0..4 {\n        trust_metric.bad_events(1);\n        trust_metric.enter_new_interval();\n    }\n    assert!(\n        trust_metric.trust_score() < 40,\n        \"should have score lower than 40\"\n    );\n\n    let feedback = PeerManagerEvent::TrustMetric {\n        pid:      test_peer.owned_id(),\n        feedback: TrustFeedback::Worse(\"worse\".to_owned()),\n    };\n    mgr.poll_event(feedback).await;\n\n    assert!(test_peer.banned(), \"should be banned\");\n    assert_eq!(\n        test_peer.tags.get_banned_until().expect(\"get banned until\"),\n        time::now() + mgr.config().peer_soft_ban.as_secs(),\n        \"should use soft ban duration\"\n    );\n\n    let trust_metric = test_peer.trust_metric().expect(\"get trust metric\");\n    assert!(!trust_metric.is_started(), \"should stop trust metric\");\n\n    let conn_event = conn_rx.next().await.expect(\"should have disconnect event\");\n    match conn_event {\n        ConnectionEvent::Disconnect(sid) => {\n            assert_eq!(sid, test_sid, \"should be replaced session id\")\n        }\n        _ => panic!(\"should be disconnect event\"),\n    }\n}\n\n#[tokio::test]\nasync fn should_not_knock_out_peer_just_set_up_trust_metric_on_worse_feedback_on_trust_metric() {\n    let (mut mgr, _conn_rx) = make_manager(0, 20);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n    let test_peer = remote_peers.first().expect(\"get first peer\");\n    let trust_metric = test_peer.trust_metric().expect(\"trust metric\");\n\n    assert_eq!(\n        trust_metric.good_events_count(),\n        0,\n        \"should not have any good events\"\n    );\n    assert_eq!(\n        trust_metric.bad_events_count(),\n        0,\n        \"should not have any bad events\"\n    );\n    assert_eq!(trust_metric.intervals(), 0, \"should not have any intervals\");\n\n    let feedback = 
PeerManagerEvent::TrustMetric {\n        pid:      test_peer.owned_id(),\n        feedback: TrustFeedback::Worse(\"worse\".to_owned()),\n    };\n    mgr.poll_event(feedback).await;\n\n    let inner = mgr.core_inner();\n    assert!(!test_peer.banned(), \"should not ban\");\n    let trust_metric = test_peer.trust_metric().expect(\"get trust metric\");\n    assert!(trust_metric.is_started(), \"should continue trust metric\");\n    assert_eq!(inner.connected(), 1, \"should still be connected\");\n}\n\n#[tokio::test]\nasync fn should_not_punish_always_allow_peer_when_its_score_below_forty_on_worse_feedback_on_trust_metric(\n) {\n    let (mut mgr, _conn_rx) = make_manager(0, 20);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n    let test_peer = remote_peers.first().expect(\"get first peer\");\n    let trust_metric = test_peer.trust_metric().expect(\"trust metric\");\n\n    for _ in 0..4 {\n        trust_metric.bad_events(1);\n        trust_metric.enter_new_interval();\n    }\n    assert!(\n        trust_metric.trust_score() < 40,\n        \"should have score lower than 40\"\n    );\n\n    let inner = mgr.core_inner();\n    test_peer.tags.insert(PeerTag::AlwaysAllow).unwrap();\n    inner.add_peer(test_peer.clone());\n\n    let feedback = PeerManagerEvent::TrustMetric {\n        pid:      test_peer.owned_id(),\n        feedback: TrustFeedback::Worse(\"worse\".to_owned()),\n    };\n    mgr.poll_event(feedback).await;\n\n    assert!(!test_peer.banned(), \"should not ban\");\n    let trust_metric = test_peer.trust_metric().expect(\"get trust metric\");\n    assert!(trust_metric.is_started(), \"should continue trust metric\");\n    assert_eq!(inner.connected(), 1, \"should still be connected\");\n}\n\n#[tokio::test]\nasync fn should_do_nothing_for_neutral_feedback_on_trust_metric() {\n    let (mut mgr, _conn_rx) = make_manager(0, 20);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n    let test_peer = 
remote_peers.first().expect(\"get first peer\");\n    let trust_metric = test_peer.trust_metric().expect(\"trust metric\");\n\n    let feedback = PeerManagerEvent::TrustMetric {\n        pid:      test_peer.owned_id(),\n        feedback: TrustFeedback::Neutral,\n    };\n    mgr.poll_event(feedback).await;\n\n    assert_eq!(\n        trust_metric.good_events_count(),\n        0,\n        \"should not increase good events\"\n    );\n    assert_eq!(\n        trust_metric.bad_events_count(),\n        0,\n        \"should not increase bad events\"\n    );\n}\n\n#[tokio::test]\nasync fn should_add_one_good_event_for_good_feedback_on_trust_metric() {\n    let (mut mgr, _conn_rx) = make_manager(0, 20);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n    let test_peer = remote_peers.first().expect(\"get first peer\");\n    let trust_metric = test_peer.trust_metric().expect(\"trust metric\");\n\n    let feedback = PeerManagerEvent::TrustMetric {\n        pid:      test_peer.owned_id(),\n        feedback: TrustFeedback::Good,\n    };\n    mgr.poll_event(feedback).await;\n\n    assert_eq!(\n        trust_metric.good_events_count(),\n        1,\n        \"should increase one good event\"\n    );\n}\n\n#[tokio::test]\nasync fn should_pick_good_peer_first_on_finding_connectable_peers() {\n    let (mut mgr, mut conn_rx) = make_manager(0, 4);\n    let outbound_conn_limit = mgr.config().outbound_conn_limit;\n    let pre_connected_count = outbound_conn_limit - 1;\n    let _remote_peers = make_sessions(\n        &mut mgr,\n        pre_connected_count as u16,\n        5000,\n        SessionType::Outbound,\n    )\n    .await;\n\n    let inner = mgr.core_inner();\n    assert_eq!(\n        inner.connected(),\n        pre_connected_count,\n        \"should have pre-connected connections just one below outbound conn limit\"\n    );\n\n    // Fill connecting attempts, leave one for our test\n    let fill_peers = (3..4 + MAX_CONNECTING_MARGIN - 1)\n   
 .map(|port| make_peer(6000u16 + port as u16))\n        .collect::<Vec<_>>();\n    mgr.inner.set_connecting(fill_peers);\n\n    let good_peer = make_peer(2077);\n    let normal_peer = make_peer(2020);\n    inner.add_peer(good_peer.clone());\n    inner.add_peer(normal_peer);\n\n    let trust_metric = TrustMetric::new(Arc::clone(&mgr.config().peer_trust_config));\n    good_peer.set_trust_metric(trust_metric.clone());\n    for _ in 0..10 {\n        trust_metric.good_events(1);\n        trust_metric.enter_new_interval();\n    }\n    assert!(\n        trust_metric.trust_score() > GOOD_TRUST_SCORE,\n        \"should have better score\"\n    );\n\n    mgr.poll().await;\n\n    let conn_event = conn_rx.next().await.expect(\"should have connect event\");\n    let multiaddrs_in_event = match conn_event {\n        ConnectionEvent::Connect { addrs, .. } => addrs,\n        _ => panic!(\"should be connect event\"),\n    };\n\n    assert_eq!(\n        multiaddrs_in_event.len(),\n        1,\n        \"should have one connecting multiaddr\"\n    );\n\n    let expect_multiaddrs = good_peer.multiaddrs.all_raw();\n    assert_eq!(\n        multiaddrs_in_event, expect_multiaddrs,\n        \"should be peer with better score\"\n    );\n}\n\n#[tokio::test]\nasync fn should_setup_trust_metric_if_none_on_session_closed() {\n    let (mut mgr, _conn_rx) = make_manager(2, 20);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n\n    let test_peer = remote_peers.first().expect(\"get first peer\");\n    test_peer.remove_trust_metric();\n\n    let session_closed = PeerManagerEvent::SessionClosed {\n        pid: test_peer.owned_id(),\n        sid: test_peer.session_id(),\n    };\n    mgr.poll_event(session_closed).await;\n\n    assert!(\n        test_peer.trust_metric().is_some(),\n        \"should set up trust metric\"\n    );\n}\n\n#[tokio::test]\nasync fn should_setup_trust_metric_if_none_on_session_failed() {\n    let (mut mgr, _conn_rx) = make_manager(0, 
20);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n\n    let test_peer = remote_peers.first().expect(\"get first peer\");\n    test_peer.remove_trust_metric();\n\n    let expect_sid = test_peer.session_id();\n    let session_failed = PeerManagerEvent::SessionFailed {\n        sid:  expect_sid,\n        kind: SessionErrorKind::Io(std::io::ErrorKind::Other.into()),\n    };\n    mgr.poll_event(session_failed).await;\n\n    assert!(\n        test_peer.trust_metric().is_some(),\n        \"should set up trust metric\"\n    );\n\n    let trust_metric = test_peer.trust_metric().expect(\"trust metric\");\n    assert!(!trust_metric.is_started(), \"should not start\");\n    assert_eq!(\n        trust_metric.bad_events_count(),\n        1,\n        \"should have 1 bad event\"\n    );\n}\n\n#[tokio::test]\nasync fn should_setup_trust_metric_if_none_on_peer_misbehave() {\n    let (mut mgr, _conn_rx) = make_manager(0, 20);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n\n    let test_peer = remote_peers.first().expect(\"get first peer\");\n    test_peer.remove_trust_metric();\n\n    let peer_misbehave = PeerManagerEvent::Misbehave {\n        pid:  test_peer.owned_id(),\n        kind: MisbehaviorKind::PingTimeout,\n    };\n    mgr.poll_event(peer_misbehave).await;\n\n    assert!(\n        test_peer.trust_metric().is_some(),\n        \"should set up trust metric\"\n    );\n\n    let trust_metric = test_peer.trust_metric().expect(\"trust metric\");\n    assert!(trust_metric.is_started(), \"should be started\");\n    assert_eq!(\n        trust_metric.bad_events_count(),\n        1,\n        \"should have 1 bad event\"\n    );\n}\n\n#[tokio::test]\nasync fn should_setup_trust_metric_if_none_on_session_blocked() {\n    let (mut mgr, _conn_rx) = make_manager(0, 20);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n\n    let test_peer = 
remote_peers.first().expect(\"get first peer\");\n    test_peer.remove_trust_metric();\n\n    let sess_ctx = SessionContext::make(\n        test_peer.session_id(),\n        test_peer.multiaddrs.all_raw().pop().expect(\"get multiaddr\"),\n        SessionType::Outbound,\n        test_peer.owned_pubkey().expect(\"pubkey\"),\n    );\n    let session_blocked = PeerManagerEvent::SessionBlocked {\n        ctx: sess_ctx.arced(),\n    };\n    mgr.poll_event(session_blocked).await;\n\n    assert!(\n        test_peer.trust_metric().is_some(),\n        \"should set up trust metric\"\n    );\n\n    let trust_metric = test_peer.trust_metric().expect(\"trust metric\");\n    assert!(trust_metric.is_started(), \"should be started\");\n    assert_eq!(\n        trust_metric.bad_events_count(),\n        1,\n        \"should have 1 bad event\"\n    );\n}\n\n#[tokio::test]\nasync fn should_able_to_tag_peer() {\n    let (mgr, _conn_rx) = make_manager(0, 20);\n    let handle = mgr.inner.handle();\n\n    let peer = make_peer(2077);\n    handle.tag(&peer.id, PeerTag::Consensus).unwrap();\n\n    let peer = mgr.core_inner().peer(&peer.id).unwrap();\n    assert!(peer.tags.contains(&PeerTag::Consensus));\n}\n\n#[tokio::test]\nasync fn should_able_to_untag_peer() {\n    let (mgr, _conn_rx) = make_manager(0, 20);\n    let handle = mgr.inner.handle();\n\n    let peer = make_peer(2077);\n    handle.tag(&peer.id, PeerTag::Consensus).unwrap();\n\n    let peer = mgr.core_inner().peer(&peer.id).unwrap();\n    assert!(peer.tags.contains(&PeerTag::Consensus));\n\n    handle.untag(&peer.id, &PeerTag::Consensus);\n    assert!(!peer.tags.contains(&PeerTag::Consensus));\n}\n\n#[tokio::test]\nasync fn should_remove_old_consensus_peer_tag_when_tag_consensus() {\n    let (mgr, _conn_rx) = make_manager(0, 20);\n    let handle = mgr.inner.handle();\n\n    let peer = make_peer(2077);\n    handle.tag(&peer.id, PeerTag::Consensus).unwrap();\n\n    let peer = mgr.core_inner().peer(&peer.id).unwrap();\n    
assert!(peer.tags.contains(&PeerTag::Consensus));\n\n    let new_consensus = make_peer(3077);\n    handle.tag_consensus(vec![new_consensus.owned_id()]);\n\n    let new_consensus = mgr.core_inner().peer(&new_consensus.id).unwrap();\n    assert!(new_consensus.tags.contains(&PeerTag::Consensus));\n    assert!(!peer.tags.contains(&PeerTag::Consensus));\n}\n\n#[tokio::test]\nasync fn should_reject_same_ip_connection_when_reach_limit_on_new_session() {\n    let manager_pubkey = make_pubkey();\n    let manager_id = manager_pubkey.peer_id();\n    let mut peer_dat_file = std::env::temp_dir();\n    peer_dat_file.push(\"peer.dat\");\n    let peer_trust_config = Arc::new(TrustMetricConfig::default());\n    let peer_fatal_ban = Duration::from_secs(50);\n    let peer_soft_ban = Duration::from_secs(10);\n\n    let config = PeerManagerConfig {\n        our_id: manager_id,\n        pubkey: manager_pubkey,\n        bootstraps: Default::default(),\n        allowlist: vec![],\n        allowlist_only: false,\n        peer_trust_config,\n        peer_fatal_ban,\n        peer_soft_ban,\n        max_connections: 10,\n        same_ip_conn_limit: 1,\n        inbound_conn_limit: 5,\n        outbound_conn_limit: 5,\n        routine_interval: Duration::from_secs(10),\n        peer_dat_file,\n    };\n\n    let (conn_tx, mut conn_rx) = unbounded();\n    let (mgr_tx, mgr_rx) = unbounded();\n    let manager = PeerManager::new(config, mgr_rx, conn_tx);\n\n    let mut mgr = MockManager::new(manager, mgr_tx);\n    make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n\n    let same_ip_peer = make_peer(9527);\n    let expect_sid = same_ip_peer.session_id();\n\n    // Save same ip peer\n    let inner = mgr.core_inner();\n    inner.add_peer(same_ip_peer.clone());\n\n    let sess_ctx = SessionContext::make(\n        SessionId::new(99),\n        same_ip_peer.multiaddrs.all_raw().pop().unwrap(),\n        SessionType::Outbound,\n        same_ip_peer.owned_pubkey().expect(\"pubkey\"),\n    );\n    
let new_session = PeerManagerEvent::NewSession {\n        pid:    same_ip_peer.owned_id(),\n        pubkey: same_ip_peer.owned_pubkey().expect(\"pubkey\"),\n        ctx:    sess_ctx.arced(),\n    };\n    mgr.poll_event(new_session).await;\n\n    assert_eq!(inner.connected(), 1, \"should not increase conn count\");\n    assert_eq!(\n        same_ip_peer.session_id(),\n        expect_sid,\n        \"should not change peer session id\"\n    );\n\n    let inserted_same_ip_peer = inner.peer(&same_ip_peer.id).unwrap();\n    assert_eq!(\n        inserted_same_ip_peer.tags.get_banned_until(),\n        Some(time::now() + SAME_IP_LIMIT_BAN.as_secs())\n    );\n\n    let conn_event = conn_rx.next().await.expect(\"should have disconnect event\");\n    match conn_event {\n        ConnectionEvent::Disconnect(sid) => assert_eq!(sid, 99.into(), \"should be new session id\"),\n        _ => panic!(\"should be disconnect event\"),\n    }\n}\n\n#[tokio::test]\nasync fn should_not_dial_new_peer_after_reach_outbound_conn_limit() {\n    let (mut mgr, mut conn_rx) = make_manager(0, 4);\n    let outbound_conn_limit = mgr.config().outbound_conn_limit;\n    let _remote_peers = make_sessions(\n        &mut mgr,\n        outbound_conn_limit as u16,\n        5000,\n        SessionType::Outbound,\n    )\n    .await;\n\n    let inner = mgr.core_inner();\n    assert_eq!(\n        inner.connected(),\n        outbound_conn_limit,\n        \"should have reached outbound conn limit\"\n    );\n\n    mgr.poll().await;\n    match conn_rx.try_next() {\n        Err(_) => (),\n        _ => panic!(\"should not have any event\"),\n    }\n}\n\n#[tokio::test]\nasync fn should_reject_inbound_conn_when_reach_inbound_conn_limit() {\n    let (mut mgr, mut conn_rx) = make_manager(0, 20);\n    let inbound_conn_limit = mgr.config().inbound_conn_limit;\n    let _remote_peers = make_sessions(\n        &mut mgr,\n        inbound_conn_limit as u16,\n        5000,\n        
SessionType::Inbound,\n    )\n    .await;\n\n    let inner = mgr.core_inner();\n    assert_eq!(\n        inner.connected(),\n        inbound_conn_limit,\n        \"should have reached inbound conn limit\"\n    );\n\n    let remote_pubkey = make_pubkey();\n    let remote_peer_id = remote_pubkey.peer_id();\n    let remote_addr = make_multiaddr(6000, Some(remote_pubkey.peer_id()));\n\n    let sess_ctx = SessionContext::make(\n        SessionId::new(99),\n        remote_addr.clone(),\n        SessionType::Inbound,\n        remote_pubkey.clone(),\n    );\n    let new_session = PeerManagerEvent::NewSession {\n        pid:    remote_peer_id.clone(),\n        pubkey: remote_pubkey.clone(),\n        ctx:    sess_ctx.arced(),\n    };\n    mgr.poll_event(new_session).await;\n\n    assert_eq!(\n        inner.connected(),\n        inbound_conn_limit,\n        \"should not accept inbound connection\"\n    );\n\n    let conn_event = conn_rx.next().await.expect(\"should have disconnect event\");\n    match conn_event {\n        ConnectionEvent::Disconnect(sid) => assert_eq!(sid, 99.into(), \"should be new session id\"),\n        _ => panic!(\"should be disconnect event\"),\n    }\n}\n\n#[tokio::test]\nasync fn should_accept_peer_in_allowlist_even_reach_inbound_conn_limit() {\n    let manager_pubkey = make_pubkey();\n    let manager_id = manager_pubkey.peer_id();\n    let mut peer_dat_file = std::env::temp_dir();\n    peer_dat_file.push(\"peer.dat\");\n    let peer_trust_config = Arc::new(TrustMetricConfig::default());\n    let peer_fatal_ban = Duration::from_secs(50);\n    let peer_soft_ban = Duration::from_secs(10);\n\n    let test_peer = make_peer(2077);\n\n    let config = PeerManagerConfig {\n        our_id: manager_id,\n        pubkey: manager_pubkey,\n        bootstraps: Default::default(),\n        allowlist: vec![test_peer.id.to_owned()],\n        allowlist_only: false,\n        peer_trust_config,\n        peer_fatal_ban,\n        peer_soft_ban,\n        max_connections: 
10,\n        same_ip_conn_limit: 9,\n        inbound_conn_limit: 5,\n        outbound_conn_limit: 5,\n        routine_interval: Duration::from_secs(10),\n        peer_dat_file,\n    };\n\n    let (conn_tx, _conn_rx) = unbounded();\n    let (mgr_tx, mgr_rx) = unbounded();\n    let manager = PeerManager::new(config, mgr_rx, conn_tx);\n\n    let inner = manager.inner();\n    let allowed_peer = inner\n        .peer(&test_peer.id)\n        .expect(\"should be inserted through config\");\n    assert!(allowed_peer.tags.contains(&PeerTag::AlwaysAllow));\n\n    let mut manager = MockManager::new(manager, mgr_tx);\n    assert_eq!(inner.connected(), 0, \"should have zero connections\");\n\n    let inbound_conn_limit = manager.config().inbound_conn_limit;\n    let _remote_peers = make_sessions(\n        &mut manager,\n        inbound_conn_limit as u16,\n        5000,\n        SessionType::Inbound,\n    )\n    .await;\n\n    let sess_ctx = SessionContext::make(\n        SessionId::new(666),\n        test_peer\n            .multiaddrs\n            .all_raw()\n            .pop()\n            .expect(\"peer multiaddr\"),\n        SessionType::Inbound,\n        test_peer\n            .owned_pubkey()\n            .expect(\"always allow peer's pubkey\"),\n    );\n    let new_session = PeerManagerEvent::NewSession {\n        pid:    test_peer.owned_id(),\n        pubkey: test_peer\n            .owned_pubkey()\n            .expect(\"always allow peer's pubkey\"),\n        ctx:    sess_ctx.arced(),\n    };\n    manager.poll_event(new_session).await;\n\n    assert_eq!(\n        inner.connected(),\n        inbound_conn_limit + 1,\n        \"should accept peer in allowlist\"\n    );\n}\n\n#[tokio::test]\nasync fn should_reject_new_connection_for_same_peer_on_unidentified_session() {\n    let (mut mgr, mut conn_rx) = make_manager(0, 20);\n    let remote_peers = make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n\n    let test_peer = remote_peers.first().expect(\"get first 
peer\");\n    let sess_ctx = SessionContext::make(\n        SessionId::new(99),\n        test_peer.multiaddrs.all_raw().pop().expect(\"get multiaddr\"),\n        SessionType::Outbound,\n        test_peer.owned_pubkey().expect(\"pubkey\"),\n    );\n    let new_session = PeerManagerEvent::UnidentifiedSession {\n        pid:    test_peer.owned_id(),\n        pubkey: test_peer.owned_pubkey().expect(\"pubkey\"),\n        ctx:    sess_ctx.arced(),\n    };\n    mgr.poll_event(new_session).await;\n\n    let conn_event = conn_rx.next().await.expect(\"should have disconnect event\");\n    match conn_event {\n        ConnectionEvent::Disconnect(sid) => assert_eq!(sid, 99.into(), \"should be new session id\"),\n        _ => panic!(\"should be disconnect event\"),\n    }\n}\n\n#[tokio::test]\nasync fn should_reject_same_ip_connection_when_reach_limit_on_unidentified_session() {\n    let manager_pubkey = make_pubkey();\n    let manager_id = manager_pubkey.peer_id();\n    let mut peer_dat_file = std::env::temp_dir();\n    peer_dat_file.push(\"peer.dat\");\n    let peer_trust_config = Arc::new(TrustMetricConfig::default());\n    let peer_fatal_ban = Duration::from_secs(50);\n    let peer_soft_ban = Duration::from_secs(10);\n\n    let config = PeerManagerConfig {\n        our_id: manager_id,\n        pubkey: manager_pubkey,\n        bootstraps: Default::default(),\n        allowlist: vec![],\n        allowlist_only: false,\n        peer_trust_config,\n        peer_fatal_ban,\n        peer_soft_ban,\n        max_connections: 10,\n        same_ip_conn_limit: 1,\n        inbound_conn_limit: 5,\n        outbound_conn_limit: 5,\n        routine_interval: Duration::from_secs(10),\n        peer_dat_file,\n    };\n\n    let (conn_tx, mut conn_rx) = unbounded();\n    let (mgr_tx, mgr_rx) = unbounded();\n    let manager = PeerManager::new(config, mgr_rx, conn_tx);\n\n    let mut mgr = MockManager::new(manager, mgr_tx);\n    make_sessions(&mut mgr, 1, 5000, SessionType::Outbound).await;\n\n   
 let same_ip_peer = make_peer(9527);\n\n    // Save same ip peer\n    let inner = mgr.core_inner();\n    inner.add_peer(same_ip_peer.clone());\n\n    let sess_ctx = SessionContext::make(\n        SessionId::new(99),\n        same_ip_peer.multiaddrs.all_raw().pop().unwrap(),\n        SessionType::Outbound,\n        same_ip_peer.owned_pubkey().expect(\"pubkey\"),\n    );\n    let new_session = PeerManagerEvent::UnidentifiedSession {\n        pid:    same_ip_peer.owned_id(),\n        pubkey: same_ip_peer.owned_pubkey().expect(\"pubkey\"),\n        ctx:    sess_ctx.arced(),\n    };\n    mgr.poll_event(new_session).await;\n\n    let inserted_same_ip_peer = inner.peer(&same_ip_peer.id).unwrap();\n    assert_eq!(\n        inserted_same_ip_peer.tags.get_banned_until(),\n        Some(time::now() + SAME_IP_LIMIT_BAN.as_secs())\n    );\n\n    let conn_event = conn_rx.next().await.expect(\"should have disconnect event\");\n    match conn_event {\n        ConnectionEvent::Disconnect(sid) => assert_eq!(sid, 99.into(), \"should be new session id\"),\n        _ => panic!(\"should be disconnect event\"),\n    }\n}\n\n#[tokio::test]\nasync fn should_accept_always_allow_peer_even_if_we_reach_max_connections_on_unidentified_session()\n{\n    let (mut mgr, mut conn_rx) = make_manager(0, 10);\n    let _remote_peers = make_sessions(&mut mgr, 10, 5000, SessionType::Outbound).await;\n\n    let peer = make_peer(2019);\n    let always_allow_peer = make_peer(2077);\n    always_allow_peer.tags.insert(PeerTag::AlwaysAllow).unwrap();\n\n    let inner = mgr.core_inner();\n    inner.add_peer(always_allow_peer.clone());\n\n    assert_eq!(inner.connected(), 10, \"should have 10 connections\");\n\n    // First one without AlwaysAllow tag\n    let sess_ctx = SessionContext::make(\n        SessionId::new(233),\n        peer.multiaddrs.all_raw().pop().expect(\"peer multiaddr\"),\n        SessionType::Inbound,\n        peer.owned_pubkey().expect(\"pubkey\"),\n    );\n    let new_session = 
PeerManagerEvent::UnidentifiedSession {\n        pid:    peer.owned_id(),\n        pubkey: peer.owned_pubkey().expect(\"pubkey\"),\n        ctx:    sess_ctx.arced(),\n    };\n    mgr.poll_event(new_session).await;\n    let conn_event = conn_rx.next().await.expect(\"should have disconnect event\");\n    match conn_event {\n        ConnectionEvent::Disconnect(sid) => assert_eq!(sid, 233.into(), \"should be new session id\"),\n        _ => panic!(\"should be disconnect event\"),\n    }\n\n    // Now peer has AlwaysAllow tag\n    let sess_ctx = SessionContext::make(\n        SessionId::new(666),\n        always_allow_peer\n            .multiaddrs\n            .all_raw()\n            .pop()\n            .expect(\"peer multiaddr\"),\n        SessionType::Inbound,\n        always_allow_peer\n            .owned_pubkey()\n            .expect(\"always allow peer's pubkey\"),\n    );\n    let new_session = PeerManagerEvent::UnidentifiedSession {\n        pid:    always_allow_peer.owned_id(),\n        pubkey: always_allow_peer\n            .owned_pubkey()\n            .expect(\"always allow peer's pubkey\"),\n        ctx:    sess_ctx.arced(),\n    };\n    mgr.poll_event(new_session).await;\n\n    match conn_rx.try_next() {\n        Err(_) => (), // Err means channel is empty, it's expected\n        _ => panic!(\"should not have any disconnect event\"),\n    }\n}\n\n#[tokio::test]\nasync fn should_remove_connecting_attempt_when_reach_timeout() {\n    let (mut mgr, _conn_rx) = make_manager(0, 20);\n\n    let test_peer = make_peer(9527);\n    let mut target_attempt = ConnectingAttempt::new(test_peer.clone());\n    target_attempt.set_at(MAX_CONNECTING_TIMEOUT + Duration::from_secs(1));\n\n    let inner = mgr.core_inner();\n    inner.add_peer(test_peer);\n    assert_eq!(inner.connected(), 0, \"should have zero connected\");\n\n    mgr.connecting_mut().insert(target_attempt);\n    assert_eq!(\n        mgr.connecting().len(),\n        1,\n        \"should have one connecting 
attempt\"\n    );\n\n    mgr.poll().await;\n\n    assert_eq!(\n        mgr.connecting().len(),\n        0,\n        \"should have 0 connecting attempt\"\n    );\n    assert_eq!(inner.connected(), 0, \"should have 0 connected\");\n}\n"
  },
  {
    "path": "core/network/src/peer_manager/time.rs",
    "content": "use std::time::{Duration, SystemTime, UNIX_EPOCH};\n\npub fn now() -> u64 {\n    duration_since(SystemTime::now(), UNIX_EPOCH).as_secs()\n}\n\npub fn duration_since(now: SystemTime, early: SystemTime) -> Duration {\n    match now.duration_since(early) {\n        Ok(duration) => duration,\n        Err(e) => e.duration(),\n    }\n}\n"
  },
  {
    "path": "core/network/src/peer_manager/trust_metric.rs",
    "content": "use futures::{\n    future::{self, AbortHandle},\n    pin_mut,\n};\nuse futures_timer::Delay;\nuse parking_lot::RwLock;\n\nuse std::{\n    future::Future,\n    ops::{Add, Deref},\n    pin::Pin,\n    sync::atomic::{AtomicUsize, Ordering::SeqCst},\n    sync::Arc,\n    task::{Context, Poll},\n    time::{Duration, Instant},\n};\n\npub const PROPORTIONAL_WEIGHT: f64 = 0.4;\npub const INTERGRAL_WEIGHT: f64 = 0.6;\npub const OPTIMISTIC_HISTORY_WEIGHT: f64 = 0.8;\npub const DERIVATIVE_POSITIVE_WEIGHT: f64 = 0.0;\npub const DERIVATIVE_NEGATIVE_WEIGHT: f64 = 0.1;\n\npub const INITIAL_HISTORY_VALUE: f64 = 0.8f64;\npub const KNOCK_OUT_SCORE: u8 = 40;\npub const GOOD_INTERVAL_CAP: usize = 30;\n\npub const DEFAULT_INTERVAL_DURATION: Duration = Duration::from_secs(60);\npub const DEFAULT_MAX_HISTORY_DURATION: Duration = Duration::from_secs(24 * 60 * 60 * 10); // 10 days\n\n// HISTORY_TRUST_WEIGHTS are only determined by max_intervals and\n// OPTIMISTIC_HISTORY_WEIGHT. Right now, all peers share the same configuration, so\n// we can calculate these values once.\nlazy_static::lazy_static! 
{\n    static ref HISTORY_TRUST_WEIGHTS: Arc<RwLock<Vec<f64>>> = Arc::new(RwLock::new(Vec::new()));\n}\n\n#[derive(Debug)]\npub struct TrustMetricConfig {\n    interval:          Duration,\n    max_history:       Duration,\n    max_intervals:     u64,\n    max_faded_memorys: u64,\n}\n\nimpl TrustMetricConfig {\n    pub fn new(interval: Duration, max_history: Duration) -> Self {\n        let partial_config = TrustMetricConfig {\n            interval,\n            max_history,\n            max_intervals: 0,\n            max_faded_memorys: 0,\n        };\n\n        partial_config.finish()\n    }\n\n    pub fn interval(&self) -> Duration {\n        self.interval\n    }\n\n    fn finish(mut self) -> Self {\n        self.max_intervals = self.max_history.as_secs() / self.interval.as_secs();\n        self.max_faded_memorys = ((self.max_intervals as f64).log2().floor() as u64) + 1;\n        log::debug!(target: \"network-trust-metric\", \"max intervals {}\", self.max_intervals);\n        log::debug!(target: \"network-trust-metric\", \"max faded memorys {}\", self.max_faded_memorys);\n\n        {\n            *HISTORY_TRUST_WEIGHTS.write() = (1..=self.max_intervals)\n                .map(|k| OPTIMISTIC_HISTORY_WEIGHT.powf((k - 1) as f64))\n                .collect::<Vec<_>>();\n        }\n\n        self\n    }\n}\n\nimpl Default for TrustMetricConfig {\n    fn default() -> Self {\n        let partial_config = TrustMetricConfig {\n            interval:          DEFAULT_INTERVAL_DURATION,\n            max_history:       DEFAULT_MAX_HISTORY_DURATION,\n            max_intervals:     0,\n            max_faded_memorys: 0,\n        };\n\n        partial_config.finish()\n    }\n}\n\n#[derive(Debug, Clone, Copy, PartialEq)]\nstruct FadedMemory(f64);\n\nimpl Deref for FadedMemory {\n    type Target = f64;\n\n    fn deref(&self) -> &Self::Target {\n        &self.0\n    }\n}\n\nimpl FadedMemory {\n    fn new(history_value: f64) -> Self {\n        FadedMemory(history_value)\n    
}\n}\n\n#[derive(Debug)]\nstruct History {\n    max_intervals:   u64,\n    max_memorys:     u64,\n    intervals:       u64,\n    memorys:         Vec<FadedMemory>,\n    aggregate_trust: f64,\n    weights_sum:     f64,\n}\n\nimpl History {\n    fn new(max_intervals: u64, max_memorys: u64) -> History {\n        History {\n            max_intervals,\n            max_memorys,\n            intervals: 0,\n            memorys: Vec::new(),\n            aggregate_trust: INITIAL_HISTORY_VALUE,\n            weights_sum: 0f64,\n        }\n    }\n\n    #[cfg(test)]\n    fn intervals(&self) -> u64 {\n        self.intervals\n    }\n\n    fn latest_trust_value(&self) -> f64 {\n        self.memorys.first().map(|v| **v).unwrap_or_else(|| 0f64)\n    }\n\n    fn remember_interval(&mut self, trust_value: f64) {\n        if self.intervals < self.max_intervals {\n            self.intervals += 1;\n\n            let i = self.intervals;\n            self.weights_sum += match HISTORY_TRUST_WEIGHTS.read().get(i as usize - 1).cloned() {\n                Some(v) => v,\n                None => {\n                    log::warn!(target: \"network-trust-metric\", \"precalculated history interval {} trust weight not found\", i);\n                    OPTIMISTIC_HISTORY_WEIGHT.powf((i - 1) as f64)\n                }\n            };\n        }\n\n        if self.intervals <= self.max_memorys {\n            self.memorys.insert(0, FadedMemory::new(trust_value));\n            return;\n        }\n\n        // Update faded memorys\n        let memento = self.memorys.len() - 1;\n        self.memorys = (1..=memento)\n            .map(|j| {\n                let w = 2f64.powf(j as f64);\n                let ftv = (*self.memorys[j - 1] + (*self.memorys[j] * (w - 1f64))) / w;\n                FadedMemory::new(ftv)\n            })\n            .collect::<Vec<_>>();\n        self.memorys.insert(0, FadedMemory::new(trust_value));\n    }\n\n    fn update_aggregate_trust(&mut self) {\n        let intervals = 
self.intervals;\n        if intervals < 1 {\n            return;\n        }\n\n        self.aggregate_trust = (1..=intervals).map(|i| {\n            let memory_idx = (i as f64).log2().floor() as usize;\n\n            let i_hist_trust = match self.memorys.get(memory_idx).cloned() {\n                Some(v) => *v,\n                None => {\n                    log::error!(target: \"network-trust-metric\", \"history interval {} trust value not found\", i);\n                    0f64\n                }\n            };\n            let i_hist_weight = match HISTORY_TRUST_WEIGHTS.read().get(i as usize - 1).cloned() {\n                Some(v) => v,\n                None => {\n                    log::warn!(target: \"network-trust-metric\", \"precalculated history interval {} weight not found\", i);\n                    OPTIMISTIC_HISTORY_WEIGHT.powf((i - 1) as f64)\n                }\n            };\n\n            i_hist_trust * (i_hist_weight / self.weights_sum)\n        }).sum::<f64>();\n\n        log::debug!(target: \"network-trust-metric\", \"aggregate trust {}\", self.aggregate_trust);\n    }\n}\n\n#[derive(Debug)]\npub struct Inner {\n    config:      Arc<TrustMetricConfig>,\n    history:     RwLock<History>,\n    good_events: AtomicUsize,\n    bad_events:  AtomicUsize,\n}\n\nimpl Inner {\n    pub fn new(config: Arc<TrustMetricConfig>) -> Self {\n        let max_intervals = config.max_intervals;\n        let max_memorys = config.max_faded_memorys;\n\n        Inner {\n            config,\n            history: RwLock::new(History::new(max_intervals, max_memorys)),\n            good_events: AtomicUsize::new(0),\n            bad_events: AtomicUsize::new(0),\n        }\n    }\n\n    pub fn trust_score(&self) -> u8 {\n        (self.trust_value() * 100f64) as u8\n    }\n\n    pub fn good_events(&self, num: usize) {\n        let curr_good_events = self.good_events.load(SeqCst);\n\n        if curr_good_events + num <= GOOD_INTERVAL_CAP {\n            
self.good_events.fetch_add(num, SeqCst);\n        } else if curr_good_events < GOOD_INTERVAL_CAP {\n            self.good_events.store(GOOD_INTERVAL_CAP, SeqCst);\n        }\n    }\n\n    pub fn bad_events(&self, num: usize) {\n        self.bad_events.fetch_add(num, SeqCst);\n    }\n\n    pub fn knock_out(&self) -> bool {\n        self.trust_score() < KNOCK_OUT_SCORE\n    }\n\n    pub fn events(&self) -> (usize, usize) {\n        let good_events = self.good_events.load(SeqCst);\n        let bad_events = self.bad_events.load(SeqCst);\n\n        (good_events, bad_events)\n    }\n\n    pub fn enter_new_interval(&self) {\n        let latest_trust_value = self.trust_value();\n        log::debug!(target: \"network-trust-metric\", \"enter new interval, latest trust value {}\", latest_trust_value);\n\n        {\n            let mut history = self.history.write();\n            history.remember_interval(latest_trust_value);\n            history.update_aggregate_trust();\n        }\n\n        self.good_events.store(0, SeqCst);\n        self.bad_events.store(0, SeqCst);\n    }\n\n    pub fn reset_history(&self) {\n        let max_intervals = self.config.max_intervals;\n        let max_memorys = self.config.max_faded_memorys;\n\n        *self.history.write() = History::new(max_intervals, max_memorys);\n\n        self.good_events.store(0, SeqCst);\n        self.bad_events.store(0, SeqCst);\n    }\n\n    fn trust_value(&self) -> f64 {\n        let proportional_value = match self.proportional_value() {\n            Some(v) => v,\n            None => return self.history.read().latest_trust_value(),\n        };\n\n        let intergral_value = self.intergral_value();\n        let deviation_value = proportional_value - intergral_value;\n        let derivative_value = if deviation_value >= 0f64 {\n            DERIVATIVE_POSITIVE_WEIGHT * deviation_value\n        } else {\n            DERIVATIVE_NEGATIVE_WEIGHT * deviation_value\n        };\n\n        log::debug!(target: 
\"network-trust-metric\", \"trust value components: r {:?}, h {}, d {}\", proportional_value, intergral_value, derivative_value);\n        proportional_value + intergral_value + derivative_value\n    }\n\n    fn proportional_value(&self) -> Option<f64> {\n        let good_events = self.good_events.load(SeqCst);\n        let total = good_events + self.bad_events.load(SeqCst);\n\n        if total > 0 {\n            Some((good_events as f64 / total as f64) * PROPORTIONAL_WEIGHT)\n        } else {\n            None\n        }\n    }\n\n    fn intergral_value(&self) -> f64 {\n        self.history.read().aggregate_trust * INTERGRAL_WEIGHT\n    }\n}\n\nstruct HeartBeat {\n    inner:          Arc<Inner>,\n    interval:       Duration,\n    delay:          Delay,\n    pause_save:     Arc<RwLock<Option<Duration>>>,\n    interval_start: Instant,\n}\n\nimpl HeartBeat {\n    pub fn new(\n        inner: Arc<Inner>,\n        interval: Duration,\n        resume: Option<Duration>,\n        pause_save: Arc<RwLock<Option<Duration>>>,\n    ) -> Self {\n        let delay = match resume {\n            Some(resume) if interval > resume => Delay::new(interval - resume),\n            // None or resume > interval\n            _ => Delay::new(interval),\n        };\n\n        HeartBeat {\n            inner,\n            interval,\n            delay,\n            pause_save,\n            interval_start: Instant::now(),\n        }\n    }\n}\n\nimpl Drop for HeartBeat {\n    fn drop(&mut self) {\n        let elapsed = self.interval_start.elapsed();\n        *self.pause_save.write() = Some(elapsed);\n    }\n}\n\nimpl Future for HeartBeat {\n    type Output = <Delay as Future>::Output;\n\n    fn poll(mut self: Pin<&mut Self>, ctx: &mut Context<'_>) -> Poll<Self::Output> {\n        let ecg = &mut self.as_mut();\n\n        loop {\n            let interval = ecg.interval;\n            let delay = &mut ecg.delay;\n            pin_mut!(delay);\n\n            crate::loop_ready!(delay.poll(ctx));\n      
      ecg.inner.enter_new_interval();\n            ecg.interval_start = Instant::now();\n\n            let next_interval = Instant::now().add(interval);\n            ecg.delay.reset(next_interval);\n        }\n\n        Poll::Pending\n    }\n}\n\n#[derive(Debug, Clone)]\npub struct TrustMetric {\n    inner:     Arc<Inner>,\n    hb_handle: Arc<RwLock<Option<AbortHandle>>>,\n    pause:     Arc<RwLock<Option<Duration>>>,\n}\n\nimpl TrustMetric {\n    pub fn new(config: Arc<TrustMetricConfig>) -> Self {\n        TrustMetric {\n            inner:     Arc::new(Inner::new(config)),\n            hb_handle: Arc::new(RwLock::new(None)),\n            pause:     Arc::new(RwLock::new(None)),\n        }\n    }\n\n    pub fn start(&self) {\n        if self.hb_handle.read().is_some() {\n            // Already started\n            return;\n        }\n\n        let interval = self.inner.config.interval;\n        let resume = self.pause.write().take();\n        let heart_beat = HeartBeat::new(\n            Arc::clone(&self.inner),\n            interval,\n            resume,\n            Arc::clone(&self.pause),\n        );\n\n        let (heart_beat, hb_handle) = future::abortable(heart_beat);\n        *self.hb_handle.write() = Some(hb_handle);\n        tokio::spawn(heart_beat);\n    }\n\n    #[cfg(test)]\n    pub fn is_started(&self) -> bool {\n        self.hb_handle.read().is_some()\n    }\n\n    pub fn pause(&self) {\n        if let Some(abort_handle) = self.hb_handle.write().take() {\n            abort_handle.abort();\n        }\n    }\n\n    #[cfg(test)]\n    pub fn bad_events_count(&self) -> usize {\n        self.inner.bad_events.load(SeqCst)\n    }\n\n    #[cfg(test)]\n    pub fn good_events_count(&self) -> usize {\n        self.inner.good_events.load(SeqCst)\n    }\n\n    #[cfg(test)]\n    pub fn intervals(&self) -> u64 {\n        self.inner.history.read().intervals()\n    }\n}\n\nimpl Deref for TrustMetric {\n    type Target = Arc<Inner>;\n\n    fn deref(&self) -> 
&Self::Target {\n        &self.inner\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::{Inner, TrustMetricConfig, GOOD_INTERVAL_CAP};\n\n    use std::sync::{atomic::Ordering::SeqCst, Arc};\n\n    #[test]\n    fn basic_metric_test() {\n        // env_logger::init();\n\n        let config = Arc::new(TrustMetricConfig::default());\n        let metric = Inner::new(config);\n\n        for _ in 0..20 {\n            metric.good_events(1);\n            metric.enter_new_interval();\n        }\n        assert!(metric.trust_score() >= 95);\n\n        for _ in 0..4 {\n            metric.bad_events(1);\n            metric.enter_new_interval();\n        }\n        assert!(metric.trust_score() < 40);\n\n        // For S\n        for _ in 0..20 {\n            metric.good_events(1);\n            metric.enter_new_interval();\n        }\n        assert!(metric.trust_score() > 90 && metric.trust_score() < 95);\n\n        for i in 0..17 {\n            metric.bad_events(10);\n            metric.good_events(1);\n            metric.enter_new_interval();\n\n            if i != 16 {\n                metric.good_events(1);\n                metric.enter_new_interval();\n            }\n        }\n        assert!(metric.trust_score() < 40);\n\n        // For Z\n        for _ in 0..20 {\n            metric.good_events(1);\n            metric.enter_new_interval();\n        }\n        assert!(metric.trust_score() >= 90 && metric.trust_score() < 95);\n\n        for _ in 0..200 {\n            metric.bad_events(1);\n            metric.good_events(1);\n            metric.enter_new_interval();\n        }\n\n        assert!(metric.trust_score() > 40);\n    }\n\n    #[test]\n    fn good_interval_cap_test() {\n        let config = Arc::new(TrustMetricConfig::default());\n        let metric = Inner::new(config);\n\n        metric.good_events(GOOD_INTERVAL_CAP - 1);\n        assert_eq!(metric.good_events.load(SeqCst), GOOD_INTERVAL_CAP - 1);\n\n        metric.good_events(20);\n        
assert_eq!(metric.good_events.load(SeqCst), GOOD_INTERVAL_CAP);\n    }\n}\n"
  },
  {
    "path": "core/network/src/protocols/core.rs",
    "content": "use std::collections::{HashMap, HashSet};\nuse std::iter::FromIterator;\nuse std::time::Duration;\n\nuse futures::channel::mpsc::UnboundedSender;\nuse lazy_static::lazy_static;\nuse parking_lot::RwLock;\nuse tentacle::secio::PeerId;\nuse tentacle::service::{ProtocolMeta, TargetProtocol};\nuse tentacle::ProtocolId;\n\nuse crate::compression::Snappy;\nuse crate::event::PeerManagerEvent;\nuse crate::peer_manager::PeerManagerHandle;\nuse crate::protocols::discovery::Discovery;\nuse crate::protocols::identify::Identify;\nuse crate::protocols::ping::Ping;\nuse crate::protocols::transmitter::Transmitter;\nuse crate::reactor::MessageRouter;\nuse crate::traits::NetworkProtocol;\n\npub const PING_PROTOCOL_ID: usize = 1;\npub const IDENTIFY_PROTOCOL_ID: usize = 2;\npub const DISCOVERY_PROTOCOL_ID: usize = 3;\npub const TRANSMITTER_PROTOCOL_ID: usize = 4;\n\nlazy_static! {\n    // NOTE: Use peer id here because trust metric integrated tests run in one process\n    static ref PEER_OPENED_PROTOCOLS: RwLock<HashMap<PeerId, HashSet<ProtocolId>>> = RwLock::new(HashMap::new());\n}\n\npub struct OpenedProtocols {}\n\nimpl OpenedProtocols {\n    pub fn register(peer_id: PeerId, proto_id: ProtocolId) {\n        PEER_OPENED_PROTOCOLS\n            .write()\n            .entry(peer_id)\n            .and_modify(|protos| {\n                protos.insert(proto_id);\n            })\n            .or_insert_with(|| HashSet::from_iter(vec![proto_id]));\n    }\n\n    #[allow(dead_code)]\n    pub fn unregister(peer_id: &PeerId, proto_id: ProtocolId) {\n        if let Some(ref mut proto_ids) = PEER_OPENED_PROTOCOLS.write().get_mut(peer_id) {\n            proto_ids.remove(&proto_id);\n        }\n    }\n\n    pub fn remove(peer_id: &PeerId) {\n        PEER_OPENED_PROTOCOLS.write().remove(peer_id);\n    }\n\n    #[cfg(test)]\n    pub fn is_open(peer_id: &PeerId, proto_id: &ProtocolId) -> bool {\n        PEER_OPENED_PROTOCOLS\n            .read()\n            .get(peer_id)\n            
.map(|ids| ids.contains(proto_id))\n            .unwrap_or_else(|| false)\n    }\n\n    pub fn is_all_opened(peer_id: &PeerId) -> bool {\n        PEER_OPENED_PROTOCOLS\n            .read()\n            .get(peer_id)\n            .map(|ids| ids.len() == 4)\n            .unwrap_or_else(|| false)\n    }\n}\n\n#[derive(Default)]\npub struct CoreProtocolBuilder {\n    ping:        Option<Ping>,\n    identify:    Option<Identify>,\n    discovery:   Option<Discovery>,\n    transmitter: Option<Transmitter>,\n}\n\npub struct CoreProtocol {\n    metas:       Vec<ProtocolMeta>,\n    transmitter: Transmitter,\n}\n\nimpl CoreProtocol {\n    pub fn build() -> CoreProtocolBuilder {\n        CoreProtocolBuilder::new()\n    }\n\n    pub fn transmitter(&self) -> Transmitter {\n        self.transmitter.clone()\n    }\n}\n\nimpl NetworkProtocol for CoreProtocol {\n    fn target() -> TargetProtocol {\n        TargetProtocol::Single(ProtocolId::new(IDENTIFY_PROTOCOL_ID))\n    }\n\n    fn metas(self) -> Vec<ProtocolMeta> {\n        self.metas\n    }\n}\n\nimpl CoreProtocolBuilder {\n    pub fn new() -> Self {\n        CoreProtocolBuilder {\n            ping:        None,\n            identify:    None,\n            discovery:   None,\n            transmitter: None,\n        }\n    }\n\n    pub fn ping(\n        mut self,\n        interval: Duration,\n        timeout: Duration,\n        event_tx: UnboundedSender<PeerManagerEvent>,\n    ) -> Self {\n        let ping = Ping::new(interval, timeout, event_tx);\n\n        self.ping = Some(ping);\n        self\n    }\n\n    pub fn identify(\n        mut self,\n        peer_mgr: PeerManagerHandle,\n        event_tx: UnboundedSender<PeerManagerEvent>,\n    ) -> Self {\n        let identify = Identify::new(peer_mgr, event_tx);\n\n        self.identify = Some(identify);\n        self\n    }\n\n    pub fn discovery(\n        mut self,\n        peer_mgr: PeerManagerHandle,\n        event_tx: UnboundedSender<PeerManagerEvent>,\n        sync_interval: 
Duration,\n    ) -> Self {\n        let discovery = Discovery::new(peer_mgr, event_tx, sync_interval);\n\n        self.discovery = Some(discovery);\n        self\n    }\n\n    pub fn transmitter(\n        mut self,\n        message_router: MessageRouter<Snappy>,\n        peer_mgr: PeerManagerHandle,\n    ) -> Self {\n        let transmitter = Transmitter::new(message_router, peer_mgr);\n\n        self.transmitter = Some(transmitter);\n        self\n    }\n\n    pub fn build(self) -> CoreProtocol {\n        let mut metas = Vec::with_capacity(4);\n\n        let CoreProtocolBuilder {\n            ping,\n            identify,\n            discovery,\n            transmitter,\n        } = self;\n\n        let ping = ping.expect(\"init: missing protocol ping\");\n        let identify = identify.expect(\"init: missing protocol identify\");\n        let discovery = discovery.expect(\"init: missing protocol discovery\");\n        let transmitter = transmitter.expect(\"init: missing protocol transmitter\");\n\n        metas.push(ping.build_meta(PING_PROTOCOL_ID.into()));\n        metas.push(identify.build_meta(IDENTIFY_PROTOCOL_ID.into()));\n        metas.push(discovery.build_meta(DISCOVERY_PROTOCOL_ID.into()));\n        metas.push(\n            transmitter\n                .clone()\n                .build_meta(TRANSMITTER_PROTOCOL_ID.into()),\n        );\n\n        CoreProtocol { metas, transmitter }\n    }\n}\n"
  },
  {
    "path": "core/network/src/protocols/discovery/addr.rs",
    "content": "use crate::{\n    event::{MisbehaviorKind, PeerManagerEvent},\n    peer_manager::PeerManagerHandle,\n};\n\nuse futures::channel::mpsc::UnboundedSender;\nuse log::{error, warn};\nuse tentacle::{\n    bytes::{Bytes, BytesMut},\n    multiaddr::{Multiaddr, Protocol},\n    utils::is_reachable,\n    SessionId,\n};\n\nuse std::{\n    collections::{BTreeMap, HashMap, HashSet},\n    net::{IpAddr, SocketAddr},\n    time::Instant,\n};\n\npub(crate) const DEFAULT_MAX_KNOWN: usize = 5000;\n\npub enum Misbehavior {\n    // Already received GetNodes message\n    DuplicateGetNodes,\n    // Already received Nodes(announce=false) message\n    DuplicateFirstNodes,\n    // Nodes message includes too many items\n    TooManyItems { announce: bool, length: usize },\n    // Too many addresses in one item\n    TooManyAddresses(usize),\n}\n\n/// Misbehavior report result\npub enum MisbehaveResult {\n    /// Continue to run\n    #[allow(dead_code)]\n    Continue,\n    /// Disconnect this peer\n    Disconnect,\n}\n\nimpl MisbehaveResult {\n    pub fn is_disconnect(&self) -> bool {\n        matches!(self, MisbehaveResult::Disconnect)\n    }\n}\n\nstruct AddrReporter {\n    inner:    UnboundedSender<PeerManagerEvent>,\n    shutdown: bool,\n}\n\nimpl AddrReporter {\n    pub fn new(reporter: UnboundedSender<PeerManagerEvent>) -> Self {\n        AddrReporter {\n            inner:    reporter,\n            shutdown: false,\n        }\n    }\n\n    // TODO: upstream heart-beat check\n    pub fn report(&mut self, event: PeerManagerEvent) {\n        if self.shutdown {\n            return;\n        }\n\n        if self.inner.unbounded_send(event).is_err() {\n            error!(\"network: discovery: peer manager offline\");\n\n            self.shutdown = true;\n        }\n    }\n}\n\npub struct AddressManager {\n    peer_mgr: PeerManagerHandle,\n    reporter: AddrReporter,\n}\n\n// FIXME: Should be peer store?\nimpl AddressManager {\n    pub fn new(peer_mgr: PeerManagerHandle, event_tx: 
UnboundedSender<PeerManagerEvent>) -> Self {\n        let reporter = AddrReporter::new(event_tx);\n\n        AddressManager { peer_mgr, reporter }\n    }\n\n    pub fn add_new_addr(&mut self, _sid: SessionId, addr: Multiaddr) {\n        let add_addr = PeerManagerEvent::DiscoverMultiAddrs { addrs: vec![addr] };\n\n        self.reporter.report(add_addr);\n    }\n\n    pub fn add_new_addrs(&mut self, _sid: SessionId, addrs: Vec<Multiaddr>) {\n        let add_multi_addrs = PeerManagerEvent::DiscoverMultiAddrs { addrs };\n\n        self.reporter.report(add_multi_addrs);\n    }\n\n    // TODO: reduce peer score based on kind\n    pub fn misbehave(&mut self, sid: SessionId, _kind: Misbehavior) -> MisbehaveResult {\n        warn!(\"network: session {} misbehave\", sid);\n\n        let pid = match self.peer_mgr.peer_id(sid) {\n            Some(id) => id,\n            None => {\n                error!(\"network: session {} peer id not found\", sid);\n                return MisbehaveResult::Disconnect;\n            }\n        };\n\n        // Right now, we just remove peer\n        let kind = MisbehaviorKind::Discovery;\n        let peer_misbehave = PeerManagerEvent::Misbehave { pid, kind };\n\n        self.reporter.report(peer_misbehave);\n        MisbehaveResult::Disconnect\n    }\n\n    pub fn get_random(&mut self, n: usize, sid: SessionId) -> Vec<Multiaddr> {\n        self.peer_mgr.random_addrs(n, sid).into_iter().collect()\n    }\n}\n\n// bitcoin: bloom.h, bloom.cpp => CRollingBloomFilter\npub struct AddrKnown {\n    max_known:  usize,\n    addrs:      HashSet<ConnectableAddr>,\n    addr_times: HashMap<ConnectableAddr, Instant>,\n    time_addrs: BTreeMap<Instant, ConnectableAddr>,\n}\n\nimpl AddrKnown {\n    pub(crate) fn new(max_known: usize) -> AddrKnown {\n        AddrKnown {\n            max_known,\n            addrs: HashSet::default(),\n            addr_times: HashMap::default(),\n            time_addrs: BTreeMap::default(),\n        }\n    }\n\n    pub(crate) fn 
insert(&mut self, key: ConnectableAddr) {\n        let now = Instant::now();\n        self.addrs.insert(key.clone());\n        self.time_addrs.insert(now, key.clone());\n        self.addr_times.insert(key, now);\n\n        if self.addrs.len() > self.max_known {\n            let first_time = {\n                let (first_time, first_key) = self.time_addrs.iter().next().unwrap();\n                self.addrs.remove(&first_key);\n                self.addr_times.remove(&first_key);\n                *first_time\n            };\n            self.time_addrs.remove(&first_time);\n        }\n    }\n\n    pub(crate) fn contains(&self, addr: &ConnectableAddr) -> bool {\n        self.addrs.contains(addr)\n    }\n\n    pub(crate) fn remove<'a>(&mut self, addrs: impl Iterator<Item = &'a ConnectableAddr>) {\n        addrs.for_each(|addr| {\n            self.addrs.remove(addr);\n            if let Some(time) = self.addr_times.remove(addr) {\n                self.time_addrs.remove(&time);\n            }\n        })\n    }\n}\n\nimpl Default for AddrKnown {\n    fn default() -> AddrKnown {\n        AddrKnown::new(DEFAULT_MAX_KNOWN)\n    }\n}\n\n#[derive(Clone, Debug, PartialOrd, Ord, Eq, PartialEq, Hash)]\npub struct ConnectableAddr {\n    host: Bytes,\n    port: u16,\n}\n\nimpl From<&Multiaddr> for ConnectableAddr {\n    fn from(addr: &Multiaddr) -> ConnectableAddr {\n        use tentacle::multiaddr::Protocol::{DNS4, DNS6, IP4, IP6, TCP, TLS};\n\n        let mut host = None;\n        let mut port = 0u16;\n\n        for proto in addr.iter() {\n            match proto {\n                IP4(_) | IP6(_) | DNS4(_) | DNS6(_) | TLS(_) => {\n                    let mut buf = BytesMut::new();\n                    proto.write_to_bytes(&mut buf);\n                    host = Some(buf.freeze());\n                }\n                TCP(p) => port = p,\n                _ => (),\n            }\n        }\n\n        let host = host.expect(\"impossible, unsupported host protocol\");\n\n        
ConnectableAddr { host, port }\n    }\n}\n\nimpl From<Multiaddr> for ConnectableAddr {\n    fn from(addr: Multiaddr) -> ConnectableAddr {\n        ConnectableAddr::from(&addr)\n    }\n}\n\nimpl From<SocketAddr> for ConnectableAddr {\n    fn from(addr: SocketAddr) -> ConnectableAddr {\n        let proto = match addr.ip() {\n            IpAddr::V4(ipv4) => Protocol::IP4(ipv4),\n            IpAddr::V6(ipv6) => Protocol::IP6(ipv6),\n        };\n\n        let mut buf = BytesMut::new();\n        proto.write_to_bytes(&mut buf);\n\n        ConnectableAddr {\n            host: buf.freeze(),\n            port: addr.port(),\n        }\n    }\n}\n\n#[allow(dead_code)]\nimpl ConnectableAddr {\n    pub fn port(&self) -> u16 {\n        self.port\n    }\n\n    pub fn is_reachable(&self) -> bool {\n        let (proto, _) =\n            Protocol::from_bytes(&self.host).expect(\"impossible invalid host protocol\");\n\n        match proto {\n            Protocol::IP4(ipv4) => is_reachable(IpAddr::V4(ipv4)),\n            Protocol::IP6(ipv6) => is_reachable(IpAddr::V6(ipv6)),\n            _ => true,\n        }\n    }\n}\n"
  },
  {
    "path": "core/network/src/protocols/discovery/behaviour.rs",
    "content": "use std::collections::{HashMap, HashSet, VecDeque};\nuse std::pin::Pin;\nuse std::task::{Context, Poll};\nuse std::time::{Duration, Instant};\n\nuse futures::channel::mpsc::{channel, Receiver, Sender};\nuse futures::stream::FusedStream;\nuse futures::Stream;\nuse log::debug;\nuse rand::seq::SliceRandom;\nuse tentacle::multiaddr::Multiaddr;\nuse tentacle::utils::{is_reachable, multiaddr_to_socketaddr};\nuse tentacle::SessionId;\nuse tokio::time::Interval;\n\nuse crate::peer_manager::PeerManagerHandle;\n\nuse super::addr::{AddressManager, ConnectableAddr, DEFAULT_MAX_KNOWN};\nuse super::message::{DiscoveryMessage, Nodes};\nuse super::substream::{RemoteAddress, Substream, SubstreamKey, SubstreamValue};\n\nconst CHECK_INTERVAL: Duration = Duration::from_secs(3);\n\npub struct DiscoveryBehaviour {\n    // Default: 5000\n    max_known: usize,\n\n    // Address Manager\n    addr_mgr: AddressManager,\n\n    // TODO: Remove address manager\n    // Peer Manager\n    peer_mgr: PeerManagerHandle,\n\n    // Nodes that have not yet been yielded\n    pending_nodes: VecDeque<(SubstreamKey, SessionId, Nodes)>,\n\n    // For managing those substreams\n    substreams: HashMap<SubstreamKey, SubstreamValue>,\n\n    // For adding new substreams to Discovery\n    substream_sender:   Sender<Substream>,\n    // For receiving new substreams added to Discovery\n    substream_receiver: Receiver<Substream>,\n\n    dead_keys: HashSet<SubstreamKey>,\n\n    dynamic_query_cycle: Option<Duration>,\n\n    check_interval: Option<Interval>,\n}\n\n#[derive(Clone)]\npub struct DiscoveryBehaviourHandle {\n    pub substream_sender: Sender<Substream>,\n    pub peer_mgr:         PeerManagerHandle,\n}\n\nimpl DiscoveryBehaviourHandle {\n    pub fn contains_session(&self, session_id: SessionId) -> bool {\n        self.peer_mgr.contains_session(session_id)\n    }\n}\n\nimpl DiscoveryBehaviour {\n    /// Query cycle means checking and synchronizing the cycle time of the\n    /// currently connected node, default is 24 
hours\n    pub fn new(\n        addr_mgr: AddressManager,\n        peer_mgr: PeerManagerHandle,\n        query_cycle: Option<Duration>,\n    ) -> DiscoveryBehaviour {\n        let (substream_sender, substream_receiver) = channel(8);\n\n        DiscoveryBehaviour {\n            check_interval: None,\n            max_known: DEFAULT_MAX_KNOWN,\n            addr_mgr,\n            peer_mgr,\n            pending_nodes: VecDeque::default(),\n            substreams: HashMap::default(),\n            substream_sender,\n            substream_receiver,\n            dead_keys: HashSet::default(),\n            dynamic_query_cycle: query_cycle,\n        }\n    }\n\n    pub fn handle(&self) -> DiscoveryBehaviourHandle {\n        DiscoveryBehaviourHandle {\n            substream_sender: self.substream_sender.clone(),\n            peer_mgr:         self.peer_mgr.clone(),\n        }\n    }\n\n    fn recv_substreams(&mut self, cx: &mut Context) {\n        loop {\n            if self.substream_receiver.is_terminated() {\n                break;\n            }\n\n            match Pin::new(&mut self.substream_receiver)\n                .as_mut()\n                .poll_next(cx)\n            {\n                Poll::Ready(Some(substream)) => {\n                    let key = substream.key();\n                    debug!(\"Received a substream: key={:?}\", key);\n                    let value = SubstreamValue::new(\n                        key.direction,\n                        substream,\n                        self.max_known,\n                        self.dynamic_query_cycle,\n                    );\n                    self.substreams.insert(key, value);\n                }\n                Poll::Ready(None) => unreachable!(),\n                Poll::Pending => {\n                    debug!(\"DiscoveryBehaviour.substream_receiver Async::NotReady\");\n                    break;\n                }\n            }\n        }\n    }\n\n    fn check_interval(&mut self, cx: &mut Context) {\n      
  if self.check_interval.is_none() {\n            self.check_interval = Some(tokio::time::interval(CHECK_INTERVAL));\n        }\n        let mut interval = self.check_interval.take().unwrap();\n        loop {\n            match Pin::new(&mut interval).as_mut().poll_next(cx) {\n                Poll::Ready(Some(_)) => {}\n                Poll::Ready(None) => {\n                    debug!(\"DiscoveryBehaviour check_interval poll finished\");\n                    break;\n                }\n                Poll::Pending => break,\n            }\n        }\n        self.check_interval = Some(interval);\n    }\n\n    fn poll_substreams(&mut self, cx: &mut Context, announce_multiaddrs: &mut Vec<Multiaddr>) {\n        #[cfg(feature = \"global_ip_only\")]\n        let global_ip_only = true;\n        #[cfg(not(feature = \"global_ip_only\"))]\n        let global_ip_only = false;\n\n        let announce_fn = |announce_multiaddrs: &mut Vec<Multiaddr>, addr: &Multiaddr| {\n            if !global_ip_only\n                || multiaddr_to_socketaddr(addr)\n                    .map(|addr| is_reachable(addr.ip()))\n                    .unwrap_or_default()\n            {\n                announce_multiaddrs.push(addr.clone());\n            }\n        };\n        for (key, value) in self.substreams.iter_mut() {\n            value.check_timer();\n\n            match value.receive_messages(cx, &mut self.addr_mgr) {\n                Ok(Some((session_id, nodes_list))) => {\n                    for nodes in nodes_list {\n                        self.pending_nodes\n                            .push_back((key.clone(), session_id, nodes));\n                    }\n                }\n                Ok(None) => {\n                    // stream close\n                    self.dead_keys.insert(key.clone());\n                }\n                Err(err) => {\n                    debug!(\"substream {:?} receive messages error: {:?}\", key, err);\n                    // remove the substream\n           
         self.dead_keys.insert(key.clone());\n                }\n            }\n\n            match value.send_messages(cx) {\n                Ok(_) => {}\n                Err(err) => {\n                    debug!(\"substream {:?} send messages error: {:?}\", key, err);\n                    // remove the substream\n                    self.dead_keys.insert(key.clone());\n                }\n            }\n\n            if value.announce {\n                if let RemoteAddress::Listen(ref addr) = value.remote_addr {\n                    announce_fn(announce_multiaddrs, addr)\n                }\n                value.announce = false;\n                value.last_announce = Some(Instant::now());\n            }\n        }\n    }\n\n    fn remove_dead_stream(&mut self) {\n        let mut dead_addr = Vec::default();\n        for key in self.dead_keys.drain() {\n            if let Some(addr) = self.substreams.remove(&key) {\n                dead_addr.push(ConnectableAddr::from(addr.remote_addr.into_inner()));\n            }\n        }\n\n        if !dead_addr.is_empty() {\n            self.substreams\n                .values_mut()\n                .for_each(|value| value.addr_known.remove(dead_addr.iter()));\n        }\n    }\n\n    fn send_messages(&mut self, cx: &mut Context) {\n        for (key, value) in self.substreams.iter_mut() {\n            let announce_multiaddrs = value.announce_multiaddrs.split_off(0);\n            if !announce_multiaddrs.is_empty() {\n                let items = announce_multiaddrs\n                    .into_iter()\n                    .map(|addr| vec![addr])\n                    .collect::<Vec<_>>();\n\n                let announce = true;\n                value\n                    .pending_messages\n                    .push_back(DiscoveryMessage::new_nodes(announce, items));\n            }\n\n            match value.send_messages(cx) {\n                Ok(_) => {}\n                Err(err) => {\n                    debug!(\"substream {:?} 
send messages error: {:?}\", key, err);\n                    // remove the substream\n                    self.dead_keys.insert(key.clone());\n                }\n            }\n        }\n    }\n}\n\nimpl Stream for DiscoveryBehaviour {\n    type Item = ();\n\n    fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {\n        debug!(\"DiscoveryBehaviour.poll()\");\n        self.recv_substreams(cx);\n        self.check_interval(cx);\n\n        let mut announce_multiaddrs = Vec::new();\n\n        self.poll_substreams(cx, &mut announce_multiaddrs);\n\n        self.remove_dead_stream();\n\n        let mut rng = rand::thread_rng();\n        let mut remain_keys = self.substreams.keys().cloned().collect::<Vec<_>>();\n        debug!(\"announce_multiaddrs: {:?}\", announce_multiaddrs);\n        for announce_multiaddr in announce_multiaddrs.into_iter() {\n            let announce_addr = ConnectableAddr::from(announce_multiaddr.clone());\n            remain_keys.shuffle(&mut rng);\n            for i in 0..2 {\n                if let Some(key) = remain_keys.get(i) {\n                    if let Some(value) = self.substreams.get_mut(key) {\n                        debug!(\n                            \">> send {} to: {:?}, contains: {}\",\n                            announce_multiaddr,\n                            value.remote_addr,\n                            value.addr_known.contains(&announce_addr)\n                        );\n                        if value.announce_multiaddrs.len() < 10\n                            && !value.addr_known.contains(&announce_addr)\n                        {\n                            value.announce_multiaddrs.push(announce_multiaddr.clone());\n                            value.addr_known.insert(announce_addr.clone());\n                        }\n                    }\n                }\n            }\n        }\n\n        self.send_messages(cx);\n\n        match self.pending_nodes.pop_front() {\n        
    Some((_key, session_id, nodes)) => {\n                let addrs = nodes\n                    .items\n                    .into_iter()\n                    .flat_map(|node| node.addrs())\n                    .collect::<Vec<_>>();\n\n                self.addr_mgr.add_new_addrs(session_id, addrs);\n                Poll::Ready(Some(()))\n            }\n            None => Poll::Pending,\n        }\n    }\n}\n"
  },
  {
    "path": "core/network/src/protocols/discovery/message.rs",
    "content": "use std::convert::TryFrom;\n\nuse prost::{Message, Oneof};\nuse tentacle::multiaddr::Multiaddr;\n\n#[derive(Clone, Copy, PartialEq, Eq, Oneof)]\npub enum ListenPort {\n    #[prost(uint32, tag = \"3\")]\n    On(u32),\n}\n\n#[derive(Clone, PartialEq, Eq, Message)]\npub struct GetNodes {\n    #[prost(uint32, tag = \"1\")]\n    pub version:     u32,\n    #[prost(uint32, tag = \"2\")]\n    pub count:       u32,\n    #[prost(oneof = \"ListenPort\", tags = \"3\")]\n    pub listen_port: Option<ListenPort>,\n}\n\nimpl GetNodes {\n    pub fn listen_port(&self) -> Option<u16> {\n        match self.listen_port {\n            Some(ListenPort::On(port)) if port <= u16::MAX as u32 => Some(port as u16),\n            _ => None,\n        }\n    }\n}\n\n#[derive(Clone, PartialEq, Eq, Message)]\npub struct Node {\n    #[prost(bytes, repeated, tag = \"1\")]\n    pub addrs: Vec<Vec<u8>>,\n}\n\nimpl Node {\n    pub fn addrs(self) -> Vec<Multiaddr> {\n        let addrs = self.addrs.into_iter();\n        let to_multiaddrs = addrs.filter_map(|bytes| Multiaddr::try_from(bytes).ok());\n        to_multiaddrs.collect::<Vec<_>>()\n    }\n\n    pub fn with_addrs(addrs: Vec<Multiaddr>) -> Self {\n        Node {\n            addrs: addrs.into_iter().map(|addr| addr.to_vec()).collect(),\n        }\n    }\n}\n\n#[derive(Clone, PartialEq, Eq, Message)]\npub struct Nodes {\n    #[prost(bool, tag = \"1\")]\n    pub announce: bool,\n    #[prost(message, repeated, tag = \"2\")]\n    pub items:    Vec<Node>,\n}\n\n#[derive(Clone, PartialEq, Eq, Oneof)]\npub enum Payload {\n    #[prost(message, tag = \"1\")]\n    GetNodes(GetNodes),\n    #[prost(message, tag = \"2\")]\n    Nodes(Nodes),\n}\n\n#[derive(Clone, PartialEq, Eq, Message)]\npub struct DiscoveryMessage {\n    #[prost(oneof = \"Payload\", tags = \"1, 2\")]\n    pub payload: Option<Payload>,\n}\n\nimpl DiscoveryMessage {\n    pub fn new_get_nodes(version: u32, count: u32, listen_port: Option<u16>) -> Self {\n        let listen_port = 
listen_port.map(|port| ListenPort::On(port as u32));\n\n        DiscoveryMessage {\n            payload: Some(Payload::GetNodes(GetNodes {\n                version,\n                count,\n                listen_port,\n            })),\n        }\n    }\n\n    pub fn new_nodes(announce: bool, nodes: Vec<Vec<Multiaddr>>) -> Self {\n        DiscoveryMessage {\n            payload: Some(Payload::Nodes(Nodes {\n                announce,\n                items: nodes.into_iter().map(Node::with_addrs).collect(),\n            })),\n        }\n    }\n}\n\nimpl std::fmt::Display for DiscoveryMessage {\n    fn fmt(&self, f: &mut std::fmt::Formatter) -> Result<(), std::fmt::Error> {\n        match self {\n            DiscoveryMessage {\n                payload: Some(Payload::GetNodes(GetNodes { version, count, .. })),\n            } => {\n                write!(f, \"Payload::GetNodes(version:{}, count:{})\", version, count)?;\n            }\n            DiscoveryMessage {\n                payload: Some(Payload::Nodes(Nodes { announce, items })),\n            } => {\n                write!(\n                    f,\n                    \"Payload::Nodes(announce:{}, items.length:{})\",\n                    announce,\n                    items.len()\n                )?;\n            }\n            DiscoveryMessage { payload: None } => write!(f, \"Empty payload\")?,\n        }\n        Ok(())\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    use prost::Message;\n    use protocol::BytesMut;\n\n    #[test]\n    fn discovery_message_serialize_deserialize() {\n        let msg = DiscoveryMessage::new_get_nodes(0, 50, Some(1337));\n\n        let mut buf = BytesMut::with_capacity(msg.encoded_len());\n        msg.encode(&mut buf).unwrap();\n\n        let decoded_msg = DiscoveryMessage::decode(buf.freeze()).unwrap();\n        assert_eq!(decoded_msg, msg);\n    }\n}\n"
  },
  {
    "path": "core/network/src/protocols/discovery/protocol.rs",
    "content": "use std::collections::HashMap;\n\nuse futures::channel::mpsc::{channel, Sender};\nuse futures::stream::StreamExt;\nuse futures::FutureExt;\nuse log::{debug, warn};\nuse tentacle::context::{ProtocolContext, ProtocolContextMutRef};\nuse tentacle::traits::ServiceProtocol;\nuse tentacle::SessionId;\n\nuse super::behaviour::{DiscoveryBehaviour, DiscoveryBehaviourHandle};\nuse super::substream::Substream;\n\npub struct DiscoveryProtocol {\n    behaviour:         Option<DiscoveryBehaviour>,\n    behaviour_handle:  DiscoveryBehaviourHandle,\n    discovery_senders: HashMap<SessionId, Sender<Vec<u8>>>,\n}\n\nimpl DiscoveryProtocol {\n    pub fn new(behaviour: DiscoveryBehaviour) -> DiscoveryProtocol {\n        let behaviour_handle = behaviour.handle();\n        DiscoveryProtocol {\n            behaviour: Some(behaviour),\n            behaviour_handle,\n            discovery_senders: HashMap::default(),\n        }\n    }\n}\n\nimpl ServiceProtocol for DiscoveryProtocol {\n    fn init(&mut self, context: &mut ProtocolContext) {\n        debug!(\"protocol [discovery({})]: init\", context.proto_id);\n\n        let discovery_task = self\n            .behaviour\n            .take()\n            .map(|mut behaviour| {\n                debug!(\"Start discovery future_task\");\n                async move {\n                    loop {\n                        if behaviour.next().await.is_none() {\n                            warn!(\"discovery stream shutdown\");\n                            break;\n                        }\n                    }\n                }\n                .boxed()\n            })\n            .unwrap();\n        if context.future_task(discovery_task).is_err() {\n            warn!(\"start discovery fail\");\n        };\n    }\n\n    fn connected(&mut self, context: ProtocolContextMutRef, _: &str) {\n        let session = context.session;\n        debug!(\n            \"protocol [discovery] open on session [{}], address: [{}], type: [{:?}]\",\n 
           session.id, session.address, session.ty\n        );\n\n        if !self.behaviour_handle.contains_session(session.id) {\n            let _ = context.close_protocol(session.id, context.proto_id());\n            return;\n        }\n\n        let peer_id = match context.session.remote_pubkey.as_ref() {\n            Some(pubkey) => pubkey.peer_id(),\n            None => {\n                log::warn!(\"peer connection must be encrypted\");\n                let _ = context.disconnect(context.session.id);\n                return;\n            }\n        };\n        crate::protocols::OpenedProtocols::register(peer_id, context.proto_id());\n\n        let (sender, receiver) = channel(8);\n        self.discovery_senders.insert(session.id, sender);\n        let substream = Substream::new(context, receiver);\n        match self.behaviour_handle.substream_sender.try_send(substream) {\n            Ok(_) => {\n                debug!(\"Send substream success\");\n            }\n            Err(err) => {\n                // TODO: handle channel is full (wait for poll API?)\n                warn!(\"Send substream failed : {:?}\", err);\n            }\n        }\n    }\n\n    fn disconnected(&mut self, context: ProtocolContextMutRef) {\n        self.discovery_senders.remove(&context.session.id);\n        debug!(\n            \"protocol [discovery] close on session [{}]\",\n            context.session.id\n        );\n    }\n\n    fn received(&mut self, context: ProtocolContextMutRef, data: bytes::Bytes) {\n        debug!(\"[received message]: length={}\", data.len());\n\n        if let Some(ref mut sender) = self.discovery_senders.get_mut(&context.session.id) {\n            // TODO: handle channel is full (wait for poll API?)\n            if let Err(err) = sender.try_send(data.to_vec()) {\n                if err.is_full() {\n                    warn!(\"channel is full\");\n                } else if err.is_disconnected() {\n                    warn!(\"channel is 
disconnected\");\n                } else {\n                    warn!(\"other channel error: {:?}\", err);\n                }\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "core/network/src/protocols/discovery/substream.rs",
    "content": "use super::{\n    addr::{AddrKnown, AddressManager, ConnectableAddr, Misbehavior},\n    message::{DiscoveryMessage, Nodes, Payload},\n};\n\nuse bytes::{BufMut, BytesMut};\nuse futures::{channel::mpsc::Receiver, Sink, Stream};\nuse log::{debug, trace, warn};\nuse prost::Message;\nuse tentacle::{\n    context::ProtocolContextMutRef,\n    error::SendErrorKind,\n    multiaddr::{Multiaddr, Protocol},\n    service::{ServiceControl, SessionType},\n    utils::multiaddr_to_socketaddr,\n    ProtocolId, SessionId,\n};\nuse tokio::io::{AsyncRead, AsyncWrite};\nuse tokio_util::codec::{length_delimited::LengthDelimitedCodec, Decoder, Encoder, Framed};\n\nuse std::{\n    collections::VecDeque,\n    io,\n    pin::Pin,\n    task::{Context, Poll},\n    time::{Duration, Instant},\n};\n\n// FIXME: should be a more high level version number\nconst VERSION: u32 = 0;\n// The maximum number of new addresses to accumulate before announcing.\nconst MAX_ADDR_TO_SEND: u32 = 1000;\n// Every 24 hours send announce nodes message\nconst ANNOUNCE_INTERVAL: u64 = 3600 * 24;\nconst ANNOUNCE_THRESHOLD: usize = 10;\n\n// The maximum number addresses in on Nodes item\nconst MAX_ADDRS: usize = 3;\n\npub(crate) struct DiscoveryCodec {\n    inner: LengthDelimitedCodec,\n}\n\nimpl Default for DiscoveryCodec {\n    fn default() -> DiscoveryCodec {\n        DiscoveryCodec {\n            inner: LengthDelimitedCodec::new(),\n        }\n    }\n}\n\nimpl Decoder for DiscoveryCodec {\n    type Error = io::Error;\n    type Item = DiscoveryMessage;\n\n    fn decode(&mut self, src: &mut BytesMut) -> Result<Option<Self::Item>, Self::Error> {\n        match self.inner.decode(src) {\n            Ok(Some(frame)) => {\n                let maybe_msg = DiscoveryMessage::decode(frame.freeze());\n                maybe_msg.map(Some).map_err(|err| {\n                    debug!(\"deserialize {}\", err);\n                    io::ErrorKind::InvalidData.into()\n                })\n            }\n            
Ok(None) => Ok(None),\n            Err(err) => {\n                debug!(\"codec decode {}\", err);\n                Err(io::ErrorKind::InvalidData.into())\n            }\n        }\n    }\n}\n\nimpl Encoder for DiscoveryCodec {\n    type Error = io::Error;\n    type Item = DiscoveryMessage;\n\n    fn encode(&mut self, item: Self::Item, dst: &mut BytesMut) -> Result<(), Self::Error> {\n        let mut buf = BytesMut::with_capacity(item.encoded_len());\n        item.encode(&mut buf).map_err(|err| {\n            warn!(\"serialize {}\", err);\n            io::ErrorKind::InvalidData\n        })?;\n\n        self.inner.encode(buf.freeze(), dst)\n    }\n}\n\n#[derive(Eq, PartialEq, Hash, Debug, Clone)]\npub struct SubstreamKey {\n    pub(crate) direction:  SessionType,\n    pub(crate) session_id: SessionId,\n    pub(crate) proto_id:   ProtocolId,\n}\n\npub struct StreamHandle {\n    data_buf:            BytesMut,\n    proto_id:            ProtocolId,\n    session_id:          SessionId,\n    pub(crate) receiver: Receiver<Vec<u8>>,\n    pub(crate) sender:   ServiceControl,\n}\n\nimpl AsyncRead for StreamHandle {\n    fn poll_read(\n        mut self: Pin<&mut Self>,\n        cx: &mut Context,\n        buf: &mut [u8],\n    ) -> Poll<io::Result<usize>> {\n        for _ in 0..10 {\n            match Pin::new(&mut self.receiver).as_mut().poll_next(cx) {\n                Poll::Ready(Some(data)) => {\n                    self.data_buf.reserve(data.len());\n                    self.data_buf.put(data.as_slice());\n                }\n                Poll::Ready(None) => {\n                    return Poll::Ready(Err(io::ErrorKind::BrokenPipe.into()));\n                }\n                Poll::Pending => {\n                    break;\n                }\n            }\n        }\n        let n = std::cmp::min(buf.len(), self.data_buf.len());\n        if n == 0 {\n            return Poll::Pending;\n        }\n        let b = self.data_buf.split_to(n);\n        
buf[..n].copy_from_slice(&b);\n        Poll::Ready(Ok(n))\n    }\n}\n\nimpl AsyncWrite for StreamHandle {\n    fn poll_write(self: Pin<&mut Self>, _cx: &mut Context, buf: &[u8]) -> Poll<io::Result<usize>> {\n        self.sender\n            .send_message_to(self.session_id, self.proto_id, BytesMut::from(buf).freeze())\n            .map(|()| buf.len())\n            .map_err(|e| match e {\n                SendErrorKind::WouldBlock => io::ErrorKind::WouldBlock.into(),\n                SendErrorKind::BrokenPipe => io::ErrorKind::BrokenPipe.into(),\n            })\n            .into()\n    }\n\n    fn poll_flush(self: Pin<&mut Self>, _cx: &mut Context) -> Poll<io::Result<()>> {\n        Poll::Ready(Ok(()))\n    }\n\n    fn poll_shutdown(self: Pin<&mut Self>, _cx: &mut Context) -> Poll<io::Result<()>> {\n        Poll::Ready(Ok(()))\n    }\n}\n\npub struct SubstreamValue {\n    framed_stream:                  Framed<StreamHandle, DiscoveryCodec>,\n    // received pending messages\n    pub(crate) pending_messages:    VecDeque<DiscoveryMessage>,\n    pub(crate) addr_known:          AddrKnown,\n    // FIXME: Remote listen address, resolved by id protocol\n    pub(crate) remote_addr:         RemoteAddress,\n    pub(crate) announce:            bool,\n    pub(crate) last_announce:       Option<Instant>,\n    pub(crate) announce_multiaddrs: Vec<Multiaddr>,\n    session_id:                     SessionId,\n    announce_interval:              Duration,\n    received_get_nodes:             bool,\n    received_nodes:                 bool,\n    remote_closed:                  bool,\n}\n\nimpl SubstreamValue {\n    pub(crate) fn new(\n        direction: SessionType,\n        substream: Substream,\n        max_known: usize,\n        query_cycle: Option<Duration>,\n    ) -> SubstreamValue {\n        let session_id = substream.stream.session_id;\n        let mut pending_messages = VecDeque::default();\n        debug!(\"direction: {:?}\", direction);\n        let mut addr_known = 
AddrKnown::new(max_known);\n        let remote_addr = if direction.is_outbound() {\n            pending_messages.push_back(DiscoveryMessage::new_get_nodes(\n                VERSION,\n                MAX_ADDR_TO_SEND,\n                substream.listen_port,\n            ));\n            addr_known.insert(ConnectableAddr::from(&substream.remote_addr));\n\n            RemoteAddress::Listen(substream.remote_addr)\n        } else {\n            RemoteAddress::Init(substream.remote_addr)\n        };\n\n        SubstreamValue {\n            framed_stream: Framed::new(substream.stream, DiscoveryCodec::default()),\n            last_announce: None,\n            announce_interval: query_cycle\n                .unwrap_or_else(|| Duration::from_secs(ANNOUNCE_INTERVAL)),\n            pending_messages,\n            addr_known,\n            remote_addr,\n            session_id,\n            announce: false,\n            announce_multiaddrs: Vec::new(),\n            received_get_nodes: false,\n            received_nodes: false,\n            remote_closed: false,\n        }\n    }\n\n    fn remote_connectable_addr(&self) -> ConnectableAddr {\n        ConnectableAddr::from(self.remote_addr.to_inner())\n    }\n\n    pub(crate) fn check_timer(&mut self) {\n        if self\n            .last_announce\n            .map(|time| time.elapsed() > self.announce_interval)\n            .unwrap_or(true)\n        {\n            debug!(\"announce this session: {:?}\", self.session_id);\n            self.announce = true;\n        }\n    }\n\n    pub(crate) fn send_messages(&mut self, cx: &mut Context) -> Result<(), io::Error> {\n        let mut sink = Pin::new(&mut self.framed_stream);\n\n        while let Some(message) = self.pending_messages.pop_front() {\n            debug!(\"Discovery sending message: {}\", message);\n\n            match sink.as_mut().poll_ready(cx)? 
{\n                Poll::Pending => {\n                    self.pending_messages.push_front(message);\n                    return Ok(());\n                }\n                Poll::Ready(()) => {\n                    sink.as_mut().start_send(message)?;\n                }\n            }\n        }\n        let _ = sink.as_mut().poll_flush(cx)?;\n        Ok(())\n    }\n\n    pub(crate) fn handle_message(\n        &mut self,\n        message: DiscoveryMessage,\n        addr_mgr: &mut AddressManager,\n    ) -> Result<Option<Nodes>, io::Error> {\n        match message {\n            DiscoveryMessage {\n                payload: Some(Payload::GetNodes(get_nodes)),\n            } => {\n                if self.received_get_nodes {\n                    // TODO: misbehavior\n                    if addr_mgr\n                        .misbehave(self.session_id, Misbehavior::DuplicateGetNodes)\n                        .is_disconnect()\n                    {\n                        // TODO: more clear error type\n                        warn!(\"Already received get nodes\");\n                        return Err(io::ErrorKind::Other.into());\n                    }\n                } else {\n                    // TODO: magic number\n                    // fetch the items before registering the peer's listen\n                    // address below, otherwise the response could include\n                    // the peer's own address.\n                    let mut items = addr_mgr.get_random(2500, self.session_id);\n\n                    // replace the client's random outbound port with its listen port\n                    let listen_port = get_nodes.listen_port();\n                    debug!(\"listen port: {:?}\", listen_port);\n                    if let Some(port) = listen_port {\n                        self.remote_addr.update_port(port);\n                        self.addr_known.insert(self.remote_connectable_addr());\n\n                        // add client listen address to manager\n                        if let RemoteAddress::Listen(ref addr) = 
self.remote_addr {\n                            addr_mgr.add_new_addr(self.session_id, addr.clone());\n                        }\n                    }\n\n                    while items.len() > 1000 {\n                        if let Some(last_item) = items.pop() {\n                            let idx = rand::random::<usize>() % 1000;\n                            items[idx] = last_item;\n                        }\n                    }\n\n                    let announce = false;\n                    let items = items.into_iter().map(|addr| vec![addr]).collect::<Vec<_>>();\n\n                    self.pending_messages\n                        .push_back(DiscoveryMessage::new_nodes(announce, items));\n\n                    self.received_get_nodes = true;\n                }\n            }\n            DiscoveryMessage {\n                payload: Some(Payload::Nodes(nodes)),\n            } => {\n                for item in &nodes.items {\n                    if item.addrs.len() > MAX_ADDRS {\n                        let misbehavior = Misbehavior::TooManyAddresses(item.addrs.len());\n                        if addr_mgr\n                            .misbehave(self.session_id, misbehavior)\n                            .is_disconnect()\n                        {\n                            // TODO: more clear error type\n                            return Err(io::ErrorKind::Other.into());\n                        }\n                    }\n                }\n\n                if nodes.announce {\n                    if nodes.items.len() > ANNOUNCE_THRESHOLD {\n                        warn!(\"Nodes items more than {}\", ANNOUNCE_THRESHOLD);\n                        // TODO: misbehavior\n                        let misbehavior = Misbehavior::TooManyItems {\n                            announce: nodes.announce,\n                            length:   nodes.items.len(),\n                        };\n                        if addr_mgr\n                            
.misbehave(self.session_id, misbehavior)\n                            .is_disconnect()\n                        {\n                            // TODO: more clear error type\n                            return Err(io::ErrorKind::Other.into());\n                        }\n                    } else {\n                        return Ok(Some(nodes));\n                    }\n                } else if self.received_nodes {\n                    warn!(\"already received Nodes(announce=false) message\");\n                    // TODO: misbehavior\n                    if addr_mgr\n                        .misbehave(self.session_id, Misbehavior::DuplicateFirstNodes)\n                        .is_disconnect()\n                    {\n                        // TODO: more clear error type\n                        return Err(io::ErrorKind::Other.into());\n                    }\n                } else if nodes.items.len() > MAX_ADDR_TO_SEND as usize {\n                    warn!(\n                        \"Too many items (announce=false) length={}\",\n                        nodes.items.len()\n                    );\n                    // TODO: misbehavior\n                    let misbehavior = Misbehavior::TooManyItems {\n                        announce: nodes.announce,\n                        length:   nodes.items.len(),\n                    };\n\n                    if addr_mgr\n                        .misbehave(self.session_id, misbehavior)\n                        .is_disconnect()\n                    {\n                        // TODO: more clear error type\n                        return Err(io::ErrorKind::Other.into());\n                    }\n                } else {\n                    self.received_nodes = true;\n                    return Ok(Some(nodes));\n                }\n            }\n            DiscoveryMessage { payload: None } => {\n                // TODO: misbehavior\n            }\n        }\n        Ok(None)\n    }\n\n    pub(crate) fn 
receive_messages(\n        &mut self,\n        cx: &mut Context,\n        addr_mgr: &mut AddressManager,\n    ) -> Result<Option<(SessionId, Vec<Nodes>)>, io::Error> {\n        if self.remote_closed {\n            return Ok(None);\n        }\n\n        let mut nodes_list = Vec::new();\n        loop {\n            match Pin::new(&mut self.framed_stream).as_mut().poll_next(cx) {\n                Poll::Ready(Some(res)) => {\n                    let message = res?;\n                    trace!(\"received message {}\", message);\n                    if let Some(nodes) = self.handle_message(message, addr_mgr)? {\n                        // Add to known address list\n                        for node in &nodes.items {\n                            for addr in node.clone().addrs() {\n                                trace!(\"received address: {}\", addr);\n                                self.addr_known.insert(ConnectableAddr::from(addr));\n                            }\n                        }\n                        nodes_list.push(nodes);\n                    }\n                }\n                Poll::Ready(None) => {\n                    debug!(\"remote closed\");\n                    self.remote_closed = true;\n                    break;\n                }\n                Poll::Pending => {\n                    break;\n                }\n            }\n        }\n        Ok(Some((self.session_id, nodes_list)))\n    }\n}\n\npub struct Substream {\n    pub remote_addr: Multiaddr,\n    pub direction:   SessionType,\n    pub stream:      StreamHandle,\n    pub listen_port: Option<u16>,\n}\n\nimpl Substream {\n    pub fn new(context: ProtocolContextMutRef, receiver: Receiver<Vec<u8>>) -> Substream {\n        let stream = StreamHandle {\n            data_buf: BytesMut::default(),\n            proto_id: context.proto_id,\n            session_id: context.session.id,\n            receiver,\n            sender: context.control().clone(),\n        };\n        let listen_port = 
if context.session.ty.is_outbound() {\n            context\n                .listens()\n                .iter()\n                .map(|address| multiaddr_to_socketaddr(address).unwrap().port())\n                .next()\n        } else {\n            None\n        };\n        Substream {\n            remote_addr: context.session.address.clone(),\n            direction: context.session.ty,\n            stream,\n            listen_port,\n        }\n    }\n\n    pub fn key(&self) -> SubstreamKey {\n        SubstreamKey {\n            direction:  self.direction,\n            session_id: self.stream.session_id,\n            proto_id:   self.stream.proto_id,\n        }\n    }\n}\n\n#[derive(Eq, PartialEq, Hash, Debug, Clone)]\npub(crate) enum RemoteAddress {\n    /// Inbound init remote address\n    Init(Multiaddr),\n    /// Outbound init remote address or Inbound listen address\n    Listen(Multiaddr),\n}\n\nimpl RemoteAddress {\n    fn to_inner(&self) -> &Multiaddr {\n        match self {\n            RemoteAddress::Init(ref addr) | RemoteAddress::Listen(ref addr) => addr,\n        }\n    }\n\n    pub(crate) fn into_inner(self) -> Multiaddr {\n        match self {\n            RemoteAddress::Init(addr) | RemoteAddress::Listen(addr) => addr,\n        }\n    }\n\n    fn update_port(&mut self, port: u16) {\n        if let RemoteAddress::Init(ref addr) = self {\n            let addr = addr\n                .into_iter()\n                .map(|proto| {\n                    match proto {\n                        // TODO: other transport, UDP for example\n                        Protocol::TCP(_) => Protocol::TCP(port),\n                        value => value,\n                    }\n                })\n                .collect();\n            *self = RemoteAddress::Listen(addr);\n        }\n    }\n}\n"
  },
  {
    "path": "core/network/src/protocols/discovery.rs",
    "content": "mod addr;\nmod behaviour;\nmod message;\nmod protocol;\nmod substream;\n\nuse self::protocol::DiscoveryProtocol;\nuse addr::AddressManager;\nuse behaviour::DiscoveryBehaviour;\n\nuse crate::{event::PeerManagerEvent, peer_manager::PeerManagerHandle};\n\nuse futures::channel::mpsc::UnboundedSender;\nuse tentacle::{\n    builder::MetaBuilder,\n    service::{ProtocolHandle, ProtocolMeta},\n    ProtocolId,\n};\n\nuse std::time::Duration;\n\npub const NAME: &str = \"chain_discovery\";\npub const SUPPORT_VERSIONS: [&str; 1] = [\"0.1\"];\n\npub struct Discovery(DiscoveryProtocol);\n\nimpl Discovery {\n    pub fn new(\n        peer_mgr: PeerManagerHandle,\n        event_tx: UnboundedSender<PeerManagerEvent>,\n        sync_interval: Duration,\n    ) -> Self {\n        #[cfg(feature = \"global_ip_only\")]\n        log::info!(\"turn on global ip only\");\n        #[cfg(not(feature = \"global_ip_only\"))]\n        log::info!(\"turn off global ip only\");\n\n        let address_manager = AddressManager::new(peer_mgr.clone(), event_tx);\n        let behaviour = DiscoveryBehaviour::new(address_manager, peer_mgr, Some(sync_interval));\n\n        Discovery(DiscoveryProtocol::new(behaviour))\n    }\n\n    pub fn build_meta(self, protocol_id: ProtocolId) -> ProtocolMeta {\n        MetaBuilder::new()\n            .id(protocol_id)\n            .name(name!(NAME))\n            .support_versions(support_versions!(SUPPORT_VERSIONS))\n            .service_handle(move || ProtocolHandle::Callback(Box::new(self.0)))\n            .build()\n    }\n}\n"
  },
  {
    "path": "core/network/src/protocols/identify/behaviour.rs",
    "content": "use std::sync::atomic::{AtomicBool, Ordering};\nuse std::sync::Arc;\n\nuse futures::channel::mpsc::UnboundedSender;\nuse tentacle::multiaddr::Multiaddr;\nuse tentacle::secio::PeerId;\nuse tentacle::service::SessionType;\n\nuse crate::event::PeerManagerEvent;\nuse crate::peer_manager::PeerManagerHandle;\n\nuse super::common::reachable;\nuse super::message;\nuse super::protocol::StateContext;\n\n#[derive(Clone)]\nstruct AddrReporter {\n    inner:    UnboundedSender<PeerManagerEvent>,\n    shutdown: Arc<AtomicBool>,\n}\n\nimpl AddrReporter {\n    pub fn new(reporter: UnboundedSender<PeerManagerEvent>) -> Self {\n        AddrReporter {\n            inner:    reporter,\n            shutdown: Arc::new(AtomicBool::new(false)),\n        }\n    }\n\n    // TODO: upstream heart-beat check\n    pub fn report(&self, event: PeerManagerEvent) {\n        if self.shutdown.load(Ordering::SeqCst) {\n            return;\n        }\n\n        if self.inner.unbounded_send(event).is_err() {\n            log::debug!(\"network: discovery: peer manager offline\");\n\n            self.shutdown.store(true, Ordering::SeqCst);\n        }\n    }\n}\n\n/// Identify protocol\npub struct IdentifyBehaviour {\n    peer_mgr:      PeerManagerHandle,\n    addr_reporter: AddrReporter,\n}\n\n// Allow dead code for cfg(test)\n#[allow(dead_code)]\nimpl IdentifyBehaviour {\n    pub fn new(peer_mgr: PeerManagerHandle, event_tx: UnboundedSender<PeerManagerEvent>) -> Self {\n        let addr_reporter = AddrReporter::new(event_tx);\n\n        IdentifyBehaviour {\n            peer_mgr,\n            addr_reporter,\n        }\n    }\n\n    pub fn chain_id(&self) -> String {\n        self.peer_mgr.chain_id().as_ref().as_hex()\n    }\n\n    pub fn local_listen_addrs(&self) -> Vec<Multiaddr> {\n        let addrs = self.peer_mgr.listen_addrs();\n        let reachable_addrs = addrs.into_iter().filter(reachable);\n\n        reachable_addrs.take(message::MAX_LISTEN_ADDRS).collect()\n    }\n\n    pub fn 
send_identity(&self, context: &StateContext) {\n        let address_info = {\n            let listen_addrs = self.local_listen_addrs();\n            let observed_addr = context.observed_addr();\n            message::AddressInfo::new(listen_addrs, observed_addr)\n        };\n\n        let identity = {\n            let msg = message::Identity::new(self.chain_id(), address_info);\n            match msg.into_bytes() {\n                Ok(msg) => msg,\n                Err(err) => {\n                    log::warn!(\"encode identity msg failed {}\", err);\n                    context.disconnect();\n                    return;\n                }\n            }\n        };\n\n        context.send_message(identity);\n    }\n\n    pub fn send_ack(&self, context: &StateContext) {\n        let address_info = {\n            let listen_addrs = self.local_listen_addrs();\n            let observed_addr = context.observed_addr();\n            message::AddressInfo::new(listen_addrs, observed_addr)\n        };\n\n        let acknowledge = {\n            let msg = message::Acknowledge::new(address_info);\n            match msg.into_bytes() {\n                Ok(msg) => msg,\n                Err(err) => {\n                    log::warn!(\"encode acknowledge msg failed {}\", err);\n                    context.disconnect();\n                    return;\n                }\n            }\n        };\n\n        context.send_message(acknowledge);\n    }\n\n    pub fn verify_remote_identity(\n        &self,\n        identity: &message::Identity,\n    ) -> Result<(), super::protocol::Error> {\n        if identity.chain_id != self.chain_id() {\n            Err(super::protocol::Error::WrongChainId)\n        } else {\n            Ok(())\n        }\n    }\n\n    pub fn process_listens(&self, context: &StateContext, listens: Vec<Multiaddr>) {\n        let peer_id = &context.remote_peer.id;\n        log::debug!(\"listen addresses: {:?}\", listens);\n\n        let reachable_addrs = 
listens.into_iter().filter(reachable).collect::<Vec<_>>();\n        let identified_addrs = PeerManagerEvent::IdentifiedAddrs {\n            pid:   peer_id.to_owned(),\n            addrs: reachable_addrs,\n        };\n        self.addr_reporter.report(identified_addrs);\n    }\n\n    pub fn process_observed(&self, context: &StateContext, observed: Multiaddr) {\n        let peer_id = &context.remote_peer.id;\n        let session_type = context.session_context.ty;\n        log::debug!(\"observed addr {:?} from {}\", observed, context.remote_peer);\n\n        let unobservable = |observed| -> bool {\n            self.add_observed_addr(peer_id, observed, session_type)\n                .is_err()\n        };\n\n        if reachable(&observed) && unobservable(observed.clone()) {\n            log::warn!(\"unobservable {} from {}\", observed, context.remote_peer);\n            context.disconnect();\n        }\n    }\n\n    pub fn add_observed_addr(\n        &self,\n        peer: &PeerId,\n        addr: Multiaddr,\n        ty: SessionType,\n    ) -> Result<(), ()> {\n        log::debug!(\"add observed: {:?}, addr {:?}, ty: {:?}\", peer, addr, ty);\n\n        // Noop right now\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "core/network/src/protocols/identify/common.rs",
    "content": "use tentacle::{\n    multiaddr::Multiaddr,\n    utils::{is_reachable, multiaddr_to_socketaddr},\n};\n\npub fn reachable(addr: &Multiaddr) -> bool {\n    #[cfg(feature = \"global_ip_only\")]\n    let global_ip_only = true;\n    #[cfg(not(feature = \"global_ip_only\"))]\n    let global_ip_only = false;\n\n    multiaddr_to_socketaddr(addr)\n        .map(|socket_addr| !global_ip_only || is_reachable(socket_addr.ip()))\n        .unwrap_or(false)\n}\n"
  },
  {
    "path": "core/network/src/protocols/identify/identification.rs",
    "content": "use std::borrow::Borrow;\nuse std::collections::HashSet;\nuse std::future::Future;\nuse std::hash::{Hash, Hasher};\nuse std::pin::Pin;\nuse std::sync::Arc;\nuse std::task::{Context, Poll, Waker};\n\nuse parking_lot::Mutex;\n\ntype Index = usize;\n\npub struct WaitIdentification {\n    idx:          Index,\n    ident_status: Arc<Mutex<IdentificationStatus>>,\n}\n\nimpl WaitIdentification {\n    fn new(ident_status: Arc<Mutex<IdentificationStatus>>) -> Self {\n        WaitIdentification {\n            idx: usize::MAX,\n            ident_status,\n        }\n    }\n}\n\nimpl Future for WaitIdentification {\n    type Output = Result<(), super::protocol::Error>;\n\n    fn poll(mut self: Pin<&mut Self>, ctx: &mut Context<'_>) -> Poll<Self::Output> {\n        let insert_idx = {\n            let idx = self.idx;\n            match &mut *self.ident_status.lock() {\n                IdentificationStatus::Done(ret) => return Poll::Ready(ret.to_owned()),\n                IdentificationStatus::Pending(_) if idx != usize::MAX => return Poll::Pending,\n                IdentificationStatus::Pending(wakerset) => wakerset.insert(ctx.waker().to_owned()),\n            }\n        };\n\n        self.idx = insert_idx;\n        Poll::Pending\n    }\n}\n\nimpl Drop for WaitIdentification {\n    fn drop(&mut self) {\n        if let IdentificationStatus::Pending(wakerset) = &mut *self.ident_status.lock() {\n            wakerset.remove(self.idx);\n        }\n    }\n}\n\npub struct Identification {\n    status: Arc<Mutex<IdentificationStatus>>,\n}\n\nimpl Identification {\n    pub(crate) fn new() -> Self {\n        Identification {\n            status: Default::default(),\n        }\n    }\n\n    pub fn wait(&self) -> WaitIdentification {\n        WaitIdentification::new(Arc::clone(&self.status))\n    }\n\n    pub fn pass(&self) {\n        self.done(Ok(()))\n    }\n\n    pub fn failed(&self, error: super::protocol::Error) {\n        self.done(Err(error))\n    }\n\n    fn 
fail_if_not_done(&self) {\n        {\n            let status = self.status.lock();\n            if let IdentificationStatus::Done(_) = &*status {\n                return;\n            }\n        }\n\n        self.failed(super::protocol::Error::WaitFutDropped)\n    }\n\n    fn done(&self, ret: Result<(), super::protocol::Error>) {\n        let wakerset = {\n            let mut status = self.status.lock();\n\n            if let IdentificationStatus::Pending(wakerset) =\n                std::mem::replace(&mut *status, IdentificationStatus::Done(ret))\n            {\n                wakerset\n            } else {\n                return;\n            }\n        };\n\n        wakerset.wake()\n    }\n}\n\nimpl Drop for Identification {\n    fn drop(&mut self) {\n        self.fail_if_not_done()\n    }\n}\n\nstruct IndexedWaker {\n    idx:   Index,\n    waker: Waker,\n}\n\nimpl IndexedWaker {\n    fn wake(self) {\n        self.waker.wake()\n    }\n}\n\nimpl Borrow<Index> for IndexedWaker {\n    fn borrow(&self) -> &Index {\n        &self.idx\n    }\n}\n\nimpl PartialEq for IndexedWaker {\n    fn eq(&self, other: &IndexedWaker) -> bool {\n        self.idx == other.idx\n    }\n}\n\nimpl Eq for IndexedWaker {}\n\nimpl Hash for IndexedWaker {\n    fn hash<H: Hasher>(&self, state: &mut H) {\n        self.idx.hash(state)\n    }\n}\n\nstruct WakerSet {\n    id:     Index,\n    wakers: HashSet<IndexedWaker>,\n}\n\nimpl WakerSet {\n    fn new() -> WakerSet {\n        WakerSet {\n            id:     0,\n            wakers: HashSet::new(),\n        }\n    }\n\n    fn insert(&mut self, waker: Waker) -> Index {\n        debug_assert!(self.id != std::usize::MAX);\n        self.id += 1;\n\n        let indexed_waker = IndexedWaker {\n            idx: self.id,\n            waker,\n        };\n\n        self.wakers.insert(indexed_waker);\n        self.id\n    }\n\n    fn remove(&mut self, idx: Index) {\n        self.wakers.remove(&idx);\n    }\n\n    fn wake(self) {\n        for waker in 
self.wakers {\n            waker.wake()\n        }\n    }\n}\n\nenum IdentificationStatus {\n    Pending(WakerSet),\n    Done(Result<(), super::protocol::Error>),\n}\n\nimpl Default for IdentificationStatus {\n    fn default() -> Self {\n        IdentificationStatus::Pending(WakerSet::new())\n    }\n}\n"
  },
  {
    "path": "core/network/src/protocols/identify/message.rs",
    "content": "use std::convert::TryFrom;\n\nuse derive_more::Display;\nuse prost::{EncodeError, Message};\nuse protocol::{Bytes, BytesMut};\nuse tentacle::multiaddr::Multiaddr;\n\npub const MAX_LISTEN_ADDRS: usize = 10;\n\n#[derive(Debug, Display)]\npub enum Error {\n    #[display(fmt = \"too many listen addrs\")]\n    TooManyListenAddrs,\n\n    #[display(fmt = \"no observed addrs\")]\n    NoObservedAddr,\n\n    #[display(fmt = \"no addr info\")]\n    NoAddrInfo,\n}\n\npub trait AddressInfoMessage {\n    fn validate(&self) -> Result<(), self::Error>;\n    fn listen_addrs(&self) -> Vec<Multiaddr>;\n    fn observed_addr(&self) -> Option<Multiaddr>;\n}\n\nimpl AddressInfoMessage for Option<AddressInfo> {\n    fn listen_addrs(&self) -> Vec<Multiaddr> {\n        self.as_ref()\n            .map(|ai| ai.listen_addrs())\n            .unwrap_or_else(Vec::new)\n    }\n\n    fn observed_addr(&self) -> Option<Multiaddr> {\n        self.as_ref().map(|ai| ai.observed_addr()).flatten()\n    }\n\n    fn validate(&self) -> Result<(), self::Error> {\n        match self.as_ref() {\n            Some(addr_info) => addr_info.validate(),\n            None => Err(self::Error::NoAddrInfo),\n        }\n    }\n}\n\n#[derive(Message)]\npub struct AddressInfo {\n    #[prost(bytes, repeated, tag = \"1\")]\n    pub listen_addrs:  Vec<Vec<u8>>,\n    #[prost(bytes, tag = \"2\")]\n    pub observed_addr: Vec<u8>,\n}\n\nimpl AddressInfo {\n    pub fn new(listen_addrs: Vec<Multiaddr>, observed_addr: Multiaddr) -> Self {\n        AddressInfo {\n            listen_addrs:  listen_addrs.into_iter().map(|addr| addr.to_vec()).collect(),\n            observed_addr: observed_addr.to_vec(),\n        }\n    }\n\n    pub fn listen_addrs(&self) -> Vec<Multiaddr> {\n        let addrs = self.listen_addrs.iter().cloned();\n        let to_multiaddrs = addrs.filter_map(|bytes| Multiaddr::try_from(bytes).ok());\n        to_multiaddrs.collect()\n    }\n\n    pub fn observed_addr(&self) -> Option<Multiaddr> {\n        
Multiaddr::try_from(self.observed_addr.clone()).ok()\n    }\n\n    pub fn validate(&self) -> Result<(), self::Error> {\n        if self.listen_addrs.len() > MAX_LISTEN_ADDRS {\n            return Err(self::Error::TooManyListenAddrs);\n        }\n\n        if self.observed_addr().is_none() {\n            return Err(self::Error::NoObservedAddr);\n        }\n\n        Ok(())\n    }\n\n    #[cfg(test)]\n    pub fn mock_valid() -> Self {\n        let listen_addr: Multiaddr = \"/ip4/47.111.169.36/tcp/2000\".parse().unwrap();\n        let observed_addr: Multiaddr = \"/ip4/47.111.169.36/tcp/2001\".parse().unwrap();\n\n        AddressInfo {\n            listen_addrs:  vec![listen_addr.to_vec()],\n            observed_addr: observed_addr.to_vec(),\n        }\n    }\n\n    #[cfg(test)]\n    pub fn mock_invalid() -> Self {\n        AddressInfo {\n            listen_addrs:  vec![],\n            observed_addr: b\"xxx\".to_vec(),\n        }\n    }\n}\n\n#[derive(Message)]\npub struct Identity {\n    #[prost(string, tag = \"1\")]\n    pub chain_id:  String,\n    #[prost(message, tag = \"2\")]\n    pub addr_info: Option<AddressInfo>,\n}\n\nimpl Identity {\n    pub fn new(chain_id: String, addr_info: AddressInfo) -> Self {\n        Identity {\n            chain_id,\n            addr_info: Some(addr_info),\n        }\n    }\n\n    pub fn validate(&self) -> Result<(), self::Error> {\n        self.addr_info.validate()\n    }\n\n    pub fn into_bytes(self) -> Result<Bytes, EncodeError> {\n        let mut buf = BytesMut::with_capacity(self.encoded_len());\n        self.encode(&mut buf)?;\n\n        Ok(buf.freeze())\n    }\n\n    #[cfg(test)]\n    pub fn mock_valid() -> Self {\n        use protocol::types::Hash;\n\n        Identity {\n            chain_id:  Hash::digest(Bytes::from_static(b\"hello\")).as_hex(),\n            addr_info: Some(AddressInfo::mock_valid()),\n        }\n    }\n\n    #[cfg(test)]\n    pub fn mock_invalid() -> Self {\n        use protocol::types::Hash;\n\n        
let identity = Identity {\n            chain_id:  Hash::digest(Bytes::from_static(b\"hello\")).as_hex(),\n            addr_info: Some(AddressInfo::mock_invalid()),\n        };\n        assert!(identity.validate().is_err());\n\n        identity\n    }\n}\n\n#[derive(Message)]\npub struct Acknowledge {\n    #[prost(message, tag = \"1\")]\n    pub addr_info: Option<AddressInfo>,\n}\n\nimpl Acknowledge {\n    pub fn new(addr_info: AddressInfo) -> Self {\n        Acknowledge {\n            addr_info: Some(addr_info),\n        }\n    }\n\n    pub fn validate(&self) -> Result<(), self::Error> {\n        self.addr_info.validate()\n    }\n\n    pub fn into_bytes(self) -> Result<Bytes, EncodeError> {\n        let mut buf = BytesMut::with_capacity(self.encoded_len());\n        self.encode(&mut buf)?;\n\n        Ok(buf.freeze())\n    }\n\n    #[cfg(test)]\n    pub fn mock_valid() -> Self {\n        Acknowledge {\n            addr_info: Some(AddressInfo::mock_valid()),\n        }\n    }\n\n    #[cfg(test)]\n    pub fn mock_invalid() -> Self {\n        Acknowledge {\n            addr_info: Some(AddressInfo::mock_invalid()),\n        }\n    }\n}\n"
  },
  {
    "path": "core/network/src/protocols/identify/protocol.rs",
    "content": "use std::collections::HashMap;\nuse std::sync::Arc;\nuse std::time::Duration;\n\nuse derive_more::Display;\nuse futures::future::{self, AbortHandle};\nuse futures_timer::Delay;\nuse lazy_static::lazy_static;\nuse parking_lot::RwLock;\nuse prost::Message;\nuse protocol::Bytes;\nuse tentacle::multiaddr::{Multiaddr, Protocol};\nuse tentacle::secio::PeerId;\nuse tentacle::service::{SessionType, TargetProtocol};\nuse tentacle::traits::SessionProtocol;\nuse tentacle::{ProtocolId, SessionId};\n\n#[cfg(test)]\nuse crate::test::mock::{ServiceControl, SessionContext};\n#[cfg(not(test))]\nuse tentacle::context::{ProtocolContextMutRef, SessionContext};\n#[cfg(not(test))]\nuse tentacle::service::ServiceControl;\n\n#[cfg(not(test))]\nuse super::behaviour::IdentifyBehaviour;\n#[cfg(test)]\nuse super::tests::MockIdentifyBehaviour;\n\nuse super::identification::{Identification, WaitIdentification};\nuse super::message::{Acknowledge, AddressInfoMessage, Identity};\n\npub const DEFAULT_TIMEOUT: Duration = Duration::from_secs(8);\npub const MAX_MESSAGE_SIZE: usize = 5 * 1000; // 5KB\n\nlazy_static! 
{\n    // NOTE: Use peer id here because trust metric integration tests run in one process\n    static ref PEER_IDENTIFICATION_BACKLOG: RwLock<HashMap<PeerId, Identification>> =\n        RwLock::new(HashMap::new());\n}\n\n#[derive(Debug, Display, Clone)]\npub enum Error {\n    #[display(fmt = \"wrong chain id\")]\n    WrongChainId,\n\n    #[display(fmt = \"timeout\")]\n    Timeout,\n\n    #[display(fmt = \"exceed max message size\")]\n    ExceedMaxMessageSize,\n\n    #[display(fmt = \"decode identity failed\")]\n    DecodeIdentityFailed,\n\n    #[display(fmt = \"decode ack failed\")]\n    DecodeAckFailed,\n\n    #[display(fmt = \"{}\", _0)]\n    InvalidMessage(String),\n\n    #[display(fmt = \"wait future dropped\")]\n    WaitFutDropped,\n\n    #[display(fmt = \"disconnected\")]\n    Disconnected,\n\n    #[display(fmt = \"{}\", _0)]\n    Other(String),\n}\n\n// Wrap ProtocolContextMutRef for easy mock and test\n#[cfg(not(test))]\npub struct IdentifyProtocolContext<'a>(ProtocolContextMutRef<'a>);\n#[cfg(test)]\npub struct IdentifyProtocolContext<'a>(pub &'a crate::test::mock::ProtocolContext);\n\n#[derive(Debug, Display)]\n#[display(fmt = \"peer {:?} addr {:?}\", id, addr)]\npub struct RemotePeer {\n    pub id:   PeerId,\n    pub sid:  SessionId,\n    pub addr: Multiaddr,\n}\n\npub struct NoEncryption;\n\nimpl RemotePeer {\n    pub fn from_proto_context(\n        proto_context: &IdentifyProtocolContext,\n    ) -> Result<RemotePeer, NoEncryption> {\n        match proto_context.0.session.remote_pubkey.as_ref() {\n            None => Err(NoEncryption),\n            Some(pubkey) => {\n                let remote_peer = RemotePeer {\n                    id:   pubkey.peer_id(),\n                    sid:  proto_context.0.session.id,\n                    addr: proto_context.0.session.address.to_owned(),\n                };\n\n                Ok(remote_peer)\n            }\n        }\n    }\n}\n\npub struct StateContext {\n    pub remote_peer:          Arc<RemotePeer>,\n    
pub proto_id:             ProtocolId,\n    pub service_control:      ServiceControl,\n    pub session_context:      SessionContext,\n    pub timeout_abort_handle: Option<AbortHandle>,\n}\n\nimpl StateContext {\n    pub fn from_proto_context(\n        proto_context: &IdentifyProtocolContext,\n    ) -> Result<StateContext, NoEncryption> {\n        let remote_peer = RemotePeer::from_proto_context(proto_context)?;\n        let state_context = StateContext {\n            remote_peer:          Arc::new(remote_peer),\n            proto_id:             proto_context.0.proto_id(),\n            service_control:      proto_context.0.control().clone(),\n            session_context:      proto_context.0.session.clone(),\n            timeout_abort_handle: None,\n        };\n\n        Ok(state_context)\n    }\n\n    pub fn observed_addr(&self) -> Multiaddr {\n        let remote_addr = self.session_context.address.iter();\n\n        remote_addr\n            .filter(|proto| !matches!(proto, Protocol::P2P(_)))\n            .collect()\n    }\n\n    pub fn send_message(&self, msg: Bytes) {\n        if let Err(err) =\n            self.service_control\n                .quick_send_message_to(self.remote_peer.sid, self.proto_id, msg)\n        {\n            log::warn!(\n                \"internal error: quick send message to {} failed {}\",\n                self.remote_peer,\n                err\n            );\n        }\n    }\n\n    pub fn disconnect(&self) {\n        let _ = self.service_control.disconnect(self.remote_peer.sid);\n    }\n\n    pub fn open_protocols(&self) {\n        if let Err(err) = self\n            .service_control\n            .open_protocols(self.remote_peer.sid, TargetProtocol::All)\n        {\n            log::warn!(\"open protocols to peer {} failed {}\", self.remote_peer, err);\n            self.disconnect()\n        }\n    }\n\n    pub fn set_open_protocols_timeout(&mut self, timeout: Duration) {\n        let service_control = self.service_control.clone();\n  
      let remote_peer = Arc::clone(&self.remote_peer);\n\n        tokio::spawn(async move {\n            Delay::new(timeout).await;\n\n            if crate::protocols::OpenedProtocols::is_all_opened(&remote_peer.id) {\n                return;\n            }\n\n            log::warn!(\"peer {} open protocols timeout, disconnect it\", remote_peer);\n            finish_identify(&remote_peer, Err(self::Error::Timeout));\n            let _ = service_control.disconnect(remote_peer.sid);\n        });\n    }\n\n    pub fn set_timeout(&mut self, description: &'static str, timeout: Duration) {\n        let service_control = self.service_control.clone();\n        let remote_peer = Arc::clone(&self.remote_peer);\n\n        let (timeout, timeout_abort_handle) = future::abortable(async move {\n            Delay::new(timeout).await;\n\n            log::warn!(\n                \"{} timeout from peer {}, disconnect it\",\n                description,\n                remote_peer,\n            );\n\n            finish_identify(&remote_peer, Err(self::Error::Timeout));\n            let _ = service_control.disconnect(remote_peer.sid);\n        });\n\n        self.timeout_abort_handle = Some(timeout_abort_handle);\n        tokio::spawn(timeout);\n    }\n\n    pub fn cancel_timeout(&self) {\n        if let Some(timeout) = self.timeout_abort_handle.as_ref() {\n            timeout.abort()\n        }\n    }\n}\n\nimpl Drop for StateContext {\n    fn drop(&mut self) {\n        // Something went wrong, disconnect\n        self.disconnect();\n        finish_identify(\n            &self.remote_peer,\n            Err(Error::Other(\"StateContext dropped\".to_owned())),\n        );\n    }\n}\n\n#[derive(Debug, Clone, PartialEq, Eq, Display)]\npub enum ClientProcedure {\n    #[display(fmt = \"client wait for server identity acknowledge\")]\n    WaitAck,\n\n    #[display(fmt = \"client open other protocols\")]\n    OpenOtherProtocols,\n\n    #[display(fmt = \"server failed identification\")]\n   
 Failed,\n}\n\n#[derive(Debug, Clone, PartialEq, Eq, Display)]\npub enum ServerProcedure {\n    #[display(fmt = \"server wait for client identity\")]\n    WaitIdentity,\n\n    #[display(fmt = \"server wait for client open protocols\")]\n    WaitOpenProtocols, // After accept session\n\n    #[display(fmt = \"client failed identification\")]\n    Failed,\n}\n\npub enum State {\n    SessionProtocolInited,\n    FailedWithoutEncryption,\n    FailedWithExceedMsgSize,\n    ClientNegotiate {\n        procedure: ClientProcedure,\n        context:   StateContext,\n    },\n    ServerNegotiate {\n        procedure: ServerProcedure,\n        context:   StateContext,\n    },\n}\n\npub struct IdentifyProtocol {\n    pub(crate) state:     State,\n    #[cfg(not(test))]\n    behaviour:            Arc<IdentifyBehaviour>,\n    #[cfg(test)]\n    pub(crate) behaviour: Arc<MockIdentifyBehaviour>,\n}\n\nimpl IdentifyProtocol {\n    #[cfg(not(test))]\n    pub fn new(behaviour: Arc<IdentifyBehaviour>) -> Self {\n        IdentifyProtocol {\n            state: State::SessionProtocolInited,\n            behaviour,\n        }\n    }\n\n    #[cfg(test)]\n    pub fn new() -> Self {\n        IdentifyProtocol {\n            state:     State::SessionProtocolInited,\n            behaviour: Arc::new(MockIdentifyBehaviour::new()),\n        }\n    }\n\n    pub fn wait(peer_id: PeerId) -> WaitIdentification {\n        let mut backlog = PEER_IDENTIFICATION_BACKLOG.write();\n        let identification = backlog.entry(peer_id).or_insert_with(Identification::new);\n\n        identification.wait()\n    }\n\n    pub fn wait_failed(peer_id: &PeerId, error: String) {\n        if let Some(identification) = { PEER_IDENTIFICATION_BACKLOG.write().remove(peer_id) } {\n            identification.failed(self::Error::Other(error))\n        }\n    }\n\n    pub fn on_connected(&mut self, protocol_context: &IdentifyProtocolContext) {\n        let mut state_context = match StateContext::from_proto_context(protocol_context) 
{\n            Ok(ctx) => ctx,\n            Err(_no) => {\n                // Without peer id, there's no way to register a wait identification. No\n                // need to clean it.\n                log::warn!(\n                    \"session from {:?} without encryption, disconnect it\",\n                    protocol_context.0.session.address\n                );\n\n                self.state = State::FailedWithoutEncryption;\n                let _ = protocol_context.0.disconnect(protocol_context.0.session.id);\n                return;\n            }\n        };\n        log::debug!(\"connected from {:?}\", state_context.remote_peer);\n\n        crate::protocols::OpenedProtocols::register(\n            state_context.remote_peer.id.to_owned(),\n            state_context.proto_id,\n        );\n\n        match protocol_context.0.session.ty {\n            SessionType::Inbound => {\n                log::info!(\n                    \"enter identify inbound procedure for {}\",\n                    protocol_context.0.session.address\n                );\n\n                state_context.set_timeout(\"wait client identity\", DEFAULT_TIMEOUT);\n\n                self.state = State::ServerNegotiate {\n                    procedure: ServerProcedure::WaitIdentity,\n                    context:   state_context,\n                };\n            }\n            SessionType::Outbound => {\n                log::info!(\n                    \"enter identify outbound procedure for {}\",\n                    protocol_context.0.session.address\n                );\n\n                self.behaviour.send_identity(&state_context);\n                state_context.set_timeout(\"wait server ack\", DEFAULT_TIMEOUT);\n\n                self.state = State::ClientNegotiate {\n                    procedure: ClientProcedure::WaitAck,\n                    context:   state_context,\n                };\n            }\n        }\n    }\n\n    pub fn on_disconnected(&mut self, protocol_context: 
&IdentifyProtocolContext) {\n        // Without peer id, there's no way to register a wait identification. No\n        // need to clean it.\n        let peer_id = match protocol_context.0.session.remote_pubkey.as_ref() {\n            Some(pubkey) => pubkey.peer_id(),\n            None => return,\n        };\n\n        // TODO: Remove from upper level\n        crate::protocols::OpenedProtocols::remove(&peer_id);\n\n        if let Some(identification) = PEER_IDENTIFICATION_BACKLOG.write().remove(&peer_id) {\n            identification.failed(self::Error::Disconnected);\n        }\n    }\n\n    pub fn on_received(&mut self, protocol_context: &IdentifyProtocolContext, data: Bytes) {\n        {\n            if data.len() > MAX_MESSAGE_SIZE {\n                let peer_id = match protocol_context.0.session.remote_pubkey.as_ref() {\n                    Some(pubkey) => pubkey.peer_id(),\n                    None => return,\n                };\n\n                if let Some(identification) = PEER_IDENTIFICATION_BACKLOG.write().remove(&peer_id) {\n                    identification.failed(self::Error::ExceedMaxMessageSize);\n                    self.state = State::FailedWithExceedMsgSize;\n                    let _ = protocol_context.0.disconnect(protocol_context.0.session.id);\n                    return;\n                }\n            }\n        }\n\n        match &mut self.state {\n            State::ServerNegotiate {\n                ref mut procedure,\n                context,\n            } => match procedure {\n                ServerProcedure::WaitIdentity => {\n                    let identity = match Identity::decode(data) {\n                        Ok(ident) => ident,\n                        Err(_) => {\n                            log::warn!(\"received invalid identity from {:?}\", context.remote_peer);\n\n                            finish_identify(\n                                &context.remote_peer,\n                                
Err(self::Error::DecodeIdentityFailed),\n                            );\n                            *procedure = ServerProcedure::Failed;\n                            context.disconnect();\n                            return;\n                        }\n                    };\n                    context.cancel_timeout();\n\n                    if let Err(err) = identity.validate() {\n                        finish_identify(\n                            &context.remote_peer,\n                            Err(self::Error::InvalidMessage(err.to_string())),\n                        );\n                        *procedure = ServerProcedure::Failed;\n                        context.disconnect();\n                        return;\n                    }\n\n                    if let Err(err) = self.behaviour.verify_remote_identity(&identity) {\n                        finish_identify(&context.remote_peer, Err(err));\n                        *procedure = ServerProcedure::Failed;\n                        context.disconnect();\n                        return;\n                    }\n\n                    finish_identify(&context.remote_peer, Ok(()));\n\n                    let listen_addrs = identity.addr_info.listen_addrs();\n                    self.behaviour.process_listens(&context, listen_addrs);\n\n                    if let Some(observed_addr) = identity.addr_info.observed_addr() {\n                        self.behaviour.process_observed(&context, observed_addr);\n                    }\n\n                    self.behaviour.send_ack(&context);\n                    context.set_open_protocols_timeout(DEFAULT_TIMEOUT);\n                    *procedure = ServerProcedure::WaitOpenProtocols;\n                }\n                ServerProcedure::Failed | ServerProcedure::WaitOpenProtocols => {\n                    log::warn!(\n                        \"should not receive any more messages from {} after identity acked\",\n                        context.remote_peer\n               
     );\n                    context.disconnect();\n                }\n            },\n            State::ClientNegotiate {\n                ref mut procedure,\n                context,\n            } => match procedure {\n                ClientProcedure::WaitAck => {\n                    let acknowledge = match Acknowledge::decode(data) {\n                        Ok(ack) => ack,\n                        Err(_) => {\n                            log::warn!(\"received invalid ack from {:?}\", context.remote_peer);\n\n                            finish_identify(\n                                &context.remote_peer,\n                                Err(self::Error::DecodeAckFailed),\n                            );\n                            *procedure = ClientProcedure::Failed;\n                            context.disconnect();\n                            return;\n                        }\n                    };\n                    context.cancel_timeout();\n\n                    if let Err(err) = acknowledge.validate() {\n                        finish_identify(\n                            &context.remote_peer,\n                            Err(self::Error::InvalidMessage(err.to_string())),\n                        );\n                        *procedure = ClientProcedure::Failed;\n                        context.disconnect();\n                        return;\n                    }\n\n                    finish_identify(&context.remote_peer, Ok(()));\n\n                    let listen_addrs = acknowledge.addr_info.listen_addrs();\n                    self.behaviour.process_listens(&context, listen_addrs);\n\n                    if let Some(observed_addr) = acknowledge.addr_info.observed_addr() {\n                        self.behaviour.process_observed(&context, observed_addr);\n                    }\n\n                    context.open_protocols();\n                    *procedure = ClientProcedure::OpenOtherProtocols;\n                }\n                
ClientProcedure::OpenOtherProtocols | ClientProcedure::Failed => {\n                    log::warn!(\n                        \"should not receive any more messages from {} after open protocols\",\n                        context.remote_peer\n                    );\n                    context.disconnect();\n                }\n            },\n            _ => {\n                log::warn!(\n                    \"should not receive messages from {} outside negotiate state\",\n                    protocol_context.0.session.address\n                );\n                let _ = protocol_context.0.disconnect(protocol_context.0.session.id);\n            }\n        }\n    }\n}\n\n#[cfg(test)]\nimpl SessionProtocol for IdentifyProtocol {}\n\n#[cfg(not(test))]\nimpl SessionProtocol for IdentifyProtocol {\n    fn connected(&mut self, protocol_context: ProtocolContextMutRef, _version: &str) {\n        self.on_connected(&IdentifyProtocolContext(protocol_context));\n    }\n\n    fn disconnected(&mut self, protocol_context: ProtocolContextMutRef) {\n        self.on_disconnected(&IdentifyProtocolContext(protocol_context));\n    }\n\n    fn received(&mut self, protocol_context: ProtocolContextMutRef, data: bytes::Bytes) {\n        self.on_received(&IdentifyProtocolContext(protocol_context), data)\n    }\n}\n\nfn finish_identify(peer: &RemotePeer, result: Result<(), self::Error>) {\n    let identification = match { PEER_IDENTIFICATION_BACKLOG.write().remove(&peer.id) } {\n        Some(ident) => ident,\n        None => {\n            log::debug!(\"peer {:?} identification has finished already\", peer);\n            return;\n        }\n    };\n\n    match result {\n        Ok(()) => identification.pass(),\n        Err(err) => {\n            log::warn!(\"identification for peer {} failed: {}\", peer, err);\n            identification.failed(err);\n        }\n    }\n}\n"
  },
  {
    "path": "core/network/src/protocols/identify/tests.rs",
    "content": "use std::time::Duration;\n\nuse futures_timer::Delay;\nuse parking_lot::Mutex;\nuse protocol::Bytes;\nuse tentacle::multiaddr::Multiaddr;\nuse tentacle::service::{SessionType, TargetProtocol};\n\nuse super::message;\nuse super::protocol::{\n    ClientProcedure, Error, IdentifyProtocol, IdentifyProtocolContext, ServerProcedure, State,\n    StateContext, MAX_MESSAGE_SIZE,\n};\nuse crate::test::mock::{ControlEvent, ProtocolContext};\n\nconst PROTOCOL_ID: usize = 2;\nconst SESSION_ID: usize = 2;\n\n#[derive(Debug, Clone)]\npub enum BehaviourEvent {\n    SendIdentity,\n    SendAck,\n    ProcessListen,\n    ProcessObserved,\n    VerifyRemoteIdentity,\n}\n\npub struct MockIdentifyBehaviour {\n    event:                Mutex<Option<BehaviourEvent>>,\n    skip_chain_id_verify: Mutex<bool>,\n}\n\nimpl MockIdentifyBehaviour {\n    pub fn new() -> Self {\n        MockIdentifyBehaviour {\n            event:                Mutex::new(None),\n            skip_chain_id_verify: Mutex::new(true),\n        }\n    }\n\n    pub fn event(&self) -> Option<BehaviourEvent> {\n        self.event.lock().clone()\n    }\n\n    pub fn send_identity(&self, _: &StateContext) {\n        *self.event.lock() = Some(BehaviourEvent::SendIdentity)\n    }\n\n    pub fn send_ack(&self, _: &StateContext) {\n        *self.event.lock() = Some(BehaviourEvent::SendAck)\n    }\n\n    pub fn process_listens(&self, _: &StateContext, _listen_addrs: Vec<Multiaddr>) {\n        *self.event.lock() = Some(BehaviourEvent::ProcessListen)\n    }\n\n    pub fn process_observed(&self, _: &StateContext, _observed_addr: Multiaddr) {\n        *self.event.lock() = Some(BehaviourEvent::ProcessObserved)\n    }\n\n    pub fn verify_remote_identity(&self, _identity: &message::Identity) -> Result<(), Error> {\n        {\n            *self.event.lock() = Some(BehaviourEvent::VerifyRemoteIdentity);\n        }\n\n        if *self.skip_chain_id_verify.lock() {\n            Ok(())\n        } else {\n            
 Err(Error::WrongChainId)\n        }\n    }\n\n    pub fn skip_chain_id_verify(&self, result: bool) {\n        *self.skip_chain_id_verify.lock() = result;\n    }\n}\n\n#[test]\nfn should_reject_unencrypted_connection() {\n    let mut identify = IdentifyProtocol::new();\n    let proto_context = ProtocolContext::make_no_encrypted(\n        PROTOCOL_ID.into(),\n        SESSION_ID.into(),\n        SessionType::Inbound,\n    );\n\n    identify.on_connected(&IdentifyProtocolContext(&proto_context));\n    match identify.state {\n        State::FailedWithoutEncryption => (),\n        _ => panic!(\"should enter failed state\"),\n    }\n    match proto_context.control().event() {\n        Some(ControlEvent::Disconnect { session_id }) if session_id == SESSION_ID.into() => (),\n        _ => panic!(\"should disconnect\"),\n    }\n}\n\n#[tokio::test]\nasync fn should_wait_client_identity_for_inbound_connection() {\n    let mut identify = IdentifyProtocol::new();\n    let proto_context =\n        ProtocolContext::make(PROTOCOL_ID.into(), SESSION_ID.into(), SessionType::Inbound);\n\n    identify.on_connected(&IdentifyProtocolContext(&proto_context));\n    match identify.state {\n        State::ServerNegotiate {\n            procedure: ServerProcedure::WaitIdentity,\n            context,\n        } => assert!(\n            context.timeout_abort_handle.is_some(),\n            \"should set up wait timeout\"\n        ),\n        _ => panic!(\"should enter wait identity state\"),\n    }\n}\n\n#[tokio::test]\nasync fn should_disconnect_if_wait_client_identity_timeout() {\n    let mut identify = IdentifyProtocol::new();\n    let proto_context =\n        ProtocolContext::make(PROTOCOL_ID.into(), SESSION_ID.into(), SessionType::Inbound);\n\n    identify.on_connected(&IdentifyProtocolContext(&proto_context));\n    let mut context = match identify.state {\n        State::ServerNegotiate {\n            procedure: ServerProcedure::WaitIdentity,\n            context,\n        } => {\n            
assert!(\n                context.timeout_abort_handle.is_some(),\n                \"should set up wait timeout\"\n            );\n            context\n        }\n        _ => panic!(\"should enter wait identity state\"),\n    };\n\n    context.set_timeout(\"override wait identity\", Duration::from_millis(300));\n    Delay::new(Duration::from_millis(700)).await;\n\n    match proto_context.control().event() {\n        Some(ControlEvent::Disconnect { session_id }) if session_id == SESSION_ID.into() => (),\n        _ => panic!(\"should disconnect\"),\n    }\n}\n\n#[tokio::test]\nasync fn should_register_opened_protocol() {\n    let mut identify = IdentifyProtocol::new();\n    let proto_context =\n        ProtocolContext::make(PROTOCOL_ID.into(), SESSION_ID.into(), SessionType::Inbound);\n\n    identify.on_connected(&IdentifyProtocolContext(&proto_context));\n\n    let peer_id = proto_context\n        .session\n        .remote_pubkey\n        .as_ref()\n        .unwrap()\n        .peer_id();\n    assert!(crate::protocols::OpenedProtocols::is_open(\n        &peer_id,\n        &PROTOCOL_ID.into()\n    ));\n}\n\n#[tokio::test]\nasync fn should_send_identity_to_server_for_outbound_connection() {\n    let mut identify = IdentifyProtocol::new();\n    let proto_context =\n        ProtocolContext::make(PROTOCOL_ID.into(), SESSION_ID.into(), SessionType::Outbound);\n\n    identify.on_connected(&IdentifyProtocolContext(&proto_context));\n\n    match identify.state {\n        State::ClientNegotiate {\n            procedure: ClientProcedure::WaitAck,\n            context,\n        } => assert!(\n            context.timeout_abort_handle.is_some(),\n            \"should set up wait timeout\"\n        ),\n        _ => panic!(\"should enter wait ack state\"),\n    }\n\n    match identify.behaviour.event() {\n        Some(BehaviourEvent::SendIdentity) => (),\n        _ => panic!(\"should send identity\"),\n    }\n}\n\n#[tokio::test]\nasync fn should_disconnect_if_wait_server_ack_timeout() {\n   
 let mut identify = IdentifyProtocol::new();\n    let proto_context =\n        ProtocolContext::make(PROTOCOL_ID.into(), SESSION_ID.into(), SessionType::Outbound);\n\n    identify.on_connected(&IdentifyProtocolContext(&proto_context));\n\n    let mut context = match identify.state {\n        State::ClientNegotiate {\n            procedure: ClientProcedure::WaitAck,\n            context,\n        } => {\n            assert!(\n                context.timeout_abort_handle.is_some(),\n                \"should set up wait timeout\"\n            );\n            context\n        }\n        _ => panic!(\"should enter wait ack state\"),\n    };\n\n    match identify.behaviour.event() {\n        Some(BehaviourEvent::SendIdentity) => (),\n        _ => panic!(\"should send identity\"),\n    }\n\n    context.set_timeout(\"override wait ack\", Duration::from_millis(300));\n    Delay::new(Duration::from_millis(700)).await;\n\n    match proto_context.control().event() {\n        Some(ControlEvent::Disconnect { session_id }) if session_id == SESSION_ID.into() => (),\n        _ => panic!(\"should disconnect\"),\n    }\n}\n\n#[tokio::test]\nasync fn should_disconnect_if_exceed_max_message_size() {\n    let mut identify = IdentifyProtocol::new();\n    let proto_context =\n        ProtocolContext::make(PROTOCOL_ID.into(), SESSION_ID.into(), SessionType::Outbound);\n\n    let msg = Bytes::from(\"a\".repeat(MAX_MESSAGE_SIZE + 1));\n    identify.on_received(&IdentifyProtocolContext(&proto_context), msg);\n\n    match proto_context.control().event() {\n        Some(ControlEvent::Disconnect { session_id }) if session_id == SESSION_ID.into() => (),\n        _ => panic!(\"should disconnect\"),\n    }\n}\n\n#[tokio::test]\nasync fn should_send_ack_if_identity_is_valid_on_server_side() {\n    let mut identify = IdentifyProtocol::new();\n    let proto_context =\n        ProtocolContext::make(PROTOCOL_ID.into(), SESSION_ID.into(), SessionType::Inbound);\n\n    
identify.on_connected(&IdentifyProtocolContext(&proto_context));\n\n    let identity = message::Identity::mock_valid().into_bytes().unwrap();\n    identify.behaviour.skip_chain_id_verify(true);\n    identify.on_received(&IdentifyProtocolContext(&proto_context), identity);\n\n    match identify.state {\n        State::ServerNegotiate {\n            procedure: ServerProcedure::WaitOpenProtocols,\n            context,\n        } => assert!(\n            context.timeout_abort_handle.is_some(),\n            \"should set up wait open protocols timeout\"\n        ),\n        _ => panic!(\"should enter wait open protocols state\"),\n    }\n\n    match identify.behaviour.event() {\n        Some(BehaviourEvent::SendAck) => (),\n        _ => panic!(\"should send ack\"),\n    }\n}\n\n#[tokio::test]\nasync fn should_disconnect_if_client_open_protocols_timeout() {\n    let mut identify = IdentifyProtocol::new();\n    let proto_context =\n        ProtocolContext::make(PROTOCOL_ID.into(), SESSION_ID.into(), SessionType::Inbound);\n\n    identify.on_connected(&IdentifyProtocolContext(&proto_context));\n\n    let identity = message::Identity::mock_valid().into_bytes().unwrap();\n    identify.behaviour.skip_chain_id_verify(true);\n    identify.on_received(&IdentifyProtocolContext(&proto_context), identity);\n\n    let mut context = match identify.state {\n        State::ServerNegotiate {\n            procedure: ServerProcedure::WaitOpenProtocols,\n            context,\n        } => {\n            assert!(\n                context.timeout_abort_handle.is_some(),\n                \"should set up wait open protocols timeout\"\n            );\n            context\n        }\n        _ => panic!(\"should enter wait open protocols state\"),\n    };\n\n    match identify.behaviour.event() {\n        Some(BehaviourEvent::SendAck) => (),\n        _ => panic!(\"should send ack\"),\n    }\n\n    context.set_timeout(\"override wait open protocols\", Duration::from_millis(300));\n    
Delay::new(Duration::from_millis(700)).await;\n\n    match proto_context.control().event() {\n        Some(ControlEvent::Disconnect { session_id }) if session_id == SESSION_ID.into() => (),\n        _ => panic!(\"should disconnect\"),\n    }\n}\n\n#[tokio::test]\nasync fn should_disconnect_if_client_send_undecodeable_identity() {\n    let mut identify = IdentifyProtocol::new();\n    let proto_context =\n        ProtocolContext::make(PROTOCOL_ID.into(), SESSION_ID.into(), SessionType::Inbound);\n\n    identify.on_connected(&IdentifyProtocolContext(&proto_context));\n\n    let msg = Bytes::from(\"a\");\n    identify.on_received(&IdentifyProtocolContext(&proto_context), msg);\n\n    match proto_context.control().event() {\n        Some(ControlEvent::Disconnect { session_id }) if session_id == SESSION_ID.into() => (),\n        _ => panic!(\"should disconnect\"),\n    }\n\n    match identify.state {\n        State::ServerNegotiate {\n            procedure: ServerProcedure::Failed,\n            ..\n        } => (),\n        _ => panic!(\"should enter failed state\"),\n    }\n}\n\n#[tokio::test]\nasync fn should_disconnect_if_client_send_invalid_identity() {\n    let mut identify = IdentifyProtocol::new();\n    let proto_context =\n        ProtocolContext::make(PROTOCOL_ID.into(), SESSION_ID.into(), SessionType::Inbound);\n\n    identify.on_connected(&IdentifyProtocolContext(&proto_context));\n\n    let msg = message::Identity::mock_invalid().into_bytes().unwrap();\n    identify.on_received(&IdentifyProtocolContext(&proto_context), msg);\n\n    match proto_context.control().event() {\n        Some(ControlEvent::Disconnect { session_id }) if session_id == SESSION_ID.into() => (),\n        _ => panic!(\"should disconnect\"),\n    }\n\n    match identify.state {\n        State::ServerNegotiate {\n            procedure: ServerProcedure::Failed,\n            ..\n        } => (),\n        _ => panic!(\"should enter failed state\"),\n    }\n}\n\n#[tokio::test]\nasync fn 
should_disconnect_if_client_send_different_chain_id() {\n    let mut identify = IdentifyProtocol::new();\n    let proto_context =\n        ProtocolContext::make(PROTOCOL_ID.into(), SESSION_ID.into(), SessionType::Inbound);\n\n    identify.on_connected(&IdentifyProtocolContext(&proto_context));\n\n    let msg = message::Identity::mock_valid().into_bytes().unwrap();\n    identify.behaviour.skip_chain_id_verify(false);\n    identify.on_received(&IdentifyProtocolContext(&proto_context), msg);\n\n    match proto_context.control().event() {\n        Some(ControlEvent::Disconnect { session_id }) if session_id == SESSION_ID.into() => (),\n        _ => panic!(\"should disconnect\"),\n    }\n\n    match identify.state {\n        State::ServerNegotiate {\n            procedure: ServerProcedure::Failed,\n            ..\n        } => (),\n        _ => panic!(\"should enter failed state\"),\n    }\n}\n\n#[tokio::test]\nasync fn should_disconnect_if_client_send_data_during_open_protocols() {\n    let mut identify = IdentifyProtocol::new();\n    let proto_context =\n        ProtocolContext::make(PROTOCOL_ID.into(), SESSION_ID.into(), SessionType::Inbound);\n\n    identify.on_connected(&IdentifyProtocolContext(&proto_context));\n\n    let identity = message::Identity::mock_valid().into_bytes().unwrap();\n    identify.behaviour.skip_chain_id_verify(true);\n    identify.on_received(&IdentifyProtocolContext(&proto_context), identity);\n\n    match &identify.state {\n        State::ServerNegotiate {\n            procedure: ServerProcedure::WaitOpenProtocols,\n            context,\n        } => assert!(\n            context.timeout_abort_handle.is_some(),\n            \"should set up wait open protocols timeout\"\n        ),\n        _ => panic!(\"should enter wait open protocols state\"),\n    }\n\n    match identify.behaviour.event() {\n        Some(BehaviourEvent::SendAck) => (),\n        _ => panic!(\"should send ack\"),\n    }\n\n    identify.on_received(\n        
&IdentifyProtocolContext(&proto_context),\n        Bytes::from_static(b\"test\"),\n    );\n\n    match proto_context.control().event() {\n        Some(ControlEvent::Disconnect { session_id }) if session_id == SESSION_ID.into() => (),\n        _ => panic!(\"should disconnect\"),\n    }\n}\n\n#[tokio::test]\nasync fn should_open_protocols_after_receive_valid_ack_from_server() {\n    let mut identify = IdentifyProtocol::new();\n    let proto_context =\n        ProtocolContext::make(PROTOCOL_ID.into(), SESSION_ID.into(), SessionType::Outbound);\n\n    identify.on_connected(&IdentifyProtocolContext(&proto_context));\n\n    let ack = message::Acknowledge::mock_valid().into_bytes().unwrap();\n    identify.on_received(&IdentifyProtocolContext(&proto_context), ack);\n\n    match identify.state {\n        State::ClientNegotiate {\n            procedure: ClientProcedure::OpenOtherProtocols,\n            ..\n        } => (),\n        _ => panic!(\"should enter open other protocols state\"),\n    }\n\n    match proto_context.control().event() {\n        Some(ControlEvent::OpenProtocols {\n            session_id,\n            target_proto,\n        }) if session_id == SESSION_ID.into() && target_proto == TargetProtocol::All => (),\n        _ => panic!(\"should open protocols\"),\n    }\n}\n\n#[tokio::test]\nasync fn should_disconnect_if_server_send_undecodeable_ack() {\n    let mut identify = IdentifyProtocol::new();\n    let proto_context =\n        ProtocolContext::make(PROTOCOL_ID.into(), SESSION_ID.into(), SessionType::Outbound);\n\n    identify.on_connected(&IdentifyProtocolContext(&proto_context));\n\n    identify.on_received(\n        &IdentifyProtocolContext(&proto_context),\n        Bytes::from_static(b\"xxx\"),\n    );\n\n    match identify.state {\n        State::ClientNegotiate {\n            procedure: ClientProcedure::Failed,\n            ..\n        } => (),\n        _ => panic!(\"should enter failed state\"),\n    }\n\n    match proto_context.control().event() {\n 
       Some(ControlEvent::Disconnect { session_id }) if session_id == SESSION_ID.into() => (),\n        _ => panic!(\"should disconnect\"),\n    }\n}\n\n#[tokio::test]\nasync fn should_disconnect_if_server_send_invalid_ack() {\n    let mut identify = IdentifyProtocol::new();\n    let proto_context =\n        ProtocolContext::make(PROTOCOL_ID.into(), SESSION_ID.into(), SessionType::Outbound);\n\n    identify.on_connected(&IdentifyProtocolContext(&proto_context));\n\n    let ack = message::Acknowledge::mock_invalid().into_bytes().unwrap();\n    identify.on_received(&IdentifyProtocolContext(&proto_context), ack);\n\n    match identify.state {\n        State::ClientNegotiate {\n            procedure: ClientProcedure::Failed,\n            ..\n        } => (),\n        _ => panic!(\"should enter failed state\"),\n    }\n\n    match proto_context.control().event() {\n        Some(ControlEvent::Disconnect { session_id }) if session_id == SESSION_ID.into() => (),\n        _ => panic!(\"should disconnect\"),\n    }\n}\n\n#[tokio::test]\nasync fn should_disconnect_if_server_send_data_during_open_protocols() {\n    let mut identify = IdentifyProtocol::new();\n    let proto_context =\n        ProtocolContext::make(PROTOCOL_ID.into(), SESSION_ID.into(), SessionType::Outbound);\n\n    identify.on_connected(&IdentifyProtocolContext(&proto_context));\n\n    let ack = message::Acknowledge::mock_valid().into_bytes().unwrap();\n    identify.on_received(&IdentifyProtocolContext(&proto_context), ack);\n\n    match &identify.state {\n        State::ClientNegotiate {\n            procedure: ClientProcedure::OpenOtherProtocols,\n            ..\n        } => (),\n        _ => panic!(\"should enter open other protocols state\"),\n    }\n\n    match proto_context.control().event() {\n        Some(ControlEvent::OpenProtocols {\n            session_id,\n            target_proto,\n        }) if session_id == SESSION_ID.into() && target_proto == TargetProtocol::All => (),\n        _ => 
panic!(\"should open protocols\"),\n    }\n\n    identify.on_received(\n        &IdentifyProtocolContext(&proto_context),\n        Bytes::from_static(b\"test\"),\n    );\n\n    match proto_context.control().event() {\n        Some(ControlEvent::Disconnect { session_id }) if session_id == SESSION_ID.into() => (),\n        _ => panic!(\"should disconnect\"),\n    }\n}\n\n#[tokio::test]\nasync fn should_disconnect_if_either_send_data_not_in_negotiate_procedure() {\n    let mut identify = IdentifyProtocol::new();\n    let proto_context =\n        ProtocolContext::make(PROTOCOL_ID.into(), SESSION_ID.into(), SessionType::Outbound);\n\n    identify.on_received(\n        &IdentifyProtocolContext(&proto_context),\n        Bytes::from_static(b\"test\"),\n    );\n\n    match proto_context.control().event() {\n        Some(ControlEvent::Disconnect { session_id }) if session_id == SESSION_ID.into() => (),\n        _ => panic!(\"should disconnect\"),\n    }\n}\n\n#[tokio::test]\nasync fn should_wake_wait_identification_after_call_finish_identify() {\n    let mut identify = IdentifyProtocol::new();\n    let proto_context =\n        ProtocolContext::make(PROTOCOL_ID.into(), SESSION_ID.into(), SessionType::Inbound);\n\n    let peer_id = proto_context\n        .session\n        .remote_pubkey\n        .as_ref()\n        .unwrap()\n        .peer_id();\n\n    let wait_fut = IdentifyProtocol::wait(peer_id);\n\n    tokio::spawn(async move {\n        identify.on_connected(&IdentifyProtocolContext(&proto_context));\n\n        let identity = message::Identity::mock_valid().into_bytes().unwrap();\n        identify.behaviour.skip_chain_id_verify(true);\n        identify.on_received(&IdentifyProtocolContext(&proto_context), identity);\n\n        match identify.state {\n            State::ServerNegotiate {\n                procedure: ServerProcedure::WaitOpenProtocols,\n                context,\n            } => assert!(\n                context.timeout_abort_handle.is_some(),\n                
\"should set up wait open protocols timeout\"\n            ),\n            _ => panic!(\"should enter wait open protocols state\"),\n        }\n\n        match identify.behaviour.event() {\n            Some(BehaviourEvent::SendAck) => (),\n            _ => panic!(\"should send ack\"),\n        }\n    });\n\n    assert!(wait_fut.await.is_ok(), \"should be ok if pass identify\");\n}\n\n#[tokio::test]\nasync fn should_pass_error_to_wait_identification_result_if_failed_identify() {\n    let mut identify = IdentifyProtocol::new();\n    let proto_context =\n        ProtocolContext::make(PROTOCOL_ID.into(), SESSION_ID.into(), SessionType::Outbound);\n\n    let peer_id = proto_context\n        .session\n        .remote_pubkey\n        .as_ref()\n        .unwrap()\n        .peer_id();\n\n    let wait_fut = IdentifyProtocol::wait(peer_id);\n\n    tokio::spawn(async move {\n        identify.on_connected(&IdentifyProtocolContext(&proto_context));\n\n        identify.on_received(\n            &IdentifyProtocolContext(&proto_context),\n            Bytes::from_static(b\"xxx\"),\n        );\n\n        match identify.state {\n            State::ClientNegotiate {\n                procedure: ClientProcedure::Failed,\n                ..\n            } => (),\n            _ => panic!(\"should enter failed state\"),\n        }\n\n        match proto_context.control().event() {\n            Some(ControlEvent::Disconnect { session_id }) if session_id == SESSION_ID.into() => (),\n            _ => panic!(\"should disconnect\"),\n        }\n    });\n\n    match wait_fut.await {\n        Err(Error::DecodeAckFailed) => (),\n        _ => panic!(\"should pass decode failed error\"),\n    }\n}\n\n#[tokio::test]\nasync fn should_pass_disconnected_to_wait_identification_result_if_still_wait_identify_but_disconnected(\n) {\n    let mut identify = IdentifyProtocol::new();\n    let proto_context =\n        ProtocolContext::make(PROTOCOL_ID.into(), SESSION_ID.into(), SessionType::Outbound);\n\n    let 
peer_id = proto_context\n        .session\n        .remote_pubkey\n        .as_ref()\n        .unwrap()\n        .peer_id();\n\n    let wait_fut = IdentifyProtocol::wait(peer_id);\n\n    tokio::spawn(async move {\n        identify.on_connected(&IdentifyProtocolContext(&proto_context));\n        identify.on_disconnected(&IdentifyProtocolContext(&proto_context));\n    });\n\n    match wait_fut.await {\n        Err(Error::Disconnected) => (),\n        _ => panic!(\"should pass disconnected error\"),\n    }\n}\n\n#[tokio::test]\nasync fn should_remove_from_opened_protocols_after_disconnect() {\n    let mut identify = IdentifyProtocol::new();\n    let proto_context =\n        ProtocolContext::make(PROTOCOL_ID.into(), SESSION_ID.into(), SessionType::Outbound);\n\n    let peer_id = proto_context\n        .session\n        .remote_pubkey\n        .as_ref()\n        .unwrap()\n        .peer_id();\n\n    identify.on_connected(&IdentifyProtocolContext(&proto_context));\n    identify.on_disconnected(&IdentifyProtocolContext(&proto_context));\n\n    assert_eq!(\n        crate::protocols::OpenedProtocols::is_open(&peer_id, &PROTOCOL_ID.into()),\n        false\n    );\n}\n"
  },
  {
    "path": "core/network/src/protocols/identify.rs",
    "content": "mod behaviour;\nmod common;\nmod identification;\nmod message;\nmod protocol;\n\n#[cfg(test)]\nmod tests;\n\nuse std::sync::Arc;\n\nuse futures::channel::mpsc::UnboundedSender;\nuse tentacle::builder::MetaBuilder;\nuse tentacle::secio::PeerId;\nuse tentacle::service::{ProtocolHandle, ProtocolMeta};\nuse tentacle::ProtocolId;\n\nuse crate::event::PeerManagerEvent;\nuse crate::peer_manager::PeerManagerHandle;\n\nuse self::protocol::IdentifyProtocol;\nuse behaviour::IdentifyBehaviour;\n\npub use self::identification::WaitIdentification;\npub use self::protocol::{Error, DEFAULT_TIMEOUT};\n\npub const NAME: &str = \"chain_identify\";\npub const SUPPORT_VERSIONS: [&str; 1] = [\"0.2\"];\n\npub struct Identify {\n    behaviour: Arc<IdentifyBehaviour>,\n}\n\nimpl Identify {\n    pub fn new(peer_mgr: PeerManagerHandle, event_tx: UnboundedSender<PeerManagerEvent>) -> Self {\n        #[cfg(feature = \"global_ip_only\")]\n        log::info!(\"turn on global ip only\");\n        #[cfg(not(feature = \"global_ip_only\"))]\n        log::info!(\"turn off global ip only\");\n\n        let behaviour = Arc::new(IdentifyBehaviour::new(peer_mgr, event_tx));\n        Identify { behaviour }\n    }\n\n    #[cfg(not(test))]\n    pub fn build_meta(self, protocol_id: ProtocolId) -> ProtocolMeta {\n        let behaviour = self.behaviour;\n\n        MetaBuilder::new()\n            .id(protocol_id)\n            .name(name!(NAME))\n            .support_versions(support_versions!(SUPPORT_VERSIONS))\n            .session_handle(move || {\n                ProtocolHandle::Callback(Box::new(IdentifyProtocol::new(Arc::clone(&behaviour))))\n            })\n            .build()\n    }\n\n    #[cfg(test)]\n    pub fn build_meta(self, protocol_id: ProtocolId) -> ProtocolMeta {\n        let _ = self.behaviour;\n\n        MetaBuilder::new()\n            .id(protocol_id)\n            .name(name!(NAME))\n            .support_versions(support_versions!(SUPPORT_VERSIONS))\n            
.session_handle(move || ProtocolHandle::Callback(Box::new(IdentifyProtocol::new())))\n            .build()\n    }\n\n    pub fn wait_identified(peer_id: PeerId) -> WaitIdentification {\n        IdentifyProtocol::wait(peer_id)\n    }\n\n    pub fn wait_failed(peer_id: &PeerId, error: String) {\n        IdentifyProtocol::wait_failed(peer_id, error)\n    }\n}\n"
  },
  {
    "path": "core/network/src/protocols/macro.rs",
    "content": "/// `Fn` protocol name generator\n#[macro_export]\nmacro_rules! name {\n    ($proto_name:expr) => {\n        |id| format!(\"{}/{}\", $proto_name, id)\n    };\n}\n\n/// Create a `Vec<String>` of supported versions from a constant `[&str; N]`\n#[macro_export]\nmacro_rules! support_versions {\n    ($versions:expr) => {\n        $versions.to_vec().into_iter().map(String::from).collect()\n    };\n}\n"
  },
  {
    "path": "core/network/src/protocols/mod.rs",
    "content": "#[macro_use]\nmod r#macro;\n\nmod core;\nmod discovery;\nmod ping;\nmod transmitter;\n\npub mod identify;\npub use self::core::{CoreProtocol, CoreProtocolBuilder, OpenedProtocols};\npub use transmitter::{ReceivedMessage, Recipient, Transmitter, TransmitterMessage};\n"
  },
  {
    "path": "core/network/src/protocols/ping/behaviour.rs",
    "content": "use super::protocol::PingEvent;\nuse crate::event::{MisbehaviorKind, PeerManagerEvent};\n\nuse futures::{\n    channel::mpsc::{Receiver, UnboundedSender},\n    Future, Stream,\n};\nuse log::debug;\n\nuse std::{\n    pin::Pin,\n    sync::atomic::{AtomicBool, Ordering},\n    task::{Context, Poll},\n};\n\npub struct PingEventReporter {\n    inner:        UnboundedSender<PeerManagerEvent>,\n    mgr_shutdown: AtomicBool,\n}\n\nimpl PingEventReporter {\n    pub fn new(inner: UnboundedSender<PeerManagerEvent>) -> Self {\n        PingEventReporter {\n            inner,\n            mgr_shutdown: AtomicBool::new(false),\n        }\n    }\n\n    fn is_mgr_shutdown(&self) -> bool {\n        self.mgr_shutdown.load(Ordering::SeqCst)\n    }\n\n    fn mgr_shutdown(&self) {\n        debug!(\"network: ping: peer manager shutdown\");\n\n        self.mgr_shutdown.store(true, Ordering::SeqCst);\n    }\n}\n\n#[derive(derive_more::Constructor)]\npub struct EventTranslator {\n    rx:       Receiver<PingEvent>,\n    reporter: PingEventReporter,\n}\n\nimpl Future for EventTranslator {\n    type Output = ();\n\n    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {\n        if self.reporter.is_mgr_shutdown() {\n            return Poll::Ready(());\n        }\n\n        loop {\n            let event = match Stream::poll_next(Pin::new(&mut self.as_mut().rx), cx) {\n                Poll::Pending => break,\n                Poll::Ready(None) => return Poll::Ready(()),\n                Poll::Ready(Some(event)) => event,\n            };\n\n            let mgr_event = match event {\n                PingEvent::Ping(ref _pid) => continue,\n                PingEvent::Pong(ref pid, ref connected_addr, ping_time) => {\n                    let host = &connected_addr.host;\n\n                    common_apm::metrics::network::NETWORK_PING_HISTOGRAM_VEC\n                        .with_label_values(&[host])\n                        .observe(ping_time.as_millis() as 
f64);\n\n                    PeerManagerEvent::PeerAlive { pid: pid.clone() }\n                }\n                PingEvent::Timeout(ref pid) => {\n                    let kind = MisbehaviorKind::PingTimeout;\n\n                    PeerManagerEvent::Misbehave {\n                        pid: pid.clone(),\n                        kind,\n                    }\n                }\n                PingEvent::UnexpectedError(ref pid) => {\n                    let kind = MisbehaviorKind::PingUnexpect;\n\n                    PeerManagerEvent::Misbehave {\n                        pid: pid.clone(),\n                        kind,\n                    }\n                }\n            };\n\n            if self.reporter.inner.unbounded_send(mgr_event).is_err() {\n                self.reporter.mgr_shutdown();\n                return Poll::Ready(());\n            }\n        }\n\n        Poll::Pending\n    }\n}\n"
  },
  {
    "path": "core/network/src/protocols/ping/message.rs",
    "content": "use prost::{EncodeError, Message, Oneof};\nuse protocol::{Bytes, BytesMut};\n\n#[derive(Clone, Copy, PartialEq, Eq, Oneof)]\npub enum PingPayload {\n    #[prost(uint32, tag = \"1\")]\n    Ping(u32),\n    #[prost(uint32, tag = \"2\")]\n    Pong(u32),\n}\n\n#[derive(Clone, PartialEq, Message)]\npub struct PingMessage {\n    #[prost(oneof = \"PingPayload\", tags = \"1, 2\")]\n    pub payload: Option<PingPayload>,\n}\n\nimpl PingMessage {\n    pub fn new_pong(nonce: u32) -> Self {\n        PingMessage {\n            payload: Some(PingPayload::Pong(nonce)),\n        }\n    }\n\n    pub fn new_ping(nonce: u32) -> Self {\n        PingMessage {\n            payload: Some(PingPayload::Ping(nonce)),\n        }\n    }\n\n    pub fn into_bytes(self) -> Result<Bytes, EncodeError> {\n        let mut buf = BytesMut::with_capacity(self.encoded_len());\n        self.encode(&mut buf)?;\n\n        Ok(buf.freeze())\n    }\n}\n"
  },
  {
    "path": "core/network/src/protocols/ping/protocol.rs",
    "content": "use super::message::{PingMessage, PingPayload};\n\nuse futures::channel::mpsc::Sender;\nuse log::{debug, error, warn};\nuse prost::Message;\nuse tentacle::{\n    context::{ProtocolContext, ProtocolContextMutRef},\n    secio::PeerId,\n    service::TargetSession,\n    traits::ServiceProtocol,\n    SessionId,\n};\n\nuse crate::common::ConnectedAddr;\n\nuse std::{\n    collections::HashMap,\n    str,\n    time::{Duration, SystemTime, UNIX_EPOCH},\n};\n\nconst SEND_PING_TOKEN: u64 = 0;\nconst CHECK_TIMEOUT_TOKEN: u64 = 1;\n\n/// Ping protocol events\n#[derive(Debug)]\npub enum PingEvent {\n    /// Peer sent a ping to us.\n    Ping(PeerId),\n    /// Peer sent a pong to us.\n    Pong(PeerId, ConnectedAddr, Duration),\n    /// Peer timed out.\n    Timeout(PeerId),\n    /// Peer caused an unexpected error.\n    UnexpectedError(PeerId),\n}\n\n/// PingStatus of a peer\n#[derive(Clone, Debug)]\nstruct PingStatus {\n    /// Are we currently pinging this peer?\n    processing: bool,\n    /// The time we last sent a ping to this peer.\n    last_ping:  SystemTime,\n    peer_id:    PeerId,\n}\n\nimpl PingStatus {\n    /// A meaningless value; a peer must respond to a ping with a pong that\n    /// carries the same nonce.\n    fn nonce(&self) -> u32 {\n        self.last_ping\n            .duration_since(UNIX_EPOCH)\n            .map(|dur| dur.as_secs())\n            .unwrap_or(0) as u32\n    }\n\n    /// Duration since we last sent a ping.\n    fn elapsed(&self) -> Duration {\n        self.last_ping\n            .elapsed()\n            .unwrap_or_else(|_| Duration::from_secs(0))\n    }\n}\n\n/// Ping protocol handler.\n///\n/// The interval controls how often we send pings to peers.\n/// A peer is considered timed out if we have not received a pong from it\n/// within the timeout.\npub struct PingProtocol {\n    interval:              Duration,\n    timeout:               Duration,\n    connected_session_ids: HashMap<SessionId, PingStatus>,\n    event_sender:          
Sender<PingEvent>,\n}\n\nimpl PingProtocol {\n    pub fn new(\n        interval: Duration,\n        timeout: Duration,\n        event_sender: Sender<PingEvent>,\n    ) -> PingProtocol {\n        PingProtocol {\n            interval,\n            timeout,\n            connected_session_ids: Default::default(),\n            event_sender,\n        }\n    }\n\n    pub fn send_event(&mut self, event: PingEvent) {\n        if let Err(err) = self.event_sender.try_send(event) {\n            error!(\"send ping event error: {}\", err);\n        }\n    }\n}\n\nimpl ServiceProtocol for PingProtocol {\n    fn init(&mut self, context: &mut ProtocolContext) {\n        // send ping to peers periodically\n        let proto_id = context.proto_id;\n        if context\n            .set_service_notify(proto_id, self.interval, SEND_PING_TOKEN)\n            .is_err()\n        {\n            warn!(\"start ping fail\");\n        }\n        if context\n            .set_service_notify(proto_id, self.timeout, CHECK_TIMEOUT_TOKEN)\n            .is_err()\n        {\n            warn!(\"start ping fail\");\n        }\n    }\n\n    fn connected(&mut self, context: ProtocolContextMutRef, version: &str) {\n        let session = context.session;\n        match session.remote_pubkey {\n            Some(ref pubkey) => {\n                let peer_id = pubkey.peer_id();\n                self.connected_session_ids\n                    .entry(session.id)\n                    .or_insert_with(|| PingStatus {\n                        last_ping:  SystemTime::now(),\n                        processing: false,\n                        peer_id:    peer_id.clone(),\n                    });\n                debug!(\n                    \"proto id [{}] open on session [{}], address: [{}], type: [{:?}], version: {}\",\n                    context.proto_id, session.id, session.address, session.ty, version\n                );\n                debug!(\"connected sessions are: {:?}\", self.connected_session_ids);\n\n    
            crate::protocols::OpenedProtocols::register(peer_id, context.proto_id);\n            }\n            None => {\n                if context.disconnect(session.id).is_err() {\n                    debug!(\"disconnect fail\");\n                }\n            }\n        }\n    }\n\n    fn disconnected(&mut self, context: ProtocolContextMutRef) {\n        let session = context.session;\n        self.connected_session_ids.remove(&session.id);\n        debug!(\n            \"proto id [{}] close on session [{}]\",\n            context.proto_id, session.id\n        );\n    }\n\n    fn received(&mut self, context: ProtocolContextMutRef, data: bytes::Bytes) {\n        let session = context.session;\n        if let Some(peer_id) = self\n            .connected_session_ids\n            .get(&session.id)\n            .map(|ps| ps.peer_id.clone())\n        {\n            match PingMessage::decode(data) {\n                Err(err) => {\n                    warn!(\"decode message {}\", err);\n                    self.send_event(PingEvent::UnexpectedError(peer_id))\n                }\n                Ok(PingMessage { payload: None }) => {\n                    self.send_event(PingEvent::UnexpectedError(peer_id))\n                }\n                Ok(PingMessage { payload: Some(pld) }) => match pld {\n                    PingPayload::Ping(nonce) => {\n                        let pong = match PingMessage::new_pong(nonce).into_bytes() {\n                            Ok(p) => p,\n                            Err(err) => {\n                                warn!(\"encode pong {}\", err);\n                                return;\n                            }\n                        };\n\n                        if let Err(err) = context.send_message(pong) {\n                            debug!(\"send message {}\", err);\n                        }\n                        self.send_event(PingEvent::Ping(peer_id));\n                    }\n                    PingPayload::Pong(nonce) 
=> {\n                        // check pong\n                        if self\n                            .connected_session_ids\n                            .get(&session.id)\n                            .map(|ps| (ps.processing, ps.nonce()))\n                            == Some((true, nonce))\n                        {\n                            let ping_time = match self.connected_session_ids.get_mut(&session.id) {\n                                Some(ps) => {\n                                    ps.processing = false;\n                                    ps.elapsed()\n                                }\n                                None => return,\n                            };\n                            let connected_addr = ConnectedAddr::from(&context.session.address);\n                            self.send_event(PingEvent::Pong(peer_id, connected_addr, ping_time));\n                        } else {\n                            // ignore if nonce is incorrect\n\n                            self.send_event(PingEvent::UnexpectedError(peer_id));\n                        }\n                    }\n                },\n            }\n        }\n    }\n\n    fn notify(&mut self, context: &mut ProtocolContext, token: u64) {\n        match token {\n            SEND_PING_TOKEN => {\n                debug!(\"proto [{}] start ping peers\", context.proto_id);\n                let now = SystemTime::now();\n                let peers: Vec<(SessionId, u32)> = self\n                    .connected_session_ids\n                    .iter_mut()\n                    .filter_map(|(session_id, ps)| {\n                        if ps.processing {\n                            None\n                        } else {\n                            ps.processing = true;\n                            ps.last_ping = now;\n                            Some((*session_id, ps.nonce()))\n                        }\n                    })\n                    .collect();\n                if 
!peers.is_empty() {\n                    let ping = match PingMessage::new_ping(peers[0].1).into_bytes() {\n                        Ok(p) => p,\n                        Err(err) => {\n                            warn!(\"encode ping {}\", err);\n                            return;\n                        }\n                    };\n\n                    let peer_ids: Vec<SessionId> = peers\n                        .into_iter()\n                        .map(|(session_id, _)| session_id)\n                        .collect();\n                    let proto_id = context.proto_id;\n                    let target = TargetSession::Multi(peer_ids);\n\n                    if let Err(err) = context.filter_broadcast(target, proto_id, ping) {\n                        debug!(\"send message {}\", err);\n                    }\n                }\n            }\n            CHECK_TIMEOUT_TOKEN => {\n                debug!(\"proto [{}] check ping timeout\", context.proto_id);\n                let timeout = self.timeout;\n                for peer_id in self\n                    .connected_session_ids\n                    .values()\n                    .filter(|ps| ps.processing && ps.elapsed() >= timeout)\n                    .map(|ps| ps.peer_id.clone())\n                    .collect::<Vec<PeerId>>()\n                {\n                    self.send_event(PingEvent::Timeout(peer_id));\n                }\n            }\n            _ => panic!(\"unknown token {}\", token),\n        }\n    }\n}\n"
  },
  {
    "path": "core/network/src/protocols/ping.rs",
    "content": "mod behaviour;\nmod message;\nmod protocol;\nuse self::protocol::PingProtocol;\nuse behaviour::{EventTranslator, PingEventReporter};\n\nuse crate::event::PeerManagerEvent;\n\nuse futures::channel::mpsc::{self, UnboundedSender};\nuse tentacle::{\n    builder::MetaBuilder,\n    service::{ProtocolHandle, ProtocolMeta},\n    ProtocolId,\n};\n\nuse std::time::Duration;\n\npub const NAME: &str = \"chain_ping\";\npub const SUPPORT_VERSIONS: [&str; 1] = [\"0.1\"];\n\npub struct Ping(PingProtocol);\n\nimpl Ping {\n    pub fn new(\n        interval: Duration,\n        timeout: Duration,\n        sender: UnboundedSender<PeerManagerEvent>,\n    ) -> Self {\n        let reporter = PingEventReporter::new(sender);\n        let (tx, rx) = mpsc::channel(1000);\n        let translator = EventTranslator::new(rx, reporter);\n        tokio::spawn(translator);\n\n        Ping(PingProtocol::new(interval, timeout, tx))\n    }\n\n    pub fn build_meta(self, protocol_id: ProtocolId) -> ProtocolMeta {\n        MetaBuilder::new()\n            .id(protocol_id)\n            .name(name!(NAME))\n            .support_versions(support_versions!(SUPPORT_VERSIONS))\n            .service_handle(move || ProtocolHandle::Callback(Box::new(self.0)))\n            .build()\n    }\n}\n"
  },
  {
    "path": "core/network/src/protocols/transmitter/behaviour.rs",
    "content": "use std::future::Future;\nuse std::pin::Pin;\nuse std::sync::atomic::{AtomicU64, Ordering};\nuse std::sync::Arc;\nuse std::task::{Context, Poll};\n\nuse arc_swap::ArcSwapOption;\nuse futures::channel::mpsc::{self, UnboundedReceiver, UnboundedSender};\nuse futures::channel::oneshot;\nuse futures::stream::Stream;\nuse protocol::traits::Priority;\nuse protocol::Bytes;\nuse tentacle::error::SendErrorKind;\nuse tentacle::secio::PeerId;\nuse tentacle::service::TargetSession;\nuse tentacle::SessionId;\n\nuse super::message::{Recipient, SeqChunkMessage, TransmitterMessage};\nuse super::MAX_CHUNK_SIZE;\n\nuse crate::connection::{ConnectionServiceControl, ProtocolMessage};\nuse crate::error::{ErrorKind, NetworkError};\nuse crate::event::PeerManagerEvent;\nuse crate::peer_manager::SharedSessions;\nuse crate::protocols::core::TRANSMITTER_PROTOCOL_ID;\nuse crate::traits::{NetworkContext, SharedSessionBook};\n\n// TODO: Refactor connection service, decouple protocol and service\n// initialization.\n#[derive(Clone)]\npub struct TransmitterBehaviour {\n    pending_sending_tx: ArcSwapOption<UnboundedSender<PendingSending>>,\n}\n\nimpl TransmitterBehaviour {\n    pub fn new() -> Self {\n        let pending_sending_tx = ArcSwapOption::from(None);\n\n        TransmitterBehaviour { pending_sending_tx }\n    }\n\n    pub fn init(\n        &self,\n        conn_ctrl: ConnectionServiceControl,\n        peers_serv: UnboundedSender<PeerManagerEvent>,\n        sessions: SharedSessions,\n    ) {\n        let (pending_sending_tx, pending_sending_rx) = mpsc::unbounded();\n\n        let background_sending =\n            BackgroundSending::new(conn_ctrl, peers_serv, sessions, pending_sending_rx);\n        tokio::spawn(background_sending);\n\n        self.pending_sending_tx\n            .store(Some(Arc::new(pending_sending_tx)))\n    }\n\n    pub fn send(&self, msg: TransmitterMessage) -> impl Future<Output = Result<(), NetworkError>> {\n        let (tx, rx) = 
oneshot::channel();\n\n        let pending_sending = PendingSending { msg, tx };\n        let tx_guard = self.pending_sending_tx.load();\n\n        async move {\n            match tx_guard.as_ref() {\n                Some(tx) => {\n                    if let Err(e) = tx.unbounded_send(pending_sending) {\n                        log::error!(\"pending sending tx dropped\");\n                        return Err(NetworkError::Internal(Box::new(e)));\n                    }\n                }\n                None => {\n                    log::error!(\"transmitter behaviour isn't initialized\");\n                    return Err(NetworkError::Internal(Box::new(ErrorKind::Internal(\n                        \"transmitter behaviour isn't initialized\".to_owned(),\n                    ))));\n                }\n            }\n\n            match rx.await {\n                Ok(Err(e)) => Err(NetworkError::Internal(Box::new(e))),\n                Err(e) => Err(NetworkError::Internal(Box::new(e))),\n                Ok(Ok(_)) => Ok(()),\n            }\n        }\n    }\n}\n\nstruct PendingSending {\n    msg: TransmitterMessage,\n    tx:  oneshot::Sender<Result<(), NetworkError>>,\n}\n\nstruct BackgroundSending {\n    conn_ctrl:  ConnectionServiceControl,\n    peers_serv: UnboundedSender<PeerManagerEvent>,\n    sessions:   SharedSessions,\n    data_seq:   AtomicU64,\n\n    pending_sending_rx: UnboundedReceiver<PendingSending>,\n}\n\nimpl BackgroundSending {\n    pub fn new(\n        conn_ctrl: ConnectionServiceControl,\n        peers_serv: UnboundedSender<PeerManagerEvent>,\n        sessions: SharedSessions,\n        pending_sending_rx: UnboundedReceiver<PendingSending>,\n    ) -> Self {\n        BackgroundSending {\n            conn_ctrl,\n            peers_serv,\n            sessions,\n            data_seq: AtomicU64::new(0),\n\n            pending_sending_rx,\n        }\n    }\n\n    pub fn context(&self) -> SendingContext<'_> {\n        SendingContext {\n            conn_ctrl:  
&self.conn_ctrl,\n            peers_serv: &self.peers_serv,\n            sessions:   &self.sessions,\n            data_seq:   &self.data_seq,\n        }\n    }\n}\n\nimpl Future for BackgroundSending {\n    type Output = ();\n\n    fn poll(mut self: Pin<&mut Self>, ctx: &mut Context<'_>) -> Poll<Self::Output> {\n        loop {\n            let pending_sending_rx = &mut self.as_mut().pending_sending_rx;\n            futures::pin_mut!(pending_sending_rx);\n\n            match futures::ready!(pending_sending_rx.poll_next(ctx)) {\n                Some(PendingSending { msg, tx }) => {\n                    if let Err(e) = tx.send(self.context().send(msg)) {\n                        log::warn!(\"pending sending result {:?}\", e);\n                    }\n                }\n                None => {\n                    log::error!(\"transmitter pending tx dropped\");\n                    return Poll::Ready(());\n                }\n            }\n        }\n    }\n}\n\ntype MessageContext = protocol::traits::Context;\n\nstruct SendingContext<'a> {\n    conn_ctrl:  &'a ConnectionServiceControl,\n    peers_serv: &'a UnboundedSender<PeerManagerEvent>,\n    sessions:   &'a SharedSessions,\n    data_seq:   &'a AtomicU64,\n}\n\nimpl<'a> SendingContext<'a> {\n    fn send(&self, msg: TransmitterMessage) -> Result<(), NetworkError> {\n        let TransmitterMessage { priority, data, .. 
} = msg;\n\n        match msg.recipient {\n            Recipient::Session(target) => self.send_to_sessions(target, data, priority, msg.ctx),\n            Recipient::PeerId(peer_ids) => self.send_to_peers(peer_ids, data, priority, msg.ctx),\n        }\n    }\n\n    fn send_to_sessions(\n        &self,\n        target: TargetSession,\n        mut data: Bytes,\n        priority: Priority,\n        msg_ctx: MessageContext,\n    ) -> Result<(), NetworkError> {\n        let (target, opt_blocked) = match self.filter_blocked(target) {\n            (None, None) => unreachable!(),\n            (None, blocked) => {\n                return Err(NetworkError::Send {\n                    blocked,\n                    other: None,\n                });\n            }\n            (Some(tar), opt_blocked) => (tar, opt_blocked),\n        };\n\n        let url = msg_ctx.url().unwrap_or(\"\");\n        let data_size = match &target {\n            TargetSession::Single(_) => data.len(),\n            TargetSession::Multi(sessions) => data.len().saturating_mul(sessions.len()),\n            TargetSession::All => data.len().saturating_mul(self.sessions.len()),\n        };\n        common_apm::metrics::network::NETWORK_MESSAGE_SIZE_COUNT_VEC\n            .with_label_values(&[\"send\", url])\n            .inc_by(data_size as i64);\n\n        let seq = self.data_seq.fetch_add(1, Ordering::SeqCst);\n        log::debug!(\"seq {} data size {}\", seq, data.len());\n\n        if data.len() < MAX_CHUNK_SIZE {\n            let internal_msg = SeqChunkMessage {\n                seq,\n                eof: true,\n                data,\n            };\n\n            let proto_msg = ProtocolMessage {\n                protocol_id: TRANSMITTER_PROTOCOL_ID.into(),\n                target,\n                data: internal_msg.encode(),\n                priority,\n            };\n\n            let ret = self.conn_ctrl.send(proto_msg).map_err(|err| match &err {\n                SendErrorKind::BrokenPipe => 
NetworkError::Shutdown,\n                SendErrorKind::WouldBlock => NetworkError::Busy,\n            });\n\n            if ret.is_err() || opt_blocked.is_some() {\n                let other = ret.err();\n                return Err(NetworkError::Send {\n                    blocked: opt_blocked,\n                    other:   other.map(NetworkError::boxed),\n                });\n            }\n\n            return Ok(());\n        }\n\n        while !data.is_empty() {\n            if data.len() > MAX_CHUNK_SIZE {\n                let chunk = data.split_to(MAX_CHUNK_SIZE);\n\n                let internal_msg = SeqChunkMessage {\n                    seq,\n                    eof: false,\n                    data: chunk,\n                };\n\n                let proto_msg = ProtocolMessage {\n                    protocol_id: TRANSMITTER_PROTOCOL_ID.into(),\n                    target: target.clone(),\n                    data: internal_msg.encode(),\n                    priority,\n                };\n\n                let ret = self.conn_ctrl.send(proto_msg).map_err(|err| match &err {\n                    SendErrorKind::BrokenPipe => NetworkError::Shutdown,\n                    SendErrorKind::WouldBlock => NetworkError::Busy,\n                });\n\n                if ret.is_err() {\n                    let other = ret.err();\n                    return Err(NetworkError::Send {\n                        blocked: opt_blocked,\n                        other:   other.map(NetworkError::boxed),\n                    });\n                }\n            } else {\n                let last_data = std::mem::replace(&mut data, Bytes::new());\n\n                let internal_msg = SeqChunkMessage {\n                    seq,\n                    eof: true,\n                    data: last_data,\n                };\n\n                let proto_msg = ProtocolMessage {\n                    protocol_id: TRANSMITTER_PROTOCOL_ID.into(),\n                    target: target.clone(),\n         
           data: internal_msg.encode(),\n                    priority,\n                };\n\n                let ret = self.conn_ctrl.send(proto_msg).map_err(|err| match &err {\n                    SendErrorKind::BrokenPipe => NetworkError::Shutdown,\n                    SendErrorKind::WouldBlock => NetworkError::Busy,\n                });\n\n                if ret.is_err() || opt_blocked.is_some() {\n                    let other = ret.err();\n                    return Err(NetworkError::Send {\n                        blocked: opt_blocked,\n                        other:   other.map(NetworkError::boxed),\n                    });\n                }\n            }\n        }\n\n        Ok(())\n    }\n\n    fn send_to_peers(\n        &self,\n        peer_ids: Vec<PeerId>,\n        data: Bytes,\n        priority: Priority,\n        msg_ctx: MessageContext,\n    ) -> Result<(), NetworkError> {\n        let (connected, unconnected) = self.sessions.peers(peer_ids);\n        let send_ret =\n            self.send_to_sessions(TargetSession::Multi(connected), data, priority, msg_ctx);\n        if unconnected.is_empty() {\n            return send_ret;\n        }\n\n        let connect_peers = PeerManagerEvent::ConnectPeersNow {\n            pids: unconnected.clone(),\n        };\n        if self.peers_serv.unbounded_send(connect_peers).is_err() {\n            log::error!(\"network: peer manager service exit\");\n        }\n\n        if send_ret.is_err() || !unconnected.is_empty() {\n            let other = send_ret.err().map(NetworkError::boxed);\n            let unconnected = if unconnected.is_empty() {\n                None\n            } else {\n                Some(unconnected)\n            };\n\n            return Err(NetworkError::MultiCast { unconnected, other });\n        }\n\n        Ok(())\n    }\n\n    fn filter_blocked(\n        &self,\n        target: TargetSession,\n    ) -> (Option<TargetSession>, Option<Vec<SessionId>>) {\n        
self.sessions.refresh_blocked();\n\n        let all_blocked = self.sessions.all_blocked();\n        if all_blocked.is_empty() {\n            return (Some(target), None);\n        }\n\n        match target {\n            TargetSession::Single(sid) => {\n                if all_blocked.contains(&sid) {\n                    (None, Some(vec![sid]))\n                } else {\n                    (Some(TargetSession::Single(sid)), None)\n                }\n            }\n            TargetSession::Multi(sids) => {\n                let (sendable, blocked): (Vec<SessionId>, Vec<SessionId>) =\n                    sids.into_iter().partition(|sid| !all_blocked.contains(sid));\n\n                if sendable.is_empty() && blocked.is_empty() {\n                    unreachable!()\n                } else if sendable.is_empty() {\n                    (None, Some(blocked))\n                } else if blocked.is_empty() {\n                    (Some(TargetSession::Multi(sendable)), None)\n                } else {\n                    (Some(TargetSession::Multi(sendable)), Some(blocked))\n                }\n            }\n            TargetSession::All => {\n                let sendable = self.sessions.all_sendable();\n\n                (Some(TargetSession::Multi(sendable)), Some(all_blocked))\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "core/network/src/protocols/transmitter/message.rs",
    "content": "use bytes::{Buf, BufMut};\nuse protocol::traits::{Context, Priority};\nuse protocol::{Bytes, BytesMut};\nuse tentacle::secio::PeerId;\nuse tentacle::service::TargetSession;\nuse tentacle::SessionId;\n\npub enum Recipient {\n    Session(TargetSession),\n    PeerId(Vec<PeerId>),\n}\n\npub struct TransmitterMessage {\n    pub recipient: Recipient,\n    pub priority:  Priority,\n    pub data:      Bytes,\n    pub ctx:       Context, // For metric\n}\n\npub struct ReceivedMessage {\n    pub session_id: SessionId,\n    pub peer_id:    PeerId,\n    pub data:       Bytes,\n}\n\npub(crate) struct SeqChunkMessage {\n    pub seq:  u64,\n    pub eof:  bool,\n    pub data: Bytes,\n}\n\nimpl SeqChunkMessage {\n    pub fn encode(self) -> Bytes {\n        let eof = if self.eof { 1u8 } else { 0u8 };\n        let mut buf = BytesMut::with_capacity(9 + self.data.len());\n\n        buf.put_u64(self.seq);\n        buf.put_u8(eof);\n        buf.extend_from_slice(self.data.as_ref());\n\n        buf.freeze()\n    }\n\n    // Note: already check data size in protocol received.\n    pub fn decode(mut bytes: Bytes) -> Self {\n        let data = bytes.split_off(9);\n        let seq = bytes.get_u64();\n        let eof = bytes.get_u8() == 1;\n\n        SeqChunkMessage { seq, eof, data }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::SeqChunkMessage;\n\n    use protocol::Bytes;\n\n    #[test]\n    fn test_internal_message_codec() {\n        let data = b\"hello muta\";\n\n        let chunk = SeqChunkMessage {\n            seq:  1u64,\n            eof:  false,\n            data: Bytes::from_static(data),\n        };\n\n        let chunk = SeqChunkMessage::decode(chunk.encode());\n        assert_eq!(chunk.data, Bytes::from_static(data));\n        assert_eq!(chunk.eof, false);\n    }\n}\n"
  },
  {
    "path": "core/network/src/protocols/transmitter/protocol.rs",
    "content": "use std::time::Instant;\n\nuse protocol::Bytes;\nuse tentacle::context::ProtocolContextMutRef;\nuse tentacle::traits::SessionProtocol;\n\nuse crate::compression::Snappy;\nuse crate::peer_manager::PeerManagerHandle;\nuse crate::reactor::{MessageRouter, RemotePeer};\n\nuse super::message::{ReceivedMessage, SeqChunkMessage};\nuse super::{DATA_SEQ_TIMEOUT, MAX_CHUNK_SIZE};\n\npub struct TransmitterProtocol {\n    router:             MessageRouter<Snappy>,\n    peer_mgr:           PeerManagerHandle,\n    data_buf:           Vec<u8>,\n    current_data_seq:   u64,\n    first_seq_bytes_at: Instant,\n}\n\nimpl TransmitterProtocol {\n    pub fn new(router: MessageRouter<Snappy>, peer_mgr: PeerManagerHandle) -> Self {\n        TransmitterProtocol {\n            router,\n            peer_mgr,\n            data_buf: Vec::new(),\n            current_data_seq: 0,\n            first_seq_bytes_at: Instant::now(),\n        }\n    }\n}\n\nimpl SessionProtocol for TransmitterProtocol {\n    fn connected(&mut self, context: ProtocolContextMutRef, _version: &str) {\n        if !self.peer_mgr.contains_session(context.session.id) {\n            let _ = context.close_protocol(context.session.id, context.proto_id());\n            return;\n        }\n\n        let peer_id = match context.session.remote_pubkey.as_ref() {\n            Some(pubkey) => pubkey.peer_id(),\n            None => {\n                log::warn!(\"peer connection must be encrypted\");\n                let _ = context.disconnect(context.session.id);\n                return;\n            }\n        };\n        crate::protocols::OpenedProtocols::register(peer_id, context.proto_id());\n    }\n\n    fn received(&mut self, ctx: ProtocolContextMutRef, data: Bytes) {\n        let peer_id = match ctx.session.remote_pubkey.as_ref() {\n            Some(pk) => pk.peer_id(),\n            None => {\n                // Dont care result here, connection/keeper will also handle this.\n                let _ = 
ctx.disconnect(ctx.session.id);\n                return;\n            }\n        };\n        let session_id = ctx.session.id;\n\n        // Seq u64 takes 8 bytes, and the eof bool takes 1 byte, so a valid data\n        // length must be at least 10.\n        if data.len() < 10 {\n            log::warn!(\"session {} data size < 10, drop it\", session_id);\n            return;\n        }\n\n        let SeqChunkMessage { seq, eof, data } = SeqChunkMessage::decode(data);\n        log::debug!(\"received seq {} eof {} data size {}\", seq, eof, data.len());\n\n        if data.len() > MAX_CHUNK_SIZE {\n            log::warn!(\n                \"session {} data size > {}, drop it\",\n                session_id,\n                MAX_CHUNK_SIZE\n            );\n\n            return;\n        }\n\n        if seq == self.current_data_seq {\n            if self.first_seq_bytes_at.elapsed() > DATA_SEQ_TIMEOUT {\n                log::warn!(\n                    \"session {} data seq {} timeout, drop it\",\n                    session_id,\n                    self.current_data_seq\n                );\n\n                self.data_buf.clear();\n                return;\n            }\n\n            self.data_buf.extend(data.as_ref());\n            log::debug!(\"data buf size {}\", self.data_buf.len());\n        } else {\n            log::debug!(\"new data seq {}\", seq);\n\n            self.current_data_seq = seq;\n            self.data_buf.clear();\n            self.data_buf.extend(data.as_ref());\n            self.data_buf.shrink_to_fit();\n            self.first_seq_bytes_at = Instant::now();\n        }\n\n        if !eof {\n            return;\n        }\n\n        let data = std::mem::replace(&mut self.data_buf, Vec::new());\n        log::debug!(\"final seq {} data size {}\", seq, data.len());\n\n        let remote_peer = match RemotePeer::from_proto_context(&ctx) {\n            Ok(peer) => peer,\n            Err(_err) => {\n                log::warn!(\"received data from 
unencrypted peer, impossible, drop it\");\n                return;\n            }\n        };\n\n        let recv_msg = ReceivedMessage {\n            session_id,\n            peer_id,\n            data: Bytes::from(data),\n        };\n\n        let host = remote_peer.connected_addr.host.to_owned();\n        let route_fut = self.router.route_message(remote_peer.clone(), recv_msg);\n        tokio::spawn(async move {\n            common_apm::metrics::network::NETWORK_RECEIVED_MESSAGE_IN_PROCESSING_GUAGE.inc();\n            common_apm::metrics::network::NETWORK_RECEIVED_IP_MESSAGE_IN_PROCESSING_GUAGE_VEC\n                .with_label_values(&[&host])\n                .inc();\n\n            if let Err(err) = route_fut.await {\n                log::warn!(\"route message from {} failed: {}\", remote_peer, err);\n            }\n\n            common_apm::metrics::network::NETWORK_RECEIVED_MESSAGE_IN_PROCESSING_GUAGE.dec();\n            common_apm::metrics::network::NETWORK_RECEIVED_IP_MESSAGE_IN_PROCESSING_GUAGE_VEC\n                .with_label_values(&[&host])\n                .dec();\n        });\n    }\n}\n"
  },
  {
    "path": "core/network/src/protocols/transmitter.rs",
    "content": "mod behaviour;\nmod message;\nmod protocol;\n\nuse std::time::Duration;\n\nuse tentacle::builder::MetaBuilder;\nuse tentacle::service::{ProtocolHandle, ProtocolMeta};\nuse tentacle::ProtocolId;\n\nuse crate::compression::Snappy;\nuse crate::peer_manager::PeerManagerHandle;\nuse crate::reactor::MessageRouter;\nuse crate::traits::Compression;\n\nuse self::behaviour::TransmitterBehaviour;\nuse self::protocol::TransmitterProtocol;\npub use message::{ReceivedMessage, Recipient, TransmitterMessage};\n\npub const NAME: &str = \"chain_transmitter\";\npub const SUPPORT_VERSIONS: [&str; 1] = [\"0.3\"];\npub const DATA_SEQ_TIMEOUT: Duration = Duration::from_secs(60);\npub const MAX_CHUNK_SIZE: usize = 4 * 1000 * 1000; // 4MB\n\n#[derive(Clone)]\npub struct Transmitter {\n    pub(crate) router:    MessageRouter<Snappy>,\n    pub(crate) behaviour: TransmitterBehaviour,\n    peer_mgr:             PeerManagerHandle,\n}\n\nimpl Transmitter {\n    pub fn new(router: MessageRouter<Snappy>, peer_mgr: PeerManagerHandle) -> Self {\n        let behaviour = TransmitterBehaviour::new();\n        Transmitter {\n            router,\n            behaviour,\n            peer_mgr,\n        }\n    }\n\n    pub fn build_meta(self, protocol_id: ProtocolId) -> ProtocolMeta {\n        MetaBuilder::new()\n            .id(protocol_id)\n            .name(name!(NAME))\n            .support_versions(support_versions!(SUPPORT_VERSIONS))\n            .session_handle(move || {\n                let proto = TransmitterProtocol::new(self.router.clone(), self.peer_mgr.clone());\n                ProtocolHandle::Callback(Box::new(proto))\n            })\n            .build()\n    }\n\n    pub fn compressor(&self) -> impl Compression {\n        Snappy\n    }\n}\n"
  },
  {
    "path": "core/network/src/reactor/mod.rs",
    "content": "mod router;\nmod rpc_map;\n\nuse std::convert::TryFrom;\nuse std::marker::PhantomData;\n\nuse async_trait::async_trait;\nuse protocol::traits::{Context, MessageCodec, MessageHandler, TrustFeedback};\nuse protocol::{Bytes, ProtocolResult};\n\nuse crate::endpoint::{Endpoint, EndpointScheme, RpcEndpoint};\nuse crate::message::NetworkMessage;\nuse crate::rpc::RpcResponse;\nuse crate::traits::NetworkContext;\n\npub(crate) use router::{MessageRouter, RemotePeer, RouterContext};\n\n#[async_trait]\npub trait Reactor: Send + Sync {\n    async fn react(\n        &self,\n        context: RouterContext,\n        endpoint: Endpoint,\n        network_message: NetworkMessage,\n    ) -> ProtocolResult<()>;\n}\n\npub struct MessageReactor<M: MessageCodec, H: MessageHandler<Message = M>> {\n    msg_handler: H,\n}\n\npub fn generate<M: MessageCodec, H: MessageHandler<Message = M>>(h: H) -> MessageReactor<M, H> {\n    MessageReactor { msg_handler: h }\n}\n\npub fn rpc_resp<M: MessageCodec>() -> MessageReactor<M, NoopHandler<M>> {\n    MessageReactor {\n        msg_handler: NoopHandler::new(),\n    }\n}\n\n#[async_trait]\nimpl<M: MessageCodec, H: MessageHandler<Message = M>> Reactor for MessageReactor<M, H> {\n    async fn react(\n        &self,\n        context: RouterContext,\n        endpoint: Endpoint,\n        network_message: NetworkMessage,\n    ) -> ProtocolResult<()> {\n        let ctx = Context::new()\n            .set_session_id(context.remote_peer.session_id)\n            .set_remote_peer_id(context.remote_peer.peer_id.clone())\n            .set_remote_connected_addr(context.remote_peer.connected_addr.clone());\n\n        let mut ctx = match (network_message.trace_id(), network_message.span_id()) {\n            (Some(trace_id), Some(span_id)) => {\n                let span_state = common_apm::muta_apm::MutaTracer::new_state(trace_id, span_id);\n                common_apm::muta_apm::MutaTracer::inject_span_state(ctx, span_state)\n            }\n            _ 
=> ctx,\n        };\n\n        let session_id = context.remote_peer.session_id;\n        let raw_context = Bytes::from(network_message.content);\n        let feedback = match endpoint.scheme() {\n            EndpointScheme::Gossip => {\n                let content = M::decode(raw_context)?;\n                self.msg_handler.process(ctx, content).await\n            }\n            EndpointScheme::RpcCall => {\n                let content = M::decode(raw_context)?;\n                let rpc_endpoint = RpcEndpoint::try_from(endpoint)?;\n\n                let ctx = ctx.set_rpc_id(rpc_endpoint.rpc_id().value());\n                self.msg_handler.process(ctx, content).await\n            }\n            EndpointScheme::RpcResponse => {\n                let content = RpcResponse::decode(raw_context)?;\n                let rpc_endpoint = RpcEndpoint::try_from(endpoint)?;\n                let rpc_id = rpc_endpoint.rpc_id().value();\n\n                if !context.rpc_map.contains(session_id, rpc_id) {\n                    let full_url = rpc_endpoint.endpoint().full_url();\n\n                    log::warn!(\n                        \"rpc to {} from {} not found, maybe timeout\",\n                        full_url,\n                        context.remote_peer\n                    );\n                    return Ok(());\n                }\n\n                let rpc_id = rpc_endpoint.rpc_id().value();\n                let resp_tx = context.rpc_map.take::<RpcResponse>(session_id, rpc_id)?;\n                if resp_tx.send(content).is_err() {\n                    let end = rpc_endpoint.endpoint().full_url();\n                    log::warn!(\"network: reactor: {} rpc dropped on {}\", session_id, end);\n                }\n\n                return Ok(());\n            }\n        };\n\n        context.report_feedback(feedback);\n        Ok(())\n    }\n}\n\n#[derive(Debug)]\npub struct NoopHandler<M> {\n    pin_m: PhantomData<fn() -> M>,\n}\n\nimpl<M> NoopHandler<M>\nwhere\n    M: 
MessageCodec,\n{\n    pub fn new() -> Self {\n        NoopHandler { pin_m: PhantomData }\n    }\n}\n\n#[async_trait]\nimpl<M> MessageHandler for NoopHandler<M>\nwhere\n    M: MessageCodec,\n{\n    type Message = M;\n\n    async fn process(&self, _: Context, _: Self::Message) -> TrustFeedback {\n        TrustFeedback::Neutral\n    }\n}\n"
  },
  {
    "path": "core/network/src/reactor/router.rs",
    "content": "use std::collections::HashMap;\nuse std::future::Future;\nuse std::sync::Arc;\n\nuse derive_more::Display;\nuse futures::channel::mpsc::UnboundedSender;\nuse parking_lot::RwLock;\nuse protocol::traits::{MessageCodec, MessageHandler, TrustFeedback};\nuse protocol::ProtocolResult;\nuse tentacle::context::ProtocolContextMutRef;\nuse tentacle::secio::PeerId;\nuse tentacle::SessionId;\n\nuse crate::common::ConnectedAddr;\nuse crate::endpoint::Endpoint;\nuse crate::error::{ErrorKind, NetworkError};\nuse crate::event::PeerManagerEvent;\nuse crate::message::NetworkMessage;\nuse crate::protocols::ReceivedMessage;\nuse crate::traits::Compression;\n\nuse super::rpc_map::RpcMap;\nuse super::Reactor;\n\n#[derive(Debug, Display)]\n#[display(fmt = \"connection isnt encrypted, no peer id\")]\npub struct NoEncryption {}\n\n#[derive(Debug, Display, Clone)]\n#[display(fmt = \"remote peer {:?} addr {}\", peer_id, connected_addr)]\npub struct RemotePeer {\n    pub session_id:     SessionId,\n    pub peer_id:        PeerId,\n    pub connected_addr: ConnectedAddr,\n}\n\nimpl RemotePeer {\n    pub fn from_proto_context(\n        protocol_context: &ProtocolContextMutRef,\n    ) -> Result<Self, NoEncryption> {\n        let session = protocol_context.session;\n        let pubkey = session\n            .remote_pubkey\n            .as_ref()\n            .ok_or_else(|| NoEncryption {})?;\n\n        Ok(RemotePeer {\n            session_id:     session.id,\n            peer_id:        pubkey.peer_id(),\n            connected_addr: ConnectedAddr::from(&session.address),\n        })\n    }\n}\n\npub struct RouterContext {\n    pub(crate) remote_peer: RemotePeer,\n    pub(crate) rpc_map:     Arc<RpcMap>,\n    trust_tx:               UnboundedSender<PeerManagerEvent>,\n}\n\nimpl RouterContext {\n    fn new(\n        remote_peer: RemotePeer,\n        rpc_map: Arc<RpcMap>,\n        trust_tx: UnboundedSender<PeerManagerEvent>,\n    ) -> Self {\n        RouterContext {\n            
remote_peer,\n            rpc_map,\n            trust_tx,\n        }\n    }\n\n    pub fn report_feedback(&self, feedback: TrustFeedback) {\n        let feedback_event = PeerManagerEvent::TrustMetric {\n            pid: self.remote_peer.peer_id.clone(),\n            feedback,\n        };\n        if let Err(e) = self.trust_tx.unbounded_send(feedback_event) {\n            log::error!(\"send peer {} feedback failed {}\", self.remote_peer, e);\n        }\n    }\n}\n\ntype ReactorMap = HashMap<Endpoint, Arc<Box<dyn Reactor>>>;\n\n#[derive(Clone)]\npub struct MessageRouter<C> {\n    // Endpoint to reactor channel map\n    reactor_map: Arc<RwLock<ReactorMap>>,\n\n    // Rpc map\n    pub(crate) rpc_map: Arc<RpcMap>,\n\n    // Sender for peer trust metric feedback\n    trust_tx: UnboundedSender<PeerManagerEvent>,\n\n    // Compression to decompress message\n    compression: C,\n}\n\nimpl<C> MessageRouter<C>\nwhere\n    C: Compression + Send + Clone + 'static,\n{\n    pub fn new(trust_tx: UnboundedSender<PeerManagerEvent>, compression: C) -> Self {\n        MessageRouter {\n            reactor_map: Default::default(),\n            rpc_map: Arc::new(RpcMap::new()),\n            trust_tx,\n            compression,\n        }\n    }\n\n    pub fn register_reactor<M: MessageCodec>(\n        &self,\n        endpoint: Endpoint,\n        message_handler: impl MessageHandler<Message = M>,\n    ) {\n        let reactor = super::generate(message_handler);\n        self.reactor_map\n            .write()\n            .insert(endpoint, Arc::new(Box::new(reactor)));\n    }\n\n    pub fn register_rpc_response(&self, endpoint: Endpoint) {\n        let reactor = super::rpc_resp::<()>();\n        self.reactor_map\n            .write()\n            .insert(endpoint, Arc::new(Box::new(reactor)));\n    }\n\n    pub fn route_message(\n        &self,\n        remote_peer: RemotePeer,\n        recv_msg: ReceivedMessage,\n    ) -> impl Future<Output = ProtocolResult<()>> {\n        let reactor_map 
= Arc::clone(&self.reactor_map);\n        let compression = self.compression.clone();\n        let router_context = RouterContext::new(\n            remote_peer,\n            Arc::clone(&self.rpc_map),\n            self.trust_tx.clone(),\n        );\n        let raw_data_size = recv_msg.data.len();\n\n        async move {\n            let network_message = {\n                let decompressed = compression.decompress(recv_msg.data)?;\n                NetworkMessage::decode(decompressed)?\n            };\n            common_apm::metrics::network::on_network_message_received(&network_message.url);\n\n            let endpoint = network_message.url.parse::<Endpoint>()?;\n            common_apm::metrics::network::NETWORK_MESSAGE_SIZE_COUNT_VEC\n                .with_label_values(&[\"received\", &endpoint.root()])\n                .inc_by(raw_data_size as i64);\n\n            let reactor = {\n                let opt_reactor = reactor_map.read().get(&endpoint).cloned();\n                opt_reactor\n                    .ok_or_else(|| NetworkError::from(ErrorKind::NoReactor(endpoint.root())))?\n            };\n\n            let ret = reactor\n                .react(router_context, endpoint.clone(), network_message)\n                .await;\n            if let Err(err) = ret.as_ref() {\n                log::error!(\"process {} message failed: {}\", endpoint, err);\n            }\n            ret\n        }\n    }\n}\n"
  },
  {
    "path": "core/network/src/reactor/rpc_map.rs",
    "content": "use std::{\n    any::Any,\n    collections::HashMap,\n    sync::atomic::{AtomicU64, Ordering},\n    sync::Arc,\n};\n\nuse derive_more::Constructor;\nuse futures::channel::oneshot::{self, Receiver, Sender};\nuse parking_lot::RwLock;\nuse tentacle::SessionId;\n\nuse crate::error::{ErrorKind, NetworkError};\n\n#[derive(Debug, PartialEq, Eq, Hash, Clone, Copy, Constructor)]\nstruct Key {\n    sid: SessionId,\n    rid: u64,\n}\n\nstruct BackSender(Box<Arc<dyn Any + Send + Sync + 'static>>);\n\n#[derive(Default)]\npub struct RpcMap {\n    next_id: Arc<AtomicU64>,\n    map:     Arc<RwLock<HashMap<Key, BackSender>>>,\n}\n\nimpl RpcMap {\n    pub fn new() -> Self {\n        RpcMap {\n            next_id: Arc::new(AtomicU64::new(0)),\n            map:     Default::default(),\n        }\n    }\n\n    pub fn next_rpc_id(&self) -> u64 {\n        self.next_id.fetch_add(1, Ordering::SeqCst)\n    }\n\n    pub fn insert<T: Send + 'static>(&self, sid: SessionId, rid: u64) -> Receiver<T> {\n        let key = Key::new(sid, rid);\n\n        let (done_tx, done_rx) = oneshot::channel();\n        let sender = BackSender(Box::new(Arc::new(done_tx)));\n\n        self.map.write().insert(key, sender);\n\n        done_rx\n    }\n\n    pub fn contains(&self, sid: SessionId, rid: u64) -> bool {\n        let key = Key::new(sid, rid);\n        self.map.read().contains_key(&key)\n    }\n\n    pub fn take<T: Send + 'static>(\n        &self,\n        sid: SessionId,\n        rid: u64,\n    ) -> Result<Sender<T>, NetworkError> {\n        let key = Key::new(sid, rid);\n\n        if !self.map.read().contains_key(&key) {\n            return Err(ErrorKind::UnknownRpc { sid, rid }.into());\n        }\n\n        let BackSender(boxed_any) = {\n            let opt_sender = self.map.write().remove(&key);\n            opt_sender.ok_or_else(|| ErrorKind::UnknownRpc { sid, rid })?\n        };\n\n        let arc_sender: Arc<Sender<T>> = boxed_any\n            .downcast::<Sender<T>>()\n            
.map_err(|_| ErrorKind::UnexpectedRpcSender)?;\n\n        Arc::try_unwrap(arc_sender).map_err(|_| ErrorKind::MoreArcRpcSender.into())\n    }\n}\n"
  },
  {
    "path": "core/network/src/rpc.rs",
    "content": "use derive_more::Display;\nuse serde::{Deserialize, Serialize};\n\nuse protocol::Bytes;\n\n#[derive(Debug, Deserialize, Serialize, Display)]\n#[repr(u8)]\npub enum RpcResponseCode {\n    ServerError,\n    Other(u8),\n}\n\n#[derive(Debug, Deserialize, Serialize, Display)]\n#[display(fmt = \"rpc err code {} msg {}\", code, msg)]\npub struct RpcErrorMessage {\n    pub code: RpcResponseCode,\n    pub msg:  String,\n}\n\nimpl std::error::Error for RpcErrorMessage {}\n\n#[derive(Debug, Deserialize, Serialize)]\npub enum RpcResponse {\n    Success(Bytes),\n    Error(RpcErrorMessage),\n}\n"
  },
  {
    "path": "core/network/src/selfcheck.rs",
    "content": "use std::{\n    future::Future,\n    pin::Pin,\n    sync::Arc,\n    task::{Context, Poll},\n    time::Duration,\n};\n\nuse futures::task::AtomicWaker;\nuse log::info;\n\nuse crate::{common::HeartBeat, traits::SharedSessionBook};\n\npub struct SelfCheckConfig {\n    pub interval: Duration,\n}\n\npub(crate) struct SelfCheck<S> {\n    sessions:   S,\n    heart_beat: Option<HeartBeat>,\n    hb_waker:   Arc<AtomicWaker>,\n}\n\nimpl<S> SelfCheck<S>\nwhere\n    S: SharedSessionBook + Send + Unpin + 'static,\n{\n    pub fn new(sessions: S, config: SelfCheckConfig) -> Self {\n        let waker = Arc::new(AtomicWaker::new());\n        let heart_beat = HeartBeat::new(Arc::clone(&waker), config.interval);\n\n        SelfCheck {\n            sessions,\n            heart_beat: Some(heart_beat),\n            hb_waker: waker,\n        }\n    }\n\n    fn report_allowlist(&self) {\n        info!(\"peers in allowlist: {:?}\", self.sessions.allowlist());\n    }\n\n    fn report_pending_data(&self) {\n        let sids = self.sessions.all();\n        let mut total_size = 0;\n\n        let peer_reports = sids\n            .into_iter()\n            .map(|sid| {\n                let connected_addr = self.sessions.connected_addr(sid);\n                let data_size = self.sessions.pending_data_size(sid) / (1000 * 1000); // MB not MiB\n\n                total_size += data_size;\n                (connected_addr, data_size)\n            })\n            .collect::<Vec<_>>();\n\n        info!(\n            \"total connected peers: {}, pending size {} MB, session(s) {:?}\",\n            peer_reports.len(),\n            total_size,\n            peer_reports\n        );\n    }\n}\n\nimpl<S> Future for SelfCheck<S>\nwhere\n    S: SharedSessionBook + Send + Unpin + 'static,\n{\n    type Output = ();\n\n    fn poll(mut self: Pin<&mut Self>, ctx: &mut Context<'_>) -> Poll<Self::Output> {\n        self.hb_waker.register(ctx.waker());\n\n        // Spawn heart beat\n        if let 
Some(heart_beat) = self.heart_beat.take() {\n            tokio::spawn(heart_beat);\n\n            // Not needed for first run\n            return Poll::Pending;\n        }\n\n        self.as_ref().report_pending_data();\n        self.as_ref().report_allowlist();\n\n        Poll::Pending\n    }\n}\n"
  },
  {
    "path": "core/network/src/service.rs",
    "content": "use std::future::Future;\nuse std::net::SocketAddr;\nuse std::pin::Pin;\nuse std::sync::Arc;\nuse std::task::{Context as TaskContext, Poll};\n\nuse async_trait::async_trait;\nuse futures::channel::mpsc::{unbounded, UnboundedReceiver, UnboundedSender};\nuse futures::stream::Stream;\nuse futures::task::AtomicWaker;\nuse log::{debug, error, info};\nuse protocol::traits::{\n    Context, Gossip, MessageCodec, MessageHandler, Network, PeerTag, PeerTrust, Priority, Rpc,\n    TrustFeedback,\n};\nuse protocol::types::Hash;\nuse protocol::{Bytes, ProtocolResult};\nuse tentacle::secio::PeerId;\n\nuse crate::common::{socket_to_multi_addr, HeartBeat};\nuse crate::compression::Snappy;\nuse crate::connection::{ConnectionConfig, ConnectionService, ConnectionServiceKeeper};\nuse crate::endpoint::{Endpoint, EndpointScheme};\nuse crate::error::NetworkError;\nuse crate::event::{ConnectionEvent, PeerManagerEvent};\nuse crate::metrics::Metrics;\nuse crate::outbound::{NetworkGossip, NetworkRpc};\n#[cfg(feature = \"diagnostic\")]\nuse crate::peer_manager::diagnostic::{Diagnostic, DiagnosticHookFn};\nuse crate::peer_manager::{PeerManager, PeerManagerConfig, PeerManagerHandle, SharedSessions};\nuse crate::protocols::{CoreProtocol, Transmitter};\nuse crate::reactor::MessageRouter;\nuse crate::selfcheck::SelfCheck;\nuse crate::traits::NetworkContext;\nuse crate::{NetworkConfig, PeerIdExt};\n\n#[derive(Clone)]\npub struct NetworkServiceHandle {\n    gossip:     NetworkGossip,\n    rpc:        NetworkRpc,\n    peer_trust: UnboundedSender<PeerManagerEvent>,\n    peer_state: PeerManagerHandle,\n\n    #[cfg(feature = \"diagnostic\")]\n    pub diagnostic: Diagnostic,\n}\n\n#[async_trait]\nimpl Gossip for NetworkServiceHandle {\n    async fn broadcast<M>(&self, cx: Context, end: &str, msg: M, p: Priority) -> ProtocolResult<()>\n    where\n        M: MessageCodec,\n    {\n        self.gossip.broadcast(cx, end, msg, p).await\n    }\n\n    async fn multicast<'a, M, P>(\n        &self,\n 
       cx: Context,\n        end: &str,\n        peer_ids: P,\n        msg: M,\n        p: Priority,\n    ) -> ProtocolResult<()>\n    where\n        M: MessageCodec,\n        P: AsRef<[Bytes]> + Send + 'a,\n    {\n        self.gossip.multicast(cx, end, peer_ids, msg, p).await\n    }\n}\n\n#[async_trait]\nimpl Rpc for NetworkServiceHandle {\n    async fn call<M, R>(&self, cx: Context, end: &str, msg: M, p: Priority) -> ProtocolResult<R>\n    where\n        M: MessageCodec,\n        R: MessageCodec,\n    {\n        self.rpc.call(cx, end, msg, p).await\n    }\n\n    async fn response<M>(\n        &self,\n        cx: Context,\n        end: &str,\n        msg: ProtocolResult<M>,\n        p: Priority,\n    ) -> ProtocolResult<()>\n    where\n        M: MessageCodec,\n    {\n        self.rpc.response(cx, end, msg, p).await\n    }\n}\n\nimpl PeerTrust for NetworkServiceHandle {\n    fn report(&self, ctx: Context, feedback: TrustFeedback) {\n        let remote_peer_id = match ctx.remote_peer_id() {\n            Ok(id) => id,\n            Err(e) => {\n                log::error!(\n                    \"peer id not found on trust report ctx, report {}, err {}\",\n                    feedback,\n                    e\n                );\n                return;\n            }\n        };\n\n        let feedback = PeerManagerEvent::TrustMetric {\n            pid: remote_peer_id,\n            feedback,\n        };\n        if let Err(e) = self.peer_trust.unbounded_send(feedback) {\n            log::error!(\"peer manager offline {}\", e);\n        }\n    }\n}\n\nimpl Network for NetworkServiceHandle {\n    fn tag(&self, _: Context, peer_id: Bytes, tag: PeerTag) -> ProtocolResult<()> {\n        let peer_id = <PeerId as PeerIdExt>::from_bytes(peer_id)?;\n        self.peer_state.tag(&peer_id, tag)?;\n\n        Ok(())\n    }\n\n    fn untag(&self, _: Context, peer_id: Bytes, tag: &PeerTag) -> ProtocolResult<()> {\n        let peer_id = <PeerId as PeerIdExt>::from_bytes(peer_id)?;\n  
      self.peer_state.untag(&peer_id, tag);\n\n        Ok(())\n    }\n\n    fn tag_consensus(&self, _: Context, peer_ids: Vec<Bytes>) -> ProtocolResult<()> {\n        let peer_ids = peer_ids\n            .into_iter()\n            .map(<PeerId as PeerIdExt>::from_bytes)\n            .collect::<Result<Vec<_>, _>>()?;\n        self.peer_state.tag_consensus(peer_ids);\n\n        Ok(())\n    }\n}\n\nenum NetworkConnectionService {\n    NoListen(ConnectionService<CoreProtocol>), // no listen address yet\n    Ready(ConnectionService<CoreProtocol>),\n}\n\npub struct NetworkService {\n    sys_rx: UnboundedReceiver<NetworkError>,\n\n    // Heart beats\n    conn_tx:    UnboundedSender<ConnectionEvent>,\n    mgr_tx:     UnboundedSender<PeerManagerEvent>,\n    heart_beat: Option<HeartBeat>,\n    hb_waker:   Arc<AtomicWaker>,\n\n    // Config backup\n    config: NetworkConfig,\n\n    // Public service components\n    gossip:      NetworkGossip,\n    rpc:         NetworkRpc,\n    transmitter: Transmitter,\n\n    // Core service\n    net_conn_srv:    Option<NetworkConnectionService>,\n    peer_mgr:        Option<PeerManager>,\n    peer_mgr_handle: PeerManagerHandle,\n\n    // Metrics\n    metrics: Option<Metrics<SharedSessions>>,\n\n    // Self check\n    selfcheck: Option<SelfCheck<SharedSessions>>,\n\n    // Diagnostic\n    #[cfg(feature = \"diagnostic\")]\n    diagnostic: Diagnostic,\n}\n\nimpl NetworkService {\n    pub fn new(config: NetworkConfig) -> Self {\n        let (mgr_tx, mgr_rx) = unbounded();\n        let (conn_tx, conn_rx) = unbounded();\n        let (sys_tx, sys_rx) = unbounded();\n\n        let hb_waker = Arc::new(AtomicWaker::new());\n        let heart_beat = HeartBeat::new(Arc::clone(&hb_waker), config.heart_beat_interval);\n\n        let mgr_config = PeerManagerConfig::from(&config);\n        let conn_config = ConnectionConfig::from(&config);\n\n        // Build peer manager\n        let mut peer_mgr = PeerManager::new(mgr_config, mgr_rx, conn_tx.clone());\n    
    let peer_mgr_handle = peer_mgr.handle();\n        let session_book = peer_mgr.share_session_book((&config).into());\n        #[cfg(feature = \"diagnostic\")]\n        let diagnostic = peer_mgr.diagnostic();\n\n        if config.enable_save_restore {\n            peer_mgr.enable_save_restore();\n        }\n\n        if let Err(err) = peer_mgr.restore_peers() {\n            error!(\"network: peer manager: load peers failure: {}\", err);\n        }\n\n        if !config.bootstraps.is_empty() {\n            peer_mgr.bootstrap();\n        }\n\n        // Build service protocol\n        let disc_sync_interval = config.discovery_sync_interval;\n        let message_router = MessageRouter::new(mgr_tx.clone(), Snappy);\n        let proto = CoreProtocol::build()\n            .ping(config.ping_interval, config.ping_timeout, mgr_tx.clone())\n            .identify(peer_mgr_handle.clone(), mgr_tx.clone())\n            .discovery(peer_mgr_handle.clone(), mgr_tx.clone(), disc_sync_interval)\n            .transmitter(message_router, peer_mgr_handle.clone())\n            .build();\n        let transmitter = proto.transmitter();\n\n        // Build connection service\n        let keeper = ConnectionServiceKeeper::new(mgr_tx.clone(), sys_tx);\n        let conn_srv = ConnectionService::<CoreProtocol>::new(proto, conn_config, keeper, conn_rx);\n        let conn_ctrl = conn_srv.control();\n\n        transmitter\n            .behaviour\n            .init(conn_ctrl, mgr_tx.clone(), session_book.clone());\n\n        // Build public service components\n        let gossip = NetworkGossip::new(transmitter.clone());\n        let rpc = NetworkRpc::new(transmitter.clone(), (&config).into());\n\n        // Build metrics service\n        let metrics = Metrics::new(session_book.clone());\n\n        // Build selfcheck service\n        let selfcheck = SelfCheck::new(session_book, (&config).into());\n\n        NetworkService {\n            sys_rx,\n            conn_tx,\n            mgr_tx,\n         
   hb_waker,\n\n            heart_beat: Some(heart_beat),\n\n            config,\n\n            gossip,\n            rpc,\n            transmitter,\n\n            net_conn_srv: Some(NetworkConnectionService::NoListen(conn_srv)),\n            peer_mgr: Some(peer_mgr),\n            peer_mgr_handle,\n\n            metrics: Some(metrics),\n\n            selfcheck: Some(selfcheck),\n\n            #[cfg(feature = \"diagnostic\")]\n            diagnostic,\n        }\n    }\n\n    pub fn register_endpoint_handler<M>(\n        &mut self,\n        end: &str,\n        handler: impl MessageHandler<Message = M>,\n    ) -> ProtocolResult<()>\n    where\n        M: MessageCodec,\n    {\n        let endpoint = end.parse::<Endpoint>()?;\n        if endpoint.scheme() == EndpointScheme::RpcResponse {\n            let err = \"use register_rpc_response() instead\".to_owned();\n\n            return Err(NetworkError::UnexpectedScheme(err).into());\n        }\n\n        self.transmitter.router.register_reactor(endpoint, handler);\n        Ok(())\n    }\n\n    // Currently rpc response doesn't invoke message handler, so we create a dummy\n    // for it.\n    pub fn register_rpc_response<M>(&mut self, end: &str) -> ProtocolResult<()>\n    where\n        M: MessageCodec,\n    {\n        let endpoint = end.parse::<Endpoint>()?;\n        if endpoint.scheme() != EndpointScheme::RpcResponse {\n            return Err(NetworkError::UnexpectedScheme(end.to_owned()).into());\n        }\n\n        self.transmitter.router.register_rpc_response(endpoint);\n        Ok(())\n    }\n\n    #[cfg(feature = \"diagnostic\")]\n    pub fn register_diagnostic_hook(&mut self, f: DiagnosticHookFn) {\n        if let Some(peer_mgr) = self.peer_mgr.as_mut() {\n            peer_mgr.register_diagnostic_hook(f);\n        }\n    }\n\n    pub fn handle(&self) -> NetworkServiceHandle {\n        NetworkServiceHandle {\n            gossip:     self.gossip.clone(),\n            rpc:        self.rpc.clone(),\n            
peer_trust: self.mgr_tx.clone(),\n            peer_state: self.peer_mgr_handle.clone(),\n\n            #[cfg(feature = \"diagnostic\")]\n            diagnostic:                                self.diagnostic.clone(),\n        }\n    }\n\n    pub fn peer_id(&self) -> PeerId {\n        self.config.secio_keypair.peer_id()\n    }\n\n    pub fn set_chain_id(&self, chain_id: Hash) {\n        self.peer_mgr_handle.set_chain_id(chain_id);\n    }\n\n    pub async fn listen(&mut self, socket_addr: SocketAddr) -> ProtocolResult<()> {\n        if let Some(NetworkConnectionService::NoListen(conn_srv)) = &mut self.net_conn_srv {\n            debug!(\"network: listen to {}\", socket_addr);\n\n            let addr = socket_to_multi_addr(socket_addr);\n\n            conn_srv.listen(addr.clone()).await?;\n\n            // Update service state\n            if let Some(NetworkConnectionService::NoListen(conn_srv)) = self.net_conn_srv.take() {\n                self.net_conn_srv = Some(NetworkConnectionService::Ready(conn_srv));\n            } else {\n                unreachable!(\"connection service must be there\");\n            }\n        }\n\n        Ok(())\n    }\n}\n\nimpl Future for NetworkService {\n    type Output = ();\n\n    fn poll(mut self: Pin<&mut Self>, ctx: &mut TaskContext<'_>) -> Poll<Self::Output> {\n        self.hb_waker.register(ctx.waker());\n\n        macro_rules! 
service_ready {\n            ($poll:expr) => {\n                match $poll {\n                    Poll::Pending => break,\n                    Poll::Ready(Some(v)) => v,\n                    Poll::Ready(None) => {\n                        info!(\"network shutdown\");\n\n                        return Poll::Ready(());\n                    }\n                }\n            };\n        }\n\n        // Preflight\n        if let Some(conn_srv) = self.net_conn_srv.take() {\n            let default_listen = self.config.default_listen.clone();\n\n            tokio::spawn(async move {\n                let conn_srv = match conn_srv {\n                    NetworkConnectionService::NoListen(mut conn_srv) => {\n                        conn_srv\n                            .listen(default_listen)\n                            .await\n                            .expect(\"fail to listen default address\");\n\n                        conn_srv\n                    }\n                    NetworkConnectionService::Ready(conn_srv) => conn_srv,\n                };\n\n                conn_srv.await\n            });\n        }\n\n        if let Some(peer_mgr) = self.peer_mgr.take() {\n            tokio::spawn(peer_mgr);\n        }\n\n        if let Some(metrics) = self.metrics.take() {\n            tokio::spawn(metrics);\n        }\n\n        if let Some(selfcheck) = self.selfcheck.take() {\n            tokio::spawn(selfcheck);\n        }\n\n        // Heart beats\n        if let Some(heart_beat) = self.heart_beat.take() {\n            tokio::spawn(heart_beat);\n        }\n\n        // TODO: Reboot ceased service? 
Right now we just assume that it's\n        // normal shutdown, simply log it and let it go.\n        //\n        // let it go ~~~ , let it go ~~~\n        // i am one with the wind and sky\n        // let it go, let it go\n        // you'll never see me cry\n        // bla bla bla ~~~\n        if self.conn_tx.is_closed() {\n            info!(\"network: connection service closed\");\n        }\n\n        if self.mgr_tx.is_closed() {\n            info!(\"network: peer manager closed\");\n        }\n\n        // Process system error report\n        loop {\n            let sys_rx = &mut self.as_mut().sys_rx;\n            futures::pin_mut!(sys_rx);\n\n            let sys_err = service_ready!(sys_rx.poll_next(ctx));\n            error!(\"network: system error: {}\", sys_err);\n        }\n\n        Poll::Pending\n    }\n}\n"
  },
  {
    "path": "core/network/src/test/mock.rs",
    "content": "use std::sync::atomic::{AtomicUsize, Ordering};\nuse std::sync::Arc;\n\nuse parking_lot::Mutex;\nuse protocol::Bytes;\nuse tentacle::multiaddr::Multiaddr;\nuse tentacle::secio::{PublicKey, SecioKeyPair};\nuse tentacle::service::{SessionType, TargetProtocol};\nuse tentacle::{ProtocolId, SessionId};\n\n#[derive(Clone, Debug)]\npub struct SessionContext {\n    pub id:            SessionId,\n    pub address:       Multiaddr,\n    pub ty:            SessionType,\n    pub remote_pubkey: Option<PublicKey>,\n    pending_data_size: Arc<AtomicUsize>,\n}\n\nimpl SessionContext {\n    pub fn no_encrypted(id: SessionId, ty: SessionType) -> Self {\n        let address = \"/ip4/47.111.169.36/tcp/3000\".parse().expect(\"multiaddr\");\n\n        SessionContext {\n            id,\n            address,\n            ty,\n            remote_pubkey: None,\n            pending_data_size: Arc::new(AtomicUsize::new(0)),\n        }\n    }\n\n    pub fn random(id: SessionId, ty: SessionType) -> Self {\n        let keypair = SecioKeyPair::secp256k1_generated();\n        let pubkey = keypair.public_key();\n        let peer_id = pubkey.peer_id();\n\n        let address = {\n            let addr_str = format!(\"/ip4/47.111.169.36/tcp/3000/p2p/{}\", peer_id.to_base58());\n            addr_str.parse().expect(\"multiaddr\")\n        };\n\n        SessionContext {\n            id,\n            address,\n            ty,\n            remote_pubkey: Some(pubkey),\n            pending_data_size: Arc::new(AtomicUsize::new(0)),\n        }\n    }\n\n    pub fn make(id: SessionId, address: Multiaddr, ty: SessionType, pubkey: PublicKey) -> Self {\n        SessionContext {\n            id,\n            address,\n            ty,\n            remote_pubkey: Some(pubkey),\n            pending_data_size: Arc::new(AtomicUsize::new(0)),\n        }\n    }\n\n    pub fn pending_data_size(&self) -> usize {\n        self.pending_data_size.load(Ordering::SeqCst)\n    }\n\n    pub fn arced(self) -> 
Arc<SessionContext> {\n        Arc::new(self)\n    }\n}\n\nimpl From<Arc<tentacle::context::SessionContext>> for SessionContext {\n    fn from(ctx: Arc<tentacle::context::SessionContext>) -> Self {\n        SessionContext {\n            id:                ctx.id,\n            address:           ctx.address.to_owned(),\n            ty:                ctx.ty,\n            remote_pubkey:     ctx.remote_pubkey.clone(),\n            pending_data_size: Arc::new(AtomicUsize::new(ctx.pending_data_size())),\n        }\n    }\n}\n\n#[derive(Clone, PartialEq, Eq)]\npub enum ControlEvent {\n    SendMessage {\n        proto_id:   ProtocolId,\n        session_id: SessionId,\n        msg:        Bytes,\n    },\n    Disconnect {\n        session_id: SessionId,\n    },\n    OpenProtocols {\n        session_id:   SessionId,\n        target_proto: TargetProtocol,\n    },\n}\n\n#[derive(Clone)]\npub struct ServiceControl {\n    pub event: Arc<Mutex<Option<ControlEvent>>>,\n}\n\nimpl Default for ServiceControl {\n    fn default() -> Self {\n        ServiceControl {\n            event: Arc::new(Mutex::new(None)),\n        }\n    }\n}\n\nimpl ServiceControl {\n    pub fn event(&self) -> Option<ControlEvent> {\n        self.event.lock().clone()\n    }\n\n    pub fn quick_send_message_to(\n        &self,\n        session_id: SessionId,\n        proto_id: ProtocolId,\n        msg: Bytes,\n    ) -> Result<(), String> {\n        *self.event.lock() = Some(ControlEvent::SendMessage {\n            session_id,\n            proto_id,\n            msg,\n        });\n\n        Ok(())\n    }\n\n    pub fn disconnect(&self, session_id: SessionId) {\n        *self.event.lock() = Some(ControlEvent::Disconnect { session_id });\n    }\n\n    pub fn open_protocols(\n        &self,\n        session_id: SessionId,\n        target_proto: TargetProtocol,\n    ) -> Result<(), String> {\n        *self.event.lock() = Some(ControlEvent::OpenProtocols {\n            session_id,\n            target_proto,\n        
});\n\n        Ok(())\n    }\n}\n\npub struct ProtocolContext {\n    proto_id:    ProtocolId,\n    pub session: SessionContext,\n    pub control: ServiceControl,\n}\n\nimpl ProtocolContext {\n    pub fn make_no_encrypted(proto_id: ProtocolId, id: SessionId, ty: SessionType) -> Self {\n        ProtocolContext {\n            proto_id,\n            session: SessionContext::no_encrypted(id, ty),\n            control: ServiceControl::default(),\n        }\n    }\n\n    pub fn make(proto_id: ProtocolId, id: SessionId, ty: SessionType) -> Self {\n        ProtocolContext {\n            proto_id,\n            session: SessionContext::random(id, ty),\n            control: ServiceControl::default(),\n        }\n    }\n\n    pub fn proto_id(&self) -> ProtocolId {\n        self.proto_id\n    }\n\n    pub fn control(&self) -> &ServiceControl {\n        &self.control\n    }\n\n    pub fn disconnect(&self, session_id: SessionId) {\n        self.control.disconnect(session_id)\n    }\n}\n"
  },
  {
    "path": "core/network/src/test.rs",
    "content": "pub mod mock;\n"
  },
  {
    "path": "core/network/src/traits.rs",
    "content": "use std::borrow::Cow;\n\nuse protocol::traits::Context;\nuse protocol::Bytes;\nuse tentacle::multiaddr::Multiaddr;\nuse tentacle::secio::PeerId;\nuse tentacle::service::{ProtocolMeta, TargetProtocol};\nuse tentacle::SessionId;\n\nuse crate::common::ConnectedAddr;\nuse crate::error::{ErrorKind, NetworkError};\n\npub trait NetworkProtocol {\n    fn target() -> TargetProtocol;\n\n    fn metas(self) -> Vec<ProtocolMeta>;\n}\n\npub trait Compression {\n    fn compress(&self, bytes: Bytes) -> Result<Bytes, NetworkError>;\n    fn decompress(&self, bytes: Bytes) -> Result<Bytes, NetworkError>;\n}\n\npub trait NetworkContext: Sized {\n    fn session_id(&self) -> Result<SessionId, NetworkError>;\n    fn set_session_id(&mut self, sid: SessionId) -> Self;\n    fn remote_peer_id(&self) -> Result<PeerId, NetworkError>;\n    fn set_remote_peer_id(&mut self, pid: PeerId) -> Self;\n    // This connected address is for debug purpose, so soft failure is ok.\n    fn remote_connected_addr(&self) -> Option<ConnectedAddr>;\n    fn set_remote_connected_addr(&mut self, addr: ConnectedAddr) -> Self;\n    fn rpc_id(&self) -> Result<u64, NetworkError>;\n    fn set_rpc_id(&mut self, rid: u64) -> Self;\n    fn url(&self) -> Result<&str, ()>;\n    fn set_url(&mut self, url: String) -> Self;\n}\n\npub trait ListenExchangeManager {\n    fn listen_addr(&self) -> Multiaddr;\n    fn add_remote_listen_addr(&mut self, pid: PeerId, addr: Multiaddr);\n    fn misbehave(&mut self, sid: SessionId);\n}\n\npub trait SharedSessionBook {\n    fn all_sendable(&self) -> Vec<SessionId>;\n    fn all_blocked(&self) -> Vec<SessionId>;\n    fn refresh_blocked(&self);\n    fn peers(&self, pids: Vec<PeerId>) -> (Vec<SessionId>, Vec<PeerId>);\n    fn all(&self) -> Vec<SessionId>;\n    fn connected_addr(&self, sid: SessionId) -> Option<ConnectedAddr>;\n    fn pending_data_size(&self, sid: SessionId) -> usize;\n    fn allowlist(&self) -> Vec<PeerId>;\n    fn len(&self) -> usize;\n}\n\npub trait MultiaddrExt 
{\n    fn id_bytes(&self) -> Option<Cow<'_, [u8]>>;\n    fn has_id(&self) -> bool;\n    fn push_id(&mut self, peer_id: PeerId);\n}\n\n#[derive(Debug, Clone)]\nstruct CtxRpcId(u64);\n\nimpl NetworkContext for Context {\n    fn session_id(&self) -> Result<SessionId, NetworkError> {\n        self.get::<usize>(\"session_id\")\n            .map(|sid| SessionId::new(*sid))\n            .ok_or_else(|| ErrorKind::NoSessionId.into())\n    }\n\n    #[must_use]\n    fn set_session_id(&mut self, sid: SessionId) -> Self {\n        self.with_value::<usize>(\"session_id\", sid.value())\n    }\n\n    fn remote_peer_id(&self) -> Result<PeerId, NetworkError> {\n        self.get::<PeerId>(\"remote_peer_id\")\n            .map(ToOwned::to_owned)\n            .ok_or_else(|| ErrorKind::NoRemotePeerId.into())\n    }\n\n    #[must_use]\n    fn set_remote_peer_id(&mut self, pid: PeerId) -> Self {\n        self.with_value::<PeerId>(\"remote_peer_id\", pid)\n    }\n\n    fn remote_connected_addr(&self) -> Option<ConnectedAddr> {\n        self.get::<ConnectedAddr>(\"remote_connected_addr\")\n            .map(ToOwned::to_owned)\n    }\n\n    #[must_use]\n    fn set_remote_connected_addr(&mut self, addr: ConnectedAddr) -> Self {\n        self.with_value::<ConnectedAddr>(\"remote_connected_addr\", addr)\n    }\n\n    fn rpc_id(&self) -> Result<u64, NetworkError> {\n        self.get::<CtxRpcId>(\"rpc_id\")\n            .map(|ctx_rid| ctx_rid.0)\n            .ok_or_else(|| ErrorKind::NoRpcId.into())\n    }\n\n    #[must_use]\n    fn set_rpc_id(&mut self, rid: u64) -> Self {\n        self.with_value::<CtxRpcId>(\"rpc_id\", CtxRpcId(rid))\n    }\n\n    fn url(&self) -> Result<&str, ()> {\n        self.get::<String>(\"url\")\n            .map(String::as_str)\n            .ok_or_else(|| ())\n    }\n\n    #[must_use]\n    fn set_url(&mut self, url: String) -> Self {\n        self.with_value::<String>(\"url\", url)\n    }\n}\n"
  },
  {
    "path": "core/network/tests/common.rs",
    "content": "use std::net::{IpAddr, Ipv4Addr, SocketAddr};\n\nuse lazy_static::lazy_static;\nuse tentacle::secio::SecioKeyPair;\n\nuse core_network::{NetworkConfig, NetworkService};\n\npub const IP_ADDR: IpAddr = IpAddr::V4(Ipv4Addr::new(10, 137, 0, 25));\npub const BOOTSTRAP_PORT: u16 = 1337;\n\nlazy_static! {\n    pub static ref BOOTSTRAP_SECKEY: String = hex::encode(\"8\".repeat(32));\n    pub static ref BOOTSTRAP_PUBKEY: String = hex::encode(\n        SecioKeyPair::secp256k1_raw_key(\"8\".repeat(32))\n            .expect(\"seckey\")\n            .public_key()\n            .inner()\n    );\n    pub static ref BOOTSTRAP_ADDR: SocketAddr = SocketAddr::new(IP_ADDR, BOOTSTRAP_PORT);\n}\n\npub async fn setup_bootstrap() -> NetworkService {\n    let bootstrap_conf = NetworkConfig::new()\n        .secio_keypair(BOOTSTRAP_SECKEY.to_string())\n        .expect(\"bootstrap secio keypair\");\n\n    let mut bootstrap = NetworkService::new(bootstrap_conf);\n\n    bootstrap\n        .listen(*BOOTSTRAP_ADDR)\n        .await\n        .expect(\"bootstrap listen\");\n\n    bootstrap\n}\n\npub async fn setup_peer(port: u16) -> NetworkService {\n    let peer_conf = NetworkConfig::new()\n        .bootstraps(vec![(\n            BOOTSTRAP_PUBKEY.to_string(),\n            (*BOOTSTRAP_ADDR).to_string(),\n        )])\n        .expect(\"peer bootstraps\");\n\n    let mut peer = NetworkService::new(peer_conf);\n\n    peer.listen(SocketAddr::new(IP_ADDR, port))\n        .await\n        .expect(\"peer listen\");\n\n    peer\n}\n"
  },
  {
    "path": "core/network/tests/gossip_test.rs",
    "content": "mod common;\n\nuse std::{\n    sync::atomic::{AtomicBool, Ordering},\n    thread,\n    time::{Duration, SystemTime},\n};\n\nuse async_trait::async_trait;\nuse futures::{\n    channel::mpsc::{unbounded, UnboundedSender},\n    stream::StreamExt,\n};\n\nuse protocol::traits::{Context, Gossip, MessageHandler, Priority, TrustFeedback};\n\nconst END_TEST_BROADCAST: &str = \"/gossip/test/message\";\nconst TEST_MESSAGE: &str = \"spike lee action started\";\nconst BROADCAST_TEST_TIMEOUT: u64 = 30;\n\nenum TestResult {\n    TimeOut,\n    Success,\n}\n\nstruct NewsReader {\n    sent:    AtomicBool,\n    done_tx: UnboundedSender<()>,\n}\n\nimpl NewsReader {\n    pub fn new(done_tx: UnboundedSender<()>) -> Self {\n        NewsReader {\n            sent: AtomicBool::new(false),\n            done_tx,\n        }\n    }\n\n    pub fn sent(&self) -> bool {\n        self.sent.load(Ordering::SeqCst)\n    }\n\n    pub fn set_sent(&self) {\n        self.sent.store(true, Ordering::SeqCst);\n    }\n}\n\n#[async_trait]\nimpl MessageHandler for NewsReader {\n    type Message = String;\n\n    async fn process(&self, _ctx: Context, msg: Self::Message) -> TrustFeedback {\n        if !self.sent() {\n            assert_eq!(&msg, TEST_MESSAGE);\n            self.done_tx.unbounded_send(()).expect(\"news reader done\");\n            self.set_sent();\n        }\n        TrustFeedback::Neutral\n    }\n}\n\n// FIXME: sometimes timeout\n#[tokio::test]\n#[ignore]\nasync fn broadcast() {\n    env_logger::init();\n\n    let (test_tx, mut test_rx) = unbounded();\n\n    // Init bootstrap node\n    let mut bootstrap = common::setup_bootstrap().await;\n    let (done_tx, mut bootstrap_done) = unbounded();\n\n    bootstrap\n        .register_endpoint_handler(END_TEST_BROADCAST, NewsReader::new(done_tx))\n        .expect(\"bootstrap register news reader\");\n\n    tokio::spawn(bootstrap);\n\n    // Init peer alpha\n    let mut alpha = common::setup_peer(common::BOOTSTRAP_PORT + 1).await;\n    let 
(done_tx, mut alpha_done) = unbounded();\n\n    alpha\n        .register_endpoint_handler(END_TEST_BROADCAST, NewsReader::new(done_tx))\n        .expect(\"alpha register news reader\");\n\n    tokio::spawn(alpha);\n\n    // Init peer brova\n    let mut brova = common::setup_peer(common::BOOTSTRAP_PORT + 2).await;\n    let (done_tx, mut brova_done) = unbounded();\n\n    brova\n        .register_endpoint_handler(END_TEST_BROADCAST, NewsReader::new(done_tx))\n        .expect(\"brova register news reader\");\n\n    tokio::spawn(brova);\n\n    // Init peer charlie\n    let charlie = common::setup_peer(common::BOOTSTRAP_PORT + 3).await;\n    let broadcaster = charlie.handle();\n\n    tokio::spawn(charlie);\n\n    // Sleep a while for bootstrap phase, so peers can connect to each other\n    thread::sleep(Duration::from_secs(3));\n\n    // Loop broadcast test message until all peers receive test message\n    let test_tx_clone = test_tx.clone();\n    tokio::spawn(async move {\n        let ctx = Context::new();\n        let end = END_TEST_BROADCAST;\n        let msg = TEST_MESSAGE.to_owned();\n        let start = SystemTime::now();\n\n        loop {\n            if SystemTime::now()\n                .duration_since(start)\n                .expect(\"duration\")\n                .as_secs()\n                > BROADCAST_TEST_TIMEOUT\n            {\n                test_tx_clone\n                    .unbounded_send(TestResult::TimeOut)\n                    .expect(\"timeout send\");\n            }\n\n            broadcaster\n                .broadcast(ctx.clone(), end, msg.clone(), Priority::Normal)\n                .await\n                .expect(\"gossip broadcast\");\n\n            thread::sleep(Duration::from_secs(2));\n        }\n    });\n\n    tokio::spawn(async move {\n        bootstrap_done.next().await.expect(\"bootstrap done\");\n        alpha_done.next().await.expect(\"alpha done\");\n        brova_done.next().await.expect(\"brova done\");\n\n        test_tx\n         
   .unbounded_send(TestResult::Success)\n            .expect(\"success send\");\n    });\n\n    match test_rx.next().await {\n        Some(TestResult::TimeOut) => panic!(\"timeout\"),\n        Some(TestResult::Success) => (),\n        None => panic!(\"fail\"),\n    }\n}\n"
  },
  {
    "path": "core/run/Cargo.toml",
    "content": "[package]\nname = \"run\"\nversion = \"0.2.1\"\nauthors = [\"Muta Dev <muta@nervos.org>\"]\nedition = \"2018\"\n\n# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html\n\n[dependencies]\nbacktrace = \"0.3\"\nactix-rt = \"1.0\"\nderive_more = \"0.99\"\nfutures = \"0.3\"\nparking_lot = \"0.11\"\nserde = \"1.0\"\nserde_derive = \"1.0\"\nserde_json = \"1.0\"\nlog = \"0.4\"\nclap = \"2.33\"\nbytes = \"0.5\"\nhex = \"0.4\"\nrlp = \"0.4\"\ntoml = \"0.5\"\nfutures-timer=\"3.0\"\ncita_trie = \"2.0\"\ntokio = { version = \"0.2\", features = [\"macros\", \"sync\", \"rt-core\", \"rt-util\", \"signal\", \"time\"] }\n\nbyzantine = { path = \"../../byzantine\" }\ncommon-apm = { path = \"../../common/apm\" }\ncommon-config-parser = { path = \"../../common/config-parser\" }\ncommon-crypto = { path = \"../../common/crypto\" }\ncommon-logger = { path = \"../../common/logger\" }\nprotocol = { path = \"../../protocol\", package = \"muta-protocol\" }\ncore-api = { path = \"../../core/api\" }\ncore-storage = { path = \"../../core/storage\" }\ncore-mempool = { path = \"../../core/mempool\" }\ncore-network = { path = \"../../core/network\" }\ncore-consensus = { path = \"../../core/consensus\" }\n\nbinding-macro = { path = \"../../binding-macro\" }\nframework = { path = \"../../framework\" }\n"
  },
  {
    "path": "core/run/src/lib.rs",
    "content": "#![feature(async_closure)]\n#![allow(clippy::mutable_key_type)]\n\nuse derive_more::{Display, From};\n\nuse protocol::{ProtocolError, ProtocolErrorKind};\n\nuse std::collections::HashMap;\nuse std::convert::TryFrom;\nuse std::panic;\nuse std::sync::Arc;\nuse std::thread;\nuse std::time::Duration;\n\nuse backtrace::Backtrace;\nuse bytes::Bytes;\nuse futures::stream::StreamExt;\nuse futures::{future, lock::Mutex};\nuse futures_timer::Delay;\n#[cfg(unix)]\nuse tokio::signal::unix::{self as os_impl};\n\nuse common_config_parser::types::Config;\nuse common_crypto::{\n    BlsCommonReference, BlsPrivateKey, BlsPublicKey, PublicKey, Secp256k1, Secp256k1PrivateKey,\n    ToPublicKey, UncompressedPublicKey,\n};\nuse core_api::adapter::DefaultAPIAdapter;\nuse core_api::config::{GraphQLConfig, GraphQLTLS};\nuse core_consensus::fixed_types::{FixedBlock, FixedProof, FixedSignedTxs};\nuse core_consensus::message::{\n    ChokeMessageHandler, ProposalMessageHandler, PullBlockRpcHandler, PullProofRpcHandler,\n    PullTxsRpcHandler, QCMessageHandler, RemoteHeightMessageHandler, VoteMessageHandler,\n    BROADCAST_HEIGHT, END_GOSSIP_AGGREGATED_VOTE, END_GOSSIP_SIGNED_CHOKE,\n    END_GOSSIP_SIGNED_PROPOSAL, END_GOSSIP_SIGNED_VOTE, RPC_RESP_SYNC_PULL_BLOCK,\n    RPC_RESP_SYNC_PULL_PROOF, RPC_RESP_SYNC_PULL_TXS, RPC_SYNC_PULL_BLOCK, RPC_SYNC_PULL_PROOF,\n    RPC_SYNC_PULL_TXS,\n};\nuse core_consensus::status::{CurrentConsensusStatus, StatusAgent};\nuse core_consensus::util::OverlordCrypto;\nuse core_consensus::{\n    ConsensusWal, DurationConfig, Node, OverlordConsensus, OverlordConsensusAdapter,\n    OverlordSynchronization, RichBlock, SignedTxsWAL,\n};\nuse core_mempool::{\n    DefaultMemPoolAdapter, HashMemPool, MsgPushTxs, NewTxsHandler, PullTxsHandler,\n    END_GOSSIP_NEW_TXS, RPC_PULL_TXS, RPC_RESP_PULL_TXS, RPC_RESP_PULL_TXS_SYNC,\n};\nuse core_network::{NetworkConfig, NetworkService, PeerId, PeerIdExt};\nuse core_storage::{adapter::rocks::RocksAdapter, ImplStorage, 
StorageError};\nuse framework::binding::state::RocksTrieDB;\nuse framework::executor::{ServiceExecutor, ServiceExecutorFactory};\nuse protocol::traits::{\n    APIAdapter, CommonStorage, Context, MemPool, Network, NodeInfo, ServiceMapping, Storage,\n};\nuse protocol::types::{Address, Block, BlockHeader, Genesis, Hash, Metadata, Proof, Validator};\nuse protocol::{fixed_codec::FixedCodec, ProtocolResult};\n\nuse common_apm::muta_apm;\n\npub struct Muta<Mapping>\nwhere\n    Mapping: ServiceMapping,\n{\n    config:          Config,\n    genesis:         Genesis,\n    service_mapping: Arc<Mapping>,\n}\n\nimpl<Mapping: 'static + ServiceMapping> Muta<Mapping> {\n    pub fn new(config: Config, genesis: Genesis, service_mapping: Arc<Mapping>) -> Self {\n        Self {\n            config,\n            genesis,\n            service_mapping,\n        }\n    }\n\n    pub fn run(self) -> ProtocolResult<()> {\n        if let Some(apm_config) = &self.config.apm {\n            muta_apm::global_tracer_register(\n                &apm_config.service_name,\n                apm_config.tracing_address,\n                apm_config.tracing_batch_size,\n            );\n\n            log::info!(\"muta_apm start\");\n        }\n        // run muta\n        let mut rt = tokio::runtime::Runtime::new().expect(\"new tokio runtime\");\n        let local = tokio::task::LocalSet::new();\n        local.block_on(&mut rt, async move {\n            self.create_genesis().await?;\n\n            self.start().await\n        })?;\n\n        Ok(())\n    }\n\n    pub async fn create_genesis(&self) -> ProtocolResult<Block> {\n        log::info!(\"Genesis data: {:?}\", self.genesis);\n\n        let metadata_payload = self.genesis.get_payload(\"metadata\");\n\n        let hrp = Metadata::get_hrp_from_json(metadata_payload.to_string());\n\n        // Set bech32 address hrp\n        if !protocol::address_hrp_inited() {\n            protocol::init_address_hrp(hrp.into());\n        }\n\n        // Init Block db\n     
   let path_block = self.config.data_path_for_block();\n        let rocks_adapter = Arc::new(RocksAdapter::new(\n            path_block,\n            self.config.rocksdb.max_open_files,\n        )?);\n        let storage = Arc::new(ImplStorage::new(rocks_adapter));\n\n        match storage.get_latest_block(Context::new()).await {\n            Ok(genesis_block) => {\n                log::info!(\"The Genesis block has been initialized.\");\n                return Ok(genesis_block);\n            }\n            Err(e) => {\n                if !e.to_string().contains(\"GetNone\") {\n                    return Err(e);\n                }\n            }\n        };\n\n        // Init trie db\n        let path_state = self.config.data_path_for_state();\n        let trie_db = Arc::new(RocksTrieDB::new(\n            path_state,\n            self.config.executor.light,\n            self.config.rocksdb.max_open_files,\n            self.config.executor.triedb_cache_size,\n        )?);\n\n        let metadata: Metadata = serde_json::from_str(self.genesis.get_payload(\"metadata\"))\n            .expect(\"Decode metadata failed!\");\n\n        let validators: Vec<Validator> = metadata\n            .verifier_list\n            .iter()\n            .map(|v| Validator {\n                pub_key:        v.pub_key.decode(),\n                propose_weight: v.propose_weight,\n                vote_weight:    v.vote_weight,\n            })\n            .collect();\n\n        // Init genesis\n        let genesis_state_root = ServiceExecutor::create_genesis(\n            self.genesis.services.clone(),\n            Arc::clone(&trie_db),\n            Arc::clone(&storage),\n            Arc::clone(&self.service_mapping),\n        )?;\n\n        // Build genesis block.\n        let proposer = Address::from_hash(Hash::digest(protocol::address_hrp().as_str()))?;\n        let genesis_block_header = BlockHeader {\n            chain_id: metadata.chain_id.clone(),\n            height: 0,\n            
exec_height: 0,\n            prev_hash: Hash::from_empty(),\n            timestamp: self.genesis.timestamp,\n            order_root: Hash::from_empty(),\n            order_signed_transactions_hash: Hash::from_empty(),\n            confirm_root: vec![],\n            state_root: genesis_state_root,\n            receipt_root: vec![],\n            cycles_used: vec![],\n            proposer,\n            proof: Proof {\n                height:     0,\n                round:      0,\n                block_hash: Hash::from_empty(),\n                signature:  Bytes::new(),\n                bitmap:     Bytes::new(),\n            },\n            validator_version: 0,\n            validators,\n        };\n        let latest_proof = genesis_block_header.proof.clone();\n        let genesis_block = Block {\n            header:            genesis_block_header,\n            ordered_tx_hashes: vec![],\n        };\n        storage\n            .insert_block(Context::new(), genesis_block.clone())\n            .await?;\n        storage\n            .update_latest_proof(Context::new(), latest_proof)\n            .await?;\n\n        log::info!(\"The genesis block is created {:?}\", genesis_block);\n        Ok(genesis_block)\n    }\n\n    pub async fn start(self) -> ProtocolResult<()> {\n        log::info!(\"node starts\");\n        let config = self.config;\n        let service_mapping = self.service_mapping;\n        // Init Block db\n        let path_block = config.data_path_for_block();\n        log::info!(\"Data path for block: {:?}\", path_block);\n\n        let rocks_adapter = Arc::new(RocksAdapter::new(\n            path_block.clone(),\n            config.rocksdb.max_open_files,\n        )?);\n        let storage = Arc::new(ImplStorage::new(Arc::clone(&rocks_adapter)));\n\n        // Init network\n        let network_config = NetworkConfig::new()\n            .max_connections(config.network.max_connected_peers)?\n            
.same_ip_conn_limit(config.network.same_ip_conn_limit)\n            .inbound_conn_limit(config.network.inbound_conn_limit)?\n            .allowlist_only(config.network.allowlist_only)\n            .peer_trust_metric(\n                config.network.trust_interval_duration,\n                config.network.trust_max_history_duration,\n            )?\n            .peer_soft_ban(config.network.soft_ban_duration)\n            .peer_fatal_ban(config.network.fatal_ban_duration)\n            .rpc_timeout(config.network.rpc_timeout)\n            .ping_interval(config.network.ping_interval)\n            .selfcheck_interval(config.network.selfcheck_interval)\n            .max_wait_streams(config.network.max_wait_streams)\n            .max_frame_length(config.network.max_frame_length)\n            .send_buffer_size(config.network.send_buffer_size)\n            .write_timeout(config.network.write_timeout)\n            .recv_buffer_size(config.network.recv_buffer_size);\n\n        let network_privkey = config.privkey.as_string_trim0x();\n\n        let mut bootstrap_pairs = vec![];\n        if let Some(bootstrap) = &config.network.bootstraps {\n            for bootstrap in bootstrap.iter() {\n                bootstrap_pairs.push((bootstrap.peer_id.to_owned(), bootstrap.address.to_owned()));\n            }\n        }\n\n        let allowlist = config.network.allowlist.clone().unwrap_or_default();\n        let network_config = network_config\n            .bootstraps(bootstrap_pairs)?\n            .allowlist(allowlist)?\n            .secio_keypair(network_privkey)?;\n\n        let mut network_service = NetworkService::new(network_config);\n        network_service\n            .listen(config.network.listening_address)\n            .await?;\n\n        // Init trie db\n        let path_state = config.data_path_for_state();\n        let trie_db = Arc::new(RocksTrieDB::new(\n            path_state,\n            config.executor.light,\n            config.rocksdb.max_open_files,\n          
  config.executor.triedb_cache_size,\n        )?);\n\n        // Init full transactions wal\n        let txs_wal_path = config.data_path_for_txs_wal().to_str().unwrap().to_string();\n        let txs_wal = Arc::new(SignedTxsWAL::new(txs_wal_path));\n\n        // Init consensus wal\n        let consensus_wal_path = config\n            .data_path_for_consensus_wal()\n            .to_str()\n            .unwrap()\n            .to_string();\n        let consensus_wal = Arc::new(ConsensusWal::new(consensus_wal_path));\n\n        // Recover signed transactions of current height\n        let current_block = storage.get_latest_block(Context::new()).await?;\n        let current_stxs = txs_wal.load_by_height(current_block.header.height + 1);\n        log::info!(\n            \"Recover {} tx of height {} from wal\",\n            current_stxs.len(),\n            current_block.header.height + 1\n        );\n\n        // Init mempool\n        let mempool_adapter =\n            DefaultMemPoolAdapter::<ServiceExecutorFactory, Secp256k1, _, _, _, _>::new(\n                network_service.handle(),\n                Arc::clone(&storage),\n                Arc::clone(&trie_db),\n                Arc::clone(&service_mapping),\n                config.mempool.broadcast_txs_size,\n                config.mempool.broadcast_txs_interval,\n            );\n        let mempool = Arc::new(\n            HashMemPool::new(\n                config.mempool.pool_size as usize,\n                mempool_adapter,\n                current_stxs,\n            )\n            .await,\n        );\n\n        let monitor_mempool = Arc::clone(&mempool);\n        tokio::spawn(async move {\n            let interval = Duration::from_millis(1000);\n            loop {\n                Delay::new(interval).await;\n                common_apm::metrics::mempool::MEMPOOL_LEN_GAUGE\n                    .set(monitor_mempool.get_tx_cache().len().await as i64);\n            }\n        });\n\n        // self private key\n        
let hex_privkey =\n            hex::decode(config.privkey.as_string_trim0x()).map_err(MainError::FromHex)?;\n        let my_privkey =\n            Secp256k1PrivateKey::try_from(hex_privkey.as_ref()).map_err(MainError::Crypto)?;\n        let my_pubkey = my_privkey.pub_key();\n        let my_address = Address::from_pubkey_bytes(my_pubkey.to_uncompressed_bytes())?;\n\n        // Get metadata\n        let api_adapter = DefaultAPIAdapter::<ServiceExecutorFactory, _, _, _, _>::new(\n            Arc::clone(&mempool),\n            Arc::clone(&storage),\n            Arc::clone(&trie_db),\n            Arc::clone(&service_mapping),\n        );\n\n        let exec_resp = api_adapter\n            .query_service(\n                Context::new(),\n                current_block.header.height,\n                u64::max_value(),\n                1,\n                my_address.clone(),\n                \"metadata\".to_string(),\n                \"get_metadata\".to_string(),\n                \"\".to_string(),\n            )\n            .await?;\n\n        let metadata: Metadata =\n            serde_json::from_str(&exec_resp.succeed_data).expect(\"Decode metadata failed!\");\n\n        // Set bech32 address hrp\n        if !protocol::address_hrp_inited() {\n            protocol::init_address_hrp(metadata.bech32_address_hrp.into());\n        }\n\n        // set chain id in network\n        network_service.set_chain_id(metadata.chain_id.clone());\n\n        // set args in mempool\n        mempool.set_args(\n            metadata.timeout_gap,\n            metadata.cycles_limit,\n            metadata.max_tx_size,\n        );\n\n        // register broadcast new transaction\n        network_service.register_endpoint_handler(\n            END_GOSSIP_NEW_TXS,\n            NewTxsHandler::new(Arc::clone(&mempool)),\n        )?;\n\n        // register pull txs from other node\n        network_service.register_endpoint_handler(\n            RPC_PULL_TXS,\n            
PullTxsHandler::new(Arc::new(network_service.handle()), Arc::clone(&mempool)),\n        )?;\n        network_service.register_rpc_response::<MsgPushTxs>(RPC_RESP_PULL_TXS)?;\n\n        network_service.register_rpc_response::<MsgPushTxs>(RPC_RESP_PULL_TXS_SYNC)?;\n\n        // Init Consensus\n        let validators: Vec<Validator> = metadata\n            .verifier_list\n            .iter()\n            .map(|v| Validator {\n                pub_key:        v.pub_key.decode(),\n                propose_weight: v.propose_weight,\n                vote_weight:    v.vote_weight,\n            })\n            .collect();\n\n        let node_info = NodeInfo {\n            chain_id:     metadata.chain_id.clone(),\n            self_address: my_address.clone(),\n            self_pub_key: my_pubkey.to_bytes(),\n        };\n        let current_header = &current_block.header;\n        let block_hash = Hash::digest(current_block.header.encode_fixed()?);\n        let current_height = current_block.header.height;\n        let exec_height = current_block.header.exec_height;\n        let proof = if let Ok(temp) = storage.get_latest_proof(Context::new()).await {\n            temp\n        } else {\n            current_header.proof.clone()\n        };\n\n        let current_consensus_status = CurrentConsensusStatus {\n            cycles_price:                metadata.cycles_price,\n            cycles_limit:                metadata.cycles_limit,\n            latest_committed_height:     current_block.header.height,\n            exec_height:                 current_block.header.exec_height,\n            current_hash:                block_hash,\n            latest_committed_state_root: current_header.state_root.clone(),\n            list_confirm_root:           vec![],\n            list_state_root:             vec![],\n            list_receipt_root:           vec![],\n            list_cycles_used:            vec![],\n            current_proof:               proof,\n            validators:    
              validators.clone(),\n            consensus_interval:          metadata.interval,\n            propose_ratio:               metadata.propose_ratio,\n            prevote_ratio:               metadata.prevote_ratio,\n            precommit_ratio:             metadata.precommit_ratio,\n            brake_ratio:                 metadata.brake_ratio,\n            max_tx_size:                 metadata.max_tx_size,\n            tx_num_limit:                metadata.tx_num_limit,\n        };\n\n        let consensus_interval = current_consensus_status.consensus_interval;\n        let status_agent = StatusAgent::new(current_consensus_status);\n\n        let mut bls_pub_keys = HashMap::new();\n        for validator_extend in metadata.verifier_list.iter() {\n            let address = validator_extend.pub_key.decode();\n            let hex_pubkey = hex::decode(validator_extend.bls_pub_key.as_string_trim0x())\n                .map_err(MainError::FromHex)?;\n            let pub_key = BlsPublicKey::try_from(hex_pubkey.as_ref()).map_err(MainError::Crypto)?;\n            bls_pub_keys.insert(address, pub_key);\n        }\n\n        let mut priv_key = Vec::new();\n        priv_key.extend_from_slice(&[0u8; 16]);\n        let mut tmp = hex::decode(config.privkey.as_string_trim0x()).unwrap();\n        priv_key.append(&mut tmp);\n        let bls_priv_key = BlsPrivateKey::try_from(priv_key.as_ref()).map_err(MainError::Crypto)?;\n\n        let hex_common_ref =\n            hex::decode(metadata.common_ref.as_string_trim0x()).map_err(MainError::FromHex)?;\n        let common_ref: BlsCommonReference = std::str::from_utf8(hex_common_ref.as_ref())\n            .map_err(MainError::Utf8)?\n            .into();\n\n        let crypto = Arc::new(OverlordCrypto::new(bls_priv_key, bls_pub_keys, common_ref));\n\n        let mut consensus_adapter =\n            OverlordConsensusAdapter::<ServiceExecutorFactory, _, _, _, _, _>::new(\n                Arc::new(network_service.handle()),\n        
        Arc::clone(&mempool),\n                Arc::clone(&storage),\n                Arc::clone(&trie_db),\n                Arc::clone(&service_mapping),\n                status_agent.clone(),\n                Arc::clone(&crypto),\n                config.consensus.overlord_gap,\n            )?;\n\n        let exec_demon = consensus_adapter.take_exec_demon();\n        let consensus_adapter = Arc::new(consensus_adapter);\n\n        let lock = Arc::new(Mutex::new(()));\n\n        let overlord_consensus = Arc::new(OverlordConsensus::new(\n            status_agent.clone(),\n            node_info,\n            Arc::clone(&crypto),\n            Arc::clone(&txs_wal),\n            Arc::clone(&consensus_adapter),\n            Arc::clone(&lock),\n            Arc::clone(&consensus_wal),\n        ));\n\n        consensus_adapter.set_overlord_handler(overlord_consensus.get_overlord_handler());\n\n        let synchronization = Arc::new(OverlordSynchronization::<_>::new(\n            config.consensus.sync_txs_chunk_size,\n            consensus_adapter,\n            status_agent.clone(),\n            crypto,\n            lock,\n        ));\n\n        let peer_ids = metadata\n            .verifier_list\n            .iter()\n            .map(|v| PeerId::from_pubkey_bytes(v.pub_key.decode()).map(PeerIdExt::into_bytes_ext))\n            .collect::<Result<Vec<_>, _>>()?;\n\n        network_service\n            .handle()\n            .tag_consensus(Context::new(), peer_ids)?;\n\n        // Re-execute blocks from exec_height + 1 to current_height to restore the\n        // lost current status.\n        log::info!(\"Re-execute from {} to {}\", exec_height + 1, current_height);\n        for height in exec_height + 1..=current_height {\n            let block = storage\n                .get_block(Context::new(), height)\n                .await?\n                .ok_or(StorageError::GetNone)?;\n            let txs = storage\n                .get_transactions(\n                    
Context::new(),\n                    block.header.height,\n                    &block.ordered_tx_hashes,\n                )\n                .await?\n                .into_iter()\n                .filter_map(|opt_stx| opt_stx)\n                .collect::<Vec<_>>();\n            if txs.len() != block.ordered_tx_hashes.len() {\n                return Err(StorageError::GetNone.into());\n            }\n            let rich_block = RichBlock { block, txs };\n            let _ = synchronization\n                .exec_block(Context::new(), rich_block, status_agent.clone())\n                .await?;\n        }\n\n        // register consensus\n        network_service.register_endpoint_handler(\n            END_GOSSIP_SIGNED_PROPOSAL,\n            ProposalMessageHandler::new(Arc::clone(&overlord_consensus)),\n        )?;\n        network_service.register_endpoint_handler(\n            END_GOSSIP_AGGREGATED_VOTE,\n            QCMessageHandler::new(Arc::clone(&overlord_consensus)),\n        )?;\n        network_service.register_endpoint_handler(\n            END_GOSSIP_SIGNED_VOTE,\n            VoteMessageHandler::new(Arc::clone(&overlord_consensus)),\n        )?;\n        network_service.register_endpoint_handler(\n            END_GOSSIP_SIGNED_CHOKE,\n            ChokeMessageHandler::new(Arc::clone(&overlord_consensus)),\n        )?;\n        network_service.register_endpoint_handler(\n            BROADCAST_HEIGHT,\n            RemoteHeightMessageHandler::new(Arc::clone(&synchronization)),\n        )?;\n        network_service.register_endpoint_handler(\n            RPC_SYNC_PULL_BLOCK,\n            PullBlockRpcHandler::new(Arc::new(network_service.handle()), Arc::clone(&storage)),\n        )?;\n\n        network_service.register_endpoint_handler(\n            RPC_SYNC_PULL_PROOF,\n            PullProofRpcHandler::new(Arc::new(network_service.handle()), Arc::clone(&storage)),\n        )?;\n\n        network_service.register_endpoint_handler(\n            
RPC_SYNC_PULL_TXS,\n            PullTxsRpcHandler::new(Arc::new(network_service.handle()), Arc::clone(&storage)),\n        )?;\n        network_service.register_rpc_response::<FixedBlock>(RPC_RESP_SYNC_PULL_BLOCK)?;\n        network_service.register_rpc_response::<FixedProof>(RPC_RESP_SYNC_PULL_PROOF)?;\n        network_service.register_rpc_response::<FixedSignedTxs>(RPC_RESP_SYNC_PULL_TXS)?;\n\n        // Run network\n        tokio::spawn(network_service);\n\n        // Run sync\n        tokio::spawn(async move {\n            if let Err(e) = synchronization.polling_broadcast().await {\n                log::error!(\"synchronization: {:?}\", e);\n            }\n        });\n\n        // Run consensus\n        let authority_list = validators\n            .iter()\n            .map(|v| Node {\n                address:        v.pub_key.clone(),\n                propose_weight: v.propose_weight,\n                vote_weight:    v.vote_weight,\n            })\n            .collect::<Vec<_>>();\n\n        let timer_config = DurationConfig {\n            propose_ratio:   metadata.propose_ratio,\n            prevote_ratio:   metadata.prevote_ratio,\n            precommit_ratio: metadata.precommit_ratio,\n            brake_ratio:     metadata.brake_ratio,\n        };\n\n        tokio::spawn(async move {\n            if let Err(e) = overlord_consensus\n                .run(\n                    current_height,\n                    consensus_interval,\n                    authority_list,\n                    Some(timer_config),\n                )\n                .await\n            {\n                log::error!(\"muta-consensus: {:?} error\", e);\n            }\n        });\n\n        let (abortable_demon, abort_handle) = future::abortable(exec_demon.run());\n        let exec_handler = tokio::task::spawn_local(abortable_demon);\n\n        // Init graphql\n        let mut graphql_config = GraphQLConfig::default();\n        graphql_config.listening_address = 
config.graphql.listening_address;\n        graphql_config.graphql_uri = config.graphql.graphql_uri.clone();\n        graphql_config.graphiql_uri = config.graphql.graphiql_uri.clone();\n        if config.graphql.workers != 0 {\n            graphql_config.workers = config.graphql.workers;\n        }\n        if config.graphql.maxconn != 0 {\n            graphql_config.maxconn = config.graphql.maxconn;\n        }\n        if config.graphql.max_payload_size != 0 {\n            graphql_config.max_payload_size = config.graphql.max_payload_size;\n        }\n        if let Some(tls) = config.graphql.tls {\n            graphql_config.tls = Some(GraphQLTLS {\n                private_key_file_path:       tls.private_key_file_path,\n                certificate_chain_file_path: tls.certificate_chain_file_path,\n            })\n        }\n        graphql_config.enable_dump_profile = config.graphql.enable_dump_profile.unwrap_or(false);\n\n        tokio::task::spawn_local(async move {\n            let local = tokio::task::LocalSet::new();\n            let actix_rt = actix_rt::System::run_in_tokio(\"muta-graphql\", &local);\n            tokio::task::spawn_local(actix_rt);\n\n            core_api::start_graphql(graphql_config, api_adapter).await;\n        });\n\n        let ctrl_c_handler = tokio::task::spawn_local(async {\n            #[cfg(windows)]\n            let _ = tokio::signal::ctrl_c().await;\n            #[cfg(unix)]\n            {\n                let mut sigtun_int = os_impl::signal(os_impl::SignalKind::interrupt()).unwrap();\n                let mut sigtun_term = os_impl::signal(os_impl::SignalKind::terminate()).unwrap();\n                tokio::select! 
{\n                    _ = sigtun_int.recv() => {}\n                    _ = sigtun_term.recv() => {}\n                };\n            }\n        });\n\n        // register channel of panic\n        let (panic_sender, mut panic_receiver) = tokio::sync::mpsc::channel::<()>(1);\n\n        panic::set_hook(Box::new(move |info: &panic::PanicInfo| {\n            let mut panic_sender = panic_sender.clone();\n            Self::panic_log(info);\n            panic_sender.try_send(()).expect(\"panic_receiver is dropped\");\n        }));\n\n        tokio::select! {\n            _ = exec_handler =>{log::error!(\"exec_daemon is down, quit.\")},\n            _ = ctrl_c_handler =>{log::info!(\"ctrl + c is pressed, quit.\")},\n            _ = panic_receiver.next() =>{log::info!(\"child thread panicked, quit.\")},\n        };\n        abort_handle.abort();\n        Ok(())\n    }\n\n    fn panic_log(info: &panic::PanicInfo) {\n        let backtrace = Backtrace::new();\n        let thread = thread::current();\n        let name = thread.name().unwrap_or(\"unnamed\");\n        let location = info.location().unwrap(); // The current implementation always returns Some\n        let msg = match info.payload().downcast_ref::<&'static str>() {\n            Some(s) => *s,\n            None => match info.payload().downcast_ref::<String>() {\n                Some(s) => &*s,\n                None => \"Box<Any>\",\n            },\n        };\n        log::error!(\n            target: \"panic\", \"thread '{}' panicked at '{}': {}:{} {:?}\",\n            name,\n            msg,\n            location.file(),\n            location.line(),\n            backtrace,\n        );\n    }\n}\n\n#[derive(Debug, Display, From)]\npub enum MainError {\n    #[display(fmt = \"The muta configuration read failed {:?}\", _0)]\n    ConfigParse(common_config_parser::ParseError),\n\n    #[display(fmt = \"{:?}\", _0)]\n    Io(std::io::Error),\n\n    #[display(fmt = \"Toml fails to parse genesis {:?}\", _0)]\n    
GenesisTomlDe(toml::de::Error),\n\n    #[display(fmt = \"hex error {:?}\", _0)]\n    FromHex(hex::FromHexError),\n\n    #[display(fmt = \"crypto error {:?}\", _0)]\n    Crypto(common_crypto::Error),\n\n    #[display(fmt = \"{:?}\", _0)]\n    Utf8(std::str::Utf8Error),\n\n    #[display(fmt = \"{:?}\", _0)]\n    JSONParse(serde_json::error::Error),\n\n    #[display(fmt = \"other error {:?}\", _0)]\n    Other(String),\n}\n\nimpl std::error::Error for MainError {}\n\nimpl From<MainError> for ProtocolError {\n    fn from(error: MainError) -> ProtocolError {\n        ProtocolError::new(ProtocolErrorKind::Main, Box::new(error))\n    }\n}\n"
  },
  {
    "path": "core/storage/Cargo.toml",
    "content": "[package]\nname = \"core-storage\"\nversion = \"0.2.1\"\nauthors = [\"Muta Dev <muta@nervos.org>\"]\nedition = \"2018\"\nrepository = \"https://github.com/nervosnetwork/muta\"\n\n# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html\n\n[dependencies]\ncommon-apm = { path = \"../../common/apm\" }\nprotocol = { path = \"../../protocol\", package = \"muta-protocol\" }\n\nfutures = \"0.3\"\nderive_more = \"0.15\"\nlazy_static = \"1.4\"\nparking_lot = \"0.11\"\nasync-trait = \"0.1\"\nrocksdb = \"0.14\"\ntokio = \"0.2\"\narc-swap = \"0.4\"\n\n[dev-dependencies]\nnum-traits = \"0.2\"\nrand = \"0.6\"\nhex = \"0.4\"\ntokio = { version = \"0.2\", features = [\"macros\", \"rt-core\", \"rt-util\", \"signal\", \"time\"]}\n"
  },
  {
    "path": "core/storage/examples/bench.rs",
    "content": "use core_storage::{adapter::rocks::RocksAdapter, CommonHashKey, ImplStorage};\nuse protocol::{\n    traits::{Context, Storage},\n    types::{Bytes, Hash, RawTransaction, SignedTransaction, TransactionRequest},\n};\n\nuse std::{\n    fs::OpenOptions,\n    io::prelude::*,\n    io::{BufReader, LineWriter},\n    path::PathBuf,\n    str::FromStr,\n    sync::Arc,\n    time::Instant,\n};\n\nconst NUMBER_OF_TXS_PER_ROUND: usize = 15_000; // 1.5W, 2.5M\nconst ADDRESS_STR: &str = \"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\";\n\n#[tokio::main]\npub async fn main() {\n    if std::env::args().nth(1) == Some(\"generate\".to_string()) {\n        println!(\"generate 1.5W txs\");\n\n        let mut height = 1u64;\n        let mut count = std::env::args()\n            .nth(2)\n            .expect(\"number of round(1.5W txs per round, 2.5M)\")\n            .parse::<u64>()\n            .expect(\"number of round(1.5W txs per round, 2.5M)\");\n\n        let db_path = std::env::args().nth(3).expect(\"db patch\");\n        let max_fd = std::env::args()\n            .nth(4)\n            .expect(\"max open files for rocksdb\")\n            .parse::<i32>()\n            .expect(\"max open files for rocksdb\");\n\n        let mut hash_keys_file = {\n            let mut file_path = PathBuf::from(db_path.clone());\n            file_path.push(\"hash_keys\");\n\n            let file = OpenOptions::new()\n                .write(true)\n                .append(true)\n                .create_new(true)\n                .open(file_path)\n                .expect(\"tx hashes file\");\n\n            LineWriter::new(file)\n        };\n\n        let adapter = RocksAdapter::new(db_path, max_fd).expect(\"create adapter\");\n        let storage = ImplStorage::new(Arc::new(adapter));\n\n        let mut hash_keys = Vec::with_capacity(NUMBER_OF_TXS_PER_ROUND);\n\n        while count > 0 {\n            let stxs = (0..NUMBER_OF_TXS_PER_ROUND)\n                .map(|_| {\n                    let 
bytes = get_random_bytes();\n                    let hash = Hash::digest(bytes);\n\n                    hash_keys.push(CommonHashKey::new(height, hash.clone()));\n                    mock_signed_tx(hash)\n                })\n                .collect::<Vec<_>>();\n\n            for key in hash_keys.drain(..) {\n                let encoded_key = key.to_string();\n                hash_keys_file\n                    .write_all(encoded_key.as_bytes())\n                    .expect(\"write tx hash\");\n                hash_keys_file.write_all(b\"\\n\").expect(\"write line\");\n            }\n\n            storage\n                .insert_transactions(Context::new(), height, stxs)\n                .await\n                .expect(\"insert transaction\");\n\n            count -= 1;\n            height += 1;\n        }\n\n        println!(\"insert complete, height {}\", height - 1);\n    } else if std::env::args().nth(1) == Some(\"fetch\".to_string()) {\n        let db_path = std::env::args().nth(2).expect(\"db path\");\n        let max_fd = std::env::args()\n            .nth(3)\n            .expect(\"max open files for rocksdb\")\n            .parse::<i32>()\n            .expect(\"max open files for rocksdb\");\n        let height = std::env::args()\n            .nth(4)\n            .expect(\"height\")\n            .parse::<u64>()\n            .expect(\"height\");\n\n        let hash_keys_file = {\n            let mut file_path = PathBuf::from(db_path.clone());\n            file_path.push(\"hash_keys\");\n\n            let file = OpenOptions::new()\n                .read(true)\n                .open(file_path)\n                .expect(\"tx hashes file\");\n\n            BufReader::new(file).lines()\n        };\n\n        let hashes = hash_keys_file\n            .skip((height - 1) as usize * NUMBER_OF_TXS_PER_ROUND)\n            .take(NUMBER_OF_TXS_PER_ROUND)\n            .map(|l| {\n                let key = CommonHashKey::from_str(&l.expect(\"read 
line\")).expect(\"key\");\n                key.hash().to_owned()\n            })\n            .collect::<Vec<_>>();\n\n        let adapter = RocksAdapter::new(db_path, max_fd).expect(\"create adapter\");\n        let storage = ImplStorage::new(Arc::new(adapter));\n\n        let now = Instant::now();\n        let stxs = storage\n            .get_transactions(Context::new(), height, &hashes)\n            .await\n            .expect(\"fetch\");\n\n        println!(\"total {}, fetch {}\", NUMBER_OF_TXS_PER_ROUND, stxs.len());\n        println!(\"fetch cost {} ms\", now.elapsed().as_millis());\n    } else {\n        println!(\n            r#\"\n        Usage:\n            generate [round] [db path] [fd]\n\n            fetch [db path] [fd] [height]\n        \"#\n        );\n    }\n}\n\nfn get_random_bytes() -> Bytes {\n    let mut buf = [0u8; 32];\n    for u in &mut buf {\n        *u = rand::random::<u8>();\n    }\n\n    Bytes::copy_from_slice(&buf)\n}\n\nfn mock_signed_tx(tx_hash: Hash) -> SignedTransaction {\n    let nonce = Hash::digest(Bytes::from(\"XXXX\"));\n\n    let request = TransactionRequest {\n        service_name: \"test\".to_owned(),\n        method:       \"test\".to_owned(),\n        payload:      \"test\".to_owned(),\n    };\n\n    let raw = RawTransaction {\n        chain_id: nonce.clone(),\n        nonce,\n        timeout: 10,\n        cycles_limit: 10,\n        cycles_price: 1,\n        request,\n        sender: ADDRESS_STR.parse().unwrap(),\n    };\n\n    SignedTransaction {\n        raw,\n        tx_hash,\n        pubkey: Default::default(),\n        signature: Default::default(),\n    }\n}\n"
  },
  {
    "path": "core/storage/src/adapter/memory.rs",
    "content": "use std::collections::{hash_map, HashMap};\nuse std::error::Error;\nuse std::marker::PhantomData;\nuse std::sync::Arc;\n\nuse async_trait::async_trait;\nuse derive_more::{Display, From};\nuse parking_lot::RwLock;\n\nuse protocol::codec::ProtocolCodecSync;\nuse protocol::traits::{\n    IntoIteratorByRef, StorageAdapter, StorageBatchModify, StorageIterator, StorageSchema,\n};\nuse protocol::Bytes;\nuse protocol::{ProtocolError, ProtocolErrorKind, ProtocolResult};\n\ntype Category = HashMap<Vec<u8>, Vec<u8>>;\n\n#[derive(Debug)]\npub struct MemoryAdapter {\n    db: Arc<RwLock<HashMap<String, Category>>>,\n}\n\nimpl MemoryAdapter {\n    pub fn new() -> Self {\n        MemoryAdapter {\n            db: Arc::new(RwLock::new(HashMap::new())),\n        }\n    }\n}\n\nimpl Default for MemoryAdapter {\n    fn default() -> Self {\n        MemoryAdapter {\n            db: Arc::new(RwLock::new(HashMap::new())),\n        }\n    }\n}\n\npub struct MemoryIterator<'a, S: StorageSchema> {\n    inner: hash_map::Iter<'a, Vec<u8>, Vec<u8>>,\n    pin_s: PhantomData<S>,\n}\n\nimpl<'a, S: StorageSchema> Iterator for MemoryIterator<'a, S> {\n    type Item = ProtocolResult<(<S as StorageSchema>::Key, <S as StorageSchema>::Value)>;\n\n    fn next(&mut self) -> Option<Self::Item> {\n        let kv_decode = |(k_bytes, v_bytes): (&Vec<u8>, &Vec<u8>)| -> ProtocolResult<_> {\n            let k_bytes = Bytes::copy_from_slice(k_bytes.as_ref());\n            let key = <_>::decode_sync(k_bytes)?;\n\n            let v_bytes = Bytes::copy_from_slice(&v_bytes.as_ref());\n            let val = <_>::decode_sync(v_bytes)?;\n\n            Ok((key, val))\n        };\n\n        self.inner.next().map(kv_decode)\n    }\n}\n\npub struct MemoryIntoIterator<'a, S: StorageSchema> {\n    inner: parking_lot::RwLockReadGuard<'a, HashMap<String, Category>>,\n    pin_s: PhantomData<S>,\n}\n\nimpl<'a, 'b: 'a, S: StorageSchema> IntoIterator for &'b MemoryIntoIterator<'a, S> {\n    type IntoIter = 
StorageIterator<'a, S>;\n    type Item = ProtocolResult<(<S as StorageSchema>::Key, <S as StorageSchema>::Value)>;\n\n    fn into_iter(self) -> Self::IntoIter {\n        Box::new(MemoryIterator {\n            inner: self\n                .inner\n                .get(&S::category().to_string())\n                .expect(\"impossible, already ensure we have category in prepare_iter\")\n                .iter(),\n            pin_s: PhantomData::<S>,\n        })\n    }\n}\n\nimpl<'c, S: StorageSchema> IntoIteratorByRef<S> for MemoryIntoIterator<'c, S> {\n    fn ref_to_iter<'a, 'b: 'a>(&'b self) -> StorageIterator<'a, S> {\n        self.into_iter()\n    }\n}\n\n#[async_trait]\nimpl StorageAdapter for MemoryAdapter {\n    async fn insert<S: StorageSchema>(\n        &self,\n        key: <S as StorageSchema>::Key,\n        val: <S as StorageSchema>::Value,\n    ) -> ProtocolResult<()> {\n        let key = key.encode_sync()?.to_vec();\n        let val = val.encode_sync()?.to_vec();\n\n        let mut db = self.db.write();\n        let db = db\n            .entry(S::category().to_string())\n            .or_insert_with(HashMap::new);\n\n        db.insert(key, val);\n\n        Ok(())\n    }\n\n    async fn get<S: StorageSchema>(\n        &self,\n        key: <S as StorageSchema>::Key,\n    ) -> ProtocolResult<Option<<S as StorageSchema>::Value>> {\n        let key = key.encode_sync()?;\n\n        let mut db = self.db.write();\n        let db = db\n            .entry(S::category().to_string())\n            .or_insert_with(HashMap::new);\n\n        let opt_bytes = db.get(&key.to_vec()).cloned();\n\n        if let Some(bytes) = opt_bytes {\n            let val = <_>::decode_sync(Bytes::copy_from_slice(&bytes))?;\n\n            Ok(Some(val))\n        } else {\n            Ok(None)\n        }\n    }\n\n    async fn remove<S: StorageSchema>(&self, key: <S as StorageSchema>::Key) -> ProtocolResult<()> {\n        let key = key.encode_sync()?.to_vec();\n\n        let mut db = 
self.db.write();\n        let db = db\n            .entry(S::category().to_string())\n            .or_insert_with(HashMap::new);\n\n        db.remove(&key);\n\n        Ok(())\n    }\n\n    async fn contains<S: StorageSchema>(\n        &self,\n        key: <S as StorageSchema>::Key,\n    ) -> ProtocolResult<bool> {\n        let key = key.encode_sync()?.to_vec();\n\n        let mut db = self.db.write();\n        let db = db\n            .entry(S::category().to_string())\n            .or_insert_with(HashMap::new);\n\n        Ok(db.get(&key).is_some())\n    }\n\n    async fn batch_modify<S: StorageSchema>(\n        &self,\n        keys: Vec<<S as StorageSchema>::Key>,\n        vals: Vec<StorageBatchModify<S>>,\n    ) -> ProtocolResult<()> {\n        if keys.len() != vals.len() {\n            return Err(MemoryAdapterError::BatchLengthMismatch.into());\n        }\n\n        let mut pairs: Vec<(Bytes, Option<Bytes>)> = Vec::with_capacity(keys.len());\n\n        for (key, value) in keys.into_iter().zip(vals.into_iter()) {\n            let key = key.encode_sync()?;\n\n            let value = match value {\n                StorageBatchModify::Insert(value) => Some(value.encode_sync()?),\n                StorageBatchModify::Remove => None,\n            };\n\n            pairs.push((key, value))\n        }\n\n        let mut db = self.db.write();\n        let db = db\n            .entry(S::category().to_string())\n            .or_insert_with(HashMap::new);\n\n        for (key, value) in pairs.into_iter() {\n            match value {\n                Some(value) => db.insert(key.to_vec(), value.to_vec()),\n                None => db.remove(&key.to_vec()),\n            };\n        }\n\n        Ok(())\n    }\n\n    fn prepare_iter<'a, 'b: 'a, S: StorageSchema + 'static, P: AsRef<[u8]> + 'a>(\n        &'b self,\n        _prefix: &P,\n    ) -> ProtocolResult<Box<dyn IntoIteratorByRef<S> + 'a>> {\n        {\n            self.db\n                .write()\n                
.entry(S::category().to_string())\n                .or_insert_with(HashMap::new);\n        }\n\n        Ok(Box::new(MemoryIntoIterator {\n            inner: self.db.read(),\n            pin_s: PhantomData::<S>,\n        }))\n    }\n}\n\n#[derive(Debug, Display, From)]\npub enum MemoryAdapterError {\n    #[display(fmt = \"batch length doesn't match\")]\n    BatchLengthMismatch,\n}\n\nimpl Error for MemoryAdapterError {}\n\nimpl From<MemoryAdapterError> for ProtocolError {\n    fn from(err: MemoryAdapterError) -> ProtocolError {\n        ProtocolError::new(ProtocolErrorKind::Storage, Box::new(err))\n    }\n}\n"
  },
  {
    "path": "core/storage/src/adapter/mod.rs",
    "content": "pub mod memory;\npub mod rocks;\n"
  },
  {
    "path": "core/storage/src/adapter/rocks.rs",
    "content": "use std::error::Error;\nuse std::marker::PhantomData;\nuse std::path::Path;\nuse std::sync::Arc;\nuse std::time::Instant;\n\nuse derive_more::{Display, From};\nuse rocksdb::{ColumnFamily, DBIterator, Options, WriteBatch, DB};\n\nuse async_trait::async_trait;\n\nuse common_apm::metrics::storage::on_storage_put_cf;\nuse protocol::codec::ProtocolCodecSync;\nuse protocol::traits::{\n    IntoIteratorByRef, StorageAdapter, StorageBatchModify, StorageCategory, StorageIterator,\n    StorageSchema,\n};\nuse protocol::Bytes;\nuse protocol::{ProtocolError, ProtocolErrorKind, ProtocolResult};\n\n#[derive(Debug)]\npub struct RocksAdapter {\n    db: Arc<DB>,\n}\n\nimpl RocksAdapter {\n    pub fn new<P: AsRef<Path>>(path: P, max_open_files: i32) -> ProtocolResult<Self> {\n        let mut opts = Options::default();\n        opts.create_if_missing(true);\n        opts.create_missing_column_families(true);\n        opts.set_max_open_files(max_open_files);\n\n        let categories = [\n            map_category(StorageCategory::Block),\n            map_category(StorageCategory::BlockHeader),\n            map_category(StorageCategory::Receipt),\n            map_category(StorageCategory::SignedTransaction),\n            map_category(StorageCategory::Wal),\n            map_category(StorageCategory::HashHeight),\n        ];\n\n        let db = DB::open_cf(&opts, path, categories.iter()).map_err(RocksAdapterError::from)?;\n\n        Ok(RocksAdapter { db: Arc::new(db) })\n    }\n}\n\nmacro_rules! 
db {\n    ($db:expr, $op:ident, $column:expr, $key:expr) => {\n        $db.$op($column, $key).map_err(RocksAdapterError::from)\n    };\n    ($db:expr, $op:ident, $column:expr, $key:expr, $val:expr) => {\n        $db.$op($column, $key, $val)\n            .map_err(RocksAdapterError::from)\n    };\n}\n\npub struct RocksIterator<'a, S: StorageSchema> {\n    inner: DBIterator<'a>,\n    pin_s: PhantomData<S>,\n}\n\nimpl<'a, S: StorageSchema> Iterator for RocksIterator<'a, S> {\n    type Item = ProtocolResult<(<S as StorageSchema>::Key, <S as StorageSchema>::Value)>;\n\n    fn next(&mut self) -> Option<Self::Item> {\n        let kv_decode = |(k_bytes, v_bytes): (Box<[u8]>, Box<[u8]>)| -> ProtocolResult<_> {\n            let k_bytes = Bytes::copy_from_slice(k_bytes.as_ref());\n            let key = <_>::decode_sync(k_bytes)?;\n\n            let v_bytes = Bytes::copy_from_slice(&v_bytes.as_ref());\n            let val = <_>::decode_sync(v_bytes)?;\n\n            Ok((key, val))\n        };\n\n        self.inner.next().map(kv_decode)\n    }\n}\n\npub struct RocksIntoIterator<'a, S: StorageSchema, P: AsRef<[u8]>> {\n    db:     Arc<DB>,\n    column: &'a ColumnFamily,\n    prefix: &'a P,\n    pin_s:  PhantomData<S>,\n}\n\nimpl<'a, 'b: 'a, S: StorageSchema, P: AsRef<[u8]>> IntoIterator\n    for &'b RocksIntoIterator<'a, S, P>\n{\n    type IntoIter = StorageIterator<'a, S>;\n    type Item = ProtocolResult<(<S as StorageSchema>::Key, <S as StorageSchema>::Value)>;\n\n    fn into_iter(self) -> Self::IntoIter {\n        let iter: DBIterator<'_> = self.db.prefix_iterator_cf(self.column, self.prefix);\n\n        Box::new(RocksIterator {\n            inner: iter,\n            pin_s: PhantomData::<S>,\n        })\n    }\n}\n\nimpl<'c, S: StorageSchema, P: AsRef<[u8]>> IntoIteratorByRef<S> for RocksIntoIterator<'c, S, P> {\n    fn ref_to_iter<'a, 'b: 'a>(&'b self) -> StorageIterator<'a, S> {\n        self.into_iter()\n    }\n}\n\n#[async_trait]\nimpl StorageAdapter for RocksAdapter {\n   
 async fn insert<S: StorageSchema>(&self, key: S::Key, val: S::Value) -> ProtocolResult<()> {\n        let inst = Instant::now();\n\n        let column = get_column::<S>(&self.db)?;\n        let key = key.encode_sync()?.to_vec();\n        let val = val.encode_sync()?.to_vec();\n        let size = val.len() as i64;\n\n        db!(self.db, put_cf, column, key, val)?;\n        on_storage_put_cf(S::category(), inst.elapsed(), size);\n\n        Ok(())\n    }\n\n    async fn get<S: StorageSchema>(\n        &self,\n        key: <S as StorageSchema>::Key,\n    ) -> ProtocolResult<Option<<S as StorageSchema>::Value>> {\n        let column = get_column::<S>(&self.db)?;\n        let key = key.encode_sync()?;\n\n        let opt_bytes =\n            { db!(self.db, get_cf, column, key)?.map(|db_vec| Bytes::copy_from_slice(&db_vec)) };\n\n        if let Some(bytes) = opt_bytes {\n            let val = <_>::decode_sync(bytes)?;\n\n            Ok(Some(val))\n        } else {\n            Ok(None)\n        }\n    }\n\n    async fn remove<S: StorageSchema>(&self, key: <S as StorageSchema>::Key) -> ProtocolResult<()> {\n        let column = get_column::<S>(&self.db)?;\n        let key = key.encode_sync()?.to_vec();\n\n        db!(self.db, delete_cf, column, key)?;\n\n        Ok(())\n    }\n\n    async fn contains<S: StorageSchema>(\n        &self,\n        key: <S as StorageSchema>::Key,\n    ) -> ProtocolResult<bool> {\n        let column = get_column::<S>(&self.db)?;\n        let key = key.encode_sync()?.to_vec();\n        let val = db!(self.db, get_cf, column, key)?;\n\n        Ok(val.is_some())\n    }\n\n    async fn batch_modify<S: StorageSchema>(\n        &self,\n        keys: Vec<<S as StorageSchema>::Key>,\n        vals: Vec<StorageBatchModify<S>>,\n    ) -> ProtocolResult<()> {\n        if keys.len() != vals.len() {\n            return Err(RocksAdapterError::BatchLengthMismatch.into());\n        }\n\n        let column = get_column::<S>(&self.db)?;\n        let mut pairs: 
Vec<(Bytes, Option<Bytes>)> = Vec::with_capacity(keys.len());\n\n        for (key, value) in keys.into_iter().zip(vals.into_iter()) {\n            let key = key.encode_sync()?;\n\n            let value = match value {\n                StorageBatchModify::Insert(value) => Some(value.encode_sync()?),\n                StorageBatchModify::Remove => None,\n            };\n\n            pairs.push((key, value))\n        }\n\n        let mut batch = WriteBatch::default();\n        let mut insert_size = 0usize;\n        let inst = Instant::now();\n        for (key, value) in pairs.into_iter() {\n            match value {\n                Some(value) => {\n                    insert_size += value.len();\n                    batch.put_cf(column, key, value)\n                }\n                None => batch.delete_cf(column, key),\n            }\n        }\n\n        on_storage_put_cf(S::category(), inst.elapsed(), insert_size as i64);\n\n        self.db.write(batch).map_err(RocksAdapterError::from)?;\n        Ok(())\n    }\n\n    fn prepare_iter<'a, 'b: 'a, S: StorageSchema + 'static, P: AsRef<[u8]> + 'a>(\n        &'b self,\n        prefix: &'a P,\n    ) -> ProtocolResult<Box<dyn IntoIteratorByRef<S> + 'a>> {\n        let column = get_column::<S>(&self.db)?;\n\n        let rocks_iter = RocksIntoIterator {\n            db: Arc::clone(&self.db),\n            column,\n            prefix,\n            pin_s: PhantomData::<S>,\n        };\n        Ok(Box::new(rocks_iter))\n    }\n}\n\n#[derive(Debug, Display, From)]\npub enum RocksAdapterError {\n    #[display(fmt = \"category {} not found\", _0)]\n    CategoryNotFound(&'static str),\n\n    #[display(fmt = \"rocksdb {}\", _0)]\n    RocksDB(rocksdb::Error),\n\n    #[display(fmt = \"parameters do not match\")]\n    InsertParameter,\n\n    #[display(fmt = \"batch length doesn't match\")]\n    BatchLengthMismatch,\n}\n\nimpl Error for RocksAdapterError {}\n\nimpl From<RocksAdapterError> for ProtocolError {\n    fn from(err: 
RocksAdapterError) -> ProtocolError {\n        ProtocolError::new(ProtocolErrorKind::Storage, Box::new(err))\n    }\n}\n\nconst C_BLOCKS: &str = \"c1\";\nconst C_SIGNED_TRANSACTIONS: &str = \"c2\";\nconst C_RECEIPTS: &str = \"c3\";\nconst C_WALS: &str = \"c4\";\nconst C_HASH_HEIGHT_MAP: &str = \"c5\";\nconst C_BLOCK_HEADERS: &str = \"c6\";\n\nfn map_category(c: StorageCategory) -> &'static str {\n    match c {\n        StorageCategory::Block => C_BLOCKS,\n        StorageCategory::BlockHeader => C_BLOCK_HEADERS,\n        StorageCategory::Receipt => C_RECEIPTS,\n        StorageCategory::SignedTransaction => C_SIGNED_TRANSACTIONS,\n        StorageCategory::Wal => C_WALS,\n        StorageCategory::HashHeight => C_HASH_HEIGHT_MAP,\n    }\n}\n\nfn get_column<S: StorageSchema>(db: &DB) -> Result<&ColumnFamily, RocksAdapterError> {\n    let category = map_category(S::category());\n\n    let column = db\n        .cf_handle(category)\n        .ok_or_else(|| RocksAdapterError::from(category))?;\n\n    Ok(column)\n}\n"
  },
  {
    "path": "core/storage/src/lib.rs",
    "content": "#![feature(test)]\n#![allow(clippy::mutable_key_type)]\n\n#[cfg(test)]\nmod tests;\n\npub mod adapter;\n\nuse std::collections::{HashMap, HashSet};\nuse std::convert::From;\nuse std::error::Error;\nuse std::str::FromStr;\nuse std::sync::Arc;\nuse std::time::Instant;\n\nuse arc_swap::ArcSwap;\nuse async_trait::async_trait;\nuse derive_more::{Display, From};\nuse lazy_static::lazy_static;\n\nuse common_apm::metrics::storage::on_storage_get_cf;\nuse common_apm::muta_apm;\nuse protocol::codec::ProtocolCodecSync;\nuse protocol::traits::{\n    CommonStorage, Context, MaintenanceStorage, Storage, StorageAdapter, StorageBatchModify,\n    StorageCategory, StorageSchema,\n};\nuse protocol::types::{Block, BlockHeader, Hash, Proof, Receipt, SignedTransaction};\nuse protocol::Bytes;\nuse protocol::{ProtocolError, ProtocolErrorKind, ProtocolResult};\n\nconst BATCH_VALUE_DECODE_NUMBER: usize = 1000;\n\nlazy_static! {\n    pub static ref LATEST_BLOCK_KEY: Hash = Hash::digest(Bytes::from(\"latest_hash\"));\n    pub static ref LATEST_PROOF_KEY: Hash = Hash::digest(Bytes::from(\"latest_proof\"));\n}\n\n// FIXME: https://github.com/facebook/rocksdb/wiki/Transactions\nmacro_rules! 
batch_insert {\n    ($self_: ident, $block_height:expr, $vec: expr, $schema: ident) => {\n        let (hashes, heights) = $vec\n            .iter()\n            .map(|item| {\n                (\n                    item.tx_hash.clone(),\n                    StorageBatchModify::Insert($block_height),\n                )\n            })\n            .unzip();\n\n        let (keys, batch_stxs): (Vec<_>, Vec<_>) = $vec\n            .into_iter()\n            .map(|item| {\n                (\n                    CommonHashKey::new($block_height, item.tx_hash.clone()),\n                    StorageBatchModify::Insert(item),\n                )\n            })\n            .unzip();\n\n        $self_\n            .adapter\n            .batch_modify::<$schema>(keys, batch_stxs)\n            .await?;\n\n        $self_\n            .adapter\n            .batch_modify::<HashHeightSchema>(hashes, heights)\n            .await?;\n    };\n}\n\nmacro_rules! get {\n    ($self_: ident, $key: expr, $schema: ident) => {{\n        $self_.adapter.get::<$schema>($key).await\n    }};\n}\n\nmacro_rules! ensure_get {\n    ($self_: ident, $key: expr, $schema: ident) => {{\n        let opt = get!($self_, $key, $schema)?;\n        opt.ok_or(StorageError::GetNone)?\n    }};\n}\n\nmacro_rules! 
impl_storage_schema_for {\n    ($name: ident, $key: ident, $val: ident, $category: ident) => {\n        pub struct $name;\n\n        impl StorageSchema for $name {\n            type Key = $key;\n            type Value = $val;\n\n            fn category() -> StorageCategory {\n                StorageCategory::$category\n            }\n        }\n    };\n}\n\n#[derive(Debug)]\npub struct ImplStorage<Adapter> {\n    adapter: Arc<Adapter>,\n\n    latest_block: ArcSwap<Option<Block>>,\n}\n\nimpl<Adapter: StorageAdapter> ImplStorage<Adapter> {\n    pub fn new(adapter: Arc<Adapter>) -> Self {\n        Self {\n            adapter,\n            latest_block: ArcSwap::from(Arc::new(None)),\n        }\n    }\n}\n\n#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]\npub struct CommonPrefix {\n    block_height: [u8; 8], // BigEndian\n}\n\nimpl CommonPrefix {\n    pub fn new(block_height: u64) -> Self {\n        CommonPrefix {\n            block_height: block_height.to_be_bytes(),\n        }\n    }\n\n    pub fn len() -> usize {\n        8\n    }\n\n    pub fn height(self) -> u64 {\n        u64::from_be_bytes(self.block_height)\n    }\n\n    pub fn make_hash_key(self, hash: &Hash) -> [u8; 40] {\n        debug_assert!(hash.as_bytes().len() == 32);\n\n        let mut key = [0u8; 40];\n        key[0..8].copy_from_slice(&self.block_height);\n        key[8..40].copy_from_slice(&hash.as_bytes()[..32]);\n\n        key\n    }\n}\n\nimpl AsRef<[u8]> for CommonPrefix {\n    fn as_ref(&self) -> &[u8] {\n        &self.block_height\n    }\n}\n\nimpl From<&[u8]> for CommonPrefix {\n    fn from(bytes: &[u8]) -> CommonPrefix {\n        debug_assert!(bytes.len() >= 8);\n\n        let mut h_buf = [0u8; 8];\n        h_buf.copy_from_slice(&bytes[0..8]);\n\n        CommonPrefix {\n            block_height: h_buf,\n        }\n    }\n}\n\nimpl ProtocolCodecSync for CommonPrefix {\n    fn encode_sync(&self) -> ProtocolResult<Bytes> {\n        Ok(Bytes::copy_from_slice(&self.block_height))\n    }\n\n    
fn decode_sync(bytes: Bytes) -> ProtocolResult<Self> {\n        Ok(CommonPrefix::from(&bytes[..8]))\n    }\n}\n\n#[derive(Debug, Clone)]\npub struct CommonHashKey {\n    prefix: CommonPrefix,\n    hash:   Hash,\n}\n\nimpl CommonHashKey {\n    pub fn new(block_height: u64, hash: Hash) -> Self {\n        CommonHashKey {\n            prefix: CommonPrefix::new(block_height),\n            hash,\n        }\n    }\n\n    pub fn height(&self) -> u64 {\n        self.prefix.height()\n    }\n\n    pub fn hash(&self) -> &Hash {\n        &self.hash\n    }\n}\n\nimpl ProtocolCodecSync for CommonHashKey {\n    fn encode_sync(&self) -> ProtocolResult<Bytes> {\n        Ok(Bytes::copy_from_slice(\n            &self.prefix.make_hash_key(&self.hash),\n        ))\n    }\n\n    fn decode_sync(mut bytes: Bytes) -> ProtocolResult<Self> {\n        debug_assert!(bytes.len() >= CommonPrefix::len());\n\n        let prefix = CommonPrefix::from(&bytes[0..CommonPrefix::len()]);\n        let hash = Hash::from_bytes(bytes.split_off(CommonPrefix::len()))?;\n\n        Ok(CommonHashKey { prefix, hash })\n    }\n}\n\nimpl ToString for CommonHashKey {\n    fn to_string(&self) -> String {\n        format!(\"{}:{}\", self.prefix.height(), self.hash.as_hex())\n    }\n}\n\nimpl FromStr for CommonHashKey {\n    type Err = ();\n\n    fn from_str(s: &str) -> Result<Self, Self::Err> {\n        let parts = s.split(':').collect::<Vec<_>>();\n        debug_assert!(parts.len() == 2);\n\n        let height = parts[0].parse::<u64>().map_err(|_| ())?;\n        let hash = Hash::from_hex(parts[1]).map_err(|_| ())?;\n\n        Ok(CommonHashKey::new(height, hash))\n    }\n}\n\npub type BlockKey = CommonPrefix;\n\nimpl_storage_schema_for!(\n    TransactionSchema,\n    CommonHashKey,\n    SignedTransaction,\n    SignedTransaction\n);\nimpl_storage_schema_for!(\n    TransactionBytesSchema,\n    CommonHashKey,\n    Bytes,\n    SignedTransaction\n);\nimpl_storage_schema_for!(BlockSchema, BlockKey, Block, 
Block);\nimpl_storage_schema_for!(BlockHeaderSchema, BlockKey, BlockHeader, BlockHeader);\nimpl_storage_schema_for!(ReceiptSchema, CommonHashKey, Receipt, Receipt);\nimpl_storage_schema_for!(ReceiptBytesSchema, CommonHashKey, Bytes, Receipt);\nimpl_storage_schema_for!(HashHeightSchema, Hash, u64, HashHeight);\nimpl_storage_schema_for!(LatestBlockSchema, Hash, Block, Block);\nimpl_storage_schema_for!(LatestProofSchema, Hash, Proof, Block);\n\n#[async_trait]\nimpl<Adapter: StorageAdapter> MaintenanceStorage for ImplStorage<Adapter> {}\n\n#[async_trait]\nimpl<Adapter: StorageAdapter> Storage for ImplStorage<Adapter> {\n    #[muta_apm::derive::tracing_span(kind = \"storage\")]\n    async fn insert_transactions(\n        &self,\n        ctx: Context,\n        block_height: u64,\n        signed_txs: Vec<SignedTransaction>,\n    ) -> ProtocolResult<()> {\n        batch_insert!(self, block_height, signed_txs, TransactionSchema);\n\n        Ok(())\n    }\n\n    #[muta_apm::derive::tracing_span(kind = \"storage\")]\n    async fn get_transactions(\n        &self,\n        ctx: Context,\n        block_height: u64,\n        hashes: &[Hash],\n    ) -> ProtocolResult<Vec<Option<SignedTransaction>>> {\n        let key_prefix = CommonPrefix::new(block_height);\n        let mut found = Vec::with_capacity(hashes.len());\n\n        {\n            let inst = Instant::now();\n            let prepare_iter = self\n                .adapter\n                .prepare_iter::<TransactionBytesSchema, _>(&key_prefix)?;\n            let mut iter = prepare_iter.ref_to_iter();\n\n            let set = hashes.iter().collect::<HashSet<_>>();\n            let mut count = hashes.len();\n            on_storage_get_cf(\n                StorageCategory::SignedTransaction,\n                inst.elapsed(),\n                count as i64,\n            );\n\n            while count > 0 {\n                let (key, stx_bytes) = match iter.next() {\n                    None => break,\n                    
Some(Ok(key_to_stx_bytes)) => key_to_stx_bytes,\n                    Some(Err(err)) => return Err(err),\n                };\n\n                // Note: fix clippy::suspicious_else_formatting\n                if key.height() != block_height {\n                    break;\n                } else if !set.contains(&key.hash) {\n                    continue;\n                } else {\n                    found.push((key.hash, stx_bytes));\n                    count -= 1;\n                }\n            }\n        }\n\n        let mut found = {\n            if found.len() <= BATCH_VALUE_DECODE_NUMBER {\n                found\n                    .drain(..)\n                    .map(|(k, v): (Hash, Bytes)| SignedTransaction::decode_sync(v).map(|v| (k, v)))\n                    .collect::<ProtocolResult<Vec<_>>>()?\n                    .into_iter()\n                    .collect::<HashMap<_, _>>()\n            } else {\n                let futs = found\n                    .chunks(BATCH_VALUE_DECODE_NUMBER)\n                    .map(|vals| {\n                        let vals = vals.to_owned();\n\n                        // FIXME: cancel decode\n                        tokio::spawn(async move {\n                            vals.into_iter()\n                                .map(|(k, v)| <_>::decode_sync(v).map(|v| (k, v)))\n                                .collect::<ProtocolResult<Vec<_>>>()\n                        })\n                    })\n                    .collect::<Vec<_>>();\n\n                futures::future::try_join_all(futs)\n                    .await\n                    .map_err(|_| StorageError::BatchDecode)?\n                    .into_iter()\n                    .collect::<ProtocolResult<Vec<Vec<_>>>>()?\n                    .into_iter()\n                    .flatten()\n                    .collect::<HashMap<_, _>>()\n            }\n        };\n\n        Ok(hashes.iter().map(|h| found.remove(&h)).collect::<Vec<_>>())\n    }\n\n    async fn 
get_transaction_by_hash(\n        &self,\n        _ctx: Context,\n        hash: &Hash,\n    ) -> ProtocolResult<Option<SignedTransaction>> {\n        if let Some(block_height) = get!(self, hash.clone(), HashHeightSchema)? {\n            get!(\n                self,\n                CommonHashKey::new(block_height, hash.clone()),\n                TransactionSchema\n            )\n        } else {\n            Ok(None)\n        }\n    }\n\n    #[muta_apm::derive::tracing_span(kind = \"storage\")]\n    async fn insert_receipts(\n        &self,\n        ctx: Context,\n        block_height: u64,\n        receipts: Vec<Receipt>,\n    ) -> ProtocolResult<()> {\n        batch_insert!(self, block_height, receipts, ReceiptSchema);\n\n        Ok(())\n    }\n\n    #[muta_apm::derive::tracing_span(kind = \"storage\")]\n    async fn get_receipts(\n        &self,\n        ctx: Context,\n        block_height: u64,\n        hashes: Vec<Hash>,\n    ) -> ProtocolResult<Vec<Option<Receipt>>> {\n        let key_prefix = CommonPrefix::new(block_height);\n        let mut found = Vec::with_capacity(hashes.len());\n\n        {\n            let inst = Instant::now();\n            let prepare_iter = self\n                .adapter\n                .prepare_iter::<ReceiptBytesSchema, _>(&key_prefix)?;\n            let mut iter = prepare_iter.ref_to_iter();\n\n            let set = hashes.iter().collect::<HashSet<_>>();\n            let mut count = hashes.len();\n            on_storage_get_cf(StorageCategory::Receipt, inst.elapsed(), count as i64);\n\n            while count > 0 {\n                let (key, stx_bytes) = match iter.next() {\n                    None => break,\n                    Some(Ok(key_to_stx_bytes)) => key_to_stx_bytes,\n                    Some(Err(err)) => return Err(err),\n                };\n\n                // Note: fix clippy::suspicious_else_formatting\n                if key.height() != block_height {\n                    break;\n                } else if 
!set.contains(&key.hash) {\n                    continue;\n                } else {\n                    found.push((key.hash, stx_bytes));\n                    count -= 1;\n                }\n            }\n        }\n\n        let mut found = {\n            if found.len() <= BATCH_VALUE_DECODE_NUMBER {\n                found\n                    .drain(..)\n                    .map(|(k, v): (Hash, Bytes)| Receipt::decode_sync(v).map(|v| (k, v)))\n                    .collect::<ProtocolResult<Vec<_>>>()?\n                    .into_iter()\n                    .collect::<HashMap<_, _>>()\n            } else {\n                let futs = found\n                    .chunks(BATCH_VALUE_DECODE_NUMBER)\n                    .map(|vals| {\n                        let vals = vals.to_owned();\n\n                        // FIXME: cancel decode\n                        tokio::spawn(async move {\n                            vals.into_iter()\n                                .map(|(k, v)| <_>::decode_sync(v).map(|v| (k, v)))\n                                .collect::<ProtocolResult<Vec<_>>>()\n                        })\n                    })\n                    .collect::<Vec<_>>();\n\n                futures::future::try_join_all(futs)\n                    .await\n                    .map_err(|_| StorageError::BatchDecode)?\n                    .into_iter()\n                    .collect::<ProtocolResult<Vec<Vec<_>>>>()?\n                    .into_iter()\n                    .flatten()\n                    .collect::<HashMap<_, _>>()\n            }\n        };\n\n        Ok(hashes\n            .into_iter()\n            .map(|h| found.remove(&h))\n            .collect::<Vec<_>>())\n    }\n\n    async fn get_receipt_by_hash(\n        &self,\n        _ctx: Context,\n        hash: Hash,\n    ) -> ProtocolResult<Option<Receipt>> {\n        if let Some(block_height) = get!(self, hash.clone(), HashHeightSchema)? 
{\n            get!(self, CommonHashKey::new(block_height, hash), ReceiptSchema)\n        } else {\n            Ok(None)\n        }\n    }\n\n    async fn update_latest_proof(&self, _ctx: Context, proof: Proof) -> ProtocolResult<()> {\n        self.adapter\n            .insert::<LatestProofSchema>(LATEST_PROOF_KEY.clone(), proof)\n            .await?;\n        Ok(())\n    }\n\n    async fn get_latest_proof(&self, _ctx: Context) -> ProtocolResult<Proof> {\n        let proof = ensure_get!(self, LATEST_PROOF_KEY.clone(), LatestProofSchema);\n        Ok(proof)\n    }\n}\n\n#[async_trait]\nimpl<Adapter: StorageAdapter> CommonStorage for ImplStorage<Adapter> {\n    async fn insert_block(&self, ctx: Context, block: Block) -> ProtocolResult<()> {\n        self.set_block(ctx.clone(), block.clone()).await?;\n\n        self.set_latest_block(ctx, block).await?;\n\n        Ok(())\n    }\n\n    async fn get_block(&self, _ctx: Context, height: u64) -> ProtocolResult<Option<Block>> {\n        self.adapter.get::<BlockSchema>(BlockKey::new(height)).await\n    }\n\n    async fn get_block_header(\n        &self,\n        ctx: Context,\n        height: u64,\n    ) -> ProtocolResult<Option<BlockHeader>> {\n        let opt_header = self\n            .adapter\n            .get::<BlockHeaderSchema>(BlockKey::new(height))\n            .await?;\n        if opt_header.is_some() {\n            return Ok(opt_header);\n        }\n\n        Ok(self.get_block(ctx, height).await?.map(|b| b.header))\n    }\n\n    // !!!be careful, the prev_hash may mismatch and the latest block may diverge!!!\n    async fn set_block(&self, _ctx: Context, block: Block) -> ProtocolResult<()> {\n        self.adapter\n            .insert::<BlockSchema>(BlockKey::new(block.header.height), block.clone())\n            .await?;\n        self.adapter\n            .insert::<BlockHeaderSchema>(BlockKey::new(block.header.height), block.header.clone())\n            .await?;\n        Ok(())\n    }\n\n    // !be careful, only call 
this function in maintenance mode!\n    async fn remove_block(&self, _ctx: Context, height: u64) -> ProtocolResult<()> {\n        self.adapter\n            .remove::<BlockSchema>(BlockKey::new(height))\n            .await\n    }\n\n    async fn get_latest_block(&self, _ctx: Context) -> ProtocolResult<Block> {\n        if let Some(block) = self.latest_block.load().as_ref().clone() {\n            Ok(block)\n        } else {\n            let block = ensure_get!(self, LATEST_BLOCK_KEY.clone(), LatestBlockSchema);\n            Ok(block)\n        }\n    }\n\n    async fn get_latest_block_header(&self, _ctx: Context) -> ProtocolResult<BlockHeader> {\n        let opt_header = {\n            let guard = self.latest_block.load();\n            let opt_block = guard.as_ref();\n            opt_block.as_ref().map(|b| b.header.clone())\n        };\n\n        if let Some(header) = opt_header {\n            Ok(header)\n        } else {\n            let block = ensure_get!(self, LATEST_BLOCK_KEY.clone(), LatestBlockSchema);\n            Ok(block.header)\n        }\n    }\n\n    async fn set_latest_block(&self, _ctx: Context, block: Block) -> ProtocolResult<()> {\n        self.adapter\n            .insert::<LatestBlockSchema>(LATEST_BLOCK_KEY.clone(), block.clone())\n            .await?;\n\n        self.latest_block.store(Arc::new(Some(block)));\n\n        Ok(())\n    }\n}\n\n#[derive(Debug, Display, From)]\npub enum StorageError {\n    #[display(fmt = \"get none\")]\n    GetNone,\n\n    #[display(fmt = \"decode batch value\")]\n    BatchDecode,\n}\n\nimpl Error for StorageError {}\n\nimpl From<StorageError> for ProtocolError {\n    fn from(err: StorageError) -> ProtocolError {\n        ProtocolError::new(ProtocolErrorKind::Storage, Box::new(err))\n    }\n}\n"
  },
  {
    "path": "core/storage/src/tests/adapter.rs",
    "content": "use protocol::traits::{StorageAdapter, StorageBatchModify};\nuse protocol::types::Hash;\n\nuse crate::adapter::memory::MemoryAdapter;\nuse crate::adapter::rocks::RocksAdapter;\nuse crate::tests::{get_random_bytes, mock_signed_tx};\nuse crate::{CommonHashKey, TransactionSchema};\n\n#[tokio::test]\nasync fn test_adapter_insert() {\n    adapter_insert_test(MemoryAdapter::new()).await;\n    adapter_insert_test(RocksAdapter::new(\"rocksdb/test_adapter_insert\".to_string(), 64).unwrap())\n        .await\n}\n\n#[tokio::test]\nasync fn test_adapter_batch_modify() {\n    adapter_batch_modify_test(MemoryAdapter::new()).await;\n    adapter_batch_modify_test(\n        RocksAdapter::new(\"rocksdb/test_adapter_batch_modify\".to_string(), 64).unwrap(),\n    )\n    .await\n}\n\n#[tokio::test]\nasync fn test_adapter_remove() {\n    adapter_remove_test(MemoryAdapter::new()).await;\n    adapter_remove_test(RocksAdapter::new(\"rocksdb/test_adapter_remove\".to_string(), 64).unwrap())\n        .await\n}\n\nasync fn adapter_insert_test(db: impl StorageAdapter) {\n    let tx_hash = Hash::digest(get_random_bytes(10));\n    let tx_key = CommonHashKey::new(1, tx_hash.clone());\n    let stx = mock_signed_tx(tx_hash.clone());\n\n    db.insert::<TransactionSchema>(tx_key.clone(), stx.clone())\n        .await\n        .unwrap();\n    let stx = db.get::<TransactionSchema>(tx_key).await.unwrap().unwrap();\n\n    assert_eq!(tx_hash, stx.tx_hash);\n}\n\nasync fn adapter_batch_modify_test(db: impl StorageAdapter) {\n    let mut stxs = Vec::new();\n    let mut keys = Vec::new();\n    let mut inserts = Vec::new();\n\n    for _ in 0..10 {\n        let tx_hash = Hash::digest(get_random_bytes(10));\n        keys.push(CommonHashKey::new(1, tx_hash.clone()));\n        let stx = mock_signed_tx(tx_hash);\n        stxs.push(stx.clone());\n        inserts.push(StorageBatchModify::Insert::<TransactionSchema>(stx));\n    }\n\n    db.batch_modify::<TransactionSchema>(keys.clone(), inserts)\n        
.await\n        .unwrap();\n    let opt_stxs = db.get_batch::<TransactionSchema>(keys).await.unwrap();\n\n    for i in 0..10 {\n        assert_eq!(\n            stxs.get(i).unwrap().tx_hash,\n            opt_stxs.get(i).unwrap().as_ref().unwrap().tx_hash\n        );\n    }\n}\n\nasync fn adapter_remove_test(db: impl StorageAdapter) {\n    let tx_hash = Hash::digest(get_random_bytes(10));\n    let tx_key = CommonHashKey::new(1, tx_hash.clone());\n    let is_exist = db\n        .contains::<TransactionSchema>(tx_key.clone())\n        .await\n        .unwrap();\n    assert!(!is_exist);\n\n    let stx = &mock_signed_tx(tx_hash);\n    db.insert::<TransactionSchema>(tx_key.clone(), stx.clone())\n        .await\n        .unwrap();\n    let is_exist = db\n        .contains::<TransactionSchema>(tx_key.clone())\n        .await\n        .unwrap();\n    assert!(is_exist);\n\n    db.remove::<TransactionSchema>(tx_key.clone())\n        .await\n        .unwrap();\n    let is_exist = db.contains::<TransactionSchema>(tx_key).await.unwrap();\n    assert!(!is_exist);\n}\n"
  },
  {
    "path": "core/storage/src/tests/mod.rs",
    "content": "extern crate test;\n\nmod adapter;\nmod storage;\n\nuse rand::random;\n\nuse protocol::traits::ServiceResponse;\nuse protocol::types::{\n    Block, BlockHeader, Hash, Proof, RawTransaction, Receipt, ReceiptResponse, SignedTransaction,\n    TransactionRequest,\n};\nuse protocol::Bytes;\n\nconst ADDRESS_STR: &str = \"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\";\n\nfn mock_signed_tx(tx_hash: Hash) -> SignedTransaction {\n    let nonce = Hash::digest(Bytes::from(\"XXXX\"));\n\n    let request = TransactionRequest {\n        service_name: \"test\".to_owned(),\n        method:       \"test\".to_owned(),\n        payload:      \"test\".to_owned(),\n    };\n\n    let raw = RawTransaction {\n        chain_id: nonce.clone(),\n        nonce,\n        timeout: 10,\n        cycles_limit: 10,\n        cycles_price: 1,\n        request,\n        sender: ADDRESS_STR.parse().unwrap(),\n    };\n\n    SignedTransaction {\n        raw,\n        tx_hash,\n        pubkey: Default::default(),\n        signature: Default::default(),\n    }\n}\n\nfn mock_receipt(tx_hash: Hash) -> Receipt {\n    let nonce = Hash::digest(Bytes::from(\"XXXX\"));\n\n    let response = ReceiptResponse {\n        service_name: \"test\".to_owned(),\n        method:       \"test\".to_owned(),\n        response:     ServiceResponse::<String> {\n            code:          0,\n            succeed_data:  \"ok\".to_owned(),\n            error_message: \"\".to_owned(),\n        },\n    };\n    Receipt {\n        state_root: nonce,\n        height: 10,\n        tx_hash,\n        cycles_used: 10,\n        events: vec![],\n        response,\n    }\n}\n\nfn mock_block(height: u64, block_hash: Hash) -> Block {\n    let nonce = Hash::digest(Bytes::from(\"XXXX\"));\n    let header = BlockHeader {\n        chain_id: nonce.clone(),\n        height,\n        exec_height: height - 1,\n        prev_hash: nonce.clone(),\n        timestamp: 1000,\n        order_root: nonce.clone(),\n        
order_signed_transactions_hash: nonce.clone(),\n        confirm_root: Vec::new(),\n        state_root: nonce,\n        receipt_root: Vec::new(),\n        cycles_used: vec![999_999],\n        proposer: ADDRESS_STR.parse().unwrap(),\n        proof: mock_proof(block_hash),\n        validator_version: 1,\n        validators: Vec::new(),\n    };\n\n    Block {\n        header,\n        ordered_tx_hashes: Vec::new(),\n    }\n}\n\nfn mock_proof(block_hash: Hash) -> Proof {\n    Proof {\n        height: 0,\n        round: 0,\n        block_hash,\n        signature: Default::default(),\n        bitmap: Default::default(),\n    }\n}\n\nfn get_random_bytes(len: usize) -> Bytes {\n    let vec: Vec<u8> = (0..len).map(|_| random::<u8>()).collect();\n    Bytes::from(vec)\n}\n"
  },
  {
    "path": "core/storage/src/tests/storage.rs",
    "content": "extern crate test;\n\nuse std::sync::Arc;\n\nuse test::Bencher;\n\nuse protocol::traits::{CommonStorage, Context, Storage};\nuse protocol::types::Hash;\nuse tokio::runtime::Runtime;\n\nuse crate::adapter::memory::MemoryAdapter;\nuse crate::tests::{get_random_bytes, mock_block, mock_proof, mock_receipt, mock_signed_tx};\nuse crate::ImplStorage;\nuse crate::BATCH_VALUE_DECODE_NUMBER;\n\n#[tokio::test]\nasync fn test_storage_block_insert() {\n    let storage = ImplStorage::new(Arc::new(MemoryAdapter::new()));\n\n    let height = 100;\n    let block = mock_block(height, Hash::digest(get_random_bytes(10)));\n\n    storage.insert_block(Context::new(), block).await.unwrap();\n\n    let block = storage.get_latest_block(Context::new()).await.unwrap();\n    assert_eq!(height, block.header.height);\n\n    let block = storage.get_block(Context::new(), height).await.unwrap();\n    assert_eq!(Some(height), block.map(|b| b.header.height));\n}\n\n#[tokio::test]\nasync fn test_storage_receipts_insert() {\n    let storage = ImplStorage::new(Arc::new(MemoryAdapter::new()));\n    let height = 2077;\n\n    let mut receipts = Vec::new();\n    let mut hashes = Vec::new();\n\n    for _ in 0..10 {\n        let tx_hash = Hash::digest(get_random_bytes(10));\n        hashes.push(tx_hash.clone());\n        let receipt = mock_receipt(tx_hash.clone());\n        receipts.push(receipt);\n    }\n\n    storage\n        .insert_receipts(Context::new(), height, receipts.clone())\n        .await\n        .unwrap();\n    let receipts_2 = storage\n        .get_receipts(Context::new(), height, hashes)\n        .await\n        .unwrap();\n\n    for i in 0..10 {\n        assert_eq!(\n            Some(receipts.get(i).unwrap()),\n            receipts_2.get(i).unwrap().as_ref()\n        );\n    }\n}\n\n#[tokio::test]\nasync fn test_storage_receipts_get_batch_decode() {\n    let storage = ImplStorage::new(Arc::new(MemoryAdapter::new()));\n    let height = 2077;\n    let count = 
BATCH_VALUE_DECODE_NUMBER + 100;\n\n    let mut receipts = Vec::new();\n    let mut hashes = Vec::new();\n\n    for _ in 0..count {\n        let tx_hash = Hash::digest(get_random_bytes(10));\n        hashes.push(tx_hash.clone());\n        let receipt = mock_receipt(tx_hash.clone());\n        receipts.push(receipt);\n    }\n\n    storage\n        .insert_receipts(Context::new(), height, receipts.clone())\n        .await\n        .unwrap();\n\n    let receipts_2 = storage\n        .get_receipts(Context::new(), height, hashes)\n        .await\n        .unwrap();\n\n    for i in 0..count {\n        assert_eq!(\n            Some(receipts.get(i).unwrap()),\n            receipts_2.get(i).unwrap().as_ref()\n        );\n    }\n}\n\n#[tokio::test]\nasync fn test_storage_transactions_insert() {\n    let storage = ImplStorage::new(Arc::new(MemoryAdapter::new()));\n    let height = 2020;\n\n    let mut transactions = Vec::new();\n    let mut hashes = Vec::new();\n\n    for _ in 0..10 {\n        let tx_hash = Hash::digest(get_random_bytes(10));\n        hashes.push(tx_hash.clone());\n        let transaction = mock_signed_tx(tx_hash.clone());\n        transactions.push(transaction);\n    }\n\n    storage\n        .insert_transactions(Context::new(), height, transactions.clone())\n        .await\n        .unwrap();\n    let transactions_2 = storage\n        .get_transactions(Context::new(), height, &hashes)\n        .await\n        .unwrap();\n\n    for i in 0..10 {\n        assert_eq!(\n            Some(transactions.get(i).unwrap()),\n            transactions_2.get(i).unwrap().as_ref()\n        );\n    }\n}\n\n#[tokio::test]\nasync fn test_storage_transactions_get_batch_decode() {\n    let storage = ImplStorage::new(Arc::new(MemoryAdapter::new()));\n    let height = 2020;\n    let count = BATCH_VALUE_DECODE_NUMBER + 100;\n\n    let mut transactions = Vec::new();\n    let mut hashes = Vec::new();\n\n    for _ in 0..count {\n        let tx_hash = 
Hash::digest(get_random_bytes(10));\n        hashes.push(tx_hash.clone());\n        let transaction = mock_signed_tx(tx_hash.clone());\n        transactions.push(transaction);\n    }\n\n    storage\n        .insert_transactions(Context::new(), height, transactions.clone())\n        .await\n        .unwrap();\n    let transactions_2 = storage\n        .get_transactions(Context::new(), height, &hashes)\n        .await\n        .unwrap();\n\n    for i in 0..count {\n        assert_eq!(\n            Some(transactions.get(i).unwrap()),\n            transactions_2.get(i).unwrap().as_ref()\n        );\n    }\n}\n\n#[tokio::test]\nasync fn test_storage_latest_proof_insert() {\n    let storage = ImplStorage::new(Arc::new(MemoryAdapter::new()));\n\n    let block_hash = Hash::digest(get_random_bytes(10));\n    let proof = mock_proof(block_hash);\n\n    storage\n        .update_latest_proof(Context::new(), proof.clone())\n        .await\n        .unwrap();\n    let proof_2 = storage.get_latest_proof(Context::new()).await.unwrap();\n\n    assert_eq!(proof.block_hash, proof_2.block_hash);\n}\n\n#[rustfmt::skip]\n/// Bench in Intel(R) Core(TM) i7-4770HQ CPU @ 2.20GHz (8 x 2200)\n/// test tests::storage::bench_insert_10000_receipts ... bench:  33,954,916 ns/iter (+/- 3,818,780)\n/// test tests::storage::bench_insert_20000_receipts ... bench:  69,476,334 ns/iter (+/- 25,206,468)\n/// test tests::storage::bench_insert_40000_receipts ... bench: 138,903,121 ns/iter (+/- 26,053,433)\n/// test tests::storage::bench_insert_80000_receipts ... bench: 289,629,756 ns/iter (+/- 114,583,692)\n/// test tests::storage::bench_insert_10000_txs      ... bench:  37,900,652 ns/iter (+/- 19,055,351)\n/// test tests::storage::bench_insert_20000_txs      ... bench:  76,499,664 ns/iter (+/- 17,883,127)\n/// test tests::storage::bench_insert_40000_txs      ... bench: 148,111,340 ns/iter (+/- 5,637,411)\n/// test tests::storage::bench_insert_80000_txs      ... 
bench: 311,861,163 ns/iter (+/- 16,891,290)\n\n#[bench]\nfn bench_insert_10000_receipts(b: &mut Bencher) {\n    let storage = ImplStorage::new(Arc::new(MemoryAdapter::new()));\n    let height = 2045;\n\n    let receipts = (0..10000)\n        .map(|_| mock_receipt(Hash::digest(get_random_bytes(10))))\n        .collect::<Vec<_>>();\n\n    let mut rt = Runtime::new().unwrap();\n    b.iter(|| {\n        rt.block_on(storage.insert_receipts(Context::new(), height, receipts.clone())).unwrap()\n    })\n}\n\n#[bench]\nfn bench_insert_20000_receipts(b: &mut Bencher) {\n    let storage = ImplStorage::new(Arc::new(MemoryAdapter::new()));\n    let height = 2045;\n\n    let receipts = (0..20000)\n        .map(|_| mock_receipt(Hash::digest(get_random_bytes(10))))\n        .collect::<Vec<_>>();\n\n    let mut rt = Runtime::new().unwrap();\n    b.iter(move || {\n        rt.block_on(storage.insert_receipts(Context::new(), height, receipts.clone()))\n            .unwrap()\n    })\n}\n\n#[bench]\nfn bench_insert_40000_receipts(b: &mut Bencher) {\n    let storage = ImplStorage::new(Arc::new(MemoryAdapter::new()));\n    let height = 2077;\n\n    let receipts = (0..40000)\n        .map(|_| mock_receipt(Hash::digest(get_random_bytes(10))))\n        .collect::<Vec<_>>();\n\n    let mut rt = Runtime::new().unwrap();\n    b.iter(move || {\n        rt.block_on(storage.insert_receipts(Context::new(), height, receipts.clone()))\n            .unwrap()\n    })\n}\n\n#[bench]\nfn bench_insert_80000_receipts(b: &mut Bencher) {\n    let storage = ImplStorage::new(Arc::new(MemoryAdapter::new()));\n    let height = 2077;\n\n    let receipts = (0..80000)\n        .map(|_| mock_receipt(Hash::digest(get_random_bytes(10))))\n        .collect::<Vec<_>>();\n\n    let mut rt = Runtime::new().unwrap();\n    b.iter(move || {\n        rt.block_on(storage.insert_receipts(Context::new(), height, receipts.clone()))\n            .unwrap()\n    })\n}\n#[bench]\nfn bench_insert_10000_txs(b: &mut Bencher) {\n    let 
storage = ImplStorage::new(Arc::new(MemoryAdapter::new()));\n    let height = 2077;\n\n    let txs = (0..10000)\n        .map(|_| mock_signed_tx(Hash::digest(get_random_bytes(10))))\n        .collect::<Vec<_>>();\n\n    let mut rt = Runtime::new().unwrap();\n    b.iter(move || {\n        rt.block_on(storage.insert_transactions(Context::new(), height, txs.clone()))\n            .unwrap()\n    })\n}\n\n#[bench]\nfn bench_insert_20000_txs(b: &mut Bencher) {\n    let storage = ImplStorage::new(Arc::new(MemoryAdapter::new()));\n    let height = 2077;\n\n    let txs = (0..20000)\n        .map(|_| mock_signed_tx(Hash::digest(get_random_bytes(10))))\n        .collect::<Vec<_>>();\n\n    let mut rt = Runtime::new().unwrap();\n    b.iter(move || {\n        rt.block_on(storage.insert_transactions(Context::new(), height, txs.clone()))\n            .unwrap()\n    })\n}\n\n#[bench]\nfn bench_insert_40000_txs(b: &mut Bencher) {\n    let storage = ImplStorage::new(Arc::new(MemoryAdapter::new()));\n    let height = 2077;\n\n    let txs = (0..40000)\n        .map(|_| mock_signed_tx(Hash::digest(get_random_bytes(10))))\n        .collect::<Vec<_>>();\n\n    let mut rt = Runtime::new().unwrap();\n    b.iter(move || {\n        rt.block_on(storage.insert_transactions(Context::new(), height, txs.clone()))\n            .unwrap()\n    })\n}\n\n#[bench]\nfn bench_insert_80000_txs(b: &mut Bencher) {\n    let storage = ImplStorage::new(Arc::new(MemoryAdapter::new()));\n    let height = 2077;\n\n    let txs = (0..80000)\n        .map(|_| mock_signed_tx(Hash::digest(get_random_bytes(10))))\n        .collect::<Vec<_>>();\n\n    let mut rt = Runtime::new().unwrap();\n    b.iter(move || {\n        rt.block_on(storage.insert_transactions(Context::new(), height, txs.clone()))\n            .unwrap()\n    })\n}\n"
  },
  {
    "path": "devtools/chain/README.md",
    "content": "# A simple config set for creating a new chain\n\nAddress in genesis:\n\n Address                                       | Asset(MTT)    | PrivKey                                                              | Pubkey                                                                 |\n --------------------------------------------- | ------------- | -------------------------------------------------------------------- | ---------------------------------------------------------------------- |\n `muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705` | `0x100000000` | `0x8dfbd3c689308d29c058cce163984a2ae8d5fc5191ce6b1e18bd1d7b95a8c632` | `0x03dbd1dbf3835efb4ec34a360ee671ee1d22425425368edfc5b9ffafc812e86200` |\n"
  },
  {
    "path": "devtools/chain/config.toml",
    "content": "# crypto\nprivkey = \"0x5ec982173d54d830b6789cbbbe43eaa2853a5ff752d1ebc1b266cf9790314f8a\"\n\n# db config\ndata_path = \"./devtools/chain/data\"\n\n[graphql]\nlistening_address = \"127.0.0.1:8000\"\ngraphql_uri = \"/graphql\"\ngraphiql_uri = \"/graphiql\"\nworkers = 0 # if 0, uses number of available logical cpu as threads count.\nmaxconn = 25000\nmax_payload_size = 1048576\n# enable_dump_profile = false\n# [graphql.tls]\n# private_key_file_path = \"key.pem\"\n# certificate_chain_file_path = \"cert.pem\"\n\n\n[network]\nlistening_address = \"0.0.0.0:1337\"\nrpc_timeout = 10\n\n[consensus]\noverlord_gap = 5\nsync_txs_chunk_size = 5000\n\n[[network.bootstraps]]\npeer_id = \"QmTEJkB5QKWsEq37huryZZfVvqBKb54sHnKn9TQcA6j3n9\"\naddress = \"0.0.0.0:1888\"\n\n[mempool]\npool_size = 20000\nbroadcast_txs_size = 200\nbroadcast_txs_interval = 200\n\n[executor]\nlight = false\ntriedb_cache_size = 2000\n\n[logger]\nfilter = \"info\"\nlog_to_console = true\nconsole_show_file_and_line = false\nlog_path = \"logs/\"\nlog_to_file = true\nfile_size_limit = 1073741824 # 1 GiB\nmetrics = true\n# you can specify log level for modules with config below\n# modules_level = { \"overlord::state::process\" = \"debug\", core_consensus = \"error\" }\n\n[rocksdb]\nmax_open_files = 64\n\n# [apm]\n# service_name = \"muta\"\n# tracing_address = \"127.0.0.1:6831\"\n# tracing_batch_size = 50\n"
  },
  {
    "path": "devtools/chain/genesis.toml",
    "content": "timestamp = 0\nprevhash = \"0x44915be5b6c20b0678cf05fcddbbaa832e25d7e6ac538784cd5c24de00d47472\"\n\n[[services]]\nname = \"asset\"\npayload = '''\n{\n   \"id\": \"0xf56924db538e77bb5951eb5ff0d02b88983c49c45eea30e8ae3e7234b311436c\",\n   \"name\": \"MutaToken\",\n   \"symbol\": \"MT\",\n   \"supply\": 320000011,\n   \"issuer\": \"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\"\n}\n'''\n\n[[services]]\nname = \"metadata\"\npayload = '''\n{\n    \"chain_id\": \"0xb6a4d7da21443f5e816e8700eea87610e6d769657d6b8ec73028457bf2ca4036\",\n    \"bech32_address_hrp\": \"muta\",\n    \"common_ref\": \"0x6c747758636859487038\",\n    \"timeout_gap\": 20,\n    \"cycles_limit\": 4294967295,\n    \"cycles_price\": 1,\n    \"interval\": 3000,\n    \"verifier_list\": [\n       {\n           \"bls_pub_key\": \"0x04102947214862a503c73904deb5818298a186d68c7907bb609583192a7de6331493835e5b8281f4d9ee705537c0e765580e06f86ddce5867812fceb42eecefd209f0eddd0389d6b7b0100f00fb119ef9ab23826c6ea09aadcc76fa6cea6a32724\",\n           \"pub_key\": \"0x02ef0cb0d7bc6c18b4bea1f5908d9106522b35ab3c399369605d4242525bda7e60\",\n           \"address\": \"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\",\n           \"propose_weight\": 1,\n           \"vote_weight\": 1\n       }\n    ],\n    \"propose_ratio\": 15,\n    \"prevote_ratio\": 10,\n    \"precommit_ratio\": 10,\n    \"brake_ratio\": 7,\n    \"tx_num_limit\": 20000,\n    \"max_tx_size\": 1024\n}\n'''\n"
  },
  {
    "path": "devtools/docker-build/Dockerfile",
    "content": "FROM ubuntu:18.04\nLABEL maintainer=\"yejiayu.fe@gmail.com\"\n\nCOPY target/release/examples/muta-chain .\nCOPY devtools/chain/config.toml devtools/chain/config.toml\nCOPY devtools/chain/genesis.toml devtools/chain/genesis.toml\n\nEXPOSE 1337 8000\nCMD [\"./muta-chain\"]\n"
  },
  {
    "path": "devtools/docker-build/Dockerfile.build-env",
    "content": "FROM ubuntu:18.04\n\nLABEL maintainer=\"yejiayu.fe@gmail.com\"\n\nRUN set -eux; \\\n    apt-get update; \\\n    apt-get install -y --no-install-recommends \\\n        ca-certificates \\\n        gcc \\\n        libc6-dev \\\n        wget \\\n        git \\\n        build-essential \\ \n        pkg-config \\\n        openssl \\\n        libssl-dev \\\n        libclang-dev clang; \\\n    rm -rf /var/lib/apt/lists/*\n\nENV RUSTUP_HOME=/usr/local/rustup \\\n    CARGO_HOME=/usr/local/cargo \\\n    PATH=/usr/local/cargo/bin:$PATH \\\n    RUSTUP_VERSION=1.21.1 \\\n    RUSTUP_SHA256=ad1f8b5199b3b9e231472ed7aa08d2e5d1d539198a15c5b1e53c746aad81d27b \\\n    RUST_ARCH=x86_64-unknown-linux-gnu\n\nRUN set -eux; \\\n    url=\"https://static.rust-lang.org/rustup/archive/${RUSTUP_VERSION}/${RUST_ARCH}/rustup-init\"; \\\n    wget \"$url\"; \\\n    echo \"${RUSTUP_SHA256} *rustup-init\" | sha256sum -c -; \\\n    chmod +x rustup-init\n\nENV RUST_VERSION=1.41.0\n\nRUN set -eux; \\\n    ./rustup-init -y --no-modify-path --default-toolchain $RUST_VERSION; \\\n    rm rustup-init; \\\n    chmod -R a+w $RUSTUP_HOME $CARGO_HOME; \\\n    rustup --version; \\\n    cargo --version; \\\n    rustc --version; \\\n    openssl version;\n"
  },
  {
    "path": "devtools/docker-build/Dockerfile.e2e-env",
    "content": "FROM mutadev/muta-build-env:v0.3.0\n\nLABEL maintainer=\"yejiayu.fe@gmail.com\"\n\nRUN set -eux; \\\n    apt-get update; \\\n    apt-get install -y --no-install-recommends \\\n        curl \\\n    curl -sL https://deb.nodesource.com/setup_12.x | bash -; \\\n    apt-get install -y nodejs; \\\n    rm -rf /var/lib/apt/lists/*\n\nRUN npm i yarn -g;\n"
  },
  {
    "path": "devtools/keypair/Cargo.toml",
    "content": "[package]\nname = \"muta-keypair\"\nversion = \"0.2.1\"\nauthors = [\"Muta Dev <muta@nervos.org>\"]\nedition = \"2018\"\ninclude = [\"Cargo.toml\", \"src/*\"]\nrepository = \"https://github.com/nervosnetwork/muta/tree/master/devtools/keypair\"\nlicense = \"MIT\"\ndescription = \"A tool to generate keypairs for muta framework\"\n\n# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html\n\n[dependencies]\nclap = { version = \"2.33\", features = [\"yaml\"] }\nhex = \"0.4\"\nophelia-bls-amcl = \"0.3\"\nophelia = \"0.3\"\nprotocol = { path = \"../../protocol\", package = \"muta-protocol\" }\nrand = \"0.7\"\nserde = {version = \"1.0\", features = [\"derive\"]}\nserde_json = \"1.0\"\ntentacle-secio = { version = \"0.1\", features = [ \"molc\" ] }\n"
  },
  {
    "path": "devtools/keypair/src/keypair.yml",
    "content": "name: muta_keypair\nversion: \"0.1\"\nabout: a tool to generate keypairs for muta\nauthor: Muta Dev <muta@nervos.org>\n\nargs:\n    - number:\n        help: Number of keypairs to generate\n        short: n\n        long: number\n        default_value: \"4\"\n\n    - private_keys:\n        help: Generate keypairs from a given private key vector\n        short: p\n        long: private_keys\n        multiple: true\n        takes_value: true\n\n    - common_ref:\n        help: common_ref for bls signature, it will be randomly generated if not passed\n        short: c\n        long: common_ref\n        default_value: \"\"\n"
  },
  {
    "path": "devtools/keypair/src/main.rs",
    "content": "#[macro_use]\nextern crate clap;\n\nuse std::convert::TryFrom;\nuse std::default::Default;\n\nuse clap::App;\nuse ophelia::{PublicKey, ToBlsPublicKey};\nuse ophelia_bls_amcl::BlsPrivateKey;\nuse protocol::types::{Address, Hash};\nuse protocol::{Bytes, BytesMut};\nuse rand::distributions::Alphanumeric;\nuse rand::Rng;\nuse rand::{rngs::OsRng, RngCore};\nuse serde::Serialize;\nuse tentacle_secio::SecioKeyPair;\n\n#[derive(Default, Serialize, Debug)]\nstruct Keypair {\n    pub index:          usize,\n    pub private_key:    String,\n    pub public_key:     String,\n    pub address:        String,\n    pub peer_id:        String,\n    pub bls_public_key: String,\n}\n\n#[derive(Default, Serialize, Debug)]\nstruct Output {\n    pub common_ref: String,\n    pub keypairs:   Vec<Keypair>,\n}\n\n#[allow(clippy::needless_range_loop)]\npub fn main() {\n    let yml = load_yaml!(\"keypair.yml\");\n    let m = App::from(yml).get_matches();\n    let number = value_t!(m, \"number\", usize).unwrap();\n    let priv_keys = values_t!(m.values_of(\"private_keys\"), String).unwrap_or_default();\n    let len = priv_keys.len();\n    if len > number {\n        panic!(\"private keys length can not be larger than number\");\n    }\n\n    let common_ref_encoded = value_t!(m, \"common_ref\", String).unwrap();\n    let common_ref = if common_ref_encoded.is_empty() {\n        rand::thread_rng()\n            .sample_iter(&Alphanumeric)\n            .take(10)\n            .collect::<String>()\n    } else {\n        String::from_utf8(\n            hex::decode(common_ref_encoded).expect(\"common_ref should be a hex string\"),\n        )\n        .expect(\"common_ref should be a valid utf8 string\")\n    };\n\n    let mut output = Output {\n        common_ref: add_0x(hex::encode(common_ref.clone())),\n        keypairs:   vec![],\n    };\n\n    for i in 0..number {\n        let mut k = Keypair::default();\n        let seckey = if i < len {\n            
Bytes::from(hex::decode(&priv_keys[i]).expect(\"decode hex private key\"))\n        } else {\n            let mut seed = [0u8; 32];\n            OsRng.fill_bytes(&mut seed);\n            Hash::digest(BytesMut::from(seed.as_ref()).freeze()).as_bytes()\n        };\n        let keypair = SecioKeyPair::secp256k1_raw_key(seckey.as_ref()).expect(\"secp256k1 keypair\");\n        let pubkey = keypair.to_public_key().inner();\n        let address = Address::from_pubkey_bytes(pubkey.clone()).expect(\"address\");\n\n        k.private_key = add_0x(hex::encode(seckey.as_ref()));\n        k.public_key = add_0x(hex::encode(pubkey));\n        k.peer_id = keypair.to_public_key().peer_id().to_base58();\n        k.address = address.to_string();\n\n        let priv_key =\n            BlsPrivateKey::try_from([&[0u8; 16], seckey.as_ref()].concat().as_ref()).unwrap();\n        let pub_key = priv_key.pub_key(&common_ref.as_str().into());\n        k.bls_public_key = add_0x(hex::encode(pub_key.to_bytes()));\n        k.index = i + 1;\n        output.keypairs.push(k);\n    }\n    let output_str = serde_json::to_string_pretty(&output).unwrap();\n    println!(\"{}\", output_str);\n}\n\nfn add_0x(s: String) -> String {\n    \"0x\".to_owned() + &s\n}\n"
  },
  {
    "path": "devtools/kube/deploy-chaos-crd-template.yml",
    "content": "apiVersion: nervos.org/v1alpha1\nkind: Muta\nmetadata:\n  name: chaos-${REPO_NAME}-${VERSION}\n  namespace: mutadev # Only supports deployment to the mutadev namespace\nspec:\n  image: mutadev/muta:latest # docker image\n  resources:\n    limits:\n      cpu: 1100m\n      memory: 3Gi\n      ephemeral-storage: 5Gi\n    requests:\n      cpu: 1100m\n      memory: 3Gi\n      ephemeral-storage: 5Gi\n  chaos: # all / stable-network-corrupt / stable-network-delay / stable-network-duplicate / stable-network-loss / stable-network-partition / stable-node-failure / stable-node-kill\n    - all\n  size: 4 # Node numbers\n  persistent: false # Persistent data\n  config: # see https://github.com/nervosnetwork/muta/blob/master/devtools/chain/config.toml\n    data_path: \"/app/data\"\n    graphql:\n      listening_address: \"0.0.0.0:8000\"\n      graphql_uri: \"/graphql\"\n      graphiql_uri: \"/\"\n      workers: 0 # if 0, uses number of available logical cpu as threads count.\n      maxconn: 25000\n    network:\n      listening_address: \"0.0.0.0:1337\"\n      rpc_timeout: 10\n    mempool:\n      pool_size: 20000\n      broadcast_txs_size: 200\n      broadcast_txs_interval: 200\n    executor:\n      light: false\n    logger:\n      filter: \"info\"\n      log_to_console: true\n      console_show_file_and_line: false\n      log_path: \"logs/\"\n      log_to_file: true\n      metrics: true\n      modules_level:\n        # \"overlord::state::process\": \"debug\"\n        # \"core_consensus\": \"error\"\n  genesis: # https://github.com/nervosnetwork/muta/blob/master/devtools/chain/genesis.toml\n    prevhash: \"0x44915be5b6c20b0678cf05fcddbbaa832e25d7e6ac538784cd5c24de00d47472\"\n    metadata:\n      chain_id: \"0xb6a4d7da21443f5e816e8700eea87610e6d769657d6b8ec73028457bf2ca4036\"\n      timeout_gap: 20\n      cycles_limit: 99999999\n      cycles_price: 1\n      interval: 3000\n      propose_ratio: 15\n      prevote_ratio: 15\n      precommit_ratio: 10\n      
brake_ratio: 3\n      tx_num_limit: 20000\n      max_tx_size: 1073741824\n    services:\n      - name: asset\n        payload: '{ \"id\": \"0xf56924db538e77bb5951eb5ff0d02b88983c49c45eea30e8ae3e7234b311436c\", \"name\": \"MutaToken\", \"symbol\": \"MT\", \"supply\": 320000011, \"issuer\": \"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\" }'\n"
  },
  {
    "path": "docs/_config.yml",
    "content": "theme: jekyll-theme-minimal"
  },
  {
    "path": "docs/build/gql_api.sh",
    "content": "#!/usr/bin/env bash\n\nBASEDIR=$(dirname \"$0\")\n\nfunction check() {\n  if ! type \"$1\" > /dev/null; then\n  echo \"$1 is required, install first $2\"\n  echo $2\n  exit 1\nfi\n}\n\ncheck node\ncheck graphql-markdown \"run npm install graphql-markdown --global\"\n\nendpoint=\"http://127.0.0.1:8000/graphql\"\nif [ ! -z \"$1\" ]; then\n  endpoint=$1\nfi\n\n#res_code=$(curl --write-out %{http_code} --silent --output /dev/null \\\n#            -X POST -d 'query q{\\n  getBlock(height:\"0x00\"){\\n    hash \\n  }\\n}' \\\n#            $endpoint)\n\nres_code=$(curl $endpoint --write-out %{http_code} --silent --output /dev/null -H 'content-type: application/json' --data-binary '{\"operationName\":\"q\",\"variables\":{},\"query\":\"query q {\\n  getBlock(height: \\\"0x00\\\") {\\n    hash\\n  }\\n}\\n\"}')\n\nif [ $res_code -ne 200 ]; then\n  echo \"$endpoint GraphQL endpoint request failed\"\n  echo \"start API server at first or use the custom endpoint make doc-api http://x.x.x.x:8000/graphql\"\n  exit 1;\nfi\n\nprologue=\"\n>[GraphQL](https://graphql.org) is a query language for APIs and a runtime for fulfilling those queries with your existing data.\nGraphQL provides a complete and understandable description of the data in your API,\ngives clients the power to ask for exactly what they need and nothing more,\nmakes it easier to evolve APIs over time, and enables powerful developer tools.\n\nMuta has embeded a [Graph**i**QL](https://github.com/graphql/graphiql) for checking and calling API. Started a the Muta\nnode, and then try open http://127.0.0.1:8000/graphiql in the browser.\n\"\n\ngraphql-markdown $endpoint --title \"Muta GraphQL API\" --prologue \"$prologue\" > $BASEDIR/../graphql_api.md\n\nsed -i -E 's/<a href=\"#(.+)\">/<a href=\"#\\/graphql_api?id=\\1\">/g'  $BASEDIR/../graphql_api.md"
  },
  {
    "path": "docs/graphql_api.md",
    "content": "# Muta GraphQL API\n\n\n>[GraphQL](https://graphql.org) is a query language for APIs and a runtime for fulfilling those queries with your existing data.\nGraphQL provides a complete and understandable description of the data in your API,\ngives clients the power to ask for exactly what they need and nothing more,\nmakes it easier to evolve APIs over time, and enables powerful developer tools.\n\nMuta has embeded a [Graph**i**QL](https://github.com/graphql/graphiql) for checking and calling API. Started a the Muta\nnode, and then try open http://127.0.0.1:8000/graphiql in the browser.\n\n\n<details>\n  <summary><strong>Table of Contents</strong></summary>\n\n  * [Query](#query)\n  * [Mutation](#mutation)\n  * [Objects](#objects)\n    * [Block](#block)\n    * [BlockHeader](#blockheader)\n    * [Event](#event)\n    * [Proof](#proof)\n    * [Receipt](#receipt)\n    * [ReceiptResponse](#receiptresponse)\n    * [ServiceResponse](#serviceresponse)\n    * [SignedTransaction](#signedtransaction)\n    * [Validator](#validator)\n  * [Inputs](#inputs)\n    * [InputRawTransaction](#inputrawtransaction)\n    * [InputTransactionEncryption](#inputtransactionencryption)\n  * [Scalars](#scalars)\n    * [Address](#address)\n    * [Boolean](#boolean)\n    * [Bytes](#bytes)\n    * [Hash](#hash)\n    * [Int](#int)\n    * [String](#string)\n    * [Uint64](#uint64)\n\n</details>\n\n## Query\n<table>\n<thead>\n<tr>\n<th align=\"left\">Field</th>\n<th align=\"right\">Argument</th>\n<th align=\"left\">Type</th>\n<th align=\"left\">Description</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>getBlock</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=block\">Block</a></td>\n<td>\n\nGet the block\n\n</td>\n</tr>\n<tr>\n<td colspan=\"2\" align=\"right\" valign=\"top\">height</td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=uint64\">Uint64</a></td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" 
valign=\"top\"><strong>getTransaction</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=signedtransaction\">SignedTransaction</a></td>\n<td>\n\nGet the transaction by hash\n\n</td>\n</tr>\n<tr>\n<td colspan=\"2\" align=\"right\" valign=\"top\">txHash</td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=hash\">Hash</a>!</td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>getReceipt</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=receipt\">Receipt</a></td>\n<td>\n\nGet the receipt by transaction hash\n\n</td>\n</tr>\n<tr>\n<td colspan=\"2\" align=\"right\" valign=\"top\">txHash</td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=hash\">Hash</a>!</td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>queryService</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=serviceresponse\">ServiceResponse</a>!</td>\n<td>\n\nquery service\n\n</td>\n</tr>\n<tr>\n<td colspan=\"2\" align=\"right\" valign=\"top\">height</td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=uint64\">Uint64</a></td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" align=\"right\" valign=\"top\">cyclesLimit</td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=uint64\">Uint64</a></td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" align=\"right\" valign=\"top\">cyclesPrice</td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=uint64\">Uint64</a></td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" align=\"right\" valign=\"top\">caller</td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=address\">Address</a>!</td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" align=\"right\" valign=\"top\">serviceName</td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=string\">String</a>!</td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" align=\"right\" valign=\"top\">method</td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=string\">String</a>!</td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" align=\"right\" valign=\"top\">payload</td>\n<td valign=\"top\"><a 
href=\"#/graphql_api?id=string\">String</a>!</td>\n<td></td>\n</tr>\n</tbody>\n</table>\n\n## Mutation\n<table>\n<thead>\n<tr>\n<th align=\"left\">Field</th>\n<th align=\"right\">Argument</th>\n<th align=\"left\">Type</th>\n<th align=\"left\">Description</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>sendTransaction</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=hash\">Hash</a>!</td>\n<td>\n\nsend transaction\n\n</td>\n</tr>\n<tr>\n<td colspan=\"2\" align=\"right\" valign=\"top\">inputRaw</td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=inputrawtransaction\">InputRawTransaction</a>!</td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" align=\"right\" valign=\"top\">inputEncryption</td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=inputtransactionencryption\">InputTransactionEncryption</a>!</td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>unsafeSendTransaction</strong> ⚠️</td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=hash\">Hash</a>!</td>\n<td>\n<p>⚠️ <strong>DEPRECATED</strong></p>\n<blockquote>\n\nDON'T use it in production! This is just for development.\n\n</blockquote>\n</td>\n</tr>\n<tr>\n<td colspan=\"2\" align=\"right\" valign=\"top\">inputRaw</td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=inputrawtransaction\">InputRawTransaction</a>!</td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" align=\"right\" valign=\"top\">inputPrivkey</td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=bytes\">Bytes</a>!</td>\n<td></td>\n</tr>\n</tbody>\n</table>\n\n## Objects\n\n### Block\n\nBlock is a single digital record created within a blockchain. 
Each block contains a record of the previous Block, and when linked together these become the “chain”. A block is always composed of header and body.\n\n<table>\n<thead>\n<tr>\n<th align=\"left\">Field</th>\n<th align=\"right\">Argument</th>\n<th align=\"left\">Type</th>\n<th align=\"left\">Description</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>header</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=blockheader\">BlockHeader</a>!</td>\n<td>\n\nThe header section of a block\n\n</td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>orderedTxHashes</strong></td>\n<td valign=\"top\">[<a href=\"#/graphql_api?id=hash\">Hash</a>!]!</td>\n<td>\n\nThe body section of a block\n\n</td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>hash</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=hash\">Hash</a>!</td>\n<td>\n\nHash of the block\n\n</td>\n</tr>\n</tbody>\n</table>\n\n### BlockHeader\n\nA block header is like the metadata of a block.\n\n<table>\n<thead>\n<tr>\n<th align=\"left\">Field</th>\n<th align=\"right\">Argument</th>\n<th align=\"left\">Type</th>\n<th align=\"left\">Description</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>chainId</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=hash\">Hash</a>!</td>\n<td>\n\nIdentifier of a chain in order to prevent replay attacks across channels \n\n</td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>height</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=uint64\">Uint64</a>!</td>\n<td>\n\nblock height\n\n</td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>execHeight</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=uint64\">Uint64</a>!</td>\n<td>\n\nThe height to which the block has been executed\n\n</td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>prevHash</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=hash\">Hash</a>!</td>\n<td>\n\nThe hash of the 
serialized previous block\n\n</td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>timestamp</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=uint64\">Uint64</a>!</td>\n<td>\n\nA timestamp that records when the block was created\n\n</td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>orderRoot</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=hash\">Hash</a>!</td>\n<td>\n\nThe merkle root of ordered transactions\n\n</td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>orderSignedTransactionsHash</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=hash\">Hash</a>!</td>\n<td>\n\nThe hash of ordered signed transactions\n\n</td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>confirmRoot</strong></td>\n<td valign=\"top\">[<a href=\"#/graphql_api?id=hash\">Hash</a>!]!</td>\n<td>\n\nThe merkle roots of all the confirms\n\n</td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>stateRoot</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=hash\">Hash</a>!</td>\n<td>\n\nThe merkle root of the state\n\n</td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>receiptRoot</strong></td>\n<td valign=\"top\">[<a href=\"#/graphql_api?id=hash\">Hash</a>!]!</td>\n<td>\n\nThe merkle roots of receipts\n\n</td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>cyclesUsed</strong></td>\n<td valign=\"top\">[<a href=\"#/graphql_api?id=uint64\">Uint64</a>!]!</td>\n<td>\n\nThe sum of all transaction costs\n\n</td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>proposer</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=address\">Address</a>!</td>\n<td>\n\nThe address of the proposer who packed the block\n\n</td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>proof</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=proof\">Proof</a>!</td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>validatorVersion</strong></td>\n<td valign=\"top\"><a 
href=\"#/graphql_api?id=uint64\">Uint64</a>!</td>\n<td>\n\nThe version of validator is designed for cross chain\n\n</td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>validators</strong></td>\n<td valign=\"top\">[<a href=\"#/graphql_api?id=validator\">Validator</a>!]!</td>\n<td></td>\n</tr>\n</tbody>\n</table>\n\n### Event\n\n<table>\n<thead>\n<tr>\n<th align=\"left\">Field</th>\n<th align=\"right\">Argument</th>\n<th align=\"left\">Type</th>\n<th align=\"left\">Description</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>service</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=string\">String</a>!</td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>name</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=string\">String</a>!</td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>data</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=string\">String</a>!</td>\n<td></td>\n</tr>\n</tbody>\n</table>\n\n### Proof\n\nThe verifier of the block header proved\n\n<table>\n<thead>\n<tr>\n<th align=\"left\">Field</th>\n<th align=\"right\">Argument</th>\n<th align=\"left\">Type</th>\n<th align=\"left\">Description</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>height</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=uint64\">Uint64</a>!</td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>round</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=uint64\">Uint64</a>!</td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>blockHash</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=hash\">Hash</a>!</td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>signature</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=bytes\">Bytes</a>!</td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>bitmap</strong></td>\n<td valign=\"top\"><a 
href=\"#/graphql_api?id=bytes\">Bytes</a>!</td>\n<td></td>\n</tr>\n</tbody>\n</table>\n\n### Receipt\n\n<table>\n<thead>\n<tr>\n<th align=\"left\">Field</th>\n<th align=\"right\">Argument</th>\n<th align=\"left\">Type</th>\n<th align=\"left\">Description</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>stateRoot</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=hash\">Hash</a>!</td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>height</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=uint64\">Uint64</a>!</td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>txHash</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=hash\">Hash</a>!</td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>cyclesUsed</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=uint64\">Uint64</a>!</td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>events</strong></td>\n<td valign=\"top\">[<a href=\"#/graphql_api?id=event\">Event</a>!]!</td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>response</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=receiptresponse\">ReceiptResponse</a>!</td>\n<td></td>\n</tr>\n</tbody>\n</table>\n\n### ReceiptResponse\n\n<table>\n<thead>\n<tr>\n<th align=\"left\">Field</th>\n<th align=\"right\">Argument</th>\n<th align=\"left\">Type</th>\n<th align=\"left\">Description</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>serviceName</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=string\">String</a>!</td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>method</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=string\">String</a>!</td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>response</strong></td>\n<td valign=\"top\"><a 
href=\"#/graphql_api?id=serviceresponse\">ServiceResponse</a>!</td>\n<td></td>\n</tr>\n</tbody>\n</table>\n\n### ServiceResponse\n\n<table>\n<thead>\n<tr>\n<th align=\"left\">Field</th>\n<th align=\"right\">Argument</th>\n<th align=\"left\">Type</th>\n<th align=\"left\">Description</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>code</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=uint64\">Uint64</a>!</td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>succeedData</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=string\">String</a>!</td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>errorMessage</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=string\">String</a>!</td>\n<td></td>\n</tr>\n</tbody>\n</table>\n\n### SignedTransaction\n\n<table>\n<thead>\n<tr>\n<th align=\"left\">Field</th>\n<th align=\"right\">Argument</th>\n<th align=\"left\">Type</th>\n<th align=\"left\">Description</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>chainId</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=hash\">Hash</a>!</td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>cyclesLimit</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=uint64\">Uint64</a>!</td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>cyclesPrice</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=uint64\">Uint64</a>!</td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>nonce</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=hash\">Hash</a>!</td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>timeout</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=uint64\">Uint64</a>!</td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>sender</strong></td>\n<td valign=\"top\"><a 
href=\"#/graphql_api?id=address\">Address</a>!</td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>serviceName</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=string\">String</a>!</td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>method</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=string\">String</a>!</td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>payload</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=string\">String</a>!</td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>txHash</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=hash\">Hash</a>!</td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>pubkey</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=bytes\">Bytes</a>!</td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>signature</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=bytes\">Bytes</a>!</td>\n<td></td>\n</tr>\n</tbody>\n</table>\n\n### Validator\n\nValidator address set\n\n<table>\n<thead>\n<tr>\n<th align=\"left\">Field</th>\n<th align=\"right\">Argument</th>\n<th align=\"left\">Type</th>\n<th align=\"left\">Description</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>pubkey</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=bytes\">Bytes</a>!</td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>proposeWeight</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=int\">Int</a>!</td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>voteWeight</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=int\">Int</a>!</td>\n<td></td>\n</tr>\n</tbody>\n</table>\n\n## Inputs\n\n### InputRawTransaction\n\nThere was many types of transaction in Muta, A transaction often require computing resources or write data to chain,these resources are valuable so we need to pay some token for 
them.InputRawTransaction describes information above\n\n<table>\n<thead>\n<tr>\n<th colspan=\"2\" align=\"left\">Field</th>\n<th align=\"left\">Type</th>\n<th align=\"left\">Description</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>chainId</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=hash\">Hash</a>!</td>\n<td>\n\nIdentifier of the chain.\n\n</td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>cyclesLimit</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=uint64\">Uint64</a>!</td>\n<td>\n\nMostly like the gas limit in Ethereum, describes the fee that you are willing to pay the highest price for the transaction\n\n</td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>cyclesPrice</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=uint64\">Uint64</a>!</td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>nonce</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=hash\">Hash</a>!</td>\n<td>\n\nEvery transaction has its own id, unlike Ethereum's nonce,the nonce in Muta is an hash\n\n</td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>timeout</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=uint64\">Uint64</a>!</td>\n<td>\n\nFor security and performance reasons, Muta will only deal with trade request over a period of time,the `timeout` should be `timeout > current_block_height` and `timeout < current_block_height + timeout_gap`,the `timeout_gap` generally equal to 20.\n\n</td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>serviceName</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=string\">String</a>!</td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>method</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=string\">String</a>!</td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>payload</strong></td>\n<td valign=\"top\"><a 
href=\"#/graphql_api?id=string\">String</a>!</td>\n<td></td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>sender</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=address\">Address</a>!</td>\n<td></td>\n</tr>\n</tbody>\n</table>\n\n### InputTransactionEncryption\n\nSignature of the transaction\n\n<table>\n<thead>\n<tr>\n<th colspan=\"2\" align=\"left\">Field</th>\n<th align=\"left\">Type</th>\n<th align=\"left\">Description</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>txHash</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=hash\">Hash</a>!</td>\n<td>\n\nThe digest of the transaction\n\n</td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>pubkey</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=bytes\">Bytes</a>!</td>\n<td>\n\nThe public key of transfer\n\n</td>\n</tr>\n<tr>\n<td colspan=\"2\" valign=\"top\"><strong>signature</strong></td>\n<td valign=\"top\"><a href=\"#/graphql_api?id=bytes\">Bytes</a>!</td>\n<td>\n\nThe signature of the transaction\n\n</td>\n</tr>\n</tbody>\n</table>\n\n## Scalars\n\n### Address\n\n20 bytes of account address\n\n### Boolean\n\n### Bytes\n\nBytes corresponding hex string.\n\n### Hash\n\nThe output digest of Keccak hash function\n\n### Int\n\n### String\n\n### Uint64\n\nUint64\n\n"
  },
  {
    "path": "docs/how_to_deploy_a_core_crate.md",
    "content": "# How to develop a core crate.\n\n> This document will show you how to develop a core crate.\n\nNow, take a look at an example, we are going to develop a `storage` crate, which is used to store data from the blockchain.\n\n## Step 0 Define the trait of the crate.\n\n```rust\n// muta/protocol/src/traits/storage.rs\n\nuse async_trait::async_trait;\n\n#[async_trait]\npub trait Storage: Send + Sync {\n    async fn insert_transactions(&self, signed_txs: Vec<SignedTransaction>) -> ProtocolResult<()>;\n\n    async fn get_transaction_by_hash(\n        &self,\n        tx_hash: Hash,\n    ) -> ProtocolResult<Option<SignedTransaction>>;\n}\n```\n\n**Starting with the first line of code, you will first see a macro: `#[async_trait]`:**\n\n[`#[async_trait]`](https://crates.io/crates/async-trait) is a macro that allows you to define `async fn` in a `trait`. In most cases you should use async to define your `fn`.\n\n**Next is the second line, a trait declaration:**\n\nHere we constrain this trait to `Send` + `Sync`,if you don't understand the semantics of `Send` and `Sync` you can get knowledge from [the official documentation](https://doc.rust-lang.org/std/marker/index.html).\n\nIn short, this constraint is necessary because our runtime is always asynchronous, and you must ensure that your crate satisfies the constraints under asynchronous conditions.\n\n**Define the function signature:**\n\nYou only need to pay attention to two points:\n\n1. Always use `&self` and handle the internal variables yourself.\n2. The return value is uniformly used with `ProtocolResult<T>`, `ProtocolResult` is wrap to `Result <T, ProtocolError>`, and `ProtocolError` is a global error type.\n\n## Step 1 The adapter that defines crate\n\nEarlier we mentioned that the role of `storage` is to store blockchain data, but it does not care where the final data is stored. 
It can be memory, a networked database, a hard drive, etc.\n\nA `StorageAdapter` decouples the persistence logic: it specifies a set of key-value database interfaces, and different `StorageAdapter` implementations satisfy the data persistence requirements in a variety of situations.\n\n```rust\n// muta/protocol/src/traits/storage.rs\n\nuse async_trait::async_trait;\nuse bytes::Bytes;\n\n#[async_trait]\npub trait Storage<Adapter: StorageAdapter>: Send + Sync {\n    async fn insert_transactions(&self, signed_txs: Vec<SignedTransaction>) -> ProtocolResult<()>;\n\n    async fn get_transaction_by_hash(\n        &self,\n        tx_hash: Hash,\n    ) -> ProtocolResult<Option<SignedTransaction>>;\n}\n\n#[async_trait]\npub trait StorageAdapter: Send + Sync {\n    async fn get(&self, c: StorageCategory, key: Bytes) -> ProtocolResult<Option<Bytes>>;\n\n    async fn get_batch(\n        &self,\n        c: StorageCategory,\n        keys: Vec<Bytes>,\n    ) -> ProtocolResult<Vec<Option<Bytes>>>;\n\n    async fn insert(&self, c: StorageCategory, key: Bytes, value: Bytes) -> ProtocolResult<()>;\n\n    async fn insert_batch(\n        &self,\n        c: StorageCategory,\n        keys: Vec<Bytes>,\n        values: Vec<Bytes>,\n    ) -> ProtocolResult<()>;\n\n    async fn contains(&self, c: StorageCategory, key: Bytes) -> ProtocolResult<bool>;\n\n    async fn remove(&self, c: StorageCategory, key: Bytes) -> ProtocolResult<()>;\n\n    async fn remove_batch(&self, c: StorageCategory, keys: Vec<Bytes>) -> ProtocolResult<()>;\n}\n```\n\nFinally, don't forget to add the `Adapter: StorageAdapter` constraint to `Storage`. Its purpose is to remind you to always rely on an adapter.\n\n## Step 2 Implement the storage crate\n\nSee: https://github.com/nervosnetwork/muta/blob/master/core/storage/src/lib.rs\n\nNote:\n\n1. A core crate must not depend on other core crates.\n2. An adapter may depend on other core crates.\n"
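To make the adapter idea concrete, here is a hedged, heavily simplified sketch: a toy in-memory adapter backed by a `HashMap`. The real trait is `async` (via `#[async_trait]`), keyed by `StorageCategory`, and returns `ProtocolResult`; this sketch is synchronous and std-only, and `MemoryAdapter` is a hypothetical name, not a type from the Muta codebase.

```rust
use std::collections::HashMap;

// Simplified stand-in for Muta's `Bytes` type.
type Bytes = Vec<u8>;

// A synchronous, single-category sketch of the StorageAdapter interface.
// The real trait is async and takes a StorageCategory per call.
trait StorageAdapter {
    fn get(&self, key: &Bytes) -> Option<Bytes>;
    fn insert(&mut self, key: Bytes, value: Bytes);
    fn contains(&self, key: &Bytes) -> bool;
    fn remove(&mut self, key: &Bytes);
}

// Hypothetical in-memory adapter: handy for tests, while a RocksDB-backed
// adapter would implement the same trait for production.
struct MemoryAdapter {
    db: HashMap<Bytes, Bytes>,
}

impl MemoryAdapter {
    fn new() -> Self {
        MemoryAdapter { db: HashMap::new() }
    }
}

impl StorageAdapter for MemoryAdapter {
    fn get(&self, key: &Bytes) -> Option<Bytes> {
        self.db.get(key).cloned()
    }

    fn insert(&mut self, key: Bytes, value: Bytes) {
        self.db.insert(key, value);
    }

    fn contains(&self, key: &Bytes) -> bool {
        self.db.contains_key(key)
    }

    fn remove(&mut self, key: &Bytes) {
        self.db.remove(key);
    }
}

fn main() {
    let mut adapter = MemoryAdapter::new();
    adapter.insert(b"tx_hash".to_vec(), b"signed_tx".to_vec());
    assert!(adapter.contains(&b"tx_hash".to_vec()));
    assert_eq!(adapter.get(&b"tx_hash".to_vec()), Some(b"signed_tx".to_vec()));
    adapter.remove(&b"tx_hash".to_vec());
    assert!(!adapter.contains(&b"tx_hash".to_vec()));
}
```

A `Storage` implementation would then hold such an adapter and serialize its values into the key-value interface, which is exactly why the `Storage<Adapter: StorageAdapter>` bound exists.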
  },
  {
    "path": "docs/layout.md",
    "content": "## Layout\n\n```sh\n.\n├── common\n│   ├── channel\n│   ├── config-parser\n│   ├── crypto\n│   ├── logger\n│   ├── merkle\n│   └── metrics\n│   └── pubsub\n├── core\n│   ├── api\n│   ├── consensus\n│   ├── database\n│   ├── executor\n│   ├── network\n│   ├── storage\n│   └── mempool\n├── devtools\n│   └── ci\n├── docs\n│   └── menu.md\n├── protocol\n│   ├── codec\n│   ├── traits\n│   └── types\n├── src\n   └── main.rs\n```\n\nA brief description:\n\n- `common` Contains utilities for muta-chain.\n- `core` Contains implementations of module traits.\n- `devtools` Contains scripts and configurations for better use of the this repository.\n- `docs` for project documentations.\n- `protocol` Contains types, serialization, core traits for muta-chain.\n- `src` Contains main packages\n"
  },
  {
    "path": "docs/resources.md",
    "content": "# Resources\n\n## SDK\n\n- [muta-sdk-java](https://dl.bintray.com/lycrushamster/Muta-Java-SDK/org/nervos/muta-sdk-java/1.4/) - The Java SDK\n- [muta-sdk-js](https://www.npmjs.com/package/@mutadev/muta-sdk/v/0.2.0-alpha.1) - The JavaScript SDK\n\n## Others\n\n- [muta-cli](https://github.com/nervosnetwork/muta-cli) - A command-line util for new to Muta\n- [muta-bench](https://github.com/nervosnetwork/muta-benchmark/tree/v0.1.12) - A transfer-based performance test script\n- [hermit-purple](https://github.com/homura/hermit-purple-server) - Cache server for Muta\n\n"
  },
  {
    "path": "examples/byzantine_node.rs",
    "content": "use std::fs;\n\nuse byzantine::config::{Config, Generators};\nuse byzantine::default_start::start;\nuse protocol::types::Genesis;\n\nfn main() {\n    let config_path =\n        std::env::var(\"CONFIG\").unwrap_or_else(|_| \"devtools/chain/config.toml\".to_owned());\n    let genesis_path =\n        std::env::var(\"GENESIS\").unwrap_or_else(|_| \"devtools/chain/genesis.toml\".to_owned());\n    let generators_path =\n        std::env::var(\"GENERATORS\").unwrap_or_else(|_| \"byzantine/generators.toml\".to_owned());\n\n    let config: Config = common_config_parser::parse(&config_path).expect(\"parse config failed\");\n\n    let genesis_toml = fs::read_to_string(&genesis_path).expect(\"read genesis.toml failed\");\n    let genesis: Genesis = toml::from_str(&genesis_toml).expect(\"parse genesis failed\");\n\n    let generators_toml =\n        fs::read_to_string(&generators_path).expect(\"read generators.toml failed\");\n    let generators: Generators = toml::from_str(&generators_toml).expect(\"parse generators failed\");\n\n    let mut rt = tokio::runtime::Runtime::new().expect(\"new tokio runtime\");\n    let local = tokio::task::LocalSet::new();\n    local\n        .block_on(\n            &mut rt,\n            async move { start(config, genesis, generators).await },\n        )\n        .expect(\"start failed\");\n}\n"
  },
  {
    "path": "examples/config-1.toml",
    "content": "data_path = \"./devtools/chain/data/1\"\nprivkey = \"0x5ec982173d54d830b6789cbbbe43eaa2853a5ff752d1ebc1b266cf9790314f8a\"\n\n[network]\nlistening_address = \"0.0.0.0:1337\"\nrpc_timeout = 10\n\n[graphql]\ngraphiql_uri = \"/graphiql\"\nlistening_address = \"0.0.0.0:8000\"\ngraphql_uri = \"/graphql\"\nworkers = 0 # if 0, uses number of available logical cpu as threads count.\nmaxconn = 25000\nmax_payload_size = 1048576\n\n[executor]\nlight = false\ntriedb_cache_size = 2000\n\n[mempool]\nbroadcast_txs_size = 200\nbroadcast_txs_interval = 200\npool_size = 1000\n\n[logger]\nmetrics = false\nlog_path = \"./devtools/chain/logs/1\"\nlog_to_console = true\nfilter = \"info\"\nlog_to_file = true\nconsole_show_file_and_line = false\nfile_size_limit = 1073741824 # 1 GiB\n\n[rocksdb]\nmax_open_files = 64\n"
  },
  {
    "path": "examples/config-2.toml",
    "content": "data_path = \"./devtools/chain/data/2\"\nprivkey = \"0x8dfbd3c689308d29c058cce163984a2ae8d5fc5191ce6b1e18bd1d7b95a8c632\"\n\n[network]\nlistening_address = \"0.0.0.0:1338\"\nrpc_timeout = 10\n\n[[network.bootstraps]]\npeer_id = \"QmTEJkB5QKWsEq37huryZZfVvqBKb54sHnKn9TQcA6j3n9\"\naddress = \"127.0.0.1:1337\" # Replace it with your IP\n\n[graphql]\ngraphiql_uri = \"/graphiql\"\nlistening_address = \"0.0.0.0:8001\"\ngraphql_uri = \"/graphql\"\nworkers = 0 # if 0, uses number of available logical cpu as threads count.\nmaxconn = 25000\nmax_payload_size = 1048576\n\n[executor]\nlight = false\ntriedb_cache_size = 2000\n\n[mempool]\nbroadcast_txs_size = 200\nbroadcast_txs_interval = 200\npool_size = 1000\n\n[logger]\nmetrics = false\nlog_path = \"./devtools/chain/logs/2\"\nlog_to_console = true\nfilter = \"info\"\nlog_to_file = true\nconsole_show_file_and_line = false\nfile_size_limit = 1073741824 # 1 GiB\n\n[rocksdb]\nmax_open_files = 64\n"
  },
  {
    "path": "examples/config-3.toml",
    "content": "data_path = \"./devtools/chain/data/3\"\nprivkey = \"0xfc659f0ed09a4ba0d2d1836af7520d1a050a7739d598dc98517bbbe7a2e38124\"\n\n[network]\nlistening_address = \"0.0.0.0:1339\"\nrpc_timeout = 10\n\n[[network.bootstraps]]\npeer_id = \"QmTEJkB5QKWsEq37huryZZfVvqBKb54sHnKn9TQcA6j3n9\"\naddress = \"127.0.0.1:1337\" # Replace it with your IP\n\n[graphql]\ngraphiql_uri = \"/graphiql\"\nlistening_address = \"0.0.0.0:8002\"\ngraphql_uri = \"/graphql\"\nworkers = 0 # if 0, uses number of available logical cpu as threads count.\nmaxconn = 25000\nmax_payload_size = 1048576\n\n[executor]\nlight = false\ntriedb_cache_size = 2000\n\n[mempool]\nbroadcast_txs_size = 200\nbroadcast_txs_interval = 200\npool_size = 1000\n\n[logger]\nmetrics = false\nlog_path = \"./devtools/chain/logs/3\"\nlog_to_console = true\nfilter = \"info\"\nlog_to_file = true\nconsole_show_file_and_line = false\nfile_size_limit = 1073741824 # 1 GiB\n\n[rocksdb]\nmax_open_files = 64\n"
  },
  {
    "path": "examples/config-4.toml",
    "content": "data_path = \"./devtools/chain/data/4\"\nprivkey = \"0x7c01d6539419cffc78ab0779dabe88fad3f70c20ef47a562ac4ba5b7bd704b8e\"\n\n[network]\nlistening_address = \"0.0.0.0:1340\"\nrpc_timeout = 10\n\n[[network.bootstraps]]\npeer_id = \"QmTEJkB5QKWsEq37huryZZfVvqBKb54sHnKn9TQcA6j3n9\"\naddress = \"127.0.0.1:1337\" # Replace it with your IP\n\n[graphql]\ngraphiql_uri = \"/graphiql\"\nlistening_address = \"0.0.0.0:8004\"\ngraphql_uri = \"/graphql\"\nworkers = 0 # If 0, use the number of available logical CPUs as the worker thread count.\nmaxconn = 25000\nmax_payload_size = 1048576\n\n[executor]\nlight = false\ntriedb_cache_size = 2000\n\n[mempool]\nbroadcast_txs_size = 200\nbroadcast_txs_interval = 200\npool_size = 1000\n\n[logger]\nmetrics = false\nlog_path = \"./devtools/chain/logs/4\"\nlog_to_console = true\nfilter = \"info\"\nlog_to_file = true\nconsole_show_file_and_line = false\nfile_size_limit = 1073741824 # 1 GiB\n\n[rocksdb]\nmax_open_files = 64\n"
  },
  {
    "path": "examples/genesis.toml",
    "content": "timestamp = 0\nprevhash = \"0x44915be5b6c20b0678cf05fcddbbaa832e25d7e6ac538784cd5c24de00d47472\"\n\n[[services]]\nname = \"asset\"\npayload = '''\n{\n    \"id\": \"0xf56924db538e77bb5951eb5ff0d02b88983c49c45eea30e8ae3e7234b311436c\",\n   \"name\": \"MutaToken\",\n   \"symbol\": \"MT\",\n   \"supply\": 320000011,\n   \"issuer\": \"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\"\n}\n'''\n\n# private keys:\n# 0x5ec982173d54d830b6789cbbbe43eaa2853a5ff752d1ebc1b266cf9790314f8a\n# 0x8dfbd3c689308d29c058cce163984a2ae8d5fc5191ce6b1e18bd1d7b95a8c632\n# 0xfc659f0ed09a4ba0d2d1836af7520d1a050a7739d598dc98517bbbe7a2e38124\n# 0x7c01d6539419cffc78ab0779dabe88fad3f70c20ef47a562ac4ba5b7bd704b8e\n[[services]]\nname = \"metadata\"\npayload = '''\n{\n    \"chain_id\": \"0xb6a4d7da21443f5e816e8700eea87610e6d769657d6b8ec73028457bf2ca4036\",\n    \"bech32_address_hrp\": \"muta\",\n    \"common_ref\": \"0x6c747758636859487038\",\n    \"timeout_gap\": 20,\n    \"cycles_limit\": 999999999999,\n    \"cycles_price\": 1,\n    \"interval\": 3000,\n    \"verifier_list\": [\n       {\n           \"bls_pub_key\": \"0x04102947214862a503c73904deb5818298a186d68c7907bb609583192a7de6331493835e5b8281f4d9ee705537c0e765580e06f86ddce5867812fceb42eecefd209f0eddd0389d6b7b0100f00fb119ef9ab23826c6ea09aadcc76fa6cea6a32724\",\n           \"pub_key\": \"0x02ef0cb0d7bc6c18b4bea1f5908d9106522b35ab3c399369605d4242525bda7e60\",\n           \"address\": \"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\",\n           \"propose_weight\": 1,\n           \"vote_weight\": 1\n       },\n        {\n            \"bls_pub_key\": \"0x0418e16bd67ce0b58a575f506967706be733c96feef19a06bb37d510000d89905f2f61b7da4d831cb1bb01e2f99833362602a0a252dfd1e95c75c1eadb0db220e3722c9a077b730e7f6cec5f4a55bfc9a4d88db3e6c27684aa8335456824070501\",\n            \"pub_key\": \"0x03dbd1dbf3835efb4ec34a360ee671ee1d22425425368edfc5b9ffafc812e86200\",\n            \"address\": \"muta15a8a9ksxe3hhjpw3l7wz7ry778qg8h9wz8y35p\",\n            
\"propose_weight\": 1,\n            \"vote_weight\": 1\n        },\n        {\n            \"bls_pub_key\": \"0x040944276f414c46330227f2c0c5a998aba3d400ed19cfc2d31d3e7fcc442ce9f91ea86e172dc3c1b6cedc364bd52ba1cf074529e52337cd80ab32a196a3d42ab46eee25120b44fdd2b5c4268bf3b84c72d068ea83d0530a5461dc30b6a63a60e9\",\n            \"pub_key\": \"0x03cba4ae147eb24891d78c9527798577419b7db913b4b03ba548c28f40c5841166\",\n            \"address\": \"muta1h99h6f54vytatam3ckftrmvcdpn4jlmnwm6hl0\",\n            \"propose_weight\": 1,\n            \"vote_weight\": 1\n        },\n        {\n            \"bls_pub_key\": \"0x041342e9a35278b298a67006cd98d663053e3f7eb72a08ffe9835074e430b2112a866c1c8d981edcd793cb16d459fc952b0464007d876355eea671e74727588bae69740c6a0b49d8142b7b0821a78acd34b4d8012b9ef69444a476e03d5fea5330\",\n            \"pub_key\": \"0x0245a0c291f56c2c5751db1c0bf1ed986e703d29a0fe023df770fe92c7c2347316\",\n            \"address\": \"muta16xukzz73l5r6vulk9q697tave8c5mfu33mwud6\",\n            \"propose_weight\": 1,\n            \"vote_weight\": 1\n        }\n    ],\n    \"propose_ratio\": 15,\n    \"prevote_ratio\": 10,\n    \"precommit_ratio\": 10,\n    \"brake_ratio\": 7,\n    \"tx_num_limit\": 20000,\n    \"max_tx_size\": 1024\n}\n'''\n"
  },
  {
    "path": "examples/muta-chain.rs",
    "content": "use derive_more::{Display, From};\nuse protocol::traits::{SDKFactory, Service, ServiceMapping, ServiceSDK};\nuse protocol::{ProtocolError, ProtocolErrorKind, ProtocolResult};\n\nuse asset::{AssetService, ASSET_SERVICE_NAME};\nuse authorization::{AuthorizationService, AUTHORIZATION_SERVICE_NAME};\nuse metadata::{MetadataService, METADATA_SERVICE_NAME};\nuse multi_signature::{MultiSignatureService, MULTI_SIG_SERVICE_NAME};\nuse util::{UtilService, UTIL_SERVICE_NAME};\n\nstruct DefaultServiceMapping;\n\nimpl ServiceMapping for DefaultServiceMapping {\n    fn get_service<SDK: 'static + ServiceSDK, Factory: SDKFactory<SDK>>(\n        &self,\n        name: &str,\n        factory: &Factory,\n    ) -> ProtocolResult<Box<dyn Service>> {\n        let sdk = factory.get_sdk(name)?;\n        let service = match name {\n            AUTHORIZATION_SERVICE_NAME => {\n                let multi_sig_sdk = factory.get_sdk(\"multi_signature\")?;\n                Box::new(AuthorizationService::new(\n                    sdk,\n                    MultiSignatureService::new(multi_sig_sdk),\n                )) as Box<dyn Service>\n            }\n            ASSET_SERVICE_NAME => Box::new(AssetService::new(sdk)) as Box<dyn Service>,\n            METADATA_SERVICE_NAME => Box::new(MetadataService::new(sdk)) as Box<dyn Service>,\n            MULTI_SIG_SERVICE_NAME => Box::new(MultiSignatureService::new(sdk)) as Box<dyn Service>,\n            UTIL_SERVICE_NAME => Box::new(UtilService::new(sdk)) as Box<dyn Service>,\n            _ => {\n                return Err(MappingError::NotFoundService {\n                    service: name.to_owned(),\n                }\n                .into());\n            }\n        };\n\n        Ok(service)\n    }\n\n    fn list_service_name(&self) -> Vec<String> {\n        vec![\n            ASSET_SERVICE_NAME.to_owned(),\n            AUTHORIZATION_SERVICE_NAME.to_owned(),\n            METADATA_SERVICE_NAME.to_owned(),\n            
MULTI_SIG_SERVICE_NAME.to_owned(),\n            UTIL_SERVICE_NAME.to_owned(),\n        ]\n    }\n}\n\npub fn main() {\n    muta::run(\n        DefaultServiceMapping,\n        \"muta-chain\",\n        \"v0.2.1\",\n        \"Muta Dev <muta@nervos.org>\",\n        \"./devtools/chain/config.toml\",\n        \"./devtools/chain/genesis.toml\",\n        None,\n    )\n}\n\n#[derive(Debug, Display, From)]\npub enum MappingError {\n    #[display(fmt = \"service {:?} was not found\", service)]\n    NotFoundService { service: String },\n}\n\nimpl std::error::Error for MappingError {}\n\nimpl From<MappingError> for ProtocolError {\n    fn from(err: MappingError) -> ProtocolError {\n        ProtocolError::new(ProtocolErrorKind::Service, Box::new(err))\n    }\n}\n"
  },
  {
    "path": "framework/Cargo.toml",
    "content": "[package]\nname = \"framework\"\nversion = \"0.2.1\"\nauthors = [\"Muta Dev <muta@nervos.org>\"]\nedition = \"2018\"\nrepository = \"https://github.com/nervosnetwork/muta\"\n\n# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html\n\n[dependencies]\ncommon-apm = { path = \"../common/apm\" }\nprotocol = { path = \"../protocol\", package = \"muta-protocol\" }\nasset = { path = \"../built-in-services/asset\"}\nmetadata = { path = \"../built-in-services/metadata\"}\nutil = { path = \"../built-in-services/util\"}\n\nhasher = { version = \"0.1\", features = ['hash-keccak'] }\ncita_trie = \"2.0\"\nbytes = \"0.5\"\nderive_more = \"0.99\"\nrocksdb = \"0.14\"\nlazy_static = \"1.4\"\nbyteorder = \"1.3\"\nrlp = \"0.4\"\nfutures = \"0.3\"\njson = \"0.12\"\nhex = \"0.4\"\nserde_json = \"1.0\"\nlog = \"0.4\"\nrayon = \"1.3\"\nlru-cache = \"0.1\"\nlru = \"0.6\"\nparking_lot = \"0.11\"\nrand = { version = \"0.7\", features = [\"small_rng\"]}\n\n[dev-dependencies]\nasync-trait = \"0.1\"\ntoml = \"0.5\"\nbinding-macro = { path = \"../binding-macro\" }\nserde = { version = \"1.0\", features = [\"derive\"] }\nmuta-codec-derive = \"0.2\"\n"
  },
  {
    "path": "framework/src/binding/mod.rs",
    "content": "#[cfg(test)]\nmod tests;\n\npub mod sdk;\npub mod state;\npub mod store;\n"
  },
  {
    "path": "framework/src/binding/sdk/chain_querier.rs",
    "content": "use std::sync::Arc;\n\nuse derive_more::{Display, From};\nuse futures::executor::block_on;\n\nuse protocol::traits::{ChainQuerier, Context, Storage};\nuse protocol::types::{Block, Hash, Receipt, SignedTransaction};\nuse protocol::{ProtocolError, ProtocolErrorKind, ProtocolResult};\n\npub struct DefaultChainQuerier<S: Storage> {\n    storage: Arc<S>,\n}\n\nimpl<S: Storage> DefaultChainQuerier<S> {\n    pub fn new(storage: Arc<S>) -> Self {\n        Self { storage }\n    }\n}\n\nimpl<S: Storage> ChainQuerier for DefaultChainQuerier<S> {\n    fn get_transaction_by_hash(&self, tx_hash: &Hash) -> ProtocolResult<Option<SignedTransaction>> {\n        let ret = block_on(\n            self.storage\n                .get_transaction_by_hash(Context::new(), &tx_hash),\n        )\n        .map_err(|_| ChainQueryError::AsyncStorage)?;\n\n        Ok(ret)\n    }\n\n    fn get_block_by_height(&self, height: Option<u64>) -> ProtocolResult<Option<Block>> {\n        if let Some(u) = height {\n            let ret = block_on(self.storage.get_block(Context::new(), u))\n                .map_err(|_| ChainQueryError::AsyncStorage)?;\n\n            Ok(ret)\n        } else {\n            let ret = block_on(self.storage.get_latest_block(Context::new()))\n                .map_err(|_| ChainQueryError::AsyncStorage)?;\n\n            Ok(Some(ret))\n        }\n    }\n\n    fn get_receipt_by_hash(&self, tx_hash: &Hash) -> ProtocolResult<Option<Receipt>> {\n        let ret = block_on(\n            self.storage\n                .get_receipt_by_hash(Context::new(), tx_hash.clone()),\n        )\n        .map_err(|_| ChainQueryError::AsyncStorage)?;\n\n        Ok(ret)\n    }\n}\n\n#[derive(Debug, Display, From)]\npub enum ChainQueryError {\n    #[display(fmt = \"error when calling an async method of storage\")]\n    AsyncStorage,\n}\n\nimpl std::error::Error for ChainQueryError {}\n\nimpl From<ChainQueryError> for ProtocolError {\n    fn from(err: ChainQueryError) -> ProtocolError {\n        ProtocolError::new(ProtocolErrorKind::Binding, Box::new(err))\n    }\n}\n"
  },
  {
    "path": "framework/src/binding/sdk/mod.rs",
    "content": "mod chain_querier;\n\npub use chain_querier::{ChainQueryError, DefaultChainQuerier};\n\nuse std::cell::RefCell;\nuse std::rc::Rc;\n\nuse cita_trie::DB as TrieDB;\nuse derive_more::Display;\n\nuse protocol::fixed_codec::FixedCodec;\nuse protocol::traits::{\n    ChainQuerier, SDKFactory, ServiceSDK, ServiceState, StoreArray, StoreBool, StoreMap,\n    StoreString, StoreUint64,\n};\nuse protocol::types::{Address, Block, Hash, Receipt, SignedTransaction};\nuse protocol::{ProtocolError, ProtocolErrorKind, ProtocolResult};\n\nuse crate::binding::state::GeneralServiceState;\nuse crate::binding::store::{\n    DefaultStoreArray, DefaultStoreBool, DefaultStoreMap, DefaultStoreString, DefaultStoreUint64,\n};\nuse crate::executor::ServiceStateMap;\n\npub struct DefaultSDKFactory<C: ChainQuerier, DB: TrieDB> {\n    states:        Rc<ServiceStateMap<DB>>,\n    chain_querier: Rc<C>,\n}\n\nimpl<C: ChainQuerier, DB: TrieDB> DefaultSDKFactory<C, DB> {\n    pub fn new(states: Rc<ServiceStateMap<DB>>, chain_querier: Rc<C>) -> Self {\n        DefaultSDKFactory {\n            states,\n            chain_querier,\n        }\n    }\n}\n\nimpl<C: ChainQuerier, DB: 'static + TrieDB>\n    SDKFactory<DefaultServiceSDK<GeneralServiceState<DB>, C>> for DefaultSDKFactory<C, DB>\n{\n    fn get_sdk(&self, name: &str) -> ProtocolResult<DefaultServiceSDK<GeneralServiceState<DB>, C>> {\n        let state = self.states.get(name).ok_or(SDKError::NotFoundService {\n            service: name.to_owned(),\n        })?;\n\n        Ok(DefaultServiceSDK::new(\n            Rc::clone(state),\n            Rc::clone(&self.chain_querier),\n        ))\n    }\n}\n\npub struct DefaultServiceSDK<S: ServiceState, C: ChainQuerier> {\n    state:         Rc<RefCell<S>>,\n    chain_querier: Rc<C>,\n}\n\nimpl<S: ServiceState, C: ChainQuerier> DefaultServiceSDK<S, C> {\n    pub fn new(state: Rc<RefCell<S>>, chain_querier: Rc<C>) -> Self {\n        Self {\n            state,\n            chain_querier,\n        
}\n    }\n}\n\nimpl<S: 'static + ServiceState, C: ChainQuerier> ServiceSDK for DefaultServiceSDK<S, C> {\n    // Alloc or recover a `Map` by `var_name`\n    fn alloc_or_recover_map<\n        K: 'static + Send + FixedCodec + Clone + PartialEq,\n        V: 'static + FixedCodec,\n    >(\n        &mut self,\n        var_name: &str,\n    ) -> Box<dyn StoreMap<K, V>> {\n        Box::new(DefaultStoreMap::<S, K, V>::new(\n            Rc::clone(&self.state),\n            var_name,\n        ))\n    }\n\n    // Alloc or recover an `Array` by `var_name`\n    fn alloc_or_recover_array<E: 'static + FixedCodec>(\n        &mut self,\n        var_name: &str,\n    ) -> Box<dyn StoreArray<E>> {\n        Box::new(DefaultStoreArray::<S, E>::new(\n            Rc::clone(&self.state),\n            var_name,\n        ))\n    }\n\n    // Alloc or recover a `Uint64` by `var_name`\n    fn alloc_or_recover_uint64(&mut self, var_name: &str) -> Box<dyn StoreUint64> {\n        Box::new(DefaultStoreUint64::new(Rc::clone(&self.state), var_name))\n    }\n\n    // Alloc or recover a `String` by `var_name`\n    fn alloc_or_recover_string(&mut self, var_name: &str) -> Box<dyn StoreString> {\n        Box::new(DefaultStoreString::new(Rc::clone(&self.state), var_name))\n    }\n\n    // Alloc or recover a `Bool` by `var_name`\n    fn alloc_or_recover_bool(&mut self, var_name: &str) -> Box<dyn StoreBool> {\n        Box::new(DefaultStoreBool::new(Rc::clone(&self.state), var_name))\n    }\n\n    // Get a value from the service state by key\n    fn get_value<Key: FixedCodec, Ret: FixedCodec>(&self, key: &Key) -> Option<Ret> {\n        self.state\n            .borrow()\n            .get(key)\n            .unwrap_or_else(|e| panic!(\"service sdk get value failed: {}\", e))\n    }\n\n    // Set a value in the service state by key\n    fn set_value<Key: FixedCodec, Val: FixedCodec>(&mut self, key: Key, val: Val) {\n        self.state\n            .borrow_mut()\n            .insert(key, val)\n            .unwrap_or_else(|e| panic!(\"service sdk set value failed: {}\", e));\n    }\n\n    // Get a value from the specified address by key\n    fn get_account_value<Key: FixedCodec, Ret: FixedCodec>(\n        &self,\n        address: &Address,\n        key: &Key,\n    ) -> Option<Ret> {\n        self.state\n            .borrow()\n            .get_account_value(address, key)\n            .unwrap_or_else(|e| panic!(\"service sdk get account value failed: {}\", e))\n    }\n\n    // Insert a key / value pair under the specified address\n    fn set_account_value<Key: FixedCodec, Val: FixedCodec>(\n        &mut self,\n        address: &Address,\n        key: Key,\n        val: Val,\n    ) {\n        self.state\n            .borrow_mut()\n            .set_account_value(address, key, val)\n            .unwrap_or_else(|e| panic!(\"service sdk set account value failed: {}\", e));\n    }\n\n    // Get a signed transaction by `tx_hash`\n    // if not found on the chain, return None\n    fn get_transaction_by_hash(&self, tx_hash: &Hash) -> Option<SignedTransaction> {\n        self.chain_querier\n            .get_transaction_by_hash(tx_hash)\n            .unwrap_or_else(|e| panic!(\"service sdk get transaction by hash failed: {}\", e))\n    }\n\n    // Get a block by `height`\n    // if not found on the chain, return None\n    // When the parameter `height` is None, get the latest (executing) `block`\n    fn get_block_by_height(&self, height: Option<u64>) -> Option<Block> {\n        self.chain_querier\n            .get_block_by_height(height)\n            .unwrap_or_else(|e| panic!(\"service sdk get block by height failed: {}\", e))\n    }\n\n    // Get a receipt by `tx_hash`\n    // if not found on the chain, return None\n    fn get_receipt_by_hash(&self, tx_hash: &Hash) -> Option<Receipt> {\n        self.chain_querier\n            .get_receipt_by_hash(tx_hash)\n            .unwrap_or_else(|e| panic!(\"service sdk get receipt by hash failed: {}\", e))\n    }\n}\n\n#[derive(Debug, Display)]\npub enum SDKError {\n    #[display(fmt = \"service {:?} was not found\", service)]\n    NotFoundService { service: String },\n}\n\nimpl std::error::Error for SDKError {}\n\nimpl From<SDKError> for ProtocolError {\n    fn from(err: SDKError) -> ProtocolError {\n        ProtocolError::new(ProtocolErrorKind::Binding, Box::new(err))\n    }\n}\n"
  },
  {
    "path": "framework/src/binding/state/mod.rs",
    "content": "mod trie;\npub mod trie_db;\n\npub use trie::{MPTTrie, MPTTrieError};\npub use trie_db::{RocksTrieDB, RocksTrieDBError};\n\nuse std::collections::HashMap;\n\nuse bytes::Bytes;\nuse cita_trie::DB as TrieDB;\n\nuse protocol::fixed_codec::FixedCodec;\nuse protocol::traits::ServiceState;\nuse protocol::types::{Address, Hash, MerkleRoot};\nuse protocol::ProtocolResult;\n\npub struct GeneralServiceState<DB: TrieDB> {\n    trie: MPTTrie<DB>,\n\n    // TODO(@yejiayu): The value of HashMap should be changed to Box<dyn Any> to avoid multiple\n    // serializations.\n    cache_map: HashMap<Bytes, Bytes>,\n    stash_map: HashMap<Bytes, Bytes>,\n}\n\nimpl<DB: TrieDB> GeneralServiceState<DB> {\n    pub fn new(trie: MPTTrie<DB>) -> Self {\n        Self {\n            trie,\n\n            cache_map: HashMap::new(),\n            stash_map: HashMap::new(),\n        }\n    }\n\n    fn get_bytes_value(&self, key: Bytes) -> ProtocolResult<Option<Bytes>> {\n        if let Some(value_bytes) = self.cache_map.get(&key) {\n            if value_bytes.is_empty() {\n                return Ok(None);\n            }\n            return Ok(Some(value_bytes.clone()));\n        }\n\n        if let Some(value_bytes) = self.stash_map.get(&key) {\n            if value_bytes.is_empty() {\n                return Ok(None);\n            }\n            return Ok(Some(value_bytes.clone()));\n        }\n\n        if let Some(value_bytes) = self.trie.get(&key)? {\n            if value_bytes.is_empty() {\n                return Ok(None);\n            }\n            return Ok(Some(value_bytes));\n        }\n\n        Ok(None)\n    }\n}\n\nimpl<DB: TrieDB> ServiceState for GeneralServiceState<DB> {\n    fn get<Key: FixedCodec, Ret: FixedCodec>(&self, key: &Key) -> ProtocolResult<Option<Ret>> {\n        let encoded_key = key.encode_fixed()?;\n\n        if let Some(value_bytes) = self.get_bytes_value(encoded_key)? 
{\n            let inst = <_>::decode_fixed(value_bytes)?;\n            Ok(Some(inst))\n        } else {\n            Ok(None)\n        }\n    }\n\n    fn contains<Key: FixedCodec>(&self, key: &Key) -> ProtocolResult<bool> {\n        let encoded_key = key.encode_fixed()?;\n\n        Ok(self.get_bytes_value(encoded_key)?.is_some())\n    }\n\n    // Insert a pair of key / value\n    // Note: This key/value pair will go into the cache first\n    // and will not be persisted to MPT until `commit` is called.\n    fn insert<Key: FixedCodec, Value: FixedCodec>(\n        &mut self,\n        key: Key,\n        value: Value,\n    ) -> ProtocolResult<()> {\n        self.cache_map\n            .insert(key.encode_fixed()?, value.encode_fixed()?);\n        Ok(())\n    }\n\n    fn get_account_value<Key: FixedCodec, Ret: FixedCodec>(\n        &self,\n        address: &Address,\n        key: &Key,\n    ) -> ProtocolResult<Option<Ret>> {\n        let hash_key = get_address_key(address, key)?;\n        self.get(&hash_key)\n    }\n\n    fn set_account_value<Key: FixedCodec, Val: FixedCodec>(\n        &mut self,\n        address: &Address,\n        key: Key,\n        val: Val,\n    ) -> ProtocolResult<()> {\n        let hash_key = get_address_key(address, &key)?;\n        self.insert(hash_key, val)\n    }\n\n    // Roll back all data in the cache\n    fn revert_cache(&mut self) -> ProtocolResult<()> {\n        self.cache_map.clear();\n        Ok(())\n    }\n\n    // Move data from cache to stash\n    fn stash(&mut self) -> ProtocolResult<()> {\n        for (k, v) in self.cache_map.drain() {\n            self.stash_map.insert(k, v);\n        }\n\n        Ok(())\n    }\n\n    // Persist data from stash into MPT\n    fn commit(&mut self) -> ProtocolResult<MerkleRoot> {\n        for (key, value) in self.stash_map.drain() {\n            self.trie.insert(key, value)?;\n        }\n\n        let root = self.trie.commit()?;\n        Ok(root)\n    }\n}\n\nfn get_address_key<Key: 
FixedCodec>(address: &Address, key: &Key) -> ProtocolResult<Hash> {\n    let mut hash_bytes = address.as_bytes().to_vec();\n    hash_bytes.extend_from_slice(key.encode_fixed()?.as_ref());\n\n    Ok(Hash::digest(Bytes::from(hash_bytes)))\n}\n\n#[cfg(test)]\nmod tests {\n    use bytes::Bytes;\n    use std::sync::Arc;\n\n    use cita_trie::MemoryDB;\n\n    use protocol::traits::ServiceState;\n\n    use super::*;\n    use crate::binding::state::MPTTrie;\n\n    #[test]\n    fn test_get_trie() {\n        let mut state = GeneralServiceState::new(MPTTrie::new(Arc::new(MemoryDB::new(false))));\n\n        let key = Bytes::from(\"test\");\n        let value = Bytes::from(\"test\");\n\n        state.insert(key.clone(), value.clone()).unwrap();\n\n        assert_eq!(state.get::<Bytes, Bytes>(&key).unwrap(), Some(value));\n        state.insert(key.clone(), Bytes::new()).unwrap();\n\n        assert_eq!(state.get::<Bytes, Bytes>(&key).unwrap().is_some(), false);\n        assert_eq!(state.contains(&key).unwrap(), false);\n    }\n}\n"
  },
  {
    "path": "framework/src/binding/state/trie.rs",
    "content": "use std::sync::Arc;\n\nuse bytes::Bytes;\nuse cita_trie::{PatriciaTrie, Trie, TrieError, DB as TrieDB};\nuse derive_more::{Display, From};\nuse hasher::HasherKeccak;\nuse lazy_static::lazy_static;\n\nuse protocol::types::{Hash, MerkleRoot};\nuse protocol::{ProtocolError, ProtocolErrorKind, ProtocolResult};\n\nlazy_static! {\n    static ref HASHER_INST: Arc<HasherKeccak> = Arc::new(HasherKeccak::new());\n}\n\npub struct MPTTrie<DB: TrieDB> {\n    root: MerkleRoot,\n    trie: PatriciaTrie<DB, HasherKeccak>,\n}\n\nimpl<DB: TrieDB> MPTTrie<DB> {\n    pub fn new(db: Arc<DB>) -> Self {\n        let trie = PatriciaTrie::new(db, Arc::clone(&HASHER_INST));\n\n        Self {\n            root: Hash::from_empty(),\n            trie,\n        }\n    }\n\n    pub fn from(root: MerkleRoot, db: Arc<DB>) -> ProtocolResult<Self> {\n        let trie = PatriciaTrie::from(db, Arc::clone(&HASHER_INST), &root.as_bytes())\n            .map_err(MPTTrieError::from)?;\n\n        Ok(Self { root, trie })\n    }\n\n    pub fn get(&self, key: &Bytes) -> ProtocolResult<Option<Bytes>> {\n        Ok(self\n            .trie\n            .get(key)\n            .map_err(MPTTrieError::from)?\n            .map(Bytes::from))\n    }\n\n    pub fn contains(&self, key: &Bytes) -> ProtocolResult<bool> {\n        Ok(self.trie.contains(key).map_err(MPTTrieError::from)?)\n    }\n\n    pub fn insert(&mut self, key: Bytes, value: Bytes) -> ProtocolResult<()> {\n        self.trie\n            .insert(key.to_vec(), value.to_vec())\n            .map_err(MPTTrieError::from)?;\n        Ok(())\n    }\n\n    pub fn commit(&mut self) -> ProtocolResult<MerkleRoot> {\n        let root_bytes = self.trie.root().map_err(MPTTrieError::from)?;\n        let root = MerkleRoot::from_bytes(Bytes::from(root_bytes))?;\n        self.root = root;\n        Ok(self.root.clone())\n    }\n}\n\n#[derive(Debug, Display, From)]\npub enum MPTTrieError {\n    #[display(fmt = \"{:?}\", _0)]\n    Trie(TrieError),\n}\n\nimpl 
std::error::Error for MPTTrieError {}\n\nimpl From<MPTTrieError> for ProtocolError {\n    fn from(err: MPTTrieError) -> ProtocolError {\n        ProtocolError::new(ProtocolErrorKind::Binding, Box::new(err))\n    }\n}\n"
  },
  {
    "path": "framework/src/binding/state/trie_db.rs",
    "content": "use std::collections::HashMap;\nuse std::path::Path;\nuse std::sync::Arc;\nuse std::time::Instant;\n\nuse bytes::Bytes;\nuse derive_more::{Display, From};\nuse parking_lot::RwLock;\nuse rand::{rngs::SmallRng, Rng, SeedableRng};\nuse rocksdb::{Options, WriteBatch, DB};\n\nuse common_apm::metrics::storage::{on_storage_get_state, on_storage_put_state};\nuse protocol::{ProtocolError, ProtocolErrorKind, ProtocolResult};\n\n// 49999 is the largest prime number within 50000.\nconst RAND_SEED: u64 = 49999;\n\npub struct RocksTrieDB {\n    light:      bool,\n    db:         Arc<DB>,\n    cache_size: usize,\n    cache:      RwLock<HashMap<Vec<u8>, Vec<u8>>>,\n}\n\nimpl RocksTrieDB {\n    pub fn new<P: AsRef<Path>>(\n        path: P,\n        light: bool,\n        max_open_files: i32,\n        cache_size: usize,\n    ) -> ProtocolResult<Self> {\n        let mut opts = Options::default();\n        opts.create_if_missing(true);\n        opts.create_missing_column_families(true);\n        opts.set_max_open_files(max_open_files);\n\n        let db = DB::open(&opts, path).map_err(RocksTrieDBError::from)?;\n\n        // Init HashMap with capacity 2 * cache_size to avoid reallocate memory.\n        Ok(RocksTrieDB {\n            light,\n            db: Arc::new(db),\n            cache: RwLock::new(HashMap::with_capacity(cache_size + cache_size)),\n            cache_size,\n        })\n    }\n\n    fn inner_get(&self, key: &[u8]) -> Result<Option<Vec<u8>>, RocksTrieDBError> {\n        let res = {\n            let cache = self.cache.read();\n            cache.get(key).cloned()\n        };\n\n        if res.is_none() {\n            let inst = Instant::now();\n            let ret = self.db.get(key).map_err(to_store_err)?;\n            on_storage_get_state(inst.elapsed(), 1i64);\n\n            if let Some(val) = ret.clone() {\n                let mut cache = self.cache.write();\n                cache.insert(key.to_owned(), val);\n            }\n\n            return 
Ok(ret);\n        }\n\n        Ok(res)\n    }\n\n    #[cfg(test)]\n    pub fn insert_batch_without_cache(&self, keys: Vec<Vec<u8>>, values: Vec<Vec<u8>>) {\n        let mut _total_size = 0;\n        let mut batch = WriteBatch::default();\n        assert_eq!(keys.len(), values.len());\n\n        for (key, val) in keys.iter().zip(values.iter()) {\n            _total_size += key.len();\n            _total_size += val.len();\n            batch.put(key, val);\n        }\n\n        self.db.write(batch).unwrap();\n    }\n\n    #[cfg(test)]\n    pub fn insert_without_cache(&self, key: Vec<u8>, value: Vec<u8>) {\n        self.db.put(key, value).unwrap();\n    }\n\n    #[cfg(test)]\n    pub fn get_without_cache(&self, key: &[u8]) -> Option<Vec<u8>> {\n        self.db.get(key).unwrap()\n    }\n\n    #[cfg(test)]\n    pub fn cache(&self) -> HashMap<Vec<u8>, Vec<u8>> {\n        let cache = self.cache.read();\n        cache.clone()\n    }\n}\n\nimpl cita_trie::DB for RocksTrieDB {\n    type Error = RocksTrieDBError;\n\n    fn get(&self, key: &[u8]) -> Result<Option<Vec<u8>>, Self::Error> {\n        self.inner_get(key)\n    }\n\n    fn contains(&self, key: &[u8]) -> Result<bool, Self::Error> {\n        let res = {\n            let cache = self.cache.read();\n            cache.contains_key(key)\n        };\n\n        if res {\n            Ok(true)\n        } else {\n            if let Some(val) = self.db.get(key).map_err(to_store_err)? 
{\n                let mut cache = self.cache.write();\n                cache.insert(key.to_owned(), val);\n                return Ok(true);\n            }\n            Ok(false)\n        }\n    }\n\n    fn insert(&self, key: Vec<u8>, value: Vec<u8>) -> Result<(), Self::Error> {\n        let inst = Instant::now();\n        let size = key.len() + value.len();\n\n        {\n            let mut cache = self.cache.write();\n            cache.insert(key.clone(), value.clone());\n        }\n\n        self.db\n            .put(Bytes::from(key), Bytes::from(value))\n            .map_err(to_store_err)?;\n\n        on_storage_put_state(inst.elapsed(), size as i64);\n        Ok(())\n    }\n\n    fn insert_batch(&self, keys: Vec<Vec<u8>>, values: Vec<Vec<u8>>) -> Result<(), Self::Error> {\n        if keys.len() != values.len() {\n            return Err(RocksTrieDBError::BatchLengthMismatch);\n        }\n\n        let mut total_size = 0;\n        let mut batch = WriteBatch::default();\n\n        {\n            let mut cache = self.cache.write();\n            for (key, val) in keys.iter().zip(values.iter()) {\n                total_size += key.len();\n                total_size += val.len();\n                batch.put(key, val);\n                cache.insert(key.clone(), val.clone());\n            }\n        }\n\n        let inst = Instant::now();\n        self.db.write(batch).map_err(to_store_err)?;\n        on_storage_put_state(inst.elapsed(), total_size as i64);\n        Ok(())\n    }\n\n    fn remove(&self, key: &[u8]) -> Result<(), Self::Error> {\n        if self.light {\n            {\n                let mut cache = self.cache.write();\n                cache.remove(key);\n            }\n            self.db.delete(key).map_err(to_store_err)?;\n        }\n        Ok(())\n    }\n\n    fn remove_batch(&self, keys: &[Vec<u8>]) -> Result<(), Self::Error> {\n        if self.light {\n            let mut batch = WriteBatch::default();\n            {\n                let mut cache 
= self.cache.write();\n                for key in keys {\n                    batch.delete(key);\n                    cache.remove(key);\n                }\n            }\n\n            self.db.write(batch).map_err(to_store_err)?;\n        }\n        Ok(())\n    }\n\n    fn flush(&self) -> Result<(), Self::Error> {\n        let mut cache = self.cache.write();\n        let len = cache.len();\n\n        if len <= self.cache_size {\n            return Ok(());\n        }\n\n        let keys = cache.keys().collect::<Vec<_>>();\n        let remove_list = rand_remove_list(keys, len - self.cache_size);\n\n        for item in remove_list.iter() {\n            cache.remove(item);\n        }\n        Ok(())\n    }\n}\n\n// Sample `num` distinct keys without replacement; requires `num <= keys.len()`.\nfn rand_remove_list<T: Clone>(keys: Vec<&T>, num: usize) -> Vec<T> {\n    let mut len = keys.len();\n    let mut idx_list = (0..len).collect::<Vec<_>>();\n    let mut rng = SmallRng::seed_from_u64(RAND_SEED);\n    let mut ret = Vec::new();\n\n    for _ in 0..num {\n        let tmp = rng.gen_range(0, len);\n        let idx = idx_list.remove(tmp);\n        ret.push(keys[idx].to_owned());\n        len -= 1;\n    }\n    ret\n}\n\n#[derive(Debug, Display, From)]\npub enum RocksTrieDBError {\n    #[display(fmt = \"store error\")]\n    Store,\n\n    #[display(fmt = \"rocksdb {}\", _0)]\n    RocksDB(rocksdb::Error),\n\n    #[display(fmt = \"parameters do not match\")]\n    InsertParameter,\n\n    #[display(fmt = \"batch lengths do not match\")]\n    BatchLengthMismatch,\n}\n\nimpl std::error::Error for RocksTrieDBError {}\n\nimpl From<RocksTrieDBError> for ProtocolError {\n    fn from(err: RocksTrieDBError) -> ProtocolError {\n        ProtocolError::new(ProtocolErrorKind::Binding, Box::new(err))\n    }\n}\n\nfn to_store_err(e: rocksdb::Error) -> RocksTrieDBError {\n    log::error!(\"[framework] trie db {:?}\", e);\n    RocksTrieDBError::Store\n}\n\n#[cfg(test)]\nmod tests {\n    extern crate test;\n    use test::Bencher;\n\n    use super::*;\n\n    #[bench]\n    fn bench_rand(b: &mut Bencher) {\n        b.iter(|| {\n            let mut rng = SmallRng::seed_from_u64(RAND_SEED);\n            for _ in 0..10000 {\n                rng.gen_range(10, 1000000);\n            }\n        })\n    }\n\n    #[test]\n    fn test_rand_remove() {\n        let list = (0..10).collect::<Vec<_>>();\n        let keys = list.iter().collect::<Vec<_>>();\n\n        for num in 1..10 {\n            let res = rand_remove_list(keys.clone(), num);\n            assert_eq!(res.len(), num);\n        }\n    }\n}\n"
  },
  {
    "path": "framework/src/binding/store/array.rs",
    "content": "use std::cell::RefCell;\nuse std::marker::PhantomData;\nuse std::rc::Rc;\n\nuse bytes::Bytes;\n\nuse protocol::fixed_codec::FixedCodec;\nuse protocol::traits::{ServiceState, StoreArray};\nuse protocol::types::Hash;\nuse protocol::ProtocolResult;\n\nuse crate::binding::store::FixedKeys;\n\npub struct DefaultStoreArray<S: ServiceState, E: FixedCodec> {\n    state:    Rc<RefCell<S>>,\n    var_name: Hash,\n    keys:     FixedKeys<Hash>,\n    phantom:  PhantomData<E>,\n}\n\nimpl<S: ServiceState, E: FixedCodec> DefaultStoreArray<S, E> {\n    pub fn new(state: Rc<RefCell<S>>, name: &str) -> Self {\n        let var_name = Hash::digest(Bytes::from(name.to_owned() + \"array\"));\n\n        let opt_bs: Option<Bytes> = state\n            .borrow()\n            .get(&var_name)\n            .expect(\"get array should not fail\");\n\n        let keys = if let Some(bs) = opt_bs {\n            <_>::decode_fixed(bs).expect(\"decode keys should not fail\")\n        } else {\n            FixedKeys { inner: Vec::new() }\n        };\n\n        Self {\n            state,\n            var_name,\n            keys,\n            phantom: PhantomData,\n        }\n    }\n\n    fn inner_get(&self, index: u64) -> ProtocolResult<Option<E>> {\n        if let Some(k) = self.keys.inner.get(index as usize) {\n            self.state\n                .borrow()\n                .get(k)?\n                .map_or_else(|| Ok(None), |v| Ok(Some(v)))\n        } else {\n            Ok(None)\n        }\n    }\n\n    // TODO(@zhounan): Atomicity of insert(k, v) and insert self.keys to\n    // ServiceState is not guaranteed for now That must be settled soon after.\n    fn inner_push(&mut self, elm: E) -> ProtocolResult<()> {\n        let key = Hash::digest(elm.encode_fixed()?);\n\n        self.keys.inner.push(key.clone());\n        self.state\n            .borrow_mut()\n            .insert(self.var_name.clone(), self.keys.encode_fixed()?)?;\n\n        self.state.borrow_mut().insert(key, elm)\n    
}\n\n    // TODO(@zhounan): Atomicity of insert(k, v) and inserting self.keys into\n    // ServiceState is not guaranteed for now. This must be settled soon.\n    fn inner_remove(&mut self, index: u64) -> ProtocolResult<()> {\n        let key = self.keys.inner.remove(index as usize);\n        self.state\n            .borrow_mut()\n            .insert(self.var_name.clone(), self.keys.encode_fixed()?)?;\n\n        self.state.borrow_mut().insert(key, Bytes::new())\n    }\n}\n\nimpl<S: ServiceState, E: FixedCodec> StoreArray<E> for DefaultStoreArray<S, E> {\n    fn get(&self, index: u64) -> Option<E> {\n        self.inner_get(index)\n            .unwrap_or_else(|e| panic!(\"StoreArray get value failed: {}\", e))\n    }\n\n    fn push(&mut self, elm: E) {\n        self.inner_push(elm)\n            .unwrap_or_else(|e| panic!(\"StoreArray push value failed: {}\", e));\n    }\n\n    fn remove(&mut self, index: u64) {\n        self.inner_remove(index)\n            .unwrap_or_else(|e| panic!(\"StoreArray remove value failed: {}\", e));\n    }\n\n    fn len(&self) -> u64 {\n        self.keys.inner.len() as u64\n    }\n\n    fn is_empty(&self) -> bool {\n        self.len() == 0\n    }\n\n    fn iter<'a>(&'a self) -> Box<dyn Iterator<Item = (u64, E)> + 'a> {\n        Box::new(ArrayIter::<E, Self>::new(0, self))\n    }\n}\n\nstruct ArrayIter<'a, E: FixedCodec, A: StoreArray<E>> {\n    idx:     u64,\n    array:   &'a A,\n    phantom: PhantomData<E>,\n}\n\nimpl<'a, E: FixedCodec, A: StoreArray<E>> ArrayIter<'a, E, A> {\n    pub fn new(idx: u64, array: &'a A) -> Self {\n        ArrayIter {\n            idx,\n            array,\n            phantom: PhantomData,\n        }\n    }\n}\n\nimpl<'a, E: FixedCodec, A: StoreArray<E>> Iterator for ArrayIter<'a, E, A> {\n    type Item = (u64, E);\n\n    fn next(&mut self) -> Option<Self::Item> {\n        if self.idx < self.array.len() {\n            let ele = self\n                .array\n                .get(self.idx)\n                
.expect(\"StoreArray should return Some when the index is in bounds\");\n            self.idx += 1;\n            Some((self.idx - 1, ele))\n        } else {\n            None\n        }\n    }\n}\n"
  },
  {
    "path": "framework/src/binding/store/map.rs",
    "content": "use std::cell::RefCell;\nuse std::iter::Iterator;\nuse std::marker::PhantomData;\nuse std::rc::Rc;\n\nuse bytes::Bytes;\nuse rayon::prelude::*;\n\nuse protocol::fixed_codec::FixedCodec;\nuse protocol::traits::{ServiceState, StoreMap};\nuse protocol::types::Hash;\nuse protocol::ProtocolResult;\n\nuse crate::binding::store::{get_bucket_index, Bucket, FixedBuckets};\n\npub struct DefaultStoreMap<S: ServiceState, K: FixedCodec + PartialEq, V: FixedCodec> {\n    state:    Rc<RefCell<S>>,\n    var_name: String,\n    keys:     RefCell<FixedBuckets<K>>,\n    len_key:  Bytes,\n    len:      u64,\n    phantom:  PhantomData<V>,\n}\n\nimpl<S, K, V> DefaultStoreMap<S, K, V>\nwhere\n    S: 'static + ServiceState,\n    K: 'static + Send + FixedCodec + PartialEq,\n    V: 'static + FixedCodec,\n{\n    pub fn new(state: Rc<RefCell<S>>, name: &str) -> Self {\n        let len_key = Bytes::from(name.to_string() + \"_map_len\");\n        let len = state\n            .borrow()\n            .get(&len_key)\n            .expect(\"Get len failed\")\n            .unwrap_or(0u64);\n\n        DefaultStoreMap {\n            state,\n            len_key,\n            len,\n            var_name: name.to_string(),\n            keys: RefCell::new(FixedBuckets::new()),\n            phantom: PhantomData,\n        }\n    }\n\n    fn inner_insert(&mut self, key: K, value: V) -> ProtocolResult<()> {\n        let key_bytes = key.encode_fixed()?;\n        let mk = self.get_map_key(&key_bytes);\n        let bkt_idx = get_bucket_index(&key_bytes);\n\n        if !self.inner_contains(bkt_idx, &key)? 
{\n            self.keys.borrow_mut().insert(bkt_idx, key);\n\n            self.state.borrow_mut().insert(\n                self.get_bucket_name(bkt_idx),\n                self.keys.borrow().get_bucket(bkt_idx).encode_fixed()?,\n            )?;\n            self.len_add_one()?;\n        }\n        self.state.borrow_mut().insert(mk, value)\n    }\n\n    fn inner_get(&self, key: &K) -> ProtocolResult<Option<V>> {\n        let key_bytes = key.encode_fixed()?;\n        let bkt_idx = get_bucket_index(&key_bytes);\n\n        if self.inner_contains(bkt_idx, &key)? {\n            self.state\n                .borrow()\n                .get(&self.get_map_key(&key_bytes))?\n                .map_or_else(|| Ok(None), |v| Ok(Some(v)))\n        } else {\n            Ok(None)\n        }\n    }\n\n    fn inner_remove(&mut self, key: &K) -> ProtocolResult<Option<V>> {\n        let key_bytes = key.encode_fixed()?;\n        let bkt_idx = get_bucket_index(&key_bytes);\n\n        if self.inner_contains(bkt_idx, &key)? 
{\n            let value = self.inner_get(key)?.expect(\"value should exist\");\n            let bkt_name = self.get_bucket_name(bkt_idx);\n\n            let _ = self.keys.borrow_mut().remove_item(bkt_idx, key)?;\n            self.state.borrow_mut().insert(\n                bkt_name,\n                self.keys.borrow().get_bucket(bkt_idx).encode_fixed()?,\n            )?;\n            self.state\n                .borrow_mut()\n                .insert(self.get_map_key(&key_bytes), Bytes::new())?;\n            self.len_sub_one()?;\n            Ok(Some(value))\n        } else {\n            Ok(None)\n        }\n    }\n\n    #[inline(always)]\n    fn inner_contains(&self, bkt_idx: usize, key: &K) -> ProtocolResult<bool> {\n        if self.keys.borrow().is_bucket_recovered(bkt_idx) {\n            return Ok(self.keys.borrow().contains(bkt_idx, key));\n        }\n\n        let bkt = if let Some(bytes) = self.state.borrow().get(&self.get_bucket_name(bkt_idx))? 
{\n            <_>::decode_fixed(bytes)?\n        } else {\n            Bucket::new()\n        };\n\n        let ret = bkt.contains(key);\n        self.keys.borrow_mut().recover_bucket(bkt_idx, bkt);\n        Ok(ret)\n    }\n\n    fn get_map_key(&self, key_bytes: &Bytes) -> Bytes {\n        let mut name_bytes = self.var_name.as_bytes().to_vec();\n        name_bytes.extend_from_slice(key_bytes);\n\n        if key_bytes.len() > 32 {\n            Hash::digest(Bytes::from(name_bytes)).as_bytes()\n        } else {\n            Bytes::from(name_bytes)\n        }\n    }\n\n    fn get_bucket_name(&self, index: usize) -> Bytes {\n        let mut bytes = (self.var_name.clone() + \"_bucket_\").as_bytes().to_vec();\n        bytes.extend_from_slice(&index.to_le_bytes());\n        Bytes::from(bytes)\n    }\n\n    fn len_add_one(&mut self) -> ProtocolResult<()> {\n        self.len += 1;\n        self.state\n            .borrow_mut()\n            .insert(self.len_key.clone(), self.len.encode_fixed()?)\n    }\n\n    fn len_sub_one(&mut self) -> ProtocolResult<()> {\n        self.len -= 1;\n        self.state\n            .borrow_mut()\n            .insert(self.len_key.clone(), self.len.encode_fixed()?)\n    }\n\n    fn recover_all_buckets(&self) {\n        let idxs = self\n            .keys\n            .borrow()\n            .is_recovered\n            .iter()\n            .enumerate()\n            .filter_map(|(i, &res)| if !res { Some(i) } else { None })\n            .collect::<Vec<_>>();\n\n        let opt_bytes = idxs\n            .iter()\n            .map(|idx| {\n                let name = self.get_bucket_name(*idx);\n                self.state.borrow().get(&name).unwrap()\n            })\n            .collect::<Vec<_>>();\n\n        let buckets = opt_bytes\n            .into_par_iter()\n            .map(|bytes| {\n                if let Some(bs) = bytes {\n                    <_>::decode_fixed(bs).expect(\"Decode bucket failed\")\n                } else {\n                   
 Bucket::new()\n                }\n            })\n            .collect::<Vec<_>>();\n\n        for (idx, bkt) in idxs.into_iter().zip(buckets.into_iter()) {\n            self.keys.borrow_mut().recover_bucket(idx, bkt);\n        }\n    }\n\n    #[cfg(test)]\n    fn get_buckets(self) -> FixedBuckets<K> {\n        self.keys.into_inner()\n    }\n}\n\nimpl<S, K, V> StoreMap<K, V> for DefaultStoreMap<S, K, V>\nwhere\n    S: 'static + ServiceState,\n    K: 'static + Send + FixedCodec + Clone + PartialEq,\n    V: 'static + FixedCodec,\n{\n    fn get(&self, key: &K) -> Option<V> {\n        self.inner_get(key)\n            .unwrap_or_else(|e| panic!(\"StoreMap get failed: {}\", e))\n    }\n\n    fn insert(&mut self, key: K, value: V) {\n        self.inner_insert(key, value)\n            .unwrap_or_else(|e| panic!(\"StoreMap insert failed: {}\", e));\n    }\n\n    fn remove(&mut self, key: &K) -> Option<V> {\n        self.inner_remove(key)\n            .unwrap_or_else(|e| panic!(\"StoreMap remove failed: {}\", e))\n    }\n\n    fn contains(&self, key: &K) -> bool {\n        if let Ok(bytes) = key.encode_fixed() {\n            self.inner_contains(get_bucket_index(&bytes), &key)\n                .unwrap_or(false)\n        } else {\n            false\n        }\n    }\n\n    fn len(&self) -> u64 {\n        self.len\n    }\n\n    fn is_empty(&self) -> bool {\n        self.len == 0\n    }\n\n    fn iter<'a>(&'a self) -> Box<dyn Iterator<Item = (K, V)> + 'a> {\n        self.recover_all_buckets();\n        Box::new(NewMapIter::<S, K, V>::new(0, self))\n    }\n}\n\npub struct NewMapIter<\n    'a,\n    S: 'static + ServiceState,\n    K: 'static + FixedCodec + PartialEq,\n    V: 'static + FixedCodec,\n> {\n    idx: u64,\n    map: &'a DefaultStoreMap<S, K, V>,\n}\n\nimpl<'a, S, K, V> NewMapIter<'a, S, K, V>\nwhere\n    S: 'static + ServiceState,\n    K: 'static + FixedCodec + PartialEq,\n    V: 'static + FixedCodec,\n{\n    pub fn new(idx: u64, map: &'a DefaultStoreMap<S, K, V>) -> 
Self {\n        Self { idx, map }\n    }\n}\n\nimpl<'a, S, K, V> Iterator for NewMapIter<'a, S, K, V>\nwhere\n    S: 'static + ServiceState,\n    K: 'static + Send + FixedCodec + Clone + PartialEq,\n    V: 'static + FixedCodec,\n{\n    type Item = (K, V);\n\n    fn next(&mut self) -> Option<Self::Item> {\n        let idx = self.idx;\n        if idx >= self.map.len {\n            return None;\n        }\n\n        for i in 0..16 {\n            let (left, right) = self.map.keys.borrow().get_abs_index_interval(i);\n            if left <= idx && idx < right {\n                let index = idx - left;\n                let key = self.map.keys.borrow().keys_bucket[i]\n                    .0\n                    .get(index as usize)\n                    .cloned()\n                    .expect(\"get key should not fail\");\n\n                self.idx += 1;\n                return Some((\n                    key.clone(),\n                    self.map.get(&key).expect(\"get value should not fail\"),\n                ));\n            }\n        }\n        None\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use std::sync::Arc;\n\n    use cita_trie::MemoryDB;\n    use rand::random;\n\n    use crate::binding::state::{GeneralServiceState, MPTTrie};\n    use crate::binding::store::map::DefaultStoreMap;\n\n    use super::*;\n\n    fn gen_bytes() -> Bytes {\n        Bytes::from((0..16).map(|_| random::<u8>()).collect::<Vec<_>>())\n    }\n\n    #[test]\n    fn test_map_and_bucket() {\n        let state = Rc::new(RefCell::new(GeneralServiceState::new(MPTTrie::new(\n            Arc::new(MemoryDB::new(false)),\n        ))));\n        let mut map = DefaultStoreMap::<_, Bytes, Bytes>::new(Rc::clone(&state), \"test\");\n        let key_1 = gen_bytes();\n        let val_1 = gen_bytes();\n        let key_2 = gen_bytes();\n        let val_2 = gen_bytes();\n        let key_idx_1 = get_bucket_index(&key_1.encode_fixed().unwrap());\n        let key_idx_2 = 
get_bucket_index(&key_2.encode_fixed().unwrap());\n\n        map.insert(key_1, val_1);\n        map.insert(key_2, val_2);\n\n        assert_eq!(map.len(), 2);\n\n        let fbkt = map.get_buckets();\n        assert!(fbkt.is_recovered[key_idx_1]);\n        assert!(fbkt.is_recovered[key_idx_2]);\n        assert_eq!(fbkt.len(), 2);\n\n        let max = key_idx_1.max(key_idx_2);\n        let min = key_idx_1.min(key_idx_2);\n        let res = (0..17)\n            .map(|i| {\n                if i > max {\n                    2u64\n                } else if i > min {\n                    1u64\n                } else {\n                    0u64\n                }\n            })\n            .collect::<Vec<_>>();\n        assert_eq!(fbkt.bucket_lens, res);\n    }\n}\n"
  },
  {
    "path": "framework/src/binding/store/mod.rs",
    "content": "mod array;\nmod map;\nmod primitive;\n\nuse bytes::Bytes;\nuse derive_more::{Display, From};\n\nuse protocol::fixed_codec::{FixedCodec, FixedCodecError};\nuse protocol::{ProtocolError, ProtocolErrorKind, ProtocolResult};\n\npub use array::DefaultStoreArray;\npub use map::DefaultStoreMap;\npub use primitive::{DefaultStoreBool, DefaultStoreString, DefaultStoreUint64};\n\npub struct FixedKeys<K: FixedCodec> {\n    pub inner: Vec<K>,\n}\n\nimpl<K: FixedCodec> rlp::Encodable for FixedKeys<K> {\n    fn rlp_append(&self, s: &mut rlp::RlpStream) {\n        let inner: Vec<Vec<u8>> = self\n            .inner\n            .iter()\n            .map(|k| k.encode_fixed().expect(\"encode should not fail\").to_vec())\n            .collect();\n\n        s.begin_list(1).append_list::<Vec<u8>, _>(&inner);\n    }\n}\n\nimpl<K: FixedCodec> rlp::Decodable for FixedKeys<K> {\n    fn decode(r: &rlp::Rlp) -> Result<Self, rlp::DecoderError> {\n        let inner_u8: Vec<Vec<u8>> = rlp::decode_list(r.at(0)?.as_raw());\n\n        let inner_k: Result<Vec<K>, _> = inner_u8\n            .into_iter()\n            .map(|v| <_>::decode_fixed(Bytes::from(v)))\n            .collect();\n\n        let inner = inner_k.map_err(|_| rlp::DecoderError::Custom(\"decode K from bytes fail\"))?;\n\n        Ok(FixedKeys { inner })\n    }\n}\n\nimpl<K: FixedCodec> FixedCodec for FixedKeys<K> {\n    fn encode_fixed(&self) -> ProtocolResult<Bytes> {\n        Ok(Bytes::from(rlp::encode(self)))\n    }\n\n    fn decode_fixed(bytes: Bytes) -> ProtocolResult<Self> {\n        Ok(rlp::decode(bytes.as_ref()).map_err(FixedCodecError::from)?)\n    }\n}\n\npub struct FixedBuckets<K: FixedCodec + PartialEq> {\n    pub keys_bucket:  Vec<Bucket<K>>,\n    pub bucket_lens:  Vec<u64>,\n    pub is_recovered: Vec<bool>,\n}\n\nimpl<K: FixedCodec + PartialEq> FixedBuckets<K> {\n    fn new() -> Self {\n        let mut keys_bucket = Vec::new();\n        let mut bucket_lens = vec![0];\n        let mut is_recovered = 
Vec::new();\n\n        for _i in 0..16 {\n            keys_bucket.push(Bucket::new());\n            bucket_lens.push(0u64);\n            is_recovered.push(false);\n        }\n\n        FixedBuckets {\n            keys_bucket,\n            bucket_lens,\n            is_recovered,\n        }\n    }\n\n    fn recover_bucket(&mut self, index: usize, bucket: Bucket<K>) {\n        self.keys_bucket[index] = bucket;\n        self.is_recovered[index] = true;\n        self.update_index_interval(index);\n    }\n\n    fn insert(&mut self, index: usize, key: K) {\n        let bkt = self.keys_bucket.get_mut(index).unwrap();\n        bkt.push(key);\n        self.update_index_interval(index);\n    }\n\n    fn contains(&self, index: usize, key: &K) -> bool {\n        self.keys_bucket[index].contains(key)\n    }\n\n    fn remove_item(&mut self, index: usize, key: &K) -> ProtocolResult<K> {\n        let bkt = self.keys_bucket.get_mut(index).unwrap();\n        if bkt.contains(key) {\n            let val = bkt.remove_item(key)?;\n            self.update_index_interval(index);\n            Ok(val)\n        } else {\n            Err(StoreError::GetNone.into())\n        }\n    }\n\n    fn get_bucket(&self, index: usize) -> &Bucket<K> {\n        self.keys_bucket\n            .get(index)\n            .expect(\"index must less than 16\")\n    }\n\n    /// The function will panic when index is greater than or equal 16.\n    fn get_abs_index_interval(&self, index: usize) -> (u64, u64) {\n        (self.bucket_lens[index], self.bucket_lens[index + 1])\n    }\n\n    fn is_bucket_recovered(&self, index: usize) -> bool {\n        self.is_recovered[index]\n    }\n\n    fn update_index_interval(&mut self, index: usize) {\n        let start = index + 1;\n        let mut acc = self.bucket_lens[index];\n\n        for i in start..17 {\n            acc += self.keys_bucket[i - 1].len() as u64;\n            self.bucket_lens[i] = acc;\n        }\n    }\n\n    #[cfg(test)]\n    fn len(&self) -> u64 {\n        
self.bucket_lens[16]\n    }\n\n    #[cfg(test)]\n    fn is_empty(&self) -> bool {\n        self.len() == 0\n    }\n}\n\npub struct Bucket<K: FixedCodec + PartialEq>(Vec<K>);\n\nimpl<K: FixedCodec + PartialEq> Bucket<K> {\n    fn new() -> Self {\n        Bucket(Vec::new())\n    }\n\n    fn len(&self) -> usize {\n        self.0.len()\n    }\n\n    fn contains(&self, x: &K) -> bool {\n        self.0.contains(x)\n    }\n\n    fn push(&mut self, value: K) {\n        self.0.push(value);\n    }\n\n    fn remove_item(&mut self, key: &K) -> ProtocolResult<K> {\n        // Find the key's position and remove it; error if the key is absent.\n        if let Some(idx) = self.0.iter().position(|item| item == key) {\n            Ok(self.0.remove(idx))\n        } else {\n            Err(StoreError::GetNone.into())\n        }\n    }\n}\n\nimpl<K: FixedCodec + PartialEq> rlp::Encodable for Bucket<K> {\n    fn rlp_append(&self, s: &mut rlp::RlpStream) {\n        let inner: Vec<Vec<u8>> = self\n            .0\n            .iter()\n            .map(|k| k.encode_fixed().expect(\"encode should not fail\").to_vec())\n            .collect();\n\n        s.begin_list(1).append_list::<Vec<u8>, _>(&inner);\n    }\n}\n\nimpl<K: FixedCodec + PartialEq> rlp::Decodable for Bucket<K> {\n    fn decode(r: &rlp::Rlp) -> Result<Self, rlp::DecoderError> {\n        let inner_u8: Vec<Vec<u8>> = rlp::decode_list(r.at(0)?.as_raw());\n\n        let inner_k: Result<Vec<K>, _> = inner_u8\n            .into_iter()\n            .map(|v| <_>::decode_fixed(Bytes::from(v)))\n            .collect();\n\n        let inner = inner_k.map_err(|_| rlp::DecoderError::Custom(\"decode K from bytes fail\"))?;\n\n        Ok(Bucket(inner))\n    }\n}\n\nimpl<K: FixedCodec + PartialEq> FixedCodec for Bucket<K> {\n    fn encode_fixed(&self) -> ProtocolResult<Bytes> {\n        Ok(Bytes::from(rlp::encode(self)))\n    }\n\n    fn decode_fixed(bytes: Bytes) -> ProtocolResult<Self> {\n        Ok(rlp::decode(bytes.as_ref()).map_err(FixedCodecError::from)?)\n    }\n}\n\n#[inline(always)]\nfn get_bucket_index(bytes: &Bytes) -> usize {\n    let len = bytes.len() - 1;\n    (bytes[len] >> 4) as usize\n}\n\n#[derive(Debug, Display, From)]\npub enum StoreError {\n    #[display(fmt = \"the key does not exist\")]\n    GetNone,\n\n    #[display(fmt = \"array access out of range\")]\n    OutRange,\n\n    #[display(fmt = \"decode error\")]\n    DecodeError,\n\n    #[display(fmt = \"overflow when calculating\")]\n    Overflow,\n}\n\nimpl std::error::Error for StoreError {}\n\nimpl From<StoreError> for ProtocolError {\n    fn from(err: StoreError) -> ProtocolError {\n        ProtocolError::new(ProtocolErrorKind::Binding, Box::new(err))\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_insert() {\n        let mut buckets = FixedBuckets::new();\n        assert!(buckets.is_empty());\n\n        for i in 0..=255u8 {\n            let key = Bytes::from(vec![i]);\n            buckets.insert(get_bucket_index(&key), key);\n        }\n\n        println!(\"{:?}\", buckets.bucket_lens);\n\n        let intervals = (0u64..=16).map(|i| i * 16).collect::<Vec<_>>();\n        assert!(intervals == buckets.bucket_lens);\n        assert!(buckets.len() == 256);\n\n        for i in 0..16 {\n            assert!(buckets.get_bucket(i).len() == 16);\n        }\n\n        let mut buckets = FixedBuckets::new();\n        for i in 0..8 {\n            let key = Bytes::from(vec![i]);\n            buckets.insert(get_bucket_index(&key), key);\n        }\n\n        assert!(buckets.get_bucket(0).len() == 8);\n        assert!(buckets.len() == 8);\n        for i in 1..16 {\n            assert!(buckets.get_bucket(i).len() == 0);\n        }\n    }\n\n    #[test]\n    fn test_remove() {\n        let mut buckets = FixedBuckets::new();\n\n        for i in 0..=255u8 {\n            let key = Bytes::from(vec![i]);\n       
     buckets.insert(get_bucket_index(&key), key);\n        }\n\n        let key = Bytes::from(vec![0]);\n        let _ = buckets\n            .remove_item(get_bucket_index(&key.encode_fixed().unwrap()), &key)\n            .unwrap();\n        let intervals = (0u64..=16)\n            .map(|i| if i == 0 { 0 } else { i * 16 - 1 })\n            .collect::<Vec<_>>();\n        assert!(buckets.len() == 255);\n        assert!(intervals == buckets.bucket_lens);\n    }\n\n    #[test]\n    fn test_contains() {\n        let mut buckets = FixedBuckets::new();\n\n        for i in 0..3u8 {\n            let key = Bytes::from(vec![i]);\n            buckets.insert(get_bucket_index(&key), key);\n        }\n\n        let key = Bytes::from(vec![0]);\n        assert!(buckets.contains(get_bucket_index(&key.encode_fixed().unwrap()), &key));\n\n        let key = Bytes::from(vec![5]);\n        assert!(!buckets.contains(get_bucket_index(&key.encode_fixed().unwrap()), &key));\n\n        let key = Bytes::from(vec![20]);\n        assert!(!buckets.contains(get_bucket_index(&key.encode_fixed().unwrap()), &key));\n    }\n}\n"
  },
  {
    "path": "framework/src/binding/store/primitive.rs",
    "content": "use std::cell::RefCell;\nuse std::rc::Rc;\n\nuse bytes::Bytes;\n\nuse protocol::traits::{ServiceState, StoreBool, StoreString, StoreUint64};\nuse protocol::types::Hash;\nuse protocol::ProtocolResult;\n\npub struct DefaultStoreBool<S: ServiceState> {\n    state: Rc<RefCell<S>>,\n    key:   Hash,\n}\n\nimpl<S: ServiceState> DefaultStoreBool<S> {\n    pub fn new(state: Rc<RefCell<S>>, var_name: &str) -> Self {\n        Self {\n            state,\n            key: Hash::digest(Bytes::from(var_name.to_owned() + \"bool\")),\n        }\n    }\n\n    fn inner_get(&self) -> ProtocolResult<bool> {\n        let b: Option<bool> = self.state.borrow().get(&self.key)?;\n\n        match b {\n            Some(v) => Ok(v),\n            None => {\n                self.state.borrow_mut().insert(self.key.clone(), false)?;\n                Ok(false)\n            }\n        }\n    }\n\n    fn inner_set(&mut self, b: bool) -> ProtocolResult<()> {\n        self.state.borrow_mut().insert(self.key.clone(), b)?;\n        Ok(())\n    }\n}\n\nimpl<S: ServiceState> StoreBool for DefaultStoreBool<S> {\n    fn get(&self) -> bool {\n        self.inner_get()\n            .unwrap_or_else(|e| panic!(\"StoreBool get failed: {}\", e))\n    }\n\n    fn set(&mut self, b: bool) {\n        self.inner_set(b)\n            .unwrap_or_else(|e| panic!(\"StoreBool set failed: {}\", e));\n    }\n}\n\npub struct DefaultStoreUint64<S: ServiceState> {\n    state: Rc<RefCell<S>>,\n    key:   Hash,\n}\n\nimpl<S: ServiceState> DefaultStoreUint64<S> {\n    pub fn new(state: Rc<RefCell<S>>, var_name: &str) -> Self {\n        Self {\n            state,\n            key: Hash::digest(Bytes::from(var_name.to_owned() + \"uint64\")),\n        }\n    }\n\n    fn inner_get(&self) -> u64 {\n        let u: Option<u64> = self\n            .state\n            .borrow()\n            .get(&self.key)\n            .unwrap_or_else(|e| panic!(\"StoreUint64 get failed: {}\", e));\n\n        match u {\n            Some(v) => 
v,\n            None => {\n                self.state\n                    .borrow_mut()\n                    .insert(self.key.clone(), 0u64)\n                    .unwrap_or_else(|e| panic!(\"StoreUint64 get failed: {}\", e));\n                0\n            }\n        }\n    }\n\n    fn inner_set(&mut self, val: u64) {\n        self.state\n            .borrow_mut()\n            .insert(self.key.clone(), val)\n            .unwrap_or_else(|e| panic!(\"StoreUint64 set failed: {}\", e));\n    }\n\n    // Add val with self\n    // And set the result back to self\n    fn inner_add(&mut self, val: u64) -> bool {\n        let sv = self.inner_get();\n\n        match val.overflowing_add(sv) {\n            (sum, false) => {\n                self.inner_set(sum);\n                false\n            }\n            _ => true,\n        }\n    }\n\n    // Self minus val\n    // And set the result back to self\n    fn inner_sub(&mut self, val: u64) -> bool {\n        let sv = self.inner_get();\n\n        if sv >= val {\n            self.inner_set(sv - val);\n            false\n        } else {\n            true\n        }\n    }\n\n    // Multiply val with self\n    // And set the result back to self\n    fn inner_mul(&mut self, val: u64) -> bool {\n        let sv = self.inner_get();\n\n        match val.overflowing_mul(sv) {\n            (mul, false) => {\n                self.inner_set(mul);\n                false\n            }\n            _ => true,\n        }\n    }\n\n    // Power of self\n    // And set the result back to self\n    fn inner_pow(&mut self, val: u32) -> bool {\n        let sv = self.inner_get();\n\n        match sv.overflowing_pow(val) {\n            (pow, false) => {\n                self.inner_set(pow);\n                false\n            }\n            _ => true,\n        }\n    }\n\n    // Self divided by val\n    // And set the result back to self\n    fn inner_div(&mut self, val: u64) -> bool {\n        let sv = self.inner_get();\n\n        if let 0 = 
val {\n            true\n        } else {\n            self.inner_set(sv / val);\n            false\n        }\n    }\n\n    // Remainder of self\n    // And set the result back to self\n    fn inner_rem(&mut self, val: u64) -> bool {\n        let sv = self.inner_get();\n\n        if let 0 = val {\n            true\n        } else {\n            self.inner_set(sv % val);\n            false\n        }\n    }\n}\n\nimpl<S: ServiceState> StoreUint64 for DefaultStoreUint64<S> {\n    fn get(&self) -> u64 {\n        self.inner_get()\n    }\n\n    fn set(&mut self, val: u64) {\n        self.inner_set(val);\n    }\n\n    // Add val with self\n    // And set the result back to self\n    fn safe_add(&mut self, val: u64) -> bool {\n        self.inner_add(val)\n    }\n\n    // Self minus val\n    // And set the result back to self\n    fn safe_sub(&mut self, val: u64) -> bool {\n        self.inner_sub(val)\n    }\n\n    // Multiply val with self\n    // And set the result back to self\n    fn safe_mul(&mut self, val: u64) -> bool {\n        self.inner_mul(val)\n    }\n\n    // Power of self\n    // And set the result back to self\n    fn safe_pow(&mut self, val: u32) -> bool {\n        self.inner_pow(val)\n    }\n\n    // Self divided by val\n    // And set the result back to self\n    fn safe_div(&mut self, val: u64) -> bool {\n        self.inner_div(val)\n    }\n\n    // Remainder of self\n    // And set the result back to self\n    fn safe_rem(&mut self, val: u64) -> bool {\n        self.inner_rem(val)\n    }\n}\n\npub struct DefaultStoreString<S: ServiceState> {\n    state: Rc<RefCell<S>>,\n    key:   Hash,\n}\n\nimpl<S: ServiceState> DefaultStoreString<S> {\n    pub fn new(state: Rc<RefCell<S>>, var_name: &str) -> Self {\n        Self {\n            state,\n            key: Hash::digest(Bytes::from(var_name.to_owned() + \"string\")),\n        }\n    }\n\n    fn inner_set(&mut self, val: &str) -> ProtocolResult<()> {\n        self.state\n            .borrow_mut()\n         
   .insert(self.key.clone(), val.to_string())?;\n        Ok(())\n    }\n\n    fn inner_get(&self) -> ProtocolResult<String> {\n        let s: Option<String> = self.state.borrow().get(&self.key)?;\n\n        match s {\n            Some(v) => Ok(v),\n            None => {\n                self.state\n                    .borrow_mut()\n                    .insert(self.key.clone(), \"\".to_string())?;\n                Ok(\"\".to_string())\n            }\n        }\n    }\n\n    fn inner_len(&self) -> ProtocolResult<u64> {\n        self.inner_get().map(|s| s.len() as u64)\n    }\n\n    fn is_empty_(&self) -> ProtocolResult<bool> {\n        self.inner_get().map(|s| s.is_empty())\n    }\n}\n\nimpl<S: ServiceState> StoreString for DefaultStoreString<S> {\n    fn get(&self) -> String {\n        self.inner_get()\n            .unwrap_or_else(|e| panic!(\"StoreString get failed: {}\", e))\n    }\n\n    fn set(&mut self, val: &str) {\n        self.inner_set(val)\n            .unwrap_or_else(|e| panic!(\"StoreString set failed: {}\", e));\n    }\n\n    fn len(&self) -> u64 {\n        self.inner_len()\n            .unwrap_or_else(|e| panic!(\"StoreString get length failed: {}\", e))\n    }\n\n    fn is_empty(&self) -> bool {\n        self.is_empty_()\n            .unwrap_or_else(|e| panic!(\"StoreString get is_empty failed: {}\", e))\n    }\n}\n"
  },
  {
    "path": "framework/src/binding/tests/mod.rs",
    "content": "mod sdk;\nmod state;\nmod store;\n"
  },
  {
    "path": "framework/src/binding/tests/sdk.rs",
    "content": "use std::cell::RefCell;\nuse std::rc::Rc;\nuse std::sync::Arc;\n\nuse async_trait::async_trait;\nuse bytes::Bytes;\nuse cita_trie::MemoryDB;\n\nuse protocol::traits::{CommonStorage, Context, ServiceResponse, ServiceSDK, Storage};\nuse protocol::types::{\n    Address, Block, BlockHeader, Event, Hash, MerkleRoot, Proof, RawTransaction, Receipt,\n    ReceiptResponse, SignedTransaction, TransactionRequest, Validator,\n};\nuse protocol::ProtocolResult;\n\nuse crate::binding::sdk::{DefaultChainQuerier, DefaultServiceSDK};\nuse crate::binding::store::StoreError;\nuse crate::binding::tests::state::new_state;\n\n#[test]\nfn test_service_sdk() {\n    let memdb = Arc::new(MemoryDB::new(false));\n    let state = new_state(Arc::clone(&memdb), None);\n    let rs = Rc::new(RefCell::new(state));\n\n    let arcs = Arc::new(MockStorage {});\n    let cq = DefaultChainQuerier::new(Arc::clone(&arcs));\n\n    let mut sdk = DefaultServiceSDK::new(Rc::clone(&rs), Rc::new(cq));\n\n    // test sdk store bool\n    let mut sdk_bool = sdk.alloc_or_recover_bool(\"test_bool\");\n    sdk_bool.set(true);\n    assert_eq!(sdk_bool.get(), true);\n\n    // test sdk store string\n    let mut sdk_string = sdk.alloc_or_recover_string(\"test_string\");\n    sdk_string.set(\"hello\");\n    assert_eq!(sdk_string.get(), \"hello\".to_owned());\n\n    // test sdk store uint64\n    let mut sdk_uint64 = sdk.alloc_or_recover_uint64(\"test_uint64\");\n    sdk_uint64.set(99);\n    assert_eq!(sdk_uint64.get(), 99);\n\n    // test sdk map\n    let mut sdk_map = sdk.alloc_or_recover_map::<Hash, Bytes>(\"test_map\");\n    assert_eq!(sdk_map.is_empty(), true);\n\n    sdk_map.insert(Hash::digest(Bytes::from(\"key_1\")), Bytes::from(\"val_1\"));\n\n    assert_eq!(\n        sdk_map.get(&Hash::digest(Bytes::from(\"key_1\"))).unwrap(),\n        Bytes::from(\"val_1\")\n    );\n\n    let mut it = sdk_map.iter();\n    assert_eq!(\n        it.next().unwrap(),\n        (Hash::digest(Bytes::from(\"key_1\")), 
Bytes::from(\"val_1\"))\n    );\n    assert_eq!(it.next().is_none(), true);\n\n    // test sdk array\n    let mut sdk_array = sdk.alloc_or_recover_array::<Hash>(\"test_array\");\n    assert_eq!(sdk_array.is_empty(), true);\n\n    sdk_array.push(Hash::digest(Bytes::from(\"key_1\")));\n\n    assert_eq!(\n        sdk_array.get(0).unwrap(),\n        Hash::digest(Bytes::from(\"key_1\"))\n    );\n\n    let mut it = sdk_array.iter();\n    assert_eq!(it.next().unwrap(), (0, Hash::digest(Bytes::from(\"key_1\"))));\n    assert_eq!(it.next().is_none(), true);\n\n    // test get/set account value\n    sdk.set_account_value(&mock_address(), Bytes::from(\"ak\"), Bytes::from(\"av\"));\n    let account_value: Bytes = sdk\n        .get_account_value(&mock_address(), &Bytes::from(\"ak\"))\n        .unwrap();\n    assert_eq!(Bytes::from(\"av\"), account_value);\n\n    // test get/set value\n    sdk.set_value(Bytes::from(\"ak\"), Bytes::from(\"av\"));\n    let value: Bytes = sdk.get_value(&Bytes::from(\"ak\")).unwrap();\n    assert_eq!(Bytes::from(\"av\"), value);\n\n    // test query chain\n    let tx_data = sdk\n        .get_transaction_by_hash(&Hash::digest(Bytes::from(\"param\")))\n        .unwrap();\n    assert_eq!(mock_signed_tx(), tx_data);\n\n    let receipt_data = sdk\n        .get_receipt_by_hash(&Hash::digest(Bytes::from(\"param\")))\n        .unwrap();\n    assert_eq!(mock_receipt(), receipt_data);\n\n    let block_data = sdk.get_block_by_height(Some(1)).unwrap();\n    assert_eq!(mock_block(1), block_data);\n}\n\nstruct MockStorage;\n\n#[async_trait]\nimpl CommonStorage for MockStorage {\n    async fn insert_block(&self, _ctx: Context, _block: Block) -> ProtocolResult<()> {\n        Ok(())\n    }\n\n    async fn get_block(&self, _ctx: Context, _height: u64) -> ProtocolResult<Option<Block>> {\n        Ok(Some(mock_block(1)))\n    }\n\n    async fn get_block_header(\n        &self,\n        _ctx: Context,\n        _height: u64,\n    ) -> ProtocolResult<Option<BlockHeader>> 
{\n        Ok(Some(mock_block(1).header))\n    }\n\n    async fn set_block(&self, _ctx: Context, _block: Block) -> ProtocolResult<()> {\n        Ok(())\n    }\n\n    async fn remove_block(&self, _ctx: Context, _height: u64) -> ProtocolResult<()> {\n        Ok(())\n    }\n\n    async fn get_latest_block(&self, _ctx: Context) -> ProtocolResult<Block> {\n        Ok(mock_block(1))\n    }\n\n    async fn set_latest_block(&self, _ctx: Context, _block: Block) -> ProtocolResult<()> {\n        Ok(())\n    }\n\n    async fn get_latest_block_header(&self, _ctx: Context) -> ProtocolResult<BlockHeader> {\n        Ok(mock_block(1).header)\n    }\n}\n\n#[async_trait]\nimpl Storage for MockStorage {\n    async fn insert_transactions(\n        &self,\n        _ctx: Context,\n        _: u64,\n        _: Vec<SignedTransaction>,\n    ) -> ProtocolResult<()> {\n        Ok(())\n    }\n\n    async fn insert_receipts(&self, _ctx: Context, _: u64, _: Vec<Receipt>) -> ProtocolResult<()> {\n        Ok(())\n    }\n\n    async fn update_latest_proof(&self, _ctx: Context, _: Proof) -> ProtocolResult<()> {\n        Ok(())\n    }\n\n    async fn get_transaction_by_hash(\n        &self,\n        _ctx: Context,\n        _tx_hash: &Hash,\n    ) -> ProtocolResult<Option<SignedTransaction>> {\n        Ok(Some(mock_signed_tx()))\n    }\n\n    async fn get_transactions(\n        &self,\n        _ctx: Context,\n        _height: u64,\n        _hashes: &[Hash],\n    ) -> ProtocolResult<Vec<Option<SignedTransaction>>> {\n        Err(StoreError::GetNone.into())\n    }\n\n    async fn get_receipt_by_hash(&self, _ctx: Context, _: Hash) -> ProtocolResult<Option<Receipt>> {\n        Ok(Some(mock_receipt()))\n    }\n\n    async fn get_receipts(\n        &self,\n        _ctx: Context,\n        _: u64,\n        _: Vec<Hash>,\n    ) -> ProtocolResult<Vec<Option<Receipt>>> {\n        Err(StoreError::GetNone.into())\n    }\n\n    async fn get_latest_proof(&self, _ctx: Context) -> ProtocolResult<Proof> {\n        
Err(StoreError::GetNone.into())\n    }\n}\n\n// #####################\n// Mock Primitive\n// #####################\n\npub fn mock_address() -> Address {\n    let hash = mock_hash();\n    Address::from_hash(hash).unwrap()\n}\n\npub fn mock_pub_key(s: &'static str) -> Bytes {\n    Hash::digest(Bytes::from(s)).as_bytes()\n}\n\npub fn mock_hash() -> Hash {\n    Hash::digest(Bytes::from(\"mock\"))\n}\n\npub fn mock_merkle_root() -> MerkleRoot {\n    Hash::digest(Bytes::from(\"mock\"))\n}\n\n// #####################\n// Mock Transaction\n// #####################\n\npub fn mock_transaction_request() -> TransactionRequest {\n    TransactionRequest {\n        service_name: \"mock-service\".to_owned(),\n        method:       \"mock-method\".to_owned(),\n        payload:      \"mock-payload\".to_owned(),\n    }\n}\n\npub fn mock_raw_tx() -> RawTransaction {\n    RawTransaction {\n        chain_id:     mock_hash(),\n        nonce:        mock_hash(),\n        timeout:      100,\n        cycles_price: 1,\n        cycles_limit: 100,\n        request:      mock_transaction_request(),\n        sender:       mock_address(),\n    }\n}\n\npub fn mock_signed_tx() -> SignedTransaction {\n    SignedTransaction {\n        raw:       mock_raw_tx(),\n        tx_hash:   mock_hash(),\n        pubkey:    Default::default(),\n        signature: Default::default(),\n    }\n}\n\n// #####################\n// Mock Receipt\n// #####################\n\npub fn mock_receipt() -> Receipt {\n    Receipt {\n        state_root:  mock_merkle_root(),\n        height:      13,\n        tx_hash:     mock_hash(),\n        cycles_used: 100,\n        events:      vec![mock_event()],\n        response:    mock_receipt_response(),\n    }\n}\n\npub fn mock_receipt_response() -> ReceiptResponse {\n    ReceiptResponse {\n        service_name: \"mock-service\".to_owned(),\n        method:       \"mock-method\".to_owned(),\n        response:     ServiceResponse::<String> {\n            code:          0,\n            
succeed_data:  \"ok\".to_owned(),\n            error_message: \"\".to_owned(),\n        },\n    }\n}\n\npub fn mock_event() -> Event {\n    Event {\n        service: \"mock-event\".to_owned(),\n        name:    \"mock-method\".to_owned(),\n        data:    \"mock-data\".to_owned(),\n    }\n}\n\n// #####################\n// Mock Block\n// #####################\n\npub fn mock_validator(s: &'static str) -> Validator {\n    Validator {\n        pub_key:        mock_pub_key(s),\n        propose_weight: 1,\n        vote_weight:    1,\n    }\n}\n\npub fn mock_proof() -> Proof {\n    Proof {\n        height:     4,\n        round:      99,\n        block_hash: mock_hash(),\n        signature:  Default::default(),\n        bitmap:     Default::default(),\n    }\n}\n\npub fn mock_block_header() -> BlockHeader {\n    BlockHeader {\n        chain_id:                       mock_hash(),\n        height:                         42,\n        exec_height:                    41,\n        prev_hash:                      mock_hash(),\n        timestamp:                      420_000_000,\n        order_root:                     mock_merkle_root(),\n        order_signed_transactions_hash: mock_hash(),\n        confirm_root:                   vec![mock_hash(), mock_hash()],\n        state_root:                     mock_merkle_root(),\n        receipt_root:                   vec![mock_hash(), mock_hash()],\n        cycles_used:                    vec![999_999],\n        proposer:                       mock_address(),\n        proof:                          mock_proof(),\n        validator_version:              1,\n        validators:                     vec![\n            mock_validator(\"a\"),\n            mock_validator(\"b\"),\n            mock_validator(\"c\"),\n            mock_validator(\"d\"),\n        ],\n    }\n}\n\npub fn mock_block(order_size: usize) -> Block {\n    Block {\n        header:            mock_block_header(),\n        ordered_tx_hashes: (0..order_size).map(|_| 
mock_hash()).collect(),\n    }\n}\n"
  },
  {
    "path": "framework/src/binding/tests/state.rs",
    "content": "extern crate test;\n\nuse std::collections::HashSet;\nuse std::path::PathBuf;\nuse std::sync::Arc;\n\nuse bytes::Bytes;\nuse cita_trie::{MemoryDB, DB};\nuse test::Bencher;\n\nuse protocol::traits::ServiceState;\nuse protocol::types::{Address, Hash, MerkleRoot};\n\nuse crate::binding::state::{GeneralServiceState, MPTTrie, RocksTrieDB};\n\n#[rustfmt::skip]\n/// Bench in AMD Ryzen 7 3800X 8-Core Processor (16 x 4250)\n/// test binding::tests::state::bench_get_cache_hit              ... bench:          47 ns/iter (+/- 3)\n/// test binding::tests::state::bench_get_cache_miss             ... bench:       1,063 ns/iter (+/- 35)\n/// test binding::tests::state::bench_get_without_cache          ... bench:         526 ns/iter (+/- 19)\n/// test binding::tests::state::bench_insert_batch_with_cache    ... bench:   1,113,015 ns/iter (+/- 489,068)\n/// test binding::tests::state::bench_insert_batch_without_cache ... bench:     979,408 ns/iter (+/- 510,953)\n/// test binding::tests::state::bench_insert_with_cache          ... bench:       2,716 ns/iter (+/- 602)\n/// test binding::tests::state::bench_insert_without_cache       ... 
bench:       2,491 ns/iter (+/- 486)\n#[bench]\nfn bench_insert_batch_with_cache(b: &mut Bencher) {\n    let triedb = new_triedb(\"bench_insert_batch_with_cache\");\n\n    let keys = (0..1000).map(|_| rand_bytes()).collect::<Vec<_>>();\n    let values = (0..1000).map(|_| rand_bytes()).collect::<Vec<_>>();\n\n    b.iter(|| {\n        triedb.insert_batch(keys.clone(), values.clone()).unwrap();\n    })\n}\n\n#[bench]\nfn bench_insert_batch_without_cache(b: &mut Bencher) {\n    let triedb = new_triedb(\"bench_insert_batch_without_cache\");\n\n    let keys = (0..1000).map(|_| rand_bytes()).collect::<Vec<_>>();\n    let values = (0..1000).map(|_| rand_bytes()).collect::<Vec<_>>();\n\n    b.iter(|| {\n        triedb.insert_batch_without_cache(keys.clone(), values.clone());\n    })\n}\n\n#[bench]\nfn bench_insert_with_cache(b: &mut Bencher) {\n    let triedb = new_triedb(\"bench_insert_with_cache\");\n\n    let key = rand_bytes();\n    let value = rand_bytes();\n\n    b.iter(|| {\n        triedb.insert(key.clone(), value.clone()).unwrap();\n    })\n}\n\n#[bench]\nfn bench_insert_without_cache(b: &mut Bencher) {\n    let triedb = new_triedb(\"bench_insert_without_cache\");\n\n    let key = rand_bytes();\n    let value = rand_bytes();\n\n    b.iter(|| {\n        triedb.insert_without_cache(key.clone(), value.clone());\n    })\n}\n\n#[bench]\nfn bench_get_cache_hit(b: &mut Bencher) {\n    let triedb = new_triedb(\"bench_get_cache_hit\");\n\n    let keys = (0..1000).map(|_| rand_bytes()).collect::<Vec<_>>();\n    let values = (0..1000).map(|_| rand_bytes()).collect::<Vec<_>>();\n\n    triedb.insert_batch(keys.clone(), values).unwrap();\n\n    let key = keys[0].clone();\n    b.iter(|| {\n        let _ = triedb.get(&key).unwrap();\n    })\n}\n\n#[bench]\nfn bench_get_cache_miss(b: &mut Bencher) {\n    let triedb = new_triedb(\"bench_get_cache_miss\");\n\n    let keys = (0..1000).map(|_| rand_bytes()).collect::<Vec<_>>();\n    let values = (0..1000).map(|_| 
rand_bytes()).collect::<Vec<_>>();\n\n    triedb.insert_batch(keys.clone(), values).unwrap();\n\n    let keys = keys.iter().collect::<HashSet<_>>();\n    let key = {\n        let mut tmp = rand_bytes();\n        while keys.contains(&tmp) {\n            tmp = rand_bytes();\n        }\n        tmp\n    };\n\n    b.iter(|| {\n        let _ = triedb.get(&key).unwrap();\n    })\n}\n\n#[bench]\nfn bench_get_without_cache(b: &mut Bencher) {\n    let triedb = new_triedb(\"bench_get_without_cache\");\n\n    let keys = (0..1000).map(|_| rand_bytes()).collect::<Vec<_>>();\n    let values = (0..1000).map(|_| rand_bytes()).collect::<Vec<_>>();\n\n    triedb.insert_batch_without_cache(keys.clone(), values);\n\n    let key = keys[0].clone();\n    b.iter(|| {\n        let _ = triedb.get_without_cache(&key).unwrap();\n    })\n}\n\n#[test]\nfn test_trie_db() {\n    let triedb = new_triedb(\"test_trie_db\");\n    let key = rand_bytes();\n    let value = rand_bytes();\n\n    triedb.insert(key.clone(), value.clone()).unwrap();\n    assert_eq!(triedb.get(&key).unwrap().unwrap(), value);\n\n    let keys = (0..3000).map(|_| rand_bytes()).collect::<Vec<_>>();\n    let values = (0..3000).map(|_| rand_bytes()).collect::<Vec<_>>();\n    triedb.insert_batch(keys.clone(), values.clone()).unwrap();\n\n    // Iterator adapters are lazy; use a for loop so the assertions actually run.\n    for (k, v) in keys.iter().zip(values.iter()) {\n        assert_eq!(&triedb.get(k).unwrap().unwrap(), v);\n    }\n    assert_eq!(triedb.cache().len(), 3001);\n\n    triedb.flush().unwrap();\n    assert_eq!(triedb.cache().len(), 2000);\n\n    for k in keys.iter() {\n        assert!(triedb.contains(k).unwrap());\n    }\n}\n\n#[test]\nfn test_state_insert() {\n    let memdb = Arc::new(MemoryDB::new(false));\n    let mut state = new_state(Arc::clone(&memdb), None);\n\n    let key = Hash::digest(Bytes::from(\"key\".to_owned()));\n    let value = Hash::digest(Bytes::from(\"value\".to_owned()));\n    state.insert(key.clone(), value.clone()).unwrap();\n    let val: Hash = 
state.get(&key).unwrap().unwrap();\n    assert_eq!(val, value);\n\n    state.stash().unwrap();\n    let new_root = state.commit().unwrap();\n\n    let val: Hash = state.get(&key).unwrap().unwrap();\n    assert_eq!(val, value);\n\n    let new_state = new_state(Arc::clone(&memdb), Some(new_root));\n    let val: Hash = new_state.get(&key).unwrap().unwrap();\n    assert_eq!(val, value);\n}\n\n#[test]\nfn test_state_account() {\n    let memdb = Arc::new(MemoryDB::new(false));\n    let mut state = new_state(Arc::clone(&memdb), None);\n\n    let address = Address::from_hash(Hash::digest(Bytes::from(\"test-address\"))).unwrap();\n    let key = Hash::digest(Bytes::from(\"key\".to_owned()));\n    let value = Hash::digest(Bytes::from(\"value\".to_owned()));\n\n    state\n        .set_account_value(&address, key.clone(), value.clone())\n        .unwrap();\n    let val: Hash = state.get_account_value(&address, &key).unwrap().unwrap();\n    assert_eq!(val, value);\n\n    state.stash().unwrap();\n    let new_root = state.commit().unwrap();\n\n    let new_state = new_state(Arc::clone(&memdb), Some(new_root));\n    let val: Hash = new_state\n        .get_account_value(&address, &key)\n        .unwrap()\n        .unwrap();\n    assert_eq!(val, value);\n}\n\npub fn new_state(memdb: Arc<MemoryDB>, root: Option<MerkleRoot>) -> GeneralServiceState<MemoryDB> {\n    let trie = match root {\n        Some(root) => MPTTrie::from(root, memdb).unwrap(),\n        None => MPTTrie::new(memdb),\n    };\n\n    GeneralServiceState::new(trie)\n}\n\nfn new_triedb(name: &str) -> RocksTrieDB {\n    let mut path = PathBuf::from(\"./free-space/\");\n    path.push(name);\n    RocksTrieDB::new(path, false, 1024, 2000).unwrap()\n}\n\nfn rand_bytes() -> Vec<u8> {\n    (0..32).map(|_| rand::random::<u8>()).collect::<Vec<u8>>()\n}\n"
  },
  {
    "path": "framework/src/binding/tests/store.rs",
    "content": "use std::cell::RefCell;\nuse std::rc::Rc;\nuse std::sync::Arc;\n\nuse bytes::Bytes;\nuse cita_trie::MemoryDB;\n\nuse protocol::traits::{StoreArray, StoreBool, StoreMap, StoreString, StoreUint64};\nuse protocol::types::Hash;\n\nuse crate::binding::store::{\n    DefaultStoreArray, DefaultStoreBool, DefaultStoreMap, DefaultStoreString, DefaultStoreUint64,\n};\nuse crate::binding::tests::state::new_state;\n\n#[test]\nfn test_default_store_bool() {\n    let memdb = Arc::new(MemoryDB::new(false));\n    let state = new_state(Arc::clone(&memdb), None);\n\n    let mut sb = DefaultStoreBool::new(Rc::new(RefCell::new(state)), \"test\");\n\n    assert_eq!(sb.get(), false);\n    sb.set(true);\n    assert_eq!(sb.get(), true);\n    sb.set(false);\n    assert_eq!(sb.get(), false);\n}\n\n#[test]\nfn test_default_store_uint64() {\n    let memdb = Arc::new(MemoryDB::new(false));\n    let state = new_state(Arc::clone(&memdb), None);\n\n    let mut su = DefaultStoreUint64::new(Rc::new(RefCell::new(state)), \"test\");\n\n    assert_eq!(su.get(), 0u64);\n    su.set(8u64);\n    assert_eq!(su.get(), 8u64);\n\n    assert_eq!(su.safe_add(12u64), false);\n    assert_eq!(su.get(), 20u64);\n\n    assert_eq!(su.safe_sub(10u64), false);\n    assert_eq!(su.get(), 10u64);\n\n    assert_eq!(su.safe_mul(8u64), false);\n    assert_eq!(su.get(), 80u64);\n\n    assert_eq!(su.safe_div(10u64), false);\n    assert_eq!(su.get(), 8u64);\n\n    assert_eq!(su.safe_pow(2u32), false);\n    assert_eq!(su.get(), 64u64);\n\n    assert_eq!(su.safe_rem(5u64), false);\n    assert_eq!(su.get(), 4u64);\n}\n\n#[test]\nfn test_default_store_string() {\n    let memdb = Arc::new(MemoryDB::new(false));\n    let state = new_state(Arc::clone(&memdb), None);\n\n    let rs = Rc::new(RefCell::new(state));\n    let mut ss = DefaultStoreString::new(Rc::clone(&rs), \"test\");\n\n    assert_eq!(ss.get(), \"\");\n\n    ss.set(\"\");\n    assert_eq!(ss.get(), \"\");\n    assert_eq!(ss.is_empty(), true);\n\n    
ss.set(\"ok\");\n    assert_eq!(ss.get(), String::from(\"ok\"));\n    assert_eq!(ss.len(), 2u64);\n}\n\n#[test]\nfn test_default_store_map() {\n    let memdb = Arc::new(MemoryDB::new(false));\n    let state = new_state(Arc::clone(&memdb), None);\n    let rs = Rc::new(RefCell::new(state));\n\n    let mut sm = DefaultStoreMap::<_, Hash, Bytes>::new(Rc::clone(&rs), \"test\");\n\n    assert_eq!(sm.get(&Hash::digest(Bytes::from(\"key_1\"))).is_none(), true);\n    sm.insert(Hash::digest(Bytes::from(\"key_1\")), Bytes::from(\"val_1\"));\n    sm.insert(Hash::digest(Bytes::from(\"key_2\")), Bytes::from(\"val_2\"));\n\n    {\n        let mut it = sm.iter();\n        assert_eq!(\n            it.next().unwrap(),\n            (Hash::digest(Bytes::from(\"key_2\")), Bytes::from(\"val_2\"))\n        );\n        assert_eq!(\n            it.next().unwrap(),\n            (Hash::digest(Bytes::from(\"key_1\")), Bytes::from(\"val_1\"))\n        );\n        assert_eq!(it.next().is_none(), true);\n    }\n\n    assert_eq!(\n        sm.get(&Hash::digest(Bytes::from(\"key_1\"))).unwrap(),\n        Bytes::from(\"val_1\")\n    );\n    assert_eq!(\n        sm.get(&Hash::digest(Bytes::from(\"key_2\"))).unwrap(),\n        Bytes::from(\"val_2\")\n    );\n\n    sm.remove(&Hash::digest(Bytes::from(\"key_1\"))).unwrap();\n\n    assert_eq!(sm.contains(&Hash::digest(Bytes::from(\"key_1\"))), false);\n    assert_eq!(sm.len(), 1u64);\n\n    let sm = DefaultStoreMap::<_, Hash, Bytes>::new(Rc::clone(&rs), \"test\");\n    assert_eq!(\n        sm.get(&Hash::digest(Bytes::from(\"key_2\"))).unwrap(),\n        Bytes::from(\"val_2\")\n    );\n}\n\n#[test]\nfn test_default_store_array() {\n    let memdb = Arc::new(MemoryDB::new(false));\n    let state = new_state(Arc::clone(&memdb), None);\n    let rs = Rc::new(RefCell::new(state));\n\n    let mut sa = DefaultStoreArray::<_, Bytes>::new(Rc::clone(&rs), \"test\");\n\n    assert_eq!(sa.len(), 0u64);\n    assert_eq!(sa.get(0u64).is_none(), true);\n\n    
sa.push(Bytes::from(\"111\"));\n    sa.push(Bytes::from(\"222\"));\n\n    assert_eq!(sa.get(3u64).is_none(), true);\n\n    {\n        let mut it = sa.iter();\n        assert_eq!(it.next().unwrap(), (0u64, Bytes::from(\"111\")));\n        assert_eq!(it.next().unwrap(), (1u64, Bytes::from(\"222\")));\n        assert_eq!(it.next().is_none(), true);\n    }\n\n    assert_eq!(sa.get(0u64).unwrap(), Bytes::from(\"111\"));\n    assert_eq!(sa.get(1u64).unwrap(), Bytes::from(\"222\"));\n\n    sa.remove(0u64);\n\n    assert_eq!(sa.len(), 1u64);\n    assert_eq!(sa.get(0u64).unwrap(), Bytes::from(\"222\"));\n}\n"
  },
  {
    "path": "framework/src/executor/error.rs",
    "content": "use std::any::Any;\n\nuse derive_more::Display;\n\nuse protocol::{ProtocolError, ProtocolErrorKind};\n\n#[derive(Debug, Display)]\npub enum ExecutorError {\n    #[display(fmt = \"service {:?} was not found\", service)]\n    NotFoundService { service: String },\n    #[display(fmt = \"service {:?} method {:?} was not found\", service, method)]\n    NotFoundMethod { service: String, method: String },\n    #[display(fmt = \"Failed to parse payload as JSON: {:?}\", _0)]\n    JsonParse(serde_json::Error),\n\n    #[display(fmt = \"Failed to initialize service genesis: {:?}\", _0)]\n    InitService(String),\n    #[display(fmt = \"Failed to query service: {:?}\", _0)]\n    QueryService(String),\n    #[display(fmt = \"Failed to call service: {:?}\", _0)]\n    CallService(String),\n\n    #[display(fmt = \"Tx hook panicked: {:?}\", _0)]\n    TxHook(Box<dyn Any + Send>),\n}\n\nimpl std::error::Error for ExecutorError {}\n\nimpl From<ExecutorError> for ProtocolError {\n    fn from(err: ExecutorError) -> ProtocolError {\n        ProtocolError::new(ProtocolErrorKind::Executor, Box::new(err))\n    }\n}\n"
  },
  {
    "path": "framework/src/executor/factory.rs",
    "content": "use std::sync::Arc;\n\nuse protocol::traits::{Executor, ExecutorFactory, ServiceMapping, Storage};\nuse protocol::types::MerkleRoot;\nuse protocol::ProtocolResult;\n\nuse crate::executor::ServiceExecutor;\n\npub struct ServiceExecutorFactory;\n\nimpl<DB: 'static + cita_trie::DB, S: 'static + Storage, Mapping: 'static + ServiceMapping>\n    ExecutorFactory<DB, S, Mapping> for ServiceExecutorFactory\n{\n    fn from_root(\n        root: MerkleRoot,\n        db: Arc<DB>,\n        storage: Arc<S>,\n        mapping: Arc<Mapping>,\n    ) -> ProtocolResult<Box<dyn Executor>> {\n        let executor = ServiceExecutor::with_root(root, db, storage, mapping)?;\n        Ok(Box::new(executor))\n    }\n}\n"
  },
  {
    "path": "framework/src/executor/mod.rs",
    "content": "mod error;\nmod factory;\n#[cfg(test)]\nmod tests;\n\npub use factory::ServiceExecutorFactory;\n\nuse std::{\n    cell::RefCell,\n    collections::HashMap,\n    marker::PhantomData,\n    ops::{Deref, DerefMut},\n    panic::{self, AssertUnwindSafe},\n    rc::Rc,\n    sync::Arc,\n};\n\nuse cita_trie::DB as TrieDB;\n\nuse common_apm::muta_apm;\nuse protocol::traits::{\n    Context, Executor, ExecutorParams, ExecutorResp, Service, ServiceMapping, ServiceResponse,\n    ServiceState, Storage,\n};\nuse protocol::types::{\n    Address, Event, Hash, MerkleRoot, Receipt, ReceiptResponse, ServiceContext,\n    ServiceContextParams, ServiceParam, SignedTransaction, TransactionRequest,\n};\nuse protocol::{ProtocolError, ProtocolResult};\n\nuse crate::binding::sdk::{DefaultChainQuerier, DefaultSDKFactory};\nuse crate::binding::state::{GeneralServiceState, MPTTrie};\nuse crate::executor::error::ExecutorError;\n\nconst SERVICE_NOT_FOUND_CODE: u64 = 62077;\n\ntrait TxHooks {\n    fn before(\n        &mut self,\n        _: Context,\n        _: ServiceContext,\n    ) -> ProtocolResult<Vec<ServiceResponse<String>>> {\n        Ok(vec![ServiceResponse::from_succeed(\n            \"default_implement\".to_owned(),\n        )])\n    }\n\n    fn after(\n        &mut self,\n        _: Context,\n        _: ServiceContext,\n    ) -> ProtocolResult<Vec<ServiceResponse<String>>> {\n        Ok(vec![ServiceResponse::from_succeed(\n            \"default_implement\".to_owned(),\n        )])\n    }\n}\n\nimpl TxHooks for () {}\n\nenum HookType {\n    Before,\n    After,\n}\n\n#[derive(Clone, Copy)]\nenum ExecType {\n    Read,\n    Write,\n}\n\npub struct ServiceStateMap<DB: TrieDB>(HashMap<String, Rc<RefCell<GeneralServiceState<DB>>>>);\n\nimpl<DB: TrieDB> ServiceStateMap<DB> {\n    fn new() -> ServiceStateMap<DB> {\n        Self(HashMap::new())\n    }\n}\n\nimpl<DB: TrieDB> Deref for ServiceStateMap<DB> {\n    type Target = HashMap<String, Rc<RefCell<GeneralServiceState<DB>>>>;\n\n    
fn deref(&self) -> &Self::Target {\n        &self.0\n    }\n}\n\nimpl<DB: TrieDB> DerefMut for ServiceStateMap<DB> {\n    fn deref_mut(&mut self) -> &mut Self::Target {\n        &mut self.0\n    }\n}\n\nimpl<DB: TrieDB> ServiceStateMap<DB> {\n    fn stash(&self) -> ProtocolResult<()> {\n        for state in self.0.values() {\n            state.borrow_mut().stash()?;\n        }\n\n        Ok(())\n    }\n\n    fn revert_cache(&self) -> ProtocolResult<()> {\n        for state in self.0.values() {\n            state.borrow_mut().revert_cache()?;\n        }\n\n        Ok(())\n    }\n}\n\nstruct CommitHooks<DB: TrieDB> {\n    inner:  Vec<Rc<RefCell<Box<dyn Service>>>>,\n    states: Rc<ServiceStateMap<DB>>,\n}\n\nimpl<DB: TrieDB> CommitHooks<DB> {\n    fn new(\n        hooks: Vec<Rc<RefCell<Box<dyn Service>>>>,\n        states: Rc<ServiceStateMap<DB>>,\n    ) -> CommitHooks<DB> {\n        Self {\n            inner: hooks,\n            states,\n        }\n    }\n\n    // bagua kan 101 :)\n    fn kan<H: FnOnce() -> ServiceResponse<String>>(\n        _context: ServiceContext,\n        states: Rc<ServiceStateMap<DB>>,\n        hook: H,\n    ) -> ProtocolResult<ServiceResponse<String>> {\n        match panic::catch_unwind(AssertUnwindSafe(hook)) {\n            Ok(res) => {\n                states.stash()?;\n\n                Ok(res)\n            }\n            Err(e) => {\n                states.revert_cache()?;\n                // something really bad happens, chain maybe fork, must halt\n                Err(ProtocolError::from(ExecutorError::TxHook(e)))\n            }\n        }\n    }\n}\n\nimpl<DB: TrieDB> TxHooks for CommitHooks<DB> {\n    fn before(\n        &mut self,\n        _context: Context,\n        service_context: ServiceContext,\n    ) -> ProtocolResult<Vec<ServiceResponse<String>>> {\n        let mut ret: Vec<ServiceResponse<String>> = Vec::new();\n        for hook in self.inner.iter_mut() {\n            let resp = Self::kan(service_context.clone(), 
Rc::clone(&self.states), || {\n                hook.borrow_mut().tx_hook_before_(service_context.clone())\n            })?;\n            ret.push(resp);\n        }\n\n        Ok(ret)\n    }\n\n    fn after(\n        &mut self,\n        _context: Context,\n        service_context: ServiceContext,\n    ) -> ProtocolResult<Vec<ServiceResponse<String>>> {\n        let mut ret: Vec<ServiceResponse<String>> = Vec::new();\n\n        for hook in self.inner.iter_mut() {\n            let resp = Self::kan(service_context.clone(), Rc::clone(&self.states), || {\n                hook.borrow_mut().tx_hook_after_(service_context.clone())\n            })?;\n            ret.push(resp);\n        }\n\n        Ok(ret)\n    }\n}\n\npub struct ServiceExecutor<S: Storage, DB: TrieDB, Mapping: ServiceMapping> {\n    service_mapping: Arc<Mapping>,\n    states:          Rc<ServiceStateMap<DB>>,\n    root_state:      GeneralServiceState<DB>,\n    services:        HashMap<String, Rc<RefCell<Box<dyn Service>>>>,\n\n    phantom: PhantomData<S>,\n}\n\nimpl<S: 'static + Storage, DB: 'static + TrieDB, Mapping: 'static + ServiceMapping>\n    ServiceExecutor<S, DB, Mapping>\n{\n    pub fn create_genesis(\n        services: Vec<ServiceParam>,\n        trie_db: Arc<DB>,\n        storage: Arc<S>,\n        mapping: Arc<Mapping>,\n    ) -> ProtocolResult<MerkleRoot> {\n        let querier = Rc::new(DefaultChainQuerier::new(Arc::clone(&storage)));\n\n        let mut states = ServiceStateMap::new();\n        for name in mapping.list_service_name().into_iter() {\n            let trie = MPTTrie::new(Arc::clone(&trie_db));\n\n            states.insert(name, Rc::new(RefCell::new(GeneralServiceState::new(trie))));\n        }\n\n        let states = Rc::new(states);\n        let sdk_factory = DefaultSDKFactory::new(Rc::clone(&states), Rc::clone(&querier));\n\n        for params in services.into_iter() {\n            let state = states\n                .get(&params.name)\n                
.ok_or(ExecutorError::NotFoundService {\n                    service: params.name.to_owned(),\n                })?;\n\n            let mut service = mapping.get_service(&params.name, &sdk_factory)?;\n            panic::catch_unwind(AssertUnwindSafe(|| {\n                service.genesis_(params.payload.clone())\n            }))\n            .map_err(|e| ProtocolError::from(ExecutorError::InitService(format!(\"{:?}\", e))))?;\n\n            state.borrow_mut().stash()?;\n        }\n\n        let trie = MPTTrie::new(Arc::clone(&trie_db));\n        let mut root_state = GeneralServiceState::new(trie);\n        for (name, state) in states.iter() {\n            let root = state.borrow_mut().commit()?;\n            root_state.insert(name.to_owned(), root)?;\n        }\n        root_state.stash()?;\n        root_state.commit()\n    }\n\n    pub fn with_root(\n        root: MerkleRoot,\n        trie_db: Arc<DB>,\n        storage: Arc<S>,\n        service_mapping: Arc<Mapping>,\n    ) -> ProtocolResult<Self> {\n        let querier = Rc::new(DefaultChainQuerier::new(Arc::clone(&storage)));\n        let trie = MPTTrie::from(root, Arc::clone(&trie_db))?;\n        let root_state = GeneralServiceState::new(trie);\n        let list_service_name = service_mapping.list_service_name();\n\n        let mut states = ServiceStateMap::new();\n        for name in list_service_name.iter() {\n            let trie = match root_state.get(name)? 
{\n                Some(service_root) => MPTTrie::from(service_root, Arc::clone(&trie_db))?,\n                None => MPTTrie::new(Arc::clone(&trie_db)),\n            };\n\n            let service_state = GeneralServiceState::new(trie);\n            states.insert(name.to_owned(), Rc::new(RefCell::new(service_state)));\n        }\n\n        let states = Rc::new(states);\n        let sdk_factory = DefaultSDKFactory::new(Rc::clone(&states), Rc::clone(&querier));\n\n        let mut services = HashMap::new();\n        for name in list_service_name.iter() {\n            let service = service_mapping.get_service(name, &sdk_factory)?;\n            services.insert(name.clone(), Rc::new(RefCell::new(service)));\n        }\n\n        Ok(Self {\n            service_mapping,\n            states,\n            root_state,\n            services,\n            phantom: PhantomData,\n        })\n    }\n\n    #[muta_apm::derive::tracing_span(kind = \"executor.commit\")]\n    fn commit(&mut self, ctx: Context) -> ProtocolResult<MerkleRoot> {\n        for (name, state) in self.states.iter() {\n            let root = state.borrow_mut().commit()?;\n            self.root_state.insert(name.to_owned(), root)?;\n        }\n        self.root_state.stash()?;\n        self.root_state.commit()\n    }\n\n    fn stash(&mut self) -> ProtocolResult<()> {\n        self.states.stash()\n    }\n\n    fn revert_cache(&mut self) -> ProtocolResult<()> {\n        self.states.revert_cache()\n    }\n\n    #[muta_apm::derive::tracing_span(\n        kind = \"executor.before_hook\",\n        tags = \"{'hook_type': 'hook_type'}\"\n    )]\n    fn hook(\n        &mut self,\n        ctx: Context,\n        hook_type: HookType,\n        exec_params: &ExecutorParams,\n    ) -> ProtocolResult<()> {\n        for name in self.service_mapping.list_service_name().into_iter() {\n            let service = self.get_service(name.as_str())?;\n\n            let hook_ret = match hook_type {\n                HookType::Before => 
panic::catch_unwind(AssertUnwindSafe(|| {\n                    service.borrow_mut().hook_before_(exec_params)\n                })),\n                HookType::After => panic::catch_unwind(AssertUnwindSafe(|| {\n                    service.borrow_mut().hook_after_(exec_params)\n                })),\n            };\n\n            if hook_ret.is_err() {\n                self.revert_cache()?;\n            } else {\n                self.stash()?;\n            }\n        }\n        Ok(())\n    }\n\n    fn get_service(&self, service: &str) -> ProtocolResult<Rc<RefCell<Box<dyn Service>>>> {\n        self.services\n            .get(service)\n            .map(|s| Rc::clone(s))\n            .ok_or_else(|| {\n                ExecutorError::NotFoundService {\n                    service: service.to_owned(),\n                }\n                .into()\n            })\n    }\n\n    fn get_context(\n        &self,\n        tx_hash: Option<Hash>,\n        nonce: Option<Hash>,\n        caller: &Address,\n        cycles_price: u64,\n        cycles_limit: u64,\n        params: &ExecutorParams,\n        request: &TransactionRequest,\n        event: Rc<RefCell<Vec<Event>>>,\n    ) -> ProtocolResult<ServiceContext> {\n        let ctx_params = ServiceContextParams {\n            tx_hash,\n            nonce,\n            cycles_limit,\n            cycles_price,\n            cycles_used: Rc::new(RefCell::new(0)),\n            caller: caller.clone(),\n            height: params.height,\n            timestamp: params.timestamp,\n            service_name: request.service_name.to_owned(),\n            service_method: request.method.to_owned(),\n            service_payload: request.payload.to_owned(),\n            extra: None,\n            events: event,\n        };\n\n        Ok(ServiceContext::new(ctx_params))\n    }\n\n    fn get_tx_hooks(&self, exec_type: ExecType) -> Box<dyn TxHooks> {\n        match exec_type {\n            ExecType::Read => Box::new(()),\n            ExecType::Write => 
{\n                let mut tx_hooks = vec![];\n\n                for name in self.service_mapping.list_service_name().into_iter() {\n                    let tx_hook_service = self.get_service(name.as_str()).expect(\"no service\");\n                    tx_hooks.push(tx_hook_service);\n                }\n\n                Box::new(CommitHooks::new(tx_hooks, Rc::clone(&self.states)))\n            }\n        }\n    }\n\n    fn catch_call(\n        &mut self,\n        context: Context,\n        service_context: ServiceContext,\n        exec_type: ExecType,\n        event: Rc<RefCell<Vec<Event>>>,\n    ) -> ProtocolResult<ServiceResponse<String>> {\n        let mut tx_hooks = self.get_tx_hooks(exec_type);\n\n        let resp = tx_hooks.before(context.clone(), service_context.clone())?;\n        self.states.stash()?;\n\n        let event_index = event.borrow_mut().len();\n\n        let ret = if resp.iter().any(|r| r.is_error()) {\n            self.revert_cache()?;\n            event.borrow_mut().truncate(event_index);\n            ServiceResponse::from_error(65535, \"skip_tx_run\".to_owned())\n        } else {\n            match panic::catch_unwind(AssertUnwindSafe(|| {\n                self.call(service_context.clone(), exec_type)\n            })) {\n                Ok(r) => Ok(r),\n                Err(e) => {\n                    self.revert_cache()?;\n                    log::error!(\"inner chain error occurred when calling service: {:?}\", e);\n                    Err(ProtocolError::from(ExecutorError::CallService(format!(\n                        \"{:?}\",\n                        e\n                    ))))\n                }\n            }?\n        };\n\n        if ret.is_error() {\n            event.borrow_mut().truncate(event_index);\n            self.states.revert_cache()?;\n            service_context.cancel(\"tx_exec_return_code_not_zero\".to_owned());\n        }\n\n        let resp = tx_hooks.after(context, service_context)?;\n\n        if 
resp.iter().any(|r| r.is_error()) {\n            event.borrow_mut().truncate(event_index);\n            self.states.revert_cache()?;\n        } else {\n            self.states.stash()?;\n        }\n\n        Ok(ret)\n    }\n\n    fn call(&self, context: ServiceContext, exec_type: ExecType) -> ServiceResponse<String> {\n        let service_name = context.get_service_name();\n        let service = self.get_service(service_name);\n\n        if service.is_err() {\n            return ServiceResponse::from_error(\n                SERVICE_NOT_FOUND_CODE,\n                \"service not found\".to_owned(),\n            );\n        }\n\n        let service = service.unwrap();\n        match exec_type {\n            ExecType::Read => service.borrow().read_(context),\n            ExecType::Write => service.borrow_mut().write_(context),\n        }\n    }\n}\n\nimpl<S: 'static + Storage, DB: 'static + TrieDB, Mapping: 'static + ServiceMapping> Executor\n    for ServiceExecutor<S, DB, Mapping>\n{\n    #[muta_apm::derive::tracing_span(kind = \"executor.exec\", logs = \"{'tx_len': 'txs.len()'}\")]\n    fn exec(\n        &mut self,\n        ctx: Context,\n        params: &ExecutorParams,\n        txs: &[SignedTransaction],\n    ) -> ProtocolResult<ExecutorResp> {\n        self.hook(ctx.clone(), HookType::Before, params)?;\n\n        let mut receipts = txs\n            .iter()\n            .map(|stx| {\n                let event = Rc::new(RefCell::new(vec![]));\n                let service_context = self.get_context(\n                    Some(stx.tx_hash.clone()),\n                    Some(stx.raw.nonce.clone()),\n                    &stx.raw.sender,\n                    stx.raw.cycles_price,\n                    stx.raw.cycles_limit,\n                    params,\n                    &stx.raw.request,\n                    Rc::clone(&event),\n                )?;\n\n                let exec_resp = self.catch_call(\n                    ctx.clone(),\n                    
service_context.clone(),\n                    ExecType::Write,\n                    Rc::clone(&event),\n                )?;\n                Ok(Receipt {\n                    state_root:  MerkleRoot::from_empty(),\n                    height:      service_context.get_current_height(),\n                    tx_hash:     stx.tx_hash.clone(),\n                    cycles_used: service_context.get_cycles_used(),\n                    events:      service_context.get_events(),\n                    response:    ReceiptResponse {\n                        service_name: service_context.get_service_name().to_owned(),\n                        method:       service_context.get_service_method().to_owned(),\n                        response:     exec_resp,\n                    },\n                })\n            })\n            .collect::<Result<Vec<Receipt>, ProtocolError>>()?;\n\n        self.hook(ctx.clone(), HookType::After, params)?;\n\n        let state_root = self.commit(ctx)?;\n        let mut all_cycles_used = 0;\n\n        for receipt in receipts.iter_mut() {\n            receipt.state_root = state_root.clone();\n            all_cycles_used += receipt.cycles_used;\n        }\n\n        Ok(ExecutorResp {\n            receipts,\n            all_cycles_used,\n            state_root,\n        })\n    }\n\n    fn read(\n        &self,\n        params: &ExecutorParams,\n        caller: &Address,\n        cycles_price: u64,\n        request: &TransactionRequest,\n    ) -> ProtocolResult<ServiceResponse<String>> {\n        let context = self.get_context(\n            None,\n            None,\n            caller,\n            cycles_price,\n            std::u64::MAX,\n            params,\n            request,\n            Rc::new(RefCell::new(vec![])),\n        )?;\n        panic::catch_unwind(AssertUnwindSafe(|| self.call(context, ExecType::Read)))\n            .map_err(|e| ProtocolError::from(ExecutorError::QueryService(format!(\"{:?}\", e))))\n    }\n}\n"
  },
  {
    "path": "framework/src/executor/tests/framework.rs",
    "content": "use crate::executor::ServiceExecutor;\n\nuse async_trait::async_trait;\nuse binding_macro::{cycles, service, tx_hook_after, tx_hook_before};\nuse bytes::{Bytes, BytesMut};\nuse cita_trie::MemoryDB;\nuse protocol::traits::{\n    CommonStorage, Context, Executor, ExecutorParams, ExecutorResp, SDKFactory, Service,\n    ServiceMapping, ServiceResponse, ServiceSDK, Storage,\n};\nuse protocol::types::{\n    Address, Block, BlockHeader, Genesis, Hash, Proof, RawTransaction, Receipt, ServiceContext,\n    SignedTransaction, TransactionRequest,\n};\nuse protocol::ProtocolResult;\nuse std::sync::Arc;\n\nlazy_static::lazy_static! {\n   pub static ref ADMIN_ACCOUNT: Address = \"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\".parse().unwrap();\n}\n\nmacro_rules! exec_txs {\n    ($exec_cycle_limit: expr, $tx_cycle_limit: expr $(, ($service: expr, $method: expr, $payload: expr))*) => {\n        {\n            let memdb = Arc::new(MemoryDB::new(false));\n            let arcs = Arc::new(MockStorage {});\n\n            let toml_str = include_str!(\"./framework_genesis_services.toml\");\n            let genesis: Genesis = toml::from_str(toml_str).unwrap();\n\n            let root = ServiceExecutor::create_genesis(\n                genesis.services,\n                Arc::clone(&memdb),\n                Arc::new(MockStorage {}),\n                Arc::new(MockServiceMapping {}),\n            )\n            .unwrap();\n\n            let mut executor = ServiceExecutor::with_root(\n                root.clone(),\n                Arc::clone(&memdb),\n                Arc::clone(&arcs),\n                Arc::new(MockServiceMapping {}),\n            )\n            .unwrap();\n\n            let params = ExecutorParams {\n                state_root:   root,\n                height:       1,\n                timestamp:    0,\n                cycles_limit: $exec_cycle_limit,\n                proposer:     ADMIN_ACCOUNT.clone(),\n            };\n\n            let mut stxs = 
Vec::new();\n            $(stxs.push(construct_stx(\n                    $tx_cycle_limit,\n                    $service.to_owned(),\n                    $method.to_owned(),\n                    serde_json::to_string(&$payload).unwrap()\n                ));\n            )*\n\n            let resp : ExecutorResp  = executor.exec(Context::new(), &params, &stxs).unwrap();\n\n            resp\n        }\n    };\n}\n\npub fn construct_stx(\n    tx_cycle_limit: u64,\n    service_name: String,\n    method: String,\n    payload: String,\n) -> SignedTransaction {\n    let raw_tx = RawTransaction {\n        chain_id:     Hash::from_empty(),\n        nonce:        Hash::from_empty(),\n        timeout:      0,\n        cycles_price: 1,\n        cycles_limit: tx_cycle_limit,\n        request:      TransactionRequest {\n            service_name,\n            method,\n            payload,\n        },\n        sender:       ADMIN_ACCOUNT.clone(),\n    };\n\n    SignedTransaction {\n        raw:       raw_tx,\n        tx_hash:   Hash::from_empty(),\n        pubkey:    Bytes::from(\n            hex::decode(\"031288a6788678c25952eba8693b2f278f66e2187004b64ac09416d07f83f96d5b\")\n                .unwrap(),\n        ),\n        signature: BytesMut::from(\"\").freeze(),\n    }\n}\n\nstruct MockStorage;\n\n#[async_trait]\nimpl CommonStorage for MockStorage {\n    async fn insert_block(&self, _ctx: Context, _block: Block) -> ProtocolResult<()> {\n        unimplemented!()\n    }\n\n    async fn get_block(&self, _ctx: Context, _height: u64) -> ProtocolResult<Option<Block>> {\n        unimplemented!()\n    }\n\n    async fn get_block_header(\n        &self,\n        _ctx: Context,\n        _height: u64,\n    ) -> ProtocolResult<Option<BlockHeader>> {\n        unimplemented!()\n    }\n\n    async fn set_block(&self, _ctx: Context, _block: Block) -> ProtocolResult<()> {\n        unimplemented!()\n    }\n\n    async fn remove_block(&self, _ctx: Context, _height: u64) -> ProtocolResult<()> {\n    
    unimplemented!()\n    }\n\n    async fn get_latest_block(&self, _ctx: Context) -> ProtocolResult<Block> {\n        unimplemented!()\n    }\n\n    async fn set_latest_block(&self, _ctx: Context, _block: Block) -> ProtocolResult<()> {\n        unimplemented!()\n    }\n\n    async fn get_latest_block_header(&self, _ctx: Context) -> ProtocolResult<BlockHeader> {\n        unimplemented!()\n    }\n}\n\n#[async_trait]\nimpl Storage for MockStorage {\n    async fn insert_transactions(\n        &self,\n        _ctx: Context,\n        _: u64,\n        _: Vec<SignedTransaction>,\n    ) -> ProtocolResult<()> {\n        unimplemented!()\n    }\n\n    async fn insert_receipts(&self, _ctx: Context, _: u64, _: Vec<Receipt>) -> ProtocolResult<()> {\n        unimplemented!()\n    }\n\n    async fn update_latest_proof(&self, _ctx: Context, _: Proof) -> ProtocolResult<()> {\n        unimplemented!()\n    }\n\n    async fn get_transaction_by_hash(\n        &self,\n        _ctx: Context,\n        _: &Hash,\n    ) -> ProtocolResult<Option<SignedTransaction>> {\n        unimplemented!()\n    }\n\n    async fn get_transactions(\n        &self,\n        _ctx: Context,\n        _: u64,\n        _: &[Hash],\n    ) -> ProtocolResult<Vec<Option<SignedTransaction>>> {\n        unimplemented!()\n    }\n\n    async fn get_receipt_by_hash(&self, _ctx: Context, _: Hash) -> ProtocolResult<Option<Receipt>> {\n        unimplemented!()\n    }\n\n    async fn get_receipts(\n        &self,\n        _ctx: Context,\n        _: u64,\n        _: Vec<Hash>,\n    ) -> ProtocolResult<Vec<Option<Receipt>>> {\n        unimplemented!()\n    }\n\n    async fn get_latest_proof(&self, _ctx: Context) -> ProtocolResult<Proof> {\n        unimplemented!()\n    }\n}\n\npub struct MockServiceMapping;\n\nimpl ServiceMapping for MockServiceMapping {\n    fn get_service<SDK: 'static + ServiceSDK, Factory: SDKFactory<SDK>>(\n        &self,\n        name: &str,\n        factory: &Factory,\n    ) -> ProtocolResult<Box<dyn 
Service>> {\n        let sdk = factory.get_sdk(name)?;\n\n        let service = match name {\n            \"TestService\" => Box::new(TestService::new(sdk)) as Box<dyn Service>,\n            _ => panic!(\"service not found\"),\n        };\n\n        Ok(service)\n    }\n\n    fn list_service_name(&self) -> Vec<String> {\n        vec![\"TestService\".to_owned()]\n    }\n}\n\npub struct TestService<SDK> {\n    _sdk: SDK,\n}\n\n#[service]\nimpl<SDK: ServiceSDK> TestService<SDK> {\n    pub fn new(sdk: SDK) -> Self {\n        Self { _sdk: sdk }\n    }\n\n    #[cycles(10_000)]\n    #[read]\n    fn test_read(&self, _ctx: ServiceContext) -> ServiceResponse<String> {\n        ServiceResponse::from_succeed(\"\".to_owned())\n    }\n\n    #[cycles(30_000)]\n    #[write]\n    fn test_write(&mut self, ctx: ServiceContext) -> ServiceResponse<String> {\n        ctx.emit_event(\n            \"test_service\".to_owned(),\n            \"write\".to_owned(),\n            \"write\".to_owned(),\n        );\n        ServiceResponse::from_succeed(\"\".to_owned())\n    }\n\n    #[tx_hook_before]\n    fn test_tx_hook_before(&mut self, ctx: ServiceContext) -> ServiceResponse<()> {\n        // emit an event before checking the payload\n        ctx.emit_event(\n            \"test_service\".to_owned(),\n            \"before\".to_owned(),\n            \"before\".to_owned(),\n        );\n        if ctx.get_payload().contains(\"before\") {\n            return ServiceResponse::from_error(2, \"before_error\".to_owned());\n        }\n        ServiceResponse::from_succeed(())\n    }\n\n    #[tx_hook_after]\n    fn test_tx_hook_after(&mut self, ctx: ServiceContext) -> ServiceResponse<()> {\n        if ctx.get_payload().contains(\"after\") {\n            return ServiceResponse::from_error(2, \"after_error\".to_owned());\n        }\n        ctx.emit_event(\n            \"test_service\".to_owned(),\n            \"after\".to_owned(),\n            \"after\".to_owned(),\n        );\n        ServiceResponse::from_succeed(())\n    
}\n}\n\n#[test]\nfn test_tx_hook_ok_ok() {\n    let resp: ExecutorResp =\n        exec_txs!(50000, 50000, (\"TestService\", \"test_write\", \"a test string\"));\n    assert_eq!(3, resp.receipts.get(0).unwrap().events.len());\n\n    let resp: ExecutorResp = exec_txs!(50000, 50000, (\"TestService\", \"test_write\", \"before\"));\n    assert_eq!(2, resp.receipts.get(0).unwrap().events.len());\n    assert!(resp\n        .receipts\n        .get(0)\n        .unwrap()\n        .events\n        .iter()\n        .any(|e| { e.name.as_str() == \"after\" }));\n    assert!(resp\n        .receipts\n        .get(0)\n        .unwrap()\n        .events\n        .iter()\n        .any(|e| { e.name.as_str() == \"before\" }));\n\n    let resp: ExecutorResp = exec_txs!(50000, 50000, (\"TestService\", \"test_write\", \"after\"));\n    assert_eq!(1, resp.receipts.get(0).unwrap().events.len());\n    assert!(resp\n        .receipts\n        .get(0)\n        .unwrap()\n        .events\n        .iter()\n        .any(|e| { e.name.as_str() == \"before\" }));\n\n    let resp: ExecutorResp = exec_txs!(50000, 50000, (\"TestService\", \"test_write\", \"before_after\"));\n    assert_eq!(1, resp.receipts.get(0).unwrap().events.len());\n    assert!(resp\n        .receipts\n        .get(0)\n        .unwrap()\n        .events\n        .iter()\n        .any(|e| { e.name.as_str() == \"before\" }));\n}\n"
  },
  {
    "path": "framework/src/executor/tests/framework_genesis_services.toml",
    "content": "timestamp = 0\nprevhash = \"0x44915be5b6c20b0678cf05fcddbbaa832e25d7e6ac538784cd5c24de00d47472\"\n\n[[services]]\nname = \"TestService\"\npayload = ''\n"
  },
  {
    "path": "framework/src/executor/tests/genesis_services.toml",
    "content": "timestamp = 0\nprevhash = \"0x44915be5b6c20b0678cf05fcddbbaa832e25d7e6ac538784cd5c24de00d47472\"\n\n[[services]]\nname = \"asset\"\npayload = '''{ \"id\": \"0xf56924db538e77bb5951eb5ff0d02b88983c49c45eea30e8ae3e7234b311436c\", \"name\": \"MutaToken\", \"symbol\": \"MT\", \"supply\": 320000011, \"issuer\": \"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\" }'''\n"
  },
  {
    "path": "framework/src/executor/tests/mod.rs",
    "content": "extern crate test;\n\n#[cfg(test)]\nmod framework;\nmod test_service;\n\nuse std::str::FromStr;\nuse std::sync::Arc;\n\nuse async_trait::async_trait;\nuse bytes::{Bytes, BytesMut};\nuse cita_trie::MemoryDB;\nuse test::Bencher;\n\nuse asset::types::{Asset, GetBalanceResponse};\nuse asset::AssetService;\nuse metadata::MetadataService;\nuse protocol::traits::{\n    CommonStorage, Context, Executor, ExecutorParams, SDKFactory, Service, ServiceMapping,\n    ServiceSDK, Storage,\n};\nuse protocol::types::{\n    Address, Block, BlockHeader, Genesis, Hash, Proof, RawTransaction, Receipt, SignedTransaction,\n    TransactionRequest,\n};\nuse protocol::ProtocolResult;\n\nuse crate::executor::{ServiceExecutor, SERVICE_NOT_FOUND_CODE};\nuse test_service::TestService;\n\nmacro_rules! read {\n    ($executor:expr, $params:expr, $caller:expr, $payload:expr) => {{\n        let request = TransactionRequest {\n            service_name: \"test\".to_owned(),\n            method:       \"test_read\".to_owned(),\n            payload:      $payload.to_owned(),\n        };\n\n        $executor\n            .read($params, $caller, 1, &request)\n            .expect(&format!(\"read {}\", $payload))\n    }};\n}\n\npub const PUB_KEY_STR: &str = \"02ef0cb0d7bc6c18b4bea1f5908d9106522b35ab3c399369605d4242525bda7e60\";\n\n#[test]\nfn test_create_genesis() {\n    let toml_str = include_str!(\"./genesis_services.toml\");\n    let genesis: Genesis = toml::from_str(toml_str).unwrap();\n\n    let db = Arc::new(MemoryDB::new(false));\n\n    let root = ServiceExecutor::create_genesis(\n        genesis.services,\n        Arc::clone(&db),\n        Arc::new(MockStorage {}),\n        Arc::new(MockServiceMapping {}),\n    )\n    .unwrap();\n\n    let executor = ServiceExecutor::with_root(\n        root.clone(),\n        Arc::clone(&db),\n        Arc::new(MockStorage {}),\n        Arc::new(MockServiceMapping {}),\n    )\n    .unwrap();\n    let params = ExecutorParams {\n        state_root:   
root,\n        height:       1,\n        timestamp:    0,\n        cycles_limit: std::u64::MAX,\n        proposer:     Address::from_hash(Hash::from_empty()).unwrap(),\n    };\n    let caller = Address::from_str(\"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\").unwrap();\n    let request = TransactionRequest {\n       service_name: \"asset\".to_owned(),\n       method:       \"get_balance\".to_owned(),\n       payload:\n           r#\"{\"asset_id\": \"0xf56924db538e77bb5951eb5ff0d02b88983c49c45eea30e8ae3e7234b311436c\", \"user\": \"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\"}\"#\n               .to_owned(),\n   };\n    let res = executor.read(&params, &caller, 1, &request).unwrap();\n    let resp: GetBalanceResponse = serde_json::from_str(&res.succeed_data).unwrap();\n\n    assert_eq!(resp.balance, 320_000_011);\n}\n\n#[test]\nfn test_exec() {\n    let toml_str = include_str!(\"./genesis_services.toml\");\n    let genesis: Genesis = toml::from_str(toml_str).unwrap();\n\n    let db = Arc::new(MemoryDB::new(false));\n\n    let root = ServiceExecutor::create_genesis(\n        genesis.services,\n        Arc::clone(&db),\n        Arc::new(MockStorage {}),\n        Arc::new(MockServiceMapping {}),\n    )\n    .unwrap();\n\n    let mut executor = ServiceExecutor::with_root(\n        root.clone(),\n        Arc::clone(&db),\n        Arc::new(MockStorage {}),\n        Arc::new(MockServiceMapping {}),\n    )\n    .unwrap();\n\n    let params = ExecutorParams {\n        state_root:   root,\n        height:       1,\n        timestamp:    0,\n        cycles_limit: std::u64::MAX,\n        proposer:     Address::from_hash(Hash::from_empty()).unwrap(),\n    };\n\n    let stx = mock_signed_tx();\n    let txs = vec![stx];\n    let executor_resp = executor.exec(Context::new(), &params, &txs).unwrap();\n    let receipt = &executor_resp.receipts[0];\n\n    assert_eq!(receipt.response.response.code, 0);\n    let asset: Asset = 
serde_json::from_str(&receipt.response.response.succeed_data).unwrap();\n    assert_eq!(asset.name, \"MutaToken2\");\n    assert_eq!(asset.symbol, \"MT2\");\n    assert_eq!(asset.supply, 320_000_011);\n}\n\n#[test]\nfn test_emit_event() {\n    let toml_str = include_str!(\"./genesis_services.toml\");\n    let genesis: Genesis = toml::from_str(toml_str).unwrap();\n\n    let db = Arc::new(MemoryDB::new(false));\n\n    let root = ServiceExecutor::create_genesis(\n        genesis.services,\n        Arc::clone(&db),\n        Arc::new(MockStorage {}),\n        Arc::new(MockServiceMapping {}),\n    )\n    .unwrap();\n\n    let mut executor = ServiceExecutor::with_root(\n        root.clone(),\n        Arc::clone(&db),\n        Arc::new(MockStorage {}),\n        Arc::new(MockServiceMapping {}),\n    )\n    .unwrap();\n\n    let params = ExecutorParams {\n        state_root:   root,\n        height:       1,\n        timestamp:    0,\n        cycles_limit: std::u64::MAX,\n        proposer:     Address::from_hash(Hash::from_empty()).unwrap(),\n    };\n\n    let mut stx = mock_signed_tx();\n    stx.raw.request.service_name = \"test\".to_owned();\n    stx.raw.request.method = \"test_event\".to_owned();\n    stx.raw.request.payload = r#\"{\n        \"key\": \"\",\n        \"value\": \"\",\n        \"extra\": \"\"\n    }\"#\n    .to_owned();\n\n    let txs = vec![stx];\n    let executor_resp = executor.exec(Context::new(), &params, &txs).unwrap();\n    let receipt = &executor_resp.receipts[0];\n\n    assert_eq!(receipt.response.response.code, 0);\n    assert_eq!(receipt.events.len(), 1);\n    assert_eq!(&receipt.events[0].data, \"test\");\n    assert_eq!(&receipt.events[0].name, \"test-name\");\n    assert_eq!(&receipt.events[0].service, \"wow\");\n}\n\n#[test]\nfn test_revert_event_on_exec_error() {\n    let toml_str = include_str!(\"./genesis_services.toml\");\n    let genesis: Genesis = toml::from_str(toml_str).unwrap();\n\n    let db = Arc::new(MemoryDB::new(false));\n\n    
let root = ServiceExecutor::create_genesis(\n        genesis.services,\n        Arc::clone(&db),\n        Arc::new(MockStorage {}),\n        Arc::new(MockServiceMapping {}),\n    )\n    .unwrap();\n\n    let mut executor = ServiceExecutor::with_root(\n        root.clone(),\n        Arc::clone(&db),\n        Arc::new(MockStorage {}),\n        Arc::new(MockServiceMapping {}),\n    )\n    .unwrap();\n\n    let params = ExecutorParams {\n        state_root:   root,\n        height:       1,\n        timestamp:    0,\n        cycles_limit: std::u64::MAX,\n        proposer:     Address::from_hash(Hash::from_empty()).unwrap(),\n    };\n\n    let mut stx = mock_signed_tx();\n    stx.raw.request.service_name = \"test\".to_owned();\n    stx.raw.request.method = \"test_revert_event\".to_owned();\n    stx.raw.request.payload = r#\"{\n        \"key\": \"\",\n        \"value\": \"\",\n        \"extra\": \"\"\n    }\"#\n    .to_owned();\n\n    let txs = vec![stx];\n    let executor_resp = executor.exec(Context::new(), &params, &txs).unwrap();\n    let receipt = &executor_resp.receipts[0];\n\n    assert_eq!(receipt.response.response.code, 111);\n    assert_eq!(receipt.events.len(), 0);\n}\n\n#[test]\nfn test_service_not_found_panic() {\n    let toml_str = include_str!(\"./genesis_services.toml\");\n    let genesis: Genesis = toml::from_str(toml_str).unwrap();\n\n    let db = Arc::new(MemoryDB::new(false));\n\n    let root = ServiceExecutor::create_genesis(\n        genesis.services,\n        Arc::clone(&db),\n        Arc::new(MockStorage {}),\n        Arc::new(MockServiceMapping {}),\n    )\n    .unwrap();\n\n    let mut executor = ServiceExecutor::with_root(\n        root.clone(),\n        Arc::clone(&db),\n        Arc::new(MockStorage {}),\n        Arc::new(MockServiceMapping {}),\n    )\n    .unwrap();\n\n    let params = ExecutorParams {\n        state_root:   root,\n        height:       1,\n        timestamp:    0,\n        cycles_limit: std::u64::MAX,\n        proposer:     
Address::from_hash(Hash::from_empty()).unwrap(),\n    };\n\n    let mut stx = mock_signed_tx();\n    stx.raw.request.service_name = \"FlyMeToTheMars\".to_owned();\n\n    let txs = vec![stx];\n    let executor_resp = executor\n        .exec(Context::new(), &params, &txs)\n        .expect(\"should not panic on service not found\");\n    let receipt = &executor_resp.receipts[0];\n\n    assert_eq!(receipt.response.response.code, SERVICE_NOT_FOUND_CODE);\n}\n\n#[test]\nfn test_tx_hook() {\n    let toml_str = include_str!(\"./genesis_services.toml\");\n    let genesis: Genesis = toml::from_str(toml_str).unwrap();\n\n    let db = Arc::new(MemoryDB::new(false));\n\n    let root = ServiceExecutor::create_genesis(\n        genesis.services,\n        Arc::clone(&db),\n        Arc::new(MockStorage {}),\n        Arc::new(MockServiceMapping {}),\n    )\n    .unwrap();\n\n    let mut executor = ServiceExecutor::with_root(\n        root.clone(),\n        Arc::clone(&db),\n        Arc::new(MockStorage {}),\n        Arc::new(MockServiceMapping {}),\n    )\n    .unwrap();\n\n    let params = ExecutorParams {\n        state_root:   root,\n        height:       1,\n        timestamp:    0,\n        cycles_limit: std::u64::MAX,\n        proposer:     Address::from_hash(Hash::from_empty()).unwrap(),\n    };\n\n    // no tx hook\n    let mut stx = mock_signed_tx();\n    stx.raw.request.service_name = \"test\".to_owned();\n    stx.raw.request.method = \"test_write\".to_owned();\n    stx.raw.request.payload = r#\"{\n        \"key\": \"foo\",\n        \"value\": \"bar\",\n        \"extra\": \"\"\n    }\"#\n    .to_owned();\n    let txs = vec![stx.clone()];\n    let executor_resp = executor.exec(Context::new(), &params, &txs).unwrap();\n    let receipt = &executor_resp.receipts[0];\n    assert_eq!(receipt.response.response.code, 0);\n    assert_eq!(receipt.events.len(), 0);\n\n    // tx hook\n    stx.raw.request.payload = r#\"{\n        \"key\": \"foo\",\n        \"value\": \"bar\",\n        
\"extra\": \"test_hook_before; test_hook_after\"\n    }\"#\n    .to_owned();\n    let txs = vec![stx.clone()];\n    let executor_resp = executor.exec(Context::new(), &params, &txs).unwrap();\n    let receipt = &executor_resp.receipts[0];\n    assert_eq!(receipt.response.response.code, 0);\n    assert_eq!(receipt.events.len(), 2);\n    assert_eq!(&receipt.events[0].data, \"test_tx_hook_before invoked\");\n    assert_eq!(&receipt.events[1].data, \"test_tx_hook_after invoked\");\n\n    // test_service_call_invoke_hook_only_once\n    stx.raw.request.method = \"test_service_call_invoke_hook_only_once\".to_owned();\n    stx.raw.request.payload = r#\"{\n        \"key\": \"foo\",\n        \"value\": \"bar\",\n        \"extra\": \"test_hook_before; test_hook_after\"\n    }\"#\n    .to_owned();\n    let txs = vec![stx];\n    let executor_resp = executor.exec(Context::new(), &params, &txs).unwrap();\n    let receipt = &executor_resp.receipts[0];\n    assert_eq!(receipt.response.response.code, 0);\n    assert_eq!(receipt.events.len(), 2);\n    assert_eq!(&receipt.events[0].data, \"test_tx_hook_before invoked\");\n    assert_eq!(&receipt.events[1].data, \"test_tx_hook_after invoked\");\n}\n\n#[test]\nfn test_commit_tx_hook_use_panic_tx() {\n    let toml_str = include_str!(\"./genesis_services.toml\");\n    let genesis: Genesis = toml::from_str(toml_str).unwrap();\n\n    let db = Arc::new(MemoryDB::new(false));\n\n    let root = ServiceExecutor::create_genesis(\n        genesis.services,\n        Arc::clone(&db),\n        Arc::new(MockStorage {}),\n        Arc::new(MockServiceMapping {}),\n    )\n    .unwrap();\n\n    let mut executor = ServiceExecutor::with_root(\n        root.clone(),\n        Arc::clone(&db),\n        Arc::new(MockStorage {}),\n        Arc::new(MockServiceMapping {}),\n    )\n    .unwrap();\n\n    let params = ExecutorParams {\n        state_root:   root,\n        height:       1,\n        timestamp:    0,\n        cycles_limit: std::u64::MAX,\n        
proposer:     Address::from_hash(Hash::from_empty()).unwrap(),\n    };\n\n    let mut stx = mock_signed_tx();\n    stx.raw.request.service_name = \"test\".to_owned();\n    stx.raw.request.method = \"test_panic\".to_owned();\n    stx.raw.request.payload = r#\"\"\"\"#.to_owned();\n\n    let txs = vec![stx];\n    let error_resp = executor.exec(Context::new(), &params, &txs);\n    assert!(error_resp.is_err());\n\n    let caller = Address::from_str(\"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\").unwrap();\n\n    let before = read!(executor, &params, &caller, r#\"\"before\"\"#);\n    assert_eq!(before.succeed_data, r#\"\"before\"\"#);\n\n    let after = read!(executor, &params, &caller, r#\"\"after\"\"#);\n    assert_eq!(after.succeed_data, r#\"\"\"\"#);\n}\n\n#[test]\nfn test_tx_hook_before_panic() {\n    let toml_str = include_str!(\"./genesis_services.toml\");\n    let genesis: Genesis = toml::from_str(toml_str).unwrap();\n\n    let db = Arc::new(MemoryDB::new(false));\n\n    let root = ServiceExecutor::create_genesis(\n        genesis.services,\n        Arc::clone(&db),\n        Arc::new(MockStorage {}),\n        Arc::new(MockServiceMapping {}),\n    )\n    .unwrap();\n\n    let mut executor = ServiceExecutor::with_root(\n        root.clone(),\n        Arc::clone(&db),\n        Arc::new(MockStorage {}),\n        Arc::new(MockServiceMapping {}),\n    )\n    .unwrap();\n\n    let params = ExecutorParams {\n        state_root:   root,\n        height:       1,\n        timestamp:    0,\n        cycles_limit: std::u64::MAX,\n        proposer:     Address::from_hash(Hash::from_empty()).unwrap(),\n    };\n\n    let mut stx = mock_signed_tx();\n    stx.raw.request.service_name = \"test\".to_owned();\n    stx.raw.request.method = \"tx_hook_before_panic\".to_owned();\n    stx.raw.request.payload = r#\"\"\"\"#.to_owned();\n\n    let txs = vec![stx];\n    let error_resp = executor.exec(Context::new(), &params, &txs);\n    assert!(error_resp.is_err());\n\n    let caller = 
Address::from_str(\"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\").unwrap();\n\n    let before = read!(executor, &params, &caller, r#\"\"before\"\"#);\n    assert_eq!(before.succeed_data, r#\"\"\"\"#);\n\n    let tx_hook_before_panic = read!(executor, &params, &caller, r#\"\"tx_hook_before_panic\"\"#);\n    assert_eq!(tx_hook_before_panic.succeed_data, r#\"\"\"\"#);\n\n    let after = read!(executor, &params, &caller, r#\"\"after\"\"#);\n    assert_eq!(after.succeed_data, r#\"\"\"\"#);\n}\n\n#[test]\nfn test_tx_hook_after_panic() {\n    let toml_str = include_str!(\"./genesis_services.toml\");\n    let genesis: Genesis = toml::from_str(toml_str).unwrap();\n\n    let db = Arc::new(MemoryDB::new(false));\n\n    let root = ServiceExecutor::create_genesis(\n        genesis.services,\n        Arc::clone(&db),\n        Arc::new(MockStorage {}),\n        Arc::new(MockServiceMapping {}),\n    )\n    .unwrap();\n\n    let mut executor = ServiceExecutor::with_root(\n        root.clone(),\n        Arc::clone(&db),\n        Arc::new(MockStorage {}),\n        Arc::new(MockServiceMapping {}),\n    )\n    .unwrap();\n\n    let params = ExecutorParams {\n        state_root:   root,\n        height:       1,\n        timestamp:    0,\n        cycles_limit: std::u64::MAX,\n        proposer:     Address::from_hash(Hash::from_empty()).unwrap(),\n    };\n\n    let mut stx = mock_signed_tx();\n    stx.raw.request.service_name = \"test\".to_owned();\n    stx.raw.request.method = \"tx_hook_after_panic\".to_owned();\n    stx.raw.request.payload = r#\"\"\"\"#.to_owned();\n\n    let txs = vec![stx];\n    let error_resp = executor.exec(Context::new(), &params, &txs);\n    assert!(error_resp.is_err());\n\n    let caller = Address::from_str(\"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\").unwrap();\n\n    let before = read!(executor, &params, &caller, r#\"\"before\"\"#);\n    assert_eq!(before.succeed_data, r#\"\"before\"\"#);\n\n    let tx_hook_after_panic = read!(executor, &params, &caller, 
r#\"\"tx_hook_after_panic\"\"#);\n    assert_eq!(tx_hook_after_panic.succeed_data, r#\"\"tx_hook_after_panic\"\"#);\n\n    let after = read!(executor, &params, &caller, r#\"\"after\"\"#);\n    assert_eq!(after.succeed_data, r#\"\"\"\"#);\n}\n\n#[bench]\nfn bench_execute(b: &mut Bencher) {\n    let toml_str = include_str!(\"./genesis_services.toml\");\n    let genesis: Genesis = toml::from_str(toml_str).unwrap();\n\n    let db = Arc::new(MemoryDB::new(false));\n\n    let root = ServiceExecutor::create_genesis(\n        genesis.services,\n        Arc::clone(&db),\n        Arc::new(MockStorage {}),\n        Arc::new(MockServiceMapping {}),\n    )\n    .unwrap();\n\n    let mut executor = ServiceExecutor::with_root(\n        root.clone(),\n        Arc::clone(&db),\n        Arc::new(MockStorage {}),\n        Arc::new(MockServiceMapping {}),\n    )\n    .unwrap();\n\n    let txs: Vec<SignedTransaction> = (0..1000).map(|_| mock_signed_tx()).collect();\n\n    b.iter(|| {\n        let params = ExecutorParams {\n            state_root:   root.clone(),\n            height:       1,\n            timestamp:    0,\n            cycles_limit: std::u64::MAX,\n            proposer:     Address::from_hash(Hash::from_empty()).unwrap(),\n        };\n        let txs = txs.clone();\n        executor.exec(Context::new(), &params, &txs).unwrap();\n    });\n}\n\nfn mock_signed_tx() -> SignedTransaction {\n    let raw = RawTransaction {\n        chain_id:     Hash::from_empty(),\n        nonce:        Hash::from_empty(),\n        timeout:      0,\n        cycles_price: 1,\n        cycles_limit: std::u64::MAX,\n        request:      TransactionRequest {\n            service_name: \"asset\".to_owned(),\n            method:       \"create_asset\".to_owned(),\n            payload:      r#\"{ \"name\": \"MutaToken2\", \"symbol\": \"MT2\", \"supply\": 320000011 }\"#\n                .to_owned(),\n        },\n        sender:       
Address::from_pubkey_bytes(Bytes::from(hex::decode(PUB_KEY_STR).unwrap()))\n            .unwrap(),\n    };\n\n    SignedTransaction {\n        raw,\n        tx_hash: Hash::from_empty(),\n        pubkey: Bytes::from(hex::decode(PUB_KEY_STR).unwrap()),\n        signature: BytesMut::from(\"\").freeze(),\n    }\n}\n\nstruct MockServiceMapping;\n\nimpl ServiceMapping for MockServiceMapping {\n    fn get_service<SDK: 'static + ServiceSDK, Factory: SDKFactory<SDK>>(\n        &self,\n        name: &str,\n        factory: &Factory,\n    ) -> ProtocolResult<Box<dyn Service>> {\n        let sdk = factory.get_sdk(name)?;\n\n        let service = match name {\n            \"asset\" => Box::new(AssetService::new(sdk)) as Box<dyn Service>,\n            \"metadata\" => Box::new(MetadataService::new(sdk)) as Box<dyn Service>,\n            \"test\" => Box::new(TestService::new(sdk)) as Box<dyn Service>,\n            _ => panic!(\"service not found\"),\n        };\n\n        Ok(service)\n    }\n\n    fn list_service_name(&self) -> Vec<String> {\n        vec![\"asset\".to_owned(), \"metadata\".to_owned(), \"test\".to_owned()]\n    }\n}\n\nstruct MockStorage;\n\n#[async_trait]\nimpl CommonStorage for MockStorage {\n    async fn insert_block(&self, _ctx: Context, _block: Block) -> ProtocolResult<()> {\n        unimplemented!()\n    }\n\n    async fn get_block(&self, _ctx: Context, _height: u64) -> ProtocolResult<Option<Block>> {\n        unimplemented!()\n    }\n\n    async fn get_block_header(\n        &self,\n        _ctx: Context,\n        _height: u64,\n    ) -> ProtocolResult<Option<BlockHeader>> {\n        unimplemented!()\n    }\n\n    async fn set_block(&self, _ctx: Context, _block: Block) -> ProtocolResult<()> {\n        unimplemented!()\n    }\n\n    async fn remove_block(&self, _ctx: Context, _height: u64) -> ProtocolResult<()> {\n        unimplemented!()\n    }\n\n    async fn get_latest_block(&self, _ctx: Context) -> ProtocolResult<Block> {\n        unimplemented!()\n    
}\n\n    async fn set_latest_block(&self, _ctx: Context, _block: Block) -> ProtocolResult<()> {\n        unimplemented!()\n    }\n\n    async fn get_latest_block_header(&self, _ctx: Context) -> ProtocolResult<BlockHeader> {\n        unimplemented!()\n    }\n}\n\n#[async_trait]\nimpl Storage for MockStorage {\n    async fn insert_transactions(\n        &self,\n        _ctx: Context,\n        _: u64,\n        _: Vec<SignedTransaction>,\n    ) -> ProtocolResult<()> {\n        unimplemented!()\n    }\n\n    async fn insert_receipts(&self, _ctx: Context, _: u64, _: Vec<Receipt>) -> ProtocolResult<()> {\n        unimplemented!()\n    }\n\n    async fn update_latest_proof(&self, _ctx: Context, _: Proof) -> ProtocolResult<()> {\n        unimplemented!()\n    }\n\n    async fn get_transaction_by_hash(\n        &self,\n        _ctx: Context,\n        _: &Hash,\n    ) -> ProtocolResult<Option<SignedTransaction>> {\n        unimplemented!()\n    }\n\n    async fn get_transactions(\n        &self,\n        _ctx: Context,\n        _: u64,\n        _: &[Hash],\n    ) -> ProtocolResult<Vec<Option<SignedTransaction>>> {\n        unimplemented!()\n    }\n\n    async fn get_receipt_by_hash(&self, _ctx: Context, _: Hash) -> ProtocolResult<Option<Receipt>> {\n        unimplemented!()\n    }\n\n    async fn get_receipts(\n        &self,\n        _ctx: Context,\n        _: u64,\n        _: Vec<Hash>,\n    ) -> ProtocolResult<Vec<Option<Receipt>>> {\n        unimplemented!()\n    }\n\n    async fn get_latest_proof(&self, _ctx: Context) -> ProtocolResult<Proof> {\n        unimplemented!()\n    }\n}\n"
  },
  {
    "path": "framework/src/executor/tests/test_service.rs",
    "content": "use serde::{Deserialize, Serialize};\n\nuse binding_macro::{cycles, service, tx_hook_after, tx_hook_before};\nuse protocol::traits::{ExecutorParams, ServiceResponse, ServiceSDK};\nuse protocol::types::ServiceContext;\n\npub struct TestService<SDK> {\n    sdk: SDK,\n}\n\n#[derive(Deserialize, Serialize, Clone, Debug)]\npub struct TestWritePayload {\n    pub key:   String,\n    pub value: String,\n    pub extra: String,\n}\n\n#[derive(Deserialize, Serialize, Clone, Debug, Default)]\npub struct TestWriteResponse {}\n\n#[service]\nimpl<SDK: ServiceSDK> TestService<SDK> {\n    pub fn new(sdk: SDK) -> Self {\n        Self { sdk }\n    }\n\n    #[cycles(10_000)]\n    #[read]\n    fn test_read(&self, ctx: ServiceContext, payload: String) -> ServiceResponse<String> {\n        let value: String = self.sdk.get_value(&payload).unwrap_or_default();\n        ServiceResponse::from_succeed(value)\n    }\n\n    #[cycles(21_000)]\n    #[write]\n    fn test_write(\n        &mut self,\n        ctx: ServiceContext,\n        payload: TestWritePayload,\n    ) -> ServiceResponse<TestWriteResponse> {\n        self.sdk.set_value(payload.key, payload.value);\n        ServiceResponse::<TestWriteResponse>::from_succeed(TestWriteResponse {})\n    }\n\n    #[cycles(21_000)]\n    #[write]\n    fn test_revert_event(\n        &mut self,\n        ctx: ServiceContext,\n        _: TestWritePayload,\n    ) -> ServiceResponse<TestWriteResponse> {\n        ServiceResponse::from_error(111, \"error\".to_owned())\n    }\n\n    #[cycles(21_000)]\n    #[write]\n    fn test_event(\n        &mut self,\n        ctx: ServiceContext,\n        _: TestWritePayload,\n    ) -> ServiceResponse<TestWriteResponse> {\n        ctx.emit_event(\"wow\".to_owned(), \"test-name\".to_owned(), \"test\".to_owned());\n        ServiceResponse::from_succeed(TestWriteResponse::default())\n    }\n\n    #[cycles(21_000)]\n    #[write]\n    fn test_service_call_invoke_hook_only_once(\n        &mut self,\n        ctx: 
ServiceContext,\n        payload: TestWritePayload,\n    ) -> ServiceResponse<TestWriteResponse> {\n        self.test_write(ctx, payload);\n        ServiceResponse::<TestWriteResponse>::from_succeed(TestWriteResponse {})\n    }\n\n    #[cycles(21_000)]\n    #[write]\n    fn test_panic(&mut self, ctx: ServiceContext, _payload: String) -> ServiceResponse<()> {\n        panic!(\"hello panic\");\n    }\n\n    #[cycles(21_000)]\n    #[write]\n    fn tx_hook_before_panic(\n        &mut self,\n        ctx: ServiceContext,\n        _payload: String,\n    ) -> ServiceResponse<()> {\n        self.sdk.set_value(\n            \"tx_hook_before_panic\".to_owned(),\n            \"tx_hook_before_panic\".to_owned(),\n        );\n        ServiceResponse::from_succeed(())\n    }\n\n    #[cycles(21_000)]\n    #[write]\n    fn tx_hook_after_panic(\n        &mut self,\n        ctx: ServiceContext,\n        _payload: String,\n    ) -> ServiceResponse<()> {\n        self.sdk.set_value(\n            \"tx_hook_after_panic\".to_owned(),\n            \"tx_hook_after_panic\".to_owned(),\n        );\n        ServiceResponse::from_succeed(())\n    }\n\n    #[tx_hook_before]\n    fn test_tx_hook_before(&mut self, ctx: ServiceContext) -> ServiceResponse<()> {\n        if ctx.get_service_name() == \"test\"\n            && ctx.get_payload().to_owned().contains(\"test_hook_before\")\n        {\n            ctx.emit_event(\n                \"test_service\".to_owned(),\n                \"test-name\".to_owned(),\n                \"test_tx_hook_before invoked\".to_owned(),\n            );\n        }\n\n        if ctx.get_service_method() == \"tx_hook_before_panic\" {\n            panic!(\"tx hook before\");\n        }\n\n        self.sdk.set_value(\"before\".to_owned(), \"before\".to_owned());\n        ServiceResponse::from_succeed(())\n    }\n\n    #[tx_hook_after]\n    fn test_tx_hook_after(&mut self, ctx: ServiceContext) -> ServiceResponse<()> {\n        if ctx.get_service_name() == \"test\"\n         
   && ctx.get_payload().to_owned().contains(\"test_hook_after\")\n        {\n            ctx.emit_event(\n                \"test_service\".to_owned(),\n                \"test-name\".to_owned(),\n                \"test_tx_hook_after invoked\".to_owned(),\n            );\n        }\n\n        if ctx.get_service_method() == \"tx_hook_after_panic\" {\n            panic!(\"tx hook after\");\n        }\n\n        self.sdk.set_value(\"after\".to_owned(), \"after\".to_owned());\n        ServiceResponse::from_succeed(())\n    }\n}\n"
  },
  {
    "path": "framework/src/lib.rs",
    "content": "#![feature(vec_remove_item)]\n#![feature(test)]\n\npub mod binding;\npub mod executor;\n"
  },
  {
    "path": "jenkins-x-chaos.yml",
    "content": "buildPack: none\npipelineConfig:\n  pipelines:\n    pullRequest:\n      pipeline:\n        agent:\n          image: mutadev/muta-build-env:v0.3.0\n        options:\n          timeout:\n            time: 180 # 3H\n            unit: minutes\n        stages:\n          - name: chaos\n            environment:\n              - name: BASE_WORKSPACE\n                value: /workspace/source\n              - name: NODE_SIZE\n                value: \"4\"\n              - name: CHAIN_GENESIS_TIMEOUT_GAP\n                value: \"9999\"\n            options:\n              containerOptions:\n                volumeMounts:\n                  - name: jenkins-docker-cfg\n                    mountPath: /kaniko/.docker\n                resources:\n                  limits:\n                    cpu: 4\n                    memory: 8Gi\n                  requests:\n                    cpu: 2\n                    memory: 8Gi\n              volumes:\n                - name: jenkins-docker-cfg\n                  secret:\n                    secretName: jenkins-docker-cfg\n                    items:\n                      - key: config.json\n                        path: config.json\n\n            steps:\n              - name: build-release\n                image: mutadev/muta-build-env:v0.3.0\n                env:\n                  - name: OPENSSL_STATIC\n                    value: \"1\"\n                  - name: OPENSSL_LIB_DIR\n                    value: /usr/lib/x86_64-linux-gnu\n                  - name: OPENSSL_INCLUDE_DIR\n                    value: /usr/include/openssl\n                command: cargo\n                args:\n                  - build\n                  - --release\n                  - --example \n                  - muta-chain\n              - name: push-image\n                image: gcr.io/kaniko-project/executor:9912ccbf8d22bbafbf971124600fbb0b13b9cbd6\n                command: /kaniko/executor\n                args:\n                  - 
--dockerfile=/workspace/source/devtools/docker-build/Dockerfile\n                  - --destination=mutadev/${REPO_NAME}:pr-${PULL_NUMBER}-${BUILD_NUMBER}\n                  - --context=/workspace/source\n\n              - name: create-chaos-crd\n                image: alpine/helm:3.2.4\n                command: helm\n                args:\n                  - install\n                  - chaos-${REPO_NAME}-pr-${PULL_NUMBER}-${BUILD_NUMBER}\n                  - charts/deploy-chaos\n                  - --namespace\n                  - mutadev\n                  - --set\n                  - size=${NODE_SIZE},repo_name=${REPO_NAME},version=pr-${PULL_NUMBER}-${BUILD_NUMBER},resources.cpu=1100m,resources.memory=8Gi,chain_genesis.metadata.timeout_gap=${CHAIN_GENESIS_TIMEOUT_GAP}\n              \n              - name: watchdog\n                image: mutadev/muta-watchdog:v0.2.0-rc\n                env:\n                  - name: WATCH_DURATION\n                    value: 1H\n                  - name: APP_NAMESPACE\n                    value: mutadev\n                  - name: APP_PORT\n                    value: \"8000\"\n                  - name: APP_GRAPHQL_URL\n                    value: graphql\n                  - name: JOB_BENCHMARK_DURATION\n                    value: \"300\"\n                  - name: JOB_BENCHMARK_TIMEOUT_GAP\n                    value: \"9999\"\n                  - name: JOB_BENCHMARK_CPU\n                    value: \"3\"\n                command: APP_NAME=chaos-${REPO_NAME}-pr-${PULL_NUMBER}-${BUILD_NUMBER} node /watchdog/index.js\n\n              - name: delete-chaos-crd\n                image: alpine/helm:3.2.4\n                command: helm\n                args:\n                  - uninstall\n                  - chaos-${REPO_NAME}-pr-${PULL_NUMBER}-${BUILD_NUMBER}\n                  - --namespace\n                  - mutadev\n\n"
  },
  {
    "path": "jenkins-x-e2e.yml",
    "content": "buildPack: none\npipelineConfig:\n  pipelines:\n    pullRequest:\n      pipeline:\n        agent:\n          image: mutadev/muta-e2e-env:v0.3.0\n        options:\n          timeout:\n            time: 30\n            unit: minutes\n        stages:\n          - name: e2e\n            options:\n              containerOptions:\n                resources:\n                  limits:\n                    cpu: 4\n                    memory: 8Gi\n                  requests:\n                    cpu: 2\n                    memory: 8Gi\n\n            steps:\n              - name: e2e\n                command: make\n                args:\n                  - e2e-test\n"
  },
  {
    "path": "jenkins-x-lint.yml",
    "content": "buildPack: none\npipelineConfig:\n  pipelines:\n    pullRequest:\n      pipeline:\n        agent:\n          image: mutadev/muta-build-env:v0.3.0\n        options:\n          timeout:\n            time: 30\n            unit: minutes\n        stages:\n          - name: lint\n            options:\n              containerOptions:\n                resources:\n                  limits:\n                    cpu: 4\n                    memory: 8Gi\n                  requests:\n                    cpu: 2\n                    memory: 8Gi\n\n            steps:\n              - name: fmt\n                command: make\n                args:\n                  - fmt\n              - name: clippy\n                command: make\n                args:\n                  - clippy\n"
  },
  {
    "path": "jenkins-x-unit.yml",
    "content": "buildPack: none\npipelineConfig:\n  pipelines:\n    pullRequest:\n      pipeline:\n        agent:\n          image: mutadev/muta-build-env:v0.3.0\n        options:\n          timeout:\n            time: 60\n            unit: minutes\n        stages:\n          - name: unit\n            options:\n              containerOptions:\n                resources:\n                  limits:\n                    cpu: 4\n                    memory: 12Gi\n                  requests:\n                    cpu: 2\n                    memory: 12Gi\n\n            steps:\n              - name: unit\n                command: make\n                args:\n                  - test\n"
  },
  {
    "path": "jenkins-x.yml",
    "content": "buildPack: none\nnoReleasePrepare: true\npipelineConfig:\n  pipelines:\n    release:\n      pipeline:\n        agent:\n          image: mutadev/muta-build-env:v0.3.0\n        stages:\n          - name: release\n            environment:\n              - name: BASE_WORKSPACE\n                value: /workspace/source\n            options:\n              containerOptions:\n                volumeMounts:\n                  - name: jenkins-docker-cfg\n                    mountPath: /kaniko/.docker\n                resources:\n                  limits:\n                    cpu: 4\n                    memory: 8Gi\n                  requests:\n                    cpu: 2\n                    memory: 8Gi\n              volumes:\n                - name: jenkins-docker-cfg\n                  secret:\n                    secretName: jenkins-docker-cfg\n                    items:\n                      - key: config.json\n                        path: config.json\n\n            steps:\n              - name: build-release\n                image: mutadev/muta-build-env:v0.3.0\n                env:\n                  - name: OPENSSL_STATIC\n                    value: \"1\"\n                  - name: OPENSSL_LIB_DIR\n                    value: /usr/lib/x86_64-linux-gnu\n                  - name: OPENSSL_INCLUDE_DIR\n                    value: /usr/include/openssl\n                command: cargo\n                args:\n                  - build\n                  - --release\n                  - --example \n                  - muta-chain\n              - name: push-image\n                image: gcr.io/kaniko-project/executor:9912ccbf8d22bbafbf971124600fbb0b13b9cbd6\n                command: /kaniko/executor\n                args:\n                  - --dockerfile=/workspace/source/devtools/docker-build/Dockerfile\n                  - --destination=mutadev/${REPO_NAME}:latest\n                  - --context=/workspace/source\n"
  },
  {
    "path": "protocol/Cargo.toml",
    "content": "[package]\nname = \"muta-protocol\"\nversion = \"0.2.1\"\nauthors = [\"Muta Dev <muta@nervos.org>\"]\nedition = \"2018\"\nrepository = \"https://github.com/nervosnetwork/muta\"\nlicense = \"MIT\"\ndescription = \"Contains all the core data types and traits of the muta framework\"\n\n# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html\n\n[dependencies]\nfutures = \"0.3\"\nderive_more = \"0.99\"\nasync-trait = \"0.1\"\nlazy_static = \"1.4\"\nhex = \"0.4\"\nprost = \"0.6\"\nbytes = { version = \"0.5\", features = [\"serde\"] }\nhasher = { version = \"0.1\", features = ['hash-keccak'] }\ncreep = \"0.2\"\nbincode = \"1.3\"\nserde = { version = \"1.0\", features = [\"derive\"] }\nserde_json = \"1.0\"\nrlp = \"0.4\"\ncita_trie = \"2.0\"\njson = \"0.12\"\nbyteorder = \"1.3\"\nmuta-codec-derive = \"0.2\"\nophelia = \"0.3\"\nophelia-secp256k1 = \"0.3\"\nbech32 = \"0.7\"\narc-swap = \"0.4\"\nsmol_str = \"0.1\"\nlog = \"0.4\"\n\n[dev-dependencies]\nrayon = \"1.3\"\nrand = \"0.7\"\n"
  },
  {
    "path": "protocol/src/codec/block.rs",
    "content": "use std::convert::TryFrom;\n\nuse bytes::Bytes;\nuse prost::Message;\n\nuse crate::{\n    codec::{\n        primitive::{Address, Hash},\n        CodecError, ProtocolCodecSync,\n    },\n    field, impl_default_bytes_codec_for,\n    types::primitive as protocol_primitive,\n    ProtocolError, ProtocolResult,\n};\n\n// #####################\n// Protobuf\n// #####################\n\n#[derive(Clone, Message)]\npub struct Block {\n    #[prost(message, tag = \"1\")]\n    pub header: Option<BlockHeader>,\n\n    #[prost(message, repeated, tag = \"2\")]\n    pub ordered_tx_hashes: Vec<Hash>,\n}\n\n#[derive(Clone, Message)]\npub struct BlockHeader {\n    #[prost(message, tag = \"1\")]\n    pub chain_id: Option<Hash>,\n\n    #[prost(uint64, tag = \"2\")]\n    pub height: u64,\n\n    #[prost(message, tag = \"3\")]\n    pub prev_hash: Option<Hash>,\n\n    #[prost(uint64, tag = \"4\")]\n    pub timestamp: u64,\n\n    #[prost(message, tag = \"5\")]\n    pub order_root: Option<Hash>,\n\n    #[prost(message, tag = \"6\")]\n    pub order_signed_transactions_hash: Option<Hash>,\n\n    #[prost(message, repeated, tag = \"7\")]\n    pub confirm_root: Vec<Hash>,\n\n    #[prost(message, tag = \"8\")]\n    pub state_root: Option<Hash>,\n\n    #[prost(message, repeated, tag = \"9\")]\n    pub receipt_root: Vec<Hash>,\n\n    #[prost(message, repeated, tag = \"10\")]\n    pub cycles_used: Vec<u64>,\n\n    #[prost(message, tag = \"11\")]\n    pub proposer: Option<Address>,\n\n    #[prost(message, tag = \"12\")]\n    pub proof: Option<Proof>,\n\n    #[prost(uint64, tag = \"13\")]\n    pub validator_version: u64,\n\n    #[prost(message, repeated, tag = \"14\")]\n    pub validators: Vec<Validator>,\n\n    #[prost(uint64, tag = \"15\")]\n    pub exec_height: u64,\n}\n\n#[derive(Clone, Message)]\npub struct Proof {\n    #[prost(uint64, tag = \"1\")]\n    pub height: u64,\n\n    #[prost(uint64, tag = \"2\")]\n    pub round: u64,\n\n    #[prost(message, tag = \"3\")]\n    pub 
block_hash: Option<Hash>,\n\n    #[prost(bytes, tag = \"4\")]\n    pub signature: Vec<u8>,\n\n    #[prost(bytes, tag = \"5\")]\n    pub bitmap: Vec<u8>,\n}\n\n#[derive(Clone, Message)]\npub struct Validator {\n    #[prost(bytes, tag = \"1\")]\n    pub pub_key: Vec<u8>,\n\n    #[prost(uint32, tag = \"2\")]\n    pub propose_weight: u32,\n\n    #[prost(uint32, tag = \"3\")]\n    pub vote_weight: u32,\n}\n\n#[derive(Clone, Message)]\npub struct Pill {\n    #[prost(message, tag = \"1\")]\n    pub block: Option<Block>,\n\n    #[prost(message, repeated, tag = \"2\")]\n    pub propose_hashes: Vec<Hash>,\n}\n\n// #################\n// Conversion\n// #################\n\n// Block\n\nimpl From<block::Block> for Block {\n    fn from(block: block::Block) -> Block {\n        let header = Some(BlockHeader::from(block.header));\n        let ordered_tx_hashes = block\n            .ordered_tx_hashes\n            .into_iter()\n            .map(Hash::from)\n            .collect::<Vec<_>>();\n\n        Block {\n            header,\n            ordered_tx_hashes,\n        }\n    }\n}\n\nimpl TryFrom<Block> for block::Block {\n    type Error = ProtocolError;\n\n    fn try_from(block: Block) -> Result<block::Block, Self::Error> {\n        let header = field!(block.header, \"Block\", \"header\")?;\n\n        let mut ordered_tx_hashes = Vec::new();\n        for hash in block.ordered_tx_hashes {\n            ordered_tx_hashes.push(protocol_primitive::Hash::try_from(hash)?);\n        }\n\n        let block = block::Block {\n            header: block::BlockHeader::try_from(header)?,\n            ordered_tx_hashes,\n        };\n\n        Ok(block)\n    }\n}\n\n// BlockHeader\n\nimpl From<block::BlockHeader> for BlockHeader {\n    fn from(block_header: block::BlockHeader) -> BlockHeader {\n        let chain_id = Some(Hash::from(block_header.chain_id));\n        let prev_hash = Some(Hash::from(block_header.prev_hash));\n        let order_root = Some(Hash::from(block_header.order_root));\n        
let order_signed_transactions_hash =\n            Some(Hash::from(block_header.order_signed_transactions_hash));\n        let state_root = Some(Hash::from(block_header.state_root));\n        let proposer = Some(Address::from(block_header.proposer));\n        let proof = Some(Proof::from(block_header.proof));\n\n        let confirm_root = block_header\n            .confirm_root\n            .into_iter()\n            .map(Hash::from)\n            .collect::<Vec<_>>();\n        let receipt_root = block_header\n            .receipt_root\n            .into_iter()\n            .map(Hash::from)\n            .collect::<Vec<_>>();\n        let validators = block_header\n            .validators\n            .into_iter()\n            .map(Validator::from)\n            .collect::<Vec<_>>();\n\n        BlockHeader {\n            chain_id,\n            height: block_header.height,\n            exec_height: block_header.exec_height,\n            prev_hash,\n            timestamp: block_header.timestamp,\n            order_root,\n            order_signed_transactions_hash,\n            confirm_root,\n            state_root,\n            receipt_root,\n            cycles_used: block_header.cycles_used,\n            proposer,\n            proof,\n            validator_version: block_header.validator_version,\n            validators,\n        }\n    }\n}\n\nimpl TryFrom<BlockHeader> for block::BlockHeader {\n    type Error = ProtocolError;\n\n    fn try_from(block_header: BlockHeader) -> Result<block::BlockHeader, Self::Error> {\n        let chain_id = field!(block_header.chain_id, \"BlockHeader\", \"chain_id\")?;\n        let prev_hash = field!(block_header.prev_hash, \"BlockHeader\", \"prev_hash\")?;\n        let order_root = field!(block_header.order_root, \"BlockHeader\", \"order_root\")?;\n        let order_signed_transactions_hash = field!(\n            block_header.order_signed_transactions_hash,\n            \"BlockHeader\",\n            \"order_signed_transactions_hash\"\n   
     )?;\n        let state_root = field!(block_header.state_root, \"BlockHeader\", \"state_root\")?;\n        let proposer = field!(block_header.proposer, \"BlockHeader\", \"proposer\")?;\n        let proof = field!(block_header.proof, \"BlockHeader\", \"proof\")?;\n\n        let mut confirm_root = Vec::new();\n        for root in block_header.confirm_root {\n            confirm_root.push(protocol_primitive::Hash::try_from(root)?);\n        }\n\n        let mut receipt_root = Vec::new();\n        for root in block_header.receipt_root {\n            receipt_root.push(protocol_primitive::Hash::try_from(root)?);\n        }\n\n        let mut validators = Vec::new();\n        for validator in block_header.validators {\n            validators.push(block::Validator::try_from(validator)?);\n        }\n\n        let header = block::BlockHeader {\n            chain_id: protocol_primitive::Hash::try_from(chain_id)?,\n            height: block_header.height,\n            exec_height: block_header.exec_height,\n            prev_hash: protocol_primitive::Hash::try_from(prev_hash)?,\n            timestamp: block_header.timestamp,\n            order_root: protocol_primitive::Hash::try_from(order_root)?,\n            order_signed_transactions_hash: protocol_primitive::Hash::try_from(\n                order_signed_transactions_hash,\n            )?,\n            confirm_root,\n            state_root: protocol_primitive::Hash::try_from(state_root)?,\n            receipt_root,\n            cycles_used: block_header.cycles_used,\n            proposer: protocol_primitive::Address::try_from(proposer)?,\n            proof: block::Proof::try_from(proof)?,\n            validator_version: block_header.validator_version,\n            validators,\n        };\n\n        Ok(header)\n    }\n}\n\n// Proof\n\nimpl From<block::Proof> for Proof {\n    fn from(proof: block::Proof) -> Proof {\n        let block_hash = Some(Hash::from(proof.block_hash));\n\n        Proof {\n            height: 
proof.height,\n            round: proof.round,\n            block_hash,\n            signature: proof.signature.to_vec(),\n            bitmap: proof.bitmap.to_vec(),\n        }\n    }\n}\n\nimpl TryFrom<Proof> for block::Proof {\n    type Error = ProtocolError;\n\n    fn try_from(proof: Proof) -> Result<block::Proof, Self::Error> {\n        let block_hash = field!(proof.block_hash, \"Proof\", \"block_hash\")?;\n\n        let proof = block::Proof {\n            height:     proof.height,\n            round:      proof.round,\n            block_hash: protocol_primitive::Hash::try_from(block_hash)?,\n            signature:  Bytes::from(proof.signature),\n            bitmap:     Bytes::from(proof.bitmap),\n        };\n\n        Ok(proof)\n    }\n}\n\n// Validator\n\nimpl From<block::Validator> for Validator {\n    fn from(validator: block::Validator) -> Validator {\n        Validator {\n            pub_key:        validator.pub_key.to_vec(),\n            propose_weight: validator.propose_weight,\n            vote_weight:    validator.vote_weight,\n        }\n    }\n}\n\nimpl TryFrom<Validator> for block::Validator {\n    type Error = ProtocolError;\n\n    fn try_from(validator: Validator) -> Result<block::Validator, Self::Error> {\n        let validator = block::Validator {\n            pub_key:        Bytes::from(validator.pub_key),\n            propose_weight: validator.propose_weight,\n            vote_weight:    validator.vote_weight,\n        };\n\n        Ok(validator)\n    }\n}\n\n// Pill\n\nimpl From<block::Pill> for Pill {\n    fn from(pill: block::Pill) -> Pill {\n        let block = Some(Block::from(pill.block));\n        let propose_hashes = pill\n            .propose_hashes\n            .into_iter()\n            .map(Hash::from)\n            .collect::<Vec<_>>();\n\n        Pill {\n            block,\n            propose_hashes,\n        }\n    }\n}\n\nimpl TryFrom<Pill> for block::Pill {\n    type Error = ProtocolError;\n\n    fn try_from(pill: Pill) -> 
Result<block::Pill, Self::Error> {\n        let block = field!(pill.block, \"Pill\", \"block\")?;\n\n        let mut propose_hashes = Vec::new();\n        for hash in pill.propose_hashes {\n            propose_hashes.push(protocol_primitive::Hash::try_from(hash)?);\n        }\n\n        let pill = block::Pill {\n            block: block::Block::try_from(block)?,\n            propose_hashes,\n        };\n\n        Ok(pill)\n    }\n}\n\n// #################\n// Codec\n// #################\n\nimpl_default_bytes_codec_for!(block, [Block, BlockHeader, Proof, Validator, Pill]);\n\n#[cfg(test)]\nmod test {\n    #[test]\n    fn test_u8_convert_u32() {\n        // Use an inclusive range so u8::max_value() is also exercised.\n        for i in u8::min_value()..=u8::max_value() {\n            let j = u32::from(i);\n            assert_eq!(i, (j as u8));\n        }\n    }\n}\n"
  },
  {
    "path": "protocol/src/codec/macro.rs",
    "content": "#[macro_export]\nmacro_rules! field {\n    ($opt_field:expr, $type:expr, $field:expr) => {\n        $opt_field.ok_or_else(|| crate::codec::CodecError::MissingField {\n            r#type: $type,\n            field:  $field,\n        })\n    };\n}\n\n#[macro_export]\nmacro_rules! impl_default_bytes_codec_for {\n    ($category:ident, [$($type:ident),+]) => (\n        use crate::types::$category;\n\n        $(\n            impl ProtocolCodecSync for $category::$type {\n                fn encode_sync(&self) -> ProtocolResult<Bytes>  {\n                    let ser_type = $type::from(self.clone());\n                    let mut buf = Vec::with_capacity(ser_type.encoded_len());\n\n                    ser_type.encode(&mut buf).map_err(CodecError::from)?;\n\n                    Ok(Bytes::from(buf))\n                }\n\n                fn decode_sync(bytes: Bytes) -> ProtocolResult<Self> {\n                    let ser_type = $type::decode(bytes).map_err(CodecError::from)?;\n\n                    $category::$type::try_from(ser_type)\n                }\n            }\n        )+\n    )\n}\n"
  },
  {
    "path": "protocol/src/codec/mod.rs",
"content": "// TODO: change Vec<u8> to Bytes\n// pin: https://github.com/danburkert/prost/pull/190\n\n#[macro_use]\nmod r#macro;\npub mod block;\npub mod primitive;\npub mod receipt;\n#[cfg(test)]\nmod tests;\npub mod transaction;\n\nuse std::error::Error;\n\nuse async_trait::async_trait;\nuse bytes::Bytes;\nuse derive_more::{Display, From};\n\nuse crate::{ProtocolError, ProtocolErrorKind, ProtocolResult};\n\npub use serde::{Deserialize, Serialize};\n\n#[async_trait]\npub trait ProtocolCodec: Sized + Send + ProtocolCodecSync {\n    // Note: We take a mut reference so that it can be pinned. This removes the\n    // Sync requirement.\n    async fn encode(&mut self) -> ProtocolResult<Bytes>;\n\n    async fn decode<B: Into<Bytes> + Send>(bytes: B) -> ProtocolResult<Self>;\n}\n\n// The sync version is still useful in some cases, for example, for use in a Stream.\n// It also works around an #[async_trait] problem inside macros.\n#[doc(hidden)]\npub trait ProtocolCodecSync: Sized + Send {\n    fn encode_sync(&self) -> ProtocolResult<Bytes>;\n\n    fn decode_sync(bytes: Bytes) -> ProtocolResult<Self>;\n}\n\n#[async_trait]\nimpl<T: ProtocolCodecSync + 'static> ProtocolCodec for T {\n    async fn encode(&mut self) -> ProtocolResult<Bytes> {\n        <T as ProtocolCodecSync>::encode_sync(self)\n    }\n\n    async fn decode<B: Into<Bytes> + Send>(bytes: B) -> ProtocolResult<Self> {\n        let bytes: Bytes = bytes.into();\n\n        <T as ProtocolCodecSync>::decode_sync(bytes)\n    }\n}\n\nimpl ProtocolCodecSync for Bytes {\n    fn encode_sync(&self) -> ProtocolResult<Bytes> {\n        Ok(self.clone())\n    }\n\n    fn decode_sync(bytes: Bytes) -> ProtocolResult<Self> {\n        Ok(bytes)\n    }\n}\n\n#[derive(Debug, From, Display)]\npub enum CodecError {\n    #[display(fmt = \"prost encode: {}\", _0)]\n    ProtobufEncode(prost::EncodeError),\n\n    #[display(fmt = \"prost decode: {}\", _0)]\n    ProtobufDecode(prost::DecodeError),\n\n    #[display(fmt = \"{} missing field {}\", r#type, 
field)]\n    MissingField {\n        r#type: &'static str,\n        field:  &'static str,\n    },\n\n    #[display(fmt = \"invalid contract type {}\", _0)]\n    InvalidContractType(i32),\n\n    #[display(fmt = \"wrong bytes length: {{ expect: {}, got: {} }}\", expect, real)]\n    WrongBytesLength { expect: usize, real: usize },\n\n    #[display(fmt = \"from string {}\", _0)]\n    FromStringUtf8(std::string::FromUtf8Error),\n}\n\nimpl Error for CodecError {}\n\n// TODO: derive macro\nimpl From<CodecError> for ProtocolError {\n    fn from(err: CodecError) -> ProtocolError {\n        ProtocolError::new(ProtocolErrorKind::Codec, Box::new(err))\n    }\n}\n"
  },
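The `codec/mod.rs` above derives the async `ProtocolCodec` for every `ProtocolCodecSync` type via a blanket impl. A minimal std-only sketch of that pattern, using illustrative trait and type names (`EncodeSync`, `Encode`, `Height` are not the crate's actual definitions):

```rust
// Blanket-impl sketch: implement the low-level sync trait once, and the
// higher-level trait is derived automatically for every such type.
trait EncodeSync {
    fn encode_sync(&self) -> Vec<u8>;
}

trait Encode: EncodeSync {
    fn encode(&mut self) -> Vec<u8>;
}

// Any `EncodeSync` type is automatically `Encode`, mirroring
// `impl<T: ProtocolCodecSync + 'static> ProtocolCodec for T`.
impl<T: EncodeSync> Encode for T {
    fn encode(&mut self) -> Vec<u8> {
        self.encode_sync()
    }
}

struct Height(u64);

impl EncodeSync for Height {
    fn encode_sync(&self) -> Vec<u8> {
        self.0.to_le_bytes().to_vec()
    }
}

fn main() {
    let mut h = Height(42);
    // The blanket impl forwards to the single hand-written sync impl.
    assert_eq!(h.encode(), 42u64.to_le_bytes().to_vec());
}
```

The payoff is that new types only implement `encode_sync`/`decode_sync`; the async surface comes for free.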
  {
    "path": "protocol/src/codec/primitive.rs",
    "content": "use std::{convert::TryFrom, default::Default, mem};\n\nuse byteorder::{ByteOrder, LittleEndian};\nuse bytes::{Bytes, BytesMut};\nuse derive_more::From;\nuse prost::Message;\n\nuse crate::{\n    codec::{CodecError, ProtocolCodecSync},\n    field, impl_default_bytes_codec_for,\n    types::primitive as protocol_primitive,\n    ProtocolError, ProtocolResult,\n};\n\n// #####################\n// Protobuf\n// #####################\n\n#[derive(Clone, Message, From)]\npub struct Hash {\n    #[prost(bytes, tag = \"1\")]\n    pub value: Vec<u8>,\n}\n\n#[derive(Clone, Message, From)]\npub struct MerkleRoot {\n    #[prost(message, tag = \"1\")]\n    pub value: Option<Hash>,\n}\n\n#[derive(Clone, Message, From)]\npub struct Address {\n    #[prost(bytes, tag = \"1\")]\n    pub value: Vec<u8>,\n}\n\n// #####################\n// Conversion\n// #####################\n\n// Hash\n\nimpl From<protocol_primitive::Hash> for Hash {\n    fn from(hash: protocol_primitive::Hash) -> Hash {\n        let value = hash.as_bytes().to_vec();\n\n        Hash { value }\n    }\n}\n\nimpl TryFrom<Hash> for protocol_primitive::Hash {\n    type Error = ProtocolError;\n\n    fn try_from(hash: Hash) -> Result<protocol_primitive::Hash, Self::Error> {\n        let bytes = Bytes::from(hash.value);\n\n        protocol_primitive::Hash::from_bytes(bytes)\n    }\n}\n\n// Address\nimpl From<protocol_primitive::Address> for Address {\n    fn from(address: protocol_primitive::Address) -> Address {\n        let value = address.as_bytes().to_vec();\n\n        Address { value }\n    }\n}\n\nimpl TryFrom<Address> for protocol_primitive::Address {\n    type Error = ProtocolError;\n\n    fn try_from(address: Address) -> Result<protocol_primitive::Address, Self::Error> {\n        let bytes = Bytes::from(address.value);\n\n        protocol_primitive::Address::from_bytes(bytes)\n    }\n}\n\n// MerkleRoot\n\nimpl From<protocol_primitive::MerkleRoot> for MerkleRoot {\n    fn from(root: 
protocol_primitive::MerkleRoot) -> MerkleRoot {\n        let value = Some(Hash::from(root));\n\n        MerkleRoot { value }\n    }\n}\n\nimpl TryFrom<MerkleRoot> for protocol_primitive::MerkleRoot {\n    type Error = ProtocolError;\n\n    fn try_from(root: MerkleRoot) -> Result<protocol_primitive::MerkleRoot, Self::Error> {\n        let hash = field!(root.value, \"MerkleRoot\", \"value\")?;\n\n        protocol_primitive::Hash::try_from(hash)\n    }\n}\n\n// #####################\n// Codec\n// #####################\n\n// MerkleRoot and AssetID are just Hash aliases\nimpl_default_bytes_codec_for!(primitive, [Hash, Address]);\n\nimpl ProtocolCodecSync for u64 {\n    fn encode_sync(&self) -> ProtocolResult<Bytes> {\n        let mut buf = [0u8; mem::size_of::<u64>()];\n        LittleEndian::write_u64(&mut buf, *self);\n\n        Ok(BytesMut::from(buf.as_ref()).freeze())\n    }\n\n    fn decode_sync(bytes: Bytes) -> ProtocolResult<Self> {\n        Ok(LittleEndian::read_u64(bytes.as_ref()))\n    }\n}\n\n// #####################\n// Util\n// #####################\n\n#[allow(dead_code)]\nfn ensure_len(real: usize, expect: usize) -> Result<(), CodecError> {\n    if real != expect {\n        return Err(CodecError::WrongBytesLength { expect, real });\n    }\n\n    Ok(())\n}\n"
  },
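The `ProtocolCodecSync for u64` impl above fixes the width to 8 little-endian bytes. A std-only sketch of the same round trip (using `to_le_bytes`/`from_le_bytes` in place of the `byteorder` crate):

```rust
// Little-endian u64 round trip: encoding is always exactly 8 bytes, so
// decoding needs no length prefix.
fn encode_u64(v: u64) -> Vec<u8> {
    v.to_le_bytes().to_vec()
}

fn decode_u64(bytes: &[u8]) -> u64 {
    let mut buf = [0u8; 8];
    buf.copy_from_slice(&bytes[..8]);
    u64::from_le_bytes(buf)
}

fn main() {
    let encoded = encode_u64(0xDEAD_BEEF);
    assert_eq!(encoded.len(), 8);
    assert_eq!(decode_u64(&encoded), 0xDEAD_BEEF);
}
```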
  {
    "path": "protocol/src/codec/receipt.rs",
    "content": "use std::convert::TryFrom;\n\nuse bytes::Bytes;\nuse prost::Message;\n\nuse crate::{\n    codec::{primitive::Hash, CodecError, ProtocolCodecSync},\n    field, impl_default_bytes_codec_for,\n    traits::ServiceResponse,\n    types::primitive as protocol_primitive,\n    types::receipt as protocol_receipt,\n    ProtocolError, ProtocolResult,\n};\n\n// #####################\n// Protobuf\n// #####################\n\n#[derive(Clone, Message)]\npub struct Receipt {\n    #[prost(message, tag = \"1\")]\n    pub state_root: Option<Hash>,\n\n    #[prost(uint64, tag = \"2\")]\n    pub height: u64,\n\n    #[prost(message, tag = \"3\")]\n    pub tx_hash: Option<Hash>,\n\n    #[prost(uint64, tag = \"4\")]\n    pub cycles_used: u64,\n\n    #[prost(message, repeated, tag = \"5\")]\n    pub events: Vec<Event>,\n\n    #[prost(message, tag = \"6\")]\n    pub response: Option<ReceiptResponse>,\n}\n\n#[derive(Clone, Message)]\npub struct ReceiptResponse {\n    #[prost(bytes, tag = \"1\")]\n    pub service_name: Vec<u8>,\n\n    #[prost(bytes, tag = \"2\")]\n    pub method: Vec<u8>,\n\n    #[prost(uint64, tag = \"3\")]\n    pub code: u64,\n\n    #[prost(bytes, tag = \"4\")]\n    pub succeed_data: Vec<u8>,\n\n    #[prost(bytes, tag = \"5\")]\n    pub error_message: Vec<u8>,\n}\n\n#[derive(Clone, Message)]\npub struct Event {\n    #[prost(bytes, tag = \"1\")]\n    pub service: Vec<u8>,\n\n    #[prost(bytes, tag = \"2\")]\n    pub name: Vec<u8>,\n\n    #[prost(bytes, tag = \"3\")]\n    pub data: Vec<u8>,\n}\n\n// #################\n// Conversion\n// #################\n\n// ReceiptResult\n\nimpl From<receipt::ReceiptResponse> for ReceiptResponse {\n    fn from(response: receipt::ReceiptResponse) -> ReceiptResponse {\n        ReceiptResponse {\n            service_name:  response.service_name.as_bytes().to_vec(),\n            method:        response.method.as_bytes().to_vec(),\n            code:          response.response.code,\n            succeed_data:  
response.response.succeed_data.as_bytes().to_vec(),\n            error_message: response.response.error_message.as_bytes().to_vec(),\n        }\n    }\n}\n\nimpl TryFrom<ReceiptResponse> for receipt::ReceiptResponse {\n    type Error = ProtocolError;\n\n    fn try_from(response: ReceiptResponse) -> Result<receipt::ReceiptResponse, Self::Error> {\n        Ok(receipt::ReceiptResponse {\n            service_name: String::from_utf8(response.service_name)\n                .map_err(CodecError::FromStringUtf8)?,\n            method:       String::from_utf8(response.method).map_err(CodecError::FromStringUtf8)?,\n            response:     ServiceResponse {\n                code:          response.code,\n                succeed_data:  String::from_utf8(response.succeed_data)\n                    .map_err(CodecError::FromStringUtf8)?,\n                error_message: String::from_utf8(response.error_message)\n                    .map_err(CodecError::FromStringUtf8)?,\n            },\n        })\n    }\n}\n\n// Receipt\n\nimpl From<receipt::Receipt> for Receipt {\n    fn from(receipt: receipt::Receipt) -> Receipt {\n        let state_root = Some(Hash::from(receipt.state_root));\n        let tx_hash = Some(Hash::from(receipt.tx_hash));\n        let events = receipt.events.into_iter().map(Event::from).collect();\n        let response = Some(ReceiptResponse::from(receipt.response));\n\n        Receipt {\n            state_root,\n            height: receipt.height,\n            tx_hash,\n            cycles_used: receipt.cycles_used,\n            events,\n            response,\n        }\n    }\n}\n\nimpl TryFrom<Receipt> for receipt::Receipt {\n    type Error = ProtocolError;\n\n    fn try_from(receipt: Receipt) -> Result<receipt::Receipt, Self::Error> {\n        let state_root = field!(receipt.state_root, \"Receipt\", \"state_root\")?;\n        let tx_hash = field!(receipt.tx_hash, \"Receipt\", \"tx_hash\")?;\n        let response = field!(receipt.response, \"Receipt\", 
\"response\")?;\n        let events = receipt\n            .events\n            .into_iter()\n            .map(protocol_receipt::Event::try_from)\n            .collect::<Result<Vec<protocol_receipt::Event>, ProtocolError>>()?;\n\n        let receipt = receipt::Receipt {\n            state_root: protocol_primitive::Hash::try_from(state_root)?,\n            height: receipt.height,\n            tx_hash: protocol_primitive::Hash::try_from(tx_hash)?,\n            cycles_used: receipt.cycles_used,\n            events,\n            response: receipt::ReceiptResponse::try_from(response)?,\n        };\n\n        Ok(receipt)\n    }\n}\n\n// Event\nimpl From<receipt::Event> for Event {\n    fn from(event: receipt::Event) -> Event {\n        Event {\n            service: event.service.as_bytes().to_vec(),\n            name:    event.name.as_bytes().to_vec(),\n            data:    event.data.as_bytes().to_vec(),\n        }\n    }\n}\n\nimpl TryFrom<Event> for receipt::Event {\n    type Error = ProtocolError;\n\n    fn try_from(event: Event) -> Result<receipt::Event, Self::Error> {\n        Ok(receipt::Event {\n            service: String::from_utf8(event.service).map_err(CodecError::FromStringUtf8)?,\n            name:    String::from_utf8(event.name).map_err(CodecError::FromStringUtf8)?,\n            data:    String::from_utf8(event.data).map_err(CodecError::FromStringUtf8)?,\n        })\n    }\n}\n\n// #################\n// Codec\n// #################\n\nimpl_default_bytes_codec_for!(receipt, [Receipt]);\n"
  },
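The `TryFrom` impls above repeatedly convert protobuf `Vec<u8>` fields back into `String`, mapping failures into `CodecError::FromStringUtf8`. A std-only sketch of that validation step (`FromUtf8Error` stands in for the crate's wrapped error type):

```rust
// Protobuf carries strings as raw bytes; decoding must validate UTF-8 and
// surface the failure as a typed error rather than panicking or truncating.
use std::string::FromUtf8Error;

fn field_to_string(raw: Vec<u8>) -> Result<String, FromUtf8Error> {
    String::from_utf8(raw)
}

fn main() {
    assert_eq!(field_to_string(b"mock-event".to_vec()).unwrap(), "mock-event");
    // Invalid UTF-8 is rejected instead of being silently accepted.
    assert!(field_to_string(vec![0xff, 0xfe]).is_err());
}
```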
  {
    "path": "protocol/src/codec/tests/mod.rs",
    "content": "extern crate test;\n\nuse std::convert::TryInto;\n\nuse bytes::Bytes;\nuse test::Bencher;\n\nuse crate::codec::ProtocolCodecSync;\nuse crate::types::block::Block;\nuse crate::types::transaction::SignedTransaction;\nuse crate::{codec, types};\n\nuse crate::fixed_codec::tests::*;\n\nmacro_rules! test {\n    ($mod: ident, $r#type: ident, $mock_func: ident $(, $arg: expr)*) => {\n        {\n            let before_val = $mock_func($($arg),*);\n            let codec_val: codec::$mod::$r#type = before_val.into();\n            let after_val: types::$mod::$r#type = codec_val.try_into().unwrap();\n            after_val\n        }\n    };\n}\n\n#[test]\nfn test_codec() {\n    test!(primitive, Hash, mock_hash);\n    test!(primitive, MerkleRoot, mock_merkle_root);\n\n    test!(receipt, Receipt, mock_receipt);\n\n    test!(transaction, TransactionRequest, mock_transaction_request);\n    test!(transaction, RawTransaction, mock_raw_tx);\n    test!(transaction, SignedTransaction, mock_sign_tx);\n\n    test!(block, Validator, mock_validator);\n    test!(block, Proof, mock_proof);\n    test!(block, BlockHeader, mock_block_header);\n    test!(block, Block, mock_block, 100);\n    test!(block, Pill, mock_pill, 100, 200);\n}\n\n#[test]\nfn test_signed_tx_serialize_size() {\n    let txs: Vec<Bytes> = (0..50_000)\n        .map(|_| mock_sign_tx().encode_sync().unwrap())\n        .collect();\n    let size = &txs.iter().fold(0, |acc, x| acc + x.len());\n    println!(\"1 tx size {:?}\", txs[1].len());\n    println!(\"50_000 tx size {:?}\", size);\n}\n\n#[bench]\nfn bench_signed_tx_serialize(b: &mut Bencher) {\n    let txs: Vec<SignedTransaction> = (0..50_000).map(|_| mock_sign_tx()).collect();\n    b.iter(|| {\n        txs.iter().for_each(|signed_tx| {\n            signed_tx.encode_sync().unwrap();\n        });\n    });\n}\n\n#[bench]\nfn bench_signed_tx_deserialize(b: &mut Bencher) {\n    let txs: Vec<Bytes> = (0..50_000)\n        .map(|_| 
mock_sign_tx().encode_sync().unwrap())\n        .collect();\n\n    b.iter(|| {\n        txs.iter().for_each(|signed_tx| {\n            SignedTransaction::decode_sync(signed_tx.clone()).unwrap();\n        });\n    });\n}\n\n#[bench]\nfn bench_block_serialize(b: &mut Bencher) {\n    let block = mock_block(50_000);\n\n    b.iter(|| {\n        block.encode_sync().unwrap();\n    });\n}\n\n#[bench]\nfn bench_block_deserialize(b: &mut Bencher) {\n    let block = mock_block(50_000).encode_sync().unwrap();\n\n    b.iter(|| {\n        Block::decode_sync(block.clone()).unwrap();\n    });\n}\n"
  },
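The `#[bench]` functions above require the unstable `test` crate (`extern crate test;`), which only builds on nightly. A stable-Rust stand-in for the same measurement, timing a batch of encodes with `std::time::Instant` (the codec is simplified to a `to_le_bytes` call for illustration):

```rust
use std::time::Instant;

// Simplified encoder standing in for `encode_sync` in the benches above.
fn encode(v: u64) -> Vec<u8> {
    v.to_le_bytes().to_vec()
}

fn main() {
    let start = Instant::now();
    let mut total = 0usize;
    // Mirror the benches' batch size of 50_000 items.
    for i in 0..50_000u64 {
        total += encode(i).len();
    }
    println!("encoded 50_000 values ({} bytes) in {:?}", total, start.elapsed());
    assert_eq!(total, 50_000 * 8);
}
```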
  {
    "path": "protocol/src/codec/transaction.rs",
    "content": "use std::convert::TryFrom;\n\nuse bytes::Bytes;\nuse prost::Message;\n\nuse crate::{\n    codec::primitive::{Address, Hash},\n    codec::{CodecError, ProtocolCodecSync},\n    field, impl_default_bytes_codec_for,\n    types::primitive as protocol_primitive,\n    ProtocolError, ProtocolResult,\n};\n\n#[derive(Clone, Message)]\npub struct TransactionRequest {\n    #[prost(bytes, tag = \"1\")]\n    pub service_name: Vec<u8>,\n\n    #[prost(bytes, tag = \"2\")]\n    pub method: Vec<u8>,\n\n    #[prost(bytes, tag = \"3\")]\n    pub payload: Vec<u8>,\n}\n\n#[derive(Clone, Message)]\npub struct RawTransaction {\n    #[prost(message, tag = \"1\")]\n    pub chain_id: Option<Hash>,\n\n    #[prost(message, tag = \"2\")]\n    pub nonce: Option<Hash>,\n\n    #[prost(uint64, tag = \"3\")]\n    pub timeout: u64,\n\n    #[prost(uint64, tag = \"4\")]\n    pub cycles_price: u64,\n\n    #[prost(uint64, tag = \"5\")]\n    pub cycles_limit: u64,\n\n    #[prost(message, tag = \"6\")]\n    pub request: Option<TransactionRequest>,\n\n    #[prost(message, tag = \"7\")]\n    pub sender: Option<Address>,\n}\n\n#[derive(Clone, Message)]\npub struct SignedTransaction {\n    #[prost(message, tag = \"1\")]\n    pub raw: Option<RawTransaction>,\n\n    #[prost(message, tag = \"2\")]\n    pub tx_hash: Option<Hash>,\n\n    #[prost(bytes, tag = \"3\")]\n    pub pubkey: Vec<u8>,\n\n    #[prost(bytes, tag = \"4\")]\n    pub signature: Vec<u8>,\n}\n\n// #################\n// Conversion\n// #################\n\n// TransactionAction\n\nimpl From<transaction::TransactionRequest> for TransactionRequest {\n    fn from(request: transaction::TransactionRequest) -> TransactionRequest {\n        TransactionRequest {\n            service_name: request.service_name.as_bytes().to_vec(),\n            method:       request.method.as_bytes().to_vec(),\n            payload:      request.payload.as_bytes().to_vec(),\n        }\n    }\n}\n\nimpl TryFrom<TransactionRequest> for 
transaction::TransactionRequest {\n    type Error = ProtocolError;\n\n    fn try_from(\n        request: TransactionRequest,\n    ) -> Result<transaction::TransactionRequest, Self::Error> {\n        Ok(transaction::TransactionRequest {\n            service_name: String::from_utf8(request.service_name)\n                .map_err(CodecError::FromStringUtf8)?,\n            method:       String::from_utf8(request.method).map_err(CodecError::FromStringUtf8)?,\n            payload:      String::from_utf8(request.payload).map_err(CodecError::FromStringUtf8)?,\n        })\n    }\n}\n\n// RawTransaction\n\nimpl From<transaction::RawTransaction> for RawTransaction {\n    fn from(raw: transaction::RawTransaction) -> RawTransaction {\n        let chain_id = Some(Hash::from(raw.chain_id));\n        let nonce = Some(Hash::from(raw.nonce));\n        let request = Some(TransactionRequest::from(raw.request));\n        let sender = Some(Address::from(raw.sender));\n\n        RawTransaction {\n            chain_id,\n            nonce,\n            cycles_price: raw.cycles_price,\n            timeout: raw.timeout,\n            cycles_limit: raw.cycles_limit,\n            request,\n            sender,\n        }\n    }\n}\n\nimpl TryFrom<RawTransaction> for transaction::RawTransaction {\n    type Error = ProtocolError;\n\n    fn try_from(raw: RawTransaction) -> Result<transaction::RawTransaction, Self::Error> {\n        let chain_id = field!(raw.chain_id, \"RawTransaction\", \"chain_id\")?;\n        let nonce = field!(raw.nonce, \"RawTransaction\", \"nonce\")?;\n        let request = field!(raw.request, \"RawTransaction\", \"request\")?;\n        let sender = field!(raw.sender, \"RawTransaction\", \"sender\")?;\n\n        let raw_tx = transaction::RawTransaction {\n            chain_id:     protocol_primitive::Hash::try_from(chain_id)?,\n            nonce:        protocol_primitive::Hash::try_from(nonce)?,\n            timeout:      raw.timeout,\n            cycles_price: 
raw.cycles_price,\n            cycles_limit: raw.cycles_limit,\n            request:      transaction::TransactionRequest::try_from(request)?,\n            sender:       protocol_primitive::Address::try_from(sender)?,\n        };\n\n        Ok(raw_tx)\n    }\n}\n\n// SignedTransaction\n\nimpl From<transaction::SignedTransaction> for SignedTransaction {\n    fn from(stx: transaction::SignedTransaction) -> SignedTransaction {\n        let raw = RawTransaction::from(stx.raw);\n        let tx_hash = Hash::from(stx.tx_hash);\n\n        SignedTransaction {\n            raw:       Some(raw),\n            tx_hash:   Some(tx_hash),\n            pubkey:    stx.pubkey.to_vec(),\n            signature: stx.signature.to_vec(),\n        }\n    }\n}\n\nimpl TryFrom<SignedTransaction> for transaction::SignedTransaction {\n    type Error = ProtocolError;\n\n    fn try_from(stx: SignedTransaction) -> Result<transaction::SignedTransaction, Self::Error> {\n        let raw = field!(stx.raw, \"SignedTransaction\", \"raw\")?;\n        let tx_hash = field!(stx.tx_hash, \"SignedTransaction\", \"tx_hash\")?;\n\n        let stx = transaction::SignedTransaction {\n            raw:       transaction::RawTransaction::try_from(raw)?,\n            tx_hash:   protocol_primitive::Hash::try_from(tx_hash)?,\n            pubkey:    Bytes::from(stx.pubkey),\n            signature: Bytes::from(stx.signature),\n        };\n\n        Ok(stx)\n    }\n}\n\n// #################\n// Codec\n// #################\n\nimpl_default_bytes_codec_for!(transaction, [RawTransaction, SignedTransaction]);\n"
  },
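The `TryFrom` impls above lean on the `field!` macro (defined elsewhere in the crate) to unwrap protobuf `Option` fields. A minimal stand-in showing the idea, with illustrative names rather than the crate's actual definitions: a `message`-typed protobuf field arrives as `Option<T>`, and `None` must become a typed missing-field error, not a panic.

```rust
// Stand-in for the `field!` macro: extract an Option field or report which
// type/field was missing.
#[derive(Debug, PartialEq)]
struct MissingField {
    r#type: &'static str,
    field: &'static str,
}

fn field<T>(opt: Option<T>, ty: &'static str, name: &'static str) -> Result<T, MissingField> {
    opt.ok_or(MissingField { r#type: ty, field: name })
}

fn main() {
    assert_eq!(field(Some(7u64), "RawTransaction", "timeout"), Ok(7u64));
    // A missing field names both the message type and the field.
    let err = field::<u64>(None, "RawTransaction", "chain_id").unwrap_err();
    assert_eq!(err.field, "chain_id");
}
```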
  {
    "path": "protocol/src/fixed_codec/mod.rs",
    "content": "pub mod primitive;\npub mod receipt;\n#[cfg(test)]\npub mod tests;\npub mod transaction;\n\nuse std::error::Error;\n\nuse bytes::Bytes;\nuse derive_more::{Display, From};\n\nuse crate::{ProtocolError, ProtocolErrorKind, ProtocolResult};\n\n// Consistent serialization trait using rlp-algorithm\npub trait FixedCodec: Sized {\n    fn encode_fixed(&self) -> ProtocolResult<Bytes>;\n\n    fn decode_fixed(bytes: Bytes) -> ProtocolResult<Self>;\n}\n\n#[derive(Debug, Display, From)]\npub enum FixedCodecError {\n    Decoder(rlp::DecoderError),\n\n    StringUTF8(std::string::FromUtf8Error),\n\n    #[display(fmt = \"wrong bytes of bool\")]\n    DecodeBool,\n\n    #[display(fmt = \"wrong bytes of u8\")]\n    DecodeUint8,\n}\n\nimpl Error for FixedCodecError {}\n\nimpl From<FixedCodecError> for ProtocolError {\n    fn from(err: FixedCodecError) -> ProtocolError {\n        ProtocolError::new(ProtocolErrorKind::FixedCodec, Box::new(err))\n    }\n}\n"
  },
  {
    "path": "protocol/src/fixed_codec/primitive.rs",
    "content": "use std::mem;\n\nuse byteorder::{ByteOrder, LittleEndian};\n\nuse crate::fixed_codec::{FixedCodec, FixedCodecError};\nuse crate::types::{Bytes, BytesMut, Hex};\nuse crate::ProtocolResult;\n\nimpl FixedCodec for bool {\n    fn encode_fixed(&self) -> ProtocolResult<Bytes> {\n        let bs = if *self {\n            [1u8; mem::size_of::<u8>()]\n        } else {\n            [0u8; mem::size_of::<u8>()]\n        };\n\n        Ok(BytesMut::from(bs.as_ref()).freeze())\n    }\n\n    fn decode_fixed(bytes: Bytes) -> ProtocolResult<Self> {\n        let u = *bytes.to_vec().get(0).ok_or(FixedCodecError::DecodeBool)?;\n\n        match u {\n            0 => Ok(false),\n            1 => Ok(true),\n            _ => Err(FixedCodecError::DecodeBool.into()),\n        }\n    }\n}\n\nimpl FixedCodec for u8 {\n    fn encode_fixed(&self) -> ProtocolResult<Bytes> {\n        Ok(BytesMut::from([*self].as_ref()).freeze())\n    }\n\n    fn decode_fixed(bytes: Bytes) -> ProtocolResult<Self> {\n        let u = *bytes.to_vec().get(0).ok_or(FixedCodecError::DecodeUint8)?;\n        Ok(u)\n    }\n}\n\nimpl FixedCodec for u16 {\n    fn encode_fixed(&self) -> ProtocolResult<Bytes> {\n        let mut buf = [0u8; mem::size_of::<u32>()];\n        LittleEndian::write_u16(&mut buf, *self);\n\n        Ok(BytesMut::from(buf.as_ref()).freeze())\n    }\n\n    fn decode_fixed(bytes: Bytes) -> ProtocolResult<Self> {\n        Ok(LittleEndian::read_u16(bytes.as_ref()))\n    }\n}\n\nimpl FixedCodec for u32 {\n    fn encode_fixed(&self) -> ProtocolResult<Bytes> {\n        let mut buf = [0u8; mem::size_of::<u32>()];\n        LittleEndian::write_u32(&mut buf, *self);\n\n        Ok(BytesMut::from(buf.as_ref()).freeze())\n    }\n\n    fn decode_fixed(bytes: Bytes) -> ProtocolResult<Self> {\n        Ok(LittleEndian::read_u32(bytes.as_ref()))\n    }\n}\n\nimpl FixedCodec for u64 {\n    fn encode_fixed(&self) -> ProtocolResult<Bytes> {\n        let mut buf = [0u8; mem::size_of::<u64>()];\n        
LittleEndian::write_u64(&mut buf, *self);\n\n        Ok(BytesMut::from(buf.as_ref()).freeze())\n    }\n\n    fn decode_fixed(bytes: Bytes) -> ProtocolResult<Self> {\n        Ok(LittleEndian::read_u64(bytes.as_ref()))\n    }\n}\n\nimpl FixedCodec for u128 {\n    fn encode_fixed(&self) -> ProtocolResult<Bytes> {\n        let mut buf = [0u8; mem::size_of::<u128>()];\n        LittleEndian::write_u128(&mut buf, *self);\n\n        Ok(BytesMut::from(buf.as_ref()).freeze())\n    }\n\n    fn decode_fixed(bytes: Bytes) -> ProtocolResult<Self> {\n        Ok(LittleEndian::read_u128(bytes.as_ref()))\n    }\n}\n\nimpl FixedCodec for String {\n    fn encode_fixed(&self) -> ProtocolResult<Bytes> {\n        Ok(Bytes::from(self.clone()))\n    }\n\n    fn decode_fixed(bytes: Bytes) -> ProtocolResult<Self> {\n        String::from_utf8(bytes.to_vec()).map_err(|e| FixedCodecError::StringUTF8(e).into())\n    }\n}\n\nimpl FixedCodec for Bytes {\n    fn encode_fixed(&self) -> ProtocolResult<Bytes> {\n        Ok(self.clone())\n    }\n\n    fn decode_fixed(bytes: Bytes) -> ProtocolResult<Self> {\n        Ok(bytes)\n    }\n}\n\nimpl FixedCodec for Vec<u8> {\n    fn encode_fixed(&self) -> ProtocolResult<Bytes> {\n        Ok(Bytes::from(self.clone()))\n    }\n\n    fn decode_fixed(bytes: Bytes) -> ProtocolResult<Self> {\n        Ok(bytes.to_vec())\n    }\n}\n\nimpl FixedCodec for Hex {\n    fn encode_fixed(&self) -> ProtocolResult<bytes::Bytes> {\n        let bytes = self.as_string_trim0x().as_bytes().to_vec();\n        Ok(bytes::Bytes::from(bytes))\n    }\n\n    fn decode_fixed(bytes: bytes::Bytes) -> ProtocolResult<Self> {\n        let s = String::from_utf8(bytes.to_vec()).map_err(FixedCodecError::StringUTF8)?;\n        Ok(Hex::from_string(\"0x\".to_owned() + s.as_str())?)\n    }\n}\n"
  },
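The `FixedCodec for bool` impl above encodes a bool as a single strict byte and rejects anything other than 0 or 1 on decode. A std-only sketch of the same contract (a string error stands in for `FixedCodecError::DecodeBool`):

```rust
// One byte, strictly 0 or 1; any other byte or an empty input is a decode
// error, matching the strictness of the FixedCodec bool impl.
fn encode_bool(b: bool) -> Vec<u8> {
    vec![b as u8]
}

fn decode_bool(bytes: &[u8]) -> Result<bool, &'static str> {
    match bytes.first().copied() {
        Some(0) => Ok(false),
        Some(1) => Ok(true),
        _ => Err("wrong bytes of bool"),
    }
}

fn main() {
    assert_eq!(decode_bool(&encode_bool(true)), Ok(true));
    assert_eq!(decode_bool(&encode_bool(false)), Ok(false));
    assert!(decode_bool(&[2]).is_err()); // 2 is not a valid bool byte
    assert!(decode_bool(&[]).is_err()); // empty input is rejected
}
```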
  {
    "path": "protocol/src/fixed_codec/receipt.rs",
    "content": "use crate::fixed_codec::{FixedCodec, FixedCodecError};\nuse crate::traits::ServiceResponse;\nuse crate::types::receipt::ReceiptResponse;\nuse crate::ProtocolResult;\n\nimpl rlp::Encodable for ReceiptResponse {\n    fn rlp_append(&self, s: &mut rlp::RlpStream) {\n        s.begin_list(5)\n            .append(&self.response.code)\n            .append(&self.response.succeed_data)\n            .append(&self.response.error_message)\n            .append(&self.method)\n            .append(&self.service_name);\n    }\n}\n\nimpl rlp::Decodable for ReceiptResponse {\n    fn decode(r: &rlp::Rlp) -> Result<Self, rlp::DecoderError> {\n        if !r.is_list() && r.size() != 5 {\n            return Err(rlp::DecoderError::RlpIncorrectListLen);\n        }\n\n        let code = r.at(0)?.as_val()?;\n        let succeed_data = r.at(1)?.as_val()?;\n        let error_message = r.at(2)?.as_val()?;\n        let method = r.at(3)?.as_val()?;\n        let service_name = r.at(4)?.as_val()?;\n\n        Ok(ReceiptResponse {\n            service_name,\n            method,\n            response: ServiceResponse {\n                code,\n                succeed_data,\n                error_message,\n            },\n        })\n    }\n}\n\nimpl FixedCodec for ReceiptResponse {\n    fn encode_fixed(&self) -> ProtocolResult<bytes::Bytes> {\n        Ok(bytes::Bytes::from(rlp::encode(self)))\n    }\n\n    fn decode_fixed(bytes: bytes::Bytes) -> ProtocolResult<Self> {\n        Ok(rlp::decode(bytes.as_ref()).map_err(FixedCodecError::from)?)\n    }\n}\n"
  },
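The `Decodable` impl above must enforce two conditions before reading fields: the value is a list, and the list has exactly the expected arity. A std-only sketch of that guard, where `Item` is an illustrative stand-in for an RLP value (not the `rlp` crate's API):

```rust
// A decoder guard: a value must be a list AND have exactly `expect` items;
// failing either check is a decode error, never a silent fall-through.
#[derive(Debug)]
enum Item {
    Bytes(Vec<u8>),
    List(Vec<Item>),
}

fn check_list(item: &Item, expect: usize) -> Result<&[Item], &'static str> {
    match item {
        Item::List(items) if items.len() == expect => Ok(items),
        Item::List(_) => Err("incorrect list length"),
        Item::Bytes(_) => Err("expected a list"),
    }
}

fn main() {
    let ok = Item::List(vec![Item::Bytes(vec![1]), Item::Bytes(vec![2])]);
    assert!(check_list(&ok, 2).is_ok());
    assert!(check_list(&ok, 5).is_err()); // wrong arity is rejected
    assert!(check_list(&Item::Bytes(vec![0]), 2).is_err()); // non-list is rejected
}
```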
  {
    "path": "protocol/src/fixed_codec/tests/fixed_codec.rs",
    "content": "extern crate test;\n\nuse test::Bencher;\n\nuse crate::fixed_codec::FixedCodec;\nuse crate::types;\n\nuse super::*;\n\nmacro_rules! test_eq {\n    ($category: ident, $r#type: ident, $mock_func: ident $(, $arg: expr)*) => {\n        let before_val = $mock_func($($arg),*);\n        let rlp_bytes = before_val.encode_fixed().unwrap();\n        let after_val: types::$category::$r#type = <_>::decode_fixed(rlp_bytes.clone()).unwrap();\n        assert_eq!(before_val, after_val);\n    };\n}\n\n#[test]\nfn test_fixed_codec_primitive() {\n    let bs = true.encode_fixed().unwrap();\n    assert_eq!(<bool as FixedCodec>::decode_fixed(bs).unwrap(), true);\n\n    let bs = false.encode_fixed().unwrap();\n    assert_eq!(<bool as FixedCodec>::decode_fixed(bs).unwrap(), false);\n\n    let bs = 0u8.encode_fixed().unwrap();\n    assert_eq!(<u8 as FixedCodec>::decode_fixed(bs).unwrap(), 0u8);\n\n    let bs = 8u8.encode_fixed().unwrap();\n    assert_eq!(<u8 as FixedCodec>::decode_fixed(bs).unwrap(), 8u8);\n\n    let bs = 8u32.encode_fixed().unwrap();\n    assert_eq!(<u32 as FixedCodec>::decode_fixed(bs).unwrap(), 8u32);\n\n    let bs = 8u64.encode_fixed().unwrap();\n    assert_eq!(<u64 as FixedCodec>::decode_fixed(bs).unwrap(), 8u64);\n\n    let bs = 8u128.encode_fixed().unwrap();\n    assert_eq!(<u64 as FixedCodec>::decode_fixed(bs).unwrap(), 8u64);\n\n    let bs = \"test\".to_owned().encode_fixed().unwrap();\n    assert_eq!(\n        <String as FixedCodec>::decode_fixed(bs).unwrap(),\n        \"test\".to_owned()\n    );\n}\n\n#[test]\nfn test_fixed_codec() {\n    test_eq!(primitive, Hash, mock_hash);\n\n    test_eq!(transaction, RawTransaction, mock_raw_tx);\n    test_eq!(transaction, SignedTransaction, mock_sign_tx);\n\n    test_eq!(block, Proof, mock_proof);\n    test_eq!(block, BlockHeader, mock_block_header);\n    test_eq!(block, Block, mock_block, 33);\n    test_eq!(block, Pill, mock_pill, 22, 33);\n    test_eq!(block, Validator, mock_validator);\n\n    
test_eq!(receipt, Receipt, mock_receipt);\n}\n\n#[test]\nfn test_signed_tx_serialize_size() {\n    let txs: Vec<Bytes> = (0..50_000)\n        .map(|_| mock_sign_tx().encode_fixed().unwrap())\n        .collect();\n    let size = &txs.iter().fold(0, |acc, x| acc + x.len());\n    println!(\"1 tx size {:?}\", txs[1].len());\n    println!(\"50_000 tx size {:?}\", size);\n}\n\n#[bench]\nfn bench_signed_tx_serialize(b: &mut Bencher) {\n    let txs: Vec<SignedTransaction> = (0..50_000).map(|_| mock_sign_tx()).collect();\n    b.iter(|| {\n        txs.iter().for_each(|signed_tx| {\n            signed_tx.encode_fixed().unwrap();\n        });\n    });\n}\n\n#[bench]\nfn bench_signed_tx_deserialize(b: &mut Bencher) {\n    let txs: Vec<Bytes> = (0..50_000)\n        .map(|_| mock_sign_tx().encode_fixed().unwrap())\n        .collect();\n\n    b.iter(|| {\n        txs.iter().for_each(|signed_tx| {\n            SignedTransaction::decode_fixed(signed_tx.clone()).unwrap();\n        });\n    });\n}\n\n#[bench]\nfn bench_block_serialize(b: &mut Bencher) {\n    let block = mock_block(50_000);\n\n    b.iter(|| {\n        block.encode_fixed().unwrap();\n    });\n}\n\n#[bench]\nfn bench_block_deserialize(b: &mut Bencher) {\n    let block = mock_block(50_000).encode_fixed().unwrap();\n\n    b.iter(|| {\n        Block::decode_fixed(block.clone()).unwrap();\n    });\n}\n"
  },
  {
    "path": "protocol/src/fixed_codec/tests/mod.rs",
    "content": "mod fixed_codec;\n\nuse bytes::Bytes;\nuse rand::random;\n\nuse crate::traits::ServiceResponse;\nuse crate::types::block::{Block, BlockHeader, Pill, Proof, Validator};\nuse crate::types::primitive::{Address, Hash, MerkleRoot};\nuse crate::types::receipt::{Event, Receipt, ReceiptResponse};\nuse crate::types::transaction::{RawTransaction, SignedTransaction, TransactionRequest};\n\n// #####################\n// Mock Primitive\n// #####################\n\npub fn mock_hash() -> Hash {\n    Hash::digest(get_random_bytes(10))\n}\n\npub fn mock_merkle_root() -> MerkleRoot {\n    Hash::digest(get_random_bytes(10))\n}\n\npub fn mock_address() -> Address {\n    let hash = mock_hash();\n    Address::from_hash(hash).unwrap()\n}\n\n// #####################\n// Mock Receipt\n// #####################\n\npub fn mock_receipt_response() -> ReceiptResponse {\n    ReceiptResponse {\n        service_name: \"mock-service\".to_owned(),\n        method:       \"mock-method\".to_owned(),\n        response:     ServiceResponse::<String> {\n            code:          0,\n            succeed_data:  \"ok\".to_owned(),\n            error_message: \"\".to_owned(),\n        },\n    }\n}\n\npub fn mock_receipt() -> Receipt {\n    Receipt {\n        state_root:  mock_merkle_root(),\n        height:      13,\n        tx_hash:     mock_hash(),\n        cycles_used: 100,\n        events:      vec![mock_event()],\n        response:    mock_receipt_response(),\n    }\n}\n\npub fn mock_event() -> Event {\n    Event {\n        service: \"mock-event\".to_owned(),\n        name:    \"mock-name\".to_owned(),\n        data:    \"mock-data\".to_owned(),\n    }\n}\n\n// #####################\n// Mock Transaction\n// #####################\n\npub fn mock_transaction_request() -> TransactionRequest {\n    TransactionRequest {\n        service_name: \"mock-service\".to_owned(),\n        method:       \"mock-method\".to_owned(),\n        payload:      \"mock-payload\".to_owned(),\n    }\n}\n\npub fn 
mock_raw_tx() -> RawTransaction {\n    RawTransaction {\n        chain_id:     mock_hash(),\n        nonce:        mock_hash(),\n        timeout:      100,\n        cycles_price: 1,\n        cycles_limit: 100,\n        request:      mock_transaction_request(),\n        sender:       mock_address(),\n    }\n}\n\npub fn mock_sign_tx() -> SignedTransaction {\n    SignedTransaction {\n        raw:       mock_raw_tx(),\n        tx_hash:   mock_hash(),\n        pubkey:    Default::default(),\n        signature: Default::default(),\n    }\n}\n\n// #####################\n// Mock Block\n// #####################\n\npub fn mock_validator() -> Validator {\n    Validator {\n        pub_key:        get_random_bytes(32),\n        propose_weight: 1,\n        vote_weight:    1,\n    }\n}\n\npub fn mock_proof() -> Proof {\n    Proof {\n        height:     4,\n        round:      99,\n        block_hash: mock_hash(),\n        signature:  Default::default(),\n        bitmap:     Default::default(),\n    }\n}\n\npub fn mock_block_header() -> BlockHeader {\n    BlockHeader {\n        chain_id:                       mock_hash(),\n        height:                         42,\n        exec_height:                    41,\n        prev_hash:                      mock_hash(),\n        timestamp:                      420_000_000,\n        order_root:                     mock_merkle_root(),\n        order_signed_transactions_hash: Hash::default(),\n        confirm_root:                   vec![mock_hash(), mock_hash()],\n        state_root:                     mock_merkle_root(),\n        receipt_root:                   vec![mock_hash(), mock_hash()],\n        cycles_used:                    vec![999_999],\n        proposer:                       mock_address(),\n        proof:                          mock_proof(),\n        validator_version:              1,\n        validators:                     vec![\n            mock_validator(),\n            mock_validator(),\n            
mock_validator(),\n            mock_validator(),\n        ],\n    }\n}\n\npub fn mock_block(order_size: usize) -> Block {\n    Block {\n        header:            mock_block_header(),\n        ordered_tx_hashes: (0..order_size).map(|_| mock_hash()).collect(),\n    }\n}\n\npub fn mock_pill(order_size: usize, propose_size: usize) -> Pill {\n    Pill {\n        block:          mock_block(order_size),\n        propose_hashes: (0..propose_size).map(|_| mock_hash()).collect(),\n    }\n}\n\npub fn get_random_bytes(len: usize) -> Bytes {\n    let vec: Vec<u8> = (0..len).map(|_| random::<u8>()).collect();\n    Bytes::from(vec)\n}\n"
  },
  {
    "path": "protocol/src/fixed_codec/transaction.rs",
    "content": "use bytes::BytesMut;\n\nuse crate::fixed_codec::{FixedCodec, FixedCodecError};\nuse crate::types::{Hash, RawTransaction, TransactionRequest};\nuse crate::ProtocolResult;\n\nimpl rlp::Encodable for RawTransaction {\n    fn rlp_append(&self, s: &mut rlp::RlpStream) {\n        s.begin_list(9);\n        s.append(&self.chain_id.as_bytes().to_vec());\n        s.append(&self.cycles_limit);\n        s.append(&self.cycles_price);\n        s.append(&self.nonce.as_bytes().to_vec());\n        s.append(&self.request.method);\n        s.append(&self.request.service_name);\n        s.append(&self.request.payload);\n        s.append(&self.timeout);\n        s.append(&self.sender);\n    }\n}\n\nimpl rlp::Decodable for RawTransaction {\n    fn decode(r: &rlp::Rlp) -> Result<Self, rlp::DecoderError> {\n        let chain_id = Hash::from_bytes(BytesMut::from(r.at(0)?.data()?).freeze())\n            .map_err(|_| rlp::DecoderError::RlpInvalidLength)?;\n\n        let cycles_limit: u64 = r.at(1)?.as_val()?;\n        let cycles_price: u64 = r.at(2)?.as_val()?;\n\n        let nonce = Hash::from_bytes(BytesMut::from(r.at(3)?.data()?).freeze())\n            .map_err(|_| rlp::DecoderError::RlpInvalidLength)?;\n\n        let request = TransactionRequest {\n            method:       r.at(4)?.as_val()?,\n            service_name: r.at(5)?.as_val()?,\n            payload:      r.at(6)?.as_val()?,\n        };\n        let timeout = r.at(7)?.as_val()?;\n        let sender = r.at(8)?.as_val()?;\n\n        Ok(Self {\n            chain_id,\n            cycles_price,\n            cycles_limit,\n            nonce,\n            request,\n            timeout,\n            sender,\n        })\n    }\n}\n\nimpl FixedCodec for RawTransaction {\n    fn encode_fixed(&self) -> ProtocolResult<bytes::Bytes> {\n        Ok(bytes::Bytes::from(rlp::encode(self)))\n    }\n\n    fn decode_fixed(bytes: bytes::Bytes) -> ProtocolResult<Self> {\n        
Ok(rlp::decode(bytes.as_ref()).map_err(FixedCodecError::from)?)\n    }\n}\n"
  },
  {
    "path": "protocol/src/lib.rs",
    "content": "#![feature(test)]\n#![allow(clippy::mutable_key_type)]\n\npub mod codec;\npub mod fixed_codec;\npub mod traits;\npub mod types;\n\nuse std::error::Error;\n\npub use async_trait::async_trait;\npub use bytes::{Buf, BufMut, Bytes, BytesMut};\nuse derive_more::{Constructor, Display};\n\npub use types::{address_hrp, address_hrp_inited, init_address_hrp};\n\n#[derive(Debug, Clone)]\npub enum ProtocolErrorKind {\n    // traits\n    API,\n    Consensus,\n    Executor,\n    Mempool,\n    Network,\n    Storage,\n    Runtime,\n    Binding,\n    BindingMacro,\n    Service,\n    Main,\n\n    // codec\n    Codec,\n\n    // fixed codec\n    FixedCodec,\n\n    // types\n    Types,\n\n    // metric\n    Metric,\n    Cli,\n}\n\n// refer to https://github.com/rust-lang/rust/blob/a17951c4f80eb5208030f91fdb4ae93919fa6b12/src/libstd/io/error.rs#L73\n#[derive(Debug, Constructor, Display)]\n#[display(fmt = \"[ProtocolError] Kind: {:?} Error: {:?}\", kind, error)]\npub struct ProtocolError {\n    kind:  ProtocolErrorKind,\n    error: Box<dyn Error + Send>,\n}\n\nimpl From<ProtocolError> for Box<dyn Error + Send> {\n    fn from(error: ProtocolError) -> Self {\n        Box::new(error) as Box<dyn Error + Send>\n    }\n}\n\nimpl Error for ProtocolError {}\n\npub type ProtocolResult<T> = Result<T, ProtocolError>;\n"
  },
  {
    "path": "protocol/src/traits/api.rs",
    "content": "use async_trait::async_trait;\n\nuse crate::traits::{Context, ServiceResponse};\nuse crate::types::{Address, Block, BlockHeader, Hash, Receipt, SignedTransaction};\nuse crate::ProtocolResult;\n\n#[async_trait]\npub trait APIAdapter: Send + Sync {\n    async fn insert_signed_txs(\n        &self,\n        ctx: Context,\n        signed_tx: SignedTransaction,\n    ) -> ProtocolResult<()>;\n\n    async fn get_block_by_height(\n        &self,\n        ctx: Context,\n        height: Option<u64>,\n    ) -> ProtocolResult<Option<Block>>;\n\n    async fn get_block_header_by_height(\n        &self,\n        ctx: Context,\n        height: Option<u64>,\n    ) -> ProtocolResult<Option<BlockHeader>>;\n\n    async fn get_receipt_by_tx_hash(\n        &self,\n        ctx: Context,\n        tx_hash: Hash,\n    ) -> ProtocolResult<Option<Receipt>>;\n\n    async fn get_transaction_by_hash(\n        &self,\n        ctx: Context,\n        tx_hash: Hash,\n    ) -> ProtocolResult<Option<SignedTransaction>>;\n\n    async fn query_service(\n        &self,\n        ctx: Context,\n        height: u64,\n        cycles_limit: u64,\n        cycles_price: u64,\n        caller: Address,\n        service_name: String,\n        method: String,\n        payload: String,\n    ) -> ProtocolResult<ServiceResponse<String>>;\n}\n"
  },
  {
    "path": "protocol/src/traits/binding.rs",
    "content": "use std::iter::Iterator;\n\nuse crate::fixed_codec::FixedCodec;\nuse crate::traits::{ExecutorParams, ServiceResponse};\nuse crate::types::{Address, Block, Hash, MerkleRoot, Receipt, ServiceContext, SignedTransaction};\nuse crate::ProtocolResult;\n\n#[macro_export]\nmacro_rules! try_service_response {\n    ($service_resp: expr) => {{\n        if $service_resp.is_error() {\n            return ServiceResponse::from_error($service_resp.code, $service_resp.error_message);\n        }\n        $service_resp.succeed_data\n    }};\n}\n\npub trait SDKFactory<SDK: ServiceSDK> {\n    fn get_sdk(&self, name: &str) -> ProtocolResult<SDK>;\n}\n\npub trait ServiceMapping: Send + Sync {\n    fn get_service<SDK: 'static + ServiceSDK, Factory: SDKFactory<SDK>>(\n        &self,\n        name: &str,\n        factory: &Factory,\n    ) -> ProtocolResult<Box<dyn Service>>;\n\n    fn list_service_name(&self) -> Vec<String>;\n}\n\n// `ServiceState` provides access to` world state` and `account` for` service`.\n// The bottom layer is an MPT tree.\n//\n// Each `service` will have a separate` ServiceState`, so their states are\n// isolated from each other.\npub trait ServiceState {\n    fn get<Key: FixedCodec, Ret: FixedCodec>(&self, key: &Key) -> ProtocolResult<Option<Ret>>;\n\n    fn contains<Key: FixedCodec>(&self, key: &Key) -> ProtocolResult<bool>;\n\n    // Insert a pair of key / value\n    // Note: This key/value pair will go into the cache first\n    // and will not be persisted to MPT until `commit` is called.\n    fn insert<Key: FixedCodec, Value: FixedCodec>(\n        &mut self,\n        key: Key,\n        value: Value,\n    ) -> ProtocolResult<()>;\n\n    fn get_account_value<Key: FixedCodec, Ret: FixedCodec>(\n        &self,\n        address: &Address,\n        key: &Key,\n    ) -> ProtocolResult<Option<Ret>>;\n\n    fn set_account_value<Key: FixedCodec, Val: FixedCodec>(\n        &mut self,\n        address: &Address,\n        key: Key,\n        val: Val,\n    ) 
-> ProtocolResult<()>;\n\n    // Roll back all data in the cache\n    fn revert_cache(&mut self) -> ProtocolResult<()>;\n\n    // Move data from cache to stash\n    fn stash(&mut self) -> ProtocolResult<()>;\n\n    // Persist data from stash into MPT\n    fn commit(&mut self) -> ProtocolResult<MerkleRoot>;\n}\n\npub trait ChainQuerier {\n    fn get_transaction_by_hash(&self, tx_hash: &Hash) -> ProtocolResult<Option<SignedTransaction>>;\n\n    // To get the latest `Block` of finality, set `height` to `None`\n    fn get_block_by_height(&self, height: Option<u64>) -> ProtocolResult<Option<Block>>;\n\n    fn get_receipt_by_hash(&self, tx_hash: &Hash) -> ProtocolResult<Option<Receipt>>;\n}\n\n// Admission control will be called before entering service\npub trait AdmissionControl {\n    fn next<SDK: ServiceSDK>(&self, ctx: ServiceContext, sdk: SDK) -> ProtocolResult<()>;\n}\n\n// Developers can use service to customize blockchain business\n//\n// It contains:\n// - init: Initialize the service.\n// - hooks: A pair of hooks that allow inserting a piece of logic before and\n//   after the block is executed.\n// - read: Provide some read-only functions for users or other services to call\n// - write: provide some writable functions for users or other services to call\npub trait Service {\n    // Executed to create genesis states when starting chain\n    fn genesis_(&mut self, _payload: String) {}\n\n    // Called before block execution\n    fn hook_before_(&mut self, _params: &ExecutorParams) {}\n\n    // Called after block execution\n    fn hook_after_(&mut self, _params: &ExecutorParams) {}\n\n    // Called before tx execution\n    fn tx_hook_before_(&mut self, _ctx: ServiceContext) -> ServiceResponse<String>;\n\n    // Called after tx execution\n    fn tx_hook_after_(&mut self, _ctx: ServiceContext) -> ServiceResponse<String>;\n\n    fn write_(&mut self, ctx: ServiceContext) -> ServiceResponse<String>;\n\n    fn read_(&self, ctx: ServiceContext) -> 
ServiceResponse<String>;\n}\n\n// `ServiceSDK` provides multiple rich interfaces for `service` developers\n//\n// It contains:\n//\n// - Various data structures that store data to `world state` (call\n//   `alloc_or_recover_*`)\n// - Access and modify `account`\n// - Access service state\n// - Event triggered\n// - Access to data on the chain (block, transaction, receipt)\n// - Read / write other `service`\n//\n// In fact, these functions depend on:\n//\n// - ChainDB\n// - ServiceState\npub trait ServiceSDK {\n    // Alloc or recover a `Map` by `var_name`\n    fn alloc_or_recover_map<\n        Key: 'static + Send + FixedCodec + Clone + PartialEq,\n        Val: 'static + FixedCodec,\n    >(\n        &mut self,\n        var_name: &str,\n    ) -> Box<dyn StoreMap<Key, Val>>;\n\n    // Alloc or recover an `Array` by `var_name`\n    fn alloc_or_recover_array<Elm: 'static + FixedCodec>(\n        &mut self,\n        var_name: &str,\n    ) -> Box<dyn StoreArray<Elm>>;\n\n    // Alloc or recover a `Uint64` by `var_name`\n    fn alloc_or_recover_uint64(&mut self, var_name: &str) -> Box<dyn StoreUint64>;\n\n    // Alloc or recover a `String` by `var_name`\n    fn alloc_or_recover_string(&mut self, var_name: &str) -> Box<dyn StoreString>;\n\n    // Alloc or recover a `Bool` by `var_name`\n    fn alloc_or_recover_bool(&mut self, var_name: &str) -> Box<dyn StoreBool>;\n\n    // Get a value from the service state by key\n    fn get_value<Key: FixedCodec, Ret: FixedCodec>(&self, key: &Key) -> Option<Ret>;\n\n    // Set a value to the service state by key\n    fn set_value<Key: FixedCodec, Val: FixedCodec>(&mut self, key: Key, val: Val);\n\n    // Get a value from the specified address by key\n    fn get_account_value<Key: FixedCodec, Ret: FixedCodec>(\n        &self,\n        address: &Address,\n        key: &Key,\n    ) -> Option<Ret>;\n\n    // Insert a pair of key / value to the specified address\n    fn set_account_value<Key: FixedCodec, Val: FixedCodec>(\n        &mut self,\n   
     address: &Address,\n        key: Key,\n        val: Val,\n    );\n\n    // Get a signed transaction by `tx_hash`\n    // if not found on the chain, return None\n    fn get_transaction_by_hash(&self, tx_hash: &Hash) -> Option<SignedTransaction>;\n\n    // Get a block by `height`\n    // if not found on the chain, return None\n    // When the parameter `height` is None, get the latest (executing) `block`\n    fn get_block_by_height(&self, height: Option<u64>) -> Option<Block>;\n\n    // Get a receipt by `tx_hash`\n    // if not found on the chain, return None\n    fn get_receipt_by_hash(&self, tx_hash: &Hash) -> Option<Receipt>;\n}\n\npub trait StoreMap<K: FixedCodec + PartialEq, V: FixedCodec> {\n    fn get(&self, key: &K) -> Option<V>;\n\n    fn contains(&self, key: &K) -> bool;\n\n    fn insert(&mut self, key: K, value: V);\n\n    fn remove(&mut self, key: &K) -> Option<V>;\n\n    fn len(&self) -> u64;\n\n    fn is_empty(&self) -> bool;\n\n    fn iter<'a>(&'a self) -> Box<dyn Iterator<Item = (K, V)> + 'a>;\n}\n\npub trait StoreArray<E: FixedCodec> {\n    fn get(&self, index: u64) -> Option<E>;\n\n    fn push(&mut self, element: E);\n\n    fn remove(&mut self, index: u64);\n\n    fn len(&self) -> u64;\n\n    fn is_empty(&self) -> bool;\n\n    fn iter<'a>(&'a self) -> Box<dyn Iterator<Item = (u64, E)> + 'a>;\n}\n\npub trait StoreUint64 {\n    fn get(&self) -> u64;\n\n    fn set(&mut self, val: u64);\n\n    // Add val with self\n    // And set the result back to self\n    fn safe_add(&mut self, val: u64) -> bool;\n\n    // Self minus val\n    // And set the result back to self\n    fn safe_sub(&mut self, val: u64) -> bool;\n\n    // Multiply val with self\n    // And set the result back to self\n    fn safe_mul(&mut self, val: u64) -> bool;\n\n    // Power of self\n    // And set the result back to self\n    fn safe_pow(&mut self, val: u32) -> bool;\n\n    // Self divided by val\n    // And set the result back to self\n    fn safe_div(&mut self, val: u64) -> 
bool;\n\n    // Remainder of self\n    // And set the result back to self\n    fn safe_rem(&mut self, val: u64) -> bool;\n}\n\npub trait StoreString {\n    fn get(&self) -> String;\n\n    fn set(&mut self, val: &str);\n\n    fn len(&self) -> u64;\n\n    fn is_empty(&self) -> bool;\n}\n\npub trait StoreBool {\n    fn get(&self) -> bool;\n\n    fn set(&mut self, b: bool);\n}\n"
  },
  {
    "path": "protocol/src/traits/consensus.rs",
    "content": "use std::collections::HashMap;\n\nuse async_trait::async_trait;\nuse creep::Context;\n\nuse crate::traits::{ExecutorParams, ExecutorResp, TrustFeedback};\nuse crate::types::{\n    Address, Block, BlockHeader, Bytes, Hash, Hex, MerkleRoot, Metadata, Proof, Receipt,\n    SignedTransaction, Validator,\n};\nuse crate::{traits::mempool::MixedTxHashes, ProtocolResult};\n\n#[derive(Clone, Debug, PartialEq, Eq)]\npub enum MessageTarget {\n    Broadcast,\n    Specified(Bytes),\n}\n\n#[derive(Debug, Clone)]\npub struct NodeInfo {\n    pub chain_id:     Hash,\n    pub self_pub_key: Bytes,\n    pub self_address: Address,\n}\n\n#[async_trait]\npub trait Consensus: Send + Sync {\n    /// Network set a received signed proposal to consensus.\n    async fn set_proposal(&self, ctx: Context, proposal: Vec<u8>) -> ProtocolResult<()>;\n\n    /// Network set a received signed vote to consensus.\n    async fn set_vote(&self, ctx: Context, vote: Vec<u8>) -> ProtocolResult<()>;\n\n    /// Network set a received quorum certificate to consensus.\n    async fn set_qc(&self, ctx: Context, qc: Vec<u8>) -> ProtocolResult<()>;\n\n    /// Network set a received signed choke to consensus.\n    async fn set_choke(&self, ctx: Context, choke: Vec<u8>) -> ProtocolResult<()>;\n}\n\n#[async_trait]\npub trait Synchronization: Send + Sync {\n    async fn receive_remote_block(&self, ctx: Context, remote_height: u64) -> ProtocolResult<()>;\n}\n\n#[async_trait]\npub trait SynchronizationAdapter: CommonConsensusAdapter + Send + Sync {\n    fn update_status(\n        &self,\n        ctx: Context,\n        height: u64,\n        consensus_interval: u64,\n        propose_ratio: u64,\n        prevote_ratio: u64,\n        precommit_ratio: u64,\n        brake_ratio: u64,\n        validators: Vec<Validator>,\n    ) -> ProtocolResult<()>;\n\n    fn sync_exec(\n        &self,\n        ctx: Context,\n        params: &ExecutorParams,\n        txs: &[SignedTransaction],\n    ) -> 
ProtocolResult<ExecutorResp>;\n\n    /// Pull the block of the given height from a remote node.\n    async fn get_block_from_remote(&self, ctx: Context, height: u64) -> ProtocolResult<Block>;\n\n    /// Pull signed transactions corresponding to the given hashes from other\n    /// nodes.\n    async fn get_txs_from_remote(\n        &self,\n        ctx: Context,\n        height: u64,\n        hashes: &[Hash],\n    ) -> ProtocolResult<Vec<SignedTransaction>>;\n\n    async fn get_proof_from_remote(&self, ctx: Context, height: u64) -> ProtocolResult<Proof>;\n}\n\n#[async_trait]\npub trait CommonConsensusAdapter: Send + Sync {\n    /// Save a block to the database.\n    async fn save_block(&self, ctx: Context, block: Block) -> ProtocolResult<()>;\n\n    async fn save_proof(&self, ctx: Context, proof: Proof) -> ProtocolResult<()>;\n\n    /// Save some signed transactions to the database.\n    async fn save_signed_txs(\n        &self,\n        ctx: Context,\n        block_height: u64,\n        signed_txs: Vec<SignedTransaction>,\n    ) -> ProtocolResult<()>;\n\n    async fn save_receipts(\n        &self,\n        ctx: Context,\n        height: u64,\n        receipts: Vec<Receipt>,\n    ) -> ProtocolResult<()>;\n\n    /// Flush the given transactions in the mempool.\n    async fn flush_mempool(&self, ctx: Context, ordered_tx_hashes: &[Hash]) -> ProtocolResult<()>;\n\n    /// Get a block corresponding to the given height.\n    async fn get_block_by_height(&self, ctx: Context, height: u64) -> ProtocolResult<Block>;\n\n    async fn get_block_header_by_height(\n        &self,\n        ctx: Context,\n        height: u64,\n    ) -> ProtocolResult<BlockHeader>;\n\n    /// Get the current height from storage.\n    async fn get_current_height(&self, ctx: Context) -> ProtocolResult<u64>;\n\n    async fn get_txs_from_storage(\n        &self,\n        ctx: Context,\n        tx_hashes: &[Hash],\n    ) -> ProtocolResult<Vec<SignedTransaction>>;\n\n    async fn broadcast_height(&self, 
ctx: Context, height: u64) -> ProtocolResult<()>;\n\n    /// Get metadata by the given state_root.\n    fn get_metadata(\n        &self,\n        context: Context,\n        state_root: MerkleRoot,\n        height: u64,\n        timestamp: u64,\n        proposer: Address,\n    ) -> ProtocolResult<Metadata>;\n\n    fn tag_consensus(&self, ctx: Context, peer_ids: Vec<Bytes>) -> ProtocolResult<()>;\n\n    fn report_bad(&self, ctx: Context, feedback: TrustFeedback);\n\n    fn set_args(&self, context: Context, timeout_gap: u64, cycles_limit: u64, max_tx_size: u64);\n\n    async fn verify_proof(\n        &self,\n        ctx: Context,\n        block_header: &BlockHeader,\n        proof: &Proof,\n    ) -> ProtocolResult<()>;\n\n    async fn verify_block_header(&self, ctx: Context, block: &Block) -> ProtocolResult<()>;\n\n    fn verify_proof_signature(\n        &self,\n        ctx: Context,\n        block_height: u64,\n        vote_hash: Bytes,\n        aggregated_signature_bytes: Bytes,\n        vote_pubkeys: Vec<Hex>,\n    ) -> ProtocolResult<()>;\n\n    fn verify_proof_weight(\n        &self,\n        ctx: Context,\n        block_height: u64,\n        weight_map: HashMap<Bytes, u32>,\n        signed_voters: Vec<Bytes>,\n    ) -> ProtocolResult<()>;\n}\n\n#[async_trait]\npub trait ConsensusAdapter: CommonConsensusAdapter + Send + Sync {\n    /// Get some transaction hashes of the given height. 
The amount of the\n    /// transactions is limited by the given cycle limit, and a\n    /// `MixedTxHashes` struct is returned.\n    async fn get_txs_from_mempool(\n        &self,\n        ctx: Context,\n        height: u64,\n        cycle_limit: u64,\n        tx_num_limit: u64,\n    ) -> ProtocolResult<MixedTxHashes>;\n\n    /// Synchronize signed transactions.\n    async fn sync_txs(&self, ctx: Context, propose_txs: Vec<Hash>) -> ProtocolResult<()>;\n\n    /// Get the signed transactions corresponding to the given hashes.\n    async fn get_full_txs(\n        &self,\n        ctx: Context,\n        order_txs: &[Hash],\n    ) -> ProtocolResult<Vec<SignedTransaction>>;\n\n    /// Consensus transmits a message to the given target.\n    async fn transmit(\n        &self,\n        ctx: Context,\n        msg: Vec<u8>,\n        end: &str,\n        target: MessageTarget,\n    ) -> ProtocolResult<()>;\n\n    /// Execute some transactions.\n    #[allow(clippy::too_many_arguments)]\n    async fn execute(\n        &self,\n        ctx: Context,\n        chain_id: Hash,\n        order_root: MerkleRoot,\n        height: u64,\n        cycles_price: u64,\n        proposer: Address,\n        block_hash: Hash,\n        signed_txs: Vec<SignedTransaction>,\n        cycles_limit: u64,\n        timestamp: u64,\n    ) -> ProtocolResult<()>;\n\n    /// Get the validator list of the last block at the given height.\n    async fn get_last_validators(\n        &self,\n        ctx: Context,\n        height: u64,\n    ) -> ProtocolResult<Vec<Validator>>;\n\n    /// Get the current height from storage.\n    async fn get_current_height(&self, ctx: Context) -> ProtocolResult<u64>;\n\n    /// Pull the block of the given height from a remote node.\n    async fn pull_block(&self, ctx: Context, height: u64, end: &str) -> ProtocolResult<Block>;\n\n    async fn verify_txs(&self, ctx: Context, height: u64, txs: &[Hash]) -> ProtocolResult<()>;\n}\n"
  },
  {
    "path": "protocol/src/traits/executor.rs",
    "content": "use std::sync::Arc;\n\nuse creep::Context;\n\nuse crate::traits::{ServiceMapping, Storage};\nuse crate::types::{Address, MerkleRoot, Receipt, SignedTransaction, TransactionRequest};\nuse crate::ProtocolResult;\n\n#[derive(Debug, Clone)]\npub struct ExecutorResp {\n    pub receipts:        Vec<Receipt>,\n    pub all_cycles_used: u64,\n    pub state_root:      MerkleRoot,\n}\n\n#[derive(Debug, Clone)]\npub struct ExecutorParams {\n    pub state_root:   MerkleRoot,\n    pub height:       u64,\n    pub timestamp:    u64,\n    pub cycles_limit: u64,\n    pub proposer:     Address,\n}\n\n#[derive(Debug, Clone, Default)]\npub struct ServiceResponse<T: Default> {\n    pub code:          u64,\n    pub succeed_data:  T,\n    pub error_message: String,\n}\n\nimpl<T: Default> ServiceResponse<T> {\n    pub fn from_error(code: u64, error_message: String) -> Self {\n        Self {\n            code,\n            succeed_data: T::default(),\n            error_message,\n        }\n    }\n\n    pub fn from_succeed(succeed_data: T) -> Self {\n        Self {\n            code: 0,\n            succeed_data,\n            error_message: \"\".to_owned(),\n        }\n    }\n\n    pub fn is_error(&self) -> bool {\n        self.code != 0\n    }\n}\n\nimpl<T: Default + PartialEq> PartialEq for ServiceResponse<T> {\n    fn eq(&self, other: &Self) -> bool {\n        self.code == other.code\n            && self.succeed_data == other.succeed_data\n            && self.error_message == other.error_message\n    }\n}\n\nimpl<T: Default + Eq> Eq for ServiceResponse<T> {}\n\npub trait ExecutorFactory<DB: cita_trie::DB, S: Storage, Mapping: ServiceMapping>:\n    Send + Sync\n{\n    fn from_root(\n        root: MerkleRoot,\n        db: Arc<DB>,\n        storage: Arc<S>,\n        mapping: Arc<Mapping>,\n    ) -> ProtocolResult<Box<dyn Executor>>;\n}\n\npub trait Executor {\n    fn exec(\n        &mut self,\n        ctx: Context,\n        params: &ExecutorParams,\n        txs: 
&[SignedTransaction],\n    ) -> ProtocolResult<ExecutorResp>;\n\n    fn read(\n        &self,\n        params: &ExecutorParams,\n        caller: &Address,\n        cycles_price: u64,\n        request: &TransactionRequest,\n    ) -> ProtocolResult<ServiceResponse<String>>;\n}\n"
  },
  {
    "path": "protocol/src/traits/mempool.rs",
    "content": "use async_trait::async_trait;\nuse creep::Context;\n\nuse crate::types::{Hash, SignedTransaction};\nuse crate::ProtocolResult;\n\n#[allow(dead_code)]\npub struct MixedTxHashes {\n    pub order_tx_hashes:   Vec<Hash>,\n    pub propose_tx_hashes: Vec<Hash>,\n}\n\nimpl MixedTxHashes {\n    pub fn clap(self) -> (Vec<Hash>, Vec<Hash>) {\n        (self.order_tx_hashes, self.propose_tx_hashes)\n    }\n}\n\n#[async_trait]\npub trait MemPool: Send + Sync {\n    async fn insert(&self, ctx: Context, tx: SignedTransaction) -> ProtocolResult<()>;\n\n    async fn package(\n        &self,\n        ctx: Context,\n        cycles_limit: u64,\n        tx_num_limit: u64,\n    ) -> ProtocolResult<MixedTxHashes>;\n\n    async fn flush(&self, ctx: Context, tx_hashes: &[Hash]) -> ProtocolResult<()>;\n\n    async fn get_full_txs(\n        &self,\n        ctx: Context,\n        height: Option<u64>,\n        tx_hashes: &[Hash],\n    ) -> ProtocolResult<Vec<SignedTransaction>>;\n\n    async fn ensure_order_txs(\n        &self,\n        ctx: Context,\n        height: Option<u64>,\n        order_tx_hashes: &[Hash],\n    ) -> ProtocolResult<()>;\n\n    async fn sync_propose_txs(\n        &self,\n        ctx: Context,\n        propose_tx_hashes: Vec<Hash>,\n    ) -> ProtocolResult<()>;\n\n    fn set_args(&self, timeout_gap: u64, cycles_limit: u64, max_tx_size: u64);\n}\n\n#[async_trait]\npub trait MemPoolAdapter: Send + Sync {\n    async fn pull_txs(\n        &self,\n        ctx: Context,\n        height: Option<u64>,\n        tx_hashes: Vec<Hash>,\n    ) -> ProtocolResult<Vec<SignedTransaction>>;\n\n    async fn broadcast_tx(&self, ctx: Context, tx: SignedTransaction) -> ProtocolResult<()>;\n\n    async fn check_authorization(\n        &self,\n        ctx: Context,\n        tx: Box<SignedTransaction>,\n    ) -> ProtocolResult<()>;\n\n    async fn check_transaction(&self, ctx: Context, tx: &SignedTransaction) -> ProtocolResult<()>;\n\n    async fn check_storage_exist(&self, ctx: 
Context, tx_hash: &Hash) -> ProtocolResult<()>;\n\n    async fn get_latest_height(&self, ctx: Context) -> ProtocolResult<u64>;\n\n    async fn get_transactions_from_storage(\n        &self,\n        ctx: Context,\n        block_height: Option<u64>,\n        tx_hashes: &[Hash],\n    ) -> ProtocolResult<Vec<Option<SignedTransaction>>>;\n\n    fn report_good(&self, ctx: Context);\n\n    fn set_args(&self, timeout_gap: u64, cycles_limit: u64, max_tx_size: u64);\n}\n"
  },
  {
    "path": "protocol/src/traits/mod.rs",
    "content": "mod api;\nmod binding;\nmod consensus;\nmod executor;\nmod mempool;\nmod network;\nmod storage;\n\npub use api::APIAdapter;\npub use binding::{\n    AdmissionControl, ChainQuerier, SDKFactory, Service, ServiceMapping, ServiceSDK, ServiceState,\n    StoreArray, StoreBool, StoreMap, StoreString, StoreUint64,\n};\npub use consensus::{\n    CommonConsensusAdapter, Consensus, ConsensusAdapter, MessageTarget, NodeInfo, Synchronization,\n    SynchronizationAdapter,\n};\npub use executor::{Executor, ExecutorFactory, ExecutorParams, ExecutorResp, ServiceResponse};\npub use mempool::{MemPool, MemPoolAdapter, MixedTxHashes};\npub use network::{\n    Gossip, MessageCodec, MessageHandler, Network, PeerTag, PeerTrust, Priority, Rpc, TrustFeedback,\n};\npub use storage::{\n    CommonStorage, IntoIteratorByRef, MaintenanceStorage, Storage, StorageAdapter,\n    StorageBatchModify, StorageCategory, StorageIterator, StorageSchema,\n};\n\npub use creep::{Cloneable, Context};\n"
  },
  {
    "path": "protocol/src/traits/network.rs",
    "content": "use std::{\n    error::Error,\n    fmt::Debug,\n    hash::{Hash, Hasher},\n};\n\nuse async_trait::async_trait;\nuse bytes::Bytes;\nuse derive_more::Display;\nuse serde::{Deserialize, Serialize};\n\nuse crate::{traits::Context, ProtocolError, ProtocolErrorKind, ProtocolResult};\n\n#[derive(Clone, Debug, Copy, Deserialize)]\npub enum Priority {\n    High,\n    Normal,\n}\n\n#[derive(Debug, Display, Clone)]\npub enum TrustFeedback {\n    #[display(fmt = \"fatal {}\", _0)]\n    Fatal(String),\n    #[display(fmt = \"worse {}\", _0)]\n    Worse(String),\n    #[display(fmt = \"bad {}\", _0)]\n    Bad(String),\n    #[display(fmt = \"neutral\")]\n    Neutral,\n    #[display(fmt = \"good\")]\n    Good,\n}\n\n#[derive(Debug, Display, Clone)]\npub enum PeerTag {\n    #[display(fmt = \"consensus\")]\n    Consensus,\n    #[display(fmt = \"always allow\")]\n    AlwaysAllow,\n    #[display(fmt = \"banned, until {}\", until)]\n    Ban { until: u64 }, // timestamp\n    #[display(fmt = \"{}\", _0)]\n    Custom(String), // TODO: Hide custom constructor\n}\n\nimpl PeerTag {\n    pub fn ban(until: u64) -> Self {\n        PeerTag::Ban { until }\n    }\n\n    pub fn ban_key() -> Self {\n        PeerTag::Ban { until: 0 }\n    }\n\n    pub fn custom<S: AsRef<str>>(s: S) -> Result<Self, ()> {\n        let custom_str = s.as_ref();\n        match custom_str {\n            \"consensus\" | \"always_allow\" | \"ban\" => Err(()),\n            _ => Ok(PeerTag::Custom(custom_str.to_owned())),\n        }\n    }\n\n    pub fn str(&self) -> &str {\n        match self {\n            PeerTag::Consensus => \"consensus\",\n            PeerTag::AlwaysAllow => \"always_allow\",\n            PeerTag::Ban { .. 
} => \"ban\",\n            PeerTag::Custom(str) => str,\n        }\n    }\n}\n\nimpl PartialEq for PeerTag {\n    fn eq(&self, other: &PeerTag) -> bool {\n        self.str() == other.str()\n    }\n}\n\nimpl Eq for PeerTag {}\n\nimpl Hash for PeerTag {\n    fn hash<H: Hasher>(&self, state: &mut H) {\n        self.str().hash(state)\n    }\n}\n\npub trait MessageCodec: Sized + Send + Debug + 'static {\n    fn encode(&mut self) -> ProtocolResult<Bytes>;\n\n    fn decode(bytes: Bytes) -> ProtocolResult<Self>;\n}\n\n#[derive(Debug, Display)]\n#[display(fmt = \"cannot serde encode or decode: {}\", _0)]\nstruct SerdeError(Box<dyn Error + Send>);\n\nimpl Error for SerdeError {}\n\nimpl From<SerdeError> for ProtocolError {\n    fn from(err: SerdeError) -> ProtocolError {\n        ProtocolError::new(ProtocolErrorKind::Network, Box::new(err))\n    }\n}\n\nimpl<T> MessageCodec for T\nwhere\n    T: Serialize + for<'a> Deserialize<'a> + Send + Debug + 'static,\n{\n    fn encode(&mut self) -> ProtocolResult<Bytes> {\n        let bytes = bincode::serialize(self).map_err(|e| SerdeError(Box::new(e)))?;\n\n        Ok(bytes.into())\n    }\n\n    fn decode(bytes: Bytes) -> ProtocolResult<Self> {\n        bincode::deserialize::<T>(&bytes.as_ref()).map_err(|e| SerdeError(Box::new(e)).into())\n    }\n}\n\n#[async_trait]\npub trait Gossip: Send + Sync {\n    async fn broadcast<M>(&self, cx: Context, end: &str, msg: M, p: Priority) -> ProtocolResult<()>\n    where\n        M: MessageCodec;\n\n    async fn multicast<'a, M, P>(\n        &self,\n        cx: Context,\n        end: &str,\n        peer_ids: P,\n        msg: M,\n        p: Priority,\n    ) -> ProtocolResult<()>\n    where\n        M: MessageCodec,\n        P: AsRef<[Bytes]> + Send + 'a;\n}\n\n#[async_trait]\npub trait Rpc: Send + Sync {\n    async fn call<M, R>(&self, ctx: Context, end: &str, msg: M, pri: Priority) -> ProtocolResult<R>\n    where\n        M: MessageCodec,\n        R: MessageCodec;\n\n    async fn response<M>(\n     
   &self,\n        cx: Context,\n        end: &str,\n        ret: ProtocolResult<M>,\n        p: Priority,\n    ) -> ProtocolResult<()>\n    where\n        M: MessageCodec;\n}\n\npub trait Network: Send + Sync {\n    fn tag(&self, ctx: Context, peer_id: Bytes, tag: PeerTag) -> ProtocolResult<()>;\n    fn untag(&self, ctx: Context, peer_id: Bytes, tag: &PeerTag) -> ProtocolResult<()>;\n    fn tag_consensus(&self, ctx: Context, peer_ids: Vec<Bytes>) -> ProtocolResult<()>;\n}\n\npub trait PeerTrust: Send + Sync {\n    fn report(&self, ctx: Context, feedback: TrustFeedback);\n}\n\n#[async_trait]\npub trait MessageHandler: Sync + Send + 'static {\n    type Message: MessageCodec;\n\n    async fn process(&self, ctx: Context, msg: Self::Message) -> TrustFeedback;\n}\n"
  },
  {
    "path": "protocol/src/traits/storage.rs",
    "content": "use async_trait::async_trait;\nuse derive_more::Display;\n\nuse crate::codec::ProtocolCodec;\nuse crate::traits::Context;\nuse crate::types::block::{Block, BlockHeader, Proof};\nuse crate::types::receipt::Receipt;\nuse crate::types::{Hash, SignedTransaction};\nuse crate::ProtocolResult;\n\n#[derive(Debug, Copy, Clone, Display)]\npub enum StorageCategory {\n    Block,\n    BlockHeader,\n    Receipt,\n    SignedTransaction,\n    Wal,\n    HashHeight,\n}\n\npub type StorageIterator<'a, S> = Box<\n    dyn Iterator<Item = ProtocolResult<(<S as StorageSchema>::Key, <S as StorageSchema>::Value)>>\n        + 'a,\n>;\n\npub trait StorageSchema {\n    type Key: ProtocolCodec + Send;\n    type Value: ProtocolCodec + Send;\n\n    fn category() -> StorageCategory;\n}\n\npub trait IntoIteratorByRef<S: StorageSchema> {\n    fn ref_to_iter<'a, 'b: 'a>(&'b self) -> StorageIterator<'a, S>;\n}\n\n#[async_trait]\npub trait CommonStorage: Send + Sync {\n    async fn insert_block(&self, ctx: Context, block: Block) -> ProtocolResult<()>;\n\n    async fn get_block(&self, ctx: Context, height: u64) -> ProtocolResult<Option<Block>>;\n\n    async fn get_block_header(\n        &self,\n        ctx: Context,\n        height: u64,\n    ) -> ProtocolResult<Option<BlockHeader>>;\n\n    async fn set_block(&self, _ctx: Context, block: Block) -> ProtocolResult<()>;\n\n    async fn remove_block(&self, ctx: Context, height: u64) -> ProtocolResult<()>;\n\n    async fn get_latest_block(&self, ctx: Context) -> ProtocolResult<Block>;\n\n    async fn set_latest_block(&self, ctx: Context, block: Block) -> ProtocolResult<()>;\n\n    async fn get_latest_block_header(&self, ctx: Context) -> ProtocolResult<BlockHeader>;\n}\n\n#[async_trait]\npub trait Storage: CommonStorage {\n    async fn insert_transactions(\n        &self,\n        ctx: Context,\n        block_height: u64,\n        signed_txs: Vec<SignedTransaction>,\n    ) -> ProtocolResult<()>;\n\n    async fn get_transactions(\n        
&self,\n        ctx: Context,\n        block_height: u64,\n        hashes: &[Hash],\n    ) -> ProtocolResult<Vec<Option<SignedTransaction>>>;\n\n    async fn get_transaction_by_hash(\n        &self,\n        ctx: Context,\n        hash: &Hash,\n    ) -> ProtocolResult<Option<SignedTransaction>>;\n\n    async fn insert_receipts(\n        &self,\n        ctx: Context,\n        block_height: u64,\n        receipts: Vec<Receipt>,\n    ) -> ProtocolResult<()>;\n\n    async fn get_receipt_by_hash(\n        &self,\n        ctx: Context,\n        hash: Hash,\n    ) -> ProtocolResult<Option<Receipt>>;\n\n    async fn get_receipts(\n        &self,\n        ctx: Context,\n        block_height: u64,\n        hashes: Vec<Hash>,\n    ) -> ProtocolResult<Vec<Option<Receipt>>>;\n\n    async fn update_latest_proof(&self, ctx: Context, proof: Proof) -> ProtocolResult<()>;\n\n    async fn get_latest_proof(&self, ctx: Context) -> ProtocolResult<Proof>;\n}\n\n#[async_trait]\npub trait MaintenanceStorage: CommonStorage {}\n\npub enum StorageBatchModify<S: StorageSchema> {\n    Remove,\n    Insert(<S as StorageSchema>::Value),\n}\n\n#[async_trait]\npub trait StorageAdapter: Send + Sync {\n    async fn insert<S: StorageSchema>(\n        &self,\n        key: <S as StorageSchema>::Key,\n        val: <S as StorageSchema>::Value,\n    ) -> ProtocolResult<()>;\n\n    async fn get<S: StorageSchema>(\n        &self,\n        key: <S as StorageSchema>::Key,\n    ) -> ProtocolResult<Option<<S as StorageSchema>::Value>>;\n\n    async fn get_batch<S: StorageSchema>(\n        &self,\n        keys: Vec<<S as StorageSchema>::Key>,\n    ) -> ProtocolResult<Vec<Option<<S as StorageSchema>::Value>>> {\n        let mut vec = Vec::new();\n\n        for key in keys {\n            vec.push(self.get::<S>(key).await?);\n        }\n\n        Ok(vec)\n    }\n\n    async fn remove<S: StorageSchema>(&self, key: <S as StorageSchema>::Key) -> ProtocolResult<()>;\n\n    async fn contains<S: StorageSchema>(\n        
&self,\n        key: <S as StorageSchema>::Key,\n    ) -> ProtocolResult<bool>;\n\n    async fn batch_modify<S: StorageSchema>(\n        &self,\n        keys: Vec<<S as StorageSchema>::Key>,\n        vals: Vec<StorageBatchModify<S>>,\n    ) -> ProtocolResult<()>;\n\n    fn prepare_iter<'a, 'b: 'a, S: StorageSchema + 'static, P: AsRef<[u8]> + 'a>(\n        &'b self,\n        prefix: &'a P,\n    ) -> ProtocolResult<Box<dyn IntoIteratorByRef<S> + 'a>>;\n}\n"
  },
  {
    "path": "protocol/src/types/block.rs",
    "content": "use bytes::Bytes;\nuse derive_more::Display;\nuse muta_codec_derive::RlpFixedCodec;\nuse serde::{Deserialize, Serialize};\n\nuse crate::fixed_codec::{FixedCodec, FixedCodecError};\nuse crate::types::{Address, Hash, MerkleRoot};\nuse crate::ProtocolResult;\n\n#[derive(RlpFixedCodec, Clone, Debug, PartialEq, Eq, Deserialize, Serialize)]\npub struct Block {\n    pub header:            BlockHeader,\n    pub ordered_tx_hashes: Vec<Hash>,\n}\n\n#[derive(RlpFixedCodec, Clone, Debug, Display, PartialEq, Eq, Deserialize, Serialize)]\n#[display(\n    fmt = \"chain id {:?}, height {}, exec height {}, previous hash {:?},\n    ordered root {:?}, order_signed_transactions_hash {:?}, confirm root {:?}, state root {:?},\n    receipt root {:?},cycles_used {:?}, proposer {:?}, proof {:?}, validators {:?}\",\n    chain_id,\n    height,\n    exec_height,\n    prev_hash,\n    order_root,\n    order_signed_transactions_hash,\n    confirm_root,\n    state_root,\n    receipt_root,\n    cycles_used,\n    proposer,\n    proof,\n    validators\n)]\npub struct BlockHeader {\n    pub chain_id:                       Hash,\n    pub height:                         u64,\n    pub exec_height:                    u64,\n    pub prev_hash:                      Hash,\n    pub timestamp:                      u64,\n    pub order_root:                     MerkleRoot,\n    pub order_signed_transactions_hash: Hash,\n    pub confirm_root:                   Vec<MerkleRoot>,\n    pub state_root:                     MerkleRoot,\n    pub receipt_root:                   Vec<MerkleRoot>,\n    pub cycles_used:                    Vec<u64>,\n    pub proposer:                       Address,\n    pub proof:                          Proof,\n    pub validator_version:              u64,\n    pub validators:                     Vec<Validator>,\n}\n\n#[derive(RlpFixedCodec, Serialize, Deserialize, Clone, Debug, Hash, PartialEq, Eq)]\npub struct Proof {\n    pub height:     u64,\n    pub round:      u64,\n    
pub block_hash: Hash,\n    pub signature:  Bytes,\n    pub bitmap:     Bytes,\n}\n\n#[derive(RlpFixedCodec, Serialize, Deserialize, Clone, Debug, PartialEq, Eq)]\npub struct Validator {\n    pub pub_key:        Bytes,\n    pub propose_weight: u32,\n    pub vote_weight:    u32,\n}\n\n#[derive(RlpFixedCodec, Clone, Debug, PartialEq, Eq)]\npub struct Pill {\n    pub block:          Block,\n    pub propose_hashes: Vec<Hash>,\n}\n"
  },
  {
    "path": "protocol/src/types/genesis.rs",
    "content": "use bytes::Bytes;\nuse muta_codec_derive::RlpFixedCodec;\nuse serde::Deserialize;\n\nuse crate::fixed_codec::{FixedCodec, FixedCodecError};\nuse crate::{types::primitive::Hex, ProtocolResult};\n\n#[derive(RlpFixedCodec, Clone, Debug, Deserialize, PartialEq, Eq)]\npub struct Genesis {\n    pub timestamp: u64,\n    pub prevhash:  Hex,\n    pub services:  Vec<ServiceParam>,\n}\n\nimpl Genesis {\n    pub fn get_payload(&self, name: &str) -> &str {\n        &self\n            .services\n            .iter()\n            .find(|&service| service.name == name)\n            .unwrap_or_else(|| panic!(\"miss {:?} service!\", name))\n            .payload\n    }\n}\n\n#[derive(RlpFixedCodec, Clone, Debug, Deserialize, PartialEq, Eq)]\npub struct ServiceParam {\n    pub name:    String,\n    pub payload: String,\n}\n"
  },
  {
    "path": "protocol/src/types/mod.rs",
    "content": "pub(crate) mod block;\npub(crate) mod genesis;\npub(crate) mod primitive;\npub(crate) mod receipt;\npub(crate) mod service_context;\npub(crate) mod transaction;\n\nuse std::error::Error;\n\nuse derive_more::{Display, From};\n\nuse crate::{ProtocolError, ProtocolErrorKind};\n\npub use block::{Block, BlockHeader, Pill, Proof, Validator};\npub use bytes::{Bytes, BytesMut};\npub use genesis::{Genesis, ServiceParam};\npub use primitive::{\n    address_hrp, address_hrp_inited, init_address_hrp, Address, Hash, Hex, JsonString, MerkleRoot,\n    Metadata, ValidatorExtend, GENESIS_HEIGHT, METADATA_KEY,\n};\npub use receipt::{Event, Receipt, ReceiptResponse};\npub use service_context::{ServiceContext, ServiceContextError, ServiceContextParams};\npub use transaction::{RawTransaction, SignedTransaction, TransactionRequest};\n\n#[derive(Debug, Display, From)]\npub enum TypesError {\n    #[display(fmt = \"Expect {:?}, get {:?}.\", expect, real)]\n    LengthMismatch { expect: usize, real: usize },\n\n    #[display(fmt = \"{:?}\", error)]\n    FromHex { error: hex::FromHexError },\n\n    #[display(fmt = \"{:?} is an invalid address\", address)]\n    InvalidAddress { address: String },\n\n    #[display(fmt = \"{}\", error)]\n    Bech32 { error: bech32::Error },\n\n    #[display(fmt = \"Hex should start with 0x\")]\n    HexPrefix,\n\n    #[display(fmt = \"Invalid public key\")]\n    InvalidPublicKey,\n}\n\nimpl Error for TypesError {}\n\nimpl From<TypesError> for ProtocolError {\n    fn from(error: TypesError) -> ProtocolError {\n        ProtocolError::new(ProtocolErrorKind::Types, Box::new(error))\n    }\n}\n"
  },
  {
    "path": "protocol/src/types/primitive.rs",
    "content": "use std::convert::TryFrom;\nuse std::fmt;\nuse std::str::FromStr;\nuse std::sync::atomic::{AtomicBool, Ordering};\nuse std::sync::Arc;\n\nuse arc_swap::ArcSwap;\nuse bech32::{self, FromBase32, ToBase32};\nuse bytes::Bytes;\nuse hasher::{Hasher, HasherKeccak};\nuse lazy_static::lazy_static;\nuse muta_codec_derive::RlpFixedCodec;\nuse ophelia::{PublicKey, UncompressedPublicKey};\nuse ophelia_secp256k1::Secp256k1PublicKey;\nuse serde::de;\nuse serde::{Deserialize, Serialize};\nuse serde_json::Value;\nuse smol_str::SmolStr;\n\nuse crate::fixed_codec::{FixedCodec, FixedCodecError};\nuse crate::types::TypesError;\nuse crate::ProtocolResult;\n\npub const METADATA_KEY: &str = \"metadata\";\n\nlazy_static! {\n    static ref HASHER_INST: HasherKeccak = HasherKeccak::new();\n    static ref ADDRESS_HRP: ArcSwap<SmolStr> = ArcSwap::from(Arc::new(\"muta\".into()));\n    static ref ADDRESS_HRP_INITED: AtomicBool = AtomicBool::new(false);\n}\n\npub fn address_hrp() -> SmolStr {\n    ADDRESS_HRP.load().as_ref().clone()\n}\n\npub fn init_address_hrp(address_hrp: SmolStr) {\n    if ADDRESS_HRP_INITED.load(Ordering::SeqCst) {\n        panic!(\"address hrp can only be inited once\");\n    }\n    if address_hrp.is_heap_allocated() {\n        log::warn!(\"address hrp too long\");\n    }\n\n    // Verify address hrp\n    let hash = HASHER_INST.digest(b\"hello muta\");\n    assert_eq!(hash.len(), 32);\n\n    let bytes = &hash[12..];\n    assert_eq!(bytes.len(), 20);\n\n    bech32::encode(&address_hrp, bytes.to_base32()).expect(\"invalid address hrp\");\n\n    // Set address hrp\n    ADDRESS_HRP.store(Arc::new(address_hrp));\n    ADDRESS_HRP_INITED.store(true, Ordering::SeqCst);\n}\n\npub fn address_hrp_inited() -> bool {\n    ADDRESS_HRP_INITED.load(Ordering::SeqCst)\n}\n\n/// The height of the genesis block.\npub const GENESIS_HEIGHT: u64 = 0;\n\n/// Hash length\nconst HASH_LEN: usize = 32;\n\n// Should started with 0x\n#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, 
Ord)]\npub struct Hex(String);\n\nimpl Hex {\n    pub fn from_string(s: String) -> ProtocolResult<Self> {\n        if (!s.starts_with(\"0x\") && !s.starts_with(\"0X\")) || s.len() < 3 {\n            return Err(TypesError::HexPrefix.into());\n        }\n\n        hex::decode(&s[2..]).map_err(|error| TypesError::FromHex { error })?;\n        Ok(Hex(s))\n    }\n\n    pub fn as_string(&self) -> String {\n        self.0.to_owned()\n    }\n\n    pub fn as_string_trim0x(&self) -> String {\n        (&self.0[2..]).to_owned()\n    }\n\n    pub fn decode(&self) -> Bytes {\n        Bytes::from(hex::decode(&self.0[2..]).expect(\"impossible, already checked in from_string\"))\n    }\n}\n\nimpl Default for Hex {\n    fn default() -> Self {\n        Hex::from_string(\"0x1\".to_owned()).expect(\"Hex must start with 0x\")\n    }\n}\n\nimpl Serialize for Hex {\n    fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>\n    where\n        S: serde::ser::Serializer,\n    {\n        serializer.serialize_str(&self.0)\n    }\n}\n\nstruct HexVisitor;\n\nimpl<'de> de::Visitor<'de> for HexVisitor {\n    type Value = Hex;\n\n    fn expecting(&self, formatter: &mut fmt::Formatter) -> fmt::Result {\n        formatter.write_str(\"Expect a hex string\")\n    }\n\n    fn visit_string<E>(self, v: String) -> Result<Self::Value, E>\n    where\n        E: de::Error,\n    {\n        Hex::from_string(v).map_err(|e| de::Error::custom(e.to_string()))\n    }\n\n    fn visit_str<E>(self, v: &str) -> Result<Self::Value, E>\n    where\n        E: de::Error,\n    {\n        Hex::from_string(v.to_owned()).map_err(|e| de::Error::custom(e.to_string()))\n    }\n}\n\nimpl<'de> Deserialize<'de> for Hex {\n    fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>\n    where\n        D: de::Deserializer<'de>,\n    {\n        deserializer.deserialize_string(HexVisitor)\n    }\n}\n\n#[derive(RlpFixedCodec, Clone, PartialEq, Eq, Hash, PartialOrd, Ord)]\npub struct Hash(Bytes);\n
/// Merkle root hash\npub type MerkleRoot = Hash;\n/// JSON string\npub type JsonString = String;\n\nimpl Serialize for Hash {\n    fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>\n    where\n        S: serde::ser::Serializer,\n    {\n        serializer.serialize_str(&self.as_hex())\n    }\n}\n\nstruct HashVisitor;\n\nimpl<'de> de::Visitor<'de> for HashVisitor {\n    type Value = Hash;\n\n    fn expecting(&self, formatter: &mut fmt::Formatter) -> fmt::Result {\n        formatter.write_str(\"Expect a hex string\")\n    }\n\n    fn visit_string<E>(self, v: String) -> Result<Self::Value, E>\n    where\n        E: de::Error,\n    {\n        Hash::from_hex(&v).map_err(|e| de::Error::custom(e.to_string()))\n    }\n\n    fn visit_str<E>(self, v: &str) -> Result<Self::Value, E>\n    where\n        E: de::Error,\n    {\n        Hash::from_hex(&v).map_err(|e| de::Error::custom(e.to_string()))\n    }\n}\n\nimpl<'de> Deserialize<'de> for Hash {\n    fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>\n    where\n        D: de::Deserializer<'de>,\n    {\n        deserializer.deserialize_string(HashVisitor)\n    }\n}\n\nimpl Hash {\n    /// Enter an array of bytes to get a 32-byte hash.\n    /// Note: Keccak is used for the time being and may be replaced with other\n    /// hashing algorithms later.\n    pub fn digest<B: AsRef<[u8]>>(bytes: B) -> Self {\n        let out = HASHER_INST.digest(bytes.as_ref());\n        Self(Bytes::from(out))\n    }\n\n    pub fn from_empty() -> Self {\n        let out = HASHER_INST.digest(&rlp::NULL_RLP);\n        Self(Bytes::from(out))\n    }\n\n    /// Converts the byte array to a Hash type.\n    /// Note: if you want to compute the hash value of the byte array, you\n    /// should call `fn digest`.\n    pub fn from_bytes(bytes: Bytes) -> ProtocolResult<Self> {\n        ensure_len(bytes.len(), HASH_LEN)?;\n\n        Ok(Self(bytes))\n    }\n\n    pub fn from_hex(s: &str) -> ProtocolResult<Self> {\n        let s = clean_0x(s)?;\n        let bytes = 
hex::decode(s).map_err(TypesError::from)?;\n\n        let bytes = Bytes::from(bytes);\n        Self::from_bytes(bytes)\n    }\n\n    pub fn as_bytes(&self) -> Bytes {\n        self.0.clone()\n    }\n\n    pub fn as_slice(&self) -> &[u8] {\n        &self.0\n    }\n\n    pub fn as_hex(&self) -> String {\n        \"0x\".to_owned() + &hex::encode(self.0.clone())\n    }\n\n    /// Used for byzantine test\n    pub fn from_invalid_bytes(bytes: Bytes) -> Self {\n        Self(bytes)\n    }\n}\n\nimpl Default for Hash {\n    fn default() -> Self {\n        Hash::from_empty()\n    }\n}\n\nimpl fmt::Debug for Hash {\n    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {\n        write!(f, \"{}\", self.as_hex())\n    }\n}\n\n/// Address length.\nconst ADDRESS_LEN: usize = 20;\n\n#[derive(RlpFixedCodec, Clone, PartialEq, Eq, Hash, PartialOrd, Ord)]\npub struct Address(Bytes);\n\nimpl Default for Address {\n    fn default() -> Self {\n        Address::from_hex(\"0x0000000000000000000000000000000000000000\")\n            .expect(\"Address must consist of 20 bytes\")\n    }\n}\n\nimpl Serialize for Address {\n    fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>\n    where\n        S: serde::ser::Serializer,\n    {\n        serializer.serialize_str(&self.to_string())\n    }\n}\n\nstruct AddressVisitor;\n\nimpl<'de> de::Visitor<'de> for AddressVisitor {\n    type Value = Address;\n\n    fn expecting(&self, formatter: &mut fmt::Formatter) -> fmt::Result {\n        formatter.write_str(\"Expect a bech32 string\")\n    }\n\n    fn visit_string<E>(self, v: String) -> Result<Self::Value, E>\n    where\n        E: de::Error,\n    {\n        Address::from_str(&v).map_err(|e| de::Error::custom(e.to_string()))\n    }\n\n    fn visit_str<E>(self, v: &str) -> Result<Self::Value, E>\n    where\n        E: de::Error,\n    {\n        Address::from_str(&v).map_err(|e| de::Error::custom(e.to_string()))\n    }\n}\n\nimpl<'de> Deserialize<'de> for Address {\n    fn 
deserialize<D>(deserializer: D) -> Result<Self, D::Error>\n    where\n        D: de::Deserializer<'de>,\n    {\n        deserializer.deserialize_string(AddressVisitor)\n    }\n}\n\nimpl Address {\n    pub fn from_pubkey_bytes<B: AsRef<[u8]>>(bytes: B) -> ProtocolResult<Self> {\n        let compressed_pubkey_len = <Secp256k1PublicKey as PublicKey>::LENGTH;\n        let uncompressed_pubkey_len = <Secp256k1PublicKey as UncompressedPublicKey>::LENGTH;\n\n        let slice = bytes.as_ref();\n        if slice.len() != compressed_pubkey_len && slice.len() != uncompressed_pubkey_len {\n            return Err(TypesError::InvalidPublicKey.into());\n        }\n\n        // Drop first byte\n        let hash = {\n            if slice.len() == compressed_pubkey_len {\n                let pubkey = Secp256k1PublicKey::try_from(slice)\n                    .map_err(|_| TypesError::InvalidPublicKey)?;\n                Hash::digest(&(pubkey.to_uncompressed_bytes())[1..])\n            } else {\n                Hash::digest(&slice[1..])\n            }\n        };\n\n        Self::from_hash(hash)\n    }\n\n    pub fn from_hash(hash: Hash) -> ProtocolResult<Self> {\n        let hash_val = hash.as_slice();\n        ensure_len(hash_val.len(), HASH_LEN)?;\n\n        Self::from_bytes(Bytes::copy_from_slice(&hash_val[12..]))\n    }\n\n    pub fn from_bytes(bytes: Bytes) -> ProtocolResult<Self> {\n        ensure_len(bytes.len(), ADDRESS_LEN)?;\n\n        Ok(Self(bytes))\n    }\n\n    pub fn as_bytes(&self) -> Bytes {\n        self.0.clone()\n    }\n\n    pub fn as_slice(&self) -> &[u8] {\n        &self.0\n    }\n\n    pub fn from_hex(s: &str) -> ProtocolResult<Self> {\n        let s = clean_0x(s)?;\n        let bytes = hex::decode(s).map_err(TypesError::from)?;\n\n        let bytes = Bytes::from(bytes);\n        Self::from_bytes(bytes)\n    }\n\n    /// Used for byzantine test\n    pub fn from_invalid_bytes(bytes: Bytes) -> Self {\n        Self(bytes)\n    }\n}\n\nimpl FromStr for Address {\n   
 type Err = TypesError;\n\n    fn from_str(s: &str) -> Result<Self, Self::Err> {\n        let (hrp, data) = bech32::decode(s).map_err(TypesError::from)?;\n        if hrp != address_hrp() {\n            return Err(TypesError::InvalidAddress {\n                address: s.to_owned(),\n            });\n        }\n\n        let bytes = Vec::<u8>::from_base32(&data).map_err(TypesError::from)?;\n        Ok(Address(Bytes::from(bytes)))\n    }\n}\n\nimpl fmt::Debug for Address {\n    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {\n        // NOTE: ADDRESS_HRP was verified in init_address_hrp fn\n        bech32::encode_to_fmt(f, address_hrp().as_ref(), &self.0.to_base32()).unwrap()\n    }\n}\n\nimpl fmt::Display for Address {\n    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {\n        // NOTE: ADDRESS_HRP was verified in init_address_hrp fn\n        bech32::encode_to_fmt(f, address_hrp().as_ref(), &self.0.to_base32()).unwrap()\n    }\n}\n\n#[derive(RlpFixedCodec, Deserialize, Default, Serialize, Clone, Debug, PartialEq, Eq)]\npub struct Metadata {\n    pub chain_id:           Hash,\n    pub bech32_address_hrp: String,\n    pub common_ref:         Hex,\n    pub timeout_gap:        u64,\n    pub cycles_limit:       u64,\n    pub cycles_price:       u64,\n    pub interval:           u64,\n    pub verifier_list:      Vec<ValidatorExtend>,\n    pub propose_ratio:      u64,\n    pub prevote_ratio:      u64,\n    pub precommit_ratio:    u64,\n    pub brake_ratio:        u64,\n    pub tx_num_limit:       u64,\n    pub max_tx_size:        u64,\n}\n\nimpl Metadata {\n    pub fn get_hrp_from_json(payload: String) -> String {\n        let nodes: Value = serde_json::from_str(payload.as_str())\n            .expect(\"metadata's genesis payload is invalid JSON\");\n        nodes[\"bech32_address_hrp\"]\n            .as_str()\n            .expect(\"bech32_address_hrp in genesis payload is not string?\")\n            .to_string()\n    }\n}\n\n#[derive(RlpFixedCodec, 
 Serialize, Deserialize, Clone, PartialEq, Eq, Default)]\npub struct ValidatorExtend {\n    pub bls_pub_key:    Hex,\n    pub pub_key:        Hex,\n    pub address:        Address,\n    pub propose_weight: u32,\n    pub vote_weight:    u32,\n}\n\nimpl fmt::Debug for ValidatorExtend {\n    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {\n        let bls_pub_key = self.bls_pub_key.as_string_trim0x();\n        let pk = if bls_pub_key.len() > 8 {\n            unsafe { bls_pub_key.get_unchecked(0..8) }\n        } else {\n            bls_pub_key.as_str()\n        };\n\n        write!(\n            f,\n            \"bls public key {:?}, public key {:?}, address {:?}, propose weight {}, vote weight {}\",\n            pk, self.pub_key, self.address, self.propose_weight, self.vote_weight\n        )\n    }\n}\n\nfn clean_0x(s: &str) -> ProtocolResult<&str> {\n    if s.starts_with(\"0x\") || s.starts_with(\"0X\") {\n        Ok(&s[2..])\n    } else {\n        Err(TypesError::HexPrefix.into())\n    }\n}\n\nfn ensure_len(real: usize, expect: usize) -> ProtocolResult<()> {\n    if real != expect {\n        Err(TypesError::LengthMismatch { expect, real }.into())\n    } else {\n        Ok(())\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use bech32::{self, FromBase32};\n    use bytes::Bytes;\n\n    use super::{address_hrp, init_address_hrp, Address, Hash, ValidatorExtend};\n    use crate::types::Metadata;\n    use crate::{fixed_codec::FixedCodec, types::Hex};\n\n    #[test]\n    fn test_hash() {\n        let hash = Hash::digest(Bytes::from(\"xxxxxx\"));\n\n        let bytes = hash.as_bytes();\n        Hash::from_bytes(bytes).unwrap();\n    }\n\n    #[test]\n    fn test_from_hex() {\n        let address_hex = \"0x755cdba6ae4f479f7164792b318b2a06c759833b\";\n        let address_bech32 = \"muta1w4wdhf4wfare7uty0y4nrze2qmr4nqem9j7teu\";\n        let address = Address::from_hex(address_hex).unwrap();\n        assert_eq!(address.to_string(), address_bech32);\n    }\n\n    #[test]\n    
fn test_from_pubkey_bytes() {\n        let pubkey = \"02ef0cb0d7bc6c18b4bea1f5908d9106522b35ab3c399369605d4242525bda7e60\";\n        let expect_addr = \"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\";\n\n        let pubkey_bytes = Bytes::from(hex::decode(pubkey).unwrap());\n        let addr = Address::from_pubkey_bytes(pubkey_bytes).unwrap();\n\n        assert_eq!(addr.to_string(), expect_addr);\n    }\n\n    #[test]\n    fn test_address() {\n        let add_str = \"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\";\n        let (_, data) = bech32::decode(add_str).unwrap();\n        let bytes = Bytes::from(Vec::<u8>::from_base32(&data).unwrap());\n\n        let address = Address::from_bytes(bytes).unwrap();\n        assert_eq!(add_str, &address.to_string());\n    }\n\n    #[test]\n    fn test_hex() {\n        let hex_str = \"0x112233445566AABBcc\";\n        let hex = Hex::from_string(hex_str.to_owned()).unwrap();\n\n        assert_eq!(hex_str, hex.0.as_str());\n    }\n\n    #[test]\n    fn test_validator_extend() {\n        let extend = ValidatorExtend {\n           bls_pub_key: Hex::from_string(\"0x04102947214862a503c73904deb5818298a186d68c7907bb609583192a7de6331493835e5b8281f4d9ee705537c0e765580e06f86ddce5867812fceb42eecefd209f0eddd0389d6b7b0100f00fb119ef9ab23826c6ea09aadcc76fa6cea6a32724\".to_owned()).unwrap(),\n           pub_key: Hex::from_string(\"0x02ef0cb0d7bc6c18b4bea1f5908d9106522b35ab3c399369605d4242525bda7e60\".to_owned()).unwrap(),\n           address: \"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\".parse().unwrap(),\n           propose_weight: 1,\n           vote_weight:    1,\n       };\n\n        let decoded = ValidatorExtend::decode_fixed(extend.encode_fixed().unwrap()).unwrap();\n        assert_eq!(decoded, extend);\n    }\n\n    // Note: All tests run in same process, change ADDRESS_HRP affects other tests\n    #[test]\n    #[should_panic(expected = \"must set hrp before deserialization\")]\n    fn test_init_address_hrp() {\n        
assert_eq!(address_hrp(), \"muta\", \"default value\");\n\n        let metadata_payload = r#\"\n        {\n            \"chain_id\": \"0xb6a4d7da21443f5e816e8700eea87610e6d769657d6b8ec73028457bf2ca4036\",\n            \"bech32_address_hrp\": \"ham\",\n            \"common_ref\": \"0x6c747758636859487038\",\n            \"timeout_gap\": 20,\n            \"cycles_limit\": 4294967295,\n            \"cycles_price\": 1,\n            \"interval\": 3000,\n            \"verifier_list\": [\n               {\n                   \"bls_pub_key\": \"0x04102947214862a503c73904deb5818298a186d68c7907bb609583192a7de6331493835e5b8281f4d9ee705537c0e765580e06f86ddce5867812fceb42eecefd209f0eddd0389d6b7b0100f00fb119ef9ab23826c6ea09aadcc76fa6cea6a32724\",\n                   \"pub_key\": \"0x02ef0cb0d7bc6c18b4bea1f5908d9106522b35ab3c399369605d4242525bda7e60\",\n                   \"address\": \"ham14e0lmgck835vm2dfm0w3ckv6svmez8fdmq5fts\",\n                   \"propose_weight\": 1,\n                   \"vote_weight\": 1\n               }\n            ],\n            \"propose_ratio\": 15,\n            \"prevote_ratio\": 10,\n            \"precommit_ratio\": 10,\n            \"brake_ratio\": 7,\n            \"tx_num_limit\": 20000,\n            \"max_tx_size\": 1024\n        }\n        \"#;\n\n        let hrp = Metadata::get_hrp_from_json(metadata_payload.to_string());\n\n        assert_eq!(\"ham\".to_string(), hrp, \"should be same\");\n\n        // this should fail because we did not set hrp to ham like\n        // init_address_hrp(hrp);\n        serde_json::from_str::<Metadata>(metadata_payload)\n            .expect(\"must set hrp before deserialization\");\n    }\n\n    #[test]\n    #[should_panic(expected = \"address hrp can only be inited once\")]\n    fn test_init_address_hrp_twice() {\n        init_address_hrp(\"muta\".into());\n        init_address_hrp(\"muta\".into());\n    }\n}\n"
  },
  {
    "path": "protocol/src/types/receipt.rs",
    "content": "use bytes::Bytes;\nuse muta_codec_derive::RlpFixedCodec;\n\nuse crate::fixed_codec::{FixedCodec, FixedCodecError};\nuse crate::types::{Hash, MerkleRoot};\nuse crate::{traits::ServiceResponse, ProtocolResult};\n\n#[derive(RlpFixedCodec, Debug, Clone, PartialEq, Eq)]\npub struct Event {\n    pub service: String,\n    pub name:    String,\n    pub data:    String,\n}\n\n#[derive(RlpFixedCodec, Clone, Debug, PartialEq, Eq)]\npub struct Receipt {\n    pub state_root:  MerkleRoot,\n    pub height:      u64,\n    pub tx_hash:     Hash,\n    pub cycles_used: u64,\n    pub events:      Vec<Event>,\n    pub response:    ReceiptResponse,\n}\n\n#[derive(Clone, Debug, PartialEq, Eq)]\npub struct ReceiptResponse {\n    pub service_name: String,\n    pub method:       String,\n    pub response:     ServiceResponse<String>,\n}\n"
  },
  {
    "path": "protocol/src/types/service_context.rs",
    "content": "use std::cell::RefCell;\nuse std::rc::Rc;\n\nuse bytes::Bytes;\nuse derive_more::{Display, From};\n\nuse crate::types::{Address, Event, Hash};\nuse crate::{ProtocolError, ProtocolErrorKind};\n\n#[derive(Debug, Clone)]\npub struct ServiceContextParams {\n    pub tx_hash:         Option<Hash>,\n    pub nonce:           Option<Hash>,\n    pub cycles_limit:    u64,\n    pub cycles_price:    u64,\n    pub cycles_used:     Rc<RefCell<u64>>,\n    pub caller:          Address,\n    pub height:          u64,\n    pub service_name:    String,\n    pub service_method:  String,\n    pub service_payload: String,\n    pub extra:           Option<Bytes>,\n    pub timestamp:       u64,\n    pub events:          Rc<RefCell<Vec<Event>>>,\n}\n\npub type Reason = String;\n\n#[derive(Debug, Clone, PartialEq)]\npub struct ServiceContext {\n    tx_hash:         Option<Hash>,\n    nonce:           Option<Hash>,\n    cycles_limit:    u64,\n    cycles_price:    u64,\n    cycles_used:     Rc<RefCell<u64>>,\n    caller:          Address,\n    height:          u64,\n    service_name:    String,\n    service_method:  String,\n    service_payload: String,\n    extra:           Option<Bytes>,\n    timestamp:       u64,\n    events:          Rc<RefCell<Vec<Event>>>,\n    canceled:        Rc<RefCell<Option<Reason>>>,\n}\n\nimpl ServiceContext {\n    pub fn new(params: ServiceContextParams) -> Self {\n        Self {\n            tx_hash:         params.tx_hash,\n            nonce:           params.nonce,\n            cycles_limit:    params.cycles_limit,\n            cycles_price:    params.cycles_price,\n            cycles_used:     params.cycles_used,\n            caller:          params.caller,\n            height:          params.height,\n            service_name:    params.service_name,\n            service_method:  params.service_method,\n            service_payload: params.service_payload,\n            extra:           params.extra,\n            timestamp:       
params.timestamp,\n            events:          params.events,\n            canceled:        Rc::new(RefCell::new(None)),\n        }\n    }\n\n    pub fn with_context(\n        context: &ServiceContext,\n        extra: Option<Bytes>,\n        service_name: String,\n        service_method: String,\n        service_payload: String,\n    ) -> Self {\n        Self {\n            tx_hash: context.tx_hash.clone(),\n            nonce: context.nonce.clone(),\n            cycles_limit: context.cycles_limit,\n            cycles_price: context.cycles_price,\n            cycles_used: Rc::clone(&context.cycles_used),\n            caller: context.caller.clone(),\n            height: context.height,\n            service_name,\n            service_method,\n            service_payload,\n            extra,\n            timestamp: context.get_timestamp(),\n            events: Rc::clone(&context.events),\n            canceled: Rc::clone(&context.canceled),\n        }\n    }\n\n    pub fn get_tx_hash(&self) -> Option<Hash> {\n        self.tx_hash.clone()\n    }\n\n    pub fn get_nonce(&self) -> Option<Hash> {\n        self.nonce.clone()\n    }\n\n    pub fn get_events(&self) -> Vec<Event> {\n        self.events.borrow().clone()\n    }\n\n    pub fn sub_cycles(&self, cycles: u64) -> bool {\n        if self.get_cycles_used() + cycles <= self.cycles_limit {\n            *self.cycles_used.borrow_mut() = self.get_cycles_used() + cycles;\n            true\n        } else {\n            false\n        }\n    }\n\n    pub fn get_cycles_price(&self) -> u64 {\n        self.cycles_price\n    }\n\n    pub fn get_cycles_limit(&self) -> u64 {\n        self.cycles_limit\n    }\n\n    pub fn get_cycles_used(&self) -> u64 {\n        *self.cycles_used.borrow()\n    }\n\n    pub fn get_caller(&self) -> Address {\n        self.caller.clone()\n    }\n\n    pub fn get_current_height(&self) -> u64 {\n        self.height\n    }\n\n    pub fn get_service_name(&self) -> &str {\n        &self.service_name\n    
}\n\n    pub fn get_service_method(&self) -> &str {\n        &self.service_method\n    }\n\n    pub fn get_payload(&self) -> &str {\n        &self.service_payload\n    }\n\n    pub fn get_extra(&self) -> Option<Bytes> {\n        self.extra.clone()\n    }\n\n    pub fn get_timestamp(&self) -> u64 {\n        self.timestamp\n    }\n\n    pub fn canceled(&self) -> bool {\n        self.canceled.borrow().is_some()\n    }\n\n    pub fn cancel_reason(&self) -> Option<Reason> {\n        self.canceled.borrow().to_owned()\n    }\n\n    pub fn cancel(&self, reason: String) {\n        *self.canceled.borrow_mut() = Some(reason);\n    }\n\n    pub fn emit_event(&self, service: String, name: String, message: String) {\n        self.events.borrow_mut().push(Event {\n            service,\n            name,\n            data: message,\n        })\n    }\n}\n\n#[derive(Debug, Display, From)]\npub enum ServiceContextError {\n    #[display(fmt = \"out of cycles\")]\n    OutOfCycles,\n}\n\nimpl std::error::Error for ServiceContextError {}\n\nimpl From<ServiceContextError> for ProtocolError {\n    fn from(err: ServiceContextError) -> ProtocolError {\n        ProtocolError::new(ProtocolErrorKind::Service, Box::new(err))\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use std::cell::RefCell;\n    use std::rc::Rc;\n\n    use super::{ServiceContext, ServiceContextParams};\n    use crate::types::{Address, Hash};\n\n    #[test]\n    fn test_request_context() {\n        let params = ServiceContextParams {\n            tx_hash:         None,\n            nonce:           None,\n            cycles_limit:    100,\n            cycles_price:    8,\n            cycles_used:     Rc::new(RefCell::new(10)),\n            caller:          Address::from_hash(Hash::from_empty()).unwrap(),\n            height:          1,\n            timestamp:       0,\n            service_name:    \"service_name\".to_owned(),\n            service_method:  \"service_method\".to_owned(),\n            service_payload: 
\"service_payload\".to_owned(),\n            extra:           None,\n            events:          Rc::new(RefCell::new(vec![])),\n        };\n        let ctx = ServiceContext::new(params);\n\n        ctx.sub_cycles(8);\n        assert_eq!(ctx.get_cycles_used(), 18);\n\n        assert_eq!(ctx.get_cycles_limit(), 100);\n        assert_eq!(ctx.get_cycles_price(), 8);\n        assert_eq!(\n            ctx.get_caller(),\n            Address::from_hash(Hash::from_empty()).unwrap()\n        );\n        assert_eq!(ctx.get_current_height(), 1);\n        assert_eq!(ctx.get_timestamp(), 0);\n        assert_eq!(ctx.get_service_name(), \"service_name\");\n        assert_eq!(ctx.get_service_method(), \"service_method\");\n        assert_eq!(ctx.get_payload(), \"service_payload\");\n\n        let bro = ctx.clone();\n        let reason = \"hurry up, bus is about to leave\".to_owned();\n\n        ctx.cancel(reason.clone());\n        assert!(ctx.canceled());\n        assert!(bro.canceled());\n        assert_eq!(bro.cancel_reason(), Some(reason));\n    }\n}\n"
  },
  {
    "path": "protocol/src/types/transaction.rs",
    "content": "use bytes::Bytes;\nuse muta_codec_derive::RlpFixedCodec;\nuse serde::{Deserialize, Serialize};\n\nuse crate::fixed_codec::{FixedCodec, FixedCodecError};\nuse crate::types::primitive::{Address, Hash, JsonString};\nuse crate::ProtocolResult;\n\n#[derive(Deserialize, Serialize, Clone, Debug, PartialEq, Eq)]\npub struct RawTransaction {\n    pub chain_id:     Hash,\n    pub cycles_price: u64,\n    pub cycles_limit: u64,\n    pub nonce:        Hash,\n    pub request:      TransactionRequest,\n    pub timeout:      u64,\n    pub sender:       Address,\n}\n\n#[derive(RlpFixedCodec, Deserialize, Serialize, Clone, Debug, PartialEq, Eq)]\npub struct TransactionRequest {\n    pub method:       String,\n    pub service_name: String,\n    pub payload:      JsonString,\n}\n\n#[derive(RlpFixedCodec, Deserialize, Serialize, Clone, Debug, PartialEq, Eq)]\npub struct SignedTransaction {\n    pub raw:       RawTransaction,\n    pub tx_hash:   Hash,\n    pub pubkey:    Bytes,\n    pub signature: Bytes,\n}\n"
  },
  {
    "path": "rust-toolchain",
    "content": "nightly-2020-09-20\n"
  },
  {
    "path": "rustfmt.toml",
    "content": "# Convert /* */ comments to // comments where possible\n#\n# Default value: false\n# Possible values: true, false\n# Stable: No (tracking issue: #3350)\n# false (default):\n# // Lorem ipsum:\n# fn dolor() -> usize {}\n#\n# /* sit amet: */\n# fn adipiscing() -> usize {}\n# true:\n# // Lorem ipsum:\n# fn dolor() -> usize {}\n#\n# // sit amet:\n# fn adipiscing() -> usize {}\nnormalize_comments = true\n# Reorder impl items. type and const are put first, then macros and methods.\n#\n# Default value: false\n# Possible values: true, false\n# Stable: No (tracking issue: #3363)\n# false (default)\n# struct Dummy;\n#\n# impl Iterator for Dummy {\n#     fn next(&mut self) -> Option<Self::Item> {\n#         None\n#     }\n#\n#     type Item = i32;\n# }\n# true\n# struct Dummy;\n#\n# impl Iterator for Dummy {\n#     type Item = i32;\n#\n#     fn next(&mut self) -> Option<Self::Item> {\n#         None\n#     }\n# }\nreorder_impl_items = true\n# The maximum diff of width between struct fields to be aligned with each other.\n#\n# Default value : 0\n# Possible values: any non-negative integer\n# Stable: No (tracking issue: #3371)\n# 0 (default):\n# struct Foo {\n#     x: u32,\n#     yy: u32,\n#     zzz: u32,\n# }\n# 20:\n# struct Foo {\n#     x:   u32,\n#     yy:  u32,\n#     zzz: u32,\n# }\nstruct_field_align_threshold = 25\n# Use field initialize shorthand if possible.\n#\n# Default value: false\n# Possible values: true, false\n# Stable: Yes\n# false (default):\n# struct Foo {\n#     x: u32,\n#     y: u32,\n#     z: u32,\n# }\n#\n# fn main() {\n#     let x = 1;\n#     let y = 2;\n#     let z = 3;\n#     let a = Foo { x: x, y: y, z: z };\n# }\n# true:\n# struct Foo {\n#     x: u32,\n#     y: u32,\n#     z: u32,\n# }\n#\n# fn main() {\n#     let x = 1;\n#     let y = 2;\n#     let z = 3;\n#     let a = Foo { x, y, z };\n# }\nuse_field_init_shorthand = true\n# Replace uses of the try! macro by the ? 
shorthand\n#\n# Default value: false\n# Possible values: true, false\n# Stable: Yes\n# false (default):\n# fn main() {\n#     let lorem = try!(ipsum.map(|dolor| dolor.sit()));\n# }\n# true:\n# fn main() {\n#     let lorem = ipsum.map(|dolor| dolor.sit())?;\n# }\nuse_try_shorthand = true\n# Break comments to fit on the line\n#\n# Default value: false\n# Possible values: true, false\n# Stable: No (tracking issue: #3347)\n# false (default):\n# // Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.\n# true:\n# // Lorem ipsum dolor sit amet, consectetur adipiscing elit,\n# // sed do eiusmod tempor incididunt ut labore et dolore\n# // magna aliqua. Ut enim ad minim veniam, quis nostrud\n# // exercitation ullamco laboris nisi ut aliquip ex ea\n# // commodo consequat.\nwrap_comments = true\n# When structs, slices, arrays, and block/array-like macros are used as the last argument in an expression list, allow them to overflow (like blocks/closures) instead of being indented on a new line.\n#\n# Default value: false\n# Possible values: true, false\n# Stable: No (tracking issue: #3370)\n# false (default):\n# fn example() {\n#     foo(ctx, |param| {\n#         action();\n#         foo(param)\n#     });\n#\n#     foo(\n#         ctx,\n#         Bar {\n#             x: value,\n#             y: value2,\n#         },\n#     );\n#\n#     foo(\n#         ctx,\n#         &[\n#             MAROON_TOMATOES,\n#             PURPLE_POTATOES,\n#             ORGANE_ORANGES,\n#             GREEN_PEARS,\n#             RED_APPLES,\n#         ],\n#     );\n#\n#     foo(\n#         ctx,\n#         vec![\n#             MAROON_TOMATOES,\n#             PURPLE_POTATOES,\n#             ORGANE_ORANGES,\n#             GREEN_PEARS,\n#             RED_APPLES,\n#         ],\n#     );\n# }\n# true:\n# fn example() {\n#     
foo(ctx, |param| {\n#         action();\n#         foo(param)\n#     });\n#\n#     foo(ctx, Bar {\n#         x: value,\n#         y: value2,\n#     });\n#\n#     foo(ctx, &[\n#         MAROON_TOMATOES,\n#         PURPLE_POTATOES,\n#         ORGANE_ORANGES,\n#         GREEN_PEARS,\n#         RED_APPLES,\n#     ]);\n#\n#     foo(ctx, vec![\n#         MAROON_TOMATOES,\n#         PURPLE_POTATOES,\n#         ORGANE_ORANGES,\n#         GREEN_PEARS,\n#         RED_APPLES,\n#     ]);\n# }\noverflow_delimited_expr = true\n"
  },
  {
    "path": "src/lib.rs",
    "content": "#![feature(async_closure)]\n#![allow(clippy::mutable_key_type)]\n\nuse protocol::traits::ServiceMapping;\n\nuse cli::{Cli, CliConfig};\n\npub fn run<Mapping: 'static + ServiceMapping>(\n    service_mapping: Mapping,\n    app_name: &'static str,\n    version: &'static str,\n    author: &'static str,\n    config_path: &'static str,\n    genesis_patch: &'static str,\n    target_commands: Option<Vec<&str>>,\n) {\n    Cli::run(\n        service_mapping,\n        CliConfig {\n            app_name,\n            version,\n            author,\n            config_path,\n            genesis_patch,\n        },\n        target_commands,\n    )\n}\n"
  },
  {
    "path": "tests/common/mod.rs",
    "content": "#![allow(clippy::mutable_key_type)]\n\npub mod node;\n\nuse std::net::TcpListener;\nuse std::path::PathBuf;\nuse std::sync::atomic::{AtomicU16, Ordering};\n\nuse protocol::types::Hash;\nuse protocol::BytesMut;\nuse rand::{rngs::OsRng, RngCore};\n\nstatic AVAILABLE_PORT: AtomicU16 = AtomicU16::new(2000);\n\npub fn tmp_dir() -> PathBuf {\n    let mut tmp_dir = std::env::temp_dir();\n    let sub_dir = {\n        let mut random_bytes = [0u8; 32];\n        OsRng.fill_bytes(&mut random_bytes);\n        Hash::digest(BytesMut::from(random_bytes.as_ref()).freeze()).as_hex()\n    };\n\n    tmp_dir.push(sub_dir + \"/\");\n    tmp_dir\n}\n\npub fn available_port_pair() -> (u16, u16) {\n    (available_port(), available_port())\n}\n\nfn available_port() -> u16 {\n    let is_available = |port| -> bool { TcpListener::bind((\"127.0.0.1\", port)).is_ok() };\n\n    loop {\n        let port = AVAILABLE_PORT.fetch_add(1, Ordering::SeqCst);\n        if is_available(port) {\n            return port;\n        }\n    }\n}\n"
  },
  {
    "path": "tests/common/node/config.rs",
    "content": "use std::collections::HashMap;\nuse std::net::SocketAddr;\nuse std::path::PathBuf;\n\nuse serde_derive::Deserialize;\n\nuse core_mempool::{DEFAULT_BROADCAST_TXS_INTERVAL, DEFAULT_BROADCAST_TXS_SIZE};\nuse protocol::types::Hex;\n\n#[derive(Debug, Deserialize)]\npub struct ConfigNetwork {\n    pub bootstraps:                 Option<Vec<ConfigNetworkBootstrap>>,\n    pub allowlist:                  Option<Vec<String>>,\n    pub allowlist_only:             Option<bool>,\n    pub trust_interval_duration:    Option<u64>,\n    pub trust_max_history_duration: Option<u64>,\n    pub fatal_ban_duration:         Option<u64>,\n    pub soft_ban_duration:          Option<u64>,\n    pub max_connected_peers:        Option<usize>,\n    pub listening_address:          SocketAddr,\n    pub rpc_timeout:                Option<u64>,\n    pub selfcheck_interval:         Option<u64>,\n    pub send_buffer_size:           Option<usize>,\n    pub write_timeout:              Option<u64>,\n    pub recv_buffer_size:           Option<usize>,\n    pub max_frame_length:           Option<usize>,\n    pub max_wait_streams:           Option<usize>,\n    pub ping_interval:              Option<u64>,\n}\n\n#[derive(Debug, Deserialize)]\npub struct ConfigNetworkBootstrap {\n    pub peer_id: String,\n    pub address: String,\n}\n\n#[derive(Debug, Deserialize)]\npub struct ConfigConsensus {\n    pub overlord_gap:        usize,\n    pub sync_txs_chunk_size: usize,\n}\n\nimpl Default for ConfigConsensus {\n    fn default() -> Self {\n        Self {\n            overlord_gap:        5,\n            sync_txs_chunk_size: 5000,\n        }\n    }\n}\n\nfn default_broadcast_txs_size() -> usize {\n    DEFAULT_BROADCAST_TXS_SIZE\n}\n\nfn default_broadcast_txs_interval() -> u64 {\n    DEFAULT_BROADCAST_TXS_INTERVAL\n}\n\n#[derive(Debug, Deserialize)]\npub struct ConfigMempool {\n    pub pool_size: u64,\n\n    #[serde(default = \"default_broadcast_txs_size\")]\n    pub broadcast_txs_size:     usize,\n   
 #[serde(default = \"default_broadcast_txs_interval\")]\n    pub broadcast_txs_interval: u64,\n}\n\n#[derive(Debug, Deserialize)]\npub struct ConfigExecutor {\n    pub light: bool,\n}\n\n#[derive(Debug, Deserialize)]\npub struct ConfigLogger {\n    pub filter:                     String,\n    pub log_to_console:             bool,\n    pub console_show_file_and_line: bool,\n    pub log_to_file:                bool,\n    pub metrics:                    bool,\n    pub log_path:                   PathBuf,\n    #[serde(default)]\n    pub modules_level:              HashMap<String, String>,\n}\n\nimpl Default for ConfigLogger {\n    fn default() -> Self {\n        Self {\n            filter:                     \"info\".into(),\n            log_to_console:             true,\n            console_show_file_and_line: false,\n            log_to_file:                true,\n            metrics:                    true,\n            log_path:                   \"logs/\".into(),\n            modules_level:              HashMap::new(),\n        }\n    }\n}\n\n#[derive(Debug, Deserialize)]\npub struct Config {\n    // crypto\n    pub privkey: Hex,\n\n    pub network:   ConfigNetwork,\n    pub mempool:   ConfigMempool,\n    pub executor:  ConfigExecutor,\n    #[serde(default)]\n    pub consensus: ConfigConsensus,\n    #[serde(default)]\n    pub logger:    ConfigLogger,\n}\n"
  },
  {
    "path": "tests/common/node/consts.rs",
    "content": "pub const CHAIN_CONFIG_PATH: &str = \"devtools/chain/config.toml\";\npub const CHAIN_GENESIS_PATH: &str = \"devtools/chain/genesis.toml\";\npub const CHAIN_ID: &str = \"0xb6a4d7da21443f5e816e8700eea87610e6d769657d6b8ec73028457bf2ca4036\";\n\n// Disable ping\npub const NETWORK_PING_INTERVAL: Option<u64> = Some(99999);\n// Enough interval for tests\npub const NETWORK_TRUST_METRIC_INTERVAL: Option<u64> = Some(99);\n// Trust metric soft hard ban duration\npub const NETWORK_SOFT_BAND_DURATION: Option<u64> = Some(5);\n\npub const MEMPOOL_POOL_SIZE: usize = 10;\n"
  },
  {
    "path": "tests/common/node/diagnostic.rs",
    "content": "use super::sync::Sync;\n\nuse core_network::{DiagnosticEvent, NetworkServiceHandle};\nuse protocol::{\n    async_trait,\n    traits::{Context, MessageHandler, PeerTrust, TrustFeedback},\n};\nuse serde_derive::{Deserialize, Serialize};\n\nuse std::ops::Deref;\n\npub const GOSSIP_TRUST_NEW_INTERVAL: &str = \"/gossip/diagnostic/trust_new_interval\";\npub const GOSSIP_TRUST_TWIN_EVENT: &str = \"/gossip/diagnostic/trust_twin_event\";\n\n#[derive(Debug, Serialize, Deserialize)]\npub struct TrustNewIntervalReq(pub u8);\n\npub struct TrustNewIntervalHandler {\n    pub sync:    Sync,\n    pub network: NetworkServiceHandle,\n}\n\nimpl TrustNewIntervalHandler {\n    pub fn new(sync: Sync, network: NetworkServiceHandle) -> Self {\n        TrustNewIntervalHandler { sync, network }\n    }\n}\n\n#[async_trait]\nimpl MessageHandler for TrustNewIntervalHandler {\n    type Message = TrustNewIntervalReq;\n\n    async fn process(&self, ctx: Context, _msg: Self::Message) -> TrustFeedback {\n        let session_id = ctx\n            .get::<usize>(\"session_id\")\n            .cloned()\n            .expect(\"impossible, session id not found\");\n\n        let report = self\n            .network\n            .diagnostic\n            .new_trust_interval(session_id.into())\n            .expect(\"failed to enter new trust interval\");\n        self.sync.emit(DiagnosticEvent::TrustNewInterval { report });\n\n        TrustFeedback::Neutral\n    }\n}\n\n#[repr(u8)]\n#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq, Hash)]\npub enum TwinEvent {\n    Good = 0,\n    Bad = 1,\n    Worse = 2,\n    Both = 3,\n}\n\n#[derive(Debug, Serialize, Deserialize)]\npub struct TrustTwinEventReq(pub TwinEvent);\n\npub struct TrustTwinEventHandler(pub NetworkServiceHandle);\n\n#[async_trait]\nimpl MessageHandler for TrustTwinEventHandler {\n    type Message = TrustTwinEventReq;\n\n    async fn process(&self, ctx: Context, msg: Self::Message) -> TrustFeedback {\n        match 
msg.0 {\n            TwinEvent::Good => self.report(ctx, TrustFeedback::Good),\n            TwinEvent::Bad => self.report(ctx, TrustFeedback::Bad(\"twin bad\".to_owned())),\n            TwinEvent::Worse => self.report(ctx, TrustFeedback::Worse(\"twin worse\".to_owned())),\n            TwinEvent::Both => {\n                self.report(ctx.clone(), TrustFeedback::Good);\n                self.report(ctx, TrustFeedback::Bad(\"twin bad\".to_owned()));\n            }\n        }\n\n        TrustFeedback::Neutral\n    }\n}\n\nimpl Deref for TrustTwinEventHandler {\n    type Target = NetworkServiceHandle;\n\n    fn deref(&self) -> &Self::Target {\n        &self.0\n    }\n}\n"
  },
  {
    "path": "tests/common/node/full_node/builder.rs",
    "content": "use super::{\n    config::Config,\n    default_start::{create_genesis, start},\n    error::MainError,\n    memory_db::MemoryDB,\n    Sync,\n};\n\nuse std::{\n    fs,\n    net::{IpAddr, Ipv4Addr, SocketAddr},\n    sync::Arc,\n};\n\nuse protocol::traits::ServiceMapping;\nuse protocol::types::{Block, Genesis};\nuse protocol::ProtocolResult;\n\n#[derive(Default)]\npub struct MutaBuilder<Mapping: ServiceMapping> {\n    config_path:     Option<String>,\n    genesis_path:    Option<String>,\n    servive_mapping: Option<Arc<Mapping>>,\n}\n\nimpl<Mapping: 'static + ServiceMapping> MutaBuilder<Mapping> {\n    pub fn new() -> Self {\n        Self {\n            servive_mapping: None,\n            config_path:     None,\n            genesis_path:    None,\n        }\n    }\n\n    pub fn service_mapping(mut self, mapping: Mapping) -> MutaBuilder<Mapping> {\n        self.servive_mapping = Some(Arc::new(mapping));\n        self\n    }\n\n    pub fn config_path(mut self, path: &str) -> MutaBuilder<Mapping> {\n        self.config_path = Some(path.to_owned());\n        self\n    }\n\n    pub fn genesis_path(mut self, path: &str) -> MutaBuilder<Mapping> {\n        self.genesis_path = Some(path.to_owned());\n        self\n    }\n\n    pub fn build(self, listen_port: u16) -> ProtocolResult<Muta<Mapping>> {\n        let mut config: Config =\n            common_config_parser::parse(&self.config_path.expect(\"config path is not set\"))\n                .map_err(MainError::ConfigParse)?;\n\n        // Override listening address\n        let listen_addr = SocketAddr::new(IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), listen_port);\n        config.network.listening_address = listen_addr;\n\n        let genesis_toml = fs::read_to_string(&self.genesis_path.expect(\"genesis path is not set\"))\n            .map_err(MainError::Io)?;\n        let genesis: Genesis = toml::from_str(&genesis_toml).map_err(MainError::GenesisTomlDe)?;\n\n        Ok(Muta::new(\n            config,\n           
 genesis,\n            self.servive_mapping\n                .expect(\"service mapping cannot be None\"),\n        ))\n    }\n}\n\npub struct Muta<Mapping: ServiceMapping> {\n    config:          Config,\n    genesis:         Genesis,\n    service_mapping: Arc<Mapping>,\n}\n\nimpl<Mapping: 'static + ServiceMapping> Muta<Mapping> {\n    pub fn new(config: Config, genesis: Genesis, service_mapping: Arc<Mapping>) -> Self {\n        Self {\n            config,\n            genesis,\n            service_mapping,\n        }\n    }\n\n    pub async fn run(self, seckey: String, sync: Sync) -> ProtocolResult<()> {\n        // run muta\n        let memory_db = MemoryDB::default();\n\n        self.create_genesis(memory_db.clone()).await?;\n        start(\n            self.config,\n            Arc::clone(&self.service_mapping),\n            memory_db,\n            seckey,\n            sync,\n        )\n        .await?;\n\n        Ok(())\n    }\n\n    async fn create_genesis(&self, db: MemoryDB) -> ProtocolResult<Block> {\n        create_genesis(&self.genesis, Arc::clone(&self.service_mapping), db).await\n    }\n}\n"
  },
  {
    "path": "tests/common/node/full_node/default_start.rs",
    "content": "use super::diagnostic::{\n    TrustNewIntervalHandler, TrustTwinEventHandler, GOSSIP_TRUST_NEW_INTERVAL,\n    GOSSIP_TRUST_TWIN_EVENT,\n};\n/// Almost same as src/default_start.rs, only remove graphql service.\nuse super::{config::Config, consts, error::MainError, memory_db::MemoryDB, Sync};\n\nuse std::collections::HashMap;\nuse std::convert::TryFrom;\nuse std::sync::Arc;\n\nuse bytes::Bytes;\nuse futures::lock::Mutex;\n\nuse common_crypto::{\n    BlsCommonReference, BlsPrivateKey, BlsPublicKey, PublicKey, Secp256k1, Secp256k1PrivateKey,\n    ToPublicKey, UncompressedPublicKey,\n};\nuse core_api::adapter::DefaultAPIAdapter;\nuse core_consensus::fixed_types::{FixedBlock, FixedProof, FixedSignedTxs};\nuse core_consensus::message::{\n    ChokeMessageHandler, ProposalMessageHandler, PullBlockRpcHandler, PullProofRpcHandler,\n    PullTxsRpcHandler, QCMessageHandler, RemoteHeightMessageHandler, VoteMessageHandler,\n    BROADCAST_HEIGHT, END_GOSSIP_AGGREGATED_VOTE, END_GOSSIP_SIGNED_CHOKE,\n    END_GOSSIP_SIGNED_PROPOSAL, END_GOSSIP_SIGNED_VOTE, RPC_RESP_SYNC_PULL_BLOCK,\n    RPC_RESP_SYNC_PULL_PROOF, RPC_RESP_SYNC_PULL_TXS, RPC_SYNC_PULL_BLOCK, RPC_SYNC_PULL_PROOF,\n    RPC_SYNC_PULL_TXS,\n};\nuse core_consensus::status::{CurrentConsensusStatus, StatusAgent};\nuse core_consensus::util::OverlordCrypto;\nuse core_consensus::{\n    ConsensusWal, DurationConfig, Node, OverlordConsensus, OverlordConsensusAdapter,\n    OverlordSynchronization, RichBlock, SignedTxsWAL,\n};\nuse core_mempool::{\n    DefaultMemPoolAdapter, HashMemPool, MsgPushTxs, NewTxsHandler, PullTxsHandler,\n    END_GOSSIP_NEW_TXS, RPC_PULL_TXS, RPC_RESP_PULL_TXS,\n};\nuse core_network::{DiagnosticEvent, NetworkConfig, NetworkService, PeerId, PeerIdExt};\nuse core_storage::{ImplStorage, StorageError};\nuse framework::executor::{ServiceExecutor, ServiceExecutorFactory};\nuse protocol::traits::{\n    APIAdapter, CommonStorage, Context, MemPool, Network, NodeInfo, ServiceMapping, 
Storage,\n};\nuse protocol::types::{Address, Block, BlockHeader, Genesis, Hash, Metadata, Proof, Validator};\nuse protocol::{fixed_codec::FixedCodec, ProtocolResult};\n\npub async fn create_genesis<Mapping: 'static + ServiceMapping>(\n    genesis: &Genesis,\n    service_mapping: Arc<Mapping>,\n    db: MemoryDB,\n) -> ProtocolResult<Block> {\n    let metadata: Metadata =\n        serde_json::from_str(genesis.get_payload(\"metadata\")).expect(\"Decode metadata failed!\");\n\n    let validators: Vec<Validator> = metadata\n        .verifier_list\n        .iter()\n        .map(|v| Validator {\n            pub_key:        v.pub_key.decode(),\n            propose_weight: v.propose_weight,\n            vote_weight:    v.vote_weight,\n        })\n        .collect();\n\n    // Read genesis.\n    log::info!(\"Genesis data: {:?}\", genesis);\n\n    // Init Block db\n    let storage = Arc::new(ImplStorage::new(Arc::new(db.clone())));\n\n    match storage.get_latest_block(Context::new()).await {\n        Ok(genesis_block) => {\n            log::info!(\"The Genesis block has been initialized.\");\n            return Ok(genesis_block);\n        }\n        Err(e) => {\n            if !e.to_string().contains(\"GetNone\") {\n                return Err(e);\n            }\n        }\n    };\n\n    // Init genesis\n    let genesis_state_root = ServiceExecutor::create_genesis(\n        genesis.services.clone(),\n        Arc::new(db),\n        Arc::clone(&storage),\n        service_mapping,\n    )?;\n\n    // Build genesis block.\n    let proposer = Address::from_hash(Hash::digest(protocol::address_hrp().as_str()))?;\n    let genesis_block_header = BlockHeader {\n        chain_id: metadata.chain_id.clone(),\n        height: 0,\n        exec_height: 0,\n        prev_hash: Hash::from_empty(),\n        timestamp: genesis.timestamp,\n        order_root: Hash::from_empty(),\n        order_signed_transactions_hash: Hash::from_empty(),\n        confirm_root: vec![],\n        state_root: 
genesis_state_root,\n        receipt_root: vec![],\n        cycles_used: vec![],\n        proposer,\n        proof: Proof {\n            height:     0,\n            round:      0,\n            block_hash: Hash::from_empty(),\n            signature:  Bytes::new(),\n            bitmap:     Bytes::new(),\n        },\n        validator_version: 0,\n        validators,\n    };\n    let latest_proof = genesis_block_header.proof.clone();\n    let genesis_block = Block {\n        header:            genesis_block_header,\n        ordered_tx_hashes: vec![],\n    };\n    storage\n        .insert_block(Context::new(), genesis_block.clone())\n        .await?;\n    storage\n        .update_latest_proof(Context::new(), latest_proof)\n        .await?;\n\n    log::info!(\"The genesis block is created {:?}\", genesis_block);\n    Ok(genesis_block)\n}\n\npub async fn start<Mapping: 'static + ServiceMapping>(\n    config: Config,\n    service_mapping: Arc<Mapping>,\n    db: MemoryDB,\n    seckey: String,\n    sync: Sync,\n) -> ProtocolResult<()> {\n    log::info!(\"node starts\");\n    // Init Block db\n    let storage = Arc::new(ImplStorage::new(Arc::new(db.clone())));\n\n    // Init network\n    let network_config = NetworkConfig::new()\n        .max_connections(config.network.max_connected_peers)?\n        .allowlist_only(config.network.allowlist_only)\n        .peer_trust_metric(\n            consts::NETWORK_TRUST_METRIC_INTERVAL,\n            config.network.trust_max_history_duration,\n        )?\n        .peer_soft_ban(consts::NETWORK_SOFT_BAND_DURATION)\n        .peer_fatal_ban(config.network.fatal_ban_duration)\n        .rpc_timeout(config.network.rpc_timeout)\n        .ping_interval(consts::NETWORK_PING_INTERVAL)\n        .selfcheck_interval(config.network.selfcheck_interval)\n        .max_wait_streams(config.network.max_wait_streams)\n        .max_frame_length(config.network.max_frame_length)\n        .send_buffer_size(config.network.send_buffer_size)\n        
.write_timeout(config.network.write_timeout)\n        .recv_buffer_size(config.network.recv_buffer_size);\n\n    let mut bootstrap_pairs = vec![];\n    if let Some(bootstrap) = &config.network.bootstraps {\n        for bootstrap in bootstrap.iter() {\n            bootstrap_pairs.push((bootstrap.peer_id.to_owned(), bootstrap.address.to_owned()));\n        }\n    }\n\n    let allowlist = config.network.allowlist.clone().unwrap_or_default();\n\n    let network_config = network_config\n        .bootstraps(bootstrap_pairs)?\n        .allowlist(allowlist)?\n        .secio_keypair(seckey.clone())?;\n    let mut network_service = NetworkService::new(network_config);\n    network_service\n        .listen(config.network.listening_address)\n        .await?;\n\n    // Register diagnostic\n    network_service.register_endpoint_handler(\n        GOSSIP_TRUST_NEW_INTERVAL,\n        TrustNewIntervalHandler::new(sync.clone(), network_service.handle()),\n    )?;\n    network_service.register_endpoint_handler(\n        GOSSIP_TRUST_TWIN_EVENT,\n        TrustTwinEventHandler(network_service.handle()),\n    )?;\n\n    let hook_fn = |sync: Sync| -> _ { Box::new(move |event: DiagnosticEvent| sync.emit(event)) };\n    network_service.register_diagnostic_hook(hook_fn(sync.clone()));\n\n    // Init mempool\n    let current_block = storage.get_latest_block(Context::new()).await?;\n    let mempool_adapter =\n        DefaultMemPoolAdapter::<ServiceExecutorFactory, Secp256k1, _, _, _, _>::new(\n            network_service.handle(),\n            Arc::clone(&storage),\n            Arc::new(db.clone()),\n            Arc::clone(&service_mapping),\n            config.mempool.broadcast_txs_size,\n            config.mempool.broadcast_txs_interval,\n        );\n    let mempool =\n        Arc::new(HashMemPool::new(consts::MEMPOOL_POOL_SIZE, mempool_adapter, vec![]).await);\n\n    // self private key\n    let hex_privkey = hex::decode(config.privkey.as_string_trim0x()).map_err(MainError::FromHex)?;\n    
let my_privkey =\n        Secp256k1PrivateKey::try_from(hex_privkey.as_ref()).map_err(MainError::Crypto)?;\n    let my_pubkey = my_privkey.pub_key();\n    let my_address = Address::from_pubkey_bytes(my_pubkey.to_uncompressed_bytes())?;\n\n    // Get metadata\n    let api_adapter = DefaultAPIAdapter::<ServiceExecutorFactory, _, _, _, _>::new(\n        Arc::clone(&mempool),\n        Arc::clone(&storage),\n        Arc::new(db.clone()),\n        Arc::clone(&service_mapping),\n    );\n\n    // Create full transactions wal\n    let wal_path = crate::common::tmp_dir()\n        .to_str()\n        .expect(\"wal path string\")\n        .to_string();\n    let txs_wal = Arc::new(SignedTxsWAL::new(wal_path));\n\n    // Init consensus wal\n    let wal_path = crate::common::tmp_dir()\n        .to_str()\n        .expect(\"wal path string\")\n        .to_string();\n    let consensus_wal = Arc::new(ConsensusWal::new(wal_path));\n\n    let exec_resp = api_adapter\n        .query_service(\n            Context::new(),\n            current_block.header.height,\n            u64::max_value(),\n            1,\n            my_address.clone(),\n            \"metadata\".to_string(),\n            \"get_metadata\".to_string(),\n            \"\".to_string(),\n        )\n        .await?;\n\n    let metadata: Metadata =\n        serde_json::from_str(&exec_resp.succeed_data).expect(\"Decode metadata failed!\");\n\n    // set chain id in network\n    network_service.set_chain_id(Hash::from_hex(consts::CHAIN_ID).expect(\"chain id\"));\n\n    // set args in mempool\n    mempool.set_args(\n        metadata.timeout_gap,\n        metadata.cycles_limit,\n        metadata.max_tx_size,\n    );\n\n    // register broadcast new transaction\n    network_service\n        .register_endpoint_handler(END_GOSSIP_NEW_TXS, NewTxsHandler::new(Arc::clone(&mempool)))?;\n\n    // register pull txs from other node\n    network_service.register_endpoint_handler(\n        RPC_PULL_TXS,\n        
PullTxsHandler::new(Arc::new(network_service.handle()), Arc::clone(&mempool)),\n    )?;\n    network_service.register_rpc_response::<MsgPushTxs>(RPC_RESP_PULL_TXS)?;\n\n    // Init Consensus\n    let validators: Vec<Validator> = metadata\n        .verifier_list\n        .iter()\n        .map(|v| Validator {\n            pub_key:        v.pub_key.decode(),\n            propose_weight: v.propose_weight,\n            vote_weight:    v.vote_weight,\n        })\n        .collect();\n\n    let node_info = NodeInfo {\n        chain_id:     metadata.chain_id.clone(),\n        self_address: my_address.clone(),\n        self_pub_key: my_pubkey.to_bytes(),\n    };\n    let current_header = &current_block.header;\n    let block_hash = Hash::digest(current_block.header.encode_fixed()?);\n    let current_height = current_block.header.height;\n    let exec_height = current_block.header.exec_height;\n\n    let current_consensus_status = CurrentConsensusStatus {\n        cycles_price:                metadata.cycles_price,\n        cycles_limit:                metadata.cycles_limit,\n        latest_committed_height:     current_block.header.height,\n        exec_height:                 current_block.header.exec_height,\n        current_hash:                block_hash,\n        latest_committed_state_root: current_header.state_root.clone(),\n        list_confirm_root:           vec![],\n        list_state_root:             vec![],\n        list_receipt_root:           vec![],\n        list_cycles_used:            vec![],\n        current_proof:               current_header.proof.clone(),\n        validators:                  validators.clone(),\n        consensus_interval:          metadata.interval,\n        propose_ratio:               metadata.propose_ratio,\n        prevote_ratio:               metadata.prevote_ratio,\n        precommit_ratio:             metadata.precommit_ratio,\n        brake_ratio:                 metadata.brake_ratio,\n        max_tx_size:                 
metadata.max_tx_size,\n        tx_num_limit:                metadata.tx_num_limit,\n    };\n\n    let consensus_interval = current_consensus_status.consensus_interval;\n    let status_agent = StatusAgent::new(current_consensus_status);\n\n    let mut bls_pub_keys = HashMap::new();\n    for validator_extend in metadata.verifier_list.iter() {\n        let address = validator_extend.pub_key.decode();\n        let hex_pubkey = hex::decode(validator_extend.bls_pub_key.as_string_trim0x())\n            .map_err(MainError::FromHex)?;\n        let pub_key = BlsPublicKey::try_from(hex_pubkey.as_ref()).map_err(MainError::Crypto)?;\n        bls_pub_keys.insert(address, pub_key);\n    }\n\n    let mut priv_key = Vec::new();\n    priv_key.extend_from_slice(&[0u8; 16]);\n    let mut tmp = hex::decode(config.privkey.as_string_trim0x()).unwrap();\n    priv_key.append(&mut tmp);\n    let bls_priv_key = BlsPrivateKey::try_from(priv_key.as_ref()).map_err(MainError::Crypto)?;\n\n    let hex_common_ref =\n        hex::decode(metadata.common_ref.as_string_trim0x()).map_err(MainError::FromHex)?;\n    let common_ref: BlsCommonReference = std::str::from_utf8(hex_common_ref.as_ref())\n        .map_err(MainError::Utf8)?\n        .into();\n\n    let crypto = Arc::new(OverlordCrypto::new(bls_priv_key, bls_pub_keys, common_ref));\n\n    let mut consensus_adapter =\n        OverlordConsensusAdapter::<ServiceExecutorFactory, _, _, _, _, _>::new(\n            Arc::new(network_service.handle()),\n            Arc::clone(&mempool),\n            Arc::clone(&storage),\n            Arc::new(db),\n            Arc::clone(&service_mapping),\n            status_agent.clone(),\n            Arc::clone(&crypto),\n            config.consensus.overlord_gap,\n        )?;\n\n    let exec_demon = consensus_adapter.take_exec_demon();\n    let consensus_adapter = Arc::new(consensus_adapter);\n\n    let lock = Arc::new(Mutex::new(()));\n\n    let overlord_consensus = Arc::new(OverlordConsensus::new(\n        
status_agent.clone(),\n        node_info,\n        Arc::clone(&crypto),\n        Arc::clone(&txs_wal),\n        Arc::clone(&consensus_adapter),\n        Arc::clone(&lock),\n        Arc::clone(&consensus_wal),\n    ));\n\n    consensus_adapter.set_overlord_handler(overlord_consensus.get_overlord_handler());\n\n    let synchronization = Arc::new(OverlordSynchronization::<_>::new(\n        config.consensus.sync_txs_chunk_size,\n        consensus_adapter,\n        status_agent.clone(),\n        crypto,\n        lock,\n    ));\n\n    let peer_ids = metadata\n        .verifier_list\n        .iter()\n        .map(|v| PeerId::from_pubkey_bytes(v.pub_key.decode()).map(PeerIdExt::into_bytes_ext))\n        .collect::<Result<Vec<_>, _>>()?;\n\n    network_service\n        .handle()\n        .tag_consensus(Context::new(), peer_ids)?;\n\n    // Re-execute blocks from exec_height + 1 to current_height to rebuild the\n    // missing execution status.\n    log::info!(\"Re-execute from {} to {}\", exec_height + 1, current_height);\n    for height in exec_height + 1..=current_height {\n        let block = storage\n            .get_block(Context::new(), height)\n            .await?\n            .ok_or(StorageError::GetNone)?;\n        let txs = storage\n            .get_transactions(\n                Context::new(),\n                block.header.height,\n                &block.ordered_tx_hashes,\n            )\n            .await?\n            .into_iter()\n            .flatten()\n            .collect::<Vec<_>>();\n        if txs.len() != block.ordered_tx_hashes.len() {\n            return Err(StorageError::GetNone.into());\n        }\n        let rich_block = RichBlock { block, txs };\n        let _ = synchronization\n            .exec_block(Context::new(), rich_block, status_agent.clone())\n            .await?;\n    }\n\n    // Register consensus handlers\n    network_service.register_endpoint_handler(\n        END_GOSSIP_SIGNED_PROPOSAL,\n        
ProposalMessageHandler::new(Arc::clone(&overlord_consensus)),\n    )?;\n    network_service.register_endpoint_handler(\n        END_GOSSIP_AGGREGATED_VOTE,\n        QCMessageHandler::new(Arc::clone(&overlord_consensus)),\n    )?;\n    network_service.register_endpoint_handler(\n        END_GOSSIP_SIGNED_VOTE,\n        VoteMessageHandler::new(Arc::clone(&overlord_consensus)),\n    )?;\n    network_service.register_endpoint_handler(\n        END_GOSSIP_SIGNED_CHOKE,\n        ChokeMessageHandler::new(Arc::clone(&overlord_consensus)),\n    )?;\n    network_service.register_endpoint_handler(\n        BROADCAST_HEIGHT,\n        RemoteHeightMessageHandler::new(Arc::clone(&synchronization)),\n    )?;\n    network_service.register_endpoint_handler(\n        RPC_SYNC_PULL_BLOCK,\n        PullBlockRpcHandler::new(Arc::new(network_service.handle()), Arc::clone(&storage)),\n    )?;\n\n    network_service.register_endpoint_handler(\n        RPC_SYNC_PULL_PROOF,\n        PullProofRpcHandler::new(Arc::new(network_service.handle()), Arc::clone(&storage)),\n    )?;\n\n    network_service.register_endpoint_handler(\n        RPC_SYNC_PULL_TXS,\n        PullTxsRpcHandler::new(Arc::new(network_service.handle()), Arc::clone(&storage)),\n    )?;\n    network_service.register_rpc_response::<FixedBlock>(RPC_RESP_SYNC_PULL_BLOCK)?;\n    network_service.register_rpc_response::<FixedProof>(RPC_RESP_SYNC_PULL_PROOF)?;\n    network_service.register_rpc_response::<FixedSignedTxs>(RPC_RESP_SYNC_PULL_TXS)?;\n\n    // Run network\n    tokio::spawn(network_service);\n    sync.wait().await;\n\n    // Run sync\n    tokio::spawn(async move {\n        if let Err(e) = synchronization.polling_broadcast().await {\n            log::error!(\"synchronization: {:?}\", e);\n        }\n    });\n\n    // Run consensus\n    let authority_list = validators\n        .iter()\n        .map(|v| Node {\n            address:        v.pub_key.clone(),\n            propose_weight: v.propose_weight,\n            vote_weight: 
   v.vote_weight,\n        })\n        .collect::<Vec<_>>();\n\n    let timer_config = DurationConfig {\n        propose_ratio:   metadata.propose_ratio,\n        prevote_ratio:   metadata.prevote_ratio,\n        precommit_ratio: metadata.precommit_ratio,\n        brake_ratio:     metadata.brake_ratio,\n    };\n\n    let consensus_handle = tokio::spawn(async move {\n        if let Err(e) = overlord_consensus\n            .run(\n                current_height,\n                consensus_interval,\n                authority_list,\n                Some(timer_config),\n            )\n            .await\n        {\n            log::error!(\"muta-consensus error: {:?}\", e);\n        }\n    });\n\n    exec_demon.run().await;\n    let _ = consensus_handle.await;\n    let _ = sync;\n\n    Ok(())\n}\n
  },
  {
    "path": "tests/common/node/full_node/error.rs",
    "content": "use derive_more::{Display, From};\nuse protocol::{ProtocolError, ProtocolErrorKind};\n\n#[derive(Debug, Display, From)]\npub enum MainError {\n    #[display(fmt = \"The muta configuration read failed {:?}\", _0)]\n    ConfigParse(common_config_parser::ParseError),\n\n    #[display(fmt = \"{:?}\", _0)]\n    Io(std::io::Error),\n\n    #[display(fmt = \"Toml fails to parse genesis {:?}\", _0)]\n    GenesisTomlDe(toml::de::Error),\n\n    #[display(fmt = \"hex error {:?}\", _0)]\n    FromHex(hex::FromHexError),\n\n    #[display(fmt = \"crypto error {:?}\", _0)]\n    Crypto(common_crypto::Error),\n\n    #[display(fmt = \"{:?}\", _0)]\n    Utf8(std::str::Utf8Error),\n\n    #[display(fmt = \"other error {:?}\", _0)]\n    Other(String),\n}\n\nimpl std::error::Error for MainError {}\n\nimpl From<MainError> for ProtocolError {\n    fn from(error: MainError) -> ProtocolError {\n        ProtocolError::new(ProtocolErrorKind::Main, Box::new(error))\n    }\n}\n"
  },
  {
    "path": "tests/common/node/full_node/memory_db.rs",
    "content": "use derive_more::Display;\nuse parking_lot::RwLock;\nuse protocol::{\n    async_trait,\n    codec::ProtocolCodecSync,\n    traits::{\n        IntoIteratorByRef, StorageAdapter, StorageBatchModify, StorageIterator, StorageSchema,\n    },\n    Bytes, ProtocolError, ProtocolErrorKind, ProtocolResult,\n};\n\nuse std::{\n    collections::{hash_map, HashMap},\n    marker::PhantomData,\n    ops::Deref,\n    sync::Arc,\n};\n\n#[derive(Debug, Display)]\npub enum MemoryDBError {\n    #[display(fmt = \"batch length dont match\")]\n    BatchLengthMismatch,\n}\n\nimpl std::error::Error for MemoryDBError {}\n\nimpl From<MemoryDBError> for ProtocolError {\n    fn from(err: MemoryDBError) -> ProtocolError {\n        ProtocolError::new(ProtocolErrorKind::Storage, Box::new(err))\n    }\n}\n\ntype Category = HashMap<Vec<u8>, Vec<u8>>;\n\n#[derive(Clone)]\npub struct MemoryDB {\n    trie: Arc<RwLock<HashMap<Vec<u8>, Vec<u8>>>>,\n    db:   Arc<RwLock<HashMap<String, Category>>>,\n}\n\nimpl Default for MemoryDB {\n    fn default() -> Self {\n        MemoryDB {\n            trie: Default::default(),\n            db:   Default::default(),\n        }\n    }\n}\n\nimpl Deref for MemoryDB {\n    type Target = Arc<RwLock<HashMap<Vec<u8>, Vec<u8>>>>;\n\n    fn deref(&self) -> &Self::Target {\n        &self.trie\n    }\n}\n\nimpl cita_trie::DB for MemoryDB {\n    type Error = MemoryDBError;\n\n    fn get(&self, key: &[u8]) -> Result<Option<Vec<u8>>, Self::Error> {\n        Ok(self.read().get(key).cloned())\n    }\n\n    fn contains(&self, key: &[u8]) -> Result<bool, Self::Error> {\n        Ok(self.read().contains_key(key))\n    }\n\n    fn insert(&self, key: Vec<u8>, value: Vec<u8>) -> Result<(), Self::Error> {\n        self.write().insert(key, value);\n        Ok(())\n    }\n\n    fn insert_batch(&self, keys: Vec<Vec<u8>>, values: Vec<Vec<u8>>) -> Result<(), Self::Error> {\n        if keys.len() != values.len() {\n            return Err(MemoryDBError::BatchLengthMismatch);\n    
    }\n\n        for (key, value) in keys.into_iter().zip(values.into_iter()) {\n            self.write().insert(key, value);\n        }\n        Ok(())\n    }\n\n    fn remove(&self, key: &[u8]) -> Result<(), Self::Error> {\n        self.write().remove(key);\n        Ok(())\n    }\n\n    fn remove_batch(&self, keys: &[Vec<u8>]) -> Result<(), Self::Error> {\n        for key in keys {\n            self.write().remove(key);\n        }\n\n        Ok(())\n    }\n\n    fn flush(&self) -> Result<(), Self::Error> {\n        Ok(())\n    }\n}\n\npub struct MemoryIterator<'a, S: StorageSchema> {\n    inner: hash_map::Iter<'a, Vec<u8>, Vec<u8>>,\n    pin_s: PhantomData<S>,\n}\n\nimpl<'a, S: StorageSchema> Iterator for MemoryIterator<'a, S> {\n    type Item = ProtocolResult<(<S as StorageSchema>::Key, <S as StorageSchema>::Value)>;\n\n    fn next(&mut self) -> Option<Self::Item> {\n        let kv_decode = |(k_bytes, v_bytes): (&Vec<u8>, &Vec<u8>)| -> ProtocolResult<_> {\n            let k_bytes = Bytes::copy_from_slice(k_bytes.as_ref());\n            let key = <_>::decode_sync(k_bytes)?;\n\n            let v_bytes = Bytes::copy_from_slice(&v_bytes.as_ref());\n            let val = <_>::decode_sync(v_bytes)?;\n\n            Ok((key, val))\n        };\n\n        self.inner.next().map(kv_decode)\n    }\n}\n\npub struct MemoryIntoIterator<'a, S: StorageSchema> {\n    inner: parking_lot::RwLockReadGuard<'a, HashMap<String, Category>>,\n    pin_s: PhantomData<S>,\n}\n\nimpl<'a, 'b: 'a, S: StorageSchema> IntoIterator for &'b MemoryIntoIterator<'a, S> {\n    type IntoIter = StorageIterator<'a, S>;\n    type Item = ProtocolResult<(<S as StorageSchema>::Key, <S as StorageSchema>::Value)>;\n\n    fn into_iter(self) -> Self::IntoIter {\n        Box::new(MemoryIterator {\n            inner: self\n                .inner\n                .get(&S::category().to_string())\n                .expect(\"impossible, already ensure we have category in prepare_iter\")\n                .iter(),\n       
     pin_s: PhantomData::<S>,\n        })\n    }\n}\n\nimpl<'c, S: StorageSchema> IntoIteratorByRef<S> for MemoryIntoIterator<'c, S> {\n    fn ref_to_iter<'a, 'b: 'a>(&'b self) -> StorageIterator<'a, S> {\n        self.into_iter()\n    }\n}\n\n#[async_trait]\nimpl StorageAdapter for MemoryDB {\n    async fn insert<S: StorageSchema>(\n        &self,\n        key: <S as StorageSchema>::Key,\n        val: <S as StorageSchema>::Value,\n    ) -> ProtocolResult<()> {\n        let key = key.encode_sync()?.to_vec();\n        let val = val.encode_sync()?.to_vec();\n\n        let mut db = self.db.write();\n        let db = db\n            .entry(S::category().to_string())\n            .or_insert_with(HashMap::new);\n\n        db.insert(key, val);\n\n        Ok(())\n    }\n\n    async fn get<S: StorageSchema>(\n        &self,\n        key: <S as StorageSchema>::Key,\n    ) -> ProtocolResult<Option<<S as StorageSchema>::Value>> {\n        let key = key.encode_sync()?;\n\n        let mut db = self.db.write();\n        let db = db\n            .entry(S::category().to_string())\n            .or_insert_with(HashMap::new);\n\n        let opt_bytes = db.get(&key.to_vec()).cloned();\n\n        if let Some(bytes) = opt_bytes {\n            let val = <_>::decode_sync(Bytes::copy_from_slice(&bytes))?;\n\n            Ok(Some(val))\n        } else {\n            Ok(None)\n        }\n    }\n\n    async fn remove<S: StorageSchema>(&self, key: <S as StorageSchema>::Key) -> ProtocolResult<()> {\n        let key = key.encode_sync()?.to_vec();\n\n        let mut db = self.db.write();\n        let db = db\n            .entry(S::category().to_string())\n            .or_insert_with(HashMap::new);\n\n        db.remove(&key);\n\n        Ok(())\n    }\n\n    async fn contains<S: StorageSchema>(\n        &self,\n        key: <S as StorageSchema>::Key,\n    ) -> ProtocolResult<bool> {\n        let key = key.encode_sync()?.to_vec();\n\n        let mut db = self.db.write();\n        let db = db\n         
   .entry(S::category().to_string())\n            .or_insert_with(HashMap::new);\n\n        Ok(db.get(&key).is_some())\n    }\n\n    async fn batch_modify<S: StorageSchema>(\n        &self,\n        keys: Vec<<S as StorageSchema>::Key>,\n        vals: Vec<StorageBatchModify<S>>,\n    ) -> ProtocolResult<()> {\n        if keys.len() != vals.len() {\n            return Err(MemoryDBError::BatchLengthMismatch.into());\n        }\n\n        let mut pairs: Vec<(Bytes, Option<Bytes>)> = Vec::with_capacity(keys.len());\n\n        for (key, value) in keys.into_iter().zip(vals.into_iter()) {\n            let key = key.encode_sync()?;\n\n            let value = match value {\n                StorageBatchModify::Insert(value) => Some(value.encode_sync()?),\n                StorageBatchModify::Remove => None,\n            };\n\n            pairs.push((key, value))\n        }\n\n        let mut db = self.db.write();\n        let db = db\n            .entry(S::category().to_string())\n            .or_insert_with(HashMap::new);\n\n        for (key, value) in pairs.into_iter() {\n            match value {\n                Some(value) => db.insert(key.to_vec(), value.to_vec()),\n                None => db.remove(&key.to_vec()),\n            };\n        }\n\n        Ok(())\n    }\n\n    fn prepare_iter<'a, 'b: 'a, S: StorageSchema + 'static, P: AsRef<[u8]> + 'a>(\n        &'b self,\n        _prefix: &P,\n    ) -> ProtocolResult<Box<dyn IntoIteratorByRef<S> + 'a>> {\n        {\n            self.db\n                .write()\n                .entry(S::category().to_string())\n                .or_insert_with(HashMap::new);\n        }\n\n        Ok(Box::new(MemoryIntoIterator {\n            inner: self.db.read(),\n            pin_s: PhantomData::<S>,\n        }))\n    }\n}\n"
  },
  {
    "path": "tests/common/node/full_node.rs",
    "content": "mod builder;\nmod default_start;\nmod error;\nmod memory_db;\n\nuse super::{config, consts, diagnostic, sync::Sync};\nuse builder::MutaBuilder;\n\nuse asset::AssetService;\nuse authorization::AuthorizationService;\nuse derive_more::{Display, From};\nuse metadata::MetadataService;\nuse multi_signature::MultiSignatureService;\nuse protocol::traits::{SDKFactory, Service, ServiceMapping, ServiceSDK};\nuse protocol::{ProtocolError, ProtocolErrorKind, ProtocolResult};\n\nstruct DefaultServiceMapping;\n\nimpl ServiceMapping for DefaultServiceMapping {\n    fn get_service<SDK: 'static + ServiceSDK, Factory: SDKFactory<SDK>>(\n        &self,\n        name: &str,\n        factory: &Factory,\n    ) -> ProtocolResult<Box<dyn Service>> {\n        let sdk = factory.get_sdk(name)?;\n\n        let service = match name {\n            \"authorization\" => {\n                let multi_sig_sdk = factory.get_sdk(\"multi_signature\")?;\n                Box::new(AuthorizationService::new(\n                    sdk,\n                    MultiSignatureService::new(multi_sig_sdk),\n                )) as Box<dyn Service>\n            }\n            \"asset\" => Box::new(AssetService::new(sdk)) as Box<dyn Service>,\n            \"metadata\" => Box::new(MetadataService::new(sdk)) as Box<dyn Service>,\n            \"multi_signature\" => Box::new(MultiSignatureService::new(sdk)) as Box<dyn Service>,\n            _ => {\n                return Err(MappingError::NotFoundService {\n                    service: name.to_owned(),\n                }\n                .into())\n            }\n        };\n\n        Ok(service)\n    }\n\n    fn list_service_name(&self) -> Vec<String> {\n        vec![\n            \"asset\".to_owned(),\n            \"authorization\".to_owned(),\n            \"metadata\".to_owned(),\n            \"multi_signature\".to_owned(),\n        ]\n    }\n}\n\n#[derive(Debug, Display, From)]\nenum MappingError {\n    #[display(fmt = \"service {:?} was not found\", 
service)]\n    NotFoundService { service: String },\n}\n\nimpl std::error::Error for MappingError {}\n\nimpl From<MappingError> for ProtocolError {\n    fn from(err: MappingError) -> ProtocolError {\n        ProtocolError::new(ProtocolErrorKind::Service, Box::new(err))\n    }\n}\n\n// Note: inject running_status\npub async fn run(listen_port: u16, seckey: String, sync: Sync) {\n    let builder = MutaBuilder::new()\n        .config_path(consts::CHAIN_CONFIG_PATH)\n        .genesis_path(consts::CHAIN_GENESIS_PATH)\n        .service_mapping(DefaultServiceMapping {});\n\n    let muta = builder.build(listen_port).expect(\"build\");\n    muta.run(seckey, sync).await.expect(\"run\");\n}\n"
  },
  {
    "path": "tests/common/node/sync.rs",
    "content": "use core_network::{DiagnosticEvent, TrustReport};\nuse derive_more::Display;\nuse protocol::traits::TrustFeedback;\nuse tokio::sync::{\n    broadcast::{channel, Receiver, RecvError, Sender},\n    Barrier, BarrierWaitResult, Mutex,\n};\nuse tokio::time::timeout;\n\nuse std::{\n    sync::atomic::{AtomicBool, Ordering},\n    sync::Arc,\n    time::Duration,\n};\n\nconst SYNC_RECV_TIMEOUT: Duration = Duration::from_secs(60);\n\n#[derive(Debug, Display)]\npub enum SyncError {\n    #[display(fmt = \"timeout\")]\n    Timeout,\n    #[display(fmt = \"recv {}\", _0)]\n    Recv(RecvError),\n    #[display(fmt = \"disconnected\")]\n    Disconected,\n}\n\n#[derive(Debug, Display)]\npub enum SyncEvent {\n    #[display(fmt = \"connected\")]\n    Connected,\n    #[display(fmt = \"remote height {}\", _0)]\n    RemoteHeight(u64),\n    #[display(fmt = \"feedback {}\", _0)]\n    TrustMetric(TrustFeedback),\n    #[display(fmt = \"report {}\", _0)]\n    TrustReport(TrustReport),\n}\n\n#[derive(Clone)]\npub struct Sync {\n    diag_tx:   Sender<DiagnosticEvent>,\n    diag_rx:   Arc<Mutex<Receiver<DiagnosticEvent>>>,\n    barrier:   Arc<Barrier>,\n    connected: Arc<AtomicBool>,\n}\n\nimpl Sync {\n    pub fn new() -> Self {\n        let (diag_tx, diag_rx) = channel(10);\n        let barrier = Arc::new(Barrier::new(2));\n        let connected = Arc::new(AtomicBool::new(false));\n        let diag_rx = Arc::new(Mutex::new(diag_rx));\n\n        Sync {\n            diag_tx,\n            diag_rx,\n            barrier,\n            connected,\n        }\n    }\n\n    pub fn is_connected(&self) -> bool {\n        self.connected.load(Ordering::SeqCst)\n    }\n\n    pub fn set_connected(&self) {\n        self.connected.store(true, Ordering::SeqCst);\n    }\n\n    pub fn disconnect(&self) {\n        self.connected.store(false, Ordering::SeqCst);\n    }\n\n    pub async fn wait(&self) -> BarrierWaitResult {\n        self.barrier.wait().await\n    }\n\n    // # Panic\n    pub async fn 
wait_connected(&self) {\n        let mut count: usize = 2; // Wait client node and full node both be connected to each other\n        while count > 0 {\n            match self.recv().await {\n                Ok(SyncEvent::Connected) => count -= 1,\n                Ok(event) => panic!(\"wait connected, but receive {}\", event),\n                Err(err) => panic!(\"connect to full node failed {:?}\", err),\n            }\n        }\n        self.set_connected();\n\n        loop {\n            match self.recv().await {\n                Ok(SyncEvent::RemoteHeight(height)) if height > 0 => break,\n                Ok(event) => panic!(\"wait remote height, but receive {}\", event),\n                Err(err) => panic!(\"wait remote height failed {:?}\", err),\n            }\n        }\n    }\n\n    pub fn emit(&self, event: DiagnosticEvent) {\n        self.diag_tx.send(event).unwrap();\n    }\n\n    pub async fn recv(&self) -> Result<SyncEvent, SyncError> {\n        match timeout(SYNC_RECV_TIMEOUT, self.diag_rx.lock().await.recv()).await {\n            Err(_) if !self.is_connected() => Err(SyncError::Disconected),\n            Err(_) => Err(SyncError::Timeout),\n            Ok(Err(e)) => Err(SyncError::Recv(e)),\n            Ok(Ok(event)) => match event {\n                DiagnosticEvent::SessionClosed => {\n                    self.disconnect();\n                    Err(SyncError::Disconected)\n                }\n                DiagnosticEvent::RemoteHeight { height } => Ok(SyncEvent::RemoteHeight(height)),\n                DiagnosticEvent::TrustMetric { feedback } => Ok(SyncEvent::TrustMetric(feedback)),\n                DiagnosticEvent::TrustNewInterval { report } => Ok(SyncEvent::TrustReport(report)),\n                DiagnosticEvent::NewSession => Ok(SyncEvent::Connected),\n            },\n        }\n    }\n}\n\nimpl Default for Sync {\n    fn default() -> Self {\n        Sync::new()\n    }\n}\n\nimpl Drop for Sync {\n    fn drop(&mut self) {\n        
self.connected.store(false, Ordering::SeqCst);\n    }\n}\n"
  },
  {
    "path": "tests/common/node.rs",
    "content": "pub mod config;\npub mod consts;\npub mod diagnostic;\npub mod full_node;\npub mod sync;\n\npub use diagnostic::TwinEvent;\n"
  },
  {
    "path": "tests/e2e/jest.config.js",
    "content": "module.exports = {\n  displayName: \"Unit Tests\",\n  testRegex: \"(/.*.(test|spec))\\\\.(ts?|js?)$\",\n  transform: {\n    \"^.+\\\\.ts?$\": \"ts-jest\"\n  },\n  moduleFileExtensions: [\"ts\", \"js\", \"json\"],\n  testTimeout: 50000\n};\n"
  },
  {
    "path": "tests/e2e/package.json",
    "content": "{\n  \"name\": \"muta-e2e-tests\",\n  \"version\": \"1.0.0\",\n  \"description\": \"\",\n  \"author\": \"huwenchao\",\n  \"license\": \"MIT\",\n  \"scripts\": {\n    \"test\": \"jest --color\",\n    \"lint\": \"eslint --fix '{src,test}/**/*.{js,ts}'\",\n    \"prettier\": \"prettier --write **/*.{js,ts,graphql}\"\n  },\n  \"dependencies\": {\n    \"@mutadev/muta-sdk\": \"0.2.0-rc.0\",\n    \"@mutadev/service\": \"0.2.0-rc.0\",\n    \"@types/node\": \"^14.0.14\",\n    \"@types/node-fetch\": \"^2.5.7\",\n    \"apollo-boost\": \"^0.4.4\",\n    \"graphql\": \"^15.2.0\",\n    \"graphql-tag\": \"^2.10.1\",\n    \"node-fetch\": \"^2.6.0\",\n    \"toml\": \"^3.0.0\",\n    \"ts-node\": \"^8.3.0\",\n    \"typescript\": \"^3.5.3\"\n  },\n  \"devDependencies\": {\n    \"@types/jest\": \"^24.0.23\",\n    \"jest\": \"^24.9.0\",\n    \"prettier\": \"^1.19.1\",\n    \"ts-jest\": \"^26.0.0\"\n  }\n}\n"
  },
  {
    "path": "tests/e2e/sdk.test.ts",
    "content": "import { AssetService, MultiSignatureService } from '@mutadev/service'\nimport * as sdk from '@mutadev/muta-sdk';\nimport { mutaClient } from './utils';\n\nconst { Account, retry } = sdk;\nconst { toHex } = sdk.utils;\n\ndescribe(\"API test via @mutadev/muta-sdk-js\", () => {\n  test(\"getLatestBlock\", async () => {\n    let current_height = await mutaClient.getLatestBlockHeight();\n    expect(current_height).toBeGreaterThan(0);\n  });\n\n  test(\"getNoneBlock\", async () => {\n    let block = await mutaClient.getBlock(\"0xffffffff\");\n    expect(block).toBe(null);\n  })\n\n  test(\"getNoneTransaction\", async () => {\n    let tx = await mutaClient.getTransaction(\"0xf56924db538e77bb5951eb5ff0d02b88983c49c45eea30e8ae3e7234b311436c\");\n    expect(tx).toBe(null);\n  })\n\n  test(\"getNoneReceipt\", async () => {\n    let receipt = await mutaClient.getReceipt(\"0xf56924db538e77bb5951eb5ff0d02b88983c49c45eea30e8ae3e7234b311436c\");\n    expect(receipt).toBe(null);\n })\n\n test(\"transfer work\", async () => {\n   const from_addr = \"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\";\n   const from_pk =\n     \"0x5ec982173d54d830b6789cbbbe43eaa2853a5ff752d1ebc1b266cf9790314f8a\";\n   const to_addr = \"muta15a8a9ksxe3hhjpw3l7wz7ry778qg8h9wz8y35p\";\n    const asset_id =\n      \"0xf56924db538e77bb5951eb5ff0d02b88983c49c45eea30e8ae3e7234b311436c\";\n\n    const account = new sdk.Account(from_pk);\n    const assetService = new AssetService(mutaClient, account);\n\n    const from_balance_before = await assetService.read.get_balance({\n      user: from_addr,\n      asset_id: asset_id\n    })!;\n    const to_balance_before = await assetService.read.get_balance({\n      user: to_addr,\n      asset_id: asset_id,\n    })!;\n\n    // transfer\n    expect(account.address).toBe(from_addr);\n\n    await assetService.write.transfer({\n      asset_id: asset_id,\n      to: to_addr,\n      value: 0x01,\n    })\n\n    // check result\n    let from_balance_after = await 
assetService.read.get_balance({\n      user: from_addr,\n      asset_id: asset_id,\n    })!;\n    const to_balance_after = await assetService.read.get_balance({\n      user: to_addr,\n      asset_id: asset_id,\n    })!;\n\n    const c1 = from_balance_before.succeedData.balance as number;\n    expect(from_balance_after.succeedData.balance).toBe(c1 - 1);\n    const c2 = to_balance_before.succeedData.balance as number;\n    expect(to_balance_after.succeedData.balance).toBe(c2 + 1);\n  });\n\n  test('multisig', async () => {\n    const wangYe = Account.fromPrivateKey(\n      '0x1000000000000000000000000000000000000000000000000000000000000000',\n    );\n    const qing = Account.fromPrivateKey(\n      '0x2000000000000000000000000000000000000000000000000000000000000000',\n    );\n\n    const multiSigService = new MultiSignatureService(mutaClient, wangYe);\n\n    var GenerateMultiSigAccountPayload = {\n      owner: wangYe.address,\n      autonomy: false,\n      addr_with_weight: [{ address: wangYe.address, weight: 1 }, { address: qing.address, weight: 1 }],\n      threshold: 2,\n      memo: 'welcome to BiYouCun'\n    };\n    const generated = await multiSigService.write.generate_account(GenerateMultiSigAccountPayload);\n    expect(Number(generated.response.response.code)).toBe(0);\n\n    const multiSigAddress = generated.response.response.succeedData.address;\n    const createAssetTx = await mutaClient.composeTransaction({\n      method: 'create_asset',\n      payload: {\n        name:      'miao',\n        supply:    2077,\n        symbol:    '😺',\n      },\n      serviceName: 'asset',\n      sender: multiSigAddress,\n    });\n\n    const signedCreateAssetTx = wangYe.signTransaction(createAssetTx);\n    try {\n      await mutaClient.sendTransaction(signedCreateAssetTx);\n      throw 'should have failed';\n    } catch(e) {\n      expect(String(e)).toContain('CheckAuthorization');\n    }\n\n    const bothSignedCreateAssetTx = qing.signTransaction(signedCreateAssetTx);\n    const 
txHash = await mutaClient.sendTransaction(bothSignedCreateAssetTx);\n    const receipt = await retry(() => mutaClient.getReceipt(toHex(txHash)));\n    expect(Number(receipt.response.response.code)).toBe(0);\n\n    // MultiSig address balance\n    const asset = JSON.parse(receipt.response.response.succeedData as string);\n    const assetService = new AssetService(mutaClient, wangYe);\n    const balance = await assetService.read.get_balance({\n        asset_id: asset.id,\n        user: multiSigAddress,\n    });\n\n    expect(Number(balance.code)).toBe(0);\n    expect(Number(balance.succeedData.balance)).toBe(2077);\n\n    const updateAccountPayload = {\n      account_address: multiSigAddress,\n      owner: wangYe.address,\n      addr_with_weight: [{ address: wangYe.address, weight: 3 }, { address: qing.address, weight: 1 }],\n      threshold: 4,\n      memo: 'welcome to BiYouCun'\n    };\n\n    const update = await multiSigService.write.update_account(updateAccountPayload);\n    expect(Number(update.response.response.code)).toBe(0);\n\n    const fei = Account.fromPrivateKey(\n      '0x3000000000000000000000000000000000000000000000000000000000000000',\n    );\n\n    var GenerateMultiSigAccountPayload = {\n      owner: wangYe.address,\n      autonomy: false,\n      addr_with_weight: [{ address: multiSigAddress, weight: 2 }, { address: fei.address, weight: 1 }],\n      threshold: 2,\n      memo: 'welcome to CiYouCun'\n    };\n    const newGenerate = await multiSigService.write.generate_account(GenerateMultiSigAccountPayload);\n    expect(Number(newGenerate.response.response.code)).toBe(0);\n\n    const newMultiSigAddress = newGenerate.response.response.succeedData.address;\n    const newAssetTx = await mutaClient.composeTransaction({\n      method: 'create_asset',\n      payload: {\n        name: 'miaomiao',\n        supply: 2078,\n        symbol: '😺😺',\n      },\n      serviceName: 'asset',\n      sender: newMultiSigAddress,\n    });\n\n    const newSignedCreateAssetTx 
= wangYe.signTransaction(newAssetTx);\n    const newBothCreateAssetTx = qing.signTransaction(newSignedCreateAssetTx);\n    const newTxHash = await mutaClient.sendTransaction(newBothCreateAssetTx);\n    const newReceipt = await retry(() => mutaClient.getReceipt(toHex(newTxHash)));\n    expect(Number(newReceipt.response.response.code)).toBe(0);\n\n    const newAsset = JSON.parse(newReceipt.response.response.succeedData as string);\n    const newAssetService = new AssetService(mutaClient, wangYe);\n    const newBalance = await newAssetService.read.get_balance({\n      asset_id: newAsset.id,\n      user: newMultiSigAddress,\n    });\n\n    expect(Number(newBalance.code)).toBe(0);\n    expect(Number(newBalance.succeedData.balance)).toBe(2078);\n  });\n});\n"
  },
  {
    "path": "tests/e2e/tsconfig.json",
    "content": "{\n    \"compilerOptions\": {\n      \"target\": \"es2017\",\n      \"module\": \"commonjs\",\n      \"strict\": true,\n      \"skipLibCheck\": true,\n      \"declaration\": true,\n      \"esModuleInterop\": true,\n      \"noUnusedLocals\": true,\n      \"noUnusedParameters\": true,\n      \"noImplicitReturns\": true,\n      \"noFallthroughCasesInSwitch\": true,\n      \"traceResolution\": false,\n      \"listEmittedFiles\": false,\n      \"listFiles\": false,\n      \"pretty\": true,\n      \"composite\": true,\n      \"lib\": [\"es2017\"],\n      \"sourceMap\": true,\n      \"inlineSources\": true,\n      \"outDir\": \"lib\",\n      \"rootDir\": \"src\"\n    },\n    \"files\": [\"./sdk.test.ts\", \"./utils.ts\"],\n    \"references\": [\n    ]\n}\n"
  },
  {
    "path": "tests/e2e/utils.ts",
    "content": "import fetch from \"node-fetch\";\nimport { createHttpLink } from \"apollo-link-http\";\nimport { InMemoryCache } from \"apollo-cache-inmemory\";\nimport ApolloClient from \"apollo-client\";\nimport { Muta } from \"@mutadev/muta-sdk\";\n\nexport const CHAIN_ID =\n  \"0xb6a4d7da21443f5e816e8700eea87610e6d769657d6b8ec73028457bf2ca4036\";\nexport const API_URL = process.env.API_URL || \"http://localhost:8000/graphql\";\nexport const client = new ApolloClient({\n  link: createHttpLink({\n    uri: API_URL,\n    fetch: fetch\n  }),\n  cache: new InMemoryCache(),\n  defaultOptions: { query: { fetchPolicy: \"no-cache\" } }\n});\nexport const muta = new Muta({\n  endpoint: API_URL,\n  chainId: CHAIN_ID\n});\nexport const mutaClient = muta.client();\n\nexport function makeid(length: number) {\n  var result = \"\";\n  var characters = \"abcdef0123456789\";\n  var charactersLength = characters.length;\n  for (var i = 0; i < length; i++) {\n    result += characters.charAt(Math.floor(Math.random() * charactersLength));\n  }\n  return result;\n}\n\nexport function getNonce() {\n  return makeid(64);\n}\n\nexport function delay(ms: number) {\n  return new Promise(resolve => setTimeout(resolve, ms));\n}\n"
  },
  {
    "path": "tests/e2e/wait-for-it.sh",
    "content": "#!/usr/bin/env bash\n#   Use this script to test if a given TCP host/port are available\n# copy from https://github.com/vishnubob/wait-for-it\n\nWAITFORIT_cmdname=${0##*/}\n\nechoerr() { if [[ $WAITFORIT_QUIET -ne 1 ]]; then echo \"$@\" 1>&2; fi }\n\nusage()\n{\n    cat << USAGE >&2\nUsage:\n    $WAITFORIT_cmdname host:port [-s] [-t timeout] [-- command args]\n    -h HOST | --host=HOST       Host or IP under test\n    -p PORT | --port=PORT       TCP port under test\n                                Alternatively, you specify the host and port as host:port\n    -s | --strict               Only execute subcommand if the test succeeds\n    -q | --quiet                Don't output any status messages\n    -t TIMEOUT | --timeout=TIMEOUT\n                                Timeout in seconds, zero for no timeout\n    -- COMMAND ARGS             Execute command with args after the test finishes\nUSAGE\n    exit 1\n}\n\nwait_for()\n{\n    if [[ $WAITFORIT_TIMEOUT -gt 0 ]]; then\n        echoerr \"$WAITFORIT_cmdname: waiting $WAITFORIT_TIMEOUT seconds for $WAITFORIT_HOST:$WAITFORIT_PORT\"\n    else\n        echoerr \"$WAITFORIT_cmdname: waiting for $WAITFORIT_HOST:$WAITFORIT_PORT without a timeout\"\n    fi\n    WAITFORIT_start_ts=$(date +%s)\n    while :\n    do\n        if [[ $WAITFORIT_ISBUSY -eq 1 ]]; then\n            nc -z $WAITFORIT_HOST $WAITFORIT_PORT\n            WAITFORIT_result=$?\n        else\n            (echo > /dev/tcp/$WAITFORIT_HOST/$WAITFORIT_PORT) >/dev/null 2>&1\n            WAITFORIT_result=$?\n        fi\n        if [[ $WAITFORIT_result -eq 0 ]]; then\n            WAITFORIT_end_ts=$(date +%s)\n            echoerr \"$WAITFORIT_cmdname: $WAITFORIT_HOST:$WAITFORIT_PORT is available after $((WAITFORIT_end_ts - WAITFORIT_start_ts)) seconds\"\n            break\n        fi\n        sleep 1\n    done\n    return $WAITFORIT_result\n}\n\nwait_for_wrapper()\n{\n    # In order to support SIGINT during timeout: http://unix.stackexchange.com/a/57692\n 
   if [[ $WAITFORIT_QUIET -eq 1 ]]; then\n        timeout $WAITFORIT_BUSYTIMEFLAG $WAITFORIT_TIMEOUT $0 --quiet --child --host=$WAITFORIT_HOST --port=$WAITFORIT_PORT --timeout=$WAITFORIT_TIMEOUT &\n    else\n        timeout $WAITFORIT_BUSYTIMEFLAG $WAITFORIT_TIMEOUT $0 --child --host=$WAITFORIT_HOST --port=$WAITFORIT_PORT --timeout=$WAITFORIT_TIMEOUT &\n    fi\n    WAITFORIT_PID=$!\n    trap \"kill -INT -$WAITFORIT_PID\" INT\n    wait $WAITFORIT_PID\n    WAITFORIT_RESULT=$?\n    if [[ $WAITFORIT_RESULT -ne 0 ]]; then\n        echoerr \"$WAITFORIT_cmdname: timeout occurred after waiting $WAITFORIT_TIMEOUT seconds for $WAITFORIT_HOST:$WAITFORIT_PORT\"\n    fi\n    return $WAITFORIT_RESULT\n}\n\n# process arguments\nwhile [[ $# -gt 0 ]]\ndo\n    case \"$1\" in\n        *:* )\n        WAITFORIT_hostport=(${1//:/ })\n        WAITFORIT_HOST=${WAITFORIT_hostport[0]}\n        WAITFORIT_PORT=${WAITFORIT_hostport[1]}\n        shift 1\n        ;;\n        --child)\n        WAITFORIT_CHILD=1\n        shift 1\n        ;;\n        -q | --quiet)\n        WAITFORIT_QUIET=1\n        shift 1\n        ;;\n        -s | --strict)\n        WAITFORIT_STRICT=1\n        shift 1\n        ;;\n        -h)\n        WAITFORIT_HOST=\"$2\"\n        if [[ $WAITFORIT_HOST == \"\" ]]; then break; fi\n        shift 2\n        ;;\n        --host=*)\n        WAITFORIT_HOST=\"${1#*=}\"\n        shift 1\n        ;;\n        -p)\n        WAITFORIT_PORT=\"$2\"\n        if [[ $WAITFORIT_PORT == \"\" ]]; then break; fi\n        shift 2\n        ;;\n        --port=*)\n        WAITFORIT_PORT=\"${1#*=}\"\n        shift 1\n        ;;\n        -t)\n        WAITFORIT_TIMEOUT=\"$2\"\n        if [[ $WAITFORIT_TIMEOUT == \"\" ]]; then break; fi\n        shift 2\n        ;;\n        --timeout=*)\n        WAITFORIT_TIMEOUT=\"${1#*=}\"\n        shift 1\n        ;;\n        --)\n        shift\n        WAITFORIT_CLI=(\"$@\")\n        break\n        ;;\n        --help)\n        usage\n        ;;\n        *)\n        
echoerr \"Unknown argument: $1\"\n        usage\n        ;;\n    esac\ndone\n\nif [[ \"$WAITFORIT_HOST\" == \"\" || \"$WAITFORIT_PORT\" == \"\" ]]; then\n    echoerr \"Error: you need to provide a host and port to test.\"\n    usage\nfi\n\nWAITFORIT_TIMEOUT=${WAITFORIT_TIMEOUT:-15}\nWAITFORIT_STRICT=${WAITFORIT_STRICT:-0}\nWAITFORIT_CHILD=${WAITFORIT_CHILD:-0}\nWAITFORIT_QUIET=${WAITFORIT_QUIET:-0}\n\n# check to see if timeout is from busybox?\nWAITFORIT_TIMEOUT_PATH=$(type -p timeout)\nWAITFORIT_TIMEOUT_PATH=$(realpath $WAITFORIT_TIMEOUT_PATH 2>/dev/null || readlink -f $WAITFORIT_TIMEOUT_PATH)\nif [[ $WAITFORIT_TIMEOUT_PATH =~ \"busybox\" ]]; then\n        WAITFORIT_ISBUSY=1\n        WAITFORIT_BUSYTIMEFLAG=\"-t\"\n\nelse\n        WAITFORIT_ISBUSY=0\n        WAITFORIT_BUSYTIMEFLAG=\"\"\nfi\n\nif [[ $WAITFORIT_CHILD -gt 0 ]]; then\n    wait_for\n    WAITFORIT_RESULT=$?\n    exit $WAITFORIT_RESULT\nelse\n    if [[ $WAITFORIT_TIMEOUT -gt 0 ]]; then\n        wait_for_wrapper\n        WAITFORIT_RESULT=$?\n    else\n        wait_for\n        WAITFORIT_RESULT=$?\n    fi\nfi\n\nif [[ $WAITFORIT_CLI != \"\" ]]; then\n    if [[ $WAITFORIT_RESULT -ne 0 && $WAITFORIT_STRICT -eq 1 ]]; then\n        echoerr \"$WAITFORIT_cmdname: strict mode, refusing to execute subprocess\"\n        exit $WAITFORIT_RESULT\n    fi\n    exec \"${WAITFORIT_CLI[@]}\"\nelse\n    exit $WAITFORIT_RESULT\nfi\n"
  },
  {
    "path": "tests/trust_metric.rs",
    "content": "/// NOTE: Test may panic after drop full node future, which is\n/// expected.\npub mod common;\nmod trust_metric_all;\n"
  },
  {
    "path": "tests/trust_metric_all/client_node.rs",
    "content": "use std::collections::HashSet;\nuse std::convert::TryFrom;\nuse std::iter::FromIterator;\nuse std::net::{IpAddr, Ipv4Addr, SocketAddr};\nuse std::ops::Deref;\nuse std::str::FromStr;\n\nuse common_crypto::{PrivateKey, PublicKey, Secp256k1PrivateKey, ToPublicKey};\nuse core_consensus::message::{\n    FixedBlock, FixedHeight, BROADCAST_HEIGHT, RPC_RESP_SYNC_PULL_BLOCK, RPC_SYNC_PULL_BLOCK,\n};\nuse core_network::{\n    DiagnosticEvent, NetworkConfig, NetworkService, NetworkServiceHandle, PeerId, PeerIdExt,\n    TrustReport,\n};\nuse derive_more::Display;\nuse protocol::traits::{\n    Context, Gossip, MessageCodec, MessageHandler, Priority, Rpc, TrustFeedback,\n};\nuse protocol::types::{Address, Block, BlockHeader, Hash, Proof};\nuse protocol::{async_trait, Bytes};\n\nuse crate::common::node::consts;\nuse crate::common::node::diagnostic::{\n    TrustNewIntervalReq, TrustTwinEventReq, TwinEvent, GOSSIP_TRUST_NEW_INTERVAL,\n    GOSSIP_TRUST_TWIN_EVENT,\n};\nuse crate::common::node::sync::{Sync, SyncError, SyncEvent};\n\n#[derive(Debug, Display)]\npub enum ClientNodeError {\n    #[display(fmt = \"not connected\")]\n    NotConnected,\n\n    #[display(fmt = \"unexpected {}\", _0)]\n    Unexpected(String),\n}\nimpl std::error::Error for ClientNodeError {}\n\nimpl From<SyncError> for ClientNodeError {\n    fn from(err: SyncError) -> Self {\n        match err {\n            SyncError::Recv(err) => ClientNodeError::Unexpected(err.to_string()),\n            SyncError::Timeout => ClientNodeError::Unexpected(err.to_string()),\n            SyncError::Disconected => ClientNodeError::NotConnected,\n        }\n    }\n}\n\ntype ClientResult<T> = Result<T, ClientNodeError>;\n\nstruct DummyPullBlockRpcHandler(NetworkServiceHandle);\n\n#[async_trait]\nimpl MessageHandler for DummyPullBlockRpcHandler {\n    type Message = FixedHeight;\n\n    async fn process(&self, ctx: Context, msg: FixedHeight) -> TrustFeedback {\n        let block = 
FixedBlock::new(mock_block(msg.inner));\n        self.0\n            .response(ctx, RPC_RESP_SYNC_PULL_BLOCK, Ok(block), Priority::High)\n            .await\n            .expect(\"dummy response pull block\");\n\n        TrustFeedback::Neutral\n    }\n}\n\nstruct ReceiveRemoteHeight(Sync);\n\n#[async_trait]\nimpl MessageHandler for ReceiveRemoteHeight {\n    type Message = u64;\n\n    async fn process(&self, _: Context, msg: u64) -> TrustFeedback {\n        self.0.emit(DiagnosticEvent::RemoteHeight { height: msg });\n        TrustFeedback::Neutral\n    }\n}\n\npub struct ClientNode {\n    pub network:        NetworkServiceHandle,\n    pub remote_peer_id: PeerId,\n    pub priv_key:       Secp256k1PrivateKey,\n    pub sync:           Sync,\n}\n\npub async fn connect(\n    full_node_port: u16,\n    full_seckey: String,\n    listen_port: u16,\n    sync: Sync,\n) -> ClientNode {\n    let full_node_peer_id = full_node_peer_id(&full_seckey);\n    let full_node_addr = format!(\"127.0.0.1:{}\", full_node_port);\n\n    let config = NetworkConfig::new()\n        .ping_interval(consts::NETWORK_PING_INTERVAL)\n        .peer_trust_metric(consts::NETWORK_TRUST_METRIC_INTERVAL, None)\n        .expect(\"peer trust\")\n        .bootstraps(vec![(full_node_peer_id.to_base58(), full_node_addr)])\n        .expect(\"test node config\");\n    let priv_key = Secp256k1PrivateKey::generate(&mut rand::rngs::OsRng);\n\n    let mut network = NetworkService::new(config);\n    let handle = network.handle();\n\n    network.set_chain_id(Hash::from_hex(consts::CHAIN_ID).expect(\"chain id\"));\n\n    network\n        .register_endpoint_handler(\n            RPC_SYNC_PULL_BLOCK,\n            DummyPullBlockRpcHandler(handle.clone()),\n        )\n        .expect(\"register consensus rpc pull block\");\n    network\n        .register_rpc_response::<FixedBlock>(RPC_RESP_SYNC_PULL_BLOCK)\n        .expect(\"register consensus rpc response pull block\");\n\n    network\n        
.register_endpoint_handler(BROADCAST_HEIGHT, ReceiveRemoteHeight(sync.clone()))\n        .expect(\"register remote height\");\n\n    let hook_fn = |sync: Sync| -> _ {\n        Box::new(move |event: DiagnosticEvent| {\n            // We only care connected event on client node\n            if let DiagnosticEvent::NewSession = event {\n                sync.emit(event)\n            }\n        })\n    };\n    network.register_diagnostic_hook(hook_fn(sync.clone()));\n\n    network\n        .listen(SocketAddr::new(\n            IpAddr::V4(Ipv4Addr::new(0, 0, 0, 0)),\n            listen_port,\n        ))\n        .await\n        .expect(\"test node listen\");\n\n    tokio::spawn(network);\n    sync.wait_connected().await;\n\n    ClientNode {\n        network: handle,\n        remote_peer_id: full_node_peer_id,\n        priv_key,\n        sync,\n    }\n}\n\nimpl ClientNode {\n    // # Panic\n    pub async fn wait_connected(&self) {\n        self.sync.wait_connected().await\n    }\n\n    pub fn connected(&self) -> bool {\n        let diagnostic = &self.network.diagnostic;\n        let opt_session = diagnostic.session(&self.remote_peer_id);\n\n        self.sync.is_connected() && opt_session.is_some()\n    }\n\n    pub fn connected_session(&self, peer_id: &PeerId) -> Option<usize> {\n        if !self.connected() {\n            None\n        } else {\n            let diagnostic = &self.network.diagnostic;\n            let opt_session = diagnostic.session(peer_id);\n\n            opt_session.map(|sid| sid.value())\n        }\n    }\n\n    pub async fn broadcast<M: MessageCodec>(&self, endpoint: &str, msg: M) -> ClientResult<()> {\n        use Priority::High;\n\n        let sid = match self.connected_session(&self.remote_peer_id) {\n            Some(sid) => sid,\n            None => return Err(ClientNodeError::NotConnected),\n        };\n\n        let ctx = Context::new().with_value::<usize>(\"session_id\", sid);\n        let peers = 
vec![Bytes::from(self.remote_peer_id.clone().into_bytes())];\n\n        match self.multicast(ctx, endpoint, peers, msg, High).await {\n            Err(_) if !self.connected() => Err(ClientNodeError::NotConnected),\n            Err(e) => {\n                let err_msg = format!(\"broadcast to {} {}\", endpoint, e);\n                Err(ClientNodeError::Unexpected(err_msg))\n            }\n            Ok(_) => Ok(()),\n        }\n    }\n\n    pub async fn rpc<M, R>(&self, endpoint: &str, msg: M) -> ClientResult<R>\n    where\n        M: MessageCodec,\n        R: MessageCodec,\n    {\n        let sid = match self.connected_session(&self.remote_peer_id) {\n            Some(sid) => sid,\n            None => return Err(ClientNodeError::NotConnected),\n        };\n\n        let ctx = Context::new().with_value::<usize>(\"session_id\", sid);\n        match self.call::<M, R>(ctx, endpoint, msg, Priority::High).await {\n            Ok(resp) => Ok(resp),\n            Err(e) if e.to_string().to_lowercase().contains(\"timeout\") && !self.connected() => {\n                Err(ClientNodeError::NotConnected)\n            }\n            Err(e) => {\n                let err_msg = format!(\"rpc to {} {}\", endpoint, e);\n                Err(ClientNodeError::Unexpected(err_msg))\n            }\n        }\n    }\n\n    pub async fn get_block(&self, height: u64) -> ClientResult<Block> {\n        let resp = self\n            .rpc::<_, FixedBlock>(RPC_SYNC_PULL_BLOCK, FixedHeight::new(height))\n            .await?;\n        Ok(resp.inner)\n    }\n\n    pub async fn trust_twin_event(&self, event: TwinEvent) -> ClientResult<()> {\n        self.broadcast(GOSSIP_TRUST_TWIN_EVENT, TrustTwinEventReq(event))\n            .await?;\n\n        let mut targets: HashSet<TwinEvent> = if event == TwinEvent::Both {\n            HashSet::from_iter(vec![TwinEvent::Good, TwinEvent::Bad])\n        } else {\n            HashSet::from_iter(vec![event])\n        };\n\n        while !targets.is_empty() {\n       
     let _ = match self.until_trust_processed().await? {\n                TrustFeedback::Bad(_) => targets.remove(&TwinEvent::Bad),\n                TrustFeedback::Good => targets.remove(&TwinEvent::Good),\n                TrustFeedback::Worse(_) => targets.remove(&TwinEvent::Worse),\n                TrustFeedback::Neutral | TrustFeedback::Fatal(_) => {\n                    // No Fatal action yet\n                    println!(\"skip neutral or fatal feedback\");\n                    continue;\n                }\n            };\n        }\n\n        Ok(())\n    }\n\n    pub async fn until_trust_processed(&self) -> ClientResult<TrustFeedback> {\n        loop {\n            let event = self.sync.recv().await?;\n            match event {\n                SyncEvent::TrustMetric(feedback) => return Ok(feedback),\n                SyncEvent::RemoteHeight(_) => continue,\n                _ => return Err(ClientNodeError::Unexpected(event.to_string())),\n            }\n        }\n    }\n\n    pub async fn trust_new_interval(&self) -> ClientResult<TrustReport> {\n        self.broadcast(GOSSIP_TRUST_NEW_INTERVAL, TrustNewIntervalReq(0))\n            .await?;\n\n        loop {\n            let event = self.sync.recv().await?;\n            match event {\n                SyncEvent::TrustReport(report) => return Ok(report),\n                SyncEvent::Connected => {\n                    return Err(ClientNodeError::Unexpected(\"connected\".to_owned()))\n                }\n                SyncEvent::TrustMetric(_) | SyncEvent::RemoteHeight(_) => {\n                    println!(\"skip event {}\", event);\n                    continue;\n                }\n            }\n        }\n    }\n}\n\nimpl Deref for ClientNode {\n    type Target = NetworkServiceHandle;\n\n    fn deref(&self) -> &Self::Target {\n        &self.network\n    }\n}\n\nfn full_node_peer_id(full_seckey: &str) -> PeerId {\n    let seckey = {\n        let key = hex::decode(full_seckey).expect(\"hex private key 
string\");\n        Secp256k1PrivateKey::try_from(key.as_ref()).expect(\"valid private key\")\n    };\n    let pubkey = seckey.pub_key();\n    PeerId::from_pubkey_bytes(pubkey.to_bytes()).expect(\"valid public key\")\n}\n\nfn mock_block(height: u64) -> Block {\n    let block_hash = Hash::digest(Bytes::from(\"22\"));\n    let nonce = Hash::digest(Bytes::from(\"33\"));\n    let addr_str = \"muta14e0lmgck835vm2dfm0w3ckv6svmez8fdgdl705\";\n\n    let proof = Proof {\n        height: 0,\n        round: 0,\n        block_hash,\n        signature: Default::default(),\n        bitmap: Default::default(),\n    };\n\n    let header = BlockHeader {\n        chain_id: nonce.clone(),\n        height,\n        exec_height: height - 1,\n        prev_hash: nonce.clone(),\n        timestamp: 1000,\n        order_root: nonce.clone(),\n        order_signed_transactions_hash: nonce.clone(),\n        confirm_root: Vec::new(),\n        state_root: nonce,\n        receipt_root: Vec::new(),\n        cycles_used: vec![999_999],\n        proposer: Address::from_str(addr_str).unwrap(),\n        proof,\n        validator_version: 1,\n        validators: Vec::new(),\n    };\n\n    Block {\n        header,\n        ordered_tx_hashes: Vec::new(),\n    }\n}\n"
  },
  {
    "path": "tests/trust_metric_all/common.rs",
    "content": "use common_crypto::{\n    Crypto, PrivateKey, PublicKey, Secp256k1, Secp256k1PrivateKey, Signature, ToPublicKey,\n};\nuse protocol::fixed_codec::FixedCodec;\nuse protocol::types::{\n    Address, Hash, JsonString, RawTransaction, SignedTransaction, TransactionRequest,\n};\nuse protocol::{Bytes, BytesMut};\nuse rand::{rngs::OsRng, RngCore};\n\nuse crate::common::node::consts;\n\npub struct SignedTransactionBuilder {\n    chain_id:     Hash,\n    timeout:      u64,\n    cycles_limit: u64,\n    payload:      JsonString,\n}\n\nimpl Default for SignedTransactionBuilder {\n    fn default() -> Self {\n        let chain_id = Hash::from_hex(consts::CHAIN_ID).expect(\"chain id\");\n        let timeout = 19;\n        let cycles_limit = 314_159;\n        let payload = \"test\".to_owned();\n\n        SignedTransactionBuilder {\n            chain_id,\n            timeout,\n            cycles_limit,\n            payload,\n        }\n    }\n}\n\nimpl SignedTransactionBuilder {\n    pub fn chain_id(mut self, chain_id_bytes: Bytes) -> Self {\n        self.chain_id = Hash::digest(chain_id_bytes);\n        self\n    }\n\n    pub fn cycles_limit(mut self, cycles_limit: u64) -> Self {\n        self.cycles_limit = cycles_limit;\n        self\n    }\n\n    pub fn payload(mut self, payload: JsonString) -> Self {\n        self.payload = payload;\n        self\n    }\n\n    pub fn build(self, pk: &Secp256k1PrivateKey) -> SignedTransaction {\n        let nonce = {\n            let mut random_bytes = [0u8; 32];\n            OsRng.fill_bytes(&mut random_bytes);\n            Hash::digest(BytesMut::from(random_bytes.as_ref()).freeze())\n        };\n\n        let request = TransactionRequest {\n            service_name: \"metadata\".to_owned(),\n            method:       \"get_metadata\".to_owned(),\n            payload:      self.payload,\n        };\n\n        let raw = RawTransaction {\n            chain_id: self.chain_id,\n            nonce,\n            timeout: self.timeout,\n 
           cycles_limit: self.cycles_limit,\n            cycles_price: 1,\n            request,\n            sender: Address::from_pubkey_bytes(pk.pub_key().to_bytes()).unwrap(),\n        };\n\n        let raw_bytes = raw.encode_fixed().expect(\"encode raw tx\");\n        let tx_hash = Hash::digest(raw_bytes);\n\n        let sig = Secp256k1::sign_message(&tx_hash.as_bytes(), &pk.to_bytes()).expect(\"sign tx\");\n\n        SignedTransaction {\n            raw,\n            tx_hash,\n            pubkey: Bytes::from(rlp::encode_list::<Vec<u8>, _>(&[pk\n                .pub_key()\n                .to_bytes()\n                .to_vec()])),\n            signature: Bytes::from(rlp::encode_list::<Vec<u8>, _>(&[sig.to_bytes().to_vec()])),\n        }\n    }\n}\n\npub fn stx_builder() -> SignedTransactionBuilder {\n    SignedTransactionBuilder::default()\n}\n"
  },
  {
    "path": "tests/trust_metric_all/consensus.rs",
    "content": "use core_consensus::message::{\n    Choke, Proposal, Vote, BROADCAST_HEIGHT, END_GOSSIP_AGGREGATED_VOTE, END_GOSSIP_SIGNED_CHOKE,\n    END_GOSSIP_SIGNED_PROPOSAL, END_GOSSIP_SIGNED_VOTE, QC,\n};\nuse protocol::traits::TrustFeedback;\n\nuse super::client_node::ClientNodeError;\nuse super::trust_test;\n\n#[test]\nfn should_be_disconnected_for_repeated_undecodeable_proposal_within_four_intervals() {\n    trust_test(move |client_node| {\n        Box::pin(async move {\n            let proposal = Proposal(vec![0000]);\n            for _ in 0..4u8 {\n                if let Err(ClientNodeError::Unexpected(e)) = client_node\n                    .broadcast(END_GOSSIP_SIGNED_PROPOSAL, proposal.clone())\n                    .await\n                {\n                    panic!(\"unexpected {}\", e);\n                }\n\n                loop {\n                    match client_node.until_trust_processed().await {\n                        Ok(TrustFeedback::Worse(_)) => break,\n                        Ok(TrustFeedback::Neutral) => continue,\n                        Ok(feedback) => panic!(\"unexpected feedback {}\", feedback),\n                        _ => panic!(\"fetch trust report\"),\n                    }\n                }\n\n                match client_node.trust_new_interval().await {\n                    Ok(_) => (),\n                    Err(ClientNodeError::NotConnected) => return,\n                    Err(e) => panic!(\"unexpected error {}\", e),\n                }\n            }\n\n            assert!(!client_node.connected());\n        })\n    });\n}\n\n#[test]\nfn should_be_disconnected_for_repeated_undecodeable_vote_within_four_intervals() {\n    trust_test(move |client_node| {\n        Box::pin(async move {\n            let vote = Vote(vec![0000]);\n            for _ in 0..4u8 {\n                if let Err(ClientNodeError::Unexpected(e)) = client_node\n                    .broadcast(END_GOSSIP_SIGNED_VOTE, vote.clone())\n                    
.await\n                {\n                    panic!(\"unexpected {}\", e);\n                }\n\n                loop {\n                    match client_node.until_trust_processed().await {\n                        Ok(TrustFeedback::Worse(_)) => break,\n                        Ok(TrustFeedback::Neutral) => continue,\n                        Ok(feedback) => panic!(\"unexpected feedback {}\", feedback),\n                        _ => panic!(\"fetch trust report\"),\n                    }\n                }\n\n                match client_node.trust_new_interval().await {\n                    Ok(_) => (),\n                    Err(ClientNodeError::NotConnected) => return,\n                    Err(e) => panic!(\"unexpected error {}\", e),\n                }\n            }\n\n            assert!(!client_node.connected());\n        })\n    });\n}\n\n#[test]\nfn should_be_disconnected_for_repeated_undecodeable_qc_within_four_intervals() {\n    trust_test(move |client_node| {\n        Box::pin(async move {\n            let qc = QC(vec![0000]);\n            for _ in 0..4u8 {\n                if let Err(ClientNodeError::Unexpected(e)) = client_node\n                    .broadcast(END_GOSSIP_AGGREGATED_VOTE, qc.clone())\n                    .await\n                {\n                    panic!(\"unexpected {}\", e);\n                }\n\n                loop {\n                    match client_node.until_trust_processed().await {\n                        Ok(TrustFeedback::Worse(_)) => break,\n                        Ok(TrustFeedback::Neutral) => continue,\n                        Ok(feedback) => panic!(\"unexpected feedback {}\", feedback),\n                        _ => panic!(\"fetch trust report\"),\n                    }\n                }\n\n                match client_node.trust_new_interval().await {\n                    Ok(_) => (),\n                    Err(ClientNodeError::NotConnected) => return,\n                    Err(e) => panic!(\"unexpected error {}\", e),\n  
              }\n            }\n\n            assert!(!client_node.connected());\n        })\n    });\n}\n\n#[test]\nfn should_be_disconnected_for_repeated_undecodeable_choke_within_four_intervals() {\n    trust_test(move |client_node| {\n        Box::pin(async move {\n            let choke = Choke(vec![0000]);\n            for _ in 0..4u8 {\n                if let Err(ClientNodeError::Unexpected(e)) = client_node\n                    .broadcast(END_GOSSIP_SIGNED_CHOKE, choke.clone())\n                    .await\n                {\n                    panic!(\"unexpected {}\", e);\n                }\n\n                loop {\n                    match client_node.until_trust_processed().await {\n                        Ok(TrustFeedback::Worse(_)) => break,\n                        Ok(TrustFeedback::Neutral) => continue,\n                        Ok(feedback) => panic!(\"unexpected feedback {}\", feedback),\n                        _ => panic!(\"fetch trust report\"),\n                    }\n                }\n\n                match client_node.trust_new_interval().await {\n                    Ok(_) => (),\n                    Err(ClientNodeError::NotConnected) => return,\n                    Err(e) => panic!(\"unexpected error {}\", e),\n                }\n            }\n\n            assert!(!client_node.connected());\n        })\n    });\n}\n\n#[test]\nfn should_be_disconnected_for_repeated_malicious_new_height_broadcast_within_four_intervals() {\n    trust_test(move |client_node| {\n        Box::pin(async move {\n            for _ in 0..4u8 {\n                if let Err(ClientNodeError::Unexpected(e)) =\n                    client_node.broadcast(BROADCAST_HEIGHT, 99u64).await\n                {\n                    panic!(\"unexpected {}\", e);\n                }\n\n                loop {\n                    match client_node.until_trust_processed().await {\n                        Ok(TrustFeedback::Bad(_)) => break,\n                        
Ok(TrustFeedback::Neutral) => continue,\n                        Ok(feedback) => panic!(\"unexpected feedback {}\", feedback),\n                        _ => panic!(\"fetch trust report\"),\n                    }\n                }\n\n                match client_node.trust_new_interval().await {\n                    Ok(_) => (),\n                    Err(ClientNodeError::NotConnected) => return,\n                    Err(e) => panic!(\"unexpected error {}\", e),\n                }\n            }\n\n            assert!(!client_node.connected());\n        })\n    });\n}\n"
  },
  {
    "path": "tests/trust_metric_all/logger.rs",
    "content": "use std::{collections::HashMap, path::PathBuf};\n\nconst LOGGER_FILTER: &str = \"warn\";\nconst LOGGER_LOG_TO_CONSOLE: bool = true;\nconst LOGGER_CONSOLE_SHOW_FILE_AND_LINE: bool = false;\nconst LOGGER_LOG_TO_FILE: bool = false;\nconst LOGGER_METRICS: bool = false;\nconst LOGGER_FILE_SIZE_LIMIT: u64 = 1024 * 1024 * 1024;\n\n#[allow(dead_code)]\npub fn init() {\n    let log_path = PathBuf::new();\n\n    let mut modules_level = HashMap::new();\n    modules_level.insert(\"core_network\".to_owned(), \"debug\".to_owned());\n\n    common_logger::init(\n        LOGGER_FILTER.to_owned(),\n        LOGGER_LOG_TO_CONSOLE,\n        LOGGER_CONSOLE_SHOW_FILE_AND_LINE,\n        LOGGER_LOG_TO_FILE,\n        LOGGER_METRICS,\n        log_path,\n        LOGGER_FILE_SIZE_LIMIT,\n        modules_level,\n    )\n}\n"
  },
  {
    "path": "tests/trust_metric_all/mempool.rs",
    "content": "use core_mempool::{MsgNewTxs, END_GOSSIP_NEW_TXS};\nuse protocol::{traits::TrustFeedback, types::Hash, Bytes};\n\nuse super::client_node::ClientNodeError;\nuse super::common;\nuse super::trust_test;\n\n#[test]\nfn should_report_good_on_valid_transaction() {\n    trust_test(move |client_node| {\n        Box::pin(async move {\n            let stx = common::stx_builder().build(&client_node.priv_key);\n            let msg_stxs = MsgNewTxs {\n                batch_stxs: vec![stx.clone()],\n            };\n\n            client_node\n                .broadcast(END_GOSSIP_NEW_TXS, msg_stxs)\n                .await\n                .expect(\"broadcast stx\");\n\n            match client_node.until_trust_processed().await {\n                Ok(TrustFeedback::Good) => {}\n                Ok(_) => panic!(\"should be good report\"),\n                _ => panic!(\"fetch trust report\"),\n            }\n        })\n    });\n}\n\n#[test]\nfn should_be_disconnected_for_repeated_wrong_signature_only_within_four_intervals() {\n    trust_test(move |client_node| {\n        Box::pin(async move {\n            let mut stx = common::stx_builder().build(&client_node.priv_key);\n            stx.signature = Bytes::from(vec![0]);\n            for _ in 0..4u8 {\n                let msg_stxs = MsgNewTxs {\n                    batch_stxs: vec![stx.clone()],\n                };\n\n                if let Err(ClientNodeError::Unexpected(e)) =\n                    client_node.broadcast(END_GOSSIP_NEW_TXS, msg_stxs).await\n                {\n                    panic!(\"unexpected {}\", e);\n                }\n\n                loop {\n                    match client_node.until_trust_processed().await {\n                        Ok(TrustFeedback::Worse(_)) => break,\n                        Ok(TrustFeedback::Neutral) => continue,\n                        Ok(feedback) => panic!(\"unexpected feedback {}\", feedback),\n                        _ => panic!(\"fetch trust report\"),\n         
           }\n                }\n\n                match client_node.trust_new_interval().await {\n                    Ok(_) => (),\n                    Err(ClientNodeError::NotConnected) => return,\n                    Err(e) => panic!(\"unexpected error {}\", e),\n                }\n            }\n\n            assert!(!client_node.connected());\n        })\n    });\n}\n\n#[test]\nfn should_be_disconnected_for_repeated_wrong_tx_hash_only_within_four_intervals() {\n    trust_test(move |client_node| {\n        Box::pin(async move {\n            let mut stx = common::stx_builder().build(&client_node.priv_key);\n            stx.tx_hash = Hash::digest(Bytes::from(vec![0]));\n            for _ in 0..4u8 {\n                let msg_stxs = MsgNewTxs {\n                    batch_stxs: vec![stx.clone()],\n                };\n\n                if let Err(ClientNodeError::Unexpected(e)) =\n                    client_node.broadcast(END_GOSSIP_NEW_TXS, msg_stxs).await\n                {\n                    panic!(\"unexpected {}\", e);\n                }\n\n                loop {\n                    match client_node.until_trust_processed().await {\n                        Ok(TrustFeedback::Worse(_)) => break,\n                        Ok(TrustFeedback::Neutral) => continue,\n                        Ok(_) => panic!(\"should be good report\"),\n                        _ => panic!(\"fetch trust report\"),\n                    }\n                }\n\n                match client_node.trust_new_interval().await {\n                    Ok(_) => (),\n                    Err(ClientNodeError::NotConnected) => return,\n                    Err(e) => panic!(\"unexpected error {}\", e),\n                }\n            }\n\n            assert!(!client_node.connected());\n        })\n    });\n}\n\n#[test]\nfn should_be_disconnected_for_repeated_exceed_tx_size_limit_only_within_four_intervals() {\n    trust_test(move |client_node| {\n        Box::pin(async move {\n            let stx = 
common::stx_builder()\n                .payload(\"trust-metric\".repeat(1_000))\n                .build(&client_node.priv_key);\n            for _ in 0..4u8 {\n                let msg_stxs = MsgNewTxs {\n                    batch_stxs: vec![stx.clone()],\n                };\n\n                if let Err(ClientNodeError::Unexpected(e)) =\n                    client_node.broadcast(END_GOSSIP_NEW_TXS, msg_stxs).await\n                {\n                    panic!(\"unexpected {}\", e);\n                }\n\n                loop {\n                    match client_node.until_trust_processed().await {\n                        Ok(TrustFeedback::Bad(_)) => break,\n                        Ok(TrustFeedback::Neutral) => continue,\n                        Ok(_) => panic!(\"should be good report\"),\n                        _ => panic!(\"fetch trust report\"),\n                    }\n                }\n\n                match client_node.trust_new_interval().await {\n                    Ok(_) => (),\n                    Err(ClientNodeError::NotConnected) => return,\n                    Err(e) => panic!(\"unexpected error {}\", e),\n                }\n            }\n\n            assert!(!client_node.connected());\n        })\n    });\n}\n\n#[test]\nfn should_be_disconnected_for_repeated_exceed_cycles_limit_only_within_four_intervals() {\n    trust_test(move |client_node| {\n        Box::pin(async move {\n            let stx = common::stx_builder()\n                .cycles_limit(999_999_999_999)\n                .build(&client_node.priv_key);\n            for _ in 0..4u8 {\n                let msg_stxs = MsgNewTxs {\n                    batch_stxs: vec![stx.clone()],\n                };\n\n                if let Err(ClientNodeError::Unexpected(e)) =\n                    client_node.broadcast(END_GOSSIP_NEW_TXS, msg_stxs).await\n                {\n                    panic!(\"unexpected {}\", e);\n                }\n\n                loop {\n                    match 
client_node.until_trust_processed().await {\n                        Ok(TrustFeedback::Bad(_)) => break,\n                        Ok(TrustFeedback::Neutral) => continue,\n                        Ok(_) => panic!(\"should be bad report\"),\n                        _ => panic!(\"fetch trust report\"),\n                    }\n                }\n\n                match client_node.trust_new_interval().await {\n                    Ok(_) => (),\n                    Err(ClientNodeError::NotConnected) => return,\n                    Err(e) => panic!(\"unexpected error {}\", e),\n                }\n            }\n\n            assert!(!client_node.connected());\n        })\n    });\n}\n\n#[test]\nfn should_be_disconnected_for_repeated_wrong_chain_id_only_within_four_intervals() {\n    trust_test(move |client_node| {\n        Box::pin(async move {\n            let stx = common::stx_builder()\n                .chain_id(Bytes::from(vec![0]))\n                .build(&client_node.priv_key);\n            for _ in 0..4u8 {\n                let msg_stxs = MsgNewTxs {\n                    batch_stxs: vec![stx.clone()],\n                };\n\n                if let Err(ClientNodeError::Unexpected(e)) =\n                    client_node.broadcast(END_GOSSIP_NEW_TXS, msg_stxs).await\n                {\n                    panic!(\"unexpected {}\", e);\n                }\n\n                loop {\n                    match client_node.until_trust_processed().await {\n                        Ok(TrustFeedback::Worse(_)) => break,\n                        Ok(TrustFeedback::Neutral) => continue,\n                        Ok(_) => panic!(\"should be worse report\"),\n                        _ => panic!(\"fetch trust report\"),\n                    }\n                }\n\n                match client_node.trust_new_interval().await {\n                    Ok(_) => (),\n                    Err(ClientNodeError::NotConnected) => return,\n                    Err(e) => panic!(\"unexpected error {}\", e),\n                }\n            }\n\n            assert!(!client_node.connected());\n        })\n    });\n}\n"
  },
  {
    "path": "tests/trust_metric_all/mod.rs",
    "content": "#![allow(clippy::mutable_key_type)]\n\nmod client_node;\nmod common;\nmod consensus;\nmod logger;\nmod mempool;\n\nuse std::panic;\n\nuse common_crypto::{PrivateKey, Secp256k1PrivateKey};\nuse futures::future::BoxFuture;\n\nuse crate::common::node::sync::Sync;\nuse crate::common::{available_port_pair, node};\nuse client_node::{ClientNode, ClientNodeError};\n\nfn trust_test(test: impl FnOnce(ClientNode) -> BoxFuture<'static, ()> + Send + 'static) {\n    let (full_port, client_port) = available_port_pair();\n    let mut rt = tokio::runtime::Runtime::new().expect(\"create runtime\");\n    let local = tokio::task::LocalSet::new();\n\n    local.block_on(&mut rt, async move {\n        let sync = Sync::new();\n        let full_seckey = {\n            let key = Secp256k1PrivateKey::generate(&mut rand::rngs::OsRng);\n            hex::encode(key.to_bytes()).to_string()\n        };\n        tokio::task::spawn_local(node::full_node::run(\n            full_port,\n            full_seckey.clone(),\n            sync.clone(),\n        ));\n\n        // Wait full node network initialization\n        sync.wait().await;\n\n        let handle = tokio::spawn(async move {\n            let client_node = client_node::connect(full_port, full_seckey, client_port, sync).await;\n\n            test(client_node).await;\n        });\n\n        handle.await.expect(\"test failed\");\n    });\n}\n\n#[test]\nfn trust_metric_basic_setup_test() {\n    trust_test(move |client_node| {\n        Box::pin(async move {\n            let block = client_node.get_block(0).await.expect(\"get genesis\");\n            assert_eq!(block.header.height, 0);\n        })\n    });\n}\n\n#[test]\nfn should_have_working_trust_diagnostic() {\n    trust_test(move |client_node| {\n        Box::pin(async move {\n            client_node\n                .trust_twin_event(node::TwinEvent::Both)\n                .await\n                .expect(\"test trust twin event\");\n\n            let report = 
client_node.trust_new_interval().await.unwrap();\n            assert_eq!(report.good_events, 1, \"should have 1 good event\");\n            assert_eq!(report.bad_events, 1, \"should have 1 bad event\");\n        })\n    });\n}\n\n#[test]\nfn should_be_disconnected_for_repeated_bad_only_within_four_intervals_from_max_score() {\n    trust_test(move |client_node| {\n        Box::pin(async move {\n            // Repeat at least 30 intervals\n            let mut count = 30u8;\n            while count > 0 {\n                count -= 1;\n\n                client_node\n                    .trust_twin_event(node::TwinEvent::Good)\n                    .await\n                    .expect(\"test trust twin event\");\n\n                let report = client_node\n                    .trust_new_interval()\n                    .await\n                    .expect(\"test trust new interval\");\n\n                if report.score >= 95 {\n                    break;\n                }\n            }\n\n            for _ in 0..4u8 {\n                if let Err(ClientNodeError::Unexpected(e)) =\n                    client_node.trust_twin_event(node::TwinEvent::Bad).await\n                {\n                    panic!(\"unexpected {}\", e);\n                }\n\n                match client_node.trust_new_interval().await {\n                    Ok(_) => continue,\n                    Err(ClientNodeError::NotConnected) => return,\n                    Err(e) => panic!(\"unexpected error {}\", e),\n                }\n            }\n\n            assert!(!client_node.connected());\n        })\n    });\n}\n\n#[test]\nfn should_be_disconnected_for_repeated_s_strategy_within_17_intervals_from_max_score() {\n    trust_test(move |client_node| {\n        Box::pin(async move {\n            // Repeat at least 30 intervals\n            let mut count = 30u8;\n            while count > 0 {\n                count -= 1;\n\n                client_node\n                    .trust_twin_event(node::TwinEvent::Good)\n                    .await\n                    .expect(\"test trust twin event\");\n\n                let report = client_node\n                    .trust_new_interval()\n                    .await\n                    .expect(\"test trust new interval\");\n\n                if report.score >= 95 {\n                    break;\n                }\n            }\n\n            for _ in 0..17u8 {\n                if let Err(ClientNodeError::Unexpected(e)) =\n                    client_node.trust_twin_event(node::TwinEvent::Worse).await\n                {\n                    panic!(\"unexpected {}\", e);\n                }\n\n                if let Err(ClientNodeError::Unexpected(e)) =\n                    client_node.trust_twin_event(node::TwinEvent::Good).await\n                {\n                    panic!(\"unexpected {}\", e);\n                }\n\n                match client_node.trust_new_interval().await {\n                    Ok(_) => (),\n                    Err(ClientNodeError::NotConnected) => return,\n                    Err(e) => panic!(\"unexpected error {}\", e),\n                };\n\n                if let Err(ClientNodeError::Unexpected(e)) =\n                    client_node.trust_twin_event(node::TwinEvent::Good).await\n                {\n                    panic!(\"unexpected {}\", e);\n                }\n\n                match client_node.trust_new_interval().await {\n                    Ok(_) => continue,\n                    Err(ClientNodeError::NotConnected) => return,\n                    Err(e) => panic!(\"unexpected error {}\", e),\n                };\n            }\n\n            assert!(!client_node.connected());\n        })\n    });\n}\n\n#[test]\nfn should_keep_connected_for_z_strategy_but_have_lower_score() {\n    trust_test(move |client_node| {\n        Box::pin(async move {\n            let mut base_report = None;\n\n            // Repeat at least 30 intervals\n            let mut count = 30u8;\n            while count > 0 {\n                count -= 1;\n\n                client_node\n                    .trust_twin_event(node::TwinEvent::Good)\n                    .await\n                    .expect(\"test trust twin event\");\n\n                let report = client_node\n                    .trust_new_interval()\n                    .await\n                    .expect(\"test trust new interval\");\n\n                if report.score >= 95 {\n                    base_report = Some(report);\n                    break;\n                }\n            }\n\n            let mut report = base_report.expect(\"should have base report\");\n\n            for _ in 0..100u8 {\n                if let Err(ClientNodeError::Unexpected(e)) =\n                    client_node.trust_twin_event(node::TwinEvent::Bad).await\n                {\n                    panic!(\"unexpected {}\", e);\n                }\n\n                if let Err(ClientNodeError::Unexpected(e)) =\n                    client_node.trust_twin_event(node::TwinEvent::Good).await\n                {\n                    panic!(\"unexpected {}\", e);\n                }\n\n                let latest_report = match client_node.trust_new_interval().await {\n                    Ok(report) => report,\n                    Err(ClientNodeError::NotConnected) => return,\n                    Err(e) => panic!(\"unexpected error {}\", e),\n                };\n\n                assert!(latest_report.score <= report.score);\n                report = latest_report;\n            }\n\n            assert!(client_node.connected(), \"should be connected\");\n        })\n    });\n}\n\n#[test]\nfn should_able_to_reconnect_after_trust_metric_soft_ban() {\n    trust_test(move |client_node| {\n        Box::pin(async move {\n            let mut count = 30u8;\n\n            while count > 0 {\n                count -= 1;\n\n                if let Err(ClientNodeError::Unexpected(e)) =\n                    client_node.trust_twin_event(node::TwinEvent::Bad).await\n                {\n                    panic!(\"unexpected {}\", e);\n                }\n\n                match client_node.trust_new_interval().await {\n                    Ok(_) => (),\n                    Err(ClientNodeError::NotConnected) => return,\n                    Err(e) => panic!(\"unexpected error {}\", e),\n                };\n\n                if !client_node.connected() {\n                    break;\n                }\n            }\n\n            assert!(!client_node.connected(), \"should be disconnected\");\n\n            // Ensure we sleep longer than the soft-ban back-off time\n            let soft_ban_duration =\n                node::consts::NETWORK_SOFT_BAND_DURATION.expect(\"soft ban\") * 2u64;\n            tokio::time::delay_for(std::time::Duration::from_secs(soft_ban_duration)).await;\n\n            client_node.wait_connected().await;\n        })\n    });\n}\n"
  },
  {
    "path": "tests/verify_chain_id.rs",
"content": "/// NOTE: Test may panic after dropping the full node future, which is\n/// expected.\npub mod common;\n\nuse std::convert::TryFrom;\nuse std::net::{IpAddr, Ipv4Addr, SocketAddr};\nuse std::ops::Deref;\n\nuse common_crypto::{PrivateKey, PublicKey, Secp256k1PrivateKey, ToPublicKey};\nuse core_consensus::message::{\n    FixedBlock, FixedHeight, BROADCAST_HEIGHT, RPC_RESP_SYNC_PULL_BLOCK, RPC_SYNC_PULL_BLOCK,\n};\nuse core_network::{\n    DiagnosticEvent, NetworkConfig, NetworkService, NetworkServiceHandle, PeerId, PeerIdExt,\n};\nuse derive_more::Display;\nuse protocol::traits::{Context, MessageCodec, MessageHandler, Priority, Rpc, TrustFeedback};\nuse protocol::types::{Block, Hash};\nuse protocol::{async_trait, Bytes};\n\nuse crate::common::available_port_pair;\nuse crate::common::node::consts;\nuse crate::common::node::full_node;\nuse crate::common::node::sync::{Sync, SyncError};\n\n#[test]\nfn should_be_disconnected_due_to_different_chain_id() {\n    let (full_port, client_port) = available_port_pair();\n    let mut rt = tokio::runtime::Runtime::new().expect(\"create runtime\");\n    let local = tokio::task::LocalSet::new();\n\n    local.block_on(&mut rt, async move {\n        let sync = Sync::new();\n        let full_seckey = {\n            let key = Secp256k1PrivateKey::generate(&mut rand::rngs::OsRng);\n            hex::encode(key.to_bytes()).to_string()\n        };\n        tokio::task::spawn_local(full_node::run(full_port, full_seckey.clone(), sync.clone()));\n\n        // Wait full node network initialization\n        sync.wait().await;\n\n        let chain_id = Hash::digest(Bytes::from_static(b\"beautiful world\"));\n        let full_node_peer_id = full_node_peer_id(&full_seckey);\n        let full_node_addr = format!(\"127.0.0.1:{}\", full_port);\n\n        let config = NetworkConfig::new()\n            .ping_interval(consts::NETWORK_PING_INTERVAL)\n            .peer_trust_metric(consts::NETWORK_TRUST_METRIC_INTERVAL, None)\n            .expect(\"peer trust\")\n            .bootstraps(vec![(full_node_peer_id.to_base58(), full_node_addr)])\n            .expect(\"test node config\");\n\n        let mut network = NetworkService::new(config);\n\n        network.set_chain_id(chain_id);\n\n        network\n            .register_endpoint_handler(BROADCAST_HEIGHT, ReceiveRemoteHeight(sync.clone()))\n            .expect(\"register remote height\");\n\n        let hook_fn = |sync: Sync| -> _ {\n            Box::new(move |event: DiagnosticEvent| {\n                // We only care about the connected event on the client node\n                if let DiagnosticEvent::NewSession = event {\n                    sync.emit(event)\n                }\n            })\n        };\n        network.register_diagnostic_hook(hook_fn(sync.clone()));\n\n        network\n            .listen(SocketAddr::new(\n                IpAddr::V4(Ipv4Addr::new(0, 0, 0, 0)),\n                client_port,\n            ))\n            .await\n            .expect(\"test node listen\");\n        tokio::spawn(network);\n\n        match sync.recv().await {\n            Err(SyncError::Disconected) => (),\n            Err(err) => panic!(\"unexpected err {}\", err),\n            Ok(event) => panic!(\"unexpected event {}\", event),\n        }\n    });\n}\n\n#[test]\nfn should_be_connected_with_same_chain_id() {\n    let (full_port, client_port) = available_port_pair();\n    let mut rt = tokio::runtime::Runtime::new().expect(\"create runtime\");\n    let local = tokio::task::LocalSet::new();\n\n    local.block_on(&mut rt, async move {\n        let sync = Sync::new();\n        let full_seckey = {\n            let key = Secp256k1PrivateKey::generate(&mut rand::rngs::OsRng);\n            hex::encode(key.to_bytes()).to_string()\n        };\n        tokio::task::spawn_local(full_node::run(full_port, full_seckey.clone(), sync.clone()));\n\n        // Wait full node network initialization\n        sync.wait().await;\n        let chain_id = 
Hash::from_hex(consts::CHAIN_ID).expect(\"chain id\");\n        let client_node =\n            connect(full_port, full_seckey, chain_id, client_port, sync.clone()).await;\n\n        let block = client_node.get_block(0).await.expect(\"get genesis\");\n        assert_eq!(block.header.height, 0);\n    });\n}\n\n#[derive(Debug, Display)]\nenum ClientNodeError {\n    #[display(fmt = \"not connected\")]\n    NotConnected,\n\n    #[display(fmt = \"unexpected {}\", _0)]\n    Unexpected(String),\n}\nimpl std::error::Error for ClientNodeError {}\n\nimpl From<SyncError> for ClientNodeError {\n    fn from(err: SyncError) -> Self {\n        match err {\n            SyncError::Recv(err) => ClientNodeError::Unexpected(err.to_string()),\n            SyncError::Timeout => ClientNodeError::Unexpected(err.to_string()),\n            SyncError::Disconected => ClientNodeError::NotConnected,\n        }\n    }\n}\n\ntype ClientResult<T> = Result<T, ClientNodeError>;\n\nstruct ReceiveRemoteHeight(Sync);\n\n#[async_trait]\nimpl MessageHandler for ReceiveRemoteHeight {\n    type Message = u64;\n\n    async fn process(&self, _: Context, msg: u64) -> TrustFeedback {\n        self.0.emit(DiagnosticEvent::RemoteHeight { height: msg });\n        TrustFeedback::Neutral\n    }\n}\nstruct ClientNode {\n    pub network:        NetworkServiceHandle,\n    pub remote_peer_id: PeerId,\n    pub priv_key:       Secp256k1PrivateKey,\n    pub sync:           Sync,\n}\n\nasync fn connect(\n    full_node_port: u16,\n    full_seckey: String,\n    chain_id: Hash,\n    listen_port: u16,\n    sync: Sync,\n) -> ClientNode {\n    let full_node_peer_id = full_node_peer_id(&full_seckey);\n    let full_node_addr = format!(\"127.0.0.1:{}\", full_node_port);\n\n    let config = NetworkConfig::new()\n        .ping_interval(consts::NETWORK_PING_INTERVAL)\n        .peer_trust_metric(consts::NETWORK_TRUST_METRIC_INTERVAL, None)\n        .expect(\"peer trust\")\n        .bootstraps(vec![(full_node_peer_id.to_base58(), 
full_node_addr)])\n        .expect(\"test node config\");\n    let priv_key = Secp256k1PrivateKey::generate(&mut rand::rngs::OsRng);\n\n    let mut network = NetworkService::new(config);\n    let handle = network.handle();\n\n    network.set_chain_id(chain_id);\n\n    network\n        .register_rpc_response::<FixedBlock>(RPC_RESP_SYNC_PULL_BLOCK)\n        .expect(\"register consensus rpc response pull block\");\n\n    network\n        .register_endpoint_handler(BROADCAST_HEIGHT, ReceiveRemoteHeight(sync.clone()))\n        .expect(\"register remote height\");\n\n    let hook_fn = |sync: Sync| -> _ {\n        Box::new(move |event: DiagnosticEvent| {\n            // We only care about the connected event on the client node\n            if let DiagnosticEvent::NewSession = event {\n                sync.emit(event)\n            }\n        })\n    };\n    network.register_diagnostic_hook(hook_fn(sync.clone()));\n\n    network\n        .listen(SocketAddr::new(\n            IpAddr::V4(Ipv4Addr::new(0, 0, 0, 0)),\n            listen_port,\n        ))\n        .await\n        .expect(\"test node listen\");\n\n    tokio::spawn(network);\n    sync.wait_connected().await;\n\n    ClientNode {\n        network: handle,\n        remote_peer_id: full_node_peer_id,\n        priv_key,\n        sync,\n    }\n}\n\nimpl ClientNode {\n    pub fn connected(&self) -> bool {\n        let diagnostic = &self.network.diagnostic;\n        let opt_session = diagnostic.session(&self.remote_peer_id);\n\n        self.sync.is_connected() && opt_session.is_some()\n    }\n\n    pub fn connected_session(&self, peer_id: &PeerId) -> Option<usize> {\n        if !self.connected() {\n            None\n        } else {\n            let diagnostic = &self.network.diagnostic;\n            let opt_session = diagnostic.session(peer_id);\n\n            opt_session.map(|sid| sid.value())\n        }\n    }\n\n    pub async fn rpc<M, R>(&self, endpoint: &str, msg: M) -> ClientResult<R>\n    where\n        M: MessageCodec,\n       
 R: MessageCodec,\n    {\n        let sid = match self.connected_session(&self.remote_peer_id) {\n            Some(sid) => sid,\n            None => return Err(ClientNodeError::NotConnected),\n        };\n\n        let ctx = Context::new().with_value::<usize>(\"session_id\", sid);\n        match self.call::<M, R>(ctx, endpoint, msg, Priority::High).await {\n            Ok(resp) => Ok(resp),\n            Err(e) if e.to_string().to_lowercase().contains(\"timeout\") && !self.connected() => {\n                Err(ClientNodeError::NotConnected)\n            }\n            Err(e) => {\n                let err_msg = format!(\"rpc to {} {}\", endpoint, e);\n                Err(ClientNodeError::Unexpected(err_msg))\n            }\n        }\n    }\n\n    pub async fn get_block(&self, height: u64) -> ClientResult<Block> {\n        let resp = self\n            .rpc::<_, FixedBlock>(RPC_SYNC_PULL_BLOCK, FixedHeight::new(height))\n            .await?;\n        Ok(resp.inner)\n    }\n}\n\nimpl Deref for ClientNode {\n    type Target = NetworkServiceHandle;\n\n    fn deref(&self) -> &Self::Target {\n        &self.network\n    }\n}\n\nfn full_node_peer_id(full_seckey: &str) -> PeerId {\n    let seckey = {\n        let key = hex::decode(full_seckey).expect(\"hex private key string\");\n        Secp256k1PrivateKey::try_from(key.as_ref()).expect(\"valid private key\")\n    };\n    let pubkey = seckey.pub_key();\n    PeerId::from_pubkey_bytes(pubkey.to_bytes()).expect(\"valid public key\")\n}\n"
  }
]