[
  {
    "path": ".cargo/config",
    "content": "[target.wasm32-unknown-unknown]\nrustflags = [\"-C\", \"link-arg=--max-memory=4294967296\"]\n\n[unstable]\nbuild-std = [\"panic_abort\", \"std\"]\n\n[build]\ntarget = \"x86_64-apple-darwin\""
  },
  {
    "path": ".eslintignore",
    "content": "wasm_bytes.ts"
  },
  {
    "path": ".eslintrc.js",
    "content": "/* eslint-disable no-undef */\nmodule.exports = {\n  root: true,\n  extends: [\n    \"eslint:recommended\",\n    \"plugin:react/recommended\",\n    \"plugin:react-hooks/recommended\",\n    \"plugin:@typescript-eslint/recommended\",\n    \"plugin:security/recommended\"\n  ],\n  parser: \"@typescript-eslint/parser\",\n  parserOptions: {\n    ecmaFeatures: {\n      jsx: true\n    },\n    ecmaVersion: 13,\n    sourceType: \"module\"\n  },\n  plugins: [\"react\", \"@typescript-eslint\", \"security\"],\n  rules: {\n    \"@typescript-eslint/no-var-requires\": \"off\",\n    \"@typescript-eslint/ban-ts-comment\": \"off\",\n    \"@typescript-eslint/no-explicit-any\": \"off\",\n    \"react/react-in-jsx-scope\": \"off\",\n    \"react/prop-types\": \"off\",\n  }\n};\n"
  },
  {
    "path": ".github/workflows/publish.yaml",
    "content": "name: Publish Package to npmjs\non:\n  release:\n    types: [published]\n  workflow_dispatch:\n\njobs:\n  publish:\n    runs-on: macos-latest\n    steps:\n      - uses: actions/checkout@v3\n        with:\n          ref: ${{ github.ref_name }}\n      # Setup Node.js\n      - uses: actions/setup-node@v3\n        with:\n          node-version: 18\n          registry-url: \"https://registry.npmjs.org\"\n      # Setup Rust\n      - uses: actions-rs/toolchain@v1\n        with:\n          toolchain: nightly-2022-10-31\n      - run: rustup component add rust-src\n      - run: rustup target add x86_64-apple-darwin\n      # Install circom-secq\n      - uses: GuillaumeFalourd/clone-github-repo-action@v2\n        with:\n          owner: \"DanTehrani\"\n          repository: \"circom-secq\"\n      - run: cd circom-secq && cargo build --release && cargo install --path circom\n      # Install wasm-pack\n      - uses: jetli/wasm-pack-action@v0.4.0\n        with:\n          version: \"v0.10.3\"\n      - run: cargo test --release\n      - run: yarn\n      - run: yarn build\n      - run: yarn test\n      - run: npm publish\n        working-directory: ./packages/lib\n        env:\n          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}\n"
  },
  {
    "path": ".gitignore",
    "content": "# Generated by Cargo\n# will have compiled files and executables\n/target/\n\n\n# These are backup files generated by rustfmt\n**/*.rs.bk\n\n*.txt\n\nnode_modules/\n\n.next/\nnext-env.d.ts\n\nyarn-error.log\n.DS_Store\n\npkg/\ncircom_input.json\ncircom_witness.wtns\n\n*.ptau\n\nbuild/\ndist/\n\n*.r1cs\n*.sym\n!test_circuit.r1cs\n\npackages/prover/test_circuit/test_circuit_js/\n#input files\npackages/prover/test_circuit/*.json \n\n\nwasmBytes.ts\n**/sage/*.sage.py\npackages/lib/src/circuits/\n\npackages/lib/example/\n"
  },
  {
    "path": ".prettierignore",
    "content": "wasm_bytes.ts"
  },
  {
    "path": ".prettierrc.json",
    "content": "{\n    \"trailingComma\": \"none\",\n    \"tabWidth\": 2,\n    \"semi\": true,\n    \"singleQuote\": false,\n    \"arrowParens\": \"avoid\",\n    \"uppercase\": true\n  }"
  },
  {
    "path": ".vscode/settings.json",
    "content": "{\n    \"editor.formatOnSave\": true,\n    \"cSpell.words\": [\n        \"merkle\",\n        \"NIZK\"\n    ]\n}"
  },
  {
    "path": "Cargo.toml",
    "content": "[workspace]\nmembers = [\n    \"packages/spartan_wasm\",\n    \"packages/secq256k1\",\n    \"packages/poseidon\",\n    \"packages/Spartan-secq\",\n    \"packages/circuit_reader\",\n]"
  },
  {
    "path": "README.md",
    "content": "# Spartan-ecdsa\n\nSpartan-ecdsa (which to our knowledge) is the fastest open-source method to verify ECDSA (secp256k1) signatures in zero-knowledge. It can prove ECDSA group membership 10 times faster than [efficient-zk-ecdsa](https://github.com/personaelabs/efficient-zk-ecdsa), our previous implementation of fast ECDSA signature proving. Please refer to [this blog post](https://personaelabs.org/posts/spartan-ecdsa/) for further information.\n\n## Constraint breakdown\n\nspartan-ecdsa achieves the phenomenal result of **hashing becoming the bottleneck instead of ECC operations** for the `pubkey_membership.circom` circuit. In particular, there are **3,039** constraints for efficient ECDSA signature verification, and **5,037** constraints for a depth 20 merkle tree membership check + 1 Poseidon hash of the ECDSA public key. The drop from the original 1.5 million constraints of [circom-ecdsa](https://github.com/0xPARC/circom-ecdsa) comes primarily from doing right-field arithmetic with secq and avoiding SNARK-unfriendly range checks and big integer math.\n\nWe also use [efficient ECDSA signatures](https://personaelabs.org/posts/efficient-ecdsa-1/) instead of standard ECDSA siagnatures to save an additional **14,505** constraints. To review, the standard ECDSA signature consists of $(r, s)$ for a public key $Q_a$ and message $m$, where $r$ is the x-coordinate of a random elliptic curve point $R$. Standard ECDSA signature verification checks if\n\n```math\nR == m s ^{-1} * G + r s ^{-1} * Q_a\n```\n\nwhere $G$ is the generator point of the curve. The efficient ECDSA signature consists of $s$ as well as $T = r^{-1} * R$ and $U = -r^{-1} * m * G$, which can both be computed outside of the SNARK without breaking correctness. 
Efficient ECDSA signature verification checks if\n\n```math\ns * T + U == Q_a\n```\n\nThus, verifying a standard ECDSA signature instead of the efficient ECDSA signature requires (1) computing $s^{-1}$, $r \\* s^{-1}$, $m \\* s^{-1}$, and (2) an extra ECC scalar multiply to compute $m s ^{-1} * G$. The former computations happen in the scalar field of secp, which is unequal to the scalar field of secq, and so we incur 11,494 additional constraints for the wrong-field math. The latter can use the `Secp256k1Mul` subroutine and incurs 3,011 additional constraints.\n\n## Benchmarks\n\nProving membership to a group of ECDSA public keys\n\n|          Benchmark           |   #   |\n| :--------------------------: | :---: |\n|         Constraints          | 8,076 |\n|   Proving time in browser    |  4s   |\n|   Proving time in Node.js    |  2s   |\n| Verification time in browser |  1s   |\n| Verification time in Node.js | 300ms |\n|          Proof size          | 16kb  |\n\n- Measured on a M1 MacBook Pro with 80Mbps internet speed.\n- Both proving and verification time in browser includes the time to download the circuit.\n\n## Disclaimers\n\n- Spartan-ecdsa is unaudited. Please use it at your own risk.\n- Usage on mobile browsers isn’t currently supported.\n\n## Install\n\n```jsx\nyarn add @personaelabs/spartan-ecdsa\n```\n\n## Development\n\n### Node.js\n\nv18 or later\n\n### Build\n1. Install Circom with secq256k1 support\n\n```\ngit clone https://github.com/DanTehrani/circom-secq\ncd circom-secq && cargo build --release && cargo install --path circom\n```\n\n2. Install [wasm-pack](https://rustwasm.github.io/wasm-pack/installer/)\n\n4. Install dependencies & Build all packages\n\n```jsx\nyarn && yarn build\n```\n"
  },
  {
    "path": "lerna.json",
    "content": "{\n  \"$schema\": \"node_modules/lerna/schemas/lerna-schema.json\",\n  \"useWorkspaces\": true,\n  \"version\": \"0.0.0\"\n}\n"
  },
  {
    "path": "package.json",
    "content": "{\n  \"private\": true,\n  \"name\": \"spartan-ecdsa-monorepo\",\n  \"version\": \"1.0.0\",\n  \"main\": \"index.js\",\n  \"repository\": \"https://github.com/DanTehrani/spartan-wasm.git\",\n  \"author\": \"Daniel Tehrani <contact@dantehrani.com>\",\n  \"scripts\": {\n    \"build\": \"sh ./scripts/build.sh && lerna run build\",\n    \"test\": \"sh ./scripts/test.sh\"\n  },\n  \"devDependencies\": {\n    \"@types/jest\": \"^29.2.4\",\n    \"@typescript-eslint/eslint-plugin\": \"5.49.0\",\n    \"eslint\": \"8.32.0\",\n    \"eslint-plugin-react\": \"7.32.1\",\n    \"eslint-plugin-react-hooks\": \"4.6.0\",\n    \"eslint-plugin-security\": \"1.7.0\",\n    \"lerna\": \"^6.4.0\"\n  },\n  \"workspaces\": [\n    \"packages/lib\",\n    \"packages/benchmark/web\",\n    \"packages/benchmark/node\",\n    \"packages/circuits\"\n  ]\n}\n"
  },
  {
    "path": "packages/Spartan-secq/CODE_OF_CONDUCT.md",
    "content": "# Microsoft Open Source Code of Conduct\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\n\nResources:\n\n- [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/)\n- [Microsoft Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)\n- Contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with questions or concerns\n"
  },
  {
    "path": "packages/Spartan-secq/CONTRIBUTING.md",
    "content": "This project welcomes contributions and suggestions. Most contributions require you to\nagree to a Contributor License Agreement (CLA) declaring that you have the right to,\nand actually do, grant us the rights to use your contribution. For details, visit\nhttps://cla.microsoft.com.\n\nWhen you submit a pull request, a CLA-bot will automatically determine whether you need\nto provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the\ninstructions provided by the bot. You will only need to do this once across all repositories using our CLA.\n\nThis project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).\nFor more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)\nor contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments."
  },
  {
    "path": "packages/Spartan-secq/Cargo.toml",
    "content": "[package]\nname = \"spartan\"\nversion = \"0.7.1\"\nauthors = [\"Srinath Setty <srinath@microsoft.com>\"]\nedition = \"2021\"\ndescription = \"High-speed zkSNARKs without trusted setup\"\ndocumentation = \"https://docs.rs/spartan/\"\nreadme = \"README.md\"\nrepository = \"https://github.com/microsoft/Spartan\"\nlicense-file = \"LICENSE\"\nkeywords = [\"zkSNARKs\", \"cryptography\", \"proofs\"]\n\n[dependencies]\nnum-bigint-dig = \"^0.7\"\nsecq256k1 = { path = \"../secq256k1\" }\nmerlin = \"3.0.0\"\nrand = \"0.7.3\"\ndigest = \"0.8.1\"\nsha3 = \"0.8.2\"\nbyteorder = \"1.3.4\"\nrayon = { version = \"1.3.0\", optional = true }\nserde = { version = \"1.0.106\", features = [\"derive\"] }\nbincode = \"1.2.1\"\nsubtle = { version = \"2.4\", default-features = false }\nrand_core = { version = \"0.6\", default-features = false }\nzeroize = { version = \"1\", default-features = false }\nitertools = \"0.10.0\"\ncolored = \"2.0.0\"\nflate2 = \"1.0.14\"\nthiserror = \"1.0\"\nnum-traits = \"0.2.15\"\nhex-literal = { version = \"0.3\" }\nmultiexp = \"0.2.2\"\n\n[dev-dependencies]\ncriterion = \"0.3.1\"\n\n[lib]\nname = \"libspartan\"\npath = \"src/lib.rs\"\ncrate-type = [\"cdylib\", \"rlib\"]\n\n[[bin]]\nname = \"snark\"\npath = \"profiler/snark.rs\"\n\n[[bin]]\nname = \"nizk\"\npath = \"profiler/nizk.rs\"\n\n[[bench]]\nname = \"snark\"\nharness = false\n\n[[bench]]\nname = \"nizk\"\nharness = false\n"
  },
  {
    "path": "packages/Spartan-secq/LICENSE",
    "content": "    MIT License\n\n    Copyright (c) Microsoft Corporation.\n\n    Permission is hereby granted, free of charge, to any person obtaining a copy\n    of this software and associated documentation files (the \"Software\"), to deal\n    in the Software without restriction, including without limitation the rights\n    to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n    copies of the Software, and to permit persons to whom the Software is\n    furnished to do so, subject to the following conditions:\n\n    The above copyright notice and this permission notice shall be included in all\n    copies or substantial portions of the Software.\n\n    THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n    IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n    FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n    AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n    LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n    OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n    SOFTWARE\n"
  },
  {
    "path": "packages/Spartan-secq/README.md",
    "content": "## Fork of [Spartan](https://github.com/microsoft/Spartan)\n_This fork is still under development._\n\nModify Spartan to operate over the **base field** of secp256k1.\n\n### Changes from the original Spartan\n- Use the secq256k1 crate instead of curve25519-dalek\n- Modify values in scalar.rs (originally ristretto255.rs) \n\nPlease refer to [spartan-ecdsa](https://github.com/personaelabs/spartan-ecdsa) for development status.\n"
  },
  {
    "path": "packages/Spartan-secq/SECURITY.md",
    "content": "<!-- BEGIN MICROSOFT SECURITY.MD V0.0.3 BLOCK -->\n\n## Security\n\nMicrosoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include [Microsoft](https://github.com/Microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet), [Xamarin](https://github.com/xamarin), and [our GitHub organizations](https://opensource.microsoft.com/).\n\nIf you believe you have found a security vulnerability in any Microsoft-owned repository that meets Microsoft's [Microsoft's definition of a security vulnerability](https://docs.microsoft.com/en-us/previous-versions/tn-archive/cc751383(v=technet.10)) of a security vulnerability, please report it to us as described below.\n\n## Reporting Security Issues\n\n**Please do not report security vulnerabilities through public GitHub issues.**\n\nInstead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://msrc.microsoft.com/create-report).\n\nIf you prefer to submit without logging in, send email to [secure@microsoft.com](mailto:secure@microsoft.com).  If possible, encrypt your message with our PGP key; please download it from the the [Microsoft Security Response Center PGP Key page](https://www.microsoft.com/en-us/msrc/pgp-key-msrc).\n\nYou should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://www.microsoft.com/msrc).\n\nPlease include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue:\n\n  * Type of issue (e.g. 
buffer overflow, SQL injection, cross-site scripting, etc.)\n  * Full paths of source file(s) related to the manifestation of the issue\n  * The location of the affected source code (tag/branch/commit or direct URL)\n  * Any special configuration required to reproduce the issue\n  * Step-by-step instructions to reproduce the issue\n  * Proof-of-concept or exploit code (if possible)\n  * Impact of the issue, including how an attacker might exploit the issue\n\nThis information will help us triage your report more quickly.\n\nIf you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://microsoft.com/msrc/bounty) page for more details about our active programs.\n\n## Preferred Languages\n\nWe prefer all communications to be in English.\n\n## Policy\n\nMicrosoft follows the principle of [Coordinated Vulnerability Disclosure](https://www.microsoft.com/en-us/msrc/cvd).\n\n<!-- END MICROSOFT SECURITY.MD BLOCK -->\n"
  },
  {
    "path": "packages/Spartan-secq/benches/nizk.rs",
    "content": "#![allow(clippy::assertions_on_result_states)]\nextern crate byteorder;\nextern crate core;\nextern crate criterion;\nextern crate digest;\nextern crate libspartan;\nextern crate merlin;\nextern crate rand;\nextern crate sha3;\n\nuse libspartan::{Instance, NIZKGens, NIZK};\nuse merlin::Transcript;\n\nuse criterion::*;\n\nfn nizk_prove_benchmark(c: &mut Criterion) {\n  for &s in [10, 12, 16].iter() {\n    let plot_config = PlotConfiguration::default().summary_scale(AxisScale::Logarithmic);\n    let mut group = c.benchmark_group(\"NIZK_prove_benchmark\");\n    group.plot_config(plot_config);\n\n    let num_vars = (2_usize).pow(s as u32);\n    let num_cons = num_vars;\n    let num_inputs = 10;\n\n    let (inst, vars, inputs) = Instance::produce_synthetic_r1cs(num_cons, num_vars, num_inputs);\n\n    let gens = NIZKGens::new(num_cons, num_vars, num_inputs);\n\n    let name = format!(\"NIZK_prove_{}\", num_vars);\n    group.bench_function(&name, move |b| {\n      b.iter(|| {\n        let mut prover_transcript = Transcript::new(b\"example\");\n        NIZK::prove(\n          black_box(&inst),\n          black_box(vars.clone()),\n          black_box(&inputs),\n          black_box(&gens),\n          black_box(&mut prover_transcript),\n        );\n      });\n    });\n    group.finish();\n  }\n}\n\nfn nizk_verify_benchmark(c: &mut Criterion) {\n  for &s in [10, 12, 16].iter() {\n    let plot_config = PlotConfiguration::default().summary_scale(AxisScale::Logarithmic);\n    let mut group = c.benchmark_group(\"NIZK_verify_benchmark\");\n    group.plot_config(plot_config);\n\n    let num_vars = (2_usize).pow(s as u32);\n    let num_cons = num_vars;\n    let num_inputs = 10;\n    let (inst, vars, inputs) = Instance::produce_synthetic_r1cs(num_cons, num_vars, num_inputs);\n\n    let gens = NIZKGens::new(num_cons, num_vars, num_inputs);\n\n    // produce a proof of satisfiability\n    let mut prover_transcript = Transcript::new(b\"example\");\n    let proof = 
NIZK::prove(&inst, vars, &inputs, &gens, &mut prover_transcript);\n\n    let name = format!(\"NIZK_verify_{}\", num_cons);\n    group.bench_function(&name, move |b| {\n      b.iter(|| {\n        let mut verifier_transcript = Transcript::new(b\"example\");\n        assert!(proof\n          .verify(\n            black_box(&inst),\n            black_box(&inputs),\n            black_box(&mut verifier_transcript),\n            black_box(&gens)\n          )\n          .is_ok());\n      });\n    });\n    group.finish();\n  }\n}\n\nfn set_duration() -> Criterion {\n  Criterion::default().sample_size(10)\n}\n\ncriterion_group! {\nname = benches_nizk;\nconfig = set_duration();\ntargets = nizk_prove_benchmark, nizk_verify_benchmark\n}\n\ncriterion_main!(benches_nizk);\n"
  },
  {
    "path": "packages/Spartan-secq/benches/snark.rs",
    "content": "#![allow(clippy::assertions_on_result_states)]\nextern crate libspartan;\nextern crate merlin;\n\nuse libspartan::{Instance, SNARKGens, SNARK};\nuse merlin::Transcript;\n\nuse criterion::*;\n\nfn snark_encode_benchmark(c: &mut Criterion) {\n  for &s in [10, 12, 16].iter() {\n    let plot_config = PlotConfiguration::default().summary_scale(AxisScale::Logarithmic);\n    let mut group = c.benchmark_group(\"SNARK_encode_benchmark\");\n    group.plot_config(plot_config);\n\n    let num_vars = (2_usize).pow(s as u32);\n    let num_cons = num_vars;\n    let num_inputs = 10;\n    let (inst, _vars, _inputs) = Instance::produce_synthetic_r1cs(num_cons, num_vars, num_inputs);\n\n    // produce public parameters\n    let gens = SNARKGens::new(num_cons, num_vars, num_inputs, num_cons);\n\n    // produce a commitment to R1CS instance\n    let name = format!(\"SNARK_encode_{}\", num_cons);\n    group.bench_function(&name, move |b| {\n      b.iter(|| {\n        SNARK::encode(black_box(&inst), black_box(&gens));\n      });\n    });\n    group.finish();\n  }\n}\n\nfn snark_prove_benchmark(c: &mut Criterion) {\n  for &s in [10, 12, 16].iter() {\n    let plot_config = PlotConfiguration::default().summary_scale(AxisScale::Logarithmic);\n    let mut group = c.benchmark_group(\"SNARK_prove_benchmark\");\n    group.plot_config(plot_config);\n\n    let num_vars = (2_usize).pow(s as u32);\n    let num_cons = num_vars;\n    let num_inputs = 10;\n\n    let (inst, vars, inputs) = Instance::produce_synthetic_r1cs(num_cons, num_vars, num_inputs);\n\n    // produce public parameters\n    let gens = SNARKGens::new(num_cons, num_vars, num_inputs, num_cons);\n\n    // produce a commitment to R1CS instance\n    let (comm, decomm) = SNARK::encode(&inst, &gens);\n\n    // produce a proof\n    let name = format!(\"SNARK_prove_{}\", num_cons);\n    group.bench_function(&name, move |b| {\n      b.iter(|| {\n        let mut prover_transcript = Transcript::new(b\"example\");\n        
SNARK::prove(\n          black_box(&inst),\n          black_box(&comm),\n          black_box(&decomm),\n          black_box(vars.clone()),\n          black_box(&inputs),\n          black_box(&gens),\n          black_box(&mut prover_transcript),\n        );\n      });\n    });\n    group.finish();\n  }\n}\n\nfn snark_verify_benchmark(c: &mut Criterion) {\n  for &s in [10, 12, 16].iter() {\n    let plot_config = PlotConfiguration::default().summary_scale(AxisScale::Logarithmic);\n    let mut group = c.benchmark_group(\"SNARK_verify_benchmark\");\n    group.plot_config(plot_config);\n\n    let num_vars = (2_usize).pow(s as u32);\n    let num_cons = num_vars;\n    let num_inputs = 10;\n    let (inst, vars, inputs) = Instance::produce_synthetic_r1cs(num_cons, num_vars, num_inputs);\n\n    // produce public parameters\n    let gens = SNARKGens::new(num_cons, num_vars, num_inputs, num_cons);\n\n    // produce a commitment to R1CS instance\n    let (comm, decomm) = SNARK::encode(&inst, &gens);\n\n    // produce a proof of satisfiability\n    let mut prover_transcript = Transcript::new(b\"example\");\n    let proof = SNARK::prove(\n      &inst,\n      &comm,\n      &decomm,\n      vars,\n      &inputs,\n      &gens,\n      &mut prover_transcript,\n    );\n\n    // verify the proof\n    let name = format!(\"SNARK_verify_{}\", num_cons);\n    group.bench_function(&name, move |b| {\n      b.iter(|| {\n        let mut verifier_transcript = Transcript::new(b\"example\");\n        assert!(proof\n          .verify(\n            black_box(&comm),\n            black_box(&inputs),\n            black_box(&mut verifier_transcript),\n            black_box(&gens)\n          )\n          .is_ok());\n      });\n    });\n    group.finish();\n  }\n}\n\nfn set_duration() -> Criterion {\n  Criterion::default().sample_size(10)\n}\n\ncriterion_group! 
{\nname = benches_snark;\nconfig = set_duration();\ntargets = snark_encode_benchmark, snark_prove_benchmark, snark_verify_benchmark\n}\n\ncriterion_main!(benches_snark);\n"
  },
  {
    "path": "packages/Spartan-secq/examples/cubic.rs",
    "content": "//! Demonstrates how to produces a proof for canonical cubic equation: `x^3 + x + 5 = y`.\n//! The example is described in detail [here].\n//!\n//! The R1CS for this problem consists of the following 4 constraints:\n//! `Z0 * Z0 - Z1 = 0`\n//! `Z1 * Z0 - Z2 = 0`\n//! `(Z2 + Z0) * 1 - Z3 = 0`\n//! `(Z3 + 5) * 1 - I0 = 0`\n//!\n//! [here]: https://medium.com/@VitalikButerin/quadratic-arithmetic-programs-from-zero-to-hero-f6d558cea649\n#![allow(clippy::assertions_on_result_states)]\nuse libspartan::{InputsAssignment, Instance, SNARKGens, VarsAssignment, SNARK};\nuse merlin::Transcript;\nuse rand_core::OsRng;\nuse secq256k1::elliptic_curve::Field;\nuse secq256k1::Scalar;\n\n#[allow(non_snake_case)]\nfn produce_r1cs() -> (\n  usize,\n  usize,\n  usize,\n  usize,\n  Instance,\n  VarsAssignment,\n  InputsAssignment,\n) {\n  // parameters of the R1CS instance\n  let num_cons = 4;\n  let num_vars = 4;\n  let num_inputs = 1;\n  let num_non_zero_entries = 8;\n\n  // We will encode the above constraints into three matrices, where\n  // the coefficients in the matrix are in the little-endian byte order\n  let mut A: Vec<(usize, usize, [u8; 32])> = Vec::new();\n  let mut B: Vec<(usize, usize, [u8; 32])> = Vec::new();\n  let mut C: Vec<(usize, usize, [u8; 32])> = Vec::new();\n\n  let one: [u8; 32] = Scalar::ONE.to_bytes().into();\n\n  // R1CS is a set of three sparse matrices A B C, where is a row for every\n  // constraint and a column for every entry in z = (vars, 1, inputs)\n  // An R1CS instance is satisfiable iff:\n  // Az \\circ Bz = Cz, where z = (vars, 1, inputs)\n\n  // constraint 0 entries in (A,B,C)\n  // constraint 0 is Z0 * Z0 - Z1 = 0.\n  A.push((0, 0, one));\n  B.push((0, 0, one));\n  C.push((0, 1, one));\n\n  // constraint 1 entries in (A,B,C)\n  // constraint 1 is Z1 * Z0 - Z2 = 0.\n  A.push((1, 1, one));\n  B.push((1, 0, one));\n  C.push((1, 2, one));\n\n  // constraint 2 entries in (A,B,C)\n  // constraint 2 is (Z2 + Z0) * 1 - Z3 = 0.\n  
A.push((2, 2, one));\n  A.push((2, 0, one));\n  B.push((2, num_vars, one));\n  C.push((2, 3, one));\n\n  // constraint 3 entries in (A,B,C)\n  // constraint 3 is (Z3 + 5) * 1 - I0 = 0.\n  A.push((3, 3, one));\n  A.push((3, num_vars, Scalar::from(5u32).to_bytes().into()));\n  B.push((3, num_vars, one));\n  C.push((3, num_vars + 1, one));\n\n  let inst = Instance::new(num_cons, num_vars, num_inputs, &A, &B, &C).unwrap();\n\n  // compute a satisfying assignment\n  let mut csprng: OsRng = OsRng;\n  let z0 = Scalar::random(&mut csprng);\n  let z1 = z0 * z0; // constraint 0\n  let z2 = z1 * z0; // constraint 1\n  let z3 = z2 + z0; // constraint 2\n  let i0 = z3 + Scalar::from(5u32); // constraint 3\n\n  // create a VarsAssignment\n  let mut vars: Vec<[u8; 32]> = vec![Scalar::ZERO.to_bytes().into(); num_vars];\n  vars[0] = z0.to_bytes().into();\n  vars[1] = z1.to_bytes().into();\n  vars[2] = z2.to_bytes().into();\n  vars[3] = z3.to_bytes().into();\n  let assignment_vars = VarsAssignment::new(&vars).unwrap();\n\n  // create an InputsAssignment\n  let mut inputs: Vec<[u8; 32]> = vec![Scalar::ZERO.to_bytes().into(); num_inputs];\n  inputs[0] = i0.to_bytes().into();\n  let assignment_inputs = InputsAssignment::new(&inputs).unwrap();\n\n  // check if the instance we created is satisfiable\n  let res = inst.is_sat(&assignment_vars, &assignment_inputs);\n  assert!(res.unwrap(), \"should be satisfied\");\n\n  (\n    num_cons,\n    num_vars,\n    num_inputs,\n    num_non_zero_entries,\n    inst,\n    assignment_vars,\n    assignment_inputs,\n  )\n}\n\nfn main() {\n  // produce an R1CS instance\n  let (\n    num_cons,\n    num_vars,\n    num_inputs,\n    num_non_zero_entries,\n    inst,\n    assignment_vars,\n    assignment_inputs,\n  ) = produce_r1cs();\n\n  // produce public parameters\n  let gens = SNARKGens::new(num_cons, num_vars, num_inputs, num_non_zero_entries);\n\n  // create a commitment to the R1CS instance\n  let (comm, decomm) = SNARK::encode(&inst, &gens);\n\n  // 
produce a proof of satisfiability\n  let mut prover_transcript = Transcript::new(b\"snark_example\");\n  let proof = SNARK::prove(\n    &inst,\n    &comm,\n    &decomm,\n    assignment_vars,\n    &assignment_inputs,\n    &gens,\n    &mut prover_transcript,\n  );\n\n  // verify the proof of satisfiability\n  let mut verifier_transcript = Transcript::new(b\"snark_example\");\n  assert!(proof\n    .verify(&comm, &assignment_inputs, &mut verifier_transcript, &gens)\n    .is_ok());\n  println!(\"proof verification successful!\");\n}\n"
  },
  {
    "path": "packages/Spartan-secq/profiler/nizk.rs",
    "content": "#![allow(non_snake_case)]\n#![allow(clippy::assertions_on_result_states)]\n\nextern crate flate2;\nextern crate libspartan;\nextern crate merlin;\nextern crate rand;\n\nuse flate2::{write::ZlibEncoder, Compression};\nuse libspartan::{Instance, NIZKGens, NIZK};\nuse merlin::Transcript;\n\nfn print(msg: &str) {\n  let star = \"* \";\n  println!(\"{:indent$}{}{}\", \"\", star, msg, indent = 2);\n}\n\npub fn main() {\n  // the list of number of variables (and constraints) in an R1CS instance\n  let inst_sizes = vec![10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20];\n\n  println!(\"Profiler:: NIZK\");\n  for &s in inst_sizes.iter() {\n    let num_vars = (2_usize).pow(s as u32);\n    let num_cons = num_vars;\n    let num_inputs = 10;\n\n    // produce a synthetic R1CSInstance\n    let (inst, vars, inputs) = Instance::produce_synthetic_r1cs(num_cons, num_vars, num_inputs);\n\n    // produce public generators\n    let gens = NIZKGens::new(num_cons, num_vars, num_inputs);\n\n    // produce a proof of satisfiability\n    let mut prover_transcript = Transcript::new(b\"nizk_example\");\n    let proof = NIZK::prove(&inst, vars, &inputs, &gens, &mut prover_transcript);\n\n    let mut encoder = ZlibEncoder::new(Vec::new(), Compression::default());\n    bincode::serialize_into(&mut encoder, &proof).unwrap();\n    let proof_encoded = encoder.finish().unwrap();\n    let msg_proof_len = format!(\"NIZK::proof_compressed_len {:?}\", proof_encoded.len());\n    print(&msg_proof_len);\n\n    // verify the proof of satisfiability\n    let mut verifier_transcript = Transcript::new(b\"nizk_example\");\n    assert!(proof\n      .verify(&inst, &inputs, &mut verifier_transcript, &gens)\n      .is_ok());\n\n    println!();\n  }\n}\n"
  },
  {
    "path": "packages/Spartan-secq/profiler/snark.rs",
    "content": "#![allow(non_snake_case)]\n#![allow(clippy::assertions_on_result_states)]\n\nextern crate flate2;\nextern crate libspartan;\nextern crate merlin;\n\nuse flate2::{write::ZlibEncoder, Compression};\nuse libspartan::{Instance, SNARKGens, SNARK};\nuse merlin::Transcript;\n\nfn print(msg: &str) {\n  let star = \"* \";\n  println!(\"{:indent$}{}{}\", \"\", star, msg, indent = 2);\n}\n\npub fn main() {\n  // the list of number of variables (and constraints) in an R1CS instance\n  let inst_sizes = vec![10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20];\n\n  println!(\"Profiler:: SNARK\");\n  for &s in inst_sizes.iter() {\n    let num_vars = (2_usize).pow(s as u32);\n    let num_cons = num_vars;\n    let num_inputs = 10;\n\n    // produce a synthetic R1CSInstance\n    let (inst, vars, inputs) = Instance::produce_synthetic_r1cs(num_cons, num_vars, num_inputs);\n\n    // produce public generators\n    let gens = SNARKGens::new(num_cons, num_vars, num_inputs, num_cons);\n\n    // create a commitment to R1CSInstance\n    let (comm, decomm) = SNARK::encode(&inst, &gens);\n\n    // produce a proof of satisfiability\n    let mut prover_transcript = Transcript::new(b\"snark_example\");\n    let proof = SNARK::prove(\n      &inst,\n      &comm,\n      &decomm,\n      vars,\n      &inputs,\n      &gens,\n      &mut prover_transcript,\n    );\n\n    let mut encoder = ZlibEncoder::new(Vec::new(), Compression::default());\n    bincode::serialize_into(&mut encoder, &proof).unwrap();\n    let proof_encoded = encoder.finish().unwrap();\n    let msg_proof_len = format!(\"SNARK::proof_compressed_len {:?}\", proof_encoded.len());\n    print(&msg_proof_len);\n\n    // verify the proof of satisfiability\n    let mut verifier_transcript = Transcript::new(b\"snark_example\");\n    assert!(proof\n      .verify(&comm, &inputs, &mut verifier_transcript, &gens)\n      .is_ok());\n\n    println!();\n  }\n}\n"
  },
  {
    "path": "packages/Spartan-secq/rustfmt.toml",
    "content": "edition = \"2018\"\ntab_spaces = 2\nnewline_style = \"Unix\"\nuse_try_shorthand = true\n"
  },
  {
    "path": "packages/Spartan-secq/src/bin/mont_params.rs",
    "content": "use hex_literal::hex;\nuse num_bigint_dig::{BigInt, BigUint, ModInverse, ToBigInt};\nuse num_traits::{FromPrimitive, ToPrimitive};\nuse std::ops::Neg;\n\nfn get_words(n: &BigUint) -> [u64; 4] {\n  let mut words = [0u64; 4];\n  for i in 0..4 {\n    let word = n.clone() >> (64 * i) & BigUint::from(0xffffffffffffffffu64);\n    words[i] = word.to_u64().unwrap();\n  }\n  words\n}\n\nfn render_hex(label: String, words: &[u64; 4]) {\n  println!(\"// {}\", label);\n  for word in words {\n    println!(\"0x{:016x},\", word);\n  }\n}\n\nfn main() {\n  let modulus = BigUint::from_bytes_be(&hex!(\n    \"fffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2f\"\n  ));\n\n  let r = BigUint::from_u8(2)\n    .unwrap()\n    .modpow(&BigUint::from_u64(256).unwrap(), &modulus);\n\n  let r2 = BigUint::from_u8(2)\n    .unwrap()\n    .modpow(&BigUint::from_u64(512).unwrap(), &modulus);\n\n  let r3 = BigUint::from_u8(2)\n    .unwrap()\n    .modpow(&BigUint::from_u64(768).unwrap(), &modulus);\n\n  let two_pow_64 = BigUint::from_u128(18446744073709551616u128).unwrap();\n  let one = BigInt::from_u8(1).unwrap();\n\n  let inv = modulus\n    .clone()\n    .mod_inverse(&two_pow_64)\n    .unwrap()\n    .neg()\n    .modpow(&one, &two_pow_64.to_bigint().unwrap());\n\n  render_hex(\"Modulus\".to_string(), &get_words(&modulus));\n  render_hex(\"R\".to_string(), &get_words(&r));\n  render_hex(\"R2\".to_string(), &get_words(&r2));\n  render_hex(\"R3\".to_string(), &get_words(&r3));\n  render_hex(\"INV\".to_string(), &get_words(&inv.to_biguint().unwrap()));\n}\n"
  },
  {
    "path": "packages/Spartan-secq/src/commitments.rs",
    "content": "use super::group::{GroupElement, VartimeMultiscalarMul};\nuse super::scalar::Scalar;\nuse digest::{ExtendableOutput, Input};\nuse secq256k1::AffinePoint;\nuse sha3::Shake256;\nuse std::io::Read;\n\n#[derive(Debug)]\npub struct MultiCommitGens {\n  pub n: usize,\n  pub G: Vec<GroupElement>,\n  pub h: GroupElement,\n}\n\nimpl MultiCommitGens {\n  pub fn new(n: usize, label: &[u8]) -> Self {\n    let mut shake = Shake256::default();\n    shake.input(label);\n    shake.input(AffinePoint::generator().compress().as_bytes());\n\n    let mut reader = shake.xof_result();\n    let mut gens: Vec<GroupElement> = Vec::new();\n    let mut uniform_bytes = [0u8; 128];\n    for _ in 0..n + 1 {\n      reader.read_exact(&mut uniform_bytes).unwrap();\n      gens.push(AffinePoint::from_uniform_bytes(&uniform_bytes));\n    }\n\n    MultiCommitGens {\n      n,\n      G: gens[..n].to_vec(),\n      h: gens[n],\n    }\n  }\n\n  pub fn clone(&self) -> MultiCommitGens {\n    MultiCommitGens {\n      n: self.n,\n      h: self.h,\n      G: self.G.clone(),\n    }\n  }\n\n  pub fn scale(&self, s: &Scalar) -> MultiCommitGens {\n    MultiCommitGens {\n      n: self.n,\n      h: self.h,\n      G: (0..self.n).map(|i| s * self.G[i]).collect(),\n    }\n  }\n\n  pub fn split_at(&self, mid: usize) -> (MultiCommitGens, MultiCommitGens) {\n    let (G1, G2) = self.G.split_at(mid);\n\n    (\n      MultiCommitGens {\n        n: G1.len(),\n        G: G1.to_vec(),\n        h: self.h,\n      },\n      MultiCommitGens {\n        n: G2.len(),\n        G: G2.to_vec(),\n        h: self.h,\n      },\n    )\n  }\n}\n\npub trait Commitments {\n  fn commit(&self, blind: &Scalar, gens_n: &MultiCommitGens) -> GroupElement;\n}\n\nimpl Commitments for Scalar {\n  fn commit(&self, blind: &Scalar, gens_n: &MultiCommitGens) -> GroupElement {\n    assert_eq!(gens_n.n, 1);\n    GroupElement::vartime_multiscalar_mul(\n      [*self, *blind].to_vec(),\n      [gens_n.G[0], gens_n.h].to_vec(),\n    )\n  }\n}\n\nimpl 
Commitments for Vec<Scalar> {\n  fn commit(&self, blind: &Scalar, gens_n: &MultiCommitGens) -> GroupElement {\n    assert_eq!(gens_n.n, self.len());\n    GroupElement::vartime_multiscalar_mul((*self).clone(), gens_n.G.clone()) + blind * gens_n.h\n  }\n}\n\nimpl Commitments for [Scalar] {\n  fn commit(&self, blind: &Scalar, gens_n: &MultiCommitGens) -> GroupElement {\n    assert_eq!(gens_n.n, self.len());\n    GroupElement::vartime_multiscalar_mul(self.to_vec(), gens_n.G.clone()) + blind * gens_n.h\n  }\n}\n"
  },
  {
    "path": "packages/Spartan-secq/src/dense_mlpoly.rs",
    "content": "#![allow(clippy::too_many_arguments)]\nuse super::commitments::{Commitments, MultiCommitGens};\nuse super::errors::ProofVerifyError;\nuse super::group::{CompressedGroup, GroupElement, VartimeMultiscalarMul};\nuse super::math::Math;\nuse super::nizk::{DotProductProofGens, DotProductProofLog};\nuse super::random::RandomTape;\nuse super::scalar::Scalar;\nuse super::transcript::{AppendToTranscript, ProofTranscript};\nuse crate::group::DecompressEncodedPoint;\nuse core::ops::Index;\nuse merlin::Transcript;\nuse serde::{Deserialize, Serialize};\n\n#[cfg(feature = \"multicore\")]\nuse rayon::prelude::*;\n\n#[derive(Debug)]\npub struct DensePolynomial {\n  num_vars: usize, // the number of variables in the multilinear polynomial\n  len: usize,\n  Z: Vec<Scalar>, // evaluations of the polynomial in all the 2^num_vars Boolean inputs\n}\n\npub struct PolyCommitmentGens {\n  pub gens: DotProductProofGens,\n}\n\nimpl PolyCommitmentGens {\n  // the number of variables in the multilinear polynomial\n  pub fn new(num_vars: usize, label: &'static [u8]) -> PolyCommitmentGens {\n    let (_left, right) = EqPolynomial::compute_factored_lens(num_vars);\n    let gens = DotProductProofGens::new(right.pow2(), label);\n    PolyCommitmentGens { gens }\n  }\n}\n\npub struct PolyCommitmentBlinds {\n  blinds: Vec<Scalar>,\n}\n\n#[derive(Debug, Serialize, Deserialize)]\npub struct PolyCommitment {\n  C: Vec<CompressedGroup>,\n}\n\n#[derive(Debug, Serialize, Deserialize)]\npub struct ConstPolyCommitment {\n  C: CompressedGroup,\n}\n\npub struct EqPolynomial {\n  r: Vec<Scalar>,\n}\n\nimpl EqPolynomial {\n  pub fn new(r: Vec<Scalar>) -> Self {\n    EqPolynomial { r }\n  }\n\n  pub fn evaluate(&self, rx: &[Scalar]) -> Scalar {\n    assert_eq!(self.r.len(), rx.len());\n    (0..rx.len())\n      .map(|i| self.r[i] * rx[i] + (Scalar::one() - self.r[i]) * (Scalar::one() - rx[i]))\n      .product()\n  }\n\n  pub fn evals(&self) -> Vec<Scalar> {\n    let ell = self.r.len();\n\n    let mut 
evals: Vec<Scalar> = vec![Scalar::one(); ell.pow2()];\n    let mut size = 1;\n    for j in 0..ell {\n      // in each iteration, we double the size of chis\n      size *= 2;\n      for i in (0..size).rev().step_by(2) {\n        // copy each element from the prior iteration twice\n        let scalar = evals[i / 2];\n        evals[i] = scalar * self.r[j];\n        evals[i - 1] = scalar - evals[i];\n      }\n    }\n    evals\n  }\n\n  pub fn compute_factored_lens(ell: usize) -> (usize, usize) {\n    (ell / 2, ell - ell / 2)\n  }\n\n  pub fn compute_factored_evals(&self) -> (Vec<Scalar>, Vec<Scalar>) {\n    let ell = self.r.len();\n    let (left_num_vars, _right_num_vars) = EqPolynomial::compute_factored_lens(ell);\n\n    let L = EqPolynomial::new(self.r[..left_num_vars].to_vec()).evals();\n    let R = EqPolynomial::new(self.r[left_num_vars..ell].to_vec()).evals();\n\n    (L, R)\n  }\n}\n\npub struct IdentityPolynomial {\n  size_point: usize,\n}\n\nimpl IdentityPolynomial {\n  pub fn new(size_point: usize) -> Self {\n    IdentityPolynomial { size_point }\n  }\n\n  pub fn evaluate(&self, r: &[Scalar]) -> Scalar {\n    let len = r.len();\n    assert_eq!(len, self.size_point);\n    (0..len)\n      .map(|i| Scalar::from((len - i - 1).pow2() as u64) * r[i])\n      .sum()\n  }\n}\n\nimpl DensePolynomial {\n  pub fn new(Z: Vec<Scalar>) -> Self {\n    DensePolynomial {\n      num_vars: Z.len().log_2(),\n      len: Z.len(),\n      Z,\n    }\n  }\n\n  pub fn get_num_vars(&self) -> usize {\n    self.num_vars\n  }\n\n  pub fn len(&self) -> usize {\n    self.len\n  }\n\n  pub fn clone(&self) -> DensePolynomial {\n    DensePolynomial::new(self.Z[0..self.len].to_vec())\n  }\n\n  pub fn split(&self, idx: usize) -> (DensePolynomial, DensePolynomial) {\n    assert!(idx < self.len());\n    (\n      DensePolynomial::new(self.Z[..idx].to_vec()),\n      DensePolynomial::new(self.Z[idx..2 * idx].to_vec()),\n    )\n  }\n\n  #[cfg(feature = \"multicore\")]\n  fn commit_inner(&self, blinds: 
&[Scalar], gens: &MultiCommitGens) -> PolyCommitment {\n    let L_size = blinds.len();\n    let R_size = self.Z.len() / L_size;\n    assert_eq!(L_size * R_size, self.Z.len());\n    let C = (0..L_size)\n      .into_par_iter()\n      .map(|i| {\n        self.Z[R_size * i..R_size * (i + 1)]\n          .commit(&blinds[i], gens)\n          .compress()\n      })\n      .collect();\n    PolyCommitment { C }\n  }\n\n  #[cfg(not(feature = \"multicore\"))]\n  fn commit_inner(&self, blinds: &[Scalar], gens: &MultiCommitGens) -> PolyCommitment {\n    let L_size = blinds.len();\n    let R_size = self.Z.len() / L_size;\n    assert_eq!(L_size * R_size, self.Z.len());\n    let C = (0..L_size)\n      .map(|i| {\n        self.Z[R_size * i..R_size * (i + 1)]\n          .commit(&blinds[i], gens)\n          .compress()\n      })\n      .collect();\n    PolyCommitment { C }\n  }\n\n  pub fn commit(\n    &self,\n    gens: &PolyCommitmentGens,\n    random_tape: Option<&mut RandomTape>,\n  ) -> (PolyCommitment, PolyCommitmentBlinds) {\n    let n = self.Z.len();\n    let ell = self.get_num_vars();\n    assert_eq!(n, ell.pow2());\n\n    let (left_num_vars, right_num_vars) = EqPolynomial::compute_factored_lens(ell);\n    let L_size = left_num_vars.pow2();\n    let R_size = right_num_vars.pow2();\n    assert_eq!(L_size * R_size, n);\n\n    let blinds = if let Some(t) = random_tape {\n      PolyCommitmentBlinds {\n        blinds: t.random_vector(b\"poly_blinds\", L_size),\n      }\n    } else {\n      PolyCommitmentBlinds {\n        blinds: vec![Scalar::zero(); L_size],\n      }\n    };\n\n    (self.commit_inner(&blinds.blinds, &gens.gens.gens_n), blinds)\n  }\n\n  pub fn bound(&self, L: &[Scalar]) -> Vec<Scalar> {\n    let (left_num_vars, right_num_vars) = EqPolynomial::compute_factored_lens(self.get_num_vars());\n    let L_size = left_num_vars.pow2();\n    let R_size = right_num_vars.pow2();\n    (0..R_size)\n      .map(|i| (0..L_size).map(|j| L[j] * self.Z[j * R_size + i]).sum())\n      
.collect()\n  }\n\n  pub fn bound_poly_var_top(&mut self, r: &Scalar) {\n    let n = self.len() / 2;\n    for i in 0..n {\n      self.Z[i] = self.Z[i] + r * (self.Z[i + n] - self.Z[i]);\n    }\n    self.num_vars -= 1;\n    self.len = n;\n  }\n\n  pub fn bound_poly_var_bot(&mut self, r: &Scalar) {\n    let n = self.len() / 2;\n    for i in 0..n {\n      self.Z[i] = self.Z[2 * i] + r * (self.Z[2 * i + 1] - self.Z[2 * i]);\n    }\n    self.num_vars -= 1;\n    self.len = n;\n  }\n\n  // returns Z(r) in O(n) time\n  pub fn evaluate(&self, r: &[Scalar]) -> Scalar {\n    // r must have a value for each variable\n    assert_eq!(r.len(), self.get_num_vars());\n    let chis = EqPolynomial::new(r.to_vec()).evals();\n    assert_eq!(chis.len(), self.Z.len());\n    DotProductProofLog::compute_dotproduct(&self.Z, &chis)\n  }\n\n  fn vec(&self) -> &Vec<Scalar> {\n    &self.Z\n  }\n\n  pub fn extend(&mut self, other: &DensePolynomial) {\n    // TODO: allow extension even when some vars are bound\n    assert_eq!(self.Z.len(), self.len);\n    let other_vec = other.vec();\n    assert_eq!(other_vec.len(), self.len);\n    self.Z.extend(other_vec);\n    self.num_vars += 1;\n    self.len *= 2;\n    assert_eq!(self.Z.len(), self.len);\n  }\n\n  pub fn merge<'a, I>(polys: I) -> DensePolynomial\n  where\n    I: IntoIterator<Item = &'a DensePolynomial>,\n  {\n    let mut Z: Vec<Scalar> = Vec::new();\n    for poly in polys.into_iter() {\n      Z.extend(poly.vec());\n    }\n\n    // pad the polynomial with zero polynomial at the end\n    Z.resize(Z.len().next_power_of_two(), Scalar::zero());\n\n    DensePolynomial::new(Z)\n  }\n\n  pub fn from_usize(Z: &[usize]) -> Self {\n    DensePolynomial::new(\n      (0..Z.len())\n        .map(|i| Scalar::from(Z[i] as u64))\n        .collect::<Vec<Scalar>>(),\n    )\n  }\n}\n\nimpl Index<usize> for DensePolynomial {\n  type Output = Scalar;\n\n  #[inline(always)]\n  fn index(&self, _index: usize) -> &Scalar {\n    &(self.Z[_index])\n  }\n}\n\nimpl 
AppendToTranscript for PolyCommitment {\n  fn append_to_transcript(&self, label: &'static [u8], transcript: &mut Transcript) {\n    transcript.append_message(label, b\"poly_commitment_begin\");\n    for i in 0..self.C.len() {\n      transcript.append_point(b\"poly_commitment_share\", &self.C[i]);\n    }\n    transcript.append_message(label, b\"poly_commitment_end\");\n  }\n}\n\n#[derive(Debug, Serialize, Deserialize)]\npub struct PolyEvalProof {\n  proof: DotProductProofLog,\n}\n\nimpl PolyEvalProof {\n  fn protocol_name() -> &'static [u8] {\n    b\"polynomial evaluation proof\"\n  }\n\n  pub fn prove(\n    poly: &DensePolynomial,\n    blinds_opt: Option<&PolyCommitmentBlinds>,\n    r: &[Scalar],                  // point at which the polynomial is evaluated\n    Zr: &Scalar,                   // evaluation of \\widetilde{Z}(r)\n    blind_Zr_opt: Option<&Scalar>, // specifies a blind for Zr\n    gens: &PolyCommitmentGens,\n    transcript: &mut Transcript,\n    random_tape: &mut RandomTape,\n  ) -> (PolyEvalProof, CompressedGroup) {\n    transcript.append_protocol_name(PolyEvalProof::protocol_name());\n\n    // assert vectors are of the right size\n    assert_eq!(poly.get_num_vars(), r.len());\n\n    let (left_num_vars, right_num_vars) = EqPolynomial::compute_factored_lens(r.len());\n    let L_size = left_num_vars.pow2();\n    let R_size = right_num_vars.pow2();\n\n    let default_blinds = PolyCommitmentBlinds {\n      blinds: vec![Scalar::zero(); L_size],\n    };\n    let blinds = blinds_opt.map_or(&default_blinds, |p| p);\n\n    assert_eq!(blinds.blinds.len(), L_size);\n\n    let zero = Scalar::zero();\n    let blind_Zr = blind_Zr_opt.map_or(&zero, |p| p);\n\n    // compute the L and R vectors\n    let eq = EqPolynomial::new(r.to_vec());\n    let (L, R) = eq.compute_factored_evals();\n    assert_eq!(L.len(), L_size);\n    assert_eq!(R.len(), R_size);\n\n    // compute the vector underneath L*Z and the L*blinds\n    // compute vector-matrix product between L and Z 
viewed as a matrix\n    let LZ = poly.bound(&L);\n    let LZ_blind: Scalar = (0..L.len()).map(|i| blinds.blinds[i] * L[i]).sum();\n\n    // a dot product proof of size R_size\n    let (proof, _C_LR, C_Zr_prime) = DotProductProofLog::prove(\n      &gens.gens,\n      transcript,\n      random_tape,\n      &LZ,\n      &LZ_blind,\n      &R,\n      Zr,\n      blind_Zr,\n    );\n\n    (PolyEvalProof { proof }, C_Zr_prime)\n  }\n\n  pub fn verify(\n    &self,\n    gens: &PolyCommitmentGens,\n    transcript: &mut Transcript,\n    r: &[Scalar],           // point at which the polynomial is evaluated\n    C_Zr: &CompressedGroup, // commitment to \\widetilde{Z}(r)\n    comm: &PolyCommitment,\n  ) -> Result<(), ProofVerifyError> {\n    transcript.append_protocol_name(PolyEvalProof::protocol_name());\n\n    // compute L and R\n    let eq = EqPolynomial::new(r.to_vec());\n    let (L, R) = eq.compute_factored_evals();\n\n    // compute a weighted sum of commitments and L\n    let C_decompressed = comm.C.iter().map(|pt| pt.decompress().unwrap());\n\n    let C_LZ = GroupElement::vartime_multiscalar_mul(L, C_decompressed.collect()).compress();\n\n    self\n      .proof\n      .verify(R.len(), &gens.gens, transcript, &R, &C_LZ, C_Zr)\n  }\n\n  pub fn verify_plain(\n    &self,\n    gens: &PolyCommitmentGens,\n    transcript: &mut Transcript,\n    r: &[Scalar], // point at which the polynomial is evaluated\n    Zr: &Scalar,  // evaluation \\widetilde{Z}(r)\n    comm: &PolyCommitment,\n  ) -> Result<(), ProofVerifyError> {\n    // compute a commitment to Zr with a blind of zero\n    let C_Zr = Zr.commit(&Scalar::zero(), &gens.gens.gens_1).compress();\n\n    self.verify(gens, transcript, r, &C_Zr, comm)\n  }\n}\n\n#[cfg(test)]\nmod tests {\n  use super::super::scalar::ScalarFromPrimitives;\n  use super::*;\n  use rand_core::OsRng;\n\n  fn evaluate_with_LR(Z: &[Scalar], r: &[Scalar]) -> Scalar {\n    let eq = EqPolynomial::new(r.to_vec());\n    let (L, R) = 
eq.compute_factored_evals();\n\n    let ell = r.len();\n    // ensure ell is even\n    assert!(ell % 2 == 0);\n    // compute n = 2^\\ell\n    let n = ell.pow2();\n    // compute m = sqrt(n) = 2^{\\ell/2}\n    let m = n.square_root();\n\n    // compute vector-matrix product between L and Z viewed as a matrix\n    let LZ = (0..m)\n      .map(|i| (0..m).map(|j| L[j] * Z[j * m + i]).sum())\n      .collect::<Vec<Scalar>>();\n\n    // compute dot product between LZ and R\n    DotProductProofLog::compute_dotproduct(&LZ, &R)\n  }\n\n  #[test]\n  fn check_polynomial_evaluation() {\n    // Z = [1, 2, 1, 4]\n    let Z = vec![\n      Scalar::one(),\n      (2_usize).to_scalar(),\n      (1_usize).to_scalar(),\n      (4_usize).to_scalar(),\n    ];\n\n    // r = [4,3]\n    let r = vec![(4_usize).to_scalar(), (3_usize).to_scalar()];\n\n    let eval_with_LR = evaluate_with_LR(&Z, &r);\n    let poly = DensePolynomial::new(Z);\n\n    let eval = poly.evaluate(&r);\n    assert_eq!(eval, (28_usize).to_scalar());\n    assert_eq!(eval_with_LR, eval);\n  }\n\n  pub fn compute_factored_chis_at_r(r: &[Scalar]) -> (Vec<Scalar>, Vec<Scalar>) {\n    let mut L: Vec<Scalar> = Vec::new();\n    let mut R: Vec<Scalar> = Vec::new();\n\n    let ell = r.len();\n    assert!(ell % 2 == 0); // ensure ell is even\n    let n = ell.pow2();\n    let m = n.square_root();\n\n    // compute row vector L\n    for i in 0..m {\n      let mut chi_i = Scalar::one();\n      for j in 0..ell / 2 {\n        let bit_j = ((m * i) & (1 << (r.len() - j - 1))) > 0;\n        if bit_j {\n          chi_i *= r[j];\n        } else {\n          chi_i *= Scalar::one() - r[j];\n        }\n      }\n      L.push(chi_i);\n    }\n\n    // compute column vector R\n    for i in 0..m {\n      let mut chi_i = Scalar::one();\n      for j in ell / 2..ell {\n        let bit_j = (i & (1 << (r.len() - j - 1))) > 0;\n        if bit_j {\n          chi_i *= r[j];\n        } else {\n          chi_i *= Scalar::one() - r[j];\n        }\n      }\n      
R.push(chi_i);\n    }\n    (L, R)\n  }\n\n  pub fn compute_chis_at_r(r: &[Scalar]) -> Vec<Scalar> {\n    let ell = r.len();\n    let n = ell.pow2();\n    let mut chis: Vec<Scalar> = Vec::new();\n    for i in 0..n {\n      let mut chi_i = Scalar::one();\n      for j in 0..r.len() {\n        let bit_j = (i & (1 << (r.len() - j - 1))) > 0;\n        if bit_j {\n          chi_i *= r[j];\n        } else {\n          chi_i *= Scalar::one() - r[j];\n        }\n      }\n      chis.push(chi_i);\n    }\n    chis\n  }\n\n  pub fn compute_outerproduct(L: Vec<Scalar>, R: Vec<Scalar>) -> Vec<Scalar> {\n    assert_eq!(L.len(), R.len());\n    (0..L.len())\n      .map(|i| (0..R.len()).map(|j| L[i] * R[j]).collect::<Vec<Scalar>>())\n      .collect::<Vec<Vec<Scalar>>>()\n      .into_iter()\n      .flatten()\n      .collect::<Vec<Scalar>>()\n  }\n\n  #[test]\n  fn check_memoized_chis() {\n    let mut csprng: OsRng = OsRng;\n\n    let s = 10;\n    let mut r: Vec<Scalar> = Vec::new();\n    for _i in 0..s {\n      r.push(Scalar::random(&mut csprng));\n    }\n    let chis = tests::compute_chis_at_r(&r);\n    let chis_m = EqPolynomial::new(r).evals();\n    assert_eq!(chis, chis_m);\n  }\n\n  #[test]\n  fn check_factored_chis() {\n    let mut csprng: OsRng = OsRng;\n\n    let s = 10;\n    let mut r: Vec<Scalar> = Vec::new();\n    for _i in 0..s {\n      r.push(Scalar::random(&mut csprng));\n    }\n    let chis = EqPolynomial::new(r.clone()).evals();\n    let (L, R) = EqPolynomial::new(r).compute_factored_evals();\n    let O = compute_outerproduct(L, R);\n    assert_eq!(chis, O);\n  }\n\n  #[test]\n  fn check_memoized_factored_chis() {\n    let mut csprng: OsRng = OsRng;\n\n    let s = 10;\n    let mut r: Vec<Scalar> = Vec::new();\n    for _i in 0..s {\n      r.push(Scalar::random(&mut csprng));\n    }\n    let (L, R) = tests::compute_factored_chis_at_r(&r);\n    let eq = EqPolynomial::new(r);\n    let (L2, R2) = eq.compute_factored_evals();\n    assert_eq!(L, L2);\n    assert_eq!(R, R2);\n  
}\n\n  #[test]\n  fn check_polynomial_commit() {\n    let Z = vec![\n      (1_usize).to_scalar(),\n      (2_usize).to_scalar(),\n      (1_usize).to_scalar(),\n      (4_usize).to_scalar(),\n    ];\n    let poly = DensePolynomial::new(Z);\n\n    // r = [4,3]\n    let r = vec![(4_usize).to_scalar(), (3_usize).to_scalar()];\n    let eval = poly.evaluate(&r);\n    assert_eq!(eval, (28_usize).to_scalar());\n\n    let gens = PolyCommitmentGens::new(poly.get_num_vars(), b\"test-two\");\n    let (poly_commitment, blinds) = poly.commit(&gens, None);\n\n    let mut random_tape = RandomTape::new(b\"proof\");\n    let mut prover_transcript = Transcript::new(b\"example\");\n    let (proof, C_Zr) = PolyEvalProof::prove(\n      &poly,\n      Some(&blinds),\n      &r,\n      &eval,\n      None,\n      &gens,\n      &mut prover_transcript,\n      &mut random_tape,\n    );\n\n    let mut verifier_transcript = Transcript::new(b\"example\");\n\n    assert!(proof\n      .verify(&gens, &mut verifier_transcript, &r, &C_Zr, &poly_commitment)\n      .is_ok());\n  }\n}\n"
  },
  {
    "path": "packages/Spartan-secq/src/errors.rs",
    "content": "use core::fmt::Debug;\nuse thiserror::Error;\n\n#[derive(Error, Debug)]\npub enum ProofVerifyError {\n  #[error(\"Proof verification failed\")]\n  InternalError,\n  #[error(\"Compressed group element failed to decompress: {0:?}\")]\n  DecompressionError([u8; 32]),\n}\n\nimpl Default for ProofVerifyError {\n  fn default() -> Self {\n    ProofVerifyError::InternalError\n  }\n}\n\n#[derive(Clone, Debug, Eq, PartialEq)]\npub enum R1CSError {\n  /// returned if the number of constraints is not a power of 2\n  NonPowerOfTwoCons,\n  /// returned if the number of variables is not a power of 2\n  NonPowerOfTwoVars,\n  /// returned if a wrong number of inputs in an assignment are supplied\n  InvalidNumberOfInputs,\n  /// returned if a wrong number of variables in an assignment are supplied\n  InvalidNumberOfVars,\n  /// returned if a [u8;32] does not parse into a valid Scalar in the field of secq256k1\n  InvalidScalar,\n  /// returned if the supplied row or col in (row,col,val) tuple is out of range\n  InvalidIndex,\n}\n"
  },
  {
    "path": "packages/Spartan-secq/src/group.rs",
    "content": "use secq256k1::{AffinePoint, ProjectivePoint};\n\nuse super::errors::ProofVerifyError;\nuse super::scalar::{Scalar, ScalarBytes, ScalarBytesFromScalar};\nuse core::ops::{Mul, MulAssign};\nuse multiexp::multiexp;\n\npub type GroupElement = secq256k1::AffinePoint;\npub type CompressedGroup = secq256k1::EncodedPoint;\n\npub trait CompressedGroupExt {\n  type Group;\n  fn unpack(&self) -> Result<Self::Group, ProofVerifyError>;\n}\n\nimpl CompressedGroupExt for CompressedGroup {\n  type Group = secq256k1::AffinePoint;\n  fn unpack(&self) -> Result<Self::Group, ProofVerifyError> {\n    let result = AffinePoint::decompress(*self);\n    if result.is_some().into() {\n      Ok(result.unwrap())\n    } else {\n      Err(ProofVerifyError::DecompressionError(\n        (*self.to_bytes()).try_into().unwrap(),\n      ))\n    }\n  }\n}\n\npub trait DecompressEncodedPoint {\n  fn decompress(&self) -> Option<GroupElement>;\n}\n\nimpl DecompressEncodedPoint for CompressedGroup {\n  fn decompress(&self) -> Option<GroupElement> {\n    // return None on an invalid encoding instead of panicking\n    self.unpack().ok()\n  }\n}\n\nimpl<'b> MulAssign<&'b Scalar> for GroupElement {\n  fn mul_assign(&mut self, scalar: &'b Scalar) {\n    let result = (self as &GroupElement) * Scalar::decompress_scalar(scalar);\n    *self = result;\n  }\n}\n\nimpl<'a, 'b> Mul<&'b Scalar> for &'a GroupElement {\n  type Output = GroupElement;\n  fn mul(self, scalar: &'b Scalar) -> GroupElement {\n    *self * Scalar::decompress_scalar(scalar)\n  }\n}\n\nimpl<'a, 'b> Mul<&'b GroupElement> for &'a Scalar {\n  type Output = GroupElement;\n\n  fn mul(self, point: &'b GroupElement) -> GroupElement {\n    (*point * Scalar::decompress_scalar(self)).into()\n  }\n}\n\nmacro_rules! 
define_mul_variants {\n  (LHS = $lhs:ty, RHS = $rhs:ty, Output = $out:ty) => {\n    impl<'b> Mul<&'b $rhs> for $lhs {\n      type Output = $out;\n      fn mul(self, rhs: &'b $rhs) -> $out {\n        &self * rhs\n      }\n    }\n\n    impl<'a> Mul<$rhs> for &'a $lhs {\n      type Output = $out;\n      fn mul(self, rhs: $rhs) -> $out {\n        self * &rhs\n      }\n    }\n\n    impl Mul<$rhs> for $lhs {\n      type Output = $out;\n      fn mul(self, rhs: $rhs) -> $out {\n        &self * &rhs\n      }\n    }\n  };\n}\n\nmacro_rules! define_mul_assign_variants {\n  (LHS = $lhs:ty, RHS = $rhs:ty) => {\n    impl MulAssign<$rhs> for $lhs {\n      fn mul_assign(&mut self, rhs: $rhs) {\n        *self *= &rhs;\n      }\n    }\n  };\n}\n\ndefine_mul_assign_variants!(LHS = GroupElement, RHS = Scalar);\ndefine_mul_variants!(LHS = GroupElement, RHS = Scalar, Output = GroupElement);\ndefine_mul_variants!(LHS = Scalar, RHS = GroupElement, Output = GroupElement);\n\npub trait VartimeMultiscalarMul {\n  type Scalar;\n  fn vartime_multiscalar_mul(scalars: Vec<Scalar>, points: Vec<GroupElement>) -> Self;\n}\n\nimpl VartimeMultiscalarMul for GroupElement {\n  type Scalar = super::scalar::Scalar;\n  // TODO Borrow the arguments so we don't have to clone them, as it was in the original implementation\n  fn vartime_multiscalar_mul(scalars: Vec<Scalar>, points: Vec<GroupElement>) -> Self {\n    let points: Vec<ProjectivePoint> = points.iter().map(|p| ProjectivePoint::from(p.0)).collect();\n\n    let pairs: Vec<(ScalarBytes, ProjectivePoint)> = scalars\n      .into_iter()\n      .enumerate()\n      .map(|(i, s)| (Scalar::decompress_scalar(&s), points[i]))\n      .collect();\n\n    let result = multiexp::<ProjectivePoint>(pairs.as_slice());\n\n    AffinePoint(result.to_affine())\n  }\n}\n\n#[cfg(test)]\nmod tests {\n  use super::*;\n  #[test]\n  fn msm() {\n    let scalars = vec![Scalar::from(1), Scalar::from(2), Scalar::from(3)];\n    let points = vec![\n      GroupElement::generator(),\n  
    GroupElement::generator(),\n      GroupElement::generator(),\n    ];\n    let result = GroupElement::vartime_multiscalar_mul(scalars, points);\n\n    assert_eq!(result, GroupElement::generator() * Scalar::from(6));\n  }\n}\n"
  },
  {
    "path": "packages/Spartan-secq/src/lib.rs",
    "content": "#![allow(non_snake_case)]\n#![doc = include_str!(\"../README.md\")]\n#![deny(missing_docs)]\n#![allow(clippy::assertions_on_result_states)]\n\nextern crate byteorder;\nextern crate core;\nextern crate digest;\nextern crate merlin;\nextern crate rand;\nextern crate sha3;\n\n#[cfg(feature = \"multicore\")]\nextern crate rayon;\n\nmod commitments;\nmod dense_mlpoly;\nmod errors;\nmod group;\nmod math;\nmod nizk;\nmod product_tree;\nmod r1csinstance;\nmod r1csproof;\nmod random;\nmod scalar;\nmod sparse_mlpoly;\nmod sumcheck;\nmod timer;\nmod transcript;\nmod unipoly;\n\nuse core::cmp::max;\nuse errors::{ProofVerifyError, R1CSError};\nuse merlin::Transcript;\nuse r1csinstance::{\n  R1CSCommitment, R1CSCommitmentGens, R1CSDecommitment, R1CSEvalProof, R1CSInstance,\n};\nuse r1csproof::{R1CSGens, R1CSProof};\nuse random::RandomTape;\nuse scalar::Scalar;\nuse serde::{Deserialize, Serialize};\nuse timer::Timer;\nuse transcript::{AppendToTranscript, ProofTranscript};\n\n/// `ComputationCommitment` holds a public preprocessed NP statement (e.g., R1CS)\npub struct ComputationCommitment {\n  comm: R1CSCommitment,\n}\n\n/// `ComputationDecommitment` holds information to decommit `ComputationCommitment`\npub struct ComputationDecommitment {\n  decomm: R1CSDecommitment,\n}\n\n/// `Assignment` holds an assignment of values to either the inputs or variables in an `Instance`\n#[derive(Serialize, Deserialize, Clone)]\npub struct Assignment {\n  assignment: Vec<Scalar>,\n}\n\nimpl Assignment {\n  /// Constructs a new `Assignment` from a vector\n  pub fn new(assignment: &[[u8; 32]]) -> Result<Assignment, R1CSError> {\n    let bytes_to_scalar = |vec: &[[u8; 32]]| -> Result<Vec<Scalar>, R1CSError> {\n      let mut vec_scalar: Vec<Scalar> = Vec::new();\n      for v in vec {\n        let val = Scalar::from_bytes(v);\n        if val.is_some().unwrap_u8() == 1 {\n          vec_scalar.push(val.unwrap());\n        } else {\n          return Err(R1CSError::InvalidScalar);\n       
 }\n      }\n      Ok(vec_scalar)\n    };\n\n    let assignment_scalar = bytes_to_scalar(assignment);\n\n    // check for any parsing errors\n    if assignment_scalar.is_err() {\n      return Err(R1CSError::InvalidScalar);\n    }\n\n    Ok(Assignment {\n      assignment: assignment_scalar.unwrap(),\n    })\n  }\n\n  /// pads Assignment to the specified length\n  fn pad(&self, len: usize) -> VarsAssignment {\n    // check that the new length is higher than the current length\n    assert!(len > self.assignment.len());\n\n    let padded_assignment = {\n      let mut padded_assignment = self.assignment.clone();\n      padded_assignment.extend(vec![Scalar::zero(); len - self.assignment.len()]);\n      padded_assignment\n    };\n\n    VarsAssignment {\n      assignment: padded_assignment,\n    }\n  }\n}\n\n/// `VarsAssignment` holds an assignment of values to variables in an `Instance`\npub type VarsAssignment = Assignment;\n\n/// `InputsAssignment` holds an assignment of values to inputs in an `Instance`\npub type InputsAssignment = Assignment;\n\n/// `Instance` holds the description of R1CS matrices and a hash of the matrices\n#[derive(Serialize, Deserialize)]\npub struct Instance {\n  /// R1CS instance\n  pub inst: R1CSInstance,\n  digest: Vec<u8>,\n}\n\nimpl Instance {\n  /// Constructs a new `Instance` from the given R1CS matrices\n  pub fn new(\n    num_cons: usize,\n    num_vars: usize,\n    num_inputs: usize,\n    A: &[(usize, usize, [u8; 32])],\n    B: &[(usize, usize, [u8; 32])],\n    C: &[(usize, usize, [u8; 32])],\n  ) -> Result<Instance, R1CSError> {\n    let (num_vars_padded, num_cons_padded) = {\n      let num_vars_padded = {\n        let mut num_vars_padded = num_vars;\n\n        // ensure that num_inputs + 1 <= num_vars\n        num_vars_padded = max(num_vars_padded, num_inputs + 1);\n\n        // ensure that num_vars_padded is a power of two\n        if num_vars_padded.next_power_of_two() != num_vars_padded {\n          num_vars_padded = 
num_vars_padded.next_power_of_two();\n        }\n        num_vars_padded\n      };\n\n      let num_cons_padded = {\n        let mut num_cons_padded = num_cons;\n\n        // ensure that num_cons_padded is at least 2\n        if num_cons_padded == 0 || num_cons_padded == 1 {\n          num_cons_padded = 2;\n        }\n\n        // ensure that num_cons_padded is a power of two (checking num_cons here would\n        // discard the at-least-2 adjustment above when num_cons == 0)\n        if num_cons_padded.next_power_of_two() != num_cons_padded {\n          num_cons_padded = num_cons_padded.next_power_of_two();\n        }\n        num_cons_padded\n      };\n\n      (num_vars_padded, num_cons_padded)\n    };\n\n    let bytes_to_scalar =\n      |tups: &[(usize, usize, [u8; 32])]| -> Result<Vec<(usize, usize, Scalar)>, R1CSError> {\n        let mut mat: Vec<(usize, usize, Scalar)> = Vec::new();\n        for &(row, col, val_bytes) in tups {\n          // row must be smaller than num_cons\n          if row >= num_cons {\n            return Err(R1CSError::InvalidIndex);\n          }\n\n          // col must be smaller than num_vars + 1 + num_inputs\n          if col >= num_vars + 1 + num_inputs {\n            return Err(R1CSError::InvalidIndex);\n          }\n\n          let val = Scalar::from_bytes(&val_bytes);\n          if val.is_some().unwrap_u8() == 1 {\n            // if col >= num_vars, it means that it is referencing a 1 or an input in the satisfying\n            // assignment\n            if col >= num_vars {\n              mat.push((row, col + num_vars_padded - num_vars, val.unwrap()));\n            } else {\n              mat.push((row, col, val.unwrap()));\n            }\n          } else {\n            return Err(R1CSError::InvalidScalar);\n          }\n        }\n\n        // pad with additional constraints up until num_cons_padded if the original constraints were 0 or 1\n        // we do not need to pad otherwise because the dummy constraints are implicit in the sum-check protocol\n        if num_cons == 0 || num_cons == 1 {\n          for i in tups.len()..num_cons_padded {\n            
mat.push((i, num_vars, Scalar::zero()));\n          }\n        }\n\n        Ok(mat)\n      };\n\n    let A_scalar = bytes_to_scalar(A)?;\n    let B_scalar = bytes_to_scalar(B)?;\n    let C_scalar = bytes_to_scalar(C)?;\n\n    let inst = R1CSInstance::new(\n      num_cons_padded,\n      num_vars_padded,\n      num_inputs,\n      &A_scalar,\n      &B_scalar,\n      &C_scalar,\n    );\n\n    let digest = inst.get_digest();\n\n    Ok(Instance { inst, digest })\n  }\n\n  /// Checks whether a given R1CSInstance is satisfiable with the given variable and input assignments\n  pub fn is_sat(\n    &self,\n    vars: &VarsAssignment,\n    inputs: &InputsAssignment,\n  ) -> Result<bool, R1CSError> {\n    if vars.assignment.len() > self.inst.get_num_vars() {\n      return Err(R1CSError::InvalidNumberOfVars);\n    }\n\n    if inputs.assignment.len() != self.inst.get_num_inputs() {\n      return Err(R1CSError::InvalidNumberOfInputs);\n    }\n\n    // we might need to pad variables\n    let padded_vars = {\n      let num_padded_vars = self.inst.get_num_vars();\n      let num_vars = vars.assignment.len();\n      if num_padded_vars > num_vars {\n        vars.pad(num_padded_vars)\n      } else {\n        vars.clone()\n      }\n    };\n\n    Ok(\n      self\n        .inst\n        .is_sat(&padded_vars.assignment, &inputs.assignment),\n    )\n  }\n\n  /// Constructs a new synthetic R1CS `Instance` and an associated satisfying assignment\n  pub fn produce_synthetic_r1cs(\n    num_cons: usize,\n    num_vars: usize,\n    num_inputs: usize,\n  ) -> (Instance, VarsAssignment, InputsAssignment) {\n    let (inst, vars, inputs) = R1CSInstance::produce_synthetic_r1cs(num_cons, num_vars, num_inputs);\n    let digest = inst.get_digest();\n  
  (\n      Instance { inst, digest },\n      VarsAssignment { assignment: vars },\n      InputsAssignment { assignment: inputs },\n    )\n  }\n}\n\n/// `SNARKGens` holds public parameters for producing and verifying proofs with the Spartan SNARK\npub struct SNARKGens {\n  gens_r1cs_sat: R1CSGens,\n  gens_r1cs_eval: R1CSCommitmentGens,\n}\n\nimpl SNARKGens {\n  /// Constructs a new `SNARKGens` given the size of the R1CS statement\n  /// `num_nz_entries` specifies the maximum number of non-zero entries in any of the three R1CS matrices\n  pub fn new(num_cons: usize, num_vars: usize, num_inputs: usize, num_nz_entries: usize) -> Self {\n    let num_vars_padded = {\n      let mut num_vars_padded = max(num_vars, num_inputs + 1);\n      if num_vars_padded != num_vars_padded.next_power_of_two() {\n        num_vars_padded = num_vars_padded.next_power_of_two();\n      }\n      num_vars_padded\n    };\n\n    let gens_r1cs_sat = R1CSGens::new(b\"gens_r1cs_sat\", num_cons, num_vars_padded);\n    let gens_r1cs_eval = R1CSCommitmentGens::new(\n      b\"gens_r1cs_eval\",\n      num_cons,\n      num_vars_padded,\n      num_inputs,\n      num_nz_entries,\n    );\n    SNARKGens {\n      gens_r1cs_sat,\n      gens_r1cs_eval,\n    }\n  }\n}\n\n/// `SNARK` holds a proof produced by Spartan SNARK\n#[derive(Serialize, Deserialize, Debug)]\npub struct SNARK {\n  r1cs_sat_proof: R1CSProof,\n  inst_evals: (Scalar, Scalar, Scalar),\n  r1cs_eval_proof: R1CSEvalProof,\n}\n\nimpl SNARK {\n  fn protocol_name() -> &'static [u8] {\n    b\"Spartan SNARK proof\"\n  }\n\n  /// A public computation to create a commitment to an R1CS instance\n  pub fn encode(\n    inst: &Instance,\n    gens: &SNARKGens,\n  ) -> (ComputationCommitment, ComputationDecommitment) {\n    let timer_encode = Timer::new(\"SNARK::encode\");\n    let (comm, decomm) = inst.inst.commit(&gens.gens_r1cs_eval);\n    timer_encode.stop();\n    (\n      ComputationCommitment { comm },\n      ComputationDecommitment { decomm },\n    )\n  
}\n\n  /// A method to produce a SNARK proof of the satisfiability of an R1CS instance\n  pub fn prove(\n    inst: &Instance,\n    comm: &ComputationCommitment,\n    decomm: &ComputationDecommitment,\n    vars: VarsAssignment,\n    inputs: &InputsAssignment,\n    gens: &SNARKGens,\n    transcript: &mut Transcript,\n  ) -> Self {\n    let timer_prove = Timer::new(\"SNARK::prove\");\n\n    // we create a Transcript object seeded with a random Scalar\n    // to help the prover produce its randomness\n    let mut random_tape = RandomTape::new(b\"proof\");\n\n    transcript.append_protocol_name(SNARK::protocol_name());\n    comm.comm.append_to_transcript(b\"comm\", transcript);\n\n    let (r1cs_sat_proof, rx, ry) = {\n      let (proof, rx, ry) = {\n        // we might need to pad variables\n        let padded_vars = {\n          let num_padded_vars = inst.inst.get_num_vars();\n          let num_vars = vars.assignment.len();\n          if num_padded_vars > num_vars {\n            vars.pad(num_padded_vars)\n          } else {\n            vars\n          }\n        };\n\n        R1CSProof::prove(\n          &inst.inst,\n          padded_vars.assignment,\n          &inputs.assignment,\n          &gens.gens_r1cs_sat,\n          transcript,\n          &mut random_tape,\n        )\n      };\n\n      let proof_encoded: Vec<u8> = bincode::serialize(&proof).unwrap();\n      Timer::print(&format!(\"len_r1cs_sat_proof {:?}\", proof_encoded.len()));\n\n      (proof, rx, ry)\n    };\n\n    // We send evaluations of A, B, C at r = (rx, ry) as claims\n    // to enable the verifier to complete the first sum-check\n    let timer_eval = Timer::new(\"eval_sparse_polys\");\n    let inst_evals = {\n      let (Ar, Br, Cr) = inst.inst.evaluate(&rx, &ry);\n      Ar.append_to_transcript(b\"Ar_claim\", transcript);\n      Br.append_to_transcript(b\"Br_claim\", transcript);\n      Cr.append_to_transcript(b\"Cr_claim\", transcript);\n      (Ar, Br, Cr)\n    };\n    timer_eval.stop();\n\n    let 
r1cs_eval_proof = {\n      let proof = R1CSEvalProof::prove(\n        &decomm.decomm,\n        &rx,\n        &ry,\n        &inst_evals,\n        &gens.gens_r1cs_eval,\n        transcript,\n        &mut random_tape,\n      );\n\n      let proof_encoded: Vec<u8> = bincode::serialize(&proof).unwrap();\n      Timer::print(&format!(\"len_r1cs_eval_proof {:?}\", proof_encoded.len()));\n      proof\n    };\n\n    timer_prove.stop();\n    SNARK {\n      r1cs_sat_proof,\n      inst_evals,\n      r1cs_eval_proof,\n    }\n  }\n\n  /// A method to verify the SNARK proof of the satisfiability of an R1CS instance\n  pub fn verify(\n    &self,\n    comm: &ComputationCommitment,\n    input: &InputsAssignment,\n    transcript: &mut Transcript,\n    gens: &SNARKGens,\n  ) -> Result<(), ProofVerifyError> {\n    let timer_verify = Timer::new(\"SNARK::verify\");\n    transcript.append_protocol_name(SNARK::protocol_name());\n\n    // append a commitment to the computation to the transcript\n    comm.comm.append_to_transcript(b\"comm\", transcript);\n\n    let timer_sat_proof = Timer::new(\"verify_sat_proof\");\n    assert_eq!(input.assignment.len(), comm.comm.get_num_inputs());\n    let (rx, ry) = self.r1cs_sat_proof.verify(\n      comm.comm.get_num_vars(),\n      comm.comm.get_num_cons(),\n      &input.assignment,\n      &self.inst_evals,\n      transcript,\n      &gens.gens_r1cs_sat,\n    )?;\n    timer_sat_proof.stop();\n\n    let timer_eval_proof = Timer::new(\"verify_eval_proof\");\n    let (Ar, Br, Cr) = &self.inst_evals;\n    Ar.append_to_transcript(b\"Ar_claim\", transcript);\n    Br.append_to_transcript(b\"Br_claim\", transcript);\n    Cr.append_to_transcript(b\"Cr_claim\", transcript);\n    self.r1cs_eval_proof.verify(\n      &comm.comm,\n      &rx,\n      &ry,\n      &self.inst_evals,\n      &gens.gens_r1cs_eval,\n      transcript,\n    )?;\n    timer_eval_proof.stop();\n    timer_verify.stop();\n    Ok(())\n  }\n}\n\n/// `NIZKGens` holds public parameters for producing and 
verifying proofs with the Spartan NIZK\npub struct NIZKGens {\n  gens_r1cs_sat: R1CSGens,\n}\n\nimpl NIZKGens {\n  /// Constructs a new `NIZKGens` given the size of the R1CS statement\n  pub fn new(num_cons: usize, num_vars: usize, num_inputs: usize) -> Self {\n    let num_vars_padded = {\n      let mut num_vars_padded = max(num_vars, num_inputs + 1);\n      if num_vars_padded != num_vars_padded.next_power_of_two() {\n        num_vars_padded = num_vars_padded.next_power_of_two();\n      }\n      num_vars_padded\n    };\n\n    let gens_r1cs_sat = R1CSGens::new(b\"gens_r1cs_sat\", num_cons, num_vars_padded);\n    NIZKGens { gens_r1cs_sat }\n  }\n}\n\n/// `NIZK` holds a proof produced by Spartan NIZK\n#[derive(Serialize, Deserialize, Debug)]\npub struct NIZK {\n  r1cs_sat_proof: R1CSProof,\n  r: (Vec<Scalar>, Vec<Scalar>),\n}\n\nimpl NIZK {\n  fn protocol_name() -> &'static [u8] {\n    b\"Spartan NIZK proof\"\n  }\n\n  /// A method to produce a NIZK proof of the satisfiability of an R1CS instance\n  pub fn prove(\n    inst: &Instance,\n    vars: VarsAssignment,\n    input: &InputsAssignment,\n    gens: &NIZKGens,\n    transcript: &mut Transcript,\n  ) -> Self {\n    let timer_prove = Timer::new(\"NIZK::prove\");\n    // we create a Transcript object seeded with a random Scalar\n    // to help the prover produce its randomness\n    let mut random_tape = RandomTape::new(b\"proof\");\n\n    transcript.append_protocol_name(NIZK::protocol_name());\n    transcript.append_message(b\"R1CSInstanceDigest\", &inst.digest);\n\n    let (r1cs_sat_proof, rx, ry) = {\n      // we might need to pad variables\n      let padded_vars = {\n        let num_padded_vars = inst.inst.get_num_vars();\n        let num_vars = vars.assignment.len();\n        if num_padded_vars > num_vars {\n          vars.pad(num_padded_vars)\n        } else {\n          vars\n        }\n      };\n\n      let (proof, rx, ry) = R1CSProof::prove(\n        &inst.inst,\n        padded_vars.assignment,\n        
&input.assignment,\n        &gens.gens_r1cs_sat,\n        transcript,\n        &mut random_tape,\n      );\n      let proof_encoded: Vec<u8> = bincode::serialize(&proof).unwrap();\n      Timer::print(&format!(\"len_r1cs_sat_proof {:?}\", proof_encoded.len()));\n      (proof, rx, ry)\n    };\n\n    timer_prove.stop();\n    NIZK {\n      r1cs_sat_proof,\n      r: (rx, ry),\n    }\n  }\n\n  /// A method to verify a NIZK proof of the satisfiability of an R1CS instance\n  pub fn verify(\n    &self,\n    inst: &Instance,\n    input: &InputsAssignment,\n    transcript: &mut Transcript,\n    gens: &NIZKGens,\n  ) -> Result<(), ProofVerifyError> {\n    let timer_verify = Timer::new(\"NIZK::verify\");\n\n    transcript.append_protocol_name(NIZK::protocol_name());\n    transcript.append_message(b\"R1CSInstanceDigest\", &inst.digest);\n\n    // We compute evaluations of A, B, C at the claimed r = (rx, ry)\n    // to enable the verifier to complete the first sum-check\n    let timer_eval = Timer::new(\"eval_sparse_polys\");\n    let (claimed_rx, claimed_ry) = &self.r;\n    let inst_evals = inst.inst.evaluate(claimed_rx, claimed_ry);\n    timer_eval.stop();\n\n    let timer_sat_proof = Timer::new(\"verify_sat_proof\");\n    assert_eq!(input.assignment.len(), inst.inst.get_num_inputs());\n    let (rx, ry) = self.r1cs_sat_proof.verify(\n      inst.inst.get_num_vars(),\n      inst.inst.get_num_cons(),\n      &input.assignment,\n      &inst_evals,\n      transcript,\n      &gens.gens_r1cs_sat,\n    )?;\n\n    // verify that the claimed rx and ry are correct\n    assert_eq!(rx, *claimed_rx);\n    assert_eq!(ry, *claimed_ry);\n    timer_sat_proof.stop();\n    timer_verify.stop();\n\n    Ok(())\n  }\n}\n\n#[cfg(test)]\nmod tests {\n  use super::*;\n\n  #[test]\n  pub fn check_snark() {\n    let num_vars = 256;\n    let num_cons = num_vars;\n    let num_inputs = 10;\n\n    // produce public generators\n    let gens = SNARKGens::new(num_cons, num_vars, num_inputs, num_cons);\n\n    // produce a 
synthetic R1CSInstance\n    let (inst, vars, inputs) = Instance::produce_synthetic_r1cs(num_cons, num_vars, num_inputs);\n\n    // create a commitment to R1CSInstance\n    let (comm, decomm) = SNARK::encode(&inst, &gens);\n\n    // produce a proof\n    let mut prover_transcript = Transcript::new(b\"example\");\n    let proof = SNARK::prove(\n      &inst,\n      &comm,\n      &decomm,\n      vars,\n      &inputs,\n      &gens,\n      &mut prover_transcript,\n    );\n\n    // verify the proof\n    let mut verifier_transcript = Transcript::new(b\"example\");\n    assert!(proof\n      .verify(&comm, &inputs, &mut verifier_transcript, &gens)\n      .is_ok());\n  }\n\n  #[test]\n  pub fn check_r1cs_invalid_index() {\n    let num_cons = 4;\n    let num_vars = 8;\n    let num_inputs = 1;\n\n    let zero: [u8; 32] = [\n      0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n      0,\n    ];\n\n    let A = vec![(0, 0, zero)];\n    let B = vec![(100, 1, zero)];\n    let C = vec![(1, 1, zero)];\n\n    let inst = Instance::new(num_cons, num_vars, num_inputs, &A, &B, &C);\n    assert!(inst.is_err());\n    assert_eq!(inst.err(), Some(R1CSError::InvalidIndex));\n  }\n\n  #[test]\n  pub fn check_r1cs_invalid_scalar() {\n    let num_cons = 4;\n    let num_vars = 8;\n    let num_inputs = 1;\n\n    let zero: [u8; 32] = [\n      0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n      0,\n    ];\n\n    let larger_than_mod = [255; 32];\n\n    let A = vec![(0, 0, zero)];\n    let B = vec![(1, 1, larger_than_mod)];\n    let C = vec![(1, 1, zero)];\n\n    let inst = Instance::new(num_cons, num_vars, num_inputs, &A, &B, &C);\n    assert!(inst.is_err());\n    assert_eq!(inst.err(), Some(R1CSError::InvalidScalar));\n  }\n\n  #[test]\n  fn test_padded_constraints() {\n    // parameters of the R1CS instance\n    let num_cons = 1;\n    let num_vars = 0;\n    let num_inputs = 3;\n    let num_non_zero_entries = 
3;\n\n    // We will encode the above constraints into three matrices, where\n    // the coefficients in the matrix are in the little-endian byte order\n    let mut A: Vec<(usize, usize, [u8; 32])> = Vec::new();\n    let mut B: Vec<(usize, usize, [u8; 32])> = Vec::new();\n    let mut C: Vec<(usize, usize, [u8; 32])> = Vec::new();\n\n    // Create a^2 + b + 13\n    A.push((0, num_vars + 2, Scalar::one().to_bytes())); // 1*a\n    B.push((0, num_vars + 2, Scalar::one().to_bytes())); // 1*a\n    C.push((0, num_vars + 1, Scalar::one().to_bytes())); // 1*z\n    C.push((0, num_vars, (-Scalar::from(13u64)).to_bytes())); // -13*1\n    C.push((0, num_vars + 3, (-Scalar::one()).to_bytes())); // -1*b\n\n    // Var Assignments (Z_0 = 16 is the only output)\n    let vars = vec![Scalar::zero().to_bytes(); num_vars];\n\n    // create an InputsAssignment (a = 1, b = 2)\n    let mut inputs = vec![Scalar::zero().to_bytes(); num_inputs];\n    inputs[0] = Scalar::from(16u64).to_bytes();\n    inputs[1] = Scalar::from(1u64).to_bytes();\n    inputs[2] = Scalar::from(2u64).to_bytes();\n\n    let assignment_inputs = InputsAssignment::new(&inputs).unwrap();\n    let assignment_vars = VarsAssignment::new(&vars).unwrap();\n\n    // Check if instance is satisfiable\n    let inst = Instance::new(num_cons, num_vars, num_inputs, &A, &B, &C).unwrap();\n    let res = inst.is_sat(&assignment_vars, &assignment_inputs);\n    assert!(res.unwrap(), \"should be satisfied\");\n\n    // SNARK public params\n    let gens = SNARKGens::new(num_cons, num_vars, num_inputs, num_non_zero_entries);\n\n    // create a commitment to the R1CS instance\n    let (comm, decomm) = SNARK::encode(&inst, &gens);\n\n    // produce a SNARK\n    let mut prover_transcript = Transcript::new(b\"snark_example\");\n    let proof = SNARK::prove(\n      &inst,\n      &comm,\n      &decomm,\n      assignment_vars.clone(),\n      &assignment_inputs,\n      &gens,\n      &mut prover_transcript,\n    );\n\n    // verify the SNARK\n    let 
mut verifier_transcript = Transcript::new(b\"snark_example\");\n    assert!(proof\n      .verify(&comm, &assignment_inputs, &mut verifier_transcript, &gens)\n      .is_ok());\n\n    // NIZK public params\n    let gens = NIZKGens::new(num_cons, num_vars, num_inputs);\n\n    // produce a NIZK\n    let mut prover_transcript = Transcript::new(b\"nizk_example\");\n    let proof = NIZK::prove(\n      &inst,\n      assignment_vars,\n      &assignment_inputs,\n      &gens,\n      &mut prover_transcript,\n    );\n\n    // verify the NIZK\n    let mut verifier_transcript = Transcript::new(b\"nizk_example\");\n    assert!(proof\n      .verify(&inst, &assignment_inputs, &mut verifier_transcript, &gens)\n      .is_ok());\n  }\n}\n"
  },
  {
    "path": "packages/Spartan-secq/src/math.rs",
    "content": "pub trait Math {\n  fn square_root(self) -> usize;\n  fn pow2(self) -> usize;\n  fn get_bits(self, num_bits: usize) -> Vec<bool>;\n  fn log_2(self) -> usize;\n}\n\nimpl Math for usize {\n  #[inline]\n  fn square_root(self) -> usize {\n    (self as f64).sqrt() as usize\n  }\n\n  #[inline]\n  fn pow2(self) -> usize {\n    let base: usize = 2;\n    base.pow(self as u32)\n  }\n\n  /// Returns the num_bits from n in a canonical order\n  fn get_bits(self, num_bits: usize) -> Vec<bool> {\n    (0..num_bits)\n      .map(|shift_amount| ((self & (1 << (num_bits - shift_amount - 1))) > 0))\n      .collect::<Vec<bool>>()\n  }\n\n  fn log_2(self) -> usize {\n    assert_ne!(self, 0);\n\n    if self.is_power_of_two() {\n      (1usize.leading_zeros() - self.leading_zeros()) as usize\n    } else {\n      (0usize.leading_zeros() - self.leading_zeros()) as usize\n    }\n  }\n}\n"
  },
  {
    "path": "packages/Spartan-secq/src/nizk/bullet.rs",
    "content": "//! This module is an adaptation of code from the bulletproofs crate.\n//! See NOTICE.md for more details\n#![allow(non_snake_case)]\n#![allow(clippy::type_complexity)]\n#![allow(clippy::too_many_arguments)]\nuse super::super::errors::ProofVerifyError;\nuse super::super::group::{CompressedGroup, GroupElement, VartimeMultiscalarMul};\nuse super::super::math::Math;\nuse super::super::scalar::Scalar;\nuse super::super::transcript::ProofTranscript;\nuse crate::group::DecompressEncodedPoint;\nuse core::iter;\nuse merlin::Transcript;\nuse serde::{Deserialize, Serialize};\n\n#[derive(Debug, Serialize, Deserialize)]\npub struct BulletReductionProof {\n  L_vec: Vec<CompressedGroup>,\n  R_vec: Vec<CompressedGroup>,\n}\n\nimpl BulletReductionProof {\n  /// Create an inner-product proof.\n  ///\n  /// The proof is created with respect to the bases \\\\(G\\\\).\n  ///\n  /// The `transcript` is passed in as a parameter so that the\n  /// challenges depend on the *entire* transcript (including parent\n  /// protocols).\n  ///\n  /// The lengths of the vectors must all be the same, and must all be\n  /// either 0 or a power of 2.\n  pub fn prove(\n    transcript: &mut Transcript,\n    Q: &GroupElement,\n    G_vec: &[GroupElement],\n    H: &GroupElement,\n    a_vec: &[Scalar],\n    b_vec: &[Scalar],\n    blind: &Scalar,\n    blinds_vec: &[(Scalar, Scalar)],\n  ) -> (\n    BulletReductionProof,\n    GroupElement,\n    Scalar,\n    Scalar,\n    GroupElement,\n    Scalar,\n  ) {\n    // Create slices G, H, a, b backed by their respective\n    // vectors.  
This lets us reslice as we compress the lengths\n    // of the vectors in the main loop below.\n    let mut G = &mut G_vec.to_owned()[..];\n    let mut a = &mut a_vec.to_owned()[..];\n    let mut b = &mut b_vec.to_owned()[..];\n\n    // All of the input vectors must have a length that is a power of two.\n    let mut n = G.len();\n    assert!(n.is_power_of_two());\n    let lg_n = n.log_2();\n\n    // All of the input vectors must have the same length.\n    assert_eq!(G.len(), n);\n    assert_eq!(a.len(), n);\n    assert_eq!(b.len(), n);\n    assert_eq!(blinds_vec.len(), 2 * lg_n);\n\n    let mut L_vec = Vec::with_capacity(lg_n);\n    let mut R_vec = Vec::with_capacity(lg_n);\n    let mut blinds_iter = blinds_vec.iter();\n    let mut blind_fin = *blind;\n\n    while n != 1 {\n      n /= 2;\n      let (a_L, a_R) = a.split_at_mut(n);\n      let (b_L, b_R) = b.split_at_mut(n);\n      let (G_L, G_R) = G.split_at_mut(n);\n\n      let c_L = inner_product(a_L, b_R);\n      let c_R = inner_product(a_R, b_L);\n\n      let (blind_L, blind_R) = blinds_iter.next().unwrap();\n\n      let L = GroupElement::vartime_multiscalar_mul(\n        a_L\n          .iter()\n          .chain(iter::once(&c_L))\n          .chain(iter::once(blind_L))\n          .map(|s| *s)\n          .collect(),\n        G_R\n          .iter()\n          .chain(iter::once(Q))\n          .chain(iter::once(H))\n          .map(|s| *s)\n          .collect(),\n      );\n\n      let R = GroupElement::vartime_multiscalar_mul(\n        a_R\n          .iter()\n          .chain(iter::once(&c_R))\n          .chain(iter::once(blind_R))\n          .map(|s| *s)\n          .collect(),\n        G_L\n          .iter()\n          .chain(iter::once(Q))\n          .chain(iter::once(H))\n          .map(|s| *s)\n          .collect(),\n      );\n\n      transcript.append_point(b\"L\", &L.compress());\n      transcript.append_point(b\"R\", &R.compress());\n\n      let u = transcript.challenge_scalar(b\"u\");\n      let u_inv = 
u.invert().unwrap();\n\n      for i in 0..n {\n        a_L[i] = a_L[i] * u + u_inv * a_R[i];\n        b_L[i] = b_L[i] * u_inv + u * b_R[i];\n        G_L[i] =\n          GroupElement::vartime_multiscalar_mul([u_inv, u].to_vec(), [G_L[i], G_R[i]].to_vec());\n      }\n\n      blind_fin = blind_fin + blind_L * u * u + blind_R * u_inv * u_inv;\n\n      L_vec.push(L.compress());\n      R_vec.push(R.compress());\n\n      a = a_L;\n      b = b_L;\n      G = G_L;\n    }\n\n    let Gamma_hat = GroupElement::vartime_multiscalar_mul(\n      [a[0], a[0] * b[0], blind_fin].to_vec(),\n      [G[0], *Q, *H].to_vec(),\n    );\n\n    (\n      BulletReductionProof { L_vec, R_vec },\n      Gamma_hat,\n      a[0],\n      b[0],\n      G[0],\n      blind_fin,\n    )\n  }\n\n  /// Computes three vectors of verification scalars \\\\([u\\_{i}^{2}]\\\\), \\\\([u\\_{i}^{-2}]\\\\) and \\\\([s\\_{i}]\\\\) for combined multiscalar multiplication\n  /// in a parent protocol. See [inner product protocol notes](index.html#verification-equation) for details.\n  /// The verifier must provide the input length \\\\(n\\\\) explicitly to avoid unbounded allocation within the inner product proof.\n  fn verification_scalars(\n    &self,\n    n: usize,\n    transcript: &mut Transcript,\n  ) -> Result<(Vec<Scalar>, Vec<Scalar>, Vec<Scalar>), ProofVerifyError> {\n    let lg_n = self.L_vec.len();\n    if lg_n >= 32 {\n      // 4 billion multiplications should be enough for anyone\n      // and this check prevents overflow in 1<<lg_n below.\n      return Err(ProofVerifyError::InternalError);\n    }\n    if n != (1 << lg_n) {\n      return Err(ProofVerifyError::InternalError);\n    }\n\n    // 1. 
Recompute x_k,...,x_1 based on the proof transcript\n    let mut challenges = Vec::with_capacity(lg_n);\n    for (L, R) in self.L_vec.iter().zip(self.R_vec.iter()) {\n      transcript.append_point(b\"L\", L);\n      transcript.append_point(b\"R\", R);\n      challenges.push(transcript.challenge_scalar(b\"u\"));\n    }\n\n    // 2. Compute 1/(u_k...u_1) and 1/u_k, ..., 1/u_1\n    let mut challenges_inv = challenges.clone();\n    let allinv = Scalar::batch_invert(&mut challenges_inv);\n\n    // 3. Compute u_i^2 and (1/u_i)^2\n    for i in 0..lg_n {\n      challenges[i] = challenges[i].square();\n      challenges_inv[i] = challenges_inv[i].square();\n    }\n    let challenges_sq = challenges;\n    let challenges_inv_sq = challenges_inv;\n\n    // 4. Compute s values inductively.\n    let mut s = Vec::with_capacity(n);\n    s.push(allinv);\n    for i in 1..n {\n      let lg_i = (32 - 1 - (i as u32).leading_zeros()) as usize;\n      let k = 1 << lg_i;\n      // The challenges are stored in \"creation order\" as [u_k,...,u_1],\n      // so u_{lg(i)+1} is indexed by (lg_n-1) - lg_i\n      let u_lg_i_sq = challenges_sq[(lg_n - 1) - lg_i];\n      s.push(s[i - k] * u_lg_i_sq);\n    }\n\n    Ok((challenges_sq, challenges_inv_sq, s))\n  }\n\n  /// This method is for testing that proof generation works,\n  /// but for efficiency the actual protocols would use the `verification_scalars`\n  /// method to combine inner product verification with other checks\n  /// in a single multiscalar multiplication.\n  pub fn verify(\n    &self,\n    n: usize,\n    a: &[Scalar],\n    transcript: &mut Transcript,\n    Gamma: &GroupElement,\n    G: &[GroupElement],\n  ) -> Result<(GroupElement, GroupElement, Scalar), ProofVerifyError> {\n    let (u_sq, u_inv_sq, s) = self.verification_scalars(n, transcript)?;\n\n    let Ls = self\n      .L_vec\n      .iter()\n      .map(|p| p.decompress().ok_or(ProofVerifyError::InternalError))\n      .collect::<Result<Vec<_>, _>>()?;\n\n    let Rs = self\n      
.R_vec\n      .iter()\n      .map(|p| p.decompress().ok_or(ProofVerifyError::InternalError))\n      .collect::<Result<Vec<_>, _>>()?;\n\n    let G_hat = GroupElement::vartime_multiscalar_mul(s.clone(), G.to_vec());\n    let a_hat = inner_product(a, &s);\n\n    let Gamma_hat = GroupElement::vartime_multiscalar_mul(\n      u_sq\n        .iter()\n        .chain(u_inv_sq.iter())\n        .chain(iter::once(&Scalar::one()))\n        .map(|s| *s)\n        .collect(),\n      Ls.iter()\n        .chain(Rs.iter())\n        .chain(iter::once(Gamma))\n        .map(|p| *p)\n        .collect(),\n    );\n\n    Ok((G_hat, Gamma_hat, a_hat))\n  }\n}\n\n/// Computes an inner product of two vectors\n/// \\\\[\n///    {\\langle {\\mathbf{a}}, {\\mathbf{b}} \\rangle} = \\sum\\_{i=0}^{n-1} a\\_i \\cdot b\\_i.\n/// \\\\]\n/// Panics if the lengths of \\\\(\\mathbf{a}\\\\) and \\\\(\\mathbf{b}\\\\) are not equal.\npub fn inner_product(a: &[Scalar], b: &[Scalar]) -> Scalar {\n  assert!(\n    a.len() == b.len(),\n    \"inner_product(a,b): lengths of vectors do not match\"\n  );\n  let mut out = Scalar::zero();\n  for i in 0..a.len() {\n    out += a[i] * b[i];\n  }\n  out\n}\n"
  },
  {
    "path": "packages/Spartan-secq/src/nizk/mod.rs",
    "content": "#![allow(clippy::too_many_arguments)]\nuse super::commitments::{Commitments, MultiCommitGens};\nuse super::errors::ProofVerifyError;\nuse super::group::{CompressedGroup, CompressedGroupExt};\nuse super::math::Math;\nuse super::random::RandomTape;\nuse super::scalar::Scalar;\nuse super::transcript::{AppendToTranscript, ProofTranscript};\nuse crate::group::DecompressEncodedPoint;\nuse merlin::Transcript;\nuse serde::{Deserialize, Serialize};\n\nmod bullet;\nuse bullet::BulletReductionProof;\n\n#[derive(Serialize, Deserialize, Debug)]\npub struct KnowledgeProof {\n  alpha: CompressedGroup,\n  z1: Scalar,\n  z2: Scalar,\n}\n\nimpl KnowledgeProof {\n  fn protocol_name() -> &'static [u8] {\n    b\"knowledge proof\"\n  }\n\n  pub fn prove(\n    gens_n: &MultiCommitGens,\n    transcript: &mut Transcript,\n    random_tape: &mut RandomTape,\n    x: &Scalar,\n    r: &Scalar,\n  ) -> (KnowledgeProof, CompressedGroup) {\n    transcript.append_protocol_name(KnowledgeProof::protocol_name());\n\n    // produce two random Scalars\n    let t1 = random_tape.random_scalar(b\"t1\");\n    let t2 = random_tape.random_scalar(b\"t2\");\n\n    let C = x.commit(r, gens_n).compress();\n    C.append_to_transcript(b\"C\", transcript);\n\n    let alpha = t1.commit(&t2, gens_n).compress();\n    alpha.append_to_transcript(b\"alpha\", transcript);\n\n    let c = transcript.challenge_scalar(b\"c\");\n\n    let z1 = x * c + t1;\n    let z2 = r * c + t2;\n\n    (KnowledgeProof { alpha, z1, z2 }, C)\n  }\n\n  pub fn verify(\n    &self,\n    gens_n: &MultiCommitGens,\n    transcript: &mut Transcript,\n    C: &CompressedGroup,\n  ) -> Result<(), ProofVerifyError> {\n    transcript.append_protocol_name(KnowledgeProof::protocol_name());\n    C.append_to_transcript(b\"C\", transcript);\n    self.alpha.append_to_transcript(b\"alpha\", transcript);\n\n    let c = transcript.challenge_scalar(b\"c\");\n\n    let lhs = self.z1.commit(&self.z2, gens_n).compress();\n    let rhs = (c * C.unpack()? 
+ self.alpha.unpack()?).compress();\n\n    if lhs == rhs {\n      Ok(())\n    } else {\n      Err(ProofVerifyError::InternalError)\n    }\n  }\n}\n\n#[derive(Serialize, Deserialize, Debug)]\npub struct EqualityProof {\n  alpha: CompressedGroup,\n  z: Scalar,\n}\n\nimpl EqualityProof {\n  fn protocol_name() -> &'static [u8] {\n    b\"equality proof\"\n  }\n\n  pub fn prove(\n    gens_n: &MultiCommitGens,\n    transcript: &mut Transcript,\n    random_tape: &mut RandomTape,\n    v1: &Scalar,\n    s1: &Scalar,\n    v2: &Scalar,\n    s2: &Scalar,\n  ) -> (EqualityProof, CompressedGroup, CompressedGroup) {\n    transcript.append_protocol_name(EqualityProof::protocol_name());\n\n    // produce a random Scalar\n    let r = random_tape.random_scalar(b\"r\");\n\n    let C1 = v1.commit(s1, gens_n).compress();\n    C1.append_to_transcript(b\"C1\", transcript);\n\n    let C2 = v2.commit(s2, gens_n).compress();\n    C2.append_to_transcript(b\"C2\", transcript);\n\n    let alpha = (r * gens_n.h).compress();\n    alpha.append_to_transcript(b\"alpha\", transcript);\n\n    let c = transcript.challenge_scalar(b\"c\");\n\n    let z = c * (s1 - s2) + r;\n\n    (EqualityProof { alpha, z }, C1, C2)\n  }\n\n  pub fn verify(\n    &self,\n    gens_n: &MultiCommitGens,\n    transcript: &mut Transcript,\n    C1: &CompressedGroup,\n    C2: &CompressedGroup,\n  ) -> Result<(), ProofVerifyError> {\n    transcript.append_protocol_name(EqualityProof::protocol_name());\n    C1.append_to_transcript(b\"C1\", transcript);\n    C2.append_to_transcript(b\"C2\", transcript);\n    self.alpha.append_to_transcript(b\"alpha\", transcript);\n\n    let c = transcript.challenge_scalar(b\"c\");\n    let rhs = {\n      let C = C1.unpack()? 
- C2.unpack()?;\n      (c * C + self.alpha.unpack()?).compress()\n    };\n\n    let lhs = (self.z * gens_n.h).compress();\n\n    if lhs == rhs {\n      Ok(())\n    } else {\n      Err(ProofVerifyError::InternalError)\n    }\n  }\n}\n\n#[derive(Serialize, Deserialize, Debug)]\npub struct ProductProof {\n  alpha: CompressedGroup,\n  beta: CompressedGroup,\n  delta: CompressedGroup,\n  z: [Scalar; 5],\n}\n\nimpl ProductProof {\n  fn protocol_name() -> &'static [u8] {\n    b\"product proof\"\n  }\n\n  pub fn prove(\n    gens_n: &MultiCommitGens,\n    transcript: &mut Transcript,\n    random_tape: &mut RandomTape,\n    x: &Scalar,\n    rX: &Scalar,\n    y: &Scalar,\n    rY: &Scalar,\n    z: &Scalar,\n    rZ: &Scalar,\n  ) -> (\n    ProductProof,\n    CompressedGroup,\n    CompressedGroup,\n    CompressedGroup,\n  ) {\n    transcript.append_protocol_name(ProductProof::protocol_name());\n\n    // produce five random Scalar\n    let b1 = random_tape.random_scalar(b\"b1\");\n    let b2 = random_tape.random_scalar(b\"b2\");\n    let b3 = random_tape.random_scalar(b\"b3\");\n    let b4 = random_tape.random_scalar(b\"b4\");\n    let b5 = random_tape.random_scalar(b\"b5\");\n\n    let X = x.commit(rX, gens_n).compress();\n    X.append_to_transcript(b\"X\", transcript);\n\n    let Y = y.commit(rY, gens_n).compress();\n    Y.append_to_transcript(b\"Y\", transcript);\n\n    let Z = z.commit(rZ, gens_n).compress();\n    Z.append_to_transcript(b\"Z\", transcript);\n\n    let alpha = b1.commit(&b2, gens_n).compress();\n    alpha.append_to_transcript(b\"alpha\", transcript);\n\n    let beta = b3.commit(&b4, gens_n).compress();\n    beta.append_to_transcript(b\"beta\", transcript);\n\n    let delta = {\n      let gens_X = &MultiCommitGens {\n        n: 1,\n        G: vec![X.decompress().unwrap()],\n        h: gens_n.h,\n      };\n      b3.commit(&b5, gens_X).compress()\n    };\n    delta.append_to_transcript(b\"delta\", transcript);\n\n    let c = 
transcript.challenge_scalar(b\"c\");\n\n    let z1 = b1 + c * x;\n    let z2 = b2 + c * rX;\n    let z3 = b3 + c * y;\n    let z4 = b4 + c * rY;\n    let z5 = b5 + c * (rZ - rX * y);\n    let z = [z1, z2, z3, z4, z5];\n\n    (\n      ProductProof {\n        alpha,\n        beta,\n        delta,\n        z,\n      },\n      X,\n      Y,\n      Z,\n    )\n  }\n\n  fn check_equality(\n    P: &CompressedGroup,\n    X: &CompressedGroup,\n    c: &Scalar,\n    gens_n: &MultiCommitGens,\n    z1: &Scalar,\n    z2: &Scalar,\n  ) -> bool {\n    let lhs = (P.decompress().unwrap() + c * X.decompress().unwrap()).compress();\n    let rhs = z1.commit(z2, gens_n).compress();\n\n    lhs == rhs\n  }\n\n  pub fn verify(\n    &self,\n    gens_n: &MultiCommitGens,\n    transcript: &mut Transcript,\n    X: &CompressedGroup,\n    Y: &CompressedGroup,\n    Z: &CompressedGroup,\n  ) -> Result<(), ProofVerifyError> {\n    transcript.append_protocol_name(ProductProof::protocol_name());\n\n    X.append_to_transcript(b\"X\", transcript);\n    Y.append_to_transcript(b\"Y\", transcript);\n    Z.append_to_transcript(b\"Z\", transcript);\n    self.alpha.append_to_transcript(b\"alpha\", transcript);\n    self.beta.append_to_transcript(b\"beta\", transcript);\n    self.delta.append_to_transcript(b\"delta\", transcript);\n\n    let z1 = self.z[0];\n    let z2 = self.z[1];\n    let z3 = self.z[2];\n    let z4 = self.z[3];\n    let z5 = self.z[4];\n\n    let c = transcript.challenge_scalar(b\"c\");\n\n    if ProductProof::check_equality(&self.alpha, X, &c, gens_n, &z1, &z2)\n      && ProductProof::check_equality(&self.beta, Y, &c, gens_n, &z3, &z4)\n      && ProductProof::check_equality(\n        &self.delta,\n        Z,\n        &c,\n        &MultiCommitGens {\n          n: 1,\n          G: vec![X.unpack()?],\n          h: gens_n.h,\n        },\n        &z3,\n        &z5,\n      )\n    {\n      Ok(())\n    } else {\n      Err(ProofVerifyError::InternalError)\n    }\n  }\n}\n\n#[derive(Debug, Serialize, 
Deserialize)]\npub struct DotProductProof {\n  delta: CompressedGroup,\n  beta: CompressedGroup,\n  z: Vec<Scalar>,\n  z_delta: Scalar,\n  z_beta: Scalar,\n}\n\nimpl DotProductProof {\n  fn protocol_name() -> &'static [u8] {\n    b\"dot product proof\"\n  }\n\n  pub fn compute_dotproduct(a: &[Scalar], b: &[Scalar]) -> Scalar {\n    assert_eq!(a.len(), b.len());\n    (0..a.len()).map(|i| a[i] * b[i]).sum()\n  }\n\n  pub fn prove(\n    gens_1: &MultiCommitGens,\n    gens_n: &MultiCommitGens,\n    transcript: &mut Transcript,\n    random_tape: &mut RandomTape,\n    x_vec: &[Scalar],\n    blind_x: &Scalar,\n    a_vec: &[Scalar],\n    y: &Scalar,\n    blind_y: &Scalar,\n  ) -> (DotProductProof, CompressedGroup, CompressedGroup) {\n    transcript.append_protocol_name(DotProductProof::protocol_name());\n\n    let n = x_vec.len();\n    assert_eq!(x_vec.len(), a_vec.len());\n    assert_eq!(gens_n.n, a_vec.len());\n    assert_eq!(gens_1.n, 1);\n\n    // produce randomness for the proofs\n    let d_vec = random_tape.random_vector(b\"d_vec\", n);\n    let r_delta = random_tape.random_scalar(b\"r_delta\");\n    let r_beta = random_tape.random_scalar(b\"r_beta\");\n\n    let Cx = x_vec.commit(blind_x, gens_n).compress();\n    Cx.append_to_transcript(b\"Cx\", transcript);\n\n    let Cy = y.commit(blind_y, gens_1).compress();\n    Cy.append_to_transcript(b\"Cy\", transcript);\n\n    a_vec.append_to_transcript(b\"a\", transcript);\n\n    let delta = d_vec.commit(&r_delta, gens_n).compress();\n    delta.append_to_transcript(b\"delta\", transcript);\n\n    let dotproduct_a_d = DotProductProof::compute_dotproduct(a_vec, &d_vec);\n\n    let beta = dotproduct_a_d.commit(&r_beta, gens_1).compress();\n    beta.append_to_transcript(b\"beta\", transcript);\n\n    let c = transcript.challenge_scalar(b\"c\");\n\n    let z = (0..d_vec.len())\n      .map(|i| c * x_vec[i] + d_vec[i])\n      .collect::<Vec<Scalar>>();\n\n    let z_delta = c * blind_x + r_delta;\n    let z_beta = c * blind_y + 
r_beta;\n\n    (\n      DotProductProof {\n        delta,\n        beta,\n        z,\n        z_delta,\n        z_beta,\n      },\n      Cx,\n      Cy,\n    )\n  }\n\n  pub fn verify(\n    &self,\n    gens_1: &MultiCommitGens,\n    gens_n: &MultiCommitGens,\n    transcript: &mut Transcript,\n    a: &[Scalar],\n    Cx: &CompressedGroup,\n    Cy: &CompressedGroup,\n  ) -> Result<(), ProofVerifyError> {\n    assert_eq!(gens_n.n, a.len());\n    assert_eq!(gens_1.n, 1);\n\n    transcript.append_protocol_name(DotProductProof::protocol_name());\n    Cx.append_to_transcript(b\"Cx\", transcript);\n    Cy.append_to_transcript(b\"Cy\", transcript);\n    a.append_to_transcript(b\"a\", transcript);\n    self.delta.append_to_transcript(b\"delta\", transcript);\n    self.beta.append_to_transcript(b\"beta\", transcript);\n\n    let c = transcript.challenge_scalar(b\"c\");\n\n    let mut result =\n      c * Cx.unpack()? + self.delta.unpack()? == self.z.commit(&self.z_delta, gens_n);\n\n    let dotproduct_z_a = DotProductProof::compute_dotproduct(&self.z, a);\n    result &= c * Cy.unpack()? + self.beta.unpack()? 
== dotproduct_z_a.commit(&self.z_beta, gens_1);\n\n    if result {\n      Ok(())\n    } else {\n      Err(ProofVerifyError::InternalError)\n    }\n  }\n}\n\npub struct DotProductProofGens {\n  n: usize,\n  pub gens_n: MultiCommitGens,\n  pub gens_1: MultiCommitGens,\n}\n\nimpl DotProductProofGens {\n  pub fn new(n: usize, label: &[u8]) -> Self {\n    let (gens_n, gens_1) = MultiCommitGens::new(n + 1, label).split_at(n);\n    DotProductProofGens { n, gens_n, gens_1 }\n  }\n}\n\n#[derive(Debug, Serialize, Deserialize)]\npub struct DotProductProofLog {\n  bullet_reduction_proof: BulletReductionProof,\n  delta: CompressedGroup,\n  beta: CompressedGroup,\n  z1: Scalar,\n  z2: Scalar,\n}\n\nimpl DotProductProofLog {\n  fn protocol_name() -> &'static [u8] {\n    b\"dot product proof (log)\"\n  }\n\n  pub fn compute_dotproduct(a: &[Scalar], b: &[Scalar]) -> Scalar {\n    assert_eq!(a.len(), b.len());\n    (0..a.len()).map(|i| a[i] * b[i]).sum()\n  }\n\n  pub fn prove(\n    gens: &DotProductProofGens,\n    transcript: &mut Transcript,\n    random_tape: &mut RandomTape,\n    x_vec: &[Scalar],\n    blind_x: &Scalar,\n    a_vec: &[Scalar],\n    y: &Scalar,\n    blind_y: &Scalar,\n  ) -> (DotProductProofLog, CompressedGroup, CompressedGroup) {\n    transcript.append_protocol_name(DotProductProofLog::protocol_name());\n\n    let n = x_vec.len();\n    assert_eq!(x_vec.len(), a_vec.len());\n    assert_eq!(gens.n, n);\n\n    // produce randomness for generating a proof\n    let d = random_tape.random_scalar(b\"d\");\n    let r_delta = random_tape.random_scalar(b\"r_delta\");\n    let r_beta = random_tape.random_scalar(b\"r_beta\");\n    let blinds_vec = {\n      let v1 = random_tape.random_vector(b\"blinds_vec_1\", 2 * n.log_2());\n      let v2 = random_tape.random_vector(b\"blinds_vec_2\", 2 * n.log_2());\n      (0..v1.len())\n        .map(|i| (v1[i], v2[i]))\n        .collect::<Vec<(Scalar, Scalar)>>()\n    };\n\n    let Cx = x_vec.commit(blind_x, &gens.gens_n).compress();\n    
Cx.append_to_transcript(b\"Cx\", transcript);\n\n    let Cy = y.commit(blind_y, &gens.gens_1).compress();\n    Cy.append_to_transcript(b\"Cy\", transcript);\n\n    a_vec.append_to_transcript(b\"a\", transcript);\n\n    // sample a random base and scale the generator used for\n    // the output of the inner product\n    let r = transcript.challenge_scalar(b\"r\");\n    let gens_1_scaled = gens.gens_1.scale(&r);\n\n    let blind_Gamma = blind_x + r * blind_y;\n    let (bullet_reduction_proof, _Gamma_hat, x_hat, a_hat, g_hat, rhat_Gamma) =\n      BulletReductionProof::prove(\n        transcript,\n        &gens_1_scaled.G[0],\n        &gens.gens_n.G,\n        &gens.gens_n.h,\n        x_vec,\n        a_vec,\n        &blind_Gamma,\n        &blinds_vec,\n      );\n    let y_hat = x_hat * a_hat;\n\n    let delta = {\n      let gens_hat = MultiCommitGens {\n        n: 1,\n        G: vec![g_hat],\n        h: gens.gens_1.h,\n      };\n      d.commit(&r_delta, &gens_hat).compress()\n    };\n    delta.append_to_transcript(b\"delta\", transcript);\n\n    let beta = d.commit(&r_beta, &gens_1_scaled).compress();\n    beta.append_to_transcript(b\"beta\", transcript);\n\n    let c = transcript.challenge_scalar(b\"c\");\n\n    let z1 = d + c * y_hat;\n    let z2 = a_hat * (c * rhat_Gamma + r_beta) + r_delta;\n\n    (\n      DotProductProofLog {\n        bullet_reduction_proof,\n        delta,\n        beta,\n        z1,\n        z2,\n      },\n      Cx,\n      Cy,\n    )\n  }\n\n  pub fn verify(\n    &self,\n    n: usize,\n    gens: &DotProductProofGens,\n    transcript: &mut Transcript,\n    a: &[Scalar],\n    Cx: &CompressedGroup,\n    Cy: &CompressedGroup,\n  ) -> Result<(), ProofVerifyError> {\n    assert_eq!(gens.n, n);\n    assert_eq!(a.len(), n);\n\n    transcript.append_protocol_name(DotProductProofLog::protocol_name());\n    Cx.append_to_transcript(b\"Cx\", transcript);\n    Cy.append_to_transcript(b\"Cy\", transcript);\n    a.append_to_transcript(b\"a\", transcript);\n\n    
// sample a random base and scale the generator used for\n    // the output of the inner product\n    let r = transcript.challenge_scalar(b\"r\");\n    let gens_1_scaled = gens.gens_1.scale(&r);\n\n    let Gamma = Cx.unpack()? + r * Cy.unpack()?;\n\n    let (g_hat, Gamma_hat, a_hat) =\n      self\n        .bullet_reduction_proof\n        .verify(n, a, transcript, &Gamma, &gens.gens_n.G)?;\n    self.delta.append_to_transcript(b\"delta\", transcript);\n    self.beta.append_to_transcript(b\"beta\", transcript);\n\n    let c = transcript.challenge_scalar(b\"c\");\n\n    let c_s = &c;\n    let beta_s = self.beta.unpack()?;\n    let a_hat_s = &a_hat;\n    let delta_s = self.delta.unpack()?;\n    let z1_s = &self.z1;\n    let z2_s = &self.z2;\n\n    let lhs = ((Gamma_hat * c_s + beta_s) * a_hat_s + delta_s).compress();\n    let rhs = ((g_hat + gens_1_scaled.G[0] * a_hat_s) * z1_s + gens_1_scaled.h * z2_s).compress();\n\n    if lhs == rhs {\n      Ok(())\n    } else {\n      Err(ProofVerifyError::InternalError)\n    }\n  }\n}\n\n#[cfg(test)]\nmod tests {\n  use super::*;\n  use rand_core::OsRng;\n  #[test]\n  fn check_knowledgeproof() {\n    let mut csprng: OsRng = OsRng;\n\n    let gens_1 = MultiCommitGens::new(1, b\"test-knowledgeproof\");\n\n    let x = Scalar::random(&mut csprng);\n    let r = Scalar::random(&mut csprng);\n\n    let mut random_tape = RandomTape::new(b\"proof\");\n    let mut prover_transcript = Transcript::new(b\"example\");\n    let (proof, committed_value) =\n      KnowledgeProof::prove(&gens_1, &mut prover_transcript, &mut random_tape, &x, &r);\n\n    let mut verifier_transcript = Transcript::new(b\"example\");\n    assert!(proof\n      .verify(&gens_1, &mut verifier_transcript, &committed_value)\n      .is_ok());\n  }\n\n  #[test]\n  fn check_equalityproof() {\n    let mut csprng: OsRng = OsRng;\n\n    let gens_1 = MultiCommitGens::new(1, b\"test-equalityproof\");\n    let v1 = Scalar::random(&mut csprng);\n    let v2 = 
v1;\n    let s1 = Scalar::random(&mut csprng);\n    let s2 = Scalar::random(&mut csprng);\n\n    let mut random_tape = RandomTape::new(b\"proof\");\n    let mut prover_transcript = Transcript::new(b\"example\");\n    let (proof, C1, C2) = EqualityProof::prove(\n      &gens_1,\n      &mut prover_transcript,\n      &mut random_tape,\n      &v1,\n      &s1,\n      &v2,\n      &s2,\n    );\n\n    let mut verifier_transcript = Transcript::new(b\"example\");\n    assert!(proof\n      .verify(&gens_1, &mut verifier_transcript, &C1, &C2)\n      .is_ok());\n  }\n\n  #[test]\n  fn check_productproof() {\n    let mut csprng: OsRng = OsRng;\n\n    let gens_1 = MultiCommitGens::new(1, b\"test-productproof\");\n    let x = Scalar::random(&mut csprng);\n    let rX = Scalar::random(&mut csprng);\n    let y = Scalar::random(&mut csprng);\n    let rY = Scalar::random(&mut csprng);\n    let z = x * y;\n    let rZ = Scalar::random(&mut csprng);\n\n    let mut random_tape = RandomTape::new(b\"proof\");\n    let mut prover_transcript = Transcript::new(b\"example\");\n    let (proof, X, Y, Z) = ProductProof::prove(\n      &gens_1,\n      &mut prover_transcript,\n      &mut random_tape,\n      &x,\n      &rX,\n      &y,\n      &rY,\n      &z,\n      &rZ,\n    );\n\n    let mut verifier_transcript = Transcript::new(b\"example\");\n    assert!(proof\n      .verify(&gens_1, &mut verifier_transcript, &X, &Y, &Z)\n      .is_ok());\n  }\n\n  #[test]\n  fn check_dotproductproof() {\n    let mut csprng: OsRng = OsRng;\n\n    let n = 1024;\n\n    let gens_1 = MultiCommitGens::new(1, b\"test-two\");\n    let gens_1024 = MultiCommitGens::new(n, b\"test-1024\");\n\n    let mut x: Vec<Scalar> = Vec::new();\n    let mut a: Vec<Scalar> = Vec::new();\n    for _ in 0..n {\n      x.push(Scalar::random(&mut csprng));\n      a.push(Scalar::random(&mut csprng));\n    }\n    let y = DotProductProofLog::compute_dotproduct(&x, &a);\n    let r_x = Scalar::random(&mut csprng);\n    let r_y = Scalar::random(&mut 
csprng);\n\n    let mut random_tape = RandomTape::new(b\"proof\");\n    let mut prover_transcript = Transcript::new(b\"example\");\n    let (proof, Cx, Cy) = DotProductProof::prove(\n      &gens_1,\n      &gens_1024,\n      &mut prover_transcript,\n      &mut random_tape,\n      &x,\n      &r_x,\n      &a,\n      &y,\n      &r_y,\n    );\n\n    let mut verifier_transcript = Transcript::new(b\"example\");\n    assert!(proof\n      .verify(&gens_1, &gens_1024, &mut verifier_transcript, &a, &Cx, &Cy)\n      .is_ok());\n  }\n\n  #[test]\n  fn check_dotproductproof_log() {\n    let mut csprng: OsRng = OsRng;\n\n    let n = 1024;\n\n    let gens = DotProductProofGens::new(n, b\"test-1024\");\n\n    let x: Vec<Scalar> = (0..n).map(|_i| Scalar::random(&mut csprng)).collect();\n    let a: Vec<Scalar> = (0..n).map(|_i| Scalar::random(&mut csprng)).collect();\n    let y = DotProductProof::compute_dotproduct(&x, &a);\n\n    let r_x = Scalar::random(&mut csprng);\n    let r_y = Scalar::random(&mut csprng);\n\n    let mut random_tape = RandomTape::new(b\"proof\");\n    let mut prover_transcript = Transcript::new(b\"example\");\n    let (proof, Cx, Cy) = DotProductProofLog::prove(\n      &gens,\n      &mut prover_transcript,\n      &mut random_tape,\n      &x,\n      &r_x,\n      &a,\n      &y,\n      &r_y,\n    );\n\n    let mut verifier_transcript = Transcript::new(b\"example\");\n    assert!(proof\n      .verify(n, &gens, &mut verifier_transcript, &a, &Cx, &Cy)\n      .is_ok());\n  }\n}\n"
  },
  {
    "path": "packages/Spartan-secq/src/product_tree.rs",
    "content": "#![allow(dead_code)]\nuse super::dense_mlpoly::DensePolynomial;\nuse super::dense_mlpoly::EqPolynomial;\nuse super::math::Math;\nuse super::scalar::Scalar;\nuse super::sumcheck::SumcheckInstanceProof;\nuse super::transcript::ProofTranscript;\nuse merlin::Transcript;\nuse serde::{Deserialize, Serialize};\n\n#[derive(Debug)]\npub struct ProductCircuit {\n  left_vec: Vec<DensePolynomial>,\n  right_vec: Vec<DensePolynomial>,\n}\n\nimpl ProductCircuit {\n  fn compute_layer(\n    inp_left: &DensePolynomial,\n    inp_right: &DensePolynomial,\n  ) -> (DensePolynomial, DensePolynomial) {\n    let len = inp_left.len() + inp_right.len();\n    let outp_left = (0..len / 4)\n      .map(|i| inp_left[i] * inp_right[i])\n      .collect::<Vec<Scalar>>();\n    let outp_right = (len / 4..len / 2)\n      .map(|i| inp_left[i] * inp_right[i])\n      .collect::<Vec<Scalar>>();\n\n    (\n      DensePolynomial::new(outp_left),\n      DensePolynomial::new(outp_right),\n    )\n  }\n\n  pub fn new(poly: &DensePolynomial) -> Self {\n    let mut left_vec: Vec<DensePolynomial> = Vec::new();\n    let mut right_vec: Vec<DensePolynomial> = Vec::new();\n\n    let num_layers = poly.len().log_2();\n    let (outp_left, outp_right) = poly.split(poly.len() / 2);\n\n    left_vec.push(outp_left);\n    right_vec.push(outp_right);\n\n    for i in 0..num_layers - 1 {\n      let (outp_left, outp_right) = ProductCircuit::compute_layer(&left_vec[i], &right_vec[i]);\n      left_vec.push(outp_left);\n      right_vec.push(outp_right);\n    }\n\n    ProductCircuit {\n      left_vec,\n      right_vec,\n    }\n  }\n\n  pub fn evaluate(&self) -> Scalar {\n    let len = self.left_vec.len();\n    assert_eq!(self.left_vec[len - 1].get_num_vars(), 0);\n    assert_eq!(self.right_vec[len - 1].get_num_vars(), 0);\n    self.left_vec[len - 1][0] * self.right_vec[len - 1][0]\n  }\n}\n\npub struct DotProductCircuit {\n  left: DensePolynomial,\n  right: DensePolynomial,\n  weight: DensePolynomial,\n}\n\nimpl 
DotProductCircuit {\n  pub fn new(left: DensePolynomial, right: DensePolynomial, weight: DensePolynomial) -> Self {\n    assert_eq!(left.len(), right.len());\n    assert_eq!(left.len(), weight.len());\n    DotProductCircuit {\n      left,\n      right,\n      weight,\n    }\n  }\n\n  pub fn evaluate(&self) -> Scalar {\n    (0..self.left.len())\n      .map(|i| self.left[i] * self.right[i] * self.weight[i])\n      .sum()\n  }\n\n  pub fn split(&mut self) -> (DotProductCircuit, DotProductCircuit) {\n    let idx = self.left.len() / 2;\n    assert_eq!(idx * 2, self.left.len());\n    let (l1, l2) = self.left.split(idx);\n    let (r1, r2) = self.right.split(idx);\n    let (w1, w2) = self.weight.split(idx);\n    (\n      DotProductCircuit {\n        left: l1,\n        right: r1,\n        weight: w1,\n      },\n      DotProductCircuit {\n        left: l2,\n        right: r2,\n        weight: w2,\n      },\n    )\n  }\n}\n\n#[allow(dead_code)]\n#[derive(Debug, Serialize, Deserialize)]\npub struct LayerProof {\n  pub proof: SumcheckInstanceProof,\n  pub claims: Vec<Scalar>,\n}\n\n#[allow(dead_code)]\nimpl LayerProof {\n  pub fn verify(\n    &self,\n    claim: Scalar,\n    num_rounds: usize,\n    degree_bound: usize,\n    transcript: &mut Transcript,\n  ) -> (Scalar, Vec<Scalar>) {\n    self\n      .proof\n      .verify(claim, num_rounds, degree_bound, transcript)\n      .unwrap()\n  }\n}\n\n#[allow(dead_code)]\n#[derive(Debug, Serialize, Deserialize)]\npub struct LayerProofBatched {\n  pub proof: SumcheckInstanceProof,\n  pub claims_prod_left: Vec<Scalar>,\n  pub claims_prod_right: Vec<Scalar>,\n}\n\n#[allow(dead_code)]\nimpl LayerProofBatched {\n  pub fn verify(\n    &self,\n    claim: Scalar,\n    num_rounds: usize,\n    degree_bound: usize,\n    transcript: &mut Transcript,\n  ) -> (Scalar, Vec<Scalar>) {\n    self\n      .proof\n      .verify(claim, num_rounds, degree_bound, transcript)\n      .unwrap()\n  }\n}\n\n#[derive(Debug, Serialize, Deserialize)]\npub struct 
ProductCircuitEvalProof {\n  proof: Vec<LayerProof>,\n}\n\n#[derive(Debug, Serialize, Deserialize)]\npub struct ProductCircuitEvalProofBatched {\n  proof: Vec<LayerProofBatched>,\n  claims_dotp: (Vec<Scalar>, Vec<Scalar>, Vec<Scalar>),\n}\n\nimpl ProductCircuitEvalProof {\n  #![allow(dead_code)]\n  pub fn prove(\n    circuit: &mut ProductCircuit,\n    transcript: &mut Transcript,\n  ) -> (Self, Scalar, Vec<Scalar>) {\n    let mut proof: Vec<LayerProof> = Vec::new();\n    let num_layers = circuit.left_vec.len();\n\n    let mut claim = circuit.evaluate();\n    let mut rand = Vec::new();\n    for layer_id in (0..num_layers).rev() {\n      let len = circuit.left_vec[layer_id].len() + circuit.right_vec[layer_id].len();\n\n      let mut poly_C = DensePolynomial::new(EqPolynomial::new(rand.clone()).evals());\n      assert_eq!(poly_C.len(), len / 2);\n\n      let num_rounds_prod = poly_C.len().log_2();\n      let comb_func_prod = |poly_A_comp: &Scalar,\n                            poly_B_comp: &Scalar,\n                            poly_C_comp: &Scalar|\n       -> Scalar { poly_A_comp * poly_B_comp * poly_C_comp };\n      let (proof_prod, rand_prod, claims_prod) = SumcheckInstanceProof::prove_cubic(\n        &claim,\n        num_rounds_prod,\n        &mut circuit.left_vec[layer_id],\n        &mut circuit.right_vec[layer_id],\n        &mut poly_C,\n        comb_func_prod,\n        transcript,\n      );\n\n      transcript.append_scalar(b\"claim_prod_left\", &claims_prod[0]);\n      transcript.append_scalar(b\"claim_prod_right\", &claims_prod[1]);\n\n      // produce a random challenge\n      let r_layer = transcript.challenge_scalar(b\"challenge_r_layer\");\n      claim = claims_prod[0] + r_layer * (claims_prod[1] - claims_prod[0]);\n\n      let mut ext = vec![r_layer];\n      ext.extend(rand_prod);\n      rand = ext;\n\n      proof.push(LayerProof {\n        proof: proof_prod,\n        claims: claims_prod[0..claims_prod.len() - 1].to_vec(),\n      });\n    }\n\n    
(ProductCircuitEvalProof { proof }, claim, rand)\n  }\n\n  pub fn verify(\n    &self,\n    eval: Scalar,\n    len: usize,\n    transcript: &mut Transcript,\n  ) -> (Scalar, Vec<Scalar>) {\n    let num_layers = len.log_2();\n    let mut claim = eval;\n    let mut rand: Vec<Scalar> = Vec::new();\n    //let mut num_rounds = 0;\n    assert_eq!(self.proof.len(), num_layers);\n    for (num_rounds, i) in (0..num_layers).enumerate() {\n      let (claim_last, rand_prod) = self.proof[i].verify(claim, num_rounds, 3, transcript);\n\n      let claims_prod = &self.proof[i].claims;\n      transcript.append_scalar(b\"claim_prod_left\", &claims_prod[0]);\n      transcript.append_scalar(b\"claim_prod_right\", &claims_prod[1]);\n\n      assert_eq!(rand.len(), rand_prod.len());\n      let eq: Scalar = (0..rand.len())\n        .map(|i| {\n          rand[i] * rand_prod[i] + (Scalar::one() - rand[i]) * (Scalar::one() - rand_prod[i])\n        })\n        .product();\n      assert_eq!(claims_prod[0] * claims_prod[1] * eq, claim_last);\n\n      // produce a random challenge\n      let r_layer = transcript.challenge_scalar(b\"challenge_r_layer\");\n      claim = (Scalar::one() - r_layer) * claims_prod[0] + r_layer * claims_prod[1];\n      let mut ext = vec![r_layer];\n      ext.extend(rand_prod);\n      rand = ext;\n    }\n\n    (claim, rand)\n  }\n}\n\nimpl ProductCircuitEvalProofBatched {\n  pub fn prove(\n    prod_circuit_vec: &mut Vec<&mut ProductCircuit>,\n    dotp_circuit_vec: &mut Vec<&mut DotProductCircuit>,\n    transcript: &mut Transcript,\n  ) -> (Self, Vec<Scalar>) {\n    assert!(!prod_circuit_vec.is_empty());\n\n    let mut claims_dotp_final = (Vec::new(), Vec::new(), Vec::new());\n\n    let mut proof_layers: Vec<LayerProofBatched> = Vec::new();\n    let num_layers = prod_circuit_vec[0].left_vec.len();\n    let mut claims_to_verify = (0..prod_circuit_vec.len())\n      .map(|i| prod_circuit_vec[i].evaluate())\n      .collect::<Vec<Scalar>>();\n    let mut rand = Vec::new();\n    
for layer_id in (0..num_layers).rev() {\n      // prepare parallel instances that share poly_C first\n      let len = prod_circuit_vec[0].left_vec[layer_id].len()\n        + prod_circuit_vec[0].right_vec[layer_id].len();\n\n      let mut poly_C_par = DensePolynomial::new(EqPolynomial::new(rand.clone()).evals());\n      assert_eq!(poly_C_par.len(), len / 2);\n\n      let num_rounds_prod = poly_C_par.len().log_2();\n      let comb_func_prod = |poly_A_comp: &Scalar,\n                            poly_B_comp: &Scalar,\n                            poly_C_comp: &Scalar|\n       -> Scalar { poly_A_comp * poly_B_comp * poly_C_comp };\n\n      let mut poly_A_batched_par: Vec<&mut DensePolynomial> = Vec::new();\n      let mut poly_B_batched_par: Vec<&mut DensePolynomial> = Vec::new();\n      for prod_circuit in prod_circuit_vec.iter_mut() {\n        poly_A_batched_par.push(&mut prod_circuit.left_vec[layer_id]);\n        poly_B_batched_par.push(&mut prod_circuit.right_vec[layer_id]);\n      }\n      let poly_vec_par = (\n        &mut poly_A_batched_par,\n        &mut poly_B_batched_par,\n        &mut poly_C_par,\n      );\n\n      // prepare sequential instances that don't share poly_C\n      let mut poly_A_batched_seq: Vec<&mut DensePolynomial> = Vec::new();\n      let mut poly_B_batched_seq: Vec<&mut DensePolynomial> = Vec::new();\n      let mut poly_C_batched_seq: Vec<&mut DensePolynomial> = Vec::new();\n      if layer_id == 0 && !dotp_circuit_vec.is_empty() {\n        // add additional claims\n        for item in dotp_circuit_vec.iter() {\n          claims_to_verify.push(item.evaluate());\n          assert_eq!(len / 2, item.left.len());\n          assert_eq!(len / 2, item.right.len());\n          assert_eq!(len / 2, item.weight.len());\n        }\n\n        for dotp_circuit in dotp_circuit_vec.iter_mut() {\n          poly_A_batched_seq.push(&mut dotp_circuit.left);\n          poly_B_batched_seq.push(&mut dotp_circuit.right);\n          poly_C_batched_seq.push(&mut 
dotp_circuit.weight);\n        }\n      }\n      let poly_vec_seq = (\n        &mut poly_A_batched_seq,\n        &mut poly_B_batched_seq,\n        &mut poly_C_batched_seq,\n      );\n\n      // produce a fresh set of coeffs and a joint claim\n      let coeff_vec =\n        transcript.challenge_vector(b\"rand_coeffs_next_layer\", claims_to_verify.len());\n      let claim = (0..claims_to_verify.len())\n        .map(|i| claims_to_verify[i] * coeff_vec[i])\n        .sum();\n\n      let (proof, rand_prod, claims_prod, claims_dotp) = SumcheckInstanceProof::prove_cubic_batched(\n        &claim,\n        num_rounds_prod,\n        poly_vec_par,\n        poly_vec_seq,\n        &coeff_vec,\n        comb_func_prod,\n        transcript,\n      );\n\n      let (claims_prod_left, claims_prod_right, _claims_eq) = claims_prod;\n      for i in 0..prod_circuit_vec.len() {\n        transcript.append_scalar(b\"claim_prod_left\", &claims_prod_left[i]);\n        transcript.append_scalar(b\"claim_prod_right\", &claims_prod_right[i]);\n      }\n\n      if layer_id == 0 && !dotp_circuit_vec.is_empty() {\n        let (claims_dotp_left, claims_dotp_right, claims_dotp_weight) = claims_dotp;\n        for i in 0..dotp_circuit_vec.len() {\n          transcript.append_scalar(b\"claim_dotp_left\", &claims_dotp_left[i]);\n          transcript.append_scalar(b\"claim_dotp_right\", &claims_dotp_right[i]);\n          transcript.append_scalar(b\"claim_dotp_weight\", &claims_dotp_weight[i]);\n        }\n        claims_dotp_final = (claims_dotp_left, claims_dotp_right, claims_dotp_weight);\n      }\n\n      // produce a random challenge to condense two claims into a single claim\n      let r_layer = transcript.challenge_scalar(b\"challenge_r_layer\");\n\n      claims_to_verify = (0..prod_circuit_vec.len())\n        .map(|i| claims_prod_left[i] + r_layer * (claims_prod_right[i] - claims_prod_left[i]))\n        .collect::<Vec<Scalar>>();\n\n      let mut ext = vec![r_layer];\n      ext.extend(rand_prod);\n   
   rand = ext;\n\n      proof_layers.push(LayerProofBatched {\n        proof,\n        claims_prod_left,\n        claims_prod_right,\n      });\n    }\n\n    (\n      ProductCircuitEvalProofBatched {\n        proof: proof_layers,\n        claims_dotp: claims_dotp_final,\n      },\n      rand,\n    )\n  }\n\n  pub fn verify(\n    &self,\n    claims_prod_vec: &[Scalar],\n    claims_dotp_vec: &[Scalar],\n    len: usize,\n    transcript: &mut Transcript,\n  ) -> (Vec<Scalar>, Vec<Scalar>, Vec<Scalar>) {\n    let num_layers = len.log_2();\n    let mut rand: Vec<Scalar> = Vec::new();\n    //let mut num_rounds = 0;\n    assert_eq!(self.proof.len(), num_layers);\n\n    let mut claims_to_verify = claims_prod_vec.to_owned();\n    let mut claims_to_verify_dotp: Vec<Scalar> = Vec::new();\n    for (num_rounds, i) in (0..num_layers).enumerate() {\n      if i == num_layers - 1 {\n        claims_to_verify.extend(claims_dotp_vec);\n      }\n\n      // produce random coefficients, one for each instance\n      let coeff_vec =\n        transcript.challenge_vector(b\"rand_coeffs_next_layer\", claims_to_verify.len());\n\n      // produce a joint claim\n      let claim = (0..claims_to_verify.len())\n        .map(|i| claims_to_verify[i] * coeff_vec[i])\n        .sum();\n\n      let (claim_last, rand_prod) = self.proof[i].verify(claim, num_rounds, 3, transcript);\n\n      let claims_prod_left = &self.proof[i].claims_prod_left;\n      let claims_prod_right = &self.proof[i].claims_prod_right;\n      assert_eq!(claims_prod_left.len(), claims_prod_vec.len());\n      assert_eq!(claims_prod_right.len(), claims_prod_vec.len());\n\n      for i in 0..claims_prod_vec.len() {\n        transcript.append_scalar(b\"claim_prod_left\", &claims_prod_left[i]);\n        transcript.append_scalar(b\"claim_prod_right\", &claims_prod_right[i]);\n      }\n\n      assert_eq!(rand.len(), rand_prod.len());\n      let eq: Scalar = (0..rand.len())\n        .map(|i| {\n          rand[i] * rand_prod[i] + (Scalar::one() 
- rand[i]) * (Scalar::one() - rand_prod[i])\n        })\n        .product();\n      let mut claim_expected: Scalar = (0..claims_prod_vec.len())\n        .map(|i| coeff_vec[i] * (claims_prod_left[i] * claims_prod_right[i] * eq))\n        .sum();\n\n      // add claims from the dotp instances\n      if i == num_layers - 1 {\n        let num_prod_instances = claims_prod_vec.len();\n        let (claims_dotp_left, claims_dotp_right, claims_dotp_weight) = &self.claims_dotp;\n        for i in 0..claims_dotp_left.len() {\n          transcript.append_scalar(b\"claim_dotp_left\", &claims_dotp_left[i]);\n          transcript.append_scalar(b\"claim_dotp_right\", &claims_dotp_right[i]);\n          transcript.append_scalar(b\"claim_dotp_weight\", &claims_dotp_weight[i]);\n\n          claim_expected += coeff_vec[i + num_prod_instances]\n            * claims_dotp_left[i]\n            * claims_dotp_right[i]\n            * claims_dotp_weight[i];\n        }\n      }\n\n      assert_eq!(claim_expected, claim_last);\n\n      // produce a random challenge\n      let r_layer = transcript.challenge_scalar(b\"challenge_r_layer\");\n\n      claims_to_verify = (0..claims_prod_left.len())\n        .map(|i| claims_prod_left[i] + r_layer * (claims_prod_right[i] - claims_prod_left[i]))\n        .collect::<Vec<Scalar>>();\n\n      // add claims to verify for dotp circuit\n      if i == num_layers - 1 {\n        let (claims_dotp_left, claims_dotp_right, claims_dotp_weight) = &self.claims_dotp;\n\n        for i in 0..claims_dotp_vec.len() / 2 {\n          // combine left claims\n          let claim_left = claims_dotp_left[2 * i]\n            + r_layer * (claims_dotp_left[2 * i + 1] - claims_dotp_left[2 * i]);\n\n          let claim_right = claims_dotp_right[2 * i]\n            + r_layer * (claims_dotp_right[2 * i + 1] - claims_dotp_right[2 * i]);\n\n          let claim_weight = claims_dotp_weight[2 * i]\n            + r_layer * (claims_dotp_weight[2 * i + 1] - claims_dotp_weight[2 * i]);\n          
claims_to_verify_dotp.push(claim_left);\n          claims_to_verify_dotp.push(claim_right);\n          claims_to_verify_dotp.push(claim_weight);\n        }\n      }\n\n      let mut ext = vec![r_layer];\n      ext.extend(rand_prod);\n      rand = ext;\n    }\n    (claims_to_verify, claims_to_verify_dotp, rand)\n  }\n}\n"
  },
  {
    "path": "packages/Spartan-secq/src/r1csinstance.rs",
    "content": "use crate::transcript::AppendToTranscript;\n\nuse super::dense_mlpoly::DensePolynomial;\nuse super::errors::ProofVerifyError;\nuse super::math::Math;\nuse super::random::RandomTape;\nuse super::scalar::Scalar;\nuse super::sparse_mlpoly::{\n  MultiSparseMatPolynomialAsDense, SparseMatEntry, SparseMatPolyCommitment,\n  SparseMatPolyCommitmentGens, SparseMatPolyEvalProof, SparseMatPolynomial,\n};\nuse super::timer::Timer;\nuse flate2::{write::ZlibEncoder, Compression};\nuse merlin::Transcript;\nuse rand_core::OsRng;\nuse serde::{Deserialize, Serialize};\n\n#[derive(Debug, Serialize, Deserialize)]\npub struct R1CSInstance {\n  num_cons: usize,\n  num_vars: usize,\n  num_inputs: usize,\n  A: SparseMatPolynomial,\n  B: SparseMatPolynomial,\n  C: SparseMatPolynomial,\n}\n\npub struct R1CSCommitmentGens {\n  gens: SparseMatPolyCommitmentGens,\n}\n\nimpl R1CSCommitmentGens {\n  pub fn new(\n    label: &'static [u8],\n    num_cons: usize,\n    num_vars: usize,\n    num_inputs: usize,\n    num_nz_entries: usize,\n  ) -> R1CSCommitmentGens {\n    assert!(num_inputs < num_vars);\n    let num_poly_vars_x = num_cons.log_2();\n    let num_poly_vars_y = (2 * num_vars).log_2();\n    let gens =\n      SparseMatPolyCommitmentGens::new(label, num_poly_vars_x, num_poly_vars_y, num_nz_entries, 3);\n    R1CSCommitmentGens { gens }\n  }\n}\n\n#[derive(Debug, Serialize, Deserialize)]\npub struct R1CSCommitment {\n  num_cons: usize,\n  num_vars: usize,\n  num_inputs: usize,\n  comm: SparseMatPolyCommitment,\n}\n\nimpl AppendToTranscript for R1CSCommitment {\n  fn append_to_transcript(&self, _label: &'static [u8], transcript: &mut Transcript) {\n    transcript.append_u64(b\"num_cons\", self.num_cons as u64);\n    transcript.append_u64(b\"num_vars\", self.num_vars as u64);\n    transcript.append_u64(b\"num_inputs\", self.num_inputs as u64);\n    self.comm.append_to_transcript(b\"comm\", transcript);\n  }\n}\n\npub struct R1CSDecommitment {\n  dense: 
MultiSparseMatPolynomialAsDense,\n}\n\nimpl R1CSCommitment {\n  pub fn get_num_cons(&self) -> usize {\n    self.num_cons\n  }\n\n  pub fn get_num_vars(&self) -> usize {\n    self.num_vars\n  }\n\n  pub fn get_num_inputs(&self) -> usize {\n    self.num_inputs\n  }\n}\n\nimpl R1CSInstance {\n  pub fn new(\n    num_cons: usize,\n    num_vars: usize,\n    num_inputs: usize,\n    A: &[(usize, usize, Scalar)],\n    B: &[(usize, usize, Scalar)],\n    C: &[(usize, usize, Scalar)],\n  ) -> R1CSInstance {\n    Timer::print(&format!(\"number_of_constraints {}\", num_cons));\n    Timer::print(&format!(\"number_of_variables {}\", num_vars));\n    Timer::print(&format!(\"number_of_inputs {}\", num_inputs));\n    Timer::print(&format!(\"number_non-zero_entries_A {}\", A.len()));\n    Timer::print(&format!(\"number_non-zero_entries_B {}\", B.len()));\n    Timer::print(&format!(\"number_non-zero_entries_C {}\", C.len()));\n\n    // check that num_cons is a power of 2\n    assert_eq!(num_cons.next_power_of_two(), num_cons);\n\n    // check that num_vars is a power of 2\n    assert_eq!(num_vars.next_power_of_two(), num_vars);\n\n    // check that number_inputs + 1 <= num_vars\n    assert!(num_inputs < num_vars);\n\n    // no errors, so create polynomials\n    let num_poly_vars_x = num_cons.log_2();\n    let num_poly_vars_y = (2 * num_vars).log_2();\n\n    let mat_A = (0..A.len())\n      .map(|i| SparseMatEntry::new(A[i].0, A[i].1, A[i].2))\n      .collect::<Vec<SparseMatEntry>>();\n    let mat_B = (0..B.len())\n      .map(|i| SparseMatEntry::new(B[i].0, B[i].1, B[i].2))\n      .collect::<Vec<SparseMatEntry>>();\n    let mat_C = (0..C.len())\n      .map(|i| SparseMatEntry::new(C[i].0, C[i].1, C[i].2))\n      .collect::<Vec<SparseMatEntry>>();\n\n    let poly_A = SparseMatPolynomial::new(num_poly_vars_x, num_poly_vars_y, mat_A);\n    let poly_B = SparseMatPolynomial::new(num_poly_vars_x, num_poly_vars_y, mat_B);\n    let poly_C = SparseMatPolynomial::new(num_poly_vars_x, 
num_poly_vars_y, mat_C);\n\n    R1CSInstance {\n      num_cons,\n      num_vars,\n      num_inputs,\n      A: poly_A,\n      B: poly_B,\n      C: poly_C,\n    }\n  }\n\n  pub fn get_num_vars(&self) -> usize {\n    self.num_vars\n  }\n\n  pub fn get_num_cons(&self) -> usize {\n    self.num_cons\n  }\n\n  pub fn get_num_inputs(&self) -> usize {\n    self.num_inputs\n  }\n\n  pub fn get_digest(&self) -> Vec<u8> {\n    let mut encoder = ZlibEncoder::new(Vec::new(), Compression::default());\n    bincode::serialize_into(&mut encoder, &self).unwrap();\n    encoder.finish().unwrap()\n  }\n\n  pub fn produce_synthetic_r1cs(\n    num_cons: usize,\n    num_vars: usize,\n    num_inputs: usize,\n  ) -> (R1CSInstance, Vec<Scalar>, Vec<Scalar>) {\n    Timer::print(&format!(\"number_of_constraints {}\", num_cons));\n    Timer::print(&format!(\"number_of_variables {}\", num_vars));\n    Timer::print(&format!(\"number_of_inputs {}\", num_inputs));\n\n    let mut csprng: OsRng = OsRng;\n\n    // assert num_cons and num_vars are power of 2\n    assert_eq!((num_cons.log_2()).pow2(), num_cons);\n    assert_eq!((num_vars.log_2()).pow2(), num_vars);\n\n    // num_inputs + 1 <= num_vars\n    assert!(num_inputs < num_vars);\n\n    // z is organized as [vars,1,io]\n    let size_z = num_vars + num_inputs + 1;\n\n    // produce a random satisfying assignment\n    let Z = {\n      let mut Z: Vec<Scalar> = (0..size_z)\n        .map(|_i| Scalar::random(&mut csprng))\n        .collect::<Vec<Scalar>>();\n      Z[num_vars] = Scalar::one(); // set the constant term to 1\n      Z\n    };\n\n    // three sparse matrices\n    let mut A: Vec<SparseMatEntry> = Vec::new();\n    let mut B: Vec<SparseMatEntry> = Vec::new();\n    let mut C: Vec<SparseMatEntry> = Vec::new();\n    let one = Scalar::one();\n    for i in 0..num_cons {\n      let A_idx = i % size_z;\n      let B_idx = (i + 2) % size_z;\n      A.push(SparseMatEntry::new(i, A_idx, one));\n      B.push(SparseMatEntry::new(i, B_idx, one));\n      let 
AB_val = Z[A_idx] * Z[B_idx];\n\n      let C_idx = (i + 3) % size_z;\n      let C_val = Z[C_idx];\n\n      if C_val == Scalar::zero() {\n        C.push(SparseMatEntry::new(i, num_vars, AB_val));\n      } else {\n        C.push(SparseMatEntry::new(\n          i,\n          C_idx,\n          AB_val * C_val.invert().unwrap(),\n        ));\n      }\n    }\n\n    Timer::print(&format!(\"number_non-zero_entries_A {}\", A.len()));\n    Timer::print(&format!(\"number_non-zero_entries_B {}\", B.len()));\n    Timer::print(&format!(\"number_non-zero_entries_C {}\", C.len()));\n\n    let num_poly_vars_x = num_cons.log_2();\n    let num_poly_vars_y = (2 * num_vars).log_2();\n    let poly_A = SparseMatPolynomial::new(num_poly_vars_x, num_poly_vars_y, A);\n    let poly_B = SparseMatPolynomial::new(num_poly_vars_x, num_poly_vars_y, B);\n    let poly_C = SparseMatPolynomial::new(num_poly_vars_x, num_poly_vars_y, C);\n\n    let inst = R1CSInstance {\n      num_cons,\n      num_vars,\n      num_inputs,\n      A: poly_A,\n      B: poly_B,\n      C: poly_C,\n    };\n\n    assert!(inst.is_sat(&Z[..num_vars], &Z[num_vars + 1..]));\n\n    (inst, Z[..num_vars].to_vec(), Z[num_vars + 1..].to_vec())\n  }\n\n  pub fn is_sat(&self, vars: &[Scalar], input: &[Scalar]) -> bool {\n    assert_eq!(vars.len(), self.num_vars);\n    assert_eq!(input.len(), self.num_inputs);\n\n    let z = {\n      let mut z = vars.to_vec();\n      z.extend(&vec![Scalar::one()]);\n      z.extend(input);\n      z\n    };\n\n    // verify if Az * Bz - Cz = [0...]\n    let Az = self\n      .A\n      .multiply_vec(self.num_cons, self.num_vars + self.num_inputs + 1, &z);\n    let Bz = self\n      .B\n      .multiply_vec(self.num_cons, self.num_vars + self.num_inputs + 1, &z);\n    let Cz = self\n      .C\n      .multiply_vec(self.num_cons, self.num_vars + self.num_inputs + 1, &z);\n\n    assert_eq!(Az.len(), self.num_cons);\n    assert_eq!(Bz.len(), self.num_cons);\n    assert_eq!(Cz.len(), self.num_cons);\n    let res: 
usize = (0..self.num_cons)\n      .map(|i| usize::from(Az[i] * Bz[i] != Cz[i]))\n      .sum();\n\n    res == 0\n  }\n\n  pub fn multiply_vec(\n    &self,\n    num_rows: usize,\n    num_cols: usize,\n    z: &[Scalar],\n  ) -> (DensePolynomial, DensePolynomial, DensePolynomial) {\n    assert_eq!(num_rows, self.num_cons);\n    assert_eq!(z.len(), num_cols);\n    assert!(num_cols > self.num_vars);\n    (\n      DensePolynomial::new(self.A.multiply_vec(num_rows, num_cols, z)),\n      DensePolynomial::new(self.B.multiply_vec(num_rows, num_cols, z)),\n      DensePolynomial::new(self.C.multiply_vec(num_rows, num_cols, z)),\n    )\n  }\n\n  pub fn compute_eval_table_sparse(\n    &self,\n    num_rows: usize,\n    num_cols: usize,\n    evals: &[Scalar],\n  ) -> (Vec<Scalar>, Vec<Scalar>, Vec<Scalar>) {\n    assert_eq!(num_rows, self.num_cons);\n    assert!(num_cols > self.num_vars);\n\n    let evals_A = self.A.compute_eval_table_sparse(evals, num_rows, num_cols);\n    let evals_B = self.B.compute_eval_table_sparse(evals, num_rows, num_cols);\n    let evals_C = self.C.compute_eval_table_sparse(evals, num_rows, num_cols);\n\n    (evals_A, evals_B, evals_C)\n  }\n\n  pub fn evaluate(&self, rx: &[Scalar], ry: &[Scalar]) -> (Scalar, Scalar, Scalar) {\n    let evals = SparseMatPolynomial::multi_evaluate(&[&self.A, &self.B, &self.C], rx, ry);\n    (evals[0], evals[1], evals[2])\n  }\n\n  pub fn commit(&self, gens: &R1CSCommitmentGens) -> (R1CSCommitment, R1CSDecommitment) {\n    let (comm, dense) = SparseMatPolynomial::multi_commit(&[&self.A, &self.B, &self.C], &gens.gens);\n    let r1cs_comm = R1CSCommitment {\n      num_cons: self.num_cons,\n      num_vars: self.num_vars,\n      num_inputs: self.num_inputs,\n      comm,\n    };\n\n    let r1cs_decomm = R1CSDecommitment { dense };\n\n    (r1cs_comm, r1cs_decomm)\n  }\n}\n\n#[derive(Debug, Serialize, Deserialize)]\npub struct R1CSEvalProof {\n  proof: SparseMatPolyEvalProof,\n}\n\nimpl R1CSEvalProof {\n  pub fn prove(\n    decomm: 
&R1CSDecommitment,\n    rx: &[Scalar], // point at which the polynomial is evaluated\n    ry: &[Scalar],\n    evals: &(Scalar, Scalar, Scalar),\n    gens: &R1CSCommitmentGens,\n    transcript: &mut Transcript,\n    random_tape: &mut RandomTape,\n  ) -> R1CSEvalProof {\n    let timer = Timer::new(\"R1CSEvalProof::prove\");\n    let proof = SparseMatPolyEvalProof::prove(\n      &decomm.dense,\n      rx,\n      ry,\n      &[evals.0, evals.1, evals.2],\n      &gens.gens,\n      transcript,\n      random_tape,\n    );\n    timer.stop();\n\n    R1CSEvalProof { proof }\n  }\n\n  pub fn verify(\n    &self,\n    comm: &R1CSCommitment,\n    rx: &[Scalar], // point at which the R1CS matrix polynomials are evaluated\n    ry: &[Scalar],\n    evals: &(Scalar, Scalar, Scalar),\n    gens: &R1CSCommitmentGens,\n    transcript: &mut Transcript,\n  ) -> Result<(), ProofVerifyError> {\n    self.proof.verify(\n      &comm.comm,\n      rx,\n      ry,\n      &[evals.0, evals.1, evals.2],\n      &gens.gens,\n      transcript,\n    )\n  }\n}\n"
  },
  {
    "path": "packages/Spartan-secq/src/r1csproof.rs",
    "content": "#![allow(clippy::too_many_arguments)]\nuse super::commitments::{Commitments, MultiCommitGens};\nuse super::dense_mlpoly::{\n  DensePolynomial, EqPolynomial, PolyCommitment, PolyCommitmentGens, PolyEvalProof,\n};\nuse super::errors::ProofVerifyError;\nuse super::group::{CompressedGroup, GroupElement, VartimeMultiscalarMul};\nuse super::math::Math;\nuse super::nizk::{EqualityProof, KnowledgeProof, ProductProof};\nuse super::r1csinstance::R1CSInstance;\nuse super::random::RandomTape;\nuse super::scalar::Scalar;\nuse super::sparse_mlpoly::{SparsePolyEntry, SparsePolynomial};\nuse super::sumcheck::ZKSumcheckInstanceProof;\nuse super::timer::Timer;\nuse super::transcript::{AppendToTranscript, ProofTranscript};\nuse crate::group::DecompressEncodedPoint;\nuse core::iter;\nuse merlin::Transcript;\nuse serde::{Deserialize, Serialize};\n\n#[derive(Serialize, Deserialize, Debug)]\npub struct R1CSProof {\n  comm_vars: PolyCommitment,\n  sc_proof_phase1: ZKSumcheckInstanceProof,\n  claims_phase2: (\n    CompressedGroup,\n    CompressedGroup,\n    CompressedGroup,\n    CompressedGroup,\n  ),\n  pok_claims_phase2: (KnowledgeProof, ProductProof),\n  proof_eq_sc_phase1: EqualityProof,\n  sc_proof_phase2: ZKSumcheckInstanceProof,\n  comm_vars_at_ry: CompressedGroup,\n  proof_eval_vars_at_ry: PolyEvalProof,\n  proof_eq_sc_phase2: EqualityProof,\n}\n\npub struct R1CSSumcheckGens {\n  gens_1: MultiCommitGens,\n  gens_3: MultiCommitGens,\n  gens_4: MultiCommitGens,\n}\n\n// TODO: fix passing gens_1_ref\nimpl R1CSSumcheckGens {\n  pub fn new(label: &'static [u8], gens_1_ref: &MultiCommitGens) -> Self {\n    let gens_1 = gens_1_ref.clone();\n    let gens_3 = MultiCommitGens::new(3, label);\n    let gens_4 = MultiCommitGens::new(4, label);\n\n    R1CSSumcheckGens {\n      gens_1,\n      gens_3,\n      gens_4,\n    }\n  }\n}\n\npub struct R1CSGens {\n  gens_sc: R1CSSumcheckGens,\n  gens_pc: PolyCommitmentGens,\n}\n\nimpl R1CSGens {\n  pub fn new(label: &'static [u8], 
_num_cons: usize, num_vars: usize) -> Self {\n    let num_poly_vars = num_vars.log_2();\n    let gens_pc = PolyCommitmentGens::new(num_poly_vars, label);\n    let gens_sc = R1CSSumcheckGens::new(label, &gens_pc.gens.gens_1);\n    R1CSGens { gens_sc, gens_pc }\n  }\n}\n\nimpl R1CSProof {\n  fn prove_phase_one(\n    num_rounds: usize,\n    evals_tau: &mut DensePolynomial,\n    evals_Az: &mut DensePolynomial,\n    evals_Bz: &mut DensePolynomial,\n    evals_Cz: &mut DensePolynomial,\n    gens: &R1CSSumcheckGens,\n    transcript: &mut Transcript,\n    random_tape: &mut RandomTape,\n  ) -> (ZKSumcheckInstanceProof, Vec<Scalar>, Vec<Scalar>, Scalar) {\n    let comb_func = |poly_A_comp: &Scalar,\n                     poly_B_comp: &Scalar,\n                     poly_C_comp: &Scalar,\n                     poly_D_comp: &Scalar|\n     -> Scalar { poly_A_comp * (poly_B_comp * poly_C_comp - poly_D_comp) };\n\n    let (sc_proof_phase_one, r, claims, blind_claim_postsc) =\n      ZKSumcheckInstanceProof::prove_cubic_with_additive_term(\n        &Scalar::zero(), // claim is zero\n        &Scalar::zero(), // blind for claim is also zero\n        num_rounds,\n        evals_tau,\n        evals_Az,\n        evals_Bz,\n        evals_Cz,\n        comb_func,\n        &gens.gens_1,\n        &gens.gens_4,\n        transcript,\n        random_tape,\n      );\n\n    (sc_proof_phase_one, r, claims, blind_claim_postsc)\n  }\n\n  fn prove_phase_two(\n    num_rounds: usize,\n    claim: &Scalar,\n    blind_claim: &Scalar,\n    evals_z: &mut DensePolynomial,\n    evals_ABC: &mut DensePolynomial,\n    gens: &R1CSSumcheckGens,\n    transcript: &mut Transcript,\n    random_tape: &mut RandomTape,\n  ) -> (ZKSumcheckInstanceProof, Vec<Scalar>, Vec<Scalar>, Scalar) {\n    let comb_func =\n      |poly_A_comp: &Scalar, poly_B_comp: &Scalar| -> Scalar { poly_A_comp * poly_B_comp };\n    let (sc_proof_phase_two, r, claims, blind_claim_postsc) = ZKSumcheckInstanceProof::prove_quad(\n      claim,\n      
blind_claim,\n      num_rounds,\n      evals_z,\n      evals_ABC,\n      comb_func,\n      &gens.gens_1,\n      &gens.gens_3,\n      transcript,\n      random_tape,\n    );\n\n    (sc_proof_phase_two, r, claims, blind_claim_postsc)\n  }\n\n  fn protocol_name() -> &'static [u8] {\n    b\"R1CS proof\"\n  }\n\n  pub fn prove(\n    inst: &R1CSInstance,\n    vars: Vec<Scalar>,\n    input: &[Scalar],\n    gens: &R1CSGens,\n    transcript: &mut Transcript,\n    random_tape: &mut RandomTape,\n  ) -> (R1CSProof, Vec<Scalar>, Vec<Scalar>) {\n    let timer_prove = Timer::new(\"R1CSProof::prove\");\n    transcript.append_protocol_name(R1CSProof::protocol_name());\n\n    // we currently require the number of |inputs| + 1 to be at most number of vars\n    assert!(input.len() < vars.len());\n\n    input.append_to_transcript(b\"input\", transcript);\n\n    let timer_commit = Timer::new(\"polycommit\");\n    let (poly_vars, comm_vars, blinds_vars) = {\n      // create a multilinear polynomial using the supplied assignment for variables\n      let poly_vars = DensePolynomial::new(vars.clone());\n\n      // produce a commitment to the satisfying assignment\n      let (comm_vars, blinds_vars) = poly_vars.commit(&gens.gens_pc, Some(random_tape));\n\n      // add the commitment to the prover's transcript\n      comm_vars.append_to_transcript(b\"poly_commitment\", transcript);\n      (poly_vars, comm_vars, blinds_vars)\n    };\n    timer_commit.stop();\n\n    let timer_sc_proof_phase1 = Timer::new(\"prove_sc_phase_one\");\n\n    // append input to variables to create a single vector z\n    let z = {\n      let num_inputs = input.len();\n      let num_vars = vars.len();\n      let mut z = vars;\n      z.extend(&vec![Scalar::one()]); // add constant term in z\n      z.extend(input);\n      z.extend(&vec![Scalar::zero(); num_vars - num_inputs - 1]); // we will pad with zeros\n      z\n    };\n\n    // derive the verifier's challenge tau\n    let (num_rounds_x, num_rounds_y) = 
(inst.get_num_cons().log_2(), z.len().log_2());\n    let tau = transcript.challenge_vector(b\"challenge_tau\", num_rounds_x);\n    // compute the initial evaluation table for R(\\tau, x)\n    let mut poly_tau = DensePolynomial::new(EqPolynomial::new(tau).evals());\n    let (mut poly_Az, mut poly_Bz, mut poly_Cz) =\n      inst.multiply_vec(inst.get_num_cons(), z.len(), &z);\n\n    let (sc_proof_phase1, rx, _claims_phase1, blind_claim_postsc1) = R1CSProof::prove_phase_one(\n      num_rounds_x,\n      &mut poly_tau,\n      &mut poly_Az,\n      &mut poly_Bz,\n      &mut poly_Cz,\n      &gens.gens_sc,\n      transcript,\n      random_tape,\n    );\n    assert_eq!(poly_tau.len(), 1);\n    assert_eq!(poly_Az.len(), 1);\n    assert_eq!(poly_Bz.len(), 1);\n    assert_eq!(poly_Cz.len(), 1);\n    timer_sc_proof_phase1.stop();\n\n    let (tau_claim, Az_claim, Bz_claim, Cz_claim) =\n      (&poly_tau[0], &poly_Az[0], &poly_Bz[0], &poly_Cz[0]);\n    let (Az_blind, Bz_blind, Cz_blind, prod_Az_Bz_blind) = (\n      random_tape.random_scalar(b\"Az_blind\"),\n      random_tape.random_scalar(b\"Bz_blind\"),\n      random_tape.random_scalar(b\"Cz_blind\"),\n      random_tape.random_scalar(b\"prod_Az_Bz_blind\"),\n    );\n\n    let (pok_Cz_claim, comm_Cz_claim) = {\n      KnowledgeProof::prove(\n        &gens.gens_sc.gens_1,\n        transcript,\n        random_tape,\n        Cz_claim,\n        &Cz_blind,\n      )\n    };\n\n    let (proof_prod, comm_Az_claim, comm_Bz_claim, comm_prod_Az_Bz_claims) = {\n      let prod = Az_claim * Bz_claim;\n      ProductProof::prove(\n        &gens.gens_sc.gens_1,\n        transcript,\n        random_tape,\n        Az_claim,\n        &Az_blind,\n        Bz_claim,\n        &Bz_blind,\n        &prod,\n        &prod_Az_Bz_blind,\n      )\n    };\n\n    comm_Az_claim.append_to_transcript(b\"comm_Az_claim\", transcript);\n    comm_Bz_claim.append_to_transcript(b\"comm_Bz_claim\", transcript);\n    comm_Cz_claim.append_to_transcript(b\"comm_Cz_claim\", 
transcript);\n    comm_prod_Az_Bz_claims.append_to_transcript(b\"comm_prod_Az_Bz_claims\", transcript);\n\n    // prove the final step of sum-check #1\n    let taus_bound_rx = tau_claim;\n    let blind_expected_claim_postsc1 = taus_bound_rx * (prod_Az_Bz_blind - Cz_blind);\n    let claim_post_phase1 = (Az_claim * Bz_claim - Cz_claim) * taus_bound_rx;\n    let (proof_eq_sc_phase1, _C1, _C2) = EqualityProof::prove(\n      &gens.gens_sc.gens_1,\n      transcript,\n      random_tape,\n      &claim_post_phase1,\n      &blind_expected_claim_postsc1,\n      &claim_post_phase1,\n      &blind_claim_postsc1,\n    );\n\n    let timer_sc_proof_phase2 = Timer::new(\"prove_sc_phase_two\");\n    // combine the three claims into a single claim\n    let r_A = transcript.challenge_scalar(b\"challenege_Az\");\n    let r_B = transcript.challenge_scalar(b\"challenege_Bz\");\n    let r_C = transcript.challenge_scalar(b\"challenege_Cz\");\n    let claim_phase2 = r_A * Az_claim + r_B * Bz_claim + r_C * Cz_claim;\n    let blind_claim_phase2 = r_A * Az_blind + r_B * Bz_blind + r_C * Cz_blind;\n\n    let evals_ABC = {\n      // compute the evaluation table of eq(rx, x) over the Boolean hypercube\n      let evals_rx = EqPolynomial::new(rx.clone()).evals();\n      let (evals_A, evals_B, evals_C) =\n        inst.compute_eval_table_sparse(inst.get_num_cons(), z.len(), &evals_rx);\n\n      assert_eq!(evals_A.len(), evals_B.len());\n      assert_eq!(evals_A.len(), evals_C.len());\n      (0..evals_A.len())\n        .map(|i| r_A * evals_A[i] + r_B * evals_B[i] + r_C * evals_C[i])\n        .collect::<Vec<Scalar>>()\n    };\n\n    // another instance of the sum-check protocol\n    let (sc_proof_phase2, ry, claims_phase2, blind_claim_postsc2) = R1CSProof::prove_phase_two(\n      num_rounds_y,\n      &claim_phase2,\n      &blind_claim_phase2,\n      &mut DensePolynomial::new(z),\n      &mut DensePolynomial::new(evals_ABC),\n      &gens.gens_sc,\n      transcript,\n      random_tape,\n    );\n    
timer_sc_proof_phase2.stop();\n\n    let timer_polyeval = Timer::new(\"polyeval\");\n    let eval_vars_at_ry = poly_vars.evaluate(&ry[1..]);\n    let blind_eval = random_tape.random_scalar(b\"blind_eval\");\n    let (proof_eval_vars_at_ry, comm_vars_at_ry) = PolyEvalProof::prove(\n      &poly_vars,\n      Some(&blinds_vars),\n      &ry[1..],\n      &eval_vars_at_ry,\n      Some(&blind_eval),\n      &gens.gens_pc,\n      transcript,\n      random_tape,\n    );\n    timer_polyeval.stop();\n\n    // prove the final step of sum-check #2\n    let blind_eval_Z_at_ry = (Scalar::one() - ry[0]) * blind_eval;\n    let blind_expected_claim_postsc2 = claims_phase2[1] * blind_eval_Z_at_ry;\n    let claim_post_phase2 = claims_phase2[0] * claims_phase2[1];\n    let (proof_eq_sc_phase2, _C1, _C2) = EqualityProof::prove(\n      &gens.gens_pc.gens.gens_1,\n      transcript,\n      random_tape,\n      &claim_post_phase2,\n      &blind_expected_claim_postsc2,\n      &claim_post_phase2,\n      &blind_claim_postsc2,\n    );\n\n    timer_prove.stop();\n\n    (\n      R1CSProof {\n        comm_vars,\n        sc_proof_phase1,\n        claims_phase2: (\n          comm_Az_claim,\n          comm_Bz_claim,\n          comm_Cz_claim,\n          comm_prod_Az_Bz_claims,\n        ),\n        pok_claims_phase2: (pok_Cz_claim, proof_prod),\n        proof_eq_sc_phase1,\n        sc_proof_phase2,\n        comm_vars_at_ry,\n        proof_eval_vars_at_ry,\n        proof_eq_sc_phase2,\n      },\n      rx,\n      ry,\n    )\n  }\n\n  pub fn verify(\n    &self,\n    num_vars: usize,\n    num_cons: usize,\n    input: &[Scalar],\n    evals: &(Scalar, Scalar, Scalar),\n    transcript: &mut Transcript,\n    gens: &R1CSGens,\n  ) -> Result<(Vec<Scalar>, Vec<Scalar>), ProofVerifyError> {\n    transcript.append_protocol_name(R1CSProof::protocol_name());\n\n    input.append_to_transcript(b\"input\", transcript);\n\n    let n = num_vars;\n    // add the commitment to the verifier's transcript\n    self\n      
.comm_vars\n      .append_to_transcript(b\"poly_commitment\", transcript);\n\n    let (num_rounds_x, num_rounds_y) = (num_cons.log_2(), (2 * num_vars).log_2());\n\n    // derive the verifier's challenge tau\n    let tau = transcript.challenge_vector(b\"challenge_tau\", num_rounds_x);\n\n    // verify the first sum-check instance\n    let claim_phase1 = Scalar::zero()\n      .commit(&Scalar::zero(), &gens.gens_sc.gens_1)\n      .compress();\n    let (comm_claim_post_phase1, rx) = self.sc_proof_phase1.verify(\n      &claim_phase1,\n      num_rounds_x,\n      3,\n      &gens.gens_sc.gens_1,\n      &gens.gens_sc.gens_4,\n      transcript,\n    )?;\n    // perform the intermediate sum-check test with claimed Az, Bz, and Cz\n    let (comm_Az_claim, comm_Bz_claim, comm_Cz_claim, comm_prod_Az_Bz_claims) = &self.claims_phase2;\n    let (pok_Cz_claim, proof_prod) = &self.pok_claims_phase2;\n\n    pok_Cz_claim.verify(&gens.gens_sc.gens_1, transcript, comm_Cz_claim)?;\n    proof_prod.verify(\n      &gens.gens_sc.gens_1,\n      transcript,\n      comm_Az_claim,\n      comm_Bz_claim,\n      comm_prod_Az_Bz_claims,\n    )?;\n\n    comm_Az_claim.append_to_transcript(b\"comm_Az_claim\", transcript);\n    comm_Bz_claim.append_to_transcript(b\"comm_Bz_claim\", transcript);\n    comm_Cz_claim.append_to_transcript(b\"comm_Cz_claim\", transcript);\n    comm_prod_Az_Bz_claims.append_to_transcript(b\"comm_prod_Az_Bz_claims\", transcript);\n\n    let taus_bound_rx: Scalar = (0..rx.len())\n      .map(|i| rx[i] * tau[i] + (Scalar::one() - rx[i]) * (Scalar::one() - tau[i]))\n      .product();\n    let expected_claim_post_phase1 = (taus_bound_rx\n      * (comm_prod_Az_Bz_claims.decompress().unwrap() - comm_Cz_claim.decompress().unwrap()))\n    .compress();\n\n    // verify proof that expected_claim_post_phase1 == claim_post_phase1\n    self.proof_eq_sc_phase1.verify(\n      &gens.gens_sc.gens_1,\n      transcript,\n      &expected_claim_post_phase1,\n      &comm_claim_post_phase1,\n    )?;\n\n 
   // derive three public challenges and then derive a joint claim\n    let r_A = transcript.challenge_scalar(b\"challenege_Az\");\n    let r_B = transcript.challenge_scalar(b\"challenege_Bz\");\n    let r_C = transcript.challenge_scalar(b\"challenege_Cz\");\n\n    // r_A * comm_Az_claim + r_B * comm_Bz_claim + r_C * comm_Cz_claim;\n    let comm_claim_phase2 = GroupElement::vartime_multiscalar_mul(\n      iter::once(r_A)\n        .chain(iter::once(r_B))\n        .chain(iter::once(r_C))\n        .collect(),\n      iter::once(&comm_Az_claim)\n        .chain(iter::once(&comm_Bz_claim))\n        .chain(iter::once(&comm_Cz_claim))\n        .map(|pt| pt.decompress().unwrap())\n        .collect(),\n    )\n    .compress();\n\n    // verify the joint claim with a sum-check protocol\n    let (comm_claim_post_phase2, ry) = self.sc_proof_phase2.verify(\n      &comm_claim_phase2,\n      num_rounds_y,\n      2,\n      &gens.gens_sc.gens_1,\n      &gens.gens_sc.gens_3,\n      transcript,\n    )?;\n\n    // verify Z(ry) proof against the initial commitment\n    self.proof_eval_vars_at_ry.verify(\n      &gens.gens_pc,\n      transcript,\n      &ry[1..],\n      &self.comm_vars_at_ry,\n      &self.comm_vars,\n    )?;\n\n    let poly_input_eval = {\n      // constant term\n      let mut input_as_sparse_poly_entries = vec![SparsePolyEntry::new(0, Scalar::one())];\n      //remaining inputs\n      input_as_sparse_poly_entries.extend(\n        (0..input.len())\n          .map(|i| SparsePolyEntry::new(i + 1, input[i]))\n          .collect::<Vec<SparsePolyEntry>>(),\n      );\n      SparsePolynomial::new(n.log_2(), input_as_sparse_poly_entries).evaluate(&ry[1..])\n    };\n\n    // compute commitment to eval_Z_at_ry = (Scalar::one() - ry[0]) * self.eval_vars_at_ry + ry[0] * poly_input_eval\n    let comm_eval_Z_at_ry = GroupElement::vartime_multiscalar_mul(\n      iter::once(Scalar::one() - ry[0])\n        .chain(iter::once(ry[0]))\n        .map(|s| s)\n        .collect(),\n      
iter::once(self.comm_vars_at_ry.decompress().unwrap())\n        .chain(iter::once(\n          poly_input_eval.commit(&Scalar::zero(), &gens.gens_pc.gens.gens_1),\n        ))\n        .collect(),\n    );\n\n    // perform the final check in the second sum-check protocol\n    let (eval_A_r, eval_B_r, eval_C_r) = evals;\n    let expected_claim_post_phase2 =\n      ((r_A * eval_A_r + r_B * eval_B_r + r_C * eval_C_r) * comm_eval_Z_at_ry).compress();\n    // verify proof that expected_claim_post_phase2 == claim_post_phase2\n    self.proof_eq_sc_phase2.verify(\n      &gens.gens_sc.gens_1,\n      transcript,\n      &expected_claim_post_phase2,\n      &comm_claim_post_phase2,\n    )?;\n\n    Ok((rx, ry))\n  }\n}\n\n#[cfg(test)]\nmod tests {\n  use super::*;\n  use rand_core::OsRng;\n\n  fn produce_tiny_r1cs() -> (R1CSInstance, Vec<Scalar>, Vec<Scalar>) {\n    // three constraints over five variables Z1, Z2, Z3, Z4, and Z5\n    // rounded to the nearest power of two\n    let num_cons = 128;\n    let num_vars = 256;\n    let num_inputs = 2;\n\n    // encode the above constraints into three matrices\n    let mut A: Vec<(usize, usize, Scalar)> = Vec::new();\n    let mut B: Vec<(usize, usize, Scalar)> = Vec::new();\n    let mut C: Vec<(usize, usize, Scalar)> = Vec::new();\n\n    let one = Scalar::one();\n    // constraint 0 entries\n    // (Z1 + Z2) * I0 - Z3 = 0;\n    A.push((0, 0, one));\n    A.push((0, 1, one));\n    B.push((0, num_vars + 1, one));\n    C.push((0, 2, one));\n\n    // constraint 1 entries\n    // (Z1 + I1) * (Z3) - Z4 = 0\n    A.push((1, 0, one));\n    A.push((1, num_vars + 2, one));\n    B.push((1, 2, one));\n    C.push((1, 3, one));\n\n    // constraint 2 entries\n    // Z5 * 1 - 0 = 0\n    A.push((2, 4, one));\n    B.push((2, num_vars, one));\n\n    let inst = R1CSInstance::new(num_cons, num_vars, num_inputs, &A, &B, &C);\n\n    // compute a satisfying assignment\n    let mut csprng: OsRng = OsRng;\n    let i0 = Scalar::random(&mut csprng);\n    let i1 = 
Scalar::random(&mut csprng);\n    let z1 = Scalar::random(&mut csprng);\n    let z2 = Scalar::random(&mut csprng);\n    let z3 = (z1 + z2) * i0; // constraint 0: (Z1 + Z2) * I0 - Z3 = 0;\n    let z4 = (z1 + i1) * z3; // constraint 1: (Z1 + I1) * (Z3) - Z4 = 0\n    let z5 = Scalar::zero(); // constraint 2: Z5 * 1 - 0 = 0\n\n    let mut vars = vec![Scalar::zero(); num_vars];\n    vars[0] = z1;\n    vars[1] = z2;\n    vars[2] = z3;\n    vars[3] = z4;\n    vars[4] = z5;\n\n    let mut input = vec![Scalar::zero(); num_inputs];\n    input[0] = i0;\n    input[1] = i1;\n\n    (inst, vars, input)\n  }\n\n  #[test]\n  fn test_tiny_r1cs() {\n    let (inst, vars, input) = tests::produce_tiny_r1cs();\n    let is_sat = inst.is_sat(&vars, &input);\n    assert!(is_sat);\n  }\n\n  #[test]\n  fn test_synthetic_r1cs() {\n    let (inst, vars, input) = R1CSInstance::produce_synthetic_r1cs(1024, 1024, 10);\n    let is_sat = inst.is_sat(&vars, &input);\n    assert!(is_sat);\n  }\n\n  #[test]\n  pub fn check_r1cs_proof() {\n    let num_vars = 1024;\n    let num_cons = num_vars;\n    let num_inputs = 10;\n    let (inst, vars, input) = R1CSInstance::produce_synthetic_r1cs(num_cons, num_vars, num_inputs);\n\n    let gens = R1CSGens::new(b\"test-m\", num_cons, num_vars);\n\n    let mut random_tape = RandomTape::new(b\"proof\");\n    let mut prover_transcript = Transcript::new(b\"example\");\n    let (proof, rx, ry) = R1CSProof::prove(\n      &inst,\n      vars,\n      &input,\n      &gens,\n      &mut prover_transcript,\n      &mut random_tape,\n    );\n\n    let inst_evals = inst.evaluate(&rx, &ry);\n\n    let mut verifier_transcript = Transcript::new(b\"example\");\n    assert!(proof\n      .verify(\n        inst.get_num_vars(),\n        inst.get_num_cons(),\n        &input,\n        &inst_evals,\n        &mut verifier_transcript,\n        &gens,\n      )\n      .is_ok());\n  }\n}\n"
  },
  {
    "path": "packages/Spartan-secq/src/random.rs",
    "content": "use super::scalar::Scalar;\nuse super::transcript::ProofTranscript;\nuse merlin::Transcript;\nuse rand_core::OsRng;\npub struct RandomTape {\n  tape: Transcript,\n}\n\nimpl RandomTape {\n  pub fn new(name: &'static [u8]) -> Self {\n    let tape = {\n      let mut rng = OsRng::default();\n      let mut tape = Transcript::new(name);\n      tape.append_scalar(b\"init_randomness\", &Scalar::random(&mut rng));\n      tape\n    };\n    Self { tape }\n  }\n\n  pub fn random_scalar(&mut self, label: &'static [u8]) -> Scalar {\n    self.tape.challenge_scalar(label)\n  }\n\n  pub fn random_vector(&mut self, label: &'static [u8], len: usize) -> Vec<Scalar> {\n    self.tape.challenge_vector(label, len)\n  }\n}\n"
  },
  {
    "path": "packages/Spartan-secq/src/scalar/mod.rs",
    "content": "use secq256k1::elliptic_curve::ops::Reduce;\nuse secq256k1::U256;\n\nmod scalar;\n\npub type Scalar = scalar::Scalar;\npub type ScalarBytes = secq256k1::Scalar;\n\npub trait ScalarFromPrimitives {\n  fn to_scalar(self) -> Scalar;\n}\n\nimpl ScalarFromPrimitives for usize {\n  #[inline]\n  fn to_scalar(self) -> Scalar {\n    (0..self).map(|_i| Scalar::one()).sum()\n  }\n}\n\nimpl ScalarFromPrimitives for bool {\n  #[inline]\n  fn to_scalar(self) -> Scalar {\n    if self {\n      Scalar::one()\n    } else {\n      Scalar::zero()\n    }\n  }\n}\n\npub trait ScalarBytesFromScalar {\n  fn decompress_scalar(s: &Scalar) -> ScalarBytes;\n  fn decompress_vector(s: &[Scalar]) -> Vec<ScalarBytes>;\n}\n\nimpl ScalarBytesFromScalar for Scalar {\n  fn decompress_scalar(s: &Scalar) -> ScalarBytes {\n    ScalarBytes::from_uint_reduced(U256::from_le_slice(&s.to_bytes()))\n  }\n\n  fn decompress_vector(s: &[Scalar]) -> Vec<ScalarBytes> {\n    (0..s.len())\n      .map(|i| Scalar::decompress_scalar(&s[i]))\n      .collect::<Vec<ScalarBytes>>()\n  }\n}\n"
  },
  {
    "path": "packages/Spartan-secq/src/scalar/scalar.rs",
    "content": "//! This module provides an implementation of the secq256k1's scalar field $\\mathbb{F}_q$\n//! where `q = 0xfffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2f`\n//! This module is an adaptation of code from the bls12-381 crate.\n//! We modify various constants (MODULUS, R, R2, etc.) to appropriate values for secq256k1 and update tests\n#![allow(clippy::all)]\nuse core::borrow::Borrow;\nuse core::convert::TryFrom;\nuse core::fmt;\nuse core::iter::{Product, Sum};\nuse core::ops::{Add, AddAssign, Mul, MulAssign, Neg, Sub, SubAssign};\nuse hex_literal::hex;\nuse num_bigint_dig::{BigUint, ModInverse};\nuse rand_core::{CryptoRng, RngCore};\nuse serde::de::Visitor;\nuse serde::{Deserialize, Serialize};\nuse subtle::{Choice, ConditionallySelectable, ConstantTimeEq, CtOption};\nuse zeroize::Zeroize;\n\n// use crate::util::{adc, mac, sbb};\n/// Compute a + b + carry, returning the result and the new carry over.\n#[inline(always)]\npub const fn adc(a: u64, b: u64, carry: u64) -> (u64, u64) {\n  let ret = (a as u128) + (b as u128) + (carry as u128);\n  (ret as u64, (ret >> 64) as u64)\n}\n\n/// Compute a - (b + borrow), returning the result and the new borrow.\n#[inline(always)]\npub const fn sbb(a: u64, b: u64, borrow: u64) -> (u64, u64) {\n  let ret = (a as u128).wrapping_sub((b as u128) + ((borrow >> 63) as u128));\n  (ret as u64, (ret >> 64) as u64)\n}\n\n/// Compute a + (b * c) + carry, returning the result and the new carry over.\n#[inline(always)]\npub const fn mac(a: u64, b: u64, c: u64, carry: u64) -> (u64, u64) {\n  let ret = (a as u128) + ((b as u128) * (c as u128)) + (carry as u128);\n  (ret as u64, (ret >> 64) as u64)\n}\n\nmacro_rules! 
impl_add_binop_specify_output {\n  ($lhs:ident, $rhs:ident, $output:ident) => {\n    impl<'b> Add<&'b $rhs> for $lhs {\n      type Output = $output;\n\n      #[inline]\n      fn add(self, rhs: &'b $rhs) -> $output {\n        &self + rhs\n      }\n    }\n\n    impl<'a> Add<$rhs> for &'a $lhs {\n      type Output = $output;\n\n      #[inline]\n      fn add(self, rhs: $rhs) -> $output {\n        self + &rhs\n      }\n    }\n\n    impl Add<$rhs> for $lhs {\n      type Output = $output;\n\n      #[inline]\n      fn add(self, rhs: $rhs) -> $output {\n        &self + &rhs\n      }\n    }\n  };\n}\n\nmacro_rules! impl_sub_binop_specify_output {\n  ($lhs:ident, $rhs:ident, $output:ident) => {\n    impl<'b> Sub<&'b $rhs> for $lhs {\n      type Output = $output;\n\n      #[inline]\n      fn sub(self, rhs: &'b $rhs) -> $output {\n        &self - rhs\n      }\n    }\n\n    impl<'a> Sub<$rhs> for &'a $lhs {\n      type Output = $output;\n\n      #[inline]\n      fn sub(self, rhs: $rhs) -> $output {\n        self - &rhs\n      }\n    }\n\n    impl Sub<$rhs> for $lhs {\n      type Output = $output;\n\n      #[inline]\n      fn sub(self, rhs: $rhs) -> $output {\n        &self - &rhs\n      }\n    }\n  };\n}\n\nmacro_rules! impl_binops_additive_specify_output {\n  ($lhs:ident, $rhs:ident, $output:ident) => {\n    impl_add_binop_specify_output!($lhs, $rhs, $output);\n    impl_sub_binop_specify_output!($lhs, $rhs, $output);\n  };\n}\n\nmacro_rules! 
impl_binops_multiplicative_mixed {\n  ($lhs:ident, $rhs:ident, $output:ident) => {\n    impl<'b> Mul<&'b $rhs> for $lhs {\n      type Output = $output;\n\n      #[inline]\n      fn mul(self, rhs: &'b $rhs) -> $output {\n        &self * rhs\n      }\n    }\n\n    impl<'a> Mul<$rhs> for &'a $lhs {\n      type Output = $output;\n\n      #[inline]\n      fn mul(self, rhs: $rhs) -> $output {\n        self * &rhs\n      }\n    }\n\n    impl Mul<$rhs> for $lhs {\n      type Output = $output;\n\n      #[inline]\n      fn mul(self, rhs: $rhs) -> $output {\n        &self * &rhs\n      }\n    }\n  };\n}\n\nmacro_rules! impl_binops_additive {\n  ($lhs:ident, $rhs:ident) => {\n    impl_binops_additive_specify_output!($lhs, $rhs, $lhs);\n\n    impl SubAssign<$rhs> for $lhs {\n      #[inline]\n      fn sub_assign(&mut self, rhs: $rhs) {\n        *self = &*self - &rhs;\n      }\n    }\n\n    impl AddAssign<$rhs> for $lhs {\n      #[inline]\n      fn add_assign(&mut self, rhs: $rhs) {\n        *self = &*self + &rhs;\n      }\n    }\n\n    impl<'b> SubAssign<&'b $rhs> for $lhs {\n      #[inline]\n      fn sub_assign(&mut self, rhs: &'b $rhs) {\n        *self = &*self - rhs;\n      }\n    }\n\n    impl<'b> AddAssign<&'b $rhs> for $lhs {\n      #[inline]\n      fn add_assign(&mut self, rhs: &'b $rhs) {\n        *self = &*self + rhs;\n      }\n    }\n  };\n}\n\nmacro_rules! 
impl_binops_multiplicative {\n  ($lhs:ident, $rhs:ident) => {\n    impl_binops_multiplicative_mixed!($lhs, $rhs, $lhs);\n\n    impl MulAssign<$rhs> for $lhs {\n      #[inline]\n      fn mul_assign(&mut self, rhs: $rhs) {\n        *self = &*self * &rhs;\n      }\n    }\n\n    impl<'b> MulAssign<&'b $rhs> for $lhs {\n      #[inline]\n      fn mul_assign(&mut self, rhs: &'b $rhs) {\n        *self = &*self * rhs;\n      }\n    }\n  };\n}\n\n/// Represents an element of the scalar field $\\mathbb{F}_q$ of the secq256k1 elliptic\n/// curve construction.\n// The internal representation of this type is four 64-bit unsigned\n// integers in little-endian order. `Scalar` values are always in\n// Montgomery form; i.e., Scalar(a) = aR mod q, with R = 2^256.\n#[derive(Clone, Copy, Eq)]\npub struct Scalar(pub(crate) [u64; 5]);\n\nuse serde::ser::SerializeSeq;\nuse serde::{Deserializer, Serializer};\n\nimpl Serialize for Scalar {\n  fn serialize<S: Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> {\n    let values = self.to_bytes();\n\n    let mut seq = serializer.serialize_seq(Some(values.len()))?;\n    for val in values.iter() {\n      seq.serialize_element(val)?;\n    }\n\n    seq.end()\n  }\n}\n\nstruct U64ArrayVisitor;\n\nimpl<'de> Visitor<'de> for U64ArrayVisitor {\n  type Value = Scalar;\n\n  fn expecting(&self, formatter: &mut std::fmt::Formatter) -> std::fmt::Result {\n    formatter.write_str(\"a sequence of 4 u64 values\")\n  }\n\n  fn visit_seq<A>(self, mut seq: A) -> Result<Self::Value, A::Error>\n  where\n    A: serde::de::SeqAccess<'de>,\n  {\n    let mut result = [0u64; 4];\n\n    for i in 0..4 {\n      let mut val: u64 = 0;\n      for j in 0..8 {\n        val += (seq.next_element::<u8>().unwrap().unwrap() as u64) * 256u64.pow(j)\n      }\n      result[i] = val;\n    }\n\n    Ok(Scalar::from_raw(result))\n  }\n}\n\nimpl<'de> Deserialize<'de> for Scalar {\n  fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>\n  where\n    D: Deserializer<'de>,\n 
 {\n    deserializer.deserialize_seq(U64ArrayVisitor)\n  }\n}\n\nimpl fmt::Debug for Scalar {\n  fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {\n    let tmp = self.to_bytes();\n    write!(f, \"0x\")?;\n    for &b in tmp.iter().rev() {\n      write!(f, \"{:02x}\", b)?;\n    }\n    Ok(())\n  }\n}\n\nimpl From<u64> for Scalar {\n  fn from(val: u64) -> Scalar {\n    Scalar([val, 0, 0, 0, 0]) * R2\n  }\n}\n\nimpl ConstantTimeEq for Scalar {\n  fn ct_eq(&self, other: &Self) -> Choice {\n    self.0[0].ct_eq(&other.0[0])\n      & self.0[1].ct_eq(&other.0[1])\n      & self.0[2].ct_eq(&other.0[2])\n      & self.0[3].ct_eq(&other.0[3])\n  }\n}\n\nimpl PartialEq for Scalar {\n  #[inline]\n  fn eq(&self, other: &Self) -> bool {\n    self.ct_eq(other).unwrap_u8() == 1\n  }\n}\n\nimpl ConditionallySelectable for Scalar {\n  fn conditional_select(a: &Self, b: &Self, choice: Choice) -> Self {\n    Scalar([\n      u64::conditional_select(&a.0[0], &b.0[0], choice),\n      u64::conditional_select(&a.0[1], &b.0[1], choice),\n      u64::conditional_select(&a.0[2], &b.0[2], choice),\n      u64::conditional_select(&a.0[3], &b.0[3], choice),\n      u64::conditional_select(&a.0[4], &b.0[4], choice),\n    ])\n  }\n}\n\n/// Constant representing the modulus\n/// 0xfffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2f\nconst MODULUS: Scalar = Scalar([\n  0xfffffffefffffc2f,\n  0xffffffffffffffff,\n  0xffffffffffffffff,\n  0xffffffffffffffff,\n  0,\n]);\n\nimpl<'a> Neg for &'a Scalar {\n  type Output = Scalar;\n\n  #[inline]\n  fn neg(self) -> Scalar {\n    self.neg()\n  }\n}\n\nimpl Neg for Scalar {\n  type Output = Scalar;\n\n  #[inline]\n  fn neg(self) -> Scalar {\n    -&self\n  }\n}\n\nimpl<'a, 'b> Sub<&'b Scalar> for &'a Scalar {\n  type Output = Scalar;\n\n  #[inline]\n  fn sub(self, rhs: &'b Scalar) -> Scalar {\n    self.sub(rhs)\n  }\n}\n\nimpl<'a, 'b> Add<&'b Scalar> for &'a Scalar {\n  type Output = Scalar;\n\n  #[inline]\n  fn add(self, rhs: &'b Scalar) -> Scalar 
{\n    self.add(rhs)\n  }\n}\n\nimpl<'a, 'b> Mul<&'b Scalar> for &'a Scalar {\n  type Output = Scalar;\n\n  #[inline]\n  fn mul(self, rhs: &'b Scalar) -> Scalar {\n    self.mul(rhs)\n  }\n}\n\nimpl_binops_additive!(Scalar, Scalar);\nimpl_binops_multiplicative!(Scalar, Scalar);\n\n/// INV = -(q^{-1} mod 2^64) mod 2^64\nconst INV: u64 = 0xd838091dd2253531;\n\n/// R = 2^256 mod q\nconst R: Scalar = Scalar([\n  0x00000001000003d1,\n  0x0000000000000000,\n  0x0000000000000000,\n  0x0000000000000000,\n  0x0,\n]);\n\n/// R^2 = 2^512 mod q\nconst R2: Scalar = Scalar([\n  0x000007a2000e90a1,\n  0x0000000000000001,\n  0x0000000000000000,\n  0x0000000000000000,\n  0,\n]);\n\n/// R^3 = 2^768 mod q\nconst R3: Scalar = Scalar([\n  0x002bb1e33795f671,\n  0x0000000100000b73,\n  0x0000000000000000,\n  0x0000000000000000,\n  0x0,\n]);\n\nimpl Default for Scalar {\n  #[inline]\n  fn default() -> Self {\n    Self::zero()\n  }\n}\n\nimpl<T> Product<T> for Scalar\nwhere\n  T: Borrow<Scalar>,\n{\n  fn product<I>(iter: I) -> Self\n  where\n    I: Iterator<Item = T>,\n  {\n    iter.fold(Scalar::one(), |acc, item| acc * item.borrow())\n  }\n}\n\nimpl<T> Sum<T> for Scalar\nwhere\n  T: Borrow<Scalar>,\n{\n  fn sum<I>(iter: I) -> Self\n  where\n    I: Iterator<Item = T>,\n  {\n    iter.fold(Scalar::zero(), |acc, item| acc + item.borrow())\n  }\n}\n\nimpl Zeroize for Scalar {\n  fn zeroize(&mut self) {\n    self.0 = [0u64; 5];\n  }\n}\n\nimpl Scalar {\n  /// Returns zero, the additive identity.\n  #[inline]\n  pub const fn zero() -> Scalar {\n    Scalar([0, 0, 0, 0, 0])\n  }\n\n  /// Returns one, the multiplicative identity.\n  #[inline]\n  pub const fn one() -> Scalar {\n    R\n  }\n\n  pub fn random<Rng: RngCore + CryptoRng>(rng: &mut Rng) -> Self {\n    let mut limbs = [0u64; 8];\n    for i in 0..8 {\n      limbs[i] = rng.next_u64();\n    }\n    Scalar::from_u512(limbs)\n  }\n\n  /// Doubles this field element.\n  #[inline]\n  pub const fn double(&self) -> Scalar {\n    // TODO: This can be 
achieved more efficiently with a bitshift.\n    self.add(self)\n  }\n\n  /// Attempts to convert a little-endian byte representation of\n  /// a scalar into a `Scalar`, failing if the input is not canonical.\n  pub fn from_bytes(bytes: &[u8; 32]) -> CtOption<Scalar> {\n    let mut tmp = Scalar([0, 0, 0, 0, 0]);\n\n    tmp.0[0] = u64::from_le_bytes(<[u8; 8]>::try_from(&bytes[..8]).unwrap());\n    tmp.0[1] = u64::from_le_bytes(<[u8; 8]>::try_from(&bytes[8..16]).unwrap());\n    tmp.0[2] = u64::from_le_bytes(<[u8; 8]>::try_from(&bytes[16..24]).unwrap());\n    tmp.0[3] = u64::from_le_bytes(<[u8; 8]>::try_from(&bytes[24..32]).unwrap());\n\n    // Try to subtract the modulus\n    let (_, borrow) = sbb(tmp.0[0], MODULUS.0[0], 0);\n    let (_, borrow) = sbb(tmp.0[1], MODULUS.0[1], borrow);\n    let (_, borrow) = sbb(tmp.0[2], MODULUS.0[2], borrow);\n    let (_, borrow) = sbb(tmp.0[3], MODULUS.0[3], borrow);\n\n    // If the element is smaller than MODULUS then the\n    // subtraction will underflow, producing a borrow value\n    // of 0xffff...ffff. 
Otherwise, it'll be zero.\n    let is_some = (borrow as u8) & 1;\n\n    // Convert to Montgomery form by computing\n    // (a.R^0 * R^2) / R = a.R\n    tmp *= &R2;\n\n    CtOption::new(tmp, Choice::from(is_some))\n  }\n\n  /// Converts an element of `Scalar` into a byte representation in\n  /// little-endian byte order.\n  pub fn to_bytes(&self) -> [u8; 32] {\n    // Turn into canonical form by computing\n    // (a.R) / R = a\n    let tmp = Scalar::montgomery_reduce(\n      self.0[0], self.0[1], self.0[2], self.0[3], self.0[4], 0, 0, 0, 0,\n    );\n\n    let mut res = [0; 32];\n    res[..8].copy_from_slice(&tmp.0[0].to_le_bytes());\n    res[8..16].copy_from_slice(&tmp.0[1].to_le_bytes());\n    res[16..24].copy_from_slice(&tmp.0[2].to_le_bytes());\n    res[24..32].copy_from_slice(&tmp.0[3].to_le_bytes());\n\n    res\n  }\n\n  /// Converts a 512-bit little endian integer into\n  /// a `Scalar` by reducing by the modulus.\n  pub fn from_bytes_wide(bytes: &[u8; 64]) -> Scalar {\n    Scalar::from_u512([\n      u64::from_le_bytes(<[u8; 8]>::try_from(&bytes[..8]).unwrap()),\n      u64::from_le_bytes(<[u8; 8]>::try_from(&bytes[8..16]).unwrap()),\n      u64::from_le_bytes(<[u8; 8]>::try_from(&bytes[16..24]).unwrap()),\n      u64::from_le_bytes(<[u8; 8]>::try_from(&bytes[24..32]).unwrap()),\n      u64::from_le_bytes(<[u8; 8]>::try_from(&bytes[32..40]).unwrap()),\n      u64::from_le_bytes(<[u8; 8]>::try_from(&bytes[40..48]).unwrap()),\n      u64::from_le_bytes(<[u8; 8]>::try_from(&bytes[48..56]).unwrap()),\n      u64::from_le_bytes(<[u8; 8]>::try_from(&bytes[56..64]).unwrap()),\n    ])\n  }\n\n  fn from_u512(limbs: [u64; 8]) -> Scalar {\n    // We reduce an arbitrary 512-bit number by decomposing it into two 256-bit digits\n    // with the higher bits multiplied by 2^256. Thus, we perform two reductions\n    //\n    // 1. the lower bits are multiplied by R^2, as normal\n    // 2. 
the upper bits are multiplied by R^2 * 2^256 = R^3\n    //\n    // and computing their sum in the field. It remains to see that arbitrary 256-bit\n    // numbers can be placed into Montgomery form safely using the reduction. The\n    // reduction works so long as the product is less than R=2^256 multiplied by\n    // the modulus. This holds because for any `c` smaller than the modulus, we have\n    // that (2^256 - 1)*c is an acceptable product for the reduction. Therefore, the\n    // reduction always works so long as `c` is in the field; in this case it is either the\n    // constant `R2` or `R3`.\n    let d0 = Scalar([limbs[0], limbs[1], limbs[2], limbs[3], 0]);\n    let d1 = Scalar([limbs[4], limbs[5], limbs[6], limbs[7], 0]);\n    // Convert to Montgomery form\n    d0 * R2 + d1 * R3\n  }\n\n  /// Converts from an integer represented in little-endian\n  /// into its (congruent) `Scalar` representation.\n  pub const fn from_raw(val: [u64; 4]) -> Self {\n    (&Scalar([val[0], val[1], val[2], val[3], 0])).mul(&R2)\n  }\n\n  /// Squares this element.\n  #[inline]\n  pub const fn square(&self) -> Scalar {\n    let (r1, carry) = mac(0, self.0[0], self.0[1], 0);\n    let (r2, carry) = mac(0, self.0[0], self.0[2], carry);\n    let (r3, r4) = mac(0, self.0[0], self.0[3], carry);\n\n    let (r3, carry) = mac(r3, self.0[1], self.0[2], 0);\n    let (r4, r5) = mac(r4, self.0[1], self.0[3], carry);\n\n    let (r5, r6) = mac(r5, self.0[2], self.0[3], 0);\n\n    let r7 = r6 >> 63;\n    let r6 = (r6 << 1) | (r5 >> 63);\n    let r5 = (r5 << 1) | (r4 >> 63);\n    let r4 = (r4 << 1) | (r3 >> 63);\n    let r3 = (r3 << 1) | (r2 >> 63);\n    let r2 = (r2 << 1) | (r1 >> 63);\n    let r1 = r1 << 1;\n\n    let (r0, carry) = mac(0, self.0[0], self.0[0], 0);\n    let (r1, carry) = adc(0, r1, carry);\n    let (r2, carry) = mac(r2, self.0[1], self.0[1], carry);\n    let (r3, carry) = adc(0, r3, carry);\n    let (r4, carry) = mac(r4, self.0[2], self.0[2], carry);\n    let (r5, carry) = adc(0, 
r5, carry);\n    let (r6, carry) = mac(r6, self.0[3], self.0[3], carry);\n    let (r7, _) = adc(0, r7, carry);\n\n    Scalar::montgomery_reduce(r0, r1, r2, r3, r4, r5, r6, r7, 0)\n  }\n\n  /// Exponentiates `self` by `by`, where `by` is a\n  /// little-endian order integer exponent.\n  pub fn pow(&self, by: &[u64; 4]) -> Self {\n    let mut res = Self::one();\n    for e in by.iter().rev() {\n      for i in (0..64).rev() {\n        res = res.square();\n        let mut tmp = res;\n        tmp *= self;\n        res.conditional_assign(&tmp, (((*e >> i) & 0x1) as u8).into());\n      }\n    }\n    res\n  }\n\n  /// Exponentiates `self` by `by`, where `by` is a\n  /// little-endian order integer exponent.\n  ///\n  /// **This operation is variable time with respect\n  /// to the exponent.** If the exponent is fixed,\n  /// this operation is effectively constant time.\n  pub fn pow_vartime(&self, by: &[u64; 4]) -> Self {\n    let mut res = Self::one();\n    for e in by.iter().rev() {\n      for i in (0..64).rev() {\n        res = res.square();\n\n        if ((*e >> i) & 1) == 1 {\n          res.mul_assign(self);\n        }\n      }\n    }\n    res\n  }\n\n  pub fn invert(&self) -> CtOption<Self> {\n    let val = BigUint::from_bytes_le(&self.to_bytes());\n\n    let result = val.mod_inverse(&BigUint::from_bytes_be(&hex!(\n      \"fffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2f\"\n    )));\n\n    if result.is_some() {\n      let mut result = result.unwrap().to_bytes_le().1.to_vec();\n      result.resize(64, 0);\n\n      let result_bytes: [u8; 64] = result.try_into().unwrap();\n\n      let result = Scalar::from_bytes_wide(&result_bytes);\n\n      CtOption::new(result, Choice::from(1))\n    } else {\n      CtOption::new(Scalar::zero(), Choice::from(0))\n    }\n  }\n\n  pub fn batch_invert(inputs: &mut [Scalar]) -> Scalar {\n    // This code is essentially identical to the FieldElement\n    // implementation, and is documented there.  
Unfortunately,\n    // it's not easy to write it generically, since here we want\n    // to use `UnpackedScalar`s internally, and `Scalar`s\n    // externally, but there's no corresponding distinction for\n    // field elements.\n\n    use zeroize::Zeroizing;\n\n    let n = inputs.len();\n    let one = Scalar::one();\n\n    // Place scratch storage in a Zeroizing wrapper to wipe it when\n    // we pass out of scope.\n    let scratch_vec = vec![one; n];\n    let mut scratch = Zeroizing::new(scratch_vec);\n\n    // Keep an accumulator of all of the previous products\n    let mut acc = Scalar::one();\n\n    // Pass through the input vector, recording the previous\n    // products in the scratch space\n    for (input, scratch) in inputs.iter().zip(scratch.iter_mut()) {\n      *scratch = acc;\n\n      acc = acc * input;\n    }\n\n    // acc is nonzero iff all inputs are nonzero\n    debug_assert!(acc != Scalar::zero());\n\n    // Compute the inverse of all products\n    acc = acc.invert().unwrap();\n\n    // We need to return the product of all inverses later\n    let ret = acc;\n\n    // Pass through the vector backwards to compute the inverses\n    // in place\n    for (input, scratch) in inputs.iter_mut().rev().zip(scratch.iter().rev()) {\n      let tmp = &acc * input.clone();\n      *input = &acc * scratch;\n      acc = tmp;\n    }\n\n    ret\n  }\n\n  #[inline(always)]\n  const fn montgomery_reduce(\n    r0: u64,\n    r1: u64,\n    r2: u64,\n    r3: u64,\n    r4: u64,\n    r5: u64,\n    r6: u64,\n    r7: u64,\n    r8: u64,\n  ) -> Self {\n    // The Montgomery reduction here is based on Algorithm 14.32 in\n    // Handbook of Applied Cryptography\n    // <http://cacr.uwaterloo.ca/hac/about/chap14.pdf>.\n\n    let k = r0.wrapping_mul(INV);\n    let (_, carry) = mac(r0, k, MODULUS.0[0], 0);\n    let (r1, carry) = mac(r1, k, MODULUS.0[1], carry);\n    let (r2, carry) = mac(r2, k, MODULUS.0[2], carry);\n    let (r3, carry) = mac(r3, k, MODULUS.0[3], carry);\n    let 
(r4, carry) = mac(r4, k, MODULUS.0[4], carry);\n    let (r5, carry2) = adc(r5, 0, carry);\n\n    let k = r1.wrapping_mul(INV);\n    let (_, carry) = mac(r1, k, MODULUS.0[0], 0);\n    let (r2, carry) = mac(r2, k, MODULUS.0[1], carry);\n    let (r3, carry) = mac(r3, k, MODULUS.0[2], carry);\n    let (r4, carry) = mac(r4, k, MODULUS.0[3], carry);\n    let (r5, carry) = mac(r5, k, MODULUS.0[4], carry);\n    let (r6, carry2) = adc(r6, carry2, carry);\n\n    let k = r2.wrapping_mul(INV);\n    let (_, carry) = mac(r2, k, MODULUS.0[0], 0);\n    let (r3, carry) = mac(r3, k, MODULUS.0[1], carry);\n    let (r4, carry) = mac(r4, k, MODULUS.0[2], carry);\n    let (r5, carry) = mac(r5, k, MODULUS.0[3], carry);\n    let (r6, carry) = mac(r6, k, MODULUS.0[4], carry);\n    let (r7, carry2) = adc(r7, carry2, carry);\n\n    let k = r3.wrapping_mul(INV);\n    let (_, carry) = mac(r3, k, MODULUS.0[0], 0);\n    let (r4, carry) = mac(r4, k, MODULUS.0[1], carry);\n    let (r5, carry) = mac(r5, k, MODULUS.0[2], carry);\n    let (r6, carry) = mac(r6, k, MODULUS.0[3], carry);\n    let (r7, carry) = mac(r7, k, MODULUS.0[4], carry);\n    let (r8, _) = adc(r8, carry2, carry);\n\n    // Result may be within MODULUS of the correct value\n    (&Scalar([r4, r5, r6, r7, r8])).sub(&MODULUS)\n  }\n\n  /// Multiplies `rhs` by `self`, returning the result.\n  #[inline]\n  pub const fn mul(&self, rhs: &Self) -> Self {\n    // Schoolbook multiplication\n\n    let (r0, carry) = mac(0, self.0[0], rhs.0[0], 0);\n    let (r1, carry) = mac(0, self.0[0], rhs.0[1], carry);\n    let (r2, carry) = mac(0, self.0[0], rhs.0[2], carry);\n    let (r3, carry) = mac(0, self.0[0], rhs.0[3], carry);\n    let (r4, r5) = mac(0, self.0[0], rhs.0[4], carry);\n\n    let (r1, carry) = mac(r1, self.0[1], rhs.0[0], 0);\n    let (r2, carry) = mac(r2, self.0[1], rhs.0[1], carry);\n    let (r3, carry) = mac(r3, self.0[1], rhs.0[2], carry);\n    let (r4, carry) = mac(r4, self.0[1], rhs.0[3], carry);\n    let (r5, r6) = mac(r5, 
self.0[1], rhs.0[4], carry);\n\n    let (r2, carry) = mac(r2, self.0[2], rhs.0[0], 0);\n    let (r3, carry) = mac(r3, self.0[2], rhs.0[1], carry);\n    let (r4, carry) = mac(r4, self.0[2], rhs.0[2], carry);\n    let (r5, carry) = mac(r5, self.0[2], rhs.0[3], carry);\n    let (r6, r7) = mac(r6, self.0[2], rhs.0[4], carry);\n\n    let (r3, carry) = mac(r3, self.0[3], rhs.0[0], 0);\n    let (r4, carry) = mac(r4, self.0[3], rhs.0[1], carry);\n    let (r5, carry) = mac(r5, self.0[3], rhs.0[2], carry);\n    let (r6, carry) = mac(r6, self.0[3], rhs.0[3], carry);\n    let (r7, r8) = mac(r7, self.0[3], rhs.0[4], carry);\n\n    let (r4, carry) = mac(r4, self.0[4], rhs.0[0], 0);\n    let (r5, carry) = mac(r5, self.0[4], rhs.0[1], carry);\n    let (r6, carry) = mac(r6, self.0[4], rhs.0[2], carry);\n    let (r7, carry) = mac(r7, self.0[4], rhs.0[3], carry);\n    let (r8, _) = mac(r8, self.0[4], rhs.0[4], carry);\n\n    Scalar::montgomery_reduce(r0, r1, r2, r3, r4, r5, r6, r7, r8)\n  }\n\n  /// Subtracts `rhs` from `self`, returning the result.\n  #[inline]\n  pub const fn sub(&self, rhs: &Self) -> Self {\n    let (d0, borrow) = sbb(self.0[0], rhs.0[0], 0);\n    let (d1, borrow) = sbb(self.0[1], rhs.0[1], borrow);\n    let (d2, borrow) = sbb(self.0[2], rhs.0[2], borrow);\n    let (d3, borrow) = sbb(self.0[3], rhs.0[3], borrow);\n    let (d4, borrow) = sbb(self.0[4], rhs.0[4], borrow);\n\n    // If underflow occurred on the final limb, borrow = 0xfff...fff, otherwise\n    // borrow = 0x000...000. 
Thus, we use it as a mask to conditionally add the modulus.\n    let (d0, carry) = adc(d0, MODULUS.0[0] & borrow, 0);\n    let (d1, carry) = adc(d1, MODULUS.0[1] & borrow, carry);\n    let (d2, carry) = adc(d2, MODULUS.0[2] & borrow, carry);\n    let (d3, carry) = adc(d3, MODULUS.0[3] & borrow, carry);\n    let (d4, _) = adc(d4, MODULUS.0[4] & borrow, carry);\n\n    Scalar([d0, d1, d2, d3, d4])\n  }\n\n  /// Adds `rhs` to `self`, returning the result.\n  #[inline]\n  pub const fn add(&self, rhs: &Self) -> Self {\n    let (d0, carry) = adc(self.0[0], rhs.0[0], 0);\n    let (d1, carry) = adc(self.0[1], rhs.0[1], carry);\n    let (d2, carry) = adc(self.0[2], rhs.0[2], carry);\n    let (d3, carry) = adc(self.0[3], rhs.0[3], carry);\n    let (d4, _) = adc(self.0[4], rhs.0[4], carry);\n\n    // Attempt to subtract the modulus, to ensure the value\n    // is smaller than the modulus.\n    (&Scalar([d0, d1, d2, d3, d4])).sub(&MODULUS)\n  }\n\n  /// Negates `self`.\n  #[inline]\n  pub const fn neg(&self) -> Self {\n    // Subtract `self` from `MODULUS` to negate. Ignore the final\n    // borrow because it cannot underflow; self is guaranteed to\n    // be in the field.\n    let (d0, borrow) = sbb(MODULUS.0[0], self.0[0], 0);\n    let (d1, borrow) = sbb(MODULUS.0[1], self.0[1], borrow);\n    let (d2, borrow) = sbb(MODULUS.0[2], self.0[2], borrow);\n    let (d3, borrow) = sbb(MODULUS.0[3], self.0[3], borrow);\n    let (d4, _) = sbb(MODULUS.0[4], self.0[4], borrow);\n\n    // `tmp` could be `MODULUS` if `self` was zero. 
Create a mask that is\n    // zero if `self` was zero, and `u64::max_value()` if self was nonzero.\n    let mask =\n      (((self.0[0] | self.0[1] | self.0[2] | self.0[3] | self.0[4]) == 0) as u64).wrapping_sub(1);\n\n    Scalar([d0 & mask, d1 & mask, d2 & mask, d3 & mask, d4 & mask])\n  }\n}\n\nimpl<'a> From<&'a Scalar> for [u8; 32] {\n  fn from(value: &'a Scalar) -> [u8; 32] {\n    value.to_bytes()\n  }\n}\n\n#[cfg(test)]\nmod tests {\n  use super::*;\n\n  #[test]\n  fn test_inv() {\n    // Compute -(q^{-1} mod 2^64) mod 2^64 by exponentiating\n    // by totient(2**64) - 1\n\n    let mut inv = 1u64;\n    for _ in 0..63 {\n      inv = inv.wrapping_mul(inv);\n      inv = inv.wrapping_mul(MODULUS.0[0]);\n    }\n    inv = inv.wrapping_neg();\n\n    assert_eq!(inv, INV);\n  }\n\n  #[cfg(feature = \"std\")]\n  #[test]\n  fn test_debug() {\n    assert_eq!(\n      format!(\"{:?}\", Scalar::zero()),\n      \"0x0000000000000000000000000000000000000000000000000000000000000000\"\n    );\n    assert_eq!(\n      format!(\"{:?}\", Scalar::one()),\n      \"0x0000000000000000000000000000000000000000000000000000000000000001\"\n    );\n    // R2 in Montgomery form represents the value R = 2^256 mod q = 0x1000003d1.\n    assert_eq!(\n      format!(\"{:?}\", R2),\n      \"0x0000000000000000000000000000000000000000000000000000000001000003d1\"\n    );\n  }\n\n  #[test]\n  fn test_equality() {\n    assert_eq!(Scalar::zero(), Scalar::zero());\n    assert_eq!(Scalar::one(), Scalar::one());\n    assert_eq!(R2, R2);\n\n    assert!(Scalar::zero() != Scalar::one());\n    assert!(Scalar::one() != R2);\n  }\n\n  #[test]\n  fn test_to_bytes() {\n    assert_eq!(\n      Scalar::zero().to_bytes(),\n      [\n        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n        0, 0\n      ]\n    );\n\n    assert_eq!(\n      Scalar::one().to_bytes(),\n      [\n        1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n        0, 0\n      ]\n    );\n\n    /*\n    assert_eq!(\n      R2.to_bytes(),\n      [\n
      29, 149, 152, 141, 116, 49, 236, 214, 112, 207, 125, 115, 244, 91, 239, 198, 254, 255, 255,\n        255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 15\n      ]\n    );\n\n    assert_eq!(\n      (-&Scalar::one()).to_bytes(),\n      [\n        236, 211, 245, 92, 26, 99, 18, 88, 214, 156, 247, 162, 222, 249, 222, 20, 0, 0, 0, 0, 0, 0,\n        0, 0, 0, 0, 0, 0, 0, 0, 0, 16\n      ]\n    );\n     */\n  }\n\n  #[test]\n  fn test_from_bytes() {\n    assert_eq!(\n      Scalar::from_bytes(&[\n        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n        0, 0\n      ])\n      .unwrap(),\n      Scalar::zero()\n    );\n\n    assert_eq!(\n      Scalar::from_bytes(&[\n        1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n        0, 0\n      ])\n      .unwrap(),\n      Scalar::one()\n    );\n\n    assert_eq!(\n      Scalar::from_bytes(&[\n        209, 3, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n        0, 0\n      ])\n      .unwrap(),\n      R2\n    );\n\n    /*\n    // -1 should work\n    assert!(\n      Scalar::from_bytes(&[\n        236, 211, 245, 92, 26, 99, 18, 88, 214, 156, 247, 162, 222, 249, 222, 20, 0, 0, 0, 0, 0, 0,\n        0, 0, 0, 0, 0, 0, 0, 0, 0, 16\n      ])\n      .is_some()\n      .unwrap_u8()\n        == 1\n    );\n\n    // modulus is invalid\n    assert!(\n      Scalar::from_bytes(&[\n        1, 0, 0, 0, 255, 255, 255, 255, 254, 91, 254, 255, 2, 164, 189, 83, 5, 216, 161, 9, 8, 216,\n        57, 51, 72, 125, 157, 41, 83, 167, 237, 115\n      ])\n      .is_none()\n      .unwrap_u8()\n        == 1\n    );\n\n    // Anything larger than the modulus is invalid\n    assert!(\n      Scalar::from_bytes(&[\n        2, 0, 0, 0, 255, 255, 255, 255, 254, 91, 254, 255, 2, 164, 189, 83, 5, 216, 161, 9, 8, 216,\n        57, 51, 72, 125, 157, 41, 83, 167, 237, 115\n      ])\n      .is_none()\n      .unwrap_u8()\n        == 
1\n    );\n    assert!(\n      Scalar::from_bytes(&[\n        1, 0, 0, 0, 255, 255, 255, 255, 254, 91, 254, 255, 2, 164, 189, 83, 5, 216, 161, 9, 8, 216,\n        58, 51, 72, 125, 157, 41, 83, 167, 237, 115\n      ])\n      .is_none()\n      .unwrap_u8()\n        == 1\n    );\n    assert!(\n      Scalar::from_bytes(&[\n        1, 0, 0, 0, 255, 255, 255, 255, 254, 91, 254, 255, 2, 164, 189, 83, 5, 216, 161, 9, 8, 216,\n        57, 51, 72, 125, 157, 41, 83, 167, 237, 116\n      ])\n      .is_none()\n      .unwrap_u8()\n        == 1\n    );\n     */\n  }\n\n  #[test]\n  fn test_from_u512_zero() {\n    assert_eq!(\n      Scalar::zero(),\n      Scalar::from_u512([\n        MODULUS.0[0],\n        MODULUS.0[1],\n        MODULUS.0[2],\n        MODULUS.0[3],\n        0,\n        0,\n        0,\n        0\n      ])\n    );\n  }\n\n  #[test]\n  fn test_from_u512_r() {\n    assert_eq!(R, Scalar::from_u512([1, 0, 0, 0, 0, 0, 0, 0]));\n  }\n\n  #[test]\n  fn test_from_u512_r2() {\n    assert_eq!(R2, Scalar::from_u512([0, 0, 0, 0, 1, 0, 0, 0]));\n  }\n\n  #[test]\n  fn test_from_u512_max() {\n    let max_u64 = 0xffffffffffffffff;\n    assert_eq!(\n      R3 - R,\n      Scalar::from_u512([max_u64, max_u64, max_u64, max_u64, max_u64, max_u64, max_u64, max_u64])\n    );\n  }\n\n  #[test]\n  fn test_from_bytes_wide_r2() {\n    assert_eq!(\n      R2,\n      Scalar::from_bytes_wide(&[\n        209, 3, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n        0, 0, 0, 0,\n      ])\n    );\n  }\n\n  #[test]\n  fn test_from_bytes_wide_negative_one() {\n    assert_eq!(\n      -&Scalar::one(),\n      Scalar::from_bytes_wide(&[\n        46, 252, 255, 255, 254, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,\n        255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 0, 0, 0, 0, 0, 0, 0,\n        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n      ])\n    );\n  }\n\n  #[test]\n  fn test_from_bytes_wide_maximum() {\n    assert_eq!(\n      Scalar::from_raw([0x000007a2000e90a0, 0x1, 0, 0]),\n      Scalar::from_bytes_wide(&[0xff; 64])\n    );\n  }\n\n  #[test]\n  fn test_zero() {\n    assert_eq!(Scalar::zero(), -&Scalar::zero());\n    assert_eq!(Scalar::zero(), Scalar::zero() + Scalar::zero());\n    assert_eq!(Scalar::zero(), Scalar::zero() - Scalar::zero());\n    assert_eq!(Scalar::zero(), Scalar::zero() * Scalar::zero());\n  }\n\n  const LARGEST: Scalar = Scalar([\n    0xfffffffefffffc2e,\n    0xffffffffffffffff,\n    0xffffffffffffffff,\n    0xffffffffffffffff,\n    0,\n  ]);\n\n  #[test]\n  fn test_addition() {\n    let mut tmp = LARGEST;\n    tmp += &LARGEST;\n\n    let target = Scalar([\n      0xfffffffefffffc2d,\n      0xffffffffffffffff,\n      0xffffffffffffffff,\n      0xffffffffffffffff,\n      0,\n    ]);\n\n    assert_eq!(tmp, target);\n\n    let mut tmp = LARGEST;\n    tmp += &Scalar([1, 0, 0, 0, 0]);\n\n    assert_eq!(tmp, Scalar::zero());\n  }\n\n  #[test]\n  fn test_negation() {\n    let tmp = -&LARGEST;\n\n    assert_eq!(tmp, Scalar([1, 0, 0, 0, 0]));\n\n    let tmp = -&Scalar::zero();\n    assert_eq!(tmp, Scalar::zero());\n    let tmp = -&Scalar([1, 0, 0, 0, 0]);\n    assert_eq!(tmp, LARGEST);\n  }\n\n  #[test]\n  fn test_subtraction() {\n    let mut tmp = LARGEST;\n    tmp -= &LARGEST;\n\n    assert_eq!(tmp, Scalar::zero());\n\n    let mut tmp = Scalar::zero();\n    tmp -= &LARGEST;\n\n    let mut tmp2 = MODULUS;\n    tmp2 -= &LARGEST;\n\n    assert_eq!(tmp, tmp2);\n  }\n\n  #[test]\n  fn test_multiplication() {\n    let mut cur = LARGEST;\n\n    for _ in 0..100 {\n      let mut tmp = cur;\n      tmp *= &cur;\n\n      let mut tmp2 = Scalar::zero();\n      for b in cur\n        .to_bytes()\n        .iter()\n        .rev()\n        .flat_map(|byte| (0..8).rev().map(move |i| ((byte >> i) & 1u8) == 1u8))\n      {\n        let tmp3 = tmp2;\n      
  tmp2.add_assign(&tmp3);\n\n        if b {\n          tmp2.add_assign(&cur);\n        }\n      }\n\n      assert_eq!(tmp, tmp2);\n\n      cur.add_assign(&LARGEST);\n    }\n  }\n\n  #[test]\n  fn test_squaring() {\n    let mut cur = LARGEST;\n\n    for _ in 0..100 {\n      let mut tmp = cur;\n      tmp = tmp.square();\n\n      let mut tmp2 = Scalar::zero();\n      for b in cur\n        .to_bytes()\n        .iter()\n        .rev()\n        .flat_map(|byte| (0..8).rev().map(move |i| ((byte >> i) & 1u8) == 1u8))\n      {\n        let tmp3 = tmp2;\n        tmp2.add_assign(&tmp3);\n\n        if b {\n          tmp2.add_assign(&cur);\n        }\n      }\n\n      assert_eq!(tmp, tmp2);\n\n      cur.add_assign(&LARGEST);\n    }\n  }\n\n  #[test]\n  fn test_inversion() {\n    assert_eq!(Scalar::zero().invert().is_none().unwrap_u8(), 1);\n    assert_eq!(Scalar::one().invert().unwrap(), Scalar::one());\n    assert_eq!((-&Scalar::one()).invert().unwrap(), -&Scalar::one());\n\n    let a = Scalar::from(123);\n    let result = a.invert().unwrap();\n    println!(\"result {:?}\", result);\n\n    let mut tmp = R2;\n\n    for _ in 0..100 {\n      let mut tmp2 = tmp.invert().unwrap();\n      println!(\"tmp2 {:?}\", tmp2);\n      tmp2.mul_assign(&tmp);\n\n      assert_eq!(tmp2, Scalar::one());\n\n      tmp.add_assign(&R2);\n    }\n  }\n\n  #[test]\n  fn test_invert_is_pow() {\n    let q_minus_2 = [\n      0xffff_fffe_ffff_fc2d,\n      0xffff_ffff_ffff_ffff,\n      0xffff_ffff_ffff_ffff,\n      0xffff_ffff_ffff_ffff,\n    ];\n\n    let mut r1 = R;\n    let mut r2 = R;\n    let mut r3 = R;\n\n    for _ in 0..100 {\n      r1 = r1.invert().unwrap();\n      r2 = r2.pow_vartime(&q_minus_2);\n      r3 = r3.pow(&q_minus_2);\n\n      assert_eq!(r1, r2);\n      assert_eq!(r2, r3);\n      // Add R so we check something different next time around\n      r1.add_assign(&R);\n      r2 = r1;\n      r3 = r1;\n    }\n  }\n\n  #[test]\n  fn test_from_raw() {\n    assert_eq!(\n      Scalar::from_raw([\n    
    0x00000001000003d0,\n        0x0000000000000000,\n        0x0000000000000000,\n        0x0000000000000000,\n      ]),\n      Scalar::from_raw([0xffffffffffffffff; 4])\n    );\n\n    assert_eq!(\n      Scalar::from_raw(MODULUS.0[..4].try_into().unwrap()),\n      Scalar::zero()\n    );\n\n    assert_eq!(Scalar::from_raw([1, 0, 0, 0]), R);\n  }\n\n  #[test]\n  fn test_double() {\n    let a = Scalar::from_raw([\n      0x1fff3231233ffffd,\n      0x4884b7fa00034802,\n      0x998c4fefecbc4ff3,\n      0x1824b159acc50562,\n    ]);\n\n    assert_eq!(a.double(), a + a);\n  }\n}\n"
  },
  {
    "path": "packages/Spartan-secq/src/sparse_mlpoly.rs",
    "content": "#![allow(clippy::type_complexity)]\n#![allow(clippy::too_many_arguments)]\n#![allow(clippy::needless_range_loop)]\nuse super::dense_mlpoly::DensePolynomial;\nuse super::dense_mlpoly::{\n  EqPolynomial, IdentityPolynomial, PolyCommitment, PolyCommitmentGens, PolyEvalProof,\n};\nuse super::errors::ProofVerifyError;\nuse super::math::Math;\nuse super::product_tree::{DotProductCircuit, ProductCircuit, ProductCircuitEvalProofBatched};\nuse super::random::RandomTape;\nuse super::scalar::Scalar;\nuse super::timer::Timer;\nuse super::transcript::{AppendToTranscript, ProofTranscript};\nuse core::cmp::Ordering;\nuse merlin::Transcript;\nuse serde::{Deserialize, Serialize};\n\n#[derive(Debug, Serialize, Deserialize)]\npub struct SparseMatEntry {\n  row: usize,\n  col: usize,\n  val: Scalar,\n}\n\nimpl SparseMatEntry {\n  pub fn new(row: usize, col: usize, val: Scalar) -> Self {\n    SparseMatEntry { row, col, val }\n  }\n}\n\n#[derive(Debug, Serialize, Deserialize)]\npub struct SparseMatPolynomial {\n  num_vars_x: usize,\n  num_vars_y: usize,\n  M: Vec<SparseMatEntry>,\n}\n\npub struct Derefs {\n  row_ops_val: Vec<DensePolynomial>,\n  col_ops_val: Vec<DensePolynomial>,\n  comb: DensePolynomial,\n}\n\n#[derive(Debug, Serialize, Deserialize)]\npub struct DerefsCommitment {\n  comm_ops_val: PolyCommitment,\n}\n\nimpl Derefs {\n  pub fn new(row_ops_val: Vec<DensePolynomial>, col_ops_val: Vec<DensePolynomial>) -> Self {\n    assert_eq!(row_ops_val.len(), col_ops_val.len());\n\n    let derefs = {\n      // combine all polynomials into a single polynomial (used below to produce a single commitment)\n      let comb = DensePolynomial::merge(row_ops_val.iter().chain(col_ops_val.iter()));\n\n      Derefs {\n        row_ops_val,\n        col_ops_val,\n        comb,\n      }\n    };\n\n    derefs\n  }\n\n  pub fn commit(&self, gens: &PolyCommitmentGens) -> DerefsCommitment {\n    let (comm_ops_val, _blinds) = self.comb.commit(gens, None);\n    DerefsCommitment { 
comm_ops_val }\n  }\n}\n\n#[derive(Debug, Serialize, Deserialize)]\npub struct DerefsEvalProof {\n  proof_derefs: PolyEvalProof,\n}\n\nimpl DerefsEvalProof {\n  fn protocol_name() -> &'static [u8] {\n    b\"Derefs evaluation proof\"\n  }\n\n  fn prove_single(\n    joint_poly: &DensePolynomial,\n    r: &[Scalar],\n    evals: Vec<Scalar>,\n    gens: &PolyCommitmentGens,\n    transcript: &mut Transcript,\n    random_tape: &mut RandomTape,\n  ) -> PolyEvalProof {\n    assert_eq!(joint_poly.get_num_vars(), r.len() + evals.len().log_2());\n\n    // append the claimed evaluations to transcript\n    evals.append_to_transcript(b\"evals_ops_val\", transcript);\n\n    // n-to-1 reduction\n    let (r_joint, eval_joint) = {\n      let challenges =\n        transcript.challenge_vector(b\"challenge_combine_n_to_one\", evals.len().log_2());\n      let mut poly_evals = DensePolynomial::new(evals);\n      for i in (0..challenges.len()).rev() {\n        poly_evals.bound_poly_var_bot(&challenges[i]);\n      }\n      assert_eq!(poly_evals.len(), 1);\n      let joint_claim_eval = poly_evals[0];\n      let mut r_joint = challenges;\n      r_joint.extend(r);\n\n      debug_assert_eq!(joint_poly.evaluate(&r_joint), joint_claim_eval);\n      (r_joint, joint_claim_eval)\n    };\n    // decommit the joint polynomial at r_joint\n    eval_joint.append_to_transcript(b\"joint_claim_eval\", transcript);\n    let (proof_derefs, _comm_derefs_eval) = PolyEvalProof::prove(\n      joint_poly,\n      None,\n      &r_joint,\n      &eval_joint,\n      None,\n      gens,\n      transcript,\n      random_tape,\n    );\n\n    proof_derefs\n  }\n\n  // evaluates both polynomials at r and produces a joint proof of opening\n  pub fn prove(\n    derefs: &Derefs,\n    eval_row_ops_val_vec: &[Scalar],\n    eval_col_ops_val_vec: &[Scalar],\n    r: &[Scalar],\n    gens: &PolyCommitmentGens,\n    transcript: &mut Transcript,\n    random_tape: &mut RandomTape,\n  ) -> Self {\n    
transcript.append_protocol_name(DerefsEvalProof::protocol_name());\n\n    let evals = {\n      let mut evals = eval_row_ops_val_vec.to_owned();\n      evals.extend(eval_col_ops_val_vec);\n      evals.resize(evals.len().next_power_of_two(), Scalar::zero());\n      evals\n    };\n    let proof_derefs =\n      DerefsEvalProof::prove_single(&derefs.comb, r, evals, gens, transcript, random_tape);\n\n    DerefsEvalProof { proof_derefs }\n  }\n\n  fn verify_single(\n    proof: &PolyEvalProof,\n    comm: &PolyCommitment,\n    r: &[Scalar],\n    evals: Vec<Scalar>,\n    gens: &PolyCommitmentGens,\n    transcript: &mut Transcript,\n  ) -> Result<(), ProofVerifyError> {\n    // append the claimed evaluations to transcript\n    evals.append_to_transcript(b\"evals_ops_val\", transcript);\n\n    // n-to-1 reduction\n    let challenges =\n      transcript.challenge_vector(b\"challenge_combine_n_to_one\", evals.len().log_2());\n    let mut poly_evals = DensePolynomial::new(evals);\n    for i in (0..challenges.len()).rev() {\n      poly_evals.bound_poly_var_bot(&challenges[i]);\n    }\n    assert_eq!(poly_evals.len(), 1);\n    let joint_claim_eval = poly_evals[0];\n    let mut r_joint = challenges;\n    r_joint.extend(r);\n\n    // decommit the joint polynomial at r_joint\n    joint_claim_eval.append_to_transcript(b\"joint_claim_eval\", transcript);\n\n    proof.verify_plain(gens, transcript, &r_joint, &joint_claim_eval, comm)\n  }\n\n  // verify evaluations of both polynomials at r\n  pub fn verify(\n    &self,\n    r: &[Scalar],\n    eval_row_ops_val_vec: &[Scalar],\n    eval_col_ops_val_vec: &[Scalar],\n    gens: &PolyCommitmentGens,\n    comm: &DerefsCommitment,\n    transcript: &mut Transcript,\n  ) -> Result<(), ProofVerifyError> {\n    transcript.append_protocol_name(DerefsEvalProof::protocol_name());\n    let mut evals = eval_row_ops_val_vec.to_owned();\n    evals.extend(eval_col_ops_val_vec);\n    evals.resize(evals.len().next_power_of_two(), Scalar::zero());\n\n    
DerefsEvalProof::verify_single(\n      &self.proof_derefs,\n      &comm.comm_ops_val,\n      r,\n      evals,\n      gens,\n      transcript,\n    )\n  }\n}\n\nimpl AppendToTranscript for DerefsCommitment {\n  fn append_to_transcript(&self, label: &'static [u8], transcript: &mut Transcript) {\n    transcript.append_message(b\"derefs_commitment\", b\"begin_derefs_commitment\");\n    self.comm_ops_val.append_to_transcript(label, transcript);\n    transcript.append_message(b\"derefs_commitment\", b\"end_derefs_commitment\");\n  }\n}\n\nstruct AddrTimestamps {\n  ops_addr_usize: Vec<Vec<usize>>,\n  ops_addr: Vec<DensePolynomial>,\n  read_ts: Vec<DensePolynomial>,\n  audit_ts: DensePolynomial,\n}\n\nimpl AddrTimestamps {\n  pub fn new(num_cells: usize, num_ops: usize, ops_addr: Vec<Vec<usize>>) -> Self {\n    for item in ops_addr.iter() {\n      assert_eq!(item.len(), num_ops);\n    }\n\n    let mut audit_ts = vec![0usize; num_cells];\n    let mut ops_addr_vec: Vec<DensePolynomial> = Vec::new();\n    let mut read_ts_vec: Vec<DensePolynomial> = Vec::new();\n    for ops_addr_inst in ops_addr.iter() {\n      let mut read_ts = vec![0usize; num_ops];\n\n      // since read timestamps are trustworthy, we can simply increment the r-ts to obtain a w-ts\n      // this is sufficient to ensure that the write-set, consisting of (addr, val, ts) tuples, is a set\n      for i in 0..num_ops {\n        let addr = ops_addr_inst[i];\n        assert!(addr < num_cells);\n        let r_ts = audit_ts[addr];\n        read_ts[i] = r_ts;\n\n        let w_ts = r_ts + 1;\n        audit_ts[addr] = w_ts;\n      }\n\n      ops_addr_vec.push(DensePolynomial::from_usize(ops_addr_inst));\n      read_ts_vec.push(DensePolynomial::from_usize(&read_ts));\n    }\n\n    AddrTimestamps {\n      ops_addr: ops_addr_vec,\n      ops_addr_usize: ops_addr,\n      read_ts: read_ts_vec,\n      audit_ts: DensePolynomial::from_usize(&audit_ts),\n    }\n  }\n\n  fn deref_mem(addr: &[usize], mem_val: &[Scalar]) -> 
DensePolynomial {\n    DensePolynomial::new(\n      (0..addr.len())\n        .map(|i| {\n          let a = addr[i];\n          mem_val[a]\n        })\n        .collect::<Vec<Scalar>>(),\n    )\n  }\n\n  pub fn deref(&self, mem_val: &[Scalar]) -> Vec<DensePolynomial> {\n    (0..self.ops_addr.len())\n      .map(|i| AddrTimestamps::deref_mem(&self.ops_addr_usize[i], mem_val))\n      .collect::<Vec<DensePolynomial>>()\n  }\n}\n\npub struct MultiSparseMatPolynomialAsDense {\n  batch_size: usize,\n  val: Vec<DensePolynomial>,\n  row: AddrTimestamps,\n  col: AddrTimestamps,\n  comb_ops: DensePolynomial,\n  comb_mem: DensePolynomial,\n}\n\npub struct SparseMatPolyCommitmentGens {\n  gens_ops: PolyCommitmentGens,\n  gens_mem: PolyCommitmentGens,\n  gens_derefs: PolyCommitmentGens,\n}\n\nimpl SparseMatPolyCommitmentGens {\n  pub fn new(\n    label: &'static [u8],\n    num_vars_x: usize,\n    num_vars_y: usize,\n    num_nz_entries: usize,\n    batch_size: usize,\n  ) -> SparseMatPolyCommitmentGens {\n    let num_vars_ops =\n      num_nz_entries.next_power_of_two().log_2() + (batch_size * 5).next_power_of_two().log_2();\n    let num_vars_mem = if num_vars_x > num_vars_y {\n      num_vars_x\n    } else {\n      num_vars_y\n    } + 1;\n    let num_vars_derefs =\n      num_nz_entries.next_power_of_two().log_2() + (batch_size * 2).next_power_of_two().log_2();\n\n    let gens_ops = PolyCommitmentGens::new(num_vars_ops, label);\n    let gens_mem = PolyCommitmentGens::new(num_vars_mem, label);\n    let gens_derefs = PolyCommitmentGens::new(num_vars_derefs, label);\n    SparseMatPolyCommitmentGens {\n      gens_ops,\n      gens_mem,\n      gens_derefs,\n    }\n  }\n}\n\n#[derive(Debug, Serialize, Deserialize)]\npub struct SparseMatPolyCommitment {\n  batch_size: usize,\n  num_ops: usize,\n  num_mem_cells: usize,\n  comm_comb_ops: PolyCommitment,\n  comm_comb_mem: PolyCommitment,\n}\n\nimpl AppendToTranscript for SparseMatPolyCommitment {\n  fn append_to_transcript(&self, _label: 
&'static [u8], transcript: &mut Transcript) {\n    transcript.append_u64(b\"batch_size\", self.batch_size as u64);\n    transcript.append_u64(b\"num_ops\", self.num_ops as u64);\n    transcript.append_u64(b\"num_mem_cells\", self.num_mem_cells as u64);\n    self\n      .comm_comb_ops\n      .append_to_transcript(b\"comm_comb_ops\", transcript);\n    self\n      .comm_comb_mem\n      .append_to_transcript(b\"comm_comb_mem\", transcript);\n  }\n}\n\nimpl SparseMatPolynomial {\n  pub fn new(num_vars_x: usize, num_vars_y: usize, M: Vec<SparseMatEntry>) -> Self {\n    SparseMatPolynomial {\n      num_vars_x,\n      num_vars_y,\n      M,\n    }\n  }\n\n  pub fn get_num_nz_entries(&self) -> usize {\n    self.M.len().next_power_of_two()\n  }\n\n  fn sparse_to_dense_vecs(&self, N: usize) -> (Vec<usize>, Vec<usize>, Vec<Scalar>) {\n    assert!(N >= self.get_num_nz_entries());\n    let mut ops_row: Vec<usize> = vec![0; N];\n    let mut ops_col: Vec<usize> = vec![0; N];\n    let mut val: Vec<Scalar> = vec![Scalar::zero(); N];\n\n    for i in 0..self.M.len() {\n      ops_row[i] = self.M[i].row;\n      ops_col[i] = self.M[i].col;\n      val[i] = self.M[i].val;\n    }\n    (ops_row, ops_col, val)\n  }\n\n  fn multi_sparse_to_dense_rep(\n    sparse_polys: &[&SparseMatPolynomial],\n  ) -> MultiSparseMatPolynomialAsDense {\n    assert!(!sparse_polys.is_empty());\n    for i in 1..sparse_polys.len() {\n      assert_eq!(sparse_polys[i].num_vars_x, sparse_polys[0].num_vars_x);\n      assert_eq!(sparse_polys[i].num_vars_y, sparse_polys[0].num_vars_y);\n    }\n\n    let N = (0..sparse_polys.len())\n      .map(|i| sparse_polys[i].get_num_nz_entries())\n      .max()\n      .unwrap()\n      .next_power_of_two();\n\n    let mut ops_row_vec: Vec<Vec<usize>> = Vec::new();\n    let mut ops_col_vec: Vec<Vec<usize>> = Vec::new();\n    let mut val_vec: Vec<DensePolynomial> = Vec::new();\n    for poly in sparse_polys {\n      let (ops_row, ops_col, val) = poly.sparse_to_dense_vecs(N);\n      
ops_row_vec.push(ops_row);\n      ops_col_vec.push(ops_col);\n      val_vec.push(DensePolynomial::new(val));\n    }\n\n    let any_poly = &sparse_polys[0];\n\n    let num_mem_cells = if any_poly.num_vars_x > any_poly.num_vars_y {\n      any_poly.num_vars_x.pow2()\n    } else {\n      any_poly.num_vars_y.pow2()\n    };\n\n    let row = AddrTimestamps::new(num_mem_cells, N, ops_row_vec);\n    let col = AddrTimestamps::new(num_mem_cells, N, ops_col_vec);\n\n    // combine polynomials into a single polynomial for commitment purposes\n    let comb_ops = DensePolynomial::merge(\n      row\n        .ops_addr\n        .iter()\n        .chain(row.read_ts.iter())\n        .chain(col.ops_addr.iter())\n        .chain(col.read_ts.iter())\n        .chain(val_vec.iter()),\n    );\n    let mut comb_mem = row.audit_ts.clone();\n    comb_mem.extend(&col.audit_ts);\n\n    MultiSparseMatPolynomialAsDense {\n      batch_size: sparse_polys.len(),\n      row,\n      col,\n      val: val_vec,\n      comb_ops,\n      comb_mem,\n    }\n  }\n\n  fn evaluate_with_tables(&self, eval_table_rx: &[Scalar], eval_table_ry: &[Scalar]) -> Scalar {\n    assert_eq!(self.num_vars_x.pow2(), eval_table_rx.len());\n    assert_eq!(self.num_vars_y.pow2(), eval_table_ry.len());\n\n    (0..self.M.len())\n      .map(|i| {\n        let row = self.M[i].row;\n        let col = self.M[i].col;\n        let val = &self.M[i].val;\n        eval_table_rx[row] * eval_table_ry[col] * val\n      })\n      .sum()\n  }\n\n  pub fn multi_evaluate(\n    polys: &[&SparseMatPolynomial],\n    rx: &[Scalar],\n    ry: &[Scalar],\n  ) -> Vec<Scalar> {\n    let eval_table_rx = EqPolynomial::new(rx.to_vec()).evals();\n    let eval_table_ry = EqPolynomial::new(ry.to_vec()).evals();\n\n    (0..polys.len())\n      .map(|i| polys[i].evaluate_with_tables(&eval_table_rx, &eval_table_ry))\n      .collect::<Vec<Scalar>>()\n  }\n\n  pub fn multiply_vec(&self, num_rows: usize, num_cols: usize, z: &[Scalar]) -> Vec<Scalar> {\n    
assert_eq!(z.len(), num_cols);\n\n    (0..self.M.len())\n      .map(|i| {\n        let row = self.M[i].row;\n        let col = self.M[i].col;\n        let val = &self.M[i].val;\n        (row, val * z[col])\n      })\n      .fold(vec![Scalar::zero(); num_rows], |mut Mz, (r, v)| {\n        Mz[r] += v;\n        Mz\n      })\n  }\n\n  pub fn compute_eval_table_sparse(\n    &self,\n    rx: &[Scalar],\n    num_rows: usize,\n    num_cols: usize,\n  ) -> Vec<Scalar> {\n    assert_eq!(rx.len(), num_rows);\n\n    let mut M_evals: Vec<Scalar> = vec![Scalar::zero(); num_cols];\n\n    for i in 0..self.M.len() {\n      let entry = &self.M[i];\n      M_evals[entry.col] += rx[entry.row] * entry.val;\n    }\n    M_evals\n  }\n\n  pub fn multi_commit(\n    sparse_polys: &[&SparseMatPolynomial],\n    gens: &SparseMatPolyCommitmentGens,\n  ) -> (SparseMatPolyCommitment, MultiSparseMatPolynomialAsDense) {\n    let batch_size = sparse_polys.len();\n    let dense = SparseMatPolynomial::multi_sparse_to_dense_rep(sparse_polys);\n\n    let (comm_comb_ops, _blinds_comb_ops) = dense.comb_ops.commit(&gens.gens_ops, None);\n    let (comm_comb_mem, _blinds_comb_mem) = dense.comb_mem.commit(&gens.gens_mem, None);\n\n    (\n      SparseMatPolyCommitment {\n        batch_size,\n        num_mem_cells: dense.row.audit_ts.len(),\n        num_ops: dense.row.read_ts[0].len(),\n        comm_comb_ops,\n        comm_comb_mem,\n      },\n      dense,\n    )\n  }\n}\n\nimpl MultiSparseMatPolynomialAsDense {\n  pub fn deref(&self, row_mem_val: &[Scalar], col_mem_val: &[Scalar]) -> Derefs {\n    let row_ops_val = self.row.deref(row_mem_val);\n    let col_ops_val = self.col.deref(col_mem_val);\n\n    Derefs::new(row_ops_val, col_ops_val)\n  }\n}\n\n#[derive(Debug)]\nstruct ProductLayer {\n  init: ProductCircuit,\n  read_vec: Vec<ProductCircuit>,\n  write_vec: Vec<ProductCircuit>,\n  audit: ProductCircuit,\n}\n\n#[derive(Debug)]\nstruct Layers {\n  prod_layer: ProductLayer,\n}\n\nimpl Layers {\n  fn 
build_hash_layer(\n    eval_table: &[Scalar],\n    addrs_vec: &[DensePolynomial],\n    derefs_vec: &[DensePolynomial],\n    read_ts_vec: &[DensePolynomial],\n    audit_ts: &DensePolynomial,\n    r_mem_check: &(Scalar, Scalar),\n  ) -> (\n    DensePolynomial,\n    Vec<DensePolynomial>,\n    Vec<DensePolynomial>,\n    DensePolynomial,\n  ) {\n    let (r_hash, r_multiset_check) = r_mem_check;\n\n    //hash(addr, val, ts) = ts * r_hash_sqr + val * r_hash + addr\n    let r_hash_sqr = r_hash * r_hash;\n    let hash_func = |addr: &Scalar, val: &Scalar, ts: &Scalar| -> Scalar {\n      ts * r_hash_sqr + val * r_hash + addr\n    };\n\n    // hash init and audit that does not depend on #instances\n    let num_mem_cells = eval_table.len();\n    let poly_init_hashed = DensePolynomial::new(\n      (0..num_mem_cells)\n        .map(|i| {\n          // at init time, addr is given by i, init value is given by eval_table, and ts = 0\n          hash_func(&Scalar::from(i as u64), &eval_table[i], &Scalar::zero()) - r_multiset_check\n        })\n        .collect::<Vec<Scalar>>(),\n    );\n    let poly_audit_hashed = DensePolynomial::new(\n      (0..num_mem_cells)\n        .map(|i| {\n          // at audit time, addr is given by i, value is given by eval_table, and ts is given by audit_ts\n          hash_func(&Scalar::from(i as u64), &eval_table[i], &audit_ts[i]) - r_multiset_check\n        })\n        .collect::<Vec<Scalar>>(),\n    );\n\n    // hash read and write that depends on #instances\n    let mut poly_read_hashed_vec: Vec<DensePolynomial> = Vec::new();\n    let mut poly_write_hashed_vec: Vec<DensePolynomial> = Vec::new();\n    for i in 0..addrs_vec.len() {\n      let (addrs, derefs, read_ts) = (&addrs_vec[i], &derefs_vec[i], &read_ts_vec[i]);\n      assert_eq!(addrs.len(), derefs.len());\n      assert_eq!(addrs.len(), read_ts.len());\n      let num_ops = addrs.len();\n      let poly_read_hashed = DensePolynomial::new(\n        (0..num_ops)\n          .map(|i| {\n            // at 
read time, addr is given by addrs, value is given by derefs, and ts is given by read_ts\n            hash_func(&addrs[i], &derefs[i], &read_ts[i]) - r_multiset_check\n          })\n          .collect::<Vec<Scalar>>(),\n      );\n      poly_read_hashed_vec.push(poly_read_hashed);\n\n      let poly_write_hashed = DensePolynomial::new(\n        (0..num_ops)\n          .map(|i| {\n            // at write time, addr is given by addrs, value is given by derefs, and ts is given by write_ts = read_ts + 1\n            hash_func(&addrs[i], &derefs[i], &(read_ts[i] + Scalar::one())) - r_multiset_check\n          })\n          .collect::<Vec<Scalar>>(),\n      );\n      poly_write_hashed_vec.push(poly_write_hashed);\n    }\n\n    (\n      poly_init_hashed,\n      poly_read_hashed_vec,\n      poly_write_hashed_vec,\n      poly_audit_hashed,\n    )\n  }\n\n  pub fn new(\n    eval_table: &[Scalar],\n    addr_timestamps: &AddrTimestamps,\n    poly_ops_val: &[DensePolynomial],\n    r_mem_check: &(Scalar, Scalar),\n  ) -> Self {\n    let (poly_init_hashed, poly_read_hashed_vec, poly_write_hashed_vec, poly_audit_hashed) =\n      Layers::build_hash_layer(\n        eval_table,\n        &addr_timestamps.ops_addr,\n        poly_ops_val,\n        &addr_timestamps.read_ts,\n        &addr_timestamps.audit_ts,\n        r_mem_check,\n      );\n\n    let prod_init = ProductCircuit::new(&poly_init_hashed);\n    let prod_read_vec = (0..poly_read_hashed_vec.len())\n      .map(|i| ProductCircuit::new(&poly_read_hashed_vec[i]))\n      .collect::<Vec<ProductCircuit>>();\n    let prod_write_vec = (0..poly_write_hashed_vec.len())\n      .map(|i| ProductCircuit::new(&poly_write_hashed_vec[i]))\n      .collect::<Vec<ProductCircuit>>();\n    let prod_audit = ProductCircuit::new(&poly_audit_hashed);\n\n    // subset audit check\n    let hashed_writes: Scalar = (0..prod_write_vec.len())\n      .map(|i| prod_write_vec[i].evaluate())\n      .product();\n    let hashed_write_set: Scalar = prod_init.evaluate() 
* hashed_writes;\n\n    let hashed_reads: Scalar = (0..prod_read_vec.len())\n      .map(|i| prod_read_vec[i].evaluate())\n      .product();\n    let hashed_read_set: Scalar = hashed_reads * prod_audit.evaluate();\n\n    //assert_eq!(hashed_read_set, hashed_write_set);\n    debug_assert_eq!(hashed_read_set, hashed_write_set);\n\n    Layers {\n      prod_layer: ProductLayer {\n        init: prod_init,\n        read_vec: prod_read_vec,\n        write_vec: prod_write_vec,\n        audit: prod_audit,\n      },\n    }\n  }\n}\n\n#[derive(Debug)]\nstruct PolyEvalNetwork {\n  row_layers: Layers,\n  col_layers: Layers,\n}\n\nimpl PolyEvalNetwork {\n  pub fn new(\n    dense: &MultiSparseMatPolynomialAsDense,\n    derefs: &Derefs,\n    mem_rx: &[Scalar],\n    mem_ry: &[Scalar],\n    r_mem_check: &(Scalar, Scalar),\n  ) -> Self {\n    let row_layers = Layers::new(mem_rx, &dense.row, &derefs.row_ops_val, r_mem_check);\n    let col_layers = Layers::new(mem_ry, &dense.col, &derefs.col_ops_val, r_mem_check);\n\n    PolyEvalNetwork {\n      row_layers,\n      col_layers,\n    }\n  }\n}\n\n#[derive(Debug, Serialize, Deserialize)]\nstruct HashLayerProof {\n  eval_row: (Vec<Scalar>, Vec<Scalar>, Scalar),\n  eval_col: (Vec<Scalar>, Vec<Scalar>, Scalar),\n  eval_val: Vec<Scalar>,\n  eval_derefs: (Vec<Scalar>, Vec<Scalar>),\n  proof_ops: PolyEvalProof,\n  proof_mem: PolyEvalProof,\n  proof_derefs: DerefsEvalProof,\n}\n\nimpl HashLayerProof {\n  fn protocol_name() -> &'static [u8] {\n    b\"Sparse polynomial hash layer proof\"\n  }\n\n  fn prove_helper(\n    rand: (&Vec<Scalar>, &Vec<Scalar>),\n    addr_timestamps: &AddrTimestamps,\n  ) -> (Vec<Scalar>, Vec<Scalar>, Scalar) {\n    let (rand_mem, rand_ops) = rand;\n\n    // decommit ops-addr at rand_ops\n    let mut eval_ops_addr_vec: Vec<Scalar> = Vec::new();\n    for i in 0..addr_timestamps.ops_addr.len() {\n      let eval_ops_addr = addr_timestamps.ops_addr[i].evaluate(rand_ops);\n      eval_ops_addr_vec.push(eval_ops_addr);\n    }\n\n  
  // decommit read_ts at rand_ops\n    let mut eval_read_ts_vec: Vec<Scalar> = Vec::new();\n    for i in 0..addr_timestamps.read_ts.len() {\n      let eval_read_ts = addr_timestamps.read_ts[i].evaluate(rand_ops);\n      eval_read_ts_vec.push(eval_read_ts);\n    }\n\n    // decommit audit-ts at rand_mem\n    let eval_audit_ts = addr_timestamps.audit_ts.evaluate(rand_mem);\n\n    (eval_ops_addr_vec, eval_read_ts_vec, eval_audit_ts)\n  }\n\n  fn prove(\n    rand: (&Vec<Scalar>, &Vec<Scalar>),\n    dense: &MultiSparseMatPolynomialAsDense,\n    derefs: &Derefs,\n    gens: &SparseMatPolyCommitmentGens,\n    transcript: &mut Transcript,\n    random_tape: &mut RandomTape,\n  ) -> Self {\n    transcript.append_protocol_name(HashLayerProof::protocol_name());\n\n    let (rand_mem, rand_ops) = rand;\n\n    // decommit derefs at rand_ops\n    let eval_row_ops_val = (0..derefs.row_ops_val.len())\n      .map(|i| derefs.row_ops_val[i].evaluate(rand_ops))\n      .collect::<Vec<Scalar>>();\n    let eval_col_ops_val = (0..derefs.col_ops_val.len())\n      .map(|i| derefs.col_ops_val[i].evaluate(rand_ops))\n      .collect::<Vec<Scalar>>();\n    let proof_derefs = DerefsEvalProof::prove(\n      derefs,\n      &eval_row_ops_val,\n      &eval_col_ops_val,\n      rand_ops,\n      &gens.gens_derefs,\n      transcript,\n      random_tape,\n    );\n    let eval_derefs = (eval_row_ops_val, eval_col_ops_val);\n\n    // evaluate row_addr, row_read-ts, col_addr, col_read-ts, val at rand_ops\n    // evaluate row_audit_ts and col_audit_ts at rand_mem\n    let (eval_row_addr_vec, eval_row_read_ts_vec, eval_row_audit_ts) =\n      HashLayerProof::prove_helper((rand_mem, rand_ops), &dense.row);\n    let (eval_col_addr_vec, eval_col_read_ts_vec, eval_col_audit_ts) =\n      HashLayerProof::prove_helper((rand_mem, rand_ops), &dense.col);\n    let eval_val_vec = (0..dense.val.len())\n      .map(|i| dense.val[i].evaluate(rand_ops))\n      .collect::<Vec<Scalar>>();\n\n    // form a single decommitment using 
comm_comb_ops\n    let mut evals_ops: Vec<Scalar> = Vec::new();\n    evals_ops.extend(&eval_row_addr_vec);\n    evals_ops.extend(&eval_row_read_ts_vec);\n    evals_ops.extend(&eval_col_addr_vec);\n    evals_ops.extend(&eval_col_read_ts_vec);\n    evals_ops.extend(&eval_val_vec);\n    evals_ops.resize(evals_ops.len().next_power_of_two(), Scalar::zero());\n    evals_ops.append_to_transcript(b\"claim_evals_ops\", transcript);\n    let challenges_ops =\n      transcript.challenge_vector(b\"challenge_combine_n_to_one\", evals_ops.len().log_2());\n\n    let mut poly_evals_ops = DensePolynomial::new(evals_ops);\n    for i in (0..challenges_ops.len()).rev() {\n      poly_evals_ops.bound_poly_var_bot(&challenges_ops[i]);\n    }\n    assert_eq!(poly_evals_ops.len(), 1);\n    let joint_claim_eval_ops = poly_evals_ops[0];\n    let mut r_joint_ops = challenges_ops;\n    r_joint_ops.extend(rand_ops);\n    debug_assert_eq!(dense.comb_ops.evaluate(&r_joint_ops), joint_claim_eval_ops);\n    joint_claim_eval_ops.append_to_transcript(b\"joint_claim_eval_ops\", transcript);\n    let (proof_ops, _comm_ops_eval) = PolyEvalProof::prove(\n      &dense.comb_ops,\n      None,\n      &r_joint_ops,\n      &joint_claim_eval_ops,\n      None,\n      &gens.gens_ops,\n      transcript,\n      random_tape,\n    );\n\n    // form a single decommitment using comm_comb_mem at rand_mem\n    let evals_mem: Vec<Scalar> = vec![eval_row_audit_ts, eval_col_audit_ts];\n    evals_mem.append_to_transcript(b\"claim_evals_mem\", transcript);\n    let challenges_mem =\n      transcript.challenge_vector(b\"challenge_combine_two_to_one\", evals_mem.len().log_2());\n\n    let mut poly_evals_mem = DensePolynomial::new(evals_mem);\n    for i in (0..challenges_mem.len()).rev() {\n      poly_evals_mem.bound_poly_var_bot(&challenges_mem[i]);\n    }\n    assert_eq!(poly_evals_mem.len(), 1);\n    let joint_claim_eval_mem = poly_evals_mem[0];\n    let mut r_joint_mem = challenges_mem;\n    r_joint_mem.extend(rand_mem);\n   
 debug_assert_eq!(dense.comb_mem.evaluate(&r_joint_mem), joint_claim_eval_mem);\n    joint_claim_eval_mem.append_to_transcript(b\"joint_claim_eval_mem\", transcript);\n    let (proof_mem, _comm_mem_eval) = PolyEvalProof::prove(\n      &dense.comb_mem,\n      None,\n      &r_joint_mem,\n      &joint_claim_eval_mem,\n      None,\n      &gens.gens_mem,\n      transcript,\n      random_tape,\n    );\n\n    HashLayerProof {\n      eval_row: (eval_row_addr_vec, eval_row_read_ts_vec, eval_row_audit_ts),\n      eval_col: (eval_col_addr_vec, eval_col_read_ts_vec, eval_col_audit_ts),\n      eval_val: eval_val_vec,\n      eval_derefs,\n      proof_ops,\n      proof_mem,\n      proof_derefs,\n    }\n  }\n\n  fn verify_helper(\n    rand: &(&Vec<Scalar>, &Vec<Scalar>),\n    claims: &(Scalar, Vec<Scalar>, Vec<Scalar>, Scalar),\n    eval_ops_val: &[Scalar],\n    eval_ops_addr: &[Scalar],\n    eval_read_ts: &[Scalar],\n    eval_audit_ts: &Scalar,\n    r: &[Scalar],\n    r_hash: &Scalar,\n    r_multiset_check: &Scalar,\n  ) -> Result<(), ProofVerifyError> {\n    let r_hash_sqr = r_hash * r_hash;\n    let hash_func = |addr: &Scalar, val: &Scalar, ts: &Scalar| -> Scalar {\n      ts * r_hash_sqr + val * r_hash + addr\n    };\n\n    let (rand_mem, _rand_ops) = rand;\n    let (claim_init, claim_read, claim_write, claim_audit) = claims;\n\n    // init\n    let eval_init_addr = IdentityPolynomial::new(rand_mem.len()).evaluate(rand_mem);\n    let eval_init_val = EqPolynomial::new(r.to_vec()).evaluate(rand_mem);\n    let hash_init_at_rand_mem =\n      hash_func(&eval_init_addr, &eval_init_val, &Scalar::zero()) - r_multiset_check; // verify the claim_last of init chunk\n    assert_eq!(&hash_init_at_rand_mem, claim_init);\n\n    // read\n    for i in 0..eval_ops_addr.len() {\n      let hash_read_at_rand_ops =\n        hash_func(&eval_ops_addr[i], &eval_ops_val[i], &eval_read_ts[i]) - r_multiset_check; // verify the claim_last of read chunk\n      assert_eq!(&hash_read_at_rand_ops, 
&claim_read[i]);\n    }\n\n    // write: shares addr, val component; only decommit write_ts\n    for i in 0..eval_ops_addr.len() {\n      let eval_write_ts = eval_read_ts[i] + Scalar::one();\n      let hash_write_at_rand_ops =\n        hash_func(&eval_ops_addr[i], &eval_ops_val[i], &eval_write_ts) - r_multiset_check; // verify the claim_last of write chunk\n      assert_eq!(&hash_write_at_rand_ops, &claim_write[i]);\n    }\n\n    // audit: shares addr and val with init\n    let eval_audit_addr = eval_init_addr;\n    let eval_audit_val = eval_init_val;\n    let hash_audit_at_rand_mem =\n      hash_func(&eval_audit_addr, &eval_audit_val, eval_audit_ts) - r_multiset_check;\n    assert_eq!(&hash_audit_at_rand_mem, claim_audit); // verify the last step of the sum-check for audit\n\n    Ok(())\n  }\n\n  fn verify(\n    &self,\n    rand: (&Vec<Scalar>, &Vec<Scalar>),\n    claims_row: &(Scalar, Vec<Scalar>, Vec<Scalar>, Scalar),\n    claims_col: &(Scalar, Vec<Scalar>, Vec<Scalar>, Scalar),\n    claims_dotp: &[Scalar],\n    comm: &SparseMatPolyCommitment,\n    gens: &SparseMatPolyCommitmentGens,\n    comm_derefs: &DerefsCommitment,\n    rx: &[Scalar],\n    ry: &[Scalar],\n    r_hash: &Scalar,\n    r_multiset_check: &Scalar,\n    transcript: &mut Transcript,\n  ) -> Result<(), ProofVerifyError> {\n    let timer = Timer::new(\"verify_hash_proof\");\n    transcript.append_protocol_name(HashLayerProof::protocol_name());\n\n    let (rand_mem, rand_ops) = rand;\n\n    // verify derefs at rand_ops\n    let (eval_row_ops_val, eval_col_ops_val) = &self.eval_derefs;\n    assert_eq!(eval_row_ops_val.len(), eval_col_ops_val.len());\n    self.proof_derefs.verify(\n      rand_ops,\n      eval_row_ops_val,\n      eval_col_ops_val,\n      &gens.gens_derefs,\n      comm_derefs,\n      transcript,\n    )?;\n\n    // verify the decommitments used in evaluation sum-check\n    let eval_val_vec = &self.eval_val;\n    assert_eq!(claims_dotp.len(), 3 * eval_row_ops_val.len());\n    for i in 
0..claims_dotp.len() / 3 {\n      let claim_row_ops_val = claims_dotp[3 * i];\n      let claim_col_ops_val = claims_dotp[3 * i + 1];\n      let claim_val = claims_dotp[3 * i + 2];\n\n      assert_eq!(claim_row_ops_val, eval_row_ops_val[i]);\n      assert_eq!(claim_col_ops_val, eval_col_ops_val[i]);\n      assert_eq!(claim_val, eval_val_vec[i]);\n    }\n\n    // verify addr-timestamps using comm_comb_ops at rand_ops\n    let (eval_row_addr_vec, eval_row_read_ts_vec, eval_row_audit_ts) = &self.eval_row;\n    let (eval_col_addr_vec, eval_col_read_ts_vec, eval_col_audit_ts) = &self.eval_col;\n\n    let mut evals_ops: Vec<Scalar> = Vec::new();\n    evals_ops.extend(eval_row_addr_vec);\n    evals_ops.extend(eval_row_read_ts_vec);\n    evals_ops.extend(eval_col_addr_vec);\n    evals_ops.extend(eval_col_read_ts_vec);\n    evals_ops.extend(eval_val_vec);\n    evals_ops.resize(evals_ops.len().next_power_of_two(), Scalar::zero());\n    evals_ops.append_to_transcript(b\"claim_evals_ops\", transcript);\n    let challenges_ops =\n      transcript.challenge_vector(b\"challenge_combine_n_to_one\", evals_ops.len().log_2());\n\n    let mut poly_evals_ops = DensePolynomial::new(evals_ops);\n    for i in (0..challenges_ops.len()).rev() {\n      poly_evals_ops.bound_poly_var_bot(&challenges_ops[i]);\n    }\n    assert_eq!(poly_evals_ops.len(), 1);\n    let joint_claim_eval_ops = poly_evals_ops[0];\n    let mut r_joint_ops = challenges_ops;\n    r_joint_ops.extend(rand_ops);\n    joint_claim_eval_ops.append_to_transcript(b\"joint_claim_eval_ops\", transcript);\n    self.proof_ops.verify_plain(\n      &gens.gens_ops,\n      transcript,\n      &r_joint_ops,\n      &joint_claim_eval_ops,\n      &comm.comm_comb_ops,\n    )?;\n\n    // verify proof-mem using comm_comb_mem at rand_mem\n    // form a single decommitment using comm_comb_mem at rand_mem\n    let evals_mem: Vec<Scalar> = vec![*eval_row_audit_ts, *eval_col_audit_ts];\n    evals_mem.append_to_transcript(b\"claim_evals_mem\", 
transcript);\n    let challenges_mem =\n      transcript.challenge_vector(b\"challenge_combine_two_to_one\", evals_mem.len().log_2());\n\n    let mut poly_evals_mem = DensePolynomial::new(evals_mem);\n    for i in (0..challenges_mem.len()).rev() {\n      poly_evals_mem.bound_poly_var_bot(&challenges_mem[i]);\n    }\n    assert_eq!(poly_evals_mem.len(), 1);\n    let joint_claim_eval_mem = poly_evals_mem[0];\n    let mut r_joint_mem = challenges_mem;\n    r_joint_mem.extend(rand_mem);\n    joint_claim_eval_mem.append_to_transcript(b\"joint_claim_eval_mem\", transcript);\n    self.proof_mem.verify_plain(\n      &gens.gens_mem,\n      transcript,\n      &r_joint_mem,\n      &joint_claim_eval_mem,\n      &comm.comm_comb_mem,\n    )?;\n\n    // verify the claims from the product layer\n    let (eval_ops_addr, eval_read_ts, eval_audit_ts) = &self.eval_row;\n    HashLayerProof::verify_helper(\n      &(rand_mem, rand_ops),\n      claims_row,\n      eval_row_ops_val,\n      eval_ops_addr,\n      eval_read_ts,\n      eval_audit_ts,\n      rx,\n      r_hash,\n      r_multiset_check,\n    )?;\n\n    let (eval_ops_addr, eval_read_ts, eval_audit_ts) = &self.eval_col;\n    HashLayerProof::verify_helper(\n      &(rand_mem, rand_ops),\n      claims_col,\n      eval_col_ops_val,\n      eval_ops_addr,\n      eval_read_ts,\n      eval_audit_ts,\n      ry,\n      r_hash,\n      r_multiset_check,\n    )?;\n\n    timer.stop();\n    Ok(())\n  }\n}\n\n#[derive(Debug, Serialize, Deserialize)]\nstruct ProductLayerProof {\n  eval_row: (Scalar, Vec<Scalar>, Vec<Scalar>, Scalar),\n  eval_col: (Scalar, Vec<Scalar>, Vec<Scalar>, Scalar),\n  eval_val: (Vec<Scalar>, Vec<Scalar>),\n  proof_mem: ProductCircuitEvalProofBatched,\n  proof_ops: ProductCircuitEvalProofBatched,\n}\n\nimpl ProductLayerProof {\n  fn protocol_name() -> &'static [u8] {\n    b\"Sparse polynomial product layer proof\"\n  }\n\n  pub fn prove(\n    row_prod_layer: &mut ProductLayer,\n    col_prod_layer: &mut ProductLayer,\n    
dense: &MultiSparseMatPolynomialAsDense,\n    derefs: &Derefs,\n    eval: &[Scalar],\n    transcript: &mut Transcript,\n  ) -> (Self, Vec<Scalar>, Vec<Scalar>) {\n    transcript.append_protocol_name(ProductLayerProof::protocol_name());\n\n    let row_eval_init = row_prod_layer.init.evaluate();\n    let row_eval_audit = row_prod_layer.audit.evaluate();\n    let row_eval_read = (0..row_prod_layer.read_vec.len())\n      .map(|i| row_prod_layer.read_vec[i].evaluate())\n      .collect::<Vec<Scalar>>();\n    let row_eval_write = (0..row_prod_layer.write_vec.len())\n      .map(|i| row_prod_layer.write_vec[i].evaluate())\n      .collect::<Vec<Scalar>>();\n\n    // subset check\n    let ws: Scalar = (0..row_eval_write.len())\n      .map(|i| row_eval_write[i])\n      .product();\n    let rs: Scalar = (0..row_eval_read.len()).map(|i| row_eval_read[i]).product();\n    assert_eq!(row_eval_init * ws, rs * row_eval_audit);\n\n    row_eval_init.append_to_transcript(b\"claim_row_eval_init\", transcript);\n    row_eval_read.append_to_transcript(b\"claim_row_eval_read\", transcript);\n    row_eval_write.append_to_transcript(b\"claim_row_eval_write\", transcript);\n    row_eval_audit.append_to_transcript(b\"claim_row_eval_audit\", transcript);\n\n    let col_eval_init = col_prod_layer.init.evaluate();\n    let col_eval_audit = col_prod_layer.audit.evaluate();\n    let col_eval_read: Vec<Scalar> = (0..col_prod_layer.read_vec.len())\n      .map(|i| col_prod_layer.read_vec[i].evaluate())\n      .collect();\n    let col_eval_write: Vec<Scalar> = (0..col_prod_layer.write_vec.len())\n      .map(|i| col_prod_layer.write_vec[i].evaluate())\n      .collect();\n\n    // subset check\n    let ws: Scalar = (0..col_eval_write.len())\n      .map(|i| col_eval_write[i])\n      .product();\n    let rs: Scalar = (0..col_eval_read.len()).map(|i| col_eval_read[i]).product();\n    assert_eq!(col_eval_init * ws, rs * col_eval_audit);\n\n    col_eval_init.append_to_transcript(b\"claim_col_eval_init\", 
transcript);\n    col_eval_read.append_to_transcript(b\"claim_col_eval_read\", transcript);\n    col_eval_write.append_to_transcript(b\"claim_col_eval_write\", transcript);\n    col_eval_audit.append_to_transcript(b\"claim_col_eval_audit\", transcript);\n\n    // prepare dot product circuits for batching them with ops-related product circuits\n    assert_eq!(eval.len(), derefs.row_ops_val.len());\n    assert_eq!(eval.len(), derefs.col_ops_val.len());\n    assert_eq!(eval.len(), dense.val.len());\n    let mut dotp_circuit_left_vec: Vec<DotProductCircuit> = Vec::new();\n    let mut dotp_circuit_right_vec: Vec<DotProductCircuit> = Vec::new();\n    let mut eval_dotp_left_vec: Vec<Scalar> = Vec::new();\n    let mut eval_dotp_right_vec: Vec<Scalar> = Vec::new();\n    for i in 0..derefs.row_ops_val.len() {\n      // prove the sparse polynomial evaluation using two dotp checks\n      let left = derefs.row_ops_val[i].clone();\n      let right = derefs.col_ops_val[i].clone();\n      let weights = dense.val[i].clone();\n\n      // build two dot product circuits to prove evaluation of sparse polynomial\n      let mut dotp_circuit = DotProductCircuit::new(left, right, weights);\n      let (dotp_circuit_left, dotp_circuit_right) = dotp_circuit.split();\n\n      let (eval_dotp_left, eval_dotp_right) =\n        (dotp_circuit_left.evaluate(), dotp_circuit_right.evaluate());\n\n      eval_dotp_left.append_to_transcript(b\"claim_eval_dotp_left\", transcript);\n      eval_dotp_right.append_to_transcript(b\"claim_eval_dotp_right\", transcript);\n      assert_eq!(eval_dotp_left + eval_dotp_right, eval[i]);\n      eval_dotp_left_vec.push(eval_dotp_left);\n      eval_dotp_right_vec.push(eval_dotp_right);\n\n      dotp_circuit_left_vec.push(dotp_circuit_left);\n      dotp_circuit_right_vec.push(dotp_circuit_right);\n    }\n\n    // The number of operations into the memory encoded by rx and ry is always the same (by design)\n    // So we can produce a batched product proof for all of them at 
the same time.\n    // prove the correctness of claim_row_eval_read, claim_row_eval_write, claim_col_eval_read, and claim_col_eval_write\n    // TODO: we currently only produce proofs for 3 batched sparse polynomial evaluations\n    assert_eq!(row_prod_layer.read_vec.len(), 3);\n    let (row_read_A, row_read_B, row_read_C) = {\n      let (vec_A, vec_BC) = row_prod_layer.read_vec.split_at_mut(1);\n      let (vec_B, vec_C) = vec_BC.split_at_mut(1);\n      (vec_A, vec_B, vec_C)\n    };\n\n    let (row_write_A, row_write_B, row_write_C) = {\n      let (vec_A, vec_BC) = row_prod_layer.write_vec.split_at_mut(1);\n      let (vec_B, vec_C) = vec_BC.split_at_mut(1);\n      (vec_A, vec_B, vec_C)\n    };\n\n    let (col_read_A, col_read_B, col_read_C) = {\n      let (vec_A, vec_BC) = col_prod_layer.read_vec.split_at_mut(1);\n      let (vec_B, vec_C) = vec_BC.split_at_mut(1);\n      (vec_A, vec_B, vec_C)\n    };\n\n    let (col_write_A, col_write_B, col_write_C) = {\n      let (vec_A, vec_BC) = col_prod_layer.write_vec.split_at_mut(1);\n      let (vec_B, vec_C) = vec_BC.split_at_mut(1);\n      (vec_A, vec_B, vec_C)\n    };\n\n    let (dotp_left_A, dotp_left_B, dotp_left_C) = {\n      let (vec_A, vec_BC) = dotp_circuit_left_vec.split_at_mut(1);\n      let (vec_B, vec_C) = vec_BC.split_at_mut(1);\n      (vec_A, vec_B, vec_C)\n    };\n\n    let (dotp_right_A, dotp_right_B, dotp_right_C) = {\n      let (vec_A, vec_BC) = dotp_circuit_right_vec.split_at_mut(1);\n      let (vec_B, vec_C) = vec_BC.split_at_mut(1);\n      (vec_A, vec_B, vec_C)\n    };\n\n    let (proof_ops, rand_ops) = ProductCircuitEvalProofBatched::prove(\n      &mut vec![\n        &mut row_read_A[0],\n        &mut row_read_B[0],\n        &mut row_read_C[0],\n        &mut row_write_A[0],\n        &mut row_write_B[0],\n        &mut row_write_C[0],\n        &mut col_read_A[0],\n        &mut col_read_B[0],\n        &mut col_read_C[0],\n        &mut col_write_A[0],\n        &mut col_write_B[0],\n        &mut 
col_write_C[0],\n      ],\n      &mut vec![\n        &mut dotp_left_A[0],\n        &mut dotp_right_A[0],\n        &mut dotp_left_B[0],\n        &mut dotp_right_B[0],\n        &mut dotp_left_C[0],\n        &mut dotp_right_C[0],\n      ],\n      transcript,\n    );\n\n    // produce a batched proof of memory-related product circuits\n    let (proof_mem, rand_mem) = ProductCircuitEvalProofBatched::prove(\n      &mut vec![\n        &mut row_prod_layer.init,\n        &mut row_prod_layer.audit,\n        &mut col_prod_layer.init,\n        &mut col_prod_layer.audit,\n      ],\n      &mut Vec::new(),\n      transcript,\n    );\n\n    let product_layer_proof = ProductLayerProof {\n      eval_row: (row_eval_init, row_eval_read, row_eval_write, row_eval_audit),\n      eval_col: (col_eval_init, col_eval_read, col_eval_write, col_eval_audit),\n      eval_val: (eval_dotp_left_vec, eval_dotp_right_vec),\n      proof_mem,\n      proof_ops,\n    };\n\n    let product_layer_proof_encoded: Vec<u8> = bincode::serialize(&product_layer_proof).unwrap();\n    let msg = format!(\n      \"len_product_layer_proof {:?}\",\n      product_layer_proof_encoded.len()\n    );\n    Timer::print(&msg);\n\n    (product_layer_proof, rand_mem, rand_ops)\n  }\n\n  pub fn verify(\n    &self,\n    num_ops: usize,\n    num_cells: usize,\n    eval: &[Scalar],\n    transcript: &mut Transcript,\n  ) -> Result<\n    (\n      Vec<Scalar>,\n      Vec<Scalar>,\n      Vec<Scalar>,\n      Vec<Scalar>,\n      Vec<Scalar>,\n    ),\n    ProofVerifyError,\n  > {\n    transcript.append_protocol_name(ProductLayerProof::protocol_name());\n\n    let timer = Timer::new(\"verify_prod_proof\");\n    let num_instances = eval.len();\n\n    // subset check\n    let (row_eval_init, row_eval_read, row_eval_write, row_eval_audit) = &self.eval_row;\n    assert_eq!(row_eval_write.len(), num_instances);\n    assert_eq!(row_eval_read.len(), num_instances);\n    let ws: Scalar = (0..row_eval_write.len())\n      .map(|i| 
row_eval_write[i])\n      .product();\n    let rs: Scalar = (0..row_eval_read.len()).map(|i| row_eval_read[i]).product();\n    assert_eq!(row_eval_init * ws, rs * row_eval_audit);\n\n    row_eval_init.append_to_transcript(b\"claim_row_eval_init\", transcript);\n    row_eval_read.append_to_transcript(b\"claim_row_eval_read\", transcript);\n    row_eval_write.append_to_transcript(b\"claim_row_eval_write\", transcript);\n    row_eval_audit.append_to_transcript(b\"claim_row_eval_audit\", transcript);\n\n    // subset check\n    let (col_eval_init, col_eval_read, col_eval_write, col_eval_audit) = &self.eval_col;\n    assert_eq!(col_eval_write.len(), num_instances);\n    assert_eq!(col_eval_read.len(), num_instances);\n    let ws: Scalar = (0..col_eval_write.len())\n      .map(|i| col_eval_write[i])\n      .product();\n    let rs: Scalar = (0..col_eval_read.len()).map(|i| col_eval_read[i]).product();\n    assert_eq!(col_eval_init * ws, rs * col_eval_audit);\n\n    col_eval_init.append_to_transcript(b\"claim_col_eval_init\", transcript);\n    col_eval_read.append_to_transcript(b\"claim_col_eval_read\", transcript);\n    col_eval_write.append_to_transcript(b\"claim_col_eval_write\", transcript);\n    col_eval_audit.append_to_transcript(b\"claim_col_eval_audit\", transcript);\n\n    // verify the evaluation of the sparse polynomial\n    let (eval_dotp_left, eval_dotp_right) = &self.eval_val;\n    assert_eq!(eval_dotp_left.len(), eval_dotp_right.len());\n    assert_eq!(eval_dotp_left.len(), num_instances);\n    let mut claims_dotp_circuit: Vec<Scalar> = Vec::new();\n    for i in 0..num_instances {\n      assert_eq!(eval_dotp_left[i] + eval_dotp_right[i], eval[i]);\n      eval_dotp_left[i].append_to_transcript(b\"claim_eval_dotp_left\", transcript);\n      eval_dotp_right[i].append_to_transcript(b\"claim_eval_dotp_right\", transcript);\n\n      claims_dotp_circuit.push(eval_dotp_left[i]);\n      claims_dotp_circuit.push(eval_dotp_right[i]);\n    }\n\n    // verify the 
correctness of claim_row_eval_read, claim_row_eval_write, claim_col_eval_read, and claim_col_eval_write\n    let mut claims_prod_circuit: Vec<Scalar> = Vec::new();\n    claims_prod_circuit.extend(row_eval_read);\n    claims_prod_circuit.extend(row_eval_write);\n    claims_prod_circuit.extend(col_eval_read);\n    claims_prod_circuit.extend(col_eval_write);\n\n    let (claims_ops, claims_dotp, rand_ops) = self.proof_ops.verify(\n      &claims_prod_circuit,\n      &claims_dotp_circuit,\n      num_ops,\n      transcript,\n    );\n    // verify the correctness of claim_row_eval_init and claim_row_eval_audit\n    let (claims_mem, _claims_mem_dotp, rand_mem) = self.proof_mem.verify(\n      &[\n        *row_eval_init,\n        *row_eval_audit,\n        *col_eval_init,\n        *col_eval_audit,\n      ],\n      &Vec::new(),\n      num_cells,\n      transcript,\n    );\n    timer.stop();\n\n    Ok((claims_mem, rand_mem, claims_ops, claims_dotp, rand_ops))\n  }\n}\n\n#[derive(Debug, Serialize, Deserialize)]\nstruct PolyEvalNetworkProof {\n  proof_prod_layer: ProductLayerProof,\n  proof_hash_layer: HashLayerProof,\n}\n\nimpl PolyEvalNetworkProof {\n  fn protocol_name() -> &'static [u8] {\n    b\"Sparse polynomial evaluation proof\"\n  }\n\n  pub fn prove(\n    network: &mut PolyEvalNetwork,\n    dense: &MultiSparseMatPolynomialAsDense,\n    derefs: &Derefs,\n    evals: &[Scalar],\n    gens: &SparseMatPolyCommitmentGens,\n    transcript: &mut Transcript,\n    random_tape: &mut RandomTape,\n  ) -> Self {\n    transcript.append_protocol_name(PolyEvalNetworkProof::protocol_name());\n\n    let (proof_prod_layer, rand_mem, rand_ops) = ProductLayerProof::prove(\n      &mut network.row_layers.prod_layer,\n      &mut network.col_layers.prod_layer,\n      dense,\n      derefs,\n      evals,\n      transcript,\n    );\n\n    // proof of hash layer for row and col\n    let proof_hash_layer = HashLayerProof::prove(\n      (&rand_mem, &rand_ops),\n      dense,\n      derefs,\n      gens,\n  
    transcript,\n      random_tape,\n    );\n\n    PolyEvalNetworkProof {\n      proof_prod_layer,\n      proof_hash_layer,\n    }\n  }\n\n  pub fn verify(\n    &self,\n    comm: &SparseMatPolyCommitment,\n    comm_derefs: &DerefsCommitment,\n    evals: &[Scalar],\n    gens: &SparseMatPolyCommitmentGens,\n    rx: &[Scalar],\n    ry: &[Scalar],\n    r_mem_check: &(Scalar, Scalar),\n    nz: usize,\n    transcript: &mut Transcript,\n  ) -> Result<(), ProofVerifyError> {\n    let timer = Timer::new(\"verify_polyeval_proof\");\n    transcript.append_protocol_name(PolyEvalNetworkProof::protocol_name());\n\n    let num_instances = evals.len();\n    let (r_hash, r_multiset_check) = r_mem_check;\n\n    let num_ops = nz.next_power_of_two();\n    let num_cells = rx.len().pow2();\n    assert_eq!(rx.len(), ry.len());\n\n    let (claims_mem, rand_mem, mut claims_ops, claims_dotp, rand_ops) = self\n      .proof_prod_layer\n      .verify(num_ops, num_cells, evals, transcript)?;\n    assert_eq!(claims_mem.len(), 4);\n    assert_eq!(claims_ops.len(), 4 * num_instances);\n    assert_eq!(claims_dotp.len(), 3 * num_instances);\n\n    let (claims_ops_row, claims_ops_col) = claims_ops.split_at_mut(2 * num_instances);\n    let (claims_ops_row_read, claims_ops_row_write) = claims_ops_row.split_at_mut(num_instances);\n    let (claims_ops_col_read, claims_ops_col_write) = claims_ops_col.split_at_mut(num_instances);\n\n    // verify the proof of hash layer\n    self.proof_hash_layer.verify(\n      (&rand_mem, &rand_ops),\n      &(\n        claims_mem[0],\n        claims_ops_row_read.to_vec(),\n        claims_ops_row_write.to_vec(),\n        claims_mem[1],\n      ),\n      &(\n        claims_mem[2],\n        claims_ops_col_read.to_vec(),\n        claims_ops_col_write.to_vec(),\n        claims_mem[3],\n      ),\n      &claims_dotp,\n      comm,\n      gens,\n      comm_derefs,\n      rx,\n      ry,\n      r_hash,\n      r_multiset_check,\n      transcript,\n    )?;\n    timer.stop();\n\n    
Ok(())\n  }\n}\n\n#[derive(Debug, Serialize, Deserialize)]\npub struct SparseMatPolyEvalProof {\n  comm_derefs: DerefsCommitment,\n  poly_eval_network_proof: PolyEvalNetworkProof,\n}\n\nimpl SparseMatPolyEvalProof {\n  fn protocol_name() -> &'static [u8] {\n    b\"Sparse polynomial evaluation proof\"\n  }\n\n  fn equalize(rx: &[Scalar], ry: &[Scalar]) -> (Vec<Scalar>, Vec<Scalar>) {\n    match rx.len().cmp(&ry.len()) {\n      Ordering::Less => {\n        let diff = ry.len() - rx.len();\n        let mut rx_ext = vec![Scalar::zero(); diff];\n        rx_ext.extend(rx);\n        (rx_ext, ry.to_vec())\n      }\n      Ordering::Greater => {\n        let diff = rx.len() - ry.len();\n        let mut ry_ext = vec![Scalar::zero(); diff];\n        ry_ext.extend(ry);\n        (rx.to_vec(), ry_ext)\n      }\n      Ordering::Equal => (rx.to_vec(), ry.to_vec()),\n    }\n  }\n\n  pub fn prove(\n    dense: &MultiSparseMatPolynomialAsDense,\n    rx: &[Scalar], // point at which the polynomial is evaluated\n    ry: &[Scalar],\n    evals: &[Scalar], // a vector evaluation of \\widetilde{M}(r = (rx,ry)) for each M\n    gens: &SparseMatPolyCommitmentGens,\n    transcript: &mut Transcript,\n    random_tape: &mut RandomTape,\n  ) -> SparseMatPolyEvalProof {\n    transcript.append_protocol_name(SparseMatPolyEvalProof::protocol_name());\n\n    // ensure there is one eval for each polynomial in dense\n    assert_eq!(evals.len(), dense.batch_size);\n\n    let (mem_rx, mem_ry) = {\n      // equalize the lengths of rx and ry\n      let (rx_ext, ry_ext) = SparseMatPolyEvalProof::equalize(rx, ry);\n      let poly_rx = EqPolynomial::new(rx_ext).evals();\n      let poly_ry = EqPolynomial::new(ry_ext).evals();\n      (poly_rx, poly_ry)\n    };\n\n    let derefs = dense.deref(&mem_rx, &mem_ry);\n\n    // commit to non-deterministic choices of the prover\n    let timer_commit = Timer::new(\"commit_nondet_witness\");\n    let comm_derefs = {\n      let comm = derefs.commit(&gens.gens_derefs);\n      
comm.append_to_transcript(b\"comm_poly_row_col_ops_val\", transcript);\n      comm\n    };\n    timer_commit.stop();\n\n    let poly_eval_network_proof = {\n      // produce a random element from the transcript for hash function\n      let r_mem_check = transcript.challenge_vector(b\"challenge_r_hash\", 2);\n\n      // build a network to evaluate the sparse polynomial\n      let timer_build_network = Timer::new(\"build_layered_network\");\n      let mut net = PolyEvalNetwork::new(\n        dense,\n        &derefs,\n        &mem_rx,\n        &mem_ry,\n        &(r_mem_check[0], r_mem_check[1]),\n      );\n      timer_build_network.stop();\n\n      let timer_eval_network = Timer::new(\"evalproof_layered_network\");\n      let poly_eval_network_proof = PolyEvalNetworkProof::prove(\n        &mut net,\n        dense,\n        &derefs,\n        evals,\n        gens,\n        transcript,\n        random_tape,\n      );\n      timer_eval_network.stop();\n\n      poly_eval_network_proof\n    };\n\n    SparseMatPolyEvalProof {\n      comm_derefs,\n      poly_eval_network_proof,\n    }\n  }\n\n  pub fn verify(\n    &self,\n    comm: &SparseMatPolyCommitment,\n    rx: &[Scalar], // point at which the polynomial is evaluated\n    ry: &[Scalar],\n    evals: &[Scalar], // evaluation of \\widetilde{M}(r = (rx,ry))\n    gens: &SparseMatPolyCommitmentGens,\n    transcript: &mut Transcript,\n  ) -> Result<(), ProofVerifyError> {\n    transcript.append_protocol_name(SparseMatPolyEvalProof::protocol_name());\n\n    // equalize the lengths of rx and ry\n    let (rx_ext, ry_ext) = SparseMatPolyEvalProof::equalize(rx, ry);\n\n    let (nz, num_mem_cells) = (comm.num_ops, comm.num_mem_cells);\n    assert_eq!(rx_ext.len().pow2(), num_mem_cells);\n\n    // add claims to transcript and obtain challenges for randomized mem-check circuit\n    self\n      .comm_derefs\n      .append_to_transcript(b\"comm_poly_row_col_ops_val\", transcript);\n\n    // produce a random element from the transcript 
for hash function\n    let r_mem_check = transcript.challenge_vector(b\"challenge_r_hash\", 2);\n\n    self.poly_eval_network_proof.verify(\n      comm,\n      &self.comm_derefs,\n      evals,\n      gens,\n      &rx_ext,\n      &ry_ext,\n      &(r_mem_check[0], r_mem_check[1]),\n      nz,\n      transcript,\n    )\n  }\n}\n\npub struct SparsePolyEntry {\n  idx: usize,\n  val: Scalar,\n}\n\nimpl SparsePolyEntry {\n  pub fn new(idx: usize, val: Scalar) -> Self {\n    SparsePolyEntry { idx, val }\n  }\n}\n\npub struct SparsePolynomial {\n  num_vars: usize,\n  Z: Vec<SparsePolyEntry>,\n}\n\nimpl SparsePolynomial {\n  pub fn new(num_vars: usize, Z: Vec<SparsePolyEntry>) -> Self {\n    SparsePolynomial { num_vars, Z }\n  }\n\n  fn compute_chi(a: &[bool], r: &[Scalar]) -> Scalar {\n    assert_eq!(a.len(), r.len());\n    let mut chi_i = Scalar::one();\n    for j in 0..r.len() {\n      if a[j] {\n        chi_i *= r[j];\n      } else {\n        chi_i *= Scalar::one() - r[j];\n      }\n    }\n    chi_i\n  }\n\n  // Takes O(n log n). 
TODO: do this in O(n) where n is the number of entries in Z\n  pub fn evaluate(&self, r: &[Scalar]) -> Scalar {\n    assert_eq!(self.num_vars, r.len());\n\n    (0..self.Z.len())\n      .map(|i| {\n        let bits = self.Z[i].idx.get_bits(r.len());\n        SparsePolynomial::compute_chi(&bits, r) * self.Z[i].val\n      })\n      .sum()\n  }\n}\n\n#[cfg(test)]\nmod tests {\n  use super::*;\n  use rand_core::{RngCore, OsRng};\n  #[test]\n  fn check_sparse_polyeval_proof() {\n    let mut csprng: OsRng = OsRng;\n\n    let num_nz_entries: usize = 256;\n    let num_rows: usize = 256;\n    let num_cols: usize = 256;\n    let num_vars_x: usize = num_rows.log_2();\n    let num_vars_y: usize = num_cols.log_2();\n\n    let mut M: Vec<SparseMatEntry> = Vec::new();\n\n    for _i in 0..num_nz_entries {\n      M.push(SparseMatEntry::new(\n        (csprng.next_u64() % (num_rows as u64)) as usize,\n        (csprng.next_u64() % (num_cols as u64)) as usize,\n        Scalar::random(&mut csprng),\n      ));\n    }\n\n    let poly_M = SparseMatPolynomial::new(num_vars_x, num_vars_y, M);\n    let gens = SparseMatPolyCommitmentGens::new(\n      b\"gens_sparse_poly\",\n      num_vars_x,\n      num_vars_y,\n      num_nz_entries,\n      3,\n    );\n\n    // commitment\n    let (poly_comm, dense) = SparseMatPolynomial::multi_commit(&[&poly_M, &poly_M, &poly_M], &gens);\n\n    // evaluation\n    let rx: Vec<Scalar> = (0..num_vars_x)\n      .map(|_i| Scalar::random(&mut csprng))\n      .collect::<Vec<Scalar>>();\n    let ry: Vec<Scalar> = (0..num_vars_y)\n      .map(|_i| Scalar::random(&mut csprng))\n      .collect::<Vec<Scalar>>();\n    let eval = SparseMatPolynomial::multi_evaluate(&[&poly_M], &rx, &ry);\n    let evals = vec![eval[0], eval[0], eval[0]];\n\n    let mut random_tape = RandomTape::new(b\"proof\");\n    let mut prover_transcript = Transcript::new(b\"example\");\n    let proof = SparseMatPolyEvalProof::prove(\n      &dense,\n      &rx,\n      &ry,\n      &evals,\n      &gens,\n     
 &mut prover_transcript,\n      &mut random_tape,\n    );\n\n    let mut verifier_transcript = Transcript::new(b\"example\");\n    assert!(proof\n      .verify(\n        &poly_comm,\n        &rx,\n        &ry,\n        &evals,\n        &gens,\n        &mut verifier_transcript,\n      )\n      .is_ok());\n  }\n}\n"
  },
  {
    "path": "packages/Spartan-secq/src/sumcheck.rs",
    "content": "#![allow(clippy::too_many_arguments)]\n#![allow(clippy::type_complexity)]\nuse super::commitments::{Commitments, MultiCommitGens};\nuse super::dense_mlpoly::DensePolynomial;\nuse super::errors::ProofVerifyError;\nuse super::group::{CompressedGroup, GroupElement, VartimeMultiscalarMul};\nuse super::nizk::DotProductProof;\nuse super::random::RandomTape;\nuse super::scalar::Scalar;\nuse super::transcript::{AppendToTranscript, ProofTranscript};\nuse super::unipoly::{CompressedUniPoly, UniPoly};\nuse crate::group::DecompressEncodedPoint;\nuse core::iter;\nuse itertools::izip;\nuse merlin::Transcript;\nuse serde::{Deserialize, Serialize};\n\n#[derive(Serialize, Deserialize, Debug)]\npub struct SumcheckInstanceProof {\n  compressed_polys: Vec<CompressedUniPoly>,\n}\n\nimpl SumcheckInstanceProof {\n  pub fn new(compressed_polys: Vec<CompressedUniPoly>) -> SumcheckInstanceProof {\n    SumcheckInstanceProof { compressed_polys }\n  }\n\n  pub fn verify(\n    &self,\n    claim: Scalar,\n    num_rounds: usize,\n    degree_bound: usize,\n    transcript: &mut Transcript,\n  ) -> Result<(Scalar, Vec<Scalar>), ProofVerifyError> {\n    let mut e = claim;\n    let mut r: Vec<Scalar> = Vec::new();\n\n    // verify that there is a univariate polynomial for each round\n    assert_eq!(self.compressed_polys.len(), num_rounds);\n    for i in 0..self.compressed_polys.len() {\n      let poly = self.compressed_polys[i].decompress(&e);\n\n      // verify degree bound\n      assert_eq!(poly.degree(), degree_bound);\n\n      // check if G_k(0) + G_k(1) = e\n      assert_eq!(poly.eval_at_zero() + poly.eval_at_one(), e);\n\n      // append the prover's message to the transcript\n      poly.append_to_transcript(b\"poly\", transcript);\n\n      //derive the verifier's challenge for the next round\n      let r_i = transcript.challenge_scalar(b\"challenge_nextround\");\n\n      r.push(r_i);\n\n      // evaluate the claimed degree-ell polynomial at r_i\n      e = poly.evaluate(&r_i);\n  
  }\n\n    Ok((e, r))\n  }\n}\n\n#[derive(Serialize, Deserialize, Debug)]\npub struct ZKSumcheckInstanceProof {\n  comm_polys: Vec<CompressedGroup>,\n  comm_evals: Vec<CompressedGroup>,\n  proofs: Vec<DotProductProof>,\n}\n\nimpl ZKSumcheckInstanceProof {\n  pub fn new(\n    comm_polys: Vec<CompressedGroup>,\n    comm_evals: Vec<CompressedGroup>,\n    proofs: Vec<DotProductProof>,\n  ) -> Self {\n    ZKSumcheckInstanceProof {\n      comm_polys,\n      comm_evals,\n      proofs,\n    }\n  }\n\n  pub fn verify(\n    &self,\n    comm_claim: &CompressedGroup,\n    num_rounds: usize,\n    degree_bound: usize,\n    gens_1: &MultiCommitGens,\n    gens_n: &MultiCommitGens,\n    transcript: &mut Transcript,\n  ) -> Result<(CompressedGroup, Vec<Scalar>), ProofVerifyError> {\n    // verify degree bound\n    assert_eq!(gens_n.n, degree_bound + 1);\n\n    // verify that there is a univariate polynomial for each round\n    assert_eq!(self.comm_polys.len(), num_rounds);\n    assert_eq!(self.comm_evals.len(), num_rounds);\n\n    let mut r: Vec<Scalar> = Vec::new();\n    for i in 0..self.comm_polys.len() {\n      let comm_poly = &self.comm_polys[i];\n\n      // append the prover's polynomial to the transcript\n      comm_poly.append_to_transcript(b\"comm_poly\", transcript);\n\n      //derive the verifier's challenge for the next round\n      let r_i = transcript.challenge_scalar(b\"challenge_nextround\");\n\n      // verify the proof of sum-check and evals\n      let res = {\n        let comm_claim_per_round = if i == 0 {\n          comm_claim\n        } else {\n          &self.comm_evals[i - 1]\n        };\n        let comm_eval = &self.comm_evals[i];\n\n        // add two claims to transcript\n        comm_claim_per_round.append_to_transcript(b\"comm_claim_per_round\", transcript);\n        comm_eval.append_to_transcript(b\"comm_eval\", transcript);\n\n        // produce two weights\n        let w = transcript.challenge_vector(b\"combine_two_claims_to_one\", 2);\n\n        // 
compute a weighted sum of the RHS\n        let comm_target = GroupElement::vartime_multiscalar_mul(\n          w.clone(),\n          iter::once(&comm_claim_per_round)\n            .chain(iter::once(&comm_eval))\n            .map(|pt| pt.decompress().unwrap())\n            .collect(),\n        )\n        .compress();\n\n        let a = {\n          // the vector to use to decommit for sum-check test\n          let a_sc = {\n            let mut a = vec![Scalar::one(); degree_bound + 1];\n            a[0] += Scalar::one();\n            a\n          };\n\n          // the vector to use to decommit for evaluation\n          let a_eval = {\n            let mut a = vec![Scalar::one(); degree_bound + 1];\n            for j in 1..a.len() {\n              a[j] = a[j - 1] * r_i;\n            }\n            a\n          };\n\n          // take weighted sum of the two vectors using w\n          assert_eq!(a_sc.len(), a_eval.len());\n          (0..a_sc.len())\n            .map(|i| w[0] * a_sc[i] + w[1] * a_eval[i])\n            .collect::<Vec<Scalar>>()\n        };\n\n        self.proofs[i]\n          .verify(\n            gens_1,\n            gens_n,\n            transcript,\n            &a,\n            &self.comm_polys[i],\n            &comm_target,\n          )\n          .is_ok()\n      };\n      if !res {\n        return Err(ProofVerifyError::InternalError);\n      }\n\n      r.push(r_i);\n    }\n\n    Ok((self.comm_evals[self.comm_evals.len() - 1], r))\n  }\n}\n\nimpl SumcheckInstanceProof {\n  pub fn prove_cubic<F>(\n    claim: &Scalar,\n    num_rounds: usize,\n    poly_A: &mut DensePolynomial,\n    poly_B: &mut DensePolynomial,\n    poly_C: &mut DensePolynomial,\n    comb_func: F,\n    transcript: &mut Transcript,\n  ) -> (Self, Vec<Scalar>, Vec<Scalar>)\n  where\n    F: Fn(&Scalar, &Scalar, &Scalar) -> Scalar,\n  {\n    let mut e = *claim;\n    let mut r: Vec<Scalar> = Vec::new();\n    let mut cubic_polys: Vec<CompressedUniPoly> = Vec::new();\n    for _j in 
0..num_rounds {\n      let mut eval_point_0 = Scalar::zero();\n      let mut eval_point_2 = Scalar::zero();\n      let mut eval_point_3 = Scalar::zero();\n\n      let len = poly_A.len() / 2;\n      for i in 0..len {\n        // eval 0: bound_func is A(low)\n        eval_point_0 += comb_func(&poly_A[i], &poly_B[i], &poly_C[i]);\n\n        // eval 2: bound_func is -A(low) + 2*A(high)\n        let poly_A_bound_point = poly_A[len + i] + poly_A[len + i] - poly_A[i];\n        let poly_B_bound_point = poly_B[len + i] + poly_B[len + i] - poly_B[i];\n        let poly_C_bound_point = poly_C[len + i] + poly_C[len + i] - poly_C[i];\n        eval_point_2 += comb_func(\n          &poly_A_bound_point,\n          &poly_B_bound_point,\n          &poly_C_bound_point,\n        );\n\n        // eval 3: bound_func is -2A(low) + 3A(high); computed incrementally with bound_func applied to eval(2)\n        let poly_A_bound_point = poly_A_bound_point + poly_A[len + i] - poly_A[i];\n        let poly_B_bound_point = poly_B_bound_point + poly_B[len + i] - poly_B[i];\n        let poly_C_bound_point = poly_C_bound_point + poly_C[len + i] - poly_C[i];\n\n        eval_point_3 += comb_func(\n          &poly_A_bound_point,\n          &poly_B_bound_point,\n          &poly_C_bound_point,\n        );\n      }\n\n      let evals = vec![eval_point_0, e - eval_point_0, eval_point_2, eval_point_3];\n      let poly = UniPoly::from_evals(&evals);\n\n      // append the prover's message to the transcript\n      poly.append_to_transcript(b\"poly\", transcript);\n\n      //derive the verifier's challenge for the next round\n      let r_j = transcript.challenge_scalar(b\"challenge_nextround\");\n      r.push(r_j);\n      // bound all tables to the verifier's challenge\n      poly_A.bound_poly_var_top(&r_j);\n      poly_B.bound_poly_var_top(&r_j);\n      poly_C.bound_poly_var_top(&r_j);\n      e = poly.evaluate(&r_j);\n      cubic_polys.push(poly.compress());\n    }\n\n    (\n      
SumcheckInstanceProof::new(cubic_polys),\n      r,\n      vec![poly_A[0], poly_B[0], poly_C[0]],\n    )\n  }\n\n  pub fn prove_cubic_batched<F>(\n    claim: &Scalar,\n    num_rounds: usize,\n    poly_vec_par: (\n      &mut Vec<&mut DensePolynomial>,\n      &mut Vec<&mut DensePolynomial>,\n      &mut DensePolynomial,\n    ),\n    poly_vec_seq: (\n      &mut Vec<&mut DensePolynomial>,\n      &mut Vec<&mut DensePolynomial>,\n      &mut Vec<&mut DensePolynomial>,\n    ),\n    coeffs: &[Scalar],\n    comb_func: F,\n    transcript: &mut Transcript,\n  ) -> (\n    Self,\n    Vec<Scalar>,\n    (Vec<Scalar>, Vec<Scalar>, Scalar),\n    (Vec<Scalar>, Vec<Scalar>, Vec<Scalar>),\n  )\n  where\n    F: Fn(&Scalar, &Scalar, &Scalar) -> Scalar,\n  {\n    let (poly_A_vec_par, poly_B_vec_par, poly_C_par) = poly_vec_par;\n    let (poly_A_vec_seq, poly_B_vec_seq, poly_C_vec_seq) = poly_vec_seq;\n\n    let mut e = *claim;\n    let mut r: Vec<Scalar> = Vec::new();\n    let mut cubic_polys: Vec<CompressedUniPoly> = Vec::new();\n\n    for _j in 0..num_rounds {\n      let mut evals: Vec<(Scalar, Scalar, Scalar)> = Vec::new();\n\n      for (poly_A, poly_B) in poly_A_vec_par.iter().zip(poly_B_vec_par.iter()) {\n        let mut eval_point_0 = Scalar::zero();\n        let mut eval_point_2 = Scalar::zero();\n        let mut eval_point_3 = Scalar::zero();\n\n        let len = poly_A.len() / 2;\n        for i in 0..len {\n          // eval 0: bound_func is A(low)\n          eval_point_0 += comb_func(&poly_A[i], &poly_B[i], &poly_C_par[i]);\n\n          // eval 2: bound_func is -A(low) + 2*A(high)\n          let poly_A_bound_point = poly_A[len + i] + poly_A[len + i] - poly_A[i];\n          let poly_B_bound_point = poly_B[len + i] + poly_B[len + i] - poly_B[i];\n          let poly_C_bound_point = poly_C_par[len + i] + poly_C_par[len + i] - poly_C_par[i];\n          eval_point_2 += comb_func(\n            
&poly_A_bound_point,\n            &poly_B_bound_point,\n            &poly_C_bound_point,\n          );\n\n          // eval 3: bound_func is -2A(low) + 3A(high); computed incrementally with bound_func applied to eval(2)\n          let poly_A_bound_point = poly_A_bound_point + poly_A[len + i] - poly_A[i];\n          let poly_B_bound_point = poly_B_bound_point + poly_B[len + i] - poly_B[i];\n          let poly_C_bound_point = poly_C_bound_point + poly_C_par[len + i] - poly_C_par[i];\n\n          eval_point_3 += comb_func(\n            &poly_A_bound_point,\n            &poly_B_bound_point,\n            &poly_C_bound_point,\n          );\n        }\n\n        evals.push((eval_point_0, eval_point_2, eval_point_3));\n      }\n\n      for (poly_A, poly_B, poly_C) in izip!(\n        poly_A_vec_seq.iter(),\n        poly_B_vec_seq.iter(),\n        poly_C_vec_seq.iter()\n      ) {\n        let mut eval_point_0 = Scalar::zero();\n        let mut eval_point_2 = Scalar::zero();\n        let mut eval_point_3 = Scalar::zero();\n        let len = poly_A.len() / 2;\n        for i in 0..len {\n          // eval 0: bound_func is A(low)\n          eval_point_0 += comb_func(&poly_A[i], &poly_B[i], &poly_C[i]);\n          // eval 2: bound_func is -A(low) + 2*A(high)\n          let poly_A_bound_point = poly_A[len + i] + poly_A[len + i] - poly_A[i];\n          let poly_B_bound_point = poly_B[len + i] + poly_B[len + i] - poly_B[i];\n          let poly_C_bound_point = poly_C[len + i] + poly_C[len + i] - poly_C[i];\n          eval_point_2 += comb_func(\n            &poly_A_bound_point,\n            &poly_B_bound_point,\n            &poly_C_bound_point,\n          );\n          // eval 3: bound_func is -2A(low) + 3A(high); computed incrementally with bound_func applied to eval(2)\n          let poly_A_bound_point = poly_A_bound_point + poly_A[len + i] - poly_A[i];\n          let poly_B_bound_point = poly_B_bound_point + poly_B[len + i] - poly_B[i];\n          let poly_C_bound_point = 
poly_C_bound_point + poly_C[len + i] - poly_C[i];\n          eval_point_3 += comb_func(\n            &poly_A_bound_point,\n            &poly_B_bound_point,\n            &poly_C_bound_point,\n          );\n        }\n        evals.push((eval_point_0, eval_point_2, eval_point_3));\n      }\n\n      let evals_combined_0 = (0..evals.len()).map(|i| evals[i].0 * coeffs[i]).sum();\n      let evals_combined_2 = (0..evals.len()).map(|i| evals[i].1 * coeffs[i]).sum();\n      let evals_combined_3 = (0..evals.len()).map(|i| evals[i].2 * coeffs[i]).sum();\n\n      let evals = vec![\n        evals_combined_0,\n        e - evals_combined_0,\n        evals_combined_2,\n        evals_combined_3,\n      ];\n      let poly = UniPoly::from_evals(&evals);\n\n      // append the prover's message to the transcript\n      poly.append_to_transcript(b\"poly\", transcript);\n\n      //derive the verifier's challenge for the next round\n      let r_j = transcript.challenge_scalar(b\"challenge_nextround\");\n      r.push(r_j);\n\n      // bound all tables to the verifier's challenge\n      for (poly_A, poly_B) in poly_A_vec_par.iter_mut().zip(poly_B_vec_par.iter_mut()) {\n        poly_A.bound_poly_var_top(&r_j);\n        poly_B.bound_poly_var_top(&r_j);\n      }\n      poly_C_par.bound_poly_var_top(&r_j);\n\n      for (poly_A, poly_B, poly_C) in izip!(\n        poly_A_vec_seq.iter_mut(),\n        poly_B_vec_seq.iter_mut(),\n        poly_C_vec_seq.iter_mut()\n      ) {\n        poly_A.bound_poly_var_top(&r_j);\n        poly_B.bound_poly_var_top(&r_j);\n        poly_C.bound_poly_var_top(&r_j);\n      }\n\n      e = poly.evaluate(&r_j);\n      cubic_polys.push(poly.compress());\n    }\n\n    let poly_A_par_final = (0..poly_A_vec_par.len())\n      .map(|i| poly_A_vec_par[i][0])\n      .collect();\n    let poly_B_par_final = (0..poly_B_vec_par.len())\n      .map(|i| poly_B_vec_par[i][0])\n      .collect();\n    let claims_prod = (poly_A_par_final, poly_B_par_final, poly_C_par[0]);\n\n    let 
poly_A_seq_final = (0..poly_A_vec_seq.len())\n      .map(|i| poly_A_vec_seq[i][0])\n      .collect();\n    let poly_B_seq_final = (0..poly_B_vec_seq.len())\n      .map(|i| poly_B_vec_seq[i][0])\n      .collect();\n    let poly_C_seq_final = (0..poly_C_vec_seq.len())\n      .map(|i| poly_C_vec_seq[i][0])\n      .collect();\n    let claims_dotp = (poly_A_seq_final, poly_B_seq_final, poly_C_seq_final);\n\n    (\n      SumcheckInstanceProof::new(cubic_polys),\n      r,\n      claims_prod,\n      claims_dotp,\n    )\n  }\n}\n\nimpl ZKSumcheckInstanceProof {\n  pub fn prove_quad<F>(\n    claim: &Scalar,\n    blind_claim: &Scalar,\n    num_rounds: usize,\n    poly_A: &mut DensePolynomial,\n    poly_B: &mut DensePolynomial,\n    comb_func: F,\n    gens_1: &MultiCommitGens,\n    gens_n: &MultiCommitGens,\n    transcript: &mut Transcript,\n    random_tape: &mut RandomTape,\n  ) -> (Self, Vec<Scalar>, Vec<Scalar>, Scalar)\n  where\n    F: Fn(&Scalar, &Scalar) -> Scalar,\n  {\n    let (blinds_poly, blinds_evals) = (\n      random_tape.random_vector(b\"blinds_poly\", num_rounds),\n      random_tape.random_vector(b\"blinds_evals\", num_rounds),\n    );\n    let mut claim_per_round = *claim;\n    let mut comm_claim_per_round = claim_per_round.commit(blind_claim, gens_1).compress();\n\n    let mut r: Vec<Scalar> = Vec::new();\n    let mut comm_polys: Vec<CompressedGroup> = Vec::new();\n    let mut comm_evals: Vec<CompressedGroup> = Vec::new();\n    let mut proofs: Vec<DotProductProof> = Vec::new();\n\n    for j in 0..num_rounds {\n      let (poly, comm_poly) = {\n        let mut eval_point_0 = Scalar::zero();\n        let mut eval_point_2 = Scalar::zero();\n\n        let len = poly_A.len() / 2;\n        for i in 0..len {\n          // eval 0: bound_func is A(low)\n          eval_point_0 += comb_func(&poly_A[i], &poly_B[i]);\n\n          // eval 2: bound_func is -A(low) + 2*A(high)\n          let poly_A_bound_point = poly_A[len + i] + poly_A[len + i] - poly_A[i];\n          let 
poly_B_bound_point = poly_B[len + i] + poly_B[len + i] - poly_B[i];\n          eval_point_2 += comb_func(&poly_A_bound_point, &poly_B_bound_point);\n        }\n\n        let evals = vec![eval_point_0, claim_per_round - eval_point_0, eval_point_2];\n        let poly = UniPoly::from_evals(&evals);\n        let comm_poly = poly.commit(gens_n, &blinds_poly[j]).compress();\n        (poly, comm_poly)\n      };\n\n      // append the prover's message to the transcript\n      comm_poly.append_to_transcript(b\"comm_poly\", transcript);\n      comm_polys.push(comm_poly);\n\n      // derive the verifier's challenge for the next round\n      let r_j = transcript.challenge_scalar(b\"challenge_nextround\");\n\n      // bound all tables to the verifier's challenge\n      poly_A.bound_poly_var_top(&r_j);\n      poly_B.bound_poly_var_top(&r_j);\n\n      // produce a proof of sum-check and of evaluation\n      let (proof, claim_next_round, comm_claim_next_round) = {\n        let eval = poly.evaluate(&r_j);\n        let comm_eval = eval.commit(&blinds_evals[j], gens_1).compress();\n\n        // we need to prove the following under homomorphic commitments:\n        // (1) poly(0) + poly(1) = claim_per_round\n        // (2) poly(r_j) = eval\n\n        // Our technique is to leverage dot product proofs:\n        // (1) we can prove: <poly_in_coeffs_form, (2, 1, 1)> = claim_per_round\n        // (2) we can prove: <poly_in_coeffs_form, (1, r_j, r^2_j, ..)> 
= eval\n        // for efficiency we batch them using random weights\n\n        // add two claims to transcript\n        comm_claim_per_round.append_to_transcript(b\"comm_claim_per_round\", transcript);\n        comm_eval.append_to_transcript(b\"comm_eval\", transcript);\n\n        // produce two weights\n        let w = transcript.challenge_vector(b\"combine_two_claims_to_one\", 2);\n\n        // compute a weighted sum of the RHS\n        let target = w[0] * claim_per_round + w[1] * eval;\n        let comm_target = GroupElement::vartime_multiscalar_mul(\n          w.clone(),\n          iter::once(&comm_claim_per_round)\n            .chain(iter::once(&comm_eval))\n            .map(|pt| pt.decompress().unwrap())\n            .collect(),\n        )\n        .compress();\n\n        let blind = {\n          let blind_sc = if j == 0 {\n            blind_claim\n          } else {\n            &blinds_evals[j - 1]\n          };\n\n          let blind_eval = &blinds_evals[j];\n\n          w[0] * blind_sc + w[1] * blind_eval\n        };\n        assert_eq!(target.commit(&blind, gens_1).compress(), comm_target);\n\n        let a = {\n          // the vector to use to decommit for sum-check test\n          let a_sc = {\n            let mut a = vec![Scalar::one(); poly.degree() + 1];\n            a[0] += Scalar::one();\n            a\n          };\n\n          // the vector to use to decommit for evaluation\n          let a_eval = {\n            let mut a = vec![Scalar::one(); poly.degree() + 1];\n            for j in 1..a.len() {\n              a[j] = a[j - 1] * r_j;\n            }\n            a\n          };\n\n          // take weighted sum of the two vectors using w\n          assert_eq!(a_sc.len(), a_eval.len());\n          (0..a_sc.len())\n            .map(|i| w[0] * a_sc[i] + w[1] * a_eval[i])\n            .collect::<Vec<Scalar>>()\n        };\n\n        let (proof, _comm_poly, _comm_sc_eval) = DotProductProof::prove(\n          gens_1,\n          gens_n,\n          
transcript,\n          random_tape,\n          &poly.as_vec(),\n          &blinds_poly[j],\n          &a,\n          &target,\n          &blind,\n        );\n\n        (proof, eval, comm_eval)\n      };\n\n      claim_per_round = claim_next_round;\n      comm_claim_per_round = comm_claim_next_round;\n\n      proofs.push(proof);\n      r.push(r_j);\n      comm_evals.push(comm_claim_per_round);\n    }\n\n    (\n      ZKSumcheckInstanceProof::new(comm_polys, comm_evals, proofs),\n      r,\n      vec![poly_A[0], poly_B[0]],\n      blinds_evals[num_rounds - 1],\n    )\n  }\n\n  pub fn prove_cubic_with_additive_term<F>(\n    claim: &Scalar,\n    blind_claim: &Scalar,\n    num_rounds: usize,\n    poly_A: &mut DensePolynomial,\n    poly_B: &mut DensePolynomial,\n    poly_C: &mut DensePolynomial,\n    poly_D: &mut DensePolynomial,\n    comb_func: F,\n    gens_1: &MultiCommitGens,\n    gens_n: &MultiCommitGens,\n    transcript: &mut Transcript,\n    random_tape: &mut RandomTape,\n  ) -> (Self, Vec<Scalar>, Vec<Scalar>, Scalar)\n  where\n    F: Fn(&Scalar, &Scalar, &Scalar, &Scalar) -> Scalar,\n  {\n    let (blinds_poly, blinds_evals) = (\n      random_tape.random_vector(b\"blinds_poly\", num_rounds),\n      random_tape.random_vector(b\"blinds_evals\", num_rounds),\n    );\n\n    let mut claim_per_round = *claim;\n    let mut comm_claim_per_round = claim_per_round.commit(blind_claim, gens_1).compress();\n\n    let mut r: Vec<Scalar> = Vec::new();\n    let mut comm_polys: Vec<CompressedGroup> = Vec::new();\n    let mut comm_evals: Vec<CompressedGroup> = Vec::new();\n    let mut proofs: Vec<DotProductProof> = Vec::new();\n\n    for j in 0..num_rounds {\n      let (poly, comm_poly) = {\n        let mut eval_point_0 = Scalar::zero();\n        let mut eval_point_2 = Scalar::zero();\n        let mut eval_point_3 = Scalar::zero();\n\n        let len = poly_A.len() / 2;\n        for i in 0..len {\n          // eval 0: bound_func is A(low)\n          eval_point_0 += 
comb_func(&poly_A[i], &poly_B[i], &poly_C[i], &poly_D[i]);\n\n          // eval 2: bound_func is -A(low) + 2*A(high)\n          let poly_A_bound_point = poly_A[len + i] + poly_A[len + i] - poly_A[i];\n          let poly_B_bound_point = poly_B[len + i] + poly_B[len + i] - poly_B[i];\n          let poly_C_bound_point = poly_C[len + i] + poly_C[len + i] - poly_C[i];\n          let poly_D_bound_point = poly_D[len + i] + poly_D[len + i] - poly_D[i];\n          eval_point_2 += comb_func(\n            &poly_A_bound_point,\n            &poly_B_bound_point,\n            &poly_C_bound_point,\n            &poly_D_bound_point,\n          );\n\n          // eval 3: bound_func is -2A(low) + 3A(high); computed incrementally with bound_func applied to eval(2)\n          let poly_A_bound_point = poly_A_bound_point + poly_A[len + i] - poly_A[i];\n          let poly_B_bound_point = poly_B_bound_point + poly_B[len + i] - poly_B[i];\n          let poly_C_bound_point = poly_C_bound_point + poly_C[len + i] - poly_C[i];\n          let poly_D_bound_point = poly_D_bound_point + poly_D[len + i] - poly_D[i];\n          eval_point_3 += comb_func(\n            &poly_A_bound_point,\n            &poly_B_bound_point,\n            &poly_C_bound_point,\n            &poly_D_bound_point,\n          );\n        }\n\n        let evals = vec![\n          eval_point_0,\n          claim_per_round - eval_point_0,\n          eval_point_2,\n          eval_point_3,\n        ];\n        let poly = UniPoly::from_evals(&evals);\n        let comm_poly = poly.commit(gens_n, &blinds_poly[j]).compress();\n        (poly, comm_poly)\n      };\n\n      // append the prover's message to the transcript\n      comm_poly.append_to_transcript(b\"comm_poly\", transcript);\n      comm_polys.push(comm_poly);\n\n      // derive the verifier's challenge for the next round\n      let r_j = transcript.challenge_scalar(b\"challenge_nextround\");\n\n      // bound all tables to the verifier's challenge\n      
poly_A.bound_poly_var_top(&r_j);\n      poly_B.bound_poly_var_top(&r_j);\n      poly_C.bound_poly_var_top(&r_j);\n      poly_D.bound_poly_var_top(&r_j);\n\n      // produce a proof of sum-check and of evaluation\n      let (proof, claim_next_round, comm_claim_next_round) = {\n        let eval = poly.evaluate(&r_j);\n        let comm_eval = eval.commit(&blinds_evals[j], gens_1).compress();\n\n        // we need to prove the following under homomorphic commitments:\n        // (1) poly(0) + poly(1) = claim_per_round\n        // (2) poly(r_j) = eval\n\n        // Our technique is to leverage dot product proofs:\n        // (1) we can prove: <poly_in_coeffs_form, (2, 1, 1, 1)> = claim_per_round\n        // (2) we can prove: <poly_in_coeffs_form, (1, r_j, r^2_j, ..)> = eval\n        // for efficiency we batch them using random weights\n\n        // add two claims to transcript\n        comm_claim_per_round.append_to_transcript(b\"comm_claim_per_round\", transcript);\n        comm_eval.append_to_transcript(b\"comm_eval\", transcript);\n\n        // produce two weights\n        let w = transcript.challenge_vector(b\"combine_two_claims_to_one\", 2);\n\n        // compute a weighted sum of the RHS\n        let target = w[0] * claim_per_round + w[1] * eval;\n        let comm_target = GroupElement::vartime_multiscalar_mul(\n          w.clone(),\n          iter::once(&comm_claim_per_round)\n            .chain(iter::once(&comm_eval))\n            .map(|pt| pt.decompress().unwrap())\n            .collect::<Vec<GroupElement>>(),\n        )\n        .compress();\n\n        let blind = {\n          let blind_sc = if j == 0 {\n            blind_claim\n          } else {\n            &blinds_evals[j - 1]\n          };\n\n          let blind_eval = &blinds_evals[j];\n\n          w[0] * blind_sc + w[1] * blind_eval\n        };\n\n        assert_eq!(target.commit(&blind, gens_1).compress(), comm_target);\n\n        let a = {\n          // the vector to use to decommit for sum-check 
test\n          let a_sc = {\n            let mut a = vec![Scalar::one(); poly.degree() + 1];\n            a[0] += Scalar::one();\n            a\n          };\n\n          // the vector to use to decommit for evaluation\n          let a_eval = {\n            let mut a = vec![Scalar::one(); poly.degree() + 1];\n            for j in 1..a.len() {\n              a[j] = a[j - 1] * r_j;\n            }\n            a\n          };\n\n          // take weighted sum of the two vectors using w\n          assert_eq!(a_sc.len(), a_eval.len());\n          (0..a_sc.len())\n            .map(|i| w[0] * a_sc[i] + w[1] * a_eval[i])\n            .collect::<Vec<Scalar>>()\n        };\n\n        let (proof, _comm_poly, _comm_sc_eval) = DotProductProof::prove(\n          gens_1,\n          gens_n,\n          transcript,\n          random_tape,\n          &poly.as_vec(),\n          &blinds_poly[j],\n          &a,\n          &target,\n          &blind,\n        );\n\n        (proof, eval, comm_eval)\n      };\n\n      proofs.push(proof);\n      claim_per_round = claim_next_round;\n      comm_claim_per_round = comm_claim_next_round;\n      r.push(r_j);\n      comm_evals.push(comm_claim_per_round);\n    }\n\n    (\n      ZKSumcheckInstanceProof::new(comm_polys, comm_evals, proofs),\n      r,\n      vec![poly_A[0], poly_B[0], poly_C[0], poly_D[0]],\n      blinds_evals[num_rounds - 1],\n    )\n  }\n}\n"
  },
  {
    "path": "packages/Spartan-secq/src/timer.rs",
    "content": "#[cfg(feature = \"profile\")]\nuse colored::Colorize;\n#[cfg(feature = \"profile\")]\nuse core::sync::atomic::AtomicUsize;\n#[cfg(feature = \"profile\")]\nuse core::sync::atomic::Ordering;\n#[cfg(feature = \"profile\")]\nuse std::time::Instant;\n\n#[cfg(feature = \"profile\")]\npub static CALL_DEPTH: AtomicUsize = AtomicUsize::new(0);\n\n#[cfg(feature = \"profile\")]\npub struct Timer {\n  label: String,\n  timer: Instant,\n}\n\n#[cfg(feature = \"profile\")]\nimpl Timer {\n  #[inline(always)]\n  pub fn new(label: &str) -> Self {\n    let timer = Instant::now();\n    CALL_DEPTH.fetch_add(1, Ordering::Relaxed);\n    let star = \"* \";\n    println!(\n      \"{:indent$}{}{}\",\n      \"\",\n      star,\n      label.yellow().bold(),\n      indent = 2 * CALL_DEPTH.fetch_add(0, Ordering::Relaxed)\n    );\n    Self {\n      label: label.to_string(),\n      timer,\n    }\n  }\n\n  #[inline(always)]\n  pub fn stop(&self) {\n    let duration = self.timer.elapsed();\n    let star = \"* \";\n    println!(\n      \"{:indent$}{}{} {:?}\",\n      \"\",\n      star,\n      self.label.blue().bold(),\n      duration,\n      indent = 2 * CALL_DEPTH.fetch_add(0, Ordering::Relaxed)\n    );\n    CALL_DEPTH.fetch_sub(1, Ordering::Relaxed);\n  }\n\n  #[inline(always)]\n  pub fn print(msg: &str) {\n    CALL_DEPTH.fetch_add(1, Ordering::Relaxed);\n    let star = \"* \";\n    println!(\n      \"{:indent$}{}{}\",\n      \"\",\n      star,\n      msg.to_string().green().bold(),\n      indent = 2 * CALL_DEPTH.fetch_add(0, Ordering::Relaxed)\n    );\n    CALL_DEPTH.fetch_sub(1, Ordering::Relaxed);\n  }\n}\n\n#[cfg(not(feature = \"profile\"))]\npub struct Timer {\n  _label: String,\n}\n\n#[cfg(not(feature = \"profile\"))]\nimpl Timer {\n  #[inline(always)]\n  pub fn new(label: &str) -> Self {\n    Self {\n      _label: label.to_string(),\n    }\n  }\n\n  #[inline(always)]\n  pub fn stop(&self) {}\n\n  #[inline(always)]\n  pub fn print(_msg: &str) {}\n}\n"
  },
  {
    "path": "packages/Spartan-secq/src/transcript.rs",
    "content": "use super::group::CompressedGroup;\nuse super::scalar::Scalar;\nuse merlin::Transcript;\n\npub trait ProofTranscript {\n  fn append_protocol_name(&mut self, protocol_name: &'static [u8]);\n  fn append_scalar(&mut self, label: &'static [u8], scalar: &Scalar);\n  fn append_point(&mut self, label: &'static [u8], point: &CompressedGroup);\n  fn challenge_scalar(&mut self, label: &'static [u8]) -> Scalar;\n  fn challenge_vector(&mut self, label: &'static [u8], len: usize) -> Vec<Scalar>;\n}\n\nimpl ProofTranscript for Transcript {\n  fn append_protocol_name(&mut self, protocol_name: &'static [u8]) {\n    self.append_message(b\"protocol-name\", protocol_name);\n  }\n\n  fn append_scalar(&mut self, label: &'static [u8], scalar: &Scalar) {\n    self.append_message(label, &scalar.to_bytes());\n  }\n\n  fn append_point(&mut self, label: &'static [u8], point: &CompressedGroup) {\n    self.append_message(label, point.as_bytes());\n  }\n\n  fn challenge_scalar(&mut self, label: &'static [u8]) -> Scalar {\n    let mut buf = [0u8; 64];\n    self.challenge_bytes(label, &mut buf);\n    Scalar::from_bytes_wide(&buf)\n  }\n\n  fn challenge_vector(&mut self, label: &'static [u8], len: usize) -> Vec<Scalar> {\n    (0..len)\n      .map(|_i| self.challenge_scalar(label))\n      .collect::<Vec<Scalar>>()\n  }\n}\n\npub trait AppendToTranscript {\n  fn append_to_transcript(&self, label: &'static [u8], transcript: &mut Transcript);\n}\n\nimpl AppendToTranscript for Scalar {\n  fn append_to_transcript(&self, label: &'static [u8], transcript: &mut Transcript) {\n    transcript.append_scalar(label, self);\n  }\n}\n\nimpl AppendToTranscript for [Scalar] {\n  fn append_to_transcript(&self, label: &'static [u8], transcript: &mut Transcript) {\n    transcript.append_message(label, b\"begin_append_vector\");\n    for item in self {\n      transcript.append_scalar(label, item);\n    }\n    transcript.append_message(label, b\"end_append_vector\");\n  }\n}\n\nimpl AppendToTranscript 
for CompressedGroup {\n  fn append_to_transcript(&self, label: &'static [u8], transcript: &mut Transcript) {\n    transcript.append_point(label, self);\n  }\n}\n"
  },
  {
    "path": "packages/Spartan-secq/src/unipoly.rs",
    "content": "use super::commitments::{Commitments, MultiCommitGens};\nuse super::group::GroupElement;\nuse super::scalar::{Scalar, ScalarFromPrimitives};\nuse super::transcript::{AppendToTranscript, ProofTranscript};\nuse merlin::Transcript;\nuse serde::{Deserialize, Serialize};\n\n// ax^2 + bx + c stored as vec![c,b,a]\n// ax^3 + bx^2 + cx + d stored as vec![d,c,b,a]\n#[derive(Debug)]\npub struct UniPoly {\n  coeffs: Vec<Scalar>,\n}\n\n// ax^2 + bx + c stored as vec![c,a]\n// ax^3 + bx^2 + cx + d stored as vec![d,b,a]\n#[derive(Serialize, Deserialize, Debug)]\npub struct CompressedUniPoly {\n  coeffs_except_linear_term: Vec<Scalar>,\n}\n\nimpl UniPoly {\n  pub fn from_evals(evals: &[Scalar]) -> Self {\n    // we only support degree-2 or degree-3 univariate polynomials\n    assert!(evals.len() == 3 || evals.len() == 4);\n    let coeffs = if evals.len() == 3 {\n      // ax^2 + bx + c\n      let two_inv = (2_usize).to_scalar().invert().unwrap();\n\n      let c = evals[0];\n      let a = two_inv * (evals[2] - evals[1] - evals[1] + c);\n      let b = evals[1] - c - a;\n      vec![c, b, a]\n    } else {\n      // ax^3 + bx^2 + cx + d\n      let two_inv = (2_usize).to_scalar().invert().unwrap();\n      let six_inv = (6_usize).to_scalar().invert().unwrap();\n\n      let d = evals[0];\n      let a = six_inv\n        * (evals[3] - evals[2] - evals[2] - evals[2] + evals[1] + evals[1] + evals[1] - evals[0]);\n      let b = two_inv\n        * (evals[0] + evals[0] - evals[1] - evals[1] - evals[1] - evals[1] - evals[1]\n          + evals[2]\n          + evals[2]\n          + evals[2]\n          + evals[2]\n          - evals[3]);\n      let c = evals[1] - d - a - b;\n      vec![d, c, b, a]\n    };\n\n    UniPoly { coeffs }\n  }\n\n  pub fn degree(&self) -> usize {\n    self.coeffs.len() - 1\n  }\n\n  pub fn as_vec(&self) -> Vec<Scalar> {\n    self.coeffs.clone()\n  }\n\n  pub fn eval_at_zero(&self) -> Scalar {\n    self.coeffs[0]\n  }\n\n  pub fn eval_at_one(&self) -> Scalar 
{\n    (0..self.coeffs.len()).map(|i| self.coeffs[i]).sum()\n  }\n\n  pub fn evaluate(&self, r: &Scalar) -> Scalar {\n    let mut eval = self.coeffs[0];\n    let mut power = *r;\n    for i in 1..self.coeffs.len() {\n      eval += power * self.coeffs[i];\n      power *= r;\n    }\n    eval\n  }\n\n  pub fn compress(&self) -> CompressedUniPoly {\n    let coeffs_except_linear_term = [&self.coeffs[..1], &self.coeffs[2..]].concat();\n    assert_eq!(coeffs_except_linear_term.len() + 1, self.coeffs.len());\n    CompressedUniPoly {\n      coeffs_except_linear_term,\n    }\n  }\n\n  pub fn commit(&self, gens: &MultiCommitGens, blind: &Scalar) -> GroupElement {\n    self.coeffs.commit(blind, gens)\n  }\n}\n\nimpl CompressedUniPoly {\n  // we require eval(0) + eval(1) = hint, so we can solve for the linear term as:\n  // linear_term = hint - 2 * constant_term - deg2 term - deg3 term\n  pub fn decompress(&self, hint: &Scalar) -> UniPoly {\n    let mut linear_term =\n      hint - self.coeffs_except_linear_term[0] - self.coeffs_except_linear_term[0];\n    for i in 1..self.coeffs_except_linear_term.len() {\n      linear_term -= self.coeffs_except_linear_term[i];\n    }\n\n    let mut coeffs = vec![self.coeffs_except_linear_term[0], linear_term];\n    coeffs.extend(&self.coeffs_except_linear_term[1..]);\n    assert_eq!(self.coeffs_except_linear_term.len() + 1, coeffs.len());\n    UniPoly { coeffs }\n  }\n}\n\nimpl AppendToTranscript for UniPoly {\n  fn append_to_transcript(&self, label: &'static [u8], transcript: &mut Transcript) {\n    transcript.append_message(label, b\"UniPoly_begin\");\n    for i in 0..self.coeffs.len() {\n      transcript.append_scalar(b\"coeff\", &self.coeffs[i]);\n    }\n    transcript.append_message(label, b\"UniPoly_end\");\n  }\n}\n\n#[cfg(test)]\nmod tests {\n\n  use super::*;\n\n  #[test]\n  fn test_from_evals_quad() {\n    // polynomial is 2x^2 + 3x + 1\n    let e0 = Scalar::one();\n    let e1 = (6_usize).to_scalar();\n    let e2 = 
(15_usize).to_scalar();\n    let evals = vec![e0, e1, e2];\n    let poly = UniPoly::from_evals(&evals);\n\n    assert_eq!(poly.eval_at_zero(), e0);\n    assert_eq!(poly.eval_at_one(), e1);\n    assert_eq!(poly.coeffs.len(), 3);\n    assert_eq!(poly.coeffs[0], Scalar::one());\n    assert_eq!(poly.coeffs[1], (3_usize).to_scalar());\n    assert_eq!(poly.coeffs[2], (2_usize).to_scalar());\n\n    let hint = e0 + e1;\n    let compressed_poly = poly.compress();\n    let decompressed_poly = compressed_poly.decompress(&hint);\n    for i in 0..decompressed_poly.coeffs.len() {\n      assert_eq!(decompressed_poly.coeffs[i], poly.coeffs[i]);\n    }\n\n    let e3 = (28_usize).to_scalar();\n    assert_eq!(poly.evaluate(&(3_usize).to_scalar()), e3);\n  }\n\n  #[test]\n  fn test_from_evals_cubic() {\n    // polynomial is x^3 + 2x^2 + 3x + 1\n    let e0 = Scalar::one();\n    let e1 = (7_usize).to_scalar();\n    let e2 = (23_usize).to_scalar();\n    let e3 = (55_usize).to_scalar();\n    let evals = vec![e0, e1, e2, e3];\n    let poly = UniPoly::from_evals(&evals);\n\n    assert_eq!(poly.eval_at_zero(), e0);\n    assert_eq!(poly.eval_at_one(), e1);\n    assert_eq!(poly.coeffs.len(), 4);\n    assert_eq!(poly.coeffs[0], Scalar::one());\n    assert_eq!(poly.coeffs[1], (3_usize).to_scalar());\n    assert_eq!(poly.coeffs[2], (2_usize).to_scalar());\n    assert_eq!(poly.coeffs[3], (1_usize).to_scalar());\n\n    let hint = e0 + e1;\n    let compressed_poly = poly.compress();\n    let decompressed_poly = compressed_poly.decompress(&hint);\n    for i in 0..decompressed_poly.coeffs.len() {\n      assert_eq!(decompressed_poly.coeffs[i], poly.coeffs[i]);\n    }\n\n    let e4 = (109_usize).to_scalar();\n    assert_eq!(poly.evaluate(&(4_usize).to_scalar()), e4);\n  }\n}\n"
  },
  {
    "path": "packages/benchmark/node/LICENSE",
    "content": "MIT License\n\nCopyright (c) 2022 Ethereum Foundation\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE."
  },
  {
    "path": "packages/benchmark/node/README.md",
    "content": "## Node.js\n\nRecommended: v18 or later\n\n## Install dependencies\n\n```\nyarn\n```\n\n## Run benchmark\n\n```\nyarn bench\n```\n"
  },
  {
    "path": "packages/benchmark/node/package.json",
    "content": "{\n  \"name\": \"node\",\n  \"version\": \"1.0.0\",\n  \"main\": \"node.bench.ts\",\n  \"license\": \"MIT\",\n  \"scripts\": {\n    \"bench\": \"ts-node ./src/node.bench.ts\"\n  },\n  \"dependencies\": {\n    \"@ethereumjs/util\": \"^8.0.3\",\n    \"@personaelabs/spartan-ecdsa\": \"file:./../../lib\"\n  },\n  \"devDependencies\": {\n    \"ts-node\": \"^10.9.1\",\n    \"typescript\": \"^4.9.4\"\n  }\n}\n"
  },
  {
    "path": "packages/benchmark/node/src/node.bench.ts",
    "content": "import benchPubKeyMembership from \"./node.bench_pubkey_membership\";\nimport benchAddressMembership from \"./node.bench_addr_membership\";\n\nconst bench = async () => {\n  await benchPubKeyMembership();\n  await benchAddressMembership();\n};\n\nbench();\n"
  },
  {
    "path": "packages/benchmark/node/src/node.bench_addr_membership.ts",
    "content": "import {\n  hashPersonalMessage,\n  privateToAddress,\n  ecsign\n} from \"@ethereumjs/util\";\nimport {\n  Tree,\n  Poseidon,\n  MembershipProver,\n  MembershipVerifier\n} from \"@personaelabs/spartan-ecdsa\";\nimport * as path from \"path\";\n\nconst benchAddrMembership = async () => {\n  const privKey = Buffer.from(\"\".padStart(16, \"🧙\"), \"utf16le\");\n  const msg = Buffer.from(\"harry potter\");\n  const msgHash = hashPersonalMessage(msg);\n\n  const { v, r, s } = ecsign(msgHash, privKey);\n  const sig = `0x${r.toString(\"hex\")}${s.toString(\"hex\")}${v.toString(16)}`;\n\n  // Init the Poseidon hash\n  const poseidon = new Poseidon();\n  await poseidon.initWasm();\n\n  const treeDepth = 20;\n  const tree = new Tree(treeDepth, poseidon);\n\n  // Get the prover public key hash\n  const proverAddress = BigInt(\n    \"0x\" + privateToAddress(privKey).toString(\"hex\")\n  );\n\n  // Insert prover public key hash into the tree\n  tree.insert(proverAddress);\n\n  // Insert other members into the tree\n  for (const member of [\"🕵️\", \"🥷\", \"👩‍🔬\"]) {\n    const address = BigInt(\n      \"0x\" +\n        privateToAddress(\n          Buffer.from(\"\".padStart(16, member), \"utf16le\")\n        ).toString(\"hex\")\n    );\n    tree.insert(address);\n  }\n\n  // Compute the merkle proof\n  const index = tree.indexOf(proverAddress);\n\n  const proverConfig = {\n    circuit: path.join(\n      __dirname,\n      \"../../../circuits/build/addr_membership/addr_membership.circuit\"\n    ),\n    witnessGenWasm: path.join(\n      __dirname,\n      \"../../../circuits/build/addr_membership/addr_membership_js/addr_membership.wasm\"\n    ),\n    enableProfiler: true\n  };\n  const merkleProof = tree.createProof(index);\n\n  // Init the prover\n  const prover = new MembershipProver(proverConfig);\n  await prover.initWasm();\n\n  // Prove membership\n  const { proof, publicInput } = await prover.prove(sig, msgHash, merkleProof);\n\n  const verifierConfig = {\n    
circuit: proverConfig.circuit,\n    enableProfiler: true\n  };\n\n  // Init verifier\n  const verifier = new MembershipVerifier(verifierConfig);\n  await verifier.initWasm();\n\n  // Verify proof\n  await verifier.verify(proof, publicInput.serialize());\n};\n\nexport default benchAddrMembership;\n"
  },
  {
    "path": "packages/benchmark/node/src/node.bench_pubkey_membership.ts",
    "content": "import {\n  MembershipProver,\n  Poseidon,\n  Tree,\n  MembershipVerifier\n} from \"@personaelabs/spartan-ecdsa\";\nimport {\n  hashPersonalMessage,\n  ecsign,\n  ecrecover,\n  privateToPublic\n} from \"@ethereumjs/util\";\nimport * as path from \"path\";\n\nconst benchPubKeyMembership = async () => {\n  const privKey = Buffer.from(\"\".padStart(16, \"🧙\"), \"utf16le\");\n  const msg = Buffer.from(\"harry potter\");\n  const msgHash = hashPersonalMessage(msg);\n\n  const { v, r, s } = ecsign(msgHash, privKey);\n  const pubKey = ecrecover(msgHash, v, r, s);\n  const sig = `0x${r.toString(\"hex\")}${s.toString(\"hex\")}${v.toString(16)}`;\n\n  // Init the Poseidon hash\n  const poseidon = new Poseidon();\n  await poseidon.initWasm();\n\n  const treeDepth = 20;\n  const tree = new Tree(treeDepth, poseidon);\n\n  // Get the prover public key hash\n  const proverPubkeyHash = poseidon.hashPubKey(pubKey);\n\n  // Insert prover public key hash into the tree\n  tree.insert(proverPubkeyHash);\n\n  // Insert other members into the tree\n  for (const member of [\"🕵️\", \"🥷\", \"👩‍🔬\"]) {\n    const pubKey = privateToPublic(\n      Buffer.from(\"\".padStart(16, member), \"utf16le\")\n    );\n    tree.insert(poseidon.hashPubKey(pubKey));\n  }\n\n  // Compute the merkle proof\n  const index = tree.indexOf(proverPubkeyHash);\n  const merkleProof = tree.createProof(index);\n\n  const proverConfig = {\n    circuit: path.join(\n      __dirname,\n      \"../../../circuits/build/pubkey_membership/pubkey_membership.circuit\"\n    ),\n    witnessGenWasm: path.join(\n      __dirname,\n      \"../../../circuits/build/pubkey_membership/pubkey_membership_js/pubkey_membership.wasm\"\n    ),\n    enableProfiler: true\n  };\n\n  // Init the prover\n  const prover = new MembershipProver(proverConfig);\n  await prover.initWasm();\n\n  // Prove membership\n  const { proof, publicInput } = await prover.prove(sig, msgHash, merkleProof);\n\n  const verifierConfig = {\n    circuit: 
proverConfig.circuit,\n    enableProfiler: true\n  };\n\n  // Init verifier\n  const verifier = new MembershipVerifier(verifierConfig);\n  await verifier.initWasm();\n\n  // Verify proof\n  await verifier.verify(proof, publicInput.serialize());\n};\n\nexport default benchPubKeyMembership;\n"
  },
  {
    "path": "packages/benchmark/node/tsconfig.json",
    "content": "{\n  \"include\": [\n    \"./src/**/*\"\n  ],\n  \"exclude\": [\n    \"./node_modules\",\n    \"./build\"\n  ],\n  \"compilerOptions\": {\n    \"target\": \"ES6\",\n    \"module\": \"CommonJS\",\n    \"rootDir\": \"./src\",\n    \"moduleResolution\": \"node\",\n    \"allowJs\": true,\n    \"outDir\": \"./build\",\n    \"esModuleInterop\": true,\n    \"forceConsistentCasingInFileNames\": true,\n    \"strict\": true,\n    \"skipLibCheck\": true\n  }\n}"
  },
  {
    "path": "packages/benchmark/web/.vscode/settings.json",
    "content": "{\n  \"editor.formatOnSave\": true,\n  \"editor.defaultFormatter\": \"esbenp.prettier-vscode\",\n  \"cSpell.words\": [\"layouter\", \"maingate\"]\n}\n"
  },
  {
    "path": "packages/benchmark/web/LICENSE",
    "content": "MIT License\n\nCopyright (c) 2022 Ethereum Foundation\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE."
  },
  {
    "path": "packages/benchmark/web/README.md",
    "content": "## Node.js\n\nRecommended: v18 or later\n\n## Install dependencies\n\n```\nyarn\n```\n\n## Start server\n\n```\nyarn dev\n```\n"
  },
  {
    "path": "packages/benchmark/web/next.config.js",
    "content": "/** @type {import('next').NextConfig} */\nconst nextConfig = {\n  reactStrictMode: true,\n  swcMinify: true,\n  webpack: config => {\n    config.resolve.fallback = { fs: false };\n    config.experiments = { asyncWebAssembly: true };\n\n    return config;\n  },\n  async headers() {\n    return [\n      {\n        source: \"/(.*)\",\n        headers: [\n          {\n            key: \"Cross-Origin-Embedder-Policy\",\n            value: \"require-corp\"\n          },\n          {\n            key: \"Cross-Origin-Opener-Policy\",\n            value: \"same-origin\"\n          }\n        ]\n      }\n    ];\n  }\n};\n\nmodule.exports = nextConfig;\n"
  },
  {
    "path": "packages/benchmark/web/package.json",
    "content": "{\n  \"name\": \"spartan-bench\",\n  \"version\": \"0.1.0\",\n  \"private\": true,\n  \"scripts\": {\n    \"dev\": \"next dev\",\n    \"build\": \"next build\",\n    \"start\": \"next start\",\n    \"lint\": \"next lint\"\n  },\n  \"dependencies\": {\n    \"@personaelabs/spartan-ecdsa\": \"file:./../../lib\",\n    \"@ethereumjs/util\": \"^8.0.3\",\n    \"comlink\": \"^4.3.1\",\n    \"elliptic\": \"^6.5.4\",\n    \"ffjavascript\": \"^0.2.57\",\n    \"next\": \"13.0.0\",\n    \"react\": \"18.2.0\",\n    \"react-dom\": \"18.2.0\",\n    \"readline\": \"^1.3.0\"\n  },\n  \"devDependencies\": {\n    \"@types/node\": \"18.11.7\",\n    \"@types/react\": \"^18.0.24\",\n    \"@types/react-dom\": \"18.0.8\",\n    \"eslint\": \"8.26.0\",\n    \"eslint-config-next\": \"13.0.0\",\n    \"prettier\": \"^2.7.1\",\n    \"typescript\": \"4.8.4\"\n  }\n}\n"
  },
  {
    "path": "packages/benchmark/web/pages/_app.tsx",
    "content": "import type { AppProps } from \"next/app\";\n\nexport default function App({ Component, pageProps }: AppProps) {\n  return <Component {...pageProps} />;\n}\n"
  },
  {
    "path": "packages/benchmark/web/pages/index.tsx",
    "content": "import { useState } from \"react\";\nimport {\n  MembershipProver,\n  MembershipVerifier,\n  Tree,\n  Poseidon,\n  defaultAddressMembershipPConfig,\n  defaultPubkeyMembershipPConfig,\n  defaultPubkeyMembershipVConfig,\n  defaultAddressMembershipVConfig\n} from \"@personaelabs/spartan-ecdsa\";\nimport {\n  ecrecover,\n  ecsign,\n  hashPersonalMessage,\n  privateToAddress,\n  privateToPublic,\n  pubToAddress\n} from \"@ethereumjs/util\";\n\nexport default function Home() {\n  const provePubKeyMembership = async () => {\n    const privKey = Buffer.from(\"\".padStart(16, \"🧙\"), \"utf16le\");\n    const msg = Buffer.from(\"harry potter\");\n    const msgHash = hashPersonalMessage(msg);\n\n    const { v, r, s } = ecsign(msgHash, privKey);\n    const pubKey = ecrecover(msgHash, v, r, s);\n    const sig = `0x${r.toString(\"hex\")}${s.toString(\"hex\")}${v.toString(16)}`;\n\n    const poseidon = new Poseidon();\n    await poseidon.initWasm();\n\n    const treeDepth = 20;\n    const pubKeyTree = new Tree(treeDepth, poseidon);\n\n    const proverPubKeyHash = poseidon.hashPubKey(pubKey);\n\n    pubKeyTree.insert(proverPubKeyHash);\n\n    // Insert other members into the tree\n    for (const member of [\"🕵️\", \"🥷\", \"👩‍🔬\"]) {\n      const pubKey = privateToPublic(\n        Buffer.from(\"\".padStart(16, member), \"utf16le\")\n      );\n      pubKeyTree.insert(poseidon.hashPubKey(pubKey));\n    }\n\n    const index = pubKeyTree.indexOf(proverPubKeyHash);\n    const merkleProof = pubKeyTree.createProof(index);\n\n    console.log(\"Proving...\");\n    console.time(\"Full proving time\");\n\n    const prover = new MembershipProver({\n      ...defaultPubkeyMembershipPConfig,\n      enableProfiler: true\n    });\n\n    await prover.initWasm();\n\n    const { proof, publicInput } = await prover.prove(\n      sig,\n      msgHash,\n      merkleProof\n    );\n\n    console.timeEnd(\"Full proving time\");\n    console.log(\n      \"Raw proof size (excluding public 
input)\",\n      proof.length,\n      \"bytes\"\n    );\n\n    console.log(\"Verifying...\");\n    const verifier = new MembershipVerifier({\n      ...defaultPubkeyMembershipVConfig,\n      enableProfiler: true\n    });\n    await verifier.initWasm();\n\n    console.time(\"Verification time\");\n    const result = await verifier.verify(proof, publicInput.serialize());\n    console.timeEnd(\"Verification time\");\n\n    if (result) {\n      console.log(\"Successfully verified proof!\");\n    } else {\n      console.log(\"Failed to verify proof :(\");\n    }\n  };\n\n  const proverAddressMembership = async () => {\n    const privKey = Buffer.from(\"\".padStart(16, \"🧙\"), \"utf16le\");\n    const msg = Buffer.from(\"harry potter\");\n    const msgHash = hashPersonalMessage(msg);\n\n    const { v, r, s } = ecsign(msgHash, privKey);\n    const sig = `0x${r.toString(\"hex\")}${s.toString(\"hex\")}${v.toString(16)}`;\n\n    const poseidon = new Poseidon();\n    await poseidon.initWasm();\n\n    const treeDepth = 20;\n    const addressTree = new Tree(treeDepth, poseidon);\n\n    const proverAddress = BigInt(\n      \"0x\" + privateToAddress(privKey).toString(\"hex\")\n    );\n    addressTree.insert(proverAddress);\n\n    // Insert other members into the tree\n    for (const member of [\"🕵️\", \"🥷\", \"👩‍🔬\"]) {\n      const pubKey = privateToPublic(\n        Buffer.from(\"\".padStart(16, member), \"utf16le\")\n      );\n      const address = BigInt(\"0x\" + pubToAddress(pubKey).toString(\"hex\"));\n      addressTree.insert(address);\n    }\n\n    const index = addressTree.indexOf(proverAddress);\n    const merkleProof = addressTree.createProof(index);\n\n    console.log(\"Proving...\");\n    console.time(\"Full proving time\");\n\n    const prover = new MembershipProver({\n      ...defaultAddressMembershipPConfig,\n      enableProfiler: true\n    });\n\n    await prover.initWasm();\n\n    const { proof, publicInput } = await prover.prove(\n      sig,\n      msgHash,\n     
 merkleProof\n    );\n\n    console.timeEnd(\"Full proving time\");\n    console.log(\n      \"Raw proof size (excluding public input)\",\n      proof.length,\n      \"bytes\"\n    );\n\n    console.log(\"Verifying...\");\n    const verifier = new MembershipVerifier({\n      ...defaultAddressMembershipVConfig,\n      enableProfiler: true\n    });\n    await verifier.initWasm();\n\n    console.time(\"Verification time\");\n    const result = await verifier.verify(proof, publicInput.serialize());\n    console.timeEnd(\"Verification time\");\n\n    if (result) {\n      console.log(\"Successfully verified proof!\");\n    } else {\n      console.log(\"Failed to verify proof :(\");\n    }\n  };\n\n  return (\n    <div>\n      <button onClick={provePubKeyMembership}>\n        Prove Public Key Membership\n      </button>\n      <button onClick={proverAddressMembership}>\n        Prove Address Membership\n      </button>\n    </div>\n  );\n}\n"
  },
  {
    "path": "packages/benchmark/web/tsconfig.json",
    "content": "{\n  \"compilerOptions\": {\n    \"target\": \"es5\",\n    \"lib\": [\n      \"dom\",\n      \"dom.iterable\",\n      \"esnext\"\n    ],\n    \"allowJs\": true,\n    \"skipLibCheck\": true,\n    \"strict\": false,\n    \"forceConsistentCasingInFileNames\": true,\n    \"noEmit\": true,\n    \"incremental\": true,\n    \"esModuleInterop\": true,\n    \"module\": \"esnext\",\n    \"moduleResolution\": \"node\",\n    \"resolveJsonModule\": true,\n    \"isolatedModules\": true,\n    \"jsx\": \"preserve\"\n  },\n  \"include\": [\n    \"next-env.d.ts\",\n    \"**/*.ts\",\n    \"**/*.tsx\"\n  ],\n  \"exclude\": [\n    \"node_modules\"\n  ]\n}\n"
  },
  {
    "path": "packages/circuit_reader/Cargo.toml",
    "content": "[package]\nname = \"circuit_reader\"\nversion = \"0.1.0\"\nedition = \"2021\"\n\n# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html\n\n[dependencies]\nbincode = \"1.3.3\"\nsecq256k1 = { path = \"../secq256k1\" }\nspartan = { path = \"../Spartan-secq\" }\nff = \"0.12.0\"\nbyteorder = \"1.4.3\"\ngroup = \"0.12.0\"\nitertools = \"0.9.0\"\n\n[[bin]]\nname = \"gen_spartan_inst\"\npath = \"src/bin/gen_spartan_inst.rs\"\n\n\n\n"
  },
  {
    "path": "packages/circuit_reader/src/bin/gen_spartan_inst.rs",
    "content": "#![allow(non_snake_case)]\nuse bincode;\nuse circuit_reader::load_as_spartan_inst;\nuse std::env::{args, current_dir};\nuse std::fs::File;\nuse std::io::Write;\n\nfn main() {\n    let circom_r1cs_path = args().nth(1).unwrap();\n    let output_path = args().nth(2).unwrap();\n    let num_pub_inputs = args().nth(3).unwrap().parse::<usize>().unwrap();\n\n    let root = current_dir().unwrap();\n    let circom_r1cs_path = root.join(circom_r1cs_path);\n    let spartan_inst = load_as_spartan_inst(circom_r1cs_path, num_pub_inputs);\n    let sparta_inst_bytes = bincode::serialize(&spartan_inst).unwrap();\n\n    File::create(root.join(output_path.clone()))\n        .unwrap()\n        .write_all(sparta_inst_bytes.as_slice())\n        .unwrap();\n\n    println!(\"Written Spartan circuit to {}\", output_path);\n}\n"
  },
  {
    "path": "packages/circuit_reader/src/circom_reader.rs",
    "content": "// Code borrowed from Nova-Scotia https://github.com/nalinbhardwaj/Nova-Scotia\nuse byteorder::{LittleEndian, ReadBytesExt};\nuse ff::PrimeField;\nuse group::Group;\nuse itertools::Itertools;\nuse std::{\n    collections::HashMap,\n    io::{BufReader, Error, ErrorKind, Read, Result, Seek, SeekFrom},\n};\n\npub type Constraint<Fr> = (Vec<(usize, Fr)>, Vec<(usize, Fr)>, Vec<(usize, Fr)>);\n\n#[derive(Clone)]\npub struct R1CS<Fr: PrimeField> {\n    pub num_inputs: usize,\n    pub num_aux: usize,\n    pub num_variables: usize,\n    pub constraints: Vec<Constraint<Fr>>,\n}\n\n// R1CSFile's header\n#[derive(Debug, Default)]\npub struct Header {\n    pub field_size: u32,\n    pub prime_size: Vec<u8>,\n    pub n_wires: u32,\n    pub n_pub_out: u32,\n    pub n_pub_in: u32,\n    pub n_prv_in: u32,\n    pub n_labels: u64,\n    pub n_constraints: u32,\n}\n\n// R1CSFile parse result\n#[derive(Debug, Default)]\npub struct R1CSFile<Fr: PrimeField> {\n    pub version: u32,\n    pub header: Header,\n    pub constraints: Vec<Constraint<Fr>>,\n    pub wire_mapping: Vec<u64>,\n}\nuse std::fs::OpenOptions;\nuse std::path::Path;\n\npub fn load_r1cs_from_bin_file<G1: Group>(filename: &Path) -> (R1CS<G1::Scalar>, Vec<usize>) {\n    let reader = OpenOptions::new()\n        .read(true)\n        .open(filename)\n        .expect(\"unable to open.\");\n    load_r1cs_from_bin::<G1, _>(BufReader::new(reader))\n}\n\npub fn load_r1cs_from_bin<G1: Group, R: Read + Seek>(reader: R) -> (R1CS<G1::Scalar>, Vec<usize>) {\n    let file = from_reader::<G1, R>(reader).expect(\"unable to read.\");\n    let num_inputs = (1 + file.header.n_pub_in + file.header.n_pub_out) as usize;\n    let num_variables = file.header.n_wires as usize;\n    let num_aux = num_variables - num_inputs;\n    (\n        R1CS {\n            num_aux,\n            num_inputs,\n            num_variables,\n            constraints: file.constraints,\n        },\n        file.wire_mapping.iter().map(|e| *e as 
usize).collect_vec(),\n    )\n}\n\npub(crate) fn read_field<R: Read, Fr: PrimeField>(mut reader: R) -> Result<Fr> {\n    let mut repr = Fr::zero().to_repr();\n    for digit in repr.as_mut().iter_mut() {\n        // TODO: may need to reverse order?\n        *digit = reader.read_u8()?;\n    }\n    let fr = Fr::from_repr(repr).unwrap();\n    Ok(fr)\n}\n\nfn read_header<R: Read>(mut reader: R, size: u64) -> Result<Header> {\n    let field_size = reader.read_u32::<LittleEndian>()?;\n    let mut prime_size = vec![0u8; field_size as usize];\n    reader.read_exact(&mut prime_size)?;\n    if size != 32 + field_size as u64 {\n        return Err(Error::new(\n            ErrorKind::InvalidData,\n            \"Invalid header section size\",\n        ));\n    }\n\n    Ok(Header {\n        field_size,\n        prime_size,\n        n_wires: reader.read_u32::<LittleEndian>()?,\n        n_pub_out: reader.read_u32::<LittleEndian>()?,\n        n_pub_in: reader.read_u32::<LittleEndian>()?,\n        n_prv_in: reader.read_u32::<LittleEndian>()?,\n        n_labels: reader.read_u64::<LittleEndian>()?,\n        n_constraints: reader.read_u32::<LittleEndian>()?,\n    })\n}\n\nfn read_constraint_vec<R: Read, Fr: PrimeField>(mut reader: R) -> Result<Vec<(usize, Fr)>> {\n    let n_vec = reader.read_u32::<LittleEndian>()? as usize;\n    let mut vec = Vec::with_capacity(n_vec);\n    for _ in 0..n_vec {\n        vec.push((\n            reader.read_u32::<LittleEndian>()? 
as usize,\n            read_field::<&mut R, Fr>(&mut reader)?,\n        ));\n    }\n    Ok(vec)\n}\n\nfn read_constraints<R: Read, Fr: PrimeField>(\n    mut reader: R,\n    header: &Header,\n) -> Result<Vec<Constraint<Fr>>> {\n    // todo check section size\n    let mut vec = Vec::with_capacity(header.n_constraints as usize);\n    for _ in 0..header.n_constraints {\n        vec.push((\n            read_constraint_vec::<&mut R, Fr>(&mut reader)?,\n            read_constraint_vec::<&mut R, Fr>(&mut reader)?,\n            read_constraint_vec::<&mut R, Fr>(&mut reader)?,\n        ));\n    }\n    Ok(vec)\n}\n\nfn read_map<R: Read>(mut reader: R, size: u64, header: &Header) -> Result<Vec<u64>> {\n    if size != header.n_wires as u64 * 8 {\n        return Err(Error::new(\n            ErrorKind::InvalidData,\n            \"Invalid map section size\",\n        ));\n    }\n    let mut vec = Vec::with_capacity(header.n_wires as usize);\n    for _ in 0..header.n_wires {\n        vec.push(reader.read_u64::<LittleEndian>()?);\n    }\n    if vec[0] != 0 {\n        return Err(Error::new(\n            ErrorKind::InvalidData,\n            \"Wire 0 should always be mapped to 0\",\n        ));\n    }\n    Ok(vec)\n}\n\npub fn from_reader<G1: Group, R: Read + Seek>(mut reader: R) -> Result<R1CSFile<G1::Scalar>> {\n    let mut magic = [0u8; 4];\n    reader.read_exact(&mut magic)?;\n    if magic != [0x72, 0x31, 0x63, 0x73] {\n        // magic = \"r1cs\"\n        return Err(Error::new(ErrorKind::InvalidData, \"Invalid magic number\"));\n    }\n\n    let version = reader.read_u32::<LittleEndian>()?;\n    if version != 1 {\n        return Err(Error::new(ErrorKind::InvalidData, \"Unsupported version\"));\n    }\n\n    let num_sections = reader.read_u32::<LittleEndian>()?;\n\n    // section type -> file offset\n    let mut section_offsets = HashMap::<u32, u64>::new();\n    let mut section_sizes = HashMap::<u32, u64>::new();\n\n    // get file offset of each section\n    for _ in 
0..num_sections {\n        let section_type = reader.read_u32::<LittleEndian>()?;\n        let section_size = reader.read_u64::<LittleEndian>()?;\n        let offset = reader.seek(SeekFrom::Current(0))?;\n        section_offsets.insert(section_type, offset);\n        section_sizes.insert(section_type, section_size);\n        reader.seek(SeekFrom::Current(section_size as i64))?;\n    }\n\n    let header_type = 1;\n    let constraint_type = 2;\n    let wire2label_type = 3;\n\n    reader.seek(SeekFrom::Start(*section_offsets.get(&header_type).unwrap()))?;\n    let header = read_header(&mut reader, *section_sizes.get(&header_type).unwrap())?;\n    if header.field_size != 32 {\n        return Err(Error::new(\n            ErrorKind::InvalidData,\n            \"This parser only supports 32-byte fields\",\n        ));\n    }\n    // if header.prime_size != hex!(\"010000f093f5e1439170b97948e833285d588181b64550b829a031e1724e6430\") {\n    //     return Err(Error::new(ErrorKind::InvalidData, \"This parser only supports bn256\"));\n    // }\n\n    reader.seek(SeekFrom::Start(\n        *section_offsets.get(&constraint_type).unwrap(),\n    ))?;\n    let constraints = read_constraints::<&mut R, <G1 as Group>::Scalar>(&mut reader, &header)?;\n\n    reader.seek(SeekFrom::Start(\n        *section_offsets.get(&wire2label_type).unwrap(),\n    ))?;\n    let wire_mapping = read_map(\n        &mut reader,\n        *section_sizes.get(&wire2label_type).unwrap(),\n        &header,\n    )?;\n\n    Ok(R1CSFile {\n        version,\n        header,\n        constraints,\n        wire_mapping,\n    })\n}\n"
  },
  {
    "path": "packages/circuit_reader/src/lib.rs",
    "content": "mod circom_reader;\n\nuse circom_reader::{load_r1cs_from_bin_file, R1CS};\nuse ff::PrimeField;\nuse libspartan::Instance;\nuse secq256k1::AffinePoint;\nuse secq256k1::FieldBytes;\nuse std::path::PathBuf;\n\npub fn load_as_spartan_inst(circuit_file: PathBuf, num_pub_inputs: usize) -> Instance {\n    let (r1cs, _) = load_r1cs_from_bin_file::<AffinePoint>(&circuit_file);\n    let spartan_inst = convert_to_spartan_r1cs(&r1cs, num_pub_inputs);\n    spartan_inst\n}\n\nfn convert_to_spartan_r1cs<F: PrimeField<Repr = FieldBytes>>(\n    r1cs: &R1CS<F>,\n    num_pub_inputs: usize,\n) -> Instance {\n    let num_cons = r1cs.constraints.len();\n    let num_vars = r1cs.num_variables;\n    let num_inputs = num_pub_inputs;\n\n    let mut A = vec![];\n    let mut B = vec![];\n    let mut C = vec![];\n\n    for (i, constraint) in r1cs.constraints.iter().enumerate() {\n        let (a, b, c) = constraint;\n\n        for (j, coeff) in a.iter() {\n            let bytes: [u8; 32] = coeff.to_repr().into();\n\n            A.push((i, *j, bytes));\n        }\n\n        for (j, coeff) in b.iter() {\n            let bytes: [u8; 32] = coeff.to_repr().into();\n            B.push((i, *j, bytes));\n        }\n\n        for (j, coeff) in c.iter() {\n            let bytes: [u8; 32] = coeff.to_repr().into();\n            C.push((i, *j, bytes));\n        }\n    }\n\n    let inst = Instance::new(\n        num_cons,\n        num_vars,\n        num_inputs,\n        A.as_slice(),\n        B.as_slice(),\n        C.as_slice(),\n    )\n    .unwrap();\n\n    inst\n}\n"
  },
  {
    "path": "packages/circuits/LICENSE",
    "content": "                    GNU GENERAL PUBLIC LICENSE\n                       Version 3, 29 June 2007\n\n Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>\n Everyone is permitted to copy and distribute verbatim copies\n of this license document, but changing it is not allowed.\n\n                            Preamble\n\n  The GNU General Public License is a free, copyleft license for\nsoftware and other kinds of works.\n\n  The licenses for most software and other practical works are designed\nto take away your freedom to share and change the works.  By contrast,\nthe GNU General Public License is intended to guarantee your freedom to\nshare and change all versions of a program--to make sure it remains free\nsoftware for all its users.  We, the Free Software Foundation, use the\nGNU General Public License for most of our software; it applies also to\nany other work released this way by its authors.  You can apply it to\nyour programs, too.\n\n  When we speak of free software, we are referring to freedom, not\nprice.  Our General Public Licenses are designed to make sure that you\nhave the freedom to distribute copies of free software (and charge for\nthem if you wish), that you receive source code or can get it if you\nwant it, that you can change the software or use pieces of it in new\nfree programs, and that you know you can do these things.\n\n  To protect your rights, we need to prevent others from denying you\nthese rights or asking you to surrender the rights.  Therefore, you have\ncertain responsibilities if you distribute copies of the software, or if\nyou modify it: responsibilities to respect the freedom of others.\n\n  For example, if you distribute copies of such a program, whether\ngratis or for a fee, you must pass on to the recipients the same\nfreedoms that you received.  You must make sure that they, too, receive\nor can get the source code.  
And you must show them these terms so they\nknow their rights.\n\n  Developers that use the GNU GPL protect your rights with two steps:\n(1) assert copyright on the software, and (2) offer you this License\ngiving you legal permission to copy, distribute and/or modify it.\n\n  For the developers' and authors' protection, the GPL clearly explains\nthat there is no warranty for this free software.  For both users' and\nauthors' sake, the GPL requires that modified versions be marked as\nchanged, so that their problems will not be attributed erroneously to\nauthors of previous versions.\n\n  Some devices are designed to deny users access to install or run\nmodified versions of the software inside them, although the manufacturer\ncan do so.  This is fundamentally incompatible with the aim of\nprotecting users' freedom to change the software.  The systematic\npattern of such abuse occurs in the area of products for individuals to\nuse, which is precisely where it is most unacceptable.  Therefore, we\nhave designed this version of the GPL to prohibit the practice for those\nproducts.  If such problems arise substantially in other domains, we\nstand ready to extend this provision to those domains in future versions\nof the GPL, as needed to protect the freedom of users.\n\n  Finally, every program is threatened constantly by software patents.\nStates should not allow patents to restrict development and use of\nsoftware on general-purpose computers, but in those that do, we wish to\navoid the special danger that patents applied to a free program could\nmake it effectively proprietary.  To prevent this, the GPL assures that\npatents cannot be used to render the program non-free.\n\n  The precise terms and conditions for copying, distribution and\nmodification follow.\n\n                       TERMS AND CONDITIONS\n\n  0. 
Definitions.\n\n  \"This License\" refers to version 3 of the GNU General Public License.\n\n  \"Copyright\" also means copyright-like laws that apply to other kinds of\nworks, such as semiconductor masks.\n\n  \"The Program\" refers to any copyrightable work licensed under this\nLicense.  Each licensee is addressed as \"you\".  \"Licensees\" and\n\"recipients\" may be individuals or organizations.\n\n  To \"modify\" a work means to copy from or adapt all or part of the work\nin a fashion requiring copyright permission, other than the making of an\nexact copy.  The resulting work is called a \"modified version\" of the\nearlier work or a work \"based on\" the earlier work.\n\n  A \"covered work\" means either the unmodified Program or a work based\non the Program.\n\n  To \"propagate\" a work means to do anything with it that, without\npermission, would make you directly or secondarily liable for\ninfringement under applicable copyright law, except executing it on a\ncomputer or modifying a private copy.  Propagation includes copying,\ndistribution (with or without modification), making available to the\npublic, and in some countries other activities as well.\n\n  To \"convey\" a work means any kind of propagation that enables other\nparties to make or receive copies.  Mere interaction with a user through\na computer network, with no transfer of a copy, is not conveying.\n\n  An interactive user interface displays \"Appropriate Legal Notices\"\nto the extent that it includes a convenient and prominently visible\nfeature that (1) displays an appropriate copyright notice, and (2)\ntells the user that there is no warranty for the work (except to the\nextent that warranties are provided), that licensees may convey the\nwork under this License, and how to view a copy of this License.  If\nthe interface presents a list of user commands or options, such as a\nmenu, a prominent item in the list meets this criterion.\n\n  1. 
Source Code.\n\n  The \"source code\" for a work means the preferred form of the work\nfor making modifications to it.  \"Object code\" means any non-source\nform of a work.\n\n  A \"Standard Interface\" means an interface that either is an official\nstandard defined by a recognized standards body, or, in the case of\ninterfaces specified for a particular programming language, one that\nis widely used among developers working in that language.\n\n  The \"System Libraries\" of an executable work include anything, other\nthan the work as a whole, that (a) is included in the normal form of\npackaging a Major Component, but which is not part of that Major\nComponent, and (b) serves only to enable use of the work with that\nMajor Component, or to implement a Standard Interface for which an\nimplementation is available to the public in source code form.  A\n\"Major Component\", in this context, means a major essential component\n(kernel, window system, and so on) of the specific operating system\n(if any) on which the executable work runs, or a compiler used to\nproduce the work, or an object code interpreter used to run it.\n\n  The \"Corresponding Source\" for a work in object code form means all\nthe source code needed to generate, install, and (for an executable\nwork) run the object code and to modify the work, including scripts to\ncontrol those activities.  However, it does not include the work's\nSystem Libraries, or general-purpose tools or generally available free\nprograms which are used unmodified in performing those activities but\nwhich are not part of the work.  
For example, Corresponding Source\nincludes interface definition files associated with source files for\nthe work, and the source code for shared libraries and dynamically\nlinked subprograms that the work is specifically designed to require,\nsuch as by intimate data communication or control flow between those\nsubprograms and other parts of the work.\n\n  The Corresponding Source need not include anything that users\ncan regenerate automatically from other parts of the Corresponding\nSource.\n\n  The Corresponding Source for a work in source code form is that\nsame work.\n\n  2. Basic Permissions.\n\n  All rights granted under this License are granted for the term of\ncopyright on the Program, and are irrevocable provided the stated\nconditions are met.  This License explicitly affirms your unlimited\npermission to run the unmodified Program.  The output from running a\ncovered work is covered by this License only if the output, given its\ncontent, constitutes a covered work.  This License acknowledges your\nrights of fair use or other equivalent, as provided by copyright law.\n\n  You may make, run and propagate covered works that you do not\nconvey, without conditions so long as your license otherwise remains\nin force.  You may convey covered works to others for the sole purpose\nof having them make modifications exclusively for you, or provide you\nwith facilities for running those works, provided that you comply with\nthe terms of this License in conveying all material for which you do\nnot control copyright.  Those thus making or running the covered works\nfor you must do so exclusively on your behalf, under your direction\nand control, on terms that prohibit them from making any copies of\nyour copyrighted material outside their relationship with you.\n\n  Conveying under any other circumstances is permitted solely under\nthe conditions stated below.  Sublicensing is not allowed; section 10\nmakes it unnecessary.\n\n  3. 
Protecting Users' Legal Rights From Anti-Circumvention Law.\n\n  No covered work shall be deemed part of an effective technological\nmeasure under any applicable law fulfilling obligations under article\n11 of the WIPO copyright treaty adopted on 20 December 1996, or\nsimilar laws prohibiting or restricting circumvention of such\nmeasures.\n\n  When you convey a covered work, you waive any legal power to forbid\ncircumvention of technological measures to the extent such circumvention\nis effected by exercising rights under this License with respect to\nthe covered work, and you disclaim any intention to limit operation or\nmodification of the work as a means of enforcing, against the work's\nusers, your or third parties' legal rights to forbid circumvention of\ntechnological measures.\n\n  4. Conveying Verbatim Copies.\n\n  You may convey verbatim copies of the Program's source code as you\nreceive it, in any medium, provided that you conspicuously and\nappropriately publish on each copy an appropriate copyright notice;\nkeep intact all notices stating that this License and any\nnon-permissive terms added in accord with section 7 apply to the code;\nkeep intact all notices of the absence of any warranty; and give all\nrecipients a copy of this License along with the Program.\n\n  You may charge any price or no price for each copy that you convey,\nand you may offer support or warranty protection for a fee.\n\n  5. Conveying Modified Source Versions.\n\n  You may convey a work based on the Program, or the modifications to\nproduce it from the Program, in the form of source code under the\nterms of section 4, provided that you also meet all of these conditions:\n\n    a) The work must carry prominent notices stating that you modified\n    it, and giving a relevant date.\n\n    b) The work must carry prominent notices stating that it is\n    released under this License and any conditions added under section\n    7.  
This requirement modifies the requirement in section 4 to\n    \"keep intact all notices\".\n\n    c) You must license the entire work, as a whole, under this\n    License to anyone who comes into possession of a copy.  This\n    License will therefore apply, along with any applicable section 7\n    additional terms, to the whole of the work, and all its parts,\n    regardless of how they are packaged.  This License gives no\n    permission to license the work in any other way, but it does not\n    invalidate such permission if you have separately received it.\n\n    d) If the work has interactive user interfaces, each must display\n    Appropriate Legal Notices; however, if the Program has interactive\n    interfaces that do not display Appropriate Legal Notices, your\n    work need not make them do so.\n\n  A compilation of a covered work with other separate and independent\nworks, which are not by their nature extensions of the covered work,\nand which are not combined with it such as to form a larger program,\nin or on a volume of a storage or distribution medium, is called an\n\"aggregate\" if the compilation and its resulting copyright are not\nused to limit the access or legal rights of the compilation's users\nbeyond what the individual works permit.  Inclusion of a covered work\nin an aggregate does not cause this License to apply to the other\nparts of the aggregate.\n\n  6. 
Conveying Non-Source Forms.\n\n  You may convey a covered work in object code form under the terms\nof sections 4 and 5, provided that you also convey the\nmachine-readable Corresponding Source under the terms of this License,\nin one of these ways:\n\n    a) Convey the object code in, or embodied in, a physical product\n    (including a physical distribution medium), accompanied by the\n    Corresponding Source fixed on a durable physical medium\n    customarily used for software interchange.\n\n    b) Convey the object code in, or embodied in, a physical product\n    (including a physical distribution medium), accompanied by a\n    written offer, valid for at least three years and valid for as\n    long as you offer spare parts or customer support for that product\n    model, to give anyone who possesses the object code either (1) a\n    copy of the Corresponding Source for all the software in the\n    product that is covered by this License, on a durable physical\n    medium customarily used for software interchange, for a price no\n    more than your reasonable cost of physically performing this\n    conveying of source, or (2) access to copy the\n    Corresponding Source from a network server at no charge.\n\n    c) Convey individual copies of the object code with a copy of the\n    written offer to provide the Corresponding Source.  This\n    alternative is allowed only occasionally and noncommercially, and\n    only if you received the object code with such an offer, in accord\n    with subsection 6b.\n\n    d) Convey the object code by offering access from a designated\n    place (gratis or for a charge), and offer equivalent access to the\n    Corresponding Source in the same way through the same place at no\n    further charge.  You need not require recipients to copy the\n    Corresponding Source along with the object code.  
If the place to\n    copy the object code is a network server, the Corresponding Source\n    may be on a different server (operated by you or a third party)\n    that supports equivalent copying facilities, provided you maintain\n    clear directions next to the object code saying where to find the\n    Corresponding Source.  Regardless of what server hosts the\n    Corresponding Source, you remain obligated to ensure that it is\n    available for as long as needed to satisfy these requirements.\n\n    e) Convey the object code using peer-to-peer transmission, provided\n    you inform other peers where the object code and Corresponding\n    Source of the work are being offered to the general public at no\n    charge under subsection 6d.\n\n  A separable portion of the object code, whose source code is excluded\nfrom the Corresponding Source as a System Library, need not be\nincluded in conveying the object code work.\n\n  A \"User Product\" is either (1) a \"consumer product\", which means any\ntangible personal property which is normally used for personal, family,\nor household purposes, or (2) anything designed or sold for incorporation\ninto a dwelling.  In determining whether a product is a consumer product,\ndoubtful cases shall be resolved in favor of coverage.  For a particular\nproduct received by a particular user, \"normally used\" refers to a\ntypical or common use of that class of product, regardless of the status\nof the particular user or of the way in which the particular user\nactually uses, or expects or is expected to use, the product.  
A product\nis a consumer product regardless of whether the product has substantial\ncommercial, industrial or non-consumer uses, unless such uses represent\nthe only significant mode of use of the product.\n\n  \"Installation Information\" for a User Product means any methods,\nprocedures, authorization keys, or other information required to install\nand execute modified versions of a covered work in that User Product from\na modified version of its Corresponding Source.  The information must\nsuffice to ensure that the continued functioning of the modified object\ncode is in no case prevented or interfered with solely because\nmodification has been made.\n\n  If you convey an object code work under this section in, or with, or\nspecifically for use in, a User Product, and the conveying occurs as\npart of a transaction in which the right of possession and use of the\nUser Product is transferred to the recipient in perpetuity or for a\nfixed term (regardless of how the transaction is characterized), the\nCorresponding Source conveyed under this section must be accompanied\nby the Installation Information.  But this requirement does not apply\nif neither you nor any third party retains the ability to install\nmodified object code on the User Product (for example, the work has\nbeen installed in ROM).\n\n  The requirement to provide Installation Information does not include a\nrequirement to continue to provide support service, warranty, or updates\nfor a work that has been modified or installed by the recipient, or for\nthe User Product in which it has been modified or installed.  
Access to a\nnetwork may be denied when the modification itself materially and\nadversely affects the operation of the network or violates the rules and\nprotocols for communication across the network.\n\n  Corresponding Source conveyed, and Installation Information provided,\nin accord with this section must be in a format that is publicly\ndocumented (and with an implementation available to the public in\nsource code form), and must require no special password or key for\nunpacking, reading or copying.\n\n  7. Additional Terms.\n\n  \"Additional permissions\" are terms that supplement the terms of this\nLicense by making exceptions from one or more of its conditions.\nAdditional permissions that are applicable to the entire Program shall\nbe treated as though they were included in this License, to the extent\nthat they are valid under applicable law.  If additional permissions\napply only to part of the Program, that part may be used separately\nunder those permissions, but the entire Program remains governed by\nthis License without regard to the additional permissions.\n\n  When you convey a copy of a covered work, you may at your option\nremove any additional permissions from that copy, or from any part of\nit.  (Additional permissions may be written to require their own\nremoval in certain cases when you modify the work.)  
You may place\nadditional permissions on material, added by you to a covered work,\nfor which you have or can give appropriate copyright permission.\n\n  Notwithstanding any other provision of this License, for material you\nadd to a covered work, you may (if authorized by the copyright holders of\nthat material) supplement the terms of this License with terms:\n\n    a) Disclaiming warranty or limiting liability differently from the\n    terms of sections 15 and 16 of this License; or\n\n    b) Requiring preservation of specified reasonable legal notices or\n    author attributions in that material or in the Appropriate Legal\n    Notices displayed by works containing it; or\n\n    c) Prohibiting misrepresentation of the origin of that material, or\n    requiring that modified versions of such material be marked in\n    reasonable ways as different from the original version; or\n\n    d) Limiting the use for publicity purposes of names of licensors or\n    authors of the material; or\n\n    e) Declining to grant rights under trademark law for use of some\n    trade names, trademarks, or service marks; or\n\n    f) Requiring indemnification of licensors and authors of that\n    material by anyone who conveys the material (or modified versions of\n    it) with contractual assumptions of liability to the recipient, for\n    any liability that these contractual assumptions directly impose on\n    those licensors and authors.\n\n  All other non-permissive additional terms are considered \"further\nrestrictions\" within the meaning of section 10.  If the Program as you\nreceived it, or any part of it, contains a notice stating that it is\ngoverned by this License along with a term that is a further\nrestriction, you may remove that term.  
If a license document contains\na further restriction but permits relicensing or conveying under this\nLicense, you may add to a covered work material governed by the terms\nof that license document, provided that the further restriction does\nnot survive such relicensing or conveying.\n\n  If you add terms to a covered work in accord with this section, you\nmust place, in the relevant source files, a statement of the\nadditional terms that apply to those files, or a notice indicating\nwhere to find the applicable terms.\n\n  Additional terms, permissive or non-permissive, may be stated in the\nform of a separately written license, or stated as exceptions;\nthe above requirements apply either way.\n\n  8. Termination.\n\n  You may not propagate or modify a covered work except as expressly\nprovided under this License.  Any attempt otherwise to propagate or\nmodify it is void, and will automatically terminate your rights under\nthis License (including any patent licenses granted under the third\nparagraph of section 11).\n\n  However, if you cease all violation of this License, then your\nlicense from a particular copyright holder is reinstated (a)\nprovisionally, unless and until the copyright holder explicitly and\nfinally terminates your license, and (b) permanently, if the copyright\nholder fails to notify you of the violation by some reasonable means\nprior to 60 days after the cessation.\n\n  Moreover, your license from a particular copyright holder is\nreinstated permanently if the copyright holder notifies you of the\nviolation by some reasonable means, this is the first time you have\nreceived notice of violation of this License (for any work) from that\ncopyright holder, and you cure the violation prior to 30 days after\nyour receipt of the notice.\n\n  Termination of your rights under this section does not terminate the\nlicenses of parties who have received copies or rights from you under\nthis License.  
If your rights have been terminated and not permanently\nreinstated, you do not qualify to receive new licenses for the same\nmaterial under section 10.\n\n  9. Acceptance Not Required for Having Copies.\n\n  You are not required to accept this License in order to receive or\nrun a copy of the Program.  Ancillary propagation of a covered work\noccurring solely as a consequence of using peer-to-peer transmission\nto receive a copy likewise does not require acceptance.  However,\nnothing other than this License grants you permission to propagate or\nmodify any covered work.  These actions infringe copyright if you do\nnot accept this License.  Therefore, by modifying or propagating a\ncovered work, you indicate your acceptance of this License to do so.\n\n  10. Automatic Licensing of Downstream Recipients.\n\n  Each time you convey a covered work, the recipient automatically\nreceives a license from the original licensors, to run, modify and\npropagate that work, subject to this License.  You are not responsible\nfor enforcing compliance by third parties with this License.\n\n  An \"entity transaction\" is a transaction transferring control of an\norganization, or substantially all assets of one, or subdividing an\norganization, or merging organizations.  If propagation of a covered\nwork results from an entity transaction, each party to that\ntransaction who receives a copy of the work also receives whatever\nlicenses to the work the party's predecessor in interest had or could\ngive under the previous paragraph, plus a right to possession of the\nCorresponding Source of the work from the predecessor in interest, if\nthe predecessor has it or can get it with reasonable efforts.\n\n  You may not impose any further restrictions on the exercise of the\nrights granted or affirmed under this License.  
For example, you may\nnot impose a license fee, royalty, or other charge for exercise of\nrights granted under this License, and you may not initiate litigation\n(including a cross-claim or counterclaim in a lawsuit) alleging that\nany patent claim is infringed by making, using, selling, offering for\nsale, or importing the Program or any portion of it.\n\n  11. Patents.\n\n  A \"contributor\" is a copyright holder who authorizes use under this\nLicense of the Program or a work on which the Program is based.  The\nwork thus licensed is called the contributor's \"contributor version\".\n\n  A contributor's \"essential patent claims\" are all patent claims\nowned or controlled by the contributor, whether already acquired or\nhereafter acquired, that would be infringed by some manner, permitted\nby this License, of making, using, or selling its contributor version,\nbut do not include claims that would be infringed only as a\nconsequence of further modification of the contributor version.  For\npurposes of this definition, \"control\" includes the right to grant\npatent sublicenses in a manner consistent with the requirements of\nthis License.\n\n  Each contributor grants you a non-exclusive, worldwide, royalty-free\npatent license under the contributor's essential patent claims, to\nmake, use, sell, offer for sale, import and otherwise run, modify and\npropagate the contents of its contributor version.\n\n  In the following three paragraphs, a \"patent license\" is any express\nagreement or commitment, however denominated, not to enforce a patent\n(such as an express permission to practice a patent or covenant not to\nsue for patent infringement).  
To \"grant\" such a patent license to a\nparty means to make such an agreement or commitment not to enforce a\npatent against the party.\n\n  If you convey a covered work, knowingly relying on a patent license,\nand the Corresponding Source of the work is not available for anyone\nto copy, free of charge and under the terms of this License, through a\npublicly available network server or other readily accessible means,\nthen you must either (1) cause the Corresponding Source to be so\navailable, or (2) arrange to deprive yourself of the benefit of the\npatent license for this particular work, or (3) arrange, in a manner\nconsistent with the requirements of this License, to extend the patent\nlicense to downstream recipients.  \"Knowingly relying\" means you have\nactual knowledge that, but for the patent license, your conveying the\ncovered work in a country, or your recipient's use of the covered work\nin a country, would infringe one or more identifiable patents in that\ncountry that you have reason to believe are valid.\n\n  If, pursuant to or in connection with a single transaction or\narrangement, you convey, or propagate by procuring conveyance of, a\ncovered work, and grant a patent license to some of the parties\nreceiving the covered work authorizing them to use, propagate, modify\nor convey a specific copy of the covered work, then the patent license\nyou grant is automatically extended to all recipients of the covered\nwork and works based on it.\n\n  A patent license is \"discriminatory\" if it does not include within\nthe scope of its coverage, prohibits the exercise of, or is\nconditioned on the non-exercise of one or more of the rights that are\nspecifically granted under this License.  
You may not convey a covered\nwork if you are a party to an arrangement with a third party that is\nin the business of distributing software, under which you make payment\nto the third party based on the extent of your activity of conveying\nthe work, and under which the third party grants, to any of the\nparties who would receive the covered work from you, a discriminatory\npatent license (a) in connection with copies of the covered work\nconveyed by you (or copies made from those copies), or (b) primarily\nfor and in connection with specific products or compilations that\ncontain the covered work, unless you entered into that arrangement,\nor that patent license was granted, prior to 28 March 2007.\n\n  Nothing in this License shall be construed as excluding or limiting\nany implied license or other defenses to infringement that may\notherwise be available to you under applicable patent law.\n\n  12. No Surrender of Others' Freedom.\n\n  If conditions are imposed on you (whether by court order, agreement or\notherwise) that contradict the conditions of this License, they do not\nexcuse you from the conditions of this License.  If you cannot convey a\ncovered work so as to satisfy simultaneously your obligations under this\nLicense and any other pertinent obligations, then as a consequence you may\nnot convey it at all.  For example, if you agree to terms that obligate you\nto collect a royalty for further conveying from those to whom you convey\nthe Program, the only way you could satisfy both those terms and this\nLicense would be to refrain entirely from conveying the Program.\n\n  13. Use with the GNU Affero General Public License.\n\n  Notwithstanding any other provision of this License, you have\npermission to link or combine any covered work with a work licensed\nunder version 3 of the GNU Affero General Public License into a single\ncombined work, and to convey the resulting work.  
The terms of this\nLicense will continue to apply to the part which is the covered work,\nbut the special requirements of the GNU Affero General Public License,\nsection 13, concerning interaction through a network will apply to the\ncombination as such.\n\n  14. Revised Versions of this License.\n\n  The Free Software Foundation may publish revised and/or new versions of\nthe GNU General Public License from time to time.  Such new versions will\nbe similar in spirit to the present version, but may differ in detail to\naddress new problems or concerns.\n\n  Each version is given a distinguishing version number.  If the\nProgram specifies that a certain numbered version of the GNU General\nPublic License \"or any later version\" applies to it, you have the\noption of following the terms and conditions either of that numbered\nversion or of any later version published by the Free Software\nFoundation.  If the Program does not specify a version number of the\nGNU General Public License, you may choose any version ever published\nby the Free Software Foundation.\n\n  If the Program specifies that a proxy can decide which future\nversions of the GNU General Public License can be used, that proxy's\npublic statement of acceptance of a version permanently authorizes you\nto choose that version for the Program.\n\n  Later license versions may give you additional or different\npermissions.  However, no additional obligations are imposed on any\nauthor or copyright holder as a result of your choosing to follow a\nlater version.\n\n  15. Disclaimer of Warranty.\n\n  THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY\nAPPLICABLE LAW.  EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT\nHOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM \"AS IS\" WITHOUT WARRANTY\nOF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,\nTHE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR\nPURPOSE.  
THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM\nIS WITH YOU.  SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF\nALL NECESSARY SERVICING, REPAIR OR CORRECTION.\n\n  16. Limitation of Liability.\n\n  IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING\nWILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS\nTHE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY\nGENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE\nUSE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF\nDATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD\nPARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),\nEVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF\nSUCH DAMAGES.\n\n  17. Interpretation of Sections 15 and 16.\n\n  If the disclaimer of warranty and limitation of liability provided\nabove cannot be given local legal effect according to their terms,\nreviewing courts shall apply local law that most closely approximates\nan absolute waiver of all civil liability in connection with the\nProgram, unless a warranty or assumption of liability accompanies a\ncopy of the Program in return for a fee.\n\n                     END OF TERMS AND CONDITIONS\n\n            How to Apply These Terms to Your New Programs\n\n  If you develop a new program, and you want it to be of the greatest\npossible use to the public, the best way to achieve this is to make it\nfree software which everyone can redistribute and change under these terms.\n\n  To do so, attach the following notices to the program.  
It is safest\nto attach them to the start of each source file to most effectively\nstate the exclusion of warranty; and each file should have at least\nthe \"copyright\" line and a pointer to where the full notice is found.\n\n    <one line to give the program's name and a brief idea of what it does.>\n    Copyright (C) <year>  <name of author>\n\n    This program is free software: you can redistribute it and/or modify\n    it under the terms of the GNU General Public License as published by\n    the Free Software Foundation, either version 3 of the License, or\n    (at your option) any later version.\n\n    This program is distributed in the hope that it will be useful,\n    but WITHOUT ANY WARRANTY; without even the implied warranty of\n    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n    GNU General Public License for more details.\n\n    You should have received a copy of the GNU General Public License\n    along with this program.  If not, see <https://www.gnu.org/licenses/>.\n\nAlso add information on how to contact you by electronic and paper mail.\n\n  If the program does terminal interaction, make it output a short\nnotice like this when it starts in an interactive mode:\n\n    <program>  Copyright (C) <year>  <name of author>\n    This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.\n    This is free software, and you are welcome to redistribute it\n    under certain conditions; type `show c' for details.\n\nThe hypothetical commands `show w' and `show c' should show the appropriate\nparts of the General Public License.  
Of course, your program's commands\nmight be different; for a GUI interface, you would use an \"about box\".\n\n  You should also get your employer (if you work as a programmer) or school,\nif any, to sign a \"copyright disclaimer\" for the program, if necessary.\nFor more information on this, and how to apply and follow the GNU GPL, see\n<https://www.gnu.org/licenses/>.\n\n  The GNU General Public License does not permit incorporating your program\ninto proprietary programs.  If your program is a subroutine library, you\nmay consider it more useful to permit linking proprietary applications with\nthe library.  If this is what you want to do, use the GNU Lesser General\nPublic License instead of this License.  But first, please read\n<https://www.gnu.org/licenses/why-not-lgpl.html>."
  },
  {
    "path": "packages/circuits/README.md",
    "content": "## Node.js\n\nRecommended: v18 or later\n\n## Install dependencies\n\n```\nyarn\n```\n\n## Run tests\n\nInstall [this](https://github.com/DanTehrani/circom-secq) fork of Circom that supports compiling to the secp256k1 base field.\n\n```\ngit clone https://github.com/DanTehrani/circom-secq\n```\n\n```\ncd circom-secq && cargo build --release && cargo install --path circom\n```\n\n(In this directory) Install dependencies\n\n```\nyarn\n```\n\nRun tests\n\n```\nyarn jest\n```\n"
  },
  {
    "path": "packages/circuits/eff_ecdsa_membership/addr_membership.circom",
    "content": "pragma circom 2.1.2;\n\ninclude \"./eff_ecdsa.circom\";\ninclude \"./tree.circom\";\ninclude \"./to_address/zk-identity/eth.circom\";\n\n/**\n *  AddrMembership\n *  ==============\n *  \n *  Checks that an inputted efficient ECDSA signature (definition and discussion \n *  can be found at https://personaelabs.org/posts/efficient-ecdsa-1/) \n *  is signed by a public key that when converted to an address is a member of\n *  a Merkle tree of addresses. The public key is extracted from the efficient \n *  ECDSA signature in EfficientECDSA(), and converted to an address by Keccak\n *  hashing the public key in PubkeyToAddress().\n */\ntemplate AddrMembership(nLevels) {\n    signal input s;\n    signal input root;\n    signal input Tx; \n    signal input Ty; \n    signal input Ux;\n    signal input Uy;\n    signal input pathIndices[nLevels];\n    signal input siblings[nLevels];\n\n    component effEcdsa = EfficientECDSA();\n    effEcdsa.Tx <== Tx;\n    effEcdsa.Ty <== Ty;\n    effEcdsa.Ux <== Ux;\n    effEcdsa.Uy <== Uy;\n    effEcdsa.s <== s;\n\n    component pubKeyXBits = Num2Bits(256);\n    pubKeyXBits.in <== effEcdsa.pubKeyX;\n\n    component pubKeyYBits = Num2Bits(256);\n    pubKeyYBits.in <== effEcdsa.pubKeyY;\n\n    component pubToAddr = PubkeyToAddress();\n\n    for (var i = 0; i < 256; i++) {\n        pubToAddr.pubkeyBits[i] <== pubKeyYBits.out[i];\n        pubToAddr.pubkeyBits[i + 256] <== pubKeyXBits.out[i];\n    }\n\n    component merkleProof = MerkleTreeInclusionProof(nLevels);\n    merkleProof.leaf <== pubToAddr.address;\n\n    for (var i = 0; i < nLevels; i++) {\n        merkleProof.pathIndices[i] <== pathIndices[i];\n        merkleProof.siblings[i] <== siblings[i];\n    }\n\n    root === merkleProof.root;\n}"
  },
  {
    "path": "packages/circuits/eff_ecdsa_membership/eff_ecdsa.circom",
    "content": "pragma circom 2.1.2;\n\ninclude \"./secp256k1/mul.circom\";\ninclude \"../../../node_modules/circomlib/circuits/bitify.circom\";\n\n/**\n *  EfficientECDSA\n *  ====================\n *  \n *  Converts inputted efficient ECDSA signature to an public key. There is no\n *  public key validation included.\n */\ntemplate EfficientECDSA() {\n    var bits = 256;\n    signal input s;\n    signal input Tx; // T = r^-1 * R\n    signal input Ty; \n    signal input Ux; // U = -(m * r^-1 * G)\n    signal input Uy;\n\n    signal output pubKeyX;\n    signal output pubKeyY;\n\n    // sMultT = s * T\n    component sMultT = Secp256k1Mul();\n    sMultT.scalar <== s;\n    sMultT.xP <== Tx;\n    sMultT.yP <== Ty;\n\n    // pubKey = sMultT + U \n    component pubKey = Secp256k1AddComplete();\n    pubKey.xP <== sMultT.outX;\n    pubKey.yP <== sMultT.outY;\n    pubKey.xQ <== Ux;\n    pubKey.yQ <== Uy;\n\n    pubKeyX <== pubKey.outX;\n    pubKeyY <== pubKey.outY;\n}"
  },
  {
    "path": "packages/circuits/eff_ecdsa_membership/eff_ecdsa_to_addr.circom",
    "content": "pragma circom 2.1.2;\n\ninclude \"./eff_ecdsa.circom\";\ninclude \"./to_address/zk-identity/eth.circom\";\n\n/**\n *  EfficientECDSAToAddr\n *  ====================\n *  \n *  Converts inputted efficient ECDSA signature to an address.\n */\ntemplate EfficientECDSAToAddr() {\n    var bits = 256;\n    signal input s;\n    signal input Tx; // T = r^-1 * R\n    signal input Ty; \n    signal input Ux; // U = -(m * r^-1 * G)\n    signal input Uy;\n    signal output addr;\n    \n    component effEcdsa = EfficientECDSA();\n    effEcdsa.s <== s;\n    effEcdsa.Tx <== Tx;\n    effEcdsa.Ty <== Ty;\n    effEcdsa.Ux <== Ux;\n    effEcdsa.Uy <== Uy;\n\n    component pubKeyXBits = Num2Bits(256);\n    pubKeyXBits.in <== effEcdsa.pubKeyX;\n\n    component pubKeyYBits = Num2Bits(256);\n    pubKeyYBits.in <== effEcdsa.pubKeyY;\n\n    component pubToAddr = PubkeyToAddress();\n\n    for (var i = 0; i < 256; i++) {\n        pubToAddr.pubkeyBits[i] <== pubKeyYBits.out[i];\n        pubToAddr.pubkeyBits[i + 256] <== pubKeyXBits.out[i];\n    }\n\n    addr <== pubToAddr.address;\n}\n"
  },
  {
    "path": "packages/circuits/eff_ecdsa_membership/pubkey_membership.circom",
    "content": "pragma circom 2.1.2;\n\ninclude \"./eff_ecdsa.circom\";\ninclude \"./tree.circom\";\ninclude \"../poseidon/poseidon.circom\";\n\n/**\n *  PubkeyMembership\n *  ================\n *  \n *  Checks that an inputted efficient ECDSA signature (definition and discussion \n *  can be found at https://personaelabs.org/posts/efficient-ecdsa-1/) \n *  is signed by a public key that is in a Merkle tree of public keys. Avoids the\n *  SNARK-unfriendly Keccak hash that must be performed when validating if the \n *  public key is in a Merkle tree of addresses.\n */\ntemplate PubKeyMembership(nLevels) {\n    signal input s;\n    signal input root;\n    signal input Tx; \n    signal input Ty; \n    signal input Ux;\n    signal input Uy;\n    signal input pathIndices[nLevels];\n    signal input siblings[nLevels];\n\n    component ecdsa = EfficientECDSA();\n    ecdsa.Tx <== Tx;\n    ecdsa.Ty <== Ty;\n    ecdsa.Ux <== Ux;\n    ecdsa.Uy <== Uy;\n    ecdsa.s <== s;\n\n    component pubKeyHash = Poseidon();\n    pubKeyHash.inputs[0] <== ecdsa.pubKeyX;\n    pubKeyHash.inputs[1] <== ecdsa.pubKeyY;\n\n    component merkleProof = MerkleTreeInclusionProof(nLevels);\n    merkleProof.leaf <== pubKeyHash.out;\n\n    for (var i = 0; i < nLevels; i++) {\n        merkleProof.pathIndices[i] <== pathIndices[i];\n        merkleProof.siblings[i] <== siblings[i];\n    }\n    root === merkleProof.root;\n}"
  },
  {
    "path": "packages/circuits/eff_ecdsa_membership/secp256k1/add.circom",
    "content": "pragma circom 2.1.2;\n\ninclude \"../../../../node_modules/circomlib/circuits/comparators.circom\";\ninclude \"../../../../node_modules/circomlib/circuits/gates.circom\";\n\n/**\n *  Secp256k1AddIncomplete\n *  ======================\n *\n *  Adds two points (xP, yP) and (xQ, yQ) on the secp256k1 curve. This function \n *  only works for points where xP != xQ and are not at infinity. We can implement \n *  the raw formulae for this operation as we are doing right field arithmetic \n *  (we are doing secp256k1 base field arithmetic in the secq256k1 scalar field, \n *  which are equal). Should work for any short Weierstrass curve (Pasta, P-256).\n */\ntemplate Secp256k1AddIncomplete() {\n    signal input xP;\n    signal input yP;\n    signal input xQ;\n    signal input yQ;\n    signal output outX;\n    signal output outY;\n\n    signal lambda;\n    signal dx;\n    signal dy;\n\n    dx <== xP - xQ;\n    dy <== yP - yQ;\n\n    lambda <-- dy / dx;\n    dx * lambda === dy;\n\n    outX <== lambda * lambda - xP - xQ;\n    outY <== lambda * (xP - outX) - yP;\n}\n\n/**\n *  Secp256k1AddComplete\n *  ====================\n *\n *  Implements https://zcash.github.io/halo2/design/gadgets/ecc/addition.html#complete-addition\n *  so we can add any pair of points. Assumes (0, 0) is not a valid point (which \n *  is true for secp256k1) and is used as the point at infinity.\n */\ntemplate Secp256k1AddComplete() {\n    signal input xP;\n    signal input yP;\n    signal input xQ;\n    signal input yQ;\n\n    signal output outX;\n    signal output outY;\n\n    signal xPSquared <== xP * xP;\n\n    component isXEqual = IsEqual();\n    isXEqual.in[0] <== xP;\n    isXEqual.in[1] <== xQ;\n\n    component isXpZero = IsZero();\n    isXpZero.in <== xP;\n \n    component isXqZero = IsZero();\n    isXqZero.in <== xQ;\n\n    component isXEitherZero = IsZero();\n    isXEitherZero.in <== (1 - isXpZero.out) * (1 - isXqZero.out);\n    \n    // dx = xQ - xP\n    // dy = xP != xQ ? 
yQ - yP : 0\n    // lambdaA = xP != xQ ? (yQ - yP) / (xQ - xP) : 0\n    signal dx <== xQ - xP;\n    signal dy <== (yQ - yP) * (1 - isXEqual.out);\n    signal lambdaA <-- ((yQ - yP) / dx) * (1 - isXEqual.out);\n    dx * lambdaA === dy;\n\n    // lambdaB = (3 * xP^2) / (2 * yP)\n    signal lambdaB <-- ((3 * xPSquared) / (2 * yP));\n    lambdaB * 2 * yP === 3 * xPSquared;\n\n    // lambda = xP != xQ ? lambdaA : lambdaB\n    signal lambda <== (lambdaB * isXEqual.out) + lambdaA;\n\n    // outAx = lambda^2 - xP - xQ\n    // outAy = lambda * (xP - outAx) - yP\n    signal outAx <== lambda * lambda - xP - xQ;\n    signal outAy <== lambda * (xP - outAx) - yP;\n\n    // (outBx, outBy) = xP != 0 and xQ != 0 ? (outAx, outAy) : (0, 0)\n    signal outBx <== outAx * (1 - isXEitherZero.out);\n    signal outBy <== outAy * (1 - isXEitherZero.out);\n\n    //(outCx, outCy) = xP = 0 ? (xQ, yQ) : (0, 0)\n    signal outCx <== isXpZero.out * xQ;\n    signal outCy <== isXpZero.out * yQ;\n\n    // (outDx, outDy) = xQ = 0 ? (xP, yP) : (0, 0)\n    signal outDx <== isXqZero.out * xP;\n    signal outDy <== isXqZero.out * yP;\n\n    // zeroizeA = (xP = xQ and yP = -yQ) ? 1 : 0\n    component zeroizeA = IsEqual();\n    zeroizeA.in[0] <== isXEqual.out;\n    zeroizeA.in[1] <== 1 - (yP + yQ);\n\n    // zeroizeB = (xP = 0 and xQ = 0) ? 1 : 0\n    component zeroizeB = AND();\n    zeroizeB.a <== isXpZero.out;\n    zeroizeB.b <== isXqZero.out;\n\n    // zeroize = (xP = xQ and yP = -yQ) or (xP = 0 and xQ = 0) ? 
1 : 0\n    // for this case we want to output the point at infinity (0, 0)\n    component zeroize = OR();\n    zeroize.a <== zeroizeA.out;\n    zeroize.b <== zeroizeB.out;\n\n    // The below three conditionals are mutually exclusive when zeroize = 0, \n    // so we can safely sum the outputs.\n    // outBx != 0 iff xP != 0 and xQ != 0\n    // outCx != 0 iff xP = 0\n    // outDx != 0 iff xQ = 0\n    outX <== (outBx + outCx + outDx) * (1 - zeroize.out);\n    outY <== (outBy + outCy + outDy) * (1 - zeroize.out);\n}"
  },
  {
    "path": "packages/circuits/eff_ecdsa_membership/secp256k1/double.circom",
    "content": "pragma circom 2.1.2;\n\n/**\n *  Secp256k1Double\n *  ===============\n *\n *  Double a specific point (xP, yP) on the secp256k1 curve. Should work for any \n *  short Weierstrass curve (Pasta, P-256).\n */\ntemplate Secp256k1Double() {\n    signal input xP; \n    signal input yP;\n\n    signal output outX;\n    signal output outY;\n\n    signal lambda;\n    signal xPSquared;\n\n    xPSquared <== xP * xP;\n\n    lambda <-- (3 * xPSquared) / (2 * yP);\n    lambda * 2 * yP === 3 * xPSquared;\n\n    outX <== lambda * lambda - (2 * xP);\n    outY <== lambda * (xP - outX) - yP;\n}\n"
  },
  {
    "path": "packages/circuits/eff_ecdsa_membership/secp256k1/mul.circom",
    "content": "pragma circom 2.1.2;\n\ninclude \"./add.circom\";\ninclude \"./double.circom\";\ninclude \"../../../../node_modules/circomlib/circuits/bitify.circom\";\ninclude \"../../../../node_modules/circomlib/circuits/comparators.circom\";\ninclude \"../../../../node_modules/circomlib/circuits/gates.circom\";\n\n// \n\n/**\n *  Secp256k1Mul\n *  ============\n *\n *  Implements https://zcash.github.io/halo2/design/gadgets/ecc/var-base-scalar-mul.html\n *  which allows us to use incomplete addition for the majority of the addition steps\n *  and only use complete addition for the final 3 steps.\n */\ntemplate Secp256k1Mul() {\n    var bits = 256;\n    signal input scalar;\n    signal input xP; \n    signal input yP;\n    signal output outX;\n    signal output outY;\n\n    component kBits = K();\n    kBits.s <== scalar;\n\n    component acc0 = Secp256k1Double();\n    acc0.xP <== xP;\n    acc0.yP <== yP;\n\n    component PIncomplete[bits-3]; \n    component accIncomplete[bits];\n\n    for (var i = 0; i < bits-3; i++) {\n        if (i == 0) {\n            PIncomplete[i] = Secp256k1AddIncomplete(); // (Acc + P)\n            PIncomplete[i].xP <== xP; // kBits[i] ? xP : -xP;\n            PIncomplete[i].yP <== -yP;// kBits[i] ? xP : -xP;\n            PIncomplete[i].xQ <== acc0.outX;\n            PIncomplete[i].yQ <== acc0.outY;\n            \n\n            accIncomplete[i] = Secp256k1AddIncomplete(); // (Acc + P) + Acc\n            accIncomplete[i].xP <== acc0.outX;\n            accIncomplete[i].yP <== acc0.outY;\n            accIncomplete[i].xQ <== PIncomplete[i].outX;\n            accIncomplete[i].yQ <== PIncomplete[i].outY;\n        } else {\n            PIncomplete[i] = Secp256k1AddIncomplete(); // (Acc + P)\n            PIncomplete[i].xP <== xP; // k_i ? xP : -xP;\n            PIncomplete[i].yP <== (2 * kBits.out[bits-i] - 1) * yP;// k_i ? 
yP : -yP;\n            PIncomplete[i].xQ <== accIncomplete[i-1].outX;\n            PIncomplete[i].yQ <== accIncomplete[i-1].outY;\n\n            accIncomplete[i] = Secp256k1AddIncomplete(); // (Acc + P) + Acc\n            accIncomplete[i].xP <== accIncomplete[i-1].outX;\n            accIncomplete[i].yP <== accIncomplete[i-1].outY;\n            accIncomplete[i].xQ <== PIncomplete[i].outX;\n            accIncomplete[i].yQ <== PIncomplete[i].outY;\n        }\n    }\n\n    component PComplete[3];\n    component accComplete[3];\n\n    for (var i = 0; i < 3; i++) {\n        PComplete[i] = Secp256k1AddComplete(); // (Acc + P)\n\n        PComplete[i].xP <== xP; // k_i ? xP : -xP;\n        PComplete[i].yP <== (2 * kBits.out[3 - i] - 1) * yP; // k_i ? yP : -yP;\n        if (i == 0) {\n            PComplete[i].xQ <== accIncomplete[252].outX;\n            PComplete[i].yQ <== accIncomplete[252].outY;\n        } else {\n            PComplete[i].xQ <== accComplete[i-1].outX;\n            PComplete[i].yQ <== accComplete[i-1].outY;\n        }\n\n        accComplete[i] = Secp256k1AddComplete(); // (Acc + P) + Acc\n        if (i == 0) {\n            accComplete[i].xP <== accIncomplete[252].outX;\n            accComplete[i].yP <== accIncomplete[252].outY;\n        } else {\n            accComplete[i].xP <== accComplete[i-1].outX;\n            accComplete[i].yP <== accComplete[i-1].outY;\n        }\n\n        accComplete[i].xQ <== PComplete[i].outX;\n        accComplete[i].yQ <== PComplete[i].outY;\n    }\n\n    component out = Secp256k1AddComplete();\n    out.xP <== accComplete[2].outX;\n    out.yP <== accComplete[2].outY;\n    out.xQ <== (1 - kBits.out[0]) * xP;\n    out.yQ <== (1 - kBits.out[0]) * -yP;\n\n    outX <== out.outX;\n    outY <== out.outY;\n}\n\n// Calculate k = (s + tQ) % q as follows:\n// Define notation: (s + tQ) / q = (quotient, remainder)\n// We can calculate the quotient and remainder as:\n// (s + tQ) < q ? 
(0, s + tQ) : (1, (s + tQ) - q)\n// We use 128-bit registers to calculate the above since (s + tQ) can be larger than p.\ntemplate K() {\n    var bits = 256;\n    signal input s;\n    signal output out[bits];\n\n    // Split elements into 128-bit registers\n\n    var q = 0xfffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd0364141; // The order of the scalar field\n    var qlo = q & ((2 ** 128) - 1);\n    var qhi = q >> 128;\n    var tQ = 115792089237316195423570985008687907852405143892509244725752742275123193348738; // (q - 2^256) % q;\n    var tQlo = tQ & (2 ** (128) - 1);\n    var tQhi = tQ >> 128;\n    signal slo <-- s & (2 ** (128) - 1);\n    signal shi <-- s >> 128;\n\n    // Get carry bit of (slo + tQlo)\n\n    component inBits = Num2Bits(128 + 1);\n    inBits.in <== slo + tQlo;\n    signal carry <== inBits.out[128];\n\n    // check a >= b\n    // where\n    // a = (s + tQ)\n    // b = q\n\n    // - alpha: ahi > bhi\n    // - beta: ahi = bhi\n    // - gamma: alo ≥ blo\n    // if alpha or (beta and gamma) then a >= b\n\n    signal ahi <== shi + tQhi + carry;\n    signal bhi <== qhi;\n    signal alo <== slo + tQlo - (carry * 2 ** 128);\n    signal blo <== qlo;\n\n    component alpha = GreaterThan(129);\n    alpha.in[0] <== ahi;\n    alpha.in[1] <== bhi;\n\n    component beta = IsEqual();\n    beta.in[0] <== ahi;\n    beta.in[1] <== bhi;\n\n    component gamma = GreaterEqThan(129);\n    gamma.in[0] <== alo;\n    gamma.in[1] <== blo;\n\n    component betaANDgamma = AND();\n    betaANDgamma.a <== beta.out;\n    betaANDgamma.b <== gamma.out;\n\n    component isQuotientOne = OR();\n    isQuotientOne.a <== betaANDgamma.out;\n    isQuotientOne.b <== alpha.out;\n\n    // theta: (slo + tQlo) < qlo\n    component theta = GreaterThan(129);\n    theta.in[0] <== qlo;\n    theta.in[1] <== slo + tQlo;\n\n    // borrow: (slo + tQlo) < qlo and isQuotientOne ? 
1 : 0\n    component borrow = AND();\n    borrow.a <== theta.out;\n    borrow.b <== isQuotientOne.out;\n\n    signal klo <== (slo + tQlo + borrow.out * (2 ** 128)) - isQuotientOne.out * qlo;\n    signal khi <== (shi + tQhi - borrow.out * 1)  - isQuotientOne.out * qhi;\n\n    component kloBits = Num2Bits(256);\n    kloBits.in <== klo;\n\n    component khiBits = Num2Bits(256);\n    khiBits.in <== khi;\n\n    for (var i = 0; i < 128; i++) {\n        out[i] <== kloBits.out[i];\n        out[i + 128] <== khiBits.out[i];\n    }\n}"
  },
  {
    "path": "packages/circuits/eff_ecdsa_membership/to_address/vocdoni-keccak/keccak.circom",
    "content": "pragma circom 2.0.2;\n\ninclude \"./utils.circom\";\ninclude \"./permutations.circom\";\n\ntemplate Pad(nBits) {\n    signal input in[nBits];\n\n    var blockSize=136*8;\n    signal output out[blockSize];\n    signal out2[blockSize];\n\n    var i;\n\n    for (i=0; i<nBits; i++) {\n        out2[i] <== in[i];\n    }\n    var domain = 0x01;\n    for (i=0; i<8; i++) {\n        out2[nBits+i] <== (domain >> i) & 1;\n    }\n    for (i=nBits+8; i<blockSize; i++) {\n        out2[i] <== 0;\n    }\n    component aux = OrArray(8);\n    for (i=0; i<8; i++) {\n        aux.a[i] <== out2[blockSize-8+i];\n        aux.b[i] <== (0x80 >> i) & 1;\n    }\n    for (i=0; i<8; i++) {\n        out[blockSize-8+i] <== aux.out[i];\n    }\n    for (i=0; i<blockSize-8; i++) {\n        out[i]<==out2[i];\n    }\n}\n\ntemplate KeccakfRound(r) {\n    signal input in[25*64];\n    signal output out[25*64];\n    var i;\n\n    component theta = Theta();\n    component rhopi = RhoPi();\n    component chi = Chi();\n    component iota = Iota(r);\n\n    for (i=0; i<25*64; i++) {\n        theta.in[i] <== in[i];\n    }\n    for (i=0; i<25*64; i++) {\n        rhopi.in[i] <== theta.out[i];\n    }\n    for (i=0; i<25*64; i++) {\n        chi.in[i] <== rhopi.out[i];\n    }\n    for (i=0; i<25*64; i++) {\n        iota.in[i] <== chi.out[i];\n    }\n    for (i=0; i<25*64; i++) {\n        out[i] <== iota.out[i];\n    }\n}\n\ntemplate Absorb() {\n    var blockSizeBytes=136;\n\n    signal input s[25*64];\n    signal input block[blockSizeBytes*8];\n    signal output out[25*64];\n    var i;\n    var j;\n\n    component aux[blockSizeBytes/8];\n    component newS = Keccakf();\n\n    for (i=0; i<blockSizeBytes/8; i++) {\n        aux[i] = XorArray(64);\n        for (j=0; j<64; j++) {\n            aux[i].a[j] <== s[i*64+j];\n            aux[i].b[j] <== block[i*64+j];\n        }\n        for (j=0; j<64; j++) {\n            newS.in[i*64+j] <== aux[i].out[j];\n        }\n    }\n    // fill the missing s that was 
not covered by the loop over\n    // blockSizeBytes/8\n    for (i=(blockSizeBytes/8)*64; i<25*64; i++) {\n            newS.in[i] <== s[i];\n    }\n    for (i=0; i<25*64; i++) {\n        out[i] <== newS.out[i];\n    }\n}\n\ntemplate Final(nBits) {\n    signal input in[nBits];\n    signal output out[25*64];\n    var blockSize=136*8;\n    var i;\n\n    // pad\n    component pad = Pad(nBits);\n    for (i=0; i<nBits; i++) {\n        pad.in[i] <== in[i];\n    }\n    // absorb\n    component abs = Absorb();\n    for (i=0; i<blockSize; i++) {\n        abs.block[i] <== pad.out[i];\n    }\n    for (i=0; i<25*64; i++) {\n        abs.s[i] <== 0;\n    }\n    for (i=0; i<25*64; i++) {\n        out[i] <== abs.out[i];\n    }\n}\n\ntemplate Squeeze(nBits) {\n    signal input s[25*64];\n    signal output out[nBits];\n    var i;\n    var j;\n\n    for (i=0; i<25; i++) {\n        for (j=0; j<64; j++) {\n            if (i*64+j<nBits) {\n                out[i*64+j] <== s[i*64+j];\n            }\n        }\n    }\n}\n\ntemplate Keccakf() {\n    signal input in[25*64];\n    signal output out[25*64];\n    var i;\n    var j;\n\n    // 24 rounds\n    component round[24];\n    signal midRound[24*25*64];\n    for (i=0; i<24; i++) {\n        round[i] = KeccakfRound(i);\n        if (i==0) {\n            for (j=0; j<25*64; j++) {\n                midRound[j] <== in[j];\n            }\n        }\n        for (j=0; j<25*64; j++) {\n            round[i].in[j] <== midRound[i*25*64+j];\n        }\n        if (i<23) {\n            for (j=0; j<25*64; j++) {\n                midRound[(i+1)*25*64+j] <== round[i].out[j];\n            }\n        }\n    }\n\n    for (i=0; i<25*64; i++) {\n        out[i] <== round[23].out[i];\n    }\n}\n\ntemplate Keccak(nBitsIn, nBitsOut) {\n    signal input in[nBitsIn];\n    signal output out[nBitsOut];\n    var i;\n\n    component f = Final(nBitsIn);\n    for (i=0; i<nBitsIn; i++) {\n        f.in[i] <== in[i];\n    }\n    component squeeze = Squeeze(nBitsOut);\n    for 
(i=0; i<25*64; i++) {\n        squeeze.s[i] <== f.out[i];\n    }\n    for (i=0; i<nBitsOut; i++) {\n        out[i] <== squeeze.out[i];\n    }\n}\n"
  },
  {
    "path": "packages/circuits/eff_ecdsa_membership/to_address/vocdoni-keccak/permutations.circom",
    "content": "pragma circom 2.0.2;\n\ninclude \"./utils.circom\";\n\n\n// Theta\n\ntemplate D(n, shl, shr) {\n    // d = b ^ (a<<shl | a>>shr)\n    signal input a[n];\n    signal input b[n];\n    signal output out[n];\n    var i;\n\n    component aux0 = ShR(64, shr);\n    for (i=0; i<64; i++) {\n        aux0.in[i] <== a[i];\n    }\n    component aux1 = ShL(64, shl);\n    for (i=0; i<64; i++) {\n        aux1.in[i] <== a[i];\n    }\n    component aux2 = OrArray(64);\n    for (i=0; i<64; i++) {\n        aux2.a[i] <== aux0.out[i];\n        aux2.b[i] <== aux1.out[i];\n    }\n    component aux3 = XorArray(64);\n    for (i=0; i<64; i++) {\n        aux3.a[i] <== b[i];\n        aux3.b[i] <== aux2.out[i];\n    }\n    for (i=0; i<64; i++) {\n        out[i] <== aux3.out[i];\n    }\n}\n\ntemplate Theta() {\n    signal input in[25*64];\n    signal output out[25*64];\n\n    var i;\n\n    component c0 = Xor5(64);\n    for (i=0; i<64; i++) {\n        c0.a[i] <== in[i];\n        c0.b[i] <== in[5*64+i];\n        c0.c[i] <== in[10*64+i];\n        c0.d[i] <== in[15*64+i];\n        c0.e[i] <== in[20*64+i];\n    }\n\n    component c1 = Xor5(64);\n    for (i=0; i<64; i++) {\n        c1.a[i] <== in[1*64+i];\n        c1.b[i] <== in[6*64+i];\n        c1.c[i] <== in[11*64+i];\n        c1.d[i] <== in[16*64+i];\n        c1.e[i] <== in[21*64+i];\n    }\n\n    component c2 = Xor5(64);\n    for (i=0; i<64; i++) {\n        c2.a[i] <== in[2*64+i];\n        c2.b[i] <== in[7*64+i];\n        c2.c[i] <== in[12*64+i];\n        c2.d[i] <== in[17*64+i];\n        c2.e[i] <== in[22*64+i];\n    }\n\n    component c3 = Xor5(64);\n    for (i=0; i<64; i++) {\n        c3.a[i] <== in[3*64+i];\n        c3.b[i] <== in[8*64+i];\n        c3.c[i] <== in[13*64+i];\n        c3.d[i] <== in[18*64+i];\n        c3.e[i] <== in[23*64+i];\n    }\n\n    component c4 = Xor5(64);\n    for (i=0; i<64; i++) {\n        c4.a[i] <== in[4*64+i];\n        c4.b[i] <== in[9*64+i];\n        c4.c[i] <== in[14*64+i];\n        c4.d[i] <== 
in[19*64+i];\n        c4.e[i] <== in[24*64+i];\n    }\n\n    // d = c4 ^ (c1<<1 | c1>>(64-1))\n    component d0 = D(64, 1, 64-1);\n    for (i=0; i<64; i++) {\n        d0.a[i] <== c1.out[i];\n        d0.b[i] <== c4.out[i];\n    }\n    // r[0] = a[0] ^ d\n    component r0 = XorArray(64);\n    for (i=0; i<64; i++) {\n        r0.a[i] <== in[i];\n        r0.b[i] <== d0.out[i];\n    }\n    for (i=0; i<64; i++) {\n        out[i] <== r0.out[i];\n    }\n    // r[5] = a[5] ^ d\n    component r5 = XorArray(64);\n    for (i=0; i<64; i++) {\n        r5.a[i] <== in[5*64+i];\n        r5.b[i] <== d0.out[i];\n    }\n    for (i=0; i<64; i++) {\n        out[5*64+i] <== r5.out[i];\n    }\n    // r[10] = a[10] ^ d\n    component r10 = XorArray(64);\n    for (i=0; i<64; i++) {\n        r10.a[i] <== in[10*64+i];\n        r10.b[i] <== d0.out[i];\n    }\n    for (i=0; i<64; i++) {\n        out[10*64+i] <== r10.out[i];\n    }\n    // r[15] = a[15] ^ d\n    component r15 = XorArray(64);\n    for (i=0; i<64; i++) {\n        r15.a[i] <== in[15*64+i];\n        r15.b[i] <== d0.out[i];\n    }\n    for (i=0; i<64; i++) {\n        out[15*64+i] <== r15.out[i];\n    }\n    // r[20] = a[20] ^ d\n    component r20 = XorArray(64);\n    for (i=0; i<64; i++) {\n        r20.a[i] <== in[20*64+i];\n        r20.b[i] <== d0.out[i];\n    }\n    for (i=0; i<64; i++) {\n        out[20*64+i] <== r20.out[i];\n    }\n\n    // d = c0 ^ (c2<<1 | c2>>(64-1))\n    component d1 = D(64, 1, 64-1);\n    for (i=0; i<64; i++) {\n        d1.a[i] <== c2.out[i];\n        d1.b[i] <== c0.out[i];\n    }\n    // r[1] = a[1] ^ d\n    component r1 = XorArray(64);\n    for (i=0; i<64; i++) {\n        r1.a[i] <== in[1*64+i];\n        r1.b[i] <== d1.out[i];\n    }\n    for (i=0; i<64; i++) {\n        out[1*64+i] <== r1.out[i];\n    }\n    // r[6] = a[6] ^ d\n    component r6 = XorArray(64);\n    for (i=0; i<64; i++) {\n        r6.a[i] <== in[6*64+i];\n        r6.b[i] <== d1.out[i];\n    }\n    for (i=0; i<64; i++) {\n        out[6*64+i] 
<== r6.out[i];\n    }\n    // r[11] = a[11] ^ d\n    component r11 = XorArray(64);\n    for (i=0; i<64; i++) {\n        r11.a[i] <== in[11*64+i];\n        r11.b[i] <== d1.out[i];\n    }\n    for (i=0; i<64; i++) {\n        out[11*64+i] <== r11.out[i];\n    }\n    // r[16] = a[16] ^ d\n    component r16 = XorArray(64);\n    for (i=0; i<64; i++) {\n        r16.a[i] <== in[16*64+i];\n        r16.b[i] <== d1.out[i];\n    }\n    for (i=0; i<64; i++) {\n        out[16*64+i] <== r16.out[i];\n    }\n    // r[21] = a[21] ^ d\n    component r21 = XorArray(64);\n    for (i=0; i<64; i++) {\n        r21.a[i] <== in[21*64+i];\n        r21.b[i] <== d1.out[i];\n    }\n    for (i=0; i<64; i++) {\n        out[21*64+i] <== r21.out[i];\n    }\n\n    // d = c1 ^ (c3<<1 | c3>>(64-1))\n    component d2 = D(64, 1, 64-1);\n    for (i=0; i<64; i++) {\n        d2.a[i] <== c3.out[i];\n        d2.b[i] <== c1.out[i];\n    }\n    // r[2] = a[2] ^ d\n    component r2 = XorArray(64);\n    for (i=0; i<64; i++) {\n        r2.a[i] <== in[2*64+i];\n        r2.b[i] <== d2.out[i];\n    }\n    for (i=0; i<64; i++) {\n        out[2*64+i] <== r2.out[i];\n    }\n    // r[7] = a[7] ^ d\n    component r7 = XorArray(64);\n    for (i=0; i<64; i++) {\n        r7.a[i] <== in[7*64+i];\n        r7.b[i] <== d2.out[i];\n    }\n    for (i=0; i<64; i++) {\n        out[7*64+i] <== r7.out[i];\n    }\n    // r[12] = a[12] ^ d\n    component r12 = XorArray(64);\n    for (i=0; i<64; i++) {\n        r12.a[i] <== in[12*64+i];\n        r12.b[i] <== d2.out[i];\n    }\n    for (i=0; i<64; i++) {\n        out[12*64+i] <== r12.out[i];\n    }\n    // r[17] = a[17] ^ d\n    component r17 = XorArray(64);\n    for (i=0; i<64; i++) {\n        r17.a[i] <== in[17*64+i];\n        r17.b[i] <== d2.out[i];\n    }\n    for (i=0; i<64; i++) {\n        out[17*64+i] <== r17.out[i];\n    }\n    // r[22] = a[22] ^ d\n    component r22 = XorArray(64);\n    for (i=0; i<64; i++) {\n        r22.a[i] <== in[22*64+i];\n        r22.b[i] <== d2.out[i];\n  
  }\n    for (i=0; i<64; i++) {\n        out[22*64+i] <== r22.out[i];\n    }\n\n    // d = c2 ^ (c4<<1 | c4>>(64-1))\n    component d3 = D(64, 1, 64-1);\n    for (i=0; i<64; i++) {\n        d3.a[i] <== c4.out[i];\n        d3.b[i] <== c2.out[i];\n    }\n    // r[3] = a[3] ^ d\n    component r3 = XorArray(64);\n    for (i=0; i<64; i++) {\n        r3.a[i] <== in[3*64+i];\n        r3.b[i] <== d3.out[i];\n    }\n    for (i=0; i<64; i++) {\n        out[3*64+i] <== r3.out[i];\n    }\n    // r[8] = a[8] ^ d\n    component r8 = XorArray(64);\n    for (i=0; i<64; i++) {\n        r8.a[i] <== in[8*64+i];\n        r8.b[i] <== d3.out[i];\n    }\n    for (i=0; i<64; i++) {\n        out[8*64+i] <== r8.out[i];\n    }\n    // r[13] = a[13] ^ d\n    component r13 = XorArray(64);\n    for (i=0; i<64; i++) {\n        r13.a[i] <== in[13*64+i];\n        r13.b[i] <== d3.out[i];\n    }\n    for (i=0; i<64; i++) {\n        out[13*64+i] <== r13.out[i];\n    }\n    // r[18] = a[18] ^ d\n    component r18 = XorArray(64);\n    for (i=0; i<64; i++) {\n        r18.a[i] <== in[18*64+i];\n        r18.b[i] <== d3.out[i];\n    }\n    for (i=0; i<64; i++) {\n        out[18*64+i] <== r18.out[i];\n    }\n    // r[23] = a[23] ^ d\n    component r23 = XorArray(64);\n    for (i=0; i<64; i++) {\n        r23.a[i] <== in[23*64+i];\n        r23.b[i] <== d3.out[i];\n    }\n    for (i=0; i<64; i++) {\n        out[23*64+i] <== r23.out[i];\n    }\n\n    // d = c3 ^ (c0<<1 | c0>>(64-1))\n    component d4 = D(64, 1, 64-1);\n    for (i=0; i<64; i++) {\n        d4.a[i] <== c0.out[i];\n        d4.b[i] <== c3.out[i];\n    }\n    // r[4] = a[4] ^ d\n    component r4 = XorArray(64);\n    for (i=0; i<64; i++) {\n        r4.a[i] <== in[4*64+i];\n        r4.b[i] <== d4.out[i];\n    }\n    for (i=0; i<64; i++) {\n        out[4*64+i] <== r4.out[i];\n    }\n    // r[9] = a[9] ^ d\n    component r9 = XorArray(64);\n    for (i=0; i<64; i++) {\n        r9.a[i] <== in[9*64+i];\n        r9.b[i] <== d4.out[i];\n    }\n    for (i=0; 
i<64; i++) {\n        out[9*64+i] <== r9.out[i];\n    }\n    // r[14] = a[14] ^ d\n    component r14 = XorArray(64);\n    for (i=0; i<64; i++) {\n        r14.a[i] <== in[14*64+i];\n        r14.b[i] <== d4.out[i];\n    }\n    for (i=0; i<64; i++) {\n        out[14*64+i] <== r14.out[i];\n    }\n    // r[19] = a[19] ^ d\n    component r19 = XorArray(64);\n    for (i=0; i<64; i++) {\n        r19.a[i] <== in[19*64+i];\n        r19.b[i] <== d4.out[i];\n    }\n    for (i=0; i<64; i++) {\n        out[19*64+i] <== r19.out[i];\n    }\n    // r[24] = a[24] ^ d\n    component r24 = XorArray(64);\n    for (i=0; i<64; i++) {\n        r24.a[i] <== in[24*64+i];\n        r24.b[i] <== d4.out[i];\n    }\n    for (i=0; i<64; i++) {\n        out[24*64+i] <== r24.out[i];\n    }\n}\n\n// RhoPi\n\ntemplate stepRhoPi(shl, shr) {\n    // out = a<<shl|a>>shr\n    signal input a[64];\n    signal output out[64];\n    var i;\n\n    component aux0 = ShR(64, shr);\n    for (i=0; i<64; i++) {\n        aux0.in[i] <== a[i];\n    }\n    component aux1 = ShL(64, shl);\n    for (i=0; i<64; i++) {\n        aux1.in[i] <== a[i];\n    }\n    component aux2 = OrArray(64);\n    for (i=0; i<64; i++) {\n        aux2.a[i] <== aux0.out[i];\n        aux2.b[i] <== aux1.out[i];\n    }\n    for (i=0; i<64; i++) {\n        out[i] <== aux2.out[i];\n    }\n}\ntemplate RhoPi() {\n    signal input in[25*64];\n    signal output out[25*64];\n\n    var i;\n\n    // r[10] = a[1]<<1|a[1]>>(64-1)\n    component s10 = stepRhoPi(1, 64-1);\n    for (i=0; i<64; i++) {\n        s10.a[i] <== in[1*64+i];\n    }\n    // r[7] = a[10]<<3|a[10]>>(64-3)\n    component s7 = stepRhoPi(3, 64-3);\n    for (i=0; i<64; i++) {\n        s7.a[i] <== in[10*64+i];\n    }\n    // r[11] = a[7]<<6|a[7]>>(64-6)\n    component s11 = stepRhoPi(6, 64-6);\n    for (i=0; i<64; i++) {\n        s11.a[i] <== in[7*64+i];\n    }\n    // r[17] = a[11]<<10|a[11]>>(64-10)\n    component s17 = stepRhoPi(10, 64-10);\n    for (i=0; i<64; i++) {\n        s17.a[i] <== 
in[11*64+i];\n    }\n    // r[18] = a[17]<<15|a[17]>>(64-15)\n    component s18 = stepRhoPi(15, 64-15);\n    for (i=0; i<64; i++) {\n        s18.a[i] <== in[17*64+i];\n    }\n    // r[3] = a[18]<<21|a[18]>>(64-21)\n    component s3 = stepRhoPi(21, 64-21);\n    for (i=0; i<64; i++) {\n        s3.a[i] <== in[18*64+i];\n    }\n    // r[5] = a[3]<<28|a[3]>>(64-28)\n    component s5 = stepRhoPi(28, 64-28);\n    for (i=0; i<64; i++) {\n        s5.a[i] <== in[3*64+i];\n    }\n    // r[16] = a[5]<<36|a[5]>>(64-36)\n    component s16 = stepRhoPi(36, 64-36);\n    for (i=0; i<64; i++) {\n        s16.a[i] <== in[5*64+i];\n    }\n    // r[8] = a[16]<<45|a[16]>>(64-45)\n    component s8 = stepRhoPi(45, 64-45);\n    for (i=0; i<64; i++) {\n        s8.a[i] <== in[16*64+i];\n    }\n    // r[21] = a[8]<<55|a[8]>>(64-55)\n    component s21 = stepRhoPi(55, 64-55);\n    for (i=0; i<64; i++) {\n        s21.a[i] <== in[8*64+i];\n    }\n    // r[24] = a[21]<<2|a[21]>>(64-2)\n    component s24 = stepRhoPi(2, 64-2);\n    for (i=0; i<64; i++) {\n        s24.a[i] <== in[21*64+i];\n    }\n    // r[4] = a[24]<<14|a[24]>>(64-14)\n    component s4 = stepRhoPi(14, 64-14);\n    for (i=0; i<64; i++) {\n        s4.a[i] <== in[24*64+i];\n    }\n    // r[15] = a[4]<<27|a[4]>>(64-27)\n    component s15 = stepRhoPi(27, 64-27);\n    for (i=0; i<64; i++) {\n        s15.a[i] <== in[4*64+i];\n    }\n    // r[23] = a[15]<<41|a[15]>>(64-41)\n    component s23 = stepRhoPi(41, 64-41);\n    for (i=0; i<64; i++) {\n        s23.a[i] <== in[15*64+i];\n    }\n    // r[19] = a[23]<<56|a[23]>>(64-56)\n    component s19 = stepRhoPi(56, 64-56);\n    for (i=0; i<64; i++) {\n        s19.a[i] <== in[23*64+i];\n    }\n    // r[13] = a[19]<<8|a[19]>>(64-8)\n    component s13 = stepRhoPi(8, 64-8);\n    for (i=0; i<64; i++) {\n        s13.a[i] <== in[19*64+i];\n    }\n    // r[12] = a[13]<<25|a[13]>>(64-25)\n    component s12 = stepRhoPi(25, 64-25);\n    for (i=0; i<64; i++) {\n        s12.a[i] <== in[13*64+i];\n    }\n    // 
r[2] = a[12]<<43|a[12]>>(64-43)\n    component s2 = stepRhoPi(43, 64-43);\n    for (i=0; i<64; i++) {\n        s2.a[i] <== in[12*64+i];\n    }\n    // r[20] = a[2]<<62|a[2]>>(64-62)\n    component s20 = stepRhoPi(62, 64-62);\n    for (i=0; i<64; i++) {\n        s20.a[i] <== in[2*64+i];\n    }\n    // r[14] = a[20]<<18|a[20]>>(64-18)\n    component s14 = stepRhoPi(18, 64-18);\n    for (i=0; i<64; i++) {\n        s14.a[i] <== in[20*64+i];\n    }\n    // r[22] = a[14]<<39|a[14]>>(64-39)\n    component s22 = stepRhoPi(39, 64-39);\n    for (i=0; i<64; i++) {\n        s22.a[i] <== in[14*64+i];\n    }\n    // r[9] = a[22]<<61|a[22]>>(64-61)\n    component s9 = stepRhoPi(61, 64-61);\n    for (i=0; i<64; i++) {\n        s9.a[i] <== in[22*64+i];\n    }\n    // r[6] = a[9]<<20|a[9]>>(64-20)\n    component s6 = stepRhoPi(20, 64-20);\n    for (i=0; i<64; i++) {\n        s6.a[i] <== in[9*64+i];\n    }\n    // r[1] = a[6]<<44|a[6]>>(64-44)\n    component s1 = stepRhoPi(44, 64-44);\n    for (i=0; i<64; i++) {\n        s1.a[i] <== in[6*64+i];\n    }\n\n    for (i=0; i<64; i++) {\n        out[i] <== in[i];\n        out[10*64+i] <== s10.out[i];\n        out[7*64+i] <== s7.out[i];\n        out[11*64+i] <== s11.out[i];\n        out[17*64+i] <== s17.out[i];\n        out[18*64+i] <== s18.out[i];\n        out[3*64+i] <== s3.out[i];\n        out[5*64+i] <== s5.out[i];\n        out[16*64+i] <== s16.out[i];\n        out[8*64+i] <== s8.out[i];\n        out[21*64+i] <== s21.out[i];\n        out[24*64+i] <== s24.out[i];\n        out[4*64+i] <== s4.out[i];\n        out[15*64+i] <== s15.out[i];\n        out[23*64+i] <== s23.out[i];\n        out[19*64+i] <== s19.out[i];\n        out[13*64+i] <== s13.out[i];\n        out[12*64+i] <== s12.out[i];\n        out[2*64+i] <== s2.out[i];\n        out[20*64+i] <== s20.out[i];\n        out[14*64+i] <== s14.out[i];\n        out[22*64+i] <== s22.out[i];\n        out[9*64+i] <== s9.out[i];\n        out[6*64+i] <== s6.out[i];\n        out[1*64+i] <== 
s1.out[i];\n    }\n}\n\n\n// Chi\n\ntemplate stepChi() {\n    // out = a ^ (^b) & c\n    signal input a[64];\n    signal input b[64];\n    signal input c[64];\n    signal output out[64];\n    var i;\n\n    // ^b\n    component bXor = XorArraySingle(64);\n    for (i=0; i<64; i++) {\n        bXor.a[i] <== b[i];\n    }\n    // (^b)&c\n    component bc = AndArray(64);\n    for (i=0; i<64; i++) {\n        bc.a[i] <== bXor.out[i];\n        bc.b[i] <== c[i];\n    }\n    // a^(^b)&c\n    component abc = XorArray(64);\n    for (i=0; i<64; i++) {\n        abc.a[i] <== a[i];\n        abc.b[i] <== bc.out[i];\n    }\n    for (i=0; i<64; i++) {\n        out[i] <== abc.out[i];\n    }\n}\n\ntemplate Chi() {\n    signal input in[25*64];\n    signal output out[25*64];\n\n    var i;\n\n    component r0 = stepChi();\n    for (i=0; i<64; i++) {\n        r0.a[i] <== in[i];\n        r0.b[i] <== in[1*64+i];\n        r0.c[i] <== in[2*64+i];\n    }\n    component r1 = stepChi();\n    for (i=0; i<64; i++) {\n        r1.a[i] <== in[1*64+i];\n        r1.b[i] <== in[2*64+i];\n        r1.c[i] <== in[3*64+i];\n    }\n    component r2 = stepChi();\n    for (i=0; i<64; i++) {\n        r2.a[i] <== in[2*64+i];\n        r2.b[i] <== in[3*64+i];\n        r2.c[i] <== in[4*64+i];\n    }\n    component r3 = stepChi();\n    for (i=0; i<64; i++) {\n        r3.a[i] <== in[3*64+i];\n        r3.b[i] <== in[4*64+i];\n        r3.c[i] <== in[0*64+i];\n    }\n    component r4 = stepChi();\n    for (i=0; i<64; i++) {\n        r4.a[i] <== in[4*64+i];\n        r4.b[i] <== in[i];\n        r4.c[i] <== in[1*64+i];\n    }\n\n    component r5 = stepChi();\n    for (i=0; i<64; i++) {\n        r5.a[i] <== in[5*64+i];\n        r5.b[i] <== in[6*64+i];\n        r5.c[i] <== in[7*64+i];\n    }\n    component r6 = stepChi();\n    for (i=0; i<64; i++) {\n        r6.a[i] <== in[6*64+i];\n        r6.b[i] <== in[7*64+i];\n        r6.c[i] <== in[8*64+i];\n    }\n    component r7 = stepChi();\n    for (i=0; i<64; i++) {\n        r7.a[i] 
<== in[7*64+i];\n        r7.b[i] <== in[8*64+i];\n        r7.c[i] <== in[9*64+i];\n    }\n    component r8 = stepChi();\n    for (i=0; i<64; i++) {\n        r8.a[i] <== in[8*64+i];\n        r8.b[i] <== in[9*64+i];\n        r8.c[i] <== in[5*64+i];\n    }\n    component r9 = stepChi();\n    for (i=0; i<64; i++) {\n        r9.a[i] <== in[9*64+i];\n        r9.b[i] <== in[5*64+i];\n        r9.c[i] <== in[6*64+i];\n    }\n\n    component r10 = stepChi();\n    for (i=0; i<64; i++) {\n        r10.a[i] <== in[10*64+i];\n        r10.b[i] <== in[11*64+i];\n        r10.c[i] <== in[12*64+i];\n    }\n    component r11 = stepChi();\n    for (i=0; i<64; i++) {\n        r11.a[i] <== in[11*64+i];\n        r11.b[i] <== in[12*64+i];\n        r11.c[i] <== in[13*64+i];\n    }\n    component r12 = stepChi();\n    for (i=0; i<64; i++) {\n        r12.a[i] <== in[12*64+i];\n        r12.b[i] <== in[13*64+i];\n        r12.c[i] <== in[14*64+i];\n    }\n    component r13 = stepChi();\n    for (i=0; i<64; i++) {\n        r13.a[i] <== in[13*64+i];\n        r13.b[i] <== in[14*64+i];\n        r13.c[i] <== in[10*64+i];\n    }\n    component r14 = stepChi();\n    for (i=0; i<64; i++) {\n        r14.a[i] <== in[14*64+i];\n        r14.b[i] <== in[10*64+i];\n        r14.c[i] <== in[11*64+i];\n    }\n\n    component r15 = stepChi();\n    for (i=0; i<64; i++) {\n        r15.a[i] <== in[15*64+i];\n        r15.b[i] <== in[16*64+i];\n        r15.c[i] <== in[17*64+i];\n    }\n    component r16 = stepChi();\n    for (i=0; i<64; i++) {\n        r16.a[i] <== in[16*64+i];\n        r16.b[i] <== in[17*64+i];\n        r16.c[i] <== in[18*64+i];\n    }\n    component r17 = stepChi();\n    for (i=0; i<64; i++) {\n        r17.a[i] <== in[17*64+i];\n        r17.b[i] <== in[18*64+i];\n        r17.c[i] <== in[19*64+i];\n    }\n    component r18 = stepChi();\n    for (i=0; i<64; i++) {\n        r18.a[i] <== in[18*64+i];\n        r18.b[i] <== in[19*64+i];\n        r18.c[i] <== in[15*64+i];\n    }\n    component r19 = 
stepChi();\n    for (i=0; i<64; i++) {\n        r19.a[i] <== in[19*64+i];\n        r19.b[i] <== in[15*64+i];\n        r19.c[i] <== in[16*64+i];\n    }\n\n    component r20 = stepChi();\n    for (i=0; i<64; i++) {\n        r20.a[i] <== in[20*64+i];\n        r20.b[i] <== in[21*64+i];\n        r20.c[i] <== in[22*64+i];\n    }\n    component r21 = stepChi();\n    for (i=0; i<64; i++) {\n        r21.a[i] <== in[21*64+i];\n        r21.b[i] <== in[22*64+i];\n        r21.c[i] <== in[23*64+i];\n    }\n    component r22 = stepChi();\n    for (i=0; i<64; i++) {\n        r22.a[i] <== in[22*64+i];\n        r22.b[i] <== in[23*64+i];\n        r22.c[i] <== in[24*64+i];\n    }\n    component r23 = stepChi();\n    for (i=0; i<64; i++) {\n        r23.a[i] <== in[23*64+i];\n        r23.b[i] <== in[24*64+i];\n        r23.c[i] <== in[20*64+i];\n    }\n    component r24 = stepChi();\n    for (i=0; i<64; i++) {\n        r24.a[i] <== in[24*64+i];\n        r24.b[i] <== in[20*64+i];\n        r24.c[i] <== in[21*64+i];\n    }\n\n    for (i=0; i<64; i++) {\n        out[i] <== r0.out[i];\n        out[1*64+i] <== r1.out[i];\n        out[2*64+i] <== r2.out[i];\n        out[3*64+i] <== r3.out[i];\n        out[4*64+i] <== r4.out[i];\n\n        out[5*64+i] <== r5.out[i];\n        out[6*64+i] <== r6.out[i];\n        out[7*64+i] <== r7.out[i];\n        out[8*64+i] <== r8.out[i];\n        out[9*64+i] <== r9.out[i];\n\n        out[10*64+i] <== r10.out[i];\n        out[11*64+i] <== r11.out[i];\n        out[12*64+i] <== r12.out[i];\n        out[13*64+i] <== r13.out[i];\n        out[14*64+i] <== r14.out[i];\n\n        out[15*64+i] <== r15.out[i];\n        out[16*64+i] <== r16.out[i];\n        out[17*64+i] <== r17.out[i];\n        out[18*64+i] <== r18.out[i];\n        out[19*64+i] <== r19.out[i];\n\n        out[20*64+i] <== r20.out[i];\n        out[21*64+i] <== r21.out[i];\n        out[22*64+i] <== r22.out[i];\n        out[23*64+i] <== r23.out[i];\n        out[24*64+i] <== r24.out[i];\n    }\n}\n\n// 
Iota\n\ntemplate RC(r) {\n    signal output out[64];\n    var rc[24] = [\n        0x0000000000000001, 0x0000000000008082, 0x800000000000808A,\n        0x8000000080008000, 0x000000000000808B, 0x0000000080000001,\n        0x8000000080008081, 0x8000000000008009, 0x000000000000008A,\n        0x0000000000000088, 0x0000000080008009, 0x000000008000000A,\n        0x000000008000808B, 0x800000000000008B, 0x8000000000008089,\n        0x8000000000008003, 0x8000000000008002, 0x8000000000000080,\n        0x000000000000800A, 0x800000008000000A, 0x8000000080008081,\n        0x8000000000008080, 0x0000000080000001, 0x8000000080008008\n    ];\n    for (var i=0; i<64; i++) {\n        out[i] <== (rc[r] >> i) & 1;\n    }\n}\n\ntemplate Iota(r) {\n    signal input in[25*64];\n    signal output out[25*64];\n    var i;\n\n    component rc = RC(r);\n\n    component iota = XorArray(64);\n    for (var i=0; i<64; i++) {\n        iota.a[i] <== in[i];\n        iota.b[i] <== rc.out[i];\n    }\n    for (i=0; i<64; i++) {\n        out[i] <== iota.out[i];\n    }\n    for (i=64; i<25*64; i++) {\n        out[i] <== in[i];\n    }\n}\n\n"
  },
  {
    "path": "packages/circuits/eff_ecdsa_membership/to_address/vocdoni-keccak/utils.circom",
    "content": "pragma circom 2.0.2;\n\ninclude \"../../../../../node_modules/circomlib/circuits/gates.circom\";\ninclude \"../../../../../node_modules/circomlib/circuits/sha256/xor3.circom\";\ninclude \"../../../../../node_modules/circomlib/circuits/sha256/shift.circom\"; // contains ShiftRight\n\ntemplate Xor5(n) {\n    signal input a[n];\n    signal input b[n];\n    signal input c[n];\n    signal input d[n];\n    signal input e[n];\n    signal output out[n];\n    var i;\n    \n    component xor3 = Xor3(n);\n    for (i=0; i<n; i++) {\n        xor3.a[i] <== a[i];\n        xor3.b[i] <== b[i];\n        xor3.c[i] <== c[i];\n    }\n    component xor4 = XorArray(n);\n    for (i=0; i<n; i++) {\n        xor4.a[i] <== xor3.out[i];\n        xor4.b[i] <== d[i];\n    }\n    component xor5 = XorArray(n);\n    for (i=0; i<n; i++) {\n        xor5.a[i] <== xor4.out[i];\n        xor5.b[i] <== e[i];\n    }\n    for (i=0; i<n; i++) {\n        out[i] <== xor5.out[i];\n    }\n}\n\ntemplate XorArray(n) {\n    signal input a[n];\n    signal input b[n];\n    signal output out[n];\n    var i;\n\n    component aux[n];\n    for (i=0; i<n; i++) {\n        aux[i] = XOR();\n        aux[i].a <== a[i];\n        aux[i].b <== b[i];\n    }\n    for (i=0; i<n; i++) {\n        out[i] <== aux[i].out;\n    }\n}\n\ntemplate XorArraySingle(n) {\n    signal input a[n];\n    signal output out[n];\n    var i;\n\n    component aux[n];\n    for (i=0; i<n; i++) {\n        aux[i] = XOR();\n        aux[i].a <== a[i];\n        aux[i].b <== 1;\n    }\n    for (i=0; i<n; i++) {\n        out[i] <== aux[i].out;\n    }\n}\n\ntemplate OrArray(n) {\n    signal input a[n];\n    signal input b[n];\n    signal output out[n];\n    var i;\n\n    component aux[n];\n    for (i=0; i<n; i++) {\n        aux[i] = OR();\n        aux[i].a <== a[i];\n        aux[i].b <== b[i];\n    }\n    for (i=0; i<n; i++) {\n        out[i] <== aux[i].out;\n    }\n}\n\ntemplate AndArray(n) {\n    signal input a[n];\n    signal input b[n];\n    
signal output out[n];\n    var i;\n\n    component aux[n];\n    for (i=0; i<n; i++) {\n        aux[i] = AND();\n        aux[i].a <== a[i];\n        aux[i].b <== b[i];\n    }\n    for (i=0; i<n; i++) {\n        out[i] <== aux[i].out;\n    }\n}\n\ntemplate ShL(n, r) {\n    signal input in[n];\n    signal output out[n];\n\n    for (var i=0; i<n; i++) {\n        if (i < r) {\n            out[i] <== 0;\n        } else {\n            out[i] <== in[ i-r ];\n        }\n    }\n}\n"
  },
  {
    "path": "packages/circuits/eff_ecdsa_membership/to_address/zk-identity/eth.circom",
    "content": "pragma circom 2.0.2;\n\ninclude \"../vocdoni-keccak/keccak.circom\";\n\ninclude \"../../../../../node_modules/circomlib/circuits/bitify.circom\";\n\n/*\n * Possibly generalizable, but for now just flatten a single pubkey from k n-bit chunks to a * single bit array\n * representing the entire pubkey\n *\n */\ntemplate FlattenPubkey(numBits, k) {\n  signal input chunkedPubkey[2][k];\n\n  signal output pubkeyBits[512];\n\n  // must be able to hold entire pubkey in input\n  assert(numBits*k >= 256);\n\n  // convert pubkey to a single bit array\n  // - concat x and y coords\n  // - convert each register's number to corresponding bit array\n  // - concatenate all bit arrays in order\n\n  component chunks2BitsY[k];\n  for(var chunk = 0; chunk < k; chunk++){\n    chunks2BitsY[chunk] = Num2Bits(numBits);\n    chunks2BitsY[chunk].in <== chunkedPubkey[1][chunk];\n\n    for(var bit = 0; bit < numBits; bit++){\n        var bitIndex = bit + numBits * chunk;\n        if(bitIndex < 256) {\n          pubkeyBits[bitIndex] <== chunks2BitsY[chunk].out[bit];\n        }\n    }\n  }\n\n  component chunks2BitsX[k];\n  for(var chunk = 0; chunk < k; chunk++){\n    chunks2BitsX[chunk] = Num2Bits(numBits);\n    chunks2BitsX[chunk].in <== chunkedPubkey[0][chunk];\n\n    for(var bit = 0; bit < numBits; bit++){\n        var bitIndex = bit + 256 + (numBits * chunk);\n        if(bitIndex < 512) {\n          pubkeyBits[bitIndex] <== chunks2BitsX[chunk].out[bit];\n        }\n    }\n  }\n}\n\n/*\n * Helper for verifying an eth address refers to the correct public key point\n *\n * NOTE: uses https://github.com/vocdoni/keccak256-circom, a highly experimental keccak256 implementation\n */\ntemplate PubkeyToAddress() {\n    // public key is (x, y) curve point. 
this is a 512-bit little-endian bitstring representation of y + 2**256 * x\n    signal input pubkeyBits[512];\n\n    signal output address;\n\n    // our representation is little-endian 512-bit bitstring\n    // keccak template operates on bytestrings one byte at a time, starting with the biggest byte\n    // but bytes are represented as little-endian 8-bit bitstrings\n    signal reverse[512];\n\n    for (var i = 0; i < 512; i++) {\n      reverse[i] <== pubkeyBits[511-i];\n    }\n\n    component keccak = Keccak(512, 256);\n    for (var i = 0; i < 512 / 8; i += 1) {\n      for (var j = 0; j < 8; j++) {\n        keccak.in[8*i + j] <== reverse[8*i + (7-j)];\n      }\n    }\n\n    // convert the last 160 bits (20 bytes) into the number corresponding to address\n    // the output of keccak is 32 bytes. bytes are arranged from largest to smallest\n    // but bytes themselves are little-endian bitstrings of 8 bits\n    // we just want a little-endian bitstring of 160 bits\n    component bits2Num = Bits2Num(160);\n    for (var i = 0; i < 20; i++) {\n      for (var j = 0; j < 8; j++) {\n        bits2Num.in[8*i + j] <== keccak.out[256 - 8*(i+1) + j];\n      }\n    }\n\n    address <== bits2Num.out;\n}"
  },
  {
    "path": "packages/circuits/eff_ecdsa_membership/tree.circom",
    "content": "pragma circom 2.1.2;\ninclude \"../poseidon/poseidon.circom\";\ninclude \"../../../node_modules/circomlib/circuits/mux1.circom\";\n\n/**\n *  MerkleTreeInclusionProof\n *  ========================\n *  \n *  Copy of the Merkle Tree implementation in Semaphore:\n *  https://github.com/semaphore-protocol/semaphore/blob/main/packages/circuits/tree.circom\n *  Instead of using the circomlib Poseidon, we use our own implementation which\n *  uses constants specific to the secp256k1 curve.\n */\ntemplate MerkleTreeInclusionProof(nLevels) {\n    signal input leaf;\n    signal input pathIndices[nLevels];\n    signal input siblings[nLevels];\n\n    signal output root;\n\n    component poseidons[nLevels];\n    component mux[nLevels];\n\n    signal hashes[nLevels + 1];\n    hashes[0] <== leaf;\n\n    for (var i = 0; i < nLevels; i++) {\n        pathIndices[i] * (1 - pathIndices[i]) === 0;\n\n        poseidons[i] = Poseidon();\n        mux[i] = MultiMux1(2);\n\n        mux[i].c[0][0] <== hashes[i];\n        mux[i].c[0][1] <== siblings[i];\n\n        mux[i].c[1][0] <== siblings[i];\n        mux[i].c[1][1] <== hashes[i];\n\n        mux[i].s <== pathIndices[i];\n\n        poseidons[i].inputs[0] <== mux[i].out[0];\n        poseidons[i].inputs[1] <== mux[i].out[1];\n\n        hashes[i + 1] <== poseidons[i].out;\n    }\n\n    root <== hashes[nLevels];\n}\n"
  },
  {
    "path": "packages/circuits/instances/addr_membership.circom",
    "content": "pragma circom 2.1.2;\n\ninclude \"../eff_ecdsa_membership/addr_membership.circom\";\n\ncomponent main { public[ root, Tx, Ty, Ux, Uy ]} = AddrMembership(20);"
  },
  {
    "path": "packages/circuits/instances/pubkey_membership.circom",
    "content": "pragma circom 2.1.2;\n\ninclude \"../eff_ecdsa_membership/pubkey_membership.circom\";\n\ncomponent main { public[ root, Tx, Ty, Ux, Uy ]} = PubKeyMembership(20);"
  },
  {
    "path": "packages/circuits/jest.config.js",
    "content": "/** @type {import('ts-jest').JestConfigWithTsJest} */\nmodule.exports = {\n  preset: 'ts-jest',\n  testEnvironment: 'node',\n  testTimeout: 100000,\n};"
  },
  {
    "path": "packages/circuits/package.json",
    "content": "{\n  \"name\": \"@personaelabs/spartan-ecdsa-circuits\",\n  \"version\": \"0.1.0\",\n  \"main\": \"index.js\",\n  \"license\": \"MIT\",\n  \"dependencies\": {\n    \"@ethereumjs/util\": \"^8.0.3\",\n    \"bn.js\": \"^5.2.1\",\n    \"circom_tester\": \"^0.0.19\",\n    \"circomlib\": \"^2.0.5\",\n    \"circomlibjs\": \"^0.1.7\",\n    \"elliptic\": \"^6.5.4\"\n  },\n  \"devDependencies\": {\n    \"@personaelabs/spartan-ecdsa\": \"*\",\n    \"@zk-kit/incremental-merkle-tree\": \"^1.0.0\",\n    \"jest\": \"^29.3.1\",\n    \"ts-jest\": \"^29.0.3\",\n    \"typescript\": \"^4.9.4\"\n  }\n}\n"
  },
  {
    "path": "packages/circuits/poseidon/poseidon.circom",
    "content": "pragma circom 2.1.2;\n\ninclude \"./poseidon_constants.circom\";\n\ntemplate SBox() {\n    signal input in;\n    signal output out;\n\n    signal inDouble <== in * in;\n    signal inQuadruple <== inDouble * inDouble;\n    \n\n    out <== inQuadruple * in;\n}\n\ntemplate MatrixMul() {\n    var t = 3;\n    signal input state[t];\n    signal output out[t];\n    var mds_matrix[t][t] = MDS_MATRIX();\n\n    for (var i = 0; i < t; i++) {\n        var tmp = 0;\n        for (var j = 0; j < t; j++) {\n            tmp += state[j] * mds_matrix[i][j];\n        }\n        out[i] <== tmp;\n    }\n}\n\ntemplate AddRoundConst(pos) {\n    var t = 3;\n    signal input state[t];\n    signal output out[t]; \n    var round_keys[192] = ROUND_KEYS();\n\n    for (var i = 0; i < t; i++) {\n        out[i] <== state[i] + round_keys[pos + i];\n    }\n}\n\ntemplate FullRound(pos) {\n    var t = 3;\n    signal input state[t];\n    signal output out[t];\n    component constAdded = AddRoundConst(pos);\n    for (var i = 0; i < t; i++) {\n        constAdded.state[i] <== state[i];\n    }\n\n\n    component sBoxes[t];\n    for (var i = 0; i < t; i++) {\n        sBoxes[i] = SBox();\n        sBoxes[i].in <== constAdded.out[i];\n    }\n\n    component matrixMul = MatrixMul();\n    for (var i = 0; i < t; i++) {\n        matrixMul.state[i] <== sBoxes[i].out;\n    }\n\n    for (var i = 0; i < t; i++) {\n        out[i] <== matrixMul.out[i];\n    }\n}\n\ntemplate PartialRound(pos) {\n    var t = 3;\n    signal input state[t];\n    signal output out[t];\n\n    component constAdded = AddRoundConst(pos);\n    for (var i = 0; i < t; i++) {\n        constAdded.state[i] <== state[i];\n    }\n\n    component sBox = SBox();\n    sBox.in <== constAdded.out[0];\n\n    component matrixMul = MatrixMul();\n    for (var i = 0; i < t; i++) {\n        if (i == 0) {\n            matrixMul.state[i] <== sBox.out;\n        } else {\n            matrixMul.state[i] <== constAdded.out[i];\n        }\n    }\n\n    
for (var i = 0; i < t; i++) {\n        out[i] <== matrixMul.out[i];\n    }\n}\n\ntemplate Poseidon() {\n    var numInputs = 2;\n    var t = numInputs + 1;\n    signal input inputs[numInputs];\n    var numFullRoundsHalf = 4;\n    var numPartialRounds = 56;\n    signal output out;\n\n    var stateIndex = 0;\n    \n    signal initState[3];\n\n    initState[0] <== 3;\n    initState[1] <== inputs[0];\n    initState[2] <== inputs[1];\n\n    component fRoundsFirst[numFullRoundsHalf];\n    for (var j = 0; j < numFullRoundsHalf; j++) {\n        fRoundsFirst[j] = FullRound(stateIndex * t);\n        if (j == 0) {\n            for (var i = 0; i < t; i++) {\n                fRoundsFirst[j].state[i] <== initState[i];\n            }\n        } else {\n            for (var i = 0; i < t; i++) {\n                fRoundsFirst[j].state[i] <== fRoundsFirst[j - 1].out[i];\n            }\n        }\n        stateIndex++;\n    }\n\n\n    component pRounds[numPartialRounds];\n    for (var j = 0; j < numPartialRounds; j++) {\n        pRounds[j] = PartialRound(stateIndex * t);\n        if (j == 0) {\n            for (var i = 0; i < t; i++) {\n                pRounds[j].state[i] <== fRoundsFirst[numFullRoundsHalf - 1].out[i];\n            }\n        } else {\n            for (var i = 0; i < t; i++) {\n                pRounds[j].state[i] <== pRounds[j - 1].out[i];\n            }\n        }\n        stateIndex++;\n    }\n\n    component fRoundsLast[numFullRoundsHalf];\n    for (var j = 0; j < numFullRoundsHalf; j++) {\n        fRoundsLast[j] = FullRound(stateIndex * t);\n        if (j == 0) {\n            for (var i = 0; i < t; i++) {\n                fRoundsLast[j].state[i] <== pRounds[numPartialRounds - 1].out[i];\n            }\n        } else {\n            for (var i = 0; i < t; i++) {\n                fRoundsLast[j].state[i] <== fRoundsLast[j - 1].out[i];\n            }\n        }\n        stateIndex++;\n    }\n\n    out <== fRoundsLast[numFullRoundsHalf-1].out[1];\n}\n"
  },
  {
    "path": "packages/circuits/poseidon/poseidon_constants.circom",
    "content": "pragma circom 2.1.2;\n\nfunction ROUND_KEYS() {\n    return [\n    15180568604901803243989155929934437997245952775071395385994322939386074967328,\n    98155933184944822056372510812105826951789406432246960633912199752807271851218,\n    32585497418154084368870158853355239726261349829448673320273043226636389078017,\n    66713968576806622579829258440960693099797917756640662361943757758980796487698,\n    61296025743283504825054745787375839406507895949474930140819919915792438454216,\n    64548089412749542282115556935384382035671782881737715696939837764375912217104,\n    108421562972909537718478936575770973463273651828765393113349044862621092658552,\n    93957623861448681916560847065407918286434708744548934125771289238599801659600,\n    31886767595881910145119755249133120645312710313371225820300496900248094187131,\n    36511615103248888903406040506250394762206798360602726106046630438239169384653,\n    21193239787133737740669439860809806837993750509086389566475677877580362491125,\n    15159189447883181997488877417695825734356570617827322308691834229181804753656,\n    19272373877630561389686073945290625876718814210798194797601715657476609730306,\n    23132197996397121955527964729507651432518694856862854469217474256539272053037,\n    9869753235007825662020275771343858285582964429845049469800863115040150206544,\n    36536341316285671890133896506951910369952562161551585116256678375995315827743,\n    62582239167707347777855528698896708360409296899261565735324151945083720570858,\n    96597358901965097853721114962031771931271685249979807653919643952343419105640,\n    99475971754252188104003224702005940217163363685728394033034788135108600073953,\n    52080483875928847502018688921126796935417602445765802481027972679966274137987,\n    101922748752417217354391348649359865075718358385248454632698502400961567227929,\n    26980595292132221181330746499613907829041623688147011560382352796984836870749,\n    
7059991836806083192408106370472821784612460308866802565871813230060135266390,\n    19329812920723038526370491239817117039289784665617181727933894076969997926129,\n    65570620823578601926240439251563587376966657231502120214692324496443514623818,\n    58403733332589349613112270854204921427257113546270812628317365115158685715742,\n    45021021211732634759643776743541935700591354899980928498981462362035961745443,\n    313468157086800401026946312285365733155132234906935411743639256319782592571,\n    101316949793045093761117346380310841944294663456931203380573537653884068660109,\n    23683935571424619534194393788101669168630123784066421490798386323411538828592,\n    45470730427236677197026094498490008082250264942279323465121581539984407294442,\n    48141067373531800337373447278127981363951468257064369512416205750641258258193,\n    42554919225040466028330117313396362347164995917041931400909482795914116747618,\n    11551941832988244108260444347046942236051939264069344013774353630451796870907,\n    60185799182545404739011626517355854847787627814101363386450657535504094743765,\n    81823160578900678880708744457872721685515019032370491632046212317701226128393,\n    7165646831054215773988859638722974820791178194871546344315162343128362695647,\n    75289707601640398424243937567716657896680380639974371761136292031415717685949,\n    7150842764562742184396161198129263121409208675362553300851082062734889620953,\n    24380904705269761063866540342138412601132455197711667167747524315310027386226,\n    9728986075621437350131504894128984146939551938810073671231633620616345344412,\n    10579382052089733216628873394134968879891026686695240299956972154694558493896,\n    8171994519466002143995890536756742287314780571933910736618431096190430536601,\n    30420144259409274775063072923609924427757612539094840146996944760708902708570,\n    63962155989812703023698320394024694856871261481871757094333286947755599007133,\n    
25280070391177856032024336895094721131222985610587247589336316615596140400436,\n    15305872319988027006162258914083163651002306183917888172691618513722838997098,\n    51545603291342006705870081001071419395633279951502747769141857387796043104608,\n    91109680756552587805002537489407348773333405839144382221272597323798859182191,\n    72175452855185658158184807496160149169667221240389196996344579971523681433202,\n    30361989157454953234766224747536334157139256334148153290771332849307087761025,\n    38169634499980959088614671703639492517637815232220682121652135514105493936992,\n    49591153263237620796156788742811547511792615129981565620486914545749079774827,\n    47403873018260745456113868791119169163627014766514972598212646481717066065016,\n    93989849689047144228924801010853106857960399638657695410345207191739048300111,\n    10590240512802509131776989274411792739339398409955259174829387591089799115255,\n    29183703335869638067547208413224742887766212046438654772943025958628178245227,\n    4131650227136944095885036960767735080970262672750406866066212532739784907379,\n    43395510588213653537697670365796375057855260611965666448183946252832290017444,\n    95246795133940226900907730059125298420936467652619708443128629427116119621152,\n    6012209003558496814495903476753006089125143165365334812097313083703216071080,\n    26183233284429251459198269925441295879550203824094631575778521083706115817955,\n    26058994700533582730528567480051558438548299522338811756875396252016497202713,\n    107240485663145290290374164860301805857261278222480421976433215167444496066511,\n    84412820763898503096477800002865877536719992495674955119188074297975154406587,\n    52386303852182662900790700046090769869460994629239741773176060026198900130384,\n    95746062835936512160025091603469309809932540674474329021370075533568318932379,\n    22711334660013961010382652754865456251782349529764119853461446587583972054666,\n    
16959835233095757670013367728627149851239789174357906293937455553277911805495,\n    15116421110200928832147360650392633091242147433006813656250997138988179879750,\n    107878787525302837370688492081178689950008165750500003692400517211520334656293,\n    44210105558575948369921579518078229089923760124167628288943900602376706136436,\n    90305995748749060889452130219544332384396626628663475498252761213618628372367,\n    104941997925797907872686462815914481945432760720471803254797908465921520138024,\n    100036855232527386145662094141100441220151775745916101660987264242446845728894,\n    103285582836474146806606752170525767341430483568396209591447274936228630298052,\n    82197692939371228160449741709034077803239992888716859217989995857278406253737,\n    10040764964044995095453717286623030376397745892179877153575434454090155545240,\n    27304226040425863042893623786832369758179176309230053449707879364285977952630,\n    42627232144930751842910170221862679057276668485045156742021958050665662768084,\n    76972394926916659428228833084621905890924612368412796262119501852346293848159,\n    39796921406297542196667238133893946368231540421737718098283349901435707131075,\n    14745047092916651495052563068083093689676472592445845983334785004125684263162,\n    43421479365783318841667739359312715738029447177150400204380817518608837765863,\n    107871536756946365977710326147511195471121248998432910212631960353348700694610,\n    39505942243687894211614489736115535754716239859353578295470352855493707198619,\n    59676442091621150164811367352362126934419932715789994860508056194143441226580,\n    94470526851498636320865653968033227263836954414283116133326109455334870036212,\n    15044796858044094866329855531761112645684343559112419720568996573556805975600,\n    67157729293641241473980125231288476062565688273917759533572275886277269201651,\n    72911083146182058225942884942982388217243826839805061121973109250798137784134,\n    
102973386186208530972563015865701244407271836208547629083437627219683649557477,\n    57485522356347377122696081086816661784954498123948319434326439317393351620564,\n    23112275556906805064863694321486306070917598599342299357379251070160695202292,\n    107618884362423342584703700349292347754139538760798319916678240538294838342400,\n    83961260400031958812820990908241261093246389047082613562825737834833753517337,\n    42726953951733266282750892844947149703751388034177248277671157488506520215317,\n    39379570934119946602507737250800178347029772561352879974941214627084076473292,\n    72203650529122342092280763801468513707870760755235719535090090101623606334441,\n    13389660788942842724553143053013919883368472759564135119390935439369513690496,\n    101745263541280877725997503552978999350831489463993178838531539940805924817361,\n    76849182334465191824607032600721023780793694103840553871800174717760598910761,\n    53896256317996838683363773836826653859512780625932638736752563553878867538095,\n    24688792501674999263943657175455335814404948006469220532686392550824320454904,\n    69132683906595821927803530074656979217668636557563597358799899743174233903941,\n    2861982085506615225917620192781928414994576134281371548401916333754363567986,\n    37311353286221616083824584705974993449107063556724405440534160586561042968316,\n    83718085796857523832195255218519255973031752296424117786202083986118546906913,\n    103633177691684814414226251117070754499104739002759424774194851613917008856616,\n    84968411062305024171594435878414659990735518025357685215223731503921265946461,\n    41865099330909055069143724769818364262362915440371474104937435863183989905059,\n    46156624920251322979270606388518884047396423747179340919303543598300663968593,\n    64416327466854458915398302811825971539792429791049027619471115285308743811583,\n    94942471312481523091911417289540395651121558150571128515230470225155209280585,\n    
109682618775735319282534546194470743032129102295907200313471041846112653687024,\n    61531999191737540795124202104235799899980935519651613893518293245268304980543,\n    17797352534596268622733030076742840951214734697361029060619245779495726996632,\n    2323150752778983462106829021155031678603044899339819935981200101818542000989,\n    16998018904363448507967526489917057882529252665835717172712095240271574074587,\n    110634872413902251217040490777568744431854972018399530234679399294372694506842,\n    31639545145649753705216327198217551838008233610574104460826956396569310697060,\n    107845103764339268987018917144483935480716224058844669233389185480836000033760,\n    46240297572174662698030819651333060930818959915797061274854448535534474175039,\n    53065607123105696930220421963755520777674094852857308823370049733888025985616,\n    16931881300947470270453776207625163368485560075525342751440832370220475352149,\n    79254110800481916763656344422402393723573490114487681345184594841431920461089,\n    42268569642639492314994307446626647824927989776691987788682655102426770655233,\n    9749633319307409894058984489496091535125232227316143918000642155415596066903,\n    57606597628648270579042266322415267200058617178318601782866227410456726724976,\n    56082250485913115488341301630850455009935943641292622301678990296508134206571,\n    17957245764842844288802777667800779232762688847417238921175068882796163705248,\n    94356229516444419318132697346021621194464273500135725160277725602263001442644,\n    52536631226748676066386651084538409050048707922045928887930261833545619358914,\n    107794922118166328243581272159394479176678094739027519706768813902978100436849,\n    92984368734102511759118281503078145182557799453616537383408606074187034371208,\n    59652553897137603386525572460411404882917571255327541516871354737502335133690,\n    49012645345644326995052653072578673750516379655033952006121214224119466671764,\n    
79025576845143484310735291619293962982804358365838961371044480386743856799994,\n    5437377540613244374799729812489584777222423091155743557287567155811057717409,\n    100687592213090267900708728796310211082532607828753010566886681655775031329660,\n    99074462968857696481475128596339544396152341206708424767062829343406495063192,\n    67476872698289965626550204192782761730653024363949045140720348870736942130242,\n    103307125141718054130755829916960708430672826104789971350239945481960770107890,\n    74087383014714668160537499936376991041273055222568604413015844459913259357334,\n    40924049099780965904051946083599822761993164889139026432053420731164022206736,\n    32594924940463736641240515015317856157169105212942308502676422036626316673214,\n    98990663138035055774586216545398054668349058134877723031747421828753359974443,\n    55821766022768786066770462759796825978667805772707620106340033118519147871694,\n    4001942224536365489828915551180230767516454384395893814399938353050969198154,\n    30136373426492646221252150708518703998248891683881870400906269276900707426865,\n    34943205764464817266133164313915763122699935186597909347522822673832250079664,\n    27737330737483170511275902246508559278973986181590368845166383812793468814968,\n    96292398813565494438359802278723334615526914389306923046282571355958508916558,\n    97147334956505986101750230325438660094766812949748276042292963837380833668274,\n    24754519562402723848413674701792328284127274989440581643644298347747941238812,\n    76111103490248669364580390783887028636436246028943665707064153006971943621186,\n    33764090322658516047637223655525551979364055499647855895233821795694749902854,\n    100536990630540359004783976190215234627391515555181073681294901127179838732969,\n    55991997435987096996680289872758998763908676069536901375395297778729059185671,\n    32860959903680178324832991459746631238726690317249285658471597044247794502256,\n    
70074816806976994707467706079200635184034023598764203123459335544110485476930,\n    46213940675829172331116620705134022102338250410334045747023950259088879662946,\n    77000624259024986585504351395777746568094934279771127334532438603183524642061,\n    21719649576090832101273013788716623377603297433777804572370785470329817725170,\n    29209622978540575483991966565508890231057362045066230397327380085945876837821,\n    2445742484263083651472035320255578071935687960412507452207899496253120999364,\n    86846812580007547526361109808384103509272544750564766849178767957571523649544,\n    43025640639926253696325070988523609146060819319830735794100778654425057363895,\n    108957662689228031021948854644435971168708642184764962508575441689859324862868,\n    83891545396650121758556392255189778590486277180642660527000882403085396114823,\n    42527013475786190604202451803064937203698027000671529418992521798122995373551,\n    115180194520889678365425151865713593680657747284471744934804370945935167043862,\n    28979598171177052880917135045920701144584888536299261666846302083645491369348,\n    68351312608110279019109436395199010412431777911149851157132527077210966351650,\n    61759623963943995967580147094342313397376358019837276043205235302342147116585,\n    80714625408576660514217469096827255752431164791924432025682445176737446783085,\n    33048555646676368266608424610100449208381357250300222636992099726804869416731,\n    50682223610667325089810868083131721901859473966415125289975106060759036109476,\n    4271213571706787092297985431667190050727614825584809797590204884727103716461,\n    101314046722405990971733763321368296660561930294000591067108115987088407142646,\n    55565500177602146197728150332647093173137211885612327122425918553270191254877,\n    65556764608648687291293889343854786421750589271167654521933267288313526422497,\n    66877533773422945979143954094644173219583178339199697252673545117318799706373,\n    
30511098623357801425494143655999121699575856091238269679669864984061501512835,\n    95900192636363991637086954986559552472749485926252879461208179855482821976623,\n    37879946127489462347049192209554168578320892231852882971030128420645686965013,\n    80479504274334215471057938992198620419540634144266821121799003865782336406529,\n    13326262422954139210095783388743602482455840337093117010479445267213907605425,\n    16047106134611124637925332265703907202779549268127518502853950466090054176776,\n    71499356105233640605079063493613576024353801558965221134519779175477723594865,\n    28438981751956157476540225984733791304599172905715743025543841239013139121102,\n    56066317647068426981453448715118237747130321302262827290362392918472904421147\n  ];\n}\n\nfunction MDS_MATRIX() {\n    return [\n        [\n            92469348809186613947252340883344274339611751744959319352506666082431267346705,\n            100938028378191533449096235266991198229563815869344032449592738345766724371160,\n            77486311749148948616988559783475694076613010381924638436641318334458515006661\n        ],\n        [\n            110352262556914082363749654180080464794716701228558638957603951672835474954408,\n            27607004873684391669404739690441550149894883072418944161048725383958774443141,\n            29671705769502357195586268679831947082918094959101307962374709600277676341325\n        ],\n        [\n            77762103796341032609398578911486222569419103128091016773380377798879650228751,\n            1753012011204964731088925227042671869111026487299375073665493007998674391999,\n            70274477372358662369456035572054501601454406272695978931839980644925236550307\n        ]\n    ];\n}\n"
  },
  {
    "path": "packages/circuits/tests/addr_membership.test.ts",
    "content": "const wasm_tester = require(\"circom_tester\").wasm;\nvar EC = require(\"elliptic\").ec;\nimport * as path from \"path\";\nconst ec = new EC(\"secp256k1\");\nimport { Poseidon, Tree } from \"@personaelabs/spartan-ecdsa\";\nimport { getEffEcdsaCircuitInput } from \"./test_utils\";\nimport { privateToAddress } from \"@ethereumjs/util\";\n\ndescribe(\"membership\", () => {\n  it(\"should verify correct signature and merkle proof\", async () => {\n    // Compile the circuit\n    const circuit = await wasm_tester(\n      path.join(__dirname, \"./circuits/addr_membership_test.circom\"),\n      {\n        prime: \"secq256k1\" // Specify to use the option --prime secq256k1 when compiling with circom\n      }\n    );\n\n    // Construct the tree\n    const poseidon = new Poseidon();\n    await poseidon.initWasm();\n\n    const nLevels = 10;\n    const tree = new Tree(nLevels, poseidon);\n\n    const privKeys = [\n      Buffer.from(\"\".padStart(16, \"🧙\"), \"utf16le\"),\n      Buffer.from(\"\".padStart(16, \"🪄\"), \"utf16le\"),\n      Buffer.from(\"\".padStart(16, \"🔮\"), \"utf16le\")\n    ];\n\n    // Store addresses hashes\n    const addresses: bigint[] = [];\n\n    // Compute public key hashes\n    for (const privKey of privKeys) {\n      const address = privateToAddress(privKey);\n      addresses.push(BigInt(\"0x\" + address.toString(\"hex\")));\n    }\n\n    // Insert the pubkey hashes into the tree\n    for (const address of addresses) {\n      tree.insert(address);\n    }\n\n    // Sanity check (check that there are not duplicate members)\n    expect(new Set(addresses).size === addresses.length).toBeTruthy();\n\n    // Sign\n    const index = 0; // Use privKeys[0] for proving\n    const privKey = privKeys[index];\n    const msg = Buffer.from(\"hello world\");\n\n    // Prepare signature proof input\n    const effEcdsaInput = getEffEcdsaCircuitInput(privKey, msg);\n\n    const merkleProof = tree.createProof(index);\n\n    const input = {\n      
...effEcdsaInput,\n      siblings: merkleProof.siblings,\n      pathIndices: merkleProof.pathIndices,\n      root: tree.root()\n    };\n\n    // Generate witness\n    const w = await circuit.calculateWitness(input, true);\n\n    await circuit.checkConstraints(w);\n  });\n});\n"
  },
  {
    "path": "packages/circuits/tests/circuits/add_complete_test.circom",
    "content": "pragma circom 2.1.2;\n\ninclude \"../../eff_ecdsa_membership/secp256k1/add.circom\";\n\ncomponent main = Secp256k1AddComplete();"
  },
  {
    "path": "packages/circuits/tests/circuits/add_incomplete_test.circom",
    "content": "pragma circom 2.1.2;\n\ninclude \"../../eff_ecdsa_membership/secp256k1/add.circom\";\n\ncomponent main = Secp256k1AddIncomplete();"
  },
  {
    "path": "packages/circuits/tests/circuits/addr_membership_test.circom",
    "content": "pragma circom 2.1.2;\n\ninclude \"../../eff_ecdsa_membership/addr_membership.circom\";\n\ncomponent main = AddrMembership(10);"
  },
  {
    "path": "packages/circuits/tests/circuits/double_test.circom",
    "content": "pragma circom 2.1.2;\n\ninclude \"../../eff_ecdsa_membership/secp256k1/double.circom\";\n\ncomponent main = Secp256k1Double();"
  },
  {
    "path": "packages/circuits/tests/circuits/eff_ecdsa_test.circom",
    "content": "pragma circom 2.1.2;\n\ninclude \"../../eff_ecdsa_membership/eff_ecdsa.circom\";\n\ncomponent main { public[ Tx, Ty, Ux, Uy ]} = EfficientECDSA();"
  },
  {
    "path": "packages/circuits/tests/circuits/eff_ecdsa_to_addr_test.circom",
    "content": "pragma circom 2.1.2;\n\ninclude \"../../eff_ecdsa_membership/eff_ecdsa_to_addr.circom\";\n\ncomponent main { public[ Tx, Ty, Ux, Uy ]} = EfficientECDSAToAddr();"
  },
  {
    "path": "packages/circuits/tests/circuits/k_test.circom",
    "content": "pragma circom 2.1.2;\n\ninclude \"../../eff_ecdsa_membership/secp256k1/mul.circom\";\n\ncomponent main = K();"
  },
  {
    "path": "packages/circuits/tests/circuits/mul_test.circom",
    "content": "pragma circom 2.1.2;\n\ninclude \"../../eff_ecdsa_membership/secp256k1/mul.circom\";\n\ncomponent main = Secp256k1Mul();"
  },
  {
    "path": "packages/circuits/tests/circuits/poseidon_test.circom",
    "content": "pragma circom 2.1.2;\n\ninclude \"../../poseidon/poseidon.circom\";\n\ncomponent main = Poseidon();"
  },
  {
    "path": "packages/circuits/tests/circuits/pubkey_membership_test.circom",
    "content": "pragma circom 2.1.2;\n\ninclude \"../../eff_ecdsa_membership/pubkey_membership.circom\";\n\ncomponent main = PubKeyMembership(10);"
  },
  {
    "path": "packages/circuits/tests/eff_ecdsa.test.ts",
    "content": "const wasm_tester = require(\"circom_tester\").wasm;\nvar EC = require(\"elliptic\").ec;\nimport * as path from \"path\";\nimport { getEffEcdsaCircuitInput } from \"./test_utils\";\n\nconst ec = new EC(\"secp256k1\");\n\ndescribe(\"ecdsa\", () => {\n  it(\"should verify valid message\", async () => {\n    const circuit = await wasm_tester(\n      path.join(__dirname, \"./circuits/eff_ecdsa_test.circom\"),\n      {\n        prime: \"secq256k1\"\n      }\n    );\n\n    const privKey = Buffer.from(\n      \"f5b552f608f5b552f608f5b552f6082ff5b552f608f5b552f608f5b552f6082f\",\n      \"hex\"\n    );\n    const pubKey = ec.keyFromPrivate(privKey.toString(\"hex\")).getPublic();\n    const msg = Buffer.from(\"hello world\");\n    const circuitInput = getEffEcdsaCircuitInput(privKey, msg);\n\n    const w = await circuit.calculateWitness(circuitInput, true);\n\n    await circuit.assertOut(w, {\n      pubKeyX: pubKey.x.toString(),\n      pubKeyY: pubKey.y.toString()\n    });\n\n    await circuit.checkConstraints(w);\n  });\n\n  // TODO - add more tests\n});\n"
  },
  {
    "path": "packages/circuits/tests/eff_ecdsa_to_addr.test.ts",
    "content": "const wasm_tester = require(\"circom_tester\").wasm;\nvar EC = require(\"elliptic\").ec;\nimport * as path from \"path\";\nimport { getEffEcdsaCircuitInput } from \"./test_utils\";\nimport { privateToAddress } from \"@ethereumjs/util\";\n\nconst ec = new EC(\"secp256k1\");\n\ndescribe(\"eff_ecdsa_to_addr\", () => {\n  it(\"should output correct address\", async () => {\n    const privKey = Buffer.from(\n      \"f5b552f608f5b552f608f5b552f6082ff5b552f608f5b552f608f5b552f6082f\",\n      \"hex\"\n    );\n    const pubKey = ec.keyFromPrivate(privKey.toString(\"hex\")).getPublic();\n    const addr = BigInt(\n      \"0x\" + privateToAddress(privKey).toString(\"hex\")\n    ).toString(10);\n\n    const circuit = await wasm_tester(\n      path.join(__dirname, \"./circuits/eff_ecdsa_to_addr_test.circom\"),\n      {\n        prime: \"secq256k1\"\n      }\n    );\n\n    const msg = Buffer.from(\"hello world\");\n    const circuitInput = getEffEcdsaCircuitInput(privKey, msg);\n\n    const w = await circuit.calculateWitness(circuitInput, true);\n\n    await circuit.assertOut(w, {\n      addr\n    });\n\n    await circuit.checkConstraints(w);\n  });\n\n  // TODO - add more tests\n});\n"
  },
  {
    "path": "packages/circuits/tests/poseidon.test.ts",
    "content": "const wasm_tester = require(\"circom_tester\").wasm;\nimport * as path from \"path\";\n\ndescribe(\"poseidon\", () => {\n  it(\"should output correct hash\", async () => {\n    const circuit = await wasm_tester(\n      path.join(__dirname, \"./circuits/poseidon_test.circom\"),\n      {\n        prime: \"secq256k1\"\n      }\n    );\n\n    // Using the same inputs as test_poseidon in wasm.rs\n    const input = {\n      inputs: [\n        \"115792089237316195423570985008687907853269984665640564039457584007908834671663\",\n        \"115792089237316195423570985008687907853269984665640564039457584007908834671662\"\n      ]\n    };\n    const w = await circuit.calculateWitness(input, true);\n\n    await circuit.assertOut(w, {\n      out: \"46702443887670435486723478191273607819169644657419964658749776213559127696053\"\n    });\n\n    await circuit.checkConstraints(w);\n  });\n});\n"
  },
  {
    "path": "packages/circuits/tests/pubkey_membership.test.ts",
    "content": "const wasm_tester = require(\"circom_tester\").wasm;\nvar EC = require(\"elliptic\").ec;\nimport * as path from \"path\";\nimport { Poseidon, Tree } from \"@personaelabs/spartan-ecdsa\";\nimport { privateToPublic } from \"@ethereumjs/util\";\nimport { getEffEcdsaCircuitInput } from \"./test_utils\";\n\ndescribe(\"pubkey_membership\", () => {\n  it(\"should verify correct signature and merkle proof\", async () => {\n    // Compile the circuit\n    const circuit = await wasm_tester(\n      path.join(__dirname, \"./circuits/pubkey_membership_test.circom\"),\n      {\n        prime: \"secq256k1\" // Specify to use the option --prime secq256k1 when compiling with circom\n      }\n    );\n\n    // Construct the tree\n    const poseidon = new Poseidon();\n    await poseidon.initWasm();\n\n    const nLevels = 10;\n    const tree = new Tree(nLevels, poseidon);\n\n    const privKeys = [\n      Buffer.from(\"\".padStart(16, \"🧙\"), \"utf16le\"),\n      Buffer.from(\"\".padStart(16, \"🪄\"), \"utf16le\"),\n      Buffer.from(\"\".padStart(16, \"🔮\"), \"utf16le\")\n    ];\n\n    // Store public key hashes\n    const pubKeyHashes: bigint[] = [];\n\n    // Compute public key hashes\n    for (const privKey of privKeys) {\n      const pubKey = privateToPublic(privKey);\n      const pubKeyHash = poseidon.hashPubKey(pubKey);\n      pubKeyHashes.push(pubKeyHash);\n    }\n\n    // Insert the pubkey hashes into the tree\n    for (const pubKeyHash of pubKeyHashes) {\n      tree.insert(pubKeyHash);\n    }\n\n    // Sanity check (check that there are not duplicate members)\n    expect(new Set(pubKeyHashes).size === pubKeyHashes.length).toBeTruthy();\n\n    // Sign\n    const index = 0; // Use privKeys[0] for proving\n    const privKey = privKeys[index];\n    const msg = Buffer.from(\"hello world\");\n\n    // Prepare signature proof input\n    const effEcdsaInput = getEffEcdsaCircuitInput(privKey, msg);\n\n    const merkleProof = tree.createProof(index);\n\n    const input = 
{\n      ...effEcdsaInput,\n      siblings: merkleProof.siblings,\n      pathIndices: merkleProof.pathIndices,\n      root: tree.root()\n    };\n\n    // Generate witness\n    const w = await circuit.calculateWitness(input, true);\n\n    await circuit.checkConstraints(w);\n  });\n});\n"
  },
  {
    "path": "packages/circuits/tests/secp256k1.test.ts",
    "content": "const wasm_tester = require(\"circom_tester\").wasm;\nvar EC = require(\"elliptic\").ec;\nimport * as path from \"path\";\nconst ec = new EC(\"secp256k1\");\n\ndescribe(\"secp256k1\", () => {\n  /**\n   * Test adding two points that have different x coordinates; doubling a point or\n   * adding a point to its negative will not work as we will have a division by\n   * zero in the circuit.\n   */\n  it(\"Secp256k1AddIncomplete\", async () => {\n    const circuit = await wasm_tester(\n      path.join(__dirname, \"./circuits/add_incomplete_test.circom\"),\n      {\n        prime: \"secq256k1\"\n      }\n    );\n\n    const p1 = ec.keyFromPrivate(BigInt(\"1\")).getPublic();\n    const p2 = ec.keyFromPrivate(BigInt(\"2\")).getPublic();\n    const p3 = p1.add(p2);\n\n    const input = {\n      xP: p1.x.toString(),\n      yP: p1.y.toString(),\n      xQ: p2.x.toString(),\n      yQ: p2.y.toString()\n    };\n\n    const w = await circuit.calculateWitness(input, true);\n\n    await circuit.assertOut(w, {\n      outX: p3.x.toString(),\n      outY: p3.y.toString()\n    });\n\n    await circuit.checkConstraints(w);\n  });\n\n  /**\n   * Go through all 6 cases included in the analysis of complete addition in\n   * https://zcash.github.io/halo2/design/gadgets/ecc/addition.html\n   */\n  describe(\"Secp256k1AddComplete\", () => {\n    let circuit;\n    const p1 = ec.keyFromPrivate(Buffer.from(\"🪄\", \"utf16le\")).getPublic();\n    const p2 = ec.keyFromPrivate(Buffer.from(\"🧙\", \"utf16le\")).getPublic();\n\n    beforeAll(async () => {\n      circuit = await wasm_tester(\n        path.join(__dirname, \"./circuits/add_complete_test.circom\"),\n        {\n          prime: \"secq256k1\"\n        }\n      );\n    });\n\n    it(\"should work when P = Q\", async () => {\n      const expected = p1.add(p1);\n\n      const input = {\n        xP: p1.x.toString(),\n        yP: p1.y.toString(),\n        xQ: p1.x.toString(),\n        yQ: p1.y.toString()\n      };\n\n      const w 
= await circuit.calculateWitness(input, true);\n\n      await circuit.assertOut(w, {\n        outX: expected.x.toString(),\n        outY: expected.y.toString()\n      });\n\n      await circuit.checkConstraints(w);\n    });\n\n    it(\"should work when P != Q\", async () => {\n      const expected = p1.add(p2);\n\n      const input = {\n        xP: p1.x.toString(),\n        yP: p1.y.toString(),\n        xQ: p2.x.toString(),\n        yQ: p2.y.toString()\n      };\n\n      const w = await circuit.calculateWitness(input, true);\n\n      await circuit.assertOut(w, {\n        outX: expected.x.toString(),\n        outY: expected.y.toString()\n      });\n\n      await circuit.checkConstraints(w);\n    });\n\n    it(\"should work when xP = 0 and xQ != 0\", async () => {\n      const input = {\n        xP: 0,\n        yP: 0,\n        xQ: p1.x.toString(),\n        yQ: p1.y.toString()\n      };\n\n      const w = await circuit.calculateWitness(input, true);\n\n      await circuit.assertOut(w, {\n        outX: p1.x.toString(),\n        outY: p1.y.toString()\n      });\n\n      await circuit.checkConstraints(w);\n    });\n\n    it(\"should work when xP != 0 and xQ = 0\", async () => {\n      const input = {\n        xP: p1.x.toString(),\n        yP: p1.y.toString(),\n        xQ: 0,\n        yQ: 0\n      };\n\n      const w = await circuit.calculateWitness(input, true);\n\n      await circuit.assertOut(w, {\n        outX: p1.x.toString(),\n        outY: p1.y.toString()\n      });\n\n      await circuit.checkConstraints(w);\n    });\n\n    it(\"should work when xP = xQ and yP = -yQ\", async () => {\n      const p1Neg = p1.neg();\n\n      // Sanity check\n      expect(p1.add(p1Neg).inf).toStrictEqual(true);\n\n      const input = {\n        xP: p1.x.toString(),\n        yP: p1.y.toString(),\n        xQ: p1Neg.x.toString(),\n        yQ: p1Neg.y.toString()\n      };\n\n      const w = await circuit.calculateWitness(input, true);\n\n      await circuit.assertOut(w, {\n        outX: 
0,\n        outY: 0\n      });\n\n      await circuit.checkConstraints(w);\n    });\n\n    it(\"should work when xP = 0 and xQ = 0\", async () => {\n      const input = {\n        xP: 0,\n        yP: 0,\n        xQ: 0,\n        yQ: 0\n      };\n\n      const w = await circuit.calculateWitness(input, true);\n\n      await circuit.assertOut(w, {\n        outX: 0,\n        outY: 0\n      });\n\n      await circuit.checkConstraints(w);\n    });\n  });\n\n  /**\n   * Test doubling circuit on the generator point.\n   */\n  it(\"Secp256k1Double\", async () => {\n    const circuit = await wasm_tester(\n      path.join(__dirname, \"./circuits/double_test.circom\"),\n      {\n        prime: \"secq256k1\"\n      }\n    );\n\n    const p = ec.g;\n    const expected = p.mul(BigInt(\"2\"));\n\n    const input = {\n      xP: p.x.toString(),\n      yP: p.y.toString()\n    };\n\n    const w = await circuit.calculateWitness(input, true);\n\n    await circuit.assertOut(w, {\n      outX: expected.x.toString(),\n      outY: expected.y.toString()\n    });\n\n    await circuit.checkConstraints(w);\n  });\n\n  describe(\"mul\", () => {\n    /**\n     * Test the K() function for correctness with the two cases: (s + tQ) > q and\n     * (s + tQ) < q.\n     */\n    describe(\"K\", () => {\n      let circuit;\n      const q = BigInt(\n        \"0xfffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd0364141\"\n      );\n      const tQ = BigInt(\n        \"115792089237316195423570985008687907852405143892509244725752742275123193348738\"\n      );\n      beforeAll(async () => {\n        circuit = await wasm_tester(\n          path.join(__dirname, \"./circuits/k_test.circom\"),\n          {\n            prime: \"secq256k1\"\n          }\n        );\n      });\n\n      it(\"should work when (s + tQ) > q (i.e. 
quotient = 1 for s + tQ / q)\", async () => {\n        const s = q - tQ + BigInt(1);\n\n        // Sanity check\n        expect(s + tQ).toBeGreaterThan(q);\n\n        const k = (s + tQ) % q;\n\n        const kBitsArr = k.toString(2).split(\"\").reverse();\n        const kBits = kBitsArr.join(\"\").padEnd(256, \"0\").split(\"\");\n\n        const input = {\n          s\n        };\n\n        const w = await circuit.calculateWitness(input, true);\n\n        await circuit.assertOut(w, {\n          out: kBits\n        });\n\n        await circuit.checkConstraints(w);\n      });\n\n      it(\"should work when (s + tQ) < q (i.e. quotient = 0 for s + tQ / q)\", async () => {\n        const s = q - tQ - BigInt(1);\n\n        // Sanity check\n        expect(s + tQ).toBeLessThanOrEqual(q);\n\n        const k = (s + tQ) % q;\n\n        const kBitsArr = k.toString(2).split(\"\").reverse();\n        const kBits = kBitsArr.join(\"\").padEnd(256, \"0\").split(\"\");\n\n        const input = {\n          s\n        };\n\n        const w = await circuit.calculateWitness(input, true);\n\n        await circuit.assertOut(w, {\n          out: kBits\n        });\n\n        await circuit.checkConstraints(w);\n      });\n    });\n\n    /**\n     * Test the mul circuit for correctness with:\n     * 1. scalar = q-1 (highest possible scalar)\n     * 2. 
scalar < q-1 (a few random cases)\n     */\n    describe(\"Secp256k1Mul\", () => {\n      let circuit;\n      beforeAll(async () => {\n        circuit = await wasm_tester(\n          path.join(__dirname, \"./circuits/mul_test.circom\"),\n          {\n            prime: \"secq256k1\"\n          }\n        );\n      });\n\n      it(\"should work when scalar = q - 1\", async () => {\n        const p1 = ec.g;\n\n        const largest =\n          \"fffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd0364140\";\n\n        const p2 = p1.mul(largest);\n\n        const input = {\n          xP: p1.x.toString(),\n          yP: p1.y.toString(),\n          scalar: BigInt(\"0x\" + largest)\n        };\n\n        const w = await circuit.calculateWitness(input, true);\n        await circuit.assertOut(w, {\n          outX: p2.x.toString(),\n          outY: p2.y.toString()\n        });\n\n        await circuit.checkConstraints(w);\n      });\n\n      it(\"should work when scalar < q - 1\", async () => {\n        const p1 = ec.g;\n\n        const scalars = [\n          \"1\",\n          \"2\",\n          \"3\",\n          \"ff\",\n          \"100\",\n          \"101\",\n          \"fffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd0364139\"\n        ];\n\n        for (const scalar of scalars) {\n          const p2 = p1.mul(scalar);\n\n          const input = {\n            xP: p1.x.toString(),\n            yP: p1.y.toString(),\n            scalar: BigInt(\"0x\" + scalar)\n          };\n\n          const w = await circuit.calculateWitness(input, true);\n          await circuit.assertOut(w, {\n            outX: p2.x.toString(),\n            outY: p2.y.toString()\n          });\n\n          await circuit.checkConstraints(w);\n        }\n      });\n    });\n  });\n});\n"
  },
  {
    "path": "packages/circuits/tests/test_utils.ts",
    "content": "import { hashPersonalMessage, ecsign } from \"@ethereumjs/util\";\nimport { computeEffEcdsaPubInput } from \"@personaelabs/spartan-ecdsa\";\n\nexport const getEffEcdsaCircuitInput = (privKey: Buffer, msg: Buffer) => {\n  const msgHash = hashPersonalMessage(msg);\n  const { v, r: _r, s } = ecsign(msgHash, privKey);\n  const r = BigInt(\"0x\" + _r.toString(\"hex\"));\n\n  const circuitPubInput = computeEffEcdsaPubInput(r, v, msgHash);\n  const input = {\n    s: BigInt(\"0x\" + s.toString(\"hex\")),\n    Tx: circuitPubInput.Tx,\n    Ty: circuitPubInput.Ty,\n    Ux: circuitPubInput.Ux,\n    Uy: circuitPubInput.Uy\n  };\n\n  return input;\n};\n\nexport const bytesToBigInt = (bytes: Uint8Array): bigint =>\n  BigInt(\"0x\" + Buffer.from(bytes).toString(\"hex\"));\n"
  },
  {
    "path": "packages/lib/.npmignore",
    "content": "/node_modules\n/src\n/tests\ntsconfig.json\njest.config.js\ncopy_artifacts.sh\nload_wasm.ts"
  },
  {
    "path": "packages/lib/LICENSE",
    "content": "                    GNU GENERAL PUBLIC LICENSE\n                       Version 3, 29 June 2007\n\n Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>\n Everyone is permitted to copy and distribute verbatim copies\n of this license document, but changing it is not allowed.\n\n                            Preamble\n\n  The GNU General Public License is a free, copyleft license for\nsoftware and other kinds of works.\n\n  The licenses for most software and other practical works are designed\nto take away your freedom to share and change the works.  By contrast,\nthe GNU General Public License is intended to guarantee your freedom to\nshare and change all versions of a program--to make sure it remains free\nsoftware for all its users.  We, the Free Software Foundation, use the\nGNU General Public License for most of our software; it applies also to\nany other work released this way by its authors.  You can apply it to\nyour programs, too.\n\n  When we speak of free software, we are referring to freedom, not\nprice.  Our General Public Licenses are designed to make sure that you\nhave the freedom to distribute copies of free software (and charge for\nthem if you wish), that you receive source code or can get it if you\nwant it, that you can change the software or use pieces of it in new\nfree programs, and that you know you can do these things.\n\n  To protect your rights, we need to prevent others from denying you\nthese rights or asking you to surrender the rights.  Therefore, you have\ncertain responsibilities if you distribute copies of the software, or if\nyou modify it: responsibilities to respect the freedom of others.\n\n  For example, if you distribute copies of such a program, whether\ngratis or for a fee, you must pass on to the recipients the same\nfreedoms that you received.  You must make sure that they, too, receive\nor can get the source code.  
And you must show them these terms so they\nknow their rights.\n\n  Developers that use the GNU GPL protect your rights with two steps:\n(1) assert copyright on the software, and (2) offer you this License\ngiving you legal permission to copy, distribute and/or modify it.\n\n  For the developers' and authors' protection, the GPL clearly explains\nthat there is no warranty for this free software.  For both users' and\nauthors' sake, the GPL requires that modified versions be marked as\nchanged, so that their problems will not be attributed erroneously to\nauthors of previous versions.\n\n  Some devices are designed to deny users access to install or run\nmodified versions of the software inside them, although the manufacturer\ncan do so.  This is fundamentally incompatible with the aim of\nprotecting users' freedom to change the software.  The systematic\npattern of such abuse occurs in the area of products for individuals to\nuse, which is precisely where it is most unacceptable.  Therefore, we\nhave designed this version of the GPL to prohibit the practice for those\nproducts.  If such problems arise substantially in other domains, we\nstand ready to extend this provision to those domains in future versions\nof the GPL, as needed to protect the freedom of users.\n\n  Finally, every program is threatened constantly by software patents.\nStates should not allow patents to restrict development and use of\nsoftware on general-purpose computers, but in those that do, we wish to\navoid the special danger that patents applied to a free program could\nmake it effectively proprietary.  To prevent this, the GPL assures that\npatents cannot be used to render the program non-free.\n\n  The precise terms and conditions for copying, distribution and\nmodification follow.\n\n                       TERMS AND CONDITIONS\n\n  0. 
Definitions.\n\n  \"This License\" refers to version 3 of the GNU General Public License.\n\n  \"Copyright\" also means copyright-like laws that apply to other kinds of\nworks, such as semiconductor masks.\n\n  \"The Program\" refers to any copyrightable work licensed under this\nLicense.  Each licensee is addressed as \"you\".  \"Licensees\" and\n\"recipients\" may be individuals or organizations.\n\n  To \"modify\" a work means to copy from or adapt all or part of the work\nin a fashion requiring copyright permission, other than the making of an\nexact copy.  The resulting work is called a \"modified version\" of the\nearlier work or a work \"based on\" the earlier work.\n\n  A \"covered work\" means either the unmodified Program or a work based\non the Program.\n\n  To \"propagate\" a work means to do anything with it that, without\npermission, would make you directly or secondarily liable for\ninfringement under applicable copyright law, except executing it on a\ncomputer or modifying a private copy.  Propagation includes copying,\ndistribution (with or without modification), making available to the\npublic, and in some countries other activities as well.\n\n  To \"convey\" a work means any kind of propagation that enables other\nparties to make or receive copies.  Mere interaction with a user through\na computer network, with no transfer of a copy, is not conveying.\n\n  An interactive user interface displays \"Appropriate Legal Notices\"\nto the extent that it includes a convenient and prominently visible\nfeature that (1) displays an appropriate copyright notice, and (2)\ntells the user that there is no warranty for the work (except to the\nextent that warranties are provided), that licensees may convey the\nwork under this License, and how to view a copy of this License.  If\nthe interface presents a list of user commands or options, such as a\nmenu, a prominent item in the list meets this criterion.\n\n  1. 
Source Code.\n\n  The \"source code\" for a work means the preferred form of the work\nfor making modifications to it.  \"Object code\" means any non-source\nform of a work.\n\n  A \"Standard Interface\" means an interface that either is an official\nstandard defined by a recognized standards body, or, in the case of\ninterfaces specified for a particular programming language, one that\nis widely used among developers working in that language.\n\n  The \"System Libraries\" of an executable work include anything, other\nthan the work as a whole, that (a) is included in the normal form of\npackaging a Major Component, but which is not part of that Major\nComponent, and (b) serves only to enable use of the work with that\nMajor Component, or to implement a Standard Interface for which an\nimplementation is available to the public in source code form.  A\n\"Major Component\", in this context, means a major essential component\n(kernel, window system, and so on) of the specific operating system\n(if any) on which the executable work runs, or a compiler used to\nproduce the work, or an object code interpreter used to run it.\n\n  The \"Corresponding Source\" for a work in object code form means all\nthe source code needed to generate, install, and (for an executable\nwork) run the object code and to modify the work, including scripts to\ncontrol those activities.  However, it does not include the work's\nSystem Libraries, or general-purpose tools or generally available free\nprograms which are used unmodified in performing those activities but\nwhich are not part of the work.  
For example, Corresponding Source\nincludes interface definition files associated with source files for\nthe work, and the source code for shared libraries and dynamically\nlinked subprograms that the work is specifically designed to require,\nsuch as by intimate data communication or control flow between those\nsubprograms and other parts of the work.\n\n  The Corresponding Source need not include anything that users\ncan regenerate automatically from other parts of the Corresponding\nSource.\n\n  The Corresponding Source for a work in source code form is that\nsame work.\n\n  2. Basic Permissions.\n\n  All rights granted under this License are granted for the term of\ncopyright on the Program, and are irrevocable provided the stated\nconditions are met.  This License explicitly affirms your unlimited\npermission to run the unmodified Program.  The output from running a\ncovered work is covered by this License only if the output, given its\ncontent, constitutes a covered work.  This License acknowledges your\nrights of fair use or other equivalent, as provided by copyright law.\n\n  You may make, run and propagate covered works that you do not\nconvey, without conditions so long as your license otherwise remains\nin force.  You may convey covered works to others for the sole purpose\nof having them make modifications exclusively for you, or provide you\nwith facilities for running those works, provided that you comply with\nthe terms of this License in conveying all material for which you do\nnot control copyright.  Those thus making or running the covered works\nfor you must do so exclusively on your behalf, under your direction\nand control, on terms that prohibit them from making any copies of\nyour copyrighted material outside their relationship with you.\n\n  Conveying under any other circumstances is permitted solely under\nthe conditions stated below.  Sublicensing is not allowed; section 10\nmakes it unnecessary.\n\n  3. 
Protecting Users' Legal Rights From Anti-Circumvention Law.\n\n  No covered work shall be deemed part of an effective technological\nmeasure under any applicable law fulfilling obligations under article\n11 of the WIPO copyright treaty adopted on 20 December 1996, or\nsimilar laws prohibiting or restricting circumvention of such\nmeasures.\n\n  When you convey a covered work, you waive any legal power to forbid\ncircumvention of technological measures to the extent such circumvention\nis effected by exercising rights under this License with respect to\nthe covered work, and you disclaim any intention to limit operation or\nmodification of the work as a means of enforcing, against the work's\nusers, your or third parties' legal rights to forbid circumvention of\ntechnological measures.\n\n  4. Conveying Verbatim Copies.\n\n  You may convey verbatim copies of the Program's source code as you\nreceive it, in any medium, provided that you conspicuously and\nappropriately publish on each copy an appropriate copyright notice;\nkeep intact all notices stating that this License and any\nnon-permissive terms added in accord with section 7 apply to the code;\nkeep intact all notices of the absence of any warranty; and give all\nrecipients a copy of this License along with the Program.\n\n  You may charge any price or no price for each copy that you convey,\nand you may offer support or warranty protection for a fee.\n\n  5. Conveying Modified Source Versions.\n\n  You may convey a work based on the Program, or the modifications to\nproduce it from the Program, in the form of source code under the\nterms of section 4, provided that you also meet all of these conditions:\n\n    a) The work must carry prominent notices stating that you modified\n    it, and giving a relevant date.\n\n    b) The work must carry prominent notices stating that it is\n    released under this License and any conditions added under section\n    7.  
This requirement modifies the requirement in section 4 to\n    \"keep intact all notices\".\n\n    c) You must license the entire work, as a whole, under this\n    License to anyone who comes into possession of a copy.  This\n    License will therefore apply, along with any applicable section 7\n    additional terms, to the whole of the work, and all its parts,\n    regardless of how they are packaged.  This License gives no\n    permission to license the work in any other way, but it does not\n    invalidate such permission if you have separately received it.\n\n    d) If the work has interactive user interfaces, each must display\n    Appropriate Legal Notices; however, if the Program has interactive\n    interfaces that do not display Appropriate Legal Notices, your\n    work need not make them do so.\n\n  A compilation of a covered work with other separate and independent\nworks, which are not by their nature extensions of the covered work,\nand which are not combined with it such as to form a larger program,\nin or on a volume of a storage or distribution medium, is called an\n\"aggregate\" if the compilation and its resulting copyright are not\nused to limit the access or legal rights of the compilation's users\nbeyond what the individual works permit.  Inclusion of a covered work\nin an aggregate does not cause this License to apply to the other\nparts of the aggregate.\n\n  6. 
Conveying Non-Source Forms.\n\n  You may convey a covered work in object code form under the terms\nof sections 4 and 5, provided that you also convey the\nmachine-readable Corresponding Source under the terms of this License,\nin one of these ways:\n\n    a) Convey the object code in, or embodied in, a physical product\n    (including a physical distribution medium), accompanied by the\n    Corresponding Source fixed on a durable physical medium\n    customarily used for software interchange.\n\n    b) Convey the object code in, or embodied in, a physical product\n    (including a physical distribution medium), accompanied by a\n    written offer, valid for at least three years and valid for as\n    long as you offer spare parts or customer support for that product\n    model, to give anyone who possesses the object code either (1) a\n    copy of the Corresponding Source for all the software in the\n    product that is covered by this License, on a durable physical\n    medium customarily used for software interchange, for a price no\n    more than your reasonable cost of physically performing this\n    conveying of source, or (2) access to copy the\n    Corresponding Source from a network server at no charge.\n\n    c) Convey individual copies of the object code with a copy of the\n    written offer to provide the Corresponding Source.  This\n    alternative is allowed only occasionally and noncommercially, and\n    only if you received the object code with such an offer, in accord\n    with subsection 6b.\n\n    d) Convey the object code by offering access from a designated\n    place (gratis or for a charge), and offer equivalent access to the\n    Corresponding Source in the same way through the same place at no\n    further charge.  You need not require recipients to copy the\n    Corresponding Source along with the object code.  
If the place to\n    copy the object code is a network server, the Corresponding Source\n    may be on a different server (operated by you or a third party)\n    that supports equivalent copying facilities, provided you maintain\n    clear directions next to the object code saying where to find the\n    Corresponding Source.  Regardless of what server hosts the\n    Corresponding Source, you remain obligated to ensure that it is\n    available for as long as needed to satisfy these requirements.\n\n    e) Convey the object code using peer-to-peer transmission, provided\n    you inform other peers where the object code and Corresponding\n    Source of the work are being offered to the general public at no\n    charge under subsection 6d.\n\n  A separable portion of the object code, whose source code is excluded\nfrom the Corresponding Source as a System Library, need not be\nincluded in conveying the object code work.\n\n  A \"User Product\" is either (1) a \"consumer product\", which means any\ntangible personal property which is normally used for personal, family,\nor household purposes, or (2) anything designed or sold for incorporation\ninto a dwelling.  In determining whether a product is a consumer product,\ndoubtful cases shall be resolved in favor of coverage.  For a particular\nproduct received by a particular user, \"normally used\" refers to a\ntypical or common use of that class of product, regardless of the status\nof the particular user or of the way in which the particular user\nactually uses, or expects or is expected to use, the product.  
A product\nis a consumer product regardless of whether the product has substantial\ncommercial, industrial or non-consumer uses, unless such uses represent\nthe only significant mode of use of the product.\n\n  \"Installation Information\" for a User Product means any methods,\nprocedures, authorization keys, or other information required to install\nand execute modified versions of a covered work in that User Product from\na modified version of its Corresponding Source.  The information must\nsuffice to ensure that the continued functioning of the modified object\ncode is in no case prevented or interfered with solely because\nmodification has been made.\n\n  If you convey an object code work under this section in, or with, or\nspecifically for use in, a User Product, and the conveying occurs as\npart of a transaction in which the right of possession and use of the\nUser Product is transferred to the recipient in perpetuity or for a\nfixed term (regardless of how the transaction is characterized), the\nCorresponding Source conveyed under this section must be accompanied\nby the Installation Information.  But this requirement does not apply\nif neither you nor any third party retains the ability to install\nmodified object code on the User Product (for example, the work has\nbeen installed in ROM).\n\n  The requirement to provide Installation Information does not include a\nrequirement to continue to provide support service, warranty, or updates\nfor a work that has been modified or installed by the recipient, or for\nthe User Product in which it has been modified or installed.  
Access to a\nnetwork may be denied when the modification itself materially and\nadversely affects the operation of the network or violates the rules and\nprotocols for communication across the network.\n\n  Corresponding Source conveyed, and Installation Information provided,\nin accord with this section must be in a format that is publicly\ndocumented (and with an implementation available to the public in\nsource code form), and must require no special password or key for\nunpacking, reading or copying.\n\n  7. Additional Terms.\n\n  \"Additional permissions\" are terms that supplement the terms of this\nLicense by making exceptions from one or more of its conditions.\nAdditional permissions that are applicable to the entire Program shall\nbe treated as though they were included in this License, to the extent\nthat they are valid under applicable law.  If additional permissions\napply only to part of the Program, that part may be used separately\nunder those permissions, but the entire Program remains governed by\nthis License without regard to the additional permissions.\n\n  When you convey a copy of a covered work, you may at your option\nremove any additional permissions from that copy, or from any part of\nit.  (Additional permissions may be written to require their own\nremoval in certain cases when you modify the work.)  
You may place\nadditional permissions on material, added by you to a covered work,\nfor which you have or can give appropriate copyright permission.\n\n  Notwithstanding any other provision of this License, for material you\nadd to a covered work, you may (if authorized by the copyright holders of\nthat material) supplement the terms of this License with terms:\n\n    a) Disclaiming warranty or limiting liability differently from the\n    terms of sections 15 and 16 of this License; or\n\n    b) Requiring preservation of specified reasonable legal notices or\n    author attributions in that material or in the Appropriate Legal\n    Notices displayed by works containing it; or\n\n    c) Prohibiting misrepresentation of the origin of that material, or\n    requiring that modified versions of such material be marked in\n    reasonable ways as different from the original version; or\n\n    d) Limiting the use for publicity purposes of names of licensors or\n    authors of the material; or\n\n    e) Declining to grant rights under trademark law for use of some\n    trade names, trademarks, or service marks; or\n\n    f) Requiring indemnification of licensors and authors of that\n    material by anyone who conveys the material (or modified versions of\n    it) with contractual assumptions of liability to the recipient, for\n    any liability that these contractual assumptions directly impose on\n    those licensors and authors.\n\n  All other non-permissive additional terms are considered \"further\nrestrictions\" within the meaning of section 10.  If the Program as you\nreceived it, or any part of it, contains a notice stating that it is\ngoverned by this License along with a term that is a further\nrestriction, you may remove that term.  
If a license document contains\na further restriction but permits relicensing or conveying under this\nLicense, you may add to a covered work material governed by the terms\nof that license document, provided that the further restriction does\nnot survive such relicensing or conveying.\n\n  If you add terms to a covered work in accord with this section, you\nmust place, in the relevant source files, a statement of the\nadditional terms that apply to those files, or a notice indicating\nwhere to find the applicable terms.\n\n  Additional terms, permissive or non-permissive, may be stated in the\nform of a separately written license, or stated as exceptions;\nthe above requirements apply either way.\n\n  8. Termination.\n\n  You may not propagate or modify a covered work except as expressly\nprovided under this License.  Any attempt otherwise to propagate or\nmodify it is void, and will automatically terminate your rights under\nthis License (including any patent licenses granted under the third\nparagraph of section 11).\n\n  However, if you cease all violation of this License, then your\nlicense from a particular copyright holder is reinstated (a)\nprovisionally, unless and until the copyright holder explicitly and\nfinally terminates your license, and (b) permanently, if the copyright\nholder fails to notify you of the violation by some reasonable means\nprior to 60 days after the cessation.\n\n  Moreover, your license from a particular copyright holder is\nreinstated permanently if the copyright holder notifies you of the\nviolation by some reasonable means, this is the first time you have\nreceived notice of violation of this License (for any work) from that\ncopyright holder, and you cure the violation prior to 30 days after\nyour receipt of the notice.\n\n  Termination of your rights under this section does not terminate the\nlicenses of parties who have received copies or rights from you under\nthis License.  
If your rights have been terminated and not permanently\nreinstated, you do not qualify to receive new licenses for the same\nmaterial under section 10.\n\n  9. Acceptance Not Required for Having Copies.\n\n  You are not required to accept this License in order to receive or\nrun a copy of the Program.  Ancillary propagation of a covered work\noccurring solely as a consequence of using peer-to-peer transmission\nto receive a copy likewise does not require acceptance.  However,\nnothing other than this License grants you permission to propagate or\nmodify any covered work.  These actions infringe copyright if you do\nnot accept this License.  Therefore, by modifying or propagating a\ncovered work, you indicate your acceptance of this License to do so.\n\n  10. Automatic Licensing of Downstream Recipients.\n\n  Each time you convey a covered work, the recipient automatically\nreceives a license from the original licensors, to run, modify and\npropagate that work, subject to this License.  You are not responsible\nfor enforcing compliance by third parties with this License.\n\n  An \"entity transaction\" is a transaction transferring control of an\norganization, or substantially all assets of one, or subdividing an\norganization, or merging organizations.  If propagation of a covered\nwork results from an entity transaction, each party to that\ntransaction who receives a copy of the work also receives whatever\nlicenses to the work the party's predecessor in interest had or could\ngive under the previous paragraph, plus a right to possession of the\nCorresponding Source of the work from the predecessor in interest, if\nthe predecessor has it or can get it with reasonable efforts.\n\n  You may not impose any further restrictions on the exercise of the\nrights granted or affirmed under this License.  
For example, you may\nnot impose a license fee, royalty, or other charge for exercise of\nrights granted under this License, and you may not initiate litigation\n(including a cross-claim or counterclaim in a lawsuit) alleging that\nany patent claim is infringed by making, using, selling, offering for\nsale, or importing the Program or any portion of it.\n\n  11. Patents.\n\n  A \"contributor\" is a copyright holder who authorizes use under this\nLicense of the Program or a work on which the Program is based.  The\nwork thus licensed is called the contributor's \"contributor version\".\n\n  A contributor's \"essential patent claims\" are all patent claims\nowned or controlled by the contributor, whether already acquired or\nhereafter acquired, that would be infringed by some manner, permitted\nby this License, of making, using, or selling its contributor version,\nbut do not include claims that would be infringed only as a\nconsequence of further modification of the contributor version.  For\npurposes of this definition, \"control\" includes the right to grant\npatent sublicenses in a manner consistent with the requirements of\nthis License.\n\n  Each contributor grants you a non-exclusive, worldwide, royalty-free\npatent license under the contributor's essential patent claims, to\nmake, use, sell, offer for sale, import and otherwise run, modify and\npropagate the contents of its contributor version.\n\n  In the following three paragraphs, a \"patent license\" is any express\nagreement or commitment, however denominated, not to enforce a patent\n(such as an express permission to practice a patent or covenant not to\nsue for patent infringement).  
To \"grant\" such a patent license to a\nparty means to make such an agreement or commitment not to enforce a\npatent against the party.\n\n  If you convey a covered work, knowingly relying on a patent license,\nand the Corresponding Source of the work is not available for anyone\nto copy, free of charge and under the terms of this License, through a\npublicly available network server or other readily accessible means,\nthen you must either (1) cause the Corresponding Source to be so\navailable, or (2) arrange to deprive yourself of the benefit of the\npatent license for this particular work, or (3) arrange, in a manner\nconsistent with the requirements of this License, to extend the patent\nlicense to downstream recipients.  \"Knowingly relying\" means you have\nactual knowledge that, but for the patent license, your conveying the\ncovered work in a country, or your recipient's use of the covered work\nin a country, would infringe one or more identifiable patents in that\ncountry that you have reason to believe are valid.\n\n  If, pursuant to or in connection with a single transaction or\narrangement, you convey, or propagate by procuring conveyance of, a\ncovered work, and grant a patent license to some of the parties\nreceiving the covered work authorizing them to use, propagate, modify\nor convey a specific copy of the covered work, then the patent license\nyou grant is automatically extended to all recipients of the covered\nwork and works based on it.\n\n  A patent license is \"discriminatory\" if it does not include within\nthe scope of its coverage, prohibits the exercise of, or is\nconditioned on the non-exercise of one or more of the rights that are\nspecifically granted under this License.  
You may not convey a covered\nwork if you are a party to an arrangement with a third party that is\nin the business of distributing software, under which you make payment\nto the third party based on the extent of your activity of conveying\nthe work, and under which the third party grants, to any of the\nparties who would receive the covered work from you, a discriminatory\npatent license (a) in connection with copies of the covered work\nconveyed by you (or copies made from those copies), or (b) primarily\nfor and in connection with specific products or compilations that\ncontain the covered work, unless you entered into that arrangement,\nor that patent license was granted, prior to 28 March 2007.\n\n  Nothing in this License shall be construed as excluding or limiting\nany implied license or other defenses to infringement that may\notherwise be available to you under applicable patent law.\n\n  12. No Surrender of Others' Freedom.\n\n  If conditions are imposed on you (whether by court order, agreement or\notherwise) that contradict the conditions of this License, they do not\nexcuse you from the conditions of this License.  If you cannot convey a\ncovered work so as to satisfy simultaneously your obligations under this\nLicense and any other pertinent obligations, then as a consequence you may\nnot convey it at all.  For example, if you agree to terms that obligate you\nto collect a royalty for further conveying from those to whom you convey\nthe Program, the only way you could satisfy both those terms and this\nLicense would be to refrain entirely from conveying the Program.\n\n  13. Use with the GNU Affero General Public License.\n\n  Notwithstanding any other provision of this License, you have\npermission to link or combine any covered work with a work licensed\nunder version 3 of the GNU Affero General Public License into a single\ncombined work, and to convey the resulting work.  
The terms of this\nLicense will continue to apply to the part which is the covered work,\nbut the special requirements of the GNU Affero General Public License,\nsection 13, concerning interaction through a network will apply to the\ncombination as such.\n\n  14. Revised Versions of this License.\n\n  The Free Software Foundation may publish revised and/or new versions of\nthe GNU General Public License from time to time.  Such new versions will\nbe similar in spirit to the present version, but may differ in detail to\naddress new problems or concerns.\n\n  Each version is given a distinguishing version number.  If the\nProgram specifies that a certain numbered version of the GNU General\nPublic License \"or any later version\" applies to it, you have the\noption of following the terms and conditions either of that numbered\nversion or of any later version published by the Free Software\nFoundation.  If the Program does not specify a version number of the\nGNU General Public License, you may choose any version ever published\nby the Free Software Foundation.\n\n  If the Program specifies that a proxy can decide which future\nversions of the GNU General Public License can be used, that proxy's\npublic statement of acceptance of a version permanently authorizes you\nto choose that version for the Program.\n\n  Later license versions may give you additional or different\npermissions.  However, no additional obligations are imposed on any\nauthor or copyright holder as a result of your choosing to follow a\nlater version.\n\n  15. Disclaimer of Warranty.\n\n  THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY\nAPPLICABLE LAW.  EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT\nHOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM \"AS IS\" WITHOUT WARRANTY\nOF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,\nTHE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR\nPURPOSE.  
THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM\nIS WITH YOU.  SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF\nALL NECESSARY SERVICING, REPAIR OR CORRECTION.\n\n  16. Limitation of Liability.\n\n  IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING\nWILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS\nTHE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY\nGENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE\nUSE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF\nDATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD\nPARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),\nEVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF\nSUCH DAMAGES.\n\n  17. Interpretation of Sections 15 and 16.\n\n  If the disclaimer of warranty and limitation of liability provided\nabove cannot be given local legal effect according to their terms,\nreviewing courts shall apply local law that most closely approximates\nan absolute waiver of all civil liability in connection with the\nProgram, unless a warranty or assumption of liability accompanies a\ncopy of the Program in return for a fee.\n\n                     END OF TERMS AND CONDITIONS\n\n            How to Apply These Terms to Your New Programs\n\n  If you develop a new program, and you want it to be of the greatest\npossible use to the public, the best way to achieve this is to make it\nfree software which everyone can redistribute and change under these terms.\n\n  To do so, attach the following notices to the program.  
It is safest\nto attach them to the start of each source file to most effectively\nstate the exclusion of warranty; and each file should have at least\nthe \"copyright\" line and a pointer to where the full notice is found.\n\n    <one line to give the program's name and a brief idea of what it does.>\n    Copyright (C) <year>  <name of author>\n\n    This program is free software: you can redistribute it and/or modify\n    it under the terms of the GNU General Public License as published by\n    the Free Software Foundation, either version 3 of the License, or\n    (at your option) any later version.\n\n    This program is distributed in the hope that it will be useful,\n    but WITHOUT ANY WARRANTY; without even the implied warranty of\n    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n    GNU General Public License for more details.\n\n    You should have received a copy of the GNU General Public License\n    along with this program.  If not, see <https://www.gnu.org/licenses/>.\n\nAlso add information on how to contact you by electronic and paper mail.\n\n  If the program does terminal interaction, make it output a short\nnotice like this when it starts in an interactive mode:\n\n    <program>  Copyright (C) <year>  <name of author>\n    This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.\n    This is free software, and you are welcome to redistribute it\n    under certain conditions; type `show c' for details.\n\nThe hypothetical commands `show w' and `show c' should show the appropriate\nparts of the General Public License.  
Of course, your program's commands\nmight be different; for a GUI interface, you would use an \"about box\".\n\n  You should also get your employer (if you work as a programmer) or school,\nif any, to sign a \"copyright disclaimer\" for the program, if necessary.\nFor more information on this, and how to apply and follow the GNU GPL, see\n<https://www.gnu.org/licenses/>.\n\n  The GNU General Public License does not permit incorporating your program\ninto proprietary programs.  If your program is a subroutine library, you\nmay consider it more useful to permit linking proprietary applications with\nthe library.  If this is what you want to do, use the GNU Lesser General\nPublic License instead of this License.  But first, please read\n<https://www.gnu.org/licenses/why-not-lgpl.html>."
  },
  {
    "path": "packages/lib/README.md",
    "content": "# Spartan-ecdsa\n\nSpartan-ecdsa (which to our knowledge) is the fastest open-source method to verify ECDSA (secp256k1) signatures in zero-knowledge.\n\n## Disclaimers\n\n- Spartan-ecdsa is unaudited. Please use it at your own risk.\n- Usage on mobile browsers isn’t currently supported.\n\n## Usage example\n\n### Proving membership to a group of public keys\n\n```typescript\nimport {\n  MembershipProver,\n  MembershipVerifier,\n  Poseidon,\n  Tree,\n  defaultPubkeyMembershipPConfig,\n  defaultPubkeyMembershipVConfig\n} from \"@personaelabs/spartan-ecdsa\";\nimport { hashPersonalMessage } from \"@ethereumjs/util\";\n\n// Init the Poseidon hash\nconst poseidon = new Poseidon();\nawait poseidon.initWasm();\n\nconst treeDepth = 20; // Provided circuits have tree depth = 20\nconst tree = new Tree(treeDepth, poseidon);\n\nconst proverPubKey = Buffer.from(\"...\");\n// Get the prover public key hash\nconst proverPubkeyHash = poseidon.hashPubKey(proverPubKey);\n\n// Insert prover public key hash into the tree\ntree.insert(proverPubkeyHash);\n\n// Insert other members into the tree\nfor (const member of [\"🕵️\", \"🥷\", \"👩‍🔬\"]) {\n  tree.insert(\n    poseidon.hashPubKey(Buffer.from(\"\".padStart(16, member), \"utf16le\"))\n  );\n}\n\n// Compute the merkle proof\nconst index = tree.indexOf(proverPubkeyHash);\nconst merkleProof = tree.createProof(index);\n\n// Init the prover\nconst prover = new MembershipProver(defaultPubkeyMembershipPConfig);\nawait prover.initWasm();\n\nconst sig = \"0x...\";\nconst msgHash = hashPersonalMessage(Buffer.from(\"harry potter\"));\n// Prove membership\nconst { proof, publicInput } = await prover.prove(sig, msgHash, merkleProof);\n\n// Init verifier\nconst verifier = new MembershipVerifier(defaultPubkeyMembershipVConfig);\nawait verifier.initWasm();\n\n// Verify proof\nawait verifier.verify(proof, publicInput.serialize());\n```\n\n### Proving membership to a group of addresses\n\n```typescript\nimport {\n  MembershipProver,\n  
MembershipVerifier,\n  Poseidon,\n  Tree,\n  defaultAddressProverConfig,\n  defaultAddressVerifierConfig\n} from \"@personaelabs/spartan-ecdsa\";\nimport { hashPersonalMessage } from \"@ethereumjs/util\";\n\n// Init the Poseidon hash\nconst poseidon = new Poseidon();\nawait poseidon.initWasm();\n\nconst treeDepth = 20; // Provided circuits have tree depth = 20\nconst tree = new Tree(treeDepth, poseidon);\n\n// The prover's Ethereum address\nconst proverAddress = BigInt(\"0x...\");\n\n// Insert the prover address into the tree\ntree.insert(proverAddress);\n\n// Insert other members into the tree\nfor (const member of [\"🕵️\", \"🥷\", \"👩‍🔬\"]) {\n  tree.insert(\n    BigInt(\n      \"0x\" + Buffer.from(\"\".padStart(16, member), \"utf16le\").toString(\"hex\")\n    )\n  );\n}\n\n// Compute the merkle proof\nconst index = tree.indexOf(proverAddress);\nconst merkleProof = tree.createProof(index);\n\n// Init the prover\nconst prover = new MembershipProver(defaultAddressProverConfig);\nawait prover.initWasm();\n\nconst sig = \"0x...\";\nconst msgHash = hashPersonalMessage(Buffer.from(\"harry potter\"));\n// Prove membership\nconst { proof, publicInput } = await prover.prove({ sig, msgHash, merkleProof });\n\n// Init verifier\nconst verifier = new MembershipVerifier(defaultAddressVerifierConfig);\nawait verifier.initWasm();\n\n// Verify proof\nawait verifier.verify({ proof, publicInputSer: publicInput.serialize() });\n```\n\n## Circuit downloads\n\n_Provided circuits have Merkle tree depth = 20.\nChanging the tree depth doesn't significantly affect the proving time, so we only provide a single tree depth that is adequate (2^20 ~= 1 million leaves) for most situations._\n\n**Public key membership**\n| | |\n| --- | --- |\n| circuit | https://storage.googleapis.com/personae-proving-keys/membership/pubkey_membership.circuit |\n| witnessGenWasm | https://storage.googleapis.com/personae-proving-keys/membership/pubkey_membership.wasm |\n\n**Ethereum address membership**\n| | |\n| 
--- | --- |\n| circuit | https://storage.googleapis.com/personae-proving-keys/membership/addr_membership.circuit |\n| witnessGenWasm | https://storage.googleapis.com/personae-proving-keys/membership/addr_membership.wasm |\n\n## Development\n\n### Install dependencies\n\n```\nyarn\n```\n\n### Run tests\n\n```\nyarn jest\n```\n\n### Build\n\n```\nyarn build\n```\n"
  },
  {
    "path": "packages/lib/embedWasmBytes.ts",
    "content": "import * as fs from \"fs\";\n\n/**\n * Load the wasm file and output a typescript file with the wasm bytes embedded\n */\nconst embedWasmBytes = async () => {\n  let wasm = fs.readFileSync(\"../spartan_wasm/build/spartan_wasm_bg.wasm\");\n\n  let bytes = new Uint8Array(wasm.buffer);\n\n  const file = `\n    export const wasmBytes = new Uint8Array([${bytes.toString()}]);\n  `;\n\n  fs.writeFileSync(\"./src/wasm/wasmBytes.ts\", file);\n};\n\nembedWasmBytes();\n"
  },
  {
    "path": "packages/lib/jest.config.js",
    "content": "/** @type {import('ts-jest').JestConfigWithTsJest} */\nmodule.exports = {\n  preset: 'ts-jest',\n  testEnvironment: 'node',\n  transform: {\n    \"^.+\\\\.(ts|js)?$\": \"ts-jest\"\n  },\n  moduleNameMapper: {\n    \"@src/(.*)$\": \"<rootDir>/src/$1\",\n  },\n  testTimeout: 600000,\n};"
  },
  {
    "path": "packages/lib/package.json",
    "content": "{\n  \"name\": \"@personaelabs/spartan-ecdsa\",\n  \"version\": \"2.3.1\",\n  \"description\": \"Spartan-ecdsa (which to our knowledge) is the fastest open-source method to verify ECDSA (secp256k1) signatures in zero-knowledge.\",\n  \"keywords\": [\n    \"spartan\",\n    \"spartan-ecdsa\",\n    \"zk\",\n    \"efficient-ecdsa\"\n  ],\n  \"author\": \"Personae Labs\",\n  \"main\": \"./dist/index.js\",\n  \"types\": \"./dist/index.d.ts\",\n  \"license\": \"MIT\",\n  \"bugs\": {\n    \"url\": \"https://github.com/personaelabs/spartan-ecdsa/issues/new\"\n  },\n  \"homepage\": \"https://github.com/personaelabs/spartan-ecdsa\",\n  \"publishConfig\": {\n    \"access\": \"public\"\n  },\n  \"files\": [\n    \"dist/**/*\"\n  ],\n  \"scripts\": {\n    \"build\": \"rm -rf ./dist && yarn embedWasmBytes && tsc --project tsconfig.build.json\",\n    \"prepublishOnly\": \"yarn build\",\n    \"prepare\": \"yarn embedWasmBytes\",\n    \"embedWasmBytes\": \"ts-node ./embedWasmBytes.ts\",\n    \"test\": \"jest\"\n  },\n  \"devDependencies\": {\n    \"@types/jest\": \"^29.2.5\",\n    \"@zk-kit/incremental-merkle-tree\": \"^1.0.0\",\n    \"jest\": \"^29.3.1\",\n    \"ts-jest\": \"^29.0.3\",\n    \"typescript\": \"^4.9.4\"\n  },\n  \"dependencies\": {\n    \"@ethereumjs/util\": \"^8.0.3\",\n    \"@zk-kit/incremental-merkle-tree\": \"^1.0.0\",\n    \"elliptic\": \"^6.5.4\",\n    \"snarkjs\": \"^0.7.1\"\n  }\n}"
  },
  {
    "path": "packages/lib/src/config/index.ts",
    "content": "import { ProverConfig, VerifyConfig } from \"@src/types\";\n\n// Default configs for pubkey membership proving/verifying\nexport const defaultPubkeyProverConfig: ProverConfig = {\n    witnessGenWasm:\n        \"https://storage.googleapis.com/personae-proving-keys/membership/pubkey_membership.wasm\",\n    circuit:\n        \"https://storage.googleapis.com/personae-proving-keys/membership/pubkey_membership.circuit\"\n};\n\nexport const defaultPubkeyVerifierConfig: VerifyConfig = {\n    circuit: defaultPubkeyProverConfig.circuit\n};\n\n// Default configs for address membership proving/verifyign\nexport const defaultAddressProverConfig: ProverConfig = {\n    witnessGenWasm:\n        \"https://storage.googleapis.com/personae-proving-keys/membership/addr_membership.wasm\",\n    circuit:\n        \"https://storage.googleapis.com/personae-proving-keys/membership/addr_membership.circuit\"\n};\n\nexport const defaultAddressVerifierConfig: VerifyConfig = {\n    circuit: defaultAddressProverConfig.circuit\n};\n"
  },
  {
    "path": "packages/lib/src/core/prover.ts",
    "content": "import { Profiler } from \"@src/helpers/profiler\";\nimport { IProver, MerkleProof, NIZK, ProveArgs, ProverConfig } from \"@src/types\";\nimport { loadCircuit, fromSig, snarkJsWitnessGen } from \"@src/helpers/utils\";\nimport {\n  PublicInput,\n  computeEffEcdsaPubInput,\n  CircuitPubInput\n} from \"@src/helpers/publicInputs\";\nimport { init, wasm } from \"@src/wasm\";\nimport {\n  defaultPubkeyProverConfig,\n  defaultAddressProverConfig\n} from \"@src/config\";\n\n/**\n * ECDSA Membership Prover\n */\nexport class MembershipProver extends Profiler implements IProver {\n  circuit: string;\n  witnessGenWasm: string;\n  useRemoteCircuit: boolean;\n\n  constructor({\n    enableProfiler,\n    circuit,\n    witnessGenWasm,\n    useRemoteCircuit\n  }: ProverConfig) {\n    super({ enabled: enableProfiler });\n\n    if (\n      circuit === defaultPubkeyProverConfig.circuit ||\n      witnessGenWasm ===\n      defaultPubkeyProverConfig.witnessGenWasm ||\n      circuit === defaultAddressProverConfig.circuit ||\n      witnessGenWasm === defaultAddressProverConfig.witnessGenWasm\n    ) {\n      console.warn(`\n      Spartan-ecdsa default config warning:\n      We recommend using defaultPubkeyMembershipPConfig/defaultPubkeyMembershipVConfig only for testing purposes.\n      Please host and specify the circuit and witnessGenWasm files on your own server for sovereign control.\n      Download files: https://github.com/personaelabs/spartan-ecdsa/blob/main/packages/lib/README.md#circuit-downloads\n      `);\n    }\n\n    this.circuit = circuit;\n    this.witnessGenWasm = witnessGenWasm;\n    this.useRemoteCircuit = useRemoteCircuit ?? 
false;\n  }\n\n  async initWasm() {\n    await init();\n  }\n\n  async prove({ sig, msgHash, merkleProof }: ProveArgs): Promise<NIZK> {\n    const { r, s, v } = fromSig(sig);\n\n    const effEcdsaPubInput = computeEffEcdsaPubInput(r, v, msgHash);\n    const circuitPubInput = new CircuitPubInput(\n      merkleProof.root,\n      effEcdsaPubInput.Tx,\n      effEcdsaPubInput.Ty,\n      effEcdsaPubInput.Ux,\n      effEcdsaPubInput.Uy\n    );\n    const publicInput = new PublicInput(r, v, msgHash, circuitPubInput);\n\n    const witnessGenInput = {\n      s,\n      ...merkleProof,\n      ...effEcdsaPubInput\n    };\n\n    this.time(\"Generate witness\");\n    const witness = await snarkJsWitnessGen(\n      witnessGenInput,\n      this.witnessGenWasm\n    );\n    this.timeEnd(\"Generate witness\");\n\n    this.time(\"Load circuit\");\n    const useRemoteCircuit =\n      this.useRemoteCircuit || typeof window !== \"undefined\";\n    const circuitBin = await loadCircuit(this.circuit, useRemoteCircuit);\n    this.timeEnd(\"Load circuit\");\n\n    // Get the public input in bytes\n    const circuitPublicInput: Uint8Array =\n      publicInput.circuitPubInput.serialize();\n\n    this.time(\"Prove\");\n    let proof = wasm.prove(circuitBin, witness.data, circuitPublicInput);\n    this.timeEnd(\"Prove\");\n\n    return {\n      proof,\n      publicInput\n    };\n  }\n}\n"
  },
  {
    "path": "packages/lib/src/core/verifier.ts",
    "content": "import {\n  defaultAddressVerifierConfig,\n  defaultPubkeyVerifierConfig\n} from \"@src/config\";\nimport { Profiler } from \"@src/helpers/profiler\";\nimport { loadCircuit } from \"@src/helpers/utils\";\nimport { IVerifier, VerifyArgs, VerifyConfig } from \"@src/types\";\nimport { init, wasm } from \"@src/wasm\";\nimport { PublicInput, verifyEffEcdsaPubInput } from \"@src/helpers/publicInputs\";\n\n/**\n * ECDSA Membership Verifier\n */\nexport class MembershipVerifier extends Profiler implements IVerifier {\n  circuit: string;\n  useRemoteCircuit: boolean;\n\n  constructor({\n    circuit,\n    enableProfiler,\n    useRemoteCircuit\n  }: VerifyConfig) {\n    super({ enabled: enableProfiler });\n\n    if (\n      circuit === defaultAddressVerifierConfig.circuit ||\n      circuit === defaultPubkeyVerifierConfig.circuit\n    ) {\n      console.warn(`\n      Spartan-ecdsa default config warning:\n      We recommend using defaultPubkeyMembershipPConfig/defaultPubkeyMembershipVConfig only for testing purposes.\n      Please host and specify the circuit and witnessGenWasm files on your own server for sovereign control.\n      Download files: https://github.com/personaelabs/spartan-ecdsa/blob/main/packages/lib/README.md#circuit-downloads\n      `);\n    }\n\n    this.circuit = circuit;\n    this.useRemoteCircuit =\n      useRemoteCircuit || typeof window !== \"undefined\";\n  }\n\n  async initWasm() {\n    await init();\n  }\n\n  async verify({ proof, publicInputSer }: VerifyArgs): Promise<boolean> {\n    this.time(\"Load circuit\");\n    const circuitBin = await loadCircuit(this.circuit, this.useRemoteCircuit);\n    this.timeEnd(\"Load circuit\");\n\n    this.time(\"Verify public input\");\n    const publicInput = PublicInput.deserialize(publicInputSer);\n    const isPubInputValid = verifyEffEcdsaPubInput(publicInput);\n    this.timeEnd(\"Verify public input\");\n\n    this.time(\"Verify proof\");\n    let isProofValid;\n    try {\n      isProofValid = 
await wasm.verify(\n        circuitBin,\n        proof,\n        publicInput.circuitPubInput.serialize()\n      );\n    } catch (_e) {\n      isProofValid = false;\n    }\n\n    this.timeEnd(\"Verify proof\");\n    return isProofValid && isPubInputValid;\n  }\n}\n"
  },
  {
    "path": "packages/lib/src/helpers/poseidon.ts",
    "content": "import { init, wasm } from \"@src/wasm\";\nimport { bigIntToLeBytes, bytesLeToBigInt } from \"./utils\";\n\nexport class Poseidon {\n  hash(inputs: bigint[]): bigint {\n    const inputsBytes = new Uint8Array(32 * inputs.length);\n    for (let i = 0; i < inputs.length; i++) {\n      inputsBytes.set(bigIntToLeBytes(inputs[i], 32), i * 32);\n    }\n\n    const result = wasm.poseidon(inputsBytes);\n    return bytesLeToBigInt(result);\n  }\n\n  async initWasm() {\n    await init();\n  }\n\n  hashPubKey(pubKey: Buffer): bigint {\n    const pubKeyX = BigInt(\"0x\" + pubKey.toString(\"hex\").slice(0, 64));\n    const pubKeyY = BigInt(\"0x\" + pubKey.toString(\"hex\").slice(64, 128));\n\n    const pubKeyHash = this.hash([pubKeyX, pubKeyY]);\n    return pubKeyHash;\n  }\n}\n"
  },
  {
    "path": "packages/lib/src/helpers/profiler.ts",
    "content": "// A helper class to optionally run console.time/console.timeEnd\nexport class Profiler {\n  private enabled: boolean;\n\n  constructor(options: { enabled?: boolean }) {\n    this.enabled = options.enabled || false;\n  }\n\n  time(label: string) {\n    this.enabled && console.time(label);\n  }\n\n  timeEnd(label: string) {\n    this.enabled && console.timeEnd(label);\n  }\n}\n"
  },
  {
    "path": "packages/lib/src/helpers/publicInputs.ts",
    "content": "var EC = require(\"elliptic\").ec;\nconst BN = require(\"bn.js\");\n\nimport { EffECDSAPubInput } from \"@src/types\";\nimport { bytesToBigInt, bigIntToBytes } from \"./utils\";\n\nconst ec = new EC(\"secp256k1\");\n\nconst SECP256K1_N = new BN(\n  \"fffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd0364141\",\n  16\n);\n\n/**\n * Public inputs that are passed into the membership circuit\n * This doesn't include the public values that aren't passed into the circuit,\n * which are the group element R and the msgHash.\n */\nexport class CircuitPubInput {\n  merkleRoot: bigint;\n  Tx: bigint;\n  Ty: bigint;\n  Ux: bigint;\n  Uy: bigint;\n\n  constructor(\n    merkleRoot: bigint,\n    Tx: bigint,\n    Ty: bigint,\n    Ux: bigint,\n    Uy: bigint\n  ) {\n    this.merkleRoot = merkleRoot;\n    this.Tx = Tx;\n    this.Ty = Ty;\n    this.Ux = Ux;\n    this.Uy = Uy;\n  }\n\n  serialize(): Uint8Array {\n    let serialized = new Uint8Array(32 * 5);\n\n    serialized.set(bigIntToBytes(this.merkleRoot, 32), 0);\n    serialized.set(bigIntToBytes(this.Tx, 32), 32);\n    serialized.set(bigIntToBytes(this.Ty, 32), 64);\n    serialized.set(bigIntToBytes(this.Ux, 32), 96);\n    serialized.set(bigIntToBytes(this.Uy, 32), 128);\n\n    return serialized;\n  }\n\n  static deserialize(serialized: Uint8Array): CircuitPubInput {\n    const merkleRoot = bytesToBigInt(serialized.slice(0, 32));\n    const Tx = bytesToBigInt(serialized.slice(32, 64));\n    const Ty = bytesToBigInt(serialized.slice(64, 96));\n    const Ux = bytesToBigInt(serialized.slice(96, 128));\n    const Uy = bytesToBigInt(serialized.slice(128, 160));\n\n    return new CircuitPubInput(merkleRoot, Tx, Ty, Ux, Uy);\n  }\n}\n\n/**\n * Public values of the membership circuit\n */\nexport class PublicInput {\n  r: bigint;\n  rV: bigint;\n  msgHash: Buffer;\n  circuitPubInput: CircuitPubInput;\n\n  constructor(\n    r: bigint,\n    v: bigint,\n    msgHash: Buffer,\n    circuitPubInput: CircuitPubInput\n  ) 
{\n    this.r = r;\n    this.rV = v;\n    this.msgHash = msgHash;\n    this.circuitPubInput = circuitPubInput;\n  }\n\n  serialize(): Uint8Array {\n    const circuitPubInput: Uint8Array = this.circuitPubInput.serialize();\n    let serialized = new Uint8Array(\n      32 + 1 + this.msgHash.byteLength + circuitPubInput.byteLength\n    );\n\n    serialized.set(bigIntToBytes(this.r, 32), 0);\n    serialized.set(bigIntToBytes(this.rV, 1), 32);\n    serialized.set(circuitPubInput, 33);\n    serialized.set(this.msgHash, 33 + circuitPubInput.byteLength);\n\n    return serialized;\n  }\n\n  static deserialize(serialized: Uint8Array): PublicInput {\n    const r = bytesToBigInt(serialized.slice(0, 32));\n    const rV = bytesToBigInt(serialized.slice(32, 33));\n    const circuitPubInput: CircuitPubInput = CircuitPubInput.deserialize(\n      serialized.slice(32 + 1, 32 + 1 + 32 * 5)\n    );\n    const msgHash = serialized.slice(32 + 1 + 32 * 5);\n\n    return new PublicInput(r, rV, Buffer.from(msgHash), circuitPubInput);\n  }\n}\n\n/**\n * Compute the group elements T and U for efficient ecdsa\n * https://personaelabs.org/posts/efficient-ecdsa-1/\n */\nexport const computeEffEcdsaPubInput = (\n  r: bigint,\n  v: bigint,\n  msgHash: Buffer\n): EffECDSAPubInput => {\n  const isYOdd = (v - BigInt(27)) % BigInt(2);\n  const rPoint = ec.keyFromPublic(\n    ec.curve.pointFromX(new BN(r), isYOdd).encode(\"hex\"),\n    \"hex\"\n  );\n\n  // Get the group element: -(m * r^−1 * G)\n  const rInv = new BN(r).invm(SECP256K1_N);\n\n  // w = -(r^-1 * msg)\n  const w = rInv.mul(new BN(msgHash)).neg().umod(SECP256K1_N);\n  // U = -(w * G) = -(r^-1 * msg * G)\n  const U = ec.curve.g.mul(w);\n\n  // T = r^-1 * R\n  const T = rPoint.getPublic().mul(rInv);\n\n  return {\n    Tx: BigInt(T.getX().toString()),\n    Ty: BigInt(T.getY().toString()),\n    Ux: BigInt(U.getX().toString()),\n    Uy: BigInt(U.getY().toString())\n  };\n};\n\n/**\n * Verify the public values of the efficient ECDSA circuit\n 
*/\nexport const verifyEffEcdsaPubInput = ({\n  r,\n  rV,\n  msgHash,\n  circuitPubInput\n}: PublicInput): boolean => {\n  const expectedCircuitInput = computeEffEcdsaPubInput(\n    r,\n    rV,\n    msgHash\n  );\n\n  const isValid =\n    expectedCircuitInput.Tx === circuitPubInput.Tx &&\n    expectedCircuitInput.Ty === circuitPubInput.Ty &&\n    expectedCircuitInput.Ux === circuitPubInput.Ux &&\n    expectedCircuitInput.Uy === circuitPubInput.Uy;\n\n  return isValid;\n};\n"
  },
  {
    "path": "packages/lib/src/helpers/tree.ts",
    "content": "import { IncrementalMerkleTree } from \"@zk-kit/incremental-merkle-tree\";\nimport { Poseidon } from \"./poseidon\";\nimport { MerkleProof } from \"../types\";\n\nexport class Tree {\n  depth: number;\n  poseidon: Poseidon;\n  private treeInner!: IncrementalMerkleTree;\n\n  constructor(depth: number, poseidon: Poseidon) {\n    this.depth = depth;\n\n    this.poseidon = poseidon;\n    const hash = poseidon.hash.bind(poseidon);\n    this.treeInner = new IncrementalMerkleTree(hash, this.depth, BigInt(0));\n  }\n\n  insert(leaf: bigint) {\n    this.treeInner.insert(leaf);\n  }\n\n  delete(index: number) {\n    this.treeInner.delete(index);\n  }\n\n  leaves(): bigint[] {\n    return this.treeInner.leaves;\n  }\n\n  root(): bigint {\n    return this.treeInner.root;\n  }\n\n  indexOf(leaf: bigint): number {\n    return this.treeInner.indexOf(leaf);\n  }\n\n  createProof(index: number): MerkleProof {\n    const proof = this.treeInner.createProof(index);\n    return {\n      siblings: proof.siblings,\n      pathIndices: proof.pathIndices,\n      root: proof.root\n    };\n  }\n\n  verifyProof(proof: MerkleProof, leaf: bigint): boolean {\n    return this.treeInner.verifyProof({ ...proof, leaf });\n  }\n}\n"
  },
  {
    "path": "packages/lib/src/helpers/utils.ts",
    "content": "// @ts-ignore\nconst snarkJs = require(\"snarkjs\");\nimport { fromRpcSig } from \"@ethereumjs/util\";\nimport * as fs from \"fs\";\n\nexport const snarkJsWitnessGen = async (input: any, wasmFile: string) => {\n  const witness: {\n    type: string;\n    data?: any;\n  } = {\n    type: \"mem\"\n  };\n\n  await snarkJs.wtns.calculate(input, wasmFile, witness);\n  return witness;\n};\n\n/**\n * Load a circuit from a file or URL\n */\nexport const loadCircuit = async (\n  pathOrUrl: string,\n  useRemoteCircuit: boolean\n): Promise<Uint8Array> => {\n  if (useRemoteCircuit) {\n    return await fetchCircuit(pathOrUrl);\n  } else {\n    return await readCircuitFromFs(pathOrUrl);\n  }\n};\n\nconst readCircuitFromFs = async (path: string): Promise<Uint8Array> => {\n  const bytes = fs.readFileSync(path);\n  return new Uint8Array(bytes);\n};\n\nconst fetchCircuit = async (url: string): Promise<Uint8Array> => {\n  const response = await fetch(url);\n\n  const circuit = await response.arrayBuffer();\n\n  return new Uint8Array(circuit);\n};\n\nexport const bytesToBigInt = (bytes: Uint8Array): bigint =>\n  BigInt(\"0x\" + Buffer.from(bytes).toString(\"hex\"));\n\nexport const bytesLeToBigInt = (bytes: Uint8Array): bigint => {\n  const reversed = bytes.reverse();\n  return bytesToBigInt(reversed);\n};\n\nexport const bigIntToBytes = (n: bigint, size: number): Uint8Array => {\n  const hex = n.toString(16);\n  const hexPadded = hex.padStart(size * 2, \"0\");\n  return Buffer.from(hexPadded, \"hex\");\n};\n\nexport const bigIntToLeBytes = (n: bigint, size: number): Uint8Array => {\n  const bytes = bigIntToBytes(n, size);\n  return bytes.reverse();\n};\n\nexport const fromSig = (sig: string): { r: bigint; s: bigint; v: bigint } => {\n  const { r: _r, s: _s, v } = fromRpcSig(sig);\n  const r = BigInt(\"0x\" + _r.toString(\"hex\"));\n  const s = BigInt(\"0x\" + _s.toString(\"hex\"));\n  return { r, s, v };\n};\n"
  },
  {
    "path": "packages/lib/src/index.ts",
    "content": "export { MembershipProver } from \"@src/core/prover\";\nexport { MembershipVerifier } from \"@src/core/verifier\";\nexport { CircuitPubInput, PublicInput, computeEffEcdsaPubInput, verifyEffEcdsaPubInput } from \"@src/helpers/publicInputs\";\nexport { Tree } from \"@src/helpers/tree\";\nexport { Poseidon } from \"@src/helpers/poseidon\";\nexport { init, wasm } from \"@src/wasm/index\";\nexport { defaultPubkeyProverConfig as defaultPubkeyMembershipPConfig, defaultPubkeyVerifierConfig as defaultPubkeyMembershipVConfig, defaultAddressProverConfig as defaultAddressMembershipPConfig, defaultAddressVerifierConfig as defaultAddressMembershipVConfig } from \"@src/config\";\nexport type { MerkleProof, EffECDSAPubInput, NIZK, ProverConfig, VerifyConfig, IProver, IVerifier } from \"@src/types\";\n"
  },
  {
    "path": "packages/lib/src/types/index.ts",
    "content": "import { PublicInput } from \"@src/helpers/publicInputs\";\n\n// The same structure as MerkleProof in @zk-kit/incremental-merkle-tree.\n// Not directly using MerkleProof defined in @zk-kit/incremental-merkle-tree so\n// library users can choose whatever merkle tree management method they want.\nexport interface MerkleProof {\n    root: bigint;\n    siblings: [bigint][];\n    pathIndices: number[];\n}\nexport interface EffECDSAPubInput {\n    Tx: bigint;\n    Ty: bigint;\n    Ux: bigint;\n    Uy: bigint;\n}\n\nexport interface NIZK {\n    proof: Uint8Array;\n    publicInput: PublicInput;\n}\n\nexport interface ProverConfig {\n    witnessGenWasm: string;\n    circuit: string;\n    enableProfiler?: boolean;\n    useRemoteCircuit?: boolean;\n}\n\nexport interface ProveArgs {\n    sig: string;\n    msgHash: Buffer,\n    merkleProof: MerkleProof;\n}\n\nexport interface VerifyArgs {\n    proof: Uint8Array,\n    publicInputSer: Uint8Array\n}\n\nexport interface VerifyConfig {\n    circuit: string; // Path to circuit file compiled by Nova-Scotia\n    enableProfiler?: boolean;\n    useRemoteCircuit?: boolean;\n}\n\nexport interface IProver {\n    circuit: string; // Path to circuit file compiled by Nova-Scotia\n    witnessGenWasm: string; // Path to witness generator wasm file generated by Circom\n\n    prove({ sig, msgHash, merkleProof }: ProveArgs): Promise<NIZK>;\n}\n\nexport interface IVerifier {\n    circuit: string; // Path to circuit file compiled by Nova-Scotia\n\n    verify({ proof, publicInputSer }: VerifyArgs): Promise<boolean>;\n}\n"
  },
  {
    "path": "packages/lib/src/wasm/index.ts",
    "content": "import * as wasm from \"./wasm\";\n\nimport { wasmBytes } from \"./wasmBytes\";\n\nexport const init = async () => {\n  await wasm.initSync(wasmBytes.buffer);\n  wasm.init_panic_hook();\n};\n\nexport { wasm };\n"
  },
  {
    "path": "packages/lib/src/wasm/wasm.d.ts",
    "content": "/* tslint:disable */\n/* eslint-disable */\n/**\n*/\nexport function init_panic_hook(): void;\n/**\n* @param {Uint8Array} circuit\n* @param {Uint8Array} vars\n* @param {Uint8Array} public_inputs\n* @returns {Uint8Array}\n*/\nexport function prove(circuit: Uint8Array, vars: Uint8Array, public_inputs: Uint8Array): Uint8Array;\n/**\n* @param {Uint8Array} circuit\n* @param {Uint8Array} proof\n* @param {Uint8Array} public_input\n* @returns {boolean}\n*/\nexport function verify(circuit: Uint8Array, proof: Uint8Array, public_input: Uint8Array): boolean;\n/**\n* @param {Uint8Array} input_bytes\n* @returns {Uint8Array}\n*/\nexport function poseidon(input_bytes: Uint8Array): Uint8Array;\n\nexport type InitInput = RequestInfo | URL | Response | BufferSource | WebAssembly.Module;\n\nexport interface InitOutput {\n  readonly memory: WebAssembly.Memory;\n  readonly prove: (a: number, b: number, c: number, d: number, e: number, f: number, g: number) => void;\n  readonly verify: (a: number, b: number, c: number, d: number, e: number, f: number, g: number) => void;\n  readonly poseidon: (a: number, b: number, c: number) => void;\n  readonly init_panic_hook: () => void;\n  readonly __wbindgen_add_to_stack_pointer: (a: number) => number;\n  readonly __wbindgen_malloc: (a: number) => number;\n  readonly __wbindgen_free: (a: number, b: number) => void;\n  readonly __wbindgen_exn_store: (a: number) => void;\n  readonly __wbindgen_realloc: (a: number, b: number, c: number) => number;\n}\n\nexport type SyncInitInput = BufferSource | WebAssembly.Module;\n/**\n* Instantiates the given `module`, which can either be bytes or\n* a precompiled `WebAssembly.Module`.\n*\n* @param {SyncInitInput} module\n*\n* @returns {InitOutput}\n*/\nexport function initSync(module: SyncInitInput): InitOutput;\n/**\n* If `module_or_path` is {RequestInfo} or {URL}, makes a request and\n* for everything else, calls `WebAssembly.instantiate` directly.\n*\n* @param {InitInput | Promise<InitInput>} 
module_or_path\n* @param {WebAssembly.Memory} maybe_memory\n*\n* @returns {Promise<InitOutput>}\n*/\nexport default function __wbg_init (module_or_path?: InitInput | Promise<InitInput>, maybe_memory?: WebAssembly.Memory): Promise<InitOutput>;\n"
  },
  {
    "path": "packages/lib/src/wasm/wasm.js",
    "content": "let wasm;\n\nconst heap = new Array(128).fill(undefined);\n\nheap.push(undefined, null, true, false);\n\nfunction getObject(idx) { return heap[idx]; }\n\nlet heap_next = heap.length;\n\nfunction dropObject(idx) {\n    if (idx < 132) return;\n    heap[idx] = heap_next;\n    heap_next = idx;\n}\n\nfunction takeObject(idx) {\n    const ret = getObject(idx);\n    dropObject(idx);\n    return ret;\n}\n\nconst cachedTextDecoder = (typeof TextDecoder !== 'undefined' ? new TextDecoder('utf-8', { ignoreBOM: true, fatal: true }) : { decode: () => { throw Error('TextDecoder not available') } } );\n\nif (typeof TextDecoder !== 'undefined') { cachedTextDecoder.decode(); };\n\nlet cachedUint8Memory0 = null;\n\nfunction getUint8Memory0() {\n    if (cachedUint8Memory0 === null || cachedUint8Memory0.byteLength === 0) {\n        cachedUint8Memory0 = new Uint8Array(wasm.memory.buffer);\n    }\n    return cachedUint8Memory0;\n}\n\nfunction getStringFromWasm0(ptr, len) {\n    ptr = ptr >>> 0;\n    return cachedTextDecoder.decode(getUint8Memory0().subarray(ptr, ptr + len));\n}\n\nfunction addHeapObject(obj) {\n    if (heap_next === heap.length) heap.push(heap.length + 1);\n    const idx = heap_next;\n    heap_next = heap[idx];\n\n    heap[idx] = obj;\n    return idx;\n}\n/**\n*/\nexport function init_panic_hook() {\n    wasm.init_panic_hook();\n}\n\nlet WASM_VECTOR_LEN = 0;\n\nfunction passArray8ToWasm0(arg, malloc) {\n    const ptr = malloc(arg.length * 1) >>> 0;\n    getUint8Memory0().set(arg, ptr / 1);\n    WASM_VECTOR_LEN = arg.length;\n    return ptr;\n}\n\nlet cachedInt32Memory0 = null;\n\nfunction getInt32Memory0() {\n    if (cachedInt32Memory0 === null || cachedInt32Memory0.byteLength === 0) {\n        cachedInt32Memory0 = new Int32Array(wasm.memory.buffer);\n    }\n    return cachedInt32Memory0;\n}\n\nfunction getArrayU8FromWasm0(ptr, len) {\n    ptr = ptr >>> 0;\n    return getUint8Memory0().subarray(ptr / 1, ptr / 1 + len);\n}\n/**\n* @param {Uint8Array} 
circuit\n* @param {Uint8Array} vars\n* @param {Uint8Array} public_inputs\n* @returns {Uint8Array}\n*/\nexport function prove(circuit, vars, public_inputs) {\n    try {\n        const retptr = wasm.__wbindgen_add_to_stack_pointer(-16);\n        const ptr0 = passArray8ToWasm0(circuit, wasm.__wbindgen_malloc);\n        const len0 = WASM_VECTOR_LEN;\n        const ptr1 = passArray8ToWasm0(vars, wasm.__wbindgen_malloc);\n        const len1 = WASM_VECTOR_LEN;\n        const ptr2 = passArray8ToWasm0(public_inputs, wasm.__wbindgen_malloc);\n        const len2 = WASM_VECTOR_LEN;\n        wasm.prove(retptr, ptr0, len0, ptr1, len1, ptr2, len2);\n        var r0 = getInt32Memory0()[retptr / 4 + 0];\n        var r1 = getInt32Memory0()[retptr / 4 + 1];\n        var r2 = getInt32Memory0()[retptr / 4 + 2];\n        var r3 = getInt32Memory0()[retptr / 4 + 3];\n        if (r3) {\n            throw takeObject(r2);\n        }\n        var v4 = getArrayU8FromWasm0(r0, r1).slice();\n        wasm.__wbindgen_free(r0, r1 * 1);\n        return v4;\n    } finally {\n        wasm.__wbindgen_add_to_stack_pointer(16);\n    }\n}\n\n/**\n* @param {Uint8Array} circuit\n* @param {Uint8Array} proof\n* @param {Uint8Array} public_input\n* @returns {boolean}\n*/\nexport function verify(circuit, proof, public_input) {\n    try {\n        const retptr = wasm.__wbindgen_add_to_stack_pointer(-16);\n        const ptr0 = passArray8ToWasm0(circuit, wasm.__wbindgen_malloc);\n        const len0 = WASM_VECTOR_LEN;\n        const ptr1 = passArray8ToWasm0(proof, wasm.__wbindgen_malloc);\n        const len1 = WASM_VECTOR_LEN;\n        const ptr2 = passArray8ToWasm0(public_input, wasm.__wbindgen_malloc);\n        const len2 = WASM_VECTOR_LEN;\n        wasm.verify(retptr, ptr0, len0, ptr1, len1, ptr2, len2);\n        var r0 = getInt32Memory0()[retptr / 4 + 0];\n        var r1 = getInt32Memory0()[retptr / 4 + 1];\n        var r2 = getInt32Memory0()[retptr / 4 + 2];\n        if (r2) {\n            throw 
takeObject(r1);\n        }\n        return r0 !== 0;\n    } finally {\n        wasm.__wbindgen_add_to_stack_pointer(16);\n    }\n}\n\n/**\n* @param {Uint8Array} input_bytes\n* @returns {Uint8Array}\n*/\nexport function poseidon(input_bytes) {\n    try {\n        const retptr = wasm.__wbindgen_add_to_stack_pointer(-16);\n        const ptr0 = passArray8ToWasm0(input_bytes, wasm.__wbindgen_malloc);\n        const len0 = WASM_VECTOR_LEN;\n        wasm.poseidon(retptr, ptr0, len0);\n        var r0 = getInt32Memory0()[retptr / 4 + 0];\n        var r1 = getInt32Memory0()[retptr / 4 + 1];\n        var r2 = getInt32Memory0()[retptr / 4 + 2];\n        var r3 = getInt32Memory0()[retptr / 4 + 3];\n        if (r3) {\n            throw takeObject(r2);\n        }\n        var v2 = getArrayU8FromWasm0(r0, r1).slice();\n        wasm.__wbindgen_free(r0, r1 * 1);\n        return v2;\n    } finally {\n        wasm.__wbindgen_add_to_stack_pointer(16);\n    }\n}\n\nfunction handleError(f, args) {\n    try {\n        return f.apply(this, args);\n    } catch (e) {\n        wasm.__wbindgen_exn_store(addHeapObject(e));\n    }\n}\n\nconst cachedTextEncoder = (typeof TextEncoder !== 'undefined' ? new TextEncoder('utf-8') : { encode: () => { throw Error('TextEncoder not available') } } );\n\nconst encodeString = (typeof cachedTextEncoder.encodeInto === 'function'\n    ? 
function (arg, view) {\n    return cachedTextEncoder.encodeInto(arg, view);\n}\n    : function (arg, view) {\n    const buf = cachedTextEncoder.encode(arg);\n    view.set(buf);\n    return {\n        read: arg.length,\n        written: buf.length\n    };\n});\n\nfunction passStringToWasm0(arg, malloc, realloc) {\n\n    if (realloc === undefined) {\n        const buf = cachedTextEncoder.encode(arg);\n        const ptr = malloc(buf.length) >>> 0;\n        getUint8Memory0().subarray(ptr, ptr + buf.length).set(buf);\n        WASM_VECTOR_LEN = buf.length;\n        return ptr;\n    }\n\n    let len = arg.length;\n    let ptr = malloc(len) >>> 0;\n\n    const mem = getUint8Memory0();\n\n    let offset = 0;\n\n    for (; offset < len; offset++) {\n        const code = arg.charCodeAt(offset);\n        if (code > 0x7F) break;\n        mem[ptr + offset] = code;\n    }\n\n    if (offset !== len) {\n        if (offset !== 0) {\n            arg = arg.slice(offset);\n        }\n        ptr = realloc(ptr, len, len = offset + arg.length * 3) >>> 0;\n        const view = getUint8Memory0().subarray(ptr + offset, ptr + len);\n        const ret = encodeString(arg, view);\n\n        offset += ret.written;\n    }\n\n    WASM_VECTOR_LEN = offset;\n    return ptr;\n}\n\nasync function __wbg_load(module, imports) {\n    if (typeof Response === 'function' && module instanceof Response) {\n        if (typeof WebAssembly.instantiateStreaming === 'function') {\n            try {\n                return await WebAssembly.instantiateStreaming(module, imports);\n\n            } catch (e) {\n                if (module.headers.get('Content-Type') != 'application/wasm') {\n                    console.warn(\"`WebAssembly.instantiateStreaming` failed because your server does not serve wasm with `application/wasm` MIME type. Falling back to `WebAssembly.instantiate` which is slower. 
Original error:\\n\", e);\n\n                } else {\n                    throw e;\n                }\n            }\n        }\n\n        const bytes = await module.arrayBuffer();\n        return await WebAssembly.instantiate(bytes, imports);\n\n    } else {\n        const instance = await WebAssembly.instantiate(module, imports);\n\n        if (instance instanceof WebAssembly.Instance) {\n            return { instance, module };\n\n        } else {\n            return instance;\n        }\n    }\n}\n\nfunction __wbg_get_imports() {\n    const imports = {};\n    imports.wbg = {};\n    imports.wbg.__wbg_crypto_70a96de3b6b73dac = function(arg0) {\n        const ret = getObject(arg0).crypto;\n        return addHeapObject(ret);\n    };\n    imports.wbg.__wbindgen_is_object = function(arg0) {\n        const val = getObject(arg0);\n        const ret = typeof(val) === 'object' && val !== null;\n        return ret;\n    };\n    imports.wbg.__wbg_process_dd1577445152112e = function(arg0) {\n        const ret = getObject(arg0).process;\n        return addHeapObject(ret);\n    };\n    imports.wbg.__wbg_versions_58036bec3add9e6f = function(arg0) {\n        const ret = getObject(arg0).versions;\n        return addHeapObject(ret);\n    };\n    imports.wbg.__wbg_node_6a9d28205ed5b0d8 = function(arg0) {\n        const ret = getObject(arg0).node;\n        return addHeapObject(ret);\n    };\n    imports.wbg.__wbindgen_is_string = function(arg0) {\n        const ret = typeof(getObject(arg0)) === 'string';\n        return ret;\n    };\n    imports.wbg.__wbindgen_object_drop_ref = function(arg0) {\n        takeObject(arg0);\n    };\n    imports.wbg.__wbg_msCrypto_adbc770ec9eca9c7 = function(arg0) {\n        const ret = getObject(arg0).msCrypto;\n        return addHeapObject(ret);\n    };\n    imports.wbg.__wbg_require_f05d779769764e82 = function() { return handleError(function () {\n        const ret = module.require;\n        return addHeapObject(ret);\n    }, arguments) };\n    
imports.wbg.__wbindgen_is_function = function(arg0) {\n        const ret = typeof(getObject(arg0)) === 'function';\n        return ret;\n    };\n    imports.wbg.__wbindgen_string_new = function(arg0, arg1) {\n        const ret = getStringFromWasm0(arg0, arg1);\n        return addHeapObject(ret);\n    };\n    imports.wbg.__wbg_getRandomValues_3774744e221a22ad = function() { return handleError(function (arg0, arg1) {\n        getObject(arg0).getRandomValues(getObject(arg1));\n    }, arguments) };\n    imports.wbg.__wbg_randomFillSync_e950366c42764a07 = function() { return handleError(function (arg0, arg1) {\n        getObject(arg0).randomFillSync(takeObject(arg1));\n    }, arguments) };\n    imports.wbg.__wbg_newnoargs_e643855c6572a4a8 = function(arg0, arg1) {\n        const ret = new Function(getStringFromWasm0(arg0, arg1));\n        return addHeapObject(ret);\n    };\n    imports.wbg.__wbg_call_f96b398515635514 = function() { return handleError(function (arg0, arg1) {\n        const ret = getObject(arg0).call(getObject(arg1));\n        return addHeapObject(ret);\n    }, arguments) };\n    imports.wbg.__wbindgen_object_clone_ref = function(arg0) {\n        const ret = getObject(arg0);\n        return addHeapObject(ret);\n    };\n    imports.wbg.__wbg_self_b9aad7f1c618bfaf = function() { return handleError(function () {\n        const ret = self.self;\n        return addHeapObject(ret);\n    }, arguments) };\n    imports.wbg.__wbg_window_55e469842c98b086 = function() { return handleError(function () {\n        const ret = window.window;\n        return addHeapObject(ret);\n    }, arguments) };\n    imports.wbg.__wbg_globalThis_d0957e302752547e = function() { return handleError(function () {\n        const ret = globalThis.globalThis;\n        return addHeapObject(ret);\n    }, arguments) };\n    imports.wbg.__wbg_global_ae2f87312b8987fb = function() { return handleError(function () {\n        const ret = global.global;\n        return addHeapObject(ret);\n    }, 
arguments) };\n    imports.wbg.__wbindgen_is_undefined = function(arg0) {\n        const ret = getObject(arg0) === undefined;\n        return ret;\n    };\n    imports.wbg.__wbg_call_35782e9a1aa5e091 = function() { return handleError(function (arg0, arg1, arg2) {\n        const ret = getObject(arg0).call(getObject(arg1), getObject(arg2));\n        return addHeapObject(ret);\n    }, arguments) };\n    imports.wbg.__wbg_buffer_fcbfb6d88b2732e9 = function(arg0) {\n        const ret = getObject(arg0).buffer;\n        return addHeapObject(ret);\n    };\n    imports.wbg.__wbg_newwithbyteoffsetandlength_92c251989c485785 = function(arg0, arg1, arg2) {\n        const ret = new Uint8Array(getObject(arg0), arg1 >>> 0, arg2 >>> 0);\n        return addHeapObject(ret);\n    };\n    imports.wbg.__wbg_new_bc5d9aad3f9ac80e = function(arg0) {\n        const ret = new Uint8Array(getObject(arg0));\n        return addHeapObject(ret);\n    };\n    imports.wbg.__wbg_set_4b3aa8445ac1e91c = function(arg0, arg1, arg2) {\n        getObject(arg0).set(getObject(arg1), arg2 >>> 0);\n    };\n    imports.wbg.__wbg_newwithlength_89eca18f2603a999 = function(arg0) {\n        const ret = new Uint8Array(arg0 >>> 0);\n        return addHeapObject(ret);\n    };\n    imports.wbg.__wbg_subarray_7649d027b2b141b3 = function(arg0, arg1, arg2) {\n        const ret = getObject(arg0).subarray(arg1 >>> 0, arg2 >>> 0);\n        return addHeapObject(ret);\n    };\n    imports.wbg.__wbg_new_abda76e883ba8a5f = function() {\n        const ret = new Error();\n        return addHeapObject(ret);\n    };\n    imports.wbg.__wbg_stack_658279fe44541cf6 = function(arg0, arg1) {\n        const ret = getObject(arg1).stack;\n        const ptr1 = passStringToWasm0(ret, wasm.__wbindgen_malloc, wasm.__wbindgen_realloc);\n        const len1 = WASM_VECTOR_LEN;\n        getInt32Memory0()[arg0 / 4 + 1] = len1;\n        getInt32Memory0()[arg0 / 4 + 0] = ptr1;\n    };\n    imports.wbg.__wbg_error_f851667af71bcfc6 = function(arg0, arg1) 
{\n        let deferred0_0;\n        let deferred0_1;\n        try {\n            deferred0_0 = arg0;\n            deferred0_1 = arg1;\n            console.error(getStringFromWasm0(arg0, arg1));\n        } finally {\n            wasm.__wbindgen_free(deferred0_0, deferred0_1);\n        }\n    };\n    imports.wbg.__wbindgen_throw = function(arg0, arg1) {\n        throw new Error(getStringFromWasm0(arg0, arg1));\n    };\n    imports.wbg.__wbindgen_memory = function() {\n        const ret = wasm.memory;\n        return addHeapObject(ret);\n    };\n\n    return imports;\n}\n\nfunction __wbg_init_memory(imports, maybe_memory) {\n\n}\n\nfunction __wbg_finalize_init(instance, module) {\n    wasm = instance.exports;\n    __wbg_init.__wbindgen_wasm_module = module;\n    cachedInt32Memory0 = null;\n    cachedUint8Memory0 = null;\n\n\n    return wasm;\n}\n\nasync function initSync(module, maybe_memory) {\n    if (wasm !== undefined) return wasm;\n\n    const imports = __wbg_get_imports();\n\n    __wbg_init_memory(imports, maybe_memory);\n\n     /*\n    if (!(module instanceof WebAssembly.Module)) {\n        module = new WebAssembly.Module(module);\n    }\n    */\n    const compiled = WebAssembly.compile(module);\n\n    const instance = await WebAssembly.instantiate(await compiled, imports);\n    \n\n    return __wbg_finalize_init(instance, module);\n}\n\nasync function __wbg_init(input, maybe_memory) {\n    if (wasm !== undefined) return wasm;\n\n    /*\n    if (typeof input === 'undefined') {\n        input = new URL('spartan_wasm_bg.wasm', import.meta.url);\n    }\n    */\n    const imports = __wbg_get_imports();\n\n    if (typeof input === 'string' || (typeof Request === 'function' && input instanceof Request) || (typeof URL === 'function' && input instanceof URL)) {\n        input = fetch(input);\n    }\n\n    __wbg_init_memory(imports, maybe_memory);\n\n    const { instance, module } = await __wbg_load(await input, imports);\n\n    return __wbg_finalize_init(instance, 
module);\n}\n\nexport { initSync }\nexport default __wbg_init;\n"
  },
  {
    "path": "packages/lib/tests/efficientEcdsa.test.ts",
    "content": "import { hashPersonalMessage } from \"@ethereumjs/util\";\n\nimport {\n  CircuitPubInput,\n  PublicInput,\n  verifyEffEcdsaPubInput\n} from \"../src/helpers/publicInputs\";\n\ndescribe(\"public_input\", () => {\n  /**\n     Hard coded values were computed in sage using the following code \n      p = 0xfffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2f\n      K = GF(p)\n      a = K(0x0000000000000000000000000000000000000000000000000000000000000000)\n      b = K(0x0000000000000000000000000000000000000000000000000000000000000007)\n      E = EllipticCurve(K, (a, b))\n      G = E(0x79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798, 0x483ada7726a3c4655da4fbfc0e1108a8fd17b448a68554199c47d08ffb10d4b8)\n      E.set_order(0xfffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd0364141 * 0x1)\n\n      q = 0xfffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd0364141\n      msgHash = 0x8e05c70f46dbc3dda34547fc23ac835d728001bac55db9bd122d77d10d294431\n      rX = 0x5d5d43bec648296f5ef4b72c269bfde291fc0ed13bfc7e59c56b6c74aa9c932e\n      rY = 0x1b8ac22e769c661f029c58d04ee7871a8fc2327fd43b38fb3eeafe5e3e8343b5\n      R = E(rX, rY)\n      rInv = inverse_mod(rX, q)\n      T = R * rInv\n      U = ((-rInv * msgHash) % q) * G\n  */\n\n  it(\"should verify valid public input\", () => {\n    const merkleRoot = BigInt(\"0xbeef\");\n    const msg = Buffer.from(\"harry potter\");\n    const msgHash = hashPersonalMessage(msg);\n\n    const rX = BigInt(\n      \"0x5d5d43bec648296f5ef4b72c269bfde291fc0ed13bfc7e59c56b6c74aa9c932e\"\n    );\n    const Tx = BigInt(\n      \"0x2af2c62145d39e7dd285b55d5c51963baa31b58e0c1b8b7e1de9351840917581\"\n    );\n    const Ty = BigInt(\n      \"0xa662125801a14f2301cfb92965d5ba7a63765e6477a14ecd8e2d4f0b1353b83b\"\n    );\n    const Ux = BigInt(\n      \"0x7641bcce6a558dfa5018fe45da507ff49cc09aca5c02cceddfd845edebea6682\"\n    );\n    const Uy = BigInt(\n      
\"0xeaeeff65d77a9334606577c4696178497a94e775573553267eb856bee4c54a6f\"\n    );\n    const v = BigInt(28);\n\n    const circuitPubInput = new CircuitPubInput(merkleRoot, Tx, Ty, Ux, Uy);\n    const effEcdsaPubInput = new PublicInput(rX, v, msgHash, circuitPubInput);\n    const isValid = verifyEffEcdsaPubInput(effEcdsaPubInput);\n\n    expect(isValid).toBe(true);\n  });\n\n  // TODO Add more tests!\n});\n"
  },
  {
    "path": "packages/lib/tests/membershipNizk.test.ts",
    "content": "import {\n  hashPersonalMessage,\n  ecsign,\n  privateToAddress,\n  privateToPublic\n} from \"@ethereumjs/util\";\n\nimport * as path from \"path\";\n\nimport {\n  MembershipProver,\n  MembershipVerifier,\n  Tree,\n  Poseidon,\n  NIZK\n} from \"../src\";\n\ndescribe(\"membership prove and verify\", () => {\n  // Init prover\n  const treeDepth = 20;\n\n  const privKeys = [\"1\", \"a\", \"bb\", \"ccc\", \"dddd\", \"ffff\"].map(val =>\n    Buffer.from(val.padStart(64, \"0\"), \"hex\")\n  );\n\n  // Sign (Use privKeys[0] for proving)\n  const proverIndex = 0;\n  const proverPrivKey = privKeys[proverIndex];\n\n  let msg = Buffer.from(\"harry potter\");\n  const msgHash = hashPersonalMessage(msg);\n\n  const { v, r, s } = ecsign(msgHash, proverPrivKey);\n  const sig = `0x${r.toString(\"hex\")}${s.toString(\"hex\")}${v.toString(16)}`;\n\n  let poseidon: Poseidon;\n\n  beforeAll(async () => {\n    // Init Poseidon\n    poseidon = new Poseidon();\n    await poseidon.initWasm();\n  });\n\n  describe(\"pubkey_membership prover and verify\", () => {\n    const config = {\n      witnessGenWasm: path.join(\n        __dirname,\n        \"../../circuits/build/pubkey_membership/pubkey_membership_js/pubkey_membership.wasm\"\n      ),\n      circuit: path.join(\n        __dirname,\n        \"../../circuits/build/pubkey_membership/pubkey_membership.circuit\"\n      )\n    };\n\n    let pubKeyMembershipVerifier: MembershipVerifier, nizk: NIZK;\n\n    beforeAll(async () => {\n      pubKeyMembershipVerifier = new MembershipVerifier({\n        circuit: config.circuit\n      });\n\n      await pubKeyMembershipVerifier.initWasm();\n    });\n\n    it(\"should prove and verify valid signature and merkle proof\", async () => {\n      const pubKeyTree = new Tree(treeDepth, poseidon);\n\n      let proverPubKeyHash;\n      // Insert the members into the tree\n      for (const privKey of privKeys) {\n        const pubKey = privateToPublic(privKey);\n        const pubKeyHash = 
poseidon.hashPubKey(pubKey);\n        pubKeyTree.insert(pubKeyHash);\n\n        // Set prover's public key hash for the reference below\n        if (proverPrivKey === privKey) proverPubKeyHash = pubKeyHash;\n      }\n\n      const pubKeyMembershipProver = new MembershipProver(config);\n\n      await pubKeyMembershipProver.initWasm();\n\n      const index = pubKeyTree.indexOf(proverPubKeyHash as bigint);\n      const merkleProof = pubKeyTree.createProof(index);\n\n      nizk = await pubKeyMembershipProver.prove({ sig, msgHash, merkleProof });\n\n      const { proof, publicInput } = nizk;\n      expect(\n        await pubKeyMembershipVerifier.verify({\n          proof,\n          publicInputSer: publicInput.serialize()\n        })\n      ).toBe(true);\n    });\n\n    it(\"should assert invalid proof\", async () => {\n      const { publicInput } = nizk;\n      const proof = nizk.proof;\n      // Corrupt a single byte of the proof\n      proof[0] += 1;\n      expect(\n        await pubKeyMembershipVerifier.verify({\n          proof,\n          publicInputSer: publicInput.serialize()\n        })\n      ).toBe(false);\n    });\n\n    it(\"should assert invalid public input\", async () => {\n      const { proof } = nizk;\n      const publicInputSer = nizk.publicInput.serialize();\n      // Corrupt a single byte of the serialized public input\n      publicInputSer[0] += 1;\n      expect(\n        await pubKeyMembershipVerifier.verify({\n          proof,\n          publicInputSer\n        })\n      ).toBe(false);\n    });\n  });\n\n  describe(\"addr_membership prover and verify\", () => {\n    const config = {\n      witnessGenWasm: path.join(\n        __dirname,\n        \"../../circuits/build/addr_membership/addr_membership_js/addr_membership.wasm\"\n      ),\n      circuit: path.join(\n        __dirname,\n        \"../../circuits/build/addr_membership/addr_membership.circuit\"\n      )\n    };\n\n    let addressMembershipVerifier: MembershipVerifier, nizk: NIZK;\n    beforeAll(async () => {\n      addressMembershipVerifier = new MembershipVerifier({\n        circuit: config.circuit\n      });\n\n      await addressMembershipVerifier.initWasm();\n    });\n\n    it(\"should prove and verify valid signature and merkle proof\", async () => {\n      const addressTree = new Tree(treeDepth, poseidon);\n\n      let proverAddress;\n      // Insert the members into the tree\n      for (const privKey of privKeys) {\n        const address = BigInt(\n          \"0x\" + privateToAddress(privKey).toString(\"hex\")\n        );\n        addressTree.insert(address);\n\n        // Set prover's address for the reference below\n        if (proverPrivKey === privKey) proverAddress = address;\n      }\n\n      const index = addressTree.indexOf(proverAddress as bigint);\n      const merkleProof = addressTree.createProof(index);\n\n      const addressMembershipProver = new MembershipProver(config);\n\n      await addressMembershipProver.initWasm();\n\n      nizk = await addressMembershipProver.prove({ sig, msgHash, merkleProof });\n\n      expect(\n        await addressMembershipVerifier.verify({\n          proof: nizk.proof,\n          publicInputSer: nizk.publicInput.serialize()\n        })\n      ).toBe(true);\n    });\n\n    it(\"should assert invalid proof\", async () => {\n      const { publicInput } = nizk;\n      const proof = nizk.proof;\n      // Corrupt a single byte of the proof\n      proof[0] += 1;\n      expect(\n        await addressMembershipVerifier.verify({\n          proof,\n          publicInputSer: publicInput.serialize()\n        })\n      ).toBe(false);\n    });\n\n    it(\"should assert invalid public input\", async () => {\n      const { proof } = nizk;\n      const publicInputSer = nizk.publicInput.serialize();\n      // Corrupt a single byte of the serialized public input\n      publicInputSer[0] += 1;\n      expect(\n        await addressMembershipVerifier.verify({\n          proof,\n          publicInputSer\n        })\n      ).toBe(false);\n    });\n  });\n});\n"
  },
  {
    "path": "packages/lib/tests/tree.test.ts",
    "content": "import { Tree, Poseidon } from \"../src\";\n\ndescribe(\"Merkle tree prove and verify\", () => {\n  let poseidon: Poseidon;\n  let tree: Tree;\n  const members = new Array(10).fill(0).map((_, i) => BigInt(i));\n\n  beforeAll(async () => {\n    // Init Poseidon\n    poseidon = new Poseidon();\n    await poseidon.initWasm();\n    const treeDepth = 20;\n\n    tree = new Tree(treeDepth, poseidon);\n    for (const member of members) {\n      tree.insert(member);\n    }\n  });\n\n  it(\"should prove and verify a valid Merkle proof\", async () => {\n    const proof = tree.createProof(0);\n    expect(tree.verifyProof(proof, members[0])).toBe(true);\n  });\n\n  it(\"should assert an invalid Merkle proof\", async () => {\n    const proof = tree.createProof(0);\n    // Tamper with a sibling to invalidate the proof\n    proof.siblings[0][0] += BigInt(1);\n    expect(tree.verifyProof(proof, members[0])).toBe(false);\n    // Restore the sibling\n    proof.siblings[0][0] -= BigInt(1);\n  });\n});\n"
  },
  {
    "path": "packages/lib/tsconfig.build.json",
    "content": "{\n    \"extends\": \"./tsconfig.json\",\n    \"exclude\": [\n        \"./tests/**/*\"\n    ]\n}"
  },
  {
    "path": "packages/lib/tsconfig.json",
    "content": "{\n  \"compilerOptions\": {\n    \"baseUrl\": \".\",\n    \"rootDir\": \".\",\n    \"outDir\": \"./dist\",\n    \"declaration\": true,\n    \"target\": \"ES6\",\n    \"module\": \"CommonJS\",\n    \"moduleResolution\": \"node\",\n    \"allowJs\": true,\n    \"esModuleInterop\": true,\n    \"forceConsistentCasingInFileNames\": true,\n    \"strict\": true,\n    \"skipLibCheck\": true,\n    \"paths\": {\n      \"@src/*\": [\n        \"src/*\"\n      ]\n    }\n  },\n  \"include\": [\n    \"./src/**/*\",\n    \"./src/**/*.wasm\",\n    \"./tests/**/*\"\n  ],\n  \"exclude\": [\n    \"./jest.config.js\",\n    \"./node_modules\",\n    \"./dist\"\n  ]\n}"
  },
  {
    "path": "packages/poseidon/Cargo.toml",
    "content": "[package]\nname = \"poseidon\"\nversion = \"0.1.0\"\nedition = \"2021\"\n\n\n[dependencies]\nff = \"0.12.0\"\nhex = \"0.4.3\"\nhex-literal = \"0.3.4\"\nsecq256k1 = { path = \"../secq256k1\" }\ngetrandom = { version = \"0.2.8\", features = [\"js\"] }\nlazy_static = \"1.4.0\"\n#typenum = { version = \"1.16.0\", optional = true }\n#neptune = { version = \"8.1.0\", optional = true }\n#blstrs = { version = \"0.6.0\", optional = true }\n\n"
  },
  {
    "path": "packages/poseidon/LICENSE",
    "content": "MIT License\n\nCopyright (c) 2022 Ethereum Foundation\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE."
  },
  {
    "path": "packages/poseidon/README.md",
    "content": "Generate Poseidon params for the secp256k1 base field\n\n```\nsh ./k256_params.sh\n```\n\n## Parameters\n\nWe use the following parameters for our Poseidon instantiation (using the notation from the [Neptune specification](https://spec.filecoin.io/#section-algorithms.crypto.poseidon)). Security inequalities are checked in [security_inequalities.sage](https://github.com/personaelabs/spartan-ecdsa/blob/f6ffbb4fc8977c4e30ae6df4eba6f1da0c534722/packages/poseidon/sage/security_inequalities.sage).\n\n```\nM=128\nt=3\np=0xfffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2f\nRf=8\nRp=56\na=5\n```\n"
  },
  {
    "path": "packages/poseidon/k256_params.sh",
    "content": "sage ./sage/generate_params_poseidon.sage 1 0 256 3 5 128 0xfffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2f"
  },
  {
    "path": "packages/poseidon/sage/generate_params_poseidon.sage",
    "content": "# https://extgit.iaik.tugraz.at/krypto/hadeshash/-/blob/master/code/generate_params_poseidon.sage\n\nfrom math import *\nimport sys\n#from sage.rings.polynomial.polynomial_gf2x import GF2X_BuildIrred_list\n\nif len(sys.argv) < 8:\n    print(\"Usage: <script> <field> <s_box> <field_size> <num_cells> <alpha> <security_level> <modulus_hex>\")\n    print(\"field = 1 for GF(p)\")\n    print(\"s_box = 0 for x^alpha, s_box = 1 for x^(-1)\")\n    exit()\n\n# GF(p), n = 255, t = 3, alpha=5: sage generate_params_poseidon.sage 1 0 255 3 5 128 0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001\n# GF(p), n = 255, t = 5, alpha=5: sage generate_params_poseidon.sage 1 0 255 5 5 128 0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001\n# GF(p), n = 254, t = 3, alpha=5: sage generate_params_poseidon.sage 1 0 254 3 5 128 0x30644e72e131a029b85045b68181585d2833e84879b9709143e1f593f0000001\n# GF(p), n = 254, t = 5, alpha=5: sage generate_params_poseidon.sage 1 0 254 5 5 128 0x30644e72e131a029b85045b68181585d2833e84879b9709143e1f593f0000001\n# GF(p), n = 64, t = 24, alpha=3: sage generate_params_poseidon.sage 1 0 64 24 3 128 0xffffffffffffffc5\n\n# p = 2^251 + 17 * 2^192 + 1 = 0x800000000000011000000000000000000000000000000000000000000000001\n# sage generate_params_poseidon.sage 1 0 252 3 3 128 0x800000000000011000000000000000000000000000000000000000000000001\n# sage generate_params_poseidon.sage 1 0 252 4 3 128 0x800000000000011000000000000000000000000000000000000000000000001\n# sage generate_params_poseidon.sage 1 0 252 8 3 128 0x800000000000011000000000000000000000000000000000000000000000001\n# sage generate_params_poseidon.sage 1 0 252 16 3 128 0x800000000000011000000000000000000000000000000000000000000000001\n\n# Flags\nwrite_file = True\n\n# Parameters\nFIELD = int(sys.argv[1]) # 0 .. GF(2^n), 1 .. GF(p)\nSBOX = int(sys.argv[2]) # 0 .. x^alpha, 1 .. 
x^(-1)\nFIELD_SIZE = int(sys.argv[3]) # n\nNUM_CELLS = int(sys.argv[4]) # t\nALPHA = int(sys.argv[5])\nSECURITY_LEVEL = int(sys.argv[6])\nR_F_FIXED = 0\nR_P_FIXED = 0\n\nINIT_SEQUENCE = []\n\nPRIME_NUMBER = 0\nF = None\nif FIELD == 0:\n    #PRIME_NUMBER = GF(2)['x'](GF2X_BuildIrred_list(FIELD_SIZE))\n    PRIME_NUMBER = int(sys.argv[7], 16)\n    F.<x> = GF(2**FIELD_SIZE, name='x', modulus = PRIME_NUMBER)\nelif FIELD == 1:\n    PRIME_NUMBER = int(sys.argv[7], 16)\n    F = GF(PRIME_NUMBER)\nelse:\n    print(\"Unknown field type, only 0 and 1 supported!\")\n    exit()\n\ndef sat_inequiv_alpha(p, t, R_F, R_P, alpha, M):\n    N = int(FIELD_SIZE * NUM_CELLS)\n\n    if alpha > 0:\n        R_F_1 = 6 if M <= ((floor(log(p, 2) - ((alpha-1)/2.0))) * (t + 1)) else 10 # Statistical\n        R_F_2 = 1 + ceil(log(2, alpha) * min(M, FIELD_SIZE)) + ceil(log(t, alpha)) - R_P # Interpolation\n        #R_F_3 = ceil(min(FIELD_SIZE, M) / float(3*log(alpha, 2))) - R_P # Groebner 1\n        #R_F_3 = ((log(2, alpha) / float(2)) * min(FIELD_SIZE, M)) - R_P # Groebner 1\n        R_F_3 = 1 + (log(2, alpha) * min(M/float(3), log(p, 2)/float(2))) - R_P # Groebner 1\n        R_F_4 = t - 1 + min((log(2, alpha) * M) / float(t+1), ((log(2, alpha)*log(p, 2)) / float(2))) - R_P # Groebner 2\n        #R_F_5 = ((1.0/(2*log((alpha**alpha)/float((alpha-1)**(alpha-1)), 2))) * min(FIELD_SIZE, M) + t - 2 - R_P) / float(t - 1) # Groebner 3\n        R_F_max = max(ceil(R_F_1), ceil(R_F_2), ceil(R_F_3), ceil(R_F_4))\n        return (R_F >= R_F_max)\n\n    elif alpha == (-1):\n        R_F_1 = 6 if M <= ((floor(log(p, 2) - 2)) * (t + 1)) else 10 # Statistical\n        R_P_1 = 1 + ceil(0.5 * min(M, FIELD_SIZE)) + ceil(log(t, 2)) - floor(R_F * log(t, 2)) # Interpolation\n        R_P_2 = 1 + ceil(0.5 * min(M, FIELD_SIZE)) + ceil(log(t, 2)) - floor(R_F * log(t, 2))\n        R_P_3 = t - 1 + ceil(log(t, 2)) + min(ceil(M / float(t+1)), ceil(0.5*log(p, 2))) - floor(R_F * log(t, 2)) # Groebner 2\n        R_F_max = 
ceil(R_F_1)\n        R_P_max = max(ceil(R_P_1), ceil(R_P_2), ceil(R_P_3))\n        return (R_F >= R_F_max and R_P >= R_P_max)\n    else:\n        print(\"Invalid value for alpha!\")\n        exit(1)\n\ndef get_sbox_cost(R_F, R_P, N, t):\n    return int(t * R_F + R_P)\n\ndef get_size_cost(R_F, R_P, N, t):\n    n = ceil(float(N) / t)\n    return int((N * R_F) + (n * R_P))\n\ndef get_depth_cost(R_F, R_P, N, t):\n    return int(R_F + R_P)\n\ndef find_FD_round_numbers(p, t, alpha, M, cost_function, security_margin):\n    N = int(FIELD_SIZE * NUM_CELLS)\n\n    sat_inequiv = sat_inequiv_alpha\n    \n    R_P = 0\n    R_F = 0\n    min_cost = float(\"inf\")\n    max_cost_rf = 0\n    # Brute-force approach\n    for R_P_t in range(1, 500):\n        for R_F_t in range(4, 100):\n            if R_F_t % 2 == 0:\n                if (sat_inequiv(p, t, R_F_t, R_P_t, alpha, M) == True):\n                    if security_margin == True:\n                        R_F_t += 2\n                        R_P_t = int(ceil(float(R_P_t) * 1.075))\n                    cost = cost_function(R_F_t, R_P_t, N, t)\n                    if (cost < min_cost) or ((cost == min_cost) and (R_F_t < max_cost_rf)):\n                        R_P = ceil(R_P_t)\n                        R_F = ceil(R_F_t)\n                        min_cost = cost\n                        max_cost_rf = R_F\n    return (int(R_F), int(R_P))\n\ndef calc_final_numbers_fixed(p, t, alpha, M, security_margin):\n    # [Min. S-boxes] Find best possible for t and N\n    N = int(FIELD_SIZE * NUM_CELLS)\n    cost_function = get_sbox_cost\n    ret_list = []\n    (R_F, R_P) = find_FD_round_numbers(p, t, alpha, M, cost_function, security_margin)\n    min_sbox_cost = cost_function(R_F, R_P, N, t)\n    ret_list.append(R_F)\n    ret_list.append(R_P)\n    ret_list.append(min_sbox_cost)\n\n    # [Min. 
Size] Find best possible for t and N\n    # Minimum number of S-boxes for fixed n results in minimum size also (round numbers are the same)!\n    min_size_cost = get_size_cost(R_F, R_P, N, t)\n    ret_list.append(min_size_cost)\n\n    return ret_list # [R_F, R_P, min_sbox_cost, min_size_cost]\n\ndef print_latex_table_combinations(combinations, alpha, security_margin):\n    for comb in combinations:\n        N = comb[0]\n        t = comb[1]\n        M = comb[2]\n        n = int(N / t)\n        prime = PRIME_NUMBER\n        ret = calc_final_numbers_fixed(prime, t, alpha, M, security_margin)\n        field_string = \"\\mathbb F_{p}\"\n        sbox_string = \"x^{\" + str(alpha) + \"}\"\n        print(\"$\" + str(M) + \"$ & $\" + str(N) + \"$ & $\" + str(n) + \"$ & $\" + str(t) + \"$ & $\" + str(ret[0]) + \"$ & $\" + str(ret[1]) + \"$ & $\" + field_string + \"$ & $\" + str(ret[2]) + \"$ & $\" + str(ret[3]) + \"$ \\\\\\\\\")\n\n###\n### Get round number first\n###\nROUND_NUMBERS = calc_final_numbers_fixed(PRIME_NUMBER, NUM_CELLS, ALPHA, SECURITY_LEVEL, True)\nR_F_FIXED = ROUND_NUMBERS[0]\nR_P_FIXED = ROUND_NUMBERS[1]\n# R_F_FIXED = 8\n# R_P_FIXED = 60\n\nprint(\"Params: n=%d, t=%d, alpha=%d, M=%d, R_F=%d, R_P=%d\"%(FIELD_SIZE, NUM_CELLS, ALPHA, SECURITY_LEVEL, R_F_FIXED, R_P_FIXED))\nprint(\"Modulus = %d\"%(PRIME_NUMBER))\nprint(\"Number of S-boxes:\", ROUND_NUMBERS[2])\n# print(\"Number of S-boxes per state element:\", ceil(ROUND_NUMBERS[2] / float(NUM_CELLS)))\n\nFILE = None\nif write_file == True:\n    FILE = open(\"poseidon_params_n%d_t%d_alpha%d_M%d.txt\"%(FIELD_SIZE, NUM_CELLS, ALPHA, SECURITY_LEVEL),'w')\n    FILE.write(\"Params: n=%d, t=%d, alpha=%d, M=%d, R_F=%d, R_P=%d\\n\"%(FIELD_SIZE, NUM_CELLS, ALPHA, SECURITY_LEVEL, R_F_FIXED, R_P_FIXED))\n    FILE.write(\"Modulus = %d\\n\"%(PRIME_NUMBER))\n    FILE.write(\"Number of S-boxes: %d\\n\"%(ROUND_NUMBERS[2]))\n    # FILE.write(\"Number of S-boxes per state element: %d\\n\"%(ceil(ROUND_NUMBERS[2] / 
float(NUM_CELLS))))\n\n###\n### Matrices and round constants\n###\ndef grain_sr_generator():\n    bit_sequence = INIT_SEQUENCE\n    for _ in range(0, 160):\n        new_bit = bit_sequence[62] ^^ bit_sequence[51] ^^ bit_sequence[38] ^^ bit_sequence[23] ^^ bit_sequence[13] ^^ bit_sequence[0]\n        bit_sequence.pop(0)\n        bit_sequence.append(new_bit)\n        \n    while True:\n        new_bit = bit_sequence[62] ^^ bit_sequence[51] ^^ bit_sequence[38] ^^ bit_sequence[23] ^^ bit_sequence[13] ^^ bit_sequence[0]\n        bit_sequence.pop(0)\n        bit_sequence.append(new_bit)\n        while new_bit == 0:\n            new_bit = bit_sequence[62] ^^ bit_sequence[51] ^^ bit_sequence[38] ^^ bit_sequence[23] ^^ bit_sequence[13] ^^ bit_sequence[0]\n            bit_sequence.pop(0)\n            bit_sequence.append(new_bit)\n            new_bit = bit_sequence[62] ^^ bit_sequence[51] ^^ bit_sequence[38] ^^ bit_sequence[23] ^^ bit_sequence[13] ^^ bit_sequence[0]\n            bit_sequence.pop(0)\n            bit_sequence.append(new_bit)\n        new_bit = bit_sequence[62] ^^ bit_sequence[51] ^^ bit_sequence[38] ^^ bit_sequence[23] ^^ bit_sequence[13] ^^ bit_sequence[0]\n        bit_sequence.pop(0)\n        bit_sequence.append(new_bit)\n        yield new_bit\ngrain_gen = grain_sr_generator()\n        \ndef grain_random_bits(num_bits):\n    random_bits = [next(grain_gen) for i in range(0, num_bits)]\n    # random_bits.reverse() ## Remove comment to start from least significant bit\n    random_int = int(\"\".join(str(i) for i in random_bits), 2)\n    return random_int\n\ndef init_generator(field, sbox, n, t, R_F, R_P):\n    # Generate initial sequence based on parameters\n    bit_list_field = [_ for _ in (bin(FIELD)[2:].zfill(2))]\n    bit_list_sbox = [_ for _ in (bin(SBOX)[2:].zfill(4))]\n    bit_list_n = [_ for _ in (bin(FIELD_SIZE)[2:].zfill(12))]\n    bit_list_t = [_ for _ in (bin(NUM_CELLS)[2:].zfill(12))]\n    bit_list_R_F = [_ for _ in (bin(R_F)[2:].zfill(10))]\n    
bit_list_R_P = [_ for _ in (bin(R_P)[2:].zfill(10))]\n    bit_list_1 = [1] * 30\n    global INIT_SEQUENCE\n    INIT_SEQUENCE = bit_list_field + bit_list_sbox + bit_list_n + bit_list_t + bit_list_R_F + bit_list_R_P + bit_list_1\n    INIT_SEQUENCE = [int(_) for _ in INIT_SEQUENCE]\n\ndef generate_constants(field, n, t, R_F, R_P, prime_number):\n    round_constants = []\n    num_constants = (R_F + R_P) * t\n\n    if field == 0:\n        for i in range(0, num_constants):\n            random_int = grain_random_bits(n)\n            round_constants.append(random_int)\n    elif field == 1:\n        for i in range(0, num_constants):\n            random_int = grain_random_bits(n)\n            while random_int >= prime_number:\n                # print(\"[Info] Round constant is not in prime field! Taking next one.\")\n                random_int = grain_random_bits(n)\n            round_constants.append(random_int)\n    return round_constants\n\ndef print_round_constants(round_constants, n, field):\n    print(\"Number of round constants:\", len(round_constants))\n    if write_file == True:\n        FILE.write(\"Number of round constants: \" + str(len(round_constants)) + \"\\n\")\n\n    if field == 0:\n        print(\"Round constants for GF(2^n):\")\n        if write_file == True:\n            FILE.write(\"Round constants for GF(2^n):\\n\")\n    elif field == 1:\n        print(\"Round constants for GF(p):\")\n        if write_file == True:\n            FILE.write(\"Round constants for GF(p):\\n\")\n    hex_length = int(ceil(float(n) / 4)) + 2 # +2 for \"0x\"\n    print([\"{}\".format(entry, hex_length) for entry in round_constants])\n    if write_file == True:\n        FILE.write(str([\"{}\".format(entry, hex_length) for entry in round_constants]) + \"\\n\")\n\ndef create_mds_p(n, t):\n    M = matrix(F, t, t)\n\n    # Sample random distinct indices and assign to xs and ys\n    while True:\n        flag = True\n        rand_list = [F(grain_random_bits(n)) for _ in range(0, 
2*t)]\n        while len(rand_list) != len(set(rand_list)): # Check for duplicates\n            rand_list = [F(grain_random_bits(n)) for _ in range(0, 2*t)]\n        xs = rand_list[:t]\n        ys = rand_list[t:]\n        # xs = [F(ele) for ele in range(0, t)]\n        # ys = [F(ele) for ele in range(t, 2*t)]\n        for i in range(0, t):\n            for j in range(0, t):\n                if (flag == False) or ((xs[i] + ys[j]) == 0):\n                    flag = False\n                else:\n                    entry = (xs[i] + ys[j])^(-1)\n                    M[i, j] = entry\n        if flag == False:\n            continue\n        return M\n\ndef create_mds_gf2n(n, t):\n    M = matrix(F, t, t)\n\n    # Sample random distinct indices and assign to xs and ys\n    while True:\n        flag = True\n        rand_list = [F.fetch_int(grain_random_bits(n)) for _ in range(0, 2*t)]\n        while len(rand_list) != len(set(rand_list)): # Check for duplicates\n            rand_list = [F.fetch_int(grain_random_bits(n)) for _ in range(0, 2*t)]\n        xs = rand_list[:t]\n        ys = rand_list[t:]\n        for i in range(0, t):\n            for j in range(0, t):\n                if (flag == False) or ((xs[i] + ys[j]) == 0):\n                    flag = False\n                else:\n                    entry = (xs[i] + ys[j])^(-1)\n                    M[i, j] = entry\n        if flag == False:\n            continue\n        return M\n\ndef generate_vectorspace(round_num, M, M_round, NUM_CELLS):\n    t = NUM_CELLS\n    s = 1\n    V = VectorSpace(F, t)\n    if round_num == 0:\n        return V\n    elif round_num == 1:\n        return V.subspace(V.basis()[s:])\n    else:\n        mat_temp = matrix(F)\n        for i in range(0, round_num-1):\n            add_rows = []\n            for j in range(0, s):\n                add_rows.append(M_round[i].rows()[j][s:])\n            mat_temp = matrix(mat_temp.rows() + add_rows)\n        r_k = mat_temp.right_kernel()\n        
extended_basis_vectors = []\n        for vec in r_k.basis():\n            extended_basis_vectors.append(vector([0]*s + list(vec)))\n        S = V.subspace(extended_basis_vectors)\n\n        return S\n\ndef subspace_times_matrix(subspace, M, NUM_CELLS):\n    t = NUM_CELLS\n    V = VectorSpace(F, t)\n    subspace_basis = subspace.basis()\n    new_basis = []\n    for vec in subspace_basis:\n        new_basis.append(M * vec)\n    new_subspace = V.subspace(new_basis)\n    return new_subspace\n\n# Returns True if the matrix is considered secure, False otherwise\ndef algorithm_1(M, NUM_CELLS):\n    t = NUM_CELLS\n    s = 1\n    r = floor((t - s) / float(s))\n\n    # Generate round matrices\n    M_round = []\n    for j in range(0, t+1):\n        M_round.append(M^(j+1))\n\n    for i in range(1, r+1):\n        mat_test = M^i\n        entry = mat_test[0, 0]\n        mat_target = matrix.circulant(vector([entry] + ([F(0)] * (t-1))))\n\n        if (mat_test - mat_target) == matrix.circulant(vector([F(0)] * (t))):\n            return [False, 1]\n\n        S = generate_vectorspace(i, M, M_round, t)\n        V = VectorSpace(F, t)\n\n        basis_vectors= []\n        for eigenspace in mat_test.eigenspaces_right(format='galois'):\n            if (eigenspace[0] not in F):\n                continue\n            vector_subspace = eigenspace[1]\n            intersection = S.intersection(vector_subspace)\n            basis_vectors += intersection.basis()\n        IS = V.subspace(basis_vectors)\n\n        if IS.dimension() >= 1 and IS != V:\n            return [False, 2]\n        for j in range(1, i+1):\n            S_mat_mul = subspace_times_matrix(S, M^j, t)\n            if S == S_mat_mul:\n                print(\"S.basis():\\n\", S.basis())\n                return [False, 3]\n\n    return [True, 0]\n\n# Returns True if the matrix is considered secure, False otherwise\ndef algorithm_2(M, NUM_CELLS):\n    t = NUM_CELLS\n    s = 1\n\n    V = VectorSpace(F, t)\n    trail = [None, None]\n   
 test_next = False\n    I = range(0, s)\n    I_powerset = list(sage.misc.misc.powerset(I))[1:]\n    for I_s in I_powerset:\n        test_next = False\n        new_basis = []\n        for l in I_s:\n            new_basis.append(V.basis()[l])\n        IS = V.subspace(new_basis)\n        for i in range(s, t):\n            new_basis.append(V.basis()[i])\n        full_iota_space = V.subspace(new_basis)\n        for l in I_s:\n            v = V.basis()[l]\n            while True:\n                delta = IS.dimension()\n                v = M * v\n                IS = V.subspace(IS.basis() + [v])\n                if IS.dimension() == t or IS.intersection(full_iota_space) != IS:\n                    test_next = True\n                    break\n                if IS.dimension() <= delta:\n                    break\n            if test_next == True:\n                break\n        if test_next == True:\n            continue\n        return [False, [IS, I_s]]\n\n    return [True, None]\n\n# Returns True if the matrix is considered secure, False otherwise\ndef algorithm_3(M, NUM_CELLS):\n    t = NUM_CELLS\n    s = 1\n\n    V = VectorSpace(F, t)\n\n    l = 4*t\n    for r in range(2, l+1):\n        next_r = False\n        res_alg_2 = algorithm_2(M^r, t)\n        if res_alg_2[0] == False:\n            return [False, None]\n\n        # if res_alg_2[1] == None:\n        #     continue\n        # IS = res_alg_2[1][0]\n        # I_s = res_alg_2[1][1]\n        # for j in range(1, r):\n        #     IS = subspace_times_matrix(IS, M, t)\n        #     I_j = []\n        #     for i in range(0, s):\n        #         new_basis = []\n        #         for k in range(0, t):\n        #             if k != i:\n        #                 new_basis.append(V.basis()[k])\n        #         iota_space = V.subspace(new_basis)\n        #         if IS.intersection(iota_space) != iota_space:\n        #             single_iota_space = V.subspace([V.basis()[i]])\n        #             if 
IS.intersection(single_iota_space) == single_iota_space:\n        #                 I_j.append(i)\n        #             else:\n        #                 next_r = True\n        #                 break\n        #     if next_r == True:\n        #         break\n        # if next_r == True:\n        #     continue\n        # return [False, [IS, I_j, r]]\n    \n    return [True, None]\n\ndef generate_matrix(FIELD, FIELD_SIZE, NUM_CELLS):\n    if FIELD == 0:\n        mds_matrix = create_mds_gf2n(FIELD_SIZE, NUM_CELLS)\n        result_1 = algorithm_1(mds_matrix, NUM_CELLS)\n        result_2 = algorithm_2(mds_matrix, NUM_CELLS)\n        result_3 = algorithm_3(mds_matrix, NUM_CELLS)\n        while result_1[0] == False or result_2[0] == False or result_3[0] == False:\n            mds_matrix = create_mds_gf2n(FIELD_SIZE, NUM_CELLS) # Stay in GF(2^n) when regenerating\n            result_1 = algorithm_1(mds_matrix, NUM_CELLS)\n            result_2 = algorithm_2(mds_matrix, NUM_CELLS)\n            result_3 = algorithm_3(mds_matrix, NUM_CELLS)\n        return mds_matrix\n    elif FIELD == 1:\n        mds_matrix = create_mds_p(FIELD_SIZE, NUM_CELLS)\n        result_1 = algorithm_1(mds_matrix, NUM_CELLS)\n        result_2 = algorithm_2(mds_matrix, NUM_CELLS)\n        result_3 = algorithm_3(mds_matrix, NUM_CELLS)\n        while result_1[0] == False or result_2[0] == False or result_3[0] == False:\n            mds_matrix = create_mds_p(FIELD_SIZE, NUM_CELLS)\n            result_1 = algorithm_1(mds_matrix, NUM_CELLS)\n            result_2 = algorithm_2(mds_matrix, NUM_CELLS)\n            result_3 = algorithm_3(mds_matrix, NUM_CELLS)\n        return mds_matrix\n\ndef print_linear_layer(M, n, t):\n    print(\"n:\", n)\n    print(\"t:\", t)\n    print(\"N:\", (n * t))\n    print(\"Result Algorithm 1:\\n\", algorithm_1(M, NUM_CELLS))\n    print(\"Result Algorithm 2:\\n\", algorithm_2(M, NUM_CELLS))\n    print(\"Result Algorithm 3:\\n\", algorithm_3(M, NUM_CELLS))\n    if write_file == True:\n        FILE.write(\"n: \" + 
str(n) + \"\\n\")\n        FILE.write(\"t: \" + str(t) + \"\\n\")\n        FILE.write(\"N: \" + str(n * t) + \"\\n\")\n        FILE.write(\"Result Algorithm 1:\\n\" + str(algorithm_1(M, NUM_CELLS)) + \"\\n\")\n        FILE.write(\"Result Algorithm 2:\\n\" + str(algorithm_2(M, NUM_CELLS)) + \"\\n\")\n        FILE.write(\"Result Algorithm 3:\\n\" + str(algorithm_3(M, NUM_CELLS)) + \"\\n\")\n        \n    hex_length = int(ceil(float(n) / 4)) + 2 # +2 for \"0x\"\n    if FIELD == 0:\n        print(\"Modulus:\", PRIME_NUMBER)\n        if write_file == True:\n            FILE.write(\"Modulus: \" + str(PRIME_NUMBER) + \"\\n\")\n    elif FIELD == 1:\n        print(\"Prime number:\", hex(PRIME_NUMBER))\n        if write_file == True:\n            FILE.write(\"Prime number: \" + hex(PRIME_NUMBER) + \"\\n\")\n    matrix_string = \"[\"\n    for i in range(0, t):\n        if FIELD == 0:\n            matrix_string += str([\"{}\".format(entry.integer_representation()) for entry in M[i]])\n        elif FIELD == 1:\n            matrix_string += str([\"{}\".format(int(entry)) for entry in M[i]])\n        if i < (t-1):\n            matrix_string += \",\"\n\n    matrix_string += \"]\"\n    print(\"MDS matrix:\\n\", matrix_string)\n    if write_file == True:\n        FILE.write(\"MDS matrix:\\n\" + str(matrix_string))\n\n# Init\ninit_generator(FIELD, SBOX, FIELD_SIZE, NUM_CELLS, R_F_FIXED, R_P_FIXED)\n\n# Round constants\nround_constants = generate_constants(FIELD, FIELD_SIZE, NUM_CELLS, R_F_FIXED, R_P_FIXED, PRIME_NUMBER)\nprint_round_constants(round_constants, FIELD_SIZE, FIELD)\n\n# Matrix\nlinear_layer = generate_matrix(FIELD, FIELD_SIZE, NUM_CELLS)\nprint_linear_layer(linear_layer, FIELD_SIZE, NUM_CELLS)\n\nif write_file == True:\n    FILE.close()"
  },
  {
    "path": "packages/poseidon/sage/security_inequalities.sage",
    "content": "# Check security inequalities as specified in the Neptune specification\n\nM=128\nt=3\np=0xfffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2f\nRf=8\nRp=56\nR=Rf + Rp\na=5\n\n# this is defined in Section 5.5.1 https://eprint.iacr.org/2019/458.pdf\n# (a = 5 then C = 2)\nC = 2\n\n# https://spec.filecoin.io/#section-algorithms.crypto.poseidon.security-inequalities\nprint(\"(1) 2^M <= p^t\", 2^M <= p^t)\nprint(\"(2) M <= (⌊log2(p)⌋ - C)・(t + 1)\", M <= (floor(log(p, 2)).n() - C) * (t + 1))  # Section 5.5.1 https://eprint.iacr.org/2019/458.pdf\nprint(\"(3) R > M*log_a(2) + log_a(t)\", R > (M * log(2, a).n()) + log(t, a).n())\nprint(\"(4a) R > M*log_a(2) / 3\", R > (M * log(2, a).n()) / 3)\nprint(\"(4b) R > t - 1 + M*log_a(2) / (t + 1)\", R > t - 1 + (M * log(2, a).n() / (t + 1)))"
  },
  {
    "path": "packages/poseidon/src/k256_consts.rs",
    "content": "use ff::PrimeField;\nuse lazy_static::lazy_static;\npub use secq256k1::field::field_secp::FieldElement;\n\npub(crate) const NUM_FULL_ROUNDS: usize = 8;\npub(crate) const NUM_PARTIAL_ROUNDS: usize = 56;\n\nlazy_static! {\n    pub(crate) static ref MDS_MATRIX: [[FieldElement; 3]; 3] = [\n        [\n            \"92469348809186613947252340883344274339611751744959319352506666082431267346705\",\n            \"100938028378191533449096235266991198229563815869344032449592738345766724371160\",\n            \"77486311749148948616988559783475694076613010381924638436641318334458515006661\",\n        ]\n        .map(|y| FieldElement::from_str_vartime(y).unwrap()),\n        [\n            \"110352262556914082363749654180080464794716701228558638957603951672835474954408\",\n            \"27607004873684391669404739690441550149894883072418944161048725383958774443141\",\n            \"29671705769502357195586268679831947082918094959101307962374709600277676341325\",\n        ]\n        .map(|y| FieldElement::from_str_vartime(y).unwrap()),\n        [\n            \"77762103796341032609398578911486222569419103128091016773380377798879650228751\",\n            \"1753012011204964731088925227042671869111026487299375073665493007998674391999\",\n            \"70274477372358662369456035572054501601454406272695978931839980644925236550307\",\n        ]\n        .map(|y| FieldElement::from_str_vartime(y).unwrap()),\n    ];\n    pub(crate) static ref ROUND_CONSTANTS: [FieldElement; 192] = [\n        \"15180568604901803243989155929934437997245952775071395385994322939386074967328\",\n        \"98155933184944822056372510812105826951789406432246960633912199752807271851218\",\n        \"32585497418154084368870158853355239726261349829448673320273043226636389078017\",\n        \"66713968576806622579829258440960693099797917756640662361943757758980796487698\",\n        \"61296025743283504825054745787375839406507895949474930140819919915792438454216\",\n        
\"64548089412749542282115556935384382035671782881737715696939837764375912217104\",\n        \"108421562972909537718478936575770973463273651828765393113349044862621092658552\",\n        \"93957623861448681916560847065407918286434708744548934125771289238599801659600\",\n        \"31886767595881910145119755249133120645312710313371225820300496900248094187131\",\n        \"36511615103248888903406040506250394762206798360602726106046630438239169384653\",\n        \"21193239787133737740669439860809806837993750509086389566475677877580362491125\",\n        \"15159189447883181997488877417695825734356570617827322308691834229181804753656\",\n        \"19272373877630561389686073945290625876718814210798194797601715657476609730306\",\n        \"23132197996397121955527964729507651432518694856862854469217474256539272053037\",\n        \"9869753235007825662020275771343858285582964429845049469800863115040150206544\",\n        \"36536341316285671890133896506951910369952562161551585116256678375995315827743\",\n        \"62582239167707347777855528698896708360409296899261565735324151945083720570858\",\n        \"96597358901965097853721114962031771931271685249979807653919643952343419105640\",\n        \"99475971754252188104003224702005940217163363685728394033034788135108600073953\",\n        \"52080483875928847502018688921126796935417602445765802481027972679966274137987\",\n        \"101922748752417217354391348649359865075718358385248454632698502400961567227929\",\n        \"26980595292132221181330746499613907829041623688147011560382352796984836870749\",\n        \"7059991836806083192408106370472821784612460308866802565871813230060135266390\",\n        \"19329812920723038526370491239817117039289784665617181727933894076969997926129\",\n        \"65570620823578601926240439251563587376966657231502120214692324496443514623818\",\n        \"58403733332589349613112270854204921427257113546270812628317365115158685715742\",\n        
\"45021021211732634759643776743541935700591354899980928498981462362035961745443\",\n        \"313468157086800401026946312285365733155132234906935411743639256319782592571\",\n        \"101316949793045093761117346380310841944294663456931203380573537653884068660109\",\n        \"23683935571424619534194393788101669168630123784066421490798386323411538828592\",\n        \"45470730427236677197026094498490008082250264942279323465121581539984407294442\",\n        \"48141067373531800337373447278127981363951468257064369512416205750641258258193\",\n        \"42554919225040466028330117313396362347164995917041931400909482795914116747618\",\n        \"11551941832988244108260444347046942236051939264069344013774353630451796870907\",\n        \"60185799182545404739011626517355854847787627814101363386450657535504094743765\",\n        \"81823160578900678880708744457872721685515019032370491632046212317701226128393\",\n        \"7165646831054215773988859638722974820791178194871546344315162343128362695647\",\n        \"75289707601640398424243937567716657896680380639974371761136292031415717685949\",\n        \"7150842764562742184396161198129263121409208675362553300851082062734889620953\",\n        \"24380904705269761063866540342138412601132455197711667167747524315310027386226\",\n        \"9728986075621437350131504894128984146939551938810073671231633620616345344412\",\n        \"10579382052089733216628873394134968879891026686695240299956972154694558493896\",\n        \"8171994519466002143995890536756742287314780571933910736618431096190430536601\",\n        \"30420144259409274775063072923609924427757612539094840146996944760708902708570\",\n        \"63962155989812703023698320394024694856871261481871757094333286947755599007133\",\n        \"25280070391177856032024336895094721131222985610587247589336316615596140400436\",\n        \"15305872319988027006162258914083163651002306183917888172691618513722838997098\",\n        
\"51545603291342006705870081001071419395633279951502747769141857387796043104608\",\n        \"91109680756552587805002537489407348773333405839144382221272597323798859182191\",\n        \"72175452855185658158184807496160149169667221240389196996344579971523681433202\",\n        \"30361989157454953234766224747536334157139256334148153290771332849307087761025\",\n        \"38169634499980959088614671703639492517637815232220682121652135514105493936992\",\n        \"49591153263237620796156788742811547511792615129981565620486914545749079774827\",\n        \"47403873018260745456113868791119169163627014766514972598212646481717066065016\",\n        \"93989849689047144228924801010853106857960399638657695410345207191739048300111\",\n        \"10590240512802509131776989274411792739339398409955259174829387591089799115255\",\n        \"29183703335869638067547208413224742887766212046438654772943025958628178245227\",\n        \"4131650227136944095885036960767735080970262672750406866066212532739784907379\",\n        \"43395510588213653537697670365796375057855260611965666448183946252832290017444\",\n        \"95246795133940226900907730059125298420936467652619708443128629427116119621152\",\n        \"6012209003558496814495903476753006089125143165365334812097313083703216071080\",\n        \"26183233284429251459198269925441295879550203824094631575778521083706115817955\",\n        \"26058994700533582730528567480051558438548299522338811756875396252016497202713\",\n        \"107240485663145290290374164860301805857261278222480421976433215167444496066511\",\n        \"84412820763898503096477800002865877536719992495674955119188074297975154406587\",\n        \"52386303852182662900790700046090769869460994629239741773176060026198900130384\",\n        \"95746062835936512160025091603469309809932540674474329021370075533568318932379\",\n        \"22711334660013961010382652754865456251782349529764119853461446587583972054666\",\n        
\"16959835233095757670013367728627149851239789174357906293937455553277911805495\",\n        \"15116421110200928832147360650392633091242147433006813656250997138988179879750\",\n        \"107878787525302837370688492081178689950008165750500003692400517211520334656293\",\n        \"44210105558575948369921579518078229089923760124167628288943900602376706136436\",\n        \"90305995748749060889452130219544332384396626628663475498252761213618628372367\",\n        \"104941997925797907872686462815914481945432760720471803254797908465921520138024\",\n        \"100036855232527386145662094141100441220151775745916101660987264242446845728894\",\n        \"103285582836474146806606752170525767341430483568396209591447274936228630298052\",\n        \"82197692939371228160449741709034077803239992888716859217989995857278406253737\",\n        \"10040764964044995095453717286623030376397745892179877153575434454090155545240\",\n        \"27304226040425863042893623786832369758179176309230053449707879364285977952630\",\n        \"42627232144930751842910170221862679057276668485045156742021958050665662768084\",\n        \"76972394926916659428228833084621905890924612368412796262119501852346293848159\",\n        \"39796921406297542196667238133893946368231540421737718098283349901435707131075\",\n        \"14745047092916651495052563068083093689676472592445845983334785004125684263162\",\n        \"43421479365783318841667739359312715738029447177150400204380817518608837765863\",\n        \"107871536756946365977710326147511195471121248998432910212631960353348700694610\",\n        \"39505942243687894211614489736115535754716239859353578295470352855493707198619\",\n        \"59676442091621150164811367352362126934419932715789994860508056194143441226580\",\n        \"94470526851498636320865653968033227263836954414283116133326109455334870036212\",\n        \"15044796858044094866329855531761112645684343559112419720568996573556805975600\",\n        
\"67157729293641241473980125231288476062565688273917759533572275886277269201651\",\n        \"72911083146182058225942884942982388217243826839805061121973109250798137784134\",\n        \"102973386186208530972563015865701244407271836208547629083437627219683649557477\",\n        \"57485522356347377122696081086816661784954498123948319434326439317393351620564\",\n        \"23112275556906805064863694321486306070917598599342299357379251070160695202292\",\n        \"107618884362423342584703700349292347754139538760798319916678240538294838342400\",\n        \"83961260400031958812820990908241261093246389047082613562825737834833753517337\",\n        \"42726953951733266282750892844947149703751388034177248277671157488506520215317\",\n        \"39379570934119946602507737250800178347029772561352879974941214627084076473292\",\n        \"72203650529122342092280763801468513707870760755235719535090090101623606334441\",\n        \"13389660788942842724553143053013919883368472759564135119390935439369513690496\",\n        \"101745263541280877725997503552978999350831489463993178838531539940805924817361\",\n        \"76849182334465191824607032600721023780793694103840553871800174717760598910761\",\n        \"53896256317996838683363773836826653859512780625932638736752563553878867538095\",\n        \"24688792501674999263943657175455335814404948006469220532686392550824320454904\",\n        \"69132683906595821927803530074656979217668636557563597358799899743174233903941\",\n        \"2861982085506615225917620192781928414994576134281371548401916333754363567986\",\n        \"37311353286221616083824584705974993449107063556724405440534160586561042968316\",\n        \"83718085796857523832195255218519255973031752296424117786202083986118546906913\",\n        \"103633177691684814414226251117070754499104739002759424774194851613917008856616\",\n        \"84968411062305024171594435878414659990735518025357685215223731503921265946461\",\n        
\"41865099330909055069143724769818364262362915440371474104937435863183989905059\",\n        \"46156624920251322979270606388518884047396423747179340919303543598300663968593\",\n        \"64416327466854458915398302811825971539792429791049027619471115285308743811583\",\n        \"94942471312481523091911417289540395651121558150571128515230470225155209280585\",\n        \"109682618775735319282534546194470743032129102295907200313471041846112653687024\",\n        \"61531999191737540795124202104235799899980935519651613893518293245268304980543\",\n        \"17797352534596268622733030076742840951214734697361029060619245779495726996632\",\n        \"2323150752778983462106829021155031678603044899339819935981200101818542000989\",\n        \"16998018904363448507967526489917057882529252665835717172712095240271574074587\",\n        \"110634872413902251217040490777568744431854972018399530234679399294372694506842\",\n        \"31639545145649753705216327198217551838008233610574104460826956396569310697060\",\n        \"107845103764339268987018917144483935480716224058844669233389185480836000033760\",\n        \"46240297572174662698030819651333060930818959915797061274854448535534474175039\",\n        \"53065607123105696930220421963755520777674094852857308823370049733888025985616\",\n        \"16931881300947470270453776207625163368485560075525342751440832370220475352149\",\n        \"79254110800481916763656344422402393723573490114487681345184594841431920461089\",\n        \"42268569642639492314994307446626647824927989776691987788682655102426770655233\",\n        \"9749633319307409894058984489496091535125232227316143918000642155415596066903\",\n        \"57606597628648270579042266322415267200058617178318601782866227410456726724976\",\n        \"56082250485913115488341301630850455009935943641292622301678990296508134206571\",\n        \"17957245764842844288802777667800779232762688847417238921175068882796163705248\",\n        
\"94356229516444419318132697346021621194464273500135725160277725602263001442644\",\n        \"52536631226748676066386651084538409050048707922045928887930261833545619358914\",\n        \"107794922118166328243581272159394479176678094739027519706768813902978100436849\",\n        \"92984368734102511759118281503078145182557799453616537383408606074187034371208\",\n        \"59652553897137603386525572460411404882917571255327541516871354737502335133690\",\n        \"49012645345644326995052653072578673750516379655033952006121214224119466671764\",\n        \"79025576845143484310735291619293962982804358365838961371044480386743856799994\",\n        \"5437377540613244374799729812489584777222423091155743557287567155811057717409\",\n        \"100687592213090267900708728796310211082532607828753010566886681655775031329660\",\n        \"99074462968857696481475128596339544396152341206708424767062829343406495063192\",\n        \"67476872698289965626550204192782761730653024363949045140720348870736942130242\",\n        \"103307125141718054130755829916960708430672826104789971350239945481960770107890\",\n        \"74087383014714668160537499936376991041273055222568604413015844459913259357334\",\n        \"40924049099780965904051946083599822761993164889139026432053420731164022206736\",\n        \"32594924940463736641240515015317856157169105212942308502676422036626316673214\",\n        \"98990663138035055774586216545398054668349058134877723031747421828753359974443\",\n        \"55821766022768786066770462759796825978667805772707620106340033118519147871694\",\n        \"4001942224536365489828915551180230767516454384395893814399938353050969198154\",\n        \"30136373426492646221252150708518703998248891683881870400906269276900707426865\",\n        \"34943205764464817266133164313915763122699935186597909347522822673832250079664\",\n        \"27737330737483170511275902246508559278973986181590368845166383812793468814968\",\n        
\"96292398813565494438359802278723334615526914389306923046282571355958508916558\",\n        \"97147334956505986101750230325438660094766812949748276042292963837380833668274\",\n        \"24754519562402723848413674701792328284127274989440581643644298347747941238812\",\n        \"76111103490248669364580390783887028636436246028943665707064153006971943621186\",\n        \"33764090322658516047637223655525551979364055499647855895233821795694749902854\",\n        \"100536990630540359004783976190215234627391515555181073681294901127179838732969\",\n        \"55991997435987096996680289872758998763908676069536901375395297778729059185671\",\n        \"32860959903680178324832991459746631238726690317249285658471597044247794502256\",\n        \"70074816806976994707467706079200635184034023598764203123459335544110485476930\",\n        \"46213940675829172331116620705134022102338250410334045747023950259088879662946\",\n        \"77000624259024986585504351395777746568094934279771127334532438603183524642061\",\n        \"21719649576090832101273013788716623377603297433777804572370785470329817725170\",\n        \"29209622978540575483991966565508890231057362045066230397327380085945876837821\",\n        \"2445742484263083651472035320255578071935687960412507452207899496253120999364\",\n        \"86846812580007547526361109808384103509272544750564766849178767957571523649544\",\n        \"43025640639926253696325070988523609146060819319830735794100778654425057363895\",\n        \"108957662689228031021948854644435971168708642184764962508575441689859324862868\",\n        \"83891545396650121758556392255189778590486277180642660527000882403085396114823\",\n        \"42527013475786190604202451803064937203698027000671529418992521798122995373551\",\n        \"115180194520889678365425151865713593680657747284471744934804370945935167043862\",\n        \"28979598171177052880917135045920701144584888536299261666846302083645491369348\",\n        
\"68351312608110279019109436395199010412431777911149851157132527077210966351650\",\n        \"61759623963943995967580147094342313397376358019837276043205235302342147116585\",\n        \"80714625408576660514217469096827255752431164791924432025682445176737446783085\",\n        \"33048555646676368266608424610100449208381357250300222636992099726804869416731\",\n        \"50682223610667325089810868083131721901859473966415125289975106060759036109476\",\n        \"4271213571706787092297985431667190050727614825584809797590204884727103716461\",\n        \"101314046722405990971733763321368296660561930294000591067108115987088407142646\",\n        \"55565500177602146197728150332647093173137211885612327122425918553270191254877\",\n        \"65556764608648687291293889343854786421750589271167654521933267288313526422497\",\n        \"66877533773422945979143954094644173219583178339199697252673545117318799706373\",\n        \"30511098623357801425494143655999121699575856091238269679669864984061501512835\",\n        \"95900192636363991637086954986559552472749485926252879461208179855482821976623\",\n        \"37879946127489462347049192209554168578320892231852882971030128420645686965013\",\n        \"80479504274334215471057938992198620419540634144266821121799003865782336406529\",\n        \"13326262422954139210095783388743602482455840337093117010479445267213907605425\",\n        \"16047106134611124637925332265703907202779549268127518502853950466090054176776\",\n        \"71499356105233640605079063493613576024353801558965221134519779175477723594865\",\n        \"28438981751956157476540225984733791304599172905715743025543841239013139121102\",\n        \"56066317647068426981453448715118237747130321302262827290362392918472904421147\",\n    ]\n    .map(|y| FieldElement::from_str_vartime(y).unwrap());\n}\n"
  },
  {
    "path": "packages/poseidon/src/lib.rs",
    "content": "mod k256_consts;\npub mod poseidon_k256;\n\nuse ff::PrimeField;\n\npub struct PoseidonConstants<F: PrimeField> {\n    pub round_keys: Vec<F>,\n    pub mds_matrix: Vec<Vec<F>>,\n    pub num_full_rounds: usize,\n    pub num_partial_rounds: usize,\n}\n\nimpl<F: PrimeField> PoseidonConstants<F> {\n    pub fn new(\n        round_constants: Vec<F>,\n        mds_matrix: Vec<Vec<F>>,\n        num_full_rounds: usize,\n        num_partial_rounds: usize,\n    ) -> Self {\n        Self {\n            num_full_rounds,\n            num_partial_rounds,\n            mds_matrix,\n            round_keys: round_constants,\n        }\n    }\n}\n\npub struct Poseidon<F: PrimeField> {\n    pub state: [F; 3],\n    pub constants: PoseidonConstants<F>,\n    pub pos: usize,\n}\n\nimpl<F: PrimeField> Poseidon<F> {\n    pub fn new(constants: PoseidonConstants<F>) -> Self {\n        let state = [F::zero(); 3];\n        Self {\n            state,\n            constants,\n            pos: 0,\n        }\n    }\n\n    pub fn hash(&mut self, input: &[F; 2]) -> F {\n        // add the domain tag\n        let domain_tag = F::from(3); // 2^arity - 1\n        let input = [domain_tag, input[0], input[1]];\n\n        self.state = input;\n\n        let full_rounds_half = self.constants.num_full_rounds / 2;\n\n        // First half of full rounds\n        for _ in 0..full_rounds_half {\n            self.full_round();\n        }\n\n        // Partial rounds\n        for _ in 0..self.constants.num_partial_rounds {\n            self.partial_round();\n        }\n\n        // Second half of full rounds\n        for _ in 0..full_rounds_half {\n            self.full_round();\n        }\n\n        self.state[1]\n    }\n\n    fn add_constants(&mut self) {\n        // Add round constants\n        for i in 0..self.state.len() {\n            self.state[i] += self.constants.round_keys[i + self.pos];\n        }\n    }\n\n    // MDS matrix multiplication\n    fn matrix_mul(&mut self) {\n        let mut 
result = [F::zero(); 3];\n\n        for (i, val) in self.constants.mds_matrix.iter().enumerate() {\n            let mut tmp = F::zero();\n            for (j, element) in self.state.iter().enumerate() {\n                tmp += val[j] * element\n            }\n            result[i] = tmp;\n        }\n\n        self.state = result;\n    }\n\n    fn full_round(&mut self) {\n        let t = self.state.len();\n        self.add_constants();\n\n        // S-boxes\n        for i in 0..t {\n            self.state[i] = self.state[i].pow_vartime(&[5, 0, 0, 0]);\n        }\n\n        self.matrix_mul();\n\n        // Update the position of the round constants that are added\n        self.pos += self.state.len();\n    }\n\n    fn partial_round(&mut self) {\n        self.add_constants();\n\n        // S-box\n        self.state[0] = self.state[0].pow_vartime(&[5, 0, 0, 0]);\n\n        self.matrix_mul();\n\n        // Update the position of the round constants that are added\n        self.pos += self.state.len();\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use k256_consts::*;\n    use secq256k1::field::{field_secp, BaseField};\n\n    #[test]\n    fn test_k256() {\n        type Scalar = field_secp::FieldElement;\n        let input = [\n            Scalar::from_str_vartime(\"1234567\").unwrap(),\n            Scalar::from_str_vartime(\"109987\").unwrap(),\n        ];\n\n        let constants = PoseidonConstants::<FieldElement>::new(\n            ROUND_CONSTANTS.to_vec(),\n            vec![\n                MDS_MATRIX[0].to_vec(),\n                MDS_MATRIX[1].to_vec(),\n                MDS_MATRIX[2].to_vec(),\n            ],\n            NUM_FULL_ROUNDS,\n            NUM_PARTIAL_ROUNDS,\n        );\n        let mut poseidon = Poseidon::new(constants);\n\n        let digest = poseidon.hash(&input);\n\n        assert_eq!(\n            digest,\n            Scalar::from_bytes(&[\n                68, 120, 17, 40, 199, 247, 48, 80, 236, 89, 92, 44, 207, 217, 83, 62, 184, 
194,\n                173, 48, 66, 119, 238, 98, 175, 232, 78, 234, 75, 101, 229, 148\n            ])\n            .unwrap()\n        );\n    }\n\n    /*\n    #[test]\n    fn test_bls() {\n        use blstrs;\n        use neptune::poseidon::{\n            Poseidon as NeptunePoseidon, PoseidonConstants as NeptuneConstants,\n        };\n        use typenum::U2;\n\n        type Scalar = blstrs::Scalar;\n        let input = vec![Scalar::one(), Scalar::zero()];\n\n        // Generate constants using Neptune\n        let nep_constants = NeptuneConstants::<Scalar, U2>::new();\n        let mut net_poseidon = NeptunePoseidon::<Scalar>::new_with_preimage(&input, &nep_constants);\n        let np_digest = net_poseidon.hash();\n\n        // Plug constants generated by Neptune into our Poseidon impl\n        let constants = PoseidonConstants::<Scalar>::new(\n            nep_constants.round_constants.unwrap(),\n            nep_constants.mds_matrices.m,\n            nep_constants.full_rounds,\n            nep_constants.partial_rounds,\n        );\n\n        let mut poseidon = Poseidon::new(constants);\n        let digest = poseidon.hash(input);\n\n        // Check that the two implementations produce the same output\n        assert_eq!(digest, np_digest);\n    }\n     */\n}\n"
  },
  {
    "path": "packages/poseidon/src/poseidon_k256.rs",
    "content": "use crate::k256_consts::*;\nuse crate::{Poseidon, PoseidonConstants};\npub use secq256k1::field::field_secp::FieldElement;\n\n#[allow(dead_code)]\npub fn hash(input: &[FieldElement; 2]) -> FieldElement {\n    let constants = PoseidonConstants::<FieldElement>::new(\n        ROUND_CONSTANTS.to_vec(),\n        vec![\n            MDS_MATRIX[0].to_vec(),\n            MDS_MATRIX[1].to_vec(),\n            MDS_MATRIX[2].to_vec(),\n        ],\n        NUM_FULL_ROUNDS,\n        NUM_PARTIAL_ROUNDS,\n    );\n    let mut poseidon = Poseidon::new(constants);\n\n    poseidon.hash(input)\n}\n"
  },
  {
    "path": "packages/secq256k1/Cargo.toml",
    "content": "[package]\nname = \"secq256k1\"\nversion = \"0.1.0\"\nedition = \"2021\"\n\n# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html\n\n[dependencies]\nhex-literal = { version = \"0.3\" }\nprimeorder = { git = \"https://github.com/DanTehrani/elliptic-curves.git\", features = [\"serde\"]}\nnum-bigint-dig = \"^0.7\"\nserde = { version = \"1.0.106\", features = [\"derive\"] }\nrand_core = { version = \"0.6\", default-features = false }\nzeroize = { version = \"1\", default-features = false }\nk256 = \"0.11.6\"\nff = \"0.12.0\"\n"
  },
  {
    "path": "packages/secq256k1/LICENSE",
    "content": "MIT License\n\nCopyright (c) 2022 Ethereum Foundation\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE."
  },
  {
    "path": "packages/secq256k1/README.md",
    "content": "# Secq256k1\n\nwip"
  },
  {
    "path": "packages/secq256k1/sage/hashtocurve_params.sage",
    "content": "import sage.schemes.elliptic_curves.isogeny_small_degree as isd\nload(\"sqrt_ratio_params.sage\")\n\n# https://neuromancer.sk/std/secg/secp256k1\n\n# Secp256k1\np = 0xfffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2f\nFp = GF(p)\npA = Fp(0x0000000000000000000000000000000000000000000000000000000000000000)\npB = Fp(0x0000000000000000000000000000000000000000000000000000000000000007)\nEp = EllipticCurve(Fp, (pA, pB))\nG = Ep(0x79be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798, 0x483ada7726a3c4655da4fbfc0e1108a8fd17b448a68554199c47d08ffb10d4b8)\nEp.set_order(0xfffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd0364141 * 0x1)\n\n# Secq256k1\nq = 0xfffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd0364141\nFq = GF(q)\nqA = Fq(0x0000000000000000000000000000000000000000000000000000000000000000)\nqB = Fq(0x0000000000000000000000000000000000000000000000000000000000000007)\nEq = EllipticCurve(Fq, (qA, qB)) # secq256k1\n\n# https://eprint.iacr.org/2019/403.pdf p.26 A The isogeny maps\ndef find_iso(E):\n    for p_test in primes(60):\n        isos = [ i for i in isd.isogenies_prime_degree(E, p_test)\n            if i.codomain().j_invariant() not in (0, 1728) ]\n        if len(isos) > 0:\n            return isos[0].dual()\n    return None\n    \n\n\n# https://www.ietf.org/archive/id/draft-irtf-cfrg-hash-to-curve-16.html#sswu-z-code\n# Arguments:\n# - F, a field object\n# - A and B, the coefficients of the curve equation y^2 = x^3 + A * x + B\ndef find_z_sswu(F, A, B):\n    R.<xx> = F[]                       # Polynomial ring over F\n    g = xx^3 + F(A) * xx + F(B)        # y^2 = g(x) = x^3 + A * x + B\n    ctr = F.gen()\n    while True:\n        for Z_cand in (F(ctr), F(-ctr)):\n            # Criterion 1: Z is non-square in F.\n            if is_square(Z_cand):\n                continue\n            # Criterion 2: Z != -1 in F.\n            if Z_cand == F(-1):\n                continue\n            # Criterion 3: g(x) - Z 
is irreducible over F.\n            if not (g - Z_cand).is_irreducible():\n                continue\n            # Criterion 4: g(B / (Z * A)) is square in F.\n            if is_square(g(B / (Z_cand * A))):\n                return Z_cand\n        ctr += 1\n\n# Secp256k1\nisogeny_ep = find_iso(Ep)\n\nIsoEpA = isogeny_ep.domain().a4()\nIsoEpB = isogeny_ep.domain().a6()\n\nIsoEpZNatural = find_z_sswu(Fp, IsoEpA, IsoEpB)\nIsoEpZ = Integer(IsoEpZNatural) - p\n\n(c1, c2, c3, c4, c5, c6, c7) = sqrt_ratio_params(p, IsoEpZ)\n\nprint(\"Secp256k1\")\nprint(\"Isogeny A:\", isogeny_ep.domain().a4())\nprint(\"Isogeny B:\", isogeny_ep.domain().a6())\nprint(\"Constants:\", [k for k in isogeny_ep.rational_maps()])\nprint(\"Z\", IsoEpZNatural, \"=\", IsoEpZ)\nprint(\"\\nsqrt_ratio constants\")\nprint(\"c1:\", c1)\nprint(\"c2:\", c2)\nprint(\"c3:\", c3)\nprint(\"c4:\", c4)\nprint(\"c5:\", c5)\nprint(\"c6:\", c6)\nprint(\"c7:\", c7)\n\n# Secq256k1\n\nisogeny_eq = find_iso(Eq)\n\nIsoEqA = isogeny_eq.domain().a4()\nIsoEqB = isogeny_eq.domain().a6()\n\nIsoEqZNatural = find_z_sswu(Fq, IsoEqA, IsoEqB)\nIsoEqZ = Integer(IsoEqZNatural) - q\n(c1, c2, c3, c4, c5, c6, c7) = sqrt_ratio_params(q, IsoEqZ)\n\nprint(\"\\nSecq256k1\")\nprint(\"\\nIsogeny A:\", isogeny_eq.domain().a4())\nprint(\"Isogeny B:\", isogeny_eq.domain().a6())\nprint(\"Constants:\", [k for k in isogeny_eq.rational_maps()])\n\nprint(\"Z:\", IsoEqZNatural, \"=\", IsoEqZ)\n\nprint(\"\\nsqrt_ratio constants\")\nprint(\"c1:\", c1)\nprint(\"c2:\", c2)\nprint(\"c3:\", c3)\nprint(\"c4:\", c4)\nprint(\"c5:\", c5)\nprint(\"c6:\", c6)\nprint(\"c7:\", c7)"
  },
  {
    "path": "packages/secq256k1/sage/sqrt_ratio_params.sage",
    "content": "# https://www.ietf.org/archive/id/draft-irtf-cfrg-hash-to-curve-16.html#name-sqrt_ratio-for-any-field\ndef sqrt_ratio_params(p, z) -> tuple([int, int, int, int, int, int, int]):\n    # c1 = largest i such that 2^i divides p - 1\n    for i in range(256):\n        if ((p - 1) % (2^i) == 0):\n            c1 = i\n    c2 = (p - 1) / 2^c1  # odd part of p - 1\n    c3 = (c2 - 1) / 2\n    c4 = 2^c1 - 1\n    c5 = 2^(c1 - 1)\n    c6 = z.powermod(c2, p)\n    c7 = z.powermod((c2 + 1) / 2, p)\n    return (c1, c2, c3, c4, c5, c6, c7)\n"
  },
  {
    "path": "packages/secq256k1/sage/sswu_generic.sage",
    "content": "#!/usr/bin/sage\n# vim: syntax=python\n\nimport sys\ntry:\n    from sagelib.common import CMOV\n    from sagelib.generic_map import GenericMap\n    from sagelib.z_selection import find_z_sswu\nexcept ImportError:\n    sys.exit(\"Error loading preprocessed sage files. Try running `make clean pyfiles`\")\n\nclass GenericSSWU(GenericMap):\n    def __init__(self, F, A, B):\n        self.name = \"SSWU\"\n        self.F = F\n        self.A = F(A)\n        self.B = F(B)\n        if self.A == 0:\n            raise ValueError(\"S-SWU requires A != 0\")\n        if self.B == 0:\n            raise ValueError(\"S-SWU requires B != 0\")\n        self.Z = find_z_sswu(F, F(A), F(B))\n        self.E = EllipticCurve(F, [F(A), F(B)])\n\n        # constants for straight-line impl\n        self.c1 = -F(B) / F(A)\n        self.c2 = -F(1) / self.Z\n\n        # values at which the map is undefined\n        # i.e., when Z^2 * u^4 + Z * u^2 = 0\n        # which is at u = 0 and when Z * u^2 = -1\n        self.undefs = [F(0)]\n        if self.c2.is_square():\n            ex = self.c2.sqrt()\n            self.undefs += [ex, -ex]\n\n    def not_straight_line(self, u):\n        inv0 = self.inv0\n        is_square = self.is_square\n        sgn0 = self.sgn0\n        sqrt = self.sqrt\n        u = self.F(u)\n        A = self.A\n        B = self.B\n        Z = self.Z\n\n        tv1 = inv0(Z^2 * u^4 + Z * u^2)\n        x1 = (-B / A) * (1 + tv1)\n        if tv1 == 0:\n            x1 = B / (Z * A)\n        gx1 = x1^3 + A * x1 + B\n        x2 = Z * u^2 * x1\n        gx2 = x2^3 + A * x2 + B\n        if is_square(gx1):\n            x = x1\n            y = sqrt(gx1)\n        else:\n            x = x2\n            y = sqrt(gx2)\n        if sgn0(u) != sgn0(y):\n            y = -y\n        return (x, y)\n\n    def straight_line(self, u):\n        inv0 = self.inv0\n        is_square = self.is_square\n        sgn0 = self.sgn0\n        sqrt = self.sqrt\n        u = self.F(u)\n        A = self.A\n 
       B = self.B\n        Z = self.Z\n        c1 = self.c1\n        c2 = self.c2\n\n        tv1 = Z * u^2\n        tv2 = tv1^2\n        x1 = tv1 + tv2\n        x1 = inv0(x1)\n        e1 = x1 == 0\n        x1 = x1 + 1\n        x1 = CMOV(x1, c2, e1)    # If (tv1 + tv2) == 0, set x1 = -1 / Z\n        x1 = x1 * c1      # x1 = (-B / A) * (1 + (1 / (Z^2 * u^4 + Z * u^2)))\n        gx1 = x1^2\n        gx1 = gx1 + A\n        gx1 = gx1 * x1\n        gx1 = gx1 + B             # gx1 = g(x1) = x1^3 + A * x1 + B\n        x2 = tv1 * x1            # x2 = Z * u^2 * x1\n        tv2 = tv1 * tv2\n        gx2 = gx1 * tv2           # gx2 = (Z * u^2)^3 * gx1\n        e2 = is_square(gx1)\n        x = CMOV(x2, x1, e2)    # If is_square(gx1), x = x1, else x = x2\n        y2 = CMOV(gx2, gx1, e2)  # If is_square(gx1), y2 = gx1, else y2 = gx2\n        y = sqrt(y2)\n        e3 = sgn0(u) == sgn0(y)  # Fix sign of y\n        y = CMOV(-y, y, e3)\n        return (x, y)\n\np = 2^256 - 2^32 - 2^9 - 2^8 - 2^7 - 2^6 - 2^4 - 1\nF = GF(p)\nA = F(0)\nB = F(7)\n# Ap and Bp define isogenous curve y^2 = x^3 + Ap * x + Bp\nAp = F(0x3f8731abdd661adca08a5558f0f5d272e953d363cb6f0e5d405447c01a444533)\nBp = F(1771)\n\nGenericSSWU(F, Ap, Bp)"
  },
  {
    "path": "packages/secq256k1/src/affine.rs",
    "content": "use std::iter::Sum;\nuse std::ops::{Add, Mul, MulAssign, Neg, Sub};\nuse std::ops::{AddAssign, SubAssign};\n\nuse super::{ProjectivePoint, Secq256K1};\nuse crate::field::BaseField;\nuse crate::hashtocurve::hash_to_curve;\nuse crate::{EncodedPoint, Scalar};\nuse k256::elliptic_curve::subtle::Choice;\npub use primeorder::elliptic_curve::group::Group;\nuse primeorder::elliptic_curve::sec1::FromEncodedPoint;\nuse primeorder::elliptic_curve::sec1::ToEncodedPoint;\nuse primeorder::elliptic_curve::subtle::CtOption;\n\npub type AffinePointCore = primeorder::AffinePoint<Secq256K1>;\n\n#[derive(Copy, Clone, Debug, Default, Eq, PartialEq)]\npub struct AffinePoint(pub AffinePointCore);\n\nimpl Mul<Scalar> for AffinePoint {\n    type Output = AffinePoint;\n\n    fn mul(self, rhs: Scalar) -> Self::Output {\n        AffinePoint((self.0 * rhs).into())\n    }\n}\n\nimpl Mul<Scalar> for &AffinePoint {\n    type Output = AffinePoint;\n\n    fn mul(self, rhs: Scalar) -> Self::Output {\n        AffinePoint((self.0 * rhs).into())\n    }\n}\n\nimpl Mul<&Scalar> for AffinePoint {\n    type Output = AffinePoint;\n\n    fn mul(self, rhs: &Scalar) -> Self::Output {\n        AffinePoint((self.0 * *rhs).into())\n    }\n}\n\nimpl MulAssign<&Scalar> for AffinePoint {\n    fn mul_assign(&mut self, rhs: &Scalar) {\n        *self = *self * rhs;\n    }\n}\n\nimpl MulAssign<Scalar> for AffinePoint {\n    fn mul_assign(&mut self, rhs: Scalar) {\n        *self = *self * rhs;\n    }\n}\n\nimpl Add<AffinePoint> for AffinePoint {\n    type Output = AffinePoint;\n\n    fn add(self, rhs: AffinePoint) -> Self::Output {\n        AffinePoint((ProjectivePoint::from(self.0) + ProjectivePoint::from(rhs.0)).into())\n    }\n}\n\nimpl AddAssign<AffinePoint> for AffinePoint {\n    fn add_assign(&mut self, rhs: AffinePoint) {\n        *self = *self + rhs;\n    }\n}\n\nimpl Sub<AffinePoint> for AffinePoint {\n    type Output = AffinePoint;\n\n    fn sub(self, rhs: AffinePoint) -> Self::Output {\n        
AffinePoint((ProjectivePoint::from(self.0) - rhs.0).into())\n    }\n}\n\nimpl SubAssign<AffinePoint> for AffinePoint {\n    fn sub_assign(&mut self, rhs: AffinePoint) {\n        *self = *self - rhs;\n    }\n}\n\nuse crate::FieldElement;\n\nimpl AffinePoint {\n    pub const fn identity() -> Self {\n        AffinePoint(AffinePointCore::IDENTITY)\n    }\n\n    pub const fn generator() -> Self {\n        AffinePoint(AffinePointCore::GENERATOR)\n    }\n\n    // The isogeny constants are outputs of hashtocurve_params.sage\n\n    pub const fn iso_a() -> FieldElement {\n        // 3642995984045157452672683439396299070953881827175886364060394186787010798372\n        FieldElement([\n            13132896970247110882,\n            16600479225705962415,\n            2267171952686981219,\n            10308142380130580469,\n            0,\n        ])\n    }\n\n    pub const fn iso_b() -> FieldElement {\n        // 1771\n        FieldElement([18134843254882603861, 9821735204204806823, 2250, 0, 0])\n    }\n\n    pub const fn iso_z() -> FieldElement {\n        // -14\n        FieldElement([\n            4419027667721769679,\n            17311539568058655616,\n            18446744073709551596,\n            18446744073709551615,\n            0,\n        ])\n    }\n\n    pub const fn iso_constants() -> [FieldElement; 13] {\n        [\n            FieldElement::from_raw([\n                7679007869575068054,\n                9522933797269734319,\n                16397105843297379213,\n                10248191152060862008,\n            ]),\n            FieldElement::from_raw([\n                9826996953646961554,\n                15182850926035153421,\n                14578491762904662818,\n                12647934416601614380,\n            ]),\n            FieldElement::from_raw([\n                12837744973953074055,\n                3022921441994356503,\n                9226076221592167090,\n                5322610924144458968,\n            ]),\n            
FieldElement::from_raw([\n                7679007869575068113,\n                9522933797269734319,\n                16397105843297379213,\n                10248191152060862008,\n            ]),\n            FieldElement::from_raw([\n                5509687591411919004,\n                593833991126057235,\n                2079217350175104065,\n                3150945307157219731,\n            ]),\n            FieldElement::from_raw([\n                10055942181862970998,\n                5902098865151897053,\n                9296385024764340435,\n                14583286435933530837,\n            ]),\n            FieldElement::from_raw([\n                1018159320366879645,\n                7658288605871115257,\n                17763531330238827481,\n                9564978408590137874,\n            ]),\n            FieldElement::from_raw([\n                14136870513678256585,\n                7591425463017576710,\n                7289245881452331409,\n                6323967208300807190,\n            ]),\n            FieldElement::from_raw([\n                14802773332216597422,\n                16078857340678580677,\n                3084372689655359971,\n                1069495981486797935,\n            ]),\n            FieldElement::from_raw([\n                2553960894281893207,\n                13252224180066972444,\n                13664254869414482677,\n                11614616639002310276,\n            ]),\n            FieldElement::from_raw([\n                17487903423972654314,\n                10114123023543861660,\n                12342198062117431905,\n                4726417960735829596,\n            ]),\n            FieldElement::from_raw([\n                2523398215118668000,\n                9249176628478019873,\n                9442411000583469692,\n                6856371160381489280,\n            ]),\n            FieldElement::from_raw([\n                13822214165235121741,\n                13451932020343611451,\n                
18446744073709551614,\n                18446744073709551615,\n            ]),\n        ]\n    }\n\n    pub fn compress(&self) -> EncodedPoint {\n        self.0.to_encoded_point(true)\n    }\n\n    pub fn decompress(bytes: EncodedPoint) -> CtOption<Self> {\n        AffinePointCore::from_encoded_point(&bytes).map(AffinePoint)\n    }\n\n    pub fn from_uniform_bytes(bytes: &[u8; 128]) -> Self {\n        let u1 = FieldElement::from_bytes_wide(bytes[0..64].try_into().unwrap());\n        let u2 = FieldElement::from_bytes_wide(bytes[64..128].try_into().unwrap());\n\n        let (p1_coords, p2_coords) = hash_to_curve(\n            u1,\n            u2,\n            Self::iso_a(),\n            Self::iso_b(),\n            Self::iso_z(),\n            Self::iso_constants(),\n        );\n        let p1 = AffinePoint::decompress(EncodedPoint::from_affine_coordinates(\n            &p1_coords.0.to_be_bytes().into(),\n            &p1_coords.1.to_be_bytes().into(),\n            false,\n        ))\n        .unwrap();\n\n        let p2 = AffinePoint::decompress(EncodedPoint::from_affine_coordinates(\n            &p2_coords.0.to_be_bytes().into(),\n            &p2_coords.1.to_be_bytes().into(),\n            false,\n        ))\n        .unwrap();\n\n        p1 + p2\n    }\n}\n\nimpl From<ProjectivePoint> for AffinePoint {\n    fn from(p: ProjectivePoint) -> Self {\n        AffinePoint(p.into())\n    }\n}\n\nimpl Neg for AffinePoint {\n    type Output = AffinePoint;\n\n    fn neg(self) -> Self::Output {\n        AffinePoint(self.0.neg())\n    }\n}\n\nimpl Add<&AffinePoint> for AffinePoint {\n    type Output = AffinePoint;\n\n    fn add(self, rhs: &AffinePoint) -> Self::Output {\n        self + *rhs\n    }\n}\n\nimpl AddAssign<&AffinePoint> for AffinePoint {\n    fn add_assign(&mut self, rhs: &AffinePoint) {\n        *self = *self + *rhs;\n    }\n}\n\nimpl Sub<&AffinePoint> for AffinePoint {\n    type Output = AffinePoint;\n\n    fn sub(self, rhs: &AffinePoint) -> Self::Output {\n        
self - *rhs\n    }\n}\n\nimpl SubAssign<&AffinePoint> for AffinePoint {\n    fn sub_assign(&mut self, rhs: &AffinePoint) {\n        *self = *self - *rhs;\n    }\n}\n\nimpl Sum for AffinePoint {\n    fn sum<I: Iterator<Item = Self>>(iter: I) -> Self {\n        iter.fold(AffinePoint::identity(), |acc, x| acc + x)\n    }\n}\n\nimpl<'a> Sum<&'a AffinePoint> for AffinePoint {\n    fn sum<I: Iterator<Item = &'a AffinePoint>>(iter: I) -> Self {\n        iter.fold(AffinePoint::identity(), |acc, x| acc + x)\n    }\n}\n\nimpl Group for AffinePoint {\n    type Scalar = Scalar;\n\n    fn random(rng: impl rand_core::RngCore) -> Self {\n        AffinePoint(AffinePointCore::from(ProjectivePoint::random(rng)))\n    }\n\n    fn generator() -> Self {\n        AffinePoint::generator()\n    }\n\n    fn identity() -> Self {\n        AffinePoint::identity()\n    }\n\n    fn is_identity(&self) -> Choice {\n        self.0.is_identity()\n    }\n\n    fn double(&self) -> Self {\n        self.add(self)\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_from_uniform_bytes() {\n        // Case 1\n        let pseudo_bytes = [1u8; 128];\n        let p1 = AffinePoint::from_uniform_bytes(&pseudo_bytes);\n\n        let expected_point_1 = AffinePoint::decompress(\n            EncodedPoint::from_bytes(&[\n                3, 24, 36, 60, 213, 183, 10, 225, 197, 211, 160, 231, 226, 115, 43, 236, 156, 4,\n                195, 217, 173, 140, 136, 199, 137, 204, 135, 28, 56, 55, 158, 90, 42,\n            ])\n            .unwrap(),\n        )\n        .unwrap();\n\n        assert_eq!(p1, expected_point_1);\n\n        // Case 2\n        let pseudo_bytes = [255u8; 128];\n        let p2 = AffinePoint::from_uniform_bytes(&pseudo_bytes);\n        let expected_point_2 = AffinePoint::decompress(\n            EncodedPoint::from_bytes(&[\n                2, 224, 201, 211, 109, 246, 2, 231, 80, 53, 75, 7, 198, 101, 138, 177, 41, 203, 12,\n                215, 7, 190, 221, 177, 146, 
53, 58, 202, 32, 229, 192, 136, 229,\n            ])\n            .unwrap(),\n        )\n        .unwrap();\n\n        assert_eq!(p2, expected_point_2);\n    }\n}\n"
  },
  {
    "path": "packages/secq256k1/src/field/field_secp.rs",
    "content": "//! This module provides an implementation of the secq256k1's scalar field $\\mathbb{F}_q$\n//! where `q = 0xfffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd0364141`\n//! This is an adaptation of code from the k256 crate\n//! We modify various constants (MODULUS, R, R2, etc.) to appropriate values for secq256k1 and update tests\n#![allow(clippy::all)]\nuse crate::FieldBytes;\nuse core::borrow::Borrow;\nuse core::convert::TryFrom;\nuse core::fmt;\nuse core::iter::{Product, Sum};\nuse core::ops::{Add, AddAssign, Mul, MulAssign, Neg, Sub, SubAssign};\nuse hex_literal::hex;\nuse num_bigint_dig::{BigUint, ModInverse};\nuse primeorder::elliptic_curve::subtle::{\n    Choice, ConditionallySelectable, ConstantTimeEq, CtOption,\n};\nuse primeorder::{Field, PrimeField};\nuse rand_core::{CryptoRng, RngCore};\nuse serde::de::Visitor;\nuse serde::{Deserialize, Serialize};\nuse zeroize::Zeroize;\n\n// use crate::util::{adc, mac, sbb};\n/// Compute a + b + carry, returning the result and the new carry over.\n#[inline(always)]\npub const fn adc(a: u64, b: u64, carry: u64) -> (u64, u64) {\n    let ret = (a as u128) + (b as u128) + (carry as u128);\n    (ret as u64, (ret >> 64) as u64)\n}\n\n/// Compute a - (b + borrow), returning the result and the new borrow.\n#[inline(always)]\npub const fn sbb(a: u64, b: u64, borrow: u64) -> (u64, u64) {\n    let ret = (a as u128).wrapping_sub((b as u128) + ((borrow >> 63) as u128));\n    (ret as u64, (ret >> 64) as u64)\n}\n\n/// Compute a + (b * c) + carry, returning the result and the new carry over.\n#[inline(always)]\npub const fn mac(a: u64, b: u64, c: u64, carry: u64) -> (u64, u64) {\n    let ret = (a as u128) + ((b as u128) * (c as u128)) + (carry as u128);\n    (ret as u64, (ret >> 64) as u64)\n}\n\nmacro_rules! 
impl_add_binop_specify_output {\n    ($lhs:ident, $rhs:ident, $output:ident) => {\n        impl<'b> Add<&'b $rhs> for $lhs {\n            type Output = $output;\n\n            #[inline]\n            fn add(self, rhs: &'b $rhs) -> $output {\n                &self + rhs\n            }\n        }\n\n        impl<'a> Add<$rhs> for &'a $lhs {\n            type Output = $output;\n\n            #[inline]\n            fn add(self, rhs: $rhs) -> $output {\n                self + &rhs\n            }\n        }\n\n        impl Add<$rhs> for $lhs {\n            type Output = $output;\n\n            #[inline]\n            fn add(self, rhs: $rhs) -> $output {\n                &self + &rhs\n            }\n        }\n    };\n}\n\nmacro_rules! impl_sub_binop_specify_output {\n    ($lhs:ident, $rhs:ident, $output:ident) => {\n        impl<'b> Sub<&'b $rhs> for $lhs {\n            type Output = $output;\n\n            #[inline]\n            fn sub(self, rhs: &'b $rhs) -> $output {\n                &self - rhs\n            }\n        }\n\n        impl<'a> Sub<$rhs> for &'a $lhs {\n            type Output = $output;\n\n            #[inline]\n            fn sub(self, rhs: $rhs) -> $output {\n                self - &rhs\n            }\n        }\n\n        impl Sub<$rhs> for $lhs {\n            type Output = $output;\n\n            #[inline]\n            fn sub(self, rhs: $rhs) -> $output {\n                &self - &rhs\n            }\n        }\n    };\n}\n\nmacro_rules! impl_binops_additive_specify_output {\n    ($lhs:ident, $rhs:ident, $output:ident) => {\n        impl_add_binop_specify_output!($lhs, $rhs, $output);\n        impl_sub_binop_specify_output!($lhs, $rhs, $output);\n    };\n}\n\nmacro_rules! 
impl_binops_multiplicative_mixed {\n    ($lhs:ident, $rhs:ident, $output:ident) => {\n        impl<'b> Mul<&'b $rhs> for $lhs {\n            type Output = $output;\n\n            #[inline]\n            fn mul(self, rhs: &'b $rhs) -> $output {\n                &self * rhs\n            }\n        }\n\n        impl<'a> Mul<$rhs> for &'a $lhs {\n            type Output = $output;\n\n            #[inline]\n            fn mul(self, rhs: $rhs) -> $output {\n                self * &rhs\n            }\n        }\n\n        impl Mul<$rhs> for $lhs {\n            type Output = $output;\n\n            #[inline]\n            fn mul(self, rhs: $rhs) -> $output {\n                &self * &rhs\n            }\n        }\n    };\n}\n\nmacro_rules! impl_binops_additive {\n    ($lhs:ident, $rhs:ident) => {\n        impl_binops_additive_specify_output!($lhs, $rhs, $lhs);\n\n        impl SubAssign<$rhs> for $lhs {\n            #[inline]\n            fn sub_assign(&mut self, rhs: $rhs) {\n                *self = &*self - &rhs;\n            }\n        }\n\n        impl AddAssign<$rhs> for $lhs {\n            #[inline]\n            fn add_assign(&mut self, rhs: $rhs) {\n                *self = &*self + &rhs;\n            }\n        }\n\n        impl<'b> SubAssign<&'b $rhs> for $lhs {\n            #[inline]\n            fn sub_assign(&mut self, rhs: &'b $rhs) {\n                *self = &*self - rhs;\n            }\n        }\n\n        impl<'b> AddAssign<&'b $rhs> for $lhs {\n            #[inline]\n            fn add_assign(&mut self, rhs: &'b $rhs) {\n                *self = &*self + rhs;\n            }\n        }\n    };\n}\n\nmacro_rules! 
impl_binops_multiplicative {\n    ($lhs:ident, $rhs:ident) => {\n        impl_binops_multiplicative_mixed!($lhs, $rhs, $lhs);\n\n        impl MulAssign<$rhs> for $lhs {\n            #[inline]\n            fn mul_assign(&mut self, rhs: $rhs) {\n                *self = &*self * &rhs;\n            }\n        }\n\n        impl<'b> MulAssign<&'b $rhs> for $lhs {\n            #[inline]\n            fn mul_assign(&mut self, rhs: &'b $rhs) {\n                *self = &*self * rhs;\n            }\n        }\n    };\n}\n\n/// Represents an element of the scalar field $\\mathbb{F}_q$ of the secq256k1 elliptic\n/// curve construction.\n// The internal representation of this type is four 64-bit unsigned\n// integers in little-endian order. `FieldElement` values are always in\n// Montgomery form; i.e., FieldElement(a) = aR mod q, with R = 2^256.\n#[derive(Clone, Copy, Eq)]\npub struct FieldElement(pub(crate) [u64; 5]);\n\nuse serde::ser::SerializeSeq;\nuse serde::{Deserializer, Serializer};\n\nuse super::BaseField;\nuse super::SqrtRatio;\n\nimpl SqrtRatio for FieldElement {\n    // The constants are outputs of hashtocurve_params.sage\n\n    const C1: u64 = 1;\n\n    //  28948022309329048855892746252171976963317496166410141009864396001977208667915\n    const C3: Self = FieldElement([\n        18446744069414583343,\n        18446744073709551615,\n        18446744073709551615,\n        4611686018427387903,\n        0,\n    ]);\n\n    const C4: Self = Self::ONE;\n    const C5: Self = Self::ONE;\n\n    // 115792089237316195423570985008687907853269984665640564039457584007908834671662\n    const C6: Self = FieldElement([\n        18446744065119615070,\n        18446744073709551615,\n        18446744073709551615,\n        18446744073709551615,\n        0,\n    ]);\n\n    // 22612019078283109002402354608917265420620653587239490778472842791191070919257\n    const C7: Self = FieldElement([\n        10660218062043021626,\n        12685808213265501903,\n        5194980534593283555,\n        
4353995932822220413,\n        0,\n    ]);\n\n    fn sqrt_ratio(u: &Self, v: &Self) -> (Choice, Self) {\n        let mut tv1 = Self::C6;\n        let mut tv2 = v.pow_by_self(&Self::C4);\n        let mut tv3 = tv2.pow_by_self(&Self::from(2));\n        tv3 = tv3 * v;\n        let mut tv5 = u * tv3;\n        tv5 = tv5.pow_by_self(&Self::C3);\n        tv5 = tv5 * tv2;\n        tv2 = tv5 * v;\n        tv3 = tv5 * u;\n        let mut tv4 = tv3 * tv2;\n        tv5 = tv4.pow_by_self(&Self::C5);\n        let is_qr = tv5.ct_eq(&Self::one());\n        tv2 = tv3 * Self::C7;\n        tv5 = tv4 * tv1;\n        tv3 = Self::conditional_select(&tv2, &tv3, is_qr);\n        tv4 = Self::conditional_select(&tv5, &tv4, is_qr);\n\n        let two = Self::from(2);\n        for i in (2..(Self::C1 + 1)).rev() {\n            let i = Self::from(i);\n            let mut tv5 = i - two;\n            tv5 = two.pow_by_self(&tv5);\n            tv5 = tv4.pow_by_self(&tv5);\n            let e1 = tv5.ct_eq(&Self::one());\n            tv2 = tv3 * tv1;\n            tv1 = tv1 * tv1;\n            tv5 = tv4 * tv1;\n            tv3 = Self::conditional_select(&tv2, &tv3, e1);\n            tv4 = Self::conditional_select(&tv5, &tv4, e1);\n        }\n\n        (is_qr, tv3)\n    }\n}\n\nimpl Serialize for FieldElement {\n    fn serialize<S: Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> {\n        // Serialize the raw five-limb (Montgomery-form) representation as 40\n        // little-endian bytes.\n        let values: Vec<u8> = self.0.iter().map(|v| v.to_le_bytes()).flatten().collect();\n        let mut seq = serializer.serialize_seq(Some(values.len()))?;\n        for val in values.iter() {\n            seq.serialize_element(val)?;\n        }\n\n        seq.end()\n    }\n}\n\nstruct U64ArrayVisitor;\n\nimpl<'de> Visitor<'de> for U64ArrayVisitor {\n    type Value = FieldElement;\n\n    fn expecting(&self, formatter: &mut std::fmt::Formatter) -> std::fmt::Result {\n        formatter.write_str(\"a sequence of 40 bytes (five little-endian u64 limbs)\")\n    }\n\n    fn visit_seq<A>(self, mut seq: A) -> Result<Self::Value, A::Error>\n    where\n        A: serde::de::SeqAccess<'de>,\n    {\n        // `serialize` emits all five 64-bit limbs of the internal\n        // (Montgomery-form) representation as 40 little-endian bytes, so read\n        // the limbs back verbatim rather than re-converting via `from_raw`.\n        let mut result = [0u64; 5];\n\n        for i in 0..5 {\n            let mut val: u64 = 0;\n            for j in 0..8 {\n                let byte = seq\n                    .next_element::<u8>()?\n                    .ok_or_else(|| <A::Error as serde::de::Error>::custom(\"unexpected end of sequence\"))?;\n                val |= (byte as u64) << (8 * j);\n            }\n            result[i] = val;\n        }\n\n        Ok(FieldElement(result))\n    }\n}\n\nimpl<'de> Deserialize<'de> for FieldElement {\n    fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>\n    where\n        D: Deserializer<'de>,\n    {\n        deserializer.deserialize_seq(U64ArrayVisitor)\n    }\n}\n\nimpl fmt::Debug for FieldElement {\n    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {\n        let tmp = self.to_bytes();\n        write!(f, \"0x\")?;\n        for &b in tmp.iter().rev() {\n            write!(f, \"{:02x}\", b)?;\n        }\n        Ok(())\n    }\n}\n\nimpl From<u64> for FieldElement {\n    fn from(val: u64) -> FieldElement {\n        FieldElement([val, 0, 0, 0, 0]) * R2\n    }\n}\n\nimpl Field for FieldElement {\n    fn random(mut rng: impl RngCore) -> Self {\n        let mut bytes = FieldBytes::default();\n\n        loop {\n            rng.fill_bytes(&mut bytes);\n            if let Some(fe) = Self::from_bytes(&bytes.into()).into() {\n                return fe;\n            }\n        }\n    }\n\n    fn zero() -> Self {\n        FieldElement::zero()\n    }\n\n    fn one() -> Self {\n        FieldElement::one()\n    }\n\n    fn is_zero(&self) -> Choice {\n        self.ct_eq(&Self::ZERO)\n    }\n\n    fn square(&self) -> Self {\n        self.square()\n    }\n\n    fn double(&self) -> Self {\n        self.double()\n    }\n\n    fn sqrt(&self) -> CtOption<Self> {\n        let x2 = self.pow2k(1).mul(self);\n        let x3 = x2.pow2k(1).mul(self);\n        let x6 = x3.pow2k(3).mul(&x3);\n        let x9 = x6.pow2k(3).mul(&x3);\n        let x11 = x9.pow2k(2).mul(&x2);\n        let x22 = x11.pow2k(11).mul(&x11);\n        let x44 = 
x22.pow2k(22).mul(&x22);\n        let x88 = x44.pow2k(44).mul(&x44);\n        let x176 = x88.pow2k(88).mul(&x88);\n        let x220 = x176.pow2k(44).mul(&x44);\n        let x223 = x220.pow2k(3).mul(&x3);\n\n        // Assemble the candidate root self^((q + 1) / 4) from the\n        // addition-chain blocks above (valid since q = 3 mod 4).\n        let res = x223.pow2k(23).mul(&x22).pow2k(6).mul(&x2).pow2k(2);\n\n        // Only return Some when `res` is actually a square root of `self`,\n        // i.e. when `self` is a quadratic residue.\n        CtOption::new(res, res.square().ct_eq(self))\n    }\n\n    fn is_zero_vartime(&self) -> bool {\n        self.is_zero().into()\n    }\n\n    fn cube(&self) -> Self {\n        self.square() * self\n    }\n\n    fn invert(&self) -> CtOption<Self> {\n        self.invert()\n    }\n}\n\nimpl PrimeField for FieldElement {\n    type Repr = FieldBytes;\n\n    const NUM_BITS: u32 = 256;\n    const CAPACITY: u32 = 255;\n    const S: u32 = 1;\n\n    fn from_repr(bytes: FieldBytes) -> CtOption<Self> {\n        Self::from_sec1(bytes)\n    }\n\n    fn to_repr(&self) -> FieldBytes {\n        self.to_sec1()\n    }\n\n    fn is_odd(&self) -> Choice {\n        // TODO: Possible optimization?\n        let val = FieldElement::montgomery_reduce(\n            self.0[0], self.0[1], self.0[2], self.0[3], self.0[4], 0, 0, 0, 0,\n        );\n        (val.0[0] as u8 & 1).into()\n    }\n\n    fn multiplicative_generator() -> Self {\n        3.into()\n    }\n\n    fn root_of_unity() -> Self {\n        Self::from_raw([\n            18446744069414583342,\n            18446744073709551615,\n            18446744073709551615,\n            18446744073709551615,\n        ])\n    }\n}\n\nimpl ConstantTimeEq for FieldElement {\n    fn ct_eq(&self, other: &Self) -> Choice {\n        self.0[0].ct_eq(&other.0[0])\n            & self.0[1].ct_eq(&other.0[1])\n            & self.0[2].ct_eq(&other.0[2])\n            & self.0[3].ct_eq(&other.0[3])\n    }\n}\n\nimpl PartialEq for FieldElement {\n    #[inline]\n    fn eq(&self, other: &Self) -> bool {\n        self.ct_eq(other).unwrap_u8() == 1\n    
}\n}\n\nimpl ConditionallySelectable for FieldElement {\n    fn conditional_select(a: &Self, b: &Self, choice: Choice) -> Self {\n        FieldElement([\n            u64::conditional_select(&a.0[0], &b.0[0], choice),\n            u64::conditional_select(&a.0[1], &b.0[1], choice),\n            u64::conditional_select(&a.0[2], &b.0[2], choice),\n            u64::conditional_select(&a.0[3], &b.0[3], choice),\n            u64::conditional_select(&a.0[4], &b.0[4], choice),\n        ])\n    }\n}\n\n/// Constant representing the modulus\n/// 0xffffffffffffffff ffffffffffffffff ffffffffffffffff fffffffefffffc2f\nconst MODULUS: FieldElement = FieldElement([\n    0xfffffffefffffc2f,\n    0xffffffffffffffff,\n    0xffffffffffffffff,\n    0xffffffffffffffff,\n    0,\n]);\n\nimpl<'a> Neg for &'a FieldElement {\n    type Output = FieldElement;\n\n    #[inline]\n    fn neg(self) -> FieldElement {\n        self.neg()\n    }\n}\n\nimpl Neg for FieldElement {\n    type Output = FieldElement;\n\n    #[inline]\n    fn neg(self) -> FieldElement {\n        -&self\n    }\n}\n\nimpl<'a, 'b> Sub<&'b FieldElement> for &'a FieldElement {\n    type Output = FieldElement;\n\n    #[inline]\n    fn sub(self, rhs: &'b FieldElement) -> FieldElement {\n        self.sub(rhs)\n    }\n}\n\nimpl<'a, 'b> Add<&'b FieldElement> for &'a FieldElement {\n    type Output = FieldElement;\n\n    #[inline]\n    fn add(self, rhs: &'b FieldElement) -> FieldElement {\n        self.add(rhs)\n    }\n}\n\nimpl<'a, 'b> Mul<&'b FieldElement> for &'a FieldElement {\n    type Output = FieldElement;\n\n    #[inline]\n    fn mul(self, rhs: &'b FieldElement) -> FieldElement {\n        self.mul(rhs)\n    }\n}\n\nimpl_binops_additive!(FieldElement, FieldElement);\nimpl_binops_multiplicative!(FieldElement, FieldElement);\n\n/// INV = -(q^{-1} mod 2^64) mod 2^64\nconst INV: u64 = 0xd838091dd2253531;\n\n/// R = 2^256 mod q\n/// 0x1000003d1\nconst R: FieldElement = FieldElement([\n    
0x00000001000003d1,\n    0x0000000000000000,\n    0x0000000000000000,\n    0x0000000000000000,\n    0x0,\n]);\n\n/// R^2 = 2^512 mod q\n/// 0x1 000007a2000e90a1\nconst R2: FieldElement = FieldElement([\n    0x000007a2000e90a1,\n    0x0000000000000001,\n    0x0000000000000000,\n    0x0000000000000000,\n    0,\n]);\n\n/// R^3 = 2^768 mod q\n/// 0x100000b73 002bb1e33795f671\nconst R3: FieldElement = FieldElement([\n    0x002bb1e33795f671,\n    0x0000000100000b73,\n    0x0000000000000000,\n    0x0000000000000000,\n    0x0,\n]);\n\nimpl Default for FieldElement {\n    #[inline]\n    fn default() -> Self {\n        Self::zero()\n    }\n}\n\nimpl<T> Product<T> for FieldElement\nwhere\n    T: Borrow<FieldElement>,\n{\n    fn product<I>(iter: I) -> Self\n    where\n        I: Iterator<Item = T>,\n    {\n        iter.fold(FieldElement::one(), |acc, item| acc * item.borrow())\n    }\n}\n\nimpl<T> Sum<T> for FieldElement\nwhere\n    T: Borrow<FieldElement>,\n{\n    fn sum<I>(iter: I) -> Self\n    where\n        I: Iterator<Item = T>,\n    {\n        iter.fold(FieldElement::zero(), |acc, item| acc + item.borrow())\n    }\n}\n\nimpl Zeroize for FieldElement {\n    fn zeroize(&mut self) {\n        self.0 = [0u64; 5];\n    }\n}\n\nimpl FieldElement {\n    pub const ZERO: Self = Self([0, 0, 0, 0, 0]);\n    pub const ONE: Self = R;\n\n    fn pow2k(&self, k: usize) -> Self {\n        let mut x = *self;\n        for _j in 0..k {\n            x = x.square();\n        }\n        x\n    }\n\n    /// Returns zero, the additive identity.\n    #[inline]\n    pub const fn zero() -> FieldElement {\n        FieldElement([0, 0, 0, 0, 0])\n    }\n\n    /// Returns one, the multiplicative identity.\n    #[inline]\n    pub const fn one() -> FieldElement {\n        R\n    }\n\n    pub fn random<Rng: RngCore + CryptoRng>(rng: &mut Rng) -> Self {\n        let mut limbs = [0u64; 8];\n        for i in 0..8 {\n     
       limbs[i] = rng.next_u64();\n        }\n        FieldElement::from_u512(limbs)\n    }\n\n    /// Doubles this field element.\n    #[inline]\n    pub const fn double(&self) -> FieldElement {\n        // TODO: This can be achieved more efficiently with a bitshift.\n        self.add(self)\n    }\n\n    /// Converts a 512-bit little endian integer into\n    /// a `FieldElement` by reducing by the modulus.\n    pub fn from_bytes_wide(bytes: &[u8; 64]) -> FieldElement {\n        FieldElement::from_u512([\n            u64::from_le_bytes(<[u8; 8]>::try_from(&bytes[..8]).unwrap()),\n            u64::from_le_bytes(<[u8; 8]>::try_from(&bytes[8..16]).unwrap()),\n            u64::from_le_bytes(<[u8; 8]>::try_from(&bytes[16..24]).unwrap()),\n            u64::from_le_bytes(<[u8; 8]>::try_from(&bytes[24..32]).unwrap()),\n            u64::from_le_bytes(<[u8; 8]>::try_from(&bytes[32..40]).unwrap()),\n            u64::from_le_bytes(<[u8; 8]>::try_from(&bytes[40..48]).unwrap()),\n            u64::from_le_bytes(<[u8; 8]>::try_from(&bytes[48..56]).unwrap()),\n            u64::from_le_bytes(<[u8; 8]>::try_from(&bytes[56..64]).unwrap()),\n        ])\n    }\n\n    fn from_u512(limbs: [u64; 8]) -> FieldElement {\n        // We reduce an arbitrary 512-bit number by decomposing it into two 256-bit digits\n        // with the higher bits multiplied by 2^256. Thus, we perform two reductions\n        //\n        // 1. the lower bits are multiplied by R^2, as normal\n        // 2. the upper bits are multiplied by R^2 * 2^256 = R^3\n        //\n        // and computing their sum in the field. It remains to see that arbitrary 256-bit\n        // numbers can be placed into Montgomery form safely using the reduction. The\n        // reduction works so long as the product is less than R=2^256 multipled by\n        // the modulus. This holds because for any `c` smaller than the modulus, we have\n        // that (2^256 - 1)*c is an acceptable product for the reduction. 
Therefore, the\n        // reduction always works so long as `c` is in the field; in this case it is either the\n        // constant `R2` or `R3`.\n        let d0 = FieldElement([limbs[0], limbs[1], limbs[2], limbs[3], 0]);\n        let d1 = FieldElement([limbs[4], limbs[5], limbs[6], limbs[7], 0]);\n        // Convert to Montgomery form\n        d0 * R2 + d1 * R3\n    }\n\n    /// Converts from an integer represented in little endian\n    /// into its (congruent) `FieldElement` representation.\n    pub const fn from_raw(val: [u64; 4]) -> Self {\n        (&FieldElement([val[0], val[1], val[2], val[3], 0])).mul(&R2)\n    }\n\n    /// Squares this element.\n    #[inline]\n    pub const fn square(&self) -> FieldElement {\n        let (r1, carry) = mac(0, self.0[0], self.0[1], 0);\n        let (r2, carry) = mac(0, self.0[0], self.0[2], carry);\n        let (r3, r4) = mac(0, self.0[0], self.0[3], carry);\n\n        let (r3, carry) = mac(r3, self.0[1], self.0[2], 0);\n        let (r4, r5) = mac(r4, self.0[1], self.0[3], carry);\n\n        let (r5, r6) = mac(r5, self.0[2], self.0[3], 0);\n\n        let r7 = r6 >> 63;\n        let r6 = (r6 << 1) | (r5 >> 63);\n        let r5 = (r5 << 1) | (r4 >> 63);\n        let r4 = (r4 << 1) | (r3 >> 63);\n        let r3 = (r3 << 1) | (r2 >> 63);\n        let r2 = (r2 << 1) | (r1 >> 63);\n        let r1 = r1 << 1;\n\n        let (r0, carry) = mac(0, self.0[0], self.0[0], 0);\n        let (r1, carry) = adc(0, r1, carry);\n        let (r2, carry) = mac(r2, self.0[1], self.0[1], carry);\n        let (r3, carry) = adc(0, r3, carry);\n        let (r4, carry) = mac(r4, self.0[2], self.0[2], carry);\n        let (r5, carry) = adc(0, r5, carry);\n        let (r6, carry) = mac(r6, self.0[3], self.0[3], carry);\n        let (r7, _) = adc(0, r7, carry);\n\n        FieldElement::montgomery_reduce(r0, r1, r2, r3, r4, r5, r6, r7, 0)\n    }\n\n    /// Exponentiates `self` by `by`, where `by` is a\n    /// little-endian order integer exponent.\n    pub 
fn pow(&self, by: &[u64; 4]) -> Self {\n        let mut res = Self::one();\n        for e in by.iter().rev() {\n            for i in (0..64).rev() {\n                res = res.square();\n                let mut tmp = res;\n                tmp *= self;\n                res.conditional_assign(&tmp, (((*e >> i) & 0x1) as u8).into());\n            }\n        }\n        res\n    }\n\n    pub fn pow_by_self(&self, exp: &Self) -> Self {\n        let mut registers = [0u64; 4];\n\n        // `to_bytes` returns a little-endian encoding, so decode the limbs as\n        // LE explicitly (`from_ne_bytes` would break on big-endian targets).\n        let exp_bytes = exp.to_bytes();\n        registers[0] = u64::from_le_bytes(exp_bytes[0..8].try_into().unwrap());\n        registers[1] = u64::from_le_bytes(exp_bytes[8..16].try_into().unwrap());\n        registers[2] = u64::from_le_bytes(exp_bytes[16..24].try_into().unwrap());\n        registers[3] = u64::from_le_bytes(exp_bytes[24..32].try_into().unwrap());\n\n        self.pow(&registers)\n    }\n\n    /// Exponentiates `self` by `by`, where `by` is a\n    /// little-endian order integer exponent.\n    ///\n    /// **This operation is variable time with respect\n    /// to the exponent.** If the exponent is fixed,\n    /// this operation is effectively constant time.\n    pub fn pow_vartime(&self, by: &[u64; 4]) -> Self {\n        let mut res = Self::one();\n        for e in by.iter().rev() {\n            for i in (0..64).rev() {\n                res = res.square();\n\n                if ((*e >> i) & 1) == 1 {\n                    res.mul_assign(self);\n                }\n            }\n        }\n        res\n    }\n\n    pub fn invert(&self) -> CtOption<Self> {\n        let val = BigUint::from_bytes_le(&self.to_bytes());\n\n        let result = val.mod_inverse(&BigUint::from_bytes_be(&hex!(\n            \"fffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2f\"\n        )));\n\n        if let Some(result) = result {\n            let mut result = result.to_bytes_le().1.to_vec();\n            result.resize(64, 0);\n\n            let result_bytes: [u8; 64] = 
result.try_into().unwrap();\n\n            let result = FieldElement::from_bytes_wide(&result_bytes);\n\n            CtOption::new(result, Choice::from(1))\n        } else {\n            CtOption::new(FieldElement::zero(), Choice::from(0))\n        }\n    }\n\n    pub fn batch_invert(inputs: &mut [FieldElement]) -> FieldElement {\n        // Montgomery's batch-inversion trick: accumulate the running product\n        // of all the inputs, invert that single product once, then sweep the\n        // list backwards, peeling off one inverse per element. This costs a\n        // single field inversion plus O(n) multiplications instead of n\n        // inversions.\n\n        use zeroize::Zeroizing;\n\n        let n = inputs.len();\n        let one = FieldElement::one();\n\n        // Place scratch storage in a Zeroizing wrapper to wipe it when\n        // we pass out of scope.\n        let scratch_vec = vec![one; n];\n        let mut scratch = Zeroizing::new(scratch_vec);\n\n        // Keep an accumulator of all of the previous products\n        let mut acc = FieldElement::one();\n\n        // Pass through the input vector, recording the previous\n        // products in the scratch space\n        for (input, scratch) in inputs.iter().zip(scratch.iter_mut()) {\n            *scratch = acc;\n\n            acc = acc * input;\n        }\n\n        // acc is nonzero iff all inputs are nonzero\n        debug_assert!(acc != FieldElement::zero());\n\n        // Compute the inverse of all products\n        acc = acc.invert().unwrap();\n\n        // We need to return the product of all inverses later\n        let ret = acc;\n\n        // Pass through the vector backwards to compute the inverses\n        // in place\n        for (input, scratch) in inputs.iter_mut().rev().zip(scratch.iter().rev()) {\n            let tmp = &acc * input.clone();\n            *input = &acc * scratch;\n            acc = tmp;\n        }\n\n        ret\n    }\n\n    
#[inline(always)]\n    const fn montgomery_reduce(\n        r0: u64,\n        r1: u64,\n        r2: u64,\n        r3: u64,\n        r4: u64,\n        r5: u64,\n        r6: u64,\n        r7: u64,\n        r8: u64,\n    ) -> Self {\n        // The Montgomery reduction here is based on Algorithm 14.32 in\n        // Handbook of Applied Cryptography\n        // <http://cacr.uwaterloo.ca/hac/about/chap14.pdf>.\n\n        let k = r0.wrapping_mul(INV);\n        let (_, carry) = mac(r0, k, MODULUS.0[0], 0);\n        let (r1, carry) = mac(r1, k, MODULUS.0[1], carry);\n        let (r2, carry) = mac(r2, k, MODULUS.0[2], carry);\n        let (r3, carry) = mac(r3, k, MODULUS.0[3], carry);\n        let (r4, carry) = mac(r4, k, MODULUS.0[4], carry);\n        let (r5, carry2) = adc(r5, 0, carry);\n\n        let k = r1.wrapping_mul(INV);\n        let (_, carry) = mac(r1, k, MODULUS.0[0], 0);\n        let (r2, carry) = mac(r2, k, MODULUS.0[1], carry);\n        let (r3, carry) = mac(r3, k, MODULUS.0[2], carry);\n        let (r4, carry) = mac(r4, k, MODULUS.0[3], carry);\n        let (r5, carry) = mac(r5, k, MODULUS.0[4], carry);\n        let (r6, carry2) = adc(r6, carry2, carry);\n\n        let k = r2.wrapping_mul(INV);\n        let (_, carry) = mac(r2, k, MODULUS.0[0], 0);\n        let (r3, carry) = mac(r3, k, MODULUS.0[1], carry);\n        let (r4, carry) = mac(r4, k, MODULUS.0[2], carry);\n        let (r5, carry) = mac(r5, k, MODULUS.0[3], carry);\n        let (r6, carry) = mac(r6, k, MODULUS.0[4], carry);\n        let (r7, carry2) = adc(r7, carry2, carry);\n\n        let k = r3.wrapping_mul(INV);\n        let (_, carry) = mac(r3, k, MODULUS.0[0], 0);\n        let (r4, carry) = mac(r4, k, MODULUS.0[1], carry);\n        let (r5, carry) = mac(r5, k, MODULUS.0[2], carry);\n        let (r6, carry) = mac(r6, k, MODULUS.0[3], carry);\n        let (r7, carry) = mac(r7, k, MODULUS.0[4], carry);\n        let (r8, _) = adc(r8, carry2, carry);\n\n        // Result may be within MODULUS of 
the correct value\n        (&FieldElement([r4, r5, r6, r7, r8])).sub(&MODULUS)\n    }\n\n    /// Multiplies `rhs` by `self`, returning the result.\n    #[inline]\n    pub const fn mul(&self, rhs: &Self) -> Self {\n        // Schoolbook multiplication\n\n        let (r0, carry) = mac(0, self.0[0], rhs.0[0], 0);\n        let (r1, carry) = mac(0, self.0[0], rhs.0[1], carry);\n        let (r2, carry) = mac(0, self.0[0], rhs.0[2], carry);\n        let (r3, carry) = mac(0, self.0[0], rhs.0[3], carry);\n        let (r4, r5) = mac(0, self.0[0], rhs.0[4], carry);\n\n        let (r1, carry) = mac(r1, self.0[1], rhs.0[0], 0);\n        let (r2, carry) = mac(r2, self.0[1], rhs.0[1], carry);\n        let (r3, carry) = mac(r3, self.0[1], rhs.0[2], carry);\n        let (r4, carry) = mac(r4, self.0[1], rhs.0[3], carry);\n        let (r5, r6) = mac(r5, self.0[1], rhs.0[4], carry);\n\n        let (r2, carry) = mac(r2, self.0[2], rhs.0[0], 0);\n        let (r3, carry) = mac(r3, self.0[2], rhs.0[1], carry);\n        let (r4, carry) = mac(r4, self.0[2], rhs.0[2], carry);\n        let (r5, carry) = mac(r5, self.0[2], rhs.0[3], carry);\n        let (r6, r7) = mac(r6, self.0[2], rhs.0[4], carry);\n\n        let (r3, carry) = mac(r3, self.0[3], rhs.0[0], 0);\n        let (r4, carry) = mac(r4, self.0[3], rhs.0[1], carry);\n        let (r5, carry) = mac(r5, self.0[3], rhs.0[2], carry);\n        let (r6, carry) = mac(r6, self.0[3], rhs.0[3], carry);\n        let (r7, r8) = mac(r7, self.0[3], rhs.0[4], carry);\n\n        let (r4, carry) = mac(r4, self.0[4], rhs.0[0], 0);\n        let (r5, carry) = mac(r5, self.0[4], rhs.0[1], carry);\n        let (r6, carry) = mac(r6, self.0[4], rhs.0[2], carry);\n        let (r7, carry) = mac(r7, self.0[4], rhs.0[3], carry);\n        let (r8, _) = mac(r8, self.0[4], rhs.0[4], carry);\n\n        FieldElement::montgomery_reduce(r0, r1, r2, r3, r4, r5, r6, r7, r8)\n    }\n\n    /// Subtracts `rhs` from `self`, returning the result.\n    #[inline]\n    pub const 
fn sub(&self, rhs: &Self) -> Self {\n        let (d0, borrow) = sbb(self.0[0], rhs.0[0], 0);\n        let (d1, borrow) = sbb(self.0[1], rhs.0[1], borrow);\n        let (d2, borrow) = sbb(self.0[2], rhs.0[2], borrow);\n        let (d3, borrow) = sbb(self.0[3], rhs.0[3], borrow);\n        let (d4, borrow) = sbb(self.0[4], rhs.0[4], borrow);\n\n        // If underflow occurred on the final limb, borrow = 0xfff...fff, otherwise\n        // borrow = 0x000...000. Thus, we use it as a mask to conditionally add the modulus.\n        let (d0, carry) = adc(d0, MODULUS.0[0] & borrow, 0);\n        let (d1, carry) = adc(d1, MODULUS.0[1] & borrow, carry);\n        let (d2, carry) = adc(d2, MODULUS.0[2] & borrow, carry);\n        let (d3, carry) = adc(d3, MODULUS.0[3] & borrow, carry);\n        let (d4, _) = adc(d4, MODULUS.0[4] & borrow, carry);\n\n        FieldElement([d0, d1, d2, d3, d4])\n    }\n\n    /// Adds `rhs` to `self`, returning the result.\n    #[inline]\n    pub const fn add(&self, rhs: &Self) -> Self {\n        let (d0, carry) = adc(self.0[0], rhs.0[0], 0);\n        let (d1, carry) = adc(self.0[1], rhs.0[1], carry);\n        let (d2, carry) = adc(self.0[2], rhs.0[2], carry);\n        let (d3, carry) = adc(self.0[3], rhs.0[3], carry);\n        let (d4, _) = adc(self.0[4], rhs.0[4], carry);\n\n        // Attempt to subtract the modulus, to ensure the value\n        // is smaller than the modulus.\n        (&FieldElement([d0, d1, d2, d3, d4])).sub(&MODULUS)\n    }\n\n    /// Negates `self`.\n    #[inline]\n    pub const fn neg(&self) -> Self {\n        // Subtract `self` from `MODULUS` to negate. 
Ignore the final\n        // borrow because it cannot underflow; self is guaranteed to\n        // be in the field.\n        let (d0, borrow) = sbb(MODULUS.0[0], self.0[0], 0);\n        let (d1, borrow) = sbb(MODULUS.0[1], self.0[1], borrow);\n        let (d2, borrow) = sbb(MODULUS.0[2], self.0[2], borrow);\n        let (d3, borrow) = sbb(MODULUS.0[3], self.0[3], borrow);\n        let (d4, _) = sbb(MODULUS.0[4], self.0[4], borrow);\n\n        // `tmp` could be `MODULUS` if `self` was zero. Create a mask that is\n        // zero if `self` was zero, and `u64::max_value()` if self was nonzero.\n        let mask = (((self.0[0] | self.0[1] | self.0[2] | self.0[3] | self.0[4]) == 0) as u64)\n            .wrapping_sub(1);\n\n        FieldElement([d0 & mask, d1 & mask, d2 & mask, d3 & mask, d4 & mask])\n    }\n}\n\nimpl BaseField for FieldElement {\n    /// Converts an element of `FieldElement` into a byte representation in\n    /// little-endian byte order.\n    fn to_bytes(&self) -> [u8; 32] {\n        // Turn into canonical form by computing\n        // (a.R) / R = a\n        let tmp = FieldElement::montgomery_reduce(\n            self.0[0], self.0[1], self.0[2], self.0[3], self.0[4], 0, 0, 0, 0,\n        );\n\n        let mut res = [0; 32];\n        res[..8].copy_from_slice(&tmp.0[0].to_le_bytes());\n        res[8..16].copy_from_slice(&tmp.0[1].to_le_bytes());\n        res[16..24].copy_from_slice(&tmp.0[2].to_le_bytes());\n        res[24..32].copy_from_slice(&tmp.0[3].to_le_bytes());\n\n        res\n    }\n\n    /// Converts an element of `FieldElement` into a byte representation in\n    /// big-endian byte order.\n    fn to_be_bytes(&self) -> [u8; 32] {\n        // Turn into canonical form by computing\n        // (a.R) / R = a\n        let tmp = Self::montgomery_reduce(\n            self.0[0], self.0[1], self.0[2], self.0[3], self.0[4], 0, 0, 0, 0,\n        );\n\n        let mut res = [0; 32];\n        res[..8].copy_from_slice(&tmp.0[3].to_be_bytes());\n        
res[8..16].copy_from_slice(&tmp.0[2].to_be_bytes());\n        res[16..24].copy_from_slice(&tmp.0[1].to_be_bytes());\n        res[24..32].copy_from_slice(&tmp.0[0].to_be_bytes());\n\n        res\n    }\n\n    fn from_bytes(bytes: &[u8; 32]) -> CtOption<Self> {\n        let mut tmp = Self([0, 0, 0, 0, 0]);\n\n        tmp.0[0] = u64::from_le_bytes(<[u8; 8]>::try_from(&bytes[..8]).unwrap());\n        tmp.0[1] = u64::from_le_bytes(<[u8; 8]>::try_from(&bytes[8..16]).unwrap());\n        tmp.0[2] = u64::from_le_bytes(<[u8; 8]>::try_from(&bytes[16..24]).unwrap());\n        tmp.0[3] = u64::from_le_bytes(<[u8; 8]>::try_from(&bytes[24..32]).unwrap());\n\n        // Try to subtract the modulus\n        let (_, borrow) = sbb(tmp.0[0], MODULUS.0[0], 0);\n        let (_, borrow) = sbb(tmp.0[1], MODULUS.0[1], borrow);\n        let (_, borrow) = sbb(tmp.0[2], MODULUS.0[2], borrow);\n        let (_, borrow) = sbb(tmp.0[3], MODULUS.0[3], borrow);\n\n        // If the element is smaller than MODULUS then the\n        // subtraction will underflow, producing a borrow value\n        // of 0xffff...ffff. 
Otherwise, it'll be zero.\n        let is_some = (borrow as u8) & 1;\n\n        // Convert to Montgomery form by computing\n        // (a.R^0 * R^2) / R = a.R\n        tmp *= &R2;\n\n        CtOption::new(tmp, Choice::from(is_some))\n    }\n}\n\nimpl<'a> From<&'a FieldElement> for [u8; 32] {\n    fn from(value: &'a FieldElement) -> [u8; 32] {\n        value.to_bytes()\n    }\n}\n\nimpl FieldElement {\n    /// Attempts to parse the given byte array as an SEC1-encoded field element.\n    ///\n    /// Returns None if the byte array does not contain a big-endian integer in the range\n    /// [0, p).\n    pub fn from_sec1(bytes: FieldBytes) -> CtOption<Self> {\n        let mut be = bytes.to_vec();\n        be.reverse();\n\n        Self::from_bytes(&be.as_slice().try_into().unwrap())\n    }\n\n    /// Returns the SEC1 encoding of this field element.\n    pub fn to_sec1(self) -> FieldBytes {\n        *FieldBytes::from_slice(&self.to_be_bytes())\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_inv() {\n        // Compute -(q^{-1} mod 2^64) mod 2^64 by exponentiating\n        // by totient(2**64) - 1\n\n        let mut inv = 1u64;\n        for _ in 0..63 {\n            inv = inv.wrapping_mul(inv);\n            inv = inv.wrapping_mul(MODULUS.0[0]);\n        }\n        inv = inv.wrapping_neg();\n\n        assert_eq!(inv, INV);\n    }\n\n    #[cfg(feature = \"std\")]\n    #[test]\n    fn test_debug() {\n        assert_eq!(\n            format!(\"{:?}\", FieldElement::zero()),\n            \"0x0000000000000000000000000000000000000000000000000000000000000000\"\n        );\n        assert_eq!(\n            format!(\"{:?}\", FieldElement::one()),\n            \"0x0000000000000000000000000000000000000000000000000000000000000001\"\n        );\n        assert_eq!(\n            format!(\"{:?}\", R2),\n            \"0x1824b159acc5056f998c4fefecbc4ff55884b7fa0003480200000001fffffffe\"\n        );\n    }\n\n    #[test]\n    fn test_equality() {\n      
  assert_eq!(FieldElement::zero(), FieldElement::zero());\n        assert_eq!(FieldElement::one(), FieldElement::one());\n        assert_eq!(R2, R2);\n\n        assert!(FieldElement::zero() != FieldElement::one());\n        assert!(FieldElement::one() != R2);\n    }\n\n    #[test]\n    fn test_to_bytes() {\n        assert_eq!(\n            FieldElement::zero().to_bytes(),\n            [\n                0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n                0, 0, 0, 0\n            ]\n        );\n\n        assert_eq!(\n            FieldElement::one().to_bytes(),\n            [\n                1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n                0, 0, 0, 0\n            ]\n        );\n    }\n\n    #[test]\n    fn test_from_bytes() {\n        assert_eq!(\n            FieldElement::from_bytes(&[\n                0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n                0, 0, 0, 0\n            ])\n            .unwrap(),\n            FieldElement::zero()\n        );\n\n        assert_eq!(\n            FieldElement::from_bytes(&[\n                1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n                0, 0, 0, 0\n            ])\n            .unwrap(),\n            FieldElement::one()\n        );\n    }\n\n    #[test]\n    fn test_from_u512_zero() {\n        assert_eq!(\n            FieldElement::zero(),\n            FieldElement::from_u512([\n                MODULUS.0[0],\n                MODULUS.0[1],\n                MODULUS.0[2],\n                MODULUS.0[3],\n                0,\n                0,\n                0,\n                0\n            ])\n        );\n    }\n\n    #[test]\n    fn test_from_u512_r() {\n        assert_eq!(R, FieldElement::from_u512([1, 0, 0, 0, 0, 0, 0, 0]));\n    }\n\n    #[test]\n    fn test_from_u512_r2() {\n        assert_eq!(R2, FieldElement::from_u512([0, 
0, 0, 0, 1, 0, 0, 0]));\n    }\n\n    #[test]\n    fn test_from_u512_max() {\n        let max_u64 = 0xffffffffffffffff;\n        assert_eq!(\n            R3 - R,\n            FieldElement::from_u512([\n                max_u64, max_u64, max_u64, max_u64, max_u64, max_u64, max_u64, max_u64\n            ])\n        );\n    }\n\n    #[test]\n    fn test_from_bytes_wide_r2() {\n        assert_eq!(\n            R2,\n            FieldElement::from_bytes_wide(&[\n                209, 3, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n                0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n                0, 0, 0, 0, 0, 0, 0, 0, 0\n            ])\n        );\n    }\n\n    #[test]\n    fn test_from_bytes_wide_negative_one() {\n        println!(\"{:?}\", (-&FieldElement::one()).to_bytes());\n        assert_eq!(\n            -&FieldElement::one(),\n            FieldElement::from_bytes_wide(&[\n                46, 252, 255, 255, 254, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,\n                255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 0, 0, 0,\n                0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n                0,\n            ])\n        );\n    }\n\n    #[test]\n    fn test_zero() {\n        assert_eq!(FieldElement::zero(), -&FieldElement::zero());\n        assert_eq!(\n            FieldElement::zero(),\n            FieldElement::zero() + FieldElement::zero()\n        );\n        assert_eq!(\n            FieldElement::zero(),\n            FieldElement::zero() - FieldElement::zero()\n        );\n        assert_eq!(\n            FieldElement::zero(),\n            FieldElement::zero() * FieldElement::zero()\n        );\n    }\n\n    const LARGEST: FieldElement = FieldElement([\n        0xfffffffefffffc2e,\n        0xffffffffffffffff,\n        0xffffffffffffffff,\n        0xffffffffffffffff,\n        0,\n    
]);\n\n    #[test]\n    fn test_addition() {\n        let mut tmp = LARGEST;\n        tmp += &LARGEST;\n\n        let target = FieldElement([\n            0xfffffffefffffc2d,\n            0xffffffffffffffff,\n            0xffffffffffffffff,\n            0xffffffffffffffff,\n            0,\n        ]);\n\n        assert_eq!(tmp, target);\n\n        let mut tmp = LARGEST;\n        tmp += &FieldElement([1, 0, 0, 0, 0]);\n\n        assert_eq!(tmp, FieldElement::zero());\n    }\n\n    #[test]\n    fn test_negation() {\n        let tmp = -&LARGEST;\n\n        assert_eq!(tmp, FieldElement([1, 0, 0, 0, 0]));\n\n        let tmp = -&FieldElement::zero();\n        assert_eq!(tmp, FieldElement::zero());\n        let tmp = -&FieldElement([1, 0, 0, 0, 0]);\n        assert_eq!(tmp, LARGEST);\n    }\n\n    #[test]\n    fn test_subtraction() {\n        let mut tmp = LARGEST;\n        tmp -= &LARGEST;\n\n        assert_eq!(tmp, FieldElement::zero());\n\n        let mut tmp = FieldElement::zero();\n        tmp -= &LARGEST;\n\n        let mut tmp2 = MODULUS;\n        tmp2 -= &LARGEST;\n\n        assert_eq!(tmp, tmp2);\n    }\n\n    #[test]\n    fn test_multiplication() {\n        let mut cur = LARGEST;\n\n        for _ in 0..100 {\n            let mut tmp = cur;\n            tmp *= &cur;\n\n            let mut tmp2 = FieldElement::zero();\n            for b in cur\n                .to_bytes()\n                .iter()\n                .rev()\n                .flat_map(|byte| (0..8).rev().map(move |i| ((byte >> i) & 1u8) == 1u8))\n            {\n                let tmp3 = tmp2;\n                tmp2.add_assign(&tmp3);\n\n                if b {\n                    tmp2.add_assign(&cur);\n                }\n            }\n\n            assert_eq!(tmp, tmp2);\n\n            cur.add_assign(&LARGEST);\n        }\n    }\n\n    #[test]\n    fn test_squaring() {\n        let a = FieldElement::from_bytes(&[\n            217, 35, 199, 155, 133, 142, 1, 157, 157, 14, 108, 39, 117, 3, 81, 244, 
139, 80, 137,\n            171, 94, 69, 166, 212, 190, 89, 50, 109, 4, 24, 202, 156,\n        ])\n        .unwrap();\n        let root = a.sqrt().unwrap();\n        println!(\"root: {:?}\", root);\n        println!(\"0root: {:?}\", -root);\n\n        /*\n               let mut cur = LARGEST;\n\n               for _ in 0..100 {\n                   let mut tmp = cur;\n                   tmp = tmp.square();\n\n                   let mut tmp2 = FieldElement::zero();\n                   for b in cur\n                       .to_bytes()\n                       .iter()\n                       .rev()\n                       .flat_map(|byte| (0..8).rev().map(move |i| ((byte >> i) & 1u8) == 1u8))\n                   {\n                       let tmp3 = tmp2;\n                       tmp2.add_assign(&tmp3);\n\n                       if b {\n                           tmp2.add_assign(&cur);\n                       }\n                   }\n\n                   assert_eq!(tmp, tmp2);\n\n                   cur.add_assign(&LARGEST);\n               }\n        */\n    }\n\n    #[test]\n    fn test_inversion() {\n        assert_eq!(FieldElement::zero().invert().is_none().unwrap_u8(), 1);\n        assert_eq!(FieldElement::one().invert().unwrap(), FieldElement::one());\n        assert_eq!(\n            (-&FieldElement::one()).invert().unwrap(),\n            -&FieldElement::one()\n        );\n\n        let a = FieldElement::from(123);\n        let result = a.invert().unwrap();\n        println!(\"result {:?}\", result);\n\n        let mut tmp = R2;\n\n        for _ in 0..100 {\n            let mut tmp2 = tmp.invert().unwrap();\n            tmp2.mul_assign(&tmp);\n\n            assert_eq!(tmp2, FieldElement::one());\n\n            tmp.add_assign(&R2);\n        }\n    }\n\n    #[test]\n    fn test_invert_is_pow() {\n        let q_minus_2 = [\n            0xfffffffefffffc2d,\n            0xffffffffffffffff,\n            0xffffffffffffffff,\n            0xffffffffffffffff,\n        ];\n\n    
    let mut r1 = R;\n        let mut r2 = R;\n        let mut r3 = R;\n\n        for _ in 0..100 {\n            r1 = r1.invert().unwrap();\n            r2 = r2.pow_vartime(&q_minus_2);\n            r3 = r3.pow(&q_minus_2);\n\n            assert_eq!(r1, r2);\n            assert_eq!(r2, r3);\n            // Add R so we check something different next time around\n            r1.add_assign(&R);\n            r2 = r1;\n            r3 = r1;\n        }\n    }\n\n    #[test]\n    fn test_from_raw() {\n        assert_eq!(\n            FieldElement::from_raw([\n                0x00000001000003d0,\n                0x0000000000000000,\n                0x0000000000000000,\n                0x0000000000000000,\n            ]),\n            FieldElement::from_raw([0xffffffffffffffff; 4])\n        );\n\n        assert_eq!(\n            FieldElement::from_raw(MODULUS.0[..4].try_into().unwrap()),\n            FieldElement::zero()\n        );\n\n        assert_eq!(FieldElement::from_raw([1, 0, 0, 0]), R);\n    }\n\n    #[test]\n    fn test_double() {\n        let a = FieldElement::from_raw([\n            0x1fff3231233ffffd,\n            0x4884b7fa00034802,\n            0x998c4fefecbc4ff3,\n            0x1824b159acc50562,\n        ]);\n\n        assert_eq!(a.double(), a + a);\n    }\n}\n"
  },
  {
    "path": "packages/secq256k1/src/field/field_secq.rs",
    "content": "//! This module provides an implementation of the secq256k1's scalar field $\\mathbb{F}_q$\n//! where `q = 0xfffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd0364141`\n//! This is an adaptation of code from the k256 crate\n//! We modify various constants (MODULUS, R, R2, etc.) to appropriate values for secq256k1 and update tests\n#![allow(clippy::all)]\nuse crate::FieldBytes;\nuse core::borrow::Borrow;\nuse core::convert::TryFrom;\nuse core::fmt;\nuse core::iter::{Product, Sum};\nuse core::ops::{Add, AddAssign, Mul, MulAssign, Neg, Sub, SubAssign};\nuse k256::Scalar;\nuse primeorder::elliptic_curve::subtle::{\n    Choice, ConditionallySelectable, ConstantTimeEq, CtOption,\n};\nuse primeorder::{Field, PrimeField};\nuse rand_core::{CryptoRng, RngCore};\nuse serde::de::Visitor;\nuse serde::{Deserialize, Serialize};\nuse zeroize::Zeroize;\n\n// use crate::util::{adc, mac, sbb};\n/// Compute a + b + carry, returning the result and the new carry over.\n#[inline(always)]\npub const fn adc(a: u64, b: u64, carry: u64) -> (u64, u64) {\n    let ret = (a as u128) + (b as u128) + (carry as u128);\n    (ret as u64, (ret >> 64) as u64)\n}\n\n/// Compute a - (b + borrow), returning the result and the new borrow.\n#[inline(always)]\npub const fn sbb(a: u64, b: u64, borrow: u64) -> (u64, u64) {\n    let ret = (a as u128).wrapping_sub((b as u128) + ((borrow >> 63) as u128));\n    (ret as u64, (ret >> 64) as u64)\n}\n\n/// Compute a + (b * c) + carry, returning the result and the new carry over.\n#[inline(always)]\npub const fn mac(a: u64, b: u64, c: u64, carry: u64) -> (u64, u64) {\n    let ret = (a as u128) + ((b as u128) * (c as u128)) + (carry as u128);\n    (ret as u64, (ret >> 64) as u64)\n}\n\nmacro_rules! 
impl_add_binop_specify_output {\n    ($lhs:ident, $rhs:ident, $output:ident) => {\n        impl<'b> Add<&'b $rhs> for $lhs {\n            type Output = $output;\n\n            #[inline]\n            fn add(self, rhs: &'b $rhs) -> $output {\n                &self + rhs\n            }\n        }\n\n        impl<'a> Add<$rhs> for &'a $lhs {\n            type Output = $output;\n\n            #[inline]\n            fn add(self, rhs: $rhs) -> $output {\n                self + &rhs\n            }\n        }\n\n        impl Add<$rhs> for $lhs {\n            type Output = $output;\n\n            #[inline]\n            fn add(self, rhs: $rhs) -> $output {\n                &self + &rhs\n            }\n        }\n    };\n}\n\nmacro_rules! impl_sub_binop_specify_output {\n    ($lhs:ident, $rhs:ident, $output:ident) => {\n        impl<'b> Sub<&'b $rhs> for $lhs {\n            type Output = $output;\n\n            #[inline]\n            fn sub(self, rhs: &'b $rhs) -> $output {\n                &self - rhs\n            }\n        }\n\n        impl<'a> Sub<$rhs> for &'a $lhs {\n            type Output = $output;\n\n            #[inline]\n            fn sub(self, rhs: $rhs) -> $output {\n                self - &rhs\n            }\n        }\n\n        impl Sub<$rhs> for $lhs {\n            type Output = $output;\n\n            #[inline]\n            fn sub(self, rhs: $rhs) -> $output {\n                &self - &rhs\n            }\n        }\n    };\n}\n\nmacro_rules! impl_binops_additive_specify_output {\n    ($lhs:ident, $rhs:ident, $output:ident) => {\n        impl_add_binop_specify_output!($lhs, $rhs, $output);\n        impl_sub_binop_specify_output!($lhs, $rhs, $output);\n    };\n}\n\nmacro_rules! 
impl_binops_multiplicative_mixed {\n    ($lhs:ident, $rhs:ident, $output:ident) => {\n        impl<'b> Mul<&'b $rhs> for $lhs {\n            type Output = $output;\n\n            #[inline]\n            fn mul(self, rhs: &'b $rhs) -> $output {\n                &self * rhs\n            }\n        }\n\n        impl<'a> Mul<$rhs> for &'a $lhs {\n            type Output = $output;\n\n            #[inline]\n            fn mul(self, rhs: $rhs) -> $output {\n                self * &rhs\n            }\n        }\n\n        impl Mul<$rhs> for $lhs {\n            type Output = $output;\n\n            #[inline]\n            fn mul(self, rhs: $rhs) -> $output {\n                &self * &rhs\n            }\n        }\n    };\n}\n\nmacro_rules! impl_binops_additive {\n    ($lhs:ident, $rhs:ident) => {\n        impl_binops_additive_specify_output!($lhs, $rhs, $lhs);\n\n        impl SubAssign<$rhs> for $lhs {\n            #[inline]\n            fn sub_assign(&mut self, rhs: $rhs) {\n                *self = &*self - &rhs;\n            }\n        }\n\n        impl AddAssign<$rhs> for $lhs {\n            #[inline]\n            fn add_assign(&mut self, rhs: $rhs) {\n                *self = &*self + &rhs;\n            }\n        }\n\n        impl<'b> SubAssign<&'b $rhs> for $lhs {\n            #[inline]\n            fn sub_assign(&mut self, rhs: &'b $rhs) {\n                *self = &*self - rhs;\n            }\n        }\n\n        impl<'b> AddAssign<&'b $rhs> for $lhs {\n            #[inline]\n            fn add_assign(&mut self, rhs: &'b $rhs) {\n                *self = &*self + rhs;\n            }\n        }\n    };\n}\n\nmacro_rules! 
impl_binops_multiplicative {\n    ($lhs:ident, $rhs:ident) => {\n        impl_binops_multiplicative_mixed!($lhs, $rhs, $lhs);\n\n        impl MulAssign<$rhs> for $lhs {\n            #[inline]\n            fn mul_assign(&mut self, rhs: $rhs) {\n                *self = &*self * &rhs;\n            }\n        }\n\n        impl<'b> MulAssign<&'b $rhs> for $lhs {\n            #[inline]\n            fn mul_assign(&mut self, rhs: &'b $rhs) {\n                *self = &*self * rhs;\n            }\n        }\n    };\n}\n\n/// Represents an element of the scalar field $\\mathbb{F}_q$ of the secq256k1 elliptic\n/// curve construction.\n// The internal representation of this type is four 64-bit unsigned\n// integers in little-endian order. `FieldElement` values are always in\n// Montgomery form; i.e., FieldElement(a) = aR mod q, with R = 2^256.\n#[derive(Clone, Copy, Eq)]\npub struct FieldElement(pub(crate) [u64; 5]);\n\nuse serde::ser::SerializeSeq;\nuse serde::{Deserializer, Serializer};\n\nuse super::{BaseField, SqrtRatio};\nimpl SqrtRatio for FieldElement {\n    // The constants are outputs of hashtocurve_params.sage\n\n    const C1: u64 = 6;\n\n    //  904625697166532776746648320380374280100293470930272690489102837043110636674\n    const C3: Self = FieldElement([\n        13822214165235122497,\n        13451932020343611451,\n        18446744073709551614,\n        9079256848778919935,\n        0,\n    ]);\n\n    const C4: Self = FieldElement([14644223128245760257, 1078510108991852875, 80, 0, 0]);\n    const C5: Self = FieldElement([411004481505318880, 12260033118033672328, 40, 0, 0]);\n\n    // 110311768741588258819775753290068347910167536649173269426897453508275181317711\n    const C6: Self = FieldElement([\n        3136031371246777149,\n        4130463083053389383,\n        12279052256176627435,\n        4106525493002347721,\n        0,\n    ]);\n\n    // 94159471864959118282773103057807800350405967619523989516086101139055882323203\n    const C7: Self = FieldElement([\n    
    18366461224604398165,\n        46442091185214180,\n        18426301622747497549,\n        12927103485658657333,\n        0,\n    ]);\n\n    // https://www.ietf.org/archive/id/draft-irtf-cfrg-hash-to-curve-16.html#appendix-F.2.1.1\n    fn sqrt_ratio(u: &Self, v: &Self) -> (Choice, Self) {\n        let mut tv1 = Self::C6;\n        let mut tv2 = v.pow_by_self(&Self::C4);\n        let mut tv3 = tv2.pow_by_self(&Self::from(2));\n        tv3 = tv3 * v;\n        let mut tv5 = u * tv3;\n        tv5 = tv5.pow_by_self(&Self::C3);\n        tv5 = tv5 * tv2;\n        tv2 = tv5 * v;\n        tv3 = tv5 * u;\n        let mut tv4 = tv3 * tv2;\n        tv5 = tv4.pow_by_self(&Self::C5);\n        let is_qr = tv5.ct_eq(&Self::one());\n        tv2 = tv3 * Self::C7;\n        tv5 = tv4 * tv1;\n        tv3 = Self::conditional_select(&tv2, &tv3, is_qr);\n        tv4 = Self::conditional_select(&tv5, &tv4, is_qr);\n\n        let two = Self::from(2);\n        for i in (2..(Self::C1 + 1)).rev() {\n            let i = Self::from(i);\n            tv5 = i - two;\n            tv5 = two.pow_by_self(&tv5);\n            tv5 = tv4.pow_by_self(&tv5);\n            let e1 = tv5.ct_eq(&Self::one());\n            tv2 = tv3 * tv1;\n            tv1 = tv1 * tv1;\n            tv5 = tv4 * tv1;\n            tv3 = Self::conditional_select(&tv2, &tv3, e1);\n            tv4 = Self::conditional_select(&tv5, &tv4, e1);\n        }\n\n        (is_qr, tv3)\n    }\n}\n\nimpl BaseField for FieldElement {\n    /// Attempts to convert a little-endian byte representation of\n    /// a scalar into a `FieldElement`, failing if the input is not canonical.\n    fn from_bytes(bytes: &[u8; 32]) -> CtOption<FieldElement> {\n        let mut tmp = FieldElement([0, 0, 0, 0, 0]);\n\n        tmp.0[0] = u64::from_le_bytes(<[u8; 8]>::try_from(&bytes[..8]).unwrap());\n        tmp.0[1] = u64::from_le_bytes(<[u8; 8]>::try_from(&bytes[8..16]).unwrap());\n        tmp.0[2] = u64::from_le_bytes(<[u8; 8]>::try_from(&bytes[16..24]).unwrap());\n 
       tmp.0[3] = u64::from_le_bytes(<[u8; 8]>::try_from(&bytes[24..32]).unwrap());\n\n        // Try to subtract the modulus\n        let (_, borrow) = sbb(tmp.0[0], MODULUS.0[0], 0);\n        let (_, borrow) = sbb(tmp.0[1], MODULUS.0[1], borrow);\n        let (_, borrow) = sbb(tmp.0[2], MODULUS.0[2], borrow);\n        let (_, borrow) = sbb(tmp.0[3], MODULUS.0[3], borrow);\n\n        // If the element is smaller than MODULUS then the\n        // subtraction will underflow, producing a borrow value\n        // of 0xffff...ffff. Otherwise, it'll be zero.\n        let is_some = (borrow as u8) & 1;\n\n        // Convert to Montgomery form by computing\n        // (a.R^0 * R^2) / R = a.R\n        tmp *= &R2;\n\n        CtOption::new(tmp, Choice::from(is_some))\n    }\n\n    /// Converts an element of `FieldElement` into a byte representation in\n    /// little-endian byte order.\n    fn to_bytes(&self) -> [u8; 32] {\n        // Turn into canonical form by computing\n        // (a.R) / R = a\n        let tmp = FieldElement::montgomery_reduce(\n            self.0[0], self.0[1], self.0[2], self.0[3], self.0[4], 0, 0, 0, 0,\n        );\n\n        let mut res = [0; 32];\n        res[..8].copy_from_slice(&tmp.0[0].to_le_bytes());\n        res[8..16].copy_from_slice(&tmp.0[1].to_le_bytes());\n        res[16..24].copy_from_slice(&tmp.0[2].to_le_bytes());\n        res[24..32].copy_from_slice(&tmp.0[3].to_le_bytes());\n\n        res\n    }\n\n    /// Converts an element of `FieldElement` into a byte representation in\n    /// big-endian byte order.\n    fn to_be_bytes(&self) -> [u8; 32] {\n        // Turn into canonical form by computing\n        // (a.R) / R = a\n        let tmp = FieldElement::montgomery_reduce(\n            self.0[0], self.0[1], self.0[2], self.0[3], self.0[4], 0, 0, 0, 0,\n        );\n\n        let mut res = [0; 32];\n        res[..8].copy_from_slice(&tmp.0[3].to_be_bytes());\n        res[8..16].copy_from_slice(&tmp.0[2].to_be_bytes());\n        
res[16..24].copy_from_slice(&tmp.0[1].to_be_bytes());\n        res[24..32].copy_from_slice(&tmp.0[0].to_be_bytes());\n\n        res\n    }\n}\n\nimpl Serialize for FieldElement {\n    fn serialize<S: Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> {\n        // Serialize the canonical little-endian byte representation rather than\n        // the raw Montgomery-form limbs, so that deserialization (which goes\n        // through `from_raw`) round-trips to the same field element.\n        let values: [u8; 32] = self.to_bytes();\n        let mut seq = serializer.serialize_seq(Some(values.len()))?;\n        for val in values.iter() {\n            seq.serialize_element(val)?;\n        }\n\n        seq.end()\n    }\n}\n\nstruct U64ArrayVisitor;\n\nimpl<'de> Visitor<'de> for U64ArrayVisitor {\n    type Value = FieldElement;\n\n    fn expecting(&self, formatter: &mut std::fmt::Formatter) -> std::fmt::Result {\n        formatter.write_str(\"a sequence of 32 bytes in little-endian order\")\n    }\n\n    fn visit_seq<A>(self, mut seq: A) -> Result<Self::Value, A::Error>\n    where\n        A: serde::de::SeqAccess<'de>,\n    {\n        let mut result = [0u64; 4];\n\n        for i in 0..4 {\n            let mut val: u64 = 0;\n            for j in 0..8 {\n                // Report malformed input as a deserialization error instead of\n                // panicking on `unwrap`.\n                let byte = seq\n                    .next_element::<u8>()?\n                    .ok_or_else(|| serde::de::Error::invalid_length(i * 8 + j, &self))?;\n                val |= (byte as u64) << (8 * j);\n            }\n            result[i] = val;\n        }\n\n        Ok(FieldElement::from_raw(result))\n    }\n}\n\nimpl<'de> Deserialize<'de> for FieldElement {\n    fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>\n    where\n        D: Deserializer<'de>,\n    {\n        deserializer.deserialize_seq(U64ArrayVisitor)\n    }\n}\n\nimpl fmt::Debug for FieldElement {\n    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {\n        let tmp = self.to_bytes();\n        write!(f, \"0x\")?;\n        for &b in tmp.iter().rev() {\n            write!(f, \"{:02x}\", b)?;\n        }\n        Ok(())\n    }\n}\n\nimpl From<u64> for FieldElement {\n    fn from(val: u64) -> FieldElement {\n        FieldElement([val, 0, 0, 0, 0]) * R2\n    }\n}\n\nimpl Field for FieldElement {\n    fn random(mut rng: impl RngCore) -> Self {\n        
let mut bytes = FieldBytes::default();\n\n        loop {\n            rng.fill_bytes(&mut bytes);\n            if let Some(fe) = Self::from_bytes(&bytes.into()).into() {\n                return fe;\n            }\n        }\n    }\n\n    fn zero() -> Self {\n        FieldElement::zero()\n    }\n\n    fn one() -> Self {\n        FieldElement::one()\n    }\n\n    fn is_zero(&self) -> Choice {\n        self.ct_eq(&Self::ZERO)\n    }\n\n    fn square(&self) -> Self {\n        self.square()\n    }\n\n    fn double(&self) -> Self {\n        self.double()\n    }\n\n    fn sqrt(&self) -> CtOption<Self> {\n        let as_scalar: Scalar = Scalar::from_repr(self.to_repr()).unwrap();\n        as_scalar\n            .sqrt()\n            .map(|s| FieldElement::from_sec1(s.to_bytes()).unwrap())\n    }\n\n    fn is_zero_vartime(&self) -> bool {\n        self.is_zero().into()\n    }\n\n    fn cube(&self) -> Self {\n        self.square() * self\n    }\n\n    fn invert(&self) -> CtOption<Self> {\n        self.invert()\n    }\n}\n\nimpl PrimeField for FieldElement {\n    type Repr = FieldBytes;\n\n    const NUM_BITS: u32 = 256;\n    const CAPACITY: u32 = 255;\n    // q - 1 = 2^6 * t with t odd, so the 2-adicity of this field is 6;\n    // `root_of_unity` below is a primitive 2^6-th root of unity.\n    const S: u32 = 6;\n\n    fn from_repr(bytes: FieldBytes) -> CtOption<Self> {\n        Self::from_sec1(bytes)\n    }\n\n    fn to_repr(&self) -> FieldBytes {\n        self.to_sec1()\n    }\n\n    fn is_odd(&self) -> Choice {\n        // TODO: Possible optimization?\n        let val = FieldElement::montgomery_reduce(\n            self.0[0], self.0[1], self.0[2], self.0[3], self.0[4], 0, 0, 0, 0,\n        );\n        (val.0[0] as u8 & 1).into()\n    }\n\n    fn multiplicative_generator() -> Self {\n        7.into()\n    }\n\n    fn root_of_unity() -> Self {\n        Self::from_raw([\n            0x992f4b5402b052f2,\n            0x98bdeab680756045,\n            0xdf9879a3fbc483a8,\n            0xc1dc060e7a91986,\n        ])\n    }\n}\n\nimpl ConstantTimeEq for FieldElement {\n    fn ct_eq(&self, other: &Self) -> Choice {\n       
 self.0[0].ct_eq(&other.0[0])\n            & self.0[1].ct_eq(&other.0[1])\n            & self.0[2].ct_eq(&other.0[2])\n            & self.0[3].ct_eq(&other.0[3])\n    }\n}\n\nimpl PartialEq for FieldElement {\n    #[inline]\n    fn eq(&self, other: &Self) -> bool {\n        self.ct_eq(other).unwrap_u8() == 1\n    }\n}\n\nimpl ConditionallySelectable for FieldElement {\n    fn conditional_select(a: &Self, b: &Self, choice: Choice) -> Self {\n        FieldElement([\n            u64::conditional_select(&a.0[0], &b.0[0], choice),\n            u64::conditional_select(&a.0[1], &b.0[1], choice),\n            u64::conditional_select(&a.0[2], &b.0[2], choice),\n            u64::conditional_select(&a.0[3], &b.0[3], choice),\n            u64::conditional_select(&a.0[4], &b.0[4], choice),\n        ])\n    }\n}\n\n/// Constant representing the modulus\n/// 0xffffffffffffffff fffffffffffffffe baaedce6af48a03b bfd25e8cd0364141\nconst MODULUS: FieldElement = FieldElement([\n    0xbfd25e8cd0364141,\n    0xbaaedce6af48a03b,\n    0xfffffffffffffffe,\n    0xffffffffffffffff,\n    0,\n]);\n\nimpl<'a> Neg for &'a FieldElement {\n    type Output = FieldElement;\n\n    #[inline]\n    fn neg(self) -> FieldElement {\n        self.neg()\n    }\n}\n\nimpl Neg for FieldElement {\n    type Output = FieldElement;\n\n    #[inline]\n    fn neg(self) -> FieldElement {\n        -&self\n    }\n}\n\nimpl<'a, 'b> Sub<&'b FieldElement> for &'a FieldElement {\n    type Output = FieldElement;\n\n    #[inline]\n    fn sub(self, rhs: &'b FieldElement) -> FieldElement {\n        self.sub(rhs)\n    }\n}\n\nimpl<'a, 'b> Add<&'b FieldElement> for &'a FieldElement {\n    type Output = FieldElement;\n\n    #[inline]\n    fn add(self, rhs: &'b FieldElement) -> FieldElement {\n        self.add(rhs)\n    }\n}\n\nimpl<'a, 'b> Mul<&'b FieldElement> for &'a FieldElement {\n    type Output = FieldElement;\n\n    #[inline]\n    fn mul(self, rhs: &'b FieldElement) -> FieldElement {\n        self.mul(rhs)\n    
}\n}\n\nimpl_binops_additive!(FieldElement, FieldElement);\nimpl_binops_multiplicative!(FieldElement, FieldElement);\n\n/// INV = -(q^{-1} mod 2^64) mod 2^64\nconst INV: u64 = 0x4b0dff665588b13f;\n\n/// R = 2^256 mod q\n/// 0x1 4551231950b75fc4 402da1732fc9bebf\nconst R: FieldElement = FieldElement([\n    0x402da1732fc9bebf,\n    0x4551231950b75fc4,\n    0x0000000000000001,\n    0x0000000000000000,\n    0x0,\n]);\n\n/// R^2 = 2^512 mod q\n/// 0x9d671cd581c69bc5 e697f5e45bcd07c6 741496c20e7cf878 896cf21467d7d140\nconst R2: FieldElement = FieldElement([\n    0x896cf21467d7d140,\n    0x741496c20e7cf878,\n    0xe697f5e45bcd07c6,\n    0x9d671cd581c69bc5,\n    0,\n]);\n\n/// R^3 = 2^768 mod q\n/// 0x555d800c18ef116d b1b31347f1d0b2da 0017648444d4322c 7bc0cfe0e9ff41ed\nconst R3: FieldElement = FieldElement([\n    0x7bc0cfe0e9ff41ed,\n    0x0017648444d4322c,\n    0xb1b31347f1d0b2da,\n    0x555d800c18ef116d,\n    0x0,\n]);\n\nimpl Default for FieldElement {\n    #[inline]\n    fn default() -> Self {\n        Self::zero()\n    }\n}\n\nimpl<T> Product<T> for FieldElement\nwhere\n    T: Borrow<FieldElement>,\n{\n    fn product<I>(iter: I) -> Self\n    where\n        I: Iterator<Item = T>,\n    {\n        iter.fold(FieldElement::one(), |acc, item| acc * item.borrow())\n    }\n}\n\nimpl<T> Sum<T> for FieldElement\nwhere\n    T: Borrow<FieldElement>,\n{\n    fn sum<I>(iter: I) -> Self\n    where\n        I: Iterator<Item = T>,\n    {\n        iter.fold(FieldElement::zero(), |acc, item| acc + item.borrow())\n    }\n}\n\nimpl Zeroize for FieldElement {\n    fn zeroize(&mut self) {\n        self.0 = [0u64; 5];\n    }\n}\n\nimpl FieldElement {\n    pub const ZERO: Self = Self([0, 0, 0, 0, 0]);\n    pub const ONE: Self = R;\n\n    fn pow2k(&self, k: usize) -> Self {\n        let mut x = *self;\n        for _j in 0..k {\n            x = x.square();\n        }\n        x\n    }\n\n    /// Returns zero, the additive identity.\n    #[inline]\n    pub const fn zero() -> FieldElement {\n     
   FieldElement([0, 0, 0, 0, 0])\n    }\n\n    /// Returns one, the multiplicative identity.\n    #[inline]\n    pub const fn one() -> FieldElement {\n        R\n    }\n\n    pub fn random<Rng: RngCore + CryptoRng>(rng: &mut Rng) -> Self {\n        let mut limbs = [0u64; 8];\n        for i in 0..8 {\n            limbs[i] = rng.next_u64();\n        }\n        FieldElement::from_u512(limbs)\n    }\n\n    /// Doubles this field element.\n    #[inline]\n    pub const fn double(&self) -> FieldElement {\n        // TODO: This can be achieved more efficiently with a bitshift.\n        self.add(self)\n    }\n\n    /// Converts a 512-bit little endian integer into\n    /// a `FieldElement` by reducing by the modulus.\n    pub fn from_bytes_wide(bytes: &[u8; 64]) -> FieldElement {\n        FieldElement::from_u512([\n            u64::from_le_bytes(<[u8; 8]>::try_from(&bytes[..8]).unwrap()),\n            u64::from_le_bytes(<[u8; 8]>::try_from(&bytes[8..16]).unwrap()),\n            u64::from_le_bytes(<[u8; 8]>::try_from(&bytes[16..24]).unwrap()),\n            u64::from_le_bytes(<[u8; 8]>::try_from(&bytes[24..32]).unwrap()),\n            u64::from_le_bytes(<[u8; 8]>::try_from(&bytes[32..40]).unwrap()),\n            u64::from_le_bytes(<[u8; 8]>::try_from(&bytes[40..48]).unwrap()),\n            u64::from_le_bytes(<[u8; 8]>::try_from(&bytes[48..56]).unwrap()),\n            u64::from_le_bytes(<[u8; 8]>::try_from(&bytes[56..64]).unwrap()),\n        ])\n    }\n\n    fn from_u512(limbs: [u64; 8]) -> FieldElement {\n        // We reduce an arbitrary 512-bit number by decomposing it into two 256-bit digits\n        // with the higher bits multiplied by 2^256. Thus, we perform two reductions\n        //\n        // 1. the lower bits are multiplied by R^2, as normal\n        // 2. the upper bits are multiplied by R^2 * 2^256 = R^3\n        //\n        // and computing their sum in the field. 
It remains to see that arbitrary 256-bit\n        // numbers can be placed into Montgomery form safely using the reduction. The\n        // reduction works so long as the product is less than R=2^256 multipled by\n        // the modulus. This holds because for any `c` smaller than the modulus, we have\n        // that (2^256 - 1)*c is an acceptable product for the reduction. Therefore, the\n        // reduction always works so long as `c` is in the field; in this case it is either the\n        // constant `R2` or `R3`.\n        let d0 = FieldElement([limbs[0], limbs[1], limbs[2], limbs[3], 0]);\n        let d1 = FieldElement([limbs[4], limbs[5], limbs[6], limbs[7], 0]);\n        // Convert to Montgomery form\n        d0 * R2 + d1 * R3\n    }\n\n    /// Converts from an integer represented in little endian\n    /// into its (congruent) `FieldElement` representation.\n    pub const fn from_raw(val: [u64; 4]) -> Self {\n        (&FieldElement([val[0], val[1], val[2], val[3], 0])).mul(&R2)\n    }\n\n    /// Squares this element.\n    #[inline]\n    pub const fn square(&self) -> FieldElement {\n        let (r1, carry) = mac(0, self.0[0], self.0[1], 0);\n        let (r2, carry) = mac(0, self.0[0], self.0[2], carry);\n        let (r3, r4) = mac(0, self.0[0], self.0[3], carry);\n\n        let (r3, carry) = mac(r3, self.0[1], self.0[2], 0);\n        let (r4, r5) = mac(r4, self.0[1], self.0[3], carry);\n\n        let (r5, r6) = mac(r5, self.0[2], self.0[3], 0);\n\n        let r7 = r6 >> 63;\n        let r6 = (r6 << 1) | (r5 >> 63);\n        let r5 = (r5 << 1) | (r4 >> 63);\n        let r4 = (r4 << 1) | (r3 >> 63);\n        let r3 = (r3 << 1) | (r2 >> 63);\n        let r2 = (r2 << 1) | (r1 >> 63);\n        let r1 = r1 << 1;\n\n        let (r0, carry) = mac(0, self.0[0], self.0[0], 0);\n        let (r1, carry) = adc(0, r1, carry);\n        let (r2, carry) = mac(r2, self.0[1], self.0[1], carry);\n        let (r3, carry) = adc(0, r3, carry);\n        let (r4, carry) = mac(r4, 
self.0[2], self.0[2], carry);\n        let (r5, carry) = adc(0, r5, carry);\n        let (r6, carry) = mac(r6, self.0[3], self.0[3], carry);\n        let (r7, _) = adc(0, r7, carry);\n\n        FieldElement::montgomery_reduce(r0, r1, r2, r3, r4, r5, r6, r7, 0)\n    }\n\n    pub fn pow_by_self(&self, exp: &Self) -> Self {\n        let mut registers = [0u64; 4];\n\n        let exp_bytes = exp.to_bytes();\n        registers[0] = u64::from_ne_bytes(exp_bytes[0..8].try_into().unwrap());\n        registers[1] = u64::from_ne_bytes(exp_bytes[8..16].try_into().unwrap());\n        registers[2] = u64::from_ne_bytes(exp_bytes[16..24].try_into().unwrap());\n        registers[3] = u64::from_ne_bytes(exp_bytes[24..32].try_into().unwrap());\n\n        self.pow(&registers)\n    }\n\n    /// Exponentiates `self` by `by`, where `by` is a\n    /// little-endian order integer exponent.\n    pub fn pow(&self, by: &[u64; 4]) -> Self {\n        let mut res = Self::one();\n        for e in by.iter().rev() {\n            for i in (0..64).rev() {\n                res = res.square();\n                let mut tmp = res;\n                tmp *= self;\n                res.conditional_assign(&tmp, (((*e >> i) & 0x1) as u8).into());\n            }\n        }\n        res\n    }\n\n    pub fn invert(&self) -> CtOption<Self> {\n        // Using an addition chain from\n        // https://briansmith.org/ecc-inversion-addition-chains-01#secp256k1_scalar_inversion\n        let x_1 = *self;\n        let x_10 = self.pow2k(1);\n        let x_11 = x_10.mul(&x_1);\n        let x_101 = x_10.mul(&x_11);\n        let x_111 = x_10.mul(&x_101);\n        let x_1001 = x_10.mul(&x_111);\n        let x_1011 = x_10.mul(&x_1001);\n        let x_1101 = x_10.mul(&x_1011);\n\n        let x6 = x_1101.pow2k(2).mul(&x_1011);\n        let x8 = x6.pow2k(2).mul(&x_11);\n        let x14 = x8.pow2k(6).mul(&x6);\n        let x28 = x14.pow2k(14).mul(&x14);\n        let x56 = x28.pow2k(28).mul(&x28);\n\n        #[rustfmt::skip]\n    
        let res = x56\n            .pow2k(56).mul(&x56)\n            .pow2k(14).mul(&x14)\n            .pow2k(3).mul(&x_101)\n            .pow2k(4).mul(&x_111)\n            .pow2k(4).mul(&x_101)\n            .pow2k(5).mul(&x_1011)\n            .pow2k(4).mul(&x_1011)\n            .pow2k(4).mul(&x_111)\n            .pow2k(5).mul(&x_111)\n            .pow2k(6).mul(&x_1101)\n            .pow2k(4).mul(&x_101)\n            .pow2k(3).mul(&x_111)\n            .pow2k(5).mul(&x_1001)\n            .pow2k(6).mul(&x_101)\n            .pow2k(10).mul(&x_111)\n            .pow2k(4).mul(&x_111)\n            .pow2k(9).mul(&x8)\n            .pow2k(5).mul(&x_1001)\n            .pow2k(6).mul(&x_1011)\n            .pow2k(4).mul(&x_1101)\n            .pow2k(5).mul(&x_11)\n            .pow2k(6).mul(&x_1101)\n            .pow2k(10).mul(&x_1101)\n            .pow2k(4).mul(&x_1001)\n            .pow2k(6).mul(&x_1)\n            .pow2k(8).mul(&x6);\n\n        CtOption::new(res, !self.is_zero())\n    }\n\n    pub fn batch_invert(inputs: &mut [FieldElement]) -> FieldElement {\n        // This code is essentially identical to the FieldElement\n        // implementation, and is documented there.  
Unfortunately,\n        // it's not easy to write the routine generically over both base\n        // fields, so each field carries its own copy of Montgomery's batch\n        // inversion trick.\n\n        use zeroize::Zeroizing;\n\n        let n = inputs.len();\n        let one = FieldElement::one();\n\n        // Place scratch storage in a Zeroizing wrapper to wipe it when\n        // we pass out of scope.\n        let scratch_vec = vec![one; n];\n        let mut scratch = Zeroizing::new(scratch_vec);\n\n        // Keep an accumulator of all of the previous products\n        let mut acc = FieldElement::one();\n\n        // Pass through the input vector, recording the previous\n        // products in the scratch space\n        for (input, scratch) in inputs.iter().zip(scratch.iter_mut()) {\n            *scratch = acc;\n\n            acc = acc * input;\n        }\n\n        // acc is nonzero iff all inputs are nonzero\n        debug_assert!(acc != FieldElement::zero());\n\n        // Compute the inverse of all products\n        acc = acc.invert().unwrap();\n\n        // We need to return the product of all inverses later\n        let ret = acc;\n\n        // Pass through the vector backwards to compute the inverses\n        // in place\n        for (input, scratch) in inputs.iter_mut().rev().zip(scratch.iter().rev()) {\n            let tmp = &acc * input.clone();\n            *input = &acc * scratch;\n            acc = tmp;\n        }\n\n        ret\n    }\n\n    #[inline(always)]\n    const fn montgomery_reduce(\n        r0: u64,\n        r1: u64,\n        r2: u64,\n        r3: u64,\n        r4: u64,\n        r5: u64,\n        r6: u64,\n        r7: u64,\n        r8: u64,\n    ) -> Self {\n        // The Montgomery reduction here is based on Algorithm 14.32 in\n        // Handbook of Applied Cryptography\n        // <http://cacr.uwaterloo.ca/hac/about/chap14.pdf>.\n\n        let k = r0.wrapping_mul(INV);\n     
   let (_, carry) = mac(r0, k, MODULUS.0[0], 0);\n        let (r1, carry) = mac(r1, k, MODULUS.0[1], carry);\n        let (r2, carry) = mac(r2, k, MODULUS.0[2], carry);\n        let (r3, carry) = mac(r3, k, MODULUS.0[3], carry);\n        let (r4, carry) = mac(r4, k, MODULUS.0[4], carry);\n        let (r5, carry2) = adc(r5, 0, carry);\n\n        let k = r1.wrapping_mul(INV);\n        let (_, carry) = mac(r1, k, MODULUS.0[0], 0);\n        let (r2, carry) = mac(r2, k, MODULUS.0[1], carry);\n        let (r3, carry) = mac(r3, k, MODULUS.0[2], carry);\n        let (r4, carry) = mac(r4, k, MODULUS.0[3], carry);\n        let (r5, carry) = mac(r5, k, MODULUS.0[4], carry);\n        let (r6, carry2) = adc(r6, carry2, carry);\n\n        let k = r2.wrapping_mul(INV);\n        let (_, carry) = mac(r2, k, MODULUS.0[0], 0);\n        let (r3, carry) = mac(r3, k, MODULUS.0[1], carry);\n        let (r4, carry) = mac(r4, k, MODULUS.0[2], carry);\n        let (r5, carry) = mac(r5, k, MODULUS.0[3], carry);\n        let (r6, carry) = mac(r6, k, MODULUS.0[4], carry);\n        let (r7, carry2) = adc(r7, carry2, carry);\n\n        let k = r3.wrapping_mul(INV);\n        let (_, carry) = mac(r3, k, MODULUS.0[0], 0);\n        let (r4, carry) = mac(r4, k, MODULUS.0[1], carry);\n        let (r5, carry) = mac(r5, k, MODULUS.0[2], carry);\n        let (r6, carry) = mac(r6, k, MODULUS.0[3], carry);\n        let (r7, carry) = mac(r7, k, MODULUS.0[4], carry);\n        let (r8, _) = adc(r8, carry2, carry);\n\n        // Result may be within MODULUS of the correct value\n        (&FieldElement([r4, r5, r6, r7, r8])).sub(&MODULUS)\n    }\n\n    /// Multiplies `rhs` by `self`, returning the result.\n    #[inline]\n    pub const fn mul(&self, rhs: &Self) -> Self {\n        // Schoolbook multiplication\n\n        let (r0, carry) = mac(0, self.0[0], rhs.0[0], 0);\n        let (r1, carry) = mac(0, self.0[0], rhs.0[1], carry);\n        let (r2, carry) = mac(0, self.0[0], rhs.0[2], carry);\n        let (r3, 
carry) = mac(0, self.0[0], rhs.0[3], carry);\n        let (r4, r5) = mac(0, self.0[0], rhs.0[4], carry);\n\n        let (r1, carry) = mac(r1, self.0[1], rhs.0[0], 0);\n        let (r2, carry) = mac(r2, self.0[1], rhs.0[1], carry);\n        let (r3, carry) = mac(r3, self.0[1], rhs.0[2], carry);\n        let (r4, carry) = mac(r4, self.0[1], rhs.0[3], carry);\n        let (r5, r6) = mac(r5, self.0[1], rhs.0[4], carry);\n\n        let (r2, carry) = mac(r2, self.0[2], rhs.0[0], 0);\n        let (r3, carry) = mac(r3, self.0[2], rhs.0[1], carry);\n        let (r4, carry) = mac(r4, self.0[2], rhs.0[2], carry);\n        let (r5, carry) = mac(r5, self.0[2], rhs.0[3], carry);\n        let (r6, r7) = mac(r6, self.0[2], rhs.0[4], carry);\n\n        let (r3, carry) = mac(r3, self.0[3], rhs.0[0], 0);\n        let (r4, carry) = mac(r4, self.0[3], rhs.0[1], carry);\n        let (r5, carry) = mac(r5, self.0[3], rhs.0[2], carry);\n        let (r6, carry) = mac(r6, self.0[3], rhs.0[3], carry);\n        let (r7, r8) = mac(r7, self.0[3], rhs.0[4], carry);\n\n        let (r4, carry) = mac(r4, self.0[4], rhs.0[0], 0);\n        let (r5, carry) = mac(r5, self.0[4], rhs.0[1], carry);\n        let (r6, carry) = mac(r6, self.0[4], rhs.0[2], carry);\n        let (r7, carry) = mac(r7, self.0[4], rhs.0[3], carry);\n        let (r8, _) = mac(r8, self.0[4], rhs.0[4], carry);\n\n        FieldElement::montgomery_reduce(r0, r1, r2, r3, r4, r5, r6, r7, r8)\n    }\n\n    /// Subtracts `rhs` from `self`, returning the result.\n    #[inline]\n    pub const fn sub(&self, rhs: &Self) -> Self {\n        let (d0, borrow) = sbb(self.0[0], rhs.0[0], 0);\n        let (d1, borrow) = sbb(self.0[1], rhs.0[1], borrow);\n        let (d2, borrow) = sbb(self.0[2], rhs.0[2], borrow);\n        let (d3, borrow) = sbb(self.0[3], rhs.0[3], borrow);\n        let (d4, borrow) = sbb(self.0[4], rhs.0[4], borrow);\n\n        // If underflow occurred on the final limb, borrow = 0xfff...fff, otherwise\n        // borrow = 
0x000...000. Thus, we use it as a mask to conditionally add the modulus.\n        let (d0, carry) = adc(d0, MODULUS.0[0] & borrow, 0);\n        let (d1, carry) = adc(d1, MODULUS.0[1] & borrow, carry);\n        let (d2, carry) = adc(d2, MODULUS.0[2] & borrow, carry);\n        let (d3, carry) = adc(d3, MODULUS.0[3] & borrow, carry);\n        let (d4, _) = adc(d4, MODULUS.0[4] & borrow, carry);\n\n        FieldElement([d0, d1, d2, d3, d4])\n    }\n\n    /// Adds `rhs` to `self`, returning the result.\n    #[inline]\n    pub const fn add(&self, rhs: &Self) -> Self {\n        let (d0, carry) = adc(self.0[0], rhs.0[0], 0);\n        let (d1, carry) = adc(self.0[1], rhs.0[1], carry);\n        let (d2, carry) = adc(self.0[2], rhs.0[2], carry);\n        let (d3, carry) = adc(self.0[3], rhs.0[3], carry);\n        let (d4, _) = adc(self.0[4], rhs.0[4], carry);\n\n        // Attempt to subtract the modulus, to ensure the value\n        // is smaller than the modulus.\n        (&FieldElement([d0, d1, d2, d3, d4])).sub(&MODULUS)\n    }\n\n    /// Negates `self`.\n    #[inline]\n    pub const fn neg(&self) -> Self {\n        // Subtract `self` from `MODULUS` to negate. Ignore the final\n        // borrow because it cannot underflow; self is guaranteed to\n        // be in the field.\n        let (d0, borrow) = sbb(MODULUS.0[0], self.0[0], 0);\n        let (d1, borrow) = sbb(MODULUS.0[1], self.0[1], borrow);\n        let (d2, borrow) = sbb(MODULUS.0[2], self.0[2], borrow);\n        let (d3, borrow) = sbb(MODULUS.0[3], self.0[3], borrow);\n        let (d4, _) = sbb(MODULUS.0[4], self.0[4], borrow);\n\n        // `tmp` could be `MODULUS` if `self` was zero. 
Create a mask that is\n        // zero if `self` was zero, and `u64::max_value()` if self was nonzero.\n        let mask = (((self.0[0] | self.0[1] | self.0[2] | self.0[3] | self.0[4]) == 0) as u64)\n            .wrapping_sub(1);\n\n        FieldElement([d0 & mask, d1 & mask, d2 & mask, d3 & mask, d4 & mask])\n    }\n}\n\nimpl<'a> From<&'a FieldElement> for [u8; 32] {\n    fn from(value: &'a FieldElement) -> [u8; 32] {\n        value.to_bytes()\n    }\n}\n\nimpl FieldElement {\n    /// Attempts to parse the given byte array as an SEC1-encoded field element.\n    ///\n    /// Returns None if the byte array does not contain a big-endian integer in the range\n    /// [0, p).\n    pub fn from_sec1(bytes: FieldBytes) -> CtOption<Self> {\n        let mut be = bytes.to_vec();\n        be.reverse();\n\n        Self::from_bytes(&be.as_slice().try_into().unwrap())\n    }\n\n    /// Returns the SEC1 encoding of this field element.\n    pub fn to_sec1(self) -> FieldBytes {\n        let mut le_bytes = self.to_bytes().to_vec();\n        le_bytes.reverse();\n\n        *FieldBytes::from_slice(le_bytes.as_slice())\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_inv() {\n        // Compute -(q^{-1} mod 2^64) mod 2^64 by exponentiating\n        // by totient(2**64) - 1\n\n        let mut inv = 1u64;\n        for _ in 0..63 {\n            inv = inv.wrapping_mul(inv);\n            inv = inv.wrapping_mul(MODULUS.0[0]);\n        }\n        inv = inv.wrapping_neg();\n\n        assert_eq!(inv, INV);\n    }\n\n    #[cfg(feature = \"std\")]\n    #[test]\n    fn test_debug() {\n        assert_eq!(\n            format!(\"{:?}\", FieldElement::zero()),\n            \"0x0000000000000000000000000000000000000000000000000000000000000000\"\n        );\n        assert_eq!(\n            format!(\"{:?}\", FieldElement::one()),\n            \"0x0000000000000000000000000000000000000000000000000000000000000001\"\n        );\n        assert_eq!(\n            
format!(\"{:?}\", R2),\n            // `Debug` prints the canonical value: the constant R2 reduces to\n            // R = 2^256 mod q under Montgomery reduction.\n            \"0x000000000000000000000000000000014551231950b75fc4402da1732fc9bebf\"\n        );\n    }\n\n    #[test]\n    fn test_equality() {\n        assert_eq!(FieldElement::zero(), FieldElement::zero());\n        assert_eq!(FieldElement::one(), FieldElement::one());\n        assert_eq!(R2, R2);\n\n        assert!(FieldElement::zero() != FieldElement::one());\n        assert!(FieldElement::one() != R2);\n    }\n\n    #[test]\n    fn test_to_bytes() {\n        assert_eq!(\n            FieldElement::zero().to_bytes(),\n            [\n                0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n                0, 0, 0, 0\n            ]\n        );\n\n        assert_eq!(\n            FieldElement::one().to_bytes(),\n            [\n                1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n                0, 0, 0, 0\n            ]\n        );\n    }\n\n    #[test]\n    fn test_from_bytes() {\n        assert_eq!(\n            FieldElement::from_bytes(&[\n                0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n                0, 0, 0, 0\n            ])\n            .unwrap(),\n            FieldElement::zero()\n        );\n\n        assert_eq!(\n            FieldElement::from_bytes(&[\n                1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n                0, 0, 0, 0\n            ])\n            .unwrap(),\n            FieldElement::one()\n        );\n    }\n\n    #[test]\n    fn test_from_u512_zero() {\n        assert_eq!(\n            FieldElement::zero(),\n            FieldElement::from_u512([\n                MODULUS.0[0],\n                MODULUS.0[1],\n                MODULUS.0[2],\n                MODULUS.0[3],\n                0,\n                0,\n                0,\n                0\n            ])\n        );\n    }\n\n    #[test]\n    fn test_from_u512_r() {\n    
    assert_eq!(R, FieldElement::from_u512([1, 0, 0, 0, 0, 0, 0, 0]));\n    }\n\n    #[test]\n    fn test_from_u512_r2() {\n        assert_eq!(R2, FieldElement::from_u512([0, 0, 0, 0, 1, 0, 0, 0]));\n    }\n\n    #[test]\n    fn test_from_u512_max() {\n        let max_u64 = 0xffffffffffffffff;\n        assert_eq!(\n            R3 - R,\n            FieldElement::from_u512([\n                max_u64, max_u64, max_u64, max_u64, max_u64, max_u64, max_u64, max_u64\n            ])\n        );\n    }\n\n    #[test]\n    fn test_from_bytes_wide_r2() {\n        assert_eq!(\n            R2,\n            FieldElement::from_bytes_wide(&[\n                191, 190, 201, 47, 115, 161, 45, 64, 196, 95, 183, 80, 25, 35, 81, 69, 1, 0, 0, 0,\n                0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n                0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0\n            ])\n        );\n    }\n\n    #[test]\n    fn test_from_bytes_wide_negative_one() {\n        assert_eq!(\n            -&FieldElement::one(),\n            FieldElement::from_bytes_wide(&[\n                64, 65, 54, 208, 140, 94, 210, 191, 59, 160, 72, 175, 230, 220, 174, 186, 254, 255,\n                255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 0, 0, 0, 0,\n                0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,\n            ])\n        );\n    }\n\n    #[test]\n    fn test_zero() {\n        assert_eq!(FieldElement::zero(), -&FieldElement::zero());\n        assert_eq!(\n            FieldElement::zero(),\n            FieldElement::zero() + FieldElement::zero()\n        );\n        assert_eq!(\n            FieldElement::zero(),\n            FieldElement::zero() - FieldElement::zero()\n        );\n        assert_eq!(\n            FieldElement::zero(),\n            FieldElement::zero() * FieldElement::zero()\n        );\n    }\n\n    const LARGEST: FieldElement = FieldElement([\n        
0xbfd25e8cd0364140,\n        0xbaaedce6af48a03b,\n        0xfffffffffffffffe,\n        0xffffffffffffffff,\n        0,\n    ]);\n\n    #[test]\n    fn test_addition() {\n        let mut tmp = LARGEST;\n        tmp += &LARGEST;\n\n        let target = FieldElement([\n            0xbfd25e8cd036413f,\n            0xbaaedce6af48a03b,\n            0xfffffffffffffffe,\n            0xffffffffffffffff,\n            0,\n        ]);\n\n        assert_eq!(tmp, target);\n\n        let mut tmp = LARGEST;\n        tmp += &FieldElement([1, 0, 0, 0, 0]);\n\n        assert_eq!(tmp, FieldElement::zero());\n    }\n\n    #[test]\n    fn test_negation() {\n        let tmp = -&LARGEST;\n\n        assert_eq!(tmp, FieldElement([1, 0, 0, 0, 0]));\n\n        let tmp = -&FieldElement::zero();\n        assert_eq!(tmp, FieldElement::zero());\n        let tmp = -&FieldElement([1, 0, 0, 0, 0]);\n        assert_eq!(tmp, LARGEST);\n    }\n\n    #[test]\n    fn test_subtraction() {\n        let mut tmp = LARGEST;\n        tmp -= &LARGEST;\n\n        assert_eq!(tmp, FieldElement::zero());\n\n        let mut tmp = FieldElement::zero();\n        tmp -= &LARGEST;\n\n        let mut tmp2 = MODULUS;\n        tmp2 -= &LARGEST;\n\n        assert_eq!(tmp, tmp2);\n    }\n\n    #[test]\n    fn test_multiplication() {\n        let mut cur = LARGEST;\n\n        for _ in 0..100 {\n            let mut tmp = cur;\n            tmp *= &cur;\n\n            let mut tmp2 = FieldElement::zero();\n            for b in cur\n                .to_bytes()\n                .iter()\n                .rev()\n                .flat_map(|byte| (0..8).rev().map(move |i| ((byte >> i) & 1u8) == 1u8))\n            {\n                let tmp3 = tmp2;\n                tmp2.add_assign(&tmp3);\n\n                if b {\n                    tmp2.add_assign(&cur);\n                }\n            }\n\n            assert_eq!(tmp, tmp2);\n\n            cur.add_assign(&LARGEST);\n        }\n    }\n\n    #[test]\n    fn test_squaring() {\n        
let mut cur = LARGEST;\n\n        for _ in 0..100 {\n            let mut tmp = cur;\n            tmp = tmp.square();\n\n            let mut tmp2 = FieldElement::zero();\n            for b in cur\n                .to_bytes()\n                .iter()\n                .rev()\n                .flat_map(|byte| (0..8).rev().map(move |i| ((byte >> i) & 1u8) == 1u8))\n            {\n                let tmp3 = tmp2;\n                tmp2.add_assign(&tmp3);\n\n                if b {\n                    tmp2.add_assign(&cur);\n                }\n            }\n\n            assert_eq!(tmp, tmp2);\n\n            cur.add_assign(&LARGEST);\n        }\n    }\n\n    #[test]\n    fn test_inversion() {\n        assert_eq!(FieldElement::zero().invert().is_none().unwrap_u8(), 1);\n        assert_eq!(FieldElement::one().invert().unwrap(), FieldElement::one());\n        assert_eq!(\n            (-&FieldElement::one()).invert().unwrap(),\n            -&FieldElement::one()\n        );\n\n        let a = FieldElement::from(123);\n        let result = a.invert().unwrap();\n        println!(\"result {:?}\", result);\n\n        let mut tmp = R2;\n\n        for _ in 0..100 {\n            let mut tmp2 = tmp.invert().unwrap();\n            println!(\"tmp2 {:?}\", tmp2);\n            tmp2.mul_assign(&tmp);\n\n            assert_eq!(tmp2, FieldElement::one());\n\n            tmp.add_assign(&R2);\n        }\n    }\n\n    #[test]\n    fn test_invert_is_pow() {\n        let q_minus_2 = [\n            0xbfd25e8cd036413f,\n            0xbaaedce6af48a03b,\n            0xfffffffffffffffe,\n            0xffffffffffffffff,\n        ];\n\n        let mut r1 = R;\n        let mut r2 = R;\n        let mut r3 = R;\n\n        for _ in 0..100 {\n            r1 = r1.invert().unwrap();\n            r2 = r2.pow_vartime(&q_minus_2);\n            r3 = r3.pow(&q_minus_2);\n\n            assert_eq!(r1, r2);\n            assert_eq!(r2, r3);\n            // Add R so we check something different next time around\n        
    r1.add_assign(&R);\n            r2 = r1;\n            r3 = r1;\n        }\n    }\n\n    #[test]\n    fn test_from_raw() {\n        assert_eq!(\n            FieldElement::from_raw([0x402da1732fc9bebe, 0x4551231950b75fc4, 0x1, 0x0]),\n            FieldElement::from_raw([0xffffffffffffffff; 4])\n        );\n\n        assert_eq!(\n            FieldElement::from_raw(MODULUS.0[..4].try_into().unwrap()),\n            FieldElement::zero()\n        );\n\n        assert_eq!(FieldElement::from_raw([1, 0, 0, 0]), R);\n    }\n\n    #[test]\n    fn test_double() {\n        let a = FieldElement::from_raw([\n            0x1fff3231233ffffd,\n            0x4884b7fa00034802,\n            0x998c4fefecbc4ff3,\n            0x1824b159acc50562,\n        ]);\n\n        assert_eq!(a.double(), a + a);\n    }\n}\n"
  },
  {
    "path": "packages/secq256k1/src/field/mod.rs",
    "content": "use primeorder::{\n    elliptic_curve::subtle::{Choice, CtOption},\n    PrimeField,\n};\n\npub trait BaseField: PrimeField {\n    fn to_bytes(&self) -> [u8; 32];\n    /// Converts an element of `FieldElement` into a byte representation in\n    /// big-endian byte order.\n    fn to_be_bytes(&self) -> [u8; 32];\n    fn from_bytes(bytes: &[u8; 32]) -> CtOption<Self>;\n}\n\npub trait SqrtRatio: BaseField {\n    const C1: u64;\n    const C3: Self;\n    const C4: Self;\n    const C5: Self;\n    const C6: Self;\n    const C7: Self;\n\n    fn sqrt_ratio(u: &Self, v: &Self) -> (Choice, Self);\n}\n\npub mod field_secp;\npub mod field_secq;\n"
  },
  {
    "path": "packages/secq256k1/src/hashtocurve.rs",
    "content": "use crate::field::{BaseField, SqrtRatio};\nuse k256::elliptic_curve::subtle::{Choice, ConstantTimeEq};\n\n// https://www.ietf.org/archive/id/draft-irtf-cfrg-hash-to-curve-13.html#section-3\npub fn hash_to_curve<F: BaseField + SqrtRatio>(\n    u1: F,\n    u2: F,\n    curve_a: F,\n    curve_b: F,\n    z: F,\n    k: [F; 13],\n) -> ((F, F), (F, F)) {\n    let q1 = map_to_curve_simple_swu(u1, curve_a, curve_b, z);\n    let q2 = map_to_curve_simple_swu(u2, curve_a, curve_b, z);\n\n    // iso_map and add then together\n    let p1 = iso_map(q1.0, q1.1, k);\n    let p2 = iso_map(q2.0, q2.1, k);\n\n    (p1, p2)\n}\n\n// https://www.ietf.org/archive/id/draft-irtf-cfrg-hash-to-curve-13.html#appendix-E.1\nfn iso_map<F: BaseField + SqrtRatio>(x: F, y: F, k: [F; 13]) -> (F, F) {\n    let x_squared = x.pow_vartime(&[2, 0, 0, 0]);\n    let x_cubed = x_squared * x;\n\n    let x_num = k[0] * x_cubed + k[1] * x_squared + k[2] * x + k[3];\n    let x_den = x_squared + k[4] * x + k[5];\n\n    let x_f0 = x_num * x_den.invert().unwrap();\n\n    let y_num = k[6] * x_cubed + k[7] * x_squared + k[8] * x + k[9];\n    let y_den = x_cubed + k[10] * x_squared + k[11] * x + k[12];\n\n    let y_f0 = y * (y_num * y_den.invert().unwrap());\n\n    (x_f0, y_f0)\n}\n\n// https://www.ietf.org/archive/id/draft-irtf-cfrg-hash-to-curve-16.html#appendix-F.2\nfn map_to_curve_simple_swu<F: BaseField + SqrtRatio>(u: F, curve_a: F, curve_b: F, z: F) -> (F, F) {\n    let mut tv1 = u * u;\n    tv1 = z * tv1;\n    let mut tv2 = tv1 * tv1;\n    tv2 = tv2 + tv1;\n    let mut tv3 = tv2 + F::one();\n    tv3 = curve_b * tv3;\n\n    let mut tv4 = F::conditional_select(&z, &-tv2, Choice::from(!tv2.is_zero()));\n    tv4 = curve_a * tv4;\n\n    tv2 = tv3 * tv3;\n    let mut tv6 = tv4 * tv4;\n    let mut tv5 = curve_a * tv6;\n    tv2 = tv2 + tv5;\n\n    tv2 = tv2 * tv3;\n    tv6 = tv6 * tv4;\n    tv5 = curve_b * tv6;\n\n    tv2 = tv2 + tv5;\n    let mut x = tv1 * tv3;\n\n    let (is_gx1_square, y1) = 
F::sqrt_ratio(&tv2, &tv6);\n\n    let mut y = tv1 * u;\n    y = y * y1;\n    x = F::conditional_select(&x, &tv3, is_gx1_square);\n    y = F::conditional_select(&y, &y1, is_gx1_square);\n\n    y = F::conditional_select(&(-y), &y, u.is_odd().ct_eq(&y.is_odd()));\n\n    x = x * tv4.invert().unwrap();\n    (x, y)\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::field::field_secp::FieldElement;\n    use hex_literal::hex;\n    use k256::elliptic_curve::sec1::FromEncodedPoint;\n    use k256::{AffinePoint, EncodedPoint, ProjectivePoint};\n    type F = FieldElement;\n\n    // The constants are outputs of hashtocurve_params.sage\n\n    // 28734576633528757162648956269730739219262246272443394170905244663053633733939\n    const ISO_A: F = FieldElement([\n        15812504324673914017,\n        4924912935180573090,\n        11593825521208392688,\n        5790129131709978969,\n        0,\n    ]);\n\n    // 1771\n    const ISO_B: F = FieldElement([7606388811483, 0, 0, 0, 0]);\n\n    // -11\n    const ISO_Z: F = FieldElement([\n        18446744022169932340,\n        18446744073709551615,\n        18446744073709551615,\n        18446744073709551615,\n        0,\n    ]);\n\n    const ISO_CONSTANTS: [F; 13] = [\n        F::from_raw([\n            10248191149674768524,\n            4099276460824344803,\n            16397105843297379214,\n            10248191152060862008,\n        ]),\n        F::from_raw([\n            5677861232072053346,\n            16451756383528566833,\n            16331199996347402988,\n            6002227985152881894,\n        ]),\n        F::from_raw([\n            16140637477814429057,\n            15390439281582816146,\n            13399077293683197125,\n            564028334007329237,\n        ]),\n        F::from_raw([\n            10248191149674768583,\n            4099276460824344803,\n            16397105843297379214,\n            10248191152060862008,\n        ]),\n        F::from_raw([\n            14207262949819313428,\n            
491854862080688571,\n            17853591451159765588,\n            17126563718956833821,\n        ]),\n        F::from_raw([\n            11522098205669897371,\n            9713490981125900413,\n            11286949528964841693,\n            15228765018197889418,\n        ]),\n        F::from_raw([\n            9564978407794773380,\n            13664254869414482678,\n            11614616639002310276,\n            3416063717353620669,\n        ]),\n        F::from_raw([\n            12062302652890802481,\n            8225878191764283416,\n            8165599998173701494,\n            3001113992576440947,\n        ]),\n        F::from_raw([\n            16139934577133973923,\n            7240293169244854895,\n            12236461929419286229,\n            14365933273833241615,\n        ]),\n        F::from_raw([\n            11614616637729727036,\n            3416063717353620669,\n            7515340178177965473,\n            5465701947765793071,\n        ]),\n        F::from_raw([\n            12087522392169162607,\n            737782293121032857,\n            17557015139884872574,\n            7243101504725699116,\n        ]),\n        F::from_raw([\n            16119550551890077043,\n            10693728869668149624,\n            15414104513184973464,\n            8792806907174565023,\n        ]),\n        F::from_raw([\n            18446744069414582587,\n            18446744073709551615,\n            18446744073709551615,\n            18446744073709551615,\n        ]),\n    ];\n\n    struct TestSuite {\n        u1: [u8; 32],\n        u2: [u8; 32],\n        px: [u8; 32],\n        py: [u8; 32],\n    }\n\n    impl TestSuite {\n        fn new(u1: [u8; 32], u2: [u8; 32], px: [u8; 32], py: [u8; 32]) -> Self {\n            Self { u1, u2, px, py }\n        }\n    }\n\n    fn assert_hash_to_curve(u1: FieldElement, u2: FieldElement, expected: AffinePoint) {\n        let (p1_coords, p2_coords) = hash_to_curve(u1, u2, ISO_A, ISO_B, ISO_Z, ISO_CONSTANTS);\n\n        let p1x 
= p1_coords.0.to_be_bytes();\n        let p1y = p1_coords.1.to_be_bytes();\n        let p2x = p2_coords.0.to_be_bytes();\n        let p2y = p2_coords.1.to_be_bytes();\n\n        let p1_encoded = EncodedPoint::from_affine_coordinates(&p1x.into(), &p1y.into(), false);\n        let p2_encoded = EncodedPoint::from_affine_coordinates(&p2x.into(), &p2y.into(), false);\n\n        let p1 = ProjectivePoint::from_encoded_point(&p1_encoded).unwrap();\n        let p2 = ProjectivePoint::from_encoded_point(&p2_encoded).unwrap();\n\n        let result = p1 + p2;\n\n        assert_eq!(result.to_affine(), expected);\n    }\n\n    #[test]\n    fn test_secp_hash_to_curve() {\n        // Use test suites from:\n        // https://www.ietf.org/archive/id/draft-irtf-cfrg-hash-to-curve-16.html#appendix-J.8.1\n        let suites: [TestSuite; 5] = [\n            TestSuite::new(\n                hex!(\"6b0f9910dd2ba71c78f2ee9f04d73b5f4c5f7fc773a701abea1e573cab002fb3\"),\n                hex!(\"1ae6c212e08fe1a5937f6202f929a2cc8ef4ee5b9782db68b0d5799fd8f09e16\"),\n                hex!(\"c1cae290e291aee617ebaef1be6d73861479c48b841eaba9b7b5852ddfeb1346\"),\n                hex!(\"64fa678e07ae116126f08b022a94af6de15985c996c3a91b64c406a960e51067\"),\n            ),\n            TestSuite::new(\n                hex!(\"128aab5d3679a1f7601e3bdf94ced1f43e491f544767e18a4873f397b08a2b61\"),\n                hex!(\"5897b65da3b595a813d0fdcc75c895dc531be76a03518b044daaa0f2e4689e00\"),\n                hex!(\"3377e01eab42db296b512293120c6cee72b6ecf9f9205760bd9ff11fb3cb2c4b\"),\n                hex!(\"7f95890f33efebd1044d382a01b1bee0900fb6116f94688d487c6c7b9c8371f6\"),\n            ),\n            TestSuite::new(\n                hex!(\"ea67a7c02f2cd5d8b87715c169d055a22520f74daeb080e6180958380e2f98b9\"),\n                hex!(\"7434d0d1a500d38380d1f9615c021857ac8d546925f5f2355319d823a478da18\"),\n                hex!(\"bac54083f293f1fe08e4a70137260aa90783a5cb84d3f35848b324d0674b0e3a\"),\n                
hex!(\"4436476085d4c3c4508b60fcf4389c40176adce756b398bdee27bca19758d828\"),\n            ),\n            TestSuite::new(\n                hex!(\"eda89a5024fac0a8207a87e8cc4e85aa3bce10745d501a30deb87341b05bcdf5\"),\n                hex!(\"dfe78cd116818fc2c16f3837fedbe2639fab012c407eac9dfe9245bf650ac51d\"),\n                hex!(\"e2167bc785333a37aa562f021f1e881defb853839babf52a7f72b102e41890e9\"),\n                hex!(\"f2401dd95cc35867ffed4f367cd564763719fbc6a53e969fb8496a1e6685d873\"),\n            ),\n            TestSuite::new(\n                hex!(\"8d862e7e7e23d7843fe16d811d46d7e6480127a6b78838c277bca17df6900e9f\"),\n                hex!(\"68071d2530f040f081ba818d3c7188a94c900586761e9115efa47ae9bd847938\"),\n                hex!(\"e3c8d35aaaf0b9b647e88a0a0a7ee5d5bed5ad38238152e4e6fd8c1f8cb7c998\"),\n                hex!(\"8446eeb6181bf12f56a9d24e262221cc2f0c4725c7e3803024b5888ee5823aa6\"),\n            ),\n        ];\n\n        for suite in suites {\n            let expected_point =\n                EncodedPoint::from_affine_coordinates(&suite.px.into(), &suite.py.into(), false);\n            let expected_point = AffinePoint::from_encoded_point(&expected_point).unwrap();\n\n            let mut u1 = suite.u1.clone();\n            u1.reverse();\n            let mut u2 = suite.u2.clone();\n            u2.reverse();\n\n            let u1 = FieldElement::from_bytes(&u1).unwrap();\n            let u2 = FieldElement::from_bytes(&u2).unwrap();\n\n            assert_hash_to_curve(u1, u2, expected_point);\n        }\n    }\n}\n"
  },
  {
    "path": "packages/secq256k1/src/lib.rs",
    "content": "pub mod affine;\npub mod field;\nmod hashtocurve;\npub mod scalar;\n\npub use affine::AffinePoint;\nuse affine::AffinePointCore;\npub use primeorder::elliptic_curve;\npub use primeorder::elliptic_curve::bigint::U256;\n\nuse field::field_secq::FieldElement;\nuse primeorder::elliptic_curve::{AffineArithmetic, Curve, ProjectiveArithmetic, ScalarArithmetic};\nuse primeorder::{PrimeCurve, PrimeCurveParams};\npub use scalar::Scalar;\n\npub type EncodedPoint = primeorder::elliptic_curve::sec1::EncodedPoint<Secq256K1>;\npub type FieldBytes = primeorder::elliptic_curve::FieldBytes<Secq256K1>;\npub type ProjectivePoint = primeorder::ProjectivePoint<Secq256K1>;\n\npub const ORDER: U256 =\n    U256::from_be_hex(\"fffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2f\");\n\n#[derive(Copy, Clone, Debug, Default, Eq, PartialEq, PartialOrd, Ord)]\npub struct Secq256K1;\n\nimpl Curve for Secq256K1 {\n    type UInt = U256;\n\n    const ORDER: U256 =\n        U256::from_be_hex(\"fffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2f\");\n}\n\nimpl PrimeCurveParams for Secq256K1 {\n    type FieldElement = FieldElement;\n\n    const ZERO: FieldElement = FieldElement::ZERO;\n    const ONE: FieldElement = FieldElement::ONE;\n\n    const EQUATION_A: FieldElement = FieldElement::ZERO;\n\n    const EQUATION_B: FieldElement =\n        FieldElement([13924965285611452217, 16516940299852029533, 8, 0, 0]); // 7 * R2\n\n    const GENERATOR: (FieldElement, FieldElement) = (\n        // 76c39f5585cb160eb6b06c87a2ce32e23134e45a097781a6a24288e37702eda6 * R2\n        FieldElement([\n            10469571329630693389,\n            10742150477581383480,\n            16610251588214968909,\n            7161385764161811800,\n            0,\n        ]),\n        // 3ffc646c7b2918b5dc2d265a8e82a7f7d18983d26e8dc055a4120ddad952677f * R2\n        FieldElement([\n            12565599782544440070,\n            11151484775266214907,\n            5786122696412099978,\n            
14641184162808952937,\n            0,\n        ]),\n    );\n}\n\nimpl PrimeCurve for Secq256K1 {}\n\nimpl AffineArithmetic for Secq256K1 {\n    type AffinePoint = AffinePointCore;\n}\n\nimpl ProjectiveArithmetic for Secq256K1 {\n    type ProjectivePoint = ProjectivePoint;\n}\n\nimpl ScalarArithmetic for Secq256K1 {\n    type Scalar = Scalar;\n}\n"
  },
  {
    "path": "packages/secq256k1/src/scalar.rs",
    "content": "use crate::field::field_secp::FieldElement;\n\nuse super::{FieldBytes, Secq256K1};\nuse crate::field::BaseField;\n\nuse ff::{Field, PrimeField, PrimeFieldBits};\nuse k256::elliptic_curve::bigint::Encoding;\nuse primeorder::elliptic_curve::{\n    bigint::{Limb, U256},\n    generic_array::arr,\n    ops::Reduce,\n    rand_core::RngCore,\n    subtle::{Choice, ConditionallySelectable, ConstantTimeEq, CtOption},\n    zeroize::DefaultIsZeroes,\n    Curve, Error, IsHigh, Result,\n};\n\nuse std::{\n    iter::{Product, Sum},\n    ops::{Add, AddAssign, Mul, MulAssign, Neg, Sub, SubAssign},\n};\n\ntype ScalarCore = primeorder::elliptic_curve::ScalarCore<Secq256K1>;\n\n#[derive(Copy, Clone, Debug, Default, Eq, PartialEq, PartialOrd, Ord)]\npub struct Scalar(pub ScalarCore);\n\nimpl Field for Scalar {\n    fn one() -> Self {\n        Self(ScalarCore::ONE)\n    }\n\n    fn zero() -> Self {\n        Self(ScalarCore::ZERO)\n    }\n\n    fn random(mut rng: impl RngCore) -> Self {\n        let mut bytes = FieldBytes::default();\n\n        loop {\n            rng.fill_bytes(&mut bytes);\n            if let Some(scalar) = Self::from_repr(bytes).into() {\n                return scalar;\n            }\n        }\n    }\n\n    fn is_zero(&self) -> Choice {\n        self.0.is_zero()\n    }\n\n    #[must_use]\n    fn square(&self) -> Self {\n        unimplemented!();\n    }\n\n    #[must_use]\n    fn double(&self) -> Self {\n        self.add(self)\n    }\n\n    fn invert(&self) -> CtOption<Self> {\n        unimplemented!();\n    }\n\n    fn sqrt(&self) -> CtOption<Self> {\n        unimplemented!();\n    }\n}\n\nimpl PrimeField for Scalar {\n    type Repr = FieldBytes;\n\n    const NUM_BITS: u32 = 256;\n    const CAPACITY: u32 = 255;\n    const S: u32 = 4;\n\n    fn from_repr(bytes: FieldBytes) -> CtOption<Self> {\n        ScalarCore::from_be_bytes(bytes).map(Self)\n    }\n\n    fn to_repr(&self) -> FieldBytes {\n        self.0.to_be_bytes()\n    }\n\n    fn is_odd(&self) -> 
Choice {\n        self.0.is_odd()\n    }\n\n    fn multiplicative_generator() -> Self {\n        7u64.into()\n    }\n\n    fn root_of_unity() -> Self {\n        Self::from_repr(arr![u8;\n            0xff, 0xc9, 0x7f, 0x06, 0x2a, 0x77, 0x09, 0x92, 0xba, 0x80, 0x7a, 0xce, 0x84, 0x2a,\n            0x3d, 0xfc, 0x15, 0x46, 0xca, 0xd0, 0x04, 0x37, 0x8d, 0xaf, 0x05, 0x92, 0xd7, 0xfb,\n            0xb4, 0x1e, 0x66, 0x02,\n        ])\n        .unwrap()\n    }\n}\n\nimpl DefaultIsZeroes for Scalar {}\n\nimpl ConstantTimeEq for Scalar {\n    fn ct_eq(&self, other: &Self) -> Choice {\n        self.0.ct_eq(&other.0)\n    }\n}\n\nimpl Add<Scalar> for Scalar {\n    type Output = Scalar;\n\n    fn add(self, other: Scalar) -> Scalar {\n        self.add(&other)\n    }\n}\n\nimpl Add<&Scalar> for Scalar {\n    type Output = Scalar;\n\n    fn add(self, other: &Scalar) -> Scalar {\n        Self(self.0.add(&other.0))\n    }\n}\n\nimpl AddAssign<Scalar> for Scalar {\n    fn add_assign(&mut self, other: Scalar) {\n        *self = *self + other;\n    }\n}\n\nimpl AddAssign<&Scalar> for Scalar {\n    fn add_assign(&mut self, other: &Scalar) {\n        *self = *self + other;\n    }\n}\n\nimpl Sub<Scalar> for Scalar {\n    type Output = Scalar;\n\n    fn sub(self, other: Scalar) -> Scalar {\n        self.sub(&other)\n    }\n}\n\nimpl Sub<&Scalar> for Scalar {\n    type Output = Scalar;\n\n    fn sub(self, other: &Scalar) -> Scalar {\n        Self(self.0.sub(&other.0))\n    }\n}\n\nimpl SubAssign<Scalar> for Scalar {\n    fn sub_assign(&mut self, other: Scalar) {\n        *self = *self - other;\n    }\n}\n\nimpl SubAssign<&Scalar> for Scalar {\n    fn sub_assign(&mut self, other: &Scalar) {\n        *self = *self - other;\n    }\n}\n\nimpl Mul<Scalar> for Scalar {\n    type Output = Scalar;\n\n    fn mul(self, other: Scalar) -> Scalar {\n        let self_as_f = FieldElement::from_bytes(&self.to_bytes()).unwrap();\n        let other_as_f = FieldElement::from_bytes(&other.to_bytes()).unwrap();\n 
       let result = self_as_f.mul(other_as_f);\n        Scalar::from_repr(*FieldBytes::from_slice(&result.to_be_bytes())).unwrap()\n    }\n}\n\nimpl Mul<&Scalar> for Scalar {\n    type Output = Scalar;\n\n    fn mul(self, other: &Scalar) -> Scalar {\n        self.mul(*other)\n    }\n}\n\nimpl MulAssign<Scalar> for Scalar {\n    fn mul_assign(&mut self, rhs: Scalar) {\n        *self = self.mul(rhs)\n    }\n}\n\nimpl MulAssign<&Scalar> for Scalar {\n    fn mul_assign(&mut self, rhs: &Scalar) {\n        *self = self.mul(*rhs)\n    }\n}\n\nimpl Neg for Scalar {\n    type Output = Scalar;\n\n    fn neg(self) -> Scalar {\n        Self(self.0.neg())\n    }\n}\n\nimpl Sum for Scalar {\n    fn sum<I: Iterator<Item = Self>>(_iter: I) -> Self {\n        unimplemented!();\n    }\n}\n\nimpl<'a> Sum<&'a Scalar> for Scalar {\n    fn sum<I: Iterator<Item = &'a Scalar>>(_iter: I) -> Self {\n        unimplemented!();\n    }\n}\n\nimpl Product for Scalar {\n    fn product<I: Iterator<Item = Self>>(_iter: I) -> Self {\n        unimplemented!();\n    }\n}\n\nimpl<'a> Product<&'a Scalar> for Scalar {\n    fn product<I: Iterator<Item = &'a Scalar>>(_iter: I) -> Self {\n        unimplemented!();\n    }\n}\n\nimpl Reduce<U256> for Scalar {\n    fn from_uint_reduced(w: U256) -> Self {\n        let (r, underflow) = w.sbb(&Secq256K1::ORDER, Limb::ZERO);\n        let underflow = Choice::from((underflow.0 >> (Limb::BIT_SIZE - 1)) as u8);\n        let reduced = U256::conditional_select(&w, &r, !underflow);\n        Self(ScalarCore::new(reduced).unwrap())\n    }\n}\n\nimpl TryFrom<U256> for Scalar {\n    type Error = Error;\n\n    fn try_from(w: U256) -> Result<Self> {\n        Option::from(ScalarCore::new(w)).map(Self).ok_or(Error)\n    }\n}\n\nimpl From<Scalar> for U256 {\n    fn from(scalar: Scalar) -> U256 {\n        *scalar.0.as_uint()\n    }\n}\n\nimpl From<u64> for Scalar {\n    fn from(n: u64) -> Scalar {\n        Self(n.into())\n    }\n}\n\nimpl From<ScalarCore> for Scalar {\n    fn 
from(scalar: ScalarCore) -> Scalar {\n        Self(scalar)\n    }\n}\n\nimpl From<Scalar> for FieldBytes {\n    fn from(scalar: Scalar) -> Self {\n        Self::from(&scalar)\n    }\n}\n\nimpl From<&Scalar> for FieldBytes {\n    fn from(scalar: &Scalar) -> Self {\n        scalar.to_repr()\n    }\n}\n\nimpl IsHigh for Scalar {\n    fn is_high(&self) -> Choice {\n        self.0.is_high()\n    }\n}\n\nimpl ConditionallySelectable for Scalar {\n    fn conditional_select(a: &Self, b: &Self, choice: Choice) -> Self {\n        Self(ScalarCore::conditional_select(&a.0, &b.0, choice))\n    }\n}\n\nimpl Scalar {\n    pub const ZERO: Scalar = Scalar(ScalarCore::ZERO);\n    pub const ONE: Scalar = Scalar(ScalarCore::ONE);\n\n    pub fn to_bytes(&self) -> [u8; 32] {\n        self.0.to_le_bytes().into()\n    }\n}\n\nimpl From<u32> for Scalar {\n    fn from(n: u32) -> Scalar {\n        Self((n as u64).into())\n    }\n}\n\nimpl PrimeFieldBits for Scalar {\n    type ReprBits = [u8; 32];\n\n    fn to_le_bits(&self) -> ff::FieldBits<Self::ReprBits> {\n        self.to_bytes().into()\n    }\n\n    fn char_le_bits() -> ff::FieldBits<Self::ReprBits> {\n        ScalarCore::MODULUS.to_be_bytes().into()\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn add() {\n        let a = Scalar::from(1u32);\n        let b = Scalar::from(2u32);\n        let c = a + b;\n        assert_eq!(c, Scalar::from(3u32));\n    }\n\n    #[test]\n    fn test_all() {\n        let a = Scalar::from(2u64.pow(63) - 2);\n        let b = Scalar::from(2u64.pow(63) - 3);\n        let add = a + b;\n        let sub = a - b;\n        let mul = a * b;\n        let neg = -a;\n\n        println!(\"add {:?}\", add.0.to_string());\n        println!(\"sub {:?}\", sub.0.to_string());\n        println!(\"mul {:?}\", mul.0.to_string());\n        println!(\"neg {:?}\", neg.0.to_string());\n    }\n}\n"
  },
  {
    "path": "packages/spartan_wasm/Cargo.toml",
    "content": "[package]\nname = \"spartan_wasm\"\nversion = \"0.1.0\"\nedition = \"2021\"\n\n[lib]\nname = \"spartan_wasm\"\npath = \"src/lib.rs\"\ncrate-type = [\"cdylib\", \"rlib\"]\n\n# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html\n\n[dependencies]\nspartan = { path = \"../Spartan-secq\" }\nwasm-bindgen = { version = \"0.2.81\", features = [\"serde-serialize\"]}\nconsole_error_panic_hook = \"0.1.7\"\nmerlin = \"3.0.0\"\nweb-sys = { version = \"0.3.60\", features = [\"console\"] }\nserde_json = \"1.0.89\"\nnum-bigint = \"0.4.3\"\nserde = \"1.0.151\"\nbyteorder = \"1.4.3\"\nff = \"0.12.0\"\nsecq256k1 = { path = \"../secq256k1\" }\nserde-wasm-bindgen = \"0.4.5\"\nbincode = \"1.3.3\"\n# Not directly using getrandom in this crate, \n# but some dependencies require getrandom \n# and the \"js\" features needs to be enabled for wasm compatibility\ngetrandom = { version = \"0.2.8\", features = [\"js\"] }\nposeidon = { path = \"../poseidon\" }\nitertools = \"0.9.0\"\ngroup = \"0.12.0\"\n\n"
  },
  {
    "path": "packages/spartan_wasm/LICENSE",
    "content": "MIT License\n\nCopyright (c) 2022 Ethereum Foundation\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE."
  },
  {
    "path": "packages/spartan_wasm/README.md",
    "content": "### Compile\n\nInstall wasm-pack\n\n```\ncurl https://rustwasm.github.io/wasm-pack/installer/init.sh -sSf | sh\n```\n\nRun compile script\n\n```\ncd ../.. && sh ./scripts/build_wasm.sh\n```\n"
  },
  {
    "path": "packages/spartan_wasm/src/lib.rs",
    "content": "pub mod wasm;\n"
  },
  {
    "path": "packages/spartan_wasm/src/wasm.rs",
    "content": "use byteorder::{LittleEndian, ReadBytesExt};\nuse console_error_panic_hook;\nuse ff::PrimeField;\nuse libspartan::{Assignment, Instance, NIZKGens, NIZK};\nuse merlin::Transcript;\nuse poseidon::poseidon_k256::{hash, FieldElement};\nuse secq256k1::{affine::Group, field::BaseField};\nuse std::io::{Error, Read};\nuse wasm_bindgen::prelude::*;\n\npub type G1 = secq256k1::AffinePoint;\npub type F1 = <G1 as Group>::Scalar;\n\n#[wasm_bindgen]\npub fn init_panic_hook() {\n    console_error_panic_hook::set_once();\n}\n\n#[wasm_bindgen]\npub fn prove(circuit: &[u8], vars: &[u8], public_inputs: &[u8]) -> Result<Vec<u8>, JsValue> {\n    let witness = load_witness_from_bin_reader::<F1, _>(vars).unwrap();\n    let witness_bytes = witness\n        .iter()\n        .map(|w| w.to_repr().into())\n        .collect::<Vec<[u8; 32]>>();\n\n    let assignment = Assignment::new(&witness_bytes).unwrap();\n    let circuit: Instance = bincode::deserialize(&circuit).unwrap();\n\n    let num_cons = circuit.inst.get_num_cons();\n    let num_vars = circuit.inst.get_num_vars();\n    let num_inputs = circuit.inst.get_num_inputs();\n\n    // produce public parameters\n    let gens = NIZKGens::new(num_cons, num_vars, num_inputs);\n\n    let mut input = Vec::new();\n    for i in 0..num_inputs {\n        input.push(public_inputs[(i * 32)..((i + 1) * 32)].try_into().unwrap());\n    }\n    let input = Assignment::new(&input).unwrap();\n\n    let mut prover_transcript = Transcript::new(b\"nizk_example\");\n\n    // produce a proof of satisfiability\n    let proof = NIZK::prove(\n        &circuit,\n        assignment.clone(),\n        &input,\n        &gens,\n        &mut prover_transcript,\n    );\n\n    Ok(bincode::serialize(&proof).unwrap())\n}\n\n#[wasm_bindgen]\npub fn verify(circuit: &[u8], proof: &[u8], public_input: &[u8]) -> Result<bool, JsValue> {\n    let circuit: Instance = bincode::deserialize(&circuit).unwrap();\n    let proof: NIZK = 
bincode::deserialize(&proof).unwrap();\n\n    let num_cons = circuit.inst.get_num_cons();\n    let num_vars = circuit.inst.get_num_vars();\n    let num_inputs = circuit.inst.get_num_inputs();\n\n    // produce public parameters\n    let gens = NIZKGens::new(num_cons, num_vars, num_inputs);\n\n    let mut inputs = Vec::new();\n    for i in 0..num_inputs {\n        inputs.push(public_input[(i * 32)..((i + 1) * 32)].try_into().unwrap());\n    }\n\n    let inputs = Assignment::new(&inputs).unwrap();\n\n    let mut verifier_transcript = Transcript::new(b\"nizk_example\");\n\n    let verified = proof\n        .verify(&circuit, &inputs, &mut verifier_transcript, &gens)\n        .is_ok();\n\n    Ok(verified)\n}\n\n#[wasm_bindgen]\npub fn poseidon(input_bytes: &[u8]) -> Result<Vec<u8>, JsValue> {\n    assert_eq!(input_bytes.len(), 64);\n\n    let input = [\n        FieldElement::from_bytes(&input_bytes[0..32].try_into().unwrap()).unwrap(),\n        FieldElement::from_bytes(&input_bytes[32..64].try_into().unwrap()).unwrap(),\n    ];\n\n    let result = hash(&input);\n\n    Ok(result.to_bytes().to_vec())\n}\n\n// Copied from Nova Scotia\npub fn read_field<R: Read, Fr: PrimeField>(mut reader: R) -> Result<Fr, Error> {\n    let mut repr = Fr::zero().to_repr();\n    for digit in repr.as_mut().iter_mut() {\n        // TODO: may need to reverse order?\n        *digit = reader.read_u8()?;\n    }\n    let fr = Fr::from_repr(repr).unwrap();\n    Ok(fr)\n}\n\npub fn load_witness_from_bin_reader<Fr: PrimeField, R: Read>(\n    mut reader: R,\n) -> Result<Vec<Fr>, Error> {\n    let mut wtns_header = [0u8; 4];\n    reader.read_exact(&mut wtns_header)?;\n    if wtns_header != [119, 116, 110, 115] {\n        // ruby -e 'p \"wtns\".bytes' => [119, 116, 110, 115]\n        panic!(\"invalid file header\");\n    }\n    let version = reader.read_u32::<LittleEndian>()?;\n    // println!(\"wtns version {}\", version);\n    if version > 2 {\n        panic!(\"unsupported file version\");\n    }\n    
let num_sections = reader.read_u32::<LittleEndian>()?;\n    if num_sections != 2 {\n        panic!(\"invalid num sections\");\n    }\n    // read the first section\n    let sec_type = reader.read_u32::<LittleEndian>()?;\n    if sec_type != 1 {\n        panic!(\"invalid section type\");\n    }\n    let sec_size = reader.read_u64::<LittleEndian>()?;\n    if sec_size != 4 + 32 + 4 {\n        panic!(\"invalid section len\")\n    }\n    let field_size = reader.read_u32::<LittleEndian>()?;\n    if field_size != 32 {\n        panic!(\"invalid field byte size\");\n    }\n    let mut prime = vec![0u8; field_size as usize];\n    reader.read_exact(&mut prime)?;\n    // if prime != hex!(\"010000f093f5e1439170b97948e833285d588181b64550b829a031e1724e6430\") {\n    //     bail!(\"invalid curve prime {:?}\", prime);\n    // }\n    let witness_len = reader.read_u32::<LittleEndian>()?;\n    // println!(\"witness len {}\", witness_len);\n    let sec_type = reader.read_u32::<LittleEndian>()?;\n    if sec_type != 2 {\n        panic!(\"invalid section type\");\n    }\n    let sec_size = reader.read_u64::<LittleEndian>()?;\n    if sec_size != (witness_len * field_size) as u64 {\n        panic!(\"invalid witness section size {}\", sec_size);\n    }\n    let mut result = Vec::with_capacity(witness_len as usize);\n    for _ in 0..witness_len {\n        result.push(read_field::<&mut R, Fr>(&mut reader)?);\n    }\n    Ok(result)\n}\n\n#[cfg(test)]\nmod test {\n    use super::*;\n    use std::{env::current_dir, fs};\n\n    #[test]\n    fn check_nizk() {\n        let root = current_dir().unwrap();\n        let circuit = fs::read(root.join(\"test_circuit/test_circuit.circuit\")).unwrap();\n        let vars = fs::read(root.join(\"test_circuit/witness.wtns\")).unwrap();\n\n        let public_inputs = [F1::from(1u64), F1::from(1u64), F1::from(1u64)]\n            .iter()\n            .map(|w| w.to_repr())\n            .flatten()\n            .collect::<Vec<u8>>();\n\n        let proof = prove(\n     
       circuit.as_slice(),\n            vars.as_slice(),\n            public_inputs.as_slice(),\n        )\n        .unwrap();\n\n        let result = verify(\n            circuit.as_slice(),\n            proof.as_slice(),\n            public_inputs.as_slice(),\n        );\n\n        assert!(result.unwrap());\n    }\n\n    #[test]\n    fn test_poseidon() {\n        // Using the same inputs as poseidon.test.ts\n        let a = FieldElement::from_str_vartime(\n            \"115792089237316195423570985008687907853269984665640564039457584007908834671663\",\n        )\n        .unwrap()\n        .to_bytes();\n        let b = FieldElement::from_str_vartime(\n            \"115792089237316195423570985008687907853269984665640564039457584007908834671662\",\n        )\n        .unwrap()\n        .to_bytes();\n\n        let mut inputs = [0u8; 64];\n        inputs[..32].copy_from_slice(&a);\n        inputs[32..].copy_from_slice(&b);\n        let result = poseidon(&inputs).unwrap();\n\n        assert_eq!(\n            result.as_slice(),\n            &[\n                181, 226, 121, 200, 61, 3, 57, 70, 184, 30, 115, 145, 192, 7, 138, 73, 36, 8, 40,\n                132, 190, 141, 35, 89, 108, 149, 235, 51, 129, 165, 64, 103\n            ]\n        )\n    }\n}\n"
  },
  {
    "path": "packages/spartan_wasm/test_circuit/test_circuit.circom",
    "content": "pragma circom 2.1.2;\n\ntemplate TestCircuit() {\n    signal input a;\n    signal input b[2];\n    signal output c;\n\n    signal b_prod;\n    b_prod <== b[0] * b[1];\n\n    c <== a * b_prod;\n}\n\ncomponent main { public [ a, b ] } = TestCircuit();"
  },
  {
    "path": "packages/spartan_wasm/test_circuit/test_circuit_js/generate_witness.js",
    "content": "const wc  = require(\"./witness_calculator.js\");\nconst { readFileSync, writeFile } = require(\"fs\");\n\nif (process.argv.length != 5) {\n    console.log(\"Usage: node generate_witness.js <file.wasm> <input.json> <output.wtns>\");\n} else {\n    const input = JSON.parse(readFileSync(process.argv[3], \"utf8\"));\n    \n    const buffer = readFileSync(process.argv[2]);\n    wc(buffer).then(async witnessCalculator => {\n\t//    const w= await witnessCalculator.calculateWitness(input,0);\n\t//    for (let i=0; i< w.length; i++){\n\t//\tconsole.log(w[i]);\n\t//    }\n\tconst buff= await witnessCalculator.calculateWTNSBin(input,0);\n\twriteFile(process.argv[4], buff, function(err) {\n\t    if (err) throw err;\n\t});\n    });\n}\n"
  },
  {
    "path": "packages/spartan_wasm/test_circuit/test_circuit_js/witness_calculator.js",
    "content": "module.exports = async function builder(code, options) {\n\n    options = options || {};\n\n    let wasmModule;\n    try {\n\twasmModule = await WebAssembly.compile(code);\n    }  catch (err) {\n\tconsole.log(err);\n\tconsole.log(\"\\nTry to run circom --c in order to generate c++ code instead\\n\");\n\tthrow new Error(err);\n    }\n\n    let wc;\n\n    let errStr = \"\";\n    let msgStr = \"\";\n    \n    const instance = await WebAssembly.instantiate(wasmModule, {\n        runtime: {\n            exceptionHandler : function(code) {\n\t\tlet err;\n                if (code == 1) {\n                    err = \"Signal not found.\\n\";\n                } else if (code == 2) {\n                    err = \"Too many signals set.\\n\";\n                } else if (code == 3) {\n                    err = \"Signal already set.\\n\";\n\t\t} else if (code == 4) {\n                    err = \"Assert Failed.\\n\";\n\t\t} else if (code == 5) {\n                    err = \"Not enough memory.\\n\";\n\t\t} else if (code == 6) {\n                    err = \"Input signal array access exceeds the size.\\n\";\n\t\t} else {\n\t\t    err = \"Unknown error.\\n\";\n                }\n                throw new Error(err + errStr);\n            },\n\t    printErrorMessage : function() {\n\t\terrStr += getMessage() + \"\\n\";\n                // console.error(getMessage());\n\t    },\n\t    writeBufferMessage : function() {\n\t\t\tconst msg = getMessage();\n\t\t\t// Any calls to `log()` will always end with a `\\n`, so that's when we print and reset\n\t\t\tif (msg === \"\\n\") {\n\t\t\t\tconsole.log(msgStr);\n\t\t\t\tmsgStr = \"\";\n\t\t\t} else {\n\t\t\t\t// If we've buffered other content, put a space in between the items\n\t\t\t\tif (msgStr !== \"\") {\n\t\t\t\t\tmsgStr += \" \"\n\t\t\t\t}\n\t\t\t\t// Then append the message to the message we are creating\n\t\t\t\tmsgStr += msg;\n\t\t\t}\n\t    },\n\t    showSharedRWMemory : function() {\n\t\tprintSharedRWMemory ();\n        
    }\n\n        }\n    });\n\n    const sanityCheck =\n        options\n//        options &&\n//        (\n//            options.sanityCheck ||\n//            options.logGetSignal ||\n//            options.logSetSignal ||\n//            options.logStartComponent ||\n//            options.logFinishComponent\n//        );\n\n    \n    wc = new WitnessCalculator(instance, sanityCheck);\n    return wc;\n\n    function getMessage() {\n        var message = \"\";\n\tvar c = instance.exports.getMessageChar();\n        while ( c != 0 ) {\n\t    message += String.fromCharCode(c);\n\t    c = instance.exports.getMessageChar();\n\t}\n        return message;\n    }\n\t\n    function printSharedRWMemory () {\n\tconst shared_rw_memory_size = instance.exports.getFieldNumLen32();\n\tconst arr = new Uint32Array(shared_rw_memory_size);\n\tfor (let j=0; j<shared_rw_memory_size; j++) {\n\t    arr[shared_rw_memory_size-1-j] = instance.exports.readSharedRWMemory(j);\n\t}\n\n\t// If we've buffered other content, put a space in between the items\n\tif (msgStr !== \"\") {\n\t\tmsgStr += \" \"\n\t}\n\t// Then append the value to the message we are creating\n\tmsgStr += (fromArray32(arr).toString());\n\t}\n\n};\n\nclass WitnessCalculator {\n    constructor(instance, sanityCheck) {\n        this.instance = instance;\n\n\tthis.version = this.instance.exports.getVersion();\n        this.n32 = this.instance.exports.getFieldNumLen32();\n\n        this.instance.exports.getRawPrime();\n        const arr = new Uint32Array(this.n32);\n        for (let i=0; i<this.n32; i++) {\n            arr[this.n32-1-i] = this.instance.exports.readSharedRWMemory(i);\n        }\n        this.prime = fromArray32(arr);\n\n        this.witnessSize = this.instance.exports.getWitnessSize();\n\n        this.sanityCheck = sanityCheck;\n    }\n    \n    circom_version() {\n\treturn this.instance.exports.getVersion();\n    }\n\n    async _doCalculateWitness(input, sanityCheck) {\n\t//input is assumed to be a map from signals 
to arrays of bigints\n        this.instance.exports.init((this.sanityCheck || sanityCheck) ? 1 : 0);\n        const keys = Object.keys(input);\n\tvar input_counter = 0;\n        keys.forEach( (k) => {\n            const h = fnvHash(k);\n            const hMSB = parseInt(h.slice(0,8), 16);\n            const hLSB = parseInt(h.slice(8,16), 16);\n            const fArr = flatArray(input[k]);\n\t    let signalSize = this.instance.exports.getInputSignalSize(hMSB, hLSB);\n\t    if (signalSize < 0){\n\t\tthrow new Error(`Signal ${k} not found\\n`);\n\t    }\n\t    if (fArr.length < signalSize) {\n\t\tthrow new Error(`Not enough values for input signal ${k}\\n`);\n\t    }\n\t    if (fArr.length > signalSize) {\n\t\tthrow new Error(`Too many values for input signal ${k}\\n`);\n\t    }\n            for (let i=0; i<fArr.length; i++) {\n                const arrFr = toArray32(normalize(fArr[i],this.prime),this.n32)\n                for (let j=0; j<this.n32; j++) {\n\t\t    this.instance.exports.writeSharedRWMemory(j,arrFr[this.n32-1-j]);\n\t\t}\n\t\ttry {\n                    this.instance.exports.setInputSignal(hMSB, hLSB,i);\n\t\t    input_counter++;\n\t\t} catch (err) {\n\t\t    // console.log(`After adding signal ${i} of ${k}`)\n                    throw new Error(err);\n\t\t}\n            }\n\n        });\n\tif (input_counter < this.instance.exports.getInputSize()) {\n\t    throw new Error(`Not all inputs have been set. 
Only ${input_counter} out of ${this.instance.exports.getInputSize()}`);\n\t}\n    }\n\n    async calculateWitness(input, sanityCheck) {\n\n        const w = [];\n\n        await this._doCalculateWitness(input, sanityCheck);\n\n        for (let i=0; i<this.witnessSize; i++) {\n            this.instance.exports.getWitness(i);\n\t    const arr = new Uint32Array(this.n32);\n            for (let j=0; j<this.n32; j++) {\n            arr[this.n32-1-j] = this.instance.exports.readSharedRWMemory(j);\n            }\n            w.push(fromArray32(arr));\n        }\n\n        return w;\n    }\n    \n\n    async calculateBinWitness(input, sanityCheck) {\n\n        const buff32 = new Uint32Array(this.witnessSize*this.n32);\n\tconst buff = new  Uint8Array( buff32.buffer);\n        await this._doCalculateWitness(input, sanityCheck);\n\n        for (let i=0; i<this.witnessSize; i++) {\n            this.instance.exports.getWitness(i);\n\t    const pos = i*this.n32;\n            for (let j=0; j<this.n32; j++) {\n\t\tbuff32[pos+j] = this.instance.exports.readSharedRWMemory(j);\n            }\n        }\n\n\treturn buff;\n    }\n    \n\n    async calculateWTNSBin(input, sanityCheck) {\n\n        const buff32 = new Uint32Array(this.witnessSize*this.n32+this.n32+11);\n\tconst buff = new  Uint8Array( buff32.buffer);\n        await this._doCalculateWitness(input, sanityCheck);\n  \n\t//\"wtns\"\n\tbuff[0] = \"w\".charCodeAt(0)\n\tbuff[1] = \"t\".charCodeAt(0)\n\tbuff[2] = \"n\".charCodeAt(0)\n\tbuff[3] = \"s\".charCodeAt(0)\n\n\t//version 2\n\tbuff32[1] = 2;\n\n\t//number of sections: 2\n\tbuff32[2] = 2;\n\n\t//id section 1\n\tbuff32[3] = 1;\n\n\tconst n8 = this.n32*4;\n\t//id section 1 length in 64bytes\n\tconst idSection1length = 8 + n8;\n\tconst idSection1lengthHex = idSection1length.toString(16);\n        buff32[4] = parseInt(idSection1lengthHex.slice(0,8), 16);\n        buff32[5] = parseInt(idSection1lengthHex.slice(8,16), 16);\n\n\t//this.n32\n\tbuff32[6] = n8;\n\n\t//prime 
number\n\tthis.instance.exports.getRawPrime();\n\n\tvar pos = 7;\n        for (let j=0; j<this.n32; j++) {\n\t    buff32[pos+j] = this.instance.exports.readSharedRWMemory(j);\n        }\n\tpos += this.n32;\n\n\t// witness size\n\tbuff32[pos] = this.witnessSize;\n\tpos++;\n\n\t//id section 2\n\tbuff32[pos] = 2;\n\tpos++;\n\n\t// section 2 length\n\tconst idSection2length = n8*this.witnessSize;\n\tconst idSection2lengthHex = idSection2length.toString(16);\n        buff32[pos] = parseInt(idSection2lengthHex.slice(0,8), 16);\n        buff32[pos+1] = parseInt(idSection2lengthHex.slice(8,16), 16);\n\n\tpos += 2;\n        for (let i=0; i<this.witnessSize; i++) {\n            this.instance.exports.getWitness(i);\n            for (let j=0; j<this.n32; j++) {\n\t\tbuff32[pos+j] = this.instance.exports.readSharedRWMemory(j);\n            }\n\t    pos += this.n32;\n        }\n\n\treturn buff;\n    }\n\n}\n\n\nfunction toArray32(rem,size) {\n    const res = []; //new Uint32Array(size); //has no unshift\n    const radix = BigInt(0x100000000);\n    while (rem) {\n        res.unshift( Number(rem % radix));\n        rem = rem / radix;\n    }\n    if (size) {\n\tvar i = size - res.length;\n\twhile (i>0) {\n\t    res.unshift(0);\n\t    i--;\n\t}\n    }\n    return res;\n}\n\nfunction fromArray32(arr) { //returns a BigInt\n    var res = BigInt(0);\n    const radix = BigInt(0x100000000);\n    for (let i = 0; i<arr.length; i++) {\n        res = res*radix + BigInt(arr[i]);\n    }\n    return res;\n}\n\nfunction flatArray(a) {\n    var res = [];\n    fillArray(res, a);\n    return res;\n\n    function fillArray(res, a) {\n        if (Array.isArray(a)) {\n            for (let i=0; i<a.length; i++) {\n                fillArray(res, a[i]);\n            }\n        } else {\n            res.push(a);\n        }\n    }\n}\n\nfunction normalize(n, prime) {\n    let res = BigInt(n) % prime\n    if (res < 0) res += prime\n    return res\n}\n\nfunction fnvHash(str) {\n    const uint64_max = 
BigInt(2) ** BigInt(64);\n    let hash = BigInt(\"0xCBF29CE484222325\");\n    for (var i = 0; i < str.length; i++) {\n\thash ^= BigInt(str[i].charCodeAt());\n\thash *= BigInt(0x100000001B3);\n\thash %= uint64_max;\n    }\n    let shash = hash.toString(16);\n    let n = 16 - shash.length;\n    shash = '0'.repeat(n).concat(shash);\n    return shash;\n}\n"
  },
  {
    "path": "rust-toolchain",
    "content": "nightly-2022-10-31"
  },
  {
    "path": "scripts/addr_membership_circuit.sh",
    "content": "#!/bin/bash \nsh ./scripts/compile_circuit.sh addr_membership 5"
  },
  {
    "path": "scripts/build.sh",
    "content": "#!/bin/bash \nsh ./scripts/build_wasm.sh &&\nsh ./scripts/addr_membership_circuit.sh &&\nsh ./scripts/pubkey_membership_circuit.sh\n"
  },
  {
    "path": "scripts/build_wasm.sh",
    "content": "rm -rf ./packages/spartan_wasm/build &&\ncd ./packages/spartan_wasm &&\nwasm-pack build --target web --out-dir ../spartan_wasm/build\n"
  },
  {
    "path": "scripts/compile_circuit.sh",
    "content": "CIRCUIT_NAME=$1\nNUM_PUB_INPUTS=$2\n\nBUILD_DIR=./packages/circuits/build/$CIRCUIT_NAME\nmkdir -p $BUILD_DIR &&\ncircom ./packages/circuits/instances/$CIRCUIT_NAME.circom --r1cs --wasm --prime secq256k1 -o $BUILD_DIR &&\n\n# Compile circom r1cs into binary\ncargo run --release --bin gen_spartan_inst $BUILD_DIR/$CIRCUIT_NAME.r1cs $BUILD_DIR/$CIRCUIT_NAME.circuit $NUM_PUB_INPUTS &&\n\n# Copy the circuit into the lib dir\nLIB_CIRCUITS_DIR=./packages/lib/src/circuits\nmkdir -p $LIB_CIRCUITS_DIR &&\ncp $BUILD_DIR/*_js/*.wasm $LIB_CIRCUITS_DIR &&\ncp $BUILD_DIR/*.circuit $LIB_CIRCUITS_DIR\n"
  },
  {
    "path": "scripts/pubkey_membership_circuit.sh",
    "content": "#!/bin/bash \nsh ./scripts/compile_circuit.sh pubkey_membership 5"
  },
  {
    "path": "scripts/test.sh",
    "content": "cargo test --release &&\nyarn lerna run test"
  }
]