Full Code of aszepieniec/stark-anatomy for AI

Repository: aszepieniec/stark-anatomy
Branch: master
Commit: cae79160fd7b
Files: 42
Total size: 331.1 KB

Directory structure:
stark-anatomy/

├── .gitignore
├── LICENSE
├── README.md
├── code/
│   ├── .gitignore
│   ├── algebra.py
│   ├── fast_rpsss.py
│   ├── fast_stark.py
│   ├── fri.py
│   ├── ip.py
│   ├── merkle.py
│   ├── multivariate.py
│   ├── ntt.py
│   ├── rescue_prime.py
│   ├── rpsss.py
│   ├── stark.py
│   ├── test_fast_stark.py
│   ├── test_fri.py
│   ├── test_ip.py
│   ├── test_merkle.py
│   ├── test_multivariate.py
│   ├── test_ntt.py
│   ├── test_rescue_prime.py
│   ├── test_rpsss.py
│   ├── test_stark.py
│   ├── test_univariate.py
│   └── univariate.py
└── docs/
    ├── .gitignore
    ├── 404.html
    ├── Gemfile
    ├── _config.yml
    ├── _includes/
    │   └── head-custom.html
    ├── _posts/
    │   └── 2021-10-20-welcome-to-jekyll.markdown
    ├── about.md
    ├── basic-tools.md
    ├── faster.md
    ├── fri.md
    ├── index.md
    ├── latex/
    │   ├── .gitignore
    │   └── graphics.tex
    ├── overview.md
    ├── rescue-prime.md
    └── stark.md

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitignore
================================================
docs/_site/*
docs/_site/


================================================
FILE: LICENSE
================================================
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!)  The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.


================================================
FILE: README.md
================================================
# stark-anatomy

STARK tutorial with supporting code in python

Outline:
 - introduction
 - overview of STARKs
 - basic tools -- algebra and polynomials
 - FRI low degree test
 - STARK information theoretical protocol
 - speeding things up with NTT and preprocessing

Visit the Github Pages website here: https://aszepieniec.github.io/stark-anatomy/

## Follow-up
Be sure to check out the [next tutorial](https://github.com/aszepieniec/stark-brainfuck) where we implement a STARK engine for a VM running Brainfuck. And our "real", functional, practical ZK-STARK VM, [Triton VM](https://triton-vm.org/).

## Running locally (the website, not the tutorial)

 1. Install ruby
 2. Install bundler
 3. Change directory to `docs/` and install Jekyll: `$> sudo bundle install`
 4. Run Jekyll: `$> bundle exec jekyll serve`
 5. Surf to [http://127.0.0.1:4000/](http://127.0.0.1:4000/)

## LaTeX and Github Pages

GitHub Pages uses Kramdown as its markdown processor. Kramdown does not support LaTeX. Instead, a javascript header loads MathJax, which parses the page and replaces LaTeX maths instructions with properly rendered formulae. Here is how to do it:

1. Open `_includes/head-custom.html` and paste the following code:
```html
<script type="text/x-mathjax-config">
MathJax.Hub.Config({
    TeX: {
      equationNumbers: {
        autoNumber: "AMS"
      }
    },
    tex2jax: {
    inlineMath: [ ['$', '$'], ['\\(', '\\)'] ],
    displayMath: [ ['$$', '$$'], ['\\[', '\\]'] ],
    processEscapes: true,
  }
});
MathJax.Hub.Register.MessageHook("Math Processing Error",function (message) {
	  alert("Math Processing Error: "+message[1]);
	});
MathJax.Hub.Register.MessageHook("TeX Jax - parse error",function (message) {
	  alert("Math Processing Error: "+message[1]);
	});
</script>
<script type="text/javascript" async
  src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-MML-AM_CHTML">
</script>
```

Jekyll, the site engine used by Github Pages, will load this header automatically. There is no need to change the `_config.yml` file.

Note that Kramdown interprets every underscore (`_`) followed by a non-whitespace character as the start of an emphasised piece of text. This interpretation interferes with subscripts in LaTeX formulae, which also use underscores. The workaround is to rewrite the LaTeX formulae, introducing a space after every underscore. Also, consider replacing:
 - `\{` by `\lbrace` and `\}` by `\rbrace`,
 - `|` by `\vert`.
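
As a sketch of this workaround, a small Python preprocessor (a hypothetical helper, not part of this repository) can apply all three substitutions to a LaTeX snippet before it goes into a markdown file:

```python
import re

def kramdownify(latex):
    """Rewrite a LaTeX snippet so Kramdown leaves it alone.

    Inserts a space after every underscore (so Kramdown does not read it
    as emphasis), and applies the brace and bar substitutions above.
    """
    latex = re.sub(r'_(?=\S)', '_ ', latex)  # x_1  ->  x_ 1
    latex = latex.replace(r'\{', r'\lbrace ').replace(r'\}', r'\rbrace ')
    latex = latex.replace('|', r'\vert ')
    return latex

print(kramdownify(r'$x_1 + y_{i}$'))  # -> $x_ 1 + y_ {i}$
```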



================================================
FILE: code/.gitignore
================================================
__pycache__/
*.swp



================================================
FILE: code/algebra.py
================================================
def xgcd( x, y ):
    old_r, r = (x, y)
    old_s, s = (1, 0)
    old_t, t = (0, 1)

    while r != 0:
        quotient = old_r // r
        old_r, r = (r, old_r - quotient * r)
        old_s, s = (s, old_s - quotient * s)
        old_t, t = (t, old_t - quotient * t)

    return old_s, old_t, old_r # a, b, g

class FieldElement:
    def __init__( self, value, field ):
        self.value = value
        self.field = field

    def __add__( self, right ):
        return self.field.add(self, right)

    def __mul__( self, right ):
        return self.field.multiply(self, right)

    def __sub__( self, right ):
        return self.field.subtract(self, right)

    def __truediv__( self, right ):
        return self.field.divide(self, right)

    def __neg__( self ):
        return self.field.negate(self)

    def inverse( self ):
        return self.field.inverse(self)

    # modular exponentiation -- be sure to encapsulate in parentheses!
    def __xor__( self, exponent ):
        acc = FieldElement(1, self.field)
        val = FieldElement(self.value, self.field)
        for i in reversed(range(len(bin(exponent)[2:]))):
            acc = acc * acc
            if (1 << i) & exponent != 0:
                acc = acc * val
        return acc

    def __eq__( self, other ):
        return self.value == other.value

    def __ne__( self, other ):
        return self.value != other.value

    def __str__( self ):
        return str(self.value)

    def __bytes__( self ):
        return bytes(str(self).encode())

    def is_zero( self ):
        if self.value == 0:
            return True
        else:
            return False

class Field:
    def __init__( self, p ):
        self.p = p

    def zero( self ):
        return FieldElement(0, self)

    def one( self ):
        return FieldElement(1, self)

    def multiply( self, left, right ):
        return FieldElement((left.value * right.value) % self.p, self)

    def add( self, left, right ):
        return FieldElement((left.value + right.value) % self.p, self)

    def subtract( self, left, right ):
        return FieldElement((self.p + left.value - right.value) % self.p, self)

    def negate( self, operand ):
        return FieldElement((self.p - operand.value) % self.p, self)

    def inverse( self, operand ):
        a, b, g = xgcd(operand.value, self.p)
        return FieldElement(((a % self.p) + self.p) % self.p, self)

    def divide( self, left, right ):
        assert(not right.is_zero()), "divide by zero"
        a, b, g = xgcd(right.value, self.p)
        return FieldElement(left.value * a % self.p, self)

    @staticmethod
    def main():
        p = 1 + 407 * ( 1 << 119 ) # 1 + 407 * 2^119, where 407 = 11 * 37
        return Field(p)

    def generator( self ):
        assert(self.p == 1 + 407 * ( 1 << 119 )), "Do not know generator for other fields beyond 1+407*2^119"
        return FieldElement(85408008396924667383611388730472331217, self)

    def primitive_nth_root( self, n ):
        if self.p == 1 + 407 * ( 1 << 119 ):
            assert(n <= 1 << 119 and (n & (n-1)) == 0), "Field does not have nth root of unity where n > 2^119 or not power of two."
            root = FieldElement(85408008396924667383611388730472331217, self)
            order = 1 << 119
            while order != n:
                root = root^2
                order = order // 2
            return root
        else:
            assert(False), "Unknown field, can't return root of unity."
            
    def sample( self, byte_array ):
        acc = 0
        for b in byte_array:
            acc = (acc << 8) ^ int(b)
        return FieldElement(acc % self.p, self)
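
A quick standalone sanity check of the arithmetic above (a sketch, not part of the repository: `xgcd` is copied verbatim from this file, the prime is the one from `Field.main`, and `modexp` mirrors the square-and-multiply loop in `FieldElement.__xor__`):

```python
def xgcd(x, y):
    # extended Euclidean algorithm, as defined in algebra.py
    old_r, r = (x, y)
    old_s, s = (1, 0)
    old_t, t = (0, 1)
    while r != 0:
        quotient = old_r // r
        old_r, r = (r, old_r - quotient * r)
        old_s, s = (s, old_s - quotient * s)
        old_t, t = (t, old_t - quotient * t)
    return old_s, old_t, old_r  # a, b, g  with  a*x + b*y == g

p = 1 + 407 * (1 << 119)  # the prime used by Field.main

# inversion via the extended Euclidean algorithm, as Field.inverse does
a, b, g = xgcd(3, p)
assert g == 1
inv = a % p
assert (3 * inv) % p == 1

# left-to-right square-and-multiply, mirroring FieldElement.__xor__
def modexp(base, exponent, modulus):
    acc = 1
    for i in reversed(range(len(bin(exponent)[2:]))):
        acc = acc * acc % modulus
        if (1 << i) & exponent != 0:
            acc = acc * base % modulus
    return acc

assert modexp(5, 123456, p) == pow(5, 123456, p)
```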



================================================
FILE: code/fast_rpsss.py
================================================
from rescue_prime import *
from fast_stark import *
from hashlib import blake2s, shake_256
import os
import pickle

class SignatureProofStream(ProofStream):
    def __init__( self, document ):
        ProofStream.__init__(self)
        self.document = document
        self.prefix = blake2s(bytes(document)).digest()

    def prover_fiat_shamir( self, num_bytes=32 ):
        return shake_256(self.prefix + self.serialize()).digest(num_bytes)

    def verifier_fiat_shamir( self, num_bytes=32 ):
        return shake_256(self.prefix + pickle.dumps(self.objects[:self.read_index])).digest(num_bytes)

    def deserialize( self, bb ):
        sps = SignatureProofStream(self.document)
        sps.objects = pickle.loads(bb)
        return sps

class FastRPSSS:
    def __init__( self ):
        self.field = Field.main()
        expansion_factor = 4
        num_colinearity_checks = 64
        security_level = 2 * num_colinearity_checks

        self.rp = RescuePrime()
        num_cycles = self.rp.N+1
        state_width = self.rp.m

        self.stark = FastStark(self.field, expansion_factor, num_colinearity_checks, security_level, state_width, num_cycles, transition_constraints_degree=3)
        self.transition_zerofier, self.transition_zerofier_codeword, self.transition_zerofier_root = self.stark.preprocess()

    def stark_prove( self, input_element, proof_stream ):
        output_element = self.rp.hash(input_element)

        trace = self.rp.trace(input_element)
        transition_constraints = self.rp.transition_constraints(self.stark.omicron)
        boundary_constraints = self.rp.boundary_constraints(output_element)
        proof = self.stark.prove(trace, transition_constraints, boundary_constraints, self.transition_zerofier, self.transition_zerofier_codeword, proof_stream)
 
        return proof

    def stark_verify( self, output_element, stark_proof, proof_stream ):
        boundary_constraints = self.rp.boundary_constraints(output_element)
        transition_constraints = self.rp.transition_constraints(self.stark.omicron)
        return self.stark.verify(stark_proof, transition_constraints, boundary_constraints, self.transition_zerofier_root, proof_stream)

    def keygen( self ):
        sk = self.field.sample(os.urandom(17))
        pk = self.rp.hash(sk)
        return sk, pk

    def sign( self, sk, document ):
        sps = SignatureProofStream(document)
        signature = self.stark_prove(sk, sps)
        return signature

    def verify( self, pk, document, signature ):
        sps = SignatureProofStream(document)
        return self.stark_verify(pk, signature, sps)
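
The prefix trick above binds every Fiat-Shamir challenge to the signed document. A standalone sketch of that derivation using only `hashlib` (the `transcript` bytes stand in for the serialized proof stream):

```python
from hashlib import blake2s, shake_256

def fiat_shamir_challenge(document, transcript, num_bytes=32):
    # hash the document once into a fixed 32-byte prefix ...
    prefix = blake2s(document).digest()
    # ... then derive the challenge from prefix || transcript,
    # the same shape as SignatureProofStream.prover_fiat_shamir
    return shake_256(prefix + transcript).digest(num_bytes)

c1 = fiat_shamir_challenge(b"pay alice 10", b"proof-objects-so-far")
c2 = fiat_shamir_challenge(b"pay mallory 10", b"proof-objects-so-far")
assert c1 != c2      # different documents yield different challenges
assert len(c1) == 32
```

Because the document is folded into every challenge, a proof produced for one document cannot be replayed as a signature on another.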



================================================
FILE: code/fast_stark.py
================================================
from fri import *
from univariate import *
from multivariate import *
from ntt import *
from functools import reduce
import os

class FastStark:
    def __init__( self, field, expansion_factor, num_colinearity_checks, security_level, num_registers, num_cycles, transition_constraints_degree=2 ):
        assert(len(bin(field.p)) - 2 >= security_level), "p must have at least as many bits as security level"
        assert(expansion_factor & (expansion_factor - 1) == 0), "expansion factor must be a power of 2"
        assert(expansion_factor >= 4), "expansion factor must be 4 or greater"
        assert(num_colinearity_checks * 2 >= security_level), "number of colinearity checks must be at least half of security level"

        self.field = field
        self.expansion_factor = expansion_factor
        self.num_colinearity_checks = num_colinearity_checks
        self.security_level = security_level

        self.num_randomizers = 4*num_colinearity_checks

        self.num_registers = num_registers
        self.original_trace_length = num_cycles
        
        self.randomized_trace_length = self.original_trace_length + self.num_randomizers
        self.omicron_domain_length = 1 << len(bin(self.randomized_trace_length * transition_constraints_degree)[2:])
        self.fri_domain_length = self.omicron_domain_length * expansion_factor

        self.generator = self.field.generator()
        self.omega = self.field.primitive_nth_root(self.fri_domain_length)
        self.omicron = self.field.primitive_nth_root(self.omicron_domain_length)
        self.omicron_domain = [self.omicron^i for i in range(self.omicron_domain_length)]

        self.fri = Fri(self.generator, self.omega, self.fri_domain_length, self.expansion_factor, self.num_colinearity_checks)

    def preprocess( self ):
        transition_zerofier = fast_zerofier(self.omicron_domain[:(self.original_trace_length-1)], self.omicron, len(self.omicron_domain))
        transition_zerofier_codeword = fast_coset_evaluate(transition_zerofier, self.generator, self.omega, self.fri.domain_length)
        transition_zerofier_root = Merkle.commit(transition_zerofier_codeword)
        return transition_zerofier, transition_zerofier_codeword, transition_zerofier_root

    def transition_degree_bounds( self, transition_constraints ):
        point_degrees = [1] + [self.original_trace_length+self.num_randomizers-1] * 2*self.num_registers
        return [max( sum(r*l for r, l in zip(point_degrees, k)) for k, v in a.dictionary.items()) for a in transition_constraints]

    def transition_quotient_degree_bounds( self, transition_constraints ):
        return [d - (self.original_trace_length-1) for d in self.transition_degree_bounds(transition_constraints)]

    def max_degree( self, transition_constraints ):
        md = max(self.transition_quotient_degree_bounds(transition_constraints))
        return (1 << (len(bin(md)[2:]))) - 1

    def boundary_zerofiers( self, boundary ):
        zerofiers = []
        for s in range(self.num_registers):
            points = [self.omicron^c for c, r, v in boundary if r == s]
            zerofiers = zerofiers + [Polynomial.zerofier_domain(points)]
        return zerofiers

    def boundary_interpolants( self, boundary ):
        interpolants = []
        for s in range(self.num_registers):
            points = [(c,v) for c, r, v in boundary if r == s]
            domain = [self.omicron^c for c,v in points]
            values = [v for c,v in points]
            interpolants = interpolants + [Polynomial.interpolate_domain(domain, values)]
        return interpolants

    def boundary_quotient_degree_bounds( self, randomized_trace_length, boundary ):
        randomized_trace_degree = randomized_trace_length - 1
        return [randomized_trace_degree - bz.degree() for bz in self.boundary_zerofiers(boundary)]

    def sample_weights( self, number, randomness ):
        # note: bytes(i) is i zero bytes, so each index yields a distinct hash input
        return [self.field.sample(blake2b(randomness + bytes(i)).digest()) for i in range(0, number)]

    def prove( self, trace, transition_constraints, boundary, transition_zerofier, transition_zerofier_codeword, proof_stream=None ):
        # create proof stream object if necessary
        if proof_stream is None:
            proof_stream = ProofStream()
        
        # concatenate randomizers
        for k in range(self.num_randomizers):
            trace = trace + [[self.field.sample(os.urandom(17)) for s in range(self.num_registers)]]

        # interpolate
        trace_domain = [self.omicron^i for i in range(len(trace))]
        trace_polynomials = []
        for s in range(self.num_registers):
            single_trace = [trace[c][s] for c in range(len(trace))]
            trace_polynomials = trace_polynomials + [fast_interpolate(trace_domain, single_trace, self.omicron, self.omicron_domain_length)]

        # subtract boundary interpolants and divide out boundary zerofiers
        boundary_quotients = []
        for s in range(self.num_registers):
            interpolant = self.boundary_interpolants(boundary)[s]
            zerofier = self.boundary_zerofiers(boundary)[s]
            quotient = (trace_polynomials[s] - interpolant) / zerofier
            boundary_quotients += [quotient]

        # commit to boundary quotients
        boundary_quotient_codewords = []
        boundary_quotient_Merkle_roots = []
        for s in range(self.num_registers):
            boundary_quotient_codewords = boundary_quotient_codewords + [fast_coset_evaluate(boundary_quotients[s], self.generator, self.omega, self.fri_domain_length)]
            merkle_root = Merkle.commit(boundary_quotient_codewords[s])
            proof_stream.push(merkle_root)

        # symbolically evaluate transition constraints
        point = [Polynomial([self.field.zero(), self.field.one()])] + trace_polynomials + [tp.scale(self.omicron) for tp in trace_polynomials]
        transition_polynomials = [a.evaluate_symbolic(point) for a in transition_constraints]

        # divide out zerofier
        transition_quotients = [fast_coset_divide(tp, transition_zerofier, self.generator, self.omicron, self.omicron_domain_length) for tp in transition_polynomials]

        # commit to randomizer polynomial
        randomizer_polynomial = Polynomial([self.field.sample(os.urandom(17)) for i in range(self.max_degree(transition_constraints)+1)])
        randomizer_codeword = fast_coset_evaluate(randomizer_polynomial, self.generator, self.omega, self.fri_domain_length)
        randomizer_root = Merkle.commit(randomizer_codeword)
        proof_stream.push(randomizer_root)

        # get weights for nonlinear combination
        #  - 1 randomizer
        #  - 2 for every transition quotient
        #  - 2 for every boundary quotient
        weights = self.sample_weights(1 + 2*len(transition_quotients) + 2*len(boundary_quotients), proof_stream.prover_fiat_shamir())

        assert([tq.degree() for tq in transition_quotients] == self.transition_quotient_degree_bounds(transition_constraints)), "transition quotient degrees do not match with expectation"

        # compute terms of nonlinear combination polynomial
        x = Polynomial([self.field.zero(), self.field.one()])
        max_degree = self.max_degree(transition_constraints)
        terms = []
        terms += [randomizer_polynomial]
        for i in range(len(transition_quotients)):
            terms += [transition_quotients[i]]
            shift = max_degree - self.transition_quotient_degree_bounds(transition_constraints)[i]
            terms += [(x^shift) * transition_quotients[i]]
        for i in range(self.num_registers):
            terms += [boundary_quotients[i]]
            shift = max_degree - self.boundary_quotient_degree_bounds(len(trace), boundary)[i]
            terms += [(x^shift) * boundary_quotients[i]]

        # take weighted sum
        # combination = sum(weights[i] * terms[i] for all i)
        combination = reduce(lambda a, b : a+b, [Polynomial([weights[i]]) * terms[i] for i in range(len(terms))], Polynomial([]))

        # compute matching codeword
        combined_codeword = fast_coset_evaluate(combination, self.generator, self.omega, self.fri_domain_length)

        # prove low degree of combination polynomial, and collect indices
        indices = self.fri.prove(combined_codeword, proof_stream)

        # process indices
        duplicated_indices = [i for i in indices] + [(i + self.expansion_factor) % self.fri.domain_length for i in indices]
        quadrupled_indices = [i for i in duplicated_indices] + [(i + (self.fri.domain_length // 2)) % self.fri.domain_length for i in duplicated_indices]
        quadrupled_indices.sort()

        # open indicated positions in the boundary quotient codewords
        for bqc in boundary_quotient_codewords:
            for i in quadrupled_indices:
                proof_stream.push(bqc[i])
                path = Merkle.open(i, bqc)
                proof_stream.push(path)

        # ... as well as in the randomizer
        for i in quadrupled_indices:
            proof_stream.push(randomizer_codeword[i])
            path = Merkle.open(i, randomizer_codeword)
            proof_stream.push(path)

        # ... and also in the zerofier!
        for i in quadrupled_indices:
            proof_stream.push(transition_zerofier_codeword[i])
            path = Merkle.open(i, transition_zerofier_codeword)
            proof_stream.push(path)

        # the final proof is just the serialized stream
        return proof_stream.serialize()

    def verify( self, proof, transition_constraints, boundary, transition_zerofier_root, proof_stream=None ):
        H = blake2b

        # infer trace length from boundary conditions
        original_trace_length = 1 + max(c for c, r, v in boundary)
        randomized_trace_length = original_trace_length + self.num_randomizers

        # deserialize with right proof stream
        if proof_stream is None:
            proof_stream = ProofStream()
        proof_stream = proof_stream.deserialize(proof)

        # get Merkle roots of boundary quotient codewords
        boundary_quotient_roots = []
        for s in range(self.num_registers):
            boundary_quotient_roots = boundary_quotient_roots + [proof_stream.pull()]

        # get Merkle root of randomizer polynomial
        randomizer_root = proof_stream.pull()

        # get weights for nonlinear combination
        weights = self.sample_weights(1 + 2*len(transition_constraints) + 2*len(self.boundary_interpolants(boundary)), proof_stream.verifier_fiat_shamir())

        # verify low degree of combination polynomial
        polynomial_values = []
        verifier_accepts = self.fri.verify(proof_stream, polynomial_values)
        polynomial_values.sort(key=lambda iv : iv[0])
        if not verifier_accepts:
            return False

        indices = [i for i,v in polynomial_values]
        values = [v for i,v in polynomial_values]

        # read and verify leafs, which are elements of boundary quotient codewords
        duplicated_indices = [i for i in indices] + [(i + self.expansion_factor) % self.fri.domain_length for i in indices]
        duplicated_indices.sort()
        leafs = []
        for r in range(len(boundary_quotient_roots)):
            leafs = leafs + [dict()]
            for i in duplicated_indices:
                leafs[r][i] = proof_stream.pull()
                path = proof_stream.pull()
                verifier_accepts = verifier_accepts and Merkle.verify(boundary_quotient_roots[r], i, path, leafs[r][i])
                if not verifier_accepts:
                    return False

        # read and verify randomizer leafs
        randomizer = dict()
        for i in duplicated_indices:
            randomizer[i] = proof_stream.pull()
            path = proof_stream.pull()
            verifier_accepts = verifier_accepts and Merkle.verify(randomizer_root, i, path, randomizer[i])
            if not verifier_accepts:
                return False

        # read and verify transition zerofier leafs
        transition_zerofier = dict()
        for i in duplicated_indices:
            transition_zerofier[i] = proof_stream.pull()
            path = proof_stream.pull()
            verifier_accepts = verifier_accepts and Merkle.verify(transition_zerofier_root, i, path, transition_zerofier[i])
            if not verifier_accepts:
                return False

        # verify leafs of combination polynomial
        for i in range(len(indices)):
            current_index = indices[i] # i itself is still needed below, to index into values[i]

            # get trace values by applying a correction to the boundary quotient values (which are the leafs)
            domain_current_index = self.generator * (self.omega^current_index)
            next_index = (current_index + self.expansion_factor) % self.fri.domain_length
            domain_next_index = self.generator * (self.omega^next_index)
            current_trace = [self.field.zero() for s in range(self.num_registers)]
            next_trace = [self.field.zero() for s in range(self.num_registers)]
            for s in range(self.num_registers):
                zerofier = self.boundary_zerofiers(boundary)[s]
                interpolant = self.boundary_interpolants(boundary)[s]

                current_trace[s] = leafs[s][current_index] * zerofier.evaluate(domain_current_index) + interpolant.evaluate(domain_current_index)
                next_trace[s] = leafs[s][next_index] * zerofier.evaluate(domain_next_index) + interpolant.evaluate(domain_next_index)

            point = [domain_current_index] + current_trace + next_trace
            transition_constraints_values = [transition_constraints[s].evaluate(point) for s in range(len(transition_constraints))]

            # compute nonlinear combination
            terms = []
            terms += [randomizer[current_index]]
            for s in range(len(transition_constraints_values)):
                tcv = transition_constraints_values[s]
                quotient = tcv / transition_zerofier[current_index]
                terms += [quotient]
                shift = self.max_degree(transition_constraints) - self.transition_quotient_degree_bounds(transition_constraints)[s]
                terms += [quotient * (domain_current_index^shift)]
            for s in range(self.num_registers):
                bqv = leafs[s][current_index] # boundary quotient value
                terms += [bqv]
                shift = self.max_degree(transition_constraints) - self.boundary_quotient_degree_bounds(randomized_trace_length, boundary)[s]
                terms += [bqv * (domain_current_index^shift)]
            combination = reduce(lambda a, b : a+b, [terms[j] * weights[j] for j in range(len(terms))], self.field.zero())

            # verify against combination polynomial value
            verifier_accepts = verifier_accepts and (combination == values[i])
            if not verifier_accepts:
                return False

        return verifier_accepts
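In both prove and verify above, every term of the nonlinear combination appears twice: once as-is and once multiplied by x^shift, where shift = max_degree - degree_bound. This raises each term to the same maximal degree, so a single FRI low-degree test bounds all of them simultaneously. A minimal sketch of that shift with plain integer coefficient lists (function names here are illustrative, not from the repo):

```python
# Sketch of the degree-shift trick used in the nonlinear combination:
# multiplying a polynomial of degree d by x^(max_degree - d) raises its
# degree to exactly max_degree without losing any information.

def degree(coeffs):
    """Degree of a coefficient list; -1 for the zero polynomial."""
    return max((i for i, c in enumerate(coeffs) if c != 0), default=-1)

def shift_to(coeffs, max_degree):
    """Multiply by x^(max_degree - deg), i.e. prepend zeros to the coefficient list."""
    shift = max_degree - degree(coeffs)
    return [0] * shift + list(coeffs)

p = [5, 0, 3]          # 5 + 3x^2, degree 2
q = shift_to(p, 5)     # x^3 * (5 + 3x^2), degree exactly 5
```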



================================================
FILE: code/fri.py
================================================
from algebra import *
from merkle import *
from ip import *
from ntt import *
from binascii import hexlify, unhexlify
import math
from hashlib import blake2b

from univariate import *

class Fri:
    def __init__( self, offset, omega, initial_domain_length, expansion_factor, num_colinearity_tests ):
        self.offset = offset
        self.omega = omega
        self.domain_length = initial_domain_length
        self.field = omega.field
        self.expansion_factor = expansion_factor
        self.num_colinearity_tests = num_colinearity_tests

        assert(self.num_rounds() >= 1), "cannot do FRI with less than one round"

    def num_rounds( self ):
        codeword_length = self.domain_length
        num_rounds = 0
        while codeword_length > self.expansion_factor and 4*self.num_colinearity_tests < codeword_length:
            codeword_length //= 2
            num_rounds += 1
        return num_rounds

    def sample_index( byte_array, size ):
        acc = 0
        for b in byte_array:
            acc = (acc << 8) ^ int(b)
        return acc % size

    def sample_indices( self, seed, size, reduced_size, number ):
        assert(number <= reduced_size), f"cannot sample more indices than available in last codeword; requested: {number}, available: {reduced_size}"
        assert(number <= 2*reduced_size), "not enough entropy in indices wrt last codeword"

        indices = []
        reduced_indices = []
        counter = 0
        while len(indices) < number:
            index = Fri.sample_index(blake2b(seed + bytes(counter)).digest(), size)
            reduced_index = index % reduced_size
            counter += 1
            if reduced_index not in reduced_indices:
                indices += [index]
                reduced_indices += [reduced_index]

        return indices

    def eval_domain( self ):
        return [self.offset * (self.omega^i) for i in range(self.domain_length)]

    def commit( self, codeword, proof_stream, round_index=0 ):
        one = self.field.one()
        two = FieldElement(2, self.field)
        omega = self.omega
        offset = self.offset
        codewords = []

        # for each round
        for r in range(self.num_rounds()):
            N = len(codeword)

            # make sure omega has the right order
            assert(omega^(N - 1) == omega.inverse()), "error in commit: omega does not have the right order!"

            # compute and send Merkle root
            root = Merkle.commit(codeword)
            proof_stream.push(root)

            # prepare next round, but only if necessary
            if r == self.num_rounds() - 1:
                break

            # get challenge
            alpha = self.field.sample(proof_stream.prover_fiat_shamir())

            # collect codeword
            codewords += [codeword]

            # split and fold
            codeword = [two.inverse() * ( (one + alpha / (offset * (omega^i)) ) * codeword[i] + (one - alpha / (offset * (omega^i)) ) * codeword[N//2 + i] ) for i in range(N//2)]

            omega = omega^2
            offset = offset^2

        # send last codeword
        proof_stream.push(codeword)

        # collect last codeword too
        codewords = codewords + [codeword]

        return codewords

    def query( self, current_codeword, next_codeword, c_indices, proof_stream ):
        # infer a and b indices
        a_indices = [index for index in c_indices]
        b_indices = [index + len(current_codeword)//2 for index in c_indices]

        # reveal leafs
        for s in range(self.num_colinearity_tests):
            proof_stream.push((current_codeword[a_indices[s]], current_codeword[b_indices[s]], next_codeword[c_indices[s]]))

        # reveal authentication paths
        for s in range(self.num_colinearity_tests):
            proof_stream.push(Merkle.open(a_indices[s], current_codeword))
            proof_stream.push(Merkle.open(b_indices[s], current_codeword))
            proof_stream.push(Merkle.open(c_indices[s], next_codeword))

        return a_indices + b_indices

    def prove( self, codeword, proof_stream ):
        assert(self.domain_length == len(codeword)), "length of initial codeword does not match FRI initial domain length"

        # commit phase
        codewords = self.commit(codeword, proof_stream)

        # get indices
        top_level_indices = self.sample_indices(proof_stream.prover_fiat_shamir(), len(codewords[0])//2, len(codewords[-1]), self.num_colinearity_tests)
        indices = [index for index in top_level_indices]

        # query phase
        for i in range(len(codewords)-1):
            indices = [index % (len(codewords[i])//2) for index in indices] # fold
            self.query(codewords[i], codewords[i+1], indices, proof_stream)

        return top_level_indices

    def verify( self, proof_stream, polynomial_values ):
        omega = self.omega
        offset = self.offset

        # extract all roots and alphas
        roots = []
        alphas = []
        for r in range(self.num_rounds()):
            roots += [proof_stream.pull()]
            alphas += [self.field.sample(proof_stream.verifier_fiat_shamir())]

        # extract last codeword
        last_codeword = proof_stream.pull()

        # check if it matches the given root
        if roots[-1] != Merkle.commit(last_codeword):
            print("last codeword is not well formed")
            return False

        # check if it is low degree
        degree = (len(last_codeword) // self.expansion_factor) - 1
        last_omega = omega
        last_offset = offset
        for r in range(self.num_rounds()-1):
            last_omega = last_omega^2
            last_offset = last_offset^2

        # assert that last_omega has the right order
        assert(last_omega.inverse() == last_omega^(len(last_codeword)-1)), "omega does not have right order"

        # compute interpolant
        last_domain = [last_offset * (last_omega^i) for i in range(len(last_codeword))]
        poly = Polynomial.interpolate_domain(last_domain, last_codeword)
        #coefficients = intt(last_omega, last_codeword)
        #poly = Polynomial(coefficients).scale(last_offset.inverse())

        # verify by evaluating
        assert(poly.evaluate_domain(last_domain) == last_codeword), "re-evaluated codeword does not match original!"
        if poly.degree() > degree:
            print("last codeword does not correspond to polynomial of low enough degree")
            print("observed degree:", poly.degree())
            print("but should be:", degree)
            return False

        # get indices
        top_level_indices = self.sample_indices(proof_stream.verifier_fiat_shamir(), self.domain_length >> 1, self.domain_length >> (self.num_rounds()-1), self.num_colinearity_tests)

        # for every round, check consistency of subsequent layers
        for r in range(0, self.num_rounds()-1):

            # fold c indices
            c_indices = [index % (self.domain_length >> (r+1)) for index in top_level_indices]

            # infer a and b indices
            a_indices = [index for index in c_indices]
            b_indices = [index + (self.domain_length >> (r+1)) for index in a_indices]

            # read values and check colinearity
            aa = []
            bb = []
            cc = []
            for s in range(self.num_colinearity_tests):
                (ay, by, cy) = proof_stream.pull()
                aa += [ay]
                bb += [by]
                cc += [cy]

                # record top-layer values for later verification
                if r == 0:
                    polynomial_values += [(a_indices[s], ay), (b_indices[s], by)]
                
                # colinearity check
                ax = offset * (omega^a_indices[s])
                bx = offset * (omega^b_indices[s])
                cx = alphas[r]
                if test_colinearity([(ax, ay), (bx, by), (cx, cy)]) == False:
                    print("colinearity check failure")
                    return False

            # verify authentication paths
            for i in range(self.num_colinearity_tests):
                path = proof_stream.pull()
                if Merkle.verify(roots[r], a_indices[i], path, aa[i]) == False:
                    print("merkle authentication path verification fails for aa")
                    return False
                path = proof_stream.pull()
                if Merkle.verify(roots[r], b_indices[i], path, bb[i]) == False:
                    print("merkle authentication path verification fails for bb")
                    return False
                path = proof_stream.pull()
                if Merkle.verify(roots[r+1], c_indices[i], path, cc[i]) == False:
                    print("merkle authentication path verification fails for cc")
                    return False

            # square omega and offset to prepare for next round
            omega = omega^2
            offset = offset^2

        # all checks passed
        return True
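The split-and-fold step in Fri.commit (and the colinearity check in Fri.verify) rest on the decomposition f(x) = f_E(x^2) + x * f_O(x^2): folding with challenge alpha produces the codeword of f_E + alpha * f_O over the squared domain, halving both the domain and the degree. A self-contained sketch over the toy field GF(17), with plain integers standing in for FieldElement; all parameter values here are illustrative:

```python
# Fold a Reed-Solomon codeword as in Fri.commit, over GF(17).
P = 17
OMEGA = 9        # 9 has multiplicative order 8 mod 17
OFFSET = 2       # coset offset
N = 8

def inv(a):
    # modular inverse via Fermat's little theorem
    return pow(a, P - 2, P)

def evaluate(coeffs, x):
    # Horner evaluation mod P
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

f = [3, 1, 4, 1]                 # f(x) = 3 + x + 4x^2 + x^3, degree 3
domain = [OFFSET * pow(OMEGA, i, P) % P for i in range(N)]
codeword = [evaluate(f, x) for x in domain]

alpha = 5                        # verifier challenge (arbitrary here)
half = N // 2
# same formula as the list comprehension in Fri.commit
folded = [inv(2) * ((1 + alpha * inv(domain[i])) * codeword[i]
                    + (1 - alpha * inv(domain[i])) * codeword[half + i]) % P
          for i in range(half)]

# the fold should equal (f_E + alpha * f_O) evaluated on the squared domain
f_even = [f[0], f[2]]            # coefficients of even powers of x
f_odd = [f[1], f[3]]             # coefficients of odd powers of x
g = [(e + alpha * o) % P for e, o in zip(f_even, f_odd)]
squared_domain = [x * x % P for x in domain[:half]]
expected = [evaluate(g, x) for x in squared_domain]
```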



================================================
FILE: code/ip.py
================================================
from hashlib import shake_256
import pickle # serialization

class ProofStream:
    def __init__( self ):
        self.objects = []
        self.read_index = 0

    def push( self, obj ):
        self.objects += [obj]

    def pull( self ):
        assert(self.read_index < len(self.objects)), "ProofStream: cannot pull object; queue empty."
        obj = self.objects[self.read_index]
        self.read_index += 1
        return obj

    def serialize( self ):
        return pickle.dumps(self.objects)

    def prover_fiat_shamir( self, num_bytes=32 ):
        return shake_256(self.serialize()).digest(num_bytes)

    def verifier_fiat_shamir( self, num_bytes=32 ):
        return shake_256(pickle.dumps(self.objects[:self.read_index])).digest(num_bytes)

    def deserialize( self, bb ):
        ps = ProofStream()
        ps.objects = pickle.loads(bb)
        return ps
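The essential property of ProofStream is that prover_fiat_shamir hashes everything pushed so far, while verifier_fiat_shamir hashes only what has been pulled so far; when the verifier replays the transcript, the two sides derive identical challenges at corresponding points. A stripped-down, self-contained sketch of that property (the MiniProofStream name is illustrative; the hashing mirrors ip.py above):

```python
from hashlib import shake_256
import pickle

class MiniProofStream:
    """Stripped-down transcript mirroring ip.py's ProofStream."""
    def __init__(self):
        self.objects = []
        self.read_index = 0

    def push(self, obj):
        self.objects.append(obj)

    def pull(self):
        obj = self.objects[self.read_index]
        self.read_index += 1
        return obj

    def prover_fiat_shamir(self, num_bytes=32):
        # the prover hashes everything pushed so far
        return shake_256(pickle.dumps(self.objects)).digest(num_bytes)

    def verifier_fiat_shamir(self, num_bytes=32):
        # the verifier hashes only what has been pulled so far
        return shake_256(pickle.dumps(self.objects[:self.read_index])).digest(num_bytes)

ps = MiniProofStream()
ps.push(b"commitment-1")
prover_challenge = ps.prover_fiat_shamir()

# the verifier replays the transcript up to the same point
ps.pull()
verifier_challenge = ps.verifier_fiat_shamir()
```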



================================================
FILE: code/merkle.py
================================================
from hashlib import blake2b

class Merkle:
    H = blake2b

    def commit_( leafs ):
        assert(len(leafs) & (len(leafs)-1) == 0), "length must be power of two"
        if len(leafs) == 1:
            return leafs[0]
        else:
            return Merkle.H(Merkle.commit_(leafs[:len(leafs)//2]) + Merkle.commit_(leafs[len(leafs)//2:])).digest()

    def commit( data_array ):
        return Merkle.commit_([Merkle.H(bytes(da)).digest() for da in data_array])
    
    def open_( index, leafs ):
        assert(len(leafs) & (len(leafs)-1) == 0), "length must be power of two"
        assert(0 <= index and index < len(leafs)), "cannot open invalid index"
        if len(leafs) == 2:
            return [leafs[1 - index]]
        elif index < (len(leafs)/2):
            return Merkle.open_(index, leafs[:len(leafs)//2]) + [Merkle.commit_(leafs[len(leafs)//2:])]
        else:
            return Merkle.open_(index - len(leafs)//2, leafs[len(leafs)//2:]) + [Merkle.commit_(leafs[:len(leafs)//2])]

    def open( index, data_array ):
        return Merkle.open_(index, [Merkle.H(bytes(da)).digest() for da in data_array])
    
    def verify_( root, index, path, leaf ):
        assert(0 <= index and index < (1 << len(path))), "cannot verify invalid index"
        if len(path) == 1:
            if index == 0:
                return root == Merkle.H(leaf + path[0]).digest()
            else:
                return root == Merkle.H(path[0] + leaf).digest()
        else:
            if index % 2 == 0:
                return Merkle.verify_(root, index >> 1, path[1:], Merkle.H(leaf + path[0]).digest())
            else:
                return Merkle.verify_(root, index >> 1, path[1:], Merkle.H(path[0] + leaf).digest())

    def verify( root, index, path, data_element ):
        return Merkle.verify_(root, index, path, Merkle.H(bytes(data_element)).digest())
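A worked 4-leaf example of the conventions above: Merkle.open_ returns the path sibling-first, and Merkle.verify_ consumes one bit of the index per layer, hashing leaf-then-sibling for even indices and sibling-then-leaf for odd ones. This self-contained sketch unrolls that recursion by hand (the data values are illustrative):

```python
from hashlib import blake2b

H = lambda data: blake2b(data).digest()

# four data elements, hashed into leaf digests as Merkle.commit does
leafs = [H(bytes([i])) for i in range(4)]

# root, built bottom-up (equivalent to the recursive Merkle.commit_)
left = H(leafs[0] + leafs[1])
right = H(leafs[2] + leafs[3])
root = H(left + right)

# authentication path for index 2, sibling-first as Merkle.open_ returns it
path = [leafs[3], left]

# verification walks the path as Merkle.verify_ does for index 2 (binary 10)
running = H(leafs[2] + path[0])   # bottom layer: index 2 is even, so leaf + sibling
running = H(path[1] + running)    # next layer: index 1 is odd, so sibling + leaf
```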



================================================
FILE: code/multivariate.py
================================================
from univariate import *

class MPolynomial:
    def __init__( self, dictionary ):
        # Multivariate polynomials are represented as dictionaries with exponent vectors
        # as keys and coefficients as values. E.g.:
        # f(x,y,z) = 17 + 2xy + 42z - 19x^6*y^3*z^12 is represented as:
        # {
        #     (0,0,0) => 17,
        #     (1,1,0) => 2,
        #     (0,0,1) => 42,
        #     (6,3,12) => -19,
        # }
        self.dictionary = dictionary

    def zero():
        return MPolynomial(dict())

    def __add__( self, other ):
        dictionary = dict()
        num_variables = max([len(k) for k in self.dictionary.keys()] + [len(k) for k in other.dictionary.keys()])
        for k, v in self.dictionary.items():
            pad = list(k) + [0] * (num_variables - len(k))
            pad = tuple(pad)
            dictionary[pad] = v
        for k, v in other.dictionary.items():
            pad = list(k) + [0] * (num_variables - len(k))
            pad = tuple(pad)
            if pad in dictionary.keys():
                dictionary[pad] = dictionary[pad] + v
            else:
                dictionary[pad] = v
        return MPolynomial(dictionary)

    def __mul__( self, other ):
        dictionary = dict()
        num_variables = max([len(k) for k in self.dictionary.keys()] + [len(k) for k in other.dictionary.keys()])
        for k0, v0 in self.dictionary.items():
            for k1, v1 in other.dictionary.items():
                exponent = [0] * num_variables
                for k in range(len(k0)):
                    exponent[k] += k0[k]
                for k in range(len(k1)):
                    exponent[k] += k1[k]
                exponent = tuple(exponent)
                if exponent in dictionary.keys():
                    dictionary[exponent] = dictionary[exponent] + v0 * v1
                else:
                    dictionary[exponent] = v0 * v1
        return MPolynomial(dictionary)

    def __sub__( self, other ):
        return self + (-other)

    def __neg__( self ):
        dictionary = dict()
        for k, v in self.dictionary.items():
            dictionary[k] = -v
        return MPolynomial(dictionary)

    def __xor__( self, exponent ): # ^ is overloaded to mean exponentiation throughout this codebase
        if self.is_zero():
            return MPolynomial(dict())
        field = list(self.dictionary.values())[0].field
        num_variables = len(list(self.dictionary.keys())[0])
        exp = [0] * num_variables
        acc = MPolynomial({tuple(exp): field.one()})
        for b in bin(exponent)[2:]:
            acc = acc * acc
            if b == '1':
                acc = acc * self
        return acc

    def constant( element ):
        return MPolynomial({tuple([0]): element})

    def is_zero( self ):
        if not self.dictionary:
            return True
        else:
            for v in self.dictionary.values():
                if v.is_zero() == False:
                    return False
            return True

    # Returns the multivariate polynomials representing each indeterminate as a
    # linear function with leading coefficient one. For three indeterminates, returns:
    # [f(x,y,z) = x, f(x,y,z) = y, f(x,y,z) = z]
    def variables( num_variables, field ):
        variables = []
        for i in range(num_variables):
            exponent = [0] * i + [1] + [0] * (num_variables - i - 1)
            variables = variables + [MPolynomial({tuple(exponent): field.one()})]
        return variables

    def evaluate( self, point ):
        acc = point[0].field.zero()
        for k, v in self.dictionary.items():
            prod = v
            for i in range(len(k)):
                prod = prod * (point[i]^k[i])
            acc = acc + prod
        return acc

    def evaluate_symbolic( self, point ):
        acc = Polynomial([])
        for k, v in self.dictionary.items():
            prod = Polynomial([v])
            for i in range(len(k)):
                prod = prod * (point[i]^k[i])
            acc = acc + prod
        return acc

    def lift( polynomial, variable_index ):
        if polynomial.is_zero():
            return MPolynomial({})
        field = polynomial.coefficients[0].field
        variables = MPolynomial.variables(variable_index+1, field)
        x = variables[-1]
        acc = MPolynomial({})
        for i in range(len(polynomial.coefficients)):
            acc = acc + MPolynomial.constant(polynomial.coefficients[i]) * (x^i)
        return acc
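The exponent-vector dictionary representation described at the top of the class can be illustrated with plain integers standing in for field elements; this self-contained sketch mirrors MPolynomial.evaluate (the free-standing evaluate function is illustrative, not from the repo):

```python
# Sketch of the {exponent_tuple: coefficient} representation used by MPolynomial.

def evaluate(dictionary, point):
    """Evaluate a multivariate polynomial given as {exponent_tuple: coefficient}."""
    acc = 0
    for exponents, coefficient in dictionary.items():
        term = coefficient
        for variable, exponent in zip(point, exponents):
            term *= variable ** exponent
        acc += term
    return acc

# f(x,y,z) = 17 + 2xy + 42z - 19 x^6 y^3 z^12, the docstring example above
f = {(0, 0, 0): 17, (1, 1, 0): 2, (0, 0, 1): 42, (6, 3, 12): -19}
value = evaluate(f, (1, 1, 1))   # 17 + 2 + 42 - 19 = 42
```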


================================================
FILE: code/ntt.py
================================================
from univariate import *

def ntt( primitive_root, values ):
    assert(len(values) & (len(values) - 1) == 0), "cannot compute ntt of non-power-of-two sequence"
    if len(values) <= 1:
        return values

    field = values[0].field

    assert(primitive_root^len(values) == field.one()), "primitive root must be nth root of unity, where n is len(values)"
    assert(primitive_root^(len(values)//2) != field.one()), "primitive root is not primitive nth root of unity, where n is len(values)"

    half = len(values) // 2

    odds = ntt(primitive_root^2, values[1::2])
    evens = ntt(primitive_root^2, values[::2])

    return [evens[i % half] + (primitive_root^i) * odds[i % half] for i in range(len(values))]
 
def intt( primitive_root, values ):
    assert(len(values) & (len(values) - 1) == 0), "cannot compute intt of non-power-of-two sequence"

    if len(values) == 1:
        return values

    field = values[0].field
    ninv = FieldElement(len(values), field).inverse()

    transformed_values = ntt(primitive_root.inverse(), values)
    return [ninv*tv for tv in transformed_values]

def fast_multiply( lhs, rhs, primitive_root, root_order ):
    assert(primitive_root^root_order == primitive_root.field.one()), "supplied root does not have supplied order"
    assert(primitive_root^(root_order//2) != primitive_root.field.one()), "supplied root is not primitive root of supplied order"

    if lhs.is_zero() or rhs.is_zero():
        return Polynomial([])

    field = lhs.coefficients[0].field
    root = primitive_root
    order = root_order
    degree = lhs.degree() + rhs.degree()

    if degree < 8:
        return lhs * rhs

    while degree < order // 2:
        root = root^2
        order = order // 2

    lhs_coefficients = lhs.coefficients[:(lhs.degree()+1)]
    while len(lhs_coefficients) < order:
        lhs_coefficients += [field.zero()]
    rhs_coefficients = rhs.coefficients[:(rhs.degree()+1)]
    while len(rhs_coefficients) < order:
        rhs_coefficients += [field.zero()]

    lhs_codeword = ntt(root, lhs_coefficients)
    rhs_codeword = ntt(root, rhs_coefficients)

    hadamard_product = [l * r for (l, r) in zip(lhs_codeword, rhs_codeword)]

    product_coefficients = intt(root, hadamard_product)
    return Polynomial(product_coefficients[0:(degree+1)])

def fast_zerofier( domain, primitive_root, root_order ):
    assert(primitive_root^root_order == primitive_root.field.one()), "supplied root does not have supplied order"
    assert(primitive_root^(root_order//2) != primitive_root.field.one()), "supplied root is not primitive root of supplied order"

    if len(domain) == 0:
        return Polynomial([])

    if len(domain) == 1:
        return Polynomial([-domain[0], primitive_root.field.one()])

    half = len(domain) // 2

    left = fast_zerofier(domain[:half], primitive_root, root_order)
    right = fast_zerofier(domain[half:], primitive_root, root_order)
    return fast_multiply(left, right, primitive_root, root_order)

def fast_evaluate( polynomial, domain, primitive_root, root_order ):
    assert(primitive_root^root_order == primitive_root.field.one()), "supplied root does not have supplied order"
    assert(primitive_root^(root_order//2) != primitive_root.field.one()), "supplied root is not primitive root of supplied order"

    if len(domain) == 0:
        return []

    if len(domain) == 1:
        return [polynomial.evaluate(domain[0])]

    half = len(domain) // 2

    left_zerofier = fast_zerofier(domain[:half], primitive_root, root_order)
    right_zerofier = fast_zerofier(domain[half:], primitive_root, root_order)

    left = fast_evaluate(polynomial % left_zerofier, domain[:half], primitive_root, root_order)
    right = fast_evaluate(polynomial % right_zerofier, domain[half:], primitive_root, root_order)

    return left + right

def fast_interpolate( domain, values, primitive_root, root_order ):
    assert(primitive_root^root_order == primitive_root.field.one()), "supplied root does not have supplied order"
    assert(primitive_root^(root_order//2) != primitive_root.field.one()), "supplied root is not primitive root of supplied order"
    assert(len(domain) == len(values)), "cannot interpolate over domain of different length than values list"

    if len(domain) == 0:
        return Polynomial([])

    if len(domain) == 1:
        return Polynomial([values[0]])

    half = len(domain) // 2

    left_zerofier = fast_zerofier(domain[:half], primitive_root, root_order)
    right_zerofier = fast_zerofier(domain[half:], primitive_root, root_order)

    left_offset = fast_evaluate(right_zerofier, domain[:half], primitive_root, root_order)
    right_offset = fast_evaluate(left_zerofier, domain[half:], primitive_root, root_order)

    if not all(not v.is_zero() for v in left_offset):
        print("left_offset:", " ".join(str(v) for v in left_offset))

    left_targets = [n / d for (n,d) in zip(values[:half], left_offset)]
    right_targets = [n / d for (n,d) in zip(values[half:], right_offset)]

    left_interpolant = fast_interpolate(domain[:half], left_targets, primitive_root, root_order)
    right_interpolant = fast_interpolate(domain[half:], right_targets, primitive_root, root_order)

    return left_interpolant * right_zerofier + right_interpolant * left_zerofier

def fast_coset_evaluate( polynomial, offset, generator, order ):
    scaled_polynomial = polynomial.scale(offset)
    values = ntt(generator, scaled_polynomial.coefficients + [offset.field.zero()] * (order - len(polynomial.coefficients)))
    return values

def fast_coset_divide( lhs, rhs, offset, primitive_root, root_order ): # clean division only!
    assert(primitive_root^root_order == primitive_root.field.one()), "supplied root does not have supplied order"
    assert(primitive_root^(root_order//2) != primitive_root.field.one()), "supplied root is not primitive root of supplied order"
    assert(not rhs.is_zero()), "cannot divide by zero polynomial"

    if lhs.is_zero():
        return Polynomial([])

    assert(rhs.degree() <= lhs.degree()), "cannot divide by polynomial of larger degree"

    field = lhs.coefficients[0].field
    root = primitive_root
    order = root_order
    degree = max(lhs.degree(),rhs.degree())

    if degree < 8:
        return lhs / rhs

    while degree < order // 2:
        root = root^2
        order = order // 2

    scaled_lhs = lhs.scale(offset)
    scaled_rhs = rhs.scale(offset)
    
    lhs_coefficients = scaled_lhs.coefficients[:(lhs.degree()+1)]
    while len(lhs_coefficients) < order:
        lhs_coefficients += [field.zero()]
    rhs_coefficients = scaled_rhs.coefficients[:(rhs.degree()+1)]
    while len(rhs_coefficients) < order:
        rhs_coefficients += [field.zero()]

    lhs_codeword = ntt(root, lhs_coefficients)
    rhs_codeword = ntt(root, rhs_coefficients)

    quotient_codeword = [l / r for (l, r) in zip(lhs_codeword, rhs_codeword)]
    scaled_quotient_coefficients = intt(root, quotient_codeword)
    scaled_quotient = Polynomial(scaled_quotient_coefficients[:(lhs.degree() - rhs.degree() + 1)])

    return scaled_quotient.scale(offset.inverse())
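The recursive ntt/intt pair above is the engine behind every fast_* routine in this file: ntt evaluates a coefficient list at the powers of a primitive root, and intt inverts it by transforming with the inverse root and scaling by n^-1. A self-contained roundtrip sketch over the toy field GF(17), with plain integers in place of FieldElement (parameter values illustrative):

```python
# Self-contained sketch of the recursive NTT above, over GF(17).
P = 17
OMEGA = 13   # primitive 4th root of unity mod 17: 13^2 = -1, 13^4 = 1

def ntt(root, values):
    """Evaluate the polynomial with these coefficients at root^0 .. root^(n-1)."""
    if len(values) <= 1:
        return values
    half = len(values) // 2
    evens = ntt(root * root % P, values[::2])
    odds = ntt(root * root % P, values[1::2])
    return [(evens[i % half] + pow(root, i, P) * odds[i % half]) % P
            for i in range(len(values))]

def intt(root, values):
    """Inverse transform: NTT with root^-1, scaled by n^-1."""
    n_inv = pow(len(values), P - 2, P)
    root_inv = pow(root, P - 2, P)
    return [v * n_inv % P for v in ntt(root_inv, values)]

coeffs = [3, 1, 4, 1]
transformed = ntt(OMEGA, coeffs)   # evaluations on the 4-element domain
roundtrip = intt(OMEGA, transformed)
```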



================================================
FILE: code/rescue_prime.py
================================================
from algebra import *
from univariate import *
from multivariate import *

class RescuePrime:
    def __init__( self ):
        self.p = 407 * (1 << 119) + 1
        self.field = Field(self.p)
        self.m = 2
        self.rate = 1
        self.capacity = 1
        self.N = 27
        self.alpha = 3
        self.alphainv = 180331931428153586757283157844700080811
        self.MDS = [[FieldElement(v, self.field) for v in [270497897142230380135924736767050121214, 4]],
                    [FieldElement(v, self.field) for v in [270497897142230380135924736767050121205, 13]]]
        self.MDSinv = [[FieldElement(v, self.field) for v in [210387253332845851216830350818816760948, 60110643809384528919094385948233360270]],
                       [FieldElement(v, self.field) for v in [90165965714076793378641578922350040407, 180331931428153586757283157844700080811]]]
        self.round_constants = [FieldElement(v, self.field) for v in [174420698556543096520990950387834928928,
                                        109797589356993153279775383318666383471,
                                        228209559001143551442223248324541026000,
                                        268065703411175077628483247596226793933,
                                        250145786294793103303712876509736552288,
                                        154077925986488943960463842753819802236,
                                        204351119916823989032262966063401835731,
                                        57645879694647124999765652767459586992,
                                        102595110702094480597072290517349480965,
                                        8547439040206095323896524760274454544,
                                        50572190394727023982626065566525285390,
                                        87212354645973284136664042673979287772,
                                        64194686442324278631544434661927384193,
                                        23568247650578792137833165499572533289,
                                        264007385962234849237916966106429729444,
                                        227358300354534643391164539784212796168,
                                        179708233992972292788270914486717436725,
                                        102544935062767739638603684272741145148,
                                        65916940568893052493361867756647855734,
                                        144640159807528060664543800548526463356,
                                        58854991566939066418297427463486407598,
                                        144030533171309201969715569323510469388,
                                        264508722432906572066373216583268225708,
                                        22822825100935314666408731317941213728,
                                        33847779135505989201180138242500409760,
                                        146019284593100673590036640208621384175,
                                        51518045467620803302456472369449375741,
                                        73980612169525564135758195254813968438,
                                        31385101081646507577789564023348734881,
                                        270440021758749482599657914695597186347,
                                        185230877992845332344172234234093900282,
                                        210581925261995303483700331833844461519,
                                        233206235520000865382510460029939548462,
                                        178264060478215643105832556466392228683,
                                        69838834175855952450551936238929375468,
                                        75130152423898813192534713014890860884,
                                        59548275327570508231574439445023390415,
                                        43940979610564284967906719248029560342,
                                        95698099945510403318638730212513975543,
                                        77477281413246683919638580088082585351,
                                        206782304337497407273753387483545866988,
                                        141354674678885463410629926929791411677,
                                        19199940390616847185791261689448703536,
                                        177613618019817222931832611307175416361,
                                        267907751104005095811361156810067173120,
                                        33296937002574626161968730356414562829,
                                        63869971087730263431297345514089710163,
                                        200481282361858638356211874793723910968,
                                        69328322389827264175963301685224506573,
                                        239701591437699235962505536113880102063,
                                        17960711445525398132996203513667829940,
                                        219475635972825920849300179026969104558,
                                        230038611061931950901316413728344422823,
                                        149446814906994196814403811767389273580,
                                        25535582028106779796087284957910475912,
                                        93289417880348777872263904150910422367,
                                        4779480286211196984451238384230810357,
                                        208762241641328369347598009494500117007,
                                        34228805619823025763071411313049761059,
                                        158261639460060679368122984607245246072,
                                        65048656051037025727800046057154042857,
                                        134082885477766198947293095565706395050,
                                        23967684755547703714152865513907888630,
                                        8509910504689758897218307536423349149,
                                        232305018091414643115319608123377855094,
                                        170072389454430682177687789261779760420,
                                        62135161769871915508973643543011377095,
                                        15206455074148527786017895403501783555,
                                        201789266626211748844060539344508876901,
                                        179184798347291033565902633932801007181,
                                        9615415305648972863990712807943643216,
                                        95833504353120759807903032286346974132,
                                        181975981662825791627439958531194157276,
                                        267590267548392311337348990085222348350,
                                        49899900194200760923895805362651210299,
                                        89154519171560176870922732825690870368,
                                        265649728290587561988835145059696796797,
                                        140583850659111280842212115981043548773,
                                        266613908274746297875734026718148328473,
                                        236645120614796645424209995934912005038,
                                        265994065390091692951198742962775551587,
                                        59082836245981276360468435361137847418,
                                        26520064393601763202002257967586372271,
                                        108781692876845940775123575518154991932,
                                        138658034947980464912436420092172339656,
                                        45127926643030464660360100330441456786,
                                        210648707238405606524318597107528368459,
                                        42375307814689058540930810881506327698,
                                        237653383836912953043082350232373669114,
                                        236638771475482562810484106048928039069,
                                        168366677297979943348866069441526047857,
                                        195301262267610361172900534545341678525,
                                        2123819604855435621395010720102555908,
                                        96986567016099155020743003059932893278,
                                        248057324456138589201107100302767574618,
                                        198550227406618432920989444844179399959,
                                        177812676254201468976352471992022853250,
                                        211374136170376198628213577084029234846,
                                        105785712445518775732830634260671010540,
                                        122179368175793934687780753063673096166,
                                        126848216361173160497844444214866193172,
                                        22264167580742653700039698161547403113,
                                        234275908658634858929918842923795514466,
                                        189409811294589697028796856023159619258,
                                        75017033107075630953974011872571911999,
                                        144945344860351075586575129489570116296,
                                        261991152616933455169437121254310265934,
                                        18450316039330448878816627264054416127]]

    def hash( self, input_element ):
        # absorb
        state = [input_element] + [self.field.zero()] * (self.m - 1)

        # permutation
        for r in range(self.N):
            
            # forward half-round
            # S-box
            for i in range(self.m):
                state[i] = state[i]^self.alpha
            # matrix
            temp = [self.field.zero() for i in range(self.m)]
            for i in range(self.m):
                for j in range(self.m):
                    temp[i] = temp[i] + self.MDS[i][j] * state[j]
            # constants
            state = [temp[i] + self.round_constants[2*r*self.m+i] for i in range(self.m)]

            # backward half-round
            # S-box
            for i in range(self.m):
                state[i] = state[i]^self.alphainv
            # matrix
            temp = [self.field.zero() for i in range(self.m)]
            for i in range(self.m):
                for j in range(self.m):
                    temp[i] = temp[i] + self.MDS[i][j] * state[j]
            # constants
            state = [temp[i] + self.round_constants[2*r*self.m+self.m+i] for i in range(self.m)]

        # squeeze
        return state[0]

    def trace( self, input_element ):
        trace = []

        # absorb
        state = [input_element] + [self.field.zero()] * (self.m - 1)

        # explicit copy to record state into trace
        trace += [[s for s in state]]

        # permutation
        for r in range(self.N):
            
            # forward half-round
            # S-box
            for i in range(self.m):
                state[i] = state[i]^self.alpha
            # matrix
            temp = [self.field.zero() for i in range(self.m)]
            for i in range(self.m):
                for j in range(self.m):
                    temp[i] = temp[i] + self.MDS[i][j] * state[j]
            # constants
            state = [temp[i] + self.round_constants[2*r*self.m+i] for i in range(self.m)]

            # backward half-round
            # S-box
            for i in range(self.m):
                state[i] = state[i]^self.alphainv
            # matrix
            temp = [self.field.zero() for i in range(self.m)]
            for i in range(self.m):
                for j in range(self.m):
                    temp[i] = temp[i] + self.MDS[i][j] * state[j]
            # constants
            state = [temp[i] + self.round_constants[2*r*self.m+self.m+i] for i in range(self.m)]
            
            # record state at this point, with explicit copy
            trace += [[s for s in state]]

        # squeeze: the output element is state[0] of the final state, i.e.
        # trace[-1][0]; the caller reads it from the returned trace

        return trace

    def boundary_constraints( self, output_element ):
        constraints = []

        # at start, capacity is zero
        constraints += [(0, 1, self.field.zero())]

        # at end, rate part is the given output element
        constraints += [(self.N, 0, output_element)]

        return constraints

    def round_constants_polynomials( self, omicron ):
        first_step_constants = []
        for i in range(self.m):
            domain = [omicron^r for r in range(0, self.N)]
            values = [self.round_constants[2*r*self.m+i] for r in range(0, self.N)]
            univariate = Polynomial.interpolate_domain(domain, values)
            multivariate = MPolynomial.lift(univariate, 0)
            first_step_constants += [multivariate]
        second_step_constants = []
        for i in range(self.m):
            domain = [omicron^r for r in range(0, self.N)]
            values = [self.round_constants[2*r*self.m + self.m + i] for r in range(self.N)]
            univariate = Polynomial.interpolate_domain(domain, values)
            multivariate = MPolynomial.lift(univariate, 0)
            second_step_constants += [multivariate]

        return first_step_constants, second_step_constants

    def transition_constraints( self, omicron ):
        # get polynomials that interpolate through the round constants
        first_step_constants, second_step_constants = self.round_constants_polynomials(omicron)

        # arithmetize one round of Rescue-Prime
        variables = MPolynomial.variables(1 + 2*self.m, self.field)
        cycle_index = variables[0]
        previous_state = variables[1:(1+self.m)]
        next_state = variables[(1+self.m):(1+2*self.m)]
        air = []
        for i in range(self.m):
            # compute left hand side symbolically
            # lhs = sum(MPolynomial.constant(self.MDS[i][k]) * (previous_state[k]^self.alpha) for k in range(self.m)) + first_step_constants[i]
            lhs = MPolynomial.constant(self.field.zero())
            for k in range(self.m):
                lhs = lhs + MPolynomial.constant(self.MDS[i][k]) * (previous_state[k]^self.alpha)
            lhs = lhs + first_step_constants[i]

            # compute right hand side symbolically
            # rhs = sum(MPolynomial.constant(self.MDSinv[i][k]) * (next_state[k] - second_step_constants[k]) for k in range(self.m))^self.alpha
            rhs = MPolynomial.constant(self.field.zero())
            for k in range(self.m):
                rhs = rhs + MPolynomial.constant(self.MDSinv[i][k]) * (next_state[k] - second_step_constants[k])
            rhs = rhs^self.alpha

            # equate left and right hand sides
            air += [lhs-rhs]

        return air

    def randomizer_freedom( self, omicron, num_randomizers ):
        domain = [omicron^i for i in range(self.N, self.N+num_randomizers)]
        zerofier = Polynomial.zerofier_domain(domain)
        multivariate_zerofier = MPolynomial.lift(zerofier, 0)
        return multivariate_zerofier
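The algebraic trick behind `transition_constraints` above is that one full round equates a degree-alpha expression in the previous state with a degree-alpha expression in the next state, by moving the inverse S-box to the other side of the equation. A minimal sketch over a toy field, with a made-up 2x2 matrix and made-up constants (not the real Rescue-Prime parameters):

```python
p = 97                       # toy prime; the real field above uses p = 407 * 2^119 + 1
alpha, alphainv = 5, 77      # 5 * 77 == 1 (mod 96), so x -> x^5 permutes the field
MDS = [[1, 2], [3, 5]]       # made-up invertible 2x2 matrix mod 97, not the real MDS
c1, c2 = [7, 11], [13, 17]   # made-up round constants

def matvec(M, v):
    # matrix-vector product mod p
    return [sum(M[i][j] * v[j] for j in range(2)) % p for i in range(2)]

def matinv2(M):
    # inverse of a 2x2 matrix mod p
    det = (M[0][0] * M[1][1] - M[0][1] * M[1][0]) % p
    d = pow(det, -1, p)
    return [[M[1][1] * d % p, -M[0][1] * d % p],
            [-M[1][0] * d % p, M[0][0] * d % p]]

MDSinv = matinv2(MDS)

# forward: one full round (S-box, MDS, constants; inverse S-box, MDS, constants)
prev = [42, 19]
mid = [(y + c) % p for y, c in zip(matvec(MDS, [pow(x, alpha, p) for x in prev]), c1)]
nxt = [(y + c) % p for y, c in zip(matvec(MDS, [pow(x, alphainv, p) for x in mid]), c2)]

# AIR identity: both sides equal `mid`, and each side has degree alpha only
lhs = [(y + c) % p for y, c in zip(matvec(MDS, [pow(x, alpha, p) for x in prev]), c1)]
rhs = [pow(y, alpha, p) for y in matvec(MDSinv, [(n - c) % p for n, c in zip(nxt, c2)])]
assert lhs == rhs
```

Without this rearrangement the constraint would involve the high-degree map x^(1/alpha); equating the two half-round expressions keeps both sides at degree alpha.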



================================================
FILE: code/rpsss.py
================================================
from rescue_prime import *
from stark import *
from hashlib import blake2s
import os
import pickle as pickle

class SignatureProofStream(ProofStream):
    def __init__( self, document ):
        ProofStream.__init__(self)
        self.document = document
        self.prefix = blake2s(bytes(document)).digest()

    def prover_fiat_shamir( self, num_bytes=32 ):
        return shake_256(self.prefix + self.serialize()).digest(num_bytes)

    def verifier_fiat_shamir( self, num_bytes=32 ):
        return shake_256(self.prefix + pickle.dumps(self.objects[:self.read_index])).digest(num_bytes)

    def deserialize( self, bb ):
        sps = SignatureProofStream(self.document)
        sps.objects = pickle.loads(bb)
        return sps

class RPSSS:
    def __init__( self ):
        self.field = Field.main()
        expansion_factor = 4
        num_colinearity_checks = 64
        security_level = 2 * num_colinearity_checks

        self.rp = RescuePrime()
        num_cycles = self.rp.N+1
        state_width = self.rp.m

        self.stark = Stark(self.field, expansion_factor, num_colinearity_checks, security_level, state_width, num_cycles, transition_constraints_degree=3)

    def stark_prove( self, input_element, proof_stream ):
        output_element = self.rp.hash(input_element)

        trace = self.rp.trace(input_element)
        transition_constraints = self.rp.transition_constraints(self.stark.omicron)
        boundary_constraints = self.rp.boundary_constraints(output_element)
        proof = self.stark.prove(trace, transition_constraints, boundary_constraints, proof_stream)
 
        return proof

    def stark_verify( self, output_element, stark_proof, proof_stream ):
        boundary_constraints = self.rp.boundary_constraints(output_element)
        transition_constraints = self.rp.transition_constraints(self.stark.omicron)
        return self.stark.verify(stark_proof, transition_constraints, boundary_constraints, proof_stream)

    def keygen( self ):
        sk = self.field.sample(os.urandom(17))
        pk = self.rp.hash(sk)
        return sk, pk

    def sign( self, sk, document ):
        sps = SignatureProofStream(document)
        signature = self.stark_prove(sk, sps)
        return signature

    def verify( self, pk, document, signature ):
        sps = SignatureProofStream(document)
        return self.stark_verify(pk, signature, sps)
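The point of `SignatureProofStream` is that every Fiat-Shamir challenge is derived from `H(document) || transcript`, so a proof transcript is cryptographically bound to one document. A toy stand-in using the same hash functions, but a bare transcript byte string instead of the serialized proof stream above:

```python
from hashlib import blake2s, shake_256

def fiat_shamir(document: bytes, transcript: bytes, num_bytes: int = 32) -> bytes:
    prefix = blake2s(document).digest()   # binds every challenge to the signed document
    return shake_256(prefix + transcript).digest(num_bytes)

transcript = b"merkle-roots-and-fri-messages"
c_a = fiat_shamir(b"pay Alice 10", transcript)
c_b = fiat_shamir(b"pay Alice 1000", transcript)
assert c_a != c_b   # changing the document changes the verifier's challenges
```

Because the verifier recomputes the same prefixed hash, a signature lifted onto a different document fails verification with overwhelming probability.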



================================================
FILE: code/stark.py
================================================
from fri import *
from univariate import *
from multivariate import *
from functools import reduce
import os

class Stark:
    def __init__( self, field, expansion_factor, num_colinearity_checks, security_level, num_registers, num_cycles, transition_constraints_degree=2 ):
        assert(len(bin(field.p)) - 2 >= security_level), "p must have at least as many bits as security level"
        assert(expansion_factor & (expansion_factor - 1) == 0), "expansion factor must be a power of 2"
        assert(expansion_factor >= 4), "expansion factor must be 4 or greater"
        assert(num_colinearity_checks * 2 >= security_level), "number of colinearity checks must be at least half of security level"

        self.field = field
        self.expansion_factor = expansion_factor
        self.num_colinearity_checks = num_colinearity_checks
        self.security_level = security_level

        self.num_randomizers = 4*num_colinearity_checks

        self.num_registers = num_registers
        self.original_trace_length = num_cycles
        
        randomized_trace_length = self.original_trace_length + self.num_randomizers
        omicron_domain_length = 1 << len(bin(randomized_trace_length * transition_constraints_degree)[2:])
        fri_domain_length = omicron_domain_length * expansion_factor

        self.generator = self.field.generator()
        self.omega = self.field.primitive_nth_root(fri_domain_length)
        self.omicron = self.field.primitive_nth_root(omicron_domain_length)
        self.omicron_domain = [self.omicron^i for i in range(omicron_domain_length)]

        self.fri = Fri(self.generator, self.omega, fri_domain_length, self.expansion_factor, self.num_colinearity_checks)

    def transition_degree_bounds( self, transition_constraints ):
        point_degrees = [1] + [self.original_trace_length+self.num_randomizers-1] * 2*self.num_registers
        return [max( sum(r*l for r, l in zip(point_degrees, k)) for k, v in a.dictionary.items()) for a in transition_constraints]

    def transition_quotient_degree_bounds( self, transition_constraints ):
        return [d - (self.original_trace_length-1) for d in self.transition_degree_bounds(transition_constraints)]

    def max_degree( self, transition_constraints ):
        md = max(self.transition_quotient_degree_bounds(transition_constraints))
        return (1 << (len(bin(md)[2:]))) - 1

    def transition_zerofier( self ):
        domain = self.omicron_domain[0:(self.original_trace_length-1)]
        return Polynomial.zerofier_domain(domain)

    def boundary_zerofiers( self, boundary ):
        zerofiers = []
        for s in range(self.num_registers):
            points = [self.omicron^c for c, r, v in boundary if r == s]
            zerofiers = zerofiers + [Polynomial.zerofier_domain(points)]
        return zerofiers

    def boundary_interpolants( self, boundary ):
        interpolants = []
        for s in range(self.num_registers):
            points = [(c,v) for c, r, v in boundary if r == s]
            domain = [self.omicron^c for c,v in points]
            values = [v for c,v in points]
            interpolants = interpolants + [Polynomial.interpolate_domain(domain, values)]
        return interpolants

    def boundary_quotient_degree_bounds( self, randomized_trace_length, boundary ):
        randomized_trace_degree = randomized_trace_length - 1
        return [randomized_trace_degree - bz.degree() for bz in self.boundary_zerofiers(boundary)]

    def sample_weights( self, number, randomness ):
        return [self.field.sample(blake2b(randomness + bytes(i)).digest()) for i in range(0, number)]

    def prove( self, trace, transition_constraints, boundary, proof_stream=None ):
        # create proof stream object if necessary
        if proof_stream is None:
            proof_stream = ProofStream()
        
        # concatenate randomizers
        for k in range(self.num_randomizers):
            trace = trace + [[self.field.sample(os.urandom(17)) for s in range(self.num_registers)]]

        # interpolate
        trace_domain = [self.omicron^i for i in range(len(trace))]
        trace_polynomials = []
        for s in range(self.num_registers):
            single_trace = [trace[c][s] for c in range(len(trace))]
            trace_polynomials = trace_polynomials + [Polynomial.interpolate_domain(trace_domain, single_trace)]

        # subtract boundary interpolants and divide out boundary zerofiers
        # compute interpolants and zerofiers once, rather than once per register
        interpolants = self.boundary_interpolants(boundary)
        zerofiers = self.boundary_zerofiers(boundary)
        boundary_quotients = []
        for s in range(self.num_registers):
            quotient = (trace_polynomials[s] - interpolants[s]) / zerofiers[s]
            boundary_quotients += [quotient]

        # commit to boundary quotients
        fri_domain = self.fri.eval_domain()
        boundary_quotient_codewords = []
        boundary_quotient_Merkle_roots = []
        for s in range(self.num_registers):
            boundary_quotient_codewords = boundary_quotient_codewords + [boundary_quotients[s].evaluate_domain(fri_domain)]
            merkle_root = Merkle.commit(boundary_quotient_codewords[s])
            proof_stream.push(merkle_root)

        # symbolically evaluate transition constraints
        point = [Polynomial([self.field.zero(), self.field.one()])] + trace_polynomials + [tp.scale(self.omicron) for tp in trace_polynomials]
        transition_polynomials = [a.evaluate_symbolic(point) for a in transition_constraints]

        # divide out zerofier
        transition_quotients = [tp / self.transition_zerofier() for tp in transition_polynomials]

        # commit to randomizer polynomial
        randomizer_polynomial = Polynomial([self.field.sample(os.urandom(17)) for i in range(self.max_degree(transition_constraints)+1)])
        randomizer_codeword = randomizer_polynomial.evaluate_domain(fri_domain) 
        randomizer_root = Merkle.commit(randomizer_codeword)
        proof_stream.push(randomizer_root)

        # get weights for nonlinear combination
        #  - 1 randomizer
        #  - 2 for every transition quotient
        #  - 2 for every boundary quotient
        weights = self.sample_weights(1 + 2*len(transition_quotients) + 2*len(boundary_quotients), proof_stream.prover_fiat_shamir())

        assert([tq.degree() for tq in transition_quotients] == self.transition_quotient_degree_bounds(transition_constraints)), "transition quotient degrees do not match with expectation"

        # compute terms of nonlinear combination polynomial
        x = Polynomial([self.field.zero(), self.field.one()])
        max_degree = self.max_degree(transition_constraints)
        terms = []
        terms += [randomizer_polynomial]
        for i in range(len(transition_quotients)):
            terms += [transition_quotients[i]]
            shift = max_degree - self.transition_quotient_degree_bounds(transition_constraints)[i]
            terms += [(x^shift) * transition_quotients[i]]
        for i in range(self.num_registers):
            terms += [boundary_quotients[i]]
            shift = max_degree - self.boundary_quotient_degree_bounds(len(trace), boundary)[i]
            terms += [(x^shift) * boundary_quotients[i]]

        # take weighted sum
        # combination = sum(weights[i] * terms[i] for all i)
        combination = reduce(lambda a, b : a+b, [Polynomial([weights[i]]) * terms[i] for i in range(len(terms))], Polynomial([]))

        # compute matching codeword
        combined_codeword = combination.evaluate_domain(fri_domain)

        # prove low degree of combination polynomial, and collect indices
        indices = self.fri.prove(combined_codeword, proof_stream)

        # process indices
        duplicated_indices = [i for i in indices] + [(i + self.expansion_factor) % self.fri.domain_length for i in indices]
        quadrupled_indices = [i for i in duplicated_indices] + [(i + (self.fri.domain_length // 2)) % self.fri.domain_length for i in duplicated_indices]
        quadrupled_indices.sort()

        # open indicated positions in the boundary quotient codewords
        for bqc in boundary_quotient_codewords:
            for i in quadrupled_indices:
                proof_stream.push(bqc[i])
                path = Merkle.open(i, bqc)
                proof_stream.push(path)

        # ... as well as in the randomizer
        for i in quadrupled_indices:
            proof_stream.push(randomizer_codeword[i])
            path = Merkle.open(i, randomizer_codeword)
            proof_stream.push(path)

        # the final proof is just the serialized stream
        return proof_stream.serialize()

    def verify( self, proof, transition_constraints, boundary, proof_stream=None ):

        # infer trace length from boundary conditions
        original_trace_length = 1 + max(c for c, r, v in boundary)
        randomized_trace_length = original_trace_length + self.num_randomizers

        # deserialize with right proof stream
        if proof_stream is None:
            proof_stream = ProofStream()
        proof_stream = proof_stream.deserialize(proof)

        # get Merkle roots of boundary quotient codewords
        boundary_quotient_roots = []
        for s in range(self.num_registers):
            boundary_quotient_roots = boundary_quotient_roots + [proof_stream.pull()]

        # get Merkle root of randomizer polynomial
        randomizer_root = proof_stream.pull()

        # get weights for nonlinear combination
        weights = self.sample_weights(1 + 2*len(transition_constraints) + 2*len(self.boundary_interpolants(boundary)), proof_stream.verifier_fiat_shamir())

        # verify low degree of combination polynomial
        polynomial_values = []
        verifier_accepts = self.fri.verify(proof_stream, polynomial_values)
        polynomial_values.sort(key=lambda iv : iv[0])
        if not verifier_accepts:
            return False

        indices = [i for i,v in polynomial_values]
        values = [v for i,v in polynomial_values]

        # read and verify leafs, which are elements of boundary quotient codewords
        duplicated_indices = [i for i in indices] + [(i + self.expansion_factor) % self.fri.domain_length for i in indices]
        duplicated_indices.sort()
        leafs = []
        for r in range(len(boundary_quotient_roots)):
            leafs = leafs + [dict()]
            for i in duplicated_indices:
                leafs[r][i] = proof_stream.pull()
                path = proof_stream.pull()
                verifier_accepts = verifier_accepts and Merkle.verify(boundary_quotient_roots[r], i, path, leafs[r][i])
                if not verifier_accepts:
                    return False

        # read and verify randomizer leafs
        randomizer = dict()
        for i in duplicated_indices:
            randomizer[i] = proof_stream.pull()
            path = proof_stream.pull()
            verifier_accepts = verifier_accepts and Merkle.verify(randomizer_root, i, path, randomizer[i])
            if not verifier_accepts:
                return False

        # verify leafs of combination polynomial
        for i in range(len(indices)):
            current_index = indices[i] # the loop variable i is also needed below, to index into values

            # get trace values by applying a correction to the boundary quotient values (which are the leafs)
            domain_current_index = self.generator * (self.omega^current_index)
            next_index = (current_index + self.expansion_factor) % self.fri.domain_length
            domain_next_index = self.generator * (self.omega^next_index)
            current_trace = [self.field.zero() for s in range(self.num_registers)]
            next_trace = [self.field.zero() for s in range(self.num_registers)]
            # compute all boundary zerofiers and interpolants once, not once per register
            zerofiers = self.boundary_zerofiers(boundary)
            interpolants = self.boundary_interpolants(boundary)
            for s in range(self.num_registers):
                current_trace[s] = leafs[s][current_index] * zerofiers[s].evaluate(domain_current_index) + interpolants[s].evaluate(domain_current_index)
                next_trace[s] = leafs[s][next_index] * zerofiers[s].evaluate(domain_next_index) + interpolants[s].evaluate(domain_next_index)

            point = [domain_current_index] + current_trace + next_trace
            transition_constraints_values = [transition_constraints[s].evaluate(point) for s in range(len(transition_constraints))]

            # compute nonlinear combination
            terms = []
            terms += [randomizer[current_index]]
            for s in range(len(transition_constraints_values)):
                tcv = transition_constraints_values[s]
                quotient = tcv / self.transition_zerofier().evaluate(domain_current_index)
                terms += [quotient]
                shift = self.max_degree(transition_constraints) - self.transition_quotient_degree_bounds(transition_constraints)[s]
                terms += [quotient * (domain_current_index^shift)]
            for s in range(self.num_registers):
                bqv = leafs[s][current_index] # boundary quotient value
                terms += [bqv]
                shift = self.max_degree(transition_constraints) - self.boundary_quotient_degree_bounds(randomized_trace_length, boundary)[s]
                terms += [bqv * (domain_current_index^shift)]
            combination = reduce(lambda a, b : a+b, [terms[j] * weights[j] for j in range(len(terms))], self.field.zero())

            # verify against combination polynomial value
            verifier_accepts = verifier_accepts and (combination == values[i])
            if not verifier_accepts:
                return False

        return verifier_accepts
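The leaf correction in `verify` above relies on the identity trace(x) = boundary_quotient(x) * zerofier(x) + interpolant(x), which lets the verifier recover trace values from committed quotient leafs without a commitment to the trace itself. A minimal sketch over a toy field with made-up coefficients (not the `Polynomial` class above), for a single boundary condition trace(1) = v0, where the zerofier is (x - 1) and the interpolant is the constant v0:

```python
p = 97
v0 = 13                  # boundary condition: trace polynomial equals 13 at x = 1
t = [87, 5, 7, 11]       # trace polynomial, little-endian; t(1) = 110 % 97 = 13

def evaluate(a, x):
    # Horner evaluation mod p, little-endian coefficients
    acc = 0
    for c in reversed(a):
        acc = (acc * x + c) % p
    return acc

def divide_by_linear(a, r):
    # synthetic division of a(x) by (x - r); returns (quotient, remainder)
    n = len(a) - 1
    q = [0] * n
    q[n - 1] = a[n]
    for k in range(n - 2, -1, -1):
        q[k] = (a[k + 1] + r * q[k + 1]) % p
    return q, (a[0] + r * q[0]) % p

numerator = [(t[0] - v0) % p] + t[1:]   # t(x) - interpolant(x)
q, rem = divide_by_linear(numerator, 1)
assert rem == 0                         # division by the zerofier is exact

# the verifier recomputes trace values from quotient leafs at queried points:
for x in range(2, 10):
    assert (evaluate(q, x) * (x - 1) + v0) % p == evaluate(t, x)
```

The quotient has degree one less than the trace polynomial, which is exactly why the boundary quotient degree bounds above subtract the zerofier's degree.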



================================================
FILE: code/test_fast_stark.py
================================================
from algebra import *
from univariate import *
from multivariate import *
from rescue_prime import *
from fri import *
from ip import *
from fast_stark import *

def test_fast_stark( ):
    field = Field.main()
    expansion_factor = 4
    num_colinearity_checks = 2
    security_level = 2

    rp = RescuePrime()
    output_element = field.sample(bytes(b'0xdeadbeef'))

    for trial in range(0, 20):
        input_element = output_element
        print("running trial with input:", input_element.value)
        output_element = rp.hash(input_element)
        num_cycles = rp.N+1
        state_width = rp.m

        stark = FastStark(field, expansion_factor, num_colinearity_checks, security_level, state_width, num_cycles)
        transition_zerofier, transition_zerofier_codeword, transition_zerofier_root = stark.preprocess()

        # prove honestly
        print("honest proof generation ...")

        # prove
        trace = rp.trace(input_element)
        air = rp.transition_constraints(stark.omicron)
        boundary = rp.boundary_constraints(output_element)
        proof = stark.prove(trace, air, boundary, transition_zerofier, transition_zerofier_codeword)

        # verify
        verdict = stark.verify(proof, air, boundary, transition_zerofier_root)

        assert(verdict == True), "valid stark proof fails to verify"
        print("success \\o/")

        print("verifying false claim ...")
        # verify false claim
        output_element_ = output_element + field.one()
        boundary_ = rp.boundary_constraints(output_element_)
        verdict = stark.verify(proof, air, boundary_, transition_zerofier_root)

        assert(verdict == False), "invalid stark proof verifies"
        print("proof rejected! \\o/")

        # prove with false witness
        print("attempting to prove with witness violating transition constraints (prover does not abort, because fast coset division skips the divisibility check) ...")
        cycle = 1 + (int(os.urandom(1)[0]) % (len(trace) - 1))
        register = int(os.urandom(1)[0]) % state_width
        error = field.sample(os.urandom(17))
    
        trace[cycle][register] = trace[cycle][register] + error
    
        proof = stark.prove(trace, air, boundary, transition_zerofier, transition_zerofier_codeword)

        print(" ... but verification should fail :D")
        verdict = stark.verify(proof, air, boundary, transition_zerofier_root)
        assert(verdict == False), "STARK produced from false witness verifies :("
        print("proof rejected! \\o/")



================================================
FILE: code/test_fri.py
================================================
from algebra import *
from fri import *

def test_fri( ):
    field = Field.main()
    degree = 63
    expansion_factor = 4
    num_colinearity_tests = 17

    initial_codeword_length = (degree + 1) * expansion_factor
    log_codeword_length = 0
    codeword_length = initial_codeword_length
    while codeword_length > 1:
        codeword_length //= 2
        log_codeword_length += 1

    assert(1 << log_codeword_length == initial_codeword_length), "log not computed correctly"

    omega = field.primitive_nth_root(initial_codeword_length)
    generator = field.generator()

    assert(omega^(1 << log_codeword_length) == field.one()), "omega not nth root of unity"
    assert(omega^(1 << (log_codeword_length-1)) != field.one()), "omega not primitive"

    fri = Fri(generator, omega, initial_codeword_length, expansion_factor, num_colinearity_tests)

    polynomial = Polynomial([FieldElement(i, field) for i in range(degree+1)])
    domain = [omega^i for i in range(initial_codeword_length)]

    codeword = polynomial.evaluate_domain(domain)

    # test valid codeword
    print("testing valid codeword ...")
    proof_stream = ProofStream()

    fri.prove(codeword, proof_stream)
    print("")
    points = []
    verdict = fri.verify(proof_stream, points)
    if verdict == False:
        print("rejecting proof, but proof should be valid!")
        return

    for (x,y) in points:
        if polynomial.evaluate(omega^x) != y:
            print("polynomial evaluates to wrong value")
            assert(False)
    print("success! \\o/")

    # disturb then test for failure
    print("testing invalid codeword ...")
    proof_stream = ProofStream()
    for i in range(0, degree//3):
        codeword[i] = field.zero()

    fri.prove(codeword, proof_stream)
    points = []
    assert False == fri.verify(proof_stream, points), "proof should fail, but is accepted ..."
    print("success! \\o/")



================================================
FILE: code/test_ip.py
================================================
from ip import *

def test_serialize( ):
    proof1 = ProofStream()
    proof1.push(1)
    proof1.push({1: '1'})
    proof1.push([1])
    proof1.push(2)

    serialized = proof1.serialize()
    proof2 = ProofStream.deserialize(serialized)

    assert(proof1.pull() == proof2.pull()), "pulled object 0 doesn't match"
    assert(proof1.pull() == proof2.pull()), "pulled object 1 doesn't match"
    assert(proof1.pull() == proof2.pull()), "pulled object 2 doesn't match"
    assert(proof1.pull() == 2), "object 3 pulled from proof1 is not 2"
    assert(proof2.pull() == 2), "object 3 pulled from proof2 is not 2"
    assert(proof1.prover_fiat_shamir() == proof2.prover_fiat_shamir()), "fiat shamir is not the same"


================================================
FILE: code/test_merkle.py
================================================
from merkle import Merkle
from os import urandom

def test_merkle():
    n = 64
    leafs = [urandom(int(urandom(1)[0])) for i in range(n)]
    root = Merkle.commit_(leafs)

    # opening any leaf should work
    for i in range(n):
        path = Merkle.open_(i, leafs)
        assert(Merkle.verify_(root, i, path, leafs[i]))

    # opening non-leafs should not work
    for i in range(n):
        path = Merkle.open_(i, leafs)
        assert(False == Merkle.verify_(root, i, path, urandom(51)))

    # opening wrong leafs should not work
    for i in range(n):
        path = Merkle.open_(i, leafs)
        j = (i + 1 + (int(urandom(1)[0] % (n-1)))) % n
        assert(False == Merkle.verify_(root, i, path, leafs[j]))

    # opening leafs with the wrong index should not work
    for i in range(n):
        path = Merkle.open_(i, leafs)
        j = (i + 1 + (int(urandom(1)[0] % (n-1)))) % n
        assert(False == Merkle.verify_(root, j, path, leafs[i]))

    # opening leafs to a false root should not work
    for i in range(n):
        path = Merkle.open_(i, leafs)
        assert(False == Merkle.verify_(urandom(32), i, path, leafs[i]))

    # opening leafs with even one falsehood in the path should not work
    for i in range(n):
        path = Merkle.open_(i, leafs)
        for j in range(len(path)):
            fake_path = path[0:j] + [urandom(32)] + path[j+1:]
            assert(False == Merkle.verify_(root, i, fake_path, leafs[i]))

    # opening leafs to a different root should not work
    fake_root = Merkle.commit_([urandom(32) for i in range(n)])
    for i in range(n):
        path = Merkle.open_(i, leafs)
        assert(False == Merkle.verify_(fake_root, i, path, leafs[i]))


================================================
FILE: code/test_multivariate.py
================================================
from multivariate import *

def test_evaluate( ):
    field = Field.main()
    variables = MPolynomial.variables(4, field)
    zero = field.zero()
    one = field.one()
    two = FieldElement(2, field)
    five = FieldElement(5, field)

    mpoly1 = MPolynomial.constant(one) * variables[0] + MPolynomial.constant(two) * variables[1] + MPolynomial.constant(five) * (variables[2]^3)
    mpoly2 = MPolynomial.constant(one) * variables[0] * variables[3] + MPolynomial.constant(five) * (variables[3]^3) + MPolynomial.constant(five)

    mpoly3 = mpoly1 * mpoly2

    point = [zero, five, five, two]

    eval1 = mpoly1.evaluate(point)
    eval2 = mpoly2.evaluate(point)
    eval3 = mpoly3.evaluate(point)

    assert(eval1 * eval2 == eval3), "multivariate polynomial multiplication does not commute with evaluation"
    assert(eval1 + eval2 == (mpoly1 + mpoly2).evaluate(point)), "multivariate polynomial addition does not commute with evaluation"

    print("eval3:", eval3.value)
    print("multivariate evaluate test success \\o/")

def test_lift( ):
    field = Field.main()
    variables = MPolynomial.variables(4, field)
    zero = field.zero()
    one = field.one()
    two = FieldElement(2, field)
    five = FieldElement(5, field)

    upoly = Polynomial.interpolate_domain([zero, one, two], [two, five, five])
    mpoly = MPolynomial.from_univariate(upoly, 3)

    assert(upoly.evaluate(five) == mpoly.evaluate([zero, zero, zero, five])), "lifting univariate to multivariate failed"

    print("lifting univariate to multivariate polynomial success \\o/")


================================================
FILE: code/test_ntt.py
================================================
from algebra import *
from univariate import *
from ntt import *
import os

def test_ntt( ):
    field = Field.main()
    logn = 8
    n = 1 << logn
    primitive_root = field.primitive_nth_root(n)

    coefficients = [field.sample(os.urandom(17)) for i in range(n)]
    poly = Polynomial(coefficients)

    values = ntt(primitive_root, coefficients)

    values_again = poly.evaluate_domain([primitive_root^i for i in range(len(values))])

    assert(values == values_again), "ntt does not compute correct batch-evaluation"

def test_intt( ):
    field = Field.main()

    logn = 7
    n = 1 << logn
    primitive_root = field.primitive_nth_root(n)

    values = [field.sample(os.urandom(1)) for i in range(n)]
    coeffs = ntt(primitive_root, values)
    values_again = intt(primitive_root, coeffs)

    assert(values == values_again), "inverse ntt is different from forward ntt"

def test_multiply( ):
    field = Field.main()

    logn = 6
    n = 1 << logn
    primitive_root = field.primitive_nth_root(n)

    for trial in range(20):
        lhs_degree = int(os.urandom(1)[0]) % (n // 2)
        rhs_degree = int(os.urandom(1)[0]) % (n // 2)

        lhs = Polynomial([field.sample(os.urandom(17)) for i in range(lhs_degree+1)])
        rhs = Polynomial([field.sample(os.urandom(17)) for i in range(rhs_degree+1)])

        fast_product = fast_multiply(lhs, rhs, primitive_root, n)
        slow_product = lhs * rhs

        assert(fast_product == slow_product), "fast product does not equal slow product"

def test_divide( ):
    field = Field.main()

    logn = 6
    n = 1 << logn
    primitive_root = field.primitive_nth_root(n)

    for trial in range(20):
        lhs_degree = int(os.urandom(1)[0]) % (n // 2)
        rhs_degree = int(os.urandom(1)[0]) % (n // 2)

        lhs = Polynomial([field.sample(os.urandom(17)) for i in range(lhs_degree+1)])
        rhs = Polynomial([field.sample(os.urandom(17)) for i in range(rhs_degree+1)])

        fast_product = fast_multiply(lhs, rhs, primitive_root, n)
        quotient = fast_coset_divide(fast_product, lhs, field.generator(), primitive_root, n)

        assert(quotient == rhs), "fast divide does not equal original factor"

def test_interpolate( ):
    field = Field.main()

    logn = 9
    n = 1 << logn
    primitive_root = field.primitive_nth_root(n)

    for trial in range(10):
        N = sum((1 << (8*i)) * int(os.urandom(1)[0]) for i in range(8)) % n
        if N == 0:
            continue
        print("N:", N)
        values = [field.sample(os.urandom(17)) for i in range(N)]
        domain = [field.sample(os.urandom(17)) for i in range(N)]
        poly = fast_interpolate(domain, values, primitive_root, n)
        print("poly degree:", poly.degree())
        values_again = fast_evaluate(poly, domain, primitive_root, n)[0:N]
        #values_again = poly.evaluate_domain(domain)

        if values != values_again:
            print("fast interpolation and evaluation are not inverses")
            print("expected:", ",".join(str(c.value) for c in values))
            print("observed:", ",".join(str(c.value) for c in values_again))
            assert(False)
        print("")

def test_coset_evaluate( ):
    field = Field.main()

    logn = 9
    n = 1 << logn
    primitive_root = field.primitive_nth_root(n)

    two = FieldElement(2, field)

    domain = [two * (primitive_root^i) for i in range(n)]

    degree = ((int(os.urandom(1)[0])*256 + int(os.urandom(1)[0])) % n) - 1
    coefficients = [field.sample(os.urandom(17)) for i in range(degree+1)]
    poly = Polynomial(coefficients)

    values_fast = fast_coset_evaluate(poly, two, primitive_root, n)
    values_traditional = [poly.evaluate(d) for d in domain]

    assert(all(vf == vt for (vf, vt) in zip(values_fast, values_traditional))), "values do not match with traditional evaluations"



================================================
FILE: code/test_rescue_prime.py
================================================
from rescue_prime import *
import os

def test_rescue_prime( ):
    rp = RescuePrime()
    
    # test vectors
    assert(rp.hash(FieldElement(1, rp.field)) == FieldElement(244180265933090377212304188905974087294, rp.field)), "rescue prime test vector 1 failed"
    assert(rp.hash(FieldElement(57322816861100832358702415967512842988, rp.field)) == FieldElement(89633745865384635541695204788332415101, rp.field)), "rescue prime test vector 2 failed"


    # test trace boundaries
    a = FieldElement(57322816861100832358702415967512842988, rp.field)
    b = FieldElement(89633745865384635541695204788332415101, rp.field)
    trace = rp.trace(a)
    assert(trace[0][0] == a and trace[-1][0] == b), "rescue prime trace does not satisfy boundary conditions"

    print("Rescue-Prime eval tests pass \\o/")

def test_trace( ):
    rp = RescuePrime()

    input_element = FieldElement(57322816861100832358702415967512842988, rp.field)
    b = FieldElement(89633745865384635541695204788332415101, rp.field)
    output_element = rp.hash(input_element)
    assert(b == output_element), "output elements do not match"

    # get trace
    trace = rp.trace(input_element)

    # test boundary constraints
    for condition in rp.boundary_constraints(output_element):
        cycle, element, value = condition
        if trace[cycle][element] != value:
            print("rescue prime boundary condition error: trace element", element, "at cycle", cycle, "has value", trace[cycle][element], "but should have value", value)
            assert(False)

    # test transition constraints
    omicron = rp.field.primitive_nth_root(1 << 119)
    transition_constraints = rp.transition_constraints(omicron)
    first_step_constants, second_step_constants = rp.round_constants_polynomials(omicron)
    for o in range(len(trace)-1):
        for air_poly in rp.transition_constraints(omicron):
            previous_state = [trace[o][0], trace[o][1]]
            next_state = [trace[o+1][0], trace[o+1][1]]
            point = [omicron^o] + previous_state + next_state
            if air_poly.evaluate(point) != rp.field.zero():
                assert(False), "air polynomial does not evaluate to zero"

    print("valid Rescue-Prime trace passes tests, testing invalid traces ...")

    # insert errors into trace, to make sure errors get noticed
    for k in range(10):
        print("trial", k, "...")
        # sample error location and value randomly
        register_index = int(os.urandom(1)[0]) % rp.m
        cycle_index = int(os.urandom(1)[0]) % (rp.N+1)
        value_ = rp.field.sample(os.urandom(17))
        if value_ == rp.field.zero():
            continue

        # reproduce deterministic error
        if k == 0:
            register_index = 1
            cycle_index = 22
            value_ = FieldElement(17274817952119230544216945715808633996, rp.field)

        # perturb
        trace[cycle_index][register_index] = trace[cycle_index][register_index] + value_
    
        error_got_noticed = False

        # test boundary constraints
        for condition in rp.boundary_constraints(output_element):
            if error_got_noticed:
                break
            cycle, element, value = condition
            if trace[cycle][element] != value:
                error_got_noticed = True

        # test transition constraints
        for o in range(len(trace)-1):
            if error_got_noticed:
                break
            for air_poly in rp.transition_constraints(omicron):
                previous_state = [trace[o][0], trace[o][1]]
                next_state = [trace[o+1][0], trace[o+1][1]]
                point = [omicron^o] + previous_state + next_state
                if air_poly.evaluate(point) != rp.field.zero():
                    error_got_noticed = True

        # if error was not noticed, panic
        if not error_got_noticed:
            print("error was not noticed.")
            print("register index:", register_index)
            print("cycle index:", cycle_index)
            print("value_:", value_)
            assert(False), "error was not noticed"

        trace[cycle_index][register_index] = trace[cycle_index][register_index] - value_

    print("Rescue-Prime trace tests pass \\o/")



================================================
FILE: code/test_rpsss.py
================================================
from rpsss import *
from fast_rpsss import *
from time import time

def test_rpsss( ):
    print("Testing R'*K signature scheme ...")
    rpsss = RPSSS()

    tick = time()
    sk, pk = rpsss.keygen()
    tock = time()
    print("KeyGen:", (tock - tick), "seconds")

    doc = bytes("Hello, world!", "utf-8")
    tick = time()
    sig = rpsss.sign(sk, doc)
    tock = time()
    print("Sign:", (tock - tick), "seconds")

    tick = time()
    valid = rpsss.verify(pk, doc, sig)
    tock = time()
    print("Verify:", (tock - tick), "seconds")

    if valid:
        print("successfully verified correct signature! \\o/")
    else:
        print("correctly generated signature not valid. <O>")

    not_doc = bytes("Byebye.", "utf-8")
    tick = time()
    valid = rpsss.verify(pk, not_doc, sig)
    tock = time()
    print("Verify:", (tock - tick), "seconds")

    if valid:
        print("signature authenticates bad document <O>")
    else:
        print("signature fails to authenticate bad document! \\o/")

    print("size of signature:", len(sig), "bytes, or", len(sig) / (2**10), "kB")

def test_fast_rpsss( ):
    print("Testing *FAST* R'*K signature scheme ...")
    rpsss = FastRPSSS()

    tick = time()
    sk, pk = rpsss.keygen()
    tock = time()
    print("KeyGen:", (tock - tick), "seconds")

    doc = bytes("Hello, world!", "utf-8")
    tick = time()
    sig = rpsss.sign(sk, doc)
    tock = time()
    print("Sign:", (tock - tick), "seconds")

    tick = time()
    valid = rpsss.verify(pk, doc, sig)
    tock = time()
    print("Verify:", (tock - tick), "seconds")

    if valid:
        print("successfully verified correct signature! \\o/")
    else:
        print("correctly generated signature not valid. <O>")

    not_doc = bytes("Byebye.", "utf-8")
    tick = time()
    valid = rpsss.verify(pk, not_doc, sig)
    tock = time()
    print("Verify:", (tock - tick), "seconds")

    if valid:
        print("signature authenticates bad document <O>")
    else:
        print("signature fails to authenticate bad document! \\o/")

    print("size of signature:", len(sig), "bytes, or", len(sig) / (2**10), "kB")



================================================
FILE: code/test_stark.py
================================================
from algebra import *
from univariate import *
from multivariate import *
from rescue_prime import *
from fri import *
from ip import *
from stark import *

def test_stark( ):
    field = Field.main()
    expansion_factor = 4
    num_colinearity_checks = 2
    security_level = 2

    rp = RescuePrime()
    output_element = field.sample(bytes(b'0xdeadbeef'))

    for trial in range(0, 20):
        input_element = output_element
        print("running trial with input:", input_element.value)
        output_element = rp.hash(input_element)
        num_cycles = rp.N+1
        state_width = rp.m

        stark = Stark(field, expansion_factor, num_colinearity_checks, security_level, state_width, num_cycles)

        # prove honestly
        print("honest proof generation ...")

        # prove
        trace = rp.trace(input_element)
        air = rp.transition_constraints(stark.omicron)
        boundary = rp.boundary_constraints(output_element)
        proof = stark.prove(trace, air, boundary)

        # verify
        verdict = stark.verify(proof, air, boundary)

        assert(verdict == True), "valid stark proof fails to verify"
        print("success \\o/")

        print("verifying false claim ...")
        # verify false claim
        output_element_ = output_element + field.one()
        boundary_ = rp.boundary_constraints(output_element_)
        verdict = stark.verify(proof, air, boundary_)

        assert(verdict == False), "invalid stark proof verifies"
        print("proof rejected! \\o/")

    # prove with false witness
    print("attempting to prove with false witness (should fail) ...")
    cycle = int(os.urandom(1)[0]) % len(trace)
    register = int(os.urandom(1)[0]) % state_width
    error = field.sample(os.urandom(17))

    trace[cycle][register] = trace[cycle][register] + error

    # proving should fail: the perturbed trace violates the constraints, so
    # the slow polynomial division inside prove hits a nonzero remainder
    prove_failed = False
    try:
        stark.prove(trace, air, boundary)
    except AssertionError:
        prove_failed = True
    assert(prove_failed), "proof generation with false witness should have failed"



================================================
FILE: code/test_univariate.py
================================================
from univariate import *
import os

def test_distributivity():
    field = Field.main()
    zero = field.zero()
    one = field.one()
    two = FieldElement(2, field)
    five = FieldElement(5, field)

    a = Polynomial([one, zero, five, two])
    b = Polynomial([two, two, one])
    c = Polynomial([zero, five, two, five, five, one])

    lhs = a * (b + c)
    rhs = a * b + a * c
    assert(lhs == rhs), "distributivity fails for polynomials: {} =/= {}".format(lhs.__str__(), rhs.__str__())

    print("univariate polynomial distributivity success \\o/")

def test_division():
    field = Field.main()
    zero = field.zero()
    one = field.one()
    two = FieldElement(2, field)
    five = FieldElement(5, field)

    a = Polynomial([one, zero, five, two])
    b = Polynomial([two, two, one])
    c = Polynomial([zero, five, two, five, five, one])

    # a should divide a*b, quotient should be b
    quo, rem = Polynomial.divide(a*b, a)
    assert(rem.is_zero()), "fail division test 1"
    assert(quo == b), "fail division test 2"

    # b should divide a*b, quotient should be a
    quo, rem = Polynomial.divide(a*b, b)
    assert(rem.is_zero()), "fail division test 3"
    assert(quo == a), "fail division test 4"

    # c should not divide a*b
    quo, rem = Polynomial.divide(a*b, c)
    assert(not rem.is_zero()), "fail division test 5"

    # ... but quo * c + rem == a*b
    assert(quo * c + rem == a*b), "fail division test 6"

    print("univariate polynomial division success \\o/")

def test_interpolate():
    field = Field.main()
    zero = field.zero()
    one = field.one()
    two = FieldElement(2, field)
    five = FieldElement(5, field)
    
    values = [five, two, two, one, five, zero]
    domain = [FieldElement(i, field) for i in range(len(values))]

    poly = Polynomial.interpolate_domain(domain, values)

    for i in range(len(domain)):
        assert(poly.evaluate(domain[i]) == values[i]), "fail interpolate test 1"

    # evaluation in random point is nonzero with high probability
    assert(poly.evaluate(FieldElement(363, field)) != zero), "fail interpolate test 2"

    assert(poly.degree() == len(domain)-1), "fail interpolate test 3"

    print("univariate polynomial interpolate success \\o/")

def test_zerofier( ):
    field = Field.main()

    for trial in range(0, 100):
        degree = int(os.urandom(1)[0])
        domain = []
        while len(domain) != degree:
            new = field.sample(os.urandom(17))
            if not new in domain:
                domain += [new]

        zerofier = Polynomial.zerofier_domain(domain)

        assert(zerofier.degree() == degree), "zerofier has degree unequal to size of domain"

        for d in domain:
            assert(zerofier.evaluate(d) == field.zero()), "zerofier does not evaluate to zero where it should"

        random = field.sample(os.urandom(17))
        while random in domain:
            random = field.sample(os.urandom(17))

        assert(zerofier.evaluate(random) != field.zero()), "zerofier evaluates to zero where it should not"

    print("univariate zerofier test success \\o/")



================================================
FILE: code/univariate.py
================================================
from algebra import *

class Polynomial:
    def __init__( self, coefficients ):
        self.coefficients = [c for c in coefficients]

    def degree( self ):
        if self.coefficients == []:
            return -1
        zero = self.coefficients[0].field.zero()
        if self.coefficients == [zero] * len(self.coefficients):
            return -1
        maxindex = 0
        for i in range(len(self.coefficients)):
            if self.coefficients[i] != zero:
                maxindex = i
        return maxindex

    def __neg__( self ):
        return Polynomial([-c for c in self.coefficients])

    def __add__( self, other ):
        if self.degree() == -1:
            return other
        elif other.degree() == -1:
            return self
        field = self.coefficients[0].field
        coeffs = [field.zero()] * max(len(self.coefficients), len(other.coefficients))
        for i in range(len(self.coefficients)):
            coeffs[i] = coeffs[i] + self.coefficients[i]
        for i in range(len(other.coefficients)):
            coeffs[i] = coeffs[i] + other.coefficients[i]
        return Polynomial(coeffs)

    def __sub__( self, other ):
        return self.__add__(-other)

    def __mul__(self, other ):
        if self.coefficients == [] or other.coefficients == []:
            return Polynomial([])
        zero = self.coefficients[0].field.zero()
        buf = [zero] * (len(self.coefficients) + len(other.coefficients) - 1)
        for i in range(len(self.coefficients)):
            if self.coefficients[i].is_zero():
                continue # optimization for sparse polynomials
            for j in range(len(other.coefficients)):
                buf[i+j] = buf[i+j] + self.coefficients[i] * other.coefficients[j]
        return Polynomial(buf)

    def __truediv__( self, other ):
        quo, rem = Polynomial.divide(self, other)
        assert(rem.is_zero()), "cannot perform polynomial division because remainder is not zero"
        return quo

    def __mod__( self, other ):
        quo, rem = Polynomial.divide(self, other)
        return rem

    def __eq__( self, other ):
        if self.degree() != other.degree():
            return False
        if self.degree() == -1:
            return True
        return all(self.coefficients[i] == other.coefficients[i] for i in range(len(self.coefficients)))

    def __ne__( self, other ):
        return not self.__eq__(other)

    def __str__( self ):
        return "[" + ",".join(s.__str__() for s in self.coefficients) + "]"

    def leading_coefficient( self ):
        return self.coefficients[self.degree()]

    def divide( numerator, denominator ):
        if denominator.degree() == -1:
            return None
        if numerator.degree() < denominator.degree():
            return (Polynomial([]), numerator)
        field = denominator.coefficients[0].field
        remainder = Polynomial([n for n in numerator.coefficients])
        quotient_coefficients = [field.zero() for i in range(numerator.degree()-denominator.degree()+1)]
        for i in range(numerator.degree()-denominator.degree()+1):
            if remainder.degree() < denominator.degree():
                break
            coefficient = remainder.leading_coefficient() / denominator.leading_coefficient()
            shift = remainder.degree() - denominator.degree()
            subtractee = Polynomial([field.zero()] * shift + [coefficient]) * denominator
            quotient_coefficients[shift] = coefficient
            remainder = remainder - subtractee
        quotient = Polynomial(quotient_coefficients)
        return quotient, remainder

    def is_zero( self ):
        if self.coefficients == []:
            return True
        for c in self.coefficients:
            if not c.is_zero():
                return False
        return True

    def interpolate_domain( domain, values ):
        assert(len(domain) == len(values)), "number of elements in domain does not match number of values -- cannot interpolate"
        assert(len(domain) > 0), "cannot interpolate between zero points"
        field = domain[0].field
        x = Polynomial([field.zero(), field.one()])
        acc = Polynomial([])
        for i in range(len(domain)):
            prod = Polynomial([values[i]])
            for j in range(len(domain)):
                if j == i:
                    continue
                prod = prod * (x - Polynomial([domain[j]])) * Polynomial([(domain[i] - domain[j]).inverse()])
            acc = acc + prod
        return acc

    def zerofier_domain( domain ):
        field = domain[0].field
        x = Polynomial([field.zero(), field.one()])
        acc = Polynomial([field.one()])
        for d in domain:
            acc = acc * (x - Polynomial([d]))
        return acc

    def evaluate( self, point ):
        xi = point.field.one()
        value = point.field.zero()
        for c in self.coefficients:
            value = value + c * xi
            xi = xi * point
        return value

    def evaluate_domain( self, domain ):
        return [self.evaluate(d) for d in domain]

    def __xor__( self, exponent ):
        if self.is_zero():
            return Polynomial([])
        if exponent == 0:
            return Polynomial([self.coefficients[0].field.one()])
        acc = Polynomial([self.coefficients[0].field.one()])
        for i in reversed(range(len(bin(exponent)[2:]))):
            acc = acc * acc
            if (1 << i) & exponent != 0:
                acc = acc * self
        return acc

    def scale( self, factor ):
        return Polynomial([(factor^i) * self.coefficients[i] for i in range(len(self.coefficients))])

def test_colinearity( points ):
    domain = [p[0] for p in points]
    values = [p[1] for p in points]
    polynomial = Polynomial.interpolate_domain(domain, values)
    return polynomial.degree() == 1



================================================
FILE: docs/.gitignore
================================================
_site/
_site/*
Gemfile.lock



================================================
FILE: docs/404.html
================================================
---
layout: default
---

<style type="text/css" media="screen">
  .container {
    margin: 10px auto;
    max-width: 600px;
    text-align: center;
  }
  h1 {
    margin: 30px 0;
    font-size: 4em;
    line-height: 1;
    letter-spacing: -1px;
  }
</style>

<div class="container">
  <h1>404</h1>

  <p><strong>Page not found :(</strong></p>
  <p>The requested page could not be found.</p>
</div>


================================================
FILE: docs/Gemfile
================================================
source "https://rubygems.org"

# Hello! This is where you manage which Jekyll version is used to run.
# When you want to use a different version, change it below, save the
# file and run `bundle install`. Run Jekyll with `bundle exec`, like so:
#
#     bundle exec jekyll serve
#
# This will help ensure the proper Jekyll version is running.
# Happy Jekylling!
#gem "jekyll", "~> 3.9.0"

# This is the default theme for new Jekyll sites. You may change this to anything you like.
gem "minima", "~> 2.0"

# If you want to use GitHub Pages, remove the "gem "jekyll"" above and
# uncomment the line below. To upgrade, run `bundle update github-pages`.
gem "github-pages", "~> 219", group: :jekyll_plugins

# If you have any plugins, put them here!
group :jekyll_plugins do
  gem "jekyll-feed", "~> 0.6"
end

# Windows does not include zoneinfo files, so bundle the tzinfo-data gem
# and associated library.
platforms :mingw, :x64_mingw, :mswin, :jruby do
  gem "tzinfo", "~> 1.2"
  gem "tzinfo-data"
end

# Performance-booster for watching directories on Windows
gem "wdm", "~> 0.1.0", :platforms => [:mingw, :x64_mingw, :mswin]

# kramdown v2 ships without the gfm parser by default. If you're using
# kramdown v1, comment out this line.
gem "kramdown-parser-gfm"

gem "nokogiri", ">= 1.12.5"




================================================
FILE: docs/_config.yml
================================================
# Welcome to Jekyll!
#
# This config file is meant for settings that affect your whole blog, values
# which you are expected to set up once and rarely edit after that. If you find
# yourself editing this file very often, consider using Jekyll's data files
# feature for the data you need to update frequently.
#
# For technical reasons, this file is *NOT* reloaded automatically when you use
# 'bundle exec jekyll serve'. If you change this file, please restart the server process.

# Site settings
# These are used to personalize your new site. If you look in the HTML files,
# you will see them accessed via {{ site.title }}, {{ site.email }}, and so on.
# You can create any custom variable you would like, and they will be accessible
# in the templates via {{ site.myvariable }}.
title: Anatomy of a STARK
email: alan@nervos.org
description: STARK tutorial with supporting source code in python.
twitter_username: aszepieniec
github_username: aszepieniec

# Build settings
markdown: kramdown
#theme: jekyll-theme-minimal

# Exclude from processing.
# The following items will not be processed, by default. Create a custom list
# to override the default setting.
# exclude:
#   - Gemfile
#   - Gemfile.lock
#   - node_modules
#   - vendor/bundle/
#   - vendor/cache/
#   - vendor/gems/
#   - vendor/ruby/


================================================
FILE: docs/_includes/head-custom.html
================================================
<script type="text/x-mathjax-config">
MathJax.Hub.Config({
    TeX: {
      equationNumbers: {
        autoNumber: "AMS"
      }
    },
    tex2jax: {
    inlineMath: [ ['$', '$'], ['\\(', '\\)'] ],
    displayMath: [ ['$$', '$$'], ['\\[', '\\]'] ],
    processEscapes: true,
  }
});
MathJax.Hub.Register.MessageHook("Math Processing Error",function (message) {
	  alert("Math Processing Error: "+message[1]);
	});
MathJax.Hub.Register.MessageHook("TeX Jax - parse error",function (message) {
	  alert("Math Processing Error: "+message[1]);
	});
</script>
<script type="text/javascript" async
  src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-MML-AM_CHTML">
</script>



================================================
FILE: docs/_posts/2021-10-20-welcome-to-jekyll.markdown
================================================
---
layout: post
title:  "Welcome to Jekyll!"
date:   2021-10-20 16:21:33 +0200
categories: jekyll update
---
You’ll find this post in your `_posts` directory. Go ahead and edit it and re-build the site to see your changes. You can rebuild the site in many different ways, but the most common way is to run `jekyll serve`, which launches a web server and auto-regenerates your site when a file is updated.

To add new posts, simply add a file in the `_posts` directory that follows the convention `YYYY-MM-DD-name-of-post.ext` and includes the necessary front matter. Take a look at the source for this post to get an idea about how it works.

Jekyll also offers powerful support for code snippets:

{% highlight ruby %}
def print_hi(name)
  puts "Hi, #{name}"
end
print_hi('Tom')
#=> prints 'Hi, Tom' to STDOUT.
{% endhighlight %}

Check out the [Jekyll docs][jekyll-docs] for more info on how to get the most out of Jekyll. File all bugs/feature requests at [Jekyll’s GitHub repo][jekyll-gh]. If you have questions, you can ask them on [Jekyll Talk][jekyll-talk].

[jekyll-docs]: https://jekyllrb.com/docs/home
[jekyll-gh]:   https://github.com/jekyll/jekyll
[jekyll-talk]: https://talk.jekyllrb.com/


================================================
FILE: docs/about.md
================================================
---
layout: page
title: About
permalink: /about/
---

This is the base Jekyll theme. You can find out more info about customizing your Jekyll theme, as well as basic Jekyll usage documentation at [jekyllrb.com](https://jekyllrb.com/)

You can find the source code for Minima at GitHub:
[jekyll][jekyll-organization] /
[minima](https://github.com/jekyll/minima)

You can find the source code for Jekyll at GitHub:
[jekyll][jekyll-organization] /
[jekyll](https://github.com/jekyll/jekyll)


[jekyll-organization]: https://github.com/jekyll


================================================
FILE: docs/basic-tools.md
================================================
# Anatomy of a STARK, Part 2: Basic Tools

## Finite Field Arithmetic

[Finite fields](https://en.wikipedia.org/wiki/Finite_field) are ubiquitous throughout cryptography because they are natively compatible with computers. For instance, they cannot generate overflow or underflow errors, and their elements have a finite representation in terms of bits.

The easiest way to build a finite field is to select a prime number $p$, use the elements $\mathbb{F}_p \stackrel{\triangle}{=} \lbrace 0, 1, \ldots, p-1\rbrace$, and define the usual addition and multiplication operations in terms of their counterparts for the integers, followed by reduction modulo $p$. Subtraction is equivalent to addition of the left-hand side to the negation of the right-hand side, and negation represents multiplication by $-1 \equiv p-1 \mod p$. Similarly, division is equivalent to multiplication of the left-hand side by the multiplicative inverse of the right-hand side. This inverse can be found using the [extended Euclidean algorithm](https://en.wikipedia.org/wiki/Extended_Euclidean_algorithm), which on input two integers $x$ and $y$, returns their greatest common divisor $g$ along with matching [Bézout coefficients](https://en.wikipedia.org/wiki/B%C3%A9zout's_identity) $a, b$ such that $ax + by = g$. Indeed, whenever $\gcd(x,p) = 1$ the inverse of $x \in \mathbb{F}_p$ is $a$ because $ax + bp \equiv 1 \mod p$. Powers of field elements can be computed with the [square-and-multiply](https://en.wikipedia.org/wiki/Exponentiation_by_squaring) algorithm, which iterates over the bits in the expansion of the exponent, squares an accumulator variable in each iteration, and additionally multiplies it by the base element if the bit is set.

For the purpose of building STARKs we need finite fields with a particular structure[^1]: it needs to contain a substructure of order $2^k$ for some sufficiently large $k$. We consider prime fields whose defining modulus has the form $p = f \cdot 2^k + 1$, where $f$ is some cofactor that makes the number prime. In this case, the group $\mathbb{F}_p \backslash \lbrace 0\rbrace, \times$ has a subgroup of order $2^k$. For all intents and purposes, one can identify this subgroup with $2^k$ evenly spaced points on the complex unit circle.
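This structure can be illustrated with a toy example, using a hypothetical small prime rather than the one used later in this tutorial: take $p = 3 \cdot 2^5 + 1 = 97$, so the multiplicative group has order $96 = 3 \cdot 2^5$ and contains a subgroup of order $32$.

```python
# Toy example with a hypothetical small prime: p = 3 * 2^5 + 1 = 97.
# The multiplicative group has order 96 = 3 * 2^5, so it contains a
# subgroup of order 2^5 = 32.
p = 3 * 2**5 + 1           # 97
g = 5                      # 5 generates the full multiplicative group of F_97
h = pow(g, 3, p)           # raising to the cofactor 3 leaves an element of order 32
assert pow(h, 32, p) == 1  # h generates a subgroup of order dividing 32 ...
assert pow(h, 16, p) != 1  # ... and the order is exactly 32
```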

An implementation starts with the extended Euclidean algorithm, for computing multiplicative inverses.
```python
def xgcd( x, y ):
    old_r, r = (x, y)
    old_s, s = (1, 0)
    old_t, t = (0, 1)

    while r != 0:
        quotient = old_r // r
        old_r, r = (r, old_r - quotient * r)
        old_s, s = (s, old_s - quotient * s)
        old_t, t = (t, old_t - quotient * t)

    return old_s, old_t, old_r # a, b, g
```
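As a standalone sanity check of the inverse claim: the Bézout coefficient $a$ is exactly the modular inverse, which Python's built-in `pow` with exponent `-1` (available since Python 3.8) computes as well. The value of `x` below is an arbitrary illustrative choice.

```python
# Standalone check of the inverse claim above: for x coprime to p, the
# Bezout coefficient a satisfies a*x = 1 mod p. Python's built-in pow with
# exponent -1 (Python 3.8+) computes the same modular inverse.
p = 1 + 407 * (1 << 119)   # the prime field modulus used throughout the tutorial
x = 123456789              # arbitrary nonzero element
a = pow(x, -1, p)          # stands in for the Bezout coefficient a from xgcd
assert a * x % p == 1      # a is the multiplicative inverse of x in F_p
```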

It makes sense to separate the logic concerning the field from the logic concerning the field elements. To this end, every field element carries a reference to its field object, and this field object implements the arithmetic. Furthermore, Python supports operator overloading, so we can repurpose the natural arithmetic operators to do field arithmetic instead.

```python
class FieldElement:
    def __init__( self, value, field ):
        self.value = value
        self.field = field

    def __add__( self, right ):
        return self.field.add(self, right)

    def __mul__( self, right ):
        return self.field.multiply(self, right)

    def __sub__( self, right ):
        return self.field.subtract(self, right)

    def __truediv__( self, right ):
        return self.field.divide(self, right)

    def __neg__( self ):
        return self.field.negate(self)

    def inverse( self ):
        return self.field.inverse(self)

    # modular exponentiation -- be sure to encapsulate in parentheses!
    def __xor__( self, exponent ):
        acc = FieldElement(1, self.field)
        val = FieldElement(self.value, self.field)
        for i in reversed(range(len(bin(exponent)[2:]))):
            acc = acc * acc
            if (1 << i) & exponent != 0:
                acc = acc * val
        return acc

    def __eq__( self, other ):
        return self.value == other.value

    def __ne__( self, other ):
        return self.value != other.value

    def __str__( self ):
        return str(self.value)

    def __bytes__( self ):
        return bytes(str(self).encode())

    def is_zero( self ):
        if self.value == 0:
            return True
        else:
            return False

class Field:
    def __init__( self, p ):
        self.p = p

    def zero( self ):
        return FieldElement(0, self)

    def one( self ):
        return FieldElement(1, self)

    def multiply( self, left, right ):
        return FieldElement((left.value * right.value) % self.p, self)

    def add( self, left, right ):
        return FieldElement((left.value + right.value) % self.p, self)

    def subtract( self, left, right ):
        return FieldElement((self.p + left.value - right.value) % self.p, self)

    def negate( self, operand ):
        return FieldElement((self.p - operand.value) % self.p, self)

    def inverse( self, operand ):
        a, b, g = xgcd(operand.value, self.p)
        return FieldElement(a, self)

    def divide( self, left, right ):
        assert(not right.is_zero()), "divide by zero"
        a, b, g = xgcd(right.value, self.p)
        return FieldElement(left.value * a % self.p, self)
```

Implementing fields generically is nice. However, in this tutorial we will not use any field other than the one with $1+407 \cdot 2^{119}$ elements. This field has a sufficiently large subgroup of power-of-two order.

```python
    def main():
        p = 1 + 407 * ( 1 << 119 ) # 1 + 11 * 37 * 2^119
        return Field(p)
```

Besides ensuring that the subgroup of power-of-two order exists, the code also needs to supply the user with a generator for the entire multiplicative group, as well as for the power-of-two subgroups. A generator of such a subgroup of order $n$ will be called a primitive $n$th root of unity.

```python
    def generator( self ):
        assert(self.p == 1 + 407 * ( 1 << 119 )), "Do not know generator for other fields beyond 1+407*2^119"
        return FieldElement(85408008396924667383611388730472331217, self)
        
    def primitive_nth_root( self, n ):
        if self.p == 1 + 407 * ( 1 << 119 ):
            assert(n <= 1 << 119 and (n & (n-1)) == 0), "Field does not have nth root of unity where n > 2^119 or not power of two."
            root = FieldElement(85408008396924667383611388730472331217, self)
            order = 1 << 119
            while order != n:
                root = root^2
                order = order // 2
            return root
        else:
            assert(False), "Unknown field, can't return root of unity."
```

Lastly, the protocol requires the ability to sample field elements randomly and pseudorandomly. To do this, the user supplies random bytes and the field logic turns them into a field element. The user should take care to provide enough random bytes.

```python
    def sample( self, byte_array ):
        acc = 0
        for b in byte_array:
            acc = (acc << 8) ^ int(b)
        return FieldElement(acc % self.p, self)
```

## Univariate Polynomials

A *univariate polynomial* is a weighted sum of non-negative powers of a single formal indeterminate. We write polynomials as a formal sum of terms, *i.e.*, $f(X) = c_0 + c_1 \cdot X + \cdots + c_d X^d$ or $f(X) = \sum_{i=0}^d c_i X^i$ because a) the value of the indeterminate $X$ is generally unknown and b) this form emphasises the polynomial's semantic origin and is thus more conducive to building intuition. In these expressions, the $c_i$ are called *coefficients* and $d$ is the polynomial's *degree*.

Univariate polynomials are immensely useful in proof systems because relations that apply to their coefficient vectors extend to their values on a potentially much larger domain. If polynomials are equal, they are equal everywhere; whereas if they are unequal, they are unequal almost everywhere. By this feature, univariate polynomials reduce claims about large vectors to claims about the values of their corresponding polynomials in a small selection of sufficiently random points. 

An implementation of univariate polynomial algebra starts with overloading the standard arithmetic operators to compute the right function of the polynomials' coefficient vectors. One important point requires special attention: the *leading coefficient* of a polynomial can never be zero, since by definition it is the coefficient of the highest-degree term with a *non-zero* coefficient. However, the implemented vector of coefficients might have trailing zeros, which should be ignored for all intents and purposes. The degree function comes in handy here; it is defined as one less than the length of the coefficient vector after ignoring trailing zeros. This also means that the zero polynomial has degree $-1$, even though $-\infty$ would make more sense.
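The trailing-zeros convention can be illustrated with a minimal standalone helper, a hypothetical stand-in for the method below that uses plain integer coefficients:

```python
# Hypothetical standalone helper mirroring the degree convention described
# above, with plain integer coefficients for illustration.
def degree(coefficients):
    d = -1
    for i, c in enumerate(coefficients):
        if c != 0:
            d = i
    return d

assert degree([]) == -1           # the zero polynomial (empty list)
assert degree([0, 0, 0]) == -1    # the zero polynomial with explicit zeros
assert degree([5, 0, 3, 0]) == 2  # trailing zeros ignored: 5 + 3X^2 has degree 2
```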

```python
from algebra import *

class Polynomial:
    def __init__( self, coefficients ):
        self.coefficients = [c for c in coefficients]

    def degree( self ):
        if self.coefficients == []:
            return -1
        zero = self.coefficients[0].field.zero()
        if self.coefficients == [zero] * len(self.coefficients):
            return -1
        maxindex = 0
        for i in range(len(self.coefficients)):
            if self.coefficients[i] != zero:
                maxindex = i
        return maxindex

    def __neg__( self ):
        return Polynomial([-c for c in self.coefficients])

    def __add__( self, other ):
        if self.degree() == -1:
            return other
        elif other.degree() == -1:
            return self
        field = self.coefficients[0].field
        coeffs = [field.zero()] * max(len(self.coefficients), len(other.coefficients))
        for i in range(len(self.coefficients)):
            coeffs[i] = coeffs[i] + self.coefficients[i]
        for i in range(len(other.coefficients)):
            coeffs[i] = coeffs[i] + other.coefficients[i]
        return Polynomial(coeffs)

    def __sub__( self, other ):
        return self.__add__(-other)

    def __mul__(self, other ):
        if self.coefficients == [] or other.coefficients == []:
            return Polynomial([])
        zero = self.coefficients[0].field.zero()
        buf = [zero] * (len(self.coefficients) + len(other.coefficients) - 1)
        for i in range(len(self.coefficients)):
            if self.coefficients[i].is_zero():
                continue # optimization for sparse polynomials
            for j in range(len(other.coefficients)):
                buf[i+j] = buf[i+j] + self.coefficients[i] * other.coefficients[j]
        return Polynomial(buf)

    def __eq__( self, other ):
        if self.degree() != other.degree():
            return False
        if self.degree() == -1:
            return True
        return all(self.coefficients[i] == other.coefficients[i] for i in range(len(self.coefficients)))

    def __ne__( self, other ):
        return not self.__eq__(other)

    def is_zero( self ):
        if self.degree() == -1:
            return True
        return False

    def leading_coefficient( self ):
        return self.coefficients[self.degree()]
```

Things always get a little tricky when implementing polynomial division. The intuition behind the schoolbook algorithm is that in every iteration you multiply the divisor by the correct term so as to cancel the remainder's leading term. Once no such term can be found, you have your remainder.

```python
    def divide( numerator, denominator ):
        if denominator.degree() == -1:
            return None
        if numerator.degree() < denominator.degree():
            return (Polynomial([]), numerator)
        field = denominator.coefficients[0].field
        remainder = Polynomial([n for n in numerator.coefficients])
        quotient_coefficients = [field.zero() for i in range(numerator.degree()-denominator.degree()+1)]
        for i in range(numerator.degree()-denominator.degree()+1):
            if remainder.degree() < denominator.degree():
                break
            coefficient = remainder.leading_coefficient() / denominator.leading_coefficient()
            shift = remainder.degree() - denominator.degree()
            subtractee = Polynomial([field.zero()] * shift + [coefficient]) * denominator
            quotient_coefficients[shift] = coefficient
            remainder = remainder - subtractee
        quotient = Polynomial(quotient_coefficients)
        return quotient, remainder

    def __truediv__( self, other ):
        quo, rem = Polynomial.divide(self, other)
        assert(rem.is_zero()), "cannot perform polynomial division because remainder is not zero"
        return quo

    def __mod__( self, other ):
        quo, rem = Polynomial.divide(self, other)
        return rem
```
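The same schoolbook procedure can be sketched standalone over a hypothetical small field $\mathbb{F}_7$, with polynomials as plain coefficient lists (index $i$ holding the coefficient of $X^i$):

```python
# Standalone sketch of schoolbook division over F_7, with polynomials as
# plain coefficient lists. Assumes a nonzero denominator.
P = 7

def deg(c):
    return max((i for i, x in enumerate(c) if x % P), default=-1)

def poly_divmod(num, den):
    num, den = list(num), list(den)
    q = [0] * max(deg(num) - deg(den) + 1, 1)
    inv_lead = pow(den[deg(den)], P - 2, P)   # invert leading coeff (Fermat)
    while deg(num) >= deg(den):
        shift = deg(num) - deg(den)
        coeff = num[deg(num)] * inv_lead % P
        q[shift] = coeff
        for i, d in enumerate(den):           # subtract coeff * X^shift * den
            num[i + shift] = (num[i + shift] - coeff * d) % P
    return q, num                             # quotient, remainder

# (X^2 + 2X + 1) / (X + 1) = X + 1, remainder 0
quo, rem = poly_divmod([1, 2, 1], [1, 1])
assert quo == [1, 1] and deg(rem) == -1
```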

In terms of basic arithmetic operations, it is worth including a powering map, although mostly for notational ease rather than performance.

```python
    def __xor__( self, exponent ):
        if self.is_zero():
            return Polynomial([])
        if exponent == 0:
            return Polynomial([self.coefficients[0].field.one()])
        acc = Polynomial([self.coefficients[0].field.one()])
        for i in reversed(range(len(bin(exponent)[2:]))):
            acc = acc * acc
            if (1 << i) & exponent != 0:
                acc = acc * self
        return acc
```

A polynomial is of little use if it does not admit evaluation at an arbitrary given point. For STARKs we need something more general -- polynomial evaluation on a *domain* of values, rather than at a single point. Performance is not a concern at this point, so the following implementation follows a straightforward iterative method. Conversely, STARKs also require polynomial interpolation, where the x-coordinates are a known list of values. Once again, performance is not an immediate issue, so for the time being standard [Lagrange interpolation](https://en.wikipedia.org/wiki/Lagrange_polynomial) suffices.

```python
    def evaluate( self, point ):
        xi = point.field.one()
        value = point.field.zero()
        for c in self.coefficients:
            value = value + c * xi
            xi = xi * point
        return value

    def evaluate_domain( self, domain ):
        return [self.evaluate(d) for d in domain]

    def interpolate_domain( domain, values ):
        assert(len(domain) == len(values)), "number of elements in domain does not match number of values -- cannot interpolate"
        assert(len(domain) > 0), "cannot interpolate between zero points"
        field = domain[0].field
        x = Polynomial([field.zero(), field.one()])
        acc = Polynomial([])
        for i in range(len(domain)):
            prod = Polynomial([values[i]])
            for j in range(len(domain)):
                if j == i:
                    continue
                prod = prod * (x - Polynomial([domain[j]])) * Polynomial([(domain[i] - domain[j]).inverse()])
            acc = acc + prod
        return acc
```
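The Lagrange formula can also be sketched standalone over a hypothetical small field $\mathbb{F}_{13}$, evaluating the interpolant directly rather than building its coefficient vector:

```python
# Standalone Lagrange interpolation over F_13, mirroring the formula behind
# interpolate_domain above, with plain integers.
P = 13

def lagrange_eval(domain, values, x):
    # evaluate the unique interpolant of degree < len(domain) at x
    total = 0
    for i, (xi, yi) in enumerate(zip(domain, values)):
        num, den = 1, 1
        for j, xj in enumerate(domain):
            if j != i:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

domain, values = [0, 1, 2], [5, 6, 9]   # these points lie on f(X) = X^2 + 5
assert all(lagrange_eval(domain, values, x) == (x * x + 5) % P for x in range(P))
```

Three points determine a polynomial of degree at most two, so the interpolant agrees with $X^2 + 5$ everywhere, not just on the domain.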

Speaking of domains: one thing that recurs time and again is the computation of polynomials that vanish on them. Any such polynomial is a multiple of $Z_D(X) = \prod_{d \in D} (X-d)$, the unique monic[^2] lowest-degree polynomial that takes the value 0 in all the points of $D$. This polynomial is usually called the *vanishing polynomial* and sometimes the *zerofier*. This tutorial prefers the second term.

```python
    def zerofier_domain( domain ):
        field = domain[0].field
        x = Polynomial([field.zero(), field.one()])
        acc = Polynomial([field.one()])
        for d in domain:
            acc = acc * (x - Polynomial([d]))
        return acc
```
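Multiplying out the product $\prod_{d \in D}(X - d)$ can be sketched standalone over a hypothetical small field $\mathbb{F}_7$, again with coefficient lists:

```python
# Standalone sketch: expand Z_D(X) = prod (X - d) over F_7, with index i of
# each list holding the X^i coefficient.
P = 7

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % P
    return out

def zerofier(domain):
    acc = [1]
    for d in domain:
        acc = poly_mul(acc, [(-d) % P, 1])   # multiply by (X - d)
    return acc

# D = {1, 2}: Z_D(X) = (X-1)(X-2) = X^2 - 3X + 2 = X^2 + 4X + 2 over F_7
assert zerofier([1, 2]) == [2, 4, 1]
```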

Another useful tool is the ability to *scale* polynomials. Specifically, this means obtaining the vector of coefficients of $f(c \cdot X)$ from that of $f(X)$. This function is particularly useful when $f(X)$ is defined to take a sequence of values on the powers of $c$: $v_i = f(c^i)$. Then $f(c \cdot X)$ represents the same sequence of values but shifted by one position.

```python
    def scale( self, factor ):
        return Polynomial([(factor^i) * self.coefficients[i] for i in range(len(self.coefficients))])
```
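The shifting property is easy to verify standalone over a hypothetical small field $\mathbb{F}_{101}$: if $g(X) = f(c \cdot X)$, then $g(c^i) = f(c^{i+1})$ for every $i$.

```python
# Standalone check of the shifting property over F_101: scaling f by c and
# evaluating on powers of c shifts the value sequence by one position.
P = 101
c = 2
f = [3, 1, 4]   # f(X) = 3 + X + 4X^2, plain integer coefficients

def evaluate(coeffs, x):
    return sum(v * pow(x, i, P) for i, v in enumerate(coeffs)) % P

g = [v * pow(c, i, P) % P for i, v in enumerate(f)]   # coefficients of f(c*X)
assert all(evaluate(g, pow(c, i, P)) == evaluate(f, pow(c, i + 1, P)) for i in range(8))
```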

The last function that belongs to the univariate polynomial module anticipates a key operation in the FRI protocol, namely testing whether a triple of points falls on the same line -- a property known as *collinearity*.

```python
def test_colinearity( points ):
    domain = [p[0] for p in points]
    values = [p[1] for p in points]
    polynomial = Polynomial.interpolate_domain(domain, values)
    return polynomial.degree() <= 1
```
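An equivalent standalone check, over a hypothetical small field $\mathbb{F}_{13}$: three points are collinear exactly when their cross-multiplied slopes agree, which avoids interpolation altogether.

```python
# Standalone collinearity test over F_13 via cross-multiplied slopes,
# equivalent in spirit to interpolating and checking degree <= 1.
P = 13

def colinear(p0, p1, p2):
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    # slope(p0,p1) == slope(p0,p2), cross-multiplied to avoid division
    return ((y1 - y0) * (x2 - x0) - (y2 - y0) * (x1 - x0)) % P == 0

assert colinear((0, 5), (1, 7), (2, 9))        # on the line y = 2x + 5
assert not colinear((0, 5), (1, 7), (2, 10))   # third point is off the line
```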

Before moving on to the next section, it is worth pausing to note that all ingredients are in place for *finite extension fields*, or simply *extension fields*. A finite field is simply a set equipped with addition and multiplication operators that behave according to high school algebra rules, *e.g.* every nonzero element has an inverse, or no two nonzero elements multiplied give zero. There are two ways to obtain them:
 1. Start with the set of integers, and reduce the result of any addition or multiplication modulo a given prime number $p$.
 2. Start with the set of polynomials over a finite field, and reduce the result of any addition or multiplication modulo a given *irreducible polynomial* $p(X)$. A polynomial is *irreducible* when it cannot be decomposed as the product of two smaller polynomials, analogously to prime numbers.
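The second construction can be sketched in a few lines, using the (easily checked) fact that $X^2 + 1$ is irreducible over $\mathbb{F}_7$ because $-1$ is not a square modulo 7:

```python
# Hypothetical sketch of construction 2: F_{7^2} as pairs (a0, a1)
# representing a0 + a1*X, reduced modulo the irreducible X^2 + 1.
P = 7

def ext_mul(a, b):
    # (a0 + a1 X)(b0 + b1 X) mod (X^2 + 1), using X^2 = -1
    a0, a1 = a
    b0, b1 = b
    return ((a0 * b0 - a1 * b1) % P, (a0 * b1 + a1 * b0) % P)

# the nonzero elements form a multiplicative group of order 7^2 - 1 = 48
x = (3, 5)
acc = (1, 0)
for _ in range(48):
    acc = ext_mul(acc, x)
assert acc == (1, 0)   # x^48 = 1 for every nonzero x, by Lagrange's theorem
```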

 The point is that it is possible to do the arithmetization in a smaller field than the one used in the cryptographic compilation step, as long as the latter step uses an extension field of the former. Specifically and for example, [EthSTARK](https://github.com/starkware-libs/ethSTARK) operates over the finite field defined by a 62-bit prime, but the FRI step operates over a quadratic extension field thereof in order to target a higher security level.

 This tutorial will not use extension fields, and so an elaborate discussion of the topic is out of scope.

## Multivariate Polynomials

*Multivariate polynomials* generalize univariate polynomials to many indeterminates -- not just $X$, but $X, Y, Z, \ldots$. Where univariate polynomials are useful for reducing big claims about large vectors to small claims about scalar values in random points, multivariate polynomials are useful for articulating the arithmetic constraints that an integral computation satisfies.

For example, consider the [arithmetic-geometric mean](https://en.wikipedia.org/wiki/Arithmetic%E2%80%93geometric_mean), which is defined as the limit of either the first or second coordinate (which are equal in the limit) of the sequence $(a, b) \mapsto \left( \frac{a+b}{2}, \sqrt{a \cdot b} \right)$, for a given starting point $(a_0, b_0)$. In order to prove the integrity of several iterations of this process[^3], what is needed is a set of multivariate polynomials that capture the constraint of the correct application of a single iteration that relates the current state, $X_0, X_1$ to the next state, $Y_0, Y_1$. In this phrase, the word *capture* means that the polynomial evaluates to zero if the computation is integral. These polynomials might be $m_0(X_0, X_1, Y_0, Y_1) = Y_0 - \frac{X_0 + X_1}{2}$ and $m_1(X_0, X_1, Y_0, Y_1) = Y_1^2 - X_0 \cdot X_1$. (Note that the natural choice $m_1(X_0, X_1, Y_0, Y_1) = Y_1 - \sqrt{X_0 \cdot X_1}$ is not in fact a polynomial, but has the same zeros.)
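The word *capture* can be made concrete with a standalone check over a hypothetical small field $\mathbb{F}_{17}$: the integral transition $(2, 8) \mapsto (5, 4)$ satisfies both constraints, since $(2+8)/2 = 5$ and $4^2 = 2 \cdot 8$, while a faulty transition does not.

```python
# Standalone check of the AGM transition constraints over F_17:
# m0 = Y0 - (X0 + X1)/2 and m1 = Y1^2 - X0*X1 vanish on an integral step.
P = 17
x0, x1, y0, y1 = 2, 8, 5, 4
inv2 = pow(2, P - 2, P)                # 1/2 in F_17
m0 = (y0 - (x0 + x1) * inv2) % P
m1 = (y1 * y1 - x0 * x1) % P
assert m0 == 0 and m1 == 0             # both constraints vanish

# a faulty step, e.g. y1 = 5, violates the second constraint
assert (5 * 5 - x0 * x1) % P != 0
```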

Where the natural structure for implementing univariate polynomials is a list of coefficients, the natural structure for multivariate polynomials is a dictionary mapping exponent vectors to coefficients. Whenever this dictionary contains zero coefficients, they should be ignored. As usual, the first step is to overload the standard arithmetic operators, basic constructors, and standard functionalities.
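The dictionary encoding can be illustrated standalone, with plain integer coefficients for brevity: the polynomial $17 + 2 X_0 X_1 + X_2^3$ in three variables maps each exponent tuple to its coefficient.

```python
# Illustration of the dictionary encoding described above: the polynomial
# 17 + 2*X0*X1 + X2^3, with integer coefficients for brevity.
poly = {
    (0, 0, 0): 17,   # constant term
    (1, 1, 0): 2,    # 2 * X0 * X1
    (0, 0, 3): 1,    # X2^3
}

def mpoly_evaluate(dictionary, point):
    total = 0
    for exponents, coeff in dictionary.items():
        term = coeff
        for var, exp in zip(point, exponents):
            term *= var ** exp
        total += term
    return total

assert mpoly_evaluate(poly, (1, 2, 3)) == 17 + 2 * 1 * 2 + 3 ** 3   # 48
```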

```python
class MPolynomial:
    def __init__( self, dictionary ):
        self.dictionary = dictionary

    def zero():
        return MPolynomial(dict())

    def __add__( self, other ):
        dictionary = dict()
        num_variables = max([len(k) for k in self.dictionary.keys()] + [len(k) for k in other.dictionary.keys()])
        for k, v in self.dictionary.items():
            pad = list(k) + [0] * (num_variables - len(k))
            pad = tuple(pad)
            dictionary[pad] = v
        for k, v in other.dictionary.items():
            pad = list(k) + [0] * (num_variables - len(k))
            pad = tuple(pad)
            if pad in dictionary.keys():
                dictionary[pad] = dictionary[pad] + v
            else:
                dictionary[pad] = v
        return MPolynomial(dictionary)

    def __mul__( self, other ):
        dictionary = dict()
        num_variables = max([len(k) for k in self.dictionary.keys()] + [len(k) for k in other.dictionary.keys()])
        for k0, v0 in self.dictionary.items():
            for k1, v1 in other.dictionary.items():
                exponent = [0] * num_variables
                for k in range(len(k0)):
                    exponent[k] += k0[k]
                for k in range(len(k1)):
                    exponent[k] += k1[k]
                exponent = tuple(exponent)
                if exponent in dictionary.keys():
                    dictionary[exponent] = dictionary[exponent] + v0 * v1
                else:
                    dictionary[exponent] = v0 * v1
        return MPolynomial(dictionary)

    def __sub__( self, other ):
        return self + (-other)

    def __neg__( self ):
        dictionary = dict()
        for k, v in self.dictionary.items():
            dictionary[k] = -v
        return MPolynomial(dictionary)

    def __xor__( self, exponent ):
        if self.is_zero():
            return MPolynomial(dict())
        field = list(self.dictionary.values())[0].field
        num_variables = len(list(self.dictionary.keys())[0])
        exp = [0] * num_variables
        acc = MPolynomial({tuple(exp): field.one()})
        for b in bin(exponent)[2:]:
            acc = acc * acc
            if b == '1':
                acc = acc * self
        return acc

    def constant( element ):
        return MPolynomial({tuple([0]): element})

    def is_zero( self ):
        if not self.dictionary:
            return True
        else:
            for v in self.dictionary.values():
                if v.is_zero() == False:
                    return False
            return True

    def variables( num_variables, field ):
        variables = []
        for i in range(num_variables):
            exponent = [0] * i + [1] + [0] * (num_variables - i - 1)
            variables = variables + [MPolynomial({tuple(exponent): field.one()})]
        return variables
```

Since multivariate polynomials are a generalization of univariate polynomials, there needs to be a way to reuse the logic that was already defined for the univariate class. The function `lift` does this by lifting a univariate polynomial into the ring of multivariate polynomials. The second argument is the index of the multivariate indeterminate that the univariate indeterminate maps to.

```python
    def lift( polynomial, variable_index ):
        if polynomial.is_zero():
            return MPolynomial({})
        field = polynomial.coefficients[0].field
        variables = MPolynomial.variables(variable_index+1, field)
        x = variables[-1]
        acc = MPolynomial({})
        for i in range(len(polynomial.coefficients)):
            acc = acc + MPolynomial.constant(polynomial.coefficients[i]) * (x^i)
        return acc
```

Next up is evaluation. The argument to this method needs to be a tuple of scalars since it needs to assign a value to every indeterminate. However, it is worth anticipating a feature used in the STARK whereby the evaluation is *symbolic*: instead of evaluating the multivariate polynomial in a tuple of scalars, it is evaluated in a tuple of *univariate polynomials*. The result is not a scalar, but a new univariate polynomial.

```python
    def evaluate( self, point ):
        acc = point[0].field.zero()
        for k, v in self.dictionary.items():
            prod = v
            for i in range(len(k)):
                prod = prod * (point[i]^k[i])
            acc = acc + prod
        return acc

    def evaluate_symbolic( self, point ):
        acc = Polynomial([])
        for k, v in self.dictionary.items():
            prod = Polynomial([v])
            for i in range(len(k)):
                prod = prod * (point[i]^k[i])
            acc = acc + prod
        return acc
```

## The Fiat-Shamir Transform

In an interactive public coin protocol, the verifier's messages are pure randomness sampled from a distribution that *anyone* can sample from. The objective is to obtain a non-interactive protocol that proves the same thing, without sacrificing security. The Fiat-Shamir transform achieves this.

![Interactive proof, before Fiat-Shamir transform](graphics/interactive-proof.svg)

It turns out that, to achieve security against malicious provers, generating the verifier's messages with genuine randomness, as the interactive protocol stipulates, is overkill. It is sufficient that the verifier's messages be difficult for the prover to predict. Hash functions are deterministic but still satisfy this property: their outputs are difficult to predict. So intuitively, the protocol remains secure if the verifier's authentic randomness is replaced by a hash function's pseudorandom output. It is necessary to restrict the prover's control over what input goes into the hash function, because otherwise he could grind until he finds a suitable output. It suffices to set the input to the transcript of the protocol up until the point where the verifier's message is needed.

This is exactly the intuition behind the Fiat-Shamir transform: replace the verifier's random messages by the hash of the transcript of the protocol up until those points. The *Fiat-Shamir heuristic* states that this transform retains security. In an idealized model of the hash function called the *random oracle model*, this security is provable.
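The core mechanism fits in a few lines of standalone Python: any party with the same view of the serialized transcript derives the same challenge. The transcript contents below are placeholders for illustration.

```python
# Minimal standalone sketch of the Fiat-Shamir idea: the verifier's message
# is replaced by a hash of the transcript so far, so prover and verifier
# derive identical challenges from identical views of the channel.
from hashlib import shake_256
import pickle

transcript = [b"first prover message", b"second prover message"]
challenge = shake_256(pickle.dumps(transcript)).digest(32)

# extending the transcript yields an unpredictably different challenge
other = shake_256(pickle.dumps(transcript + [b"third"])).digest(32)
assert challenge != other
assert len(challenge) == 32
```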

The Fiat-Shamir transform presents the first engineering challenge. The interactive protocol is described in terms of a *channel* which passes messages from prover to verifier or the other way around. The transform serializes this communication while still enabling a description of the prover that abstracts the channel away. The transform does modify the description of the verifier, which becomes deterministic.

A *proof stream* is a useful concept for simulating this channel. The difference with respect to regular streams in programming is that no actual transmission to another process or computer takes place, nor do sender and receiver need to operate simultaneously. It is not a simple queue either, because the prover and the verifier have access to a function that computes pseudorandomness by hashing their view of the channel. For the prover, this view is the entire list of messages *sent* so far. For the verifier, this view is the sublist of messages *read* so far. The verifier's messages are not added to the list because they can be computed deterministically from it. Given the list of the prover's messages, serialization is straightforward. The non-interactive proof is exactly this serialization.

![Non-interactive proof, after Fiat-Shamir transform](graphics/proof-stream.svg)

In terms of implementation, what is needed is a class `ProofStream` that supports 3 functionalities.
 1. Pushing and pulling objects to and from a queue. The queue is simulated by a list with a read index. Whenever an item is pushed, it is appended. Whenever an item is pulled, the read index is incremented by one.
 2. Serialization and deserialization. The amazing python library `pickle` does this.
 3. Fiat-Shamir. Hashing is done below by first serializing the queue or the first part of it, and then applying SHAKE-256. SHAKE-256 admits a variable output length, which the particular application may want to set. By default the output length is set to 32 bytes.

```python
from hashlib import shake_256
import pickle as pickle # serialization

class ProofStream:
    def __init__( self ):
        self.objects = []
        self.read_index = 0

    def push( self, obj ):
        self.objects += [obj]

    def pull( self ):
        assert(self.read_index < len(self.objects)), "ProofStream: cannot pull object; queue empty."
        obj = self.objects[self.read_index]
        self.read_index += 1
        return obj

    def serialize( self ):
        return pickle.dumps(self.objects)

    def deserialize( bb ):
        ps = ProofStream()
        ps.objects = pickle.loads(bb)
        return ps

    def prover_fiat_shamir( self, num_bytes=32 ):
        return shake_256(self.serialize()).digest(num_bytes)

    def verifier_fiat_shamir( self, num_bytes=32 ):
        return shake_256(pickle.dumps(self.objects[:self.read_index])).digest(num_bytes)
```

## Merkle Tree

A [Merkle tree](https://en.wikipedia.org/wiki/Merkle_tree) is a vector commitment scheme built from a collision-resistant hash function[^4]. Specifically, it allows the user to commit to an array of $2^N$ items such that:
 - The commitment is a single hash digest and this commitment is *binding* -- it represents the array in a way that prevents the user from changing it without first breaking the hash function;
 - For any index $i \in \lbrace0, \ldots, 2^N-1\rbrace$, the value in location $i$ of the array represented by the commitment can be proven with $N$ more hash digests.

Specifically, every leaf of the binary tree represents the hash of a data element. Every non-leaf node represents the hash of the concatenation of its two children. The root of the tree is the commitment. A membership proof consists of all siblings of the nodes on the path from the indicated leaf to the root. This list of siblings is called an *authentication path*; it provides the verifier with one complete hash preimage at every step along the path, culminating in a final test at the root node.

![Merkle tree](graphics/merkle-tree.svg)

An implementation of this construct needs to provide three functionalities:
 1. $\mathsf{commit}$ -- computes the Merkle root of a given array.
 2. $\mathsf{open}$ -- computes the authentication path of an indicated leaf in the Merkle tree.
 3. $\mathsf{verify}$ -- verifies that a given leaf is an element of the committed vector at the given index.

If performance is not an issue (and for this tutorial it is not), the recursive nature of these functionalities gives rise to a wonderfully functional implementation.

```python
from hashlib import blake2b

class Merkle:
    H = blake2b

    def commit_( leafs ):
        assert(len(leafs) & (len(leafs)-1) == 0), "length must be power of two"
        if len(leafs) == 1:
            return leafs[0]
        else:
            return Merkle.H(Merkle.commit_(leafs[:len(leafs)//2]) + Merkle.commit_(leafs[len(leafs)//2:])).digest()
    
    def open_( index, leafs ):
        assert(len(leafs) & (len(leafs)-1) == 0), "length must be power of two"
        assert(0 <= index and index < len(leafs)), "cannot open invalid index"
        if len(leafs) == 2:
            return [leafs[1 - index]]
        elif index < len(leafs) // 2:
            return Merkle.open_(index, leafs[:len(leafs)//2]) + [Merkle.commit_(leafs[len(leafs)//2:])]
        else:
            return Merkle.open_(index - len(leafs)//2, leafs[len(leafs)//2:]) + [Merkle.commit_(leafs[:len(leafs)//2])]
    
    def verify_( root, index, path, leaf ):
        assert(0 <= index and index < (1 << len(path))), "cannot verify invalid index"
        if len(path) == 1:
            if index == 0:
                return root == Merkle.H(leaf + path[0]).digest()
            else:
                return root == Merkle.H(path[0] + leaf).digest()
        else:
            if index % 2 == 0:
                return Merkle.verify_(root, index >> 1, path[1:], Merkle.H(leaf + path[0]).digest())
            else:
                return Merkle.verify_(root, index >> 1, path[1:], Merkle.H(path[0] + leaf).digest())
```

This functional implementation overlooks one important aspect: the data objects are rarely hash digests. So in order to use these functions with real-world data, the data elements must be hashed first. This preprocessing step belongs to the Merkle tree logic, so the Merkle tree module is extended to accommodate it.

```python
    def commit( data_array ):
        return Merkle.commit_([Merkle.H(bytes(da)).digest() for da in data_array])

    def open( index, data_array ):
        return Merkle.open_(index, [Merkle.H(bytes(da)).digest() for da in data_array])

    def verify( root, index, path, data_element ):
        return Merkle.verify_(root, index, path, Merkle.H(bytes(data_element)).digest())
```
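To see the three functionalities interact, here is a standalone sketch that re-expresses the recursion above iteratively, specialized to byte-string data elements. The name `open_path` (avoiding Python's builtin `open`) and the toy leaves are choices of this sketch, not part of the module above.

```python
from hashlib import blake2b

def H(data):
    return blake2b(data).digest()

# Iterative equivalent of the recursive functions above, for byte-string leaves.
def commit(data_array):
    layer = [H(d) for d in data_array]
    while len(layer) > 1:
        layer = [H(layer[i] + layer[i+1]) for i in range(0, len(layer), 2)]
    return layer[0]

def open_path(index, data_array):
    layer = [H(d) for d in data_array]
    path = []
    while len(layer) > 1:
        path += [layer[index ^ 1]]  # sibling on this level
        layer = [H(layer[i] + layer[i+1]) for i in range(0, len(layer), 2)]
        index >>= 1
    return path

def verify(root, index, path, data_element):
    node = H(data_element)
    for sibling in path:
        if index % 2 == 0:
            node = H(node + sibling)
        else:
            node = H(sibling + node)
        index >>= 1
    return node == root

# Commit to 8 leaves, then open and verify leaf 5.
leaves = [bytes([i]) for i in range(8)]
root = commit(leaves)
path = open_path(5, leaves)
assert len(path) == 3                         # N = log2(8) sibling digests
assert verify(root, 5, path, bytes([5]))
assert not verify(root, 5, path, bytes([6]))  # wrong leaf fails
```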

[0](index) - [1](overview) - **2** - [3](fri) - [4](stark) - [5](rescue-prime) - [6](faster)

[^1]: Actually, an [amazing new paper](https://arxiv.org/pdf/2107.08473.pdf) by the StarkWare team shows how to apply the same techniques in *any* finite field, whether it has the requisite structure or not. This tutorial explains the construction the simple way, using structured finite fields.
[^2]: A *monic* polynomial is one whose leading coefficient is one.
[^3]: Never mind that it does not make any sense to prove the correct computation of the algebraic-geometric mean of finite field elements; it serves the purpose of illustration.
[^4]: In some cases, such as hash-based signatures, collision resistance may be overkill and a more basic security notion, such as second-preimage resistance, may suffice.


================================================
FILE: docs/faster.md
================================================
# Anatomy of a STARK, Part 6: Speeding Things Up

The previous part of this tutorial posed the question whether math-level improvements can reduce the running times of the STARK algorithms. Indeed they can! There are folklore computational-algebra tricks that are independent of the STARK machinery, as well as some techniques specific to interactive proof systems.

## The Number Theoretic Transform and its Applications

### The Fast Fourier Transform

Let $f(X)$ be a polynomial of degree at most $2^k - 1$ with complex numbers as coefficients. What is the most efficient way to find the list of evaluations $f(X)$ on the $2^k$ complex roots of unity? Specifically, let $\omega = e^{2 \pi i / 2^k}$, then the output of the algorithm should be $(f(\omega^i))_{i=0}^{2^k-1} = (f(1), f(\omega), f(\omega^2), \ldots, f(\omega^{2^k-1}))$.

The naïve solution is to sequentially compute each evaluation individually. A more intelligent solution relies on the observation that $f(\omega^i) = \sum_{j=0}^{2^k-1} \omega^{ij} f_j$ and splitting the even and odd terms gives
$$ f(\omega^i) = \sum_{j=0}^{2^{k-1}-1} \omega^{i(2j)}f_{2j} + \sum_{j=0}^{2^{k-1}-1} \omega^{i(2j+1)} f_{2j+1} \\
 = \sum_{j=0}^{2^{k-1}-1} \omega^{i(2j)}f_{2j} + \omega^i \cdot \sum_{j=0}^{2^{k-1}-1} \omega^{i(2j)} f_{2j+1} \\
 = f_E(\omega^{2i}) + \omega^i \cdot f_O(\omega^{2i}) \enspace , $$
where $f_E(X)$ and $f_O(X)$ are the polynomials whose coefficients are the even coefficients, and odd coefficients respectively, of $f(X)$.

In other words, the evaluation of $f(X)$ at $\omega^i$ can be described in terms of the evaluations of $f_E(X)$ and $f_O(X)$ at $\omega^{2i}$. The same is true for a batch of points $\lbrace\omega^{ij}\rbrace_ {j=0}^{2^k-1}$, in which case the values of $f_E(X)$ and $f_O(X)$ on a domain of only half the size are needed: $\lbrace(\omega^{ij})^2\rbrace_ {j=0}^{2^k-1} = \lbrace(\omega^{2i})^j\rbrace_ {j=0}^{2^{k-1}-1}$. Note that tasks of batch-evaluating $f_E(X)$ and $f_O(X)$ are independent tasks of half the size. This screams divide and conquer! Specifically, the following strategy suggests itself:
 - split the coefficient vector into even and odd parts;
 - evaluate $f_E(X)$ on $\lbrace(\omega^{2i})^j\rbrace_{j=0}^{2^{k-1}-1}$ by recursion;
 - evaluate $f_O(X)$ on $\lbrace(\omega^{2i})^j\rbrace_{j=0}^{2^{k-1}-1}$ by recursion;
 - merge the evaluation vectors using the formula $f(\omega^i) = f_E(\omega^{2i}) + \omega^i \cdot f_O(\omega^{2i})$.

Voilà! That's the fast Fourier transform (FFT). The reason why the $2^k$th root of unity is needed is that it guarantees that $\lbrace(\omega^{ij})^2\rbrace_ {j=0}^{2^k-1} = \lbrace(\omega^{2i})^j\rbrace_ {j=0}^{2^{k-1}-1}$, and so the recursion really is on a domain of half the size. Phrased differently, if you were to use a similar strategy to evaluate $f(X)$ in $\lbrace z^j\rbrace_{j=0}^{2^k-1}$ where $z$ is not a primitive $2^k$th root of unity, then the evaluation domain would not shrink with every recursion step. There are $k$ recursion steps, and at each level there are $2^k$ multiplications and additions, so the complexity of this algorithm is $O(2^k \cdot k)$, or, expressed in terms of the length of the coefficient vector $N = 2^k$, $O(N \cdot \log N)$. A lot faster than the $O(N^2)$ complexity of the naïve sequential algorithm.

Note that the only property we need from $\omega$ is that the set of squares of $\lbrace\omega^j\rbrace_{j=0}^{2^k-1}$ is a set of half the size. The number $\omega$ satisfies this property because $\omega^{2^{k-1}+i} = -\omega^i$. Importantly, $\omega$ does not need to be a complex number as long as it satisfies this property. In fact, whenever a finite field has a subgroup of order $2^k$, this subgroup is generated by some $\omega$, and this $\omega$ can be used in exactly the same way. The resulting algorithm is a finite field analogue of the FFT, sometimes called the *Number Theoretic Transform (NTT)*.

```python
def ntt( primitive_root, values ):
    assert(len(values) & (len(values) - 1) == 0), "cannot compute ntt of non-power-of-two sequence"
    if len(values) <= 1:
        return values

    field = values[0].field

    assert(primitive_root^len(values) == field.one()), "primitive root must be nth root of unity, where n is len(values)"
    assert(primitive_root^(len(values)//2) != field.one()), "primitive root is not primitive nth root of unity, where n is len(values)"

    half = len(values) // 2

    odds = ntt(primitive_root^2, values[1::2])
    evens = ntt(primitive_root^2, values[::2])

    return [evens[i % half] + (primitive_root^i) * odds[i % half] for i in range(len(values))]
```
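The recursion can be cross-checked against naive evaluation in a toy setting. The following sketch re-implements the same algorithm with plain integers modulo 17 instead of the tutorial's `FieldElement` class; in GF(17), 2 is a primitive 8th root of unity because $2^4 = 16 \equiv -1$.

```python
P = 17  # toy prime field GF(17)

def ntt(root, values):
    # same recursion as above, with plain integers modulo P
    if len(values) <= 1:
        return values
    half = len(values) // 2
    odds = ntt(root * root % P, values[1::2])
    evens = ntt(root * root % P, values[::2])
    return [(evens[i % half] + pow(root, i, P) * odds[i % half]) % P
            for i in range(len(values))]

coefficients = [3, 1, 4, 1, 5, 9, 2, 6]
evaluations = ntt(2, coefficients)

# Cross-check against naive evaluation at the powers of 2.
naive = [sum(c * pow(2, i * j, P) for (j, c) in enumerate(coefficients)) % P
         for i in range(8)]
assert evaluations == naive
```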

The real magic comes into play when we apply the FFT (or NTT) twice, but use the inverse of $\omega$ for the second layer. Specifically, what happens if we treat the list of evaluations as a list of polynomial coefficients, and evaluate this polynomial in the $2^k$th roots of unity, in opposite order?

Recall that the $i$th coefficient of the Fourier transform is $f(\omega^i) = \sum_{j=0}^{2^k-1} f_j \omega^{ij}$. So the $l$th coefficient of the double Fourier transform is
$$ \sum_{i=0}^{2^k-1} f(\omega^i) \omega^{-il} = \sum_{i=0}^{2^k-1} \left( \sum_{j=0}^{2^k-1} f_j \omega^{ij} \right) \omega^{-il} \\
= \sum_{j=0}^{2^k-1} f_j \sum_{i=0}^{2^k-1} \omega^{i(l-j)} \enspace .$$

Whenever $l-j \neq 0$, the sum $\sum_{i=0}^{2^k-1} \omega^{i(l-j)}$ vanishes. To see this, note that $\omega^{l-j} \neq 1$ while $(\omega^{l-j})^{2^k} = 1$, so the geometric series sums to $\sum_{i=0}^{2^k-1} (\omega^{l-j})^i = \frac{(\omega^{l-j})^{2^k} - 1}{\omega^{l-j} - 1} = 0$. So in the formula above, the only coefficient $f_j$ that is multiplied by a nonzero sum is $f_{l}$, and in fact this sum is $\sum_{i=0}^{2^k-1}1 = 2^k$. So in summary, the $l$th coefficient of the double Fourier transform of $\mathbf{f}$ is $2^k \cdot f_{l}$, which is the same as the $l$th coefficient of $\mathbf{f}$ but scaled by a factor $2^k$.

What was derived was an inverse fast Fourier transform. Specifically, this inverse is the same as the regular fast Fourier transform, except:
 - it uses $\omega^{-1}$ instead of $\omega$; and
 - it needs to undo the scaling factor $2^k$ on every coefficient.

Once again, the logic applies to finite fields that are equipped with a subgroup of order $2^k$ without any change, resulting in the inverse NTT.

```python
def intt( primitive_root, values ):
    assert(len(values) & (len(values) - 1) == 0), "cannot compute intt of non-power-of-two sequence"

    if len(values) == 1:
        return values

    field = values[0].field
    ninv = FieldElement(len(values), field).inverse()

    transformed_values = ntt(primitive_root.inverse(), values)
    return [ninv*tv for tv in transformed_values]
```
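A round trip through the forward and inverse transforms should reproduce the original vector. The sketch below checks this in the same toy field GF(17) with plain integers, computing $n^{-1}$ and $\omega^{-1}$ via Fermat's little theorem; it is a standalone re-implementation, not the library code.

```python
P = 17  # toy prime field GF(17); 2 generates the order-8 subgroup

def ntt(root, values):
    if len(values) <= 1:
        return values
    half = len(values) // 2
    odds = ntt(root * root % P, values[1::2])
    evens = ntt(root * root % P, values[::2])
    return [(evens[i % half] + pow(root, i, P) * odds[i % half]) % P
            for i in range(len(values))]

def intt(root, values):
    ninv = pow(len(values), P - 2, P)      # 1/n   by Fermat's little theorem
    root_inv = pow(root, P - 2, P)         # 1/root
    return [v * ninv % P for v in ntt(root_inv, values)]

values = [3, 1, 4, 1, 5, 9, 2, 6]
assert intt(2, ntt(2, values)) == values
```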

### Fast Polynomial Arithmetic

The NTT is popular in computer algebra because the Fourier transform induces a homomorphism between polynomials and their vectors of values. Specifically, multiplication of polynomials corresponds to element-wise multiplication of their Fourier transforms. To see why this is true, remember that the Fourier transform represents the *evaluations* of a polynomial: the evaluation of $h(X) = f(X) \cdot g(X)$ in any point $z$ is the product of the evaluations of $f(X)$ and $g(X)$ in $z$. As long as $\mathsf{deg}(h(X)) < 2^k$, we can compute this product by:
 - computing the NTT; 
 - multiplying the resulting vectors element-wise; and
 - computing the inverse NTT.

```python
def fast_multiply( lhs, rhs, primitive_root, root_order ):
    assert(primitive_root^root_order == primitive_root.field.one()), "supplied root does not have supplied order"
    assert(primitive_root^(root_order//2) != primitive_root.field.one()), "supplied root is not primitive root of supplied order"

    if lhs.is_zero() or rhs.is_zero():
        return Polynomial([])

    field = lhs.coefficients[0].field
    root = primitive_root
    order = root_order
    degree = lhs.degree() + rhs.degree()

    if degree < 8:
        return lhs * rhs

    while degree < order // 2:
        root = root^2
        order = order // 2

    lhs_coefficients = lhs.coefficients[:(lhs.degree()+1)]
    while len(lhs_coefficients) < order:
        lhs_coefficients += [field.zero()]
    rhs_coefficients = rhs.coefficients[:(rhs.degree()+1)]
    while len(rhs_coefficients) < order:
        rhs_coefficients += [field.zero()]

    lhs_codeword = ntt(root, lhs_coefficients)
    rhs_codeword = ntt(root, rhs_coefficients)

    hadamard_product = [l * r for (l, r) in zip(lhs_codeword, rhs_codeword)]

    product_coefficients = intt(root, hadamard_product)
    return Polynomial(product_coefficients[0:(degree+1)])
```
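The three steps can be traced in GF(17) with plain integers (a standalone sketch, not the `fast_multiply` above): evaluate both factors on the order-8 subgroup, multiply pointwise, interpolate back, and compare against schoolbook convolution.

```python
P = 17  # toy prime field; 2 is a primitive 8th root of unity

def ntt(root, values):
    if len(values) <= 1:
        return values
    half = len(values) // 2
    odds = ntt(root * root % P, values[1::2])
    evens = ntt(root * root % P, values[::2])
    return [(evens[i % half] + pow(root, i, P) * odds[i % half]) % P
            for i in range(len(values))]

def intt(root, values):
    ninv = pow(len(values), P - 2, P)
    return [v * ninv % P for v in ntt(pow(root, P - 2, P), values)]

f = [3, 1, 4]   # 3 + X + 4X^2
g = [1, 5, 9]   # 1 + 5X + 9X^2
n = 8           # power of two exceeding deg(f*g) = 4

# evaluate, multiply pointwise, interpolate
f_codeword = ntt(2, f + [0] * (n - len(f)))
g_codeword = ntt(2, g + [0] * (n - len(g)))
h_codeword = [a * b % P for (a, b) in zip(f_codeword, g_codeword)]
h = intt(2, h_codeword)[:len(f) + len(g) - 1]

# schoolbook convolution for comparison
expected = [0] * (len(f) + len(g) - 1)
for i, a in enumerate(f):
    for j, b in enumerate(g):
        expected[i + j] = (expected[i + j] + a * b) % P
assert h == expected
```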

Fast multiplication serves as the basis for a bunch of fast polynomial arithmetic algorithms. Of particular interest to this tutorial is the calculation of *zerofiers* -- the polynomials that vanish on a given list of points called the *domain*. For this task, the divide-and-conquer strategy suggests itself:
 - divide the domain into two equal parts;
 - compute the zerofiers for the two parts separately; and
 - multiply the zerofiers using fast multiplication.

```python
def fast_zerofier( domain, primitive_root, root_order ):
    assert(primitive_root^root_order == primitive_root.field.one()), "supplied root does not have supplied order"
    assert(primitive_root^(root_order//2) != primitive_root.field.one()), "supplied root is not primitive root of supplied order"

    if len(domain) == 0:
        return Polynomial([])

    if len(domain) == 1:
        return Polynomial([-domain[0], primitive_root.field.one()])

    half = len(domain) // 2

    left = fast_zerofier(domain[:half], primitive_root, root_order)
    right = fast_zerofier(domain[half:], primitive_root, root_order)
    return fast_multiply(left, right, primitive_root, root_order)
```
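Here is a standalone sketch of the same divide-and-conquer shape over GF(17), with schoolbook multiplication standing in for `fast_multiply` to keep it short; the resulting polynomial is monic and vanishes exactly on the domain.

```python
P = 17  # toy prime field

def multiply(f, g):
    # schoolbook multiplication, standing in for fast_multiply
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % P
    return h

def zerofier(domain):
    if len(domain) == 1:
        return [(-domain[0]) % P, 1]        # X - d
    half = len(domain) // 2
    return multiply(zerofier(domain[:half]), zerofier(domain[half:]))

def evaluate(poly, x):
    return sum(c * pow(x, i, P) for (i, c) in enumerate(poly)) % P

domain = [2, 5, 7, 11]
z = zerofier(domain)
assert all(evaluate(z, d) == 0 for d in domain)   # vanishes on the domain
assert evaluate(z, 3) != 0                        # but not elsewhere
assert z[-1] == 1                                 # monic, degree = |domain|
```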

Another task benefiting from fast multiplication (not to mention fast zerofier calculation) is batch evaluation in an arbitrary domain. The idea behind the algorithm is to progressively reduce the given polynomial to a new polynomial that takes the same values on a subset of the domain. The term "reduce" is not a metaphor -- it is polynomial reduction modulo the zerofier for that domain. So this gives rise to another divide-and-conquer algorithm:
 - divide the domain into two halves, left and right;
 - compute the zerofier for each half;
 - reduce the polynomial modulo left zerofier and modulo right zerofier;
 - batch-evaluate the left remainder in the left domain half and the right remainder in the right domain;
 - concatenate the vectors of evaluation.

Note that the zerofiers, which are calculated by another divide-and-conquer algorithm, are used in the opposite order to how they are produced. A slightly more complex algorithm makes use of memoization for a performance boost.

```python
def fast_evaluate( polynomial, domain, primitive_root, root_order ):
    assert(primitive_root^root_order == primitive_root.field.one()), "supplied root does not have supplied order"
    assert(primitive_root^(root_order//2) != primitive_root.field.one()), "supplied root is not primitive root of supplied order"

    if len(domain) == 0:
        return []

    if len(domain) == 1:
        return [polynomial.evaluate(domain[0])]

    half = len(domain) // 2

    left_zerofier = fast_zerofier(domain[:half], primitive_root, root_order)
    right_zerofier = fast_zerofier(domain[half:], primitive_root, root_order)

    left = fast_evaluate(polynomial % left_zerofier, domain[:half], primitive_root, root_order)
    right = fast_evaluate(polynomial % right_zerofier, domain[half:], primitive_root, root_order)

    return left + right
```
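The correctness of this recursion rests on one fact: reducing a polynomial modulo a domain's zerofier does not change its values on that domain, because the subtracted multiple of the zerofier vanishes there. A standalone GF(17) sketch, with a hypothetical `poly_mod` helper doing long division by a monic divisor:

```python
P = 17  # toy prime field

def evaluate(poly, x):
    return sum(c * pow(x, i, P) for (i, c) in enumerate(poly)) % P

def poly_mod(f, g):
    # remainder of f modulo a monic polynomial g (hypothetical helper)
    f = f[:]
    while len(f) >= len(g):
        c, shift = f[-1], len(f) - len(g)
        for i, gc in enumerate(g):
            f[shift + i] = (f[shift + i] - c * gc) % P
        while f and f[-1] == 0:
            f.pop()
    return f

zerofier = [10, 10, 1]              # (X - 2)(X - 5) = X^2 + 10X + 10 mod 17
p = [3, 1, 4, 1, 5]                 # arbitrary degree-4 polynomial
r = poly_mod(p, zerofier)

assert len(r) <= 2                        # the remainder has lower degree...
assert evaluate(r, 2) == evaluate(p, 2)   # ...but identical values
assert evaluate(r, 5) == evaluate(p, 5)   # on the zerofier's roots
```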

Let's now turn to the opposite of evaluation -- polynomial interpolation. Ideally, we would like to apply another divide-and-conquer strategy, but it's tricky. We can divide the set of points into two halves and find the interpolants for each, but then how do we combine them?

How about finding the polynomial that passes through the left half of points, and takes the value 0 in the x-coordinates of the right half, and vice versa? This is certainly progress because adding them will give the desired interpolant. However, this is no longer a divide-and-conquer algorithm because after one recursion step the magnitude of the problem is still the same.

What if we find the interpolant through the left half of points, and multiply it by the zerofier of right half's x-coordinates? Close, but no cigar: the zerofier will take values different from 1 on the left x-coordinates, meaning that multiplication will destroy the information embedded in the left interpolant.

But the right zerofier's values in the left x-coordinates are not random, and can be predicted simply by calculating the right zerofier and batch-evaluating it in the left x-coordinates. What needs to be done is to find the polynomial that passes through points whose x-coordinates correspond to the left half of points, and whose y-coordinates anticipate multiplication by the zerofier. These are just the left y-coordinates, divided by values of the right zerofier in the matching x-coordinates.

```python
def fast_interpolate( domain, values, primitive_root, root_order ):
    assert(primitive_root^root_order == primitive_root.field.one()), "supplied root does not have supplied order"
    assert(primitive_root^(root_order//2) != primitive_root.field.one()), "supplied root is not primitive root of supplied order"
    assert(len(domain) == len(values)), "cannot interpolate over domain of different length than values list"

    if len(domain) == 0:
        return Polynomial([])

    if len(domain) == 1:
        return Polynomial([values[0]])

    half = len(domain) // 2

    left_zerofier = fast_zerofier(domain[:half], primitive_root, root_order)
    right_zerofier = fast_zerofier(domain[half:], primitive_root, root_order)

    left_offset = fast_evaluate(right_zerofier, domain[:half], primitive_root, root_order)
    right_offset = fast_evaluate(left_zerofier, domain[half:], primitive_root, root_order)

    left_targets = [n / d for (n,d) in zip(values[:half], left_offset)]
    right_targets = [n / d for (n,d) in zip(values[half:], right_offset)]

    left_interpolant = fast_interpolate(domain[:half], left_targets, primitive_root, root_order)
    right_interpolant = fast_interpolate(domain[half:], right_targets, primitive_root, root_order)

    return left_interpolant * right_zerofier + right_interpolant * left_zerofier
```
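The reasoning above can be sketched standalone over GF(17), again with schoolbook arithmetic in place of the fast subroutines; the helper names are choices of this sketch.

```python
P = 17  # toy prime field
inv = lambda a: pow(a, P - 2, P)

def multiply(f, g):
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % P
    return h

def add(f, g):
    out = [0] * max(len(f), len(g))
    for i, c in enumerate(f):
        out[i] = (out[i] + c) % P
    for i, c in enumerate(g):
        out[i] = (out[i] + c) % P
    return out

def evaluate(poly, x):
    return sum(c * pow(x, i, P) for (i, c) in enumerate(poly)) % P

def zerofier(domain):
    if len(domain) == 1:
        return [(-domain[0]) % P, 1]
    half = len(domain) // 2
    return multiply(zerofier(domain[:half]), zerofier(domain[half:]))

def interpolate(domain, values):
    if len(domain) == 1:
        return [values[0] % P]
    half = len(domain) // 2
    left_zerofier = zerofier(domain[:half])
    right_zerofier = zerofier(domain[half:])
    # divide each y-coordinate by the other half's zerofier value there,
    # anticipating the multiplication below
    left_targets = [v * inv(evaluate(right_zerofier, x)) % P
                    for (x, v) in zip(domain[:half], values[:half])]
    right_targets = [v * inv(evaluate(left_zerofier, x)) % P
                     for (x, v) in zip(domain[half:], values[half:])]
    return add(multiply(interpolate(domain[:half], left_targets), right_zerofier),
               multiply(interpolate(domain[half:], right_targets), left_zerofier))

domain = [1, 2, 3, 4]
values = [7, 0, 5, 11]
p = interpolate(domain, values)
assert [evaluate(p, x) for x in domain] == values
```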

Next up: fast evaluation on a coset. This task is needed in the STARK pipeline when transforming a polynomial into a codeword to be input to FRI. It is possible to solve this problem using fast batch-evaluation on arbitrary domains. However, when the given domain coincides with a coset of order $2^k$, it would be a shame not to use the NTT directly. The only question is how to shift the domain of evaluation. This is precisely what polynomial scaling achieves.

```python
def fast_coset_evaluate( polynomial, offset, generator, order ):
    scaled_polynomial = polynomial.scale(offset)
    values = ntt(generator, scaled_polynomial.coefficients + [offset.field.zero()] * (order - len(polynomial.coefficients)))
    return values
```
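Polynomial scaling replaces each coefficient $c_i$ by $c_i \cdot \mathtt{offset}^i$, producing the polynomial $X \mapsto p(\mathtt{offset} \cdot X)$; running the NTT on the scaled polynomial therefore evaluates the original on the coset. A standalone GF(17) sketch:

```python
P = 17  # toy prime field; 2 generates the order-8 subgroup

def ntt(root, values):
    if len(values) <= 1:
        return values
    half = len(values) // 2
    odds = ntt(root * root % P, values[1::2])
    evens = ntt(root * root % P, values[::2])
    return [(evens[i % half] + pow(root, i, P) * odds[i % half]) % P
            for i in range(len(values))]

def evaluate(poly, x):
    return sum(c * pow(x, i, P) for (i, c) in enumerate(poly)) % P

def scale(poly, factor):
    # coefficients of poly(factor * X)
    return [c * pow(factor, i, P) % P for (i, c) in enumerate(poly)]

poly = [3, 1, 4, 1]
offset = 3                            # 3 lies outside the order-8 subgroup
padded = scale(poly, offset) + [0] * (8 - len(poly))
values = ntt(2, padded)

coset = [offset * pow(2, i, P) % P for i in range(8)]
assert values == [evaluate(poly, x) for x in coset]
```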

Fast evaluation on a coset allows us to answer a pesky problem that arises when adapting the fast multiplication procedure to divide instead of multiply. Where fast multiplication used element-wise multiplication on codewords, fast division uses element-wise division on codewords, where the codewords are obtained by applying the NTT to the polynomials' coefficient vectors.

The problem is this: what happens when the divisor codeword is zero in a given location? If the numerator codeword is not zero in that location, then the division is unclean and has a nonzero remainder. In this case the entire operation can be flagged as erroneous. However, there can still be clean division if the numerator is also zero in the given location. The naïve fast division algorithm fails because of a zero-divided-by-zero error, even though the underlying polynomials divide cleanly.

This is exactly the problem that occurs when attempting to use NTTs to divide out the zerofiers. We got around this problem in the previous part of the tutorial by using polynomial long division instead, but that solution has a *quadratic* running time. We want quasilinear!

The solution is to perform the element-wise division on codewords arising from evaluation on a coset of the group over which the NTT is defined. Specifically, the procedure involves five steps:
 - scale,
 - NTT,
 - element-wise divide,
 - inverse NTT, and
 - unscale.

This solution only works if the denominator polynomials do not have any zeros on the coset. However, in some cases (like dividing out zerofiers), the denominator is *known* not to have zeros on a particular coset.

The Python code has a lot of boilerplate to deal with special circumstances, but in the end it boils down to those five steps.

```python
def fast_coset_divide( lhs, rhs, offset, primitive_root, root_order ): # clean division only!
    assert(primitive_root^root_order == primitive_root.field.one()), "supplied root does not have supplied order"
    assert(primitive_root^(root_order//2) != primitive_root.field.one()), "supplied root is not primitive root of supplied order"
    assert(not rhs.is_zero()), "cannot divide by zero polynomial"

    if lhs.is_zero():
        return Polynomial([])

    assert(rhs.degree() <= lhs.degree()), "cannot divide by polynomial of larger degree"

    field = lhs.coefficients[0].field
    root = primitive_root
    order = root_order
    degree = max(lhs.degree(),rhs.degree())

    if degree < 8:
        return lhs / rhs

    while degree < order // 2:
        root = root^2
        order = order // 2

    scaled_lhs = lhs.scale(offset)
    scaled_rhs = rhs.scale(offset)
    
    lhs_coefficients = scaled_lhs.coefficients[:(lhs.degree()+1)]
    while len(lhs_coefficients) < order:
        lhs_coefficients += [field.zero()]
    rhs_coefficients = scaled_rhs.coefficients[:(rhs.degree()+1)]
    while len(rhs_coefficients) < order:
        rhs_coefficients += [field.zero()]

    lhs_codeword = ntt(root, lhs_coefficients)
    rhs_codeword = ntt(root, rhs_coefficients)

    quotient_codeword = [l / r for (l, r) in zip(lhs_codeword, rhs_codeword)]
    scaled_quotient_coefficients = intt(root, quotient_codeword)
    scaled_quotient = Polynomial(scaled_quotient_coefficients[:(lhs.degree() - rhs.degree() + 1)])

    return scaled_quotient.scale(offset.inverse())

```
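The five steps can be traced standalone in GF(17). The divisor $X^4 - 1$ vanishes on half of the order-8 subgroup, so naive NTT division would divide zero by zero there; on the coset $\lbrace 3 \cdot 2^i \rbrace$ it is nonzero everywhere and the quotient comes out cleanly.

```python
P = 17  # toy prime field; 2 generates the order-8 subgroup
inv = lambda a: pow(a, P - 2, P)

def ntt(root, values):
    if len(values) <= 1:
        return values
    half = len(values) // 2
    odds = ntt(root * root % P, values[1::2])
    evens = ntt(root * root % P, values[::2])
    return [(evens[i % half] + pow(root, i, P) * odds[i % half]) % P
            for i in range(len(values))]

def intt(root, values):
    ninv = pow(len(values), P - 2, P)
    return [v * ninv % P for v in ntt(pow(root, P - 2, P), values)]

def scale(poly, factor):
    return [c * pow(factor, i, P) % P for (i, c) in enumerate(poly)]

quotient = [3, 1, 4, 1]          # expected result
divisor = [P - 1, 0, 0, 0, 1]    # X^4 - 1, zero on {1, 4, 16, 13}

# numerator = quotient * divisor, by schoolbook convolution
numerator = [0] * (len(quotient) + len(divisor) - 1)
for i, a in enumerate(quotient):
    for j, b in enumerate(divisor):
        numerator[i + j] = (numerator[i + j] + a * b) % P

offset, n = 3, 8
num_codeword = ntt(2, scale(numerator, offset) + [0] * (n - len(numerator)))
div_codeword = ntt(2, scale(divisor, offset) + [0] * (n - len(divisor)))
assert all(v != 0 for v in div_codeword)   # no zeros on the coset

q_codeword = [a * inv(b) % P for (a, b) in zip(num_codeword, div_codeword)]
q_scaled = intt(2, q_codeword)[:len(quotient)]
assert scale(q_scaled, inv(offset)) == quotient   # unscaling recovers quotient
```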

## Fast Zerofier Evaluation

The algorithms described above chiefly apply to the prover, whose complexity drops from $O(T^2)$ to $O(T \log T)$. Scalability for the prover is achieved. The verifier's bottleneck is the evaluation of the transition zerofier, which is in general a dense polynomial of degree $T$. As a result, roughly $T$ coefficients will be possibly nonzero, and since the verifier must touch all of them to compute the polynomial's value, his running time will be on the same order of magnitude. For scalable verifiers, we need a running time of at most $\tilde{O}(\log T)$. There are two strategies to achieve this: (1) sparse zerofiers based on group theory and (2) preprocessed dense zerofiers.

### Sparse Zerofiers with Group Theory

It is an elementary fact of group theory that every element raised to its order gives the identity. For example, an element $x$ of the subgroup of order $r$ of the multiplicative group of a finite field $\mathbb{F}_ p \backslash \lbrace 0 \rbrace$ satisfies $x^r = 1$. Rearranging, and replacing $x$ with a formal indeterminate $X$, we get a polynomial
$$ X^r - 1 $$
that is guaranteed to evaluate to zero in every element of the order-$r$ subgroup. Furthermore, this polynomial is monic (*i.e.*, the leading coefficient is one) and of minimal degree (across all polynomials that vanish on all $r$ points of the subgroup). Therefore, this sparse polynomial is exactly the zerofier for the subgroup!

For STARKs, we are already using finite fields that come with subgroups of order $2^k$ for many $k$. Therefore, if the execution trace is interpolated over $\lbrace \omicron^i \, \vert \, 0 \leq i < 2^k \rbrace$ where $\omicron$ is a generator of the subgroup of order $2^k$, then the zerofier for $\lbrace \omicron^i \, \vert \, 0 \leq i < 2^k - 1\rbrace$ is equal to the rational expression
$$ \frac{X^{2^k} - 1}{X - \omicron^{-1}} $$
in all points $X$ except for $X = \omicron^{-1}$, where the rational expression is undefined.

The verifier obviously does not perform the division because it turns a sparse polynomial into a dense one. Instead, the verifier evaluates the numerator sparsely and divides it by the value of the denominator. This works as long as the verifier does not need to evaluate the zerofier in $\omicron^{-1}$, which is precisely what the coset-trick of FRI guarantees.
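A quick numeric check of this claim in GF(17), with $\omicron = 2$ generating the order-8 subgroup: at a point outside the subgroup, the sparse expression agrees with the dense zerofier of $\lbrace \omicron^i \, \vert \, 0 \leq i < 7 \rbrace$.

```python
P = 17
o = 2                              # generates the order-8 subgroup of GF(17)*
inv = lambda a: pow(a, P - 2, P)

def dense_zerofier_at(x):
    # product over the first 7 subgroup elements, i.e. all except o^{-1}
    out = 1
    for i in range(7):
        out = out * (x - pow(o, i, P)) % P
    return out

x = 3                              # a point outside the subgroup
sparse = (pow(x, 8, P) - 1) % P * inv((x - inv(o)) % P) % P
assert sparse == dense_zerofier_at(x)
```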

To apply this strategy, the STARK trace length must be a power of 2. If the trace is shorter than a power of two, say by a difference of $d$, then the verifier needs to evaluate a zerofier that has $d+1$ factors in the denominator. In other words, *the trace length must be a power of two in order for the verifier to be fast*.

The solution is to pad the trace until its length is the next power of 2. Clearly this padding must be compatible with the transition constraints so that the composition polynomials still evaluate to zero on all (but one point) of the power-of-two subgroup. The natural solution is to apply the same transition function for a power-of-two number of cycles, and have the boundary conditions refer to the "output" whose cycle index is somewhere in the middle. However, this design decision introduces a problem when it comes to appending randomizers to the trace for the purpose of achieving zero-knowledge.
 - If the randomizers are appended after padding the trace, then the randomized trace does not fit into the power-of-two subgroup. In this case the interpolant must be computed such that:
   - over the power-of-two subgroup it evaluates to the execution trace; and
   - over a distinct domain it evaluates to the uniformly random randomizers.
 - If the randomizers are appended before padding, then the transition constraints must be compatible with this operation, or else the composition polynomials will not evaluate to zero in the entire power-of-two subgroup. This option requires changing the AIR.

### Preprocessing

Where a standard IOP consists of two parties, the prover and the verifier, a *Preprocessing IOP* consists of three: a prover, a verifier, and an *indexer*. (The indexer is sometimes also called the *preprocessor* or the *helper*.)

The role of the indexer is to perform computations that help the verifier (not to mention prover) but that are too expensive for the verifier to perform directly. The catch is that the indexer does not receive the same input as the verifier does. The indexer's input (the *index*) is information about the computation that can be computed ahead of time, before specific data is known. For example, the index could be the number of cycles that the computation is supposed to take, along with the transition constraints. The specific information about the computation, or *instance*, would be the boundary constraints. The verifier's input is the instance as well as the indexer's output (which itself may include the index). The point is that from the verifier's point of view, the indexer's output is trusted.

![Information flow in a proof system with preprocessing.](graphics/preprocessing.svg)

The formal definition of STARKs does not capture proof systems with preprocessing, and when counting the indexer's work as verifier work, a proof system with preprocessing is arguably not scalable. Nevertheless, a preprocessing proof system can be scalable in the English sense of the word if the verifier's work (not counting that of the indexer) is polylogarithmic in the size of the computation.

### Preprocessed Dense Zerofiers

Concretely, the indexer's output to the verifier will be a commitment to the zerofier $Z(X) = \prod_{i=0}^{T-1} (X-\omicron^i)$ via the familiar construction: the Merkle root of a Reed-Solomon codeword. Whenever the verifier needs the value of this zerofier in a point, the prover supplies the matching leaf along with an authentication path. Note that the verifier does not need to evaluate the zerofier in points outside the FRI domain. As a result, there is no need to prove that the zerofier has a low degree; it comes straight from the trusted indexer.

This description highlights the main drawback of using preprocessing to achieve scalability: the proof is larger because it includes more Merkle authentication paths. Another drawback is the slightly stronger security model: the verifier needs to trust the indexer's output. Even though the preprocessing is transparent here, re-running the indexer in order to justify this trust might be prohibitively expensive. The code supporting this tutorial achieves scalability through preprocessing as opposed to group theory.

### Variable Execution Times

The solution described above works perfectly fine if the execution time $T$ is known beforehand. What to do, however, when the execution time is not known beforehand, and thus cannot be included in the index?

Preprocessing still holds a solution, but at the cost of a slightly more expensive verifier. The indexer commits to each member of a family of zerofiers $\{Z_ {2^k}(X)\}_ k$ where $Z_{2^k}(X) = \prod_{i=0}^{2^k-1} (X - \omicron^i)$. Let $t = \lfloor \log_2 T \rfloor$ such that $Z_{2^t}(X)$ belongs to this family.

The prover wishes to show that a certain transition polynomial $p(X)$ evaluates to zero on $\{\omicron^i\}_ {i=0}^{T-2}$. Without preprocessing, he would commit to and prove the bounded degree of a quotient polynomial $q(X) = p(X) / Z_{T-1}(X)$, where $Z_{T-1}(X) = \prod_{i=0}^{T-2} (X - \omicron^i)$. With preprocessing, he must commit to and prove the bounded degree of two quotient polynomials:
 1. $q_l(X) = \frac{p(X) }{ Z_{2^t}(X)}$ and
 2. $q_r(X) = \frac{p(X) }{\omicron^{T-1-2^t} \cdot Z_{2^t}(\omicron^{2^t-T+1} \cdot X)}$.

The denominator of the second quotient is, up to a scalar, exactly the zerofier $\prod_{i=T-1-2^t}^{T-2} (X - \omicron^i)$. The transition polynomial is divisible by both zerofiers if and only if it is divisible by the union zerofier $\prod_{i=0}^{T-2} (X - \omicron^i)$.

While this solution works adequately in the general case, for the Rescue-Prime computation, the cycle count is known. Therefore, the implementation reflects this setting.

## Fast STARKs

Now it is time to apply the developed tools to make the STARK algorithmically efficient.

First, add a preprocessing function. This function is a member of the STARK class with access to its fields (such as the number of cycles). It produces two outputs: one for the prover, and one for the verifier. In this concrete case, the prover receives the zerofier polynomial and zerofier codeword, and the verifier receives the zerofier Merkle root.

```python
# class FastStark:
# [...]
    def preprocess( self ):
        transition_zerofier = fast_zerofier(self.omicron_domain[:(self.original_trace_length-1)], self.omicron, len(self.omicron_domain))
        transition_zerofier_codeword = fast_coset_evaluate(transition_zerofier, self.generator, self.omega, self.fri.domain_length)
        transition_zerofier_root = Merkle.commit(transition_zerofier_codeword)
        return transition_zerofier, transition_zerofier_codeword, transition_zerofier_root
```

The argument lists of `prove` and `verify` must be adapted accordingly.

```python
# class FastStark:
# [...]
    def prove( self, trace, transition_constraints, boundary, transition_zerofier, transition_zerofier_codeword, proof_stream=None ):
# [...]
    def verify( self, proof, transition_constraints, boundary, transition_zerofier_root, proof_stream=None ):
```

The prover can use fast coset division to divide out the transition zerofier. Note that the denominator is now exactly the `transition_zerofier` argument produced by preprocessing, rather than a polynomial the prover recomputes.

```python
# class FastStark:
#     [...]
#     def prove( [..] ):
#       [...]
        # divide out zerofier
        transition_quotients = [fast_coset_divide(tp, transition_zerofier, self.generator, self.omicron, self.omicron_domain_length) for tp in transition_polynomials]
```

The verifier needs to perform this division in a number of locations, which means that he needs the value of the zerofier in those locations. Therefore, the prover must provide these values, along with authentication paths.

```python
# class FastStark:
#     [...]
#     def prove( [..] ):
#       [...]
        # ... and also in the zerofier!
        for i in quadrupled_indices:
            proof_stream.push(transition_zerofier_codeword[i])
            path = Merkle.open(i, transition_zerofier_codeword)
            proof_stream.push(path)
```

The verifier, in turn, needs to read these values and their authentication paths from the proof stream before verifying the authentication paths and storing the zerofier values in a structure for later use. Note that these authentication paths are verified against the Merkle root, which is the new input to the verifier.

```python
# class FastStark:
#     [...]
#     def verify( [..] ):
#       [...]
        # read and verify transition zerofier leafs
        transition_zerofier = dict()
        for i in duplicated_indices:
            transition_zerofier[i] = proof_stream.pull()
            path = proof_stream.pull()
            verifier_accepts = verifier_accepts and Merkle.verify(transition_zerofier_root, i, path, transition_zerofier[i])
            if not verifier_accepts:
                return False
```
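The zerofier leafs are authenticated the same way as any other codeword. The following standalone sketch mirrors the *shape* of the repository's `Merkle` class (commit, open, verify over a power-of-two leaf array) but is an independent toy, not the actual implementation:

```python
# Minimal Merkle commit/open/verify round trip (hashlib only; a toy that mirrors
# the shape of the repository's Merkle class, not the actual implementation).
from hashlib import blake2b

def H(b):
    return blake2b(b).digest()

def commit(leafs):
    assert len(leafs) & (len(leafs) - 1) == 0, "length must be a power of two"
    if len(leafs) == 1:
        return leafs[0]
    return H(commit(leafs[:len(leafs) // 2]) + commit(leafs[len(leafs) // 2:]))

def open_(index, leafs):
    # authentication path: sibling hashes from leaf level up to the root
    n = len(leafs)
    if n == 2:
        return [leafs[1 - index]]
    if index < n // 2:
        return open_(index, leafs[:n // 2]) + [commit(leafs[n // 2:])]
    return open_(index - n // 2, leafs[n // 2:]) + [commit(leafs[:n // 2])]

def verify(root, index, path, leaf):
    # fold the path back up; the index's parity decides left/right at each level
    for node in path:
        leaf = H(leaf + node) if index % 2 == 0 else H(node + leaf)
        index //= 2
    return leaf == root

leafs = [H(bytes([i])) for i in range(8)]   # e.g. hashed zerofier values
root = commit(leafs)
path = open_(3, leafs)
assert verify(root, 3, path, leafs[3])
assert not verify(root, 2, path, leafs[3])  # wrong index fails
```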

Finally, when the nonlinear combination is computed, these values can be read from memory and used.

```python
# class FastStark:
#     [...]
#     def verify( [..] ):
#       [...]
                quotient = tcv / transition_zerofier[current_index]
```

At this point, what remains is to switch to fast polynomial arithmetic outside the context of preprocessing. The first opportunity is interpolating the trace.

```python
# class FastStark:
#     [...]
#     def prove( [..] ):
#         [...]
            trace_polynomials = trace_polynomials + [fast_interpolate(trace_domain, single_trace, self.omicron, self.omicron_domain_length)]
```

Next, when committing to the boundary quotients, use fast coset evaluation. Same goes for the randomizer polynomial and the combination polynomial.

```python
# class FastStark:
#     [...]
#     def prove( [..] ):
        # [...]
        # commit to boundary quotients
        # [...]
        for s in range(self.num_registers):
            boundary_quotient_codewords = boundary_quotient_codewords + [fast_coset_evaluate(boundary_quotients[s], self.generator, self.omega, self.fri.domain_length)]
            merkle_root = Merkle.commit(boundary_quotient_codewords[s])
            proof_stream.push(merkle_root)
        # [...]
        # commit to randomizer polynomial
        randomizer_polynomial = Polynomial([self.field.sample(os.urandom(17)) for i in range(self.max_degree(transition_constraints)+1)])
        randomizer_codeword = fast_coset_evaluate(randomizer_polynomial, self.generator, self.omega, self.fri.domain_length)
        randomizer_root = Merkle.commit(randomizer_codeword)
        proof_stream.push(randomizer_root)
        # [...]
        # compute matching codeword
        combined_codeword = fast_coset_evaluate(combination, self.generator, self.omega, self.fri.domain_length)
```
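Fast coset evaluation boils down to "scale, then NTT": evaluating $f$ on the coset $\{g \cdot \omega^i\}_ i$ is the same as evaluating the scaled polynomial $f(g \cdot X)$ on the subgroup $\{\omega^i\}_ i$. A toy check with assumed parameters ($p = 17$, $\omega = 2$ of order 8, offset $g = 3$), using a naive evaluator in place of the NTT:

```python
# Toy check (assumed parameters, not repository code): evaluating f on the
# coset {g * omega^i} equals evaluating f(g * X) on the subgroup {omega^i}.
p, omega, g = 17, 2, 3   # omega has multiplicative order 8 mod 17

def evaluate(coeffs, x):
    return sum(c * pow(x, j, p) for j, c in enumerate(coeffs)) % p

f = [1, 6, 1, 8, 0, 3, 3, 9]                               # coefficients of f(X)
scaled = [c * pow(g, j, p) % p for j, c in enumerate(f)]   # coefficients of f(g * X)

for i in range(8):
    x = pow(omega, i, p)
    assert evaluate(f, g * x % p) == evaluate(scaled, x)
```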

Dividing out the transition zerofier is a computationally intensive task, so it pays to switch to NTT-based division here too. Note that *coset* division is needed, since the zerofier vanishes on the trace domain, and pointwise division over that domain would divide by zero.

```python
        # divide out zerofier
        transition_quotients = [fast_coset_divide(tp, transition_zerofier, self.generator, self.omicron, self.omicron_domain_length) for tp in transition_polynomials]
```
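Why the coset is necessary can be checked directly: the zerofier is zero at every point of the trace domain, so pointwise division is only well-defined on a disjoint coset. A toy illustration with assumed parameters ($p = 17$, $\omega = 2$ of order 8, offset 3):

```python
# Toy illustration (assumed parameters, not repository code) of why division
# must happen on a coset: the zerofier vanishes everywhere on the trace domain.
p, omega, g = 17, 2, 3                               # omega has order 8 mod 17
trace_domain = [pow(omega, i, p) for i in range(8)]  # the subgroup {omega^i}
coset = [g * pow(omega, i, p) % p for i in range(8)] # shifted by the offset g

def zerofier(x):
    acc = 1
    for d in trace_domain:
        acc = acc * (x - d) % p
    return acc

assert all(zerofier(x) == 0 for x in trace_domain)   # division by zero here
assert all(zerofier(x) != 0 for x in coset)          # safe to divide here
```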

Lastly, in the FRI verifier, switch out the slow Lagrange interpolation for the much faster coset-NTT-based interpolation.

```python
# class Fri:
    # [...]
    # def verify( [..] ):
        # [...]
        # compute interpolant
        last_domain = [last_offset * (last_omega^i) for i in range(len(last_codeword))]
        coefficients = intt(last_omega, last_codeword)
        poly = Polynomial(coefficients).scale(last_offset.inverse())
```
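What those last lines compute can be reproduced with a toy $O(n^2)$ inverse DFT; the repository's `intt` is the fast recursive version of the same map. Parameters here are assumed for illustration ($p = 17$, $\omega = 2$ of order 8, offset 3): interpolate the coset codeword with the inverse DFT, then scale the result by the offset's inverse to undo the coset shift.

```python
# Toy O(n^2) stand-in for intt-based coset interpolation (assumed parameters,
# not repository code): p = 17, omega = 2 of order 8, coset offset = 3.
p, omega, n, offset = 17, 2, 8, 3

def dft(coeffs, root):
    # evaluate the polynomial with these coefficients on {root^i}
    return [sum(c * pow(root, i * j, p) for j, c in enumerate(coeffs)) % p
            for i in range(n)]

def idft(values, root):
    # inverse DFT: forward DFT with root^{-1}, scaled by n^{-1}
    inv_n = pow(n, -1, p)
    return [v * inv_n % p for v in dft(values, pow(root, -1, p))]

f = [1, 2, 3, 4, 0, 0, 0, 0]    # coefficients of f(X), degree < 8
codeword = [sum(c * pow(offset * pow(omega, i, p) % p, j, p) for j, c in enumerate(f)) % p
            for i in range(n)]  # f evaluated on the coset {offset * omega^i}

g = idft(codeword, omega)       # coefficients of g(X) = f(offset * X)
inv_offset = pow(offset, -1, p)
recovered = [c * pow(inv_offset, j, p) % p for j, c in enumerate(g)]  # scale back
assert recovered == f
```

Scaling by `offset.inverse()` in the tutorial's code is exactly the last step here: the inverse NTT recovers $g(X) = f(\mathsf{offset} \cdot X)$, and $f(X) = g(\mathsf{offset}^{-1} X)$.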

After modifying the Rescue-Prime signature scheme to use the new `FastStark` class and methods, we obtain a significantly faster signature scheme.

 - secret key size: 16 bytes (yay!)
 - public key size: 16 bytes (yay!)
 - signature size: **~160 kB**
 - keygen time: 0.01 seconds (acceptable)
 - signing time: **72 seconds**
 - verification time: **8 seconds**

How's that for an improvement? The proof is larger because there are many more Merkle paths associated with zerofier leafs, but in exchange for this larger proof, verification is an order of magnitude faster. Of course there is no shortage of further improvements, but those are beyond the scope of this tutorial and left as exercises to the reader.


[0](index) - [1](overview) - [2](basic-tools) - [3](fri) - [4](stark) - [5](rescue-prime) - **6**


================================================
FILE: docs/fri.md
================================================
# Anatomy of a STARK, Part 3: FRI

FRI is a protocol that establishes that a committed polynomial has a bounded degree. The acronym FRI stands for *Fast Reed-Solomon IOP of Proximity*, where IOP stands for *interactive oracle proof*. FRI is presented in the language of codewords: the prover sends codewords to the verifier who does not read them whole but who makes oracle-queries to read them in select locations. The codewords in this protocol are *Reed-Solomon codewords*, meaning that their values correspond to the evaluation of some low-degree polynomial in a list of points called the domain $D$. The length of this list is larger than the number of possibly nonzero coefficients in the polynomial by a factor called the *expansion factor* (also *blowup factor*), which is the reciprocal of the code's *rate* $\rho$.
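In code, the relationship between codeword length, degree bound, expansion factor, and rate looks as follows. The parameters are toy assumptions for illustration ($p = 257$, a subgroup of order 16), not the tutorial's actual field:

```python
# Toy Reed-Solomon codeword (illustrative parameters, not from the repository):
# a polynomial with at most 4 nonzero coefficients, evaluated on a 16-point
# domain, has expansion factor 16 / 4 = 4, i.e. rate rho = 1/4.
p = 257
omega = pow(3, (p - 1) // 16, p)    # 3 generates the multiplicative group mod 257
domain = [pow(omega, i, p) for i in range(16)]

coeffs = [5, 0, 2, 1]               # degree < 4
codeword = [sum(c * pow(x, j, p) for j, c in enumerate(coeffs)) % p for x in domain]

expansion_factor = len(codeword) // len(coeffs)
assert expansion_factor == 4        # reciprocal of the rate rho = 1/4
```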

Since the codewords represent low-degree polynomials, and since the codewords are hidden behind Merkle trees in any real-world deployment, it is arguably more natural to present FRI from the point of view of a polynomial commitment scheme, with some caveats. There is scientific merit in separating the type of codewords from the IOP, and those two from the Merkle tree that simulates the oracles. However, from an accessibility point of view, it is beneficial to consider them as three components of one basic primitive that relates to polynomial commitment schemes. For the remainder of this tutorial, we will use the term FRI in this sense.

In a regular polynomial commitment scheme, a prover commits to a polynomial $f(X)$ that is later opened at a given point $z$ such that it cannot equivocate between two different values of $f(z)$. The scheme consists of three algorithms:
 - $\mathsf{commit}$, which computes a binding commitment from the polynomial;
 - $\mathsf{open}$, which produces a proof that $f(z) = y$ for some $z$ and for the polynomial $f(X)$ that matches with the given commitment;
 - $\mathsf{verify}$, which verifies the proof produced by $\mathsf{open}$.

The FRI scheme has a different interface, but a later section shows how it can simulate the standard polynomial commitment scheme interface without much overhead. FRI is a protocol between a prover and a verifier, which establishes that a given codeword belongs to a polynomial of low degree -- low meaning at most $\rho$ times the length of the codeword. Without losing much generality[^1], the prover knows this codeword explicitly, whereas the verifier knows only its Merkle root and leafs of his choosing, assuming the successful validation of the authentication paths that establish the leafs' membership to the Merkle tree.

## Split-and-Fold

One of the great ideas for proof systems in recent years is the *split-and-fold* technique. The idea is to reduce a claim to two claims of half the size, which are then merged into one using random weights supplied by the verifier. After logarithmically many steps (as a function of the size of the original claim), the claim has been reduced to one of trivial size, which is true if and only if (modulo some negligible security degradation) the original claim was true.
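The halving step can be sketched on coefficients (toy example with an assumed field $p = 17$; the protocol itself performs the equivalent fold on codewords): write $f(X) = f_E(X^2) + X \cdot f_O(X^2)$, then combine the even and odd parts with a random weight.

```python
# Split-and-fold on coefficients (toy illustration, p = 17; the actual protocol
# folds codewords): a degree-< 8 claim becomes a degree-< 4 claim about a random
# combination of the even and odd parts, f(X) = f_E(X^2) + X * f_O(X^2).
p = 17
f = [3, 1, 4, 1, 5, 9, 2, 6]       # 8 coefficients
f_even, f_odd = f[0::2], f[1::2]   # split: 4 coefficients each
alpha = 11                          # random weight supplied by the verifier
folded = [(e + alpha * o) % p for e, o in zip(f_even, f_odd)]
assert len(folded) == len(f) // 2   # the claim halves in size each round
```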

In the case of FRI, this com


Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.
